qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
2,223,163 | <p>I don't have any idea on how to prove it, and I need it for one of my questions which is still unanswered: <a href="https://math.stackexchange.com/questions/2192947/what-is-the-largest-number-smaller-than-100-such-that-the-sum-of-its-divisors-is?noredirect=1#comment4521040_2192947">What is the largest number smaller than 100 such that the sum of its divisors is larger than twice the number itself?</a>.</p>
| Mark Pineau | 432,691 | <p>Here is the standard proof for the above claim:</p>
<p>Prove $10^n-4$ is always a multiple of 6, for $n\in \mathbb{N}$.</p>
<p>That is: $10^n-4=6m$, for $m\in \mathbb{N}$.</p>
<p>We prove the above via induction:</p>
<p>Consider the base case, that is, when n = 1, we thus have:</p>
<p>$$10^{1}-4=10-4=6=6m \text{ for } m=1 \ \checkmark$$ </p>
<p>Consider the $n^{th}$ case, that is: $$10^n-4=6m \Rightarrow10^n=6m+4, \ for \ n,m\in \mathbb{N}$$</p>
<p>Then we want to prove the $(n+1)^{th}$ case.</p>
<p>So consider the following:</p>
<p>$$10^{n+1}-4=(10^n\cdot 10)-4=10(6m+4)-4=60m+36=6(10m+6).$$</p>
<p>Indeed we see $10^{n+1}-4$ is a multiple of $6$ given the $n^{th}$ case.</p>
<p>Therefore by induction, the claim holds for all $n\in \mathbb{N}$.</p>
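<p>A quick numeric sanity check of the claim (not part of the induction proof; plain Python):</p>

```python
# Check directly that 10^n - 4 is a multiple of 6 for the first few n.
for n in range(1, 50):
    assert (10**n - 4) % 6 == 0, n
print("10^n - 4 is divisible by 6 for n = 1..49")
```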
|
2,868,172 | <p>The power rule states that for any real number $r$, </p>
<p>$$\frac{d}{dx}x^r=rx^{r-1}$$</p>
<p>Now one common way to prove this is to use the definition $x^r=e^{r\ln x}$, where $e^x$ is defined as the inverse function of $\ln x$, which is in turn defined as $\int_1^x\frac{dt}{t}$.</p>
<p>But this puts the cart before the horse, because students typically learn differential calculus before integral calculus. And there is a perfectly good definition of exponentiation of real numbers that does not rely on integral calculus:</p>
<p>$$x^r=\lim_{q\rightarrow r} x^q$$</p>
<p>where $q$ is a variable that ranges over the rational numbers. </p>
<p>So my question is, if we use this definition, and we take it for granted that $\frac{d}{dx}x^q=qx^{q-1}$ holds true for rational numbers (which can be easily proven without invoking $e$), then can we prove the power rule for real exponents without invoking $e$?</p>
<p>EDIT: Here’s a more precise formulation of the definition above. If $r$ is a real number, we say that $x^r = L$ if for any $\epsilon>0$ there exists a $\delta>0$ such that for any rational number $q$ such that $|q-r| < \delta$, we have $|x^q-L|<\epsilon$.</p>
| Eric Wofsey | 86,856 | <p>Yes. In general, if you have a sequence of $C^1$ functions $f_n$ on some interval and functions $f$ and $g$ such that $f_n\to f$ pointwise and $f_n'\to g$ uniformly, then $f'=g$. (Quick sketch of how to prove this without using integration: fix $x$ and $h$ and pick $n$ so that $f_n$ is sufficiently close to $f$ at $x$ and $x+h$ and $f_n'$ is sufficiently close to $g$ uniformly. Then $\frac{f(x+h)-f(x)}{h}$ will be close to $\frac{f_n(x+h)-f_n(x)}{h}$. By the mean value theorem the latter is equal to $f_n'(y)$ for some $y$ between $x+h$ and $x$, and $f_n'(y)$ is close to $g(y)$. Finally, if $h$ is sufficiently small, $g(y)$ will be close to $g(x)$ since $g$ is continuous.)</p>
<p>So, given $r\in\mathbb{R}$, pick a sequence of rational numbers $q_n$ converging to $r$ and let $f_n(x)=x^{q_n}$. Then $f_n(x)$ converges to $f(x)=x^r$ and $f_n'(x)=q_nx^{q_n-1}$ converges to $g(x)=rx^{r-1}$. Moreover, it is not hard to check that these convergences are uniform on any compact subset of $(0,\infty)$. It thus follows that $f'=g$ on $(0,\infty)$.</p>
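<p>A numeric illustration of the conclusion (a finite-difference sketch, not the uniform-convergence argument itself): for an irrational exponent $r$, the derivative of $x^r$ on $(0,\infty)$ matches $rx^{r-1}$.</p>

```python
# Central-difference approximation of d/dx x^r at x = 2 for r = sqrt(2),
# compared against the power rule r * x^(r-1).
r = 2 ** 0.5
x, h = 2.0, 1e-6
numeric = ((x + h) ** r - (x - h) ** r) / (2 * h)
exact = r * x ** (r - 1)
assert abs(numeric - exact) < 1e-6
```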
|
2,113,062 | <blockquote>
<p>Find $x^2+y^2$ if $x^2-\frac{2}{x}=3y^2$ and $y^2-\frac{11}{y}=3x^2$.</p>
</blockquote>
<p>My try:</p>
<p>$$\frac{y^2-\frac{11}{y}}{3}-\frac{2}{x}=3y^2$$</p>
<p>then?</p>
| Batominovski | 72,152 | <p>Assume that $x,y\in\mathbb{R}$ and $a,b\in\mathbb{R}$ satisfy $$x^2-\frac{a}{x}=3y^2\text{ and }y^2-\frac{b}{y}=3x^2\,.$$
Then, taking $z:=x+\text{i}y$, we have
$$z^3=\left(x^3-3xy^2\right)-\text{i}\left(y^3-3x^2y\right)=a-b\text{i}\,.$$
Thus,
$$x^2+y^2=|z|^2=\sqrt[3]{\left|z^3\right|^2}=\sqrt[3]{|a-b\text{i}|^2}=\sqrt[3]{a^2+b^2}\,.$$</p>
<p><strong>Caveat:</strong> This solution only works if $x$ and $y$ are real numbers. If they can be complex numbers, dxiv's first solution is better. In fact, his solution shows that $x^2+y^2$ can be any of the three cube roots of $a^2+b^2$.</p>
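<p>A numeric check with the values from the quoted problem, $a=2$ and $b=11$, so that $x^2+y^2=\sqrt[3]{4+121}=5$. Python's principal cube root of $a-b\text{i}$ happens to land on a root with nonzero real and imaginary parts, so both original equations can be verified directly:</p>

```python
a, b = 2.0, 11.0                    # the values from the quoted problem
z = (a - b * 1j) ** (1 / 3)         # a cube root of a - bi (numerically z = 2 - i)
x, y = z.real, z.imag
assert abs(x**2 - a / x - 3 * y**2) < 1e-9   # first equation holds
assert abs(y**2 - b / y - 3 * x**2) < 1e-9   # second equation holds
assert abs(x**2 + y**2 - 5.0) < 1e-9         # (a^2 + b^2)^(1/3) = 125^(1/3) = 5
```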
|
2,312,913 | <p>In the triangle $ABC$, let $E$ be a point on $BC$ such that $BE : EC =
3: 2$. Pick points $D$ and $F$ on the sides $AB$ and $AC$ , correspondingly, so that
$3AD = 2AF$ . Let $G$ be the point of intersection of $AE$ and $DF$ . Given that $AB = 7$
and $AC = 9$, find the ratio $DG: GF$.</p>
<p>I have been working on trying to solve this problem. I am having difficulty relating the length of $AB =7$ and the ratios given to find $AD:BD$. Similarly I am having trouble finding ratio of $FC:AF$. I am sure that I can solve this problem if someone can give me a hint on how to find those ratios. Any help would be appreciated.</p>
| StatsSorceress | 259,187 | <p>You need "at least 1 girl". From a probability standpoint, P(at least one girl) = 1 - P(no girls)</p>
<p>For combinatorics, that would mean there can be 19 boys and 1 girl, or 18 boys and 2 girls, or ... up to 0 boys and 20 girls.</p>
<p>${25 \choose 19}{25 \choose 1}$ is the number of ways to choose 19 boys from 25 and 1 girl from 25. The total number of ways to choose at least 1 girl would then be:</p>
<p>$${25 \choose 19}{25 \choose 1} + {25 \choose 18}{25 \choose 2} + \ldots + {25 \choose 0}{25 \choose 20}$$ </p>
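<p>The sum can be cross-checked against the complement count $\binom{50}{20}-\binom{25}{20}$ (Vandermonde's identity minus the all-boy term); a quick check:</p>

```python
from math import comb

# Sum over 1..20 girls of (ways to pick the boys) * (ways to pick the girls).
direct = sum(comb(25, 20 - g) * comb(25, g) for g in range(1, 21))
# Complement: all 20-person selections minus the all-boy ones.
complement = comb(50, 20) - comb(25, 20)
assert direct == complement
```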
|
2,374,282 | <p>I am trying to find all connected sets containing $z=i$ on which $f(z)=e^{2z}$ is one to one.
I have no idea how to approach.
Can someone give me some hints?
Thank you</p>
| gary | 6,595 | <p>I don't know if this is an acceptable answer, but you can do
1) An SQL Server query using the DateDiff function, with inputs (DatePart, Date1, Date2), where DatePart is the "gradation": years, months, days, etc. </p>
<p>Select DateDiff( Days, Date1, Date2) , where Date1 is your DOB, Date2 is the date for which you want to find the number of days passed. You can run it for free here: <a href="https://www.w3schools.com/sql/func_datediff.asp" rel="nofollow noreferrer">https://www.w3schools.com/sql/func_datediff.asp</a></p>
<p>OR
2) Use a perpetual calendar <a href="https://accuracyproject.org/perpetualcalendars.html" rel="nofollow noreferrer">https://accuracyproject.org/perpetualcalendars.html</a>: There are 14 possible day arrangements for a calendar: Once the date of January 1st and whether the year is a leap year or not, the calendar is determined : Jan 1st can fall on Mon through Sunday , and the year is either leap or non-leap. Just look up the year number and look up the calendar for that year.</p>
|
85,126 | <p>Does anyone have an implementation for <code>AnglePath</code> (see <a href="http://reference.wolfram.com/language/ref/AnglePath.html" rel="nofollow"><code>AnglePath</code> Documentation</a> and <a href="http://blog.wolfram.com/2015/05/21/new-in-the-wolfram-language-anglepath/" rel="nofollow">example usage</a>) in <em>Mathematica</em> 10.0?</p>
| J. M.'s persistent exhaustion | 50 | <p>I managed to figure out how to re-implement <code>AnglePath[]</code>, since I needed to do something turtle-related for a friend, and I still do not have access to a computer with version 10.</p>
<p>First, the special cases:</p>
<pre><code>anglePath[steps_] := anglePath[{{0, 0}, 0}, steps]
anglePath[p_?VectorQ, steps_] := anglePath[{p, 0}, steps]
anglePath[alpha0_, steps_] := anglePath[{{0, 0}, alpha0}, steps]
anglePath[{p_?VectorQ, v_?VectorQ}, steps_] :=
anglePath[{p, Arg[#1 + I #2] & @@ v}, steps]
anglePath[palpha_, steps_?VectorQ] :=
anglePath[palpha, Transpose[PadLeft[{steps}, {2, Automatic}, 1]]]
</code></pre>
<p>I chose to implement the general form through the use of a compiled function; thus, the general case is merely a wrapper for the compiled function <code>apc</code>:</p>
<pre><code>anglePath[{v_?VectorQ, alpha0_}, steps_?MatrixQ] := apc[v, alpha0, steps]
</code></pre>
<p>Here's the compiled function that does all the magic:</p>
<pre><code>apc = Compile[{{v0, _Real, 1}, {alpha0, _Real}, {steps, _Real, 2}},
Module[{alpha = alpha0, z = v0[[1]] + I v0[[2]], pb, r, theta},
pb = Internal`Bag[{z}];
Scan[({r, theta} = #; Internal`StuffBag[pb,
z += r Exp[I (alpha += theta)]]) &, steps];
{Re[#], Im[#]} & /@ Internal`BagPart[pb, All]],
RuntimeOptions -> "Speed"];
</code></pre>
<p>In my opinion, the use of the complex number formulation is both mathematically and programmatically elegant. (An opinion I've held ever since reading <a href="http://books.google.com/books?id=25tMEYTik-AC" rel="nofollow noreferrer">Zwikker's book</a>.) I did mention the possibility of using <code>Fold[]</code> instead; I'll leave the writing of that version of <code>apc</code> as an exercise for the interested reader.</p>
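<p>For readers without Mathematica, the same complex-number recurrence can be sketched in Python (a rough, hypothetical port of the general case, simplified to start at the origin; not the original compiled code):</p>

```python
import cmath

def angle_path(alpha0, steps):
    """Turtle path: each (r, theta) step turns by theta, then moves r units."""
    z, alpha = 0j, alpha0
    points = [z]
    for r, theta in steps:
        alpha += theta                  # same update as (alpha += theta)
        z += r * cmath.exp(1j * alpha)  # same update as z += r Exp[I alpha]
        points.append(z)
    return [(p.real, p.imag) for p in points]
```

<p>Four unit steps with quarter turns trace a square and return to the start.</p>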
<p>Here are two examples. The first one is the generation of the so-called "dragon curve":</p>
<pre><code>Graphics[Line[anglePath[KroneckerSymbol[-1, Range[2048]] π/2]]]
</code></pre>
<p><img src="https://i.stack.imgur.com/BFKaJ.png" alt="dragon curve"></p>
<p>(There is an equivalent example in the docs.)</p>
<p>The second example is the use of <code>anglePath[]</code> to generate the <a href="https://mathematica.stackexchange.com/questions/66969">spiral of Theodorus</a>:</p>
<pre><code>Graphics[{FaceForm[None], EdgeForm[Black], Polygon[PadRight[#, {3, 2}]] & /@
Partition[anglePath[{{1, 0}, π/2},
Prepend[ArcCot[Sqrt[Range[15]]], 0]], 2, 1]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/xj23w.png" alt="spiral of Theodorus, first try"></p>
|
2,585,466 | <p>I have two growth curve data sets, A (Martians) and B (Venusians). Data point sets of age (0 (birth) - 250 months, X axis) against height (0 - 200 centimeters, Y axis). The first set (A) contains 67 X Y point pairs, the second set (B) contains 27 point pairs. I have fit both data sets to my favorite version of the Logistic Equation using NonlinearModelFit. NonlinearModelFit returns estimates for my two independent variables: Increment, (N0), and Time Coefficient (k). Then following I invoked "ParameterTable" calculating: (1) Standard Errors (2) t-Statistics and (3) P-Values for both of the curve fitting exercises, Martians and Venusians. Of the three Parameters: Standard Errors, t-Statistics, and P-Values, which parameter indicates a better fit to an energy conservative logistic equilibrium? Standard Errors on the calculated Time Coefficients (k)? t-Statistics on the calcuated Time Coefficents (k)? Is growth on Mars more of an energy conservative mechanical process than growth on Venus? Are data sets with different numbers of point pairs directly comparable on Standard Errors, t-Statistics and P-Values? </p>
| jonsno | 310,635 | <p>Let point of intersection be $(x_o, y_o)$. The slope of parabola is $y' = \frac{2a}{y_o}$ and of hyperbola is $\frac{-c^2}{x_o^2}$.
Product of slope is $-1$ for perpendicular intersection:</p>
<p>$$\frac{2a}{y_o}=\frac{x_o^2}{c^2}\\
x^2_o y_o = 2ac^2 $$
The point $(x_o, y_o)$ also lies on the hyperbola $xy=c^2$, so $x_o y_o = c^2$. Substituting into $x_o^2 y_o = 2ac^2$:
$$x_o \cdot c^2 = 2ac^2 \implies x_o = 2a$$</p>
<p>Now we get $y_o = \frac{c^2}{2a}$. Put in the parabola equation $y_o^2 = 4ax_o$:</p>
<p>$$\frac{c^4}{4a^2} = 4a \cdot 2a \implies c^4 = 32a^4,$$</p>
<p>which is the required answer :)</p>
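<p>The relation this computation yields, $c^4=32a^4$, can be checked numerically (a sketch with $a=1$):</p>

```python
# Check that with c^4 = 32 a^4 the curves y^2 = 4ax and xy = c^2 meet
# perpendicularly at x_o = 2a, y_o = c^2 / (2a).  Take a = 1.
a = 1.0
c = (32 * a**4) ** 0.25
x0, y0 = 2 * a, c**2 / (2 * a)
assert abs(y0**2 - 4 * a * x0) < 1e-12      # point lies on the parabola
assert abs(x0 * y0 - c**2) < 1e-12          # point lies on the hyperbola
slope_parabola = 2 * a / y0                 # dy/dx from y^2 = 4ax
slope_hyperbola = -(c**2) / x0**2           # dy/dx from xy = c^2
assert abs(slope_parabola * slope_hyperbola + 1) < 1e-12
```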
|
2,585,466 | <p>I have two growth curve data sets, A (Martians) and B (Venusians). Data point sets of age (0 (birth) - 250 months, X axis) against height (0 - 200 centimeters, Y axis). The first set (A) contains 67 X Y point pairs, the second set (B) contains 27 point pairs. I have fit both data sets to my favorite version of the Logistic Equation using NonlinearModelFit. NonlinearModelFit returns estimates for my two independent variables: Increment, (N0), and Time Coefficient (k). Then following I invoked "ParameterTable" calculating: (1) Standard Errors (2) t-Statistics and (3) P-Values for both of the curve fitting exercises, Martians and Venusians. Of the three Parameters: Standard Errors, t-Statistics, and P-Values, which parameter indicates a better fit to an energy conservative logistic equilibrium? Standard Errors on the calculated Time Coefficients (k)? t-Statistics on the calcuated Time Coefficents (k)? Is growth on Mars more of an energy conservative mechanical process than growth on Venus? Are data sets with different numbers of point pairs directly comparable on Standard Errors, t-Statistics and P-Values? </p>
| lab bhattacharjee | 33,337 | <p>WLOG any point on $y^2=4ax$ is $(at^2,2at)$</p>
<p>At the point $P$ of intersection $$c^2=at^2\cdot2at\iff t^3=\dfrac{c^2}{2a^2}\ \ \ \ (1)$$</p>
<p>Now the gradient of $y^2=4ax$ at $P$ is $$\dfrac{4a}{2(2at)}=\dfrac1t$$</p>
<p>and that of $xy=c^2$ $$-\dfrac{2at}{at^2}=-\dfrac2t$$</p>
<p>We need $-\dfrac2t\cdot\dfrac1t=-1\implies t^2=2$</p>
<p>Replace this value of $t$ in $(1)$: since $t^2=2$, we have $t^3=2t=2\sqrt2$, so $\dfrac{c^2}{2a^2}=2\sqrt2$, i.e. $c^4=32a^4$.</p>
|
815,661 | <p>Let $m$ be the product of first n primes (n > 1) , in the following expression :</p>
<p>$$m=2⋅3…p_n$$</p>
<p>I want to prove that $(m-1)$ is not a complete square.</p>
<p>I found two ways that might prove this . My problem is with the SECOND way . </p>
<p><strong>First solution (seems to be working) :</strong> </p>
<p>The first way that I used is this : </p>
<p>Proof by negation : assume that $m-1$ is a complete square , i.e. $m-1 = x^2$ , then </p>
<p>$m=x^2+1=x^2-(-1)=(x-(-1))(x+(-1))=(x+1)(x-1)$</p>
<p>So we have either : </p>
<ol>
<li><p>$(x+1)$ is even and $(x-1)$ is even </p></li>
<li><p>$(x+1)$ is even and $(x-1)$ is odd</p></li>
<li><p>$(x-1)$ is even and $(x+1)$ is odd</p></li>
</ol>
<p>First case : $(x+1)$ is even and $(x-1)$ is even , then $m$ looks like this : </p>
<p>$m=2⋅otherNumbersA⋅2⋅otherNumbersB$ </p>
<p>If we disregard $2$ then $m$ is a multiplication of $n-1$ prime numbers , then </p>
<p>$m$ is a multiplication of : $2 \cdot bigPrimeNumber$ . Contradiction . </p>
<p>The other two cases are just the same .</p>
<p><strong>Second solution (my problem) :</strong></p>
<p>What I'm interested in is the following solution (that I'm stuck in) :</p>
<p>Proof by negation: assume that $m-1 = x^2$ and $m=2⋅3…p_n$. Since $3$ divides $m$, we can write $m-1\equiv 2 \pmod 3$, which means that:</p>
<p>$m-1\equiv 2 \pmod 3 \implies (m-1)-2=3q,\ q\in \mathbb{N} \implies m-3=3q \implies m=3(1+q)$</p>
<p>Meaning : </p>
<p>$m-1=x^2$</p>
<p>$m-1≡2(mod 3)$</p>
<p>$x^2≡2(mod 3)$</p>
<p>How do I continue from here ? how can I use : $x^2≡2(mod 3)$ to reach a contradiction ?</p>
<p>Thanks</p>
| Mathmo123 | 154,802 | <p>If there is a solution to $x^2 \equiv 2$ (mod $3$), then $x$ must be $0$, $1$ or $2$ (mod $3$). Now you can try each case and see that none of these will give you a solution:</p>
<ul>
<li>If $x \equiv 1,2$ (mod $3$) then $x^2 \equiv$ $1$ (mod $3$)</li>
<li>If $x \equiv 0$ (mod $3$) then $x^2 \equiv$ $0$ (mod $3$)</li>
</ul>
<p>So in none of these cases do we have $x^2 \equiv 2$ (mod $3$)</p>
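<p>The case check is small enough to verify exhaustively:</p>

```python
# Squares mod 3 only ever take the values 0 and 1, never 2.
squares_mod_3 = {x * x % 3 for x in range(3)}
assert squares_mod_3 == {0, 1}
assert 2 not in squares_mod_3
```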
|
813,715 | <p>Say I am asked to find, in expanded form without brackets, the equation of a circle with radius 6 and centre 2,3 - how would I go on about doing this?</p>
<p>I know the equation of a circle is $x^2 + y^2 = r^2$, but what do i do with this information?</p>
| 5xum | 112,884 | <p>The equation of a circle with the centre at $(0,0)$ is $x^2+y^2=r^2$. This is because the circle with the radius $r$ is composed of all points which are $r$ away from $(0,0)$, and since the distance of a point $(x,y)$ from $(0,0)$ is $\sqrt{x^2+y^2}$, this means that the equation will be $$\sqrt{x^2+y^2}=r,$$
or, squaring that, $$x^2+y^2=r^2.$$</p>
<p><strong>Hint</strong></p>
<p>To center the point around an arbitrary point, think about how you would calculate the distance between $(x,y)$ and that arbitrary point.</p>
|
4,077,917 | <p>If you have
<span class="math-container">$$
\int_0^2 \int_0^{\sqrt{4 - x^2}} e^{-(x^2 + y^2)} dy \, dx
$$</span>
and you convert to polar coordinates, you integrate from <span class="math-container">$0$</span> to <span class="math-container">$\pi/2$</span> with respect to theta.</p>
<p>But, if you have
<span class="math-container">$$
\int_{-6}^6 \int_0^{\sqrt{36-x^2}} \sin(x^2+y^2) \, dy \, dx
$$</span>
and you convert to polar coordinates, you integrate from <span class="math-container">$0$</span> to <span class="math-container">$\pi$</span> with respect to theta. Can someone explain to me why the bounds of integration with respect to theta are different in these two problems? I'm having a hard time figuring it out. It would be a lot of help. Thanks.</p>
| David G. Stork | 210,401 | <p>Sometimes a picture is worth 1000 words:</p>
<p><a href="https://i.stack.imgur.com/6m1OZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6m1OZ.png" alt="enter image description here" /></a></p>
<hr />
<pre><code>Graphics[
{Red, Disk[{0, 0}, 2, {0, \[Pi]/2}],
Opacity[0.1], Green, Disk[{0, 0}, 6, {0, \[Pi]}]},
Axes -> Automatic]
</code></pre>
|
1,987,358 | <p>We know that Riemann sum gives us the following formula for a function <span class="math-container">$f\in C^1$</span>:</p>
<blockquote>
<p><span class="math-container">$$\lim_{n\to \infty}\frac 1n\sum_{k=0}^n f\left(\frac kn\right)=\int_0^1f(x) dx.$$</span></p>
</blockquote>
<p>I am looking for an example where the exact calculation of <span class="math-container">$\int f$</span> would be interesting with a Riemann sum.</p>
<p>We usually use integrals to calculate a Riemann sum, but I am interesting in the other direction.</p>
<hr>
<p><em>Edit.</em></p>
<p>I actually found an example of my own today. You can compute </p>
<p><span class="math-container">$$I(\rho)=\int_0^\pi \log(1-2\rho \cos \theta+\rho^2)\mathrm d \theta$$</span></p>
<p>using Riemann sums.</p>
| RRL | 148,510 | <p>When definite integrals are amenable to exact valuation, it is typically the case that the more expedient approach involves an anti-derivative rather than the limit of a Riemann sum. Often computation of the limit may be straightforward or even trivial, but somewhat tedious, as is the case for integrals of $f: x \mapsto x$ or $f: x \mapsto x^2$. </p>
<p>On the other hand, integrals with simple integrands and easily recognized anti-derivatives such as $f: x\mapsto x^{-2}$ are more challenging with regard to the limit of Riemann sum -- and in that sense the Riemann sum may be "interesting." </p>
<p>To make this more explicit, consider computing the integral</p>
<p>$$\int_a^b x^{-2} \, dx = \lim_{n \to \infty}S_n$$</p>
<p>where </p>
<p>$$S_n =\frac{b-a}{n}\sum_{k=1}^n \left(a + \frac{b-a}{n}k\right)^{-2}.$$</p>
<p>We have </p>
<p>$$\frac{b-a}{n}\sum_{k=1}^n \left(a + \frac{b-a}{n}k\right)^{-1}\left(a + \frac{b-a}{n}(k+1)\right)^{-1} \leqslant S_n \\ \leqslant \frac{b-a}{n}\sum_{k=1}^n \left(a + \frac{b-a}{n}k\right)^{-1}\left(a + \frac{b-a}{n}(k-1)\right)^{-1},$$</p>
<p>and decomposing into partial fractions,</p>
<p>$$\sum_{k=1}^n \left\{\left(a + \frac{b-a}{n}k\right)^{-1}-\left(a + \frac{b-a}{n}(k+1)\right)^{-1}\right\} \leqslant S_n \\\leqslant \sum_{k=1}^n \left\{\left(a + \frac{b-a}{n}(k-1)\right)^{-1}-\left(a + \frac{b-a}{n}k\right)^{-1}\right\}. $$</p>
<p>Since the sums are telescoping, we have</p>
<p>$$\left(a + \frac{b-a}{n}\right)^{-1}-\left(a + \frac{b-a}{n}(n+1)\right)^{-1} \leqslant S_n \leqslant a^{-1} - b^{-1}.$$</p>
<p>By the squeeze theorem, we get the value of the integral as</p>
<p>$$\lim_{n \to \infty}S_n = a^{-1} - b^{-1}.$$</p>
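<p>Numerically, the right-endpoint sum $S_n$ indeed approaches $a^{-1}-b^{-1}$ (a quick sketch with $a=1$, $b=3$):</p>

```python
# Right-endpoint Riemann sum for the integral of x^(-2) on [a, b].
a, b, n = 1.0, 3.0, 100_000
c = (b - a) / n
S_n = c * sum((a + c * k) ** -2 for k in range(1, n + 1))
assert abs(S_n - (1 / a - 1 / b)) < 1e-4   # limit is 1/a - 1/b = 2/3
```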
<p>An example where I found the Riemann sum an interesting and, perhaps, most expedient approach is:</p>
<p><a href="https://math.stackexchange.com/q/1764999/148510">Bronstein Integral 21.42</a></p>
|
871,744 | <p>When A and B are square matrices of the same order, and O is the zero square matrix of the same order, prove or disprove:-
$$AB=0 \implies A=0 \text{ or } \ B=0$$</p>
<p>I proved it as follows:-</p>
<p>Assume $A \neq O$ and $ B \neq O$:
then, $$ |A||B| \neq 0 $$
$$ |AB| \neq 0 $$
$$ AB \neq O $$
$$ \therefore A \neq O \text{ and } B \neq O \implies AB \neq O $$
$$ \neg[ AB \neq O] \implies \neg [ A \neq O \text{ and } B \neq O ] $$
$$AB=O \implies A=O \text{ or } \ B=O$$</p>
<p>But when considering $A := \begin{pmatrix} 1&1 \\1&1 \end{pmatrix}$ and $B := \begin{pmatrix} -1& 1\\ 1 &-1 \end{pmatrix}$, then $AB=O$ and $A\neq O$ and $B \neq O$.</p>
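<p>The counterexample product can be checked directly (plain Python, no libraries):</p>

```python
A = [[1, 1], [1, 1]]
B = [[-1, 1], [1, -1]]
# 2x2 matrix product, entry by entry.
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert AB == [[0, 0], [0, 0]]   # AB = O although A != O and B != O
```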
<p>I can't figure out which one and where I went wrong.</p>
| Marco Flores | 164,575 | <p>The problem with your first approach is that there exist non-zero matrices with zero determinant. For instance, $A$ and $B$ in your example.</p>
|
1,111,168 | <p>Do we say that $0$ has $n$ $n$th roots, all nondistinct, or only one?</p>
<p>I don't think it makes any difference, but I'm curious what the convention is.</p>
| Mariano Suárez-Álvarez | 274 | <p>As many as roots of the polynomial equation $X^n=0$, that is, $n$.</p>
|
1,111,168 | <p>Do we say that $0$ has $n$ $n$th roots, all nondistinct, or only one?</p>
<p>I don't think it makes any difference, but I'm curious what the convention is.</p>
| hmakholm left over Monica | 14,366 | <p>The terminology varies a bit between people and fields, but what I would say is that $x^n=0$ has one root (namely $0$) of <em>multiplicity</em> $n$.</p>
<p>If we explicitly say that we count that root "with multiplicity", of course, there are $n$ of it.</p>
|
<p>Can a (finite) collection of disjoint circle arcs in $\mathbb{R}^3$ be interlocked in the sense that they cannot be separated, i.e. each moved arbitrarily far from one another while remaining disjoint (or at least never crossing) throughout?
(Imagine the arcs are made of rigid steel; but infinitely thin.)
The arcs may have different radii; each spans strictly less than $2 \pi$ in angle, so each has a positive "gap" through which arcs may pass:
<br /> <img src="https://i.stack.imgur.com/hd2l0.jpg" alt="Arcs4"><br />
Of course, if one could prove that in any such collection, one arc can be removed to infinity, the result would follow by induction.
But an impediment to that approach is that sometimes there is no arc that can be removed while all the others remain fixed.</p>
<p>Another approach would be to reduce the <em>piercing number</em> of the configuration:
the number of intersections of an arc with the disks on whose boundary the arcs lie. If the piercing number could always be reduced in any configuration, then it would "only" remain to prove that if there are no disk-arc piercings at all, the configuration can be separated.</p>
<p>Intuitively it seems that no such collection can interlock, but I am not seeing a proof.
I'd appreciate any proof ideas—or interlocked configurations!</p>
| Joseph O'Rourke | 6,094 | <p>@Cristi: Perhaps your first example needs an argument to exclude the following motion,
which might only be possible with my generous arc-gap?
<br /><img src="https://i.stack.imgur.com/LFbnJ.jpg" alt="TwoArcsMoving"></p>
|
3,712,256 | <p>I am trying to prove that: </p>
<blockquote>
<p>For nonempty subsets of the positive reals <span class="math-container">$A,B$</span>, both of which are bounded above, define
<span class="math-container">$$A \cdot B = \{ab \mid a \in A, \; b \in B\}.$$</span>
Prove that <span class="math-container">$\sup(A \cdot B) = \sup A \cdot \sup B$</span>.</p>
</blockquote>
<p>Here is what I have so far. </p>
<blockquote>
<p>Let <span class="math-container">$A, B \subset \mathbb{R}^+$</span> be nonempty and bounded above, so <span class="math-container">$\sup A$</span> and <span class="math-container">$\sup B$</span> exist by the least-upper-bound property of <span class="math-container">$\mathbb{R}$</span>. For any <span class="math-container">$a \in A$</span> and <span class="math-container">$b \in B$</span>, we have
<span class="math-container">$$ab \leq \sup A \cdot b \leq \sup A \cdot \sup B.$$</span>
Hence, <span class="math-container">$A \cdot B$</span> is by bounded above by <span class="math-container">$\sup A \cdot \sup B$</span>. Since <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are nonempty, <span class="math-container">$A \cdot B$</span> is nonempty by construction, so <span class="math-container">$\sup(A \cdot B)$</span> exists. Furthermore, since <span class="math-container">$\sup A \cdot \sup B$</span> is an upper bound of <span class="math-container">$A \cdot B$</span>, by the definition of the supremum, we have
<span class="math-container">$$\sup(A \cdot B) \leq \sup A \cdot \sup B.$$</span>
It suffices to prove that <span class="math-container">$\sup(A \cdot B) \geq \sup A \cdot \sup B$</span>. </p>
</blockquote>
<p>I cannot figure out the other half of this. A trick involving considering <span class="math-container">$\sup A - \epsilon$</span> and <span class="math-container">$\sup B - \epsilon$</span> for some <span class="math-container">$\epsilon > 0$</span> and establishing that <span class="math-container">$\sup(A \cdot B) < \sup A \cdot \sup B + \epsilon$</span> did not seem to work, though it did in the additive variant of this proof. I haven't anywhere used the assumption that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are contained in the <strong>positive</strong> real numbers, and it seems to me that this assumption must be important, probably as it pertains to inequality sign, so I assume that at some point I will need to multiply inequalities by some positive number. I cannot seem to get a good start on this, though. A hint on how to get started on this second half would be very much appreciated. </p>
| alepopoulo110 | 351,240 | <p>If <span class="math-container">$\varepsilon>0$</span>, take <span class="math-container">$a\in A,b\in B$</span> such that <span class="math-container">$\sup A-\varepsilon<a$</span> and <span class="math-container">$\sup B-\varepsilon<b$</span>. Then it is</p>
<p><span class="math-container">$$(\sup A-\varepsilon)\cdot(\sup B-\varepsilon)<ab\leq\sup(A\cdot B) $$</span></p>
<p>So,
<span class="math-container">$$(\sup A-\varepsilon)\cdot(\sup B-\varepsilon)<\sup(A\cdot B) $$</span>
is true for any <span class="math-container">$\varepsilon>0$</span>. What happens if you let <span class="math-container">$\varepsilon\to0^+$</span>?</p>
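<p>A small numeric illustration with finite sets, where the supremum is just the maximum (a sketch, not the proof):</p>

```python
A = [0.5, 1.2, 2.0]     # nonempty, bounded sets of positive reals
B = [0.3, 3.0]
products = [a * b for a in A for b in B]
# sup(A.B) = sup(A) * sup(B), here 2.0 * 3.0 = 6.0
assert abs(max(products) - max(A) * max(B)) < 1e-12
```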
|
25,707 | <p>[Edit: to explain some things that were not so clear in the original post]</p>
<p>I believe in being straightforward. Without linking to the specific question, people will just treat this as another vague gripe with nothing to discuss. But to talk about the specific issues with a specific post is equivalent to pointing out the specific authors of those posts too. I give credit where it is due; I've commended and upvoted some other answers by the author in question, yet many people have judged me solely based on my blunt criticism of this answer of his/hers. Past interactions showed that this user generally refuses to admit any serious conceptual error, and this is nowhere near the first time. So please be <strong>fair</strong> if you wish to judge my statements and actions.</p>
<p>Secondly, someone has taken the initiative to edit the objectively incorrect post to fix the error. As I mentioned in the comments, I thought this was not according to the SE rules, which is why I did not do it myself even though I really really wanted to. If the community thinks this is the way to go I have no problems with doing it quietly without making a fuss. It would also in my opinion be a viable solution to the whole issue of 'undeserved' upvotes.</p>
<p>Thirdly, someone felt that I was claiming access to some kind of (absolute?) truth. No I don't. But correct mathematics with respect to modern standards is not subjective. A correct statement is one that is provable in the chosen foundations (usually assumed to be ZFC), and a correct proof is a valid sequence of deductions in that chosen system (in practice at least a description that logicians can easily see is translatable into the foundational system). That is very much objective enough to any modern logician, and that is what I mean by "correct". For example, "$1+1 = 3$" is incorrect; anyone who disagrees should provide precise descriptions of their non-mainstream foundations or notations rather than claiming it is correct from a certain point of view or saying that someone highly qualified said it.</p>
<p>(I'm lenient with missing assumptions or logical gaps <strong>if and only if</strong> the expected audience is estimated to be largely capable of filling them in correctly. But the two posts I am mainly referring to here are simply false and there is nothing for me to be lenient about. I talk about this only because it's relevant to another post mentioned at the end.)</p>
<p>Fourthly, if this feels like a complaint, it is. Sorry if it offends anyone but I want to see Math SE remain a reliable repository of mathematics, and this is what I right now feel is a necessary topic of discussion to help to achieve that outcome.</p>
<hr>
<p>[Original post]</p>
<p>The two top-voted answers to <a href="https://math.stackexchange.com/q/2099679/21820">this question</a> are of such poor quality and I don't understand why none of the many upvoters have noticed. The answer by <em>David</em> essentially states that we use the square-root in RMS speed simply because it gives the correct units for velocity, which is <strong>not even true</strong> strictly speaking because the final quantity is not any kind of <strong>velocity</strong> at all. The answer by <em>Yves Daoust</em> is worse, affirming the false claim in the question that the average speed is zero. He/she even claims that his/her answer is at the level of the OP, but the falsehood is not defensible. Why does it seem like most Math SE users are carelessly upvoting answers without reading carefully?</p>
<p>More pertinently, <strong>what can be done about it?</strong> <a href="https://math.meta.stackexchange.com/a/18921/21820">Downvoting as per this meta-post</a> clearly fails, and review queues almost always fail (The Not-An-Answer flag is specifically for posts that do not even seem to address the question, and the Low-Quality flag is nearly always rejected for posts that are long). Also, these wrong answers are not as bad as <a href="https://math.meta.stackexchange.com/q/17512/21820">fake answers</a>, but it seems we cannot even agree on flagging to get rid of fake answers.</p>
<p>Note: There are even <strong>4</strong> completely nonsensical answers at the bottom, 2 of which are deleted. One talks about exams, as if that has anything to do with valid mathematics! It also makes the rubbish claim that $x = y$ iff $\sqrt{x} = \sqrt{y}$. Another one says that RMS "rectifies" negative to become positive, which cannot even be made sense of. I'll leave you to peruse the other 2 yourself. But these do not appear to be a problem probably because they came late and do not gather upvotes fast enough.</p>
| Brick | 263,389 | <p>On one hand, I tend to agree with your concern generally across all SE sites. There are cases where questions are asked that clearly have fact-based answers, and the voting system is not always good about ensuring that the factually correct answers move to the top. I think that Math is better than some of the other sites in this respect because there are a lot of users who are BOTH active and qualified.</p>
<p>On the other hand, it's not clear what can be done about this since there's no way within the site to independently determine what's true and what's not aside from the voting system. Moreover, it's not always completely clear cut which answers are true since context may matter. For example, you've complained about the answer on this question that claims $\sqrt{x}=\sqrt{y}$ is the same as $x=y$. As a general mathematical statement, this is, of course, false. In context of this problem, however, it's absolutely true since the physics of the problem ensure that both $x>0$ and $y>0$. It might have been better if the answer pointed out this dependency, but there is certainly a sense in which this part of the answer is correct and you are wrong in your objection.</p>
<p>My approach to this problem is to vote, comment, and - as a last resort - leave sites that cannot maintain a minimum level of quality. I don't think Math is anywhere near that minimum threshold for me, but I <em>have</em> left other SE sites for this reason. Let's hope we can keep Math heading in the right direction. </p>
<p>I saw your comment about not answering because people don't read beyond the first few, by the way. I think that's not quite as true as you've made it out to be, especially when there are constructive comments on the top-voted answers that draw attention to the late-comers. In a case where there is a wrong answer at the top, I urge new answers even if it takes (much) longer for a late answer to move up the list.</p>
|
2,117,054 | <p>Find all prime solutions of the equation $5x^2-7x+1=y^2.$</p>
<p>It is easy to see that
$y^2+2x^2=1 \mod 7.$ Since $\mod 7$-residues are $1,2,4$ it follows that $y^2=4 \mod 7$, $x^2=2 \mod 7$ or $y=2,5 \mod 7$ and $x=3,4 \mod 7.$ </p>
<p>In the same way from $y^2+2x=1 \mod 5$ we have that $y^2=1 \mod 5$ and $x=0 \mod 5$ or $y^2=-1 \mod 5$ and $x=4 \mod 5.$</p>
<p>How can one put the two cases together?</p>
<p>A computer search finds two prime solutions: $(3,5)$ and $(11,23).$</p>
| Kelly Lowder | 406,721 | <p>If there is another solution, it has more than 4000 digits, which makes me think there is no solution beyond the two already mentioned.</p>
<p>In Mathematica,</p>
<pre><code>i=1;
ans=Solve[5 x ^2-7x+1==y^2&&x>0&&y>0,{x,y},Integers];
ans2=ans/.C[1]->i//RootReduce
Dynamic@i
Dynamic[IntegerLength@ans2[[All,All,2]]]
</code></pre>
<p>and then run</p>
<pre><code> While[FreeQ[PrimeQ[ans2[[All,All,2]]],{True,True}],i++;ans2=ans/.C[1]->i//RootReduce]
</code></pre>
<p>Setting <code>i=0</code> will stop the loop with the two known solutions:</p>
<pre><code>{{x->3,y->5},{x->11,y->23}}
</code></pre>
<p>but there appear to be no easily found solutions beyond <code>i=1</code></p>
|
2,117,054 | <p>Find all prime solutions of the equation $5x^2-7x+1=y^2.$</p>
<p>It is easy to see that
$y^2+2x^2=1 \mod 7.$ Since $\mod 7$-residues are $1,2,4$ it follows that $y^2=4 \mod 7$, $x^2=2 \mod 7$ or $y=2,5 \mod 7$ and $x=3,4 \mod 7.$ </p>
<p>In the same way from $y^2+2x=1 \mod 5$ we have that $y^2=1 \mod 5$ and $x=0 \mod 5$ or $y^2=-1 \mod 5$ and $x=4 \mod 5.$</p>
<p>How can one put the two cases together?</p>
<p>A computer search finds two prime solutions: $(3,5)$ and $(11,23).$</p>
| math | 382,933 | <p>Since <span class="math-container">$(0,1)$</span> is a solution to <span class="math-container">$5x^2-7x+1=y^2$</span>, it can be used to parametrize all rational solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>. That will give us:</p>
<p><span class="math-container">$$x:=\frac{-2ab - 7b^2}{a^2 - 5b^2}$$</span>
and
<span class="math-container">$$y:=\frac{a}{b}x+1$$</span></p>
<p>where <span class="math-container">$a,b\in \mathbb{Z}$</span>, <span class="math-container">$b\neq 0$</span>.</p>
<p>Since <span class="math-container">$x$</span> is prime, it follows (taking <span class="math-container">$a/b$</span> in lowest terms, so that <span class="math-container">$\gcd(a,b)=1$</span>) that either <span class="math-container">$b=1$</span> or <span class="math-container">$x=b$</span>.</p>
<ul>
<li><p><em><strong>case <span class="math-container">$b=1$</span></strong></em></p>
<p>This gives us <span class="math-container">$x:=(-2a - 7)/(a^2 - 5)$</span>. Since <span class="math-container">$x\in \mathbb{Z}$</span>,
we get <span class="math-container">$a=\pm 2$</span>, and then <span class="math-container">$x=11$</span> or <span class="math-container">$x=3$</span>.
Those will give <span class="math-container">$y=23$</span> or <span class="math-container">$y=5$</span>, respectively.</p>
</li>
<li><p><strong>case <span class="math-container">$x=b$</span></strong>.</p>
</li>
</ul>
<p>We have that <span class="math-container">$\frac{-2ab - 7b^2}{a^2 - 5b^2}=b$</span> gives</p>
<p><span class="math-container">$$(*) \hspace{2cm} a(a + 2) =5b^2 - 7b.$$</span></p>
<p>Since <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are prime, and <span class="math-container">$x=2$</span> or <span class="math-container">$y=2$</span> do not give solutions to <span class="math-container">$5x^2-7x+1=y^2$</span>, we conclude that <span class="math-container">$y=a+1$</span> and <span class="math-container">$x=b$</span> are ODD primes. In particular, <span class="math-container">$a(a + 2)\equiv 0 \mod 4$</span>.
Now reducing (*) <span class="math-container">$\mod 4$</span>, contradicts the fact that <span class="math-container">$b$</span> is odd. Thus, the case <span class="math-container">$x=b$</span> does not occur.</p>
<p>Therefore <span class="math-container">$(x,y)= (11,23)$</span> and <span class="math-container">$(x,y)=(3,5)$</span> are the only solutions
where both coordinates are prime numbers.</p>
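The <span class="math-container">$b=1$</span> branch can be confirmed numerically (a pure-Python sketch; the helper name and the search range for <span class="math-container">$a$</span> are arbitrary choices):

```python
# Check the b = 1 branch of the parametrization: for integers a with
# (a^2 - 5) dividing (-2a - 7), recover (x, y) and keep the positive
# solutions of 5x^2 - 7x + 1 = y^2.
def branch_b1(bound=100):
    solutions = set()
    for a in range(-bound, bound + 1):
        num, den = -2 * a - 7, a * a - 5
        if den != 0 and num % den == 0:
            x = num // den
            y = a * x + 1
            if x > 0 and 5 * x * x - 7 * x + 1 == y * y:
                solutions.add((x, abs(y)))
    return sorted(solutions)

print(branch_b1())  # -> [(3, 5), (11, 23)]
```

Only <span class="math-container">$a=\pm 2$</span> survive the divisibility test, matching the case analysis above.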
|
2,117,054 | <p>Find all prime solutions of the equation $5x^2-7x+1=y^2.$</p>
<p>It is easy to see that
$y^2+2x^2=1 \mod 7.$ Since $\mod 7$-residues are $1,2,4$ it follows that $y^2=4 \mod 7$, $x^2=2 \mod 7$ or $y=2,5 \mod 7$ and $x=3,4 \mod 7.$ </p>
<p>In the same way from $y^2+2x=1 \mod 5$ we have that $y^2=1 \mod 5$ and $x=0 \mod 5$ or $y^2=-1 \mod 5$ and $x=4 \mod 5.$</p>
<p>How can one put the two cases together?</p>
<p>A computer search finds two prime solutions: $(3,5)$ and $(11,23).$</p>
| Joffan | 206,402 | <p>Can we just charge straight at this?</p>
<p>$y$ is odd. $x=2 \ (\Rightarrow y^2=7)$ is not a solution, so $x$ is an odd prime.</p>
<p>$x(5x-7) = (y-1)(y+1)$, so $x \mid (y-1) $ or $x \mid (y+1)$ ($x$ is prime)
so $kx=y\pm1$ for some positive integer $k$; since $x$ and $y$ are both odd, $k$ is even.</p>
<p>$k\ge4$ is too large: $(kx\pm1)^2\ge (4x-1)^2 $ $= 16x^2-8x+1$ $>5x^2-7x+1$. So only $k=2$, that is $x=\frac 12(y\pm1)$, makes the equality feasible.</p>
<p>Considering the two cases: </p>
<ul>
<li><p><strong>(1)</strong> $x=\frac 12(y+1)$, $y=2x-1$:<br>
$x(5x-7) = 4x(x-1) \implies x = 3, y=5$</p></li>
<li><p><strong>(2)</strong> $x=\frac 12(y-1)$, $y=2x+1$:<br>
$x(5x-7) = 4x(x+1) \implies x = 11, y=23$</p></li>
</ul>
<p>Note that I didn't constrain $y$ at any point - the two solutions just happened to have $y$ prime.</p>
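A brute-force check agrees with this conclusion (a sketch; the bound $20{,}000$ is an arbitrary choice, since the argument above already covers all primes $x$):

```python
from math import isqrt

def is_prime(n):
    """Trial-division primality test (adequate for small n)."""
    if n < 2:
        return False
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return False
    return True

# For each prime x up to the bound, test whether 5x^2 - 7x + 1 is the
# square of a prime y.
found = []
for x in range(2, 20_000):
    if is_prime(x):
        v = 5 * x * x - 7 * x + 1
        y = isqrt(v)
        if y * y == v and is_prime(y):
            found.append((x, y))

print(found)  # -> [(3, 5), (11, 23)]
```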
|
939,747 | <p>Denote by $a_n$ the sum of the first $n$ primes. Prove that there is a perfect square between $a_n$ and $a_{n+1}$, inclusive, for all $n$.</p>
<p>The first few sums of primes are $2$, $5$, $10$, $17$, $28$, $41$, $58$, $75$. It seems there is a perfect square between each pair of successive sums. In addition, we can put a bound on $a_n$, namely $a_n \le 2+3+5+7+9+11+...+(2n+1)=n^2+2n+5$.</p>
| Travis Willse | 155,629 | <p>Not all matrices $A$ arise this way. As tetori points out, all matrices of the form $XX^T$ for some vector $X \in \mathbb{R}^n$ are symmetric, have rank at most one, and have all diagonal entries nonnegative.</p>
<p>These properties actually characterize such matrices, however, and we can show this by describing how we can recover $X$ from such $A$---in fact we can only do this up to sign, as $XX^T = (-X)(-X)^T$.</p>
<p>First, the $(a, a)$ entry of a matrix $XX^T$ is $x_a^2$, so for each $a$, $x_a \in \{\pm \sqrt{A_{aa}}\}$. If we fix a choice of sign for $x_1$, then the sign of the remaining entries $x_a$ is determined by the sign of $x_1$ and the signs of $A_{a, b} = x_a x_b$, $a \neq b$.</p>
<p><strong>Edit</strong> As Marc points out, there's a faster way to recover $X$. Any column of $A$ is a multiple of $X$. If there is some nonzero column $A^k$ (so that, in particular, $A_{kk} > 0$), we can recover $X$ by $$X := \pm \frac{1}{\sqrt{A_{kk}}} A^k.$$ Of course, if $A = 0$, then $X = 0$ is the only solution.</p>
|
3,577,249 | <p>Let <span class="math-container">$X$</span> be a topological space and let <span class="math-container">$Y\subseteq X$</span> be such that <span class="math-container">$\operatorname{der}(Y)=\varnothing$</span>: then every <span class="math-container">$y\in Y$</span> has a neighborhood <span class="math-container">$I_y$</span> with <span class="math-container">$I_y\cap Y=\{y\}$</span>, so <span class="math-container">$Y$</span> is a set of isolated points. If <span class="math-container">$Y$</span> consists of isolated points, clearly none of its points can be an accumulation point of <span class="math-container">$Y$</span>; but if <span class="math-container">$x\notin Y$</span>, how does one prove that some neighborhood <span class="math-container">$I_x$</span> of <span class="math-container">$x$</span> satisfies <span class="math-container">$I_x\cap Y=\varnothing$</span>?</p>
<p>Could someone help me, please?</p>
| DanielWainfleet | 254,665 | <p>der(<span class="math-container">$Y$</span>) is empty iff <span class="math-container">$Y$</span> is a closed discrete subspace of <span class="math-container">$X$</span>.</p>
<p>(1). If der(<span class="math-container">$Y$</span>) is empty: (i). Any <span class="math-container">$p\in \overline Y$</span> \ <span class="math-container">$Y$</span> would belong to der(<span class="math-container">$Y$</span>). So <span class="math-container">$\overline Y=Y.$</span> (ii). Any <span class="math-container">$y\in Y$</span> has an open nbhd <span class="math-container">$U_y$</span> (in the space <span class="math-container">$X$</span>) such that <span class="math-container">$U_y\cap Y=\{y\},$</span> so <span class="math-container">$\{y\}$</span> is open in the subspace topology on <span class="math-container">$Y$</span>.</p>
<p>(2).If <span class="math-container">$Y$</span> is a closed discrete subspace of <span class="math-container">$X$</span>: (iii). If <span class="math-container">$y\in Y$</span> then <span class="math-container">$\{y\}$</span> is open in the subspace <span class="math-container">$Y$</span> so there exists some open subset <span class="math-container">$U_y$</span> of the space <span class="math-container">$X$</span> such that <span class="math-container">$U_y\cap Y=\{y\},$</span> so <span class="math-container">$y\not\in$</span> der(<span class="math-container">$Y$</span>). (iv). If <span class="math-container">$x\in X$</span> \ <span class="math-container">$Y$</span> then <span class="math-container">$X\setminus Y=X$</span> \ <span class="math-container">$\overline Y$</span> is a nbhd of <span class="math-container">$x$</span> which is disjoint from <span class="math-container">$Y,$</span> so <span class="math-container">$x\not \in$</span> der(<span class="math-container">$Y$</span>).</p>
<p>Example. Let <span class="math-container">$X=\Bbb R$</span> with the usual topology, and let <span class="math-container">$Y=\Bbb N.$</span> </p>
<p>Note that if der(<span class="math-container">$Y$</span>) is empty then members of <span class="math-container">$Y$</span> are isolated points in the space <span class="math-container">$Y$</span> but not necessarily isolated points of <span class="math-container">$X$</span>.</p>
|
1,165,147 | <p>I would like to find the area under the curve of $\frac{\sin(ax/2)}{\sin(x/2)}$, namely between the first zero crossing on the left and right:</p>
<p>$$
\int_{-\frac{2\pi}{a}}^{\frac{2\pi}{a}} \frac{\sin(\frac{ax}{2})}{\sin(\frac{x}{2})}\,dx
$$</p>
<p>I realized from Wolfram Alpha that this was not a simple solution, so I was wondering if there is a good approximation for the area. For my application, $a$ is usually over $10000$.</p>
<p>Thank you for any advice and suggestions.</p>
| Julián Aguirre | 4,791 | <p>Using symmetry and the change of variable $x=2\,t$ we have
$$
\int_{-\frac{2\pi}{a}}^{\frac{2\pi}{a}} \frac{\sin(\frac{ax}{2})}{\sin(\frac{x}{2})}\,dx=
4\int_0^{\pi/a} \frac{\sin(a\,x)}{\sin x}\,dx.
$$
If $a>2$ then
$$
\frac{a}{\pi}\,\Bigl(\sin\frac{\pi}{a}\Bigr)\,x\le\sin x\le x\quad 0\le x\le\frac{\pi}{a}
$$
and
$$
\int_0^{\pi/a}\frac{\sin(a\,x)}{x}\,dx\le\int_0^{\pi/a}\frac{\sin(a\,x)}{\sin x}\,dx\le\frac{\pi}{a}\,\csc\frac{\pi}{a}\int_0^{\pi/a}\frac{\sin(a\,x)}{x}\,dx.
$$
Finally we get
$$
\int_0^{\pi}\frac{\sin x}{x}\,dx\le\int_0^{\pi/a}\frac{\sin(a\,x)}{\sin x}\,dx\le\frac{\pi}{a}\,\csc\frac{\pi}{a}\int_0^{\pi}\frac{\sin x}{x}\,dx.
$$
For $a$ large
$$
1\le\frac{\pi}{a}\,\csc\frac{\pi}{a}\le1+\frac16\,\frac{\pi^2}{a^2}.
$$
Thus
$$
0\le\int_0^{\pi/a}\frac{\sin(a\,x)}{\sin x}\,dx-\int_0^{\pi}\frac{\sin x}{x}\,dx\le\frac16\,\frac{\pi^2}{a^2}\int_0^{\pi}\frac{\sin x}{x}\,dx.
$$</p>
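The bounds above imply the integral tends to $4\int_0^{\pi}\frac{\sin x}{x}\,dx=4\,\mathrm{Si}(\pi)\approx 7.4077$ as $a\to\infty$. A quick numeric check (a pure-Python sketch; the midpoint rule and the choice $a=10^4$ are mine, not from the answer):

```python
from math import sin, pi

def midpoint(f, lo, hi, n=100_000):
    """Composite midpoint rule (also avoids the removable singularity at 0)."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

a = 10_000.0
# Substituting x = t/a turns the original integral into 4*I(a), with
# I(a) = integral over (0, pi) of sin(t) / (a * sin(t/a)) dt.
I_a = midpoint(lambda t: sin(t) / (a * sin(t / a)), 0.0, pi)
Si_pi = midpoint(lambda t: sin(t) / t, 0.0, pi)

print(4 * I_a, 4 * Si_pi)  # both close to 7.40775 for large a
```

For $a=10^4$ the two values agree to roughly $10^{-8}$, consistent with the $\frac16\pi^2/a^2$ error bound.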
|
870,583 | <p><strong>Question:</strong>
Each user on a computer system has a password, which is six to eight characters long, where each character is an upper-case letter or a digit. Each password must contain at least one digit. How many possible passwords are there?</p>
<blockquote>
<p>I'm in the <strong>Basic of Counting</strong> section of my Discrete Mathematics book, and I have a problem with my reasoning with this. I will give you my reasoning and the books reasoning. Both give different answers, but I don't see a difference in the train of thought, so I need someone to point out the difference.</p>
</blockquote>
<p><strong>My Attempt:</strong>
Immediately I noticed 3 kinds of character length which allows me to break it down to three cases, respectively $P_6, P_7, P_8$, then add all of them. For $P_6$, $5$ will be made up of alpha numeric characters, and $1$ is made up of just digits due to the requirement of <em>"at least one digit"</em>, thus</p>
<p>$$P_6 = (36)^5*10$$</p>
<p>Should be enough to show my train of thought, now the books solution.</p>
<p><strong>Books Solution:</strong> The book did the same thing in dividing in 3 cases and adding them later so I'll go ahead and show you their train of thought for $P_6$. </p>
<p>$$P_6 = 36^6 - 26^6$$</p>
<p>Basically its the number of possible 6 alphanumeric minus just alpha numeric.</p>
<blockquote>
<p>I know that both give different answers, but I still can't tell why.</p>
</blockquote>
| amanpandeyap | 193,162 | <p>$36^6$ counts every string of $6$ characters drawn from the $36$ allowed characters ($26$ letters and $10$ digits). That count includes the $26^6$ strings made of letters only, i.e. with no digit at all, so we subtract $26^6$. (Your product $36^5\cdot 10$ does not count each password exactly once: depending on how the factors are interpreted, it either pins the digit to one fixed position, missing passwords whose only digits lie elsewhere, or counts passwords with several digits more than once.)</p>
<p>Similarly for passwords of length $7$ and $8$.</p>
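The three complement counts are easy to total by machine (a sketch):

```python
# Length-n strings over 36 characters, minus the all-letter (26-character)
# strings, summed over the allowed lengths 6, 7, 8.
total = sum(36 ** n - 26 ** n for n in (6, 7, 8))
print(total)  # -> 2684483063360
```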
|
1,255,368 | <p>How do I solve this? What steps? I have been beating my head into the wall all evening. </p>
<p>$$ x^2 + y^2 = \frac{x}{y} + 4 $$</p>
| CivilSigma | 229,877 | <p>We have:</p>
<p>$$ \frac{d(x^2)}{dx} + \frac{d(y^2)}{dy}\cdot \frac{dy}{dx} = \frac{d( \frac{x}{y})}{dx} + \frac{d(4)}{dx}$$</p>
<p>$$2x+2y\frac{dy}{dx} = \frac{y-x \frac{dy}{dx} }{ y^2} +0$$</p>
<p>Can you finish?</p>
<hr>
<p>Okay, to finish it up:</p>
<p>$$2y\frac{dy}{dx} - \frac{y-x \frac{dy}{dx} }{ y^2} = -2x $$<br>
Multiplying both sides by $y^2$
$$2y^3\frac{dy}{dx}-(y-x\frac{dy}{dx})=-2xy^2$$
$$2y^3\frac{dy}{dx}+x\frac{dy}{dx}=-2xy^2 +y$$
$$\frac{dy}{dx} \cdot (2y^3+x) =-2xy^2 +y$$
$$\frac{dy}{dx} = \frac{-2xy^2 +y}{2y^3+x}$$</p>
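As a cross-check (a sketch; the helper names and sample points are made up): implicit differentiation also gives $dy/dx = -F_x/F_y$ for $F(x,y)=x^2+y^2-\frac{x}{y}-4$, and that expression agrees with the formula above identically in $(x,y)$ whenever $y\neq 0$:

```python
# dy/dx = -F_x / F_y for F(x, y) = x^2 + y^2 - x/y - 4 should match
# (-2*x*y^2 + y) / (2*y^3 + x) as an algebraic identity (y != 0).
def implicit_slope(x, y):
    Fx = 2 * x - 1 / y
    Fy = 2 * y + x / y ** 2
    return -Fx / Fy

def formula(x, y):
    return (-2 * x * y ** 2 + y) / (2 * y ** 3 + x)

for x, y in [(1.0, 2.0), (-0.5, 1.5), (3.0, -2.0)]:
    assert abs(implicit_slope(x, y) - formula(x, y)) < 1e-12
print("ok")
```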
|
2,378,508 | <p>I am reading about Arithmetic mean and Harmonic mean. From <a href="https://en.wikipedia.org/wiki/Harmonic_mean#In_physics" rel="nofollow noreferrer">wikipedia</a>
I got this comparision about them:</p>
<blockquote>
<p>In certain situations, especially many situations involving rates and ratios, the harmonic mean provides the truest average. For instance, if a vehicle travels a certain distance at a speed x (e.g., 60 kilometres per hour, km/h) and then the same distance again at a speed y (e.g., 40 km/h), then its average speed is the harmonic mean of x and y (48 km/h), and its total travel time is the same as if it had traveled the whole distance at that average speed. However, if the vehicle travels for a certain amount of time at a speed x and then the same amount of time at a speed y, then its average speed is the arithmetic mean of x and y, which in the above example is 50 kilometres per hour. The same principle applies to more than two segments: given a series of sub-trips at different speeds, if each sub-trip covers the same distance, then the average speed is the harmonic mean of all the sub-trip speeds; and if each sub-trip takes the same amount of time, then the average speed is the arithmetic mean of all the sub-trip speeds. </p>
</blockquote>
<pre><code> distance time velocity remark
1st section d/2 t1 60
2nd section d/2 t2 40
1st + 2nd section d (t1+t2) v use harmonic mean to calculate v
1st section d1 t/2 60
2nd section d2 t/2 40
1st + 2nd section d1+d2 t v use arithmetic mean to calculate v
</code></pre>
<p>How do <code>distance</code> and <code>time</code> lead us to compute the harmonic mean and the arithmetic mean, respectively, when computing the average <code>v</code> in these two cases?</p>
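The two cases from the table can be made concrete (a sketch; the values of <code>d</code> and <code>t</code> are arbitrary):

```python
# Same distance at each speed: total time is d/v1 + d/v2, so the average
# speed is the harmonic mean.  Same time at each speed: the average speed
# is the arithmetic mean.
v1, v2, d, t = 60.0, 40.0, 120.0, 2.0

avg_equal_distance = (d + d) / (d / v1 + d / v2)  # harmonic mean of v1, v2
avg_equal_time = (v1 * t + v2 * t) / (t + t)      # arithmetic mean of v1, v2

print(avg_equal_distance, avg_equal_time)  # -> 48.0 50.0
```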
| Bloch | 462,921 | <p>Your equation is not even a differential equation, as only the second derivative but no other derivatives (or the function itself) appears. You can simply integrate twice.</p>
<p>However, in my opinion:</p>
<p>$(y'')^{2/3}=-(y'')^{3/2}$</p>
<p>$(y'')^{3/2}\ (y'')^{-2/3} = (y'')^{5/6}=-1$</p>
<p>so $(y'')^{5/6}+1=0$</p>
|
2,378,508 | <p>I am reading about Arithmetic mean and Harmonic mean. From <a href="https://en.wikipedia.org/wiki/Harmonic_mean#In_physics" rel="nofollow noreferrer">wikipedia</a>
I got this comparision about them:</p>
<blockquote>
<p>In certain situations, especially many situations involving rates and ratios, the harmonic mean provides the truest average. For instance, if a vehicle travels a certain distance at a speed x (e.g., 60 kilometres per hour, km/h) and then the same distance again at a speed y (e.g., 40 km/h), then its average speed is the harmonic mean of x and y (48 km/h), and its total travel time is the same as if it had traveled the whole distance at that average speed. However, if the vehicle travels for a certain amount of time at a speed x and then the same amount of time at a speed y, then its average speed is the arithmetic mean of x and y, which in the above example is 50 kilometres per hour. The same principle applies to more than two segments: given a series of sub-trips at different speeds, if each sub-trip covers the same distance, then the average speed is the harmonic mean of all the sub-trip speeds; and if each sub-trip takes the same amount of time, then the average speed is the arithmetic mean of all the sub-trip speeds. </p>
</blockquote>
<pre><code> distance time velocity remark
1st section d/2 t1 60
2nd section d/2 t2 40
1st + 2nd section d (t1+t2) v use harmonic mean to calculate v
1st section d1 t/2 60
2nd section d2 t/2 40
1st + 2nd section d1+d2 t v use arithmetic mean to calculate v
</code></pre>
<p>How do <code>distance</code> and <code>time</code> lead us to compute the harmonic mean and the arithmetic mean, respectively, when computing the average <code>v</code> in these two cases?</p>
| Raghav Madan | 932,208 | <p>I don't think the existing answers address your question.
The degree of the differential equation is 9: converting it to polynomial form gives <span class="math-container">$(y'')^9-(y'')^4=0$</span>, and, just as the polynomial <span class="math-container">$x^9-x^4=0$</span> has degree 9, the degree here is clearly 9.
Hope this answers your doubt.</p>
|
2,735,001 | <p>I was asked to find the corresponding series for the function $\ln(x^2+4)$</p>
<p>The obvious solution to me was to use the well known fact $$\ln(1+x)=\sum_{n=1}^\infty (-1)^{n-1}\frac{x^n}{n}$$
And substituting $x^2+3$ for $x$
$$\ln(1+(x^2+3))=\sum_{n=1}^\infty (-1)^{n-1}\frac{(x^2+3)^n}{n}$$
Using binomial theorem on the $(x^2+3)^n$ on the inside gives us the nested summation
$$\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}\sum_{m=0}^n {n\choose{m}}x^{2m}3^{n-m}=S_1$$</p>
<p>However, the answer key gives the series as
$$S_2=\ln 4+\sum_{n=1}^\infty (-1)^n \frac{x^{2n+2}}{2^{2n+2}(n+1)}$$</p>
<p>Question: Is $S_1=S_2$? If so, how do we prove this? If not, where is the error in this reasoning?</p>
<p>Thanks</p>
| Przemysław Scherwentke | 72,361 | <p>HINT:
$$
4+x^2=4\left(1+\frac{x^2}{4}\right)
$$
and $\ln(ab)=\ln a+\ln b$. (This also locates the error in $S_1$: the series for $\ln(1+u)$ converges only for $-1<u\le 1$, so substituting $u=x^2+3\ge 3$ is not valid, and $S_1$ diverges for every $x$.)</p>
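Spelling the hint out gives $\ln(x^2+4)=\ln 4+\ln\!\left(1+\frac{x^2}{4}\right)=\ln 4+\sum_{n\ge 1}(-1)^{n-1}\frac{x^{2n}}{n\,4^n}$, valid for $|x|<2$. A quick numeric check (a sketch; the helper name is made up):

```python
from math import log

def series(x, terms=60):
    """ln 4 + sum_{n>=1} (-1)^(n-1) x^(2n) / (n * 4^n), for |x| < 2."""
    u = x * x / 4.0
    return log(4.0) + sum((-1) ** (n - 1) * u ** n / n
                          for n in range(1, terms + 1))

for x in (0.0, 0.5, 1.0, 1.5):
    assert abs(series(x) - log(x * x + 4.0)) < 1e-9
print("ok")
```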
|
2,568,157 | <p>Consider the following:</p>
<p>$$(1^5+2^5)+(1^7+2^7)=2(1+2)^4$$</p>
<p>$$(1^5+2^5+3^5)+(1^7+2^7+3^7)=2(1+2+3)^4$$</p>
<p>$$(1^5+2^5+3^5+4^5)+(1^7+2^7+3^7+4^7)=2(1+2+3+4)^4$$</p>
<p>In General is it true for further increase i.e.,</p>
<p>Is</p>
<p>$$\sum_{i=1}^n i^5+i^7=2\left( \sum_{i=1}^ni\right)^4$$ true $\forall $ $n \in \mathbb{N}$</p>
| Guy Fsone | 385,707 | <p>The formula is already true for $n=1,2,\dots,5$ and we know that
$$ \sum_{i=1}^ni= \frac{n\left(n+1\right)}{2}$$
Assume
$$\sum_{i=1}^n i^5+i^7=2\left( \sum_{i=1}^ni\right)^4 =\frac{n^4\left(n+1\right)^4}{8}$$</p>
<p>then,
$$\begin{align}\sum_{i=1}^{n+1} i^5+i^7&=\sum_{i=1}^{n} i^5+i^7 +(n+1)^5 +(n+1)^7\\&=\color{blue}{\frac{n^4\left(n+1\right)^4}{8}}+(n+1)^5 +(n+1)^7
\\&=\color{blue}{\frac{n^4\left(n+1\right)^4}{8}}+(n+1)^4\left[n+1 +(n+1)^3 \right]
\\&=(n+1)^4\left( \frac{n^4}{8} +n+1 +\color{red}{n^3+3n^2+3n+1}\right)
\\&=(n+1)^4\left( \frac{n^4}{8}+ n^3+3n^2+4n+2\right)
\\&=\frac{(n+1)^4}{8}\left( n^4+ \color{blue}{4}\cdot\color{red}{2}\cdot n^3+\color{blue}{6}\cdot\color{red}{2^2}\cdot n^2+\color{blue}{4}\cdot\color{red}{2^3}\cdot n+\color{red}{2^4}\right)
\\&=\color{blue}{\frac{\left(n+1\right)^4\left(n+2\right)^4}{8}=2\left( \sum_{i=1}^{n+1}i\right)^4}\end{align}$$</p>
<p>which proves that the formula holds for $n+1$, completing the induction.</p>
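The identity can also be machine-checked for many $n$ (a brute-force sketch, not a proof):

```python
# Check sum_{i=1}^n (i^5 + i^7) == 2 * (n(n+1)/2)^4 in exact integers.
for n in range(1, 200):
    lhs = sum(i ** 5 + i ** 7 for i in range(1, n + 1))
    rhs = 2 * (n * (n + 1) // 2) ** 4
    assert lhs == rhs
print("ok")
```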
|
438,490 | <p>I'm trying to find the eigenvalues and eigenvectors of the following <span class="math-container">$n\times n$</span> matrix, with <span class="math-container">$k$</span> blocks.
<span class="math-container">\begin{gather*}
X = \left( \begin{array}{cccc}
A & B & \cdots & B \\
B & A & \cdots & B \\
\vdots & \vdots & \ddots & \vdots \\
B & B & \cdots & A \end{array} \right) \\
A = \left( \begin{array}{ccc}
a & \cdots & a \\
\vdots & & \vdots \\
a & \cdots & a \end{array} \right) \in \mathbb{R}^{\frac{n}{k} \times \frac{n}{k}} \\
B = \left( \begin{array}{ccc}
b & \cdots & b \\
\vdots & & \vdots \\
b & \cdots & b \end{array} \right) \in \mathbb{R}^{\frac{n}{k} \times \frac{n}{k}}.
\end{gather*}</span></p>
<p>I know that the first eigenvector is the constant <span class="math-container">$1$</span> vector. <br>
So the first eigenvalue is: <span class="math-container">$X1 = \left(\frac{n}{k}a+(k-1)\frac{n}{k} b\right)1$</span>.</p>
<p>How do I find all of the rest of the eigenvalues and eigenvectors?</p>
| Joseph Van Name | 22,277 | <p><span class="math-container">$X$</span> is simply a tensor product <span class="math-container">$C\otimes D$</span> where <span class="math-container">$C$</span> is the matrix with all diagonal entries <span class="math-container">$a$</span> and non-diagonal entries <span class="math-container">$b$</span> and where each entry in <span class="math-container">$D$</span> is <span class="math-container">$1$</span>.</p>
<p>If <span class="math-container">$R$</span>, <span class="math-container">$S$</span> are diagonalizable, then the eigenvalues of <span class="math-container">$R\otimes S$</span> are simply the products <span class="math-container">$\lambda\mu$</span> where <span class="math-container">$\lambda$</span> is an eigenvalue of <span class="math-container">$R$</span> and <span class="math-container">$\mu$</span> is an eigenvalue of <span class="math-container">$S$</span>. The eigenvectors of <span class="math-container">$R\otimes S$</span> are simply the values <span class="math-container">$x\otimes y$</span> where <span class="math-container">$x$</span> is an eigenvector of <span class="math-container">$R$</span> and <span class="math-container">$y$</span> is an eigenvector of <span class="math-container">$S$</span>.</p>
<p>The eigenvalues of <span class="math-container">$D$</span> are <span class="math-container">$0$</span>, <span class="math-container">$n/k$</span> where <span class="math-container">$n/k$</span> corresponds to the eigenvector <span class="math-container">$[1,\dotsc,1]^T$</span> and the eigenvectors of <span class="math-container">$0$</span> are the vectors <span class="math-container">$[x_1,\dotsc,x_{n/k}]^T$</span> with <span class="math-container">$x_1+\dotsb+x_{n/k}=0$</span>. The eigenvalues of <span class="math-container">$C$</span> can be found in a similar way.</p>
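A small concrete check of this tensor-product picture, in pure Python (a sketch; the sizes k = 3, m = n/k = 2 and the values a = 5, b = 2 are arbitrary choices):

```python
# Concrete check of the C (x) D structure: diagonal blocks all 'a',
# off-diagonal blocks all 'b'.
k, m, a, b = 3, 2, 5, 2
n = k * m
X = [[a if i // m == j // m else b for j in range(n)] for i in range(n)]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def kron(x, y):  # Kronecker product of two vectors
    return [xi * yi for xi in x for yi in y]

# lambda = m*(a + (k-1)*b): all-ones eigenvector of C, all-ones for D.
v = kron([1] * k, [1] * m)
assert matvec(X, v) == [m * (a + (k - 1) * b) * vi for vi in v]

# lambda = m*(a - b): x = [1, -1, 0] is an eigenvector of C.
v = kron([1, -1, 0], [1] * m)
assert matvec(X, v) == [m * (a - b) * vi for vi in v]

# lambda = 0: y = [1, -1] is a zero-sum eigenvector of the all-ones D.
v = kron([1, 0, 0], [1, -1])
assert matvec(X, v) == [0] * n
print("ok")
```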
|
2,365,666 | <blockquote>
<p>Question: Let $\phi: \mathbb{Z}\rightarrow \mathbb{Z}$ be a ring homomorphism.</p>
</blockquote>
<p>Show that $\forall x \in \mathbb{Z}:$ either $\phi\left ( x \right )=0$ or $\phi\left ( x \right )=x.$</p>
<p>I've shown that $\phi\left ( x \right )=x$.</p>
<p>Now I want to show that $\phi\left ( x \right )=0$.</p>
<p>I begin by assuming that $\phi\left ( x \right )\neq x$.
Then I tried using inverse maps and the preservation of the group operation, but to no avail.</p>
<p>Any hint is appreciated. </p>
<p>Edit: <a href="https://i.stack.imgur.com/3WkaN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3WkaN.png" alt="enter image description here"></a></p>
| Chinnapparaj R | 378,881 | <p>Any homomorphism $\phi:\mathbb{Z} \rightarrow \mathbb{Z}$ is completely determined by the image of $1$</p>
<p>Note that $\phi(1)=\phi(1^2)=\phi(1)\phi(1)=(\phi(1))^2$ and so $\phi(1)=0$ or $1$</p>
<p>If $\phi(1)=0$, then $\phi(x)=\phi(x\cdot 1)=\phi(x)\,\phi(1)=0$ for every $x$.</p>
<p>If $\phi(1)=1$, then by additivity $\phi(x)=\phi(\underbrace{1+\cdots+1}_{x})=x\,\phi(1)=x$ for $x>0$; together with $\phi(0)=0$ and $\phi(-x)=-\phi(x)$, this gives $\phi(x)=x$ for all $x$.</p>
|
2,365,666 | <blockquote>
<p>Question: Let $\phi: \mathbb{Z}\rightarrow \mathbb{Z}$ be a ring homomorphism.</p>
</blockquote>
<p>Show that $\forall x \in \mathbb{Z}:$ either $\phi\left ( x \right )=0$ or $\phi\left ( x \right )=x.$</p>
<p>I've shown that $\phi\left ( x \right )=x$.</p>
<p>Now I want to show that $\phi\left ( x \right )=0$.</p>
<p>I begin by assuming that $\phi\left ( x \right )\neq x$.
Then I tried using inverse maps and the preservation of the group operation, but to no avail.</p>
<p>Any hint is appreciated. </p>
<p>Edit: <a href="https://i.stack.imgur.com/3WkaN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3WkaN.png" alt="enter image description here"></a></p>
| Robert Lewis | 67,071 | <p>Look at this: </p>
<p>$\phi(1) = \phi(1^2) = \phi(1) \cdot \phi(1) = (\phi(1))^2; \tag{1}$</p>
<p>(1) shows that $\phi(1) = 0$ or $\phi(1) = 1$; if $\phi(1) = 0$ then</p>
<p>$\phi(x) = \phi(1x) = \phi(1)\phi(x) = 0(\phi(x)) = 0 \tag{2}$</p>
<p>for all $x \in \Bbb Z$; if $\phi(1) \ne 0$, then $\phi(1) = 1$; if $\phi(1) = 1$, then</p>
<p>$\phi(x) = x \tag{3}$</p>
<p>for $x > 0$; just add $1$ to itself $x$ times. If $x<0$ use $x + (-x) = 0$ to show $\phi(x) = -\phi(-x) = x$.</p>
|
2,881,914 | <p>Using a computer I found the double sum</p>
<p>$$S(n)= \sum_{j=1}^n\sum_{k=1}^n \frac{j^2 + jk + k^2}{j^2(j+k)^2k^2}$$
has values</p>
<p>$$S(10) \quad\quad= 1.881427206538142 \\ S(1000) \quad= 2.161366028875634 \\S(100000) = 2.164613524212465\\$$</p>
<p>As a guess I compared with fractions $\pi^p/q$ where $p,q$ are positive integers and it appears </p>
<p>$$\lim_{n \to \infty} S(n) = \frac{\pi^4}{45} = 2\zeta(4) \approx 2.164646467422276 $$</p>
<p>I'd be interested in seeing a proof if true. </p>
| Tom Himler | 457,359 | <p>So before I start, I've never even attempted to evaluate a double sum before, so there could very well have been an easier way.</p>
<p>$$\sum_{j=1}^\infty\sum_{k=1}^\infty \frac{j^2+jk+k^2}{j^2k^2(j+k)^2} = \sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{k^2(j+k)^2} +\sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{jk(j+k)^2} + \sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{j^2(j+k)^2}= $$</p>
<p>$$ 2\sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{k^2(j+k)^2} +\sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{jk(j+k)^2}$$</p>
<p>Through partial fraction decomposition the above equals:</p>
<p>$$\sum_{j=1}^\infty\sum_{k=1}^\infty \left[\frac{1}{j^3k} -\frac{1}{j^3(j+k)}-\frac{1}{j^2(j+k)^2}\right]+2\sum_{j=1}^\infty\sum_{k=1}^\infty \left[\frac{2}{j^3(j+k)}-\frac{2}{j^3k}+\frac{1}{j^2k^2}+\frac{1}{j^2(j+k)^2}\right] $$</p>
<p>Collecting like-terms:</p>
<p>$$3\sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{j^3(j+k)}-3\sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{j^3k} +\sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{j^2(j+k)^2} + 2\sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{j^2k^2} = $$</p>
<p>The final sum clearly equals $2\zeta(2)^2$ or $\frac{\pi^4}{18}$. I then evaluate the first two sums by combining them to get:</p>
<p>$$3\sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{j^3}(\frac{1}{j+k}-\frac{1}{k}) $$</p>
<p>Interchanging $j$ and $k$ and averaging the two sums gives:</p>
<p>$$\frac{3}{2}\sum_{j=1}^\infty\sum_{k=1}^\infty \left[\frac{1}{j^3}\left(\frac{1}{j+k}-\frac{1}{k}\right)+\frac{1}{k^3}\left(\frac{1}{j+k}-\frac{1}{j}\right)\right] $$</p>
<p>This can be rewritten as:</p>
<p>$$-\frac{3}{2}\sum_{j=1}^\infty\sum_{k=1}^\infty \left[\frac{1}{j^2k(j+k)}+\frac{1}{k^2j(j+k)}\right]= -\frac{3}{2}\sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{j^2k^2} = -\frac{3}{2}\zeta(2)^2 = -\frac{\pi^4}{24}$$</p>
<p>So putting that back into the original problem:</p>
<p>$$\sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{j^2(j+k)^2} + \frac{\pi^4}{18} -\frac{\pi^4}{24} = \sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{j^2(j+k)^2} + \frac{\pi^4}{72} $$</p>
<p>This is all I got to. I couldn't evaluate the other sum the way I did before. Using a calculator there is a very good chance it equals $\frac{\pi^4}{120}$.</p>
<p>Just for fun I was able to write the remaining sum as:</p>
<p>$$\sum_{j=1}^\infty\sum_{k=1}^\infty \frac{1}{j^2(j+k)^2} = \sum_{n=1}^\infty \frac{\zeta(2,n+1)}{n^2}$$</p>
<p>Where $\zeta(x,y)$ is the Hurwitz Zeta Function. Wolfram Alpha was able to calculate the sum as $\frac{\pi^4}{120}$ as desired.</p>
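Both closed forms can be sanity-checked with partial sums (a sketch; partial sums approach their limits only like $c/N$, hence the loose tolerance and the arbitrary cutoff $N=500$):

```python
from math import pi

N = 500
S = sum((j * j + j * k + k * k) / (j * j * k * k * (j + k) ** 2)
        for j in range(1, N + 1) for k in range(1, N + 1))
T = sum(1.0 / (j * j * (j + k) ** 2)
        for j in range(1, N + 1) for k in range(1, N + 1))

print(S, pi ** 4 / 45)   # S approaches pi^4/45  = 2.16465...
print(T, pi ** 4 / 120)  # T approaches pi^4/120 = 0.81174...
```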
|
2,881,914 | <p>Using a computer I found the double sum</p>
<p>$$S(n)= \sum_{j=1}^n\sum_{k=1}^n \frac{j^2 + jk + k^2}{j^2(j+k)^2k^2}$$
has values</p>
<p>$$S(10) \quad\quad= 1.881427206538142 \\ S(1000) \quad= 2.161366028875634 \\S(100000) = 2.164613524212465\\$$</p>
<p>As a guess I compared with fractions $\pi^p/q$ where $p,q$ are positive integers and it appears </p>
<p>$$\lim_{n \to \infty} S(n) = \frac{\pi^4}{45} = 2\zeta(4) \approx 2.164646467422276 $$</p>
<p>I'd be interested in seeing a proof if true. </p>
| Felix Marin | 85,343 | <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\bbox[#ffd,10px]{\ds{\sum_{j = 1}^{\infty}\sum_{k = 1}^{\infty}
{j^{2} + jk + k^{2} \over j^{2}\pars{j + k}^{2}k^{2}}}} =
\sum_{j = 1}^{\infty}\sum_{k = 1}^{\infty}
{j^{2} + jk + k^{2} \over j^{2}k^{2}}\
\overbrace{\bracks{-\int_{0}^{1}\ln\pars{x}x^{j + k - 1}\,\dd x}}
^{\ds{1 \over \pars{j + k}^{2}}}
\\[5mm] = &\
-\int_{0}^{1}\ln\pars{x}
\pars{\sum_{j = 1}^{\infty}x^{j}
\sum_{k = 1}^{\infty}{x^{k} \over k^{2}} +
\sum_{j = 1}^{\infty}{x^{j} \over j}
\sum_{k = 1}^{\infty}{x^{k} \over k} +
\sum_{j = 1}^{\infty}{x^{j} \over j^{2}}
\sum_{k = 1}^{\infty}x^{k}}\,{\dd x \over x}
\\[5mm] = &\
-\int_{0}^{1}\ln\pars{x}
\bracks{2\sum_{j = 1}^{\infty}x^{j}
\sum_{k = 1}^{\infty}{x^{k} \over k^{2}} +
\pars{\sum_{j = 1}^{\infty}{x^{j} \over j}}^{2}}\,{\dd x \over x}
\label{1}\tag{1}
\end{align}</p>
<blockquote>
<p>Note that $\ds{\sum_{\ell = 1}^{\infty}{x^{\ell} \over \ell^{s}} =
\,\mrm{Li}_{s}\pars{x}}$ where $\ds{\mrm{Li}_{s}}$ is the <a href="https://en.m.wikipedia.org/wiki/Polylogarithm" rel="nofollow noreferrer">Polylogarithm Function</a>. Moreover,
$\ds{\mrm{Li}_{1}\pars{x} = -\ln\pars{1- x}}$,
$\ds{\mrm{Li}_{s + 1}\pars{z} = \int_{0}^{z}{\mrm{Li}_{s}\pars{t} \over t}\,\dd t}$ and
$\ds{\sum_{j = 1}^{\infty}x^{j} = {x \over 1 - x}}$.</p>
</blockquote>
<p>\eqref{1} becomes
\begin{align}
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\bbox[#ffd,10px]{\ds{\sum_{j = 1}^{\infty}\sum_{k = 1}^{\infty}
{j^{2} + jk + k^{2} \over j^{2}\pars{j + k}^{2}k^{2}}}} =
-2\int_{0}^{1}{\ln\pars{x}\,\mrm{Li}_{2}\pars{x} \over 1 - x}\,\dd x -
\int_{0}^{1}{\ln\pars{x}\ln^{2}\pars{1 - x} \over x}\,\dd x
\\[5mm] = &\
-2\int_{0}^{1}\overbrace{\ln\pars{1 - x} \over x}^{\ds{-\mrm{Li}_{2}'\pars{x}}}\,\
\mrm{Li}_{2}\pars{1 - x} \,\dd x -
\int_{0}^{1}{\ln\pars{x}\ln^{2}\pars{1 - x} \over x}\,\dd x
\label{2}\tag{2}
\end{align}
where we set $\ds{x \mapsto 1 - x}$ in the first integral. With the <a href="https://en.m.wikipedia.org/wiki/Polylogarithm#Dilogarithm" rel="nofollow noreferrer">Euler Reflection Formula</a>
$\ds{\mrm{Li}_{2}\pars{1 - x} = -\mrm{Li}_{2}\pars{x} + {\pi^{2} \over 6} -\ln\pars{x}\ln\pars{1 - x}}$, \eqref{2} becomes:
\begin{align}
&\bbox[#ffd,10px]{\ds{\sum_{j = 1}^{\infty}\sum_{k = 1}^{\infty}
{j^{2} + jk + k^{2} \over j^{2}\pars{j + k}^{2}k^{2}}}}
\\ = &\ -\
\overbrace{\int_{0}^{1}\totald{\mrm{Li}_{2}^{2}\pars{x}}{x} \,\dd x}
^{\ds{\pi^{4} \over 36}}\ +\
{\pi^{2} \over 3}\
\overbrace{\int_{0}^{1}\mrm{Li}_{2}'\pars{x}\,\dd x}
^{\ds{\pi^{2} \over 6}}\ +\
\overbrace{\int_{0}^{1}{\ln\pars{x}\ln^{2}\pars{1 - x} \over x}\,\dd x}^{\ds{-\,{\pi^{4} \over 180}}}
\\[5mm] = &\
\bbx{\large{\pi^{4} \over 45}}
\end{align}</p>
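<p>As a quick numerical sanity check (mine, not part of the original derivation), the truncated double sum can be compared with $\pi^{4}/45 \approx 2.1646$:</p>

```python
import math

# Truncate the double sum at N; the omitted tail is O(1/N).
N = 800
S = sum((j * j + j * k + k * k) / (j * j * (j + k) ** 2 * k * k)
        for j in range(1, N + 1) for k in range(1, N + 1))

# S approximates pi^4 / 45 = 2.1646..., up to the O(1/N) truncation error.
print(S, math.pi ** 4 / 45)
```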
|
3,881,029 | <p><span class="math-container">$R(A)$</span> is range of <span class="math-container">$A$</span> <br>
<span class="math-container">$N(A)$</span> is nullspace of <span class="math-container">$A$</span> <br>
<span class="math-container">$R(A^T)$</span> is range of <span class="math-container">$A^T$</span> <br>
<span class="math-container">$N(A^T)$</span> is nullspace of <span class="math-container">$A^T$</span> <br></p>
<p>Suppose <span class="math-container">$y \in R(A)$</span> and <span class="math-container">$x \in N(A^T)$</span>.</p>
<p>How would one go about showing that <span class="math-container">$x^Ty=0$</span> (aka <span class="math-container">$x$</span> is perpendicular to <span class="math-container">$y$</span>)?</p>
| Community | -1 | <p><strong>Hint:</strong> The range is the span of the columns.</p>
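<p>One way to expand the hint into a full argument (a sketch using only the definitions above): since <span class="math-container">$y \in R(A)$</span>, the hint gives <span class="math-container">$y = Az$</span> for some vector <span class="math-container">$z$</span>, while <span class="math-container">$x \in N(A^T)$</span> means <span class="math-container">$A^Tx = 0$</span>. Then</p>
<p><span class="math-container">$$x^Ty = x^T(Az) = (A^Tx)^Tz = 0^Tz = 0.$$</span></p>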
|
2,137,530 | <blockquote>
<p>Given $N$ trials of a die roll, where we have defined $D$ as the number of distinct outcomes, what would be the mean and standard deviation of $D$?</p>
</blockquote>
<p>If we have defined $I(k)$ as an indicator random variable which equals 1 if outcome $k$ (such as 6) appears at least once, and 0 otherwise, for $k\in\{ 1,\dots,6\}$, then by definition
$$D = \sum\limits_{k=1}^6 I(k)$$
How do the dependencies between the $I(k)$ play into the solution? (Which is the part that is tripping me up the most.)</p>
| n88k | 1,039,908 | <p>I just wanted to give another way to answer the question. Instead of partitioning by whether the number <span class="math-container">$i$</span>, <span class="math-container">$1 \leq i \leq 6$</span>, has appeared, one can consider <span class="math-container">$X_i$</span> to be the indicator random variable corresponding to whether there is an extra distinct unique value because of roll <span class="math-container">$i$</span>. Hence, <span class="math-container">$D=X_1 + ... + X_n$</span>.</p>
<p>We have <span class="math-container">$E(D)=\sum E(X_i)$</span> and <span class="math-container">$E(X_i)=\left(\frac{5}{6}\right)^{i-1}$</span>, since roll <span class="math-container">$i$</span> contributes a new distinct value if and only if none of the previous <span class="math-container">$i-1$</span> rolls takes the same value as roll <span class="math-container">$i$</span>. Hence <span class="math-container">$E(D)=\sum_{i=1}^n \left(\frac{5}{6}\right)^{i-1} = \frac{1-\left(\frac{5}{6}\right)^n}{\frac{1}{6}}$</span>.</p>
<p>As for the variance/standard deviation:</p>
<p><span class="math-container">$\begin{align}
var(D)&=E((X_1+...+X_n)^2)-E(D)^2\\
&=E(X_1^2+...+X_n^2) + 2\sum_{i<j}E(X_iX_j)-E(D)^2 \\
&=E(D)+2\sum_{i=1}^n\sum_{k=1}^{n-i}E(X_iX_{i+k})-E(D)^2
\end{align}
$</span></p>
<p>Hence this boils down to finding <span class="math-container">$E(X_iX_{i+k})$</span>. Let <span class="math-container">$A$</span> be the event that the value of roll <span class="math-container">$i$</span> differs from the value of roll <span class="math-container">$i+k$</span>. Since
<span class="math-container">$$E(X_iX_{i+k} | A)=\left(\frac{4}{6}\right)^{i-1} \left(\frac{5}{6}\right)^{k-1}$$</span>
and</p>
<p><span class="math-container">$
\begin{align}
E(X_iX_{i+k}) &= E(X_iX_{i+k} | A)P(A) + E(X_iX_{i+k} | A^c)P(A^c)\\
&= \left(\frac{4}{6}\right)^{i-1} \left(\frac{5}{6}\right)^{k-1} \left(\frac{5}{6}\right) + 0
\end{align}
$</span></p>
<p>the answer follows.</p>
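<p>For small <span class="math-container">$n$</span>, the mean and variance formulas above can be checked exactly against brute-force enumeration of all <span class="math-container">$6^n$</span> equally likely outcomes (a verification sketch, not part of the original answer):</p>

```python
from itertools import product
from fractions import Fraction

def brute_moments(n, s=6):
    """Exact E[D] and Var(D) by enumerating all s^n equally likely outcomes."""
    m1 = m2 = Fraction(0)
    for rolls in product(range(s), repeat=n):
        d = len(set(rolls))
        m1 += d
        m2 += d * d
    m1 /= s ** n
    m2 /= s ** n
    return m1, m2 - m1 ** 2

def formula_moments(n, s=6):
    """E[D] and Var(D) from the formulas derived above."""
    mean = sum(Fraction(s - 1, s) ** (i - 1) for i in range(1, n + 1))
    # E[X_i X_{i+k}] = ((s-2)/s)^(i-1) * ((s-1)/s)^(k-1) * ((s-1)/s)
    cross = sum(Fraction(s - 2, s) ** (i - 1) * Fraction(s - 1, s) ** k
                for i in range(1, n + 1) for k in range(1, n - i + 1))
    return mean, mean + 2 * cross - mean ** 2

for n in range(1, 5):
    assert brute_moments(n) == formula_moments(n)
```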
|
88,511 | <p>In version <a href="http://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn102.html">10.2</a> there is a new experimental function: <a href="http://reference.wolfram.com/language/ref/FindFormula.html"><code>FindFormula[]</code></a>.</p>
<p>I suspect that a <a href="https://en.wikipedia.org/wiki/Symbolic_regression">genetic programming algorithm (symbolic regression)</a> is behind this new feature, but I can't find any references.</p>
<p><strong>Question</strong></p>
<ul>
<li>What is behind this new function?</li>
</ul>
| Jacob Akkerboom | 4,330 | <p>The following reveals definitions</p>
<pre><code><< GeneralUtilities`
PrintDefinitions@FindFormula
</code></pre>
<p>As usual one can click the symbols to find definitions of functions "further down". It should also be noted that <code>FindFormula</code> is listed in the <a href="https://reference.wolfram.com/language/guide/MachineLearning.html">Machine Learning guide</a>, which corresponds to symbol names like <code>SymbolicMachineLearning`PackageScope`ImputArgumentsTestFindFormula</code> shown further down by <code>PrintDefinitions</code>.</p>
|
110,896 | <p>Now I have a function, say $f(k,z)=e^{-kz}(1+kz)$</p>
<p>I want to find the $n$th $\log$ derivative with respect to $z$, i.e. $(z\partial_z)^{(n)}f(k,z)$ (or $(\partial_{\ln z})^{(n)}f(k,z)$ if you like), where the $(n)$ denotes that we take the derivative $n$ times. </p>
<p>I found the answer in <a href="https://mathematica.stackexchange.com/questions/9598">this question</a> quite helpful for me to find a general expression for $\partial_z^{(n)}f(k,z)$; however, I don't know how to generalize it to $\log$ derivative case using <em>Mathematica</em>. Any suggestions?</p>
| bbgodfrey | 1,063 | <p>Consider</p>
<pre><code>logDiv[f_, n_] := Nest[z D[#, z] &, f, n]
</code></pre>
<p>which applies <code>z D[#, z] &</code> <code>n</code> times to <code>f</code>. Then, for instance,</p>
<pre><code>logDiv[k z, 2]
(* k z *)
</code></pre>
<p>or</p>
<pre><code>logDiv[Exp[-k z] (1 + k z), 2]
(* E^(-k z) k^2 z^2 (-2 + k z) *)
</code></pre>
|
3,409,068 | <p>I have to find (and prove) the infimum and supremum of the following set:</p>
<p><span class="math-container">$M_1:=\{x\in\mathbb{Q} \mid x^2 < 9\}$</span></p>
<p>On first glance, I would say:</p>
<p><span class="math-container">$\inf M_1=-3 $</span><br>
<span class="math-container">$\sup M_1=3$</span></p>
<p>Now I have to prove that these really are the infimum and supremum of the set, and that's the point where I'm having problems. According to the definition of <span class="math-container">$\inf$</span> and <span class="math-container">$\sup$</span>, this means, that <span class="math-container">$-3$</span> is the biggest lower bound and 3 is the lowest upper bound:</p>
<p><span class="math-container">$\forall x\in M_1: -3 \leq x \leq 3$</span></p>
<p>We can see, that -3 and 3 are not elements of M1, which means:</p>
<p><span class="math-container">$\forall x\in M_1:-3<x<3$</span></p>
<p>But how can I show that -3 and 3 are the <span class="math-container">$\textbf{biggest / smallest}$</span> bound? I mean, for example, what if there is a number bigger than -3 that acts like a lower bound to the set? Obviously there isn't a bigger lower bound, but how can I mathematically show it? Do you guys have any advice? Thanks in advance, and sorry for my English :D</p>
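<p>(One standard sketch: suppose some <span class="math-container">$b<3$</span> were an upper bound of <span class="math-container">$M_1$</span>. By the density of <span class="math-container">$\mathbb{Q}$</span> in <span class="math-container">$\mathbb{R}$</span> there is a rational <span class="math-container">$q$</span> with <span class="math-container">$\max(b,0)<q<3$</span>; then <span class="math-container">$q^2<9$</span>, so <span class="math-container">$q\in M_1$</span> and <span class="math-container">$q>b$</span>, contradicting that <span class="math-container">$b$</span> is an upper bound. Hence <span class="math-container">$3$</span> is the least upper bound, and the argument for <span class="math-container">$\inf M_1=-3$</span> is symmetric.)</p>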
| Eric Towers | 123,905 | <p>If <span class="math-container">$f(x) = x\ln|x|$</span> is analytic at <span class="math-container">$x = 0$</span>, you can differentiate it there. What's <span class="math-container">$f'(0)$</span>?</p>
<blockquote class="spoiler">
<p> <span class="math-container">$f'(0)$</span> is undefined. <span class="math-container">$\lim_{x \rightarrow 0} f'(x) = -\infty$</span>.</p>
</blockquote>
<p>So is it analytic at <span class="math-container">$x = 0$</span>?</p>
|
715,809 | <p>I have occasionally come across Leibniz's Law (left to right, ie the indiscernibility of identicals) written with schematic letters in the consequent, and occasionally with a bound predicate-variable taking the place of the schematic letters. What is the relevant difference between these formulations? If there is ongoing debate, any leads on relevant literature would be most helpful.</p>
| Jon | 124,068 | <p>I take it that by the schematic formulation you mean a meta-language statement such as the following:</p>
<ul>
<li>For any $r+1$-place relation symbol $R$ of the object language the sentence $\forall x_1 \ldots x_r \forall xy(x =y \rightarrow (Rx x_1 \ldots x_r \leftrightarrow Ry x_1 \ldots x_r))$ is true in every model (for the language). </li>
</ul>
<p>The most apparent difference is that the schematic formulation doesn't give you Leibniz's Law in your object language. For any relation symbol it only gives you the truth of the statement that identical objects don't differ in satisfying the symbol. </p>
<p>The second-order version, on the other hand, does provide an object level representation of the law. As a consequence one can use it as the beginning of a definition of the identity predicate, something that is not possible using the schematic version. </p>
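<p>Concretely (in standard notation, not tied to any particular source), the second-order version is the single object-language sentence
$$\forall x\,\forall y\,\big(x=y \rightarrow \forall X\,(Xx \leftrightarrow Xy)\big),$$
and taking $\forall X\,(Xx \rightarrow Xy)$ as a definition of $x=y$ is possible precisely because this is a formula of the object language, whereas the schematic version is a meta-language claim about each relation symbol separately.</p>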
<p>I haven't come across any discussion of the differences between these formulations. There is a recent paper by Andrew Bacon about paradoxes that result when naive truth theories are combined with the schematic version: </p>
<p><a href="http://www-bcf.usc.edu/~abacon/papers/Some%20problems%20for%20naive%20truth%20theory.pdf" rel="nofollow">http://www-bcf.usc.edu/~abacon/papers/Some%20problems%20for%20naive%20truth%20theory.pdf</a></p>
<p>I haven't read it entirely yet, but it may be of interest to you. </p>
|
174,655 | <p>So I have 2 lists of 10000+ lists of 3 numbers, e.g.</p>
<pre><code>{{1,2,3},{4,5,6},{7,8,9},...}
{{2,1,3},{4,5,6},{41,2,0},...}
</code></pre>
<p>Wanting a result like </p>
<pre><code>{2,...}
</code></pre>
<p>Getting some sort of list of <code>True</code>/<code>False</code> is also probably enough, like this:</p>
<pre><code>{False,True,False,...}
</code></pre>
<p>I guess I could use <code>Position</code> once I've done that.</p>
<p>I tried to use <code>Thread</code>, as below:</p>
<pre><code>Thread[{{a, b}, {c, d}, {e, f}} == {{a, b}, {d, e}, {f, e}}]
</code></pre>
<p>which gives the <code>True</code>/<code>False</code> output</p>
<pre><code>{True, {c, d} == {d, e}, {e, f} == {f, e}}
</code></pre>
<p>But as soon as there are actual numbers in place, it doesn't work:</p>
<pre><code>Thread[{{1, 2}, {2, 3}, {4, 5}} == {{1, 3}, {2, 3}, {4, 5}}]
</code></pre>
<p>Returns</p>
<pre><code>False
</code></pre>
<p>I'd really appreciate any help you could give.</p>
<p>Thanks,</p>
<p>H</p>
| Fraccalo | 40,354 | <p>One possible way of doing it:</p>
<pre><code>a = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}, {1, 2, 3}};
b = {{2, 1, 3}, {4, 5, 6}, {41, 2, 0}, {1, 2, 3}};
Position[MapThread[Equal, {a, b}], True]
</code></pre>
<p>The output reads: <code>{{2}, {4}}</code></p>
<p>In my code, it compares the elements position by position, but it doesn't "cross-check" them, i.e. it doesn't consider that element <code>n</code> of list <code>a</code> could be equal to element <code>m != n</code> of list <code>b</code>.</p>
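<p>For readers who want to cross-check the logic outside <em>Mathematica</em>, the same positionwise comparison can be sketched in plain Python (an illustrative translation, not part of the original answer):</p>

```python
a = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3]]
b = [[2, 1, 3], [4, 5, 6], [41, 2, 0], [1, 2, 3]]

# 1-based positions whose rows agree elementwise, mirroring
# Position[MapThread[Equal, {a, b}], True]
matches = [i for i, (ra, rb) in enumerate(zip(a, b), start=1) if ra == rb]
print(matches)  # [2, 4]
```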
<p>EDIT:
note that the solution provided by @Henrik Schumacher is much faster :)</p>
<pre><code>a = RandomInteger[{0, 4}, {100000, 3}];
b = RandomInteger[{0, 4}, {100000, 3}];
RepeatedTiming@Position[MapThread[Equal, {a, b}], True][[1]]
(*0.089*)
RepeatedTiming@
Position[Unitize[Subtract[a, b]].ConstantArray[1,
Dimensions[a][[2]]], 0, 1][[1]]
(*0.0057*)
</code></pre>
|
73,375 | <p>The example cells in the documentation each have a count of the cells inside their section:</p>
<pre><code> Cell[TextData[{"Basic Examples", " ", Cell["(4)", "ExampleCount"]}],
"ExampleSection", "ExampleSection"]
</code></pre>
<p>But this is static content, how exactly would this work dynamically? I'd like to make my own cell style where the cell dingbat counts the number of cells inside the cell group it contains (and updates itself dynamically of course). </p>
<p>I've looked at the stylesheet for outline-styled notebooks and then tried using the <code>Counter*</code> options, but these are for dynamic tallying, not content counting, and there's not much documentation on these esoteric front-end things like </p>
<pre><code>CounterBoxOptions->{CounterFunction:>CapitalRomanNumeral}]
</code></pre>
<p>Any help would be appreciated.</p>
<p><img src="https://i.stack.imgur.com/Yujf7.png" alt="enter image description here"></p>
| Carl Woll | 45,431 | <p>Here is a stylesheet solution that gives "Section" cells a dingbat that counts the number of "Input" cells in the section:</p>
<pre><code>SetOptions[
EvaluationNotebook[],
StyleDefinitions -> Notebook[
{
Cell[StyleData[StyleDefinitions -> "Default.nb"]],
Cell[StyleData["Input"],
CounterIncrements -> "CellCount"
],
Cell[StyleData["Notebook"],
TaggingRules -> "TotalCells" -> Dynamic[Length[Cells[CellStyle -> "Input"]]]
],
Cell[StyleData["Section"],
CellDingbat -> Cell @ BoxData @ DynamicBox[
RowBox[{
"(",
ToBoxes[
AbsoluteCurrentValue[ParentCell[EvaluationCell[]], {TaggingRules, "NextSection"}] -
AbsoluteCurrentValue[ParentCell[EvaluationCell[]], {TaggingRules, "ThisSection"}]
],
")"
}]
],
TaggingRules -> {
"NextSection" -> Dynamic @ Replace[
NextCell[CellStyle -> "Section"],
{
None :> CurrentValue[EvaluationNotebook[], {TaggingRules, "TotalCells"}],
o_ :> CurrentValue[o, {"CounterValue", "CellCount"}]
}
],
"ThisSection" -> Dynamic[CurrentValue[EvaluationCell[], {"CounterValue", "CellCount"}]]}],
},
StyleDefinitions -> "PrivateStylesheetFormatting.nb"
]
]
</code></pre>
<p>Note that all the dynamic content might cause the notebook to be laggy.</p>
|
3,657,026 | <p>I need to prove the following statement using mathematical induction (base case <span class="math-container">$P(1)$</span> and inductive step <span class="math-container">$P(k+1)$</span>):</p>
<p><span class="math-container">$$
1^2 + 2^2 + \dots + n^2 = \frac{1}{6}n(n + 1)(2n + 1)
$$</span></p>
<p>Proving <span class="math-container">$P(1)$</span> gave no difficulties, however I was stuck with <span class="math-container">$P(k+1)$</span>, I've reached this point:</p>
<p><span class="math-container">$$
1^2 + \dots + (k+1)^2 = 1^2 + \dots + k^2 + (k+1)^2 = \\ \frac{1}{6}k(k+1)(2k+1) + (k+1)^2
$$</span></p>
<p>I've checked answer from the exercise book, the next step would be:</p>
<p><span class="math-container">$$
= \frac{1}{6}(k+1)(k(2k+1)+6(k+1))
$$</span></p>
<p>How it was converted like that? Could you provide some explanation?</p>
<p>Thank you in advance</p>
| Isaac Ren | 415,180 | <p>If you look carefully at <span class="math-container">$\frac16k(k+1)(2k+1)+(k+1)^2$</span>, you'll see that it's a sum of two terms, <span class="math-container">$\frac16k(k+1)(2k+1)$</span> and <span class="math-container">$(k+1)^2$</span>. Both of these terms have a factor <span class="math-container">$(k+1)$</span>, so we can factor it out:
<span class="math-container">$$=(k+1)\left(\frac16k(2k+1)+(k+1)\right)$$</span>
Next, we can move the <span class="math-container">$\frac16$</span> as follows:
<span class="math-container">$$=\frac16(k+1)\left(k(2k+1)+6(k+1)\right).$$</span>
And there you have it!</p>
<p>In general, in these situations, the trick is to factor or develop expressions in the right way. If you can't figure out how to go from expression A to expression B, maybe going from B to A can be easier.</p>
|
2,758,965 | <p>Show $\log(1-x)=-\sum_{n=1}^{\infty}\frac{x^n}{n}\,\forall x\in(-1,1)$. Which value does $\sum_{n=1}^{\infty}\frac{(-1)^n}{n}$ take?</p>
<hr>
<p>Now because I skipped forward in my (personal) textbook I know that I could tackle this using knowledge of the Maclaurin/Taylor series. However, it was not covered in the lecture (yet) and my mind is fixated on using Maclaurin/Taylor (which I'm not allowed to use)! Can anybody show me an alternate approach that I will probably feel very stupid for not seeing?</p>
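<p>Whichever proof route is allowed, the claim is easy to sanity-check numerically (a quick check, not a proof):</p>

```python
import math

def partial_sum(x, terms=200000):
    """Truncation of -sum_{n>=1} x^n / n."""
    return -sum(x ** k / k for k in range(1, terms + 1))

# Inside (-1, 1) the truncation matches log(1 - x) closely.
for x in (-0.9, -0.5, 0.5, 0.9):
    assert abs(partial_sum(x) - math.log(1 - x)) < 1e-9

# At x = -1 the alternating series still converges, consistent with
# sum_{n>=1} (-1)^n / n = -log 2.
assert abs(partial_sum(-1.0) - math.log(2)) < 1e-4
```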
| dxiv | 291,201 | <p>Alt. hint: by AM-GM:</p>
<p>$$
\frac{1}{a}+\frac{1}{bc}=\frac{1}{a}+\frac{a+b+c}{bc} = \frac{1}{a} + \frac{1}{b}+\frac{1}{c}+\frac{a}{bc}\ge 4 \sqrt[4]{\frac{1}{b^2c^2}} = 4 \sqrt{\frac{1}{bc}}
$$</p>
|
2,758,965 | <p>Show $\log(1-x)=-\sum_{n=1}^{\infty}\frac{x^n}{n}\,\forall x\in(-1,1)$. Which value does $\sum_{n=1}^{\infty}\frac{(-1)^n}{n}$ take?</p>
<hr>
<p>Now because I skipped forward in my (personal) textbook I know that I could tackle this using knowledge of the Maclaurin/Taylor series. However, it was not covered in the lecture (yet) and my mind is fixated on using Maclaurin/Taylor (which I'm not allowed to use)! Can anybody show me an alternate approach that I will probably feel very stupid for not seeing?</p>
| Macavity | 58,320 | <p>Another way is to use a more general Hölder's inequality:
$$LHS \geqslant \left( \frac1{\sqrt[3]{abc}}+\frac1{\sqrt[3]{a^2b^2c^2}}\right)^3\geqslant (3+9)^3$$</p>
|
411,868 | <p>Laver showed in 1995 that the period of the first row of certain <a href="https://en.wikipedia.org/wiki/Laver_table" rel="noreferrer">Laver tables</a> is unbounded, assuming that a rank-into-rank cardinal exists.</p>
<p>The most accessible proof of his result that I was able to find is in chapter 12 of Patrick Dehornoy's Braids and Self-Distributivity (<a href="https://link.springer.com/book/10.1007/978-3-0348-8442-6" rel="noreferrer">Springer 2000</a>). The proof is quite technical. Laver defines an algebra on the models of set-theory - technically, on certain elementary embeddings of huge sets. He defines two operations on such elementary embeddings, quotients out certain large infinities to get finite results, and investigates their properties.</p>
<p>My question is: why should elementary embeddings of ZFC have anything to do with the periodicity of these finite tables?</p>
<p>More generally, is there some guiding intuition I can use to make sense of Laver's very complex construction? I find it difficult to understand how Laver could have plowed through this weird and intricate technical construction without having some reason to think it would lead to some kind of specific finitary result.</p>
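<p>For concreteness, the finite side of the story needs no set theory: <span class="math-container">$A_n$</span> lives on <span class="math-container">$\{1,\dots,2^n\}$</span> and is determined by <span class="math-container">$p*1 = p+1 \pmod{2^n}$</span> together with left self-distributivity, so the first-row periods in question can be computed directly (a small illustrative script, not taken from the sources above):</p>

```python
def laver_table(n):
    """Build the Laver table A_n on {1, ..., 2^n}: the unique
    left-distributive table with p*1 = p+1 (mod 2^n)."""
    N = 2 ** n
    T = [[0] * (N + 1) for _ in range(N + 1)]  # 1-indexed
    for q in range(1, N + 1):
        T[N][q] = q                # row 2^n satisfies 2^n * q = q
    for p in range(N - 1, 0, -1):  # entries of row p all exceed p
        T[p][1] = p + 1
        for q in range(2, N + 1):
            # p*q = p*((q-1)*1) = (p*(q-1)) * (p*1), by left distributivity
            T[p][q] = T[T[p][q - 1]][p + 1]
    return T

def first_row_period(n):
    """Smallest power of 2 that is a period of row 1 of A_n."""
    row = laver_table(n)[1][1:]
    d = 1
    while not all(row[i] == row[i % d] for i in range(len(row))):
        d *= 2
    return d

print([first_row_period(n) for n in range(1, 6)])  # [1, 2, 4, 4, 8]
```

Laver's theorem is that, under the rank-into-rank hypothesis, this sequence of periods is unbounded; no proof free of large-cardinal assumptions is known.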
| Joseph Van Name | 22,277 | <p>Richard Laver was a set theorist throughout his entire career, so he was originally motivated to investigate very large cardinals and the resulting finite algebras from a set theoretic perspective, but today it is probably best to think of the Laver tables <span class="math-container">$A_{n}$</span> as algebraic objects within a much larger class of algebraic objects. With this algebraic perspective, large cardinals are a source of examples of such algebraic objects, and large cardinals also provide the consistency strength to prove more theorems.</p>
<p><strong>Red herrings</strong></p>
<p>As one looks more closely at the Laver tables and similar structures, one observes that some ideas are essential while others are non-essential or just special cases of a more general idea including the following notions.</p>
<ol>
<li><p>The Laver tables, algebras generated by 1 elementary embedding-The Laver tables are simply the one-generator finite algebraic structures within much broader classes of algebraic structures such as the nilpotent left-distributive algebras. The Laver tables are analogous to the finite cyclic groups while the broader class of algebraic structures is analogous to the class of all groups. You get a more accurate analogy by relating the Laver tables to the cyclic <span class="math-container">$p$</span>-groups while the broader class of algebraic structures is analogous to the class of all <span class="math-container">$p$</span>-groups and related objects such as pro-<span class="math-container">$p$</span>-groups (in this analogy, <span class="math-container">$\mathcal{E}_{\lambda}$</span> corresponds to a dense <span class="math-container">$G_{\delta}$</span>-subset of a pro-<span class="math-container">$p$</span>-group). And yes, these generalizations of Laver tables have the same kind of intricate structure and periodicity that appears with the Laver tables.</p>
</li>
<li><p>Set theory-While the Laver tables originally arose from set theory, they along with similar algebraic structures can easily be studied without any reference to large cardinals. Furthermore, some of these structures similar to Laver tables cannot arise from large cardinals.</p>
</li>
<li><p>Composition-The composition of elementary embeddings and a more general composition like operation in the Laver tables should be thought of as operations that are constructed from the self-distributive application operation.</p>
</li>
<li><p>Rank-into-rank embeddings-One can construct self-distributive algebras such as the Laver tables from <span class="math-container">$n$</span>-huge cardinals rather than rank-into-rank cardinals.</p>
</li>
<li><p>A linearly ordered set of critical points-The notion of a critical point is essential for Laver tables, but it can be generalized quite a bit. The direct product of two Laver tables <span class="math-container">$A_{5}\times A_{6}$</span> will have a partially ordered but not linearly ordered set of critical points. In algebra, one would typically like to study classes of algebraic structures that are closed under taking products (or at least finite product), quotients, and substructures. One should therefore consider algebraic structures with a partially ordered set of critical points.</p>
</li>
</ol>
<p><strong>Starting from nilpotence</strong></p>
<p>The nilpotent left-distributive algebras are algebraic structures that resemble the algebras of elementary embeddings and Laver tables. The notion of a nilpotent left-distributive algebras is very easy to define, but it precisely captures the notion of a critical point and composition operation as well as other notions related to Laver table for finite algebras (infinite algebras will require a different axiomatization though, but that is a completely open research direction). I admit that very little is known about nilpotent left-distributive algebras (these structures will become more understandable in the future when people write papers on these structures), but they so far seem to be one of the most important classes of algebraic structures that contains the Laver tables but does not contains objects like non-trivial quandles which are quite different from the Laver tables.</p>
<p>If <span class="math-container">$(X,*)$</span> is a left-distributive algebra, then define terms <span class="math-container">$x^{[n]},x_{[n]}$</span> for <span class="math-container">$n\geq 1$</span> recursively by letting <span class="math-container">$x^{[1]}=x_{[1]}=x$</span> and <span class="math-container">$x^{[n+1]}=x*x^{[n]},x_{[n+1]}=x_{[n]}*x$</span>.</p>
<p>Suppose that <span class="math-container">$(X,*)$</span> is a left-distributive algebra. We say that an element <span class="math-container">$x\in X$</span> is a left-identity if <span class="math-container">$x*y=y$</span> for each <span class="math-container">$y\in X$</span>. We say that a subset <span class="math-container">$L\subseteq X$</span> is a left-ideal if <span class="math-container">$x*y\in L$</span> whenever <span class="math-container">$y\in L$</span>. Let <span class="math-container">$\mathrm{Li}(X)$</span> be the collection of all left-identities in <span class="math-container">$(X,*)$</span>. Then we say that <span class="math-container">$(X,*)$</span> is a nilpotent left-distributive algebra if</p>
<ol>
<li><p><span class="math-container">$\mathrm{Li}(X)$</span> is a left-ideal, and</p>
</li>
<li><p>for each <span class="math-container">$x\in X$</span>, there exists an <span class="math-container">$n$</span> where <span class="math-container">$x^{[n]}\in\mathrm{Li}(X)$</span>.</p>
</li>
</ol>
<p>The algebra <span class="math-container">$\mathcal{E}_{\lambda}/\equiv^{\gamma}$</span> is nilpotent because of the following prominent result:</p>
<p>Theorem (Kunen inconsistency): Suppose that <span class="math-container">$j:V_{\alpha}\rightarrow V_{\alpha}$</span> is a non-trivial elementary embedding. Let <span class="math-container">$\lambda=\lim_{n\in\omega}j^{n}(\mathrm{crit}(j))$</span>. Then <span class="math-container">$\alpha=\lambda$</span> or <span class="math-container">$\alpha=\lambda+1$</span>.</p>
<p>Let us now obtain critical points and a composition operation from nilpotent left-distributive algebras.</p>
<p>If <span class="math-container">$(X,*)$</span> is a left-distributive algebra, then for each <span class="math-container">$n\in\omega$</span>, define an operation <span class="math-container">$*_{n}$</span> recursively by letting <span class="math-container">$x*_{0}y=y,x*_{n+1}y=x*_{n}(x*y)=x*(x*_{n}y)$</span>. Each <span class="math-container">$*_{n}$</span> is self-distributive. If <span class="math-container">$(X,*)$</span> is a nilpotent-self-distributive algebra, then the sequence <span class="math-container">$(x*_{n}y)_{n}$</span> is eventually constant for each <span class="math-container">$x,y\in X$</span>, so let <span class="math-container">$x*_{\infty}y=\lim_{n\rightarrow\infty}x*_{n}y$</span>. Define a relation <span class="math-container">$\preceq$</span> on <span class="math-container">$(X,*)$</span> by setting <span class="math-container">$x\preceq y$</span> if and only if <span class="math-container">$x*_{\infty}y\in\mathrm{Li}(X)$</span>. Then <span class="math-container">$\preceq$</span> is a pre-ordering. Let <span class="math-container">$\simeq$</span> be the equivalence relation defined by letting <span class="math-container">$x\simeq y$</span> iff <span class="math-container">$x\preceq y\preceq x$</span>.
Let <span class="math-container">$\mathrm{crit}[X]=X/\simeq$</span>, and let <span class="math-container">$\mathrm{crit}(x)$</span> denote the equivalence class containing <span class="math-container">$x$</span> whenever <span class="math-container">$x\in X$</span>. Then <span class="math-container">$\mathrm{crit}[X]$</span> is partially ordered by letting <span class="math-container">$\mathrm{crit}(x)\leq\mathrm{crit}(y)$</span> if and only if <span class="math-container">$x\preceq y$</span>. One can define an implication operation on <span class="math-container">$\mathrm{crit}[X]$</span> by letting
<span class="math-container">$\mathrm{crit}(x)\rightarrow\mathrm{crit}(y)=\mathrm{crit}(x*_{\infty}y).$</span></p>
<p>Suppose that <span class="math-container">$X$</span> is a self-distributive algebra where <span class="math-container">$\mathrm{Li}(X)$</span> is a non-empty left-ideal. Suppose that <span class="math-container">$L$</span> is a poset. Then we say that a mapping <span class="math-container">$\phi:X\rightarrow L$</span> is a critical point operator if it satisfies the following conditions:</p>
<p>i. <span class="math-container">$\phi(x)\leq\phi(y)\leftrightarrow\phi(x)\leq\phi(x*y)$</span>.</p>
<p>ii. <span class="math-container">$\phi(y)\leq\phi(x*y)$</span>.</p>
<p>iii. <span class="math-container">$\phi(y)\leq\phi(z)\rightarrow\phi(x*y)\leq\phi(x*z)$</span>.</p>
<p>iv. <span class="math-container">$\phi(x)=1$</span> if and only if <span class="math-container">$x\in\mathrm{Li}(X)$</span>.</p>
<p>v. <span class="math-container">$\phi(x)=\phi(x*x)$</span> if and only if <span class="math-container">$x\in\mathrm{Li}(X).$</span></p>
<p>vi. For all <span class="math-container">$x\in X$</span>, there is a <span class="math-container">$c\in X$</span> with <span class="math-container">$c*c\in\mathrm{Li}(X)$</span> and
<span class="math-container">$\phi(x)=\phi(c)$</span>.</p>
<p>If <span class="math-container">$X$</span> is a nilpotent LD-system, then the mapping <span class="math-container">$\mathrm{crit}:X\rightarrow\mathrm{crit}[X]$</span> is a critical point operator.</p>
<p>Proposition: Suppose that <span class="math-container">$X$</span> is a finite left distributive algebra, and <span class="math-container">$\phi:X\rightarrow L$</span> is a critical point operator. Then <span class="math-container">$(X,*)$</span> is nilpotent, and <span class="math-container">$\phi(x)\leq\phi(y)$</span> if and only if <span class="math-container">$\mathrm{crit}(x)\leq\mathrm{crit}(y)$</span></p>
<p>The above proposition shows that the finite nilpotent left-distributive algebras are just the finite left-distributive algebras with a sensible (though partially ordered) notion of critical points. One can obtain nilpotent self-distributive algebras from rank-into-rank embeddings without even invoking Kunen's inconsistency result.</p>
<p>If <span class="math-container">$X$</span> is a nilpotent self-distributive algebra, and <span class="math-container">$\alpha\in\mathrm{crit}[X]$</span>, then there is some <span class="math-container">$c\in X$</span> with <span class="math-container">$c*c\in\mathrm{Li}(X)$</span> and <span class="math-container">$\mathrm{crit}(c)=\alpha.$</span> Define a congruence <span class="math-container">$\equiv^{\alpha}$</span> on <span class="math-container">$X$</span> by letting
<span class="math-container">$x\equiv^{\alpha}y$</span> if and only if <span class="math-container">$c*x=c*y$</span>.</p>
<p>The nilpotent left-distributive algebras can also be endowed with a composition operation in some sense.</p>
<p>An LD-monoid is an algebraic structure <span class="math-container">$(X,*,\circ,1)$</span> where <span class="math-container">$(X,*)$</span> is an LD-system, <span class="math-container">$(X,\circ,1)$</span> is a monoid, and</p>
<ol>
<li><p><span class="math-container">$x*1=1,1*x=x$</span>,</p>
</li>
<li><p><span class="math-container">$x\circ y=(x*y)\circ x$</span>,</p>
</li>
<li><p><span class="math-container">$x*(y\circ z)=(x*y)\circ(x*z)$</span>,</p>
</li>
<li><p><span class="math-container">$(x\circ y)*z=x*(y*z)$</span>.</p>
</li>
</ol>
<p>A nilpotent LD-monoid is an LD-monoid <span class="math-container">$(X,*,\circ,1)$</span> where <span class="math-container">$(X,*)$</span> is nilpotent and <span class="math-container">$\text{Li}(X)=\{1\}$</span>.</p>
<p>If <span class="math-container">$(X,*,\circ,1)$</span> is a nilpotent LD-monoid, then <span class="math-container">$(\mathrm{crit}[X],\rightarrow,\wedge,1)$</span> is a Heyting semilattice where <span class="math-container">$\mathrm{crit}(x)\wedge\mathrm{crit}(y)=\mathrm{crit}(x\circ y)$</span>.</p>
<p>Suppose that <span class="math-container">$(X,*)$</span> is a nilpotent left-distributive algebra where <span class="math-container">$\text{Li}(X)$</span> is a non-empty left-ideal.</p>
<p>Let <span class="math-container">$\simeq$</span> be the smallest equivalence relation on <span class="math-container">$\bigcup_{n\in\omega}X^{n}$</span> that satisfies the following conditions whenever <span class="math-container">$m\geq 0,n\geq 0$</span>, and <span class="math-container">$x_{1},\dots,x_{m},y_{1},\dots,y_{n}\in X$</span>:</p>
<p>i. <span class="math-container">$(x_{1},\dots,x_{m},a,y_{1},\dots,y_{n})\simeq(x_{1},\dots,x_{m},y_{1},\dots,y_{n})$</span>
whenever <span class="math-container">$a\in\mathrm{Li}(X)$</span>.</p>
<p>ii. <span class="math-container">$(x_{1},\dots,x_{m},a,b,y_{1},\dots,y_{n})\simeq(x_{1},\dots,x_{m},a*b,a,y_{1},\dots,y_{n})$</span> whenever <span class="math-container">$a,b\in X$</span></p>
<p>Let <span class="math-container">$\text{LDM}(X)=\bigcup_{n\in\omega}X^{n}/\simeq$</span>. We shall let <span class="math-container">$[x_{1},\dots,x_{m}]$</span> denote the equivalence class containing
<span class="math-container">$(x_{1},\dots,x_{m})$</span>. Then <span class="math-container">$\text{LDM}(X)$</span> is a LD-monoid with operations defined by
<span class="math-container">$[x_{1},\dots,x_{m}]\circ[y_{1},\dots,y_{n}]=[x_{1},\dots,x_{m},y_{1},\dots,y_{n}]$</span> and <span class="math-container">$[]*[y_{1},\dots,y_{n}]=[y_{1},\dots,y_{n}]$</span> and
<span class="math-container">$[x_{1},\dots,x_{m+1}]*[y_{1},\dots,y_{n}]=
[x_{1},\dots,x_{m}]*[x_{m+1}*y_{1},\dots,x_{m+1}*y_{n}]$</span>.</p>
<p>If <span class="math-container">$(X,*)$</span> is a nilpotent LD-system, then <span class="math-container">$\text{LDM}(X)$</span> is a nilpotent LD-monoid, and the mapping <span class="math-container">$e:X\rightarrow\text{LDM}(X)$</span> defined by <span class="math-container">$e(x)=[x]$</span> is a homomorphism. This is therefore a construction that allows one to obtain a composition operation from the self-distributive operation.</p>
<p>Proposition: A finite left-distributive algebra <span class="math-container">$(X,*)$</span> where <span class="math-container">$\text{Li}(X)$</span> is a left-ideal is nilpotent if and only if <span class="math-container">$\text{LDM}(X)$</span> is finite.</p>
<p>The nilpotent left-distributive algebras generated by a single element are, in a sense, just the Laver tables.</p>
<p>Proposition: Suppose that <span class="math-container">$X$</span> is a finite monogenic left-distributive algebra such that <span class="math-container">$\mathrm{Li}(X)\neq\emptyset$</span>. Then <span class="math-container">$X\simeq A_{n}$</span> for some <span class="math-container">$n$</span>.</p>
<p>Proposition: The Laver tables <span class="math-container">$A_{n}$</span> are up-to-isomorphism, the only finite monogenic nilpotent left-distributive algebras.</p>
<p>Theorem: Suppose that <span class="math-container">$X$</span> is a monogenic nilpotent left-distributive algebra. Then either <span class="math-container">$X$</span> is isomorphic to a Laver table <span class="math-container">$A_{n}$</span> for some <span class="math-container">$n$</span>, or <span class="math-container">$X$</span> is infinite and for all <span class="math-container">$n$</span>, we have <span class="math-container">$X/\equiv^{\alpha_{n}}\simeq A_{n}$</span> where
<span class="math-container">$\alpha_{n}=\mathrm{crit}(x_{[2^{n}]})$</span>.</p>
<p>Theorem: Suppose that there exists a rank-into-rank cardinal. Then the Laver tables <span class="math-container">$A_{n}$</span> are the only nilpotent monogenic left-distributive algebras.</p>
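<p>For readers who want to experiment: the finite Laver tables described above are directly computable from the relations <span class="math-container">$p*1=p+1 \pmod{2^n}$</span> and <span class="math-container">$p*(q+1)=(p*q)*(p+1)$</span>, the latter being the instance <span class="math-container">$p*(q*1)=(p*q)*(p*1)$</span> of left self-distributivity. The following Python sketch is my own addition, not part of the text above; the function name is arbitrary.</p>

```python
from functools import lru_cache

def laver_table(n):
    """Compute the multiplication table of the Laver table A_n on {1, ..., 2^n}.

    Uses the defining relations p * 1 = p + 1 (mod 2^n) and the instance
    p * (q + 1) = (p * q) * (p + 1) of left self-distributivity.
    """
    N = 2 ** n

    @lru_cache(maxsize=None)
    def star(p, q):
        if q == 1:
            return p % N + 1          # p * 1 = p + 1, with 2^n * 1 = 1
        return star(star(p, q - 1), p % N + 1)

    return {(p, q): star(p, q) for p in range(1, N + 1) for q in range(1, N + 1)}
```

<p>For example, <code>laver_table(2)</code> reproduces the familiar four-element table, and left self-distributivity can be checked exhaustively for small <span class="math-container">$n$</span>.</p>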
<p><strong>Laver-like algebras</strong></p>
<p>For the order of operations, the implied parentheses are grouped on the left so that <span class="math-container">$a*b*c*d=((a*b)*c)*d$</span>.</p>
<p>A left-distributive algebra <span class="math-container">$(X,*)$</span> is Laver-like precisely when</p>
<ol>
<li><p><span class="math-container">$\mathrm{Li}(X)$</span> is a left-ideal and</p>
</li>
<li><p>whenever <span class="math-container">$x_{n}\in X$</span> for each <span class="math-container">$n\in\omega$</span>, there exists some <span class="math-container">$N$</span> where
<span class="math-container">$x_{0}*\dots*x_{N}\in\mathrm{Li}(X)$</span>.</p>
</li>
</ol>
<p>Every Laver-like algebra is nilpotent, and if <span class="math-container">$(X,*)$</span> is Laver-like, then
<span class="math-container">$\mathrm{crit}[X]$</span> is well-ordered. Every quotient algebra <span class="math-container">$\mathcal{E}_{\lambda}/\equiv^{\gamma}$</span> of rank-into-rank embeddings is Laver-like (this is the Laver-Steel theorem).</p>
<p>Theorem: Every Laver-like algebra generated by a single element is isomorphic to a Laver table <span class="math-container">$A_{n}$</span>.</p>
<p>Proposition: If <span class="math-container">$(X,*)$</span> is a reduced Laver-like algebra, then the mapping <span class="math-container">$e_{X}:X\rightarrow\text{LDM}(X)$</span> is an isomorphism (so there exists some composition operation <span class="math-container">$\circ$</span> such that <span class="math-container">$(X,*,\circ,1)$</span> is an LD-monoid).</p>
<p>The Laver-like algebras therefore more closely resemble the algebras of elementary embeddings, but there are still reduced Laver-like algebras that cannot arise from elementary embeddings.</p>
<p><strong>Construction of new nilpotent left-distributive algebras from old ones</strong></p>
<p>The construction of the Laver tables is actually a special case of a technique for constructing new nilpotent self-distributive algebras from old ones.</p>
<p>While there are plenty of ways to construct new nilpotent left-distributive algebras from old ones, it is most fruitful to construct new nilpotent left-distributive algebras that have the same number of generators as the old algebras but a larger poset of critical points (more specifically, if <span class="math-container">$Y$</span> is the new left-distributive algebra and <span class="math-container">$X$</span> is the old one, then <span class="math-container">$\mathrm{crit}[Y]$</span> is isomorphic to <span class="math-container">$\mathrm{crit}[X]\cup\{\mu\}$</span> where <span class="math-container">$\mu$</span> is a new element defined by letting <span class="math-container">$\mu>\gamma$</span> for each <span class="math-container">$\gamma\in\mathrm{crit}[X]$</span>). The best way to achieve this is to begin with a finite left-distributive algebra <span class="math-container">$(X,*)$</span> that is generated by a system <span class="math-container">$(x_{a})_{a\in A}$</span> and obtain a new finite left-distributive algebra <span class="math-container">$(Y,*)$</span> that is generated by <span class="math-container">$(y_{a})_{a\in A}$</span>, where <span class="math-container">$\mathrm{crit}[Y]\setminus\{1\}$</span> has a maximum element <span class="math-container">$\gamma$</span>, and where there is a (necessarily surjective) homomorphism <span class="math-container">$\phi:Y\rightarrow X$</span> with <span class="math-container">$\ker(\phi)=(\equiv^{\gamma})$</span> and <span class="math-container">$\phi(y_{a})=x_{a}$</span> for each <span class="math-container">$a\in A$</span>. Observe that in this case, <span class="math-container">$X$</span> is isomorphic to a subalgebra <span class="math-container">$\{c*y\mid y\in Y\}$</span> of <span class="math-container">$Y$</span> where <span class="math-container">$\mathrm{crit}(c)=\gamma$</span>.</p>
<p>A way to build the algebra <span class="math-container">$Y$</span> from <span class="math-container">$X$</span> algorithmically is to start off with <span class="math-container">$X$</span> and then repeatedly construct new algebras <span class="math-container">$Z$</span> that contain <span class="math-container">$X$</span> as a subalgebra but where <span class="math-container">$Z$</span> is generated by <span class="math-container">$X\cup\{r\}$</span> for some element <span class="math-container">$r\in Z$</span> and where <span class="math-container">$Z\setminus\{r\}$</span> is a subalgebra of <span class="math-container">$Z$</span> that can be written as a sub-direct product of previously constructed algebras.</p>
<p>When one applies this technique of extending algebras to larger algebras one element at a time, one goes from <span class="math-container">$A_{n}$</span> to <span class="math-container">$A_{n+1}$</span> by traversing through all the intermediate algebras <span class="math-container">$\{x\in A_{n+1}\mid x\geq r\}$</span> where <span class="math-container">$1\leq r\leq 2^{n}+1$</span>.</p>
|
1,743,542 | <p>I'm looking for a continuous random variable with the following properties</p>
<ul>
<li>It is not bounded towards $+\infty$.</li>
<li>The expected value of the <em>maximum</em> of x-many draws out of that random variable has a closed-form solution.</li>
</ul>
<p>The more standard and well-known it is, the better. I have no idea how to perform this search. The first property is immediately obvious. I've tried looking at the Normal and Pareto distributions, computing</p>
<p>$$\int x F(j)^{x-1} F'(j) j dj$$</p>
<p>where $F(j)$ is the CDF of the RV, and $x$ denotes the number of draws. In both cases, this integral became quite messy (See <a href="https://math.stackexchange.com/questions/1743535/expected-maximum-of-pareto">here</a> for a question regarding Pareto).</p>
<p>What is a recommended way of finding such a RV? Is there perhaps an obvious candidate?</p>
| user21820 | 21,820 | <p>For a distribution $X$ with cumulative density function $F$, the probability that $n$ variables independently drawn from $X$ are all at most $r$ would be $F(r)^n$. That gives you the cumulative density function for the maximum of those $n$ variables, and from there you can get everything you want.</p>
<p>Now the exponential distribution is nice because it's using the exponential function, which remains like itself under plenty of operations, but many other distributions should work for your purpose too.</p>
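<p>To make this concrete with an example not in the original answer: for $n$ i.i.d. Exponential(1) draws, the maximum has CDF $(1-e^{-x})^n$, and its expectation has the closed form $H_n=\sum_{k=1}^n 1/k$. A quick numerical sanity check in Python (my own sketch; it uses the identity $E[M]=\int_0^\infty(1-F(x)^n)\,dx$ for a nonnegative variable):</p>

```python
import math

def expected_max_exponential(n, upper=60.0, steps=100_000):
    """Numerically evaluate E[max of n i.i.d. Exp(1) draws] via
    E[M] = integral from 0 to infinity of (1 - F(x)^n) dx, F(x) = 1 - e^(-x)."""
    h = upper / steps
    total = 0.0
    # trapezoidal rule; the integrand decays like n*e^(-x), so upper=60 is ample
    for i in range(steps + 1):
        x = i * h
        val = 1.0 - (1.0 - math.exp(-x)) ** n
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * val
    return total * h

def harmonic(n):
    """Closed form for the expected maximum: H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))
```

<p>The agreement between the integral and $H_n$ is what makes the exponential such a convenient answer to the question.</p>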
|
211,623 | <p>Let $G$ be a (finite) group, and $a, b \in G$ be any two elements. Consider the sequence defined by
\begin{eqnarray*}
s_0 &=& a, \\
s_1 &=& b, \text{and} \\
s_{n+2} &=& s_{n+1} s_{n} \text{ for $n \geq 0$}.
\end{eqnarray*}
This sequence is what we get when we replace the letters in the algae L-system (cf: <a href="https://en.wikipedia.org/wiki/L-system#Example_1:_Algae" rel="nofollow">https://en.wikipedia.org/wiki/L-system#Example_1:_Algae</a>) with the two elements of our group. As $G$ is finite, there are finitely many pairs $(s_{n+1}, s_{n})$, and hence the sequence will eventually start repeating. Do we know more about this sequence $(s_n)$? Like, when is it periodic?</p>
| Joonas Ilmavirta | 55,893 | <p>Here are some observations about the sequence when $a$ and $b$ commute.</p>
<p>If $a=e$, then $s_n=b^{F_n}$, where $F_n$ is the $n$th Fibonacci number.
Let $|b|$ denote the order of $b$; we are interested in Fibonacci numbers modulo $|b|$.
The Fibonacci numbers are periodic modulo any finite number, so the sequence $s_n$ is periodic.
The period is the <a href="https://en.wikipedia.org/wiki/Pisano_period" rel="nofollow">Pisano period</a> $\pi(|b|)$.</p>
<p>If $a$ commutes with $b$, then $s_n=b^{F_n}a^{F_{n-1}}$ (with the natural convention that $F_{-1}=1$).
Therefore $s_n$ is a (commutative) product of two periodic sequences and therefore periodic.</p>
<p>If $a$ and $b$ are powers of the same element, we get again the Fibonacci numbers but with different initial conditions than $F_0=0$ and $F_1=1$.
Again, we get periodicity.</p>
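<p>For concreteness (my own addition, not part of the answer above): the Pisano period can be computed by iterating Fibonacci pairs modulo $m$ until the initial pair $(0,1)$ recurs. A Python sketch, valid for $m\ge 2$:</p>

```python
def pisano_period(m):
    """Period pi(m) of the Fibonacci sequence modulo m (assumes m >= 2)."""
    a, b = 0, 1
    for n in range(1, 6 * m + 1):   # pi(m) <= 6m is a known bound
        a, b = b, (a + b) % m
        if (a, b) == (0, 1):
            return n
    raise ValueError("period not found; is m >= 2?")
```

<p>So, for instance, when $a=e$ and $b$ has order $m$, the sequence $s_n=b^{F_n}$ repeats with period dividing <code>pisano_period(m)</code>.</p>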
|
1,977,031 | <p>How would I parameterise this curve in 3D?
I am confused since the diagrams deal with three variables in total – should I use complex numbers? I'm only used to diagrams with two variables and haven't encountered a problem with three like this.</p>
| Jyrki Lahtonen | 11,619 | <p>The projection in the $xy$-plane looks like an <a href="https://en.wikipedia.org/wiki/Archimedean_spiral" rel="nofollow noreferrer">Archimedean spiral</a>. Except that everything in the $y$-direction is doubled. The formulas
$$
x=t \cos t,\qquad y=2t\sin t
$$
match with the first figure perfectly. The given points correspond to $t=9\pi/2$,
$t=5\pi$, $t=11\pi/2$ and $t=6\pi$ - all in the third revolution $t\in[4\pi,6\pi]$.</p>
<p>The two latter figures give the impression that the curve lies on the
elliptical cone
$$
(3x)^2+(3y/2)^2=(\pi z)^2.
$$
Plugging in the first two equations gives
$$
(3x)^2+(3y/2)^2=9t^2(\cos^2t+\sin^2t)=9t^2,
$$
so we can solve that $z=3t/\pi$.</p>
<p>Here's a 3D view of that parametrized curve by Mathematica</p>
<p><a href="https://i.stack.imgur.com/epARC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/epARC.png" alt="enter image description here"></a></p>
<p>together with a view from the side</p>
<p><a href="https://i.stack.imgur.com/rGsfX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rGsfX.png" alt="enter image description here"></a></p>
<hr>
<p>How to? The Archimedean spirals (as well as the logarithmic spirals) occur in all books about polar coordinates. I had a bit of luck spotting that the four points you have fit on an Archimedean spiral, if you stretch it by a factor of two in the direction of $y$-axis. The latter two sketches give that we should always have $z\ge0, |x|\le \pi z/3, |y|\le2\pi z/3$, also implying that everything in the $y$-direction is stretched by a factor of two. Going from these data points to a curve on a cone is just 3D-imagination. Anyway, here is the curve
$$
\left\{\begin{array}{cl}x&=t\cos t,\\ y&=2t\sin t,\\ z&=3t/\pi\end{array}\right.
$$
one more time together with the surface of the above elliptical cone.</p>
<p><a href="https://i.stack.imgur.com/f38VE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f38VE.png" alt="enter image description here"></a></p>
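<p>One can also verify numerically that the parametrization lies on the stated elliptical cone (a small Python check of my own; the function names are arbitrary):</p>

```python
import math

def point(t):
    """The parametrized curve x = t cos t, y = 2t sin t, z = 3t / pi."""
    return (t * math.cos(t), 2 * t * math.sin(t), 3 * t / math.pi)

def cone_residual(t):
    """(3x)^2 + (3y/2)^2 - (pi z)^2, which should vanish identically,
    since (3x)^2 + (3y/2)^2 = 9t^2 (cos^2 t + sin^2 t) = 9t^2 = (pi z)^2."""
    x, y, z = point(t)
    return (3 * x) ** 2 + (1.5 * y) ** 2 - (math.pi * z) ** 2
```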
|
4,028,534 | <p>I have three points with coordinates: <span class="math-container">$A (5,-1,0),B(2,4,10)$</span>, and <span class="math-container">$C(6,-1,4)$</span>.</p>
<p>I have the following vectors <span class="math-container">$\overrightarrow {CA} = (-1, 0, -4)$</span> and <span class="math-container">$\overrightarrow{CB} = (-4, 5, 6)$</span>.</p>
<p>To find the area of the triangle I used the dot product between these vectors to get the angle and then applied the formula <span class="math-container">$A=0.5ab\sin{C}$</span> to find the area of the triangle which gave me <span class="math-container">$15.07(2dp)$</span>.</p>
<p>However, in the given solutions the answer is given as <span class="math-container">$\frac{3\sqrt{102}}{2}$</span>.</p>
<p>I think they have used the trig identity <span class="math-container">$\cos^2(\theta) + \sin^2(\theta) = 1$</span> to find the value of <span class="math-container">$\sin(\theta)$</span> rather than <span class="math-container">$\arccos(\theta)$</span> to find the angle ACB. However I don't understand why there would be such a discrepancy between the two answers; one using <span class="math-container">$\arccos$</span> and the other using the trig identity.</p>
| ultralegend5385 | 818,304 | <p>Your book has a typo: it should be <span class="math-container">$101$</span>, not <span class="math-container">$102$</span>. The value <span class="math-container">$\frac{3\sqrt{101}}{2}\approx 15.07$</span> agrees with your computation.</p>
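<p>As a supplementary check (my own, not from the original answer): the area equals half the magnitude of <span class="math-container">$\overrightarrow{CA}\times\overrightarrow{CB}$</span>, which avoids computing the angle at all. A Python sketch:</p>

```python
import math

def triangle_area(u, v):
    """Area of the triangle spanned by 3D vectors u and v: 0.5 * |u x v|."""
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

# CA = (-1, 0, -4), CB = (-4, 5, 6); cross product is (20, 22, -5)
area = triangle_area((-1, 0, -4), (-4, 5, 6))
```

<p>The exact value is <span class="math-container">$\frac12\sqrt{909}=\frac{3\sqrt{101}}{2}\approx 15.07$</span>, matching the rounded figure in the question.</p>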
<p>Hope this helps. Ask anything if not clear :)</p>
|
756,662 | <p>I am trying to figure out which random variables are measurable with respect to the sigma algebra generated by the sets $[1-4^{-n}, 1]$ where $n= 0, 1, 2, \ldots$, if $[0,1]$ is the sample space. I believe I can do this with indicator functions but I'm not sure how to write it.
Thanks!</p>
| drhab | 75,923 | <p>If $\mathcal{P}$ is a countable partition of the space then $\mathcal{F}=\left\{ \cup\mathcal{P}'\mid\mathcal{P}'\subset\mathcal{P}\right\} $
is the $\sigma$-algebra generated by the elements of $\mathcal{P}$.</p>
<p>A function $f$ is $\mathcal{F}$-measurable iff it is constant on
elements of $\mathcal{P}$. </p>
<p>So such functions take the form: $$\sum_{P\in\mathcal{P}}c_{P}1_{P}$$
where the $c_{P}$ are constants and the $1_{P}$ are characteristic
functions. </p>
<p>You are in such a position. The sets $I_{n}=\left[1-4^{-n},1-4^{-n-1}\right)$
together with the singleton $\left\{ 1\right\} $ form such a partition of
$\left[0,1\right]$.</p>
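<p>To illustrate this concretely in code (my own sketch; the names are arbitrary): every $\mathcal{F}$-measurable function is determined by the index of the partition cell containing the point, plus a separate value at $\left\{1\right\}$:</p>

```python
def piece_index(x):
    """Index n of the cell I_n = [1 - 4^(-n), 1 - 4^(-n-1)) containing x in [0, 1)."""
    if not 0.0 <= x < 1.0:
        raise ValueError("x must lie in [0, 1); handle the point 1 separately")
    n = 0
    while x >= 1.0 - 4.0 ** (-(n + 1)):
        n += 1
    return n

def simple_function(x, coefficients):
    """Evaluate sum_n c_n 1_{I_n}(x); cells beyond the list get the value 0."""
    n = piece_index(x)
    return coefficients[n] if n < len(coefficients) else 0
```

<p>Any such function is constant on each cell, which is exactly the measurability criterion stated above.</p>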
|
1,642,029 | <p>I'm looking at my textbooks steps for calculating the complexity of bubble sort...and it jumps a step where I don't know what exactly they did. </p>
<p><a href="https://i.stack.imgur.com/XaztP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XaztP.png" alt="enter image description here"></a></p>
<p>I see everything up to that point using summation rules and all, but am unsure about that jump. Any help on explaining more how they got that?</p>
| MPW | 113,214 | <p>If $(b_n)$ converges and you have only omitted finitely many terms from $(a_n)$ to get $(b_n)$, then the original sequence $(a_n)$ must have been convergent to begin with. Convergence only depends on the tail.</p>
<p>So if $(a_n)$ is not convergent, you can repeat the process infinitely many times on the "leftovers".</p>
|
2,709,832 | <p><a href="https://en.wikipedia.org/wiki/Cantor_set#Construction_and_formula_of_the_ternary_set" rel="noreferrer">Cantor set</a>
See the link; I am referring to the Cantor set on the real line. I wish to show that it is compact. I am doing this via the following arguments, but I am not sure if they are enough.</p>
<ol>
<li>Cantor set is bounded by definition in the region <span class="math-container">$[0,1]$</span></li>
<li>Cantor set is the union of closed intervals, and hence it is a closed set.</li>
<li>Since the Cantor set is both bounded and closed it is compact by Heine-Borel Theorem.</li>
</ol>
| Cyriac Antony | 120,721 | <p>The Cantor set is defined as $C=\cap_n C_n$, where $C_{n+1}$ is obtained from $C_n$ by dropping the 'middle third' of each closed interval in $C_n$.</p>
<p>As you have noted, Cantor set is bounded.</p>
<p>Since each $C_n$ is closed and $C$ is an intersection of such sets, $C$ is closed (arbitrary intersection of closed sets is a closed set).</p>
<p>As $C$ is closed and bounded, it is compact by Heine-Borel theorem.</p>
<p>PS: You cannot say that Cantor set is a union of closed intervals. Rudin is giving Cantor set as an example for a perfect set that contains no open interval!</p>
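<p>The finite stages $C_n$ are easy to generate explicitly, which makes the construction above tangible (a Python sketch of my own, using exact fractions):</p>

```python
from fractions import Fraction

def cantor_stage(n):
    """List of the closed intervals [a, b] making up C_n in the middle-thirds construction."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        next_intervals = []
        for a, b in intervals:
            third = (b - a) / 3
            next_intervals.append((a, a + third))        # keep the left third
            next_intervals.append((b - third, b))        # keep the right third
        intervals = next_intervals
    return intervals
```

<p>Each $C_n$ is a finite union of $2^n$ closed intervals of total length $(2/3)^n$, hence closed and bounded, and $C$ is their intersection.</p>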
|
456,581 | <p>If one requires simply the existence of partial derivatives of first order rather than all orders, then a standard example is the function</p>
<p>$$ f(x,y) = \left\{\begin{array}{l l}
\frac{2xy}{x^2+y^2} & \quad \text{if $(x,y)\neq(0,0),$}\\
0 & \quad \text{if $(x,y)=(0,0).$}
\end{array} \right.$$</p>
<p>However, this does not constitute an answer to my question since the partial derivative of $\frac{\partial f}{\partial x}$ with respect to $y$ does not exist at the origin.</p>
<p>PS: This question rose out of my wonder as to whether, in the definition of a smooth function, continuity of partials is an essential requirement or not.</p>
| paul garrett | 12,291 | <p>@DavidL.Renfro's comment's link gives a very nice example... But/and there may be some reason to give a complementary sort of answer. Namely, if we look at (let's say <em>tempered</em> for simplicity, to avoid worrying about smooth truncations and such) <em>distributions</em>, first (as warm-up to the point I want to make) Clairault's theorem about interchangeability of second mixed partials becomes always-true, because Fourier transform (of tempered distributions) converts $\partial/\partial x$ to mere multiplication by $ix$, and similarly for $y$, and these multiplication operators certainly commute! In particular, any <em>failure</em> in counter-examples has to be in a form that integration against test functions cannot detect.</p>
<p>Similarly, with the question at hand, at a just-slightly more sophisticated level, a Sobolev imbedding theorem says that if $f\in L^2(\mathbb R^2)$ and $\Delta f\in L^2(\mathbb R^2)$, then $f$ is continuous. Here $\Delta$ is the sum of the second pure partials, as usual. (Further, $f$ can be smoothly truncated so that any obstacle to square-integrability is not at infinity, but just local.)</p>
<p>Thus, once again, for $f$ locally square integrable (!) compactly supported and $\Delta f$ locally square-integrable, $f$ is continuous.</p>
<p>Thus, counter-examples must have other pretty pathological features, too.</p>
|
1,253,778 | <p>When we say that an inequality is sharp, does it mean that it is "the best" inequality we can get between the two quantities involved?</p>
<p>For example, I read that we would say that the inequality
$$
\frac{a^2+b^2}{2}\geq ab
$$
is sharp, but wouldn't $\frac{a^2+b^2}{2}$ on the RHS be sharper than $ab$?</p>
<p>Do we just mean that we can't <em>multiply</em> the RHS of $\cdot\geq\cdot$ by a constant $>1$ (or equivalently that we can't multiply the LHS by a constant in $[0,1)$)? So that would be <em>a</em> "best" inequality <em>in this sense</em>?</p>
| Pietro Paparella | 414,530 | <p>Here is another definition of sharpness that demonstrates that the inequality is <em>optimal</em> (i.e., best-possible).</p>
<p>If <span class="math-container">$f,g:\mathbb{R}^n \longrightarrow \mathbb{R}$</span> and
<span class="math-container">\begin{equation}
f(x) \geq g(x),~\forall x \in S \subseteq\mathbb{R}^n, \tag{1} \label{ineq}
\end{equation}</span>
then \eqref{ineq} is called <em>sharp</em> if there is an element <span class="math-container">$\hat{x} \in S$</span> such that <span class="math-container">$f(\hat{x}) = g(\hat{x})$</span>.</p>
<p>Why is this inequality the best possible inequality? Suppose that \eqref{ineq} is sharp as defined above. For contradiction, if <span class="math-container">$h:\mathbb{R}^n \longrightarrow \mathbb{R}$</span> is a function such that
<span class="math-container">\begin{equation}
f(x) \ge h(x) > g(x),~\forall x \in S
\end{equation}</span>
then, in particular, <span class="math-container">$f(\hat{x}) > g(\hat{x})$</span>, a contradiction. Thus, the inequality is the best-possible.</p>
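<p>As a toy numerical illustration of this definition (my own addition): for <span class="math-container">$f(a,b)=\frac{a^2+b^2}{2}$</span> and <span class="math-container">$g(a,b)=ab$</span>, the gap <span class="math-container">$f-g=\frac{(a-b)^2}{2}$</span> is nonnegative and vanishes exactly on the diagonal, so the motivating inequality is sharp in the sense above:</p>

```python
def gap(a, b):
    """The difference (a^2 + b^2)/2 - ab, which equals (a - b)^2 / 2 >= 0."""
    return (a * a + b * b) / 2 - a * b

# sample a grid of points; the minimum gap is 0, attained whenever a == b
points = [(a / 10, b / 10) for a in range(-30, 31) for b in range(-30, 31)]
min_gap = min(gap(a, b) for a, b in points)
```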
|
657,047 | <p>So I have $a^n = b$. When I know $a$ and $b$, how can I find $n$?</p>
<p>Thanks in advance!</p>
| bsdshell | 54,105 | <p>$$
\begin{align}a^n &= b\\
\Rightarrow \log_{a}{a^n} &= \log_{a}{b}\quad(\because \log_{a}a^n = n)\\
\Rightarrow n &= \log_{a}{b} \end{align}
$$</p>
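<p>In code, this is a single change-of-base computation; for instance, in Python (my own addition):</p>

```python
import math

def solve_exponent(a, b):
    """Solve a**n == b for n, i.e. n = log_a(b) = ln(b) / ln(a)."""
    return math.log(b) / math.log(a)
```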
|
271,259 | <p>My goal is to have NumericQ[h[j]]=True for any j regardless of whether j may be symbolic with no defined value.</p>
<p>Setting NumericQ[h[j_]]=True does not work, and as I understand it, the NumericFunction attribute only works when the input is also numeric.</p>
<p>One solution might be to unprotect NumericQ and then set NumericQ[h[j_]]=True, but I have not tried this as I am wary of using Unprotect.
Another solution might be to define my own version of NumericQ, say numericq, and set numericq[h[j_]]=True, but I would have to replace all previous NumericQ with numericq in my code and test that everything works.</p>
<p><strong>Edit:</strong></p>
<p>I want to do this because I want my code to ignore certain symbols. My solution was to treat these symbols as numeric, but this does not work for expressions like h[j] when j is a generic symbol. Any other function that behaves like NumericQ would also work, although I would have to change my code a bit.</p>
| user293787 | 85,954 | <p><em>This is an extended comment and not strictly an answer. But I thought I put this here, since people that end up here for whatever reason might find it useful. For reference, I am currently using Version 12.3.</em></p>
<p>While writing the other answer, I was surprised to find that the assignment</p>
<pre><code>NumericQ[undefinedsymbol] = True
</code></pre>
<p>gives no warning, despite <code>NumericQ</code> having the attribute <code>Protected</code>. And it works, meaning now</p>
<pre><code>NumericQ[undefinedsymbol]
(* gives True *)
</code></pre>
<p>By contrast</p>
<pre><code>NumericQ[undefinedsymbol2] = "seriously"
</code></pre>
<p>gives the error message</p>
<blockquote>
<p>Cannot set NumericQ[undefinedsymbol2] to seriously; the lhs argument must be a symbol and the rhs must be True or False</p>
</blockquote>
<p>This error would suggest that</p>
<pre><code>NumericQ[undefinedsymbol3[789]] = True
</code></pre>
<p>would fail since <code>undefinedsymbol3[789]</code> is not a symbol, but it gives no error or warning. However,</p>
<pre><code>NumericQ[undefinedsymbol3[789]]
(* gives False *)
</code></pre>
<p>I find this a bit misleading. Probably <code>NumericQ[undefinedsymbol3[789]] = True</code> should give an error, since without error, one assumes that the definition was recorded, when in fact it was not.</p>
<p>I do not know if people use this feature of <code>NumericQ</code>. I do not see it mentioned in the <a href="https://reference.wolfram.com/language/ref/NumericQ.html" rel="nofollow noreferrer">documentation</a>.</p>
<p><strong>Comment.</strong> I can understand how <code>NumericQ[undefinedsymbol] = True</code> can work despite the protection. For example, <code>NumericQ</code> could be using something similar to the following code to circumvent the protection:</p>
<pre><code>myNumericQ/:Set[myNumericQ[x_Symbol],tf:(True|False)]:=(
Unprotect[myNumericQ];
AppendTo[DownValues[myNumericQ],HoldPattern[myNumericQ[x]]:>tf];
Protect[myNumericQ];
tf);
Protect[myNumericQ];
</code></pre>
|
163,816 | <p>I'm looking for a good exercise book for probability theory, preferably one at least partially with solutions. I want it to be detailed, not trivial, providing me with solid fundamentals in the topic to be developed in the future. I'd like it to be more of an "applied" technical-university approach than the highly "abstract" one tailored for pure mathematics students at a university, though it need not be so.</p>
<p>The topics I would like it to cover are more or less like in the part one of the book "Probability, Random Variables and Stochastic Processes" by Papoulis and Pillai ( <a href="http://www.mhhe.com/engcs/electrical/papoulis/graphics/toc.pdf" rel="nofollow">table of contents</a> ). Thank you in advance.</p>
| nanme | 33,515 | <p>"A Collection of Exercises in Advanced Probability Theory" - the solutions manual of all even-numbered exercises from the book “A First Look at Rigorous Probability Theory” (second edition, 2006) by Jeffrey S. Rosenthal :</p>
<p><a href="http://www.worldscibooks.com/mathematics/6300_solutionsmanual.pdf" rel="nofollow">http://www.worldscibooks.com/mathematics/6300_solutionsmanual.pdf</a></p>
|
482,003 | <p>I need help with the following limit $$\lim_{n\to\infty}\sum_{k=1}^n \frac{1}{\sqrt{kn}}$$</p>
<p>Thanks.</p>
| GEdgar | 442 | <p>mrf has the main idea. But since the integral is improper (as mrf notes) some care is required. Here is one way, using the Lebesgue theory...</p>
<p>Let <span class="math-container">$f(x) = 1/\sqrt{x}$</span>. For fixed <span class="math-container">$n$</span>, let <span class="math-container">$g_n$</span> be defined by <span class="math-container">$g_n(x) = \sqrt{n/k}$</span> for <span class="math-container">$(k-1)/n < x \le k/n$</span>. Then <span class="math-container">$0 < g_n(x) \le f(x)$</span> and <span class="math-container">$g_n(x) \to f(x)$</span> on <span class="math-container">$(0,1]$</span>. Since <span class="math-container">$f$</span> is Lebesgue integrable on <span class="math-container">$(0,1]$</span>, we have by the dominated convergence theorem <span class="math-container">$\int_0^1 g_n \to \int_0^1 f$</span>. That is:
<span class="math-container">$$
\frac{1}{n}\sum_{k=1}^n\sqrt{\frac{n}{k}} \to \int_0^1 \frac{dx}{\sqrt{x}}
$$</span></p>
<p><strong>ASIDE</strong> </p>
<p>Monthly problem 11376 has an example of how blindly saying "Riemann sum" can lead one astray. The solution is on p. 283 of the March, 2010 issue. The problem defines
<span class="math-container">$$
S_n(a) = \sum_{an \lt k \le (a+1)n}\frac{1}{\sqrt{kn-an^2}\;}
$$</span>
for real <span class="math-container">$a$</span> and positive integer <span class="math-container">$n$</span>, and asks for which <span class="math-container">$a$</span> does <span class="math-container">$\lim_{n \to \infty} S_n(a)$</span> exist. <strong>Many</strong> solvers noted that <span class="math-container">$S_n(a)$</span> is a Riemann sum for
<span class="math-container">$$
\int_a^{a+1} \frac{dx}{\sqrt{x-a}\;} = 2
$$</span>
and then carelessly concluded that <span class="math-container">$S_n(a) \to 2$</span> for all <span class="math-container">$a$</span>. But, in fact, as the published solution shows, <span class="math-container">$S_n(a)$</span> converges if and only if <span class="math-container">$a$</span> is rational. </p>
<p>The problem here is the case <span class="math-container">$a=0$</span>, and fortunately <span class="math-container">$0$</span> is rational.</p>
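<p>(A numerical footnote of my own.) The convergence of the sum in the problem to <span class="math-container">$2$</span> is slow, roughly like <span class="math-container">$2+\zeta(1/2)/\sqrt{n}$</span> with <span class="math-container">$\zeta(1/2)\approx -1.46$</span>, but it is easy to watch:</p>

```python
import math

def partial_sum(n):
    """S_n = sum_{k=1}^n 1 / sqrt(k * n), which tends to 2 as n grows."""
    return sum(1.0 / math.sqrt(k * n) for k in range(1, n + 1))
```

<p>Since the step functions <span class="math-container">$g_n$</span> in the proof above are dominated by <span class="math-container">$f$</span>, the partial sums approach <span class="math-container">$2$</span> from below.</p>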
|
26,062 | <p>Given a <code>TabView</code> panel like this one</p>
<pre><code>TabView[{
DynamicModule[{x = False}, {Checkbox[Dynamic[x]], Dynamic[x]}],
"foo"}]
</code></pre>
<p>I would like to reset the value of <code>x</code> to its initial value (<code>False</code>) every time the first tab gets selected - that is, I basically want to "reset"/"reload" the tab on each selection.</p>
<p>Also, the variable(s) in question should remain <em>local</em> variables on a per-tab basis, if possible. This is, I hope to avoid a solution declaring <code>x</code> globally.</p>
<p>Any help very much appreciated (either on reloading the whole tab or on setting specific dynamic variables).</p>
| Kuba | 5,478 | <h3>edit</h3>
<p>This is an old question but it came to my mind I know the solution now.</p>
<pre><code>DynamicModule[{tab = 1},
TabView[{
DynamicModule[{x = False},
Row[{ Dynamic[tab; x = False; "", TrackedSymbols :> {tab}],
{Checkbox[Dynamic[x]], Dynamic[x]}}]]
,
"foo"},
Dynamic[tab]]
]
</code></pre>
<p>So the trick is to keep <em>invisible</em> <code>Dynamic</code> which triggers resetting <code>x</code> whenever <code>tab</code> is changed. <code>x</code> is scoped in the tab only while <code>tab</code> is in outer <code>DynamicModule</code>.</p>
<hr />
<h3>old stuff</h3>
<p>I don't know how to deal with it when <code>x</code> is scoped to the first tab, but I don't think that is the most important thing here.</p>
<pre><code>TabView[{
1 -> {Checkbox[Dynamic[x]], Dynamic[x]},
2 -> "foo"}, Dynamic[k, If[# == 2, x = False; k = #;, k = #] &]]
</code></pre>
<p>This solution uses <code>TabView</code> in the form <code>TabView[{ }, i ]</code>, which is described in the <code>Help</code> documentation.
The second argument of <code>Dynamic</code> monitors which tab is active and switches the value of <code>x</code>.</p>
|
3,048,702 | <p>Let <span class="math-container">$S$</span> be a orientable closed surface with genus <span class="math-container">$g \geq 1$</span> and let <span class="math-container">$\gamma \subset S$</span> be an immersed curve. Does there exist a finite cover of <span class="math-container">$S$</span> where <span class="math-container">$\gamma$</span> lifts to a curve that is homotopic to an embedded curve? </p>
<p>If so, and <span class="math-container">$\gamma$</span> has <span class="math-container">$k$</span> double points, is there an explicit bound on the degree of the finite cover that we need to take in order to lift <span class="math-container">$\gamma$</span> to a curve homotopic to an embedded curve?</p>
| Lee Mosher | 26,501 | <p>The existence of the finite cover you are asking for is a well known consequence of the residual finiteness of surface groups, a theorem with a long history. </p>
<p>For a discussion of that theorem, which starts out with the proof that answers your first question, see <a href="https://260p.wordpress.com/2014/04/02/residually-finite-groups/" rel="nofollow noreferrer">this post </a> and <a href="https://260p.wordpress.com/2014/04/08/2-surface-groups-are-residually-finite/" rel="nofollow noreferrer">its followup</a>.</p>
<p>For your second question I don't have a good answer. Also, I'm not sure what you would count as an "explicit" bound. However, I know exactly how I would go about finding an "explicit" bound in the case of a free group, where the proof of residual finiteness is much easier. I'm guessing that the surface group case can also be done by working through the proof of residual finiteness.</p>
<p><strong>Comment:</strong> As pointed out in the answer of @MoisheCohen, my answer is incomplete. Essentially what I have written produces a cover of <span class="math-container">$S$</span> such that <em>some iterate</em> <span class="math-container">$\gamma^n$</span> lifts to an embedded curve. The additional requirement that <span class="math-container">$n=1$</span> needs subgroup separability ideas as explained in the other answer, and as applied to the cyclic subgroup generated by <span class="math-container">$\gamma$</span>.</p>
|
223,955 | <p>How can we convert a list to an integer correctly? </p>
<p><strong>{5, 22, 4, 5} -> 52245?</strong></p>
<p>When I use the command <code>FromDigits</code> in Mathematica </p>
<pre><code>FromDigits[{5, 22, 4, 5}]
</code></pre>
<p>The result is incorrect, namely <strong>7245</strong></p>
| kglr | 125 | <pre><code>{5, 22, 4, 5} ~ StringRiffle ~ "" // FromDigits
</code></pre>
<blockquote>
<pre><code>52245
</code></pre>
</blockquote>
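<p>A side note of my own: the original result <code>7245</code> is not a bug. <code>FromDigits</code> interprets the list as base-10 digits <em>with carrying</em>, so <code>{5, 22, 4, 5}</code> means <span class="math-container">$5\cdot 10^3+22\cdot 10^2+4\cdot 10+5=7245$</span>, whereas the <code>StringRiffle</code> solution concatenates decimal strings. The two interpretations, sketched in Python for comparison:</p>

```python
def concatenate(parts):
    """String-concatenate the decimal representations, as the asker wanted."""
    return int("".join(str(p) for p in parts))

def positional_base10(parts):
    """What FromDigits does: treat entries as base-10 'digits', carrying overflow."""
    value = 0
    for p in parts:
        value = value * 10 + p
    return value
```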
|
1,043,090 | <p>A rectangle $ABCD$, which measures $9$ ft by $12$ ft, is folded once perpendicular to diagonal $AC$ so that the opposite vertices $A$ and $C$ coincide. Find the length of the fold. I tried folding a rectangular piece of paper, but there are spare edges. The gray segment is my fold, and I'm not sure whether it crosses the middle of my diagonal $AC$. If it does, I then need to work out the distances from $A$ to the center and from the center to $C$, which is where I'm confused. <img src="https://i.stack.imgur.com/Ksk6t.png" alt="enter image description here"></p>
| MvG | 35,416 | <p>The length is</p>
<p>$$\frac{45}4\,\text{ft}=11.25\,\text{ft}$$</p>
<p><img src="https://i.stack.imgur.com/dVhsr.png" alt="Figure"></p>
<p>One way to obtain this result is using coordinates. Use $E$ as the center of the coordinate system, then the corners are $A,B,C,D=\left(\pm\frac92,\pm\frac{12}2\right)$. The diagonal $AC$ has the equation $y=\frac{12}{9}x=\frac43x$ so the perpendicular line has $y=-\frac34x$. For $x=-\frac92$ you get $y=\frac{27}{8}$, so you have $F=\left(-\frac92,\frac{27}8\right)$ and likewise $G=\left(\frac92,-\frac{27}8\right)$. The line between these two has a length of</p>
<p>$$\lvert FG\rvert=\sqrt{9^2+\left(\frac{27}4\right)^2}=\sqrt{\frac{1296+729}{16}}=\sqrt{\frac{2025}{16}}=\frac{45}4$$</p>
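As a quick numerical sanity check of the coordinate derivation (my own sketch; $F$ and $G$ are the fold endpoints found above, and $A$, $C$ are opposite corners):

```python
import math

A, C = (-9/2, -6), (9/2, 6)       # opposite corners of the 9 x 12 rectangle
F, G = (-9/2, 27/8), (9/2, -27/8) # endpoints of the fold, from the derivation above

length = math.dist(F, G)  # should equal 45/4 = 11.25

# the fold lies on the perpendicular bisector of AC, so F is equidistant from A and C
bisector_check = abs(math.dist(F, A) - math.dist(F, C))
```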
|
2,024,097 | <blockquote>
<p>Which prime numbers $p \in \mathbb{Z}$ are reducible in the unique factorization domain $\mathbb{Z}\left[\frac{1 + \sqrt{-3}}{2}\right]$ ?</p>
</blockquote>
<p>Suppose $p$ is a prime integer and $p = \alpha \beta$ in $\mathbb{Z}\left[\frac{1 + \sqrt{-3}}{2}\right] = \mathbb{Z}[\omega]$. Then $f(p) = p^2 = f(\alpha)f(\beta)$, where $f: a + b \omega \mapsto a^2 + ab + b^2$ is the norm in $\mathbb{Z}[\omega]$.</p>
<p>There are only two possibilities:</p>
<ol>
<li>$f(\alpha) = 1, f(\beta) = p^2 $, so $\alpha $ is a unit</li>
<li>$f(\alpha) = p, f(\beta) = p $</li>
</ol>
<p>Set $\alpha = a + b \omega$, then $f(\alpha) = a^2 + b^2 + ab = p$. Therefore, if there exist integers $a$ and $b$ which solve this equation, then $p$ is reducible. And if this equation has no integer solutions, $p$ is prime in $\mathbb{Z}[\omega]$. But how can I express these solutions? I tried to use congruences $\text{mod} \ 4 $ , but it did not help much.</p>
| Mr. Brooks | 162,538 | <p>Very minor detail, the function $f$ is more commonly notated $N$, for "norm."</p>
<p>More importantly, however, I think the norm function reveals itself more clearly if you regard it in this manner: given $a + b \sqrt{-3}$, for $\{a, b\} \in \mathbb{Z}$, then $N(a + b \sqrt{-3}) = a^2 + 3b^2$; or given $$\frac{a + b \sqrt{-3}}{2}$$ with $a$ and $b$ both odd integers, then $$N\left(\frac{a + b \sqrt{-3}}{2}\right) = \frac{a^2 + 3b^2}{4}.$$ Looking at $\omega$ has its uses, but at this point it tends to add a layer of confusion that blocks facility.</p>
<p>Then, for a prime $p$ to not be prime in this ring, it has to be such that $4p = a^2 + 3b^2$. So congruence modulo $4$ does not help us. Hence we need to look at congruence modulo $3$ instead. Since $4 \equiv 1 \pmod 3$, what we're looking for then is $p \equiv 1 \pmod 3$. That's because $a^2 + 3b^2 \equiv 2 \pmod 3$ is impossible.</p>
<p>Then, if you discover $4p = a^2 + 3b^2$ or $p = a^2 + 3b^2$, then you can just plug $a$ and $b$ into $a + b \sqrt{-3}$ or $$\frac{a + b \sqrt{-3}}{2}.$$ If you <em>have</em> to have it as $\alpha + \beta \omega$, from the form with halves do $\alpha = a + b$ and $\beta = b$ (the extra negative halves of $b \omega$ will be thus compensated -- if I did this correctly).</p>
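If it helps to experiment, here is a small brute-force search (my own sketch, with a naive bound) for representations $4p = a^2 + 3b^2$; finding one witnesses that $p$ is reducible in the ring:

```python
def norm_form_solution(p, bound=200):
    # search for positive integers a, b with a^2 + 3*b^2 = 4*p
    # (hypothetical helper; a solution witnesses that p splits)
    for a in range(1, bound):
        for b in range(1, bound):
            if a * a + 3 * b * b == 4 * p:
                return (a, b)
    return None
```

For example, $p = 7 \equiv 1 \pmod 3$ has $4 \cdot 7 = 1^2 + 3 \cdot 3^2$, while $p = 5 \equiv 2 \pmod 3$ has no such representation.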
|
1,215,273 | <blockquote>
<p>Describe the cosets of the subgroup $\langle 3\rangle$ of $\mathbb{Z}$</p>
</blockquote>
<p>The problem I have is $\mathbb{Z}$ is infinite.</p>
<p>So we know that $\langle 3\rangle=\{0,3,6,9,12,\ldots\}$ and I know the definition of cosets (in this case right cosets) is the set of all products of ha, as a remains fixed and $h$ ranges over $H$.</p>
<p>So $H$ is the subgroup $\langle 3\rangle$ which does not remain fixed and the elements of $\mathbb{Z}$ remains fixed.</p>
<p>I started writing some numbers out to see how I can possibly describe it but I didn't see any help.</p>
<p>I defined: $\mathbb{Z}=\{\ldots,-4,-3,-2,-1,0,1,2,3,4,\ldots\}$ and I already defined $\langle 3\rangle=\{0,3,6,9,12,\ldots\}$</p>
<p>Therefore here are some numbers:</p>
<p>Lets pick $-4$ as our fixed $a$, so then we obtain:</p>
<p>$0 \times -4=0$</p>
<p>$3 \times -4=-12$</p>
<p>$6 \times -4=-24$</p>
<p>and so on</p>
<p>If I pick $2$ as our fixed $a$, then we have:</p>
<p>$0 \times 2=0$</p>
<p>$3 \times 2=6 $</p>
<p>$6 \times 2=12 $</p>
<p>I probably can't see it but then how would I describe the cosets?</p>
| John Hughes | 114,036 | <p>The operation on $Z$ in this case is ADDITION. So
$$
<3> = \{ \ldots, -6, -3, 0, 3, 6, \ldots \}
$$
and a typical coset is
$$
<3> + 0 = <3>
$$
while another is
$$
<3> + 1 = \{ \ldots, -6+1, -3+1, 0+1, 3+1, 6+1, \ldots\}.
$$
With that, can you write down all the other cosets? How many are there? </p>
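A tiny enumeration (a sketch) makes the pattern concrete: grouping integers in a sample window by residue mod 3 reproduces exactly these cosets.

```python
# group integers in a finite window by residue mod 3; each residue class
# is (a finite window of) one coset of <3> in (Z, +)
cosets = {}
for n in range(-9, 10):
    cosets.setdefault(n % 3, []).append(n)
```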
|
4,042,250 | <p>My idea is to use disjoint events and calculating the probability of getting at least two heads for each number rolled. For example, if I roll a 3, I would calculate the probability with the expression <span class="math-container">$(\frac{1}{6}) (\frac{1}{2})^3 \binom{3}{2} + (\frac{1}{6}) (\frac{1}{2})^3\binom{3}{3})= \frac{1}{12}$</span> and then add up the probabilities of getting at least two for each rolls, since the events are disjoint, summing to <span class="math-container">$\frac{67}{128}$</span>. Is this a valid solution? Is there a better approach to solving this problem?</p>
| Vons | 274,987 | <p>It is valid. You find that P(at least 2 heads|die=1) = 0, P(at least 2 heads|die=2)=1/4, P(at least 2 heads|die=3)=1/2, P(at least 2 heads|die=4)=11/16, P(at least 2 heads|die=5)=13/16, and P(at least 2 heads|die=6)=57/64. Then 1/6*(0+1/4+1/2+11/16+13/16+57/64)=67/128.</p>
<p>There is also a way to numerically approximate the answer, and that is simulation. You can write code to run 10000 rolls of the die and compute the fraction of rolls in which you get at least 2 heads. Then repeat this 100 times, obtaining a probability estimate on each iteration. The mean of these 100 estimates came out to <code>0.523544</code>. We can check that <span class="math-container">$\frac{67}{128}\approx0.5234375$</span>, which is very close.</p>
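A sketch of such a simulation in Python (sample sizes and names are my own choices), together with the exact computation for comparison:

```python
import random
from fractions import Fraction
from math import comb

# exact value: average over the die of P(at least 2 heads in n flips)
exact = sum(
    Fraction(1, 6) * sum(Fraction(comb(n, k), 2**n) for k in range(2, n + 1))
    for n in range(1, 7)
)

# Monte Carlo estimate
random.seed(0)
trials = 200_000
hits = sum(
    sum(random.random() < 0.5 for _ in range(random.randint(1, 6))) >= 2
    for _ in range(trials)
)
estimate = hits / trials
```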
|
4,042,250 | <p>My idea is to use disjoint events and calculating the probability of getting at least two heads for each number rolled. For example, if I roll a 3, I would calculate the probability with the expression <span class="math-container">$(\frac{1}{6}) (\frac{1}{2})^3 \binom{3}{2} + (\frac{1}{6}) (\frac{1}{2})^3\binom{3}{3})= \frac{1}{12}$</span> and then add up the probabilities of getting at least two for each rolls, since the events are disjoint, summing to <span class="math-container">$\frac{67}{128}$</span>. Is this a valid solution? Is there a better approach to solving this problem?</p>
| user247327 | 247,327 | <p>There is a 1/6 chance you will roll a 1. Then you CAN'T flip two heads. The probability is 0.</p>
<p>There is a 1/6 chance you will roll a 2. The probability of flipping two head in two flips is (1/2)(1/2)= 1/4. The probability is (1/6)(1/4)= 1/24.</p>
<p>There is a 1/6 chance you will roll a 3. The probability of flipping two heads,and then a tail, HHT, (1/2)(1/2)(1/2)= 1/8. But the probability of HTH or THH is the same. The probability of two heads in three flips is 3/8. The probability is (1/6)(3/8)= 1/16.</p>
<p>There is a 1/6 chance you will roll a 4. The probability of flipping two heads, and then two tails, HHTT, is (1/2)(1/2)(1/2)(1/2)= 1/16. There are <span class="math-container">$\frac{4!}{2!2!}=6$</span> permutations, HHTT, HTHT, HTTH, THHT, THTH, and TTHH, all having the same probability, 1/16. The probability of two heads in four flips is 6/16= 3/8. The probability is (1/6)(3/8)= 1/16.</p>
<p>There is a 1/6 chance you will roll a 5. The probability of flipping two heads, and then three tails is <span class="math-container">$(1/2)^5= 1/32$</span>. There are <span class="math-container">$\frac{5!}{2!3!}= 10$</span> permutations (I won't write them all) so the probability of two heads and three tails in any order is <span class="math-container">$\frac{10}{32}= \frac{5}{16}$</span>. The probability is (1/6)(5/16)= 5/96.</p>
<p>There is a 1/6 chance you will roll a 6. The probability of flipping two heads then four tails is <span class="math-container">$(1/2)^6= 1/64$</span>. There are <span class="math-container">$\frac{6!}{2!4!}= 15$</span> permutations so the probability of two heads and four tails in any order is <span class="math-container">$\frac{15}{64}$</span>. The probability is (1/6)(15/64)= 5/128.</p>
<p>If you roll a die then flip a coin that number of times, the probability you get exactly two heads is 1/24+ 1/16+ 1/16+ 5/96+ 5/128= 16/384+ 24/384+ 24/384+ 20/384+ 15/384= 99/384= 33/128.</p>
|
3,853,980 | <p>Let <span class="math-container">$A$</span> be an <span class="math-container">$n×n $</span> complex matrix such that the three matrices <span class="math-container">$A+I$</span> , <span class="math-container">$A^2+I $</span> , <span class="math-container">$ A^3+I$</span> are all unitary .Prove that<span class="math-container">$ A$</span> is the zero matrix</p>
<p>I try to show that</p>
<p><span class="math-container">$Trace( A^{\theta}A) =0$</span> where <span class="math-container">$A^{\theta }$</span> is conjugate transpose of matrix <span class="math-container">$A$</span></p>
<p><span class="math-container">$\because $</span>
<span class="math-container">$Trace( A^{\theta}A)$</span> = <span class="math-container">$|a_{11}|^2 + |a_{12}|^2....|a_{nn}|^2$</span></p>
<p><span class="math-container">$A+I$</span> is unitary ,so</p>
<p><span class="math-container">$(A+I)^{\theta}(A+I)= I $</span></p>
<p><span class="math-container">$\implies (A^ {\theta}+I)(A +I) =I $</span></p>
<p><span class="math-container">$A^ {\theta}A+ A^ {\theta}+A = 0$</span></p>
<p><span class="math-container">$ Trace( A^{\theta}A)= -( Trace( A^{\theta}+A))$</span>
<span class="math-container">$ \implies Trace( A^{\theta}A)=-2$</span> (sum of the real parts of the diagonal entries of <span class="math-container">$A$</span>)</p>
<p>I don't know how to proceed further. Please help.</p>
| Matthew H. | 801,306 | <p>Set <span class="math-container">$X:=\max(X_1,X_2)$</span>. Notice from symmetry <span class="math-container">$$\operatorname{cov}(X_1+X_2,X)=\operatorname{cov}(X_1,X)+\operatorname{cov}(X_2,X)=2\operatorname{cov}(X_1,X)$$</span> Let's take a closer look at <span class="math-container">$\operatorname{cov}(X_1,X)$</span>. First notice <span class="math-container">$E(X_1)=\frac{1}{2}$</span> and
<span class="math-container">$$E(X)=\int_0^1xf_X(x)\,dx=\int_0^12x^2\,dx=\frac{2}{3}$$</span> Therefore</p>
<p><span class="math-container">$$\operatorname{cov}(X_1,X)=E(X_1X)-E(X_1)E(X)=E(X_1X)-\frac{1}{3}$$</span> From the law of total expectation, <span class="math-container">$$E(X_1X)=E(X_1X\mid X_1 \leq X_2)P(X_1 \leq X_2)+E(X_1X\mid X_1>X_2)P(X_1>X_2)$$</span> Notice <span class="math-container">$P(X_1 \leq X_2)=P(X_1>X_2)=\frac{1}{2}$</span> and <span class="math-container">$$E(X_1X\mid X_1 \leq X_2)=E(X_1X_2\mid X_1 \leq X_2)=\int_0^1\int_{x_1}^1\frac{x_1x_2}{P(X_1 \leq X_2)}\,dx_2\,dx_1=\frac{1}{4}$$</span> On the other hand, <span class="math-container">$$E(X_1X\mid X_1 > X_2)=E(X_1^2\mid X_1 > X_2) = \int_0^1 \int_{x_2}^1 \frac{x_1^2}{P(X_1 > X_2)}\,dx_1\,dx_2=\frac{1}{2}$$</span> We get that <span class="math-container">$E(X_1X)=\frac{1}{2}\big[\frac{1}{4}+\frac{1}{2}\big]=\frac{3}{8}$</span> which means <span class="math-container">$\operatorname{cov}(X_1,X)=\frac{1}{24}$</span> and finally <span class="math-container">$$\operatorname{cov}(X_1+X_2,X)=\frac{1}{12}$$</span></p>
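A quick Monte Carlo sanity check of the final value (my own sketch; the sample size is chosen loosely, assuming $X_1, X_2$ are i.i.d. uniform on $[0,1]$):

```python
import random

random.seed(1)
n = 400_000
s_sum = m_sum = sm_sum = 0.0
for _ in range(n):
    x1, x2 = random.random(), random.random()
    s, m = x1 + x2, max(x1, x2)
    s_sum += s
    m_sum += m
    sm_sum += s * m

# sample covariance of S = X1 + X2 and M = max(X1, X2); should be near 1/12
cov_estimate = sm_sum / n - (s_sum / n) * (m_sum / n)
```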
|
56,103 | <p>A referee asked me to include a reference or proof for the following classical fact. It's not hard to prove, but I'd prefer to just give a reference -- does anyone know one?</p>
<p>Let $X$ be a nice space (eg a smooth manifold, or more generally a CW complex). The topological Picard group $Pic(X)$ is the set of isomorphism classes of $1$-dimensional complex vector bundles on $X$. The set $Pic(X)$ is an abelian group with group operation the fiberwise tensor product, and the first Chern class map</p>
<p>$$c_1 : Pic(X) \longrightarrow H^2(X;\mathbb{Z})$$</p>
<p>is an isomorphism of abelian groups.</p>
<p>Now make the assumption that $H_1(X;\mathbb{Z})$ is a finite abelian group. One nice construction of elements of $Pic(X)$ is as follows. Consider $\phi \in Hom(H_1(X;\mathbb{Z}),\mathbb{Q}/\mathbb{Z})$. Let $\tilde{X}$ be the universal cover, so $\pi_1(X)$ acts on $\tilde{X}$ and $X = \tilde{X} / \pi_1(X)$. Let $\psi : \pi_1(X) \rightarrow \mathbb{Q}/\mathbb{Z}$ be the composition of $\phi$ with the natural map $\pi_1(X) \rightarrow H_1(X;\mathbb{Z})$. Define an action of $\pi_1(X)$ on $\tilde{X} \times \mathbb{C}$ by the formula</p>
<p>$$g(p,z) = (g(p),e^{2 \pi i \psi(g)}z) \quad \quad \text{for $g \in \pi_1(X)$ and $(p,z) \in \tilde{X} \times \mathbb{C}$}.$$</p>
<p>Observe that this makes sense since $\psi(g) \in \mathbb{Q} /\mathbb{Z}$. Define $E_\phi = (\tilde{X} \times \mathbb{C}) / \pi_1(X)$. The projection onto the first factor induces a map $E_{\phi} \rightarrow X$ which is easily seen to be a complex line bundle. The line bundle $E_{\phi}$ is known as the flat line bundle on $X$ with monodromy $\phi$.</p>
<p>Now, the universal coefficient theorem says that we have a short exact sequence</p>
<p>$$0 \longrightarrow Ext(H_1(X;\mathbb{Z}),\mathbb{Z}) \longrightarrow H^2(X;\mathbb{Z}) \longrightarrow Hom(H_2(X;\mathbb{Z}),\mathbb{Z}) \longrightarrow 0.$$</p>
<p>Since $H_1(X;\mathbb{Z})$ is a finite abelian group, there is a natural isomorphism $\rho : Hom(H_1(X;\mathbb{Z}),\mathbb{Q}/\mathbb{Z}) \rightarrow Ext(H_1(X;\mathbb{Z}),\mathbb{Z}) $. We can finally state the fact for which I am looking for a reference :</p>
<p>$$c_1(E_{\phi}) = \rho(\phi).$$</p>
| Dan Ramras | 4,042 | <p>It's probably not exactly what you want (in particular, they're dealing with real bundles and the Stiefel Whitney classes), but something sort of close is discussed in the appendix to </p>
<p>MR2003827 (2004h:53116)
Ho, Nan-Kuo(3-TRNT); Liu, Chiu-Chu Melissa(1-HRV)
Connected components of the space of surface group representations.
Int. Math. Res. Not. 2003, no. 44, 2359–2372.
53D30 (22F05 57N05)</p>
|
667,903 | <p>$$D=\left\{z\ \left|\ \right. 1<|z|<2, \ \Re(z)>-\frac{1}{2}\right\}$$</p>
<p>I can try to prove that each disc with radius 1, and 2 is open (?), so I need to show that the ball centered at $w$ is contained in the disc with radius 2 </p>
<p>If $w\in \{z: |z|<2\}$, then $|z_0-w|<2$, let $\epsilon = 2-|z_0-w|>0$</p>
<p>Let $z\in \mathbb{C}$, such that $|z-w|<\epsilon$</p>
<p>and $|z-z_0|\leq |z-w|+|z_0-w|<2$, so the ball of radius $\epsilon$ is contained inside the disc with radius $2$, so the disc is open. </p>
<p>Similar proof for the disc with radius 1, I understand the property of intersection now, but why is it? The intersection would give you the disc with radius 1, not the washer?</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>Is the intersection of three open sets. Each one is the inverse image of a open set by a continuous function.</p>
<p>EDIT: intersection of open sets, the general case.</p>
<p>Let be $O_1,O_2$ two open sets.</p>
<p>If $z_0\in O_1\cap O_2$, then $z_0\in O_1$ and $z_0\in O_2$ by definition, so
$$\exists r_1>0: \ B(z_0,r_1)\subset O_1,\qquad\exists r_2>0: \ B(z_0,r_2)\subset O_2.$$
Take $r=\min(r_1,r_2)$. Then, $B(z_0,r)\subset B(z_0,r_1)\subset O_1$ and $B(z_0,r)\subset B(z_0,r_2)\subset O_2$ , i.e., there is a $z_0$-centered ball $B(z_0,r)\subset O_1\cap O_2$. The extension to any finite intersection of open sets is obvious.</p>
|
2,856,180 | <p>I've been learning some about counting and basic combinatorics. But some scenarios were not explained in my class...</p>
<p><strong>Example problem:</strong> You are given 6 tiles. 1 is labeled "1", 2 are labeled "2", and 3 are labeled "3".</p>
<p><strong>Problem 1:</strong> How many different ways can you arrange groups of three tiles (order matters)?</p>
<p><strong>Answer 1: 19</strong>, {1,2,2} {1,2,3} {1,3,2} {1,3,3} {2,1,2} {2,1,3} {2,2,1} {2,2,3} {2,3,1} {2,3,2} {2,3,3} {3,1,2} {3,1,3} {3,2,1} {3,2,2} {3,2,3} {3,3,1} {3,3,2} {3,3,3}.</p>
<p><strong>Question 1:</strong> You can't use the normal $\frac{5!}{\left(5-3\right)!}$ because you have repeated symbols. And you can't divide by the factorials of repeated symbols $\frac{6!}{\left(6-3\right)!\cdot3!\cdot2!}$ like you can when n = k and you make different arrangements of all n of symbols $\frac{n!}{\prod_{i=1}^mn_i}$. What formula would be used to find the answer to a more complex version of this problem?</p>
<p><strong>Problem 2:</strong> How many different ways can you make groups of three tiles (order doesn't matter)?</p>
<p><strong>Answer 2: 6</strong>, {1,2,2} {1,2,3} {1,3,3} {2,2,3} {2,3,3} {3,3,3}.</p>
<p><strong>Question 2:</strong> You can't use the normal $\frac{n!}{\left(n-k\right)!\cdot k!}$ because you have repeated symbols. What formula would be used to find the answer to a more complex version of this problem?</p>
| Tengu | 58,951 | <p>Don't try to find a new formula every time you encounter a new counting problem! Counting problems have many variations so it's better if you know <strong>how</strong> to count rather than knowing the formula. For example, formulas for combinations and permutations are essentially coming from <a href="https://en.wikipedia.org/wiki/Rule_of_product" rel="nofollow noreferrer">Rule of Product</a>, where you give yourself a procedure to construct the objects that you want to count. </p>
<p>Thus, when facing a counting problem, ask yourself a question:</p>
<blockquote>
<p>How do you construct the objects?</p>
</blockquote>
<p>Let's take your first problem as an example. Your objects can be divided into three main cases:</p>
<p>Case $1$: If all three tiles have different labels, i.e. your set contains $1,2,3$ so this is a permutation problem which gives the answer of $3!=6$.</p>
<p>Case $2$: Two of the three tiles have the same label. The repeated label can be either $2$ or $3$, so there are two ways to choose it. After choosing the label that appears twice, we choose the remaining tile with a different label; there are $2$ ways to do this. Finally we order the three tiles: since two of them share a label, there are $\frac{3!}{2!}=3$ permutations. This case gives $2 \cdot 2 \cdot \frac{3!}{2!}=12$.</p>
<p>Case $3$: All three tiles have the same label. This must be $(3,3,3)$, so there is one arrangement in this case.</p>
<p>Summing up, you get $6+12+1=19$.</p>
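You can also confirm both counts by brute force (a sketch using Python's itertools; the multiset of tiles is taken from the problem statement):

```python
from itertools import combinations, permutations

tiles = [1, 2, 2, 3, 3, 3]

# order matters: distinct length-3 tuples drawable from the multiset
ordered = set(permutations(tiles, 3))

# order doesn't matter: distinct sorted triples
unordered = {tuple(sorted(c)) for c in combinations(tiles, 3)}
```

`permutations` picks by position, so it never uses more copies of a label than are available, and the set removes duplicates.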
|
3,852,362 | <p>Let <span class="math-container">$~X = \{(x,y)∈ℝ^2∶|x| ≤1,~|y|≤1\}$</span> and function <span class="math-container">$f : X →ℝ$</span> defined by <span class="math-container">$$f(x,y)=\dfrac{x\cos x + y \sin y}{x^2+y^2+\alpha}$$</span>where <span class="math-container">$\alpha\gt0$</span>, then the range of <span class="math-container">$f(x,y)$</span> is<br>
<span class="math-container">$a)~~$</span> not compact set<br>
<span class="math-container">$b)~~$</span> bounded open set<br>
<span class="math-container">$c)~~$</span> connected open set<br>
<span class="math-container">$d)~~$</span> connected closed set</p>
<p>How to find the range of <span class="math-container">$f(x,y)$</span> and the further things ?</p>
<p>Clearly, since <span class="math-container">$\alpha\gt0$</span>, for any value of <span class="math-container">$(x,y),~~x^2+y^2+\alpha\ne0$</span> and therefore <span class="math-container">$f(x,y)$</span> is defined for all values of <span class="math-container">$(x,y)$</span>. Now how to proceed further ?</p>
| Lutz Lehmann | 115,115 | <p>In <span class="math-container">$Y(s)=G(s)X(s)$</span> transfer the denominator to the left side and apply the inverse Laplace transform to get
<span class="math-container">$$
y'(t)+5y(t)=5x(t-5.8)
$$</span>
for the input function <span class="math-container">$x$</span> and the output function <span class="math-container">$y$</span>. By the usual conventions <span class="math-container">$y(t)=x(t)=0$</span> for <span class="math-container">$t\le 0$</span></p>
<hr />
<p>I switched from using the pair <span class="math-container">$(x,u)$</span> to the pair <span class="math-container">$(y,x)$</span>, as <span class="math-container">$u$</span> has two conventional uses, ones as the unit ramp or Heaviside function, also denoted <span class="math-container">$\Theta$</span>, and on the other hand as an arbitrary control input. Here it is used in the latter role. It is less confusing this way.</p>
<hr />
<p>The <span class="math-container">$z$</span> transform produces a linear multi-step method to this problem. One could just take the Euler or better Adams-Bashford method and some interpolation procedure to get about the same result. Explicit Euler is unstable, let's try implicit Euler
<span class="math-container">$$
y_{k+1} = y_{k}+\tau(x((k+1)\tau-5.8)-5y_{k+1})
$$</span>
so that with <span class="math-container">$\tau=0.5$</span> and linear interpolation
<span class="math-container">$$
3.5y_{k+1}-y_k= 0.5(0.8x_{k-11}+0.2x_{k-10})
$$</span>
making for a transfer function of
<span class="math-container">$$
z^{-11}\frac{0.1142857+0.0285714z}{z-0.285714}
$$</span></p>
<hr />
<p>To replicate <code>c2d</code>, according to <a href="https://www.mathworks.com/help/control/ug/continuous-discrete-conversion-methods.html" rel="nofollow noreferrer">https://www.mathworks.com/help/control/ug/continuous-discrete-conversion-methods.html</a>, one needs to solve the equation for the step function, <span class="math-container">$u[k]=1$</span> for <span class="math-container">$k\ge 0$</span>, <span class="math-container">$u(t)=1$</span> for <span class="math-container">$t\ge 0$</span>, else zero. Then sample the output and compute the transfer function.
<span class="math-container">$$
(e^{5t}y(t))'=5e^{5\cdot 5.8}\cdot u(t-5.8)e^{5(t-5.8)}
\\
e^{5t}y(t)=e^{5\cdot 5.8}u(t-5.8)(e^{5(t-5.8)}-1)
\\
y(t)=u(t-5.8)(1-e^{-5(t-5.8)})
$$</span>
Then compute the discretization
<span class="math-container">$$
Y[z]=\sum_{k=12}^\infty z^{-k}y(0.5k)=\frac{z^{-12}}{1-z^{-1}}
-\frac{z^{-12}e^{-1}}{1-e^{-2.5}z^{-1}}
$$</span>
Now <span class="math-container">$U[z]=\frac1{1-z^{-1}}$</span>, so that in their fraction
<span class="math-container">$$
H[z]=z^{-12}\left(1-e^{-1}\frac{z-1}{z-e^{-2.5}}\right)
=z^{-12} \frac{(1-e^{-1})z+(e^{-1}-e^{-2.5})}{z-e^{-2.5}}
\\
=z^{-12}\frac{0.6321205588285577\,z+0.28579444254754355}{z-0.0820849986238988}
$$</span>
is exactly the result of the Matlab routine.</p>
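As a numerical cross-check of those final coefficients (my own sketch; only the closed-form expressions above are evaluated):

```python
import math

b1 = 1 - math.exp(-1)                # leading numerator coefficient, 1 - e^{-1}
b0 = math.exp(-1) - math.exp(-2.5)   # constant numerator coefficient, e^{-1} - e^{-2.5}
a0 = math.exp(-2.5)                  # pole of the discretized system, e^{-5*tau}
delay = 5.8 / 0.5                    # 11.6 sample periods, hence the z^{-12} factor
```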
|
3,852,362 | <p>Let <span class="math-container">$~X = \{(x,y)∈ℝ^2∶|x| ≤1,~|y|≤1\}$</span> and function <span class="math-container">$f : X →ℝ$</span> defined by <span class="math-container">$$f(x,y)=\dfrac{x\cos x + y \sin y}{x^2+y^2+\alpha}$$</span>where <span class="math-container">$\alpha\gt0$</span>, then the range of <span class="math-container">$f(x,y)$</span> is<br>
<span class="math-container">$a)~~$</span> not compact set<br>
<span class="math-container">$b)~~$</span> bounded open set<br>
<span class="math-container">$c)~~$</span> connected open set<br>
<span class="math-container">$d)~~$</span> connected closed set</p>
<p>How to find the range of <span class="math-container">$f(x,y)$</span> and the further things ?</p>
<p>Clearly, since <span class="math-container">$\alpha\gt0$</span>, for any value of <span class="math-container">$(x,y),~~x^2+y^2+\alpha\ne0$</span> and therefore <span class="math-container">$f(x,y)$</span> is defined for all values of <span class="math-container">$(x,y)$</span>. Now how to proceed further ?</p>
| user577215664 | 475,762 | <p><span class="math-container">$$G(s) = e^{-5.8s} \frac{5}{s+5}$$</span>
<span class="math-container">$$\implies \dfrac {X(s)}{U(s)}= e^{-5.8s} \frac{5}{s+5}$$</span>
<span class="math-container">$$sX(s)+5X(s)= 5e^{-5.8s}U(s)$$</span>
Since <span class="math-container">$x(0)=0$</span>:
<span class="math-container">$$(sX(s)-x(0))+5X(s)= 5e^{-5.8s}U(s)$$</span>
Apply inverse Laplace Transform:
<span class="math-container">$$x'(t)+5x(t)= 5u(t-5.8)$$</span>
<span class="math-container">$$\implies A=-5, B=5$$</span></p>
|
105,071 | <p>As one may know, a <b>dynamical system</b> can be defined with a monoid or a group action on a set, usually a manifold or similar kind of space with extra structure, which is called the <i>phase space</i> or <i>state space</i> of the dynamical system. The monoid or group doing the acting is what I call the <i>time space</i> of the dynamical system, and is usually the naturals, integers, or reals. Often, one may require the evolution map to be continuous, differentiable, etc.</p>
<p>But has anyone studied a generalization in which we allow the <i>time space</i> to be something more general & exotic, a multidimensional space like $\mathbb{R}^n$, $\mathbb{C}$ (viewed as "2-dimensional" by considering it to be like $\mathbb{R}^2$), etc.? I'm especially curious about the case where the time space is $\mathbb{C}$ and the phase space is $\mathbb{C}^n$ or another complex manifold and the map is required to be holomorphic in both its arguments, as that holomorphism provides a natural linkage between the two dimensions that lets us think of the complex time as a single 2-dimensional time as opposed to two real times (any dynamical system with a timespace of $\mathbb{R}^n$ can be decomposed into a bunch of mutually-commutative evolution maps with timespaces of $\mathbb{R}$). My questions are:</p>
<ol>
<li><p>Is it true that the only dynamical system with phase-space $\mathbb{C}$ and time-space $\mathbb{C}$ where the evolution map is required to be holomorphic in both its arguments (the time and point to evolve) is the linear one given by
$$\phi^t(z) = e^{ut} z + K \frac{1 - e^{ut}}{1 - e^u}, K, u \in \mathbb{C}$$
? I suspect so, because an injective entire function is linear (in the sense of a “linear equation” not <em>necessarily</em> a “linear map”), and $\phi^{-t}$ must be the inverse of $\phi^t$. Thus, $\phi^t$ must have the form $a(t) z + b(t)$ with $a: \mathbb{C} \rightarrow \mathbb{C}$, $b: \mathbb{C} \rightarrow \mathbb{C}$. Am I right?</p></li>
<li><p>Are there any interesting (e.g. with complicated, even chaotic behavior) dynamical systems of this kind on $\mathbb{C}^n$? On more sophisticated complex manifolds? (for the phase space, that is) If so, can you provide an example? Or does the holomorphism requirement essentially rule this out? EDIT: I provide one below.</p></li>
<li><p>There is something else here, an interesting observation I made. Consider the above complex-time, holomorphic dynamical system. We can investigate the two prime behaviors represented by the real and imaginary times. We'll just set u = 1, K = 0 for here.</p></li>
</ol>
<p>In “real time”, the dynamics looks like an “explosion” in the plane: all points “blast” away from z = 0 at exponentially increasing velocity.</p>
<p>In “imaginary time”, the result is cyclic motion, swirling around $z = 0$ with constant angular velocity that depends only on the distance of the point from $z = 0$.</p>
<p>But if we trace the contours of these two evolutions, formed from different points on the plane, and then superimpose them, we have contours intersecting in what looks like contours from a contour graph of the image of the complex plane under the function $\exp(z)$! Conversely, we could say it looks like a countour plot of $\log(z)$ with a cut along a ray from $0$. So, somehow “naturally” related to the dynamical system $\phi^t(z) = \exp(t) z$ is the function $\exp(z)$ (or, perhaps, $\log(z)$).</p>
<p><img src="https://i.stack.imgur.com/kPq32.png" alt="plot of evoluton countours for first CTHDS"></p>
<p>Note that my plotting facilities are unfortunately pretty limited, so I can't give really nice graphs with lots of contours, just a few selected ones taken from evolutions of various points in both real and imag times.</p>
<p>But we have another case. To see this, we must turn our attention away from a phase space given by the complex plane to one given by the Riemann sphere, $\hat{\mathbb{C}}$. In this case, we still have the dynamical system as above, but we have an additional class of dynamical systems given by the “Moebius transformations”, which include the above linear-function dynamical systems as a special case. One example is
$$\phi^t(z) = \frac{(1 + e^{i\pi t}) z + (1 - e^{i\pi t})}{(1 - e^{i\pi t})z + (1 + e^{i\pi t})}$$.
It is easy to check that this is indeed a Moebius transformation of the Riemann sphere. This map is holomorphic everywhere on the Riemann sphere. Note that for integer step t, the unit-step map is the reciprocal map.</p>
<p>Now we consider the contour lines of the real and imaginary evolution, as before. They look like this:</p>
<p><img src="https://i.stack.imgur.com/MECR7.png" alt="plot of evolution contours for second CTHDS"></p>
<p>(Physics buffs may notice that the real evolution (concentric circles) reminds one of the lines of a <i>magnetic</i> field of a <i>magnetic</i> dipole (like a bar magnet), while the imaginary evolution (arcs joining at points) looks like the lines of an <i>electric</i> field of an <i>electric</i> dipole.)</p>
<p>Again, notice how the lines meet at right angles. It looks again like the image of the plane (or the Riemann sphere, perhaps?) under some function which may be holomorphic, though I'm not sure what that function is in this case. Is this the case? Is there such a function, with a special relation to this CTHDS in the same way as $\exp$ (or $\log$) is to the other?</p>
<p>But in any case, it appears that for a complex-time holomorphic dynamical system, or CTHDS, there exists an associated natural function. What is the significance of this function/map? How does it relate to the CTHDS? If you give me a CTHDS that I don't have a closed form for, can I find its natural map?</p>
| John B | 74,138 | <p>This question is from long ago, but I will give it a try. I want to note that multi-dimensional times do occur "naturally", in some reasonable manner. As an example, take coupled map lattices, with two times involved: the actual time and the space translation, along which different systems may interact. In this setting (which is thus a $\mathbb Z^2$ action) it is easier to produce many invariant measures, say for short-ranged actions. In fact one can even develop a thermodynamic formalism, equilibrium and Gibbs measures, etc.</p>
<p>In the context of higher rank actions with some hyperbolicity there has also been some progress, notably with work of Kalinin and Katok where the existence of an invariant geometric structure is obtained for the first time from homotopy data.</p>
|
271,343 | <p>I need help finding the integral of $\sin(\sqrt{x})dx$. I have the answer here but would like to know how to get there. </p>
| cderwin | 50,816 | <p>Use the substitution $t=\sqrt{x}$, and then use integration by parts. From the substitution, you should get:</p>
<p>$$\int\sin\sqrt{x}dx=2\int t\sin (t) dt$$</p>
<p>Then, use integration by parts. Let $u=t$ and $dv=\sin t \, dt$. Then $du=dt$ and $v=-\cos t$, and we have:</p>
<p>$$\int t\sin (t) dt=-t\cos t +\int \cos (t) dt=\sin t -t\cos t=\sin\sqrt{x}-\sqrt{x}\cos\sqrt{x}$$</p>
<p>and we have the solution:</p>
<p>$$\int\sin\sqrt{x}dx=2(\sin\sqrt{x}-\sqrt{x}\cos\sqrt{x})+C$$</p>
<p>I hope this clears up the matter for you!</p>
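As a numerical sanity check (my own sketch), differentiating the antiderivative should reproduce the integrand:

```python
import math

def F(x):
    # antiderivative obtained above (constant of integration omitted)
    return 2 * (math.sin(math.sqrt(x)) - math.sqrt(x) * math.cos(math.sqrt(x)))

# central difference of F at x = 2 should match sin(sqrt(2))
x, h = 2.0, 1e-6
derivative = (F(x + h) - F(x - h)) / (2 * h)
```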
|
35,585 | <p>Let H be an infinite dimensional and separable Hilbert space. Let C be a closed and
connected subset of H containing more than one point. Can C ever be the countable union
of closed and totally disconnected subsets of H?</p>
| Gerald Edgar | 454 | <p>$C$ is simply a connected complete separable metric space with more than one point.
Now the union of countably many closed sets of topological dimension zero must again have dimension zero. But I suppose "totally disconnected" is not quite the same as "zero dimensional" so this is not yet a complete answer.</p>
|
35,585 | <p>Let H be an infinite dimensional and separable Hilbert space. Let C be a closed and
connected subset of H containing more than one point. Can C ever be the countable union
of closed and totally disconnected subsets of H?</p>
| Bill Johnson | 2,554 | <p>Let $C$ be an explosion point space such as the Knaster–Kuratowski fan (<a href="http://en.wikipedia.org/wiki/Knaster" rel="nofollow">http://en.wikipedia.org/wiki/Knaster</a>–Kuratowski_fan) and $p$ the explosion point in $C$.<br>
Set $A_n = \{p\}\cup (C\sim B(p; 1/n))$. </p>
|
1,003,096 | <p>Let $G=(\mathbb{Q}-\{0\},*)$ and $H=\{\frac{a}{b}\mid a,b\text{ are odd integers}\}$.</p>
<ol>
<li>Show $H$ is a normal subgroup of $G$.</li>
<li>Show that $G/H \cong (\mathbb{Z},+)$</li>
</ol>
<p>I know that there are multiple definitions for normal subgroup and I am having a hard time to develop the proof for these particular sets. </p>
<p>For part 2. I need help developing a function from $G/H \to (\mathbb{Z},+)$.</p>
| anomaly | 156,999 | <p>1) $G$ is abelian, so any subgroup is normal.</p>
<p>2) Show that every element of $G/H$ is of the form $2^kH$ with $k\in \mathbb{Z}$; the map $2^k H \to k$ then gives you the required isomorphism.</p>
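<p>To make the hint concrete: the isomorphism sends a nonzero rational to the exponent of $2$ in its factorization. A small Python sketch (the helper name <code>two_adic_exponent</code> is my own) illustrates that this exponent is well defined and additive, as a group homomorphism should be:</p>

```python
from fractions import Fraction

def two_adic_exponent(q):
    # the k with q = 2**k * (a/b) for odd integers a, b
    num, den = q.numerator, q.denominator
    k = 0
    while num % 2 == 0:
        num //= 2
        k += 1
    while den % 2 == 0:
        den //= 2
        k -= 1
    return k

q1, q2 = Fraction(12, 5), Fraction(-7, 24)
# additivity: the exponent of a product is the sum of the exponents
assert two_adic_exponent(q1 * q2) == two_adic_exponent(q1) + two_adic_exponent(q2)
print(two_adic_exponent(Fraction(12, 5)))  # 2, since 12/5 = 2^2 * 3/5
```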
|
57,131 | <p>Evaluating the following lines in my computer takes near six seconds in Mathematica 10 and near 5 in Mathematica 9. I consider this very slow as only quantiles are being calculated from data to create this simple set of charts. I think that Mathematica should do much better than this. Do you agree? Thoughts?</p>
<pre><code>data = Table[RandomVariate[NormalDistribution[RandomInteger[5], 1], 100000], {10}];
Timing@BoxWhiskerChart[data, ChartLabels -> {"a", "b", "c"}, PerformanceGoal -> "Speed"]
</code></pre>
| paw | 4,539 | <p>You could build your own plot function and calculate the <code>Quantiles</code> separately as @Verbeia suggested.</p>
<p>Here is a little example you could expand on:</p>
<pre><code>WhiskerChart[data_List] := Module[{max, min, median, min25, max75},
  max = Max[data];
  min = Min[data];
  median = Quantile[data, 0.5]; (* the white line marks the median *)
  min25 = Quantile[data, 0.25];
  max75 = Quantile[data, 0.75];
  Graphics[
   {Line[{{1, min}, {1, max}}],
    Line[{{0.8, min}, {1.2, min}}],
    Line[{{0.8, max}, {1.2, max}}],
    {Orange, Rectangle[{0.7, min25}, {1.3, max75}]},
    {White, Line[{{0.7, median}, {1.3, median}}]}},
   Frame -> True, AspectRatio -> 1, PlotRangePadding -> 1]
  ]
data = RandomVariate[NormalDistribution[1, 1], 100000];
BoxWhiskerChart[data, AspectRatio -> 1] // Timing
WhiskerChart[data] // Timing
</code></pre>
<p><img src="https://i.stack.imgur.com/8uEQP.png" alt="enter image description here"></p>
|
65,582 | <p>Let $G$ be a locally compact group on which there exists a Haar measure, etc..</p>
<p>Now I am supposed to take such a metrisable $G$, and given the existence of some metric on $G$, prove that there exists a translation-invariant metric, i.e., a metric $d$ such that $d(x,y) = d(gx,gy)$ for all $x,y,g \in G$. How to go about this?</p>
| Makoto Kato | 28,422 | <p>Though the answer was essentially given in the comment area, I think it's nice to have the full proof here.
We follow largely Bourbaki.</p>
<p><strong>Notation</strong>
Let $X$ be a set.
Let $U$ and $V$ be subsets of $X \times X$.
We define $UV$ = {$(x, y) \in X \times X;$ there exists $z \in X$ such that $(x, z) \in U$ and $(z, y) \in V$}.</p>
<p>Let $U, V, W$ be subsets of $X \times X$.
We define $UVW = (UV)W$.</p>
<p>Similarly we define $U^n, n = 1, 2, ...$</p>
<p>We define $U^{-1} = \{(x, y) \in X \times X; (y, x) \in U\}$.</p>
<p><strong>Lemma</strong>
Let $X$ be a <a href="http://en.wikipedia.org/wiki/Uniform_space" rel="nofollow">uniform space</a>.
Let $U_n, n = 1, 2, ...$ be a sequence of entourages on $X$.
Suppose $(U_{n+1})^3 \subset U_n$ for each $n$.
We define a function $g:X \times X \rightarrow [0, \infty)$ as follows.</p>
<ul>
<li><p>If $(x, y) \in \bigcap U_n$, then $g(x, y) = 0$.</p></li>
<li><p>If $(x, y) \in U_n - U_{n+1}$, then $g(x, y) = 1/2^n$.</p></li>
<li><p>If $(x, y) \in X\times X - U_1$, then $g(x, y) = 1$.</p></li>
</ul>
<p>Let $(x, y) \in X\times X$.
Let $z_0, z_1, ..., z_p$ be a finite sequence of elements of $X$ such that $x = z_0, y = z_p$.
Then $\sum_{i=0}^{p-1} g(z_i, z_{i+1}) $ $\geq$ $(1/2)g(x, y)$.</p>
<p>Proof(Bourbaki):
We use induction on $p$.
If $p = 1$, the assertion is clear.</p>
<p>Suppose $p > 1$.
Let $a = \sum_{i=0}^{p-1} g(z_i, z_{i+1})$.
Since $g(x, y) \leq 1$, if $a \geq 1/2$, then $a \geq (1/2)g(x, y)$.
Hence we can assume $a < 1/2$.
Let $h$ = max {$q; \sum_{i=0}^{q-1} g(z_i, z_{i+1}) \leq a/2$}.</p>
<p>Then $\sum_{i=0}^{h} g(z_i, z_{i+1}) > a/2$.</p>
<p>Hence
$\sum_{i=h+1}^{p-1} g(z_i, z_{i+1}) \leq a/2$.</p>
<p>By the induction assumption,
$(1/2)g(x, z_h) \leq \sum_{i=0}^{h-1} g(z_i, z_{i+1}) \leq a/2$.
Hence</p>
<p>(1) $g(x, z_h) \leq a$</p>
<p>Similarly,
$(1/2)g(z_{h + 1}, y) \leq \sum_{i=h+1}^{p-1} g(z_i, z_{i+1}) \leq a/2$.
Hence</p>
<p>(2) $g(z_{h + 1}, y) \leq a$.</p>
<p>Clearly, </p>
<p>(3) $g(z_h, z_{h + 1}) \leq a$</p>
<p>Let $k$ = min {$k \in \mathbb{Z}; k > 0, 1/2^k \leq a$}.
Since $a < 1/2$, $k \geq 2$.</p>
<p>By (1), (3), (2), we get:</p>
<p>(a) $(x, z_h) \in U_k$.</p>
<p>(b) $(z_h, z_{h + 1}) \in U_k$.</p>
<p>(c) $(z_{h + 1}, y) \in U_k$.</p>
<p>Hence, $(x, y) \in (U_k)^3 \subset U_{k-1}$</p>
<p>Hence $g(x, y) \leq 1/2^{k-1} \leq 2a$.
<strong>QED</strong></p>
<p><strong>Theorem 1</strong>
Let $X$ be a uniform space.
Suppose $X$ has a countable fundamental system of entourages.
Then there exists a <a href="http://en.wikipedia.org/wiki/Pseudometric_space" rel="nofollow">pseudometric</a> $d$ on $X$ such that $d$ is compatible with the uniform structure of $X$.</p>
<p>Proof:
Let $V_n, n = 1, 2, ...$ be a countable fundamental system of entourages.
By induction and the axiom of dependent choice, we can define a sequence of entourages $U_n, n = 1, 2, ...$ which satisfies the following conditions.</p>
<p>(1) Each $U_n$ is symmetric, i.e. $U_n = (U_n)^{-1}$.</p>
<p>(2) $U_1 \subset V_1$</p>
<p>(3) $(U_{n+1})^3 \subset U_n \cap V_{n+1}$ for each $n \geq 1$.</p>
<p>Let $f(x, y)$ = inf $\sum_{i=0}^{p-1} g(z_i, z_{i+1})$ for each $(x, y) \in X\times X$,
where $g:X\times X \rightarrow [0, \infty)$ is the function defined in the Lemma and
the inf is taken over every finite sequence of elements $z_0, z_1, ..., z_p$ of $X$ such that $x = z_0, y = z_p$.</p>
<p>Clearly $f$ is symmetric and satisfies the triangle inequality.
Since $f(x, y) \leq g(x, y)$, $f(x, x) = 0$.
Hence $f$ is a pseudometric.</p>
<p>By the Lemma, $(1/2)g(x, y) \leq f(x, y) \leq g(x, y)$.</p>
<p>Let $W_a$ = {$(x, y) \in X\times X ; f(x, y) < a$} for any $a > 0$.</p>
<p>Let $a > 0$ be a real number.
Let $k$ be an integer such that $k > 0$ and $1/2^k < a$.
Let $(x, y) \in U_k$.
$f(x, y) \leq g(x, y) \leq 1/2^k < a$.
Hence $U_k \subset W_a$.</p>
<p>Conversely, let $k > 0$ be an integer.
Suppose $f(x, y) \leq 1/2^{k+1}$.
Since $f(x, y) \geq (1/2)g(x, y)$, $g(x, y) \leq 1/2^k$.
Hence $W_{1/2^{k+1}} \subset U_k$.
<strong>QED</strong></p>
<p><strong>Theorem 2</strong>
Let $G$ be a topological group.
Suppose $G$ has a countable fundamental system of neighborhoods of the identity $e$.
Then there exists a pseudometric $d$ on $G$ such that $d(x, y) = d(zx, zy)$ for any $x, y, z \in G$. Moreover $d$ is compatible with the left uniform structure of $G$.</p>
<p>Proof:
Let $V_n, n = 1, 2, ...$ be a countable fundamental system of neighborhoods of $e$.
We can assume that $V_n = (V_n)^{-1}$ and $(V_{n+1})^3 \subset V_n$ for each $n$.
Let $U_n$ = {$(x, y) \in G\times G ; x^{-1}y \in V_n$} for each $n$.
For each $n$, $U_n$ is symmetric and $(U_{n+1})^3 \subset U_n$.
$U_n, n = 1, 2, ...$ is a fundamental system of entourages of the left uniform structure of $G$.</p>
<p>Let $f(x, y)$ = inf $\sum_{i=0}^{p-1} g(z_i, z_{i+1})$ for each $(x, y) \in G\times G$,
where $g:G\times G \rightarrow [0, \infty)$ is the function defined in the Lemma and
the inf is taken over every finite sequence of elements $z_0, z_1, ..., z_p$ of $G$ such that $x = z_0, y = z_p$.</p>
<p>By Theorem 1, $f$ is a pseudometric compatible with the left uniform structure of $G$.</p>
<p>Let $(x, y) \in U_n$.
For any $z \in G$, $(zx)^{-1}zy = x^{-1}y \in V_n$.
Hence $(zx, zy) \in U_n$.</p>
<p>Conversely, if $(zx, zy) \in U_n$, then $(x, y) \in U_n$.
Hence $g(zx, zy) = g(x, y)$.
Hence $f(zx, zy) = f(x, y)$.
<strong>QED</strong></p>
<p><strong>Corollary</strong>
Let $G$ be a metrizable topological group.
Then there exists a metric $d$ on $G$ such that $d(x, y) = d(zx, zy)$ for any $x, y, z \in G$.
Moreover $d$ is compatible with the left uniform structure of $G$.</p>
<p>Proof:
Since $G$ is metrizable, it has a countable fundamental system of neighborhoods of the identity $e$.
Hence, by Theorem 2, there exists a pseudometric $d$ on $G$ such that $d(x, y) = d(zx, zy)$ for any $x, y, z \in G$.
Since $d$ is compatible with the left uniform structure of $G$ and $G$ is Hausdorff, $d$ is a metric.<strong>QED</strong></p>
|
3,106,574 | <p>Let <span class="math-container">$(a_n) _{n\ge 0}$</span> <span class="math-container">$a_{n+2}^3+a_{n+2}=a_{n+1}+a_n$</span>,<span class="math-container">$\forall n\ge 1$</span>, <span class="math-container">$a_0,a_1 \ge 1$</span>. Prove that <span class="math-container">$(a_n) _{n\ge 0}$</span> is convergent.<br>
I could prove that <span class="math-container">$a_n \ge 1$</span> by mathematical induction, but here I am stuck. </p>
| Dr. Wolfgang Hintze | 198,592 | <p>This is the first time I have seen this type of recursion relation, in which the next term of the sequence is not given directly but only through a function of it. This makes the problem particularly interesting (for me, at least).</p>
<p>Hence an uncommon approach seems to be justified. I am going to relate convergence to the stability of simple (constant) solutions.</p>
<p><strong>1. Layout of the solution</strong></p>
<p>Let us start with the observation that there are three constant solutions of the recursion </p>
<p><span class="math-container">$$a_{n+2}^3+a_{n+2}=a_{n+1}+a_{n}\tag{1}$$</span></p>
<p>These are given by the roots of the equation</p>
<p><span class="math-container">$$z^3+z = z + z\tag{2}$$</span></p>
<p>that is</p>
<p><span class="math-container">$$s_1 = \{a_n = - 1\}$$</span>
<span class="math-container">$$s_2 = \{a_n = 0\}\tag{3}$$</span>
<span class="math-container">$$s_3=\{a_n = + 1\}$$</span></p>
<p>We show below that <span class="math-container">$s_1$</span> and <span class="math-container">$s_3$</span> are stable solutions, and <span class="math-container">$s_2$</span> is unstable.</p>
<p>A stable solution is definied as one where a sufficiently small deviation from the solution is damped out for <span class="math-container">$n\to\infty$</span>. In an unstable solution, in contrast, small deviations increase so that the sequence departs from the solution for large <span class="math-container">$n$</span>.</p>
<p>Depending on the initial values <span class="math-container">$a_1$</span> and <span class="math-container">$a_2$</span> (except the singular case <span class="math-container">$a_1 = a_2=0$</span>) the solution approaches one of the stable solutions. This statement is equivalent to saying that the sequence is convergent.</p>
<p><strong>2. Stability analysis</strong></p>
<p>Applying the common procedure we let</p>
<p><span class="math-container">$$a_k= s + \delta_k\tag{4}$$</span></p>
<p>where <span class="math-container">$s$</span> is the constant solution to be studied and <span class="math-container">$\delta_k$</span> is a small deviation. Small is defined as <span class="math-container">$|\delta_k|<<1$</span> </p>
<p>Inserting <span class="math-container">$(4)$</span> into <span class="math-container">$(1)$</span> gives</p>
<p><span class="math-container">$$(s + \delta_{n+2})^3 +(s + \delta_{n+2})= (s + \delta_{n+1})+(s + \delta_{n}) $$</span></p>
<p>Expanding the 3rd power and keeping only the linear term in <span class="math-container">$\delta$</span> gives</p>
<p><span class="math-container">$$s^3 + 3 s^2 \delta_{n+2} +(s + \delta_{n+2})= (s + \delta_{n+1})+(s + \delta_{n}) $$</span></p>
<p>Observing that <span class="math-container">$s$</span> solves <span class="math-container">$(2)$</span> this can be simplified to</p>
<p><span class="math-container">$$(1+3 s^2)\delta_{n+2} =\delta_{n+1}+\delta_{n}\tag{5} $$</span></p>
<p>This is a linear recursion equation which can be solved by standard methods.</p>
<p>In the case <span class="math-container">$s=0$</span> <span class="math-container">$(5)$</span> reduces to the Fibonacci recursion which is known to diverge, in agreement with our identifying this solution as unstable.</p>
<p>For the cases <span class="math-container">$s=\pm 1$</span> I leave as an exercise to the reader to show that the solutions <span class="math-container">$\delta_n \to 0$</span> for <span class="math-container">$n\to\infty$</span> and arbitrary but suffciently small initial conditions.</p>
<p><strong>3. Numerical study (in Mathematica)</strong></p>
<p>As an example of how to treat the recursion <span class="math-container">$(1)$</span> numerically I used these commands in Mathematica</p>
<pre><code>a[1] := 1; a[2] := 0;
Table[a[n] =
z /. FindRoot[z^3 + z == a[n - 1] + a[n - 2], {z, 0}], {n, 3, 20}]
{0.682328, 0.53187, 0.765544, 0.794984, 0.879716, 0.913186, 0.946085, \
0.963849, 0.977093, 0.985069, 0.990473, 0.993857, 0.996071, 0.997477, \
0.998385, 0.998965, 0.999337, 0.999575}
</code></pre>
<p>Explanation: in the first line the initial values are defined. The FindRoot command solves for <span class="math-container">$z$</span> and thus finds the value of <span class="math-container">$a_{n+2}$</span> for the given right-hand side, i.e. as a function of <span class="math-container">$a_{n+1}$</span> and <span class="math-container">$a_{n}$</span>. The Table command provides a list of the first few terms of the sequence.</p>
<p>We can see that with these initial conditions the values approach the stable solution <span class="math-container">$a_n=+1$</span></p>
<p>In this case</p>
<pre><code>a[1] := -0.1; a[2] := 0;
Table[a[n] =
z /. FindRoot[z^3 + z == a[n - 1] + a[n - 2], {z, 0}], {n, 3, 20}]
{-0.0990289, -0.0980852, -0.19023, -0.268877, -0.396685, -0.522729, \
-0.647697, -0.749461, -0.828489, -0.884939, -0.924151, -0.950462, \
-0.967888, -0.979268, -0.986656, -0.991426, -0.994498, -0.996472}
</code></pre>
<p>the stable solution <span class="math-container">$a_n=-1$</span> is approached.</p>
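<p>For readers without Mathematica, the same experiment can be sketched in Python. Since <span class="math-container">$z \mapsto z^3+z$</span> is strictly increasing, the cubic has a unique real root, which the bisection below finds; the tolerance and iteration count are arbitrary choices of mine:</p>

```python
def next_term(c, tol=1e-12):
    # z^3 + z is strictly increasing, so z^3 + z = c has a unique real
    # root; find it by bisection on a bracket guaranteed to contain it
    lo, hi = -abs(c) - 1, abs(c) + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** 3 + mid < c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# same initial values as the Mathematica experiment above
a = [1.0, 0.0]
for _ in range(40):
    a.append(next_term(a[-1] + a[-2]))
print(round(a[2], 6), round(a[-1], 6))  # first new term ~0.682328; tail near +1
```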
|
3,514,494 | <p>Expand into Laurent's series after the power of z-a for the following function</p>
<p><span class="math-container">$$ f(z)=\frac{z+z^2}{(1-z)^3} $$</span>for a = 0 .</p>
<p>I know the formula for the Laurent's series but I don't know how to apply it. </p>
| kludg | 42,926 | <p>If you need expected value of won prizes, it is
<span class="math-container">$$\frac{17}{36}\cdot 1 + \frac{8}{36}\cdot 2 + \frac{1}{36}\cdot 3$$</span></p>
|
3,514,494 | <p>Expand into Laurent's series after the power of z-a for the following function</p>
<p><span class="math-container">$$ f(z)=\frac{z+z^2}{(1-z)^3} $$</span>for a = 0 .</p>
<p>I know the formula for the Laurent's series but I don't know how to apply it. </p>
| Math1000 | 38,584 | <p><span class="math-container">$X$</span> does not have a Poisson distribution because it only takes finitely many values. Observe that <span class="math-container">$X=X_1+X_2+X_3$</span> where <span class="math-container">$X_1\sim\mathrm{Ber}\left(\frac16\right)$</span>, <span class="math-container">$X_2\sim\mathrm{Ber}\left(\frac13\right)$</span>, and <span class="math-container">$X_3\sim\mathrm{Ber}\left(\frac12\right)$</span> are independent. Therefore
<span class="math-container">\begin{align}
\mathbb P(X=0) &= \mathbb P(X_1=0,X_2=0,X_3=0)\\ &= \mathbb P(X_1=0)\mathbb P(X_2=0)\mathbb P(X_3=0)\\
&=\left(1-\frac16\right)\left(1-\frac13\right)\left(1-\frac12\right)\\
&= \frac5{18},
\end{align}</span>
<span class="math-container">\begin{align}
\mathbb P(X=1) &= \mathbb P\left(\bigcup_{i=1}^3 \{X_i=1\}\cap\bigcap_{j\in\{1,2,3\}\setminus\{i\}}\{X_j=0\} \right)\\
&= \sum_{i=1}^3 \mathbb P(X_i=1)\prod_{j\in\{1,2,3\}\setminus\{i\}}\mathbb P(X_j=0)\\
&= \mathbb P(X_1=1)\mathbb P(X_2=0)\mathbb P(X_3=0)+\mathbb P(X_1=0)\mathbb P(X_2=1)\mathbb P(X_3=0)+\mathbb P(X_1=0)\mathbb P(X_2=0)\mathbb P(X_3=1)\\
&= \frac16\left(1-\frac13\right)\left(1-\frac12\right) +\left(1-\frac16\right)\frac13\left(1-\frac12\right) + \left(1-\frac16\right)\left(1-\frac13\right)\frac12 \\&= \frac{17}{36},
\end{align}</span>
<span class="math-container">\begin{align}
\mathbb P(X=2) &= \mathbb P(X_1+X_2+X_3=2)\\
&= \mathbb P(X_1=1)\mathbb P(X_2=1)\mathbb P(X_3=0)+\mathbb P(X_1=1)\mathbb P(X_2=0)\mathbb P(X_3=1)+\mathbb P(X_1=0)\mathbb P(X_2=1)\mathbb P(X_3=1)\\
&= \frac16\cdot\frac13\left(1-\frac12\right)+ \frac16\left(1-\frac13\right)\frac12 + \left(1-\frac16\right)\frac13\cdot\frac12\\
&= \frac29,
\end{align}</span>
and
<span class="math-container">\begin{align}
\mathbb P(X=3) &= \mathbb P(X_1+X_2+X_3=3)\\ &= \mathbb P(X_1=1)\mathbb P(X_2=1)\mathbb P(X_3=1)\\
&= \frac16\cdot\frac13\cdot\frac12 = \frac1{36}.
\end{align}</span>
The expected value of <span class="math-container">$X$</span> is
<span class="math-container">\begin{align}
\mathbb E[X] &= \sum_{i=0}^3 i\cdot\mathbb P(X=i)\\
&= 0\cdot\frac5{18} + 1\cdot\frac{17}{36} + 2\cdot\frac29 + 3\cdot\frac1{36}\\
&= 1.
\end{align}</span></p>
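<p>The distribution and the mean can be double-checked by brute-force enumeration of the three independent Bernoulli trials (success probabilities <span class="math-container">$\frac16$</span>, <span class="math-container">$\frac13$</span>, <span class="math-container">$\frac12$</span> as above); this is only a numerical sketch of the computation, not part of the original answer:</p>

```python
from fractions import Fraction
from itertools import product

# win probabilities of the three independent prizes
probs = [Fraction(1, 6), Fraction(1, 3), Fraction(1, 2)]

dist = {k: Fraction(0) for k in range(4)}
for outcome in product((0, 1), repeat=3):   # all 8 win/lose patterns
    p = Fraction(1)
    for hit, q in zip(outcome, probs):
        p *= q if hit else 1 - q
    dist[sum(outcome)] += p

# matches the values 5/18, 17/36, 2/9, 1/36 computed above
mean = sum(k * p for k, p in dist.items())
print(dist, mean)  # the mean is exactly 1
```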
|
3,910,345 | <p>Recently a lecturer used this notation, which I assume is a sort of twisted form of Leibniz notation:</p>
<p><span class="math-container">$$y\,\mathrm{d}x - x\,\mathrm{d}y \equiv -x^2\,\mathrm{d}\left(\frac{y}{x}\right)$$</span></p>
<p>The logic here was that this could be used as:</p>
<p><span class="math-container">$$\begin{align}
-x^2\,\mathrm{d}\left(\frac{y}{x}\right) &\equiv -x^2\,\left(\frac{\mathrm{d}y}{x} -\frac{y}{x^2}\,\mathrm{d}x\right)\\
&\equiv y\mathrm{d}x - x\mathrm{d}y
\end{align}
$$</span></p>
<p>Why is this legal?</p>
<p>I can see some kind of differentiation going on with the second term in the above equivalence, producing the <span class="math-container">$\frac{1}{x^2}$</span>, but having the single <span class="math-container">$\mathrm{d}$</span> seems like a really weird abuse of notation, and I don't quite follow why it splits the single <span class="math-container">$\frac{y}{x}$</span> fraction into two parts.</p>
| Ethan Bolker | 72,858 | <p>In the context for this discussion you have two variables <span class="math-container">$x$</span> and <span class="math-container">$y$</span> that are somehow related. Perhaps you have <span class="math-container">$y$</span> as a function of <span class="math-container">$x$</span>, or perhaps both depend on some other parameter <span class="math-container">$t$</span>. You are interested in knowing how small changes in <span class="math-container">$x$</span> and <span class="math-container">$y$</span> change the quotient <span class="math-container">$y/x$</span>, written as the product
<span class="math-container">$y \times (1/x)$</span>. So what you are looking for is <span class="math-container">$d(y/x)$</span>. If <span class="math-container">$y$</span> depends explicitly on <span class="math-container">$x$</span> you can think of this as calculating <span class="math-container">$d/dx$</span>. If the dependence is just implicit it's easier to work with the differentials.</p>
<p>The actual algebra uses the product rule and the rule for differentiating <span class="math-container">$1/x$</span>. You could do it directly with the quotient rule.</p>
<p>To see more intuitively what is going on, simplify the expression
<span class="math-container">$$
-x^2\left( \frac{y+dy}{x+dx} - \frac{y}{x} \right)
$$</span>
and remember that <span class="math-container">$dx$</span> and <span class="math-container">$dy$</span> are small.</p>
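<p>One can also test the identity numerically along a sample curve; the particular functions <span class="math-container">$x(t)=t^2+1$</span>, <span class="math-container">$y(t)=\sin t$</span>, the point <span class="math-container">$t=0.7$</span> and the step size below are all arbitrary choices of mine:</p>

```python
import math

def x(t): return t * t + 1.0   # an arbitrary sample curve
def y(t): return math.sin(t)

def deriv(f, t, h=1e-6):
    # central finite difference
    return (f(t + h) - f(t - h)) / (2 * h)

t = 0.7
lhs = y(t) * deriv(x, t) - x(t) * deriv(y, t)       # y dx - x dy
rhs = -x(t) ** 2 * deriv(lambda s: y(s) / x(s), t)  # -x^2 d(y/x)
print(abs(lhs - rhs) < 1e-6)  # True
```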
|
3,995,986 | <p>Need help integrating:
<span class="math-container">$$\int _0^{\infty }\:\:\frac{6}{\theta}xe^{-\frac{2x}{\theta }}\left(1-e^{-\frac{x}{\theta }}\right)dx$$</span></p>
<p>I think I should multiply the <span class="math-container">$$xe^{-\frac{2x}{\theta }}$$</span> out and then use integration by parts but it is not really working for me?</p>
| Quanto | 686,284 | <p>Integrate by parts as follows</p>
<p><span class="math-container">\begin{align}
& \int _0^{\infty }\:\:\frac{6}{\theta}xe^{-\frac{2x}{\theta }}\left(1-e^{-\frac{x}{\theta }}\right)dx\\
=& \frac6{\theta} \int _0^{\infty }
x\> d \left( -\frac {\theta}2 e^{-\frac{2x}{\theta } }
+ \frac {\theta}3e^{-\frac{3x}{\theta } } \right)
= \int _0^{\infty }
\left( 3e^{-\frac{2x}{\theta } }
-2 e^{-\frac{3x}{\theta } } \right)dx
= \frac{5\theta}{6}
\end{align}</span></p>
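<p>As a sanity check (my own addition, with arbitrary choices of <span class="math-container">$\theta$</span>, truncation point and step size): for <span class="math-container">$\theta = 2$</span> the value should be <span class="math-container">$\frac{5\theta}{6} = \frac53 \approx 1.6667$</span>.</p>

```python
import math

theta = 2.0
def f(x):
    return (6 / theta) * x * math.exp(-2 * x / theta) * (1 - math.exp(-x / theta))

# crude trapezoidal rule on [0, 40]; the tail beyond 40 is negligible
n, b = 200_000, 40.0
h = b / n
total = h * (0.5 * (f(0.0) + f(b)) + sum(f(i * h) for i in range(1, n)))
print(round(total, 6))  # ≈ 5 * theta / 6 = 1.666667
```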
|
642,324 | <p>According to <a href="http://en.wikipedia.org/wiki/Regression_analysis" rel="nofollow">wikipedia</a>, regression analysis is a statistical process for estimating the relationships among variables. Regression analysis is widely used for prediction and forecasting. So why is regression analysis also used as statistical test? For example, in <a href="http://www.ats.ucla.edu/stat/spss/whatstat/" rel="nofollow">this page</a>, logistic regression and linear regression are listed among t-test, ANOVA, chi-square test etc.</p>
| JJacquelin | 108,514 | <p>In my view, regression analysis is a mathematical tool which can be used in various circumstances and for various kinds of problems. Of course, in many statistical problems a regression technique is commonly used. But there are also problems involving regression where nothing is randomly specified and where no statistics are needed.
A mathematical tool on the one hand, and a mathematical or physical problem on the other hand, are two different things. </p>
|
29,255 | <p>sorry! am not clear with these questions</p>
<ol>
<li><p>why an empty set is open as well as closed?</p></li>
<li><p>why the set of all real numbers is open as well as closed?</p></li>
</ol>
| sihong xie | 44,881 | <p>$\emptyset$ is open. By definition, $X$ is open if $\forall x\in X$ there is an open set $U\subset X$ such that $x\in U$. Since there is no point in $\emptyset$, the condition of the definition is vacuously satisfied (a logical convention).</p>
<p>$\mathbb{R}$ is open (check Andrea's answer), so its complement, $\emptyset$ is closed.</p>
<p>Therefore, $\emptyset$ is both open and closed.</p>
|
2,128,588 | <p>The question;</p>
<p>$U = \{x |Ax = 0\}$ If $ A = \begin{bmatrix}1 & 2 & 1 & 0 & -2\\ 2 & 1 & 2 & 1 & 2\\1 & 1 & 0 & -1 & -2\\ 0 & 0 & 2 & 0 & 4\end{bmatrix}$</p>
<p>Find a basis for $U$.</p>
<p><hr>
To make it linearly independent, I reduce the rows of $A$;</p>
<p>$\begin{bmatrix}1 & 0 & 0 & 0 & 0\\0 & 1 & 0 & 0 & -2\\0 & 0 & 1 & 0 & 2\\0 & 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}x_1 & x_2 & x_3 & x_4\end{bmatrix} = 0$</p>
<p>So I figure I just have to list the rows out as vectors;</p>
<p>Basis vectors = $\begin{pmatrix}1\\0\\0\\0\\0\end{pmatrix} , \begin{pmatrix}0\\1\\0\\0\\-2\end{pmatrix} , \begin{pmatrix}0\\0\\1\\0\\2\end{pmatrix} , \begin{pmatrix}0\\0\\0\\1\\0\end{pmatrix}$</p>
<p>(can't figure out how to surround it all with curly brackets in MathJax).</p>
<p><hr>
Am I doing this properly? It feels too simple...</p>
| Alvin Jin | 276,134 | <p>Hint: $n=1$ gives us two "successes" (3 and 6) and thus a probability of 1/3.</p>
<p>$n=2$ gives us $2 + 5 + 4 + 1 = 12$ successes (corresponding to 3,6,9,12). There are a total of 36 outcomes. This gives a probability of $1/3$.</p>
<p>$n=3$ gives us $1 + 10 + 25 +25 +10 + 1 = 72$ successes (corresponding to 3,6,9,12,15,18). There are a total of 216 outcomes. This gives a probability of $1/3$.</p>
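<p>The counts in the hint can be reproduced by exhaustive enumeration; a small Python sketch (my own addition, not part of the original hint):</p>

```python
from fractions import Fraction
from itertools import product

# count the rolls of n fair dice whose sum is a multiple of 3
counts = {}
for n in (1, 2, 3):
    rolls = list(product(range(1, 7), repeat=n))
    hits = sum(1 for roll in rolls if sum(roll) % 3 == 0)
    counts[n] = (hits, len(rolls))
    print(n, hits, len(rolls), Fraction(hits, len(rolls)))  # probability 1/3 each time
```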
|
480,727 | <p>If $$2^x=3^y=6^{-z}$$ and $x,y,z \neq 0 $ then prove that:$$ \frac{1}{x}+\frac{1}{y}+\frac{1}{z}=0$$</p>
<p>I have tried starting with taking logartithms, but that gives just some more equations.</p>
<p>Any specific way to solve these type of problems?</p>
<p>Any help will be appreciated.</p>
| lab bhattacharjee | 33,337 | <p>As $\log_aa=1$ and $\log_a(bc)=\log_ab+\log_ac,$</p>
<p>applying logarithm wrt $2,$ </p>
<p>$x=y\log_23=-z\log_26=-z(1+\log_23)$</p>
<p>$x=y\log_23\implies \log_23=\frac xy$</p>
<p>and put this value of $\log_23$ in $\displaystyle x=-z(1+\log_23)$ </p>
<p>to eliminate $\log_23$ and simplify.</p>
|
1,386,261 | <p>They start by choosing $m$ s.t. $|s_n - s| \lt \frac{1}{2} |s|$ if $n > m$.
From here, it looks like they use the triangular inequality $|s_n - s| + |s_n| < |s|$ to come up with this statement but I'm not sure as they introduce some variable m.</p>
<p>Next, given $\epsilon > 0$, there is a $N > m$ s.t $n \geq N$ (why not just state this in the first place?) implies $|s_n - s| < \frac{1}{2}|s^2|\epsilon$.</p>
<p>Hence for $n \geq N$ we have:</p>
<p>$|\frac{1}{s_n} - \frac{1}{s}| < |\frac{s_n-s}{s_ns}|<\frac{2}{|s^2|}|s_n-s|<\epsilon$.</p>
<p>I can follow the inequalities, but I'm not sure what techniques they used to come to this conclusion. I would appreciate it if someone could shed some light on this, thanks!</p>
| Travis Willse | 155,629 | <p>The Zariski topology is a topology well-suited to the study of algebraic varieties, the principal objects of study in algebraic geometry:</p>
<blockquote>
<p>For any field $\Bbb F$, a set $X \subseteq \Bbb F^n$ is defined to be closed in the Zariski topology on $\Bbb F^n$ iff it has the form $$V(S) := \{(x_1, \ldots, x_n) \in \Bbb F^n : f(x_1, \ldots, x_n) = 0 \,\forall f \in S\}$$ for some set $S$ of polynomials $f(x_1, \ldots, x_n) \in \Bbb F[x_1, \ldots, x_n]$.</p>
</blockquote>
<p>The general linear group $GL(n, \Bbb F)$ is usually defined to be the set of invertible $n \times n$ matrices over $\Bbb F$, that is, the $n \times n$ matrices with nonzero determinant. On the other hand, the determinant map $\det: M(n, \Bbb F) \to \Bbb F$ is a polynomial in the entries $x_{11}, \ldots, x_{nn}$ of the argument matrix $X$, where $M(n, \Bbb F)$ is the space of $n \times n$ matrices over $\Bbb F$, which we can identify with $\Bbb F^{n^2}$ in the obvious way. Thus, the set $$V(\{\det\}) := \{X \in M(n, \Bbb F) : \det X = 0\} \subset M(n, \Bbb F) \cong \Bbb F^{n^2}$$ is closed in the Zariski topology, and hence its complement, namely, $$V(\{\det\})^c = \{X \in M(n, \Bbb F) : \det X \neq 0\} = GL(n, \Bbb F),$$ is open in the topology.</p>
|
1,029,650 | <p>In Four-dimensional space, the Levi-Civita symbol is defined as:</p>
<p>$$ \varepsilon_{ijkl } =$$
\begin{cases}
+1 & \text{if }(i,j,k,l) \text{ is an even permutation of } (1,2,3,4) \\
-1 & \text{if }(i,j,k,l) \text{ is an odd permutation of } (1,2,3,4) \\
0 & \text{otherwise}
\end{cases}
</p>
<p>Let's suppose that I fix the last index ( l=4 for example). I guess that the 4-indices symbol can now be replaced with a 3-indices one:</p>
<p>$$ \varepsilon_{ijk } =$$
\begin{cases}
+1 & \text{if } (i,j,k) \text{ is } (1,2,3), (2,3,1) \text{ or } (3,1,2), \\
-1 & \text{if } (i,j,k) \text{ is } (3,2,1), (1,3,2) \text{ or } (2,1,3), \\
\;\;\,0 & \text{if }i=j \text{ or } j=k \text{ or } k=i
\end{cases} </p>
<p>My doubt is the following: is $$ \varepsilon_{ijk4 }A^{jk} = \varepsilon_{ij4k }A^{jk } $$ true? (In the sense that the 4-indices symbols can both be replaced by the same 3-indices symbols. I'm using the Einstein notation, so multiple indices are summed) or they give two 3-indices symbols with different sign</p>
| Baalateja Kataru | 620,119 | <p>There is a method by which we can formally verify your guess that,</p>
<p><span class="math-container">$$
\epsilon^{ijk4} = \epsilon^{ijk}
$$</span></p>
<p>And determine whether,</p>
<p><span class="math-container">$$ \epsilon^{ijk4} = \pm \epsilon^{ij4k} $$</span></p>
<p>The <span class="math-container">$n$</span>-indexed Levi-Civita symbol is defined in terms of the generalized Kronecker delta as:</p>
<p><span class="math-container">$$ \epsilon^{i_1i_2 \dots i_n} = \delta^{i_1i_2 \dots i_n}_{12 \dots n} $$</span></p>
<p>In case of the four-indexed Levi-Civita symbol of 4D space,</p>
<p><span class="math-container">$$
\epsilon^{ijkl} = \delta^{ijkl}_{1234}
$$</span></p>
<p>By definition of the generalized Kronecker delta, this can be written as the determinant:</p>
<p><span class="math-container">$$
\epsilon^{ijkl} = \det \begin{vmatrix}
\delta^i_1 & \delta^i_2 & \delta^i_3 & \delta^i_4 \\
\delta^j_1 & \delta^j_2 & \delta^j_3 & \delta^j_4 \\
\delta^k_1 & \delta^k_2 & \delta^k_3 & \delta^k_4 \\
\delta^l_1 & \delta^l_2 & \delta^l_3 & \delta^l_4 \\
\end{vmatrix}
$$</span></p>
<p>When <span class="math-container">$l = 4$</span>,</p>
<p><span class="math-container">$$
\epsilon^{ijk4} = \det \begin{vmatrix}
\delta^i_1 & \delta^i_2 & \delta^i_3 & \delta^i_4 \\
\delta^j_1 & \delta^j_2 & \delta^j_3 & \delta^j_4 \\
\delta^k_1 & \delta^k_2 & \delta^k_3 & \delta^k_4 \\
\delta^4_1 & \delta^4_2 & \delta^4_3 & \delta^4_4 \\
\end{vmatrix} = \det \begin{vmatrix}
\delta^i_1 & \delta^i_2 & \delta^i_3 & \delta^i_4 \\
\delta^j_1 & \delta^j_2 & \delta^j_3 & \delta^j_4 \\
\delta^k_1 & \delta^k_2 & \delta^k_3 & \delta^k_4 \\
0 & 0 & 0 & 1 \\
\end{vmatrix}
$$</span></p>
<p>Expanding this via the last row,</p>
<p><span class="math-container">$$
\epsilon^{ijk4} = - 0 \det \begin{vmatrix}
\delta^i_2 & \delta^i_3 & \delta^i_4 \\
\delta^j_2 & \delta^j_3 & \delta^j_4 \\
\delta^k_2 & \delta^k_3 & \delta^k_4
\end{vmatrix}
+ 0 \det \begin{vmatrix}
\delta^i_1 & \delta^i_3 & \delta^i_4 \\
\delta^j_1 & \delta^j_3 & \delta^j_4 \\
\delta^k_1 & \delta^k_3 & \delta^k_4
\end{vmatrix}
- 0 \det \begin{vmatrix}
\delta^i_1 & \delta^i_2 & \delta^i_4 \\
\delta^j_1 & \delta^j_2 & \delta^j_4 \\
\delta^k_1 & \delta^k_2 & \delta^k_4
\end{vmatrix}
+ 1 \det \begin{vmatrix}
\delta^i_1 & \delta^i_2 & \delta^i_3 \\
\delta^j_1 & \delta^j_2 & \delta^j_3 \\
\delta^k_1 & \delta^k_2 & \delta^k_3
\end{vmatrix}
$$</span></p>
<p>The only remaining term is,</p>
<p><span class="math-container">$$
\epsilon^{ijk4} = \det \begin{vmatrix}
\delta^i_1 & \delta^i_2 & \delta^i_3 \\
\delta^j_1 & \delta^j_2 & \delta^j_3 \\
\delta^k_1 & \delta^k_2 & \delta^k_3
\end{vmatrix}
$$</span></p>
<p>Which by the definitions of the generalized Kronecker delta and the Levi-Civita symbol is,</p>
<p><span class="math-container">$$
\epsilon^{ijk4} = \delta^{ijk}_{123} = \epsilon^{ijk}
$$</span></p>
<p>When we write the Levi-Civita symbol as a determinant like we did above, the totally antisymmetric property that it possess becomes evident: swapping any two indices corresponds to interchanging their corresponding rows in the matrix due to which the determinant, which is the Levi-Civita itself, changes sign.</p>
<p>From this behavior, we can easily deduce what <span class="math-container">$\epsilon^{ij4k}$</span> would be without going through the computations once again: it is just the result of exchanging the last two indices of <span class="math-container">$\epsilon^{ijk4}$</span>,</p>
<p><span class="math-container">$$
\epsilon^{ij4k} = -\epsilon^{ijk4} = -\epsilon^{ijk}
$$</span></p>
<p>So yes, depending on what index you fix, you will get a three-indexed Levi-Civita with a different sign.</p>
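<p>The two identities can also be confirmed by computing permutation signs directly; the helper below (my own naming, with <span class="math-container">$1$</span>-based indices) returns the sign of a permutation via an inversion count, which is exactly the Levi-Civita symbol:</p>

```python
from itertools import permutations

def levi_civita(*idx):
    # 0 if any index repeats, otherwise the sign of the permutation
    # (1-based indices), computed from the number of inversions
    if len(set(idx)) < len(idx):
        return 0
    sign = 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

# fixing the LAST index to 4 reproduces the 3-index symbol ...
assert all(levi_civita(i, j, k, 4) == levi_civita(i, j, k)
           for i, j, k in permutations((1, 2, 3)))
# ... while fixing the THIRD index instead flips the sign
assert all(levi_civita(i, j, 4, k) == -levi_civita(i, j, k)
           for i, j, k in permutations((1, 2, 3)))
print("ok")
```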
|
1,032,535 | <p>I know $n \in \mathbb{N}$ and...</p>
<p>$$
a_n = \begin{cases}
0 & \text{ if } n = 0 \\
a_{n-1}^{2} + \frac{1}{4} & \text{ if } n > 0
\end{cases}
$$</p>
<ol>
<li><strong>Base Case:</strong></li>
</ol>
<p>$$a_1 = a^2_0 + \frac{1}{4}$$</p>
<p>$$a_1 = 0^2 + \frac{1}{4} = \frac{1}{4}$$</p>
<p>Thus, we have that $0 < a_1 < 1$. So our base case is ok.</p>
<ol start="2">
<li><strong>Inductive hypothesis</strong>:</li>
</ol>
<p>Assume $n$ is arbitrary. Suppose
$$0 < a_{n} < 1$$
$$0 < a_{n-1}^{2} + \frac{1}{4} < 1$$
is true, when $n > 1$.</p>
<ol start="3">
<li><strong>Inductive step</strong>:</li>
</ol>
<p>Let's prove
$$0 < a_{n+1} < 1$$
$$0 < a_{n}^{2} + \frac{1}{4} < 1$$</p>
<p>is also true when $n > 1$.</p>
<p>My guess is that we have to prove that $a^2_{n}$ has to be less than $\frac{3}{4}$, which otherwise would make $a_{n+1}$ equal or greater than $1$.</p>
<p>So we have $(a_{n-1}^{2} + \frac{1}{4})^2 < \frac{3}{4}$... I don't know if this is correct, and how to continue...</p>
| mvggz | 167,171 | <p>Ok, I'll write it down so that it's clear to you :) </p>
<p>I want to prove the property P: " 0 < $a_n$ < 1 "</p>
<p>I look at the property A: $0 < a_n < \frac{1}{2}$</p>
<p>A => P, that is : If A is true then P is true </p>
<p>I'll prove A using induction (so technically I don't prove P by induction, but by implication).</p>
<p>$ a_0 = 0 $ < $\frac{1}{2}$, so the base case satisfies the weak bound $0 \le a_0 < \frac{1}{2}$ (note that $a_0$ itself is not strictly positive).</p>
<p>If $ 0 \le a_n < \frac{1}{2} $ , then :</p>
<p>$ a_{n+1} = a_n^2 + \frac{1}{4} \ge \frac{1}{4} > 0 $</p>
<p>And : $ a_{n+1} = a_n^2 + \frac{1}{4} $ < $ (\frac{1}{2})^2 + \frac{1}{4} = \frac{1}{2} $</p>
<p>Hence you get : 0 < $a_{n+1}$ < $\frac{1}{2}$ : the hypothesis holds for the rank n+1</p>
<p>So you have proven using induction that for every n positive integer you have :</p>
<p>0 < $a_n$ < $\frac{1}{2}$</p>
<p>But since : $\frac{1}{2}$ < 1 , you also have: </p>
<p>0 < $a_n$ < $\frac{1}{2}$ < 1 ie 0 < $a_n$ < 1</p>
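<p>A quick numerical check of the invariant $0 < a_n < \frac{1}{2}$ (a sketch in Python; the loop only tests finitely many terms, so it illustrates the induction rather than replacing it):</p>

```python
# a_0 = 0, a_n = a_{n-1}^2 + 1/4; check the invariant 0 < a_n < 1/2
# for the first thousand terms.
a = 0.0
for n in range(1, 1000):
    a = a * a + 0.25
    assert 0 < a < 0.5, f"invariant fails at n={n}"
# the sequence increases toward its fixed point 1/2 (solve x = x^2 + 1/4)
```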
|
717,664 | <p>I need a step by step answer on how to do this. What I've been doing is converting the top to $2e^{i(\pi/4)}$ and the bottom to $\sqrt2e^{i(-\pi/4)}$. I know the answer is $2e^{i(\pi/2)}$ and the angle makes sense but obviously I'm doing something wrong with the coefficients. I suspect maybe only the real part goes into calculating the amplitude but I can't be sure.</p>
| Michael Hoppe | 93,935 | <p>What about
$$\frac{2+2i}{1-i}=\frac{2\sqrt{2}(\sqrt{2}/2+i\sqrt{2}/2)}{\sqrt{2}(\sqrt{2}/2-i\sqrt{2}/2)}
=2\frac{e^{i\frac{\pi}{4}}}{e^{-i\frac{\pi}{4}}}=2e^{i\frac{\pi}{2}}=2i?$$</p>
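<p>The numerator shown, $2\sqrt{2}(\sqrt{2}/2+i\sqrt{2}/2)$, is just $2+2i$, so the quotient can be sanity-checked with Python's <code>cmath</code> (the moduli $2\sqrt{2}$ and $\sqrt{2}$ divide to $2$, and the arguments $\pi/4$ and $-\pi/4$ subtract to $\pi/2$):</p>

```python
import cmath

z = (2 + 2j) / (1 - 1j)
r, theta = cmath.polar(z)
print(z)          # 2j (up to rounding)
print(r, theta)   # modulus 2, argument pi/2
```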
|
1,109,853 | <blockquote>
<p><em>The below proof is incorrect. See the answers for more information.</em></p>
</blockquote>
<p>This question is in the context of exploring how to explain the process of developing a proof.</p>
<p>When reading a proof on the irrationality of $ \sqrt{3} $, I came across the following statement, which was not proved in the irrationality proof itself.</p>
<ul>
<li>If $ a^2 $ is divisible by 3, then $ a $ is divisible by 3.</li>
</ul>
<p>I believe the following proves the above statement:</p>
<ol>
<li>Let $k$ be an integer, and $a$ be an integer divisible by $n$, where $ a=n(k+1) $.</li>
<li>$ a = nk+n $</li>
<li>$ a^2 = (nk + n)(nk + n) $</li>
<li>$ a^2 = n^2(k+1)(k+1) $</li>
<li>Therefore, $a^2$ is divisible by $n$.</li>
</ol>
<p>Although the above proof "feels" valid to me, it also seems like the proof is not complete, in a formal sense, because:</p>
<ul>
<li>Constraints are not placed on the variables.</li>
<li>Although the leap from step 4 to step 5 seems intuitive, there is no formal explanation as to why the step is valid. (It seems like something is missing to explain how to go from divisible by $n^2$ to divisible by $n$.)</li>
<li>$a$ is divisible by 3 $\implies$ $a^2$ is divisible by 3, but no justification is given for the opposite implication.</li>
</ul>
<p>All that said, is the above proof sufficient to justify the initial assertion about divisibility by 3? What would formally justify going from step 4 to step 5?</p>
<p>And more generally: Are there objective standards for sufficiency of proof, either published or generally accepted?</p>
| Glare | 208,225 | <p>As @mapierce271 pointed out already, your proof doesn't actually accomplish what you've set out to show. I would also like to add that your "theorem" (if $n$ divides $a^2$, then $n$ divides $a$) is not true for just <em>any</em> integer $n$. For example, $8^2=64$ is divisible by $32$, but $8$ is definitely not divisible by $32$. However, it is true if $n$ is a prime number (which $3$ is). The easiest method of showing this will use "Euclid's Lemma": if $p$ is prime and $p$ divides the product $ab$, then $p$ divides $a$ or $p$ divides $b$. You can read more about Euclid's Lemma here: <a href="http://en.wikipedia.org/wiki/Euclid's_lemma" rel="nofollow">http://en.wikipedia.org/wiki/Euclid's_lemma</a></p>
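<p>A small brute-force illustration of how the statement depends on primality (a sketch in Python; it checks a finite range, so it suggests the prime case rather than proving it):</p>

```python
def divides_square_implies_divides(n, limit=1000):
    """True if n | a^2 implies n | a for every a below `limit`."""
    return all(a % n == 0 for a in range(1, limit) if (a * a) % n == 0)

# fails for the composite counterexample above: 32 | 8^2 but 32 does not divide 8
assert not divides_square_implies_divides(32)

# holds for small primes, as Euclid's Lemma guarantees
for p in (2, 3, 5, 7, 11, 13):
    assert divides_square_implies_divides(p)
```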
|
199,148 | <p><a href="https://i.stack.imgur.com/9BuHp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9BuHp.png" alt="enter image description here"></a> </p>
<pre><code>pbdomains = <|
"Overall " -> Around[2.6, 0.04],
"PB" -> Around[4.25, 0.06]
|>;
BarChart[pbdomains, ChartStyle -> "BrightBands",
LabelStyle -> {FontFamily -> "Times New Roman", 28, Bold,
GrayLevel[0]}, Frame -> True, FrameLabel -> {"", " Count"},
BarSpacing -> Tiny,
ChartLabels -> Callout[Automatic, Above, Appearance -> "Balloon"]]
</code></pre>
| kglr | 125 | <p><strong>1.</strong> Use the (undocumented) option <code>"FixedBarSpacing"</code> as <code>"FixedBarSpacing" -> True</code> or as <code>Method -> {"FixedBarSpacing" -> True}</code></p>
<pre><code>BarChart[pbdomains, ChartStyle -> "BrightBands",
LabelStyle -> {FontFamily -> "Times New Roman", 28, Bold,
GrayLevel[0]}, Frame -> True, FrameLabel -> {"", " Count"},
ChartLabels -> Callout[Automatic, Above, Appearance -> "Balloon"],
"FixedBarSpacing" -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/0v1xb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0v1xb.png" alt="enter image description here" /></a></p>
<p><strong>2.</strong> Use <code>{pbdomains}</code> as the first argument and use the option <code>BarSpacing -> {Tiny, 1}</code>:</p>
<pre><code>BarChart[{pbdomains},
BarSpacing -> { Tiny, 1},
ChartStyle -> "BrightBands",
LabelStyle -> {FontFamily -> "Times New Roman", 28, Bold, GrayLevel[0]},
ImageSize -> Large, Frame -> True, FrameLabel -> {"", " Count"},
ChartLabels -> Callout[Automatic, Above, Appearance -> "Balloon"]]
</code></pre>
<p><a href="https://i.stack.imgur.com/9QKgF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9QKgF.png" alt="enter image description here" /></a></p>
|
<p>I'm stuck at solving the following problem: toss 3 fair coins independently. Let A be the event "you get at least one head" and B the event "you get exactly one tail".
Then what is the probability of the event <span class="math-container">$A \cup B$</span>?</p>
| I am a person | 806,777 | <p>The probability that you want covers every outcome except getting all <span class="math-container">$3$</span> tails: "exactly one tail" forces two heads, so <span class="math-container">$B \subseteq A$</span>, hence <span class="math-container">$A \cup B = A$</span>, the event of at least one head. So the answer = <span class="math-container">$1 - \frac{1}{8} = \boxed{\frac{7}{8}}$</span>.</p>
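<p>With only $2^3 = 8$ equally likely outcomes, this is easy to confirm by enumeration (a sketch in Python):</p>

```python
from itertools import product
from fractions import Fraction

outcomes = list(product("HT", repeat=3))         # the 8 equally likely outcomes
A = {o for o in outcomes if "H" in o}            # at least one head
B = {o for o in outcomes if o.count("T") == 1}   # exactly one tail
p = Fraction(len(A | B), len(outcomes))
print(p)  # 7/8 (B is contained in A, and A misses only TTT)
```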
|
442,759 | <p>I was reading a book on groups, it points out about the uniqueness of the neutral element and the inverse element. I got curious, are there algebraic structures with more than one neutral element and/or more than one inverse element?</p>
| Brian Rushton | 51,970 | <p>In an interesting way, yes; in matrix theory, you can have a ring of matrices that has a single identity, but has subspaces which have distinct right identities. For instance, in 2X2 matrices, the subset of all matrices which have 0's in the left column has a right identity given by $\begin{bmatrix}0 & 0\\0 & 1\end{bmatrix}$, while the subset given by matrices with all 0's on the right is given by $\begin{bmatrix}1 & 0\\0 & 0\end{bmatrix}$.</p>
<p>Of course the identity for the whole ring is $\begin{bmatrix} 1 & 0\\0 & 1 \end{bmatrix}$.</p>
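<p>The claimed right-identity behavior is easy to verify directly (a sketch in Python, with 2x2 matrices as nested tuples):</p>

```python
def matmul2(a, b):
    """Product of 2x2 matrices given as ((a11, a12), (a21, a22))."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

E_right = ((0, 0), (0, 1))   # right identity on matrices with zero left column
M = ((0, 5), (0, 7))         # an element of that subset

assert matmul2(M, E_right) == M   # acts as an identity from the right...
assert matmul2(E_right, M) != M   # ...but not from the left
```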
|
1,576,713 | <p>$X$ and $Y$ are two sets and $f:X\to Y$. If $f(C)=\{f(x):x\in C\}$ for $C\subseteq X$ and $f^{-1}(D)=\{x:f(x)\in D\}$ for $D\subseteq Y$, then the true statement is </p>
<p>(A) $f(f^{-1}(B))=B$</p>
<p>(B) $f^{-1}(f(A))=A$</p>
<p>(C) $f(f^{-1}(B))=B$ only if $B\subseteq f(X)$</p>
<p>(D) $f^{-1}(f(A))=A$ only if $f(X)=Y$</p>
| Alex M. | 164,025 | <p>Let $X = \Bbb R, \ Y = [0, \infty)$ and $f(x) = x^2$. If $A = (-\infty, 0]$, then $f(A) = [0, \infty)$ and $f^{-1} (f(A)) = f^{-1} ([0, \infty)) = \Bbb R \ne A$, therefore $f^{-1} (f(A)) \ne A$, so (B) is false.</p>
<p>Let $X = [0, \infty), \ Y = \Bbb R$ and $f(x) = x$. If $A \subseteq X$, then $f(A) = A \subseteq Y$, so $f^{-1} (f(A)) = f^{-1} (A) = A$. Nevertheless, $f(X) \ne Y$, so (D) is false.</p>
<p>Let $X = \Bbb R, \ Y = \Bbb R$ and $f(x) = x^2$. Let $B = [-1,1]$. Notice that $f^{-1} (B) = f^{-1} ([0,1]) = [-1, 1]$ (because $f^{-1} ([-1,0)) = \emptyset$), so that $f (f^{-1} (B)) = f([-1,1]) = [0,1] \ne B$, therefore (A) is false, too.</p>
<p>Finally, let us investigate (C).</p>
<p>Let $B \subseteq f(X)$. If $y \in B \subseteq f(X)$, then there exist $x \in X$ such that $f(x) = y \in B$, so $x \in f^{-1} (B)$, which implies (just apply $f$) that $f(x) \in f(f^{-1}(B))$, so $y \in f(f^{-1} (B))$. This shows that $B \subseteq f(f^{-1} (B))$. For the opposite inclusion, if $y \in f(f^{-1} (B))$, then there exist $x \in f^{-1} (B)$ with $y = f(x)$; but $x \in f^{-1} (B)$ means that $f(x) \in B$ (this is what "inverse image" means), so then $y = f(x) \in B$. We have thus proved that $f(f^{-1} (B)) \subseteq B$. Since we have inclusions in both directions, we have proved that $B \subseteq f(X) \implies B = f(f^{-1} (B))$.</p>
<p>Assume now that $B = f(f^{-1} (B))$ and let $C = B \cap f(X), \ D = B \setminus f(X)$; notice that $B = C \cup D$, $C \cap D = \emptyset$ and $f^{-1} (D) = \emptyset$. Then $$B = f(f^{-1} (B)) = f(f^{-1} (C \cup D)) = f(f^{-1} (C) \cup f^{-1} (D)) = f(f^{-1} (C)) \cup f(\emptyset) = f(f^{-1} (C)) .$$ Since $D \subseteq B = f(f^{-1} (C)) \subseteq f(X)$, we deduce that $D = \emptyset$, so $B \subseteq f(X)$. We have thus proved that $B = f(f^{-1} (B)) \implies B \subseteq f(X)$.</p>
<p>To conclude, not only is (C) the only true statement, it is true in a much stronger form, with logical equivalence instead of just implication.</p>
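<p>The equivalence $f(f^{-1}(B)) = B \iff B \subseteq f(X)$ can also be confirmed by brute force on small finite sets (a sketch in Python, exhausting every function from a 3-element $X$ to a 4-element $Y$):</p>

```python
from itertools import product, chain, combinations

X = [1, 2, 3]
Y = ["a", "b", "c", "d"]

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

for values in product(Y, repeat=len(X)):
    f = dict(zip(X, values))
    image = {f[x] for x in X}                       # f(X)
    for B in map(set, subsets(Y)):
        preimage = {x for x in X if f[x] in B}      # f^{-1}(B)
        forward = {f[x] for x in preimage}          # f(f^{-1}(B))
        assert (forward == B) == (B <= image)
```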
|
156,285 | <p>I have been working on this exercise for a while now. It's in B.L. van der Waerden's <em>Algebra (Volume I)</em>, page $19$. The exercise is as follows:</p>
<blockquote>
<p>The order of the symmetric group $S_n$ is $n!=\prod_{1}^{n}\nu$. (Mathematical induction on $n$.)</p>
</blockquote>
<p>I don't comprehend how we can logically use induction here. It seems that the first step would be proving $S_1$ has $1!=1$ elements. This is simply justified: There is only one permutation of $1$, the permutation of $1$ to itself.</p>
<p>The next step would be assuming that $S_n$ has order $n!$. Now here is where I get stuck. How do I use this to show that $S_{n+1}$ has order $(n+1)!$?</p>
<p>Here is my attempt: I am thinking this is because all $n!$ permutations of $S_n$ now have a new element to permutate. For example, if we take one single permutation
$$
p(1,\dots,n)
=
\begin{pmatrix}
1 & 2 & 3 & \dots & n\\
1 & 2 & 3 & \dots & n
\end{pmatrix}
$$
We now have $n$ modifications of this single permutation by adding the symbol $(n+1)$:</p>
<p>\begin{align}
p(1,2,\dots,n,(n+1))&=
\begin{pmatrix}
1 & 2 & \dots & n & (n+1)\\
1 & 2 & \dots & n & (n+1)
\end{pmatrix}\\
p(2,1,\dots,n,(n+1))&=
\begin{pmatrix}
1 & 2 & \dots & n & (n+1)\\
2 & 1 & \dots & n & (n+1)
\end{pmatrix}\\
\vdots\\
p(n,2,\dots,1,(n+1))&=
\begin{pmatrix}
1 & 2 & \dots & n & (n+1)\\
n & 2 & \dots & 1 & (n+1)
\end{pmatrix}\\
p((n+1),2,\dots,n,1)&=
\begin{pmatrix}
1 & 2 & \dots & n & (n+1)\\
(n+1) & 2 & \dots & n & 1
\end{pmatrix}
\end{align}</p>
<p>There are actually $(n+1)$ permutations of that specific form, but we take $p(1,\dots,n)=p(1,\dots,n,(n+1))$ in order to illustrate and prove our original statement. We can make this general equality for all $n!$ permutations: $p(x_1,x_2,\dots,x_n)=p(x_1,x_2,\dots,x_n,x_{n+1})$ where $x_i$ is any symbol of our finite set of $n$ symbols and $x_{n+1}$ is strictly defined as the symbol $(n+1)$.</p>
<p>We can repeat this process for all $n!$ permutations in $S_n$. This gives us $n!\,n$ new permutations. Then, adding in the original $n!$ permutations, we have $n!\,n+n!=(n+1)\,n!=(n+1)!$. Consequently, $S_{n+1}$ has order $(n+1)!$.</p>
<p>How is my reasoning here? Furthermore, is there a more elegant argument? I do not really see my argument here as <em>incorrect</em>, it just seems to lack elegance. My reasoning may well be very incorrect, however. If so, please point it out to me.</p>
| Eugene | 31,288 | <p>You can actually just use a combinatorial argument for this. The permutation group is a bijection from a set of $n$ elements to itself. So look at the first element in the permutation. There are $n$ choices to send that element to. Now for the second element, there are only $n-1$ choices left (because it is a bijection you cannot send two different elements in the domain to the same element in the codomain), and so on until you only have $1$ choice left for the last element. Thus we get $n!$ ways to arrange the permutation.</p>
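<p>The count itself is easy to confirm by brute force for small $n$ (a sketch in Python):</p>

```python
from itertools import permutations
from math import factorial

# |S_n| = n!: count the permutations of {1, ..., n} directly
for n in range(1, 8):
    assert sum(1 for _ in permutations(range(1, n + 1))) == factorial(n)
```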
|