| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,870,305 | <p>I am trying to calculate the Jacobian of a function that has quaternions and 3D points in it. I refer to quaternions as <span class="math-container">$q$</span> and 3D points as <span class="math-container">$p$</span>
<span class="math-container">$$h_1(q)=A C(q)p $$</span>
<span class="math-container">$$h_2(q)=q_1\otimes q \otimes q_2 $$</span></p>
<p>where <span class="math-container">$A\in \mathbb{R}^{3\times 3}$</span> and <span class="math-container">$C(q)$</span> is the <em>direction cosine matrix</em>.</p>
<p>I am using the Hamilton form for the quaternions.</p>
<p>I would like to calculate the following Jacobians:
<span class="math-container">$$H_1 = \frac{\partial h_1(q)}{\partial q} $$</span>
<span class="math-container">$$H_2 = \frac{\partial h_2(q)}{\partial q} $$</span></p>
<p>Following <a href="http://www.iri.upc.edu/people/jsola/JoanSola/objectes/notes/kinematics.pdf" rel="nofollow noreferrer">Joan Solà's reference</a> eq. 18 what I have is </p>
<p><span class="math-container">$$H_1 = A^TC(q)^T[p]_x $$</span>
<span class="math-container">$$H_2 = [q_1]_L[q_2]_R $$</span></p>
<p>Where <span class="math-container">$[q]_R$</span> and <span class="math-container">$[q]_L$</span> are the right and left handed conversion of quaternion to matrix form as defined in <a href="http://www.iri.upc.edu/people/jsola/JoanSola/objectes/notes/kinematics.pdf" rel="nofollow noreferrer">Joan Solà's reference</a> eq. 18.</p>
<p>All rotations are body centric.</p>
<p>Is this correct?
Is there a better way to do this?
Can the expression be easily simplified?</p>
| JustThinking | 356,245 | <p>Quaternion derivatives are not that straightforward. Usually a direction is required (see <a href="https://en.wikipedia.org/wiki/Quaternionic_analysis" rel="noreferrer">here</a>).</p>
<p>I'm assuming your thought process behind $H_2$ was something like</p>
<p>$$ h_2 = q_1 \otimes q \otimes q_2 = q_1 \otimes (q \otimes q_2) = [q_1]_L\ (q \otimes q_2) = [q_1]_L\ [q_2]_R\ q$$
$$ H_2 = \frac{\partial h_2}{\partial q} = [q_1]_L\ [q_2]_R\ $$</p>
<p>where I assigned the L, R indices as in your reference. But those are the opposite indices from what you wrote. This also has other issues, which I will get to in a bit.</p>
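<p>As a quick numerical sanity check (a small Python sketch, not from the reference), one can verify that $q_1\otimes q \otimes q_2 = [q_1]_L\,[q_2]_R\,q$ under the Hamilton convention, storing a quaternion as the list $[a, x, y, z]$:</p>

```python
# Sanity check: q1 * q * q2 == [q1]_L [q2]_R q  (Hamilton convention,
# quaternion stored as [a, x, y, z] = a + x i + y j + z k).

def qmul(p, q):
    a1, x1, y1, z1 = p
    a2, x2, y2, z2 = q
    return [a1*a2 - x1*x2 - y1*y2 - z1*z2,
            a1*x2 + x1*a2 + y1*z2 - z1*y2,
            a1*y2 - x1*z2 + y1*a2 + z1*x2,
            a1*z2 + x1*y2 - y1*x2 + z1*a2]

def mat_L(q):                      # left-multiplication matrix [q]_L
    a, x, y, z = q
    return [[a, -x, -y, -z],
            [x,  a, -z,  y],
            [y,  z,  a, -x],
            [z, -y,  x,  a]]

def mat_R(q):                      # right-multiplication matrix [q]_R
    a, x, y, z = q
    return [[a, -x, -y, -z],
            [x,  a,  z, -y],
            [y, -z,  a,  x],
            [z,  y, -x,  a]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

q1, q, q2 = [1, 2, 3, 4], [5, -1, 0, 2], [-3, 1, 1, 7]
lhs = qmul(q1, qmul(q, q2))
rhs = matvec(mat_L(q1), matvec(mat_R(q2), q))
assert lhs == rhs
```

<p>With integer components the equality is exact, so the check does not depend on floating-point tolerance.</p>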
<p>For $H_1$, you have the direction cosine matrix as a function of $q$ but you don't appear to have taken the derivative of $C(q)$ at all. So I'm not following the logic there.</p>
<p>Looking at the definition of the direction cosine matrix in terms of the components of $q$, I feel this is venturing into the territory of abusing quaternions as a container of 4 variables.</p>
<p>There are a lot of things going on here, so let me try to untangle them a bit and note on them separately.</p>
<p><strong>Representation of quaternions</strong></p>
<p>If you wish to use linear algebra, there is already a real valued matrix representation of quaternions. I would suggest just using that representation if you want to clarify the content of some equations.</p>
<p>Instead, here quaternions are treated as column vectors (eq 7 of your reference), which leads you to use two additional representations of quaternions to fit them into a linear algebra setting to represent multiplication. This confuses the mathematical content.</p>
<p>Using the definitions in your reference:</p>
<p>$$ [a + x\ i + y\ j + z\ k]_L =
\begin{bmatrix}
a & -x & -y & -z \\
x & a & -z & y \\
y & z & a & -x \\
z & -y & x & a \\
\end{bmatrix}$$</p>
<p>$$ [a + x\ i + y\ j + z\ k]_R =
\begin{bmatrix}
a & -x & -y & -z \\
x & a & z & -y \\
y & -z & a & x \\
z & y & -x & a \\
\end{bmatrix}$$</p>
<p>That then means:</p>
<p>$$[zk]_L\ [yj]_R =
\begin{bmatrix}
0 & 0 & 0 & -z \\
0 & 0 & -z & 0 \\
0 & z & 0 & 0 \\
z & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
0 & 0 & -y & 0 \\
0 & 0 & 0 & -y \\
y & 0 & 0 & 0 \\
0 & y & 0 & 0
\end{bmatrix}
=
\begin{bmatrix}
0 & -yz & 0 & 0 \\
-yz & 0 & 0 & 0 \\
0 & 0 & 0 & -yz \\
0 & 0 & -yz & 0
\end{bmatrix}
$$</p>
<p>Which doesn't fit the form for any of the previous representations of a quaternion.</p>
<p>Therefore saying $H_2 = [q_1]_R\ [q_2]_L$ amounts to stating that the concept used here for the derivative of a quaternion function can result in something outside the set of functions of quaternions. Is this what you intended? How do you intend to interpret this object?</p>
<p><strong>Representing multiple things with the same object</strong></p>
<p>We can use a real value to represent temperature or distance, but these are distinct things so somehow these 'type labels' ('units') must be carried around to remind us of this.</p>
<p>Similarly, trying to represent both rotations and positions with quaternions can confuse things if one isn't careful to carry around these 'type labels'. For while it may make some notation closer to how you'd write the calculation in code, these objects (positions and rotations) are not on the same footing and their components transform differently if we change basis.</p>
<p>Even the underlying choice to represent a rotation with a quaternion (so that one can write $x'=q \otimes x \otimes q^*$ as done on pg 3 of your reference), doesn't actually relate to the underlying mathematical structures as cleanly as one might imagine. (For example, why do we need to use quaternion multiplication twice? Why is the angle off by a factor of 2? See this classic paper on
<a href="http://www.lrcphysics.com/storage/documents/Hamilton%20Rodrigues%20and%20Quaternion%20Scandle.pdf" rel="noreferrer">rotations and quaternions</a>)</p>
<p><strong>functions of quaternions</strong></p>
<p>Not every "left linear" equation in quaternions
$$f(q) = a\ q + b$$
can be rewritten as a "right linear" equation
$$f(q) = q\ c + d$$</p>
<p>If we consider a linear equation of all combinations of multiplication from each side:
$$g(q) = a\ q\ b + c\ q + q \ d + e$$</p>
<p>Then how should one interpret the derivative of this function? Even for linear equations we are forced to be careful with our expectations.</p>
<p><strong>Potential "abuse" of quaternions</strong></p>
<p>Now, it is possible that what you are trying to do has little to do with these complications. Somewhere along the way, ideas from quaternions get used for writing code in engineering, physics, graphics applications, but the equations eventually carry around so much specialized 'type information' that it really begins to feel more like linear algebra notation used merely to succinctly represent what the code is doing. In such cases quaternions eventually become abused as more of a way to carry around 4 parameters than anything else (which is how I'd describe many of the things at <a href="http://quaternions.com" rel="noreferrer">quaternions.com</a>). In which case these linear algebra short hands can just be viewed as defining a calculation for a real valued function in four real variables. In which case you can discuss the partial derivatives of this with respect to any of the variables without the issues above.</p>
<p>So at the end of the day, if you are just using quaternions to calculate some transformation (likely including rotations without being forced to choose a basis for Euler angles), and you'd like to know the partial derivative of some transformation with respect to some parameter, you can always just expand out the transformation. It may not have some nice "quaternion" looking form, but that is just because you were actually just manipulating linear equations and nothing fundamentally "quaternion" in the first place.</p>
|
1,217,771 | <p>Let $x,y \in R$.
If $0 \leq y < x$ for all $x > 0$, then $y=0$.</p>
<p>Proof by contradiction: </p>
<p>Assume the opposite, that is: "If $0 \leq y < x$ for all $x > 0$, then $y\neq0$".
Subtract $x$ from each part of the inequality to get,
$0-x \leq y-x < 0$
Then multiply through by -1 to get,
$x \geq y+ x > 0$
Since $x>0$, this implies a contradiction of the original statement, therefore we conclude that if $0 < y < x$ for all $x > 0$, then $y=0$.
Is my reasoning correct or is there something I can improve upon?</p>
| abel | 9,252 | <p>$$\begin{align}(x + x^{1/2})^{1/2} &= x^{1/2} + \frac 12 x^{-1/2}x^{1/2} -\frac 1 8 x^{-3/2}x+\cdots\\&=\sqrt x+\frac 12-\frac1{8\sqrt x} +\cdots\\
(x-1)^{1/2} &=\sqrt x -\frac 1{2\sqrt x}+\cdots \end{align}$$</p>
<p>therefore $$(x + x^{1/2})^{1/2} - (x-1)^{1/2}=\frac 1 2 + \frac 3{8\sqrt x} + \cdots \rightarrow \frac 12 \text{ as } x \to \infty.$$</p>
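<p>As a quick numerical check (an illustrative Python sketch, not part of the original answer), the value does settle at $\frac12$, and the deviation matches the $\frac{3}{8\sqrt x}$ correction term:</p>

```python
import math

# Evaluate (x + sqrt(x))^(1/2) - (x - 1)^(1/2) for large x; the expansion
# above predicts values close to 1/2 + 3/(8*sqrt(x)).
def g(x):
    return math.sqrt(x + math.sqrt(x)) - math.sqrt(x - 1)

val = g(1e6)
assert abs(val - 0.5) < 1e-3                               # near the limit 1/2
assert abs(val - (0.5 + 3 / (8 * math.sqrt(1e6)))) < 1e-5  # matches the correction
```
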
|
138,866 | <p>I have data in a csv file. The first row has labels, and the first column, too.</p>
<pre><code>Datos = Import["C:\\Users\\jodom\\Desktop\\Data.csv"]
</code></pre>
<p>The data in the csv file is:</p>
<pre><code>{{"No", "Vol", "Vel"}, {1, 500, 45}, {2, 700, 67}, {3, 350, 87}, {4,
123, 23}, {5, 587, 45}, {6, 435, 89}, {7, 896, 65}, {8, 125,
45}, {9, 476, 27}, {10, 987, 80}}
</code></pre>
<p>I put those csv data into a dataset:</p>
<pre><code>B = Dataset[Datos]
</code></pre>
<p>You can see how it looks in Mathematica after the import in this image:
<a href="https://drive.google.com/file/d/0B56r_V66BiodQUhUMWNHcHZFOWc/view?usp=sharing" rel="noreferrer">https://drive.google.com/file/d/0B56r_V66BiodQUhUMWNHcHZFOWc/view?usp=sharing</a></p>
<p>Now I want to convert the first row that has the labels, into a head or label of the dataset, and the first column into a label column, so I can get data from this dataset, like </p>
<pre><code>Dataset[labelrow, labelcolumn]
</code></pre>
| yode | 21,532 | <p>As the OP describes, you have a <code>.csv</code> file like:</p>
<p><img src="https://i.stack.imgur.com/oFBJo.png" alt=""> </p>
<p>Then you just use <code>SemanticImport</code></p>
<pre><code>SemanticImport["Data.csv"]
</code></pre>
<p><img src="https://i.stack.imgur.com/cyOAS.png" alt=""> </p>
<hr>
<p>Of course, if you have <code>data</code> like:</p>
<pre><code>data = {{"No", "Vol", "Vel"}, {1, 500, 45}, {2, 700, 67}, {3, 350,
87}, {4, 123, 23}, {5, 587, 45}, {6, 435, 89}, {7, 896, 65}, {8,
125, 45}, {9, 476, 27}, {10, 987, 80}}
</code></pre>
<p>You can get a <code>Dataset</code></p>
<pre><code>SemanticImportString[ExportString[data, "CSV"]]
</code></pre>
<p><img src="https://i.stack.imgur.com/wZQrs.png" alt=""> </p>
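<p>For comparison only (a hypothetical Python sketch, not part of the Mathematica answer): the same promotion of the first row to labels and the first column to row keys can be written with the standard library alone:</p>

```python
import csv, io

# Hypothetical stand-in for the contents of the Data.csv in the question.
raw = """No,Vol,Vel
1,500,45
2,700,67
3,350,87
"""

rows = list(csv.reader(io.StringIO(raw)))
header, body = rows[0], rows[1:]
# First column ("No") becomes the row key; the remaining headers label values.
table = {r[0]: dict(zip(header[1:], map(int, r[1:]))) for r in body}

assert table["2"]["Vol"] == 700
assert table["3"]["Vel"] == 87
```
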
|
150,084 | <p>I need to say whether or not $f_n(x)=n\left(\sqrt{x+\frac{1}{n}}-\sqrt{x}\right)$ is uniformly convergent on $(0,\infty)$.</p>
<p>I've found that the sequence converges pointwise to $f(x)=\frac{1}{2\sqrt{x}}$ and was trying to find $\sup{|f_n(x)-f(x)|}$.</p>
<p>I got the derivative $f_n'(x)= \frac{2nx\left(x-\sqrt x\sqrt{x+\frac{1}{n}}\right)+\sqrt{x}\sqrt{x+\frac{1}{n}}}{...}$ and could not find $x$ so that $f_n'(x)=0$</p>
<p>Any ideas?</p>
| Jim Belk | 1,726 | <p>Note that each of the functions $\displaystyle f_n(x) = n\biggl(\sqrt{x+\frac{1}{n}}-\sqrt{x}\biggr)$ is bounded on $(0,\infty)$, with $f_n(x) \leq \sqrt{n}$. Since $\displaystyle f(x) = \frac{1}{2\sqrt{x}}$ is unbounded on $(0,\infty)$, the sequence $\{f_n\}_{n=1}^\infty$ does not converge uniformly to $f$.</p>
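<p>Numerically (an illustrative Python sketch added here, not part of the answer), the failure of uniform convergence is visible near $x=0$: $f_n$ stays below $\sqrt n$ while $f$ blows up:</p>

```python
import math

def f_n(n, x):
    # f_n(x) = n*(sqrt(x + 1/n) - sqrt(x)) = 1/(sqrt(x + 1/n) + sqrt(x))
    return n * (math.sqrt(x + 1 / n) - math.sqrt(x))

def f(x):
    return 1 / (2 * math.sqrt(x))

n, x = 100, 1e-12
assert f_n(n, x) <= math.sqrt(n) + 1e-9   # f_n is bounded by sqrt(n)
assert f(x) - f_n(n, x) > 1000            # but f - f_n is huge near 0
```
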
|
3,072,142 | <p>Isn't <span class="math-container">$\mathbb Q[X]/(X^2+1)\cong \mathbb Q[i]$</span> wrong and should be <span class="math-container">$\mathbb Q[X]/(X^2+1)\cong \mathbb Q(i)$</span> ?</p>
<p>Indeed, <span class="math-container">$\mathbb Q[X]/(X^2+1)$</span> is a field whereas <span class="math-container">$\mathbb Q[i]$</span> is a ring (is the Fraction ring of <span class="math-container">$\mathbb Q(i)$</span>).</p>
| BigbearZzz | 231,327 | <p>Integral is linear, i.e. <span class="math-container">$$\int \alpha f d\lambda = \alpha \int f d\lambda$$</span> for any <span class="math-container">$\alpha\in\Bbb R$</span>. In your case you also need to use <span class="math-container">$\int_0^1 1 d\lambda=1$</span>.</p>
|
330,526 | <p>Let <span class="math-container">$\tau>0$</span>, and let <span class="math-container">$T\in \mathcal{D}'(\mathbb{R})$</span> be a <span class="math-container">$\tau$</span>-periodic distribution (that is,
<span class="math-container">$
\langle T, \varphi(\cdot+\tau)\rangle= \langle T,\varphi\rangle
$</span>
for all <span class="math-container">$\varphi \in \mathcal{D}(\mathbb{R})$</span>). Then
<span class="math-container">$$
T=\sum_{n\in \mathbb{Z}} c_n e^{i 2\pi n t/\tau},
$$</span>
for some <span class="math-container">$c_n\in \mathbb{C}$</span>, and where the equality means that the symmetric partial sums of the series on the right hand side converge in <span class="math-container">$\mathcal{D}'(\mathbb{R})$</span> to <span class="math-container">$T$</span>. What are the <span class="math-container">$c_n$</span>s in terms of <span class="math-container">$T$</span>? One would think that they are given by <span class="math-container">$c_n=\langle T|_{(0,\tau)}, e^{-i 2\pi n t/\tau}\rangle/\tau$</span>, but <span class="math-container">$e^{-i 2\pi n t/\tau}$</span> is not a test function in <span class="math-container">$\mathcal{D}((0,\tau))$</span>. </p>
| user44191 | 44,191 | <p><span class="math-container">$\DeclareMathOperator\deg{deg}\DeclareMathOperator\dim{dim}\DeclareMathOperator\span{span}$</span>I have at least a few things that may help. It's not a full answer, but it doesn't fit in a comment, either.</p>
<p>We start by completely ignoring the first equation and dealing with the other two. We are looking for solutions to:
<span class="math-container">\begin{align}
A_0 A_1^\dagger + A_1 A_2^\dagger + A_2 A_0^\dagger &= 0 \tag{2} \\
A_0^\dagger A_1 + A_1^\dagger A_2 + A_2^\dagger A_0 &= 0 \tag{3}
\end{align}</span></p>
<p>Assume we have a representation for the algebra corresponding to these equations. Define <span class="math-container">$W = V \oplus V^*$</span>, and define <span class="math-container">$A'_i: W \rightarrow W$</span>, <span class="math-container">$A'_i(u, v) = (A_i^\dagger(v), A_i(u))$</span>. Then <span class="math-container">$A'_i$</span> are a representation of the algebra <span class="math-container">$S$</span>, defined by the equations
<span class="math-container">\begin{align}
A'_0 A'_1 + A'_1 A'_2 + A'_2 A'_0 &= 0 \tag{2'} \\
A'_1 A'_0 + A'_2 A'_1 + A'_0 A'_2 &= 0 \tag{3'}
\end{align}</span>
or equivalently:
<span class="math-container">$$
xy + yz + zx = 0,\quad yx + zy + xz = 0.
$$</span></p>
<p>Similarly, given a representation <span class="math-container">$V$</span> of <span class="math-container">$S$</span>, we should naturally have that <span class="math-container">$V \oplus V^*$</span> is a representation of equations <span class="math-container">$(2)$</span> and <span class="math-container">$(3)$</span>. </p>
<p>Correspondingly, it should be useful to look at the representation theory of <span class="math-container">$S$</span>. Specifically, we want to look at irreducible representations of <span class="math-container">$S$</span>. </p>
<p>First, let us define <span class="math-container">$S$</span> more precisely. Let <span class="math-container">$V$</span> be the vector space spanned by the symbols <span class="math-container">$x$</span>, <span class="math-container">$y$</span>, <span class="math-container">$z$</span>. Then let <span class="math-container">$TV$</span> be the tensor algebra over <span class="math-container">$V$</span>; in other words, <span class="math-container">$TV$</span> is the set of linear combinations of strings with characters <span class="math-container">$x$</span>, <span class="math-container">$y$</span>, <span class="math-container">$z$</span>. Define <span class="math-container">$S \mathrel{:=} TV/\langle xy + yz + zx, yx + zy + xz\rangle$</span>, with quotient map <span class="math-container">$q: TV \rightarrow S$</span>. </p>
<p>For the rest of this post, we study <span class="math-container">$S$</span> and its representation theory.</p>
<p>Let <span class="math-container">$r := x + y + z, s := x + \omega y + \omega^2 z, t := x + \omega^2 y + \omega z$</span>, where <span class="math-container">$\omega$</span> is the third root of unity. Then equations <span class="math-container">$(2')$</span> and <span class="math-container">$(3')$</span> can be rewritten:</p>
<p><span class="math-container">$$r^2 = st = ts$$</span></p>
<p>Note that <span class="math-container">$r^2$</span> is a central element, as <span class="math-container">$r^2$</span> commutes with <span class="math-container">$r$</span>, and <span class="math-container">$s$</span> and <span class="math-container">$t$</span> commute with each other, and so both commute with <span class="math-container">$st$</span>. This should lead to a nice description of an indecomposable subspace <span class="math-container">$V$</span>: central elements act as a scalar multiple of the identity on indecomposables, so choose a constant <span class="math-container">$c$</span>. If <span class="math-container">$c \neq 0$</span>, then let <span class="math-container">$s$</span> be any nonsingular matrix, and let <span class="math-container">$r'$</span> be any projection that doesn't commute with any of the projections that can be expressed as a polynomial of <span class="math-container">$s$</span>. Then set <span class="math-container">$r = \sqrt{c}(2 r' - I), t = c s^{-1}$</span> (for some choice of <span class="math-container">$\sqrt{c}$</span>). </p>
<p>I am still working on the case where <span class="math-container">$c = 0$</span>. </p>
|
2,460,195 | <p>I had the following question:</p>
<p>Three actors are to be chosen out of five — Jack, Steve, Elad, Suzy, and Ali. What is the probability that Jack and Steve would be chosen, but Suzy would be left out?</p>
<p>The answer given was:
Total Number of actors = $5$;
Since Jack and Steve need to be in the selection and Suzy is to be left out, only one selection matters.
Number of actors apart from Jack, Steve, and Suzy = $2$;
Probability of choosing 3 actors including Jack and Steve, but not Suzy = $$\frac{C(2,1)}{C(5,3)} = \frac{2}{10} = \frac{1}{5}$$</p>
<p>I do not understand the answer. What do they mean by only one selection matters? It looks like they are choosing $1$ person from $2$ combinations? Why? Can anyone please explain this.</p>
<p>Thanks</p>
| Satish Ramanathan | 99,745 | <p>we can use generating function to find the number of solution for the problem that you have stated.</p>
<p>That is you will have to find the coefficient of $x^{16}$ in the following products of g.f</p>
<p>$(1+x^2+x^4+..)(1+x+x^2+x^3+...)^2$</p>
<p>The number of solutions is $\boxed{81}$</p>
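<p>A brute-force count (a Python sketch under the stated reading: one even variable and two unrestricted nonnegative variables summing to $16$) agrees:</p>

```python
# Coefficient of x^16 in (1 + x^2 + x^4 + ...)(1 + x + x^2 + ...)^2, i.e. the
# number of solutions of a + b + c = 16 with a even and a, b, c >= 0.
count = sum(1
            for a in range(0, 17, 2)   # a even
            for b in range(17 - a))    # c = 16 - a - b is then determined
assert count == 81
```
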
|
2,682,093 | <blockquote>
<p>Let $\circ$ be an inequality.</p>
<p>Prove $|x| \circ a \equiv -a \circ x \circ a$.</p>
</blockquote>
<p>If $x$ is positive, then $|x| \circ a = x \circ a$.</p>
<p>If $x$ is negative, then $|x| \circ a = x \circ a$ ?</p>
| Fred | 380,717 | <p>I suppose that you have to show: for $x,a \in \mathbb R$ and $a \ge 0$:</p>
<p>$|x| \le a$ implies $-a \le x \le a$.</p>
<p>Case 1: $x \ge 0$. Then we get from $|x| \le a$ that $0 \le x \le a$ and hence $-a \le x \le a$.</p>
<p>Case 2: $x < 0$. Then we get from $|x| \le a$ that $- x \le a$ and hence $-a \le x <0$, thus $-a \le x \le a$.</p>
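<p>The two cases can also be checked exhaustively on a grid of integers (a small Python sketch for illustration):</p>

```python
# Verify |x| <= a  implies  -a <= x <= a over an integer grid with a >= 0.
ok = all(-a <= x <= a
         for a in range(0, 50)
         for x in range(-60, 60)
         if abs(x) <= a)
assert ok
```
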
|
670,570 | <p>Four gallons of yellow paint plus two gallons of red paint make orange paint. I assume this makes six gallons. So the ratio is 4:2, or 2:1.</p>
<p>Question: how many gallons of yellow paint, and how many gallons or red paint, to make two gallons of orange paint?</p>
<pre><code>2y + r = 2
2y + y = 2
3y = 2
y = 2/3
</code></pre>
<p>or</p>
<pre><code>4y + 2r = 6
(4y + 2r)/3 = 2
</code></pre>
<p>so I get 4/3 and 2/3.</p>
<p>However, in this section of <a href="http://www.pearsonschool.com/index.cfm?locator=PS2cJa&PMDBSUBCATEGORYID=25741&PMDBSITEID=2781&PMDBSUBSOLUTIONID=&PMDBSOLUTIONID=6724&PMDBSUBJECTAREAID=&PMDBCATEGORYID=806&PMDbProgramID=32310&elementType=attribute&elementID=1" rel="nofollow">the text book</a>, I'm not sure that it's "allowed" to do any of that. Is it possible to solve this just with cross-multiplying a ratio?</p>
<p>Their examples setup a ratio with an unknown <code>n</code>, cross multiply and solve for <code>n</code>. I don't see how to solve this word problem with that technique.</p>
| mathematics2x2life | 79,043 | <p>Try plain polynomial long division. If it gives you trouble, try it with a smaller example like $x^4-2x^3+4x^2-x+7$ divided by $x^2-3$. </p>
<p>In your original problem, you should find the quotient to be....</p>
<blockquote class="spoiler">
<p> $x^{40}-x^{39}+x^{35}-x^{34}+x^{30}-x^{28}+x^{25}-x^{23}+x^{20}-x^{17}+x^{15}-x^{12}+x^{10}-x^6+x^5-x+1$</p>
</blockquote>
|
670,570 | <p>Four gallons of yellow paint plus two gallons of red paint make orange paint. I assume this makes six gallons. So the ratio is 4:2, or 2:1.</p>
<p>Question: how many gallons of yellow paint, and how many gallons or red paint, to make two gallons of orange paint?</p>
<pre><code>2y + r = 2
2y + y = 2
3y = 2
y = 2/3
</code></pre>
<p>or</p>
<pre><code>4y + 2r = 6
(4y + 2r)/3 = 2
</code></pre>
<p>so I get 4/3 and 2/3.</p>
<p>However, in this section of <a href="http://www.pearsonschool.com/index.cfm?locator=PS2cJa&PMDBSUBCATEGORYID=25741&PMDBSITEID=2781&PMDBSUBSOLUTIONID=&PMDBSOLUTIONID=6724&PMDBSUBJECTAREAID=&PMDBCATEGORYID=806&PMDbProgramID=32310&elementType=attribute&elementID=1" rel="nofollow">the text book</a>, I'm not sure that it's "allowed" to do any of that. Is it possible to solve this just with cross-multiplying a ratio?</p>
<p>Their examples setup a ratio with an unknown <code>n</code>, cross multiply and solve for <code>n</code>. I don't see how to solve this word problem with that technique.</p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>Both are in Geometric Series</p>
<p>$$1+x+x^2+x^3+x^4=\frac{1-x^5}{1-x}$$ and $$1+x^{11}+x^{22}+x^{33}+x^{44}=\frac{1-x^{55}}{1-x^{11}}$$</p>
<p>So, $$\frac{1+x^{11}+x^{22}+x^{33}+x^{44}}{1+x+x^2+x^3+x^4}=\frac{(1-x^{55})(1-x)}{(1-x^{11})(1-x^5)}$$</p>
<p>Now using <a href="https://math.stackexchange.com/questions/7473/prove-that-gcdan-1-am-1-a-gcdn-m-1">Prove that $\gcd(a^n - 1, a^m - 1) = a^{\gcd(n, m)} - 1$</a>, </p>
<p>$1-x^{55}$ is divisible by $1-x^5,1-x^{11}$ and </p>
<p>$(1-x^{11},1-x^5)=1-x$</p>
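<p>Carrying out the polynomial division directly (a Python sketch with little-endian coefficient lists) confirms that the remainder vanishes, as the gcd argument predicts:</p>

```python
# Divide 1 + x^11 + x^22 + x^33 + x^44 by the monic 1 + x + x^2 + x^3 + x^4.
def polydiv(num, den):
    num = num[:]                                  # work on a copy
    q = [0] * (len(num) - len(den) + 1)
    for i in range(len(num) - len(den), -1, -1):
        c = num[i + len(den) - 1]                 # coefficient to cancel
        q[i] = c                                  # den has leading coeff 1
        for j, d in enumerate(den):
            num[i + j] -= c * d
    return q, num                                 # quotient, remainder

num = [1 if k % 11 == 0 else 0 for k in range(45)]  # 1 + x^11 + ... + x^44
den = [1, 1, 1, 1, 1]                               # 1 + x + x^2 + x^3 + x^4
quot, rem = polydiv(num, den)
assert all(c == 0 for c in rem)    # exact division
assert len(quot) == 41             # the quotient has degree 40
```
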
|
555,051 | <p>I am having trouble understanding the following homework question,</p>
<p>Let $H=\{0,\pm 3, \pm6, \pm9,\ldots\}$. Find all the left cosets of $H$ in $\Bbb Z$.</p>
<p>I know the answer is $H$, $1+H$, and $2+H$ but I am having difficulty understanding why.</p>
<p>Thank you!</p>
| anon | 11,763 | <p>Consider the three cosets $\color{Blue}{3\Bbb Z}$, $\color{Green}{1+3\Bbb Z}$, $\color{Red}{2+3\Bbb Z}$ as they sit inside $\color{Black}{\Bbb Z}$ itself:</p>
<p>$~~~$ <img src="https://i.stack.imgur.com/Sb5CS.png" alt="mod 3"></p>
<p>Do you see any numbers that were skipped? Notice that the pattern repeats. How can we justify this though, formally and algebraically? We need to be able to take any coset and show that it is actually equal to one of these three. It suffices to show that every integer is either $0$, $1$ or $2$ plus a multiple of $3$. This leads us to consider representations of integers of the form $n=3q+r$ where the residue $r$ is $0\le r<3$. Know any theorems or identities or whatnot that deal with this?</p>
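<p>The repeating pattern is easy to confirm computationally (an illustrative Python sketch): every integer in a sample window lands in exactly one of the three classes:</p>

```python
# Sort a window of integers into the cosets of H = 3Z by residue mod 3.
cosets = {0: [], 1: [], 2: []}
for n in range(-9, 10):
    cosets[n % 3].append(n)      # Python's % returns 0, 1 or 2 here

assert cosets[0] == [-9, -6, -3, 0, 3, 6, 9]   # H itself
assert cosets[1] == [-8, -5, -2, 1, 4, 7]      # 1 + H
assert cosets[2] == [-7, -4, -1, 2, 5, 8]      # 2 + H
```
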
|
<p>20 choices for the 1st person.
17 choices for the 2nd person (must exclude 1st and his/her two neighbours) </p>
<p>For 2 of these choices of 2nd person, there is one shared neighbour, so 15 remaining choices. (e.g. if they are numbered 1 to 20 in a circle, 1st person is #1, 2nd is #3, then people 20,1,2,3,4 are excluded).
For the other 15 choices of 2nd person, there are no shared neighbors, so 14 remaining choices. </p>
<p>So if order matters, total is $20 \cdot (2 \cdot 15 + 15 \cdot 14) $
but since order does not matter, divide by $3! = 6$ to account for the permutations in order of the 3 people.
So total = $20 \cdot \frac{2 \cdot 15 + 15 \cdot 14}{6}$</p>
<p>just redid it; does this make any sense?</p>
| true blue anil | 22,388 | <p>Here is another way:<br>
Form three blocks with a non-chosen $(N)$ clockwise of a chosen $(C),\;e.g. \boxed {CN}$</p>
<p>We now have $17$ entities, viz. $3$ blocks and $14$ others</p>
<p>We can place the blocks in $\binom{17}3$ ways, fulfilling the "non-neighbour" criterion,</p>
<p><em>but since in the process we are giving each entity only $17$ starting places instead of</em> $20$,</p>
<p>ans = $\binom{17}3\cdot\frac{20}{17} = 800$ </p>
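<p>A brute-force enumeration over all $\binom{20}{3}$ triples of seats (a Python sketch, seats labeled $0$ to $19$ around the circle) agrees with both computations:</p>

```python
from itertools import combinations

# Count unordered triples from a circle of 20 with no two seats adjacent.
n = 20
def adjacent(i, j):
    return (i - j) % n in (1, n - 1)

count = sum(1 for trio in combinations(range(n), 3)
            if not any(adjacent(i, j) for i, j in combinations(trio, 2)))
assert count == 800
```
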
|
2,028,799 | <p>I've been trying to get this limit for hours. Can someone help me, please?<br>
The solution manual says it's 0 but I can't get there. I tried to use $\lim_{h\to0} h\cos(1/h) = 0$.</p>
<p>$$\lim_{x\to 0}\frac{x\cos(1/x)}{x-\sqrt{x}}.$$<br>
Thank you.</p>
| StackTD | 159,845 | <blockquote>
<p>$$\lim_{x\to 0}\frac{x\cos(1/x)}{x-\sqrt{x}}.$$ </p>
</blockquote>
<p>Due to the $\sqrt{x}$, clearly $x>0$. Substitute $t = \frac{1}{x}$, then $t \to +\infty$ as $x \to 0^+$ and you have:</p>
<p>$$\lim_{x\to 0}\frac{x\cos\left(\tfrac{1}{x}\right)}{x-\sqrt{x}}
= \lim_{t\to +\infty}\frac{\tfrac{1}{t}\cos\left(t\right)}{\tfrac{1}{t}-\sqrt{\tfrac{1}{t}}}
= \lim_{t\to +\infty}\frac{\cos\left(t\right)}{1-\sqrt{t}}$$
And the limit is clearly $0$ since $-1 \le \cos t \le 1$ and the denominator is unbounded.</p>
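<p>A numerical check (an illustrative Python sketch, not part of the answer): for $0<x<1$ the squeeze $|f(x)|\le \frac{x}{\sqrt x - x}=\frac{\sqrt x}{1-\sqrt x}$ forces the values to $0$:</p>

```python
import math

def f(x):
    return x * math.cos(1 / x) / (x - math.sqrt(x))

# |f(x)| <= sqrt(x)/(1 - sqrt(x)) for 0 < x < 1, and the bound tends to 0.
for x in (1e-2, 1e-4, 1e-6, 1e-8):
    bound = math.sqrt(x) / (1 - math.sqrt(x))
    assert abs(f(x)) <= bound + 1e-12
assert abs(f(1e-8)) < 1e-3
```
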
|
1,730,445 | <p>I am checking whether the limit is true or not. $z$ is complex number
\begin{equation}
\lim_{z \rightarrow 0} z\sin(\frac{1}{z})=0
\end{equation}
I found Laurent series of $z\sin(\frac{1}{z})$ which is $\sum_{n=0}^{\infty} (\frac{1}{z})^{2n}\frac{1}{(2n+1)!}(-1)^n$ </p>
<p>the series is not defined $z =0$.</p>
<p>Can the Laurent series gives us any information about the limit whether it is true or not ? Thank you in advance for your help.</p>
| Julián Aguirre | 4,791 | <p>The limit does not exist, since $\displaystyle z\sin\frac1z$ has an essential singularity at $z=0$. That is, there are infinitely many non-zero negative powers in the Laurent series around $z=0$.</p>
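<p>The two kinds of behavior are easy to exhibit numerically (a Python sketch added for illustration): along the real axis the values shrink, while along the imaginary axis $\sin\frac1z$ grows like $\sinh\frac{1}{|z|}$:</p>

```python
import cmath

def g(z):
    return z * cmath.sin(1 / z)

# Real axis: |z*sin(1/z)| <= |z| because sin is bounded on the reals.
assert abs(g(0.01 + 0j)) <= 0.01 + 1e-15
# Imaginary axis: sin(1/z) involves sinh(100) here, so the value explodes.
assert abs(g(0.01j)) > 1e40
```
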
|
3,709,331 | <p>How can I integrate
<span class="math-container">$$
\int \frac{x\,dx}{(a-bx^2)^2}
$$</span> I've tried to use partial fraction decomposition, but I'm getting six equations for four variables, and they don't give uniform answers.</p>
| Vishu | 751,311 | <p>Substitute <span class="math-container">$x=\frac{\sqrt a}{\sqrt b} \sin \theta \implies dx=\frac{\sqrt a}{\sqrt b}\cos \theta d\theta$</span> to get <span class="math-container">$$\frac{1}{ab}\int\tan \theta \cdot \sec^2\theta \ d\theta =\frac {1}{ab}\int\tan\theta \ d(\tan\theta) = \frac {1}{2ab}\tan^2\theta+C\\=\frac{1}{2ab} \frac{\sin^2\theta}{1-\sin^2\theta}+C\\=\frac{1}{2ab} \frac{\frac ba x^2}{1-\frac ba x^2}+C \\ =\frac{x^2}{2a(a-bx^2)} +C$$</span></p>
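<p>A numerical spot check of the antiderivative (a Python sketch with arbitrarily chosen sample values $a=3$, $b=2$), via a central difference quotient:</p>

```python
# Verify d/dx [ x^2 / (2a(a - b x^2)) ] = x / (a - b x^2)^2 numerically.
a, b = 3.0, 2.0

def F(x):
    return x * x / (2 * a * (a - b * x * x))

def integrand(x):
    return x / (a - b * x * x) ** 2

x, h = 0.5, 1e-6
central_diff = (F(x + h) - F(x - h)) / (2 * h)
assert abs(central_diff - integrand(x)) < 1e-6
```
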
|
2,442,297 | <blockquote>
<p>Ryan has been given a salary increase of $7.39\%$. The increase amounts to €$4231$.</p>
<p>His salary is now $x$. Solve for $x$.</p>
</blockquote>
<p>My head is saying </p>
<p>$$\begin{align}
4231 / 7.39 &= 572 \\
572 * 100 &= 57,200
\end{align}$$</p>
<p>is not correct, but I am having a brainfart right now.</p>
<p>Can anyone help ?</p>
<p>Thanks.</p>
| alexjo | 103,399 | <p>$$
y\times 7.39\%= 4,231\quad\Longrightarrow \quad y=\frac{ 4,231}{7.39\%}=\frac{ 4,231}{0.0739}\approx 57,253\\
x=y(1+7.39\%)=57,253+4,231=61,484
$$</p>
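<p>The same arithmetic in code (a small Python sketch):</p>

```python
# Back out the pre-raise salary from the raise amount, then add the raise.
rate = 0.0739            # 7.39 %
raise_amount = 4231
old_salary = raise_amount / rate
new_salary = old_salary + raise_amount   # equals old_salary * (1 + rate)

assert round(old_salary) == 57253
assert round(new_salary) == 61484
```
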
|
2,294,991 | <p>I am trying to find the limit as $x\to 8$ of the following function. What follows is the function and then the work I've done on it. </p>
<p>$$ \lim_{x\to 8}\frac{\frac{1}{\sqrt{x +1}} - \frac{1}{3}} {x-8}$$</p>
<hr>
<p>\begin{align}\frac{\frac{1}{\sqrt{x +1}} - \frac{1}{3}} {x-8} &= \frac{\frac{1}{\sqrt{x +1}} - \frac{1}{3}} {x-8} \times \frac{\frac{1}{\sqrt{x +1}} + \frac{1}{3}}{\frac{1}{\sqrt{x +1}} + \frac{1}{3}} \\\\
& = \frac{\frac{1}{x+1}-\frac{1}{9}}{(x-8)\left(\frac{1}{\sqrt{x +1}} + \frac{1}{3}\right)}\\\\
& = \frac{8-x}{(x-8)\left(\frac{1}{\sqrt{x +1}} + \frac{1}{3}\right)}\\\\
& = \frac {-1}{\frac{1}{\sqrt{x +1}} + \frac{1}{3}}\end{align}</p>
<p>At this point I try direct substitution and get:
$$ = \frac{-1}{\frac{2}{3}}$$</p>
<p>This is not the answer. Could someone please help me figure out where I've gone wrong?</p>
| Amarildo | 307,377 | <p>$$
\begin{aligned}
\lim\limits_{x \to 8} \frac{\frac{1}{\sqrt{x +1}} - \frac 13}{x-8}
& = \lim _{t\to 0}\left(\frac{\frac{1}{\sqrt{\left(t+8\right)\:+1}}\:-\:\frac{1}{3}}{\left(t+8\right)-8}\right)
\\& = \lim _{t\to 0}\left(\frac{\left(3-\sqrt{t+9}\right)\sqrt{t+9}}{3t^2+27t}\right)
\\& = \lim _{t\to \:0}\left(-\frac{1}{3\left(3+\sqrt{t+9}\right)\sqrt{t+9}}\right)
\\& = \color{red}{-\frac{1}{54}}
\end{aligned}
$$</p>
|
2,294,991 | <p>I am trying to find the limit as $x\to 8$ of the following function. What follows is the function and then the work I've done on it. </p>
<p>$$ \lim_{x\to 8}\frac{\frac{1}{\sqrt{x +1}} - \frac{1}{3}} {x-8}$$</p>
<hr>
<p>\begin{align}\frac{\frac{1}{\sqrt{x +1}} - \frac{1}{3}} {x-8} &= \frac{\frac{1}{\sqrt{x +1}} - \frac{1}{3}} {x-8} \times \frac{\frac{1}{\sqrt{x +1}} + \frac{1}{3}}{\frac{1}{\sqrt{x +1}} + \frac{1}{3}} \\\\
& = \frac{\frac{1}{x+1}-\frac{1}{9}}{(x-8)\left(\frac{1}{\sqrt{x +1}} + \frac{1}{3}\right)}\\\\
& = \frac{8-x}{(x-8)\left(\frac{1}{\sqrt{x +1}} + \frac{1}{3}\right)}\\\\
& = \frac {-1}{\frac{1}{\sqrt{x +1}} + \frac{1}{3}}\end{align}</p>
<p>At this point I try direct substitution and get:
$$ = \frac{-1}{\frac{2}{3}}$$</p>
<p>This is not the answer. Could someone please help me figure out where I've gone wrong?</p>
| zhw. | 228,045 | <p>By definition, this is $f'(8)$ for $f(x) = (x+1)^{-1/2}.$</p>
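<p>Both approaches predict $-\frac{1}{54}\approx -0.0185$, which a difference-quotient check (an illustrative Python sketch) confirms:</p>

```python
import math

def f(x):
    return 1 / math.sqrt(x + 1)

# The limit in question is the difference quotient of f at x = 8, so it
# should approach f'(8) = -1/54.
h = 1e-6
dq = (f(8 + h) - f(8)) / h
assert abs(dq - (-1 / 54)) < 1e-5
```
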
|
2,316,561 | <p>How to evaluate the integral $$\frac{1}{2 \pi i}
\int \limits_{c-i \infty}^{c+i \infty} \frac{ds}{s(1-q^{1-s})}\text{?}$$ I tried with Perron's formula but I couldn't solve it. The result of the integral is $\frac{1}{2}$. Can someone help please?!</p>
| reuns | 276,986 | <p>For some $q > 1$ let</p>
<p>$$f(x) = \sum_{n \ge 0,q^n < e^x} q^n$$
For $x = n \log q$ we take the mean value of the left and right limit, that is
$$f(n \log q) = \frac{f(\epsilon+n\log q)+f(-\epsilon+n\log q)}{2}$$</p>
<p>Thus $f(0) = 1/2$</p>
<hr>
<p>For $\Re(s ) > 1$
$$F(s) = \frac{1}{1-q^{1-s}} = \sum_{n=0}^\infty q^n q^{-sn} = s \int_0^\infty f(x) e^{-sx}dx$$</p>
<p>By inverse Fourier/Laplace/Mellin transform (or Perron's formula), for $c > 1$ and $x \ne n \log q$ </p>
<p>$$f(x) = \frac{1}{2i\pi} \int_{c-i \infty}^{c+i\infty} \frac{F(s)}{s} e^{sx}ds$$</p>
<p>With $x=0$</p>
<p>$$f(0) = \frac12= \frac{1}{2i\pi} \int_{c-i \infty}^{c+i\infty} \frac{F(s)}{s} ds= \frac{1}{2i\pi} \int_{c-i \infty}^{c+i\infty} \frac{1}{s(1-q^{1-s})} ds$$
And the RHS is analytic in $q$, so we can extend by analytic continuation to $q \in \mathbb{C}$, and the same for $c$.</p>
|
4,111,835 | <p>So I was given the following prompt:</p>
<blockquote>
<p>When <span class="math-container">$x=−2$</span>, for what values of p does the series converge?
<span class="math-container">$$\sum_{n=1}^\infty\left(\frac{(-1)^{n+1}(x-3)^n}{5^n\cdot n^p}\right)$$</span></p>
</blockquote>
<p>I ended up working out this problem to find out that it's convergent for all values of <span class="math-container">$p$</span> greater than or equal to <span class="math-container">$2$</span>, but I'm a bit confused about how to show this. I'm also confused over whether or not this would be an alternating series, since more often than not the <span class="math-container">$-1$</span> in the numerator has been indicative of an alternating series. Any help would be appreciated!</p>
| Community | -1 | <p>It diverges for <span class="math-container">$p=1$</span> (harmonic series).</p>
<p>It simplifies to <span class="math-container">$-1\cdot\sum_{n=1}^\infty\dfrac1{n^p}$</span>. So it converges for <span class="math-container">$p\gt1$</span> (p-series).</p>
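<p>Partial sums illustrate the dividing line (a Python sketch added for illustration): for $p=2$ they settle near $-\pi^2/6$, while for $p=1$ they drift off like $-\ln N$:</p>

```python
import math

def partial(p, N):
    # Partial sum of the series at x = -2; each term reduces to -1/n^p.
    return sum(-1.0 / n ** p for n in range(1, N + 1))

assert abs(partial(2, 100000) - (-math.pi ** 2 / 6)) < 1e-3
assert partial(1, 100000) < -11    # unbounded below, tracking -ln(N)
```
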
|
3,758,635 | <p>I'm trying to prove <span class="math-container">$n+\left(-1\right)^n\ge \dfrac{n}{2}$</span> is true for all natural numbers <span class="math-container">$n \ge 2$</span> via induction. The base case is trivial as
<span class="math-container">$$2+(-1)^2 \ge \frac{1}{2}(2)$$</span>
<span class="math-container">$$3 \ge 1.$$</span>
For the induction step, I'm looking at <span class="math-container">$$(n+1) + (-1)^{n+1} = n+1+(-1)(-1)^n$$</span>
<span class="math-container">$$\ge \frac{1}{2}n +1-2\cdot(-1)^n$$</span>
This is where I get stuck. Any help would be appreciated.</p>
| Paul Frost | 349,785 | <p>You know the <em>Neumann series</em> <span class="math-container">$\sum_{i=0}^\infty T^i$</span> of <span class="math-container">$T$</span>. It is well known that if the Neumann series of <span class="math-container">$T$</span> converges, then <span class="math-container">$Id - T$</span> is invertible and its inverse is given by the Neumann series of <span class="math-container">$T$</span>.</p>
<p>The Neumann series certainly converges for <span class="math-container">$\lVert T \rVert < 1$</span>. However, it also converges if <span class="math-container">$\lVert T^n \rVert < 1$</span> for some <span class="math-container">$n$</span>. To see this, note that
<span class="math-container">$$(Id-T)(\sum_{i=0}^{n-1}T^i) = Id -T^n , \tag{1}$$</span>
<span class="math-container">$$(\sum_{i=0}^{n-1}T^i)(Id-T) = Id -T^n . \tag{2}
$$</span>
Since <span class="math-container">$Id-T^n$</span> is invertible in <span class="math-container">$BL(X)$</span>, it is bijective. Thus (1) implies that <span class="math-container">$Id-T$</span> is surjective and (2) that <span class="math-container">$Id-T$</span> is injective. Hence <span class="math-container">$Id-T$</span> is bijective. This doesn't automatically mean that the algebraic inverse <span class="math-container">$(Id-T)^{-1} \in L(X)$</span> is bounded. However, we have <span class="math-container">$S= (\sum_{i=0}^{n-1}T^i) (Id- T^n)^{-1} \in BL(X)$</span> and by (1)
<span class="math-container">$$
(Id-T)S = (Id -T^n)(Id -T^n)^{-1} = Id \tag{3}
$$</span>
which shows that
<span class="math-container">$$(Id-T)^{-1} = S \in BL(X) . \tag{4}$$</span></p>
<p>Now you see why <span class="math-container">$(*)$</span> helps: <span class="math-container">$\frac{\lvert b-a\rvert^{n}}{n!}$</span> becomes arbitrarily small, hence <span class="math-container">$\lVert A^n \rVert \le \frac{\lvert b-a\rvert^{n}}{n!}\lVert h \rVert_{\infty} < 1$</span> for large enough <span class="math-container">$n$</span>.</p>
<p>By the way, you can also show directly via <span class="math-container">$(*)$</span> that the Neumann series of <span class="math-container">$A$</span> converges. In fact,
<span class="math-container">$$\left\lVert \sum_{i=n}^m A^i \right\rVert \le \sum_{i=n}^m \lVert A^i \rVert \le \sum_{i=n}^m \frac{\lvert b-a\rvert^{i}}{i!}\lVert h \rVert_{\infty} = \left(\sum_{i=n}^m \frac{\lvert b-a\rvert^{i}}{i!}\right) \lVert h \rVert_{\infty} . \tag{5}$$</span></p>
<p>But <span class="math-container">$\sum_{i=n}^m \frac{\lvert b-a\rvert^{i}}{i!}$</span> is a section of the convergent series <span class="math-container">$e^{\lvert b-a\rvert} = \sum_{i=0}^\infty \frac{\lvert b-a\rvert^{i}}{i!}$</span>, thus becomes arbitrarily small for large enough <span class="math-container">$n$</span>.</p>
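<p>The role of the weaker hypothesis <span class="math-container">$\lVert T^n \rVert &lt; 1$</span> can be illustrated numerically. The example below is my own (not from the text): a <span class="math-container">$2\times 2$</span> matrix of operator norm <span class="math-container">$2 &gt; 1$</span> whose square is zero, so the Neumann series terminates at <span class="math-container">$Id + T$</span> and indeed inverts <span class="math-container">$Id - T$</span>:</p>

```python
# Minimal illustration: T is nilpotent (T^2 = 0), so even though ||T|| > 1
# the Neumann series is the finite sum Id + T, and (Id - T)(Id + T) = Id.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
T = [[0.0, 2.0], [0.0, 0.0]]
assert matmul(T, T) == [[0.0, 0.0], [0.0, 0.0]]      # T is nilpotent
neumann = [[I[i][j] + T[i][j] for j in range(2)] for i in range(2)]     # Id + T
id_minus_T = [[I[i][j] - T[i][j] for j in range(2)] for i in range(2)]  # Id - T
assert matmul(id_minus_T, neumann) == I               # right inverse
assert matmul(neumann, id_minus_T) == I               # left inverse
```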
|
263,359 | <p>Let $H$ be a monoid, and denote by $H^\times$ and $\mathcal A(H)$, respectively, the <em>set of units</em> (or <em>invertible elements</em>) and the <em>set of atoms</em> (or <em>irreducible elements</em>) of $H$ (an element $a \in H$ is an atom if $a \notin H^\times$ and $a = xy$ for some $x, y \in H$ implies $x \in H^\times$ or $y \in H^\times$). </p>
<p>Given $x \in H$, we set $\mathsf L_H(x) := \{k \in \mathbf N^+: x = a_1 \cdots a_k \text{ for some }a_1, \ldots, a_k \in \mathcal A(H)\}$ if $x \ne 1_H$ and $\mathsf L_H(x) := \{0\} \subseteq \mathbf N$ otherwise (in factorization theory, this is referred to as the <em>set of lengths</em> of $x$ (relative to the atoms of $H$)). We say that $H$ is a <em>BF-monoid</em> if $\mathsf L_H(x)$ is non-empty and finite for every $x \in H \setminus H^\times$.</p>
<blockquote>
<p><strong>Q.</strong> Does there exist a commutative BF-monoid $H$ such that $H \ne H^\times$ and $au = a$ for all $a \in \mathcal A(H)$ and $u \in H^\times$? If so, can we make $|H^\times| = \kappa$ for every fixed (small) cardinal $\kappa \ne 0$?</p>
</blockquote>
<p>My guess is that the answer to both questions is positive, but so far I haven't been able to construct an example to prove it. And though the question is not so important, a positive answer would shed light on the relation (and the contrast) between two different "philosophies" beyond the definition of what is called the <em>factorization monoid</em> of $H$.</p>
| Benjamin Steinberg | 15,934 | <p>The answer seems yes if I understood the question. Take $G$ to be any commutative group and $N$ be the free semigroup on one generator $x$ (so isomorphic to the positive natural numbers under $+$, but I want to use multiplicative notation). Let $H=G\cup N$ where the operations on $G$ and $N$ are their original operations and the product $gn=n=ng$ for any $g\in G$ and $n\in N$. Then $H^\times=G$ and your property is satisfied. The element $x$ is the only atom of $H$ and each non-unit is uniquely a product of atoms, so it is a BF-monoid. Of course $G$ can have any cardinality.</p>
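<p>The construction can be made concrete in code. The encoding below is my own (tuples <code>('g', k)</code> for the group $G = \mathbb Z_4$, i.e. the units, and <code>('n', j)</code> for $x^j$, $j \ge 1$); it checks associativity on a finite sample and the absorption rule $gn = n = ng$:</p>

```python
# A concrete sketch of the monoid H = G ∪ N (my encoding, not from the answer).
M = 4  # order of the commutative group G = Z_4

def mult(u, v):
    if u[0] == 'g' and v[0] == 'g':
        return ('g', (u[1] + v[1]) % M)      # group multiplication in Z_4
    if u[0] == 'n' and v[0] == 'n':
        return ('n', u[1] + v[1])            # x^j * x^k = x^(j+k)
    return u if u[0] == 'n' else v           # g*n = n = n*g: units are absorbed

G = [('g', k) for k in range(M)]
N = [('n', j) for j in range(1, 6)]
H = G + N
# associativity on every triple of this finite sample
for a in H:
    for b in H:
        for c in H:
            assert mult(mult(a, b), c) == mult(a, mult(b, c))
# the unique atom x = ('n', 1) absorbs every unit
assert all(mult(('n', 1), g) == ('n', 1) for g in G)
```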
|
3,503,518 | <blockquote>
<p>Determine if <span class="math-container">$ \sum_{n=2}^{\infty} \frac{(\sin{n})\sum\limits_{k=1}^{n} \frac{1}{k}}{(\log n)^2}$</span> is convergent or divergent.</p>
</blockquote>
<hr>
<p>[My attempt]</p>
<p>It seems like Dirichlet test, so I tried to show that <span class="math-container">$a_n := \frac{\sum\limits_{k=1}^{n} \frac{1}{k}}{(\log n)^2}$</span> is decreasing and converges to zero.</p>
<p>By the <a href="https://en.wikipedia.org/wiki/Integral_test_for_convergence" rel="nofollow noreferrer">integral test proof</a>, I know that
<span class="math-container">$$
\int_1^{n+1}\frac{dx}{x}\leq\sum_{k=1}^n\frac{1}{k}\leq 1+\int_1^{n}\frac{dx}{x}
$$</span>
Since <span class="math-container">$\int\frac{dx}{x}=\ln(x)+C$</span>, I can calculate that <span class="math-container">$a_n$</span> converges to zero by the squeeze theorem.</p>
<p>However, I can't show that <span class="math-container">$a_n$</span> is a monotonic decreasing sequence...</p>
<p>How to solve this?</p>
| SL_MathGuy | 730,041 | <p>Consider <span class="math-container">$$a_n = \dfrac{\sum _{k=1}^{n}1/k}{[log (n)]^2}.$$</span> It can be proved that,</p>
<p><span class="math-container">$$\sum _{k=1}^{n}1/k = log(n) + \gamma + o(1),$$</span> where <span class="math-container">$\gamma$</span> is Euler's constant.
Using this, we can simplify <span class="math-container">$a_n$</span> so that</p>
<p><span class="math-container">$a_n = 1/log (n) + \gamma/[log(n)]^2+o(1/[log(n)]^2)$</span>. Treating <span class="math-container">$n$</span> as a continuous variable and differentiating shows that <span class="math-container">$a_n' \leq 0$</span> for large <span class="math-container">$n$</span>. Hence, the sequence {<span class="math-container">$a_n$</span>} is eventually non-increasing, which is enough. Finally, apply Dirichlet's test to conclude that the series is convergent.</p>
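<p>A quick numeric sanity check (illustration only, not a proof) that <span class="math-container">$a_n$</span> tends to <span class="math-container">$0$</span> and decreases, as the asymptotic <span class="math-container">$H_n \sim log(n) + \gamma$</span> suggests:</p>

```python
import math

# a_n = H_n / (log n)^2 computed directly from the definition.
def a(n):
    H = sum(1.0 / k for k in range(1, n + 1))   # the harmonic number H_n
    return H / math.log(n) ** 2

values = [a(n) for n in range(2, 2000)]
assert all(x > y for x, y in zip(values, values[1:]))   # decreasing on this range
assert a(1999) < a(10) / 2                              # visibly heading to 0
```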
|
381,093 | <p>I would like to prove Chebyshev's sum inequality, which states that:</p>
<p>If <span class="math-container">$a_1\geq a_2\geq \cdots \geq a_n$</span> and <span class="math-container">$b_1\geq b_2\geq \cdots \geq b_n$</span>, then<br />
<span class="math-container">$$
\frac{1}{n}\sum_{k=1}^n a_kb_k\geq \left(\frac{1}{n}\sum_{k=1}^n a_k\right)\left(\frac{1}{n}\sum_{k=1}^n b_k\right)
$$</span><br />
I am familiar with the non-probabilistic proof, but I need a probabilistic one.</p>
| Iosif Pinelis | 36,721 | <p>A more general inequality is
<span class="math-container">$$Ef(X)g(X)\ge Ef(X)\,Eg(X),\tag{1}$$</span>
where <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are nondecreasing (say bounded) functions from <span class="math-container">$\mathbb R$</span> to <span class="math-container">$\mathbb R$</span> and <span class="math-container">$X$</span> is any real-valued random variable. (In your case, <span class="math-container">$X$</span> is uniformly distributed on the set <span class="math-container">$[n]:=\{1,\dots,n\}$</span>, whereas <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are any nondecreasing bounded functions from <span class="math-container">$\mathbb R$</span> to <span class="math-container">$\mathbb R$</span> such that <span class="math-container">$f(j)=a_j$</span> and <span class="math-container">$g(j)=b_j$</span> for all <span class="math-container">$j\in[n]$</span>.)</p>
<p>To prove (1), note that <span class="math-container">$(f(x)-f(y))(g(x)-g(y))\ge0$</span> for all real <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, since <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are nondecreasing. Therefore, letting <span class="math-container">$Y$</span> denote an independent copy of <span class="math-container">$X$</span>, we have
<span class="math-container">$$0\le E(f(X)-f(Y))(g(X)-g(Y)) \\
=Ef(X)g(X)+Ef(Y)g(Y)-Ef(X)g(Y)-Ef(Y)g(X) \\
=Ef(X)g(X)+Ef(Y)g(Y)-Ef(X)\,Eg(Y)-Ef(Y)\,Eg(X) \\
=Ef(X)g(X)+Ef(X)g(X)-Ef(X)\,Eg(X)-Ef(X)\,Eg(X) \\
=2[Ef(X)g(X)-Ef(X)\,Eg(X)],$$</span>
whence (1) follows.</p>
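<p>The inequality can also be checked empirically. The sketch below (my addition) draws random sequences sorted in the same descending order and verifies the claim up to floating-point tolerance:</p>

```python
import random

# Empirical check: for sequences sorted the same way (descending),
# the mean of the products dominates the product of the means.
random.seed(0)
for _ in range(200):
    n = random.randint(2, 10)
    a = sorted((random.random() for _ in range(n)), reverse=True)
    b = sorted((random.random() for _ in range(n)), reverse=True)
    lhs = sum(x * y for x, y in zip(a, b)) / n
    rhs = (sum(a) / n) * (sum(b) / n)
    assert lhs >= rhs - 1e-12
```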
|
417,131 | <p>Does limit $\frac{4xy}{\sqrt{x^2+y^2}}$ as $(x,y) \to (0,0)$ exist or not?</p>
<p>To remove the root, I squared the whole expression and got the
limit $\frac{16x^2y^2}{x^2+y^2}$. Then I don't know how to continue working on it. Can anyone help me with this question please.</p>
<p>Thank you for your effort in advance!</p>
| DonAntonio | 31,254 | <p>An idea:</p>
<p>$$|x-y|\le|x|+|y|\implies \frac{\sin^2(x-y)}{|x|+|y|}\le\frac{\sin^2(x-y)}{|x-y|}=|\sin(x-y)|\frac{|\sin(x-y)|}{|x-y|}$$</p>
<p>Now just remember that $\,\displaystyle{\lim_{t\to 0}\frac{\sin t}t=1}\;$ and stuff...</p>
|
4,501,927 | <p>Hi, working again on the gamma function, I find the following:</p>
<p>Let <span class="math-container">$0<x\leq 1$</span> and <span class="math-container">$1\leq k\leq \infty$</span>, then define:</p>
<p><span class="math-container">$$f(x)=(x!)!,g(x)=\left(\left(x!\right)!\right)^{\frac{1}{e^{x^{k}}}}$$</span></p>
<p>Then a conjecture :</p>
<p>It seems <span class="math-container">$\exists k\in(1,\infty)$</span> such that <span class="math-container">$g(x)$</span> admits an asymptote as <span class="math-container">$x\to \infty$</span>.</p>
<p>Then see here <a href="https://math.stackexchange.com/questions/4467521/trying-to-prove-the-stirling-approximation-using-concavity-for-x-geq-1">Trying to prove the Stirling approximation using concavity for $x\geq 1$</a>; we can squeeze the function <span class="math-container">$f(x)$</span>, and it remains to determine some constant.</p>
<p>If my conjecture is true, how can this be achieved?</p>
| Claude Leibovici | 82,404 | <p>As I wrote in a comment, the same work can be done for
<span class="math-container">$$g(x)=(x!)!$$</span>
<span class="math-container">$$g(x)=g(a)+\Gamma (a+1) \Gamma (a!+1)\sum_{n=2}^\infty \frac {d_n} {n!} \, (x-a)^n$$</span> The first coefficients being
<span class="math-container">$$\color{red}{d_2}=\psi ^{(1)}(a+1) \psi ^{(0)}(a!+1)$$</span>
<span class="math-container">$$\color{red}{d_3}=\psi ^{(2)}(a+1) \psi ^{(0)}(a!+1)$$</span>
<span class="math-container">$$\color{red}{d_4}=\left(3 \psi ^{(1)}(a+1)^2+\psi ^{(3)}(a+1)\right) \psi ^{(0)}(a!+1)+$$</span> <span class="math-container">$$3 \Gamma
(a+1) \psi ^{(1)}(a+1)^2 \psi ^{(0)}(a!+1)^2+3 \Gamma (a+1) \psi ^{(1)}(a+1)^2
\psi ^{(1)}(a!+1)$$</span></p>
<p>Computing again</p>
<p><span class="math-container">$$\Psi_n=\int_0^1 \Big[[(x!)!]-\text{approximation}_{(n)}\Big]^2\,dx$$</span></p>
<p><span class="math-container">$$\left(
\begin{array}{cc}
n & \log_{10}\big[\Psi_n\big] \\
2 & -5.0613 \\
3 & -5.0973 \\
4 & -6.3083 \\
5 & -6.7855 \\
6 & -7.7200 \\
7 & -8.3849 \\
8 & -9.2041 \\
9 & -9.9228 \\
10 & -10.691 \\
11 & -11.423
\end{array}
\right)$$</span>
that is to say
<span class="math-container">$$\log_{10}\big[\Psi_n\big] \sim -\frac {3n+13}4$$</span></p>
|
1,423,491 | <p>Suppose you are flipping a coin with probability of Heads being 0.4931 and Tails being 0.5069</p>
<p>Can someone please tell me what is the probability of hitting 6 and 7 tails in a row in 24 tries? how about 44 tries?</p>
<p>OK. I have been asked to edit my question!
Here is the story:</p>
<p>The game "Baccarat" in casinos is very much like flipping a coin. The outcome is either "banker", "player" or a tie. If you disregard ties, the probability of a banker win is 0.5069 and of a player win is 0.4931.
I am betting in a way that every loss is covered by an eventual win. But if there are 7 losses in a row, I do not play any more. I usually play 20 hands to win 100$. I wanted to know what is the probability of my loss.
I hope the critics are now satisfied with the reason behind my question!</p>
| Community | -1 | <p>For $k$ times in $n$ trials: $$p = {n \choose k} (0.4931)^{n-k}(0.5069)^k$$</p>
<p>These are called Bernoulli trials, and this probability distribution is called the binomial distribution. </p>
|
1,423,491 | <p>Suppose you are flipping a coin with probability of Heads being 0.4931 and Tails being 0.5069</p>
<p>Can someone please tell me what is the probability of hitting 6 and 7 tails in a row in 24 tries? how about 44 tries?</p>
<p>OK. I have been asked to edit my question!
Here is the story:</p>
<p>The game "Baccarat" in casinos is very much like flipping a coin. The outcome is either "banker", "player" or a tie. If you disregard ties, the probability of a banker win is 0.5069 and of a player win is 0.4931.
I am betting in a way that every loss is covered by an eventual win. But if there are 7 losses in a row, I do not play any more. I usually play 20 hands to win 100$. I wanted to know what is the probability of my loss.
I hope the critics are now satisfied with the reason behind my question!</p>
| Krish Wadhwana | 310,698 | <p>In 24 flips, probability of getting 6 tails in a row:
You can get all tails in flips 1,2,3,4,5,6 or flips 2,3,4,5,6,7 or flips 3,4,5,6,7,8, and so on, up to flips 19,20,21,22,23,24. If you count all of the possibilities you get 19 starting positions. So you choose one case from 19, which is $${19 \choose 1}$$ and the probability of six given flips all being tails is $$(0.5069)^6$$
So an estimate is $${19 \choose 1}\cdot(0.5069)^6$$ (a union bound: it counts overlapping runs more than once, so it overestimates the true probability).</p>
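<p>For an exact answer one can use dynamic programming over the length of the current trailing tail-run. The code below is my addition; it confirms that the closed-form estimate above is an upper bound:</p>

```python
# Exact P(at least one run of >= r tails in n flips), P(tail) = q.
def prob_run_at_least(n, r, q):
    state = [0.0] * r      # state[k] = P(no run of r yet, trailing run = k tails)
    state[0] = 1.0
    hit = 0.0              # absorbing mass: a run of length r has occurred
    for _ in range(n):
        new = [0.0] * r
        new[0] = (1.0 - q) * sum(state)      # a head resets the trailing run
        for k in range(r - 1):
            new[k + 1] = q * state[k]        # another tail extends the run
        hit += q * state[r - 1]              # the run reaches length r
        state = new
    return hit

p6 = prob_run_at_least(24, 6, 0.5069)
p7 = prob_run_at_least(24, 7, 0.5069)
assert abs(prob_run_at_least(2, 2, 0.5) - 0.25) < 1e-12   # sanity: q^r when n = r
assert 0.0 < p7 < p6 < 19 * 0.5069 ** 6                   # estimate is an upper bound
```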
|
3,678,908 | <p>Let</p>
<p><span class="math-container">$$
V = \{ax^3+bx^2+cx+d|a,b,c,d \in Z_7\}
$$</span></p>
<p>Let the subspace <span class="math-container">$U$</span> Be a subspace of <span class="math-container">$V$</span>:</p>
<p><span class="math-container">$$
U = \{p(x) \in V | p(3)=p(5)=0\}
$$</span></p>
<p>The answers says that:</p>
<p><span class="math-container">$$
Dim(U) = 2
$$</span></p>
<p>How did they conclude that the dimension of <span class="math-container">$U$</span> Is <span class="math-container">$2$</span>?</p>
| Barry Cipra | 86,747 | <p>Rewrite the equation as <span class="math-container">$|2x+3|=|x-1|+6$</span> and sketch the graphs of <span class="math-container">$y=|2x+3|$</span> and <span class="math-container">$y=|x-1|+6$</span>. You will see that the two branches of the steeper V shape of the first graph each intersect once with the left (downward sloping) branch of the less steep second graph. This tells us that the solutions satisfy the simpler equation</p>
<p><span class="math-container">$$|2x+3|=7-x$$</span></p>
<p>at which point squaring both sides to <span class="math-container">$4x^2+12x+9=49-14x+x^2$</span> leads to</p>
<p><span class="math-container">$$3x^2+26x-40=(3x-4)(x+10)=0$$</span></p>
<p>so the two solutions are <span class="math-container">$x=4/3$</span> and <span class="math-container">$x=-10$</span>. (Alternatively, and possibly more simply, <span class="math-container">$2x+3=7-x$</span> gives <span class="math-container">$x=4/3$</span> while <span class="math-container">$-2x-3=7-x$</span> gives <span class="math-container">$x=-10$</span>.)</p>
<p>Taking a graphical approach does not help with all problems, but it's nonetheless worth keeping in mind that some equations can be solved by interpreting their solutions as the points where two curves intersect, especially when the curves are relatively easy to draw.</p>
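<p>The algebra can be double-checked with exact rationals (a verification sketch of the simplified equation <span class="math-container">$|2x+3| = 7-x$</span> actually solved above, not a new derivation):</p>

```python
from fractions import Fraction

# Both candidate roots satisfy the simplified equation and the factored quadratic.
for x in (Fraction(4, 3), Fraction(-10)):
    assert abs(2 * x + 3) == 7 - x           # |2x+3| = 7 - x
    assert 3 * x**2 + 26 * x - 40 == 0       # 3x^2 + 26x - 40 = (3x-4)(x+10) = 0
```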
|
234,483 | <p>The question is, "Give a description of each of the congruence classes modulo 6."</p>
<p>Well, I began saying that we have a relation, $R$, on the set $Z$, or, $R \subset Z \times Z$, where $x,y \in Z$. The relation would then be $R=\{(x,y)|x \equiv y~(mod~6)\}$</p>
<p>Then, $[n]_6 =\{x \in Z|~x \equiv n~(mod~6)\}$</p>
<p>$[n]_6=\{x \in Z|~6|(x-n)\}$</p>
<p>$[n]_6=\{x \in Z|~x-n=6k \text{ for some } k \in Z\}$, where $n \in Z$</p>
<p>As I looked over what I did, I started to think that this would not describe all of the congruence classes modulo 6. Also, what would I say $k$ is? After despairing, I looked at the answer key, and it talked about there being only 6 equivalence classes. Why are there only six of them? It also says that you can describe each equivalence class as one set; how would I do that?</p>
| amWhy | 9,003 | <p>Take any $n \in \mathbb{Z}.$</p>
<p>Then for each such $n$ there is an integer $k$ such that one of the following equations is satisfied. $$n=6k + 0$$ $$n=6k+1$$ $$n=6k+2$$ $$n=6k+3$$ $$n=6k+4$$ </p>
<p>$$n=6k+5$$</p>
<p>Can any integer satisfy <em>more</em> than one of the above equalities?</p>
<p>Can any integer <em>NOT</em> satisfy any of of the above equalities?</p>
<p>Then we can describe the equivalence classes of the equivalence relation $\equiv_{6}$ (congruence, mod $6$) as they relate to the <em>division algorithm</em>: </p>
<blockquote>
<p>Given any $d \in \mathbb{Z}, d>0$, we know that $\forall n \in \mathbb{Z}$, there exist unique integers $q, r$ such that $n = dq + r$ where $0\leq r< d$.</p>
</blockquote>
<p>In this problem, we have the divisor $d = 6$ of a given $n$: </p>
<ul>
<li>$k = q\in \mathbb{Z}$ is the unique corresponding <em>quotient</em> which results when dividing $n$ by $d = 6$, and </li>
<li>$r$ is the unique <em>corresponding remainder</em>, $0\le r < d = 6$, left after dividing that $n$ by $6$. </li>
</ul>
<hr>
<p>Then the corresponding equivalence classes can be defined in terms of the remainder $r$:<br> $$[r]_6 \in \{[0]_6,[1]_6, [2]_6, [3]_6, [4]_6, [5]_6\} \;\text{ where}$$</p>
<p>$$ [0]_6 = \{...,-12,-6,0,6,12,...\}$$
$$[1]_6 = \{...-11,-5,1,7,13,...\}$$
$$[2]_6 = \{...-10,-4,2, 8, 14...\}$$
$$ \vdots$$
$$[5]_6 = \{...-7,-1,5, 11, 17,...\}$$
<br></p>
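<p>The six classes can be listed concretely (a small illustration, my addition; Python's <code>%</code> operator already returns the remainder $r$ with $0\leq r<6$ from the division algorithm):</p>

```python
# Group a range of integers by their remainder mod 6.
classes = {r: [] for r in range(6)}
for n in range(-12, 13):
    classes[n % 6].append(n)

assert sorted(classes) == [0, 1, 2, 3, 4, 5]       # exactly six classes
assert classes[0] == [-12, -6, 0, 6, 12]
assert classes[5] == [-7, -1, 5, 11]
```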
|
3,817,092 | <p>How can one show that we may replace the norm property <span class="math-container">$$||x||=0 \iff x=0$$</span> by <span class="math-container">$$||x||=0 \implies x = 0$$</span> without altering the concept
of a norm?</p>
| Jason DeVito | 331 | <p>Qiaochu has given a very nice answer, but I wanted to add another large class of examples.</p>
<blockquote>
<p>Suppose <span class="math-container">$G$</span> is a simply connected closed Lie group, and <span class="math-container">$H$</span> is a connected closed Lie group. Suppose <span class="math-container">$H$</span> acts on <span class="math-container">$G$</span> freely via some action, and suppose further that the rank of <span class="math-container">$H$</span> is equal to the rank of <span class="math-container">$G$</span>. Then the orbit space <span class="math-container">$G/H$</span> is a closed, simply connected manifold of positive Euler characteristic.</p>
</blockquote>
<p>One way to construct such actions is as follows. Beginning with any <span class="math-container">$G$</span> as above, let <span class="math-container">$H$</span> be a connected subgroup of <span class="math-container">$G$</span> which contains a maximal torus of <span class="math-container">$G$</span>. Then <span class="math-container">$H$</span> acts on <span class="math-container">$G$</span> by left multiplication, and this meets all the hypothesis above. This gives rise to the so-called <em>homogeneous</em> spaces.</p>
<p>Another way to construct such actions is to allow <span class="math-container">$H\subseteq G\times G$</span>. Then <span class="math-container">$H$</span> acts on <span class="math-container">$G$</span> via <span class="math-container">$(h_1,h_2)\ast g = h_1 g h_2^{-1}$</span>. When this action is free, this gives rise to the so-called <em>biquotients</em>.</p>
<p>The theorem above doesn't require that <span class="math-container">$H$</span> acts on <span class="math-container">$G$</span> using the multiplicative structure on <span class="math-container">$G$</span>, but I don't know of any examples that are not of this type.</p>
<p>The proof of the theorem is as follows. First, the fact that the orbit space under a free action by a compact Lie group is a manifold is well known. So let me focus on showing that the quotient is simply connected and that it has positive Euler characteristic.</p>
<p>Since <span class="math-container">$H$</span> acts on <span class="math-container">$G$</span> freely, there is a principal <span class="math-container">$H$</span>-bundle <span class="math-container">$H\rightarrow G\rightarrow G/H$</span>. The long exact sequence in homotopy groups associated to this ends with <span class="math-container">$$...\rightarrow\pi_1(H)\rightarrow \pi_1(G)\rightarrow \pi_1(G/H)\rightarrow \pi_0(H)\rightarrow ...$$</span></p>
<p>By assumption, <span class="math-container">$\pi_1(G)\cong\pi_0(H)\cong 0$</span>, so it follows that <span class="math-container">$\pi_1(G/H) = 0$</span>.</p>
<p>Lastly, the fun part. Why does <span class="math-container">$G/H$</span> have positive Euler characteristic? Well, all Lie groups have the rational homotopy groups of a product of odd spheres. Specifically, <span class="math-container">$\pi_{even}(G)\otimes \mathbb{Q} = 0$</span> and <span class="math-container">$\dim \pi_{odd}(G)\otimes \mathbb{Q} = \operatorname{rank}(G)$</span>.</p>
<p>From the long exact sequence in rational homotopy groups associated to the bundle <span class="math-container">$H\rightarrow G\rightarrow G/H$</span>, it follows that <span class="math-container">$G/H$</span> is <a href="https://en.wikipedia.org/wiki/Rational_homotopy_theory#Elliptic_and_hyperbolic_spaces" rel="noreferrer">rationally elliptic</a>. Further, from the same exact sequence together it follows that <span class="math-container">$\dim \pi_{even}(G/H)\otimes \mathbb{Q} = \dim \pi_{odd}(G/H)\otimes\mathbb{Q}$</span>.</p>
<p>For rationally elliptic spaces, this condition on rational homotopy groups forces <span class="math-container">$\chi(G/H) > 0$</span>. See, for example, Felix, Halperin, and Thomas's book "Rational Homotopy Theory", specifically in Part VI (section 32).</p>
|
557,146 | <p>I have a clock time in the form HH.MM.SS (hours, minutes, seconds), and I need a way to convert it to one number so that I can determine which time is bigger.
For example,
00.00.00 is the smallest and
23.59.59 is the biggest.</p>
<p>11.03.50 > 11.02.57,
etc.</p>
<p>I thought to do the following</p>
<p>hours*10000 + minutes*100 + sec,
but I'm not sure it works for all cases... How can I prove it works?</p>
| Rocket Man | 85,096 | <p>Just convert the total time to seconds. The most seconds is clearly the "largest" time.</p>
<p>$$
T=3600h+60m+s
$$</p>
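<p>Both the seconds encoding from this answer and the positional encoding from the question can be compared directly. The sketch below (my addition) checks that they order a sample of times identically; the positional form works because minutes and seconds are always below 60 &lt; 100, so no "digit" can overflow into the next field:</p>

```python
# Two order-preserving encodings of a time (h, m, s).
def to_seconds(h, m, s):
    return 3600 * h + 60 * m + s           # total seconds (this answer)

def positional(h, m, s):
    return h * 10000 + m * 100 + s         # the encoding from the question

sample = [(h, m, s) for h in range(0, 24, 5)
                    for m in range(0, 60, 7)
                    for s in range(0, 60, 11)]
for a in sample:
    for b in sample:
        assert (to_seconds(*a) < to_seconds(*b)) == (positional(*a) < positional(*b))
assert positional(11, 3, 50) > positional(11, 2, 57)   # the example from the question
```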
|
3,987,552 | <p>I need to prove or refute a property about a sequence of numbers.
Here is what is given to me:</p>
<p>Sequence (<span class="math-container">$a_1,a_2,...,a_k,a_{k+1},a_{k+2}$</span>) containing <span class="math-container">$k+2$</span> numbers. Every number <span class="math-container">$0 < a_i \leq M, i=1,...,k+2$</span> for the same given constant <span class="math-container">$M$</span>. Moreover, <span class="math-container">$\sum_{i=1}^{k+2} a_i = N$</span>, for another given constant <span class="math-container">$N > 0$</span>. The constants <span class="math-container">$N$</span> and <span class="math-container">$M$</span> are related by <span class="math-container">$kM \geq N, k > 1$</span>.</p>
<p>Then, I need to prove or refute the following property: Can all consecutive pairs of numbers <span class="math-container">$a_i, a_{i+1}, i=1,...,k+1$</span> be defined such that <span class="math-container">$a_i + a_{i+1} > M$</span> ? Or does it lead to a violation of one of the given constraints?</p>
<p>I tried searching for something similar but in truth, I barely know what to search for. My tentative proofs do not really go anywhere meaningful, so I had to resort to more experienced math guys to help me with this. It has been a long time since I had to prove some property like this.</p>
| Measure me | 854,564 | <p>Assuming that, given <span class="math-container">$k$</span>, the constant <span class="math-container">$N$</span> is chosen so that
<span class="math-container">$$\sum_{i=1}^{k+2} a_i = N,$$</span>
I claim it is possible <span class="math-container">$ \iff k\ge 3$</span>.</p>
<p>First of all we check the cases <span class="math-container">$k=1,2$</span>, where the construction is impossible:</p>
<p>(i) if <span class="math-container">$k=1$</span> then
<span class="math-container">$$M< a_1+a_2 < N \le kM = M;$$</span>
(ii) if <span class="math-container">$k=2$</span> then we only have <span class="math-container">$(a_1,a_2,a_3,a_4)$</span>, and
<span class="math-container">$$a_1+a_2>M \mbox{ and } a_3+a_4 >M ,$$</span>
so
<span class="math-container">$$N = \sum_{i=1}^{k+2} a_i >M + M =2M = kM .$$</span></p>
<p>Now let <span class="math-container">$k\ge 3$</span>: let <span class="math-container">$\epsilon > 0$</span> (arbitrarily small) and such that <span class="math-container">$\epsilon < M$</span>, then I pick:
<span class="math-container">$$a_i= M \mbox{ if } i \mbox{ is even; } a_i = M -\epsilon \mbox{ if } i \mbox{ is odd.}$$</span>
Now we observe that <span class="math-container">$a_i + a_{i+1} = 2M - \epsilon > M$</span> and that <span class="math-container">$0 < a_i \le M$</span>, so they are good, then:</p>
<p>(i) if <span class="math-container">$k$</span> is even
<span class="math-container">$$\sum_{i=1}^{k+2} a_i = N = (2M -\epsilon)\Big(\frac{k}{2} + 1\Big) = (k+2)M - \epsilon\Big(\frac{k}{2}+1\Big)$$</span>
which is possible if I pick <span class="math-container">$\epsilon$</span> such that
<span class="math-container">$$kM \ge (k+2)M - \epsilon\Big(\frac{k}{2}+1\Big)$$</span>
which implies that
<span class="math-container">$$\epsilon \ge \frac{2M}{\frac{k}{2}+1} .$$</span>
Also we have that <span class="math-container">$M> \epsilon$</span>, so we can pick such a <span class="math-container">$\epsilon$</span> only if
<span class="math-container">$$M> \frac{2M}{\frac{k}{2}+1},$$</span>
which means that <span class="math-container">$k > 2$</span>.</p>
<p>(ii) if <span class="math-container">$k$</span> is odd
<span class="math-container">$$\sum_{i=1}^{k+2} a_i = N = (2M-\epsilon)\Big(\frac{k+1}{2}\Big) + a_{k+2} = (k+1)M - \epsilon \Big(\frac{k+1}{2}\Big) + a_{k+2} =$$</span>
<span class="math-container">$$= (k+2)M - \epsilon \Big(\frac{k+1}{2} +1 \Big)$$</span>
and here too we see it is possible if <span class="math-container">$\epsilon$</span> is such that
<span class="math-container">$$\epsilon \ge \frac{2M}{\frac{k+1}{2}+1} .$$</span>
This implies that it is true if
<span class="math-container">$$M > \frac{2M}{\frac{k+1}{2}+1},$$</span>
which means if <span class="math-container">$k>1$</span>.</p>
<p>This concludes my claim.</p>
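<p>The construction can also be verified numerically. The sketch below (my code, with <span class="math-container">$M = 1$</span> for simplicity) picks the smallest admissible <span class="math-container">$\epsilon$</span> from the argument above and checks all the constraints for several <span class="math-container">$k \ge 3$</span>:</p>

```python
# Numeric check of the alternating construction a_i = M (i even), M - eps (i odd).
def check(k, M=1.0):
    if k % 2 == 0:
        eps = 2 * M / (k / 2 + 1)            # smallest admissible eps, k even
    else:
        eps = 2 * M / ((k + 1) / 2 + 1)      # smallest admissible eps, k odd
    assert eps < M                           # possible exactly when k > 2
    a = [M if i % 2 == 0 else M - eps for i in range(1, k + 3)]  # a_1 .. a_{k+2}
    assert all(0 < x <= M for x in a)                       # 0 < a_i <= M
    assert all(a[i] + a[i + 1] > M for i in range(k + 1))   # consecutive pairs
    assert sum(a) <= k * M + 1e-9                           # N <= kM

for k in range(3, 9):
    check(k)
```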
|
3,257,387 | <p>Let <span class="math-container">$(a_n)$</span> denote a sequence. If the consecutive differences tend to <span class="math-container">$0$</span> as <span class="math-container">$n$</span> tends to infinity, does it mean that <span class="math-container">$(a_n)$</span> converges? I feel like it should, but I am struggling to see a clear-cut proof.</p>
| Kitter Catter | 166,001 | <p>Consider <span class="math-container">$a_n=\log(n)$</span> if you take the limit for large <span class="math-container">$n$</span> then I think it is fairly clear that <span class="math-container">$a_n$</span> diverges. In that limit <span class="math-container">$a_n - a_{n-1} =\log\frac{n}{n-1}\rightarrow 0$</span></p>
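<p>A small numeric illustration of this counterexample (my addition): the consecutive differences of <span class="math-container">$a_n=\log(n)$</span> shrink toward <span class="math-container">$0$</span> while the sequence itself keeps growing:</p>

```python
import math

# Differences log(n) - log(n-1) = log(n/(n-1)) for n = 2 .. 10000.
diffs = [math.log(n) - math.log(n - 1) for n in range(2, 10001)]
assert diffs[-1] < 1e-3                              # differences tend to 0
assert all(x > y for x, y in zip(diffs, diffs[1:]))  # and shrink monotonically
assert math.log(10**6) > math.log(10**3) + 6         # yet log(n) keeps growing
```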
|
17,259 | <p>I am using <em>Mathematica</em> to go through the examples and exercises on the book <em>Modern Control Systems</em>, 6th edition by Dorf. On page 605, there is a table (Table 8.5) with the Bode plot for several transfer functions. In what follows there is a piece of code that attempts to build the very same table.</p>
<p>Here is the code:</p>
<pre><code>With[{ τ1 = 20, τ2 = 2, τ3 = 0.4, τ4 = 0.05, τa = 10, τb = 1, k = 10},
Grid[
Partition[
Table[ BodePlot[ sys, PlotLabel->sys, GridLines -> Automatic], { sys,
{ k/(s τ1 + 1), (k(s τa + 1))/(s(s τ1 + 1)(s τ2 + 1)),
k/((s τ1 + 1)(s τ2 + 1)), k/s^2, k/((s τ1 + 1)(s τ2 + 1)(s τ3 + 1)),
k/(s^2 (s τ1 + 1)), k/s, (k(s τa + 1))/(s^2 (s τ1 + 1)),
k/(s(s τ1 + 1)), k/s^3, k/(s(s τ1 + 1)(s τ2 + 1)),
(k (s τa + 1))/s^3, (k (s τa + 1)(s τb + 1))/s^3,
(k (s τa + 1))/(s^2 (s τ1 + 1)(s τ2 + 1)),
(k (s τa + 1)(s τb + 1))/(s(s τ1 + 1)(s τ2 + 1)(s τ3 + 1)(s τ4 + 1)) }
}
], 2], Frame->All, Spacings->6] ]
</code></pre>
<p><img src="https://i.stack.imgur.com/DyGQX.gif" alt="a bunch of Bode plots"></p>
<p>All the transfer functions with <code>1/s^n</code> <code>( n > 1 )</code> give the wrong result as far as the phase plot is concerned. Is there a simple way to fix this? <em>Wolfram</em> does not have a time line to go through the problem and sort it out.</p>
| James Cunnane | 4,882 | <p>No error seen in those phase plots. </p>
<p>For a transfer function of the form K/s^n, i.e. n poles at the origin, we expect constant phase of (-90 * n) degrees, plus or minus some integer multiple of 360 degrees - which is exactly what your Mathematica plots show. </p>
|
820,490 | <p>I am studying pre-calculus mathematics at the moment, and I need help verifying that $\sin (\theta)$ and $\cos (\theta)$ are functions. I want to demonstrate that for any angle $\theta$ there is only one associated value of $\sin (\theta)$ and one of $\cos (\theta)$. How do I go about showing this?</p>
| Lutz Lehmann | 115,115 | <p>I'm afraid that from a pre-calculus point of view, there is no consistent definition of the functional relationships that allows to discuss them as real functions (real valued in one real variable).</p>
<p>The big gap to be closed is the notion of an angle. In geometry, an angle is defined by a pair of rays from the same point. These most often occur at corners of triangles. All geometrically similar pairs of rays define the same angle. You can arrange such an equivalent pair of rays to have the first ray horizontal pointing to the right, then the second ray is uniquely characterized by a point on the unit circle (after fixing an origin and a unit length). The coordinates of that point then are the cosine and sine of that (equivalence class of) angle(s).</p>
<p>Then there are some angles that can be constructed and calculated geometrically by constructing regular n-gons and thus dividing the unit circle into n equal parts. n=3,4,6,8,12 are easy, n=5 is possible without too much effort. By bisection one can enclose any other angle in fractions of the full circle, starting from these known fractions, but this goes close to limit arguments, which already is calculus.</p>
<p>One can identify points on the unit circle with rotation matrices or complex numbers, which simplifies the angle arithmetic. Doubling an angle is then squaring the matrix or complex number, bisecting an angle corresponds to a square root. This gives functional equations that in their components are the trigonometric identities. Demanding continuity and differentiability at the zero angle will also give the trigonometric functions as unique solutions. However, this uniqueness is again enforced by calculus arguments.</p>
|
99,750 | <p>Let $G$ be a reductive group, $F$ a Frobenius morphism, $B$ an $F$-stable Borel subgroup, and consider the finite groups $G^F$ and $U^F$, where $U$ is the unipotent radical of $B=UT$ ($T$ a torus).</p>
<p>I would like a reference for the description of the algebra $End_{G^F}( \mathbb{C}[G^F/U^F] )$. More precisely, I'd like to relate it with a structure of Hecke algebra, which is usually defined as $End_{G^F}( \mathbb{C}[G^F/B^F] ) := End_{G^F} ( Ind_{B^F}^{G^F} 1 )$. I hope to find that the endomorphism algebra is isomorphic to some kind of extension of the Hecke algebra by the torus $T$.</p>
<p>Thank you!</p>
| Community | -1 | <p>Personally I like the definition in <a href="https://arxiv.org/abs/math/0203010" rel="nofollow noreferrer">Barton, Sudbery</a> paper (thank you, Bruce for adding the reference):</p>
<p><a href="http://www.ams.org/mathscinet-getitem?mr=2020553" rel="nofollow noreferrer">MR2020553 (2005b:17017) Barton, C. H. ; Sudbery, A.</a>
<a href="http://www.sciencedirect.com/science/article/pii/S000187080300015X/pdf?md5=bf96b9d3e894f5a0698b7ccfad356be3&pid=1-s2.0-S000187080300015X-main.pdf" rel="nofollow noreferrer">Magic squares and matrix models of Lie algebras.
Adv. Math. 180 (2003), no. 2, 596--647.</a></p>
<p>It uses triality algebra based on $\mathbb R, \mathbb C, \mathbb H, \mathbb O$ composition algebras. Using this one can construct all compact and non-compact exceptional Lie algebras.</p>
<p>Tits-Freudenthal magic square correspond to square of algebras:</p>
<p>$\begin{matrix}
R\otimes R & R\otimes C & R\otimes H & R\otimes O \\
C\otimes R & C\otimes C & C\otimes H & C\otimes O \\
H\otimes R & H\otimes C & H\otimes H & H\otimes O \\
O\otimes R & O\otimes C & O\otimes H & O\otimes O \\
\end{matrix}$</p>
<p>You can replace composition algebra $A$ with split version $\tilde A $ to obtain non compact version. </p>
<p>Lie algebra in position $A\otimes B$ is $TriA + TriB + A\otimes B + A\otimes B + A\otimes B$. The triality Lie algebra is equal to $Der A+2A'$ which is equal to $0,so_2+so_2, so_3+so_3+so_3, so_8$ for the four composition algebras listed above. The bracket is defined in mentioned paper.
To obtain $f_4$ with compact $spin_9$ we should change sign in last two $A\otimes B$.</p>
<p><strong>Explanation</strong></p>
<p>I would like to add a few sentences on why I think this is a beautiful description of the exceptional Lie groups. It is really a description of the exceptional Lie algebras, not the groups; the groups can be obtained from the Lie algebras by using the exponential map.</p>
<p>The first reason is that all four exceptional Lie algebras $f_4$, $e_6$, $e_7$, $e_8$ are obtained in a uniform way. The second reason is that the bracket is elegant and fairly easy to understand, once you grasp the notion of triality in a composition algebra. The third reason is that you can easily see the symmetry of the Freudenthal-Tits "magic" square of Lie algebras. It is no longer "magic" as it was in the original construction of Tits and Freudenthal, where a Jordan algebra was used.</p>
<p>We can look at the $n=2$ algebras, the "younger brother" of the magic square for $n=3$. The exceptional symmetric spaces are obtained as quotients of an entry in the magic square for $n=3$ by the corresponding entry for $n=2$. Placing one square on top of the other and preparing a base square for $n=1$ with $Tri A+Tri B$, we obtain a "magic cube" of Lie algebras. Exceptional symmetric spaces can be obtained as quotients of neighbouring points in the magic cube.</p>
<p>We can also replace given algebra $A$ by split version $\tilde A$ as I mentioned above. This way we can obtain non-compact versions of exceptional Lie algebras.</p>
<p><strong>Future development</strong></p>
<p>I would like to add what is still missing in this nice picture. It would be good to focus on the Lie group, not on the Lie algebra. The geometry is hidden in the group; the Lie algebra was created as an algebraic tool to classify the groups.</p>
<p>It would be good to have a uniform definition of the exceptional symmetric spaces. For example, <a href="http://www.ims.cuhk.edu.hk/~leung/PhD%20students/Thesis%20Yong%20Dong%20Huang.pdf" rel="nofollow noreferrer">Huang's thesis</a> contains a definition of symmetric spaces as Grassmannians.</p>
<p>It is not easy to define finite groups of Lie type for exceptional Lie groups. It would be good to have something that also works over finite fields.</p>
|
2,903,105 | <p>The Fourier transform of a function $R$ into a function $Q$ is defined as:
$$Q(\underline{k}) = \int_{}^{}R(\underline{x}) e^{-i\underline{k}·\underline{x}} \mathrm{d}\underline{x}.$$
where I've underlined $\underline{x}$ and $\underline{k}$ to denote that they are a set of vectors.</p>
<p>Suppose that $R(\underline{x})$ is only a function of coordinate differences, for example, $R(\underline{x})=$($x$<sub>1</sub>-$x$<sub>2</sub>)($x$<sub>3</sub>-$x$<sub>1</sub>), where $\underline{x}$=($x$<sub>1</sub>,$x$<sub>2</sub>,$x$<sub>3</sub>).</p>
<p>Why must the Fourier transform $Q(\underline{k})$ contain a Dirac Delta function?</p>
<p>Note: The fact that the Fourier transform must contain a Dirac Delta function was mentioned on the bottom of page 25 of the following set of notes by my Physics professor, <a href="http://www-thphys.physics.ox.ac.uk/people/JohnCardy/qft/qftMT2012.pdf" rel="nofollow noreferrer">http://www-thphys.physics.ox.ac.uk/people/JohnCardy/qft/qftMT2012.pdf</a></p>
| md2perpe | 168,433 | <p>A somewhat physical explanation is this:<br>
Dependence on coordinate differences only implies translation invariance, which by <a href="https://en.wikipedia.org/wiki/Noether%27s_theorem" rel="nofollow noreferrer">Noether's theorem</a> implies conservation of momentum, which in turn is represented as $\delta(\sum_k p_k)$ in $p$-space.</p>
<p>A mathematical example in 1 dimension:
$$\begin{align}
\mathcal{F}\{f(x_1-x_2)\}(p_1,p_2)
&= \iint f(x_1-x_2) \, e^{-i(p_1 x_1+p_2 x_2)} \, dx_1 \, dx_2 \\
&= \{ \text{ variable change: } s = x_1 - x_2,\ t = x_2 \} \\
&= \iint f(s) \, e^{-i(p_1(s+t)+p_2t)} \, ds \, dt \\
&= \int \left( \int f(s) \, e^{-ip_1s} \, ds \right) e^{-i(p_1+p_2)t} \, dt \\
&= \int \mathcal{F}\{f\}(p_1) e^{-i(p_1+p_2)t} \, dt \\
&= \mathcal{F}\{f\}(p_1) \int e^{-i(p_1+p_2)t} \, dt \\
&= \mathcal{F}\{f\}(p_1) \, 2\pi \, \delta(p_1+p_2)
\end{align}$$</p>
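<p>As an added numerical illustration (mine, not part of the original answer): in the discrete setting, a function of the coordinate difference alone has a 2D DFT supported on $p_1+p_2 \equiv 0 \pmod N$, the discrete counterpart of the $\delta(p_1+p_2)$ factor above.</p>

```python
import cmath

N = 8
f = [(k * k) % 5 + 0.3 for k in range(N)]   # arbitrary profile on Z_N

# g[i][j] = f[(i - j) mod N]: depends on the coordinate difference only
g = [[f[(i - j) % N] for j in range(N)] for i in range(N)]

def dft2(g):
    """Plain O(N^4) two-dimensional DFT."""
    w = lambda k: cmath.exp(-2j * cmath.pi * k / N)
    return [[sum(g[i][j] * w(p1 * i + p2 * j) for i in range(N) for j in range(N))
             for p2 in range(N)] for p1 in range(N)]

G = dft2(g)
# all mass should sit on the "anti-diagonal" p1 + p2 = 0 (mod N)
off = max(abs(G[p1][p2]) for p1 in range(N) for p2 in range(N)
          if (p1 + p2) % N != 0)
print(off)   # ~0, up to rounding
```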
|
2,829,362 | <p>Suppose the random variable $T$ represents the time needed for one person to travel from city A to city B (in minutes). $T$ is normally distributed with mean $60$ minutes and variance $20$. Also, suppose $600$ people depart at the exact same time, with each of their travel times being independent from one another.</p>
<p>Now the question is, what is the probability that less than $80$ people will need to travel more than $1$ hour ?</p>
<p>How I tried to do this is by using the binomial probability distribution to calculate the probability of $i$ people being late out of the 600. Then I summed $i$ from $0$ to $79$ because these are disjoint sets of events. But first I needed to know the probability that a random person will be late. This is simply equal to $1/2$ because $T$ is normally distributed with mean 60. So we get for $X$ the amount of people being late:</p>
<p>$$P(X < 80) = \sum\limits_{i=0}^{79} \frac{600!}{i!(600-i)!}
\left(\frac{1}{2}\right)^i\left(\frac{1}{2}\right)^{600-i} =\sum\limits_{i=0}^{79} \frac{600!}{i!(600-i)!}
\left(\frac{1}{2}\right)^{600} \approx 2.8^{-80} $$</p>
<p>But this probability is practically $0$, which seems to go against my intuition ( it's reasonably possible for less than $80$ people being late). So where did I go wrong in my reasoning ? Also, why did they give the variance which I didn't use (this was an exam question by the way). Has this maybe something to do with the CLT (central limit theorem) ?</p>
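<p>Just to check the arithmetic (this verification is an addition, not part of the original question), the tail sum above can be evaluated exactly with integer arithmetic; it really is astronomically small:</p>

```python
from fractions import Fraction
from math import comb

# P(X < 80) for X ~ Binomial(600, 1/2), computed exactly with big integers
tail = sum(comb(600, i) for i in range(80))   # integer numerator
prob = Fraction(tail, 2 ** 600)               # exact probability
print(float(prob))                            # astronomically small
```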
| InsideOut | 235,392 | <p>You cannot apply the fundamental theorem of calculus, since the function <span class="math-container">$$\frac{1}{x}$$</span> is not defined at $0$, an interior point of the interval <span class="math-container">$[-1,1]$</span>. Split the integral in the following way:
<span class="math-container">$$\int_0^1\frac{dx}{x}+\int_{-1}^0\frac{dx}{x}$$</span></p>
<p>Now you can apply the following theorem</p>
<blockquote>
<p>Let <span class="math-container">$a>0$</span> be real; then <span class="math-container">$$\int_0^a \frac{1}{x^\alpha}dx$$</span> converges if and only if <span class="math-container">$\alpha<1$</span>.</p>
</blockquote>
<p>Hence both integrals diverge, and therefore their sum diverges.</p>
|
3,752,676 | <p>What is <span class="math-container">$P(P(P(333^{333})))$</span>, where $P$ is the digit sum of a number? For example, <span class="math-container">$P(35)=3+5=8$</span>.</p>
<p>a)18</p>
<p>b)9</p>
<p>c)33</p>
<p>d)333</p>
<p>f)5</p>
<p>I tried to find this but I couldn't. I started looking for a pattern; for example, the first few powers of <span class="math-container">$333$</span> are:</p>
<p><span class="math-container">$A=333*333=110889 \; \; \; \; \; \; P(A)=3^{3}=27$</span></p>
<p><span class="math-container">$B=110889*333= 36926037 \; \; \; \; \; \; P(B)=36$</span></p>
<p><span class="math-container">$C=36926037*333=12296370321 \; \; \; \; \; \; P(C)=36 $</span></p>
<p><span class="math-container">$D=12296370321*333=4094691316893 \; \; \; \; \; \; P(D)=63$</span></p>
<p>Can I say it is always 9? so <span class="math-container">$P(P(P(333^{333})))=9$</span>?</p>
| Mastrem | 253,433 | <p>Let <span class="math-container">$G:=\mathbb{Z}_4\oplus\mathbb{Z}_{12}$</span> and <span class="math-container">$H\subset G$</span> the subgroup <span class="math-container">$\langle (2,2)\rangle$</span>.</p>
<p>Because <span class="math-container">$G$</span> has no elements of order <span class="math-container">$8$</span>, <span class="math-container">$G/H$</span> doesn't either, so it can't be <span class="math-container">$\mathbb{Z}_8$</span>.</p>
<p>Note that <span class="math-container">$(0,3)$</span> has order <span class="math-container">$4$</span> in <span class="math-container">$G$</span> and <span class="math-container">$(0,3),(0,6)\not\in H$</span>, so <span class="math-container">$(0,3)$</span> has order <span class="math-container">$4$</span> in <span class="math-container">$G/H$</span>. Hence, <span class="math-container">$G/H$</span> has elements of order <span class="math-container">$4$</span> and cannot be isomorphic to <span class="math-container">$\mathbb{Z}_2\oplus\mathbb{Z}_2\oplus\mathbb{Z}_2$</span>.</p>
<p>Therefore, since it must be one of the three, <span class="math-container">$G/H\cong \mathbb{Z}_2\oplus\mathbb{Z}_4$</span></p>
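<p>The claim can also be confirmed by brute force (a check added here; the helper names are mine): compute the cosets of $H=\langle(2,2)\rangle$ in $\mathbb{Z}_4\oplus\mathbb{Z}_{12}$ and compare the multiset of coset orders with that of $\mathbb{Z}_2\oplus\mathbb{Z}_4$.</p>

```python
from itertools import product

def add(x, y):  # componentwise addition in Z4 x Z12
    return ((x[0] + y[0]) % 4, (x[1] + y[1]) % 12)

# subgroup H generated by (2,2)
H = set()
h = (0, 0)
while h not in H:
    H.add(h)
    h = add(h, (2, 2))

G = list(product(range(4), range(12)))
cosets = {frozenset(add(g, h) for h in H) for g in G}

def coset_order(C):
    """Order of a coset in G/H: smallest k with k*g in H (any representative works)."""
    g = next(iter(C))
    k, s = 1, g
    while s not in H:
        s = add(s, g)
        k += 1
    return k

orders = sorted(coset_order(C) for C in cosets)
print(orders)   # should match the element orders of Z2 x Z4
```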
|
2,815 | <p>This is a final exam question in my algorithms class:</p>
<p>$k$ is a taxicab number if $k = a^3+b^3=c^3+d^3$, and $a,b,c,d$ are distinct positive integers. Find all taxicab numbers $k$ such that $a,b,c,d < n$ in $O(n)$ time.</p>
<p>I don't know if the problem had a typo or not, because $O(n^3)$ seems more reasonable. The best I can come up with is $O(n^2 \log n)$, and that's the best anyone I know can come up with. </p>
<p>The $O(n^2 \log n)$ algorithm: </p>
<ol>
<li><p>Try all possible $a^3+b^3=k$ pairs, for each $k$, store $(k,1)$ into a binary tree(indexed by $k$) if $(k,i)$ doesn't exist, if $(k,i)$ exists, replace $(k,i)$ with $(k,i+1)$</p></li>
<li><p>Traverse the binary tree, output all $(k,i)$ where $i\geq 2$</p></li>
</ol>
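<p>For concreteness, the two steps above can be sketched with a hash map in place of the binary tree (the sketch is mine; it drops the log factor in expectation):</p>

```python
from collections import defaultdict

def taxicab_numbers(n):
    """All k = a^3 + b^3 (a < b < n) with at least two representations."""
    reps = defaultdict(list)
    for a in range(1, n):
        for b in range(a + 1, n):          # a < b keeps a, b distinct
            reps[a ** 3 + b ** 3].append((a, b))
    return {k: pairs for k, pairs in reps.items() if len(pairs) >= 2}

result = taxicab_numbers(13)
print(result[1729])   # [(1, 12), (9, 10)]
```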
<p>Are there any faster methods? This should be the best possible method without using any number theoretical result because the program might output $O(n^2)$ taxicab numbers. </p>
<p>Is $O(n)$ even possible? One has to prove there are only $O(n)$ taxicab numbers less than $2n^3$ in order to prove there exists an $O(n)$ algorithm.</p>
<p><strong>Edit</strong>: The professor admitted it was a typo; it should have been $O(n^3)$. I'm happy he made the typo, since the answer Tomer Vromen suggested is amazing.</p>
| Aryabhata | 1,102 | <p>Apparently this is a solved problem and every rational solution to $x^3 + y^3 = z^3 + w^3$ is proportional to</p>
<p>$x = 1 − (p − 3q)(p^2 + 3q^2)$</p>
<p>$y = −1 + (p + 3q)(p^2 + 3q^2)$</p>
<p>$z = (p + 3q) − (p^2 + 3q^2)^2$</p>
<p>$w = −(p − 3q) + (p^2 + 3q^2)^2$</p>
<p>See: <a href="http://129.81.170.14/~erowland/papers/koyama.pdf">http://129.81.170.14/~erowland/papers/koyama.pdf</a>, Page 2.</p>
<p>Also see: <a href="http://sites.google.com/site/tpiezas/010">http://sites.google.com/site/tpiezas/010</a> (search for J. Binet).</p>
<p>So it seems like an O(n^2) algorithm might be possible. Perhaps using this more cleverly can give us an O(n) time algorithm.</p>
<p>Hope that helps. Please do update us with the solution given by your professor.</p>
|
2,815 | <p>This is a final exam question in my algorithms class:</p>
<p>$k$ is a taxicab number if $k = a^3+b^3=c^3+d^3$, and $a,b,c,d$ are distinct positive integers. Find all taxicab numbers $k$ such that $a,b,c,d < n$ in $O(n)$ time.</p>
<p>I don't know if the problem had a typo or not, because $O(n^3)$ seems more reasonable. The best I can come up with is $O(n^2 \log n)$, and that's the best anyone I know can come up with. </p>
<p>The $O(n^2 \log n)$ algorithm: </p>
<ol>
<li><p>Try all possible $a^3+b^3=k$ pairs, for each $k$, store $(k,1)$ into a binary tree(indexed by $k$) if $(k,i)$ doesn't exist, if $(k,i)$ exists, replace $(k,i)$ with $(k,i+1)$</p></li>
<li><p>Traverse the binary tree, output all $(k,i)$ where $i\geq 2$</p></li>
</ol>
<p>Are there any faster methods? This should be the best possible method without using any number theoretical result because the program might output $O(n^2)$ taxicab numbers. </p>
<p>Is $O(n)$ even possible? One has to prove there are only $O(n)$ taxicab numbers less than $2n^3$ in order to prove there exists an $O(n)$ algorithm.</p>
<p><strong>Edit</strong>: The professor admitted it was a typo; it should have been $O(n^3)$. I'm happy he made the typo, since the answer Tomer Vromen suggested is amazing.</p>
| Tomer Vromen | 26 | <p>I don't know about $O(n)$, but I can do it in $O(n^2)$. The main idea is to use <a href="http://eli.thegreenplace.net/2008/08/23/initializing-an-array-in-constant-time/">initialization of an array in $O(1)$</a> (this is the best reference that I've found, which is weird since this seems like a very important concept). Then you iterate through all the possible $(a,\ b)$ pairs and do the same as step 1 in your proposed algorithm. Since $a^3+b^3 \leq 2n^3$, the array needs to be of size $2n^3$, but it's still initialized in $O(1)$. Accessing an array element is $O(1)$ like in a regular array.</p>
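<p>A sketch (mine, not from the answer) of the bookkeeping behind that constant-time-initialization trick: a stack of written indices plus a cross-referencing array distinguishes real writes from garbage. (In Python the backing lists are of course actually allocated, so this only demonstrates the logic, not the true $O(1)$ asymptotics of the C version.)</p>

```python
class LazyArray:
    """Array whose 'initialization' costs O(1): untouched slots read as a default."""
    def __init__(self, size, default=0):
        self.default = default
        self.data = [None] * size   # stands in for uninitialized memory
        self.when = [0] * size      # when[i]: position in `written` vouching for i
        self.written = []           # indices genuinely written so far

    def _valid(self, i):
        w = self.when[i]
        return w < len(self.written) and self.written[w] == i

    def __setitem__(self, i, value):
        if not self._valid(i):
            self.when[i] = len(self.written)
            self.written.append(i)
        self.data[i] = value

    def __getitem__(self, i):
        return self.data[i] if self._valid(i) else self.default

a = LazyArray(10, default=-1)
a[3] = 42
print(a[3], a[4])   # 42 -1
```

Reads and writes stay $O(1)$, and no pass over the array is ever needed to "clear" it.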
|
925,558 | <p>How do I prove that the inductive sequence $y_{n+1}= \dfrac {2y_n + 3}{4}$ is bounded? $y(1)=1$</p>
<p><strong>Attempt:</strong> Let us assume that the given sequence is unbounded:</p>
<p>Then, $y_{n+1} \rightarrow \infty$ either for some finite $n$ or when $n \rightarrow \infty$</p>
<blockquote>
<p>CASE $1 :$ When $|y_n| \rightarrow \infty$ at a finite $n$</p>
</blockquote>
<p>Since : $y_{n}= \dfrac {2y_{n-1} + 3}{4}$, then $y_n \rightarrow \pm \infty \implies y_{n-1} \rightarrow \pm \infty \implies y_{n-2} \rightarrow \pm \infty ~~\cdots$</p>
<p>This ultimately means $y_2 \rightarrow \pm \infty$ which is not true.</p>
<p>Hence, there does not exist a finite $n$ for which $y_n \rightarrow \pm \infty$.</p>
<blockquote>
<p>CASE $2:$ When $|y_n| \rightarrow \infty$ at $n \rightarrow \infty$</p>
</blockquote>
<p>I think we can proceed the same way as we did above, i.e inductively, we proceed like above and deduce that if the above assumption is true, then $y_2 \rightarrow \infty$, which is not true.</p>
<p>Is my attempt correct?</p>
<p>Does there exist a proof without induction as well?
Thank you for your help.</p>
| idm | 167,226 | <p>$$|y_{n+1}-y_n|=\frac{1}{2}|y_n-y_{n-1}|=\dots=\frac{1}{2^{n-1}}|y_2-y_1|$$</p>
<p>then, for any $r\ge 1$,
$$|y_n-y_{n+r}|\leq |y_n-y_{n+1}|+|y_{n+1}-y_{n+2}|+...+|y_{n+r-1}-y_{n+r}|\leq\frac{1}{2^{n-1}}|y_2-y_1|\sum_{k=0}^{r-1}\frac{1}{2^k}\leq\frac{|y_2-y_1|}{2^{n-2}}\underset{n\to\infty }{\longrightarrow } 0$$</p>
<p>Then $(y_n)$ is a Cauchy sequence, and so it converges. Since every convergent sequence is bounded, $(y_n)$ is bounded.</p>
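<p>Numerically (an added check, not part of the answer), the iteration contracts toward the fixed point of $y=(2y+3)/4$, namely $y=3/2$, which also exhibits the bound:</p>

```python
y = 1.0                          # y_1 = 1
traj = [y]
for _ in range(50):
    y = (2 * y + 3) / 4
    traj.append(y)
print(min(traj), max(traj), traj[-1])   # stays in [1, 1.5], tends to 1.5
```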
|
530,920 | <p>I want to know if there is a way to find, for example, $\ln(2)$ without using a calculator.</p>
<p>Thanks </p>
| Boris Novikov | 62,565 | <p>$$\log 2 = 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\ldots$$
In the general case
$$\log \frac{1+x}{1-x} = 2(x+\frac{x^3}{3}+\frac{x^5}{5}+\frac{x^7}{7}+\ldots)$$</p>
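<p>As a small added check: with $x=1/3$ the second series gives $\log 2 = \log\frac{1+1/3}{1-1/3}$ and converges far faster than the alternating series.</p>

```python
import math

# slow: alternating harmonic series, 1000 terms
alt = sum((-1) ** (k + 1) / k for k in range(1, 1001))

# fast: log((1+x)/(1-x)) = 2(x + x^3/3 + x^5/5 + ...) at x = 1/3, 10 terms
x = 1 / 3
fast = 2 * sum(x ** (2 * k + 1) / (2 * k + 1) for k in range(10))

print(alt, fast, math.log(2))
```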
|
530,920 | <p>I want to know if there is a way to find, for example, $\ln(2)$ without using a calculator.</p>
<p>Thanks </p>
| Alfred Centauri | 36,057 | <p>And let's not forget this method (read off of the Ln scale).</p>
<p><img src="https://i.stack.imgur.com/4Z7nk.jpg" alt="enter image description here"></p>
<p><a href="http://scienceblogs.com/goodmath/2006/09/13/using-a-slide-rule-part-2-expo/" rel="noreferrer">Image source</a></p>
|
530,920 | <p>I want to know if there is a way to find, for example, $\ln(2)$ without using a calculator.</p>
<p>Thanks </p>
| egreg | 62,967 | <p>One can use the fact that
$$
\log x=\lim_{n\to\infty}n\left(1-\frac{1}{\sqrt[n]{x}}\right)
$$
For $\log2$ a good approximation is
$$
1048576\left(1-\frac{1}{\sqrt[1048576]{2}}\right)
$$
where
$$
\sqrt[1048576]{x}
$$
can be computed by pressing twenty times the <code>SQRT</code> key on a pocket calculator, since $1048576=2^{20}$ (or computing it by hand, with <em>much</em> patience and time to spend).</p>
<p>What I get doing those computations is $0.6931469565952$, while a real computer gives $0.69314718055994530941$, so we have five exact decimal digits. Of course bigger numbers won't do, since the $2^{20}$-th root of it will be too near $1$ and the necessary digits would have already been lost.</p>
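<p>The same twenty-square-root computation can be replayed in code (an addition; double precision stands in for the pocket calculator):</p>

```python
import math

x = 2.0
for _ in range(20):       # twenty presses of the SQRT key: x = 2**(1/2**20)
    x = math.sqrt(x)
approx = 1048576 * (1 - 1 / x)
print(approx, math.log(2))   # agree to about five decimal places
```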
<p><sub>(Note: $\log$ is the natural logarithm; I <em>refuse</em> to denote it in any other way. <code>;-)</code>)</sub></p>
|
1,127,480 | <p>Consider the set $S = \{x \in \mathbb R: x < \frac2x\}$. Determine the value of $sup$ $S$ (if it exists). </p>
<p>Here is my attempt:<br>
Firstly, $S = \{x \in \mathbb R: 0 < x < \sqrt 2$ $\lor$ $x < -\sqrt2 \}$.<br>
Since $1 \in S$, $S \ne \emptyset$.<br>
Also, $\forall x \in S$, $x < \sqrt2$ and so $S$ is bounded above.<br>
By the least upper bound property of $\mathbb R$, $sup$ $S$ exists.<br>
Suppose $\exists y \in \mathbb R: y < \sqrt2 \land y$ is an upper bound of $S$.<br>
Since $1 \in S$, $1 \le y < \sqrt2 \Rightarrow y \in S$.<br>
... </p>
<p>How do I show that $sup$ $S$ $= \sqrt 2$?</p>
| drhab | 75,923 | <p>To find $S$, the inequality $x<\frac2{x}$, or equivalently $\frac{(x-\sqrt2)(x+\sqrt2)}{x}<0$, must be solved. </p>
<p>This leads to $S=(-\infty,-\sqrt2)\cup(0,\sqrt2)$. </p>
<p>Here $\sqrt2$ is an upper bound of $S$, and proving that $\sup S=\sqrt2$ comes down to showing that every $y<\sqrt2$ is <em>not</em> an upper bound of $S$. That means showing that $y<s$ holds for some $s\in S$.</p>
|
1,557,686 | <p>I have a definition of a <a href="https://en.wikipedia.org/wiki/Hypergeometric_distribution" rel="nofollow">Hypergeometric distribution</a> as follows:</p>
<blockquote>
<p>Definition: the Hypergeometric distribution is a discrete probability distribution that describes the probability of $k$ successes in $n$ draws, without replacement, from a finite population of size $N$ that contains exactly $K$ successes, wherein each draw is either a success or a failure. In contrast, the binomial distribution describes the probability of $k$ successes in $n$ draws with replacement.</p>
</blockquote>
<p>A random variable $X$: no. of successes in <strong>$K$ successes</strong>. The pdf is</p>
<p>$$P(X=k)=\frac{(\text{#ways for $k$ successes})\times (\text{# ways for $n-k$ failures})}{(\text{total number of way to select})}=\frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$$</p>
<p>My question is: what is the definition of the r.v. $X$? In my words, I wrote <em>"A random variable $X$: no. of successes in <strong>$K$ successes</strong>."</em> Is this correct? I am confused between "no. of successes in <strong>$K$ successes</strong>" and "no. of successes in <strong>$K$ trials</strong>." Thanks in advance</p>
| Em. | 290,196 | <p>Generally, I think, this is taught as or thought of as a box full of "good" items and "bad" items. I will use a slightly different notation. Suppose we have a box full of $N$ balls, of which $G$ are good, with $G\leq N$. Let $X$ represent the number of "good" balls I draw when I grab $n$ balls from the box. Then the probability of getting $k$ good balls in $n$ draws is, in notation,
$$P(X = k) = \frac{\binom{G}{k}\binom{N-G}{n-k}}{\binom{N}{n}}.$$
The denominator counts all the ways to choose $n$ balls from the $N$ balls in the box. The numerator counts all the ways to choose $n$ balls such that $k$ are good and $n-k$ are bad.</p>
<hr>
<p>So from your perspective, if you choose one of the $K$ items, or an event from $K$ is realized, that is considered a success. You can consider it a "good" ball.</p>
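<p>A quick sanity check of this pmf (an added illustration; the numbers $N=20$, $G=7$, $n=5$ are my own choice): the probabilities sum to $1$ by Vandermonde's identity, and the mean is $nG/N$.</p>

```python
from math import comb

def hyper_pmf(N, G, n, k):
    """P(k good balls among n drawn from N balls, G of them good)."""
    return comb(G, k) * comb(N - G, n - k) / comb(N, n)

N, G, n = 20, 7, 5
probs = [hyper_pmf(N, G, n, k) for k in range(n + 1)]
mean = sum(k * p for k, p in enumerate(probs))
print(sum(probs), mean)   # total probability 1, mean n*G/N = 1.75
```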
|
3,271,917 | <p><span class="math-container">$$ \sum_{n=1}^\infty (-1)^{n+1} \left( \frac{1.4.7\dots .(3n-2)}{2.5.8\dots .(3n-1)} \right)^2 $$</span></p>
<p>I have done the <span class="math-container">$ \sum_{n=1}^\infty \left( \frac{1.4.7\dots .(3n-2)}{2.5.8\dots .(3n-1)} \right)^2 $</span> part, and showed it is divergent using the Gauss test.</p>
<p>But I am not able to do this part, <span class="math-container">$ \sum_{n=1}^\infty (-1)^{n+1} \left( \frac{1.4.7\dots .(3n-2)}{2.5.8\dots .(3n-1)} \right)^2 $</span>; I tried the Leibniz test, but could not make it work.</p>
<p>I have no idea how to do this; please help.</p>
| Rkb | 597,464 | <p><span class="math-container">$(\frac{1}{2})^2 < \frac{1}{2}$</span>,
<span class="math-container">$(\frac{4}{5})^2 < \frac{3}{4}$</span>,
<span class="math-container">$(\frac{7}{8})^2 < \frac{5}{6}$</span>, <span class="math-container">$\dots \dots$</span>
<span class="math-container">$(\frac{3n-2}{3n-1})^2 < \frac{2n-1}{2n}$</span></p>
<p>Multiplying all , we get </p>
<p><span class="math-container">$\prod_{n=1}^n(\frac{3n-2}{3n-1})^2 <\prod_{n=1}^n \frac{2n-1}{2n}$$ ......(1)$</span></p>
<p>Now,
<span class="math-container">$(\frac{1}{2}) < \frac{2}{3}$</span>,<span class="math-container">$(\frac{3}{4}) < \frac{4}{5}$</span>,<span class="math-container">$\dots$</span> <span class="math-container">$(\frac{2n-1}{2n}) < \frac{2n}{2n+1}$</span></p>
<p>Multiplying all, we get</p>
<p><span class="math-container">$(\frac{1.3.5. \dots 2n-1}{2.4.6. \dots 2n}) < \frac{2.4.6. \dots2n}{3.5.\dots.2n-1.2n+1}$</span></p>
<p><span class="math-container">$(\frac{1.3.5. \dots 2n-1}{2.4.6. \dots 2n}) < \frac{2.4.6. \dots2n}{3.5.\dots.2n-1}.(\frac{1}{2n+1})$</span></p>
<p><span class="math-container">$(\frac{1.3.5. \dots 2n-1}{2.4.6. \dots 2n})^2 < (\frac{1}{2n+1})$</span></p>
<p><span class="math-container">$(\frac{1.3.5. \dots 2n-1}{2.4.6. \dots 2n})< \frac{1}{\sqrt{2n+1}} \qquad (2)$</span></p>
<p>From <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>, we get</p>
<p><span class="math-container">$\prod_{k=1}^n(\frac{3k-2}{3k-1})^2< \frac{1}{\sqrt{2n+1}}$</span></p>
<p>Now,
<span class="math-container">$\lim_{n\to \infty} \frac{1}{\sqrt{2n+1}} = 0.$</span></p>
<p>Hence, <span class="math-container">$\lim_{n\to \infty} \prod_{k=1}^n(\frac{3k-2}{3k-1})^2 = 0$</span>.</p>
<p>Hence, by the Leibniz criterion, the given series <span class="math-container">$\sum_{n=1}^\infty (-1)^{n+1} \left( \frac{1.4.7\dots .(3n-2)}{2.5.8\dots .(3n-1)} \right)^2 $</span> is convergent. </p>
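<p>The key facts used above, that the terms decrease and are squeezed below $1/\sqrt{2n+1}$, can be verified in exact rational arithmetic (a check added here, not part of the original answer):</p>

```python
from fractions import Fraction

def term(n):
    """a_n = (1*4*7*...*(3n-2) / (2*5*8*...*(3n-1)))^2, computed exactly."""
    r = Fraction(1)
    for k in range(1, n + 1):
        r *= Fraction(3 * k - 2, 3 * k - 1)
    return r * r

terms = [term(n) for n in range(1, 60)]
print(float(terms[0]), float(terms[-1]))   # 0.25, then well below 1/sqrt(119)
```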
<p><strong>Answer credit, phara narai</strong></p>
|
71,354 | <p>I've been reading Moroianu's Kahler geometry notes and found an unattributed quote that says that if the Kahler properties hold, then
"a long list of miracles occur"</p>
<p>I am guessing that this quote belongs to Kahler himself, but I can't back this up. Does anyone know?</p>
| csar | 10,942 | <p>Following quid's comment (I don't have the reputation to comment), assuming <i>Über eine bemerkenswerte Hermitesche Metrik</i> is the place to look, it doesn't appear to be in there. The "long list of miracles" does seem to be apparent in a quick skim of the paper, though (and a naive skim at that--I just read the words, not the content).</p>
|
71,354 | <p>I've been reading Moroianu's Kahler geometry notes and found an unattributed quote that says that if the Kahler properties hold, then
"a long list of miracles occur"</p>
<p>I am guessing that this quote belongs to Kahler himself, but I can't back this up. Does anyone know?</p>
| Community | -1 | <p>I will make a CW answer to collect together some information. </p>
<p>Igor Rivin found a published text containing the relevant phrase.
It is in "The unabated vitality of Kählerian geometry," by Jean-Pierre Bourguignon which is included in the collected works of Kähler (Kähler, Mathematische Werke/Mathematical Works, edited by Berndt and Riemenschneider, 2003).</p>
<p>The relevant pasage is (from the text of Bourguignon where 'he' refers to Kähler): </p>
<blockquote>
<p>Quoting his terms, the case $d \omega = 0$ presents itself as "a remarkable exception". This is the condition he supposes throughout the paper whose purpose it is to describe a long list of miracles occuring then. </p>
</blockquote>
<p>This suggest to me that while Bourguignon is first quoting Kähler (the "a remarkable exception") he then stops quoting (and a new sentence started) and describes in his [Bourguignon's] own words the list of result/properties obtained by Kähler as miracolous.</p>
<p>Side note: in this text there are some other verbatim quotes and they are under quotation marks; so except if Bourguignon inadvertently omitted them, he is not quoting. </p>
<p>Furthermore, the paper of Kähler in question "Über eine bemerkenswerte Hermitesche Metrik" does not seem to contain such a phrase (cf. csar). I also searched the above mentioned book for appropriate terms (miracles, the German analog Wunder, and also <code>Mir*</code> in case he should have used Mirakel, which exists but is a bit rare); this did not turn up anything, besides what is mentioned above. </p>
<p>Therefore it seems likely to me that this 'miracles' are due to Bourguignon and not Kähler; and, Moroianu is sort-of quoting Bourguignon. The time-line might seem a bit short, the notes being from 2003 as well as the book, however in view of the fact that Moroianu is a former student of Bourguignon this seems much less surprising, and perhaps even reinforces the idea that Moroianu is quoting Bourguignon.</p>
<p>Final note: in case somebody wants to make really sure, Moroianu is a (it seems now inactive) MO user, so he might, if made aware of the need, give an authentic account. </p>
|
77,705 | <p>If $A$ and $B$ are both $n \times n$ matrices, and $v$ is a non-zero $n \times 1$ column vector then is it true that if
$$ABv = BAv$$
then $$AB=BA$$</p>
| barf | 11,203 | <p>Suppose $AB = C.$ Then we have $ABv= Cv =BAv$ for all $v.$</p>
<p>Do you feel comfortable saying $BA = C$ now? Do you know any theorems about the uniqueness of matrices of linear transformations under fixed bases?</p>
<p>Edit: In light of the other answer, I should clarify: this is true only when the equality holds <em>for all</em> $v.$</p>
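<p>To make the caveat concrete (example matrices chosen by me, not taken from the answer): equality at a single vector does not force commutativity.</p>

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[1, 0], [0, 0]]
v = [1, 0]

AB, BA = matmul(A, B), matmul(B, A)
# ABv == BAv for this particular v, yet AB != BA
print(matvec(AB, v) == matvec(BA, v), AB == BA)   # True False
```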
|
2,895,343 | <blockquote>
<p>Differentiate $\tan^3(x^2)$</p>
</blockquote>
<p>I first applied the chain rule and made $u=x^2$ and $g=\tan^3u$. I then calculated the derivative of $u$, which is $$u'=2x$$ and the derivative of $g$, which is
$$g'=3\tan^2u$$</p>
<p>I then applied the chain rule and multiplied them together, which gave me </p>
<p>$$f'(x)=2x\cdot 3\tan^2(x^2)$$</p>
<p>Is this correct? If not, any hints as to how to get the correct answer?</p>
| TheSimpliFire | 471,884 | <p>You are almost there! In this case you need to apply the Chain Rule three times.</p>
<p>We have that $$(\tan^3(x^2))'=3\tan^2(x^2)\cdot(\tan(x^2))'=3\tan^2(x^2)\cdot\sec^2(x^2)\cdot(x^2)'=6x\tan^2(x^2)\sec^2(x^2)$$</p>
|
132,875 | <p>I was told that I could obtain an analytic solution to a particle falling under the influence of Newtonian gravity by using <code>DSolveValue</code>.</p>
<p><strong>What I am given</strong></p>
<ul>
<li>$G = M = m = 1$</li>
<li>$M$ is a point mass at $z=0$</li>
<li>the particle falls along the $z$ axis</li>
<li>$z(0) = 0$</li>
<li>$\frac{\mathrm{d}z(-\infty)}{\mathrm{d}t} = 0$</li>
</ul>
<p>Thus,
$$\frac{\mathrm{d}^2z(t)}{\mathrm{d}t^2} + \frac{1}{z(t)^2} = 0$$
and I'm trying to find $z$ as just a function of $t$.
So I tried</p>
<pre><code>zOft = DSolveValue[{z''[t] + 1/z[t]^2 == 0, z'[-∞] == 0, z[0] == 0}, z[t], t]
</code></pre>
<p>But I get the error</p>
<pre><code>DSolveValue::bvimp: General solution contains implicit solutions. In
the boundary value problem, these solutions will be ignored, so some
of the solutions will be lost.
</code></pre>
<p>And <code>zOft</code> just becomes <code>DSolveValue[...everything above...]</code>. So I don't actually get an expression for $z(t)$.</p>
<p>I am fairly new at <em>Mathematica</em>. Is there something I am doing wrong in the code? Or is it just generally not possible to analytically solve this? Is it something wrong with the fact that $\frac{1}{z(0)^2}$ is undefined? Do I have to somehow re-normalize the time coordinate? I was told that I should be able to find an analytic expression dependent only on $t$ of the position of the particle.</p>
| xzczd | 1,871 | <p>There actually exist 2 solutions.</p>
<p>We first make a change of variable $t=\log(s)$ to eliminate the troublesome <code>-Infinity</code>, here I'll use <a href="https://mathematica.stackexchange.com/a/80267/1871"><code>DChange</code></a> for the task:</p>
<pre><code>eq = {z''[t] + 1/z[t]^2 == 0, z'[-inf] == 0, z[0] == 0};
neweq = DChange[eq, t == Log@s, t, s, z[t]] /. inf -> Infinity
(* {1/z[s]^2 + s (z'[s] + s z''[s]) == 0,
z'[0] == 0, z[1] == 0} *)
general = DSolve[neweq[[1]], z[s], s]
(*{Solve[……], Solve[……]}*)
</code></pre>
<p><code>DSolve</code> gives 2 unsolved equations as the general solution. By substituting the b.c. $z(1)=0$ into the equation:</p>
<pre><code>soleq = general[[All, 1]];
soleq /. s -> 1 /. z[1] -> 0
(* {0 == C[2], 0 == C[2]} *)
</code></pre>
<p><code>C[2]</code> is determined. The next task is to make use of the b.c. $z'(0)=0$. We differentiate the equation and solve for $z'(s)$:</p>
<pre><code>First@Solve[D[#, s], z'[s]] & /@ (soleq /. C[2] -> 0)
(* {{z'[s] -> (Sqrt[2] Sqrt[1 + C[1] z[s]])/(s Sqrt[z[s]])},
{z'[s] -> -((Sqrt[2] Sqrt[1 + C[1] z[s]])/(s Sqrt[z[s]]))}} *)
</code></pre>
<p>Given $z'(0)=0$, <code>1 + C[1] z[s] /. s -> 0</code> can only be equal to <code>0</code> i.e. <code>z[s] -> -1/C[1] /. s -> 0</code>. Substitute this condition back into <code>soleq</code> and solve for <code>C[1]</code>:</p>
<pre><code>soleq /. C[2] -> 0 /. z[s] -> -1/C[1]
(* {(I π)/(2 Sqrt[2] C[1]^(3/2)) + Log[s] == 0,
-((I π)/(2 Sqrt[2] C[1]^(3/2))) + Log[s] == 0} *)
First@Solve[#, C[1]] & /@ % /. s -> 0
(* {{C[1] -> 0}, {C[1] -> 0}} *)
</code></pre>
<p>Substitute the value of <code>C[1]</code> and <code>C[2]</code> into the equation (for <code>C[1]</code> we need to take the limit), solve for <code>z[s]</code> and change the variable back:</p>
<pre><code>0 == Limit[Subtract @@ #, C[1] -> 0] & /@ soleq /. C[2] -> 0
(* {0 == Log[s] - 1/3 Sqrt[2] z[s]^(3/2),
0 == Log[s] + 1/3 Sqrt[2] z[s]^(3/2)} *)
newsol = First@Solve[#, z[s]] & /@ %
(* {{z[s] -> (3^(2/3) Log[s]^(2/3))/2^(1/3)},
{z[s] -> (3^(2/3) (-Log[s])^(2/3))/2^(1/3)}} *)
{sol1[t_], sol2[t_]} = z[s] /. newsol /. Log@s -> t
(* {(3^(2/3) t^(2/3))/2^(1/3), (3^(2/3) (-t)^(2/3))/2^(1/3)} *)
</code></pre>
<p>Both of the solutions satisfy the equation and the b.c.s:</p>
<pre><code>eq /. {{z -> sol1}, {z -> sol2}} /. inf -> Infinity
(* {{True, True, True}, {True, True, True}} *)
</code></pre>
|
132,875 | <p>I was told that I could obtain an analytic solution to a particle falling under the influence of Newtonian gravity by using <code>DSolveValue</code>.</p>
<p><strong>What I am given</strong></p>
<ul>
<li>$G = M = m = 1$</li>
<li>$M$ is a point mass at $z=0$</li>
<li>the particle falls along the $z$ axis</li>
<li>$z(0) = 0$</li>
<li>$\frac{\mathrm{d}z(-\infty)}{\mathrm{d}t} = 0$</li>
</ul>
<p>Thus,
$$\frac{\mathrm{d}^2z(t)}{\mathrm{d}t^2} + \frac{1}{z(t)^2} = 0$$
and I'm trying to find $z$ as just a function of $t$.
So I tried</p>
<pre><code>zOft = DSolveValue[{z''[t] + 1/z[t]^2 == 0, z'[-∞] == 0, z[0] == 0}, z[t], t]
</code></pre>
<p>But I get the error</p>
<pre><code>DSolveValue::bvimp: General solution contains implicit solutions. In
the boundary value problem, these solutions will be ignored, so some
of the solutions will be lost.
</code></pre>
<p>And <code>zOft</code> just becomes <code>DSolveValue[...everything above...]</code>. So I don't actually get an expression for $z(t)$.</p>
<p>I am fairly new at <em>Mathematica</em>. Is there something I am doing wrong in the code? Or is it just generally not possible to analytically solve this? Is it something wrong with the fact that $\frac{1}{z(0)^2}$ is undefined? Do I have to somehow re-normalize the time coordinate? I was told that I should be able to find an analytic expression dependent only on $t$ of the position of the particle.</p>
| Joe | 45,091 | <p>Thank you all so much for the answers. They are really detailed and helped me a lot. However, I ended up doing something slightly different and got the same result.</p>
<p>$$\ddot{z} = \frac{\mathrm{d}\dot{z}}{\mathrm{d}t} = \frac{\mathrm{d}\dot{z}}{\mathrm{d}z}\cdot \frac{\mathrm{d}z}{\mathrm{d}t} = \frac{\mathrm{d}\dot{z}}{\mathrm{d}z} \dot{z} = - \frac{1}{z^2}$$ $$\Rightarrow \dot{z} \mathrm{d}\dot{z} = -\frac{\mathrm{d}z}{z^2}$$ $$\Rightarrow \frac{1}{2}\dot{z}^2 = \frac{1}{z} + c$$ $$\Rightarrow \dot{z}^2 = \frac{2}{z} + c'$$</p>
<p>To satisfy $\dot{z} = 0$ at $z = \infty$ (i.e. $\dot{z}(t=-\infty) = 0$), $c' = 0$ (see Jens's comment for a better explanation). Thus, $z\dot{z}^2=2$. Plugging in</p>
<p><code>DSolveValue[{z[t] z'[t]^2 == 2, z[0] == 0}, z[t], t]</code></p>
<p>Gives</p>
<p><code>(3^(2/3) (-t)^(2/3))/2^(1/3)</code></p>
<p>or $$\frac{3^{2/3}(-t)^{2/3}}{2^{1/3}}$$ as you all said.</p>
<p>Or, without <em>Mathematica</em>, $z\dot{z}^2=2 \rightarrow \sqrt[]{z}\mathrm{d}z = \sqrt[]{2}\mathrm{d}t \rightarrow \frac{2}{3}z^{3/2} = \sqrt[]{2}t + c$. $z(0) = 0 \rightarrow c = 0$. So
$$z^{3/2} = \frac{3}{2^{1/2}}t$$
$$\Rightarrow z(t) = \frac{3^{2/3}}{2^{1/3}}(\pm t)^{2/3}$$
To fit our conditions, choose the minus.</p>
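<p>The closed form is easy to sanity-check numerically (a sketch in Python; it evaluates $z$, $\dot z$ and $\ddot z$ analytically and confirms both $z\dot{z}^2=2$ and the original ODE for $t&lt;0$):</p>

```python
C = 3 ** (2 / 3) / 2 ** (1 / 3)   # the constant in z(t) = C * (-t)^(2/3)

def z(t):
    return C * (-t) ** (2 / 3)

def zdot(t):
    # d/dt [C * (-t)^(2/3)] = -(2/3) * C * (-t)^(-1/3)
    return -(2 / 3) * C * (-t) ** (-1 / 3)

def zddot(t):
    # d/dt of zdot: -(2/9) * C * (-t)^(-4/3)
    return -(2 / 9) * C * (-t) ** (-4 / 3)

for t in (-0.5, -1.0, -3.0):
    assert abs(z(t) * zdot(t) ** 2 - 2) < 1e-9    # the energy relation
    assert abs(zddot(t) + 1 / z(t) ** 2) < 1e-9   # the original ODE
```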
|
2,121,316 | <p>So I have x circles, given by X, Y coordinates + radius.</p>
<p>I'm unsuccessfully trying to figure out how to make an algorithm for counting how many circles a line segment will cross. For illustration: <a href="https://i.stack.imgur.com/mJ8C8.png" rel="nofollow noreferrer">1</a> (the green line crosses 2 circles, the red line crosses 1).</p>
<p>I know the starting and ending points of the line segments + the circles as stated before. I tried to solve it with the vector distance equation. The code I wrote was like this (I already googled for answers):</p>
<pre><code>for(int i = 0; i<d;i++)
{
for (int j = 0; j<s; j++)
{
int x1 = lines[i][0];
int x2 = lines[i][2];
int y1 = lines[i][1];
int y2 = lines[i][3];
double x21 = x2-x1;
double y12 = y1-y2;
double x12 = x1-x2;
double y21 = y2-y1;
double x0 = rooms[j][0];
double y0 = rooms[j][1];
double numerator = abs((x21*x0)+(y12*y0)+(x12*y1)+(y21*x1));
double enumerator = sqrt(pow(x21,2)+pow(y12,2));
double s = numerator /enumerator ;
double r = rooms[j][2];
if (s<=r) {
count++;
}
}
}
</code></pre>
<p>But the code didn't work.</p>
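<p>For reference, the same distance test can be sketched in Python with two fixes the code above needs: the distance should be measured to the <em>segment</em> (clamping the projection parameter) rather than to the infinite line, and the absolute value of a <code>double</code> should not be taken with the integer <code>abs</code>, which can silently truncate. All names here are illustrative:</p>

```python
import math

def segment_crosses_circle(x1, y1, x2, y2, cx, cy, r):
    # Distance from the circle center to the *segment*, not the infinite
    # line: project the center onto the segment, clamping t to [0, 1].
    dx, dy = x2 - x1, y2 - y1
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:                        # degenerate segment (a point)
        t = 0.0
    else:
        t = max(0.0, min(1.0, ((cx - x1) * dx + (cy - y1) * dy) / seg_len_sq))
    px, py = x1 + t * dx, y1 + t * dy          # closest point on the segment
    return math.hypot(cx - px, cy - py) <= r

# count how many circles one segment touches
circles = [(0, 0, 1), (5, 0, 1), (0, 5, 1)]
count = sum(segment_crosses_circle(-2, 0, 6, 0, cx, cy, r)
            for cx, cy, r in circles)
assert count == 2      # the circles centered at (0,0) and (5,0)
```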
| gt6989b | 16,192 | <p>Another, less convoluted way is to convert everything to base 2:
$$
0 = (2^2)^{x+1} - (2^3)^{2x} = 2^{2x+2} - 2^{6x}...
$$</p>
|
2,121,316 | <p>So I have x circles, given by X, Y coordinates + radius.</p>
<p>I'm unsuccessfully trying to figure out how to make an algorithm for counting how many circles a line segment will cross. For illustration: <a href="https://i.stack.imgur.com/mJ8C8.png" rel="nofollow noreferrer">1</a> (the green line crosses 2 circles, the red line crosses 1).</p>
<p>I know the starting and ending points of the line segments + the circles as stated before. I tried to solve it with the vector distance equation. The code I wrote was like this (I already googled for answers):</p>
<pre><code>for(int i = 0; i<d;i++)
{
for (int j = 0; j<s; j++)
{
int x1 = lines[i][0];
int x2 = lines[i][2];
int y1 = lines[i][1];
int y2 = lines[i][3];
double x21 = x2-x1;
double y12 = y1-y2;
double x12 = x1-x2;
double y21 = y2-y1;
double x0 = rooms[j][0];
double y0 = rooms[j][1];
double numerator = abs((x21*x0)+(y12*y0)+(x12*y1)+(y21*x1));
double enumerator = sqrt(pow(x21,2)+pow(y12,2));
double s = numerator /enumerator ;
double r = rooms[j][2];
if (s<=r) {
count++;
}
}
}
</code></pre>
<p>But the code didn't work.</p>
| kingW3 | 130,953 | <p>Well you have that $8=2^{3}=(\sqrt{4})^3=4^{3/2}$.Now as to why it works we have that
$$a^{\log_a b}=b$$
Now plugging in $b=8$ we can write
$$8=4^{\log_4 8}$$
And we have that $\log_4 8=\frac{3}{2}$
So we can write $8^{2x}$ as $(4^{3/2})^{2x}$.
Your friend's approach is correct.</p>
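<p>Both steps are quick to confirm numerically (a sketch; the underlying equation $4^{x+1}=8^{2x}$ is inferred from the working above):</p>

```python
import math

# a^(log_a b) = b, with a = 4, b = 8, and log_4 8 = 3/2:
assert math.isclose(math.log(8, 4), 1.5)
assert math.isclose(4 ** 1.5, 8)

# Rewriting 8^(2x) as (4^(3/2))^(2x) = 4^(3x) turns 4^(x+1) = 8^(2x)
# into x + 1 = 3x, i.e. x = 1/2:
x = 0.5
assert math.isclose(4 ** (x + 1), 8 ** (2 * x))
```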
|
2,062,357 | <p>Let $\sum_{i=1}^\infty a_i$ be an absolutely convergent series and let $\sum_{i=1}^\infty b_i$ be a sub-series of it, i.e.
$$b_j=c_j\cdot a_j\quad (c_j\in\{0,1\}),\qquad\forall j\in\mathbb N$$
We can say that $\sum_{i=1}^\infty b_i$ is absolutely convergent as well. So its limit, say $L$, is a real number.</p>
<p>Now the question is, what can be said about the set of all possible values of $L$? Is it connected? Closed maybe? Neither closed / connected / compact? I have no idea.</p>
<p>Can we determine whether a specific number belongs to this set or not? I was especially interested in finding a sub-series of $\sum \frac 1{n^2}$ whose limit is, say $\frac{\pi}6$, and then I ended up with this question.</p>
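<p>For the specific example, a greedy selection appears to work numerically (a sketch only; it illustrates, but does not prove, that $\frac{\pi}6$ is attainable as a sub-series of $\sum \frac 1{n^2}$):</p>

```python
import math

target = math.pi / 6      # desired sub-series limit of sum 1/n^2
total = 0.0
for n in range(1, 100001):
    term = 1 / (n * n)
    if total + term <= target:    # greedy: keep any term that still fits
        total += term

assert abs(total - target) < 1e-4
```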
| Community | -1 | <p>Take $A_1,A_2,\ldots,A_n$ to be sets such that if $i \neq j$ then $A_i$ and $A_j$ are disjoint (that is, $A_i \cap A_j$ is the empty set). Then $A_1, A_2, \ldots,A_n$ are said to be mutually disjoint. </p>
<p>Now if $A$ and $B$ are sets such that $A \cap B = A$ (or $B$, without loss of generality) then $A$ is said to be contained in $B$, that is $$A \subseteq B.$$</p>
|
898,082 | <p>I have the answer for this, but my teacher hadn't taught the whole "when cosine is an even, the value of $-\arccos (-0.7)$ is a solution too." </p>
<p>Please: </p>
<ul>
<li>tell me when a $\cos$/$\sin$ function is even/odd</li>
<li>what happens if it's odd?</li>
<li>how to use the "$±\arccos(-0.7) + 2kπ$" (I don't understand why you add $2kπ$)</li>
<li>and how to find the solutions!</li>
</ul>
<p>Another example is $$\sec( x )= -3, -π ≤ x < π $$</p>
<p>I really don't understand what happens if the function is "even" or "odd." And how to determine if it is.</p>
| rae306 | 168,956 | <p><strong>Even and odd functions</strong></p>
<ul>
<li><em>Even functions</em>: $f(x)=f(-x)$. Geometrically, this is symmetry about the $y$-axis.</li>
<li><em>Odd functions</em>: $-f(x)=f(-x)$. Geometrically, this is origin symmetry.</li>
</ul>
<p>From these definitions and the graphs of $\sin x$ and $\cos x$, we can see that $\sin x$ is odd, $\cos x$ even.</p>
<p><img src="https://i.stack.imgur.com/Lve6h.png" alt="enter image description here"></p>
<p>Note: ${\color{red} \sin \color{red}x}$ is red, $\color{blue}\cos \color{blue}x$ is blue.</p>
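<p>These symmetries are easy to confirm numerically at a few sample points:</p>

```python
import math

for x in (0.3, 1.0, 2.5):
    assert math.isclose(math.cos(-x), math.cos(x))     # cosine is even
    assert math.isclose(math.sin(-x), -math.sin(x))    # sine is odd
```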
|
944,965 | <p>Question: what's the order of the element $a=33$ in $\mathbb{Z}_{60}$ (under modular addition)?</p>
<p>Answer: $\langle 33 \rangle= \{33,6,39,12,45,18,51,24,57,30,3,36,9,42,15,48,21,54,27,0\}$. Therefore, $\text{order}(33)=20$.</p>
<p>What's the inverse of $33$ in $\mathbb{Z}_{60}$ (under modular addition)?</p>
<p>I'm struggling with finding the inverse. Can anyone show me how to find it?</p>
<p>I would deeply appreciate your work and efforts</p>
<p>Thanks</p>
| egreg | 62,967 | <p>You don't need to compute all the multiples. You know that the order of $33$ is the least positive integer $n$ such that
$$
n\cdot33\equiv 0\pmod{60}
$$
that is, the least positive integer such that $33n$ is a multiple of $60$ or, equivalently, $11n$ is a multiple of $20$. Thus $n=20$.</p>
<p>For the inverse (I'd use “opposite”, but it's a matter of conventions), you want an integer $m$ such that $33+m\equiv0\pmod{60}$. Of course an integer $m$ such that $33+m=60$ is good.</p>
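<p>Both computations can be confirmed by brute force (a quick check in Python over $\mathbb{Z}_{60}$ under addition):</p>

```python
# Order of 33 in Z_60 under addition: least n >= 1 with n*33 % 60 == 0
order = next(n for n in range(1, 61) if (n * 33) % 60 == 0)
assert order == 20

# Additive inverse (opposite) of 33: the m in Z_60 with (33 + m) % 60 == 0
inverse = next(m for m in range(60) if (33 + m) % 60 == 0)
assert inverse == 27
```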
|
36,625 | <p>Say $L\mathbb{C}^\times$ is the loop group of smooth maps $S^1 \to \mathbb{C}^\times$. There is a submonoid $L_{poly}\mathbb{C}^\times$ of loops that look like $w_0 + w_1z + w_2z^2 + \cdots + w_nz^n$ where $z = e^{i\theta}$ (as Andrew notes below this is not a group because it's not closed under taking inverses). Equivalently $L_{poly}\mathbb{C}^\times$ as a set is just polynomials $p(z) \in \mathbb{C}[z]$ such that $p(z) \ne 0$ for $|z| = 1$. </p>
<p>If we mod out by scaling and rotation then the set of polynomial loops describes a subset $X$ of $\mathbb{P}(\oplus_{n \in \mathbb{N}} \mathbb{C})$ (by identifying a loop with its vector of coefficients). I want to look at $X$ from an algebro-geometric point of view, but I have no intuition about how bad or nice $X$ may be; i.e. can it be a variety? </p>
<p>The way I've been thinking about it is that $X = \cup_{n \in \mathbb{N}} X_n$ where $X_n \subset \mathbb{P}^n$ are the loops of degree at most $n$. I think $X_1$ is the image under the projection $\mathbb{C}^2 - 0 \to \mathbb{P}^1$ of the set {$(w_0,w_1):|w_0|\ne |w_1|$}. So it seems describable as the complement of a hypersurface in $\mathbb{R}^4$ but probably its not a complex variety.</p>
<p>But already trying to figure out what $X_2$ is seems difficult. Also I feel I don't have any `sophisticated' way of thinking about this stuff meaning my attempts to describe $X_2$ seems to always degenerate to just fumbling around with planar geometry. </p>
<p>Some specific questions regarding this setup:</p>
<p>0) What is the dimension of $X_n$?</p>
<p>1) Which if any of the $X_n$ or $X$ are a variety over $\mathbb{C}$ or $\mathbb{R}$?</p>
<p>2) If $X$ or $X_n$ are not varieties can you find any positive dimensional varieties contained in them?</p>
<p>3) Can you suggest any tools that might be useful for answering any of the previous questions?</p>
<p>Of course if any of this seems too easy you are welcome to replace $\mathbb{C}^\times$ with $GL(n,\mathbb{C})$, polynomials with rational functions or with power series convergent in an annulus containing $|z| = 1$.</p>
<p>An extra thought: So $L_{poly}\mathbb{C}^\times$ is not a group, but it seems you do get a group if you look at convergent series in non-positive powers of $z$; i.e. loops that look like $\sum_{j \in \mathbb{N}} c_j z^{-j}$. I wonder if there's anything interesting you can say about the $c_j$.</p>
| damiano | 4,344 | <p>This is mostly a series of comments, but guided by the questions you asked.</p>
<p>First of all, I will only talk about $X_n$, interpreting it as the space of non-zero complex polynomials $p$ of degree at most $n$ such that no root of $p$ lies on the unit circle, taken up to non-zero scaling. We may as well think of the polynomials as homogeneous of degree exactly $n$ in two variables, so that each point of $X_n$ defines a subset of $n$ points of the complex projective line $\mathbb{CP}^1$ (the set of roots of the polynomial) that is disjoint from the unit circle. Therefore, $X_n$ is certainly a non-empty open subset in the standard topology of the projective space of homogeneous polynomials of degree $n$, and therefore <strong>$X_n$ has (real) dimension $2n$</strong>. Observe that the unit circle disconnects $\mathbb{CP}^1$, and that the points of $X_n$ are likewise distributed into (at least) $n+1$ connected components, corresponding to how the $n$ points in $\mathbb{CP}^1$ are distributed with respect to the two halves obtained by removing the unit circle (recall that $\mathbb{CP}^1$ is topologically a two-dimensional sphere and that the unit circle can be identified with the equator of the sphere, so that, among the $n$ points we are talking about, there are some that are in one hemisphere and some that are in the other). On the other hand, a non-empty Zariski open subset of an irreducible algebraic variety is connected. Thus, as a subset of the space of homogenous polynomials of degree $n$, <strong>the space $X_n$ is certainly not a complex algebraic subvariety</strong>.</p>
<p>But, we may decide to analyze further the space $X_n$, zooming in on the locus $X_n^k$ where, for a fixed integer $k$, there are $k$ points on the half containing the origin and $n-k$ points on other half. Clearly, within each hemisphere, the points are free to roam around! Thus $X_n^k$ is homeomorphic (and in fact diffeomorphic with the natural choice of differentiable structure) to the space of ordered pairs $(p_1,p_2)$ where $p_1$ is a monic polynomial of degree $k$ and $p_2$ is a monic polynomial of degree $n-k$: expand each hemisphere to a whole complex plane and "code" the $k$ points on one half by the unique monic polynomial having them as a root (and do the same to the other half). Thus each space $X_n^k$ is connected and in fact diffeomorphic to $\mathbb{C}^k \times \mathbb{C}^{n-k}$. As a complex manifold you can also say that $X_n^k$ is the product of the symmetric product of $k$ copies of the unit disk with the symmetric product of $n-k$ copies of the unit disk. Thus, again using a structure induced from the ambient space of homogeneous polynomials, any complex algebraic subvariety of $X_n$ would be a complex algebraic subvariety of a symmetric product of unit disks: <strong>I think that this means that the only complex algebraic subvarieties of $X_n$ are the points</strong>.</p>
<p>Finally, let me make a small stab at getting your hand on $X$, by mentioning one description of the "glueing" of $X_n$ inside $X_{n+1}$. From the point of view of non-homogeneous polynomials, this corresponds to simply realizing that a polynomial of degree at most $n$ is also a polynomial of degree at most $n+1$. From the point of view of their homogenizations, the inclusion corresponds to replacing $z^i$ by $x^iy^{n+1-i}$ instead of $x^iy^{n-i}$. Effectively, we are adding the point at infinity as one of our roots (namely the extra root $y=0$). Thus in terms of the description above, we observe that the two hemispheres are not "identical": one of them has a point that is special, namely the point at infinity. The points of $X_{n+1}$ that come from points of $X_n$ are the points that correspond to $(n+1)$-tuples one of whose elements is the point at infinity.</p>
|
4,204,416 | <p>First, can someone provide a simple explanation for the <span class="math-container">$\bar{x}$</span> formula: <span class="math-container">$$\frac{\iint x\,dA}{\text{area}}$$</span> My understanding of the formula is as follows: we let <span class="math-container">$z=x$</span>, calculate the volume, and divide by the area to find <span class="math-container">$\bar{z}$</span>, which is the same as <span class="math-container">$\bar{x}$</span>.
Although I provided an explanation, I honestly still don't totally understand how that formula works; my teacher just assumed we knew it and didn't cover it. Unfortunately, most of the websites I looked at just stated the formula or spent a lot of time explaining torque, moment, etc. An intuitive explanation or one that involved Riemann sums would be immensely helpful.</p>
<p>Second, I wanted to test this formula out with this example: <span class="math-container">$$\frac{\int_0^6 \int_0^{-(x/6)+1} xdydx}{3} = 2$$</span></p>
<p>But I didn't want to just use this formula: I wanted to try to find the <span class="math-container">$x$</span> for which the volumes on "either side" of this <span class="math-container">$x$</span> value are the same. Quickly, I discovered that <span class="math-container">$$\int_0^3 \int_0^{-(x/6)+1} xdydx = \int_3^6 \int_0^{-(x/6)+1} xdydx$$</span>
so <span class="math-container">$x=3$</span>. But then shouldn't that mean <span class="math-container">$\bar{x} = 3$</span> because the volume on either side is the same? Or is this nonsense?</p>
<p>Most likely, it's nonsense, and I think it's due to me not totally understanding the <span class="math-container">$\bar{x}$</span> formula.</p>
| Kman3 | 641,945 | <h2 id="mass-ryvx">Mass</h2>
<p>Imagine a lamina with constant density, <span class="math-container">$\rho$</span>. The mass <span class="math-container">$m$</span> of the lamina and the area <span class="math-container">$A$</span> occupied by the lamina are proportional: <span class="math-container">$1/4$</span> of the lamina has <span class="math-container">$1/4$</span> of the mass of the lamina, and so on. Now, what happens when the density isn't constant? This is what calculus was built for.</p>
<p>Say the density changes from point to point within a certain lamina. We can say that at a position <span class="math-container">$(x,y)$</span>, the density <span class="math-container">$\rho(x,y)$</span> can be expressed as</p>
<p><span class="math-container">$$\rho(x,y)=\frac{dm}{dA}$$</span></p>
<p>Now, to find the total mass of the object, we can integrate density with respect to volume:</p>
<p><span class="math-container">$$dm = \rho(x,y) \ dA$$</span></p>
<p><span class="math-container">$$m= \iint_R \rho(x,y) \ dA$$</span></p>
<p>So now that we have <strong>mass</strong> covered, let us define <strong>moments</strong>.</p>
<h2 id="moments-d5is">Moments</h2>
<p>Say I give you a bat and a kettlebell. You're going to hold the bat horizontally and I will hang the kettlebell on different parts of the bat. In other words, I'm going to change the mass distribution of the system (bat and kettlebell) several times. You will find that the closer I hang the kettlebell to where you're holding, the easier the bat is to move. Mathematically speaking, suppose your hands are located at the origin. When I hang the kettlebell closer to your hands, the center of mass of the system moves closer to the origin. When I hang it further away from your hands, the center of mass moves farther from the origin. We turn the concept of how easily the bat moves into the idea of a <strong>moment</strong>.</p>
<p>We know that the further the kettlebell is from your hands, the harder the bat is to move. This means that moments are proportional to distance from the origin. Moments are proportional to mass, too: if I give you a <span class="math-container">$10$</span> kg kettlebell versus a <span class="math-container">$20$</span> kg kettlebell, you will undoubtedly notice a difference.</p>
<p>In 2D space, we use the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> axes, so we will have moments relative to the <span class="math-container">$x$</span> axis and the <span class="math-container">$y$</span> axis. In other words, we will be rotating the bat around the two axes. Given the empirical evidence we have discussed, we can define the <strong>moments about the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> axes</strong> as</p>
<p><span class="math-container">$$M_x = m_1 y_1+m_2 y_2+...+m_n y_n = \sum_{i=1}^{n} m_i y_i$$</span></p>
<p><span class="math-container">$$M_y = m_1 x_1 + m_2 x_2 + ... + m_n x_n = \sum_{i=1}^{n} m_i x_i$$</span></p>
<p>Now, the problem becomes complicated when instead of macroscopic kettlebells, you have microscopic changes in mass from point to point. Our friend calculus will be helpful here. Each piece of mass will be tiny, so our moments will become</p>
<p><span class="math-container">$$M_x = \lim_{n \to \infty} y_1 \Delta m + y_2 \Delta m+...+ y_n \Delta m = \lim_{n \to \infty} \sum_{i=1}^{n} y_i \Delta m$$</span></p>
<p><span class="math-container">$$M_y = \lim_{n \to \infty} x_1 \Delta m + x_2 \Delta m+...+ x_n \Delta m = \lim_{n \to \infty} \sum_{i=1}^{n} x_i \Delta m$$</span></p>
<p>Though we know that <span class="math-container">$\Delta m$</span> is just the density, <span class="math-container">$\rho (x,y)$</span>, times the tiny piece of area <span class="math-container">$\Delta A$</span>, so we now have</p>
<p><span class="math-container">$$M_x = \lim_{n \to \infty} \sum_{i=1}^{n} y_i \Delta m = \lim_{n \to \infty} \sum_{i=1}^{n} y_i \rho (x,y) \Delta A = \iint_R y \rho (x,y) \ dA$$</span></p>
<p><span class="math-container">$$M_y = \lim_{n \to \infty} \sum_{i=1}^{n} x_i \Delta m = \lim_{n \to \infty} \sum_{i=1}^{n} x_i \rho (x,y) \Delta A = \iint_R x \rho (x,y) \ dA$$</span></p>
<h2 id="center-of-mass-ox68">Center of Mass</h2>
<p>To find <strong>center of mass</strong> on any axis, we simply divide the moment about the opposing axis by the total mass. This means that</p>
<p><span class="math-container">$$\overline{x} = \frac{M_y}{m} = \frac{\iint_R x \rho (x,y) \ dA}{\iint_R \rho(x,y) \ dA}$$</span></p>
<p><span class="math-container">$$\overline{y} = \frac{M_x}{m} = \frac{\iint_R y \rho (x,y) \ dA}{\iint_R \rho(x,y) \ dA}$$</span></p>
<p>Hopefully this helps aid your understanding. I'm not too good with Riemann sums so I may need to edit this answer, but I think this should give you what you need.</p>
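<p>The formulas can also be checked against the triangle from the question (a numerical sketch with constant density, so <span class="math-container">$\rho$</span> cancels; a midpoint Riemann sum over <span class="math-container">$0\le x\le 6$</span>, <span class="math-container">$0\le y\le 1-x/6$</span> should give <span class="math-container">$\overline{x}=2$</span>):</p>

```python
def midpoint_sum(f, nx=600, ny=100):
    # midpoint Riemann sum over the triangle 0 <= x <= 6, 0 <= y <= 1 - x/6
    total = 0.0
    hx = 6 / nx
    for i in range(nx):
        x = (i + 0.5) * hx
        hy = (1 - x / 6) / ny
        for j in range(ny):
            y = (j + 0.5) * hy
            total += f(x, y) * hx * hy
    return total

area = midpoint_sum(lambda x, y: 1.0)        # should be 3
xbar = midpoint_sum(lambda x, y: x) / area   # should be 2
assert abs(area - 3) < 1e-3
assert abs(xbar - 2) < 1e-3
```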
|
337,035 | <p>Recall Wolpert's lemma:</p>
<p>Let X,Y be hyperbolic surfaces and <span class="math-container">$f:X\to Y$</span> a <span class="math-container">$K$</span>-quasiconformal homeomorphism. For any homotopy class of curves <span class="math-container">$c$</span> let <span class="math-container">$\ell(c)$</span> denote the length of the geodesic in the class. Then <span class="math-container">$$\frac{\ell(c)}{K}\leq \ell(f(c))\leq K\ell(c)$$</span></p>
<p>I am wondering: if <span class="math-container">$f$</span> is a <span class="math-container">$C^1$</span> diffeomorphism between closed hyperbolic surfaces that has this property, is it <span class="math-container">$K$</span>-quasiconformal? Of course it is quasiconformal for some other constant, by nature of being <span class="math-container">$C^1$</span>. </p>
| Alex Nolte | 136,267 | <p>No. The issue is that quasiconformality constants are much more sensitive to local distortion than lengths. This is morally a <span class="math-container">$L^\infty$</span> (quasiconformal constants) to <span class="math-container">$L^1$</span> (lengths) comparison: <span class="math-container">$L^\infty$</span> bounds on a function on a finite measure space imply <span class="math-container">$L^1$</span> bounds, but in general the converse fails.</p>
<p>To sketch a counter-example, take a hyperbolic surface with an <span class="math-container">$\epsilon$</span>-short curve for <span class="math-container">$\epsilon > 0$</span> very small. Such a curve has a long collar neighborhood that is close to a standard form. Leaving the core curve fixed, stretch a small annular neighborhood yielding a dilatation of <span class="math-container">$2$</span> at the core curve. Compensate for the stretch in a slightly larger neighborhood to produce a <span class="math-container">$C^1$</span> homeomorphism supported in a small annular neighborhood <span class="math-container">$V_\delta$</span> of the core curve. Call this map from our hyperbolic surface to itself <span class="math-container">$f_\delta$</span>. By construction, <span class="math-container">$K(f_\delta) \geq 2$</span>.</p>
<p>One sees that for any geodesic <span class="math-container">$c$</span>, the length of <span class="math-container">$f(c)$</span> is close to the length of <span class="math-container">$c$</span>. Taking <span class="math-container">$\epsilon, \delta$</span> close to <span class="math-container">$0$</span>, the numbers <span class="math-container">$\sup\limits_{c} l(f_\delta(c))/l(c)$</span> and <span class="math-container">$\inf\limits_c l(f_\delta(c))/l(c)$</span> can be made to be arbitrarily close to <span class="math-container">$1$</span>. The key thing to do here is to bound the distortion of geodesic segments in <span class="math-container">$V_\delta$</span> in terms of the homotopy class (rel <span class="math-container">$\partial V_\delta$</span>) of the segment.</p>
|
3,304,800 | <p>Given that <span class="math-container">$x^2-4cx+b^2 \gt 0 \quad \forall x \in \mathbb{R}$</span> and <span class="math-container">$a^2+c^2-ab \lt 0$</span></p>
<p>Then find the Range of <span class="math-container">$$y=\frac{x+a}{x^2+bx+c^2}$$</span></p>
<p>My try:</p>
<p>Since <span class="math-container">$$x^2-4cx+b^2 \gt 0$$</span> we have Discriminant</p>
<p><span class="math-container">$$D \lt 0$$</span> <span class="math-container">$\implies$</span></p>
<p><span class="math-container">$$b^2-4c^2 \gt 0$$</span></p>
<p>Also</p>
<p><span class="math-container">$$x^2+bx+c^2=(x+a)^2+(b-2a)(x+a)+a^2+c^2-ab$$</span></p>
<p>Hence</p>
<p><span class="math-container">$$y=\frac{1}{(x+a)+b-2a+\frac{a^2+c^2-ab}{x+a}}$$</span></p>
<p>But since <span class="math-container">$a^2+c^2-ab \lt 0$</span>, we can't use the AM-GM inequality.</p>
<p>Any way to proceed?</p>
| Certainly not a dog | 691,550 | <p>If a number <span class="math-container">$p$</span> is in the range of the <span class="math-container">$y$</span> you have given, it means <span class="math-container">$\frac{x+a}{x^2+bx+c^2} = p$</span> for some <span class="math-container">$x$</span>.</p>
<p>Or, in other words, that equation has a real solution. </p>
<p>=> <span class="math-container">$px^2+bpx+pc^2 = x+a$</span> For a real value of <span class="math-container">$x$</span></p>
<p>=> the discriminant of <span class="math-container">$px^2+(bp-1)x+pc^2-a$</span> is non-negative. </p>
<p>After some rearrangement, any <span class="math-container">$p$</span> for which the following inequality holds is in the range of <span class="math-container">$y$</span>:</p>
<p>=> <span class="math-container">$p^2(b^2-4c^2) + p(4a-2b) + 1 \geq 0$</span></p>
<p>The discriminant upon expansion turns out to be <span class="math-container">$16(a^2+c^2-ab)$</span> which is given to be strictly negative. As you have noted, the leading coefficient is positive. This means the given expression is always positive so every value of <span class="math-container">$p$</span> satisfies the condition. So, <span class="math-container">$p$</span> takes all real values. This means the range of the given function is <span class="math-container">$(-\infty, \infty)$</span>.</p>
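<p>With concrete numbers satisfying both hypotheses (for instance <span class="math-container">$a=1$</span>, <span class="math-container">$b=3$</span>, <span class="math-container">$c=1$</span>, chosen purely for illustration), this can be verified numerically: for any target <span class="math-container">$p$</span>, the quadratic above has a real root <span class="math-container">$x$</span> with <span class="math-container">$y(x)=p$</span>.</p>

```python
import math

a, b, c = 1, 3, 1
assert b * b - 4 * c * c > 0        # makes x^2 - 4cx + b^2 > 0 for all x
assert a * a + c * c - a * b < 0    # the second hypothesis

def y(x):
    return (x + a) / (x * x + b * x + c * c)

for p in (-1000, -1, 0.001, 7, 1000):
    # y(x) = p  is equivalent to  p*x^2 + (b*p - 1)*x + (p*c^2 - a) = 0
    A, B, C = p, b * p - 1, p * c * c - a
    disc = B * B - 4 * A * C        # equals p^2(b^2-4c^2) + p(4a-2b) + 1
    assert disc > 0                 # since 16(a^2+c^2-ab) < 0
    x = (-B + math.sqrt(disc)) / (2 * A)
    assert math.isclose(y(x), p, rel_tol=1e-9)
```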
|
2,355,246 | <p>The optimisation problem [PROBLEM 1]:
$$\max \sum_{t=T} x_t $$
subject to:
$$(1)\quad x_t = (1+\alpha)\, x_{t-1} \quad \forall\, t \in \{1,\dots,T\}$$
$$t \in \{0,\dots,T\}, \quad x_0 \text{ given}$$
$$\text{e.g. } x_1 = (1+\alpha) x_0$$
$$\text{e.g. } x_2 = (1+\alpha) x_1$$
$$\text{e.g. } x_3 = (1+\alpha) x_2$$ </p>
<p>$$(2)\quad x_t \ge 0 $$</p>
<p>Now, I would like to allow negative values of the variable $x$ and apply a different factor. I made the following changes [PROBLEM 2]:</p>
<p>$$x_t = x_t^+ - x_t^- $$</p>
<p>$$x_t = (1+\alpha) x_{t-1}^+ - (1+\beta) x_{t-1}^- $$</p>
<p>$$x_t^+ , x_t^- \ge 0 $$</p>
<p>$$x_t \ge -10 $$</p>
<p>The solver reports the problem as infeasible (unbounded)! Is there an alternative way to solve this kind of problem, in particular applying a different factor (multiplier) to negative values? The model is more complex than what I wrote, but I wanted to highlight the core issue.</p>
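<p>A sketch of why such a reformulation can be unbounded (with hypothetical numbers, not the actual model): the split $x_t = x_t^+ - x_t^-$ is not unique, and when $\alpha \neq \beta$ the free slack in the split feeds directly into the next value, so a maximizing solver can grow the objective without limit:</p>

```python
# Hypothetical data: the two growth factors differ, as in [PROBLEM 2]
alpha, beta = 0.10, 0.05
x0 = 1.0

def x_next(split):
    # Every split x0 = xp - xm with xp, xm >= 0 encodes the same x0 ...
    xp, xm = x0 + split, split
    return (1 + alpha) * xp - (1 + beta) * xm

# ... but the successor value grows linearly with the split:
# x_next(s) = (1 + alpha) * x0 + s * (alpha - beta)
assert x_next(1000.0) > x_next(10.0) > x_next(0.0)
```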
| paul garrett | 12,291 | <p>There certainly is philosophical content to this question. And we can respond in a fundamental way without using any alleged properties of "real numbers"... by considering vectors $({2n\over n^2+1},{n^2-1\over n^2+1},0,0,\ldots)$ for positive integers $n$.</p>
<p>EDIT: (in light of comments...) For what it's worth, again, this recipe does not use trig functions, and does not use real numbers. On another hand, we can let $n$ range through positive real numbers (or some avatar thereof), if desired. It all depends on what in what context one wants the question answered, obviously. But this "Pythagorean triple" formula is pretty "close to the metal"...</p>
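<p>That these vectors lie exactly on the unit circle follows from $(2n)^2+(n^2-1)^2=(n^2+1)^2$, which can be checked in exact rational arithmetic:</p>

```python
from fractions import Fraction

# (2n/(n^2+1))^2 + ((n^2-1)/(n^2+1))^2 = 1 exactly, for every n
for n in range(1, 50):
    a = Fraction(2 * n, n * n + 1)
    b = Fraction(n * n - 1, n * n + 1)
    assert a * a + b * b == 1
```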
|
440,744 | <p>The vector space dimension of the cohomology group of the <span class="math-container">$2$</span>-plane Grassmannian <span class="math-container">$\mathrm{Gr}_{2,n}$</span> is given by the number of tuples <span class="math-container">$(\lambda_1,\lambda_2)$</span> satisfying
<span class="math-container">$$
n - 2 \geq \lambda_1 \geq \lambda_2 \geq 0.
$$</span>
Explicitly this is given by
<span class="math-container">$$
\binom{n}{2}.
$$</span>
This also happens to be the dimension of <span class="math-container">$V_{\pi_2}$</span> the second fundamental representation of <span class="math-container">$\frak{sl}_n$</span>. I am guessing this is not an accident, especially since the <span class="math-container">$2$</span>-plane Grassmannian corresponds (in the usual way) to <span class="math-container">$V_{\pi_2}$</span>.</p>
<p>Does this extend to the general identity
<span class="math-container">$$
\mathrm{dim}(H^{*}(\mathrm{Gr}_{d,n})) = \mathrm{dim}(V_{\pi_d})?
$$</span>
If it does, then what is a conceptual explanation for this?</p>
<p>EDIT: Since <span class="math-container">$V_{\pi_d}$</span> is isomorphic to the exterior power
<span class="math-container">$$
\Lambda^d(V_{\pi_1})
$$</span>
and <span class="math-container">$V_{\pi_1}$</span> is of dimension of <span class="math-container">$n$</span>, we see that the RHS of the claimed identity is the binomial coefficient
<span class="math-container">$$
\binom{n}{d}.
$$</span>
It follows from the general formula given in this <a href="https://mathoverflow.net/questions/196546/hard-lefschetz-theorem-for-the-flag-manifolds">answer</a> that the LHS is the same binomial coefficient. Thus the identity does indeed extend from <span class="math-container">$2$</span>-planes to <span class="math-container">$d$</span>-planes. So the question is if there is a conceptual reason for this . . .</p>
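<p>The combinatorial side of this count is easy to verify by brute force: partitions fitting in a <span class="math-container">$d \times (n-d)$</span> box are counted exactly by <span class="math-container">$\binom{n}{d}$</span> (a quick check in Python):</p>

```python
from itertools import combinations_with_replacement
from math import comb

def box_partitions(d, n):
    # partitions (l1 >= ... >= ld) with 0 <= li <= n-d, i.e. multisets of
    # size d drawn from {0, 1, ..., n-d}
    return sum(1 for _ in combinations_with_replacement(range(n - d + 1), d))

for n in range(2, 9):
    for d in range(1, n):
        assert box_partitions(d, n) == comb(n, d)
```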
| Oliver | 4,087 | <p>Harry Tamvakis gives an explanation of the relation between Grassmannian cohomology and representations of SL_n in <a href="https://arxiv.org/abs/math/0306414" rel="noreferrer">The connection between representation theory and Schubert calculus</a>.</p>
<p>One fundamental obstacle for explaining this phenomenon is that there is not an analogous relationship between the cohomology rings of other flag varieties (Grassmannian-analogues for other Lie groups) and the representations of those Lie groups. For Tamvakis, the key fact that makes his explanation go is that GL_n is dense in its own Lie algebra, which is not generally true for other Lie groups. (At least this is what I remember; it's been a while since I read the paper carefully.)</p>
|
1,110,122 | <p>Prove $$\large\int_{-\pi}^{\pi}\sin (\sin x) \,dx =0$$ without using the fact that $\sin(x)$ is odd.</p>
<p>Computing this in Wolfram says that it is uncomputable, which leads me to believe that the only way to find this would be methods for solving definite integrals. I am wondering if it is possible with any other techniques such as DUIS or residues?</p>
| Sangchul Lee | 9,340 | <p>Here is a very weird solution. Notice that for $s \in \Bbb{R}$,</p>
<p>$$ \int_{-\pi}^{\pi} \sin(s\sin x) \, dx = \Im \int_{-\pi}^{\pi} \exp(is\sin x) \, dx. $$</p>
<p>Now it follows that</p>
<p>\begin{align*}
\int_{-\pi}^{\pi} \exp(is\sin x) \, dx
&= \int_{-\pi}^{\pi} \exp(se^{ix}/2) \exp(-se^{-ix}/2) \, dx \\
&= \int_{-\pi}^{\pi} \exp(se^{ix}/2) \cdot \overline{\exp(-se^{ix}/2)} \, dx \\
&= 2\pi \sum_{n=0}^{\infty} \left\{ \frac{1}{n!}\left(\frac{s}{2}\right)^{n} \right\}\left\{ \frac{(-1)^{n}}{n!}\left(\frac{s}{2}\right)^{n} \right\} \\
&= 2\pi \sum_{n=0}^{\infty} \frac{(-1)^{n}}{(n!)^{2}}\left(\frac{s}{2}\right)^{2n} \\
&= 2\pi J_{0}(s),
\end{align*}</p>
<p>where $J_{0}$ is the Bessel function of the first kind. Here we only need the fact that $J_{0}(s) \in \Bbb{R}$ if $s \in \Bbb{R}$. Consequently we get</p>
<p>$$ \int_{-\pi}^{\pi} \sin(s\sin x) \, dx = 0 $$</p>
<p>and</p>
<p>$$ \int_{-\pi}^{\pi} \cos(s\sin x) \, dx = 2\pi J_{0}(s). $$</p>
<p>(Of course, now both identities extend to all of $s \in \Bbb{C}$ by the principle of analytic continuation.)</p>
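<p>Both identities can be checked numerically with only the standard library (a sketch; $J_0$ is evaluated by truncating its power series):</p>

```python
import math

def j0(s, terms=40):
    # J_0(s) = sum_{n >= 0} (-1)^n / (n!)^2 * (s/2)^(2n), truncated
    return sum((-1) ** n / math.factorial(n) ** 2 * (s / 2) ** (2 * n)
               for n in range(terms))

def integrate(f, a, b, n=20000):
    # midpoint rule; very accurate here since the integrands are smooth
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

for s in (0.5, 1.7, 3.0):
    sin_int = integrate(lambda x: math.sin(s * math.sin(x)), -math.pi, math.pi)
    cos_int = integrate(lambda x: math.cos(s * math.sin(x)), -math.pi, math.pi)
    assert abs(sin_int) < 1e-8
    assert abs(cos_int - 2 * math.pi * j0(s)) < 1e-8
```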
|
3,552,779 | <p>In a 3D world, given a box <code>B</code>, a pivot <code>P</code>, and a direction vector <code>V</code>, how can I find out how much to rotate at <code>P</code> such that <code>V</code> points towards an arbitrary point <code>A</code>?</p>
<p>Problem source:</p>
<p>I am a software developer who came across the need to rotate an object in the above manner: a 3D model needs to be rotated in this way for the user to interact with it.</p>
<p>Current Attempts:</p>
<p>I tried using an offset between the direction vector and the pivot, and calculating the rotation required between the offset target and the pivot.</p>
<p>However, all my current attempts are done in code, and I left the mathematical calculation to the libraries due to my limited knowledge, which means, to be honest, I am not very clear on what they actually do.</p>
<p>Note:</p>
<ol>
<li><code>B</code> can be of any arbitrary size, </li>
<li><code>P</code> can be anywhere within the box</li>
<li><code>V</code> can be anywhere within the box</li>
<li><code>A</code> can be anywhere in the world</li>
</ol>
<p><a href="https://i.stack.imgur.com/sEGP9.png" rel="nofollow noreferrer">An illustration of what I am aiming for in 2D</a></p>
| Andrew Chin | 693,161 | <p>Suppose you must. Then we have <span class="math-container">$$9\lim_{a\to2^+}\int_a^3\frac1{\sqrt[4]{x-2}}\,dx.$$</span></p>
<p>Let <span class="math-container">$u=x-2$</span>. Then we have <span class="math-container">$$9\lim_{a\to2^+}\int_{a-2}^1 u^{-1/4}\, du.$$</span></p>
<p>By the power rule, we have <span class="math-container">$$9\lim_{a\to2^+} \left.\frac43u^{3/4}\right]_{a-2}^1=9\left[\frac43(1-\lim_{a\to2^+}(a-2)^{3/4})\right]=9\left[\frac43(1-0)\right]=12.$$</span></p>
<p>Notice that it does not make a difference whether you use <span class="math-container">$\lim_{a\to2^+}(a-2)$</span> or <span class="math-container">$0$</span>.</p>
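<p>A quick numerical check of the value <span class="math-container">$12$</span> (an illustrative sketch; the midpoint rule is chosen because its sample points never touch the singularity at <span class="math-container">$x=2$</span>):</p>

```python
def integrand(x):
    return 9 / (x - 2) ** 0.25

n = 200_000
h = 1 / n
# Midpoint rule on [2, 3]: midpoints 2 + (k + 1/2)h stay clear of x = 2.
approx = h * sum(integrand(2 + (k + 0.5) * h) for k in range(n))

print(approx)  # ≈ 12
```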
|
3,552,779 | <p>In a 3D world, given a box <code>B</code>, a pivot <code>P</code>, and a direction vector <code>V</code>, how can I find out how much to rotate at <code>P</code> such that <code>V</code> points towards an arbitrary point <code>A</code>?</p>
<p>Problem source:</p>
<p>I am a software developer who came across the need to rotate an object in the above manner, where a 3D model needs to be rotated in this way for the user to interact with it.</p>
<p>Current Attempts:</p>
<p>I tried using an offset between the direction vector and the pivot, and calculating the rotation required between the offset target and the pivot.</p>
<p>However, all my current attempts were done in code, and I left the mathematical calculations to libraries due to my limited knowledge, which means, to be honest, I am not very clear on what they actually do.</p>
<p>Note:</p>
<ol>
<li><code>B</code> can be of any arbitrary size, </li>
<li><code>P</code> can be anywhere within the box</li>
<li><code>V</code> can be anywhere within the box</li>
<li><code>A</code> can be anywhere in the world</li>
</ol>
<p><a href="https://i.stack.imgur.com/sEGP9.png" rel="nofollow noreferrer">An illustration of what I am aiming for in 2D</a></p>
| Arturo Magidin | 742 | <p>There are certain conditions that must be met for a substitution to be "legal". In most circumstances these conditions are naturally met and so they are not emphasized; you have here one situation in which they are not. </p>
<p>For instance, one set of conditions is given in Anton, Anderson, and Bivens (<em>Calculus, Early Transcendentals</em>, 11th Edition), Theorem 5.9.1:</p>
<blockquote>
<p><strong>Theorem 5.9.1.</strong> If <span class="math-container">$g'(x)$</span> is continuous on <span class="math-container">$[a,b]$</span>, and <span class="math-container">$f$</span> is continuous on an interval containing the values of <span class="math-container">$g(x)$</span> for <span class="math-container">$a\leq x\leq b$</span>, then
<span class="math-container">$$\int_a^b f(g(x))g'(x)\,dx = \int_{g(a)}^{g(b)} f(u)\,du.$$</span></p>
</blockquote>
<p>Here you have <span class="math-container">$g(x) = \sqrt[4]{x-2}$</span>, so <span class="math-container">$g'(x) = \frac{1}{4}(x-2)^{-3/4}$</span>... which is <em>not</em> continuous on the interval <span class="math-container">$[2,3]$</span> that you are working on. </p>
<p>In fact, as you note, the initial integral is improper, which means you aren't <em>really</em> evaluating that integral: you are evaluating a limit,
<span class="math-container">$$\lim_{h\to 2^+}\int_h^3 \frac{9}{\sqrt[4]{x-2}}\,dx.$$</span>
The integral in the limit does satisfy the conditions of the theorem above, so you can make the substitution to get
<span class="math-container">$$\lim_{h\to 2^+}\int_{\sqrt[4]{h-2}}^1 \frac{36u^3}{u}\,du = \lim_{a\to 0^+}\int_a^1 36u^2\,du,$$</span>
and proceed from there. </p>
|
<p>I am generating a list of 1's, 2's, and 3's with different probabilities for each number. I then convert this list into three binarized lists, giving a list of the locations of each digit in the original list. Is there a more efficient way to do this, perhaps by changing the way I make the original list?</p>
<pre><code>list = RandomChoice[{0.5, 0.3, 0.2} -> {1, 2, 3}, 20]
list1 = ReplacePart[ReplacePart[ReplacePart[list, Position[list, 2] -> 0],
Position[list, 3] -> 0], Position[list, 1] -> 1]
list2 = ReplacePart[ReplacePart[ReplacePart[list, Position[list, 1] -> 0],
Position[list, 3] -> 0], Position[list, 2] -> 1]
list3 = ReplacePart[ReplacePart[ReplacePart[list, Position[list, 1] -> 0],
Position[list, 2] -> 0], Position[list, 3] -> 1]
</code></pre>
| kglr | 125 | <p><strong>Update:</strong> On Wolfram Cloud, the timing comparison I get is the opposite of David's claim that <em>"the result is about twice as fast as the generation of the list and the result in the fastest method described by others here"</em>; that is, to my surprise, the <code>BitXor</code>-based method is more than 2x faster than David's method. Using the <code>RandomVariate</code> + <code>EmpiricalDistribution</code> + <code>WeightedData</code> combination instead of <code>RandomChoice</code> improves the timings. Using <code>IdentityMatrix[3]</code> as suggested by @Carl, the timings for David's method improve significantly, but they are still no better than the timings for the <code>BitXor</code>-based methods.</p>
<pre><code>ClearAll[f1,f2,f3,f4]
f1[n_]:= Module[{lst=Developer`ToPackedArray@RandomChoice[{0.5, 0.3, 0.2}->{1, 2, 3}, n]},
BitXor[1, Unitize[BitXor[#, lst] & /@ {1, 2, 3}] ]]
f2[n_]:= Module[{lst=Developer`ToPackedArray[ RandomVariate[
EmpiricalDistribution[WeightedData[Range[3], {.5, .3, .2}]], n]]},
BitXor[1, Unitize[BitXor[#, lst] & /@ {1, 2, 3}] ]]
f3[n_]:= Transpose[Developer`ToPackedArray@RandomChoice[{.3, .2, .5} ->
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}, n]]
f4[n_]:= Transpose[Developer`ToPackedArray@RandomChoice[{.3, .2, .5} ->
IdentityMatrix[3], n]]
funcs = {f1, f2, f3, f4};
timings = Prepend[Table[Prepend[Table[First[RepeatedTiming[funcs[[j]][i]]],
{j, 1, 4}], i], {i, {200000, 10^6}}], {"n", "f1","f2","f3", "f4"}];
Grid[timings, Dividers -> All] // TeXForm
</code></pre>
<blockquote>
<p>$\begin{array}{|c|c|c|c|c|}
\hline
\text{n} & \text{f1} & \text{f2} & \text{f3} & \text{f4} \\
\hline
200000 & 0.0069 & 0.0061 & 0.016 & 0.0063 \\
\hline
1000000 & 0.0359 & 0.032 & 0.0885 & 0.035 \\
\hline
\end{array}$</p>
</blockquote>
<p>(I cannot time larger lists because the required computation exceeds the memory limit of the free cloud accounts.)</p>
<hr>
<p><strong>Original answer:</strong></p>
<pre><code>{l1, l2, l3} = BitXor[1, Unitize[BitXor[#, list] & /@ {1, 2, 3}] ]
TeXForm @ {list, l1, l2, l3}
</code></pre>
<blockquote>
<p>$\left(
\begin{array}{cccccccccccccccccccc}
1 & 1 & 1 & 1 & 1 & 2 & 3 & 1 & 2 & 1 & 1 & 1 & 2 & 3 & 2 & 1 & 1 & 1 & 3 & 2 \\
1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 \\
\end{array}
\right)$</p>
</blockquote>
<p><strong>Timings:</strong> Using Henrik's setup, this is twice as fast as Henrik's method.</p>
<pre><code>list = Developer`ToPackedArray[RandomChoice[{0.5, 0.3, 0.2} -> {1, 2, 3}, 200000]];
RepeatedTiming[{l1, l2, l3} = BitXor[1, Unitize[BitXor[#, list] & /@ {1, 2, 3}] ]][[1]]
</code></pre>
<blockquote>
<p>0.0013</p>
</blockquote>
<pre><code>RepeatedTiming[list1a = Subtract[1, Unitize[Subtract[list, 1]]];
list2a = Subtract[1, Unitize[Subtract[list, 2]]];
list3a = Subtract[1, Unitize[Subtract[list, 3]]];][[1]]
</code></pre>
<blockquote>
<p>0.0025</p>
</blockquote>
<pre><code>RepeatedTiming[list1b = Unitize@Clip[list, {1, 1}, {0, 0}];
list2b = Unitize@Clip[list, {2, 2}, {0, 0}];
list3b = Unitize@Clip[list, {3, 3}, {0, 0}];][[1]]
</code></pre>
<blockquote>
<p>0.0027</p>
</blockquote>
<pre><code>And @@ (Equal @@@
Transpose[{{list1a, list2a, list3a}, {list1b, list2b, list3b}, {l1, l2, l3}}])
</code></pre>
<blockquote>
<p>True</p>
</blockquote>
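<p>For comparison outside Mathematica, the whole task is just a one-hot encoding; here is a plain-Python sketch of the same computation (the variable names and seed are arbitrary choices, not part of the original code):</p>

```python
import random

random.seed(0)
lst = random.choices([1, 2, 3], weights=[0.5, 0.3, 0.2], k=20)

# One indicator list per symbol: 1 where lst holds that symbol, else 0.
list1, list2, list3 = ([1 if v == k else 0 for v in lst] for k in (1, 2, 3))

# Every position is flagged in exactly one of the three lists.
assert all(a + b + c == 1 for a, b, c in zip(list1, list2, list3))
```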
|
1,691,009 | <p>I am struggling with a particular concept, I'll lay it out how I think will best allow for an answer:</p>
<p>Let $A =$ $\Bbb Z$ be the set of all integers and the universe of discourse. </p>
<pre><code>Let B, be the set of even numbers
Let C, be the set of odd numbers
Let D, be the set of positive numbers
Let E, be the set of negative numbers
</code></pre>
<p>Now I would like the ability to be able to express the following Sets in words, to help my understanding of the topic:</p>
<p>A) $(A-B)$</p>
<p>B) $ C \cap D$</p>
<p>C) $\overline{(D \cup B)}$ </p>
<p>Any help to express these sets in 'words' would be great, it has me stumped. </p>
<p>Thanks!</p>
| assefamaru | 262,896 | <p>Your universe of discourse is the set of all integers. One property of integers is that they have a parity of either being even or odd. So if you exclude all even numbers from the set of integers, then you will be left with a set containing all odd numbers.</p>
<p>Hence, $A-B=C$, or using words, <strong>$A-B$ is the set of odd integers</strong>.
$\{...,-5,-3,-1,1,3,5,...\}$</p>
<p>On the other hand, <strong>$C \cap D$ is the set of odd positive integers</strong>, since you are taking the set of odd numbers, and the set of positive numbers, and intersecting them, meaning that you are looking at the numbers that both sets have in common, which are the positive integers that are odd. $\{1,3,5,7,...\}$</p>
<p>For part C, you have $D$, a set of positives, and $B$, a set of evens. If you want a set that contains neither even numbers nor positives, ie. $\overline{D\cup B}$, then your set becomes <strong>the set of odd negative integers</strong>. $\{...,-5,-3,-1\}$</p>
<p>To express the sets in words, simply think about what kind of elements the sets would contain (ie. their property with respect to the universe of discourse), and write that in words.</p>
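<p>The same descriptions can be checked mechanically on a finite stand-in for the universe (an illustrative Python sketch; the range is an arbitrary choice):</p>

```python
A = set(range(-20, 21))                 # finite stand-in for the integers
B = {x for x in A if x % 2 == 0}        # evens
C = {x for x in A if x % 2 != 0}        # odds
D = {x for x in A if x > 0}             # positives

assert A - B == C                              # "the odd integers"
assert C & D == {x for x in C if x > 0}        # "the odd positive integers"
assert A - (D | B) == {x for x in C if x < 0}  # "the odd negative integers"
```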
|
2,636,188 | <p>How would I go about proving:</p>
<blockquote>
<p>For every non-prime $n \in \mathbb N$ there exists $m \in \mathbb Z / n\mathbb Z \setminus \{0\}$ with $m^{n-1} \not\equiv 1 \mod n$.</p>
</blockquote>
<p>I already proved the other way around (straight forward using Euler-Fermat), but am stuck at this point.</p>
| Jose Brox | 146,587 | <p>Pick $m$ a nontrivial divisor of $n$, and let $k:=n/m$. Suppose that $m^{n-1}\equiv 1(\mod n)$. Then $k\equiv k\cdot 1\equiv km^{n-1}\equiv nm^{n-2}\equiv 0(\mod n)$, contradiction, since $0<k<n$.</p>
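<p>The construction in this proof is effective, so it can be checked directly; a Python sketch (the helper name is mine, not standard):</p>

```python
def witness(n):
    """Return m with m^(n-1) not congruent to 1 (mod n) for composite n:
    by the argument above, any nontrivial divisor m of n works."""
    for m in range(2, n):
        if n % m == 0:                    # m is a nontrivial divisor of n
            assert pow(m, n - 1, n) != 1  # guaranteed by the proof
            return m
    raise ValueError("n is prime")

# Works even for Carmichael numbers such as 561, where every m coprime
# to n satisfies m^(n-1) = 1 (mod n) and so fails to be a witness.
for n in [4, 6, 9, 15, 21, 561]:
    print(n, witness(n))
```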
|
626,837 | <p>If $a+b+c+d = 30$ and $a,b,c,d$ lie between $0$ and $9$. How to find number of solutions of this equation.</p>
| Blue7 | 118,890 | <p>The answers are given here: <a href="http://www.wolframalpha.com/input/?i=0%3Ca%3C9,%200%3Cb%3C9,%200%3Cc%3C9,%200%3Cd%3C9,%20a%2bb%2bc%2bd%20=%2030" rel="nofollow">http://www.wolframalpha.com/input/?i=0%3Ca%3C9%2C+0%3Cb%3C9%2C+0%3Cc%3C9%2C+0%3Cd%3C9%2C+a%2Bb%2Bc%2Bd+%3D+30</a></p>
<p>However, I do not know how these were calculated, sorry.</p>
<p>Hopefully this will help give you a start to solving your problem. </p>
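<p>Since the ranges are tiny, the count can simply be brute-forced; reading "between 0 and 9" inclusively gives one answer, while the strict reading used in the linked query gives another (an illustrative sketch):</p>

```python
from itertools import product

# "Between 0 and 9" read inclusively, i.e. each variable in 0..9:
inclusive = sum(1 for t in product(range(10), repeat=4) if sum(t) == 30)

# Read strictly, i.e. each variable in 1..8 (as in the Wolfram|Alpha link):
strict = sum(1 for t in product(range(1, 9), repeat=4) if sum(t) == 30)

print(inclusive, strict)  # 84 10
```

<p>The inclusive value also follows from stars and bars: replacing each variable <span class="math-container">$x$</span> by <span class="math-container">$9-x$</span> turns the count into the number of solutions with sum <span class="math-container">$6$</span>, which is <span class="math-container">$\binom{9}{3}=84$</span>.</p>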
|
3,583,600 | <blockquote>
<p>Define a branch for <span class="math-container">$\sqrt{1+\sqrt{z}}$</span> and show it is analytic.</p>
</blockquote>
<p>I defined a branch <span class="math-container">$(-\pi, \pi)$</span>, and so that means the function <span class="math-container">$\sqrt{1+\sqrt{z}}$</span> is analytic on <span class="math-container">$\mathbb{C}\setminus \left\{y=0,x\leq 0\right\}$</span>.</p>
<p>I am trying to analyze when such a function is in the deleted area. Already from <span class="math-container">$\sqrt{z}$</span>, <span class="math-container">$y$</span> must be zero. It remains to consider the case when <span class="math-container">$x\leq 0$</span>. Since there's an added <span class="math-container">$+1$</span> inside the square root, does this change <span class="math-container">$x\leq 0$</span> to <span class="math-container">$x\leq -1$</span>? And so the analytic domain is <span class="math-container">$\mathbb{C}\setminus \left\{y=0,x\leq -1\right\}$</span>?</p>
| Carlos Obiglio | 761,132 | <p><span class="math-container">$ {w = \sqrt{1 + \sqrt{z}},( z \; \epsilon \; C) \mapsto circle \; |w|\measuredangle\phi,\ by \ Moivre \ theorem \ to \ determine \ the\ complex\ roots \ it \ follows \ that: \ w^n \ = \ r^n \ ( cos(\ n \phi )\ + sin(\ n \phi) )
\ let's \; define \; this \; circle \; as \; f(z).} $</span>
<span class="math-container">$ { A \;positive \ branch \; is \; defined \; in \ the \; domain \; 0\le\phi< 2\pi \; , ( or \; restricting \; z,) \Rightarrow hence \;} z0 \ne \ z1 \; is \; fulfilled \; that\; f(z0) \; \ne \; f(z1), \; hence \; f(z) \; is \; analityc. $</span> }</p>
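<p>The question can also be probed numerically with Python's <code>cmath</code>, which uses the principal branch (cut on the negative real axis) for each square root. This is a sketch, not a proof, but it supports keeping the cut at <span class="math-container">$x\le 0$</span> rather than moving it to <span class="math-container">$x\le -1$</span>: the principal <span class="math-container">$\sqrt{z}$</span> has nonnegative real part, so <span class="math-container">$1+\sqrt{z}$</span> has real part at least <span class="math-container">$1$</span> and never meets the cut of the outer root.</p>

```python
import cmath
import random

def f(z):
    # Principal branch of each square root.
    return cmath.sqrt(1 + cmath.sqrt(z))

# The inner sqrt makes f jump across (-inf, 0]:
above, below = f(-1 + 1e-12j), f(-1 - 1e-12j)
print(abs(above - below))  # ≈ 0.91: a genuine discontinuity at x = -1

# But 1 + sqrt(z) stays in {Re >= 1}, away from the outer root's cut:
random.seed(1)
for _ in range(1000):
    z = complex(random.uniform(-10, 10), random.uniform(-10, 10))
    assert (1 + cmath.sqrt(z)).real >= 1 - 1e-12
```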
|
<p>I need to prove that $\log(x)^{10} < x$ for $x>10^{10}$. It's pretty clearly true to me, but I need a good proof of it. I tried induction, and got stuck there.</p>
| mm8511 | 180,904 | <p>Assuming this is $\log$ base $10$, you know that $\log_{10}(10^n)^{10}=n^{10}$ (this follows from the identity $\log_{a}(a^{k})=k$), so it seems you would have to verify that $10^{n}>n^{10}$ for $n>10$. I am not sure that this makes it easier for you, but at least the log is gone. You can then show that $n^{10}$ grows more slowly than $10^n$, for instance by comparing their derivatives or logarithms (my rep is not high enough to leave this as a comment). </p>
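<p>A quick brute-force check of the reduced inequality (illustration only, not a proof for all <span class="math-container">$n$</span>): there is equality at <span class="math-container">$n=10$</span>, and the ratio <span class="math-container">$10^n/n^{10}$</span> increases over the range checked, consistent with the growth comparison above.</p>

```python
prev_ratio = 0.0
for n in range(11, 100):
    assert 10 ** n > n ** 10      # the reduced inequality
    ratio = 10 ** n / n ** 10
    assert ratio > prev_ratio     # the ratio keeps growing on this range
    prev_ratio = ratio
```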
|
123,502 | <p>Suppose $A \subseteq \{1,\dots,n\}$ does not contain any arithmetic progressions of length $k+1$. What is the largest number of $k$-term arithmetic progressions that $A$ can have? (one may also wish to put some lower or upper on the size of $A$) We can work over $\mathbb{Z}_p$ if it makes the answer any easier. The "degenerate" case $k=2$ asks for the largest size of the set without arithmetic progressions and it is known that there exist $A$'s with this property of almost linear size.</p>
| Timothy Chow | 3,106 | <p>I'd recommend asking for input from professors who have a track record of supervising successful undergraduate research, such as <a href="http://www.d.umn.edu/~jgallian/tanglewood.html" rel="nofollow noreferrer">Joseph Gallian</a> or the people who run the <a href="http://www.math.osu.edu/ross/" rel="nofollow noreferrer">Ross Program</a>.</p>
<p>Frank Morgan also has some <a href="http://sites.williams.edu/Morgan/2011/12/19/elements-of-good-nsf-reu-proposal/" rel="nofollow noreferrer">good advice</a> for planning the nitty-gritty details of an REU program.</p>
<p>One general comment that I have, which is somewhat specific to mathematics, is that it is important for the advisor not to underestimate the potential for original undergraduate research. In your article, you wrote, "Only in a very rare circumstances exceptionally talented undergraduate students guided by some of the world's best research mathematicians can produce a genuine results." However, Gallian would be the first to admit that he is not a top research mathematician, yet his program has produced some of the world's best undergraduate mathematics research. And even Gallian admits that he has sometimes underestimated the students in his program. In my experience, professors are far more likely to create failure by expecting failure than they are to give students an exaggerated view of their own abilities. Mathematics research is difficult, but there's no reason to make it even more difficult by creating an atmosphere where failure is the expected norm.</p>
<p>See also <a href="https://mathoverflow.net/questions/45802/undergraduate-math-research">this MO question</a>.</p>
|
158,944 | <p>Is there any type of function that when graphed would show a curve which intersects the x axis multiple times, with each point being an arbitrary distance from the last?</p>
<p>I mean, not like a trig function where each intersect is the same distance from the last. (2,0); (4,0); (6,0); (8,0). And not like a spiral where the distance gets bigger and bigger (or smaller) (2,0); (4,0); (8,0); (16,0);</p>
<p>But for example, some curve which intersects x-axis at (2,0); (6,0); (14,0); (15,0); (20,0); (122,0)...</p>
<p>Does that type function exist?</p>
<p>If so, is it possible to solve/get the equation, given only those intersect points?</p>
<p>I wouldn't need the exact equation of any particular curve. Just the equation of any curve that happens to intersect x-axis at whatever given arbitrary x values. Is that at least that possible to do?</p>
| André Nicolas | 6,312 | <p>If you specify an arbitrary <strong>finite</strong> set of points $\{a_1, a_2, a_3, \dots, a_n\}$, the polynomial function
$$f(x)=(x-a_1)(x-a_2)(x-a_3) \cdots (x-a_n)$$
has the desired property.</p>
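<p>The construction is easy to carry out programmatically; a small Python sketch using the intercepts from the question (the helper name is mine):</p>

```python
def poly_with_roots(roots):
    """Return f(x) = (x - a_1)(x - a_2)...(x - a_n)."""
    def f(x):
        out = 1.0
        for a in roots:
            out *= x - a
        return out
    return f

roots = [2, 6, 14, 15, 20, 122]
f = poly_with_roots(roots)

assert all(f(a) == 0 for a in roots)  # x-intercepts exactly at the a_i
assert f(10) != 0                     # nonzero at a sample point between roots
```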
|
158,944 | <p>Is there any type of function that when graphed would show a curve which intersects the x axis multiple times, with each point being an arbitrary distance from the last?</p>
<p>I mean, not like a trig function where each intersect is the same distance from the last. (2,0); (4,0); (6,0); (8,0). And not like a spiral where the distance gets bigger and bigger (or smaller) (2,0); (4,0); (8,0); (16,0);</p>
<p>But for example, some curve which intersects x-axis at (2,0); (6,0); (14,0); (15,0); (20,0); (122,0)...</p>
<p>Does that type function exist?</p>
<p>If so, is it possible to solve/get the equation, given only those intersect points?</p>
<p>I wouldn't need the exact equation of any particular curve. Just the equation of any curve that happens to intersect x-axis at whatever given arbitrary x values. Is that at least that possible to do?</p>
| Alex Becker | 8,173 | <p>If I'm interpreting your question correctly, you want to know if, given an increasing sequence of points $(a_n)$ in $\mathbb R$, we can define a continuous function $f:\mathbb R\to\mathbb R$ such that $f(x)=0$ if and only if $x=a_i$ for some $i$ (that is, the graph of $f$ intersects the $x$ axis precisely at the points $(a_i,0)$). The answer is yes, in fact there are infinitely many curves which will suffice. They might be hard to describe however. The simplest example I can think of is
$$f(x)=(-1)^i\sin\left(\frac{(x-a_i)\pi}{a_{i+1}-a_i}\right)\text{ for } a_i\leq x\leq a_{i+1}$$
and $f(x)=0$ otherwise.
This is continuous but not differentiable, and has the advantage of actually <em>crossing</em> the $x$-axis at each $a_i$ rather than just touching it.</p>
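<p>A sketch of this piecewise construction in Python (the point list is an arbitrary example and <code>make_f</code> is a hypothetical helper):</p>

```python
import math
from bisect import bisect_right

def make_f(a):
    """On [a_i, a_{i+1}]: (-1)^i sin((x - a_i) pi / (a_{i+1} - a_i));
    zero outside [a_0, a_{n-1}]."""
    def f(x):
        if x <= a[0] or x >= a[-1]:
            return 0.0
        i = bisect_right(a, x) - 1
        return (-1) ** i * math.sin((x - a[i]) * math.pi / (a[i + 1] - a[i]))
    return f

a = [2, 6, 14, 15, 20, 122]
f = make_f(a)

assert all(f(x) == 0 for x in a)           # zero at every a_i
assert all(f((a[i] + a[i + 1]) / 2) != 0   # nonzero between consecutive a_i
           for i in range(len(a) - 1))
```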
|
4,370,989 | <p>I have looked in some of my old books and found an exericse that I do not know how to solve. It seems pretty simple though.</p>
<p>The question is as follows:</p>
<pre><code>Which of these integers are prime?:
111
111.1
111.111
111.111.11
111.111.111
111.111.111.1
111.111.111.111
</code></pre>
<p>I remember one rule of thumb saying that if the sum of the digits mod 9 were 0, 3, or 6, then you could know for sure that the number was NOT prime.</p>
<p>But for numbers such as this, is there any way of checking whether the number is actually prime?</p>
| Theo Bendit | 248,286 | <p>Just to simplify the problem a little, let's consider the free <em>monoid</em> on an alphabet <span class="math-container">$A$</span> (a monoid is a set with an associative operation and an identity, but not necessarily any inverses). The free group works similarly, but there's a little more complexity hidden there due to the presence of inverses.<span class="math-container">$^1$</span></p>
<p>The free monoid on <span class="math-container">$A$</span> is the set of finite-length strings we can form taking symbols from <span class="math-container">$A$</span>, equipped with the concatenation operation. This operation takes two strings, and appends the symbols from the second string (in order) onto the end of the first string, to form a new string. The length of the new string is the sum of the lengths of the old strings.</p>
<p>For example, if <span class="math-container">$A = \{a, b, c\}$</span>, then <span class="math-container">$bcbaab$</span> would be one string in the free monoid, and <span class="math-container">$cca$</span> would be another string. If <span class="math-container">$*$</span> is our concatenation operation, then
<span class="math-container">$$bcbaab * cca = bcbaabcca, \quad cca * bcbaab = ccabcbaab.$$</span>
This operation is associative, and the empty string is the identity, so we do indeed get a monoid. As the above example shows, this operation is not commutative.</p>
<p>While it's true that, for example <span class="math-container">$cca = c * c * a$</span>, it's important to realise that the meaning of <span class="math-container">$cca$</span> is not literally <span class="math-container">$c * c * a$</span>. It is a string. If you like, it is a function from <span class="math-container">$\{1, 2, 3\}$</span> to <span class="math-container">$A = \{a, b, c\}$</span>, or an ordered <span class="math-container">$3$</span>-tuple <span class="math-container">$(c, c, a)$</span>, or however you want to encode strings as sets.</p>
<p>We can't just define strings like <span class="math-container">$cca$</span> to be <span class="math-container">$c * c * a$</span>, because this would be a circular definition of the concatenation operation <span class="math-container">$*$</span>. We cannot define an operation like <span class="math-container">$*$</span> without a set of things it acts upon. If this set of things already uses <span class="math-container">$*$</span> in its definition, then this makes the definition circular. We have to have a separate notion of a string before we can begin to talk about how to combine them.</p>
<p>So, with this in mind, yes, there is an operation here, and yes, it's nothing but adding new words to old ones.</p>
<hr />
<p><span class="math-container">$^1$</span> For your information, the free group differs from the free monoid in a couple of ways. First, the alphabet <span class="math-container">$A$</span> is "doubled", as we must include inverses for each symbol. So, we form strings not just from, say, <span class="math-container">$\{a, b, c\}$</span>, but from <span class="math-container">$\{a, b, c, a^{-1}, b^{-1}, c^{-1}\}$</span>. We also need to form an equivalence relation between "equivalent" words. For example, <span class="math-container">$abcc^{-1}$</span> should be considered the same as <span class="math-container">$ab$</span>. For this reason, the elements of the free group are not just strings, but equivalence classes of strings.</p>
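<p>Python strings give a concrete model of the above: strings over an alphabet under concatenation behave exactly like the free monoid on that alphabet (a quick illustrative check using the examples from the text):</p>

```python
a, b = "bcbaab", "cca"

assert a + b == "bcbaabcca"            # the example from the text
assert b + a == "ccabcbaab"            # order matters: not commutative
assert (a + b) + "x" == a + (b + "x")  # concatenation is associative
assert "" + a == a + "" == a           # the empty string is the identity
assert len(a + b) == len(a) + len(b)   # lengths add
```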
|
2,447,850 | <p>So I have to prove 2 things:</p>
<ol>
<li><p>That $\lim\limits_{n \rightarrow \infty}\frac{x^n}{n!} = 0$ where $n \in \mathbb N$ and $x \in \mathbb R, x>0$. </p></li>
<li><p>That $\lim\limits_{n \rightarrow \infty}\frac{x^n}{n!} = 0$ where $n \in \mathbb N$ and $x \in \mathbb R$. </p></li>
</ol>
<p>For #1, I know that $\frac{x^n}{n!} >0$, which means that I can find an upper bound and use squeeze theorem. For #2, I have no idea where to start.</p>
| Mark Viola | 218,419 | <p>Note that $n!\ge (n/2)^{n/2}$. Then, we have</p>
<p>$$\left|\frac{x^n}{n!}\right|\le \frac{|x|^n}{\left(\sqrt{n/2}\right)^n}=\left(\frac{\sqrt{2}\,|x|}{\sqrt{n}}\right)^n$$</p>
<p>Can you conclude now?</p>
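<p>A numerical illustration of both the factorial bound and the conclusion (a sketch; the particular <span class="math-container">$x$</span> and ranges are arbitrary). The bound is checked on a log scale to avoid floating-point overflow:</p>

```python
import math

# n! >= (n/2)^(n/2), compared via logarithms: lgamma(n+1) = ln(n!).
for n in [10, 100, 1000]:
    assert math.lgamma(n + 1) >= (n / 2) * math.log(n / 2)

# The sequence x^n/n! collapses even for large x:
x, term = 100.0, 1.0
for n in range(1, 500):
    term *= x / n          # term is now x^n / n!
print(term)                # vanishingly small
```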
|
3,741,122 | <p>Recently I've tried to find the difference between partial differentiation and total differentiation.
I've heard the total derivative is defined on single-variable functions, while the partial derivative, by contrast, is defined on multivariate functions.
My problem is that total differentiation is used on multivariate functions all the time.</p>
<p>Every time I come up with a rigorous definition I arrive at a contradiction.
I will share what I have defined so far, and hopefully you can enlighten me.</p>
<p>Let</p>
<p><span class="math-container">$$f: (x_1, ... , x_n) \rightarrow f(x_1, ..., x_n)$$</span></p>
<p>and its partial derivative by the difference quotient</p>
<p><span class="math-container">$$\frac{\partial f}{\partial x_i} = \lim_{h \to 0} \frac{f(x_1,..,x_i+h,...x_n)- f(x_1,..., x_n)}{h}$$</span></p>
<p>the total derivative must by contrast account for interdependence between <span class="math-container">$x_k$</span> in the domain of f.</p>
<p><span class="math-container">$$\frac{df}{dx_i}\stackrel{?}{=} \sum_k{\frac{\partial f}{\partial x_k} \frac{\partial x_k}{\partial x_i}}$$</span></p>
<p>This seemed sensible to me, until I realized it simplified to</p>
<p><span class="math-container">$$n \frac{\partial f}{\partial x_i}$$</span></p>
<p>which definitely isn't right.</p>
<p>Can someone tell me where I've made an error? Or provide better definition? This issue really annoys me, since all my research so far didn't answer this question at all.</p>
<p>Edit:
Ok thank you for all the responses!
I'm just writing out the final formula for total derivatives for quick lookup now:
<span class="math-container">$\frac{d}{d x_i}$</span> is defined recursively as
<span class="math-container">$$\frac{df}{dx_i}\stackrel{!}{=} \sum_k{\frac{\partial f}{\partial x_k} \frac{d x_k}{d x_i}}$$</span></p>
<p>until <span class="math-container">$x_k$</span> has a domain without interdependence, in which case <span class="math-container">$\frac{\partial x_j}{\partial x_i}$</span> = <span class="math-container">$\frac{d x_j}{d x_i}$</span> and the entire expression can be calculated by limits.</p>
| Community | -1 | <p>The difficulty comes from the fact that with function of several variables, there can be dependencies between the variables (which is not possible in the univariate case).</p>
<p>Consider the function <span class="math-container">$f(x,y):=x+y$</span>.</p>
<p>The <em>partial derivative</em> on <span class="math-container">$x$</span> is what you get by varying <span class="math-container">$x$</span> and freezing the other variables, i.e.</p>
<p><span class="math-container">$$\frac{\partial f(x,y)}{\partial x}=f_x(x,y)=\frac{d(x+y)}{dx}=1.$$</span></p>
<p>Now consider the dependency <span class="math-container">$y=x^2$</span>.</p>
<p>We still have the partial derivative on <span class="math-container">$x$</span></p>
<p><span class="math-container">$$\frac{\partial f(x,x^2)}{\partial x}=f_x(x,x^2)=1$$</span> but this is no more</p>
<p><span class="math-container">$$\frac{d(x+y)}{dx}=\frac{d(x+x^2)}{dx}=1+2x.$$</span></p>
<p>The latter is the <em>total</em> derivative as it accounts for the indirect variations of <span class="math-container">$f$</span> due to the variations of the other arguments induced by <span class="math-container">$x$</span>.</p>
<p>The rule is now</p>
<p><span class="math-container">$$\frac{df(x,y)}{dx}=\frac{\partial f(x,y)}{\partial x}+\frac{\partial f(x,y)}{\partial y}\frac{dy}{dx}=f_x(x,x^2)+f_y(x,x^2)\frac{d(x^2)}{dx}.$$</span></p>
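<p>The distinction is easy to see numerically with finite differences (a Python sketch of the example above; the step size and base point are arbitrary choices):</p>

```python
def f(x, y):
    return x + y

h, x0 = 1e-6, 1.0

# Partial derivative: vary x, freeze y at x0^2.
partial = (f(x0 + h, x0 ** 2) - f(x0 - h, x0 ** 2)) / (2 * h)

# Total derivative: vary x and let y = x^2 follow the dependency.
total = (f(x0 + h, (x0 + h) ** 2) - f(x0 - h, (x0 - h) ** 2)) / (2 * h)

print(round(partial, 6), round(total, 6))  # 1.0 3.0, i.e. 1 and 1 + 2*x0
```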
|
4,449,164 | <p>My question stems from page 2 of <a href="https://www.semanticscholar.org/paper/Stability-and-positive-supermartingales-Bucy/31bbbfc842b5717ead8cd7e997a5117fe7a66373" rel="nofollow noreferrer">this paper by Bucy</a>, which states:</p>
<blockquote>
<p>[A random variable <span class="math-container">$x$</span>] is almost everywhere constant a.e.
<span class="math-container">$P$</span>.</p>
</blockquote>
<p>where <span class="math-container">$P$</span> is a probability measure. My interpretation of this is as follows (where I consider <span class="math-container">$x$</span> to be real-valued).</p>
<h4>Interpretation</h4>
<p>Given the probability space <span class="math-container">$(\Omega ,\mathcal F, P)$</span> and some constant <span class="math-container">$c\in \mathbb R$</span>, <span class="math-container">$x:\Omega \rightarrow \mathbb R$</span> is a random variable of the following form:</p>
<p><span class="math-container">\begin{align}
x(\omega) = \begin{cases}
c \qquad&\omega \in A \\
g(\omega) &\omega \in A^c
\end{cases}
\end{align}</span>
where <span class="math-container">$A\in \mathcal F$</span> is such that <span class="math-container">$P(A^c)=0$</span> and <span class="math-container">$g$</span> is an arbitrary real-valued function. The sets <span class="math-container">$A$</span> satisfying the foregoing condition capture what we mean by "almost everywhere" w.r.t. the measure P.</p>
<h4>Questions</h4>
<ol>
<li><p>Is the above interpretation correct?</p>
</li>
<li><p>Is this to be thought of as a general form of what one might call a 'degenerate' random variable?</p>
</li>
<li><p>If (2) is yes, then is there some intuition for why such a definition might be desirable? As opposed to defining a degenerate r.v. as the constant function <span class="math-container">$\tilde x : \Omega \rightarrow \{c\}$</span>.</p>
</li>
</ol>
| Alex Ortiz | 305,215 | <ol>
<li><p>The interpretation is correct, although I would add for completeness the function <span class="math-container">$g$</span> should be measurable, though this is often understood even without specifying it. A more succinct way to say the same thing is that <span class="math-container">$X$</span> is constant <span class="math-container">$P$</span>-a.e. if for every <span class="math-container">$c\in \mathbb R$</span>, <span class="math-container">$P(X = c)\in\{0,1\}$</span>.</p>
</li>
<li><p>It is certainly not a "random" r.v. in the colloquial sense of the word. A common word that is used to describe such r.v.'s is <em>deterministic</em> random variables.</p>
</li>
<li><p>In measure theory and probability theory, it is quite common to consider measurable functions (or random variables) as only being defined up to sets of measure zero, and to work instead with equivalence classes of (measurable) functions that are equal to one another almost everywhere. I think in practice, it's harmless to assume that a random variable which is constant almost everywhere is in fact constant <em>everywhere</em>, though I can't imagine a situation where it makes anything clearer. Indeed, the sample space (domain) usually plays a minor role relative to the distribution or law of a random variable we are considering (i.e., the numbers <span class="math-container">$P(X\in A)$</span>, as <span class="math-container">$A$</span> ranges over the measurable sets in the codomain of the r.v.).</p>
</li>
</ol>
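<p>A small simulation makes point 3 concrete (an illustrative sketch, not a rigorous statement): modify a constant random variable on a single point of <span class="math-container">$[0,1)$</span>, a set of probability zero under the uniform measure, and sampling will, with probability one, never notice the difference.</p>

```python
import random

C = 5.0

def X(omega):
    # Constant except on {0.5}, a set of probability zero under the
    # uniform measure on [0, 1).
    return 999.0 if omega == 0.5 else C

random.seed(42)
samples = [X(random.random()) for _ in range(100_000)]

assert all(s == C for s in samples)  # the exceptional set is never seen
```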
|
99,786 | <p>Suppose I want to run several times of the same code containing <code>RandomReal</code> function, but I don't initialise the random seed. What will happen?</p>
<p>I tried, but each time the <code>RandomReal[]</code> return different values, does it mean that Mathematica using some kind of default method to initialise the random number generator even without <code>SeedRandom[]</code>?</p>
| e.doroskevic | 18,696 | <p><strong>From the documentation:</strong></p>
<blockquote>
<p>You can use SeedRandom[n] to make sure you get the same sequence of
pseudorandom numbers on different occasions.</p>
</blockquote>
<p><strong>Reference</strong></p>
<p><a href="https://reference.wolfram.com/language/ref/SeedRandom.html" rel="nofollow">SeedRandom</a></p>
|
99,786 | <p>Suppose I want to run several times of the same code containing <code>RandomReal</code> function, but I don't initialise the random seed. What will happen?</p>
<p>I tried, but each time the <code>RandomReal[]</code> return different values, does it mean that Mathematica using some kind of default method to initialise the random number generator even without <code>SeedRandom[]</code>?</p>
| Szabolcs | 12 | <p>Let's think about when the answer to this question may be relevant. You know that if using the same seed on two different occasions, the RNG will generate the same sequence of numbers. So the relevant questions are:</p>
<ol>
<li><p>Does Mathematica use the same seed after startup every time?</p>
<p><strong>No.</strong> This is easy to test.</p></li>
<li><p>Many programs take the seed from the system time. Also, the <code>SeedRandom</code> documentation says that "<code>SeedRandom[]</code> resets the generator, using as a seed the time of day and certain attributes of the current Wolfram System session." Is this what Mathematica does on startup?</p>
<p><strong>Yes.</strong> On OS X I ran the following in a terminal:</p>
<pre><code>math -run Print@RandomReal[] & math -run Print@RandomReal[]
</code></pre>
<p>That is, I started two kernels at the same time and checked if they generate the same sequence of random numbers. They do.</p></li>
<li><p>If the seed is taken from the system time, what about parallelization? What if I start up all parallel kernels at the same time? Will they all use the same seed?</p>
<p><strong>No.</strong> The Parallel Tools package takes care of this. Each kernel will use a different seed. You can easily test this using <code>ParallelTable[RandomReal[], {$ProcessorCount}]</code>. You'll get a list of different numbers.</p></li>
<li><p>Okay, but what if instead of using in-Mathematica parallelization, I manually run several Mathematica processes in parallel, say on a HPC cluster?</p>
<p>Modern operating systems come with <a href="https://en.wikipedia.org/wiki//dev/random">a nondeterministic random number generator</a>. This gives truly unpredictable random numbers using entropy sources such as keystrokes and network traffic, and is used for cryptographical purposes. You can take the seed from there. Here's a function I used for this in the past. It's only for Unix-like systems (OS X and Linux):</p>
<pre><code>(* Nondeterministic random numbers from the OS *)
NondeterministicRandomInteger::usage = "NondeterministicRandomInteger[] returns an unpredictable random integer between 0..2^32-1.";
NondeterministicRandomInteger::notsupp = "Not supported on ``";
Switch[$OperatingSystem,
"MacOSX"|"Unix",
NondeterministicRandomInteger[] :=
AbortProtect@Module[{stream, res},
stream = OpenRead["!head -c 4 /dev/random", BinaryFormat -> True];
res = BinaryRead[stream, "UnsignedInteger32"];
Close[stream];
res
],
_,
NondeterministicRandomInteger[] := (Message[NondeterministicRandomInteger::notsupp, $OperatingSystem]; $Failed)
]
</code></pre>
<p>Use this only for seeding: <code>SeedRandom@NondeterministicRandomInteger[]</code>.</p>
<p>If I did this today, I would use something like <a href="http://www.boost.org/doc/libs/1_57_0/doc/html/boost/random/random_device.html">boost::random_device</a> through LibraryLink for a better cross platform solution that also works on Windows.</p></li>
</ol>
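<p>For comparison, the same idea (seeding a PRNG from the operating system's entropy pool) can be sketched in Python as an analogy; this is not Mathematica code, and <code>os.urandom</code> plays the role of reading 4 bytes from <code>/dev/random</code>:</p>

```python
import os
import random

# Take 4 bytes of OS entropy, the analogue of `head -c 4 /dev/random`
# above; os.urandom is cross-platform, so this also works on Windows.
seed = int.from_bytes(os.urandom(4), "big")   # an integer in 0 .. 2**32 - 1
random.seed(seed)                             # analogue of SeedRandom[seed]
print(random.random())
```

<p>As in the Mathematica version, use the entropy only for seeding; the subsequent stream still comes from the ordinary fast, deterministic generator.</p>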
|
1,019,408 | <p>Usually, $f$ denotes a function, $f(x)$ is an image of $x$ under $f$. But what's $f(X)$ if $X$ is a set? </p>
<p>edit: Please, disregard the body of this question. I had to put something here to be able to post the question. </p>
| Paul | 17,980 | <p>It is $f(X)=\{f(x): x\in X\}.$</p>
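<p>In code, the image of a set is just a set comprehension; a minimal Python illustration (the particular function and set here are arbitrary examples):</p>

```python
def image(f, X):
    # f(X) = { f(x) : x in X }
    return {f(x) for x in X}

print(image(lambda x: x * x, {-2, -1, 0, 1, 2}))   # the set {0, 1, 4}
```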
|
516,244 | <p>My professor gave us this example on her notes:</p>
<p>$$\sum_{n = 1}^\infty \left(\frac{3}{n(n+3)}+\frac{1}{2^n}\right)$$</p>
<p>So I know we're supposed to find the partial fraction, which ends up being</p>
<p>$$\left(\frac{3}{n(n+3)}=\frac{A}{n}+\frac{B}{n+3}=
\frac{1}{n}-\frac{1}{n+3}\right)$$</p>
<p>So based on how she did the other examples, I would expect her to do:</p>
<p>$$\sum_{n = 1}^\infty \left(\frac{3}{n(n+3)}=\frac{1}{1}-\frac{1}{4}+\frac{1}{2}-\frac{1}{5}\right.....)$$, because I'd be plugging in numbers for n starting with n=1. However, she instead did the following:</p>
<p>$$\sum_{n = 1}^\infty \left(\frac{3}{n(n+3)}=\frac{1}{n}-\frac{1}{n+1}+\frac{1}{n+1}-\frac{1}{n+2}+\frac{1}{n+2}-\frac{1}{n+3}\right)$$,</p>
<p>which would definitely be a lot more helpful in helping cancel out terms like you're supposed to when doing telescoping series, BUT I don't know why she's doing this. I thought we were supposed to plug in values from n and that's what should be increasing each time, but instead the number being added to n is the one going up and I have no clue why. I don't think I'm asking this question in the best way possible, but I'm kinda confusing myself because she did other examples and they feel nothing like this and I'm just starting to learn all this, so can somebody please give me some insight as to what is going on?</p>
<p>(and I know I'm supposed to also deal with the sum of the $$\frac{1}{2^n}$$ term, but I'm kinda ignoring it for now since I don't even know what's going on with the first one)</p>
| sds | 37,092 | <p>What you are looking for is called the <a href="http://en.wikipedia.org/wiki/Shoelace_formula" rel="noreferrer">shoelace formula</a>:</p>
<p><span class="math-container">\begin{align*}
\text{Area}
&= \frac12 \big| (x_A - x_C) (y_B - y_A) - (x_A - x_B) (y_C - y_A) \big|\\
&= \frac12 \big| x_A y_B + x_B y_C + x_C y_A - x_A y_C - x_C y_B - x_B y_A \big|\\
&= \frac12 \Big|\det \begin{bmatrix}
x_A & x_B & x_C \\
y_A & y_B & y_C \\
1 & 1 & 1
\end{bmatrix}\Big|
\end{align*}</span></p>
<p>The last version tells you how to generalize the formula to higher dimensions.</p>
<p><strong>PS.</strong> Another generalization of the formula is obtained by noting that it follows from a discrete version of the <a href="https://en.wikipedia.org/wiki/Green%27s_theorem" rel="noreferrer">Green's theorem</a>:</p>
<p><span class="math-container">$$ \text{Area} = \iint_{\text{domain}}dx\,dy = \frac12\oint_{\text{boundary}}x\,dy - y\,dx $$</span></p>
<p>Thus the signed (oriented) area of a polygon with <span class="math-container">$n$</span> vertexes <span class="math-container">$(x_i,y_i)$</span> is</p>
<p><span class="math-container">$$ \frac12\sum_{i=0}^{n-1} x_i y_{i+1} - x_{i+1} y_i $$</span></p>
<p>where indexes are added modulo <span class="math-container">$n$</span>.</p>
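<p>As a quick numerical sanity check, the signed-area sum above translates directly into Python (an illustrative sketch, with indices taken modulo <span class="math-container">$n$</span> exactly as in the formula):</p>

```python
def shoelace_area(pts):
    # 1/2 * | sum_i (x_i * y_{i+1} - x_{i+1} * y_i) |, indices mod n
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

# Right triangle with legs 3 and 4: area should be 6
print(shoelace_area([(0, 0), (3, 0), (0, 4)]))   # 6.0
```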
|
749,849 | <p>Question:</p>
<p>A submatrix $B$ consisting of "s" rows of $A$ is selected from an n-square matrix $A$ of rank $r_{A}$. prove that the rank of $B$ is equal to or greater than $r_{A}+s-n$.</p>
<p>My thoughts:</p>
<p>I start with an easy case, says $A=I_4$. Then, by selecting 2 first rows of $A$. We obtain a matrix $B$:$$\begin{pmatrix}
1 &0 &0 &0 \\
0&1 &0 &0
\end{pmatrix}$$
So, the rank of $B=4+2-4=2$. At least, I know the statement is true for this trivial situation. But can anyone help me to figure out the general situation about the problem?
Thanks in advance.</p>
| Chandramauli Chakraborty | 375,555 | <p>Here <span class="math-container">$A$</span> is an <span class="math-container">$n\times n$</span> matrix and <span class="math-container">$r_A$</span> is the rank of <span class="math-container">$A$</span>. To form <span class="math-container">$B$</span> we need to eliminate <span class="math-container">$n-s$</span> of the <span class="math-container">$n$</span> rows. Let <span class="math-container">$C=\{A_{i_k*}:1\leq k\leq r_A\} $</span> be a basis of the row space of A, i.e., of <span class="math-container">$\mathcal{R}(A)$</span>, where <span class="math-container">$A_{i_k*}$</span> denotes the <span class="math-container">$i_k\textit{-th}$</span> row of the matrix A.</p>
<p>Now remove any row <span class="math-container">$A_{j_m*}$</span>; there are two possibilities: either <span class="math-container">$A_{j_m*}\in C$</span> or <span class="math-container">$A_{j_m*}\not\in C$</span>. If <span class="math-container">$A_{j_m*}\in C$</span>, then the new <span class="math-container">$(n-1)\times n$</span> matrix has rank at least <span class="math-container">$r_A-1$</span>, while in the other case the rank stays <span class="math-container">$r_A$</span>, since <span class="math-container">$A_{j_m*}\in span(C)$</span>.</p>
<p>Hence each removed row lowers the rank by at most one, and the worst case is when all <span class="math-container">$n-s$</span> removed rows belong to <span class="math-container">$C$</span>. Hence <span class="math-container">$r_B\geq r_A-(n-s)=r_A+s-n$</span>.</p>
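<p>The bound is easy to test numerically. Here is a small Python sketch with a hand-rolled exact rank routine over the rationals (the random matrix and the choice of the first <span class="math-container">$s$</span> rows are arbitrary, purely for illustration):</p>

```python
from fractions import Fraction
import random

def rank(M):
    # Row rank via Gaussian elimination over the rationals (exact)
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

random.seed(0)
n, s = 5, 3
A = [[random.randint(-3, 3) for _ in range(n)] for _ in range(n)]
B = A[:s]                        # submatrix consisting of s rows of A
assert rank(B) >= rank(A) + s - n
print(rank(A), rank(B))
```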
|
3,495,779 | <p>The question is:
Prove that if graph <span class="math-container">$G$</span> is a <strong>Connected Planar Graph</strong> where each region is made up of at least <span class="math-container">$k\,(k\ge3)$</span> edges, then it must satisfy:</p>
<p><span class="math-container">$e\ge\frac{k(v-2)}{k-2}$</span></p>
<p><span class="math-container">$e$</span> means <span class="math-container">$G$</span>'s number of edges, <span class="math-container">$v$</span> means <span class="math-container">$G$</span>'s number of vertices</p>
<p>Then I am prepare using induction to prove this.But I don't know which variable should I use. The number of edges or the number of vertices? And I even don't know what <span class="math-container">$k$</span> means here. Is <span class="math-container">$k$</span> a number no less than <span class="math-container">$3$</span> or it equals to the minimum number of edges of <span class="math-container">$G$</span>'s regions? Due to these doubts,I am confused how to start my induction correctly.</p>
| Chor | 707,050 | <p>It is not necessary to use induction. To resolve it, we need Euler's Formula:</p>
<blockquote>
<p>For a simple <strong>Connected Planar Graph</strong> G in which <span class="math-container">$e$</span>, <span class="math-container">$v$</span>, <span class="math-container">$f$</span> denote its numbers of edges, vertices, and regions, G must satisfy <span class="math-container">$v-e+f=2$</span>.</p>
</blockquote>
<p>And we also need:</p>
<blockquote>
<p>For a simple <strong>Connected Planar Graph</strong> G, the sum of the numbers of edges of all its regions is equal to twice its number of edges.</p>
</blockquote>
<p>We can suppose that the graph has <span class="math-container">$f$</span> regions. Since each region has at least <span class="math-container">$k$</span> edges, the sum of the numbers of edges of all regions is at least <span class="math-container">$kf$</span>. In other words,</p>
<p><span class="math-container">$2e\ge kf$</span></p>
<p>Then, using Euler's Formula, we have:</p>
<p><span class="math-container">$2e \ge k(2+e-v)\implies (2-k)e \ge k(2-v)$</span>. Since <span class="math-container">$k\ge 3$</span>, the factor <span class="math-container">$2-k$</span> is negative, so dividing by it reverses the inequality: <span class="math-container">$e \le \frac{k(v-2)}{k-2}$</span>.</p>
<p>This resolves the question, up to the direction of the inequality: the bound is an upper bound on <span class="math-container">$e$</span> (for <span class="math-container">$k=3$</span> it is the familiar <span class="math-container">$e\le 3v-6$</span>).</p>
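<p>A quick numeric check of the bound (note that dividing by the negative factor <span class="math-container">$2-k$</span> reverses the inequality, so it is an upper bound on <span class="math-container">$e$</span>); this Python sketch just evaluates it for two standard planar graphs:</p>

```python
from fractions import Fraction

def edge_bound(v, k):
    # e <= k(v-2)/(k-2) for a connected planar graph whose
    # regions all have at least k >= 3 edges (exact arithmetic)
    return Fraction(k * (v - 2), k - 2)

# Cube graph: v = 8, e = 12, every face a 4-cycle (k = 4): bound is 12
assert 12 <= edge_bound(8, 4)
# Planar drawing of K4: v = 4, e = 6, every face a triangle (k = 3): bound is 6
assert 6 <= edge_bound(4, 3)
print(edge_bound(8, 4), edge_bound(4, 3))
```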
|
1,558,210 | <p>In a linear algebra book I'm reading now, there was the following exercise:</p>
<blockquote>
<p>Let <span class="math-container">$W\subseteq V$</span> be a subspace of vector space <span class="math-container">$V$</span>. Do there always exist two subspaces <span class="math-container">$W_1,W_2\subseteq V$</span> such that:</p>
<ol>
<li><p><span class="math-container">$W_1+W_2=V$</span></p>
</li>
<li><p><span class="math-container">$W_1\cap W_2=W$</span></p>
</li>
<li><p><span class="math-container">$W_1\neq V,W_2\neq V$</span></p>
</li>
</ol>
</blockquote>
<p>The answer is clearly no if we allow <span class="math-container">$W=V$</span>, but even without it we can find counterexamples, e.g. <span class="math-container">$W=\Bbb R\times\{0\},V=\Bbb R^2$</span>.</p>
<p>Critical property of the example above is that there are no "intermediate" spaces, i.e. if <span class="math-container">$W\subseteq W'\subseteq V$</span>, then <span class="math-container">$W'=W$</span> or <span class="math-container">$W'=V$</span>. I started wondering whether this is an equivalent condition to failure of condition in problem, and I found out it is the case. Below I present a proof of this fact, which however makes heavy use of the axiom of choice (in existence of bases).</p>
<p>My question now is:</p>
<blockquote>
<p>Can the equivalence which I state and prove below be shown without any appeal to axiom of choice?</p>
</blockquote>
<hr />
<p>For <span class="math-container">$W$</span> subspace of <span class="math-container">$V$</span> the following are equivalent:</p>
<ol>
<li><p>There exist subspaces <span class="math-container">$W_1,W_2$</span> which satisfy 1-3 above</p>
</li>
<li><p>There exists a proper subspace of <span class="math-container">$V$</span> properly containing <span class="math-container">$W$</span>.</p>
</li>
</ol>
<p>Proof: 1 <span class="math-container">$\Rightarrow$</span> 2: I claim <span class="math-container">$W_1$</span> is such a proper subspace. Clearly <span class="math-container">$W\subseteq W_1\subsetneq V$</span>. If <span class="math-container">$W_1=W$</span>, then <span class="math-container">$V=W_1+W_2=W+W_2=W_2$</span> as <span class="math-container">$W\subseteq W_2$</span>, but this is a contradiction.</p>
<p>2 <span class="math-container">$\Rightarrow$</span> 1: Let <span class="math-container">$W\subsetneq W_1\subsetneq V$</span>. Let <span class="math-container">$B_1$</span> be any basis of <span class="math-container">$W$</span>, <span class="math-container">$B_2$</span> any basis of <span class="math-container">$W_1$</span> containing <span class="math-container">$B_1$</span> and <span class="math-container">$B_3$</span> any basis of <span class="math-container">$V$</span> containing <span class="math-container">$B_2$</span>. Define <span class="math-container">$W_2=\text{span}((B_3\setminus B_2)\cup B_1)$</span>. It's straightforward to see that <span class="math-container">$W_1,W_2$</span> satisfy the properties we want them to.</p>
| Michael Burr | 86,421 | <p>From the comments above, I will provide my guess as to what the OP meant by $\lim_{f(x)\rightarrow f(t)}\phi(x)=\phi(t)$:
$$
\forall \varepsilon>0,\exists\delta>0\text{ s.t. }\forall x\text{ with }|f(x)-f(t)|<\delta,|\phi(x)-\phi(t)|<\varepsilon.
$$</p>
<p>The original statement is true (provided $\phi$ is reasonably continuous). The statement hinges on the fact that for $f:[a,b]\rightarrow\mathbb{R}$ continuous, injective, and monotonic, then
$$
\lim_{x:f(x)\rightarrow f(t)}x=t.
$$
The limit is taken over $x$, but $f(x)$ is approaching $f(t)$.</p>
<p>In other words, the question is
$$
\lim_{s\rightarrow f(t)}f^{-1}(s)=t.
$$</p>
<p>From this <a href="https://math.stackexchange.com/questions/541082/proving-the-inverse-of-a-continuous-function-is-also-continuous">problem</a>, although it is more general than what you need, it follows that $f^{-1}$ is continuous. Therefore, the limit follows.</p>
|
2,376,315 | <p>So I'm trying to solve the problem irrational ^ irrational = rational. Here is my proof:
Let $i_{1},i_{2}$ be two irrational numbers and r be a rational number such that $$i_{1}^{i_{2}} = r$$
So we can rewrite this as $$i_{1}^{i_{2}} = \frac{p}{q}$$
Then by applying ln() to both sides we get $$i_2\ln(i_1) = \ln(p)-\ln(q)$$
which can be rewritten using the difference of squares as $$ i_2\ln(i_1) = \left(\sqrt{\ln(p)}-\sqrt{\ln(q)}\right)\left(\sqrt{\ln(p)}+\sqrt{\ln(q)}\right)$$
so now we have $$i_1 = e^{\sqrt{\ln(p)}+\sqrt{\ln(q)}}$$
$$i_2 = \sqrt{\ln(p)}-\sqrt{\ln(q)}$$
because I've found an explicit formula for $i_1$ and $i_2$ we are done.</p>
<p>So I'm new to proofs and I'm not sure if this is a valid argument. Can someone help me out?</p>
| Ben Grossmann | 81,360 | <p>As your proof is currently set up, you would need to show that your explicit formula gives you irrational numbers $i_1,i_2$ for at least one pair of integer values $p,q$. While it is certainly <em>believable</em> (and in fact true) that this is the case, it requires proof.</p>
<p>You're approaching this problem from the wrong mindset. All we need is an <em>example</em> of a pair of irrational numbers satisfying "irrational ^ irrational = rational". It is not necessarily useful to find a "general solution" of any kind.</p>
<p>The classic proof the statement can be summarized as follows:</p>
<blockquote class="spoiler">
<p> If $\sqrt{2}^{\sqrt{2}}$ is rational, then we're done. If not, then note that $\left[\sqrt{2}^{\sqrt{2}}\right]^{\sqrt{2}}$ is rational.</p>
</blockquote>
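<p>The argument in the spoiler can be illustrated numerically; this Python snippet only checks the identity <span class="math-container">$\left(\sqrt2^{\sqrt2}\right)^{\sqrt2}=\sqrt2^{\,2}=2$</span> in floating point, it is not a proof of irrationality:</p>

```python
from math import sqrt

x = sqrt(2)     # irrational
a = x ** x      # sqrt(2)^sqrt(2): if this happens to be rational, we are done
b = a ** x      # otherwise (sqrt(2)^sqrt(2))^sqrt(2) = sqrt(2)^2 = 2
assert abs(b - 2) < 1e-9
print(b)
```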
|
893,959 | <p>I have a series of problems in inequalities that I can not solve,please help me if you can.</p>
<p>problem 1 :$a,b,c \geq 0$ such that $\sqrt{a^2+b^2+c^2}=\sqrt[3]{ab+bc+ca} $
prove that $a^2b+b^2c+c^2a+abc \le \frac{4}{27}$</p>
<p>problem 2 : $a,b,c\geq0$ and $a+b+c = 1$ prove that </p>
<p>1, $ \sqrt{a+\frac{(b-c)^2}{4}}+\sqrt{b}+\sqrt{c} \leq \sqrt{3} $</p>
<p>2, $\sqrt{a+\frac{(b-c)^2}{4}} + \sqrt{b+\frac{(a-c)^2}{4}} + \sqrt{c+\frac{(a-b)^2}{4}} \leq2$</p>
<p>problem 3 $a,b,c \geq0$ and $a+b+c=1$ find Maximum value of :
$M=\frac{1+a^2}{1+b^2}+\frac{1+b^2}{1+c^2}+\frac{1+c^2}{1+a^2}$</p>
| Macavity | 58,320 | <p>I will do #1 (the title problem) here. The constraint gives $$(a^2+b^2+c^2)^3 = (ab+bc+ca)^2$$</p>
<p>$$\therefore 3(a^2+b^2+c^2) \ge (a+b+c)^2 = (a^2+b^2+c^2)+2(a^2+b^2+c^2)^{3/2} $$</p>
<p>$$\implies a^2+b^2+c^2 \le 1 \implies a+b+c \le \sqrt3$$</p>
<p>Now the result follows from $a^2b+b^2c+c^2a+abc \le \frac4{27}(a+b+c)^3 \le \frac{4\sqrt3}{9} ...(1)$, with equality when $a = b = c = \frac1{\sqrt 3}$.</p>
<p>Proof of $(1)$ - let $b$ be the median of $a, b, c$. Then $(b-c)(b-a) \le 0 \implies b^2+ac \le b(a+c)$. So we have
$$a^2b+c(b^2+ac)+abc \le b(a+c)^2 \le \frac4{27} (a+b+c)^3$$</p>
<p>Note that this means your #1 question was incorrect, the maximum of LHS is $3\sqrt3$ times the RHS in that inequality.</p>
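<p>A quick floating-point check that the equality case <span class="math-container">$a=b=c=\frac1{\sqrt 3}$</span> satisfies the constraint and attains <span class="math-container">$\frac{4\sqrt3}{9}$</span> (an illustrative Python sketch, not part of the proof):</p>

```python
from math import sqrt

a = b = c = 1 / sqrt(3)
constraint_lhs = sqrt(a*a + b*b + c*c)          # should equal 1
constraint_rhs = (a*b + b*c + c*a) ** (1 / 3)   # should also equal 1
value = a*a*b + b*b*c + c*c*a + a*b*c

assert abs(constraint_lhs - constraint_rhs) < 1e-12
assert abs(value - 4 * sqrt(3) / 9) < 1e-12
print(value)
```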
|
164,340 | <p>I recently found myself at a spot that I never believed I'll get (or at least not that soon in my career). I ran into a problem which seems to be best answered via categories.</p>
<blockquote>
<p>The situation is this, I have a directed system of structures and the maps are all the inclusion map, that is <span class="math-container">$X_i$</span> for <span class="math-container">$i\in I$</span> where <span class="math-container">$(I,\leq)$</span> is a directed set; and if <span class="math-container">$i\leq j$</span> then <span class="math-container">$X_i$</span> is a substructure of <span class="math-container">$X_j$</span>.</p>
<p>Suppose that the direct limit of the system exists. Can I be certain that this direct limit is actually the union? Namely, what sort of categories would ensure this, and what possible counterexamples are there?</p>
</blockquote>
<p>I asked several folks around the department today, some assured me that this is the case for concrete categories, while others assured me that a counterexample can be found (although it won't be organic, and would probably be manufactured for this case).</p>
<p>The situation is such that the direct system is originating from forcing, so it's quite... wide and probably immune to some of the "thinkable" counterexamples (by arguments of [set theoretical] genericity from one angle or another), and so any counterexample which is essentially a linearly ordered system is not going to be useful as a counterexample.</p>
<p>Another typical counterexample which is irrelevant here is finitely-generated things, for example we can take a direct system of f.g. vector spaces whose limit is not f.g. but this aspect is also irrelevant to me; although I am not sure how to accurately describe this sort of irrelevance.</p>
<p>Last point (which came up with every person I discussed this question today), if we consider: <span class="math-container">$$\mathbb R\hookrightarrow\mathbb R^2\hookrightarrow\ldots$$</span>
Then we consider those to be actually increasing sets in inclusion and not "natural identifications" as commonly done in categories. So the limit of the above would actually be <span class="math-container">$\mathbb R^{<\omega}$</span> (all finite sequences from <span class="math-container">$\mathbb R$</span>).</p>
| Zhen Lin | 5,191 | <p>Your question essentially amounts to asking, "when does the forgetful functor $U : \mathcal{C} \to \textbf{Set}$ create directed colimits?" More generally, one could replace "directed colimit" by "filtered colimit". There is, as far as I know, no general answer. </p>
<p>Here is one reasonably general class of categories $\mathcal{C}$ for which there is such a forgetful functor. Let us consider a finitary algebraic theory $\mathfrak{T}$, i.e. a one-sorted first-order theory with a <em>set</em> of operations of finite arity and whose axioms form a <em>set</em> of universally-quantified equations. For example, $\mathfrak{T}$ could be the theory of groups, or the theory of $R$-modules for any fixed $R$-module. Then, the category $\mathcal{C}$ of models of $\mathfrak{T}$ will be a category in which filtered colimits are, roughly speaking, obtained by taking the union of the underlying sets. This can be proven "by hand", by showing that the obvious construction has the required universal property: the key lemma is that filtered colimits commute with finite limits in $\textbf{Set}$ – so, for example, $\varinjlim_{\mathcal{J}} X \times \varinjlim_{\mathcal{J}} Y \cong \varinjlim_{\mathcal{J}} X \times Y$ if $\mathcal{J}$ is a filtered category. Mac Lane spells out the details in [CWM, Ch. IX, §3, Thm 1].</p>
<hr>
<p><strong>Addenda.</strong> Fix a one-sorted first-order signature $\Sigma$. Consider the directed colimit of the underlying sets of some $\Sigma$-structures: notice that the colimit inherits a $\Sigma$-structure if and only if the operations and predicates of $\Sigma$ are all finitary. Qiaochu's counterexample with $\{ 0 \} \subset \{ 0, 1 \} \subset \{ 0, 1, 2 \} \subset \cdots$ can be pressed into service here as well.</p>
<p>So let us assume $\Sigma$ only has finitary operations and predicates. The problem is now to establish an analogue of Łoś's theorem for directed colimits. Let $\mathcal{J}$ be a directed set and let $X : \mathcal{J} \to \Sigma \text{-Str}$ be a directed system of $\Sigma$-structures. Let us say that a logical formula $\phi$ is "good" just if
$X_j \vDash \phi$ for all $X_j$ implies $\varinjlim X \vDash \phi$ (where $\varinjlim X$ is computed in $\textbf{Set}$ and given the induced $\Sigma$-structure).</p>
<ol>
<li><p>It is not hard to check that universally quantified equations and atomic predicates are good formulae.</p></li>
<li><p>The set of good formulae is closed under conjunction and disjunction.</p></li>
<li><p>The set of good formulae is closed under universal quantification.</p></li>
<li><p>The set of good formulae is <em>not</em> closed under existential quantification: the formula $\forall x . \, x \le m$ (with free variable $m$) is a good formula in the signature of posets, but $\exists m . \, \forall x . \, x \le m$ is clearly not preserved by direct limits.</p></li>
<li><p>However, a quantifier-free good formula is still a good formula when prefixed with any number of existential quantifiers. </p></li>
<li><p>In particular, the set of good formulae is <em>not</em> closed under negation: the property of being unbounded above can be expressed as a good formula in the signature of posets with inequality, but its negation is the property of being bounded above.</p></li>
</ol>
<p>Section 6.5 of [Hodges, <em>Model theory</em>] seems to have some relevant material, but I haven't read it yet. The point, I suppose, is that there are some fairly strong conditions that the theory in question must satisfy before the directed colimit in $\textbf{Set}$ is even a <em>model</em> of the theory, let alone be a directed colimit in the category of models of the theory.</p>
|
2,919,096 | <p>I am trying to solve a system of two second-order ODEs. After separating them, I obtained a fourth-order independent ODE as illustrated below. I wonder if there is a specific technique to solve it.</p>
<p>$$y^{(4)}+\frac{a_1}{x} y^{(3)}+\frac{a_2}{x^2}y^{(2)}+a_3y^{(2)}+\frac{a_4}{x}y^{(1)}+a_5y=0$$</p>
| Przemo | 99,778 | <p>Since our ODE is of fourth order and has regular singularities at zero and infinity it would make sense to map it onto a different ODE with the same properties and with known solutions. Here the generalized hypergeometric function <span class="math-container">$f(x):=F_{0,3}[b_1,b_2,b_3;x]$</span> is a good candidate. That function satisfies the following ODE (see <a href="https://en.wikipedia.org/wiki/Generalized_hypergeometric_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Generalized_hypergeometric_function</a>):
<span class="math-container">\begin{equation}
f(x)= \frac{d}{d x} \prod\limits_{n=1}^3 \left( x \frac{d}{d x} + b_n-1 \right) f(x)
\end{equation}</span>
Now let us define a scaling function <span class="math-container">$\theta(x) := A x^r$</span> and in the ODE above let us change the variable as follows: <span class="math-container">$x \rightarrow \theta(x)$</span> and <span class="math-container">$d/d x \rightarrow 1/\theta^{'}(x) d/d x$</span>. Then it turns out that the rescaled function satisfies the following ODE:
<span class="math-container">\begin{eqnarray}
0=-A r^4 f(x) x^{r-4}+\frac{((b_1+b_2+b_3-3) r+6) f^{(3)}(x)}{x}+\frac{\left((b_2 (b_3-2)-2 b_3+b_1 (b_2+b_3-2)+3) r^2+3 (b_1+b_2+b_3-3) r+7\right) f''(x)}{x^2}+\frac{((b_1-1) r+1) ((b_2-1) r+1) ((b_3-1) r+1)
f'(x)}{x^3}+f^{(4)}(x)
\end{eqnarray}</span>
where with a slight abuse of notation we can write <span class="math-container">$f(x):=f(\theta(x))$</span>.
By comparing the above with our original ODE we write:
<span class="math-container">\begin{eqnarray}
r&=&4\\
-A r^4 &=& a_5\\
(b_1-1) r+1 &=& 0\\
\left((b_2 (b_3-2)-2 b_3+b_1 (b_2+b_3-2)+3) r^2+3 (b_1+b_2+b_3-3) r+7\right)&=& a_2\\
((b_1+b_2+b_3-3) r+6) &=& a_1
\end{eqnarray}</span>
This has a following solution:
<span class="math-container">\begin{eqnarray}
r&=&4\\
A&=&-\frac{a_5}{256}\\
b_1 &=& \frac{3}{4}\\
b_2 &=& \frac{1}{8} \left( 3+a_1- \sqrt{(1-a_1)^2-4 a_2}\right)\\
b_3 &=& \frac{1}{8} \left( 3+a_1+ \sqrt{(1-a_1)^2-4 a_2}\right)
\end{eqnarray}</span>
and therefore a function
<span class="math-container">\begin{equation}
v(x):= F_{0,3}\left[b_1,b_2,b_2,-\frac{a_5}{256} x^4\right]
\end{equation}</span>
solves the equation:
<span class="math-container">\begin{equation}
0=\frac{a_1 v^{(3)}(x)}{x}+\frac{a_2 v''(x)}{x^2}+a_5 v(x)+v^{(4)}(x)
\end{equation}</span>
which is the original ODE subject to <span class="math-container">$a_3=0$</span> and <span class="math-container">$a_4=0$</span>.</p>
<p>The code snippet verifies the result:</p>
<pre><code>In[53]:=
A1 =.; A2 =.; A3 =.; A4 =.; A5 =.;
{A, r} = {-A5/256, 4};
{b[1], b[2], b[3]} = {3/4, 1/8 (3 + A1 - Sqrt[(1 - A1)^2 - 4 A2]),
1/8 (3 + A1 + Sqrt[(1 - A1)^2 - 4 A2])};
v[x_] = HypergeometricPFQ[{}, {b[1], b[2], b[3]}, A x^r];
FullSimplify[
A5 v[x] + (A2/x^2) D[v[x], {x, 2}] + A1 /x D[v[x], {x, 3}] +
D[v[x], {x, 4}]]
Out[57]= 0
</code></pre>
<p>Update: Since mapping generalized hypergeometric functions onto our ODE appears to be quite fruitful let us try now with another specimen of this kind. We take <span class="math-container">$f(x):=F_{2,3}\left[a_1,a_2;b_1,b_2,b_3;x\right]$</span>. We take <span class="math-container">$(p,q)=(2,3)$</span> and our function satisfies the following ODE:
<span class="math-container">\begin{equation}
x \prod\limits_{n=1}^p \left(x \frac{d}{d x} + a_n\right) f(x)= x \frac{d}{d x} \prod\limits_{n=1}^q \left( x \frac{d}{d x} + b_n-1 \right) f(x)
\end{equation}</span>
and now we repeat the procedure above. This means we firstly rescale the abscissa <span class="math-container">$x \rightarrow A x^r$</span> and then rescale the ordinate <span class="math-container">$f(x) \rightarrow x^s v(x)$</span>. As usual we carry out those calculations in Mathematica because the formulae are unwieldy otherwise. In this case we have eight parameters to tweak: firstly the five parameters of the hypergeometric function <span class="math-container">$(a_j)_{j=1}^p$</span>, <span class="math-container">$(b_j)_{j=1}^q$</span>, then the two parameters of the abscissa scaling <span class="math-container">$A$</span>, <span class="math-container">$r$</span>, and finally the ordinate scaling exponent <span class="math-container">$s$</span>. It turns out that the following choice is fruitful. We have:
<p><span class="math-container">\begin{eqnarray}
s&=&\frac{1}{2}\left(-5+A_1 + \sqrt{(1-A_1)^2-4 A_2} \right)\\
a_1&=& -\frac{s}{2}\\
a_2&=&\frac{A_4-A_3(1+s)}{2 A_3}\\
b_1&=&\frac{-s+2}{2}\\
b_2&=&\frac{1-s}{2}\\
b_3&=&\frac{A_1-3-2 s}{2}
\end{eqnarray}</span>
and
<span class="math-container">\begin{eqnarray}
A&=&-\frac{A_3}{4}\\
r&=&2
\end{eqnarray}</span>
and therefore the function
<span class="math-container">\begin{eqnarray}
v(x)&=& x^{-s} F_{2,3}\left[a_1,a_2;b_1,b_2,b_3;A x^r\right]
\end{eqnarray}</span>
solves the equation:
<span class="math-container">\begin{equation}
0=\frac{A_1 v^{(3)}(x)}{x}+\left(\frac{A_2}{x^2}+A_3\right) v''(x)+\frac{A_4 v'(x)}{x}+v^{(4)}(x)
\end{equation}</span>
which is the original equation subject to the coefficient at the zeroth derivative (the right-most term) being equal to zero.</p>
<p>Again the code snippet provides a verification:</p>
<pre><code>In[61]:= A1 =.; A2 =.; A3 =.; A4 =.; A5 =.;
s = 1/2 (-5 + A1 + Sqrt[(-1 + A1)^2 - 4 A2]);
{a[1], a[2], b[1], b[2], b[3]} = {-s/2, (A4 - A3 (1 + s))/(
2 A3), (-s + 2)/2, (1 - s)/2, (A1 - 3 - 2 s)/2};
{A, r} = {-A3/4, 2};
v[x_] = x^(-s) HypergeometricPFQ[{a[1], a[2]}, {b[1], b[2], b[3]},
A x^r];
FullSimplify[
A4 /x D[v[x], x] + (A3 + A2/x^2) D[v[x], {x, 2}] +
A1 /x D[v[x], {x, 3}] + D[v[x], {x, 4}]]
Out[66]= 0
</code></pre>
|
2,652,675 | <p>Given that A = $\begin{bmatrix} 2 & 1 \\ -5 & -4 \end{bmatrix} $ and B = $\begin{bmatrix} 3 & -1 \\ -1 & 0 \end{bmatrix} $ </p>
<p>Find a 2 X 2 matrix C such that $CA= B$</p>
<p>I multiply both sides by $A^{-1}$ </p>
<p>Since $A^{-1}A = I $ </p>
<p>$ CI = BA^{-1}$ </p>
<p>Since $CI = IC = C$ </p>
<p>$ C = BA^{-1} $ </p>
<p>However, when I carry on and find out the answer to matrix C, I can’t get the answer. Where have I gone wrong ? </p>
| Adriano | 76,987 | <p>Given some topology $\mathcal T$ generated by a basis $\mathcal B$ on some set $X$, it is always possible to make a finer topology $\mathcal T_d$, where $\mathcal T \subseteq \mathcal T_d$, by declaring that every set is open. Indeed, when the new basis is the set of all singletons, we obtain the finest possible topology $\mathcal T_d$ known as <em>the discrete topology</em>. However, $\mathcal B$ is only a basis for $\mathcal T$, not $\mathcal T_d$. In general, a basis generates a unique topology, but a given topology can be generated by more than one choice of basis.</p>
|
2,768,088 | <p>A farmer cultivates mushrooms in his garden. A greedy neighbor wants to pick some but the farmer is trying to block him.</p>
<p>The garden has the form of a 8x6 grid. Rows 1 to 8 from the front to the back and columns A to F from left to right. The mushrooms are planted in the 8th row (6 mushrooms). The farmer is initially standing at the block E7, right in front of the mushrooms and can move at any of his direct surrounding 8 blocks (including those behind him, where the mushrooms are planted).
The neighbor initially stands at block F1 and is trying to reach the mushrooms by walking at any of his directly surrounding blocks (including those situated diagonally in relation to his position). Once the neighbor reaches the farmer, he hits him and can then reach the mushrooms, but if the farmer reaches the neighbor, he hits him also, and he has to back out. The neighbor moves first and then they alternate turns. Will he manage to get at least one mushroom, or the farmer will block him?
To summarize, the "game" ends in any of the 3 cases: </p>
<ol>
<li><p>The farmer reaches the neighbor (walks on his square). In this case, the neighbor has to leave and go home. </p></li>
<li><p>The neighbor reaches the farmer, even once (walks on his square). Then the farmer has to admit he lost, and let him get the mushrooms! </p></li>
<li><p>The neighbor reaches one (any) mushroom before the farmer manages to stop him.</p></li>
</ol>
<p>Describe some of the optimal moves for each of them, using the grid coordinates.
<a href="https://i.stack.imgur.com/05NhZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/05NhZ.jpg" alt="Garden"></a></p>
<p>I tried to set the neighbor "chase" the farmer by trying to be on the same column with him but can't find a general pattern. </p>
<p>FYI I found this in an Ukrainian magazine at the Kiev airport - I hope I translated everything correctly!</p>
| poetasis | 546,655 | <p>It appears that, unless the farmer takes a deliberately bad path, he will always be able to hit the neighbor. Here are sample moves juxtaposed. In the 2nd example, the farmer could have avoided being hit by taking a different path. The neighbor can only be hit or move to places to avoid being hit but he cannot get past the farmer if the farmer moves wisely.</p>
<p>N1: 2F 3E 4D </p>
<p>F1: 6F 5E 4D-HIT</p>
<p>N2: 2E 3D 3C 4C-HIT</p>
<p>F2: 6E 5D 4C</p>
<p>N3: 2E 3D 3C 4D</p>
<p>F3: 6E 5D 5C 4D-HIT</p>
|
2,768,088 | <p>A farmer cultivates mushrooms in his garden. A greedy neighbor wants to pick some but the farmer is trying to block him.</p>
<p>The garden has the form of an 8x6 grid: rows 1 to 8 from front to back and columns A to F from left to right. The mushrooms are planted in the 8th row (6 mushrooms). The farmer initially stands at block E7, right in front of the mushrooms, and can move to any of the 8 blocks directly surrounding him (including those behind him, where the mushrooms are planted).
The neighbor initially stands at block F1 and is trying to reach the mushrooms by walking to any of the blocks directly surrounding him (including those diagonally adjacent to his position). Once the neighbor reaches the farmer, he hits him and can then reach the mushrooms; but if the farmer reaches the neighbor first, he hits him instead, and the neighbor has to back out. The neighbor moves first, and then they alternate turns. Will he manage to get at least one mushroom, or will the farmer block him?
To summarize, the "game" ends in any of the 3 cases: </p>
<ol>
<li><p>The farmer reaches the neighbor (walks on his square). In this case, the neighbor has to leave and go home. </p></li>
<li><p>The neighbor reaches the farmer, even once (walks on his square). Then the farmer has to admit he lost, and let him get the mushrooms! </p></li>
<li><p>The neighbor reaches one (any) mushroom before the farmer manages to stop him.</p></li>
</ol>
<p>Describe some of the optimal moves for each of them, using the grid coordinates.
<a href="https://i.stack.imgur.com/05NhZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/05NhZ.jpg" alt="Garden"></a></p>
<p>I tried to have the neighbor "chase" the farmer by trying to stay on the same column as him, but I can't find a general pattern. </p>
<p>FYI I found this in a Ukrainian magazine at the Kiev airport - I hope I translated everything correctly!</p>
| Ross Millikan | 1,827 | <p>The neighbor can win. He moves to E1 and claims the distant opposition. If the farmer moves forward, so does the neighbor. If the farmer moves sideways, the neighbor moves diagonally forward on the side away from the farmer. Now the farmer must move toward the neighbor, and the neighbor can move in front of the farmer an even number of spaces away, maintaining the opposition. Again the farmer must move sideways one way, and the neighbor moves diagonally forward on the other side. </p>
<p>A game might go<br>
E1 D6<br>
F2 E6<br>
E2 D6<br>
F3 E5<br>
E3 D5<br>
F4 E6<br>
E4 D6<br>
F5 and the neighbor can get the F mushroom</p>
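<p>For readers who want to check the opposition argument mechanically: the game is small enough (48 &times; 48 positions &times; 2 sides to move) for an exhaustive backward-induction ("attractor") computation. Below is a minimal Python sketch; the rules encoding (king moves for both players, capture by stepping onto the opponent's square, neighbor win on reaching row 8, and infinite play counting as a successful block by the farmer) is my reading of the problem statement, not part of the original answer.</p>

```python
from itertools import product

ROWS, COLS = 8, 6  # rows 1..8 -> 0..7, columns A..F -> 0..5
KING = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def moves(pos):
    """All on-board king moves from pos."""
    r, c = pos
    return [(r + dr, c + dc) for dr, dc in KING
            if 0 <= r + dr < ROWS and 0 <= c + dc < COLS]

def solve():
    """Fixpoint over states (neighbor, farmer, turn), turn 0 = neighbor to move.
    Returns the set of states from which the neighbor can FORCE a win
    (capture the farmer, or reach row 8 = index 7) in finitely many moves."""
    cells = list(product(range(ROWS), range(COLS)))
    win, changed = set(), True
    while changed:
        changed = False
        for n, f, t in product(cells, cells, (0, 1)):
            if n == f or (n, f, t) in win:
                continue
            if t == 0:  # neighbor to move: one good move suffices
                good = any(m == f or m[0] == ROWS - 1 or (m, f, 1) in win
                           for m in moves(n))
            else:       # farmer to move: every reply must lose (and none may capture)
                good = all(m != n and (n, m, 0) in win for m in moves(f))
            if good:
                win.add((n, f, t))
                changed = True
    return win

win = solve()
start = ((0, 5), (6, 4), 0)  # neighbor at F1, farmer at E7, neighbor moves first
print("neighbor can force a win:", start in win)
```

<p>Running it classifies every state and reports whether the starting position lies in the neighbor's winning set under this encoding.</p>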
|
293,110 | <p>How do I prove that the homomorphism $\phi : \; \mathrm{Mod}(S_g)\to \mathrm{Sp}(2g, \mathbb{Z})$ (induced by the action of the mapping class group of a surface on the integral homology of the surface) is an epimorphism? My idea was to work with generators, but I was not able to prove it this way. </p>
<p>I would love to get detailed answers in order to understand this better. </p>
| Adam P. Goucher | 39,521 | <p>If you take your description and rename $\textrm{set} \mapsto \textrm{class}$ and $\textrm{small set} \mapsto \textrm{set}$, and add some further axioms beyond the ones you mention (such as global choice), then the resulting theory is <strong>von Neumann-Bernays-Goedel set theory</strong> (NBG).</p>
<p>NBG has some very elegant properties, including:</p>
<ul>
<li>Finitely axiomatisable in exactly the same way that ZFC is not;</li>
<li>Conservative over ZFC, so NBG and ZFC prove exactly the same statements about sets;</li>
<li>All proper classes (i.e. classes which are not sets) are equally-sized in the sense that their elements are in bijective correspondence;</li>
<li>Allows one to talk about classes such as $V$ and $\mathbf{On}$ as first-class objects in the theory, instead of defining them meta-mathematically as predicates over sets.</li>
</ul>
|
293,110 | <p>How do I prove that the homomorphism $\phi : \; \mathrm{Mod}(S_g)\to \mathrm{Sp}(2g, \mathbb{Z})$ (induced by the action of the mapping class group of a surface on the integral homology of the surface) is an epimorphism? My idea was to work with generators, but I was not able to prove it this way. </p>
<p>I would love to get detailed answers in order to understand this better. </p>
| Thomas Benjamin | 20,597 | <p>I believe that an answer to your question [1] is the system that Dana Scott developed in his paper, "Axiomatizing Set Theory" found in <em>Proceedings of Symposia in Pure Mathematics</em>, Volume 13, Part II, 1974, pp. 207-14. This system is, in Scott's words, the formalization of the following intuition:</p>
<blockquote>
<p>But note that our original intuition of set is based on the idea of having collections of <em>already</em> fixed objects. The suggestion of considering all-inclusive collections [such as the set of all sets not containing themselves, the set of all ordinals, etc.--my comment] only came in later by way of formal simplification of language. The suggestion proved unfortunate, and so we must return to the primary intuitions. These intuitions can gain an initial precision through formulating the two basic axioms of <em>extensionality</em> and <em>comprehension</em>, which we now discuss in detail.</p>
</blockquote>
<p>Scott first, however, develops the following, "nameless", axiom:</p>
<blockquote>
<p>Let the variables <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span>, <span class="math-container">$a^{'}$</span>, <span class="math-container">$b^{'}$</span>, <span class="math-container">$c^{'}$</span>, <span class="math-container">$a^{''}$</span>,... range over <em>sets</em> and the variables <span class="math-container">$x$</span>, <span class="math-container">$y$</span>, <span class="math-container">$z$</span>, <span class="math-container">$x^{'}$</span>, <span class="math-container">$y^{'}$</span>, <span class="math-container">$z^{'}$</span>, <span class="math-container">$x^{''}$</span>... range over <em>arbitrary</em> objects. The symbol <span class="math-container">$=$</span> is used for <em>identity</em> and <span class="math-container">$\in$</span> for <em>membership</em>. Whether it is really interesting or profitable to allow for non-sets in the theory is debatable; but let us not exclude them yet. We agree that the condition <span class="math-container">$x$</span> <span class="math-container">$\in$</span> <span class="math-container">$y$</span> should imply that <span class="math-container">$y$</span> is a set, a principle that we can formulate in logical symbols thus:</p>
<p><span class="math-container">$\forall$$x$</span>,<span class="math-container">$y$</span>[<span class="math-container">$x$</span> <span class="math-container">$\in$</span> <span class="math-container">$y$</span> <span class="math-container">$\rightarrow$</span> <span class="math-container">$\exists$$a$</span>[<span class="math-container">$y$</span> = <span class="math-container">$a$</span>]]</p>
<p>Of course this must be taken as a axiom; but it is so primitive, so much just a convention of grammar, that we will not even give it a name.</p>
</blockquote>
<p>Scott then gives the axioms of <em>extensionality</em> and <em>comprehension</em>:</p>
<blockquote>
<p>Extensionality. <span class="math-container">$\forall$$a$</span>,<span class="math-container">$b$</span>[<span class="math-container">$\forall$$x$</span>[<span class="math-container">$x$</span> <span class="math-container">$\in$</span> <span class="math-container">$a$</span> <span class="math-container">$\leftrightarrow$</span> <span class="math-container">$x$</span> <span class="math-container">$\in$</span> <span class="math-container">$b$</span>] <span class="math-container">$\rightarrow$</span> <span class="math-container">$a$</span> = <span class="math-container">$b$</span>].</p>
<p>Comprehension. <span class="math-container">$\forall$$a$</span> <span class="math-container">$\exists$$b$$\forall$$x$</span>[<span class="math-container">$x$</span> <span class="math-container">$\in$</span> <span class="math-container">$b$</span> <span class="math-container">$\leftrightarrow$</span> <span class="math-container">$x$</span> <span class="math-container">$\in$</span> <span class="math-container">$a$</span> <span class="math-container">$\land$</span> <span class="math-container">$\Phi$</span>(<span class="math-container">$x$</span>)].</p>
</blockquote>
<p>Regarding Extensionality, Scott writes:</p>
<blockquote>
<p>The extensionality axiom formalizes our idea that a set is nothing more than a collection of objects: It is uniquely determined by its elements.</p>
</blockquote>
<p>Regarding the Comprehension axiom, Scott has a bit more to say:</p>
<blockquote>
<p>The comprehension axiom formalizes the idea that once a collection <span class="math-container">$a$</span> is fixed, we can then extract from <span class="math-container">$a$</span> any arbitrary subcollection <span class="math-container">$b$</span>. The extraction process is effected by finding a property <span class="math-container">$\Phi$</span>(<span class="math-container">$x$</span>) which distinguishes <span class="math-container">$b$</span> as the subset of <span class="math-container">$a$</span> comprehending all those elements having the property. There is no reason to place any restriction on how <span class="math-container">$\Phi$</span>(<span class="math-container">$x$</span>) is formulated: We believe in the existence of arbitrary subsets. It is a great temptation to erase the condition <span class="math-container">$x$</span> <span class="math-container">$\in$</span> <span class="math-container">$a$</span>, thus simplifying the axiom schema; but we all know what happens. It is much more profitable to ask: Where does the <span class="math-container">$a$</span> come from?</p>
</blockquote>
<p>He then proceeds to discuss (from a historical perspective) the answer to the above question:</p>
<blockquote>
<p>Zermelo answered the question by giving several construction principles for obtaining new <span class="math-container">$a$</span>'s from old. Fraenkel and Skolem extended the method, and von Neumann, Bernays and Godel modified it somewhat. Actually this is a rather sad history--because set theory is made to seem so artificial and formalistic. The naive axioms are contradictory. We block the contradiction and thereby emasculate the theory. Therefore to get anywhere we reinstate a few of the principles we eliminated and hope for the best. Now it would be wrong to accuse any of the above men of holding such a simplistic view of the axiomatic process. Nevertheless it is a widely held view and one that is easy to fall into when considering the formal axioms. Let us try to see whether there is another path to the same theory more obviously based on the underlying intuition.</p>
</blockquote>
<p>Scott the goes on to make the following claim:</p>
<blockquote>
<p>The truth is that there is only one satisfactory way of avoiding the paradoxes: namely, the use of some form of the <em>theory of types</em>. That was the basis of both Russell's and Zermelo's intuitions. Indeed the best way to regard Zermelo's theory is as a simplification and extension of Russell's. (We mean Russell's <em>simple</em> theory of types, of course.) Thus mixing of types is easier and annoying repetitions are avoided. Once the later types are allowed to accumulate the earlier ones, we can then easily imagine <em>extending</em> the types into the transfinite--just how far we want to go must necessarily be left open. Now Russell made his types <em>explicit</em> in his notation and Zermelo left them <em>implicit</em>. It is a mistake to leave something so important invisible, because so many people will misunderstand you. What we shall try to do here is to axiomatize the types in as simple a way as possible so that everyone can agree that the idea is natural.</p>
</blockquote>
<p>He now provides the basic intuitions for his "cumulative types":</p>
<blockquote>
<p>Let us proceed on a very primitive basis. In order to obtain the sets to which the comprehension axioms can be applied, we imagine some way of dividing the sets into levels. There will be earlier levels and later levels. Let the sets up to a certain level be thought of as forming a <em>partial</em> universe <span class="math-container">$V$</span> which is regarded as a legitimate set [in Zuhair's terminology, a small set--my comment]. We can be generous and assume that all the nonsets, the set-theoretic atoms, belong to all the levels. In a <em>later</em> universe <span class="math-container">$V^{'}$</span> we have not only the elements of <span class="math-container">$V$</span>, but also <span class="math-container">$V$</span> itself to be used to form subcollections of <span class="math-container">$V^{'}$</span>; that is, <span class="math-container">$V$</span> <span class="math-container">$\in$</span> <span class="math-container">$V^{'}$</span> as well as <span class="math-container">$V$</span> <span class="math-container">$\subseteq$</span> <span class="math-container">$V^{'}$</span>, where we define</p>
<p><span class="math-container">$\forall$$x$</span>,<span class="math-container">$y$</span>[<span class="math-container">$x$</span> <span class="math-container">$\subseteq$</span> <span class="math-container">$y$</span> <span class="math-container">$\leftrightarrow$</span> <span class="math-container">$\forall$$z$</span>[<span class="math-container">$z$</span> <span class="math-container">$\in$</span> <span class="math-container">$x$</span> <span class="math-container">$\rightarrow$</span> <span class="math-container">$z$</span> <span class="math-container">$\in$</span> <span class="math-container">$y$</span>]].</p>
<p>Furthermore, not only <span class="math-container">$V$</span>, but by the same token all the subcollections of <span class="math-container">$V$</span>, should also be <em>elements</em> of <span class="math-container">$V^{'}$</span>.</p>
</blockquote>
<p>Scott continues defining the notion of <em>type</em> in his system:</p>
<blockquote>
<p>Once a set is fixed at one level, all its subsets are fixed at a later level--that is certainly the basic idea of the theory of types. We formalize this idea <em>not</em> by introducing type indices, but more simply by identifying a level with the collection of all sets (and nonsets) up to that level. We let variables <span class="math-container">$V$</span>, <span class="math-container">$V^{'}$</span>, <span class="math-container">$V^{''}$</span>,... range over these levels--that is, we take the idea of a <em>type level</em> (as identified with certain sets) as a primitive notion. The "later than" relation is transcribed simply as <span class="math-container">$V$</span> <span class="math-container">$\in$</span> <span class="math-container">$V^{'}$</span>. It need hardly be mentioned that we assume that there is at least one level and that each level is a set--axioms that we do not stop to name. What is important is the idea that a given level is <em>nothing more than</em> the accumulation of all the members and subsets of all the <em>earlier</em> levels (and all the nonsets, if any there be). In formal terms we have this axiom:</p>
<p>Accumulation. <span class="math-container">$\forall$$V^{'}$$\forall$$x$</span>[<span class="math-container">$x$</span> <span class="math-container">$\in$</span> <span class="math-container">$V^{'}$</span> <span class="math-container">$\leftrightarrow$</span> <span class="math-container">$\lnot$$\exists$$a$</span>[<span class="math-container">$x$</span>=<span class="math-container">$a$</span>] <span class="math-container">$\lor$</span> <span class="math-container">$\exists$$V$</span> <span class="math-container">$\in$</span> <span class="math-container">$V^{'}$</span>[<span class="math-container">$x$</span> <span class="math-container">$\in$</span> <span class="math-container">$V$</span> <span class="math-container">$\lor$</span> <span class="math-container">$x$</span> <span class="math-container">$\subseteq$</span> <span class="math-container">$V$</span>]].</p>
<p>(By the way, just because we use the <em>variables</em> <span class="math-container">$V$</span>, <span class="math-container">$V^{'}$</span>, <span class="math-container">$V^{''}$</span>,... we should <em>not</em> think of the levels as being arranged in an <span class="math-container">$\omega$</span>-type sequence. In general we will want a transfinite sequence. Also note that <span class="math-container">$V$</span> <span class="math-container">$\in$</span> <span class="math-container">$V^{'}$</span> does <em>not</em> imply that <span class="math-container">$V^{'}$</span> is the <em>next</em> level; it may be a much later level.) The purpose of this axiom is to show how the levels <em>fit together</em>.</p>
</blockquote>
<p>The next axiom captures the intuition that however far the levels go out, they eventually capture everything:</p>
<blockquote>
<p>Restriction. <span class="math-container">$\forall$$x$$\exists$$V$</span>[<span class="math-container">$x$</span> <span class="math-container">$\subseteq$</span> <span class="math-container">$V$</span>]</p>
<p>In other words, the <em>whole universe</em>, if only it were a set, would behave as the ultimate level in the sense of the previous axiom. (Note that this axiom gives the existence of at least one level. It really should have been formulated with the clause [<span class="math-container">$x$</span> <span class="math-container">$\in$</span> <span class="math-container">$V$</span> <span class="math-container">$\lor$</span> <span class="math-container">$x$</span> <span class="math-container">$\subseteq$</span> <span class="math-container">$V$</span>], but we shall show below that <span class="math-container">$x$</span> <span class="math-container">$\in$</span> <span class="math-container">$V$</span> <span class="math-container">$\rightarrow$</span> <span class="math-container">$x$</span> <span class="math-container">$\subseteq$</span> <span class="math-container">$V$</span>.) This will turn out to be nothing more or less than the well-known axiom of foundation, which is generally poorly understood. We feel that in the present context, it appears as a quite natural expression of the fact that the sets are restricted to levels.</p>
</blockquote>
<p>Finally, Scott includes the following reflection principle as the last axiom:</p>
<blockquote>
<p>Reflection. <span class="math-container">$\exists$$V$$\forall$$x$$\in$$V$</span>[<span class="math-container">$\Phi$</span>(<span class="math-container">$x$</span>) <span class="math-container">$\rightarrow$</span> <span class="math-container">$\Phi^{(V)}$</span>(<span class="math-container">$x$</span>)] [from which one can derive infinity and replacement--my comment. See pp.213-214].</p>
</blockquote>
<p>It should be noted that Scott, from Extensionality, Comprehension, Accumulation, and Restriction, shows that</p>
<blockquote>
<p>(union and power set) also drop out of these axioms [pp. 210-212--my comment]</p>
</blockquote>
<p>and that, by Restriction, one can add complements over the whole domain of discourse, so Scott's system is a correct answer to question [1].</p>
|
3,805,452 | <p>Given a stick broken randomly at two places, what is the probability that you can form a triangle from the pieces?</p>
<p>Here is my attempt and the answer does not match, so I am confused what went wrong with this argument.</p>
<p>I first denote the two randomly chosen positions by <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>, and let <span class="math-container">$A=\max(X,Y)$</span>, <span class="math-container">$B=\min(X,Y)$</span>. We are interested in the probability of the event <span class="math-container">$\{A>\frac{1}{2}, B>A-\frac{1}{2}\}$</span>. Thus, we want the joint distribution of <span class="math-container">$A$</span> and <span class="math-container">$B$</span>. To compute that, I computed
<span class="math-container">$$F_{A,B}(w,z)=\mathbb{P}(A\leq w, B\leq z)=\mathbb{P}(A\leq w)-\mathbb{P}(A\leq w, B>z)=\mathbb{P}(X\leq w,Y\leq w)-\mathbb{P}(X\leq w, Y\leq w, X>z, Y>z)$$</span>
Therefore, if <span class="math-container">$z\leq w$</span> we have
<span class="math-container">$$F_{A,B}(w,z)=w^2-(w-z)^2$$</span>
otherwise
<span class="math-container">$$F_{A,B}(w,z)=w^2$$</span>
Then the joint density of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> is
<span class="math-container">$$f_{A,B}(w,z)=\frac{\partial^2 F}{\partial w\partial z}(w,z)=2$$</span>
if <span class="math-container">$z\leq w$</span> and <span class="math-container">$0$</span> otherwise.<br />
Finally
<span class="math-container">$$\mathbb{P}(A>\frac{1}{2},B>A-\frac{1}{2})=\int_{\frac{1}{2}}^1\int_{w-\frac{1}{2}}^w2dzdw=\frac{1}{2}$$</span>
The answer is <span class="math-container">$\frac{1}{4}$</span> instead, but I can't figure out what went wrong with this argument.</p>
| BruceET | 221,800 | <p>The following simulation in R looks at a million such randomly broken
sticks, finds the lengths of the three pieces, and finally finds the
length of the longest piece. If the longest has length less
than half, you can make a triangle. Answer: <span class="math-container">$0.250\pm 0.001.$</span></p>
<pre><code>set.seed(2020)
mx = replicate(10^6, max(diff(c(0, sort(runif(2)), 1))))
mean(mx < .5)
[1] 0.250222 # aprx 1/4
2*sd(mx < .5)/1000
[1] 0.000866282 # aprx 95% marg of sim error
</code></pre>
<p><em>Note:</em> A related but <em>different</em> problem breaks the stick once uniformly at random and then breaks the longer piece uniformly at random.</p>
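<p>A quick cross-check of the same criterion in Python (a sketch, not part of the original answer): order the cuts as <span class="math-container">$a \le b$</span>, so the pieces are <span class="math-container">$a$</span>, <span class="math-container">$b-a$</span>, <span class="math-container">$1-b$</span>, and a triangle is possible iff every piece is shorter than <span class="math-container">$\tfrac12$</span>. Note that the event in the question keeps only two of these three inequalities, omitting <span class="math-container">$B&lt;\tfrac12$</span>, which is exactly why it yields <span class="math-container">$\tfrac12$</span> instead of <span class="math-container">$\tfrac14$</span>.</p>

```python
import random

random.seed(2020)

def triangle_possible(x, y):
    # Order the cut points; the pieces are a, b - a, 1 - b.
    a, b = min(x, y), max(x, y)
    return a < 0.5 and b - a < 0.5 and 1 - b < 0.5

n = 10**5
hits = sum(triangle_possible(random.random(), random.random()) for _ in range(n))
print(hits / n)  # close to 1/4
```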
|
3,984,322 | <p>I have the 2nd order homogeneous ODE:
<span class="math-container">$$y''(x)+\frac{2}{x}y'(x)-By(x)=0 $$</span>
where <span class="math-container">$B$</span> is a constant. Since I am a bit rusty with analytical solutions, I plugged it into MATLAB's symbolic solver, which found the following solution.
<span class="math-container">$$y(x)=C_1\frac{e^{\sqrt{B}x}}{x}-C_2\frac{e^{-\sqrt{B}x}}{2\sqrt{B}x}$$</span>
Unfortunately, the constant <span class="math-container">$B$</span> takes both positive and negative values, but since the ODE is derived from a physical problem, I'd like the solution to always be real.
Any suggestion/comment is highly appreciated.</p>
| Varun Vejalla | 595,055 | <p>That solution is also valid for negative <span class="math-container">$B$</span>, although it will involve complex numbers. To rewrite it without complex numbers, I first made the substitution <span class="math-container">$\frac{C_2}{2\sqrt{B}} \to C_2$</span> just for convenience: <span class="math-container">$$y(x)=\frac{C_1e^{\sqrt{B}x} - C_2e^{-\sqrt{B}x}}{x}$$</span></p>
<p>Then for negative <span class="math-container">$B$</span>, this is <span class="math-container">$$\frac{C_1e^{i\sqrt{-B}x} - C_2e^{-i\sqrt{-B}x}}{x}$$</span></p>
<p>Using that <span class="math-container">$e^{ix} = \cos(x)+i\sin(x)$</span>, this becomes <span class="math-container">$$\frac{C_1\left(\cos(x\sqrt{-B}) + i\sin(x\sqrt{-B})\right) - C_2\left(\cos(x\sqrt{-B}) - i\sin(x\sqrt{-B})\right)}{x}$$</span></p>
<p>Making the substitution <span class="math-container">$C_1 - C_2 = c_1$</span> and <span class="math-container">$iC_1 + iC_2 = c_2$</span> makes it <span class="math-container">$$\frac{c_1\cos(x\sqrt{-B}) + c_2\sin(x\sqrt{-B})}{x}$$</span></p>
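<p>As a sanity check (a sketch, not part of the derivation): for <span class="math-container">$B=-k^2&lt;0$</span> the claim is that <span class="math-container">$y(x)=\frac{c_1\cos(kx)+c_2\sin(kx)}{x}$</span> satisfies <span class="math-container">$y''+\frac{2}{x}y'-By=0$</span>, which can be verified numerically with central finite differences. The constants <span class="math-container">$k=2$</span>, <span class="math-container">$c_1=1.3$</span>, <span class="math-container">$c_2=-0.7$</span> are arbitrary illustrative choices.</p>

```python
import math

K, C1, C2 = 2.0, 1.3, -0.7  # arbitrary test values; B = -K**2 < 0

def y(x):
    return (C1 * math.cos(K * x) + C2 * math.sin(K * x)) / x

def residual(x, h=1e-4):
    # y'' + (2/x) y' - B y, with derivatives approximated by central differences
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return d2 + (2 / x) * d1 + K**2 * y(x)  # -B = K**2

for x in (0.5, 1.0, 2.5):
    print(x, residual(x))  # residuals ~ 0, up to finite-difference error
```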
|