qid (int64) | question (string) | author (string) | author_id (int64) | answer (string) |
|---|---|---|---|---|
1,314,992 | <p>What operations do I need to perform the following conversion?</p>
<p>$$
\frac{\partial ^2y}{\partial x\partial z} \mapsto \frac{\partial ^2y}{\partial x\partial t}
$$</p>
| ZanCoul | 109,553 | <p>It should be the chain rule. From the way you wrote your question, $t$ depends on $z$. So then you should have
\begin{align*}
\frac{\partial^2 y }{\partial x \partial z} = \frac{\partial^2 y}{\partial x\partial t}\frac{\partial t}{\partial z}
\end{align*}</p>
|
2,962,377 | <p>Rewrite <span class="math-container">$f(x,y) = 1-x^2y^2$</span> as a product <span class="math-container">$g(x) \cdot h(y)$</span> (both arbitrary functions)</p>
<p>To make what I'm talking about more clear, I will give an example.</p>
<p>Rewrite <span class="math-container">$f(x,y) = 1+x-y-xy$</span> as <span class="math-container">$g(x)h(y)$</span></p>
<p>If we choose <span class="math-container">$g(x) = (1+x)$</span> and <span class="math-container">$h(y) = (1-y)$</span> we have</p>
<p><span class="math-container">$f(x,y) = g(x) h(y) \implies (1+x-y-xy) = (1+x)(1-y)$</span></p>
<p>I'm trying to do the same with <span class="math-container">$f(x,y) = 1-x^2y^2 = (1-xy)(1+xy)$</span>.</p>
<p>New question:</p>
<blockquote>
<p>Is there also a contradiction for <span class="math-container">$f(x,y) = \frac{xy}{1-x^2y^2}$</span>? Or is it possible to write <span class="math-container">$f(x,y)$</span> as <span class="math-container">$g(x)h(y)$</span>?</p>
</blockquote>
| Community | -1 | <p><span class="math-container">$$f(x,0)=1=g(x)\cdot h(0)$$</span>
and
<span class="math-container">$$f(0,y)=1=g(0)\cdot h(y)$$</span></p>
<p>so that both <span class="math-container">$g(x)$</span> and <span class="math-container">$h(y)$</span> would have to be constant!?</p>
|
2,747,074 | <p>I've seen multiple ways on how to solve it online (and most likely the majority are wrong), but I don't know how to solve this fully so that I could get the full grade during my exam.</p>
<p>The question is as follows:</p>
<blockquote>
<p>Prove that if $\sum a_n$ is convergent with $a_n > 0$ for all $n$,
then $\sum a^2_n$ is also convergent.</p>
</blockquote>
<p>I understand that the starting point to prove this is that $\lim a_n =0$, but after that I really don't know what I'm supposed to say.</p>
| José Carlos Santos | 446,262 | <p>Since <span class="math-container">$\lim_{n\to\infty}a_n=0$</span>, you have <span class="math-container">$a_n\leqslant1$</span> if <span class="math-container">$n$</span> is large enough. But then <span class="math-container">${a_n}^2\leqslant a_n$</span> if <span class="math-container">$n$</span> is large enough and therefore you can apply the comparison test.</p>
|
1,419,756 | <p>Let $A$ be a nonempty set in the metric space $(X,d)$ and, for $\epsilon>0$, define</p>
<p>$$A_\epsilon = \{x\in X: d(x,A) < \epsilon\}.$$</p>
<p>Then I want to prove that $A_\epsilon$ is open in $X$.</p>
<p>So what I have tried so far: I want to prove that the set is open, so I take $x \in A_\epsilon$; then we have to show that $B_{\epsilon_{1}}(x) \subset A_\epsilon$ for some fixed $\epsilon_{1}>0$. Since it is clear that $A_\frac{\epsilon}{2}\subset A_\epsilon$, we take $\epsilon_{1}=\epsilon-d(x,A)$, pick $z \in B_{\frac{\epsilon_{1}}{2}}(x)$, and observe the following:</p>
<blockquote>
<p>I think I got it: $$d(z,A)\le d(z,x)+d(x,A)<\epsilon_{1}+d(x,A)=\epsilon-d(x,A)+d(x,A)=\epsilon$$ <strong>Am I right? And does the triangle inequality hold for a point and a set?</strong></p>
</blockquote>
<p>So the thing is that I am not sure of the <strong>above step</strong>. Can someone tell me if I am right, and if I am not, can someone help me to fix it?</p>
<p>Thanks a lot in advance</p>
<p><strong>NOTE</strong>: $$d(x,A)=\inf\{d(x,y):y \in A\}$$</p>
| Arpit Kansal | 175,006 | <p><strong>Another Approach:</strong> First note that if $A$ is a nonempty subset of a metric space $(X,d)$, then the function $f: X \to \mathbb R$ given by $f(x)=d(x,A)$ is uniformly continuous.</p>
<p>This is because $$| f(x) - f(y) | = | d(x,A) - d(y,A) | \leq d(x,y).$$</p>
<p>This means that $f$ is uniformly continuous (take $\delta = \epsilon$ at any point).</p>
<p>Further note that $A_\epsilon= f^{-1}\big((-\infty,\epsilon)\big)$.</p>
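<p>As a quick numeric illustration (not a proof) of the inequality above, here is a small Python check in the Euclidean plane; the finite point set and the sample points are arbitrary choices of mine:</p>

```python
import random

def dist(p, q):
    """Euclidean distance between two points in the plane."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def dist_to_set(p, A):
    """d(p, A) = inf{d(p, a) : a in A}; a min here, since A is finite."""
    return min(dist(p, a) for a in A)

random.seed(0)
A = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]
for _ in range(1000):
    x = (random.uniform(-10, 10), random.uniform(-10, 10))
    y = (random.uniform(-10, 10), random.uniform(-10, 10))
    # the inequality |d(x,A) - d(y,A)| <= d(x,y) from the answer
    assert abs(dist_to_set(x, A) - dist_to_set(y, A)) <= dist(x, y) + 1e-12
```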
|
110,709 | <p>By a closed geodesic, I mean a smooth periodic geodesic $\mathbb{R} \rightarrow (M,g)$. I will consider them up to geometric distinction.
This means that any two closed geodesics are equivalent if they have the same image in $(M,g)$. </p>
<p>Manifolds with constant curvature $\leq 0$,
by Cartan's theorem,
cannot have any closed contractible geodesics,
and every Riemannian metric on $S^2$ has infinitely many closed geodesics
(for $n\geq 3$, the analogous theorem for $S^n$ is not known).
Moreover,
if the sequence of Betti numbers of the loop space $\Omega(M)$ is unbounded and $M$ is simply connected,
then $(M,g)$ contains infinitely many (contractible) closed geodesics.</p>
<p><strong>Are there any known examples of Riemannian manifolds with finitely many, but at least one, closed contractible geodesics (or even just closed geodesics)?</strong></p>
<p>There is a theorem associated with Gromov asserting that the word problem of $\pi_1 M$ is solvable if there is a metric $g$ on $M$ with only finitely many contractible closed geodesics. I was wondering if there are any non-trivial examples for this theorem.</p>
| Ian Agol | 1,345 | <p>I think if you take the metric on $\mathbb{R}^2$ obtained by rotating the curve which is $\sqrt{1-x^2}$ for $-1\leq x\leq 0$ and $x^2+1$ for $x\geq 0$ around the $x$-axis, then there will be a single closed contractible geodesic, obtained by rotating the point $(0,1)$ around the $x$-axis.</p>
|
2,781,265 | <p>I am learning category theory as a hobby. In the book I am studying from, I was looking at an example of a functor. I want to understand what this functor looks like.</p>
<p>If $G$ is a group, a functor $F: G \to \mathbf{Set}$ picks out a set $A = F(\star)$, together with a homomorphism from $G$ to the group of permutations of $A$. This is a permutation representation of $G$. </p>
<p>I am confused about some things here:</p>
<ol>
<li><p>What is $\star$, is it the single object in the group considered as a category?</p></li>
<li><p>Why is this homomorphism from $G$ to the group of permutations of $A$ coming into the picture, doesn't a functor just need to map objects to some objects and morphisms to some morphisms? </p></li>
<li><p>The book says that a group can be considered as a category consisting of a single object, where all the morphisms are isomorphisms. Why is this the case? What does this category look like?</p></li>
</ol>
| Community | -1 | <ol>
<li>Yes. It is common to use $*$ as the name of the object in a one-object category; more generally, it is often used as the name of the unique object contained in a singleton set. ($\star$ is less common, but is surely what's meant)</li>
<li>A functor must additionally preserve composition of morphisms, as well as send identity morphisms to identity morphisms. Preserving composition and identities forces the image of each morphism of $G$ to be an invertible function $A \to A$, i.e. a permutation of $A$, and the assignment is then exactly a group homomorphism from $G$ to the permutations of $A$.</li>
<li>Because if you write out what "one object category whose morphisms are all isomorphisms" means, you'll see it's almost verbatim the same thing as what "a monoid in which every element has an inverse" means (plus the statement that there's one object).</li>
</ol>
|
552,384 | <blockquote>
<p>Prove that</p>
<p>$$
\int_{0}^{\infty}\frac{e^{-bx}-e^{-ax}}{x}\,dx = \ln\left(\frac{a}{b}\right)
$$</p>
</blockquote>
<p><strong>My Attempt:</strong></p>
<p>Define the function $I(a,b)$ as</p>
<p>$$
I(a,b) = \int_{0}^{\infty}\frac{e^{-bx}-e^{-ax}}{x}\,dx
$$</p>
<p>Differentiate both sides with respect to $a$ to get</p>
<p>$$
\begin{align}
\frac{dI(a,b)}{da} &= \int_{0}^{\infty}\frac{0-e^{-ax}(-x)}{x}\,dx\\
&= \int_{0}^{\infty}e^{-ax}\,dx\\
&= -\frac{1}{a}(0-1)\\
&= \frac{1}{a}
\end{align}
$$</p>
<p>How can I complete the proof from here?</p>
| Mhenni Benghorbal | 35,472 | <p>Note that the following is a <a href="https://math.stackexchange.com/questions/294383/evaluate-int-0-infty-left-fracx-textex-texte-x/295326#295326">general technique that can handle much harder problems</a>. Recalling the Laplace transform </p>
<blockquote>
<p>$$ F(s) = \int_{0}^{\infty} f(x) e^{-sx} dx. $$</p>
</blockquote>
<p>Consider the more general integral</p>
<p>$$ F(s) = \int_{0}^{\infty} \frac{e^{-bx}-e^{-ax}}{x} e^{-sx} dx \implies F'(s) = -\int_{0}^{\infty} ({e^{-bx}-e^{-ax}}) e^{-sx} dx .$$</p>
<p>Now, it is just a matter of evaluating the last integral and integrating the answer with respect to $s$ and then taking the limit as $s\to 0$ to find the desired value.</p>
<p><strong>Note:</strong> When you integrate with respect to $s$ do not forget the constant of integration. To find it use the fact that</p>
<blockquote>
<p>$$ \lim_{s\to \infty} F(s) = 0. $$ </p>
</blockquote>
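<p>Those steps can be carried out symbolically; here is a SymPy sketch, assuming $a, b > 0$ (the variable names are my own):</p>

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)
a, b = sp.symbols('a b', positive=True)

# F'(s) = -int_0^oo (e^{-bx} - e^{-ax}) e^{-sx} dx = 1/(s+a) - 1/(s+b)
Fprime = -sp.integrate((sp.exp(-b*x) - sp.exp(-a*x)) * sp.exp(-s*x),
                       (x, 0, sp.oo))

# integrate back in s; the constant of integration is fixed by F(s) -> 0
F = sp.integrate(Fprime, s)
C = -sp.limit(F, s, sp.oo)

result = sp.simplify(F.subs(s, 0) + C)   # the original integral is F(0)
print(result)                            # log(a/b), up to log rules
```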
|
1,013,776 | <p>I am looking at the following <strong>Theorem</strong>:</p>
<p>Let $\phi$ be a type. We suppose that there is a set $Y$ such that $\forall x(\phi(x) \to x \in Y)$. Then the set $\{ x: \phi(x) \}$ exists.</p>
<p>and I try to understand its proof.</p>
<p>From the axiom schema of specification, there is the set $V=\{ x \in Y: \phi(x) \}$, i.e.,</p>
<p>$$x \in V \leftrightarrow (x \in Y \wedge \phi(x))$$</p>
<p>How can we continue, in order to show that $\{ x: \phi(x) \}$ is a set?</p>
| Cameron Buie | 28,900 | <p>The idea, here, is to show that $V$ contains and is contained in the alleged set. One inclusion is clear, since $V$ is the intersection of the alleged set with $Y.$ For the reverse inclusion, use what you know about the type $\phi$ and the set $Y.$</p>
<p><strong>Added</strong>: On the one hand, suppose that $x\in V.$ That is, we have $x\in Y$ and $\phi(x).$ So, in particular, $\phi(x),$ meaning $x$ is a member of the alleged set. Since this holds for all $x\in V,$ then it follows that $V$ is contained in the alleged set.</p>
<p>On the other hand, suppose that $x$ is a member of the alleged set, meaning $\phi(x).$ Since $\phi(x),$ then by assumption, we know that $x\in Y.$ Hence, since $x\in Y$ and $\phi(x),$ then $x\in V$ by definition. Hence, $V$ contains the alleged set, as well.</p>
<p>By double-inclusion, $$V=\{x:\phi(x)\},$$ so the alleged set is, indeed, a set.</p>
<hr>
<p>Hopefully, you see why that bit about $Y$ is essential to the proof. If we have no such condition, then we may not have a set at all. For example, suppose that $\phi(x)$ is the statement "$x=x.$" Then if $\{x:\phi(x)\}$ <strong><em>were</em></strong> a set, it would be the "set of all sets," which leads to a number of paradoxes, and so cannot exist.</p>
|
3,764,342 | <p>Here since <span class="math-container">$\lim \frac{a_n}{a_{n+1}}=1.$</span> So no definite conclusion can be made about the nature of the sequence <span class="math-container">$\langle a_n\rangle$</span>.</p>
<p>So how can I can proceed to find the value of <span class="math-container">$a_1$</span> from the relation: <span class="math-container">$a_ka_{k+1}=k,$</span> for any <span class="math-container">$k\in\mathbb N$</span> ?</p>
<p>Please suggest something..</p>
| Mushu Nrek | 743,552 | <p>Another way to arrive at series is through integration. The best-known example is probably
<span class="math-container">$$
\sum_{n\geq 1} \dfrac{1}{n^2} = \int_0^1\int_0^1 \dfrac{1}{1 - xy}\,dx\,dy
$$</span>
which often serves as exercise for multiple integration and change of variables. Another example has recently been presented in this <a href="https://www.youtube.com/watch?v=5KpGSMyUANU" rel="nofollow noreferrer">video</a> by Michael Penn on his very nice channel. I hope this is what you had in mind!</p>
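<p>A quick midpoint-rule sanity check of that identity (the grid size is an arbitrary choice of mine):</p>

```python
import math

N = 500                      # grid resolution (arbitrary)
h = 1.0 / N
# midpoint rule for the double integral of 1/(1 - xy) over the unit square
integral = sum(
    h * h / (1.0 - (h * (i + 0.5)) * (h * (j + 0.5)))
    for i in range(N)
    for j in range(N)
)
print(integral, math.pi ** 2 / 6)   # both are approximately 1.6449
```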
|
2,368,771 | <blockquote>
<p>How to prove the function $$ f(z)=\exp\Big(\frac{z}{1-\cos z}\Big)$$ has an essential singularity at $z=0$ ?</p>
</blockquote>
<p>It's actually hard to write down the Laurent series of $f(z)$ around $0$, because the exponent $\frac{z}{1-\cos z}$ itself is already in series form (since $\cos z$ appears there and it has a series expansion), and $e^{z/(1-\cos z)}$ has again a series form. </p>
<p>Edit 1: I have already seen <a href="https://math.stackexchange.com/questions/407318/at-z-0-the-function-fz-expz-over-1-cos-z-has">this</a>, but it does not give information about the Laurent expansion of $f(z)$. </p>
<p>Edit 2: How should I proceed? Or can anyone explain why the limit of $e^{z/(1-\cos z)}$ at $0$ does not exist?</p>
| Adayah | 149,178 | <p>It's sufficient to prove that the limit $$a = \lim_{z \to 0} \exp \left( \frac{z}{1-\cos z} \right)$$ does not exist, for we have the following trichotomy:</p>
<ul>
<li>if $z=0$ is a removable singularity, the limit exists and $a \in \mathbb{C},$</li>
<li>if $z=0$ is a pole, then $a = \infty$ (the complex infinity),</li>
<li>if $z=0$ is an essential singularity, the limit does not exist.</li>
</ul>
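<p>A numeric illustration (not a proof) that the limit fails to exist for this $f$: since $1-\cos z \approx z^2/2$ near $0$, the exponent behaves like $2/z$, so approaching $0$ from the two sides of the real axis gives incompatible values.</p>

```python
import cmath

def f(z):
    return cmath.exp(z / (1 - cmath.cos(z)))

for t in [0.5, 0.2, 0.1, 0.05]:
    # |f| blows up from the right and vanishes from the left
    print(t, abs(f(t)), abs(f(-t)))
```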
|
17,772 | <p>I have an algorithm that segments depth images using surface fitting. At the moment the algorithm uses least squares polynomial fitting, but polynomials are not powerful enough to fit the shapes that are in these images. I replaced the explicit polynomial $z = f(x,y)$ with the implicit fitting problem $f(x,y,z) = 0$, where $x$ and $y$ are the pixel location, $z$ is the value of the pixel, and $f$ is a polynomial. A classic problem with many nice linear solutions. This did make the fitting much more powerful, but it left me with a problem I have been unable to solve for a while now. Once I had the least squares solution to the implicit poly, I had to solve for $z$; not easy at all, but I could search for the minimum (there are only 256 possible pixel values to search through, after all). The problem is that there was more than one solution for $z$, whereas a pixel can only have one value. That is, once the implicit poly was solved for $z$, there were several roots.</p>
<p>So my question is;</p>
<p>How do I formulate a least squares minimization problem for a single root? What constraints must I add?</p>
<p>This might not be completely clear, so here is an example:
You have a set of noisy data points $\{x,y\}$ that form a semi-circle about the origin in the positive $y$ only. I want to fit $f(x,y) = 0$ to the data points, or to be precise, a circle: $f = x^2 + y^2 + c = 0$.</p>
<p>The least squares minimization problem is a nice simple linear
$\min_c \sum_{i=0}^n (x_i^2 + y_i^2 + c)^2$,
but I am only interested in minimizing the data points' distance to the positive half of the circle. Solving for $y$ and taking the positive root gives us $y_i = \sqrt{-c-x_i^2}$, giving the actual minimization problem as
$\min_c \sum_{i=0}^n (y_i - \sqrt{-c-x_i^2})^2$,
NOT a nice linear problem at all. These two minimization problems will give different results, since the negative half of the circle should not try to fit itself to any data points. How can I constrain the linear problem so that it is equivalent to minimizing to a single root? </p>
<p>Or, in general, how can I constrain $\min_{a_{ijk}} \sum_{\text{data}} \big(\sum_{i,j,k}a_{ijk}x^iy^jz^k\big)^2$ to a minimization to a single root in $z$? </p>
<p>I hope this makes sense, and sorry for the length, but this has been annoying me for weeks.</p>
| Tategami | 4,537 | <p>I finally have an example ready...</p>
<p>So we have an image of a sphere, and we take data points on it.</p>
<p><img src="https://imgur.com/4mKD9.png" alt="alt text"></p>
<p>Since this is a sphere, it is reasonable to assume that we can fit a sphere to this part of the image. Here are the results:</p>
<p><img src="https://imgur.com/9fxf4.png" alt="alt text"></p>
<p>Blue is low error; we can see that most points fit well, and the others are the 'parasitic' roots. Interpolations look as follows:</p>
<p><img src="https://imgur.com/VRPKn.png" alt="alt text">
<img src="https://imgur.com/egftE.png" alt="alt text"></p>
<p>The correct root fits the image well, the other root is just getting in the way (at the moment) but here is the fitted function:</p>
<p><img src="https://imgur.com/C7pR0.png" alt="alt text"></p>
<p>Now this is a nice result, since the 'edge' of the fitted sphere corresponds to the edge of the sphere in the image. Something I imagine the suggested rational fitting can't do, and it is why I <em>want</em> several roots in the solution, but I only want to fit one of them to the data!</p>
<p>So what's the problem? Well, this is a nice result, but a 3rd order polynomial might fit it better (the data is technically not a simple sphere), or I might have other data that would need even higher order fitting (tori, for example, are 4th order). So I fit a 3rd order poly, and here is the resulting function:
<img src="https://imgur.com/v9kaL.png" alt="alt text"></p>
<p>Not nice, the roots are all over each other, fighting against each other to fit the data.</p>
<p>Now I want to re-iterate my question. One can always solve for $z$ and fit a single root to the data using non-linear regression, but my feeling is that we should be able to do the same using constrained linear fitting. And <em>how</em> is my question! Here is an example showing that constraining the fit does at least help. Here the $z$ and $z^2$ parameters are fixed to $1$, fitting the same 3rd order poly to the same data:
<img src="https://imgur.com/UVJga.png" alt="alt text"></p>
<p>A much friendlier 3rd order poly, but this restriction turns out to be too much, and I have lost the nice results.</p>
<p>So the question in terms of my original example:</p>
<p>How can I linearise the following minimization problem without reintroducing multiple roots?</p>
<p>$\min_c \sum_{i=0}^n (y_i - \sqrt{-c-x_i^2})^2$</p>
<p>and then we can see if this can be done in general for all implicit polys....</p>
<p>Thanks.</p>
|
255,230 | <p>A <em>complete linear hypergraph</em> is a <a href="https://en.wikipedia.org/wiki/Hypergraph" rel="nofollow noreferrer">hypergraph</a> $H=(V,E)$ such that </p>
<ol>
<li>$|e|\geq 2$ for all $e\in E$,</li>
<li>$|e_1\cap e_2|=1$ for all $e_1, e_2\in E$ with $e_1\neq e_2$, and</li>
<li>for all $v\in V$ we have $|\{e\in E:v\in e\}| \geq 2.$</li>
</ol>
<p>For $n>2$ set $\mathbb{N}_n =\{1,\ldots,n\}$ and $$\ell(n)=\min\{|E|: E\subseteq{\cal P}(\mathbb{N}_n) \text{ and }(\mathbb{N}_n, E) \text{ is complete linear}\},$$ so $\ell(n)$ is the <em>greatest lower bound</em> for the number of edges on a complete linear hypergraph on $n$ points.</p>
<p><strong>Question</strong>: What is the value of $\ell(n)$, depending on $n$?</p>
<hr>
<p>(Note concerning least upper bounds: If $H=(V,E)$ is a complete linear hypergraph with $|V|=n>2$, then $|E| \leq n$ by the <a href="https://en.wikipedia.org/wiki/De_Bruijn%E2%80%93Erd%C5%91s_theorem_(incidence_geometry)" rel="nofollow noreferrer">theorem of DeBruijn-Erdos</a>, and we can reach $|E| = n$ with the so-called "near pencil", see the same link.)</p>
| Pat Devlin | 22,512 | <p>Let $L$ be the number of edges in the configuration and $n$ the number of points. Then $n \leq {L \choose 2}$ and this bound is tight.</p>
<p><strong>There are configurations with that many points:</strong> Let the points correspond to the two element subsets of $\{1, 2, \ldots, k\}$. And let the edges correspond to the numbers $\{1, 2, \ldots , k\}$. Then say edge $L_i$ contains the vertex $v_{\{a,b\}}$ iff $i \in \{a,b\}$. This is linear, and it has ${k \choose 2}$ points and $k$ edges. It covers each point exactly twice.</p>
<p><strong>Any configuration has at most that many points:</strong> The previous answer can be pushed to give $L \geq \sqrt{2n}$, but we can get our above bound exactly. Let $\mathcal{L}$ be the set of edges. For each point $i \in \{1, 2, \ldots , n\}$, associate a set $V_i \subseteq \mathcal{L}$, which is the edges in $\mathcal{L}$ containing the point $i$. Then for all $i \neq j$, we have $|V_i \cap V_j| \leq 1$ (else there are two edges containing the same two points). We know that each point is covered at least twice, so $|V_i| \geq 2$. For each set $V_i$, let $W_i$ be any subset of size $2$ (it doesn't matter how we pick $W_i$). Then the sets $W_i$ are distinct $2$-element subsets of $\mathcal{L}$, which means there are at most ${L \choose 2}$ of them.</p>
<hr>
<p><strong>Added afterwards:</strong> To complete the discussion, we have the corresponding lower bound on $n$ that $L \leq n$. This is by <a href="https://en.wikipedia.org/wiki/Fisher's_inequality#Generalization" rel="nofollow noreferrer">(generalized) Fisher's inequality</a>, and it uses the fact that the intersection of any two edges has size $1$. And this can be attained by the near pencil mentioned in original post or a projective plane.</p>
<p>So together, we have $L \leq n \leq {L \choose 2}$ and both bounds are best possible.</p>
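<p>The pairs construction described above is easy to sanity-check by brute force; here is a small Python sketch (the choice $k = 6$ is arbitrary):</p>

```python
from itertools import combinations

# Points are the 2-element subsets of {1..k}; edge L_i consists of all
# pairs containing i. Check the three conditions of a complete linear
# hypergraph: edge size >= 2, pairwise intersections of size 1, and
# every point covered (exactly) twice.
k = 6
points = list(combinations(range(1, k + 1), 2))
edges = {i: {p for p in points if i in p} for i in range(1, k + 1)}

assert all(len(e) == k - 1 for e in edges.values())        # |e| = k-1 >= 2
assert all(len(edges[i] & edges[j]) == 1
           for i, j in combinations(range(1, k + 1), 2))   # meet in one point
assert all(sum(p in edges[i] for i in edges) == 2 for p in points)
print(len(points), "points,", len(edges), "edges")          # 15 points, 6 edges
```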
|
1,740,032 | <p>In layman's terms, why would anyone ever want to change basis? Do eigenvalues have to do with changing basis? </p>
| Plutoro | 108,709 | <p>Changing basis allows you to convert a matrix from a complicated form to a simple form. It is often possible to represent a matrix in a basis where the only nonzero elements are on the diagonal, which is exceptionally simple. These diagonal elements will be the eigenvalues of the matrix. This is especially helpful in solving linear systems of differential equations. Often in physics, engineering, logistics, and probably lots of other places, you have a system of differential equations which all depend on each other. In order to solve the system directly, you would have to solve all equations at once, which is hard. We can use matrices to describe this system. By changing basis, you may be able to make that matrix diagonal, which effectively separates the differential equations from each other, so you can solve just one at a time. This is comparatively easy. Situations where I know this comes up are:</p>
<ul>
<li>Quantum mechanics, solving the Schrodinger equation to describe the state of matter on the quantum level.</li>
<li>Electrical engineering, understanding the time-evolution of an electrical circuit.</li>
<li>Mechanical engineering, understanding the motion of a linear mechanical system, such as a multiple spring-mass system.</li>
</ul>
<p><strong>Edit:</strong></p>
<p>Here is another very important reason for change of basis I just remembered: suppose you have a matrix $A$, and you want to calculate $A^n$ for some large $n$. This takes a while, even for computers. But, if you can find a change of basis matrix $P$ so that $A=P^{-1}DP$ for a diagonal matrix $D$, then $$A^n=P^{-1}DPP^{-1}DPP^{-1}DP...P^{-1}DP.$$ All of the terms $PP^{-1}$ are the identity, so we get $$A^n=P^{-1}D^nP.$$ Powers of diagonal matrices are really easy: just raise each diagonal element to the $n$th power. So this method allows us to find powers of matrices very easily. When would we ever want to take large powers of matrices? Whenever finding numerical solutions to differential equations, partial or ordinary. This comes up all the time, so we are glad that we can change bases!</p>
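<p>That trick can be sketched in a few lines of NumPy (the matrix is an arbitrary diagonalizable example of mine; note that NumPy's <code>eig</code> hands back the decomposition as $A = PDP^{-1}$, so the roles of $P$ and $P^{-1}$ are swapped relative to the convention above):</p>

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2, so diagonalizable
n = 8

eigvals, P = np.linalg.eig(A)       # columns of P are eigenvectors
Dn = np.diag(eigvals ** n)          # powers of a diagonal matrix are elementwise
An = P @ Dn @ np.linalg.inv(P)      # A^n = P D^n P^{-1}

assert np.allclose(An, np.linalg.matrix_power(A, n))
```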
|
1,740,032 | <p>In layman's terms, why would anyone ever want to change basis? Do eigenvalues have to do with changing basis? </p>
| djechlin | 79,767 | <p>It's used to prove the <a href="https://en.wikipedia.org/wiki/Master_theorem" rel="noreferrer">Master Theorem</a>. Diagonal matrices are very easy to take powers of ($diag(a_1,\ldots,a_n)^k = diag(a_1^k, \ldots, a_n^k)$) and we have $(PAP^{-1})^k = PA^kP^{-1}$. The Master Theorem relies on changing to a diagonal basis, taking a large power, and changing back.</p>
<p>At the least, try working out <a href="https://en.wikipedia.org/wiki/Fibonacci_number" rel="noreferrer">the explicit formula for the Fibonacci numbers</a> this way.</p>
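<p>Carrying out that exercise: diagonalizing $\begin{pmatrix}1&1\\1&0\end{pmatrix}$ yields Binet's closed form, which can be sanity-checked numerically:</p>

```python
import math

phi = (1 + math.sqrt(5)) / 2          # eigenvalues of [[1, 1], [1, 0]]
psi = (1 - math.sqrt(5)) / 2

def fib(n):
    # Binet's formula; rounding absorbs the floating-point error
    return round((phi ** n - psi ** n) / math.sqrt(5))

print([fib(n) for n in range(10)])    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```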
|
1,740,032 | <p>In layman's terms, why would anyone ever want to change basis? Do eigenvalues have to do with changing basis? </p>
| Teodor | 333,240 | <h1>Structural dynamics with the Finite Element Method</h1>
<p>Computing the dynamic response of a structure is done with modal analysis.</p>
<p>The original vector space is displacements and rotations in each node. These are unsuited to dynamic response. The system of equations is therefore transformed into a new vector space where each element in the vector space represents a vibration mode.</p>
<p><a href="https://en.wikipedia.org/wiki/Vibration#Illustration_of_a_multiple_DOF_problem" rel="nofollow noreferrer">Wikipedia</a> has some nice animations of vibration modes, ordered from lowest to highest frequency:
<img src="https://upload.wikimedia.org/wikipedia/commons/6/68/Beam_mode_1.gif" alt="Mode 1">
<img src="https://upload.wikimedia.org/wikipedia/commons/7/76/Beam_mode_2.gif" alt="Mode 2">
<img src="https://upload.wikimedia.org/wikipedia/commons/4/43/Beam_mode_3.gif" alt="Mode 3">
<img src="https://upload.wikimedia.org/wikipedia/commons/c/ca/Beam_mode_4.gif" alt="Mode 4">
<img src="https://upload.wikimedia.org/wikipedia/commons/b/be/Beam_mode_5.gif" alt="Mode 5">
<img src="https://upload.wikimedia.org/wikipedia/commons/c/cb/Beam_mode_6.gif" alt="Mode 6"></p>
<p>Each mode (eigenvector) is the vibration to a specific frequency (eigenvalue). When these have been found, we can compute interesting stuff like the structural response to an earthquake of a certain <a href="https://en.wikipedia.org/wiki/Response_spectrum#Seismic_Response_Spectra" rel="nofollow noreferrer">frequency spectrum</a>. Afterwards, the basis is changed back to the original to obtain forces and displacements in each node.</p>
|
24,660 | <p>Pros and cons of the number line model vs. the color counter model</p>
<p>When teaching multiplication to elementary schoolers, the "number line model" and the "color counter model" are both widely used techniques. Can somebody help me understand some of the pros and cons of either model? Thank you</p>
| Steven Gubkin | 117 | <p>The color counter model is as follows: model an integer as a collection of positive and negative particles. I will use an ordered pair <span class="math-container">$(P,N)$</span> to denote the number of positive and negative chips present.</p>
<p>The interesting thing here is that different pairs can represent the same integer. <span class="math-container">$(3,0) = (4,1) = (5,2)$</span>, and <span class="math-container">$(0,2) = (1,3) = (2,4)$</span> for instance. <span class="math-container">$(a,b)$</span> is equivalent to <span class="math-container">$(c,d)$</span> if and only if <span class="math-container">$a+d = b+c$</span>.</p>
<p>This is actually the official <em>definition</em> of the integers when you build them up from scratch. As such it is logically prior to the number line in a rigorous development. It also rhymes with the definition of equivalent fractions: in the foundations of mathematics, we actually <em>define</em> the rational numbers as equivalence classes of pairs of integers (with nonzero second coordinate), with $(a,b) \sim (c,d)$ if and only if $a*d=b*c$.</p>
<p>This is all far too abstract for school children, but they can learn to "cancel" pos/neg pairs and "create" pos/neg pairs when it suits them. This is valuable work, and can support learning the rules for symbolic manipulation of signed numbers.</p>
<p>I think that both models are valuable. Negative numbers are strange beasts, and it takes a lot of work to understand their internal consistency, how their behavior is the "only sensible" way to make the properties of arithmetic work out, how they can sensibly model real world phenomena (temperature, debt, etc).</p>
|
365,166 | <p>I have trouble coming up with combinatorial proofs. How would you justify this equality?</p>
<p><span class="math-container">$$
n\binom {n-1}{k-1} = k \binom nk
$$</span>
where <span class="math-container">$n$</span> is a positive integer and <span class="math-container">$k$</span> is an integer.</p>
| André Nicolas | 6,312 | <p>We have a group of $n$ people, and want to count the number of ways to choose a committee of $k$ people with <strong>Chair</strong>. </p>
<p>For the left-hand side, we select the Chair first, and then $k-1$ from the remaining $n-1$ to join her.</p>
<p>For the right-hand side, we choose $k$ people, and select one of them to be Chair. </p>
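<p>The double count is easy to confirm by brute force for small $n$ (the enumeration below is my own illustration):</p>

```python
from math import comb
from itertools import combinations

def committees_with_chair(n, k):
    # choose a committee of k people, then a chair inside it
    return sum(1 for committee in combinations(range(n), k)
                 for chair in committee)

# both closed forms agree with the direct count
for n in range(1, 9):
    for k in range(1, n + 1):
        assert n * comb(n - 1, k - 1) == k * comb(n, k) == committees_with_chair(n, k)
```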
|
365,166 | <p>I have trouble coming up with combinatorial proofs. How would you justify this equality?</p>
<p><span class="math-container">$$
n\binom {n-1}{k-1} = k \binom nk
$$</span>
where <span class="math-container">$n$</span> is a positive integer and <span class="math-container">$k$</span> is an integer.</p>
| Steven John | 73,144 | <p>Since k is on the right-hand side of the equation, switch the sides so it is on the left-hand side of the equation. </p>
<p>Multiply n by each term inside the parentheses. </p>
<p>Multiply k by each term inside the parentheses. </p>
<p>Since k is on the right-hand side of the equation, switch the sides so it is on the left-hand side of the equation. </p>
<p>Since the variable is in the denominator on the left-hand side of the equation, this can be solved as a ratio. For example, is equivalent to </p>
<p>Since k is on the right-hand side of the equation, switch the sides so it is on the left-hand side of the equation. </p>
<p>Reduce the expression by canceling out all common factors from the numerator and denominator. </p>
<p>k-1=(n-1) </p>
<p>Remove the parentheses around the expression n-1. </p>
<p>k-1=n-1 </p>
<p>Since -1 does not contain the variable to solve for, move it to the right-hand side of the equation by adding 1 to both sides. </p>
<p>k=1+n-1 </p>
<p>Subtract 1 from 1 to get 0. </p>
<p>k=n </p>
<p>To rewrite as a function of n (e.g. f(n)), write the equation so that k is by itself on one side of the equal sign and an expression involving only n is on the other side. </p>
<p>f(n)=n </p>
|
3,611,845 | <p>For the Stable Matching algorithm by Gale-Shapley, how do I prove that at most one man will get his worst choice? </p>
<p>My intuition is that I have to use contradiction. Assume that there are two men who will get their worst preferences: <span class="math-container">$M_1$</span> with <span class="math-container">$W_1$</span> and <span class="math-container">$M_2$</span> with <span class="math-container">$W_2$</span>. I have to prove <span class="math-container">$M_2$</span> and <span class="math-container">$W_2$</span> are unstable. However, I can't think of anything. Can anyone help me with the proof? </p>
<p>Thanks! </p>
| Community | -1 | <p>Suppose all men have the same preferences and are indifferent over all women. In any stable match, any man who matches gets his worst choice.</p>
<p>Suppose all the men have strict preferences. Suppose there are <span class="math-container">$K$</span> men and <span class="math-container">$K+1$</span> women. All men rank the same woman last, but all women and all men find one another acceptable. None of the men get their worst choice in the outcome of the GS algorithm with the men proposing and the claim is false.</p>
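<p>For readers who want to experiment with such examples, here is a minimal men-proposing deferred-acceptance sketch in Python. The 2×2 preference lists are illustrative (mine, not from the question); in this instance exactly one man ends up with his last choice.</p>

```python
def gale_shapley(men_prefs, women_prefs):
    # rank[w][m]: position of m in w's list (lower is better)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)                 # men with proposals still to make
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}                           # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])        # w trades up; her old partner re-enters
            engaged[w] = m
        else:
            free.append(m)                 # w rejects m; he will try his next choice
    return {m: w for w, m in engaged.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m1", "m2"], "w2": ["m1", "m2"]}
match = gale_shapley(men, women)  # here only m2 ends with his last choice
```

<p>Varying the preference lists (ties, unequal numbers of men and women) makes it easy to test the counterexamples described above.</p>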
|
2,100,726 | <p><strong>Problem:</strong> In a binomial experiment with 45 trials, the probability of more than 25 successes can be approximated by $P(z > \frac{25-27}{3.29})$</p>
<p>What is the probability of success of a single trial of this experiment? </p>
<p><strong>Choices:</strong> </p>
<ul>
<li>0.07</li>
<li>0.56</li>
<li>0.79</li>
<li>0.61</li>
<li>0.6</li>
</ul>
<p><strong>My Solution:</strong>
Since the experiment can be approximated by a normal distribution, I evaluated P(z) using NormalCdf on my calculator. $$P(z > \frac{25-27}{3.29}) = NormCdf(\frac{25-27}{3.29},\infty, 0, 1) = 0.72834 $$</p>
<p>Then, I set up the binomialCdf expression as $binomCdf(45, p, 26, 45)$ using 26 as the minimum number of successes as the question specifies it to be more than 25. </p>
<p><strong>Through inspection, I found p to be 0.61</strong>. However, the answer key says otherwise. </p>
<p>Can someone check the error in my work? Thank you!</p>
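<p>A quick sanity check (not from the original post): the normal approximation uses μ = np and σ = √(np(1−p)), so one can test which listed choice reproduces the mean 27 and standard deviation 3.29 appearing in the z-expression.</p>

```python
import math

n, mu, sigma = 45, 27.0, 3.29   # parameters read off P(z > (25 - 27)/3.29)

candidates = [0.07, 0.56, 0.79, 0.61, 0.6]

# Distance between each candidate's (np, sqrt(np(1-p))) and (mu, sigma).
def mismatch(p):
    return abs(n * p - mu) + abs(math.sqrt(n * p * (1 - p)) - sigma)

best = min(candidates, key=mismatch)
```

<p>This suggests that p = 0.6 (with np = 27 and √(np(1−p)) ≈ 3.286) matches the stated approximation most closely.</p>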
| Cettt | 397,540 | <p>Note that
$$
\lim_{x \to \infty} e^{\frac 1x}= e^0=1,
$$
and that $(0, \infty) \ni x \mapsto 1-e^{\frac 1x}$ is negative, since $e^{\frac 1x}>1$ for $x>0$.
Therefore
$$
\lim_{x \to \infty} f(x) = - \infty.
$$
Also, $\lim_{x \to \infty} x = \infty$. Therefore by taking the limit $\lim_{x \to \infty} f(x) +x$ you have an expression of the type $-\infty+\infty$ which can - depending on the situation - take any value in $[-\infty,\infty]$.</p>
<p>For example for $c \in \Bbb R$ define $f(x)=-x+ c$ then you also have that $\lim_{x \to \infty} f(x) = - \infty$ and $\lim_{x \to \infty} f(x)+x = c$.</p>
|
2,100,726 | <p><strong>Problem:</strong> In a binomial experiment with 45 trials, the probability of more than 25 successes can be approximated by $P(z > \frac{25-27}{3.29})$</p>
<p>What is the probability of success of a single trial of this experiment? </p>
<p><strong>Choices:</strong> </p>
<ul>
<li>0.07</li>
<li>0.56</li>
<li>0.79</li>
<li>0.61</li>
<li>0.6</li>
</ul>
<p><strong>My Solution:</strong>
Since the experiment can be approximated by a normal distribution, I evaluated P(z) using NormalCdf on my calculator. $$P(z > \frac{25-27}{3.29}) = NormCdf(\frac{25-27}{3.29},\infty, 0, 1) = 0.72834 $$</p>
<p>Then, I set up the binomialCdf expression as $binomCdf(45, p, 26, 45)$ using 26 as the minimum number of successes as the question specifies it to be more than 25. </p>
<p><strong>Through inspection, I found p to be 0.61</strong>. However, the answer key says otherwise. </p>
<p>Can someone check the error in my work? Thank you!</p>
| Mark Viola | 218,419 | <p>Note that as $x\to \infty$, the asymptotic expansion of $e^{1/x}$ is </p>
<p>$$e^{1/x}=1+\frac1x+\frac1{2x^2}+O\left(\frac1{x^3}\right)$$</p>
<p>Hence, </p>
<p>$$\begin{align}
\lim_{x\to \infty}\left(\frac{1}{1-e^{1/x}}+x\right)&=\lim_{x\to \infty}\left(\frac{1}{1-\left(1+\frac1x+\frac1{2x^2}+O\left(\frac1{x^3}\right)\right)}+x\right)\\\\
&=\lim_{x\to \infty}\left(x-\frac{x}{1+\frac{1}{2x}+O\left(\frac1{x^2}\right)}\right)\\\\
&=\lim_{x\to \infty}\left(\frac12+O\left(\frac1x\right)\right)\\\\
&=\frac12
\end{align}$$</p>
|
2,100,726 | <p><strong>Problem:</strong> In a binomial experiment with 45 trials, the probability of more than 25 successes can be approximated by $P(z > \frac{25-27}{3.29})$</p>
<p>What is the probability of success of a single trial of this experiment? </p>
<p><strong>Choices:</strong> </p>
<ul>
<li>0.07</li>
<li>0.56</li>
<li>0.79</li>
<li>0.61</li>
<li>0.6</li>
</ul>
<p><strong>My Solution:</strong>
Since the experiment can be approximated by a normal distribution, I evaluated P(z) using NormalCdf on my calculator. $$P(z > \frac{25-27}{3.29}) = NormCdf(\frac{25-27}{3.29},\infty, 0, 1) = 0.72834 $$</p>
<p>Then, I set up the binomialCdf expression as $binomCdf(45, p, 26, 45)$ using 26 as the minimum number of successes as the question specifies it to be more than 25. </p>
<p><strong>Through inspection, I found p to be 0.61</strong>. However, the answer key says otherwise. </p>
<p>Can someone check the error in my work? Thank you!</p>
| egreg | 62,967 | <p>In cases like these you can easily get tricked up, so it can be better to do $x=1/t$, whereupon the first limit becomes
$$
\lim_{x\to\infty}\frac{1}{1-e^{1/x}}=
\lim_{t\to0^+}\frac{1}{1-e^t}=-\infty
$$
because $1-e^t<0$ for $t>0$.</p>
<p>Also
$$
\lim_{x\to\infty}(f(x)+x)=
\lim_{t\to0^+}\left(\frac{1}{1-e^t}+\frac{1}{t}\right)=
\lim_{t\to0^+}\frac{t+1-e^t}{t(1-e^t)}=
\lim_{t\to0^+}\frac{t+1-1-t-\frac{t^2}{2}+o(t^2)}{t(1-1-t+o(t))}=
\frac{1}{2}
$$</p>
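<p>The value 1/2 is easy to corroborate numerically; a quick check:</p>

```python
import math

def g(x):
    # f(x) + x with f(x) = 1 / (1 - e^(1/x))
    return 1.0 / (1.0 - math.exp(1.0 / x)) + x

values = [g(10.0 ** k) for k in (2, 3, 4)]  # should approach 1/2 as x grows
```

<p>The deviation from 1/2 shrinks like 1/(12x), consistent with the expansions above.</p>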
|
2,708,902 | <p>I am looking for an inductive proof of the following.</p>
<blockquote>
<p>Let $\alpha \in S_n$ be a cycle of length $p$. Then $\alpha^p =1$.</p>
</blockquote>
<p>For example, if $\alpha = (123)$, then $\alpha^3 = 1$, as is easy to verify.</p>
<p><em>My attempt:</em> Induction on $p$: The base case is true for $p=1$. Inductive case: assume the statement is true for $k$, i.e. $\alpha^k = 1$. We want to prove it for $\alpha^{k+1}$. Can $\alpha^{k+1}$ be rewritten as a product of two cycles of length less than $p$??</p>
<p>Note: I know that this question is already answered elsewhere on the site; <strong>however, I want to solve the above problem using induction</strong>.</p>
| David C. Ullrich | 248,223 | <p>No. A polynomial over $R$ is not the same thing as the function from $R$ to $R$ defined by a polynomial. In particular, if we assume that $r^3=0$ for every $r\in R$ then $$p(x)=2x^3+x^2$$and $$q(x)=x^2$$ define the same function, but (assuming that $2\ne0$ in $R$) they are different polynomials, and in fact $q$ is monic while $p$ is not.</p>
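<p>A closely related concrete illustration (my own example, over the two-element field rather than the hypothetical ring in the answer): on F_2 = {0, 1} the polynomials x² and x define the same function while being distinct polynomials of different degrees.</p>

```python
# Over F_2 = {0, 1}, x^2 and x take the same value at every point,
# yet as polynomials they are distinct (degrees 2 and 1).
def p(x):
    return (x * x) % 2

def q(x):
    return x % 2

agree_everywhere = all(p(x) == q(x) for x in (0, 1))
```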
|
1,686,455 | <p>For $a_1, \ldots, a_n > 0$, I need to prove that
$( \sum_{i=1}^{n} a_i)( \sum_{i=1}^{n} \frac{1}{a_i})\geq n^2 $.</p>
<p>When do we have equality?</p>
<p>Should I do it by induction? Any hints?</p>
| Daniel Akech Thiong | 169,316 | <p>Hint: Take $y_{i} = \sqrt{a_{i}}$ and $x_{i} = \frac{1}{\sqrt{a_{i}}}$ in the Cauchy-Schwarz inequality. </p>
<p>Then $n^{2} = \left(\sum_{i=1}^{n}x_{i}y_{i} \right)^{2} \leq \left (\sum_{i=1}^{n}x_{i}^{2} \right) \cdot \left(\sum_{i=1}^{n} y_{i}^{2} \right)$</p>
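<p>A quick numerical illustration of the inequality and of the equality case (the sample values are mine):</p>

```python
# (a_1 + ... + a_n)(1/a_1 + ... + 1/a_n) >= n^2, with equality iff all a_i coincide.
def lhs(a):
    return sum(a) * sum(1.0 / x for x in a)

sample = [1.0, 2.0, 5.0, 0.5]   # strict inequality expected
equal = [3.0] * 4               # equality case: value should be n^2 = 16
```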
|
77,246 | <p>What is the "right" analog in the orbifold case of the singular homology of a topological space?
We cannot just take the homology of the underlying space, because it does not contain much information.
For example, is there any kind of homology of orbifolds such that the first homology group is the abelianization of the fundamental group of the orbifold? And such that an $n$-dimensional orbifold has all homology groups above degree $n$ equal to zero? And if there is such a homology, will there be Poincaré duality in the orbifold case???</p>
<p>Thanks very much! </p>
| David Roberts | 4,177 | <p>If your orbifold is given to you as a proper etale Lie/topological groupoid, you can:</p>
<ul>
<li>take the nerve of this groupoid, to get a simplicial manifold/space, </li>
<li>take the levelwise singular set to get a bisimplicial set, </li>
<li>take the diagonal to get a simplicial set, </li>
<li>take geometric realisation to get a topological space, </li>
<li>and then take (co)homology. </li>
</ul>
<p>This recipe at the very least gives you the homotopy type of the orbifold as a space (to which you can apply the (co)homology functor you want), but it is perhaps too big and unwieldy for your purposes. </p>
|
322,745 | <p>I'm faced with the following problem on primes. Does someone have any clue? Is it (a reformulation of) an open problem?</p>
<p>Let <span class="math-container">$d$</span> be a positive integer, <span class="math-container">$d\geq 2$</span>. By Dirichlet's theorem, there is an infinite set <span class="math-container">$\mathcal{P}$</span> of primes congruent to <span class="math-container">$1$</span> modulo <span class="math-container">$d$</span>. </p>
<p>Consider the set <span class="math-container">$N$</span> of integers <span class="math-container">$n=1+kd$</span> where all prime divisors of <span class="math-container">$k$</span> belong to <span class="math-container">$\mathcal{P}$</span>. </p>
<p>Could we expect that <span class="math-container">$N$</span> contains infinitely many primes? (we need at least <span class="math-container">$d$</span> to be even).</p>
| Robert Israel | 13,650 | <p>Yes, we should expect it.
For any even <span class="math-container">$d \ge 2$</span>, Dickson's conjecture implies that there are infinitely many primes <span class="math-container">$p \equiv 1 \bmod d$</span> such that <span class="math-container">$1 + p d$</span> is prime.</p>
<p>Of course, expecting and proving are very different matters.</p>
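<p>The expectation is easy to probe numerically; a small sketch using trial division (with d = 2 for illustration):</p>

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

d = 2
# Primes p = 1 (mod d) such that 1 + p*d is also prime, as in the answer above.
hits = [p for p in range(2, 200)
        if is_prime(p) and p % d == 1 and is_prime(1 + p * d)]
```

<p>Such pairs show up in abundance in small ranges, in line with (but of course not proving) the conjectural expectation.</p>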
|
197,421 | <p>Consider the quotient group $\mathbb{Q}/\mathbb{Z}$ of the additive group of rational numbers. Then how to find the order of the element $2/3 + \mathbb{Z} $ in $\mathbb{Q}/\mathbb{Z}$.</p>
| Fredrik Meyer | 4,284 | <p>Supplementing Makoto's answer, we can generalize: the order of an element $p/q \in \mathbb{Q}/\mathbb{Z}$ is the least integer $n$ such that $(p/q)n$ is an integer. If the fraction is written in reduced form, this is the same as the denominator.</p>
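<p>In code, the observation reads: the order of p/q + Z is the reduced denominator q / gcd(p, q); a quick sketch:</p>

```python
from math import gcd

def order_in_Q_mod_Z(p, q):
    # least n >= 1 such that n * (p/q) is an integer: the reduced denominator
    return q // gcd(p, q)

order = order_in_Q_mod_Z(2, 3)  # the element 2/3 + Z from the question
```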
|
4,293,199 | <p>let me preface by saying this <em>is</em> from a homework question, but the question is not to plot the decision boundary, just to train the model and do some predictions. I have already done that and my predictions <em>seem</em> to be correct.</p>
<p>But I would like to verify my results by plotting the decision boundary. This is not part of the homework.</p>
<p>The question was to take a simple dataset <span class="math-container">$$ X = \begin{bmatrix}
-1&0&2&0&1&2\\
-1&1&0&-2&0&-1
\end{bmatrix} $$</span></p>
<p><span class="math-container">$$ y = \begin{bmatrix}
1&1&1&-1&-1&-1
\end{bmatrix}
$$</span></p>
<p>Given this, convert the input to non-linear functions:
<span class="math-container">$$
z = \begin{bmatrix}
x_1\\x_2\\x_1^2\\x_1x_2\\x_2^2
\end{bmatrix}
$$</span></p>
<p>Then train the binary logistic regression model to determine parameters <span class="math-container">$\hat{w} = \begin{bmatrix} w\\b \end{bmatrix}$</span> using <span class="math-container">$\hat{z} = \begin{bmatrix} z\\1 \end{bmatrix}$</span></p>
<p>So, now assume that the model is trained and I have <span class="math-container">$\hat{w}^*$</span> and would like to plot my decision boundary <span class="math-container">$\hat{w}^{*T}\hat{z} = 0$</span></p>
<p>Currently to scatter the matrix I have</p>
<pre><code>scatter(X(1,:), X(2,:))
axis([-1.5 2.5 -2.5 1.5])
hold on
% what do I do to plot the decision boundary?
</code></pre>
<p>Not sure where to go from here. I have tried using symbolic functions, but <code>fplot</code> doesn't like using 2 variables.</p>
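<p>One standard way forward is to evaluate the decision function on a grid and trace its sign changes; in MATLAB this is what <code>contour(X, Y, Z, [0 0])</code> over a <code>meshgrid</code> does (recent versions also provide <code>fimplicit</code>). The grid idea is sketched below in plain Python; the weights are made-up placeholders, not the trained values from the exercise.</p>

```python
# Decision function f(x, y) = w . [x, y, x^2, x*y, y^2, 1]; boundary = zero level set.
w = [0.5, -1.0, 0.25, 0.1, -0.3, 0.2]    # placeholder weights, NOT trained values

def decision(x, y):
    features = [x, y, x * x, x * y, y * y, 1.0]
    return sum(wi * fi for wi, fi in zip(w, features))

step = 0.1
xs = [-1.5 + step * i for i in range(41)]
ys = [-2.5 + step * j for j in range(41)]

# Grid points where the sign flips toward a neighbour lie next to the boundary.
boundary = [(x, y) for x in xs for y in ys
            if decision(x, y) * decision(x + step, y) < 0
            or decision(x, y) * decision(x, y + step) < 0]
```

<p>With these points in hand, scattering them over the data (or contouring the grid at level 0) displays the decision boundary.</p>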
| MyMolecules | 962,751 | <p>Use the <span class="math-container">$\cos (A+B)$</span> formula to show that, for a natural number <span class="math-container">$n$</span>,</p>
<p><span class="math-container">$$\cos nx + \cos \left(nx+\frac{2n\pi}{3}\right) + \cos \left(nx+\frac{4n\pi}{3}\right) = \begin{cases} 0 , \quad \text{when n is coprime to 3} \\ 3\cos nx , \quad \text{when n is a multiple of 3} \end{cases}$$</span></p>
<p>Combining this with the identity from other answer,</p>
<p><span class="math-container">$$\cos^5 \theta = \frac{1}{16}\cos 5\theta + \frac{5}{16}\cos 3\theta + \frac{10}{16}\cos \theta$$</span></p>
<p>it is easy to see that the first and third terms, <span class="math-container">$n=1,5$</span>, each sum to zero, while the middle term remains, giving</p>
<p><span class="math-container">$$\cos^5x+\cos^5\left( x+\frac{2\pi}{3}\right) + \cos^5\left( x+\frac{4\pi}{3}\right) = \frac{15}{16}\cos 3x$$</span></p>
|
4,293,199 | <p>let me preface by saying this <em>is</em> from a homework question, but the question is not to plot the decision boundary, just to train the model and do some predictions. I have already done that and my predictions <em>seem</em> to be correct.</p>
<p>But I would like to verify my results by plotting the decision boundary. This is not part of the homework.</p>
<p>The question was to take a simple dataset <span class="math-container">$$ X = \begin{bmatrix}
-1&0&2&0&1&2\\
-1&1&0&-2&0&-1
\end{bmatrix} $$</span></p>
<p><span class="math-container">$$ y = \begin{bmatrix}
1&1&1&-1&-1&-1
\end{bmatrix}
$$</span></p>
<p>Given this, convert the input to non-linear functions:
<span class="math-container">$$
z = \begin{bmatrix}
x_1\\x_2\\x_1^2\\x_1x_2\\x_2^2
\end{bmatrix}
$$</span></p>
<p>Then train the binary logistic regression model to determine parameters <span class="math-container">$\hat{w} = \begin{bmatrix} w\\b \end{bmatrix}$</span> using <span class="math-container">$\hat{z} = \begin{bmatrix} z\\1 \end{bmatrix}$</span></p>
<p>So, now assume that the model is trained and I have <span class="math-container">$\hat{w}^*$</span> and would like to plot my decision boundary <span class="math-container">$\hat{w}^{*T}\hat{z} = 0$</span></p>
<p>Currently to scatter the matrix I have</p>
<pre><code>scatter(X(1,:), X(2,:))
axis([-1.5 2.5 -2.5 1.5])
hold on
% what do I do to plot the decision boundary?
</code></pre>
<p>Not sure where to go from here. I have tried using symbolic functions, but <code>fplot</code> doesn't like using 2 variables.</p>
| Jyrki Lahtonen | 11,619 | <p>Swatting this fly with an oversized mallet.</p>
<p>Let
<span class="math-container">$$f(x)=\cos^5x+\cos^5\left( x+\frac{2\pi}{3}\right) + \cos^5\left( x+\frac{4\pi}{3}\right).$$</span>
Because cosine has period <span class="math-container">$2\pi$</span>, we easily see that <span class="math-container">$f(x)$</span> has period <span class="math-container">$2\pi/3$</span>. It is well behaved (continuously differentiable), so it can be expanded into a Fourier series.
The function <span class="math-container">$f(x)$</span> is even, <span class="math-container">$f(-x)=f(x)$</span> for all <span class="math-container">$x$</span>, so the expansion only contains cosine terms.
As you observed, it is a quintic polynomial in <span class="math-container">$\cos x$</span>, so
<span class="math-container">$$
f(x)=\sum_{k=0}^5a_k\cos kx
$$</span>
for some constants <span class="math-container">$a_k\in\Bbb{R}$</span>. Period <span class="math-container">$2\pi/3$</span> implies that <span class="math-container">$a_k=0$</span> unless <span class="math-container">$k$</span> is divisible by three. At this point we know that
<span class="math-container">$$f(x)=a_0+a_3\cos 3x$$</span>
for some constants <span class="math-container">$a_0,a_3$</span>. We have
<span class="math-container">$f(x+\pi)=-f(x)$</span> for all <span class="math-container">$x$</span> implying that <span class="math-container">$a_0=0$</span> (you can see this also by observing that only odd powers of <span class="math-container">$\cos x$</span> occur in that quintic, or by observing that <span class="math-container">$\int_0^{2\pi}f(x)\,dx=0$</span>).</p>
<p>Last but not least
<span class="math-container">$$a_3=f(0)=1+(-1/2)^5+(-1/2)^5=15/16.$$</span>
So
<span class="math-container">$$f(x)=\frac{15}{16}\cos 3x.$$</span></p>
|
1,649,997 | <p>I'm having a hard time coming up with two unbounded sequences where their difference yields $0$ when $n\rightarrow\infty$. Any ideas?</p>
| Bernard | 202,857 | <p>Take $(\sqrt{n+1})$ and $(\sqrt n)$.</p>
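<p>Indeed, $\sqrt{n+1}-\sqrt n = 1/(\sqrt{n+1}+\sqrt n) \to 0$ while both sequences are unbounded; a one-line numeric check:</p>

```python
import math

# sqrt(n+1) - sqrt(n) = 1 / (sqrt(n+1) + sqrt(n)) shrinks to 0 as n grows.
diffs = [math.sqrt(n + 1) - math.sqrt(n) for n in (10, 1000, 100000)]
```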
|
2,468,781 | <p>The law of Universal Generalization states that: </p>
<p>P(c)</p>
<p>(x) P(x) </p>
<p>Now, I understand that this works only if c is <em>any random element</em> from the universe. Such arbitrary selection makes this rule mathematically valid. However, I do not understand how it holds true in practical examples. </p>
<p>For instance, if I randomly pick out a number from the set of the integers 1 to 10 and it turns out to be a prime number, I can infer using Universal Generalization that all the numbers in the set are prime. But this would be a fallacious conclusion. How then, can the law be used in practice? </p>
| skyking | 265,767 | <p>The rule requires $c$ not to be mentioned in the prerequisites of the theory or any assumptions made. This means that it is a symbol whose meaning hasn't been restricted in <strong>any</strong> way. It is this lack of restriction that makes it possible to generalize.</p>
<p>If $P(c)$ can be proved with a $c$ in such a theory it would be able to reproduce the same steps with any other symbol as well and the proof would be valid. </p>
<p>So it's not about actually picking a specimen when using this. At most one uses some kind of restriction on the symbol that will then become a restriction of the generalization - for example a proof can start with letting $c$ be a positive real number and then using that assumption you prove $P(c)$. Then the generalization would need to be restricted as $\forall x\in\mathbb R^+: P(x)$.</p>
<p>If you actually pick a specimen you restrict the symbol to the extreme. The generalization would be restricted to the point of meaninglessness - you basically arrive at something like "any integer equal to seven is a prime", not much of a generalisation.</p>
|
1,132,187 | <p>$x^2 = y^2 + xy + 5$, where $x$ and $y$ are natural numbers.</p>
<p>Here is what I have so far:</p>
<ol>
<li><p>$x \neq y$ (from the equation).</p></li>
<li><p>$x$ is always odd (using the equation and assuming $2$ cases - $y$ is odd or $y$ is even).</p></li>
<li><p>Solving for the equation as a quadratic in $y$, $5x^2 - 20 \geq 0$ and a perfect square.</p></li>
</ol>
<p>I feel I am missing a crucial point which will guide me towards a solution.</p>
<p>Hint please!</p>
| Will Jagy | 10,400 | <p>I like Conway's graphical method for these, the quadratic form is $f(x,y) = x^2 - xy - y^2,$ and we are looking for the value $5.$ As you can see, these happen when $y,x$ are consecutive Lucas numbers, $x$ is the larger one. I will need to look this up, there is something about odd/even indices as well. Alright, looked it up, the solutions with natural numbers are
$$ x = L_{2n}, \; \; y = L_{2n-1}; \; \; \; \; \; \; n \geq 1 $$</p>
<p>See chapter 1 in <a href="http://www.maths.ed.ac.uk/~aar/papers/conwaysens.pdf" rel="nofollow noreferrer">http://www.maths.ed.ac.uk/~aar/papers/conwaysens.pdf</a> </p>
<p>Let's see, Conway likes the letter $h$ for the little blue numbers labelling edges, the arrow points in the direction of increasing form value. He likes $a,b$ for the values, and two values $a,b$ on either side of an edge $h$ denote the quadratic form $a x^2 + h x y + b y^2$ or $a x^2 - h x y + b y^2$ which is "equivalent" to the original. Our original is $x^2 - xy - y^2$ as that arrow points left, we see that $x^2 + xy - y^2,$ $x^2 + 3xy + y^2,$ and $x^2 + 5 xy + 5 y^2$ are equivalent to that. So is $5 x^2 + 5 xy + x^2.$</p>
<p>Conway does not generally draw in the $x,y$ coordinates of a point, I did that in green. Conway prefers to write those as vectors $e_1$ or $e_2.$ My way is done in another book, by Stillwell. Finally, neither author forces the diagram to show the automorphism group, but, for MSE, that seems an important aspect. </p>
<p><img src="https://i.stack.imgur.com/NrjRw.jpg" alt="enter image description here"></p>
<p>What is traditionally called the <strong>automorphism group</strong> of the quadratic forms tells us that if we have a solution $x^2 - x y - y^2 = 5,$ then we get another one from
$$ (2x+y, x+y). $$ This is the matrix product
$$
\left(
\begin{array}{cc}
2 & 1 \\
1 & 1
\end{array}
\right)
\left(
\begin{array}{c}
x \\
y
\end{array}
\right) =
\left(
\begin{array}{c}
2x+y \\
x+y
\end{array}
\right)
$$
The matrix
$$
A =
\left(
\begin{array}{cc}
2 & 1 \\
1 & 1
\end{array}
\right)
$$
has determinant $1,$ and trace $3.$ So, Cayley-Hamilton says
$$ A^2 - 3 A + I = 0, $$ or
$$ A^2 = 3 A - I . $$
This tells us that, if we write the solutions as $(x_n, y_n),$ we have
$$ x_{n+2} = 3 x_{n+1} - x_n, $$
$$ y_{n+2} = 3 y_{n+1} - y_n $$
as identities in the separate variables. These lead quickly to confirmation of the Lucas property. </p>
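<p>The automorphism and the Cayley–Hamilton recurrence are easy to iterate from the smallest solution $(x,y)=(3,1)=(L_2,L_1)$; a quick sketch:</p>

```python
# Apply (x, y) -> (2x + y, x + y) repeatedly and verify x^2 - x*y - y^2 = 5.
x, y = 3, 1
solutions = [(x, y)]
for _ in range(5):
    x, y = 2 * x + y, x + y
    solutions.append((x, y))
# Successive pairs are consecutive even/odd-indexed Lucas numbers:
# (3, 1), (7, 4), (18, 11), (47, 29), ...
```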
|
2,391,505 | <p>Most ODE textbooks provide the following steps to the solution of a separable differential equation (here the exponential equation is used as an example):</p>
<p>$$\frac{dN}{dt}=-\lambda N(t) \Rightarrow \frac{dN}{N(t)}=-\lambda dt\Rightarrow \int\frac{1}{N}dN=-\lambda\int dt \Rightarrow \ln\mid N\mid = -\lambda t+C\Rightarrow \mid N(t) \mid=e^{-\lambda t +C}=e^Ce^{-\lambda t}\Rightarrow N(t)=e^Ce^{-\lambda t} \text{ if N $>0$ and }N(t)=-e^Ce^{-\lambda t} \text{ if N < 0}.$$</p>
<p>Ultimately this can be simplified to $N(t)=Ae^{-\lambda t}$ where $A=\pm e^C$ is positive or negative accordingly. </p>
<p>I find this demonstration unintuitive. Doesn't the author know that math students have just spent 3 semesters of Calculus having instructors insist that the Leibniz derivative operator is not a fraction, that these infinitesimals are objects that do not really exist on the real number line and which require great mathematical maturity to comprehend? Now, can we try to make this demonstration in a manner that respects our understanding of the Leibniz derivative operator as a symbol that cannot be broken apart?</p>
<p>EDIT: Questions similar to this have been asked all over this forum, few have satisfactory answers, however I have ran into this one with some great posts: <a href="https://math.stackexchange.com/questions/2142783/separable-differential-equations-detaching-dy-dx">Separable differential equations: detaching dy/dx</a> </p>
| eyeballfrog | 395,748 | <p>The "splitting of the derivative" is just a shorthand for u-substitution in the resulting integral. u-substitution is usually written as
$$
\int f(u(x))u'(x) dx = \int f(u)du
$$
but this statement in Leibniz notation is
$$
\int f(u(x))\frac{du}{dx} dx = \int f(u) du
$$
which is the justification for the formal algebra on the differentials. In the case of your differential equation, the proper analysis is
$$
\frac{dN}{dt} = -\lambda N\Longrightarrow\int \frac{1}{N}\frac{dN}{dt} dt = \int -\lambda \,dt\Longrightarrow \int\frac{dN}{N} = -\lambda \int dt\Longrightarrow \ln|N| = -\lambda t + C\Longrightarrow N = Ce^{-\lambda t}
$$</p>
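<p>The closed form is easy to sanity-check without any manipulation of differentials; a crude forward-Euler sketch (sample parameter values are mine):</p>

```python
import math

lam, C = 0.7, 2.0        # sample decay rate and initial value N(0) = C
T, steps = 1.0, 100_000
dt = T / steps

N = C
for _ in range(steps):
    N += dt * (-lam * N)  # one Euler step of dN/dt = -lambda * N

exact = C * math.exp(-lam * T)
```

<p>The numerical solution lands within the expected O(dt) error of $Ce^{-\lambda T}$.</p>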
|
2,998,811 | <p><strong>Exercise :</strong></p>
<blockquote>
<p>We consider a market of one period <span class="math-container">$(\Omega, \mathcal{F}, \mathbb P, S^0, S^1)$</span>, where the sample space <span class="math-container">$\Omega$</span> has a finite number of elements and the <span class="math-container">$\sigma-$</span>algebra <span class="math-container">$\mathcal{F} = 2^\Omega$</span>. Furthermore, with <span class="math-container">$S^0$</span> we symbolize the zero risk asset with initial value <span class="math-container">$S_0^0=1$</span> at the time <span class="math-container">$t=0$</span> and interest rate <span class="math-container">$r>-1$</span> (which means <span class="math-container">$S_1^0 = 1+r$</span>). With <span class="math-container">$S^1$</span> we symbolize an asset with risk with initial value <span class="math-container">$S_0^1 >0$</span> at the time <span class="math-container">$t=0$</span> and with value <span class="math-container">$S_1^1$</span> at the time <span class="math-container">$t=1$</span> which is a random variable.</p>
<p>Let <span class="math-container">$\mathbb{P}[\{\omega\}]>0$</span> for all <span class="math-container">$\omega \in \Omega$</span>. We define :
<span class="math-container">$$a:=\min S_1^1(\omega) \quad \text{and} \quad b:=\max S_1^1(\omega)$$</span>
and we assume that <span class="math-container">$0<a<b$</span>. Show that the market is arbitrage-free <strong>if and only if</strong> :
<span class="math-container">$$a<S_0^1(1+r)<b$$</span></p>
</blockquote>
<p>A value process is defined as :</p>
<p><span class="math-container">$$V_t = V_t^\bar{\xi} = \bar{\xi}\cdot \bar{S}_t = \sum_{i=0}^d \xi_t^i\cdot \bar{S}_t^i, \quad t \in \{0,1\}$$</span></p>
<p>where <span class="math-container">$\bar{\xi} = (\xi^0, \xi) \in \mathbb R^{d+1}$</span> is an investment strategy, where the number <span class="math-container">$\xi^i$</span> is equal to the number of pieces of the security <span class="math-container">$S^i$</span> contained in the portfolio over the time period <span class="math-container">$[0,1], i \in \{0,1,\dots,d\}$</span>.</p>
<p>A market admits arbitrage if there is a strategy whose value process satisfies the following conditions; if no such strategy exists, the market is arbitrage-free.</p>
<p><span class="math-container">$$V_0 \leq 0, \quad \mathbb P(V_1 \geq 0) = 1, \quad \mathbb P(V_1 > 0) > 0$$</span></p>
<p><strong>Question :</strong> How would one proceed to proving this using the mathematical definition of arbitrage by constructing a certain strategy ?</p>
| RRL | 148,510 | <p>In your answer you proved the contrapositive and, hence, the forward implication that no arbitrage implies <span class="math-container">$a < S_0^1(1+r) < b$</span>.</p>
<p>To prove the converse we assume that <span class="math-container">$a < S_0^1(1+r) < b$</span> and show that there can be no possible arbitrage. A potential arbitrage opportunity arises either from a portfolio that is short the risk-free asset and long the risky asset (with weights appropriately scaled) where</p>
<p><span class="math-container">$$\tag{1}V_0 = \xi S_0^0 + S_0^1 \leqslant 0, \\ \implies \xi \leqslant - \frac{S_0^1}{S_0^0},$$</span></p>
<p>or a portfolio that is long the risk-free asset and short the risky asset, where </p>
<p><span class="math-container">$$\tag{2}V_0 = S_0^0 + \xi S_0^1 \leqslant 0, \\ \implies \xi \leqslant - \frac{S_0^0}{S_0^1}$$</span></p>
<p>It then follows that there can be no arbitrage if in either case there exists at least one future state <span class="math-container">$\omega$</span> with non-zero probability such that <span class="math-container">$V_1(\omega) < 0$</span>. Under the given assumptions the risky asset attains nonnegative minimum and maximum values <span class="math-container">$a = S_1^1(\omega_{min})$</span> and <span class="math-container">$b= S_1^1(\omega_{max})$</span>, respectively, where <span class="math-container">$\mathbb{P}(\{\omega_{min}\}) > 0 $</span> and <span class="math-container">$\mathbb{P}(\{\omega_{max}\}) > 0 $</span>.</p>
<p>Thus, in case (1)</p>
<p><span class="math-container">$$\min V_1(\omega) = \xi S_0^0(1+r) + S_1^1(\omega_{min}) = \xi S_0^0(1+r) + a \\\leqslant -\frac{S_0^1}{S_0^0} S_0^0(1+r)+a = -S_0^1(1+r) + a < 0,$$</span></p>
<p>and in case (2)</p>
<p><span class="math-container">$$\min V_1(\omega) = S_0^0(1+r) + \xi S_1^1(\omega_{max}) = S_0^0(1+r) + \xi b \\\leqslant S_0^0(1+r) - \frac{S_0^0}{S_0^1}b = \frac{S_0^0}{S_0^1}\left(S_0^1(1+r) - b\right) < 0,$$</span> since <span class="math-container">$\xi < 0$</span> here, so the minimum payoff occurs in the state where the risky asset attains its maximum <span class="math-container">$b$</span>.</p>
<p>Therefore, it is impossible to construct a portfolio such that <span class="math-container">$V_0 \leqslant 0$</span> and <span class="math-container">$\mathbb{P}(V_1 \geqslant 0) = 1$</span>.</p>
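<p>The two boundary cases are easy to spot-check with illustrative numbers satisfying $a < S_0^1(1+r) < b$ (the values below are mine):</p>

```python
S0_bond, S0_stock = 1.0, 10.0   # initial prices of S^0 and S^1
r = 0.05
a, b = 9.0, 12.0                # min and max of S_1^1

# Case (1): xi = -S0_stock/S0_bond units of the bond, one unit of stock (V0 = 0).
xi1 = -S0_stock / S0_bond
worst_case_1 = xi1 * S0_bond * (1 + r) + a   # payoff in the state with S_1^1 = a

# Case (2): one unit of the bond, xi = -S0_bond/S0_stock units of stock (V0 = 0).
xi2 = -S0_bond / S0_stock
worst_case_2 = S0_bond * (1 + r) + xi2 * b   # payoff in the state with S_1^1 = b
```

<p>Both candidate portfolios lose money in some state, so neither is an arbitrage, consistent with the argument above.</p>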
|
836,945 | <p>I am currently an undergraduate math/CS major with coursework done in Linear Algebra, Vector Calculus (that covered a significant amount of Real 1 material), Discrete Math, and about to take courses in Algebra and Real Analysis 1. I was wondering if there are any books about famous problems (Riemann, Goldbach, Collatz, etc) and progress thus far that are accessible to me. If possible, I would prefer them to have more emphasis on the math rather than the history. As an example, I enjoyed reading Fermat's Enigma by Simon Singh but was disappointed in how it glossed over most of the math. If there are any suggestions I would greatly appreciate it.</p>
| Gerry Myerson | 8,269 | <p>Jeff Lagarias edited a book of papers about the Collatz problem. Some of the papers should be quite accessible to you. </p>
|
836,945 | <p>I am currently an undergraduate math/CS major with coursework done in Linear Algebra, Vector Calculus (that covered a significant amount of Real 1 material), Discrete Math, and about to take courses in Algebra and Real Analysis 1. I was wondering if there are any books about famous problems (Riemann, Goldbach, Collatz, etc) and progress thus far that are accessible to me. If possible, I would prefer them to have more emphasis on the math rather than the history. As an example, I enjoyed reading Fermat's Enigma by Simon Singh but was disappointed in how it glossed over most of the math. If there are any suggestions I would greatly appreciate it.</p>
| Fred Daniel Kline | 28,555 | <p><a href="http://rads.stackoverflow.com/amzn/click/9812381597" rel="nofollow">The Goldbach Conjecture</a> by Yuan Wang. </p>
<p>From the book's description:</p>
<blockquote>
<p>A detailed description of a most important unsolved mathematical problem - the Goldbach conjecture. Raised in 1742 in a letter from Goldbach to Euler, this conjecture attracted the attention of many mathematical geniuses. Several great achievements were made, but only until the 1920s. This work gives an exposition of these results and their impact on mathematics, in particular, number theory. It also presents (partly or wholly) selections from important literature, so that readers can get a full picture of the conjecture.</p>
</blockquote>
|
1,670,816 | <p>Given <span class="math-container">$a, b \in \Bbb R$</span>, consider the following large tridiagonal matrix</p>
<p><span class="math-container">$$M := \begin{pmatrix}
a^2 & b & 0 & 0 & \cdots \\
b & (a+1)^2 & b & 0 & \cdots \\
0 & b & (a+2)^2 & b & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}$$</span></p>
<p>What can be said about its eigenvalues? Are analytic expressions known? Or, at least, properties of the eigenvalues?</p>
| marty cohen | 13,079 | <p>If
$f(x)
=(x-1)(x-2)(x-3)(x-4)-10^{-6}x^6
$,
and $c$ is small,</p>
<p>$\begin{array}\\
f(4+c)
&=(3+c)(2+c)(1+c)(c)-10^{-6}(4+c)^6\\
&=6c(1+c/2)(1+c/3)(1+c)-10^{-6}4^6(1+c/4)^6\\
&\approx 6c(1+c/2+c/3+c)-(2/5)^6(1+6c/4)\\
&=6c(1+11c/6)-(2/5)^6(1+3c/2)\\
&=c(6-(3/2)(2/5)^6) +11c^2-(2/5)^6\\
&\approx 6c -(2/5)^6\\
\end{array}
$</p>
<p>If this is zero,
$c = \frac{(2/5)^6}{6}
\approx 0.0006826
$
so the root is about
$4.0006826$,
which agrees very nicely
with Moo's much more accurate answer.</p>
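<p>The estimate checks out against a direct evaluation:</p>

```python
def f(x):
    return (x - 1) * (x - 2) * (x - 3) * (x - 4) - 1e-6 * x ** 6

c = (2.0 / 5.0) ** 6 / 6     # the first-order estimate derived above
root = 4 + c                 # approximate root near x = 4
```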
|
4,092,182 | <p>Let <span class="math-container">$I$</span> be an interval and <span class="math-container">$f:I \rightarrow \mathbb{R}$</span> monotone and surjective; prove that <span class="math-container">$f$</span> is continuous.</p>
<p>I tried using the definition of <span class="math-container">$\epsilon$</span>-<span class="math-container">$\delta$</span> and supposing that <span class="math-container">$f$</span> is not continuous, but I don't see where to use the fact that <span class="math-container">$f$</span> is surjective.</p>
| Jotabeta | 841,213 | <p>Given <span class="math-container">$X,Y$</span> arbitrary vector fields, we define the hessian of <span class="math-container">$f$</span> as the <span class="math-container">$(0,2)$</span> tensor given by
<span class="math-container">$$
H_f(X,Y)=X(Yf)-(\nabla_X Y)(f).
$$</span>
The expression in coordinates takes the form <span class="math-container">$(H_f)_{ij}=f_{;ij}$</span>, where <span class="math-container">$f_{;ij}=\partial_{j}\partial_{i}f - \Gamma_{ji}^k \partial_{k}f$</span>, and in particular in a normal chart <span class="math-container">$f_{;ij}=\partial_{j}\partial_{i}f$</span>.
We can take the <span class="math-container">$(1,1)$</span> tensor associated to <span class="math-container">$H_f$</span>, which I denote by <span class="math-container">$h_f$</span>; it is the unique operator such that
<span class="math-container">$$
g(h_f(X),Y)=H_f(X,Y).
$$</span>
In fact, <span class="math-container">$h_f(X)=\nabla_X \nabla f$</span> (where <span class="math-container">$\nabla f$</span> is the gradient of <span class="math-container">$f$</span>) and in a normal chart we can write <span class="math-container">$(h_f)_i^j=(H_f)_{ik}g^{jk}$</span>.
Furthermore, due to <span class="math-container">$(\nabla_X df)(Y)=X(df(Y))-df(\nabla_X Y)=X(Yf)-(\nabla_X Y)(f)$</span>, we can rewrite the hessian as <span class="math-container">$H_f=\nabla (df)$</span>.</p>
|
385,404 | <p>So, I have this question which is still troubling me:</p>
<blockquote>
<p>Find the value of $k$ such that the equation $2x^3 + 3x^2 + kx - 48 = 0$ has two solutions equal in value but opposite in sign.</p>
</blockquote>
<p>I've had numerous attempts at this, such as using simultaneous equations and the factor theorem, but there always seems to be a problem. I'm sure I'm missing an important step here. Any clearing up would be great, thanks!</p>
| asun | 76,794 | <p>Well, you can compare coefficients.</p>
<p>$2x^3+3x^2+kx−48=(2x+a)(x+b)(x-b) = (2x+a)(x^2-b^2) = 2x^3 +ax^2 - 2b^2x - ab^2$</p>
<p>So comparing coefficients in</p>
<p>$2x^3+3x^2+kx−48 = 2x^3 +ax^2 - 2b^2x - ab^2$</p>
<p>gives $a = 3$ and $ab^2 = 48$, hence $b^2 = 16$, $b = 4$ and $k = -2b^2 = -32$.</p>
<p>So the answer to the question is the factorization
$(2x+3)(x-4)(x+4)$.</p>
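<p>A quick verification of the factorization (my addition): with $k=-32$ the polynomial vanishes exactly at the roots $4$, $-4$ and $-3/2$ of $(2x+3)(x-4)(x+4)$:</p>

```python
from fractions import Fraction

def p(x):
    # 2x^3 + 3x^2 + kx - 48 with k = -32
    return 2 * x**3 + 3 * x**2 - 32 * x - 48

roots = [Fraction(4), Fraction(-4), Fraction(-3, 2)]
values = [p(r) for r in roots]     # exact rational arithmetic, no rounding
print(values)                      # [Fraction(0, 1), Fraction(0, 1), Fraction(0, 1)]
```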
|
1,958,616 | <p>Question: Let $z=x+iy $. Using complex notation, find an equation of a circle with radius $5$ and center $(3,-6) $.</p>
<p>The answer seems simple to derive, but I'm curious as to whether or not there is more that must be done to establish the equation than what I have accomplished.</p>
<p>The circle with the above information can be constructed from the equation</p>
<p>$$(x-3)^2+(y+6)^2=25$$</p>
<p>which implies that </p>
<p>$$\sqrt {(x-3)^2+(y+6)^2}=5$$</p>
<p>Therefore from the definition of the modulus of a complex number, we have</p>
<p>$$z=(x-3)+i (y+6)$$</p>
<p>However, I fail to see how we might use the $z $ in the first sentence of this post.</p>
| wgrenard | 142,968 | <p>In the complex plane, a circle of radius $r$ centered at the origin has the simple equation</p>
<p>$$|z| = 5$$</p>
<p>Where $z = x + iy$ and $|z|=\sqrt{x^2 + y^2}$. We can transform this equation from being centered at the origin, to being centered at the point $3-6i$ by doing the following:</p>
<p>$$|z - (3-6i)| = 5$$</p>
<p>By substituting $z=x+iy$ into this expression, and using the definition of $|z|$ you will obtain the formula for a circle in the $xy$ plane of radius $5$ centered at the point $(3,-6)$.</p>
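<p>As a small check (my addition, not part of the original answer), sampling points on the circle $|z-(3-6i)|=5$ with complex arithmetic confirms they satisfy the Cartesian equation $(x-3)^2+(y+6)^2=25$:</p>

```python
import cmath

center = 3 - 6j
ok = True
for k in range(36):
    # a point on the circle of radius 5 around 3 - 6i
    z = center + 5 * cmath.exp(1j * 2 * cmath.pi * k / 36)
    x, y = z.real, z.imag
    # check the Cartesian form of the same circle
    ok = ok and abs((x - 3)**2 + (y + 6)**2 - 25) < 1e-9
print(ok)  # True
```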
|
1,958,616 | <p>Question: Let $z=x+iy $. Using complex notation, find an equation of a circle with radius $5$ and center $(3,-6) $.</p>
<p>The answer seems simple to derive, but I'm curious as to whether or not there is more that must be done to establish the equation than what I have accomplished.</p>
<p>The circle with the above information can be constructed from the equation</p>
<p>$$(x-3)^2+(y+6)^2=25$$</p>
<p>which implies that </p>
<p>$$\sqrt {(x-3)^2+(y+6)^2}=5$$</p>
<p>Therefore from the definition of the modulus of a complex number, we have</p>
<p>$$z=(x-3)+i (y+6)$$</p>
<p>However, I fail to see how we might use the $z $ in the first sentence of this post.</p>
| John Wayland Bales | 246,513 | <p>You are doing it backwards. </p>
<p>Starting with the idea that the distance between $x+iy$ and $3-6i$ is $5$ we have</p>
<p>\begin{equation}
\vert x+iy - (3-6i)\vert=5
\end{equation}</p>
<p>or, rearranging terms</p>
<p>\begin{equation}
\vert x-3 +(y+6)i \vert=5
\end{equation}</p>
<p>thus</p>
<p>\begin{equation}
\sqrt{(x-3)^2+(y+6)^2 }=5
\end{equation}</p>
<p>and finally the sought result</p>
<p>\begin{equation}
(x-3)^2+(y+6)^2=25
\end{equation}</p>
|
1,804,333 | <blockquote>
<p>How many divisors does $25^2+98^2$ have?</p>
</blockquote>
<hr>
<p>My Attempt:</p>
<p>Calculator is not allowed but using calculator I found $193\times53$ that means $8$ divisors and that $4$ of them are positive.</p>
| almagest | 172,006 | <p>The trick here is the useful identity $a^4+4b^4=(a^2+2ab+2b^2)(a^2-2ab+2b^2)$. In this case we have $a=5,b=7$, so we get the factorisation $(25+70+98)(25-70+98)=193\cdot53$.</p>
<p>It is now easy to check that 53,193 are prime, so there are as you say 4 divisors: 1, 53, 193 and the number itself.</p>
<p>It is usual in this kind of question only to count the positive divisors. Obviously there are another 4 negative divisors.</p>
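<p>A brute-force confirmation (my addition): $25^2+98^2=10229=53\cdot 193$, with exactly the four positive divisors listed above:</p>

```python
n = 25**2 + 98**2
divisors = [d for d in range(1, n + 1) if n % d == 0]
print(n, divisors)  # 10229 [1, 53, 193, 10229]
```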
|
1,804,333 | <blockquote>
<p>How many divisors does $25^2+98^2$ have?</p>
</blockquote>
<hr>
<p>My Attempt:</p>
<p>Calculator is not allowed but using calculator I found $193\times53$ that means $8$ divisors and that $4$ of them are positive.</p>
| Taha Akbari | 331,405 | <p>$25^2+98^2=123^2-2 \times 98 \times 25=123^2-70^2=193 \times 53 $</p>
<p>I found this answer building on almagest's answer; thanks a lot for the help.</p>
|
176,054 | <p>I have a friend who wants to study something applied to neurosciences. He is going to begin his grad studies in mathematics.
He asked me which areas of mathematics could be applied to neurosciences.
Since I don't know the answer, I thought mathoverflow would be the right place to ask.
I mean, there are many areas of mathematics that could be applied to neurosciences. But the question is the following: which are the fields that have already been applied to neurosciences? Are there areas related to dynamical systems, stochastic process, probability, topology, analysis, PDE or algebra applied to neurosciences?
Articles are welcome. </p>
<p>Thank you in advance</p>
| Aaron Hoffman | 1,281 | <p>(just my two cents, but too long for a comment)</p>
<p>Does your friend care more about the tools he uses in his research (e.g. theorem-proving vs. developing models vs. using known mathematics to analyze data and/or models) or the community in which his work has impact (e.g. mathematicians vs. applied scientists). </p>
<p>If the former I would advise following the branch of mathematics he most enjoys. If he is hoping to maintain a connection to neuroscience he might sample from statistics and applied analysis (by which I mean probability, stochastic processes, information theory, PDE and dynamical systems) while noting that other possibilities for application exist (e.g. computational homology). That said, he should be aware that direct and immediate impact in the neuroscience community will likely require skills other than theorem-proving.</p>
<p>If the latter I would advise devoting considerable energy to learning about the applied science which he hopes to advance in parallel with a broad mathematical education and to be open to learning whatever tools necessary (likely not just theorem-proving) to advance that science.</p>
|
4,136,532 | <p>I know it's a weird question.<br />
But this thing is confusing me.<br />
(x̅) : average μ<br />
</p>
<p>①<br />
∵ <span class="math-container">$\frac{1}{n}\sum\limits_{i=1}^n(x_i) = \bar{x}$</span><br />
∴ <span class="math-container">$\sum\limits_{i=1}^n(x_i) = {n}\bar{x}$</span><br />
</p>
<p>②<br />
∵ <span class="math-container">$\sum\limits_{i=1}^n(C) = {n}{C}$</span> | C = constant<br />
∴ <span class="math-container">$\sum\limits_{i=1}^n(\bar{x}) = {n}{\bar{x}}$</span>
</p>
<hr />
<p>From ①, ②<br />
<span class="math-container">$(x_i) = (\bar{x})$</span> ???</p>
<p>How could this be true??
Am I missing something?</p>
| Siong Thye Goh | 306,553 | <p><span class="math-container">$$1+3=4$$</span>
<span class="math-container">$$2+2=4$$</span></p>
<p>We can't conclude that <span class="math-container">$1=2$</span>.</p>
|
142,734 | <p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane & solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
| Mikhail Katz | 28,128 | <p>Stewart's book is an old favorite:</p>
<p>Stewart, Ian. From here to infinity. With a foreword by James Joseph Sylvester. The Clarendon Press, Oxford University Press, New York, 1996.</p>
<p>Stewart has many popularisation books some of which have reached best-seller status, e.g.:</p>
<p><a href="http://rads.stackoverflow.com/amzn/click/0465013023" rel="nofollow noreferrer">Professor Stewart's Cabinet of Mathematical Curiosities</a></p>
<p>or</p>
<p><a href="http://rads.stackoverflow.com/amzn/click/0465017754" rel="nofollow noreferrer">Professor Stewart's Hoard of Mathematical Treasures</a></p>
<p>etc.</p>
|
1,974,939 | <p>I don't understand the parts marked in red. I don't know why the assumption that the list in (1) contains every real number leads to the conclusion that the intersection $\bigcap_{n=1}^{\infty} I_n$ equals the empty set.
Please explain why... I will really appreciate it. </p>
<p><a href="https://i.stack.imgur.com/UtJHG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UtJHG.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/ges2a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ges2a.png" alt="enter image description here"></a></p>
| Berci | 41,488 | <p>Every real number is assumed to be among the $x_n$'s, but we constructed the intervals $I_k$ so that $x_n\notin \bigcap_kI_k$ holds for all $n$ (because, by construction, $x_n\notin I_{n+1}$).</p>
<p>So this intersection contains no real numbers. So it is empty.</p>
|
375,016 | <p>I just started reading the ABCs of category theory using the appendix of one text, the first chapter of another text that I have never read, and above all (I found out now that they handle the theory well) the Wikipedia pages. I want to know only this: all three sources give the definition of a category, and in all three I noticed that it is not possible to treat categories within set theory. The reason strikes the eye immediately when it is assumed that the objects are contained in classes, in particular in the example of the category of sets. I ask, then, how the theory is treated, because from what I'm reading I do not understand (indeed, the second source seems to develop it internally to set theory). I think some extension of set theory is needed to treat them.</p>
| Simone | 49,584 | <p>I suggest you to read (at least parts of) the following nice paper:</p>
<p><a href="http://arxiv.org/pdf/0810.1279v2.pdf" rel="nofollow">http://arxiv.org/pdf/0810.1279v2.pdf</a></p>
<p>From the abstract:
"Questions of set-theoretic size play an essential role in category
theory, especially the distinction between sets and proper classes (or small sets
and large sets). There are many different ways to formalize this, and which
choice is made can have noticeable effects on what categorical constructions
are permissible. In this expository paper we summarize and compare a number of such “set-theoretic foundations for category theory,” and describe their
implications for the everyday use of category theory"</p>
<p>I hope this helps.</p>
<p>The answer to your question "Category theory $\subset$ Set theory or vice versa?" has been given in Asaf Karagila's comment to your question. </p>
1,273,086 | <p>Sum of three dice (six faced) throws is $15$. What is the probability that first throw was $4$?</p>
<p>The way I thought of solving this was:</p>
<ul>
<li>Given: the sum of the second and third throws is $11$.</li>
<li>The probability of the first throw being $4$ is $1$ out of $6$, that is $\frac{1}{6}$.</li>
</ul>
<p>Is this correct?</p>
| aepound | 222,527 | <p>You can state this very nicely using <a href="http://en.wikipedia.org/wiki/Conditional_probability#As_an_axiom_of_probability" rel="nofollow">conditional probabilities</a>:</p>
<p>Following your start, I'd look at
$$
\begin{split}
P\left(\sum_{i=1}^3 x_i = 15, x_1 = 4\right) &= P\left(\sum_{i=1}^3 x_i = 15|x_1 = 4 \right)P\left(x_1 = 4\right) \\
&= P\left(\sum_{i=2}^3 x_i = 11\right)P\left(x_1 = 4\right),
\end{split}
$$
where by independence (unless a roll depends on previous rolls) the second line is derived from the first.</p>
<p>You have the second term on the right side, and now need to calculate the first term.</p>
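<p>The answer leaves the final evaluation to the reader; a brute-force enumeration of all $6^3$ equally likely outcomes (my addition, assuming fair and independent dice) evaluates the conditional probability directly, and shows it is $1/5$ rather than the $1/6$ guessed in the question:</p>

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=3))      # all 6^3 equally likely rolls
sum_15 = [o for o in outcomes if sum(o) == 15]       # condition: total is 15
first_4 = [o for o in sum_15 if o[0] == 4]           # ...and first throw is 4

p = Fraction(len(first_4), len(sum_15))              # P(first = 4 | sum = 15)
print(len(sum_15), len(first_4), p)                  # 10 2 1/5
```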
|
28,395 | <p>I want to generate a table of Euclidean distances from the points in a list to a given point, e.g.:</p>
<pre><code>Table[EuclideanDistance[MyList[[i]], c], {i, 1, Length[MyList]}]
</code></pre>
<p>Moreover, when <code>EuclideanDistance</code> returns a value greater than $T$, I'd like to replace that value with a given real number $R$. Is there a simple way to do this?</p>
| bill s | 1,783 | <p>Here's a variation on Jens' method that does the calculation by mapping <code>EuclideanDistance</code> over the list and then clipping the result:</p>
<pre><code>myList = RandomReal[{-1, 1}, {10, 2}];
c = {0, 0}; t = 1; r = -1;
Clip[EuclideanDistance[#, c] & /@ myList, {0, t}, {0, r}]
</code></pre>
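<p>For readers without Mathematica, the same clip-and-replace idea can be sketched in Python (my addition; the names are illustrative):</p>

```python
import math
import random

random.seed(0)
my_list = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(10)]
c = (0.0, 0.0)
T, R = 1.0, -1.0   # distance threshold and replacement value

# distance to c, replaced by R whenever it exceeds T
result = [d if d <= T else R
          for d in (math.dist(p, c) for p in my_list)]
print(result)
```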
|
907,154 | <p>Renata walks down an escalator that moves up and counts <span class="math-container">$150$</span> steps. Her sister Fernanda climbs the same escalator and counts <span class="math-container">$75$</span> steps. If the speed of Renata (in steps per time unit) is three times the speed of Fernanda, determine how many steps are visible on the escalator at any time. </p>
<blockquote>
<p>The answer is 120 steps.</p>
</blockquote>
<p>This is my first post ever on stackexchange, hello!</p>
| Adriano | 76,987 | <p>Let:</p>
<ul>
<li>$x$ be the number of visible steps.</li>
<li>$y$ be the speed of the escalator (in steps per time unit).</li>
<li>$z$ be the speed of Fernanda (in steps per time unit), relative to the escalator.</li>
</ul>
<p>Consider Renata's trip. Notice that in the time that Renata takes to finish her trip, she has travelled $150$ steps at a speed of $3z$, while the escalator has travelled $150 - x$ steps at a speed of $y$. This yields:
$$
\frac{150}{3z} = \frac{150 - x}{y}
$$
Now consider Fernanda's trip. Notice that in the time that Fernanda takes to finish her trip, she has travelled $75$ steps at a speed of $z$, while the escalator has travelled $x - 75$ steps at a speed of $y$. This yields:
$$
\frac{75}{z} = \frac{x - 75}{y}
$$
Dividing the first equation by the second yields:
\begin{align*}
\frac{\frac{150}{75}}{\frac{3z}{z}} &= \frac{\frac{150 - x}{x - 75}}{\frac{y}{y}} \\
\frac{2}{3} &= \frac{150 - x}{x - 75} \\
2(x - 75) &= 3(150 - x) \\
2x - 150 &= 450 - 3x \\
5x &= 600 \\
x &= 120
\end{align*}
as desired.</p>
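<p>The result is easy to double-check by brute force (my addition), searching for the integer $x$ with $2(x-75)=3(150-x)$:</p>

```python
# find all step counts x with 2(x - 75) = 3(150 - x)
solutions = [x for x in range(76, 150) if 2 * (x - 75) == 3 * (150 - x)]
print(solutions)  # [120]
```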
|
2,958,566 | <p>Expressing <span class="math-container">$\frac{t+2}{t^3+3}$</span> in the form <span class="math-container">$a_o+a_1t+...+a_4t^4$</span>, where <span class="math-container">$t$</span> is a root of <span class="math-container">$x^5+2x+2$</span>.</p>
<p>So I can deal with the numerator, but how do I get rid of the denominator to get it into the correct form? Thanks in advance!</p>
| Bill Dubuque | 242 | <p>As I show <a href="https://math.stackexchange.com/a/614369/242">here,</a> such results boil down to the first <span class="math-container">$\,2\,$</span> terms of the Binomial Theorem, viz.</p>
<p><span class="math-container">$\!\bmod m\!:\ c\equiv -a(b\!-\!1)\,$</span> so <span class="math-container">$\ \color{#0a0}0\equiv (b\!-\!1)c\equiv -\color{#0a0}{a(b\!-\!1)^2}$</span> so only <span class="math-container">$1$</span>st <span class="math-container">$2$</span> terms survive below</p>
<p><span class="math-container">$\qquad\quad\ \ \ a(1+b\!-\!1)^n =\, \color{#c00}{a\, +\,} n\,\underbrace{a(b\!-\!1)}_{\Large \equiv\ \color{#c00}{-c}}\ +\ \underbrace{\color{#0a0}{a(b\!-\!1)^2}(\cdots)}_{\large \equiv\ \color{#0a0}0}\ \,$</span> so adding <span class="math-container">$\ cn+d\ $</span> we get</p>
<p><span class="math-container">$\quad\ \Rightarrow\ \ ab^n +cn+d\, \equiv\, \color{#c00}{a-c\,n}+cn\! +\! d \,\equiv\, a\!+\!d\,\equiv\, 0\ \ \ $</span> <strong>QED</strong></p>
<p><strong>Remark</strong> <span class="math-container">$ $</span> If you <em>require</em> an inductive proof then do so for the first <span class="math-container">$2$</span> terms in the Binomial Theorem. It's easy - the inductive step amounts to multiplying by <span class="math-container">$\,1\!+\!a\pmod{\!a^2},\,$</span> viz.</p>
<p><span class="math-container">$\!\begin{align}{\rm mod}\,\ \color{#c00}{a^2}\!:\,\ (1+ a)^n\, \ \ \equiv&\,\ \ 1 + na\qquad\qquad\,\ \ {\rm i.e.}\ \ P(n)\\[1pt]
\Rightarrow\ \ (1+a)^{\color{}{n+1}}\! \equiv &\ (1+na)(1 + a)\quad\, {\rm by}\ \ 1+a \ \ \rm times\ prior\\
\equiv &\,\ \ 1+ na+a+n\color{#c00}{a^2}\\
\equiv &\,\ \ 1\!+\! (n\!+\!1)a\qquad\ \ \ {\rm i.e.}\ \ P(\color{}{n\!+\!1})\\[2pt]
\end{align}$</span></p>
|
3,996,665 | <p>I am really confused about below questions. I am new both in this platform and below topics. I hope, I was clearly explain the questions.</p>
<p>For ∀n∈<span class="math-container">$\mathbb{Z}$</span><sup>+</sup></p>
<p>The family of sets <strong>B</strong> = {<span class="math-container">$B_n$</span> | n ∈ <span class="math-container">$\mathbb{Z}$</span><sup>+</sup>} is given, where <span class="math-container">$B_n$</span> = {n, n+1, n+2, ...}. According to this,</p>
<p>(i) Show that the family <strong>B</strong> is a base for a topology on <span class="math-container">$\mathbb{Z}$</span><sup>+</sup>.<br />
(ii) Write down the topology on <span class="math-container">$\mathbb{Z}$</span><sup>+</sup> generated by <strong>B</strong> and show that it is not Hausdorff.<br />
(iii) Show that the sequence (2,4,6,8,...) in <span class="math-container">$\mathbb{Z}$</span><sup>+</sup> converges to every point with respect to the topology generated by the family <strong>B</strong>.</p>
<p>Thank you...</p>
| Nikolay | 837,264 | <p>(i) to show that <span class="math-container">$\mathcal{B}=\{B_n\}$</span> is a base for some topology on some space <span class="math-container">$X$</span>, one should show that <span class="math-container">$$\forall A, B \in \mathcal{B}\quad\exists C_1,\ldots,C_n\in\mathcal{B}\quad A\cap B = \bigcup_{i=1}^{n}C_i$$</span>
and that <span class="math-container">$\bigcup_{B\in\mathcal{B}}\ B = X$</span></p>
<p>This conditions easily verified in your case <span class="math-container">$X=\mathbb{Z}^+$</span> and <span class="math-container">$B=\{\{n, n+1, n+2, \ldots\}| n \in \mathbb{Z}^+\}$</span>.</p>
<p>The topology will be same as <span class="math-container">$\mathcal{B}$</span> because <span class="math-container">$\mathcal{B}$</span> is closed under union too. So all open sets will be the form <span class="math-container">$\{n, n+1, n+2, \ldots\}$</span>.</p>
<p>(ii) For all open <span class="math-container">$U$</span> if <span class="math-container">$n \in U$</span> then <span class="math-container">$n + 1 \in U$</span> use this to show that point <span class="math-container">$n$</span> and <span class="math-container">$n+1$</span> don't satisfy Hausdorff condition.</p>
<p>(iii) Sequence <span class="math-container">$\{x_n\}$</span> converges to <span class="math-container">$x$</span> <span class="math-container">$\iff$</span> <span class="math-container">$\forall$</span> open <span class="math-container">$U$</span> s.t <span class="math-container">$x\in U$</span> <span class="math-container">$\exists N\ \forall n \ge N\ x_n \in U$</span></p>
<p>in out case <span class="math-container">$x_n = 2\cdot n$</span>.
By the property that for all open <span class="math-container">$U$</span> if <span class="math-container">$n \in U$</span> then <span class="math-container">$n + 1 \in U$</span> if <span class="math-container">$x_n\in U$</span> then <span class="math-container">$\forall m\ge n\ x_m \in U$</span></p>
<p>So you only need to verify that
<span class="math-container">$\forall x\in\mathbb{Z}^+\ \forall$</span> open <span class="math-container">$U$</span> s.t <span class="math-container">$x\in U$</span> <span class="math-container">$\exists n$</span> such that <span class="math-container">$x_n \in U$</span>. That you can derive from the form of open sets in our case.</p>
|
1,176,016 | <p>How do you show that if you have sets $B_1, B_2, \cdots ,B_n$ and a set $C$, then
$$(B_1\cap B_2, \cap \cdots B_n)\cup C= (B_1\cup C)\cap(B_2\cup C) \cap \cdots
\cap (B_n\cup C)\,?$$</p>
<p>Thanks</p>
| mvw | 86,776 | <p>Interpreting both sides as numbers
$$
e^x = 4x
$$
the equal sign might hold for certain values of $x$.</p>
<p>Interpreting both sides as functions
$$
\exp = 4 \mbox{ id}
$$
the equal sign is not valid.
The differentiation operator acts on functions, that is why the second interpretation has to be used, and explains why the results stay different as well.</p>
<p>If one uses a valid function equality, like
$$
\tan x = \sin x / \cos x
$$
then the differentiated equation is valid as well:
$$
1+ \tan^2 x = (\cos^2 x + \sin^2 x)/ \cos^2 x
$$</p>
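<p>To make the pointwise interpretation concrete (my addition): the equation $e^x=4x$ does hold for certain values of $x$; bisection locates one such value in $[0,1]$:</p>

```python
import math

def g(x):
    return math.exp(x) - 4 * x

lo, hi = 0.0, 1.0            # g(0) = 1 > 0, g(1) = e - 4 < 0: a sign change
assert g(lo) > 0 > g(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid

x0 = (lo + hi) / 2
print(x0, g(x0))             # x0 ~ 0.357, g(x0) ~ 0
```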
|
3,387,049 | <p>Say a deck of cards is dealt out equally to four players (each player receives 13 cards).</p>
<p>A friend of mine said he believed that if one player is dealt four-of-a-kind (for instance), then the likelihood of another player having four-of-a-kind is increased - compared to if no other players had received a four-of-a-kind. </p>
<p>Statistics isn't my strong point but this sort of makes sense given the pigeonhole principle - if one player gets <code>AAAAKKKKQQQQJ</code>, then I would think other players would have a higher likelihood of having four-of-a-kinds in their hand compared to if the one player was dealt <code>AQKJ1098765432</code>.</p>
<p>I wrote a Python program that performs a Monte Carlo evaluation to validate this theory, which found:</p>
<ul>
<li>The odds of <strong>exactly one</strong> player having a four-of-a-kind are ~12%.</li>
<li>The odds of <strong>exactly two</strong> players having a four-of-a-kind are ~0.77%.</li>
<li>The odds of <strong>exactly three</strong> players having a four-of-a-kind are ~0.03%.</li>
<li>The odds of <strong>all four</strong> players having a four-of-a-kind are ~0.001%.</li>
</ul>
<p>But counter-intuitively, four-of-a-kind frequencies appear to decrease as more players are dealt those hands:</p>
<ul>
<li>The odds of <strong>two or more</strong> players having a four-of-a-kind when <strong>at least one</strong> player has four-of-a-kind are ~6.24%.</li>
<li>The odds of <strong>three or more</strong> players having a four-of-a-kind when <strong>at least two</strong> players have four-of-a-kind are ~3.9%.</li>
<li>The odds of <strong>all four</strong> players having a four-of-a-kind when <strong>at least three</strong> players have four-of-a-kind are ~1.39%.</li>
</ul>
<p>The result is non-intuitive and I'm all sorts of confused - not sure if my friend's hypothesis was incorrect, or if I'm asking my program the wrong questions.</p>
| Ross Millikan | 1,827 | <p>You have to be careful what question you are asking. What your friend seems to be claiming is if you draw a hand of <span class="math-container">$13$</span> cards for one player and it has four of a kind, the chance of a second hand you draw from the rest of the deck having four of a kind is increased. My intuition agrees with that. Distributing all the cards and asking the chance of two four of a kinds given there is one four of a kind is a different question. It seems likely you could modify your program to ask the first question.</p>
<p>For a comparison, I imagined four players each flipping a coin with <span class="math-container">$0.05$</span> chance to come up heads. Clearly here your friend's question has player <span class="math-container">$2$</span> getting heads with chance <span class="math-container">$0.05$</span> regardless of what player <span class="math-container">$1$</span> does. The chance at least one player gets heads is about <span class="math-container">$0.1855$</span>. The chance at least two players get heads, given that at least one did, is about <span class="math-container">$0.07558$</span>, which is less than the chance of at least one getting heads.</p>
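<p>These binomial probabilities are quick to reproduce exactly (my addition):</p>

```python
# four independent flips, each heads with probability p = 0.05
p = 0.05
q = 1 - p
p_at_least_one = 1 - q**4
p_exactly_one = 4 * p * q**3
p_at_least_two = p_at_least_one - p_exactly_one
cond = p_at_least_two / p_at_least_one   # P(>= 2 heads | >= 1 head)
print(p_at_least_one, cond)              # ~0.1855 and ~0.0756
```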
|
3,489,898 | <p>In order to show that a metric space <span class="math-container">$(X, d)$</span> is not complete one may apply the definition and look for a Cauchy sequence <span class="math-container">$\{x_n\}\subset X$</span> which does not converge with respect to the metric <span class="math-container">$d$</span>. Now I have often seen (on books, e.g.) another approach: one may show that a sequence <span class="math-container">$\{x_n\}\subset X$</span> converges with respect to the metric <span class="math-container">$d$</span> to a limit <span class="math-container">$x$</span> which is not contained in <span class="math-container">$X$</span>. </p>
<p>A common example may be the following: since <span class="math-container">$x_n:= (1+1/n)^n\in \mathbb{Q}$</span> for every <span class="math-container">$n \in \mathbb{N}$</span> and <span class="math-container">$x_n \to e$</span>, but <span class="math-container">$e \notin \mathbb{Q}$</span>, one can conclude that <span class="math-container">$\mathbb{Q}$</span> is not complete. </p>
<p>I've always considered this to be obvious but I now realize I can't explain why this works. The quantity <span class="math-container">$d(x_n, x)$</span> itself need not be well-defined, in general, if <span class="math-container">$x \notin X$</span>. So my question is: why (and under which conditions) this criterion for not-completeness of a metric space ("limit is not in the same space as the sequence") can be used? </p>
| José Carlos Santos | 446,262 | <p>It can be used when you are aware of the existence of a metric space <span class="math-container">$(Y,d^\ast)$</span> such that:</p>
<ol>
<li><span class="math-container">$X\subset Y$</span>;</li>
<li><span class="math-container">$(\forall x,x'\in X):d^\ast(x,x')=d(x,x')$</span>.</li>
</ol>
<p>In the example that you have mentioned, that space is <span class="math-container">$\mathbb R$</span>, endowed with its usual metric.</p>
<p>Actually, such a metric space always exists (take the <a href="https://en.wikipedia.org/wiki/Complete_metric_space#Completion" rel="nofollow noreferrer">completion</a> of <span class="math-container">$X$</span>, for instance), but if you don't know how to work with it, that's a useless piece of information.</p>
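<p>To make the question's $\mathbb{Q}$ example concrete (my addition): each term $(1+1/n)^n$ is an exact rational number, while its distance to $e$ is measured in the larger space $\mathbb{R}$:</p>

```python
import math
from fractions import Fraction

n = 10_000
x = (1 + Fraction(1, n))**n          # an exact element of Q
gap = abs(float(x) - math.e)         # distance measured in R
print(isinstance(x, Fraction), gap)  # True, gap ~ 1.4e-4
```

Increasing $n$ shrinks the gap (roughly like $e/(2n)$), yet every term stays rational, so the Cauchy sequence has no limit inside $\mathbb{Q}$.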
|
4,335,896 | <blockquote>
<p><span class="math-container">$$\frac{1}{2\cdot4}+\frac{1\cdot3}{2\cdot4\cdot6}+\frac{1\cdot3\cdot5}{2\cdot4\cdot6\cdot8}+\frac{1\cdot3\cdot5\cdot7}{2\cdot4\cdot6\cdot8\cdot10}+\cdots$$</span>
is equal to?</p>
</blockquote>
<p><strong>My approach:</strong></p>
<p>We can see that the <span class="math-container">$n^{th}$</span> term is <span class="math-container">\begin{align}a_n&=\frac{1\cdot3\cdot5\cdot\space\dots\space\cdot(2n-3)\cdot(2n-1)}{2\cdot4\cdot6\cdot\space\dots\space\cdot(2n)\cdot(2n+2)}\\&=\frac{1\cdot3\cdot5\cdot\space\dots\space\cdot(2n-3)\cdot(2n-1)}{2\cdot4\cdot6\cdot\space\dots\space\cdot(2n)\cdot(2n+2)}\color{red}{[(2n+2)-(2n+1)}]\\&=\frac{1\cdot3\cdot5\cdot\space\dots\space\cdot(2n-3)\cdot(2n-1)}{2\cdot4\cdot6\cdot\space\dots\space\cdot(2n)}-\frac{1\cdot3\cdot5\cdot\space\dots\space\cdot(2n-3)\cdot(2n-1)\cdot(2n+1)}{2\cdot4\cdot6\cdot\space\dots\space\cdot(2n)\cdot(2n+2)}\\
\end{align}</span></p>
<p>From here I just have a telescopic series to solve, which gave me <span class="math-container">$$\sum_{n=1}^{\infty}a_n=0.5$$</span></p>
<p><strong>Another approach :</strong> <em>note :</em> <span class="math-container">$$\frac{(2n)!}{2^nn!}=(2n-1)!!$$</span></p>
<p>Which gives <span class="math-container">$$a_n=\frac{1}{2}\left(\frac{(2n)!\left(\frac{1}{4}\right)^n}{n!(n+1)!}\right)$$</span></p>
<p>So basically I need to compute
<span class="math-container">$$\frac{1}{2}\sum_{n=1}^{\infty}\left(\frac{(2n)!\left(\frac{1}{4}\right)^n}{n!(n+1)!}\right) \tag{*}$$</span></p>
<p>I'm not able to determine the binomial expression of <span class="math-container">$(*)$</span> (if it exists) or else you can just provide me the value of the sum</p>
<p>Any hints will be appreciated, and you can provide different approaches to the problem too</p>
| epi163sqrt | 132,007 | <blockquote>
<p>We can write the series as
<span class="math-container">\begin{align*}
\color{blue}{\frac{1}{2}\sum_{n=1}^{\infty}\left(\frac{(2n)!\left(\frac{1}{4}\right)^n}{n!(n+1)!}\right)}
&=\frac{1}{2}\sum_{n=1}^{\infty}\frac{1}{n+1}\binom{2n}{n}\left(\frac{1}{4}\right)^n\tag{1}\\
&=\frac{1}{2}\sum_{n=1}^{\infty}C_n\left(\frac{1}{4}\right)^n\tag{2}\\
&=\frac{1}{2}\left(\left.\frac{1-\sqrt{1-4x}}{2x}\right|_{x=\frac{1}{4}}\right)-\frac{1}{2}\tag{3}\\
&=\frac{1}{2}\cdot 2-\frac{1}{2}\tag{4}\\
&\,\,\color{blue}{=\frac{1}{2}}
\end{align*}</span></p>
</blockquote>
<p><em>Comment:</em></p>
<ul>
<li><p>In (1) we write the coefficient using binomial coefficients.</p>
</li>
<li><p>In (2) we note that <span class="math-container">$C_n=\frac{1}{n+1}\binom{2n}{n}$</span> are the ubiquitous <em><a href="https://en.wikipedia.org/wiki/Catalan_number" rel="noreferrer">Catalan numbers</a></em>.</p>
</li>
<li><p>In (3) we use the <em><a href="https://en.wikipedia.org/wiki/Catalan_number#First_proof" rel="noreferrer">generating function</a></em> of the Catalan numbers evaluated at <span class="math-container">$x=\frac{1}{4}$</span>. Since the series expansion of the generating function starts with <span class="math-container">$n=0$</span> we compensate it by subtracting <span class="math-container">$\frac{1}{2}$</span>.</p>
</li>
<li><p>in (4) we evaluate the series at <span class="math-container">$x=\frac{1}{4}$</span> and simplify in the last step.</p>
</li>
</ul>
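<p>A numerical check of the closed form (my addition): accumulating the original terms via the ratio $a_{n+1}/a_n=\frac{2n+1}{2n+4}$ shows the partial sums creeping up toward $\frac12$; convergence is slow, roughly like $n^{-1/2}$:</p>

```python
term = 1 / (2 * 4)        # a_1 = 1/(2*4)
s = term
for n in range(1, 20000):
    term *= (2 * n + 1) / (2 * n + 4)   # a_{n+1} = a_n * (2n+1)/(2n+4)
    s += term
print(s)  # ~0.496 after 20000 terms, slowly approaching 0.5
```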
|
648,946 | <p>I want to find
$$ \lim_{x\to0} \frac{(e^{-x^2}-1)\sin x }{x\ln(1+x^2)}$$
using a Maclaurin series and not using the l'Hôpital's rule.</p>
<p>However I can't seem to get it right.</p>
<p>Thanks for any possible answers.</p>
| Ross Millikan | 1,827 | <p>Your first line is incorrect and should be deleted or the argument of $\log$ corrected to $(1+i\sqrt 3)$. The second line is correct if you are using the principal value of the complex logarithm. The rest is fine.</p>
|
3,064,943 | <p>Consider the sequence of iid r.v. <span class="math-container">$(Y_k)_{k\geq1}$</span> such that <span class="math-container">$\mathbb{P}(Y_k=1)=\mathbb{P}(Y_k=-1)=\frac{1}{2}$</span> and then consider the process <span class="math-container">$X=(X_n)_{n\geq1}$</span> such that <span class="math-container">$X_n=\sum_{k=1}^n\frac{Y_k}{k}$</span>. It's very easy to see that <span class="math-container">$X$</span> is a martingale w.r.t. the filtration it generates itself. However, I tried proving that it is almost surely convergent using the convergence theorem for martingales and I failed. Is this really almost surely convergent? If yes, is it possible to prove this using martingale theory, or is it better to use other techniques like the Borel-Cantelli lemmas?</p>
| Kavi Rama Murthy | 142,385 | <p><span class="math-container">$EX_n^{2}=\sum_{k=1}^{n} var (\frac {Y_k} k)=\sum_{k=1}^{n} \frac 1 {k^{2}}$</span> which is bounded. It is well known that this implies uniform integrability and any uniformly integrable martingale converges almost surely. </p>
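<p>The key bound is elementary to check numerically (my addition): the partial sums $EX_n^2=\sum_{k\le n}1/k^2$ stay below $\pi^2/6$, so the martingale is bounded in $L^2$:</p>

```python
import math

s = 0.0
bounded = True
for k in range(1, 100_000):
    s += 1 / k**2
    bounded = bounded and s < math.pi**2 / 6   # partial sums never exceed the limit
print(s, bounded)  # ~1.6449 True
```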
|
2,648,273 | <p>I am reading "Linear Algebra" by Takeshi SAITO.</p>
<p>Why $n \geq 0$ instead of $n \geq 1$?<br>
Why $K^0 = \{0\}$?<br>
Is $K^0 = \{0\}$ a definition or not?</p>
<p>He wrote as follows in his book:</p>
<p>Let $K$ be a field, and $n \geq 0$ be a natural number. </p>
<p>$$K^n = \left\{\begin{pmatrix}
a_{1} \\
a_{2} \\
\vdots \\
a_{n}
\end{pmatrix} \middle| a_1, \cdots, a_n \in K \right\}$$</p>
<p>is a $K$ vector space with addition of vectors and scalar multiplication.</p>
<p>$$\begin{pmatrix}
a_{1} \\
a_{2} \\
\vdots \\
a_{n}
\end{pmatrix} +
\begin{pmatrix}
b_{1} \\
b_{2} \\
\vdots \\
b_{n}
\end{pmatrix} = \begin{pmatrix}
a_{1}+b_{1} \\
a_{2}+b_{2} \\
\vdots \\
a_{n}+b_{n}
\end{pmatrix}\text{,}$$
$$c \begin{pmatrix}
a_{1} \\
a_{2} \\
\vdots \\
a_{n}
\end{pmatrix} =
\begin{pmatrix}
c a_{1} \\
c a_{2} \\
\vdots \\
c a_{n}
\end{pmatrix}\text{.}$$
When $n = 0$, $K^0 = 0 = \{0\}$.</p>
| edm | 356,114 | <p>It is a convention, which you can take as a definition.</p>
<p>Since $K^n$ is an $n$-dimensional vector space when $n$ is a positive integer, we would like to have $K^0$ to denote a zero-dimensional vector space, which would be $\{0\}$.</p>
|
4,048,158 | <p>Given data points <span class="math-container">$\{(x_i,f(x_i))\}_{i=0}^{m}$</span>, if we define the divided differences recursively as:</p>
<p><span class="math-container">$$f[x_0,\cdots,x_{k+1}] = \frac{f[x_1,\cdots,x_{k+1}]-f[x_0,\cdots,x_k]}{x_{k+1}-x_0}, \text{ with the definition that } f[x_0] = f(x_0)$$</span></p>
<p>then it is true that the unique interpolating polynomial <span class="math-container">$L_m(x)$</span> of degree <span class="math-container">$m$</span> such that <span class="math-container">$L_m(x_i) = f(x_i)$</span> for all <span class="math-container">$i = 0\cdots m$</span> is given by:
<span class="math-container">$$
L_m(x) = \sum_{k=0}^mf[x_0,\cdots,x_k]\omega_k(x), \text{ where } \omega_k(x) = \prod_{i=0}^{k-1}(x-x_i).
$$</span>
The question asks to show this. That is, show the coefficients of the interpolating polynomial of degree <span class="math-container">$m$</span> in the basis <span class="math-container">$\{\omega_k\}_{k=0}^{m}$</span> are exactly <span class="math-container">$a_k = f[x_0,\cdots,x_k]$</span>, the divided differences.</p>
<p>To tackle this problem, I've attempted to use induction. The base step is trivial as <span class="math-container">$a_0 = f(x_0)$</span>, but I am stuck on the inductive step. I supposed the degree <span class="math-container">$m$</span> interpolating polynomial of some data <span class="math-container">$\{(x_i,f(x_i))\}_{i=0}^{m}$</span> is indeed given by:
<span class="math-container">$$
L_m(x) = \sum_{k=0}^mf[x_0,\cdots,x_k]\omega_k(x)
$$</span>
and I want to argue that <span class="math-container">$L_{m+1}(x) = \sum_{k=0}^{m+1}f[x_0,\cdots,x_k]\omega_k(x)$</span> is the interpolating polynomial of degree <span class="math-container">$m+1$</span> interpolating <span class="math-container">$\{(x_i,f(x_i))\}_{i=0}^{m+1}$</span>, with any additional point <span class="math-container">$(x_{m+1},f(x_{m+1}))$</span> added.</p>
<p>Clearly, <span class="math-container">$L_{m+1}(x)$</span> interpolates <span class="math-container">$\{(x_i,f(x_i))\}_{i=0}^{m}$</span> by definition of the <span class="math-container">$\omega_k$</span>'s, but I am stuck at showing <span class="math-container">$L_{m+1}(x_{m+1}) = f(x_{m+1})$</span>. Hints will be extremely appreciated!</p>
| PierreCarre | 639,238 | <p>Consider adding the table entries one by one...</p>
<ol>
<li><p><span class="math-container">$p_0(x) = f_0$</span> interpolates the data on <span class="math-container">$x_0$</span>.</p>
</li>
<li><p>The interpolant on <span class="math-container">$x_0, x_1$</span> is given by
<span class="math-container">$$
p_1(x) = p_0(x) + c_1 (x-x_0).
$$</span>
We can compute <span class="math-container">$c_1$</span> simply by prescribing <span class="math-container">$p_1(x_1)= f_1$</span>, which leads to
<span class="math-container">$$
p_0(x_1) + c_1(x_1-x_0)= f_1 \Leftrightarrow c_1 =\frac{f_1-f_0}{x_1-x_0}
$$</span>
hence,
<span class="math-container">$$
p_1(x) = f_0 + \frac{f_1-f_0}{x_1-x_0} (x-x_0)
$$</span></p>
</li>
<li><p>The interpolant on <span class="math-container">$x_0, x_1, x_2$</span> is given by
<span class="math-container">$$
p_2(x)=p_1(x) + c_2 (x-x_0)(x-x_1)
$$</span>
again, <span class="math-container">$c_2$</span> is computed by demanding <span class="math-container">$p_2(x_2)=f_2$</span>:
<span class="math-container">$$
f_0+\frac{f_1-f_0}{x_1-x_0}(x_2-x_0) + c_2(x_2-x_0)(x_2-x_1) = f_2 \Leftrightarrow c_2 = \cdots
$$</span></p>
</li>
</ol>
<p>This way, you recover the recursive definition of the divided differences, with the correspondence <span class="math-container">$c_k = f[x_0, \cdots, x_k]$</span>. I think you can now reconstruct the inductive step.</p>
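<p>The recursion above is exactly how divided-difference tables are computed in practice. A small sketch (function names are ours; the test function <span class="math-container">$f(x)=x^3$</span> has degree 3, so its degree-3 interpolant reproduces it exactly):</p>

```python
def divided_differences(xs, ys):
    # coef[k] ends up equal to f[x_0, ..., x_k], built level by level
    coef = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - 1, level - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - level])
    return coef

def newton_eval(xs, coef, x):
    # L(x) = sum_k f[x_0..x_k] * prod_{i<k} (x - x_i), Horner-style
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 3 for x in xs]          # f(x) = x^3
coef = divided_differences(xs, ys)
print(newton_eval(xs, coef, 1.5))  # 3.375
```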
|
2,727,598 | <p>Given p is a prime number greater than 2, and</p>
<p>$ 1 + \frac{1}{2} + \frac{1}{3} + ... \frac{1}{p-1} = \frac{N}{p-1}$ </p>
<p>how do I show, $ p | N $ ???</p>
<p>The previous part of this question had me factor $ x^{p-1} -1$ mod $p$. Which I think is just plainly $(x-1) ... (x-(p-1))$ </p>
| quasi | 400,434 | <p>Presumably, the intended problem was to show, assuming $p$ is an odd prime, that
$$1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p-1} = \frac{N}{(p-1)!}$$
implies $p{\,\mid\,}N$.
<p>
It suffices to show the $\text{LHS}$ is congruent to $0$, mod $p$.
<p>
But mod $p$, the fractions
$$1, \frac{1}{2}, \frac{1}{3},..., \frac{1}{p-1}$$
are just a permutation of
$$1,2,3,...,p-1$$
which sums to ${\large{\frac{(p-1)p}{2}}}$, hence is congruent to $0$, mod $p$.</p>
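<p>A quick computational check (a sketch; the helper name <code>N_of</code> is ours, and <span class="math-container">$N$</span> is computed as the integer <span class="math-container">$\sum_{k=1}^{p-1}(p-1)!/k$</span>):</p>

```python
from math import factorial

def N_of(p):
    # N = (p-1)! * (1 + 1/2 + ... + 1/(p-1)); each (p-1)!/k is an integer
    return sum(factorial(p - 1) // k for k in range(1, p))

for p in [3, 5, 7, 11, 13, 17, 19]:
    assert N_of(p) % p == 0, p
print("N is divisible by p for every odd prime tested")
```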
|
476,955 | <p>Let $X$ be a set with 4 elements. Is it possible to have a topology on $X$ with 14 open sets?</p>
| MJD | 25,554 | <p>Suppose there are exactly two non-open sets, say $S$ and $T$, and that at least one of them, say $S = \{a\}$, is a 1-set. Each 1-set $\{a\}$ can be written as the intersection of 2-sets in three ways:</p>
<p>$$ \{a, b\}\cap \{a,c\} = \{a\}\\
\{a, b\}\cap \{a,d\} = \{a\}\\
\{a, c\}\cap \{a,d\} = \{a\}
$$</p>
<p>If $S =\{a\}$ is non-open, then at most one of the 2-sets is non-open, and then at least one of the intersections above is an intersection of two open sets, so $S$ is open, which is a contradiction.</p>
<p>So all of the 1-sets are open. But then the topology is discrete and all 16 subsets of $X$ are open.</p>
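<p>The conclusion can also be confirmed by brute force: with <span class="math-container">$\emptyset$</span> and <span class="math-container">$X$</span> fixed, there are only <span class="math-container">$2^{14}$</span> candidate families of subsets of a 4-element set to test for closure under union and intersection (a sketch; subsets are encoded as 4-bit masks):</p>

```python
n = 4
FULL = (1 << n) - 1  # the whole set X as a bitmask
middle = [s for s in range(1 << n) if s not in (0, FULL)]

sizes = set()
for bits in range(1 << len(middle)):
    fam = {0, FULL}
    for i, s in enumerate(middle):
        if bits >> i & 1:
            fam.add(s)
    # a finite topology = family containing {}, X, closed under union/intersection
    if all(a | b in fam and a & b in fam for a in fam for b in fam):
        sizes.add(len(fam))
print(sorted(sizes))
```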
|
606,380 | <blockquote>
<p>If <span class="math-container">$abc=1$</span> and <span class="math-container">$a,b,c$</span> are positive real numbers, prove that <span class="math-container">$${1 \over a+b+1} + {1 \over b+c+1} + {1 \over c+a+1} \le 1\,.$$</span></p>
</blockquote>
<p>The whole problem is in the title. If you wanna hear what I've tried, well, I've tried multiplying both sides by 3 and then using the AM-GM inequality. <span class="math-container">$${3 \over a+b+1} \le \sqrt[3]{{1\over ab}} = \sqrt[3]{c}$$</span> By adding the inequalities I get <span class="math-container">$$ {3 \over a+b+1} + {3 \over b+c+1} + {3 \over c+a+1} \le \sqrt[3]a + \sqrt[3]b + \sqrt[3]c$$</span> And then if I prove that this is less than or equal to 3, then I've solved the problem. But the thing is, it's not always less than or equal to 3 (obviously, because you can think of a situation like <span class="math-container">$a=354$</span>, <span class="math-container">$b={1\over 354}$</span> and <span class="math-container">$c=1$</span>. Then the sum is a lot bigger than 3).</p>
<p>So everything that I try doesn't work. I'd like to get some ideas. Thanks.</p>
| Community | -1 | <p>For my earlier comment: By expanding everything I mean, you can clear the denominator, write down everything in terms of symmetric polynomials, and try to use AM-GM to compare them.</p>
<p>On the other hand, there is also a one liner, similar to math110's solution:</p>
<p>$$\frac{1}{a+b+1} \leq \frac{2c+ab}{2(a+b+c)+ab+bc+ca}$$</p>
<p>After clearing the denominator, this is equivalent to $(c-1)^2(a+b) \ge 0$.</p>
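<p>The claimed factorization can be spot-checked numerically (the helper <code>gap</code> is a name introduced here; <span class="math-container">$c$</span> is forced to <span class="math-container">$1/(ab)$</span> so that <span class="math-container">$abc=1$</span>):</p>

```python
import random

def gap(a, b):
    # with c = 1/(ab): returns (lhs - rhs, claimed value (a+b)(c-1)^2 / c)
    c = 1.0 / (a * b)
    lhs = (a + b + 1) * (2 * c + a * b)
    rhs = 2 * (a + b + c) + a * b + b * c + c * a
    return lhs - rhs, (a + b) * (c - 1) ** 2 / c

rng = random.Random(1)
for _ in range(1000):
    d, fac = gap(rng.uniform(0.01, 10.0), rng.uniform(0.01, 10.0))
    assert abs(d - fac) < 1e-8 * max(1.0, abs(fac)) and fac >= 0.0
print("lhs - rhs matches (a+b)(c-1)^2/c >= 0 on all samples")
```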
|
606,380 | <blockquote>
<p>If <span class="math-container">$abc=1$</span> and <span class="math-container">$a,b,c$</span> are positive real numbers, prove that <span class="math-container">$${1 \over a+b+1} + {1 \over b+c+1} + {1 \over c+a+1} \le 1\,.$$</span></p>
</blockquote>
<p>The whole problem is in the title. If you wanna hear what I've tried, well, I've tried multiplying both sides by 3 and then using the AM-GM inequality. <span class="math-container">$${3 \over a+b+1} \le \sqrt[3]{{1\over ab}} = \sqrt[3]{c}$$</span> By adding the inequalities I get <span class="math-container">$$ {3 \over a+b+1} + {3 \over b+c+1} + {3 \over c+a+1} \le \sqrt[3]a + \sqrt[3]b + \sqrt[3]c$$</span> And then if I prove that this is less than or equal to 3, then I've solved the problem. But the thing is, it's not always less than or equal to 3 (obviously, because you can think of a situation like <span class="math-container">$a=354$</span>, <span class="math-container">$b={1\over 354}$</span> and <span class="math-container">$c=1$</span>. Then the sum is a lot bigger than 3).</p>
<p>So everything that I try doesn't work. I'd like to get some ideas. Thanks.</p>
| nguyenhuyenag | 410,198 | <p>Using the Cauchy-Schwarz inequality we have
<span class="math-container">$$\frac{1}{1+a+b}=\frac{c+2}{(c+1+1)(1+a+b)} \leqslant \frac{c+2}{(\sqrt{c}+\sqrt{a}+\sqrt{b})^2}.$$</span>
Therefore
<span class="math-container">$$\frac{1}{1 + a + b} + \frac{1}{1 + b + c} + \frac{1}{1 + c + a} \leqslant \frac{a+b+c+6}{(\sqrt{a}+\sqrt{b}+\sqrt{c})^2}.$$</span>
But <span class="math-container">$$(\sqrt{a}+\sqrt{b}+\sqrt{c})^2 = a+b+c+2(\sqrt{ab}+\sqrt{bc}+\sqrt{ca}) \geqslant a+b+c+6,$$</span> where the last step is AM-GM together with <span class="math-container">$\sqrt{ab}\,\sqrt{bc}\,\sqrt{ca}=abc=1$</span>.
So
<span class="math-container">$$\frac{1}{1 + a + b} + \frac{1}{1 + b + c} + \frac{1}{1 + c + a} \leqslant 1.$$</span></p>
|
1,678,687 | <p>Find the point $(x_0, y_0)$ on the line $ax + by = c$ that is closest to the origin. </p>
<p>According to this <a href="http://www.intmath.com/plane-analytic-geometry/perpendicular-distance-point-line.php" rel="nofollow noreferrer">source</a>, I thought that $\left( -\frac{ac}{a^{2}+b^{2}}, -\frac{bc}{a^{2}+b^{2}} \right)$ was the point but it doesn't seem to be correct. Thanks for any help.</p>
| πr8 | 302,863 | <p>Calling $\bar v =(a,b), \bar x=(x,y)$, the line has equation $\bar v \cdot \bar x=c$.</p>
<p>Cauchy-Schwarz gives that $\vert \bar v\vert \cdot \vert \bar x\vert \ge c$, with equality when $\bar v, \bar x$ are linearly dependent.</p>
<p>Setting $\bar x = \lambda\bar v$ and plugging it into $\bar v \cdot \bar x=c$, we see that equality is attained at $\bar x = \frac{c}{\vert \bar v \vert ^2}\bar v$, where $\vert \bar x \vert ^2=\frac{c^2}{\vert \bar v \vert ^2}$.</p>
<p>This corresponds to $(x,y)=\left(\frac{ac}{a^2+b^2},\frac{bc}{a^2+b^2}\right)$ and $d=\frac{c}{\sqrt{a^2+b^2}}$.</p>
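<p>A numerical sketch of this answer (the concrete line <span class="math-container">$3x+4y=10$</span> is an arbitrary choice of ours):</p>

```python
import math

a, b, c = 3.0, 4.0, 10.0
x0, y0 = a * c / (a**2 + b**2), b * c / (a**2 + b**2)  # claimed closest point

assert abs(a * x0 + b * y0 - c) < 1e-12          # the point lies on the line
d = math.hypot(x0, y0)
assert abs(d - c / math.hypot(a, b)) < 1e-12     # distance c / sqrt(a^2 + b^2)

# brute force: any other point on the line is at least as far from the origin
for t in range(-1000, 1001):
    x = t / 100.0
    y = (c - a * x) / b                           # stay on the line ax + by = c
    assert math.hypot(x, y) >= d - 1e-9
print((x0, y0), d)  # the closest point and its distance
```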
|
1,678,687 | <p>Find the point $(x_0, y_0)$ on the line $ax + by = c$ that is closest to the origin. </p>
<p>According to this <a href="http://www.intmath.com/plane-analytic-geometry/perpendicular-distance-point-line.php" rel="nofollow noreferrer">source</a>, I thought that $\left( -\frac{ac}{a^{2}+b^{2}}, -\frac{bc}{a^{2}+b^{2}} \right)$ was the point but it doesn't seem to be correct. Thanks for any help.</p>
| choco_addicted | 310,026 | <p>Since the given question is tagged <a href="/questions/tagged/multivariable-calculus" class="post-tag" title="show questions tagged 'multivariable-calculus'" rel="tag">multivariable-calculus</a>, I will use a Lagrange multiplier. Define $f,g$ as
\begin{align}
f(x,y)&=x^2+y^2\\
g(x,y)&=ax+by-c=0,
\end{align}
Then there exists $\lambda\in \mathbb{R}$ such that
$$
(2x,2y)=\lambda(a,b)
$$
and so $2x=\lambda a$ and $2y=\lambda b$. Then
$$
ax+by-c=\frac{1}{2}a^2\lambda +\frac{1}{2}b^2\lambda -c=0
$$
and so
$$
\lambda=\frac{2c}{a^2+b^2}
$$
It means that $f(x,y)$ has a critical point at $(x,y)=\left(\frac{ac}{a^2+b^2},\frac{bc}{a^2+b^2}\right)$. It is the closest point.</p>
|
1,586,607 | <p>This is something I've been thinking about lately;</p>
<p>$$\int_0^1 \int_0^1 \frac{1}{1-(xy)^2} dydx$$
Solutions I've read involve making the substitutions: $x= \frac{sin(u)}{cos(v)}$ and $y= \frac{sin(v)}{cos(u)}$. This reduces the integral to the area of a right triangle with both legs of length $\frac{\pi}{2}$. My problem is that coming up with this substitution is not at all obvious to me, and realizing how the substitution distorts the unit square into a right triangle seems to require a lot of reflection. My approach without fancy tricks involves letting $u = xy$ and then the integral "simplifies" accordingly:</p>
<p>$\begin{align*} \int_0^1 \int_0^1 \frac{1}{1-(xy)^2} dydx &= \int_0^1\frac{1}{x}\int_0^x \frac{1}{1-u^2}dudx\\
&= \int_0^1\frac{1}{2x}\int_0^x \frac{1}{1-u}+\frac{1}{1+u}dudx\\
&= \int_0^1\frac{1}{2x}ln\left(\frac{1+x}{1-x}\right)dx
\end{align*}$ </p>
<p>If I've done everything right this should be $\frac{\pi^2}{8}$ but I haven't figured out how to solve it.</p>
| Jan Eerland | 226,665 | <p>If you haven't an idea, just start:</p>
<p>$$\int_{0}^{1}\int_{0}^{1}\frac{1}{1-(xy)^2}\space\text{d}y\text{d}x=$$
$$\int_{0}^{1}\left[\int_{0}^{1}\frac{1}{1-(xy)^2}\space\text{d}y\right]\text{d}x=$$ </p>
<hr>
<p>For the integrand $u=xy$ and $\text{d}u=x\space\text{d}y$.</p>
<p>This gives a new lower bound $u=x\cdot0=0$ and upper bound $u=x\cdot1=x$:</p>
<hr>
<p>$$\int_{0}^{1}\left[\frac{1}{x}\int_{0}^{x}\frac{1}{1-u^2}\space\text{d}u\right]\text{d}x=$$
$$\int_{0}^{1}\left[\frac{1}{x}\left[\tanh^{-1}\left(u\right)\right]_{0}^{x}\right]\text{d}x=$$
$$\int_{0}^{1}\left[\frac{1}{x}\left(\tanh^{-1}\left(x\right)-\tanh^{-1}\left(0\right)\right)\right]\text{d}x=$$
$$\int_{0}^{1}\left[\frac{1}{x}\left(\tanh^{-1}\left(x\right)-0\right)\right]\text{d}x=$$
$$\int_{0}^{1}\left[\frac{\tanh^{-1}\left(x\right)}{x}\right]\text{d}x=$$
$$\frac{1}{2}\left[\text{Li}_2(x)-\text{Li}_2(-x)\right]_{0}^{1}=$$
$$\frac{1}{2}\left(\left(\text{Li}_2(1)-\text{Li}_2(-1)\right)-\left(\text{Li}_2(0)-\text{Li}_2(-0)\right)\right)=$$
$$\frac{1}{2}\left(\frac{\pi^2}{4}-0\right)=$$
$$\frac{1}{2}\left(\frac{\pi^2}{4}\right)=\frac{\pi^2}{8}$$ </p>
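<p>Alternatively, expanding the integrand as a geometric series, <span class="math-container">$\frac{1}{1-(xy)^2}=\sum_{n\ge0}(xy)^{2n}$</span>, and integrating term by term over the unit square turns the double integral into <span class="math-container">$\sum_{n\ge0}\frac{1}{(2n+1)^2}=\frac{\pi^2}{8}$</span>, consistent with the dilogarithm evaluation above. A quick numerical sketch:</p>

```python
import math

# sum_{n>=0} 1/(2n+1)^2; the tail beyond N terms is ~ 1/(4N)
s = sum(1.0 / (2 * n + 1) ** 2 for n in range(10**6))
print(s, math.pi**2 / 8)
```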
|
904,074 | <p>Is this Convergent or Divergent</p>
<p>$$\int_0^1 \frac{1}{\sin(x)}\mathrm dx $$</p>
<p>So little background to see if I am solid on this topic otherwise correct me please :) </p>
<p>To check for convergence I can look for a "bigger" function and compare if that is convergent, If yes then for sure the one in question is too. So if the "bigger" function is not convergent can we conclude that the function in question is divergent or do we have to check for divergence to? That is a "smaller" function which has to be divergent?</p>
<p>And for this question I have no Idea WHAT kind of function to compare with:/ </p>
| Lucian | 93,448 | <blockquote>
<p><em>Is this Convergent or Divergent</em> ?</p>
</blockquote>
<p>Yes! :-)</p>
<blockquote>
<p><em>I have no Idea WHAT kind of function to compare with:/</em></p>
</blockquote>
<p><strong>Hint:</strong> $\sin x\sim x$ for $x\to0$.</p>
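<p>To see the divergence quantitatively: since <span class="math-container">$\sin x\sim x$</span>, the truncated integral <span class="math-container">$\int_\varepsilon^1 \frac{dx}{\sin x}$</span> grows like <span class="math-container">$\ln(1/\varepsilon)$</span>. A numerical sketch (midpoint rule after the substitution <span class="math-container">$x=e^t$</span>; the step count is an arbitrary choice):</p>

```python
import math

def tail_integral(eps, steps=200_000):
    # midpoint rule for I(eps) = integral of dx / sin(x) over [eps, 1],
    # after the change of variable x = e^t (integrand e^t / sin(e^t))
    lo, hi = math.log(eps), 0.0
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = math.exp(lo + (i + 0.5) * h)
        total += x / math.sin(x) * h
    return total

i2, i4 = tail_integral(1e-2), tail_integral(1e-4)
print(i2, i4, i4 - i2)  # the difference is ~ ln(100): a logarithmic tail
```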
|
2,845,451 | <p>I was reading a number theory book and it was stated that $d^k+(a-d)^k=a[d^{k-1}-d^{k-2}(a-d)+ . . .+(a-d)^{k-1}]$ for $k$ odd. How did they arrive at this factorization? Is there an easy way to see it?</p>
| J.G. | 56,861 | <p>It's slightly easier if we switch to $x=d,\,y=a-d$ so you want to verify $x^k+y^k=(x+y)(x^{k-1}-x^{k-2}y+\cdots +y^{k-1})$ for odd $k$. You can now easily prove this by $k\mapsto k+2$ induction, or by taking it as the odd-$k$, $y=-z$ special case of $x^k-z^k=(x-z)\sum_{j=0}^{k-1}x^j z^{k-1-j}$. If you just want to prove divisibility without worrying about the quotient, you can also work modulo $x+y$, viz. $x=-y\implies x^k=(-1)^k y^k$. The power of $-1$ simplifies to $-1$ as desired for $k$ odd.</p>
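<p>A quick exact-arithmetic check of the odd-$k$ identity $x^k+y^k=(x+y)\sum_{j=0}^{k-1}(-1)^j x^{k-1-j}y^j$ (the helper name is ours):</p>

```python
def factor_product(x, y, k):
    # (x + y) * (x^(k-1) - x^(k-2) y + ... + y^(k-1)), exact integers
    return (x + y) * sum((-1) ** j * x ** (k - 1 - j) * y ** j for j in range(k))

for k in (1, 3, 5, 7, 9):
    for x in range(-5, 6):
        for y in range(-5, 6):
            assert factor_product(x, y, k) == x ** k + y ** k
print("x^k + y^k = (x+y) * alternating sum, for all odd k tested")
```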
|
3,772,193 | <p>If I put the two vectors 1,0,0 and 0,1,0 next to each other</p>
<p><span class="math-container">$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}$$</span></p>
<p>I can see that they are independent since I cannot write <span class="math-container">$(1,0,0)^t$</span> as <span class="math-container">$(0,1,0)^t$</span>.</p>
<p>But I've heard that "If you have a row of <span class="math-container">$0$</span>'s in your matrix, it's linearly dependent"</p>
<p>But the matrix above has a row of zeroes but is still linearly independent. Which means I'm understanding something wrong. What am I missing?</p>
| José Carlos Santos | 446,262 | <p>That statement (“If you have a row of <span class="math-container">$0$</span>'s in your matrix, it's linearly dependent”) holds if you have <span class="math-container">$n$</span> vectors in <span class="math-container">$\Bbb R^n$</span>. That is not the case here: you have <strong>two</strong> vectors in <span class="math-container">$\Bbb R^{\bf 3}$</span>.</p>
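<p>One way to see the dichotomy concretely (a sketch; the determinant helper is ours): a zero row forces a zero determinant only when the matrix is square.</p>

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# THREE vectors in R^3 stacked as columns with a zero row: determinant is 0,
# so they are always linearly dependent ...
assert det3([[1, 0, 2], [0, 1, 5], [0, 0, 0]]) == 0

# ... but the TWO columns in the question have a nonzero 2x2 minor
# (the top block of [[1,0],[0,1],[0,0]]), which certifies independence
minor = 1 * 1 - 0 * 0
assert minor != 0
print("a zero row kills a 3x3 determinant, not the rank of a 3x2 matrix")
```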
|
57,508 | <p>While learning commutative algebra and basic algebraic geometry and trying to understand the structure of results (i.e. what should be proven first and what next) I came to the following question: </p>
<p>Is it possible to prove that $\mathbb A^2-point$ is not an affine variety, if you don't know that the polynomial ring is a unique factorisation domain?</p>
<p>It seems to me, that this question has some meaning, since when we define affine variety, we don't need to use the fact that the polynomial ring is an UFD. Don't we?</p>
| Sándor Kovács | 10,076 | <p>The abstract reason for this is that $\mathbb A^2$ satisfies Serre's property $(S_2)$ (which follows from being smooth) and hence every regular function is determined in codimension $1$. In other words, the point is that a regular function on $\mathbb A^2\setminus \{0\}$ is a rational function on $\mathbb A^2$ and the locus of indeterminacy of a rational function on $\mathbb A^2$ is of pure codimension $1$ (think of meromorphic functions on $\mathbb C^2$), so if you leave out something of codimension at least $2$, then you cannot get new regular functions. This implies that the natural injection of the ring of regular functions on $\mathbb A^2\setminus \{0\}$ into the ring of regular functions on $\mathbb A^2$ is an isomorphism, but then they can't be both affine as then they would have to be isomorphic. (Note that I did not just say that their rings of regular functions are isomorphic, but that the isomorphism is induced by the embedding).</p>
<p>More generally this argument shows that if $X$ is an affine variety of dimension at least $2$ and $P\in X$ is a closed point such that $\mathrm{depth}_P\mathscr O_X\geq 2$ (this is automatic if for example $X$ is normal, or a complete intersection in something smooth, or Cohen-Macaulay), then $X\setminus\{P\}$ is <em>not</em> affine.</p>
<p>This is an instance of Hartogs' theorem on extending holomorphic functions on normal complex analytic spaces. See <a href="https://mathoverflow.net/questions/45347/why-does-the-s2-property-of-a-ring-correspond-to-the-hartogs-phenomenon/45354#45354">this</a> MO question for more.</p>
|
2,939,058 | <p>The question to be solved is:</p>
<p><span class="math-container">$$ \lim_{n \to \infty} \left( \ \sum_{k=10}^{n+9} \frac{2^{11(k-9)/n}}{\log_2 e^{n/11}} \ - \sum_{k=0}^{n-1} \frac{58}{\pi\sqrt{(n-k)(n+k)}} \ \right)$$</span></p>
<p>The first thing that occured to me was to transform the limits into definite integrals using the limit definition of integrals, so it'll become easier to evaluate.</p>
<p>However, I have no clue how to convert them into definite integrals. Could anyone please shed some light on how to proceed? Or is there a better way to solve this problem?</p>
| mathlove | 78,967 | <blockquote>
<p>If <span class="math-container">$\gcd(y,z)=1$</span>, is the biconditional "<span class="math-container">$yz$</span> is perfect <span class="math-container">$\iff D(y)D(z)=2s(y)s(z)$</span>" always true?</p>
</blockquote>
<p>Yes.</p>
<p>If <span class="math-container">$yz$</span> is perfect with <span class="math-container">$\gcd(y,z)=1$</span>, then since
<span class="math-container">$$\sigma(yz)=\sigma(y)\sigma(z)=2yz$$</span>
we have
<span class="math-container">$$\begin{align}D(y)D(z)&=(2y-\sigma(y))(2z-\sigma(z))\\\\&=(2y-\sigma(y))\left(2z-\frac{2yz}{\sigma(y)}\right)
\\\\&=4yz-\frac{4y^2z}{\sigma(y)}-2z\sigma(y)+2yz
\\\\&=4yz-2z\sigma(y)-\frac{4y^2z}{\sigma(y)}+2yz
\\\\&=2(\sigma(y)-y)\left(\frac{2yz}{\sigma(y)}-z\right)
\\\\&=2(\sigma(y)-y)(\sigma(z)-z)
\\\\&=2s(y)s(z)\end{align}$$</span></p>
<hr>
<p>If <span class="math-container">$D(y)D(z)=2s(y)s(z)$</span> and <span class="math-container">$\gcd(y,z)=1$</span>, then
<span class="math-container">$$\begin{align}&(2y-\sigma(y))(2z-\sigma(z))=2(\sigma(y)-y)(\sigma(z)-z)
\\\\&\implies 4yz-2y\sigma(z)-2z\sigma(y)+\sigma(y)\sigma(z)=2\sigma(y)\sigma(z)-2z\sigma(y)-2y\sigma(z)+2yz
\\\\&\implies 2yz=\sigma(y)\sigma(z)
\\\\&\implies 2yz=\sigma(yz)
\\\\&\implies \text{$yz$ is perfect}\end{align}$$</span></p>
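<p>Both directions can be spot-checked on the coprime splittings of the first perfect numbers, <span class="math-container">$6=2\cdot3$</span>, <span class="math-container">$28=4\cdot7$</span>, <span class="math-container">$496=16\cdot31$</span> (a sketch; helper names are ours and <span class="math-container">$\sigma$</span> is computed by brute force):</p>

```python
def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def D(n):  # deficiency 2n - sigma(n)
    return 2 * n - sigma(n)

def s(n):  # aliquot sum sigma(n) - n
    return sigma(n) - n

for y, z in [(2, 3), (4, 7), (16, 31)]:
    assert sigma(y * z) == 2 * y * z        # y*z is perfect
    assert D(y) * D(z) == 2 * s(y) * s(z)   # the identity proved above
print("D(y)D(z) = 2 s(y) s(z) for each coprime splitting tested")
```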
|
2,048,962 | <blockquote>
<p><strong>Question:</strong> How would you solve this sinusoidal equation:</p>
<blockquote>
<p>Solve $5\cos(6x)+6=9$. Assume $n$ is an integer and the answers are in degrees.</p>
<ul>
<li><p>$-8.86+n\cdot 60$</p></li>
<li><p>$-3.54+n\cdot 60$</p></li>
<li><p>$3.54+n\cdot 60$</p></li>
<li><p>$8.86+n\cdot 60$</p></li>
<li><p>$15.13+ n\cdot 360$</p></li>
<li><p>$126.87+n\cdot 360$</p></li>
</ul>
</blockquote>
</blockquote>
<p>I'm sort of new to this. But I have tried to isolate the trigonometric parts, and I get$$\cos(6x)=\frac 35\tag{1}$$
But after this, I'm not sure what to do. Do I take the $\arccos$ of both sides? If so, what will $\arccos\frac 35$ evaluate to? I don't think it's going to be a "perfect" number such as $\dfrac \pi 3$.</p>
| Ovi | 64,460 | <p>I don't believe there is a "nice" way to do it. Anyway the multiple choices all being terminating decimals should give you that hint.</p>
<p>Starting from $\cos(6x)=\dfrac 35$, <a href="https://www.wolframalpha.com/input/?i=(arccos%203%2F5)%2F6%20convert%20to%20degrees" rel="nofollow noreferrer">WolframAlpha</a> gives $8.86^{\circ}$ as a solution (like you said just take $\arccos$ of both sides and divide by $6$).</p>
<p>Now you know if $\cos (6 \cdot 8.86) = \dfrac 35$, then $\cos (6 \cdot 8.86 + 360n) = \dfrac 35$, so $\cos [6(8.86 + 60n)] = \dfrac 35$</p>
<p>Therefore $x = 8.86 + 60n$ is the solution.</p>
<p>If you really want to do it calculator-free <a href="https://math.stackexchange.com/questions/421892/how-to-calc-arc-sine-without-a-calculator">you can</a>, but I wouldn't recommend it.</p>
<p><strong>EDIT:</strong></p>
<p>Since cosine is an even function (i.e. $\cos (-\theta) = \cos \theta )$, another family of solutions is $-8.86 - 60n$, or just $-8.86 + 60n$. So the question has two answers.</p>
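<p>Numerically (a sketch; names are ours): <span class="math-container">$\frac{1}{6}\arccos\frac35 \approx 8.86^\circ$</span>, and both solution families indeed satisfy the original equation.</p>

```python
import math

base = math.degrees(math.acos(3 / 5)) / 6     # ~ 8.86 degrees
assert abs(base - 8.86) < 0.01

for n in range(-3, 4):
    for x in (base + 60 * n, -base + 60 * n):  # both families of solutions
        lhs = 5 * math.cos(math.radians(6 * x)) + 6
        assert abs(lhs - 9) < 1e-9
print(round(base, 2))  # 8.86
```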
|
1,406,325 | <p>If $p(x)$ and $q(x)$ are continuous functions for any $x$, can $y(x)=\sin(x^2)$ be solution of the diff equation $y''+p(x)y'+q(x)y=0$ in some interval $I=[a,b] $containing $0$? I think it is not as simple as replacing $\sin(x^2)$ into the equation and analyzing the obtained expression.</p>
| MathMajor | 113,330 | <p>We have $y = \sin (x^2)$, $y' = 2x \cos (x^2)$, and $y'' = 2 \cos(x^2) - 4x^2 \sin (x^2)$. Substituting,</p>
<p>$$ 2 \cos(x^2) - 4x^2 \sin (x^2) + p(x) 2x \cos (x^2) + q(x) \sin (x^2) = 0$$</p>
<p>Following A.G.'s solution given in the comments, setting $x=0$ in the equation gives $2=0$: the terms involving $p$ and $q$ vanish at $x=0$ whenever $p$ and $q$ are bounded in a neighbourhood of $0$ (which continuity would guarantee). Hence $p(x)$ and $q(x)$ cannot both be continuous.</p>
|
1,406,325 | <p>If $p(x)$ and $q(x)$ are continuous functions for any $x$, can $y(x)=\sin(x^2)$ be solution of the diff equation $y''+p(x)y'+q(x)y=0$ in some interval $I=[a,b] $containing $0$? I think it is not as simple as replacing $\sin(x^2)$ into the equation and analyzing the obtained expression.</p>
| A.Γ. | 253,273 | <p>One can also use the standard argument about existence and uniqueness of Cauchy problem like <a href="https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem" rel="nofollow noreferrer">Picard–Lindelöf theorem</a>. By this theorem, a linear differential equation with continuous coefficients has a <em>unique</em> solution with given initial condition at a point (see, for example, <a href="https://math.stackexchange.com/questions/1361997/does-the-cauchy-lipschitz-theorem-extend-to-higher-order-des">here</a>). Assuming $y(x)=\sin(x^2)$ to be a solution, it solves the Cauchy problem with $y(0)=y'(0)=0$ together with the trivial solution $y=0$. It contradicts the uniqueness.</p>
|
4,155,792 | <p>I have seen many math books, and some of them, very good books, that say that <span class="math-container">$0!=1$</span> 'by convention'. I think that <span class="math-container">$0!$</span> <em>must</em> be <span class="math-container">$1$</span> because it is the product of the empty set. That is, for <span class="math-container">$a\neq 0$</span>,
<span class="math-container">$$a^0=0!=\prod\emptyset=1$$</span>
What do you think?</p>
<p>EDIT: I'm not asking for a proof that <span class="math-container">$0!=1$</span>, I really don't be needed to prove that. The question is, <em>again</em>, 'what do you think?', that is, do you think that is more convenient to define <span class="math-container">$0!=1$</span> by... well, convention, or is it better to see <span class="math-container">$0!$</span> as an empty product?</p>
<p>In other words, this is a <strong>soft</strong> question.</p>
| johnnyb | 298,360 | <p>I think if you are teaching factorial in a procedural way then trying to.explain the product of an empty set is pretty far outside the context, and simply saying it is a matter of convention is more cognitively in line with where the student is at.</p>
<p>When teaching at the board, I try to separate out <em>what</em> a factorial is (number of ways a set can be rearranged) vs <em>how</em> to calculate it. Asking the class to tell me how many different ways they can arrange an empty set usually gives them the picture.</p>
|
2,061,695 | <p>I'm examining function slope and determine relative extrema of function (local minimum and local maximum)</p>
<p>The example is as following:
$
y = x^8 e^{-5x}
$</p>
<p>To do this I have to determine derivative of this function, which is:
$$
y' = 8x^7e^{-5x}+x^8(-5e^{-5x}) = -e^{-5x}x^7(5x-8)
$$</p>
<p>Then according to my notes I need to solve equation from derivative:
$
-e^{-5x}x^7(5x-8)=0
$</p>
<p>How to solve this equation?</p>
| Mark Fischler | 150,362 | <p>Notice that $e^{-5x}$ will never be zero, so you can divided through by that. Then it is easy to solve the equation, which is
$$
x^7(5x-8)=0
$$
The two solutions are $x=0$ and $x=\frac85$.</p>
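<p>A quick numerical sketch confirming the factored derivative and the critical point at <span class="math-container">$x=8/5$</span> (function names are ours):</p>

```python
import math

def y(x):
    return x ** 8 * math.exp(-5 * x)

def y_prime(x):
    # the factored derivative from the answer: -e^{-5x} x^7 (5x - 8)
    return -math.exp(-5 * x) * x ** 7 * (5 * x - 8)

# both roots annihilate the factored derivative
assert y_prime(0.0) == 0.0 and y_prime(8 / 5) == 0.0

# central-difference check that x = 8/5 is a genuine critical point of y
h = 1e-6
numeric = (y(8 / 5 + h) - y(8 / 5 - h)) / (2 * h)
assert abs(numeric) < 1e-4
print("critical points at x = 0 and x = 8/5")
```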
|
1,968,267 | <p>When doing induction should you always try to put your final answer as the "<em>desired</em> " form? For example if: $$\sum^{n}_{k=1}(k+2)(k+4) = \frac{2n^{3} + 21n^{2} + 67n}{6}$$ we ought to give the final answer as $$\frac{2(k+1)^{3} + 21(k+1)^{2} + 67(k+1)}{6}?$$</p>
<p>I just expanded both the $\text{LHS}_{k+1}$ and the $\text{RHS}_{k+1}$ to show they were equal after the induction. Like this: </p>
<hr>
<p>Show that $$\sum^{n}_{k=1}(k+2)(k+4) = \frac{2n^{3} + 21n^{2} + 67n}{6}$$ for all integers $n \geq 1$.</p>
<p>For $n = 1$,</p>
<p>$$\sum^{1}_{k=1}(k+2)(k+4) = 15$$</p>
<p>and</p>
<p>$$\frac{2(1)^{3} + 21(1)^{2} + 67(1)}{6} = 15$$</p>
<p>Assume that it is true for some integer $n = k$, thus $$\sum^{k}_{j=1}(j+2)(j+4) = \frac{2k^{3} + 21k^{2} + 67k}{6}$$ so the $\text{LHS}_{k+1}$ $$\sum^{k+1}_{j=1}(j+2)(j+4) = \sum^{k}_{j=1}(j+2)(j+4) + (k+3)(k+5)$$ $$= \frac{2k^{3} + 21k^{2} + 67k}{6} + \frac{6(k+3)(k+5)}{6}$$ $$=\frac{2k^{3} + 27k^{2} + 115k + 90}{6}$$ Now the $\text{RHS}_{k+1}$ $$\frac{2(k+1)^{3} + 21(k+1)^{2}+ 67(k+1)}{6} = \frac{2k^{3} + 27k^{2} + 115k + 90}{6}$$ Thus $\text{LHS}_{k+1} = \text{RHS}_{k+1}$. Q.E.D.</p>
| MrYouMath | 262,304 | <p>Another method (maybe overkill :D): The sum $S(n)=\sum_{k=1}^nP(k)$ in which $P(k)$ is a polynomial of $l^{th}$ degree can be expressed by a polynomial in $n$ of degree $l+1$. For your problem $P(k)=(k+2)(k+4)$ and has degree 2. Hence the sum can be expressed as a polynomial $G(n)=a_3n^3+a_2n^2+a_1n+a_0$. Now calculate four terms in your sum and solve the system $G(i)=a_3i^3+a_2i^2+a_1i+a_0=S(i)$. This method is more powerful, as it allows you to derive arbitrary sums over polynomial expressions.</p>
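<p>This method can be carried out exactly with rational arithmetic (a sketch; the elimination helper <code>solve</code> is a hypothetical utility written here, not a library call):</p>

```python
from fractions import Fraction

def S(n):  # the sum, computed directly
    return sum((k + 2) * (k + 4) for k in range(1, n + 1))

def solve(A, b):
    # tiny exact Gauss-Jordan elimination over the rationals (hypothetical helper)
    m = [row[:] + [v] for row, v in zip(A, b)]
    size = len(m)
    for col in range(size):
        pivot = next(r for r in range(col, size) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        m[col] = [v / m[col][col] for v in m[col]]
        for r in range(size):
            if r != col and m[r][col] != 0:
                m[r] = [v - m[r][col] * w for v, w in zip(m[r], m[col])]
    return [row[-1] for row in m]

# fit G(n) = a3 n^3 + a2 n^2 + a1 n + a0 through S(1)..S(4)
A = [[Fraction(n) ** 3, Fraction(n) ** 2, Fraction(n), Fraction(1)] for n in range(1, 5)]
coeffs = solve(A, [Fraction(S(n)) for n in range(1, 5)])
print(coeffs)  # [Fraction(1, 3), Fraction(7, 2), Fraction(67, 6), Fraction(0, 1)]
```

These are exactly the coefficients of $\frac{2n^3+21n^2+67n}{6}$.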
|
176,600 | <p>The question is about factoring extremely large integers but you can have a look at <a href="https://stackoverflow.com/questions/11699464/mathematically-navigating-a-large-2d-numeric-grid-in-c-sharp">this question</a> to see the context if it helps. Please note that I am not very familiar with mathematical notation so would appreciate a verbose description of equations.</p>
<p><strong>The Problem:</strong></p>
<p>The integer in question will ALWAYS be a power of N and will be known to us to calculate against. So let's assume N = 2 for example. That will give us a sequence of numbers like:</p>
<pre><code>2, 4, 8, 16... up to hundreds of thousands of digits.
</code></pre>
<p>I need to find all possible factors (odd, even, prime, etc.) as efficiently as possible.</p>
<p><strong>The Question:</strong></p>
<p>What is the solution and how could I understand this from mathematical and computational perspectives?</p>
<p><strong>EDIT:</strong></p>
<p>Does the fact that each number to be factored is a power of 2 help in eliminating any complexity or computational time?</p>
| Ross Millikan | 1,827 | <p>Are you given $N$? Then if the number to be factored is $A$, you have $A=N^k$ and only have to find $k$. Taking logs, $k=\frac {\log A}{\log N}$. So, yes, if you know $N$ it helps a lot.</p>
<p>If $N$ is prime, all the possible factors of $A$ are of the form $N^m$ for $0 \le m \le k$ so you have them. If $N$ is composite, factor it as $N=p_1^{q_1}p_2^{q_2}\ldots p_n^{q_n}$. Then all the factors are of the form $p_1^{m_1}p_2^{m_2}\ldots p_n^{m_n}$ where $0 \le m_1 \le kq_1$ etc.</p>
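<p>This recipe for composite $N$ can be sketched directly (helper names are ours; trial division is only meant for small illustrative $N$):</p>

```python
def prime_factorization(n):
    # trial division; fine for small illustrative N
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def divisors_of_power(N, k):
    # all divisors of A = N^k, built from p_i^(m_i) with 0 <= m_i <= k * e_i
    divs = [1]
    for p, e in prime_factorization(N).items():
        divs = [d * p ** m for d in divs for m in range(k * e + 1)]
    return sorted(divs)

N, k = 12, 3                            # A = 12^3 = 1728 = 2^6 * 3^3
divs = divisors_of_power(N, k)
A = N ** k
assert len(divs) == (6 + 1) * (3 + 1)   # (k*e1 + 1)(k*e2 + 1) = 28
assert all(A % d == 0 for d in divs)
print(len(divs))  # 28
```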
|
176,600 | <p>The question is about factoring extremely large integers but you can have a look at <a href="https://stackoverflow.com/questions/11699464/mathematically-navigating-a-large-2d-numeric-grid-in-c-sharp">this question</a> to see the context if it helps. Please note that I am not very familiar with mathematical notation so would appreciate a verbose description of equations.</p>
<p><strong>The Problem:</strong></p>
<p>The integer in question will ALWAYS be a power of N and will be known to us to calculate against. So let's assume N = 2 for example. That will give us a sequence of numbers like:</p>
<pre><code>2, 4, 8, 16... up to hundreds of thousands of digits.
</code></pre>
<p>I need to find all possible factors (odd, even, prime, etc.) as efficiently as possible.</p>
<p><strong>The Question:</strong></p>
<p>What is the solution and how could I understand this from mathematical and computational perspectives?</p>
<p><strong>EDIT:</strong></p>
<p>Does the fact that each number to be factored is a power of 2 help in eliminating any complexity or computational time?</p>
| Community | -1 | <p>If $N = 2$ (or any prime) and you can get $k$ as Ross indicated, then the divisors of $A$ are $\{1, N, N^2, \ldots, N^i, \ldots, N^{k} \}.$ If $N$ is a composite, then you will incur extra time complexity in computing the prime factorization of $N = \prod_{i = 1}^{\ell} p_i^{e_i}$ and the divisors of $A$ are all possible numbers of the form $\prod_{i=1}^{\ell} p_i^{r_i}$ where $\mathbf{0} \le r_i \le ke_i.$ Notice $r_i$ can be zero for some terms. There are $\prod_{i=1}^{\ell}(ke_i + 1)$ such possible divisors.</p>
|
1,372,997 | <p>This question originates from the 1996 Canada National Olympiad.</p>
<blockquote>
<p>Let $r_1, r_2, \dots, r_m$ be a given set of $m$ positive rational numbers such that</p>
<p>$\sum\limits^{m}_{k=1}{r_k} = 1 \tag{1}$</p>
<p>Define the function $f$ by </p>
<p>$f(n) = n − \sum\limits^{m}_{k=1}{\lfloor{r_k n}\rfloor} \tag{2}$</p>
<p>for each positive integer $n$, where $\lfloor{x}\rfloor$ denotes the greatest integer less than or equal to x. </p>
<p><strong>Determine the minimum and maximum values of $f(n)$</strong>. </p>
</blockquote>
<hr>
<p>The floor of a real number can be expressed in terms of the fractional part:</p>
<p>$\lfloor{x}\rfloor = x - \{x\}$ </p>
<p>so we can use (1) to re-express (2) as</p>
<p>$f(n) = n - \sum\limits^{m}_{k=1}{(r_k n - \{r_k n\}}) = n - n\sum\limits^{m}_{k=1}{r_k} + \sum\limits^{m}_{k=1}{\{r_k n\}}$
so
$f(n) = \sum\limits^{m}_{k=1}{\{r_k n\}} \tag{3}$</p>
<p>Since $r_i \in \mathbb{Q^+}$ we may write for each $i$:</p>
<p>$r_i = \dfrac{p_i}{q_i}\text{ with }0 < p_i \le q_i; p_i,q_i \in \mathbb{Z^+} \tag{4}$</p>
| Justin | 219,394 | <p>$f(n)=\sum_{k=1}^{m}(r_k n - \lfloor {r_k n}\rfloor)$</p>
<p>$\lfloor {r_k n}\rfloor \leq r_k n$</p>
<p>So, $f(n) \geq 0$</p>
<p>$r_k n - \lfloor {r_k n}\rfloor <1 $</p>
<p>$f(n) < \sum_{k=1}^{m}1$</p>
<p>So, $f(n) < m$</p>
<p>$r_k = p_k / q_k$ where $p_k < q_k$</p>
<p>If $n = q_1 q_2 \cdots q_m - 1$ </p>
<p>Then $n= (q_1 q_2 \cdots q_m-q_k)+(q_k-1)$, where $q_k$ divides the first summand </p>
<p>So, </p>
<p>$r_k n - \lfloor {r_k n}\rfloor = p_k/q_k (q_k-1)- \lfloor {p_k/q_k(q_k-1)}\rfloor $</p>
<p>$= 1-p_k/q_k$</p>
<p>So, $\max(f(n)) = \sum_{k=1}^{m}(1-r_k) = m-1$. Since $f(n)$ is always an integer and $f(n) < m$, this is indeed the maximum; the minimum value $0$ is attained whenever every $r_k n$ is an integer, e.g. at $n = q_1 q_2 \cdots q_m$.</p>
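<p>A numerical sanity check of these bounds (the particular $r_k$ below are my own example, not from the problem):</p>

```python
from fractions import Fraction
from math import floor, lcm   # math.lcm: Python 3.9+

def f(n, rs):
    return n - sum(floor(r * n) for r in rs)

rs = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]   # these sum to 1
m = len(rs)
qs = [r.denominator for r in rs]

n_star = qs[0] * qs[1] * qs[2] - 1   # the n from the answer: 2*3*6 - 1 = 35
n_zero = lcm(*qs)                    # every r_k * n is an integer here
```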
|
1,321,704 | <p>For example: $3^\sqrt5$ versus $5^\sqrt3$</p>
<p>I tried to write numbers as this:</p>
<p>$3^{5^{\frac{1}{2}}}$ and then as
$3^{\frac{1}{2}^5}$</p>
<p>But this method gives the wrong answer because $a^{(b^c)} \ne a^{bc}$</p>
| user134824 | 134,824 | <p>Doing a problem like this takes a combination of knowledge about rules of exponentiation and skill at estimation.</p>
<p>First, raise both $3^{\sqrt 5}$ and $5^{\sqrt 3}$ to the power $\sqrt 5$, which gives the numbers $3^5=243$ and $5^{\sqrt{15}}$ using the rule $(a^b)^c = a^{bc}$. This has the effect of removing one of the square roots from an exponent: now we just need to estimate $5^{\sqrt{15}}$. And $\sqrt{15}>3.5$, so
$$
5^{\sqrt{15}}>5^{3.5}=5^3\sqrt 5 = 125\sqrt 5.
$$
Since $\sqrt 5>2$, we have that $5^{\sqrt{15}}>125\cdot2=250 > 243 = 3^5$.
So
$$
\fbox{$5^{\sqrt 3} > 3^{\sqrt 5}$}.
$$</p>
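<p>Floating point confirms the comparison (a sanity check of the argument, not a substitute for it):</p>

```python
import math

a = 3 ** math.sqrt(5)          # ~ 11.66
b = 5 ** math.sqrt(3)          # ~ 16.24

# the key intermediate step: raise both sides to the power sqrt(5)
lhs = 3 ** 5                   # exactly 243
rhs = 5 ** math.sqrt(15)       # = (5**sqrt(3))**sqrt(5), comfortably above 250
```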
|
2,598,912 | <p>Let $f$ be some function in $L_{loc}^1(\mathbb{R})$ such that, for some $a \in \mathbb{R}$,</p>
<p>$$\int_{|x| \leq r} |f(x)|dx \leq (r+1)^a$$</p>
<p>for all $r \geq 0$. Show that $f(x)e^{-|tx|} \in L^1(\mathbb{R})$ for all $t \in \mathbb{R} \setminus \{0\}$. </p>
<p>I'm having a hard time finding use of the bound described above. Any help would be appreciated. </p>
| zhw. | 228,045 | <p>Hint: The given bound is a polynomial growth estimate. Exponential decay always wins over polynomial growth. To get the given estimate in play, integrate by parts.</p>
<p><strong>Added later</strong>: To see how integration by parts can work here, assume WLOG $f\ge 0$ and $t>0.$ The integral can be written</p>
<p>$$\int_0^\infty (f(x) + f(-x)) e^{-tx}\, dx.$$</p>
<p>For $y\ge 0,$ define $F(y) = \int_0^y (f(x) + f(-x))\, dx.$ Then</p>
<p>$$\int_0^y (f(x) + f(-x)) e^{-tx}\, dx = F(x)e^{-tx}\big|_0^y + t\int_0^y F(x) e^{-tx}\, dx.$$</p>
|
402,310 | <p>Vector math is something I find very interesting. However, we have never been told the link between vectors in physics (usually represented as arrows, e.g. a force vector) and in algebra (e.g. represented like a column matrix). It was really never explained well in classes.</p>
<p>Here are the things I can't wrap my head around:</p>
<ul>
<li>How can a vector (starting from the algebraic definition) be represented as an arrow? Is it correct to assume that a vector (in a 2-dimensional space) $v = [1,1]$ could be represented as an arrow from the origin $[0,0]$ to the point $[1,1]$?</li>
<li>If the above assumption is correct, what does it mean in the physics representation to normalize a vector?</li>
<li>If I have a vector $[1,1]$, would the vector $[-1,1]$ be orthogonal to that first vector? (Because if you draw the arrows they are perpendicular).</li>
<li>How can one translate an object along a vector? Is that simply scalar addition?</li>
</ul>
<p>These questions probably sound really odd, but they come from a lack of decent explanation in both physics and algebra.</p>
| Muphrid | 45,296 | <p>Physicists tend to emphasize a geometric interpretation of vectors, which mathematicians need not do. This is because one of the main uses of vectors in physics is to talk about the geometry of some system, while vectors in general can be used in more abstract senses (and perhaps with no geometric interpretation at all).</p>
<p>Concerning your first bullet point, your interpretation is correct: this is a model of a vector space as an algebra of directions. Each arrow has a starting point and an ending point, which determines both length and orientation. These are the properties of a "direction".</p>
<p>Normalizing vectors is a way to lose the length information while maintaining directionality; it makes vectors unit length.</p>
<p>Yes, you're correct in saying those vectors would be orthogonal. This is part of the geometric interpretation.</p>
<p>As for translating an object along a vector: take the location of the object and move every point of it by the vector. This is vector addition (the vector is added to each point's position), not scalar addition.</p>
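<p>All four points above can be made concrete in a few lines of plain Python (coordinates in $\mathbb{R}^2$, no library assumed):</p>

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    """Keep the direction of the arrow, rescale its length to 1."""
    n = norm(v)
    return [x / n for x in v]

def dot(u, v):
    """Dot product; zero means the arrows are perpendicular."""
    return sum(a * b for a, b in zip(u, v))

def translate(point, v):
    """Move a point along a vector: component-wise (vector) addition."""
    return [p + x for p, x in zip(point, v)]
```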
|
2,088,564 | <blockquote>
<p>Sum of binomial product $\displaystyle \sum^{n}_{r=0}\binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}$</p>
</blockquote>
<p>Simplifying $\displaystyle \frac{p!}{r!\cdot (p-r)!} \cdot \frac{q!}{r!\cdot (q-r)!}\cdot \frac{(n+r)!}{(p+q)! \cdot (n+r-p-q)!}$.</p>
<p>Could someone help me with this? Thanks.</p>
| epi163sqrt | 132,007 | <p>It is convenient to use the <em>coefficient of</em> operator $[t^r]$ to denote the coefficient of $t^r$ in a series. This way we can write e.g.
\begin{align*}
\binom{p}{r}=[t^r](1+t)^p
\end{align*}</p>
<blockquote>
<p>The following is valid
\begin{align*}
\sum_{r=0}^n\binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}=\binom{n}{p}\binom{n}{q}
\end{align*}</p>
<p>We obtain
\begin{align*}
\sum_{r=0}^n&\binom{p}{r}\binom{q}{r}\binom{n+r}{p+q}\\
&=\sum_{r=0}^n\binom{p}{r}\binom{q}{q-r}\binom{n+r}{p+q}\\
&=\sum_{r=0}^\infty[t^r](1+t)^p[v^{q-r}](1+v)^q[w^{p+q}](1+w)^{n+r}\tag{1}\\
&=[v^q](1+v)^q[w^{p+q}](1+w)^n\sum_{r=0}^\infty(v(1+w))^r[t^r](1+t)^p\tag{2}\\
&=[v^q](1+v)^q[w^{p+q}](1+w)^n(1+v(1+w))^p\tag{3}\\
&=[v^q](1+v)^{p+q}[w^{p+q}](1+w)^n\left(1+\frac{vw}{1+v}\right)^p\tag{4}\\
&=[v^q](1+v)^{p+q}[w^{p+q}]\sum_{k=0}^{p+q}\binom{n}{k}w^k\sum_{j=0}^p\binom{p}{j}\left(\frac{v}{1+v}\right)^jw^j\tag{5}\\
&=[v^q](1+v)^{p+q}\sum_{k=0}^{p+q}\binom{n}{k}[w^{p+q-k}]\sum_{j=0}^p\binom{p}{j}\left(\frac{v}{1+v}\right)^jw^j\\
&=[v^q](1+v)^{p+q}\sum_{k=0}^{p+q}\binom{n}{k}\binom{p}{p+q-k}\frac{v^{p+q-k}}{(1+v)^{p+q-k}}\tag{6}\\
&=\sum_{k=p}^{p+q}\binom{n}{k}\binom{p}{k-q}[v^{k-p}](1+v)^k\tag{7}\\
&=\sum_{k=p}^{p+q}\binom{n}{k}\binom{p}{k-q}\binom{k}{p}\\
&=\binom{n}{p}\sum_{k=p}^{p+q}\binom{p}{k-q}\binom{n-p}{k-p}\tag{8}\\
&=\binom{n}{p}\sum_{k=0}^{q}\binom{p}{k+p-q}\binom{n-p}{k}\tag{9}\\
&=\binom{n}{p}\sum_{k=0}^{q}\binom{p}{q-k}\binom{n-p}{k}\tag{10}\\
&=\binom{n}{p}\binom{n}{q}
\end{align*}
and the claim follows.</p>
</blockquote>
<p><em>Comment:</em></p>
<ul>
<li><p>In (1) we apply the <em>coefficient of</em> operator for each binomial coefficient.</p></li>
<li><p>In (2) we use the linearity of the <em>coefficient of</em> operator and apply the rule $[t^{p}]t^qA(t)=[t^{p-q}]A(t)$.</p></li>
<li><p>In (3) we apply the substitution rule of the <em>coefficient of</em> operator with $t:=v(1+w)$
\begin{align*}
A(z)=\sum_{r=0}^\infty a_rz^r=\sum_{r=0}^\infty z^r[t^r]A(t)
\end{align*}</p></li>
<li><p>In (4) we factor out $(1+v)^p$.</p></li>
<li><p>In (5) we apply the binomial summation formula twice. Since we want to select the coefficient of $w^{p+q}$ we can restrict the upper limit of the left sum with $k=p+q$.</p></li>
<li><p>In (6) we select the coefficient of $w^{p+q-k}$.</p></li>
<li><p>In (7) we do some simplifications and can restrict the lower limit of the sum with $k=p$.</p></li>
<li><p>In (8) we apply the <em>cross product</em> $\binom{n}{k}\binom{k}{j}=\binom{n}{j}\binom{n-j}{k-j}$.</p></li>
<li><p>In (9) we shift the summation index and start from $k=0$.</p></li>
<li><p>In (10) we apply the <em><a href="https://en.wikipedia.org/wiki/Vandermonde's_identity#Chu.E2.80.93Vandermonde_identity" rel="noreferrer">Chu-Vandermonde identity</a></em>.</p></li>
</ul>
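<p>Before (or after) wading through the derivation, the claimed identity is cheap to spot-check by brute force:</p>

```python
from math import comb

def lhs(n, p, q):
    return sum(comb(p, r) * comb(q, r) * comb(n + r, p + q)
               for r in range(n + 1))

def rhs(n, p, q):
    return comb(n, p) * comb(n, q)

# exhaustive check over a small box of parameters
ok = all(lhs(n, p, q) == rhs(n, p, q)
         for n in range(8) for p in range(n + 1) for q in range(n + 1))
```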
|
637,699 | <p>We have a triangle $ABC$. Whats the value of angle $C$?</p>
<p>$$\sin^2(A)+\sin^2(B)-\sin^2(C)=1$$</p>
<p>I made a small java program and it gave me an answer. I want to know how to make it through other ways.</p>
| Lucian | 93,448 | <p>$$\sin^2 A+\sin^2 B-\sin^2(A+B)=\sin^2 A+\sin^2 B-(\sin A\cos B+\sin B\cos A)^2=$$</p>
<p>$$=\sin^2 A+\sin^2 B-(\sin^2A\cos^2B+\sin^2B\cos^2A+2\sin A\cos A\sin B\cos B)=$$</p>
<p>$$=\sin^2A\,(1-\cos^2B)+\sin^2B\,(1-\cos^2A)-2\sin A\cos A\sin B\cos B=$$</p>
<p>$$=2\sin^2A\sin^2B-2\sin A\cos A\sin B\cos B=1\iff$$</p>
<p>$$\iff2\sin^2A\sin^2B-1=2\sin A\cos A\sin B\cos B\to$$</p>
<p>$$\to(2\sin^2A\sin^2B-1)^2=4\sin^2A\cos^2A\sin^2B\cos^2B\iff$$</p>
<p>$$\iff4\sin^4A\sin^4B-4\sin^2A\sin^2B+1=4\sin^2A(1-\sin^2A)\sin^2B(1-\sin^2B)\iff$$</p>
<p>$$\iff4x^2y^2-4xy+1=4x(1-x)y(1-y)\iff\ldots\iff x^2+(y-2)x+\frac1{4y}=0.$$</p>
<p>Now we have ourselves a quadratic equation in <em>x</em>, with <em>y</em> as a parameter, where <em>x</em> is $\sin^2A$, and <em>y</em> is $\sin^2B$. Could you take it from here ? :-)</p>
|
17,610 | <p>Let $V$ be a finite dimensional highest weight representation of a (semi)-simple Lie algebra. For each $n\ge 0$ take $a_n$ to be the dimension of the space of invariant tensors in $\otimes^n V$.</p>
<p>In certain cases there is a formula for $a_n$. For example, for $V$ the two dimensional representation of $sl(2)$ we get $a_n=0$ if $n$ is odd and for $n$ even we get the ubiquitous Catalan numbers. In general I don't expect a formula but the sequence does satisfy a linear recurrence relation with polynomial coefficients (known as D-finite).</p>
<p>For example, for the seven dimensional representation of $G_2$ this sequence starts:<br>
1, 0, 1, 1, 4, 10, 35, 120, 455, 1792, 7413, 31780, 140833, 641928, 3000361, 14338702, 69902535, 346939792, 1750071307, 8958993507, 46484716684, 244187539270, 1297395375129, 6965930587924<br>
for more background see <a href="http://www.oeis.org/A059710" rel="nofollow">http://www.oeis.org/A059710</a> </p>
<p>This satisfies the recurrence<br>
$(n+5)(n+6)a_n=2(n-1)(2n+5)a_{n-1}+(n-1)(19n+18)a_{n-2}+ 14(n-1)(n-2)a_{n-3}$</p>
<p><strong>Question</strong> How does one find these recurrence relations?</p>
<p>Then I also have a more challenging follow-up question. The space of invariant tensors in $\otimes^n V$ also has an action of the symmetric group $S_n$ and so a Frobenius character which is a symmetric function of degree $n$.</p>
<p><strong>Question</strong> How does one calculate these symmetric functions?</p>
<p>I know these can be calculated using plethysms individually. I am hoping for something along the lines of the first question.</p>
<p><strong>Further remarks</strong> David's answer solves the problem theoretically but I want to make some remarks about the practicalities. This is in case anyone wants to experiment and also because I believe there is a more efficient method.</p>
<p>The $sl(2)$ example can easily be extended. For the $n$-dimensional representation $a_k$ is the coefficient of $ut^k$ in<br>
$$\frac{u-u^{-1}}{1-t\left(\frac{u^n-u^{-n}}{u-u^{-1}}\right)}$$<br>
For the case $n=3$ see <a href="http://www.oeis.org/A005043" rel="nofollow">http://www.oeis.org/A005043</a> and
<a href="http://www.oeis.org/A099323" rel="nofollow">http://www.oeis.org/A099323</a><br>
I am not aware of any references for $n\ge 4$. I don't know if these are algebraic.</p>
<p>The limitation of this method is that there is a sum over the Weyl group. This means it is impractical to implement this method for $E_8$. For the adjoint representation of $E_8$ the start of the sequence is<br>
1 0 1 1 5 16 79 421 2674 19244 156612 1423028 14320350<br>
(found using LiE)</p>
| Martin Rubey | 3,032 | <p>The first question has a simple answer: somehow calculate the first few terms of your sequence, and feed your favorite guessing machine with them. I advertise the one built into FriCAS (because its authors are Waldek Hebisch and myself):</p>
<pre>
(1) -> guessPRec [1, 0, 1, 1, 4, 10, 35, 120, 455, 1792, 7413, 31780, 140833, 641928, 3000361, 14338702, 69902535, 346939792, 1750071307, 8958993507, 46484716684, 244187539270, 1\
297395375129, 6965930587924]
(1)
[
[
f(n):
2 2
(- n - 17n - 72)f(n + 3) + (4n + 30n + 44)f(n + 2)
+
2 2
(19n + 113n + 150)f(n + 1) + (14n + 42n + 28)f(n)
=
0
,
f(0)= 1, f(1)= 0, f(2)= 1]
]
Type: List(Expression(Integer))
</pre>
<p>(shameless plug: using this program, you can just as well find algebraic differential equations, algebraic recurrence relations and certain functional equations)</p>
<p>Martin</p>
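<p>Even without FriCAS, once a candidate recurrence has been guessed it is trivial to verify it against the known data. Checking the $G_2$ recurrence quoted in the question against the 24 listed terms:</p>

```python
a = [1, 0, 1, 1, 4, 10, 35, 120, 455, 1792, 7413, 31780, 140833,
     641928, 3000361, 14338702, 69902535, 346939792, 1750071307,
     8958993507, 46484716684, 244187539270, 1297395375129,
     6965930587924]

def recurrence_holds(n):
    """(n+5)(n+6) a_n = 2(n-1)(2n+5) a_{n-1} + (n-1)(19n+18) a_{n-2}
                          + 14(n-1)(n-2) a_{n-3}"""
    lhs = (n + 5) * (n + 6) * a[n]
    rhs = (2 * (n - 1) * (2 * n + 5) * a[n - 1]
           + (n - 1) * (19 * n + 18) * a[n - 2]
           + 14 * (n - 1) * (n - 2) * a[n - 3])
    return lhs == rhs
```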
|
17,610 | <p>Let $V$ be a finite dimensional highest weight representation of a (semi)-simple Lie algebra. For each $n\ge 0$ take $a_n$ to be the dimension of the space of invariant tensors in $\otimes^n V$.</p>
<p>In certain cases there is a formula for $a_n$. For example, for $V$ the two dimensional representation of $sl(2)$ we get $a_n=0$ if $n$ is odd and for $n$ even we get the ubiquitous Catalan numbers. In general I don't expect a formula but the sequence does satisfy a linear recurrence relation with polynomial coefficients (known as D-finite).</p>
<p>For example, for the seven dimensional representation of $G_2$ this sequence starts:<br>
1, 0, 1, 1, 4, 10, 35, 120, 455, 1792, 7413, 31780, 140833, 641928, 3000361, 14338702, 69902535, 346939792, 1750071307, 8958993507, 46484716684, 244187539270, 1297395375129, 6965930587924<br>
for more background see <a href="http://www.oeis.org/A059710" rel="nofollow">http://www.oeis.org/A059710</a> </p>
<p>This satisfies the recurrence<br>
$(n+5)(n+6)a_n=2(n-1)(2n+5)a_{n-1}+(n-1)(19n+18)a_{n-2}+ 14(n-1)(n-2)a_{n-3}$</p>
<p><strong>Question</strong> How does one find these recurrence relations?</p>
<p>Then I also have a more challenging follow-up question. The space of invariant tensors in $\otimes^n V$ also has an action of the symmetric group $S_n$ and so a Frobenius character which is a symmetric function of degree $n$.</p>
<p><strong>Question</strong> How does one calculate these symmetric functions?</p>
<p>I know these can be calculated using plethysms individually. I am hoping for something along the lines of the first question.</p>
<p><strong>Further remarks</strong> David's answer solves the problem theoretically but I want to make some remarks about the practicalities. This is in case anyone wants to experiment and also because I believe there is a more efficient method.</p>
<p>The $sl(2)$ example can easily be extended. For the $n$-dimensional representation $a_k$ is the coefficient of $ut^k$ in<br>
$$\frac{u-u^{-1}}{1-t\left(\frac{u^n-u^{-n}}{u-u^{-1}}\right)}$$<br>
For the case $n=3$ see <a href="http://www.oeis.org/A005043" rel="nofollow">http://www.oeis.org/A005043</a> and
<a href="http://www.oeis.org/A099323" rel="nofollow">http://www.oeis.org/A099323</a><br>
I am not aware of any references for $n\ge 4$. I don't know if these are algebraic.</p>
<p>The limitation of this method is that there is a sum over the Weyl group. This means it is impractical to implement this method for $E_8$. For the adjoint representation of $E_8$ the start of the sequence is<br>
1 0 1 1 5 16 79 421 2674 19244 156612 1423028 14320350<br>
(found using LiE)</p>
| David E Speyer | 297 | <p>Finding the recurrence (and proving it is correct) can be done by the standard techniques for extracting the diagonal of a rational power series. </p>
<p>Let $\beta_1$, $\beta_2$, ..., $\beta_N$ be the weights of $V$. Let $\rho$ be half the sum of the positive roots and $\Delta = \sum (-1)^{\ell(w)} e^{w(\rho)}$ be the Weyl denominator. Then
$$\sum_{n=0}^{\infty} t^n \chi \left( V^{\otimes n} \right) = \frac{1}{1- \sum_{i=1}^N t e^{\beta_i}}$$
and
$$\sum_{n=0}^{\infty} t^n \dim \left( V^{\otimes n} \right)^{\mathfrak{g}} = \mbox{Coefficient of}\ e^{\rho}\ \mbox{in} \ \left( \Delta \frac{1}{1- \sum_{i=1}^N t e^{\beta_i}} \right).$$</p>
<p>For example, if $\mathfrak{g}=\mathfrak{sl}_2$ and $V$ is the two dimensional irrep, the right hand side is
$$ \mbox{Coefficient of}\ u \ \mbox{in} \left( \frac{(u-u^{-1})}{1-tu^{-1} - tu} \right)$$
which can be seen without too much trouble to be the generating function for Catalan numbers.</p>
<p>The diagonal of a rational generating function is $D$-finite by a <a href="http://www.ams.org/mathscinet-getitem?mr=929767" rel="nofollow">result of Lipshitz</a>. The particular recurrence can be found by <a href="http://www.ams.org/mathscinet-getitem?mr=647562" rel="nofollow">Sister Celine's method</a> (see theorems 10 and 11). I found these references in Stanley, <em>Enumerative Combinatorics Vol. II</em>, solution to exercise 6.61. Stanley warns that there is a gap in Zeilberger's argument, but hopefully his algorithm is right.</p>
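<p>For the $\mathfrak{sl}_2$ example the coefficient extraction has a concrete combinatorial counterpart: decomposing $V^{\otimes n}$ step by step via the standard Clebsch–Gordan rule $V_j \otimes V_1 = V_{j-1} \oplus V_{j+1}$ turns the computation into counting nonnegative lattice walks, recovering the Catalan numbers. A small dynamic program (a sketch, independent of the generating-function argument):</p>

```python
from math import comb

def invariant_dim_sl2(n):
    """Dimension of the invariant subspace of V^{(x) n}, V the 2-dim irrep.
    state[j] = multiplicity of the highest-weight-j irrep in V^{(x) i}."""
    state = {0: 1}
    for _ in range(n):
        new = {}
        for j, mult in state.items():
            for k in (j - 1, j + 1):   # V_j (x) V_1 = V_{j-1} (+) V_{j+1}
                if k >= 0:
                    new[k] = new.get(k, 0) + mult
        state = new
    return state.get(0, 0)

def catalan(m):
    return comb(2 * m, m) // (m + 1)
```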
|
572,429 | <p>Suppose $\vec{v}$ is an eigenvector of $A$ with eigenvalue $\lambda$. If $A$ is invertible, prove that $\lambda \neq 0$ and that $\vec{v}$ is also an eigenvector of $A^{-1}$. What is the corresponding eigenvalue?</p>
<p>I don't really know where to start with this one. I know that $p(0)=det(0*I_{n}-A)=det(-A)=(-1)^{n}*det(A)$, thus if both $p(0)$ and $det(0) = 0$ then $0$ is an eigenvalue of A and A is not invertible. If neither are $0$, then $0$ is not an eigenvalue of A and thus A is invertible. I'm unsure of how to use this information to prove $\vec{v}$ is also an eigenvector for $A^{-1}$ and how to find a corresponding eigenvalue.</p>
| BaronVT | 39,526 | <p>Remember that </p>
<p>If $v$ is an eigenvector of $A$ corresponding to eigenvalue $\lambda$, then
$$
Av = \lambda v.
$$</p>
<p>You want to show that, for some other number $\mu$,
$$
A^{-1}v = \mu v.
$$
So, starting with $Av = \lambda v$ again,
$$
A^{-1}A v = A^{-1} \lambda v = \lambda A^{-1} v
$$
and I'll leave the rest to you.</p>
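<p>Carrying that last line one step further gives $A^{-1}v = \frac{1}{\lambda}v$, i.e. $\mu = 1/\lambda$. A quick numerical illustration (assuming NumPy is available; the matrix is an arbitrary invertible example):</p>

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])         # symmetric, hence real eigenvalues

eigvals, eigvecs = np.linalg.eig(A)
lam = eigvals[0]
v = eigvecs[:, 0]

A_inv = np.linalg.inv(A)
# v is also an eigenvector of A^{-1}, with eigenvalue 1/lam
residual = A_inv @ v - v / lam
```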
|
575,069 | <p>Let $\mathbb R_n[x]$ the vector space of polynomials with degree less or equal $n$ and we consider the linear transformation $f$ defined by
$$\forall P\in \mathbb R_n[x]\quad f(P)=(x^2-1)P''+2xP'$$
I proved that $f$ has the spectrum
$$\mathrm{sp}(f)=\{k(k+1),\ k=0,\ldots,n\}$$
I'm stuck in this question: Prove that there's a unique basis $(P_0,\ldots,P_n)$ of $\mathbb R_n[x]$ such that:
$$\forall k=0,\ldots,n\quad P_k \ \text{is a monic polynomial with degree }\ k\ \text{which's an eigenvector of }\ f $$
Any help would be appreciated.</p>
| Lutz Lehmann | 115,115 | <p>You can compare, as said in the comments, by
<span class="math-container">$$
\frac1{n^2}<\frac1{n(n-1)}=\frac1{n-1}-\frac1n
\text{ so that } ~~
\sum_{n=1}^\infty\frac1{n^2}\le1+\sum_{n=2}^\infty \frac1{n-1}-\frac1n=2$$</span>
by telescoping the upper bound. A closer bound is
<span class="math-container">$$
\frac1{n^2}<\frac1{(n+1)(n-1)}=\frac1{2(n-1)}-\frac1{2(n+1)}
\text{ so that } ~~
\sum_{n=1}^\infty\frac1{n^2}\le1+\frac12\sum_{n=2}^\infty \frac1{n-1}-\frac1{n+1}=\frac74
$$</span>
where the telescoping "leap-frogs".</p>
<p>You could also use the construction of the Cauchy condensation test where
<span class="math-container">$$
\frac1{n^2}\le\frac1{4^k}~~\text{ for }~~ 2^k\le n < 2^{k+1} ~~
\text{ so that } ~~
\sum_{n=1}^\infty\frac1{n^2}\le\sum_{k=0}^\infty2^k\cdot \frac1{4^k}=\sum_{k=0}^\infty2^{-k}=2$$</span></p>
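<p>Both telescoped bounds are easy to see numerically (the sum is $\pi^2/6 \approx 1.6449$, safely below $7/4$):</p>

```python
import math

# partial sums increase toward pi^2/6 and never reach either bound
partial = sum(1.0 / n ** 2 for n in range(1, 100_001))
```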
|
2,075 | <p>I've come across so many posts here and on the "main" math-SE site that voice complaints, frustrations, pet-peeves, grievances, or else are critical of another post/question, user, OP, etc. It is really an energy sapper! Certainly not a boost for morale.</p>
<p>Since I'm pretty new here, and feeling a bit ambivalent about the community here, or lack thereof, I'd really like to know what keeps others here? Given all the frustrations and pet peeves, what keeps you coming back, logging in, participating, contributing?</p>
<p>I really am serious: I'd really like to know, plus I think shifting gears for a moment might help balance the (recent?) discord/tension. I'm not in a position to know whether what I perceive to be as tension and impatience, bordering on intolerance, is a "fact of life" here/ "the norm"...or if it cycles, like all growing communities do, between "better times" and "worse times"...slanting toward unity, then tilting towards discord... and individually, between feeling exhilarated and feeling near-burn-out.</p>
<p>Just thought I'd ask. It is very likely that people here are happier than they may appear. After all, I think humans are wired to notice what's amiss and what's gone wrong than we are to noting what's going well!</p>
<p>Edit: (Addendum) I am reluctant to accept a single answer; the answers and comments have been overwhelmingly supportive and informative. With respect to the "post a question"/"accept an answer" norm for math.SE, is that also the norm here on meta.SE? I sought out input from all interested users regarding the subject line of this thread; everyone is unique, and so I wouldn't even think of establishing criteria with which to evaluate one user's input/answer/comment against another. I did make a point of "upvoting" a good number of contributions, however. Thanks to all who have "chimed in," and any additional answers and comments are most certainly welcome.</p>
| svenkatr | 2,641 | <p>One big difference between Math Stackexchange and Mathoverflow is that you don't need to be a mathematician or even a math graduate student to participate and contribute to math stackexchange. I am an electrical engineering grad student and I love this community. It gives me a chance to not only learn new things every few days, but it also helps me remain sharp in the few math topics that I have some knowledge of.</p>
<p>I feel that moderation and the general management of the site is pretty good. As Qiaochu Yuan mentioned, the Signal to Noise Ratio is very high here and this partly due to the maturity of the community here. All online communities have disputes and disagreements and squabbles. Don't be too worried by those kinds of things. Take advantage of the fact that there are so many knowledgeable people here who are willing to spend their time and effort to answer math questions. This site is great simply for the wealth of knowledge that its participants have. </p>
|
804,310 | <p>The probability distribution of a discrete random variable x is
$$f (x)= \begin{pmatrix}3 \\ x \end{pmatrix} (1/4)^x (3/4)^{3-x} $$ Find the mean value of x.
Construct a cumulative distribution function for f (x).
I find $$ P(X=0) = 0.421875,\quad P(X=1) = 0.421875,\quad P(X=2) = 0.140625,\quad P(X=3) = 0.015625$$ by plugging the values into the binomial distribution.</p>
| georg | 144,937 | <p>"The essence of the probability argument is that we have 3 independent trials, and on each trial, outcome 1 occurs with probability p and some other outcome with probability 1−p. help me for finding mean value"</p>
<p>I would say that, for a single trial,
$x \in \{0,1\}$, $P(0) = 1-p$, $P(1) = p$, so $E(x) = 0\cdot(1-p)+1\cdot p = p$.</p>
<p>When $X = x_1+x_2+\cdots + x_n$, then $E(X) = E(x_1)+E(x_2)+\cdots+E(x_n)=np$ (linearity of expectation applies to both dependent and independent trials).</p>
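<p>Computing the pmf, the mean $np = 3/4$, and the CDF exactly (using <code>fractions</code> so the decimals quoted in the question can be confirmed):</p>

```python
from fractions import Fraction
from math import comb

n, p = 3, Fraction(1, 4)

pmf = {x: comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(n + 1)}
mean = sum(x * prob for x, prob in pmf.items())   # should equal n*p = 3/4

cdf, running = {}, Fraction(0)
for x in range(n + 1):
    running += pmf[x]
    cdf[x] = running                              # F(x) = P(X <= x)
```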
|
3,947,632 | <p>It is almost obvious that we have to use derivatives in order to evaluate the sum $\sum_{k=1}^{n} k^2 x^{k-1}$. However, I couldn't see this sum as the derivative of something.</p>
<p>It would be amazing if you share your thoughts on this problem. Also, I would be thankful if you would explain what was intuition behind your solution. Thanks!</p>
| Claude Leibovici | 82,404 | <p><em>An old trick !</em>
<span class="math-container">$$S=\sum_{k = 1}^{n}k^2 \, x^{k - 1}=\sum_{k = 1}^{n}\big[k(k-1)+k \big]\, x^{k - 1}$$</span> that is to say
<span class="math-container">$$S=\sum_{k = 1}^{n}k(k-1) \, x^{k - 1}+\sum_{k = 1}^{n}k \, x^{k - 1}$$</span>
<span class="math-container">$$S=x\sum_{k = 1}^{n}k(k-1) \, x^{k - 2}+\sum_{k = 1}^{n}k \, x^{k - 1}$$</span>
<span class="math-container">$$S=x \left(\sum_{k = 1}^{n}x^k \right)''+\left(\sum_{k = 1}^{n}x^k \right)'$$</span></p>
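<p>The split $k^2 = k(k-1) + k$ is easy to verify numerically: compare the sum computed directly against $x\,g''(x) + g'(x)$, with the derivatives of $g(x)=\sum_{k=1}^n x^k$ written out term by term:</p>

```python
import math

def S_direct(x, n):
    return sum(k * k * x ** (k - 1) for k in range(1, n + 1))

def S_from_derivatives(x, n):
    g1 = sum(k * x ** (k - 1) for k in range(1, n + 1))              # g'(x)
    g2 = sum(k * (k - 1) * x ** (k - 2) for k in range(2, n + 1))    # g''(x)
    return x * g2 + g1
```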
|
1,703,140 | <p>Let $n$ be composite. If $a$ is coprime to $n$ such that $a^{n-1} \equiv 1 \pmod n$ then $a$ is called a Fermat liar. If $a^{n-1} \not\equiv 1 \pmod n$, then $a$ is a Fermat witness to the compositeness of $n$.</p>
<p>Assume the witnesses or liars are themselves prime. If two Fermat liars are multiplied together their product will also be a Fermat liar. If a Fermat witness and a Fermat liar are multiplied together, the product is a Fermat witness. </p>
<p>Is it possible for the product of two Fermat witnesses to be a Fermat liar if the witnesses are themselves prime?</p>
<p>My motivation for the question is to consider what might happen if I use a product of primes as the base in a Fermat primality test rather than a single prime. My hope is that a product of primes would improve the odds of finding the correct answer with one test.</p>
| Win Vineeth | 311,216 | <p>(a)Fill the first $\lfloor\frac S9 \rfloor$ digits by $9$<br>Fill the next digit by $S\mod9$ <br>Fill the rest of the digits by $0$</p>
<p>(b)Fill the last $\lfloor\frac {S-1}9 \rfloor$ digits by $9$<br>Fill the previous digit by $S-1\mod9$ <br> Fill the rest of the digits by $0$<br> Add $1$ to the first digit.</p>
<p>Example from the comment below,
$S=15$, $N=2$</p>
<p>$\lfloor\frac {S-1}9 \rfloor = 1$</p>
<p>$S-1\mod9 = 5$</p>
<p>So, you have $59$. No remaining digits, so no $0$s. Now, the last step - add $1$ gives $69$</p>
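<p>Read as constructions of the extreme $N$-digit numbers with digit sum $S$ (my interpretation of (a) and (b); valid for $1 \le S \le 9N$), the recipes translate to:</p>

```python
def largest_with_digit_sum(N, S):
    """Recipe (a): pack the 9s at the front, remainder next, zeros after."""
    digits = [9] * (S // 9)
    if S % 9:
        digits.append(S % 9)
    digits += [0] * (N - len(digits))
    return int("".join(map(str, digits)))

def smallest_with_digit_sum(N, S):
    """Recipe (b): pack the 9s at the back, the remainder of S-1 before
    them, zeros in front, then add 1 to the first digit."""
    tail = [9] * ((S - 1) // 9)
    digits = [0] * (N - 1 - len(tail)) + [(S - 1) % 9] + tail
    digits[0] += 1
    return int("".join(map(str, digits)))
```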
|
1,087,910 | <p>Erdős proved that if $f(n)$ is a monotone increasing function from the natural numbers to the positive reals, and $f(n)$ is completely multiplicative, then there exists some constant $C$ such that $f(n)=n^C$ for all $n$.</p>
<p>I have not been able to find a nice proof of this online and was hoping someone could provide a proof.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>hint: use that $$\frac{1}{x^2(x-1)^3}=3\, \left( x-1 \right) ^{-1}-{x}^{-2}+ \left( x-1 \right) ^{-3}-2\,
\left( x-1 \right) ^{-2}-3\,{x}^{-1}
$$</p>
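<p>The decomposition checks out numerically at any point away from the poles $x=0$ and $x=1$:</p>

```python
import math

def original(x):
    return 1 / (x ** 2 * (x - 1) ** 3)

def decomposed(x):
    return (3 / (x - 1) - 1 / x ** 2 + 1 / (x - 1) ** 3
            - 2 / (x - 1) ** 2 - 3 / x)

samples = [0.5, 2.0, -1.3, 3.7]
```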
|
3,369,129 | <p>I have the following formulas: </p>
<p><span class="math-container">$$S = 4 \times \tan\left(\frac{180^\circ-a}{2}\right) - \frac{π}{90^\circ} \times (180^\circ-a)$$</span></p>
<p>where:</p>
<p><span class="math-container">$$S \gt 0$$</span></p>
<p><span class="math-container">$$0^\circ \lt a \lt 180^\circ$$</span></p>
<p>and I want to solve this for <strong>a</strong>, but I don't know how to get <strong>a</strong> out of the <strong>tan()</strong>.<br>
I tried using arctan() but with no luck.</p>
<p>Could someone help me out with it?</p>
| albert chan | 696,342 | <p>Rewrite your expression, angle in radians</p>
<p><span class="math-container">$${S\over4} + {\pi \over 2} =\cot{a\over2} +{a\over2}$$</span>
<span class="math-container">$$a_{i+1} = 2\cot^{-1}({S \over 4} + {\pi-a_i \over 2})$$</span></p>
<p>Example, with <span class="math-container">$S=10, a_0=0$</span>, using <a href="https://math.stackexchange.com/questions/760708/aitkens-extrapolation/3369554#3369554">Aitkens Extrapolation</a> </p>
<p><span class="math-container">$$Aitken(a_i) = a_{i} - {(a_{i}-a_{i-1})^2 \over a_{i} - 2a_{i-1}+a_{i-2}}$$</span></p>
<p><span class="math-container">$\begin{matrix}
i & a_i & Aitken(a_i) \cr
1 & 0.4817648629 \cr
2 & 0.5108008969 \cr
3 & 0.5126606739 & 0.5127879457\cr
4 & 0.5127884300 \cr
5 & 0.5127884612 & 0.5127884633 \cr
6 & 0.5127884633
\end{matrix}$</span></p>
<hr>
<p>We can give a better guess for above iterations.<br>
<span class="math-container">$$S = 4 \cot{a\over2} - 2(\pi-a) = {8 \over a} - 2\pi + {4a \over 3} - {a^3 \over 90}+ O(a^5)$$</span></p>
<p>Let <span class="math-container">$b = \large \frac{3}{8}\normalsize (S+2\pi)$</span>
<span class="math-container">$$b = {3\over a} + {a \over 2} - {a^3 \over 240} + O(a^5)$$</span>
Drop <span class="math-container">$O(a^3)$</span>, and solve the quadratic:</p>
<p><span class="math-container">$$a ≈ \frac{6}{b + \sqrt{b^2-6}}$$</span></p>
<p>For <span class="math-container">$S=10,\quad a ≈ 0.51284 02951,\quad error ≈ 0.010\%$</span> </p>
<p>We can improve the guess by re-using above formula, with a new <span class="math-container">$b$</span>.<br>
<span class="math-container">$$b ← b + {1\over240}\left( \frac{6}{b + \sqrt{b^2-6}} \right)^3$$</span></p>
<p>For <span class="math-container">$S=10,\quad a ≈ 0.51278 87723,\quad error ≈ 0.000060\%$</span> </p>
<p>Note: the guess goes complex if <span class="math-container">$S < 8\sqrt{2\over3} - 2\pi ≈ 0.24879$</span></p>
<p>For small S, I use @Lutz setup, but with <span class="math-container">$\tan(x)$</span> <a href="http://functions.wolfram.com/ElementaryFunctions/Tan/10/" rel="nofollow noreferrer">continued fraction representation</a>. </p>
<p><span class="math-container">$${S \over 4} ≈ \large{x \over 1- {x^2 \over 3- {x^2\over5}}}-x =
\large {5x^3 \over 15-6x^2}$$</span>
<span class="math-container">$$x^3 ≈ S({3\over4}-{3\over10}x^2)$$</span>
Since small S also meant small x, we let <span class="math-container">$t={3\over4}S$</span>, and keep approximating ...</p>
<p>1st approximation: <span class="math-container">$x ≈ \sqrt[3]{t(1-\require{cancel} \cancel{{2\over5}x^2)}} ≈ \sqrt[3]{t}$</span><br>
2nd approximation: <span class="math-container">$x ≈ \sqrt[3]{t(1-{2\over5}(\sqrt[3]t)^2}$</span><br>
3rd approximation: <span class="math-container">$x ≈ \sqrt[3]{t} - {2 \over 15}t \require{cancel} \cancel{-{4\over225}t^{5/3} +\;...}$</span><br>
<span class="math-container">$$a = \pi-2x ≈ \pi - \sqrt[3]{6S} + {S \over 5}$$</span></p>
<hr>
<p>By trial and error, I found a formula that is even better than solving the cubic exactly!<br>
For small <span class="math-container">$S$</span>, let <span class="math-container">$b = \sqrt[3]{6S}$</span></p>
<p><span class="math-container">$$a ≈ \pi - {30b \over 30 + b^2}$$</span></p>
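<p>A sketch of the basic fixed-point iteration in Python (angles in radians; $\operatorname{arccot} x$ written as $\arctan(1/x)$, valid here since the argument stays positive):</p>

```python
import math

def solve_a(S, a0=0.0, tol=1e-12, max_iter=500):
    """Iterate a <- 2*arccot(S/4 + (pi - a)/2)."""
    a = a0
    for _ in range(max_iter):
        new = 2 * math.atan(1 / (S / 4 + (math.pi - a) / 2))
        if abs(new - a) < tol:
            return new
        a = new
    return a

def S_of_a(a):
    """The original relation, for checking the root."""
    return 4 / math.tan(a / 2) - 2 * (math.pi - a)
```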
|
3,744,582 | <p>I was told that</p>
<blockquote>
<p>Two permutation matrices represent conjugate permutations iff they have same characteristic polynomial (where the conjugacy is considered only in <span class="math-container">$S_{n}$</span>).</p>
</blockquote>
<p>The first implication is clear to me, i.e. permutation matrices representing conjugate permutations, being similar matrices, have the same characteristic polynomial.
But I do not understand why it is necessary that two permutation matrices with the same characteristic polynomial represent conjugate permutations. I was told to look at Newton's identities, but I do not see anything in them relating the characteristic polynomial to permutation matrices.</p>
<p>Anyhow, I worked it out and have written an answer below. But I would really like to know whether there exists a proof that uses Newton's identities or algebraic manipulations.</p>
| ImBatman | 764,333 | <p>We'll first prove that the characteristic polynomial of a permutation matrix representing a permutation <span class="math-container">$\pi\in S_{n}$</span> having the cycle structure <span class="math-container">$1^{c_{1}}2^{c_{2}}...n^{c_{n}}$</span> is of the form <span class="math-container">$\prod_{j=1}^{n}(x^{j}-1)^{c_{j}}$</span>.</p>
<p>The eigenvalues of any permutation matrix representing the cycle structure <span class="math-container">$1^{c_{1}}2^{c_{2}}...n^{c_{n}}$</span> are the <span class="math-container">$i$</span>th roots of unity, counted with multiplicity <span class="math-container">$c_{i}$</span> (where <span class="math-container">$i$</span> runs over the cycle lengths), as shown <a href="https://math.stackexchange.com/questions/3168674/how-to-find-the-eigenvalues-of-permutation-matrices?noredirect=1&lq=1">here</a>. Since eigenvalues are the roots of the characteristic polynomial, the polynomial corresponding to these eigenvalues is clearly <span class="math-container">$\prod_{j=1}^{n}(x^{j}-1)^{c_{j}}$</span>.</p>
<p>We'll prove that such representation is unique. Before that let's state a fact we'll use</p>
<blockquote>
<p>The primitive <span class="math-container">$n$</span>th root of unity <span class="math-container">$\alpha = \cos(\frac{2\pi}{n})+i\sin(\frac{2\pi}{n})$</span> satisfies <span class="math-container">$x^{k}-1 = 0$</span> exactly for <span class="math-container">$k=nt$</span> with <span class="math-container">$t\in \Bbb N$</span>; in particular, it does not satisfy it for <span class="math-container">$0\lt k\lt n$</span>.</p>
</blockquote>
<p>Coming to the proof, suppose <span class="math-container">$XP_{1}$</span> and <span class="math-container">$XP_{2}$</span> have the same roots of unity counted with multiplicities but differ in their representations, i.e.,
<span class="math-container">$$XP_{1} = \prod_{j=1}^{n}(x^{j}-1)^{c_{j}}, \qquad XP_{2} = \prod_{j=1}^{n}(x^{j}-1)^{d_{j}}$$</span> Let <span class="math-container">$j_{1}$</span> be the greatest <span class="math-container">$j$</span> such that <span class="math-container">$c_{j}\neq d_{j}$</span>. By our assumption and the fact stated above, the contribution to the multiplicity of the <span class="math-container">$j_{1}$</span>th root of unity <span class="math-container">$\alpha = \cos(\frac{2\pi}{j_{1}})+i\sin(\frac{2\pi}{j_{1}})$</span> in <span class="math-container">$XP_{1}$</span> and <span class="math-container">$XP_{2}$</span> is:<br />
<span class="math-container">$1$</span>. from <span class="math-container">$(x^{j}-1)^{c_{j}}$</span> with <span class="math-container">$j\gt j_{1}$</span>: the same in both, since <span class="math-container">$c_{j}=d_{j}$</span> there;<br />
<span class="math-container">$2$</span>. from <span class="math-container">$(x^{j}-1)^{c_{j}}$</span> with <span class="math-container">$j\lt j_{1}$</span>: zero;<br />
<span class="math-container">$3$</span>. from <span class="math-container">$(x^{j}-1)^{c_{j}}$</span> with <span class="math-container">$j=j_{1}$</span>: <span class="math-container">$c_{j_{1}}$</span> in one and <span class="math-container">$d_{j_{1}}$</span> in the other, which differ.</p>
<p>But this means the multiplicities of <span class="math-container">$\alpha$</span> in <span class="math-container">$XP_{1}$</span> and <span class="math-container">$XP_{2}$</span> differ, a contradiction. Hence <span class="math-container">$c_{j} = d_{j} \space \forall j\in \Bbb N$</span>.</p>
<p>Now if two permutation matrices have the same characteristic polynomial, they must correspond to permutations with the same cycle type, and permutations with the same cycle type are conjugate in <span class="math-container">$S_{n}$</span>.</p>
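<p>As a sanity check on this equivalence (my own sketch, not part of the original answer), one can verify exhaustively on <span class="math-container">$S_{4}$</span> that two permutation matrices have the same characteristic polynomial exactly when the permutations have the same cycle type:</p>

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    """Permutation matrix M with M[i, p[i]] = 1, for a 0-indexed permutation."""
    n = len(p)
    M = np.zeros((n, n))
    M[np.arange(n), p] = 1.0
    return M

def charpoly(p):
    """Integer coefficients of the characteristic polynomial, highest degree first."""
    return tuple(np.round(np.real(np.poly(perm_matrix(p)))).astype(int))

def cycle_type(p):
    """Sorted tuple of cycle lengths of a 0-indexed permutation."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

# Same characteristic polynomial <=> same cycle type, checked on all of S_4.
for p in permutations(range(4)):
    for q in permutations(range(4)):
        assert (charpoly(p) == charpoly(q)) == (cycle_type(p) == cycle_type(q))
```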
|
3,744,582 | <p>I was told that</p>
<blockquote>
<p>Two permutation matrices represent conjugate permutations iff they have the same characteristic polynomial (where conjugacy is considered only in <span class="math-container">$S_{n}$</span>).</p>
</blockquote>
<p>The first implication is clear to me, i.e., permutation matrices representing conjugate permutations, being similar matrices, have the same characteristic polynomial.
But I do not understand why it is necessary that if two permutation matrices have the same characteristic polynomial, they should represent conjugate permutations. I was told to see Newton's identities, but I do not see anything in them relating characteristic polynomials to permutation matrices.</p>
<p>Anyhow, I did it and have written an answer. But I would really like to know whether there exists a proof that uses Newton's identities or algebraic manipulations.</p>
| Eric Wofsey | 86,856 | <p>If a matrix <span class="math-container">$A$</span> has eigenvalues <span class="math-container">$\lambda_1,\dots,\lambda_n$</span> (listed with algebraic multiplicity), then <span class="math-container">$A^k$</span> has eigenvalues <span class="math-container">$\lambda_1^k,\dots,\lambda_n^k$</span>, and so <span class="math-container">$\operatorname{tr}(A^k)=\sum_i\lambda_i^k.$</span> By Newton's identities, <span class="math-container">$\sum_i\lambda_i^k$</span> can be expressed in terms of the elementary symmetric polynomials in the <span class="math-container">$\lambda_i$</span>, which are just (up to sign) the coefficients of the characteristic polynomial of <span class="math-container">$A$</span>.</p>
<p>The upshot is that if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> have the same characteristic polynomial, then <span class="math-container">$\operatorname{tr}(A^k)=\operatorname{tr}(B^k)$</span> for all <span class="math-container">$k$</span>. Now if <span class="math-container">$A$</span> is a permutation matrix corresponding to a permutation <span class="math-container">$\pi$</span>, then <span class="math-container">$\operatorname{tr}(A^k)$</span> is just the number of fixed points of <span class="math-container">$\pi^k$</span>. So, it suffices to show that if <span class="math-container">$\pi,\rho\in S_n$</span> are such that <span class="math-container">$\pi^k$</span> and <span class="math-container">$\rho^k$</span> have the same number of fixed points for each <span class="math-container">$k$</span>, then <span class="math-container">$\pi$</span> and <span class="math-container">$\rho$</span> have the same cycle structure. To show this, let <span class="math-container">$a_k$</span> be the number of <span class="math-container">$k$</span>-cycles in <span class="math-container">$\pi$</span> and let <span class="math-container">$b_k$</span> be the number of <span class="math-container">$k$</span>-cycles in <span class="math-container">$\rho$</span>. Note then that the number of fixed points of <span class="math-container">$\pi^k$</span> is <span class="math-container">$\sum_{d\mid k}da_d$</span> and the number of fixed points of <span class="math-container">$\rho^k$</span> is <span class="math-container">$\sum_{d\mid k}db_d$</span>. We know these are equal, and using strong induction on <span class="math-container">$k$</span> we may assume that <span class="math-container">$a_d=b_d$</span> for every proper divisor <span class="math-container">$d$</span> of <span class="math-container">$k$</span>. It follows that <span class="math-container">$ka_k=kb_k$</span> and thus <span class="math-container">$a_k=b_k$</span>.</p>
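<p>The inductive recovery of the cycle counts <span class="math-container">$a_k$</span> from the fixed-point counts of the powers can be carried out mechanically. Here is a small sketch (my own illustration, with permutations as 0-indexed tuples) that checks it against a direct cycle count on all of <span class="math-container">$S_{5}$</span>:</p>

```python
from itertools import permutations

def fixed_points(p, k):
    """Number of fixed points of the k-th power of permutation p (0-indexed tuple)."""
    def power(i, k):
        for _ in range(k):
            i = p[i]
        return i
    return sum(power(i, k) == i for i in range(len(p)))

def cycle_counts_from_traces(p):
    """Recover a_k from fix(p^k) = sum_{d|k} d*a_d by strong induction on k."""
    n = len(p)
    a = {}
    for k in range(1, n + 1):
        lower = sum(d * a[d] for d in range(1, k) if k % d == 0)
        a[k] = (fixed_points(p, k) - lower) // k
    return a

def cycle_counts_direct(p):
    """Count the cycles of each length directly."""
    seen, a = set(), {k: 0 for k in range(1, len(p) + 1)}
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            a[length] += 1
    return a

# The two computations agree on all of S_5.
for p in permutations(range(5)):
    assert cycle_counts_from_traces(p) == cycle_counts_direct(p)
```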
|
<p>As a TA who led calculus* 1 and 2 discussion sections and held office hours** last year, I heard the following (wrong) arguments several times.</p>
<blockquote>
<ol>
<li><p><span class="math-container">$\displaystyle \lim_{x\to \infty} \sqrt{x+1}-\sqrt{x}=0$</span> because <span class="math-container">$\infty-\infty=0$</span>.</p>
</li>
<li><p><span class="math-container">$\displaystyle \lim_{x\to \infty} x^{1/x}=1$</span> because <span class="math-container">$\infty^0=1$</span>.</p>
</li>
<li><p><span class="math-container">$\int_1^{\infty}f(x)\,dx$</span> and <span class="math-container">$\int_1^{\infty}g(x)\,dx$</span> both diverge, so <span class="math-container">$\int_1^{\infty}(f(x)+g(x))\,dx$</span> diverges.</p>
</li>
</ol>
</blockquote>
<p>I usually explain that these arguments are not true in general by providing a (very trivial) counterexample, for example:</p>
<blockquote>
<ol>
<li><p><span class="math-container">$\displaystyle \lim_{x\to \infty} f(x)=\infty$</span> and <span class="math-container">$\displaystyle \lim_{x\to \infty} g(x)=\infty$</span> does not guarantee <span class="math-container">$\displaystyle \lim_{x\to \infty} f(x)-g(x)=0$</span>, for example, <span class="math-container">$f(x)=x+1$</span> and <span class="math-container">$g(x)=x$</span>.</p>
</li>
<li><p><span class="math-container">$\displaystyle \lim_{x\to \infty} f(x)=\infty$</span> and <span class="math-container">$\displaystyle \lim_{x\to \infty} g(x)=0$</span> does not guarantee <span class="math-container">$\displaystyle \lim_{x\to \infty} f(x)^{g(x)}=1$</span>, for example, <span class="math-container">$f(x)=2^x$</span> and <span class="math-container">$g(x)=1/x$</span>.</p>
</li>
<li><p>False in general, for example <span class="math-container">$f(x)=-g(x)=1$</span></p>
</li>
</ol>
</blockquote>
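<p>To make the contrast concrete, here is a quick numerical probe of the three counterexamples (my own addition, not from the original post):</p>

```python
import math

x = 500.0  # a moderately large sample point

# 1. Both x+1 and x blow up, but their difference is the constant 1, not 0 ...
assert (x + 1.0) - x == 1.0
# ... while sqrt(x+1) - sqrt(x) genuinely shrinks toward 0.
assert math.sqrt(x + 1.0) - math.sqrt(x) < 0.03

# 2. f(x)^{g(x)} with f(x) = 2^x and g(x) = 1/x equals 2 for every x, not 1.
assert abs((2.0 ** x) ** (1.0 / x) - 2.0) < 1e-9

# 3. With f(x) = 1 and g(x) = -1, each improper integral diverges, yet
#    f + g = 0, whose integral over [1, oo) converges (to 0).
assert 1.0 + (-1.0) == 0.0
```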
<p>After giving explanations like that, I sometimes heard "But in your examples you can cancel the expression/formula..." and I was not sure how to continue. I tried the following methods; none of them seems to work very well.</p>
<p>a. Provide a much more complicated counterexample that requires a few minutes of calculation to get the answer. This often leads to further confusion.</p>
<p>b. Just say that is the wrong way to do it. It sounds like "I'm the teacher, so believe me," and doesn't help much.</p>
<p>c. Show them the correct way to do their problems. This is almost like b (Why is your way the right way and mine is the wrong way?).</p>
<p>I'm looking for a better way to deal with questions like these.</p>
<p>*<span class="math-container">$\epsilon-\delta$</span> definition is not introduced.
** Office hour is in tutoring center where I'm also responsible for students take the class from the professors I'm not TA'ing for.</p>
| Daniel Hast | 3,505 | <p>Often in courses like this, the problem is that students are almost exclusively asked to do problems that involve rote computation using procedures they've been taught, not problems that require reasoning about the concepts involved or even constructing their own examples. So, how about assigning problems that require students to come up with examples refuting various false notions like the ones mentioned? (Or at least discuss problems like these in class.)</p>
<p>I'm thinking of problems like:</p>
<ul>
<li>"Give an example of functions $f$ and $g$ such that $\lim_{x \to \infty} f(x) = \infty$, $\lim_{x \to \infty} g(x) = 0$, and $\lim_{x \to \infty} f(x)^{g(x)} = 5$."</li>
<li>"Give an example of functions $f$ and $g$ such that $\lim_{x \to \infty} f(x) = \infty$, $\lim_{x \to \infty} g(x) = \infty$, and $\lim_{x \to \infty} (f(x) - g(x))$ does not exist."</li>
</ul>
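<p>For instance (hypothetical solutions of my own, not from the answer), each exercise admits a simple closed-form answer that a numerical check confirms:</p>

```python
import math

x = 1e6  # a large sample point

# Exercise 1: f(x) = x -> oo, g(x) = ln(5)/ln(x) -> 0, and f(x)^g(x) = 5 exactly,
# since x^(ln 5 / ln x) = e^(ln 5) = 5.
f, g = x, math.log(5) / math.log(x)
assert abs(f ** g - 5.0) < 1e-9

# Exercise 2: f(x) = x + sin(x) -> oo and g(x) = x -> oo, but
# f(x) - g(x) = sin(x) oscillates forever, so the limit does not exist.
assert abs((x + math.sin(x)) - x - math.sin(x)) < 1e-6
```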
|