| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,414,472 | <blockquote>
<p>Let $(a_n)_{n\geq2}$ be a sequence defined as
$$
a_2=1,\qquad a_{n+1}=\frac{n^2-1}{n^2}a_n.
$$
Show that
$$
a_n=\frac{n}{2(n-1)},\quad\forall n\geq2
$$
and determine $\lim_{n\rightarrow+\infty}a_n$.</p>
</blockquote>
<p>I cannot show that $a_n$ is $\frac{1}{2}\frac{n}{n-1}$. Any help?</p>
<p>Thank you!</p>
| Michael Rozenberg | 190,319 | <p>Let $b_{n}=\frac{(n-1)a_n}{n}$.</p>
<p>Thus, $b_{n+1}=b_n$ and since $b_2=\frac{2-1}{2}a_2=\frac{1}{2}$, we are done!</p>
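As a quick numerical sanity check of the closed form (an illustrative snippet, not part of the original answer), one can iterate the recurrence and compare against $a_n = \frac{n}{2(n-1)}$:

```python
# Verify a_n = n / (2(n-1)) by iterating a_{n+1} = ((n^2 - 1)/n^2) a_n from a_2 = 1.
a = 1.0  # a_2
for n in range(2, 50):
    assert abs(a - n / (2 * (n - 1))) < 1e-12   # closed form holds at each step
    a *= (n * n - 1) / (n * n)                  # advance to a_{n+1}
```

The values visibly approach the limit $\tfrac12$ as $n$ grows.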
|
886,003 | <p>I have two questions:</p>
<p><strong>A)</strong> Suppose that we have $$Z=c\sum_i (X_i-a)(Y_i-b)$$ where $X_i$s and $Y_i $s are independent exponential random variables with means equal to $\mu_{X}$ and $\mu_{Y}$ (for $1\le i\le n$). That is $X_i$s are i.i.d random variables and so are $Y_i $s. Besides, $a,b$ and $c$ are real numbers. I want to find the distribution of $Z$ for large enough $n$. </p>
<p>I used central limit theorem (CLT) and found the distribution of Z. I calculated the mean and variance as follows:
$$E(Z)=\sum_i c(E(X_i)-a)(E(Y_i)-b)$$
using delta method, I estimated the variance as follows:
$$Var(Z)=c^2\sum_i (E(X_i)-a)^2 Var(Y_i)+Var(X_i)(E(Y_i)-b)^2$$
To check if it is correct, I used MATLAB. I considered $n$ to be $200$. I varied $\mu_{X}$ and $\mu_{Y}$ from $1$ to $4$ $({1,2,3,4})$. To simplify my calculation I considered $\mu_{X}=\mu_{Y}$. For each random variable, I created $1,000,000$ samples and calculated PDF of $Z$. But when I compare this PDF with the one I found using CLT they are different! I cannot understand where I made mistake! </p>
<p><strong>B)</strong> Another question is that if we have $$Z=c\sum_i (a_i+X_i-a)(b_i+Y_i-b)$$ where $a_i$ and $b_i$ are real numbers. Can I still use CLT to find the PDF of Z?
$$E(Z)=\sum_i c(E(X_i)+a_i-a)(E(Y_i)+b_i-b)$$
using delta method, I estimated the variance as follows:
$$Var(Z)=c^2\sum_i (E(X_i)+a_i-a)^2 var(Y_i)+var(X_i)(E(Y_i)+b_i-b)^2$$ </p>
<p>(again I tested the correctness using MATLAB and faced the same problem!)</p>
<p>I would appreciate if you could help me.</p>
| StephanieCoding | 377,265 | <p>I don't agree with the formula you derived for <span class="math-container">$var(Z)$</span>. </p>
<p>If <span class="math-container">$Z = c \sum_i (X_i - a)(Y_i - b)$</span> where <span class="math-container">$X_i$</span> are i.i.d from distribution 1 and <span class="math-container">$Y_i$</span> are i.i.d from distribution 2, and <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> independent, then
<span class="math-container">$$E(Z) = c \sum_i (E(X_i) - a)(E(Y_i) - b)$$</span> </p>
<p><span class="math-container">$$VAR(Z) = E(Z^2) - (E(Z))^2 \\
= c^2E\left[\Big(\sum_i(X_i-a)(Y_i-b)\Big)^2\right] - \Big(c \sum_i (E(X_i) - a)(E(Y_i) - b)\Big)^2 \\
= c^2\sum_iE[(X_i-a)^2]\, E[(Y_i-b)^2] - c^2 \sum_i (E(X_i) - a)^2 (E(Y_i) - b)^2 \\
= c^2\sum_i(var(X_i) + (E(X_i) - a)^2)(var(Y_i) + (E(Y_i) - b)^2) - c^2 \sum_i (E(X_i) - a)^2 (E(Y_i) - b)^2 \\
= c^2\sum_i var(X_i) var(Y_i) + c^2\sum_i(E(X_i) - a)^2var(Y_i) + c^2\sum_i var(X_i)(E(Y_i) - b)^2 $$</span>
(the <span class="math-container">$i\ne j$</span> cross terms in the expanded square cancel against those in <span class="math-container">$(E(Z))^2$</span> by independence).</p>
<p>In the simplest case that <span class="math-container">$a = b= 0$</span>,
<span class="math-container">$$ E(Z) = c \sum_i E(X_i) E(Y_i)$$</span>
<span class="math-container">$$ VAR(Z) = c^2\sum_i var(X_i) var(Y_i) + c^2\sum_i E(X_i)^2var(Y_i) + c^2\sum_i var(X_i)E(Y_i)^2 $$</span></p>
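A quick Monte-Carlo check of these formulas (an illustrative snippet, not part of the original answer; the parameter values are arbitrary, and for an exponential variable the variance equals the squared mean):

```python
import random

random.seed(0)
n, c, a, b = 5, 1.0, 0.5, 1.5
mx, my = 2.0, 3.0                      # exponential means; var(X_i) = mx^2, var(Y_i) = my^2

# Theoretical moments from the formulas above.
EZ = c * n * (mx - a) * (my - b)
VZ = c**2 * n * (mx**2 * my**2 + (mx - a)**2 * my**2 + mx**2 * (my - b)**2)

trials = 200_000
total = total_sq = 0.0
for _ in range(trials):
    z = c * sum((random.expovariate(1 / mx) - a) * (random.expovariate(1 / my) - b)
                for _ in range(n))
    total += z
    total_sq += z * z
mean = total / trials
var = total_sq / trials - mean * mean   # empirical mean/variance agree with EZ, VZ
```

With these values $EZ = 11.25$ and $VZ = 326.25$; the empirical estimates land within sampling error of both.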
|
2,569,267 | <p><a href="https://gowers.wordpress.com/2011/10/16/permutations/" rel="nofollow noreferrer">This</a> article claims:</p>
<blockquote>
<p>we simply replace the number 1 by 2, the number 2 by 4, and the number 4 by 1</p>
<p>....I start with the numbers arranged as follows: 1 2 3 4 5 6. After doing the permutation (124) the numbers are arranged as 2 4 3 1 5 6.</p>
</blockquote>
<p>I always thought <span class="math-container">$(124)$</span> was read left to right as "1 goes to 2, 2 goes to 4, and 4 goes to 1" and therefore the outcome should be 4, 1, 3, 2, 5, 6.</p>
<p>According to my understanding, the article did the permutation reading from right to left. Is the blog following a convention of reading right to left, or do I just have it wrong?</p>
| Community | -1 | <p>The third paragraph states that: </p>
<blockquote>
<p>$\ldots$ If we want to apply the permutation $(124)$, we simply replace the number $1$ by $2$, the number $2$ by $4$, and the number $4$ by $1$ $\ldots$.</p>
</blockquote>
<p>Following the rule (reading from left to right), gives us the required result.</p>
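The blog's replacement rule is easy to test directly (a small illustrative snippet, not part of the original answer):

```python
# Apply the cycle (1 2 4) as "replace 1 by 2, 2 by 4, 4 by 1"
# to the arrangement 1 2 3 4 5 6, exactly as the blog describes.
replace = {1: 2, 2: 4, 4: 1}
arrangement = [1, 2, 3, 4, 5, 6]
result = [replace.get(k, k) for k in arrangement]
# result == [2, 4, 3, 1, 5, 6], matching the article
```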
|
<p>I saw this a lot in physics textbooks, but today I am curious about it and want to know if anyone can show me a formal mathematical proof of this statement. Thanks!</p>
| ftfish | 84,805 | <p>Consider $\lim_{x\to 0} \frac{\tan x}{x}$ and apply the <a href="http://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule" rel="nofollow">L'Hôpital's rule</a>.</p>
<p>You'll get a limit of $1$ as $x \to 0$, proving the statement (relative error).</p>
<p>You can even get a bound on the error term with <a href="http://en.wikipedia.org/wiki/Taylor%27s_series" rel="nofollow">Taylor's expansion</a> of $\tan x$, which is $\mathcal{O}(x^3)$ (absolute error).</p>
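A quick numerical illustration of both error claims for the small-angle approximation $\tan x \approx x$ (an illustrative snippet, not part of the original answer):

```python
import math

# tan(x)/x -> 1 as x -> 0, and tan(x) = x + x^3/3 + O(x^5),
# so the relative error is ~x^2/3 and the absolute error ~x^3/3.
for x in [0.1, 0.01, 0.001]:
    assert abs(math.tan(x) / x - 1) < x * x        # relative error shrinks like x^2
    assert abs(math.tan(x) - x) < 0.34 * x ** 3    # absolute error is O(x^3)
```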
|
281,288 | <p><img src="https://i.imgur.com/0C2Jl.jpg" alt="curved line graph"></p>
<p>In this curved line graph, I need to be able to make a formula that can tell me the interpolated value at any point on the curved path given one Data input.</p>
<p>So, for example, if I wanted to know the value of the line exactly halfway between Point 2 and Point 3, I could eyeball it and say it would be somewhere around 3.0, and to get it more exact I could use a ruler and some math. But that is the long way, finding one point on the curve at a time. Is there a generic formula that can take an arbitrary curved path with a set of known points (a curved path with no real pattern, just a collection of splines with known points to interpolate between) and turn it into a mathematical formula where you input a Data 1 value and it spits out the Data 2 value of the curve, or vice versa?</p>
<p>For example, <br>
Input Data 1 to math formula = Point 2.5<br>
Data 2 = [Computed by math formula] 3.0</p>
<p>or</p>
<p>Input Data 2 to math formula = 3.0<br>
Data 1 = [Computed by math formula] Point 2.5</p>
<p>Just need the method to develop the math formula!</p>
| Luke Allen | 31,876 | <p>I have realized this is impossible for my scenario. The actual application of this would be for a graph with hundreds of points, all using splines, not Lagrange polynomial curves, which curve the line differently than splines somewhat, throwing off the accuracy which is all-important. Even using spline interpolation in Mathematica with dozens of points wasn't accurate enough. The only way to do this that I have found is by using AutoCAD (or some other exact distance measurement) on the graph to interpolate as exactly as possible (to about 3 decimal places). </p>
<p>A formula would be rather impractical, however, a way to take a curved line/set of curved lines in AutoCAD and create 100,000 points on it evenly spaced could be possible in the future. You could then export the 100,000 points to something like Microsoft Access database as a table, then create a query that would find the nearest point out of the 100,000 points to whatever Data 1 or Data 2 you input, but I don't have a clue how to do this in practice.</p>
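The nearest-point lookup described above can be sketched in a few lines (hypothetical code, not part of the original answer; the table of sampled points is a stand-in for an exported point list, here taken from the curve $y=x^2$):

```python
import bisect

# Dense table of (data1, data2) points along the curve, sorted by data1.
table = [(i / 1000.0, (i / 1000.0) ** 2) for i in range(5001)]  # stand-in curve
xs = [p[0] for p in table]

def lookup(x):
    """Return the tabulated point whose data1 value is nearest to x."""
    i = bisect.bisect_left(xs, x)          # binary search instead of a linear scan
    i = min(max(i, 1), len(xs) - 1)
    # pick whichever neighbour is closer to x
    return table[i] if abs(xs[i] - x) < abs(xs[i - 1] - x) else table[i - 1]
```

For instance `lookup(2.5)` returns `(2.5, 6.25)`; a database query doing the same "nearest point" search would behave identically, just slower.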
|
79,658 | <blockquote>
<p>Let $U$ and $W$ be subspaces of an inner product space $V$. If $U$ is a subspace
of $W$, then $W^{\bot}$ is a subspace of $U^{\bot}$?.</p>
</blockquote>
<p>I don't find the above statement intuitively obvious. Could someone provide a proof?</p>
| wildildildlife | 6,490 | <p>It should be intuitive, already at the level of logic:</p>
<p>To be in $W^\perp$, you have to satisfy a certain condition $P(w)$ (namely: 'be orthogonal to $w$') for each and every element $w\in W$. </p>
<p>So given a subset $U\subseteq W$, to be in $U^\perp$ means you have to satisfy $P(w)$ <em>merely</em> for all $u\in U$. </p>
<p>Thus you have to satisfy <em>less</em> properties to be in $U^\perp$, thus it is <em>easier</em> to be in $U^\perp$, thus $U^\perp$ is <em>larger</em>: $W^\perp\subseteq U^\perp$.</p>
<p>It should also be intuitive geometrically: consider $\mathbb{R}^3$, let $U$ be the $x$-axis, and $W$ the $x,y$-plane. Then $U^\perp$ is the $y,z$-plane, and $W^\perp$ is the $z$-axis.</p>
<p>//Edit: I was slow, so I missed Jonas Meyer's edit, which kind of makes my answer redundant.</p>
|
2,031,699 | <p>Let $A,B$ be open subsets of $\mathbb{R}^n$. </p>
<p>Does the following equality hold?</p>
<p>$$\partial(A\cap B)= (\bar A \cap \partial B) \cup (\partial A \cap \bar B)$$</p>
<p>Edit: Thanks for showing me in the answers that above formula fails if $A$ and $B$ are disjoint but their boundaries still intersect. I was able to come up with a similar formula which avoids this case
$$[\partial(A\cap B)]\setminus(\partial A \cap \partial B)= (A \cap \partial B) \cup (\partial A \cap B),$$
which I was able to prove and suffices for what I need to do.</p>
<p>However, when showing that $ (A \cap \partial B) \cup (\partial A \cap B)\subseteq \partial(A\cap B)$, I needed to assume that the topology is induced by a metric. I wonder if the formula still holds in an arbitrary topological space.</p>
| DanielWainfleet | 254,665 | <p>If $A$ is dense and co-dense in the non-empty space $X$ (that is, $X\setminus A$ is also dense in $X$), suppose $B=X\setminus A.$ Then $\emptyset=A\cap B=\partial (A\cap B)$ but $\bar A=\bar B=\partial A=\partial B=X\ne \emptyset.$</p>
<p>For example, with $X= \mathbb R^n$ let $A$ be the set of points with rational co-ordinates.</p>
|
3,328,822 | <blockquote>
<p>How do I evaluate <span class="math-container">$$\displaystyle\int^{\infty}_0 \exp\left[-\left(4x+\dfrac{9}{x}\right)\right] \sqrt{x}\;dx?$$</span> </p>
</blockquote>
<p>To my knowledge the following integral should be related to the Gamma function.</p>
<p>I have tried using the substitution <span class="math-container">$t^2 = x$</span>, and I got
<span class="math-container">$$
2e^{12}\displaystyle \int^{\infty}_0 \exp\left[-\left(2t + \dfrac{3}{t}\right)^2\right] t^2 \; dt
$$</span>
after substitution. But it seems like I can do nothing about this integral anymore. Can anyone kindly give me a hint, or guide me to the answer?</p>
| Zacky | 515,527 | <p>It looks like a tricky integral, however Feynman's trick deals with it nicely.
<span class="math-container">$$I=\int^{\infty}_0 \exp\left(-\left(4x+\dfrac{9}{x}\right)\right) \sqrt{x}dx\overset{\sqrt x\to x}=2\int_0^\infty \exp\left(-\left(4x^2+\frac{9}{x^2}\right)\right)x^2 dx$$</span>
Now consider the following integral:
<span class="math-container">$$I(t)=2\int_0^\infty \exp\left(-\left(4x^2+\frac{t}{x^2}\right)\right)x^2 dx$$</span>
The reason why I'm putting the parameter in that place is because if <span class="math-container">$x^2$</span> is simplified then the integral becomes much easier. So let's take a derivative with respect to <span class="math-container">$t$</span> in order to get:
<span class="math-container">$$ I'(t)=-2\int_0^\infty \exp\left(-\left(4x^2+\frac{t}{x^2}\right)\right) dx=-\frac{\sqrt \pi}{2}e^{-4\sqrt t}$$</span>
The above result follows using <a href="https://arxiv.org/abs/1004.2445" rel="nofollow noreferrer">the Cauchy-Schlomilch transformation</a> (see <span class="math-container">$3.3$</span>).</p>
<p>I think that you are on the right track now; basically the remaining steps are to see that:
<span class="math-container">$$I(0)=\frac{\sqrt \pi}{16}\Rightarrow I=I(9)=I(0)+\int_0^9 I'(t)\,dt=\frac{\sqrt\pi}{16}-\frac{\sqrt \pi}2 \int_0^9e^{-4 \sqrt t}\,dt=\boxed{\frac{13\sqrt \pi}{16e^{12}}}$$</span></p>
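A numerical sanity check of this evaluation (illustrative code, not from the original answer). Note the $e^{12}$ in the denominator, which comes from evaluating the antiderivative of $e^{-4\sqrt t}$ at $t=9$:

```python
import math

# Check I = 13*sqrt(pi)/(16*e^12) against a composite Simpson's rule quadrature.
def f(x):
    return math.exp(-(4 * x + 9 / x)) * math.sqrt(x)

a, b, n = 1e-9, 30.0, 200_000          # the integrand is negligible outside [a, b]
h = (b - a) / n
s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
integral = s * h / 3

closed_form = 13 * math.sqrt(math.pi) / (16 * math.e ** 12)   # ~8.85e-6
# integral and closed_form agree to within quadrature error
```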
|
<p>I'm looking to understand the tangent Taylor series, but I'm struggling to understand how to use long division to divide one series (sine) by the other (cosine). I also can't find examples of the tangent series much beyond $x^5$ (Wikipedia and YouTube videos both stop at the second or third term), which is not enough for me to see any pattern ($x^3/3 + 2x^5/15$ tells me nothing).</p>
<p>Wiki says Bernoulli numbers, which I plan on studying next, but seriously, I could really use an example of the tangent series out to 5-6 terms just to get a ballpark of what's going on before I start plug and pray. If someone can explain why the long division of the series spits out $x^3/3$ instead of $x^3/3x^2$, that would help too,</p>
<p>because I took x^3/6 divided by x^2/2 and got 2x^3/6x^2, following the logic that 4/2 divided by 3/5 = 2/0.6 or 20/6. So I multiplied my top and bottom terms for the numerator, and my two middle terms for the denominator (4x5)/(2x3) = correct.</p>
<p>But when i do that with terms in the taylor series I'm doing something wrong. does that first x from sine divided by that first 1 from cosine have anything to do with it?</p>
<p>Completely lost. </p>
| J.G. | 56,861 | <p>Write $\frac{\sin x}{x}=\frac{\tan x}{x}\cos x$ as a power series in $x^2$, with $\frac{\tan x}{x}=t_0+t_1 x^2+t_2 x^4+\cdots$. Equating coefficients of powers of $x^2$ one by one gives $1=t_0,\,-\frac{1}{6}=-\frac{t_0}{2}+t_1,\,\frac{1}{120}=\frac{t_0}{24}-\frac{t_1}{2}+t_2$ etc. Write down as many of those as you like. Thus $t_0=1,\,t_1=\frac{1}{3},\,t_2=\frac{2}{15}$ etc.</p>
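This coefficient-matching is easy to automate with exact rational arithmetic (an illustrative sketch, not part of the original answer). Matching coefficients of $x^{2n}$ in $\frac{\sin x}{x} = \frac{\tan x}{x}\cos x$ gives $t_n = s_n - \sum_{k<n} t_k c_{n-k}$:

```python
from fractions import Fraction
from math import factorial

# Coefficients of x^(2n): sin(x)/x has (-1)^n/(2n+1)!, cos(x) has (-1)^n/(2n)!.
N = 6
s = [Fraction((-1) ** n, factorial(2 * n + 1)) for n in range(N)]  # sin x / x
c = [Fraction((-1) ** n, factorial(2 * n)) for n in range(N)]      # cos x
t = []
for n in range(N):                         # solve the triangular system one row at a time
    t.append(s[n] - sum(t[k] * c[n - k] for k in range(n)))
# t == [1, 1/3, 2/15, 17/315, 62/2835, 1382/155925]
```

These are exactly the tangent coefficients: $\tan x = x + \frac{x^3}{3} + \frac{2x^5}{15} + \frac{17x^7}{315} + \cdots$.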
|
<p>I'm looking to understand the tangent Taylor series, but I'm struggling to understand how to use long division to divide one series (sine) by the other (cosine). I also can't find examples of the tangent series much beyond $x^5$ (Wikipedia and YouTube videos both stop at the second or third term), which is not enough for me to see any pattern ($x^3/3 + 2x^5/15$ tells me nothing).</p>
<p>Wiki says Bernoulli numbers, which I plan on studying next, but seriously, I could really use an example of the tangent series out to 5-6 terms just to get a ballpark of what's going on before I start plug and pray. If someone can explain why the long division of the series spits out $x^3/3$ instead of $x^3/3x^2$, that would help too,</p>
<p>because I took x^3/6 divided by x^2/2 and got 2x^3/6x^2, following the logic that 4/2 divided by 3/5 = 2/0.6 or 20/6. So I multiplied my top and bottom terms for the numerator, and my two middle terms for the denominator (4x5)/(2x3) = correct.</p>
<p>But when i do that with terms in the taylor series I'm doing something wrong. does that first x from sine divided by that first 1 from cosine have anything to do with it?</p>
<p>Completely lost. </p>
| user5713492 | 316,404 | <p>My impression is that it's kind of backwards, in a numerical sense, to think about the coefficients of the $\tan$ series in terms of the Bernoulli numbers because it's simple and numerically stable to calculate the $\tan$ coefficients directly and in fact provides a reasonable method for computing the Bernoulli numbers given the formula in <a href="https://math.stackexchange.com/q/2099213">@RobJohn's post</a>. Since $y(x)=\tan x$ is an odd function of $x$ analytic at $x=0$,
$$y=\sum_{n=0}^{\infty}a_nx^{2n+1}$$
Then $y^{\prime}=\sec^2x=\tan^2x+1=y^2+1$ so
$$\sum_{n=0}^{\infty}(2n+1)a_nx^{2n}=1+\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}a_ia_jx^{2i+2j+2}=1+\sum_{n=1}^{\infty}\left(\sum_{i=0}^{n-1}a_ia_{n-i-1}\right)x^{2n}$$
The constant term reads
$$a_0=1$$
The terms in $x^{4n}$ are
$$a_{2n}=\frac1{4n+1}\sum_{i=0}^{2n-1}a_ia_{2n-i-1}=\frac2{4n+1}\sum_{i=0}^{n-1}a_ia_{2n-i-1}$$
While the terms in $x^{4n+2}$ are
$$a_{2n+1}=\frac1{4n+3}\sum_{i=0}^{2n}a_ia_{2n-i}=\frac1{4n+3}\left(a_n^2+2\sum_{i=0}^{n-1}a_ia_{2n-i}\right)$$
The numerical stability arises because all terms in the formulas for $a_n$ have the same sign.</p>
|
3,581,390 | <p>The problem is as follows:</p>
<p>Mike was born on <span class="math-container">$\textrm{October 1st, 2012,}$</span> and Jack on <span class="math-container">$\textrm{December 1st, 2013}$</span>. Find the date when the triple the age of Jack is the double of Mike's age.</p>
<p>The alternatives given in my book are as follows:</p>
<p><span class="math-container">$\begin{array}{ll}
1.&\textrm{April 1st, 2016}\\
2.&\textrm{March 21st, 2015}\\
3.&\textrm{May 8th, 2015}\\
4.&\textrm{May 1st, 2015}\\
\end{array}$</span> </p>
<p>I tried all sorts of tricks in the book to get this one but I can't find a way to find the given date. What sort of formula or procedure should be used to calculate this date? Can someone help me?</p>
| J. W. Tanner | 615,567 | <p>We have <span class="math-container">$M=14+J$</span>, where <span class="math-container">$M$</span> is Mike's age in months and <span class="math-container">$J$</span> is Jack's age in months,</p>
<p>and <span class="math-container">$2\times M=3\times J$</span>. Substitute <span class="math-container">$14+J$</span> for <span class="math-container">$M$</span> in that last equation and solve for <span class="math-container">$J$</span>. </p>
<p>Then you know how old Jack is when <span class="math-container">$2\times M=3\times J$</span>, </p>
<p>and from that, with the date of Jack's birth, you can figure the date when <span class="math-container">$2\times M=3\times J$</span>.</p>
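Carrying out these steps (a hypothetical snippet, not part of the original answer): $2(J+14) = 3J$ gives $J = 28$ months, and counting 28 months from Jack's birth date:

```python
# Solve 2*M = 3*J with M = J + 14 (ages in months), then locate the date.
J = 28                      # from 2*(J + 14) = 3*J  =>  J = 28 months
year, month = 2013, 12      # Jack's birth: December 1st, 2013
month += J
year, month = year + (month - 1) // 12, (month - 1) % 12 + 1
# (year, month) == (2016, 4): April 1st, 2016, i.e. alternative 1
```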
|
2,101,756 | <p>From the power series definition of the polylogarithm and from the integral representation of the Gamma function it is easy to show that:
\begin{equation}
Li_{s}(z) := \sum\limits_{k=1}^\infty k^{-s} z^k = \frac{z}{\Gamma(s)} \int\limits_0^\infty \frac{\theta^{s-1}}{e^\theta-z} d \theta
\end{equation}
The identity holds whenever $Re(s) > 0$. Now my question is twofold. </p>
<p>Firstly, how do we analytically continue that function to the area $Re(s) <0$? Clearly this must be possible because it was already Riemann who found a corresponding reflection formula by deforming the integration contour to the complex plane and evaluating that integral both in a clock-wise and in a anti-clockwise direction.</p>
<p>My second question would be how do we compute two dimensional functions of that kind. To be precise I am interested in quantities like this:</p>
<p>\begin{equation}
Li_{s_1,s_2}^{(\xi_1,\xi_2)}(z_1,z_2) := \sum\limits_{1 \le k_1 < k_2 < \infty }(k_1+\xi_1)^{-s_1} (k_2+\xi_2)^{-s_2} z_1^{k_1} z_2^{k_2-k_1}
\end{equation}
Clearly if both $Re(s_1) >0$ and $Re(s_2) >0$ the quantity above has a following integral representation:
\begin{equation}
Li_{s_1,s_2}^{(\xi_1,\xi_2)}(z_1,z_2) = \frac{z_1 z_2}{\Gamma(s_1) \Gamma(s_2)} \int\limits_{{\mathbb R}_+^2} \frac{\theta_1^{s_1-1} \theta_2^{s_2-1} e^{-\theta_1 \xi_1-\theta_2 \xi_2}}{\left(e^{\theta_1+\theta_2}-z_1\right)\left(e^{\theta_2}-z_2\right)} d\theta_1\theta_2
\end{equation}
However how do I compute the quantity if any of the real parts of the $s$-parameters becomes negative?</p>
| Ash | 407,754 | <p>Since $\log_33=1$, <br>$\therefore$ $$6\log_33=\log_3(y)^5-\log_3(y)$$ $$\log_3(3)^6=\log_3\left(\frac{y^5}{y}\right)$$ $$\log_3(3)^6=\log_3(y)^4$$ Taking the antilog on both sides, we can write $$3^6=y^4$$ I don't know how you found that equation, but from the first part of the full question $$\log_3(xy)=5$$ $\implies$ $$3^5=xy$$ Substituting the value of $y$ then gives $$x=\frac{3^5}{3^{\frac{3}{2}}}$$ $$x=3^\frac{7}{2}$$</p>
|
1,734,680 | <p>How can I find $F'(x)$ given $F(x) = \int_0^{x^3}\sin(t) dt$ ? <br>
I think that (by the fundamental theorem of calculus) since $f = \sin(x)$ is continuous in $[0, x^3]$, then $F$ is differentiable and $F'(x) = f(x) = \sin(x)$ but I'm not sure...</p>
| Community | -1 | <p>I think you might not know about antiderivatives yet, so this answer will avoid using them.</p>
<p>By the FTC,
$$ \frac{d}{dx} \int_a^x f(t) \, dt = f(x). $$</p>
<p>But you don't have $x$. You have $x^3$. So you'll need to use the chain rule:</p>
<p>$$ \frac{d}{dx} \int_a^{g(x)} f(t) \, dt = f(g(x)) \cdot g'(x)$$</p>
<p>Can you take it from here?</p>
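As a numerical sanity check (not part of the original answer): here $F(x) = \int_0^{x^3}\sin t\,dt = 1 - \cos(x^3)$, so the chain rule gives $F'(x) = 3x^2\sin(x^3)$:

```python
import math

# F(x) = integral_0^{x^3} sin t dt = 1 - cos(x^3); chain rule: F'(x) = sin(x^3) * 3x^2.
def F(x):
    return 1 - math.cos(x ** 3)

x, h = 1.3, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)   # central finite difference
exact = math.sin(x ** 3) * 3 * x ** 2
# numeric and exact agree to ~1e-9
```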
|
3,991,691 | <p>I'm having some trouble proving the following:</p>
<blockquote>
<p>Let <span class="math-container">$d$</span> be the smallest positive integer such that <span class="math-container">$a^d \equiv 1 \pmod m$</span>, for <span class="math-container">$a \in \mathbb Z$</span> and <span class="math-container">$m \in \mathbb N$</span> and with <span class="math-container">$\gcd(a,m) = 1$</span>. Prove that, if <span class="math-container">$a^n \equiv 1 \pmod m$</span> then <span class="math-container">$d\mid n$</span>.</p>
</blockquote>
<p>The first thing that came to my mind was Euler's theorem but I couldn't conclude anything because I'm not very skilled when it comes to using Euler's totient function. Can someone give me any tips or show me how to solve this?</p>
| poetasis | 546,655 | <p>The equation is symmetric and
it is easy to see solutions if, for example, we solve for <span class="math-container">$y$</span>.</p>
<p><span class="math-container">$$x^2 + y^2 - 5xy + 5 = 0 \implies\quad
y = \frac{5 x \pm \sqrt{21 x^2 - 20}}{2}\qquad |x|\ge 1$$</span></p>
<p>Note that the absolute value of <span class="math-container">$x$</span> must be at least <span class="math-container">$1$</span> for the radical to be non-negative and therefore for <span class="math-container">$y$</span> to be real.</p>
<p>Given this equation, we can also see that there are <span class="math-container">$2$</span> <span class="math-container">$y$</span>-values for every valid <span class="math-container">$x$</span>. There are <span class="math-container">$28$</span> solutions for
<span class="math-container">$\space -50000\le x \le 50000.\quad$</span> Here is that "sample" of <span class="math-container">$\quad (x,y_1,y_2)\quad$</span></p>
<p><span class="math-container">$$
(-7369,-35307,-1538)\quad
(-4729,-22658,-987)\quad
(-1538,-7369,-321)\\
(-987,-4729,-206)\qquad
(-321,-1538,-67)\qquad
(-206,-987,-43)\\
(-67,-321,-14)\qquad
(-43,-206,-9)\qquad
(-14,-67,-3)\\
(-9,-43,-2)\quad
(-3,-14,-1)\quad
(-2,-9,-1)\quad
(-1,-3,-2)\\
(1,2,3)\qquad
(2,1,9)\qquad
(3,1,14)\qquad
(9,2,43)\qquad
(14,3,67)\\
(43,9,206)\qquad
(67,14,321)\qquad
(206,43,987)\qquad
(321,67,1538)\\
(987,206,4729)\qquad
(1538,321,7369)\qquad
(4729,987,22658)\\
(7369,1538,35307)\quad
(22658,4729,108561)\quad
(35307,7369,169166)$$</span></p>
<p>Note that all the negative <span class="math-container">$x$</span>-values have positive counterparts and that we have a counterpart solution for <span class="math-container">$x$</span>.</p>
<p><span class="math-container">$$x = \frac{5 y \pm \sqrt{21 y^2 - 20}}{2}$$</span>
so the <span class="math-container">$x,y$</span> values should be interchangeable.</p>
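One way to see the structure of the listed solutions (my observation, not stated in the answer): for fixed $x$, the two roots of $y^2 - 5xy + (x^2+5) = 0$ satisfy $y_1 + y_2 = 5x$ by Vieta, so each solution $(x, y)$ generates the next one as $(y, 5y - x)$:

```python
# Vieta-style jumping along the curve x^2 + y^2 - 5xy + 5 = 0:
# from a solution (x, y), the pair (y, 5*y - x) is again a solution.
def on_curve(x, y):
    return x * x + y * y - 5 * x * y + 5 == 0

x, y = 1, 2
for _ in range(10):
    assert on_curve(x, y)
    x, y = y, 5 * y - x
# the chain 1, 2, 9, 43, 206, 987, 4729, 22658, 108561, ... matches the table
```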
|
3,613,950 | <blockquote>
<p>Given the set <span class="math-container">$S$</span> that is the set of all subsets of <span class="math-container">$\{1, 2, \ldots, n\}$</span>. Two different sets are chosen at random from <span class="math-container">$S$</span>. What is the probability that
the two subsets share exactly two equal elements?</p>
</blockquote>
<p><strong>My attempt</strong></p>
<p>I found that the universal set <span class="math-container">$\Omega = \dfrac{2^n\left(2^n-1\right)}{2}$</span></p>
<p>Then I tried to find the number of ways to select two subsets sharing two equal elements:</p>
<p>The number of ways to choose one subset that contains <span class="math-container">$i, j$</span> is <span class="math-container">$2^{n-2}$</span>.</p>
<p>The number of ways to choose another subset that contains <span class="math-container">$i, j$</span> and is different from the previous is: <span class="math-container">$\sum{2^{n-k-1}}$</span></p>
<p>However, I failed to put those two together to obtain correct result. I wonder whether there is another approach to this problem or how my method could have been continued.</p>
<p>Thanks in advance.</p>
| greg | 357,854 | <p><span class="math-container">$\def\m#1{\left[\begin{array}{c}#1\end{array}\right]}\def\p#1#2{\frac{\partial #1}{\partial #2}}$</span>Let <span class="math-container">$U$</span> be an unconstrained matrix
and use a colon denote the trace function in product form, i.e.
<span class="math-container">$$A:B = {\rm Tr}(A^TB) = B:A$$</span>
Write the function using the colon product and calculate the unconstrained derivative.
<span class="math-container">$$\eqalign{
\phi &= aa^T:U^2 \\
d\phi &= aa^T:(U\,dU+dU\,U) \\
&= (Uaa^T+aa^TU):dU \\
\p{\phi}{U} &= Uaa^T+aa^TU \;\doteq\; G \qquad({\rm gradient}) \\
}$$</span>
Here is recipe for converting an unconstrained gradient <span class="math-container">$G$</span> into the desired form
<span class="math-container">$$\eqalign{
G_S &\doteq G+G^T - {\rm Diag}(G) \\
&= (Xaa^T+aa^TX) + (Xaa^T+aa^TX)^T -{\rm Diag}(Xaa^T+aa^TX) \\
&= 2(Xaa^T+aa^TX) - {\rm Diag}(Xaa^T+aa^TX) \\
\\
}$$</span>
Apply this general result to your <span class="math-container">$2\times 2$</span> example.
<span class="math-container">$$\eqalign{
A = Xaa^T &= \m{a^2x+abz & abx+b^2z \\ a^2z+aby & abz+b^2y} \\
B = A+A^T &= \m{2(a^2x+abz) & (abx+b^2z+a^2z+aby) \\ (a^2z+aby+abx+b^2z) & 2(abz+b^2y)} \\
G_S = 2B-{\rm Diag}(B)
&= \m{2(a^2x+abz)&2(abx+b^2z+a^2z+aby)\\2(a^2z+aby+abx+b^2z)&2(abz+b^2y)} \\
}$$</span>
which is the same result that you obtained.</p>
<p>Having provided you with the formula that you were searching for, I must warn you that it is nonsense.</p>
<p>What you <em>should</em> do is extract a vector of fully independent parameters from the <span class="math-container">$X$</span> matrix using the half-vec operation
<span class="math-container">$$\eqalign{
p &= {\rm vech}(X) = \m{x\\z\\y} \\
}$$</span>
and solve whatever problem you have in mind in terms of this vector.</p>
<p><em>Everyone</em> agrees that the following vector gradient is valid and unambiguous
<span class="math-container">$$\eqalign{
g\doteq \p{\phi}{p} &=
\m{2(a^2x+abz) \\ 2(abx+b^2z+a^2z+aby) \\ 2(abz+b^2y)} \\
}$$</span>
However, casting this vector into matrix form using the reverse of
the <span class="math-container">${\rm vech}()$</span> function creates a <em><strong>thing</strong></em> which is difficult to interpret as a gradient, and hard to use (properly) in algorithms such as gradient descent.</p>
<p>Instead, you should leave the gradient in vector form and use it to
optimize/solve for the <span class="math-container">$p$</span> vector. Then, as a post-processing step, you can cast the solution back into matrix form <span class="math-container">$$X = {\rm unvech}(p)$$</span></p>
|
400,926 | <p>Maybe you can help here. There is kind of a lengthy setup to understand what the question is asking. There is a paper I'm reading, and in one section of it I can't make heads or tails of the result. The reference is "Global Carleman Estimates for Waves and Applications" by Baudouin, Buhan, Ervedoza. </p>
<hr>
<p>The setup (taken from the paper) : Suppose $p \in L^{\infty}(\Omega \times (-T,T))$. Given initial data $(y_0^{-T},y_1^{-T}) \in L^2(\Omega)\times H^{-1}(\Omega)$, find a function $u \in L^2(\Gamma_0 \times (-T,T))$ such that the solution $y$ of</p>
<p>\begin{eqnarray}
\partial_t^2y -\Delta y + py = 0 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{ in } \Omega \times(-T,T) \\
y = u|_{\Gamma_0} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{ on } \partial \Omega \times (-T,T)\\
y(-T) = y_0^{-T}, \partial_ty(-T) = y_1^{-T} \ \ \ \ \ \ \ \ \ \ \ \ \ \text{ in } \Omega
\end{eqnarray}
solves $y(T) = \partial_ty(T) = 0$. </p>
<hr>
<p>There is a claim that we can get an explicit form for $u$ and $y$. Let $\phi = e^{\lambda \psi}$, where $\psi(x,t) = |x-x_0|^2 - \beta t^2 +C$. For $s$ a parameter, define the functional
$$K_{s,p}(z) = \frac{1}{2s}\int_{-T}^{T}\int_{\Omega}e^{2s\phi}|\partial_t^2z - \Delta z + pz|^2 dx \ dt + \frac{1}{2}\int_{-T}^{T}\int_{\Gamma_0}e^{2s\phi}|\partial_{\nu}z|^2 d \sigma dt $$ $$+<(y_0^{-T},y_1^{-T}),(z(-T), \partial_t z(-T))>_{(L^2 \times H^{-1}) \times (H_0^1 \times L^2)}$$</p>
<p>Here, $<(y_0^{-T},y_1^{-T}),(z(-T), \partial_t z(-T))>_{(L^2 \times H^{-1}) \times (H_0^1 \times L^2)} = \int_{\Omega}{y_0^{-T}z_1^{-T} dx} - <y_1^{-T},z_0^{-T}>_{H^{-1} \times H_0^1}$,
and
$<y_1^{-T},z_0^{-T}>_{H^{-1} \times H_0^1} = \int_{\Omega} \nabla(-\Delta_d)^{-1}y_1^{-T}\cdot \nabla z_0^{-T} dx$
where $\Delta_d$ is the Laplace operator with Dirichlet boundary conditions.</p>
<p>Part of the paper shows that $K_{s,p}$ has a unique minimizer $Z[s,p]$, for each $s,p$.</p>
<hr>
<p>The setup is above. Now come the two parts I don't get.<br>
(1). The paper claims that the Euler-Lagrange equation given by the minimization of $K_{s,p}$ is, for every test function $z$,
$$\frac{1}{s}\int_{-T}^{T}\int_{\Omega}e^{2s\phi}(\partial_t^2z - \Delta z + pz)(\partial_t^2Z -\Delta Z +pZ) dx \ dt + \int_{-T}^{T}\int_{\Gamma_0}e^{2s\phi}\partial_{\nu}z \partial_{\nu}Z d\sigma dt$$ $$+ <(y_0^{-T},y_1^{-T}),(z(-T), \partial_t z(-T))>_{(L^2 \times H^{-1}) \times (H_0^1 \times L^2)} = 0$$</p>
<p>I don't understand how this result is obtained. From what I know, the Euler Lagrange equations are as follows (from Evans book). If $I[w] = \int L(Dw(x),w(x),x)$, and we call these variables $p, z, x$ respectively, then the Euler Lagrange equations satisfy $-\sum{({L_{p_i}(Du,u,x)})_{x_i}} + L_z(Du,u,x) = 0$. When I try to do this to $K_{s,p}$, I get a huge mess, because it seems like we need to use the product rule. I don't get how it simplifies to this form, and why the third term $<\cdot,\cdot>$ stays the same.</p>
<p>(2) Let $Y = \frac{1}{s}e^{2s\phi}(\partial_t^2 - \Delta + p)Z[s,p]$, and let $U[s,p] = e^{2s\phi}\partial_{\nu}Z[s,p]|_{\Gamma_0}$. </p>
<p>Then, we get
$$\int_{-T}^{T}\int_{\Omega}e^{2s\phi}(\partial_t^2z - \Delta z + pz)Y dx \ dt + \int_{-T}^{T}\int_{\Gamma_0}e^{2s\phi}\partial_{\nu}z U d\sigma dt$$ $$+ <(y_0^{-T},y_1^{-T}),(z(-T), \partial_t z(-T))>_{(L^2 \times H^{-1}) \times (H_0^1 \times L^2)} = 0$$</p>
<p>The paper claims that this is the dual formulation of the problem. What does this mean exactly, and how does this help us show that Y,U works as a solution?</p>
<p>Any help is greatly appreciated. Thanks in advance</p>
| Shuhao Cao | 7,200 | <p><strong>Derivation of Euler-Lagrange equation:</strong> If $z$ minimizes $K_{s,p}(z)$, then any small perturbation on $z$ will make this functional bigger. Hence we want: $\newcommand{\e}{\epsilon}$
$$
\lim_{\e\to 0}\frac{d}{d\e} K_{s,p}(z+\e v) = 0.\tag{$\dagger$}
$$
This means the perturbation $\e v$ in the test function space will drive the functional away from its local minimum (just like calculus).</p>
<p>$K_{s,p}(z+\e v)$ in (1) reads:
$\newcommand{\lsub}[2]{{\vphantom{#2}}_{#1}{#2}}$
$$
\begin{aligned}K_{s,p}(z+\e v) &= \frac{1}{2s}\int_{-T}^{T}\int_{\Omega}e^{2s\phi}|\partial_t^2 (z+\e v) - \Delta (z+\e v) + p(z+\e v)|^2 dx \,dt
\\
&\quad + \frac{1}{2}\int_{-T}^{T}\int_{\Gamma_0}e^{2s\phi}|\partial_{\nu}(z+\e v)|^2 d \sigma\, dt
\\
&\quad + \lsub{L^2 \times H^{-1}}{\Big\langle (y_0^{-T},y_1^{-T}),((z+\e v)(-T), \partial_t (z+\e v)(-T))\Big\rangle}_{H_0^1 \times L^2}.
\end{aligned}\tag{1}$$</p>
<ul>
<li><p>Let the first term in (1) be $I_1$, first for the integrand:
$$
\begin{aligned}
& |\partial_t^2 (z+\e v) - \Delta (z+\e v) + p(z+\e v)|^2
\\
=& \left|(\partial_t^2 z - \Delta z + pz)
+ \e (\partial_t^2 v - \Delta v + pv)\right|^2
\\
=& |\partial_t^2 z - \Delta z + pz|^2 + \e^2 |\partial_t^2 v - \Delta v + pv|^2
\\
&\quad +2\e (\partial_t^2 z - \Delta z + pz)(\partial_t^2 v - \Delta v + pv).
\end{aligned}$$Taking derivative of $\e$, first term gone, let $\e \to 0$, second term gone, what is left is the cross term with a factor of 2, hence:
$$
\lim_{\e\to 0}\frac{d}{d\e} I_1 = \frac{1}{s}\int_{-T}^{T}\int_{\Omega}e^{2s\phi}(\partial_t^2 z - \Delta z + pz)(\partial_t^2 v - \Delta v + pv)dx \, dt. \tag{2}
$$</p></li>
<li><p>Second term in (1), say $I_2$, expand the integrand:
$$
|\partial_{\nu}(z+\e v)|^2 = |\partial_{\nu}z|^2 + \e^2|\partial_{\nu}v|^2 + 2\e \,\partial_{\nu}z \,\partial_{\nu}v,
$$
Similar argument as above:
$$
\lim_{\e\to 0}\frac{d}{d\e} I_2 = \int_{-T}^{T}\int_{\Gamma_0}e^{2s\phi}\partial_{\nu}z \,\partial_{\nu}v \,d\sigma\, dt. \tag{3}
$$</p></li>
<li><p>Third term $I_3$ in (1):
$$
\begin{aligned}
& \lsub{L^2 \times H^{-1}}{\Big\langle (y_0^{-T},y_1^{-T}),((z+\e v)(-T), \partial_t (z+\e v)(-T))\Big\rangle}_{H_0^1 \times L^2}
\\
=& \int_{\Omega}{y_0^{-T}(z+\e v)(-T) dx} - \lsub{H^{-1}}{\big\langle y_1^{-T},\partial_t(z+\e v)(-T)\big\rangle}_{H_0^1}
\\
=& \int_{\Omega}{y_0^{-T}z(-T) dx} - \lsub{H^{-1}}{\big\langle y_1^{-T},\partial_t z(-T)\big\rangle}_{H_0^1}
\\
&\quad + \e \left(\int_{\Omega}{y_0^{-T}v(-T) dx} - \lsub{H^{-1}}{\big\langle y_1^{-T},\partial_tv(-T)\big\rangle}_{H_0^1}\right).
\end{aligned}
$$
Taking the derivative with respect to $\e$ removes the first term:
$$
\begin{aligned}
\lim_{\e\to 0}\frac{d}{d\e} I_3 &= \int_{\Omega}{y_0^{-T}v(-T) dx} - \lsub{H^{-1}}{\big\langle y_1^{-T},\partial_t v(-T)\big\rangle}_{H_0^1}
\\
&=\lsub{L^2 \times H^{-1}}{\Big\langle (y_0^{-T},y_1^{-T}),(v(-T), \partial_t v(-T))\Big\rangle}_{H_0^1 \times L^2}.
\end{aligned}\tag{4} $$</p></li>
</ul>
<p>Now (2)+(3)+(4) yields the expression of $(\dagger)$:
$$
\begin{aligned}
&\frac{1}{s}\int_{-T}^{T}\int_{\Omega}e^{2s\phi}(\partial_t^2 z - \Delta z + pz)(\partial_t^2 v - \Delta v + pv)dx \, dt
\\
&+ \int_{-T}^{T}\int_{\Gamma_0}e^{2s\phi}\partial_{\nu}z \,\partial_{\nu}v \,d\sigma\, dt
\\
&+ \lsub{L^2 \times H^{-1}}{\Big\langle (y_0^{-T},y_1^{-T}),(v(-T), \partial_t v(-T))\Big\rangle}_{H_0^1 \times L^2} = 0.\end{aligned}\tag{$\ddagger$} $$
I am using $v$ instead of $z$ as the test function, and the minimizer $Z[s,p]$ is my $z$ (replacing $z$ with $Z$, $v$ with $z$ leads to your equation). And $(\ddagger)$ is the Euler-Lagrange equation. </p>
<p>What this paper claims is that there exists a unique $Z$, depending on the choice of $s$ and $p$, such that $(\ddagger)$ holds for any $v$ (in the paper the author uses $z$); this $Z$ minimizes the functional $K_{s,p}$. The author should show the existence and uniqueness of a solution of $(\ddagger)$, subject to certain boundary conditions, somewhere in the paper.</p>
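The computations in (2)–(4) all rest on the elementary identity $\frac{d}{d\epsilon}\Big|_{\epsilon=0}|A+\epsilon B|^2 = 2AB$, applied under the integral sign. A quick finite-difference sanity check of that identity (pure Python; the function names are mine, just a throwaway illustration):

```python
# Check that d/de (A + e*B)^2 at e = 0 equals 2*A*B — the cross term
# that survives in each of (2), (3), (4) above.

def K(eps, A, B):
    return (A + eps * B) ** 2

def ddeps_at_zero(A, B, h=1e-6):
    # central finite difference at eps = 0
    return (K(h, A, B) - K(-h, A, B)) / (2 * h)

for A, B in [(1.0, 2.0), (-3.5, 0.7), (0.0, 4.0)]:
    fd = ddeps_at_zero(A, B)
    assert abs(fd - 2 * A * B) < 1e-5, (A, B, fd)
print("cross-term identity verified")
```

The same cancellation pattern is what makes only the cross terms survive in $(\dagger)$.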
<hr>
<p>Now onto the second question: $Y$ and $U$ do not themselves act as the solution; what the author does is essentially a change of notation. He uses $Y$ to represent some expression of the minimizer $Z$ (the solution) in the interior, and $U$ to represent some other expression of the minimizer $Z$ (the solution) on the boundary. The author's phrase "dual formulation" is with respect to the original PDE: the minimizer $z= Z$ of the functional $K_{s,p}$ satisfies $(\ddagger)$ for any $v$ and, at the same time, serves as the weak solution to the original PDE:
$$\left\{\begin{aligned}
&\partial_t^2 z -\Delta z + p z= 0 \quad\text{ in } \Omega \times(-T,T), \\
&z = u|_{\Gamma} \quad \text{ on } \partial \Omega \times (-T,T),\\
&z(-T) = y_0^{-T},\; \partial_t z(-T) = y_1^{-T} \quad \text{ in } \Omega,
\end{aligned}\right.$$
with appropriately chosen boundary data $u$.</p>
|
4,004 | <p>This is related to <a href="https://math.stackexchange.com/q/133615/26306">this post</a>, please read the comments.</p>
<p>What is the usual way of dealing with that kind of problems on math.SE?
(By "that kind of problems" I mean someone posting tasks from an ongoing contest.)</p>
<p>I mean, I did email the contest coordinator and flag the post, but it seems that there is more than one user and more than one question involved. Also, I do not know whether the OP is a contestant or, e.g., a friend who wishes to learn the answer himself. The whole situation is not trivial, and I do not see any way to prevent such abuse on future occasions (one cannot possibly be aware of all the contests in the world).</p>
<p>Any comments/ideas/explanations will be appreciated.</p>
| Phira | 9,325 | <p>Since the recent comments on a posted contest question links here, let me state my answer:</p>
<p>While there can be no obligation of this site to do detective work and be responsible for never answering a contest question, I strongly feel that <strong>if</strong> someone provides a link that it is a contest question, it should indeed be <em>swiftly</em> deleted.</p>
<p>It is not reasonable to check each question for being a contest question, but it is quite reasonable to not answer a question known as a contest question.</p>
<p>Yes, in an ideal world, contests should not be organized in an easily breakable way, but in many countries, the conditions are not ideal and <em>knowingly</em> answering an ongoing contest question is sabotage.</p>
<p>This is a much easier call than Project Euler that wants to protect its questions indefinitely.</p>
|
1,522,216 | <p>I want to show the following:
$$\left(\frac{n^2-1}{n^2}\right)^n\sqrt{\frac{n+1}{n-1}}\leq 1; ~~n\geq 2$$ and $n$ is an integer. </p>
<p>After some simplifications, I got left hand-side as
$$LHS:\left(1-\frac{1}{n}\right)^{n-\frac{1}{2}} \left(1+\frac{1}{n}\right)^{n+\frac{1}{2}}$$
It is clear that the 1st term is less than $1$, but I do not have any clue how to show that the product is less than $1$.</p>
<p>Can someone give me some hints? </p>
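A quick numerical check (a sanity check, not a proof) that the inequality indeed holds:

```python
import math

# Check ((n^2 - 1)/n^2)^n * sqrt((n+1)/(n-1)) <= 1 for integer n >= 2.
def lhs(n):
    return ((n * n - 1) / (n * n)) ** n * math.sqrt((n + 1) / (n - 1))

for n in range(2, 1001):
    assert lhs(n) < 1.0, n
print(max(lhs(n) for n in range(2, 1001)))  # approaches 1 from below
```

The maximum creeps toward $1$ as $n$ grows, which matches the fact that both factors tend to $e^{-1}$ and $e$ respectively.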
| mathlover | 281,534 | <p>There are only two possibilities, Z-X-Y or X-Z-Y. Then, doing the necessary calculations, we get YZ equal to either 13 or 7, so the answer is 91.</p>
|
395,685 | <p>I recall seeing a quote by William Thurston where he stated that the Geometrization conjecture was almost certain to be true and predicted that it would be proven by curvature flow methods. I don't remember the exact date, but it was from after Hamilton introduced the Ricci flow but well before Perelman's work. Unfortunately, most of the results for Geometrization and Ricci flow are from 2003 or after. Does anyone know if the quote I'm referring to actually exists, and if so, where to find it?</p>
<p>There is a quote from Thurston lauding Perelman's work, which suggests that he thought the Ricci flow was a promising approach, but I thought there was one from before as well.</p>
<blockquote>
<p>That the geometrization conjecture is true is not a surprise. That a
proof like Perelman's could be valid is not a surprise: it has a
certain rightness and inevitability, long dreamed of by many people
(including me). What is surprising, wonderful and amazing is that someone – Perelman – succeeded in rigorously analyzing and controlling this process, despite the many hurdles, challenges and potential pitfalls.</p>
</blockquote>
<p>Thanks in advance.</p>
| Dmitri Panov | 943 | <p>There is a <a href="https://www.youtube.com/watch?v=Qzxk8VLqGcI" rel="nofollow noreferrer">video of Thurston's talk "A discussion on geometrization"</a> from May 7, 2001. In the last part of this talk he speaks about possible approaches to proving geometrization.</p>
<p>Starting from <a href="https://youtu.be/Qzxk8VLqGcI?t=2815" rel="nofollow noreferrer">46:55</a> he spends 20 seconds mentioning heat type flows (probably Ricci flow) and says that it might or not work. But then he switches to describing a different approach which he thinks is more robust.</p>
<p>The part of the talk where he starts to speak about future predictions in the sense of proving geometrization starts at <a href="https://youtu.be/Qzxk8VLqGcI?t=2372" rel="nofollow noreferrer">39:32</a>.</p>
|
3,980,441 | <p>I need to prove this. I need your help to verify that my proof is correct (or not) please.</p>
<blockquote>
<p>Prove that this integral exists: <span class="math-container">\begin{align}
\int_{2}^{\infty}\frac{dx}{\sqrt{1+x^{3}}} \end{align}</span></p>
</blockquote>
<p><strong>My attempt:</strong></p>
<p>First, we need to observe that <span class="math-container">$\frac{1}{\sqrt{1+x^{3}}}<\frac{1}{\sqrt{x^{3}}}\Longrightarrow \int_{2}^{\infty}\frac{dx}{\sqrt{1+x^{3}}}<\int_{2}^{\infty}\frac{dx}{\sqrt{x^{3}}}$</span></p>
<p>Next, we note that <span class="math-container">$\lim_{x \rightarrow \infty} \frac{\frac{1}{\sqrt{1+x^{3}}}}{\frac{1}{\sqrt{x^{3}}}}=1 \Longrightarrow$</span> for <span class="math-container">$f(x)=\frac{1}{\sqrt{1+x^{3}}}$</span>, <span class="math-container">$g(x)=\frac{1}{\sqrt{x^{3}}}$</span> both integrals converge or both diverge.</p>
<p>By last, integrals of the form <span class="math-container">$\int_{1}^{\infty}\frac{dx}{x^{p}}$</span> converges if <span class="math-container">$p>1$</span>, <span class="math-container">$\Longrightarrow \int_{1}^{\infty}\frac{dx}{\sqrt{x^{3}}}$</span> converges <span class="math-container">$\Longrightarrow \int_{2}^{\infty}\frac{dx}{\sqrt{x^{3}}}$</span> converges</p>
<p>That implies that, <span class="math-container">$\int_{2}^{\infty}\frac{dx}{\sqrt{1+x^{3}}}$</span> converges, therefore it exists.</p>
<p>Is it correct? Is there another way to prove it? Thank you very much</p>
| Mark | 470,733 | <p>Your solution is correct, though you didn't really need to use the limit comparison test. You could just stop after the first line. Since <span class="math-container">$\frac{1}{\sqrt{1+x^3}}\leq\frac{1}{\sqrt{x^3}}$</span> and the integral <span class="math-container">$\int_2^{\infty}\frac{1}{\sqrt{x^3}}dx$</span> converges, we know from the usual comparison test that <span class="math-container">$\int_2^{\infty}\frac{1}{\sqrt{1+x^3}}dx$</span> converges as well. Of course it is important to note that these are nonnegative functions in the solution.</p>
|
2,129,086 | <p>I know that the total number of choosing without constraint is </p>
<p>$\binom{3+11−1}{11}= \binom{13}{11}= \frac{13·12}{2} =78$</p>
<p>Then with x1 ≥ 1, x2 ≥ 2, and x3 ≥ 3. </p>
<p>the textbook has the following solution </p>
<p>$\binom{3+5−1}{5}=\binom{7}{5}=21$ I can't figure out where the $5$ is coming from.</p>
<p>The reason to choose 5 is because the constraint adds up to 6? so 11 -6 =5?</p>
| Maczinga | 411,133 | <p>This can be solved also using the <a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)" rel="nofollow noreferrer">stars and bars method</a>.
The point is paying attention to variables that take value 0.
So you have 3 cases:</p>
<p>1) all variables $\ne 0$
This amounts to $\binom{11-1}{3-1}=45$</p>
<p>2) just one variable have value $0$ (and hence two others are $\ne 0$)
This amounts to $\binom{3}{1}\cdot\binom{11-1}{2-1}=30$</p>
<p>3) two variables are $0$ (and hence only one is non-zero
This amounts to $\binom{3}{2}\cdot 1=3$</p>
<p>Taking 1)+2)+3) gives <b>78</b></p>
<p>as already found with the other methods.</p>
<p>For the second part, you just have to adjust the question to the new constraints: substituting $y_1=x_1$, $y_2=x_2-1$, $y_3=x_3-2$ gives $y_1+y_2+y_3=8$ with all $y_i\ge 1$ (can you see this?). Applying the stars and bars method again you find</p>
<p>$\binom{8-1}{3-1}=21$</p>
<p>The situation is simpler in this second part since all variables are $\ne 0$.</p>
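A brute-force enumeration (mine, not from the textbook) confirms both counts:

```python
from itertools import product

# Count nonnegative integer solutions of x1 + x2 + x3 = 11.
sols = [(a, b, c) for a, b, c in product(range(12), repeat=3) if a + b + c == 11]
assert len(sols) == 78

# Now impose the constraints x1 >= 1, x2 >= 2, x3 >= 3.
constrained = [(a, b, c) for a, b, c in sols if a >= 1 and b >= 2 and c >= 3]
assert len(constrained) == 21

print(len(sols), len(constrained))  # 78 21
```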
|
3,485,441 | <p>I don't quite understand why Burnside's lemma
<span class="math-container">$$
|X/G|=\frac1{|G|}\sum_{g\in G} |X_g|
$$</span>
should be called a "lemma". By "lemma", we should mean there is something coming after it, presumably a theorem. However, I could not find a theorem which requires Burnside as a lemma. In every book I read, the author jumps into calculations using Burnside rather than further theorems.</p>
<p>Question: What are some important consequences of Burnside Lemma, and why is it called a "lemma"?</p>
| Math101 | 668,360 | <p>One consequence is for the necklace problem, see this post:</p>
<p><a href="https://math.stackexchange.com/questions/2016732/necklace-problem-with-burnsides-lemma">Necklace problem with Burnside's lemma</a></p>
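For instance, Burnside's lemma counts necklaces of $n$ beads in $k$ colors, up to rotation, as $\frac1n\sum_{i=0}^{n-1}k^{\gcd(i,n)}$. A small sketch with a brute-force cross-check (function names are mine, not from the linked post):

```python
from itertools import product
from math import gcd

def necklaces_burnside(n, k):
    # Burnside: average, over all n rotations, of the number of fixed colorings.
    return sum(k ** gcd(i, n) for i in range(n)) // n

def necklaces_bruteforce(n, k):
    # Count rotation orbits directly on all k^n colorings.
    seen = set()
    for w in product(range(k), repeat=n):
        canon = min(w[i:] + w[:i] for i in range(n))
        seen.add(canon)
    return len(seen)

for n in range(1, 7):
    for k in range(1, 4):
        assert necklaces_burnside(n, k) == necklaces_bruteforce(n, k)

print(necklaces_burnside(6, 2))  # 14 binary necklaces of length 6
```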
|
1,987,507 | <p>I find this question, which comes from section 2.2 of Dummit and Foote's algebra text, to be somewhat confusing:</p>
<blockquote>
<p>Let $G = S_n$, fix $i \in \{1,...,n\}$ and let $G_i = \{\sigma \in G ~|~ \sigma(i) = i\}$ (the stabilizer of $i$ in $G$). Use group actions to prove that $G_i$ is a subgroup of $G$. Find $|G_i|$.</p>
</blockquote>
<p>Here is what I came up with, but it hardly uses group actions. Let $\ker(\cdot)$ denote the kernel of $G$ acting on $\{1,...,n\}$ (?). It is easy to show that $\ker(\cdot)$ is the intersection of all stabilizers of elements in $G$, i.e., $\ker(\cdot) = \bigcap_{i=1}^n G_i$. But since $\ker(\cdot)$ is a subgroup of $G$, and since $\bigcap_{i=1}^n G_i$ is a subgroup if and only if each $G_i$ is, then the stabilizer $G_i$ is a subgroup. </p>
<p>That is the best I could come up with; as I mentioned, it really doesn't use many ideas of group actions. Also, I am not 100% certain $G$ is acting on $\{1,...,n\}$ in this case; perhaps it is acting on $n$-tuples of elements in $\{1,...,n\}$.</p>
<p>PS What is the standard notation for the kernel of a group action? Dummit and Foote offers no convenient notation for it---in fact, they haven't yet offered any notation for it!</p>
| Graham Kemp | 135,106 | <p>If $p$ is the price per ticket, then $\frac 1{20} (p−\$80)+\frac{19}{20} p$ is the expected return for selling <em>one</em> ticket.</p>
<p>You want the expected return for selling <em>twenty</em> tickets to equal $\$30$. Fortunately the Linearity of Expectation means this is:</p>
<p>$$20\times(\frac 1{20} (p−\$80)+\frac{19}{20} p)=\$30 $$</p>
<p>This yields $p=\$5.50$</p>
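By linearity, the expected return per ticket simplifies to $p-\$4$, so twenty tickets return $20(p-4)=\$30$ exactly at $p=\$5.50$. A quick exact-arithmetic check:

```python
from fractions import Fraction

def expected_return(p, tickets=20, payout=80):
    # One ticket out of `tickets` pays out `payout`; each sells for p.
    per_ticket = Fraction(1, tickets) * (p - payout) + Fraction(tickets - 1, tickets) * p
    return tickets * per_ticket

p = Fraction(11, 2)  # $5.50
assert expected_return(p) == 30
print(expected_return(p))  # 30
```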
|
114,895 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/21282/show-that-every-n-can-be-written-uniquely-in-the-form-n-ab-with-a-squa">Show that every $n$ can be written uniquely in the form $n = ab$, with $a$ square-free and $b$ a perfect square</a> </p>
</blockquote>
<p>I am trying to prove that for every $n \ge 1$ there exist uniquely determined integers $a \gt 0$ and $b \gt 0$ such that $n = a^2b$ where $b$ is square-free.</p>
<p>The fact that such $a$ and $b$ exist is easy to prove.</p>
<p>From the fundamental theorem of arithmetic, $n$ can be uniquely represented as $p_1^{a_1} p_2^{a_2} \cdots p_s^{a_s}$ where $s$ is a positive integer. Thus</p>
<p>\begin{align*}
n & = \prod_{i=1}^s p_i^{a_i} \\\\
& = \prod_{i=1}^s p_i^{\left(2 \left\lfloor \frac{a_i}{2} \right\rfloor + a_i \bmod{2}\right)} \\\\
& = \prod_{i=1}^s p_i^{\left(2 \left\lfloor \frac{a_i}{2} \right\rfloor\right)} \cdot \prod_{i=1}^s p_i^{a_i \bmod{2}} \\\\
& = \left(\prod_{i=1}^s p_i^{\left\lfloor \frac{a_i}{2} \right\rfloor}\right)^2 \cdot \prod_{i=1}^s p_i^{a_i \bmod{2}}.
\end{align*}</p>
<p>Clearly, $\left(\prod_{i=1}^s p_i^{\left\lfloor \frac{a_i}{2} \right\rfloor}\right)^2$ is a perfect square and $\prod_{i=1}^s p_i^{a_i \bmod{2}}$ is square free. Hence, we have shown that such $a$ and $b$ exist.</p>
<p>Now, how do we show that such a pair of $a$ and $b$ is unique?</p>
<p>I know how to start proving such a theorem. Let us assume that $n = a^2b = a'^2b'$ such that $a' \ne a$ and $b' \ne b$. Now since this is not possible this should lead us to some contradiction. But, I'm unable to reach a contradiction from this assumption. Could you please help me?</p>
| André Nicolas | 6,312 | <p>The proof of existence that you gave is fine, and can be adapted to produce a proof of uniqueness by using the essential uniqueness of prime power factorization. </p>
<p>But let us prove existence and uniqueness without explicit use of the representation of natural numbers as a product of powers of primes.</p>
<p><strong>Existence:</strong> Call a natural number <em>bad</em> if it does not have a representation of the type we want. If there are bad natural numbers, there is a <em>smallest</em> bad number $n$. It is clear that $n>1$. </p>
<p>Thus $n$ is divisible by some prime $p$. Let $m=n/p$. By the minimality of $n$, the number $m$ is good, so has a representation as $a^2b$ where $b$ is square-free. </p>
<p>If $p$ does not divide $b$, then $n=pm=a^2(pb)$, and $pb$ is square-free, contradicting the badness of $n$. </p>
<p>If $p$ divides $b$, then $b=pb'$ for some natural number $b'$. Note that $b'$ is square-free. Then $n=mp=(ap)^2b'$, again contradicting the badness of $n$. </p>
<p><strong>Uniqueness:</strong> Suppose that there are natural numbers that have more than one representation. Call such a natural number <em>bad</em>. If there is a bad number, then there is a <em>smallest</em> bad number $n$. Clearly $n> 1$.</p>
<p>So $n$ has two different representations $n=a^2b$ and $n=c^2d$, where $c$ and $d$ are square-free. (Different here means that $a^2\ne c^2$ or $b\ne d$, or both.)</p>
<p>Since $n > 1$, there is a prime $p$ that divides $n$. Suppose first that $p^2$ does not divide $n$. Then $p$ cannot divide either $a$ or $c$. So $p$ must divide both $b$ and $d$. Let $b=pb'$, $d=pd'$, and let $m=n/p$. Then $m=a^2b'=c^2d'$, and therefore $m$ is bad, contradicting the minimality of $n$.</p>
<p>If $p^2$ divides $n$, then since $p^2$ cannot divide $b$, we must have that $p$ divides $a^2$, and therefore $p$ divides $a$. Similarly, $p$ divides $c$. Let $a=pa'$, and $c=pc'$. Let $m=n/p^2$. We conclude that $m=(a')^2b=(c')^2d$, again contradicting the minimality of $n$.</p>
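The existence-and-uniqueness statement is easy to verify computationally for small $n$: strip out the largest square divisor $a^2$, check that the cofactor $b$ is square-free, and brute-force check that no other pair works (a throwaway sketch, not part of the proof):

```python
def decompose(n):
    # Extract the largest a with a^2 | n; what remains is the square-free part b.
    a = 1
    for d in range(2, int(n ** 0.5) + 1):
        while n % (d * d) == 0:
            n //= d * d
            a *= d
    return a, n

def is_squarefree(b):
    return all(b % (d * d) != 0 for d in range(2, int(b ** 0.5) + 1))

for n in range(1, 2001):
    a, b = decompose(n)
    assert a * a * b == n and is_squarefree(b)
    # uniqueness: no other pair (a', b') with b' square-free represents n
    pairs = [(x, n // (x * x)) for x in range(1, int(n ** 0.5) + 1)
             if n % (x * x) == 0 and is_squarefree(n // (x * x))]
    assert pairs == [(a, b)], (n, pairs)

print(decompose(360))  # (6, 10): 360 = 6^2 * 10
```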
|
2,443,496 | <blockquote>
<p>Can someone point me in the right direction as to how to take the derivative of this function:
$$ f(x) = 2 \pi \sqrt{\frac{x^2}{c}} $$</p>
</blockquote>
<p>Thank you</p>
| Raffaele | 83,382 | <p>When $x\ge 0$ you have $f(x)=\dfrac{2\pi\,x}{\sqrt c}\to f'(x)=\dfrac{2\pi}{\sqrt c}$</p>
<p>when $x<0$ then $f(x)=.\dfrac{2\pi\,x}{\sqrt c}\to f'(x)=-\dfrac{2\pi}{\sqrt c}$</p>
<p>To put all together in one formula
$$f'(x)=\frac{2 \pi \, \text{sgn}(x)}{\sqrt{c}}$$
where $\text{sgn}(x)=\left\{
\begin{array}{rr}
-1&\text{if}\;x<0 \\
0& \text{if} \;x=0 \\
1& \text{if} \;x>0 \\
\end{array}
\right.
$</p>
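A finite-difference check of the two branches (pure Python; $c$ is taken positive, and $x=0$ is avoided since $f(x)=2\pi|x|/\sqrt c$ has a corner there):

```python
import math

c = 4.0

def f(x):
    return 2 * math.pi * math.sqrt(x * x / c)

def fprime_numeric(x, h=1e-6):
    # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.5, 1.0, 3.0):
    assert abs(fprime_numeric(x) - 2 * math.pi / math.sqrt(c)) < 1e-6
    assert abs(fprime_numeric(-x) + 2 * math.pi / math.sqrt(c)) < 1e-6

print(fprime_numeric(1.0), 2 * math.pi / math.sqrt(c))  # both approximately pi
```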
|
3,251,754 | <p>Let <span class="math-container">$M$</span> be the set of all <span class="math-container">$m\times n$</span> matrices over real numbers.Which of the following statements is/are true??</p>
<ol>
<li>There exists <span class="math-container">$A\in M_{2\times 5}(\mathbb R)$</span> such that the dimension of the nullspace of <span class="math-container">$A $</span> is <span class="math-container">$2$</span>.</li>
<li>There exists <span class="math-container">$A\in M_{2\times 5}(\mathbb R)$</span> such that the dimension of the nullspace of <span class="math-container">$A $</span> is <span class="math-container">$0$</span>.</li>
<li>There exists <span class="math-container">$A\in M_{2\times 5}(\mathbb R)$</span> and <span class="math-container">$B\in M_{5\times 2}(\mathbb R)$</span> such that <span class="math-container">$AB$</span> is the <span class="math-container">$2\times 2$</span> identity matrix.</li>
<li>There exists <span class="math-container">$A\in M_{2\times 5}(\mathbb R)$</span> whose null space is <span class="math-container">$\{ (p,q,r,s,t)\in \mathbb R^5 | p=q, r=s=t\}$</span>.</li>
</ol>
<p>I am sure that option <span class="math-container">$3$</span> will definitely not hold, but I don't know about the others. Also, is the dimension of the nullspace <span class="math-container">$3$</span>?</p>
| Vizag | 566,333 | <p>Your argument is correct. Here is another way you could think about it: </p>
<p><span class="math-container">$$P(B^c|C) =\frac{P(B^c\cap C)}{P(C)}$$</span>
<span class="math-container">$$=\frac{P(C)-P(B\cap C)}{P(C)}$$</span>
<span class="math-container">$$=1-P(B|C)$$</span></p>
<p>Draw a Venn diagram to see that <span class="math-container">$P(B^c \cap C) = P(C)-P(B\cap C)$</span>.</p>
|
3,589,685 | <p>Can you give an example of an isomorphism mapping from <span class="math-container">$\mathbb R^3 \to \mathbb P_2(\mathbb R)$</span>(degree-2 polynomials)?</p>
<p>I understand that to show isomorphism you can show both injectivity and surjectivity, or you could also just show that an inverse matrix exists.</p>
<p>My issue is that I don't think you can represent the transformation with a matrix because of the polynomial space. </p>
<p>How would you come to proving isomorphism without the use of matrix representations?</p>
Community | -1 | <p>Assuming you mean the polynomials of degree less than or equal to <span class="math-container">$2$</span>, it is a three-dimensional space, with basis <span class="math-container">$\{1,x,x^2\}$</span>. So, just send basis vectors to basis vectors and extend linearly:</p>
<p><span class="math-container">$$e_1\to1,e_2\to x,e_3\to x^2$$</span>.</p>
<p>The matrix of this map, relative to these two standard bases, is the identity.</p>
|
2,588,968 | <p>I have the double integral</p>
<p>$$\int^{10}_0 \int^0_{-\sqrt{10y-y^2}} \sqrt{x^2+y^2} \,dx\,dy$$</p>
<p>And I am asked to evaluate this by changing to polar coordinates.</p>
| Michael Hardy | 11,667 | <p>Complete the square:
$$
10y-y^2 = 25 - (5-y)^2,
$$
so the graph of $x = -\sqrt{10y-y^2} = - \sqrt{5^2 - (5-y)^2}$ is the left half of the circle $x^2 + (5-y)^2 = 5^2.$</p>
<p>Exercises with polar coordinates will have shown you that $$\tag 1 r = 10\sin\theta$$ is that circle. If you multiply both sides of $(1)$ by $r,$ you get $$ r^2 = 10r\sin\theta $$ which becomes $$ x^2+y^2 = 10y $$ and by completing the square, then becomes $$ x^2 + (y-5)^2 = 5^2. $$</p>
<p>Thus, since the integrand is $\sqrt{x^2+y^2}=r$ and the area element is $r\,dr\,d\theta,$ you have
$$
\int_{\pi/2}^\pi \int_0^{10\sin\theta} r^2\, dr\,d\theta.
$$</p>
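As a sanity check, the original Cartesian integral can be evaluated numerically with a simple midpoint rule (pure Python); the exact value works out to $2000/9 \approx 222.22$:

```python
import math

# Midpoint-rule evaluation of the integral of sqrt(x^2 + y^2)
# over 0 <= y <= 10, -sqrt(10y - y^2) <= x <= 0.
def integral(ny=400, nx=400):
    total = 0.0
    hy = 10.0 / ny
    for i in range(ny):
        y = (i + 0.5) * hy
        xlo = -math.sqrt(10 * y - y * y)
        hx = -xlo / nx
        for j in range(nx):
            x = xlo + (j + 0.5) * hx
            total += math.sqrt(x * x + y * y) * hx * hy
    return total

val = integral()
assert abs(val - 2000 / 9) < 0.5  # exact value is 2000/9
print(val)
```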
|
2,847,277 | <p>Are there primes $p=47\cdot 2^n+1$, where $n\in\mathbb Z_+$? Tested for all primes $p<100,000,000$ without equality.</p>
| Nominal Animal | 318,422 | <p>There are several options, depending on exactly what you want to do.</p>
<hr>
<p>Let $\hat{d}$ be the unit ($\lVert\hat{d}\rVert = 1$) direction vector,
$$\hat{d} = \frac{\vec{d}}{\lVert\vec{d}\rVert} \tag{1}\label{NA1}$$</p>
<p>Let $\hat{u}$ be an unit vector perpendicular to $\hat{d}$; essentially, the direction perpendicular to the spray for which $\varphi = 0$.</p>
<p>If your spray is radially symmetric, then you can pick $\hat{u}$ freely. One way to pick it is to calculate $\hat{e}_x \cdot \hat{d}$, $\hat{e}_y \cdot \hat{d}$, and $\hat{e}_z \cdot \hat{d}$ (the $\hat{e}$ unit vectors being the axis vectors, i.e. $\hat{e}_x = \left [ \begin{matrix} 1 \\ 0 \\ 0 \end{matrix} \right ]$), and let $\hat{e}$ be $\hat{e}_x$, $\hat{e}_y$, or $\hat{e}_z$, depending on which one was closest to zero, then
$$\vec{u} = \hat{e} - \hat{d} \left ( \hat{e} \cdot \hat{d} \right ), \quad
\hat{u} = \frac{\vec{u}}{\lVert\vec{u}\rVert} \tag{2}\label{NA2}$$</p>
<p>If you let
$$
\hat{d} = \left [ \begin{matrix} d_x \\ d_y \\ d_z \end{matrix} \right ], \quad
\hat{u} = \left [ \begin{matrix} u_x \\ u_y \\ u_z \end{matrix} \right ],
\quad
\hat{v} = \hat{d} \times \hat{u} = \left [ \begin{matrix} v_x \\ v_y \\ v_z \end{matrix} \right ] = \left [ \begin{matrix} d_y u_z - d_z u_y \\ d_z u_x - d_x u_z \\ d_x u_y - d_y u_x \end{matrix} \right ]
$$
then the rotation matrix $\mathbf{R}$ that rotates $z$ axis towards $\hat{d}$, $x$ axis towards $\hat{u}$, and $y$ axis towards $\hat{v}$, is
$$
\mathbf{R} = \left [ \begin{matrix}
u_x & v_x & d_x \\
u_y & v_y & d_y \\
u_z & v_z & d_z \\
\end{matrix} \right ] \tag{3a}\label{NA3a}
$$
Because $\mathbf{R}$ is an orthonormal matrix, the inverse rotation is
$$
\mathbf{R}^{-1} = \mathbf{R}^T = \left [ \begin{matrix}
u_x & u_y & u_z \\
v_x & v_y & v_z \\
d_x & d_y & d_z \\
\end{matrix} \right ] \tag{3b}\label{NA3b}
$$</p>
<p>This means that if you have a vector $\vec{p}$ in the coordinate system where the spray is along the positive $z$ axis and $\varphi = 0$ along positive $x$ axis, then
$$\vec{q} = \mathbf{R}\vec{p} \quad \iff \quad \vec{p} = \mathbf{R}^T \vec{q} \tag{4}\label{NA4}$$
i.e. vector $\vec{q}$ is the corresponding vector in the coordinate system where the spray is along $\hat{d}$, and $\varphi = 0$ along $\hat{u}$.</p>
<hr>
<p>Let's say you want to use spherical coordinates $(r ,\, \varphi ,\, \theta)$ where $r$ is the distance from the nozzle, $\theta$ is the angle to the spray direction vector $\vec{d}$, and $\varphi$ is an angle around the axis of the direction.</p>
<p>Let's say the unit vector $\hat{d}$ describes the spray axis, and $\hat{u}$ describes the direction where $\varphi = 0$, both perpendicular to each other ($\hat{d} \perp \hat{u}$; $\hat{d} \cdot \hat{u} = 0$; $\hat{v} = \hat{d} \times \hat{u}$; $\hat{v} \perp \hat{u}$; and $\hat{v} \perp \hat{d}$).</p>
<p>You do not need to use the rotation matrix $\mathbf{R}$ to generate a vector given $r$ (length), $\theta$ (angle to axis of spray), and $\varphi$ (rotation around axis of spray). The result $\vec{q} = \mathbf{R}\vec{p}$ is equal to
$$
\vec{q}(r, \varphi, \theta) = r \cos(\theta) \hat{d} + r \sin(\theta) \cos(\varphi) \hat{u} + r \sin(\theta) \sin(\varphi) \hat{v}
\tag{5}\label{NA5}
$$</p>
<hr>
<p>To test whether vector $\vec{p}$ is within the right circular cone that has the apex at origin, axis $\vec{d}$, and aperture $2\theta$, use
$$\vec{p} \cdot \vec{d} \ge \lVert \vec{p} \rVert \lVert \vec{d} \rVert \cos(\theta) \tag{6}\label{NA6}$$
Note that when the dot product equals the product of the norms, the two vectors are parallel,
$$\vec{p} \cdot \vec{d} = \lVert \vec{p} \rVert \lVert \vec{d} \rVert \quad \iff \quad \vec{p} \parallel \vec{d}$$
and at the limit,
$$\vec{p} \cdot \vec{d} = \lVert \vec{p} \rVert \lVert \vec{d} \rVert \cos(\theta)$$
the vector is at the surface of the right circular cone.</p>
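A self-contained pure-Python sketch of the constructions above — building the frame $(\hat d,\hat u,\hat v)$, generating a spray vector via $(5)$, and testing cone membership via $(6)$. The helper names are mine, not from any library:

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def scale(v, s):
    return tuple(c * s for c in v)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def frame(d):
    """Unit axis d_hat plus perpendicular unit vectors u_hat, v_hat (eq. 2)."""
    dh = scale(d, 1.0 / norm(d))
    # pick the coordinate axis least aligned with d_hat
    e = min([(1, 0, 0), (0, 1, 0), (0, 0, 1)], key=lambda a: abs(dot(a, dh)))
    u = sub(e, scale(dh, dot(e, dh)))  # remove the component along d_hat
    uh = scale(u, 1.0 / norm(u))
    vh = cross(dh, uh)
    return dh, uh, vh

def spray_vector(d, r, phi, theta):
    """Eq. (5): length r, angle theta off the axis, rotation phi about it."""
    dh, uh, vh = frame(d)
    return tuple(r * math.cos(theta) * dh[i]
                 + r * math.sin(theta) * math.cos(phi) * uh[i]
                 + r * math.sin(theta) * math.sin(phi) * vh[i] for i in range(3))

def in_cone(p, d, theta):
    """Eq. (6): is p inside the cone of half-angle theta about axis d?"""
    return dot(p, d) >= norm(p) * norm(d) * math.cos(theta)

d = (1.0, 2.0, 2.0)
q = spray_vector(d, r=3.0, phi=0.7, theta=0.2)
assert abs(norm(q) - 3.0) < 1e-9        # length is preserved
assert in_cone(q, d, 0.2 + 1e-6)        # lies on the cone of half-angle 0.2
assert not in_cone(q, d, 0.1)           # outside a narrower cone
print(q)
```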
|
3,831,073 | <p>Let <span class="math-container">$\alpha = \sqrt[3]{4+\sqrt{5}}$</span>. I would like to prove that <span class="math-container">$\left[ \mathbb{Q} \left( \alpha \right ) : \mathbb{Q} \right] = 6$</span>. We have <span class="math-container">$\alpha^3 = 4 + \sqrt{5}$</span>, and so <span class="math-container">$(\alpha^3 - 4)^2 = 5$</span>, hence <span class="math-container">$\alpha$</span> is a root of the polynomial <span class="math-container">$f(x)=x^6 - 8 x^3 + 11$</span>.
I tried to prove with various approaches that <span class="math-container">$f(x)$</span> is irreducible over <span class="math-container">$\mathbb{Q}$</span> without success, so I devised the following strategy.</p>
<p>Since <span class="math-container">$x^2 - 5$</span> is irreducible over <span class="math-container">$\mathbb{Q}$</span>, we have <span class="math-container">$\left[ \mathbb{Q} \left( \sqrt{5} \right ) : \mathbb{Q} \right] = 2$</span>. Now from <span class="math-container">$\alpha^3 = 4 + \sqrt{5}$</span> we get <span class="math-container">$\sqrt{5} \in \mathbb{Q} \left( \alpha \right )$</span>, so that <span class="math-container">$\mathbb{Q} \left( \sqrt{5} \right )$</span> is a subfield of <span class="math-container">$\mathbb{Q} \left( \alpha \right )$</span>, <span class="math-container">$\mathbb{Q} \left( \alpha \right )=\mathbb{Q}\left( \sqrt{5}\right) \left( \alpha \right)$</span>, and we have
<span class="math-container">\begin{equation}
\left[ \mathbb{Q} \left( \alpha \right ) : \mathbb{Q} \right] = \left[ \mathbb{Q} \left( \sqrt{5} \right)\left( \alpha \right ) : \mathbb{Q} \left (\sqrt{5} \right) \right] \left[ \mathbb{Q} \left( \sqrt{5} \right ) : \mathbb{Q} \right] .
\end{equation}</span>
Now <span class="math-container">$\alpha$</span> is a root of the polynomial <span class="math-container">$g(x) \in \mathbb{Q} \left (\sqrt{5} \right) [ x ]$</span> given by <span class="math-container">$g(x) = x^3 - 4 - \sqrt{5}$</span>. So to prove our thesis it is enough to prove that this polynomial is irreducible in <span class="math-container">$\mathbb{Q} \left (\sqrt{5} \right) [ x ]$</span>. Being <span class="math-container">$g(x)$</span> of third degree, if it were not irreducible, its factorization would have at least one linear factor, so that <span class="math-container">$g(x)$</span> would have some root in <span class="math-container">$\mathbb{Q} \left (\sqrt{5} \right)$</span>. Hence our problem boils down to show that there are no integers <span class="math-container">$m_0, m_1, n$</span>, with <span class="math-container">$n \neq 0$</span>, such that
<span class="math-container">\begin{equation}
\left( \frac{m_0}{n} + \frac{m_1}{n} \sqrt{5}\right)^3 = 4 + \sqrt{5},
\end{equation}</span>
which gives
<span class="math-container">\begin{equation}
m_0^3 + 5 \sqrt{5} m_1^3 +3 \sqrt{5} m_0^2 m_1 + 15 m_0 m_1^2 = 4 n^3 + \sqrt{5} n^3,
\end{equation}</span>
or
<span class="math-container">\begin{equation}
m_0^3 + 15 m_0 m_1^2 - 4 n^3 + \sqrt{5} \left( 5 m_1^3 +3 m_0^2 m_1 - n^3 \right)=0,
\end{equation}</span>
which implies, being <span class="math-container">$\sqrt{5}$</span> irrational,
<span class="math-container">\begin{cases}
m_0^3 + 15 m_0 m_1^2 - 4 n^3 = 0, \\ 5 m_1^3 +3 m_0^2 m_1 - n^3 = 0.
\end{cases}</span>
At this point I am stuck, because I do not know how to prove that this system admits the only integer solution <span class="math-container">$m_0 = m_1 = n = 0$</span>.</p>
<p>Any help is welcome!</p>
| Sarvesh Ravichandran Iyer | 316,409 | <p>There's of course, the wacky way as suggested by Edward above, thanks to him!</p>
<p>But there's a criterion by Osada , which fits the bill perfectly.</p>
<blockquote>
<p>Let <span class="math-container">$f(x) =x^n + a_{n-1}x^{n-1} + ... + a_1x \pm p$</span> be a monic polynomial with integer coefficients, such that <span class="math-container">$p$</span> is a prime with <span class="math-container">$p > 1 + |a_{n-1}| + ... + |a_1|$</span>, then <span class="math-container">$f(x)$</span> is irreducible over the rationals.</p>
</blockquote>
<p>Here the criterion applies, and we are done, because the polynomial <span class="math-container">$x^6 - 8x^3+11$</span> is then irreducible but also one that <span class="math-container">$\alpha$</span> satisfies, hence must be the minimal polynomial. Therefore the extension by <span class="math-container">$\alpha$</span> has degree <span class="math-container">$6$</span> as desired.</p>
<hr />
<p>We should still see how this works, because looking at the location of complex roots is actually quite a nice way of showing irreducibility of integer polynomials : then the roots are linked to the coefficients via Vieta and something goes wrong in the event of a factorization. This is somewhat different from Eisenstein and mod <span class="math-container">$p$</span> reduction, so it is nice!</p>
<hr />
<p>I will give you a sketch of this proof, with spoilers. Let <span class="math-container">$f$</span> be a polynomial satisfying the premise of Osada's criterion.</p>
<ul>
<li>Suppose <span class="math-container">$f = gh$</span> as polynomials in <span class="math-container">$\mathbb Z[x]$</span> with <span class="math-container">$g,h$</span> non-constant. Why should one of <span class="math-container">$g$</span> or <span class="math-container">$h$</span> have constant coefficient <span class="math-container">$\pm 1$</span>?</li>
</ul>
<blockquote class="spoiler">
<p> This is because <span class="math-container">$|f(0)| = |g(0)h(0)| = p$</span>, but <span class="math-container">$g(0),h(0)$</span> are integers so one of them has coefficient <span class="math-container">$\pm 1$</span>.</p>
</blockquote>
<ul>
<li>WLOG let <span class="math-container">$h$</span> have constant coefficient <span class="math-container">$\pm 1$</span>. Why is there a root <span class="math-container">$\beta$</span> of <span class="math-container">$h$</span> such that <span class="math-container">$|\beta| \leq 1$</span>?</li>
</ul>
<blockquote class="spoiler">
<p> Otherwise all the roots of <span class="math-container">$h$</span> would be greater than <span class="math-container">$1$</span> in modulus. By Vieta's formulas, <span class="math-container">$|h(0)|$</span> is the product of the moduli of all the roots, but this product equals <span class="math-container">$1$</span>, which can't happen if all the roots had moduli <span class="math-container">$>1$</span>.</p>
</blockquote>
<ul>
<li>We actually then have <span class="math-container">$f(\beta) \neq 0$</span>. (HINT : Triangle inequality)</li>
</ul>
<blockquote class="spoiler">
<p> Well, <span class="math-container">$f(\beta) = 0$</span> implies <span class="math-container">$|\beta^n +a_{n-1}\beta^{n-1} + ... + a_1\beta| = p$</span>, but then using the triangle inequality, the LHS is at most <span class="math-container">$1 + |a_{n-1}| + |a_{n-2}| + ... + |a_1|$</span>, so it can't be equal to <span class="math-container">$p$</span>.</p>
</blockquote>
<ul>
<li>But <span class="math-container">$\beta$</span> cannot be a root of <span class="math-container">$h$</span>, and <em>not</em> a root of <span class="math-container">$f$</span>, because <span class="math-container">$h$</span> divides <span class="math-container">$f$</span>! This completes the proof.</li>
</ul>
<hr />
<p>I should add that these techniques for proving irreducibility come under the category "Polynomials with dominant coefficient", where one coefficient is much larger than the others. Indeed, this allows us to locate the roots of any purported factor polynomials, should they exist, and show that they could not be roots of the original polynomial!</p>
<p>The theorems of Ram Murty and Cohn don't come under this category but come under the category of "polynomials taking prime values". There are others, like "polynomials taking small values", and the most difficult but rewarding theory of "Newton polygons".</p>
<hr />
<p>As a bonus, I would like to direct you to "Polynomials" by Viktor Prasolov, which is one of the most rewarding books to read if you like to prove irreducibility of polynomials (which you will see a lot in Galois theory) and other estimates and computations regarding polynomials (like orthonormal bases, approximation, inequalities etc.)</p>
|
3,831,073 | <p>Let <span class="math-container">$\alpha = \sqrt[3]{4+\sqrt{5}}$</span>. I would like to prove that <span class="math-container">$\left[ \mathbb{Q} \left( \alpha \right ) : \mathbb{Q} \right] = 6$</span>. We have <span class="math-container">$\alpha^3 = 4 + \sqrt{5}$</span>, and so <span class="math-container">$(\alpha^3 - 4)^2 = 5$</span>, hence <span class="math-container">$\alpha$</span> is a root of the polynomial <span class="math-container">$f(x)=x^6 - 8 x^3 + 11$</span>.
I tried to prove with various approaches that <span class="math-container">$f(x)$</span> is irreducible over <span class="math-container">$\mathbb{Q}$</span> without success, so I devised the following strategy.</p>
<p>Since <span class="math-container">$x^2 - 5$</span> is irreducible over <span class="math-container">$\mathbb{Q}$</span>, we have <span class="math-container">$\left[ \mathbb{Q} \left( \sqrt{5} \right ) : \mathbb{Q} \right] = 2$</span>. Now from <span class="math-container">$\alpha^3 = 4 + \sqrt{5}$</span> we get <span class="math-container">$\sqrt{5} \in \mathbb{Q} \left( \alpha \right )$</span>, so that <span class="math-container">$\mathbb{Q} \left( \sqrt{5} \right )$</span> is a subfield of <span class="math-container">$\mathbb{Q} \left( \alpha \right )$</span>, <span class="math-container">$\mathbb{Q} \left( \alpha \right )=\mathbb{Q}\left( \sqrt{5}\right) \left( \alpha \right)$</span>, and we have
<span class="math-container">\begin{equation}
\left[ \mathbb{Q} \left( \alpha \right ) : \mathbb{Q} \right] = \left[ \mathbb{Q} \left( \sqrt{5} \right)\left( \alpha \right ) : \mathbb{Q} \left (\sqrt{5} \right) \right] \left[ \mathbb{Q} \left( \sqrt{5} \right ) : \mathbb{Q} \right] .
\end{equation}</span>
Now <span class="math-container">$\alpha$</span> is a root of the polynomial <span class="math-container">$g(x) \in \mathbb{Q} \left (\sqrt{5} \right) [ x ]$</span> given by <span class="math-container">$g(x) = x^3 - 4 - \sqrt{5}$</span>. So to prove our thesis it is enough to prove that this polynomial is irreducible in <span class="math-container">$\mathbb{Q} \left (\sqrt{5} \right) [ x ]$</span>. Since <span class="math-container">$g(x)$</span> has degree three, if it were not irreducible, its factorization would have at least one linear factor, so that <span class="math-container">$g(x)$</span> would have some root in <span class="math-container">$\mathbb{Q} \left (\sqrt{5} \right)$</span>. Hence our problem boils down to showing that there are no integers <span class="math-container">$m_0, m_1, n$</span>, with <span class="math-container">$n \neq 0$</span>, such that
<span class="math-container">\begin{equation}
\left( \frac{m_0}{n} + \frac{m_1}{n} \sqrt{5}\right)^3 = 4 + \sqrt{5},
\end{equation}</span>
which gives
<span class="math-container">\begin{equation}
m_0^3 + 5 \sqrt{5} m_1^3 +3 \sqrt{5} m_0^2 m_1 + 15 m_0 m_1^2 = 4 n^3 + \sqrt{5} n^3,
\end{equation}</span>
or
<span class="math-container">\begin{equation}
m_0^3 + 15 m_0 m_1^2 - 4 n^3 + \sqrt{5} \left( 5 m_1^3 +3 m_0^2 m_1 - n^3 \right)=0,
\end{equation}</span>
which implies, since <span class="math-container">$\sqrt{5}$</span> is irrational,
<span class="math-container">\begin{cases}
m_0^3 + 15 m_0 m_1^2 - 4 n^3 = 0, \\ 5 m_1^3 +3 m_0^2 m_1 - n^3 = 0.
\end{cases}</span>
At this point I am stuck, because I do not know how to prove that this system admits the only integer solution <span class="math-container">$m_0 = m_1 = n = 0$</span>.</p>
<p>Any help is welcome!</p>
| Mummy the turkey | 801,393 | <p>As requested by OP I am rewriting my comment as an answer. We will show that <span class="math-container">$[\mathbb{Q}(\sqrt[3]{4+\sqrt{5}}) : \mathbb{Q}(\sqrt{5})] = 3$</span> by showing that <span class="math-container">$f(x) = x^3 - (4+\sqrt{5})$</span> has no solution in <span class="math-container">$\mathbb{Q}(\sqrt{5})$</span>.</p>
<p>Rather than the approach in the question we notice that <span class="math-container">$\operatorname{Nm}_{\mathbb{Q}(\sqrt{5})/\mathbb{Q}}(4 + \sqrt{5}) = 11$</span>. In particular suppose that <span class="math-container">$\alpha$</span> is a root of <span class="math-container">$f(x)$</span> in <span class="math-container">$\mathbb{Q}(\sqrt{5})$</span>, then
<span class="math-container">\begin{align*}
\operatorname{Nm}_{\mathbb{Q}(\sqrt{5})/\mathbb{Q}}(\alpha)^3 &= \operatorname{Nm}_{\mathbb{Q}(\sqrt{5})/\mathbb{Q}}(\alpha^3) \\ & =\operatorname{Nm}_{\mathbb{Q}(\sqrt{5})/\mathbb{Q}}(4 + \sqrt{5}) =11
\end{align*}</span>
a contradiction, since <span class="math-container">$11$</span> is not the cube of a rational number.</p>
<p>What's really going on under the hood here is that <span class="math-container">$f(x)$</span> is Eisenstein for the prime ideal <span class="math-container">$\mathfrak{p} = (4 + \sqrt{5})$</span>.</p>
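<p>A quick sanity check of the norm argument (illustrative only; the pair representation and the helper names are mine): model $a+b\sqrt{5}$ as the pair $(a,b)$ with exact rationals, verify that the norm $a^2-5b^2$ is multiplicative, that $\operatorname{Nm}(4+\sqrt{5})=11$, and that $11$ is not a cube.</p>

```python
from fractions import Fraction

# Model a + b*sqrt(5) in Q(sqrt(5)) as the pair (a, b), with exact arithmetic.
def mul(u, v):
    a, b = u
    c, d = v
    # (a + b sqrt5)(c + d sqrt5) = (ac + 5bd) + (ad + bc) sqrt5
    return (a * c + 5 * b * d, a * d + b * c)

def norm(u):
    # Nm(a + b sqrt5) = (a + b sqrt5)(a - b sqrt5) = a^2 - 5 b^2
    a, b = u
    return a * a - 5 * b * b

x = (Fraction(4), Fraction(1))                 # 4 + sqrt(5)
assert norm(x) == 11                           # Nm(4 + sqrt5) = 11
assert mul(x, (Fraction(4), Fraction(-1))) == (Fraction(11), Fraction(0))

# The norm is multiplicative, so Nm(alpha)^3 = Nm(alpha^3) would force a
# rational whose cube is 11 -- impossible even among the integer candidates:
u, v = (Fraction(2), Fraction(3)), (Fraction(-1), Fraction(5))
assert norm(mul(u, v)) == norm(u) * norm(v)
assert all(k ** 3 != 11 for k in range(-11, 12))
```

<p>Of course this only illustrates the mechanics; the actual proof is the norm computation above.</p>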
|
68,563 | <p>I was wondering if there's a formula for the cardinality of the set $A_k=\{(i_1,i_2,\ldots,i_k):1\leq i_1<i_2<\cdots<i_k\leq n\}$ for some $n\in\mathbb{N}$. I calculated that $A_2$ has $n(n-1)/2$ elements, and that $A_3$ has $\sum_{j=1}^{n-2}\frac{(n-j)(n-j-1)}{2}$ elements (fixing $i_1=j$ and choosing the pair $i_2<i_3$ from the $n-j$ larger values). As you can see, the cardinality of $A_3$ is already represented by a not so nice formula.</p>
<p>Is there a general formula?</p>
| AndJM | 16,682 | <p>The $A_k$ can also be expressed as $\{(i_1,i_2,\ldots,i_k)\;|\; 1\leq i_1\leq n-(k-1),i_1+1\leq i_2\leq n-(k-2),\ldots,i_{k-1}+1\leq i_k\leq n\}$. This way, it is clear how many choices there are for each $i_j$. Multiplying will give you the ol' $n \choose k$ formula.</p>
<p>edit: Apologies. It's not clear to me right now how to do the multiplication!</p>
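<p>For what it's worth, the claimed count $\binom{n}{k}$ is easy to confirm by brute force for small $n$; a quick Python check (my own sketch, not part of the answer above):</p>

```python
from itertools import combinations
from math import comb

def card_A(n, k):
    # |A_k|: strictly increasing k-tuples (i_1 < ... < i_k) drawn from {1,...,n}
    return sum(1 for _ in combinations(range(1, n + 1), k))

for n in range(1, 9):
    for k in range(n + 1):
        assert card_A(n, k) == comb(n, k)

assert card_A(6, 2) == 6 * 5 // 2      # matches the OP's formula for A_2
```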
|
521,500 | <p>Today we proved the (simple) Markov property for Brownian motion. But I really don't get a crucial step in the proof.
The theorem states in particular that for $s\geq0$ fixed, the process $(C_t:=B_{t+s}-B_{s}, t\geq0)$ is independent of $\mathcal{F}_s=\sigma(B_u, 0\leq u\leq s)$.</p>
<p>The proof starts with the remark, that it suffices to show that $\forall n, 0\leq t_1<t_2\dots<t_n$ and $\forall m, 0\leq u_1<u_2\dots<u_m$ the two vectors $(C_{t_1},\dots,C_{t_n})$ and $(B_{u_1},\dots,B_{u_m})$ are independent. But I just cannot figure out why this is true?</p>
<p>Anyone got some advice? Thanks a lot!</p>
| Did | 6,179 | <p>Because these sigma-algebras $\sigma(B_u;u\leqslant s)$ and $\sigma(B_{s+u}-B_s;u\geqslant0)$ are generated by the pi-systems one suggested that you use; hence, if the pi-systems are independent, so are the sigma-algebras (a result often called Dynkin's pi-lambda theorem).</p>
|
1,407,131 | <p>I need to prove the following integral is convergent and find an upper bound
$$\int_{0}^{\infty} \int_{0}^{\infty} \frac{1}{1+x^2+y^4} dx dy$$</p>
<p>I've tried comparing with $\frac{1}{1+x^2+y^2}$, but its double integral doesn't converge</p>
| David C. Ullrich | 248,223 | <p>Finding the exact value of $\int_0^\infty\frac{dx}{a^2+x^2}$ is just a calc I exercise. Let $a=\sqrt{1+y^4}$ and see what happens...</p>
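<p>Numerically, the hint checks out: $\int_0^L \frac{dx}{a^2+x^2} = \frac{1}{a}\arctan(L/a) \to \frac{\pi}{2a}$, and $\int_0^\infty \frac{\pi/2}{\sqrt{1+y^4}}\,dy$ is finite since the integrand decays like $y^{-2}$. A small illustrative check (the function name is mine):</p>

```python
from math import atan, pi, sqrt

def inner_integral(a, L=1e7):
    # exact antiderivative: int_0^L dx/(a^2 + x^2) = arctan(L/a)/a -> pi/(2a)
    return atan(L / a) / a

for y in (0.0, 1.0, 2.5):
    a = sqrt(1 + y ** 4)        # a^2 = 1 + y^4, as in the hint
    assert abs(inner_integral(a) - pi / (2 * a)) < 1e-4
```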
|
1,190,083 | <p>A positive element x of a C*-algebra A is a self-adjoint element whose spectrum is contained in the non-negative reals. If there's a faithful finite-dimensional representation of A where the involution is conjugate transposition, I think the second condition just means that x can be thought of as a matrix with positive eigenvalues, so it is self-adjoint*. Are there examples of C*-algebras with elements that have non-negative real spectra but that are not self-adjoint? What is the reason for not counting such elements as positive?</p>
<p>*This isn't true, but I'm leaving it in in case other people make the same mistake.</p>
| aly | 169,618 | <p>You can also have an infinite-dimensional example. Take $x$ and $y$ to be two linearly independent elements in a Hilbert space $\mathcal{H}$ with dimension at least $2$ such that $\langle x,y\rangle\ge 0$. Then the rank-one operator $x\otimes y$ (defined as $z\mapsto\langle z,y\rangle x$) has the spectrum $\{0,\langle x,y\rangle\}\subset [0,\infty)$ and it is not selfadjoint, as $(x\otimes y)^*=y\otimes x$. The $C^*$-algebra can be taken, in this case, to be $B(\mathcal{H})$.</p>
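<p>In finite dimensions the same construction is visible by hand: with $x=(1,0)$, $y=(1,1)$ in $\mathbb{R}^2$ (so $\langle x,y\rangle=1$), the matrix of $x\otimes y$ is $\begin{pmatrix}1&1\\0&0\end{pmatrix}$, which has spectrum $\{0,1\}$ but is not self-adjoint. A quick illustrative verification in plain Python:</p>

```python
# Rank-one operator x (x) y on R^2: z |-> <z, y> x, with x = (1,0), y = (1,1).
x = (1.0, 0.0)
y = (1.0, 1.0)

def outer(x, y):
    # matrix entries A[i][j] = x[i] * y[j]  (real inner product, no conjugation)
    return [[x[i] * y[j] for j in range(2)] for i in range(2)]

A = outer(x, y)                                        # [[1, 1], [0, 0]]
At = [[A[j][i] for j in range(2)] for i in range(2)]   # adjoint = transpose

# eigenvalues of a 2x2 matrix from trace and determinant
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = tr * tr - 4 * det
eigs = sorted(((tr - disc ** 0.5) / 2, (tr + disc ** 0.5) / 2))

assert eigs == [0.0, 1.0]      # spectrum {0, <x, y>} is contained in [0, oo)
assert A != At                 # yet the operator is not self-adjoint
```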
|
9,085 | <p>So as the title says I am trying to make a list where each element is determined by a user's choice of an element in a PopupMenu.</p>
<p>My first attempt:</p>
<pre><code>test = Table["A", {5}];
Table[PopupMenu[Dynamic[test[[n]]], {"A", "B", "C"}], {n, 5}]
</code></pre>
<p>This returned the following error:</p>
<pre><code>Part::pspec: Part specification n is neither an integer nor a list of integers.
</code></pre>
<p>For some reason the dynamic(?) would not allow me to refer to specific elements in the list. I then tried to circumvent this issue by introducing an extra variable <em>temp</em>:</p>
<pre><code>Table[temp = n;PopupMenu[Dynamic[test[[temp]]], {"A", "B", "C"}], {n, 5}]
</code></pre>
<p>However, all this did was create 5 PopupMenus that all referred to the $5^{\text{th}}$ element of the list <em>test</em>. I tried to put a <code>Setting[]</code> around the <code>Dynamic[]</code>, but since that removes the effect of <code>Dynamic[]</code>, nothing happened at all.</p>
<p>Any suggestions would be greatly appreciated.</p>
| kglr | 125 | <p>You can also use:</p>
<pre><code> test = Table["A", {5}];
Table[With[{n = n}, PopupMenu[Dynamic[test[[n]]], {"A", "B", "C"}]], {n, 5}]
</code></pre>
<p>or </p>
<pre><code> Table[PopupMenu[Dynamic[test[[k]]], {"A", "B", "C"}] /. k -> n, {n, 5}]
</code></pre>
|
1,598,451 | <p><em>(Sorry for the inconvenience related to the tags, please feel free to correct my post if it needs a better scope by adding some other tags).</em></p>
<p><strong>CONTEXT</strong></p>
<p>I have several (decimal) numbers shaped like this :</p>
<ul>
<li>1.081</li>
<li>289.089167</li>
<li>2.98</li>
<li>...</li>
</ul>
<p><strong>PROBLEM</strong></p>
<p>I would like to get a decimal number, which I call the "precision": a power of ten determined by the number of digits after the decimal point of the given (decimal) number.</p>
<p><strong>EXPECTED RESULTS</strong></p>
<ul>
<li>1.081 => <strong>0.001</strong></li>
<li>289.089167 => <strong>0.000001</strong></li>
<li>2.98 => <strong>0.01</strong></li>
<li>67.00...n => <strong>0.0...(n-1)..1</strong></li>
</ul>
<p><strong>ATTEMPTS</strong></p>
<p>I work in IT, more precisely on an audio app. So I have an audio file as input, and it gives me the audio duration.</p>
<p>What I am trying to achieve is to set a range, as found on many other websites, shaped as follows:</p>
<pre><code><input type="range" value="0" min="0" max="???" />
</code></pre>
<p>And initialized to 0. The user can drag the cursor to change the currentTime of the audio, and to be as precise as possible, I have to get the precision in order to set the "max='???'" as follows:</p>
<pre><code>max="getPrecision(audio.duration)"
</code></pre>
<p>I simplified the code; in reality the max property will be changed via JavaScript, but that is not the aim of my question.</p>
<p><strong>QUESTION</strong></p>
<p>Does a mathematical formula exist to get this expected output?</p>
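<p>For reference, the mapping in the expected results corresponds to the formula 10<sup>-d</sup>, where d is the number of digits after the decimal point. A quick sketch of the idea (in Python rather than JavaScript, and the function name is mine):</p>

```python
def precision(s: str) -> float:
    # "1.081" has 3 digits after the point -> 10**-3 = 0.001
    if "." not in s:
        return 1.0
    d = len(s.split(".", 1)[1])
    return 1 / 10 ** d

assert precision("1.081") == 0.001
assert precision("2.98") == 0.01
assert precision("289.089167") == 0.000001
```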
| Ron Gordon | 53,268 | <p>Hint:</p>
<p>$$\begin{align}\int_1^{100} dx \frac{f(x)}{x} = \int_1^{10} dx \frac{f(x)}{x} + \int_{10}^{100} dx \frac{f(x)}{x} \end{align}$$</p>
<p>and sub $x=100/u$ in the 2nd integral.</p>
|
1,598,451 | <p><em>(Sorry for the inconvenience related to the tags, please feel free to correct my post if it needs a better scope by adding some other tags).</em></p>
<p><strong>CONTEXT</strong></p>
<p>I have several (decimal) numbers shaped like this :</p>
<ul>
<li>1.081</li>
<li>289.089167</li>
<li>2.98</li>
<li>...</li>
</ul>
<p><strong>PROBLEM</strong></p>
<p>I would like to get a decimal number, which I call the "precision": a power of ten determined by the number of digits after the decimal point of the given (decimal) number.</p>
<p><strong>EXPECTED RESULTS</strong></p>
<ul>
<li>1.081 => <strong>0.001</strong></li>
<li>289.089167 => <strong>0.000001</strong></li>
<li>2.98 => <strong>0.01</strong></li>
<li>67.00...n => <strong>0.0...(n-1)..1</strong></li>
</ul>
<p><strong>ATTEMPTS</strong></p>
<p>I work in IT, more precisely on an audio app. So I have an audio file as input, and it gives me the audio duration.</p>
<p>What I am trying to achieve is to set a range, as found on many other websites, shaped as follows:</p>
<pre><code><input type="range" value="0" min="0" max="???" />
</code></pre>
<p>And initialized to 0. The user can drag the cursor to change the currentTime of the audio, and to be as precise as possible, I have to get the precision in order to set the "max='???'" as follows:</p>
<pre><code>max="getPrecision(audio.duration)"
</code></pre>
<p>I simplified the code; in reality the max property will be changed via JavaScript, but that is not the aim of my question.</p>
<p><strong>QUESTION</strong></p>
<p>Does a mathematical formula exist to get this expected output?</p>
| Chappers | 221,811 | <p>Suppose more generally that $a>0$ and
$$ f(x)=f(a^2/x). $$
Then we need to look at
$$ \int_a^{a^2} \frac{f(x)}{x} \, dx = \int_a^{a^2} \frac{f(a^2/x)}{x} \, dx. $$
Use the substitution $y=a^2/x$: then $x=a \implies y=a$, $x=a^2 \implies y=1$, and $dx/x=-dy/y$, so
$$ \int_a^{a^2} \frac{f(a^2/x)}{x} \, dx = \int_1^a \frac{f(y)}{y} \, dy, $$
and so
$$ \int_1^{a^2} \frac{f(x)}{x} \, dx = 2\int_1^a \frac{f(x)}{x} \, dx. $$</p>
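<p>One can sanity-check the identity numerically with any $f$ satisfying $f(x)=f(a^2/x)$, e.g. $f(x)=g(x)+g(a^2/x)$ for an arbitrary $g$; a rough midpoint-rule sketch (all names mine):</p>

```python
from math import sin

def integrate(h, lo, hi, n=50_000):
    # plain midpoint rule -- plenty accurate for a sanity check
    dx = (hi - lo) / n
    return sum(h(lo + (i + 0.5) * dx) for i in range(n)) * dx

a = 3.0
g = lambda t: sin(t) / (1 + t * t)
f = lambda t: g(t) + g(a * a / t)       # satisfies f(t) == f(a**2 / t)

lhs = integrate(lambda t: f(t) / t, 1.0, a * a)
rhs = 2 * integrate(lambda t: f(t) / t, 1.0, a)
assert abs(lhs - rhs) < 1e-6
```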
|
2,521,331 | <p>I need to show that when we have $X,Y$ - any metric spaces - and
<br>
$f:X \ni x \to a \in Y$ is constant, then $f$ is continuous.</p>
<p>Let $(X,\tau_{1}),(Y,\tau_{2})$ be topological spaces, $f: X\to Y$.
I know the definition: $f: X\to Y$ is continuous if $\forall_{W \in \tau_{2}}\ f^{-1}[W] \in \tau_{1}$. <br></p>
<p>Maybe let $U$ be open in $Y$; then $\operatorname{id}_X^{-1}(U) = U$<br>
$\operatorname{const}^{-1}(U)= \begin{cases}
X , a\in U \\
\emptyset , a \notin U
\end{cases}$
?</p>
| Hayfisher | 503,715 | <p>This holds for every topological space, not just for metric spaces. Let $a \in Y$ be fixed. Since</p>
<p>$$f:X \to Y, x \mapsto a$$ holds, the preimage of any $V \subseteq Y$ is</p>
<ul>
<li>$\emptyset$, iff $a \notin V$,</li>
<li>whole $X$, iff $a \in V$, as you already noted.</li>
</ul>
<p>Thus the preimage of ANY subset of $Y$ is either $\emptyset$ or whole $X$.
Clearly $\emptyset$ and $X$ are open in $X$ by definition of a topology, so the preimage of any subset of $Y$, in particular of any open one, is open in $X$. Hence $f$ is continuous: by definition, a map between topological spaces is continuous if and only if the preimage of every open set is open, and here $f^{-1}(V)$ is open in $X$ for every open $V \subseteq Y$.</p>
|
3,969,943 | <p>It's been a few years since I last did any trigonometry questions, and I seem to have forgotten everything about it. Below is a question with the solution. You're not supposed to use a calculator.</p>
<p><span class="math-container">$$\begin{align}
&\cos\frac{2\pi}{3}+\tan\frac{7\pi}{4}-\sin\frac{7\pi}{6} \\[4pt]
&=-\cos\frac{\pi}{3}-\tan\frac{\pi}{4}-\left(-\sin\frac{\pi}{6}\right) \\[4pt]
&=-\frac12-1+\frac12 \\[4pt]
&=-1
\end{align}$$</span></p>
<p>Can somebody explain the following to me?</p>
<ul>
<li>How <span class="math-container">$\cos(2\pi/3)$</span> becomes <span class="math-container">$-\cos(\pi/3)$</span></li>
<li>How <span class="math-container">$\tan(7\pi/4)$</span> becomes <span class="math-container">$-\tan(\pi/4)$</span></li>
<li>How <span class="math-container">$-\sin(7\pi/6)$</span> becomes <span class="math-container">$-(-\sin(\pi/6))$</span></li>
</ul>
<p>Thanks</p>
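<p>For reference, the three reductions can be checked numerically; they come from the reference-angle identities $\cos(\pi-\theta)=-\cos\theta$, $\tan(2\pi-\theta)=-\tan\theta$ and $\sin(\pi+\theta)=-\sin\theta$ (a quick illustrative check):</p>

```python
from math import cos, sin, tan, pi, isclose

assert isclose(cos(2 * pi / 3), -cos(pi / 3))   # cos(pi - t) = -cos t
assert isclose(tan(7 * pi / 4), -tan(pi / 4))   # tan(2*pi - t) = -tan t
assert isclose(sin(7 * pi / 6), -sin(pi / 6))   # sin(pi + t) = -sin t

total = cos(2 * pi / 3) + tan(7 * pi / 4) - sin(7 * pi / 6)
assert isclose(total, -1.0)                     # the whole expression is -1
```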
| Kavi Rama Murthy | 142,385 | <p>For (a) what you have done is correct.</p>
<p>For (b) your argument is not valid. Note that <span class="math-container">$\sum \ln (1+a_n) <\infty$</span>. This implies that <span class="math-container">$\ln (1+a_n) \to 0$</span> so <span class="math-container">$a_n \to 0$</span>. Now, there exists <span class="math-container">$\delta >0$</span> such that <span class="math-container">$\ln (1+x) \geq \frac 1 2 x$</span> for <span class="math-container">$0<x <\delta$</span> (because <span class="math-container">$\lim_{x \to 0}\frac {ln (1+x)} x=1$</span>). Hence, <span class="math-container">$a_n <2 \ln (1+a_n)$</span> for <span class="math-container">$n$</span> sufficiently large. This proves that <span class="math-container">$\sum a_n <\infty$</span>.</p>
<p>The converse part follows similarly by looking at <span class="math-container">$\ln (1+a_n)$</span> and noting that <span class="math-container">$\ln (1+x) \leq x$</span> for all <span class="math-container">$x >0$</span>.</p>
|
52,657 | <p>I have a pair of points at my disposal. One of these points represents the parabola's maximum y-value, which always occurs at x=0. I also have a point which represents the parabola's x-intercept(s). Given this information, is there a way to rapidly derive the formula for this parabolic curve? My issue is that I need to generate this equation directly in computer software, but all the standard-formula definitions for a parabolic curve use its Vertex, not its intercepts. Is there some standard form of equation into which these intercepts can be 'plugged in' in order to produce a working relation? If not, what is the most computationally direct way to solve this problem?</p>
| Shaun Ault | 13,074 | <p>To answer question found in the title: "... an equation for a parabola from its $x$ and $y$ intercepts", the correct equation is:</p>
<p>$$y = \frac{c}{ab}(x-a)(x-b),$$</p>
<p>where $a, b$ are the $x$-intercepts and $c$ is the $y$-intercept. We can prove this is correct by noting that $y = 0$ when $x=a$ or $x=b$ is substituted, and when $x=0$, we have $y = \frac{c}{ab}(-a)(-b) = c$.</p>
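<p>In code the formula plugs in directly; a small sketch (the names are mine):</p>

```python
def parabola_from_intercepts(a, b, c):
    # y = (c / (a*b)) * (x - a) * (x - b): zero at x = a and x = b, value c at x = 0
    return lambda x: c / (a * b) * (x - a) * (x - b)

# symmetric intercepts -4 and 4 put the vertex (the maximum, here 10) at x = 0
p = parabola_from_intercepts(-4.0, 4.0, 10.0)
assert p(-4.0) == 0.0 and p(4.0) == 0.0   # x-intercepts
assert p(0.0) == 10.0                     # y-intercept
```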
|
52,657 | <p>I have a pair of points at my disposal. One of these points represents the parabola's maximum y-value, which always occurs at x=0. I also have a point which represents the parabola's x-intercept(s). Given this information, is there a way to rapidly derive the formula for this parabolic curve? My issue is that I need to generate this equation directly in computer software, but all the standard-formula definitions for a parabolic curve use its Vertex, not its intercepts. Is there some standard form of equation into which these intercepts can be 'plugged in' in order to produce a working relation? If not, what is the most computationally direct way to solve this problem?</p>
| Zar | 14,450 | <p>the equation would look like this</p>
<p>$$ y = k(x-a)(x-b)$$</p>
<p>Now we have to figure out what $k$ is. We know the maximum value, call it $c$, and that its $x$ value is $0$. Therefore we can plug this into the equation to get the following:</p>
<p>$$c = k(-a)(-b)$$
$$c = kab$$
therefore
$k = c/(ab)$,
and your equation is</p>
<p>$$y = \frac{c}{ab}(x-a)(x-b)$$</p>
|
1,797,712 | <p>Let $G = \Bbb{Z}_{360} \oplus \Bbb{Z}_{150} \oplus \Bbb{Z}_{75} \oplus \Bbb{Z}_{3}$</p>
<p>a. How many elements of order 5 in $G$</p>
<p>b. How many elements of order 25 in $G$</p>
<p>c. How many elements of order 35 in $G$</p>
<p>d. How many subgroups of order 25 in $G$</p>
<p>I think I have done a, b, c correctly and got 124 elements of order 5, 3000 elements of order 25, and 0 elements of order 35,</p>
<p>but I'm not sure if that is correct, or how to approach d.</p>
| Josh Hunt | 282,747 | <p>What method did you use?</p>
<p>In general the cyclic group of order $n$ has $\phi(d)$ elements of order $d$ whenever $d | n$. (You have $\phi(n)$ elements of order $n$, any element generates a cyclic subgroup, and summing these up gives you the order of the group.)</p>
<p>Also, if you have two elements of orders $m$ and $n$, then the order of their sum (in an abelian group) divides $\text{lcm}(m,n)$, with equality when the elements lie in different direct summands.</p>
<p>So in $\mathbb Z_{360}$, $\mathbb Z_{150}$ and $\mathbb Z_{75}$ there are $\phi(5) = 4$ elements of order 5, plus the identity, and any sum of these (one component from each factor) has order dividing 5: this makes $5^3 - 1$ or 124 elements of order exactly 5.</p>
<p>To approach (d), note that every subgroup of order 25 will either be $\mathbb Z_5 \oplus \mathbb Z_5$ or $\mathbb Z_{25}$. You've found the elements of order 5 and of order 25, so can you use this to deduce the answer? (Warning: all 4 of the non-identity elements in a cyclic group of order 5 will generate the same subgroup!)</p>
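<p>These counts are easy to cross-check by machine: in $\mathbb Z_n$ the equation $dx = 0$ has exactly $\gcd(d,n)$ solutions, so the number of elements of $G$ of order dividing $d$ is $\prod_i \gcd(d, n_i)$, and exact orders follow by inclusion-exclusion. An illustrative sketch:</p>

```python
from math import gcd

factors = (360, 150, 75, 3)

def order_dividing(d):
    # in Z_n, the solutions of d*x = 0 form a subgroup of size gcd(d, n)
    out = 1
    for n in factors:
        out *= gcd(d, n)
    return out

order5 = order_dividing(5) - order_dividing(1)
order25 = order_dividing(25) - order_dividing(5)
# order exactly 35 by inclusion-exclusion over the divisors 35, 7, 5, 1
order35 = (order_dividing(35) - order_dividing(7)
           - order_dividing(5) + order_dividing(1))

assert order5 == 124
assert order25 == 3000
assert order35 == 0
```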
|
2,366,610 | <p>Let $U$ be an $n \times n$ unitary matrix and $X$ an $n \times n$ real symmetric matrix. Suppose that $$U^\dagger X U = X$$ for all real symmetric $X$, then what are the allowed unitaries $U$? It seems that the only possible $U$ is some phase multiple of the identity $U=aI$ where $|a|=1$ but I'm not able to show that this is the only allowed unitary.</p>
| Bernard | 202,857 | <p>You can use <em>equivalents</em> and expansion in power series.</p>
<p>Let's begin with the denominator:
$$\sinh x-\sin x=x+\frac{x^3}{3!}+o(x^3)-\Bigl(x-\frac{x^3}{3!}+o(x^3)\Bigr)=\frac{x^3}3+o(x^3)\sim_0\frac{x^3}3.$$
Now for the numerator:</p>
<p>First, by definition, $\;\operatorname{arsinh}(\sinh(x))=x$.</p>
<p>Next,
$$\operatorname{arsinh} x=x-\frac12\frac{x^3}3+\frac{1\cdot 3}{2\cdot4}\frac{x^5}5-\frac{1\cdot 3\cdot5}{2\cdot4\cdot6}\frac{x^7}7+\dotsm $$ </p>
<p>We'll deduce the expansion of $\operatorname{arsinh} (\sin x)$ at order $3$. Remember asymptotic expansions can be composed:
\begin{align}
\operatorname{arsinh} (\sin x)&=\operatorname{arsinh}\Bigl(x-\frac{x^3}6+o(x^3)\Bigr)=\Bigl(x-\frac{x^3}6\Bigr)-\frac16\Bigl(x-\frac{x^3}6\Bigr)^3+o(x^3)\\
&=x-\frac{x^3}6-\frac16x^3+o(x^3)=x-\frac{x^3}3+o(x^3),
\end{align}
so that
$$\operatorname{arsinh}(\sinh(x))-\operatorname{arsinh}(\sin(x))=x-x+\frac{x^3}3+o(x^3)=\frac{x^3}3+o(x^3)\sim_0\frac{x^3}3.$$</p>
<p>Ultimately, we obtain (if the computation is correct):
$$\frac{\operatorname{arsinh}(\sinh(x))-\operatorname{arsinh}(\sin(x))}{\sinh x-\sin x}\sim_0\frac{\dfrac{x^3}3}{\dfrac{x^3}3}=1. $$</p>
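<p>A quick numeric check of the final equivalence (illustrative only):</p>

```python
from math import asinh, sin, sinh

def ratio(x):
    return (asinh(sinh(x)) - asinh(sin(x))) / (sinh(x) - sin(x))

# the ratio tends to 1 as x -> 0 (the deviation is of order x**2)
for x in (0.1, 0.01, 0.001):
    assert abs(ratio(x) - 1.0) < 0.02
```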
|
4,444,669 | <p>I'm unsure about the problem below</p>
<hr>
Under which conditions is the following system of linear equations solvable?
<span class="math-container">$$x_1 + 2x_2 - 3x_3 = a$$</span>
<span class="math-container">$$3x_1 - x_2 + 2x_3 = b$$</span>
<span class="math-container">$$x_1 - 5x_2 + 8x_3 = c$$</span>
<hr>
<p>We set up our matrix</p>
<p><span class="math-container">$$\begin{bmatrix}
1 & 2 & -3 & | a \\
3 & -1 & 2 & | b \\
1 & -5 & 8 & | c \\
\end{bmatrix}$$</span></p>
<p>We add -3 times the first row to the second row and -1 times the first row to the third row. Then we add -1 times the second row to the third row. We get</p>
<p><span class="math-container">$$\begin{bmatrix}
1 & 2 & -3 & |a\\
0 & -7 & 11 & |b - 3a\\
0 & 0 & 0 & |2a - b + c\\
\end{bmatrix}$$</span></p>
<p>So <span class="math-container">$2a - b + c = 0$</span> is the condition for the system to be solvable. Is this correct? I fear that there are other conditions that I forgot.</p>
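<p>For reference, the condition can be checked exactly with rational arithmetic, back-substituting from the reduced rows with x3 = t free (an illustrative sketch; the helper name is mine):</p>

```python
from fractions import Fraction as F

def residual(a, b, c, t):
    # reduced rows above: -7*x2 + 11*x3 = b - 3a, with x3 = t free
    x2 = (3 * a - b + 11 * t) / F(7)
    x1 = a - 2 * x2 + 3 * t
    # plug into the third original equation; this simplifies to (b - 2a) - c
    return x1 - 5 * x2 + 8 * t - c

a, b, t = F(1), F(3), F(5)
assert residual(a, b, b - 2 * a, t) == 0    # c = b - 2a, i.e. 2a - b + c = 0
assert residual(a, b, F(7), t) != 0         # condition violated: inconsistent
assert residual(a, b, F(7), t) == residual(a, b, F(7), F(-9))  # t drops out
```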
| John Bentin | 875 | <p>The proof will depend on which model you take for the real numbers (e.g. Dedekind cuts, equivalence classes of Cauchy sequences, etc.). Perhaps the easiest model for this question is (despite its arbitrariness and its awkwardness in other respects) the traditional one of decimal expansions. Thus, any positive real number <span class="math-container">$x$</span> may be expressed in the form <span class="math-container">$x=m+\sum_{k=1}^\infty n_k10^{-k}$</span>, where <span class="math-container">$m\in\Bbb N$</span> and <span class="math-container">$n_k\in\{0,...,9\}\,$</span> <span class="math-container">$(k=1,2,...)$</span>, and where not all of <span class="math-container">$m$</span> and the <span class="math-container">$n_k$</span> are <span class="math-container">$0$</span>. If <span class="math-container">$m>0$</span>, then <span class="math-container">$n=2$</span> will do. Otherwise, <span class="math-container">$m=0$</span> and at least one of the <span class="math-container">$n_k$</span>, say <span class="math-container">$n_l$</span>, is nonzero. Then we can take <span class="math-container">$n=10^{l+1}$</span>.</p>
|
3,736,706 | <p>Let <span class="math-container">$M$</span> be an <span class="math-container">$A$</span>-module and let <span class="math-container">$\mathfrak{a}$</span> and <span class="math-container">$\mathfrak{b}$</span> be coprime ideals of A.</p>
<p>I must show that <span class="math-container">$M/ \mathfrak{a}M \oplus M/ \mathfrak{b}M \simeq M/ (\mathfrak{a \cap b})M$</span>.</p>
<p>My attempt is the following:</p>
<p>Let <span class="math-container">$x \in M/ \mathfrak{a}M \oplus M/ \mathfrak{b}M$</span>,then <span class="math-container">$x = [y]+[z]$</span>, where <span class="math-container">$[y] = y+\mathfrak{a}M $</span> and <span class="math-container">$[z]=z + \mathfrak{b}M $</span>, <span class="math-container">$y,z \in M$</span>.</p>
<p>So, <span class="math-container">$x = y+z+ \mathfrak{a}M +\mathfrak{b}M $</span>.</p>
<p><span class="math-container">$\mathfrak{a}M +\mathfrak{b}M =\{z | z=am_1+bm_2, a \in \mathfrak{a}, b \in \mathfrak{b} \} $</span>. But then I don't know how to continue.</p>
<p>Is this approach correct? Or is there another way to prove it?
Thanks</p>
| Bernard | 202,857 | <p><strong>Hint</strong>:</p>
<p>Consider the short exact sequence:
<span class="math-container">$$0\longrightarrow A/\mathfrak a\cap\mathfrak b\longrightarrow A/\mathfrak a\times A/\mathfrak b\longrightarrow A/\mathfrak a+\mathfrak b\longrightarrow 0 $$</span>
and tensor by <span class="math-container">$M$</span>.</p>
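<p>For a concrete sanity check of the statement, take $A=M=\mathbb Z$, $\mathfrak a=(2)$, $\mathfrak b=(3)$, so $\mathfrak a\cap\mathfrak b=(6)$; the map $m\mapsto(m\bmod 2,\ m\bmod 3)$ realizes $M/(\mathfrak a\cap\mathfrak b)M\simeq M/\mathfrak aM\oplus M/\mathfrak bM$. A brute-force verification (illustrative only):</p>

```python
from itertools import product

# phi : Z/6Z -> Z/2Z x Z/3Z,  m |-> (m mod 2, m mod 3)
phi = {m: (m % 2, m % 3) for m in range(6)}

# bijective (this is where coprimality (2) + (3) = Z is used) ...
assert sorted(phi.values()) == sorted(product(range(2), range(3)))

# ... and additive, so it is an isomorphism of Z-modules
for m in range(6):
    for k in range(6):
        s = (m + k) % 6
        assert phi[s] == ((phi[m][0] + phi[k][0]) % 2, (phi[m][1] + phi[k][1]) % 3)
```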
|
3,766,585 | <p>Let <span class="math-container">$X_1,X_2,...,X_n$</span> be random sample from a DF <span class="math-container">$F$</span>, and let <span class="math-container">$F_n^* (x)$</span> be the sample distribution function.</p>
<p>We have to find <span class="math-container">$\operatorname{Cov}(F_n^* (x), F_n^* (y))$</span> for fixed real numbers <span class="math-container">$x, y$</span> where <span class="math-container">$F_n^* (x)$</span> is a sample distribution.</p>
<p>My approach:</p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \mathbb{E}[(F_n^* (x) - \mathbb{E}[F_n^* (x)])(F_n^* (y) - \mathbb{E}[F_n^* (y)])]$$</span></p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \mathbb{E}[F_n^* (x) .F_n^* (y)] - \mathbb{E}[F_n^* (x)]\mathbb{E}[F_n^* (y)]$$</span></p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \mathbb{E}\bigg[\bigg(F_n^* (\min(x, y))\bigg) \bigg(F_n^* (\min(x, y)) + \int_{\min(x,y)}^{\max(x,y)} f_n^* (x) dx\bigg)\bigg] - \mathbb{E}[F_n^* (x)]\mathbb{E}[F_n^* (y)]$$</span></p>
<p>where <span class="math-container">$f_n^*(x)$</span> is a probability density function of the sample.</p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \mathbb{E}\bigg[\bigg(F_n^* (\min(x, y))\bigg)^2\bigg]+\mathbb{E}\bigg[ \bigg(F_n^* (\min(x, y)).\int_{\min(x,y)}^{\max(x,y)} f_n^* (x) dx\bigg)\bigg] - \frac{F_n (x)F_n (y)}{n^2}$$</span></p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \frac{(F_n (\min(x, y))^2}{n} - \frac{F_n (x)F_n (y)}{n^2} +\mathbb{E}\bigg[ \bigg(F_n^* (\min(x, y)).\int_{\min(x,y)}^{\max(x,y)} f_n^* (x) dx\bigg)\bigg] $$</span></p>
<p>How can I proceed from here?</p>
| VIVID | 752,069 | <p>It comes down to two known limits as follows:
<span class="math-container">$$\lim_{x\to 0} \frac{\ln |1+x^3|}{\sin^3 x}=\lim_{x\to 0} \frac{\ln |1+x^3|}{x^3}\frac{x^3}{\sin^3 x}=1\cdot 1=1$$</span></p>
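<p>Numerically the two factors indeed tend to $1$ (illustrative check):</p>

```python
from math import log, sin

def q(x):
    return log(abs(1 + x ** 3)) / sin(x) ** 3

# q(x) -> 1 as x -> 0, from either side
for x in (0.1, 0.01, -0.01, 0.001):
    assert abs(q(x) - 1.0) < 0.05
```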
|
3,766,585 | <p>Let <span class="math-container">$X_1,X_2,...,X_n$</span> be random sample from a DF <span class="math-container">$F$</span>, and let <span class="math-container">$F_n^* (x)$</span> be the sample distribution function.</p>
<p>We have to find <span class="math-container">$\operatorname{Cov}(F_n^* (x), F_n^* (y))$</span> for fixed real numbers <span class="math-container">$x, y$</span> where <span class="math-container">$F_n^* (x)$</span> is a sample distribution.</p>
<p>My approach:</p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \mathbb{E}[(F_n^* (x) - \mathbb{E}[F_n^* (x)])(F_n^* (y) - \mathbb{E}[F_n^* (y)])]$$</span></p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \mathbb{E}[F_n^* (x) .F_n^* (y)] - \mathbb{E}[F_n^* (x)]\mathbb{E}[F_n^* (y)]$$</span></p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \mathbb{E}\bigg[\bigg(F_n^* (\min(x, y))\bigg) \bigg(F_n^* (\min(x, y)) + \int_{\min(x,y)}^{\max(x,y)} f_n^* (x) dx\bigg)\bigg] - \mathbb{E}[F_n^* (x)]\mathbb{E}[F_n^* (y)]$$</span></p>
<p>where <span class="math-container">$f_n^*(x)$</span> is a probability density function of the sample.</p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \mathbb{E}\bigg[\bigg(F_n^* (\min(x, y))\bigg)^2\bigg]+\mathbb{E}\bigg[ \bigg(F_n^* (\min(x, y)).\int_{\min(x,y)}^{\max(x,y)} f_n^* (x) dx\bigg)\bigg] - \frac{F_n (x)F_n (y)}{n^2}$$</span></p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \frac{(F_n (\min(x, y))^2}{n} - \frac{F_n (x)F_n (y)}{n^2} +\mathbb{E}\bigg[ \bigg(F_n^* (\min(x, y)).\int_{\min(x,y)}^{\max(x,y)} f_n^* (x) dx\bigg)\bigg] $$</span></p>
<p>How can I proceed from here?</p>
| Fred | 380,717 | <ol>
<li>if <span class="math-container">$|x|$</span> is "small", then <span class="math-container">$1+x^3 >0.$</span> Hence we have to compute</li>
</ol>
<p><span class="math-container">$$\lim_{x\to 0} \frac{\ln (1+x^3)}{\sin^3 x}.$$</span></p>
<ol start="2">
<li><span class="math-container">$\frac{\ln (1+x^3)}{\sin^3 x}= \frac{x^3}{\sin^3 x} \cdot \frac{\ln (1+x^3)}{x^3}.$</span></li>
</ol>
<p>Can you proceed?</p>
|
3,766,585 | <p>Let <span class="math-container">$X_1,X_2,...,X_n$</span> be random sample from a DF <span class="math-container">$F$</span>, and let <span class="math-container">$F_n^* (x)$</span> be the sample distribution function.</p>
<p>We have to find <span class="math-container">$\operatorname{Cov}(F_n^* (x), F_n^* (y))$</span> for fixed real numbers <span class="math-container">$x, y$</span> where <span class="math-container">$F_n^* (x)$</span> is a sample distribution.</p>
<p>My approach:</p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \mathbb{E}[(F_n^* (x) - \mathbb{E}[F_n^* (x)])(F_n^* (y) - \mathbb{E}[F_n^* (y)])]$$</span></p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \mathbb{E}[F_n^* (x) .F_n^* (y)] - \mathbb{E}[F_n^* (x)]\mathbb{E}[F_n^* (y)]$$</span></p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \mathbb{E}\bigg[\bigg(F_n^* (\min(x, y))\bigg) \bigg(F_n^* (\min(x, y)) + \int_{\min(x,y)}^{\max(x,y)} f_n^* (x) dx\bigg)\bigg] - \mathbb{E}[F_n^* (x)]\mathbb{E}[F_n^* (y)]$$</span></p>
<p>where <span class="math-container">$f_n^*(x)$</span> is a probability density function of the sample.</p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \mathbb{E}\bigg[\bigg(F_n^* (\min(x, y))\bigg)^2\bigg]+\mathbb{E}\bigg[ \bigg(F_n^* (\min(x, y)).\int_{\min(x,y)}^{\max(x,y)} f_n^* (x) dx\bigg)\bigg] - \frac{F_n (x)F_n (y)}{n^2}$$</span></p>
<p><span class="math-container">$$\text{Cov}(F_n^* (x), F_n^* (y)) = \frac{(F_n (\min(x, y))^2}{n} - \frac{F_n (x)F_n (y)}{n^2} +\mathbb{E}\bigg[ \bigg(F_n^* (\min(x, y)).\int_{\min(x,y)}^{\max(x,y)} f_n^* (x) dx\bigg)\bigg] $$</span></p>
<p>How can I proceed from here?</p>
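For reference, the standard closed form for this covariance (a known result, not derived in the question) is <span class="math-container">$\text{Cov}(F_n^*(x), F_n^*(y)) = \frac{1}{n}\big(F(\min(x,y)) - F(x)F(y)\big)$</span>, which follows by writing <span class="math-container">$F_n^*$</span> as an average of indicator variables. A quick Monte Carlo sanity check in Python (my own sketch; the uniform distribution, the points <code>x, y</code>, and the sample sizes are arbitrary choices):

```python
import random

def emp_cdf(sample, t):
    # empirical CDF F_n^*(t): fraction of sample points <= t
    return sum(1 for s in sample if s <= t) / len(sample)

random.seed(0)
n, reps = 5, 100_000
x, y = 0.3, 0.6                      # evaluation points (my choice)
fx_vals, fy_vals = [], []
for _ in range(reps):
    sample = [random.random() for _ in range(n)]   # X_i ~ Uniform(0,1), so F(t) = t
    fx_vals.append(emp_cdf(sample, x))
    fy_vals.append(emp_cdf(sample, y))

mx = sum(fx_vals) / reps
my = sum(fy_vals) / reps
cov = sum((a - mx) * (b - my) for a, b in zip(fx_vals, fy_vals)) / reps

theory = (min(x, y) - x * y) / n     # (F(min(x,y)) - F(x)F(y)) / n with F(t) = t
print(cov, theory)                   # both close to 0.024
```

The simulated covariance agrees with the closed form to within Monte Carlo noise.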
| Z Ahmed | 671,540 | <p><span class="math-container">$$\ln(1+z)=z-z^2/2+\cdots, \sin z=z-z^3/6+\cdots$$</span>
<span class="math-container">$$\lim_{x\to 0} \frac{x^3-x^6/2+\cdots}{(x-x^3/6+\cdots)^3} =\lim_{x \to 0} \frac{1-x^3/2+\cdots}{1-3x^2/6+\cdots}=1.$$</span>
Lastly, we have used the binomial series: <span class="math-container">$(1+z)^{\nu}=1+\nu z+\cdots$</span>, if <span class="math-container">$|z|<1$</span>.</p>
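As a throwaway numeric sanity check of the limit above (not part of the original argument): for small <span class="math-container">$x$</span> the ratio <span class="math-container">$\ln(1+x^3)/\sin^3 x$</span> should approach <span class="math-container">$1$</span>.

```python
import math

def ratio(x):
    # ln(1 + x^3) / sin(x)^3, the expression whose limit at 0 is computed above
    return math.log1p(x**3) / math.sin(x)**3

vals = [ratio(x) for x in (0.1, 0.01, 0.001)]
print(vals)  # values approach 1 as x -> 0
```

Using `math.log1p` avoids catastrophic cancellation in `log(1 + x**3)` for tiny `x`.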
|
69,902 | <p>I'm VERY new to Mathematica programming (and by new I mean two days), and was solving Project Euler question 14, which states:</p>
<blockquote>
<p>Which starting number, under one million, produces the longest [Collatz] chain?</p>
</blockquote>
<p>Now don't take this question wrong. <strong>I am not asking for a solution, I am simply wondering why my proposed solution is taking so long to produce an answer. It does eventually produce the correct solution to the problem.</strong></p>
<p>My code is below:</p>
<pre><code>collatzLength[x_] := Module[{c, n}, (For[n = x; c = 1, n != 1, c += 1,
If[EvenQ[n], n = n/2, n = 3*n + 1]]); c]
Last@Flatten@(MaximalBy[Transpose@{(collatzLength /@
Range[1000000]), Range[1000000]}, First])
</code></pre>
<p>It seems that the <code>collatzLength /@ Range[1000000]</code> is what is taking so long, so I am wondering how I can improve the collatz function (or any of the code) so that it completes in a reasonable timeframe.</p>
| KennyColnago | 3,246 | <p>Your <code>collatzLength</code> function is fast on an individual integer, but when you map it to all integers from 1 to a million, the function recalculates values repeatedly. For example, the Collatz series for $n=10$ is $\{10,5,16,8,4,2,1\}$. But the length for $n=5$ would have been already calculated to be 6. Hence, the Collatz length for $n=10$ is $1+6=7$. Use memoization to store previous values. For example,</p>
<pre><code>CollatzLength[1]:=1
CollatzLength[n_]:=(CollatzLength[n]=...)/;EvenQ[n]
CollatzLength[n_]:=(CollatzLength[n]=...)/;OddQ[n]
</code></pre>
<p>Your challenge is to fill in the blanks above with code referring to previously calculated values (smaller <code>n</code>). The speed is vastly improved at the cost of storing the million definitions of <code>CollatzLength[n]</code> for specific <code>n</code>.</p>
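For comparison, here is the same memoization idea in Python rather than Mathematica. This is my own sketch, not the answerer's intended fill-in (the Mathematica blanks above are deliberately left as an exercise): caching each computed chain length means every value is derived in one step from a previously stored smaller case.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def collatz_length(n):
    # length of the Collatz chain starting at n, counting n itself
    if n == 1:
        return 1
    return 1 + collatz_length(n // 2 if n % 2 == 0 else 3 * n + 1)

print(collatz_length(10))  # 7, matching the chain {10, 5, 16, 8, 4, 2, 1}
```

For a full search up to one million an iterative variant avoids deep recursion, but the caching principle is identical.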
|
8,052 | <p>I wonder how you teachers walk the line between justifying mathematics because of
its many—and sometimes surprising—applications, and justifying it as the study
of one of the great intellectual and creative achievements of humankind?</p>
<p>I have quoted to my students G.H. Hardy's famous line,</p>
<blockquote>
<p>The Theory of Numbers has always been regarded as one of the most obviously useless branches of Pure Mathematics.</p>
</blockquote>
<p>and then contrasted this with the role number theory plays in contemporary
cryptography.
But I feel slightly guilty in doing so, because I believe that even without
the applications to cryptography that Hardy could not foresee—if in fact
number theory were completely "useless"—it
would nevertheless be well-worth studying for anyone.</p>
<p>One provocation is
Andrew Hacker's influential article in
the <em>NYTimes</em>,
<a href="http://www.nytimes.com/2012/07/29/opinion/sunday/is-algebra-necessary.html?_r=0" rel="noreferrer">Is Algebra Necessary?</a>
I believe your cultural education is not complete unless you understand
something of the achievements of mathematics, even if pure and useless.
But this is a difficult argument to make when you are teaching students how
to factor quadratic polynomials, e.g.,
<a href="https://matheducators.stackexchange.com/q/8020/511">The sum - product problem</a>.</p>
<p>So, to repeat, how do you walk this line?</p>
| walkingtonowhere | 5,158 | <p>Sometimes learning in general is not about the actual usefulness of the subject matter in question, but how it changes and expands your thinking. Inspiration can also stem from many different places. </p>
<p>The main issue is that in many high school classes, students are taught to look at a problem and work it like a machine. For example, taking the quadratic equation and putting in all the inputs. I believe the best way to teach is to be a role model and inspire an interest in learning. Unless there is an interest, teaching is an uphill battle against all the distractions that occur. A great teacher can often teach students ideas and concepts that can stay with them a lifetime. </p>
<p>Most people forget things quite quickly when they aren't put to use. However, if they can remember what the math was about, then with some time and Google it's pretty easy to solve any high-school-level problem. For people who really don't care about math, I would say it's not that hard to live in society without it -- however scary that thought might be.</p>
<p>Even if you are interested in a certain subject, perhaps the first thing to consider is if you can get your students interested as well (as well as if they can actually understand it). For the gifted teachers, this eventually becomes less of an obstacle as the level of their own interest, charisma, or teaching method is enough to spark an interest in any subject matter.</p>
|
8,052 | <p>I wonder how you teachers walk the line between justifying mathematics because of
its many—and sometimes surprising—applications, and justifying it as the study
of one of the great intellectual and creative achievements of humankind?</p>
<p>I have quoted to my students G.H. Hardy's famous line,</p>
<blockquote>
<p>The Theory of Numbers has always been regarded as one of the most obviously useless branches of Pure Mathematics.</p>
</blockquote>
<p>and then contrasted this with the role number theory plays in contemporary
cryptography.
But I feel slightly guilty in doing so, because I believe that even without
the applications to cryptography that Hardy could not foresee—if in fact
number theory were completely "useless"—it
would nevertheless be well-worth studying for anyone.</p>
<p>One provocation is
Andrew Hacker's influential article in
the <em>NYTimes</em>,
<a href="http://www.nytimes.com/2012/07/29/opinion/sunday/is-algebra-necessary.html?_r=0" rel="noreferrer">Is Algebra Necessary?</a>
I believe your cultural education is not complete unless you understand
something of the achievements of mathematics, even if pure and useless.
But this is a difficult argument to make when you are teaching students how
to factor quadratic polynomials, e.g.,
<a href="https://matheducators.stackexchange.com/q/8020/511">The sum - product problem</a>.</p>
<p>So, to repeat, how do you walk this line?</p>
| Benjamin Dickman | 262 | <p><strong>Edit (Feb 2016):</strong> Since the OP mentioned Hacker's <em>Algebra</em> opinion piece in the NYTimes, perhaps this is a good place to point out his most recent follow-up in a similar direction (I exclude here my own assessment of either): <a href="http://www.nytimes.com/2016/02/28/opinion/sunday/the-wrong-way-to-teach-math.html" rel="nofollow noreferrer"><strong>The Wrong Way to Teach Math</strong></a> by Andrew Hacker (Retrieved: 2016 Feb 28).</p>
<hr />
<p><strong>Edit (July 2015):</strong> In a similar vein, here is a link to <a href="http://video.pbs.org/video/2365521689/" rel="nofollow noreferrer"><strong>Is Math Important?</strong></a> David Leonhardt of the <em>New York Times</em> acts as host for the panel discussion; to quote directly from <a href="http://devlinsangle.blogspot.com/2015/07/is-math-important.html" rel="nofollow noreferrer"><strong>this blog post</strong></a>:</p>
<blockquote>
<p>From the mathematical world there are Steven Strogatz of Cornell University and <a href="https://matheducators.stackexchange.com/a/3837"><strong>Jordan Ellenberg</strong></a> of the University of Wisconsin, and from mathematics education research there is <a href="https://matheducators.stackexchange.com/a/809"><strong>Jo Boaler</strong></a> of Stanford University. They are joined by David Coleman, President of the College Board, education writer Elizabeth Green, author of the recent book <em>Building a Better Teacher</em>, Pamela Fox, a computer scientist working with Khan Academy, and financier Steve Rattner.</p>
</blockquote>
<p>(The two hyperlinks were added by me: one to an MESE answer about Ellenberg's book; the other to an MESE answer about Boaler's comments on timed tests and math anxiety.)</p>
<hr />
<p>In providing a <em>justification for learning mathematics</em>, I would like to split this question into two pieces (even though there are certainly more) and comment briefly about one of them (even though the other may be closer to the intended question).</p>
<p>A <strong>first</strong> interpretation of the italicized text above: Why does a subject, mathematics, that covers so much "abstract" material, occupy such an important place in our schools?</p>
<p>This is the question that I believe is being asked, and it is the sort of consideration that underlies the linked piece on the necessity of algebra, from which I quote:</p>
<blockquote>
<p>The toll mathematics takes begins early. To our nation’s shame, one in four ninth graders fail to finish high school. In South Carolina, 34 percent fell away in 2008-9, according to national data released last year; for Nevada, it was 45 percent. Most of the educators I’ve talked with cite algebra as the major academic reason.</p>
</blockquote>
<p>You can find other comments in this direction in an earlier opinion piece in the <em>New York Times</em>, Garfunkel and Mumford's (2011) <a href="http://www.nytimes.com/2011/08/25/opinion/how-to-fix-our-math-education.html" rel="nofollow noreferrer"><strong>How to Fix Our Math Education</strong></a>. Again, quoting directly:</p>
<blockquote>
<p>Imagine replacing the sequence of algebra, geometry and calculus with a sequence of finance, data and basic engineering. In the finance course, students would learn the exponential function, use formulas in spreadsheets and study the budgets of people, companies and governments. In the data course, students would gather their own data sets and learn how, in fields as diverse as sports and medicine, larger samples give better estimates of averages. In the basic engineering course, students would learn the workings of engines, sound waves, TV signals and computers. Science and math were originally discovered together, and they are best learned together now.</p>
<p>Traditionalists will object that the standard curriculum teaches valuable abstract reasoning, even if the specific skills acquired are not immediately useful in later life. A generation ago, traditionalists were also arguing that studying Latin, though it had no practical application, helped students develop unique linguistic skills. We believe that studying applied math, like learning living languages, provides both useable knowledge and abstract skills.</p>
<p>In math, what we need is “quantitative literacy,” the ability to make quantitative connections whenever life requires (as when we are confronted with conflicting medical test results but need to decide whether to undergo a further procedure) and “mathematical modeling,” the ability to move practically between everyday problems and mathematical formulations (as when we decide whether it is better to buy or lease a new car).</p>
</blockquote>
<p>A <strong>second</strong> interpretation of providing a <em>justification for learning mathematics</em>: Why should we encourage students to study school mathematics now?</p>
<p>This is the question that I would like to respond to, briefly.</p>
<p>I do not disagree with studying mathematics for its aesthetic value; I do not disagree with studying mathematics for the opportunities it provides to express ourselves and be creative; I do not disagree that pure mathematics may turn out to have important applications. But I think the strongest argument <em>right now</em> for studying mathematics is its role as a societal <strong>gatekeeper</strong> (<a href="http://scholar.google.com/scholar?hl=en&q=%22mathematics%22+%22gatekeeper%22" rel="nofollow noreferrer"><strong>google scholar</strong></a>).</p>
<p>There are normative and utilitarian meta-questions about where mathematics' place <em>should be</em> in school and academic endeavors, but the <em>current reality</em> is that "learning mathematics" is essential to moving forward (or up) in the world; such a competence seems, to me, necessary but not sufficient for working towards a "successful" life.</p>
<p>At present, I have been teaching mathematics to elementary school teachers. Do I try to get them excited about mathematics? Yes. Do I try to get them to think about mathematics <em>creatively</em>? Yes. Do they sometimes latch on to applications of their own in our discussion of pure mathematical concepts? Yes: If only you could see the dawning of epiphanies (!) as many soon-to-be-married teachers in my Spring semester course began to brainstorm, collectively, about applications of LCMs and GCFs to the construction of flower arrangements and seating charts at upcoming weddings.</p>
<p>But I also realize that many of them are teaching students at high-needs schools, and that their students' futures (in our current set-up - speaking specifically about the United States) can be derailed by problems that start with an inability to factor quadratic expressions - or, in many cases, even earlier.</p>
|
8,052 | <p>I wonder how you teachers walk the line between justifying mathematics because of
its many—and sometimes surprising—applications, and justifying it as the study
of one of the great intellectual and creative achievements of humankind?</p>
<p>I have quoted to my students G.H. Hardy's famous line,</p>
<blockquote>
<p>The Theory of Numbers has always been regarded as one of the most obviously useless branches of Pure Mathematics.</p>
</blockquote>
<p>and then contrasted this with the role number theory plays in contemporary
cryptography.
But I feel slightly guilty in doing so, because I believe that even without
the applications to cryptography that Hardy could not foresee—if in fact
number theory were completely "useless"—it
would nevertheless be well-worth studying for anyone.</p>
<p>One provocation is
Andrew Hacker's influential article in
the <em>NYTimes</em>,
<a href="http://www.nytimes.com/2012/07/29/opinion/sunday/is-algebra-necessary.html?_r=0" rel="noreferrer">Is Algebra Necessary?</a>
I believe your cultural education is not complete unless you understand
something of the achievements of mathematics, even if pure and useless.
But this is a difficult argument to make when you are teaching students how
to factor quadratic polynomials, e.g.,
<a href="https://matheducators.stackexchange.com/q/8020/511">The sum - product problem</a>.</p>
<p>So, to repeat, how do you walk this line?</p>
| Jeff | 6,016 | <p>I am not a formal teacher or a mathematician, but a mechanical engineer who loves to learn and derives great satisfaction from mentoring other up-and-coming “S.T.E.M.” professionals/students. Due to my lack of educator credentials or particular knowledge on the subject, my response reflects personal views and experiences.</p>
<p>I was terribly bored by math during my primary and secondary education. I was much more prone to want to tinker with things to see how they worked. I used the knowledge that I gained to build and create things that I enjoyed or found useful. When the time came to select a major while filling out my university application I chose mechanical engineering because it would extend my understanding of the mechanical objects that had captured my fancy. I made this choice in spite of my mathematical insecurities because of my passion for the mechanical and my desire to understand it.</p>
<p>My first math and math-based courses were extremely difficult for me, but I persevered and received good grades. However, my grades were more a result of my ability to regurgitate information effectively than a result of a sound understanding. Here and there a light would come on and things would make sense, but by and large I was being led on a blindfolded path, copying the recipes that I was instructed to follow.</p>
<p>Through it all, I did feel very empowered as I learned that I could make, design, or mathematically model nearly anything that I wanted to if I could just find an equation in some textbook for the problem. This spurred me to develop a can-do attitude toward anything that I wanted to do. This new mental paradigm and tenacity was soon to be called to good use in awakening my understanding of what math really could be.</p>
<p>While studying heat transfer I was introduced to PDE’s. I had actually previously had a formal introduction in a class on differential equations (focusing of course on ODE’s), but the regurgitation education model had left me with virtually no recollection of what a PDE even was so it was like discovering it for the first time. I was intrigued, so I enrolled in a PDE’s math class during the last semester of my undergrad. My professor had a doctorate of art in mathematics, which contrasted greatly with my purely applied engineering background.</p>
<p>At first I scoffed at the professor’s insistence that math was a creative form, but as the class went on I was forced to dig up and actually learn a great deal of the topics that had been presented in earlier math classes. As I did, I began to really understand a few concepts and realize that they actually made sense in and of themselves. Math began to take on a meaning of its own beyond just its applications.</p>
<p>One of the biggest revelations was the idea of vector spaces and that of variable transformations used to map from one vector space to another. After asking my professor for help with just such a problem, I asked how he had known that that particular variable transformation would work. His answer changed the way that I have looked at math ever since. He said "I defined it that way because it was convenient". The idea was so foreign to me that I had to think about it for some time before it really made sense. I had always thought that there was only one correct variable transformation, one correct proof of every mathematical fact, one best way to solve every problem. The idea that I could pick something, define it, and work out the implications on my own was amazing to me.</p>
<p>My interest in PDE’s led me to begin a study of them on my own. This study has led to a multitude of other topics that are interrelated with PDE’s including higher dimensional vector spaces and other wondrous ideas that lead the imagination to ask a lot of “what if” type questions. One of my most recent reads was a book on the history of mathematics. It was such an eye opener. Math and science progressed hand in hand in most eras (to me, those seemed to be the most fruitful). Many mathematicians were also classified as scientists or experts in other disciplines. Their processes for discovery varied, some liked rigorous proofs, others relied on inspiration/intuition and worked out the details later, some even published erroneous solutions to problems, notations changed and evolved, and creativity flourished. No longer was math a cold, exact, and deterministic subject. It had come to life for me.</p>
<p>I apologize for taking so long to get to my point, but I felt the background was necessary to justify my position since I do not know any educational theories. My point is that I think the best way to balance the applications and the art (the art part is a newly discovered part to me, but I am so glad that I have come to be able to view math in such a way) is to follow the historical development itself. Don’t ask students to solve totally stupid and uninteresting problems about Sally and Rob’s ages, travel distances, etc. Why not use the real questions of the giants that paved the way for us? There are a great deal of artistic and applied problems that spurred the development (and may I add understanding) of mankind. Why rob students of the richness of that history? I think their minds will tend to evolve in understanding over time much the way the ancients did. They can then see our wealth of knowledge as accessible to them. They can ponder on things and question about how the ancients determined the mass of the earth, the percent of gold in a crown, the value of pi or even the fact the ratio of the circumference of a circle to its diameter is constant.</p>
<p>In short, I think the best way to discover math is to relive the discovery of it with a little help from a skilled mentor so as to avoid the intellectual pitfalls of the ancients and of course so it will not take thousands of years to get an education. My opinion is that this would turn math from a dry subject into an interesting narration. It would also give a balance of art and application since neither deserves full credit for the present state of our knowledge.</p>
<p>What if a student will never use it again? My answer to that is that from an art and creativity perspective it will open their mind to consider all the possibilities in any given scenario they may meet, and from an applied perspective it will enhance their appreciation for the world around them, just like a study of art and poetry turns a mundane artwork into something with meaning.</p>
|
2,412,959 | <p>In <a href="https://math.stackexchange.com/questions/170362/pointwise-convergence-implies-lp-convergence">this</a> question a user asks if pointwise convergence implies convergence in $L^p$. I would have thought that the answer is yes. I am not experienced with measure theory, which is how that question is framed. The following statement seems to assert that p.w. convergence implies convergence in $L^p$:
$$
\lim_{n\to \infty} ||f_n - f||_{L^p(\Omega)}^p = \lim_{n\to \infty} \int_\Omega |f_n(x)-f(x)|^p dx = \int_\Omega |\lim_{n\to \infty} f_n(x)-f(x)|^p dx = \int_\Omega |0|^p dx = 0.
$$
But the answers to the other post say that p.w. convergence does not imply convergence in $L^p$, so what am I missing?</p>
| Fred | 380,717 | <p>In general $ \lim_{n\to \infty} \int_\Omega |f_n(x)-f(x)|^p dx = \int_\Omega |\lim_{n\to \infty} f_n(x)-f(x)|^p dx$ is false !</p>
<p>Example: Let $p=1$ , $ \Omega =[0,1]$ and let $f_n$ be defined as follows $( n \ge 3)$:</p>
<p>$f_n(x)=n^2x$ , if $0 \le x \le 1/n$, $f_n(x)=-n^2x+2n$, if $1/n \le x \le 2/n$ and $f_n(x)=0$, if $2/n \le x \le 1$.</p>
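A quick numeric illustration of this counterexample (my own check, not part of the original answer): each $f_n$ is a triangular spike of base $2/n$ and height $n$, so $f_n \to 0$ pointwise on $(0,1]$, yet $\int_0^1 f_n\,dx = \tfrac{1}{2}\cdot\tfrac{2}{n}\cdot n = 1$ for every $n$.

```python
def f(n, x):
    # the spike from the answer: rises to height n on [0, 1/n],
    # falls back to 0 on [1/n, 2/n], and is 0 afterwards
    if x <= 1 / n:
        return n * n * x
    if x <= 2 / n:
        return -n * n * x + 2 * n
    return 0.0

def integral(n, steps=100_000):
    # midpoint rule on [0, 1]
    h = 1.0 / steps
    return sum(f(n, (k + 0.5) * h) for k in range(steps)) * h

print(integral(10), integral(100))   # both ~1, so the L^1 norm does not vanish
print(f(10, 0.5), f(1000, 0.5))      # pointwise: f_n(0.5) is eventually 0
```

So the interchange of limit and integral in the question fails exactly here: the pointwise limit is $0$ while the integrals stay at $1$.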
|
4,478,486 | <p>I have just started to read Stein's Singular Integrals and Differentiability properties of functions.</p>
<p>The Hardy-Littlewood maximal function has just been introduced i.e. <span class="math-container">$$M(f)(x):= \sup_{r > 0} \frac{1}{m(B(x,r))}\int_{B(x,r)}|f(y)|dy$$</span></p>
<p>where <span class="math-container">$m(B(x,r))$</span> denotes the measure of the Ball</p>
<p>Stein then states "We shall now be interested in giving a concise expression for the relative size of a function". Let <span class="math-container">$g(x)$</span> be defined on <span class="math-container">$\mathbb{R}^{n}$</span> and for each <span class="math-container">$\alpha$</span> consider the following set <span class="math-container">$\{x:|g(x)| > \alpha\}$</span>. Then the function <span class="math-container">$\lambda(\alpha)$</span> defined to be the measure of this set is the distribution function of <span class="math-container">$|g|$</span>.</p>
<p>Questions:</p>
<p>(1): Stein states, "The decrease of <span class="math-container">$\lambda(\alpha)$</span> as <span class="math-container">$\alpha$</span> grows describes the relative largeness of the function." <strong>Why is this describing the largeness? (I'd have thought it would be saying how small the function is; and relative to what, other functions?)</strong></p>
<p>(2): If <span class="math-container">$g \subset L^{p}$</span> then one has <span class="math-container">$\int_{\mathbb{R}^{n}}|g(y)|^{p}dy = - \int_{0}^{\infty}\alpha^{p}d \lambda(\alpha)$</span>. <strong>How does one get the RHS of this equality?</strong></p>
| The_Sympathizer | 11,172 | <p><span class="math-container">$\otimes$</span>, also called <strong>tensing</strong>, is something you get bundled with the tensor product that you don't have in an ordinary vector space. How the tensor product vector space and tensing work together are what the real "meat" behind the tensor product is. Constructions are not "the real meaning", because there are an infinite number of them that will do the job - they're really better understood as first, <em>proofs</em> that the tensor product exists, and second, <em>encodings</em> of the tensor product in the medium of sets, similar to how that, on a computer, ASCII is an encoding of text in binary numbers. The same applies to constructions of most other mathematical objects using sets.</p>
<p>Hence, what <span class="math-container">$v \otimes w$</span> "is" will depend on which construction you choose. In the first case, it is not circular: we define <span class="math-container">$v \otimes w$</span> to be the cell in <span class="math-container">$Q$</span> containing the ordered pair <span class="math-container">$(v, w)$</span>.</p>
<p>The "real meaning" behind the tensor product, and that nifty little tensing operation it comes with, is that it provides a space which lets you work with bilinear maps (generically, <span class="math-container">$n$</span>-linear maps) as though they were unilinear maps. Now, I suppose you (or some others) might be thinking, "but isn't <span class="math-container">$V \times W$</span> a vector space? So isn't a bilinear map <span class="math-container">$f: V \times W \rightarrow Z$</span>, a linear map from an ordered pair <span class="math-container">$(v, w)$</span>, viewed as a single vector in <span class="math-container">$V \times W$</span>?" Yes, it is, but remember that a bilinear map must be linear in each argument <strong>individually</strong>, and this gives them <em>more</em> structure that is not captured by a simple linear map out of <span class="math-container">$V \times W$</span>.</p>
<p>Hence the tensor product. We can think of this as enriching the domain so that, in this new domain, which we call <span class="math-container">$V \otimes W$</span>, being unilinear now carries all the structural weight of being bilinear on the <span class="math-container">$V \times W$</span> domain.</p>
<p>In particular, the tensor product has the property that every bilinear map <span class="math-container">$f: V \times W \rightarrow Z$</span> can be understood <em>uniquely</em> as a unilinear map <span class="math-container">$f_\otimes : V \otimes W \rightarrow Z$</span>, where</p>
<p><span class="math-container">$$f_\otimes(v \otimes w) := f(v, w).$$</span></p>
<p>Moreover, <em>every vector space that has this property is isomorphic to the tensor product</em>. The construction, then, simply shows that this is not a vacuous statement, i.e. that we are actually talking about a real mathematical object here. In this regard, it's kind of like the various constructions of the real numbers: the real numbers are "really" the single object known as "the Dedekind-complete ordered field" - what those constructions do is they prove that such a thing actually exists.</p>
<p>In this setting, the meaning of <span class="math-container">$v \otimes w$</span> is that it's a "package" that wraps together <span class="math-container">$v$</span> and <span class="math-container">$w$</span> into a single vector for processing into a linear map in such a fashion that said linear maps acquire all the extra structure bilinear maps have, which simply taking an ordered pair would not be able to do.</p>
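A concrete finite-dimensional illustration of this universal property (my own sketch with plain Python lists; the coefficient matrix <code>M</code>, and the vectors <code>v</code>, <code>w</code>, are arbitrary choices): in coordinates, a bilinear map $f(v,w)=\sum_{ij} M_{ij} v_i w_j$ agrees with the <em>linear</em> functional $f_\otimes$ evaluated on the simple tensor $v \otimes w$, whose coordinates are the outer product $v_i w_j$.

```python
def outer(v, w):
    # coordinates of the simple tensor v (x) w in the basis e_i (x) e_j
    return [[vi * wj for wj in w] for vi in v]

def f_bilinear(M, v, w):
    # a generic bilinear map f(v, w) = sum_ij M[i][j] v[i] w[j]
    return sum(M[i][j] * v[i] * w[j]
               for i in range(len(v)) for j in range(len(w)))

def f_tensor(M, T):
    # the induced linear map f_(x) on tensor coordinates: sum_ij M[i][j] T[i][j]
    return sum(M[i][j] * T[i][j]
               for i in range(len(T)) for j in range(len(T[0])))

M = [[1, 2], [3, 4]]          # arbitrary coefficients defining f
v, w = [1, -2], [5, 0.5]
print(f_bilinear(M, v, w), f_tensor(M, outer(v, w)))  # equal by construction
```

Every linear functional on the tensor coordinates arises this way, which is the coordinate shadow of the universal property stated above.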
|
1,379,188 | <p>The Riemann distance function $d(p,q)$ is usually defined as the infimum of the lengths of all <strong>piecewise</strong> smooth paths between $p$ and $q$.</p>
<p><strong>Does it change if we take the infimum only over smooth paths?</strong>
(Note that if a smooth manifold is connected, <a href="https://math.stackexchange.com/a/134129/104576">then it is smoothly path connected</a>).</p>
<p>I am quite certain the distance does not change. I think that every piecewise smooth path can be approximated by a smooth path.</p>
<p>Around any singular point of the original path, we can take a coordinate ball, and somehow create a smoothing of the relevant segment of the path which is not much longer than the original. </p>
<p>An explicit construction such as this can be found <a href="https://math.stackexchange.com/a/134129/104576">here</a>. However, the point there is only to show smooth path connectivity, and we also need some bound on the "added length". </p>
<p><strong>Partial Result (Reduction to the case of Euclidean metric):</strong></p>
<p>I show that the specific Riemannian metric does not matter. That is, if we can create a smoothing with small elongation measured by one metric $g_1$ then we can do the same for any other metric $g_2$. </p>
<p>Hence it is enough to prove the claim for $\mathbb{R}^n$ with the standard metric. </p>
<p>Proof:</p>
<p>Since the question is local (we focus around some point $p$ of non-smoothness of the original piecewise-smooth path) we can take an orthonormal frame for $g_1$, denoted by $E_i$. Write $g_{ij}=g_2(E_i,E_j)$; I want to find $\max \{g_2(v,v)|v\in \mathbb{S}^{n-1}_{g_1}\} = \max \{g_2(v,v)|v=x^iE_i , x=(x^1,...,x^n) \in \mathbb{S}^{n-1}_{Euclidean}\} = \max \{g_{ij}x^ix^j| \sum(x^i)^2=1 \} = \max \{x^T \cdot G \cdot x | \|x\|=1 \} = \max{\lambda(G)}$. </p>
<p>Since <a href="https://math.stackexchange.com/a/63206/104576">the roots of a polynomial are continuous in terms of its coefficients</a>, and the coefficients of the characteristic polynomial of a matrix depend continuously on the matrix entries, it follows that the eigenvalues of a matrix depend continuously on the matrix entries. Hence, since the matrix $g_{ij}(q)$ is a continuous function of $q$, it follows that if we restrict to a small enough compact neighbourhood of $p$ then the function $f(q)= \max{\lambda(g_{ij}(q))}$ is continuous and in particular bounded by some constant $C$. Hence for any path $\gamma$ which is contained in a small enough neighbourhood of $p$, $L_{g_2}(\gamma) \le \sqrt C L_{g_1}(\gamma)$.</p>
<p>In particular we can take $g_1$ to be the pullback metric of the standard Euclidean metric via some coordinate ball around $p$. Now solving the problem for the Euclidean case (which implies solving it for $g_1$), we obtain a solution for an arbitrary $g_2$ as required.</p>
| ASCII Advocate | 260,903 | <p>An additional remark to the answer.</p>
<p>On a Riemannian manifold (without "missing" points, e.g., complete) the minimum length in any homotopy class of paths exists and is attained by a geodesic path, which is necessarily smooth. If the manifold is of some reasonably finite topological type (compact is much more than enough), the infimum of the geodesic lengths will in fact be attained by one of the geodesic paths, so that the minimum distance between two points is always realized by a geodesic.</p>
|
2,326,564 | <p>Is it true that iff CardA = Card A then A is a set of distinct terms? </p>
<p>[This questions is actually from a confusion on what a set versus multiset is]</p>
| jgsmath | 455,126 | <p>I will use $\bar A$ for ~A.</p>
<p>$A + \bar A B = \overline{\overline{A + \bar{A} B}} = \overline{\bar A \cdot \overline{\bar A B}} =\overline{\bar A \cdot(\bar {\bar A} + \bar B)} = \overline{\bar A \cdot (A + \bar B)} = \overline{\bar A \cdot A + \bar A \cdot \bar B} = \overline{0+\bar A \cdot \bar B} = \overline{\bar A \cdot \bar B} = \overline{\overline{A+B}} = A+B$.</p>
<p>We have used De-Morgan's Laws: $\overline{X + Y} = \bar X \cdot \bar Y$ and $\overline{XY} = \bar X + \bar Y$. Also, $0 + X = X$ for any boolean variable $X$, and $\overline X \cdot X = 0$.</p>
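The identity $A + \bar A B = A + B$ can also be verified exhaustively over the truth table (a trivial brute-force check, independent of the algebraic derivation above):

```python
from itertools import product

# exhaustive truth-table check of A + (not A)*B == A + B
for a, b in product([False, True], repeat=2):
    lhs = a or ((not a) and b)
    rhs = a or b
    assert lhs == rhs, (a, b)

print("A + ~A*B == A + B holds for all four truth assignments")
```

Since a two-variable Boolean identity has only four cases, this check constitutes a complete proof.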
|
2,326,564 | <p>Is it true that iff CardA = Card A then A is a set of distinct terms? </p>
<p>[This questions is actually from a confusion on what a set versus multiset is]</p>
| Axel Kemper | 58,610 | <p>From </p>
<p>$A \lor \bar{A} = T$</p>
<p>$T \lor B = T$</p>
<p>$B = B \land T = B \land (A \lor \bar{A}) = BA \lor B\bar{A}$</p>
<p>we can rewrite</p>
<p>$A \lor B = A \lor BA \lor B\bar{A} = A(T \lor B) \lor B\bar{A} = A \lor B\bar{A}$</p>
<p>In plain English:<br>
Regardless of $B$, $A \lor \bar{A}B$ is true, if $A$ is true. If $A$ is false, the expression is equal to $B$.</p>
|
1,687,714 | <p><a href="https://i.stack.imgur.com/nZEAy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nZEAy.jpg" alt=""></a></p>
<p>I am given a problem in my textbook and I am left to determine the Laplace transform of a function given its graph (see the attached photo) - a square wave - using the theorem that $$F(s) = \frac{1}{1-e^{-ps}} \int_0^p e^{-st}f(t)dt$$ where $f(t)$ is a periodic function with period $p$. From the graph and the information in the theorem, I deduce that the Laplace transform of the function can be calculated as follows: $$F(s) = \frac{1}{1-e^{-2as}} \int_0^{2a} e^{-st}f(t)dt = \frac{1}{1-e^{-2as}} \int_0^a e^{-st}dt\qquad (1)$$ because $f(t) = 1$ for $0 \le t \le a$, and $f(t) = 0$ for $a \le t \le 2a$. However, the book gets $$F(s) = \frac{1}{s(1+e^{-as})}\qquad (2)$$ </p>
<p>Could someone lend me a hand with this problem?</p>
<p>Numbers added for convenience, thank you in advance mates. Fair winds.</p>
| reuns | 276,986 | <p>it seems correct, but you didn't talk about <strong>the region of convergence</strong>. I personally consider the distribution $h(t) = \sum_{n=0}^\infty \delta(t-2an)$ (one peak at every $t = 2an$) whose Laplace transform is $$\sum_{n=0}^\infty e^{-2asn} = \frac{1}{1-e^{-2as}}$$ (only for $Re(s) > 0$ !! for $Re(s) <0$, the Laplace transform of that distribution <strong>doesn't converge</strong>)</p>
<p>and your square wave is $$f(t) = h \ast \mathbb{I}_{[0;a]}(t)$$ and the Laplace transform of $\mathbb{I}_{[0;a]}(t)$ is $$\int_0^a e^{-st}dt = \frac{1-e^{-as}}{s} $$ (converging for every $s \in \mathbb{C}$ !!) </p>
<p>hence $$\int_0^\infty f(t) e^{-st} dt = F(s) = \frac{1-e^{-as}}{s (1-e^{-2as})} = \frac{1}{s (1+e^{-as})}$$</p>
<p><strong>but only for $Re(s) > 0$ !! for $Re(s) < 0$ it doesn't converge</strong>.</p>
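<p>As an illustrative numerical cross-check (with assumed sample values $a=1$, $s=0.7$ inside the region of convergence), one can truncate the defining integral and compare with the closed form:</p>

```python
import math

a, s = 1.0, 0.7        # assumed example values; any a > 0 and Re(s) > 0 work
T, n = 200.0, 200_000  # truncate the improper integral at t = T

def f(t):
    # square wave: 1 on [0, a), 0 on [a, 2a), period 2a
    return 1.0 if (t % (2 * a)) < a else 0.0

h = T / n
# midpoint rule for the truncated integral of f(t) e^{-st} over [0, T]
approx = sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) for k in range(n)) * h
exact = 1.0 / (s * (1.0 + math.exp(-a * s)))
print(abs(approx - exact))
```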
|
1,014,987 | <p>I need to solve the bound for $n$ from this inequality: </p>
<p>$$c \leq 1.618^{n+1} -(-0.618)^{n+1},$$</p>
<p>where $c$ is some known constant value. How can I solve this? At first I was going to take the logarithm, but the difference of the two exponentials trouble me...</p>
<p>Any hints? :) Thnx for any help !</p>
| Empy2 | 81,790 | <p>To solve $$c=\phi^n-(-\phi)^{-n}$$
If $n$ is even, then $$c=\phi^n-\phi^{-n}\\(\phi^n)^2-c(\phi^n)-1=0$$
and you can solve a quadratic for $\phi^n$ as a function of $c$. Similar if $n$ is odd.</p>
|
1,014,987 | <p>I need to solve the bound for $n$ from this inequality: </p>
<p>$$c \leq 1.618^{n+1} -(-0.618)^{n+1},$$</p>
<p>where $c$ is some known constant value. How can I solve this? At first I was going to take the logarithm, but the difference of the two exponentials trouble me...</p>
<p>Any hints? :) Thnx for any help !</p>
| jjepsuomi | 53,500 | <p>Here is the answer I got by using the hints given to me: </p>
<p>First I select $c = \frac{\sqrt{5}}{0.05}$, so my equation becomes: </p>
<p>$$\frac{\sqrt{5}}{0.05} =1.618^{n+1} - (-0.618)^{n+1}$$</p>
<p>I set $\phi = 1.618$ and $\displaystyle -\frac{1}{\phi} = -0.618$ and I get </p>
<p>$$\frac{\sqrt{5}}{0.05} = \phi^{n+1} - (-\phi)^{-(n+1)}.$$</p>
<p>Now I consider two cases: $n$ is odd or even. I consider the case $n$ is odd and I get: </p>
<p>$$\frac{\sqrt{5}}{0.05} = \phi^{n+1} - \phi^{-(n+1)},$$</p>
<p>and from this I get: </p>
<p>$$(\phi^{n+1})^2 - \frac{\sqrt{5}}{0.05}\phi^{n+1}-1 = 0$$</p>
<p>So I get: </p>
<p>$$\phi^{n+1} = \frac{\frac{\sqrt{5}}{0.05}\pm \sqrt{\frac{5}{0.05^2}+4}}{2}$$</p>
<p>$$\phi^{n+1} \approx 44.7437, -0.0223$$</p>
<p>from here I solve:</p>
<p>$$n = \frac{\ln(44.7437)}{\ln(\phi)}-1 = \frac{\ln(44.7437)}{\ln(1.618)}-1 \approx 6.89 $$</p>
<p>The other possibility evaluates into a complex number so I discard it, because I need a real valued answer. I do similarly for the $n$ is even case. </p>
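<p>A quick numerical re-run of the arithmetic above (using the exact golden ratio rather than the rounded $1.618$) — just a sanity check of the computation, not new mathematics:</p>

```python
import math

phi = (1 + math.sqrt(5)) / 2
c = math.sqrt(5) / 0.05
# odd-n case: phi^(n+1) - phi^(-(n+1)) = c, i.e. y^2 - c*y - 1 = 0 with y = phi^(n+1)
y = (c + math.sqrt(c * c + 4)) / 2  # positive root of the quadratic
n = math.log(y) / math.log(phi) - 1
print(round(y, 4), round(n, 2))
```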
|
3,712,094 | <p><a href="https://i.stack.imgur.com/S3n1g.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S3n1g.jpg" alt="enter image description here"></a></p>
<p>For part (a) these are clearly two parallel lines so no points of intersection.<br>
For part (b) this has one point of intersection because these two lines cross at exactly one point.<br>
For parts (c) and (e) we have <span class="math-container">$z=0$</span> and <span class="math-container">$x=2y+1$</span> but what does this mean geometrically?<br>
For part (d) there are no points of intersection so does that mean the three planes are parallel or the planes never cross anywhere?
Thanks for the help.</p>
| Z Ahmed | 671,540 | <p><span class="math-container">$(a)$</span>: two parallel non-intersecting lines; (b): intersecting lines.</p>
<p>In the case of three planes (equations), when no two are parallel and no two are identical: take <span class="math-container">$z=k$</span>, solve two of the equations for <span class="math-container">$x, y$</span>, and put these in the third equation.
One of three things will happen:</p>
<p>(1) <span class="math-container">$k$</span> gets determined; the planes meet in a unique point, forming an open tetrahedron.</p>
<p>(2) <span class="math-container">$k$</span> disappears, leaving a true statement, e.g., <span class="math-container">$6=6$</span>: infinitely many solutions; the three planes meet in a line,
like three pages of a book.</p>
<p>(3) <span class="math-container">$k$</span> disappears, leaving a false statement, e.g., <span class="math-container">$6=5$</span>: no solution; the planes form an open prism.</p>
|
3,576,026 | <p>I was solving a problem and got down to this:
<span class="math-container">$$\lim_{n \to \infty} \arctan\left(\frac{\sum_{k=0}^n-\frac{1}{1+k^2}}{\sum_{k=0}^n \frac{k}{1+k^2}}\right)$$</span>
After this, I said that, since the bottom series diverges and the upper one converges, the result is <span class="math-container">$0$</span>. But the person who gave the question asked me why am I allowed to swap limit and summation.<br>
I think he meant take the limit inside the function and then distribute it on the both the numerator and denominator, but I am not sure, please confirm.<br>
In the case he meant what I have understood, though, I don't really know the answer. Can someone hint me elements of it please? (my knowledge base is Calc1 and what I have accumulated thus far of Calc 2 material)</p>
<p>Thank you very much!</p>
| Z Ahmed | 671,540 | <p>As <span class="math-container">$$\sum_{k=0}^{\infty}\frac{1}{k^2+1}=\frac{1}{2}(1+\pi \coth \pi)$$</span> is finite
<span class="math-container">$\sum_{k=0}^{\infty} \frac{k}{k^2+1}$</span> is divergent so the required limit needs to be
<span class="math-container">$L=\tan^{-1}(0)=0.$</span></p>
|
2,216,778 | <p>My question is if there exists a way to evaluate the sum</p>
<p>$$
{{s}\choose{s}}^{\!2} + {{s + 1}\choose{s}}^{\!2} + \ldots {{s+r}\choose{s}}^{\!2}.
$$</p>
<p>In other words, it's the sum of the squares of the first r binomial coefficients on the s-th right-to-left diagonal of Pascal's triangle. Moreover, is it true that the previous sum is $O_{\!s}(r^{s})$?</p>
| User | 329,924 | <p>Both of your two ideas work, and are indeed based on the fact that the set of polynomials of that form is not closed under addition (obvious, since $t^2+t^2=2t^2$). Hence it's not a subspace.</p>
|
2,216,778 | <p>My question is if there exists a way to evaluate the sum</p>
<p>$$
{{s}\choose{s}}^{\!2} + {{s + 1}\choose{s}}^{\!2} + \ldots {{s+r}\choose{s}}^{\!2}.
$$</p>
<p>In other words, it's the sum of the squares of the first r binomial coefficients on the s-th right-to-left diagonal of Pascal's triangle. Moreover, is it true that the previous sum is $O_{\!s}(r^{s})$?</p>
| Kernel_Dirichlet | 368,019 | <p>Let <span class="math-container">$V$</span> be a vector space and <span class="math-container">$W\subset V$</span>. We want <span class="math-container">$W$</span> to satisfy three key axioms for it to fit the definition of subspace.</p>
<p><span class="math-container">$1$</span>. <span class="math-container">$\{0\}\in W$</span></p>
<p><span class="math-container">$2$</span>. <span class="math-container">$w_1+w_2=w_3\in W$</span> (closure under vector addition)</p>
<p><span class="math-container">$3$</span>. <span class="math-container">$cw\in W$</span> (closure under scalar multiplication)</p>
<p>For the subset of polynomials <span class="math-container">$W$</span> defined by <span class="math-container">$p(t)=a+t^2$</span>, we don't have closure under addition, because we have <span class="math-container">$p(t)+q(t)=(a+b)+2t^2$</span>, which is not of the desired form.</p>
<p>also, the set fails closure under scalar multiplication as well, since <span class="math-container">$cp(t)=c(a+t^2)=ca+ct^2$</span>. The only exception is <span class="math-container">$c=1$</span>, but <span class="math-container">$W$</span> still fails the vector addition axiom so it is not a subspace.</p>
<p>Finally, the zero vector (and for polynomials, the <strong>zero polynomial</strong> -that whose all coefficients <span class="math-container">$a_0, a_1,..., a_n = 0$</span>, and in this case, only <span class="math-container">$a = 0$</span>) is also not in the subset except for the single case where <span class="math-container">$t=0$</span>.</p>
|
221,428 | <p>Is there any pair of random variables (X,Y) such that Expected value of X goes to infinity, Expected value of Y goes to minus infinity but expected value of X+Y goes again to infinity?</p>
| Community | -1 | <p>It would probably be easier to start as follows: notice that the $G_i$ being dense and open guarantees that $G_1 \cap G_2 \neq \emptyset$. Now choose an $x$ so that there is a ball $E_1$ around it completely contained in the intersection. Shrinking the ball if necessary, you can assume that $\overline{E_1}$ is completely contained in the intersection. Now the intersection of $E_1$ with $G_3$ is non-empty, and so you can choose some $\overline{E_2}$ completely contained in $E_1 \cap G_3$ and hence in $E_1$. If you go on like this, you will have a decreasing (with respect to containment) sequence of closed and bounded sets which has non-empty intersection.</p>
<p>Now how do you prove this last assertion? You can either use the theorem in chapter 2 on intersection of compact sets (notice the nested bit guarantees that the intersection of finitely many of them is non-empty) or you can go straight up from the definition of sequential compactness.</p>
|
2,838,037 | <p>For the set $A=\{0\} \cup \{\frac 1n \mid n \in \mathbb N\}$, I understand that $\{\frac 1n \mid n \in \mathbb N\}$ is open and closed in $A$ because it is a union of all the connected components $\{\frac 1n\}$ in $A$ for all $n \in \mathbb N$. Even though $\{0\}$ is also a connected component of $A$, why is $\{0\}$ closed but not open? I thought $\{0\}$ is closed and open in $A$ as well just like each $\{\frac 1n\}$.</p>
| William Elliot | 426,203 | <p>Viewing A as a subspace of R: since {0} is closed, within A the set B = A - {0} is open. B is not closed within A because 0 is an adherence point of B that is not in B.</p>
<p>Using the clumsy definition of closed, B is not closed within A because 0 is a limit point of B that is not in B.</p>
|
2,838,037 | <p>For the set $A=\{0\} \cup \{\frac 1n \mid n \in \mathbb N\}$, I understand that $\{\frac 1n \mid n \in \mathbb N\}$ is open and closed in $A$ because it is a union of all the connected components $\{\frac 1n\}$ in $A$ for all $n \in \mathbb N$. Even though $\{0\}$ is also a connected component of $A$, why is $\{0\}$ closed but not open? I thought $\{0\}$ is closed and open in $A$ as well just like each $\{\frac 1n\}$.</p>
| Mostafa Ayaz | 518,023 | <p>If $\{0\}$ were open, then there would exist some $\epsilon>0$ such that $$\{x\in A:|x|<\epsilon\}\subseteq\{0\}.$$ If such an $\epsilon$ existed, we would have $$\{\dfrac{1}{n}:n>\dfrac{1}{\epsilon}\}\subseteq\{0\},$$ which doesn't hold; hence the set is not open.</p>
|
1,725,337 | <p>How does the following definition of Taylor polynomials:</p>
<p>$f(x_0 + h)= f(x_0) + f'(x_0)\cdot h + \frac{f''(x)}{2!}h^2+ ... +\frac{f^(k)(x_0)}{k!}\cdot h^k+R_k(x_0,h),$ </p>
<p>where $R_k(x_0,h)=\int^{x_0+h}_{x_0} \frac{(x_0+h-\tau)^k}{k!}f^{k+1}(\tau) d\tau$</p>
<p>where I guess $\lim_{h\to 0} \frac{R_k(x_0,h}{h^k}=0$</p>
<p>differ from </p>
<p>$f(x)=f(a)+\frac {f'(a)}{1!} (x-a)+ \frac{f''(a)}{2!} (x-a)^2+\frac{f^{(3)}(a)}{3!}(x-a)^3+ \cdots +\frac {f^k(a)}{k!} (x-a)^k + R(x) $</p>
<p>where $R(x)$ is the corresponding error function.</p>
<p>I understand the intuition of the second definition and how it is derived but how does the first definition approximate the function $f$? <em>Can you please show how to derive the definition or give an intuitive explanation</em> in the way Tom Apostol does for the first definition: </p>
<p><a href="https://i.stack.imgur.com/aL0Rz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aL0Rz.png" alt="enter image description here"></a></p>
<p>I know a similar question is asked at <a href="https://math.stackexchange.com/questions/648540/two-definitions-of-taylor-polynomials">Two definitions of Taylor polynomials</a> but it isn't quite the same. </p>
| André Nicolas | 6,312 | <p>Hint: To find a series expression for $\frac{2}{(8+x)^2}$, differentiate the power series of (more or less) $\frac{1}{8+x}$. Note that $\frac{1}{8+x}$ has derivative $-\frac{1}{(8+x)^2}$.</p>
<p>To find the series for $\frac{1}{8+x}$, rewrite as $\frac{1}{8}\cdot \frac{1}{1+x/8}$, and use the familiar series for $\frac{1}{1-t}$.</p>
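<p>Following the hint, differentiating $\frac{1}{8+x}=\sum_{n\ge 0}\frac{(-1)^n x^n}{8^{n+1}}$ gives $\frac{2}{(8+x)^2}=2\sum_{n\ge 1}\frac{(-1)^{n-1} n\, x^{n-1}}{8^{n+1}}$; a quick numerical check of this at a point inside the radius of convergence $|x|<8$:</p>

```python
# Compare the partial sum of the differentiated geometric series with 2/(8+x)^2
x, N = 1.5, 60
series = 2 * sum((-1) ** (n - 1) * n * x ** (n - 1) / 8 ** (n + 1) for n in range(1, N + 1))
exact = 2 / (8 + x) ** 2
print(abs(series - exact))
```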
|
2,636,931 | <p>Consider the ellipse given by:</p>
<p>$$
Ax^2 + Bxy + Cy^2 + Dx + Ey + F =0.
$$</p>
<p>What is the equation of an ellipse which has major and minor axis equal to $p$ times the major and minor axis length of the above ellipse.</p>
<p>My attempt is as follows:
We can remove rotation, increase axis length and then rotate back. An example of rotation is given below:</p>
<p><a href="https://math.stackexchange.com/questions/1102328/rotating-a-conic-section-to-eliminate-the-xy-term">Rotating a conic section to eliminate the $xy$ term</a>.</p>
<p>I am wondering if there is less complicated intuition into this problem or less complicated way.</p>
| Community | -1 | <p>You obtain this effect by rescaling the coordinate axes by the factor $p$, and the equation becomes</p>
<p>$$
A\frac{x^2}{p^2} + B\frac{xy}{p^2} + C\frac{y^2}{p^2} + D\frac{x}{p} + E\frac{y}{p} + F =0.
$$</p>
<p>If the center must remain unchanged, translate the center to the origin (the center is found by solving $2Ax+By+D=0,\ Bx+2Cy+E=0$), dilate and translate back.</p>
<p>The combined transform is</p>
<p>$$x\to\frac{x-x_c}p+x_c,\\y\to\frac{y-y_c}p+y_c.$$</p>
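<p>An illustrative check of the center computation, with made-up example coefficients: the center is where the gradient of the quadratic vanishes, i.e. where $2Ax+By+D=0$ and $Bx+2Cy+E=0$.</p>

```python
# Hypothetical example coefficients for Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
A, B, C, D, E, F = 3.0, 1.0, 2.0, -4.0, 2.0, -5.0

# Solve the 2x2 linear system for the center by Cramer's rule
det = 4 * A * C - B * B
xc = (-2 * C * D + B * E) / det
yc = (-2 * A * E + B * D) / det

# The gradient of the quadratic must vanish at the center
g1 = 2 * A * xc + B * yc + D
g2 = B * xc + 2 * C * yc + E
print(g1, g2)
```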
|
3,787,167 | <p>Let <span class="math-container">$\{a_{jk}\}$</span> be an infinite matrix such that corresponding mapping <span class="math-container">$$A:(x_i) \mapsto (\sum_{j=1}^\infty a_{ij}x_j)$$</span> is well defined linear operator <span class="math-container">$A:l^2\to l^2$</span>.
I need help with showing that this operator will be bounded. I guess it means that I need to check whether the unit sphere maps to something bounded, so I need to find some inequality on the coefficients of the matrix that will allow me to write a chain of inequalities and obtain the desired bound. But I don't understand how to get a bound from the operator merely being well defined.</p>
| crush3dice | 765,780 | <p>The basic idea is that <span class="math-container">$A(l^2)$</span> will not lie in <span class="math-container">$l^2$</span> if <span class="math-container">$A$</span> was not bounded. The proof:</p>
<p>If <span class="math-container">$A$</span> were not bounded, then for every <span class="math-container">$C>0$</span> there would exist an <span class="math-container">$x\in l^2$</span> such that <span class="math-container">$\|Ax\|>C\|x\|$</span>. This would also be the case for all vectors in <span class="math-container">$span(x)$</span>, so we can assume <span class="math-container">$x$</span> is normalized. Now let's choose</p>
<p><span class="math-container">$C_n = n^2$</span> and find the corresponding normalized <span class="math-container">$x_n$</span> so that we have</p>
<p><span class="math-container">$$\|Ax_n\| > n^2$$</span></p>
<p>Let's define a new element <span class="math-container">$y\in l^2$</span> like so:</p>
<p><span class="math-container">$$
y = \sum_{n=1}^{\infty} \frac{1}{n} x_n
$$</span></p>
<p>Then we have</p>
<p><span class="math-container">$$
\|y\|_2^2 \le \sum_n \frac{1}{n^2} < 2 \implies y\in l^2
$$</span></p>
<p>Let's define <span class="math-container">$\pi_k$</span> as the orthogonal projection onto <span class="math-container">$Ax_k$</span>; then we get, by Pythagoras,</p>
<p><span class="math-container">$$
\|Ay\| = \frac{1}{k}\|Ax_k + \sum_{n\ne k} Ax_n\| \ge \frac{1}{k}\|Ax_k + \pi_k(\sum_{n\ne k} Ax_n)\|
$$</span></p>
<p>We can then without loss of generality assume by reversing the sign of <span class="math-container">$x_k$</span> that</p>
<p><span class="math-container">$$
\frac{1}{k}\|Ax_k + \pi_k(\sum_{n\ne k} Ax_n)\| \ge \frac{1}{k}\|Ax_k\| \ge k
$$</span></p>
<p>for every <span class="math-container">$k\in \mathbb N$</span> so <span class="math-container">$\|Ay\| = \infty$</span>. Therefore <span class="math-container">$A(y) \not \in l^2$</span> and <span class="math-container">$A(l^2)\not\subset l^2$</span>. That contradicts the assumption.</p>
|
61,798 | <p>Are there any generalisations of the identity $\sum\limits_{k=1}^n {k^3} = \bigg(\sum\limits_{k=1}^n k\bigg)^2$ ?</p>
<p>For example can $\sum {k^m} = \left(\sum k\right)^n$ be valid for anything other than $m=3 , n=2$ ?</p>
<p>If not, is there a deeper reason for this identity to be true only for the case $m=3 , n=2$?</p>
| J. M. ain't a mathematician | 498 | <p>The <a href="http://en.wikipedia.org/wiki/Faulhaber%27s_formula#Faulhaber_polynomials">Faulhaber polynomials</a> are expressions of sums of <em>odd</em> powers as a polynomial of triangular numbers $T_n=\frac{n(n+1)}{2}$. Nicomachus's theorem, $\sum\limits_{k\leq n} k^3=T_n^2$, is a particular special case.</p>
<p>Other examples include</p>
<p>$$\begin{align*}\sum\limits_{k\leq n} k^5&=\frac{4T_n^3-T_n^2}{3}\\\sum\limits_{k\leq n} k^7&=\frac{6T_n^4-4T_n^3+T_n^2}{3}\end{align*}$$</p>
|
61,798 | <p>Are there any generalisations of the identity $\sum\limits_{k=1}^n {k^3} = \bigg(\sum\limits_{k=1}^n k\bigg)^2$ ?</p>
<p>For example can $\sum {k^m} = \left(\sum k\right)^n$ be valid for anything other than $m=3 , n=2$ ?</p>
<p>If not, is there a deeper reason for this identity to be true only for the case $m=3 , n=2$?</p>
| user02138 | 2,720 | <p>Here is a curious (and related) identity which might be of interest to you. Let $D_{k} = ${ $d$ } be the set of <a href="http://en.wikipedia.org/wiki/Unitary_divisor">unitary divisors</a> of a positive integer $k$, and let $\sigma_{0}^{*} \colon \mathbb{N} \to \mathbb{N}$ denote the number-of-unitary-divisors (arithmetic) function. Then it is relatively straightforward to prove
\begin{eqnarray}
\sum_{d \in D_k} \sigma_{0}^{*}(d)^{3} = \left( \sum_{d \in D_k} \sigma_{0}^{*}(d) \right)^{2} \qquad k \in \mathbb{N}.
\end{eqnarray}</p>
<p>Note that $\sigma_{0}^{*}(k) = 2^{\omega(k)}$, where $\omega(k)$ is the number of distinct prime divisors of $k$. For example,
\begin{eqnarray}
1^{3} + 2^{3} + 2^{3} + 2^{3} + 4^{3} + 4^{3} + 4^{3} + 8^{3} = (1 + 2 + 2 + 2 + 4 + 4 + 4 + 8)^{2}
\end{eqnarray}</p>
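<p>A small brute-force check of this identity, using the definition directly ($d$ is a unitary divisor of $k$ iff $d \mid k$ and $\gcd(d, k/d) = 1$):</p>

```python
from math import gcd

def unitary_divisors(k):
    return [d for d in range(1, k + 1) if k % d == 0 and gcd(d, k // d) == 1]

def sigma0_star(k):
    # number of unitary divisors of k (equals 2^omega(k))
    return len(unitary_divisors(k))

# sum of cubes over unitary divisors equals the square of the plain sum
ok = all(
    sum(sigma0_star(d) ** 3 for d in unitary_divisors(k))
    == sum(sigma0_star(d) for d in unitary_divisors(k)) ** 2
    for k in range(1, 200)
)
print(ok)  # True
```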
|
888,101 | <p>Suppose I am asked to show that some topology is not metrizable. What I have to prove exactly for that ?</p>
| Tomasz Kania | 17,929 | <p>Since you've used the tag <em>Functional analysis</em>, you might be interested in non-metrisability of certain topologies ubiquitous in analysis:</p>
<ul>
<li><p><a href="https://math.stackexchange.com/questions/424876/weak-topology-on-an-infinite-dimensional-normed-vector-space-is-not-metrizable">Weak topology on an infinite-dimensional normed vector space is not metrizable</a></p>
<ul>
<li>also <a href="https://math.stackexchange.com/questions/814174/weak-topology-is-not-metrizable-whats-wrong-with-this-proof?rq=1">Weak topology is not metrizable: what's wrong with this proof?</a></li>
</ul></li>
<li><p><a href="https://math.stackexchange.com/questions/623642/the-weak-topology-on-x-is-not-first-countable-if-x-has-uncountable-dim/626599#626599">The weak$^*$ topology on $X^*$ is not first countable if $X$ has uncountable dimension.</a></p></li>
<li><p><a href="https://math.stackexchange.com/questions/845204/the-space-of-distributions-endowed-with-the-topology-of-uniform-convergence-on-b">The space of distributions endowed with the topology of uniform convergence on bounded sets is not Fréchet.</a></p></li>
<li><p><a href="https://math.stackexchange.com/questions/179800/cx-with-the-pointwise-convergence-topology-is-not-metrizable">$C(X)$ with the pointwise convergence topology is not metrizable</a></p></li>
<li><a href="https://math.stackexchange.com/questions/116849/a-few-zariski-topology-question">Two questions on the Zariski topology on $\mathbb{R}$</a></li>
</ul>
<p>Some other examples can be found <a href="https://mathoverflow.net/questions/52032/examples-of-non-metrizable-spaces">here</a>.</p>
|
48,626 | <p>In <code>ListPointPlot3D</code>, it seems the only point style available is the default, because there is no <code>PlotMarkers</code> option for this function. Is there a way to change the point style? For example, what if I want to draw the points as small cubes?</p>
| kglr | 125 | <pre><code>lpdata = Table[(4 π - t) {Cos[t + π/2], Sin[t + π/2], 0} + {0, 0, t}, {t, 0, 4 π, .1}];
lpp1 = ListPointPlot3D[lpdata,
Filling -> Bottom, ColorFunction -> "Rainbow", BoxRatios -> 1,
FillingStyle -> Directive[LightGreen, Thick, Opacity[.5]], ImageSize -> 400];
</code></pre>
<p><strong>ListPointPlot3D: Post-process Point into Cone</strong></p>
<pre><code>lpp2 = lpp1 /. Point[x__] :> (Sequence@{EdgeForm[], Cone[#, .3]} &@
({x} /. {{a_, b_, c_}} :> {{a, b, c}, {a, b, .5 + c}}));
Row[{lpp1, lpp2}, Spacer[5]]
</code></pre>
<p><img src="https://i.stack.imgur.com/rGyIR.png" alt="enter image description here"></p>
<p>... or into <code>Cuboid</code>s </p>
<pre><code>lpp1 /. Point -> Cuboid
</code></pre>
<p><img src="https://i.stack.imgur.com/kt6P9.png" alt="enter image description here"></p>
<p><strong>DiscretePlot3D: use <code>lpdata</code> to define a function and use the option PlotMarkers</strong></p>
<pre><code>ClearAll[foo];
(foo[Sequence @@ #[[1]]] = #[[2]]) & /@ (lpdata /. {a_, b_, c_} :> {{a, b}, c});
(* or (foo[Sequence @@ #1] = #2) & @@@ (lpdata /. {a_, b_, c_} :> {{a, b}, c})*)
DiscretePlot3D[foo[x, y], {x, lpdata[[All, 1]]}, {y, lpdata[[All, 2]]},
ImageSize -> 400, BoxRatios -> 1, ExtentSize -> 1/5,
ColorFunction -> Function[{x, y, z}, ColorData["Rainbow"][z]],
PlotMarkers -> {"Sphere", Medium}]
</code></pre>
<p><img src="https://i.stack.imgur.com/wwBk4.png" alt="enter image description here"></p>
<p>(Unfortunately, <code>Point</code> and <code>Sphere</code> seem to be the only markers that work with <code>DiscretePlot3D</code>.)</p>
<p><strong>BubbleChart3D: append <code>lpdata</code> with <code>1</code>s and use the options ChartElements or ChartElementFunction</strong></p>
<pre><code>bcdata = {##, 1} & @@@ lpdata;
opts = {ImageSize -> 300, BubbleSizes -> {0.025, .025},
ChartBaseStyle -> EdgeForm[], ChartStyle -> "Rainbow", ColorFunction -> (#3 &)};
</code></pre>
<p>Use the built-in glyphs with the option <code>ChartElementFunction</code>: </p>
<pre><code>Row[BubbleChart3D[bcdata, Evaluate@opts,
ChartElementFunction -> #] & /@ {"Cone", "Cube","TriangleWaveCube"}, Spacer[5]]
</code></pre>
<p><img src="https://i.stack.imgur.com/EI2Q4.png" alt="enter image description here"></p>
<p>or use the option <code>ChartElements</code> and provide your own graphics objects:</p>
<pre><code>Row[BubbleChart3D[bcdata, Evaluate@opts, ChartElements -> Graphics3D[#]] & /@
{Cone[], Cuboid[], PolyhedronData["Dodecahedron", "Faces"]}, Spacer[5]]
</code></pre>
<p><img src="https://i.stack.imgur.com/Tamiw.png" alt="enter image description here"></p>
|
2,981,063 | <p>I have seen this statement in a quiz:</p>
<blockquote>
<p>Let <span class="math-container">$X_i$</span> denote state <span class="math-container">$i$</span> in a Markov chain. It is necessarily true
that <span class="math-container">$X_{i+1}$</span> and <span class="math-container">$X_{i-1}$</span> are uncorrelated.</p>
</blockquote>
<p>Apparently, this statement is <strong>false</strong> but I can't figure out why. I thought that for Markov Chains the <strong>past and future states</strong> are independent <strong>given the present</strong>. Did I misunderstand this?</p>
| E-A | 499,337 | <p>Conditioned on <span class="math-container">$X_i$</span>, <span class="math-container">$X_{i+1}$</span> and <span class="math-container">$X_{i-1}$</span> are indeed uncorrelated (and in fact something much stronger holds: they are independent; you can check this).</p>
<p>However, think of the following chain: <span class="math-container">$X_n = B$</span> where <span class="math-container">$B$</span> is some Bernoulli random variable. You can check this is Markov (if I tell you the state <span class="math-container">$X_{n-1}$</span>, I have told you what <span class="math-container">$X_n$</span> is, so there is no need to condition further). Note that <span class="math-container">$X_i = X_j$</span> for any <span class="math-container">$i,j$</span>, and these are clearly positively correlated!</p>
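<p>An illustrative simulation of this counterexample: the empirical correlation of <span class="math-container">$X_{i-1}$</span> and <span class="math-container">$X_{i+1}$</span> for the constant chain <span class="math-container">$X_n = B$</span> is exactly <span class="math-container">$1$</span>.</p>

```python
import random

random.seed(0)
# Chain X_n = B for all n: Markov, yet X_{i-1} and X_{i+1} are perfectly correlated
bs = [random.randint(0, 1) for _ in range(10_000)]
pairs = [(b, b) for b in bs]  # each realization gives (X_{i-1}, X_{i+1}) = (B, B)

mx = sum(x for x, _ in pairs) / len(pairs)
my = sum(y for _, y in pairs) / len(pairs)
cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
var = sum((x - mx) ** 2 for x, _ in pairs) / len(pairs)
corr = cov / var  # the two coordinates have identical variance here
print(corr)  # 1.0
```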
|
1,419,209 | <p>How do I evaluate this (find the sum)? It's been a while since I did this kind of calculus.</p>
<p>$$\sum_{i=0}^\infty \frac{i}{4^i}$$</p>
| Mark Viola | 218,419 | <p>Another approach is to write</p>
<p>$$\begin{align}
\sum_{i=0}^{\infty}\frac{i}{4^i}&=\sum_{i=1}^{\infty}\frac{1}{4^i}\left(\sum_{j=1}^{i}1\right)\\\\
&=\sum_{j=1}^{\infty}\sum_{i=j}^{\infty}\frac{1}{4^i}\\\\
&=\sum_{j=1}^{\infty}\frac{1}{4^j}\frac{1}{1-\frac14}\\\\
&=\frac{1/4}{(1-\frac14)^2}\\\\
&=\frac49
\end{align}$$</p>
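<p>A quick numerical confirmation: the partial sums converge rapidly to $\frac49$.</p>

```python
# Partial sum of sum_{i>=0} i/4^i; terms decay geometrically, so 60 terms are plenty
s = sum(i / 4 ** i for i in range(60))
print(abs(s - 4 / 9))
```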
|
3,536,061 | <p>Find the number of ways you can invite <span class="math-container">$3$</span> of your friends on <span class="math-container">$5$</span> consecutive days, exactly one friend a day, such that no friend is invited on more than two days. </p>
<p>My approach: Let <span class="math-container">$d_A,d_B$</span> and <span class="math-container">$d_C$</span> denote the total number of days <span class="math-container">$A, B$</span> and <span class="math-container">$C$</span> were invited respectively. According to the question we must have <span class="math-container">$0\le d_A,d_B,d_C\le 2.$</span> Also, we must have <span class="math-container">$$d_A+d_B+d_C=5.$$</span> </p>
<p>Now let <span class="math-container">$d_A+c_A=2, d_B+c_B=2, d_C+c_C=2,$</span> for some <span class="math-container">$c_A, c_B, c_C\ge 0$</span>. </p>
<p>This implies that <span class="math-container">$c_A+c_B+c_C=1$</span>. </p>
<p>Therefore the problem translates to finding the number of non-negative integer solutions to the equation <span class="math-container">$$c_A+c_B+c_C=1.$$</span> </p>
<p>By the stars and bars method the total number of required solutions is equal to <span class="math-container">$$\dbinom{1+3-1}{3-1}=3.$$</span></p>
<p>But the number of ways to invite the friends will be higher than this, since the friends are distinguishable and we have assumed them to be indistinguishable while applying the stars and bars method. </p>
<p>How to proceed after this?</p>
| Giovanny Soto | 721,759 | <p>You can solve the problem as follows: </p>
<p>Let's call the three friends <span class="math-container">$A,B,C$</span>; we need to invite them in such a way that none of them goes to your house on more than 2 days. Since there are five days and no friend may come more than twice, two of the friends (say <span class="math-container">$A$</span> and <span class="math-container">$B$</span>) will be invited twice and the remaining one only once; there are <span class="math-container">$3$</span> ways to choose which friend is the one invited only once. The problem then consists in distributing <span class="math-container">$A,B,C$</span> over those five days. </p>
<p>Choose <span class="math-container">$A$</span> first. You can invite him on two of the five days, so in <span class="math-container">$\binom{5}{2}=10$</span> different ways. For <span class="math-container">$B$</span> there are <span class="math-container">$\binom{3}{2}=3$</span> ways, because we can't invite him on the days that <span class="math-container">$A$</span> comes to your house. For <span class="math-container">$C$</span> there is only one way (the day on which neither <span class="math-container">$A$</span> nor <span class="math-container">$B$</span> comes). Therefore, there are <span class="math-container">$3\times10\times3\times1=90$</span> ways to invite your friends to your house such that no friend is invited on more than two days.</p>
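<p>A brute-force enumeration over all <span class="math-container">$3^5$</span> assignments is a useful cross-check; note that it includes all three choices of which friend is invited only once (with a fixed choice of the two twice-invited friends, the count is <span class="math-container">$30$</span>):</p>

```python
from itertools import product

# Count all assignments of friends A, B, C to 5 days with no friend on more than 2 days
count = sum(
    1
    for days in product("ABC", repeat=5)
    if all(days.count(f) <= 2 for f in "ABC")
)
print(count)  # 90 = 3 * 30
```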
|
939,237 | <p>Prove $n^2 < n!$.</p>
<p>This is what I have gotten so far</p>
<p>basis step: $p(4)$ is true
Inductive Hypothesis assume $p(k)$ true for $k \ge 4$</p>
<p>Inductive Step $p(k+1)$ : $(k+1)^2 < (k+1)!$</p>
<p>$$(k+1)^2 =k^2 + 2k + 1 < k! + 2k +1$$</p>
<p>Can someone please explain the last step this is from text, I need help understanding this, forgive me for the formatting error Im still learning</p>
| IAmNoOne | 117,818 | <p>Inductive Step:</p>
<p>Assume the case for $n$ is true; then for $n \geq 4$, using the inductive hypothesis together with $2n+1 < n^2$ and $n \le n!$, $$(n + 1)^2 = n^2 + 2n + 1 < n! + 2n + 1 < n! + n^2 \leq n! + n!\,n = n!(n+1) = (n+1)!.$$</p>
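<p>The claim is also easy to spot-check numerically for a range of $n$:</p>

```python
import math

# Verify n^2 < n! for 4 <= n < 30 (note the inequality fails for n = 2, 3)
ok = all(n * n < math.factorial(n) for n in range(4, 30))
print(ok)  # True
```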
|
1,006,562 | <p>So I am trying to figure out the limit</p>
<p>$$\lim_{x\to 0} \tan x \csc (2x)$$</p>
<p>I am not sure what action needs to be done to solve this and would appreciate any help to solving this. </p>
| Mark Fischler | 150,362 | <p>$$
\csc{2x} = \frac{1}{\sin 2x} = \frac{1}{2 \sin x \cos x}
$$
Then
$$
\lim_{x\rightarrow 0} \frac{\sin x}{\cos x} \frac{ 1}{2 \sin x \cos x}=
\lim_{x\rightarrow 0} \frac{1}{2 \cos^2 x} = \frac{1}{2}
$$</p>
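<p>A numerical sanity check of the limit:</p>

```python
import math

# tan(x)/sin(2x) = 1/(2 cos^2 x) approaches 1/2 as x -> 0
vals = [math.tan(x) / math.sin(2 * x) for x in (0.1, 0.01, 0.001)]
print(vals)
```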
|
489,562 | <p>I am teaching a "proof techniques" class for sophomore math majors. We start out defining sets and what you can do with them (intersection, union, cartesian product, etc.). We then move on to predicate logic and simple proofs using the rules of first order logic. After that we prove simple math statements via direct proof, contrapositive, contradiction, induction, etc. Finally, we end with basic, but important concepts, injective/surjective, cardinality, modular arithmetic, and relations.</p>
<p>I am having a hard time keeping the class interested in the beginning set theory and logic part of the course. It is pretty dry material. What types of games or group activities might be both more enjoyable than my lectures and instructive?</p>
| Asaf Karagila | 622 | <p>Be excited about sets and logic, and generally what you are talking about.</p>
<p>When I was a freshman I had a TA in calculus 2 that was totally awesome. Not because he was particularly good, and I was particularly uninterested in the topic. But to hear him talk about the theorems was inspiring.</p>
<p>I took from that a lot, and when I was TA'ing intro to logic and set theory, I too tried to be excited about whatever it was that I could be excited about (that is, not the extremely dull theorems, but most of the other things). I would pick exercises which <em>I</em> found exciting, then it was just so much easier to get excited.</p>
<p>The important things are:</p>
<ol>
<li><p>Keep the class involved. Ask them questions and wait for them to answer. When my students don't answer I just stare at them and tell them we're not going to continue until they do. I sometimes use an application on my smartphone to make sounds of crickets when they are too quiet; they usually laugh and then they answer.</p></li>
<li><p>Use examples that you think are awesome. Examples which you think will surprise them. Things they are expecting to be true will bore them, and they won't be sure what there is to be excited about. But if you catch them unprepared then they will have a better chance of enjoying the class.</p></li>
<li><p>Spice things up with history. Who proved that, peculiar notations from the history of the topic. Don't overdo it, but from time to time it's nice to add some background, especially if people's names are already mentioned.</p></li>
<li><p>BE EXCITED!</p></li>
</ol>
<p>All in all, teaching is much like story telling. You tell a story, and they listen and learn from it. If you think that the story is dull and uninteresting, then your crowd will think so as well.</p>
|
489,562 | <p>I am teaching a "proof techniques" class for sophomore math majors. We start out defining sets and what you can do with them (intersection, union, cartesian product, etc.). We then move on to predicate logic and simple proofs using the rules of first order logic. After that we prove simple math statements via direct proof, contrapositive, contradiction, induction, etc. Finally, we end with basic, but important concepts, injective/surjective, cardinality, modular arithmetic, and relations.</p>
<p>I am having a hard time keeping the class interested in the beginning set theory and logic part of the course. It is pretty dry material. What types of games or group activities might be both more enjoyable than my lectures and instructive?</p>
| Peter Smith | 35,151 | <p>Can I echo @dfeur's suggestion that a bit of history and conceptual commentary could be intriguing/fun/motivational (at least for more intellectually curious students)? </p>
<p><em>Sets</em> How did sets get into the story in the nineteenth century (the arithmetization of analysis)? Frege's disaster and Russell's paradox. Zermelo's response. The idea of the cumulative hierarchy and other conceptions of the universe of sets. Why such a simple claim as the Continuum Hypothesis remains problematic.</p>
<p><em>Logic</em> Something about how/why classical first-order logic becomes standard. Why constructivists balk at excluded middle. Why it is so difficult to do better than the material conditional to regiment indicative conditionals. The motivation for Frege's quantifier/variable regimentation of general propositions. The conceptual motivations behind different approaches to logic (axiomatic, natural deduction). Whether some mathematical reasoning is not first-order.</p>
<p>It's good for students to see e.g. that while (versions of) first-order logic as a theory are of course cleanly definable, it isn't so cut and dried <em>why</em> we've come to treat FOL as canonical. And as @Asaf says, if <em>you</em> find [some of] these questions intriguing, then your puzzlement and interest in them should be infectious.</p>
|
2,373,073 | <p>Let $a, b, c$ be distinct integers, and let $P$ be a polynomial with integer coefficients. Show that it is impossible that $P(a)=b$, $P(b)=c$, and $P(c)=a$ at the same time. </p>
| Sarvesh Ravichandran Iyer | 316,409 | <p>Hint: By the remainder theorem, <span class="math-container">$P(x) - P(y)$</span> is divisible by <span class="math-container">$x-y$</span>, for all <span class="math-container">$x,y$</span>. </p>
<p>Assume that <span class="math-container">$a < b < c$</span>, since they are distinct, and see that putting <span class="math-container">$x=c,y=a$</span> gives that <span class="math-container">$c-a$</span> divides <span class="math-container">$a-b$</span>. Now, can you see the problem with this statement? </p>
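<p>A quick numerical illustration of the divisibility fact in the hint (the polynomial below is an arbitrary one chosen for demonstration; the helper names are mine):</p>

```python
def P(x, coeffs):
    # evaluate an integer polynomial with the given coefficients at x
    return sum(c * x**k for k, c in enumerate(coeffs))

coeffs = [3, -1, 4, 1, -5, 9]  # a sample integer polynomial

# x - y divides P(x) - P(y) for every integer polynomial P
all_divisible = all(
    (P(x, coeffs) - P(y, coeffs)) % (x - y) == 0
    for x in range(-20, 21) for y in range(-20, 21) if x != y
)
```

<p>With $a < b < c$ the hint then gives $c-a \mid a-b$, which is impossible because $0 < b-a < c-a$.</p>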
|
422,233 | <p>I was asked to find a minimal polynomial of $$\alpha = \frac{3\sqrt{5} - 2\sqrt{7} + \sqrt{35}}{1 - \sqrt{5} + \sqrt{7}}$$ over <strong>Q</strong>.</p>
<p>I'm not able to find it without the help of WolframAlpha, which says that the minimal polynomial of $\alpha$ is $$19x^4 - 156x^3 - 280x^2 + 2312x + 3596.$$ (Truely it is - $\alpha$ is a root of the above polynomial and the above polynomial is also irreducible over <strong>Q</strong>.)</p>
<p>Can anyone help me with this?</p>
<p>Thank you!</p>
| Zhen Lin | 5,191 | <p>To begin, clear denominators:
<span class="math-container">$$(1 - \sqrt{5} + \sqrt{7}) \alpha = 3 \sqrt{5} - 2 \sqrt{7} + \sqrt{35}$$</span>
We need to make the coefficient of <span class="math-container">$\alpha$</span> rational, so use a difference-of-squares trick to get rid of the <span class="math-container">$\sqrt{7}$</span> on the LHS (i.e. multiply both sides by <span class="math-container">$1 - \sqrt{5} - \sqrt{7}$</span>),
<span class="math-container">$$((1 - \sqrt{5})^2 - 7) \alpha = (3 \sqrt{5} - 2 \sqrt{7} + \sqrt{35})(1 - \sqrt{5} - \sqrt{7})$$</span>
and after expanding, collecting like terms, and multiplying both sides by <span class="math-container">$-1$</span>:
<span class="math-container">$$(1 + 2 \sqrt{5}) \alpha = 1 + 4 \sqrt{5} + 7 \sqrt{7}$$</span>
Now do the same again to deal with the <span class="math-container">$\sqrt{5}$</span> on the LHS:
<span class="math-container">$$19 \alpha = 39 - 2 \sqrt{5} - 7 \sqrt{7} + 14 \sqrt{35}$$</span>
Next, we have to deal with the irrational numbers on the RHS. First, we deal with <span class="math-container">$\sqrt{5}$</span> (and <span class="math-container">$\sqrt{35}$</span>): move all the other terms over to the LHS, and square the resulting equation,
<span class="math-container">$$(19 \alpha - 39 + 7 \sqrt{7})^2 = (-2 + 14 \sqrt{7})^2 \cdot 5$$</span>
which expands to this</p>
<p><span class="math-container">$$361 \alpha^2 - 1482 \alpha + 266 \sqrt{7} \alpha + 1864 -546 \sqrt{7} = 6880 - 280 \sqrt{7}$$</span></p>
<p>To finish off, we deal with <span class="math-container">$\sqrt{7}$</span>: put all multiples of <span class="math-container">$\sqrt{7}$</span> on the RHS and all others on the LHS, and then square the resulting equation:
<span class="math-container">$$(361 \alpha^2 - 1482 \alpha - 5016)^2 = (- 266 \alpha + 266)^2 \cdot 7$$</span>
Note that <span class="math-container">$19$</span> divides all the coefficients, so we can cancel that common factor:
<span class="math-container">$$(19 \alpha^2 - 78 \alpha - 264)^2 = (-14 \alpha + 14)^2 \cdot 7$$</span>
Finally, we obtain,
<span class="math-container">$$361 \alpha^4 - 2964 \alpha^3 - 3948 \alpha^2 + 41184 \alpha + 69696 = 1372 \alpha^2 - 2744 \alpha + 1372$$</span>
which simplifies to the desired equation:
<span class="math-container">$$19 \alpha^4 - 156 \alpha^3 - 280 \alpha^2 + 2312 \alpha + 3596 = 0$$</span></p>
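<p>As a sanity check, one can verify numerically (in floating point) that $\alpha$ really is a root of this quartic:</p>

```python
import math

sqrt5, sqrt7, sqrt35 = math.sqrt(5), math.sqrt(7), math.sqrt(35)
alpha = (3*sqrt5 - 2*sqrt7 + sqrt35) / (1 - sqrt5 + sqrt7)

# plug alpha into 19x^4 - 156x^3 - 280x^2 + 2312x + 3596
residual = 19*alpha**4 - 156*alpha**3 - 280*alpha**2 + 2312*alpha + 3596
```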
|
422,233 | <p>I was asked to find a minimal polynomial of $$\alpha = \frac{3\sqrt{5} - 2\sqrt{7} + \sqrt{35}}{1 - \sqrt{5} + \sqrt{7}}$$ over <strong>Q</strong>.</p>
<p>I'm not able to find it without the help of WolframAlpha, which says that the minimal polynomial of $\alpha$ is $$19x^4 - 156x^3 - 280x^2 + 2312x + 3596.$$ (Truely it is - $\alpha$ is a root of the above polynomial and the above polynomial is also irreducible over <strong>Q</strong>.)</p>
<p>Can anyone help me with this?</p>
<p>Thank you!</p>
| Community | -1 | <p>A general purpose method is that the equation</p>
<p>$$ \sum_{k=0}^n c_k \alpha^k = 0 $$</p>
<p>is a <em>linear equation</em> in the unknowns $c_k$, and thus this can be solved with linear algebra.</p>
<p>Since the number itself is a rational linear combination of the four <em>linearly independent</em> numbers $1, \sqrt{5}, \sqrt{7}, \sqrt{35}$, we get "$4$ equations in $n+1$ unknowns", so $n=4$ will guarantee a solution exists.</p>
<p>If desired, you can avoid computing the quotient by using the fact
$$ \sum_{k=0}^n c_k \left(\frac{\mu}{\nu}\right)^k = 0
\quad \Longleftrightarrow\quad
\sum_{k=0}^n c_k \mu^k \nu^{n-k} = 0
$$</p>
|
3,743,673 | <p>Using calculus to find the minima:</p>
<p><span class="math-container">$$y(x) = x^x$$</span></p>
<p><span class="math-container">$$ln(y) = x*ln(x)$$</span></p>
<p><span class="math-container">$$(1/y)*\frac{dy}{dx} = ln(x) + x*\left(\frac{1}{x}\right) = ln(x) + 1$$</span></p>
<p><span class="math-container">$$\frac{dy}{dx} = y*(ln(x) + 1)$$</span></p>
<p><span class="math-container">$$\frac{dy}{dx} = (x^x)*(ln(x) + 1)$$</span></p>
<p>Though arriving at this next step, one can assume from looking at it graphically, that <span class="math-container">$x^x$</span> will never be <span class="math-container">$0$</span>, thus <span class="math-container">$(ln(x) + 1) = 0$</span>, however how can it be <strong>shown</strong> that <span class="math-container">$(x^x)$</span> is never <span class="math-container">$0$</span>, instead of making a bold assumption?</p>
<p><span class="math-container">$$0 = (x^x)*(ln(x) + 1)$$</span></p>
<p><span class="math-container">$$ln(x) = -1$$</span></p>
<p><span class="math-container">$$x = exp(-1) = \frac{1}{e}$$</span></p>
<p><span class="math-container">$$y = \left(\frac{1}{e}\right)^{\left(\frac{1}{e}\right)} ~= 0.6922$$</span></p>
| Community | -1 | <p>From the definition,</p>
<p><span class="math-container">$$x^x=e^{x\log x}>0.$$</span></p>
<p>An exponential is always positive.</p>
<hr />
<p>The case of <span class="math-container">$x=0$</span> is debatable and in fact <span class="math-container">$x^x$</span> is not really defined at zero. But for this discussion to make sense, we should adopt a definition that makes the function continuous and assign the value</p>
<p><span class="math-container">$$\lim_{x\to0}x^x=1.$$</span></p>
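<p>A small numerical sanity check of the minimum discussed in the question (a crude grid search, nothing more):</p>

```python
import math

# sample y = x^x on a fine grid in (0, 2) and locate the minimum
xs = [k / 100000 for k in range(1, 200000)]
argmin = min(xs, key=lambda x: x**x)
minval = argmin**argmin   # should be close to (1/e)^(1/e) ≈ 0.6922
```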
|
1,085,511 | <p>What would be the irrational number $\dfrac{a+b\sqrt{c}}{d}$, where $a,b,c,d$ are integers given by this expression:
$$
\left(
\begin{array}{@{}c@{}}2207-\cfrac{1}{2207-\cfrac{1}{2207-\cfrac{1}{2207-\dotsb}}}\end{array}
\right)^{1/8}
$$</p>
| RE60K | 67,609 | <blockquote>
<p>$$z = 2207-\dfrac{1}{2207-\dfrac{1}{2207-\dfrac{1}{2207...}}}$$</p>
</blockquote>
<p>So:
$$z=2207-\frac1z$$
Note that the truncations $2207,\;2207-\frac{1}{2207},\;\dots$ all stay close to $2207$, so $z\approx 2207$ and we must take the (plus) sign:
$$z^2-2207z+1=0\implies z=\frac{2207+\sqrt{2207^2-4}}{2}=\frac{2207+\sqrt{4870845}}{2}$$
Now we need to find:
$$\left(\frac{2207+\sqrt{4870845}}{2}\right)^{1/8}$$
Now we're going to use this result in the reverse:
$$(\sqrt a\pm\sqrt b)^2=(a+b)\pm2\sqrt{ab}\iff \sqrt{c\pm\sqrt{d}}=\sqrt{\frac{c+\sqrt{c^2-d}}{2}}\pm\sqrt{\frac{c-\sqrt{c^2-d}}{2}}$$
Now:
$$\left(\frac{2207+\sqrt{4870845}}{2}\right)^{1/2}=\frac1{\sqrt2}\left(\sqrt{\frac{2207+\sqrt{2207^2-4870845}}{2}}+\sqrt{\frac{2207-\sqrt{2207^2-4870845}}{2}}\right)\\=\frac1{\sqrt2}\left(\sqrt{\frac{2209}{2}}+\sqrt{\frac{2205}{2}}\right)=\frac12(\sqrt{2209}+\sqrt{2205})=\frac12(47+\sqrt{2205})$$
Now:
$$\left(\frac{2207+\sqrt{4870845}}{2}\right)^{1/4}=\left(\frac12(47+\sqrt{2205})\right)^{1/2}=\frac1{\sqrt2}\left(\sqrt{\frac{47+\sqrt{47^2-2205}}{2}}+\sqrt{\frac{47-\sqrt{47^2-2205}}{2}}\right)\\=\frac1{\sqrt2}\left(\sqrt{\frac{49}{2}}+\sqrt{\frac{45}{2}}\right)=\frac12(7+\sqrt{45})$$
Now:
$$\left(\frac{2207+\sqrt{4870845}}{2}\right)^{1/8}=\left(\frac12(7+\sqrt{45})\right)^{1/2}=\frac1{\sqrt2}\left(\sqrt{\frac{7+\sqrt{7^2-45}}{2}}+\sqrt{\frac{7-\sqrt{7^2-45}}{2}}\right)\\=\frac1{\sqrt2}\left(\sqrt{\frac{9}{2}}+\sqrt{\frac{5}{2}}\right)=\frac12(3+\sqrt{5})$$</p>
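<p>One can also check numerically which root of $z^2-2207z+1=0$ the truncations of the continued fraction approach, and what its eighth root is (a quick floating-point sketch):</p>

```python
import math

# iterate z -> 2207 - 1/z starting from the first truncation
z = 2207.0
for _ in range(50):
    z = 2207 - 1/z

eighth_root = z ** 0.125
larger_root = (2207 + math.sqrt(2207**2 - 4)) / 2
phi_squared = (3 + math.sqrt(5)) / 2   # square of the golden ratio
```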
|
666,217 | <p>If $a^2+b^2 \le 2$ then show that $a+b \le2$</p>
<p>I tried to transform the first inequality to $(a+b)^2\le 2+2ab$ then $\frac{a+b}{2} \le \sqrt{1+ab}$ and I thought about applying $AM-GM$ here but without result</p>
| Empy2 | 81,790 | <p>$(a+b)^2+(a-b)^2=2(a^2+b^2)\leq 4$, so $|a+b|\leq 2$</p>
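<p>A brute-force sanity check of the inequality, including the equality case $a=b=1$:</p>

```python
import math, random

random.seed(1)
ok = True
for _ in range(10000):
    a = random.uniform(-math.sqrt(2), math.sqrt(2))
    b = random.uniform(-math.sqrt(2), math.sqrt(2))
    if a*a + b*b <= 2:            # only test points satisfying the hypothesis
        ok = ok and abs(a + b) <= 2

equality_attained = (1**2 + 1**2 <= 2) and (1 + 1 == 2)
```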
|
1,827,080 | <p>Let $f:\mathbb R \to \mathbb R$ be a differentiable function such that $f(0)=0$ and $|f'(x)|\leq1 \forall x\in\mathbb R$. Then there exists $C$ in $\mathbb R $ such that </p>
<ol>
<li>$|f(x)|\leq C \sqrt |x|$ for all $ x$ with $|x|\geq 1$</li>
<li>$|f(x)|\leq C |x|^2$ for all $ x$ with $|x|\geq 1$</li>
<li>$f(x)=x+C$ for all $x \in \mathbb R $</li>
<li>$f(x)=0$ for all $x \in \mathbb R $</li>
</ol>
<p>If I take $f(x)=\frac{x}{2}$, then (4) is false, but I don't know how to prove or disprove the others using the given conditions. Please help.</p>
<p>Thanks for your time.</p>
| Nizar | 227,505 | <ol>
<li><p>Does not hold; in fact, take $f(x)=\frac{x}{2}$. Suppose there exists $C \in \mathbb{R}$ such that: $$ |f(x)|\leq C \sqrt{|x|} \text{ for all } x \text{ with } |x|\geq 1$$
Clearly from the inequality $C$ must be non-negative. Then take $x=(2C+2)^2$; then $x \geq 1$, and so we get $$ \frac{(2C+2)^2}{2} \leq C \sqrt{(2C+2)^2} = C(2C+2) $$<br>
Thus $$2(C+1)^2 \leq C(2C+2), $$ i.e. $$(C+1)^2 \leq C(C+1), $$
and this is a contradiction.</p></li>

<li><p>$|f'|$ is bounded by $1$, so let $\sup |f'| = C \leq 1$. Then for any $x$ with $|x| \geq 1$, $f$ is differentiable on the interval $]0,x[$ (or $]x,0[$ if $x\leq 0$), so by the mean value theorem we may write $$ |f(x)-f(0)| \leq C |x-0| $$ but $|x| \geq 1$ so $|x| \leq |x|^2$, and $f(0)=0$, so $$ |f(x)|\leq C |x|^2 $$ </p></li>

<li><p>The same example as above, $f(x)=\frac{x}{2}$, shows that 3. does not hold.</p></li>
</ol>
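<p>A concrete check of statement 2, taking $f=\sin$ (which satisfies $f(0)=0$ and $|f'|\leq 1$) and $C=1$:</p>

```python
import math

# |sin(x)| <= 1 * x^2 should hold whenever |x| >= 1
xs = [1 + k * 0.01 for k in range(10000)]          # x from 1 to about 101
holds = all(abs(math.sin(x)) <= x**2 and abs(math.sin(-x)) <= x**2 for x in xs)
```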
|
1,637,879 | <p>can you help me identify the mistake I'm making while integrating?</p>
<p>Question:</p>
<p>$$\int{\frac{2dx}{x\sqrt{4x^2-1}}}, x>\frac{1}{2}$$</p>
<p>my solution</p>
<p>$$\int{\frac{2dx}{x\sqrt{4x^2-1}}}=2\int{\frac{dx}{x\sqrt{(2x)^2-1}}}$$</p>
<p>let $$u=2x, x=1/2u, du=2dx, 1/2du=dx$$</p>
<p>$$=\frac{2}{2}\int{\frac{du}{1/2u\sqrt{u^2-1}}}$$</p>
<p>$$=2\int{\frac{du}{u\sqrt{u^2-1}}}$$</p>
<p>It is known $\int{\frac{dx}{x\sqrt{x^2-a^{2}}}}=\frac{1}{a}sec^{-1}{|\frac{x}{a}|}+C$</p>
<p>so</p>
<p>$$=2\int{\frac{du}{u\sqrt{u^2-1}}}=2(sec^{-1}u)+C$$</p>
<p>$$=2(sec^{-1}2x)+C$$</p>
<p>unfortunately Wolfram Alpha says the answer is $$-2(tan^{-1}\frac{1}{\sqrt{4x^{2}-1}})+C$$</p>
<ol>
<li><p>Are these answers equivalent?</p></li>
<li><p>What identities should i use to test equivalence?</p></li>
<li><p>If i made a mistake, where is it?</p></li>
</ol>
<p>Thanks staxers</p>
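<p>(To partially answer my own question 1 numerically: the two antiderivatives appear to differ by the constant $\pi$ on $x>\frac12$. The helper names below are mine.)</p>

```python
import math

def f1(x):
    # 2*sec^{-1}(2x), written via acos since the math module has no asec
    return 2 * math.acos(1 / (2 * x))

def f2(x):
    return -2 * math.atan(1 / math.sqrt(4 * x * x - 1))

# antiderivatives of the same function may differ by a constant
diffs = [f1(x) - f2(x) for x in (0.6, 1.0, 2.5, 10.0, 100.0)]
spread = max(diffs) - min(diffs)
```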
| Ben | 27,458 | <p>It's useful to consider finite-state <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="noreferrer">Markov chains</a> with states $\{ 1, \ldots, N \}$. Such a Markov chain is defined by its transitions matrix $P = (P_{ij})_{i,j=1}^N$. We require that $0 \leq P_{ij} \leq 1$ for each $i, j = 1, \ldots, N$ and that $\sum_{j=1}^N P_{ij} = 1$. Thus, we can think of $P_{ij}$ as the probability of jumping from state $i$ to state $j$. We initialize the Markov chain in a state $X_0$ and let $X_n$ be the state at time $n$ (so $X_n$ is a random variable in $\{ 1, \ldots, N \}$).</p>
<p>A natural requirement is that the Markov chain be <a href="https://en.wikipedia.org/wiki/Markov_chain#Reducibility" rel="noreferrer">irreducible</a>, which essentially means that we can get from any state to any other state with positive probability.</p>
<p>A <strong>finite-state</strong> Markov chain is said to be <a href="https://en.wikipedia.org/wiki/Markov_chain#Ergodicity" rel="noreferrer">ergodic</a> if it is irreducible and has an additional property called <a href="https://en.wikipedia.org/wiki/Markov_chain#Periodicity" rel="noreferrer">aperiodicity</a>. The <a href="https://en.wikipedia.org/wiki/Markov_chain#Steady-state_analysis_and_limiting_distributions" rel="noreferrer">ergodic theorem for Markov chains</a> says (roughly) that an ergodic Markov chain approaches its "stationary distribution" (see the previous link) as time $n \to \infty$.</p>
<p>Now in the case of physical systems, an additional assumption is usually that the system be <a href="https://en.wikipedia.org/wiki/Detailed_balance" rel="noreferrer">reversible</a>. If the transition matrix is moreover symmetric ($P_{ij}=P_{ji}$), then the chain is a <a href="https://en.wikipedia.org/wiki/Detailed_balance#Reversible_Markov_chains" rel="noreferrer">reversible Markov chain</a> with respect to the uniform distribution, which assigns equal probability $1/N$ to each of the possible states, and the uniform distribution is then stationary. (For a general reversible chain the stationary distribution need not be uniform; for a random walk on a graph, for instance, it is proportional to the vertex degrees.)</p>
<p>Putting all this together, we see that a finite-state ergodic Markov chain with a symmetric transition matrix converges to the uniform distribution (i.e. reaches an equilibrium as time goes to infinity in which all states are equally likely).</p>
<p>The notion of ergodic dynamical system you asked about is a vast generalization of this idea.</p>
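<p>A tiny numerical illustration of the convergence statement, using a symmetric $3\times 3$ transition matrix (so the uniform distribution is stationary; the matrix itself is just an example):</p>

```python
# symmetric, irreducible, aperiodic transition matrix
P = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]

dist = [1.0, 0.0, 0.0]   # start deterministically in state 0
for _ in range(100):     # power iteration: dist_{n+1} = dist_n * P
    dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
```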
|
4,310,003 | <p>Suppose you have a non empty set <span class="math-container">$X$</span>, and suppose that for every function <span class="math-container">$f : X \rightarrow X$</span>, if <span class="math-container">$f$</span> is surjective, then it is also injective. Does it necessarily follow that <span class="math-container">$X$</span> is finite ?</p>
<p>Every example I've been able to think of leads me to believe this is true. Is it ? Or could anyone provide a counterexample?</p>
| Laxmi Narayan Bhandari | 931,957 | <p>As @2 is even prime proceeds, that method is a bit complicated, imo. Here is a similar alternative.</p>
<p>Without separating the integral, we substitute <span class="math-container">$e^x=t$</span>.</p>
<p><span class="math-container">$$I = \int\limits_0^\infty \frac{\mathrm dt}{1+t^4}$$</span></p>
<p>Now we substitute <span class="math-container">$t\mapsto \frac1t $</span>.</p>
<p><span class="math-container">$$I = \int\limits_0^\infty \frac{t^2}{1+t^4}\,\mathrm dt$$</span></p>
<p>Adding the two equations,</p>
<p><span class="math-container">$$\begin{align} I &= \frac12 \int\limits_0^\infty\frac{1+t^2}{1+t^4}\,\mathrm dt \\ &= \frac12 \int\limits_0^\infty \frac{1+\frac1{t^2}}{(t-\frac1t)^2+2}\,\mathrm dt \\ &\overset{t-1/t =y}{=} \frac12 \int\limits_{-\infty}^\infty \frac{\mathrm dy}{y^2+2} \\ &= \left. \frac1{2\sqrt 2}\arctan \Big(\frac y{\sqrt 2}\Big) \right|_{-\infty}^\infty \\ I &= \frac\pi{2\sqrt2} \end{align}$$</span></p>
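<p>The value $\frac{\pi}{2\sqrt2}\approx 1.1107$ can be confirmed by a crude midpoint-rule quadrature (truncation point and step chosen arbitrarily; the tail beyond $t=50$ is below $\int_{50}^{\infty}t^{-4}\,\mathrm dt$):</p>

```python
import math

n, upper = 200000, 50.0
h = upper / n
# midpoint rule for the integral of 1/(1+t^4) over [0, 50]
approx = sum(h / (1 + (h * (k + 0.5))**4) for k in range(n))
expected = math.pi / (2 * math.sqrt(2))
```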
|
4,310,003 | <p>Suppose you have a non empty set <span class="math-container">$X$</span>, and suppose that for every function <span class="math-container">$f : X \rightarrow X$</span>, if <span class="math-container">$f$</span> is surjective, then it is also injective. Does it necessarily follow that <span class="math-container">$X$</span> is finite ?</p>
<p>Every example I've been able to think of leads me to believe this is true. Is it ? Or could anyone provide a counterexample?</p>
| Lai | 732,917 | <p><span class="math-container">$$
\begin{aligned}
I &=\int_{0}^{\infty} \frac{d y}{1+y^{4}}, \quad \text{where } y=e^{x} \\
&=\int_{0}^{\infty} \frac{\frac{1}{y^{2}}}{y^{2}+\frac{1}{y^{2}}} d y \\
&=\frac{1}{2} \int_{0}^{\infty} \frac{\left(1+\frac{1}{y^{2}}\right)-\left(1-\frac{1}{y^{2}}\right)}{y^{2}+\frac{1}{y^{2}}} d y \\
&=\frac{1}{2} \int_{0}^{\infty} \frac{d\left(y-\frac{1}{y}\right)}{\left(y-\frac{1}{y}\right)^{2}+2}-\frac{1}{2} \int_{0}^{\infty} \frac{d\left(y+\frac{1}{y}\right)}{\left(y+\frac{1}{y}\right)^{2}-2} \\
&=\frac{1}{2 \sqrt{2}}\left[\tan ^{-1}\left(\frac{y-\frac{1}{y}}{\sqrt{2}}\right)\right]_{0}^{\infty}-\frac{1}{4 \sqrt{2}}\left[\ln \left| \frac{y+\frac{1}{y}-\sqrt{2}}{y+\frac{1}{y}+\sqrt{2}}\right|\right]_{0}^{\infty}\\
&=\frac{1}{2\sqrt{2}}\left(\frac{\pi}{2}-\left(-\frac{\pi}{2}\right)\right)-0=\frac{\pi}{2 \sqrt{2}}
\end{aligned}
$$</span></p>
|
151,430 | <p>Let $Y\subset X$ be a codimension $k$ proper inclusion of submanifolds. If we choose a coorientation of $Y$ inside of $X$ (that is, an orientation of the normal bundle), then we get a class $[Y]\in H^k(X)$. If $X$ and $Y$ are oriented, then $[Y]$ may be defined as the fundamental class of $Y$ in the Borel-Moore homology of $X$, which is isomorphic to the cohomology of $X$. What is the simplest definition of $[Y]$ in the general case (where $X$ and $Y$ are not necessarily oriented)?</p>
<p>Note that a simple generalization of this question would be to ask how to define the pushforward in cohomology along a proper oriented map. (Then $[Y]$ would simply be the pushforward of $1\in H^0(Y)$.) I would be happy to know the answer to this more general question, but I asked the simpler version to be as concrete as possible.</p>
| andrewBee | 40,349 | <p>Here is an extrinsic definition of the class of $Y$. The kernel of the natural map
$$
H^*(X) \to H^*(X \setminus Y)
$$
is a graded ideal in $H^*(X)$. The lowest degree in which this ideal is non-zero is precisely $codim(Y)$, and in this degree the ideal is a free $\mathbb{Z}$-module on one generator, which is $\pm[Y]$. This works if, e.g., $Y \to X$ is a proper embedding of smooth manifolds. I learned this in a paper of <a href="http://arxiv.org/abs/math/0009085" rel="nofollow">Feher and Rimanyi</a> (see Definition 2.6 and the examples in 2.7).</p>
<p>To get the correct choice of a sign, at least for varieties, I would compute the degree (as in pushforward to a point) of the associated classes. Only one of these will be positive. </p>
|
1,787,806 | <p>I've recently had this problem in an exam and couldn't solve it.</p>
<p>Find the remainder of the following sum when dividing by 7 and determine if the quotient is even or odd:</p>
<p>$$\sum_{i=0}^{99} 2^{i^2}$$</p>
<p>I know the basic modular arithmetic properties but this escapes my capabilities. In our algebra course we've seen congruence, divisibility, division algorithm... how could I approach it?</p>
| Jack D'Aurizio | 44,121 | <p>By Fermat's little theorem
$$ 2^{i^2}\!\!\!\pmod{7}=\left\{\begin{array}{ll}\color{green}{1}&\text{if } i\equiv 0\pmod{6}\\\color{blue}{2}&\text{if } i\equiv 1\pmod{6}\\\color{blue}{2}&\text{if } i\equiv 2\pmod{6}\\\color{green}{1}&\text{if } i\equiv 3\pmod{6}\\\color{blue}{2}&\text{if } i\equiv 4\pmod{6}\\\color{blue}{2}&\text{if } i\equiv 5\pmod{6}\\\end{array}\right.$$
hence:</p>
<blockquote>
<p>$$ \sum_{i=0}^{99}2^{i^2} \equiv \color{blue}{2\sum_{i=0}^{99}1}-\color{green}{\sum_{k=0}^{33}1} \equiv 2\cdot 100-34 \equiv \color{red}{5}\!\!\pmod{7}.$$</p>
</blockquote>
<p>In a similar way, for $n\geq 1$ the powers $2^n\pmod{14}$ cycle through $2,4,8$ with period $3$, so
$$ 2^{i^2}\!\!\!\pmod{14}=\left\{\begin{array}{ll}\color{green}{1}&\text{if } i=0\\\color{purple}{8}&\text{if } i\equiv 0\pmod{3},\ i\geq 1\\\color{blue}{2}&\text{if } i\not\equiv 0\pmod{3}\\\end{array}\right.$$
hence:</p>
<blockquote>
<p>$$ \sum_{i=0}^{99}2^{i^2}\equiv \color{green}{1}+\color{purple}{8\cdot 33}+\color{blue}{2\cdot 66} \equiv 397 \equiv \color{red}{5}\pmod{14}.$$</p>
</blockquote>
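<p>Both residues are easy to confirm by brute force with modular exponentiation:</p>

```python
# pow(2, e, m) computes 2^e mod m without forming the huge power
s7  = sum(pow(2, i*i, 7)  for i in range(100)) % 7
s14 = sum(pow(2, i*i, 14) for i in range(100)) % 14
```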
|
325,186 | <p>If <span class="math-container">$p$</span> is a prime then the zeta function for an algebraic curve <span class="math-container">$V$</span> over <span class="math-container">$\mathbb{F}_p$</span> is defined to be
<span class="math-container">$$\zeta_{V,p}(s) := \exp\left(\sum_{m\geq 1} \frac{N_m}{m}(p^{-s})^m\right). $$</span>
where <span class="math-container">$N_m$</span> is the number of points over <span class="math-container">$\mathbb{F}_{p^m}$</span>.</p>
<p>I was wondering what is the motivation for this definition. The sum in the exponent is vaguely logarithmic. So maybe that explains the exponential?</p>
<p>What sort of information is the zeta function meant to encode and how does it do it? Also, how does this end up being a rational function?</p>
| Richard Stanley | 2,807 | <p>Exercise 4.8 of <em>Enumerative Combinatorics</em>, vol. 1, second
ed., and Exercise 5.2(b) in volume 2 give an explanation of
sorts for general varieties over finite fields. According
to Exercise 4.8, a generating function <span class="math-container">$\exp \sum_{n\geq 1}
a_n\frac{x^n}{n}$</span> is rational if and only if we can write
<span class="math-container">$$
a_n=\sum_{i=1}^r\alpha_i^n-\sum_{j=1}^s \beta_j^n, $$</span>
for nonzero complex numbers <span class="math-container">$\alpha_i$</span>, <span class="math-container">$\beta_j$</span>
(independent of <span class="math-container">$n$</span>). This is stronger than saying that
<span class="math-container">$\sum_{n \geq 1}a_nx^n$</span> is rational. Moreover, if the
variety <span class="math-container">$V$</span> is defined over <span class="math-container">$\mathbb{F}_q$</span> and <span class="math-container">$N_n$</span> is the
number of points over <span class="math-container">$\mathbb{F}_{q^n}$</span>, then the solution
to Exercise 5.2(b) is a simple argument showing that <span class="math-container">$\exp
\sum_{n\geq 1}N_n\frac{x^n}{n}$</span> has integer
coefficients. It corresponds to partitioning the rational
points into their Galois orbits.</p>
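<p>The exponential-of-a-series definition is easy to experiment with. The sketch below (my own helper, using the standard recurrence for the exponential of a power series) reproduces the rational zeta functions of the affine and projective lines over $\mathbb{F}_p$, with exact integer coefficients:</p>

```python
from fractions import Fraction

def zeta_coeffs(point_counts, terms):
    # coefficients of Z(x) = exp(sum_m N_m x^m / m), computed exactly;
    # if Z = exp(A) then Z' = A'Z, giving k*z_k = sum_{j<=k} j*a_j*z_{k-j}
    a = [Fraction(0)] + [Fraction(point_counts[m - 1], m) for m in range(1, terms + 1)]
    z = [Fraction(1)] + [Fraction(0)] * terms
    for k in range(1, terms + 1):
        z[k] = sum(j * a[j] * z[k - j] for j in range(1, k + 1)) / k
    return z

p = 3
# affine line: N_m = p^m, so Z(x) = 1/(1 - p x)
affine = zeta_coeffs([p**m for m in range(1, 7)], 6)
# projective line: N_m = p^m + 1, so Z(x) = 1/((1 - x)(1 - p x))
projective = zeta_coeffs([p**m + 1 for m in range(1, 7)], 6)
```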
|
3,578,740 | <p>Good day everybody,</p>
<p>I would like to ask a question about undecidability.
May I ask: if we have some problem that is undecidable but true (for example, if RH were found to be undecidable, that would mean it is true), does that mean that such an undecidable problem is true for no reason at all, or is there some hidden pattern/reason why the statement is true, just that such a pattern is inaccessible to Turing machines?</p>
<p>In other words, if we don't limit ourselves to proofs of finite length and allow proofs of infinite, even uncountable, length, could some proof of computably undecidable problems be constructed in such an infinitary logic?</p>
<p>I am asking this, because I really like to think about how some hypothetical entities which would have hypercomputing minds would do mathematics. The question I would like to ask You is whether there could be those proofs of uncountable length that would constructively prove computably undecidable statements, so those entities could prove those statements formally more or less the same way we do with finite length proofs?</p>
<p>Of course those entities could brute-force check for a counterexample and "prove" those computably undecidable problems that way, but I regard this as a very dumb way of getting the result and not a proof at all. So could they prove computably undecidable statements in a "smart" way with formal proofs of infinite length?</p>
<p>Thank You very much for Your kind answers</p>
<hr>
<p>EDIT: Rephrasing of question:
Let's have a true but undecidable statement S. The way we know that it is true is that it is undecidable (if it were false, then there would be a counterexample, hence it wouldn't be undecidable).</p>
<p>But what makes such a statement true? Is it a proof of infinite length which proves S from the initial axioms, or is it truly just the absence of a counterexample? </p>
<p>In other words, let's have some entity E whose mental/computational capacity is that of an arbitrarily powerful hypercomputer (capable of computing truth values of all propositions in the set-theoretic universe V). Now could E prove that S is true in some other way than just a brute-force lookup for a counterexample (for example, if S is the Riemann Hypothesis, then by brute force I mean computing the zeta function and looking where the zeros lie)?</p>
<p>So could E use, for example, a proof of uncountable length, or a property of some mathematical structure inaccessible to Turing machines, much like properties of elliptic curves were used to prove Fermat's Last Theorem?</p>
| johnnyb | 298,360 | <p>It depends on exactly what you are asking. If you take a Turing machine with an intelligently coded tape with an infinite number of possibilities encoded on it, then the halting status of all finite programs will be decidable. For a description of how this works, see Eric Holloway's "The Logical Possibility of Halting Oracles".</p>
<p>Short way of describing this is that there are a countably infinite number of programs on a Turing machine. If we know which ones halt, we can encode that ahead-of-time on an infinitely long tape. We can then index this infinitely long tape by the program number (which is just your program expressed as a number) and retrieve the halting status.</p>
<p>Now, if your question is whether or not we can prove these undecidable questions <em>without</em> having such answers available, again, it depends on what you mean. If you allow for a Turing machine to complete an infinite number of states, then you can check the results afterwards. So, for instance, if I am looking at Fermat's last theorem, I can do a check of all combinations of integers, and see if any of them satisfy $a^n + b^n = c^n$, and set a bit if it is true, and leave it unset if it is false. Then, after an infinite number of states, I can check the flag and determine if it is true.</p>
<p>Again, it all depends on what you will allow.</p>
<p>One additional note - the solution relies on the Turing machine having greater power than the problem being solved. So, for instance, in our halting example, it worked because our checking program had a program size of infinite length, while the program being checked was only finite. If we extend it so that we are checking programs of infinite length (i.e., of the same order as our checker), then we run into the halting problem again.</p>
|
1,517,456 | <blockquote>
<p>Rudin Chp. 5 q. 13:</p>
<p>Suppose <span class="math-container">$a$</span> and <span class="math-container">$c$</span> are real numbers, <span class="math-container">$c > 0$</span>, and <span class="math-container">$f$</span> is defined on <span class="math-container">$[-1, 1]$</span> by</p>
<p><span class="math-container">$$f(x) = x^a \sin(|x|^{-c}), x≠0$$</span>
<span class="math-container">$$f(x) = 0, x=0$$</span></p>
<p>(b) <span class="math-container">$f'(0)$</span> exists iff <span class="math-container">$a > 1$</span></p>
</blockquote>
<p>To me, it seems quite clear that <span class="math-container">$a>1$</span> would work because it is intuitively clear that <span class="math-container">$f(x) → 0$</span> as <span class="math-container">$x → 0$</span>. The function <span class="math-container">$\sin(u)$</span> has a range of <span class="math-container">$[-1, 1]$</span>, so while <span class="math-container">$\sin(|x|^{-c})$</span> will oscillate infinitely as <span class="math-container">$x→0$</span>, <span class="math-container">$x^a → 0$</span> for <span class="math-container">$a > 0$</span>. It is clear that this is continuous for <span class="math-container">$a>0$</span>.</p>
<p>But I need to show that it is differentiable for <span class="math-container">$x=0$</span> iff <span class="math-container">$a>1$</span>. And this is where I have gotten stuck. I am able to show that it is <em>not</em> differentiable for <span class="math-container">$a ≤ 1$</span>. But when I try to show it is differentiable for <span class="math-container">$a>1$</span>, I fail to do so. I tried to differentiate <span class="math-container">$f(x)$</span> in general (<span class="math-container">$f'(x)$</span>) then show it will not work as <span class="math-container">$x→0$</span> for <span class="math-container">$a≤1$</span>, but this method does not work with <span class="math-container">$a>1$</span>, and I end up with <span class="math-container">$x^{a+1} / |x|^{-c-2}$</span> (plus unimportant constants and cosine). And that is bad because, for example, if a = 2 and c = 10, that limit clearly diverges to infinity.</p>
<p>A fellow student claimed to use the definition of the derivative to solve this, and I tried this:</p>
<p><span class="math-container">$$f'(x) = \lim_{t→x} \frac{f(t) - f(x)}{t-x} = \lim_{t→x} \frac{t^a \sin|t|^{-c} - x^a \sin|x|^{-c}}{t-x}$$</span></p>
<p>And we are interested in only <span class="math-container">$f'(0)$</span>, so we can simply:</p>
<p><span class="math-container">$$f'(0) = \lim_{t→0} \frac{f(t) - f(0)}{t-0} = \lim_{t→0} \frac{t^a \sin|t|^{-c} - 0}{t}= \lim_{t→0} t^{a-1} \sin|t|^{-c}$$</span>
Assume <span class="math-container">$a>1$</span>
<span class="math-container">$$\left|t^{a-1}\sin|t|^{-c}\right| \leq |t|^{a-1} → 0 \text{ as } t→0,$$</span>
so the limit is 0 by squeezing. (One cannot split this into a product of two limits, since $\lim_{t→0}\sin|t|^{-c}$ does not exist; the bound works because <span class="math-container">$\sin(u)$</span> has a range of [-1, 1].)</p>
<p>Clearly in the case that <span class="math-container">$a≤1$</span>, this will diverge.</p>
<p>Is this all that I need to do? I don't understand why my first method did not work but the second did, if that is indeed all I must do.</p>
<p>I am worried about the main concept, not about how my “proof” looks. I can write it out MUCH better on paper, I am struggling to format this well on the computer (and sorry for this!)</p>
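<p>(For what it's worth, I also sampled the difference quotient numerically, taking $c=1$ for concreteness, at points where the sine factor equals 1:)</p>

```python
import math

# difference quotient at 0 is q(a, t) = t^(a-1) * sin(1/t)  (here c = 1);
# sample at t_k = 1/(pi/2 + 2*pi*k), where sin(1/t_k) = 1
ts = [1 / (math.pi / 2 + 2 * math.pi * k) for k in (10**3, 10**4, 10**5, 10**6)]

q_a1 = [math.sin(1 / t) for t in ts]        # a = 1: values stay near 1
q_a2 = [t * math.sin(1 / t) for t in ts]    # a = 2: squeezed to 0 like |t|
```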
| Hayden | 27,496 | <p>In cases like these, you may want to use the fact that if $M/L/K$ is a tower of field extensions, then $[M:L][L:K]=[M:K]$, where $[L:K]$ is the dimension of the $K$-vector space $L$ (with the scalar product given just by multiplication of elements in $L$ by elements in $K$).
<p>At this point, one can show that $[\mathbb{Q}(\sqrt[3]{2}):\mathbb{Q}]=3$ and $[\mathbb{Q}(\sqrt[4]{5}):\mathbb{Q}]=4$. If $\sqrt[3]{2}\in \mathbb{Q}(\sqrt[4]{5})$, then $\mathbb{Q}(\sqrt[3]{2})\subset \mathbb{Q}(\sqrt[4]{5})$ and so we would get a tower of extensions $\mathbb{Q}(\sqrt[4]{5})/\mathbb{Q}(\sqrt[3]{2})/\mathbb{Q}$. Thus, we should have that $3$ divides $4$, which doesn't happen, giving us a contradiction.</p>
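If one wants to double-check the two degrees used above, a small sympy sketch (assuming sympy is available) computes the minimal polynomials of $\sqrt[3]{2}$ and $\sqrt[4]{5}$ over $\mathbb{Q}$:

```python
from sympy import Symbol, minimal_polynomial, root

x = Symbol('x')
# Degrees of the minimal polynomials of 2^(1/3) and 5^(1/4) over Q
d1 = minimal_polynomial(root(2, 3), x).as_poly(x).degree()
d2 = minimal_polynomial(root(5, 4), x).as_poly(x).degree()
assert (d1, d2) == (3, 4)
assert d2 % d1 != 0  # 3 does not divide 4, so no tower Q(5^(1/4))/Q(2^(1/3))/Q
```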
|
3,312,780 | <p>Compute <span class="math-container">$f(x) = \sum_{k = 1}^{\infty} \Bigg(\frac{1}{(k-1)!} + k\Bigg)x^{k-1}$</span></p>
<p><strong>Approach</strong></p>
<p>I'm not exactly sure how to do this, but just to throw out ideas: since this question appears in a chapter on power series and uniform convergence, my idea would be to first find the radius of convergence. From there, perhaps I could restrict the set I work on, and I might have a series that I know converges to use as a bound and approximation for this series. Other than that, nothing else comes to mind at the moment.</p>
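One concrete way to experiment first: the series splits into the exponential series <span class="math-container">$\sum_{k\ge1} x^{k-1}/(k-1)! = e^x$</span> and <span class="math-container">$\sum_{k\ge1} k x^{k-1} = 1/(1-x)^2$</span> for <span class="math-container">$|x|<1$</span> (the termwise derivative of the geometric series). A partial-sum check of that candidate closed form (a sketch; the helper name and the truncation <code>n = 60</code> are mine):

```python
import math

def partial_sum(x, n=60):
    """Partial sum of sum_{k>=1} (1/(k-1)! + k) * x^(k-1)."""
    return sum((1 / math.factorial(k - 1) + k) * x ** (k - 1)
               for k in range(1, n + 1))

# Candidate closed form e^x + 1/(1-x)^2 on |x| < 1:
for x in [0.0, 0.3, -0.5]:
    assert abs(partial_sum(x) - (math.exp(x) + 1 / (1 - x) ** 2)) < 1e-9
```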
| José Carlos Santos | 446,262 | <p>The series <span class="math-container">$\sum_{n=1}^\infty\frac8n$</span> diverges, but <span class="math-container">$\frac8{3^n+2}<\frac8{3^n}$</span> and <span class="math-container">$\sum_{n=1}^\infty\frac8{3^n}$</span> converges (apply the ratio test). And <span class="math-container">$\frac1{2^n+3^n}<\frac1{2^n}$</span> and the series <span class="math-container">$\sum_{n=1}^\infty\frac1{2^n}$</span> converges.</p>
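A numeric illustration of the two comparisons above (a sketch; the term count of 50 is arbitrary): by the termwise bounds, the partial sums are capped by the geometric totals <span class="math-container">$\sum_{n\ge1} 8/3^n = 4$</span> and <span class="math-container">$\sum_{n\ge1} 1/2^n = 1$</span>.

```python
# Termwise: 8/(3^n + 2) < 8/3^n and 1/(2^n + 3^n) < 1/2^n, so the partial
# sums stay below the corresponding geometric totals 4 and 1.
s1 = sum(8 / (3 ** n + 2) for n in range(1, 51))
s2 = sum(1 / (2 ** n + 3 ** n) for n in range(1, 51))

assert s1 < 4
assert s2 < 1
```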
|
3,312,780 | <p>Compute <span class="math-container">$f(x) = \sum_{k = 1}^{\infty} \Bigg(\frac{1}{(k-1)!} + k\Bigg)x^{k-1}$</span></p>
<p><strong>Approach</strong></p>
<p>I'm not exactly sure how to do this, but just to throw out ideas: since this question appears in a chapter on power series and uniform convergence, my idea would be to first find the radius of convergence. From there, perhaps I could restrict the set I work on, and I might have a series that I know converges to use as a bound and approximation for this series. Other than that, nothing else comes to mind at the moment.</p>
| azif00 | 680,927 | <p><span class="math-container">$3^n +2 > 3^n$</span> implies that <span class="math-container">$$\frac{8}{3^n+2} < \frac{8}{3^n}$$</span>
and the series <span class="math-container">$$\sum_{n=0}^\infty \frac{1}{3^n}$$</span> clearly converges.</p>
|
3,312,780 | <p>Compute <span class="math-container">$f(x) = \sum_{k = 1}^{\infty} \Bigg(\frac{1}{(k-1)!} + k\Bigg)x^{k-1}$</span></p>
<p><strong>Approach</strong></p>
<p>I'm not exactly sure how to do this, but just to throw out ideas: since this question appears in a chapter on power series and uniform convergence, my idea would be to first find the radius of convergence. From there, perhaps I could restrict the set I work on, and I might have a series that I know converges to use as a bound and approximation for this series. Other than that, nothing else comes to mind at the moment.</p>
| Community | -1 | <p>Note the <em>harmonic series</em> <span class="math-container">$\sum_n\dfrac 1n$</span> diverges. But <span class="math-container">$\sum_n\dfrac 8n=8\sum_n\dfrac1n$</span>, thus it also diverges. </p>
<p>The geometric series <span class="math-container">$\sum_n x^n$</span> converges (to <span class="math-container">$\dfrac 1{1-x}$</span>) iff <span class="math-container">$\vert x\vert\lt1$</span>.</p>
<p>Thus <span class="math-container">$\sum_n( \dfrac 13)^n$</span> converges.</p>
<p>Now by the comparison test, <span class="math-container">$\sum_n\dfrac8{3^n+2}\lt\sum_n\dfrac8{3^n}=8\sum_n(\dfrac13)^n \lt\infty $</span>.</p>
<p>Similarly <span class="math-container">$\sum_n \dfrac 1{2^n+3^n}\lt\sum_n\dfrac 1{3^n}\lt\infty $</span>.</p>
<p>(Or you could do <span class="math-container">$\sum_n\dfrac 1{2^n+3^n}\lt\sum_n\dfrac 1{2^n}\lt\infty $</span>.)</p>
<p>I don't see any way to use p-series here, since those are sums of the form <span class="math-container">$\sum_n\dfrac 1{n^p}$</span>.</p>
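A quick numeric companion to the two claims above (the truncation points are mine): harmonic partial sums exceed <span class="math-container">$\ln n$</span>, witnessing that <span class="math-container">$\sum_n 8/n$</span> is unbounded, while partial sums of <span class="math-container">$\sum_{n\ge1}(1/3)^n$</span> settle at <span class="math-container">$\frac{1/3}{1-1/3}=\frac12$</span>.

```python
import math

# Harmonic partial sums H(n) grow like ln(n) + 0.577..., so 8 * H(n) is unbounded.
H = lambda n: sum(1.0 / k for k in range(1, n + 1))
assert H(10**5) > math.log(10**5)

# Geometric: the partial sum up to n = 59 is already within 1e-12 of 1/2.
g = sum((1 / 3) ** n for n in range(1, 60))
assert abs(g - 0.5) < 1e-12
```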
|