qid: int64 (1 to 4.65M)
question: large_string (lengths 27 to 36.3k)
author: large_string (lengths 3 to 36)
author_id: int64 (-1 to 1.16M)
answer: large_string (lengths 18 to 63k)
217,291
<p>I am trying to recreate the following image in LaTeX (pgfplots), but in order to do so I need to figure out the mathematical expressions for the functions.</p> <p><img src="https://i.stack.imgur.com/jYGNP.png" alt="wavepacket"></p> <p>So far I am sure that the gray line is $\sin x$ and that the red line is some version of $\sin x / x$, whereas the green line is some linear combination of sine and cosine functions.</p> <p>Does anyone know a good way to find these functions?</p>
Martin Brandenburg
1,650
<p>Lagrange's Theorem has some applications to elementary number theory:</p> <p>1) Wilson's Theorem says that for a prime $p$ we have $(p-1)! \equiv -1 \pmod p$. Proof: By Lagrange's Theorem applied to $\mathbb{F}_p^*$ we have $X^{p-1}-1 = \prod_{a \in \mathbb{F}_p^*} (X+a)$ in $\mathbb{F}_p[X]$. Now let $X \mapsto 0$.</p> <p>2) The algebraic definition of binomial coefficients relies on the fact that for $n,m \in \mathbb{N}$ we have that $n! m!$ divides $(n+m)!$. Here is a proof: There is a canonical monomorphism</p> <p>$Sym(\{1,\dotsc,n\}) \times Sym(\{n+1,\dotsc,n+m\}) \hookrightarrow Sym(\{1,\dotsc,n+m\})$.</p> <p>Now apply Lagrange's Theorem.</p>
217,291
<p>I am trying to recreate the following image in LaTeX (pgfplots), but in order to do so I need to figure out the mathematical expressions for the functions.</p> <p><img src="https://i.stack.imgur.com/jYGNP.png" alt="wavepacket"></p> <p>So far I am sure that the gray line is $\sin x$ and that the red line is some version of $\sin x / x$, whereas the green line is some linear combination of sine and cosine functions.</p> <p>Does anyone know a good way to find these functions?</p>
Makoto Kato
28,422
<p>A lucid proof of the quadratic reciprocity law can be obtained by the theory of cyclotomic number fields.</p> <p><a href="https://math.stackexchange.com/questions/174858/discriminant-of-the-quadratic-subfield-of-the-cyclotomic-number-field-of-an-odd">Discriminant of the quadratic subfield of the cyclotomic number field of an odd prime order $l$</a></p>
453,295
<p>I want to show that the non-zero elements of $\mathbb Z_p$ ($p$ prime) form a group of order $p-1$ under multiplication, i.e., that the elements of this group are $\{\overline1,\ldots,\overline{p-1}\}$. I'm trying to prove that every element is invertible in the following manner:</p> <blockquote> <p><strong>Proof (a)</strong></p> <p>By Bézout's lemma, given $\bar a\in\mathbb Z_p$, there are $x,y \in \mathbb Z$ such that </p> <p>$ax+py=1\implies\overline {ax+py}=\overline 1\implies \overline a \overline x+\overline p\overline y=\overline1\implies \overline a \overline x+\overline 0=\overline1\implies \overline a\overline x=\overline1$</p> </blockquote> <p>There are two problems with this proof: first, $\overline 0$ is not defined, because it isn't in $\{\overline 1,\ldots, \overline {p-1}\}$; secondly, the sum is not defined, because only multiplication is available in this group.</p> <blockquote> <p><strong>Proof (b)</strong></p> <p>By Fermat's little theorem, for each $\overline a \in \mathbb Z_p$ we have:</p> <p>$a^{p-1}\equiv 1$ (mod $p$), so $\overline {a^{p-2}}$ is an inverse of $\overline a$. </p> </blockquote> <p>My problem with this proof is seeing why $\overline {a^{p-2}}\in \{\overline1,\ldots,\overline{p-1}\}$.</p> <p>I already knew these proofs, but with a little more experience I noticed these lapses of rigor.</p> <p>Thanks in advance.</p>
Ben Grossmann
81,360
<p>$\overline{0}$ is the equivalence class of $0$, that is, the multiples of $p$. We can use this because multiplication and addition are well defined ring operations in $\mathbb Z_n$. The precursor to this proof is to show that addition and multiplication over the integers modulo $n$ are, in fact, well defined (i.e. give you the same answer regardless of which representative of an equivalence class is chosen). By using $\overline{0}$, we have stepped away from the multiplicative group. However, the math still works, and it tells us that every non-zero element of $\mathbb Z_p$ has a multiplicative inverse.</p> <p>The answer to your second question is "because of a)".</p>
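Both constructions from the question can be spot-checked for a small prime; the following is a minimal stdlib-Python sketch (my own illustration, not from the thread), verifying that the Bézout coefficient from the extended Euclidean algorithm and Fermat's power $a^{p-2}$ each give a multiplicative inverse mod $p$:

```python
def ext_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

p = 101  # a small prime for the spot check
for a in range(1, p):
    # Proof (a): Bezout gives x with a*x + p*y = 1, so x mod p inverts a
    g, x, _ = ext_gcd(a, p)
    assert g == 1 and (a * x) % p == 1
    # Proof (b): Fermat's little theorem gives a^(p-2) as the inverse
    assert (a * pow(a, p - 2, p)) % p == 1

print("every nonzero residue mod", p, "has an inverse")
```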
1,511,078
<p><strong>Show that the product of two upper (lower) triangular matrices is again upper (lower) triangular.</strong></p> <p>I have problems formulating proofs - although I am not 100% sure this text requires one, as it uses the verb "show" instead of "prove". However, I have found the proof below on the internet, but my problem is not just that I can't write one by myself, but also that I sometimes don't understand a proof which is already written - which makes me think that I should quit these studies.</p> <p><strong>My first question is: how do I choose the right strategy for proving something, in this case as in others?</strong> I guess it is also a matter of interest... interest in numbers and their properties. I've never had such an interest, frankly - although, perhaps, it is not nice to say this here. The reason I started studying math and some physics over a year and a half ago is that I dreamed about, and still dream about, doing a kind of research that interests me and for which I need a scientific background.</p> <p>Sorry for the digression, but I am quite upset and I need some general advice too. Coming back to the topic of the present proof, <strong>my second question is: could you explain to me in plain English the idea and logic behind the following proof, please? Is this proof general enough to also cover my case, and what changes do I need to make for it?</strong> - I ask this because the text to which I refer speaks about upper and lower triangular matrices, while the proof I report here speaks about upper triangular matrices only. I also notice that the following proof considers only square matrices, although rectangular matrices can also be upper/lower triangular.</p> <p>The only thing I know about all this is what upper/lower triangular matrices are and how to perform matrix multiplication. This is the proof found on the internet:</p> <p>"Suppose that U and V are two upper triangular <em>n × n</em> matrices. 
By the row-column rule for matrix multiplication we know that the <em>(i,j)-th</em> entry of the product UV is $u_{i1}v_{1j} + u_{i2}v_{2j} + \cdots + u_{in}v_{nj}$. We need to show that if $i &gt; j$ then this expression evaluates to 0. In fact, we will show that every term $u_{ik}v_{kj}$ of this expression evaluates to 0. To prove this, we consider two cases: • If i > k then $u_{ik} = 0$ since U is upper triangular. Hence $u_{ik}v_{kj} = 0$. • If k > j then $v_{kj} = 0$ since V is upper triangular. Hence $u_{ik}v_{kj} = 0$. Since i > j, for every k we either have $i &gt; k$ or $k &gt; j$ (possibly both), so the two cases cover all possibilities for k."</p>
Muthu Kumar N P
390,986
<p>Let the two upper triangular matrices be $A$ and $B$, both $n\times n$, and let $C=AB$.</p> <p>By the definition of matrix multiplication, $$C_{ij}=\sum_{r=1}^n A_{ir}B_{rj}$$</p> <p>Since $A$ and $B$ are upper triangular matrices,</p> <p>$A_{ir}=0$ when $i&gt;r$,</p> <p>$B_{rj}=0$ when $r&gt;j$.</p> <p>So, for a term $A_{ir}B_{rj}$ of this sum to be non-zero, both $A_{ir}$ and $B_{rj}$ need to be non-zero. </p> <p>This happens only when both $i\le r$ and $r\le j$, which forces $i\le j$.</p> <p>So the only entries of $C$ that can be non-zero are those with $i\le j$.</p> <p>The previous statement is exactly the condition for an upper triangular matrix.</p> <p>Hence it is clear that the product of two upper triangular matrices (in our case $C$) is always an upper triangular matrix.</p>
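The index argument above is easy to sanity-check numerically; here is a small Python sketch (the example matrices are arbitrary, chosen only for illustration):

```python
def matmul(A, B):
    """Naive product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

def is_upper_triangular(M):
    """True iff every entry strictly below the diagonal (i > j) is zero."""
    return all(M[i][j] == 0 for i in range(len(M)) for j in range(i))

A = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]
B = [[7, 8, 9],
     [0, 1, 2],
     [0, 0, 3]]
C = matmul(A, B)
assert is_upper_triangular(A) and is_upper_triangular(B)
assert is_upper_triangular(C)  # entries below the diagonal are all zero
print(C)
```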
762,651
<p>I have to prove that "any straight line $\alpha$ contained on a surface $S$ is an asymptotic curve and geodesic (modulo parametrization) of that surface $S$". Can I have hints for tackling this problem? It seems so general that I am not sure even how to formulate it well, let alone prove it. Intuitively, I imagine that the normal $n_{\alpha}$ to the line/curve is perpendicular to the normal vector $N_{S}$ to the surface $S$, thus resulting in the asymptoticity; alternatively, a straight line has curvature $k = 0$ everywhere, and so the result follows. Is this reasoning adequate for a proof of the first part? I also realize that both geodesics and straight lines are the paths of shortest distance between two points on given surfaces (here, both $S$), thus the straight line must be a geodesic of any surface which contains it; should I quantify this statement, though?</p> <p>Let $\mathbb{H}^2 = \{(x, y) \in \mathbb{R}^2: y&gt;0 \}$ be the hyperbolic plane with the Riemannian metric $(\mathbb{d}s)^2 =\frac{(\mathbb{d}x)^2+(\mathbb{d}y)^2}{y^2}$. Consider a "square" $P = \{ (x, y) \in \mathbb{H}^2: 1 \leq x,y \leq 2 \}$. I need to calculate the geodesic curvature of the sides of $P$ and, for the Gaussian curvature $K$ of $P$, I have to calculate $\int_{P} K \,\mathbb{d}\sigma$, where $\mathbb{d}\sigma$ is the area element of $\mathbb{H}^2$. Just hints as to how to start would be helpful. (I see that I have the first fundamental form, from which I can derive the coefficients $E$, $F$, and $G$ and thereby (hopefully easily) the Christoffel symbols and an expression for area, but I do not see how any of this takes the actual square into account. Only the coordinates at which I evaluate these quantities seem to come from the square! But I would still like detailed examples of even these things, please.)</p>
Community
-1
<p>This answer is very late, but this is another way to view the problem:</p> <p>We are trying to prove $a^2 &gt; b^2$, or $a^2-b^2&gt;0$.</p> <p>Using the identity $a^2-b^2 = (a+b)(a-b)$,</p> <p>we are equivalently trying to prove: </p> <p>$(a+b)(a-b) &gt; 0 $</p> <p>Since $a,b&gt;0$, </p> <p>$(a+b) &gt; 0 $.</p> <p>Now, the LHS is positive exactly when $a-b&gt;0$, i.e. $a&gt;b$, which has been our assumption from the beginning.</p> <p>And the proof falls into place. The only problem here is that we are starting from what we want to prove and building from that, but one could rewrite the proof so that doesn't happen. </p>
3,407,489
<p><span class="math-container">$\neg\left (\neg{\left (A\setminus A \right )}\setminus A \right )$</span></p> <p><span class="math-container">$A\setminus A $</span> is simply the empty set, and is <span class="math-container">$\neg$</span> of that again the empty set? The empty set <span class="math-container">$\setminus$</span> A is the empty set, right? But the empty set is included in every set?</p> <p>I am confused when it comes to this.</p>
hamam_Abdallah
369,188
<p>The empty set is a subset of every set <span class="math-container">$ E $</span> because you cannot find an element which is in the empty set and not in the set <span class="math-container">$ E$</span>. <span class="math-container">$$\text{ Proof}$$</span></p> <p>Let <span class="math-container">$ x$</span> be an arbitrary element, and</p> <p><span class="math-container">$ p, q$</span> the propositions <span class="math-container">$$p \;\; : \; x\notin \emptyset$$</span> and <span class="math-container">$$ q \;\; : \;\; x\in E.$$</span></p> <p>By the empty set axiom, the proposition <span class="math-container">$ p$</span> is always true.</p> <p>By the rule of addition,</p> <p>the proposition <span class="math-container">$ p \vee q$</span> is also true, and by material implication,</p> <p><span class="math-container">$$\neg p \;\; \implies q \;\; \text{ is true}$$</span> or <span class="math-container">$$x\in \emptyset\;\; \implies \;\; x\in E$$</span> thus <span class="math-container">$$\emptyset \;\; \subset\;\; E$$</span></p>
1,218,582
<p>I was presented with the function $\max (|x|,|y|)$, which outputs the maximum of the two given values. I can only suppose this one creates some body in $\mathbb{R}^3$, but how do you sketch it, and what does it mean in $\mathbb{R}^3$? For that matter, I can't really imagine it in $\mathbb{R}^2$ either. </p>
David
119,775
<p>The function by itself does not specify any region of $\Bbb R^2$. If you meant $f(x,y)=constant$ then consider for example $$\max(|x|,|y|)=1\ .$$ This can be written as $$x=\pm1\ ,\ -1\le y\le1\qquad\hbox{or}\qquad y=\pm1\ ,\ -1\le x\le1$$ which is a square with vertices at $(\pm1,\,\pm1)$.</p>
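A quick numerical check of this description (a Python sketch; the sample points are arbitrary):

```python
# Points on the claimed square boundary satisfy max(|x|, |y|) == 1:
# the edges x = ±1 with |y| <= 1, and y = ±1 with |x| <= 1.
for t in (-1.0, -0.5, 0.0, 0.5, 1.0):
    for x, y in ((1, t), (-1, t), (t, 1), (t, -1)):
        assert max(abs(x), abs(y)) == 1.0

# An interior point gives a value below 1, an exterior point a value above 1.
assert max(abs(0.3), abs(0.7)) < 1 < max(abs(1.5), abs(0.2))
print("level set max(|x|,|y|) = 1 is the boundary of the square")
```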
108,331
<p>I find the frequent emergence of logarithms and even nested logarithms in number theory, especially in the <a href="http://en.wikipedia.org/wiki/Prime_gap#Lower_bounds">prime number counting business</a>, somewhat unsettling. What is the reason for them?</p> <p>Does it maybe have to do with the series expansion of the logarithm? Or is there something inherently exponential in any of the relevant number distributions, as in complexity theory or combinatorial problems? I am thinking maybe of how you can construct bigger integers out of smaller ones.</p>
Gerry Myerson
8,269
<p>If you are asking, why do you find it unsettling that logarithms occur in Number Theory, I'm afraid you will have to ask a psychiatrist. </p> <p>If you are asking, why are there logarithms in Number Theory, consider the following naive effort to find the number of primes up to $N$: </p> <p>There are $N$ integers up to $N$; $(1/2)N$ odd integers up to $N$; $(1/2)(2/3)N$ are prime to 2 and 3; $(1/2)(2/3)(4/5)N$ are prime to 2, 3, and 5; and so on. Continue this reasoning up to the largest prime $p$ not exceeding $\sqrt N$, and what's left should be the primes between $\sqrt N$ and $N$. So you have to estimate $(1/2)(2/3)(4/5)\cdots((p-1)/p)$, and in the limit as $N\to\infty$, that product looks something like $1/\log N$ (but not exactly - the naive argument needs a fair bit of sophisticated tweaking). It's the limiting process that lets logarithms into the mix. </p>
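The naive sieve product described here can be computed directly. The stdlib-Python sketch below (my own illustration) shows that the product does have the $1/\log N$ shape, but with a different constant, which is exactly where the "sophisticated tweaking" comes in:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

N = 10 ** 7
prod = 1.0
for p in primes_up_to(int(N ** 0.5)):
    prod *= (p - 1) / p  # fraction of integers surviving removal of multiples of p

# Mertens' theorem says this product behaves like 2*e^(-gamma)/log N: the same
# 1/log N shape, but a constant of about 1.123 rather than 1.
print(prod * math.log(N))  # roughly 1.12, not 1
```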
108,331
<p>I find the frequent emergence of logarithms and even nested logarithms in number theory, especially in the <a href="http://en.wikipedia.org/wiki/Prime_gap#Lower_bounds">prime number counting business</a>, somewhat unsettling. What is the reason for them?</p> <p>Does it maybe have to do with the series expansion of the logarithm? Or is there something inherently exponential in any of the relevant number distributions, as in complexity theory or combinatorial problems? I am thinking maybe of how you can construct bigger integers out of smaller ones.</p>
tomcuchta
1,796
<p>One reason that it occurs in analytic number theory has to do with the <a href="http://en.wikipedia.org/wiki/Riemann_zeta_function">Riemann zeta function</a>:</p> <p>$$\zeta(s) = \displaystyle\sum_{n=1}^{\infty} \frac{1}{n^s},$$</p> <p>where $s = \sigma + it$ is a <a href="http://en.wikipedia.org/wiki/Complex_number">complex number</a>, and for reasons of convergence, we require that $\sigma &gt; 1$. It turns out that due to an argument by Euler, we have </p> <p>$$\zeta(s) = \displaystyle\sum_{n=1}^{\infty} \frac{1}{n^s} = \displaystyle\prod_p \frac{1}{1-p^{-s}}.$$</p> <p>(where the $\displaystyle\prod_p$ operation indicates taking a product over all prime numbers $p$ and $\displaystyle\sum_p$ indicates taking a sum over all primes $p$). Big products like that aren't as easy to use and manipulate as sums, so mathematicians will do just about anything to turn a product into a sum to make it easier to work with. You probably know these three facts about logarithms: </p> <p>i) $\log(ab) = \log(a)+\log(b)$,</p> <p>ii) $\log{\frac{a}{b}}=\log(a)-\log(b)$, and </p> <p>iii) $\log(1)=0$. </p> <p>We can exploit those three rules after taking a logarithm of both sides of the above equation and we get the following:</p> <p>$$\begin{array}{ll} \log(\zeta(s)) &amp;= \log \left( \displaystyle\prod_p \frac{1}{1-p^{-s}} \right) \\ &amp;= \displaystyle\sum_p \log \left(\frac{1}{1-p^{-s}} \right) \\ &amp;= \displaystyle\sum_p \left[ \log(1) - \log({1-p^{-s}}) \right] \\ &amp;= -\displaystyle\sum_p \log({1-p^{-s}}). 
\end{array}$$</p> <p>Without going into too much detail, after you get to this point, you apply a standard operation from calculus called a "<a href="http://en.wikipedia.org/wiki/Derivative">derivative</a>" which you then scrutinize with analysis.</p> <p>This general technique of investigation leads directly to proving the <a href="http://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>, which says</p> <p>$$\displaystyle\lim_{x \rightarrow \infty} \frac{\pi(x)}{\frac{x}{\log(x)}} = 1,$$</p> <p>which can be stated roughly in English as "as you go farther down the number line, calculating $\frac{n}{\log(n)}$ will give you a pretty good estimate of how many primes there are up to that point".</p>
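The limit statement can be checked empirically with a sieve (a short stdlib-Python illustration; the cutoffs are arbitrary):

```python
import math

def prime_count(x):
    """pi(x) via a simple sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sum(sieve)

for x in (10 ** 4, 10 ** 5, 10 ** 6):
    ratio = prime_count(x) / (x / math.log(x))
    print(x, round(ratio, 4))  # the ratio drifts slowly down toward 1
```

The convergence is famously slow, which is why the ratio is still noticeably above 1 even at a million.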
107,525
<p>Say I have two random variables X and Y from the same class of distributions, but with different means and variances (X and Y are parameterized differently). Say the variance converges to zero as a function of n, but the mean is not a function of n. Can it be formally proven, without giving the actual pdfs of X and Y, that their overlap area (defined as the integral, over the entire domain, of min(f,g), where f, g are the respective pdfs) converges to zero as n goes to infinity? Perhaps this is too obvious...?</p>
Jean-Victor Côté
24,376
<p>As long as the means are different, the overlap goes to zero as the variances go to zero.</p>
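For a concrete instance, take the two distributions to be normal with distinct means and a common variance shrinking in $n$ (an assumption for illustration only; the question allows any class of distributions). With equal standard deviations the pdfs cross midway between the means, so the overlap has a closed form and visibly tends to zero:

```python
import math

def overlap_equal_sigma(mu1, mu2, sigma):
    """Overlap area of the pdfs of N(mu1, sigma^2) and N(mu2, sigma^2).
    Equal sigmas => the pdfs cross at the midpoint of the means, giving
    overlap = erfc(d / (2 * sigma * sqrt(2))) with d = |mu2 - mu1|."""
    d = abs(mu2 - mu1)
    return math.erfc(d / (2.0 * sigma * math.sqrt(2.0)))

# Fixed distinct means, variance shrinking with n: the overlap tends to 0.
overlaps = [overlap_equal_sigma(0.0, 1.0, 1.0 / n) for n in (1, 2, 4, 8, 16)]
print(overlaps)  # strictly decreasing toward 0
```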
4,195,399
<p>Given that <span class="math-container">$a,b,c &gt; 0$</span> are real numbers such that <span class="math-container">$$\frac{a}{b+c+1}+\frac{b}{c+a+1}+\frac{c}{a+b+1}\le 1,$$</span> prove that <span class="math-container">$$\frac{1}{b+c+1}+\frac{1}{c+a+1}+\frac{1}{a+b+1}\ge 1.$$</span></p> <hr /> <p>I first rewrote <span class="math-container">$$\frac{1}{a+b+1} = 1 - \frac{a+b}{a+b+1},$$</span> so the second inequality can be rewritten as <span class="math-container">$$\frac{b+c}{b+c+1} + \frac{c+a}{c+a+1} + \frac{a+b}{a+b+1} \le 2.$$</span> Cauchy-Schwarz gives us <span class="math-container">$$\sum \frac{a+b}{a+b+1} \geq \frac{(\sum \sqrt{a+b})^2}{\sum a+ b+ 1}.$$</span> That can be rewritten as <span class="math-container">$$\frac{2(a+b+c) + 2\sum \sqrt{(a+b)(a+c)}}{2(a+b+c) + 3},$$</span> which is greater than or equal to <span class="math-container">$$\frac{2(a+b+c) + 2 \sum(a + \sqrt{bc})}{2(a+b+c) + 3} = \frac{4(a+b+c) + 2 \sum \sqrt{bc}}{2(a+b+c) + 3} \geq 2,$$</span> which is the opposite of what I want. Additionally, I'm unsure of how to proceed from here.</p>
Calvin Lin
54,563
<p>Applying Jensen's inequality to <span class="math-container">$ f(x) = \frac{ x} { (a+b+c+1) - x } $</span>, we have <span class="math-container">$$ 1\geq \sum \frac{a}{b+c+1} = f(a) + f(b) + f(c) \geq 3 f ( \frac{a+b+c } { 3} ) = 3 \times \frac{ a + b + c } { 2a + 2b + 2c + 3 } \Rightarrow a + b + c \leq 3.$$</span></p> <p>Applying Jensen's inequality to <span class="math-container">$ g(x) = \frac{ 1 } { (a+b+c+1) - x }$</span>, we have</p> <p><span class="math-container">$$ \sum \frac{1}{ b+c+1} = g(a) + g(b) + g(c) \geq 3 g( \frac{ a+b+c} { 3 } ) = 3 \times \frac{ 3 } { 2a+2b+2c + 3 } \geq 1. $$</span></p>
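A random spot-check of the statement, and of the intermediate bound $a+b+c \le 3$ obtained from the first Jensen application (Python; obviously not a proof, and the sampling range is an arbitrary choice):

```python
import random

def spot_check(trials=10000, seed=0):
    """Randomly sample a, b, c > 0; whenever the hypothesis sum is <= 1,
    check that the conclusion sum is >= 1 and that a + b + c <= 3."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b, c = (rng.uniform(0.01, 5.0) for _ in range(3))
        if a / (b + c + 1) + b / (c + a + 1) + c / (a + b + 1) <= 1:
            hits += 1
            assert 1 / (b + c + 1) + 1 / (c + a + 1) + 1 / (a + b + 1) >= 1 - 1e-12
            assert a + b + c <= 3 + 1e-9
    return hits

print(spot_check())  # number of sampled triples satisfying the hypothesis
```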
4,415,559
<p>I am trying to find the root of <span class="math-container">$f(x)=\ln(x)-\cos(x)$</span> by writing an algorithm for the bisection and fixed-point iteration methods. I am currently using Python, but whenever I run it using either of the two methods, it prints out &quot;math domain error&quot;. I guess this is due to <span class="math-container">$\ln(x)$</span> when x becomes 0 or negative.</p> <p>So, I asked myself if this manipulation is valid:</p> <p>If <span class="math-container">$f(x)=\ln(x)-\cos(x)=0$</span>, then <span class="math-container">$\ln(x)=\cos(x)$</span>. It also follows that <span class="math-container">$x=e^{\cos(x)}$</span>, so we have a function, say <span class="math-container">$h(x)=x-e^{\cos(x)}$</span>, that has the same root as <span class="math-container">$f(x)$</span>. So, I tried using <span class="math-container">$h$</span> to find the root of <span class="math-container">$f$</span>, and I resolved the error prompt I was getting whenever I used <span class="math-container">$f$</span> in my code. This was for the bisection method, and I got the root that I wanted.</p> <p>I still don't know what appropriate <span class="math-container">$g(x)$</span> I should take for the fixed-point iteration method, such that if <span class="math-container">$f(x)=0$</span>, then <span class="math-container">$x=g(x)$</span> and <span class="math-container">$|g'(x)|&lt;1$</span> on some open interval.</p> <p>First question: Is using an alternative function <span class="math-container">$h$</span> to solve for the actual root of <span class="math-container">$f$</span> valid?</p> <p>Last question: What could be a possible <span class="math-container">$g(x)$</span> to use to find the root using the fixed-point iteration method?</p> <p>Any help would be appreciated.</p>
José C Ferreira
1,029,870
<p>You don't need to change the function <span class="math-container">$f(x)$</span>. Choose <span class="math-container">$a=0.1$</span> and <span class="math-container">$b=e$</span> and you will find that <span class="math-container">$f(a)&lt;0$</span> and <span class="math-container">$f(b)&gt;0$</span>, so the bisection method will work.</p> <p>If you do some iterations of the bisection method, you'll find an approximation to the root, say <span class="math-container">$c$</span>.</p> <p>Use this <span class="math-container">$c$</span> as the initial guess for Newton's method, viewed as the fixed-point method given by <span class="math-container">$x=x-f(x)/f'(x)$</span>.</p> <p><a href="https://www.mycompiler.io/view/4U2uPGWKMPz" rel="nofollow noreferrer">Here is a link</a> to my code in R.</p> <p>You can find many results by searching for &quot;\(x=x-f(x)/f'(x)\)&quot; on SearchOnMath.</p>
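For reference, here is a stdlib-Python sketch of that recipe (my own illustration, not the answer's R code): bisection on $[0.1, e]$ for a rough root, then Newton's method written as the fixed-point iteration $x = x - f(x)/f'(x)$.

```python
import math

def f(x):
    return math.log(x) - math.cos(x)

def df(x):
    return 1.0 / x + math.sin(x)

def bisect(f, a, b, n_iter=30):
    """Bisection, assuming f(a) and f(b) have opposite signs."""
    for _ in range(n_iter):
        c = (a + b) / 2.0
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
    return (a + b) / 2.0

def newton(f, df, x, tol=1e-12, max_iter=50):
    """Newton's method as the fixed-point iteration x = g(x) = x - f(x)/f'(x)."""
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

c = bisect(f, 0.1, math.e)  # rough root; f(0.1) < 0 < f(e)
root = newton(f, df, c)     # polish with the Newton fixed-point map
print(root)                 # about 1.3030
```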
1,880,090
<p>The solution states that the ball of radius $\epsilon &gt;0$ around a real number $x$ always contains the non-real number $x+i\epsilon/2$. </p> <p>I don't understand the answer: for every number $x \in \mathbb{R}$ there is an open ball, right? For every $x \in \mathbb{R}$ there is an $r&gt;0$ such that I can form an open ball $B_r(x)\subset \mathbb{R}$.</p>
PMar
358,074
<p>To decide if $\mathbb{R}$ is open in $\mathbb{C}$, you must use the topology of $\mathbb{C}$, not of $\mathbb{R}$. That is, you must take $B_r(x)\subset \mathbb{C}$.</p>
21,262
<p><strong>Bug introduced in 9.0 and fixed in 11.1</strong></p> <hr> <p><code>NDSolve</code> in Mathematica 9.0.0 (MacOS) is behaving strangely with a piecewise right hand side. The following code (a simplified version of my real problem):</p> <pre><code>sol = NDSolve[{x'[t] == Piecewise[{{2, 0 &lt;= Mod[t, 1] &lt; 0.5}, {-1, 0.5 &lt;= Mod[t, 1] &lt; 1}} ], x[0] == 0}, x, {t, 0, 1}]; Print[x[1] /. sol[[1]]]; </code></pre> <p>gives the correct answer of 0.5 about 50% of the time, but often returns -0.5 and -1 instead. Rerunning it gives apparently random results. It always gives the correct result in Mathematica 8.</p> <p>Here's what I've figured out so far:</p> <ol> <li>It apparently has something to do with the <code>Mod[t,1]</code>, because it works fine with just "t" in the <code>Piecewise</code>. Unfortunately I'm looking at a piecewise periodic system (not just from t=0 to 1).</li> <li>It's only the first segment of the solution from t=0 to t=0.5 that varies from run to run.</li> <li>Using initial condition <code>x[10^-100]==0</code> fixes the problem, but this is an ugly hack.</li> </ol> <p>Can anyone replicate this strange behavior, know what's behind it, or have a better suggested fix?</p>
Michael E2
4,999
<p>Taking the OP's clue that starting at <code>x[10^-100] == 0</code> solves the problem, we can try explicitly setting the derivative value at <code>t == 0</code>. The following produces a consistent and correct result:</p> <pre><code>solME2 = First@ NDSolve[{ x'[t] == Piecewise[{ {2, t == 0}, {2, 0 &lt;= Mod[t, 1] &lt; 0.5}, {-1, 0.5 &lt;= Mod[t, 1] &lt; 1}}], x[0] == 0}, x, {t, 0, 1}] ListLinePlot[x /. solME2, Mesh -&gt; All] </code></pre> <p>As an aside, discontinuities are processed as events, and generally an event at the initial condition is problematic. Even if this observation is related to the issue, it is still far from explaining the lack of determinacy in the computation. It's certainly a bug, and a full explanation may be elusive.</p> <p>Note also that turning off discontinuity processing, either with the option <code>Method -&gt; {"DiscontinuityProcessing" -&gt; False}</code> or by using a <code>?NumericQ</code>-protected function for the right-hand side, has its own issues:</p> <pre><code>sol10 = NDSolve[{x'[t] == Piecewise[{ {2, 0 &lt;= Mod[t, 1] &lt; 0.5}, {-1, 0.5 &lt;= Mod[t, 1] &lt; 1}}], x[0] == 0}, x, {t, 0, 10}, Method -&gt; {"DiscontinuityProcessing" -&gt; False}]; ListLinePlot[x /. sol10[[1]], Mesh -&gt; All] </code></pre> <p><img src="https://i.stack.imgur.com/ohgLO.png" alt="Mathematica graphics"></p> <p>The maximum step size is too large and the discontinuities are jumped over. You should be prepared to manually intervene in such cases.</p> <p>The OP's code also leaks variables of the form <code>sNNN</code> from <code>Unique["s"]</code>, like another recent question, <a href="https://mathematica.stackexchange.com/questions/129583/generation-of-global-variables-when-using-ndsolvevalue-and-piecewise-function">Generation of global variables when using NDSolveValue and Piecewise function</a>.</p>
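As an independent cross-check of the expected value (in Python rather than Mathematica, purely as an illustration): the right-hand side is piecewise constant, so forward Euler on a grid aligned with the breakpoint at t = 0.5 reproduces the exact answer x(1) = 2·0.5 − 1·0.5 = 0.5.

```python
def rhs(t):
    """Piecewise RHS from the question: 2 on [0, 0.5) mod 1, else -1."""
    return 2.0 if t % 1.0 < 0.5 else -1.0

def x_at_1(n=1000):
    """Forward Euler on [0, 1] from x(0) = 0. An even n makes the grid hit
    the breakpoint t = 0.5 exactly; with a piecewise-constant RHS, Euler on
    an aligned grid gives the exact solution value."""
    h = 1.0 / n
    x = 0.0
    for i in range(n):
        t = i / n  # exact grid point; avoids floating-point drift past 0.5
        x += h * rhs(t)
    return x

print(x_at_1())  # 0.5 up to rounding
```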
206,723
<p>Can anyone explain why the probability that an integer is divisible by a prime $p$ (or indeed by any integer $p$) is $1/p$?</p>
MikeEVMM
865,089
<p>Although <a href="https://math.stackexchange.com/a/206729/865089">Stephen Stadnicki's answer</a> is much more rigorous, I only made sense of it with the following argument:</p> <p>If <span class="math-container">$p$</span> divides a number <span class="math-container">$i$</span> in <span class="math-container">$1, \ldots, n$</span>, then it must be possible to write that number as <span class="math-container">$i = p \times j$</span>, where <span class="math-container">$j \in \{1, \ldots, n/p\}$</span> (the bound <span class="math-container">$n/p$</span> follows from the fact that <span class="math-container">$i \leq n$</span>).</p> <p>From the range of <span class="math-container">$j$</span>, we can see that there are <span class="math-container">$n/p$</span> numbers in <span class="math-container">$1, \ldots, n$</span> divisible by <span class="math-container">$p$</span>. Therefore, if we are taking a number uniformly from <span class="math-container">$1, \ldots, n$</span>, there is a probability of <span class="math-container">$\frac{n/p}{n} = 1/p$</span> that it will be divisible by <span class="math-container">$p$</span>.</p>
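The counting argument is easy to mirror in code (a small Python illustration; for finite n the exact count is floor(n/p), which is why the fraction only tends to 1/p as n grows):

```python
from fractions import Fraction

def divisible_fraction(n, p):
    """Exact fraction of the integers 1..n that are divisible by p: floor(n/p) / n."""
    return Fraction(n // p, n)

p = 7
for n in (10, 1000, 10 ** 6):
    print(n, float(divisible_fraction(n, p)))  # tends to 1/7 = 0.142857...
```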
105,040
<p>This question on Stack Exchange remained unanswered. </p> <p>Let $\mathbb F$ be a finite field. Denote by $M_n(\mathbb F)$ the set of matrices of order $n$ over $\mathbb F$. For a matrix $A \in M_n(\mathbb F)$, what is the cardinality of $C_{M_n(\mathbb F)} (A)$, the centralizer of $A$ in $M_n(\mathbb F)$? Are there papers about this? </p>
Geoff Robinson
14,450
<p>You asked this on Math Stack Exchange too. One messy case is when $A$ is unipotent. Some cases of that are dealt with in a famous paper of P. Hall and G. Higman, "Reduction Theorems for Burnside's Problem" (Proceedings of the London Mathematical Society, 1956). The centralizer of a semisimple matrix is relatively easy to understand. Therefore, the essence of the question does reside in the structure of the centralizer of a nilpotent matrix.</p>
1,189,216
<p>Wikipedia and other sources claim that </p> <p>$PA +\neg G_{PA}$</p> <p>can be consistent, where $\neg G_{PA}$ is the Gödel statement for PA.</p> <p>So what is the error in my reasoning?</p> <p>$G_{PA}$ = "$G_{PA}$ is unprovable in PA"</p> <p>$\neg G_{PA} $</p> <p>$\implies$ $\neg$ "$G_{PA}$ is unprovable in PA"</p> <p>$\implies$ "$G_{PA}$ is provable in PA" </p> <p>$\implies$ $G_{PA}$</p> <p>I would also appreciate it if someone could provide a somewhat intuitive explanation.</p> <p>Sources:</p> <ol> <li><a href="http://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic#Arithmetic_unsoundness_for_models_with_.7EG_true" rel="nofollow noreferrer">Non-standard model of arithmetic on Wikipedia</a></li> <li><a href="http://en.wikipedia.org/wiki/Rosser%27s_trick#Background" rel="nofollow noreferrer">Rosser's trick on Wikipedia</a></li> <li><a href="https://math.stackexchange.com/a/16383/155096">Understanding Gödel's Incompleteness Theorem</a></li> <li><a href="https://math.stackexchange.com/a/183106/155096">Is the negation of the Gödel sentence always unprovable too?</a></li> </ol>
Hanno
81,567
<p>The last step is the problematic one: You are given a formula $\varphi$ and try to prove, in the (finitistic!) metatheory, that $\textsf{PA}\vdash "\textsf{PA}\vdash\varphi"$ implies $\textsf{PA}\vdash\varphi$. While the implication "$\Leftarrow$" is fine (a proof of $\varphi$ from $\textsf{PA}$ could be explicitly coded, giving a proof of $"\textsf{PA}\vdash\varphi"$ in $\textsf{PA}$, too) the implication "$\Rightarrow$" needs the $\omega$-consistency of $\textsf{PA}$: It might happen that, even though we have $\textsf{PA}$ proving $\exists n: "n\text{ codes a proof of }\varphi"$, for every 'actual' natural number $\textbf{n}$ we have $\textsf{PA}\vdash "\textbf{n}\text{ does not code a proof of }\varphi$". This is what Asaf alluded to model theoretically.</p>
300,745
<p>If a function is uniformly continuous in $(a,b)$ can I say that its image is bounded?</p> <p>($a$ and $b$ being finite numbers).</p> <p>I tried proving and disproving it. Couldn't find an example for a non-bounded image. </p> <p>Is there any basic proof or counter example for any of the cases?</p> <p>Thanks a million!</p>
Community
-1
<p>There is a $\delta &gt;0$ such that for all $x,y\in (a,b)$ with $|x-y|\leq \delta$ we have $|f(x)-f(y)|\leq 1$. Let $p=\min (\delta,\frac{b-a}{3})$. Then $f$ is continuous on the compact interval $I=[a+p,b-p]$, hence $f$ is bounded on $I$. In addition, $f$ is bounded by $|f(a+p)|+|f(b-p)|+1$ on $(a,b)\setminus I$, since every point there lies within $\delta$ of $a+p$ or of $b-p$. This means $f$ is bounded on $(a,b)$.</p>
355,454
<p>Let <span class="math-container">$p$</span> be an odd prime. The <span class="math-container">$\mathbb F_p$</span> cohomology of the cyclic group of order <span class="math-container">$p$</span> is well-known: <span class="math-container">$\mathrm{H}^\bullet(C_p, \mathbb F_p) = \mathbb F_p[\xi,x]$</span> where <span class="math-container">$\xi$</span> has degree 1, <span class="math-container">$x$</span> has degree 2, and the Koszul signs are imposed (so that in particular <span class="math-container">$\xi^2 = 0$</span>). As a module over the Steenrod algebra, the only interesting fact is that <span class="math-container">$x = \beta \xi$</span>, where <span class="math-container">$\beta$</span> denotes the Bockstein. The rest of the Steenrod powers can be worked out by hand.</p> <p>There are two groups of order <span class="math-container">$p^2$</span>. The <span class="math-container">$\mathbb F_p$</span> cohomology of <span class="math-container">$C_p \times C_p$</span>, including its Steenrod powers, is computable from the Kunneth formula. For the cyclic group <span class="math-container">$C_{p^2}$</span>, you have to think slightly more, because there is a ring isomorphism <span class="math-container">$\mathrm{H}^\bullet(C_p, \mathbb F_p) \cong \mathrm{H}^\bullet(C_{p^2}, \mathbb F_p)$</span>, but the Bockstein vanishes on <span class="math-container">$\mathrm{H}^\bullet(C_{p^2}, \mathbb F_p)$</span>. Still I think the Steenrod algebra action is straightforward to write down. </p> <p>I want to know about the groups of order <span class="math-container">$p^3$</span>. The abelian ones are not too hard, I think, and there are two nonabelian groups. The one with exponent <span class="math-container">$p^2$</span> is traditionally denoted "<span class="math-container">$p^{1+2}_-$</span>", and the one with exponent <span class="math-container">$p$</span> is traditionally denoted "<span class="math-container">$p^{1+2}_+$</span>". 
I care more about the latter one, but I'm happy to hear answers about both. And right now I care most about the prime <span class="math-container">$p=3$</span>.</p> <p>The cohomology of these groups was computed in 1968 by Lewis in <a href="https://www.jstor.org/stable/1994856" rel="nofollow noreferrer">The Integral Cohomology Rings of Groups of Order <span class="math-container">$p^3$</span></a>. Actually, as is clear from the title, Lewis computes the integral cohomology, from which the <span class="math-container">$\mathbb F_p$</span>-cohomology can be read off using the universal coefficient theorem. For the case I care more about, Lewis finds that <span class="math-container">$\mathrm{H}^\bullet(p^{1+2}_+, \mathbb Z)$</span> has the following presentation. (I am quoting from <a href="https://doi.org/10.1017/S0305004100075940" rel="nofollow noreferrer">Green, On the cohomology of the sporadic simple group <span class="math-container">$J_4$</span>, 1993</a>.) The generators are: <span class="math-container">$$ \begin{matrix} \text{name} &amp; \text{degree} &amp; \text{additive order} \\ \alpha_1, \alpha_2 &amp; 2 &amp; p \\ \nu_1, \nu_2 &amp; 3 &amp; p \\ \theta_j, 2 \leq j \leq p-2 &amp; 2j &amp; p \\ \kappa &amp; 2p-2 &amp; p \\ \zeta &amp; 2p &amp; p^2 \end{matrix}$$</span> (For the <span class="math-container">$p=3$</span> case that I care most about, there are no <span class="math-container">$\theta$</span>s, since <span class="math-container">$2 \not\leq 3-2$</span>.) 
A complete (possibly redundant) list of relations is: <span class="math-container">$$ \nu_i^2 = 0, \qquad \theta_i^2 = 0, \qquad \alpha_i \theta_j = \nu_i \theta_j = \theta_k \theta_j = \kappa \theta_j = 0$$</span> <span class="math-container">$$\alpha_1 \nu_2 = \alpha_2 \nu_1, \qquad \alpha_1 \alpha_2^p = \alpha_2 \alpha_1^p, \qquad \nu_1\alpha_2^p = \nu_2 \alpha_1^p,$$</span> <span class="math-container">$$ \alpha_i\kappa = -\alpha_i^p, \qquad \nu_i\kappa = -\alpha_i^{p-1}\nu_i,$$</span> <span class="math-container">$$ \kappa^2 = \alpha_1^{2p-2} - \alpha_1^{p-1}\alpha_2^{p-1} + \alpha_2^{2p-2}, $$</span> <span class="math-container">$$ \nu_1 \nu_2 = \begin{cases} \theta_3, &amp; p &gt; 3, \\ 3\zeta, &amp; p = 3. \end{cases}$$</span> From this Green (ibid.), for example, writes down a PBW-type basis.</p> <blockquote> <p>Question: What is the action of the Steenrod algebra on <span class="math-container">$\mathrm{H}^\bullet(p^{1+2}_+, \mathbb F_p)$</span>?</p> </blockquote> <p>I'm not very good at Steenrod algebras. Does the ring structure on the <span class="math-container">$\mathbb Z$</span>-cohomology suffice to determine the action? For instance, the additive structure of <span class="math-container">$\mathrm{H}^\bullet(G, \mathbb Z)$</span> already determines the Bockstein action on <span class="math-container">$\mathrm{H}^\bullet(G, \mathbb F_p)$</span>. If there is a systematic way to do it, where can I learn to do the computations?</p>
Nicholas Kuhn
102,519
<p>If <span class="math-container">$P$</span> is the group of order <span class="math-container">$p^3$</span> and exponent <span class="math-container">$p$</span>, its mod <span class="math-container">$p$</span> cohomology ring is known to have its depth = its Krull dimension = rank of a maximal elementary abelian subgroup = 2. A theorem of Jon Carlson then implies that the product of restriction maps <span class="math-container">$$ H^*(P;\mathbb F_p) \rightarrow \prod_E H^*(E;\mathbb F_p)$$</span> is monic, where the product is over (conjugacy classes of) subgroups <span class="math-container">$E \simeq \mathbb Z/p \times \mathbb Z/p$</span>. </p> <p>Thus the ring you care about, viewed as an algebra equipped with Steenrod operations (I would call this an unstable <span class="math-container">$A_p$</span>--algebra) embeds in a known unstable algebra. Thus if you know how algebra generators of <span class="math-container">$H^*(P;\mathbb F_p)$</span> restrict to the various <span class="math-container">$H^*(E;\mathbb F_p)$</span>'s, it shouldn't be hard to calculate Steenrod operations.</p> <p>For example, David Green and Simon King's group cohomology website tells you cohomology ring generators and relations, and these restrictions for the group of order 27. (See <a href="https://users.fmi.uni-jena.de/cohomology/27web/27gp3.html" rel="nofollow noreferrer">https://users.fmi.uni-jena.de/cohomology/27web/27gp3.html</a>) I'll let you take it from here.</p> <p>[By the way, you began your question with a remark that there isn't so much to the cohomology ring <span class="math-container">$H^*(C_p;\mathbb F_p)$</span> as a module over the Steenrod algebra. Yes, it is trivial to compute, but it is a deep theorem, with huge unexpected consequences, that this is a injective object in the category of unstable <span class="math-container">$A_p$</span>--modules. 
See the book by Lionel Schwartz on the Sullivan conjecture for more detail.]</p> <p><strong>Edit the next day</strong>: Thanks to Leason for pointing out that my map doesn't detect for <span class="math-container">$p&gt;3$</span>. To get the detection result, one needs that the depth = Krull dimension, and this turns out to only happen in the one case that I looked up carefully: <span class="math-container">$p=3$</span>. So in the other cases, the depth will be 1 = rank of the center, and one needs a more general detection theorem, pioneered by Henn-Lannes-Schwartz in the mid 1990's, and then explored by me in various papers about a decade ago. (Totaro later wrote about this in his cohomology book: this is the result mentioned by Heard.) In the case in hand, the range of the detection map for <span class="math-container">$H^*(P;\mathbb F_p)$</span> will need one more term in the product: Let <span class="math-container">$C &lt; P$</span> be the center: a group of order <span class="math-container">$p$</span>. The multiplication homomorphism <span class="math-container">$C \times P \rightarrow P$</span> induces a map of unstable algebras <span class="math-container">$$ H^*(P;\mathbb F_p) \rightarrow H^*(C;\mathbb F_p) \otimes H^{*\leq 2p}(P;\mathbb F_p)$$</span> where the last term means truncate above degree <span class="math-container">$2p$</span>. That the number <span class="math-container">$2p$</span> works to detect all remaining nilpotence is an application of my general result: it can be determined by understanding the restriction to the center.</p> <p>At any rate, for that original group of order 27 and exponent 3, this isn't needed. (The group of order 27 and exponent 9 will also need that extra factor in the detection map range.) </p>
355,454
<p>Let <span class="math-container">$p$</span> be an odd prime. The <span class="math-container">$\mathbb F_p$</span> cohomology of the cyclic group of order <span class="math-container">$p$</span> is well-known: <span class="math-container">$\mathrm{H}^\bullet(C_p, \mathbb F_p) = \mathbb F_p[\xi,x]$</span> where <span class="math-container">$\xi$</span> has degree 1, <span class="math-container">$x$</span> has degree 2, and the Koszul signs are imposed (so that in particular <span class="math-container">$\xi^2 = 0$</span>). As a module over the Steenrod algebra, the only interesting fact is that <span class="math-container">$x = \beta \xi$</span>, where <span class="math-container">$\beta$</span> denotes the Bockstein. The rest of the Steenrod powers can be worked out by hand.</p> <p>There are two groups of order <span class="math-container">$p^2$</span>. The <span class="math-container">$\mathbb F_p$</span> cohomology of <span class="math-container">$C_p \times C_p$</span>, including its Steenrod powers, is computable from the Kunneth formula. For the cyclic group <span class="math-container">$C_{p^2}$</span>, you have to think slightly more, because there is a ring isomorphism <span class="math-container">$\mathrm{H}^\bullet(C_p, \mathbb F_p) \cong \mathrm{H}^\bullet(C_{p^2}, \mathbb F_p)$</span>, but the Bockstein vanishes on <span class="math-container">$\mathrm{H}^\bullet(C_{p^2}, \mathbb F_p)$</span>. Still I think the Steenrod algebra action is straightforward to write down. </p> <p>I want to know about the groups of order <span class="math-container">$p^3$</span>. The abelian ones are not too hard, I think, and there are two nonabelian groups. The one with exponent <span class="math-container">$p^2$</span> is traditionally denoted "<span class="math-container">$p^{1+2}_-$</span>", and the one with exponent <span class="math-container">$p$</span> is traditionally denoted "<span class="math-container">$p^{1+2}_+$</span>". 
I care more about the latter one, but I'm happy to hear answers about both. And right now I care most about the prime <span class="math-container">$p=3$</span>.</p> <p>The cohomology of these groups was computed in 1968 by Lewis in <a href="https://www.jstor.org/stable/1994856" rel="nofollow noreferrer">The Integral Cohomology Rings of Groups of Order <span class="math-container">$p^3$</span></a>. Actually, as is clear from the title, Lewis computes the integral cohomology, from which the <span class="math-container">$\mathbb F_p$</span>-cohomology can be read off using the universal coefficient theorem. For the case I care more about, Lewis finds that <span class="math-container">$\mathrm{H}^\bullet(p^{1+2}_+, \mathbb Z)$</span> has the following presentation. (I am quoting from <a href="https://doi.org/10.1017/S0305004100075940" rel="nofollow noreferrer">Green, On the cohomology of the sporadic simple group <span class="math-container">$J_4$</span>, 1993</a>.) The generators are: <span class="math-container">$$ \begin{matrix} \text{name} &amp; \text{degree} &amp; \text{additive order} \\ \alpha_1, \alpha_2 &amp; 2 &amp; p \\ \nu_1, \nu_2 &amp; 3 &amp; p \\ \theta_j, 2 \leq j \leq p-2 &amp; 2j &amp; p \\ \kappa &amp; 2p-2 &amp; p \\ \zeta &amp; 2p &amp; p^2 \end{matrix}$$</span> (For the <span class="math-container">$p=3$</span> case that I care most about, there are no <span class="math-container">$\theta$</span>s, since <span class="math-container">$2 \not\leq 3-2$</span>.) 
A complete (possibly redundant) list of relations is: <span class="math-container">$$ \nu_i^2 = 0, \qquad \theta_i^2 = 0, \qquad \alpha_i \theta_j = \nu_i \theta_j = \theta_k \theta_j = \kappa \theta_j = 0$$</span> <span class="math-container">$$\alpha_1 \nu_2 = \alpha_2 \nu_1, \qquad \alpha_1 \alpha_2^p = \alpha_2 \alpha_1^p, \qquad \nu_1\alpha_2^p = \nu_2 \alpha_1^p,$$</span> <span class="math-container">$$ \alpha_i\kappa = -\alpha_i^p, \qquad \nu_i\kappa = -\alpha_i^{p-1}\nu_i,$$</span> <span class="math-container">$$ \kappa^2 = \alpha_1^{2p-2} - \alpha_1^{p-1}\alpha_2^{p-1} + \alpha_2^{2p-2}, $$</span> <span class="math-container">$$ \nu_1 \nu_2 = \begin{cases} \theta_3, &amp; p &gt; 3, \\ 3\zeta, &amp; p = 3. \end{cases}$$</span> From this Green (ibid.), for example, writes down a PBW-type basis.</p> <blockquote> <p>Question: What is the action of the Steenrod algebra on <span class="math-container">$\mathrm{H}^\bullet(p^{1+2}_+, \mathbb F_p)$</span>?</p> </blockquote> <p>I'm not very good at Steenrod algebras. Does the ring structure on the <span class="math-container">$\mathbb Z$</span>-cohomology suffice to determine the action? For instance, the additive structure of <span class="math-container">$\mathrm{H}^\bullet(G, \mathbb Z)$</span> already determines the Bockstein action on <span class="math-container">$\mathrm{H}^\bullet(G, \mathbb F_p)$</span>. If there is a systematic way to do it, where can I learn to do the computations?</p>
Drew Heard
16,785
<p>I think the answer you want can be found in the following article:</p> <pre><code>AUTHOR = {Leary, I. J.}, TITLE = {The mod-{<span class="math-container">$p$</span>} cohomology rings of some {<span class="math-container">$p$</span>}-groups}, JOURNAL = {Math. Proc. Cambridge Philos. Soc.}, FJOURNAL = {Mathematical Proceedings of the Cambridge Philosophical Society}, VOLUME = {112}, YEAR = {1992}, NUMBER = {1}, PAGES = {63--75}, ISSN = {0305-0041}, </code></pre>
11,244
<p>In order to evaluate new educational material, the contentment of students with this material is often measured. However, just because a student is contented doesn't mean that he/she has actually learned something. Is there any research investigating the correlation between students' contentment and the educational quality of the presented material?</p>
Anschewski
199
<p>I would recommend that you specify the term "educational quality". I think there is a study indicating that in German schools, the two educational goal variables of students' motivation and students' mathematical knowledge are negatively correlated across classes, so you have a trade-off there. I guess students' contentment would be positively correlated with their motivation. Correlations are not transitive, however, so I cannot say anything about contentment and mathematical learning. You might also benefit from specifying your target group. The correlation you asked for may differ across groups of students (school, university, vocational education, pensioners).</p>
2,634,277
<p>I am working on some development formulas for surfaces and as a byproduct of abstract theory I get that: $$\int_{-\frac{\pi}{2}}^\frac{\pi}{2}\frac{1+\sin^2\theta}{(\cos^4\theta+(\gamma\cos^2\theta-\sin\theta)^2)^\frac{3}{4}}d\theta$$ is independent of the parameter $\gamma\in\mathbb{R}$. I thought that there was something wrong with my calculations, but it actually turns out, using Mathematica, that the value is approximately $5.24412$ independently of the $\gamma$ I plug into the calculation of the integral. Is there any way to verify by direct computation or complex analysis that this is actually a constant, or at least, is this kind of integral studied?</p> <p>Edit: obviously, differentiating under the integral sign does not help much</p>
Yuriy S
269,624
<p>This is not an answer, but too long for a comment.</p> <p>I still think it may be helpful to consider the derivative (I use $x$ as the integration variable).</p> <p>$$J=-\frac{2}{3}\frac{d}{d \gamma} I(\gamma)=\int_{-\pi/2}^{\pi/2} \frac{(1+\sin^2 x)(\gamma ~\cos^2 x-\sin x) \cos^2 x~dx}{\left(\cos^4 x+(\gamma~ \cos^2 x-\sin x)^2 \right)^{7/4}}$$</p> <p>We need to prove $J \equiv 0$.</p> <p>However, there's a stronger statement $^*$, which seems to be true numerically:</p> <p>$$J_1=\int_{-\pi/2}^{\pi/2} \frac{\gamma (1+\sin^2 x) \cos^4 x~dx}{\left(\cos^4 x+(\gamma~ \cos^2 x-\sin x)^2 \right)^{7/4}}$$</p> <p>$$J_2=\int_{-\pi/2}^{\pi/2} \frac{(1+\sin^2 x) \cos^2 x \sin x ~dx}{\left(\cos^4 x+(\gamma~ \cos^2 x-\sin x)^2 \right)^{7/4}}$$</p> <blockquote> <p>$$J_1 \equiv J_2$$</p> </blockquote> <p>Mathematica confirms it numerically for $\gamma \in (-1,1)$. For $|\gamma|&gt;1$ there's some trouble with computing the integrals numerically.</p> <p><a href="https://i.stack.imgur.com/91WK6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/91WK6.png" alt="enter image description here"></a></p> <p>Moreover, as we see, both integrals depend linearly on $\gamma$ with very good accuracy.</p> <p>Using linear regression in Mathematica, I obtained, with amazing accuracy, the following fit:</p> <blockquote> <p>$$J_1(\gamma)= \frac{2}{3} L \gamma $$</p> </blockquote> <p>where $L$ is the lemniscate constant:</p> <p>$$\frac{2}{3} L = \frac{1}{3 \sqrt{2 \pi}} \left( \Gamma \left( \frac{1}{4} \right) \right)^2 =\frac{4}{3} \int_0^1 \frac{dx}{\sqrt{1-x^4}}$$</p> <p>Here are the plots with the proposed closed form (green) for comparison:</p> <p><a href="https://i.stack.imgur.com/9RRxH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9RRxH.png" alt="enter image description here"></a></p> <hr> <p>This gives us another task: to prove the proposed closed form for both integrals.</p> <hr> <p>Note that for $J_1$ we can divide by $\gamma$ and obtain the following proposition:</p> <p>$$\int_{-\pi/2}^{\pi/2} \frac{ (1+\sin^2 x) \cos^4 x~dx}{\left(\cos^4 x+(\gamma~ \cos^2 x-\sin x)^2 \right)^{7/4}}=\frac{4}{3} \int_0^1 \frac{dx}{\sqrt{1-x^4}}$$</p> <p>This makes $J_1/ \gamma$ yet another integral which doesn't depend on $\gamma$.</p> <hr> <p>Edit, remark:</p> <p>$^*$ this statement is not really stronger because, as we can see, the rest of the integrand is non-negative for all $x$, which means that $J_1 \equiv J_2$ follows from $J \equiv 0$.</p>
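<p>As a quick numerical cross-check of the claimed $\gamma$-independence (an illustration only, using a hand-rolled composite Simpson rule rather than Mathematica's quadrature; the comparison value $\Gamma(1/4)^2/\sqrt{2\pi}\approx 5.24412$ is the lemniscate-type constant matching the number quoted in the question):</p>

```python
import math

def integrand(theta, gamma):
    c2 = math.cos(theta) ** 2
    return (1 + math.sin(theta) ** 2) / (c2 * c2 + (gamma * c2 - math.sin(theta)) ** 2) ** 0.75

def I(gamma, n=4000):
    # composite Simpson's rule on [-pi/2, pi/2]; n must be even
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    s = integrand(a, gamma) + integrand(b, gamma)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * integrand(a + i * h, gamma)
    return s * h / 3

values = [I(g) for g in (-0.5, 0.0, 0.7, 1.0)]
```

<p>All four values agree with each other, and with $5.24412$, to well within the quadrature error.</p>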
520,046
<blockquote> <p>Find the smallest natural number that leaves residues $5,4,3,$ and $2$ when divided respectively by the numbers $6,5,4,$ and $3$.</p> </blockquote> <p>I tried $$x\equiv5\pmod6\\x\equiv4\pmod5\\x\equiv3\pmod4\\x\equiv2\pmod3$$ What is the value of $x$?</p>
Brian M. Scott
12,042
<p>HINT: Notice that your congruences are equivalent to the following ones:</p> <p>$$\left\{\begin{align*} x\equiv-1\pmod6\\ x\equiv-1\pmod5\\ x\equiv-1\pmod4\\ x\equiv-1\pmod3 \end{align*}\right.$$</p> <p>In other words, $x+1$ is divisible by $6,5,4$, and $3$. What’s the smallest positive integer with that property?</p>
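<p>A brute-force confirmation of where the hint leads (an illustration, not part of the proof):</p>

```python
# Smallest positive x with x ≡ -1 modulo each of 6, 5, 4, 3; equivalently,
# x + 1 must be divisible by lcm(3, 4, 5, 6) = 60.
def smallest_solution():
    x = 1
    while not all(x % m == m - 1 for m in (6, 5, 4, 3)):
        x += 1
    return x

x = smallest_solution()  # 59
```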
520,046
<blockquote> <p>Find the smallest natural number that leaves residues $5,4,3,$ and $2$ when divided respectively by the numbers $6,5,4,$ and $3$.</p> </blockquote> <p>I tried $$x\equiv5\pmod6\\x\equiv4\pmod5\\x\equiv3\pmod4\\x\equiv2\pmod3$$ What is the value of $x$?</p>
vijay_pavani.123
118,683
<p>The problem can be solved by the Chinese Remainder Theorem (CRT) or by taking the lcm of the divisors. Here the divisors are 3, 4, 5 and 6, and then 1 must be subtracted. Can you guess why we subtract 1? You can ask me if you need any clarification.</p>
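<p>A sketch of the lcm approach in code (illustration; <code>math.lcm</code> requires Python 3.9+):</p>

```python
from math import lcm

# x + 1 must be divisible by every one of the divisors, so the smallest
# natural number is lcm(3, 4, 5, 6) - 1.
x = lcm(3, 4, 5, 6) - 1
remainders = [x % m for m in (6, 5, 4, 3)]
```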
1,987,387
<p>I don't remember any method to compute the closed form for the following series. $$ \sum_{k=0}^{\infty}\binom{3k}{k} x^k .$$</p> <p>I tried putting $\binom{3k}{k}$ into Mathematica for different $k$ and asking for the generating function; it delivers a complicated formula, namely the following. $$ \frac{2\cos\left[\frac{1}{3} \sin^{-1}\left(\frac{\sqrt{27x}}{2}\right)\right]}{\sqrt{4-27x}} $$</p> <p>I was wondering if there is any simpler form? </p>
epi163sqrt
132,007
<blockquote> <p><em>Hint:</em> A closed form can be found by means of the <em><a href="https://en.wikipedia.org/wiki/Lagrange_inversion_theorem" rel="nofollow noreferrer">Lagrange Inversion Formula</a></em>.</p> <p>An answer based upon this method is given at <em><a href="https://math.stackexchange.com/questions/774434/generating-function-of-binom3nn/815207#815207">this MSE link</a></em>.</p> </blockquote>
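<p>A numerical sanity check (an illustration; note that the arcsine argument that actually matches the coefficients is $\sqrt{27x}/2$, i.e. $\frac{3\sqrt{3x}}{2}$, inside the radius of convergence $|x| &lt; 4/27$):</p>

```python
import math

def series(x, terms=60):
    # partial sum of  sum_k C(3k, k) x^k
    return sum(math.comb(3 * k, k) * x ** k for k in range(terms))

def closed_form(x):
    return 2 * math.cos(math.asin(math.sqrt(27 * x) / 2) / 3) / math.sqrt(4 - 27 * x)

diff = abs(series(0.01) - closed_form(0.01))
```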
22
<p>By matrix-defined, I mean</p> <p>$$\left&lt;a,b,c\right&gt;\times\left&lt;d,e,f\right&gt; = \left| \begin{array}{ccc} i &amp; j &amp; k\\ a &amp; b &amp; c\\ d &amp; e &amp; f \end{array} \right|$$</p> <p>...instead of the definition as the product of the magnitudes multiplied by the sine of the angle between them, in the orthogonal direction.</p> <p>If I try taking the cross product of two vectors with no $k$ component, I get one with only $k$, which is expected. But why?</p> <p>As has been pointed out, I am asking why the algebraic definition lines up with the geometric definition.</p>
BBischof
16
<p>Assuming you know the definition of orthogonal as "a is orthogonal to b iff $a\cdot b=0$", we can calculate $(a \times b)\cdot a = a_1(a_2b_3-a_3b_2)-a_2(a_1b_3-a_3b_1)+a_3(a_1b_2-a_2b_1)=0$ and likewise $(a \times b)\cdot b=0$, so the cross product is orthogonal to both. As Nold mentioned, if the two vectors a and b lie in the x,y plane, then the orthogonal vectors must be purely in the z direction.</p>
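<p>A tiny numeric illustration of the same orthogonality (the helper functions are mine, just for the check):</p>

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a, b = (1, 2, 0), (3, -1, 0)  # both vectors lie in the x,y plane
n = cross(a, b)               # purely in the z direction
```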
500,931
<p>I have this theorem which I can't prove. Please help.</p> <p>"Show that every positive rational number $x$ can be expressed in the form $\sum_{k=1}^n \frac{a_k}{k!}$ in one and only one way where each $a_k$ is a non-negative integer with $ a_k≤ k − 1$ for $k ≥ 2$ and $a_n&gt;0$."</p> <p>I think the ONE way is <a href="https://math.stackexchange.com/questions/71851/prove-that-any-rational-can-be-expressed-in-the-form-sum-limits-k-1n-frac">this</a>. But I don't know how to prove that it is the ONLY way.</p> <p>Or is <a href="https://math.stackexchange.com/questions/71851/prove-that-any-rational-can-be-expressed-in-the-form-sum-limits-k-1n-frac">this</a> not that ONE way?</p> <p>Please prove it. Thank you.</p>
Brian M. Scott
12,042
<p>HINT: For existence I find it easiest to work backwards. Express the fractional part of $x$ as a fraction $\frac{c}d$ in lowest terms, and let $n$ be minimal such that $d\mid n!$. Rewrite the fractional part as $\frac{c}{n!}$ (rescaling $c$ accordingly), and write $c=q_0n+a_n$, where $0\le a_n&lt;n$. Then the fractional part of $x$ is</p> <p>$$x-\lfloor x\rfloor=\frac{c}{n!}=\frac{q_0n+a_n}{n!}=\frac{q_0}{(n-1)!}+\frac{a_n}{n!}\;.$$</p> <p>Now write $q_0=q_1(n-1)+a_{n-1}$, where $0\le a_{n-1}&lt;n-1$, and note that</p> <p>$$\frac{q_0}{(n-1)!}=\frac{q_1(n-1)+a_{n-1}}{(n-1)!}=\frac{q_1}{(n-2)!}+\frac{a_{n-1}}{(n-1)!}\;,$$</p> <p>so that you can continue the process to find in succession $a_n$, $a_{n-1}$, $a_{n-2}$, and so on to $a_1$.</p> <p>You can actually organize this as an induction on $n$: if your induction hypothesis is that every rational number that can be written with a denominator that divides $(n-1)!$ has a decomposition of the desired kind, the first calculation above shows that the same is true of fractions with denominators dividing $n!$.</p> <p>For uniqueness use the fact that for $1\le m\le n$ we have</p> <p>$$\sum_{k=m}^n\frac{k-1}{k!}=\sum_{k=m}^n\left(\frac1{(k-1)!}-\frac1{k!}\right)=\frac1{(m-1)!}-\frac1{n!}&lt;\frac1{(m-1)!}\;;$$</p> <p>this implies that if you don’t choose $a_{m-1}$ to be as large as possible, the remaining terms cannot make up the difference.</p>
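<p>The backward construction in the hint can be run mechanically; here is a small sketch (function and variable names are mine):</p>

```python
from fractions import Fraction
from math import factorial

def factorial_digits(c, n):
    """Digits a_1, ..., a_n with c/n! = sum_k a_k/k!, where 0 <= a_k <= k-1
    for every k >= 2 (a_1 absorbs whatever quotient is left)."""
    digits = {}
    q = c
    for k in range(n, 1, -1):
        q, digits[k] = divmod(q, k)   # the remainder a_k lies in [0, k-1]
    digits[1] = q
    return digits

d = factorial_digits(5, 3)            # 5/3! = 0/1! + 1/2! + 2/3!
value = sum(Fraction(a, factorial(k)) for k, a in d.items())
```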
1,800,519
<blockquote> <p>Let $\omega$ be an $n$-form and $\mu$ an $m$-form, both defined on a manifold $M$. Is the Lie derivative $L_{X}(\omega \wedge \mu)$, where $X$ is a smooth vector field on $M$, an exact form? </p> </blockquote> <p>I think it is, but I've been unable to prove it, so any help would be greatly appreciated. </p>
YannickSSE
238,069
<p>In general this is not true. Recall that $$ L_X(\omega) = i_X d\omega + d i_X\omega $$ where the right-hand term is exact but the left-hand term need not be. As an example for your case, take $N$ a manifold with a non-exact form $\mu$, let $\omega$ be a 0-form (function) on $\mathbb{R}$, define $M=N\times\mathbb{R}$, and note that the pullback of $\mu$ to $M$ is still not exact. Taking $\omega=x$ and $X=\partial_x$, where $x$ is the variable in $\mathbb{R}$, one sees $$ L_X(\omega\wedge \mu) = L_X(\omega)\wedge\mu +(-1)^0\omega\wedge L_X(\mu)=\mu $$ since the projection of $X$ on $T_N$ vanishes.</p>
354,100
<p>Does the expression <span class="math-container">$$\frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2}+1)}R^n,$$</span> which gives the volume of an <span class="math-container">$n$</span>-dimensional ball of radius <span class="math-container">$R$</span> when <span class="math-container">$n$</span> is a nonnegative integer, have any known significance when <span class="math-container">$n$</span> is an odd negative integer?</p>
Anixx
10,059
<p>I will answer regarding the dimension <span class="math-container">$-1$</span>.</p> <p>An example of such a space is the set of periodic lattices on the real line.</p> <p>Indeed, you can see that the <a href="https://en.wikipedia.org/wiki/Hausdorff_dimension" rel="nofollow noreferrer">Hausdorff dimension</a> of a periodic lattice is <span class="math-container">$-1$</span>: if we scale the lattice down by a factor of two, it can include two copies of the original lattice. Since scaling the fractal down by a factor of two makes it twice as big, its dimension is <span class="math-container">$\frac{\ln 2}{\ln (1/2)}=-1$</span>.</p> <p>The formula for the volume of a ball gives <span class="math-container">$\frac1{\pi R}$</span> for <span class="math-container">$n=-1$</span>. This means the unit ball (<span class="math-container">$R=1$</span>) includes only one lattice: the <span class="math-container">$\pi$</span>-periodic one.</p> <p>If we reduce the radius (the step of the lattice divided by <span class="math-container">$\pi$</span>), its volume increases: a lattice with step <span class="math-container">$1/2$</span> can be represented as two lattices of step <span class="math-container">$1$</span>. Thus a lattice of step <span class="math-container">$1/2$</span> consists of two &quot;points&quot; (<span class="math-container">$1$</span>-periodic lattices), and so has twice the volume of a <span class="math-container">$1$</span>-periodic lattice.</p> <p>Alternatively, we can consider it to be 1 point with &quot;weight&quot; 2 (the weight being proportional to the density of a lattice, so that our space has a fuzzy membership function).</p> <p>Similarly, if we increase the lattice step (the &quot;radius&quot; of the <span class="math-container">$-1$</span>-sphere), we can consider it a lattice with weight below <span class="math-container">$1$</span>, the same as its volume.</p> <p>So, the volume of the <span class="math-container">$-1$</span>-ball is the density of the lattice.</p> <p>Interesting observation: in positive-dimensional space the ball becomes a point as its radius is reduced to zero. In zero-dimensional Euclidean space the ball is always a point, regardless of the &quot;radius&quot;, and in this <span class="math-container">$-1$</span>-dimensional space the ball becomes a zero-volume point as we increase the radius infinitely.</p> <p>You can find some further ideas <a href="https://tglad.blogspot.com/2017/08/reframing-geometry-to-include-negative.html" rel="nofollow noreferrer">here</a>.</p>
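<p>For what it's worth, the value <span class="math-container">$\frac1{\pi R}$</span> quoted above is exactly what the ball-volume formula gives at <span class="math-container">$n=-1$</span> (a quick check):</p>

```python
import math

def ball_volume(n, R):
    # pi^(n/2) / Gamma(n/2 + 1) * R^n, evaluated for real n
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * R ** n

v = ball_volume(-1, 2.0)  # should equal 1 / (2 * pi)
```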
1,044,009
<p>I have two numbers $N$ and $M$. I want to efficiently calculate how many pairs $a,b$ there are such that $1 \leq a \leq N$ and $1 \leq b \leq M$ and $ab$ is a perfect square.</p> <p>I know the obvious $N*M$ algorithm to compute this. But I want something better than that. I think it can be done in better time, maybe $\operatorname{O}(M+N)$ or something like that, by calculating new pairs directly from previous pairs rather than iterating over all $a$ and $b$. Thanks for any help in advance. Pseudocode would be most helpful.</p>
Rory Daulton
161,807
<p>If you have a list of prime numbers up to $\max(M,N)$, here is one way to do it.</p> <p>For each increasing $a$ from $1$ through $N$, find the prime decomposition of $a$. Multiply the primes (using exponent $1$) that have an odd exponent in the decomposition--call the result $c$. If $c \le M$ then $c$ is your smallest possible $b$ for this $a$; otherwise, there is no such $b$ for this $a$.</p> <p>Then use a recursive algorithm to multiply $c$ by numbers $d$ made from prime decompositions with even exponents. This algorithm would be something like: try $d=2^2$. If $c \cdot d \le M$ then use it and use $3^2$ and so on. When that trial is done, use $d=2^4$, and so on.</p> <p>The pseudo code would be a little tricky here, as for all recursion, but it should not be hard. My guess for this algorithm is order $N \cdot \ln(M)$.</p>
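<p>A sketch of the underlying idea in code (my own variant, not exactly the algorithm above): $ab$ is a perfect square iff $a$ and $b$ have the same squarefree part, so the pairs can be counted by grouping on squarefree parts.</p>

```python
import math

def squarefree_part(n):
    s, d = n, 2
    while d * d <= s:
        while s % (d * d) == 0:
            s //= d * d
        d += 1
    return s

def count_pairs(N, M):
    # a = s * u^2 <= N has squarefree part s exactly when u <= isqrt(N // s)
    total = 0
    for s in range(1, min(N, M) + 1):
        if squarefree_part(s) == s:  # s is squarefree
            total += math.isqrt(N // s) * math.isqrt(M // s)
    return total

def count_pairs_brute(N, M):
    return sum(math.isqrt(a * b) ** 2 == a * b
               for a in range(1, N + 1) for b in range(1, M + 1))
```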
876,209
<p>I am confused about how to write a formal proof for sum notations. How would I write a formal proof for this example?</p> <p>Prove that $$\sum\limits_{k = 0}^\infty\frac{2}{3^k} = 3.$$ Prove for any $\alpha \in \{0, 2\}^\mathbb{N}$ that $$0 \le \sum\limits_{k = 0}^\infty\frac{\alpha(k)}{3^k} \le 3.$$</p>
Shawn O'Hare
161,325
<p>The series $S:=\sum_{k=0}^{\infty} \frac{2}{3^k}$ is a geometric series with first term $a=2$ and ratio $r=1/3$, therefore $$ \lim_{n \to \infty} \sum_{k=0}^n \frac{2}{3^k} = \frac{a}{1-r} = \frac{2}{2/3} = 3. $$</p> <p>Now let $T:=\sum_{k=0}^{\infty} \frac{\alpha(k)}{3^k}$. For any $k \in \mathbb N$ it's clear that $0 \leq \frac{\alpha(k)}{3^k} \leq \frac{2}{3^k}$ and so we have the following inequality involving the partial sums $T_n:=\sum_{k=0}^n \frac{\alpha(k)}{3^k}$ and $S_n:=\sum_{k=0}^n \frac{2}{3^k}$: $$ 0 \leq T_n \leq S_n $$ for all $n \in \mathbb N$. The sequence $(T_n)_{n \in \mathbb N}$ is increasing and bounded from above, hence converges to some value at most $\lim_{n \to \infty} S_n = 3$. This establishes that the series $T$ itself converges to some value between $0$ and $3$, as desired. </p>
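<p>A quick numeric illustration of both facts (illustration only):</p>

```python
import random

def partial_sum(digits):
    return sum(a / 3 ** k for k, a in enumerate(digits))

S_n = partial_sum([2] * 50)   # geometric case: approaches 3

random.seed(0)
T_n = partial_sum([random.choice((0, 2)) for _ in range(50)])  # stays in [0, 3]
```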
1,557,733
<p>I have a function $f(\mathbf{u}, \Sigma)$ where $\mathbf{u}$ is a $p \times 1$ vector and $\Sigma$ is a $p \times p$ real symmetric matrix (positive semi-definite).</p> <p>I somehow successfully computed the partial derivatives $\frac{\partial f}{\partial \mathbf{u}}$ and $\frac{\partial f}{\partial \Sigma}$.</p> <p>In this case, how do I optimize the function $f$ using the Newton-Raphson method?</p> <p>=====Details=======</p> <p>$y = X\mathbf{u} + \frac{1}{2}\operatorname{diag}(X\Sigma X^{T})$</p> <p>$e = \exp(y)$</p> <p>$f = \mathbf{1}^{T} e$</p> <p>where $\exp$ is component-wise and $\mathbf{1}$ is a vector of ones. $X$ is not symmetric.</p>
Gyumin Roh
292,972
<p>(ii) should be fixed. It would be larger than $\frac{3p}{8}$.</p> <p>Let $AD, BE, CF$ be the medians, and let the centroid be $G$.</p> <p>WLOG, let the two medians be $m_a, m_b$.</p> <p>From the Triangle Inequality on $\triangle AGB$, we have $$\frac{2}{3}m_a+\frac{2}{3}m_b &gt; c$$</p> <p>Summing this cyclically gives $$m_a+m_b+m_c &gt; \frac{3}{4}(a+b+c)$$</p> <p>Let the midpoint of $GB$ be $M$. We have $$GD=\frac{1}{3}m_a, GM=\frac{1}{3}m_b, MD=\frac{1}{3}m_c$$</p> <p>Therefore, triangle inequality on $\triangle GMD$ gives $m_a+m_b&gt;m_c$.</p> <p>Therefore, we have $2(m_a+m_b) &gt; \frac{3}{4}(a+b+c)$, so $m_a+m_b&gt;\frac{3}{8}(a+b+c)$.</p>
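<p>A random spot-check of the two inequalities derived above, using the standard median-length formula <span class="math-container">$m_a=\frac12\sqrt{2b^2+2c^2-a^2}$</span> (illustration only):</p>

```python
import math
import random

def medians(a, b, c):
    return (math.sqrt(2 * b * b + 2 * c * c - a * a) / 2,
            math.sqrt(2 * a * a + 2 * c * c - b * b) / 2,
            math.sqrt(2 * a * a + 2 * b * b - c * c) / 2)

random.seed(1)
ok = True
for _ in range(1000):
    a = random.uniform(1, 10)
    b = random.uniform(1, 10)
    c = random.uniform(abs(a - b) + 0.1, a + b - 0.1)  # a valid triangle
    ma, mb, mc = medians(a, b, c)
    p = a + b + c
    ok = ok and ma + mb + mc > 0.75 * p and ma + mb > 0.375 * p
```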
3,338,885
<p>I want to prove this theorem using the binomial theorem, and I have trouble understanding the 3rd step; if anyone knows why, please explain :) Prove that: </p> <p><span class="math-container">$\sum_{r=0}^{k}\binom{m}{r}\binom{n}{k-r}=\binom{m+n}{k}$</span> </p> <p>1st step:<br> <span class="math-container">$(1+y)^{m+n}=(1+y)^m(1+y)^n$</span> </p> <p>2nd step: we use the binomial theorem formula and evaluate that sum <span class="math-container">$\sum_{r=0}^{m+n}\binom{m+n}{r}y^r=\sum_{r=0}^{m}\binom{m}{r}y^r \sum_{r=0}^{n}\binom{n}{r}y^r$</span></p> <p>3rd step: equating the coefficient of <span class="math-container">$y^k$</span> </p> <p><span class="math-container">$\binom{m+n}{k}=\binom{m}{0}\binom{n}{k}+\binom{m}{1}\binom{n}{k-1}+...+\binom{m}{k}\binom{n}{k-k}$</span></p> <p>Why and how is the 3rd step allowed? Thank you </p>
Wuestenfux
417,848
<p>In the 3rd step, <span class="math-container">$$\sum_{k=0}^{m+n} {m+n\choose k} y^k = \sum_{i=0}^m {m\choose i} y^i\sum_{j=0}^n{n\choose j} y^j =\sum_{k=0}^{m+n} \sum_{i,j\atop i+j=k}{m\choose i}{n\choose j} y^k.$$</span> This gives the desired equation.</p>
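<p>The coefficient identity can also be confirmed directly for small parameters (illustration):</p>

```python
from math import comb

def vandermonde_lhs(m, n, k):
    # math.comb returns 0 whenever the lower index exceeds the upper one
    return sum(comb(m, r) * comb(n, k - r) for r in range(k + 1))

all_match = all(vandermonde_lhs(m, n, k) == comb(m + n, k)
                for m in range(8) for n in range(8) for k in range(m + n + 1))
```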
3,338,885
<p>I want to prove this theorem using the binomial theorem, and I have trouble understanding the 3rd step; if anyone knows why, please explain :) Prove that: </p> <p><span class="math-container">$\sum_{r=0}^{k}\binom{m}{r}\binom{n}{k-r}=\binom{m+n}{k}$</span> </p> <p>1st step:<br> <span class="math-container">$(1+y)^{m+n}=(1+y)^m(1+y)^n$</span> </p> <p>2nd step: we use the binomial theorem formula and evaluate that sum <span class="math-container">$\sum_{r=0}^{m+n}\binom{m+n}{r}y^r=\sum_{r=0}^{m}\binom{m}{r}y^r \sum_{r=0}^{n}\binom{n}{r}y^r$</span></p> <p>3rd step: equating the coefficient of <span class="math-container">$y^k$</span> </p> <p><span class="math-container">$\binom{m+n}{k}=\binom{m}{0}\binom{n}{k}+\binom{m}{1}\binom{n}{k-1}+...+\binom{m}{k}\binom{n}{k-k}$</span></p> <p>Why and how is the 3rd step allowed? Thank you </p>
drhab
75,923
<p>Let <span class="math-container">$a_i\in\mathbb R$</span> denote a constant for <span class="math-container">$i=0,1,\dots,n$</span>.</p> <p>The function <span class="math-container">$\mathbb R\to\mathbb R$</span> prescribed by <span class="math-container">$x\mapsto\sum_{i=0}^na_ix^i$</span> is the same as the function <span class="math-container">$\mathbb R\to\mathbb R$</span> prescribed by <span class="math-container">$x\mapsto 0$</span> if and only if <span class="math-container">$a_i=0$</span> for every <span class="math-container">$i\in\{0,1,\dots,n\}$</span>.</p> <p>Sufficiency is obvious. Necessity is a consequence of the fact that the first function has at most <span class="math-container">$n$</span> roots if <span class="math-container">$a_i\neq0$</span> for some <span class="math-container">$i\in\{0,1,\dots,n\}$</span>.</p> <p>This can be applied to prove that two polynomial functions <span class="math-container">$p_1,p_2$</span> are the same (or, equivalently, that <span class="math-container">$p_1-p_2$</span> is the zero function) if and only if they have the same coefficients.</p>
1,821,849
<p>Let $K/F$ be a field extension and $L_1,L_2$ subfields of $K$ such that $L_1$ and $L_2$ have finite degree over $F$. </p> <p>Does $L_1 \cong L_2$ imply $[L_1 : F ]=[L_2 : F]$? Obviously, if the isomorphism fixes $F$ (which isn't always necessarily true) the result holds. The result even holds if $F$ is of finite degree over its prime field. </p> <p>When trying the usual proof, we get that $[L_1 : F] = [L_1^{\theta} : F^{\theta} ]=[L_2 : F^{\theta}]$ when $\theta \,\colon L_1 \rightarrow L_2$ is the given isomorphism. But I don't see how to relate $F^{\theta}$ to $F$ in an easy way. Any help or counterexample?</p> <p>Many thanks in advance.</p>
Hmm.
227,501
<p>This is indeed a very subtle question. If you assume the standard $F$-vector space structures on both $L_1$ and $L_2$, this need not be true in general. For example, let us take $L_1=\mathbb{C}(X^2)$, $L_2=\mathbb{C}(X)$, and take $F=\mathbb{C}(X^2)$. Clearly the map $X^2 \to X$ gives you an isomorphism from $L_1$ to $L_2$. Note that under the usual $F$-vector space structures, we have, $$[L_1:F]=[\mathbb{C}(X^2):\mathbb{C}(X^2)]=1$$</p> <p>On the other hand, $$[L_2:F]=[\mathbb{C}(X):\mathbb{C}(X^2)]=2$$</p> <p>The idea here, is that under this isomorphism, $\mathbb{C}(X)$ gets endowed with a different $\mathbb{C}(X^2)$-vector space structure, one in which you do have $[\mathbb{C}(X):\mathbb{C}(X^2)]=1$. The structure, as I'm sure you'll notice instantly, is the following: $$\forall h \in \mathbb{C}(X^2),g\in \mathbb{C}(X),~~h(X^2).g(X)=h(X)g(X) $$</p> <p>So, you can possibly conclude that $[L_1:F]=[L_2:F]$, provided you are considering a non-canonical $F$-vector space structure. </p>
216,532
<p>How do I find the limit of something like</p> <p>$$ \lim_{x\to \infty} \frac{2\cdot3^{5x}+5}{3^{5x}+2^{5x}} $$</p> <p>?</p>
glebovg
36,367
<p>Note that</p> <p>$$\frac{{2 \cdot {3^{5x}} + 5}}{{{3^{5x}} + {2^{5x}}}} \sim \frac{{2 \cdot {3^{5x}}}}{{{3^{5x}} + {2^{5x}}}} = \frac{2}{{1 + {{\left( {\frac{2}{3}} \right)}^{5x}}}}.$$</p> <p>So ...</p>
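A quick numerical sanity check of this hint (my own addition, with hypothetical helper names): both the original expression and the rewritten form approach $2$ as $x$ grows, since $(2/3)^{5x}\to 0$.

```python
def original(x):
    # the expression from the question
    return (2 * 3**(5 * x) + 5) / (3**(5 * x) + 2**(5 * x))

def simplified(x):
    # the asymptotic form from the answer: 2 / (1 + (2/3)^(5x))
    return 2 / (1 + (2 / 3) ** (5 * x))

for x in [1, 5, 20]:
    print(x, original(x), simplified(x))

assert abs(original(20) - 2) < 1e-12
assert abs(simplified(20) - 2) < 1e-12
```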
1,613,171
<p>On page $61$ of the book <a href="http://solmu.math.helsinki.fi/2010/algebra.pdf" rel="nofollow">Algebra</a> by Tauno Metsänkylä and Marjatta Näätänen, it states</p> <blockquote> <p>$\langle \emptyset \rangle =\{1\},\langle 1 \rangle =\{1\}. H\leq G \implies \langle H \rangle =H$</p> </blockquote> <p>where $H \leq G$ means that $H$ is a subgroup of $G$.</p> <p>Now assume $H=\emptyset$, so $\langle \emptyset \rangle = \emptyset \not = \{1\}$, contradiction. Please explain the line from p. 61 of the book quoted above.</p>
hhh
5,902
<blockquote> <p>$\langle \emptyset \rangle=\{1\},\langle 1 \rangle=\{1\}. H\leq G \implies \langle H\rangle=H.$ </p> </blockquote> <p><strong>Examples.</strong></p> <p>$G=\mathbb R^*$, i.e. $\mathbb R$ without zero; $(G,*)$ is a multiplicative group.</p> <ol> <li><p>If you have $\langle \emptyset \rangle =\{\}=\emptyset$, then this $\emptyset$ cannot be a group because it has no unit element, failing the definition of a group, so the implication fails with this.</p></li> <li><p>$H=0$. $\langle 0\rangle =\langle \emptyset\rangle=\{1\}$.</p></li> <li><p>$H=1$. $\langle 1\rangle =\{1\}$.</p></li> <li><p>$H=6$. $\langle 6\rangle \not =\langle 2,3\rangle$ because $2\not\in\langle 6\rangle$ while $2\in\langle 2,3\rangle$.</p></li> <li><p>$H=2$. $\langle 2\rangle =\{2^n\quad |\quad n\in\mathbb Z\}$. This is when $G=\mathbb Q^*$ because $0$ has no multiplicative inverse. This does not mean that the exponent cannot be zero, i.e. the identity element $1\in\langle 2\rangle.$</p></li> </ol> <p><strong>Exponents must be integers (not real numbers)</strong></p> <ol> <li><p>$H=2$. When $G=\mathbb R^*$, $\langle 2\rangle = \{2^n\quad |\quad n\in\mathbb R\}.$ Wrong, the exponents must be integers. </p></li> <li><p>$H=2$. When $G=\mathbb R^*$, $\langle 2\rangle = \{2^n \quad | \quad n\in\mathbb Z\}$.</p></li> <li><p>$H=3$. $\langle 3\rangle =\{3^n\}$, $n\in\mathbb R$. Wrong, exponents cannot be real numbers.</p></li> <li><p>$H=4$. $\langle 4\rangle =\langle 2\rangle$ because $\langle k\rangle$, $k\in\mathbb R^*$, consists of all products of the factors and their inverses. Wrong, exponents cannot be real numbers.</p></li> <li><p>$H=5$. $\langle 5\rangle =\{5^n\}, n\in\mathbb R$. Wrong because exponents must be integers.</p></li> </ol> <p><strong>$\langle a\rangle$ is not defined here with sets of sets</strong></p> <ol> <li><p>What is $\langle \{1\}\rangle$? $\langle \{1\}\rangle =\{\{1\}\}$? $\{\{k\}\}$ is not a number, so it is undefined? Sets of sets are not addressed here.</p></li> <li><p>Does the statement "$\langle \emptyset \rangle=\{1\},\langle 1 \rangle=\{1\}. H\leq G \implies \langle H\rangle=H.$" work with sets containing sets like $\{\{\{1\}\}\}$? No.</p></li> </ol> <p><strong>Further questions</strong></p> <ol> <li><a href="https://math.stackexchange.com/questions/1613351/multiplicative-group-mathbb-r-is-group-but-mathbb-r-is-not-gro">Multiplicative group $(\mathbb R^*, &#215;)$ is group but $(\mathbb R, &#215;)$ is not group, why?</a> where the question also concerns additive groups: $(\mathbb R, +)$ is a group but $(\mathbb R^*, +)$ is not a group.</li> </ol>
1,844,374
<p>Why does the "$\times$" used in arithmetic change to a "$\cdot$" as we progress through education? The symbol seems to only be ambiguous because of the variable $x$; however, we wouldn't have chosen the variable $x$ unless we were already removing $\times$ as the symbol for multiplication. So why do we? I am very curious. It seems like $\times$ is already quite sufficient as a descriptive symbol.</p>
almagest
172,006
<p>As @DavidRicherby points out in a comment below, one should ideally distinguish carefully the history of the dot notation from the possible reasons for keeping it, retaining it, or modifying its usage. Unfortunately although I am qualified by age (66) to comment on the last 50 years, I am otherwise ill-qualified to deal with the history (for which see <a href="https://hsm.stackexchange.com/questions/2970/when-and-by-whom-were-the-different-symbols-for-multiplication-used/2972#2972">when and by whom</a> and also the answers by @hjhjhj57 and @RobertSoupe). So what follows may sometimes seem to mix history and reasons and at times get the history wrong. <em>It is intended</em> as one individual's take on reasons. Note also that having only lived in the USA for about 3 years (2 years LA, and 6 months each in NY and Colorado), I am much more familiar with the UK scene than the USA scene, and know almost nothing about other countries.</p> <p>There are multiple reasons. Perhaps the most important is a desire to make the notation as concise as possible. The change is not really from $a\times b$ to $a\cdot b$. It is from $a\times b$ to $ab$. In many undergraduate algebra books, and at the research and journal level, the "multiplication" operation is just denoted by juxtaposition. But then that is also true in some, maybe most, schools for teenagers. Glancing at the 2014 Core Maths papers from one of the leading UK exam boards for the "A-levels" (the final exams for most pupils), they seem to use juxtaposition exclusively. On the other hand papers for GCSE maths (typically taken age 16) seem to use $246\times10$.</p> <p>This is also linked to a desire for speed. There are significantly fewer keystrokes in $ab$ than in either $a\times b$ or $a\cdot b$ if you are using LaTeX. Perhaps more important, being more concise, juxtaposition is easier to read.
But as @DavidRicherby points out in a much upvoted comment below, LaTeX has come late to the party, so it may have a minor role in maintaining the status quo, but could not have helped to bring it about.</p> <p>Another reason is avoiding ambiguity. For example $3^25$ is unambiguous because the exponent separates the $3$ and $5$. But in LaTeX if you try to write $3\cdot5^2$ by juxtaposition you have to insert a special space to get $3\ 5^2$ and the outcome is still not entirely satisfactory. But I may pay too much attention to such matters, because having published two books in the last two years, I wonder how anyone manages to combine writing math books with a full-time job, the work involved is horrendous!</p> <p>Another reason may be that 3D vectors are often introduced early, and have two multiplication operations: the dot product and the cross product. So one is forced to use two different symbols to avoid ambiguity. Of course, one could avoid that by using the tensor subscript approach, and how all that is handled has a fashion element in it. For the last few decades for example, there has been a campaign to move us towards Clifford or "geometric algebras" (where the cross product is frowned on and the wedge product is key).</p> <p>Note also that $a\cdot b$ often does not represent ordinary multiplication. Of course $3\cdot5$ almost always does, but as one moves through undergraduate work into graduate work $a\cdot b$ is increasingly used to represent operations other than the ordinary multiplication (of integers, reals etc).</p> <p>As @Kundor correctly points out, the OP's real question could be seen as why teach $5\times 6$ in the first place? I have never tried to teach anyone younger than about 9. But I am fairly sure that trying to use juxtaposition when arithmetic is first introduced would be a non-starter. 
So the question becomes why not start with $5\cdot6$, instead of moving to it years later?</p> <p>That seems to me a mixture of history and psychology. I want to keep away from the history if possible, but the psychology does not surprise me. Making sensible changes to familiar things is hugely difficult when large numbers of people are involved, particularly when it is completely unclear to them how the change will benefit them. I clearly remember the UK's move from the old "pounds, shillings and pence" (with 12 old pence to the shilling, 20 shillings to the pound). It required a massive campaign by the government. In that case it was obvious that a simple 100 new pence to the pound would be much easier, but few people wanted to switch, having got used to the old currency.</p> <p>Another example is the difficulty we have had in the UK moving from fahrenheit to celsius for temperature. All our weather forecasts are now in celsius (or rather centigrade - the identical system with a different name), but it took years to get most people to accept it. The old system was bizarre (bp of water 212, fp 32), yet I believe it is still used in the USA!</p> <p>Or take miles. The SI unit is km. But there seems no prospect of the UK changing all its road signs to km for the foreseeable future. Remember this is a country where we drive on the wrong side of the road. When I was commuting backwards and forwards to LA and picking up rental cars at LAX and my own car at LHR, the only way I could find to remember it, was that I had to drive so that I was as near the centre of the road as possible. Mercifully I never got the wrong kind of car.</p> <p>So changing the status quo is tough. Time to make an obvious point: MSE is read in many countries, and practices vary widely, often even within countries. 
@Chieron's much-upvoted comment under the question notes that some schools never use $3\times4$, but start with $3\cdot4$.</p> <p>Similarly, I have tended to focus above on differences relevant to teenagers and undergraduates, but @BenC's answer makes the excellent and easily overlooked point about potential and actual confusion between the centre dot and the decimal point.</p> <p>Again, @RobertSoupe (in his answer) makes the excellent point (which I managed to overlook entirely) about potential confusion between times $\times$ and the variable $x$ when children move on from learning tables to slightly more advanced maths. See also the comment by @user21820 below.</p> <p>I would also draw attention to some comments by @snulty. Under the question he and @MauroALLEGRANZA note that Descartes used $x$ for the unknown and juxtaposition for multiplication (which shows how tricky historical discussion can be unless you are well briefed)! I also highly recommend snulty's answer. I am ill-qualified to comment on the truth, but it certainly sounds highly plausible.</p> <p><a href="https://i.stack.imgur.com/ctkph.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/ctkph.jpg" alt="enter image description here"></a></p> <p><a href="https://en.wikipedia.org/wiki/Ren%C3%A9_Descartes#/media/File:Frans_Hals_-_Portret_van_Ren%C3%A9_Descartes.jpg" rel="noreferrer">Wikimedia</a>: René Descartes, after Frans Hals</p>
It is hard to see what can be done to change that. So $a\times b$ can still mean ordinary multiplication, but sometimes it means vector product, <em>even though</em> there is no boldface or under- or over-lining to make clear that $a,b$ are vectors.</p> <p>So context is always king.</p> <p>I could not resist adding this (a facsimile of a page of <a href="https://en.wikipedia.org/wiki/Ren%C3%A9_Descartes" rel="noreferrer">Descartes</a> (1596-1650) ):</p> <p><a href="https://i.stack.imgur.com/6GiNg.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/6GiNg.jpg" alt="enter image description here"></a></p>
1,844,374
<p>Why does the "$\times$" used in arithmetic change to a "$\cdot$" as we progress through education? The symbol seems to only be ambiguous because of the variable $x$; however, we wouldn't have chosen the variable $x$ unless we were already removing $\times$ as the symbol for multiplication. So why do we? I am very curious. It seems like $\times$ is already quite sufficient as a descriptive symbol.</p>
Ben C
218,588
<p>There is also an ambiguity between a decimal fraction with a dot, as in $3.5^2$, and multiplication with a centre dot, as in $3\cdot5^2$, particularly if the latter doesn't have spacing around the dot to give context, as in $3\!\cdot\!5^2$.</p> <p>In fact some textbooks use a centre dot for decimal fractions, for example Nelkon and Parker's <em>Advanced Level Physics</em> (sixth edition published in the UK in 1987, at least, which uses $\times$ for multiplication).</p>
1,844,374
<p>Why does the "$\times$" used in arithmetic change to a "$\cdot$" as we progress through education? The symbol seems to only be ambiguous because of the variable $x$; however, we wouldn't have chosen the variable $x$ unless we were already removing $\times$ as the symbol for multiplication. So why do we? I am very curious. It seems like $\times$ is already quite sufficient as a descriptive symbol.</p>
snulty
128,967
<p>I'm looking for sources (see edit), but I would imagine when teaching children to count, add, multiply, you start with integers and addition is symbolised like $1+1=2$. Then you try to teach them that multiplication is short for lots of addition, $3+3+3+3=4\times 3$ and $4+4+4=3\times 4$, and there's the obvious similarity between the symbols $\times$ and $+$.</p> <p>At a later stage in school $3\cdot4 $ can look like $3.4$ as in $3\frac{4}{10}$, so I imagine this would be nice to avoid, especially when you're also teaching kids to practice their handwriting, so they might not always put a dot in the exact place you tell them.</p> <p>Then finally when you want to move on to more advanced things that $\cdot$ and $\times$ can stand for, even just algebra with a variable $x$, you might want to change to a better symbol.</p> <p>I think the other answers take this point of view very well, so I won't mention anything about that.</p> <p><strong>Edit</strong>: On the <a href="http://www.ncca.ie/uploadedfiles/Curriculum/Maths_Curr.pdf" rel="nofollow">Irish curriculum</a>, at around third and fourth class they do multiplication and decimals at roughly the same time. It says to develop an understanding of multiplication as repeated addition and division as repeated subtraction (obviously in whole-number cases).</p>
4,203,704
<p>Understanding the Yoneda lemma maps.</p> <p>I'm trying to understand the maps between the natural transformations and <span class="math-container">$F(A)$</span> in the proof of the Yoneda lemma. I've been struggling for a bit to understand the Yoneda lemma, so I'm trying to understand the mapping construction as a recipe and using that to understand the theorem and gain an intuition for it rather than vice versa.</p> <p><strong>What, exactly, are the two maps between the natural transformations and <span class="math-container">$F(A)$</span> in the proof of the (covariant) Yoneda lemma?</strong></p> <hr /> <p><a href="https://proofwiki.org/wiki/Yoneda_Lemma_for_Covariant_Functors" rel="nofollow noreferrer">This entry on ProofWiki</a> shows explicit constructions for the maps from <span class="math-container">$F(A)$</span> to <span class="math-container">$\mathrm{Nat}(h_A, F(A))$</span> and the reverse.</p> <p>I'm going to change the notation slightly and use a superscript to denote which category an object is in. <span class="math-container">$x^C$</span> is an object in the category <span class="math-container">$C$</span>. <span class="math-container">$f^{\mathrm{Mor}(C)}$</span> is an arrow in the category <span class="math-container">$C$</span>. I will sometimes use a specific set, such as <span class="math-container">$F(A)$</span>, as an annotation. <span class="math-container">$x^{F(A)}$</span> means <span class="math-container">$x^{\mathrm{Set}}$</span> and, additionally, <span class="math-container">$x$</span> is an element of the set <span class="math-container">$F(A)$</span>.</p> <p>The map <span class="math-container">$\alpha$</span> goes from <span class="math-container">$\mathrm{Nat}(h_A, F)$</span> to <span class="math-container">$F(A)$</span>. 
The map is defined as follows.</p> <p><span class="math-container">$$ \alpha \;\;\text{is}\;\; \eta \mapsto \eta_A(\text{id}_A) $$</span></p> <p>I am really confused why the RHS is not just <span class="math-container">$\eta_A$</span>. Since <span class="math-container">$\eta$</span> is a natural transformation between functors from <span class="math-container">$C$</span> to <span class="math-container">$\mathrm{Set}$</span>, this would mean that <span class="math-container">$\eta_A$</span> is in <span class="math-container">$\text{Mor}(\mathrm{Set})$</span>. However, <span class="math-container">$\text{id}_A$</span> is also in <span class="math-container">$\text{Mor}(\mathrm{Set})$</span>. I don't know how this construction produces an object in <span class="math-container">$\mathrm{Set}$</span>.</p> <p>With type annotations, I think you get</p> <p><span class="math-container">$$ \eta^{\mathrm{Nat}(h_A, F(A))} \mapsto \eta_A^{\mathrm{Mor}(\mathrm{Set})}(\text{id}_A^{\mathrm{Mor}(\mathrm{Set})}) $$</span></p> <p>I'm also confused about the <span class="math-container">$\beta$</span> map. <span class="math-container">$\beta$</span> goes from <span class="math-container">$F(A)$</span> to <span class="math-container">$\mathrm{Nat}(h_A, F)$</span>.</p> <p>Here is what <span class="math-container">$\beta$</span> looks like with type annotations on parameters alone and the ultimate RHS. I inferred the types myself, but the overall expression is similar to the presentation on ProofWiki.</p> <p><span class="math-container">$$ u^{F(A)} \mapsto x^C \mapsto f^{\mathrm{Mor}(C)} \mapsto (Ff)(u)^{\mathrm{Set}} $$</span></p> <p>I'm also confused by this expression. A given, specific natural transformation <span class="math-container">$\eta$</span> can be thought of as a map from <span class="math-container">$C$</span> to <span class="math-container">$\mathrm{Mor}(\mathrm{Set})$</span>, i.e.
it associates morphisms in the target category to objects in the source category.</p> <p>Given this, it's hard for me to see why we don't end up with something of the following form for <span class="math-container">$\beta$</span>, i.e. we're given an element of <span class="math-container">$F(A)$</span>, and our map <span class="math-container">$\beta$</span> kicks out the component map of a natural transformation.</p> <p><span class="math-container">$$ u^{F(A)} \mapsto x^C \mapsto (\cdots)^{\mathrm{Mor}(\mathrm{Set})} $$</span></p> <p>So, what exactly are the maps between <span class="math-container">$F(A)$</span> and the natural transformations used in the proof of the Yoneda lemma? I'm having a hard time finding the proof presented in a substantially different way (or a more elementary way) from what ProofWiki does. For example, the <a href="https://en.wikipedia.org/wiki/Yoneda_lemma#Proof" rel="nofollow noreferrer">proof on Wikipedia</a> seems broadly similar in terms of the explicit construction, which makes me think I'm missing something big/obvious.</p>
qualcuno
362,866
<p>Note that since <span class="math-container">$\eta$</span> is a natural transformation from <span class="math-container">$h^A $</span> to <span class="math-container">$F$</span>, by definition <span class="math-container">$\eta_A$</span> is a function (i.e. an arrow in <span class="math-container">$\mathsf{Set}$</span>)</p> <p><span class="math-container">$$\eta_A \colon \hom_C(A,A) = h^A(A) \to F(A).$$</span></p> <p>In particular, the identity <span class="math-container">$1_A \colon A \to A$</span> of <span class="math-container">$A$</span> is an element of the domain of <span class="math-container">$\eta_A$</span>, and it makes sense to compute <span class="math-container">$\eta_A(1_A)$</span>.</p> <p>Now consider any other object <span class="math-container">$B$</span> in <span class="math-container">$C$</span>. The natural transformation <span class="math-container">$\eta$</span> gives you a map <span class="math-container">$\eta_B \colon \hom_C(A,B) \to F(B)$</span>.</p> <p>Suppose we want to compute <span class="math-container">$\eta_B(f)$</span> where <span class="math-container">$f \colon A \to B$</span> is some arrow. Observe that by functoriality <span class="math-container">$f$</span> defines a map <span class="math-container">$f_\ast \colon g \in \hom_C(A,A) \mapsto fg\in \hom_C(A,B)$</span> and <span class="math-container">$f$</span> is nothing else but <span class="math-container">$f_*(1_A) = f1_A$</span>.</p> <p>Our question boils down to computing <span class="math-container">$\eta_B(f) = (\eta_B \circ f_\ast) (1_A)$</span>. By naturality of <span class="math-container">$\eta$</span>,</p> <p><span class="math-container">$$\eta_B(f) = (\eta_B \circ f_\ast) (1_A) = (F(f) \circ \eta_A)(1_A) = F(f)(\eta_A(1_A)).
\tag{$\star$}$$</span></p> <p>Hence all expressions <span class="math-container">$\eta_B(f)$</span> for any <span class="math-container">$B \in C$</span> and <span class="math-container">$f \colon A \to B$</span> are determined by (the morphism <span class="math-container">$f$</span> given and) <span class="math-container">$\eta_A(1_A)$</span>.</p> <p>In other words, <span class="math-container">$\eta_A$</span> is characterized completely by <span class="math-container">$\eta_A(1_A)$</span> (if you look carefully, this is a proof of injectivity).</p> <p>To show that the assignment <span class="math-container">$\eta \mapsto \eta_A(1_A)$</span> is surjective, we shall pick <span class="math-container">$x \in F(A)$</span> and (try to) produce <span class="math-container">$\eta$</span> such that <span class="math-container">$\eta_A(1_A) = x$</span>. Observe that <span class="math-container">$(\star)$</span> forces us to have</p> <p><span class="math-container">$$\eta_B(f) = F(f)(x) \qquad (B \in C, f \in \hom_C(A,B)).$$</span></p> <p>You can check that this is in fact a natural transformation, completing the proof. Note also that the surjectivity proof is a restatement of the fact that the inverse mapping is well defined.</p> <p>Finally, to respond to your last question:</p> <blockquote> <p>A given, specific natural transformation <span class="math-container">$\eta$</span> can be thought of as a map from <span class="math-container">$C$</span> to <span class="math-container">$\mathsf{Mor(Set)}$</span>.</p> </blockquote> <p>And indeed, that <em>is</em> what we are doing: to each <span class="math-container">$X \in C$</span> we associate a map <span class="math-container">$\eta_X \colon \hom(A,X) \to F(X)$</span> which is a function between sets.</p> <p>To define <span class="math-container">$\eta_X$</span> we shall say what the image of a given morphism <span class="math-container">$f \colon A \to X$</span> in the domain set is.
We decree <span class="math-container">$f \mapsto F(f)(x)$</span>.</p>
250,687
<p>I'm doing a sanity check of the following equation: <span class="math-container">$$\sum_{j=2}^\infty \frac{(-x)^j}{j!}\zeta(j) \approx x(\log x + 2 \gamma -1)$$</span></p> <p>Naive comparison of the two shows a bad match but I suspect one of the graphs is incorrect.</p> <ol> <li>Why isn't there a warning?</li> <li>How do I compute this sum correctly?</li> </ol> <pre><code>katsurda[x_] := NSum[(-x)^j/j! Zeta[j], {j, 2, Infinity}]; katsurdaApprox[x_] := x (Log[x] + 2 EulerGamma - 1) - Zeta[0]; plot1 = DiscretePlot[katsurda[x], {x, 0, 40, 2}]; plot2 = Plot[katsurdaApprox[x], {x, 0, 40}]; Show[plot1, plot2] </code></pre> <p><a href="https://i.stack.imgur.com/pBmVX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pBmVX.png" alt="enter image description here" /></a></p> <ol start="3"> <li><strong>meta</strong> How do I avoid being misled by incorrect numeric results? Would using <code>NIntegrate</code> instead of <code>NSum</code> give better guarantees? My usual approach of avoiding machine precision, checking <code>Precision</code> of the answer and minding warnings fails in the example below.</li> </ol> <pre><code>katsurda[x_] := NSum[(-x)^j/j! Zeta[j], {j, 2, Infinity}, WorkingPrecision -&gt; 32, NSumTerms -&gt; 2.5 x]; katsurdaApprox[x_] := x (Log[x] + 2 EulerGamma - 1) - Zeta[0]; Print[&quot;Precision: &quot;, Precision@katsurda[100]] (* 13.9729 *) Print[&quot;Discrepancy: &quot;, katsurda[100] - katsurdaApprox[100]] (* 94.65088290385, but should be &lt;1 *) </code></pre> <p><strong>Background:</strong> the expression comes from &quot;Power series with the Riemann zeta-function in the coefficients&quot; by Katsurada M (<a href="https://projecteuclid.org/journals/proceedings-of-the-japan-academy-series-a-mathematical-sciences/volume-72/issue-3/Power-series-with-the-Riemann-zeta-function-in-the-coefficients/10.3792/pjaa.72.61.pdf" rel="nofollow noreferrer">paper</a>)</p>
Roman
26,598
<p>Turn the sum around to make it non-alternating:</p> <p><span class="math-container">$$ \sum_{j=2}^{\infty}\frac{(-x)^j}{j!}\zeta(j) = \sum_{j=2}^{\infty}\frac{(-x)^j}{j!}\left(\sum_{n=1}^{\infty}\frac{1}{n^j}\right) = \sum_{n=1}^{\infty}\left(\sum_{j=2}^{\infty}\frac{(-x)^j}{j!}\frac{1}{n^j}\right) = \sum_{n=1}^{\infty}\left(e^{-x/n}-1+\frac{x}{n}\right) $$</span></p> <pre><code>f[x_?NumericQ] := NSum[E^(-x/n) - 1 + x/n, {n, ∞}, Method -&gt; &quot;EulerMaclaurin&quot;] g[x_] = x (Log[x] + 2 EulerGamma - 1) + 1/2; Plot[f[x] - g[x], {x, 0, 1000}] </code></pre> <p><a href="https://i.stack.imgur.com/lZTit.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lZTit.png" alt="enter image description here" /></a></p> <p>The remaining difference between the expressions is due to numerical inaccuracies and can be fixed by increasing the <a href="https://reference.wolfram.com/language/ref/WorkingPrecision.html" rel="nofollow noreferrer"><code>WorkingPrecision</code></a>.</p>
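One way to sanity-check the interchange of sums in this answer (my own sketch in Python, not part of the answer) is to verify the inner identity $\sum_{j\ge2}\frac{(-x/n)^j}{j!}=e^{-x/n}-1+\frac{x}{n}$ for sample values — it is just the exponential series with its first two terms removed.

```python
import math

x, n = 1.5, 3
t = -x / n

# Exponential series with the j = 0 and j = 1 terms removed:
partial = sum(t**j / math.factorial(j) for j in range(2, 60))
closed = math.exp(t) - 1 - t      # e^{-x/n} - 1 + x/n, since t = -x/n

assert abs(partial - closed) < 1e-12
print(partial, closed)
```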
12,102
<p>For a local field, the reciprocity map establishes almost an isomorphism from the multiplicative group to the abelian absolute Galois group. (In the global case the relationship is almost as nice.) It is tempting to think that there can be no such nice accident. </p> <p>Do we know any explanation which suggests that there "should be" a relationship between the multiplicative group and the Galois group?</p> <p>Actually, my current belief is that the reciprocity map is half accidental. I think that there is a natural extension where we can define a natural action of the multiplicative group. In the local case this is the Lubin-Tate extension (a generalization of the cyclotomic extension). The fact that this Lubin-Tate extension is the abelian absolute Galois group is an accident. </p> <p>Do we know something that might support or reject this view?</p> <p>Please feel free to edit the question into a form that you think might be better.</p>
Emerton
2,874
<p>The reciprocity map is completely natural (in the technical sense of category theory). For example, if $K$ and $L$ are two local fields, and $\sigma:K \rightarrow L$ is an isomorphism, then $\sigma$ induces an isomorphism of multiplicative groups $K^{\times} \rightarrow L^{\times}$ and also of abelian absolute Galois groups $G_K^{ab} \rightarrow G_L^{ab}$. The reciprocity laws for $K$ and $L$ are then compatible with these two isomorphisms induced by $\sigma$. </p> <p>On the other hand the factorization $K^{\times} = {\mathbb Z} \times {\mathcal O}_K^{\times}$ is not canonical (it depends on a choice of uniformizer), and the identification of ${\mathcal O}_K^{\times}$ with the Galois group of a maximal totally ramified abelian extension of $K$ also depends on a choice of uniformizer (which goes into the construction of the Lubin--Tate formal group, and hence into the construction of the totally ramified extension; different choices of uniformizer will give different formal groups, and different extensions). </p> <p>As others pointed out, the local reciprocity map is also a logical consequence of the global Artin map and global Artin reciprocity law (which makes no reference to local multiplicative groups, but simply to the association $\mathfrak p \mapsto Frob_{\mathfrak p}$ of Frobenius elements to unramified prime ideals; see the beginning of Tate's article in Cassels--Frohlich for a nice explanation of this). Thus it is natural in a more colloquial sense of the word as well.</p> <p>Indeed, the idelic formulation of the global reciprocity map and the formulation of the local reciprocity map in terms of multiplicative groups are not accidental or ad hoc inventions; they were forced on number theorists as a result of making deep investigations into the nature of global class field theory.</p>
2,495,440
<p>If $f'(x) + f(x) = x,\;$ find $f(4)$.</p> <p>Could someone help me to solve this problem?</p> <p>The answer is 3 but I don't know why, <em>with no use of integration or exponential functions</em>, and given that <em>the function is a polynomial</em>.</p>
Aggelos Bessis
348,376
<p>If $f$ is a polynomial, it must be of first degree, because $\deg(f'(x)+f(x))=\deg(x)=1$ and $\deg f'(x)&lt;\deg f(x)$, so $\deg f(x)=1$. Set $f(x)=ax+b$, so that $f'(x)=a$. Plug those in: $f'(x)+f(x)=x \Rightarrow ax+(a+b)=x$, hence $a=1$ and $a+b=0$, i.e. $b=-1$. Therefore $f(x)=x-1$ and $f(4)=3$.</p>
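A tiny check of the conclusion that follows from $ax+(a+b)=x$ (my own addition): matching coefficients gives $a=1$, $b=-1$, so $f(x)=x-1$ and $f(4)=3$.

```python
def f(x):
    return x - 1      # from a = 1, b = -1

f_prime = 1           # derivative of x - 1 is the constant 1

# f'(x) + f(x) = x holds identically
for x in [0, 1, 4, 10]:
    assert f_prime + f(x) == x

print(f(4))           # prints 3
```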
624,850
<blockquote> <p>Prove: $$x^\alpha-\alpha x \le 1- \alpha \\ \forall x\ge 0 \ , \ 0&lt;\alpha &lt;1$$</p> </blockquote> <p>It does resemble Lagrange's MVT; in order to get to the RHS, say $\alpha\in[0,1]\Rightarrow \frac{1-\alpha-0^\alpha-\alpha 0}{1-0}=1-\alpha$.</p> <p>But I'm not sure what to do about the LHS. I always get a negative value in the LHS and we know that the RHS will always be positive, but that doesn't really help with proving it...</p> <p>Edit: maybe define a function: $g(x) = x^\alpha-\alpha x - 1+ \alpha$. Differentiate it: $g'(x) = \alpha x^{\alpha-1}-\alpha $, but then I get to $x^{\alpha-1}=0$.</p> <hr> <p>Also, I had two questions when trying to figure this out:</p> <ol> <li><p>Can we use differentiation of both sides of an inequality to prove it?</p></li> <li><p>Can we use a limit of both sides of an inequality to prove it?</p></li> </ol> <p>Thanks.</p>
daulomb
98,075
<p>Let $f(x)=x^\alpha-\alpha x-\alpha+1$ for $x\geq 0$, where $0&lt;\alpha&lt;1$, so $f'(x)=\alpha x^{\alpha-1}-\alpha$ and $f''(x)=\alpha(\alpha-1)x^{\alpha-2}$. Then $f'(x)=0\Rightarrow x=1$, with $f''(1)=\alpha(\alpha-1)\leq 0$. This implies that $f$ has a local maximum at $x=1$, and so $f(1)=2(1-\alpha)\geq1-\alpha=f(0)$. Also we have $\displaystyle \lim_{x\to\infty}f(x)=-\infty$. Since $f$ is continuous for $x\geq 0$, considering the above values we see that $f$ attains its maximum at $x=1$, where it equals $f(1)=2(1-\alpha)$. Therefore we have $f(x)\leq f(1)$ for all $x\geq 0$, and this leads to the desired inequality.</p>
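A brute-force numeric check of this argument (my own sketch): the gap $1-\alpha-(x^\alpha-\alpha x)$ stays nonnegative on a grid of $x$ and $\alpha$ values, and vanishes at $x=1$, the maximizer found above.

```python
def gap(x, a):
    # (1 - a) - (x^a - a*x); the claim is gap >= 0 for x >= 0, 0 < a < 1
    return (1 - a) - (x**a - a * x)

for a in (0.1, 0.3, 0.5, 0.7, 0.9):
    for i in range(0, 1001):
        x = i / 100.0                    # grid on [0, 10]
        assert gap(x, a) >= -1e-12
    assert abs(gap(1.0, a)) < 1e-12      # equality at x = 1
```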
624,850
<blockquote> <p>Prove: $$x^\alpha-\alpha x \le 1- \alpha \\ \forall x\ge 0 \ , \ 0&lt;\alpha &lt;1$$</p> </blockquote> <p>It does resemble Lagrange's MVT; in order to get to the RHS, say $\alpha\in[0,1]\Rightarrow \frac{1-\alpha-0^\alpha-\alpha 0}{1-0}=1-\alpha$.</p> <p>But I'm not sure what to do about the LHS. I always get a negative value in the LHS and we know that the RHS will always be positive, but that doesn't really help with proving it...</p> <p>Edit: maybe define a function: $g(x) = x^\alpha-\alpha x - 1+ \alpha$. Differentiate it: $g'(x) = \alpha x^{\alpha-1}-\alpha $, but then I get to $x^{\alpha-1}=0$.</p> <hr> <p>Also, I had two questions when trying to figure this out:</p> <ol> <li><p>Can we use differentiation of both sides of an inequality to prove it?</p></li> <li><p>Can we use a limit of both sides of an inequality to prove it?</p></li> </ol> <p>Thanks.</p>
Paramanand Singh
72,031
<p>The problem is too easy once one rewrites it as $x^{\alpha} - 1 \leq \alpha(x - 1)$. If $f(x) = x^{\alpha}$, then $f'(x) = \alpha x^{\alpha - 1}$ and then by Mean Value Theorem we have $$x^{\alpha} - 1 = f(x) - f(1) = (x - 1)f'(c) = \alpha(x - 1)c^{\alpha - 1}$$ where $c$ is between $1$ and $x$. Clearly we need to consider cases when $x &lt; 1$ and $x &gt; 1$. For $x = 1$ the result is trivial. Supposing that $x &gt; 1$ we need to ensure that $c^{\alpha - 1} &lt; 1$ which is practically obvious as $1 &lt; c &lt; x$ and $\alpha - 1 &lt; 0$. Same way if $x &lt; 1$ then we need to establish that $c^{\alpha - 1} &gt; 1$ and again this is obvious because then $x &lt; c &lt; 1$.</p>
366,096
<p>Let's consider $J\subset \mathbb R^2$ such that $J$ is convex and its boundary is a curve $\gamma$. Suppose that $\gamma$ is oriented anti-clockwise, and consider its signed curvature $k_s$. I want to prove the following intuitive fact:</p> <p>$$ \int\limits_\alpha {k_s } \left( s \right)ds \geqslant 0 $$</p> <p>For every sub-curve $\alpha \subset \gamma $.</p> <p>And then prove that $k_s(s) \ge 0$.</p> <p>I have no idea how to attack this problem; intuitively I can see the result.</p>
Community
-1
<p>This is a more formal version of Brian Rushton's answer. Suppose there is a point of negative curvature. Choose $xy$ coordinates so that this point is the origin $(0,0)$, the tangent direction is $x$-axis, and the $y$-axis points inside the convex set. Let $y=f(x)$ be the equation of a part of curve near $(0,0)$. (Implicit function theorem says you can solve for $y$ in terms of $x$.)</p> <p>The curvature at $(0,0)$ is $f''(0)$, according to equation (14) <a href="http://mathworld.wolfram.com/Curvature.html" rel="nofollow">here</a>. Since $f''(0)&lt;0$ and $f'(0)=0$, it follows that $f(x)&lt;0$ for $0&lt;|x|&lt;\delta$, if $\delta$ is sufficiently small. This contradicts the convexity of the set: e.g., the line segment from $(x,f(x))$ to $(-x,f(-x))$ crosses the negative half of the $y$-axis.</p>
694,668
<p>Let (X,Y) be uniformly distributed in a circle of radius 1. Show that if R is the distance from the center of the circle to (X,Y) then $R^2$ is uniform on (0,1). </p> <p>This is a question from the Simulation text of Prof. Sheldon Ross. Any hints? </p>
copper.hat
27,978
<p>Here is a much more tedious way of doing the same thing:</p> <p>Let $A \subset (0,1)$ be an open set. For later convenience, let $N = \{(x,0) | x \in (-1,0] \}$.</p> <p>Let $C_A = \{ x | x_1^2+x_2^2 \in A \}$, and note that $C_A = \{ \sqrt{t} (\cos \theta, \sin \theta) | t \in A, \theta \in (-\pi, \pi] \} \subset B(0,1)$. Note that if we let $C_A' = C_A \setminus N =\{ \sqrt{t} (\cos \theta, \sin \theta) | t \in A, \theta \in (-\pi, \pi) \} $, the measure remains the same.</p> <p>Define $\phi: (0,\infty) \times (-\pi, \pi) \to \mathbb{R}^2 \setminus N$ by $\phi(\alpha) = (\sqrt{\alpha_1} \cos \alpha_2, \sqrt{\alpha_1} \sin \alpha_2)$, note that $\phi$ is smooth, bijective and $\det {\partial \phi(\alpha) \over \partial \alpha} = {1 \over 2} &gt;0$, hence $\phi$ is a diffeomorphism.</p> <p>The important point is that $\phi(A \times (-\pi, \pi)) = C_A'$.</p> <p>The change of variables theorem gives $\int_{C_A'} dx = \int_{\phi(A \times (-\pi, \pi))} dx = \int_{A \times (-\pi, \pi)} | \det {\partial \phi(\alpha) \over \partial \alpha} | d \alpha = \pi m(A)$, where $m$ denotes Lebesgue measure.</p> <p>If $\mu$ is the uniformly distributed measure on the unit circle, we have $\mu = {1 \over \pi} m$, and so we have $\mu(C_A) = {1 \over \pi} m(C_A') = m(A)$. Since this is true for open $A$, it is true for $G_\delta$ sets and since $m$ is outer regular and complete, it follows that it is true for all measurable sets.</p> <p>Hence $R^2$ is distributed uniformly on $(0,1)$.</p>
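<p>A quick simulation agrees with this (rejection sampling for the uniform point; the sample size and tolerance are arbitrary):</p>

```python
import random

random.seed(0)

def sample_disk():
    # rejection sampling: uniform on the square until the point lands in the disk
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return x, y

n = 100_000
r2 = [x * x + y * y for x, y in (sample_disk() for _ in range(n))]

# if R^2 ~ Uniform(0,1), the empirical CDF at t should be close to t
for t in (0.1, 0.25, 0.5, 0.75, 0.9):
    freq = sum(v <= t for v in r2) / n
    assert abs(freq - t) < 0.01, (t, freq)
print("empirical CDF of R^2 matches Uniform(0,1)")
```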
357,101
<p>Does there exist a minimal subshift <span class="math-container">$X$</span> with a point <span class="math-container">$x \in X$</span> such that <span class="math-container">$x_{(-\infty,0)}.x_0x_0x_{(0,\infty)} \in X$</span>?</p>
Ville Salo
123,634
<p>It is well-known that the Chacon substitution <span class="math-container">$\tau$</span> defined by <span class="math-container">$\tau(0) = 0010$</span>, <span class="math-container">$\tau(1) = 1$</span> produces a minimal subshift, when you take the legal words to be the words that appear in some <span class="math-container">$\tau^n(0)$</span>. The two-sided fixed point from <span class="math-container">$0.0$</span> is <span class="math-container">$x.y = {...0010001010010.0010001010010...}$</span> and the one from <span class="math-container">$0. 10$</span> is <span class="math-container">$x.1y$</span>. So they are both in <span class="math-container">$X$</span>.</p> <p>Now apply the additional substitution <span class="math-container">$\alpha(0) = 01$</span>, <span class="math-container">$\alpha(1) = 1$</span> and you still get a minimal subshift <span class="math-container">$Y$</span> as image (this is a flow equivalence onto its image). We have <span class="math-container">$\alpha(x).\alpha(y) \in Y$</span> and <span class="math-container">$\alpha(x) . 1 \alpha(y) \in Y$</span>, and since <span class="math-container">$\alpha(x)$</span> ends with <span class="math-container">$1$</span> this is a pair of the kind you are asking for.</p>
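<p>For readers who want to experiment, both substitutions are one-liners to iterate (the function names here are mine):</p>

```python
def tau(w):
    # Chacon substitution: 0 -> 0010, 1 -> 1
    return "".join("0010" if c == "0" else "1" for c in w)

def alpha(w):
    # the extra substitution from the answer: 0 -> 01, 1 -> 1
    return "".join("01" if c == "0" else "1" for c in w)

w = "0"
for _ in range(4):
    w_next = tau(w)
    assert w_next.startswith(w)   # tau^n(0) is a prefix of tau^{n+1}(0)
    w = w_next

# the first letters of the one-sided fixed point quoted above
print(w[:13])   # -> 0010001010010
```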
445
<p>Under what circumstances should a question be made community wiki?</p> <p>Probably any question asking for a list of something (e.g. <a href="https://math.stackexchange.com/questions/81/list-of-interesting-math-blogs">1</a>) must be CW. What else? What about questions asking for a list of applications of something (like, say, <a href="https://math.stackexchange.com/questions/804/applications-for-the-class-equation-from-group-theory">3</a>) or questions like <a href="https://math.stackexchange.com/questions/1446/interesting-properties-of-ternary-relations">2</a>? Should all soft-questions be made community wiki (and how we should define soft-question, in that case)?</p>
Tom Boardman
160
<p>Okay, so I like Casebash's approach of taking SO as the template but, as Grigory says, there is perhaps more necessity for a divide here*, there is also more scope.</p> <p>Certainly, as Casebash says, the <strong>broadly discursive</strong> but <strong>well motivated</strong> should be hit with the wiki hammer (and by implication, those with poor motivation are closure fodder), as should big-lists.</p> <p>But notation, terminology and math history questions, as Grigory points out, provide an interesting borderline case; for my money:</p> <h3>Notation</h3> <ul> <li>What does the turnstile symbol mean in logic? <strong>bad question</strong></li> <li>Intuition for the turnstile symbol? <strong>CW</strong></li> <li>What is the difference between the turnstile and double turnstile symbols in logic? <strong>Rep</strong></li> </ul> <h3>Terminology</h3> <ul> <li>Is 0 a natural number? <strong>bad CW question</strong></li> <li>Why might one consider 0 not to be a natural number? <strong>good CW</strong></li> <li>How much of arithmetic is possible without 0? <strong>Rep</strong></li> </ul> <h3>Math History</h3> <ul> <li>Who invented the integral? <strong>bad question</strong></li> <li>How has calculus changed since its creation? <strong>CW</strong></li> <li>How did Newton's definition of the integral differ from Leibniz's? <strong>Rep</strong></li> </ul> <p>Notice how it is possible to make repworthy questions in each topic, but also useless ones. In most cases the CWs are the ones that are broadly discursive, but in some cases the rep ones can be a little discursive too. What is key is that, if a <em>slightly</em> discursive question <strong>requires special knowledge and care</strong>, we should offer rep as an incentive. Also if a question on a discursive topic is <strong>asking about a very specific and well defined subtopic</strong>, I feel we should deem it a fair question.</p> <p>*It has been mentioned elsewhere [by @Noah Snyder?] that the proportion of people using math professionally to those using math in general is always going to be lower than the corresponding proportion for those using programming.</p> <p><strong>Edit:</strong> The problem of <a href="https://math.stackexchange.com/questions/1706/does-set-include-zero">this question</a>, which is both <strong>specific/well-defined</strong> and requires <strong>no real special knowledge or care</strong>, might be seen as something of a Gödel number to my Principia Mathematica... That said, I think we can leave it rep and the votes will take care of it ;)</p>
445
<p>Under what circumstances should a question be made community wiki?</p> <p>Probably any question asking for a list of something (e.g. <a href="https://math.stackexchange.com/questions/81/list-of-interesting-math-blogs">1</a>) must be CW. What else? What about questions asking for a list of applications of something (like, say, <a href="https://math.stackexchange.com/questions/804/applications-for-the-class-equation-from-group-theory">3</a>) or questions like <a href="https://math.stackexchange.com/questions/1446/interesting-properties-of-ternary-relations">2</a>? Should all soft-questions be made community wiki (and how we should define soft-question, in that case)?</p>
Larry Wang
73
<p>Some facts relevant to this question:</p> <p><strong>The effects of making a question/answer Community Wiki:</strong> </p> <ul> <li>Lower threshold to edit: only requires 100 rep, compared to 1000 for normal posts </li> <li>No reputation is gained for upvotes, nor lost for downvotes (this also means no -1 rep for casting a downvote) </li> <li>The system interprets Community Wiki posts to be owned by the <a href="https://math.stackexchange.com/users/-1/community">Community User</a> rather than the original author. </li> <li>All (future?) answers to a community wiki question automatically become community wiki</li> </ul> <p><strong>A question or answer becomes Community Wiki when:</strong></p> <ul> <li>The body of the post has been edited six (6) times by at least four (4) different people.</li> <li>The post has been edited eight (8) times by the original owner.</li> <li>The post's author checks the community wiki checkbox when composing the question or answer. Note: <em>this feature is not available to users with &lt;15 rep</em></li> <li>The question generates more than n answers (I think n is 30 here, but it might be 15)</li> <li>The answer is posted in response to a community wiki question</li> <li>The original author or a moderator edits the post and checks the community wiki box.</li> </ul> <p><strong>Other important information:</strong></p> <ul> <li>Community Wiki mode is irreversible.</li> <li>Although reputation from votes is not counted, badges are still awarded as normal.</li> <li>Any reputation changes before a post becomes community wiki stay.</li> </ul>
3,309,511
<p>Prove that there exist infinitely many pairs of positive real numbers <span class="math-container">$x$</span> and <span class="math-container">$y$</span> such that <span class="math-container">$x\neq y$</span> but <span class="math-container">$ x^x=y^y$</span>.</p> <p>For example <span class="math-container">$\tfrac{1}{4} \neq \tfrac{1}{2}$</span> but </p> <p><span class="math-container">$$\left( \frac{1}{4} \right)^{1/4} = \left( \frac{1}{2} \right)^{1/2}$$</span></p> <p>I am confused about how to approach the problem. I think we have to find all the solutions in a certain interval, probably <span class="math-container">$(0,1]$</span>.</p>
eyeballfrog
395,748
<p>While I like the continuity-based approach of the other answers, you can also get a parameterized set of solutions through algebraic methods.</p> <p>Let <span class="math-container">$x$</span> be the larger of the two and define <span class="math-container">$a \in (0,1)$</span> by <span class="math-container">$a = y/x$</span>. Then</p> <p><span class="math-container">$$ x^x = y^y = (ax)^{ax}\,\,\, \Longrightarrow\,\,\, x = a^a x^a\Longrightarrow \,\,\,x = a^{a/(1-a)}, $$</span> and correspondingly <span class="math-container">$y = a^{1/(1-a)}$</span>. Since there are infinitely many <span class="math-container">$a\in (0,1)$</span>, and distinct values of <span class="math-container">$a$</span> give distinct pairs (the ratio <span class="math-container">$y/x=a$</span> recovers <span class="math-container">$a$</span>), there are infinitely many solutions to <span class="math-container">$x^x = y^y$</span>.</p>
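<p>A quick numerical check of this parametrization (the sampled values of $a$ are arbitrary):</p>

```python
import math

def pair(a):
    # the parametrization above: x = a^{a/(1-a)}, y = a^{1/(1-a)}, with 0 < a < 1
    x = a ** (a / (1 - a))
    y = a ** (1 / (1 - a))
    return x, y

for a in (0.1, 0.3, 0.5, 0.7, 0.9):
    x, y = pair(a)
    assert x != y                                # genuinely distinct solutions
    # compare x^x and y^y through x*log(x) to avoid precision loss
    assert math.isclose(x * math.log(x), y * math.log(y), rel_tol=0, abs_tol=1e-12)
print("x^x = y^y for every sampled a")
```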
3,978,606
<p>Question says</p> <blockquote> <p>For <span class="math-container">$(C[0,1], \Vert\cdot\Vert_{\infty})$</span>, let <span class="math-container">$B=\{f\in C[0,1] : \Vert f\Vert_{\infty} \leq 1\}$</span>. Find all <span class="math-container">$f\in B$</span> such that there exist <span class="math-container">$g,h\in B$</span>, <span class="math-container">$g\neq h$</span>, with <span class="math-container">$f=\frac{g+h}{2}$</span></p> </blockquote> <p>I studied some graphs and I feel that for <span class="math-container">$f \equiv 1$</span> and <span class="math-container">$f \equiv -1$</span>, we can't find such <span class="math-container">$g,h$</span>, <span class="math-container">$g\neq h$</span>, with <span class="math-container">$f=\frac{g+h}{2}$</span>, as the only choice left for <span class="math-container">$g$</span> and <span class="math-container">$h$</span> is to be identically equal to <span class="math-container">$1$</span> or <span class="math-container">$-1$</span>, respectively. In all other cases, we can increase and decrease the function <span class="math-container">$f$</span> a little bit (for those <span class="math-container">$x$</span> such that <span class="math-container">$|f(x)| \neq 1$</span>) so as to get an average equal to <span class="math-container">$f$</span>. I can sketch that but am not able to put it mathematically. Am I correct?</p> <p>Thanks.</p>
Chrystomath
84,081
<p>Suppose <span class="math-container">$f$</span> has a point <span class="math-container">$x$</span> such that <span class="math-container">$-1&lt;f(x)&lt;1$</span>. Then, by continuity of <span class="math-container">$f$</span>, there is a <span class="math-container">$\delta&gt;0$</span> such that <span class="math-container">$|f(t)|&lt;1$</span> on the whole closed interval <span class="math-container">$I=[x-\delta,x+\delta]\cap[0,1]$</span>. Let <span class="math-container">$s:=\min\{1-|f(t)|:t\in I\}&gt;0$</span>.</p> <p>Let <span class="math-container">$g(t):=f(t)+r(t-x)$</span> and <span class="math-container">$h(t):=f(t)-r(t-x)$</span> where <span class="math-container">$r(t)=s\,(1-\min(1,|t|/\delta))$</span> is the tent function centered at zero with height <span class="math-container">$s$</span> and support <span class="math-container">$[-\delta,\delta]$</span>. Then clearly <span class="math-container">$f=\frac{g+h}{2}$</span>. On <span class="math-container">$I$</span> we have <span class="math-container">$|g(t)|\le|f(t)|+s\le1$</span>, and off <span class="math-container">$I$</span> we have <span class="math-container">$g=f$</span>; similarly for <span class="math-container">$h$</span>. Hence <span class="math-container">$g,h\in B$</span> and <span class="math-container">$g\ne h$</span>, with the required properties.</p>
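<p>A numerical check of the tent construction for one concrete $f$ (the choice $f(t)=\tfrac12\sin 2\pi t$, the point $x$, and $\delta$ are arbitrary; here the tent is scaled to have support of width $2\delta$, with $s$ taken as the minimum slack on that interval):</p>

```python
import numpy as np

t = np.linspace(0, 1, 2001)
f = 0.5 * np.sin(2 * np.pi * t)        # an f in B with |f| < 1 everywhere

x, delta = 0.5, 0.25
mask = np.abs(t - x) <= delta
s = np.min(1 - np.abs(f[mask]))        # minimum slack on [x - delta, x + delta]
assert s > 0

r = s * np.clip(1 - np.abs(t - x) / delta, 0, 1)   # tent of height s
g, h = f + r, f - r

assert np.all(np.abs(g) <= 1 + 1e-12) and np.all(np.abs(h) <= 1 + 1e-12)  # g, h in B
assert np.allclose((g + h) / 2, f)                                        # midpoint is f
assert np.max(np.abs(g - h)) > 0                                          # g != h
print("f is the midpoint of two distinct elements of B")
```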
990,340
<p>I have an argument with my friends on a probability question.</p> <p><strong>Question</strong>: There are lots of stone balls in a big barrel A, where 60% are black and 40% are white; black and white balls are identical except for color.</p> <p>First, John, blindfolded, takes 110 balls into a bowl B; afterwards, Jane, blindfolded also, from bowl B takes 10 balls into cup C -- and finds that all 10 balls in C are white.</p> <p>Now, what's the expectation of black balls in bowl B?</p> <p>It seems there are 3 answers:</p> <p><strong>Answer 1</strong> My friend thinks the probability of black stones in bowl B is still 60%, or 60% * 110 = 66 black balls expected in bowl B.</p> <p><strong>Answer 2</strong> However, I think 60% is just the prior probability; with 10 balls all white from bowl B, we shall calculate the posterior probability. Denote $B_k$ as the number of black balls in bowl B, and $C_{w}=10$ as the event that 10 balls in cup C are all white.</p> <p>$$E(B_k | C_w=10) = \sum_{x=10}^{110}\left[x P(B_k = x | C_w=10)\right] = \sum_{x=10}^{110} \left[x \frac{P(B_k = x \text{ and } C_w=10)}{P(C_w=10)}\right] = \sum_{x=10}^{110} \left[x \frac{P(B_k = x) P( C_w=10 | B_k=x)}{P(C_w=10)}\right] $$</p> <p>, where $$P(C_w=10) = \sum_{x=10}^{110} \left[ P(B_k = x) P(C_w = 10 | B_k=x) \right] $$</p> <p>, and according to the binomial distribution $$P(B_k=x) = {110 \choose x} 0.6^x 0.4^{110-x} $$ , and $$P(C_w = 10 | B_k=x) = \frac{1}{(110-x)(110-x-1)\cdots (110-x-9)}$$ </p> <p><strong>Answer 3</strong> This is from Stefanos below: You can ignore the step that John takes 110 balls into a bowl B; this does not affect the answer. The expected percentages in bowl $B$ are again $60\%$ and $40\%$, i.e. 66 black balls and 44 white balls. Now, after Jane has drawn 10 white balls obviously the posterior probability changes, since the expected numbers are now 66 and 34. So, you are correct.</p> <p>Which answer is right?</p> <p>I sort of don't agree with Stefanos: the black-ball distribution in bowl B could vary a lot from barrel A, as the sampling distribution can differ from the population distribution.</p> <p>In other words, if Jane draws a lot of balls and all are white, e.g. 50 balls all white, I fancy it's reasonable to suspect that bowl B does not have a 60%-40% black-white distribution.</p>
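<p>For what it's worth, under the modeling assumption that the barrel is so large that each of the 110 balls in bowl B is independently black with probability 0.6, the posterior mean in the spirit of Answer 2 can be computed exactly. Note the likelihood used below is the hypergeometric $\binom{110-x}{10}/\binom{110}{10}$, which is not the same expression as in the question:</p>

```python
from math import comb

def posterior_mean_black():
    # prior: number of black balls ~ Binomial(110, 0.6);
    # data: 10 balls drawn without replacement all turn out white
    num = den = 0.0
    for x in range(101):                             # x = black balls among the 110
        prior = comb(110, x) * 0.6**x * 0.4**(110 - x)
        like = comb(110 - x, 10) / comb(110, 10)     # P(all 10 drawn white | x black)
        num += x * prior * like
        den += prior * like
    return num / den

# since the 10 removed balls are white, x also counts the black balls left in the bowl
print(round(posterior_mean_black(), 6))   # -> 60.0
```

<p>Under this i.i.d. model the ten white draws carry no information about the hundred balls still in the bowl, so the expected number of black balls among them stays at $0.6 \times 100 = 60$; a fixed (non-random) bowl composition would behave differently.</p>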
Dr. Sonnhard Graubner
175,066
<p>we have $7^{17}&lt;2^{51}&lt;3^{34}$</p>
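<p>Since Python integers have arbitrary precision, this chain of inequalities can be checked exactly:</p>

```python
a, b, c = 7**17, 2**51, 3**34
assert a < b < c
print(a, b, c)   # 232630513987207 2251799813685248 16677181699666569
```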
1,114
<p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
Dan Piponi
1,233
<p>I'm surprised this example hasn't been mentioned already:</p> <p>The 3x3x3 Rubik's cube forms a group. The 15-puzzle forms a groupoid.</p> <p>The reason is that any move that can be applied to a Rubik's cube can be applied at any time, regardless of the current state of the cube.</p> <p>This is not true of the 15-puzzle. The legal moves available to you depend on where the hole is. So you can only compose move B after move A if A leaves the puzzle in a state where move B can be applied. This is what characterises a groupoid.</p> <p>There's more to be found <a href="http://cornellmath.wordpress.com/2008/01/27/puzzles-groups-and-groupoids/">here</a>.</p>
1,114
<p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
André Henriques
5,690
<p>Penrose tilings are beautiful objects, with a lot of <b>symmetry...</b>&nbsp;&nbsp; but their symmetry group is <b>trivial</b>!</p> <p>So there's a discrepancy somewhere. The answer is: "groupoids"! The topological groupoid of symmetries of a Penrose tiling is non-trivial, and contains all the information that your intuition might call "symmetry". </p> <p>The reason it's a groupoid and not a group is the following. Given a Penrose tiling, there are many different tilings that are locally indistinguishable from your original tiling. These are the objects of your groupoid. It becomes a topological groupoid under the topology of "uniform convergence in any bounded domain". The arrows are given by isomorphisms between a given tiling (=object) and a translated or rotated version of itself.</p>
3,780,959
<p>Consider a connected, unweighted, undirected graph <span class="math-container">$G$</span>. Let <span class="math-container">$m$</span> be the number of edges and <span class="math-container">$n$</span> be the number of nodes.</p> <p>Now consider the following random process. First sample a uniformly random spanning tree of <span class="math-container">$G$</span> and then pick an edge from this spanning tree uniformly at random. Our process returns the edge.</p> <p>If I want to sample many edges from <span class="math-container">$G$</span> from the probability distribution implied by this process, is there a more efficient (in terms of computational complexity) method than sampling a new random spanning tree each time?</p>
smapers
306,270
<p>Approximately sampling according to the effective resistances is done in the sparsification algorithm of Spielman and Srivastava. See Theorem 2 of <a href="https://arxiv.org/abs/0803.0929" rel="nofollow noreferrer">this paper</a>. The complexity has a one-off cost of <span class="math-container">$\tilde{O}(m)$</span>, and then cost <span class="math-container">$\tilde{O}(1)$</span> per sample.</p>
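<p>For intuition about the target distribution: in an unweighted graph, edge $e=\{u,v\}$ belongs to a uniform spanning tree with probability equal to its effective resistance $R_{\text{eff}}(u,v)$, so the two-step process in the question samples $e$ with probability $R_{\text{eff}}(e)/(n-1)$. A small dense-linear-algebra sketch via the Laplacian pseudoinverse (not the nearly-linear-time algorithm of the cited paper):</p>

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # a triangle with a pendant edge
n = 4
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1
Lp = np.linalg.pinv(L)                      # Laplacian pseudoinverse

def r_eff(u, v):
    e = np.zeros(n); e[u], e[v] = 1.0, -1.0
    return float(e @ Lp @ e)

# each triangle edge: two parallel paths of resistance 1 and 2, so R_eff = 2/3
assert abs(r_eff(0, 1) - 2 / 3) < 1e-9
# the pendant edge is a bridge, so it lies in every spanning tree
assert abs(r_eff(2, 3) - 1.0) < 1e-9
# Foster's theorem: the probabilities sum to n - 1, the size of any spanning tree
assert abs(sum(r_eff(u, v) for u, v in edges) - (n - 1)) < 1e-9
```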
1,341,505
<p>Let $U:\mathbb R \to \mathbb R$ be a concave function, let $X$ be a random variable with finite expected value, and let $Y$ be a random variable that is independent of $X$ and has expected value $0$. Define $Z=X+Y$. Prove that $E[U(X)] \ge E[U(Z)]$.</p> <p>I know that $E(X)=E(Z)$, and by Jensen's inequality $U[E(X)] \ge E[U(X)]$, but that gives me nothing so far.</p> <p>Please help. Thanks a lot.</p>
Ant
66,711
<p>Rewrite the claim as </p> <p>$$E[u(X)] = \int u(x) dP^X \ge \int u(x + y) dP^X \otimes dP^Y = E[u(X + Y)]$$</p> <p>Now let's concentrate on the right hand side; in particular we can use Fubini-Tonelli to write </p> <p>$$\int u(x + y) dP^X \otimes dP^Y = \int \left(\int u(x+y)dP^Y\right) dP^X$$</p> <p>The inner integral is simply $E[u(x + Y)]$ for some constant $x$. Since $u$ is concave we know (by Jensen) that $$E[u(x + Y)] \le u(E[x + Y]) = u(x)$$</p> <p>Therefore $$E[u(X + Y)] = \int \left(\int u(x+y)dP^Y\right) dP^X \le \int u(x) dP^X = E[u(X)]$$</p> <p>which is what we wanted to prove.</p>
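<p>A quick Monte Carlo illustration with one concrete concave $u$ (the distributions and sample size are arbitrary choices):</p>

```python
import random

random.seed(42)
N = 200_000

def u(t):
    return -t * t          # a concave function

X = [random.gauss(1.0, 1.0) for _ in range(N)]
Y = [random.choice((-1.0, 1.0)) for _ in range(N)]   # independent of X, E[Y] = 0

EuX = sum(u(x) for x in X) / N
EuZ = sum(u(x + y) for x, y in zip(X, Y)) / N        # Z = X + Y
assert EuX >= EuZ
print(EuX, EuZ)   # close to the exact values -2 and -3 for these choices
```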
3,223,732
<p>Let <span class="math-container">$X=U\cup V$</span> where <span class="math-container">$U,V$</span> are simply-connected open sets and <span class="math-container">$U\cap V$</span> is the disjoint union of two simply connected sets. We also have the condition that any subspace <span class="math-container">$S$</span> of <span class="math-container">$X$</span> homeomorphic to <span class="math-container">$[0,1]$</span> has an open neighborhood that deformation retracts onto <span class="math-container">$S$</span>.</p> <p>We can choose points <span class="math-container">$p$</span> and <span class="math-container">$q$</span>, one from each of the two disjoint components of <span class="math-container">$U\cap V$</span>, that are not connected by a path. Then <span class="math-container">$U\cup V$</span> should deformation retract onto the union of two paths (one path <span class="math-container">$\alpha$</span> in <span class="math-container">$U$</span>, another path <span class="math-container">$\beta$</span> in <span class="math-container">$V$</span>) that connects <span class="math-container">$p$</span> and <span class="math-container">$q$</span>, hence the fundamental group must be <span class="math-container">$\mathbb{Z}$</span>.</p> <p>But I don't know how to rigorously show this part. We don't know if <span class="math-container">$U$</span> deformation retracts onto <span class="math-container">$\alpha$</span> and <span class="math-container">$V$</span> deformation retracts onto <span class="math-container">$\beta$</span>.. and even if we show that, I don't know how do we deal with the intersection <span class="math-container">$U\cap V$</span>.</p>
雨が好きな人
438,600
<p>Here’s one suggestion (not rigorous but to give you some intuition):</p> <p>The kernel is the set of vectors in the domain that are mapped to zero in the codomain. The dimension of the kernel can be thought of as the number of dimensions that get ‘squashed’ by the transformation. By ‘squashed’, I mean, for example, all of the vectors in a <span class="math-container">$3$</span>-dimensional space being mapped to a <span class="math-container">$2$</span>-dimensional plane. You can imagine a cube, or some other <span class="math-container">$3$</span>-dimensional object, being squashed until it is flat.</p> <p>We are mapping from a <span class="math-container">$5$</span>-dimensional space to a <span class="math-container">$3$</span>-dimensional space, so we are already forced to squash <span class="math-container">$2$</span> dimensions. Therefore the dimension of the kernel is at least <span class="math-container">$2$</span>. If <em>all</em> of the vectors are mapped to zero by the transformation, then all <span class="math-container">$5$</span> dimensions of the domain will be squashed, meaning that the dimension of the kernel is at most <span class="math-container">$5$</span>. So we have <span class="math-container">$2 \leq dim(Null(T)) \leq 5$</span>.</p> <p>If you want to use the rank-nullity theorem, we can instead consider the image of <span class="math-container">$T$</span>. In the case where all vectors are mapped to zero, the image clearly has dimension zero. It is also clear that the dimension of the image can be at most <span class="math-container">$3$</span>, which will be the case if the ‘output’ vectors occupy all of the space we are mapping to. So we have <span class="math-container">$0 \leq dim(Im(T)) \leq 3$</span> which, by the rank-nullity theorem (<span class="math-container">$dim(Im(T)) + dim(Null(T)) = 5$</span> in this case), implies the result above.</p>
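<p>The two extremes are easy to exhibit with concrete matrices (the matrices below are my own examples):</p>

```python
import numpy as np

# a full-rank map T: R^5 -> R^3, giving the smallest possible kernel
A = np.array([[1., 0., 2., 0., 1.],
              [0., 1., 3., 0., 0.],
              [0., 0., 0., 1., 4.]])
rank = np.linalg.matrix_rank(A)
assert rank == 3 and 5 - rank == 2        # rank-nullity: dim Null = 5 - dim Im

# the zero map, giving the largest possible kernel
Z = np.zeros((3, 5))
zrank = np.linalg.matrix_rank(Z)
assert zrank == 0 and 5 - zrank == 5
print("dim Null(T) attains both extremes 2 and 5")
```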
3,845,968
<p><a href="https://math.stackexchange.com/q/29666/717872">There</a> <a href="https://math.stackexchange.com/q/11/717872">are</a> <a href="https://math.stackexchange.com/q/363977/717872">tens</a> <a href="https://math.stackexchange.com/q/3339682/717872">of</a> <a href="https://math.stackexchange.com/q/2005492/717872">posts</a> <a href="https://math.stackexchange.com/q/281492/717872">already</a> <a href="https://math.stackexchange.com/q/1291388/717872">on</a> <a href="https://math.stackexchange.com/q/1600940/717872">this</a> site about whether <span class="math-container">$0.\overline{9} = 1$</span>.</p> <p>This is something that intrigues me, and I have a question about this, including a &quot;proof&quot; which I have found myself.</p> <h3>Question:</h3> <p><a href="https://math.stackexchange.com/questions/281492/about-0-999-1#comment614377_281492">This comment</a> says that</p> <blockquote> <p>you need a <em>terminating</em> decimal to get something less than 1.</p> </blockquote> <p>If so, does it mean that a non-terminating decimal (e.g. <span class="math-container">$0.\overline{9}$</span>) is <span class="math-container">$\ge 1$</span>?</p> <p>So is <span class="math-container">$\frac{1}{3}$</span> (<span class="math-container">$0.\overline{3}$</span>) also <span class="math-container">$\ge 1$</span>? It is non-terminating, but you can subtract <span class="math-container">$\frac{1}{3}$</span> from <span class="math-container">$1$</span> to get <span class="math-container">$\frac{2}{3} = 0.\overline{6}$</span>, which is another non-terminating decimal. How do those mechanics work?</p> <hr /> <p><strong>Theorem:</strong> <span class="math-container">$0.99999... = 1.00000... = 1$</span></p> <p><strong>Proof:</strong></p> <p><span class="math-container">\begin{align} \frac{1}{9} &amp;= 0.11111... \\ \frac{2}{9} &amp;= 0.22222... \\ \frac{3}{9} &amp;= 0.33333... \\ \frac{4}{9} &amp;= 0.44444... \\ \frac{5}{9} &amp;= 0.55555... \\ \frac{6}{9} &amp;= 0.66666... \\ \frac{7}{9} &amp;= 0.77777... \\ \frac{8}{9} &amp;= 0.88888... \\ \therefore \frac{9}{9} &amp;= 0.99999... \\ &amp;= 1 \end{align}</span></p> <p>Is the above proof correct? I came up with it myself before I decided to ask this question, but I do not know if it is mathematically valid.</p>
Community
-1
<p>The non-terminating notation (either <span class="math-container">$0.9999\cdots$</span> or <span class="math-container">$0.\overline9$</span>) is a disguised limit, namely</p> <p><span class="math-container">$$\lim_{n\to\infty}\sum_{k=1}^n\frac 9{10^k}$$</span> or <span class="math-container">$$\lim_{n\to\infty}\left(1-10^{-n}\right).$$</span></p> <p>This limit equals <span class="math-container">$1$</span>.</p> <hr /> <p>To relate this to your proof, we indeed have</p> <p><span class="math-container">$$0.\overline1=\lim_{n\to\infty}\sum_{k=1}^n\frac 1{10^k}=\frac19,$$</span></p> <p>then</p> <p><span class="math-container">$$9\cdot 0.\overline1=9\lim_{n\to\infty}\sum_{k=1}^n\frac 1{10^k}=\lim_{n\to\infty}\sum_{k=1}^n\frac 9{10^k}=0.\overline 9$$</span> and <span class="math-container">$$\frac99=1.$$</span></p> <p>But honestly, I see no benefit taking this indirect route through <span class="math-container">$9\cdot0.\overline 1$</span>, and for completeness, you should explain why <span class="math-container">$\dfrac19=0.\overline1$</span>, and why <span class="math-container">$9\cdot0.\overline1=0.\overline9$</span> like I did (or another way).</p>
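<p>The partial sums are easy to watch in floating point (only an illustration; doubles cap out near machine epsilon):</p>

```python
s = 0.0
for k in range(1, 18):
    s += 9 / 10**k                          # partial sums 0.9, 0.99, 0.999, ...
    assert abs(s - (1 - 10.0**-k)) < 1e-12  # s equals 1 - 10^{-k} up to rounding
print(1 - s)   # the remaining gap, at the level of float rounding error
```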
2,204,812
<p>What is the solution of the differential equation $\frac{dy}{dx}-x\tan(y-x)=1$?</p> <p>For solving such problems, first we should see if the equation is in variable-separable form or not. Obviously here it is not. So I tried to see if it can be made variable-separable by substitution, but substituting $y-x=z$ would not give the answer, as there is one $x$ remaining outside that $\tan(y-x)$. Also it is not homogeneous, nor does it convert into homogeneous form, so that I could substitute $y=vx$ or $x=vy$. So which method should I use here? I am getting the wrong answer every time; please help in dealing with this. At least provide me a hint.</p>
k.Vijay
428,609
<p>Substituting $y-x=z$ will give the solution.</p> <p>let $y-x=z$ then $\dfrac{dy}{dx}=1+\dfrac{dz}{dx}$</p> <p>Now, \begin{align*} \dfrac{dy}{dx}-x\cdot\tan(y-x)&amp;=1\\ \Rightarrow1+\dfrac{dz}{dx}-x\cdot\tan z&amp;=1\\ \Rightarrow\dfrac{dz}{dx}&amp;=x\cdot\tan z\\ \displaystyle\int\cot z\ dz&amp;=\displaystyle\int x\ dx+k\\ \Rightarrow\ln{\left|\sin{z}\right|}&amp;=\dfrac{1}{2}x^2+k\\ \Rightarrow\ln{\left|\sin{z}\right|}^2&amp;=x^2+2k\\ \Rightarrow\sin^2{(y-x)}&amp;=e^{x^2+2k}\\ \Rightarrow\sin^2{(y-x)}&amp;=c\cdot e^{x^2} \end{align*}</p>
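<p>One can sanity-check the result numerically by picking the explicit branch $y(x) = x + \arcsin\big(\sqrt{c}\,e^{x^2/2}\big)$ of the implicit solution and confirming it satisfies the ODE (the value of $c$ and the test points are arbitrary):</p>

```python
import math

c = 0.04

def y(x):
    # branch of sin^2(y - x) = c * e^{x^2} with sin(y - x) = sqrt(c) * e^{x^2/2}
    return x + math.asin(math.sqrt(c) * math.exp(x * x / 2))

h = 1e-6
for x in (-0.5, 0.0, 0.3, 0.8):
    dydx = (y(x + h) - y(x - h)) / (2 * h)          # central difference
    residual = dydx - x * math.tan(y(x) - x) - 1    # should vanish on the solution
    assert abs(residual) < 1e-6, (x, residual)
print("the implicit solution satisfies dy/dx - x*tan(y - x) = 1")
```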
3,932,803
<p>I need to prove <span class="math-container">$$ \lim_{x\to 0}\frac{x^2-8}{x-8} =1 $$</span> using the epsilon-delta definition. I know I need to show that for every <span class="math-container">$\epsilon &gt;0$</span> there exists a <span class="math-container">$\delta &gt;0$</span> such that if <span class="math-container">$|x|&lt;\delta$</span>, then <span class="math-container">$| {\frac{x^2-8}{x-8}}-1|&lt;\epsilon$</span></p> <p>What I did:</p> <p><span class="math-container">$|{\frac{x^2-8}{x-8}}-1|=|{\frac{x^2-x}{x-8}}|$</span>, but I'm having trouble now deciding what my delta should be.</p> <p>I would really appreciate it if someone could explain how I should choose delta and why. Thanks.</p>
José Carlos Santos
446,262
<p>Note that<span class="math-container">$$\frac{x^2-x}{x-8}=x\frac{x-1}{x-8}.$$</span>Now, if <span class="math-container">$|x|&lt;1$</span>, then <span class="math-container">$-1&lt;x&lt;1$</span> and therefore you have two things:</p> <ul> <li><span class="math-container">$-9&lt;x-8&lt;-7$</span>, which implies that <span class="math-container">$|x-8|&gt;7$</span>;</li> <li><span class="math-container">$-2&lt;x-1&lt;0$</span>, which implies that <span class="math-container">$|x-1|&lt;2$</span>.</li> </ul> <p>So,<span class="math-container">$$|x|&lt;1\implies\left|x\frac{x-1}{x-8}\right|&lt;\frac27|x|.$$</span>Therefore, given <span class="math-container">$\varepsilon&gt;0$</span>, take <span class="math-container">$\delta=\min\left\{\frac72\varepsilon,1\right\}$</span>and then<span class="math-container">\begin{align}|x|&lt;\delta&amp;\implies|x|&lt;\frac72\varepsilon\quad\text{and}\quad|x|&lt;1\\&amp;\implies\left|x\frac{x-1}{x-8}\right|&lt;\varepsilon.\end{align}</span></p>
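<p>A brute-force sanity check of this choice of $\delta$ (random sampling only; not part of the proof):</p>

```python
import random

random.seed(3)

def g(x):
    # |(x^2 - 8)/(x - 8) - 1|, the quantity to be made smaller than epsilon
    return abs((x * x - 8) / (x - 8) - 1)

for eps in (1.0, 0.1, 0.01):
    delta = min(7 * eps / 2, 1.0)
    for _ in range(10_000):
        x = random.uniform(-delta, delta)
        if x != 0:
            assert g(x) < eps, (eps, x)
print("delta = min(7*eps/2, 1) works on all sampled points")
```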
3,932,803
<p>I need to prove <span class="math-container">$$ \lim_{x\to 0}\frac{x^2-8}{x-8} =1 $$</span> using the epsilon-delta definition. I know I need to show that for every <span class="math-container">$\epsilon &gt;0$</span> there exists a <span class="math-container">$\delta &gt;0$</span> such that if <span class="math-container">$|x|&lt;\delta$</span>, then <span class="math-container">$| {\frac{x^2-8}{x-8}}-1|&lt;\epsilon$</span></p> <p>What I did:</p> <p><span class="math-container">$|{\frac{x^2-8}{x-8}}-1|=|{\frac{x^2-x}{x-8}}|$</span>, but I'm having trouble now deciding what my delta should be.</p> <p>I would really appreciate it if someone could explain how I should choose delta and why. Thanks.</p>
Barry Cipra
86,747
<p>We have</p> <p><span class="math-container">$$\left|x^2-x\over x-8\right|=|x|\left|x-1\over x-8\right|\le|x|{|x|+1\over||x|-8|}$$</span></p> <p>Now the main thing that causes a problem is the smallness of the denominator if <span class="math-container">$|x|\approx8$</span>. But since we're interested in the limit as <span class="math-container">$x\to0$</span>, it's easy to stay away from <span class="math-container">$8$</span> by requiring, say <span class="math-container">$|x|\le7$</span>, in which case we have</p> <p><span class="math-container">$${|x|+1\over||x|-8|}\le{7+1\over|7-8|}=8$$</span></p> <p>So taking <span class="math-container">$\delta=\min(7,\epsilon/8)$</span> we have</p> <p><span class="math-container">$$0\lt|x|\lt\delta\implies\left|x^2-x\over x-8\right|\le8|x|\lt8\delta\le\epsilon$$</span></p> <p>as desired.</p> <p>Remark: The key to limit write-ups of this type is to keep in mind that even though we're thinking of <em>small</em> epsilons, we don't want to be embarrassed by claiming an implication that isn't true in case someone decides to use a large <span class="math-container">$\epsilon$</span>. E.g., if we just let <span class="math-container">$\delta=\epsilon/8$</span>, we run the risk of the <em>false</em> implication <span class="math-container">$0\lt|x|\lt8\implies|(x^2-x)/(x-8)|\lt64$</span>, which is falsified, for example, at <span class="math-container">$x=7.9$</span> (that is, <span class="math-container">$0\lt|7.9|\lt8$</span> but <span class="math-container">$|(7.9^2-7.9)/(7.9-8)|=545.1\gt64$</span>). The role of the &quot;min&quot; is to avoid such false claims. As the other answers here show, there is no single preferred way to choose your delta; it's somewhat a matter of taste. My preference is to try to isolate what causes things to be uncontrollably large, usually a potential zero in a denominator, impose a restriction that brings that term under control, and then look at what happens to everything else.</p>
3,932,803
<p>I need to prove <span class="math-container">$$ \lim_{x\rightarrow 0}\frac{x^2-8}{x-8} =1 $$</span> using the epsilon-delta definition. I know I need to show that for every <span class="math-container">$\epsilon &gt;0$</span> there exists a <span class="math-container">$\delta &gt;0$</span> such that if <span class="math-container">$0&lt;|x|&lt;\delta$</span>, then <span class="math-container">$| {\frac{x^2-8}{x-8}}-1|&lt;\epsilon$</span>.</p> <p>What I did:</p> <p><span class="math-container">$|{\frac{x^2-8}{x-8}}-1|=|{\frac{x^2-x}{x-8}}|$</span>, but I'm having trouble now deciding what my delta should be.</p> <p>I would really appreciate it if someone could give me an explanation of how I should choose delta and why. Thanks.</p>
CopyPasteIt
432,081
<p>We've solved the problem on scrap paper and now present a solution devoid of the process details.</p> <p>It is easy to show that</p> <p><span class="math-container">$\quad \large |x| \lt \frac{1}{6} \implies \left|\frac{x-1}{x-8}\right| \lt \frac{1}{7}$</span></p> <p>For <span class="math-container">$\varepsilon \gt 0$</span> set <span class="math-container">$\delta = \text{min}(\frac{1}{6}, 7\varepsilon)$</span>.</p> <p>If <span class="math-container">$0 \lt |x| \lt \delta$</span> then</p> <p><span class="math-container">$\quad \large \left|\frac{x^2-8}{x-8}-1\right|=\left| \frac{x-1}{x-8}\right|\,|x| \lt \frac{1}{7} \cdot 7\varepsilon = \varepsilon$</span></p>
94,501
<p>The well-known Vandermonde convolution gives us the closed form <span class="math-container">$$\sum_{k=0}^n {r\choose k}{s\choose n-k} = {r+s \choose n}$$</span> For the case <span class="math-container">$r=s$</span>, it is also known that <span class="math-container">$$\sum_{k=0}^n (-1)^k {r \choose k} {r \choose n-k} = (-1)^{n/2} {r \choose n/2} \quad [n \mathrm{\ is\ even}]$$</span> When <span class="math-container">$r\not= s$</span>, is there a known closed form for the alternating Vandermonde sum? <span class="math-container">$$\sum_{k=0}^n (-1)^k {r \choose k} {s \choose n-k}$$</span></p>
Robert Israel
8,508
<p>According to Maple, the answer is ${s\choose n}{{}_2F_1(-r,-n;\,s-n+1;\,-1)}$ (of course we must assume $s \ge n$ for this to make sense).</p>
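Since the upper parameters $-r$ and $-n$ are nonpositive integers, Maple's ${}_2F_1$ is a terminating series, so it can be cross-checked against the sum directly for small integers. The helper names below are invented for this check, and exact rationals avoid rounding:

```python
from math import comb
from fractions import Fraction

def poch(a, k):
    # rising factorial (a)_k = a (a+1) ... (a+k-1)
    p = Fraction(1)
    for i in range(k):
        p *= a + i
    return p

def alt_vandermonde(r, s, n):
    return sum((-1) ** k * comb(r, k) * comb(s, n - k) for k in range(n + 1))

def hyp_form(r, s, n):
    # C(s,n) * 2F1(-r, -n; s-n+1; -1); terminating, and s >= n keeps the
    # denominator Pochhammer symbols nonzero
    total = Fraction(0)
    for k in range(min(r, n) + 1):
        total += poch(-r, k) * poch(-n, k) / poch(s - n + 1, k) \
                 * Fraction(-1) ** k / poch(1, k)
    return comb(s, n) * total

for r in range(6):
    for s in range(6):
        for n in range(s + 1):
            assert alt_vandermonde(r, s, n) == hyp_form(r, s, n)
```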
3,238,914
<p>When is the <a href="https://en.wikipedia.org/wiki/Euler_line" rel="nofollow noreferrer">Euler line</a> parallel to a triangle's side?</p> <p>I have found that a triangle with angles <span class="math-container">$45^\circ$</span> and <span class="math-container">$\arctan2$</span> is a case.</p> <p>Is there any other case? <a href="https://i.stack.imgur.com/1KjZe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1KjZe.png" alt="Image"></a></p>
Blue
409
<p>The figure shows <span class="math-container">$\triangle ABC$</span> with orthocenter <span class="math-container">$P$</span>, circumcenter <span class="math-container">$Q$</span>, and circumradius <span class="math-container">$r$</span>. For a non-equilateral triangle, the Euler line is determined by <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>.</p> <p><a href="https://i.stack.imgur.com/4eEqY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4eEqY.png" alt="enter image description here"></a></p> <p>For the Euler line to be parallel to side <span class="math-container">$\overline{AB}$</span>, the distances from <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> to that side must be equal. Simple right-triangle trigonometry in the figure gives the distances, so we have</p> <p><span class="math-container">$$2\cos A \cos B = \cos C \tag{1}$$</span></p> <p>Expanding <span class="math-container">$\cos C = \cos(\pi-A-B) = -\cos(A+B) = -\cos A\cos B + \sin A \sin B$</span>, we see that <span class="math-container">$(1)$</span> gives <span class="math-container">$$3\cos A \cos B = \sin A\sin B \tag{2}$$</span></p> <p>We "know" that parallelism cannot occur if any angle is right; therefore, we may assume that no cosine vanishes. Dividing through by <span class="math-container">$\cos A\cos B$</span>, then, we have this condition for the Euler line to be parallel to <span class="math-container">$\overline{AB}$</span>.</p> <blockquote> <p><span class="math-container">$$\tan A\tan B = 3 \tag{$\star$}$$</span></p> </blockquote> <p>(I'm still looking for the best way to achieve <span class="math-container">$(\star)$</span> directly, without algebraic manipulation from <span class="math-container">$(1)$</span>.)
</p> <hr> <p>To check OP's example, we can relabel vertices and consider angles <span class="math-container">$$A = 45^\circ \qquad B = 180^\circ-45^\circ-\arctan 2 = 135^\circ - \arctan 2$$</span> Then <span class="math-container">$$\tan A= 1 \qquad \tan B= \frac{\tan 135^\circ-\tan(\arctan 2)}{1+\tan 135^\circ \tan(\arctan 2)} = \frac{-1-2}{1+(-2)} = 3$$</span> and we see that <span class="math-container">$(\star)$</span> holds. <span class="math-container">$\square$</span> </p>
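A coordinate check of $(\star)$ (a small script written for this illustration; it places $A=(0,0)$, $B=(1,0)$ and assumes both base angles are acute):

```python
def euler_line_heights(tanA, tanB):
    """Triangle with A=(0,0), B=(1,0) and given base-angle tangents.
    Returns (circumcenter_y, orthocenter_y); the Euler line is parallel
    to AB exactly when the two heights coincide."""
    cx = tanB / (tanA + tanB)            # apex C = (cx, cy), intersection of
    cy = tanA * tanB / (tanA + tanB)     # the two cevian lines through A and B
    circ_y = (cx * cx + cy * cy - cx) / (2 * cy)   # from |O-A| = |O-C|, O_x = 1/2
    orth_y = cx * (1 - cx) / cy                    # altitude from A hits x = cx here
    return circ_y, orth_y

# The OP's triangle: angles 45° and 135° - arctan 2, i.e. tan A = 1, tan B = 3
q, h = euler_line_heights(1.0, 3.0)
assert abs(q - h) < 1e-12        # parallel, since tan A * tan B = 3

# A triangle with tan A * tan B != 3 is not parallel
q, h = euler_line_heights(1.0, 2.0)
assert abs(q - h) > 1e-6
```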
2,930,292
<p>I'm currently learning the unit circle definition of trigonometry. I have seen a graphical representation of all the trig functions at <a href="https://www.khanacademy.org/math/trigonometry/unit-circle-trig-func/unit-circle-definition-of-trig-functions/a/trig-unit-circle-review" rel="nofollow noreferrer">khan academy</a>. </p> <p><img src="https://i.stack.imgur.com/zp5WB.png" alt="enter image description here"></p> <p>I understand how to calculate all the trig functions and what they represent. Graphically, I only understand why sin and cos is drawn the way it is. I'm having trouble understand why tangent, cotangent, secant, cosecant are drawn the way they are. </p> <p>Can someone please provide me with some intuitions.</p>
KM101
596,598
<p>If you complete the diagram with all the right triangles, you would notice all of them are actually similar, meaning pairs of corresponding sides produce equivalent ratios. I’m not exactly sure how to explain this considering there aren’t any points in your diagram, so here’s a link to the proof I used.<br> <a href="https://i.stack.imgur.com/C57JO.jpg" rel="nofollow noreferrer">Trigonometric Ratios on a Unit Circle</a></p>
2,845,085
<p>Find $f(5)$, if the graph of the quadratic function $f(x)=ax^2+bx+c$ intersects the ordinate axis at point $(0;3)$ and its vertex is at point $(2;0)$</p> <p>So I used the vertex form, $y=(x-2)^2+3$, got the quadratic equation and then put $5$ instead of $x$ to get the answer, but it's wrong. I think I shouldn't have added $3$ in the vertex form but I don't know how else I can solve this</p>
Matheus Nunes
523,585
<p>Intersecting the ordinate axis at $(0;3)$ implies: $$3=f(0)=a\cdot 0^2+b\cdot 0+c=c$$ Since the vertex abscissa is $x=\frac{-b}{2a}$, you get $$2=-b/2a \implies b=-4a$$ Evaluating $f(2)=0$ with the form given in the problem, you will have $$0=a\cdot 2^2+2b+3=4a+2b+3$$ Substituting $b=-4a$ gives $$0=-4a+3 \implies a=\frac{3}{4},\qquad b=-4a=-3$$ Then, $$f(x)=\frac{3}{4}x^2-3x+3$$ $$f(5)=\frac{75}{4}-15+3=\frac{27}{4}$$</p>
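The coefficients can be double-checked by solving the three conditions $f(0)=3$, $f(2)=0$, $f'(2)=0$ in exact arithmetic (a quick sketch):

```python
from fractions import Fraction

# f(x) = a x^2 + b x + c; conditions: f(0) = 3, f(2) = 0, f'(2) = 0
c = Fraction(3)
# f'(2) = 4a + b = 0 and f(2) = 4a + 2b + c = 0; subtracting gives b + c = 0
b = -c
a = -b / 4
f = lambda x: a * x * x + b * x + c
assert (a, b, c) == (Fraction(3, 4), -3, 3)
assert f(0) == 3 and f(2) == 0 and 4 * a + b == 0
assert f(5) == Fraction(27, 4)          # = 6.75
```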
1,057
<p>Suppose a finite group has the property that for every $x, y$, it follows that </p> <p>\begin{equation*} (xy)^3 = x^3 y^3. \end{equation*}</p> <p>How do you prove that it is abelian?</p> <hr> <p>Edit: I recall that the correct exercise needed in addition that the order of the group is not divisible by 3.</p>
Jorge Miranda
446
<p>On the other hand, if the order of your group is not a multiple of 3 then it must be abelian!</p> <p>You can read a proof <a href="http://groupprops.subwiki.org/wiki/Cube_map_is_endomorphism_iff_abelian_%28if_order_is_not_a_multiple_of_3%29" rel="noreferrer">here</a></p>
879,886
<p>If one number is thrice the other and their sum is $16$, find the numbers.</p> <p>I tried, Let the first number be $x$ and the second number be $y$ Acc. to question </p> <p>$$ \begin{align} x&amp;=3y &amp;\iff x-3y=0 &amp;&amp;(1)\\ x&amp;=16-3y&amp;&amp;&amp;(2) \end{align} $$</p>
Disha Sidhwani
665,944
<p>Let the first number be <span class="math-container">$x$</span> and the second be <span class="math-container">$y$</span>, where the first is thrice the second.</p> <p>According to the question:</p> <p><span class="math-container">$x=3y\ldots \textrm{equation 1}$</span></p> <p><span class="math-container">$x+y=16\ldots\textrm{equation 2}$</span></p> <p>By the elimination method:</p> <pre><code>  x - 3y =   0
-(x +  y) = -16
_______________
     -4y  = -16
</code></pre> <p>Dividing the equation by <span class="math-container">$(-4)$</span> gives <span class="math-container">$y=4$</span>.</p> <p>Put <span class="math-container">$y=4$</span> into the first equation: <span class="math-container">$x=3\cdot 4\Rightarrow x=12$</span></p> <p>I hope this will help you. Thank you!</p>
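The elimination above amounts to a 2×2 linear solve, which can be verified mechanically (`solve2` is a small helper written just for this check):

```python
from fractions import Fraction

def solve2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1, a2*x + b2*y = c2 (Cramer-style elimination);
    assumes the determinant is nonzero."""
    det = a1 * b2 - a2 * b1
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

# x - 3y = 0 and x + y = 16
x, y = solve2(1, -3, 0, 1, 1, 16)
assert (x, y) == (12, 4)
assert x == 3 * y and x + y == 16       # thrice the other, sum 16
```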
2,918,091
<p>Suppose I want to find the locus of the point $z$ satisfying $|z+1| = |z-1|$</p> <p>Let $z = x+iy$</p> <p>$\Rightarrow \sqrt{(x+1)^2 + y^2} = \sqrt{(x-1)^2 + y^2}$ <br/> $\Rightarrow (x+1)^2 = (x-1)^2$ <br/> $\Rightarrow x+1 = x-1$ <br/> $\Rightarrow 1= -1$ <br/> $\Rightarrow$ Loucus does not exist</p> <p>Is my approach incorrect? The answer I was given was that the y-axis describes the locus.</p> <p>Any help would be appreciated.</p>
Peter Szilas
408,605
<p>Consider the points $A(-1,0)$ and $B(1,0)$ in the complex plane.</p> <p>Let $z$, call it $C$, be any point in the complex plane.</p> <p>In $\triangle ABC$, length $BC = |z-1|$ and length $AC = |z+1|$.</p> <p>Since $|z-1|=|z+1|$, $\triangle ABC$ is isosceles with base $AB$.</p> <p>Hence the locus of $z$ is the perpendicular bisector of $AB$, i.e. $z$ lies on the $y$-axis.</p>
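A quick check with complex arithmetic (illustrative only): points on the imaginary axis satisfy the equation, and a point off it does not.

```python
# Points z = it on the y-axis are equidistant from 1 and -1
for t in (-3.0, -0.5, 0.0, 2.0):
    z = complex(0.0, t)
    assert abs(z + 1) == abs(z - 1)     # both equal sqrt(1 + t^2)

# A point off the y-axis fails
z = complex(1.0, 2.0)
assert abs(z + 1) != abs(z - 1)
```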
447,484
<p>I am just learning about differential forms, and I had a question about employing Green's theorem to calculate area. Generalized Stokes' theorem says that $\int_{\partial D}\omega=\int_D d\omega$. Let's say $D$ is a region in $\mathbb{R}^2$. The familiar formula to calculate area is $\iint_D 1 dxdy = \frac{1}{2}\int_{\partial D}x\,dy - y\,dx$, and indeed, $d(x\,dy - y\,dx)=2\,dx\,dy$. But why aren't we allowed to simply use $\int_{\partial D}x\,dy$? Doesn't $d(x\,dy)=d(x)dy = (1\,dx + 0\,dy)dy = dx\,dy$?</p>
Bill Cook
16,423
<p>You <strong>can</strong> use $\int_{\partial D} x\,dy$ to compute area in this context. The "familiar formula" does have a more symmetric look to it -- maybe that's why you find it more familiar. </p> <p>There are infinitely many formulas like this that work. In general you need two functions $P$ and $Q$ such that $Q_x-P_y=1$. Then $\int_{\partial D} P\,dx+Q\,dy$ will compute the area.</p> <p>$P=-y/2$ and $Q=x/2$ gives your familiar formula.</p> <p>$P=0$ and $Q=x$ is the formula in question.</p> <p>One could also use $P=-y$ and $Q=0$ (i.e. $\int_{\partial D} -y\,dx$) to compute the area. </p> <p>Those 3 choices are standard ones presented in traditional multivariate calculus texts. But of course there are infinitely many other choices as well.</p>
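All three choices of $P$ and $Q$ can be compared numerically on the unit disk, whose area is $\pi$ (a rough midpoint-rule script; the step count is an arbitrary choice):

```python
import math

def bdry_integral(P, Q, n=1000):
    # midpoint rule for the line integral of P dx + Q dy around the unit circle
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = math.cos(t), math.sin(t)
        total += (P(x, y) * (-math.sin(t)) + Q(x, y) * math.cos(t)) * h
    return total

area = math.pi
# three choices with Q_x - P_y = 1, all computing the same area
assert abs(bdry_integral(lambda x, y: 0.0, lambda x, y: x) - area) < 1e-9
assert abs(bdry_integral(lambda x, y: -y, lambda x, y: 0.0) - area) < 1e-9
assert abs(bdry_integral(lambda x, y: -y / 2, lambda x, y: x / 2) - area) < 1e-9
```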
1,644,905
<p>How to simplify the following equation:</p> <p>$$\sin(2\arccos(x))$$ I am thinking about:</p> <p>$$\arccos(x) = t$$</p> <p>Then we have:</p> <p>$$\sin(2t) = 2\sin(t)\cos(t)$$</p> <p>But then how to proceed?</p>
KR136
186,017
<p>So, you have reasoned that the expression is equivalent to: </p> <p>$2\sin(\arccos(x))\cos(\arccos(x))$</p> <p>Because $\cos(\arccos(x)) = x$, this is equivalent to </p> <p>$2x\sin(\arccos(x))$</p> <p>$\sin(\arccos(x))$ is in fact equivalent to $\sqrt{1-x^2}$ by identity. This can be shown by the Pythagorean Theorem.</p> <p>The proof of the identity goes as follows: It is well known that $\sin^2(x)+\cos^2(x)=1$</p> <p>Replacing $x$ with $\arccos(x)$, we have:</p> <p>$\sin^2(\arccos(x))+ \cos^2(\arccos(x))=1$</p> <p>Because $\cos$ and $\arccos$ are inverse functions, $\cos^2(\arccos(x))=x^2$</p> <p>Therefore $\sin^2(\arccos(x))=1-x^2$, and, since $\arccos(x)\in[0,\pi]$ makes the sine nonnegative, $\sin(\arccos(x))=\sqrt{1-x^2}$</p> <p>Therefore the answer is $2x\sqrt{1-x^2}$. </p>
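A numerical spot check of the final identity (nothing deep; it just confirms the sign convention across $(-1,1)$):

```python
import math

# sin(2 arccos x) = 2x sqrt(1 - x^2); sin(arccos x) >= 0 since arccos maps into [0, pi]
for i in range(-99, 100):
    x = i / 100
    assert abs(math.sin(2 * math.acos(x)) - 2 * x * math.sqrt(1 - x * x)) < 1e-12
```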
3,873,521
<p>Going through Christof Paar's book on cryptography. In his chapter on DHKE, he has the following</p> <p><a href="https://i.stack.imgur.com/xQQPc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xQQPc.png" alt="enter image description here" /></a></p> <p>The book doesn't seem to have the proof for <span class="math-container">$a^{|G|} = 1 $</span> Also how is it a generalization of Fermat's Little Theorem for Cyclic Groups?</p>
halrankard2
819,436
<p>You've worded the question in a way that suggests the book <em>does</em> have the proof of part 2. But, just in case, the proof is that any <span class="math-container">$a\in G$</span> generates a (cyclic) subgroup of <span class="math-container">$G$</span> of size <span class="math-container">$ord(a)$</span> and, moreover, Lagrange's Theorem implies that the size of any subgroup divides the size of the whole group. The proof of Lagrange's Theorem is that if <span class="math-container">$H$</span> is a subgroup of <span class="math-container">$G$</span>, then <span class="math-container">$G$</span> is partitioned into the left cosets of <span class="math-container">$H$</span>, each of which has the same size as <span class="math-container">$H$</span>, and so <span class="math-container">$|H|$</span> divides <span class="math-container">$|G|$</span>.</p> <p>Now part 1 follows immediately. If <span class="math-container">$m=ord(a)$</span> then <span class="math-container">$|G|=mn$</span> for some <span class="math-container">$n$</span> by 2. So <span class="math-container">$a^{|G|}=a^{mn}=(a^m)^n=1^n=1$</span>.</p> <p>Fermat's little theorem says that for any prime <span class="math-container">$p$</span> and any integer <span class="math-container">$a$</span> not divisible by <span class="math-container">$p$</span> we have <span class="math-container">$a^{p-1}\equiv 1$</span> (mod <span class="math-container">$p$</span>). Since it is enough to consider the remainder of <span class="math-container">$a$</span> mod <span class="math-container">$p$</span>, we may view <span class="math-container">$a$</span> as an element of the group <span class="math-container">$G=\{1,\ldots,p-1\}$</span> under multiplication mod <span class="math-container">$p$</span>. Now Fermat says that <span class="math-container">$a^{p-1}=1$</span> for any <span class="math-container">$a\in G$</span>. This is a special case of part 1 of your theorem.</p>
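Both statements are easy to probe numerically for the unit groups $(\mathbb{Z}/n\mathbb{Z})^\times$ (a small script; `units_mod` is an ad hoc helper for this check):

```python
from math import gcd

def units_mod(n):
    # the multiplicative group of units mod n
    return [a for a in range(1, n) if gcd(a, n) == 1]

# a^|G| = 1 in G = (Z/nZ)*
for n in range(2, 60):
    G = units_mod(n)
    for a in G:
        assert pow(a, len(G), n) == 1

# Fermat's little theorem is the special case n = p prime, |G| = p - 1
for p in (5, 7, 11, 13):
    assert all(pow(a, p - 1, p) == 1 for a in range(1, p))
```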
3,873,521
<p>Going through Christof Paar's book on cryptography. In his chapter on DHKE, he has the following</p> <p><a href="https://i.stack.imgur.com/xQQPc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xQQPc.png" alt="enter image description here" /></a></p> <p>The book doesn't seem to have the proof for <span class="math-container">$a^{|G|} = 1 $</span> Also how is it a generalization of Fermat's Little Theorem for Cyclic Groups?</p>
Oliver Kayende
704,766
<p>Each translation <span class="math-container">$gH:=\{gh:h\in H\}$</span> of a <span class="math-container">$G$</span> subgroup <span class="math-container">$H$</span> is called a <span class="math-container">$\it{left}\;H\;coset$</span>. The left <span class="math-container">$H$</span> cosets all have the same size <span class="math-container">$|H|$</span> and <span class="math-container">$$g'\in gH\implies\exists\;h\in H\;\;g'H=ghH=g(hH)=gH$$</span> Therefore <span class="math-container">$gH$</span> is the unique left coset containing <span class="math-container">$g$</span> and thus the family <span class="math-container">$(G:H):=\{gH:g\in G\}$</span> is a partition of <span class="math-container">$G$</span> into parts of equal size. <span class="math-container">$$\therefore\;|H|\;\Big\vert\;|G|=\sum_{X\in(G:H)}|X|\;\;\;\therefore\;\text{ord}(a)=|\langle a\rangle|\;\Big\vert\;|G|\;\;\;\therefore\;a^{|G|}\in\langle a^{\text{ord}(a)}\rangle=\langle\mathbf 1\rangle\;\;\;\therefore\;a^{|G|}=\mathbf 1$$</span></p>
3,014,670
<p>I don't have any experience working with radicals, but I'm working on a function that requires products of nth roots to be positive or negative, depending on the number of negative factors. </p> <p><em>I've done some initial research, and reviews these Stack questions: <a href="https://math.stackexchange.com/questions/26363/square-roots-positive-and-negative">Square roots — positive and negative</a> and <a href="https://math.stackexchange.com/questions/437908/the-product-rule-of-square-roots-with-negative-numbers">The Product Rule of Square Roots with Negative Numbers</a> but I couldn't find the information I was seeking (or am not fully understanding the answers.)</em></p> <p>Are the following expressions true? If not, how can I produce the those results?</p> <p><span class="math-container">$\sqrt[2]{1*-1} = -1$</span></p> <p><span class="math-container">$\sqrt[3]{1*1*-1} = -1$</span></p> <p><span class="math-container">$\sqrt[3]{1*-1*-1} = 1$</span></p> <hr> <p>[update] This is what the function does:</p> <p><span class="math-container">$\sqrt[n]{\overline{\Delta_1}*\overline{\Delta_2} *...*\overline{\Delta_n}} \text{ }*\text{ } \frac{\overline{\Delta_1}*\overline{\Delta_2} *...*\overline{\Delta_n}}{\Delta_1*\Delta_2*...*\Delta_n}$</span></p> <p>such that if there are an odd number of negative factors, the product is negative, otherwise positive.</p> <ul> <li>Is there a more compact way to express this?</li> </ul> <p>also, any tips on notation are appreciated. </p>
John L Winters
603,347
<p>All rational people are not lakers.</p> <p>The opposite converse is equally valid.</p> <p>The converse that you posit is not equally valid.</p>
3,014,670
<p>I don't have any experience working with radicals, but I'm working on a function that requires products of nth roots to be positive or negative, depending on the number of negative factors. </p> <p><em>I've done some initial research, and reviews these Stack questions: <a href="https://math.stackexchange.com/questions/26363/square-roots-positive-and-negative">Square roots — positive and negative</a> and <a href="https://math.stackexchange.com/questions/437908/the-product-rule-of-square-roots-with-negative-numbers">The Product Rule of Square Roots with Negative Numbers</a> but I couldn't find the information I was seeking (or am not fully understanding the answers.)</em></p> <p>Are the following expressions true? If not, how can I produce the those results?</p> <p><span class="math-container">$\sqrt[2]{1*-1} = -1$</span></p> <p><span class="math-container">$\sqrt[3]{1*1*-1} = -1$</span></p> <p><span class="math-container">$\sqrt[3]{1*-1*-1} = 1$</span></p> <hr> <p>[update] This is what the function does:</p> <p><span class="math-container">$\sqrt[n]{\overline{\Delta_1}*\overline{\Delta_2} *...*\overline{\Delta_n}} \text{ }*\text{ } \frac{\overline{\Delta_1}*\overline{\Delta_2} *...*\overline{\Delta_n}}{\Delta_1*\Delta_2*...*\Delta_n}$</span></p> <p>such that if there are an odd number of negative factors, the product is negative, otherwise positive.</p> <ul> <li>Is there a more compact way to express this?</li> </ul> <p>also, any tips on notation are appreciated. </p>
user
505,767
<p>Indicating with <span class="math-container">$L$</span> the set of lakers <span class="math-container">$l$</span> and with <span class="math-container">$\Pi$</span> the set of irrational people <span class="math-container">$\pi$</span>, the first statement is equivalent to</p> <p><span class="math-container">$$\forall \pi\in \Pi \quad \pi\in L$$</span></p> <p>the second one is</p> <p><span class="math-container">$$\forall l\in L\quad l\in \Pi $$</span></p> <p>which is not equivalent to the first one; indeed, from this last one we could also have <span class="math-container">$\pi \not \in L$</span> for some <span class="math-container">$\pi$</span>.</p>
3,873,433
<p>Is it true that for all square, complex matrices A, B <span class="math-container">$$ \left\|AB\right\|_p\leq\left\|A\right\|\left\|B\right\|_p$$</span></p> <p>where <span class="math-container">$\left\| .\right\|_p$</span> refers to the Schatten p-norm and <span class="math-container">$\left\| .\right\|$</span> refers to the spectral norm? How would I prove this?</p>
user1551
1,551
<p>One can use the minimax principle for singular values to prove that <span class="math-container">$\sigma_k(AB)\le\sigma_1(A)\sigma_k(B)$</span> for each <span class="math-container">$k$</span>. The inequality in question now follows directly.</p>
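Spelling out the estimate for finite $p$ (a sketch; the first inequality is the minimax bound $\sigma_k(AB)\le\sigma_1(A)\,\sigma_k(B)$, together with $\sigma_1(A)=\|A\|$):

```latex
\|AB\|_p^p=\sum_k \sigma_k(AB)^p
\le \sum_k \bigl(\sigma_1(A)\,\sigma_k(B)\bigr)^p
=\|A\|^p \sum_k \sigma_k(B)^p
=\|A\|^p\,\|B\|_p^p ,
```

and taking $p$-th roots gives $\|AB\|_p\le\|A\|\,\|B\|_p$; for $p=\infty$ the claim reduces to submultiplicativity of the spectral norm.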
3,181,696
<p>I am attempting to solve the integral of the following...</p> <p><span class="math-container">$$\int_{0}^{2 \pi}\int_{0}^{\infty}e^{-r^2}\,r\,dr\,d\Theta $$</span></p> <p>So I do the following step...</p> <p><span class="math-container">$$=2 \pi\int_{0}^{\infty}e^{-r^2}\,r\,dr$$</span></p> <p>but then the next step is to substitute <span class="math-container">$s = -r^2$</span> which results in...</p> <p><span class="math-container">$$=2 \pi\int_{- \infty}^{0}\frac{1}{2}e^{s}ds$$</span></p> <p>The limits of integration are reversed now and the <span class="math-container">$r$</span> somehow results in <span class="math-container">$1/2$</span>.</p> <p>Can someone explain why this works? Why did substituting cause the limits to change and result in the integration above?</p>
Kavi Rama Murthy
142,385
<p><span class="math-container">$s=-r^{2}$</span> gives <span class="math-container">$ds=-2rdr$</span> so <span class="math-container">$dr =-\frac 1 {2r} ds$</span>. Also, as <span class="math-container">$r$</span> increases from <span class="math-container">$0$</span> to <span class="math-container">$\infty$</span>, <span class="math-container">$s$</span> decreases from <span class="math-container">$0$</span> to <span class="math-container">$-\infty$</span>.</p>
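One can confirm the value numerically: the radial integral $\int_0^\infty r e^{-r^2}\,dr$ equals $\tfrac12$, matching $\int_{-\infty}^0 \tfrac12 e^s\,ds$ (a crude midpoint-rule check; the cutoff $R=10$ leaves a neglected tail of order $e^{-100}$):

```python
import math

n, R = 100000, 10.0
h = R / n
total = 0.0
for i in range(n):
    r = (i + 0.5) * h
    total += r * math.exp(-r * r) * h   # integrand r e^{-r^2}
assert abs(total - 0.5) < 1e-8
```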
3,181,696
<p>I am attempting to solve the integral of the following...</p> <p><span class="math-container">$$\int_{0}^{2 \pi}\int_{0}^{\infty}e^{-r^2}\,r\,dr\,d\Theta $$</span></p> <p>So I do the following step...</p> <p><span class="math-container">$$=2 \pi\int_{0}^{\infty}e^{-r^2}\,r\,dr$$</span></p> <p>but then the next step is to substitute <span class="math-container">$s = -r^2$</span> which results in...</p> <p><span class="math-container">$$=2 \pi\int_{- \infty}^{0}\frac{1}{2}e^{s}ds$$</span></p> <p>The limits of integration are reversed now and the <span class="math-container">$r$</span> somehow results in <span class="math-container">$1/2$</span>.</p> <p>Can someone explain why this works? Why did substituting cause the limits to change and result in the integration above?</p>
DINEDINE
506,164
<p>Do you really need substitution? We already know an antiderivative of <span class="math-container">$re^{-r^2}$</span>: it is <span class="math-container">$-\frac{e^{-r^2}}{2}$</span>.</p>
57,232
<p>Given a Heegaard splitting of genus $n$, and two distinct orientation preserving homeomorphisms, elements of the mapping class group of the genus $n$ torus, is there a method which shows whether or not these homeomorphisms, when used to identify the boundaries of the pair of handlebodies, will produce the same $3$-manifold?</p>
Peter Humphries
3,803
<p>Expanding on Matt's answer, it is possible to show without too much difficulty (see <a href="http://www-personal.umich.edu/~hlm/math775/ch18.pdf">here</a>, Exercise 3 of section 18.2.1) that if $(a,q) = 1$, then $$\sum_{n \leq x}{\mu(n) e^{2\pi i an/q}} = \sum_{d \mid q} \frac{\mu(d)}{\varphi(q/d)} \sum_{\chi \pmod{q/d}} \tau(\overline{\chi}) \chi(a) M\left(\frac{x}{d}; \chi \chi_{0(d)}\right),$$ where the inner sum is over all Dirichlet characters modulo $q/d$, $\tau(\overline{\chi})$ is the Gauss sum of $\overline{\chi}$, $\chi_{0(d)}$ is the principal character modulo $d$, and $$M(x ; \chi) = \sum_{n \leq x}{\mu(n) \chi(n)}.$$ A pretty simple calculation (using Euler products and the fact that $\mu(n) \chi(n)$ is multiplicative) shows that for $\Re(s) &gt; 1$, $$\frac{1}{L(s,\chi)} = s \int_{1}^{\infty} \frac{M(x ; \chi)}{x^s} \: \frac{dx}{x}.$$ So one can then use Perron's formula to invert this relationship, and then apply the classic method of pushing the integral to the left of the line $\Re(s) &gt; 1$ and use estimates on $1/L(s,\chi)$ in the zero-free region to obtain the desired bound on $$\sum_{n \leq x}\mu(n) e^{2\pi i an/q}.$$ Though this doesn't seem to give a uniform bound, Montgomery and Vaughan's notes outline (Q1-Q5 of the same section) how to obtain uniform bounds in $\alpha$.</p> <p>Note also that it is quite easy to show that for every $\varepsilon &gt; 0$, $$\sum_{n \leq x}{\mu(n) e^{2\pi i n \alpha}} = O(x^{1/2 + \varepsilon})$$ for <em>almost every</em> $\alpha \in \mathbb{R}$; this is just a simple application of Carleson's theorem on the pointwise almost-everywhere convergence of an $L^2$-periodic function to its Fourier series. Unfortunately, this is not quantitative; we cannot say that this applies to rational $\alpha$ (otherwise we would be able to prove GRH).</p>
3,956,913
<p>For this equation:</p> <blockquote> <p><span class="math-container">${ (x^2 - 7x + 11)}^{x^2 - 13x +42}=1$</span></p> </blockquote> <p>The integer solutions for <span class="math-container">$x$</span> found by WolframAlpha using the inverse (logarithmic) function are <span class="math-container">$ 2 , 5 , 6 , 7 .$</span> Why can it not find the other solutions, 3 and 4? Is there a foolproof method to find all the solutions at once?</p>
trancelocation
467,003
<p>You can use the binomial theorem but in a slightly adapted manner.</p> <p>Consider</p> <p><span class="math-container">$$\left(\frac{(n+1)^{\frac 1{n+1}}}{n^{\frac 1n}}\right)^{n(n+1)}=\frac 1n\left(1+\frac 1n\right)^n$$</span></p> <p>Now use the binomial theorem on <span class="math-container">$\left(1+\frac 1n\right)^n$</span> and note that for <span class="math-container">$n\geq 3$</span> and <span class="math-container">$2\leq k\leq n$</span> you have</p> <p><span class="math-container">$$\binom{n}{k}\frac 1{n^k}&lt;\frac 1{k!}\leq\frac 1{2^{k-1}}$$</span> Summing up the binomial expansion and using these estimates, you get for <span class="math-container">$n\geq 3$</span></p> <p><span class="math-container">$$\left(1+\frac 1n\right)^n&lt;3 \stackrel{n\geq 3}{\Rightarrow} \frac 1n \left(1+\frac 1n\right)^n&lt;1$$</span></p> <p>Checking the cases <span class="math-container">$n=1$</span> and <span class="math-container">$n=2$</span> you get <span class="math-container">$\boxed{n=3}$</span> as the maximum.</p>
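Numerically, both the bound and the conclusion check out (an illustrative sketch, not part of the proof):

```python
# (1 + 1/n)^n < 3 for all n, hence (1/n)(1 + 1/n)^n < 1 once n >= 3,
# which is the comparison between (n+1)^(1/(n+1)) and n^(1/n)
for n in range(1, 2000):
    assert (1 + 1 / n) ** n < 3
    if n >= 3:
        assert (1 / n) * (1 + 1 / n) ** n < 1

# so n^(1/n) increases up to n = 3 and decreases afterwards
seq = [n ** (1 / n) for n in range(1, 50)]
assert max(seq) == seq[2]               # the n = 3 term (index 2)
```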
798,307
<p>I have a linear functional from the space of nxn matrices over a field F. The functional satisfies $f(A) = f(PAP^{-1})$ for all invertible $P$ and $A$ an nxn matrix. I'm trying to show that $f(A) = \lambda tr(A)$ for some constant $\lambda$.</p> <p>So far I have:</p> <ul> <li>The linear functionals have basis $f_{E_{i,j}}(A) = tr({E_{i,j}}^t A)$ where $E_{i,j}$ is zero everywhere except in the $i,j$ position.</li> <li>If $PAP^{-1} = A$ for all invertible $P$, then A is a multiple of the identity matrix.</li> </ul> <p>Thanks for any help </p>
EPS
133,563
<p>I write this as a separate answer, because the method of solution is different from my previous post.</p> <p>By your first observation, or simply using the Riesz Representation Theorem, one can deduce that there exists a matrix $B$ such that $f(X)=\text{tr }(BX)$. Since $f(X)=f(PXP^{-1})$ and $\text{ tr} AB=\text{tr } BA$, one concludes that $$f(X)=\text{tr } (BPXP^{-1})=\text{tr }(P^{-1}BP X),$$ and consequently without loss of generality we can (and we will) assume that $B$ is an upper triangular matrix. The diagonal entries of $B$ are equal because otherwise they can be permuted by means of conjugation by some $P$ and this changes the value of $\text{tr }BX$ for an appropriate choice of diagonal matrix $X$. </p> <p>The entries of $B$ above the diagonal have to be zero, because otherwise we can fix some $k\ne l$ such that $b_{kl}\ne 0$. Now let the nonzero entries of the $n\times n$ matrix $X=[x_{ij}]$ be as follows: $x_{ii}=i$ and $x_{lk}=1$. This leads to a contradiction, because $$f(X)=\text{tr }(BX)\ne \text{tr }(B\,\text{diag} (1, \dots, n))=f(\text{diag} (1, \dots, n))$$ whereas $X$ and the diagonal matrix $\text{diag} (1, \dots, n)$ are similar.</p> <p>Putting these together, we conclude that $B$ is a scalar matrix.</p>
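The second paragraph's argument, made concrete for $n=2$ (the matrices below are one explicit instance, chosen just for this illustration):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr(A):
    return sum(A[i][i] for i in range(len(A)))

B = [[0, 1], [0, 0]]                     # strictly upper triangular, b_{12} = 1
D = [[1, 0], [0, 2]]                     # diag(1, 2)
X = [[1, 0], [1, 2]]                     # x_{ii} = i and x_{21} = 1
P, Pinv = [[1, 0], [-1, 1]], [[1, 0], [1, 1]]
assert matmul(matmul(P, D), Pinv) == X   # X and D are similar: X = P D P^{-1}
# yet tr(BX) != tr(BD), so f = tr(B . ) cannot be conjugation-invariant
assert tr(matmul(B, X)) == 1 and tr(matmul(B, D)) == 0
```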
1,419,483
<p>Can anyone please help me in solving this integration problem $\int \frac{e^x}{1+ x^2}dx \, $?</p> <p>Actually, I am getting stuck at one point while solving this problem via integration by parts.</p>
Yash
246,871
<p><a href="http://www.wolframalpha.com/input/?i=integrate%28exp%28x%29%2F%281%2Bx*x%29%29" rel="nofollow">http://www.wolframalpha.com/input/?i=integrate%28exp%28x%29%2F%281%2Bx*x%29%29</a></p> <p>As the link shows, WolframAlpha expresses the result via the exponential integral, so the antiderivative is not an elementary function and cannot be obtained by integration by parts.</p>
443,475
<p>I am reading some geometric algebra notes. They all started from some axioms. But I am still confused on the motivation to add inner product and wedge product together by defining $$ ab = a\cdot b + a \wedge b$$ Yes, it can be done like complex numbers, but what will we lose if we deal with inner product and wedge product separately? What are some examples to show the advantage of geometric product vs other methods?</p>
Muphrid
45,296
<p>It is, perhaps, misleading to even call this addition. It is no more (and no less) addition than it is addition to add $5 e_1$ and $3 e_2$. You might say, "Of course we can add those. They're members of the same vector space; you just add corresponding components."</p> <p>Well, we can do the same thing with multivectors. You just have $2^n$ components corresponding to $2^n$ basis blades. In this sense, the addition operations we're doing are actually quite pedestrian. The problem with viewing it as a $2^n$ dimensioned vector space is that you no longer have the clear geometric interpretation of elements, which is why this picture is often avoided. Still, you could easily say that all the geometric product is doing is giving us a meaningful multiplication operation between these vector space elements.</p> <p>You ask about "motivation" for adding two disparate things together. I don't know if that's the right word. I'm no authority on the history, but I think you need to turn the picture on its head. It's much easier to <em>start</em> with the axioms of the geometric product and explore the consequences and how those consequences are useful.</p> <p>The geometric product allows us quite a bit of compactness of notation. For example, the following integrals come up often in discussions of the fundamental theorem of calculus:</p> <p>$$\oint G(r-r') \, dS' \, A(r') = \int_V \dot G(r-r') \, dV' \cdot \dot \nabla' A(r') + \int_V G(r-r') \, dV' \cdot \nabla' A(r')$$</p> <p>when $A$ is, for example, a vector field with nonzero curl, there's actually quite a lot going on in the LHS than you might think. Without the geometric product's ability to combine dot and wedge products, we would have to do something like</p> <p>$$\langle G (dS') A \rangle_2 = (G \cdot dS') \wedge A + (G \wedge dS') \cdot A$$</p> <p>And if the vector field has nonzero divergence also, then we also have the expressions</p> <p>$$\langle G (ds') A \rangle_0 = (G \cdot dS') \cdot A$$</p> <p>on the left. 
Without the implicit ability to add multivectors of different grades, we would have to write two separate integrals to capture the full description of the theorem.</p> <p>This is also apparent when writing certain differential equations. For example, Maxwell's equations in vacuum can be simplified to</p> <p>$$\nabla F = J$$</p> <p>for a vector field $J$ and bivector field $F$, which tells us immediately that $\nabla \wedge F = 0$ as a consequence.</p> <p>Will you be <em>fundamentally unable</em> to do tensor algebra and mathematics without the ability to add multivectors? Well, no. You can always separate equations in GA out into their component grades, and this is exactly what ends up happening when you do stuff in index notation or in differential forms. Still, the ability to describe several equations at once, with each grade describing its own independent equation, is just as powerful as the ability to break down a vector equation into each of its components' equations.</p>
443,475
<p>I am reading some geometric algebra notes. They all started from some axioms. But I am still confused on the motivation to add inner product and wedge product together by defining $$ ab = a\cdot b + a \wedge b$$ Yes, it can be done like complex numbers, but what will we lose if we deal with inner product and wedge product separately? What are some examples to show the advantage of geometric product vs other methods?</p>
user48672
138,298
<p>Well, actually the perspective should be reversed. Starting from the geometric product, in which a (nonzero) vector $a$ has an inverse, $$ a a^{-1}=1, $$ one can define two identities $$ a \cdot b= \frac{1}{2} \left(a b + \alpha(a) \, \alpha (b) \right) $$ where $\alpha$ is the reflection automorphism, and $$ a \wedge b= \frac{1}{2} \left(a b - \alpha(a) \, \alpha (b) \right), $$ and this holds for all grades in the algebra. </p> <p>Then $ a \wedge b$ gives an oriented area element, $ a \wedge b \wedge c$ gives an oriented volume, and so on. The usual linear algebra can also be reformulated using geometric algebra. </p>
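As a small numerical sanity check of the vector case (the matrix representation and example values below are my own illustration, not part of the original answer): in 2D, $e_1$ and $e_2$ can be represented by real $2\times2$ matrices satisfying $e_i^2 = 1$ and $e_1 e_2 = -e_2 e_1$, and for two vectors the split $ab = a\cdot b + a\wedge b$ is just the symmetric/antisymmetric decomposition of the matrix product:

```python
import numpy as np

# Matrix representation of the 2D geometric algebra Cl(2,0):
# e1 and e2 square to the identity and anticommute.
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

def vec(a1, a2):
    """A vector a1*e1 + a2*e2 as a matrix."""
    return a1 * e1 + a2 * e2

a = vec(2.0, 3.0)
b = vec(-1.0, 4.0)

# For vectors, the decomposition reduces to the symmetric and
# antisymmetric parts of the geometric product:
dot = 0.5 * (a @ b + b @ a)     # a . b  (a scalar: multiple of I)
wedge = 0.5 * (a @ b - b @ a)   # a ^ b  (the bivector part)

# a . b matches the usual inner product 2*(-1) + 3*4 = 10,
assert np.allclose(dot, 10.0 * I)
# and ab = a.b + a^b recombines exactly.
assert np.allclose(a @ b, dot + wedge)
```

This is only a sketch for grade-1 elements; the general identities in the answer apply to all grades.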
635,989
<p>What counterexample can I use to prove that (where $\mathbb{R}[x]$ is the space of real polynomials)</p> <p>$L :\mathbb{R}[x]\rightarrow\mathbb{R}[x],\ (L(p))(x)=p(x)p'(x)$ is not a linear transformation? I have already proven this using the definition, but it is hard to come up with an example. I would be grateful for any help.</p>
Thomas
26,188
<p>How about this? $$ L(x + x^2) = (x+x^2)(1 + 2x) = \dots $$ $$ L(x) + L(x^2) = \dots $$</p> <hr> <p>As a side note, if you want to <em>prove</em> that $L$ is not linear, you just have to provide one example where one of the properties fail. You say that you have proved it "using the definition" (I am not sure what you mean here), but this is not necessary. By providing a counter example, you have shown that the axioms don't hold.</p>
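If you want to check the arithmetic, a quick symbolic computation (using SymPy; this is my own illustration, not part of the original answer) confirms that additivity fails for exactly this pair:

```python
import sympy as sp

x = sp.symbols('x')

def L(p):
    # the (nonlinear) map p |-> p * p'
    return sp.expand(p * sp.diff(p, x))

lhs = L(x + x**2)        # L applied to the sum
rhs = L(x) + L(x**2)     # sum of the images

assert lhs != rhs                       # additivity fails, so L is not linear
assert sp.expand(lhs - rhs) == 3*x**2   # the difference is the cross term
```

The nonzero difference $3x^2$ is the cross term $x \cdot 2x + x^2 \cdot 1$ that a linear map could never produce.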
635,989
<p>What counterexample can I use to prove that (where $\mathbb{R}[x]$ is the space of real polynomials)</p> <p>$L :\mathbb{R}[x]\rightarrow\mathbb{R}[x],\ (L(p))(x)=p(x)p'(x)$ is not a linear transformation? I have already proven this using the definition, but it is hard to come up with an example. I would be grateful for any help.</p>
Michael Hoppe
93,935
<p>$L(x+1)=(x+1)\cdot1$, whereas $L(x)+L(1)= x\cdot1+1\cdot0$.</p> <p>Edit: evidently $L(2x)\neq2L(x)$.</p>
4,547,480
<p>I am working with some data for which I am interested in calculating some physical parameters. I have a system of linear equations, which I can write in matrix form as:</p> <p><span class="math-container">$$ \textbf{A} \textbf{x} = \textbf{b} $$</span></p> <p>where <span class="math-container">$\textbf{A}$</span> is a square matrix containing the coefficients of the linear equations, <span class="math-container">$\textbf{x}$</span> is a column vector of length <span class="math-container">$n$</span> containing the unknown parameters that I want to calculate, and <span class="math-container">$\textbf{b}$</span> is a known vector of length <span class="math-container">$n$</span>.</p> <p>Given no other constraints, the solution is to simply calculate <span class="math-container">$\textbf{x}$</span> by inverting <span class="math-container">$\textbf{A}$</span>. However, I have hard inequality constraints on <span class="math-container">$\textbf{x}$</span> because of physical reasons of the data in question:</p> <p><span class="math-container">$$ 0 \leq x_i \leq c_i \ \forall \ i=1,2,...n $$</span></p> <p>where <span class="math-container">$x_i$</span> are the values in <span class="math-container">$\textbf{x}$</span> and all <span class="math-container">$c_i$</span> are known.</p> <p>Now, these extra inequality constraints make the problem overdetermined. However, I have the choice to remove data (e.g., remove rows from <span class="math-container">$\textbf{A}$</span>) because I know a priori that some data are more reliable. 
Thus, I am hoping that I can make the unconstrained problem underdetermined, but <em>given the hard inequality constraints, make the constrained problem exactly determined</em> and calculate a single unique solution of <span class="math-container">$\textbf{x}$</span>.</p> <p>To summarize, my question is: how can I solve a determined system of linear equations subject to inequality constraints on the unknown parameters?</p> <p>I looked into a few potential techniques like linear programming and bounded variable least squares. However, the goal of those methods is to maximize or minimize some objective function, whereas I want an exact solution to my equations. My gut feeling is that a solution should exist but I don't have the linear algebra background to find it, so I appreciate any help!</p> <hr /> <p>These are details on my specific problem that might help with a solution:</p> <ul> <li>All values in <span class="math-container">$\textbf{A}$</span> are between 0 and 1.</li> <li>The value of the <span class="math-container">$c_i$</span> is at most 1.</li> <li><span class="math-container">$n=36$</span></li> </ul>
Erwin Kalvelagen
295,867
<p>You can use Linear Programming with a dummy objective. The usual approach is to use an objective with all zero coefficients: <span class="math-container">$$\begin{aligned} \min_x \&gt; &amp; 0^Tx \\ &amp; Ax=b \\ &amp; 0 \le x_i \le u_i \end{aligned}$$</span></p> <p>This will find a feasible solution and then stop.</p> <p>Of course, if you are only interested in feasibility, you can actually use any random objective. If I am reading things correctly, you have a unique solution.</p>
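As a sketch, this feasibility problem can be handed to <code>scipy.optimize.linprog</code> with an all-zero objective. The toy $3\times3$ data below stands in for the asker's $36\times36$ system and is my own invention:

```python
import numpy as np
from scipy.optimize import linprog

# A hypothetical 3x3 instance built so that a feasible x exists.
A = np.array([[0.5, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.2, 0.1, 0.7]])
b = A @ np.array([0.4, 0.5, 0.2])   # b chosen from a known in-bounds x
u = np.array([1.0, 1.0, 0.5])       # the upper bounds c_i

res = linprog(c=np.zeros(3),                  # dummy all-zero objective
              A_eq=A, b_eq=b,                 # the equality system Ax = b
              bounds=list(zip(np.zeros(3), u)),  # 0 <= x_i <= u_i
              method='highs')

assert res.success
assert np.allclose(A @ res.x, b)              # equalities hold
assert np.all(res.x >= -1e-9) and np.all(res.x <= u + 1e-9)  # bounds hold
```

If the equality system already pins down a unique solution, the solver simply verifies that it satisfies the bounds; if no in-bounds solution exists, <code>res.success</code> comes back <code>False</code>.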
2,401,281
<blockquote> <p>Show that for $\{a,b,c\}\subset\Bbb Z$ if $a+b+c=0$ then $2(a^4 + b^4+ c^4)$ is a perfect square. </p> </blockquote> <p>This question is from a math olympiad contest. </p> <p>I started developing the expression $(a^2+b^2+c^2)^2=a^4+b^4+c^4+2a^2b^2+2a^2c^2+2b^2c^2$ but was not able to find any useful direction after that.</p> <p><strong>Note</strong>: After getting 6 answers here, another user pointed out other question in the site with similar but not identical content (see above), but the 7 answers presented include more comprehensive approaches to similar problems (e.g. newton identities and other methods) that I found more useful, as compared with the 3 answers provided to the other question. </p>
Michael Rozenberg
190,319
<p>Since $$2(a^2b^2+a^2c^2+b^2c^2)-a^4-b^4-c^4=(a+b+c)(a+b-c)(a+c-b)(b+c-a)=0,$$ we obtain $$2(a^4+b^4+c^4)=a^4+b^4+c^4+2(a^2b^2+a^2c^2+b^2c^2)=(a^2+b^2+c^2)^2.$$ Done!</p>
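Both steps can be verified mechanically; the following SymPy sketch (my own addition, not part of the original answer) expands the factorization and the final identity:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

# The factorization used in the answer, checked by brute expansion:
lhs = 2*(a**2*b**2 + a**2*c**2 + b**2*c**2) - a**4 - b**4 - c**4
rhs = (a + b + c)*(a + b - c)*(a + c - b)*(b + c - a)
assert sp.expand(lhs - rhs) == 0

# The consequence when a + b + c = 0, i.e. c = -a - b:
expr = 2*(a**4 + b**4 + c**4) - (a**2 + b**2 + c**2)**2
assert sp.expand(expr.subs(c, -a - b)) == 0
```

So for integers with $a+b+c=0$, $2(a^4+b^4+c^4)$ is the square of the integer $a^2+b^2+c^2$.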
2,401,281
<blockquote> <p>Show that for $\{a,b,c\}\subset\Bbb Z$ if $a+b+c=0$ then $2(a^4 + b^4+ c^4)$ is a perfect square. </p> </blockquote> <p>This question is from a math olympiad contest. </p> <p>I started developing the expression $(a^2+b^2+c^2)^2=a^4+b^4+c^4+2a^2b^2+2a^2c^2+2b^2c^2$ but was not able to find any useful direction after that.</p> <p><strong>Note</strong>: After getting 6 answers here, another user pointed out other question in the site with similar but not identical content (see above), but the 7 answers presented include more comprehensive approaches to similar problems (e.g. newton identities and other methods) that I found more useful, as compared with the 3 answers provided to the other question. </p>
achille hui
59,379
<p>A systematic way of doing this is using <a href="https://en.wikipedia.org/wiki/Newton&#39;s_identities" rel="nofollow noreferrer">Newton's identities</a>.</p> <p>Let $p_k = a^k + b^k + c^k$ for $k = 1, 2, 3, 4$ and $$\begin{align} s_1 &amp;= a + b + c\\ s_2 &amp;= ab+bc+ca\\ s_3 &amp;= abc \end{align}$$ be the elementary symmetric polynomials associated with $a, b, c$.<br> Newton's identities tell us:</p> <p>$$\require{cancel}\begin{array}{rlrlrlrlrl} p_1 &amp;-&amp; s_1 &amp;&amp;&amp;&amp;&amp;= 0\\ p_2 &amp;-&amp; \cancelto{ 0}{\color{grey}{s_1 p_1}} &amp;+&amp; 2s_2 &amp;&amp;&amp;= 0\\ p_3 &amp;-&amp; \cancelto{ 0}{\color{grey}{s_1 p_2}} &amp;+&amp; \cancelto{ 0}{\color{grey}{s_2 p_1}} &amp;- &amp;3s_3 &amp;= 0\\ p_4 &amp;-&amp; \cancelto{ 0}{\color{grey}{s_1 p_3}} &amp;+&amp; s_2 p_2 &amp;-&amp; \cancelto{ 0}{\color{black}{s_3 p_1}} &amp;= 0 \end{array} $$ When $a + b + c = 0$, $s_1 = 0$ and the $1^{st}$ equation $p_1 - s_1 = 0$ tells us $p_1 = 0$.<br> Substituting back into the $2^{nd}$ and $4^{th}$ equations leads to</p> <p>$$\begin{cases} p_2 = -2s_2,\\ p_4 = -s_2 p_2 \end{cases} \quad\implies\quad 2(a^4+b^4+c^4) = 2p_4 = -2s_2 p_2 = p_2^2 = (a^2+b^2+c^2)^2$$</p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Thomas Riepe
451
<p><a href="http://www.pbs.org/wgbh/nova/sciencenow/3210/04.html" rel="nofollow" title="PBS docu">This video</a> is less about mathematics and more about a fascinating mathematician in two bodies who helped save medieval unicorns - students liked it. </p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Konrad Voelkel
956
<p>My personal all-time favorite is the Klein Four with their song "Finite Simple Group (of Order Two)"... it has lots of puns on topology in it, but I guess it doesn't teach anything.</p> <p><a href="https://www.youtube.com/watch?v=BipvGD-LCjU" rel="noreferrer" title="Go to the Klein Four Video on YouTube">Here's the link to the "Finite Simple Group" song</a></p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Andrew Stacey
45
<p>On <a href="http://www.k-3d.org/wiki/Animation_Gallery" rel="nofollow">this page</a> of sample animations using the k3d program there's a short animation of a "flower" blooming which is actually the first part of the sphere eversion.</p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
GMRA
135
<p>The Newton Institute in Cambridge tapes a lot (all?) of its lectures, and they can be found on the Institute's webpage. High-quality videos of lectures.</p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Mitch
825
<p>The Institute for Advanced Study tapes some of its <a href="http://video.ias.edu/">lectures</a>. They tend to be very good.</p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
vonjd
1,047
<p>Some of the best math videos can be found here: <a href="http://www.khanacademy.org/" rel="nofollow">http://www.khanacademy.org/</a></p> <p>(or the youtube-channel: <a href="http://www.youtube.com/khanacademy" rel="nofollow">http://www.youtube.com/khanacademy</a> )</p> <p>There is everything from counting to solving differential equations with Laplace transforms - nearly 1,000 videos altogether (and the guy is funny :-)</p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Spinorbundle
675
<p>Sir Michael Atiyah: <a href="https://www.youtube.com/watch?v=dToui7IVwBY" rel="nofollow noreferrer">Beauty in Mathematics</a>.</p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Bob
13,822
<p>Documentary about infinity and its implications in mathematics (BBC)<br> <a href="http://www.youtube.com/watch?v=Cw-zNRNcF90" rel="nofollow">http://www.youtube.com/watch?v=Cw-zNRNcF90</a></p> <p>As usual, Gregory Chaitin on the history of logic<br> <a href="http://www.youtube.com/watch?v=HLPO-RTFU2o" rel="nofollow">http://www.youtube.com/watch?v=HLPO-RTFU2o</a></p> <p>Another one about logic and artificial intelligence<br> <a href="http://www.youtube.com/watch?v=nA3m9jgMp3U" rel="nofollow">http://www.youtube.com/watch?v=nA3m9jgMp3U</a></p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Jesus Martinez Garcia
1,887
<p>All the talks of <a href="http://www.maths.ed.ac.uk/~aar/atiyah80.htm" rel="nofollow">Atiyah 80+</a></p>