264,740
<p>On Hilbert spaces, the following is true:</p> <p>Let $T$ be a densely-defined linear operator with non-empty resolvent set, then $T$ is closed.</p> <p>The obvious proof I see to show this uses explicitly the Hilbert space structure which is why I would like to ask:</p> <p>Is the same result true for operators on Banach spaces?</p>
Convexity
86,275
<p>STANDING ASSUMPTIONS: Let <span class="math-container">$T:D_T\rightarrow X$</span> be a linear operator, where <span class="math-container">$X$</span> is a normed space (or metrizable TVS) and <span class="math-container">$D_T\subset X$</span>.</p> <p>DEFINITION: Then <span class="math-container">$\lambda$</span> belongs to the <em>resolvent set</em> <span class="math-container">$\rho(T)$</span> iff <span class="math-container">$T-\lambda I$</span> is one-to-one and <strong>onto</strong> and its inverse is bounded.</p> <p>LEMMA. <strong>A linear operator <span class="math-container">$T$</span> in a normed space <span class="math-container">$X$</span> is closed if<span class="math-container">$^\dagger$</span> it has a non-empty resolvent set.</strong></p> <p>PROOF. Let <span class="math-container">$D_T\owns x_n\rightarrow x$</span>, <span class="math-container">$Tx_n\rightarrow y$</span>.<span class="math-container">$^\ddagger$</span> Let <span class="math-container">$\lambda$</span> be in the resolvent set. Then <span class="math-container">$x=\lim_n x_n = \lim_n (T-\lambda)^{-1}(T-\lambda)x_n=(T-\lambda)^{-1}(y-\lambda x)$</span> (+). Therefore, <span class="math-container">$x\in (T-\lambda)^{-1}(X)=D_T$</span>. By (+), <span class="math-container">$(T-\lambda)x=(T-\lambda)(T-\lambda)^{-1}(y-\lambda x)=(y-\lambda x)$</span>;  hence <span class="math-container">$Tx=y$</span>, so <span class="math-container">$T$</span> is closed, QED.</p> <p>Note: Robert Israel's shorter proof on this page shows the same for any TVS (if you require that the inverse is a continuous operator). 
The above proof (that works for all metrizable TVSs) might be easier for some readers.</p> <hr> <p><span class="math-container">$\dagger$</span>) The "if" in the lemma cannot be reversed, as some closed operators (even on <span class="math-container">$\ell^2$</span>) have an empty resolvent set, by <a href="https://math.stackexchange.com/questions/3262168/closed-operator-with-trivial-resolvent-set">https://math.stackexchange.com/questions/3262168/closed-operator-with-trivial-resolvent-set</a>    </p> <p><span class="math-container">$\ddagger$</span>) In a metrizable space, a set is closed iff it is sequentially closed. So the graph <span class="math-container">$G_T:=\{(x,Tx):\, x\in D_T\}$</span> is closed iff <span class="math-container">$G_T\owns (x_n,Tx_n)\rightarrow (x,y)\ \Rightarrow\ (x,y)\in G_T$</span> (i.e., <span class="math-container">$x\in D_T$</span> and <span class="math-container">$y=Tx$</span>).</p> <p>Note that your assumption that the operator is densely defined was not needed. This is a strict improvement:</p> <p>EXAMPLE. Let <span class="math-container">$L$</span> denote the inverse of the right-shift <span class="math-container">$R:(x_1,x_2,\cdots)\rightarrow (0,x_1,x_2,\cdots)$</span>. 
Then dom<span class="math-container">$L=R(\ell^2)$</span> is not dense, as <span class="math-container">$(1,0,0,\cdots)$</span> is not in its closure, but yet the (onto-)resolvent set <span class="math-container">$\rho(L)$</span> is nonempty, as <span class="math-container">$(L-0)^{-1}=R$</span> and hence <span class="math-container">$0\in\rho(L)$</span>.</p> <hr> <p>This "onto" definition seems to be standard: Rudin: F.A., 13.26 &amp; <a href="https://en.wikipedia.org/wiki/Spectrum_(functional_analysis)#Spectrum_of_an_unbounded_operator" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Spectrum_(functional_analysis)#Spectrum_of_an_unbounded_operator</a></p> <p><strong>Your claim is true only for this standard definition ("onto"), not for the older definition ("dense range" in place of "onto")</strong>, by Matthew Daws' example. For closed operators, the two definitions are equivalent. For a further discussion on this and the two definitions, see: <a href="https://mathoverflow.net/questions/348543/standard-definition-of-a-resolvent-a-zi-must-be-onto-not-merely-have-a-dense-r">Standard definition of a resolvent: A-zI must be onto, not merely have a dense range?</a></p>
1,515,900
<p>In many books and papers on analysis I have met this equality without proof:</p> <p>$$\sup \limits_{t\in[a,b]}f(t)-\inf \limits_{t\in[a,b]}f(t)=\sup \limits_{t,s\in[a,b]}|f(t)-f(s)|$$</p> <p>Can anyone show a rigorous and clean proof of this equality?</p> <p>I would be really grateful for your help!</p>
Lutz Lehmann
115,115
<p>For every $ε&gt;0$ there are $s$ and $t$ such that $$ \inf f\le f(s)\le \inf f+ε $$ and $$ \sup f -ε \le f(t) \le \sup f $$ Combining these gives $$ \sup f - \inf f-2ε \le f(t)-f(s)\le \sup f - \inf f $$ Since $ε&gt;0$ was arbitrary, the claimed identity indeed holds.</p>
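A quick numerical illustration of the identity (not part of the proof; the sample function $f(t)=\sin t$ on $[0,3]$ and the grid size are just for demonstration):

```python
import math

# Sample f(t) = sin(t) on a grid over [0, 3]; on the sampled values the identity
# sup f - inf f = sup |f(t) - f(s)| holds exactly, since the largest gap |x - y|
# is attained at the maximum and minimum sample.
ts = [3.0 * k / 300 for k in range(301)]
vals = [math.sin(t) for t in ts]

lhs = max(vals) - min(vals)
rhs = max(abs(x - y) for x in vals for y in vals)
print(lhs, rhs)  # the two sides agree
```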
830,599
<p>The function $f$ is defined as follows: $$f(x):=\sum_{j=1}^{\infty} \frac{x^j}{j!} e^{-x}$$</p> <p>It's easy to see that $f(0)=0$. But I am interested in the value $$\lim_{x \rightarrow 0^+} f(x).$$</p> <p>Even <a href="https://www.wolframalpha.com/input/?i=lim_%28x-%3E0%29+%28sum_%28i%3D1%29%5Einfinity+x%5Ej%2F%28j%21%29+e%5E%28-x%29%29+" rel="nofollow">Wolfram Alpha</a> does not help here. I tried to plot this function, but that doesn't work either. And my calculator doesn't give a result for concrete values of $x$, so I have no idea how to proceed here. </p>
Eff
112,061
<p>$$ f(x) =\sum\limits_{j=1}^\infty \frac{x^j}{j!}e^{-x} = e^{-x}\sum\limits_{j=1}^\infty \frac{x^j}{j!} = e^{-x}\cdot (e^x-1) = 1-e^{-x}.$$</p> <p>Here we have used that $$e^x = \sum\limits_{j=0}^\infty\frac{x^j}{j!} = 1+\sum\limits_{j=1}^{\infty}\frac{x^j}{j!} $$ So that $$ \sum\limits_{j=1}^{\infty}\frac{x^j}{j!} = e^x-1.$$</p>
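A quick numerical check of this closed form (just a sanity sketch, not part of the answer): truncating the series after enough terms should match $1-e^{-x}$.

```python
import math

def f(x, terms=60):
    # Partial sum of sum_{j>=1} x^j / j!  times e^{-x}
    return sum(x**j / math.factorial(j) for j in range(1, terms + 1)) * math.exp(-x)

for x in [0.1, 0.5, 1.0, 3.0]:
    assert abs(f(x) - (1 - math.exp(-x))) < 1e-12
print("series matches 1 - exp(-x); in particular f(x) -> 0 as x -> 0+")
```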
2,423,055
<p>I am sort of baffled by this thing: the real numbers already have everything in them, so why do we have this concept of $\Bbb R^2$? What does it mean? What is its advantage?</p>
G Tony Jacobs
92,129
<p>We usually use $\mathbb{R}$, the set of real numbers, to refer to what we picture as the number line.</p> <p>Thus, $\mathbb{R}^2$, the set of pairs of real numbers, is what we picture as the $xy$-plane, coordinatized by two number lines.</p> <p>If you want to talk about space with $n$ dimensions, then you want a "copy" of $\mathbb{R}$ for each dimension, to give you independent coordinates.</p>
516,244
<p>My professor gave us this example on her notes:</p> <p>$$\sum_{n = 1}^\infty \left(\frac{3}{n(n+3)}+\frac{1}{2^n}\right)$$</p> <p>So I know we're supposed to find the partial fraction decomposition, which ends up being</p> <p>$$\frac{3}{n(n+3)}=\frac{A}{n}+\frac{B}{n+3}= \frac{1}{n}-\frac{1}{n+3}$$</p> <p>So based on how she did the other examples, I would expect her to write</p> <p>$$\sum_{n = 1}^\infty \frac{3}{n(n+3)}=\frac{1}{1}-\frac{1}{4}+\frac{1}{2}-\frac{1}{5}+\cdots,$$ because I'd be plugging in numbers for $n$ starting with $n=1$. However, she instead did the following:</p> <p>$$\frac{3}{n(n+3)}=\frac{1}{n}-\frac{1}{n+1}+\frac{1}{n+1}-\frac{1}{n+2}+\frac{1}{n+2}-\frac{1}{n+3},$$</p> <p>which would definitely be a lot more helpful in canceling out terms like you're supposed to when doing telescoping series, BUT I don't know why she's doing this. I thought we were supposed to plug in values for $n$, and that's what should be increasing each time, but instead the number being added to $n$ is the one going up, and I have no clue why. I don't think I'm asking this question in the best way possible, but I'm kind of confusing myself because she did other examples that feel nothing like this, and I'm just starting to learn all this, so can somebody please give me some insight as to what is going on?</p> <p>(And I know I'm supposed to also deal with the sum of the $$\frac{1}{2^n}$$ term, but I'm kind of ignoring it for now since I don't even know what's going on with the first one.)</p>
Piquito
219,998
<p>You know that <strong>AB × AC</strong> is a vector perpendicular to the plane ABC whose length |<strong>AB × AC</strong>| equals the area of the parallelogram ABA’C. The triangle ABC is half of that parallelogram, so its area is ½ |<strong>AB × AC</strong>|.</p> <p><a href="https://i.stack.imgur.com/3oDbh.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3oDbh.png" alt="enter image description here" /></a></p> <p>From <strong>AB</strong>= <span class="math-container">$(x_2 -x_1, y_2-y_1)$</span>; <strong>AC</strong>= <span class="math-container">$(x_3-x_1, y_3-y_1)$</span>, we then deduce</p> <p>Area of <span class="math-container">$\Delta ABC$</span> = <span class="math-container">$\frac12\left|(x_2-x_1)(y_3-y_1)- (x_3-x_1)(y_2-y_1)\right|$</span> (the absolute value because an area is nonnegative).</p>
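A quick check of the final formula (a sketch; the sample triangle is hypothetical): a right triangle with legs 4 and 3 has area 6.

```python
def tri_area(p1, p2, p3):
    # Half the absolute value of the 2D cross product AB x AC
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))

print(tri_area((0, 0), (4, 0), (0, 3)))  # 6.0
```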
1,022,485
<p>I'm working on a homework problem for my discrete math class, and I'm stuck. (Note: I made a post about this earlier, but I read the problem incorrectly, thus the work was wrong, so I deleted the post.)</p> <p>Prove by mathematical induction that for every integer n $\ge$ 0, $12\mid8^{2n+1}+2^{4n+2}$</p> <p>I start out by proving the base case, $F(0)$, to be true:</p> <p>$$ F(0)=8^1+2^2=12\quad \text{Obviously, 12 is divisible by 12} $$</p> <p>I then move on to the induction step to prove the $F(n+1)$ is true:</p> <p>I assume that $8^{2n+1}+2^{4n+2}$ is divisible by 12, and then plug in $(n+1)$:</p> <p>$$ F(n+1)=8^{2(n+1)+1}+2^{4(n+1)+2}=8^{2n+3}+2^{4n+6} $$</p> <p>I then do $F(n+1)-F(n)$:</p> <p>$$ F(n+1)-F(n)=(8^{2n+3}+2^{4n+6})-(8^{2n+1}+2^{4n+2}) $$</p> <p>$$ =8^{2n+3}-8^{2n+1}+2^{4n+6}-2^{4n+2} $$</p> <p>I then factor out the terms used in $F(n)$:</p> <p>$$ 8^{2n+1}(8^2-1)+2^{4n+2}(2^4-1)=8^{2n+1}(63)+2^{4n+2}(15) $$</p> <p>I can re-write the result as:</p> <p>$$ 8^{2n+1}(63)+2^{4n+2}(12+3) $$</p> <p>This is where I'm stuck. I broke up the $15$ into $12+3$ since I need to prove that there is a multiple of 12, but I don't know what to do with the 63, since (I think) you're supposed to have the terms in $F(n)$ multiplied by 3 after you distribute so that you can factor out the 3 and have $F(n)$ in the equation, which is proven to be divisible by 12. </p> <p>I tried splitting the $63$ into $(21*3)$</p> <p>$$ 8^{2n+1}(21*3)+2^{4n+2}(12+3) $$</p> <p>But I'm not sure what to do next. Any ideas?</p>
ir7
26,651
<p>$$ 8^{2n+1}\cdot63+2^{4n+2}\cdot 15=8^{2n}\cdot(8\cdot63)+2^{4n}\cdot(4\cdot15). $$</p> <p>Both coefficients are multiples of $12$: $8\cdot 63=504=12\cdot 42$ and $4\cdot 15=60=12\cdot 5$, so the whole expression is divisible by $12$.</p>
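A brute-force sanity check of the original divisibility claim for small $n$ (a sketch, not part of the induction proof):

```python
# Verify 12 | 8^(2n+1) + 2^(4n+2) directly for n = 0..19
for n in range(20):
    assert (8**(2*n + 1) + 2**(4*n + 2)) % 12 == 0
print("12 divides 8^(2n+1) + 2^(4n+2) for n = 0..19")
```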
2,376,315
<p>So I'm trying to solve the problem irrational ^ irrational = rational. Here is my proof Let $i_{1},i_{2}$ be two irrational numbers and r be a rational number such that $$i_{1}^{i_{2}} = r$$ So we can rewrite this as $$i_{1}^{i_{2}} = \frac{p}{q}$$ Then by applying ln() to both sides we get $$i_2\ln(i_1) = \ln(p)-\ln(q)$$ which can be rewritten using the difference of squares as $$ i_2\ln(i_1) = \left(\sqrt{\ln(p)}-\sqrt{\ln(q)}\right)\left(\sqrt{\ln(p)}+\sqrt{\ln(q)}\right)$$ so now we have $$i_1 = e^{\sqrt{\ln(p)}+\sqrt{\ln(q)}}$$ $$i_2 = \sqrt{\ln(p)}-\sqrt{\ln(q)}$$ because I've found an explicit formula for $i_1$ and $i_2$ we are done.</p> <p>So I'm new to proofs and I'm not sure if this is a valid argument. Can someone help me out?</p>
Jens Renders
131,972
<p>How do you know that the values you end up with are irrational? If you are going to use the irrationality of $e$ and natural logarithms, you might as well just use $e^{\ln (2)}=2$ as a counterexample and you're done. The famous proof given in another answer uses only $\sqrt{2}$ because it's easy to prove that it is an irrational number.</p>
893,959
<p>I have a series of problems on inequalities that I cannot solve; please help me if you can.</p> <p>Problem 1: $a,b,c \geq 0$ such that $\sqrt{a^2+b^2+c^2}=\sqrt[3]{ab+bc+ca} $; prove that $a^2b+b^2c+c^2a+abc \le \frac{4}{27}$.</p> <p>Problem 2: $a,b,c\geq0$ and $a+b+c = 1$; prove that </p> <p>1, $ \sqrt{a+\frac{(b-c)^2}{4}}+\sqrt{b}+\sqrt{c} \leq \sqrt{3} $</p> <p>2, $\sqrt{a+\frac{(b-c)^2}{4}} + \sqrt{b+\frac{(a-c)^2}{4}} + \sqrt{c+\frac{(a-b)^2}{4}} \leq2$</p> <p>Problem 3: $a,b,c \geq0$ and $a+b+c=1$; find the maximum value of $M=\frac{1+a^2}{1+b^2}+\frac{1+b^2}{1+c^2}+\frac{1+c^2}{1+a^2}$.</p>
math110
58,742
<p>We have $$\frac{\sqrt b+\sqrt c}{\sqrt 2}=\sqrt{b+c-\frac{(\sqrt b-\sqrt c)^2}{2}}\le \sqrt{b+c-\frac{( b- c)^2}{4}}$$ therefore from this and Cauchy-Schwarz it follows that $$\left( \sqrt{a+\frac{(b-c)^2}{4}}+\sqrt{b}+\sqrt{c}\right)^2\le \left(a+\frac{(b-c)^2}{4}+\frac{(\sqrt b+\sqrt c)^2}{ 2}\right)(1+2)\le 3\left(a+\frac{(b-c)^2}{4}+b+c -\frac{(b-c)^2}{4}\right)=3 $$</p>
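A numerical sanity check of the first inequality on random nonnegative triples with $a+b+c=1$ (a sketch, not part of the proof; the sampling scheme is just one convenient way to hit the simplex):

```python
import math, random

random.seed(0)
for _ in range(10000):
    u = sorted(random.random() for _ in range(2))
    a, b, c = u[0], u[1] - u[0], 1 - u[1]  # random point of the simplex a+b+c=1
    lhs = math.sqrt(a + (b - c)**2 / 4) + math.sqrt(b) + math.sqrt(c)
    assert lhs <= math.sqrt(3) + 1e-9  # equality holds at a = b = c = 1/3
print("inequality verified on 10000 random samples")
```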
2,090,059
<p>Let $M = \bigcup\limits_{n=1}^\infty M_n$, where the $M_n$ are measurable sets in $\mathbb R$, and let $f$ be a measurable function on $\mathbb R$. Then I want to know why the equality $$ \lim\limits_{N\to \infty}\int_{M\setminus \bigcup\limits_{n=1}^NM_n} |f(x)| \,dx = 0 $$ holds. It seems something like the monotone convergence theorem, but as far as I know the monotone convergence theorem is about monotone sequences of "functions". How can I justify this?</p>
Gabriel Romon
66,096
<p>The map $\nu:A\mapsto \int_A |f|$ is a measure.</p> <p>Note that $A_N:=M\setminus \bigcup_{n=1}^NM_n$ is a decreasing sequence that converges to $\emptyset$.</p> <p>Assuming $\int |f|&lt;\infty$, we have $\lim_N\nu(A_N)=\nu(\emptyset)=0$.</p>
293,110
<p>How do I prove that the homomorphism $\phi : \; \mathrm{Mod}(S_g)\to \mathrm{Sp}(2g, \mathbb{Z})$ (induced by the action of the mapping class group of a surface on the integral homology of the surface) is an epimorphism? My idea was to work with generators, but I was not able to prove it this way. </p> <p>I would love to get detailed answers in order to understand this better. </p>
Zuhair Al-Johar
95,347
<p>The following is actually an answer to whether the theory presented in this post can interpret $\text{NBG}$. </p> <p>The answer is in the positive! Moreover, it can interpret Morse-Kelley class theory $\text{MK}$.</p> <p>The axioms of the theory are all those present in this post, with the only exception of "Set Union", which is modified as suggested in [2].</p> <p>$\text{Proof}$</p> <p>$\text{Definitions:}$</p> <p>$x \ is \ Class^{MK} \iff small(x) \lor \forall y \subset x \big{(}small(y) \to \exists small \ k (k \in x \wedge k \not \in y) \big{)}$</p> <p>$x \ is \ set^{MK} \iff small(x)$</p> <p>$ x =^{MK} y \iff \forall small \ z \ (z \in x \leftrightarrow z \in y)$</p> <p>$ y \in^{MK} x \iff y \in x \wedge small(y)$</p> <p>The proof of extensionality is straightforward. Now $V$ would be any set that has all small sets among its elements; for instance, we'll take $V$ to be the set of all sets in this theory (i.e. the complementary set of $\emptyset$). That $V$ contains an infinite element and is closed under set union, pairing and power is obvious, as is $\in^{MK}\text{-Regularity}.$ </p> <p>What remains is to prove the class comprehension schema of $\text{MK}$. </p> <p>$\text{LEMMA}$: This theory proves the following separation schema:</p> <p>$\forall A \ \exists x \ \forall small \ y \ (y \in x \longleftrightarrow y \in A \wedge \phi(y))$</p> <p>Proof: the relation $A_1=\{y \mid \exists z (z \in A \wedge y=\langle 0,z \rangle)\}$ is 1-1, and so is the relation </p> <p>$A_2=\{y \mid [\exists z \in A (\phi(z) \wedge y=\langle 0,z \rangle)] \lor [\exists z \in A (\neg \phi(z) \wedge y=\langle 1,z \rangle)] \}$</p> <p>We also need to prove this form of intersection:</p> <p>$\forall A, B \ \exists X \ \forall small \ y \ (y \in X \longleftrightarrow y \in A \wedge y \in B)$</p> <p>Proof: let us take $A \cap B$ to be $(A^c \cup B^c)^c$. </p> <p>Now our separation set is $(A_1 \cap A_2)^{-1}$. 
</p> <p>where the $``^{-1}"$ operator uses the following consequence of the last axiom:</p> <p>$\phi(z,x) \iff \\ (\forall m \in x [\exists small \ k \ (m=\langle 0, k \rangle) \lor \neg small (m)] \wedge \ \exists y \in x (\neg small(y)) \wedge [\exists m \in x (small(m) \wedge m=\langle 0,z \rangle) \lor \exists m \in x (\neg small(m) \wedge z= \langle 0,m \rangle) ] ) \lor (\neg \forall m \in x [\exists k (small(k) \wedge m=\langle 0, k \rangle) \lor \neg small (m)] \wedge \exists m \in x (z= \langle 1,m \rangle)) $</p> <p>In reality, we don't need the full scheme in [2] for that development! What is needed is to only prove the existence of $A_2$ sets for every set $A$ and every formula $\phi$, and $X^{-1}$ for every set $X$, that's it. </p>
3,713,206
<p>Spivak (3rd edition) proposes solving the integral <span class="math-container">$$\int \frac{1+e^x}{1-e^x} dx$$</span> by letting <span class="math-container">$u=e^x$</span>, <span class="math-container">$x=\ln(u)$</span>, and <span class="math-container">$dx=\frac{1}{u}du$</span>. This results in the integral <span class="math-container">$$\int \frac{1+u}{1-u}\frac{1}{u}du\\=\int \frac{2}{1-u}+\frac{1}{u}du=-2\ln(1-u)+\ln(u)=-2\ln(1-e^x)+x$$</span></p> <p>From this example, Spivak argues that a similar method will work on any integral of the form <span class="math-container">$\int f(g(x))dx$</span> whenever <span class="math-container">$g(x)$</span> is invertible in the appropriate interval. Because this method is not a simple application of the substitution theorem, Spivak provides the following justification for his claim.</p> <p>Consider continuous <span class="math-container">$f$</span> and <span class="math-container">$g$</span> where <span class="math-container">$g$</span> is invertible on the appropriate interval. Applying the above method to the arbitrary case, we let <span class="math-container">$u=g(x)$</span>, <span class="math-container">$x=g^{−1}(u)$</span>, and <span class="math-container">$dx=(g^{−1})′(u)du$</span>. 
Thus, we need to show that <span class="math-container">$$∫f(g(x))dx=∫f(u)(g^{−1})′(u)du$$</span> To prove this equality Spivak uses a more typical substitution <span class="math-container">$u=g(x)$</span>, <span class="math-container">$du=g′(x)dx$</span> and applies it by noting that <span class="math-container">$$∫f(g(x))dx=∫f(g(x))g′(x)\frac{1}{g′(x)}dx$$</span> Presumably using the substitution theorem, which roughly states that <span class="math-container">$∫f(g(x))g'(x)dx=∫f(u)du$</span>, Spivak asserts that <span class="math-container">$$∫f(g(x))g′(x)\frac{1}{g′(x)}dx=∫f(u)\frac{1}{g′(g^{−1}(u))}du$$</span> Then, because <span class="math-container">$(g^{-1})'(u)=\frac{1}{g'(g^{-1}(u))}$</span> Spivak concludes <span class="math-container">$$∫f(u)\frac{1}{g′(g^{−1}(u))}du=∫f(u)(g^{−1})′(u)du$$</span></p> <p>I lose track of the argument when Spivak argues that <span class="math-container">$$∫f(g(x))g′(x)\frac{1}{g′(x)}dx=∫f(u)\frac{1}{g′(g^{−1}(u))}du$$</span> In the original example, it was clear to me how we could apply the substitution theorem to make this equality true because <span class="math-container">$\frac{1}{g'(x)}$</span> was in fact a function of <span class="math-container">$g(x)$</span> as <span class="math-container">$g'(x)=g(x)$</span>. But this is not necessarily true in all cases, or so it seems. How do we know that <span class="math-container">$f(g(x))\frac{1}{g'(x)}$</span> can be written in the form <span class="math-container">$h(g(x))$</span> for some continuous function <span class="math-container">$h$</span>? To sum up, my main question is, how do we use the substitution theorem to justify the equality <span class="math-container">$$∫f(g(x))g′(x)\frac{1}{g′(x)}dx=∫f(u)\frac{1}{g′(g^{−1}(u))}du$$</span></p>
peek-a-boo
568,204
<p>Note that <span class="math-container">\begin{align} \int f(g(x))\cdot g'(x) \dfrac{1}{g'(x)}\ dx &amp;= \int \underbrace{f(g(x))\cdot\dfrac{1}{(g' \circ g^{-1})(g(x))}}_{h(g(x))}\ \underbrace{g'(x)\ dx}_{du} \\ \end{align}</span> where I defined <span class="math-container">$h(t):= f(t) \cdot \dfrac{1}{(g' \circ g^{-1})(t)}$</span>. So, now of course, you can apply the substitution rule in the first form Spivak presented, simply by putting <span class="math-container">$u = g(x)$</span> and <span class="math-container">$du = g'(x)\ dx$</span>.</p> <hr /> <p>Anyway, let me just add a few comments about how (a few years back), after reading Spivak's chapter, I really convinced myself that the substitution rule is actually true as a consequence of the chain rule, rather than some symbolic manipulation (I mean of course I knew it was true, but not how the computations aligned with the formalism).</p> <p>I would recommend reading this <a href="https://math.stackexchange.com/a/2868217/568204">previous answer</a> of mine, where I pretty much (re)explain what I understood from reading this section in Spivak. Anyway, the gist is the following. Given any continuous <span class="math-container">$f$</span>, when we write down the symbol <span class="math-container">$\int f(x)\, dx$</span>, what we really mean is a differentiable function (or if you wish an equivalence class of differentiable functions), <span class="math-container">$F$</span>, such that <span class="math-container">$F' = f$</span>. So, I shall refer to this primitive function <span class="math-container">$F$</span> (or rather its equivalence class) by the symbol <span class="math-container">$\text{prim}(f)$</span>. 
Then, the common substitution rule <span class="math-container">\begin{align} \int f(g(x)) \cdot g'(x) \, dx &amp;= \int f(u)\, du \quad \text{where $u = g(x)$} \end{align}</span> can be written (I'd say more correctly from a technical perspective) as <span class="math-container">\begin{align} \text{prim}((f \circ g) \cdot g') &amp;= \text{prim}(f) \circ g. \end{align}</span> (by the way <span class="math-container">$\text{prim}(f) \circ g$</span> of course means <span class="math-container">$[\text{prim}(f)] \circ g$</span>). All this equation is saying is that if you differentiate both sides, you get the same function, namely <span class="math-container">$(f \circ g) \cdot g'$</span>. On the LHS, that is trivially true, by definition of <span class="math-container">$\text{prim}(\cdot)$</span>, while on the RHS, it is because of the chain rule. Now, if <span class="math-container">$g$</span> is invertible, we can &quot;solve&quot; this equation to get <span class="math-container">\begin{align} \text{prim}(f) &amp;= \text{prim}((f \circ g) \cdot g') \circ g^{-1} \end{align}</span> This equation is true for EVERY continuous <span class="math-container">$f$</span>, and every continuously differentiable <span class="math-container">$g$</span>, which is invertible. Just for the sake of avoiding confusion later on, I'll write this as <span class="math-container">\begin{align} \text{prim}(\phi) &amp;= \text{prim}((\phi \circ \psi) \cdot \psi') \circ \psi^{-1} \end{align}</span> Now, the equation you're looking for is obtained by plugging in <span class="math-container">$\phi = f \circ g$</span> and <span class="math-container">$\psi = g^{-1}$</span>. 
Then, this immediately reduces to <span class="math-container">\begin{align} \text{prim}(f \circ g) &amp;= \text{prim}\left(f \cdot (g^{-1})'\right) \circ g \\ &amp;= \text{prim}\left(f \cdot \dfrac{1}{g' \circ g^{-1}} \right) \circ g, \end{align}</span> where in the last line, I used the inverse function theorem for the formula of derivative of inverses. Of course, if you write this out in the classical notation, it says that if we put <span class="math-container">$u = g(x)$</span>, then <span class="math-container">\begin{align} \int f(g(x))\, dx &amp;= \int f(u) \cdot (g^{-1})'(u)\, du \\ &amp;= \int f(u) \cdot \dfrac{1}{g'(g^{-1}(u))}\, du \end{align}</span></p> <p>In the previous answer of mine, I explain in slightly more detail, how to translate back and forth between the two notations.</p> <p>Also, one final remark which I feel compelled to add: no one ever uses this <span class="math-container">$\text{prim}(\cdot)$</span> notation, and for good reason, because in actual hands-on computations, it doesn't serve us too well, so of course, you should get comfortable with all the tricks of integration being applied in the classical notation as well. The only thing which this notation offers is a temporary way of writing things, in order to clarify for oneself what exactly is going on when applying the substitution rule (as I'm sure several people have at least once thought about why it's true considering we're taught derivatives are not fractions, yet in this one circumstance, we treat it as a fraction).</p>
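As a numerical sanity check of the worked example that opens the question (a sketch, restricted to $x&lt;0$ so that $1-e^x&gt;0$): the derivative of the antiderivative $-2\ln(1-e^x)+x$ should equal the integrand $(1+e^x)/(1-e^x)$.

```python
import math

F = lambda x: -2 * math.log(1 - math.exp(x)) + x     # antiderivative from the example
f = lambda x: (1 + math.exp(x)) / (1 - math.exp(x))  # integrand

h = 1e-6
for x in [-2.0, -1.0, -0.5]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)  # central-difference derivative
    assert abs(deriv - f(x)) < 1e-6
print("F' matches the integrand numerically")
```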
7,064
<p>The <a href="http://en.wikipedia.org/wiki/Long_division" rel="nofollow">Wikipedia article</a> on long division explains the different notations. I still use the European notation I learned in elementary school in Colombia. I had difficulty adapting to the US/UK notation when I moved to the US. However, I did enjoy seeing my classmates' puzzled faces in college whenever we had a professor that preferred the European notation.</p> <p>What long division notation do you use and where did you learn it?</p>
Juan S
2,219
<p>US notation - which is taught in Australia</p>
7,064
Community
-1
<p>This is so interesting, especially the idea of language dictating direction! I grew up in Argentina and attended a bilingual school...so I learned both European and US notation but am stronger in the US because we used it more often. Now I'm teaching language and will do a lesson on the different "formats" for simple math. Gracias a todos!</p>
2,421,421
<blockquote> <p>Evaluate $$ \lim_{x\to \pi/2} \frac{\sqrt{1+\cos(2x)}}{\sqrt{\pi}-\sqrt{2x}}$$ </p> </blockquote> <p>I tried to solve this by L'Hospital's rule, but that doesn't give a solution. I'd appreciate it if you could give a clue.</p>
Robert Z
299,698
<p>The given limit does not exist. However, we are able to evaluate the right and the left limit (without using <a href="https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule" rel="nofollow noreferrer">L'Hospital's rule</a>).</p> <p>Note that $$\sqrt{1+\cos(2x)}=\sqrt{1+\cos^2(x)-\sin^2(x)}=\sqrt{2}|\cos(x)|=\sqrt{2}|\sin(\pi/2-x)|.$$ Hence as $x\to \pi/2$, $$\frac{\sqrt{1+\cos(2x)}}{\sqrt{\pi}-\sqrt{2x}} =\frac{\sqrt{2}|\sin(\pi/2-x)|}{\sqrt{2}\frac{(\pi/2-x)}{\sqrt{\pi/2}+\sqrt{x}}}=\frac{|\sin(\pi/2-x)|}{\pi/2-x}\cdot \left(\sqrt{\pi/2}+\sqrt{x}\right).$$ Recalling that $\sin(t)/t$ goes to $1$ as $t\to 0$, we may conclude that $$\lim_{x\to (\pi/2)^{\pm}} \frac{\sqrt{1+\cos(2x)}}{\sqrt{\pi}-\sqrt{2x}}=\mp\sqrt{2\pi}.$$</p>
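A numerical check of the two one-sided limits (a sketch; the step size is arbitrary): values just left and just right of $\pi/2$ should approach $+\sqrt{2\pi}$ and $-\sqrt{2\pi}\approx -2.5066$ respectively.

```python
import math

g = lambda x: math.sqrt(1 + math.cos(2 * x)) / (math.sqrt(math.pi) - math.sqrt(2 * x))

eps = 1e-6
left = g(math.pi / 2 - eps)   # x -> (pi/2)^- : expect +sqrt(2 pi)
right = g(math.pi / 2 + eps)  # x -> (pi/2)^+ : expect -sqrt(2 pi)
print(left, right, math.sqrt(2 * math.pi))
```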
99,617
<p>How can I animate a point on a polar curve? I have used <code>Animate</code> and <code>Show</code> together before in order to get the curve and the moving point together on the same plot, but combining the polar plot and the point doesn't seem to be working because point only works with Cartesian coordinates.</p> <p>Here is the code I used before to animate a point on a parametric curve. For higher values of <code>a</code> and <code>theta</code>, you can see the point moving along the curve better (I was required to animate all three parameters).</p> <pre><code>Animate[ Show[ ParametricPlot[{a Cos[θ] t, a Sin[θ] t - 4.9 t^2}, {t, 0, 15}, AxesLabel -&gt; {"x", "y"}, PlotRange -&gt; {{0, 50}, {0, 30}}], Graphics[{Red, PointSize[.05], Point[{a Cos[θ] t, a Sin[θ] t - 4.9 t^2}]}] ], {t, 0, 5, Appearance -&gt; "Labeled"}, {a, 1, 20, Appearance -&gt; "Labeled"}, {θ, 0, Pi/2, Appearance -&gt; "Labeled"}, AnimationRunning -&gt; False ] </code></pre> <p>Here is the code I tried to use to animate a point on a polar curve, but the point does not even show up.</p> <pre><code>Animate[ Show[ PolarPlot[2 Sin[4*θ], {θ, 0, 2 Pi}], Graphics[Red, PointSize[Large], Point[{2 Sin[4*θ] Cos[θ], 2 Sin[4*θ] Sin[θ]}]] ], {θ, 0, 2 Pi}, AnimationRunning -&gt; False ] </code></pre>
David G. Stork
9,735
<pre><code>Animate[
 Show[
  PolarPlot[Sin[3 t], {t, 0, π}],
  (* convert the polar point r = Sin[3 θ] to Cartesian coordinates r {Cos[θ], Sin[θ]} *)
  Graphics[{Red, PointSize[0.02], Point[Sin[3 θ] {Cos[θ], Sin[θ]}]}]],
 {θ, 0, π}]
</code></pre>
2,280,323
<p>I am given a linear transformation $$T:\mathbb{R^3} \rightarrow \mathbb{R^2}$$ $$T((x,y,z)) = (x+y,-y+z)$$ The task is to find a basis in $\mathbb{R^3}$, let's call it $B=\{e_1, e_2, e_3\}$, and one in $\mathbb{R^2}$, let's call it $B'=\{f_1, f_2\}$, such that $A$ is the matrix of this transformation with respect to the found bases. </p> <p>Here is the matrix $A$: $$A = \begin{bmatrix} 1&amp;0&amp;0\\0&amp;2&amp;0\end{bmatrix}$$</p> <p>I think I am not sure how to interpret the given matrix $A$.</p>
Bill
418,485
<p>First find the matrix of $T$ with respect to the standard bases of $\mathbb R^3$ and $\mathbb R^2$: $\{ e_1,e_2,e_3\}$ for $\mathbb R^3$ and $\{ e_1,e_2\}$ for $\mathbb R^2$. Now</p> <p>$T(e_1)=T(1,0,0)=(1,0)=e_1$</p> <p>$T(e_2)=T(0,1,0)=(1,-1)=e_1-e_2$</p> <p>$T(e_3)=T(0,0,1)=(0,1)=e_2$</p> <p>so the transformation matrix with respect to the standard bases is:</p> <p>$$A = \begin{bmatrix} 1&amp;1&amp;0\\0&amp;-1&amp;1\end{bmatrix}$$</p> <p>From here you can easily find bases for your question: you need vectors with</p> <p>$T(e_1)=f_1$</p> <p>$T(e_2)=2f_2$</p> <p>$T(e_3)=0$</p> <p>In particular, the null space of $T$ is spanned by the new basis vector $e_3$.</p>
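A concrete check of one such choice (a sketch; these particular basis vectors are one possibility, not the only one): take $e_1=(1,0,0)$, $e_2=(0,1,0)$, $e_3=(1,-1,-1)$ (a kernel vector) in $\mathbb R^3$ and $f_1=(1,0)$, $f_2=(1/2,-1/2)$ in $\mathbb R^2$.

```python
# T(x, y, z) = (x + y, -y + z)
T = lambda v: (v[0] + v[1], -v[1] + v[2])

# Hypothetical bases chosen so the matrix of T becomes A = [[1,0,0],[0,2,0]]
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (1, -1, -1)   # e3 spans the kernel of T
f1, f2 = (1, 0), (0.5, -0.5)

assert T(e1) == f1                        # first column of A:  (1, 0)
assert T(e2) == (2 * f2[0], 2 * f2[1])    # second column of A: (0, 2)
assert T(e3) == (0, 0)                    # third column of A:  (0, 0)
print("matrix of T w.r.t. these bases is [[1,0,0],[0,2,0]]")
```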
3,585,111
<p>Given that <span class="math-container">$x^{\frac{2}{3}}=({x^2})^{\frac{1}{3}}$</span>, I thought that <span class="math-container">$f(x)$</span> is continuous on <span class="math-container">$\mathbb{R}$</span>, but when I plot this function, I get something like this in Wolfram.</p> <p><a href="https://i.stack.imgur.com/pwL67.png" rel="nofollow noreferrer">enter image description here</a></p> <p>What happens at <span class="math-container">$f(-5)$</span>? Is it not <span class="math-container">$f(-5)=25^{\frac{1}{3}}$</span>? I find this plot odd. There is nothing for negative <span class="math-container">$x$</span> values. I thought the graph should be symmetrical about the <span class="math-container">$y$</span> axis.</p> <p>So does it mean <span class="math-container">$f$</span> is not continuous on all of <span class="math-container">$\mathbb{R}$</span>?</p>
Sujit Bhattacharyya
524,692
<p>I think it may be the fault of the plotting program (maybe the software understood something else). I've used the <a href="https://www.desmos.com/calculator" rel="nofollow noreferrer">Desmos Graphing Calculator</a> and found that this function is perfectly continuous on <span class="math-container">$\Bbb R$</span>.</p> <p><a href="https://i.stack.imgur.com/68piu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/68piu.jpg" alt="Here is the image"></a></p> <p>P.S.: If you are trying to determine the continuity of some function, try working it out by hand first; go for software only if you are unable to solve it manually.</p>
381,313
<p>Evaluate the following limits:</p> <p>(1) $\lim\limits_{n \to \infty} \int_0^n (1+\frac{x}{n})^n e^{-2x}dx$.</p> <p>(2) $\lim\limits_{n \to \infty} \int_0^n (1-\frac{x}{n})^n e^{\frac{x}{2}}dx$. </p> <p>(3) $\int_0^{\infty} \frac{e^{-x}\sin^2 x}{x}dx$</p> <p>(Hint: show $f(x,y)=e^{-x}\sin2xy$ is integrable on $[0,\infty) \times [0,1]$)</p>
Neal
20,569
<p>Consider your question for principal $G$-bundles over an honest space $M$. Note that the structure group of a principal $G$-bundle is $G$. </p> <p>Isomorphism classes of principal $G$-bundles over $M$ are in one-to-one correspondence with homotopy classes of maps $M\to BG$, the classifying space of $G$. So in particular, the fiber $G$ and base $M$ characterize the bundle if and only if $[M,BG]$ is trivial.</p> <p>See also this MSE question: <a href="https://math.stackexchange.com/questions/34888/classification-of-general-fibre-bundles">Classification of general fibre bundles</a></p>
206,636
<p>What's the fastest way to find the local maxima of a 2D list? <em>E.g.</em></p> <pre><code>nx = ny = 100; dat = Table[Sin[2. \[Pi] x/nx] (0.1 + Cos[2. \[Pi] y/ny]), {y, 0, ny}, {x, 0, nx}]; ListPlot3D[dat] </code></pre> <p><img src="https://i.stack.imgur.com/MyGBl.png" alt="Mathematica graphics"></p> <p>This (updated) function has three local maxima of different heights:</p> <pre><code>Position[MaxDetect[dat], 1] (* {{1, 26}, {51, 76}, {101, 26}} *) dat[[1, 26]] dat[[51, 76]] dat[[101, 26]] (* 1.1, 0.9, 1.1 *) </code></pre> <p>My original attempt was super-slow:</p> <pre><code>RepeatedTiming[MaxDetect[Chop@dat];][[1]] (* 1.55 *) </code></pre> <p>Turns out using <code>Chop</code> is a very bad idea. Without it is 100X faster:</p> <pre><code>RepeatedTiming[MaxDetect[dat];][[1]] (* 0.016 *) </code></pre> <p>Along the way I discovered another version that is 2X faster yet:</p> <pre><code>RepeatedTiming[MaxDetect[Image[dat]];][[1]] (* 0.0067 *) </code></pre> <p><strong>Questions</strong></p> <ol> <li>Why is <code>MaxDetect</code> so much slower when <code>Chop</code> is applied? (I should add that my actual non-example problem has lots of small values that needed to <code>Chop</code>-ping)</li> <li>Why does converting to an <code>Image</code> speed it up further?</li> <li>Is there any faster way available?</li> </ol>
Niki Estner
242
<blockquote> <p>Is there any faster way available?</p> </blockquote> <p>There is, if you're willing to use a different definition of "local maximum". If a local maximum is any value at least as high as any value in an 11x11 neighborhood, then you can use <code>UnitStep[dat - Dilation[dat, 5]]</code>, which is about 16 times faster for your data:</p> <pre><code>RepeatedTiming[MaxDetect[dat];][[1]] </code></pre> <blockquote> <p>0.016</p> </blockquote> <pre><code>RepeatedTiming[UnitStep[dat - Dilation[dat, 5]];][[1]] </code></pre> <blockquote> <p>0.00101</p> </blockquote> <p>But the result is slightly different:</p> <p><a href="https://i.stack.imgur.com/62RbQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/62RbQ.png" alt="enter image description here"></a></p> <p>Note the marked pixels on the left border in the <code>dat-Dilation</code> version. By the definition above, these pixels mark one extended local maximum.</p>
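For readers who want to sanity-check the three peaks outside Mathematica, here is a pure-Python sketch that rebuilds the same grid and applies a stricter definition than either `MaxDetect` or the `Dilation` trick: a point counts as a local maximum only if it strictly exceeds all of its (up to 8) immediate neighbors. This is an illustration, not a drop-in replacement.

```python
import math

# rebuild the sample grid from the question: sin(2 pi x/100) * (0.1 + cos(2 pi y/100))
nx = ny = 100
dat = [[math.sin(2 * math.pi * x / nx) * (0.1 + math.cos(2 * math.pi * y / ny))
        for x in range(nx + 1)] for y in range(ny + 1)]

def strict_local_maxima(grid):
    # a cell is a strict local maximum if it is greater than every existing neighbor
    rows, cols = len(grid), len(grid[0])
    hits = []
    for i in range(rows):
        for j in range(cols):
            neighbors = [grid[i + di][j + dj]
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if (di, dj) != (0, 0)
                         and 0 <= i + di < rows and 0 <= j + dj < cols]
            if all(grid[i][j] > v for v in neighbors):
                hits.append((i, j))
    return hits

maxima = strict_local_maxima(dat)
```

The strict comparison deliberately skips the zero plateaus along the left and right borders that the `Dilation` version marks, so only the three genuine peaks (0-indexed row/column) survive.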
144,709
<p>Recently, I was informed that we can verify the famous formula about <strong>$\mathrm{lcm}(a,b)$</strong> and <strong>$\gcd(a,b)$</strong> which is $$\mathrm{lcm}(a,b)=\frac{|ab|}{\gcd(a,b)} $$ via group theory. </p> <p>The least common multiple of two integers $a$ and $b$, usually denoted by <strong>$\mathrm{lcm}(a,b)$</strong>, is the smallest positive integer that is a multiple of both $a$ and $b$ and the greatest common divisor <strong>($\gcd$)</strong>, of two or more non-zero integers, is the largest positive integer that divides the numbers without a remainder.</p> <p>I do not know if we can prove this equation by using the groups or not, but if we can I am eager to know the way someone face it. Thanks.</p>
lhf
589
<p>Consider the canonical map $\mathbb Z \to \mathbb Z/(a) \times \mathbb Z/(b)$ given by $x\mapsto (x \bmod a, x \bmod b)$.</p> <p>What is the kernel? What is the image?</p>
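To make the hint concrete: the kernel of that map is $\mathrm{lcm}(a,b)\,\mathbb Z$ and the image has $ab/\gcd(a,b)$ elements, which is exactly the identity in question. A quick numerical sanity check (illustrative only, not part of the group-theoretic proof):

```python
from math import gcd

def lcm_via_gcd(a, b):
    # the identity under test: lcm(a, b) = |ab| / gcd(a, b)
    return abs(a * b) // gcd(a, b)

def lcm_brute(a, b):
    # smallest positive common multiple, found by direct search
    m = max(a, b)
    while m % a or m % b:
        m += 1
    return m

ok = all(lcm_via_gcd(a, b) == lcm_brute(a, b)
         for a in range(1, 40) for b in range(1, 40))
```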
3,809,026
<p>I'm trying to solve this problem in the implicit differentiation section of the book I'm going through:</p> <blockquote> <p>The equation that implicitly defines <span class="math-container">$f$</span> can be written as</p> <p><span class="math-container">$y = \dfrac{2 \sin x + \cos y}{3}$</span></p> <p>In this problem we will compute <span class="math-container">$f(\pi/6)$</span>. The same method could be used to compute <span class="math-container">$f(x)$</span> for any value of <span class="math-container">$x$</span>.</p> <p>Let <span class="math-container">$a_1 = \dfrac{2\sin(\pi/6)}{3} = \dfrac{1}{3}$</span></p> <p>and for every positive integer <span class="math-container">$n$</span> let</p> <p><span class="math-container">$a_{n+1} = \dfrac{2\sin(\pi/6) + \cos a_n}{3} = \dfrac{1 + \cos a_n}{3}$</span></p> <p>(a) Prove that for every positive integer <span class="math-container">$n$</span>, <span class="math-container">$|a_n - f(\pi/6)| \leq 1/3^n$</span> (Hint: Use mathematical induction) (b) Prove that <span class="math-container">$\lim_{n \to \infty} a_n = f(\pi/6)$</span></p> </blockquote> <p>Now, I'm not sure how to solve <span class="math-container">$f(\pi/6)$</span> in the base case of the induction proof.</p> <p>Also, does this above pattern of assuming <span class="math-container">$a_{n+1}$</span> and using that to compute <span class="math-container">$f(x)$</span> for any value of <span class="math-container">$x$</span> has any name ? I would like to read more about it.</p>
Javi
554,663
<p>Yes. If the proposition <span class="math-container">$P(n) $</span> holds true for <span class="math-container">$n \geq n_0$</span>, and you want to prove it by induction, you could try to prove <span class="math-container">$P (n_0)$</span> is true, <span class="math-container">$P (n_1) $</span> is true, and so on. This is equivalent to proving the base case many times... but since you are going to need an inductive step anyway (because you are not going to be around here long enough to prove infinitely many propositions), there's no need to prove more than one case to serve as the base case, which will determine the first element <span class="math-container">$n_0$</span> for which <span class="math-container">$P (n)$</span> is true. Of course you could choose <span class="math-container">$n_1 &gt; n_0$</span> for the base case, but then you will be proving that <span class="math-container">$P (n) $</span> is true for <span class="math-container">$n \geq n_1$</span>, and still need a separate proof for <span class="math-container">$P (n_0)$</span>.</p>
3,907,519
<p>Prove that for all <span class="math-container">$a,b \in\mathbb Z$</span>, if <span class="math-container">$a+b$</span> is even then <span class="math-container">$a-b$</span> is even.</p> <p>I started off by assuming that <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are both odd integers, <span class="math-container">$2k+1$</span> and <span class="math-container">$2m+1$</span>, such that their sum is <span class="math-container">$2(k+m+1)$</span>, which is an even integer. I then concluded that <span class="math-container">$a-b$</span> is <span class="math-container">$2(k-m)$</span>, which is also even, hence the statement is true. My question is: is this enough to prove that the statement is true, or do I also have to prove it when both <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are even, and when one of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is even and the other is odd? Thanks.</p>
DMcMor
155,622
<p>So far you've only proved that the sum of two odd integers is even. Try dropping that assumption and see what you can do, i.e. suppose <span class="math-container">$a + b = 2n$</span>, for some integers <span class="math-container">$a,b,n$</span>. Can you get from that equation to an equation involving <span class="math-container">$a - b$</span>?</p>
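The algebraic core of the hint is that $a-b=(a+b)-2b$, so $a-b$ differs from $a+b$ by an even number and the two have the same parity. A brute-force check over a small range (illustration only):

```python
def implication_holds(a, b):
    # if a+b is even, then a-b must also be even; an odd a+b imposes nothing
    if (a + b) % 2 == 0:
        return (a - b) % 2 == 0
    return True

ok = all(implication_holds(a, b)
         for a in range(-30, 31) for b in range(-30, 31))
```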
58,920
<p>I have a csv that encodes the results of plugging certain parameters (say, <code>A</code> and <code>B</code>) into certain functions (say, <code>f</code> and <code>g</code>). Think of a matrix with (say, 4) columns and a million rows, with the columns corresponding to <code>A, B, C = f(A, B)</code>, and <code>D = g(A, B)</code> respectively. I'd like to be able to use the dynamic visualization features of Mathematica, but this requires turning this data into discrete functions, and the columns into their own lists to use as domains in the Manipulate environment. </p> <p>I want to be able to do something like <code>Manipulate[Plot[function[A, B], {A, (list of values)}], {B, (list of values)}, {function, {f, g}}]</code></p> <p>I have been poring over the Help guide for hours and this seems to be such a basic thing to do that I feel crazy for not being able to find it. </p>
hieron
16,373
<p>C, D, E, I, K, N, O ("ONCE KID") are symbols already in use by Mathematica; you are not following the recommended convention that user variables are written in lowercase.</p> <p>You can see this because the letter is colored black, not blue, when you type it.</p> <p>Most probably an <code>Evaluate</code> solves the other problem in your given metacode:</p> <pre><code>Manipulate[Plot[function[A, B] // Evaluate, {A, (list of values)}], {B, (list of values)}, {function, {f, g}}]</code></pre>
13,478
<p>This may be a silly question - but are there interesting results about the invariant: the minimal size of an open affine cover? For example, can it be expressed in a nice way? Maybe under some additional hypotheses?</p>
Emerton
2,874
<p>In fact this is a difficult question in general, I believe. For example, if $X$ is a closed subvariety of ${\mathbb P}^n$, then asking for the minimal number of affine opens that cover the complement ${\mathbb P}^n\setminus X$ is the same as asking for the minimal number of hypersurfaces whose (set-theoretic) intersection equals $X$. I believe this question is open in general.</p>
1,028,301
<p>What I've first done is show that $Aut(\mathbb Z / 24\mathbb Z)$ is isomorphic to $\mathbb Z_2\oplus\mathbb Z_2\oplus\mathbb Z_2$, due to it having 8 elements and the greatest order of any element being 2.</p> <p>Now $M_2(\mathbb Z/3\mathbb Z)$ is isomorphic to $\mathbb Z_3\oplus\mathbb Z_3\oplus\mathbb Z_3\oplus\mathbb Z_3$ (right?) so I need to find a homomorphism between those two groups.</p> <p>I'm stuck here. Any help is appreciated.</p>
Stahl
62,500
<p><strong>Hint:</strong> If $\phi : M_2(\Bbb Z/(3))\to\operatorname{Aut}\Bbb Z/(24)$ is a homomorphism and $e\neq M\in M_2(\Bbb Z/(3))$, you must have $e = \phi(e) = \phi(M^{ord(M)}) = \phi(M)^{ord(M)}$. Hence, $ord(M)\mid \#\operatorname{Aut}\Bbb Z/(24) = 8$. But...</p>
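Following the hint to its conclusion: in the additive group $M_2(\Bbb Z/(3))\cong(\Bbb Z/3)^4$ every non-identity element has order 3, and since $3\nmid 8$ the only homomorphism into a group of order 8 is trivial. A brute-force order check, encoding matrices as 4-tuples of entries (illustrative sketch, not part of the original hint):

```python
from itertools import product

ZERO = (0, 0, 0, 0)

def add(u, v):
    # entrywise addition mod 3 models matrix addition in M_2(Z/3)
    return tuple((x + y) % 3 for x, y in zip(u, v))

def additive_order(m):
    # smallest k >= 1 with k*m = 0
    k, cur = 1, m
    while cur != ZERO:
        cur = add(cur, m)
        k += 1
    return k

orders = {additive_order(m) for m in product(range(3), repeat=4)}
```

Since `orders` contains only 1 and 3, and neither 3 nor any multiple of it divides 8, the hint forces $\phi(M)=e$ for every $M$.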
618,166
<p>Does there exists a real valued function which is everywhere continuous and differentiable at exactly one point on the real line?</p>
ncmathsadist
4,154
<p>You know that you can produce a function $f$ so that $f$ is continuous and differentiable nowhere. Pick $a\in\mathbb{R}$; put $g(x) = (x - a)^2 f(x).$ The function $g$ is differentiable exactly at $a$ and nowhere else.</p>
1,382,003
<p>I'm trying to solve problem 7 from the IMC 2015, Blagoevgrad, Bulgaria (Day 2, July 30). Here is the problem:</p> <blockquote> <p>Compute $$\large\lim_{A\to\infty}\frac{1}{A}\int_1^A A^\frac{1}{x}\,\mathrm dx$$</p> </blockquote> <p>And here is my approach:</p> <p>Since $A^\frac{1}{x}=\exp\left(\frac{\ln A}{x}\right)$, then using the Taylor series for the exponential function, we have $$\exp\left(\frac{\ln A}{x}\right)=1+\frac{\ln A}{x}+\frac{\ln^2 A}{2!\,x^2}+\frac{\ln^3 A}{3!\,x^3}+\cdots=1+\frac{\ln A}{x}+\sum_{k=2}^\infty\frac{\ln^k A}{k!\,x^k}$$ Hence, integrating term by term is trivial. \begin{align} \lim_{A\to\infty}\frac{1}{A}\int_1^A A^\frac{1}{x}\,\mathrm dx&amp;=\lim_{A\to\infty}\frac{1}{A}\int_1^A \left[1+\frac{\ln A}{x}+\sum_{k=2}^\infty\frac{\ln^k A}{k!\,x^k}\right]\,\mathrm dx\\ &amp;=\lim_{A\to\infty}\frac{1}{A}\left[A-1+\ln^2 A-\sum_{k=2}^\infty\frac{\ln^k A}{k!\,(k-1)A^{k-1}}+\sum_{k=2}^\infty\frac{\ln^k A}{k!\,(k-1)}\right]\\ &amp;=1+\lim_{A\to\infty}\sum_{k=2}^\infty\frac{\ln^k A}{k!\,A(k-1)}\\ \end{align} I'm stuck here because I can't evaluate the last term as $k\to\infty$. My guess is that the answer is $1$, but I'm not sure. Is my approach correct? If it's correct, how does one evaluate the last limit? I'm also interested in knowing other approaches to this problem in a formal mathematical setting. Thanks.</p>
Erick Wong
30,402
<p>The key to showing that this limit converges to <span class="math-container">$1$</span> is to show that the function <span class="math-container">$A^{1/x}$</span> decreases quickly towards <span class="math-container">$1$</span> as <span class="math-container">$x$</span> increases beyond <span class="math-container">$1$</span>, so that the integral isn't significantly larger than <span class="math-container">$\int_1^A 1\,dx \sim A$</span>. To this end we will focus on showing that the difference (which is clearly positive) is small enough:</p> <p><span class="math-container">$$\int_1^A (A^{1/x} - 1)\,dx = o(A).$$</span></p> <p>This can be done be with three piecewise-constant approximations:</p> <p>1) We first consider <span class="math-container">$x$</span> close to <span class="math-container">$1$</span>, where we don't have an upper bound better than <span class="math-container">$A^{1/x} \le A$</span>. So to get a contribution of <span class="math-container">$o(A)$</span> we need to cut it off at <span class="math-container">$x = 1 + o(1)$</span>. We want <span class="math-container">$A^{1/x}$</span> to shrink down to a decent amount, so it behooves us to choose the <span class="math-container">$o(1)$</span> to be very slowly shrinking, say at <span class="math-container">$x = 1 + 2/\sqrt{\ln A}$</span>. Then the contribution from this piece is <span class="math-container">$$\int_1^{1 + 2/\sqrt{\ln A}} (A^{1/x} - 1)\, dx \le \int_1^{1 + 2/\sqrt{\ln A}} A\, dx = \frac{2A}{\sqrt{\ln A}} = o(A).$$</span></p> <p>2) Now we consider <span class="math-container">$x \ge 1 + 2/\sqrt{\ln A}$</span>. For large enough <span class="math-container">$A$</span>, we have <span class="math-container">$1/x \le 1 - 1/\sqrt{\ln A}$</span>. 
This means for <span class="math-container">$x \ge 1 + 2/\sqrt{\ln A}$</span>, <span class="math-container">$$\exp\big(\tfrac{\ln A}{x}\big) \le A \exp\big(-\tfrac{\ln A}{\sqrt{\ln A}}\big) = A \exp(-\sqrt{\ln A}).$$</span></p> <p>This is considerably smaller than <span class="math-container">$A$</span>, so we are free to use this bound for a sizable range of <span class="math-container">$x$</span>. We'd like to go beyond <span class="math-container">$x = \ln A$</span> which is where <span class="math-container">$A^{1/x}$</span> actually gets close to <span class="math-container">$1$</span>. Happily, this goal is in reach as (for all large <span class="math-container">$A$</span>) <span class="math-container">$\sqrt{\ln A} &gt; 3 \ln \ln A$</span>, so <span class="math-container">$$ \exp(-\sqrt{\ln A}) &lt; \exp(-3 \ln \ln A) = 1/(\ln A)^3.$$</span></p> <p>That means we can set the next cutoff at <span class="math-container">$x = (\ln A)^2$</span>, and we still get <span class="math-container">$$\int_{1 + 2/\sqrt{\ln A}}^{(\ln A)^2} A^{1/x} - 1 \, dx &lt; \int_0^{(\ln A)^2} A \exp(-\sqrt{\ln A})\, dx &lt; A \frac{(\ln A)^2}{(\ln A)^3} = o(A).$$</span></p> <p>3) Now the last piece from <span class="math-container">$x = (\ln A)^2$</span> to <span class="math-container">$A$</span>. We chose the previous cutoff to make <span class="math-container">$A^{1/x}$</span> very close to one, indeed for <span class="math-container">$x \ge (\ln A)^2$</span> we have</p> <p><span class="math-container">$$A^{1/x} \le \exp(1/\ln A) = 1 + o(1),$$</span> since <span class="math-container">$\exp(1/\ln A) \to 1$</span> as <span class="math-container">$A \to \infty$</span> (we could be more precise using Taylor expansion and say that the RHS is <span class="math-container">$1 + O(1/\ln A)$</span>). Therefore the contribution from this third interval is</p> <p><span class="math-container">$$\int_{(\log A)^2}^A (A^{1/x} - 1)\, dx \le \int_0^A (\exp(1/\ln A) - 1)\, dx
= \int_0^A o(1)\, dx = o(A).$$</span></p> <p>Combining 1), 2), 3) we have the desired upper bound. Although the calculations are a little messy the argument is elementary and doesn't require integrating infinite series term-by-term.</p>
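A numerical illustration of the limit (not part of the original proof): substituting $x=e^s$ concentrates quadrature points near $x=1$, where the integrand is largest. Heuristically the ratio sits around $1+1/\ln A$ for moderate $A$ and creeps toward 1 as $A$ grows.

```python
import math

def ratio(A, n=200_000):
    # trapezoidal rule for (1/A) * integral_1^A A^(1/x) dx,
    # after the substitution x = e^s, so s runs over [0, ln A]
    L = math.log(A)
    h = L / n
    total = 0.0
    for i in range(n + 1):
        s = i * h
        f = math.exp(L * math.exp(-s) + s)  # A^(1/x) * dx/ds
        total += f / 2 if i in (0, n) else f
    return total * h / A

r = ratio(1e6)
```

For $A=10^6$ the ratio is already close to 1, and increasing $A$ pushes it closer, consistent with the proof above.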
732,548
<p>The proof for this using Dependent Choice is easy. I forgot how to prove this with only AC$_\omega$. Could someone help?</p> <p>I feel really sorry that I keep asking these basic questions... This would be my last one. Thank you in advance.</p>
Jason Zimba
132,296
<p>Rename $x$ as $\Delta x$. That may help you to see that your limit is $\int_0^\infty{f(x)\,dx}$, where $f(x) = {2\over x^2+1}$. (You should recognize your limit as the limit of a Riemann sum.)</p>
3,653,016
<p>I am solving this equation: </p> <p><img src="https://i.stack.imgur.com/WN7ce.png" alt="Image of the equation"><br> My issue arises on line 2 where we have (n + 1 - 2 + 1)(n + 1 + 2)/2. This is what I understand: we have the formula n! = n(n+1)/2. Subbing values into the equation yields us with 10[(n+1)(n+1 + 1)/2]; however, we need to account for the fact that we are starting at j = 2. I would then do this by 10[(n+1-2)(n+1+1)/2]. What I don't understand is why in the solution they have given </p> <ul> <li>(n+1 - 2 <strong>- 1 +1</strong>) the -1 + 1</li> <li>(n + 1 <strong>+ 2</strong>) the + 2</li> </ul> <p>How did they make that logical jump, and what were the steps/thought process in doing so?</p>
user781829
781,829
<p>In line 2, the <span class="math-container">$\frac{(n+1+2)}{2}$</span> part is the average of the terms being summed. In this problem this is calculated by adding the first and last term and dividing by 2. The <span class="math-container">$(n+1-2+1)$</span> part is the number of terms being summed. This is calculated by subtracting the index of the first term (2) from the index of the last term (n+1) and adding one.</p> <p>The sum is equal to the average value of each term multiplied by the number of terms.</p> <p>Also, <span class="math-container">$n!=\frac{n(n+1)}{2}$</span> is not true. The correct equation is <span class="math-container">$\sum_{i=1}^n i = \frac{n(n+1)}{2}$</span></p>
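In other words, the closed form used on line 2 is (number of terms) times (first + last)/2. A quick check of that recipe against direct summation, for sums of $j$ from $2$ to $n+1$ (illustration only):

```python
def arithmetic_sum(first, last):
    n_terms = last - first + 1            # (index of last) - (index of first) + 1
    return n_terms * (first + last) // 2  # count times average

ok = all(arithmetic_sum(2, n + 1) == sum(range(2, n + 2))
         for n in range(1, 100))
```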
404,954
<p>I've never seen phrases like "$\sqrt{5}$ people" or "a set with $\pi$ many elements". Are there sets with cardinality, say, $\frac{1}{2}$?</p> <p><strong>Edit:</strong> As Brian M. Scott pointed out, the only real numbers that are cardinalities of sets are the non-negative integers. Could you explain why is it so?</p>
Nate Eldredge
822
<p>Fundamentally, cardinals and real numbers are different things. You can think of the cardinality of a set as some abstract object ("cardinal") assigned to it, in such a way that two sets get assigned the same cardinal if and only if there is a bijection between them.</p> <p>But there is a natural way to identify the <em>finite</em> cardinals with the natural numbers: namely, identify a cardinal $a$ with the natural number $n_a$ such that the set $\{1, 2, \dots, n_a\}$ has cardinality $a$ (i.e. any set with cardinality $a$ has a bijection with $\{1,2,\dots, n_a\}$). (In this discussion, "natural numbers" includes 0.) This is a nice identification because it makes cardinal arithmetic match up with the arithmetic of natural numbers. For instance:</p> <ul> <li><p>Ordering: $n_a \le n_b$ if and only if any set of cardinality $a$ has an injection into any set of cardinality $b$</p></li> <li><p>Addition: If $A,B$ are disjoint and have cardinalities $a,b$ respectively, then the cardinality $c$ of their union $C = A \cup B$ has $n_c = n_a + n_b$.</p></li> <li><p>Multiplication: If $A,B$ have cardinalities $a,b$ and $C = A \times B$ has cardinality $c$, then $n_c = n_a n_b$.</p></li> <li><p>Exponents: If $A,B$ have cardinalities $a,b$, and $C = A^B$ is the set of all functions from $B$ to $A$, then $n_c = n_a^{n_b}$.</p></li> </ul> <p>One could imagine a system that identified other (infinite) cardinals with real numbers other than the natural numbers. For instance, nobody could stop me from proposing a system in which the cardinality $\aleph_0$ of the set of integers is identified with the real number $22/7$. But this system wouldn't have the properties listed above. For instance, $22/7 \le 4$, but there is no injection from $\mathbb{Z}$ to $\{1,2,3,4\}$, and $\mathbb{Z} \times \{1,2,3,4,5,6,7\}$ definitely does <em>not</em> have the same cardinality as $\{1,2,\dots, 21,22\}$. 
In fact, it's not hard to see that there would be <em>no</em> way of identifying infinite cardinals with real numbers that would preserve the above list of properties.</p> <p>The properties above are very useful, and so in order to preserve them, we generally do not attempt to identify any other real numbers with cardinals. In principle, there could be another system that, although it didn't satisfy the above properties, had some other useful properties. But I've never heard of one that was useful enough to attract much attention.</p>
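The four bullet-point properties are easy to spot-check for small finite sets; the snippet below is only an illustration of the correspondence, with arbitrarily chosen sets:

```python
from itertools import product

A = {'a', 'b', 'c'}    # cardinality 3
B = {0, 1}             # cardinality 2
C_extra = {10, 20}     # disjoint from A, cardinality 2

union_card = len(A | C_extra)           # addition: 3 + 2
prod_card = len(set(product(A, B)))     # multiplication: 3 * 2
# a function B -> A is a choice of an element of A for each element of B
func_card = len(list(product(A, repeat=len(B))))  # exponent: 3^2
```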
2,898,616
<p>My Math teacher told me that $\frac{dx}{dy} = \frac{1}{\frac{dy}{dx}}$. I asked for a proof and he gave me the result by using the chain rule. I don't understand why this is the case for every function because the function $y=x^2$ doesn't have an inverse function. If this is the case then why is $\frac{dx}{dy}$ not $\frac{-1}{2x}$? Maybe it only works for injective functions or we assume only one of the values to be true.</p>
J.G.
56,861
<p>The branch choice doesn't matter, since if $x=-y^{1/2}$ then $\frac{dx}{dy}=-\frac{1}{2}y^{-1/2}=\frac{1}{2x}$. A general proof that $\frac{dx}{dy}=\frac{1}{\frac{dy}{dx}}$ is easy if you write each side as a limit by definition.</p>
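A finite-difference sanity check of the branch computation (illustrative, with an arbitrarily chosen point $y_0=4$ on the branch $x=-\sqrt y$):

```python
import math

y0 = 4.0
x0 = -math.sqrt(y0)   # the branch x = -sqrt(y), so x0 = -2
h = 1e-6

def x_of_y(y):
    return -math.sqrt(y)

# numerical dx/dy by central difference
dxdy = (x_of_y(y0 + h) - x_of_y(y0 - h)) / (2 * h)
dydx = 2 * x0         # y = x^2 gives dy/dx = 2x
```

Both sides come out to $-1/4$, matching $\frac{dx}{dy}=\frac{1}{2x}$ on this branch.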
4,417,576
<p>Say we divide 1858.54 cm by 5, where 5 is an exact number. We get the quotient 371.708.</p> <p>According to the rules of significant figures, we would write 371.708 cm, since &quot;1858.54&quot; has 6 significant figures, and 5, being an exact number, doesn't count.</p> <p>My question is, how do you make sense of <strong>371.708 cm</strong>, which is more precise than the original? That is, the quotient is precise to the nearest thousandth while the original is precise to the nearest hundredth. To me that doesn't make sense.</p>
David K
139,123
<p>According to you, significant figures don't matter. What matters is whether you have hundredths or thousandths. (Or, presumably, tenths or hundredths, etc.)</p> <p>So let's apply your logic in the other direction.</p> <p>Let's try a simpler example. Take a measurement of <span class="math-container">$102.3$</span> and divide it by the exact number <span class="math-container">$100.$</span></p> <p>If <span class="math-container">$102.3$</span> is accurate to the nearest tenth, the actual value <span class="math-container">$x$</span> that was measured is somewhere in the interval from <span class="math-container">$102.25$</span> to <span class="math-container">$102.35.$</span> Anything less than <span class="math-container">$102.25$</span> would be nearer to <span class="math-container">$102.2,$</span> and anything greater than <span class="math-container">$102.35$</span> would be nearer to <span class="math-container">$102.4.$</span></p> <p>When we divide a number <span class="math-container">$x$</span> that is no less than <span class="math-container">$102.25$</span> by exactly <span class="math-container">$100$</span>, the result is no less than <span class="math-container">$1.0225.$</span> When we divide a number <span class="math-container">$x$</span> that is no greater than <span class="math-container">$102.35$</span> by exactly <span class="math-container">$100$</span>, the result is no greater than <span class="math-container">$1.0235.$</span></p> <p>So whatever the true exact value of <span class="math-container">$x$</span> is, when we divide by exactly <span class="math-container">$100$</span> we get a number that is at least <span class="math-container">$1.0225$</span> and at most <span class="math-container">$1.0235.$</span></p> <p>A good way to represent the resulting number without throwing away information is to write it as <span class="math-container">$1.023,$</span> accurate to the nearest thousandth.</p> <p>If you write the result as <span 
class="math-container">$1.0$</span> (to the nearest tenth) then you have thrown away information.</p> <hr /> <p>For an even more extreme example: The number <span class="math-container">$12.3,$</span> accurate to the nearest tenth, divided by exactly <span class="math-container">$1000.$</span></p> <hr /> <p>This works in the other direction too. If you start with a number accurate to the nearest thousandth and multiply by exactly <span class="math-container">$100,$</span> you get a number accurate to the nearest tenth, not the nearest thousandth. Any error in the original measurement is magnified by the multiplication.</p>
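The interval bookkeeping in the $102.3/100$ example can be written out directly; this snippet just mirrors the arithmetic above (tolerances are only for floating point):

```python
# a reading of 102.3 "to the nearest tenth" means the true value lies in:
x_lo, x_hi = 102.25, 102.35

# dividing by the exact number 100 scales the whole interval
q_lo, q_hi = x_lo / 100, x_hi / 100

interval_width = q_hi - q_lo  # uncertainty shrinks from 0.1 to 0.001
```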
119,492
<p>First I define a function, just a sum of a few sin waves at different angular frequencies:</p> <pre><code>ubdat = 50; ws = 10*{2, 5, 10, 20, 40} fn = Table[Sum[Sin[w*x], {w, ws}], {x, 0, ubdat, .001}]; pts = Length@fn ListPlot[fn, Joined -&gt; True, PlotRange -&gt; {{0, 1000}, All}] {20, 50, 100, 200, 400} </code></pre> <p><a href="https://i.stack.imgur.com/ZCcaO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZCcaO.png" alt="enter image description here"></a></p> <p>If I take the Fourier transform and scale it correctly, you can see the correct peaks:</p> <pre><code>fnft = Abs@Fourier@fn; fnftnormed = Table[{2*Pi*i/ubdat, fnft[[i]]}, {i, Length@fnft}]; ListPlot[fnftnormed, Joined -&gt; True, PlotRange -&gt; {{0, 500}, All}] </code></pre> <p><a href="https://i.stack.imgur.com/adDJS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/adDJS.png" alt="enter image description here"></a></p> <p>Now, I want to do a low pass filter on it, for, say, $\omega_c=140$. This should get rid of the peaks at 200 and 400, ideally. Doing it this way returns the same plots as above:</p> <pre><code>fnfilt = LowpassFilter[fn, 140]; ListPlot[fnfilt, Joined -&gt; True, PlotRange -&gt; {{0, 1000}, All}] fnfiltft = Abs@Fourier@fnfilt; fnfiltftnormed = Table[{2*Pi*i/ubdat, fnfiltft[[i]]}, {i, Length@fnfiltft}]; ListPlot[fnfiltftnormed, Joined -&gt; True, PlotRange -&gt; {{0, 500}, All}] </code></pre> <p>I assume the problem is something to do with defining SampleRate, but the documentation explaining how it's defined or how to use it on the <a href="https://reference.wolfram.com/language/ref/LowpassFilter.html" rel="noreferrer">LowpassFilter page</a> is <em>very</em> sparse:</p> <blockquote> <p>By default, SampleRate->1 is assumed for images as well as data. For a sampled sound object of sample rate of r, SampleRate->r is used. 
With SampleRate->r, the cutoff frequency should be between 0 and $r*\pi$.</p> </blockquote> <p>It appears to have a broken link at the bottom, so maybe that had something helpful. The page for SampleRate itself has even less info.</p> <p>My naive attempt at choosing a sample rate would be dividing the number of samples by the total range, so in this case, <code>Floor[pts/ubdat]=1000</code>. Using this <em>does</em> affect the FT, but not a whole lot:</p> <pre><code>fnfilt = LowpassFilter[fn, 140, SampleRate -&gt; 1000]; ListPlot[fnfilt, Joined -&gt; True, PlotRange -&gt; {{0, 1000}, All}] fnfiltft = Abs@Fourier@fnfilt; fnfiltftnormed = Table[{2*Pi*i/ubdat, fnfiltft[[i]]}, {i, Length@fnfiltft}]; ListPlot[fnfiltftnormed, Joined -&gt; True, PlotRange -&gt; {{0, 500}, All}] </code></pre> <p><a href="https://i.stack.imgur.com/XUqiC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XUqiC.png" alt="enter image description here"></a></p> <p>So what am I missing? I've tried googling for some sort of guide on using filters in Mathematica, but I can't find anything and it's very frustrating.</p>
Community
-1
<p>I choose a simple Butterworth filter and show, based on the spectra, how it works. Cutoff frequency and filter order can be set.</p> <pre><code>ubdat = 50; ws = 10*{2, 5, 10, 20, 40}; fn = Table[Sum[Sin[w*x], {w, ws}], {x, 0, ubdat, 0.001}]; Manipulate[ filter = ButterworthFilterModel[{"Lowpass", filterOrder, cutoffFrequency}]; discreteFilter = ToDiscreteTimeModel[filter, 0.001]; (* for 1-D filter one can set `OutputResponse` instead of `RecurrenceFilter` *) (* recurrence = RecurrenceFilter[discreteFilter, fn]; *) recurrence = OutputResponse[discreteFilter, fn][[1]]; fourier = Abs@Fourier@recurrence; fourierNormed = Table[{2*Pi*i/ubdat, fourier[[i]]}, {i, Length@fourier}]; ListLinePlot[fourierNormed, PlotRange -&gt; {{0, 500}, All}], {filterOrder, {1, 2, 3, 4, 5}}, {cutoffFrequency, {400, 300, 200, 140, 100}}] </code></pre> <p><a href="https://i.stack.imgur.com/jfmUc.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jfmUc.gif" alt="enter image description here"></a></p>
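For readers without Mathematica: the qualitative behavior can be imitated with a first-order IIR filter in plain Python. This is not Mathematica's Butterworth design; it is a minimal sketch assuming a first-order recurrence with cutoff `wc` in rad/s:

```python
import math

def first_order_lowpass(signal, dt, wc):
    # y[n] = y[n-1] + alpha*(x[n] - y[n-1]), with alpha set by the cutoff wc
    alpha = wc * dt / (1.0 + wc * dt)
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

dt, wc = 0.001, 140.0
t = [i * dt for i in range(2000)]
low = first_order_lowpass([math.sin(20 * ti) for ti in t], dt, wc)
high = first_order_lowpass([math.sin(400 * ti) for ti in t], dt, wc)

def steady_amp(s):
    # peak amplitude over the second half, after the startup transient dies out
    return max(abs(v) for v in s[len(s) // 2:])
```

A 20 rad/s tone passes nearly unattenuated while a 400 rad/s tone is strongly damped; the Butterworth model above produces the same effect with a sharper rolloff.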
168,926
<p>How much of ring theory arises simply from applying group-theoretic results to the additive structure, or semigroup/monoid-theoretic results to the multiplicative structure? To be more precise, can anyone describe <em>how much</em> rigidity is imposed on the structure of rings by the distributive axiom which "connects" addition and multiplication? What are some examples of results, say, where distributivity really plays a key role?</p>
Martin Brandenburg
1,650
<blockquote> <p>How much of ring theory arises simply from applying group-theoretic results to the additive structure, or semigroup/monoid-theoretic results to the multiplicative structure?</p> </blockquote> <p>Almost nothing. And in any case this would not be called ring theory. For example, a finite subgroup of the multiplicative group of a field is cyclic. This belongs to group theory, not to field or ring theory, although it is crucial for field theory.</p> <blockquote> <p>To be more precise, can anyone describe how much rigidity is imposed on the structure of rings by the distributive axiom which "connects" addition and multiplication? What are some examples of results, say, where distributivity really plays a key role?</p> </blockquote> <p>There is no field $K$ with $K^* \cong \mathbb{Z}$.</p> <p>This kind of rigidity arises everywhere. Just take any interesting result on rings and you will see that it makes no sense at all to try to prove this for pairs of (abelian group,monoid). So, just open a textbook and you will see lots of evidence that distributivity plays a key role.</p>
2,845,033
<p>Empirically this equality holds:</p> <p>$\gcd(10,8) = 2 $ and $\gcd(\operatorname{mod}(10,8),8) = \gcd(2,8) = 2 $</p> <p>$\gcd(18,9) = 9 $ and $\gcd( \operatorname{mod}(18,9),9) = \gcd(0,9) = 9 $</p> <p>I am stuck on how to prove it, though, and do not understand why this holds true.</p>
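The reason the identity holds: any common divisor of $a$ and $b$ also divides $a - qb = a \bmod b$, and conversely any common divisor of $a \bmod b$ and $b$ divides $a$, so both pairs have the same set of common divisors, hence the same greatest one. A brute-force confirmation over a small range (illustration only):

```python
from math import gcd

# check gcd(a, b) == gcd(a mod b, b) for many pairs
ok = all(gcd(a, b) == gcd(a % b, b)
         for a in range(1, 150) for b in range(1, 150))
```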
Robert Lewis
67,071
<p>For any planar system of the form</p> <p>$\dot x = X(x, y), \tag 1$</p> <p>$\dot y = Y(x, y), \tag 2$</p> <p>the <a href="https://en.wikipedia.org/wiki/Bendixson%E2%80%93Dulac_theorem" rel="nofollow noreferrer">Bendixson-Dulac criterion</a> states that there are no periodic orbits in a region $\Omega$ where</p> <p>$\nabla \cdot (\phi(X, Y)) = \nabla \cdot (\phi X, \phi Y) \ne 0, \tag 3$</p> <p>where $\phi(x, y)$ is some differentiable scalar function on $\Omega$. For the present system, we may take $\Omega = \Bbb R^2$ and</p> <p>$\dot x = y = X(x, y), \tag 4$</p> <p>$\dot y = -2x - y - 3x^4 + y^2 = Y(x, y); \tag 5$</p> <p>if we set</p> <p>$\phi(x, y) = e^{\beta x}, \tag 6$</p> <p>then we calculate:</p> <p>$\nabla \cdot (\phi(X,Y)) = \dfrac{\partial (\phi X)}{\partial x} + \dfrac{\partial (\phi Y)}{\partial y} = \dfrac{\partial (e^{\beta x}y)}{\partial x} + \dfrac{\partial (e^{\beta x}(-2x - y - 3x^4 + y^2))}{\partial y}$ $= \beta e^{\beta x}y + (e^{\beta x}(-1 + 2y)) = e^{\beta x}(\beta y - 1 + 2y) = e^{\beta x}((\beta + 2)y - 1); \tag 7$</p> <p>we observe that the function $e^{\beta x}((\beta + 2)y - 1)$ occurring on the right-hand side of (7) takes on the value zero precisely at that $y_0$ for which</p> <p>$(\beta + 2)y_0 - 1 = 0, \tag 8$</p> <p>that is, along the line</p> <p>$y = y_0 = \dfrac{1}{\beta + 2}, \tag 9$</p> <p>in the $xy$-plane. If we choose </p> <p>$\beta = \epsilon - 2, \tag{10}$</p> <p>for some real $\epsilon$, then since</p> <p>$\epsilon = \beta + 2, \tag{11}$</p> <p>we see that</p> <p>$y_0(\epsilon) = \dfrac{1}{\beta + 2} = \dfrac{1}{\epsilon}, \tag{12}$</p> <p>and we further observe that by choosing $\epsilon &gt; 0$ sufficiently small, not only may we ensure that $y_0(\epsilon)$ is arbitraritly large, but also that the function</p> <p>$(\beta + 2)y - 1 = \epsilon y - 1 \tag{13}$</p> <p>increases with increasing $y$; thus it is positive for $y &gt; y_0$ and negative for $y &lt; y_0$. 
</p> <p>Now if we had a periodic orbit, it would form the boundary of a compact subset $K \subset \Bbb R^2$ upon which the function $y$ is continuous, hence bounded above by some</p> <p>$y_M = \sup \{y \mid y \in K \}; \tag{14}$ </p> <p>thus for sufficiently small $\beta + 2 = \epsilon &gt; 0$ we have</p> <p>$y_M &lt; y_0; \tag{15}$</p> <p>that is, on $K$,</p> <p>$(\beta + 2)y - 1 &lt; 0 \Longrightarrow e^{\beta x}((\beta + 2)y - 1) &lt; 0; \tag{16}$</p> <p>but now by Bendixson-Dulac, <em>via</em> (7), this contradicts the existence of a periodic trajectory of the system (1)-(2); thus in particular this system cannot have a limit cycle in $\Bbb R^2$.</p>
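Equation (7) can be spot-checked numerically with central differences; the snippet below is an illustration with an arbitrary choice $\beta=-1.5$ (i.e. $\epsilon=0.5$), not part of the proof:

```python
import math

beta = -1.5  # so epsilon = beta + 2 = 0.5

def phi_X(x, y):
    return math.exp(beta * x) * y                           # e^(beta x) * X

def phi_Y(x, y):
    return math.exp(beta * x) * (-2*x - y - 3*x**4 + y**2)  # e^(beta x) * Y

def divergence(x, y, h=1e-6):
    # numerical d(phi X)/dx + d(phi Y)/dy
    return ((phi_X(x + h, y) - phi_X(x - h, y)) / (2 * h)
            + (phi_Y(x, y + h) - phi_Y(x, y - h)) / (2 * h))

def formula(x, y):
    # the closed form claimed in (7)
    return math.exp(beta * x) * ((beta + 2) * y - 1)

max_err = max(abs(divergence(x, y) - formula(x, y))
              for x in (-0.5, 0.3, 1.0) for y in (-0.7, 0.1, 2.0))
```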
856,556
<p>Let $f_n(x)=\frac{1}{n}\chi_{[0,n]}(x)$, $x\in\mathbb{R}$, $n\in\mathbb{N}$ and $\chi$ is the characteristic/indicator function. Now it is clear that $f_n\rightarrow 0$, but in the text I am using it says we can't apply the Dominated Convergence theorem as there is no function to dominate $f_n$</p> <p>I have trouble seeing that. To me the $f_n$'s are dominated by the constant function $1$. So I think can apply the Dominated Convergence theorem in some cases which depends on the choice measure where the constant functions are integrable. So if we are using the Lebesgue measure (on $\mathbb{R}$) then we can't use the Dominated Convergence theorem (and the text is right), but if are using a finite measure then will it be valid to use the Dominated Convergence theorem and we will then have $\lim\limits_n\int f_n\rightarrow0$. Also if our space was just a bounded interval and we have the Lebesgue measure (since it is now a finite measure on the internal), then will it be fine to use the Dominated Convergence theorem on the $f_n$'s ? Is this right or wrong? </p>
Michael Hardy
11,667
<p>The smallest thing that could possibly dominate the functions $f_n$, $n=1,2,3,\ldots$ is the function $$ x\mapsto g(x) = \sup_n |f_n(x)| = \begin{cases} \dfrac{1}{\lceil x \rceil} &amp; \text{if }x&gt;0, \\[10pt] 0 &amp; \text{if }x&lt;0. \end{cases} $$ The problem is that $$ \int_{[0,\infty)} g(x)\,dx = \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> <p>If the integral of the dominating function is infinite, then the dominated convergence theorem is not applicable.</p>
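<p>(Concretely: $\int f_n = \frac1n \cdot n = 1$ for every $n$ even though $f_n \to 0$ pointwise, and the partial integrals of $g$ over $[0,N]$ are the harmonic numbers $H_N$, which grow without bound. A small sketch in exact arithmetic:)</p>

```python
from fractions import Fraction

# integral of f_n over [0, n] is (1/n) * n = 1 for every n,
# even though f_n -> 0 pointwise
for n in range(1, 50):
    assert Fraction(1, n) * n == 1

# the smallest dominating candidate g = sup_n f_n satisfies
# integral of g over [0, N] = H_N (harmonic number), unbounded in N
def harmonic(N):
    return sum(Fraction(1, k) for k in range(1, N + 1))

assert harmonic(100) > 5              # H_100 is about 5.19 already
assert harmonic(200) > harmonic(100)  # strictly increasing, no finite limit
```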
2,493,802
<p>If $A \subset \mathbb R$, we have the distance between a point in $\mathbb R$ and $A$ as</p> <p>$$d(x,A)=\inf\{|x-a| \mid a \in A\}.$$</p> <p><strong>For $\varepsilon&gt;0$, if we define $A(\varepsilon) = \{x \in \mathbb R \mid d(x,A) &lt; \varepsilon \}$, how do we show this set is open?</strong></p> <hr> <p>I know that $A \subset A(\varepsilon)$ since for any point in $a \in A$ we will have $d(a, A)=0 &lt; \varepsilon$ for any $\varepsilon &gt;0$.</p> <p>I can see that $A(\varepsilon)$ is sort of like a neighborhood of the set $A$ but I am having trouble formalizing this.</p>
qualcuno
362,866
<p>If you know that a function is continuous if and only if preimages of open sets are open, you can just prove that $d(x,A)$ is continuous, and then observe that if $f(x) = d(x,A)$,</p> <p>$$ A(\epsilon) = f^{-1}(-\infty, \epsilon) $$</p> <p>which is in fact the preimage via a continuous function of the open set $(-\infty, \epsilon)$.</p>
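<p>(To make the continuity step concrete: $d(\cdot, A)$ is $1$-Lipschitz, $|d(x,A) - d(y,A)| \le |x - y|$, which is the usual way to prove it is continuous. Below is a small numerical illustration; the finite set $A$ and the sample points are arbitrary choices of mine.)</p>

```python
import math, random

def dist(x, A):
    """d(x, A) = inf{|x - a| : a in A} for a finite sample A."""
    return min(abs(x - a) for a in A)

A = [-2.0, 0.5, 3.0]
random.seed(0)
for _ in range(1000):
    x = random.uniform(-10, 10)
    y = random.uniform(-10, 10)
    # 1-Lipschitz estimate, which gives continuity of d(., A)
    assert abs(dist(x, A) - dist(y, A)) <= abs(x - y) + 1e-12

# hence A(eps) = {x : d(x, A) < eps} is the preimage of the open ray (-inf, eps)
eps = 0.25
assert dist(0.6, A) < eps      # 0.6 is within 0.25 of 0.5, so 0.6 is in A(eps)
assert not dist(1.5, A) < eps  # 1.5 is at distance 1.0 from A
```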
3,930,373
<p>Hi, so I'm working on a question about finding the <span class="math-container">$rank(A)$</span> and the <span class="math-container">$dim(Ker(A))$</span> of a 7x5 matrix, without being given an actual matrix to work from.</p> <p>I have been told that the homogeneous equation <span class="math-container">$A\vec x=\vec0$</span> has general solution <span class="math-container">$\vec x=\lambda \vec v$</span> for some non-zero <span class="math-container">$\vec v$</span> in <span class="math-container">$R^{5}$</span>.</p> <p>So my thinking so far is that I know for an <span class="math-container">$m*n$</span> matrix we have:</p> <p><span class="math-container">$rk(A)+\dim\ker(A)=n$</span>, which must mean that <span class="math-container">$rk(A)+\dim\ker(A)=5$</span>,</p> <p>but this is where I get stuck and don't know how to proceed.</p> <p>Any help is greatly appreciated.</p> <p><a href="https://i.stack.imgur.com/2kLWC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2kLWC.png" alt="enter image description here" /></a> .</p> <p>This is the exact question for the person who asked.</p>
Barry Cipra
86,747
<p>In <em>complex</em> analysis, the assertion <span class="math-container">$\lim_{z\to0}\sqrt{-z^2}=0$</span> is defensibly true even if you don't bother to specify the sign of the square root for nonzero numbers. But as a <em>real</em>-valued function of a <em>real</em> variable, the expression <span class="math-container">$\sqrt{-x^2}$</span> makes no sense except at <span class="math-container">$x=0$</span>, so it really doesn't make sense to talk about its limit as <span class="math-container">$x$</span> tends to <span class="math-container">$0$</span>. Most formal definitions of limit require the point being approached to be a &quot;limit point&quot; for the domain of the function, meaning every (punctured) neighborhood of the point has nonempty intersection with the domain. The natural domain for the expression <span class="math-container">$\sqrt{-x^2}$</span> as a real-valued function of a real variable is the singleton set <span class="math-container">$\{0\}$</span>, which has no limit points.</p> <p><strong>Added later:</strong> To put things another way, asking for the meaning of <span class="math-container">$\lim_{x\to c}f(x)$</span> when <span class="math-container">$c$</span> is not a limit point for the domain of <span class="math-container">$f$</span> is a little like asking for the meaning of <span class="math-container">$\gcd(\sqrt2,\sqrt3)$</span>. That is, the greatest common divisor is well defined for pairs of integers (not both <span class="math-container">$0$</span>), but not for any old pair of numbers; likewise limits are only defined at limit points. 
Expressions like <span class="math-container">$\gcd(\sqrt2,\sqrt3)$</span> and <span class="math-container">$\lim_{x\to0}\sqrt{-x^2}$</span> can, perhaps, be usefully likened to Noam Chomsky's grammatically correct but semantically nonsensical sentence <a href="https://en.wikipedia.org/wiki/Colorless_green_ideas_sleep_furiously" rel="nofollow noreferrer">&quot;Colorless green ideas sleep furiously.&quot;</a></p>
3,259,910
<p>My question is: let <span class="math-container">$G$</span> be the group of polynomials under addition with coefficients from <span class="math-container">$\{0,1,2,3,4,5,6,7,8,9\}$</span> (addition of coefficients mod <span class="math-container">$10$</span>). If <span class="math-container">$f(x)=7x^2+5x+4$</span> and <span class="math-container">$g(x)=4x^2+8x+6$</span>, so that <span class="math-container">$f(x)+g(x)=x^2+3x$</span>, what are the orders of <span class="math-container">$f(x)$</span>, <span class="math-container">$g(x)$</span> and <span class="math-container">$f(x)+g(x)$</span>? I find that the answers are <span class="math-container">$10$</span>, <span class="math-container">$5$</span> and <span class="math-container">$10$</span>. Am I correct? (I noticed that the zero polynomial is the identity here.) Then, if <span class="math-container">$h(x)=a_nx^n+\dots+a_0$</span> belongs to <span class="math-container">$G$</span>, what is its order, given that</p> <p><span class="math-container">$\gcd(a_1,a_2,\dots,a_n)=1$</span>,</p> <p><span class="math-container">$\gcd(a_1,a_2,\dots,a_n)=2$</span>,</p> <p><span class="math-container">$\gcd(a_1,a_2,\dots,a_n)=5$</span>,</p> <p><span class="math-container">$\gcd(a_1,a_2,\dots,a_n)=10$</span>?</p> <p>Here I do not understand how to proceed.</p>
Key Flex
568,718
<p>Let <span class="math-container">$f(x)=x^3+bx^2+cx+1$</span>.</p> <p><span class="math-container">$f(0)=1&gt;0$</span></p> <p><span class="math-container">$f(-1)=b-c&lt;0$</span></p> <p>So the root <span class="math-container">$\alpha$</span> lies between <span class="math-container">$-1$</span> and <span class="math-container">$0$</span>; in particular <span class="math-container">$\sin\alpha\in(-1,0)$</span>.</p> <p><span class="math-container">$2\tan^{-1}(\csc\alpha)+\tan^{-1}(2\sin\alpha\sec^2\alpha)$</span></p> <p><span class="math-container">$=2\tan^{-1}\left(\dfrac{1}{\sin\alpha}\right)+\tan^{-1}\left(\dfrac{2\sin\alpha}{\cos^2\alpha}\right)$</span></p> <p><span class="math-container">$=2\tan^{-1}\left(\dfrac{1}{\sin\alpha}\right)+\tan^{-1}\left(\dfrac{2\sin\alpha}{1-\sin^2\alpha}\right)$</span></p> <p>Since <span class="math-container">$|\sin\alpha|&lt;1$</span>, we have <span class="math-container">$\tan^{-1}\left(\dfrac{2\sin\alpha}{1-\sin^2\alpha}\right)=2\tan^{-1}(\sin\alpha)$</span>, so the expression equals</p> <p><span class="math-container">$2\left[\tan^{-1}\left(\dfrac{1}{\sin\alpha}\right)+\tan^{-1}(\sin\alpha)\right]$</span></p> <p>and since <span class="math-container">$\sin\alpha&lt;0$</span>, using <span class="math-container">$\tan^{-1}t+\tan^{-1}\dfrac1t=-\dfrac{\pi}{2}$</span> for <span class="math-container">$t&lt;0$</span>, this is</p> <p><span class="math-container">$2\left(-\dfrac{\pi}{2}\right)=-\pi$</span></p>
1,899,740
<p>Let $(\Omega,\Sigma,\nu)$ be a measurable space with $\nu$ a positive $\sigma$-additive measure on $\Sigma$, and let $f:\Omega\to \mathbb{R}$ be a function such that $$ \int_{\Omega}f(x)d\nu(x)=0. $$ Does this imply that $f=0$ $\nu$-a.e.?</p> <p>Any hint will be appreciated :) thank you for your time!</p>
Ant
66,711
<p>How about $ f(x)= x $ on $\Omega = (-1,1)$ and $\nu $ the lebesgue measure?</p>
4,541,392
<p>I have an integral that looks like the following: <span class="math-container">$$ \int_a^b \frac{1}{\left(1 + cx^2\right)^{3/2}} \mathrm{d}x $$</span> I have seen a method of solving it being to substitute <span class="math-container">$x = \frac{\mathrm{tan}(u)}{\sqrt{c}}$</span>; however, this seems somewhat sloppy to me. Is there perhaps a better way of tackling this integral?</p>
math and physics forever
879,009
<p>let <span class="math-container">${(1+cx^2)}^{-1/2}=u \quad (1)$</span></p> <p>so you have <span class="math-container">$du= -\dfrac{cx}{(1+cx^2)^{3/2}}\,dx$</span></p> <p>from (1),</p> <p><span class="math-container">$x=\dfrac{1}{\sqrt{c}}\sqrt{\dfrac{1}{u^2} -1}$</span></p> <p>so the new integrand is <span class="math-container">$-\dfrac{du}{cx}=-\dfrac{du}{\sqrt{c}\,\sqrt{\dfrac{1}{u^2} -1}}$</span></p> <p>which is the same as <span class="math-container">$$-\dfrac{u\,du}{\sqrt{c}\,\sqrt{1-u^2}}$$</span> I think it is doable from here</p>
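<p>(As a numerical sanity check of the substitution $u=(1+cx^2)^{-1/2}$, assuming $c&gt;0$: a midpoint-rule value of the original integral should match the transformed $u$-integral, remembering that $u$ decreases as $x$ increases. The values of $c$, $a$, $b$ below are arbitrary choices of mine.)</p>

```python
import math

def midpoint(f, lo, hi, n=100_000):
    """Simple midpoint-rule quadrature on [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

c, a, b = 2.0, 0.5, 1.5

# original integral: int_a^b (1 + c x^2)^(-3/2) dx
lhs = midpoint(lambda x: (1 + c * x**2) ** -1.5, a, b)

# after u = (1 + c x^2)^(-1/2): integrand u / (sqrt(c) * sqrt(1 - u^2)),
# integrated from u(b) up to u(a) since the substitution reverses orientation
u_a = (1 + c * a**2) ** -0.5
u_b = (1 + c * b**2) ** -0.5
rhs = midpoint(lambda u: u / (math.sqrt(c) * math.sqrt(1 - u**2)), u_b, u_a)

assert abs(lhs - rhs) < 1e-8
# both agree with the elementary antiderivative x / sqrt(1 + c x^2)
exact = b / math.sqrt(1 + c * b**2) - a / math.sqrt(1 + c * a**2)
assert abs(lhs - exact) < 1e-8
```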
163,654
<p>Do You know any successful applications of the geometric group theory in the number theory? GTG is my main field of interest and I would love to use it to prove new facts in the number theory.</p>
HenrikRüping
3,969
<p>Here is a counterexample, which is probably not "nice". Let $X$ be the Warsaw circle. Let $X_n$ be obtained from the Warsaw circle by thickening the limit interval by $1/n$. The intersection of all the $X_n$'s is the Warsaw circle, and its first homology vanishes. </p> <p>Each $X_n$ is homotopy equivalent to $S^1$ and the inclusion $X_{n+1}\rightarrow X_n$ is a homotopy equivalence. Thus $\lim H_1(X_i)$ is $\mathbb{Z}$ and there cannot be a surjection.</p> <p>Meta: Every $X_i$ has the structure of a compact CW-complex. However the intersection behaves badly: we cannot arrange $X_{n+1}$ to be a subcomplex of $X_n$ in this example; otherwise the intersection would be a CW-complex, which it is not. I guess this has to be a part of the niceness condition. But then on the other hand, since all spaces are assumed to be compact, we have $X_i=X_{i+1}$ almost always and thus both inverse systems stabilize. I have no clue what a good niceness condition could be.</p>
119,527
<p>I'm trying to prove the following:</p> <blockquote> <p>Let $(a_n)_{n\in\mathbb{N}}$ and $(b_n)_{n\in\mathbb{N}}$ be two sequences such that $(a_n)_{n\in\mathbb{N}}$ converges and $(b_n)_{n\in\mathbb{N}}$ is bounded. If $a=\lim_{n\to\infty} a_n$, prove that $$\limsup_{n\to\infty} (a_n+b_n) = a +\limsup_{n\to\infty} b_n$$</p> </blockquote> <p>It's easy to show that $\limsup_{n\to\infty} (a_n+b_n) \leq a +\limsup_{n\to\infty} b_n$ since the limit superior is subadditive, but I'm at a loss on how to prove the other inequality.</p>
anon
11,763
<p>Show that</p> <p>$$\forall\epsilon&gt;0:\quad \limsup\, (a_n+b_n)\ge \limsup\, \big((a-\epsilon)+b_n\big)=(a-\epsilon)+\limsup\,b_n $$</p>
1,169,591
<p>For all finite-dimensional vector spaces and all linear transformations to/from those spaces, <strong>how can you prove/show from the definition of a linear transformation</strong> that all linear transformations can be calculated using a matrix?</p>
Mathemagician1234
7,012
<p>The most general way to show this is to show that the space $L(U,V)$ of linear transformations between finite-dimensional vector spaces $U$ and $V$ over a field $F$ is isomorphic to the vector space of m x n matrices with coefficients in the same field $F$. However, although the proof isn't difficult, it needs considerably more machinery than just the definition of a linear transformation, and building to the isomorphism theorem takes a number of pages. An excellent and self-contained presentation of linear transformations and matrices which ends with the isomorphism theorem can be found in <a href="http://cseweb.ucsd.edu/~gill/CILASite/Resources/09Chap5.pdf" rel="nofollow">Chapter 5 of the beautiful online textbook by S. Gill Williamson of The University of California at San Diego.</a> It's Theorem 5.13. But my strongest advice to you is to work through the entire chapter: not only is it a really beautiful and clear presentation of the structure of the vector space L(U,V), you'll only fully understand the proof after working through the chapter. It's worth the effort. </p>
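<p>(The constructive heart of that theorem is simply that the $j$-th column of the matrix is $T(e_j)$, the image of the $j$-th standard basis vector. A minimal pure-Python sketch, with an arbitrary map of my own choosing:)</p>

```python
def matrix_of(T, n):
    """Columns of the matrix are the images T(e_j) of the standard basis."""
    cols = [T([1.0 if i == j else 0.0 for i in range(n)]) for j in range(n)]
    m = len(cols[0])
    return [[cols[j][i] for j in range(n)] for i in range(m)]  # m x n matrix

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# an arbitrary linear map R^2 -> R^2
def T(v):
    x, y = v
    return [2*x + 3*y, -x + 4*y]

M = matrix_of(T, 2)
assert M == [[2.0, 3.0], [-1.0, 4.0]]
# by linearity, the matrix reproduces the map on every vector
for v in ([1.0, 0.0], [0.5, -2.0], [3.0, 7.0]):
    assert matvec(M, v) == T(v)
```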
3,251,928
<p><span class="math-container">$$\frac{1}{a(a-b)(a-c)} + \frac{1}{b(b-a)(b-c)} + \frac{1}{c(c-a)(c-b)} $$</span></p> <p>I tried to get everything to the same denominator, and then simplify numerators first but it is very complicated and long if I just use brute force, to multiply all the expressions given from the previous unification of denominator.</p>
lab bhattacharjee
33,337
<p>The common denominator is <span class="math-container">$$-abc(a-b)(b-c)(c-a)$$</span></p> <p>The numerator is</p> <p><span class="math-container">$$bc(b-c)+ca(c-a)+ab(a-b)=bc(b-c)+c^2a-ca^2+a^2b-ab^2$$</span></p> <p><span class="math-container">$$=(b-c)(bc+a^2-a(b+c))$$</span></p> <p><span class="math-container">$$=(b-c)(b(c-a)-a(c-a))=?$$</span></p>
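<p>(Carrying the factoring one step further, the numerator becomes $(a-b)(b-c)(c-a)$ up to sign, and the whole sum collapses to $\dfrac{1}{abc}$; this can be spot-checked in exact rational arithmetic, with sample values chosen arbitrarily:)</p>

```python
from fractions import Fraction

def S(a, b, c):
    """The symmetric sum from the question, in exact arithmetic."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    return (1 / (a * (a - b) * (a - c))
          + 1 / (b * (b - a) * (b - c))
          + 1 / (c * (c - a) * (c - b)))

# the sum collapses to 1/(abc) for distinct nonzero a, b, c
for (a, b, c) in [(2, 3, 5), (1, -4, 7), (10, 3, -2)]:
    assert S(a, b, c) == Fraction(1, a * b * c)
```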
2,248,303
<p>Evaluate the integral $ \int_{0}^{\infty} {\frac{\sin ^3 (x)}{x} }\:dx$ </p>
Angina Seng
436,618
<p>$$4\sin^3 x=3\sin x-\sin 3x$$ Your integral is $$\frac34\int_0^\infty\frac{\sin x}x\,dx -\frac14\int_0^\infty\frac{\sin 3x}x\,dx. $$</p>
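<p>(The triple-angle identity can be spot-checked numerically, and since $\int_0^\infty \frac{\sin kx}{x}\,dx = \frac\pi2$ for every $k&gt;0$, the splitting gives $\frac34\cdot\frac\pi2 - \frac14\cdot\frac\pi2 = \frac\pi4$:)</p>

```python
import math

# spot-check the linearisation 4 sin^3 x = 3 sin x - sin 3x
for x in [0.1, 0.7, 1.3, 2.9, -1.1]:
    assert abs(4 * math.sin(x)**3 - (3 * math.sin(x) - math.sin(3 * x))) < 1e-12

# each Dirichlet integral contributes pi/2, so the total is
total = (3 / 4) * (math.pi / 2) - (1 / 4) * (math.pi / 2)
assert abs(total - math.pi / 4) < 1e-15
```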
2,248,303
<p>Evaluate the integral $ \int_{0}^{\infty} {\frac{\sin ^3 (x)}{x} }\:dx$ </p>
Bérénice
317,086
<p>We can linearise $\sin^3(x)$ : $${\displaystyle\int}\dfrac{\sin^3\left(x\right)}{x}\,\mathrm{d}x=-\dfrac{1}{4}{\displaystyle\int}\dfrac{\sin\left(3x\right)-3\sin\left(x\right)}{x}\,\mathrm{d}x=-\dfrac{1}{4}{\displaystyle\int}\dfrac{\sin\left(3x\right)}{x}\,\mathrm{d}x+\dfrac{3}{4}{\displaystyle\int}\dfrac{\sin\left(x\right)}{x}\,\mathrm{d}x$$$$=\dfrac{3\operatorname{Si}\left(x\right)}{4}-\dfrac{\operatorname{Si}\left(3x\right)}{4}+C$$ Finally : $${\displaystyle\int}_0^\infty\dfrac{\sin^3\left(x\right)}{x}\,\mathrm{d}x=\dfrac{3\operatorname{Si}\left(\infty\right)}{4}-\dfrac{\operatorname{Si}\left(\infty\right)}{4}-\dfrac{3\operatorname{Si}\left(0\right)}{4}+\dfrac{\operatorname{Si}\left(0\right)}{4}\\=\dfrac{\operatorname{Si}\left(\infty\right)}{2}-\dfrac{\operatorname{Si}\left(0\right)}{2}$$ But $\operatorname{Si}\left(0\right)=0$ and $\operatorname{Si}\left(\infty\right)=\frac\pi2 :$ $${\displaystyle\int}_0^\infty\dfrac{\sin^3\left(x\right)}{x}\,\mathrm{d}x=\frac\pi4$$</p>
3,148,623
<p>I want to find all prime ideals in <span class="math-container">$\mathbb{Z}/n\mathbb{Z}$</span> where <span class="math-container">$n&gt;1$</span>.</p> <p>I think I have to use the following theorem (because they asked me to prove it right before this exercise, which wasn't too complicated):</p> <blockquote> <p>Let R be a ring and I ⊂ R an ideal. Let φ : R → R/I be the natural residue class homomorphism. Let I ⊂ J be another ideal. Then J is a prime ideal in R if and only if φ(J) is a prime ideal in R/I.</p> </blockquote> <p>And</p> <blockquote> <p>Let R be a principal ideal domain and <span class="math-container">$I\subset R, I\neq(0)$</span> an ideal. Then every ideal generated by an irreducible element is a prime ideal</p> </blockquote> <p>The irreducible elements in <span class="math-container">$\mathbb{Z}$</span> are exactly the prime numbers (up to sign). I also know that <span class="math-container">$\mathbb{Z}$</span> is a principal ideal domain.</p> <p>I tried supposing that <span class="math-container">$J=(p)$</span> is an ideal of <span class="math-container">$\mathbb{Z}$</span>, and when <span class="math-container">$p$</span> is prime, it is a prime ideal. Then if <span class="math-container">$J$</span> is a superset of <span class="math-container">$n\mathbb{Z}$</span>, we have that <span class="math-container">$\phi(J)$</span> is a prime ideal of <span class="math-container">$\mathbb{Z}/n\mathbb{Z}$</span>. Is this correct? For what <span class="math-container">$p$</span> values does this hold? What if <span class="math-container">$J$</span> is not a superset of <span class="math-container">$n\mathbb{Z}$</span> as the theorem requires?</p>
Bargabbiati
352,078
<p>An ideal <span class="math-container">$I$</span> in a commutative ring <span class="math-container">$A$</span> is prime if and only if <span class="math-container">$A/I$</span> is an integral domain; <span class="math-container">$\mathbb Z/n \mathbb Z$</span> is a finite ring, so the quotient by a prime ideal has to be a finite integral domain, which is a field. </p> <p>Now it is not hard to see (since the subgroups of <span class="math-container">$\mathbb Z/n \mathbb Z$</span> are all isomorphic to <span class="math-container">$\mathbb Z/ d \mathbb Z$</span> with <span class="math-container">$d|n$</span>, and these subgroups are also ideals) that this is possible if and only if <span class="math-container">$(\mathbb Z/ n \mathbb Z)/ I \simeq \mathbb Z / p \mathbb Z $</span> with <span class="math-container">$p$</span> prime; you can easily deduce what are the ideals you are looking for.</p>
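<p>(One can confirm this computationally for a small modulus: the ideal generated by $d \mid n$ in $\mathbb Z/n\mathbb Z$ has quotient $\mathbb Z/d\mathbb Z$, which is a domain exactly when $d$ is prime. A brute-force sketch for $n = 12$:)</p>

```python
def is_domain(d):
    """Z/dZ is an integral domain iff d > 1 and it has no zero divisors."""
    if d <= 1:
        return False
    return all((x * y) % d != 0 for x in range(1, d) for y in range(1, d))

n = 12
divisors = [d for d in range(2, n + 1) if n % d == 0]
# (d) is a prime ideal of Z/nZ iff (Z/nZ)/(d) = Z/dZ is a domain iff d is prime
prime_ideal_generators = [d for d in divisors if is_domain(d)]
assert prime_ideal_generators == [2, 3]
```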
549,438
<p>Need to take the limit: $$\lim_{x \to 1}\left(\frac{x}{x-1}-\frac{1}{\ln(x)}\right) = \lim_{x \to 1}\left(\frac{x\cdot \ln(x)-x+1}{(x-1)\cdot \ln(x)}\right)=(0/0)$$ Now I can use L'Hospital's rule: $$\lim_{x \to 1}\left(\frac{1\cdot \ln(x)+x\cdot \frac{1}{x}-1}{1\cdot \ln(x)+(x-1)\cdot\frac{1}{x}}\right)= \lim_{x \to 1}\left(\frac{\ln(x)+1-1}{\ln(x)+\frac{(x-1)}{x}}\right)=\lim_{x \to 1}\left(\frac{\ln(x)}{\frac{(x-1)+x \cdot \ln(x)}{x}}\right)=\lim_{x \to 1}\frac{x\cdot \ln(x)}{x-1+x\cdot \ln(x)}=\frac{1\cdot 0}{1-1+0}=(0/0)$$ As you can see I came to $(0/0)$ again. So what I have to do to solve this problem?</p>
pi37
46,271
<p>You can apply L'Hopital's rule again on the new limit: $$ \lim_{x \to 1}\frac{x\cdot ln(x)}{x-1+x\cdot ln(x)}=\lim_{x\to 1}\frac{ln(x)+1}{1+ln(x)+1}=\frac{0+1}{0+1+1}=\frac{1}{2} $$</p> <hr> <p>Alternatively, note that $$ \lim_{x \to 1}\frac{x\cdot ln(x)}{x-1+x\cdot ln(x)}=\lim_{x\to 1}\displaystyle\frac{1}{1+\frac{x-1}{x\cdot ln(x)}} $$ So it suffices to compute $$ \lim_{x\to 1}\frac{x-1}{x\cdot\ln(x)}=\lim_{x\to 1} \frac{1}{1+ln(x)}=1 $$ by L'Hopital, so the original limit was $\frac{1}{1+1}=\frac{1}{2}$.</p>
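<p>(A quick numeric check of the value $\frac12$, not a proof: near $x = 1$ the expression behaves like $\frac12 + \frac{x-1}{12}$, so sampled values approach $\frac12$ from both sides. The coefficient $\frac1{12}$ is my own series computation, used only to justify the tolerances.)</p>

```python
import math

def f(x):
    return x / (x - 1) - 1 / math.log(x)

# f(1 + h) = 1/2 + h/12 + O(h^2), so the error is well below |h|
for h in [1e-2, 1e-3, 1e-4]:
    assert abs(f(1 + h) - 0.5) < h   # approach from the right
    assert abs(f(1 - h) - 0.5) < h   # approach from the left
```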
2,070,353
<p>Let there be two arbitrary functions $f$ and $g$, and let $g \circ f$ be their composition.</p> <ol> <li><p>Suppose $g \circ f$ is one-one. Then prove that this implies $f$ is also one-one.</p></li> <li><p>Suppose $g \circ f$ is onto. Then prove that this implies $g$ is also onto.</p></li> </ol> <p>I tried to do both of the proofs by contradiction method but got stuck in that approach. Please help me in this proof. Proofs in more than one way is welcomed.</p>
Community
-1
<p>Find out the common difference, denoted by $d_1$; here the terms are consecutive odd numbers, so $d_1 = 2$. </p> <p>Let the first term be represented by $A$ and let the last term be represented by $L$. </p> <p>Calculate the number of terms $T$ using the formula: $T = \frac{L-A}{d_1} + 1$. We thus get $T = n-2$ using $L = 2n+1$, $A= 7$ and $d_1 = 2$. </p> <hr> <p>As to why the answer is $n-2$, one can explain it as a direct consequence of the formula. It is not always necessary that the number of terms in a sequence should be $n$. Your sequence is that of the odd numbers starting from $7$ and not from $3$. The latter case would have given you $n$ terms. Hope it helps.</p>
3,547,768
<p>Given a function <span class="math-container">$f:\mathbb{R}\rightarrow\mathbb{R}$</span> which is differentiable and given that <span class="math-container">$f(0)=2$</span> and <span class="math-container">$(e^x + 1)f'(x) = e^x f(x)$</span> for every <span class="math-container">$x \in \mathbb{R}$</span>, prove that <span class="math-container">$f(x) = e^x + 1$</span>.</p>
user247327
247,327
<p>That's a pretty straightforward "differential equation". From <span class="math-container">$(e^x+ 1)f'(x)= (e^x+ 1)\frac{df}{dx}= e^xf(x)$</span> we have <span class="math-container">$\frac{df}{f}= \frac{e^x}{e^x+ 1} dx$</span>.</p> <p>Now integrate both sides. On the left we have <span class="math-container">$ln(f)+ C_1$</span>. To integrate on the right, let <span class="math-container">$u= e^x+ 1$</span>. Then <span class="math-container">$du= e^xdx$</span> so the right side becomes <span class="math-container">$\frac{du}{u}$</span>. Integrating we have <span class="math-container">$ln(u)+ C_2= ln(e^x+ 1)+ C_2$</span>. </p> <p>Then <span class="math-container">$ln(f)+ C_1= ln(e^x+ 1)+ C_2$</span> so <span class="math-container">$ln(f)= ln(e^x+ 1)+ (C_2- C_1)$</span> or, letting <span class="math-container">$C'= C_2- C_1$</span>, <span class="math-container">$ln(f)= ln(e^x+ 1)+ C'$</span>.</p> <p>Take the exponential of both sides: <span class="math-container">$f(x)= e^{ln(e^x+ 1)+ C'}= e^{ln(e^x+ 1)}e^{C'}= e^{C'}(e^x+ 1)$</span>.</p> <p>Finally, letting <span class="math-container">$C= e^{C'}$</span>. <span class="math-container">$f(x)= C(e^x+ 1)$</span>.</p> <p>Since we are told that f(0)= 2, setting x= 0 in that equation, <span class="math-container">$f(0)= C(e^0+ 1)= C(1+ 1)= 2C= 2$</span> so <span class="math-container">$C= 1$</span>. </p> <p>We have <span class="math-container">$f(x)= e^x+ 1$</span>.</p> <p>Check: <span class="math-container">$f(0)= e^0+ 1= 1+ 1= 2$</span>. <span class="math-container">$f'(x)= e^x$</span> so <span class="math-container">$(e^x+ 1)f'(x)= (e^x+ 1)e^x= e^{2x}+ e^x$</span>. And <span class="math-container">$e^xf(x)= e^x(e^x+ 1)= e^{2x}+ e^x$</span>. 
Yes, <span class="math-container">$(e^x+ 1)f'(x)= e^xf(x)$</span>.</p> <p>(Strictly speaking, there should be a whole lot of "absolute values" in those logarithms, such as <span class="math-container">$ln|e^x+ 1|$</span>, and <span class="math-container">$e^{C'}$</span> would be positive for any C'. But allowing C to be positive or negative then fixes the absolute value problem.)</p>
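<p>(The check at the end can also be run numerically at a few sample points, as a sanity test rather than a proof:)</p>

```python
import math

def f(x):
    return math.exp(x) + 1

def fprime(x):
    return math.exp(x)

assert f(0) == 2  # initial condition f(0) = 2
# verify (e^x + 1) f'(x) = e^x f(x) at sample points
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    lhs = (math.exp(x) + 1) * fprime(x)
    rhs = math.exp(x) * f(x)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```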
2,657,112
<p>I am currently reading the paper "PRIMES is in p" and have come across some notation that I don't quite understand in this following sentence</p> <blockquote> <p>Consider a prime $q$ that is a factor of $n$ and let $q^k || n$. Then...</p> </blockquote> <p>What does the notation $q^k || n$ mean here?</p> <p>The full paper can be found <a href="https://www.cse.iitk.ac.in/users/manindra/algebra/primality_v6.pdf" rel="noreferrer">here</a> and the notation described above is used in the proof on page 2</p>
Randall
464,495
<p>Typically, that $q^k$ divides $n$ but no higher power of $q$ does.</p>
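<p>(In code, $q^k \,\|\, n$ just says that $k$ is the exponent of $q$ in the factorization of $n$; the helper name below is my own:)</p>

```python
def exact_power(q, n):
    """Return k such that q**k divides n but q**(k+1) does not (q^k || n)."""
    if n == 0 or q < 2:
        raise ValueError("need n != 0 and q >= 2")
    k = 0
    while n % q == 0:
        n //= q
        k += 1
    return k

assert exact_power(2, 48) == 4   # 48 = 2^4 * 3
assert exact_power(3, 48) == 1
assert exact_power(5, 48) == 0   # 5 does not divide 48
```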
1,755,107
<p>Let $(X,d)$ be a metric space and let $A \subseteq X$. We define the distance from a point $x \in X$ to $A$ by $d(x,A)= \inf \{ d(x,a) : a \in A \} $.</p> <p>What will be the value of $d(x, \emptyset )$? I am confused between $+ \infty $ and $- \infty$. Also, is it possible to find $ \min \{ d(x,a) : a \in \emptyset \} $? (I know $\emptyset$ is empty, just asking symbolically whether we can take $\min$ instead of $\inf$)</p> <p>Thanks.</p>
Jack
206,560
<p>If $A$ is empty, then $\{ d(x,a) : a \in A \}$ is empty. Then $\inf \emptyset = \infty$. </p> <p>The proof that $\inf \emptyset = \infty$ can be found in this post:</p> <p><a href="https://math.stackexchange.com/a/432308">https://math.stackexchange.com/a/432308</a></p>
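<p>(The same convention shows up naturally in code: the infimum over an empty collection is $+\infty$, since every real number is a lower bound of $\emptyset$, while a <em>minimum</em> simply does not exist. A Python sketch:)</p>

```python
import math

def dist(x, A):
    """d(x, A) = inf{|x - a| : a in A}; the empty infimum is +infinity."""
    return min((abs(x - a) for a in A), default=math.inf)

assert dist(3.0, [1.0, 5.0]) == 2.0
assert dist(3.0, []) == math.inf   # d(x, empty set) = +inf

# a true minimum over the empty set does not exist, hence "inf", not "min"
try:
    min(abs(3.0 - a) for a in [])
    raised = False
except ValueError:
    raised = True
assert raised
```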
851,112
<p>My goal is to obtain a reasonable approximation of the Gini index of a company (UBS). I need to obtain an estimate of the salaries distribution from publicly available data:</p> <ul> <li>Number of employees=60205 </li> <li>total compensation paid=15.182E9 CHF</li> <li>min salary=50000 CHF</li> <li>median salary = 100000 CHF</li> <li>max salary=11430000 CHF</li> </ul> <p>I know it's very underdetermined, but what's the best that can be obtained from this?</p>
Alijah Ahmed
124,032
<p>Starting from the expression which you have just derived, we can remain in the complex exponential domain and simplify further in this domain as follows: $$\displaystyle \frac{(e^{i\theta}+e^{i\phi})(e^{i(\theta+\phi)}-e^{i0})}{(e^{i\theta}-e^{i\phi})(e^{i(\theta+\phi)}+e^{i0})}=\frac{e^{i(2\theta+\phi)}-e^{i\theta}+e^{i(\theta+2\phi)}-e^{i\phi}}{e^{i(2\theta+\phi)}+e^{i\theta}-e^{i(\theta+2\phi)}-e^{i\phi}}\\=\frac{e^{i(\theta+\phi)}(e^{i\theta}-e^{-i\theta}+e^{i\phi}-e^{-i\phi})}{e^{i(\theta+\phi)}(e^{i\theta}-e^{-i\theta}-e^{i\phi}+e^{-i\phi})}\\=\frac{\color{blue}{(e^{i\theta}-e^{-i\theta})}+\color{red}{(e^{i\phi}-e^{-i\phi})}}{\color{blue}{(e^{i\theta}-e^{-i\theta})}-\color{red}{(e^{i\phi}-e^{-i\phi})}}\\=\frac{\color{blue}{2i\sin\theta}+\color{red}{2i\sin\phi}}{\color{blue}{2i\sin\theta}-\color{red}{2i\sin\phi}}\\=\frac{\sin\theta+\sin\phi}{\sin\theta-\sin\phi}$$ </p>
494,141
<p>I'm slightly confused about representations of Simple functions. I take it that simple functions are of the form $\phi(x) = \sum_{k=1}^N a_k \chi_{A_k}(x)$ where $\chi$ is the indicator function and that $A_k$'s are measurable sets.</p> <p>Now on building the theory of integration using simple functions, it appears that it is necessary to construct the canonical form of simple functions for a definition of the Lebesgue integral. (well that's how Stein does it).</p> <p>But my question is, if $\phi$ is a simple function defined above where it is not in its canonical form, will writing it in its canonical form ensure that the resulting disjoint sets are measurable? Or is this tautological by definition??</p> <p>I cannot seem to grasp this because subsets of measurable sets aren't necessarily measurable.</p>
Community
-1
<p>If there are two sets $A_1$ and $A_2$ such that $A_1 \cap A_2 \ne \emptyset$, we may simply rewrite the sum to include the terms</p> <p>$$a_1 \chi_{A_1 \setminus A_2} + (a_1 + a_2) \chi_{A_2 \cap A_1} + a_2 \chi_{A_2 \setminus A_1}$$</p> <p>Now it <em>is</em> true that intersections and differences of measurable sets are measurable, so there's no problem.</p>
297,686
<p>I want to show the following inequality is true: $|\int_{a}^{b}fg|^2\leq \int_{a}^{b}f^2\int_{a}^{b}g^2$. </p> <p>My first thought was to use the tagged partition definition of a Riemann integral combined with the Schwarz inequality.</p> <p>So let $P=\{a=x_0&lt;x_1&lt;...&lt;x_n=b\}$ and $S_p=\{t_1,...,t_n\}$ where $t_i \in [x_{i-1}, x_i]$.</p> <p>By the Schwarz inequality we have:</p> <p>$$\left|\sum\limits_{i=1}^{n}f(t_i)g(t_i)\Delta x_i \right|^2\leq\sum\limits_{i=1}^{n}f(t_i)^2\Delta x_i\sum\limits_{i=1}^{n}g(t_i)^2\Delta x_i.$$</p> <p>Essentially I was going to follow the proof for the Cauchy-Schwarz inequality but by making proper substitutions.</p> <p>I have two questions: is this the best way of proving this, and is this even a correct start. </p>
Alex Youcis
16,497
<p>This is the $p=q=2$ case of Holder's inequality, which <a href="http://planetmath.org/ProofOfHolderInequality.html" rel="nofollow">has a very simple proof</a>.</p>
297,686
<p>I want to show the following inequality is true: $|\int_{a}^{b}fg|^2\leq \int_{a}^{b}f^2\int_{a}^{b}g^2$. </p> <p>My first thought was to use the tagged partition definition of a Riemann integral combined with the Schwarz inequality.</p> <p>So let $P=\{a=x_0&lt;x_1&lt;...&lt;x_n=b\}$ and $S_p=\{t_1,...,t_n\}$ where $t_i \in [x_{i-1}, x_i]$.</p> <p>By the Schwarz inequality we have:</p> <p>$$\left|\sum\limits_{i=1}^{n}f(t_i)g(t_i)\Delta x_i \right|^2\leq\sum\limits_{i=1}^{n}f(t_i)^2\Delta x_i\sum\limits_{i=1}^{n}g(t_i)^2\Delta x_i.$$</p> <p>Essentially I was going to follow the proof for the Cauchy-Schwarz inequality but by making proper substitutions.</p> <p>I have two questions: is this the best way of proving this, and is this even a correct start. </p>
Robert Israel
8,508
<p>You seem to know the "sum" version of the Cauchy-Schwarz inequality. The standard proof of the "integral" version is exactly the same as that for the "sum" version: just change sums to integrals. It's really a theorem about inner products.</p>
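<p>(A numerical illustration of the integral inequality on one sample pair of functions, a check rather than a proof; the functions and interval are arbitrary choices of mine:)</p>

```python
import math

def midpoint(f, a, b, n=10_000):
    """Simple midpoint-rule quadrature on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

a, b = 0.0, 1.0
f = math.sin
g = math.exp

lhs = midpoint(lambda x: f(x) * g(x), a, b) ** 2
rhs = midpoint(lambda x: f(x)**2, a, b) * midpoint(lambda x: g(x)**2, a, b)
# Cauchy-Schwarz: (int fg)^2 <= (int f^2)(int g^2)
assert lhs <= rhs
```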
2,107,164
<p>Let $G$ be a group and $H$ a normal subgroup of $G$.</p> <p>Suppose that $H \cong \mathbb{Z}$ ($\mathbb{Z}$: the integers) and $G/H \cong \mathbb{Z}/ n \mathbb{Z}$ for some integer $n \geq 2$.</p> <p>Then I'm stuck on the next problems.</p> <p>(1) If $n$ is an odd number, show $G$ is an abelian group.</p> <p>(2) Classify the group $G$ up to isomorphism.</p> <p>Please tell me any idea and help me.</p>
Servaes
30,382
<p><strong>Hint:</strong> Classify the homomorphisms $\Bbb{Z}/n\Bbb{Z}\ \longrightarrow\ \operatorname{Aut}(\Bbb{Z})$.</p>
4,631,014
<blockquote> <p>Finding range of function</p> </blockquote> <blockquote> <p><span class="math-container">$\displaystyle f(x)=\cos(x)\sin(x)+\cos(x)\sqrt{\sin^2(x)+\sin^2(\alpha)}$</span></p> </blockquote> <p>I have used the algebraic inequality</p> <p><span class="math-container">$\displaystyle -(a^2+b^2)\leq 2ab\leq (a^2+b^2)$</span></p> <p><span class="math-container">$\displaystyle -(\cos^2(x)+\sin^2(x))\leq 2\cos(x)\sin(x)\leq \cos^2(x)+\sin^2(x)\cdots (1)$</span></p> <p>And</p> <p><span class="math-container">$\displaystyle -[\cos^2(x)+\sin^2(x)+\sin^2(\alpha)]\leq 2\cos(x)\sqrt{\sin^2(x)+\sin^2(\alpha)}\leq [\cos^2(x)+\sin^2(x)+\sin^2(\alpha)]\cdots (2)$</span></p> <p>Adding <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>:</p> <p><span class="math-container">$\displaystyle -(1+1+\sin^2(\alpha))\leq 2\cos(x)[\sin(x)+\sqrt{\sin^2(x)+\sin^2(\alpha)}]\leq (1+1+\sin^2(\alpha))$</span></p> <p><span class="math-container">$\displaystyle -\bigg(1+\frac{\sin^2(\alpha)}{2}\bigg)\leq f(x)\leq \bigg(1+\frac{\sin^2(\alpha)}{2}\bigg)$</span></p> <p>I do not see where my attempt is wrong.</p> <p>Please have a look at it.</p> <p>But the actual answer is <span class="math-container">$\displaystyle -\sqrt{1+\sin^2(\alpha)}\leq f(x)\leq \sqrt{1+\sin^2(\alpha)}$</span></p>
Anne Bauval
386,889
<p>Let <span class="math-container">$a=|\sin\alpha|.$</span></p> <p>First notice that (since <span class="math-container">$|\sin x|\le\sqrt{a^2+\sin^2x}$</span>) when the maximum value <span class="math-container">$f_{\max}$</span> is attained the <span class="math-container">$\sin$</span> and <span class="math-container">$\cos$</span> of <span class="math-container">$x$</span> are positive, and that the minimum value is <span class="math-container">$f_{\min}=-f_{\max},$</span> corresponding to the same <span class="math-container">$\sin$</span> and the opposite <span class="math-container">$\cos.$</span></p> <p><span class="math-container">$$f'(x)=\left(1+\frac{\sin x}{\sqrt{a^2+\sin^2x}}\right)g(x)$$</span> where <span class="math-container">$$g(x):=\cos^2x-\sqrt{a^2+\sin^2x}\,\sin x$$</span> decreases continuously from <span class="math-container">$g(0)=1-0=1$</span> to <span class="math-container">$g(\pi/2)=0-a=-a$</span> hence vanishes for a unique <span class="math-container">$x_0\in[0,\pi/2].$</span> One easily checks that <span class="math-container">$$\sin^2x_0=\frac1{2+a^2},\quad\cos^2x_0=\frac{1+a^2}{2+a^2}.$$</span></p> <p>Therefore, <span class="math-container">$f_\max=f(x_0)=\sqrt{1+a^2}$</span> and <span class="math-container">$$\operatorname{range}(f)=\left[-\sqrt{1+a^2},\sqrt{1+a^2}\right].$$</span></p>
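<p>(The claimed range can be probed numerically: for a sample $\alpha$, the maximum of $f$ over a fine grid of one period should approach $\sqrt{1+a^2}$ and the minimum $-\sqrt{1+a^2}$. The value of $\alpha$ and the grid size are arbitrary choices of mine:)</p>

```python
import math

alpha = 0.7
a2 = math.sin(alpha) ** 2  # a^2 = sin^2(alpha)

def f(x):
    return math.cos(x) * math.sin(x) + math.cos(x) * math.sqrt(math.sin(x)**2 + a2)

# sample one full period on a fine grid
N = 100_000
values = [f(2 * math.pi * k / N) for k in range(N)]
claimed = math.sqrt(1 + a2)
assert abs(max(values) - claimed) < 1e-6
assert abs(min(values) + claimed) < 1e-6   # the range is symmetric about 0
```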
227,732
<p>Are there any general relations between the eigenvalues of a matrix $M$ and those of $M^2$?</p>
Muphrid
45,296
<p>Clearly, if $\underline M(u) = \lambda u$, then $\underline M^2(u) = \lambda^2 u$.</p> <p>However, $\underline M^2$ may have eigenvectors that weren't eigenvectors of the original linear operator. Rotation operators come to mind. A 90 degree rotation has no eigenvectors*, but chaining two of these together makes all vectors in the plane of rotation eigenvectors with eigenvalue $-1$.</p> <p>*Edit: in the plane of rotation.</p>
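<p>(In coordinates: the $90^\circ$ rotation $R$ has no real eigenvector in the plane, since $Rv$ is always perpendicular to $v$, yet $R^2 = -I$ makes every nonzero vector an eigenvector with eigenvalue $-1$. A small pure-Python check:)</p>

```python
# 90-degree rotation in the plane
R = [[0.0, -1.0],
     [1.0,  0.0]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# R v is never parallel to v for v != 0: the 2D cross product is |v|^2 != 0
for v in ([1.0, 0.0], [2.0, 3.0], [-1.0, 4.0]):
    Rv = matvec(R, v)
    cross = v[0] * Rv[1] - v[1] * Rv[0]   # equals v_x^2 + v_y^2
    assert cross != 0.0                   # so no real eigenvector

# but R^2 = -I: every nonzero vector is an eigenvector with eigenvalue -1
R2 = matmul(R, R)
assert R2 == [[-1.0, 0.0], [0.0, -1.0]]
for v in ([1.0, 0.0], [2.0, 3.0], [-1.0, 4.0]):
    assert matvec(R2, v) == [-v[0], -v[1]]
```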
227,732
<p>Are there any general relations between the eigenvalues of a matrix $M$ and those of $M^2$?</p>
copper.hat
27,978
<p>A useful way to view an eigenspace is that the matrix $M$ just becomes multiplication on the eigenspace.</p> <p>So, suppose $Mv = \lambda v$. Applying $M$ again just results in another multiplication by $\lambda$, as in $M(M v) = M (\lambda v) = \lambda M v = \lambda^2 v$. Repeating gives $M^kv = \lambda^k v$.</p> <p>If $p$ is a polynomial, say $p(x) = \sum p_k x^k$, and we let $p(M) = \sum p_k M^k$, then we have $p(M)v = \sum p_k M^k v = \sum p_k \lambda^k v = p(\lambda) v$. So, if $\lambda$ is an eigenvalue of $M$, then $p(\lambda)$ is an eigenvalue of $p(M)$.</p> <p>The result is true more generally, but that is out of scope here.</p> <p>If $M$ happened to be invertible, then multiplying an eigenvector by $M^{-1}$ becomes dividing by $\lambda$ (which must be non-zero since I'm assuming invertibility). This follows from $M^{-1} Mv = v = M^{-1} (\lambda v)$, which gives $M^{-1} v = \frac{1}{\lambda} v$. Repeating gives $M^{-k} v = \frac{1}{\lambda^k} v$, so the formula $M^kv = \lambda^k v$ holds for all integers (assuming invertibility).</p>
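<p>(A concrete $2\times2$ illustration via trace and determinant, with an arbitrary symmetric sample matrix: the eigenvalues of $M^2$ are exactly the squares of those of $M$:)</p>

```python
import math

def eig2(M):
    """Eigenvalues of a 2x2 matrix with real eigenvalues, via trace/det."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[2.0, 1.0],
     [1.0, 2.0]]            # eigenvalues 1 and 3

assert eig2(M) == [1.0, 3.0]
# eigenvalues of M^2 are the squares, 1 and 9
assert eig2(matmul(M, M)) == [1.0, 9.0]
```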
406,252
<p>How do we define the number of additions? E.g. $1+2+3+4+5+6+7+8+9$</p> <p>Are there $9$ additions, because nine numbers are added together? Or can you also say that there are $8$ additions, because there are only $8$ '$+$' signs?</p>
Tony Stark
80,093
<p>$$ \sum_{i=1}^n a_i = a_1 + \cdots + a_n $$ There are $n$ terms in the addition, but what if some of the terms carry minus signs? Counting terms is therefore arguably less ambiguous than counting $+$ signs. On the other hand, we can also write: $$ \sum_{i=1}^n a_i = 0+ a_1 + \cdots + a_n $$ and now we have $n$ additions.</p>
432,291
<p>This question is related to <a href="https://mathoverflow.net/questions/421450/homotopy-type-theory-how-to-disprove-that-0-operatornamesucc0-in-the-ty">Homotopy type theory : how to disprove that <span class="math-container">$0=\mathrm{succ}(0)$</span> in the type <span class="math-container">$\mathbb N$</span></a>.</p> <p>Section 2.13 in <a href="https://homotopytypetheory.org/book/" rel="nofollow noreferrer">The HoTT Book</a> uses &quot;the encode-decode method to characterize the path space of the natural numbers&quot; by defining a type family:</p> <p><span class="math-container">$$\mathrm{code} : \mathbb N \to \mathbb N \to \cal U$$</span></p> <p>with</p> <p><span class="math-container">$$\begin{array}{rcl} \mathrm{code}(0,0) &amp; :\equiv &amp; \mathbf 1\newline \mathrm{code}(\mathrm{succ} (m),0) &amp; :\equiv &amp; \mathbf 0\newline \dots &amp; :\equiv &amp; \dots\newline \dots &amp; :\equiv &amp; \dots \end{array}$$</span></p> <p>To my understanding, <span class="math-container">$\mathrm{code}$</span> is only well-defined, if (in particular) <span class="math-container">$0:\mathbb N$</span> and <span class="math-container">$\mathrm{succ}(0):\mathbb N$</span> are <strong>not</strong> judgementally equal. How can we be sure that this is the case?</p>
L. Garde
110,166
<p>As Daniel explained, the induction principle associated with <span class="math-container">$\mathbb{N}$</span> allows you, by definition, to define <span class="math-container">$code$</span> in this way, even if <span class="math-container">$0$</span> and <span class="math-container">$succ(0)$</span> were judgmentally equal. In such a situation, <span class="math-container">$code$</span> would still be well defined, but this would result in an inconsistent theory, where <strong>0</strong> would be judgmentally equal to <strong>1</strong>, and therefore inhabited.</p> <p>A metatheoretic justification that HoTT is consistent is therefore a justification that <span class="math-container">$0$</span> and <span class="math-container">$succ(0)$</span> are not judgmentally equal.</p>
3,687,521
<p>Recently I came across this integral : <span class="math-container">$$ \int \:\frac{1}{\sqrt[3]{\left(1+x\right)^2\left(1-x\right)^4}}dx $$</span> I would evaluate it like this, first starting with the substitution: <span class="math-container">$$ x=\cos(2u) $$</span> <span class="math-container">$$ dx=-2\sin(2u)du $$</span> Our integral now becomes: <span class="math-container">$$\int \:\frac{-2\sin \left(2u\right)du}{\sqrt[3]{\left(\cos \left(2u\right)+1\right)^2\left(\cos \:\left(2u\right)-1\right)^4}}$$</span> <span class="math-container">$$\cos(2u)=\cos(u)^2-\sin(u)^2$$</span> Thus: <span class="math-container">$$\cos(2u)+1=2\cos(u)^2$$</span> <span class="math-container">$$\cos(2u)-1=-2\sin(u)^2$$</span> Thus our integral now becomes: <span class="math-container">$$\int \:\frac{-\sin \left(2u\right)du}{\sqrt[3]{4\cos \left(u\right)^4\cdot 16\sin \left(u\right)^8}}=\frac{1}{2}\int \:\frac{-\sin \left(2u\right)du}{\sqrt[3]{\cos \left(u\right)^4\sin \left(u\right)^8}}$$</span> we know: <span class="math-container">$$\sin \left(u\right)=\cos \left(u\right)\tan \left(u\right)$$</span> Thus our integral becomes: <span class="math-container">$$\int \:\frac{-\tan \left(u\right)\cos \left(u\right)^2du}{\cos \:\left(u\right)^4\sqrt[3]{\tan \left(u\right)^8}}=\int \frac{-\tan \:\left(u\right)\sec \left(u\right)^2du}{\sqrt[3]{\tan \:\left(u\right)^8}}\:$$</span> By letting <span class="math-container">$$v=\tan \:\left(u\right)$$</span> <span class="math-container">$$dv=\sec \left(u\right)^2du$$</span> Our integral now becomes: <span class="math-container">$$\int -v\:^{1-\frac{8}{3}}dv=-\frac{v^{2-\frac{8}{3}}}{2-\frac{8}{3}}+C=\frac{3}{2\sqrt[3]{v^2}}+C$$</span> Undoing all our substitutions: <span class="math-container">$$\frac{3}{2\sqrt[3]{\tan \left(u\right)^2}}+C$$</span> <span class="math-container">$$\tan \:\left(u\right)^2=\frac{1}{\cos \left(u\right)^2}-1=\frac{2}{1+\cos \left(2u\right)}-1=\frac{2}{1+x}-1$$</span> Our integral is therefore:
<span class="math-container">$$\frac{3}{2\sqrt[3]{\frac{2}{1+x}-1}}+C$$</span> However, online integral calculators give me the antiderivative <span class="math-container">$$\frac{-3\sqrt[3]{\frac{2}{x-1}+1}}{2}+C$$</span> so I want to know where I went wrong.</p>
Claude Leibovici
82,404
<p>Another way (more complex than @Quanto's one) <span class="math-container">$$I=\int\dfrac{dx}{\sqrt[3]{\left(1-x\right)^4\left(x+1\right)^2}}=\int\dfrac{dx}{\left(x-1\right)^\frac{4}{3}\left(x+1\right)^\frac{2}{3}}$$</span> <span class="math-container">$$u=\dfrac{1}{\sqrt[3]{x-1}}\implies I=-3\int\dfrac{du}{\left(\frac{1}{u^3}+2\right)^\frac{2}{3}}$$</span> <span class="math-container">$$v=\sqrt[3]{2u^3+1}\implies I=\frac 12 \int dv=\frac v 2$$</span></p>
6,056
<p>Are there any good places for me to sell off my mathematics books online especially Springer and Dover books?</p> <p>(I thought perhaps this was off topic, but then I thought everybody in math probably has the problem of having bought pricey books that they would prefer to get some money back for, at one point or another.)</p>
jericson
1,352
<p>You can also list your books on Amazon and sell them yourself, rather than have Amazon buy them from you.</p>
6,056
<p>Are there any good places for me to sell off my mathematics books online especially Springer and Dover books?</p> <p>(I thought perhaps this was off topic, but then I thought everybody in math probably has the problem of having bought pricey books that they would prefer to get some money back for, at one point or another.)</p>
Mathemagician1234
7,012
<p>I've been a seller on eBay for years and their fees are out of control. Also, people go to eBay looking for a deal, so you probably won't be able to make a good return on serious, currently used textbooks there. You've got a much better chance of a good profit on Amazon. <strong><em>DON'T try to sell your books through the site's buyback program; they'll rob you blind.</em></strong> You'll make much more by becoming a seller and selling directly to customers. There are no upfront fees, but Amazon charges 15% per sale. You can also try a newer site that's beginning to give eBay serious competition, called Bonanza. It has built a sizable and growing user base, and its fees are much lower than eBay's. The downside is that you'll have to do your own advertising to drive people to the site, but given what you save on fees, it's worth the work and time investment.</p>
23,980
<p>I am calculating huge data files with an external program. I would like then to import the data into Mathematica for analysis. The files are 2 columns and up to many millions of rows. </p> <p>So for small data files I have just been using:</p> <pre><code>dataTable = Import["data.txt", "Table"]; </code></pre> <p>However once the files gets so large (into the millions), Mathematica and my computer slow down considerably. Now, I don't really care about most of the data in these files and in Mathematica I end up only using several thousand entries. So my question is, can I import a random sampling from the large data file (say 10000 elements only) instead of importing the entire file? So if I had a file with 10^6 rows, I would like to import just 10^4 of those rows (preferably randomly).</p>
Michael Hale
5,568
<p>You'll want to use <a href="http://en.wikipedia.org/wiki/Reservoir_sampling" rel="nofollow">Algorithm R for reservoir sampling</a> to get a truly random sample (as opposed to a periodic one) with only one scan of the input and no need to know the length of the input in advance. Basically, you take the first <code>sampleSize</code> items and then randomly replace them with decreasing probability as you complete the scan.</p> <pre><code>sampleSize = 100;
stream = OpenRead["data.txt"];
read[] := Read[stream, {Number, Number}]
(* fill the reservoir with the first sampleSize rows *)
result = ReadList[stream, {Number, Number}, sampleSize];
i = sampleSize;
(* each later row replaces a random reservoir slot with probability sampleSize/i *)
While[(next = read[]) =!= EndOfFile,
 j = RandomInteger[{1, ++i}];
 If[j &lt;= sampleSize, result[[j]] = next]]
Close[stream];
</code></pre>
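<p>For comparison, the same Algorithm R can be sketched in Python if you prefer to subsample the file before it ever reaches Mathematica (the two-column whitespace format and file name are assumptions):</p>

```python
import random

def reservoir_sample(iterable, k, rng=random):
    """Return k items chosen uniformly at random from a single pass
    over the iterable (Algorithm R)."""
    reservoir = []
    for i, item in enumerate(iterable):
        if i < k:
            reservoir.append(item)       # fill the reservoir first
        else:
            j = rng.randint(0, i)        # uniform on 0..i inclusive
            if j < k:
                reservoir[j] = item      # replace with probability k/(i+1)
    return reservoir

# Usage sketch for a two-column text file:
# with open("data.txt") as fh:
#     rows = reservoir_sample((line.split() for line in fh), 10000)
```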
241,138
<p>I think I didn't pay attention to uniform distributions because they're too easy. So I have this problem:</p> <blockquote> <ol> <li>It takes a professor a random time between 20 and 27 minutes to walk from his home to school every day. If he has a class at 9.00 a.m. and he leaves home at 8.37 a.m., find the probability that he reaches his class on time.</li> </ol> </blockquote> <p>I am not sure I know how to do it. I think I would use $F(x)$, and I tried to look up how to figure it out but could only find the answer $F(x)=(x-a)/(b-a)$.</p> <p>So I input the numbers and got $(23-20)/(27-20)$ which is $3/7$ but I am not sure that is the correct answer, though it seems right to me.</p> <p>I'm not here for homework help (I am not being graded on this problem or anything), but I do want to understand the concepts. Too often I just learn how to do math and don't "really" understand it. </p> <p>So I would like to know how to properly do uniform distribution problems (of continuous variable) and maybe how to find the $F(x)$. I thought it was the integral but I didn't get the same answer.</p> <p>Remember I want to understand this. Thanks so much for your time. </p>
Mike
17,976
<p>Your problem is $e^{4x}$ is part of your homogeneous solution, which explains why you get $0$ when you try $Ae^{4x}$ as a particular solution.</p> <p>I would try $Ax^2e^{4x}$ as a particular solution.</p> <p>Just to show it works, let me show you another way to solve the problem. Let</p> <p>$$z=y'-4y,z'=y''-4y'$$ $$y''-8y'+16y=-\frac12e^{4x}$$ $$(y'-4y)'-4(y'-4y)=z'-4z=-\frac12e^{4x}$$ $$e^{-4x}z'-4e^{-4x}z=(e^{-4x}z)'=-\frac12$$ $$e^{-4x}z=-\frac12x+k_1,z=-\frac12xe^{4x}+k_1e^{4x}$$ $$y'-4y=-\frac12xe^{4x}+k_1e^{4x}$$ $$e^{-4x}y'-4e^{-4x}y=(e^{-4x}y)'=-\frac12x+k_1$$ $$e^{-4x}y=-\frac14x^2+k_1x+k_2,y=-\frac14x^2e^{4x}+k_1xe^{4x}+k_2e^{4x}$$</p> <p>So our particular solution turns out to be $-\frac14x^2e^{4x}$</p>
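<p>The particular solution can be sanity-checked numerically with finite differences (the step size and sample point below are arbitrary choices):</p>

```python
import numpy as np

def y(x):
    # Proposed particular solution y_p = -(1/4) x^2 e^{4x}
    return -0.25 * x**2 * np.exp(4 * x)

x, h = 0.3, 1e-5
y1 = (y(x + h) - y(x - h)) / (2 * h)             # y'
y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2     # y''

lhs = y2 - 8 * y1 + 16 * y(x)
rhs = -0.5 * np.exp(4 * x)
print(lhs, rhs)   # both approximately -1.660
```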
3,401,055
<blockquote> <p>If the distance of any point <span class="math-container">$(x,y)$</span> from the origin is defined as <span class="math-container">$d(x,y)=\max\{|x|,|y|\}$</span> where <span class="math-container">$|.|$</span> represents the absolute value function, and <span class="math-container">$d(x,y)=a$</span> where <span class="math-container">$a$</span> is a non-zero constant, then the locus of the resulting curve is:</p> <p>(A) Circle</p> <p>(B) Straight Line</p> <p>(C) Square</p> <p>(D) Triangle</p> </blockquote> <p>I know the meaning of <span class="math-container">$\max\{b,c\}$</span> which gives the maximum of <span class="math-container">$b$</span> or <span class="math-container">$c$</span> as its output. How to deal with the case <span class="math-container">$\max\{|x|,|y|\}$</span>? Since <span class="math-container">$d(x,y)=a$</span>, I concluded the locus must be a circle, but the answer is a square. How can this be possible?</p>
mechanodroid
144,766
<p>I'm assuming <span class="math-container">$V$</span> is a complex vector space.</p> <p>Notice that from <span class="math-container">$\langle Av,v\rangle \le -\|v\|^2, \forall v \in V$</span> follows that <span class="math-container">$A$</span> is hermitian. Indeed, we can write <span class="math-container">$A = B+iC$</span> for some hermitian matrices <span class="math-container">$B, C$</span>. Then if <span class="math-container">$\lambda$</span> is an eigenvalue of <span class="math-container">$C$</span> with unit eigenvector <span class="math-container">$v$</span>, we have <span class="math-container">$$\overbrace{\langle Bv,v\rangle}^{\in\mathbb{R}}+i\lambda= \langle Bv,v\rangle + i\langle Cv,v\rangle = \langle Av,v\rangle \in \mathbb{R}$$</span> so <span class="math-container">$\lambda = 0$</span>. It follows that <span class="math-container">$C = 0$</span> so <span class="math-container">$A = B$</span> which is hermitian.</p> <p>Therefore <span class="math-container">$A$</span> is diagonalizable with eigenvalues <span class="math-container">$\lambda_1, \ldots, \lambda_n \le- 1$</span> (this follows easily from <span class="math-container">$\langle Av,v\rangle \le -\|v\|^2, \forall v \in V$</span>). We have <span class="math-container">$$A = P^{-1}\operatorname{diag}(\lambda_1, \ldots, \lambda_n)P$$</span></p> <p>so</p> <p><span class="math-container">$$e^{tA} = e^{P^{-1}\operatorname{diag}(t\lambda_1, \ldots, t\lambda_n)P} = P^{-1}\operatorname{diag}(e^{t\lambda_1}, \ldots, e^{t\lambda_n})P \xrightarrow{t\to\infty} 0$$</span> since <span class="math-container">$t\lambda_k \to -\infty$</span> for all <span class="math-container">$k=1, \ldots, n$</span>.</p>
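<p>The argument can be mirrored numerically with NumPy: build a hermitian (here real symmetric) matrix with eigenvalues <span class="math-container">$\le -1$</span> and watch <span class="math-container">$\|e^{tA}\|$</span> decay (the chosen eigenvalues and seed are arbitrary samples):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
lam = np.array([-1.0, -2.0, -3.0])                # eigenvalues <= -1
A = Q @ np.diag(lam) @ Q.T                        # symmetric matrix

def expm_sym(A, t):
    """e^{tA} for symmetric A via the eigendecomposition, as in the proof."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(t * w)) @ V.T

norms = [np.linalg.norm(expm_sym(A, t)) for t in (0.0, 1.0, 5.0, 20.0)]
print(norms)   # strictly decreasing toward 0
```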
1,564,750
<p>Imagine that your country's postal system only issues 2 cent and 5 cent stamps. Prove that it possible to pay for postage using only these stamps for any amount n cents, where n is at least 4.</p> <hr> <p>My attempt (using Strong induction, I know we can use induction but since they say you can can apply strong induction for generalized/weak induction cases):</p> <p>Base case: $$n=4: 2 \times2cents$$ $$n=5: 0 \times 5cents$$ $$n=6: 3 \times 2cents$$ $$n=7: 1 \times 2 cents + 1 \times 5 cents$$</p> <p>You might ask why I have so many base cases, this is my reason why: The question states that we can pay for postage using 2 and 5 cents only. Hence, we have 3 general cases: </p> <p>1: ONLY 2-cents stamps are used</p> <p>2: ONLY 5-cents stamps are used</p> <p>3: 2-cents and 5-cents stamps are used.</p> <p>(Till now, are my base cases valid?)</p> <p>Assume that for n=k, P(k) is true and that we need to show P(k+1)</p> <p>$\textbf{Induction hypothesis:}$ P(k) is true when $4\le i \le k$ and $k \ge 7$</p> <p>Since $4\le (k-3) \le k$ , P(k-3) or i is true by Induction hypothesis.</p> <p>Now, k-3 cents can be formed using 2 and 5-cent stamps.</p> <p>To get k+1 stamps, we can just replace it with $\textbf{four}$ 2-cent stamps?</p> <hr> <p>Thank you! Is my proof valid or no? Any alternatives for this question using strong induction? Also, if I have used strong mathematical induction wrong or any of the steps are incorrect, please explain why.</p>
Christian Blatter
1,303
<p>You can do all even amounts, and you can do all odd amounts $\geq5$. It follows that the only positive amounts you cannot do are $1$ and $3$.</p>
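<p>The claim is easy to confirm by brute force; a small Python sketch:</p>

```python
def payable(n):
    """Can n cents be paid with 2- and 5-cent stamps only?"""
    return any((n - 5 * b) >= 0 and (n - 5 * b) % 2 == 0
               for b in range(n // 5 + 1))

print(all(payable(n) for n in range(4, 200)))   # True: every n >= 4 works
print(payable(1), payable(3))                   # False False
```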
1,836,236
<p>For which $a$ and $b$ is this matrix diagonalizable?</p> <p>$$A=\begin{pmatrix} a &amp; 0 &amp; b \\ 0 &amp; b &amp; 0 \\ b &amp; 0 &amp; a \end{pmatrix}$$</p> <p>How to get those $a$ and $b$? I calculated eigenvalues and eigenvectors, but don't know what to do next?</p>
user115350
334,306
<p>Matrix $A$ has the 3 eigenvalues $a-b$, $a+b$ and $b$ (the middle row gives $b$; the corner $2\times 2$ block gives $a\pm b$). They are pairwise distinct exactly when $$b \ne 0$$ $$a \ne 0$$ $$a \ne 2b$$ and distinct eigenvalues are sufficient for diagonalizability. Note, however, that $A$ is real symmetric, so it is in fact diagonalizable for <em>every</em> choice of $a$ and $b$.</p>
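<p>A quick numeric check of the three eigenvalues with NumPy (the values of $a$ and $b$ below are arbitrary samples):</p>

```python
import numpy as np

a, b = 3.0, 1.0   # sample values with b != 0, a != 0, a != 2b
A = np.array([[a, 0, b],
              [0, b, 0],
              [b, 0, a]])

# Eigenvalues are b (middle row) and a -+ b (from the corner 2x2 block)
print(sorted(np.linalg.eigvals(A).real))   # sorted [b, a-b, a+b] = [1.0, 2.0, 4.0]
```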
3,453,626
<p>I was playing around with the concepts of injectivity, surjectivity, and bijectivity recently. I used these three Wikipedia articles as references:</p> <p><a href="https://en.wikipedia.org/wiki/Injective_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Injective_function</a></p> <p><a href="https://en.wikipedia.org/wiki/Surjective_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Surjective_function</a></p> <p><a href="https://en.wikipedia.org/wiki/Bijection" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Bijection</a></p> <p>The third article on bijection states the following:</p> <ul> <li>A bijection is a one-to-one (injective) and onto (surjective) mapping of a set X to a set Y.</li> <li>A bijection is a function between the elements of two sets, where each element of one set is paired with exactly one element of the other set, and each element of the other set is paired with exactly one element of the first set.</li> </ul> <p>I've encountered these definitions of bijectivity many times before and they're not surprising. However, the article on injectivity also states the following about injective functions:</p> <ul> <li>Every element of the function's codomain is the image of at most one element of its domain.</li> </ul> <p>So then let's say we have two sets <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> with three and two elements respectively. We have a function <span class="math-container">$f$</span> such that <span class="math-container">$f$</span> maps exactly one element of <span class="math-container">$X$</span> onto one element of <span class="math-container">$Y$</span>. Then, by the definition of injectivity above, <span class="math-container">$f$</span> should be injective since every element of <span class="math-container">$Y$</span> is the image of one or zero (i.e. at most one) elements in <span class="math-container">$X$</span>. 
Of course, <span class="math-container">$f$</span> is also non-total.</p> <p>Now let's rather say that <span class="math-container">$f$</span> maps <em>two</em> elements of <span class="math-container">$X$</span> onto two elements of <span class="math-container">$Y$</span>. Then <span class="math-container">$f$</span> should not only be injective but also surjective since <span class="math-container">$Y$</span> has only two elements and they are both covered by elements in <span class="math-container">$X$</span>.</p> <p>Such a function should be considered bijective since it's both injective and surjective. However, it's incorrect to say that each element of <span class="math-container">$X$</span> is paired with exactly one element of <span class="math-container">$Y$</span>.</p> <p>So what am I missing here?</p>
Dr. Sonnhard Graubner
175,066
<p>Use that <span class="math-container">$$m_a=\frac{1}{2}\sqrt{2(b^2+c^2)-a^2}$$</span> etc.</p>
3,453,626
<p>I was playing around with the concepts of injectivity, surjectivity, and bijectivity recently. I used these three Wikipedia articles as references:</p> <p><a href="https://en.wikipedia.org/wiki/Injective_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Injective_function</a></p> <p><a href="https://en.wikipedia.org/wiki/Surjective_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Surjective_function</a></p> <p><a href="https://en.wikipedia.org/wiki/Bijection" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Bijection</a></p> <p>The third article on bijection states the following:</p> <ul> <li>A bijection is a one-to-one (injective) and onto (surjective) mapping of a set X to a set Y.</li> <li>A bijection is a function between the elements of two sets, where each element of one set is paired with exactly one element of the other set, and each element of the other set is paired with exactly one element of the first set.</li> </ul> <p>I've encountered these definitions of bijectivity many times before and they're not surprising. However, the article on injectivity also states the following about injective functions:</p> <ul> <li>Every element of the function's codomain is the image of at most one element of its domain.</li> </ul> <p>So then let's say we have two sets <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> with three and two elements respectively. We have a function <span class="math-container">$f$</span> such that <span class="math-container">$f$</span> maps exactly one element of <span class="math-container">$X$</span> onto one element of <span class="math-container">$Y$</span>. Then, by the definition of injectivity above, <span class="math-container">$f$</span> should be injective since every element of <span class="math-container">$Y$</span> is the image of one or zero (i.e. at most one) elements in <span class="math-container">$X$</span>. 
Of course, <span class="math-container">$f$</span> is also non-total.</p> <p>Now let's rather say that <span class="math-container">$f$</span> maps <em>two</em> elements of <span class="math-container">$X$</span> onto two elements of <span class="math-container">$Y$</span>. Then <span class="math-container">$f$</span> should not only be injective but also surjective since <span class="math-container">$Y$</span> has only two elements and they are both covered by elements in <span class="math-container">$X$</span>.</p> <p>Such a function should be considered bijective since it's both injective and surjective. However, it's incorrect to say that each element of <span class="math-container">$X$</span> is paired with exactly one element of <span class="math-container">$Y$</span>.</p> <p>So what am I missing here?</p>
Quanto
686,284
<p>Note,</p> <p><span class="math-container">$$m_a^4+m_b^4+m_c^4 = (m_a^2+m_b^2+m_c^2 )^2 - 2(m_a^2m_b^2+m_b^2m_c^2+m_c^2m_a^2) $$</span></p> <p>Then, evaluate, <span class="math-container">$$m_a^2+m_b^2+m_c^2 = \frac{2b^2+2c^2-a^2}{4} +\frac{2c^2+2a^2-b^2}{4} +\frac{2a^2+2b^2-c^2}{4} = \frac34 (a^2+b^2+c^2)$$</span></p> <p><span class="math-container">$$m_a^2m_b^2+m_b^2m_c^2+m_c^2m_a^2 = \frac9{16}(a^2b^2+b^2c^2+c^2a^2)$$</span></p> <p>Thus,</p> <p><span class="math-container">$$\frac{m_a^4+m_b^4+m_c^4}{a^4+b^4+c^4} = \frac9{16}\frac{(a^2+b^2+c^2)^2-2(a^2b^2+b^2c^2+c^2a^2)}{a^4+b^4+c^4} =\frac{9}{16}$$</span></p>
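<p>A numeric spot-check of the $9/16$ ratio (the side lengths below are arbitrary; the identity is purely algebraic in $a,b,c$):</p>

```python
a, b, c = 2.0, 3.0, 4.0   # sample side lengths

def m2(a, b, c):
    """Squared median to the side of length a: (2b^2 + 2c^2 - a^2)/4."""
    return (2 * b**2 + 2 * c**2 - a**2) / 4

num = m2(a, b, c)**2 + m2(b, c, a)**2 + m2(c, a, b)**2
den = a**4 + b**4 + c**4
print(num / den)   # 0.5625 == 9/16
```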
2,646,749
<p>I'm solving the following problem:</p> <blockquote> <p>Identify all maximal ideals in the ring $\mathbb R[x]/(x^2-3x+2)$.</p> </blockquote> <p>I want to solve it in two ways.</p> <p><strong>First method.</strong> Consider the surjective quotient homomorphism $\mathbb R[x]\rightarrow \mathbb R[x]/(x^2-3x+2)$ with kernel $I=(x^2-3x+2)$. The ideals of the image correspond bijectively to the ideals of $\mathbb R[x]$ containing $I$. Since $\mathbb R[x]$ is a PID, the only ideals in that ring containing $I$ are $(1),(x-1),(x-2),(x^2-3x+2)$. The image of $(1)$ is the whole ring and therefore cannot be maximal. The image of $(x^2-3x+2)$ is the zero ideal and also cannot be maximal. The images of $(x-1)$ and $(x-2)$ are $(x-1)+I$ and $(x-2)+I$ respectively. <em>How do I show rigorously that the quotient of $\mathbb R[x]/I$ by each of these ideals is a field?</em> (I think this is the most reasonable way to show that $(x-1)+I$ and $(x-2)+I$ are maximal; if not, please let me know if there is an easier way).</p> <p><strong>Second method.</strong> I want to use the fact that the maximal ideals of $\mathbb R\times \mathbb R$ are $(0)\times \mathbb R$ and $\mathbb R\times (0)$. To this end define a ring homomorphism $\phi: \mathbb R[x]\rightarrow \mathbb R \times \mathbb R$ by $f(x)\mapsto (f(1),f(2))$. This is an epimorphism with kernel $I$. By the first isomorphism theorem it induces an isomorphism $\mathbb R[x]/I\simeq \mathbb R \times \mathbb R$. Now we need to find the inverse images of $(0)\times \mathbb R$ and $\mathbb R\times (0)$ under $\phi$ and take their images under the quotient map $\mathbb R[x]\rightarrow \mathbb R[x]/I$. We have $\phi^{-1}(1,0)=-x+2; \phi^{-1}(0,1)=x-1$. This implies $\phi^{-1}(\mathbb R\times (0))=(x-2)$ and $\phi^{-1}((0)\times \mathbb R)=(x-1)$ because for example $-x+2\in \phi^{-1}(\mathbb R\times (0)) $ and since the inverse image of any ideal is an ideal, $(x-2)$ lies in $\phi^{-1}(\mathbb R\times (0))$.
<em>But why is there nothing else in $\phi^{-1}(\mathbb R\times (0))$ except $(x-2)$?</em></p>
johnnycrab
171,304
<p>For the first method, it is the usual proof that for any field $K$ and $a \in K$ we have $K[X]/(X-a) \cong K$, as by the third isomorphism theorem $(K[X]/I)/(\mathfrak a/I) \cong K[X]/\mathfrak a$ for $I \subset \mathfrak a \subset K[X]$. For this define the ring morphism:</p> <p>$$K[X] \to K, f \mapsto f(a).$$</p> <p>This is clearly surjective as $K \subseteq K[X]$ and the ideal $(X-a)$ lies in the kernel. Hence we get a surjective morphism</p> <p>$$\varphi \colon K[X]/(X-a) \to K, [f] \mapsto [f(a)],$$</p> <p>where $[f]$ denotes the equivalence class of some $f \in K[X]$. To show that this is injective, let $\varphi([f]) = 0$. As $[X] = [a]$, we have $[f] = [f(a)]$, so $f(a) = 0$, hence $(X-a) \mid f$, i.e. $f \in (X-a)$, so $[f]=0$ and $\varphi$ is injective. So $(X-a)$ is maximal as the quotient ring is a field.</p> <p>The answer to your second question follows again from the maximality of $(X-2)$. </p>
873,755
<p>Let $f(x) = x^{10}+5x^9-8x^8+7x^7-x^6-12x^5+4x^4-8x^3+12x^2-5x-5. $</p> <p>Without using long division (which would be horribly nasty!), find the remainder when $f(x)$ is divided by $x^2-1$.</p> <p>I'm not sure how to do this, as the only way I know of dividing polynomials other than long division is synthetic division, which only works with linear divisors. I thought about doing $f(x)=g(x)(x+1)(x-1)+r(x)$, but I'm not sure how to continue. Thanks for the help in advance.</p>
ReverseFlow
109,908
<p>Plug in $1$ and $-1$ to get two values of $r(x)$, which is linear. From there you can get what $a,b$ are in $ax+b.$ </p> <p>Since $$f(x)=g(x)(x+1)(x-1)+r(x)$$</p> <p>we have</p> <p>$$ f(1)=g(1)(1+1)(1-1)+r(1)=r(1)=-10$$ $$ f(-1)=g(-1)(-1+1)(-1-1)+r(-1)=r(-1)=16$$</p> <p>We know the remainder has degree at most $1$, so</p> <p>$r(x)=ax+b$</p> <p>and now we know $$r(1)=a+b=-10$$ $$r(-1)=-a+b=16$$</p> <p>so, solve</p> <p>$$a+b=-10$$ $$-a+b=16$$</p> <p>which yields $a=-13$, $b=3$, so</p> <p>$$r(x)=-13x+3$$</p>
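<p>The computation can be double-checked with NumPy's polynomial division (this is only verification, not a replacement for the method above):</p>

```python
import numpy as np

# Coefficients of f(x), highest degree first
f = [1, 5, -8, 7, -1, -12, 4, -8, 12, -5, -5]

# The two evaluations used above
print(np.polyval(f, 1), np.polyval(f, -1))   # -10 16

# Remainder on division by x^2 - 1
q, r = np.polydiv(f, [1, 0, -1])
print(r)   # [-13.   3.], i.e. r(x) = -13x + 3
```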
2,174
<p>As a teenager I was given this problem which took me a few years to solve. I'd like to know if this hae ever been published. When I presented my solution I was told that it was similar to one of several he had seen.</p> <p>The problem:</p> <p>For an <span class="math-container">$n$</span> dimensional space, develop a formula that evaluates the maximum number of <span class="math-container">$n$</span> dimensional regions when divided by <span class="math-container">$k$</span> <span class="math-container">$n-1$</span> dimensional (hyper)planes.</p> <p>Example: <span class="math-container">$A$</span> line is partitioned by points: <span class="math-container">$1$</span> point, <span class="math-container">$2$</span> line segments. <span class="math-container">$10$</span> points, <span class="math-container">$11$</span> line segments, and so one.</p>
Doug Chatham
273
<p>Section 3 of <a href="http://www.macalester.edu/~bressoud/pub/Art_of_Counting/Art_of_Counting.pdf" rel="noreferrer">http://www.macalester.edu/~bressoud/pub/Art_of_Counting/Art_of_Counting.pdf</a> discusses the problem for $n=3$ and poses the problem of generalizing to higher dimensions. This article states that the first published solution was by Jakob Steiner in 1826. Other references are cited.</p>
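<p>For what it's worth, the generally cited closed form for this problem is <span class="math-container">$\sum_{i=0}^{n}\binom{k}{i}$</span>; a short Python sketch checking it against the line example from the question:</p>

```python
from math import comb

def max_regions(n, k):
    """Maximum number of n-dimensional regions produced by k hyperplanes
    in general position (classical result: sum of C(k, i) for i = 0..n)."""
    return sum(comb(k, i) for i in range(n + 1))

print(max_regions(1, 10))   # 11 segments from 10 points on a line
print(max_regions(2, 3))    # 7 plane regions from 3 lines
print(max_regions(3, 4))    # 15 space regions from 4 planes
```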
2,706,834
<p>I have the next exercise: </p> <p>Let $E$ be a normed space, $n\in \mathbb{N}$ and $\{x_1,\dots,x_n\}\in E$ a family of independent vectors. Show that for all $\alpha_1,\dots,\alpha_n\in \mathbb{K}$ there is a linear form $T\in E'$ such that $T(x_i)=\alpha_i$, for all $i=1,\dots,n$.</p> <p>My solution:</p> <p>I used the following theorem derived from Hahn Banach's Theorem</p> <p>Theorem (Bounded linear functionals): Let $X$ be a normed space and let $x_0\neq 0$ be any element of $X$. Then there exists a bounded linear functional $T$ on $X$ such that</p> <p>$||T||=1$, $T(x_0)=||x_0||$.</p> <p>How $\{x_1,\dots,x_n\}\in E$ a family of independent vectors, then each $x_i\neq 0$, with $i=1,\dots,n$. In addition $X$ is a normed space.</p> <p>Therefore exists $T\in E'$ such that $||T||=1$, $T(x_i)=||x_i||$. Let $\alpha_i=||x_i||$. With which you have to $T(x_i)=\alpha_i$</p>
Aweygan
234,668
<p>The result you are trying to prove requires $\alpha_1,\ldots,\alpha_n$ to be arbitrary, but at the end of your proof you have $\alpha_i=\|x_i\|$. Thus your solution is incorrect.</p> <p>In fact, the result you're using (the corollary of the Hahn-Banach theorem), while it is a great one, isn't strong enough to show what you're trying to show. What you should instead use is the following corollary of the Hahn-Banach theorem:</p> <blockquote> <p>Let $X$ be a normed space, and let $M\subset X$ be a closed proper subspace. If $x_0\in X\setminus M$, there is some $f\in X^*$ such that $f(x)=0$ for all $x\in M$ and $f(x_0)\neq 0$. </p> </blockquote>
2,784,531
<p>Let <span class="math-container">$X = (X_1, X_2, \dots, X_n)$</span> be jointly Gaussian with mean vector <span class="math-container">$\mu$</span> and covariance matrix <span class="math-container">$\Sigma$</span>. Let <span class="math-container">$S$</span> be their sum.</p> <p>I know that the distribution of each <span class="math-container">$X_i \mid S = s$</span> is also Gaussian.</p> <p>When <span class="math-container">$n=2$</span>, I know that <span class="math-container">$$ E\left( X_1\mid S = s \right) = s \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2} $$</span> and <span class="math-container">$$ V\left(X_1\mid S = s \right) = \frac{\sigma_1^2\sigma_2^2}{\sigma_1^2 + \sigma_2^2} $$</span> (see <a href="https://stats.stackexchange.com/a/17464">here</a> and <a href="https://stats.stackexchange.com/q/9071">here</a>). I could probably work out analogous expressions for an arbitrary <span class="math-container">$n$</span> if I sat down with a pencil and paper and worked at it for a bit.</p> <p>What I want to know is, <strong>what is the distribution of <span class="math-container">$X$</span> given <span class="math-container">$S = s$</span>?</strong></p> <p>I know that this can't be Gaussian, since the sum is bounded. It's clearly not Dirichlet or anything Dirichlet-esque, since the marginal distributions are Gaussian. But beyond that I don't have a clue.</p>
Maxim
491,644
<p>The distribution of <span class="math-container">$(\boldsymbol X | S = s)$</span> is still jointly normal but degenerate. Let <span class="math-container">$\boldsymbol T = (1, 1, \dots, 1)^t$</span> and let <span class="math-container">$\boldsymbol X$</span> and <span class="math-container">$\boldsymbol \mu$</span> also be column vectors. Then <span class="math-container">$(X_1, \dots, X_n, \boldsymbol T^t \boldsymbol X)$</span> is jointly normal as an affine transform of a jointly normal distribution, and we can use the general formula for a conditional distribution of components of a jointly normal distribution: <span class="math-container">$$(\boldsymbol X | \boldsymbol T^t \boldsymbol X = s) \sim \mathcal N \!\left( \boldsymbol \mu + \frac {s - \boldsymbol \mu^t \boldsymbol T} {\boldsymbol T^t \Sigma \boldsymbol T} \Sigma \boldsymbol T, \Sigma - \frac 1 {\boldsymbol T^t \Sigma \boldsymbol T} \Sigma \boldsymbol T (\Sigma \boldsymbol T)^t \right).$$</span> <span class="math-container">$\boldsymbol T$</span> is an eigenvector of the conditional covariance matrix with eigenvalue <span class="math-container">$0$</span>, which is why the conditional distribution is degenerate.</p>
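<p>A NumPy sanity check of the formula (the covariance matrix, mean, and conditioned value below are arbitrary samples):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
Sigma = B @ B.T + 4 * np.eye(4)   # a positive-definite covariance matrix
mu = rng.standard_normal(4)
T = np.ones(4)
s = 2.0                           # conditioned value of the sum

d = T @ Sigma @ T
cond_mean = mu + (s - mu @ T) / d * (Sigma @ T)
cond_cov = Sigma - np.outer(Sigma @ T, Sigma @ T) / d

# The conditional mean sums to s, and T spans the kernel of the
# conditional covariance, so the distribution is degenerate along T.
print(cond_mean.sum())               # 2.0
print(np.linalg.norm(cond_cov @ T))  # ~0
```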
147,441
<p>I've just been looking through my Linear Algebra notes recently, and while revising the topic of change of basis matrices I've been trying something:</p> <p>"Suppose that our coordinates are $x$ in the standard basis and $y$ in a different basis, so that $x = Fy$, where $F$ is our change of basis matrix, then any matrix $A$ acting on the $x$ variables by taking $x$ to $Ax$ is represented in $y$ variables as: $F^{-1}AF$ "</p> <p>Now, I've attempted to prove the above, is my intuition right?</p> <p>Proof: We want to write the matrix $A$ in terms of $y$ co-ordinates.</p> <p>a) $Fy$ turns our y co-ordinates into $x$ co-ordinates.</p> <p>b) pre multiply by $A$, resulting in $AFy$, which is performing our transformation on $x$ co-ordinates</p> <p>c) Now, to convert back into $y$ co-ordinates, pre multiply by $F^{-1}$, resulting in $F^{-1}AFy$</p> <p>d) We see that when we multiply $y$ by $F^{-1}AF$ we perform the equivalent of multiplying $A$ by $x$ to obtain $Ax$, thus proved.</p> <p>Also, just to check, are the entries <em>in</em> the matrix $F^{-1}AF$ still written in terms of the standard basis?</p> <p>Thanks.</p>
Christian Blatter
1,303
<p>Your statements a) – d) are correct as stated.</p> <p>Concerning the sentence "Also, just to check, are the entries in the matrix $F^{-1}AF$ still written in terms of the standard basis?", one can say the following: The entries in this matrix are just numbers. The given matrix $A$ describes a certain linear map acting on points $x=(x_1,\ldots, x_n)$. The matrix $F^{-1}AF$ describes the same linear transformation when the points $x$ get new coordinates $(y_1,\ldots, y_n)$.</p>
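<p>The statements above can be exercised with a small NumPy example (the matrices are random samples, and $F$ is assumed invertible):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # the map in standard (x) coordinates
F = rng.standard_normal((3, 3))   # change-of-basis matrix, x = F y
y = rng.standard_normal(3)

B = np.linalg.inv(F) @ A @ F      # the same map expressed in y coordinates
x = F @ y

# Applying A in x coordinates and converting back agrees with B acting on y
print(np.allclose(np.linalg.inv(F) @ (A @ x), B @ y))   # True
```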
4,285,189
<p>I am aware of the following identity</p> <p><span class="math-container">$\det\begin{bmatrix}A &amp; B\\ C &amp; D\end{bmatrix} = \det(A)\det(D - CA^{-1}B)$</span></p> <p>When <span class="math-container">$A = D$</span> and <span class="math-container">$B = C$</span> and when <span class="math-container">$AB = BA$</span> the above identity becomes</p> <p><span class="math-container">$\det\begin{bmatrix}A &amp; B\\ B &amp; A\end{bmatrix} = \det(A)\det(A - BA^{-1}B) = \det(A^2 - B^2) = \det(A-B)\det(A+B)$</span>.</p> <p>However, I couldn't prove this identity for the case where <span class="math-container">$AB \neq BA$</span>.</p> <p><strong>EDIT:</strong> Based on @Trebor 's suggestion.</p> <p>I think I could do the following.</p> <p><span class="math-container">$\det\begin{bmatrix}A &amp; B\\ B &amp; A\end{bmatrix} = \det\begin{bmatrix}A &amp; B\\ B-A &amp; A-B\end{bmatrix} = \det(A^2-B^2) = \det(A-B)\det(A+B)$</span>.</p>
user1551
1,551
<p>Your second attempt almost works. You may use the fact that <span class="math-container">$\det\pmatrix{A&amp;B\\ C&amp;D}=\det(AD-BC)$</span> whenever <span class="math-container">$A,B,C,D$</span> are square matrices of the same sizes and <span class="math-container">$CD=DC$</span>: <span class="math-container">$$ \det\pmatrix{A&amp;B\\ B&amp;A} =\det\pmatrix{A&amp;B\\ B-A&amp;A-B} =\det\left(A(A-B)-B(B-A)\right) =\det\left((A+B)(A-B)\right). $$</span></p>
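The identity can be spot-checked numerically with random non-commuting matrices (a sketch assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))   # no commutativity assumed: AB != BA a.s.

M = np.block([[A, B], [B, A]])
lhs = np.linalg.det(M)
rhs = np.linalg.det(A - B) * np.linalg.det(A + B)
assert np.isclose(lhs, rhs)
```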
4,179,720
<p>I'm starting to study triple integrals. In general, I have been doing problems which require me to sketch the projection on the <span class="math-container">$xy$</span> plane so I can figure out the boundaries for <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. For example, I had an exercise where I had to calculate the volume bound between the planes <span class="math-container">$x=0$</span>, <span class="math-container">$y=0$</span>, <span class="math-container">$z=0$</span>, <span class="math-container">$x+y+z=1$</span> which was easy. For the projection on the <span class="math-container">$xy$</span> plane, I set that <span class="math-container">$z=0$</span>, then I got <span class="math-container">$x+y=1$</span> which is a line.</p> <p>However, now I have the following problem:</p> <p>Calculate the volume bound between:</p> <p><span class="math-container">$$z=xy$$</span></p> <p><span class="math-container">$$x+y+z=1$$</span></p> <p><span class="math-container">$$z=0$$</span></p> <p>now I know that if I put <span class="math-container">$z=0$</span> into the second equation I get the equation <span class="math-container">$y=1-x$</span> which is a line, but I also know that <span class="math-container">$z=xy$</span> has to play a role in the projection. If I put <span class="math-container">$xy=0$</span> I don't get anything useful. Can someone help me understand how these projections work and how I can apply it here?</p>
true blue anil
22,388
<p>Same thing, possibly shortest expression, <span class="math-container">$\frac{8!!}{8!} = \frac1{105}$</span></p> <p>(Ladies sit 1 by 1 in available seats, reserving opposite end for spouse)</p>
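The arithmetic in the stated value can be checked directly (a quick sketch using only the Python standard library; here $8!! = 8\cdot6\cdot4\cdot2$):

```python
from fractions import Fraction
from math import factorial

double_fact_8 = 8 * 6 * 4 * 2          # 8!! = 384
p = Fraction(double_fact_8, factorial(8))
assert p == Fraction(1, 105)           # 384 / 40320 = 1/105
```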
267,121
<h2>UPDATE on FINAL RESULT</h2> <p>Thanks to @SquareOne effort I generated higher-resolution videos with smoothing transitions that can be seen here:</p> <ul> <li><p><a href="https://www.linkedin.com/feed/update/urn:li:activity:6926902980323512320/" rel="noreferrer">https://www.linkedin.com/feed/update/urn:li:activity:6926902980323512320/</a></p> </li> <li><p><a href="https://twitter.com/superflow/status/1521191832012705792" rel="noreferrer">https://twitter.com/superflow/status/1521191832012705792</a></p> </li> </ul> <p>I might post my version of @SquareOne code with some bug corrections later. I am grateful to this community and @SquareOne for outstanding support.</p> <h2>INTRO &amp; <em>BOUNTY</em> TARGET</h2> <p>Dear friends, as you know there is currently an ongoing war in Ukraine: <strong><a href="https://war.ukraine.ua" rel="noreferrer">https://war.ukraine.ua</a></strong></p> <blockquote> <p><em><strong>I need your help on some coding in image/video processing, which is very simple to formulate, but not obvious in execution. Smooth exactly-timed transitions between video frames and perfect alignment of video frames is a key challenge here. BOUNTY TARGET IS EXPLAINED AT THE END OF THE POST.</strong></em></p> </blockquote> <h2>DATA Description</h2> <p><em>United States' <a href="https://www.understandingwar.org/" rel="noreferrer">Institute for the Study of War</a> (ISW, &quot;a non-partisan, non-profit, public policy research organization&quot;) performs daily research and publishes daily maps of the battlefield. Their work is public and gives many references to data sources they use. 
For instance...</em></p> <blockquote> <p><strong>The whole-Ukraine overview map from the 1st day of invasion on FEB 24, 2022 (<a href="https://www.understandingwar.org/backgrounder/ukraine-conflict-update-7" rel="noreferrer">source</a>):</strong></p> </blockquote> <p><a href="https://i.stack.imgur.com/bmDVR.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/bmDVR.jpg" alt="enter image description here" /></a></p> <blockquote> <p><strong>A recent whole-Ukraine overview map from APR 19, 2022 (<a href="https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-april-19" rel="noreferrer">source</a>):</strong></p> </blockquote> <p><a href="https://i.stack.imgur.com/YYdnd.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/YYdnd.jpg" alt="enter image description here" /></a></p> <h2>DATA Source</h2> <p>These maps are published almost daily with all publications gathered here: <a href="https://www.understandingwar.org/publications" rel="noreferrer">https://www.understandingwar.org/publications</a></p> <h2><em>BOUNTY</em> TARGET</h2> <p><em><strong>BOUNTY WILL BE AWARDED TO the CODE GENERATING BEST .MP4 VIDEO of a SEQUENCE of MAPS.</strong></em></p> <p>Basic part for the <strong>BOUNTY</strong>:</p> <ul> <li><p><strong>Programatic data access</strong>. While URLs of daily articles and images follow some pattern, it is not always regular. <em>How do we write a piece of code that accesses whole-map programmatically?</em> We do not want do do this manually. Approach 1: look at the daily image URLs (<a href="https://www.understandingwar.org/sites/default/files/DraftUkraineCoTApril19%2C2022.png" rel="noreferrer">example</a>), but they are still not regular. Approach 2: look at the daily articles URLs and get 1st image from the article (<a href="https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-april-19" rel="noreferrer">example</a>), but they are still not regular. Maybe there are other approaches. 
<strong>START DATE: FEB 24. END DATE: CURRENT DAY</strong>.</p> </li> <li><p><strong>Each frame must have a date stamp.</strong> For example: FEB 24, 2022, FEB 25, 2022, etc.</p> </li> <li><p><strong>Image alignment of Ukraine border - the GREATEST challenge</strong>. All these map images are slightly different. The Ukraine country border should NOT jump from frame to frame.</p> </li> <li><p><strong>Duration of a frame and smoothness of transition.</strong> Each map image (key-frame) should be held on screen 1 second. Each of 3 transitional blended frames should last 0.15 seconds. This is a toy example of how to achieve it. Imagine you have just 3 key-frames.</p> </li> </ul> <p><a href="https://i.stack.imgur.com/3VgCO.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/3VgCO.jpg" alt="enter image description here" /></a></p> <p>Build transitions via interpolated blended frames:</p> <pre><code>frames=Values[TimeSeriesResample[TimeSeries[imglist,{0}],1/4]] </code></pre> <p><a href="https://i.stack.imgur.com/oL4tF.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/oL4tF.jpg" alt="enter image description here" /></a></p> <p>Define non-uniform timings as</p> <pre><code>In[68]:= timings=Flatten[Riffle[Table[1,3],{Table[.15,3]}]] </code></pre> <blockquote> <p>Out[68]= {1, 0.15, 0.15, 0.15, 1, 0.15, 0.15, 0.15, 1}</p> </blockquote> <p>Create video as</p> <pre><code>SlideShowVideo[frames -&gt; timings] </code></pre> <p><a href="https://i.stack.imgur.com/N3XNu.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/N3XNu.jpg" alt="enter image description here" /></a></p> <p>Export to .MP4 via</p> <p><a href="http://reference.wolfram.com/language/ref/format/MP4.html" rel="noreferrer">http://reference.wolfram.com/language/ref/format/MP4.html</a></p> <p><strong>Thank you very much for considering this!!!</strong> Collecting data from independent sources, and displaying it in a comprehensive animation, can help to inform society in ways
that numbers and unorganized static images cannot. This is the sort of thing we as a community can do best in these difficult times.</p>
SquareOne
19,960
<p><a href="https://i.stack.imgur.com/TFygI.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/TFygI.gif" alt="enter image description here" /></a></p> <p><a href="https://drive.google.com/file/d/1eaVxxExRI-b14jKKJmaqEQIqZVZuv9JM/view?usp=sharing" rel="noreferrer">Here</a> is a link to a mp4 video (400 pixel width without fade transition due to memory limit in the the free basic Wolfram Cloud)</p> <p><strong>1. Fetching</strong></p> <p>The idea is to search all the ukraine-project pages for the links to all the reports containing the maps images. The name of the reports and of the images are searched given some keywords-string patterns.</p> <p>All the images are downloaded locally (these are big images which need resizing).</p> <pre><code>webpage=&quot;https://www.understandingwar.org/project/ukraine-project&quot;; pagePattern=&quot;?page=&quot;; titlePattern=&quot;russian-offensive-campaign&quot;; imageTitlePattern=&quot;UkraineCo&quot;~~__~~&quot;.png&quot;; maxpages= Import[webpage,&quot;Hyperlinks&quot;] // Select[StringContainsQ[pagePattern]] //StringCases[pagePattern~~d:DigitCharacter..-&gt;d] // Flatten//ToExpression // Max; allReportsLinks= Range[0,maxpages] // Map[( Import[webpage&lt;&gt;pagePattern&lt;&gt;ToString@#, &quot;Hyperlinks&quot;]// Select[StringContainsQ[titlePattern]] )&amp; ]//Flatten//Union; allImagesLinks=allReportsLinks// Map[{#, Import[#,&quot;Hyperlinks&quot;]//Select[ StringContainsQ[imageTitlePattern]]//Union}&amp;]; URLDownload[Select[allImagesLinks,Length@#[[2]]&gt;0&amp;]//#[[All,2,1]]&amp;,&quot;.&quot;]; </code></pre> <p><strong>2. 
Resizing</strong></p> <p>Once downloaded locally, the original maps are resized for convenience and are also directly renamed with the corresponding map date.</p> <pre><code>fn=FileNames[&quot;*Ukr*.png&quot;];
imageWidth=300;
months2num={&quot;April&quot;-&gt;&quot;04&quot;,&quot;March&quot;-&gt;&quot;03&quot;,&quot;Feb&quot;-&gt;&quot;02&quot;};
filedatePatterns={(d:DigitCharacter..)~~m:(&quot;April&quot;|&quot;March&quot;|&quot;Feb&quot;), (LetterCharacter~~(m:&quot;April&quot;|&quot;March&quot;|&quot;Feb&quot;)~~d:DigitCharacter..)};
allimagefiles= Table[fn//Map[{#,StringCases[#,pattern:&gt; StringJoin[m/.months2num,If[StringLength@d==1,&quot;0&quot;&lt;&gt;d,d],&quot;_&quot;,ToString@imageWidth,&quot;.png&quot;]]}&amp;], {pattern,filedatePatterns}]// Map[Select[Length@#[[2]]==1&amp;]]//Join@@#&amp;//Map[Flatten];
Table[img=Import[imf[[1]]]; If[Head[img]===Image,Export[imf[[2]],ImageResize[img,imageWidth]]],{imf,allimagefiles}];
</code></pre> <p><strong>3. Aligning</strong></p> <p>The maps are aligned according to a characteristic pattern which is always visible on all maps.</p> <p><a href="https://i.stack.imgur.com/hnXuB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hnXuB.png" alt="enter image description here" /></a></p> <p>This image pattern is a small portion of the Black Sea on the east side, together with two country borders:</p> <p><a href="https://i.stack.imgur.com/882g1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/882g1.png" alt="enter image description here" /></a></p> <p>The best results are obtained when extracting and comparing the edges of the images:</p> <pre><code>imgpatt=Import[&quot;0225_300.png&quot;]//RemoveAlphaChannel// ImageTake[#,{280,360},{250,295}]&amp; //EdgeDetect[#,1]&amp; </code></pre> <p><a href="https://i.stack.imgur.com/Z7j0o.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Z7j0o.png" alt="enter image description here" /></a></p> <p><code>ImageCorrelate</code> is used here to find the positions where
the image pattern best matches the maps. It is then possible to center all the maps onto this position-pattern.</p> <pre><code>bestpos[img_,pattern_]:= ImageCorrelate[img, pattern, NormalizedSquaredEuclideanDistance, Padding -&gt; None] // ColorConvert[#,&quot;Graylevel&quot;]&amp; // ImageData // Position[#, Min[#]]&amp;// {#[[1,2]],ImageDimensions[img][[2]]-#[[1,1]] -ImageDimensions[pattern][[2]]}&amp;

imageWidth=300;
finalfiles=FileNames[&quot;*_&quot;&lt;&gt;ToString[imageWidth]&lt;&gt;&quot;.png&quot;];
res=Table[img=Import[file]//RemoveAlphaChannel;xy=bestpos[EdgeDetect[img,1],imgpatt];{xy,ImageDimensions@img-xy},{file,finalfiles}];
{xg,yb,xd,yh}=Transpose@res//Map[Transpose]//Flatten[#,1]&amp;//Map[Max];
i0=Image[Graphics[{White,Rectangle[{0,0},{xg+xd,yb+yh}]}],ImageSize-&gt;{xg+xd,yb+yh}];
imgs=Table[img=Import[file]//RemoveAlphaChannel; ImageCompose[i0,img,{xg,yb},bestpos[EdgeDetect[img,1],imgpatt]],{file,finalfiles}];
</code></pre> <p><strong>4. Movie</strong></p> <p>Assembling the images into a movie as indicated by @Vitaly:</p> <pre><code>frames=Values[TimeSeriesResample[TimeSeries[imgs,{0}],1/3]];
timings=Flatten[Riffle[Table[0.30,Length@finalfiles],{Table[.15,2]}]];
Remove[imgs];
SlideShowVideo[frames -&gt; timings]
</code></pre> <p><strong>---------------------------Previous</strong></p> <p>Just a proof of concept with 4 maps, I have not finished yet. I tried to align the full images with different approaches; <code>ImageCorrelate</code> seems to give the most robust results here.</p> <p><a href="https://i.stack.imgur.com/29AIZ.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/29AIZ.gif" alt="enter image description here" /></a></p>
80,787
<p>Hello,</p> <p>I am very new to the field of approximation theory, and since an extended search on the Internet did not provide answers for two rather basic questions, I decided to ask them here. </p> <p>1) From my understanding, upper bounds for</p> <p>$$ \inf_{q} \int_{-1}^{1} |f(x) - q(x)|^{2p} dx $$</p> <p>with $f$ continuous and $q$ a polynomial of degree $n$, are expressed in terms of the $L^p$ smoothness of $f$ and in terms of the degree $n$. Could somebody point me to a proof of such a result?</p> <p>2) Heuristically, what kind of information do lower bounds for the above infimum contain? (For example, suppose that I can give a lower bound of $p!$ for the above infimum as $p \rightarrow \infty$.) </p> <p>My last question might not be well-posed, so if it doesn't make sense please ignore it.</p> <p>Thank you.</p>
Anatoly Kochubei
12,205
<p>Results on the $L^p$-approximation theory can be found in the basic books on the subject:</p> <p>Timan, A.F. Theory of approximation of functions of a real variable. Oxford: Pergamon Press. 1963.</p> <p>Achieser, N.I. Theory of approximation. New York: Frederick Ungar Publishing Co. (1956). </p>
2,711,189
<p>Is there any way to integrate a wedge of a cylinder (picture below), not by adding up the parts cut along parallel lines, but by adding up parts cut along lines that pivot about the center of the half circle?</p> <p><a href="https://i.stack.imgur.com/BrucD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BrucD.png" alt="enter image description here"></a></p>
Cye Waldman
424,641
<p>The answer to your question is yes, but it's a little trickier than when the cuts are parallel. Let's consider in the simplest terms</p> <p>$$V=\int dV$$</p> <p>If we wish to reduce this to an integral over $\theta$ only, then we need to express $dV$ in terms of the area. In order to do that we have recourse to Pappus's $2^{nd}$ Centroid Theorem: the volume of a planar area of revolution is the product of the area $A$ and the length of the path traced by its centroid $C$, i.e., $2πC$. The bottom line is that the volume is given simply by $V=2πCA$. Now, this works for partial rotations as well. To that end we can write</p> <p>$$dV=A(\theta)C(\theta)~d\theta$$</p> <p>where we can readily show that</p> <p>$$A(\theta)=\frac{1}{2}r^2\tan \alpha \sin \theta\\ C(\theta)=\frac{2}{3}r$$</p> <p>(Here I've generalized your $30^{\circ}$ to $\alpha$.) Thus we arrive at</p> <p>$$V=\int_0^{\pi} \frac{r^3\tan\alpha}{3}\sin\theta~d\theta=\frac{r^3\tan\alpha}{3}\int_0^{\pi} \sin\theta~d\theta=2\frac{r^3\tan\alpha}{3}$$</p> <p>We can verify this solution by comparison with Archimedes' hoof (see <a href="https://math.stackexchange.com/questions/2436656/what-happend-to-pi">here</a>), for which $r=1,\tan\alpha=2$ and $V=4/3$.</p>
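The closed form $V = \frac{2}{3}r^3\tan\alpha$ can be checked against a direct numerical sweep of the pivoting cross-sections (a sketch using only the Python standard library):

```python
from math import sin, tan, atan, pi, isclose

def wedge_volume(r, alpha, steps=100_000):
    """Sweep the pivoting cross-section: dV = A(theta) * C(theta) * dtheta,
    with A(theta) = (1/2) r^2 tan(alpha) sin(theta) and C(theta) = (2/3) r."""
    h = pi / steps
    total = 0.0
    for k in range(steps):
        theta = (k + 0.5) * h                      # midpoint rule
        area = 0.5 * r**2 * tan(alpha) * sin(theta)
        centroid = (2.0 / 3.0) * r
        total += area * centroid * h
    return total

# closed form from the answer, and the Archimedes' hoof special case V = 4/3
r, alpha = 1.0, atan(2.0)
assert isclose(wedge_volume(r, alpha), 2 * r**3 * tan(alpha) / 3, rel_tol=1e-6)
assert isclose(wedge_volume(1.0, atan(2.0)), 4 / 3, rel_tol=1e-6)
```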
1,088,687
<p>I have this example here:</p> <p>$\lim_{n \to \infty} \frac{n^{2}+1}{n^{2}+n+1} = 1$</p> <p>so I have to prove that $\forall \epsilon &gt; 0, \exists n_{0}\in N, \forall n(n \in N) \geq n_{0} \Rightarrow |\frac{n^{2}+1}{n^{2}+n+1} - 1| &lt; \epsilon$</p> <p>I have proceeded in these steps: <br/> <br/> $|\frac{n^{2}+1}{n^{2}+n+1} - 1| &lt; \epsilon$ <br/> <br/> $|\frac{-n}{n^{2}+n+1}| &lt; \epsilon$ <br/> <br/> $\frac{n}{n^{2}+n+1} &lt; \epsilon$ <br/> <br/> $n^{2} + n(1-\frac{1}{\epsilon}) + 1 &gt; 0$ <br/> <br/> D = $(1-\frac{1}{\epsilon})^{2} - 4 = (1 - \frac{1}{\epsilon} - 2)(1 - \frac{1}{\epsilon} + 2) = -(1 + \frac{1}{\epsilon})(1-\frac{1}{\epsilon}) = \frac{1}{\epsilon^{2}} - 1$ <br/> <br/></p> <p>$n_{1,2} = \frac{\frac{1}{\epsilon} - 1 \pm \sqrt{D}}{2}$</p> <p>I guess we need that <br/> <br/></p> <p>$n &gt; \frac{\frac{1}{\epsilon} - 1 + \sqrt{D}}{2} = \frac{\frac{1}{\epsilon} - 1 + \sqrt{\frac{1}{\epsilon^{2}}- 1}}{2} = \frac{\frac{1}{\epsilon} - 1 + \sqrt{1 + (\frac{1}{\epsilon^{2}}- 2)}}{2} &gt; \frac{\frac{1}{\epsilon} - 1 + \frac{1}{2}(\frac{1}{\epsilon^{2}}- 2)}{2} = \frac{\frac{1}{\epsilon} - 1 + \frac{1}{2\epsilon^{2}}- 1}{2} = \frac{\frac{1}{2\epsilon^{2}} +\frac{1}{\epsilon} -2}{2}$</p> <p>But I think I have made some mistakes, and this last expression could be in a simpler form.</p> <p>P.S.: How can I verify whether my result $n(\epsilon)$ is true in the end?</p>
paw88789
147,810
<p>Sometimes you don't want to be too precise with your estimates. A loose estimate with a nice form that gets the job done is often easiest.</p> <p>Here's a start:</p> <p>$|\frac{n^2+1}{n^2+n+1}-1|=\frac{n}{n^2+n+1}&lt;\frac{n}{n^2}=\frac1n$</p>
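The bound can be spot-checked numerically (a sketch in Python); given $\epsilon &gt; 0$, any $n_0 &gt; 1/\epsilon$ then works:

```python
# the answer's estimate: |(n^2+1)/(n^2+n+1) - 1| = n/(n^2+n+1) < 1/n for n >= 1
def f(n):
    return (n ** 2 + 1) / (n ** 2 + n + 1)

for n in range(1, 10_001):
    assert abs(f(n) - 1) < 1 / n
```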
336,170
<p>Create a rational function with vertical asymptotes $x=\pm1$ and oblique asymptote of $y=2x-3$ and a $y$-intercept of $4$.</p>
Pedro
23,350
<p>You can go by parts, a la Jack.</p> <p>First, you want your function to have vertical asymptotes at $x=1,-1$. So we really want $$\tag 1 \frac{1}{(1-x)(1+x)}$$</p> <p>Now, as $x\to\infty$, this has an "oblique" asymptote of $0$, so we can just add the asymptote we want: $$\frac{1}{(1-x)(1+x)}+2x-3$$</p> <p>Now, let's evaluate this at $x=0$. It gives $$1-3=-2$$ Since $(1)$ evaluates to $1$ there, we can multiply it by the constant $c$ with $c-3=4$, i.e. $7$. Then $$\frac{7}{(1-x)(1+x)}+2x-3$$</p> <p>will evaluate to $4$ at the origin, and have the required conditions. You can try and produce more examples to get the hang of it.</p>
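A quick numerical check of the three required properties (a sketch in Python; the probe points and tolerances are arbitrary):

```python
def f(x):
    return 7 / ((1 - x) * (1 + x)) + 2 * x - 3

assert f(0) == 4.0                            # y-intercept is 4
assert abs(f(1e6) - (2 * 1e6 - 3)) < 1e-5     # oblique asymptote y = 2x - 3
assert abs(f(1 + 1e-9)) > 1e8                 # blow-up near x = 1
assert abs(f(-1 + 1e-9)) > 1e8                # blow-up near x = -1
```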
1,509,443
<p>Prove that for $x\geq 1$ the following inequality is true: $$\dfrac{1}{x}\leq \ln(2x+1)-\ln(2x-1).$$</p> <p>Can anyone help with this problem? I have thought about it for some hours, but I have no ideas.</p>
Thomas Andrews
7,933
<p>More generally, show that if $f(t)$ is a convex differentiable function on $[a,b]$ then $$(b-a)f\left(\frac{a+b}{2}\right) \leq \int_{a}^b f(t)\,dt$$</p> <p>In the above case, $a=2x-1,b=2x+1, f(t)=\frac{1}{t}$.</p>
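For the special case in the question, the inequality can be spot-checked on a grid (a sketch using only the Python standard library):

```python
from math import log

# check 1/x <= ln(2x+1) - ln(2x-1) on a grid of x in [1, 100]
for k in range(9900):
    x = 1.0 + k * 0.01
    assert 1 / x <= log(2 * x + 1) - log(2 * x - 1)
```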
2,744,981
<blockquote> <p>Let $X$ be a Banach space. Let $T\in \mathbb{B}(X)$. If $T$ is an isometry and not invertible, prove that $\sigma(T) = \overline{\mathbb{D}}$.</p> </blockquote> <p>I can show that $\sigma(T) \subset \overline{\mathbb{D}}$. Since $T$ is not invertible, $0 \in \sigma(T)$. Suppose $\sigma(T) \neq \overline{\mathbb{D}}$; then we can find $\lambda$ with $|\lambda|&lt;1$ on the boundary of the spectrum, $\partial \sigma(T)$. Then $\lambda \in \sigma_{ap}(T)$. How can I go from here to a contradiction?</p>
Balaji sb
213,498
<p>Just completing the argument, i.e., proving that if <span class="math-container">$\lambda \in \partial \sigma(T)$</span> with <span class="math-container">$|\lambda|&lt;1$</span> then <span class="math-container">$\lambda \in \sigma_{ap}(T)$</span>:</p> <p>If <span class="math-container">$\lambda$</span> is an eigenvalue of <span class="math-container">$T$</span>, there exists a nonzero <span class="math-container">$v$</span> in the Banach space with <span class="math-container">$(T-\lambda I)v = 0 \implies Tv = \lambda v \implies ||Tv|| = |\lambda| ||v|| \implies |\lambda| = 1$</span> (since <span class="math-container">$||Tx|| = ||x||$</span> as <span class="math-container">$T$</span> is an isometry). So <span class="math-container">$\sigma_p(T) \subseteq \partial\mathbb{D}$</span>.</p> <p>Assume that <span class="math-container">$(T-\lambda I)x_n \rightarrow x \implies x_{n_k} \rightarrow x'$</span> for some <span class="math-container">$x'$</span> for any <span class="math-container">$|\lambda|&lt;1$</span>. Let <span class="math-container">$\lambda_r \in \sigma_r(T)$</span> and <span class="math-container">$|\lambda_r| &lt; 1$</span>. Then there exists a point <span class="math-container">$x$</span> of norm <span class="math-container">$1$</span> such that <span class="math-container">$B_r(x) \cap \text{Image}(T-\lambda_r I) = \emptyset$</span> for some <span class="math-container">$r&gt;0$</span>.</p> <p>Let <span class="math-container">$\lambda$</span> be such that <span class="math-container">$|\lambda-\lambda_r| &lt; \frac{r(1-|\lambda|)}{2}$</span> and <span class="math-container">$|\lambda|&lt;1$</span>. If <span class="math-container">$\lambda \notin \sigma_r(T) \cup \sigma_p(T)$</span> then for some <span class="math-container">$\{y_n\}$</span>, <span class="math-container">$(T-\lambda I)y_n \rightarrow x \implies y_n \rightarrow y$</span> and <span class="math-container">$(T-\lambda_r I) y = x-(\lambda_r-\lambda)y$</span>.
Since <span class="math-container">$|\lambda-\lambda_r| &lt; \frac{r(1-|\lambda|)}{2}$</span> and since we have shown that, <span class="math-container">$(T-\lambda_r I) y = x-(\lambda_r-\lambda)y \implies B_r(x) \cap \text{Image}(T-\lambda_r I) \neq \emptyset$</span> since <span class="math-container">$||(\lambda_r-\lambda)y|| &lt; r$</span> leading to a contradiction. This is because <span class="math-container">$(T-\lambda I)y = x \implies ||Ty|| - |\lambda| ||y|| \leq ||x|| = 1 \implies ||y|| - |\lambda| ||y|| \leq ||x|| = 1 \implies ||y|| \leq \frac{1}{(1-|\lambda|)}$</span> Hence <span class="math-container">$\lambda \in \sigma_r(T) \cup \sigma_p(T)$</span> and since <span class="math-container">$|\lambda|&lt;1$</span>, we have that <span class="math-container">$\lambda \in \sigma_r(T)$</span>. Hence for every <span class="math-container">$\lambda_r \in \sigma_r(T)$</span> with <span class="math-container">$|\lambda_r| &lt; 1$</span>, there is an <span class="math-container">$\gamma&gt;0$</span> such that <span class="math-container">$B_{\gamma}(\lambda_r) \subseteq \sigma_r(T)$</span></p> <p>Hence if <span class="math-container">$\lambda$</span> with <span class="math-container">$|\lambda|&lt;1$</span> and <span class="math-container">$\lambda \in \partial \sigma(T)$</span> then <span class="math-container">$\lambda \in \sigma_{ap}(T)$</span> by the fact that <span class="math-container">$\sigma(T)$</span> is closed and <span class="math-container">$\lambda \notin \sigma_r(T)$</span>. Hence we have proved the above two statements assuming <span class="math-container">$(T-\lambda I)x_n \rightarrow x \implies $</span> <span class="math-container">$x_{n_k} \rightarrow x'$</span> for some <span class="math-container">$x'$</span> for any <span class="math-container">$|\lambda|&lt;1$</span>. 
This assumption is true since <span class="math-container">$T$</span> is an isometry as <span class="math-container">$(1-|\lambda|) \times ||x_n-x_m|| \leq ||T(x_n-x_m)|| - ||\lambda(x_n-x_m)||\leq ||(T-\lambda I)(x_n-x_m)|| &lt; \epsilon \implies$</span> <span class="math-container">$\{x_n\}$</span> is a cauchy sequence and hence <span class="math-container">$x_n \rightarrow x'$</span> for some <span class="math-container">$x'$</span> as the space is complete.</p> <p>Bonus:</p> <p>Further if there exists <span class="math-container">$\lambda \in \sigma(T) \setminus \sigma_p(T)$</span> with <span class="math-container">$|\lambda|&lt;1$</span> and <span class="math-container">$\sigma(T)$</span> is a strict subset of <span class="math-container">$\mathbb{D}$</span> then since <span class="math-container">$\sigma(T)$</span> is closed, we can find a <span class="math-container">$\lambda$</span> with <span class="math-container">$\lambda \in \partial \sigma(T) \cap (\sigma(T) \setminus \sigma_p(T))$</span> with <span class="math-container">$|\lambda|&lt;1$</span> then based in the above proof, we have <span class="math-container">$\lambda \in \sigma_{ap}(T)$</span>. From this point we can follow the proof given in the answer by @bitesizebo</p> <p>So the proof is complete if we show that there exists <span class="math-container">$\lambda \in \sigma(T) \setminus \sigma_{p}(T)$</span> such that <span class="math-container">$|\lambda|&lt;1$</span>. This is true since <span class="math-container">$0 \in \sigma(T) \setminus \sigma_{p}(T)$</span>.</p>
1,927,599
<p>In first-order logic, there is normally a formal distinction between <em>constants</em> and <em>variables</em>. Namely:</p> <ul> <li><p>A <em>constant symbol</em> is a $0$-ary function symbol in a language $\mathcal{L}$.</p></li> <li><p>A <em>variable</em> is one of countably many special symbols used for first-order reasoning, and can be quantified over.</p></li> </ul> <p>Instead, suppose we insist that there are no variables, but only constants. And specifically:</p> <ul> <li><p>We may quantify over constant symbols, but not over general function or relation symbols.</p></li> <li><p>There is an infinite supply of unused constant symbols available for use.</p></li> </ul> <p><strong>Question: can we develop first-order-logic this way?</strong> Has it been done, and if not, what goes wrong?</p> <hr> <h1>Why would you want to do that?</h1> <p>Both the syntax and the semantics of constants and variables suggest that they are more naturally understood as the same thing. Consider:</p> <ul> <li><p><strong>Logical implication for formulas</strong> Let $\varphi, \psi$ be any formulas. We say that $\varphi \vDash \psi$ if, under any model( interpretation of the function symbols and relation symbols), <em>and</em> any assignment of the variables to elements of our model, if $\varphi$ is true then $\psi$ is true.</p> <p>If variables are just constant symbols, then this simply says that under any interpretation (which must automatically include interpreting the constant symbols), if $\varphi$ is true then $\psi$ is true.</p></li> <li><p><strong>Axioms with existential statements vs. axioms with constant symbols</strong> Consider the group axioms. There are two different formal presentations of the identity axiom, both commonly used: option (i) is to have a constant symbol $e$ in the language, and assert the axiom that $\forall x (x e = ex = x$). Option (ii) is to assert that $\exists y \forall x (xy = yx = x)$. 
If you do the first axiom, then your language of groups is $\mathcal{L} = \{*, e\}$; if you do the second axiom, your language is just $\{*\}$. (The unary function for inverse is also sometimes included, $^{-1}$.)</p> <p>The two formulations give two different theories over two different languages, but they amount to the same thing. However, the equivalence is not immediate because while we can show the theory with the constant symbol $e$ in it can prove the other theory's axioms, going the other way requires "defining" a new symbol $e$ and adding it to the language, which has to be done on a meta level.</p></li> <li><p><strong>$\exists$ elimination</strong> The $\exists$ elimination rule tells us that from $\exists x \; \varphi(x)$, we may deduce that $\varphi(x)$, where $x$ is a new variable that does not occur free in the current scope. But informally, what is this saying? <em>If we know that there exists an object satisfying a property, we may give it a name</em>. And I think of the <em>name</em> as a constant symbol even though it's formally a variable.</p> <p>Consider defining $i$ in the complex numbers given the axiom $\exists x \; : \; x^2 = -1$. We employ $\exists$ elimination to obtain $x^2 = -1$ for some variable $x$. But it would be helpful for this variable to then become part of our language, so that $i$ can be considered a constant instead of a variable.</p></li> <li><p><strong>$\forall$ introduction</strong> If $a$ is a constant symbol, it should be valid to conclude $\forall x \; \varphi(x)$ from $\varphi(a)$, if there are no formulas involving $a$ in scope. But, it isn't allowed. 
Because we insist on only generalizing from a <em>variable</em> which doesn't occur free in any other statement, we can't generalize from a <em>constant</em> even though it seems we should be able to.</p></li> </ul> <h1>Possible issues</h1> <ul> <li><p><strong>How do we distinguish between sentences and formulas?</strong> For any language $\mathcal{L}$, we can define a <em>sentence</em> to be a formula where all the constant symbols are either bound, or elements of $\mathcal{L}$.</p></li> <li><p><strong>What about quantifying over an important constant?</strong> If we are in the language of fields, it does seem a bit odd to allow a sentence such as $\forall 0 \; \forall 1 \; (0 + 1 = 1 + 0)$. But this is more bad style than anything else, and doesn't strike me as a problem with the formal system. Bound variables are just dummy variables whose names don't matter; quantifying over constants that are mentioned in the axioms would be discouraged, but not disallowed formally. We would have to say that within the scope of the quantification the axioms about the constant will not apply.</p></li> </ul>
Stephen A. Meigs
160,805
<p>On a fundamental level (not relative to the theory under consideration) Bourbaki does not define constants as separate from variables. To him, a <i>constant in a theory</i> is just a variable that does not occur (freely) in axioms of the theory. Clearly if you don't fundamentally distinguish constants from variables, then you need theories where a formula A can be a theorem without $\forall{x}A$ being a theorem; thus, the more fundamental question, perhaps, is whether such theories should be allowed. Indeed, it does seem slightly useful to define variables so that they can be the same in any language, reducing the bother of worrying about what the variables are, while it seems like $0$-ary functions (constant symbols) should depend on the language, but it doesn't feel fundamentally very useful, but just perhaps technically so.</p> <p>Say you are trying to prove $A \rightarrow B$. Ideally you might want to assume $A$ by considering it an axiom temporarily, then prove $B$. But technically, without allowing axioms whose generalizations don't hold, one can't assume $A$. Instead, the modern way in practice is to temporarily pretend the free variables of $A$ are new constants and then use the theorem on constants at the end to replace the constants with variables. This is an inelegant nuisance which becomes unnecessary if one does things more like Bourbaki. True, you don't automatically get that $\forall{x}A$ is a theorem whenever $A$ is a theorem, but so what? Instead you have the equally useful rule that $\forall{x}A$ is a theorem whenever $A$ is a theorem and $x$ does not appear freely in an axiomatization of the theory (i.e., it is not a constant of the theory in Bourbaki terminology). True, when giving axioms, Bourbaki must be clear whether their generalizations or the statements themselves are intended as axioms, but that's just a mild nuisance.</p> <p>Your first argument <b>logical implication for formulas</b> is spot on. 
Whether or not one feels it is more fundamental to interpret just the functions, rather than the functions and their variables simultaneously, it is definitely convenient to be able to do the latter in one fell swoop, as one might by merely interpreting everything (say) into a complete Henkin extension of the theory. One can't do that if by "complete" one means a consistent theory, generated by sentences, in which every sentence or its negation is in the theory, as opposed to a consistent theory, not necessarily generated by sentences, in which every statement or its negation is in the theory. True, the former, standard notion of completeness may be more fundamental (or not!), but that's not the point: it's convenient to be able to have both concepts available, with names for each.</p>
2,350,256
<p>Solve: $$y^{y\sqrt {y}}=(y\sqrt {y})^{y}$$</p> <p>My Attempt: $$y^{y\sqrt {y}}=(y\sqrt {y})^{y}$$ $$y^{y^{\dfrac {3}{2}}}=(y^{\dfrac {3}{2}})^y$$</p> <p>How do I solve further?</p>
Michael Rozenberg
190,319
<p>The domain is $y&gt;0$.</p> <p>We need to solve $$y^{y\sqrt{y}}=y^{\frac{3}{2}y},$$ which gives $y=1$ or $$y\sqrt{y}=\frac{3}{2}y,$$ which is $y=\frac{9}{4}$ and we get the answer: $\{1,\frac{9}{4}\}$.</p>
2,350,256
Grown pains
417,854
<p>Use logarithms.</p> <p>$y = 1$ is a solution.</p> <p>Let $y \not = 1$. Taking logs of both sides gives $$ y^{3/2}\log y = \frac{3y}{2} \log y, $$ or, dividing by $y \log y$ (which is nonzero since $y&gt;0$ and $y \ne 1$), $$ y^{1/2} = \frac{3}{2}, $$ so $y = \frac{9}{4}$.</p>
2,350,256
Baris Aytekin
450,397
<p>$y^{y^{3/2}}=y^{y}\, y^{y/2}=y^{3y/2}$</p> <p>Equating the exponents (which assumes $y \ne 1$): $y^{3/2} = \frac{3y}{2}$</p> <p>$2y^{3/2}=3y$</p> <p>Squaring both sides: $4y^{3}=9y^{2}$</p> <p>$4y^{3}-9y^{2}=0$</p> <p>$y^{2}(4y-9)=0$</p> <p>$SS=\{0,{9\over 4}\}$</p> <p>However, since $0^0$ is an indeterminate form (and the domain requires $y&gt;0$), $y=0$ is rejected, so this line of reasoning yields $9\over 4$. Note that $y=1$ also satisfies the original equation; equating exponents presupposed $y \ne 1$.</p>
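The candidate roots discussed in the answers above can be sanity-checked numerically (a minimal Python sketch; the helper names are mine):

```python
def lhs(y):
    # y^(y*sqrt(y))
    return y ** (y * y ** 0.5)

def rhs(y):
    # (y*sqrt(y))^y
    return (y * y ** 0.5) ** y

# y = 1 and y = 9/4 satisfy the equation; y = 0 lies outside the domain y > 0.
for y in (1.0, 2.25):
    assert abs(lhs(y) - rhs(y)) < 1e-9

# A generic value such as y = 2 does not satisfy it.
assert abs(lhs(2.0) - rhs(2.0)) > 1e-6
```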
3,969,679
<p><strong>The letters in the words GUMTREE and KOALA are rearranged to form a 12-letter word where KOALA appears precisely in order but not necessarily together. How many ways can this happen?</strong></p> <p>So I attempted it via this method:</p> <p>Firstly arrange like this, since KOALA must be in order but not necessarily together (let the full stops (.) be spots for the letters in GUMTREE; there are <span class="math-container">$7$</span> letters in GUMTREE and <span class="math-container">$6$</span> full stops, so <span class="math-container">$6^7$</span>):</p> <p>.K.O.A.L.A.</p> <p>But there are two E's, so <span class="math-container">$\frac{6^7}{2!}$</span>. The letters in KOALA are fixed so they have <span class="math-container">$1$</span> way each, except for A (there are two, so the first A has two choices and the second A has one choice).</p> <p>Therefore, <span class="math-container">$$2\cdot\frac{6^7}{2!}=279936$$</span></p> <p>But the answer is 1995840 arrangements.</p> <p>I believe my method is very close but I am forgetting to multiply by something. Can someone point out my logical flaw? Otherwise the worked solutions propose <span class="math-container">$\frac{12!}{5!2!}$</span>, but I don't get why you divide by 5! for the KOALA, since they are not identical letters... regardless, it would be great to understand both ways!</p> <p>Thanks</p>
ryang
21,813
<p>Probably simplest to frame the problem as arranging 12 letters, in which 2 are identical of one kind (EE in GUMTREE), and 5 are identical of another kind (XXXXX substituting for KOALA, which is treated as a black box since its letters have already been pre-arranged). So, <span class="math-container">$$\frac{12!}{2!5!},$$</span> as suggested.</p>
3,969,679
Will Orrick
3,736
<p>As I understand it, you are doing <span class="math-container">$7$</span> trials, one for each letter of GUMTREE. In the first trial, you decide which of the six slots, indicated by full stops, to use for G, in the second you decide which to use for U, and so on.</p> <p>Here's the problem with that approach: let's say all seven letters of GUMTREE get put in the first slot. You still haven't said how the letters in that slot are to be ordered. There are <span class="math-container">$7!/2!$</span> orders, so you might think to multiply by that factor. But let's say that G got put in the first slot, U in the second, and so on up to R in the fifth slot, with both Es put in the sixth slot. Now there is no rearrangement possible, so the appropriate multiplicative factor is <span class="math-container">$1$</span>, not <span class="math-container">$7!/2$</span>.</p> <p>Because of this issue I don't see any simple fix for your approach.</p>
3,969,679
true blue anil
22,388
<p>This might appeal to you as a simple way</p> <p>First arrange <span class="math-container">$-K-O-A-L-A-$</span> with gaps between letters.</p> <p>All we need to do is to count how many ways GUMTREE can be interleaved.</p> <p><span class="math-container">$G$</span> can be inserted in <span class="math-container">$6$</span> ways, <strong>but then</strong> <span class="math-container">$U$</span> <strong>can now be inserted in</strong> <span class="math-container">$7$</span> <strong>ways</strong>,<span class="math-container">$M$</span> can be inserted in <span class="math-container">$8$</span> ways, and so on, and finally divide by <span class="math-container">$2!$</span> for the repeated <span class="math-container">$E$</span></p> <p>Thus, <span class="math-container">$\dfrac{6\cdot 7\cdot 8\cdot 9\cdot {10}\cdot{11}\cdot{12}}{2!} = 1995840$</span></p>
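The counting arguments above can all be checked against one another numerically (a sketch using Python's standard `math` module):

```python
import math

# Count 1: choose 5 of the 12 positions for K,O,A,L,A (their order is fixed),
# then arrange G,U,M,T,R,E,E in the remaining 7 positions (divide by 2! for the E's).
count_positions = math.comb(12, 5) * math.factorial(7) // math.factorial(2)

# Count 2: 12!/(2!5!) -- EE identical, KOALA treated as 5 interchangeable placeholders.
count_multiset = math.factorial(12) // (math.factorial(2) * math.factorial(5))

# Count 3: insert G,U,M,T,R,E,E one at a time into the growing row of gaps,
# then divide by 2! for the repeated E.
count_insertion = (6 * 7 * 8 * 9 * 10 * 11 * 12) // 2

assert count_positions == count_multiset == count_insertion == 1995840
```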
758,158
<p>I am trying to use <span class="math-container">$f(x)=x^3$</span> as a counterexample to the following statement. </p> <p>If <span class="math-container">$f(x)$</span> is strictly increasing over <span class="math-container">$[a,b]$</span> then for any <span class="math-container">$x\in (a,b), f'(x)&gt;0$</span>. </p> <p>But how can I show that <span class="math-container">$f(x)=x^3$</span> is strictly increasing?</p>
Thomas
26,188
<p>You want to show that the function $f(x) = x^3$ is strictly increasing on $\mathbb{R}$.</p> <p>Maybe you can just use the definition. That is, let $a &lt; b$. Assume that $0&lt;a$ (you can do the other cases I am sure). Let $h = b - a &gt; 0$. You want to show that $f(a) &lt; f(b)$.</p> <p>So $$\begin{align} f(a) = a^3 &amp;= (b - h)^3\\ &amp;= b^3 - 3b^2h +3 bh^2 - h^3 \\ &amp;&lt;b^3 \\ &amp;= f(b). \end{align} $$ This is clear since $h &gt; 0$, so $-3b^2h + 3bh^2 = -3hb(b - h) = -3hba &lt; 0$, and $-h^3 &lt; 0$ as well.</p> <hr> <p>All that said, you could, of course, just use $f(x) = x$ as a counterexample.</p>
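A small numeric illustration of why $f(x)=x^3$ refutes the quoted statement (a sketch; the grid of sample points is my choice): $f$ preserves strict order, yet its derivative vanishes at $0$.

```python
def f(x):
    return x ** 3

def fprime(x):
    # derivative of x^3
    return 3 * x ** 2

# Strictly increasing on a sample grid spanning 0.
xs = [i / 10 for i in range(-30, 31)]
assert all(f(a) < f(b) for a, b in zip(xs, xs[1:]))

# Yet f'(0) = 0, so "f'(x) > 0 for all interior x" fails.
assert fprime(0) == 0
```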
696,900
<p>This should be trivial but for some reason I cannot think of a formula which enlarges a number in proportion to another number decreasing in the negative direction</p> <p>Example:</p> <p>if value 1 = $-0.1$, value 2 should be set to $0.9$<br> if value 1 = $-0.2$, value 2 should be set to $0.8$<br> if value 1 = $-0.4$, value 2 should be set to $0.6$</p> <p>and so on...</p> <p>Thanks for any help</p>
Zev Chonoles
264
<p>How about $$\mathsf{(\text{value 2})}=1+\mathsf{(\text{value 1})}\quad ?$$ It fits all the examples you gave.</p>
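The formula is a one-liner in code (the function name `value2` is just a hypothetical label for illustration):

```python
def value2(value1):
    # Shift by 1: as value1 moves from 0 toward -1, value2 shrinks from 1 toward 0.
    return 1 + value1

assert abs(value2(-0.1) - 0.9) < 1e-9
assert abs(value2(-0.2) - 0.8) < 1e-9
assert abs(value2(-0.4) - 0.6) < 1e-9
```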
3,112,263
<p>Given that there are three integers <span class="math-container">$a, b,$</span> and <span class="math-container">$c$</span> such that <span class="math-container">$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}=\frac{6}{7}$</span>, what is the value of a+b+c?</p> <hr> <p>Immediately, I see that I should combine the left hand side. Doing such results in the equation <span class="math-container">$$\frac{ab+ac+bc}{abc}=\frac{6x}{7x}.$$</span> This branches into two equations <span class="math-container">$$ab+ac+bc=6x$$</span><span class="math-container">$$abc=7x.$$</span> From this, I can tell that one of a, b, or c must be a multiple of 7, and the other two are a factor of <span class="math-container">$x$</span>. Now, I do trial and error, but I find this very tiring and time-consuming. Is there a better method?</p> <p>Also, if you are nice, could you also help me on <a href="https://math.stackexchange.com/questions/3098173/ns-base-5-and-base-6-representations-treated-as-base-10-yield-sum-s-for">this</a>(<a href="https://math.stackexchange.com/questions/3098173/ns-base-5-and-base-6-representations-treated-as-base-10-yield-sum-s-for">$N$&#39;s base-5 and base-6 representations, treated as base-10, yield sum $S$. For which $N$ are $S$&#39;s rightmost two digits the same as $2N$&#39;s?</a>) question?</p> <p>Thanks!</p> <p>Max0815</p>
Community
-1
<p>Your argument isn't correct as stated due to the issue pointed out by Dan Uznanski in the comments. But following your idea of combining things, we have</p> <p><span class="math-container">$$6abc = 7(ab + ac + bc)$$</span></p> <p>so that <span class="math-container">$7 | abc$</span> and <span class="math-container">$6 | (ab + ac + bc)$</span>. Let's assume that <span class="math-container">$a$</span> is divisible by <span class="math-container">$7$</span>, so clearly <span class="math-container">$a \ge 7$</span> (assuming that all the numbers are positive). It then follows that</p> <p><span class="math-container">$$\frac 1 b + \frac 1 c = \frac 6 7 - \frac 1 a \ge \frac 5 7 &gt; \frac 2 3$$</span></p> <p>Now if <span class="math-container">$b$</span> and <span class="math-container">$c$</span> were both strictly greater than <span class="math-container">$2$</span>, we would have a contradiction (why?). We can therefore assume <span class="math-container">$c = 2$</span> and reduce to</p> <p><span class="math-container">$$\frac 1 a + \frac 1 b = \frac 5 {14}.$$</span> Now repeat the ideas: We have</p> <p><span class="math-container">$$\frac 1 b = \frac 5 {14} - \frac 1 a \ge \frac 5 {14} - \frac 1 7 = \frac{3}{14} \implies b \le \frac{14}{3} \implies b \le 4.$$</span></p> <p>This is now a very finite set to check.</p>
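The case analysis above can be confirmed by exhaustive search over a range comfortably containing the bounds it establishes, assuming the integers are positive (a sketch using exact `Fraction` arithmetic; the search limit 50 is my choice):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# Search sorted triples a <= b <= c of positive integers with 1/a + 1/b + 1/c = 6/7.
# The bounds derived above (one value is 2, another is at most 4) keep all three small.
target = Fraction(6, 7)
solutions = set()
for a, b, c in combinations_with_replacement(range(1, 51), 3):
    if Fraction(1, a) + Fraction(1, b) + Fraction(1, c) == target:
        solutions.add((a, b, c))

assert solutions == {(2, 3, 42)}
# Hence a + b + c = 2 + 3 + 42 = 47.
```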
2,362,500
<p>When you are solving an integral, and you have a square root of a square, for instance</p> <p>$$ \int \sqrt{1+\cos x} = \int \sqrt{\frac{\sin^2x}{1-\cos x}}$$</p> <p>Do I have to take the absolute value of $\sin x$, or can I just take the positive value? </p> <p>$$= \int \frac{|\sin x|}{\sqrt{1-\cos x}}$$ OR $$= \int \frac{\sin x}{\sqrt{1-\cos x}}$$</p> <p>If I have to take the absolute value, should I look for a better method when solving integrals than that one, because two solutions seem more complicated.</p>
ProgSnob
456,185
<p>Using same nomenclature as you have used, there are 4 cases:</p> <ol> <li>P5 to P9 are selected = 1 case</li> <li>P1 and P2 both selected, choose any three from P5-P9 = 1*(5C3) = 10 cases</li> <li>P1 and P2 both selected, one of (P3, P4) is selected, and choose any two from P5-P9 = 1*(2C1)*(5C2) = 20 cases</li> <li>Don't select P1, P2; choose one out of P3, P4 and 4 from P5-P9 = (2C1)*(5C4) = 10 cases</li> </ol> <p>Total = 1+10+20+10 = 41</p>
1,912,660
<p>I am trying to really understand why the gradient of a function gives the direction of steepest ascent intuitively.</p> <p>Assuming that the function is differentiable at the point in question,<br> a) I had a look at a few resources online and also looked at this <a href="https://math.stackexchange.com/questions/223252/why-is-gradient-the-direction-of-steepest-ascent">Why is gradient the direction of steepest ascent?</a> , a popular question on this stackexchange site.<br> The accepted answer basically says that we multiply the gradient with an arbitrary vector and then say that the product is maximum when the vector points in the same direction as the gradient? This to me really does not answer the question, but it has 31 upvotes so can someone please point out what I am obviously missing?</p> <p>b) Does the gradient of a function tell us a way to reach the maxima or minima? if yes, then how and which one - maxima or minima or both?<br> Edit: I read the gradient descent algorithm and that answers this part of my question.</p> <p>c) Since gradient is a feature of the function at some particular point - am I right in assuming that it can only point to the local maxima or minima?</p>
user326210
326,210
<p>The question is <em>how you would measure the steepness of ascent</em>. For one-dimensional functions, steepness is defined in terms of the derivative:</p> <p>$$f^\prime(x) \equiv \lim_{h \rightarrow 0}\frac{f(x+h)-f(x)}{h}$$</p> <p>By this limit definition, steepness is measured by computing the slope between the points $\langle x, f(x)\rangle$ and $\langle x + h, f(x+h)\rangle$, and letting that distance $h$ get smaller and smaller.</p> <hr/> <p>Now the question is how we extend this idea of steepness to functions of <em>more than one</em> variable. </p> <p><strong>Trick #1: Directional steepness requires only ordinary derivatives</strong></p> <p>Suppose we have a two-variable function $f(x,y)$. (Conceptually, the graph of $f$ is a surface hovering above the $xy$ plane.) Because we are presumably just learning multivariable calculus, we don't have a mathematical definition for the "steepness" at a point $\langle x,y\rangle$. However, there is a trick:</p> <blockquote> <p>Suppose you pick a point $\langle x_0, y_0\rangle$. And you also pick a direction, in the form of a line like $2y = 3x$. You can see how the height of the function $f$ varies as you start at the point $\langle x_0, y_0 \rangle$ and take small steps in the direction of the line. You can compute this <em>directional steepness</em> using only the ordinary (one-dimensional) derivative.</p> </blockquote> <p>In fact the equation is something like this:</p> <p>$$D_{2y=3x} f = \lim_{h\rightarrow 0}\frac{f(x_0 + 2h, y_0 + 3h) - f(x_0, y_0)}{h}$$</p> <p>(Advanced side note: this definition really is just a one-dimensional derivative.
If I parameterize the line $2y=3x$ using a function like $u(t) = \langle 2t, 3t\rangle$, I can define the directional derivative as just $$D_u f \equiv D(f\circ u)(0).$$ To put it in more standard notation, $D_u f \equiv [\frac{d}{dt}f(u(t)) ]_{t=0}$ )</p> <p><strong>Trick #2: The gradient is a list of the steepness in each axis direction</strong></p> <p>In the previous section, we defined how to compute the direction steepness of a function &mdash; that is, the steepness <em>in the direction of a line</em>. </p> <p>The lines along the coordinate axes are especially important. If we have a multivariable function $f(x_1, x_2, x_3, \ldots, x_n)$, let $\ell_1, \ell_2, \ldots \ell_n$ be lines, where $\ell_i$ is the line lying along the $x_i$ axis.</p> <p>We'll define the gradient to be the list of directional steepnesses in each of the coordinate directions:</p> <p>$$\nabla f = \langle D_{\ell_1}f, D_{\ell_2}f, \ldots, D_{\ell_n}f\rangle.$$</p> <p>Let's think carefully about this structure. The function $f$ takes in a list of numbers $x_1,\ldots, x_n$ and produces a single number. The function $\nabla f$ takes in a list of $n$ numbers and produces a list of $n$ steepnesses (which are also numbers.)</p> <p>Visually, you can imagine that $\nabla f$ takes in a point $\langle x_1, \ldots, x_n\rangle$ and produces a steepness vector at that point. The components of that vector are made up of the directional steepnesses of the function $f$ in the direction of the coordinate axes.</p> <p><strong>Trick #3: Dot products measure directional overlap</strong></p> <p>When $\vec{u}$ and $\vec{v}$ are vectors, then the dot product between $\vec{u}$ and $\vec{v}$ can be defined by </p> <p>$$\vec{u}\cdot \vec{v} = ||\vec{u}|| \cdot ||\vec{v} || \cdot \cos{\theta},$$</p> <p>where $\theta$ is the angle between the two vectors. </p> <p>Now suppose $\vec{v}$ is kept constant. 
If we keep the length of $\vec{u}$ constant but allow it to revolve in a circle, for example, we can change the angle $\theta$ and see how it affects the dot product.</p> <p>Evidently, <em>the dot product is maximized when the two vectors are pointing in the same direction</em>, because then $\cos{\theta}=\cos{0} = 1$ is maximal. </p> <p><strong>Trick #4: You can compute directional steepness using the dot product</strong></p> <p>Recall that $D_u f$ is the steepness of $f$ in the direction of some line $u$. Recall that $\nabla f$ is the <em>gradient of $f$</em>&mdash; a list of the directional steepnesses in each of the coordinate directions.</p> <p>It turns out that the following fact is true:</p> <blockquote> <p>If $u(t) = \langle at, bt\rangle$ is the parametrization of a line, and if $u(t)$ has length 1 when $t=1$, then $$D_u(f) = \nabla f \cdot u(1) $$ In other words, we can compute the directional steepness as the dot product of the gradient and the direction of the line.</p> </blockquote> <p><strong>Conclusion: The gradient is the direction of steepest ascent</strong></p> <p>Because we can compute directional steepness as a dot product with the gradient, the answer to the question "In which direction is this function steepest?" is the same as the answer to the question "Which line will have the greatest dot product with the gradient?", which we know is "The line which is parallel to the gradient!".</p>
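The conclusion can be checked numerically (a sketch; the sample function $f(x,y)=x^2+3y$, the base point, and the step sizes are my choices): among many unit directions, the finite-difference directional derivative is largest in the direction of the gradient, and the maximal slope is close to $\|\nabla f\|$.

```python
import math

def f(x, y):
    return x ** 2 + 3 * y

# Gradient of f at (1, 2) is (2x, 3) = (2, 3).
x0, y0 = 1.0, 2.0
grad = (2 * x0, 3.0)

# Scan unit directions around the circle, estimating each directional slope
# by a forward finite difference.
h = 1e-6
best_theta, best_slope = None, -float("inf")
for k in range(3600):
    theta = 2 * math.pi * k / 3600
    u = (math.cos(theta), math.sin(theta))  # unit direction
    slope = (f(x0 + h * u[0], y0 + h * u[1]) - f(x0, y0)) / h
    if slope > best_slope:
        best_slope, best_theta = slope, theta

# The winning direction should be nearly parallel to grad/|grad|,
# and the winning slope should be close to |grad|.
gnorm = math.hypot(*grad)
udir = (math.cos(best_theta), math.sin(best_theta))
dot = (udir[0] * grad[0] + udir[1] * grad[1]) / gnorm
assert dot > 0.999
assert abs(best_slope - gnorm) < 1e-2
```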