Dataset schema: qid (int64, 1 to 4.65M), question (string, 27 to 36.3k chars), author (string, 3 to 36 chars), author_id (int64, -1 to 1.16M), answer (string, 18 to 63k chars).
2,485,109
<blockquote> <p>Proof by induction that $1^2 + 2^2 + 3^2 + \cdots + n^2 = \frac{1}{6}\cdot n\cdot (1+n)\cdot (1+2n)$</p> </blockquote> <p>I tried showing that </p> <p>$1^2 + 2^2 + 3^2 + \cdots + (k+1)^2 = \frac{1}{6}\cdot (k+1)\cdot (1+(k+1))\cdot (1+2(k+1))$</p> <p>By using the left side:</p> <p>$1^2 + 2^2 + 3^2 + \cdots + (k+1)^2$ </p> <p>$= 1^2 + 2^2 + 3^2 + \cdots + k^2 +(k+1)^2 $</p> <p>$= \frac{1}{6}\cdot k\cdot (1+k)\cdot (1+2k) + (k+1)^2$</p> <p>I tried expanding and making it equal the right side but I was not able to get it.</p>
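As a quick sanity check (my own sketch, not part of the original post), the claimed closed form can be verified numerically in Python before attempting the induction step:

```python
# Check the closed form 1^2 + 2^2 + ... + n^2 = n(1+n)(1+2n)/6 for many n.
def sum_of_squares(n):
    return sum(k * k for k in range(1, n + 1))

def closed_form(n):
    return n * (1 + n) * (1 + 2 * n) // 6

for n in range(1, 200):
    assert sum_of_squares(n) == closed_form(n)
print("closed form verified for n = 1..199")
```

A check like this does not replace the induction proof, but it rules out a typo in the formula itself.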
Aderinsola Joshua
395,530
<p>How many pairs of positive integers are solutions to the equation<br> $$5X + 7Y = 1234$$ Because $1234$ is even, and only the sum of two odd numbers or of two even numbers is even, and because $5$ and $7$ are both odd,<br> $X$ and $Y$ must be both odd or both even; if $X$ is odd, $Y$ is odd, and vice versa.<br> $X \gt 0$ and $Y \gt 0$<br> $5X + 7Y = 1234$ $5X = 1234 - 7Y$ $X = \frac{1234}{5} - \frac{7Y}{5}$.<br> Break the fraction $\frac{1234}{5}$ down: $ \begin{align} \frac{1234}{5} = 246 + \frac{4}{5}\\ X = 246 + \frac{4}{5} - \frac{7Y}{5}\\ X = 246 + \frac{4-7Y}{5}\\ X = 246 - \frac{7Y-4}{5}\\ X = 246 - n \end{align} $.<br> Since $X$ is a positive integer, $\frac{7Y-4}{5}$ must be a positive integer as well, say $\frac{7Y-4}{5} = n$.<br> So $Y = \frac{5n+4}{7}$, where $n$ is a positive integer that makes $Y$ a positive integer, as required by the question.<br> $n = 2, 9, 16, 23, 30, \ldots$ $n = 2 + 7*(a-1)$.<br> We need $n \lt 246$ for $X$ to remain positive, so by inspection the largest value of $n$ is $240$.<br> Put each value of $n$ into $\frac{5n+4}{7}$ to get $Y$.<br> $Y = 2, 7, 12, 17, \ldots, 172$ $Y = 2 + 5*(a-1)$ The maximum $Y$ is therefore $172$, and we can calculate the number of terms in the series for $Y$.<br> $Y_{\text{max}} = 172 = 2 + 5*(a-1)$.<br> Therefore $a = \frac{172-2}{5} + 1$, so $a = 35$.<br> Now work the original equation for $X$ also.<br> $ \begin{align} 7Y = 1234 - 5X\\ Y = \frac{1234}{7} - \frac{5X}{7}\\ Y = 176 + \frac{2}{7} - \frac{5X}{7}\\ Y = 176 + \frac{2-5X}{7}\\ Y = 176 - \frac{5X-2}{7}\\ Y = 176 - m \end{align} $.<br> $\frac{5X-2}{7}$ must be a positive integer as well, say $\frac{5X-2}{7} = m$,<br> where $m \lt 176$ for $Y$ to remain positive.<br> Insert the possible values for $m$ which make $Y$ a positive integer. 
$m = 4, 9, 14, 19, 24, \ldots$ $m = 4 + 5*(a-1)$ By inspection, the largest value is $m = 174$.<br> Because $X = \frac{7m+2}{5}$,<br> we get $X = 6, 13, 20, 27, \ldots, 244$ $X = 6 + 7*(a-1)$.<br> The maximum $X = 244$, and we can also calculate the number of terms in the series for $X$:<br> $X_{\text{max}} = 244 = 6 + 7*(a-1)$ Therefore $a$ also equals $35$.<br> The series for $X$ and $Y$ pair up in the equation, such that $X$ increases as $Y$ decreases and vice versa.<br> $ \begin{align} X \rightarrow 6, 13, 20, 27, \ldots, 230, 237, 244\\ Y \rightarrow 172, 167, 162, 157, \ldots, 12, 7, 2 \end{align} $ The number of terms in each series is $35$. Notice how odd $X$ align with odd $Y$ and even $X$ with even $Y$.</p>
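The count of $35$ solutions derived above can be confirmed by brute force (my sketch, not part of the original answer):

```python
# Count positive integer solutions (X, Y) to 5X + 7Y = 1234 by direct search.
solutions = [(x, (1234 - 5 * x) // 7)
             for x in range(1, 1234 // 5 + 1)
             if (1234 - 5 * x) > 0 and (1234 - 5 * x) % 7 == 0]
print(len(solutions))               # 35, matching a = 35 above
print(solutions[0], solutions[-1])  # (6, 172) and (244, 2), the series endpoints
```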
1,730,445
<p>I am checking whether the limit is true or not. $z$ is a complex number: \begin{equation} \lim_{z \rightarrow 0} z\sin(\frac{1}{z})=0 \end{equation} I found the Laurent series of $z\sin(\frac{1}{z})$, which is $\sum_{n=0}^{\infty} (\frac{1}{z})^{2n}\frac{1}{(2n+1)!}(-1)^n$. </p> <p>The series is not defined at $z = 0$.</p> <p>Can the Laurent series give us any information about whether the limit holds or not? Thank you in advance for your help.</p>
Henricus V.
239,207
<p>The Casorati-Weierstrass theorem implies that the limit does not exist. In fact, approaching from $i\mathbb{R}$ gives $$ \lim_{h \to 0} ih \sin \frac{1}{ih} = \lim_{h \to 0} h \sinh \frac{1}{h} = +\infty $$ (substitute $u = 1/h$ and note $\sinh(u)/u \to \infty$, e.g. by l'Hopital).</p>
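A numeric illustration of the divergence along the imaginary axis (my sketch, not part of the original answer): $h\sinh(1/h)$ grows without bound as $h \to 0^+$, so $z\sin(1/z)$ cannot have a limit at $0$.

```python
import math

# h * sinh(1/h) blows up as h -> 0+
values = [h * math.sinh(1 / h) for h in (0.5, 0.2, 0.1, 0.05)]
print(values)
assert all(a < b for a, b in zip(values, values[1:]))  # strictly increasing
assert values[-1] > 1e6                                # already past a million
```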
2,442,297
<blockquote> <p>Ryan has been given a salary increase of $7.39\%$. The salary increase amounts to €$4231$.</p> <p>His salary is now $x$. Solve for $x$.</p> </blockquote> <p>My head is saying that </p> <p>$$\begin{align} 4231 / 7.39 &amp;= 572 \\ 572 * 100 &amp;= 57,200 \end{align}$$</p> <p>is not correct, but I am having a brainfart right now.</p> <p>Can anyone help?</p> <p>Thanks.</p>
Community
-1
<p>As someone who's been told to go into accounting: you forgot to take markup versus margin into account. The 7.39% is a markup on the old amount (or about a 6.88% margin on the new amount). You found (approximately) 100% of the original amount but forgot to add the markup back in: you needed to find 107.39% of the original, so 572*107.39 = 57200+4004+171.6+51.48 = 61427.08. But none of that matters if you don't start with the correct numbers. </p>
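The same arithmetic without the intermediate rounding of $4231/7.39$ to $572$ (the exact figures here are my sketch, not the answerer's):

```python
# If a 7.39% raise equals 4231, the old salary is 4231 / 0.0739,
# and the new salary is 107.39% of the old one.
old = 4231 / 0.0739
new = old * 1.0739
print(round(old, 2), round(new, 2))
assert round(old) == 57253
assert round(new) == 61484
assert abs(new - (old + 4231)) < 1e-6  # the raise itself is exactly 4231
```

Rounding $4231/7.39$ down to $572$ is what produces the slightly smaller figure $61427.08$ in the answer above.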
3,727,772
<p>Let <span class="math-container">$Y_1,Y_2,\ldots,Y_{n+1}$</span> be non-empty subsets of <span class="math-container">$\{1,2,3,\ldots,n\}$</span>. Prove that there exist non-empty disjoint subsets <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> of <span class="math-container">$\{1,2,3,\ldots,n+1\}$</span> such that <span class="math-container">$$\bigcup\limits_{i\in A_1} Y_{i}=\bigcup\limits_{j\in A_2} Y_{j}.$$</span></p> <p>Please give a hint for this problem. I have been trying but could not proceed.</p>
lhl73
797,186
<p>Let <span class="math-container">$e_i$</span> for <span class="math-container">$i = 1, \ldots, n$</span> be the standard basis for <span class="math-container">$\Bbb{R}^n$</span>. That is, <span class="math-container">$e_i$</span> is the <span class="math-container">$n$</span>-tuple where the <span class="math-container">$i$</span>'th component is <span class="math-container">$1$</span> and all the other components are <span class="math-container">$0$</span>. To each <span class="math-container">$j \in \{1, \ldots, n+1\}$</span> we define a non-zero vector <span class="math-container">$v_j \in \Bbb{R}^n$</span> by <span class="math-container">$$v_j = \sum_{i \in Y_j}e_i.$$</span> We then have <span class="math-container">$n+1$</span> non-zero vectors in the <span class="math-container">$n$</span>-dimensional vector space <span class="math-container">$\Bbb{R}^n$</span>. So there is a non-trivial linear relation among them. That is, there exist numbers <span class="math-container">$c_j \in \Bbb{R}$</span> with not all <span class="math-container">$c_j = 0$</span> such that <span class="math-container">$$\sum_{j=1}^{n+1}c_j v_j = 0.$$</span> Now define <span class="math-container">$A = \{j | c_j &gt; 0\}$</span> and <span class="math-container">$B = \{j | c_j &lt; 0\}$</span>. And define two vectors <span class="math-container">$a,b \in \Bbb{R}^{n}$</span> as <span class="math-container">$a = \sum_{j\in A}c_j v_j$</span> and <span class="math-container">$b = \sum_{j \in B}-c_j v_j$</span>. 
Then we have <span class="math-container">$$0 = \sum_{j=1}^{n+1} c_j v_j = \sum_{c_j &gt; 0}c_j v_j + \sum_{c_j &lt;0} c_j v_j = a - b.$$</span> In other words <span class="math-container">$$a = b.$$</span> This shows that both <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are non-empty: at least one of them is non-empty because not all <span class="math-container">$c_j$</span> are zero, and if the other were empty, then <span class="math-container">$a = b = 0$</span>, which is impossible since a sum of the <span class="math-container">$v_j$</span> with strictly positive coefficients is non-zero.</p> <p>Let <span class="math-container">$a_i$</span> for <span class="math-container">$i=1, \ldots, n$</span> be the coordinates of <span class="math-container">$a$</span> in the standard basis. That is, <span class="math-container">$$a = \sum_{i=1}^{n}a_i e_i.$$</span> Since <span class="math-container">$a$</span> is a linear combination with positive coefficients of the <span class="math-container">$v_j$</span>s for <span class="math-container">$j \in A$</span>, which are themselves linear combinations of basis vectors with positive coefficients, we have that <span class="math-container">$a_i&gt;0$</span> if and only if there exists a <span class="math-container">$j \in A$</span> such that the coefficient of <span class="math-container">$e_i$</span> in the expression for <span class="math-container">$v_j$</span> is positive. And by the definition of <span class="math-container">$v_j$</span> this is the case if and only if <span class="math-container">$i \in Y_j$</span>.<br /> In other words we have shown that <span class="math-container">$a_i &gt; 0 \iff i \in \cup_{j\in A}Y_j$</span>.</p> <p>Similarly we write <span class="math-container">$$b = \sum_{i=1}^{n}b_i e_i.$$</span> And similarly we have <span class="math-container">$b_i &gt; 0 \iff i \in \cup_{j \in B}Y_j$</span>. But since <span class="math-container">$a = b$</span> we have <span class="math-container">$a_i = b_i$</span> for all <span class="math-container">$i$</span>. Hence <span class="math-container">$\cup_{j\in A}Y_j = \cup_{j\in B}Y_j$</span>. 
And this is what we needed to show since <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are non-empty and disjoint by construction.</p>
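The conclusion can also be confirmed by brute force on a small random instance (my sketch, not part of the answer; the proof above guarantees the search always succeeds):

```python
import itertools
import random

random.seed(0)
n = 4
# n+1 random non-empty subsets of {1, ..., n}
Y = [set(random.sample(range(1, n + 1), random.randint(1, n))) for _ in range(n + 1)]

def find_disjoint_pair(Y):
    """Search for disjoint non-empty index sets A1, A2 with equal unions of the Y's."""
    idx = range(len(Y))
    for r1 in range(1, len(Y) + 1):
        for A1 in itertools.combinations(idx, r1):
            rest = [j for j in idx if j not in A1]
            for r2 in range(1, len(rest) + 1):
                for A2 in itertools.combinations(rest, r2):
                    if set().union(*(Y[i] for i in A1)) == set().union(*(Y[j] for j in A2)):
                        return A1, A2
    return None

pair = find_disjoint_pair(Y)
print(Y, pair)
assert pair is not None
A1, A2 = pair
assert set(A1).isdisjoint(A2)
assert set().union(*(Y[i] for i in A1)) == set().union(*(Y[j] for j in A2))
```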
223,754
<p>Suppose $f$ and $g$ are such that $f(g(x)) = 1$. Does this imply that $g(f(x))$ is constant?</p>
Community
-1
<p>NO.</p> <p>$$f(x) = \begin{cases} 1 &amp; x \geq 0 \\ 0 &amp; x &lt; 0 \end{cases}$$</p> <p>$$g(x) = x^2, \,\,\, \forall x \in \mathbb{R}$$</p> <p>We then have $$f(g(x)) = f(x^2) = 1, \,\,\,\, \forall x \in \mathbb{R}$$ whereas $$g(f(x)) = f(x)^2 = \begin{cases} 1 &amp; x \geq 0 \\ 0 &amp; x &lt; 0 \end{cases}$$</p>
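The counterexample above checks out numerically (a quick sketch, not part of the original answer):

```python
# f is the indicator of x >= 0, g(x) = x^2, as in the answer above.
def f(x):
    return 1 if x >= 0 else 0

def g(x):
    return x * x

xs = [-3.0, -0.5, 0.0, 0.5, 3.0]
assert all(f(g(x)) == 1 for x in xs)    # f(g(x)) is identically 1
assert {g(f(x)) for x in xs} == {0, 1}  # g(f(x)) takes two distinct values
print("f(g(x)) is constant, g(f(x)) is not")
```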
223,754
<p>Suppose $f$ and $g$ are such that $f(g(x)) = 1$. Does this imply that $g(f(x))$ is constant?</p>
Community
-1
<p>No. Since $f(g(x)) = 1$ for all $x$, in particular $$f(g(f(x))) = 1,$$ so $$g(f(x)) \in f^{-1}(1).$$ But $f^{-1}(1)$ contains the whole image of $g$ and possibly more points, so $g(f(x))$ need not be constant.</p>
1,620,175
<p>Given the metric space $M = (\mathbb{R}^2, d)$ where $d = \operatorname{max}\{|x_1 - y_1|, |x_2 - y_2|\}$, how can one measure the distance from some arbitrary point $X$ to the line $y = 3$, say?</p> <p>How can it be done by use of the identity $$\operatorname{max}\{x, y\} = \frac{1}{2}(x + y+ |x - y|)\text{?}$$</p>
Rob Arthan
23,171
<p>Hint: the "circles" of radius $r$ about $X$ in $M$ are squares with edges parallel to the axes that are centred on $X$ and have sides of length $2r$. See <a href="https://en.wikipedia.org/wiki/Uniform_norm" rel="nofollow">https://en.wikipedia.org/wiki/Uniform_norm</a> or <a href="https://en.wikipedia.org/wiki/Norm_(mathematics)" rel="nofollow">https://en.wikipedia.org/wiki/Norm_(mathematics)</a>. To find the distance from $X$ to a line $l$, find the smallest of these squares that touches $l$.</p>
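A numeric illustration of the hint (my own sketch, not part of the answer): minimising the sup-metric distance from a point to candidate points on the line $y = 3$.

```python
# Sup-metric distance from X = (x1, x2) to the line y = 3, by scanning points (t, 3).
def d_inf(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def dist_to_line(p, step=0.001):
    ts = [p[0] + k * step for k in range(-20000, 20001)]
    return min(d_inf(p, (t, 3)) for t in ts)

for p in [(0.0, 0.0), (5.0, 1.0), (-2.0, 7.5)]:
    assert abs(dist_to_line(p) - abs(p[1] - 3)) < 1e-9
print("sup-metric distance to y = 3 equals |x2 - 3|")
```

The minimum is attained at $t = x_1$ (directly above or below $X$), which matches the picture of growing a square about $X$ until an edge touches the line.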
2,002,672
<p>I am ok with finding a power series for $\tanh^{-1}{z}$: since $\tanh iz = i\tan z$, we have $\tanh^{-1}z = \frac1i \tan^{-1}{iz}$, and using the power series for $\tan^{-1}$ we get:</p> <p>$$\tanh^{-1}{z} = \sum^\infty_{k=0} \frac{z^{2k+1}}{2k+1}$$</p> <p>But then I am asked to deduce that:</p> <p>$$1- \frac15 + \frac19 - \frac1{13} + \ldots = \frac{\pi + 2 \ln(1+\sqrt{2})}{4 \sqrt 2}$$</p> <p>Looking at this sum, it looks like it's the power series for something involving $\tanh^{-1} 1$ and $\tan^{-1} 1$. But surely $\tanh^{-1} 1$ won't be defined?</p> <p>I'm stumped, there's probably something I'm not seeing here.</p>
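The claimed identity survives a numeric check (my sketch, not part of the post); the partial sums of the alternating series converge slowly but steadily to the right-hand side:

```python
import math

# Partial sum of 1 - 1/5 + 1/9 - 1/13 + ... versus the claimed closed form.
N = 100_000
s = sum((-1) ** k / (4 * k + 1) for k in range(N))
target = (math.pi + 2 * math.log(1 + math.sqrt(2))) / (4 * math.sqrt(2))
print(s, target)
assert abs(s - target) < 1e-4  # alternating series: error < 1/(4N+1)
```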
Alex M.
164,025
<p>In the case of $\sin$, $\cos$, $\exp$, $\tan$ etc., there is nothing to prove because most mathematicians use those power series as their <em>definitions</em>, meaning that those functions are analytic by definition.</p> <p>A composition of analytic functions is again analytic on the subsets where it may be performed.</p> <p>In general, though, if one is given $f : D \subseteq \Bbb R \to \Bbb R$, one usually shows that for every compact $K \subseteq D$ there exists a constant $C_K \ge 0$ such that for every $x \in K$ and every $n \in \Bbb N$ one has $| f^{(n)} (x) | \le C_K ^{n+1} n!$, and this is a necessary and sufficient condition for $f$ to be analytic on $D$.</p>
239,097
<p>Recently I have wanted to buy some 30xx series GPUs to use for deep learning jobs, so does anyone know whether Mathematica's <code>NetTrain</code> supports 30xx series GPUs?</p>
Fidel I. Schaposnik
45,839
<p>At the time of writing this (Feb. 6, 2021), Mathematica 12.2 doesn't support these cards, see for example:</p> <p><a href="https://mathematica.stackexchange.com/questions/237234/extremely-long-gpu-initialization-times-on-mathematica-12-2-and-rtx3090">Extremely Long GPU Initialization Times on Mathematica 12.2 and RTX3090</a></p> <p>and</p> <p><a href="https://community.wolfram.com/groups/-/m/t/2141352" rel="noreferrer">https://community.wolfram.com/groups/-/m/t/2141352</a></p> <p>The reason seems to be that the MXNet paclet hasn't been updated to a compatible version yet, so one should presumably expect to see this issue solved sometime in the (near?) future...</p>
263,359
<p>Let $H$ be a monoid, and denote by $H^\times$ and $\mathcal A(H)$, respectively, the <em>set of units</em> (or <em>invertible elements</em>) and the <em>set of atoms</em> (or <em>irreducible elements</em>) of $H$ (an element $a \in H$ is an atom if $a \notin H^\times$ and $a = xy$ for some $x, y \in H$ implies $x \in H^\times$ or $y \in H^\times$). </p> <p>Given $x \in H$, we set $\mathsf L_H(x) := \{k \in \mathbf N^+: x = a_1 \cdots a_k \text{ for some }a_1, \ldots, a_k \in \mathcal A(H)\}$ if $x \ne 1_H$ and $\mathsf L_H(x) := \{0\} \subseteq \mathbf N$ otherwise (in factorization theory, this is referred to as the <em>set of lengths</em> of $x$ (relative to the atoms of $H$)). We say that $H$ is a <em>BF-monoid</em> if $\mathsf L_H(x)$ is non-empty and finite for every $x \in H \setminus H^\times$.</p> <blockquote> <p><strong>Q.</strong> Does there exist a commutative BF-monoid $H$ such that $H \ne H^\times$ and $au = a$ for all $a \in \mathcal A(H)$ and $u \in H^\times$? If so, can we make $|H^\times| = \kappa$ for every fixed (small) cardinal $\kappa \ne 0$?</p> </blockquote> <p>My guess is that the answer to both questions is positive, but so far I haven't been able to construct an example to prove it. And though the question is not so important, a positive answer would shed light on the relation (and the contrast) between two different "philosophies" beyond the definition of what is called the <em>factorization monoid</em> of $H$.</p>
Salvo Tringali
16,537
<p><a href="https://mathoverflow.net/a/263362/16537">Benjamin Steinberg's example</a> is neat and brilliant. Here is a different construction, which, though blatantly complicated, could help with related questions (I had had it in mind for days, but for some reason I couldn't make it work until minutes ago...).</p> <p>To start with, let $M$ be an additively written monoid (either commutative or not). I'll denote by $\mathcal P(M)$ the <em>extended power monoid</em> of $M$, that is, the monoid obtained by endowing the set of all subsets of $M$ with the binary operation $$ (X, Y) \mapsto X+Y := \{x+y: x \in X \text{ and }y \in Y\}, $$ and by $\mathcal P_{\rm fin}(M)$ the submonoid of $\mathcal P(M)$ consisting of all <em>non-empty finite</em> subsets of $M$ (namely, the <em>power monoid</em> of $M$).</p> <p>Accordingly, let $\mathbb N = (\mathbf N, +)$ be the additive monoid of non-negative integers and $G$ an additively written abelian group of cardinality $\kappa$, and take $H$ to be the smallest submonoid of $\mathcal P(G \times \mathbb N)$ containing all $1$-element subsets of $G \times \{0\}$, as well as all sets of the form $G \times X$ for which $X$ is a finite subset of $\mathbf N$ with $0 \in X$ and $|X| \ge 2$. Then $H$ is a commutative monoid with $$H^\times = \bigl\{\{(g,0)\}: g \in G\bigr\} \simeq_{\sf Grp} G \quad\text{and}\quad \mathcal A(H) = \{G \times X: 0 \in X \in \mathcal A(\mathcal P_{\rm fin}(\mathbb N))\},$$ and actually a (commutative) BF-monoid, as it follows from considering that $H$ is unit-cancellative and the function $H \to \mathbf N: (A, B) \mapsto |B| - 1$ is a length function (here we use a very basic result from additive number theory, i.e., that $|X+Y| \ge |X| + |Y| - 1$ for all non-empty $X, Y \subseteq \mathbf N$). 
Moreover, $A + U = A$ for every $U \in H^\times$ and $A \in \mathcal A(H)$, since $\{(g,0)\} + G \times X = G \times X$ for every $X \in \mathcal P_{\rm fin}(\mathbb N)$.</p> <p><em>Notes.</em> A monoid $M$ is called <em>unit-cancellative</em> provided that $xy = x$ or $yx = x$ for some $x, y \in M$ only if $y \in M^\times$, and a function $\lambda: M \to \mathbf N$ is a length function if $\lambda(x) &lt; \lambda(y)$ for all $x, y \in M$ for which $y = uxv$ for some $u, v \in M$ with $u \notin M^\times$ or $v \notin M^\times$. If $M$ is unit-cancellative, then it's possible to prove that $M$ is BF iff $M$ has a length function (if requested, I can give a reference).</p>
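The "very basic result from additive number theory" invoked above, $|X+Y| \ge |X| + |Y| - 1$ for non-empty finite sets of non-negative integers, can be spot-checked by brute force (my sketch, not part of the answer):

```python
import itertools
import random

# |X + Y| >= |X| + |Y| - 1 for non-empty finite sets of non-negative integers.
random.seed(1)
for _ in range(200):
    X = set(random.sample(range(30), random.randint(1, 8)))
    Y = set(random.sample(range(30), random.randint(1, 8)))
    sumset = {x + y for x, y in itertools.product(X, Y)}
    assert len(sumset) >= len(X) + len(Y) - 1
print("sumset inequality holds on all random samples")
```

Equality occurs exactly when $X$ and $Y$ are arithmetic progressions with the same common difference, which is why the map $(A,B) \mapsto |B| - 1$ decreases under proper factorizations.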
2,844,170
<p>First off, sorry if this is a basic question or one that has been asked before. I really don't know how to phrase it, so it's a hard question to google.</p> <p>I'm looking for a function that will generate a line similar to the one below</p> <pre><code> __/ __/ __/ / </code></pre> <p>I'm pretty good at math, but for some reason this seems to be stumping me as it seems like it should be really simple.</p> <p>In case it helps, I am planning on using it to drive an animation, so that it moves, pauses, moves, pauses, etc. using the current time (zero through infinity) as the input.</p> <p>I am using an "absolute" system (i.e., if I were to jump to frame 35, the math needs to be able to calculate frame 35 without knowing the frames before it), so I can't do anything like <code>if (floor(sin(time)) + 1 &gt; 0) { add 1 }</code></p>
zachThePerson
408,153
<p>John Wayland's answer was good and worked for the project I needed it for, but after playing around on a different project I came up with a more &quot;tweakable&quot; function that does the same thing, but allows changing of parameters.</p> <p>D is the duration of one cycle, T is the amount of time the transition should take, and A is the amount of change between cycles.</p> <p><span class="math-container">$$cycle(x) = A(max\{\frac{(x \mod D) -D}{T} + 1, 0\} + \lfloor \frac{x}{D} \rfloor)$$</span></p> <p><strong>Breakdown</strong></p> <p>Create a saw wave with the intended duration: <span class="math-container">$$(x \mod D)$$</span> Clamp it so that only 1 unit is greater than 0: <span class="math-container">$$max\{(x \mod D) - D + 1, 0\}$$</span> Set the duration of the transition: <span class="math-container">$$max\{\frac{(x \mod D) -D}{T} + 1, 0\}$$</span> Create a stepped line, where each step is the duration of one cycle: <span class="math-container">$$\lfloor\frac{x}{D}\rfloor$$</span> Add the two together so that each step of the previous functions &quot;lifts&quot; each cycle up by 1 unit: <span class="math-container">$$max\{\frac{(x \mod D) -D}{T} + 1, 0\} + \lfloor \frac{x}{D} \rfloor$$</span> Finally multiply the whole thing by A to set the amount of desired change between cycles.</p> <p><a href="https://i.stack.imgur.com/SVszo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SVszo.png" alt="Screenshot of working function" /></a></p> <p>As a bonus, you can also wrap the <code>max</code> function in a smoothstep function and get a very nice smooth graph for animation</p> <p><span class="math-container">$$S(x)=x^2(3-2x)$$</span> <span class="math-container">$$cycle(x) = A(S(max\{\frac{(x \mod D) -D}{T} + 1, 0\}) + \lfloor \frac{x}{D} \rfloor)$$</span></p> <p><a href="https://i.stack.imgur.com/hda9c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hda9c.png" alt="Screenshot of working smoothstepped function" /></a></p>
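The $cycle$ function above, transcribed directly into Python (the parameter defaults here are my own choice, not from the answer):

```python
import math

def cycle(x, D=10.0, T=2.0, A=1.0):
    """Stepped 'move, pause' curve: D = cycle duration, T = transition time,
    A = step height. Computed from the absolute time x alone."""
    ramp = max((math.fmod(x, D) - D) / T + 1, 0)
    return A * (ramp + math.floor(x / D))

assert cycle(0) == 0.0
assert cycle(5) == 0.0   # mid-pause: the ramp has not started yet
assert cycle(9) == 0.5   # halfway through the 2-unit transition
assert cycle(10) == 1.0  # one full cycle completed
assert cycle(25) == 2.0
print([cycle(x) for x in (0, 5, 8, 9, 10, 25)])
```

Because it depends only on `x`, the function satisfies the asker's "absolute" requirement: any frame can be evaluated without knowing the frames before it.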
2,970,053
<p>I've been trying to understand this problem for hours but not getting it. HELP!!!</p> <p>The correct answer is <span class="math-container">$\frac{2}{3}$</span>, but I don't know why this is the correct answer.</p> <p>Thank you in advance for your help!</p>
trancelocation
467,003
<p>Note that the symmetry argument of M. Wind produces an elegant solution. </p> <p>Here, I present a solution "on combinatorial footing":</p> <ul> <li>First, arrange the <span class="math-container">$37$</span> other books: <span class="math-container">$\color{blue}{37!}$</span></li> <li>These <span class="math-container">$37$</span> books offer <span class="math-container">$38$</span> "slots" in which to put the <span class="math-container">$3$</span> books in the given specific order: <span class="math-container">$\color{green}{38}$</span></li> <li>In each "slot" <span class="math-container">$s_i$</span> <span class="math-container">$(i=1,\ldots, \color{green}{38})$</span> you can put <span class="math-container">$s_i =0,1,2,3$</span> books, such that <span class="math-container">$s_1 + \cdots +s_{\color{green}{38}} = \color{orange}{3}$</span>. As the order of the books is fixed, you get as the number of ways (using the formula for combinations with repetitions): <span class="math-container">$\binom{\color{green}{n}+\color{orange}{k}-1}{\color{orange}{k}}=\color{blue}{\binom{40}{3}}$</span></li> </ul> <p>It follows: <span class="math-container">$$P(\text{books of series appear in ascending order}) = \frac{\color{blue}{37!\cdot \binom{40}{3}}}{40!} = \frac{1}{6}$$</span></p>
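The final probability can be evaluated exactly (a quick sketch, not part of the original answer):

```python
from fractions import Fraction
from math import comb, factorial

# P = 37! * C(40, 3) / 40!  -- the count derived above over all 40! orderings.
p = Fraction(factorial(37) * comb(40, 3), factorial(40))
print(p)  # 1/6
assert p == Fraction(1, 6)
```

This matches the symmetry argument: the $3$ series books land in one of $3! = 6$ equally likely relative orders, exactly one of which is ascending.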
2,205,776
<p>I am pretty sure there is an easy counterexample, but I cannot find one right now.</p>
Fly by Night
38,495
<p>Your question is incorrect: there is no counter-example.</p> <p>For a sequence of integers $(n_k)$ to converge, you need: for any $\varepsilon &gt;0 $, there is an integer $K &gt;0$, such that $k \ge K \ \ \implies \ \ \left|n_k-n_K\right| &lt; \varepsilon $.</p> <p>If, for example, $\varepsilon = \frac{1}{4}$, then $\left|n_k-n_K\right| &lt; \varepsilon \iff n_k = n_K$. We would need $n_k=n_K$ for all $k&gt;K$. That means the sequence needs to be constant after a certain point.</p>
672,412
<p>I am reading an e-book called <a href="http://www.ldsinsight.org/">To Infinity and Beyond</a> by Dr. Kent A Bessey. In the book the author makes the claim that Georg Cantor made a discovery "where half of a pie is as large as the whole".</p> <p>In talking about it, he seems to claim that because half a pie can be broken into an infinite number of pieces, and likewise a whole pie can be broken into an infinite number of pieces, they are in fact the same size.</p> <p>By the same concept, he states that if you took all of the pieces of the edge of a box you could create as many more boxes of whatever size you wanted using those pieces.</p> <p>This seems undeniably false to me. I cannot help but draw a parallel with limits at infinity, where those limits may equal 2 or some other finite value. In my view, even if you were to break half a pie into an infinite number of pieces, the pieces could never add up to more than half a pie.</p> <p>Am I misunderstanding? Can someone explain this concept better?</p>
Peteris
124,006
<p>The pies are a bad analogy, since they aren't infinite. However, maybe the following analogy would be useful and a different perspective from other answers:</p> <p>Suppose a genie gives you a magic cup of beer that's always full, no matter how much you pour out or drink of it. Now, let's suppose that you get another, exactly identical cup - can you argue that the total amount of beer inside of them both is different than the amount of beer in the first one?</p>
706,514
<p>I know that the fundamental groups of homeomorphic spaces are isomorphic. Is the converse true? I mean, can we say that two spaces with isomorphic fundamental groups are homeomorphic? </p>
Dan Rust
29,059
<p>Let $X'=X\sqcup Y$ where $Y$ is a space with cardinality greater than that of $X$, and let the basepoint of $X'$ still lie inside the copy of $X$. Then $\pi_1(X')=\pi_1(X)$, but the two spaces are not homeomorphic because they have different cardinalities.</p>
3,399,276
<p>If the problem is to write the following with simplified polynomials</p> <p><span class="math-container">$$\frac{x^2 + 5x + 6}{x^2+1}$$</span></p> <p>Is it possible to do this problem with synthetic division? If so, how?</p> <p>I've tried googling, finding on youtube, even plug this in Wolfram Alpha, no helpful results :/</p>
Dr. Sonnhard Graubner
175,066
<p>You do not need synthetic division here. Since the numerator and denominator have the same degree, one step of polynomial division does it: <span class="math-container">$$\frac{x^2+5x+6}{x^2+1}=\frac{x^2+1+5x+5}{x^2+1}=1+{\frac {5\,x+5}{{x}^{2}+1}}$$</span> (If instead you want the zeros of <span class="math-container">$$\frac{x^2+5x+6}{x^2+1}$$</span> you have to solve the equation <span class="math-container">$$x^2+5x+6=0$$</span>)</p>
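The rewrite can be spot-checked exactly with rational arithmetic (my sketch, not part of the answer):

```python
from fractions import Fraction

# (x^2 + 5x + 6) / (x^2 + 1) == 1 + (5x + 5) / (x^2 + 1), checked at rational points.
for x in [Fraction(k, 3) for k in range(-12, 13)]:
    lhs = (x * x + 5 * x + 6) / (x * x + 1)
    rhs = 1 + (5 * x + 5) / (x * x + 1)
    assert lhs == rhs
print("identity verified at 25 rational points")
```

Since both sides are rational functions agreeing at more points than their degree, they agree identically.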
796,787
<p>Given a Lipschitz function $g$ (i.e. $|g(x) - g(y)| \leq L |x - y|, \forall x, y \in dom(g)$), and a function $f$ integrable on $[a, b]$, how do we prove $g \circ f$ is integrable on $[a, b]$, preferably using Darboux integrals/sums?</p> <p>Let us assume $g, f$ have appropriate domains and codomains.</p>
EPS
133,563
<p><strong>Hint</strong>: A function is (Riemann) integrable if and only if it is bounded and its set of discontinuities has Lebesgue measure zero.</p>
209,044
<p>Please help me deal with this kind of question about ODEs. My codes are as follows</p> <pre><code>m = 100; a = D[x[t], {t, 2}]; t1up = 2 x''[t] + 1/2 (490 + 34 x''[t] + 2 (490 + 50 x''[t])); t1down = 490 + 53 x''[t]; t1 = Piecewise[{{t1up, x'[t] &gt;= 0}, {t1down, x'[t] &lt; 0}}] equa00 = t1 == m*a t0 = 50; s1 = NDSolve[{equa00, x[0] == 1, x'[0] == 1}, x, {t, 0, 50}] </code></pre> <p>However, I get an error:</p> <blockquote> <p>NDSolve::ntdvdae: Cannot solve to find an explicit formula for the derivatives. NDSolve will try solving the system as differential-algebraic equations. >></p> </blockquote> <p>So is it a differential-algebraic equation? How to solve it?</p> <p>I have another question, too: How to plot the <code>t1-t</code> figure after we get the <code>s1</code>? I have tried the following codes:</p> <pre><code>t1upvalue = (t1up /. {x'[t] -&gt; (x'[t] /. s1), x''[t] -&gt; (x''[t] /. s1)}) t1downvalue = (t1down /. {x'[t] -&gt; (x'[t] /. s1), x''[t] -&gt; (x''[t] /. s1)}) t1value = Piecewise[{{t1upvalue, (x'[t] /. s1) &gt;= 0}, {t1downvalue, (x'[t] /. s1) &lt; 0}}], Plot[t1value[[1]], {t, 0, t0},PlotRange -&gt; All] </code></pre> <p>However it doesn't work.</p>
xinxin guo
4,756
<p>Changing the last line to: </p> <pre><code>s1 = NDSolve[{equa00, x[0] == 1, x'[0] == 1}, x, {t, 0, 50}, SolveDelayed -&gt; True] </code></pre> <p>or</p> <pre><code>s1 = NDSolve[{equa00, x[0] == 1, x'[0] == 1}, x, {t, 0, 50}, Method -&gt; {"EquationSimplification" -&gt; "Residual"}] </code></pre> <p>seems to help with your problem.</p> <p><strong>In response to the updated question on plotting the solution</strong></p> <p>To plot your solution, maybe this is what you want?</p> <pre><code>Remove["Global`*"] // Quiet; m = 100; a = D[x[t], {t, 2}]; t1up = 2 x''[t] + 1/2 (490 + 34 x''[t] + 2 (490 + 50 x''[t])); t1down = 490 + 53 x''[t]; t1 = Piecewise[{{t1up, x'[t] &gt;= 0}, {t1down, x'[t] &lt; 0}}]; equa00 = t1 == m*a; t0 = 50; (*s1 = NDSolveValue[{equa00 // Simplify`PWToUnitStep, x[0] == 1, x'[0] == 1}, x, {t, 0, 50}];*) s1 = x /.First@NDSolve[{equa00 // Simplify`PWToUnitStep, x[0] == 1, x'[0] == 1}, x, {t, 0, 50}]; sAll = {x[t] -&gt; s1[t], x'[t] -&gt; s1'[t], x''[t] -&gt; s1''[t]}; t1upvalue = t1up /. sAll; t1downvalue = t1down /. sAll; t1value = Piecewise[{{t1upvalue, s1'[t] &gt;= 0}, {t1downvalue, s1'[t] &lt; 0}}]; Plot[t1value, {t, 0, t0}, PlotRange -&gt; All] </code></pre>
85,228
<p>I wonder if the periodic paths of a lightray trapped between two nonconcentric circles, each perfectly reflecting, are known. The behavior of such rays seems chaotically complicated. For example, left below the highlighted initial ray has slope <span class="math-container">$\frac{1}{12}$</span>, while that to the right has slope <span class="math-container">$\frac{1}{9}$</span>. (The green circle has radius <span class="math-container">$\frac{3}{4}$</span>.) After 500 reflections, neither ray has become periodic. <br /> &nbsp;&nbsp;&nbsp;<img src="https://i.stack.imgur.com/Yak4Y.jpg" alt="Light rays trapped in annulus"> <br /> I suspect the combination of dispersive and focusing reflection (from the inner and outer circles respectively) leads to this complex behavior. But perhaps the periodic paths are known?</p> <p>I only know the 2-cycles from rays collinear with the circle centers, and those that Noam Elkies kindly identified in his comment: "regular <span class="math-container">$n$</span>-gons (and <span class="math-container">$(n/k)$</span>-gons, i.e. stars) that stay so close to the outer circle that they never hit the inner one." </p> <p>(Related MO question: "<a href="https://mathoverflow.net/questions/38307/">Trapped rays bouncing between two convex bodies</a>.") <hr /> <b>Update 1</b>. I found one! :-) No doubt among the "simple short cycles" that Noam had in mind: <br /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<img src="https://i.stack.imgur.com/1tdDg.jpg" alt="3-cycle"> <br /> <hr /> <b>Update 2</b>. With the benefit of the search terms helpfully provided by Ian Agol and Igor Riven, I found a useful <em>Physical Review</em> paper by G. Gouesbet, S. Meunier-Guttin-Cluzel, and G. Grehan, "Periodic orbits in Hamiltonian chaos of the annular billiard," <a href="http://pre.aps.org/abstract/PRE/v65/i1/e016212" rel="nofollow noreferrer">Phys. Rev. E 65, 016212 (2001)</a>. 
Their approach is more experimental than theoretical:</p> <blockquote> <p>Periodic orbits embedded in the phase space are systematically investigated, with a focus on inclusion-touching periodic orbits, up to symmetrical orbits of period 6. Candidates for periodic orbits are detected by investigating grayscale distance charts and, afterward, each candidate is validated (or rejected) by using analytical and/or numerical methods.</p> </blockquote> <p>The (unstable) 3-cycle I found is labeled "2(1)1" in their (barely discernible) Fig. 5 inventory below: <br /> &nbsp;&nbsp;&nbsp;<img src="https://i.stack.imgur.com/Diet8.jpg" alt="Fig5"></p>
Igor Rivin
11,142
<p>Following up on @Ian's comments: the magic words are "monotone twist maps of the annulus" and "Aubry-Mather theory". Googling either of the above (or looking in Katok-Hasselblatt, who have a whole chapter on the subject) is your best bet.</p>
4,602,464
<p>I've been asked to provide context to my original question, so here's the context:</p> <p>The rectangle in the problem below represents a pool table whose &quot;pool table light&quot; cannot be easily moved, but CAN easily be rotated. No portion of the pool table's perimeter can be too close to a wall in order for the players to use their pool sticks uninhibited. The left side of the pool table (head-side) is already as close to the wall as this threshold. Therefore, at first when you rotate it counter-clockwise on the table's center point, the corner will become closer to the wall than desired during the first x number of degrees of rotation. However, eventually there will be a degree of rotation where the corner is no longer too close to the wall again.</p> <p>I'm interested in how to determine this degree of rotation mathematically more than I'm interested in practical suggestions about alternative ways of addressing this concrete problem. This is the reason why I initially asked the question in the abstracted form below:</p> <p><strong>Original Question:</strong></p> <p>If the size of a rectangle is 55.5&quot; x 99.75&quot;, and its top-left corner's edge is located at an origin (0,0) on a Cartesian plane, while its top-right corner's edge is located at (97.5,0) on a Cartesian plane, as you begin to rotate the rectangle on its center point counter-clockwise the top-left corner's edge-position will have an x value location that is negative. This x value becomes more negative as you increase the counter-clockwise rotation for a certain number of degrees, and then the x value increases until it is no longer negative at some degree of rotation.</p> <p><strong>At what degree of counter-clockwise rotation will x no longer be a negative value?</strong></p> <p>I'm thinking it should be a little over 56 degrees. 
I tested this by putting a piece of paper along the left edge of a <a href="https://i.imgur.com/1FVhLRW.png" rel="nofollow noreferrer">rectangle that I drew to scale</a> onto my computer's display using <a href="https://inkscape.org/" rel="nofollow noreferrer">InkScape</a>. As I rotated this scaled rectangle counter-clockwise, I could see the rectangle's top-left corner translucently through this paper that began to cover it due to the rotation. According to Inkscape's rotation measurements, the top-left corner wasn't fully visible again until I rotated it approximately 56 degrees.</p> <p><strong>Here are some conflicting AI generated answers. Are any correct?</strong></p> <h2><strong>Answer 1:</strong></h2> <p>To find the degree at which the x value of the top-left corner's edge becomes non-negative, we can use trigonometry. The center of the rectangle is at the point (48.75, 49.875), which is the midpoint of the line segment connecting the top-left and top-right corners.</p> <p>Let's call the angle of rotation theta. The x value of the top-left corner's edge can be expressed as:</p> <p>x = -55.5 * cos(theta) + 48.75</p> <p>We want to find the value of theta such that x is non-negative, or x &gt;= 0. Substituting the expression for x into this inequality, we get:</p> <p>-55.5 * cos(theta) + 48.75 &gt;= 0</p> <p>Solving for theta, we get:</p> <p>theta &lt;= arccos((48.75 / -55.5))</p> <p>The inverse cosine function, or arccos, returns an angle in radians. To convert this angle to degrees, we can multiply it by 180/pi, where pi is approximately equal to 3.14159. 
This gives us:</p> <p>theta &lt;= 57.9 degrees</p> <p>Therefore, the top-left corner's edge has a non-negative x value at or after a counter-clockwise rotation of 57.9 degrees.</p> <h2><strong>Answer 2:</strong></h2> <p>To find the degree of rotation at which the top-left corner's x-coordinate becomes positive, we can use some basic geometry and trigonometry.</p> <p>First, let's call the center of the rectangle's axis of rotation (the point about which it is rotating) the origin of the coordinate system, and let's call the distance from the origin to the top-left corner of the rectangle &quot;r&quot;. We can then use the Pythagorean theorem to find the value of r:</p> <p>r = sqrt((55.5/2)^2 + (99.75/2)^2) = sqrt(3025/4 + 9900.0625/4) = sqrt(7562.53125/4) = sqrt(1890.6328125) ≈ 43.42</p> <p>Now, let's call the angle between the positive x-axis and the line connecting the origin to the top-left corner of the rectangle &quot;theta&quot;. Using some basic trigonometry, we can find the value of theta:</p> <p>theta = atan((99.75/2)/(55.5/2)) = atan(4) ≈ 69.1 degrees</p> <p>Now, let's consider the rotation of the rectangle about the origin. As the rectangle rotates counter-clockwise, the angle between the positive x-axis and the line connecting the origin to the top-left corner of the rectangle will decrease. When this angle becomes zero, the x-coordinate of the top-left corner will be positive.</p> <p>Therefore, the degree of counter-clockwise rotation at which the x-coordinate of the top-left corner becomes positive is:</p> <p>360 - theta = 360 - 69.1 ≈ 291 degrees</p> <p>Note that this is just one solution -- the x-coordinate of the top-left corner will also become positive after any multiple of 360 degrees of rotation. 
For example, it will also become positive after 652 degrees of rotation (291 + 360), or 913 degrees of rotation (291 + 360 + 360), etc.</p> <h2><strong>Answer 3:</strong></h2> <p>To find the degree of rotation at which the x value of the top-left corner's edge is no longer negative, we can use trigonometry.</p> <p>First, let's call the center of the rectangle (the point about which it rotates) point C, and let's call the top-left corner of the rectangle point A. Let's also call the angle of rotation theta (θ).</p> <p>[asy] unitsize(2cm);</p> <p>pair A, B, C;</p> <p>A = (-0.5,1); B = (1.5,1); C = (0.5,0.5);</p> <p>draw(A--B--C--cycle); draw(C--A,dashed);</p> <p>label(&quot;<span class="math-container">$A$</span>&quot;, A, N); label(&quot;<span class="math-container">$B$</span>&quot;, B, N); label(&quot;<span class="math-container">$C$</span>&quot;, C, S); label(&quot;<span class="math-container">$\theta$</span>&quot;, (0.7,0.9)); label(&quot;<span class="math-container">$OC$</span>&quot;, (C--(0,0)), S); [/asy]</p> <p>Since point C is the center of rotation, the distance from C to the x-axis (the line with equation y = 0) is constant. Let's call this distance h. We can then use the trigonometric functions sine and cosine to find the x and y coordinates of point A as a function of the angle of rotation theta.</p> <p>The x coordinate of A is given by:</p> <p>x = hcos(theta)</p> <p>The y coordinate of A is given by:</p> <p>y = hsin(theta)</p> <p>Since we want to find the angle of rotation theta such that x is no longer negative, we can set x = 0 and solve for theta:</p> <p>0 = hcos(theta)</p> <p>cos(theta) = 0</p> <p>Theta = 90 degrees or 270 degrees</p> <p>So, the angle of rotation at which the x value of the top-left corner's edge is no longer negative is 90 degrees or 270 degrees.</p> <h2><strong>Answer 4:</strong></h2> <p>I apologize for the misunderstanding in my previous responses. 
You are correct that the degree of counter-clockwise rotation at which the top-left corner's x-coordinate becomes non-negative is approximately 56 degrees, as determined through your experimentation with a scaled rectangle.</p> <p>To confirm this result using trigonometry, we can use the sine function:</p> <p>opposite = hypotenuse * sin(angle)</p> <p>Substituting in the known values, we get:</p> <p>opposite = 99.75 * sin(angle)</p> <p>We want to find the angle at which the opposite side is no longer negative, so we set opposite = 0 and solve for angle:</p> <p>0 = 99.75 * sin(angle) angle = arcsin(0 / 99.75) angle = 55.999999 degrees</p> <p>This result is approximately 56 degrees, as you determined through your experimentation.</p> <p>I apologize for any confusion my previous responses may have caused. Let me know if you have any further questions.</p> <h2><strong>Answer 5:</strong></h2> <p>To determine the degree of counter-clockwise rotation at which the top-left corner's x-coordinate becomes non-negative, we can use some basic trigonometry.</p> <p>The top-left corner's x-coordinate can be represented as the length of the side opposite the angle of rotation in a right triangle, with the center of the rectangle as the right angle and the side connecting the center of the rectangle to the top-left corner as the hypotenuse. 
The length of the side connecting the center of the rectangle to the top-right corner represents the length of the side adjacent to the angle of rotation.</p> <p>We can use the sine function to calculate the length of the side opposite the angle of rotation:</p> <p>opposite = hypotenuse * sin(angle)</p> <p>Substituting in the known values, we get:</p> <p>opposite = 99.75 * sin(angle)</p> <p>We want to find the angle at which the opposite side is no longer negative, so we set opposite = 0 and solve for angle:</p> <p>0 = 99.75 * sin(angle) angle = arcsin(0 / 99.75) angle = 56.309932474020215 degrees</p> <p>Therefore, the top-left corner's x-coordinate becomes non-negative at approximately 56.31 degrees of counter-clockwise rotation.</p> <p>I apologize for any confusion my previous responses may have caused. Let me know if you have any further questions.</p>
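Rather than adjudicating the answers above by hand, the crossing angle can be checked numerically. This is only a sketch under stated assumptions: the rectangle is 55.5″ × 99.75″ (as in the question's first sentence; the `(97.5, 0)` coordinate appears to be a typo), it rotates about its exact center, and the long side starts out horizontal.

```python
import math

# Half-dimensions of the 55.5" x 99.75" table; long side horizontal at first.
w, h = 99.75 / 2, 55.5 / 2

def corner_x(theta_deg):
    """Absolute x of the (initially) top-left corner after a counter-clockwise
    rotation by theta_deg about the table's center.

    The center sits at x = w (the corner starts at x = 0), and the corner
    sits at (-w, h) relative to the center."""
    t = math.radians(theta_deg)
    # Rotate the relative position (-w, h) by t, then shift back by +w.
    return w + (-w * math.cos(t) - h * math.sin(t))

# Sweep upward from 0 until the corner's x-coordinate is non-negative again.
theta_sweep = 0.001
while corner_x(theta_sweep) < 0:
    theta_sweep += 0.001

# Closed form: by symmetry the corner returns to x = 0 after rotating through
# twice the angle between the diagonal and the long side.
theta_exact = math.degrees(2 * math.atan2(h, w))

print(theta_sweep, theta_exact)
```

Under these assumptions both the sweep and the closed form land near 58.2°, somewhat above the ≈56° measured on screen; re-measuring with the exact dimensions would settle the gap.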
Nick Castillo
683,736
<p>These are subtly different. On a given (say finite-dimensional) vector space, a one-form is simply a linear map from vectors to the field. On a manifold, each point <span class="math-container">$p$</span> has a vector space associated with it (the tangent space <span class="math-container">$T_pM$</span>), on which one can define one-forms as above. The collection of all one-forms at <span class="math-container">$p$</span> is commonly denoted <span class="math-container">$T_pM^*$</span>. The difference here is that a one-form on a manifold is a function <span class="math-container">$$\omega: p \mapsto \omega_p \in T_pM^*$$</span> assigning a covector to each point,</p> <p>and this assignment must interact in a "nice" way with the smooth structure of <span class="math-container">$M$</span>, or if you like, with the tangent bundle <span class="math-container">$TM$</span>.</p>
3,257,387
<p>Denote by <span class="math-container">$(a_n)$</span> a sequence. If the consecutive differences tend to <span class="math-container">$0$</span> as <span class="math-container">$n$</span> tends to infinity, does it mean that <span class="math-container">$(a_n)$</span> converges? I feel like it should, but I am struggling to see a clear-cut proof.</p>
Floris Claassens
638,208
<p>No, consider <span class="math-container">$$a_{n}=\sum^{n}_{i=1}\frac{1}{i}.$$</span> Clearly <span class="math-container">$a_{n+1}-a_{n}=\frac{1}{n+1}$</span> tends to <span class="math-container">$0$</span>, but <span class="math-container">$a_{n}$</span> diverges to infinity.</p>
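A quick numerical illustration of this counterexample (a sketch in Python): the consecutive differences shrink to zero while the partial sums keep growing, roughly like $\log n$.

```python
# Partial sums a_n of the harmonic series: consecutive differences shrink
# to zero, yet the sums keep growing without bound (roughly like log n).
a = 0.0
sums = []
for i in range(1, 100001):
    a += 1.0 / i
    sums.append(a)

last_diff = sums[-1] - sums[-2]   # a_n - a_{n-1} = 1/n = 1e-5 here
print(last_diff, sums[-1])        # a tiny difference, but a_n is already past 12
```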
2,487,866
<p>Gödel's <a href="https://en.wikipedia.org/wiki/Constructible_universe" rel="nofollow noreferrer">constructible universe</a> seems to have some attractive properties. Sets are constructed in a very regular, easy-to-understand way, and one has a definite answer on certain major set theoretic questions, such as the generalized continuum hypothesis. Alternative models posit the existence of objects like large cardinals, which (to my humble intelligence) seem esoteric and far removed from reality. </p> <p>I am curious about what motivates mathematicians to study these alternative models. Are there "practical" reasons to explore models besides the constructible universe? Does the knowledge thus acquired lead to insights outside of mathematical logic and model theory? Or is it more of a "pure" inquiry, pursued for its own sake or for aesthetic reasons?</p>
Michael Weiss
79,741
<p>I think a realist philosophy accounts for much of the interest. (Of course, $L$ itself has been scrutinized thoroughly over the years.) Many (most?) set theorists take a so-called Platonist attitude: they believe that the universe of sets "really exists" in some sense. Quoting from Drake's <em>Set Theory: an Introduction to Large Cardinals</em>:</p> <blockquote> <p>I have written this book from an uncompromisingly <em>realist</em> or <em>platonist</em> position; that is, I have taken the viewpoint that in some sense sets do exist, as objects to be studied, and that set theory is just as much about fixed objects as is number theory. ... It seems very difficult to me to give any reason for the study of large cardinals without taking a viewpoint of this sort.</p> </blockquote> <p>You can find similar sentiments in Gödel's essay "What is Cantor's Continuum Problem?" and in the conclusion to Cohen's book <em>Set Theory and the Continuum Hypothesis</em>.</p> <p>If you adhere to this philosophy, it seems very odd to suppose that all sets are constructible. Gödel's definition involves (speaking casually) describing sets using first-order logic, and repeating this process transfinitely. Why should we believe that all sets can be described this way, even all subsets of the integers? For many set theorists, intuition supports the opposite conclusion. Gödel supposedly believed that $c=\aleph_2$ (and of course $V=L$ implies $c=\aleph_1$). Cohen is on record about his belief:</p> <blockquote> <p>A point of view which the author feels may eventually come to be accepted is that CH is <em>obviously</em> [his emphasis] false. The main reason one accepts the Axiom of Infinity is probably that we feel it absurd to think that the process of adding only one set at a time can exhaust the entire universe. Similarly with the higher axioms of infinity. Now $\aleph_1$ is the set of countable ordinals and this is merely a special and the simplest way of generating a higher cardinal. 
The set $C$ is, in contrast, generated by a totally new and more powerful principle, namely the Power Set Axiom. It is unreasonable to expect that any description of a larger cardinal which attempts to build up that cardinal from ideas deriving from the Replacement Axiom can ever reach $C$. Thus $C$ is greater than $\aleph_n$, $\aleph_\omega$, $\aleph_\alpha$ where $\alpha=\aleph_\omega$, etc. This point of view regards $C$ as an incredibly rich set given to us by one bold new axiom, which can never be approached by any piecemeal process of construction. Perhaps later generations will see the problem more clearly and express themselves more eloquently.</p> </blockquote> <p>Here's quote from the Gödel essay, motivating large cardinal axioms:</p> <blockquote> <p>...the axioms of set theory by no means form a system closed in itself, but, quite on the contrary, the very concept of set on which they are based suggests their extension by new axioms which assert the existence of still further iterations of the operation "set of". These axioms can be formulated also as propositions asserting the existence of very great cardinal numbers...</p> </blockquote> <p>For a more recent discussion, see "Does V Equal L?" by Penelope Maddy, <em>Journal of Symbolic Logic</em> 58(1) (March 1993) pp.15-41.</p> <p>Besides Platonism, the desire to explore logical structure motivates a lot of work in this field. For example, which consequences of the Axiom of Choice are actually equivalent to it? People explore this sort of thing, even without doubting the truth (whatever that means) of the axiom of choice.</p>
1,424,561
<p>I know that $\sum_0^\infty \frac{\lambda^x} {x!} = e^\lambda$, but I'm having a really difficult time dealing with the extra $x$.</p>
Laurent Duval
257,503
<p>The additional $x$ (from your initial knowledge) should lead you to think about derivatives with respect to $\lambda $. Remember a polynomial differentiates as: $$ \left(x^n\right)' = nx^{n-1}$$. So you can rewrite in a closed form: $$ \sum_{x=0}^\infty \frac{x\lambda^x}{x!} = \lambda\sum_{x=0}^\infty \frac{x\lambda^{x-1}}{x!}. $$ If your course includes convergent series and the Fubini theorem, you can integrate the whole formula, switch $\sum$ and $\int$ signs, and get the desired result.</p>
1,424,561
<p>I know that $\sum_0^\infty \frac{\lambda^x} {x!} = e^\lambda$, but I'm having a really difficult time dealing with the extra $x$.</p>
Dilip Sarwate
15,941
<p>Very inelegantly and in schoolboy fashion we can write</p> <p>\begin{align} \sum_{x=0}^\infty x\frac{\lambda^x}{x!} &amp;= 0\frac{\lambda^0}{0!} + 1\frac{\lambda^1}{1!} + 2\frac{\lambda^2}{2!} + 3\frac{\lambda^3}{3!} + 4\frac{\lambda^4}{4!} + \cdots\\ &amp;= 0 + \lambda +\frac{\lambda^2}{1!} + \frac{\lambda^3}{2!} + \frac{\lambda^4}{3!}+\cdots\\ &amp;= \lambda +\frac{\lambda^2}{1!} + \frac{\lambda^3}{2!} + \frac{\lambda^4}{3!}+\cdots\\ &amp;= \lambda\left[1 + \frac{\lambda^1}{1!} + \frac{\lambda^2}{2!} + \frac{\lambda^3}{3!} + \cdots\right]\\ &amp;= \lambda e^\lambda \end{align}</p>
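Both derivations give $\lambda e^{\lambda}$; a truncated-series check in Python confirms it numerically (the recursion for the running term avoids computing huge factorials):

```python
import math

def weighted_series(lam, terms=150):
    # Truncated sum of x * lam^x / x!; `term` tracks lam^x / x! recursively.
    total, term = 0.0, 1.0
    for x in range(1, terms):
        term *= lam / x            # now term == lam^x / x!
        total += x * term
    return total

for lam in (0.5, 1.0, 2.0, 3.7):
    print(lam, weighted_series(lam), lam * math.exp(lam))  # the two agree
```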
549,254
<blockquote> <p>$$\frac{1}{3}=.33\bar{3}$$ </p> </blockquote> <p>is a rational number, but the $3$ keeps on repeating indefinitely (infinitely?). How is this a ratio if it shows this continuous pattern instead of being a finite ratio? </p> <p>I understand that $\pi$ is irrational because it extends infinitely <em>without repetition</em>, but I am confused about what makes $1/3=.3333\bar{3}$ rational. It is clearly repeating, but when you apply it to a number, the answers are different: $.33$ and $.3333$ are part of the same concept, $1/3$, yet:</p> <p>$.33$ and $.3333$ are different numbers: </p> <p>$.33/2=.165$ and $.3333/2=.16665$, yet they are both part of $1/3$. </p> <p>How is $1/3=.33\bar{3}$ rational?</p>
Paramanand Singh
72,031
<p>I believe the fundamental problem (or confusion) here is that OP finds it difficult to believe that a rational number, which is a ratio of two finite integers, can have a representation which is infinite. This confusion is primarily due to the fact that most people try to think of a number and its representation as one and the same thing. However the concept of a number is different from the concept of representing it.</p> <p>I will provide a simple example. In decimal notation the number "five" is written as <span class="math-container">$5$</span>, but in binary it is written as <span class="math-container">$101$</span> and in ternary as <span class="math-container">$12$</span>. Same is the case for rational numbers. A fraction like "one/two" can be written as <span class="math-container">$0.5$</span> in decimals (as a finite expression), but the same can't be written as a finite decimal in ternary. Similarly "one/three" can be written as a finite decimal in ternary, but as an infinite one in normal base ten.</p> <p>It has to be understood very clearly that a rational number may or may not have finite representation depending on the kind of representation chosen. Also it is better to understand why some rationals can have finite decimal representation and others don't have such finite decimal representation. Here the following result helps:</p> <p><em>A rational number <span class="math-container">$p/q$</span> (written in lowest terms) can be represented as a finite decimal in base <span class="math-container">$b$</span> notation, if and only if the denominator <span class="math-container">$q$</span> divides <span class="math-container">$b^{n}$</span> for some positive integer <span class="math-container">$n$</span>.</em></p> <p>Also it should be noted that in case the decimal representation is not finite, then it has to follow a repeating pattern.
This happens because the decimal representation is obtained via division of <span class="math-container">$p$</span> by <span class="math-container">$q$</span> and hence the only possible choices for remainder are <span class="math-container">$0, 1, 2,\ldots, q - 1$</span>. If at some point we get a <span class="math-container">$0$</span> remainder then the decimal representation is finite; otherwise the remainder has to repeat and lead to a repeating non-terminating decimal representation.</p>
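The remainder argument above translates directly into code. A small long-division sketch (Python) that reports the terminating and repeating parts of $p/q$ in any base: a remainder of $0$ means the expansion terminates, and a repeated remainder marks where the digits start to cycle.

```python
def decimal_expansion(p, q, base=10):
    """Long division of p/q (0 <= p < q) in the given base.
    Returns (non_repeating_digits, repeating_digits); the repeating part
    is empty exactly when some remainder hits 0."""
    digits, seen, r = [], {}, p
    while r != 0 and r not in seen:
        seen[r] = len(digits)           # position where remainder r appeared
        r *= base
        digits.append(r // q)
        r %= q
    if r == 0:
        return digits, []
    start = seen[r]                     # remainder repeats -> digits cycle
    return digits[:start], digits[start:]

print(decimal_expansion(1, 3))          # ([], [3])   0.(3)
print(decimal_expansion(1, 2, base=3))  # ([], [1])   0.111..._3
print(decimal_expansion(1, 3, base=3))  # ([1], [])   0.1_3, terminating
```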
99,750
<p>Let $G$ be a reductive group, $F$ a Frobenius morphism, $B$ an $F$-stable Borel subgroup, and consider the finite groups $G^F$ and $U^F$ where $U$ is the unipotent radical of $B=UT$ ($T$ a torus).</p> <p>I would like a reference for the description of the algebra $End_{G^F}( \mathbb{C}[G^F/U^F] )$. More precisely, I'd like to relate it with a structure of Hecke algebra, which is usually defined as $End_{G^F}( \mathbb{C}[G^F/B^F] ) := End_{G^F} ( Ind_{B^F}^{G^F} 1 )$. I hope to find that the endomorphism algebra is isomorphic to some kind of extension of the Hecke algebra by the torus $T$.</p> <p>Thank you!</p>
Ian Agol
1,345
<p>There's a nice construction of the $E_8$ Lie algebra due to Borcherds based on methods from vertex operator algebras, but with no understanding of vertex algebras needed. See <a href="http://math.berkeley.edu/~anton/written/LieGroups/LieGroups.pdf">p. 152 of these notes</a> from a course by Borcherds and others. See also section <a href="http://math.berkeley.edu/~theojf/#classes">7.4 of notes by Johnson-Freyd</a>. The idea is to start with the root system and root lattice, and construct the Lie algebra using Serre's relations. But with the relations there is a sign ambiguity, so one passes to a 2-fold cover of the lattice to resolve the sign issues, and check that everything works. Once you have $E_8$, you can find $E_7$ sitting inside it. Since the lattice is self-dual (simply-connected), you can just exponentiate to get the Lie group. </p>
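The construction starts from the $E_8$ root lattice, and the root system itself is easy to enumerate directly. As a sanity check (a sketch of the root count only, not of Borcherds' construction), the 240 roots split into 112 integer roots and 128 half-integer roots, all of squared length 2:

```python
from itertools import combinations, product

roots = []

# Type 1: (+-1, +-1, 0, ..., 0) in all coordinate pairs -- 112 roots.
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.append(tuple(v))

# Type 2: all coordinates +-1/2 with an even number of minus signs -- 128 roots.
for signs in product((0.5, -0.5), repeat=8):
    if sum(1 for s in signs if s < 0) % 2 == 0:
        roots.append(signs)

print(len(roots))                                      # 240
print(all(sum(x * x for x in v) == 2 for v in roots))  # every root has norm^2 = 2
```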
2,903,105
<p>A Fourier transform of function R into function Q is defined as: $$Q(\underline{k}) = \int_{}^{}R(\underline{x}) e^{-i\underline{k}·\underline{x}} \mathrm{d}\underline{x}.$$ where I've underlined $\underline{x}$ and $\underline{k}$ to denote that they are a set of vectors.</p> <p>Suppose that $R(\underline{x})$ is only a function of coordinate differences, for example, $R(\underline{x})=$($x$<sub>1</sub>-$x$<sub>2</sub>)($x$<sub>3</sub>-$x$<sub>1</sub>), where $\underline{x}$=($x$<sub>1</sub>,$x$<sub>2</sub>,$x$<sub>3</sub>).</p> <p>Why must the Fourier transform $Q(\underline{k})$ contain a Dirac Delta function?</p> <p>Note: The fact that the Fourier transform must contain a Dirac Delta function was mentioned on the bottom of page 25 of the following set of notes by my Physics professor, <a href="http://www-thphys.physics.ox.ac.uk/people/JohnCardy/qft/qftMT2012.pdf" rel="nofollow noreferrer">http://www-thphys.physics.ox.ac.uk/people/JohnCardy/qft/qftMT2012.pdf</a></p>
leonbloy
312
<p>This is not totally rigorous, but for Physics it should be enough :-)</p> <p>In general, if $Q(\underline{k}) $ is the transform of $R(\underline{x})$, then the transform of $R(\underline{x} + a \underline{u})$ where $ \underline{u}=(1,1,1)$ and $a$ is some scalar, is (with the convention of the question) $$Q'(\underline{k})=\exp(i a \, \underline{u} \cdot \underline{k}) Q(\underline{k}) = \exp(i a ( k_1+k_2 +k_3)) \, Q(\underline{k}) \tag{1}$$</p> <p>But if $R(\underline{x})$ is only a function of coordinate differences, then $R(\underline{x} + a \underline{u})=R(\underline{x})$, and $Q'(\underline{k})=Q(\underline{k})$ for all $a$. </p> <p>Then, from $(1)$, $$Q(\underline{k})\ne 0 \implies \exp(i a ( k_1+k_2 +k_3))=1 \hskip{5mm} (\forall a) \implies k_1+k_2 +k_3 = 0$$ </p> <p>That is, $Q(\underline{k})$ must be zero everywhere except (at most) a measure zero set (a plane in $\mathbb{R}^3$). Then either $Q(\underline{k})$ is identically zero (trivial) or it's "degenerate" (has Dirac deltas).</p>
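The same phenomenon is easy to see in a discrete 2D toy model (a sketch, not the continuum statement): if $R(x_1,x_2)=g(x_1-x_2)$ on $\mathbb{Z}_N\times\mathbb{Z}_N$, its DFT is supported on $k_1+k_2 \equiv 0 \pmod N$, the discrete analogue of the delta constraint.

```python
import numpy as np

N = 16
rng = np.random.default_rng(0)
g = rng.standard_normal(N)                 # arbitrary 1D profile

# R depends only on the coordinate difference x1 - x2 (mod N).
x1, x2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
R = g[(x1 - x2) % N]

Q = np.fft.fft2(R)
k1, k2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
off_support = Q[(k1 + k2) % N != 0]

print(np.max(np.abs(off_support)))         # ~ 1e-13: zero up to round-off
```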
3,542,374
<p>I have the matrix of stacked constraints</p> <p><span class="math-container">$$\begin{bmatrix} x_1^2 &amp; x_1y_1 &amp; y_1^2 &amp; x_1 &amp; y_1 &amp; 1 \\ x_2^2 &amp; x_2y_2 &amp; y_2^2 &amp; x_2 &amp; y_2 &amp; 1 \\ x_3^2 &amp; x_3y_3 &amp; y_3^2 &amp; x_3 &amp; y_3 &amp; 1 \\ x_4^2 &amp; x_4y_4 &amp; y_4^2 &amp; x_4 &amp; y_4 &amp; 1 \\ x_5^2 &amp; x_5y_5 &amp; y_5^2 &amp; x_5 &amp; y_5 &amp; 1 \end{bmatrix} \mathbf{c} = \mathbf{0},$$</span></p> <p>where <span class="math-container">$\mathbf{c} = (a, b, c, d, e, f)^T$</span> is a conic.</p> <p>So <span class="math-container">$\mathbf{c}$</span> is the null vector of this <span class="math-container">$5 \times 6$</span> matrix. Apparently, this shows that <span class="math-container">$\mathbf{c}$</span> is determined uniquely (up to scale) by five points in general position. What is the concept from linear algebra that tells us that this shows that <span class="math-container">$\mathbf{c}$</span> is determined uniquely? And what is meant by "up to scale"?</p> <p>Thank you.</p>
Jonas Linssen
598,157
<p>It means that your matrix has rank 5, so its null space has dimension <span class="math-container">$6-5=1$</span>. This means that you have exactly one nonzero solution <span class="math-container">$c$</span> with norm/magnitude/length 1 and whose first nonzero entry is positive. Any other solution is a multiple of that <span class="math-container">$c$</span>, or in other words a scaling of <span class="math-container">$c$</span>.</p>
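Numerically, the null vector is the right singular vector belonging to the smallest singular value. A sketch with five hypothetical points on the unit circle, whose conic is $x^2+y^2-1=0$, i.e. $\mathbf{c} \propto (1,0,1,0,0,-1)$:

```python
import numpy as np

# Five points in general position on the unit circle x^2 + y^2 = 1.
s = np.sqrt(0.5)
pts = [(1, 0), (0, 1), (-1, 0), (0, -1), (s, s)]

A = np.array([[x*x, x*y, y*y, x, y, 1.0] for x, y in pts])

# The null vector of A is the right singular vector for the smallest
# singular value -- the last row of Vt.
_, _, Vt = np.linalg.svd(A)
c = Vt[-1]
c = c / c[0]               # fix the scale: all solutions differ by a factor

print(np.round(c, 6))      # ~ [1, 0, 1, 0, 0, -1]
```

Dividing by `c[0]` is one way of picking a representative from the one-dimensional null space; normalizing to unit length is another.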
646,596
<p>I need to calculate the circle circumscribed about a square, given only the length of the side (a).</p> <p><img src="https://i.stack.imgur.com/VZ8Yn.png" alt="enter image description here"></p> <p>I need to calculate the area of this circumscribed circle. How exactly is it done with only a side given?</p>
John
7,163
<p>If the length of the side of the square is $a$ then its diagonal is $\sqrt{2}a$, which is also the diameter of the circle. The radius of the circle is half this: $a/\sqrt{2}.$ This makes the area of the circle $\pi a^2 / 2.$ </p>
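A quick numerical sanity check (a sketch with an arbitrary side length): the four corners of a square of side $a$, centered at the origin, all lie on the circle of radius $a/\sqrt{2}$, and the two area formulas agree.

```python
import math

a = 7.3                                   # any side length
r = a / math.sqrt(2)                      # radius of the circumscribed circle

# Square centered at the origin: corners at (+-a/2, +-a/2).
corners = [(sx * a/2, sy * a/2) for sx in (1, -1) for sy in (1, -1)]

print(all(abs(math.hypot(x, y) - r) < 1e-12 for x, y in corners))  # True
print(math.pi * r**2, math.pi * a**2 / 2)  # same area, computed two ways
```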
925,558
<p>How do I prove that the inductive sequence $y_{n+1}= \dfrac {2y_n + 3}{4}$ is bounded? $y(1)=1$</p> <p><strong>Attempt:</strong> Let us assume that the given sequence is unbounded. : </p> <p>Then, $y_{n+1} \rightarrow \infty$ either for some finite $n$ or when $n \rightarrow \infty$</p> <blockquote> <p>CASE $1 :$ When $|y_n| \rightarrow \infty$ at a finite $n$</p> </blockquote> <p>Since : $y_{n}= \dfrac {2y_{n-1} + 3}{4}$, then $y_n \rightarrow \pm \infty \implies y_{n-1} \rightarrow \pm \infty \implies y_{n-2} \rightarrow \pm \infty ~~\cdots$</p> <p>This ultimately means $y_2 \rightarrow \pm \infty$ which is not true.</p> <p>Hence, there does not exist a finite $n$ for which $y_n \rightarrow \pm \infty$.</p> <blockquote> <p>CASE $2:$ When $|y_n| \rightarrow \infty$ at $n \rightarrow \infty$</p> </blockquote> <p>I think we can proceed the same way as we did above, i.e inductively, we proceed like above and deduce that if the above assumption is true, then $y_2 \rightarrow \infty$, which is not true.</p> <p>Is my attempt correct?</p> <p>Does there exist a proof without induction as well? Thank you for your help.</p>
egreg
62,967
<p>An unbounded sequence need not diverge to $\infty$, even if its values are positive.</p> <hr> <p>If the sequence converges, then its limit $l$ satisfies $$ l=\frac{2l+3}{4} $$ so $l=3/2$. We could try proving that $y_n\le 3$ for all $n$. The base case is clear, so assume that $y_n\le 3$ and try to prove that $y_{n+1}\le 3$ without peeking below.</p> <blockquote class="spoiler"> <p> We have $2y_n+3\le 2\cdot3+3=9$, which implies $y_{n+1}\le\dfrac{9}{4}&lt;3.$</p> </blockquote> <p>The bound below is also easy: $y_n&gt;0$, for all $n$.</p>
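Iterating the recursion numerically (a sketch, not part of the proof) makes the bounds plausible: every term stays in $(0, 3]$, and the sequence drifts toward the fixed point $l = 3/2$.

```python
# Iterate y_{n+1} = (2 y_n + 3) / 4 from y_1 = 1 and watch the bounds.
y = 1.0
trajectory = [y]
for _ in range(50):
    y = (2 * y + 3) / 4
    trajectory.append(y)

print(all(0 < t <= 3 for t in trajectory))   # True: the claimed bounds hold
print(trajectory[-1])                        # close to the fixed point 3/2
```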
3,671,675
<blockquote> <p>Question: Suppose <span class="math-container">$f:\mathbb{R}\to\mathbb{R}$</span> is a twice differentiable function with <span class="math-container">$f(0)=1$</span>, <span class="math-container">$f'(0)=0$</span> and satisfies <span class="math-container">$f''(x)-5f'(x)+6f(x)\ge 0$</span> for every <span class="math-container">$x\ge 0$</span>. Prove that <span class="math-container">$f(x)\ge 3e^{2x}-2e^{3x}$</span> for every <span class="math-container">$x\ge 0$</span>. </p> </blockquote> <p>My approach: Let <span class="math-container">$h:\mathbb{R}\to\mathbb{R}$</span> be such that <span class="math-container">$h(x)=f(x)-3e^{2x}+2e^{3x}, \forall x\in\mathbb{R}.$</span> Thus <span class="math-container">$h$</span> is also a twice differentiable function with <span class="math-container">$$h'(x)=f'(x)-6e^{2x}+6e^{3x}, \forall x\in\mathbb{R}, \text{ and }\\h''(x)=f''(x)-12e^{2x}+18e^{3x}, \forall x\in\mathbb{R}.$$</span></p> <p>Also observe that <span class="math-container">$$h''(x)-5h'(x)+6h(x)=f''(x)-5f'(x)+6f(x), \forall x\in\mathbb{R}.$$</span> </p> <p>Thus we have <span class="math-container">$$h''(x)-5h'(x)+6h(x)\ge 0, \forall x\ge 0.$$</span></p> <p>Now for the sake of contradiction, let us assume that <span class="math-container">$\exists a&gt;0,$</span> such that <span class="math-container">$$f(a)&lt;3e^{2a}-2e^{3a}\implies f(a)-3e^{2a}+2e^{3a}&lt;0\implies h(a)&lt;0.$$</span> </p> <p>Note that <span class="math-container">$h(0)=0$</span>. Thus, by applying MVT to the function <span class="math-container">$h$</span> on the interval <span class="math-container">$[0,a]$</span>, we can conclude that <span class="math-container">$\exists c\in(0,a)$</span>, such that <span class="math-container">$$h'(c)=\frac{h(a)-h(0)}{a-0}=\frac{h(a)}{a}\implies h'(c)&lt;0.$$</span></p> <p>Again, note that <span class="math-container">$h'(0)=0$</span>.
Thus by applying MVT to the function <span class="math-container">$h'$</span> on the interval <span class="math-container">$[0,c]$</span>, we can conclude that <span class="math-container">$\exists c_1\in(0,c)$</span>, such that <span class="math-container">$$h''(c_1)=\frac{h'(c)-h'(0)}{c-0}=\frac{h'(c)}{c}\implies h''(c_1)&lt;0.$$</span></p> <p>Thus we have <span class="math-container">$f''(c_1)-12e^{2c_1}+18e^{3c_1}&lt;0\implies f''(c_1)&lt;12e^{2c_1}-18e^{3c_1}&lt;0.$</span> </p> <p>As one can see, I am trying to prove it using "proof by contradiction". So, is there any way to proceed on these lines, or is there some alternative way to prove?</p>
Aditya Dwivedi
697,953
<p>Separate the terms as <span class="math-container">$\ {f^{\prime\prime}}-3{f^{\prime}}-2(f^{\prime} - 3f ) \ge 0$</span></p> <p>now multiplying by <span class="math-container">$ e^{-2x}$</span> and integrating from 0 to x we get <span class="math-container">$f^{\prime} \ - \ 3f \ge -3e^{2x}$</span></p> <p>again multiplying by <span class="math-container">$ e^{-3x}$</span> and integrating from 0 to x we get the desired result</p>
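Spelled out, the same argument with the integrating factors made explicit (using $f(0)=1$, $f'(0)=0$):

```latex
% Set g = f' - 3f, so g' - 2g = f'' - 5f' + 6f >= 0 and g(0) = -3.
(e^{-2x} g(x))' = e^{-2x}\big(g'(x) - 2g(x)\big) \ge 0
  \implies e^{-2x} g(x) \ge g(0) = -3
  \implies f'(x) - 3f(x) \ge -3e^{2x}.

% Now apply the integrating factor e^{-3x} to f' - 3f >= -3 e^{2x}:
(e^{-3x} f(x))' = e^{-3x}\big(f'(x) - 3f(x)\big) \ge -3e^{-x}.

% Integrate from 0 to x, using f(0) = 1:
e^{-3x} f(x) - 1 \ge 3e^{-x} - 3
  \implies f(x) \ge 3e^{2x} - 2e^{3x}.
```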
3,671,675
<blockquote> <p>Question: Suppose <span class="math-container">$f:\mathbb{R}\to\mathbb{R}$</span> is a twice differentiable function with <span class="math-container">$f(0)=1$</span>, <span class="math-container">$f'(0)=0$</span> and satisfies <span class="math-container">$f''(x)-5f'(x)+6f(x)\ge 0$</span> for every <span class="math-container">$x\ge 0$</span>. Prove that <span class="math-container">$f(x)\ge 3e^{2x}-2e^{3x}$</span> for every <span class="math-container">$x\ge 0$</span>. </p> </blockquote> <p>My approach: Let <span class="math-container">$h:\mathbb{R}\to\mathbb{R}$</span> be such that <span class="math-container">$h(x)=f(x)-3e^{2x}+2e^{3x}, \forall x\in\mathbb{R}.$</span> Thus <span class="math-container">$h$</span> is also a twice differentiable function with <span class="math-container">$$h'(x)=f'(x)-6e^{2x}+6e^{3x}, \forall x\in\mathbb{R}, \text{ and }\\h''(x)=f''(x)-12e^{2x}+18e^{3x}, \forall x\in\mathbb{R}.$$</span></p> <p>Also observe that <span class="math-container">$$h''(x)-5h'(x)+6h(x)=f''(x)-5f'(x)+6f(x), \forall x\in\mathbb{R}.$$</span> </p> <p>Thus we have <span class="math-container">$$h''(x)-5h'(x)+6h(x)\ge 0, \forall x\ge 0.$$</span></p> <p>Now for the sake of contradiction, let us assume that <span class="math-container">$\exists a&gt;0,$</span> such that <span class="math-container">$$f(a)&lt;3e^{2a}-2e^{3a}\implies f(a)-3e^{2a}+2e^{3a}&lt;0\implies h(a)&lt;0.$$</span> </p> <p>Note that <span class="math-container">$h(0)=0$</span>. Thus, by applying MVT to the function <span class="math-container">$h$</span> on the interval <span class="math-container">$[0,a]$</span>, we can conclude that <span class="math-container">$\exists c\in(0,a)$</span>, such that <span class="math-container">$$h'(c)=\frac{h(a)-h(0)}{a-0}=\frac{h(a)}{a}\implies h'(c)&lt;0.$$</span></p> <p>Again, note that <span class="math-container">$h'(0)=0$</span>.
Thus by applying MVT to the function <span class="math-container">$h'$</span> on the interval <span class="math-container">$[0,c]$</span>, we can conclude that <span class="math-container">$\exists c_1\in(0,c)$</span>, such that <span class="math-container">$$h''(c_1)=\frac{h'(c)-h'(0)}{c-0}=\frac{h'(c)}{c}\implies h''(c_1)&lt;0.$$</span></p> <p>Thus we have <span class="math-container">$f''(c_1)-12e^{2c_1}+18e^{3c_1}&lt;0\implies f''(c_1)&lt;12e^{2c_1}-18e^{3c_1}&lt;0.$</span> </p> <p>As one can see, I am trying to prove it using "proof by contradiction". So, is there any way to proceed on these lines, or is there some alternative way to prove?</p>
N.Quy
600,377
<p>I have no idea about following your proof by contradiction. However, I have another proof.</p> <p>This problem has the same idea as a simple problem: Let <span class="math-container">$f :\mathbb{R} \to \mathbb{R}$</span> be continuously differentiable such that <span class="math-container">$f(0)=1$</span> and <span class="math-container">$f(x)-f'(x) \geq 0$</span> for every <span class="math-container">$x \geq 0$</span>. Prove that <span class="math-container">$f(x) \leq e^x$</span> for any <span class="math-container">$x \geq 0$</span>.</p> <p>The idea is considering new function <span class="math-container">$F(x)=e^{-x}f(x)$</span> then <span class="math-container">$F'(x)=e^{-x}(f'(x)-f(x)) \leq 0$</span> for any <span class="math-container">$x \geq 0$</span>. Then <span class="math-container">$F(x) \leq F(0)=1$</span> for any <span class="math-container">$x \geq 0$</span>, which give you <span class="math-container">$f(x) \leq e^x$</span> for any <span class="math-container">$x \geq 0$</span>.</p> <p>Back to your problem, we use the same method.</p> <ul> <li><p>Note that <span class="math-container">$0\leq f"(x)-5f'(x)+6f(x)= f"(x)-3f'(x) + 2(3f(x)-f'(x))$</span>. Define <span class="math-container">$g(x)=3f(x)-f'(x)$</span> then <span class="math-container">$g'(x) \leq 2g(x)$</span> and <span class="math-container">$g(0)=3f(0)-f'(0)=3$</span>. Use the same argument with <span class="math-container">$G(x)=e^{-2x}g(x)$</span> we get <span class="math-container">$g(x) \leq 3 e^{2x}$</span>.</p></li> <li><p><span class="math-container">$g(x) \leq 3 e^{2x} \iff 3(f(x)-3e^{2x})\leq f'(x) -6e^{2x}$</span>. Define <span class="math-container">$h(x)=f(x)-3e^{2x}$</span> then <span class="math-container">$3h(x) \leq h'(x)$</span> and <span class="math-container">$h(0)=-2$</span>. 
Using the same argument with <span class="math-container">$H(x)=e^{-3x}h(x)$</span> we get <span class="math-container">$h(x)=f(x)-3e^{2x} \geq -2 e^{3x}$</span>, which gives the result.</p></li> </ul>
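As a quick numerical sanity check of the argument above (not a proof; it uses explicit Euler integration and an arbitrarily chosen nonnegative slack term standing in for the hypothesis $f''-5f'+6f\geq 0$), one can integrate the ODE with initial data $f(0)=1$, $f'(0)=0$ and compare against the claimed bound $f(x)\geq 3e^{2x}-2e^{3x}$:

```python
import math

def lower_bound(x):
    # the claimed bound: f(x) >= 3 e^{2x} - 2 e^{3x}
    return 3.0 * math.exp(2.0 * x) - 2.0 * math.exp(3.0 * x)

def simulate(slack, T=1.0, n=20000):
    # Explicit Euler for f'' = 5 f' - 6 f + slack(x), f(0) = 1, f'(0) = 0.
    # A nonnegative slack models the hypothesis f'' - 5 f' + 6 f >= 0.
    dx = T / n
    f, fp = 1.0, 0.0
    margins = [f - lower_bound(0.0)]
    for i in range(n):
        x = i * dx
        fpp = 5.0 * fp - 6.0 * f + slack(x)
        f, fp = f + dx * fp, fp + dx * fpp
        margins.append(f - lower_bound(x + dx))
    return margins

margins = simulate(lambda x: 1.0 + math.sin(x) ** 2)  # arbitrary slack >= 0
min_margin = min(margins)    # should never be significantly negative
final_margin = margins[-1]   # a strict surplus accumulates as x grows
```

With the slack identically zero the simulation tracks the equality case $f(x)=3e^{2x}-2e^{3x}$; any positive slack pushes $f$ strictly above the bound, as the Grönwall-style argument predicts.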
1,127,480
<p>Consider the set $S = \{x \in \mathbb R: x &lt; \frac2x\}$. Determine the value of $\sup S$ (if it exists). </p> <p>Here is my attempt:<br> Firstly, $S = \{x \in \mathbb R: 0 &lt; x &lt; \sqrt 2$ $\lor$ $x &lt; -\sqrt2 \}$.<br> Since $1 \in S$, $S \ne \emptyset$.<br> Also, $\forall x \in S$, $x &lt; \sqrt2$ and so $S$ is bounded above.<br> By the least upper bound property of $\mathbb R$, $\sup S$ exists.<br> Suppose $\exists y \in \mathbb R: y &lt; \sqrt2 \land y$ is an upper bound of $S$.<br> Since $1 \in S$, $1 \le y &lt; \sqrt2 \Rightarrow y \in S$.<br> ... </p> <p>How do I show that $\sup S = \sqrt 2$?</p>
Misto
211,919
<p>You should prove that the supremum is neither greater nor less than $\sqrt 2$, i.e. by supposing each in turn and deriving a contradiction. </p>
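As a floating-point illustration of the two directions (this is only a numerical check of the strategy, not a proof): points slightly below $\sqrt 2$ lie in $S$, so nothing smaller than $\sqrt 2$ is an upper bound, while points slightly above $\sqrt 2$ never lie in $S$:

```python
import math

s = math.sqrt(2)

def in_S(x):
    # membership test for S = {x : x < 2/x}
    return x != 0 and x < 2 / x

below = [s - 10 ** (-k) for k in range(1, 8)]  # points just below sqrt(2)
above = [s + 10 ** (-k) for k in range(1, 8)]  # points just above sqrt(2)
all_below_in_S = all(in_S(x) for x in below)     # so no y < sqrt(2) is an upper bound
no_above_in_S = not any(in_S(x) for x in above)  # consistent with sqrt(2) being an upper bound
```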
185,097
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/1/different-kinds-of-infinities">Different kinds of infinities?</a> </p> </blockquote> <p>Today I learned that two infinities can be compared, but I want to know how this is possible. Infinity is infinity; if it doesn't have any particular value, how can we say that one infinity is smaller and the other is greater? Can anyone help me?</p>
Thomas
26,188
<p>The question about several types of infinity has come up several times before. As linked in the comments above, you can read through the question and answer given in <a href="https://math.stackexchange.com/questions/1/different-kinds-of-infinities">What Does it Really Mean to Have Different Kinds of Infinities?</a> or <a href="https://math.stackexchange.com/questions/182171/are-all-infinities-equal?lq=1">Are all infinities equal?</a>.</p> <p>You ask whether you can compare infinities. Well, you might say that you can. If you take, for example, the natural numbers $1,2,3,...$, then there are an infinite number of them. We say that the set is infinite. But, you can also count them. If we look at the real numbers, then the fact is that you cannot count these. So in a way, the infinite number of real numbers is "greater" than the infinite number of natural numbers.</p> <p>But all this comes down to the question about how you measure the size of something. If someone says that something is bigger than something else, then they should always be able to define <em>exactly</em> what that means. We don't (I don't) like it when questions become philosophical; then they have (in my opinion) left the realm of mathematics. So if someone tells you that one infinity is greater than another infinity, ask them exactly what they mean. How do you measure sizes of infinities? If they are a mathematician, they will be able to give you a precise definition (study Andre's answer).</p> <p>But, what we usually think about when we compare numbers (or elements in a set) is a set with some kind of ordering on it. Without going into any detail, there are different types of orderings, but you can think about how we can order the set consisting of the real numbers in the usual way (e.g. $7 &gt; 3$). But in this example we are just talking about the real numbers. And infinity is not a number.</p> <p>One more thing to keep in mind is that we will sometimes write that a limit is equal to infinity. 
Like $$\lim_{x \to a} f(x) = \infty. $$</p> <p>However, when we write this, we don't think (I don't) of $\infty$ as an element in the set of real numbers (it isn't). All we mean by writing that the limit is infinity is that the values of $f(x)$ become arbitrarily large as $x$ "gets" close to $a$.</p> <p>Just a few things.</p>
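To make the "counting" idea concrete, here is a small illustration (my own example, not from the answer above): an explicit bijection between $\mathbb N$ and $\mathbb Z$, showing that these two infinite sets have the "same size". No such enumeration exists for the real numbers, by Cantor's diagonal argument, which is exactly the sense in which that infinity is "greater".

```python
def nat_to_int(n):
    # 0, 1, 2, 3, 4, ... -> 0, 1, -1, 2, -2, ...
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

def int_to_nat(z):
    # the inverse map Z -> N
    return 2 * z - 1 if z > 0 else -2 * z

image = [nat_to_int(n) for n in range(9)]  # 0, 1, -1, 2, -2, 3, -3, 4, -4
round_trip = all(int_to_nat(nat_to_int(n)) == n for n in range(1000))
```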
2,062,357
<p>Let $\sum_{i=1}^\infty a_i$ be an absolutely convergent series and let $\sum_{i=1}^\infty b_i$ be a sub-series of it, i.e. $$b_j=c_j\cdot a_j\quad (c_j\in\{0,1\}),\qquad\forall j\in\mathbb N$$ We can say that $\sum_{i=1}^\infty b_i$ is absolutely convergent as well. So its limit, say $L$, is a real number.</p> <p>Now the question is, what can be said about the set of all possible values of $L$? Is it connected? Closed maybe? Neither closed / connected / compact? I have no idea.</p> <p>Can we determine whether a specific number belongs to this set or not? I was especially interested in finding a sub-series of $\sum \frac 1{n^2}$ whose limit is, say $\frac{\pi}6$, and then I ended up with this question.</p>
Brian M. Scott
12,042
<p>Let $\mathscr{A}$ be the family of sets. Your conditions amounts to saying that the partial order $\langle\mathscr{A},\supseteq\rangle$ is a directed forest. The example in your picture is even an arborescence (directed tree):</p> <pre><code> * light brown / \ / \ yellow * * dark brown / \ / \ red * * blue </code></pre>
944,965
<p>Question: what's the order of the element $a=33$ in $\mathbb{Z}_{60}$ (under modular addition)?</p> <p>Answer: $\langle 33 \rangle= \{33,6,39,12,45,18,51,24,57,30,3,36,9,42,15,48,21,54,27,0\}$. Therefore, $\text{order}(33)=20$</p> <p>What's the inverse of $33$ in $\mathbb{Z}_{60}$ (under modular addition)?</p> <p>I'm struggling with finding the inverse. Can anyone show me how to find it? </p> <p>I would deeply appreciate your work and efforts</p> <p>Thanks</p>
paw88789
147,810
<p>The inverse of $33$ in this additive group is $-33$, but in $\mathbb{Z}_{60}$, we have $-33\equiv 27\pmod{60}$</p>
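A quick brute-force check (a sketch of my own, not part of the answer) confirms both the order and the inverse:

```python
def additive_order(a, m):
    # smallest k >= 1 with k*a congruent to 0 (mod m)
    k, s = 1, a % m
    while s != 0:
        s = (s + a) % m
        k += 1
    return k

order_33 = additive_order(33, 60)  # 20, agreeing with 60 / gcd(33, 60)
inverse_33 = (-33) % 60            # the additive inverse of 33 mod 60
check = (33 + inverse_33) % 60     # must be 0
```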
36,625
<p>Say $L\mathbb{C}^\times$ is the loop group of smooth maps $S^1 \to \mathbb{C}^\times$. There is a submonoid $L_{poly}\mathbb{C}^\times$ of loops that look like $w_0 + w_1z +w_2z^2 + \cdots + w_nz^n$ where $z = e^{i\theta}$ (as Andrew notes below this is not a group because its not closed under taking inverses). Equivalently $L_{poly}\mathbb{C}^\times$ as a set is just polynomials $p(z) \in \mathbb{C}[z]$ such that $p(z) \ne 0$ for $|z| = 1$. </p> <p>If we mod out by scaling and rotation then the set of polynomial loops describe a subset $X$ of $\mathbb{P}(\oplus_{n \in \mathbb{N}} \mathbb{C})$ (by identifying a loop with its vector of coefficients). I want to look at $X$ from an algebro-geometric point of view, but I have no intuition about how bad or nice $X$ may be; i.e. can it be a variety? </p> <p>The way I've been thinking about it is that $X = \cup_{n \in \mathbb{N}} X_n$ where $X_n \subset \mathbb{P}^n$ are the loops of degree at most $n$. I think $X_1$ is the image under the projection $\mathbb{C}^2 - 0 \to \mathbb{P}^1$ of the set {$(w_0,w_1):|w_0|\ne |w_1|$}. So it seems describable as the complement of a hypersurface in $\mathbb{R}^4$ but probably its not a complex variety.</p> <p>But already trying to figure out what $X_2$ is seems difficult. Also I feel I don't have any `sophisticated' way of thinking about this stuff meaning my attempts to describe $X_2$ seems to always degenerate to just fumbling around with planar geometry. 
</p> <p>Some specific questions regarding this setup:</p> <p>0) What is the dimension of $X_n$?</p> <p>1) Which, if any, of the $X_n$ or $X$ are varieties over $\mathbb{C}$ or $\mathbb{R}$?</p> <p>2) If $X$ or $X_n$ are not varieties, can you find any positive-dimensional varieties contained in them?</p> <p>3) Can you suggest any tools that might be useful for answering any of the previous questions?</p> <p>Of course, if any of this seems too easy, you are welcome to replace $\mathbb{C}^\times$ with $GL(n,\mathbb{C})$, polynomials with rational functions or with power series convergent in an annulus containing $|z| = 1$.</p> <p>An extra thought: So $L_{poly}\mathbb{C}^\times$ is not a group, but it seems you do get a group if you look at convergent series in non-positive powers of $z$; i.e. loops that look like $\sum_{j \in \mathbb{N}} c_j z^{-j}$. I wonder if there's anything interesting you can say about the $c_j$.</p>
David Corwin
1,355
<p>0) Under most reasonable notions of dimension, the (real) dimension would be $2n$. For a polynomial up to scaling is the same as an element of $\mathbb{C}^n$ (i.e. a collection of $n$ roots). We can mod out by scaling by a positive real factor, giving something of real dimension $2n-1$. When doing this modding out, the fibre of each element of the quotient contains at most $n$ (hence a finite number) of polynomials which are not in $X_n$, implying that the real (topological) dimension should still be $2n$.</p> <p>As for whether it's a variety, you're looking at the intersection of infinitely many Zariski closed sets defined by $\sum_{i=0}^n w_i z^i$ for each $z \in S^1$, which I would not necessarily expect to be a variety. It is, however, a definable real set (in the sense of model theory), which would imply that it's at least almost a kind of quasiprojective real variety. In other words, the infinitely many inequation relations satisfied by elements of $X_n$ lie on a real variety themselves, meaning they probably have some sort of nice structure. Someone who knows more than I do should try to answer the rest of the question.</p>
36,625
<p>Say $L\mathbb{C}^\times$ is the loop group of smooth maps $S^1 \to \mathbb{C}^\times$. There is a submonoid $L_{poly}\mathbb{C}^\times$ of loops that look like $w_0 + w_1z +w_2z^2 + \cdots + w_nz^n$ where $z = e^{i\theta}$ (as Andrew notes below this is not a group because its not closed under taking inverses). Equivalently $L_{poly}\mathbb{C}^\times$ as a set is just polynomials $p(z) \in \mathbb{C}[z]$ such that $p(z) \ne 0$ for $|z| = 1$. </p> <p>If we mod out by scaling and rotation then the set of polynomial loops describe a subset $X$ of $\mathbb{P}(\oplus_{n \in \mathbb{N}} \mathbb{C})$ (by identifying a loop with its vector of coefficients). I want to look at $X$ from an algebro-geometric point of view, but I have no intuition about how bad or nice $X$ may be; i.e. can it be a variety? </p> <p>The way I've been thinking about it is that $X = \cup_{n \in \mathbb{N}} X_n$ where $X_n \subset \mathbb{P}^n$ are the loops of degree at most $n$. I think $X_1$ is the image under the projection $\mathbb{C}^2 - 0 \to \mathbb{P}^1$ of the set {$(w_0,w_1):|w_0|\ne |w_1|$}. So it seems describable as the complement of a hypersurface in $\mathbb{R}^4$ but probably its not a complex variety.</p> <p>But already trying to figure out what $X_2$ is seems difficult. Also I feel I don't have any `sophisticated' way of thinking about this stuff meaning my attempts to describe $X_2$ seems to always degenerate to just fumbling around with planar geometry. 
</p> <p>Some specific questions regarding this setup:</p> <p>0) What is the dimension of $X_n$?</p> <p>1) Which, if any, of the $X_n$ or $X$ are varieties over $\mathbb{C}$ or $\mathbb{R}$?</p> <p>2) If $X$ or $X_n$ are not varieties, can you find any positive-dimensional varieties contained in them?</p> <p>3) Can you suggest any tools that might be useful for answering any of the previous questions?</p> <p>Of course, if any of this seems too easy, you are welcome to replace $\mathbb{C}^\times$ with $GL(n,\mathbb{C})$, polynomials with rational functions or with power series convergent in an annulus containing $|z| = 1$.</p> <p>An extra thought: So $L_{poly}\mathbb{C}^\times$ is not a group, but it seems you do get a group if you look at convergent series in non-positive powers of $z$; i.e. loops that look like $\sum_{j \in \mathbb{N}} c_j z^{-j}$. I wonder if there's anything interesting you can say about the $c_j$.</p>
Andrew Stacey
45
<p>(This is perhaps more of a comment than an answer, though I hope it also gives you some ideas where to look further.)</p> <p>I have difficulty with the fourth word of the second sentence: "subgroup". The space that you describe is not a subgroup of $L \mathbb{C}^\times$. It is a submonoid, but is not closed under taking inverses. Even if (as is usual) you allow Laurent polynomials, it is still not a subgroup. So exactly what answer will satisfy you depends on whether you really meant "subgroup" and so need to change the rest, or you really meant the space you describe and are happy to drop the requirement that it be a subgroup.</p> <p>If the former (that you really want a subgroup), then I have some pointers for you. If the latter, I don't.</p> <p>To get a polynomial subgroup of a loop group, you need to be working with a compact Lie group. The simplest example being $U_n$. A polynomial loop in $U_n$ is simply one that looks like a polynomial when viewed as a loop in $M_n(\mathbb{C})$. These do have polynomial inverses since one simply takes the adjoint of the coefficients to produce the inverse and that is still a polynomial. (For a more general compact Lie group, embed it analytically in some $U_n$.) For $\mathbb{C}^\times$, this produces polynomial loops in $S^1$ which is ... not a lot! You get $\{z^k : k \in \mathbb{Z}\}$. Note, however, that this is homotopy equivalent to the much larger group $L\mathbb{C}^\times$!</p> <p>If this is what you are interested in, then the place to look is <em>Loop Groups</em> by Pressley and Segal. Another reference, also by Segal, is <a href="http://www.ams.org/mathscinet-getitem?mr=1055875" rel="nofollow">Loop groups and harmonic maps</a>. From memory (it's been a while since I last looked at it), the latter article deals with issues close to what you're asking about.</p>
1,116,202
<p>I am taking a class in which knowledge of gradients is a prerequisite. I am familiar with gradients but don't have too much experience, so I am having trouble understanding the following example.</p> <p>$\theta, x \in \mathbb R^d$.</p> <p>Define $\nabla J(\theta) = \begin{pmatrix} \frac{\partial}{\partial \theta_1} J(\theta) \\ \frac{\partial}{\partial \theta_2} J(\theta) \\ \vdots \\ \frac{\partial}{\partial \theta_d} J(\theta) \end{pmatrix}$ .</p> <p>Then, I'm having trouble understanding why the following is true: $\nabla (\theta \cdot x) = \nabla \left( \sum_{i=1}^{d} \theta_i x_i \right) = \begin{pmatrix} \frac{\partial}{\partial \theta_1} \theta \cdot x \\ \frac{\partial}{\partial \theta_2} \theta \cdot x \\ \vdots \\ \frac{\partial}{\partial \theta_d} \theta \cdot x \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_d \end{pmatrix} = x$ </p> <p>My two questions are:</p> <ul> <li>Why is the third equality true? That is, why is $\frac{\partial}{\partial \theta_i} \theta \cdot x = x_i$?</li> <li>The first equality expresses the result as the gradient of a scalar: $\nabla \left( \sum_{i=1}^{d} \theta_i x_i \right)$. How is it possible that this is equal to a vector $x$ in the last equality?</li> </ul>
Amitai Yuval
166,201
<p>There are a few alternative ways to think of it, which may make it simpler to understand. One which is quite similar to the above calculation is$$\nabla(\theta\cdot x)=\nabla\left(\sum\theta_ix_i\right)=\left(\begin{array}{c}\frac{\partial}{\partial\theta_1}\sum\theta_ix_i\\\vdots\\\frac{\partial}{\partial\theta_d}\sum\theta_ix_i\end{array}\right)=\left(\begin{array}{c}x_1\\\vdots\\x_d\end{array}\right)=x.$$Regarding the second question - the vector $x$ is constant, and the product $\theta\cdot x$ is a function of $\theta$. The gradient is a machine that eats a function and returns a vector, as is the case here.</p>
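One can also check the identity $\nabla_\theta(\theta\cdot x)=x$ numerically with central finite differences (a quick sketch with randomly chosen $\theta$ and $x$; the helper names are mine):

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def grad_fd(J, theta, eps=1e-6):
    # central finite-difference approximation of the gradient of J at theta
    g = []
    for i in range(len(theta)):
        tp, tm = list(theta), list(theta)
        tp[i] += eps
        tm[i] -= eps
        g.append((J(tp) - J(tm)) / (2 * eps))
    return g

random.seed(0)
d = 5
x = [random.uniform(-1, 1) for _ in range(d)]
theta = [random.uniform(-1, 1) for _ in range(d)]
g = grad_fd(lambda t: dot(t, x), theta)  # gradient of theta . x w.r.t. theta
max_err = max(abs(gi - xi) for gi, xi in zip(g, x))
```

Since $\theta\cdot x$ is linear in $\theta$, the central difference is exact up to floating-point rounding, so `max_err` should be tiny.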
2,355,246
<p>The optimisation problem [PROBLEM 1]: $$\max \sum_{t=0}^{T} x_t $$ subject to: $$(1)\quad x_t = (1+\alpha) x_{t-1}\quad \forall t \in \{1,\dots,T\},\quad x_0 \text{ given},$$ $$\text{e.g. } x_1 = (1+\alpha) x_0,\quad x_2 = (1+\alpha) x_1,\quad x_3 = (1+\alpha) x_2,$$ $$(2)\quad x_t \ge 0. $$</p> <p>Now, I would like to allow negative values of the variable $x$ and apply a different factor. I made the following changes [PROBLEM 2]:</p> <p>$$x_t = x_t^+ - x_t^-, $$</p> <p>$$x_t = (1+\alpha) x_{t-1}^+ - (1+\beta) x_{t-1}^-, $$</p> <p>$$x_t^+ , x_t^- \ge 0, $$</p> <p>$$x_t \ge -10. $$</p> <p>The solver reports an infeasible (unbounded) solution! Is there an alternative way to solve this kind of problem, in particular applying a different factor (multiplier) to negative values? The model is more complex than what I wrote, but I wanted to highlight the core of the problem.</p>
Daron
53,993
<p>Just consider the upper-right quadrant of the unit circle. </p> <p>I presume you are happy to admit there are infinitely many points in the interval $[0,1]$.</p> <p>Each $x \in [0,1]$ is the $x$-coordinate of a point on the upper-right quadrant. </p> <p>The $y$-value of that point is $y= \displaystyle \sqrt{1-x^2}$ by Pythagoras' theorem.</p> <p>So each of the infinitely-many $x \in [0,1]$ is the $x$-coordinate of a unit vector. </p> <p>These vectors are all distinct because their $x$-coordinates are all distinct.</p> <p>But we can be more explicit! </p> <p>We can in fact write down infinitely many allowed $x$-values.$^1$ For example, the sequence $1, \frac{1}{2}, \frac{1}{3}, \ldots $ of infinitely many real numbers. </p> <p>The same method as before lets us write down infinitely many different unit vectors:</p> <p>$\displaystyle \Big ( \frac{1}{2} , \sqrt{1-\frac{1}{4} } \Big )$, $\displaystyle \Big ( \frac{1}{3} , \sqrt{1-\frac{1}{9} } \Big )$, $\displaystyle \Big ( \frac{1}{4} , \sqrt{1-\frac{1}{16} } \Big ), \ldots$</p> <p>And you can check they are unit length by taking inner products.</p> <hr> <p>$1$. If you want to show there are uncountably many real numbers, we cannot write them all down. And you need to specify what you ARE assuming about the real number line to start.</p>
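The construction is easy to check by machine (a small sketch; the function name `unit_vector` is my own):

```python
import math

def unit_vector(x):
    # the point on the upper-right quadrant of the unit circle with x-coordinate x
    return (x, math.sqrt(1 - x * x))

vectors = [unit_vector(1 / n) for n in range(2, 100)]
norms_ok = all(abs(vx * vx + vy * vy - 1) < 1e-12 for vx, vy in vectors)
all_distinct = len({vx for vx, _ in vectors}) == len(vectors)
```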
440,744
<p>The vector space dimension of the cohomology group of the <span class="math-container">$2$</span>-plane Grassmannian <span class="math-container">$\mathrm{Gr}_{2,n}$</span> is given by the number of tuples <span class="math-container">$(\lambda_1,\lambda_2)$</span> satisfying <span class="math-container">$$ n - 2 \geq \lambda_1 \geq \lambda_2 \geq 0. $$</span> Explicitly this is given by <span class="math-container">$$ \binom{n}{2}. $$</span> This also happens to be the dimension of <span class="math-container">$V_{\pi_2}$</span> the second fundamental representation of <span class="math-container">$\frak{sl}_n$</span>. I am guessing this is not an accident, especially since the <span class="math-container">$2$</span>-plane Grassmannian corresponds (in the usual way) to <span class="math-container">$V_{\pi_2}$</span>.</p> <p>Does this extend to the general identity <span class="math-container">$$ \mathrm{dim}(H^{*}(\mathrm{Gr}_{d,n})) = \mathrm{dim}(V_{\pi_d})? $$</span> If it does, then what is a conceptual explanation for this?</p> <p>EDIT: Since <span class="math-container">$V_{\pi_d}$</span> is isomorphic to the exterior power <span class="math-container">$$ \Lambda^d(V_{\pi_1}) $$</span> and <span class="math-container">$V_{\pi_1}$</span> is of dimension of <span class="math-container">$n$</span>, we see that the RHS of the claimed identity is the binomial coefficient <span class="math-container">$$ \binom{n}{d}. $$</span> It follows from the general formula given in this <a href="https://mathoverflow.net/questions/196546/hard-lefschetz-theorem-for-the-flag-manifolds">answer</a> that the LHS is the same binomial coefficient. Thus the identity does indeed extend from <span class="math-container">$2$</span>-planes to <span class="math-container">$d$</span>-planes. So the question is if there is a conceptual reason for this . . .</p>
Ben Webster
66
<p>This is a special case of geometric Satake. I have to admit, I'm really struggling to find a place where this theorem is stated in an elementary-ish statement that makes this clear and am totally failing. By <a href="https://arxiv.org/pdf/math/0401222.pdf" rel="noreferrer">Mirkovic-Vilonen, Thm. 7.3</a> (and the discussion below), the irreps <span class="math-container">$V_{\lambda}$</span> of any complex algebraic group can be written as the intersection cohomology of the closure <span class="math-container">$\overline{\check{G}[[t]] \cdot t^{\lambda}\cdot \check{G}[[t]]}/G[[t]]\subset G((t))/G[[t]]$</span> in the affine Grassmannian. If <span class="math-container">$\lambda$</span> is minuscule, then this orbit is already closed and smooth, so the intersection cohomology is usual cohomology. Furthermore, it is of the form <span class="math-container">$\check{G}/\check{P}$</span> for <span class="math-container">$\check{P}$</span> the parabolic corresponding to the stabilizer of the minuscule weight in the Weyl group.</p> <p>For <span class="math-container">$G=\check{G}=GL_n$</span>, these are all Grassmannians; more generally, every minuscule representation is isomorphic to the cohomology of the corresponding cominuscule flag variety of the dual group.</p>
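The counting fact underlying the question, that partitions fitting in a $d\times(n-d)$ box (the index set of the Schubert classes of $\mathrm{Gr}_{d,n}$) are counted by $\binom{n}{d}$, is easy to verify by brute force; here is a small sketch (the enumeration code is mine, not from either post):

```python
from math import comb

def partitions_in_box(d, n):
    # tuples (l_1, ..., l_d) with n - d >= l_1 >= ... >= l_d >= 0;
    # these index the Schubert classes of Gr(d, n)
    def gen(k, top):
        if k == 0:
            yield ()
            return
        for l in range(top, -1, -1):
            for rest in gen(k - 1, l):
                yield (l,) + rest
    return list(gen(d, n - d))

counts = [(d, n, len(partitions_in_box(d, n)), comb(n, d))
          for n in range(1, 8) for d in range(0, n + 1)]
all_match = all(c == b for _, _, c, b in counts)
```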
1,283,516
<p>Given a triangle $ABC$, mark a point $D$ on the side $AB$. Show that $\overline {CD}$ is smaller than the length of one of the sides $BC$ and $AC$.</p> <p>Ideas? The triangle inequality alone will not do it.</p> <p>I wanted to try the exterior angle theorem and then apply a proposition that says "If two angles of a triangle are not congruent, then the sides that oppose these angles have different measures, and the longer side opposes the larger angle".</p>
Jack D'Aurizio
44,121
<p>We may assume without loss of generality that $CA\geq CB$. Then the triangle $ABC$ lies inside the circle $\Gamma$ with centre $C$ and radius $CA$. Assuming $CD&gt;CA$, $D$ lies outside $\Gamma$, but that leads to a contradiction, since a circle is a convex set and the segment $AB$ lies inside $\Gamma$.</p> <p>Another approach: assume that the projection of $C$ on the $AB$-line, say $H_C$, lies between $A$ and $B$. For any $P\in H_C B$, the length of $PC$ is between $CH_C$ and $CB$ by the Pythagorean theorem, and for any $P\in H_C A$, the length of $PC$ is between $CH_C$ and $CA$ for the same reason, hence for any $P\in AB$, $PC$ cannot exceed $\max(CA,CB)$. On the other hand, if the projection of $C$ on the $AB$-line lies outside the $AB$-segment, the map $P\to d(P,C)$ is a monotonic map on the $AB$-segment, and $PC\leq\max(CA,CB)$ still holds.</p>
1,283,516
<p>Given a triangle $ABC$, mark a point $D$ on the side $AB$. Show that $\overline {CD}$ is smaller than the length of one of the sides $BC$ and $AC$.</p> <p>Ideas? The triangle inequality alone will not do it.</p> <p>I wanted to try the exterior angle theorem and then apply a proposition that says "If two angles of a triangle are not congruent, then the sides that oppose these angles have different measures, and the longer side opposes the larger angle".</p>
grand_chat
215,011
<p>The triangle inequality--in Euclidean vector space--does work. Think of $A$, $B$, $C$, $D$ as vectors in $\mathbb R^d$, so that the Euclidean norm $||A-C||$ is the length of the side $AC$, and $||D-C||$ is the length of segment $DC$, and so on. By definition of $D$, we have $D=\lambda A + (1-\lambda) B$ for some $\lambda$ between $0$ and $1$. Then $$ \begin{align} ||D-C||&amp;= ||\lambda (A-C)+(1-\lambda)(B-C)||\\ &amp;\le \lambda||A-C|| +(1-\lambda)||B-C||\\&amp;\le\max\{||A-C||,||B-C||\} \end{align}$$</p>
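The inequality chain is easy to test numerically (a sanity check on random planar triangles, not a substitute for the proof):

```python
import math
import random

def norm(u):
    return math.sqrt(sum(c * c for c in u))

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

random.seed(1)
ok = True
for _ in range(500):
    A = [random.uniform(-5, 5) for _ in range(2)]
    B = [random.uniform(-5, 5) for _ in range(2)]
    C = [random.uniform(-5, 5) for _ in range(2)]
    lam = random.random()
    D = [lam * a + (1 - lam) * b for a, b in zip(A, B)]  # a point on segment AB
    # ||D - C|| should never exceed max(||A - C||, ||B - C||)
    if norm(sub(D, C)) > max(norm(sub(A, C)), norm(sub(B, C))) + 1e-9:
        ok = False
```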
2,273,477
<p><strong>Bessel's Inequality</strong></p> <p>Let $(X, \langle\cdot,\cdot\rangle )$ be an inner product space and $(e_k)$ an orthonormal sequence in $X$. Then for every $x \in X$ : $$ \sum_{1}^{\infty} |\langle x,e_k\rangle |^2 \le \|x\|^2$$ where $\| \cdot\|$ is of course the norm induced by the inner product. </p> <p>Now suppose we have a sequence of scalars $a_k$ and that the series $$ \sum_{1}^{\infty} a_k e_k = x $$ converges to an $x \in X$. </p> <blockquote> <p><strong>Lemma 1</strong> We can easily show that $a_k=\langle x,e_k\rangle $ (I'll do it quickly)</p> <p><em>Proof.</em> Denote by $s_n$ the sequence of partial sums of the above series, which of course converges to $x$. Then for every $j&lt;n$, $\langle s_n, e_j\rangle = a_j$ and by continuity of the inner product $a_j=\langle x,e_j\rangle $</p> <p><strong>Lemma 2</strong> We can also show that since $s_n$ converges to $x$, then $\sigma_n = |a_1|^2 + \cdots + |a_n|^2 $ converges to $\|x\|^2$:</p> <p><em>Proof.</em> $\|s_n\|^2 = \| a_1 e_1 +\cdots+a_n e_n\|^2 = |a_1|^2 + \cdots + |a_n|^2 $ since $(e_k)$ are orthonormal (Pythagorean). But $\|s_n\|^2$ converges to $\|x\|^2$, which completes the proof.</p> </blockquote> <p>So we showed the following: $$\sum_1^{\infty} |a_k|^2= \sum_1^\infty |\langle x,e_k\rangle |^2 = \|x\|^2$$</p> <p><strong>Confusion</strong></p> <p>So equality holds in Bessel's inequality for this $x$. We arbitrarily chose $a_k$, so does that mean that equality holds for all $x \in X$? Obviously not, otherwise it would be Bessel's equality. What am I getting wrong?</p>
Disintegrating By Parts
112,478
<p>Bessel's inequality is equality for some $x$ iff $x$ is in the closed linear span $\overline{[\{ e_k \}]}$ of the orthonormal elements $\{ e_n \}$. You don't have to assume that $\sum_k \langle x,e_n\rangle e_n$ converges to $x$, but that is the final conclusion if $x \in \overline{[\{ e_k \}]}$. This conclusion follows because $$\|x-\sum_{k=1}^{n}\langle x,e_n\rangle e_n\| =\inf_{\alpha_1,\cdots,\alpha_n} \|x-\sum_{k=1}^{n}\alpha_k e_k\|$$</p>
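A finite-dimensional toy example makes the dichotomy visible (my own example in $\mathbb R^3$ with the orthonormal family $\{e_1,e_2\}$): equality holds exactly when $x$ lies in the span, and the gap otherwise is the squared norm of the component outside it.

```python
e1 = (1.0, 0.0, 0.0)
e2 = (0.0, 1.0, 0.0)  # orthonormal family {e1, e2}, spanning the xy-plane

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bessel_sum(x):
    # sum of |<x, e_k>|^2 over the family
    return dot(x, e1) ** 2 + dot(x, e2) ** 2

x_in_span = (3.0, 4.0, 0.0)   # lies in the closed span: equality
x_outside = (3.0, 4.0, 5.0)   # does not: strict inequality

eq_gap = dot(x_in_span, x_in_span) - bessel_sum(x_in_span)      # 0
strict_gap = dot(x_outside, x_outside) - bessel_sum(x_outside)  # 5^2 = 25
```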
349,212
<p><strong>Context:</strong> I am a PhD student in theoretical physics with higher-than-average education on differential geometry. I am trying to understand Lagrangian and Hamiltonian field theories and related concepts like Noether's theorem etc. in a mathematically rigorous way since the standard physics literature is sorely lacking for someone who values precision and generality, in my opinion.</p> <p>I am currently studying various texts by Anderson, Olver, Krupka, Sardanashvili etc. on the variational bicomplex and on the formulation of Lagrangian systems on jet bundles. I have not mastered the formalism yet, but I have made significant steps towards understanding.</p> <p>On the other hand, most physics literature employs the functional formalism, where rather than the calculus of variations taking place on finite dimensional jet bundles (or the "mildly infinite dimensional" <span class="math-container">$\infty$</span>-jet bundle), it takes place on the suitably chosen (and usually not actually explicitly chosen) infinite dimensional space of smooth sections (of the given configuration bundle).</p> <p>Even relatively precise physics authors like Wald, DeWitt or Witten (lots of 'W's here) seem to prefer this approach (I am referring to various papers on the so-called "covariant phase space formulation", which is a functional and infinite dimensional but manifestly "covariant" approach to Hamiltonian dynamics, which also seems to be a focus of DeWitt's "The Global Approach to Quantum Field Theory", which is a book I'd like to read through but find impenetrable as yet).</p> <p>I find it difficult to arrive at a common ground between the functional formalism and the jet-based formalism. 
I also do not know if the functional approach has been developed to any modern standard of mathematical rigour, or whether the variational bicomplex-based approach was developed precisely to avoid the usual infinite dimensional troubles.</p> <p><strong>Example:</strong></p> <p><a href="https://i.stack.imgur.com/PLEld.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PLEld.png" alt="Augmented variational bicomplex"></a></p> <p>Here is an image from Anderson's "The Variational Bicomplex", which shows the so-called augmented variational bicomplex. Here <span class="math-container">$I$</span> is the so-called interior Euler operator, which seems to be a substitute for integration by parts in the functional approach.</p> <p>Later on, Anderson proves that the vertical columns are locally exact, and the <em>augmented</em> horizontal rows (sorry for picture linking, xypic doesn't seem to be working here, don't know how to draw complexes)</p> <p><a href="https://i.stack.imgur.com/StDOy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/StDOy.png" alt="enter image description here"></a></p> <p>are locally exact as well. In fact, for the homotopy operator <span class="math-container">$\mathcal H^1:\mathcal F^1\rightarrow\Omega^{n,0}$</span> that reconstructs Lagrangians from "source forms" (equations of motion) he gives (for source form <span class="math-container">$\Delta=P_a[x,y]\theta^a\wedge\mathrm d^nx$</span>) <span class="math-container">$$ \mathcal H^1(\Delta)=\int_0^1 P_a[x,tu]u^a\mathrm dt\ \mathrm d^nx. 
$$</span></p> <p>On the other hand, if we use the functional formalism in an unrigorous manner, the functional derivative <span class="math-container">$$ S\mapsto\frac{\delta S[\phi]}{\delta \phi^a(x)} $$</span> behaves like the infinite dimensional analogue of the ordinary partial derivative, so using the local form of the homotopy operator for the de Rham complex (which for the lowest degree is <span class="math-container">$f:=H(\omega)=\int_0^1\omega_\mu(tx)x^\mu\mathrm dt$</span>) and extending it "functionally", one can arrive at the fact that if an "equation of motion" <span class="math-container">$E_a(x)[\phi]$</span> satisfies <span class="math-container">$\frac{\delta E_a(x)}{\delta\phi^b(y)}-\frac{\delta E_b(y)}{\delta\phi^a(x)}=0$</span>, then <span class="math-container">$E_a(x)[\phi]$</span> will be the functional derivative of the action functional <span class="math-container">$$ S[\phi]=\int_0^1\mathrm dt\int\mathrm d^nx\ E_a(x)[t\phi]\phi^a(x). $$</span></p> <p>I have (re)discovered this formula on my own by simply abusing the finite dimensional analogy and was actually surprised that this works, but it does agree (up to evaluation on a secton and integration) with the homotopy formula given in Anderson.</p> <p>This makes me think that the "variation" <span class="math-container">$\delta$</span> can be considered to be a kind of exterior derivative on the formal infinite dimensional space <span class="math-container">$\mathcal F$</span> of all (suitable) field configurations, and the Lagrangian inverse problem can be stated in terms of the de Rham cohomology of this infinite dimensional field space.</p> <p>This approach however fails to take into account boundary terms, since it works only if integration by parts can be performed with reckless abandon and all resulting boundary terms can be thrown away. 
This can also be seen as follows: in the variational bicomplex above, the <span class="math-container">$\delta$</span> variation in the functional formalism corresponds to the <span class="math-container">$\mathrm d_V$</span> vertical differential, but in the augmented horizontal complex, the <span class="math-container">$\delta_V=I\circ\mathrm d_V$</span> appears, which has the effect of performing integrations by parts, and the first variation formula is actually <span class="math-container">$$ \mathrm d_V L=E(L)-\mathrm d_H\Theta, $$</span> where the boundary term appears explicitly in the form of the horizontally exact term.</p> <p>The functional formalism on the other hand requires integrals everywhere and boundary terms to be thrown aside for <span class="math-container">$\delta$</span> to behave as an exterior derivative. Moreover, integrals of different dimensionalities (e.g. integrals over spacetime and integrals over a hypersurface etc.) tend to appear sometimes in the functional formalism, which can only be treated using the same concept of functional derivative if various delta functions are introduced, which makes me think that de Rham currents (I am mostly unfamiliar with this area of mathematics) are also involved here.</p> <p><strong>Question:</strong> I would like to ask for references to papers and/or textbooks that develop the functional formalism in a general and mathematically precise manner (if any such exist) and also (hopefully) that compare meaningfully the functional formalism to the jet-based formalism.</p>
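To illustrate the homotopy operator concretely in the finite dimensional case (a toy check only, with a sample potential $f$ of my own choosing), one can verify numerically that $H(\omega)(x,y)=\int_0^1\left[\omega_x(tx,ty)\,x+\omega_y(tx,ty)\,y\right]\mathrm dt$ recovers $f$ up to the constant $f(0,0)$ when $\omega=\mathrm df$, which is the lowest-degree Poincaré lemma formula quoted above:

```python
import math

def f(x, y):
    # a sample potential with f(0, 0) = 0 (purely illustrative)
    return x * x * y + math.sin(x)

def omega(x, y, eps=1e-6):
    # omega = df, computed by central differences
    wx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    wy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    return wx, wy

def homotopy(x, y, n=2000):
    # H(omega)(x, y) = int_0^1 [omega_x(tx, ty) x + omega_y(tx, ty) y] dt,
    # approximated by the midpoint rule; for closed omega this recovers
    # f(x, y) - f(0, 0).
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        wx, wy = omega(t * x, t * y)
        total += (wx * x + wy * y) / n
    return total

err = max(abs(homotopy(x, y) - f(x, y))
          for x, y in [(0.3, -0.7), (1.2, 0.5), (-0.8, 0.9)])
```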
Pedro Lauridsen Ribeiro
11,211
<p>I do not know if it is good form for MO to cite one's own papers when answering a question, but I will take the chance. This matter is addressed in quite a bit of detail in my joint paper with Romeo Brunetti and Klaus Fredenhagen, </p> <ul> <li>R. Brunetti, K. Fredenhagen, P. L. Ribeiro, <em>Algebraic Structure of Classical Field Theory: Kinematics and Linearized Dynamics for Real Scalar Fields</em>, <a href="https://link.springer.com/article/10.1007/s00220-019-03454-z" rel="noreferrer">Commun. Math. Phys. <strong>368</strong> (2019) 519-584</a>, <a href="https://arxiv.org/abs/1209.2148" rel="noreferrer">arXiv:1209.2148 [math-ph]</a>. </li> </ul> <p>There we discuss only scalar fields, but the discussion remains essentially unchanged for sections of fiber bundles or even fibered manifolds. In what follows, I will assume the former.</p> <p>As a rule, the functional formalism is more general, for it is clear that given a smooth <span class="math-container">$d$</span>-form <span class="math-container">$\omega$</span> on the total space of the <span class="math-container">$k$</span>-th order jet bundle <span class="math-container">$J^k\pi$</span> of the fiber bundle <span class="math-container">$\pi:E\rightarrow M$</span> with <span class="math-container">$D$</span>-dimensional typical fiber <span class="math-container">$Q$</span> over the smooth <span class="math-container">$d$</span>-dimensional (space-time) manifold <span class="math-container">$M$</span> (we assume all finite-dimensional manifolds here to be smooth, Hausdorff, paracompact and connected) - think of <span class="math-container">$\omega$</span> as a "Lagrangian density" - we have that <span class="math-container">$$F_K(\varphi)=\int_K (j^k\varphi)^*\omega\ ,\quad\varphi\in\Gamma(\pi)$$</span> is a functional on the space <span class="math-container">$\Gamma(\pi)=\{\varphi\in\mathscr{C}^\infty(M,E)\ |\ \pi\circ\varphi=\mathrm{id}_M\}$</span> of smooth sections of <span 
class="math-container">$\pi$</span> for each compact <em>region</em> <span class="math-container">$K\subset M$</span> (i.e. <span class="math-container">$K$</span> has nonvoid interior) and each <span class="math-container">$k=0,1,...\infty$</span>. The case <span class="math-container">$k=\infty$</span> can be handled just like for finite <span class="math-container">$k$</span> since the total space of the infinite-order jet bundle <span class="math-container">$J^\infty\pi$</span> is the countable projective limit of the finite-dimensional manifolds <span class="math-container">$J^k\pi$</span> and therefore is a metrizable Fréchet manifold, see e.g. the aforementioned paper or the book by Andreas Kriegl and Peter Michor, <a href="https://www.mat.univie.ac.at/~michor/apbookh-ams.pdf" rel="noreferrer"><em>The Convenient Setting of Global Analysis</em> (AMS, 1997)</a> for a discussion of the manifold structure of <span class="math-container">$J^\infty\pi$</span>. For a better handling of the boundary terms which appear when performing functional derivatives (see below), it is convenient to replace <span class="math-container">$K$</span> with the multiplication of <span class="math-container">$(j^k\varphi)^*\omega$</span> by some <span class="math-container">$f\in\mathscr{C}^\infty_c(M)$</span> and then integrate over the whole of <span class="math-container">$M$</span>, thus yielding the functional <span class="math-container">$$F(\varphi)=\int_M f(j^k\varphi)^*\omega\ ,\quad\varphi\in\Gamma(\pi)\ .$$</span> Speaking of which, <span class="math-container">$\Gamma(\pi)$</span> has an infinite-dimensional manifold structure modelled on the locally convex vector spaces <span class="math-container">$\Gamma_c(\varphi^*V\pi)$</span> of smooth sections with compact support of the pullback of the vertical bundle <span class="math-container">$V\pi=\ker T\pi$</span> of <span class="math-container">$\pi$</span> by each <span class="math-container">$\varphi\in\Gamma(\pi)$</span> 
(these locally convex vector spaces are all canonically topologically isomorphic to each other) - the corresponding (topological) manifold structure is the so-called <em>Whitney topology</em> on <span class="math-container">$\Gamma(\pi)$</span>. There is in addition a natural smooth structure on <span class="math-container">$\Gamma(\pi)$</span> modelled on that of <span class="math-container">$\Gamma_c(\varphi^*V\pi)$</span> - it can be proven that smooth curves <span class="math-container">$\gamma:\mathbb{R}\rightarrow\Gamma(\pi)$</span> are necessarily of the form <span class="math-container">$\gamma\in\mathscr{C}^\infty(\mathbb{R}\times M,E)$</span> with <span class="math-container">$\gamma(t,\cdot)=\gamma(t)=\gamma_t\in\Gamma(\pi)$</span> (i.e. <span class="math-container">$\pi\circ\gamma_t=\mathrm{id}_M$</span>) for all <span class="math-container">$t\in\mathbb{R}$</span> and for all <span class="math-container">$a&lt;b\in\mathbb{R}$</span> there is <span class="math-container">$K\subset M$</span> compact such that <span class="math-container">$\gamma(t,p)$</span> is constant in <span class="math-container">$t\in[a,b]$</span> for all <span class="math-container">$p\not\in K$</span>. This entails that <span class="math-container">$\gamma'_t\in\Gamma_c(\gamma_t^*V\pi)$</span> for all <span class="math-container">$t\in\mathbb{R}$</span> since we are differentiating along a single fiber of <span class="math-container">$\pi$</span> when differentiating with respect to the curve parameter <span class="math-container">$t$</span>. Given that notion of smooth curves, smooth maps are just the ones that map smooth curves to smooth curves (see A. Kriegl, P. Michor, <em>loc. cit.</em> for many more details). 
Given the specific notion of smooth curves and smooth maps that <span class="math-container">$\Gamma(\pi)$</span> has, it is easy to see why the "Poincaré-lemma-type" argument used by Anderson to solve the inverse problem of calculus of variations can be recast in functional form with essentially no change as you did, for the core of the argument still remains essentially finite-dimensional.</p> <p>In order to see where the boundary terms come from, consider the <em>second</em> functional <span class="math-container">$F$</span> above. The variational derivative consists only in taking the derivative of <span class="math-container">$F(\gamma_t)$</span> with respect to <span class="math-container">$t$</span> at <span class="math-container">$t=0$</span>, where <span class="math-container">$\gamma:\mathbb{R}\rightarrow\Gamma(\pi)$</span> is a smooth curve on <span class="math-container">$\Gamma(\pi)$</span> so that <span class="math-container">$\gamma_0=\varphi$</span>, <span class="math-container">$\gamma'_0=\delta\varphi\in\Gamma_c(\varphi^*V\pi)$</span> and applying the chain rule (which, by the way, does hold in this setting). One then applies the standard variational formula <span class="math-container">$j^k\delta\varphi=\delta(j^k\varphi)$</span> (recall that <span class="math-container">$j^k\varphi$</span> only takes into account <em>base = horizontal</em> derivatives of sections, whereas <span class="math-container">$\delta\varphi$</span> is just a <em>fiber = vertical</em> derivative. The desired commutativity comes from local triviality of <span class="math-container">$\pi$</span> or, more generally, the implicit function theorem in the case of arbitrary fibered manifolds) together with integration by parts - the result is the Euler-Lagrange derivative of <span class="math-container">$\omega$</span> plus a sum of terms proportional to derivatives of positive order of the cutoff function <span class="math-container">$f$</span>. 
This latter sum yields the boundary terms in the (distributional) limit when <span class="math-container">$f$</span> becomes the characteristic function of a compact region <span class="math-container">$K$</span> with smooth boundary <span class="math-container">$\partial K$</span> - in this limit, <span class="math-container">$F$</span> becomes <span class="math-container">$F_K$</span> defined above.</p> <p>The functional formalism is genuinely more general than the jet bundle formalism also because it can handle a large class of <em>nonlocal</em> functionals. Allowing these is seen to yield better algebraic properties (closure under "pointwise" = "field-wise" products, etc.) than just considering local ones, which is convenient for field quantization at a later stage, among other things. Moreover, there is a very simple and elegant characterization of local functionals within the functional formalism which does not mention jet bundles anywhere - see e.g. Proposition 2.2, pp. 535-539 of the above paper.</p>
2,367,788
<p>Need to find how many of all 6-digit numbers have some digit repeated exactly 4 times.</p> <p>E.g. 111122 and 111123 are valid, but 111121 is not.</p>
Henno Brandsma
4,280
<p>Let's construct all such numbers, starting from the left: The first digit can be chosen in $9$ ways, as $0$ is forbidden.</p> <p>case 1: this is also the repeated digit: then we can pick $5 \choose 3$ remaining places where that same digit will appear. And finally we are left with picking out of $9$ remaining digits for the 2 remaining positions without restrictions ($9^2$).</p> <p>case 2: this is not the repeated digit: we have to pick $5 \choose 4$ remaining positions where the repeated digit (for which we have 9 choices left) has to appear. We are left with one position with a digit unequal to the repeated one (so 9 options).</p> <p>So I get $9({ 5 \choose 3}\cdot 9^2 + 9\cdot{5 \choose 4}\cdot 9)= 9^3(10+5)=10935$ options.</p>
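As a sanity check on the count above, a brute force over all 6-digit numbers (a quick Python sketch using only the standard library) gives the same total:

```python
from collections import Counter

def digit_repeated_four_times(n):
    # True iff some digit occurs exactly 4 times in the decimal form of n.
    # In a 6-digit number no two distinct digits can each appear 4 times,
    # so checking for a count of 4 is enough.
    return 4 in Counter(str(n)).values()

total = sum(digit_repeated_four_times(n) for n in range(100000, 1000000))
print(total)  # → 10935, matching the combinatorial count
```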
1,393,311
<blockquote> <p>Let $f:[0,1]\to[0,1]$ be a continuous function. Define $h:(0,1)\to[0,1]$ such that, $$h(x)=f(x)-\left\lfloor f(x)\right\rfloor$$Is $h$ continuous? Here $\left\lfloor x\right\rfloor$ is the <a href="http://mathworld.wolfram.com/FloorFunction.html" rel="nofollow">floor function</a>.</p> </blockquote> <p>This problem arose due to solving another problem in Real Analysis. Intuitively, it seems that $h$ is continuous but I can neither prove nor disprove it. Any help is appreciated.</p>
Dominik
259,493
<p>Yes, $A\setminus B^\circ = A \cap (B^\circ)^c$ is compact, since it is the intersection of a compact set and a closed set. You could even drop any assumptions about $B$, as long as $A$ is compact.</p>
2,826,327
<p>$$\lim_{x\rightarrow \infty }\left ( 1+\frac{3}{x+2} \right )^{3x-6}$$</p> <p>I've tried to factor and simplify the expression. I got:</p> <p>$${\left ( 1+\frac{3}{x+2} \right )^{\frac{1}{x+2}}}^{3({x^2-4})}$$</p> <p>Setting $x$ to $1/t$, I get:</p> <p>$${\left ( 1+\frac{3}{\frac{1}{t}+2} \right )^{\frac{1}{\frac{1}{t}+2}}}^{3 \left({\frac{1}{t}^2-4} \right)}$$</p> <p>then I am left with:</p> <p>$$\left ( e^{3} \right )^{3\left(\frac{1}{t^2}-4\right)}$$ which I get by using Euler's number.</p> <p>The answer is $e^9$, but clearly the answer I get is $(e^9)^{\text{expression}}$ which is not equal to the answer.</p>
Federico
327,744
<p>To get the result notice that $3x-6 = 9 \cdot \frac{x+2}{3} - 12$, so your limit is $$ \lim_{x \to +\infty} \left[ \left( 1 + \frac{1}{\frac{x+2}{3}} \right)^{\frac{x+2}{3}} \right]^9 \cdot \left( 1 + \frac{3}{x+2} \right)^{-12}. $$ The second term goes to $1$, while the term inside the brackets goes to $e$. Therefore, you get that the limit is equal to $e^9$.</p>
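This is not a proof, but a quick numeric sanity check (a Python sketch) is consistent with the value $e^9\approx 8103.08$:

```python
import math

def g(x):
    # the expression whose limit is sought
    return (1 + 3 / (x + 2)) ** (3 * x - 6)

# g(x) climbs toward e^9 from below as x grows
for x in (10, 1000, 10**7):
    print(x, g(x))
print(math.e ** 9)  # the claimed limit
```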
2,826,327
<p>$$\lim_{x\rightarrow \infty }\left ( 1+\frac{3}{x+2} \right )^{3x-6}$$</p> <p>I've tried to factor and simplify the expression. I got:</p> <p>$${\left ( 1+\frac{3}{x+2} \right )^{\frac{1}{x+2}}}^{3({x^2-4})}$$</p> <p>Setting $x$ to $1/t$, I get:</p> <p>$${\left ( 1+\frac{3}{\frac{1}{t}+2} \right )^{\frac{1}{\frac{1}{t}+2}}}^{3 \left({\frac{1}{t}^2-4} \right)}$$</p> <p>then I am left with:</p> <p>$$\left ( e^{3} \right )^{3\left(\frac{1}{t^2}-4\right)}$$ which I get by using Euler's number.</p> <p>The answer is $e^9$, but clearly the answer I get is $(e^9)^{\text{expression}}$ which is not equal to the answer.</p>
Doug M
317,162
<p>$\lim_\limits{x\to\infty}\left(1+\frac {3}{x+2}\right)^{3x-6}$</p> <p>$y = x+2$</p> <p>$\lim_\limits{y\to\infty}\left(1+\frac {3}{y}\right)^{3y-12}\\ \lim_\limits{y\to\infty}\left(1+\frac {3}{y}\right)^{3y}\lim_\limits{y\to\infty}\left(1+\frac {3}{y}\right)^{-12}$</p> <p>Let's attack these separately: $\lim_\limits{y\to\infty}\left(1+\frac {3}{y}\right)^{-12} = 1$</p> <p>As $y$ gets to be large, $\frac {3}{y}$ becomes effectively $0,$ and the limit goes to $1$.</p> <p>$\lim_\limits{y\to\infty}\left(\left(1+\frac {3}{y}\right)^{y}\right)^3\\ \lim_\limits{y\to\infty}\left(1+\frac {3}{y}\right)^{y} = e^3$</p> <p>$\lim_\limits{y\to\infty}\left(\left(1+\frac {3}{y}\right)^{y}\right)^3 = (e^3)^3 = e^9$</p>
1,163,001
<p>I need help to prove this congruence: </p> <p>$$ 3^n -4(2^n) + 6(1^n) + (-1)^n \equiv 0 \pmod {24} $$</p> <p>I have tried to used Euler's Theorem on the powers of 2 and 3 individually but now I'm stuck.</p>
Brian M. Scott
12,042
<p>As revised, it's still not true for $n=0$. However, it is true for all positive integers $n$, so assume that $n\ge 1$. </p> <p>HINT: Try proving by induction that $3^n\equiv 3\pmod{24}$ when $n$ is odd, and $3^n\equiv 9\pmod{24}$ when $n$ is even. Then discover and prove a similar result for $4(2^n)$. Then put the pieces together.</p>
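A quick computational check of the claim (a Python sketch), including the failure at $n=0$:

```python
def lhs(n):
    # 3^n - 4*2^n + 6*1^n + (-1)^n
    return 3**n - 4 * 2**n + 6 * 1**n + (-1)**n

print(lhs(0) % 24)                                   # → 4: the congruence fails at n = 0
print(all(lhs(n) % 24 == 0 for n in range(1, 300)))  # → True for positive n
```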
1,691,009
<p>I am struggling with a particular concept; I'll lay it out in the way I think will best allow for an answer:</p> <p>Let $A =$ $\Bbb Z$ be the set of all integers and the universe of discourse. </p> <pre><code>Let B be the set of even numbers Let C be the set of odd numbers Let D be the set of positive numbers Let E be the set of negative numbers </code></pre> <p>Now I would like to be able to express the following sets in words, to help my understanding of the topic:</p> <p>A) $(A-B)$</p> <p>B) $ C \cap D$</p> <p>C) $\overline{(D \cup B)}$ </p> <p>Any help expressing these sets in 'words' would be great; it has me stumped. </p> <p>Thanks!</p>
gordon sung
249,633
<p>1) $(A-B)$ is $\{x\in \mathbb Z \mid x\in A \wedge x \notin B\} = \{x\in \mathbb Z \mid x\in A \wedge x \notin 2\mathbb Z\}$ </p> <p>This means the elements of the set are exactly the odd integers.</p> <p>2) $C \cap D$ means integers that are both positive and odd $\iff$ positive odd integers</p> <p>3) $\overline{(D \cup B)} = \overline{D} \cap \overline{B}$ means numbers that are negative odd integers.</p>
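These descriptions can be checked concretely with Python sets over a finite window of $\mathbb Z$ (a sketch; the window $[-20,20]$ is an arbitrary stand-in for the infinite universe):

```python
A = set(range(-20, 21))            # finite stand-in for Z
B = {x for x in A if x % 2 == 0}   # even
C = {x for x in A if x % 2 != 0}   # odd
D = {x for x in A if x > 0}        # positive
E = {x for x in A if x < 0}        # negative

print(A - B == C)                              # A \ B: the odd integers
print(C & D == {x for x in C if x > 0})        # positive odd integers
print(A - (D | B) == {x for x in C if x < 0})  # complement of D ∪ B: negative odd
```

Note that $0$ is even, so the complement of $D \cup B$ contains only strictly negative odd integers.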
711,922
<p>I know there are well-ordered sets that are not countable.</p> <p>Suppose you are given an uncountable, well-ordered set $S$.</p> <p>Isn't it possible to provide a bijection $f:\mathbb{N} \rightarrow S$ as following?</p> <p>$S$ is well-ordered, so it has the smallest element, say $s_1$. $S \setminus$ {$s_1$} is also well-ordered, so there is the next smallest element, $s_2$, Similarly, there is the next smallest element $s_3$, and so on.</p> <p>Continuing like this, define $f(i) = s_i$. </p> <p>I know there is something wrong with this, but I cannot really see why...</p>
5xum
112,884
<blockquote> <p>Suppose you are given an uncountable, well-ordered set $S$.</p> <p>Isn't it possible to provide a bijection $f:\mathbb{N} \rightarrow S$</p> </blockquote> <p>No. $\mathbb N$ is countable. You will end up with a set of elements $\{s_1,s_2,s_3,\dots\}$, but there will exist elements you have not covered. There is nothing in your definition that demands you cover all of the elements.</p> <p>An example (not a well ordered set, I know, but may still illustrate my point) is if you look at the set $$S=\left\{\frac12, \frac23, \frac34, \dots, \frac{n}{n+1},\dots\right\}\cup[1,2]$$</p> <p>The procedure you described works on $S$, although $S$ is not well ordered. It takes $\frac12 = s_1$ as it is the least element. Then it takes $s_2=\frac23$ and so on. It produces $s_i = \frac{i}{i+1}$ which is an injection from $\mathbb N$ to $S$, but it does not cover the whole $S$.</p> <p><strong>Edit:</strong> In fact, you can even take the set $$T=\left\{\frac12, \frac23, \frac34, \dots, \frac{n}{n+1},\dots\right\}\cup\{1\},$$ which <strong>is</strong> well ordered and is even <em>countable</em>, but your procedure still does not produce a bijection from $\mathbb N$ to $T$.</p>
516,501
<p>Find the limit: $x_n=\dfrac{1+\frac12+...+\frac1{2^n}}{1+\frac14+...+\frac1{4^n}}$</p> <p>as $n \rightarrow \infty$.</p> <p>My "intuition" says that it should be $\frac34$ but I don't know how to prove it rigorously.</p>
drhab
75,923
<p>Hint: have a look at $\frac{1+\frac{1}{2}+\cdots+\frac{1}{2^{n}}}{1+\frac{1}{4}+\cdots+\frac{1}{4^{n}}}\times\frac{2-1}{4-1}=\frac{\left(1+\frac{1}{2}+\cdots+\frac{1}{2^{n}}\right)\left(2-1\right)}{\left(1+\frac{1}{4}+\cdots+\frac{1}{4^{n}}\right)\left(4-1\right)}$ You will 'lose' quite some terms.</p>
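Following the hint, the numerator telescopes to $2-\frac{1}{2^n}$ and the denominator times $3$ telescopes to $4-\frac{1}{4^n}$, so the limit works out to $3\cdot\frac{2}{4}=\frac32$ rather than $\frac34$. A quick numeric check (Python sketch):

```python
def x_n(n):
    # partial sums of the two geometric series, then their ratio
    num = sum(0.5 ** k for k in range(n + 1))
    den = sum(0.25 ** k for k in range(n + 1))
    return num / den

print([x_n(n) for n in (1, 5, 20, 60)])  # the values approach 1.5
```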
3,133,798
<blockquote> <p>Find an asymptotic expansion at order <span class="math-container">$6$</span> of <span class="math-container">$f(x) = \int_x^{x^2} \frac{\mathrm{d}t}{\sqrt{1+t^4}}$</span></p> </blockquote> <p>I don't know how to proceed. I think I need to do a change of variable, yet I don't know which one. I tried <span class="math-container">$u = t/x$</span> yet it doesn't seem to work...</p> <p>Thank you!</p>
marty cohen
13,079
<p>You want <span class="math-container">$f(x) = \int_x^{x^2} \frac{\mathrm{d}t}{\sqrt{1+t^4}} $</span> to order 6.</p> <p>Being as general as possible, let <span class="math-container">$f(x) = \int_{x^a}^{x^b} (1+t^u)^v dt $</span>.</p> <p>Then, using the generalized binomial theorem,</p> <p><span class="math-container">$\begin{array}\\ f(x) &amp;= \int_{x^a}^{x^b} (1+t^u)^v dt\\ &amp;= \int_{x^a}^{x^b} \sum_{n=0}^{\infty} \binom{v}{n}t^{un}dt\\ &amp;= \sum_{n=0}^{\infty} \binom{v}{n}\int_{x^a}^{x^b} t^{un}dt\\ &amp;= \sum_{n=0}^{\infty} \binom{v}{n}\dfrac{t^{un+1}}{un+1}|_{x^a}^{x^b} \\ &amp;= \sum_{n=0}^{\infty} \binom{v}{n}\dfrac{(x^b)^{un+1}-(x^a)^{un+1}}{un+1} \\ &amp;= \sum_{n=0}^{\infty} \binom{v}{n}\dfrac{x^{b(un+1)}-x^{a(un+1)}}{un+1} \\ \end{array} $</span></p> <p>Then take as many terms as you want.</p>
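As a sanity check on the general term formula (a Python sketch with the original question's parameters $a=1$, $b=2$, $u=4$, $v=-\tfrac12$, and an arbitrarily chosen small $x$ so that the binomial series converges on the whole integration range), the truncated series can be compared against a crude midpoint quadrature:

```python
def gen_binom(v, n):
    # generalized binomial coefficient binom(v, n) for real v
    out = 1.0
    for i in range(n):
        out *= (v - i) / (i + 1)
    return out

def series(x, a=1, b=2, u=4, v=-0.5, terms=12):
    # truncated version of the term-by-term integrated series above
    return sum(gen_binom(v, n)
               * (x ** (b * (u * n + 1)) - x ** (a * (u * n + 1))) / (u * n + 1)
               for n in range(terms))

def midpoint(x, a=1, b=2, u=4, v=-0.5, steps=20000):
    # crude midpoint quadrature of (1 + t^u)^v from x^a to x^b
    lo, hi = x ** a, x ** b
    h = (hi - lo) / steps
    return h * sum((1 + (lo + (i + 0.5) * h) ** u) ** v for i in range(steps))

print(series(0.5), midpoint(0.5))  # the two values agree closely
```

The interchange of sum and integral is only justified where $|t^u|<1$, which is why the check uses a small $x$.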
618,288
<p>I know it's been answered before (at least for the case with $n$ different eigenvalues) but I didn't find a proof for the general case, and I would like some help with this question.</p> <p>We are given linear transforms $S,T: V\to V$ where $V$ is some vector space.</p> <p>We are given that $S$ and $T$ commute, $ST=TS$, and that they are diagonalizable:</p> <p>$T=PD_1P^{-1}$ and $S=KD_2K^{-1}$, where $D_1, D_2$ are diagonal and $K,P$ are invertible.</p> <p>We are asked to show that $S$ and $T$ have a common eigenspace.</p> <p><strong>My solution</strong></p> <p>Maybe I understood the question wrong, but what I tried to do is show that if $v$ is an eigenvector of $S$ then it is also an eigenvector of $T$.</p> <p>Let $Sv=\lambda v$.</p> <p>$STv=TSv=T\lambda v=\lambda Tv$, which implies that $Tv$ is an eigenvector of $S$ with eigenvalue $\lambda$.</p> <p>Why does that mean that $v$ is an eigenvector of $T$?</p> <p>Another possible way to solve this question is to write:</p> <p>$PD_1P^{-1}KD_2K^{-1} = KD_2K^{-1}PD_1P^{-1}$ and get that $P=K$, but I don't know how to do that either.</p>
user44197
117,158
<p>You are almost there...</p> <p>You have shown that the eigenspace of $S$ corresponding to $\lambda$ is invariant under $T$. If $\lambda$ is an eigenvalue of multiplicity 1, then $Tv$ is a multiple of $v$, and hence $v$ is an eigenvector of $T$.</p> <p>If this is not the case, all you can say is the invariance of the eigenspaces. Take for example $S=I$ the identity matrix. Every vector is an eigenvector of $S$, but this is not true for a general $T$.</p> <p>The result is <strong>not</strong> true in general.</p>
618,288
<p>I know it's been answered before (at least for the case with $n$ different eigenvalues) but I didn't find a proof for the general case, and I would like some help with this question.</p> <p>We are given linear transforms $S,T: V\to V$ where $V$ is some vector space.</p> <p>We are given that $S$ and $T$ commute, $ST=TS$, and that they are diagonalizable:</p> <p>$T=PD_1P^{-1}$ and $S=KD_2K^{-1}$, where $D_1, D_2$ are diagonal and $K,P$ are invertible.</p> <p>We are asked to show that $S$ and $T$ have a common eigenspace.</p> <p><strong>My solution</strong></p> <p>Maybe I understood the question wrong, but what I tried to do is show that if $v$ is an eigenvector of $S$ then it is also an eigenvector of $T$.</p> <p>Let $Sv=\lambda v$.</p> <p>$STv=TSv=T\lambda v=\lambda Tv$, which implies that $Tv$ is an eigenvector of $S$ with eigenvalue $\lambda$.</p> <p>Why does that mean that $v$ is an eigenvector of $T$?</p> <p>Another possible way to solve this question is to write:</p> <p>$PD_1P^{-1}KD_2K^{-1} = KD_2K^{-1}PD_1P^{-1}$ and get that $P=K$, but I don't know how to do that either.</p>
Community
-1
<p>The known result for two (or more) diagonalizable matrices that commute is that they are simultaneously diagonalizable, that is, diagonalizable in the same basis. Let's prove it by induction on the dimension $\dim E$ (the result is trivial if $\dim E=1$):</p> <p>If $S$ or $T$ is a homothety then the result is obvious. Now assume that neither $S$ nor $T$ is a homothety. Since $S$ is diagonalizable, $$E=\bigoplus_{\lambda\in\mathrm{sp}(S)}E_\lambda(S)$$ where $E_\lambda(S)$ is the eigenspace of $S$ associated to the eigenvalue $\lambda$. Since $S$ isn't a homothety, $$\forall\lambda\in \mathrm{sp}(S)\;\;\dim E_\lambda(S)\le\dim E-1,$$ and since $ST=TS$, each $E_\lambda(S)$ is invariant under $T$. Let $T'=T_{| E_\lambda(S)}$ be the restriction of $T$ to $ E_\lambda(S)$; by the induction hypothesis, there's a basis $B_\lambda$ of $ E_\lambda(S)$ in which $T'$ is diagonal (and $S$ acts there as the scalar $\lambda$, so it is automatically diagonal). In the basis $B=\bigcup_\lambda B_\lambda$ the two matrices $S$ and $T$ are both diagonal.</p>
618,288
<p>I know it's been answered before (at least for the case with $n$ different eigenvalues) but I didn't find a proof for the general case, and I would like some help with this question.</p> <p>We are given linear transforms $S,T: V\to V$ where $V$ is some vector space.</p> <p>We are given that $S$ and $T$ commute, $ST=TS$, and that they are diagonalizable:</p> <p>$T=PD_1P^{-1}$ and $S=KD_2K^{-1}$, where $D_1, D_2$ are diagonal and $K,P$ are invertible.</p> <p>We are asked to show that $S$ and $T$ have a common eigenspace.</p> <p><strong>My solution</strong></p> <p>Maybe I understood the question wrong, but what I tried to do is show that if $v$ is an eigenvector of $S$ then it is also an eigenvector of $T$.</p> <p>Let $Sv=\lambda v$.</p> <p>$STv=TSv=T\lambda v=\lambda Tv$, which implies that $Tv$ is an eigenvector of $S$ with eigenvalue $\lambda$.</p> <p>Why does that mean that $v$ is an eigenvector of $T$?</p> <p>Another possible way to solve this question is to write:</p> <p>$PD_1P^{-1}KD_2K^{-1} = KD_2K^{-1}PD_1P^{-1}$ and get that $P=K$, but I don't know how to do that either.</p>
Marc van Leeuwen
18,880
<p>The result is <strong>not true</strong>. The linear transformations of a space of dimension $3$ that, with respect to a fixed basis, have diagonal matrices with diagonal entries $(0,0,1)$ and $(0,1,1)$ respectively satisfy all the hypotheses, but they do not have an eigenspace in common.</p>
863,375
<p>$f(t) = t^3 -t^2 +t + 7$.</p> <p>Just made it up, but looking through previous tests, they come up a lot when trying to find eigenvalues. How would I easily factorize this or set it equal to $0$?</p> <p>Wow, thanks for the quick and thorough answers. </p> <p>The one I'm looking at is $t^3 -t^2 -t +1$</p> <p>What is the fastest way to mathematically prove $1$ and $-1$ are eigenvalues? (Hopefully without the cubic equation :)</p>
Dennis Gulko
6,948
<p>The only general way is the Cardano-Tartaglia formula: <a href="http://en.m.wikipedia.org/wiki/Cubic_function" rel="nofollow">http://en.m.wikipedia.org/wiki/Cubic_function</a> </p> <p>In most exam questions, it does not come to that. You just have to 'guess' one root. You can do that by knowing that if a rational number $\frac ab$ (in lowest terms) is a root of $a_nx^n +... +a_0$ then $b$ divides $a_n$ and $a$ divides $a_0$.<br> In this case, you have to check only $\pm1,\pm7$. If none of them is a root - then there are no rational roots.<br> If you successfully found a root $\alpha$, just divide by $x-\alpha$ to get a degree $2$ polynomial. </p>
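The root-guessing recipe can be sketched mechanically in Python (standard library only; `rational_roots` is a hypothetical helper name, not a library function):

```python
from fractions import Fraction

def rational_roots(coeffs):
    # coeffs = [a_n, ..., a_1, a_0] with a_n, a_0 nonzero integers.
    # By the rational root theorem, candidates p/q (lowest terms) have
    # p dividing a_0 and q dividing a_n.
    def divisors(m):
        m = abs(m)
        return [d for d in range(1, m + 1) if m % d == 0]
    def value(x):
        out = Fraction(0)
        for c in coeffs:          # Horner's rule, exact arithmetic
            out = out * x + c
        return out
    candidates = {Fraction(s * p, q)
                  for p in divisors(coeffs[-1])
                  for q in divisors(coeffs[0])
                  for s in (1, -1)}
    return sorted(r for r in candidates if value(r) == 0)

print(rational_roots([1, -1, -1, 1]))  # t^3 - t^2 - t + 1: roots -1 and 1
print(rational_roots([1, -1, 1, 7]))   # t^3 - t^2 + t + 7: no rational roots
```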
863,375
<p>$f(t) = t^3 -t^2 +t + 7$.</p> <p>Just made it up, but looking through previous tests, they come up a lot when trying to find eigenvalues. How would I easily factorize this or set it equal to $0$?</p> <p>Wow, thanks for the quick and thorough answers. </p> <p>The one I'm looking at is $t^3 -t^2 -t +1$</p> <p>What is the fastest way to mathematically prove $1$ and $-1$ are eigenvalues? (Hopefully without the cubic equation :)</p>
Peter Woolfitt
145,826
<p>The example you have given only has $1$ real root and that is </p> <p>$$t = \frac{1}{3}\left[1-\frac{2^\frac{2}{3}}{(3 \sqrt{267}-49)^\frac{1}{3}}+(2 (3 \sqrt{267}-49))^\frac{1}{3}\right]$$</p> <p>as <a href="http://www.wolframalpha.com/input/?i=t%5E3-t%5E2%2Bt%2B7" rel="nofollow">computed by Wolfram|Alpha</a>. In general you can find the roots of a cubic using the <a href="http://en.wikipedia.org/wiki/Cubic_function#General_formula_for_roots" rel="nofollow">cubic equation</a>.</p> <p>Doesn't the above look awful? If you were to pick a random cubic polynomial then there would likely be no recourse but to apply the cubic formula - but the cubics you are given aren't random. Someone has to write those questions and they don't want you to have to solve really difficult equations. In every case I've seen for schoolwork (unless someone has made a mistake), at least one of the eigenvalues has been rational - for which you can apply the <a href="http://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow">rational root theorem</a> - but more often a root is simply guessable. In most cases checking the function at small integer values yields a root. Once you have a single root $r$ (and assuming you haven't been able to guess the others) you can divide your polynomial by $x-r$ and then apply the quadratic formula to find the remaining roots.</p>
3,583,600
<blockquote> <p>Define a branch for <span class="math-container">$\sqrt{1+\sqrt{z}}$</span> and show it is analytic.</p> </blockquote> <p>I defined a branch <span class="math-container">$(-\pi, \pi)$</span>, and so that means the function <span class="math-container">$\sqrt{1+\sqrt{z}}$</span> is analytic on <span class="math-container">$\mathbb{C}\setminus \left\{y=0,x\leq 0\right\}$</span>.</p> <p>I am trying to analyze when the expression inside the outer square root lands in the deleted set. Already from <span class="math-container">$\sqrt{z}$</span>, <span class="math-container">$y$</span> must be zero. It remains to consider the case when <span class="math-container">$x\leq 0$</span>. Since there's an added <span class="math-container">$+1$</span> inside the square root, does this change <span class="math-container">$x\leq 0$</span> to <span class="math-container">$x\leq -1$</span>? And so the analytic domain is <span class="math-container">$\mathbb{C}\setminus \left\{y=0,x\leq -1\right\}$</span>?</p>
Kavi Rama Murthy
142,385
<p>For the principal branch, <span class="math-container">$\sqrt z$</span> maps the cut plane <span class="math-container">$\mathbb{C}\setminus(-\infty,0]$</span> into the open right half-plane, i.e. <span class="math-container">$\operatorname{Re}\sqrt z&gt;0$</span> there. Hence <span class="math-container">$\operatorname{Re}(1+\sqrt z)&gt;1$</span>, so <span class="math-container">$1+\sqrt z$</span> never lies on the cut <span class="math-container">$(-\infty,0]$</span> of the outer square root when <span class="math-container">$z$</span> does not lie on the negative real axis. Hence <span class="math-container">$\sqrt {1+\sqrt z }$</span> is well defined and analytic on the complex plane with <span class="math-container">$(-\infty, 0]$</span> removed. </p>
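The key fact, that the principal square root keeps $1+\sqrt z$ away from the cut, can be probed numerically with Python's `cmath` (whose `sqrt` is the principal branch, with cut along the negative real axis); this is a sanity-check sketch, not a proof:

```python
import cmath

# Grid of sample points in C \ (-inf, 0]
pts = [complex(x, y) for x in range(-40, 41) for y in range(-40, 41)
       if not (y == 0 and x <= 0)]

# Re sqrt(z) > 0 off the cut, so Re(1 + sqrt(z)) > 1 at every sample point
ok = all((1 + cmath.sqrt(z)).real > 1 for z in pts)
print(ok)
```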
237,464
<p>Let $-\frac{1}{2}\le a \le\frac{1}{2}$ and $b\in[0,\infty)$.</p> <p>Definitions: $$f_k(a;b):=\frac{(2k+\frac{1}{2}+a)^2+b}{(2k+\frac{1}{2}-a)^2+b}(\frac{k}{k+1})^{2a},$$ $$f(a;b):=\prod\limits_{k=1}^\infty f_k(a;b)$$ QUESTIONS: </p> <p>(1) Does $f(a;b)=1$ have any solution with $a\neq 0$?</p> <p>(2) If yes: single points $(a;b)$ or regions? </p> <p>Thank you very much!</p> <p>EDIT: Have changed $(\frac{k}{k+1})^a$ to $(\frac{k}{k+1})^{2a}$. It was a mistake.</p> <p>2nd EDIT: It seems to be $f(a,b)&lt;\pi^{2a}$ for $a&gt;0$, at least for e.g. $b&gt;2$. Correct? </p>
Iosif Pinelis
36,721
<p>Note that for any $a\in[-\frac12,\frac12]\setminus\{0\}$ and $b\in[0,\infty)$ one has $f_k(a;b)-1\sim a/k$ as $k\to\infty$. So, the product diverges to $0$ if $a&lt;0$ and to $\infty$ if $a&gt;0$. Thus, the equation $f(a;b)=1$ has no solutions with $a\neq 0$. </p> <hr> <p>Details on $f_k(a;b)-1\sim a/k$ as $k\to\infty$: \begin{equation} f_k(a;b)=\Big(1+\frac{2a(4k+1)}{(2k+\frac12-a)^2+b}\Big) \Big(1-\frac1{k+1}\Big)^a \end{equation}<br> \begin{equation} =\Big(1+\frac{(2+o(1))a}k\Big)\Big(1-\frac{(1+o(1))a}k\Big)=1+\frac{(1+o(1))a}k. \end{equation}</p>
289,757
<p>I am writing this as I am currently an intern at an aircraft manufacturer. I am studying a mixture of engineering and applied math. During the semester I focussed on numerical courses and my applied field is CFD. Even though every mathematician would say I have not taken a lot of math, for myself I would say that I get the "most amount of math" you can get while not studying math.</p> <p>In my courses I have done deep theoretical analysis of numerical concepts and their application in CFD. But currently I am starting to wonder how much, e.g., the Calculus of Variations course really helps me in my future career. The theory you learn at university seems to get only a little application in the <em>real world</em>. </p> <p>Example: In my numerics for PDE class I spent (wasted?) so many hours trying to figure out the CFL number of certain schemes, but what I am doing right now has nothing to do with that. <em>Oh, your simulation diverges? Well, let's take 2 instead of 4 as our CFL number.</em> Furthermore, I am not really programming as I hoped I would, but rather scripting. Fact is, 99 out of 100 people are not going to program a CFD solver. You rather use the code and apply it to your needs.</p> <p>I am aware that university always follows a far more theoretical path than industry, but I am actually disappointed how little math I am really doing. Okay, you might say that's because I am an intern, and of course you are right. But I am in the lucky situation that my team comes really close to research. Most of the members hold a PhD and studied engineering or math, and the focus is definitely on research (in this department of the company). But if the amount of math is that small in such an environment, where are you really able to make use of what you have learned at university?</p> <p>So here comes my question:</p> <blockquote> <p>How much math are you actually doing at your job? 
And I don't mean how much math helps you to understand things, but how often does it happen that you sit down and really <strong>do math</strong> in your non-academic job?</p> </blockquote> <p>Personally I get the impression that I could do the exact same work without having taken most of my courses. Don't get me wrong, I really enjoy the theory, but currently I am rather frustrated.</p> <p>Note: As this is my first question, I hope I did not screw up completely. I did not find similar questions on this site. And feel free to edit or ask questions if things are not clear.</p>
Laura Balzano
60,172
<p>Before I was in academics, I was a software engineer for a signal processing company. I also didn't "do" a lot of math. However, I had to know how to incorporate pre-written mathematical algorithms into my own code, and I had to use my strong logic to write and debug good code. </p> <p>Don't discount the fact that you know math and use it to understand things. Knowing when an automated result from your software package seems fishy is way more important than being able to sit down and do integration by parts again. People without strong math and logic skills will see "p&lt;.5" from Stata and count their result statistically significant without thinking twice. </p> <p>Just like D Seita said, in most jobs you are unlikely to really use exactly what you learned in school. But school should have taught you how to think critically and how to learn new things that come your way. If some academic somewhere somehow figures out how to apply reproducing kernel Hilbert spaces to fluid dynamics, you'll be right on top of that!</p> <p>Now, if what you really want is to do integration by parts, then come back to academics :) </p>
289,757
<p>I am writing this, as I am a currently an intern at an aircraft manufactur. I am studying a mixture of engineering and applied math. During the semester I focussed on numerical courses and my applied field is CFD. Even though every mathematician would say I have not heard a lot of math, for myself I would say that I get the "most amount of math" you can get while not studying math.</p> <p>In my courses I have done deep theoretical analysis for numerical concepts and application in CFD. But currently I am starting to wonder, how much the e.g. Calculus of Variation course really helps me in my future career. The theory you learn at university seems to get only a little application in the <em>real word</em>. </p> <p>Example: In my numerics for PDE class I have spent (wasted?) so many hours on trying to figure out the CFL number of certain schemes, but what I am doing right now has nothing to do with that. <em>Oh your simulation does diverge? Well let's take 2 instead of 4 as our CFL number.</em> Furthermore, I am not really programming stuff as I hoped I could, but I am rather scripting. Fact is, 99 out of 100 people are not going to program a CFD solver. You rather use the code and apply it to your needs.</p> <p>I am aware that university always follows a way more theoretical path than industry, but I am actually disappointed how little math I am really doing. Okay you might, say that's due to the fact that I am an intern and of course you are right. But I am in the lucky situation, that my team comes really close to research. Most of the members hold a PhD and studied engineering or math, and the focus is definetely on research ( in this departure of the company). But if the amount of math is that small in such an environment, where are you really able to make use of what you have learned at university.</p> <p>So here comes my question</p> <blockquote> <p>How much math are you actually doing at your job? 
And I don't mean how much math is helping you to understand things, but how often does it happen that you sit down and really <strong>do math</strong> in your non-academic job?</p> </blockquote> <p>Personally, I get the impression that I could do the exact same work without having heard most of my courses. Don't get me wrong, I really enjoy the theory, but currently I am rather frustrated.</p> <p>Note: As this is my first question, I hope I did not screw up completely. I did not find similar questions on this site. And feel free to edit or ask questions if things are not clear.</p>
Michaela Light
32,656
<p>Math has been useful to me in the software business for </p> <ul> <li>the ability to organize complex systems analysis into parts (aka modules or lemmas) </li> <li>to be able to keep working on a very complex problem when the end is not in sight</li> <li>to have intuition when results don't look right</li> <li>to be very careful to get every single character in a program right (neither computers nor proofs like typos)</li> <li>to lay out code (or proofs) so that it is easy for me or others to read and check</li> <li>to learn how to learn by myself (programmers and mathematicians are always learning new ideas)</li> <li>how to debug or find errors in proofs</li> <li>to quickly pick up new syntax or notation, and to be able to create my own notation that is both expressive and powerful</li> <li>to be able to focus single-mindedly on a problem for long periods of time</li> </ul>
2,447,850
<p>So I have to prove 2 things:</p> <ol> <li><p>That $\lim\limits_{n \rightarrow \infty}\frac{x^n}{n!} = 0$ where $n \in \mathbb N$ and $x \in \mathbb R, x&gt;0$. </p></li> <li><p>That $\lim\limits_{n \rightarrow \infty}\frac{x^n}{n!} = 0$ where $n \in \mathbb N$ and $x \in \mathbb R$. </p></li> </ol> <p>For #1, I know that $\frac{x^n}{n!} &gt;0$, which means that I can find an upper bound and use the squeeze theorem. For #2, I have no idea where to start.</p>
Peter Szilas
408,605
<p>A little bit of support for Koto:</p> <p>For every $x \in \mathbb{R}$ the exponential series </p> <p>$\star)$ $\exp(x): = \sum_{n=0}^{\infty}\dfrac{x^n}{n!}$ converges absolutely.</p> <p>Hence: $\lim_{n \rightarrow \infty} |\dfrac{x^n}{n!}| = 0$ for $x \in \mathbb{R}$.</p> <p>Proof of $\star$):</p> <p>Ratio test: </p> <p>With $a_n: = \dfrac{x^n}{n!}$ we get for </p> <p>$x\ne 0$ and $n\ge 2|x|$:</p> <p>$|\dfrac{a_{n+1}}{a_n}| =$</p> <p>$ |\dfrac{x^{n+1}}{(n+1)!}\dfrac{n!}{x^n}|=$</p> <p>$\dfrac{|x|}{n+1}\le 1/2$. </p>
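A quick numerical illustration of the same fact (my addition, not part of the proof): once $n$ passes $2|x|$, each term is at most half the previous one, so the terms collapse rapidly.

```python
import math

x = 10  # any fixed x > 0; 10 makes the initial growth of the terms visible

def term(n: int) -> float:
    """The n-th term x^n / n! (exact integer arithmetic, one final division)."""
    return x**n / math.factorial(n)

# Terms first grow while n < x, then the factorial crushes them:
samples = {n: term(n) for n in (1, 10, 25, 50, 100)}

# For n >= 2x the ratio of successive terms is x/(n+1) <= 1/2,
# which is exactly the ratio-test bound used above:
ratios = [term(n + 1) / term(n) for n in range(2 * x, 2 * x + 5)]
```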
3,741,122
<p>Recently I've tried to find the difference between partial differentiation and total differentiation. I've heard the total derivative is defined on single-variable functions, while the partial derivative by contrast is defined on multivariate functions. My problem is that total differentiation is used on multivariate functions all the time.</p> <p>Every time I come up with a rigorous definition I arrive at a contradiction. I will share what I have defined so far, and hopefully you can enlighten me.</p> <p>Let</p> <p><span class="math-container">$$f: (x_1, ... , x_n) \rightarrow f(x_1, ..., x_n)$$</span></p> <p>and its partial derivative via the difference quotient</p> <p><span class="math-container">$$\frac{\partial f}{\partial x_i} = \lim_{h \to 0} \frac{f(x_1,..,x_i+h,...x_n)- f(x_1,..., x_n)}{h}$$</span></p> <p>the total derivative must by contrast account for interdependence between the <span class="math-container">$x_k$</span> in the domain of $f$.</p> <p><span class="math-container">$$\frac{df}{dx_i}\stackrel{?}{=} \sum_k{\frac{\partial f}{\partial x_k} \frac{\partial x_k}{\partial x_i}}$$</span></p> <p>This seemed sensible to me, until I realized it simplified to</p> <p><span class="math-container">$$n \frac{\partial f}{\partial x_i}$$</span></p> <p>which definitely isn't right.</p> <p>Can someone tell me where I've made an error? Or provide a better definition? This issue really annoys me, since all my research so far didn't answer this question at all.</p> <p>Edit: Ok, thank you for all the responses! 
I'm just writing out the final formula for total derivatives for quick lookup now: <span class="math-container">$\frac{d}{d x_i}$</span> is defined recursively as <span class="math-container">$$\frac{df}{dx_i}\stackrel{!}{=} \sum_k{\frac{\partial f}{\partial x_k} \frac{d x_k}{d x_i}}$$</span></p> <p>until <span class="math-container">$x_k$</span> has a domain without interdependence, in which case <span class="math-container">$\frac{\partial x_k}{\partial x_i}$</span> = <span class="math-container">$\frac{d x_k}{d x_i}$</span> and the entire expression can be calculated by limits.</p>
Christian Blatter
1,303
<p>In linear algebra you learn about linear transformations <span class="math-container">$A:\&gt;{\mathbb R}^n\to{\mathbb R}^m$</span>. This <span class="math-container">$A$</span> is a single letter, but the full information content of <span class="math-container">$A$</span> is encased in an <span class="math-container">$m\times n$</span> matrix <span class="math-container">$\bigl[A_{ik}\bigr]$</span> of real numbers. I hope you don't say &quot;I never know when to speak of <span class="math-container">$A$</span>, and when of <span class="math-container">$A_{21}$</span>&quot;. In the cases <span class="math-container">$m=1$</span>, <span class="math-container">$n=1$</span>, <span class="math-container">$m=n=1$</span> the matrix <span class="math-container">$\bigl[A_{ik}\bigr]$</span> is a row vector, a column vector, a single number.</p> <p>If you now have a function <span class="math-container">$f:\&gt;{\mathbb R}^n\to{\mathbb R}^m$</span> then its <em>derivative</em> <span class="math-container">$df(p)$</span> at a certain point <span class="math-container">$p$</span> in the domain of <span class="math-container">$f$</span> is a linear map <span class="math-container">$A$</span> as described above. This map can be used to approximate <span class="math-container">$f$</span> in the neighborhood of <span class="math-container">$p$</span>, and for other things, as explained in Calculus 102. The map <span class="math-container">$A$</span> has its matrix <span class="math-container">$\bigl[A_{ik}\bigr]$</span>, and it turns out that <span class="math-container">$$A_{ik}={\partial f_i\over\partial x_k}(p)\ ,$$</span> whereby the <em>partial derivatives</em> <span class="math-container">${\partial f_i\over\partial x_k}(p)$</span> can be calculated as limits. 
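To see the concluding chain rule in action, here is a small numerical check with an example of my own choosing ($f(x,y)=x^2y$ and $y(t)=\sin t$, not from the answer): the chain-rule expression matches a central finite difference of $\phi(t)=f\bigl(t,y(t)\bigr)$.

```python
import math

# Illustrative example (my choice): f(x, y) = x^2 * y with inner curve y(t) = sin t
f  = lambda x, y: x * x * y
fx = lambda x, y: 2 * x * y   # partial derivative of f with respect to x
fy = lambda x, y: x * x       # partial derivative of f with respect to y

def phi(t):
    return f(t, math.sin(t))

def phi_prime_chain(t):
    # phi'(t) = f_x(t, y(t)) * 1 + f_y(t, y(t)) * y'(t), as in the answer
    return fx(t, math.sin(t)) + fy(t, math.sin(t)) * math.cos(t)

def phi_prime_numeric(t, h=1e-6):
    # central finite difference, for comparison
    return (phi(t + h) - phi(t - h)) / (2 * h)
```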
In particular, when <span class="math-container">$m=1$</span> then <span class="math-container">$f$</span> is real valued, and <span class="math-container">$\bigl[A_{ik}\bigr]$</span> is a row vector <span class="math-container">$$\bigl[A_1 \ A_2\ \ldots\ A_n\bigr]=\left({\partial f\over\partial x_1},{\partial f\over\partial x_2},\ \ldots,\ {\partial f\over\partial x_n}\right)_p=:\nabla f(p)\ .$$</span> Expressions of the form <span class="math-container">$$\phi(x):=f\bigl(x,y(x)\bigr)$$</span> are a sad story, and should be avoided. Here the letter <span class="math-container">$x$</span> is used for two different things, namely as coordinate variable for the outer function <span class="math-container">$f$</span> and also as single variable of the composed function <span class="math-container">$\phi$</span>. When studying rules of differentiation you should say: I have a vector valued inner function <span class="math-container">$t\mapsto {\bf r}(t)=\bigl(x(t),y(t)\bigr)$</span> and an outer function <span class="math-container">$f$</span> in the <span class="math-container">$(x,y)$</span> plane. &quot;By coincidence&quot; we have <span class="math-container">$x(t)=t$</span>, so that <span class="math-container">$\phi:=f\circ {\bf r}$</span> appears as <span class="math-container">$\phi(t)=f\bigl(t,y(t)\bigr)$</span>. The chain rule then gives <span class="math-container">$$\phi'(t)={\partial f\over\partial x}\biggr|_{{\bf r}(t)}\cdot 1+{\partial f\over\partial y}\biggr|_{{\bf r}(t)}\cdot y'(t)\ .$$</span></p>
1,051,357
<p>$$ A=\begin{bmatrix} -1&amp;2&amp;-3&amp;4\\ 5&amp;0&amp;2&amp;-2\\ 2&amp;1&amp;1&amp;2\\ 0&amp;0&amp;3&amp;-2\\ \end{bmatrix} $$</p> <p>I wanted to confirm that if I use $(w, x, y, z)$, would the image of the vector be:</p> <pre><code>-w + 2x - 3y + 4z 5w + 2y - 2z 2w + x + y + 2z 3y - 2z </code></pre> <p>Could anybody provide some help?</p> <p>If I wanted to check whether this matrix is invertible, how would I go about doing that?</p>
datalava
187,292
<p>There are many things that can tell you whether or not a matrix is invertible (see <a href="http://en.wikipedia.org/wiki/Invertible_matrix#The_invertible_matrix_theorem" rel="nofollow">The Invertible Matrix Theorem</a>.) </p> <p>One of these is whether the matrix has full rank. The reduced row echelon form can tell you the rank of a matrix. If the RREF has no rows/columns that are all $0$, then the matrix has full rank. This is equivalent to saying that the dimension of the rowspace (or columnspace) is $n$ (in this case, $n=4$).</p>
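For the concrete matrix in the question, this is easy to check by machine. Here is a small exact-arithmetic sketch of my own (Gaussian elimination over the rationals, not part of the original answer) that computes both the determinant and the rank:

```python
from fractions import Fraction

def det_and_rank(rows):
    """Gaussian elimination with exact fractions; returns (determinant, rank)."""
    m = [[Fraction(v) for v in row] for row in rows]
    n = len(m)
    det = Fraction(1)
    rank = 0
    for col in range(n):
        # find a pivot in this column at or below the current rank
        pivot = next((r for r in range(rank, n) if m[r][col] != 0), None)
        if pivot is None:
            det = Fraction(0)  # a pivotless column means a zero determinant
            continue
        if pivot != rank:
            m[rank], m[pivot] = m[pivot], m[rank]
            det = -det  # a row swap flips the sign
        det *= m[rank][col]
        for r in range(rank + 1, n):
            factor = m[r][col] / m[rank][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return det, rank

# The matrix from the question; det(A) = 60 != 0 and rank 4, so A is invertible.
A = [[-1, 2, -3,  4],
     [ 5, 0,  2, -2],
     [ 2, 1,  1,  2],
     [ 0, 0,  3, -2]]
```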
4,029,725
<p>In the book <em>Artificial Intelligence</em> by Norvig and Russell, I came across the following problem:</p> <blockquote> <p>Prove if correct: <span class="math-container">$(A ∧ B) \models (A ⇔ B).$</span></p> </blockquote> <p>I quickly interpreted <span class="math-container">$\models$</span> as <span class="math-container">$\implies$</span> and tried to prove it using the <a href="https://www.wolframalpha.com/input/?i=%28A+and+B%29+implies+%28A+xnor+B%29" rel="nofollow noreferrer">truth table</a>:</p> <p><a href="https://i.stack.imgur.com/SCakv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SCakv.png" alt="enter image description here" /></a></p> <p>It seems that, at least while interpreting <span class="math-container">$\models$</span> as <span class="math-container">$\implies$</span>, the statement is true. Then I gave it a second thought and did some more reading, coming across <a href="https://cs.stackexchange.com/questions/72360/how-is-implication-same-as-entailment">this</a> thread. Now I know that the two are not the same. But, it turns out, when we don't interpret <span class="math-container">$\models$</span> as <span class="math-container">$\implies$</span>, the statement is still true (my course TA uploaded answers without proof).</p> <p>So I am wondering:</p> <p><strong>Q1.</strong> How exactly is the given statement correct (given that <span class="math-container">$\models$</span> and <span class="math-container">$\implies$</span> are not the same)?<br /> <strong>Q2.</strong> Is my method of interpreting both as the same and then forming a truth table a correct method for such problems? If not, then how should I solve it?<br /> <strong>Q3.</strong> If the answer to Q2 is no, then will the above method of interpreting <span class="math-container">$\models$</span> as <span class="math-container">$\implies$</span> and forming a truth table always give the correct answer? 
If not, when will it fail to give the correct answer?<br /> <strong>Q4.</strong> I also tried to solve the same using <a href="https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/lecture-notes/Lecture7FinalPart1.pdf" rel="nofollow noreferrer">resolution</a>:</p> <p><span class="math-container">$$\neg (A\Longleftrightarrow B)\equiv \neg((A\wedge B)\vee(\neg A\wedge\neg B))\equiv\neg(A\wedge B)\wedge \neg(\neg A\wedge \neg B)\equiv (\neg A\vee\neg B)\wedge (A \vee B)$$</span> So my clauses will be:</p> <ul> <li><span class="math-container">$A$</span> (from <span class="math-container">$A\wedge B$</span>)</li> <li><span class="math-container">$B$</span> (from <span class="math-container">$A\wedge B$</span>)</li> <li><span class="math-container">$(\neg A\vee\neg B)$</span> (from <span class="math-container">$(\neg A\vee\neg B)\wedge (A \vee B)$</span>)</li> <li><span class="math-container">$(A \vee B)$</span> (from <span class="math-container">$(\neg A\vee\neg B)\wedge (A \vee B)$</span>)</li> </ul> <p><a href="https://i.stack.imgur.com/Zrhje.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zrhje.png" alt="enter image description here" /></a></p> <p>So I was able to derive the empty clause, so my assumption <span class="math-container">$\neg (A\Longleftrightarrow B)$</span> was incorrect. So <span class="math-container">$(A ∧ B) \implies (A ⇔ B)$</span>. 
Will the application of the resolution technique for <span class="math-container">$\models$</span> be exactly the same?</p> <p><strong>Update</strong></p> <p>[This is my updated understanding based on Graham's answer]</p> <p><strong>(a)</strong> After reading Graham's answer, I felt that the truth table above is proving the &quot;tautology&quot; <span class="math-container">$(A ∧ B) \implies (A ⇔ B)$</span>, but not <span class="math-container">$(A ∧ B) \models (A ⇔ B)$</span>.</p> <p><strong>(b)</strong> To prove <span class="math-container">$(A ∧ B) \models (A ⇔ B)$</span>, I need another truth table, something like this: <a href="https://i.stack.imgur.com/Ywbb2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ywbb2.png" alt="enter image description here" /></a></p> <p><strong>(c)</strong> Also, I guess the resolution technique is used to prove the tautology <span class="math-container">$(A ∧ B) \implies (A ⇔ B)$</span> and not <span class="math-container">$(A ∧ B) \models (A ⇔ B)$</span>. However, I feel we can use the first truth table and the resolution proof for <span class="math-container">$\implies$</span> to also prove <span class="math-container">$\models$</span>, because of the following fact:</p> <blockquote> <p><span class="math-container">$\varphi\vDash \psi$</span> iff <span class="math-container">$M(\varphi)\subseteq M(\psi)$</span>: that is, iff every truth assignment that makes <span class="math-container">$\varphi$</span> true also make <span class="math-container">$\psi$</span> true. This is the case iff <span class="math-container">$\vDash \varphi\Rightarrow\psi$</span>, i.e., if the formula <span class="math-container">$\varphi\Rightarrow\psi$</span> is true in all truth assignments (is a tautology). - <a href="https://cs.stackexchange.com/a/72363/17040">source</a></p> </blockquote> <p>Can someone please confirm if my understanding in the above points is correct?</p>
ryang
21,813
<blockquote> <p><strong>(a)</strong> After reading Graham's answer, I felt that the truth table above is proving the &quot;tautology&quot; <span class="math-container">$(A ∧ B) \implies (A ⇔ B)$</span>, but not <span class="math-container">$(A ∧ B) \models (A ⇔ B)$</span>.<br /> <strong>(b)</strong> To prove <span class="math-container">$(A ∧ B) \models (A ⇔ B)$</span>, I need another truth table, something like this: <a href="https://i.stack.imgur.com/Ywbb2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ywbb2.png" alt="enter image description here" /></a></p> </blockquote> <p>To be absolutely clear: the difference is this (here, <span class="math-container">$\psi$</span> and <span class="math-container">$\theta$</span> are <strong>compound propositional formulae</strong>):</p> <ol> <li>using a truth table to verify that <span class="math-container">$(\psi\to \theta)$</span> is a tautology: construct the truth table of <span class="math-container">$(\psi\to \theta),$</span> then verify that the column with the main connective is filled <em>only</em> with <code>1</code>'s;</li> <li>using a truth table to verify that <span class="math-container">$(\psi\models \theta):$</span> construct a truth table containing both <span class="math-container">$\psi$</span> and <span class="math-container">$\theta,$</span> then verify that in <em>every</em> row where <span class="math-container">$\psi$</span> has value <code>1</code>, <span class="math-container">$\theta$</span> also has value <code>1</code>.</li> </ol> <p>(1) and (2) are equivalent (the Deduction Theorem and <a href="https://math.stackexchange.com/a/3773733/21813">its converse</a> formally justifies this).</p>
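Both checks are easy to mechanize. A small Python sketch of my own (not part of the answer), enumerating all truth assignments for the formulas in the question:

```python
from itertools import product

def entails(premise, conclusion, nvars=2):
    """phi |= psi: every assignment making phi true also makes psi true."""
    return all(conclusion(*v)
               for v in product([False, True], repeat=nvars)
               if premise(*v))

def tautology(formula, nvars=2):
    """|= phi: phi is true under every assignment."""
    return all(formula(*v) for v in product([False, True], repeat=nvars))

conj = lambda a, b: a and b   # A ∧ B
iff  = lambda a, b: a == b    # A ⇔ B
```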
830,599
<p>The function $f$ is defined as follows: $$f(x):=\sum_{j=1}^{\infty} \frac{x^j}{j!} e^{-x}$$</p> <p>It's easy to see that $f(0)=0$. But I am interested in the value $$\lim_{x \rightarrow 0^+} f(x).$$</p> <p>Even <a href="https://www.wolframalpha.com/input/?i=lim_%28x-%3E0%29+%28sum_%28i%3D1%29%5Einfinity+x%5Ej%2F%28j%21%29+e%5E%28-x%29%29+" rel="nofollow">Wolfram Alpha</a> does not help here. I tried to plot this function, but that doesn't work either. And my calculator doesn't give a solution for concrete values of $x$, so I have no idea how to proceed. </p>
Michael Albanese
39,599
<p><strong>Hint:</strong> Note that $$f(x) = \sum_{j=1}^{\infty}\frac{x^j}{j!}e^{-x} = e^{-x}\sum_{j=1}^{\infty}\frac{x^j}{j!} = e^{-x}\left(\sum_{j=0}^{\infty}\frac{x^j}{j!}-1\right).$$</p>
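A quick numerical check (my addition, which spells out where the hint leads): the series in the parentheses is $e^x-1$, so $f(x)=1-e^{-x}$, and the partial sums agree with that closed form; in particular $f(x)\to 0$ as $x\to 0^+$.

```python
import math

def f_partial(x, terms=60):
    """Partial sum of sum_{j>=1} x^j / j!, multiplied by e^{-x}."""
    s = sum(x**j / math.factorial(j) for j in range(1, terms + 1))
    return s * math.exp(-x)

# Following the hint: f(x) = e^{-x}(e^x - 1) = 1 - e^{-x}, so f(x) -> 0 as x -> 0+.
```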
13,829
<p>I was trying to understand the notion of a connection. I have heard in seminars that a connection is more or less a differential equation. I read the definition of the Koszul connection and I am trying to assimilate it. So far I cannot see why a connection is a differential equation. Please help me with some clarification.</p>
Jonas Meyer
1,424
<p>Another method: You could write it as the sum of the integrals on the intervals $\left[\sqrt{\frac{\pi}{2}+k\pi},\sqrt{\frac{\pi}{2}+(k+1)\pi}\right]$, and make a substitution $u=x^2$ to bound the integral on such an interval below by $\frac{1}{\sqrt{\frac{\pi}{2}+(k+1)\pi}}$. (I'm ignoring the interval $\left[0,\sqrt{\frac{\pi}{2}}\right]$.)</p> <p>Using the inequality $\frac{\pi}{2}+(k+1)\pi\leq(k+2)\pi$ along with an integral comparison then leads to the estimate $$\int_0^\sqrt{\frac{\pi}{2}+N\pi}|\cos(x^2)|dx\geq\frac{2}{\sqrt{\pi}}(\sqrt{N}-\sqrt{2}).$$</p>
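The final estimate can be sanity-checked numerically (my addition; a crude trapezoidal rule with a step far below the oscillation scale of $\cos(x^2)$):

```python
import math

def lhs(N, steps=120000):
    """Trapezoidal approximation of the integral of |cos(x^2)| over [0, sqrt(pi/2 + N pi)]."""
    b = math.sqrt(math.pi / 2 + N * math.pi)
    h = b / steps
    g = lambda x: abs(math.cos(x * x))
    return h * (0.5 * g(0.0) + sum(g(i * h) for i in range(1, steps)) + 0.5 * g(b))

def rhs(N):
    """The claimed lower bound (2/sqrt(pi)) (sqrt(N) - sqrt(2))."""
    return 2.0 / math.sqrt(math.pi) * (math.sqrt(N) - math.sqrt(2))
```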
2,439,324
<blockquote> <p>Does there exist an uncountable set $A \subset \mathbb{R}$, such that for every $a \in A$ and every $\epsilon&gt;0$, $(a-\epsilon,a+\epsilon)\not\subset A$?</p> </blockquote> <p>I am not sure what the answer is, but I am having trouble trying to construct such a set. Any hints?</p>
Community
-1
<p><a href="https://en.wikipedia.org/wiki/Cantor_set" rel="nofollow noreferrer">This</a> totally disconnected set is also an example.</p>
73,675
<p>Dear all,</p> <p>I'm seeking a reference for a claim made in lecture 8 of Jacob Lurie's chromatic homotopy theory notes (<a href="http://www.math.harvard.edu/~lurie/252xnotes/Lecture8.pdf" rel="noreferrer">http://www.math.harvard.edu/~lurie/252xnotes/Lecture8.pdf</a>). More particularly, Theorem 6 of this lecture states that (say over $\mathbb{F}_2$, so that things are commutative) the spectrum $\mathbb{G} = \operatorname{Spec} \mathcal{A}_*$ of the dual Steenrod algebra $\mathcal{A}_*$ is the automorphism group of the additive formal group law, in the obvious sense.</p> <p>Lurie argues convincingly that $\mathbb{G}$ does act on the additive formal group law, but I don't think he attempts to prove that this action gives an isomorphism with the automorphism group. I'd be grateful if someone could give me a reference for this fact.</p> <p>Cheers,</p> <p>Saul</p>
S. Carnahan
121
<p>MIT OpenCourseWare has some <a href="http://ocw.mit.edu/courses/mathematics/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/lecture-notes/" rel="nofollow">notes from a course</a> that Lurie taught in 2007. I believe <a href="http://ocw.mit.edu/courses/mathematics/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/lecture-notes/lecture13.pdf" rel="nofollow">the lecture on the dual Steenrod algebra</a> has a proof of the claim.</p>
3,351,012
<p><span class="math-container">$\int e^{2\theta}\sin(3\theta)d\theta$</span> seems to be leading me in circles. The integral I get when I use integration by parts, <span class="math-container">$\int e^{2\theta}\cos(3\theta)d\theta$</span> just leads me back to <span class="math-container">$\int e^{2\theta}\sin(3\theta)d\theta$</span>. I am not sure how to solve it.</p> <p><strong>My Steps:</strong></p> <p><span class="math-container">$\int e^{2\theta}\sin(3\theta)d\theta$</span></p> <p>Let <span class="math-container">$u = \sin(3\theta)$</span> and <span class="math-container">$dv=e^{2\theta}d\theta$</span></p> <p>Then <span class="math-container">$du = 3\cos(3\theta)d\theta$</span> and <span class="math-container">$v = \frac{1}{2}e^{2\theta}$</span> <span class="math-container">\begin{align*} \int e^{2\theta} \sin(3 \theta)d\theta &amp;= \frac{1}{2} e^{2\theta}\sin(3\theta) - \int\frac{1}{2}e^{2\theta}3\cos(3\theta)d\theta\\ &amp;=e^{2\theta}\sin(3\theta) - \frac{3}{2}\int e^{2\theta}\cos(3\theta)d\theta\\ \end{align*}</span></p> <hr> <p><span class="math-container">$\int e^{2\theta}\cos(3\theta)d\theta$</span></p> <p>Let <span class="math-container">$u = \cos(3\theta)$</span> and <span class="math-container">$dv = e^{2\theta}d\theta$</span></p> <p>Then <span class="math-container">$du = -3\sin(3\theta)d\theta$</span> and <span class="math-container">$v=\frac{1}{2}e^{2\theta}$</span> </p> <p><span class="math-container">\begin{align*} \int e^{2\theta}\cos(3\theta) &amp;= \frac{1}{2}e^{2\theta}\cos(3\theta)-\int (\frac{1}{2}e^{2\theta}\cdot-3\sin(3\theta))d\theta\\ &amp;=\frac{1}{2}e^{2\theta}\cos(3\theta)+ \frac{3}{2} \int e^{2\theta}\sin(3\theta)d\theta \end{align*}</span></p> <p>So you can see I just keep going in circles. How can I break out of this loop?</p>
Gabe
449,093
<p>Take your examples together, <span class="math-container">\begin{align*} \int e^{2\theta}\sin(3\theta)d\theta &amp;=\frac{1}{2}e^{2\theta}\sin(3\theta)- \frac{3}{2} \left(\frac{1}{2}e^{2\theta}\cos(3\theta)+ \frac{3}{2} \int e^{2\theta}\sin(3\theta)d\theta\right) \end{align*}</span> Substituting the integral for a variable, say <span class="math-container">$X$</span>, gives you: <span class="math-container">$$X=\frac{1}{2}e^{2\theta}\sin(3\theta)- \frac{3}{2} \left(\frac{1}{2}e^{2\theta}\cos(3\theta)+ \frac{3}{2} X\right)$$</span> simplifying gives you: <span class="math-container">$$X=\frac{1}{2}e^{2\theta}\sin(3\theta)- \frac{3}{4}e^{2\theta}\cos(3\theta)- \frac{9}{4} X$$</span> so your answer is <span class="math-container">$$X=\int e^{2\theta}\sin(3\theta)=\frac{4}{13}\left(\frac{1}{2}e^{2\theta}\sin(3\theta)- \frac{3}{4}e^{2\theta}\cos(3\theta)\right)=\frac{e^{2\theta}\left(2\sin(3\theta)-3\cos(3\theta)\right)}{13}$$</span> and a simple derivative check shows this to be true. Note your first example last line, you are missing a <span class="math-container">$\frac{1}{2}$</span> on the right hand side.</p>
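A quick finite-difference check of the final antiderivative (my addition, not part of the answer): its derivative should reproduce the original integrand.

```python
import math

def F(t):
    """The antiderivative found above: e^{2t}(2 sin 3t - 3 cos 3t) / 13."""
    return math.exp(2 * t) * (2 * math.sin(3 * t) - 3 * math.cos(3 * t)) / 13

def integrand(t):
    return math.exp(2 * t) * math.sin(3 * t)

def dF(t, h=1e-6):
    # central finite difference of F
    return (F(t + h) - F(t - h)) / (2 * h)
```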
3,351,012
<p><span class="math-container">$\int e^{2\theta}\sin(3\theta)d\theta$</span> seems to be leading me in circles. The integral I get when I use integration by parts, <span class="math-container">$\int e^{2\theta}\cos(3\theta)d\theta$</span> just leads me back to <span class="math-container">$\int e^{2\theta}\sin(3\theta)d\theta$</span>. I am not sure how to solve it.</p> <p><strong>My Steps:</strong></p> <p><span class="math-container">$\int e^{2\theta}\sin(3\theta)d\theta$</span></p> <p>Let <span class="math-container">$u = \sin(3\theta)$</span> and <span class="math-container">$dv=e^{2\theta}d\theta$</span></p> <p>Then <span class="math-container">$du = 3\cos(3\theta)d\theta$</span> and <span class="math-container">$v = \frac{1}{2}e^{2\theta}$</span> <span class="math-container">\begin{align*} \int e^{2\theta} \sin(3 \theta)d\theta &amp;= \frac{1}{2} e^{2\theta}\sin(3\theta) - \int\frac{1}{2}e^{2\theta}3\cos(3\theta)d\theta\\ &amp;=e^{2\theta}\sin(3\theta) - \frac{3}{2}\int e^{2\theta}\cos(3\theta)d\theta\\ \end{align*}</span></p> <hr> <p><span class="math-container">$\int e^{2\theta}\cos(3\theta)d\theta$</span></p> <p>Let <span class="math-container">$u = \cos(3\theta)$</span> and <span class="math-container">$dv = e^{2\theta}d\theta$</span></p> <p>Then <span class="math-container">$du = -3\sin(3\theta)d\theta$</span> and <span class="math-container">$v=\frac{1}{2}e^{2\theta}$</span> </p> <p><span class="math-container">\begin{align*} \int e^{2\theta}\cos(3\theta) &amp;= \frac{1}{2}e^{2\theta}\cos(3\theta)-\int (\frac{1}{2}e^{2\theta}\cdot-3\sin(3\theta))d\theta\\ &amp;=\frac{1}{2}e^{2\theta}\cos(3\theta)+ \frac{3}{2} \int e^{2\theta}\sin(3\theta)d\theta \end{align*}</span></p> <p>So you can see I just keep going in circles. How can I break out of this loop?</p>
farruhota
425,072
<p>Another method is to predict the answer: <span class="math-container">$$\int e^{2x}\sin(3x)dx=Ae^{2x}\sin (3x)+Be^{2x}\cos (3x)+C \Rightarrow \\ e^{2x}\sin (3x)=2Ae^{2x}\sin (3x)+3Ae^{2x}\cos (3x)+2Be^{2x}\cos (3x)-3Be^{2x}\sin (3x) \Rightarrow \\ \begin{cases} 2A-3B=1\\ 3A+2B=0\end{cases}\Rightarrow A=\frac2{13};B=-\frac3{13}$$</span> Hence, the final answer is: <span class="math-container">$$\int e^{2x}\sin(3x)dx=\frac2{13}e^{2x}\sin (3x)-\frac3{13}e^{2x}\cos (3x)+C.$$</span></p>
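The $2\times2$ system for $A$ and $B$ can be solved mechanically; a small exact-arithmetic sketch of mine:

```python
from fractions import Fraction

def solve2(a11, a12, b1, a21, a22, b2):
    """Cramer's rule for the system a11 A + a12 B = b1, a21 A + a22 B = b2."""
    d = a11 * a22 - a12 * a21  # determinant of the coefficient matrix
    return Fraction(b1 * a22 - b2 * a12, d), Fraction(a11 * b2 - a21 * b1, d)

# The system derived above: 2A - 3B = 1 and 3A + 2B = 0
A, B = solve2(2, -3, 1, 3, 2, 0)
```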
2,385,152
<p>Given a matrix $A$, e.g. $$ A=\begin{bmatrix} a_{11}&amp; a_{12} &amp; a_{13} \\ a_{21}&amp; a_{22} &amp; a_{23} \\ a_{31}&amp; a_{32} &amp; a_{33} \\ \end{bmatrix} $$ eliminating the row and the column corresponding to $a_{21}$ results in a smaller matrix $$ B=\begin{bmatrix} a_{12} &amp; a_{13} \\ a_{32} &amp; a_{33} \\ \end{bmatrix} $$ Is there a notation for such a resultant matrix $B$? The matrix cofactor involves a similar operation, but it does not give a matrix.</p>
Dave
334,366
<p>This is a type of submatrix that one obtains by removing a select number of rows and columns. In your case, you eliminated the second row and first column, so a common notation is that this is a $(2,1)$-submatrix of $A$. Minors and cofactors are obtained through determinants of certain submatrices. Check out this <a href="https://en.wikipedia.org/wiki/Matrix_(mathematics)#Submatrix" rel="nofollow noreferrer">link</a> for more information.</p> <p>Another common notation is found <a href="https://proofwiki.org/wiki/Definition:Submatrix" rel="nofollow noreferrer">here</a>. In your example, as the second row and first column are removed, the resulting matrix $B$ would be denoted $A(2;1)$ in this notation.</p>
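Programmatically, the $(i,j)$-submatrix (the $A(i;j)$ of the second notation) is just row and column deletion; a sketch of mine, with 1-based indices to match the notation:

```python
def submatrix(m, i, j):
    """Return the matrix m with row i and column j removed (1-based indices)."""
    return [[v for c, v in enumerate(row, start=1) if c != j]
            for r, row in enumerate(m, start=1) if r != i]

A = [["a11", "a12", "a13"],
     ["a21", "a22", "a23"],
     ["a31", "a32", "a33"]]
B = submatrix(A, 2, 1)  # delete the row and column through a21, as in the question
```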
1,019,408
<p>Usually, $f$ denotes a function, $f(x)$ is an image of $x$ under $f$. But what's $f(X)$ if $X$ is a set? </p> <p>edit: Please, disregard the body of this question. I had to put something here to be able to post the question. </p>
Ashwin Ganesan
157,927
<p>If $f(S)=S$, we say $f$ fixes the set $S$ setwise. If $f(s)=s$ for all $s \in S$, then we say $f$ fixes the set $S$ pointwise. The terminology is intuitive and makes it clear that the latter condition is stronger than and implies the former. But the former does not imply the latter; e.g., take $f$ to be the bijection from $\{1,2,3,4,5\}$ to itself defined by the permutation $f:=(1,2,3)(4)(5)$. Then $f$ fixes the set $\{1,2,3\}$ setwise, but $f$ does not fix the set $\{1,2,3\}$ pointwise, because $f$ moves $1$ to $2$. </p>
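The same example can be checked mechanically; a small sketch of mine, with the permutation written as a Python dict:

```python
f = {1: 2, 2: 3, 3: 1, 4: 4, 5: 5}   # the permutation (1,2,3)(4)(5) from the answer

def fixes_setwise(f, S):
    """f fixes S setwise: the image of S equals S."""
    return {f[s] for s in S} == set(S)

def fixes_pointwise(f, S):
    """f fixes S pointwise: f(s) = s for every s in S."""
    return all(f[s] == s for s in S)
```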
516,244
<p>My professor gave us this example in her notes:</p> <p>$$\sum_{n = 1}^\infty \left(\frac{3}{n(n+3)}+\frac{1}{2^n}\right)$$</p> <p>So I know we're supposed to find the partial fraction decomposition, which ends up being</p> <p>$$\frac{3}{n(n+3)}=\frac{A}{n}+\frac{B}{n+3}= \frac{1}{n}-\frac{1}{n+3}$$</p> <p>So based on how she did the other examples, I would expect her to write</p> <p>$$\sum_{n = 1}^\infty \frac{3}{n(n+3)}=\frac{1}{1}-\frac{1}{4}+\frac{1}{2}-\frac{1}{5}+\cdots,$$ because I'd be plugging in numbers for $n$ starting with $n=1$. However, she instead did the following:</p> <p>$$\frac{3}{n(n+3)}=\frac{1}{n}-\frac{1}{n+1}+\frac{1}{n+1}-\frac{1}{n+2}+\frac{1}{n+2}-\frac{1}{n+3},$$</p> <p>which would definitely be a lot more helpful in cancelling out terms like you're supposed to when doing telescoping series, BUT I don't know why she's doing this. I thought we were supposed to plug in values for $n$, and that's what should be increasing each time, but instead the number being added to $n$ is the one going up, and I have no clue why. I don't think I'm asking this question in the best way possible, but I'm kinda confusing myself, because she did other examples and they feel nothing like this, and I'm just starting to learn all this, so can somebody please give me some insight as to what is going on?</p> <p>(And I know I'm supposed to also deal with the sum of the $\frac{1}{2^n}$ term, but I'm kinda ignoring it for now since I don't even know what's going on with the first one.)</p>
Caleb Stanford
68,107
<p>Heron's formula is inefficient; there is in fact a direct formula. If the triangle has one vertex at the origin, and the other two vertices are $(a,b)$ and $(c,d)$, the formula for its area is $$ A = \frac{\left| ad - bc \right|}{2} $$</p> <p>To get a formula where the vertices can be anywhere, just subtract the coordinates of the third vertex from the coordinates of the other two (translating the triangle) and then use the above formula.</p>
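In code, the formula together with the translation trick is a few lines (a sketch of mine):

```python
def triangle_area(p, q, r):
    """Area of triangle p, q, r via |ad - bc| / 2, after translating r to the origin."""
    a, b = p[0] - r[0], p[1] - r[1]
    c, d = q[0] - r[0], q[1] - r[1]
    return abs(a * d - b * c) / 2
```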
516,244
<p>My professor gave us this example in her notes:</p> <p>$$\sum_{n = 1}^\infty \left(\frac{3}{n(n+3)}+\frac{1}{2^n}\right)$$</p> <p>So I know we're supposed to find the partial fraction decomposition, which ends up being</p> <p>$$\frac{3}{n(n+3)}=\frac{A}{n}+\frac{B}{n+3}= \frac{1}{n}-\frac{1}{n+3}$$</p> <p>So based on how she did the other examples, I would expect her to write</p> <p>$$\sum_{n = 1}^\infty \frac{3}{n(n+3)}=\frac{1}{1}-\frac{1}{4}+\frac{1}{2}-\frac{1}{5}+\cdots,$$ because I'd be plugging in numbers for $n$ starting with $n=1$. However, she instead did the following:</p> <p>$$\frac{3}{n(n+3)}=\frac{1}{n}-\frac{1}{n+1}+\frac{1}{n+1}-\frac{1}{n+2}+\frac{1}{n+2}-\frac{1}{n+3},$$</p> <p>which would definitely be a lot more helpful in cancelling out terms like you're supposed to when doing telescoping series, BUT I don't know why she's doing this. I thought we were supposed to plug in values for $n$, and that's what should be increasing each time, but instead the number being added to $n$ is the one going up, and I have no clue why. I don't think I'm asking this question in the best way possible, but I'm kinda confusing myself, because she did other examples and they feel nothing like this, and I'm just starting to learn all this, so can somebody please give me some insight as to what is going on?</p> <p>(And I know I'm supposed to also deal with the sum of the $\frac{1}{2^n}$ term, but I'm kinda ignoring it for now since I don't even know what's going on with the first one.)</p>
Michael Hoppe
93,935
<p>Now here's a solution which works in any vector space with an inner product: take half the square root of the Gram determinant of two sides of the triangle, that is <span class="math-container">$$\frac12\sqrt{\det\begin{pmatrix}\langle b-a,b-a\rangle&amp; \langle b-a,c-a\rangle\\ \langle b-a,c-a\rangle &amp; \langle c-a,c-a\rangle \end{pmatrix}}.$$</span></p>
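As a consistency check (my addition, not part of the answer): for a triangle in $\mathbb R^3$ with the standard inner product, the Gram-determinant formula agrees with the familiar half cross-product norm.

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_area(a, b, c):
    """Half the square root of the Gram determinant of b - a and c - a."""
    u = [x - y for x, y in zip(b, a)]
    v = [x - y for x, y in zip(c, a)]
    return 0.5 * math.sqrt(dot(u, u) * dot(v, v) - dot(u, v) ** 2)

def cross_area(a, b, c):
    """Half the norm of (b - a) x (c - a), for comparison."""
    u = [x - y for x, y in zip(b, a)]
    v = [x - y for x, y in zip(c, a)]
    w = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    return 0.5 * math.sqrt(dot(w, w))
```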
1,743,465
<p>If $a=(1,2,3,4,5)$ is an example of a vector in $\mathbb R^5$, what could be an example of a vector in $\mathbb C^5$? Is it $a=(1,2,3,4,5i)$?</p> <p>Also, $x=a+ib$ is $2$-dimensional; can a complex number be one-dimensional, like when $a=0$ or $b=0$? But if $b=0$ then it is a real number, so can we say that all real numbers (scalars) are one-dimensional complex numbers?</p>
Surb
154,545
<p>$$(1,\ i,\ 1+i,\ 3+2i,\ \pi+\sqrt 2\, i)$$</p>
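In code (a sketch of mine), such a vector is simply five complex numbers, and a real number is the special case with zero imaginary part, which speaks to the second part of the question:

```python
import math

# A vector in C^5: five complex components (some happen to be real or purely imaginary)
v = [1 + 0j, 1j, 1 + 1j, 3 + 2j, math.pi + math.sqrt(2) * 1j]
```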
482,030
<p>I'm reading a proof of the irrationality of <span class="math-container">$\sqrt 2$</span>. In a step it states that <span class="math-container">$2d^2=n^2$</span> implies that <span class="math-container">$n$</span> is multiple of <span class="math-container">$2$</span>. How?</p>
Ömer
55,199
<p>$n^2 = 2d^2$ is even, and the square of an odd number is odd, so $n$ must be even.</p>
4,020,412
<p>I was reading about ODE variable separation solving and the book says that assuming that a function can be expressed as the product of two single-variable functions loses generality, which I understand. I cannot, however, prove that it does. For example: the function <span class="math-container">$\sqrt{x+t}$</span> cannot be expressed as <span class="math-container">$X(x)\cdot T(t)$</span>. I tried proving it by contradiction and got to the result <span class="math-container">$X_1(x)\cdot T_1(t) = (x+it)\cdot(x-it)$</span> but I'm not sure if this is sufficient. That is, what guarantees that <span class="math-container">$(x+it)\cdot(x-it)$</span> cannot be rearranged in some way so that it only has the variable <span class="math-container">$x$</span> in one of the factors and only <span class="math-container">$t$</span> in the other?</p>
mathcounterexamples.net
187,663
<p>If <span class="math-container">$f(x,t) = \sqrt{x+t} = X(x) \cdot Y(t)$</span> for all <span class="math-container">$x, t \ge 0$</span> then</p> <p><span class="math-container">$$X(x)\cdot Y(0) = \sqrt{x} \text{ and } X(0) \cdot Y(t) = \sqrt t$$</span></p> <p>But <span class="math-container">$X(0) \cdot Y(0) = \sqrt{0+0} = 0$</span> and therefore the contradiction <span class="math-container">$X(0) = 0$</span> or <span class="math-container">$Y(0) = 0$</span>.</p>
918,509
<p>Let $X$ be a random variable with pdf $f$. I would like to know why:</p> <p>$$\operatorname{E} [X] = \int_\Omega X \, \mathrm{d}P = \int_\Omega X(\omega) P(\mathrm{d}\omega)= \int_{-\infty}^\infty x f(x)\, \mathrm{d}x . $$</p> <p>I mean, I don't get why these are all equal and what the notation in the third term means. Thanks for any clarification.</p>
almagest
172,006
<p>Define $a(n,1)=n(1+2+\dots+(n+1))=n(n+1)(n+2)/2$. Then define recursively $$a(n,k)=n\sum_{r=1}^{n+1}a(r,k-1)$$ Then your sequence is $a(1,1),a(1,2),a(1,3),\dots$. So entering that into Mathematica, we get $3, 15, 105, 945, 10395, 135135,\dots$. Checking with oeis.org we find these are just the numbers $1\times3\times5\times7\times\dots\times (2n-1)$.</p> <p>Since you did not ask for a proof of the correct formula and I am lazy, I have not given one, but it should be fairly easy by induction.</p>
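The recursion is also easy to check outside Mathematica; the following Python sketch (a direct transcription of the definitions above, nothing assumed) reproduces the sequence and compares it with the products of consecutive odd numbers:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n, k):
    # a(n,1) = n(n+1)(n+2)/2,  a(n,k) = n * sum_{r=1}^{n+1} a(r, k-1)
    if k == 1:
        return n * (n + 1) * (n + 2) // 2
    return n * sum(a(r, k - 1) for r in range(1, n + 2))

seq = [a(1, k) for k in range(1, 7)]
print(seq)  # [3, 15, 105, 945, 10395, 135135]

# each term is the previous one times the next odd number: 3, 3*5, 3*5*7, ...
def odd_product(k):
    p = 1
    for j in range(1, k + 1):
        p *= 2 * j + 1
    return p

assert seq == [odd_product(k) for k in range(1, 7)]
```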
507,133
<p>I came across a problem in a book that asked us to find the first number $n$ such that $\phi(n)\geqslant 1,000$ it turns out that the answer is 1009, which is a prime number. There were several questions of this type, and our professor conjectured that it will always be the next prime. However, no one has been able to come up with a proof for this conjecture. So, more formally the conjecture is:</p> <p>For all $n\in\mathbb{N}$ the least positive integer $k\in\mathbb{N}$, with $k&gt;1$ such that $\phi(k)\geqslant n$ is a prime.</p> <p>I have worked with the Möbius Inversion function, and other minimal element contradictory proofs, but nothing has worked so far. Does anyone have any good ideas? </p>
Will Jagy
10,400
<p>Initial response: this should follow from a fairly mild conjecture on <a href="http://en.wikipedia.org/wiki/Prime_gap" rel="noreferrer">prime gaps</a>, that for a prime $p \geq 127,$ there is always a prime $q$ with $p &lt; q &lt; p + \sqrt p.$ The last known bad one is $113,$ as $\sqrt {113} \approx 10.63,$ the sum is about $123.63,$ but the first prime after $113$ is $127.$ The things that have <a href="http://en.wikipedia.org/wiki/Prime_gap#Upper_bounds" rel="noreferrer">actually been proved</a> of this type have exponents slightly larger than $1/2,$ plus they typically have the condition "for large enough numbers," meaning we cannot invoke these theorems directly for this problem. Edit, Komputer Kalkulation: for a prime $p \geq 2999,$ there is always a prime $q$ with $p &lt; q &lt; p + \frac{1 }{2} \sqrt p \;;$ kalkulated for all $p \leq 1000000.$ This slightly stronger conjecture (that it is true for all $p \geq 2999$) implies the conjecture in the original question quite directly. Plus, you can see from the <a href="http://en.wikipedia.org/wiki/Prime_gap#Numerical_results" rel="noreferrer">Table of first 75</a> that this stronger conjecture holds for all $ 4 \cdot 10^{18} \geq p \geq 9551,$ and my little computer run just extends the 9551 down to 2999.</p>
There could even be an elementary proof, hard to say.</p> <p>EXERCISE: if $M \geq 4$ is <strong>not</strong> prime, does it follow that $\phi(M) \leq M - \sqrt M?$ I'm going to do a little computer run.</p> <p>Komputer Kalkulation: if $M \geq 4$ is <strong>not</strong> prime, then $\phi(M) \leq M - \sqrt M,$ and equality holds only if $M = r^2$ for prime $r.$ </p> <p>Should not be difficult to prove the Kalkulation. EDIT: yes, this is meaning of the first displayed equation in the answer by mjqxxxx </p>
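Both the Komputer Kalkulation and the conjecture in the original question are easy to spot-check by brute force. Here is a self-contained Python sketch (my own, with a totient sieve; the range 5000 is arbitrary):

```python
import math

def totients(limit):
    # phi(0..limit) via the standard sieve: phi[m] *= (1 - 1/p) for each prime p | m
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:  # p was never touched, so p is prime
            for m in range(p, limit + 1, p):
                phi[m] -= phi[m] // p
    return phi

def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, math.isqrt(m) + 1))

LIMIT = 5000
phi = totients(LIMIT)

# Kalkulation: for composite M >= 4, phi(M) <= M - sqrt(M),
# i.e. (M - phi(M))^2 >= M, with equality only for M = r^2, r prime
for M in range(4, LIMIT + 1):
    if not is_prime(M):
        assert (M - phi[M]) ** 2 >= M
        if (M - phi[M]) ** 2 == M:
            assert is_prime(M - phi[M])

# original conjecture: the least k > 1 with phi(k) >= n is prime
for n in range(1, 2000):
    k = next(k for k in range(2, LIMIT) if phi[k] >= n)
    assert is_prime(k)

print(next(k for k in range(2, LIMIT) if phi[k] >= 1000))  # 1009, as in the question
```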
2,066,765
<p>Are there any examples of subspaces of $\ell^{2}$ and $\ell^{\infty}$ which are not closed?</p>
Orest Bucicovschi
378,410
<p>As an observation, every subspace generated by countably many linearly independent vectors (that is, a subspace of dimension $\aleph_0$) inside a Banach space is not closed, since a Banach space cannot have (algebraic) dimension $\aleph_0$ (if it were closed, it would itself be a Banach space).</p>
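A concrete witness (my own illustration, not part of the answer): inside $\ell^2$, the subspace of finitely supported sequences is spanned by countably many independent vectors, and the truncations of $(1/k)_{k\ge1}$ show it is not closed, since they converge in norm to a limit with infinite support.

```python
import math

# l2 distance between x = (1/k)_{k>=1} and its truncation after n terms:
# ||x - x_n||_2 = sqrt(sum_{k>n} 1/k^2) ~ 1/sqrt(n) -> 0,
# yet the limit x is not finitely supported
def tail_norm(n, cutoff=10**5):
    return math.sqrt(sum(1.0 / k**2 for k in range(n + 1, cutoff)))

print(tail_norm(10), tail_norm(1000))  # roughly 0.31 and 0.03
```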
1,940,837
<p>I don't understand the difference between $\bigoplus_{i=1}^n M_i$ and $\prod_{i=1}^n M_i$ where $\{M_i\}_{i=1,...,n}$ is a collection of $R$-modules. When $I$ is not finite, $$\prod_{i\in I}M_i=\{(x_i)_{i\in I}\mid x_i\in M_i\}$$ and $$\bigoplus _{i\in I}M_i=\{(x_i)_{i\in I}\mid x_i=0\text{ except for a finite number of $i$}\}.$$</p> <p>But for $I$ finite, they look to be the same... so what's the difference between them when $I$ is finite?</p>
Zelos Malum
197,853
<p>For finite cases they are identical, and either can be used interchangeably without problem. The reason the two notations exist is that in the infinite case they are NOT the same: one is the categorical product construction, the other is the coproduct. Both guarantee the existence of homomorphisms of certain types; however, one is surjective and the other is injective.</p>
2,452,135
<p>I need to prove the set $A=\left \{ \left ( r\cos t,r\sin t \right ) \mid 0\leq t\leq \theta ,r\geq 0\right \}$ is closed. I have tried to use the theorems for continuous functions, but I always run into trouble because the domain is not bounded. </p>
Mariam
486,284
<p>Hint: a set is closed if its complement is open.</p>
2,452,135
<p>I need to prove the set $A=\left \{ \left ( r\cos t,r\sin t \right ) \mid 0\leq t\leq \theta ,r\geq 0\right \}$ is closed. I have tried to use the theorems for continuous functions, but I always run into trouble because the domain is not bounded. </p>
Alex Provost
59,556
<p>Pick any point $p \in \mathbb{R}^2 \setminus A$. (If such a point does not exist, then $A = \mathbb{R}^2$ is closed.) Let $d &gt; 0$ be the smallest distance between this point and the rays $t = 0$, $t = \theta$. This choice of $d$ ensures that the distance between $p$ and any point in the sector $A$ is at least $d$. Therefore, the open disk of radius $d$ around $p$ lies entirely within $\mathbb{R}^2 \setminus A$, so $\mathbb{R}^2 \setminus A$ is open.</p>
73,219
<p>For Riemann surfaces there are at least two possible notions of hyperbolicity. The classical one is given by the Uniformization Theorem, or equivalently the type problem, which essentially says that a simply connected Riemann surface is conformally equivalent to one of the following:</p> <ul> <li>Riemann Sphere $\mathbb{C}\cup\{\infty\}$ (elliptic type).</li> <li>Complex plane (parabolic type).</li> <li>Open unit disk (hyperbolic type).</li> </ul> <p>On the other hand, given a Riemann surface one can ask if it is hyperbolic in Gromov's sense. In other words, does there exist $\delta&gt;0$ such that all the geodesic triangles in the surface are $\delta$-thin? </p> <p>It seems to me that these two notions of hyperbolicity are not equivalent and one can have counterexamples in both directions. For instance, the two-dimensional torus $\mathbb{T}^2$ is hyperbolic in Gromov's sense (since it is compact), but it's also a quotient of the Euclidean plane by a free action of a discrete group of isometries and therefore of parabolic type. </p> <p>My questions are: what is a sufficient condition for a surface of hyperbolic type to be Gromov hyperbolic? What is known about the relation of these two notions? </p> <p><strong>Related Question:</strong> Let $G$ be an infinite planar graph with uniformly bounded degree and assume that the simple random walk is transient. Is the graph necessarily Gromov hyperbolic? </p>
Sam Nead
1,650
<p>NEW ANSWER:</p> <p>As there has been much confusion on this point (some of it mine...): </p> <blockquote> <p>Definition: A Riemannian 2-manifold $S$ is of <em>hyperbolic type</em> if the universal cover of $S$ is conformally equivalent to the open unit disk, $D$. </p> </blockquote> <p>On the other hand we have</p> <blockquote> <p>Definition: A <em>hyperbolic surface</em> $S$ is a surface equipped with a complete Riemannian metric of constant curvature minus one.</p> </blockquote> <p>It is an exercise to show that all hyperbolic surfaces are surfaces of hyperbolic type. On the other hand, a surface of hyperbolic type need not be hyperbolic. As an easy example of this, choose your favorite positive function $f$ on the disk $D$ and use $f$ to scale the Poincare metric. This new metric is (almost surely) not constant curvature but <em>is</em> conformally equivalent to the Poincare metric. </p> <p>With these definitions in place: the original question is ill-posed. Knowing that a surface $S$ is of hyperbolic type does not suffice to tell us the metric. To be precise, there are conformally equivalent metrics $\rho_0$ and $\rho_1$ on the open disk $D$ so that the first is Gromov hyperbolic and the second is not. (Eg, let $\rho_0$ be the Poincare metric while $\rho_1$ has larger and larger "mushrooms" as you walk to infinity.)</p> <p>OLD ANSWER (written in terms of the above definitions):</p> <p>I'll assume that you are asking for a sufficient condition to ensure that a hyperbolic surface $S$ is Gromov hyperbolic. One condition is that $S$ has finite area. In this case $S$ has a compact core (which is of no interest in this setting) and a finite number of cusps. A cusp is obtained by modding out a horodisk by a parabolic isometry. All cusps are quasi-isometric to rays. Thus $S$ is quasi-isometric to a tree having one vertex and one ray per cusp.</p> <p>A simpler condition is that $\pi_1(S)$ is finitely generated. 
The allowed surfaces are now somewhat more complicated: in addition to cusps there can be funnels in the complement of the compact core. A funnel is obtained by modding out a half-plane by a hyperbolic isometry. All funnels are quasi-isometric to the hyperbolic plane. (This is a nice exercise!) So, here, $S$ is quasi-isometric to the one-point union of a collection of hyperbolic planes and rays. </p> <p>As for the "opposite direction": When the group is infinitely generated things can be very strange. For example, consider any cubic, connected graph $X$, of infinite diameter, where all edges have length one. (This is a very large class of metric spaces, even after passing to quasi-isometric equivalence classes.) Then, for any such graph $X$ there is a hyperbolic surface $S_X$ quasi-isometric to $X$.</p>
180,495
<blockquote> <p>Suppose $T$ is an everywhere defined linear map from a Hilbert space $\mathcal{H}$ to itself. Suppose $T$ is also symmetric so that $\langle Tx,y\rangle=\langle x,Ty\rangle$ for all $x,y\in\mathcal{H}$. Prove that $T$ is bounded directly from the uniform boundedness principle and not the closed graph theorem.</p> </blockquote> <p>This is problem III.13 in the Reed-Simon volume 1. Hints are welcome.</p>
Nate Eldredge
822
<p>Hint: Consider the family of linear functionals $f_x$ defined by $f_x(y) = \langle Tx,y \rangle$, as $x$ ranges over the unit ball of $\mathcal{H}$.</p>
766,765
<blockquote> <p>Let $f:[0,\infty)\rightarrow \mathbb{R}$ be a function bounded on each bounded interval. If $\lim_{x\rightarrow \infty} [f(x+1)-f(x)]=L$ then prove that $\lim_{x\rightarrow \infty}\displaystyle\frac{f(x)}{x}=L$</p> </blockquote> <p>I appreciate any hint to solve this problem. </p> <p>Thanks a lot!</p>
user21820
21,820
<p>For any $a &gt; 0$:</p> <p>&emsp; Let $y$ be such that $f(x+1) - f(x) \in L + [-a,a]$ for any $x \ge y$</p> <p>&emsp; Then $f(y+t+k) - f(y+t) \in k ( L + [-a,a] )$ for any $k \in \mathbb{N}$ and $t \in [0,1]$</p> <p>&emsp; Also $f(y+t)$ is bounded over all $t \in [0,1]$</p> <p>&emsp; Thus ${\large \frac{f(y+t+k)}{y+t+k} } \in {\large \frac{f(y+t)+k(L+[-a,a])}{y+t+k} } \to L+[-a,a]$ uniformly for all $t \in [0,1]$ as $k \to \infty$</p> <p>&emsp; Thus ${ \large \frac{f(x)}{x} } \in L + [-2a,2a]$ as $x \to \infty$</p> <p>Therefore ${ \large \frac{f(x)}{x} } \to L$ as $x \to \infty$</p> <p><br></p> <p>To explain what happens in the last two lines inside "For any $a&gt;0$", we basically have $\large\frac{p(t)+k(L+q(t))}{y+t+k}$ for some $p$ that is bounded on $[0,1]$ and $q$ that is in $[-a,a]$. As $k$ increases, $p(t)$ becomes insignificant because it is bounded, and also $\frac{k}{y+t+k}$ approaches 1 uniformly over all $t \in [0,1]$, hence the expression approaches something in $L+[-a,a]$ uniformly over all $t \in [0,1]$. "Uniformly" means that for any desired margin of error there is a common cutoff point for $k$ beyond which the expression is within the error margin for all $t$. The next line follows from this because eventually the expression will be within $a$ of $L+[-a,a]$.</p>
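A numeric illustration with a concrete choice (my own; here $f(x)=2x+\sin\sqrt x$, which is bounded on bounded intervals and has $f(x+1)-f(x)\to 2$, so $L=2$):

```python
import math

def f(x):
    # increments f(x+1) - f(x) = 2 + sin(sqrt(x+1)) - sin(sqrt(x)) -> 2
    return 2 * x + math.sin(math.sqrt(x))

for x in (10.0, 1000.0, 100000.0):
    print(x, f(x + 1) - f(x), f(x) / x)  # both columns approach L = 2
```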
2,602,438
<p>For an embedded software implementation I would like to compute,</p> <p>$S(b) = \sum_{i=1}^{N}log( x_i - b )$,</p> <p>for various values of $b$. Here $x_i$ is an array of fixed numbers.</p> <p>Is there a fast way to do this without having to recompute the sum?</p> <p>--</p> <p>I tried looking at the Taylor expansion of $log( x - b )$ around $x=a$</p> <p>$log( x - b ) \approx log(a-b) + \frac{x-a}{a-b} - \frac{(x-a)^2}{2(a-b)^2} + \frac{(x-a)^3}{3(a-b)^3} - \frac{(x-a)^4}{4(a-b)^4} + O((x-a)^5)$.</p> <p>Hence,</p> <p>$S(b) \approx \sum_{i=1}^N log(a-b) + \frac{x_i-a}{a-b} - \frac{(x_i-a)^2}{2(a-b)^2} + \frac{(x_i-a)^3}{3(a-b)^3} - \frac{(x_i-a)^4}{4(a-b)^4} + O((x_i-a)^5)$.</p> <p>Then in $S(b)$ all sums that depend on $x_i$ can be precomputed.</p> <p>However, I have my doubts about the accuracy of this approach.</p>
Community
-1
<p>$x^2\not \ge 0\implies $ $x $ is not a real number...</p>
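Returning to the log-sum question above: the Taylor idea in the question can be organized around precomputed moments $m_k=\sum_i (x_i-a)^k$, so every new $b$ costs only $O(K)$ operations after an $O(NK)$ precomputation. A rough Python sketch (my own; the series converges only when $|x_i-a|<|a-b|$ for all $i$, which quantifies the accuracy concern at the end of the question):

```python
import math, random

random.seed(0)
xs = [random.uniform(1.0, 2.0) for _ in range(1000)]  # sample data

def S_exact(b):
    return sum(math.log(x - b) for x in xs)

K = 20
a = sum(xs) / len(xs)  # expansion point near the data
moments = [sum((x - a) ** k for x in xs) for k in range(K + 1)]  # precomputed once

def S_approx(b):
    # S(b) = N*log(a-b) + sum_{k>=1} (-1)^(k+1) * m_k / (k * (a-b)^k)
    s = moments[0] * math.log(a - b)
    for k in range(1, K + 1):
        s += (-1) ** (k + 1) * moments[k] / (k * (a - b) ** k)
    return s

print(S_exact(0.0), S_approx(0.0))  # the two values agree closely here
```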
184,241
<p>I am having issues with the code:</p> <pre><code>CC = p0^2/(p0 - L) Dbuy = (1 - CC)*(L/p0) + (CC - p0)*Log[CC/p0] Dwait = (1 - p1)*(L/(p1 - p0)) Dlock = (1 - p1)*(1 - L/p0 - L/(p1 - p0)) + (p1 - CC)*(1 - L/p0) - L*Log[(p1 - p0)/(CC - p0)] RevLocking1 = Table[NMaximize[{x*Dbuy*p0 + x*Dwait*0.4*p1 + x*Dlock*(p0 + 0.4*L), x*(Dbuy + Dwait + Dlock) &lt;= 1, 0 &lt;= L &lt;= p0 &lt;= p1 &lt;= 1}, {p0, p1, L}, MaxIterations -&gt; 10000], {x, 1, 6, 0.25}] </code></pre> <p>The code works, but it ignores the constraint:</p> <pre><code>0 &lt;= L &lt;= p0 &lt;= p1 &lt;= 1 </code></pre> <p>Since it reaches values like:</p> <pre><code>L -&gt; 0.303712, p0 -&gt; -0.0156877, p1 -&gt; 0.900987 </code></pre> <p>Any ideas?</p>
Carl Woll
45,431
<p>You can use the undocumented <code>AllowedHeads</code> option of <a href="http://reference.wolfram.com/language/ref/Transpose" rel="noreferrer"><code>Transpose</code></a> to do this:</p> <pre><code>tr = Transpose[listData, AllowedHeads -&gt; {Association, List}] </code></pre> <blockquote> <p>&lt;|"Input" -> {1, 2, 3}, "double" -> {2, 4, 6}, "squared" -> {1, 4, 9}|></p> </blockquote> <p>and back:</p> <pre><code>Transpose[tr, AllowedHeads -&gt; {Association, List}] </code></pre> <blockquote> <p>{&lt;|"Input" -> 1, "double" -> 2, "squared" -> 1|>, &lt;|"Input" -> 2, "double" -> 4, "squared" -> 4|>, &lt;|"Input" -> 3, "double" -> 6, "squared" -> 9|>}</p> </blockquote>
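For readers coming from other languages: the same list-of-records to record-of-lists transpose can be sketched in plain Python (a hand-rolled analogue of what `AllowedHeads` does here, not related to the Mathematica internals):

```python
def transpose_records(records):
    # list of dicts -> dict of lists; keys are assumed identical in every record
    return {k: [r[k] for r in records] for k in records[0]}

def transpose_columns(columns):
    # dict of lists -> list of dicts (the inverse direction)
    keys = list(columns)
    n = len(columns[keys[0]])
    return [{k: columns[k][i] for k in keys} for i in range(n)]

data = [{"Input": 1, "double": 2, "squared": 1},
        {"Input": 2, "double": 4, "squared": 4},
        {"Input": 3, "double": 6, "squared": 9}]
cols = transpose_records(data)
print(cols)  # {'Input': [1, 2, 3], 'double': [2, 4, 6], 'squared': [1, 4, 9]}
assert transpose_columns(cols) == data  # the transforms are mutually inverse
```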
64,491
<p>I need to change the ticks on the x axis to represent pH values, but the ticks are not dispersed along the range of the x axis. Another question: why is the vertical line defined with Epilog not visible?</p> <pre><code>Kb1 := Kw/Ka1 Kb2 := Kw/Ka2 oh := Kw/x f[x_] := Cs Kb2 oh/(Kb2 oh + oh^2 + Kb1 Kb2) ff[x_] = Simplify[f[x]] Ka1 = 6.2 10^-8 Ka2 = 4.8 10^-13 Cs = 0.1 myTicks[xmin_, xmax_] := {#, -N[Log[10, #]]} &amp; /@FindDivisions[{xmin, xmax}, 5] LogLinearPlot[ff[x], {x, 1. 10^-15, 15 10^-10}, AxesLabel -&gt; {"pH", "[HA]"}, Ticks -&gt; {myTicks, Automatic}, PlotRange -&gt; All, Epilog -&gt; Line[{{Log[10, 1.725 10^-10], 0}, {Log[10, 1.725 10^-10], 0.1}}]] </code></pre> <p><img src="https://i.stack.imgur.com/vACwu.png" alt="enter image description here"></p>
ubpdqn
1,997
<p>In version 10,</p> <pre><code>LogLinearPlot[ff[x], {x, 10^-15, 15 10^-10}, Frame -&gt; True, FrameTicks -&gt; {{Automatic, Automatic}, {Table[{10^j, -j}, {j, -15, -9}], None}}, FrameLabel -&gt; {"pH", "[HA]"}, GridLines -&gt; {{{1.725 10^-10, Red}}, None}] </code></pre> <p><img src="https://i.stack.imgur.com/9b1le.png" alt="enter image description here"></p>
3,072,198
<p>How can one explain that <span class="math-container">$$\frac{d}{dx}\left(\int_0^x{\cos(t^2+t)dt}\right) = \cos(x^2+x)$$</span> Without solving the integral?</p> <p>I know it's related to the fundamental theorem of calculus, but here we have a derivative with respect to <span class="math-container">$x$</span>, while the antiderivative is with respect to <span class="math-container">$t$</span>. </p> <p>Thank you.</p>
Parcly Taxel
357,390
<p>Say <span class="math-container">$f(t)=\cos(t^2+t)$</span> and <em>an</em> antiderivative is <span class="math-container">$F(t)$</span>. The integral in question is, by the fundamental theorem of calculus, <span class="math-container">$$F(x)-F(0)$$</span> <span class="math-container">$F(0)$</span> is a constant and disappears upon differentiating with respect to <span class="math-container">$x$</span>, whereas <span class="math-container">$F(x)$</span> becomes <span class="math-container">$f(x)$</span> once again. Thus, after differentiation we must have the RHS as <span class="math-container">$\cos(x^2+x)$</span>.</p>
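The statement can also be checked numerically: approximate $F(x)=\int_0^x\cos(t^2+t)\,dt$ by quadrature and compare its difference quotient with $\cos(x^2+x)$. A self-contained sketch (my own; Simpson's rule with a fixed number of panels):

```python
import math

def F(x, n=2000):
    # Simpson's rule for the integral of cos(t^2 + t) over [0, x]; n must be even
    h = x / n
    s = math.cos(0.0) + math.cos(x * x + x)
    for i in range(1, n):
        t = i * h
        s += (4 if i % 2 else 2) * math.cos(t * t + t)
    return s * h / 3

x, h = 1.3, 1e-4
derivative = (F(x + h) - F(x - h)) / (2 * h)  # central difference quotient
print(derivative, math.cos(x * x + x))  # nearly equal, as the theorem predicts
```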
3,552,264
<p>How to find function <span class="math-container">$f(x)$</span> that has continuous derivative on <span class="math-container">$[0,2]$</span> satisfies the following conditions:</p> <ol> <li><span class="math-container">$f(2) = 3$</span></li> <li><span class="math-container">$\displaystyle \int_0^2 [f'(x)]^2 dx = 4$</span></li> <li><span class="math-container">$\displaystyle \int_0^2 x^2f(x) dx = \frac{1}{3}$</span></li> </ol> <p><em>My attempt:</em> By using integration by parts, I found that <span class="math-container">$\displaystyle \int_0^2 x^3f'(x) dx = 23$</span> and I tried to find constant <span class="math-container">$\alpha$</span> such that <span class="math-container">$\displaystyle \int_0^2 [f'(x) + \alpha x^3]^2 dx = 0$</span> so that I can have <span class="math-container">$f'(x) = -\alpha x^3$</span>. However, the result was strange, I obtained two different "ugly" values and failed to confirm whether the solution was right. I then searched for solution online but did not come across anything helpful.</p> <p>I would love to know is there another way to solve this problem. I'm grateful if anyone could help. Thanks in advance.</p>
Fred
380,717
<p>We have, by Cauchy - Schwarz:</p> <p><span class="math-container">\begin{align} 23 &amp; =\int_0^2 x^3f'(x) \, dx \le \left(\int_0^2 x^6 dx\right)^{1/2} \cdot \left(\int_0^2 (f'(x))^2 \, dx \right)^{1/2} \\ &amp; =2 \left(\int_0^2 x^6 \, dx\right)^{1/2} \le 2 \left(\int_0^2 2^6 \, dx \right)^{1/2} =\sqrt{2} \cdot 16 &lt; 23. \end{align}</span></p> <p>A contradiction ! Consequence ?</p>
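The arithmetic behind the contradiction is easy to spot-check (a trivial sketch of my own): $\int_0^2 x^6\,dx = 128/7$, so the sharp bound is $2\sqrt{128/7}\approx 8.55$, and even the cruder bound $16\sqrt2\approx 22.63$ used above already falls below $23$.

```python
import math

int_x6 = 2 ** 7 / 7  # integral of x^6 over [0, 2] equals 128/7
sharp = 2 * math.sqrt(int_x6)
crude = 16 * math.sqrt(2)  # the bound actually used in the answer
print(sharp, crude)  # ~8.55 and ~22.63, both strictly below 23
assert sharp < crude < 23
```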
286,989
<p>There is a very simple formulation for the character of irreducible representations of $S_n$ evaluated on an n-cycle, i.e. that it is 0 on all non-hook partitions, and $(-1)^m$ on hooks. Is there an analogous computation for irreducible characters of $B_n$, the hyperoctahedral group, evaluated on signed 2n-cycles? That is, on the conjugacy class indexed by the bipartition $([n],\emptyset)$?</p>
Richard Stanley
2,807
<p>In general, if $(\lambda,\mu)$ is a bipartition of $n$, then $$ \prod_i(p_{\lambda_i}(x)+p_{\lambda_i}(y))\cdot\prod_j (p_{\mu_j}(x)-p_{\mu_j}(y)) = \sum_{(\alpha,\beta)} \chi^{\alpha,\beta}(\lambda,\mu)s_\alpha(x)s_\beta(y), $$ where $(\alpha,\beta)$ ranges over all bipartitions of $n$ and $\chi^{\alpha,\beta}$ is the irreducible character of $B_n$ indexed by $(\alpha,\beta)$. Setting $(\lambda,\mu)= (n,\emptyset)$ gives $$ p_n(x)+p_n(y) = \sum_{(\alpha,\beta)} \chi^{\alpha,\beta}(n,\emptyset)s_\alpha(x)s_\beta(y). $$ But (as alluded to in the question) $$ p_n = \sum_{i=0}^{n-1} (-1)^i s_{n-i,1^i}, $$ so $$ p_n(x)+p_n(y)= \sum_{i=0}^{n-1} (-1)^i s_{n-i,1^i}(x) + \sum_{i=0}^{n-1} (-1)^i s_{n-i,1^i}(y). $$</p>
53,596
<p>It's "well known" that, for any weight $k$ and level $N$, the space $S_k(\Gamma_1(N))$ of cusp forms of that weight and level has a basis in which all the Hecke operators act by matrices with entries in $\mathbb{Z}$; consequently all the Hecke eigenvalues are algebraic numbers (indeed algebraic integers).</p> <p>I was reflecting on how to prove this while teaching an undergraduate course on modular forms. For $k \ge 2$ it's not hard: there's the Eichler-Shimura machinery which relates it to a question about cohomology, and the cohomology with $\mathbb{Z}$ coefficients does the job. Alternatively, and more or less equivalently, you use the pairing with modular symbols. Both of these methods break down for $k = 1$; the only argument I know that works in this case is to use the fact that $X_1(N)$ has a model as an algebraic variety, and weight $k$ modular forms correspond to sections of the $k$-th power of a line bundle that has a purely algebraic definition. But that's not really something I can stand up and explain to a class of undergraduate students!</p> <blockquote> <p>For cusp forms of weight $k = 1$, can the algebraicity of the Hecke eigenvalues be proved without quoting heavy machinery from arithmetic geometry? </p> </blockquote>
Socky
33,127
<p>Let $S = S_{\mathbf{Q}} = M_{13}(\Gamma_1(N),\mathbf{Q})$, and $S_{\mathbf{C}} = S \otimes \mathbf{C}$ denote the corresponding space of modular forms over $\mathbf{C}$.</p> <p>Let $V \subset S \times S$ be the subspace cut out by pairs of forms $(A,B)$ satisfying the following equation:</p> <p>$$A \cdot E_{12} = B \cdot \Delta $$</p> <p>As equations in the Fourier coefficients of $A$ and $B$ these are linear equations with coefficients in $\mathbf{Q}$. Since, by the $q$-expansion principle, a modular form can be recovered from some finite number of Fourier coefficients, $V$ is determined by the null space of some finite matrix with coefficients in $\mathbf{Q}$. Since a linear system over $\mathbf{Q}$ has the same rank over $\mathbf{C}$, it follows that $V_{\mathbf{C}} = V \otimes \mathbf{C}$, where $V_{\mathbf{C}}$ is the set of solutions in $S_{\mathbf{C}} \times S_{\mathbf{C}}$ of the same equations.</p> <p>On the other hand, there is an isomorphism: $$V_{\mathbf{C}} \rightarrow M_{1}(\Gamma_1(N),\mathbf{C})$$ given by $$(A,B) \mapsto \frac{A}{\Delta} = \frac{B}{E_{12}}$$ The point is that $E_{12}$ and $\Delta$ do not have any common zeros, so the image of this map clearly consists of holomorphic forms. Hence the map is well defined. However, if $F$ has weight one, then $(A,B) = (F \cdot \Delta, F \cdot E_{12})$ maps to $F$, so the map is surjective. It is clearly injective, so it is an isomorphism.</p> <p>It follows that the image of $V$ under this map gives a rational basis for $M_1(\Gamma_1(N))$. Since $V$ is then preserved by Hecke operators (as is obvious on $q$-expansions), the result follows.</p>
2,975,505
<p>Assume that g : <strong>R</strong> → <strong>R</strong> is a bijection and define f : <strong>R</strong> → <strong>R</strong> by</p> <p>f(x) = 2g(x) + 1.</p> <p>Determine, with proof, whether f is a bijection.</p> <p>My opinion:</p> <p><em>I know that a function should either be increasing or decreasing on an interval. So if g(x) > 0, then f(x)>0 and if g(x)&lt;0, then f(x)&lt;0. My thinking is that f(x) is either increasing or decreasing and hence it is also bijective.</em></p>
user29418
386,180
<p>The continuous function <span class="math-container">$h(x) = \pi + \frac{1}{2}\sin \left ( \frac{x}{2}\right) - x $</span> satisfies <span class="math-container">$h(0)=\pi&gt;0$</span> and <span class="math-container">$h(2\pi)=-\pi&lt;0$</span>, so by the Intermediate Value Theorem there is a point <span class="math-container">$a \in [0,2\pi]$</span> with <span class="math-container">$h(a)=0$</span>, that is, <span class="math-container">$f(a) = a$</span> for <span class="math-container">$f(x) = \pi + \frac{1}{2}\sin \left ( \frac{x}{2}\right)$</span>. This fixed point is unique because h(x) is monotonically decreasing (mentioned by Diger).</p> <hr> <p>In general, if the Intermediate Value Theorem applied to h(x) = f(x) - x shows that this function has a zero, then there is a point a such that f(a) - a = 0, or f(a) = a. This fixed point is unique when h(x) is strictly monotonic.</p>
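The fixed point of $f(x)=\pi+\frac12\sin(x/2)$ can also be located numerically: since $|f'(x)|=\frac14|\cos(x/2)|\le\frac14<1$, plain fixed-point iteration is a contraction and converges (a quick sketch of my own):

```python
import math

def f(x):
    return math.pi + 0.5 * math.sin(x / 2)

x = 0.0
for _ in range(100):
    x = f(x)  # contraction with constant 1/4, so convergence is fast

print(x, abs(f(x) - x))  # the unique fixed point in (0, 2*pi); residual near 0
```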
970,872
<p>There is a common brain teaser that goes like this:</p> <p>You are given two ropes and a lighter. This is the only equipment you can use. You are told that each of the two ropes has the following property: if you light one end of the rope, it will take exactly one hour to burn all the way to the other end. But it doesn't have to burn at a uniform rate. In other words, half the rope may burn in the first five minutes, and then the other half would take 55 minutes. The rate at which the two ropes burn is not necessarily the same, so the second rope will also take an hour to burn from one end to the other, but may do it at some varying rate, which is not necessarily the same as the one for the first rope. Now you are asked to measure a period of 45 minutes. How will you do it?</p> <p>Now I usually love brain teasers but this one frustrated me for a while because I could not prove that if a rope of non-uniform density is burned at both ends it burns in time $T/2$. I think I have sketched a proof by induction that shows that it's not actually true.</p> <p>Given a rope of uniform density the burn rate at either end is equal so clearly it burns in time $T/2$. Now, consider a rope of non-uniform density, the total time T for this rope to burn is the linear combination of the times of the uniform density "chunks" to burn, i.e. $T = T_1 + T_2 + \ldots + T_n$. So consider, $T/2 = T_1/2+ T_2/2 + \ldots + T_n/2$. If we look at each $T_i/2$ this is precisely the time it takes to burn the uniform segment $T_i$ if lit at both ends. Therefore, in order to arrive at a rope that burns in time $T/2$, one would need to light each uniform segment on both ends, not simply the end of both ends of the total rope. What am I doing wrong?</p>
Ralph D. Jeffords
826,037
<p>Here is my proof that no matter how much the rate of burning varies over the time interval <span class="math-container">$[0, 1]$</span> as long as the time to burn when lit at one end is 1 hour then lighting the rope simultaneously at both ends will consume the rope after 1/2 hour.</p> <p>Formally, the amount the rope (lit at one end) that burns in time <span class="math-container">$a$</span> is <span class="math-container">$\int_{t=0}^a r(t)dt$</span> where <span class="math-container">$r(t)$</span> is the rate of burning. The amount the rope that burns in time <span class="math-container">$a$</span> if lit from the other end is <span class="math-container">$\int_{t=0}^a r(1-t)dt$</span>. The one hour restriction is the fact <span class="math-container">$\int_{t=0}^1 r(t)dt = L$</span> where <span class="math-container">$L$</span> is the length of the rope. Thus the amount of rope (lit at both ends) that burns in time <span class="math-container">$a$</span> is <span class="math-container">$\int_{t=0}^a r(t)dt + \int_{t=0}^a r(1-t)dt = \int_{t=0}^a r(t)dt + \int_{t=1-a}^1 r(t)dt$</span> (using substitution rule for definite integrals). When time <span class="math-container">$a$</span> reaches 1/2 it is easy to see that the rope is entirely consumed but not before. Although this is an easy result and fundamental to this type of physical process, it is glossed over in presentations of the problem to the general (non-mathematical) public.</p>
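The identity is easy to confirm numerically for a deliberately non-uniform burn rate (my own sketch; the rate is normalized so the one-end burn takes exactly one hour and the rope has length $L=1$):

```python
# burn rate r(t) = (1 + 10 t^4)/3 on [0,1]; the integral of 1 + 10 t^4 is 3,
# so integral_0^1 r = 1: length L = 1 and a one-end burn takes 1 hour
def r(t):
    return (1 + 10 * t ** 4) / 3.0

def burned_both_ends(a, n=100000):
    # integral_0^a r(t) dt + integral_{1-a}^1 r(t) dt, by the midpoint rule
    h = a / n
    return h * sum(r((i + 0.5) * h) + r(1 - (i + 0.5) * h) for i in range(n))

print(burned_both_ends(0.50))  # ~1.0: the whole rope is consumed at half an hour
print(burned_both_ends(0.45))  # < 1: shortly before, some rope remains
```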
43,690
<p>I have to apologize because this is not the normal sort of question for this site, but there have been times in the past where MO was remarkably helpful and kind to undergrads with similar types of question and since it is worrying me increasingly as of late I feel that I must ask it.</p> <p>My question is: what can one (such as myself) contribute to mathematics?</p> <p>I find that mathematics is made by people like Gauss and Euler - while it may be possible to learn their work and understand it, nothing new is created by doing this. One can rewrite their books in modern language and notation or guide others to learn it too but I never believed this was the significant part of a mathematician work; which would be the creation of original mathematics. It seems entirely plausible that, with all the tremendously clever people working so hard on mathematics, there is nothing left for someone such as myself (who would be the first to admit they do not have any special talent in the field) to do. Perhaps my value would be to act more like cannon fodder? Since just sending in <em>enough</em> men in will surely break through some barrier.</p> <p>Anyway I don't want to ramble too much but I really would like to find answers to this question - whether they come from experiences or peoples biographies or anywhere.</p> <p>Thank you.</p>
Beren Sanders
1,148
<p>Terry Tao wrote on this subject; I think you will be happy with his <a href="http://terrytao.wordpress.com/career-advice/does-one-have-to-be-a-genius-to-do-maths/" rel="noreferrer">conclusions</a>.</p> <p>An excerpt: &quot;The number of interesting mathematical research areas and problems to work on is vast – far more than can be covered in detail just by the “best” mathematicians, and sometimes the set of tools or ideas that you have will find something that other good mathematicians have overlooked, especially given that even the greatest mathematicians still have weaknesses in some aspects of mathematical research. As long as you have education, interest, and a reasonable amount of talent, there will be some part of mathematics where you can make a solid and useful contribution.&quot;</p>
43,690
<p>I have to apologize because this is not the normal sort of question for this site, but there have been times in the past where MO was remarkably helpful and kind to undergrads with similar types of question and since it is worrying me increasingly as of late I feel that I must ask it.</p> <p>My question is: what can one (such as myself) contribute to mathematics?</p> <p>I find that mathematics is made by people like Gauss and Euler - while it may be possible to learn their work and understand it, nothing new is created by doing this. One can rewrite their books in modern language and notation or guide others to learn it too but I never believed this was the significant part of a mathematician work; which would be the creation of original mathematics. It seems entirely plausible that, with all the tremendously clever people working so hard on mathematics, there is nothing left for someone such as myself (who would be the first to admit they do not have any special talent in the field) to do. Perhaps my value would be to act more like cannon fodder? Since just sending in <em>enough</em> men in will surely break through some barrier.</p> <p>Anyway I don't want to ramble too much but I really would like to find answers to this question - whether they come from experiences or peoples biographies or anywhere.</p> <p>Thank you.</p>
G. Jay Kerns
10,351
<p>You don't have to be Michael Jordan to play basketball.</p>