Dataset schema: qid (int64), question (string), author (string), author_id (int64), answer (string). Each record below lists these fields in that order, one per line.
1,671,357
<p>I'm trying to solve a minimization problem whose purpose is to find a matrix whose square is close to another given matrix, but I can't find an effective tool to solve it.</p> <p>Here is my problem:</p> <blockquote> <p>Assume we have an unknown matrix $Q$ with parameters $q_{11}, q_{12}, q_{14}, q_{21}, q_{22}, q_{23}, q_{32}, q_{33}, q_{34}, q_{41}, q_{43}, q_{44}$, and a given matrix $G$, that is, $Q=\begin{pmatrix} q_{11}&amp;q_{12} &amp;0 &amp;q_{14} \\q_{21}&amp;q_{22}&amp; q_{23}&amp;0\\ 0&amp;q_{32}&amp; q_{33}&amp;q_{34}\\ q_{41}&amp;0&amp; q_{43}&amp;q_{44}\\ \end{pmatrix} $, $G=\begin{pmatrix} 0.48&amp;0.24 &amp;0.16 &amp;0.12 \\ 0.48&amp;0.24 &amp;0.16 &amp;0.12\\0.48&amp;0.24 &amp;0.16 &amp;0.12\\0.48&amp;0.24 &amp;0.16 &amp;0.12 \end{pmatrix} $.</p> <p>The problem is to find the values of $q_{11}, q_{12}, q_{14}, q_{21}, q_{22}, q_{23}, q_{32}, q_{33}, q_{34}, q_{41}, q_{43}, q_{44}$ such that the square of $Q$ is very close to the matrix $G$.</p> </blockquote> <p>I chose to minimize the Frobenius norm of their difference, that is, </p> <blockquote> <p>$ Q^* ={\arg\min}_{Q} \| Q^2-G\|_F$</p> <p>s.t. $0\leq q_{11}, q_{12}, q_{14}, q_{21}, q_{22}, q_{23}, q_{32}, q_{33}, q_{34}, q_{41}, q_{43}, q_{44} \leq 1$,$\quad$<br> $\quad$ $q_{11}+q_{12}+q_{14}=1$, $\quad$ $q_{21}+q_{22}+q_{23}=1$, $\quad$ $q_{32}+q_{33}+q_{34}=1$, $\quad$ $q_{41}+q_{43}+q_{44}=1$.</p> </blockquote> <p>I have been unable to find an effective tool to carry out this optimization. Can someone help me implement it?</p>
Samrat Mukhopadhyay
83,973
<p>If $Q\ne \pm G$, then a minimum cannot be achieved. You could try to write $Q$ as $Q=G+\delta$ where $$\delta=\begin{pmatrix} \delta_1 &amp; -\delta_1\\ \delta_2 &amp;-\delta_2 \end{pmatrix}$$ where the deltas have to be chosen in a way such that they satisfy the constraints, i.e. $$-0.4\le \delta_1,\delta_2\le 0.6$$ Then any such feasible $\delta$ will give you a solution that can make $Q^2$ arbitrarily close to $G$. For example, if you want to have $\|Q^2-G\|_F&lt; \epsilon$, then $$\|G\delta+\delta G+\delta^2\|_F&lt;\epsilon$$ This confines $\delta_1,\delta_2$ inside a hyperellipse of degree $4$, whose major and minor axis lengths are controlled by $\epsilon$.</p>
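Since the question explicitly asks for a tool: below is a minimal sketch (not part of the answer above) of one standard way to attack such a problem, projected gradient descent in NumPy. The helper names (`project_row_simplex`, `fit_Q`), step size, iteration count, and seed are my own choices, and the result is at best a local minimizer.

```python
import numpy as np

def project_row_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def fit_Q(G, mask, steps=4000, lr=0.01):
    """Heuristically minimize ||Q^2 - G||_F^2 over row-stochastic Q
    whose zero pattern is given by `mask` (True = free entry)."""
    rng = np.random.default_rng(0)
    Q = np.where(mask, rng.random(G.shape), 0.0)
    for i in range(G.shape[0]):               # start from a feasible point
        Q[i, mask[i]] = project_row_simplex(Q[i, mask[i]])
    for _ in range(steps):
        E = Q @ Q - G
        Q = Q - lr * 2.0 * (Q.T @ E + E @ Q.T)  # gradient of ||Q^2 - G||_F^2
        Q = np.where(mask, Q, 0.0)              # restore structural zeros
        for i in range(G.shape[0]):             # project rows back onto the simplex
            Q[i, mask[i]] = project_row_simplex(Q[i, mask[i]])
    return Q

G = np.tile([0.48, 0.24, 0.16, 0.12], (4, 1))
mask = np.array([[1, 1, 0, 1],
                 [1, 1, 1, 0],
                 [0, 1, 1, 1],
                 [1, 0, 1, 1]], dtype=bool)
Q = fit_Q(G, mask)
print(np.linalg.norm(Q @ Q - G))
```

If SciPy is available, `scipy.optimize.minimize` with method `SLSQP`, box bounds, and the four equality constraints would be the more standard tool; the projection approach above only needs NumPy.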
2,038,520
<p>I know that the series b. converges, since $\sum \frac{1}{n^p}$ converges for $p&gt;1$; so a. also converges. I want to know the sum.</p> <blockquote> <blockquote> <p>a. $1+\frac{1}{9}+\frac{1}{25}+\frac{1}{49}+\cdots$</p> <p>b. $1+\frac{1}{4}+\frac{1}{9}+\frac{1}{16}+\cdots$</p> </blockquote> </blockquote>
Macavity
58,320
<p>If you know <a href="https://en.wikipedia.org/wiki/Basel_problem" rel="nofollow noreferrer">$\displaystyle \sum_1^\infty \frac1{n^2} = \frac{\pi^2}6$</a>, then this is easy, as the even series is just $\displaystyle \sum_1^\infty \frac1{4n^2} = \frac{\pi^2}{24}$.</p> <p>I leave the odd series for you to separate out.</p>
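A quick numerical sanity check of the even-series value quoted above (partial sums only, so the tolerance is loose):

```python
import math

# partial sum of the even-indexed terms 1/(2n)^2 of the Basel series
even_partial = sum(1.0 / (2 * n) ** 2 for n in range(1, 200001))
print(even_partial, math.pi ** 2 / 24)  # the two agree to about 1e-6
```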
2,137,591
<blockquote> <p>$$\int \frac{1}{x+x\log x}\,dx$$</p> </blockquote> <p>I couldn't use any of the integration techniques to solve this, any help will be appreciated!</p>
Community
-1
<p><strong>"HINT":</strong> (Assuming of course that $\log=\ln$) It is $$\int\frac{1}{x}\frac{1}{1+\ln x}\,dx$$ so you can make the substitution $y=\ln x$.</p>
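Carrying the hinted substitution through gives $\int\frac{dy}{1+y}=\ln|1+\ln x|+C$; this completion is mine, not part of the hint. A finite-difference check that the candidate antiderivative differentiates back to the integrand:

```python
import math

def integrand(x):
    return 1.0 / (x + x * math.log(x))

def F(x):
    """Candidate antiderivative from the substitution y = ln x (my completion)."""
    return math.log(abs(1 + math.log(x)))

def check(x, h=1e-6):
    """Central-difference derivative of F minus the integrand."""
    return (F(x + h) - F(x - h)) / (2 * h) - integrand(x)

print(max(abs(check(x)) for x in [0.5, 1.5, 2.0, 10.0]))  # tiny residuals
```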
3,710,377
<p>In <em>Postmodern Analysis</em> by Jurgen Jost, the Lebesgue integral of a step function is defined as follows:</p> <p>Suppose we have a step function <span class="math-container">$t:W\subset\mathbb{R}^d\to \mathbb{R}$</span> defined on a cube <span class="math-container">$W\subset\mathbb{R}^d$</span> given by <span class="math-container">$$ t = \sum_j c_j \mathbb{1}_{W_j} \quad (*)$$</span> where <span class="math-container">$c_j$</span> is a real number and <span class="math-container">$W_j$</span> is a cube with side-length <span class="math-container">$\ell_j&gt;0$</span> for <span class="math-container">$j=1,2,\dots,k$</span> and if <span class="math-container">$j\neq j'$</span>, then <span class="math-container">$\text{int}(W_j)\cap \text{int}(W_{j'})=\varnothing$</span> and <span class="math-container">$\bigcup_j W_j = W$</span> (i.e. the cubes <span class="math-container">$\{W_j\}$</span> are almost disjoint and partition W, and <span class="math-container">$t$</span> is constant on the interior of each cube). Then we define the integral of <span class="math-container">$t$</span> to be <span class="math-container">$$ \int_{\mathbb{R}^d} t = \sum_{j=1}^k c_j\ell_j^d. $$</span></p> <p>I want to show that this definition is independent of the collection of cubes <span class="math-container">$\{W_j\}_{j=1}^k$</span>, as it is obvious that the function <span class="math-container">$t$</span> can have many representations of the form <span class="math-container">$(*)$</span>. Although it is "geometrically obvious in my mind's eye", I am having difficulty proving this fact. Any help is appreciated. 
</p> <p>What I have tried: Given two collections of cubes <span class="math-container">$\{W_j\}$</span> and <span class="math-container">$\{B_i\}$</span> where we can write <span class="math-container">$$ t = \sum_i a_i\mathbb{1}_{B_i} = \sum_j c_j\mathbb{1}_{W_j} $$</span> we need to show <span class="math-container">$$ \sum_i a_i k^d_i = \sum_j c_j \ell_j^d $$</span> where <span class="math-container">$k_i$</span> is the side length of the cube <span class="math-container">$B_i$</span> and <span class="math-container">$\ell_j$</span> is the side length of the cube <span class="math-container">$W_j$</span>.</p> <p>I am not sure how to formally prove this equality, in general. The other idea I have is to find some kind of canonical representation of <span class="math-container">$(*)$</span>.</p> <p>I feel as though I am missing something, because the author says this fact is obvious and gives no proof or justification. I don't want to rely on faith.</p>
Ross Millikan
1,827
<p>Hint: Suppose <span class="math-container">$g(x)=\frac x2$</span>. Of course, <span class="math-container">$f(x)=2x$</span> makes <span class="math-container">$f(g(x))=x$</span>, but there are other possibilities. Can you find one?</p>
51,509
<p>Here is a problem due to Feynman. If you take 1 divided by 243 you get 0.004115226337 .... It goes a little cockeyed after 559 when you're carrying out the decimal expansion, but it soon straightens itself out and repeats itself nicely. Now I want to see how many times it repeats itself. Does it do this indefinitely, or does it stop after a certain number of repetitions? Can you write a simple <em>Mathematica</em> program to verify one conjecture or the other?</p>
Mr.Wizard
121
<p>Since belisarius specifically refused to expound on his answer, which arguably would make my editing it for such purpose tantamount to vandalism, I shall post my own.</p> <p>Regarding <a href="http://reference.wolfram.com/mathematica/ref/RealDigits.html" rel="noreferrer"><code>RealDigits</code></a>:</p> <blockquote> <p>For integers and rational numbers with terminating digit expansions, RealDigits[x] returns an ordinary list of digits. For rational numbers with non-terminating digit expansions it yields a list of the form {a<sub>1</sub>,a<sub>2</sub>,...,{b<sub>1</sub>,b<sub>2</sub>,...}} representing the digit sequence consisting of the a<sub>i</sub> followed by infinite cyclic repetitions of the b<sub>i</sub>.  »</p> </blockquote> <p>Therefore we can use <code>RealDigits</code> to find the non-terminating cyclic digits of a fraction. The output syntax is of the form <code>{{___, r : {__}}, _}</code> where <code>r</code> is the list of repeating digits. The digits are easily extracted using that pattern, or more tersely <a href="http://reference.wolfram.com/mathematica/ref/Level.html" rel="noreferrer"><code>Level</code></a>:</p> <pre><code>RealDigits[1/243] ~Level~ {3} </code></pre> <blockquote> <pre><code>{4, 1, 1, 5, 2, 2, 6, 3, 3, 7, 4, 4, 8, 5, 5, 9, 6, 7, 0, 7, 8, 1, 8, 9, 3, 0, 0} </code></pre> </blockquote> <p>For comparison a number with a terminating decimal expansion:</p> <pre><code>RealDigits[1/4] ~Level~ {3} </code></pre> <blockquote> <pre><code>{} </code></pre> </blockquote>
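The 27 repeating digits returned above can be cross-checked independently of <em>Mathematica</em>: the period of the decimal expansion of $1/243$ equals the multiplicative order of $10$ modulo $243$. A small sketch (Python as a stand-in, since I can't run Mathematica here):

```python
def multiplicative_order(a, m):
    """Smallest k >= 1 with a^k == 1 (mod m); assumes gcd(a, m) == 1."""
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

period = multiplicative_order(10, 243)
print(period)  # 27, matching the length of the repeating block above
```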
51,509
<p>Here is a problem due to Feynman. If you take 1 divided by 243 you get 0.004115226337 .... It goes a little cockeyed after 559 when you're carrying out the decimal expansion, but it soon straightens itself out and repeats itself nicely. Now I want to see how many times it repeats itself. Does it do this indefinitely, or does it stop after a certain number of repetitions? Can you write a simple <em>Mathematica</em> program to verify one conjecture or the other?</p>
Greg Hurst
4,346
<p>You can also call <code>WolframAlpha["1/243"]</code>.</p>
2,027,337
<p>My homework sets up the problem as follows:</p> <blockquote> <p>An object moves horizontally in one dimension with a velocity given by $v(t) = 8\cos\left(\frac{\pi t}{6}\right)$ m/s.</p> <p>The position of the object is given by $s\left(t\right)=\int _0^t\:v\left(y\right)\:dy$ for $t\ge 0$. Find the position function for all $t\ge 0$.</p> </blockquote> <p>I find this problem worded differently than any other u-substitution problem I've worked on, and I'm having trouble figuring it out. Apparently I can use this relationship:</p> <blockquote> <p>$\int_a^b\:f\left(g\left(x\right)\right)g'\left(x\right)dx\:=\:\int_{g\left(a\right)}^{g\left(b\right)}f\left(u\right)du$</p> </blockquote> <p>...which I've used before. I assume $g(x)$ would equal my u-substitution, which I presume is $\frac{\pi t}{6}$ - but what confuses me are the boundaries, one of which is a variable. Could someone walk me through this?</p> <p>There is also a follow-up question:</p> <blockquote> <p>What is the period of the motion - that is, starting at any point, how long does it take for the object to return to that position?</p> </blockquote> <p>Since the period of the sine function is $2\pi$, do I just set the resulting equation to that and solve?</p>
5xum
112,884
<p>Well, the task tells you two things:</p> <ol> <li>The velocity, $v(t)$, is given as $v(t)=8\cos\frac{\pi t}{6}$</li> <li>The location is $\int_0^t v(y) dy$.</li> </ol> <p>From $1$, you know that $$v(t)=8\cos\frac{\pi t}{6}$$ meaning that $$v(y)=8\cos\frac{\pi y}{6}$$</p> <p>From the second, you then get </p> <p>$$s(t)=\int_0^t 8\cos\frac{\pi y}{6}dy$$</p> <p>which is a basic integral that you should be able to calculate.</p>
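For the velocity in the question, $v(t)=8\cos\frac{\pi t}{6}$, the integral evaluates to $s(t)=\frac{48}{\pi}\sin\frac{\pi t}{6}$ (my evaluation, not spelled out above). A cross-check of the closed form against a trapezoid-rule approximation:

```python
import math

def v(y):
    return 8 * math.cos(math.pi * y / 6)

def s_closed(t):
    """Closed form of the integral of v over [0, t] (my evaluation)."""
    return 48 / math.pi * math.sin(math.pi * t / 6)

def s_numeric(t, n=100000):
    """Trapezoid rule for the integral of v over [0, t]."""
    h = t / n
    return h * (0.5 * (v(0) + v(t)) + sum(v(k * h) for k in range(1, n)))

print(s_closed(3.0))  # ≈ 15.28, i.e. 48/pi, since sin(pi/2) = 1
```

For the follow-up: the motion repeats when $\frac{\pi t}{6}$ increases by $2\pi$, i.e. every $12$ seconds.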
2,384,422
<p>I'm really stuck on how to go about solving the following first order ODE; I've got little idea on how to approach it, and I'd really appreciate if someone could give me some hints and/or working for a solution so I can have a reference point on how to approach these sorts of problems.</p> <p>The following is one of many ODE's I've gotten off a problem set I found in a textbook at a library:</p> <p>$$y' = xe^{-\sin(x)} - y\cos(x)$$</p> <p>Can anyone help?</p>
Hasek
395,670
<p>This kind of ODE can be solved as follows:</p> <ol> <li>Solve the corresponding <strong>homogeneous</strong> equation.</li> </ol> <p>In your case it is $y'+y\cos(x)=0$, which has solution $y=c\cdot e^{-\sin(x)}$.</p> <ol start="2"> <li><strong>Treat the constant</strong> in the previous solution <strong>as a function of the variable $x$</strong> and substitute it into the original equation.</li> </ol> <p>So we have $y(x)=c(x)\cdot e^{-\sin(x)}$ and should substitute it into $y'+y\cos(x) = xe^{-\sin(x)}$.</p> <p>This leads us to the general solution $y(x) = \frac{1}{2}x^2e^{-\sin(x)}+ce^{-\sin(x)}$.</p>
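A quick finite-difference check that the stated general solution satisfies $y' + y\cos(x) = xe^{-\sin(x)}$, for one arbitrarily chosen value of the constant:

```python
import math

c = 1.7  # arbitrary constant in the general solution

def y(x):
    return (0.5 * x * x + c) * math.exp(-math.sin(x))

def residual(x, h=1e-6):
    """y' + y*cos(x) - x*exp(-sin(x)), with y' from a central difference."""
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx + y(x) * math.cos(x) - x * math.exp(-math.sin(x))

print(max(abs(residual(x)) for x in [-2.0, 0.5, 1.0, 3.0]))  # tiny residuals
```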
1,630,480
<p>Assume $U\subset\mathbb{R}\times\mathbb{R}^{n}=\mathbb{R}^{n+1}$, $U$ is open and $(t_0, \mathbf{x}_0)\in U$. Assume ${\bf f} (= {\bf f}(t,{\bf x})) : U \to \mathbb{R}$ is <em>continuous</em>. Then the following is called an <em>initial value problem</em>, with <em>initial condition</em>:</p> <p>\begin{align*} \frac{d\bf{x}}{dt} &amp;= {\bf f}(t, {\bf x}),\\ {\bf x}(t_0) &amp;= {\bf x}_0. \end{align*}</p> <p>My doubt is: $\bf{x}$ is a vector, so $\frac{d\bf{x}}{dt} \in \mathbb{R}^{n}$ but ${\bf f}(t, {\bf x}) \in \mathbb{R}$. Am I correct? So how can they be equal?</p> <p>Thanks for the help in advance.</p>
epi163sqrt
132,007
<p>We see in OP's example all $5$ different ways to multiply four matrices according to the associative law. This corresponds to the <em><a href="https://en.wikipedia.org/wiki/Catalan_number" rel="noreferrer">Catalan number</a></em></p> <p>$$C_3=\frac{1}{4}\binom{6}{3}=5.$$</p> <p>We write these $5$ variants explicitly with dots and obtain</p> <p>\begin{align*} &amp;(((A \cdot B)\cdot C)\cdot D)\\ &amp;((A\cdot (B\cdot C))\cdot D)\\ &amp;((A\cdot B)\cdot (C\cdot D))\\ &amp;(A\cdot ((B\cdot C)\cdot D))\\ &amp;(A\cdot (B\cdot (C\cdot D)))\\ \end{align*}</p> <blockquote> <p>We can <em>bijectively transform</em> this representation into strings of valid pairs of open and closed parentheses. We do so by skipping the matrices and all open parentheses and replacing the dots with opening parentheses.</p> <p>\begin{align*} &amp;(\ )\ (\ )\ (\ )\\ &amp;(\ (\ )\ )\ (\ )\\ &amp;(\ )\ (\ (\ )\ )\\ &amp;(\ (\ )\ (\ )\ )\\ &amp;(\ (\ (\ )\ )\ )\\ \end{align*}</p> <p>In general we consider strings of length $2n$ consisting of $n$ open and $n$ closed parentheses. Valid sequences can be characterized as follows: parsing a string from left to right, starting with $0$, <em>adding</em> $1$ when reading an open parenthesis and <em>subtracting</em> $1$ when reading a closed parenthesis, we always get a non-negative number. At the end we get $0$.</p> <p>Now let's count the number $C_n$ of all valid sequences of length $2n$. The number of <em>all</em> sequences is \begin{align*} \binom{2n}{n} \end{align*}</p> <p>A <em>bad</em> sequence contains $n$ open and $n$ closed parentheses, but reaches the value $-1$ at a certain step for the first time during parsing. When we have reached the value $-1$ we have parsed precisely one more closing parenthesis than opening parentheses.</p> <p>We now <em>reverse</em> from that point on all parentheses, i.e. we exchange all open with closed parentheses and vice-versa. 
This results in a sequence with two more closed parentheses than open parentheses, so we have a total of $n+1$ closed parentheses and $n-1$ open parentheses.</p> <p>It follows that the number of <em>bad</em> sequences is \begin{align*} \binom{2n}{n+1} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude that the number $C_n$ of all valid sequences of length $2n$ is \begin{align*} C_n=\binom{2n}{n}-\binom{2n}{n+1}=\frac{1}{n+1}\binom{2n}{n}\qquad \qquad n\geq 1 \end{align*}</p> </blockquote> <p>In OP's example, $n\geq 2$ matrices imply $n-1$ dots for multiplication. These dots can be substituted with $n-1$ open parentheses, giving $C_{n-1}=\frac{1}{n}\binom{2(n-1)}{n-1}$ different valid arrangements.</p>
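The identity $C_n=\binom{2n}{n}-\binom{2n}{n+1}=\frac{1}{n+1}\binom{2n}{n}$ can be checked against brute-force enumeration for small $n$ (a sketch; `is_valid` mirrors the prefix condition described above):

```python
from itertools import product
from math import comb

def is_valid(s):
    """Prefix condition: running depth never negative, and zero at the end."""
    depth = 0
    for ch in s:
        depth += 1 if ch == '(' else -1
        if depth < 0:
            return False
    return depth == 0

def catalan(n):
    """Total strings minus bad strings, as in the reflection argument."""
    return comb(2 * n, n) - comb(2 * n, n + 1)

brute = {n: sum(1 for s in product('()', repeat=2 * n) if is_valid(s))
         for n in range(1, 7)}
print(brute[3])  # 5, the number of ways to parenthesize four matrices
```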
2,917,535
<p>I have found this problem in a 10th grade textbook and it's given me headaches trying to solve it. It says, determine the set:</p> <p>$$ A = \left \{ x \in \mathbb Z| \root3\of{\frac{7x+2}{x+5}} \in \mathbb Z\right \} $$</p> <p>So I have to find a condition for x so that the expression under the radical is a perfect cube. I remember solving these kind of exercises with perfect squares, but I can't figure this one out. </p>
ajotatxe
132,456
<p>Hint: for "large" $x$, the expression under the radical lies between $6$ and $8$.</p>
2,917,535
<p>I have found this problem in a 10th grade textbook and it's given me headaches trying to solve it. It says, determine the set:</p> <p>$$ A = \left \{ x \in \mathbb Z| \root3\of{\frac{7x+2}{x+5}} \in \mathbb Z\right \} $$</p> <p>So I have to find a condition for x so that the expression under the radical is a perfect cube. I remember solving these kind of exercises with perfect squares, but I can't figure this one out. </p>
Sarvesh Ravichandran Iyer
316,409
<p>The key point is that if $\sqrt[3]{a}$ is an integer, then $a$ is an integer, because $a$ is the cube of some integer, and the cube of an integer is always an integer.</p> <p>Therefore, we conclude that $\frac{7x+2}{x+5}$ is an integer. Write this as $\frac{(7x+35) - 33}{x+5} = 7 - \frac{33}{x+5}$.</p> <p>Now, if $b$ is an integer, so is $7-b$. From this, we see that $\frac{33}{x+5}$ is an integer.</p> <p>Consequently, $x+5$ is a (not necessarily positive) divisor of $33$. And how many such divisors of $33$ are there? You get the corresponding values of $x$ and test if $7-\frac{33}{x+5}$ is a perfect cube or not. </p>
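The divisor search described above can be brute-forced over a range that safely contains all $x$ with $x+5$ dividing $33$ (a sketch; the helper name is mine):

```python
def solutions(lo=-100, hi=100):
    """All integer x in [lo, hi] for which (7x+2)/(x+5) is a perfect cube."""
    A = []
    for x in range(lo, hi + 1):
        if x == -5:
            continue  # denominator vanishes
        num, den = 7 * x + 2, x + 5
        if num % den != 0:
            continue  # the fraction must first be an integer
        v = num // den
        if any(n ** 3 == v for n in range(-10, 11)):
            A.append(x)
    return A

print(solutions())  # [-38], since 7 - 33/(-33) = 8 = 2^3
```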
1,115,645
<p>I understand that a primitive polynomial is a polynomial that generates all elements of an extension field from a base field. However I am not sure how to apply this definition to answer my question. Can someone explain to me how I need to start please?</p>
Will Brooks
206,592
<p>If you are happy to be vulgar, you can simply evaluate the expression at all $7$ residues and show that none of them is $0 \pmod 7$.</p> <p>Alternatively, $(x+4)^2 + 1\equiv x^2 + x + 3 \pmod 7$ and $-1$ is not a quadratic residue mod $7$.</p>
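Both routes can be machine-checked in a few lines (a sketch of the "vulgar" route, plus the completed square):

```python
# the "vulgar" route: x^2 + x + 3 over all residues mod 7
residues = [(x * x + x + 3) % 7 for x in range(7)]

# cross-check the completed square used in the second route
same_mod = all(((x + 4) ** 2 + 1) % 7 == (x * x + x + 3) % 7 for x in range(7))
print(residues, same_mod)  # no residue is 0, and the congruence holds
```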
602,286
<p>I'm reading a paper which uses the following fact; it appears to be standard but I am not sure where to look for a proof.</p> <blockquote> <p><strong>Claim.</strong> Let $M$ be a complete Riemannian manifold (assumed to be second countable, so no long lines). There is an increasing sequence of open sets $U_n$ with $\bigcup_n U_n = M$ and smooth, compactly supported functions $\phi_n : M \to [0,1]$ such that $\phi_n = 1$ on $U_n$ and $\sup_n |\nabla \phi_n| &lt; \infty$.</p> </blockquote> <p>This is trivial if $M$ is compact (take $U_n = M$ and $\phi_n = 1$). If we drop the requirement that the derivatives of $\phi_n$ be uniformly bounded, it's a consequence of the $\sigma$-compactness of $M$ and Urysohn's lemma. Also, the completeness is essential as we can see by taking $M$ to be an open interval. </p> <p>I would appreciate a proof (or at least a hint) or a reference.</p>
Community
-1
<p>I'll get you started: Factoring the right side, we find that</p> <p>$$\frac{dy}{dx} = x^2 y + x^2 - (y + 1) = (y + 1)(x^2 - 1)$$</p> <p>Upon rearrangement,</p> <p>$$\frac{dy}{y + 1} = (x^2 - 1) dx$$</p>
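Finishing the separation (my continuation, not part of the started answer) gives $y = Ce^{x^3/3 - x} - 1$; a finite-difference check that this satisfies $\frac{dy}{dx}=(y+1)(x^2-1)$:

```python
import math

C = 2.5  # arbitrary constant of integration

def y(x):
    return C * math.exp(x ** 3 / 3 - x) - 1

def residual(x, h=1e-6):
    """dy/dx - (y+1)(x^2-1), with dy/dx from a central difference."""
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx - (y(x) + 1) * (x * x - 1)

print(max(abs(residual(x)) for x in [-1.5, 0.0, 1.0, 2.0]))  # tiny residuals
```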
2,426,892
<blockquote> <p>Between which two integers does <span class="math-container">$\sqrt{2017}$</span> fall? </p> </blockquote> <p>Since <span class="math-container">$2017$</span> is a prime, there's not much I can do with it. However, <span class="math-container">$2016$</span> (the number before it) and <span class="math-container">$2018$</span> (the one after) are not, so I tried to factorise them. But that didn't work so well either, because they too are not perfect squares, so if I multiply them by a number to make them perfect squares, they're no longer close to <span class="math-container">$2017.$</span> How can I solve this problem?</p> <p>Update: Okay, since <span class="math-container">$40^2 = 1600$</span> and <span class="math-container">$50^2 = 2500$</span>, I just tried <span class="math-container">$45$</span> and <span class="math-container">$44$</span> and they happened to be the answer - but I want to be more mathematical than that... </p>
Will Jagy
10,400
<p>I can only imagine this was intended to be about $$ (10a + 5)^2 = 100 a (a+1) + 25, $$ $$ 15^2 = 225, $$ $$ 25^2 = 625, $$ $$ 35^2 = 1225, $$ $$ 45^2 = 2025. $$ Then $$ 44^2 = 2025 - 2 \cdot 45 + 1 = 2025 - 90 + 1 &lt; 2017. $$</p> <p>EXAMPLE: factor $10001 = 10^4 + 1$</p> <p>$$ 105^2 = 11025 $$ $$ 105^2 - 10001 = 1024 $$ $$ 1024 = 32^2 $$ $$ 10001 = 105^2 - 32^2$$ $$ 10001 = (105 - 32)(105 + 32) = 73 \cdot 137 $$ </p>
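The arithmetic behind both tricks is easy to machine-check (a sketch):

```python
# the (10a+5)^2 = 100*a*(a+1) + 25 pattern behind 15^2, 25^2, ..., 45^2
squares_identity = all((10 * a + 5) ** 2 == 100 * a * (a + 1) + 25
                       for a in range(100))

# bracketing sqrt(2017)
bracketing = 44 ** 2 < 2017 < 45 ** 2

# the difference-of-squares factorisation of 10001
factorisation = (105 - 32) * (105 + 32)
print(squares_identity, bracketing, factorisation)
```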
3,575,334
<p>I am trying to show that <span class="math-container">$\int_{-b}^{b} \frac{f(N+\frac{1}{2} + it)}{e^{2\pi i(N+\frac{1}{2} + it)}-1} dt \to 0$</span> as <span class="math-container">$N \to \infty$</span> where <span class="math-container">$|f(N+1/2+it)| \le A/(1+(N+1/2)^2)$</span> for some constant <span class="math-container">$A$</span>. </p> <p>To show this, I have <span class="math-container">$|\int_{-b}^{b} \frac{f(N+\frac{1}{2} + it)}{e^{2\pi i(N+\frac{1}{2} + it)}-1} dt |\le \frac{A}{1+N^2} \int_{-b}^b \frac{1}{|e^{2\pi i(N+\frac{1}{2})} e^{-2\pi t} - 1|} dt \le \frac{A}{1+N^2} \int_{-b}^b \frac{dt}{e^{-2 \pi t} - 1}.$</span></p> <p>My proof would be complete if the integral <span class="math-container">$\int_{-b}^b \frac{dt}{e^{-2 \pi t} - 1}$</span> existed. However, I don't think it does. How can I show that the contour integral vanishes in the limit?</p>
roundsquare
706,295
<p>J. W. Tanner's answer is correct, but just to do it via a Venn diagram:</p> <p><span class="math-container">$A - C$</span> is region I and region IV</p> <p><span class="math-container">$C - B$</span> is region III and region V</p> <p>Since there are no regions in both, <span class="math-container">$(A - C)\cap(C - B)$</span> must be empty, i.e. <span class="math-container">$\emptyset$</span></p> <p><a href="https://i.stack.imgur.com/11wlY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/11wlY.png" alt="enter image description here"></a></p>
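The region argument corresponds to a one-line check on concrete sets: any element of $A-C$ lies outside $C$, while any element of $C-B$ lies inside $C$. A randomized sanity check (my sketch):

```python
import random

random.seed(0)
for _ in range(100):
    A = set(random.sample(range(20), 8))
    B = set(random.sample(range(20), 8))
    C = set(random.sample(range(20), 8))
    # A - C avoids C entirely, C - B lives inside C, so they can never meet
    assert (A - C) & (C - B) == set()
print("no counterexample found")
```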
930,949
<p>Given that the circle $C$ has center $(a,b)$, where $a$ and $b$ are positive constants, that $C$ touches the $x$-axis, and that the line $y=x$ is a tangent to $C$, show that $a = (1 + \sqrt{2})b$.</p>
Dan Christensen
3,515
<blockquote> <p>Why are there so many different definitions of predicate??</p> </blockquote> <p>It would be hard to go wrong just memorizing the definition given by your prof. It's enough to know that there are many subtle variations. At this point in your career, it would be a waste of time to try to determine which definition is the best. After working through several sample problems, it will become second nature.</p> <blockquote> <p>Also, my professor says that the following sentence is a statement, for every x in the set of D, P(x). Where P(x) is a predicate of variable x.</p> <p>We know that for a predicate to be a statement, we need to quantify all variables, however, in the above statement, we didn't know what set D is. I mean, If we do not know set D, why would the above statement be a statement?</p> </blockquote> <p>You should clarify this with your prof, but D may just be his &quot;well-defined domain of values.&quot; Or the <em>domain of quantification</em>. If he was writing this using a universal quantifier, he might write:</p> <p><span class="math-container">$\forall x: P(x)$</span></p> <p>Some writers may be more explicit, especially if there is more than one domain of quantification under consideration, as often happens in mathematics:</p> <p><span class="math-container">$\forall x\in D: P(x)$</span></p> <p>Both can be thought of as <em>statements.</em> Opinions may vary, but it is a very minor point.</p>
2,674,102
<p>Is the following Proof Correct?</p> <blockquote> <p>Given that $T\in\mathcal{L}(\mathbf{R}^2)$ defined by $T(x,y) = (-3y,x)$. $T$ has no eigenvalues.</p> </blockquote> <p><em>Proof.</em> Let $\sigma_T$ denote the set of all eigenvalues of $T$ and assume that $\sigma_T\neq\varnothing$ then for some $\lambda\in\sigma_T$ we have $T(x,y) = \lambda(x,y) = (-3y,x)$ where $(x,y)\neq (0,0)$, equivalently $\lambda x = -3y\text{ and }\lambda y = x$. but then $\lambda(\lambda y) = -3y$ equivalently $y(\lambda^2+3) = 0$. The equation $\lambda^2+3 = 0$ has no solutions in $\mathbf{R}$ consequently $y=0$ and then by equation $\lambda y = x$ it follows that $x=0$ thus $(x,y) = (0,0)$ contradicting the fact that $(x,y)\neq (0,0)$.</p> <p>$\blacksquare$</p>
SAHEB PAL
309,736
<p><strong>Another approach:</strong></p> <p>The matrix representation of $T$ w.r.t. standard basis $\{(1,0),(0,1)\}$ of $\mathbb{R}^2$ is $A=\begin{pmatrix}0&amp;-3\\1&amp;0\end{pmatrix}$. So the characteristic equation $|A-\lambda I|=0 $ gives $\lambda^2+3=0$. Thus $T$ has no eigenvalues.</p>
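The characteristic-polynomial computation can be confirmed numerically (a sketch using NumPy, which the answer itself does not use):

```python
import numpy as np

# matrix of T(x, y) = (-3y, x) in the standard basis
A = np.array([[0.0, -3.0],
              [1.0, 0.0]])
eig = np.linalg.eigvals(A)
print(eig)  # ±i*sqrt(3): purely imaginary, so T has no real eigenvalues
```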
1,017,411
<blockquote> <p>Let <span class="math-container">$R$</span> be a commutative ring with <span class="math-container">$1$</span> and <span class="math-container">$M$</span> an <span class="math-container">$R$</span>-module. <span class="math-container">$$\varphi: \begin{cases}R &amp; \longrightarrow \text{end}_R(M) \\ a &amp; \longmapsto \lambda_a \end{cases} $$</span> is a ring isomorphism for <span class="math-container">$M=R$</span></p> </blockquote> <p><strong>My approach</strong>: First, to clarify, <span class="math-container">$\lambda_a: M \to M, x \mapsto ax$</span> is the homothetic mapping.</p> <p>I managed to show that <span class="math-container">$\varphi$</span> is a ring homomorphism with <span class="math-container">$\varphi(1_R)=\lambda_{1_R}=1_{\text{end}_R(M)}=id_M$</span></p> <p>Now I am stuck with the 2nd part. I was told that the easiest way to complete this exercise is to find the inverse mapping and show that the two compositions yield the identity mapping (on the respective set)</p> <p>At some point I was given the following mapping <span class="math-container">$$\xi : \begin{cases}\text{end}_R(R) &amp; \longrightarrow R \\ \delta &amp; \longmapsto \delta (1) \end{cases} $$</span></p> <p>I still fail to understand the intuition behind this mapping, how one comes up with such an idea and, on top of that, why it works. Here are my calculations:</p> <p>Let <span class="math-container">$x \in R$</span> be arbitrary <span class="math-container">$$\varphi(\xi(\delta(x)))=\varphi(\delta(1))=\lambda_{\delta(1)} $$</span> My <span class="math-container">$x \in R$</span> seems to have 'vanished', which is clearly a bad calculation on my end. 
So I suppose I have to do the calculation like this: <span class="math-container">$$\varphi(\xi (\delta(x)))=\varphi(\delta(1)(x))=\lambda_{\delta(1)(x)}\overset{?}=\lambda_{\delta(1)}(x)=\delta(1)x =x \delta(1) \\ = \delta(x) = id_{\text{end}_R(R)}\tag{*}\delta(x)$$</span></p> <p>After the answers provided below I understand that the above calculation does hold, but there is clearly some magic (and with magic I mean bad mathematics performed by me) going on at the step indicated with ?. It seems that my argument is always getting 'eaten up' or ends up in places where I can no longer work with it.</p> <p>Furthermore, let <span class="math-container">$a \in R$</span> be arbitrary: <span class="math-container">$$\xi(\varphi(a))=\xi(\lambda_a)=\lambda_a(1)=a\cdot 1=a=id_M(a) $$</span> which I am okay with.</p> <p>Could someone please enlighten me with some insight regarding this exercise? Especially in the calculation marked with (*) I am hopelessly lost (because of a mapping defined by a mapping through a mapping...)</p>
Timbuc
118,527
<p>I think you're forgetting a tiny thing here: observe $\;\delta\;$ is an $\;R$- homomorphism, and thus</p> <p>$$\delta(x)=\delta(x\cdot1)=x\cdot\delta(1)\;,\;\;\forall\,x\in R$$</p> <p>I think this solves the whole conundrum.</p>
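The two identities $\xi\circ\varphi=\mathrm{id}_R$ and $\varphi\circ\xi=\mathrm{id}$ can be checked exhaustively on a small concrete ring such as $R=\mathbb{Z}/6\mathbb{Z}$ (my illustration, not part of the original exercise; endomorphisms are represented by their value tables):

```python
n = 6  # work in the concrete ring R = Z/6Z

def phi(a):
    """phi(a) = lambda_a, the multiplication-by-a map, stored as a value table."""
    return tuple((a * x) % n for x in range(n))

def xi(delta):
    """xi(delta) = delta(1); delta is a value table."""
    return delta[1]

# By the R-linearity argument above (delta(x) = x * delta(1)), every
# R-endomorphism of R is one of these homothety tables:
endos = [phi(a) for a in range(n)]

round_trip_R = all(xi(phi(a)) == a for a in range(n))    # xi o phi = id_R
round_trip_end = all(phi(xi(d)) == d for d in endos)     # phi o xi = id
print(round_trip_R, round_trip_end)
```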
1,949,966
<h2>Q 1a</h2> <p>Is it possible to define a number $x$ such that $|x|=-1$, where $|\cdot|$ means absolute value, in the same manner that we define $i^2=-1$?</p> <p>I have no idea if it makes sense, but then again, $\sqrt{-1}$ used to not be a thing either.</p> <p>To be more explicit, I want as many properties to hold as possible, e.g. $|a|\times|b|=|a\times b|$ and $|a|=|-a|$, as some properties that seem to hold for all different types of numbers (or in some analogous way).</p> <hr> <h2>Q 1b</h2> <p>If we let the solution to $|x|=-1$ be $x=z_1$, and we allow the multiplicativity property, then</p> <p>$$|(z_1)^2|=1$$</p> <p>Or, further,</p> <p>$$|(z_1)^{2n}|=1\tag{$n\in\mathbb N$}$$</p> <p>Note that this does not mean $z_1$ is any such real, complex, or any other type of number. We used to think $|x|=1$ had two solutions, $x=1,-1$, but now we can give it the solution $x=e^{i\theta}$ for $\theta\in[0,2\pi)$. Adding in the solution $(z_1)^{2n}$ is no problem as far as I can see.</p> <p>However, some problems arise that I cannot quite resolve, for example,</p> <p>$$|z_1+3|=?$$</p> <p>There is no way to define such a value at the moment.</p> <p>Similarly, let $z_2$ be the number that satisfies the following:</p> <p>$$|z_2|=z_1$$</p> <p>As far as I see it, it is not possible to create $z_2$, given $z_1$ and $z_0\in\mathbb C$.</p> <p>The following has a solution, in case you were wondering.</p> <p>$$|\sqrt{z_1}|=i$$</p> <p>so no, I did not forget to consider such cases.</p> <p>But, more generally, I wish to define the following numbers in a recursive sort of way.</p> <p>$$|z_{n+1}|=z_n$$</p> <p>since, as far as I can tell, $z_{n+1}$ is not representable using $z_k$ for $k\le n$. In this way, the nature of $z_n$ goes on forever, unlike $i$, which has the solution $\sqrt i=\frac1{\sqrt2}(1+i)$.</p> <p>So, my second question is whether anyone can discern some properties of the $z_n$, defined as above. 
And what is $|z_1+3|=?$</p> <hr> <h2>Q 2a</h2> <p>This part is important, so I truly want you guys (and girls) to consider this:</p> <blockquote> <p>Can you construct a problem such that $|x|=-1$ will be required in a step as you solve the problem, but such that the final solution is a real/complex/anything already well known. This is similar to <em><a href="https://en.wikipedia.org/wiki/Casus_irreducibilis" rel="nofollow noreferrer">Casus irreducibilis</a></em>, which basically forced $i$ to exist by establishing its need to exist.</p> </blockquote> <p>I am willing to give a large rep bounty for anyone able to create such a scenario/problem. </p> <hr> <h2>Q 2b</h2> <p>And if it is truly impossible, why? Why is it not possible to define some 'thing' the solution to the problem, keep a basic set of properties of the absolute value, and carry on? What's so different between $|x|=-1$ and $x^2=-1$, for example?</p> <hr> <h2>Thoughts to consider:</h2> <p>Now, <a href="https://math.stackexchange.com/a/1345391/272831">Lucian</a> has pointed out that there are plenty of things we do not yet understand, like $z_i\in\mathbb R^a$ for $a\in\mathbb Q_+^\star\setminus\mathbb N$. There may very well exist such a number, but in a field we fail to understand so far.</p> <p>Similarly, the triangle inequality clearly cannot coexist with such numbers as it is. For the triangle inequality to exist, someone has to figure out how to make triangles with non-positive/real lengths.</p> <p>As for the <a href="https://en.wikipedia.org/wiki/Norm_(mathematics)#Definition" rel="nofollow noreferrer">properties/axioms of the norm</a> I want:</p> <p>$$p(v)=0\implies v=0$$</p> <p>$$p(av)=|a|p(v)$$</p>
Arthur
15,500
<p>First of all, you can define $|\cdot|$ to mean whatever you want in any given context, as long as you're clear and upfront about it.</p> <p>That being said, one usually wants $|\cdot|$ to be a <em>norm</em>, which means it fulfills a certain list of criteria. Among them is $|x|\geq 0$. If you break these rules, does your operation really deserve to be called "absolute value"? Does your operation deserve to be written using $|\cdot |$? Personally, I would say it doesn't, which means that using that symbol wouldn't be <em>wrong</em>, per se, but it would make it more difficult for your readers to understand what's going on, simply because of what they expect from that notation.</p> <p>One notable exception, as pointed out in the comments, is the determinant of square matrices. And real / complex numbers are square matrices (of dimension $1\times1$), so in that context we really have $|-1|=-1$. But that's a different operation.</p>
1,706,939
<p>Can anyone share an easy way to approximate $\log_2(x)$, given that $x$ is between $0$ and $1$?</p> <p>I'm trying to solve this using an old-fashioned calculator (i.e. no logs).</p> <p>Thanks!</p> <p>EDIT: I realize that I got a bit ahead of myself. The $x$ comes in the form of a fraction, e.g. $3/8$, which is indeed between $0$ and $1$, but whose logarithm could also be written as $\log_2(3) - \log_2(8)$. I am hoping there is a quick way to approximate this calculation to, let's say, $2$ decimals.</p>
CiaPan
152,299
<p>If your input involves just multiplication or division of small natural numbers and you don't need accuracy exceeding 4 decimal numbers, then logarithmic tables could be the simplest solution.</p> <p>See for example</p> <ul> <li><a href="http://www.rapidtables.com/math/algebra/logarithm/Logarithm_Table.htm" rel="nofollow">http://www.rapidtables.com/math/algebra/logarithm/Logarithm_Table.htm</a></li> <li><a href="http://myhandbook.info/table_2log.html" rel="nofollow">http://myhandbook.info/table_2log.html</a></li> </ul> <p>For much bigger or smaller numbers you may apply standard reduction: $$\log_2 (x\cdot 2^n) = \log_2 x + n$$ $$\log_2 (x/ 2^n) = \log_2 x - n$$</p> <p>with some pre-selected powers, say<br> $2^3 = 8$, $2^5 = 32$, $2^8=256$, $2^{10} = 1024$, $2^{15}=32768$, $2^{20} = 1048576$...</p> <hr> <p>You may also search the Web for some online $\log_2$ calculator, like <a href="http://www.miniwebtool.com/log-base-2-calculator/" rel="nofollow">this one</a>.</p>
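The reduction rule in action for the asker's example $3/8$, where the only table lookup needed is $\log_2 3\approx 1.585$ (a sketch):

```python
import math

# the only lookup needed: log2(3) from a 4-figure table
table_log2_3 = 1.585
# reduction: log2(3/8) = log2(3) - log2(2^3) = log2(3) - 3
approx = table_log2_3 - 3
print(approx)  # ≈ -1.415, well within 2-decimal accuracy
```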
1,706,939
<p>Can anyone share an easy way to approximate $\log_2(x)$, given $x$ is between $0$ and 1?</p> <p>I'm trying to solve this using an old fashioned calculator (i.e. no logs)</p> <p>Thanks!</p> <p>EDIT: I realize that I stepped a bit ahead. The x comes in the form of a fraction, e.g. 3/8, which is indeed between 0 and 1, but could also be written as log2(3) - log2(8). I am hoping there is a quick way to approximate this calculation to let's say 2 decimals</p>
N74
288,459
<p>When $x$ is in the range $]0.5, 1[$ you can easily find a binary representation of $\log_2(x)$ using this algorithm:</p> <ol> <li>Start writing $-0.$ (with the dot, as the result will be fractional) and evaluate $z=1/x$.</li> <li>Evaluate $z^2$: if $z^2&gt;2$, append $1$ to the result and let $z=z^2/2$; else append $0$ and let $z=z^2$.</li> <li>Go to step 2 until you have enough digits (or you have $z=1$).</li> </ol> <p>Calculate about $3.3$ binary digits for each decimal digit of accuracy before stopping the algorithm.</p> <p>You will find a number like $-0.0110101$ that can easily be converted to decimal as $-0.4140625$, weighting each digit as $1 \over 2^n$.</p> <p>Following is an example of the calculations for $\log_2(0.75)\approx -0.415$ (columns: $z$, $z^2$, digit, contribution). </p> <pre><code>1.333333333 0 -0. 1.333333333 1.777777778 0 0 1.777777778 3.160493827 1 -0.25 1.580246914 2.497180308 1 -0.125 1.558977373 0 0 1.558977373 2.430410448 1 -0.03125 1.215205224 1.476723736 0 0 1.476723736 2.180712994 1 -0.0078125 -0.4140625 </code></pre>
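The algorithm above translates directly into a short Python routine; this is a sketch of my own (the function name is mine), reproducing the worked example for $\log_2(0.75)$:

```python
def log2_frac(x, digits=7):
    """Approximate log2(x) for 0.5 < x < 1 by extracting binary digits.

    Squaring z doubles its logarithm; crossing 2 yields the next bit.
    """
    assert 0.5 < x < 1.0
    z = 1.0 / x                      # z in (1, 2), so log2(z) in (0, 1)
    bits = []
    for _ in range(digits):
        z = z * z
        if z > 2.0:                  # log2 crossed 1: emit bit 1
            bits.append(1)
            z /= 2.0
        else:                        # emit bit 0, keep squaring
            bits.append(0)
    # log2(x) = -log2(1/x) = -(0.b1 b2 b3 ...)_2
    return -sum(b / 2 ** (k + 1) for k, b in enumerate(bits))
```

With seven digits this reproduces the table: `log2_frac(0.75)` gives $-0.4140625$.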
1,679,920
<p>I'm working for a firm, who can only use straight lines and (parts of) circles.</p> <p>Now I would like to do the following: imagine a square of size $5\times5$. I would like to expand it with $2$ in the $x$-direction and $1$ in the $y$-direction. The expected result is a rectangle of size $7\times9$. Until here, everything is OK.</p> <p>Now I would like the edges to be rounded, but as the length expanding is different in $x$ and $y$ direction, the rounding should be based on ellipses, not on circles, but I don't have ellipses to my disposal, so I'll need to approximate those ellipses using circular arcs.</p> <p>I've been looking on the internet for articles about this matter (searching for splines, Bézier, interpolation, ...), but I don't find anything. I have tried myself to invent my own approximation using standard curvature calculations, but I don't find a way to glue the curvature circular arcs together.</p> <p>Does somebody know a way to approximate ellipses using circular arcs?</p> <p>Thanks<br> Dominique</p>
Paul H.
278,067
<p>Here is an article that I used to implement a routine for converting Bézier curves to tangent arcs that may help you. It includes C++ code. <a href="http://www.ryanjuckett.com/programming/biarc-interpolation/" rel="nofollow">Biarc Interpolation</a></p> <p>If this doesn't qualify as an answer on this site, maybe someone could post the link as a comment instead and I will remove the answer.</p>
242,636
<p>I am interested in the proof of the following result: Suppose that $A &gt; 1$, $\lambda \in \mathbb{R}$, and for $0 &lt; Z \leq 1$, let $U(Z)$ be the number of integer solutions $v$ of \begin{eqnarray} |v| &lt; ZA \ \ \ \text{ and } \ \ \ \| \lambda v \| &lt; Z A^{-1}. \end{eqnarray} Then, if $0 &lt; Z_1 &lt; Z_2 \leq 1$, we have $$ U(Z_1) \gg (Z_1/Z_2) \ U(Z_2). $$</p> <p>I would greatly appreciate any comments or hints on this! Thank you very much!</p> <p>PS Here $\|x\|$ denotes the distance the closest integer. </p>
user94168
94,168
<p>Is it a reference you want? Check chapter 12 (if I remember correctly) in Davenport's book on diophantine equations and inequalities.</p>
242,636
<p>I am interested in the proof of the following result: Suppose that $A &gt; 1$, $\lambda \in \mathbb{R}$, and for $0 &lt; Z \leq 1$, let $U(Z)$ be the number of integer solutions $v$ of \begin{eqnarray} |v| &lt; ZA \ \ \ \text{ and } \ \ \ \| \lambda v \| &lt; Z A^{-1}. \end{eqnarray} Then, if $0 &lt; Z_1 &lt; Z_2 \leq 1$, we have $$ U(Z_1) \gg (Z_1/Z_2) \ U(Z_2). $$</p> <p>I would greatly appreciate any comments or hints on this! Thank you very much!</p> <p>PS Here $\|x\|$ denotes the distance the closest integer. </p>
js21
21,724
<p>I can show the (exact) inequality $$(*) \ \ \ \ \ \ V(Z_1) \geq \left(\frac{Z_1}{Z_2} \right)^2 V(Z_2) \ \ \quad (\frac{2}{A} \leq Z_1 \leq Z_2 \leq \frac{A}{2}), $$ for a smoothed version of $U$ defined by $$ V(Z) = \sum_{\nu \in \mathbb{Z}} \mathrm{sinc}^2\left( \frac{\nu}{2ZA} \right) \left( 1 - \frac{A ||\lambda \nu||}{Z} \right)_{+}. $$ Indeed, the Fourier transform of the function $f(x) =\mathrm{sinc}^2\left( \frac{x}{2} \right)$ is given by the formula $\hat{f}(x)= (1- |x|)_+$, so that we have $$V(Z) = \sum_{\nu \in \mathbb{Z}} \ f \left( \frac{\nu}{ZA} \right) \hat{f}\left( \frac{A ||\lambda \nu||}{Z} \right).$$ Now, if $Z^{-1} A \geq 2$ and $ZA \geq 2$, then the Poisson summation formula yields $$ \hat{f}\left( \frac{A ||\lambda \nu||}{Z} \right) = \sum_{n \in \mathbb{Z}} \hat{f}\left( \frac{A (\lambda \nu + n )}{Z} \right) = \frac{Z}{A} \sum_{\mu \in \mathbb{Z}} f \left( \frac{Z \mu}{A} \right) e(\lambda \mu \nu), \\ \text{and similarly} \ \ \ \sum_{\nu \in \mathbb{Z}} f \left( \frac{\nu}{ZA} \right) e(\lambda \mu \nu) = ZA \ \hat{f}\left( ZA ||\lambda \mu||\right), $$ where $e(x) = e^{2 i \pi x}$. We thus get $$V(Z) = \frac{Z}{A} \sum_{\nu \in \mathbb{Z}} \ f \left( \frac{\nu}{ZA} \right) \left( \sum_{\mu \in \mathbb{Z}} f \left( \frac{Z \mu}{A} \right) e(\lambda \mu \nu)\right) \\ = \frac{Z}{A} \sum_{\mu \in \mathbb{Z}} f \left( \frac{Z \mu}{A} \right) \left( \sum_{\nu \in \mathbb{Z}} \ f \left( \frac{\nu}{ZA} \right) e(\lambda \mu \nu)\right) \\ =Z^2 \sum_{\mu \in \mathbb{Z}} f \left( \frac{Z \mu}{A} \right) \hat{f}\left( ZA ||\lambda \mu||\right) \\ = Z^2 V(Z^{-1}). $$ Since $V(Z^{-1})$ is a decreasing function of $Z$, this yields $(*)$.</p> <p>Note that $(*)$ implies an inequality of the form $$ U'(Z_1) \gg \left(\frac{Z_1}{Z_2} \right)^2 U(Z_2),$$ with $$ U'(Z_1) = \sum_{\substack{\nu \in \mathbb{Z} \\ ||\lambda \nu|| &lt; Z_1 A^{-1}}} \min \left( 1, \left(\frac{Z_1 A}{|\nu|}\right)^2 \right). $$</p>
1,940,448
<p>I am stuck on this question. Could someone help me?</p> <p>$$ \text{Find value of } S = \displaystyle\sum_{n=0}^{\infty} \cfrac{1}{n!(n+2)} $$</p> <p>I am supposed to show that $ S = 1 $ in two ways: <br /><br /> 1) Integrate the Taylor series of $ xe^x $ <br /> 2) Differentiate the Taylor series of $ \frac{e^x - 1}{x} $</p> <p>For (1), I tried using the fact that the Taylor series of $ e^x = \displaystyle\sum_{n=0}^{\infty} \frac{x^n}{n!} $</p> <p>Now, multiplying $ x $ into the Taylor series gives: $$ xe^x = \displaystyle\sum_{n=0}^{\infty} \cfrac{x^{n+1}}{n!} $$</p> <p>Integrating this yields the following:</p> <p>$$ \begin{align} \int_0^x xe^x &amp;= \displaystyle\int_0^x\sum_{n=0}^{\infty} \cfrac{x^{n+1}}{n!} \\ &amp;= \displaystyle\sum_{n=0}^{\infty} \cfrac{x^{n+2}}{n!(n+2)} \end{align} $$</p> <p>I am not sure how to carry on from here.</p> <p>For (2), I am not sure how to find the Taylor series for $ \frac{(e^x - 1)}{x} $</p> <p>Is anyone able to assist me?</p>
Carl Schildkraut
253,966
<p>For (1), you need to be careful with using $x$ as your dummy variable: try</p> <p>$$\int_0^t xe^x\ dx = \int_0^t \sum_{n=0}^{\infty} \frac{x^{n+1}}{n!}\ dx$$</p> <p>$$ = \sum_{n=0}^{\infty} \int_0^t\frac{x^{n+1}}{n!}\ dx$$</p> <p>$$ = \sum_{n=0}^{\infty} \frac{x^{n+2}}{n!(n+2)}\bigg|_0^t$$</p> <p>$$ = \sum_{n=0}^{\infty} \frac{t^{n+2}}{n!(n+2)}$$</p> <p>At $t=1$, this equals our desired sum. We then evaluate the integral</p> <p>$$\int_0^1 xe^x\ dx$$</p> <p>which can be done with integration by parts. </p> <p>For (2), we can simply use that the Taylor Series of $e^x$ is</p> <p>$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$</p> <p>$$e^x-1 = \sum_{n=1}^{\infty} \frac{x^n}{n!}$$</p> <p>$$\frac{e^x-1}{x} = \sum_{n=1}^{\infty} \frac{x^{n-1}}{n!} = \sum_{n=0}^{\infty} \frac{x^n}{(n+1)!}$$</p> <p>and then differentiate, and set $x=1$ as you did before.</p>
1,940,448
<p>I am stuck on this question. Could someone help me?</p> <p>$$ \text{Find value of } S = \displaystyle\sum_{n=0}^{\infty} \cfrac{1}{n!(n+2)} $$</p> <p>I am supposed to show that $ S = 1 $ in two ways: <br /><br /> 1) Integrate the Taylor series of $ xe^x $ <br /> 2) Differentiate the Taylor series of $ \frac{e^x - 1}{x} $</p> <p>For (1), I tried using the fact that the Taylor series of $ e^x = \displaystyle\sum_{n=0}^{\infty} \frac{x^n}{n!} $</p> <p>Now, multiplying $ x $ into the Taylor series gives: $$ xe^x = \displaystyle\sum_{n=0}^{\infty} \cfrac{x^{n+1}}{n!} $$</p> <p>Integrating this yields the following:</p> <p>$$ \begin{align} \int_0^x xe^x &amp;= \displaystyle\int_0^x\sum_{n=0}^{\infty} \cfrac{x^{n+1}}{n!} \\ &amp;= \displaystyle\sum_{n=0}^{\infty} \cfrac{x^{n+2}}{n!(n+2)} \end{align} $$</p> <p>I am not sure how to carry on from here.</p> <p>For (2), I am not sure how to find the Taylor series for $ \frac{(e^x - 1)}{x} $</p> <p>Is anyone able to assist me?</p>
Khosrotash
104,171
<p>Another way $$S = \sum_{n=0}^{\infty} \cfrac{1}{n!(n+2)}\\S = \displaystyle\sum_{n=0}^{\infty} \cfrac{1}{n!(n+2)}\times \frac{n+1}{n+1}\\=\sum_{n=0}^{\infty} \cfrac{n+1}{(n+2)!}\\= \sum_{n=0}^{\infty} \cfrac{(n+2)-1}{(n+2)!}\\= \sum_{n=0}^{\infty} \cfrac{n+2}{(n+2)!}-\cfrac{1}{(n+2)!}\\= \sum_{n=0}^{\infty} \cfrac{1}{(n+1)!}-\cfrac{1}{(n+2)!}\\= \sum_{n=0}^{\infty} f(n)-f(n+1)\\= \frac{1}{(0+1)!}-\lim_{n \to \infty}\frac{1}{(n+2)!}\\=1-0 $$</p>
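Either derivation can be sanity-checked numerically; a quick Python sketch comparing the partial sums with the telescoped closed form $1 - \frac{1}{(N+1)!}$:

```python
import math

N = 20
partial = sum(1 / (math.factorial(n) * (n + 2)) for n in range(N))
telescoped = 1 - 1 / math.factorial(N + 1)   # from the telescoping argument
# Both are 1.0 to double precision for N = 20.
```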
4,263,629
<blockquote> <p>Let <span class="math-container">$A=\{(x,y) \in \Bbb R^2 \mid x \ge 1, 0&lt;y&lt;\frac{1}{x^2}\}$</span>. Show that <span class="math-container">$m_2(A) &lt; \infty$</span> where <span class="math-container">$m$</span> is the Lebesgue measure.</p> </blockquote> <p>I know that the integral <span class="math-container">$\int_{1}^\infty \frac{1}{x^2} dx = 1$</span>; is it true that <span class="math-container">$m_2(A)$</span> is less than or equal to the Riemann integral?</p> <p>Another approach would be to cover the set <span class="math-container">$A$</span>. For this I considered <span class="math-container">$I_k=(k,k+\varepsilon) \times (0, \frac{1}{\varepsilon^2})$</span>. Now <span class="math-container">$\ell(I_k) =\frac{k}{\varepsilon^2} + \frac{1}{\varepsilon}$</span> but the sum of these isn't very nice so I guess I need to find another cover?</p>
Mark
470,733
<p>You are close. Measure of a set is equal to the Lebesgue integral of the constant function <span class="math-container">$1$</span> on that set. So the measure of your set is equal to the double integral <span class="math-container">$\iint\limits_A 1 dxdy$</span>. Since the function has an absolutely convergent improper Riemann integral, the Lebesgue integral is equal to the improper Riemann integral here. And by Fubini's theorem the Riemann integral is indeed equal to <span class="math-container">$\int_1^\infty\int_0^{\frac{1}{x^2}}1dydx=\int_1^\infty\frac{1}{x^2}dx$</span>, which is a finite number.</p>
1,855,824
<blockquote> <p>Given $a_1=1$ and $a_n=a_{n-1}+4$ where $n\geq2$ calculate, $$\lim_{n\to \infty }\frac{1}{a_1a_2}+\frac{1}{a_2a_3}+\cdots+\frac{1}{a_na_{n-1}}$$</p> </blockquote> <p>First I calculated few terms $a_1=1$, $a_2=5$, $a_3=9,a_4=13$ etc. So $$\lim_{n\to \infty }\frac{1}{a_1a_2}+\frac{1}{a_2a_3}+\cdots+\frac{1}{a_na_{n-1}}=\lim_{n\to \infty }\frac{1}{5}+\frac{1}{5\times9}+\cdots+\frac{1}{a_na_{n-1}} $$</p> <p>Now I got stuck. How to proceed further? Should I calculate the sum ? Please help.</p>
lab bhattacharjee
33,337
<p>HINT:</p> <p>$$\dfrac4{a_ma_{m-1}}=\dfrac{a_m-a_{m-1}}{a_ma_{m-1}}=?$$</p> <p>$a_m=1+4\cdot(m-1)=?$</p> <p>Do you recognize the <a href="https://en.wikipedia.org/wiki/Telescoping_series">Telescoping series</a>?</p>
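Without spoiling the hint, a quick numerical check in Python suggests what the limit should be (the closed form in the comment is what the telescoping yields):

```python
n = 10_000
a = [1 + 4 * k for k in range(n)]               # a_1, a_2, ... (0-indexed)
s = sum(1 / (a[i] * a[i + 1]) for i in range(n - 1))
# Telescoping gives s = (1/4) * (1 - 1/a_n), so s -> 1/4 as n grows.
```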
3,725,385
<p><span class="math-container">$\{(x, y, z)\} \space$</span> with <span class="math-container">$\space x + y + z = 0$</span></p> <p>Working through some problems in a textbook and I'm not very confident about checking if subsets are subspaces. I know that for a subset to be a subspace of <span class="math-container">$\space \mathbb{R}^3 \space$</span> it must be closed under addition and scalar multiplication but I'm not sure how to check this with examples. Any help would be appreciated!</p>
Maryam
626,408
<p>It is because the subset is the inverse image of the subspace <span class="math-container">$\{0\}$</span> under the linear map <span class="math-container">$(x,y,z)\mapsto x+y+z$</span>.</p>
3,245,796
<p>Let <span class="math-container">$f$</span> have a continuous second derivative. Prove that</p> <p><span class="math-container">$$f(x) = f(a) + (x - a)f'(a) + \int_a^x(x - t)f''(t) dt.$$</span></p> <p>This is a modification of exercise 6.6.4 from Advanced Calculus by Fitzpatrick. I have seen that this question has been asked here: <a href="https://math.stackexchange.com/questions/785897/proving-fx-f0-f0x-int-0x-x-t-ft-dt-for-all-x">Proving $f(x) = f(0) + f&#39;(0)x + \int_0^x (x-t) f&#39;&#39;(t) dt$ for all x</a>. However, there didn't seem to be a suitable answer.</p> <p>Here is my attempt at the problem.</p> <p>Since <span class="math-container">$f$</span> has a continuous second derivative, then the first derivative is also continuous. Therefore, by the first fundamental theorem of calculus, we have that</p> <p><span class="math-container">$$f(x) = f(a) + \int_a^x f'(t)dt.$$</span></p> <p>Expanding out the right-hand side of the above using integration by parts, we see that</p> <p><span class="math-container">$$f(x) = f(a) + f'(t)t - \int_a^x tf''(t) dt.$$</span></p> <p>This is where I am confused.</p>
peek-a-boo
568,204
<p>Let <span class="math-container">$u(t) = f'(t)$</span> and let <span class="math-container">$v(t) = t-x$</span> (don't be confused by the fact that there's an <span class="math-container">$x$</span> in the definition of <span class="math-container">$v(t)$</span>; right now we are keeping <span class="math-container">$x$</span> fixed and varying <span class="math-container">$t$</span>). Then, we apply integration by parts: <span class="math-container">\begin{align} \int_a^x f'(t) \cdot 1 \, dt &amp;= \int_a^x u(t) \cdot v'(t) \, dt \\ &amp;= u(t) \cdot v(t) \bigg\rvert_a^x - \int_a^x u'(t) \cdot v(t) \, dt \\ &amp;= \left(f'(x) \cdot 0 - f'(a) \cdot (a-x) \right) - \int_a^x f''(t) \cdot (t-x) \, dt \\ &amp;= (x-a) f'(a) + \int_a^x (x-t) f''(t) \, dt \end{align}</span> Hence, <span class="math-container">\begin{align} f(x) &amp;= f(a) + \int_a^x f'(t) \, dt \\ &amp;= f(a) + (x-a) f'(a) + \int_a^x (x-t) f''(t) \, dt \end{align}</span></p>
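The final identity can be verified numerically for a concrete $f$; a small Python sketch using $f = \sin$ and a midpoint rule for the remainder integral (the helper name is mine):

```python
import math

def remainder_integral(fpp, a, x, n=10_000):
    """Midpoint rule for the remainder term int_a^x (x - t) f''(t) dt."""
    h = (x - a) / n
    return sum((x - t) * fpp(t) * h
               for t in (a + (k + 0.5) * h for k in range(n)))

f, fp, fpp = math.sin, math.cos, (lambda t: -math.sin(t))
a, x = 0.0, 1.3
lhs = f(x)
rhs = f(a) + (x - a) * fp(a) + remainder_integral(fpp, a, x)
# lhs and rhs agree to roughly the quadrature error
```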
833,143
<p>Wolfram alpha solves $\sqrt{x+1}\ge\sqrt{x+2}+\sqrt{x+3}$ for $x$, and answers $x=-2/3(3+\sqrt{3})$. How did it do it? Thanks!</p>
Hagen von Eitzen
39,174
<p>We need $x\ge -1$ in order for all roots to be defined. Then the right-hand side is positive, so let us divide by the positive number $\sqrt{x+1}$ (and note that $x+1&gt;0$): $$ 1\ge\sqrt{1+\frac1{1+x}}+\sqrt {1+\frac{2}{1+x}}\ge 2,$$ a contradiction!</p>
1,685
<p>Are there books or article that develop (or sketch the main points) of Euclidean geometry without fudging the hard parts such as angle measure, but might at times use coordinates, calculus or other means so as to maintain rigor or avoid the detail involved in Hilbert-type axiomatizations?</p> <p>I am aware of Hilbert's foundations and the book by Moise. I was wondering if there is anything more modern that tries to stay (mostly) in the tradition of synthetic geometry. </p>
Robin Chapman
226
<p>You might look at Hartshorne's <a href="http://books.google.co.uk/books?id=EJCSL9S6la0C&amp;lpg=PP1&amp;dq=hartshorne&amp;pg=PP1#v=onepage&amp;q&amp;f=false" rel="noreferrer"><em>Geometry: Euclid and Beyond</em></a>.</p>
1,685
<p>Are there books or article that develop (or sketch the main points) of Euclidean geometry without fudging the hard parts such as angle measure, but might at times use coordinates, calculus or other means so as to maintain rigor or avoid the detail involved in Hilbert-type axiomatizations?</p> <p>I am aware of Hilbert's foundations and the book by Moise. I was wondering if there is anything more modern that tries to stay (mostly) in the tradition of synthetic geometry. </p>
Anton Petrunin
12,434
<p>Try <a href="http://www.jstor.org/discover/10.2307/1968336?uid=3739864&amp;uid=2129&amp;uid=2&amp;uid=70&amp;uid=4&amp;uid=3739256&amp;sid=21102247102467" rel="nofollow"><em>A Set of Postulates for Plane Geometry</em> by Birkhoff</a>.</p>
2,728,248
<blockquote> <p>Let $K=\mathbb{Q}(\sqrt{-2})$. Show that $\mathcal{O}(K)$ is a principal ideal domain. Deduce that every prime $p\equiv 1, 3$ (mod 8) can be written as $p = x^2 + 2y^2$ with $x, y \in \mathbb{Z}$.</p> </blockquote> <p>As $-2$ is squarefree and $-2\equiv 2$ (mod 4), we have $\mathcal{O}(K) = \mathbb{Z}[\sqrt{-2}]$. The discriminant is $\Delta = -8$. The degree is $n = 2$. The signature is $(0, 1)$. Thus the Minkowski bound is</p> <p>$$ B_K = \frac{2!}{2^2}\times\frac{4}{\pi}\times \sqrt{8} = \frac{4\sqrt{2}}{\pi}&lt;2$$</p> <p>Hence $Cl(K)$ is generated by the empty set of ideal classes and so $Cl(K) = \{1\}$. So this means $\mathcal{O}(K)$ is a principal ideal domain I believe...</p> <p>Ok, now let $p \equiv 1$ or $3$ (mod 8). By quadratic reciprocity, $-2 \equiv \alpha^2$ (mod $p$) for some integer $\alpha$. Thus</p> <p>$$X^2 + 2 \equiv (X + \alpha)(X - \alpha) \quad\text{(mod}~~ p).$$</p> <p>Ok, now I am slightly stuck, can we apply some theorem here? I am not sure if the above is correct or how to get to the desired result.</p>
Mathmo123
154,802
<p>The theorem you're after is the Kummer-Dedekind theorem:</p> <blockquote> <p><strong>Theorem</strong>: Let $p$ be a prime, and let $\beta\in \mathcal O_K$ be such that $K=\mathbb Q(\beta)$ and $p\nmid (\mathcal O_K:\mathbb Z[\beta])$. Let $f(X)$ be the minimal polynomial of $\beta$ over $\mathbb Q$. Suppose that $$f(X) \equiv f_1(X)^{e_1}\cdots f_m(X)^{e_m}\pmod p. $$ Then $p$ splits as $p\mathcal O_K = \mathfrak{p_1^{e_1}\cdots p_m^{e_m}}$ in $\mathcal O_K$.</p> </blockquote> <p>In this case, when $p\equiv 1,3\pmod 8$, we see that $p$ splits completely in $K$. If $\mathfrak p$ is a prime dividing $p$, then $\mathfrak p$ has norm $p$. Moreover, since $K$ is a PID, $\mathfrak p=\langle a+b\sqrt{-2}\rangle$ for some $a,b\in\mathbb Z$. It follows that $p = N(\mathfrak p) = a^2+2b^2.$</p>
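To see the deduced representation concretely, here is a brute-force Python check (the helper name is mine) over a few primes $p \equiv 1, 3 \pmod 8$:

```python
import math

def rep_x2_plus_2y2(p):
    """Search for nonnegative x, y with x^2 + 2*y^2 == p."""
    for y in range(math.isqrt(p // 2) + 1):
        r = p - 2 * y * y
        x = math.isqrt(r)
        if x * x == r:
            return x, y
    return None                      # no representation found

for p in [3, 11, 17, 19, 41, 43, 73, 83, 89, 97]:   # all are 1 or 3 mod 8
    x, y = rep_x2_plus_2y2(p)
    assert x * x + 2 * y * y == p
```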
2,065,254
<p>Let $f: \mathbb{R} \to \mathbb{R}$ be a function that is twice differentiable.</p> <p>We know that: $$\lim_{x\to-\infty}\ f(x) = 1$$</p> <p>$$\lim_{x\to\infty}\ f(x) = 0$$</p> <p>$$f(0) = \pi$$</p> <p>We have to prove that there exist at least two points $x$ at which $f''(x) = 0$.</p> <p>How could we do it in a rigorous way? It is pretty intuitive, but rigorously it isn't that simple for me...</p>
Doug M
317,162
<p>Since $f(0) > \lim\limits_{x\to -\infty} f(x)$ and $f(0) > \lim\limits_{x\to \infty} f(x)$, $f(x)$ is not monotonic.</p> <p>Since $f(x)$ is continuous and differentiable, it takes on a maximum value somewhere, say at $c$, where $f'(c) = 0$.</p> <p>On each tail there exists an $x$ such that $f'(x) = 0$.</p> <p>By the mean value theorem, there exist points in $(-\infty, c)$ and $(c,\infty)$ where $f''(x) = 0$.</p>
4,498,199
<p>Exercise 1.2.1(vii) from Page 5 of Keith Devlin's &quot;The Joy of Sets&quot;:</p> <blockquote> <p>Prove the following assertion directly from the definitions. The drawing of &quot;Venn diagrams&quot; is forbidden; this is an exercise in the manipulation of logical formalisms. <span class="math-container">$$(x\subseteq y)\leftrightarrow(x\cap y =x)\leftrightarrow(x\cup y=y)$$</span></p> </blockquote> <p><strong>Attempt at solution</strong>:</p> <p>This is not a difficult claim to understand or prove in terms of sets, but I can't figure out the pure logical formalism. I guess I should break this into seven individual implications?</p> <ol> <li><span class="math-container">$\forall w(w\in x\rightarrow w\in y)\rightarrow \forall z(z\in x\rightarrow (z\in x\enspace\wedge\enspace z\in y)) $</span></li> <li><span class="math-container">$\forall w(w\in x\rightarrow w\in y)\rightarrow \forall z((z\in x\enspace\wedge\enspace z\in y)\rightarrow z\in x))$</span></li> <li><span class="math-container">$\forall z(z\in x\rightarrow(z\in x \enspace\wedge\enspace z\in y))\rightarrow \forall w(w\in x\rightarrow w\in y)$</span></li> <li><span class="math-container">$\forall z(z\in x\rightarrow(z\in x \enspace\wedge\enspace z\in y))\rightarrow \forall w((w \in x \enspace\lor\enspace w\in y)\rightarrow w\in y)$</span></li> <li><span class="math-container">$\forall z(z\in x\rightarrow(z\in x \enspace\wedge\enspace z\in y))\rightarrow \forall w(w\in y\rightarrow(w \in x \enspace\lor\enspace w\in y))$</span></li> <li><span class="math-container">$\forall w((w\in x\enspace\lor\enspace w\in y)\rightarrow w\in y)\rightarrow\forall z((z\in x \enspace\wedge\enspace z\in y)\rightarrow z\in x)$</span></li> <li><span class="math-container">$\forall w((w\in x\enspace\lor\enspace w\in y)\rightarrow w\in y)\rightarrow\forall z(z\in x \rightarrow (z\in x \enspace\wedge\enspace z\in y))$</span></li> </ol> <p>In addition to wondering if I'm doing this correctly, I also feel like I gained nothing 
from writing out these implications.</p>
Dan Velleman
414,884
<p>You could try doing these proofs using Proof Designer:</p> <p><a href="https://djvelleman.people.amherst.edu/pd.html" rel="nofollow noreferrer">https://djvelleman.people.amherst.edu/pd.html</a></p> <p>I think the result would be the kind of proof that Devlin has in mind.</p>
2,134,653
<blockquote> <p>There is a vessel holding 40 litres of milk. 4 litres of milk is initially taken out from the vessel and 4 litres of water is then poured in. After this, 5 litres of the mixture is replaced with six litres of water, and finally six litres of the mixture is replaced with six litres of water. How much milk is there in the vessel?</p> </blockquote> <p>I have tried:</p> <p>Initially the vessel contains 40 litres of milk:</p> <p>4 litres out means -> 36 litres </p> <p>4 litres of water is poured in -> 4 litres</p> <p>so now the total quantity is 40 litres </p> <p>Mixture containing water and milk in the ratio: 36:4 i.e. 9:1</p> <p>Again 5 litres of mixture is replaced with six litres of water</p> <p>for that:</p> <p>9x - 9/10*5 : x - 1/10*5</p> <p>Now the ratio becomes:</p> <p>90x - 45 : 10x - 5 i.e. 9x - 9 : 2x - 1</p> <p>six litres of water is added</p> <p>9x - 9 : 2x - 5</p> <p>again six litres of mixture is replaced, then</p> <p>9x - 9 - 9/10*6 : 2x - 5 - 9/10*6</p> <p>that is </p> <p>90x - 144 : 10x - 84</p> <p>after adding six litres of water again we get </p> <p>90x - 144 : 10x - 78</p> <p>so the milk content is </p> <p>90x - 144 + 10x - 78 = 41</p> <p>100x - 222 = 41</p> <p>100x = 263</p> <p>x = 2.63</p> <p>Again substituting the value x = 2.63 in 90x - 144, I get 92.7 litres of milk, but the total itself is only 41 litres.</p> <p><strong>What mistake am I making? Please can anyone guide me to the answer.</strong></p>
Peter Phipps
15,984
<p>Initial configuration is $40$ litres of milk and no water, say $m=40$ and $w=0$.</p> <p>Remove $4$ litres of milk and add $4$ litres of water: $m=36, w=4$.</p> <p>The mixture is $36:4$ or $9:1$. $5$ litres of mixture contains $5\times \frac{9}{10}=\frac 92$ litres of milk and $5\times \frac{1}{10}=\frac 12$ litres of water.</p> <p>Remove $5$ litres of mixture: $m=36-\frac 92=\frac{63}2, w=4-\frac 12 = \frac 72$.</p> <p>Add $6$ litres of water: $m=\frac{63}2, w=\frac 72+6=\frac{19}2$.</p> <p>The mixture is now $63:19$. $6$ litres of mixture contains $6\times \frac{63}{82}=\frac{189}{41}$ litres of milk and $6\times \frac{19}{82}=\frac{57}{41}$ litres of water.</p> <p>Remove $6$ litres of mixture: $m=\frac{63}2-\frac{189}{41}=\frac{2205}{82}, w=\frac{19}2-\frac{57}{41}=\frac{665}{82}$.</p> <p>Add $6$ litres of water: $m=2205/82, w=\frac{665}{82}+6 = \frac{1157}{82}$.</p> <p>Final configuration: $\frac{2205}{82}$ litres of milk (approx $26.9$) and $\frac{1157}{82}$ litres of water (approx $14.1$).</p>
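The bookkeeping above can be replayed exactly with rational arithmetic; a small Python sketch (the helper name is mine):

```python
from fractions import Fraction as F

def remove_mixture(m, w, amount):
    """Remove `amount` litres of a well-stirred milk/water mixture."""
    total = m + w
    return m - amount * m / total, w - amount * w / total

m, w = F(40), F(0)
m, w = m - 4, w + 4                  # replace 4 L of pure milk with water
m, w = remove_mixture(m, w, 5)       # take out 5 L of mixture...
w += 6                               # ...and pour in 6 L of water
m, w = remove_mixture(m, w, 6)       # take out 6 L of mixture...
w += 6                               # ...and pour in 6 L of water
# m = 2205/82 (about 26.9 L of milk), w = 1157/82 (about 14.1 L of water)
```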
1,795,836
<p>Let's say that $A \subset X$ is a deformation retract. It follows that $A$ is both a retract and a space homotopically equivalent to $X$. Is the converse true? Probably not, but I couldn't find any example yet.</p> <p>More specifically the converse would be:</p> <p>If $A \subset X$ is a retract which is homotopic to $X$ as a topological space then does there exist a homotopy between the retraction and the identity map: $$H:X \times [0, 1] \to X$$ such that $H(x,0)=x$, $H(x,1)\in A$ and $H(a,1)=a$ for $a\in A$.</p>
Stefan Hamcke
41,672
<p>No. Let $X = \{0,1,2,3,\dots\}$ and $A = \{1,2,3,\dots\}$, both with the discrete topology, and let $i: A \to X$ be the inclusion. Then $i$ has a retraction $r: X \to A, n\mapsto\max\{n,1\}$, and is even a cofibration. $X$ and $A$ are clearly homeomorphic. The inclusion, however, is not a homotopy equivalence.</p>
39,476
<p>Fold is an extension of Nest for 2 arguments. How does one extend this concept to multiple arguments. Here is a trivial example:</p> <pre><code>FoldList[#1 (1 + #2) &amp;, 1000, {.01, .02, .03}] </code></pre> <p>Say I want do something like:</p> <pre><code>FoldList[#1(1+#2)-#3&amp;,1000,{.01,.02,.03},{100,200,300}] </code></pre> <p>where 100,200,300 are the values for #3. I know Fold doesn't work this way. I'm looking for an approach that will work like this... ((1000*(1.01)-100)*1.02-200)*1.03-300).</p> <p>Is there a way to extend Fold to more than 2 arguments? or is there a better approach for solving problems like this?</p>
Leonid Shifrin
81
<p>Yes, there is. Group your extra arguments in a list, and address them by their positions in the function under <code>Fold</code>. For your particular example:</p> <pre><code>FoldList[#1 (1 + First@#2) - Last@#2 &amp;, 1000, Transpose@{{.01, .02, .03}, {100, 200, 300}}] (* {1000, 910., 728.2, 450.046} *) </code></pre>
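For comparison, the same idea — zip the extra argument lists into tuples and unpack them inside the folded function — looks like this in Python (an illustrative translation, not part of the original answer):

```python
from functools import reduce
from itertools import accumulate

rates = [0.01, 0.02, 0.03]
withdrawals = [100, 200, 300]

def step(acc, rate_and_w):
    rate, w = rate_and_w
    return acc * (1 + rate) - w

# Fold analogue: ((1000*1.01 - 100)*1.02 - 200)*1.03 - 300
final = reduce(step, zip(rates, withdrawals), 1000)

# FoldList analogue (Python 3.8+): keeps every intermediate value
trajectory = list(accumulate(zip(rates, withdrawals), step, initial=1000))
# trajectory is approximately [1000, 910.0, 728.2, 450.046]
```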
21,752
<blockquote> <p>"Let $P$ be the change-of-basis matrix from a basis $S$ to a basis $S&#39;$ in a vector space $V$. Then, for any vector $v \in V$, we have $$P[v]_{S&#39;}=[v]_{S} \text{ and hence, } P^{-1}[v]_{S} = [v]_{S&#39;}$$</p> <p>Namely, if we multiply the coordinates of $v$ in the original basis $S$ by $P^{-1}$, we get the coordinates of $v$ in the new basis $S&#39;$." - Schaum's Outlines: Linear Algebra. 4th Ed.</p> </blockquote> <p>I am having a lot of difficulty keeping these matrices straight. Could someone please help me understand the reasoning behind (what appears to me as) the counter-intuitive naming of $P$ as the change of basis matrix from $S$ to $S&#39;$? It seems like $P^{-1}$ is the matrix which actually changes a coordinate vector in terms of the 'old' basis $S$ to a coordinate vector in terms of the 'new' basis $S&#39;$...</p> <p><em>Added:</em></p> <blockquote> <p>"Consider a basis $S = \{u_1,u_2,...,u_n\}$ of a vector space $V$ over a field $K$. For any vector $v\in V$, suppose $v = a_1u_1 +a_2u_2+...+a_nu_n$</p> <p>Then the coordinate vector of $v$ relative to the basis $S$, which we assume to be a column vector (unless otherwise stated or implied), is denoted and defined by $[v]_S = [a_1,a_2,...,a_n]^{T}$. "</p> <p>"Let $S = \{ u_1,u_2,...,u_n\}$ be a basis of a vector space $V$, and let $S&#39;=\{v_1,v_2,...,v_n\}$ be another basis. (For reference, we will call $S$ the 'old' basis and $S&#39;$ the 'new' basis.) Because $S$ is a basis, each vector in the 'new' basis $S&#39;$ can be written uniquely as a linear combination of the vectors in S; say,</p> <p>$\begin{array}{c} v_1 = a_{11}u_1 + a_{12}u_2 + \cdots +a_{1n}u_n \\ v_2 = a_{21}u_1 + a_{22}u_2 + \cdots +a_{2n}u_n \\ \cdots \cdots \cdots \\ v_n = a_{n1}u_1 + a_{n2}u_2 + \cdots +a_{nn}u_n \end{array}$</p> <p>Let $P$ be the transpose of the above matrix of coefficients; that is, let $P = [p_{ij}]$, where $p_{ij} = a_{ij}$. 
Then $P$ is called the <em>change-of-basis matrix</em> from the 'old' basis $S$ to the 'new' basis $S&#39;$." - Schaum's Outline: Linear Algebra 4th Ed.</p> </blockquote> <p>I am trying to understand the above definitions with this example:</p> <p>Basis vectors of $\mathbb{R}^{2}: S= \{u_1,u_2\}=\{(1,-2),(3,-4)\}$ and $S&#39; = \{v_1,v_2\}= \{(1,3), (3,8)\}$ the change of basis matrix from $S$ to $S&#39;$ is $P = \left( \begin{array}{cc} -\frac{13}{2} &amp; -18 \\ \frac{5}{2} &amp; 7 \end{array} \right)$.</p> <p>My current understanding is the following: normally vectors such as $u_1, u_2$ are written under the assumption of the usual basis that is $u_1 = (1,-2) = e_1 - 2e_2 = [u_1]_E$. So actually $[u_1]_S = (1,0)$ and I guess this would be true in general... But I am not really understanding what effect if any $P$ is supposed to have on the basis vectors themselves (I think I understand the effect on the coordinates relative to a basis). I guess I could calculate a matrix $P&#39;$ which has the effect $P&#39;u_1, P&#39;u_2,...,P&#39;u_n = v_1, v_2,..., v_n$ but would this be anything?</p>
Qiaochu Yuan
232
<p>The situation here is closely related to the following situation: say you have some real function $f(x)$ and you want to shift its graph to the right by a positive constant $a$. Then the correct thing to do to the function is to shift $x$ over to the <em>left</em>; that is, the new function is $f(x - a)$. In essence you have shifted the graph to the right by shifting the coordinate axes to the left. </p> <p>In this situation, if you have a vector $v$ expressed in some basis $e_1, ... e_n$, and you want to express it in a new basis $Pe_1, .... Pe_n$ (this is why $P$ is called the change of basis matrix), then you multiply the numerical vector $v$ by $P^{-1}$ in order to do this. You should carefully work through some numerical examples to convince yourself that this is correct. Consider, for example, the simple case that $P$ is multiplication by a scalar.</p> <p>The lesson here is that one must carefully distinguish between vectors and the components used to express a vector in a particular basis. <a href="http://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors">Vectors transform covariantly, but their components transform contravariantly</a>. </p>
21,752
<blockquote> <p>"Let $P$ be the change-of-basis matrix from a basis $S$ to a basis $S&#39;$ in a vector space $V$. Then, for any vector $v \in V$, we have $$P[v]_{S&#39;}=[v]_{S} \text{ and hence, } P^{-1}[v]_{S} = [v]_{S&#39;}$$</p> <p>Namely, if we multiply the coordinates of $v$ in the original basis $S$ by $P^{-1}$, we get the coordinates of $v$ in the new basis $S&#39;$." - Schaum's Outlines: Linear Algebra. 4th Ed.</p> </blockquote> <p>I am having a lot of difficulty keeping these matrices straight. Could someone please help me understand the reasoning behind (what appears to me as) the counter-intuitive naming of $P$ as the change of basis matrix from $S$ to $S&#39;$? It seems like $P^{-1}$ is the matrix which actually changes a coordinate vector in terms of the 'old' basis $S$ to a coordinate vector in terms of the 'new' basis $S&#39;$...</p> <p><em>Added:</em></p> <blockquote> <p>"Consider a basis $S = \{u_1,u_2,...,u_n\}$ of a vector space $V$ over a field $K$. For any vector $v\in V$, suppose $v = a_1u_1 +a_2u_2+...+a_nu_n$</p> <p>Then the coordinate vector of $v$ relative to the basis $S$, which we assume to be a column vector (unless otherwise stated or implied), is denoted and defined by $[v]_S = [a_1,a_2,...,a_n]^{T}$. "</p> <p>"Let $S = \{ u_1,u_2,...,u_n\}$ be a basis of a vector space $V$, and let $S&#39;=\{v_1,v_2,...,v_n\}$ be another basis. (For reference, we will call $S$ the 'old' basis and $S&#39;$ the 'new' basis.) Because $S$ is a basis, each vector in the 'new' basis $S&#39;$ can be written uniquely as a linear combination of the vectors in S; say,</p> <p>$\begin{array}{c} v_1 = a_{11}u_1 + a_{12}u_2 + \cdots +a_{1n}u_n \\ v_2 = a_{21}u_1 + a_{22}u_2 + \cdots +a_{2n}u_n \\ \cdots \cdots \cdots \\ v_n = a_{n1}u_1 + a_{n2}u_2 + \cdots +a_{nn}u_n \end{array}$</p> <p>Let $P$ be the transpose of the above matrix of coefficients; that is, let $P = [p_{ij}]$, where $p_{ij} = a_{ij}$. 
Then $P$ is called the <em>change-of-basis matrix</em> from the 'old' basis $S$ to the 'new' basis $S&#39;$." - Schaum's Outline: Linear Algebra 4th Ed.</p> </blockquote> <p>I am trying to understand the above definitions with this example:</p> <p>Basis vectors of $\mathbb{R}^{2}: S= \{u_1,u_2\}=\{(1,-2),(3,-4)\}$ and $S&#39; = \{v_1,v_2\}= \{(1,3), (3,8)\}$; the change of basis matrix from $S$ to $S&#39;$ is $P = \left( \begin{array}{cc} -\frac{13}{2} &amp; -18 \\ \frac{5}{2} &amp; 7 \end{array} \right)$.</p> <p>My current understanding is the following: normally vectors such as $u_1, u_2$ are written under the assumption of the usual basis that is $u_1 = (1,-2) = e_1 - 2e_2 = [u_1]_E$. So actually $[u_1]_S = (1,0)$ and I guess this would be true in general... But I am not really understanding what effect if any $P$ is supposed to have on the basis vectors themselves (I think I understand the effect on the coordinates relative to a basis). I guess I could calculate a matrix $P&#39;$ which has the effect $P&#39;u_1, P&#39;u_2,...,P&#39;u_n = v_1, v_2,..., v_n$ but would this be anything?</p>
Nick Alger
3,060
<p>One major reason is practical. The matrix that converts vectors in the new coordinates into the old coordinates is easy to come by: you just put your new basis vectors as columns of the matrix.</p> <p>Then to find the matrix going the other way around, you have to compute the inverse of this matrix.</p> <p>Thus, it makes sense to call the first one $P$, and the second one $P^{-1}$.</p>
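<p>This construction can be checked concretely on the example from the question; here is a small NumPy sketch (NumPy assumed available; variable names are mine):</p>

```python
import numpy as np

u1, u2 = np.array([1.0, -2.0]), np.array([3.0, -4.0])  # "old" basis S
v1, v2 = np.array([1.0, 3.0]), np.array([3.0, 8.0])    # "new" basis S'

U = np.column_stack([u1, u2])  # columns = old basis vectors
V = np.column_stack([v1, v2])  # columns = new basis vectors

# Coordinates of each new basis vector relative to S (solve U a = v_j):
a1 = np.linalg.solve(U, v1)
a2 = np.linalg.solve(U, v2)

# P has these coordinate vectors as its *columns* -- that is what the
# transpose in Schaum's definition accomplishes.  This reproduces the
# matrix given in the question: [[-13/2, -18], [5/2, 7]].
P = np.column_stack([a1, a2])

# Check the theorem P [w]_{S'} = [w]_S on an arbitrary vector w:
w = np.array([2.0, -1.0])
w_S = np.linalg.solve(U, w)    # coordinates of w relative to S
w_Sp = np.linalg.solve(V, w)   # coordinates of w relative to S'
# P @ w_Sp equals w_S, and np.linalg.inv(P) @ w_S equals w_Sp.
```

<p>Note that <code>P</code> itself is easy to build, while converting old coordinates into new coordinates requires inverting it — which is exactly the practical point made above.</p>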
1,651,991
<p>Let $p(x)$ be an odd degree polynomial and let $q(x)=(p(x))^2+ 2p(x)-2$ </p> <p>a) The equation $q(x)=p(x)$ admits at least two distinct real solutions.</p> <p>b) The equation $q(x)=0$ admits at least two distinct real solutions.</p> <p>c) The equation $p(x)q(x)=4$ admits at least two distinct real solutions.</p> <p>Which of the following are true?</p> <p>I know that all three are true but do not know how to prove them.</p>
MPW
113,214
<p><strong>Hint:</strong> First, determine the slope $m$ of the line through the given points.</p> <p>Then, pick one of the points and determine how much $x$ changes when moving from that point to your new value of $x$. We write this as $\Delta x = x_{new}-x_{old}$.</p> <p>Use the fact that $m = \frac{\Delta y}{\Delta x}$ to get that $\Delta y = m\cdot \Delta x$. You know $m$ and $\Delta x$ from above, so now you know $\Delta y$. That's how much $y$ changes when moving from the chosen point to the new point, that is, $\Delta y = y_{new}-y_{old}$. So you have what you're after: $$y_{new}=y_{old} + \Delta y$$</p> <p>Now you do it with the actual numbers.</p>
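<p>The steps above can be sketched as a tiny helper (Python; the function name and the sample points are illustrative, not from the question):</p>

```python
def y_new(p1, p2, x_new):
    """The recipe above: m = dy/dx through the two points, then
    y_new = y_old + m * (x_new - x_old)."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)      # slope of the line through p1 and p2
    return y1 + m * (x_new - x1)   # y_old plus the change in y

# Example: the line through (1, 2) and (3, 6) has slope 2,
# so at x = 5 we get y = 2 + 2 * (5 - 1) = 10.
```

<p>Either given point may be chosen as the "old" one; the result is the same.</p>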
2,418,440
<p><strong>Defn:</strong> Let $f$ be a function from $\mathbb{R}$ into a set $X$. We say that $f$ is <em>periodic</em> if there exists $p&gt;0$ such that for all $x\in \mathbb{R}$, we have $f(x+p)=f(x)$. </p> <p><strong>Prove</strong>: If $f$ is a continuous periodic function from $\mathbb{R}$ into a metric space $M$, then $f$ is uniformly continuous on $\mathbb{R}$.</p> <p><strong>Attempt:</strong> I think I can use the fact that for all $x \in \mathbb{R}$, $[x,x+p]$ is a closed and bounded interval. Then $f$ is continuous on a compact set and hence uniformly continuous on the interval. </p> <p>I also tried considering $[0,p]$. In that case, $x = np+\alpha$ and $y = mp + \beta$ for some $m,n \in \mathbb{Z}$ and $\alpha,\beta \in \mathbb{R}$. If $n&lt;0$, then we can choose $\alpha \in \mathbb{R^\mathbf{-}}$, so that $f(np+\alpha)=f(|\alpha|)$ and not $f(1-\alpha)$. Hope that makes sense.</p> <p>Then $\alpha,\beta \in [0,p]$, and I can find a $\delta$ that works for all $\alpha, \beta$ in $[0,p]$. The problem is, I need to constrain $|x-y|$ and somehow get that $|\alpha-\beta| &lt; \delta$. So far I haven't figured out how to do this.</p> <p>Been furrowing my brow at this for a while.. any hints very welcomed...</p> <p>Thanks</p>
orangeskid
168,051
<p>HINT:</p> <p>your idea is to translate each point to the interval $[0,p]$ and hope that the translates are also close by. Better to translate both by the same multiple of p, so that the smaller one lands inside, and the larger one is still guaranteed not too far. So you consider the restriction of your function to [0, p+1]. hope you can continue from here.</p>
1,731,978
<p>For two complex numbers $z_1$ and $z_2$, it is given that: </p> <blockquote> <p>$$|z_1+z_2|&gt;|z_1-z_2|$$</p> </blockquote> <p>How could we prove that $-\frac{\pi}{2}&lt;arg\big(\frac{z_1}{z_2}\big)&lt;\frac{\pi}{2}$</p> <p>If I take $z_1=x_1+iy_1$ and $z_2=x_2+iy_2$ I get $x_1x_2+i y_1y_2=0$ but it does help me in proving $-\frac{\pi}{2}&lt;arg\big(\frac{z_1}{z_2}\big)&lt;\frac{\pi}{2}$. How should I proceed?</p>
Arthur
15,500
<p>Hint 1: $|z_1 + z_2| &gt; |z_1 - z_2|$ means that $z_1$ and $z_2$ are closer to one another in the complex plane than $z_1$ and $-z_2$ are.</p> <p>Hint 2: What is the geometric significance of $\operatorname{arg}(z_1/z_2)$?</p>
1,731,978
<p>For two complex numbers $z_1$ and $z_2$, it is given that: </p> <blockquote> <p>$$|z_1+z_2|&gt;|z_1-z_2|$$</p> </blockquote> <p>How could we prove that $-\frac{\pi}{2}&lt;arg\big(\frac{z_1}{z_2}\big)&lt;\frac{\pi}{2}$</p> <p>If I take $z_1=x_1+iy_1$ and $z_2=x_2+iy_2$ I get $x_1x_2+i y_1y_2=0$ but it does help me in proving $-\frac{\pi}{2}&lt;arg\big(\frac{z_1}{z_2}\big)&lt;\frac{\pi}{2}$. How should I proceed?</p>
K.K.McDonald
302,349
<p>Factor out $z_2$ on each side: $$|z_1+z_2|&gt;|z_1-z_2| \iff |z_2|\left|\frac{z_1}{z_2}+1\right|&gt;|z_2|\left|\frac{z_1}{z_2}-1\right| \iff \left|\frac{z_1}{z_2}+1\right|&gt;\left|\frac{z_1}{z_2}-1\right|$$ this statement means that $|\text{Real}\{\frac{z_1}{z_2}+1\}|&gt;|\text{Real}\{\frac{z_1}{z_2}-1\}|$ which means $-\frac{\pi}{2}&lt;arg\big(\frac{z_1}{z_2}\big)&lt;\frac{\pi}{2}$</p> <p>because if you take these slack variables: $$\text{Real}\{\frac{z_1}{z_2}+1\}=a_r\;,\;\text{Imag}\{\frac{z_1}{z_2}+1\}=a_i \;,\; \text{Real}\{\frac{z_1}{z_2}-1\}=b_r\;,\;\text{Imag}\{\frac{z_1}{z_2}-1\}=b_i$$ note that $a_i=b_i$. Then you can write the first expression as: $$|\frac{z_1}{z_2}+1|&gt;|\frac{z_1}{z_2}-1|\Rightarrow \sqrt{a_r^2+a_i^2}&gt;\sqrt{b_r^2+b_i^2} \Rightarrow |a_r|&gt;|b_r|$$ also, for concluding the last part that $-\frac{\pi}{2}&lt;arg\big(\frac{z_1}{z_2}\big)&lt;\frac{\pi}{2}$, we have to solve: $$|\text{Real}\{\frac{z_1}{z_2}+1\}|&gt;|\text{Real}\{\frac{z_1}{z_2}-1\}| $$ which means: $$|\text{Real}\{\frac{z_1}{z_2}\}+1|&gt;|\text{Real}\{\frac{z_1}{z_2}\}-1| $$ which can be solved easily by conditioning on the value of $\text{Real}\{\frac{z_1}{z_2}\}=r \in \mathbb R$: \begin{cases} r \ge 1 \; \quad \qquad \Rightarrow |r+1|&gt;|r-1| \Rightarrow r+1&gt;r-1 \quad \text{always true}\\ -1 \le r \le +1 \Rightarrow |r+1|&gt;|r-1| \Rightarrow r+1&gt;1-r \Rightarrow r &gt; 0\\ r \le -1 \; \; \qquad \Rightarrow |r+1|&gt;|r-1| \Rightarrow -r-1&gt;1-r \quad \text{always false} \end{cases} so we see that the inequality holds exactly when $\text{Real}\{\frac{z_1}{z_2}\} &gt; 0$, i.e. $-\frac{\pi}{2}&lt;arg\big(\frac{z_1}{z_2}\big)&lt;\frac{\pi}{2}$.</p>
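<p>A quick numerical spot-check of the equivalence (Python; random samples illustrate, but of course do not prove, the statement):</p>

```python
import cmath
import random

random.seed(1)
violations = 0
checked = 0
for _ in range(1000):
    z1 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z2 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    if z2 == 0 or not abs(z1 + z2) > abs(z1 - z2):
        continue  # hypothesis |z1 + z2| > |z1 - z2| not satisfied
    checked += 1
    theta = cmath.phase(z1 / z2)  # arg(z1/z2), in (-pi, pi]
    if not (-cmath.pi / 2 < theta < cmath.pi / 2):
        violations += 1
# violations stays 0: the hypothesis forces Re(z1/z2) > 0.
```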
1,673,854
<p>Taylor Series of $f(x) = \sqrt{x}$ about $c = 1$</p> <p>I've tried doing this problem but got stuck at finding a pattern..</p> <p>Work:</p> <p>$$T(x) = \sum^\infty_{n=0}\frac{f^{(n)}(c)}{n!}(x-c)^n = f(c) + \frac{f'(c)}{1!}(x-c)^1 + \frac{f''(c)}{2!}(x-c)^2+... $$</p> <p>So $f(x) = \sqrt{x}$</p> <p>$$ f'(x)=\frac12x^{-\frac12}$$ $$ f''(x)=\frac{-1}4x^{-\frac{3}2}$$ $$ f'''(x)=\frac{3}8x^{-\frac{5}2}$$ and $$f(1) = 1$$ $$f'(1) = \frac12$$ $$f''(1) = -\frac14$$ $$f'''(1) = \frac38$$</p> <p>So far I know that it's alternating, so $(-1)^n$</p> <p>But I'm having trouble with the fractions since I thought it would have been $\left(\frac12\right)^n$, but the $\frac38$ won't work with that. Have I done something wrong so far or am I just not thinking of this the right way?</p> <p>So I've noticed that starting from $n=3$ the numerator is $3$ then $3*5$ then $3*5*7$ and so on... but how do I account for the first 3 terms? (n = 0, 1, 2)</p> <p>I learned that I can exclude them by simply taking them out and adding them from where the pattern starts to work, so...</p> <p>$$1+\frac{x-1}{2}-\frac{(x-1)^2}{8} + \sum^\infty_{n=3}something$$</p>
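<p>The pattern being hunted here is the generalized binomial coefficient: the $n$-th Taylor coefficient of $\sqrt{x}$ at $c=1$ is $\binom{1/2}{n}$, and for $n\ge 2$ this equals $(-1)^{n+1}\frac{(2n-3)!!}{2^n\,n!}$, which produces exactly the numerators $1,\ 3,\ 3\cdot5,\ 3\cdot5\cdot7,\dots$ An exact-arithmetic check with Python's <code>fractions</code> (helper names are mine):</p>

```python
from fractions import Fraction
from math import factorial

def binom_half(n):
    """Generalized binomial coefficient C(1/2, n) = (1/2)(1/2-1)...(1/2-n+1)/n!."""
    num = Fraction(1)
    for k in range(n):
        num *= Fraction(1, 2) - k
    return num / factorial(n)

def double_fact(m):
    """m!! for odd m, with (-1)!! = 1!! = 1."""
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

# Taylor coefficients f^(n)(1)/n! of sqrt(x) at c = 1:
coeffs = [binom_half(n) for n in range(6)]
# -> 1, 1/2, -1/8, 1/16, -5/128, 7/256, matching the derivatives worked
#    out in the question (1, 1/2, -1/4, 3/8, each divided by n!).

# Closed form for n >= 2: (-1)^(n+1) * (2n-3)!! / (2^n * n!)
closed = [Fraction((-1) ** (n + 1) * double_fact(2 * n - 3),
                   2 ** n * factorial(n)) for n in range(2, 6)]
```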
Domenico Vuono
227,073
<p>Hint: note that if the series $$\sum_{n=1}^{+ \infty}\frac{(2n-1)!}{(2n+1)!}$$ converges $$\lim_{n\to \infty}\frac{(2n-1)!}{(2n+1)!}=0$$ you can use the ratio test to show that the series converges.</p>
198,945
<p>Why and how publishing a paper in proceedings?<br> What are the difference with a "classical" journal?<br> What's the list of the main proceedings in which one can publish?<br> Do proceedings papers (never, sometimes, often or always) appear on mathscinet?</p>
Pace Nielsen
3,199
<p>I agree completely with Andreas' answer. One further consideration is publicity. It is easy for papers published in conference proceedings to become lost to general knowledge, or known only to very specialized groups. By publishing in a regular and reputable journal, the chances others will read your paper goes up.</p> <p>Further, it is not only administrators who hold the opinion that many conference proceeding volumes are of lower quality (at least in mathematics, but not, say, in computer science).</p>
2,281,894
<blockquote> <p>The Hardy space <span class="math-container">$H^2(\mathbb{D})$</span> is defined to be the space of all functions <span class="math-container">$f$</span> holomorphic on the unit disk <span class="math-container">$\mathbb{D}$</span> for which the norm <span class="math-container">$\lVert \cdot \rVert_H$</span>, given by</p> <p><span class="math-container">$\lVert f \rVert_H^2=\sup_{0&lt;r&lt;1}\int_0^{2\pi}|f(re^{i\theta})|^2 d\theta,$</span></p> <p>is finite.</p> <p>Show that <span class="math-container">$H^2(\mathbb{D})$</span> is a Hilbert space.</p> </blockquote> <p>I have shown that if <span class="math-container">$f(z)=\sum_n c_nz^n$</span>, then <span class="math-container">$\lVert f \rVert_H^2=2\pi \sum_n|c_n|^2$</span>. How does this imply <span class="math-container">$H^2(\mathbb{D})$</span> is a Hilbert space? What is the inner product induced by the norm?</p>
Fred
380,717
<p>Inner product: $(f|g)=\lim_{r\to 1^-}\int_0^{2\pi}f(re^{i\theta}) \overline{g(re^{i\theta})}\,d\theta$. (A supremum would not make sense here, since the integrand is complex. The limit exists: in terms of the Taylor coefficients $f=\sum_n c_nz^n$, $g=\sum_n d_nz^n$ it equals $2\pi\sum_n c_n\overline{d_n}$, which converges by Cauchy–Schwarz, and taking $g=f$ recovers $\lVert f\rVert_H^2$.)</p>
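<p>The identity $\lVert f\rVert_H^2 = 2\pi\sum_n |c_n|^2$ from the question can be sanity-checked numerically for a polynomial, where the supremum over $r$ equals the value at $r=1$ (Python sketch; the test function is an arbitrary choice of mine):</p>

```python
import cmath

coeffs = [1, 2, 3]  # f(z) = 1 + 2z + 3z^2, a toy test function

def f(z):
    return sum(c * z ** n for n, c in enumerate(coeffs))

# Uniform Riemann sum of |f|^2 over the unit circle; this is exact for a
# trigonometric polynomial once N exceeds twice the degree.
N = 512
integral = (2 * cmath.pi / N) * sum(
    abs(f(cmath.exp(2j * cmath.pi * k / N))) ** 2 for k in range(N))

parseval = 2 * cmath.pi * sum(abs(c) ** 2 for c in coeffs)  # 2*pi*(1 + 4 + 9)
# integral and parseval agree to machine precision.
```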
3,621,223
<p>I use software called Substance Designer which has a Pixel Processor where I can assign to every pixel of an image a gray-scale value defined by a series of operations.</p> <p>I am basically trying to generate a <a href="https://i.stack.imgur.com/6jhYa.jpg" rel="nofollow noreferrer">"normal gradient"</a> generated by the normals of a semi-ellipse with given <strong>a</strong> and <strong>b</strong> semi-major and semi-minor axes. </p> <p>This semi-ellipse is <strong>origin-centered</strong> and has the principal axes parallel to the x and y axes.</p> <p>For all points P(x,y) with y≥0, I want to find the angle or direction θ of the outwards-facing ellipse normal that intersects that point, both when a&gt;b and, if possible, when b&gt;a.</p> <p><a href="https://i.stack.imgur.com/LA7x1.jpg" rel="nofollow noreferrer">Here is a visual representation of what I am after, although I only need the values for y&gt;0</a></p> <p><a href="https://commons.wikimedia.org/wiki/File:Evolute1.gif" rel="nofollow noreferrer">I am trying to visualize the blue tangents on the evolute when y&gt;0. All the points on the same normal will have the same value.</a></p> <p>Thanks </p>
David G. Stork
210,401
<p>This figure may help those who wish to solve this problem:</p> <p><a href="https://i.stack.imgur.com/hIeFH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hIeFH.png" alt="enter image description here"></a></p> <p>By the way, here are three origin-centered ellipses with the same eccentricity that go through the same point, so if the OP wants a <em>unique</em> and hence solvable problem, it should be restated as I have, above. </p> <p><a href="https://i.stack.imgur.com/6bHF9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6bHF9.png" alt="enter image description here"></a></p>
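<p>As for the computation itself: the outward normal at a point of the origin-centered ellipse is parallel to the gradient of $x^2/a^2+y^2/b^2$, which handles the wide and the tall case uniformly. A minimal sketch (Python; the function name and sample values are mine):</p>

```python
from math import atan2, cos, sin, isclose, pi

def normal_angle(a, b, t):
    """Direction of the outward normal at the ellipse point (a cos t, b sin t).

    The outward normal is parallel to the gradient of x^2/a^2 + y^2/b^2,
    namely (x/a^2, y/b^2); atan2 turns that vector into an angle."""
    x, y = a * cos(t), b * sin(t)
    return atan2(y / b ** 2, x / a ** 2)

# Perpendicularity sanity check: the normal direction is orthogonal to the
# tangent vector (-a sin t, b cos t) at every parameter value.
a, b, t = 3.0, 1.0, 0.9
x, y = a * cos(t), b * sin(t)
dot = (-a * sin(t)) * (x / a ** 2) + (b * cos(t)) * (y / b ** 2)  # ~ 0

# On a circle (a == b) the normal is radial, and at the top of any
# ellipse (t = pi/2) it points straight up.
```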
2,325,436
<p>I was reading <em>Introduction to quantum mechanics</em> by David J. Griffiths and came across following paragraph:</p> <blockquote> <p><span class="math-container">$3$</span>. The eigenvectors of a hermitian transformation span the space.</p> <p>As we have seen, this is equivalent to the statement that any hermitian matrix can be diagonalized. <strong>This rather technical fact is</strong>, in a sense, <strong>the mathematical support on which much of a quantum mechanics leans</strong>. It turns out to be a thinner reed then one might have hoped, because <strong>the proof does not carry over to infinite-dimensional spaces.</strong>&quot;</p> </blockquote> <p>My thoughts:</p> <p>If much of a quantum mechanics leans on it, but the proof does not carry over to infinite-dimensional spaces, then hermitian transformations with infinite dimensionality are spurious.</p> <p>But there is infinite set of separable solutions for e.g. particle in a box. So Hamiltionan for that system has spectrum with infinite number of eigenvectors and is of infinite dimensionality.</p> <p>If we can't prove that this infinite set of eigenvectors span the space then how can we use completness all the time?</p> <p>Am I missing something here? Any missconceptions?</p> <p>I'd appriciate any help.</p>
Cameron Williams
22,551
<p>Turning my comments into an answer and adding details since this is an important question in mathematical physics:</p> <p>Many potentials will cause serious issues regarding completeness of eigenfunctions of Schrodinger operators. Really, we don't <em>need</em> completeness of the eigenfunctions, though it is desirable. That said, we can guarantee that many physical systems have nice spectra and this is usually all we really care about for most systems.</p> <p>In general, constructing <em>self-adjoint</em> operators is difficult because you need to get the domain exactly right. We often settle for <em>essentially self-adjoint</em> operators. These operators are very close to self-adjoint but they just don't have quite the full domain or range (but are pretty close!). This is good enough for physicists (and often mathematicians because many theorems work out the way you want). Particularly you can use the spectral theorem for essentially self-adjoint operators. Simply having a symmetric operator (one where $\langle Ax,y\rangle = \langle x,Ay\rangle$ for $x,y$ in the domain of $A$) is not nearly enough to guarantee (essential) self-adjointness. You need to know some extra things to guarantee (essential) self-adjointness. I can elaborate on this if desired.</p> <p>With that said, there are conditions that can be placed on the quantum mechanical potentials that can guarantee essential self-adjointness. Many quantum mechanical potentials fall into this regime - NOT ALL. Moreover, just because a potential does not fall into this regime, it does not necessarily mean that it does not give rise to a complete set of eigenfunctions. I'm pulling this from Galindo and Pascual's <em>Quantum Mechanics</em> Volume $1$. They state that:</p> <blockquote> <p>Let the potential $U$ be piecewise continuous, then define $U_{\pm} = \lim_{x\to\pm\infty} U(x)$. 
If one of $U_{\pm}$ is finite and decays faster than $x^{-1}$ to the limiting value, if $U_{\pm} = +\infty$, or if $U_{\pm} = -\infty$ but $U(x) \ge -ax^2 - b$ for some $a,b&gt; 0$, then the corresponding Schrodinger equation is essentially self-adjoint on $C_0^{\infty}(\Bbb R)$.</p> </blockquote> <p>In these cases, the spectrum is real. Particularly, if eigenvalues exist, they are real. This last bit is not guaranteed and requires some work to show for various systems. Consider $U \equiv 0$ on $\Bbb R$, then there are no eigenvalues (since the <em>generalized</em> eigenfunctions are complex exponentials which are not $L^2$-normalizable).</p>
3,460,843
<p>I understand that the way to calculate the cube root of <span class="math-container">$i$</span> is to use Euler's formula and divide <span class="math-container">$\frac{\pi}{2}$</span> by <span class="math-container">$3$</span> and find <span class="math-container">$\frac{\pi}{6}$</span> on the complex plane; however, my question is why the following solution doesn't work. </p> <p>So <span class="math-container">$(-i)^3 = i$</span>, but why can I not cube root both sides and get <span class="math-container">$-i=(i)^{\frac{1}{3}}$</span>. Is there a rule where equality is not maintained when you root complex numbers or is there something else I am violating and not realizing?</p>
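<p>The root of the puzzle is that the cube root is multi-valued on $\mathbb{C}$: $-i$ is indeed one of the three cube roots of $i$, but the principal branch (what $i^{1/3}$ conventionally denotes) picks $e^{i\pi/6}$, so "cube-rooting both sides" silently switches branches. A quick check (Python):</p>

```python
import cmath

# The three cube roots of i are e^{i(pi/2 + 2k*pi)/3} for k = 0, 1, 2.
roots = [cmath.exp(1j * (cmath.pi / 2 + 2 * cmath.pi * k) / 3) for k in range(3)]

cubes_ok = all(abs(r ** 3 - 1j) < 1e-12 for r in roots)  # each cubes to i

# -i is the k = 2 root, while Python's principal value (1j)**(1/3)
# is the k = 0 root e^{i pi/6}.
principal = (1j) ** (1 / 3)
```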
Canardini
341,007
<p><span class="math-container">$$\int{\frac{-3dx}{x^2+4}}=-3\int{\frac{dx}{x^2+4}}=-3\int{\frac{dx}{4\left(\frac{x^2}{4}+1\right)}}$$</span></p>
95,741
<p>I wonder if there is any difference between a mapping and a function. Somebody told me that the only difference is that a mapping can be from any set to any set, but a function must be from $\mathbb R$ to $\mathbb R$. But I am not satisfied with this answer. I need a simple way to explain the difference between a mapping and a function to a layman, together with some illustration (if possible).</p> <p>Thanks for any help.</p>
Arimakat
151,617
<p><a href="http://books.google.rs/books/about/Introduction_to_Smooth_Manifolds.html?id=xygVcKGPsNwC&amp;redir_esc=y">John M. Lee, Introduction to Smooth Manifolds, 2002</a>:</p> <p>Although the terms <strong>function</strong> and <strong>map</strong> are technically synonymous, in studying smooth manifolds it is often convenient to make a slight distinction between them. Throughout this book we generally reserve the term <strong>function</strong> for a map whose range is $\mathbb{R}$ (<em>a real-valued function</em>) or $\mathbb{R}^k$ for some $k &gt; 1$ (<em>a vector-valued function</em>). The word <strong>map</strong> or <strong>mapping</strong> can mean any type of map, such as a map between arbitrary manifolds.</p>
4,329,888
<p>I have a problem understanding how to find shaded regions in the complex plane.</p> <p><span class="math-container">\begin{array}{l} |z-2i|\ \geqslant \ |z+6+4i|\\ \\ \sqrt{x^{2} +( y-2)^{2}} =\sqrt{( x+6)^{2} +( y+4)^{2}}\\ x^{2} +( y-2)^{2} =( x+6)^{2} +( y+4)^{2}\\ x^{2} +y^{2} -4y+4=x^{2} +12x+36+y^{2} +8y+16\\ 12x+12y+48=0\\ y=-x-4\\ \end{array}</span></p> <p><img src="https://i.stack.imgur.com/CrMh6.png" alt="Sketch without shaded regions" /></p> <p>I can sketch this boundary line in the complex plane, but I'm confused about how to determine the shaded regions in questions like this.</p>
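<p>Once the boundary line is known, a practical way to decide which side to shade is to test one representative point from each side against the original inequality (Python sketch; the test points are my choice):</p>

```python
def in_region(z):
    """Does z satisfy |z - 2i| >= |z + 6 + 4i|?"""
    return abs(z - 2j) >= abs(z + 6 + 4j)

# The two "centres" 2i and -6-4i land on opposite sides of y = -x - 4:
side_of_2i = in_region(2j)          # False: 2i is strictly closer to itself
side_of_other = in_region(-6 - 4j)  # True: shade the side containing -6-4i
on_line = in_region(-4j)            # True: points on y = -x - 4 are included
```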
JMP
210,189
<p>Start from</p> <p><span class="math-container">$$\sin\pi z = \sum_{n=0}^\infty \frac{(-1)^n\pi^{2n+1}}{(2n+1)!}z^{2n+1}$$</span></p> <p>and</p> <p><span class="math-container">$$-1 = \cos\pi = \sum_{n=0}^\infty \frac{(-1)^n\pi^{2n}}{(2n)!}$$</span></p> <p>Consider</p> <p><span class="math-container">$$\sin\pi z \cos\pi = \sin(\pi (z+1)) = \sum_{n=0}^\infty \frac{(-1)^n\pi^{2n+1}}{(2n+1)!}z^{2n+1}\left(\sum_{n=0}^\infty \frac{(-1)^n\pi^{2n}}{(2n)!}\right)$$</span></p> <p>and compare the coefficients of <span class="math-container">$z^k$</span> to</p> <p><span class="math-container">$$\sum_{n=0}^\infty \frac{(-1)^{n+1}\pi^{2n+1}}{(2n+1)!}(z+1)^{2n+1}$$</span></p> <p>They are the same.</p>
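<p>The key identity used above, $\sin(\pi(z+1)) = \cos(\pi)\sin(\pi z) = -\sin(\pi z)$, can be spot-checked numerically (Python; the sample points are my choice):</p>

```python
import cmath
import random

random.seed(0)
pts = [complex(random.uniform(-3, 3), random.uniform(-1, 1)) for _ in range(200)]
max_err = max(abs(cmath.sin(cmath.pi * (z + 1)) + cmath.sin(cmath.pi * z))
              for z in pts)
# max_err sits at floating-point level, i.e. sin(pi(z+1)) = -sin(pi z).
```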
3,987,718
<p>Let <span class="math-container">$L \in \mathbb{R}$</span> and let <span class="math-container">$f$</span> be a function that is differentiable on a deleted neighborhood of <span class="math-container">$x_{0} \in \mathbb{R}$</span> such that <span class="math-container">$\lim_{x \to x_{0}}f'(x)=L$</span>.</p> <p>Find a function satisfying the above, and such that <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$x_{0}$</span>.</p> <p>--</p> <p>So I think that I do not completely understand when a function is indeed differentiable at <span class="math-container">$x_{0}$</span> and when it's not, and why in both cases I can still find its <span class="math-container">$f'$</span>?</p> <p>I would appreciate some explanation about that.</p> <p>Moreover, I thought of <span class="math-container">$f(x)=x^x$</span> or <span class="math-container">$f(x)=\ln(x^x)$</span>.</p> <p>If I understand it correctly, then both my <span class="math-container">$f$</span>'s are not differentiable at <span class="math-container">$x_{0}=0$</span>, because:</p> <p><span class="math-container">$f'(0)=\lim_{x \to 0}\frac{x^x-0^0}{x-0}$</span> which is undefined?</p> <p>or</p> <p><span class="math-container">$f'(0)=\lim_{x \to 0}\frac{\ln(x^x)-\ln(0^0)}{x-0}$</span> which is undefined?</p> <p>Thanks a lot!</p>
Wuestenfux
417,848
<p>Take the function <span class="math-container">$f(x) = |x-x_0|$</span>. The one-sided difference quotients at <span class="math-container">$x_0$</span> are</p>

<p><span class="math-container">$\lim_{h\rightarrow 0^+} \frac{|(x_0-h)-x_0|}{-h} = \frac{|h|}{-h} = -1$</span> and</p>

<p><span class="math-container">$\lim_{h\rightarrow 0^+} \frac{|(x_0+h)-x_0|}{h} = \frac{|h|}{h} = +1$</span>,</p>

<p>so the two one-sided limits disagree and <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$x_0$</span>.</p>
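<p>Numerically, the two one-sided difference quotients of $|x-x_0|$ really do settle at $-1$ and $+1$ (Python sketch; the base point is arbitrary):</p>

```python
def quotient(f, x0, h):
    """Difference quotient (f(x0 + h) - f(x0)) / h."""
    return (f(x0 + h) - f(x0)) / h

x0 = 1.5
f = lambda x: abs(x - x0)

h = 2.0 ** -20              # a small power of two, exact in floating point
left = quotient(f, x0, -h)  # -> -1.0, the left-hand slope
right = quotient(f, x0, h)  # -> +1.0, the right-hand slope
# The one-sided limits disagree, so f is not differentiable at x0.
```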
2,662,033
<p>My question is how to prove that $(X,d)$ is complete if and only if $(X,d')$ is complete.</p> <p>I have that $d$ and $d'$ are strongly equivalent metrics and I have used this to show that a sequence $x_{n}$ is Cauchy in $(X,d)$ if and only if it is Cauchy in $(X,d')$.</p> <p>I have the definition of complete as: "A metric space $X$ is complete if every Cauchy sequence in $X$ is convergent in $X$."</p> <p>Since this is an if and only if statement I know I need to prove it both ways. </p> <p>I am wondering if you also use the definition of strongly equivalent metrics to prove the completeness of the metrics or if you need to prove the convergence of the Cauchy sequence. A bit confused on how to approach this.</p>
user284331
284,331
<p>So $\alpha d'(x,y)\leq d(x,y)\leq\beta d'(x,y)$ for some $\alpha,\beta&gt;0$.</p> <p>Given $d(x_{n},x_{m})\rightarrow 0$ and $d'$ is complete, then $\alpha d'(x_{n},x_{m})\rightarrow 0$, so $d'(x_{n},x_{m})\rightarrow 0$, then for some $x\in X$, $d'(x_{n},x)\rightarrow 0$, and hence $\dfrac{1}{\beta}d(x_{n},x)\rightarrow 0$, and so $d(x_{n},x)\rightarrow 0$.</p>
270,641
<p>I want to find the inverse triple Laplace transform of <span class="math-container">$L^{-1}_{x_{3}} L^{-1}_{x_{2}} L^{-1}_{x_{1}} \left[ \frac{-1}{s^2_{1} + s^2_{2} + s^2_{3}} \right]$</span>. I did <span class="math-container">\begin{align*} L^{-1}_{x_{3}} L^{-1}_{x_{2}} L^{-1}_{x_{1}} \left[ \frac{-1}{s^2_{1} + s^2_{2} + s^2_{3}} \right] &amp;= L^{-1}_{x_{3}} \left[ L^{-1}_{x_{2}} \left[L^{-1}_{x_{1}} \left[ \frac{-1}{s^2_{1} + s^2_{2} + s^2_{3}} \right] \right] \right] \\ &amp;= (-1) L^{-1}_{x_{3}} \left[ L^{-1}_{x_{2}} \left[\frac{1}{a} L^{-1}_{x_{1}} \left[ \frac{a}{s^2_{1} + a^2} \right] \right] \right], \ \ a^2 = s^2_{2} + s^2_{3} \\ &amp;= (-1) L^{-1}_{x_{3}} \left[ L^{-1}_{x_{2}} \left[\frac{ \sin \left( x_{1} \sqrt{ \left( s^2_{2} + s^2_{3}\right)} \right) }{\sqrt{ \left( s^2_{2} + s^2_{3}\right)}} \right] \right] \\ &amp;= (-1) L^{-1}_{x_{3}} \left[ L^{-1}_{x_{2}} \left[ \frac{ \displaystyle\sum_{k = 0}^{\infty} \frac{(-1)^k \left(x_{1} \sqrt{s^2_{2} + s^2_{3}} \right)^{2k+1}}{(2k+1)!} }{\sqrt{ \left( s^2_{2} + s^2_{3}\right)}} \right] \right] \\ &amp;\approx (-1) L^{-1}_{x_{3}} \left[ L^{-1}_{x_{2}} \left[ \frac{ x_{1} \sqrt{s^2_{2} + s^2_{3}} - \frac{1}{6} \left(x_{1} \sqrt{s^2_{2} + s^2_{3}} \right)^3 }{\sqrt{ \left( s^2_{2} + s^2_{3}\right)}} \right] \right] \\ &amp;\approx (-1) L^{-1}_{x_{3}} \left[ L^{-1}_{x_{2}} \left[ x_{1} - \frac{1}{6} x_{1}^3 \left( s^2_{2} + s^2_{3} \right) \right] \right] \\ &amp;\approx (-1) L^{-1}_{x_{3}} \left[ L^{-1}_{x_{2}} \left[ \left( x_{1} - \frac{1}{6} x_{1}^3 s^2_{3} \right) - \frac{1}{6} x_{1}^3 s^2_{2} \right] \right] \\ &amp;\approx (-1) L^{-1}_{x_{3}} \left[ \left( x_{1} - \frac{1}{6} x_{1}^3 s^2_{3} \right) \delta(x_{2}) - \frac{1}{6} x_{1}^3 \delta''(x_{2}) \right] \\ &amp;\approx (-1) \left( \left( x_{1} \delta(x_{3}) - \frac{1}{6} x_{1}^3 \delta''(x_{3}) \right) \delta(x_{2}) - \frac{1}{6} x_{1}^3 \delta''(x_{2}) \delta(x_{3}) \right) \end{align*}</span> I am wondering if this solution is correct
or not, and if it is incorrect, what should I do to get the correct solution? I would appreciate your help.</p>
Abdulhameed Qahtan Abbood Alta
87,352
<p>I have another way to solve the problem by using the Taylor series of three variables and I am wondering whether it is correct or not: <span class="math-container">\begin{align*} L^{-1}_{x_{3}} L^{-1}_{x_{2}} L^{-1}_{x_{1}} \left[ \frac{-1}{s^2_{1} + s^2_{2} + s^2_{3}} \right] &amp;= L^{-1}_{x_{3}} \left[ L^{-1}_{x_{2}} \left[L^{-1}_{x_{1}} \left[ \frac{-1}{s^2_{1} + s^2_{2} + s^2_{3}} \right] \right] \right] \\ &amp;= L^{-1}_{x_{3}} \left[ L^{-1}_{x_{2}} \left[ L^{-1}_{x_{1}} \left[ \frac{-1}{a^2 + b^2 + c^2} + \frac{2}{a^2 + b^2 + c^2} \left[ a(s_{1} - a) + b(s_{2} - b) + c(s_{3} - c) \right] + \ldots \right] \right] \right] \\ &amp; \approx L^{-1}_{x_{3}} \left[ L^{-1}_{x_{2}} \left[L^{-1}_{x_{1}} \left[ \frac{-1}{a^2 + b^2 + c^2} + \frac{2}{a^2 + b^2 + c^2} \left[ a(s_{1} - a) + b(s_{2} - b) + c(s_{3} - c) \right] \right] \right] \right] \\ &amp;\approx \frac{-1}{a^2 + b^2 + c^2} \delta(x_{1}) \delta(x_{2}) \delta(x_{3}) \\ &amp; \ \ \ + \frac{2}{a^2 + b^2 + c^2} \Big[ a \left(\delta^{\prime}(x_{1}) \delta(x_{2}) \delta(x_{3}) - a \delta(x_{1}) \delta(x_{2}) \delta(x_{3}) \right) \\ &amp; \ \ \ + b \left(\delta(x_{1}) \delta^{\prime}(x_{2}) \delta(x_{3}) - b \delta(x_{1}) \delta(x_{2}) \delta(x_{3}) \right) \\ &amp; \ \ \ + c \left(\delta(x_{1}) \delta(x_{2}) \delta^{\prime}(x_{3}) - c \delta(x_{1}) \delta(x_{2}) \delta(x_{3}) \right) \Big], \end{align*}</span> for arbitrary <span class="math-container">$a, b, c \in [0, \infty)$</span>.</p>
654,968
<p>Let $g(x)=x+6$ and $h(x)=\frac{4}{x}$. Compute $\displaystyle\left(\frac{h}{g}\right)(5)$.</p> <p>I've plugged $5$ in for $x$ but I keep coming up with $.07$ and thanks to webassign I know that is wrong. I'm sure I'm missing something basic but what is it?</p>
TZakrevskiy
77,314
<p>The space is a countable union of balls centered in zero: $$A = \bigcup_{n\in N}B(0,n).$$</p> <p>The image of $B(0,n)$ is precompact, therefore, separable. Countable union of separable sets is separable.</p>
203,378
<p>I am trying to solve this equation where I need the solution of K in term of v</p> <pre><code> Solve[1 - K - (54 (20 - K) v (2 (-10 + K) (5 (1300 - 10 K + 3 K^2 - 100 (2 + K)) Hypergeometric2F1[ 3 - (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 3 + (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 1, (-10 + K)/(-20 + K)] + (-13000 + 400 K - 75 K^2 + 6 K^3 + 400 (-25 + 7 K) - 10 (200 - 140 K + 21 K^2)) Hypergeometric2F1[ 3 - (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 3 + (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 2, (-10 + K)/(-20 + K)]) (((-20 + K)^3 Hypergeometric2F1[-1 - Sqrt[3], -1 + Sqrt[3], 1, -(10/(-20 + K))])/(-10 + K)^3)))/((12 (-20 + K) (-10 + K) (5 (1300 - 10 K + 3 K^2 - 100 (2 + K)) Hypergeometric2F1[ 3 - (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 3 + (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 1, (-10 + K)/(-20 + K)] + (-13000 + 400 K - 75 K^2 + 6 K^3 + 400 (-25 + 7 K) - 10 (200 - 140 K + 21 K^2)) Hypergeometric2F1[ 3 - (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 3 + (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 2, (-10 + K)/(-20 + K)]) (-(((20 - K)^3 Hypergeometric2F1[-1 - Sqrt[3], -1 + Sqrt[3], 1, -(K/(-20 + K))])/(8 (-10 + K)^3))) + 1/4 (-3 (-20 + K) (-10 Sqrt[3] - Sqrt[(-10 + K)^2] + Sqrt[3] K) (-10 Sqrt[3] + Sqrt[(-10 + K)^2] + Sqrt[3] K) (-K (2 K - 2 (10 + K)) Hypergeometric2F1[ 2 - (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 2 + (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 1, ( 2 (-10 + K))/(-20 + K)] - 2 (K^2 - 2 K (10 + K) + 4 (100 - 10 K + K^2)) Hypergeometric2F1[ 2 - (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 2 + (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 2, ( 2 (-10 + K))/(-20 + K)]) - 2 (-10 + K)^2 (-K (3 K^2 - 2 K (50 + K) + 4 (300 - 10 K + K^2)) Hypergeometric2F1[ 3 - (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 3 + (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 1, ( 2 (-10 + K))/(-20 + K)] - (10 (-28 + K) K^2 + 3 K^3 + 8 K (1050 - 70 K + K^2) + 16 (-6000 + 750 K - 40 K^2 + K^3)) Hypergeometric2F1[ 3 - (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 3 + (Sqrt[3] (-10 + K))/Sqrt[(-10 + K)^2], 2, ( 2 (-10 + K))/(-20 + K)])) 
 )) == 0, K] </code></pre> <p>But every time I get the message Solve::nsmet: This system cannot be solved with the methods available to Solve.</p> <p>Please give me any suggestion. I have tried to find a solution by using 'FindRoot', but the problem with this is that it only gives a root if I supply a specific value of 'v'.</p>
Mariusz Iwaniuk
26,828
<p>Messages from <code>Solve</code> say that this <a href="https://en.wikipedia.org/wiki/Transcendental_equation" rel="nofollow noreferrer">transcendental equation</a> can't be solved analytically, but we can plot the solution <code>k[v]</code> implicitly.</p> <pre><code>eq = 1 - k - (108 (20 - k) (-20 + k)^3 v Hypergeometric2F1[-1 - Sqrt[ 3], -1 + Sqrt[3], 1, -(10/(-20 + k))] (5 (1300 - 10 k + 3 k^2 - 100 (2 + k)) Hypergeometric2F1[ 3 - (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 3 + (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 1, (-10 + k)/(-20 + k)] + (-13000 + 400 k - 75 k^2 + 6 k^3 + 400 (-25 + 7 k) - 10 (200 - 140 k + 21 k^2)) Hypergeometric2F1[ 3 - (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 3 + (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 2, (-10 + k)/(-20 + k)]))/((-10 + k)^2 (-(1/(2 (-10 + k)^2)) 3 (20 - k)^3 (-20 + k) Hypergeometric2F1[-1 - Sqrt[3], -1 + Sqrt[3], 1, -(k/(-20 + k))] (5 (1300 - 10 k + 3 k^2 - 100 (2 + k)) Hypergeometric2F1[ 3 - (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 3 + (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 1, (-10 + k)/(-20 + k)] + (-13000 + 400 k - 75 k^2 + 6 k^3 + 400 (-25 + 7 k) - 10 (200 - 140 k + 21 k^2)) Hypergeometric2F1[ 3 - (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 3 + (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 2, (-10 + k)/(-20 + k)]) + 1/4 (-3 (-20 + k) (-10 Sqrt[3] - Sqrt[(-10 + k)^2] + Sqrt[3] k) (-10 Sqrt[3] + Sqrt[(-10 + k)^2] + Sqrt[3] k) (-k (2 k - 2 (10 + k)) Hypergeometric2F1[ 2 - (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 2 + (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 1, ( 2 (-10 + k))/(-20 + k)] - 2 (k^2 - 2 k (10 + k) + 4 (100 - 10 k + k^2)) Hypergeometric2F1[ 2 - (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 2 + (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 2, ( 2 (-10 + k))/(-20 + k)]) - 2 (-10 + k)^2 (-k (3 k^2 - 2 k (50 + k) + 4 (300 - 10 k + k^2)) Hypergeometric2F1[ 3 - (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 3 + (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 1, ( 2 (-10 + k))/(-20 + k)] - (10 (-28 + k) k^2 + 3 k^3 + 8 k (1050 - 70 k + k^2) + 16 (-6000 + 750 k - 40 k^2 + k^3)) 
Hypergeometric2F1[ 3 - (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 3 + (Sqrt[3] (-10 + k))/Sqrt[(-10 + k)^2], 2, ( 2 (-10 + k))/(-20 + k)])))); ContourPlot[eq == 0, {v, -10, 1}, {k, 0, 11}, FrameLabel -&gt; Automatic] </code></pre> <p><a href="https://i.stack.imgur.com/E2lY6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E2lY6.png" alt="enter image description here"></a></p>
4,084,517
<p>In geometry of 2D and 3D, it's not uncommon for people to call a square or rectangle a <code>Box</code> in the field I work in. This makes naming things easier since it's clear what's in a folder of 'boxes'.</p> <p>Does the similar name exist for a circle and a sphere? Interestingly we have circles in 3D that could be defined as the intersection of a geometric primitive with a sphere, like a plane slicing through a sphere to form a circle. However searching for a unifying name (even if it's not that good of a name) has been unfruitful.</p> <p>Does a name exist for this classification? As in, a name for circular or spherical objects defined by some distance from (for convenience here) the origin?</p> <p>Some things seem easy, like a hyperplane is going down a dimension, so it ends up working for almost any dimension. I was hoping there was something like this that would either work for 2 and 3 dimensions, or better, work for N dimensions.</p>
RobertTheTutor
883,326
<p>We often refer to an &quot;<span class="math-container">$n$</span>-sphere&quot;, usually to say <span class="math-container">$S^{1}$</span> is a circle and <span class="math-container">$S^2$</span> is the surface of an ordinary sphere. An <span class="math-container">$n$</span>-sphere sits in <span class="math-container">$R^{n+1}$</span>. I sometimes see hypersphere used, but that is less common.</p> <p>If you want to include the interior, the term is &quot;<span class="math-container">$n$</span>-ball&quot;.</p> <p>By the way, the generic word for &quot;length, area, volume&quot; etc. is <em>content</em>.</p>
2,285,299
<p>For $c&gt;b&gt;a&gt;0$, is this inequality true? $$ c^2+ab&gt; ac+bc $$</p> <p>If yes, can anybody please provide a hint so I can solve it? </p>
Gautam Shenoy
35,983
<p>Hint: </p> <p>$$(c-b)c &gt; (c-b)a$$</p> <p>This is because $c-b &gt; 0$ and $c &gt; a$</p>
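For completeness, the hint can be finished in two more steps (this completion is an editorial sketch, not part of the original answer): expand and rearrange.

```latex
(c-b)\,c > (c-b)\,a
\;\Longrightarrow\; c^2 - bc > ac - ab
\;\Longrightarrow\; c^2 + ab > ac + bc.
```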
3,841,806
<p>Using spherical coordinates, I have to find the volume of the cone <span class="math-container">$z=\sqrt{x^2+y^2}$</span> inscribed in the sphere <span class="math-container">$(x-1)^2+y^2+z^2=4.$</span></p> <p>I can't find <span class="math-container">$\rho$</span> because the center of the sphere is displaced from the origin.</p> <hr /> <p>I tried solving it with Mathematica, but I did something wrong somewhere: <a href="https://i.stack.imgur.com/CtRvq.png" rel="nofollow noreferrer">enter image description here</a></p>
Narasimham
95,860
<p>HINT</p> <p>Eliminating <span class="math-container">$z$</span> between the two given equations and simplifying, one obtains</p> <p><span class="math-container">$$ x^2-x+\frac12+y^2-2=0 $$</span></p> <p>so the cylinder radius is <span class="math-container">$ R=\frac{\sqrt 7}{2}$</span>:</p> <p><span class="math-container">$$ (x-\dfrac12)^2+y^2=R^2$$</span></p> <p>with parametrization</p> <p><span class="math-container">$$ x= \dfrac12 +R \cos \theta, \quad y= R \sin \theta. $$</span></p> <p><a href="https://i.stack.imgur.com/9NuxW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9NuxW.png" alt="enter image description here" /></a></p> <p>By symmetry, only the volume over a single nappe of the cone need be considered.</p> <p>The sketch can be used to set up simpler parametrized limits of integration. The cones' common apex is the origin of the spherical coordinates.</p>
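A quick numeric check of the hint (a sketch in plain Python): lifting points of the parametrized circle to the cone $z=\sqrt{x^2+y^2}$ does land them on the sphere.

```python
import math
import random

random.seed(1)
R = math.sqrt(7) / 2  # cylinder radius from the hint

max_err = 0.0
for _ in range(100):
    t = random.uniform(0.0, 2.0 * math.pi)
    x = 0.5 + R * math.cos(t)        # parametrization from the hint
    y = R * math.sin(t)
    z = math.sqrt(x * x + y * y)     # lift to the cone z = sqrt(x^2 + y^2)
    # residual of the sphere equation (x-1)^2 + y^2 + z^2 = 4
    err = abs((x - 1) ** 2 + y ** 2 + z ** 2 - 4)
    max_err = max(max_err, err)

print(max_err)  # numerically zero
```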
1,534,694
<p>I tried to evaluate the following limit: </p> <p>$$\lim_{x\rightarrow \infty} (e^{2x}+x)^{1/x}$$ and I reached the indeterminate form $${4e^{2x}}\over {4e^{2x}}$$ If I plug in, I just get another indeterminate form! </p>
Thomas Andrews
7,933
<p>Tricky solution: $$\begin{align} (e^{2x}+x)^{1/x}&amp;=e^{2x/x}\left(1+\frac{1}{e^{2x}/x}\right)^{1/x}\\ &amp;=e^2\left(\left(1+\frac{1}{e^{2x}/x}\right)^{e^{2x}/x}\right)^{e^{-2x}} \end{align}$$</p> <p>Since $e^{2x}/x\to\infty$ and $(1+1/y)^y\to e$ as $y\to\infty$, and $e^{-2x}\to 0$, you get the limit is $e^2$.</p>
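A quick numeric spot check of the limit (a sketch in plain Python; the choice $x=50$ is arbitrary, just large enough that the value is visibly close to $e^2$):

```python
import math

def f(x):
    # the expression (e^{2x} + x)^{1/x}
    return (math.exp(2 * x) + x) ** (1.0 / x)

val = f(50.0)
err = abs(val - math.exp(2))
print(val, err)  # val is ~7.389056..., err is far below 1e-6
```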
2,250,339
<p>The Mad Hatter sets up what he believes is a zero-knowledge protocol. The integer n is the product of two large primes p and q and he wants to prove to the March Hare that he knows the factorization of n without revealing to anyone the actual factors $p$, $q$. He devises the following procedure: </p> <p>March Hare chooses a random integer $x \pmod n$, computes $y \equiv x^2 \pmod n$, and sends $y$ to Mad Hatter. Mad Hatter then computes the square roots $z_i$ of $y$ modulo $n$ and sends $z = \min(z_i)$ to March Hare who verifies that $z^2 \equiv y \pmod n$. The Hatter offers to repeat this $10$ times with different values of $y$.</p> <ol> <li><p>Should the March Hare be convinced that the Hatter knows $p$ and $q$? <br> My guess: <strong>yes</strong></p></li> <li><p>Is March Hare able to use all this information to factor $n$? (In which cases this isn’t a zero knowledge procedure at all). <br> My guess: <strong>no</strong></p></li> <li>The Knave of Hearts, who eavesdropped the entire sequence, recovers all ten values of $\{y_i,z_i\}$. Can he use this to obtain any information about $p$ and $q$? <br> My guess: <strong>no</strong></li> </ol> <p>Does this look right?</p>
hmakholm left over Monica
14,366
<p>Unless $x$ happens to be a multiple of $p$ or $q$, the number $y=x^2$ has four square roots modulo $pq$, namely the solutions for $z$ of the Chinese remainder systems $$ z \equiv \pm x \pmod p \\ z \equiv \pm x \pmod q $$ for each of the four combinations of $\pm$.</p> <p>If the March Hare chooses $x$s at random, there's a pretty good chance that the response he gets back in at least one of the ten tries is <em>not</em> one of $x$ or $pq-x$. He then knows a $z$ that differs from his $x$ by some multiple of either $p$ or $q$, and therefore $\gcd(z-x,pq)$ will be one of the factors.</p> <p>However, since the Hatter promises the <em>numerically smallest</em> square root, the Hare can do better than choosing $x$ <em>completely</em> at random. In fact, if he chooses $x$ between $\frac{n-\sqrt n}2$ and $\frac{n+\sqrt n}2$, then he can be <em>certain</em> that the $z$ he gets back will be useful to him (why?), and he only needs to ask a single question.</p> <p>Does any of this help the Knave, though?</p>
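A sketch of the March Hare's attack in Python (the primes 101 and 103 are toy values, small enough to brute-force square roots; a real modulus would of course be far too large for this):

```python
from math import gcd, isqrt

p, q = 101, 103          # toy primes, for illustration only
n = p * q                # 10403

def hare_attack(n):
    # Choose x in the band ((n - sqrt(n))/2, (n + sqrt(n))/2) as suggested above.
    lo, hi = (n - isqrt(n)) // 2, (n + isqrt(n)) // 2
    for x in range(lo, hi + 1):
        if gcd(x, n) != 1:
            continue  # such an x would already factor n; skip it to show the root attack
        y = x * x % n
        # The Hatter answers with the numerically smallest square root of y mod n.
        z = min(r for r in range(n) if r * r % n == y)
        for d in (gcd(z - x, n), gcd(z + x, n)):
            if 1 < d < n:
                return d  # nontrivial factor recovered
    return None

f = hare_attack(n)
print(f, n // f)  # the two prime factors of n
```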
194,671
<p>I'm searching for two symbols - considering they exist - (1) unknown value; (2) unknown probability.</p> <p><strong>Note</strong>: I thought that $x$ was used in a temporary context, whenever I see it, it remains unknown until an evaluation is made. I was thinking in a "unknown and impossible to be known" context. I'm not sure if this context exists or if $x$ also express it.</p>
Seyhmus Güngören
29,940
<p>As the others already mentioned, there are conventional variables for unknowns, such as $x$ for an unknown value or $P(A)$ for an unknown probability. We take them as variables and assume that they are unknown, although there is a certain probability that they take some reasonable values; that is actually the reason why we define them.</p> <p>Additionally, in computer programming there is a term 'NaN := Not a Number'. That is also, in a way, an unknown value; however, even worse, we don't have any hope of determining it, as we do in the case of $x$. Such values often arise from indeterminate forms such as $0/0$, $\infty-\infty$, $\infty/\infty$, etc.</p> <p>There might also be another interpretation of an unknown probability. If you have a probabilistic model and there are deviations from the model assumptions due to outliers, inaccurate estimation, etc., this phenomenon is called <em>uncertainty</em>. In such a case you are unsure about which probabilistic model you have, resulting in an unknown probability.</p>
194,671
<p>I'm searching for two symbols - considering they exist - (1) unknown value; (2) unknown probability.</p> <p><strong>Note</strong>: I thought that $x$ was used in a temporary context, whenever I see it, it remains unknown until an evaluation is made. I was thinking in a "unknown and impossible to be known" context. I'm not sure if this context exists or if $x$ also express it.</p>
Vlad Atanasiu
157,761
<p>You might want to borrow notations from outside mathematics if there is none which suits you. Here are some suggestions: A practical notation for the unknown is the single or triple <strong>interrogation</strong> mark "<code>?</code>" / "???", as you might use in a table or a database. Sometimes the <strong>asterisk</strong> "<code>*</code>" is used for similar ends (but note its other uses, which are semantically slightly different from the notion of "unknown": in programming, in regular expressions for example, the asterisk stands as a type of "wild card" for one or more <em>unspecified</em> values, e.g. "night*" for {night, nights, nightly, etc.}; in linguistics it designates a <em>hypothetical</em>, reconstructed word form, such as "*nókʷt-s" for the "night" in Proto-Indo-European). In epigraphy and related sciences the bracketed <strong>ellipsis</strong> "<code>[...]</code>" stands for an unknown textual sequence, but the notation is more widely known from partial quotes: "Bla bla [...] bla bla."</p> <p>@<a href="https://math.stackexchange.com/users/62967/egreg">egreg</a>: Thanks for the comment - I modified the post accordingly. The point is that if @<a href="https://math.stackexchange.com/users/25805/ig%C3%A4ria-mnagarka">Igäria Mnagarka</a> really needs a symbol for "unknown" and there is none, it could be invented or borrowed from where it exists.</p> <p>@<a href="https://math.stackexchange.com/users/25805/ig%C3%A4ria-mnagarka">Igäria Mnagarka</a>: To push things farther, you could specify the type of unknown you wish to speak about (which would be quite interesting to see applied in Bayesian analysis). 
Following <a href="https://en.wikipedia.org/wiki/There_are_known_knowns" rel="nofollow noreferrer">Donald Rumsfeld's typology</a> and the post above, you could use <code>?</code> for a <em>known unknown</em>, <code>*</code> for a <em>known known</em> (unspecified value from a set of known values), <code>!</code> for a <em>unknown known</em> (negation of something specified), and <code>[ ]</code> for a <em>unknown unknown</em>.</p>
3,637,785
<p>Prove that <span class="math-container">$${2n \choose n} 2^{-2n} = (-1)^n {-\frac12 \choose n},$$</span></p> <p><span class="math-container">$$\frac{1}n {2n -2 \choose n-1} 2^{-2n +1} = (-1)^{n-1} {\frac12 \choose n}.$$</span></p> <p>The second part can be proved by replacing <span class="math-container">$n$</span> with <span class="math-container">$n-1$</span> in the first part. For the first part, I found that the right side is equal to <span class="math-container">${n-1/2 \choose n}$</span>, but when I expand the left side, I get something like <span class="math-container">$$\frac{2n/4}{n}\frac{(2n-1)/4}{n-1}\cdots\frac{(2n-n+1)/4}{1}$$</span> which does not look similar to <span class="math-container">${n-1/2 \choose n}$</span>.</p> <p>I would appreciate any help. </p>
jamie
737,935
<p>I was struggling to see why the last digit of <span class="math-container">$b^2$</span> must be 0. I believe the answer is that it need not be: it could be 0 or 5. It can be seen that the last digits of <span class="math-container">$a^2,b^2$</span> must lie in <span class="math-container">$\{0,1,4,5,6,9\}$</span>, so cannot be 2 or 8:</p> <span class="math-container">$$ a^2\equiv 0 \mod 10$$</span> <span class="math-container">$$ 2b^2\equiv 0 \mod 10$$</span> Since <span class="math-container">$2\mid 10$</span>, <span class="math-container">$$ b^2\equiv 0 \mod 5$$</span> Then, since 5 is prime, <span class="math-container">$5\mid b$</span>: the factor of 5 did not arise from squaring. Therefore, if the equation holds for <span class="math-container">$b,a$</span>, we can divide both by 5 to find that it also holds for two smaller integers, leading to a contradiction by infinite descent, via the well-ordering principle.</p>
3,684,799
<p>When the theory of groups is built up from its axioms, it is often necessary to establish very simple results such as</p> <p><span class="math-container">$ax = xa \Longrightarrow a^{-1}x = xa^{-1}. \tag 1$</span></p> <p>Thus we ask how the title question might be proved.</p>
Martin Argerami
22,857
<p>Your argument is correct, but showing that the spectrum is <span class="math-container">$\{0\}$</span> does not in general imply that <span class="math-container">$T=0$</span>; it does, though, when <span class="math-container">$T$</span> is selfadjoint, but you need to include that argument. </p> <p>The two usual ways to go would be </p> <ul> <li><p>use the Spectral Theorem, which expresses <span class="math-container">$T$</span> (since it is compact and selfadjoint) in terms of its eigenvalues.</p></li> <li><p>Use the formula for the spectral radius. You have that <span class="math-container">$$\tag1 \operatorname{spr}(T)=\lim_n\|T^n\|^{1/n}. $$</span> In your setup <span class="math-container">$\sigma(T)=\{0\}$</span>, so the left-hand side is <span class="math-container">$0$</span>. But to conclude that <span class="math-container">$T=0$</span> you need to mix this with the fact that <span class="math-container">$T=T^*$</span>. You have <span class="math-container">$\|T^2\|=\|T^*T\|=\|T\|^2$</span>. Using induction (note that <span class="math-container">$(T^{2^n})^*T^{2^n}=T^{2^{n+1}}$</span>) you get <span class="math-container">$\|T^{2^n}\|=\|T\|^{2^n}$</span>. Now using <span class="math-container">$(1)$</span> along the subsequence <span class="math-container">$n=2^k$</span> you have <span class="math-container">$$ 0=\lim_n\|T^n\|^{1/n}=\lim_k\|T^{2^k}\|^{1/2^k}=\|T\|. $$</span> So <span class="math-container">$\|T\|=0$</span> and then <span class="math-container">$T=0$</span>. This last argument doesn't use that <span class="math-container">$T$</span> is compact, so it shows that any selfadjoint operator with spectrum <span class="math-container">$\{0\}$</span> is equal to zero. </p></li> </ul>
232,424
<p>Are there any claims and counterclaims about mathematics being, in certain cases, a result of common-sense thinking? Or can some mathematical results be figured out using just pure common sense, i.e. no mathematical methods? </p> <p>I'd also appreciate any mentions relating to the sciences, social sciences, or ordinary life.</p>
Martin Argerami
22,857
<p>There is this saying among mathematicians, that you don't really understand something until it becomes obviously trivial. So, in that sense, all of mathematics is "common sense thinking". </p>
2,321,667
<p>Patrick Suppes in his book <a href="http://rads.stackoverflow.com/amzn/click/0486406873" rel="nofollow noreferrer">Introduction to Logic</a>, on page 63, asks the reader to prove the statement $$\forall x\forall y\forall z(xPy\land yPz\to xPz)$$ in the theory which he calls the "Theory of rational behavior". The statement is based on the notion of weak preference $xQy$, its two properties (lines 1 and 2), and a definition of strict preference $xPy$ (line 3): $$\begin{array}{p} \{1\}&amp;(1)&amp;\forall x\forall y\forall z(xQy\land yQz\to xQz)&amp;\text{Transitive property} \\ \{2\}&amp;(2)&amp;\forall x\forall y(xQy \lor yQx)&amp;\text{Axiom of order}\\ \{3\}&amp;(3)&amp;\forall x\forall y(xPy\leftrightarrow \neg yQx)&amp;\text{Definition of strict preference}\\ \{4\}&amp;(4) &amp; xPy\land yPz &amp; \text{Assumption} \\ \{3,4\}&amp;(5) &amp; \neg yQx\land \neg zQy &amp; \text{from (3)(4) using U.S.} \\ \{2,3,4\}&amp;(6) &amp; xQy &amp; \text{from (2)(5) using U.S.} \\ \{2,3,4\}&amp;(7) &amp; yQz &amp; \text{from (2)(5) using U.S.} \\ \{1,2,3,4\}&amp;(8) &amp; xQz &amp; \text{from (1)(6)(7)} \\ \end{array} $$ Here U.S. stands for the <em>Rule of Universal Specification</em>.</p> <p>$xPz$ is equal to $\neg zQx$ by the definition of strict preference on line $3$. So we want to show that $\neg zQx$ logically follows from the premises $\{1,2,3,4\}$, and then use conditioning on line $(4)$ and <em>the Rule of Universal Generalisation</em> to prove the given statement. But from $xQz\land(xQz \lor zQx)$ we cannot conclude $\neg zQx$, because according to the <em>Axiom of order</em> both $xQz$ and $zQx$ can be <em>true</em> together.</p> <p>I've tried the method of interpretations to check the validity of the statement to be proven, but haven't found any interpretation whose antecedent is <em>true</em> and whose conclusion is <em>false</em>.</p> <p>If my derivation is fine so far, I'm looking for tips that will help me get to the finish line here. I will appreciate any feedback.</p>
Bram28
256,001
<p>Do a proof by contradiction, i.e. continue with:</p> <p>$$\begin{array}{p} \{9\}&amp;(9)&amp;\neg xPz&amp;\text{Assumption} \\ \{3,9\}&amp;(10)&amp;zQx&amp;\text{from (3)(9)}\\ \{1,2,3,4,9\}&amp;(11)&amp;zQy&amp;\text{from (1)(6)(10)}\\ \{3,4\}&amp;(12) &amp; \neg zQy &amp; \text{from (5)} \\ \{1,2,3,4,9\}&amp;(13) &amp; \bot &amp; \text{from (11)(12)} \\ \{1,2,3,4\}&amp;(14) &amp; xPz &amp; \text{from (13), discharging (9), using proof by contradiction} \\ \end{array} $$</p> <p>(I wasn't sure exactly how proof by contradiction is formalized in your system ... but you get the idea)</p>
854,438
<p>I've read in a lot of places how there was a "foundational crisis" in defining the "foundations of mathematics" in the 20th century. Now, I understand that mathematics was very different then; I suppose the ideas of Church, Gödel, and Turing were either in their infancy or not well known, but I still hear this kind of language a lot today, and I don't know why.</p> <p>From my perspective, mathematics is essentially the study of any kind of formal system understandable by an intelligent yet finite being. By definition, then, the only reasonable "foundation for all of mathematics" must be a Turing-complete language, and the choice of the specific language is basically arbitrary (except for concerns of elegance). The idea of creating a finite set of axioms that describes "all of mathematics" seems fruitless to me, unless what is being described is a Turing-complete system.</p> <p>Is this idea of finding a foundation for all of "mathematics" still prominent today? I suppose I can understand why this line of reasoning was once prominent, but I don't see why it is relevant today. Is the continuum hypothesis true? Well, do you want it to be?</p>
Trevor Wilson
39,378
<p>The question is broad, so I'll just try to address one of the sub-questions. I'll change it a little bit to allow for the possibility of pluralism:</p> <blockquote> <p>What is the meaning of finding a "foundation of mathematics”?</p> </blockquote> <p>As an example, I'll try to explain why set theory is a foundation of mathematics, and what it means for it to be one. This isn't intended to exclude other possible foundations, although they tend to be subsumed by set theory in a sense discussed below.</p> <p>Based on one of the OP's comments above, I'll treat the mention of computability in the question figuratively rather than literally. Roughly speaking, a Turing-complete system of computation is one that can simulate any Turing machine, and so by the Church&ndash;Turing thesis, can simulate any other (realistic) system of computation. Therefore a Turing-complete system of computation can be taken as a "foundation" for computation.</p> <p>As far as this question is concerned, I think that under the appropriate analogy between computation and mathematical logic, the notion of simulation of one computation by another corresponds to the notion of <em>interpretation</em> of one theory in another. By an interpretation of a theory $T_1$ in a theory $T_2$ I mean an effective translation procedure which, given a statement $\varphi_1$ in the language of $T_1$, produces a corresponding statement $\varphi_2$ in the language of $T_2$ such that $\varphi_1$ is a theorem of $T_1$ if and only if $\varphi_2$ is a theorem of $T_2$. So a mathematician working in the theory $T_1$ is essentially (modulo this translation procedure) working in some "part" of the theory $T_2$.</p> <p>In these terms, a foundation of mathematics could be reasonably described as a (consistent) mathematical theory that can interpret every other (consistent) mathematical theory. However, it follows from the incompleteness theorems that no such "universal" theory can exist. 
But remarkably, the set theory $\mathsf{ZFC}$ can interpret almost all of "ordinary" (non-set-theoretic) mathematics. For example, if I'm working in analysis and I prove some theorem about continuous functions, then my theorem and its proof can in principle be translated into a theorem of $\mathsf{ZFC}$ and its proof in $\mathsf{ZFC}$. (Under this translation, a continuous function is a set of ordered pairs, which are themselves sets of sets, satisfying a certain set-theoretic property, <em>etc.</em>)</p> <p>This means that set theory can serve as a foundation for most of mathematics. For set theory itself, no one theory (<em>e.g.</em> $\mathsf{ZFC}$) can serve as a universal foundation. But we can informally define a hierarchy of set theories ($\mathsf{ZFC}$, $\mathsf{ZFC} + {}$"there is an inaccessible cardinal", $\mathsf{ZFC} + {}$"there is a measurable cardinal", <em>etc.</em>) called the large cardinal hierarchy, strictly increasing in interpretability strength, which seems to be practically universal in the sense that every "natural" mathematical theory, set-theoretic or otherwise, can be interpreted in one of these theories. (The reason this doesn't violate the incompleteness theorem is that the large cardinal hierarchy is defined informally in an open-ended way, so we can't just take its union and get a universal theory.)</p> <p>Your last point about the continuum hypothesis raises an important question: what if the various candidate set theories branch off in many different directions rather than lining up in a neat hierarchy? Well, it turns out so far that the natural theories we consider <em>do</em> line up in the interpretability hierarchy, even if they do not line up in the stronger sense of inclusion. 
For example, the methods of inner models and of forcing used to prove the independence of $\mathsf{CH}$ from $\mathsf{ZFC}$ also give interpretations of the two candidate extensions $\mathsf{ZFC} + \mathsf{CH}$ and $\mathsf{ZFC} + \neg \mathsf{CH}$ by one other (and by $\mathsf{ZFC}$ itself,) showing that they inhabit the same place in the interpretability hierarchy. If you believe $\mathsf{CH}$ and I believe $\neg\mathsf{CH}$ we can still interpret each others results, rather than dismissing them as meaningless.</p>
4,249,794
<p>I have to determine whether this graph is bipartite or not:</p> <p><img src="https://i.stack.imgur.com/AjxQzl.png" alt="" /></p> <p>I have found an answer but I am not sure about it. If we divide the vertex set into <span class="math-container">$\{a,d,c,h\}$</span> and <span class="math-container">$\{b,f,e,g\}$</span>, then it satisfies the bipartite condition. Is this correct?</p>
David G. Stork
210,401
<p>If you want to test large graphs, use software, such as <em>Mathematica</em>:</p> <pre><code>myGraph = Graph[{a \[UndirectedEdge] b, a \[UndirectedEdge] e, a \[UndirectedEdge] f, b \[UndirectedEdge] h, c \[UndirectedEdge] f, c \[UndirectedEdge] g, d \[UndirectedEdge] e, d \[UndirectedEdge] f, d \[UndirectedEdge] g, f \[UndirectedEdge] h}, VertexLabels -&gt; &quot;Name&quot;, GraphLayout -&gt; &quot;BipartiteEmbedding&quot;] </code></pre> <p><a href="https://i.stack.imgur.com/jRFtq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jRFtq.png" alt="enter image description here" /></a></p> <pre><code>BipartiteGraphQ[myGraph] </code></pre> <p>(* True *)</p>
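Without Mathematica, the same check is a short BFS two-coloring; here is a sketch in plain Python (edge list transcribed from the figure). It recovers exactly the partition proposed in the question:

```python
from collections import deque

edges = [("a", "b"), ("a", "e"), ("a", "f"), ("b", "h"), ("c", "f"),
         ("c", "g"), ("d", "e"), ("d", "f"), ("d", "g"), ("f", "h")]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def two_color(adj):
    """BFS 2-coloring; returns (is_bipartite, color map)."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return False, color  # odd cycle found
    return True, color

ok, color = two_color(adj)
side0 = sorted(v for v in color if color[v] == 0)
print(ok, side0)  # True ['a', 'c', 'd', 'h']
```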
2,089,502
<blockquote> <p>How many numbers are there from $1$ to $1400$ which satisfy these conditions: when divided by $5$ the remainder is $3$, and when divided by $7$ the remainder is $2$?</p> </blockquote> <p>How can I start? I am a newbie in modular arithmetic. I can only figure out that the number is $5k_1+3 = 7k_2+2$. </p>
Glitch
74,045
<p>Assume initially that $x \le y$. From the FTC we have $$ \cos(y) - \cos(x) = \int_x^y \cos'(z) dz = -\int_x^y \sin(z) dz $$ and so $$ |\cos(y) - \cos(x)| \le \left\vert -\int_x^y \sin(z) dz \right\vert \le \int_x^y |\sin(z)| dz \le \int_x^y dz = y-x = |y-x|. $$ If $y \le x$ a similar argument works, which I'll leave as an exercise.</p>
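A numerical sanity check of the Lipschitz bound (a sketch; the 1000 sample pairs on $[-10,10]$ are arbitrary):

```python
import math
import random

random.seed(0)
worst = 0.0
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    y = random.uniform(-10.0, 10.0)
    if x != y:
        # the ratio |cos(y) - cos(x)| / |y - x| should never exceed 1
        worst = max(worst, abs(math.cos(y) - math.cos(x)) / abs(y - x))
print(worst)  # at most 1, up to rounding
```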
2,089,502
<blockquote> <p>How many numbers are there from $1$ to $1400$ which satisfy these conditions: when divided by $5$ the remainder is $3$, and when divided by $7$ the remainder is $2$?</p> </blockquote> <p>How can I start? I am a newbie in modular arithmetic. I can only figure out that the number is $5k_1+3 = 7k_2+2$. </p>
Fernando Revilla
401,424
<p>$$\left|\cos x -\cos y\right|=\left|−2\sin\frac{​​​x+y}{2}​​\cdot\sin\frac{​​​x-y}{2}​​\right|$$ $$=2\left|\sin\frac{​​​x+y}{2}​​​\right|\left|\sin\frac{​​​x-y}{2}​​​\right|\le 2\cdot 1\cdot\left|\frac{​​​x-y}{2}​​​\right|=\left|x - y\right|.$$</p>
1,518,393
<p>So I honestly don't even know where to start. Does it mean that a number k within the permutations is what we are looking for or something like that?</p>
user21820
21,820
<p>Every permutation can be decomposed into disjoint cycles. For example $(1,2,3) (4,5)$ can be used to denote the permutation that maps $1 \to 2 \to 3 \to 1$ and $4 \to 5 \to 4$. The question asks how many permutations of $n$ objects have $1,2,\cdots,k$ all in different cycles.</p> <p><strong>Hint</strong>: The easiest way is via induction. Check that it works for $k$ objects. Now create a $1$-to-$(n+1)$ correspondence between such permutations for $n$ objects and such permutations for $n+1$ objects. You must prove both directions of the correspondence.</p>
1,518,393
<p>So I honestly don't even know where to start. Does it mean that a number k within the permutations is what we are looking for or something like that?</p>
Yuval Filmus
1,277
<p>Take a random permutation in $S_n$, decompose it as a product of disjoint cycles, and erase all numbers larger than $k$. Show that the resulting permutation is a random permutation in $S_k$. Since the probability that a random permutation in $S_k$ satisfies your condition is $1/k!$ (only the identity permutation), the result follows.</p>
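The resulting count, $n!/k!$ permutations of $S_n$ with $1,\dots,k$ in distinct cycles, can be verified by brute force for a toy case (a sketch with $n=5$, $k=3$; indices are 0-based):

```python
from itertools import permutations
from math import factorial

def cycle_of(perm, start):
    # follow perm from `start` until the cycle closes
    cyc, cur = {start}, perm[start]
    while cur != start:
        cyc.add(cur)
        cur = perm[cur]
    return frozenset(cyc)

n, k = 5, 3
count = 0
for p in permutations(range(n)):
    # the elements 0, 1, ..., k-1 play the role of 1, ..., k
    if len({cycle_of(p, i) for i in range(k)}) == k:
        count += 1

print(count, factorial(n) // factorial(k))  # 20 20
```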
1,612,808
<p>Suppose that $X$ is a finite $G$-set, where $G$ is a group of prime power order, i.e. $|G|=p^n$ for $p$ prime.</p> <p>The fixed point set is $X_G=\{x\in X : gx=x \ \forall g\in G\}$.</p> <p>I'm asked to prove that $|X|\equiv|X_G| \pmod p$, but I'm unsure how I should start.</p>
Francis Begbie
300,218
<p>Note that $G$ acts on $X-X_G$, and $p$ divides the size of every orbit in $X-X_G$ by the Orbit Stabilizer Theorem, hence $p$ divides $|X-X_G|$, i.e. $|X|$ and $|X_G|$ are congruent modulo $p$.</p>
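A tiny sanity check of the congruence (a sketch: $G=C_5$ acting on binary strings of length 5 by cyclic rotation, the setting of the classic necklace proof of Fermat's little theorem):

```python
from itertools import product

p = 5
X = list(product([0, 1], repeat=p))  # |X| = 2^5 = 32

def rotate(x, g):
    return x[g:] + x[:g]

# only the two constant strings are fixed by every rotation
fixed = [x for x in X if all(rotate(x, g) == x for g in range(p))]
print(len(X), len(fixed), (len(X) - len(fixed)) % p)  # 32 2 0
```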
30,220
<p>Jeremy Avigad and Erich Reck claim that one factor leading to abstract mathematics in the late 19th century (as opposed to concrete mathematics or hard analysis) was <em>the use of more abstract notions to obtain the same results with fewer calculations.</em></p> <p>Let me quote them from their remarkable historical <a href="https://www.andrew.cmu.edu/user/avigad/Papers/infinite.pdf" rel="nofollow noreferrer">paper</a> &quot;Clarifying the nature of the infinite: the development of metamathematics and proof theory&quot;.</p> <blockquote> <p>The gradual rise of the opposing viewpoint, with its emphasis on conceptual reasoning and abstract characterization, is elegantly chronicled by <a href="http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Stein</a> (<a href="https://web.archive.org/web/20140415224643/http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Wayback Machine</a>), as part and parcel of what he refers to as the “second birth” of mathematics. The following quote, from Dedekind, makes the difference of opinion very clear:</p> </blockquote> <blockquote> <blockquote> <p>A theory based upon calculation would, as it seems to me, not offer the highest degree of perfection; it is preferable, as in the modern theory of functions, to seek to draw the demonstrations no longer from calculations, but directly from the characteristic fundamental concepts, and to construct the theory in such a way that it will, on the contrary, be in a position to predict the results of the calculation (for example, the decomposable forms of a degree).</p> </blockquote> </blockquote> <blockquote> <p>In other words, from the Cantor-Dedekind point of view, abstract conceptual investigation is to be preferred over calculation.</p> </blockquote> <p><strong>What are concrete examples from concrete fields avoiding calculations by the use of abstract notions?</strong> (Here &quot;calculation&quot; means any type of routine technicality.) 
Category theory and topoi may provide some examples.</p> <p>Thanks in advance.</p>
Bill Dubuque
6,716
<p>One striking example that comes to mind is Nathan Jacobson's proof that rings satisfying the identity $X^m = X$ are commutative. This is model-theoretic and proceeds by a certain type of factorization which reduces the problem to the (subdirectly) irreducible factors of the variety. These turn out to be certain finite fields, which are commutative, as desired. By (Birkhoff) completeness there must also exist a purely equational proof (in the language of rings), but even for small $m$ this is notoriously difficult, e.g. $m = 3$ is often posed as a difficult <a href="http://groups-beta.google.com/group/sci.math/msg/9b884af731351f10">exercise</a>. It's only recently that such a general non-model-theoretic equational proof was discovered by John Lawrence (as Stan Burris informed me). I don't know if it has been published yet, but see their earlier <a href="http://www.math.uwaterloo.ca/~snburris/htdocs/MYWORKS/PAPERS/fields3.pdf">work [1]</a>.</p> <p>So here, by "higher-order" conceptual structural reasoning, one is able to escape the confines of first-order equational logic and give a more conceptual proof than the brute-force equational proofs - arguments so devoid of intuition that they can be discovered by an automatic theorem prover.</p> <p><a href="http://www.math.uwaterloo.ca/~snburris/htdocs/MYWORKS/PAPERS/fields3.pdf">[1]</a> S. Burris and J. Lawrence, Term rewrite rules for finite fields.<br> International J. Algebra and Computation 1 (1991), 353-369. <a href="http://www.math.uwaterloo.ca/~snburris/htdocs/MYWORKS/PAPERS/fields3.pdf">http://www.math.uwaterloo.ca/~snburris/htdocs/MYWORKS/PAPERS/fields3.pdf</a> </p>
30,220
<p>Jeremy Avigad and Erich Reck claim that one factor leading to abstract mathematics in the late 19th century (as opposed to concrete mathematics or hard analysis) was <em>the use of more abstract notions to obtain the same results with fewer calculations.</em></p> <p>Let me quote them from their remarkable historical <a href="https://www.andrew.cmu.edu/user/avigad/Papers/infinite.pdf" rel="nofollow noreferrer">paper</a> &quot;Clarifying the nature of the infinite: the development of metamathematics and proof theory&quot;.</p> <blockquote> <p>The gradual rise of the opposing viewpoint, with its emphasis on conceptual reasoning and abstract characterization, is elegantly chronicled by <a href="http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Stein</a> (<a href="https://web.archive.org/web/20140415224643/http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Wayback Machine</a>), as part and parcel of what he refers to as the “second birth” of mathematics. The following quote, from Dedekind, makes the difference of opinion very clear:</p> </blockquote> <blockquote> <blockquote> <p>A theory based upon calculation would, as it seems to me, not offer the highest degree of perfection; it is preferable, as in the modern theory of functions, to seek to draw the demonstrations no longer from calculations, but directly from the characteristic fundamental concepts, and to construct the theory in such a way that it will, on the contrary, be in a position to predict the results of the calculation (for example, the decomposable forms of a degree).</p> </blockquote> </blockquote> <blockquote> <p>In other words, from the Cantor-Dedekind point of view, abstract conceptual investigation is to be preferred over calculation.</p> </blockquote> <p><strong>What are concrete examples from concrete fields avoiding calculations by the use of abstract notions?</strong> (Here &quot;calculation&quot; means any type of routine technicality.) 
Category theory and topoi may provide some examples.</p> <p>Thanks in advance.</p>
Pietro Majer
6,101
<p>A beautiful classical example from Functional Analysis is the Hausdorff moment problem: characterize the sequences <span class="math-container">$m:=(m_0,m_1,\dots)$</span> of real numbers that are moments of some positive, finite Borel measure on the unit interval <span class="math-container">$I:=[0,1]$</span>: <span class="math-container">$$m_k=\int_I x^kd\mu(x).$$</span> A necessary condition immediately comes from <span class="math-container">$\int_I x^{\ j}(1-x)^{\ k} d\mu(x)\geq0$</span>, and is expressed by saying that <span class="math-container">$m$</span> has to be a &quot;<em>completely monotone</em>&quot; sequence, that is <span class="math-container">$$(I-S)^k m\ge0,$$</span> where <span class="math-container">$S$</span> is the shift operator acting on sequences (in other words, the <span class="math-container">$k$</span>-th discrete difference of <span class="math-container">$m$</span> has the sign of <span class="math-container">$(-1)^k$</span>: <span class="math-container">$m$</span> is positive, decreasing, convex,...). The nontrivial fact is that this is also a sufficient condition, thus characterizing the sequences of moments. Moreover, the measure is then unique.</p> <p>I'll quote two proofs, both very nice. The first is close to the original one by Hausdorff; the second is a consequence of Choquet's theorem.</p> <p><strong>Proof I, with computation (skipped).</strong> Bernstein polynomials <span class="math-container">$$B_nf(x):=\sum_{k=0}^n f\big(\frac{k}{n}\big)\Big( {n \atop k}\Big)x^k(1-x)^{n-k}, \qquad n\in\mathbb{N},\; f\in C^0(I), \; x\in I ,$$</span> define a sequence of linear positive operators strongly convergent to the identity <span class="math-container">$$B_n:C^0(I)\to C^0(I)\ .$$</span></p> <p>Therefore the transpose operators <span class="math-container">$$B_n ^*:C^0(I)^ *\to C^0(I)^ *$$</span> give a sequence of operators weakly convergent to the identity.
If you write down what is <span class="math-container">$B_n^ *(\mu)$</span> for a Radon measure <span class="math-container">$\mu\in C^0(I)^ *$</span> you'll observe that it is a linear combination of Dirac measures located at the points <span class="math-container">$\{k/n\}_{0\leq k\leq n}$</span>, and with coefficients only depending on the moments of <span class="math-container">$\mu$</span>. This gives a uniqueness result and a heuristic argument: if <span class="math-container">$m$</span> is a sequence of moments for some measure <span class="math-container">$\mu$</span>, then <span class="math-container">$\mu$</span> can be reconstructed by its moments as a weak* limit of discrete measures <span class="math-container">$\mu_n:=B_n^*(\mu)$</span>. This observation leads to a constructive solution of the problem. Indeed, given a completely monotone sequence <span class="math-container">$m$</span>, consider the corresponding sequence of measures <span class="math-container">$\mu_n$</span> suggested by the expression of <span class="math-container">$B_n^*(\mu)$</span> in terms of the <span class="math-container">$(m_k)$</span>. Due to the assumption of complete monotonicity they turn out to be positive measures, and with some more computations one shows that they converge weakly* to a measure <span class="math-container">$\mu$</span> with moment sequence <span class="math-container">$m$</span>.</p> <p><strong>Proof II, no or little computation.</strong> Completely monotone sequences with <span class="math-container">$m_0=1$</span> form a closed, convex, thus weakly* compact and metrizable subset <span class="math-container">$M$</span> of <span class="math-container">$l^\infty$</span>. 
A one-line, smart computation shows that the extremal points of <span class="math-container">$M$</span> are exactly the exponential sequences, <span class="math-container">$m^{ (t)}:=(1,t,t^2,\dots)$</span>, for <span class="math-container">$0\leq t \leq1$</span> (these turn out to be the moments of Dirac measures at points of <span class="math-container">$I$</span>, of course). By Choquet's theorem, for any given <span class="math-container">$m\in M$</span> there exists a probability measure on <span class="math-container">$\mathrm{ex}(M),$</span> which we identify with <span class="math-container">$I,$</span> such that <span class="math-container">$m=\int_I m^{ (t) } d\mu(t).$</span> But this exactly means <span class="math-container">$m_k=\int_I t^{\ k} d \mu(t)$</span> for all <span class="math-container">$k\in\mathbb{N}.$</span></p>
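Proof I can be illustrated numerically. Expanding $(1-x)^{n-k}$ shows that the weight of $B_n^*(\mu)$ at the point $k/n$ is $\binom{n}{k}\,((I-S)^{n-k}m)_k$, so the discrete measures can be built from the moment sequence alone. The sketch below (the helper name and the choice of Lebesgue measure on $[0,1]$ are mine, not from the answer) uses exact rational arithmetic to check positivity of the weights and recovery of the first moment:

```python
from math import comb
from fractions import Fraction

def hausdorff_weights(m, n):
    # Weights of the discrete measure B_n^*(mu) at the points k/n, from moments m:
    # w_k = C(n,k) * ((I - S)^{n-k} m)_k = C(n,k) * sum_j C(n-k,j) (-1)^j m[k+j]
    return [comb(n, k) * sum(comb(n - k, j) * (-1) ** j * m[k + j]
                             for j in range(n - k + 1))
            for k in range(n + 1)]

n = 30
m = [Fraction(1, j + 1) for j in range(n + 1)]   # moments of Lebesgue measure on [0,1]
w = hausdorff_weights(m, n)

assert all(wk >= 0 for wk in w)                  # complete monotonicity => positive weights
assert sum(w) == m[0]                            # total mass m_0 = 1
# Bernstein operators reproduce linear functions, so the mean is m_1 exactly
assert sum(Fraction(k, n) * wk for k, wk in enumerate(w)) == m[1]
```

For Lebesgue measure every weight comes out equal to $1/(n+1)$, consistent with the Beta integral $\binom{n}{k}\int_0^1 x^k(1-x)^{n-k}\,dx = \frac{1}{n+1}$.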
30,220
<p>Jeremy Avigad and Erich Reck claim that one factor leading to abstract mathematics in the late 19th century (as opposed to concrete mathematics or hard analysis) was <em>the use of more abstract notions to obtain the same results with fewer calculations.</em></p> <p>Let me quote them from their remarkable historical <a href="https://www.andrew.cmu.edu/user/avigad/Papers/infinite.pdf" rel="nofollow noreferrer">paper</a> &quot;Clarifying the nature of the infinite: the development of metamathematics and proof theory&quot;.</p> <blockquote> <p>The gradual rise of the opposing viewpoint, with its emphasis on conceptual reasoning and abstract characterization, is elegantly chronicled by <a href="http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Stein</a> (<a href="https://web.archive.org/web/20140415224643/http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Wayback Machine</a>), as part and parcel of what he refers to as the “second birth” of mathematics. The following quote, from Dedekind, makes the difference of opinion very clear:</p> </blockquote> <blockquote> <blockquote> <p>A theory based upon calculation would, as it seems to me, not offer the highest degree of perfection; it is preferable, as in the modern theory of functions, to seek to draw the demonstrations no longer from calculations, but directly from the characteristic fundamental concepts, and to construct the theory in such a way that it will, on the contrary, be in a position to predict the results of the calculation (for example, the decomposable forms of a degree).</p> </blockquote> </blockquote> <blockquote> <p>In other words, from the Cantor-Dedekind point of view, abstract conceptual investigation is to be preferred over calculation.</p> </blockquote> <p><strong>What are concrete examples from concrete fields avoiding calculations by the use of abstract notions?</strong> (Here &quot;calculation&quot; means any type of routine technicality.) 
Category theory and topoi may provide some examples.</p> <p>Thanks in advance.</p>
Peter LeFanu Lumsdaine
2,273
<p>A toy example, using the Yoneda lemma:</p> <p><strong>Claim:</strong> There are two canonical bialgebra structures (the “additive” and “multiplicative” structures) on $k[x]$, and one of them (the additive one) in fact makes it a Hopf algebra.</p> <p><strong>Proof 1:</strong> (Calculation.) Write down the formulas; check the axioms! This isn't an especially long calculation, but it's a bit tedious; while seeing the formulas is nice, checking the axioms isn't (to my taste) especially enlightening.</p> <p><strong>Proof 2:</strong> (Abstract.) “Bialgebra” = “comonoid in ($k$-<strong>Alg</strong>,$\otimes$)”. We know $k[x]$ is the free $k$-algebra on one generator, so there's a natural isomorphism $\mathrm{Hom}(k[x],A) \cong A$, for any $k$-algebra $A$. So $\mathrm{Hom}(k[x],A)$ is naturally an algebra — so it has two natural monoid structures, + and $\cdot$, and under + it's moreover a group. By Yoneda, these must correspond to two comonoid structures on $k[x]$, and the one corresponding to + must be Hopf!</p> <p>Now, what I really like about this proof is that it still connects closely to the computations. By the way that the Yoneda lemma works, you can read off what the two coalgebra structures actually are; but now you don't have to check the axioms, since you already know they hold! Also, you now know there'll be a “co-distributive law” connecting the two, which you might never have thought of just from the first approach… And also, this gives a way of looking for bialgebra structures on other algebras: look at what they classify/represent!</p> <p>This shows up, I think, a lot of the power of abstract approaches. They put formulas and calculations into a bigger picture; they can help you do interesting calculations, while letting you skip tedious ones; and they can suggest calculations you might not have thought of doing otherwise. But (as you can probably guess from that) I love calculation too: I wouldn't want either without the other. 
If abstract nonsense is the garden, concrete computations are the flowers.</p>
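In the spirit of keeping both the garden and the flowers: the additive comultiplication read off via Yoneda, $\Delta(x)=x\otimes 1+1\otimes x$, hence $\Delta(x^n)=\sum_k \binom{n}{k}x^k\otimes x^{n-k}$, can be checked by direct computation. A minimal sketch (monomial basis over $\mathbb{Q}$, sparse dicts as tensors; all names here are mine) verifying coassociativity on $x^n$:

```python
from math import comb, factorial

def delta_add(n):
    # additive comultiplication on k[x]: Delta(x^n) = sum_k C(n,k) x^k (x) x^{n-k},
    # stored as a sparse tensor {(a, b): coefficient}
    return {(k, n - k): comb(n, k) for k in range(n + 1)}

def apply_left(t):
    # (Delta (x) id) on a 2-tensor -> 3-tensor {(i, j, b): coeff}
    out = {}
    for (a, b), c in t.items():
        for (i, j), d in delta_add(a).items():
            out[(i, j, b)] = out.get((i, j, b), 0) + c * d
    return out

def apply_right(t):
    # (id (x) Delta)
    out = {}
    for (a, b), c in t.items():
        for (i, j), d in delta_add(b).items():
            out[(a, i, j)] = out.get((a, i, j), 0) + c * d
    return out

n = 7
t = delta_add(n)
lhs, rhs = apply_left(t), apply_right(t)
assert lhs == rhs                                   # coassociativity on x^n
multinom = {(a, b, c): factorial(n) // (factorial(a) * factorial(b) * factorial(c))
            for (a, b, c) in lhs}
assert lhs == multinom                              # coefficients are multinomials
```

Both iterated coproducts give the coefficient $\binom{n}{a,b,c}$ on $x^a\otimes x^b\otimes x^c$, exactly as the "sum in three summands" picture from the functor of points predicts.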
30,220
<p>Jeremy Avigad and Erich Reck claim that one factor leading to abstract mathematics in the late 19th century (as opposed to concrete mathematics or hard analysis) was <em>the use of more abstract notions to obtain the same results with fewer calculations.</em></p> <p>Let me quote them from their remarkable historical <a href="https://www.andrew.cmu.edu/user/avigad/Papers/infinite.pdf" rel="nofollow noreferrer">paper</a> &quot;Clarifying the nature of the infinite: the development of metamathematics and proof theory&quot;.</p> <blockquote> <p>The gradual rise of the opposing viewpoint, with its emphasis on conceptual reasoning and abstract characterization, is elegantly chronicled by <a href="http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Stein</a> (<a href="https://web.archive.org/web/20140415224643/http://mcps.umn.edu/philosophy/11_10Stein.pdf" rel="nofollow noreferrer">Wayback Machine</a>), as part and parcel of what he refers to as the “second birth” of mathematics. The following quote, from Dedekind, makes the difference of opinion very clear:</p> </blockquote> <blockquote> <blockquote> <p>A theory based upon calculation would, as it seems to me, not offer the highest degree of perfection; it is preferable, as in the modern theory of functions, to seek to draw the demonstrations no longer from calculations, but directly from the characteristic fundamental concepts, and to construct the theory in such a way that it will, on the contrary, be in a position to predict the results of the calculation (for example, the decomposable forms of a degree).</p> </blockquote> </blockquote> <blockquote> <p>In other words, from the Cantor-Dedekind point of view, abstract conceptual investigation is to be preferred over calculation.</p> </blockquote> <p><strong>What are concrete examples from concrete fields avoiding calculations by the use of abstract notions?</strong> (Here &quot;calculation&quot; means any type of routine technicality.) 
Category theory and topoi may provide some examples.</p> <p>Thanks in advance.</p>
Qiaochu Yuan
290
<p>The first proof I ever saw of the orthogonality relations for characters of finite groups was computational: it did a lot of matrix computations and manipulations of sums, which I didn't like at all. There is a much more conceptual proof which begins by observing that Schur's lemma is equivalent to the claim that</p> <p><span class="math-container">$$\text{dim Hom}(A, B) = \delta_{AB}$$</span></p> <p>for irreducible representations <span class="math-container">$A, B$</span>, where <span class="math-container">$\text{Hom}$</span> denotes the set of <span class="math-container">$G$</span>-module homomorphisms. One then observes that <span class="math-container">$\textbf{Hom}(A, B) = A^{*} \otimes B$</span> is itself a <span class="math-container">$G$</span>-module and <span class="math-container">$\text{Hom}$</span> is precisely the submodule consisting of the copies of the trivial representation. Finally, the projection from <span class="math-container">$\textbf{Hom}$</span> to <span class="math-container">$\text{Hom}$</span> can be written</p> <p><span class="math-container">$$v \mapsto \frac{1}{|G|} \sum_{g \in G} gv$$</span></p> <p>and the trace of a projection is the dimension of its image.</p> <p>I particularly like this proof because the statement of the orthogonality relations is concrete and not abstract, but this proof shows exactly where the abstract content (Schur's lemma, Maschke's theorem) is made concrete (the trace computation). 
It also highlights the value of viewing the category of <span class="math-container">$G$</span>-modules as an algebraic object in and of itself: a symmetric monoidal category with duals.</p> <p>In addition, this interpretation of Schur's lemma suggests that <span class="math-container">$\text{Hom}(A, B)$</span> behaves like a categorification of the inner product in a Hilbert space, where the contravariant/covariant distinction between <span class="math-container">$A, B$</span> corresponds to the conjugate-linear/linear distinction between the first and second entries of an inner product. This leads to 2-Hilbert spaces and is a basic motivation for the term &quot;adjoint&quot; in category theory, as explained for example by John Baez <a href="https://arxiv.org/abs/q-alg/9609018" rel="nofollow noreferrer">here</a>. It is also related to quantum mechanics, where one thinks of the inner product as describing the amplitude that a transition between two states occurs and of <span class="math-container">$\text{Hom}(A, B)$</span> as describing the way those transitions occur. John Baez explains related ideas <a href="https://math.ucr.edu/home/baez/rosetta.pdf" rel="nofollow noreferrer">here</a>.</p>
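The orthogonality relations that come out of the trace computation are easy to sanity-check numerically. Here is a minimal check for $S_3$ (the function names are mine; the characters of $S_3$ are real, so no conjugation is needed):

```python
import itertools

# The six permutations of {0, 1, 2}: the symmetric group S_3
G = list(itertools.permutations(range(3)))

def chi_triv(p):
    # trivial 1-dim representation
    return 1

def chi_sign(p):
    # sign representation, via inversion count
    inv = sum(1 for i in range(3) for j in range(i) if p[j] > p[i])
    return 1 if inv % 2 == 0 else -1

def chi_std(p):
    # standard 2-dim representation: chi(g) = (#fixed points) - 1
    return sum(1 for i in range(3) if p[i] == i) - 1

chars = [chi_triv, chi_sign, chi_std]

# Orthogonality: (1/|G|) sum_g chi_A(g) chi_B(g) = delta_{AB}
for a, chiA in enumerate(chars):
    for b, chiB in enumerate(chars):
        ip = sum(chiA(g) * chiB(g) for g in G) / len(G)
        assert ip == (1.0 if a == b else 0.0)
```

The inner sums are integers ($|G|$ on the diagonal, $0$ off it), so the floating-point comparison is exact here.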
1,336,381
<p>Find the sum of solutions to:</p> <p>$$ 2\log^2_{4}(|x+1|)+\log_4(|x^2-1|)+\log_{\frac{1}{4}}(|x-1|)=0 $$</p> <p>I'm not sure what to do with the absolute values; how can I get rid of them?</p> <p>Should I solve all the cases depending on the signs of $x+1$ and $x-1$?</p>
Ben Grossmann
81,360
<p>The answer is no. As a counterexample, consider the sequence $$ f_n(x) = \frac{n}{n+1}\cdot \frac{1}{x} $$</p>
4,275,780
<blockquote> <p>If <span class="math-container">$0&lt;a&lt;b$</span> and <span class="math-container">$0&lt;c&lt;d$</span> then <span class="math-container">$\frac{c+a}{d+a} &lt;\frac{c+b}{d+b}.$</span></p> </blockquote> <p>I get to <span class="math-container">$$d+a&lt;d+b \Longrightarrow \frac{1}{d+b} &lt; \frac{1}{d+a}$$</span> but that inequality seems opposite of what I am trying to prove. Any advice is appreciated.</p>
John Joy
140,156
<p>It may be helpful to just rename some of the quantities.</p> <p>Suppose <span class="math-container">$$\alpha = c+a\\ \beta=d+a\\ \gamma=b-a$$</span> then <span class="math-container">$$\begin{align} \frac{\alpha}{\beta}&amp;\lt\frac{\alpha+\gamma}{\beta+\gamma}\\ \alpha\beta + \alpha\gamma&amp;\lt \alpha\beta +\beta\gamma\\ \alpha\gamma&amp;\lt\beta\gamma\\ \alpha&amp;\lt\beta\\ \end{align}$$</span></p> <p>Now, reversing these steps should give you a road map to writing a formal proof.</p>
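The road map above can be sanity-checked before writing the formal proof. A quick randomized test of the original inequality (the sampling setup is mine; the seed makes it deterministic):

```python
import random

random.seed(0)
for _ in range(10000):
    a = random.uniform(0.001, 10)
    b = a + random.uniform(0.001, 10)      # ensures 0 < a < b
    c = random.uniform(0.001, 10)
    d = c + random.uniform(0.001, 10)      # ensures 0 < c < d
    # the claim: (c+a)/(d+a) < (c+b)/(d+b)
    assert (c + a) / (d + a) < (c + b) / (d + b)
```

This matches the algebra: cross-multiplying reduces the inequality to $(c-d)(b-a)<0$, which holds since $c<d$ and $a<b$.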
2,910,101
<p>A positive integer $X$ is said to be a cube-loving number if it can be written as $a^3 \cdot b$ for some positive integers $a$ and $b$ ($a&gt;1$, $b \ge 1$). Given a positive integer $n$, determine the number of cube-loving numbers less than or equal to $n$.</p>
Yanlong LIU
1,057,860
<p>Just use Theorem 2.3.1, the usual Chernoff upper-bound inequality. It gives almost the same result as the first answer, namely <span class="math-container">$t \log(t) + O(t) \geq \log(n) - \epsilon$</span>, which shows <span class="math-container">$t = O(\log(n)/\log\log(n))$</span>.</p>
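"Theorem 2.3.1" and "the first answer" refer to material not quoted here, but the final asymptotic step — that $t\log t \gtrsim \log n$ forces $t = O(\log n/\log\log n)$ — can be illustrated numerically (the helper name and the constant $2$ are mine):

```python
import math

def smallest_t(L):
    # smallest t >= e with t * log(t) >= L, by bisection
    # (t * log t is increasing for t >= e, so bisection is valid)
    lo, hi = math.e, max(math.e, L) + 2
    while hi * math.log(hi) < L:
        hi *= 2
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * math.log(mid) >= L:
            hi = mid
        else:
            lo = mid
    return hi

for n in [10**3, 10**6, 10**12, 10**24, 10**48]:
    L = math.log(n)
    t = smallest_t(L)
    bound = L / math.log(L)          # the claimed order log(n) / loglog(n)
    assert t <= 2 * bound            # constant 2 suffices over this range
```

The ratio $t / (\log n/\log\log n)$ tends to $1$ as $n$ grows, consistent with the stated big-O.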
3,257,799
<blockquote> <p>Find all values of <span class="math-container">$a$</span> for which the equation <span class="math-container">$$ (a-1)4^x + (2a-3)6^x = (3a-4)9^x $$</span> has only one solution.</p> </blockquote> <p><br> I have two cases, one when <span class="math-container">$a = 1$</span> and other when Discriminant <span class="math-container">$= 0$</span>.<br> The answers I get are <span class="math-container">$a =1; a=0,5$</span>. <br> Since I don't have the answers could sb tell me if I am right.</p>
nonuser
463,553
<p>Write <span class="math-container">$t=3^x/2^x&gt;0$</span>, then we have <span class="math-container">$$(a-1)+(2a-3)t=(3a-4)t^2$$</span></p> <p>Case <span class="math-container">$a={4\over 3}$</span>: the equation is linear, so <span class="math-container">$$t= {1-a\over 2a-3}=1\implies \Big({3\over 2}\Big)^x= 1 \implies x=0$$</span></p> <p>Case <span class="math-container">$a\ne {4\over 3}$</span>: the equation is quadratic, so we have two subcases. </p> <ul> <li>The discriminant is <span class="math-container">$0$</span>: <span class="math-container">$$(2a-3)^2+4(3a-4)(a-1)=0$$</span> which is the same as <span class="math-container">$$(4a-5)^2 =0$$</span> so <span class="math-container">$a={5\over 4}$</span>. In this case we get <span class="math-container">$t=1$</span> again.</li> <li>The discriminant is <span class="math-container">$&gt;0$</span>: then we have two solutions <span class="math-container">$t_1,t_2$</span>, and if we want to have exactly one then one must be positive and the other negative, so <span class="math-container">$$t_1\cdot t_2 &lt;0\implies {1-a\over 3a-4}&lt;0 \implies a\in (-\infty, 1]\cup [{4\over 3},\infty)\cup\{{5\over 4}\}$$</span> (The last inequality follows from Vieta's formulas.)</li> </ul>
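The case analysis can be cross-checked numerically: each positive root $t$ of the quadratic corresponds to exactly one real $x = \log_{3/2} t$, so counting positive roots counts solutions. A sketch (the function name is mine; $a = 5/4$ is exactly representable in binary floating point, so the `disc == 0` test is safe for that particular value):

```python
import math

def positive_roots(a):
    # roots t > 0 of (3a-4) t^2 - (2a-3) t - (a-1) = 0, where t = (3/2)^x
    A, B, C = 3 * a - 4, -(2 * a - 3), -(a - 1)
    if A == 0:                          # a = 4/3: the equation is linear
        return [-C / B] if B != 0 and -C / B > 0 else []
    disc = B * B - 4 * A * C
    if disc < 0:
        return []
    if disc == 0:                       # double root (here: a = 5/4)
        t = -B / (2 * A)
        return [t] if t > 0 else []
    r = math.sqrt(disc)
    return [t for t in ((-B + r) / (2 * A), (-B - r) / (2 * A)) if t > 0]

assert len(positive_roots(4 / 3)) == 1   # linear case: t = 1, i.e. x = 0
assert len(positive_roots(5 / 4)) == 1   # double root t = 1
assert len(positive_roots(1.0)) == 1     # the asker's a = 1
assert len(positive_roots(0.0)) == 1     # root product (1-a)/(3a-4) < 0: one positive root
assert len(positive_roots(1.2)) == 2     # two positive roots, hence two solutions x
```

Note that $a = 1.2$ lies strictly between $1$ and $4/3$ and away from $5/4$, and indeed gives two solutions, matching the interval excluded in the answer.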
69,448
<p>What <code>Method</code> options are allowed for <code>DensityPlot</code> and <code>ContourPlot</code>? I am unable to find this information either in MMA documentation or in SE. Thanks.</p>
Simon Woods
862
<p>As far as I know there is no documented list of <code>Method</code> options for <code>ContourPlot</code> and <code>DensityPlot</code>. If you want to experiment there is a large list of strings in <code>Charting`CommonDump`$VisualizationMethodOptions</code> to have a look at. Some of these are option settings, some are option values. Most seem to have no effect on a simple <code>ContourPlot</code> and probably apply to different visualization functions or only apply in specific circumstances.</p> <p>For an example here are a couple that affect the creation of the mesh:</p> <pre><code>GraphicsRow[ ContourPlot[Sin[x y], {x, -2, 2}, {y, -2, 2}, Mesh -&gt; All, PlotPoints -&gt; 5, Method -&gt; {#}] &amp; /@ {Automatic, "Subdivision" -&gt; "Loop", "PolygonReduction" -&gt; 100}] </code></pre> <p><img src="https://i.stack.imgur.com/1JRMz.png" alt="enter image description here"></p> <p>This is the complete list:</p> <pre><code>{"ScalingFunctions", "PlotRandomSeed", "DraftRendering", "ArrayPlot", "InvertNormalsDirection", "BoundaryOffset", "Refinement", "MeshBoundaryValues", "StepsJoined", "SurfaceStitch", "UnboundedPolygons", "InterpolateMesh", "DownsampleWindow", "FilterMeshAll", "OriginalCoordinates", "ReturnMeshObject", "ReturnRawMeshObject", "MeshMaxRecursion", "ContourMaxRecursion", "DelaunayDomainScaling", "PolygonReverse", "VertexAliasTolerance", "Average", "Fan", "Seidel", "Constrained", "GradientAligned", "MeshRegions", "PathPolygons", "SimplifyPaths", "PackGraphicsComplex", "SnapContourVertex", "PlotTheme", "VertexColorsPalette", "VectorBackgroundPadding", "ReturnImage", "Closed3DRegion", "PolygonColoring", "Equalized", "EqualizeColor", "ColorFunctionData", "Valence", "Laplace", "Conformal", "RGBColorSpace", "GrayColorSpace", "ParallelPlotEvaluate", "ParallelPlotMethod", "ParallelPlotParameters", "LightingMethod", "DiffuseReflection", "AspectBasedShading", "Contrast", "Brightness", "Saturation", "SpatialResolution", "ElevationDefault", "IlluminationModel", 
"AngularDistanceRadius", "UseNumericalFunction", "NumericalFunction", "FlattenFunctions", "SuppressMessages", "MessagesHead", "MappingFunctions", "DomainMappingFunctions", "LegacyColorFunctionProcessing", "ContourShadingPrefixFunction", "ContoursPrefixFunction", "UseCaching", "CutMeshLines", "FillBoundaryLines", "Ungroup", "CloseMeshThickness", "ClipNoneMeshShading", "ClipAtPlotRange", "ClipMeshOverlay", "ClipBoundaryLines", "Subdivision", "CellDecomposition", "Divisions", "ControlValue", "VertexTolerance", "MaxBoundaryEdgeLength", "CellCuboids", "Dihedral", "Gaussian", "GradientNorm", "Loop", "Contouring", "Curvature", "ContourLevel", "PlanarRectangular", "Triangular", "Quad", "QuadTriangular", "Length", "Area", "Perimeter", "BhatiaLawrence", "AverageNormal", "WeightedNormal", "Barycenter", "Cotangents", "Circumcenter", "Incenter", "Inradius", "Circumradius", "InteriorAngles", "Dual", "OFF", "NOFF", "Frenet", "NaturalNeighbour", "InverseDistance", "Kriging", "MaxMemoryUse", "Intersect", "FullLattice", "MarchingCubes", "AdvancedMarchingCubes", "AdaptiveTriangular", "Octree", "OctreeCubes", "Algebraic", "Cubical", "Tetra", "Seeds", "Linear", "Bisect", "NoiseDelta", "ContourSpacing", "MeshSpacing", "Quantile", "CurveLength", "ArcLength", "DataLineMesh", "DataPointMesh", "GraphicsIndex", "SymbolicPiecewiseSubdivision", "pw", "PiecewiseTimeConstraint", "SymbolicPointsTimeConstraint", "Singularities", "Isolated", "SingularCurves", "SingularMaxRecursion", "ExclusionsOffset", "PolygonReduction", "Polygons", "PolygonContraction", "PointPlacement", "QuadricWeighting", "CompactnessRatio", "MeshPenalty", "BoundaryWeight", "EndPoint", "EndOrMidPoint", "LinearPoint", "OptimalPoint", "UniformWeight", "AreaWeight", "AngleWeight", "AverageWeight", "AreaAverageWeight", "NormalWeight", "VerticesGoal", "EdgesDistanceGoal", "MinArea", "PreserveInteriorFaces", "SegmentPartition", "SegmentLengthGoal", "LegendsFunction", "Legend", "Extrapolation", "Interpolation", "PointsToSpheres", 
"Caps", "ConnectEnds", "StreamlinesMethod", "StreamlinesSamplingStep", "StreamlinesInsertionStep", "StreamlinesParameterLimit", "StreamlinesNDSolve", "StreamlinesNumericalFunction", "LICLines", "LICMinHits", "LICMaxLines", "AccuracyGoal", "PrecisionGoal", "HSBChannel", "LICModulate", "NewtonFlow", "PerturbateFrame", "PerturbateSeeds", "ParseGlyphStyle", "LinePath", "LineArrow", "GlyphPath", "Directional", "DirectionalScaled", "Velocity", "VelocityScaled", "ControlPoints", "BSplineCurve", "BezierCurve", "NURBSCurve", "XSplineCurve", "BSplineShape", "BezierShape", "NURBSShape", "XSplineShape", "SharedMemoryReference", "Method", "Automatic", "None", "All", "True", "False"} </code></pre>
4,549,340
<p>I have heard people say that the flight from Fort Lauderdale to Seattle is the longest possible flight within the continental United States. Upon further consideration, however, I realized that the curvature of the Earth may matter: the circumference of a cross section of the Earth shrinks as you move away from the equator, so a change in cross-sectional circumference can affect the true distance between Fort Lauderdale and a destination in a way that a 2D map — which is what most people picture when considering flight times — does not show. For instance, flying around the globe along the equator takes longer than flying around it at Seattle's latitude, because the Earth is &quot;wider&quot; at Florida's latitude than at Seattle's; by the same reasoning, it should take longer to travel from Florida to San Diego than from Maine to Seattle. With this in mind, does the Earth's curvature affect the apparent distance (on a 2D map) between Fort Lauderdale and Seattle, and if so, is there another location in the continental United States with a longer flight time from Fort Lauderdale? Although Seattle is north of Fort Lauderdale and thus farther on the map, might there be a location south of Seattle that is actually farther away simply because of this change in cross-sectional circumference?</p>
Rohit Yadav
910,446
<p>For all <span class="math-container">$n, m &gt; N_\varepsilon$</span>, without loss of generality let <span class="math-container">$m &gt; n$</span>. Then</p> <p><span class="math-container">$$|x_n - x_m| = |x_n - x_{n+1} + x_{n+1} - x_{n+2} + \cdots + x_{m-1} - x_m| \leq |x_n - x_{n+1}| + |x_{n+1} - x_{n+2}| + \cdots + |x_{m-1} - x_m|$$</span></p> <p><span class="math-container">$$&lt; \frac{1}{2^n} + \frac{1}{2^{n+1}} + \cdots + \frac{1}{2^{m-1}} = \frac{1}{2^n}\cdot\frac{1 - 1/2^{m-n}}{1 - 1/2} &lt; \frac{2}{2^n} &lt; \varepsilon.$$</span></p> <p>Thus <span class="math-container">$(x_n)$</span> is a Cauchy sequence, implying that it is convergent.</p>
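The telescoping bound can be demonstrated on a concrete sequence satisfying $|x_n - x_{n+1}| < 2^{-n}$ (the random construction and seed are mine):

```python
import random

random.seed(1)
# Build a sequence with |x_n - x_{n+1}| < 1/2^n: random signed steps of size < 2^{-n}
steps = [random.choice([-1, 1]) * random.uniform(0, 1) / 2 ** n for n in range(60)]
x = [0.0]
for s in steps:
    x.append(x[-1] + s)

# Check the derived bound |x_n - x_m| < 2/2^n for all m > n
for n in range(len(x)):
    for m in range(n + 1, len(x)):
        assert abs(x[n] - x[m]) < 2 / 2 ** n
```

Every tail of the sequence is trapped in an interval of length $2^{1-n}$, which is exactly the Cauchy property made visible.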
1,653,106
<p>I was following a calculus tutorial that factored the equation $x^4-16$ into $(x^2 +4) (x+2)(x-2)$.</p> <p>Why is the factorization of $x^4-16 = (x^2 + 4)(x+2)(x-2)$ rather than $(x^2 - 4)(x^2 +4)$? </p>
Cameron Buie
28,900
<p>They are <em>both</em> factorizations of $x^4-16,$ but $(x^2+4)(x^2-4)$ is a less complete factorization.</p>
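Both factorizations, and the extra step connecting them, can be verified by multiplying the factors back out. A small sketch with polynomials as coefficient lists (lowest degree first; the helper name is mine):

```python
def poly_mul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

x2p4 = [4, 0, 1]            # x^2 + 4
x2m4 = [-4, 0, 1]           # x^2 - 4
xp2, xm2 = [2, 1], [-2, 1]  # x + 2, x - 2

target = [-16, 0, 0, 0, 1]  # x^4 - 16
assert poly_mul(x2p4, x2m4) == target                 # partial factorization
assert poly_mul(poly_mul(x2p4, xp2), xm2) == target   # complete (over the rationals)
assert poly_mul(xp2, xm2) == x2m4                     # the missing step: x^2-4 = (x+2)(x-2)
```

The last assertion is precisely the refinement the tutorial applied: $x^2-4$ is itself a difference of squares, while $x^2+4$ has no further factors with real coefficients.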
2,990,947
<p>If <span class="math-container">$r$</span> stands for counter-clockwise 90 degree rotation and <span class="math-container">$s$</span> stands for horizontal flip, then <span class="math-container">$D_4= \{1, r, r^2, r^3, s, rs, r^2s, r^3s\}$</span>. What rule should I apply to find the subgroups of <span class="math-container">$D_4$</span>? Should I just put elements with the same order in the same subgroup?</p>
gt6989b
16,192
<p><strong>HINT</strong></p> <p>Pick elements one by one and see what happens to their generated subgroups (i.e. orbits under the operation <span class="math-container">$\cdot$</span>). Then try to mix them with each other. E.g.</p> <ul> <li><span class="math-container">$O(s): 1, s, s^2=1$</span></li> <li><span class="math-container">$O(r^2): 1, r^2, (r^2)^2 = r^4 = 1$</span></li> <li><span class="math-container">$O(r^2,s): 1, s, r^2, r^2 s, 1$</span></li> </ul> <p>Can you find some other ones?</p> <p><strong>UPDATE</strong></p> <p>In more detail, to see what the orbit generated by <span class="math-container">$s$</span> and <span class="math-container">$r^2$</span> is, you apply the operation to all possible combinations of the base elements:</p> <ul> <li><span class="math-container">$s$</span> generates just itself since <span class="math-container">$s^2=1$</span></li> <li><span class="math-container">$r^2$</span> generates just itself since <span class="math-container">$(r^2)^2 = r^4 = 1$</span></li> <li>The last thing left is to multiply them, getting <span class="math-container">$r^2 s$</span> (and <span class="math-container">$sr^2$</span> if the operation was not commutative but here it is). Finally, <span class="math-container">$(sr^2) \cdot s = r^2$</span> and <span class="math-container">$(sr^2) \cdot r^2 = s$</span> so no new elements are generated here either.</li> </ul> <p>Hence, the final orbit is <span class="math-container">$$O(r^2,s) = \{1,s,r^2, sr^2\}.$$</span></p>
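The orbit-by-orbit search in the hint can be automated to confirm the full list. A brute-force sketch (the encoding of elements as pairs $(k,f) \mapsto r^k s^f$ is my own choice; every subgroup of a group of order 8 whose proper subgroups are cyclic or Klein four is generated by at most two elements, so closing all pairs suffices):

```python
from itertools import combinations

# Elements of D4 as pairs (k, f) meaning r^k s^f, with k mod 4, f mod 2.
# The relation s r = r^{-1} s gives the multiplication rule below.
def mul(a, b):
    (k1, f1), (k2, f2) = a, b
    return ((k1 + (-1) ** f1 * k2) % 4, (f1 + f2) % 2)

D4 = [(k, f) for k in range(4) for f in range(2)]

def closure(gens):
    # smallest subset containing 1 and gens, closed under mul
    # (a finite set closed under the operation is automatically a subgroup)
    S = {(0, 0)} | set(gens)
    while True:
        new = {mul(a, b) for a in S for b in S} - S
        if not new:
            return frozenset(S)
        S |= new

subgroups = {closure(c) for r in range(3) for c in combinations(D4, r)}
assert len(subgroups) == 10
assert sorted(len(H) for H in subgroups) == [1, 2, 2, 2, 2, 2, 4, 4, 4, 8]
```

So there are 10 subgroups: the trivial one, five of order 2 (generated by $r^2$ and the four reflections), three of order 4 ($\langle r\rangle$ and the two Klein four-groups from the hint), and $D_4$ itself — grouping by element order alone would not have found the mixed ones like $\{1, r^2, s, r^2s\}$.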
2,662,554
<p>I have to use proof by contradiction to show that if $n^2 - 2n + 7$ is even then $n + 1$ is even. </p> <p>Assume $n^2 - 2n + 7$ is even then $n + 1$ is odd. By definition of odd integers, we have $n = 2k+1$. </p> <p>What I have done so far:</p> <p>\begin{align} &amp; n + 1 = (2k+1)^2 - 2(2k+1) + 7 \\ \implies &amp; (2k+1) = (4k^2+4k+1) - 4k-2+7-1 \\ \implies &amp; 2k+1 = 4k^2+1-2+7-1 \\ \implies &amp; 2k = 4k^2 + 4 \\ \implies &amp; 2(2k^2-k+2) \end{align}</p> <p>Now, this is even but I wanted to prove that this is odd (the contradiction). Can someone help me figure out my mistake? </p> <p>Thank you.</p>
user
505,767
<p><strong>HINT</strong></p> <p>Note that</p> <ul> <li>$n^2-2n+7=n^2-2n+1+6=(n-1)^2+6$ is even $\iff$ $n-1$ is even</li> </ul> <p>and</p> <ul> <li>$n+1=(n-1)+2$</li> </ul>
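The hint's equivalence can be confirmed by brute force over a range of integers (the range is arbitrary):

```python
# (n-1)^2 + 6 = n^2 - 2n + 7 is even exactly when n - 1 (equivalently n + 1) is even
for n in range(-100, 101):
    lhs_even = (n * n - 2 * n + 7) % 2 == 0
    assert lhs_even == ((n - 1) % 2 == 0)
    assert lhs_even == ((n + 1) % 2 == 0)
```

This also exposes the slip in the attempted proof: if $n+1$ is assumed odd, then $n$ is even, so one should write $n = 2k$, not $n = 2k+1$.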
81,257
<p>The classic Donaldson-Kronheimer book (Geometry of 4-manifolds) uses the Yang Mills gradient flow (sometimes called heat flow) on $M$ all over the place,</p> <p>$\frac{d A}{dt} = -\frac{\delta YM(A)}{\delta A}$</p> <p>where $YM(A)$ is the Yang Mills 'action' the integral of the curvature square,</p> <p>$YM(A) = \int d^4x Tr F_{\mu\nu} F_{\mu\nu} &gt; 0$</p> <p>The setting is quite general, either $M$ is a general 4-manifold or Kahler manifold and so all theorems, existence, uniqueness, etc, are quite general.</p> <p>I'm wondering if there are further results somewhere for the specific case of $M = T^4$ the 4-torus. For example, is it known how the long time asymptotics look like in this case? Theorems about possible blow-ups? I'd think asymptotically the gradient flow drives the connection towards a critical point but is it known how it is approached, exponentially or polynomially in $t$?</p> <p>Actually for $M=T^4$ I suspect $t^2 YM(A(t))$ goes to a constant for $t\to\infty$ as long as the initial condition for the flow is in a sufficiently small neighbourhood of the absolute minimum of $YM(A)$ but I can't prove it. What is certainly true is that $YM(A(t))$ is a decreasing function of $t$.</p>
Orbicular
3,509
<p>I think you should just take a look at the following paper:</p> <p><a href="http://arxiv.org/PS_cache/arxiv/pdf/1103/1103.0845v1.pdf" rel="nofollow">http://arxiv.org/PS_cache/arxiv/pdf/1103/1103.0845v1.pdf</a></p> <p>(In particular, it shows exponential convergence.) Should there still be questions, you might post them!</p>
81,257
<p>The classic Donaldson-Kronheimer book (Geometry of 4-manifolds) uses the Yang Mills gradient flow (sometimes called heat flow) on $M$ all over the place,</p> <p>$\frac{d A}{dt} = -\frac{\delta YM(A)}{\delta A}$</p> <p>where $YM(A)$ is the Yang Mills 'action' the integral of the curvature square,</p> <p>$YM(A) = \int d^4x Tr F_{\mu\nu} F_{\mu\nu} &gt; 0$</p> <p>The setting is quite general, either $M$ is a general 4-manifold or Kahler manifold and so all theorems, existence, uniqueness, etc, are quite general.</p> <p>I'm wondering if there are further results somewhere for the specific case of $M = T^4$ the 4-torus. For example, is it known how the long time asymptotics look like in this case? Theorems about possible blow-ups? I'd think asymptotically the gradient flow drives the connection towards a critical point but is it known how it is approached, exponentially or polynomially in $t$?</p> <p>Actually for $M=T^4$ I suspect $t^2 YM(A(t))$ goes to a constant for $t\to\infty$ as long as the initial condition for the flow is in a sufficiently small neighbourhood of the absolute minimum of $YM(A)$ but I can't prove it. What is certainly true is that $YM(A(t))$ is a decreasing function of $t$.</p>
Willie Wong
3,948
<p>Donaldson and Kronheimer wrote their book by 1990. There were some further developments about long time behaviour of Yang-Mills flow on four manifolds by, among others, Struwe and collaborators. You may try starting with <a href="http://www.ams.org/mathscinet-getitem?mr=1443269" rel="nofollow">Schlatter's dissertation</a>. </p> <p>Crawling through MathSciNet reference links may get you "somewhere", but it is my impression that higher dimensional Yang-Mills heat flow is still somewhat of an open problem. Is there any reason why you expect $\mathbb{T}^4$ would be better behaved than, say, the unit ball?</p>
3,806,122
<p>I tried using Chinese remainder theorem but I kept getting 19 instead of 9.</p> <p>Here are my steps</p> <p><span class="math-container">$$ \begin{split} M &amp;= 88 = 8 \times 11 \\ x_1 &amp;= 123^{456}\equiv 2^{456} \equiv 2^{6} \equiv 64 \equiv 9 \pmod{11} \\ y_1 &amp;= 9^{-1} \equiv 9^9 \equiv (-2)^9 \equiv -512 \equiv -6 \equiv 5 \pmod{11}\\ x_2 &amp;= 123^{456} \equiv 123^0 \equiv 1 \pmod{8}\\ y_2 &amp;= 1^{-1} \equiv 1 \pmod{8} \\ 123^{456} &amp;\equiv \sum_{i=1}^2 x_i\times\frac{M}{m_i} \times y_i \equiv 9\times\frac{88}{11}\times5 + 1\times\frac{88}{8} \times1 \equiv 371 \equiv 19 \pmod{88} \end{split} $$</span></p>
Community
-1
<p>By Euler's theorem, we first get <span class="math-container">$123^{40}\equiv1\pmod{88}$</span>, since <span class="math-container">$\varphi(88)=40$</span> and <span class="math-container">$\gcd(123,88)=1$</span>. Since <span class="math-container">$456 \equiv 16 \pmod{40}$</span> and <span class="math-container">$123 \equiv 35 \pmod{88}$</span>, this reduces the problem to computing <span class="math-container">$35^{16}\pmod{88}$</span>.</p> <p>Now we use CRT: <span class="math-container">$\begin{cases}x\equiv 35^{16}\pmod8\\x\equiv35^{16}\pmod{11}\end{cases}$</span>.</p> <p>So, <span class="math-container">$x\equiv3^{16}\pmod8\implies x\equiv1\pmod8$</span>, and <span class="math-container">$x\equiv2^{16}\pmod{11}\implies x\equiv5^4\pmod{11}\implies x\equiv9\pmod{11}$</span>, together yielding <span class="math-container">$x\equiv9\pmod{88}$</span> by CCRT (constant case of the Chinese remainder theorem).</p>
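Every step can be checked directly, since Python's built-in three-argument `pow` does modular exponentiation:

```python
# The target value: 123^456 mod 88
assert pow(123, 456, 88) == 9

# Euler reduction: phi(88) = 40, 456 mod 40 = 16, 123 mod 88 = 35
assert 456 % 40 == 16 and 123 % 88 == 35
assert pow(35, 16, 88) == 9

# The two CRT congruences
assert pow(35, 16, 8) == 1
assert pow(35, 16, 11) == 9

# 9 is the unique residue mod 88 with these reductions
assert 9 % 8 == 1 and 9 % 11 == 9
```

This also confirms the asker's arithmetic mod 8 and mod 11 was fine; the error was in the recombination step, where $y_1$ must be $(M/m_1)^{-1} = 8^{-1} \pmod{11}$, not $x_1^{-1}$.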
3,525,621
<p>Find all integral solutions to the equation <span class="math-container">$x^2 + 4xy - y^2 = m$</span> with <span class="math-container">$-5 \leq m \leq 10$</span>.</p> <p>I know that I can set <span class="math-container">$m = -5$</span> to <span class="math-container">$m = 10$</span> and solve all of the equations independently. But is there any better method to this question?</p>
Ali Shadhar
432,085
<p><span class="math-container">$$\int_0^1\frac{\ln(1+x)-\ln(1-x)}{x}\ dx$$</span> <span class="math-container">$$=\int_0^1\frac{\ln(1+x)}{x}\ dx-\int_0^1\frac{\ln(1-x)}{x}\ dx$$</span></p> <p><span class="math-container">$$=-\operatorname{Li}_2(-x)|_0^1+\operatorname{Li}_2(x)|_0^1$$</span></p> <p><span class="math-container">$$=\frac12\zeta(2)+\zeta(2)=\frac32\zeta(2)$$</span></p> <hr> <p>A different way is by setting <span class="math-container">$x=\frac{1-u}{1+u}$</span></p> <p><span class="math-container">$$\int_0^1 \frac{1}{x}\ln\left(\frac{1+x}{1-x}\right)\ dx=-2\int_0^1\frac{\ln u}{1-u^2}\ du$$</span></p> <p>now use the series expansion for the denominator. </p>
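The closed form $\frac32\zeta(2) = \frac{\pi^2}{4}$ can be checked numerically from the series expansion underlying the second method: $\ln\frac{1+x}{1-x} = 2\sum_{k\ \text{odd}} \frac{x^k}{k}$, so dividing by $x$ and integrating term by term over $[0,1]$ gives $2\sum_{k\ \text{odd}} \frac{1}{k^2}$. A sketch (the truncation level is arbitrary; the tail is of size about $1/N$):

```python
import math

# 2 * sum over odd k of 1/k^2 should equal (3/2) * zeta(2) = pi^2 / 4
N = 10 ** 6
partial = 2 * sum(1 / k ** 2 for k in range(1, N, 2))
assert abs(partial - math.pi ** 2 / 4) < 1e-5
```

The agreement to five decimal places matches $\frac32\zeta(2) \approx 2.4674$.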
1,234,726
<p>How many lattice paths are there from $(0, 0)$ to $(10, 10)$ that do not pass through the point $(5, 5)$ but do pass through $(3, 3)$?</p> <p>What I have so far:</p> <p>The number of lattice paths from $(0,0)$ to $(n,k)$ is equal to the binomial coefficient $\binom{n+k}n$ (according to Wikipedia). So the number of lattice paths from $(0, 0)$ to $(10, 10)$ is $\binom{20}{10}$, the number of lattice paths from $(0,0)$ to $(5, 5)$ is $\binom{10}{5}$, and the number of lattice paths from $(0,0)$ to $(3, 3)$ is $\binom{6}{3}$. What next?</p>
MBW
6,884
<p>Decompose your lattice paths into two parts: the one up until reaching $(3,3)$, and the one from $(3,3)$ to $(10, 10)$ not passing through $(5,5)$. We can translate this second part into the number of paths from $(0, 0)$ to $(7,7)$ not passing through $(2,2)$. The fact that your paths must pass through $(3,3)$ makes these two problems independent.</p> <p>For the first part we have ${6 \choose 3}$ possibilities.</p> <p>For the second part, the number of paths not passing through $(2,2)$ is <em>all</em> the paths minus those that <em>do</em> pass through $(2,2)$. We can count the latter with the same splitting strategy: a path from $(0,0)$ to $(2,2)$ followed by a path from $(2,2)$ to $(7,7)$. This can be done in $${4 \choose 2}{10 \choose 5}$$ ways. So in total we get, for this second part, $${14 \choose 7} - {4 \choose 2}{10 \choose 5}.$$</p> <p>Since we can choose any path for the first part and combine it with any one from the second, the number we want is the product $${6 \choose 3}\left({14 \choose 7} - {4 \choose 2}{10 \choose 5}\right)$$</p>
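If you want to double-check the count, the closed form can be compared against a direct dynamic-programming count of monotone lattice paths (a small illustrative script, not part of the argument):

```python
# Cross-check: closed form vs. a brute-force DP count of monotone paths.
from math import comb

closed_form = comb(6, 3) * (comb(14, 7) - comb(4, 2) * comb(10, 5))

def paths(w, h, forbidden=None):
    """Monotone paths from (0,0) to (w,h), skipping an optional forbidden point."""
    grid = [[0] * (h + 1) for _ in range(w + 1)]
    grid[0][0] = 1
    for i in range(w + 1):
        for j in range(h + 1):
            if (i, j) == forbidden:
                grid[i][j] = 0
                continue
            if i > 0:
                grid[i][j] += grid[i - 1][j]
            if j > 0:
                grid[i][j] += grid[i][j - 1]
    return grid[w][h]

# split at (3,3); the second leg, shifted to the origin, must avoid (2,2)
brute = paths(3, 3) * paths(7, 7, forbidden=(2, 2))
print(closed_form, brute)   # 38400 38400
```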
348,532
<p>Consider the following integral <span class="math-container">$$ I_\delta(\lambda)=\int_0^\delta e^{i\lambda \exp(-x^{-2})}dx. $$</span> Here, <span class="math-container">$\phi(x)=\exp(-x^{-2})$</span> is the phase function. I would like to study the rate of decay of <span class="math-container">$I(\lambda)$</span> as <span class="math-container">$\lambda\to \infty$</span>.</p> <p>In Stein's <em>Harmonic Analysis</em>, the case where the phase function has finite order of vanishing was discussed. More precisely, if <span class="math-container">$\phi^{(j)}(0)=0$</span> for all <span class="math-container">$0\leq j\leq k$</span> but <span class="math-container">$\phi^{(k+1)}(0)\neq 0$</span>, then there is <span class="math-container">$\delta&gt;0$</span> such that <span class="math-container">$$ I_\delta(\lambda)=c \lambda^{-(k+1)^{-1}}+O(\lambda^{-(k+2)^{-1}}), \quad \text{as $\lambda\to \infty$}, $$</span> where <span class="math-container">$c$</span> is a nonzero constant. Now since the phase function here vanishes to infinite order at the origin, I expect that <span class="math-container">$I_\delta(\lambda)$</span> will have very slow decay, probably slower than <span class="math-container">$\lambda^{-\alpha}$</span> for any <span class="math-container">$\alpha&gt;0$</span>.</p>
Bazin
21,907
<p>Let me fix <span class="math-container">$\delta=1$</span> for simplicity. Let us use a Van der Corput method. We have for <span class="math-container">$\epsilon\in (0,1)$</span> to be chosen later, with <span class="math-container">$\phi(x)= e^{-x^{-2}}, $</span> noting that <span class="math-container">$\phi'(x)=\phi(x) 2 x^{-3}$</span> <span class="math-container">$$ I(\lambda)=\underbrace{\int_0^{\epsilon} e^{i\lambda \phi(x)} dx}_{O(\epsilon)}+\underbrace{\int_{\epsilon}^1 \frac{d}{dx}\bigl(e^{i\lambda \phi(x)} \bigr) \frac{x^3dx}{i\lambda 2\phi(x)}}_{J(\lambda)}. $$</span> We have <span class="math-container">$$ 2i\lambda J(\lambda)=\Bigl[e^{i\lambda \phi(x)}\frac{x^3}{\phi(x)}\Bigr]^{x=1}_{x=\epsilon} -\int_{\epsilon}^1 e^{i\lambda \phi(x)}\left(\frac{3x^2}{\phi(x)}-\frac{ 2 }{\phi(x)}\right) dx. $$</span> As a result, we get <span class="math-container">$$ 2i\lambda J(\lambda)=O(1)+O(\epsilon^3 e^{\epsilon^{-2}})+O(e^{\epsilon^{-2}})=O(e^{\epsilon^{-2}}), $$</span> and thus <span class="math-container">$ \vert I(\lambda)\vert\le \epsilon+O(\lambda^{-1}e^{\epsilon^{-2}}). $</span> We choose <span class="math-container">$\epsilon=2(\ln \lambda)^{-1/2}$</span> and we have <span class="math-container">$$ \epsilon=2(\ln \lambda)^{-1/2}\ge \lambda^{-1}e^{\epsilon^{-2}}=\lambda^{-1+\frac14} =\lambda^{-3/4}, $$</span> providing <span class="math-container">$$ \vert I(\lambda)\vert\le C(\ln \lambda)^{-1/2}. $$</span> It is quite likely that the exact solution of the equation <span class="math-container">$ \epsilon=\lambda^{-1}e^{\epsilon^{-2}}$</span> will give a slightly better estimate.</p>
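A numerical illustration of this slow decay (not part of the proof; the grid size and the particular values of $\lambda$ below are just illustrative choices):

```python
# Plain Riemann sums on a fine grid; n is chosen large enough to resolve
# the ~lambda/e radians of total phase at the largest lambda used.
import numpy as np

def I(lam, n=2_000_000):
    x = np.linspace(1e-9, 1.0, n)   # start just above 0; the phase -> 0 there anyway
    dx = x[1] - x[0]
    return np.sum(np.exp(1j * lam * np.exp(-1.0 / x**2))) * dx

mags = [abs(I(lam)) for lam in (1, 100, 10_000)]
print(mags)
# The moduli shrink, but only logarithmically: even at lambda = 10^4 the value
# stays well above lambda^{-1/4} = 0.1, consistent with the (ln lambda)^{-1/2} bound.
```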
163,589
<p>The tensor product of some (finite dimensional real) vector spaces is acted on by the direct product of their general linear groups. I would like to know if there are explicit invariants in the case of 3 vector spaces. For one vector space there are two orbits: 0 vector, and non-zero vector. For two vector spaces, $T\in U\otimes V \cong Hom(U^*,V)$ there are finitely many orbits characterized by $rank(T)$. For 3 vector spaces the dimension of $U\otimes V\otimes W$ is $uvw$ and the dimension of $GL(U)\times GL(V) \times GL(W)$ is $u^2+v^2+w^2$ so that usually the space of orbits has positive dimension. Any references would be most welcome. I am particularly interested in the case U,V have dimension 4 and W has dimension 8.</p>
Nathaniel Johnston
11,236
<p>For what it's worth, in the case when $U,V$, and $W$ all have dimension $2$ (i.e., a case that is much simpler than the $4$-dimensional one you're interested in), it is known that there are exactly six orbits. In particular, every vector is in the orbit of exactly one of these six vectors (where $\{\mathbf{e}_1,\mathbf{e}_2\}$ is some fixed basis of $U,V,W$):</p> <ol> <li>$\mathbf{e}_1 \otimes \mathbf{e}_1 \otimes \mathbf{e}_1$</li> <li>$\mathbf{e}_1 \otimes \mathbf{e}_1 \otimes \mathbf{e}_1 + \mathbf{e}_1 \otimes \mathbf{e}_2 \otimes \mathbf{e}_2$</li> <li>$\mathbf{e}_1 \otimes \mathbf{e}_1 \otimes \mathbf{e}_1 + \mathbf{e}_2 \otimes \mathbf{e}_1 \otimes \mathbf{e}_2$</li> <li>$\mathbf{e}_1 \otimes \mathbf{e}_1 \otimes \mathbf{e}_1 + \mathbf{e}_2 \otimes \mathbf{e}_2 \otimes \mathbf{e}_1$</li> <li>$\mathbf{e}_1 \otimes \mathbf{e}_1 \otimes \mathbf{e}_1 + \mathbf{e}_2 \otimes \mathbf{e}_2 \otimes \mathbf{e}_2$</li> <li>$\mathbf{e}_1 \otimes \mathbf{e}_1 \otimes \mathbf{e}_2 + \mathbf{e}_1 \otimes \mathbf{e}_2 \otimes \mathbf{e}_1 + \mathbf{e}_2 \otimes \mathbf{e}_1 \otimes \mathbf{e}_1$</li> </ol> <p>Furthermore, a generic vector in $U \otimes V \otimes W$ belongs to the orbit of the vector 5 above: the other orbits all have measure $0$.</p>
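For a quick computational check of these representatives, the triple of flattening ranks (the rank of the tensor reshaped into a matrix along each of the three modes) is invariant under $GL(U)\times GL(V)\times GL(W)$, and it distinguishes orbits 1–5. Orbits 5 and 6 share flattening ranks $(2,2,2)$; they are separated, e.g., by tensor rank (2 for orbit 5, 3 for orbit 6) or by Cayley's $2\times2\times2$ hyperdeterminant. A small numpy sketch:

```python
# Flattening (multilinear) ranks of the six canonical 2x2x2 orbit representatives.
import numpy as np

def rep(triples):
    """Build a 2x2x2 tensor with a 1 at each listed (i,j,k) position."""
    t = np.zeros((2, 2, 2))
    for i, j, k in triples:
        t[i, j, k] = 1.0
    return t

orbits = [
    rep([(0, 0, 0)]),                        # 1
    rep([(0, 0, 0), (0, 1, 1)]),             # 2
    rep([(0, 0, 0), (1, 0, 1)]),             # 3
    rep([(0, 0, 0), (1, 1, 0)]),             # 4
    rep([(0, 0, 0), (1, 1, 1)]),             # 5 (generic)
    rep([(0, 0, 1), (0, 1, 0), (1, 0, 0)]),  # 6
]

def flattening_ranks(t):
    return tuple(
        np.linalg.matrix_rank(np.moveaxis(t, m, 0).reshape(2, 4))
        for m in range(3)
    )

for n, t in enumerate(orbits, 1):
    print(n, flattening_ranks(t))
# expected: (1,1,1), (1,2,2), (2,1,2), (2,2,1), (2,2,2), (2,2,2)
```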
4,234,095
<p>I need to show that <span class="math-container">$[\mathbb{Q}(2^{1/4},2^{1/6}):\mathbb{Q}]$</span> is a field extension of degree <span class="math-container">$12$</span>. It is possible to show that the degree is at least <span class="math-container">$12$</span> because it is divisible by <span class="math-container">$6$</span> and <span class="math-container">$4$</span> by finding the minimal polynomial of the simple field extensions of <span class="math-container">$2^{1/4}$</span> and <span class="math-container">$2^{1/6}$</span>, but I am not sure how to bound the inequality in the other direction.</p> <p>Another way to approach this problem might just be to explicitly find the basis, but I think there should be a way to find a bound on the inequality.</p>
Maths Rahul
865,134
<p>(1) <span class="math-container">$[\mathbb{Q}(2^{1/4}):\mathbb{Q}]=4$</span> since <span class="math-container">$2^{1/4}$</span> satisfies the <em>irreducible</em> polynomial <span class="math-container">$x^4-2$</span> of degree <span class="math-container">$4$</span> over <span class="math-container">$\mathbb{Q}$</span>.</p> <p>(2) <span class="math-container">$[\mathbb{Q}(2^{1/6}):\mathbb{Q}]=6$</span> since <span class="math-container">$2^{1/6}$</span> satisfies the <em>irreducible</em> polynomial <span class="math-container">$x^6-2$</span> of degree <span class="math-container">$6$</span> over <span class="math-container">$\mathbb{Q}$</span>.</p> <p>(3) Comparing the degrees of the extensions <span class="math-container">$\mathbb{Q}(2^{1/4})$</span> and <span class="math-container">$\mathbb{Q}(2^{1/6})$</span> over <span class="math-container">$\mathbb{Q}$</span>, neither is contained in the other.</p> <p>(4) <span class="math-container">$\mathbb{Q}(2^{1/2})$</span> is contained in both fields, and <span class="math-container">$[\mathbb{Q}(2^{1/2}):\mathbb{Q}]=2$</span>.</p> <p>(5) From (3) and (4), <span class="math-container">$\mathbb{Q}(2^{1/4}) \cap \mathbb{Q}(2^{1/6})$</span> is precisely <span class="math-container">$\mathbb{Q}(2^{1/2})$</span>, since the degree of the intersection must divide both <span class="math-container">$4$</span> and <span class="math-container">$6$</span>.</p> <p>(6) Draw the lattice diagram of the fields <span class="math-container">$\mathbb{Q}(2^{1/4}, 2^{1/6})$</span>, <span class="math-container">$\mathbb{Q}(2^{1/6})$</span>, <span class="math-container">$\mathbb{Q}(2^{1/4})$</span> and (their intersection) <span class="math-container">$\mathbb{Q}(2^{1/2})$</span> [it looks like a diamond].</p> <p>The degrees of parallel sides are the same; from this you can easily deduce your claim.</p>
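A different, computational route to the same degree: with $a=2^{1/12}$ one has $a^3=2^{1/4}$, $a^2=2^{1/6}$, and conversely $a=2^{1/4}/2^{1/6}$, so $\mathbb{Q}(2^{1/4},2^{1/6})=\mathbb{Q}(2^{1/12})$, whose degree is $\deg(x^{12}-2)=12$ by Eisenstein at $p=2$. A plain-Python sketch of these two checks:

```python
# Verify the exponent arithmetic and Eisenstein's criterion for x^12 - 2.
from fractions import Fraction

assert Fraction(1, 4) - Fraction(1, 6) == Fraction(1, 12)   # a = 2^{1/4}/2^{1/6}
assert 3 * Fraction(1, 12) == Fraction(1, 4)                # a^3 = 2^{1/4}
assert 2 * Fraction(1, 12) == Fraction(1, 6)                # a^2 = 2^{1/6}

def eisenstein(coeffs, p):
    """Eisenstein's criterion for coeffs = [c0, c1, ..., cn] of c0 + c1 x + ... + cn x^n."""
    lead, const, rest = coeffs[-1], coeffs[0], coeffs[:-1]
    return (lead % p != 0
            and all(c % p == 0 for c in rest)
            and const % (p * p) != 0)

coeffs = [-2] + [0] * 11 + [1]   # x^12 - 2
print(eisenstein(coeffs, 2))      # True, so x^12 - 2 is irreducible over Q
```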
2,358,490
<p>Let $V$ be a finite dimensional vector space over the field $K$, and let $W_1$ and $W_2$ be subspaces. Express $(W_1+W_2)^{\perp}$ in terms of $W_1^{\perp}$ and $W_2^{\perp}$. Also, express $(W_1\cap W_2)^{\perp}$ in terms of $W_1^{\perp}$ and $W_2^{\perp}$.</p> <p>I have no idea what this exercise is asking. Remark: I am self-studying and I do not have solutions.</p> <p><strong>Questions</strong>:</p> <p>What am I supposed to prove? How should I prove it?</p>
C. Ding
320,080
<p>Hint: $(W_1+W_2)^{\perp}=W_1^\perp\cap W_2^\perp$ and $(W_1\cap W_2)^{\perp}=W_1^\perp + W_2^\perp$.</p>
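To see the two identities in action, here is a numeric illustration with numpy, identifying $V^*$ with $V=\mathbb{R}^4$ via the dot product so that the annihilator $W^\perp$ becomes the ordinary orthogonal complement. The specific subspaces are arbitrary choices; this is a sanity check on an example, not a proof.

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Columns spanning {x : M x = 0}, via SVD."""
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

def perp(W):
    """Orthogonal complement of the column span of W."""
    return null_space(W.T)

def intersect(A, B):
    """Column span of col(A) ∩ col(B): solve A a = B b via null space of [A, -B]."""
    n = null_space(np.hstack([A, -B]))
    return A @ n[: A.shape[1]]

def same_subspace(A, B):
    r = np.linalg.matrix_rank
    return r(A) == r(B) == r(np.hstack([A, B]))

W1 = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)  # span{(1,1,0,0),(0,0,1,1)}
W2 = np.array([[0, 1], [0, 0], [1, 0], [1, 1]], dtype=float)  # span{(0,0,1,1),(1,0,0,1)}

# (W1 + W2)^perp == W1^perp ∩ W2^perp
lhs1 = perp(np.hstack([W1, W2]))
rhs1 = intersect(perp(W1), perp(W2))
print(same_subspace(lhs1, rhs1))   # True

# (W1 ∩ W2)^perp == W1^perp + W2^perp
lhs2 = perp(intersect(W1, W2))
rhs2 = np.hstack([perp(W1), perp(W2)])
print(same_subspace(lhs2, rhs2))   # True
```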
550,230
<p>If 2 vectors form a basis for $\mathbb{R}^2$, must these 2 vectors always be orthogonal to each other?</p> <p>For instance, the standard basis of $\mathbb{R}^2$ is certainly orthogonal (easily drawn). What about other bases?</p>
GAM
58,916
<p>No, consider $A=\left\{\left[ \begin{array}{c} 1\\ 2\\ \end{array} \right],\left[ \begin{array}{c} 0\\ 2\\ \end{array} \right]\right\}\subset\mathbb{R^2}$. $A$ is linearly independent and spans $\mathbb{R^2}$, so $A$ forms a basis for $\mathbb{R^2}$. However, $\left[ \begin{array}{c} 1\\ 2\\ \end{array} \right]$ and $\left[ \begin{array}{c} 0\\ 2\\ \end{array} \right]$ are not orthogonal.</p>
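A quick numpy check of this example: a nonzero determinant means the two vectors are linearly independent (hence a basis of $\mathbb{R}^2$), while a nonzero dot product means they are not orthogonal.

```python
import numpy as np

v1 = np.array([1.0, 2.0])
v2 = np.array([0.0, 2.0])

det = np.linalg.det(np.column_stack([v1, v2]))
dot = float(np.dot(v1, v2))
print(det, dot)   # det = 2 (a basis), dot = 4 (not orthogonal)
```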
3,491,978
<blockquote> <p>Let (X,d) be a compact metric space. For every open cover, show there exists ε > 0 such that for every x ∈ X, B(x,ε) is contained in some member of the cover.</p> </blockquote> <p>My attempt:</p> <p>(X,d) is compact. Therefore there exists a finite subcover of X.</p> <p>Any element x in X must lie in some member of the cover, say x ∈ Ui. Otherwise they would not constitute a cover.</p> <p>Since Ui is open, by definition every point is interior, so there exists ε > 0 such that B(x,ε) is contained in Ui. </p> <p>I haven't used the fact the subcover is finite, or the fact X is a metric space rather than just topological space, so I feel my reasoning is flawed. </p> <p>Any help is greatly appreciated!</p>
Matematleta
138,929
<p>The Wiki proof linked in the comments uses the fact that a continuous function on a compact set attains its extrema. If you want a proof from scratch and closer to what you are trying to do, here are a few hints:</p> <p><span class="math-container">$1).\ $</span> Let <span class="math-container">$\mathcal A$</span> be an open cover of <span class="math-container">$X$</span>. For each <span class="math-container">$x\in X$</span> choose <span class="math-container">$\epsilon_x&gt;0$</span> such that the ball <span class="math-container">$B_{2\epsilon_x}(x)$</span> is contained in an element of <span class="math-container">$\mathcal A$</span>.</p> <p><span class="math-container">$2).\ $</span> The smaller balls <span class="math-container">$B_{\epsilon_x}(x)$</span> give you <span class="math-container">$\textit{another}$</span> open cover of <span class="math-container">$X$</span>.</p> <p><span class="math-container">$3).\ $</span> Take a finite subcover of the cover from <span class="math-container">$2)$</span>, say with centers <span class="math-container">$x_1,\dots,x_n$</span>; this leaves you with only finitely many <span class="math-container">$\epsilon_{x_i}$</span>'s.</p> <p><span class="math-container">$4).\ $</span> Using the conclusion in <span class="math-container">$3)$</span>, define <span class="math-container">$\delta=\min_i \epsilon_{x_i}&gt;0$</span> and use the triangle inequality to conclude.</p>
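To make the $\epsilon$ in the statement concrete, here is a small numerical computation of the Lebesgue number for one explicit cover of $X=[0,1]$ (the cover and the grid resolution are arbitrary illustrative choices). For each $x$, the largest ball around $x$ fitting inside some cover element $(a_i,b_i)$ has radius $f(x)=\max_i \min(x-a_i,\,b_i-x)$, and the Lebesgue number is $\min_x f(x)$.

```python
# Lebesgue number of the cover {(-0.1, 0.6), (0.4, 1.1)} of [0,1], on a grid.
import numpy as np

cover = [(-0.1, 0.6), (0.4, 1.1)]
x = np.linspace(0.0, 1.0, 10_001)

f = np.max([np.minimum(x - a, b - x) for a, b in cover], axis=0)
lebesgue = f.min()
print(lebesgue)   # ~ 0.1, attained at x = 0, 0.5, and 1
```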