1,354,015
<blockquote> <p>If <span class="math-container">$a, b, c, d, e, f, g, h$</span> are positive numbers satisfying <span class="math-container">$\frac{a}{b}&lt;\frac{c}{d}$</span> and <span class="math-container">$\frac{e}{f}&lt;\frac{g}{h}$</span> and <span class="math-container">$b+f&gt;d+h$</span>, then <span class="math-container">$\frac{a+e}{b+f} &lt; \frac{c+g}{d+h}$</span>.</p> </blockquote> <p>I thought it would be easy to prove, but I could not. How can this be proved? Thank you.</p> <p>The question is part of a bigger proof I am working on. There are two strictly concave, positive-valued, strictly increasing functions <span class="math-container">$f_1$</span> and <span class="math-container">$f_2$</span> (see Figure 1). Given 4 points <span class="math-container">$x_1$</span>, <span class="math-container">$x_2$</span>, <span class="math-container">$x_3$</span> and <span class="math-container">$x_4$</span> such that <span class="math-container">$x_1&lt; x_i$</span>, <span class="math-container">$i=2, 3,4$</span> and <span class="math-container">$x_4&gt; x_i$</span>, <span class="math-container">$i=1, 2, 3$</span>, let <span class="math-container">$d=x_2-x_1$</span>, <span class="math-container">$b=x_4-x_3$</span>, <span class="math-container">$c=f_1(x_2)-f_1(x_1)$</span>, <span class="math-container">$a=f_1(x_4)-f_1(x_3)$</span>. 
And given 4 points <span class="math-container">$y_1$</span>, <span class="math-container">$y_2$</span>, <span class="math-container">$y_3$</span> and <span class="math-container">$y_4$</span> such that <span class="math-container">$y_1&lt; y_i$</span>, <span class="math-container">$i=2, 3,4$</span> and <span class="math-container">$y_4&gt; y_i$</span>, <span class="math-container">$i=1, 2, 3$</span>, let <span class="math-container">$h=y_2-y_1$</span>, <span class="math-container">$f=y_4-y_3$</span>, <span class="math-container">$g=f_2(y_2)-f_2(y_1)$</span>, <span class="math-container">$e=f_2(y_4)-f_2(y_3)$</span>.</p> <p>Since the functions are concave, we have <span class="math-container">$\frac{a}{b}&lt;\frac{c}{d}$</span> and <span class="math-container">$\frac{e}{f}&lt;\frac{g}{h}$</span>. And I thought that in this setting it is true that <span class="math-container">$\frac{a+e}{b+f} &lt; \frac{c+g}{d+h}$</span> even without the restriction <span class="math-container">$b+f&gt;d+h$</span>.</p> <p><img src="https://i.stack.imgur.com/Onz2d.png" alt="Figure 1"></p>
Klaus Draeger
65,787
<p>The claim in the updated question (with the additional constraint $b+f&gt;d+h$) is still false, since equality can occur. For example,</p> <p>$\frac{1}{1}&lt;\frac{3}{2}$ and $\frac{9}{4}&lt;\frac{5}{2}$ with $1+4&gt;2+2$, but $\frac{1+9}{1+4} = \frac{10}{5} = 2 = \frac{8}{4} = \frac{3+5}{2+2}$.</p>
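The arithmetic is easy to confirm with exact rationals (a quick sanity check in Python, not part of the answer itself):

```python
from fractions import Fraction as F

a, b, c, d = 1, 1, 3, 2
e, f, g, h = 9, 4, 5, 2

# hypotheses of the (updated) claim
assert F(a, b) < F(c, d)
assert F(e, f) < F(g, h)
assert b + f > d + h

# conclusion fails: the mediant-style sums are equal, not strictly ordered
assert F(a + e, b + f) == F(c + g, d + h) == 2
```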
739,301
<p>Show that a Dirac delta measure on a topological space is a Radon measure.</p> <p>Show that the sum of two Radon measures is also a Radon measure. Please help me.</p>
George
54,962
<p>The Dirac measure is the measure $\delta_{x}$ on a topological space $X$ (with the Borel $\sigma$-algebra $B(X)$ of subsets of $X$) defined for a given $x \in X$ and any Borel set $A \subseteq X$ by $\delta_{x}(A)=1$ if $x \in A$ and $\delta_{x}(A)=0$ otherwise. Notice that $\{x\}$, like $\emptyset$, is a compact subset of $X$. For each $A \in B(X)$ and for $\epsilon&gt;0$ we take $F_{\epsilon}$ to be the set $\{x\}$ if $x \in A$, or $\emptyset$ if $x \notin A$. Then we get $\delta_{x}(A \setminus F_{\epsilon})=0&lt;\epsilon$, which means that $\delta_{x}$ is a Radon measure (it is obvious that $F_{\epsilon}\subseteq A$ and that $F_{\epsilon}$ is compact).</p> <p>Now let $\mu_1$ and $\mu_2$ be two Radon measures on $X$. Let $A \in B(X)$. Since each $\mu_i$ is a Radon measure, for $\epsilon&gt;0$ there is a compact subset $F^{(i)}_{\epsilon} \subseteq A$ such that $\mu_i(A \setminus F^{(i)}_{\epsilon})&lt;\frac{\epsilon}{2}$ for $i=1,2$. It is obvious that $C_{\epsilon} :=F^{(1)}_{\epsilon} \cup F^{(2)}_{\epsilon}$ is a compact subset of $A$ such that $$(\mu_1+\mu_2)(A \setminus C_{\epsilon})=\mu_1(A \setminus C_{\epsilon})+\mu_2(A \setminus C_{\epsilon})\le \mu_1(A \setminus F^{(1)}_{\epsilon})+\mu_2(A \setminus F^{(2)}_{\epsilon})&lt;\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon.$$</p> <p>The latter relation means that $\mu_1+\mu_2$ is a Radon measure on $X$.</p>
275,308
<p>Problems with calculating </p> <p>$$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}$$</p> <p>$$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}=\lim_{x\rightarrow0}\frac{\ln(2\cos^{2}(x)-1)}{(2\cos^{2}(x)-1)}\cdot \left(\frac{\sin x}{x}\right)^{-1}\cdot\frac{(2\cos^{2}(x)-1)}{x^{2}}=0$$</p> <p>The correct answer is $-2$. Please show where my error is this time. Thanks in advance!</p>
DonAntonio
31,254
<p>$$\lim_{x\to 0}\frac{\log\cos 2x}{x\sin x}\stackrel{\text{L'Hospital}}=\lim_{x\to 0}\frac{-2\tan 2x}{\sin x+x\cos x}\stackrel{\text{L'H}}=\lim_{x\to 0}-\frac{4\sec^{2} 2x}{2\cos x-x\sin x}=-\frac{4}{2}=-2$$</p>
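A quick numerical sanity check of the value $-2$ (not part of the proof):

```python
import math

# the ratio from the question, evaluated at small x
f = lambda x: math.log(math.cos(2 * x)) / (x * math.sin(x))

for x in (0.1, 0.01, 0.001):
    print(x, f(x))            # values approach -2
assert abs(f(1e-3) + 2) < 1e-3
```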
66,801
<p>In short, I am interested to know of the various approaches one could take to learn modern harmonic analysis in depth. However, the question deserves additional details. Currently, I am reading Loukas Grafakos' "Classical Fourier Analysis" (I have progressed to chapter 3). My intention is to read this book and then proceed to the second volume (by the same author) "Modern Fourier Analysis". I have also studied general analysis at the level of Walter Rudin's "Real and Complex Analysis" (first 15 chapters). In particular, if additional prerequisites are required for recommended references, it would be helpful if you could state them.</p> <p>My request is to know how one should proceed after reading these two volumes and whether there are additional sources that one could use that are helpful to get a deeper understanding of the subject. Also, it would be nice to hear suggestions of some important topics in the subject of harmonic analysis that are current interests of research and references one could use to better understand these topics.</p> <p>However, I understand that as one gets deeper into a subject such as harmonic analysis, one would need to understand several related areas in greater depth such as functional analysis, PDE's and several complex variables. Therefore, suggestions of how one can incorporate these subjects into one's learning of harmonic analysis are welcome. (Of course, since this is mainly a request for a roadmap in harmonic analysis, it might be better to keep any recommendations of references in these subjects at least a little related to harmonic analysis.)</p> <p>In particular, I am interested in various connections between PDE's and harmonic analysis and functional analysis and harmonic analysis. It would be nice to know about references that discuss these connections. </p> <p>Thank you very much!</p> <p><strong>Additional Details</strong>: Thank you for suggesting Stein's books on harmonic analysis! 
However, I am not sure how one should read these books. For example, there seems to be overlap between Grafakos and Stein's books but Stein's "Harmonic Analysis" seems very much like a research monograph and although it is, needless to say, an excellent book, I am not very sure what prerequisites one must have to tackle it. In contrast, the other two books by Stein are more elementary but it would be nice to know of the sort of material that can be found in these two books but that cannot be found in Grafakos. </p>
Community
-1
<p>One of my favourite books for harmonic analysis is Fourier analysis by Javier Duoandikoetxea. In fact, I would recommend that as your first port of call for learning harmonic analysis if you already have some background (such as an undergraduate/postgraduate course in harmonic analysis). That is a terrific reference for background regardless of what you want to do with harmonic analysis. I would tackle this before moving on to Elias Stein's book "harmonic analysis: real-variable methods, orthogonality and oscillatory integrals" (also a great book).</p> <p>As mentioned above, it really depends on what type of harmonic analysis you are interested in, but I would certainly recommend those, as well as harmonic analysis by Katznelson and the two volume books by Grafakos; both of Stein's books "introduction to Fourier analysis on Euclidean spaces" and "singular integrals and differentiability properties of functions" are useful for singular integrals. I'd also recommend a treatise on trigonometric series by Bary. Zygmund's two volume books on trigonometric series are good, but I would tackle a few other books on harmonic analysis before going for them. They are quite complex in comparison to the other references and will not help much if you do not already have a foundation in harmonic/Fourier analysis. </p> <p>If you like abstract harmonic analysis, go for "principles of harmonic analysis" by Anton Deitmar. </p> <p>Harmonic analysis and PDEs by Christ, Kenig and Sadosky is good for specific directions (such as PDEs, probability, curvature and spectral theory).</p> <p>Terence Tao's website is great for lecture notes (all academic resources on his website are great!) 
</p> <p>Finally, "lectures on nonlinear wave equations" by Christopher Sogge and "nonlinear dispersive equations" by Terence Tao are great books that have a focus on dispersive PDEs using techniques from harmonic analysis (such as Littlewood-Paley theory).</p> <p>Just to add an extra reference, check out "Topics in Harmonic Analysis Related to the Littlewood-Paley Theory", also by Elias Stein.</p>
66,801
Thomas Kojar
99,863
<p>Here are a few relevant books that appeared since this question was asked.</p> <ul> <li>Barry Simon's <a href="https://www.amazon.ca/Harmonic-Analysis-Barry-Simon/dp/1470411024/ref=sr_1_2?keywords=simon%20harmonic%20analysis&amp;qid=1567982213&amp;s=books&amp;sr=1-2" rel="nofollow noreferrer">Harmonic analysis volume</a></li> <li>E. M. Stein and R. Shakarchi, <em>Functional analysis</em>.</li> <li>C. Muscalu and W. Schlag, <em>Classical and multilinear harmonic analysis</em></li> <li>P. Mattila, <em>Fourier Analysis and Hausdorff Dimension</em></li> </ul>
2,704,955
<p>In my test on complex analysis I encountered the following problem:</p> <blockquote> <p>Find $\oint\limits_{|z-\frac{1}{3}|=3} z \text{Im}(z)\text{d}z$</p> </blockquote> <p>First I observed that the function $z\text{Im}(z)$ is not holomorphic, at least on the real axis. Therefore we have to integrate using a parametrization.</p> <p>First, let's change variable $w = z - \frac{1}{3}$. So we get $\oint\limits_{|w|=3} (w+\frac{1}{3}) \text{Im}(w+\frac{1}{3})\text{d}w = \oint\limits_{|w|=3} (w+\frac{1}{3}) \text{Im}(w)\text{d}w = \frac{1}{2i}\oint\limits_{|w|=3} (w+\frac{1}{3}) (w-\bar w)\text{d}w$.</p> <p>Then by letting $w=3e^{i \phi}$ we transform the integral to the form $\frac{1}{2}\int\limits_{0}^{2\pi}(3e^{i \phi}+\frac{1}{3})(3e^{i \phi}-3e^{-i \phi})ie^{i \phi}\text{d}\phi = -\frac{1}{2}\int\limits_{0}^{2\pi}\text{d}\phi=-\pi$.</p> <p>Is my reasoning correct? I'm not quite sure about the change of variable I made since the function is not holomorphic on the real axis. Is there any other way this integral can be evaluated? Thanks! </p>
Felix Marin
85,343
<p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \oint_{\verts{z - 1/3}\ =\ 3}z\,\Im\pars{z}\,\dd z &amp; = \int_{-\pi}^{\pi}\pars{{1 \over 3} + 3\expo{\ic\theta}} \Im\pars{{1 \over 3} + 3\expo{\ic\theta}}3\expo{\ic\theta}\ic\,\dd\theta \\[5mm] &amp; = 3\ic\int_{-\pi}^{\pi}\pars{{1 \over 3}\expo{\ic\theta} + 3\expo{2\ic\theta}} \bracks{3\sin\pars{\theta}}\dd\theta \\[5mm] &amp; = 3\ic\int_{-\pi}^{\pi}\bracks{\sin^{2}\pars{\theta}\ic + 9\sin\pars{2\theta}\sin\pars{\theta}\ic}\dd\theta \\[5mm] &amp; = -6\int_{0}^{\pi}\bracks{{\color{red}{1} - \cos\pars{2\theta} \over \color{red}{2}} + 9\,{\cos\pars{\theta} - \cos\pars{3\theta} \over 2}}\dd\theta \\[5mm] &amp; = -6\int_{0}^{\pi}{1 \over 2}\,\dd\theta = \bbx{-3\pi} \end{align}</p>
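As a numerical cross-check of the boxed value $-3\pi$ (using the same parametrization $z = 1/3 + 3\,\mathrm{e}^{\mathrm{i}\theta}$; the rectangle rule is exact here because the integrand is a trigonometric polynomial of low degree):

```python
import cmath
import math

def integrand(t):
    z = 1 / 3 + 3 * cmath.exp(1j * t)   # point on the contour |z - 1/3| = 3
    dz = 3j * cmath.exp(1j * t)         # z'(t)
    return z * z.imag * dz

# rectangle rule on a smooth periodic integrand
N = 64
total = sum(integrand(2 * math.pi * k / N) for k in range(N)) * (2 * math.pi / N)
print(total)   # real part -3*pi, imaginary part negligible
```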
3,298,311
<p>How can I prove (by definition) that, if <span class="math-container">$a, b \in \mathbb{R}$</span> and <span class="math-container">$a&lt;b$</span>, then <span class="math-container">$[a, b]$</span> is equal to its set of accumulation (limit) points?</p> <p>Let <span class="math-container">$(E, d)$</span> be a metric space and <span class="math-container">$S \subseteq E$</span>. <span class="math-container">$x \in E$</span> is a limit point of <span class="math-container">$S$</span> if <span class="math-container">$(B_\varepsilon(x)-\lbrace x \rbrace ) \cap S \neq \emptyset$</span> for all <span class="math-container">$\varepsilon &gt;0$</span>.</p>
Peter Szilas
408,605
<p>Attempt:</p> <p><span class="math-container">$U:= $</span>{<span class="math-container">$x| x &lt;a$</span>}; <span class="math-container">$V:=$</span>{<span class="math-container">$x|x &gt;b$</span>};</p> <p><span class="math-container">$U,V$</span> are open sets, </p> <p>1) <span class="math-container">$U \cup V$</span> is open.</p> <p><span class="math-container">$[a,b]=(U \cup V)^c$</span> is closed (<span class="math-container">$c$</span> for complement in <span class="math-container">$\mathbb{R}$</span>).</p> <p>2) Let <span class="math-container">$x_0 \not \in [a,b]$</span>, then </p> <p><span class="math-container">$x_0 \in [a,b]^c= U \cup V$</span>, which is an open set.</p> <p>Hence </p> <p><span class="math-container">$B_\epsilon (x_0) \subset U \cup V$</span> for some <span class="math-container">$ \epsilon &gt;0$</span>, so <span class="math-container">$B_\epsilon (x_0) \cap [a,b] = \emptyset$</span>.</p> <p><span class="math-container">$x_0$</span> is therefore not a limit point of <span class="math-container">$[a,b]$</span>.</p>
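The last step can be illustrated numerically: for $x_0 \notin [a,b]$, the radius $\epsilon = \min(|x_0-a|,|x_0-b|)$ gives a ball that misses $[a,b]$ entirely (a small sanity check with arbitrarily chosen numbers, not a proof):

```python
import random

a, b = -1.0, 2.0
x0 = 3.5                               # a point outside [a, b]
eps = min(abs(x0 - a), abs(x0 - b))    # distance from x0 to [a, b]

# sample points of the ball B_eps(x0); none of them lands in [a, b]
random.seed(1)
for _ in range(1000):
    y = x0 + (2 * random.random() - 1) * eps
    assert not (a <= y <= b)
```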
536,805
<p>Three cards are drawn sequentially from a deck that contains 16 cards numbered 1 to 16 in an arbitrary order. Suppose the first card drawn is a 6.</p> <p>Define the event of interest, $A$, as the set of all increasing 3-card sequences, i.e. $A=\{(x_1,x_2,x_3)\mid x_1 &lt; x_2 &lt; x_3\}$, where $x_1,x_2,x_3\in\{1,\cdots,16\}$. Define event $B$ as the set of 3-card sequences that start with 6, i.e. $B=\{(x_1,x_2,x_3)\mid x_1=6\}$ or simply $B=\{(6,x_2,x_3)\}$.</p> <p>Let $S_{x_3=t}$ represent the subset $\{(6,x_2,t)\mid 6 \lt x_2 \lt t\}$; then $|A \cap B|=\displaystyle\sum_{t=8}^{16}|S_{x_3=t}|$.</p> <p>Then what is $|A\cap B|$?</p>
drhab
75,923
<p>If I understand you well then $A\cap B=\left\{ \left(x_{1},x_{2},x_{3}\right)\mid x_{1}&lt;x_{2}&lt;x_{3}\wedge x_{1}=6\right\} =\left\{ \left(6,x_{2},x_{3}\right)\mid6&lt;x_{2}&lt;x_{3}\right\} $ so $\left|A\cap B\right|=\binom{10}{2}=45$.</p> <p>Two distinct numbers are to be chosen from $\left\{ 7,\cdots,16\right\} $.</p> <p>This is an alternative way to find $\left|A\cap B\right|$ and I should prefer it, but of course I don't know the context of your problem.</p>
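Both counts are easy to confirm by brute force (a throwaway check, not part of the argument):

```python
from math import comb

# enumerate all (6, x2, x3) with 6 < x2 < x3 <= 16
count = sum(1 for x2 in range(7, 17) for x3 in range(x2 + 1, 17))
assert count == comb(10, 2) == 45

# the questioner's sum over the final card t: |S_{x3=t}| = t - 7
assert sum(t - 7 for t in range(8, 17)) == 45
```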
2,159,083
<p>I am currently translating a research paper from French (which I do not speak well). I have made good progress with copious use of google translate &amp; switching between French &amp; English versions of articles on wikipedia, coupled with knowledge in the given field. However, I am stuck on the following ($M$ is a matrix):</p> <p><a href="https://i.stack.imgur.com/X4wQY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X4wQY.png" alt="enter image description here"></a></p> <p>The parenthetical implies that $M$ is similar to a <a href="https://en.wikipedia.org/wiki/Companion_matrix" rel="nofollow noreferrer">Companion Matrix</a>, but I am stumped when it comes to what "monogène" means. Google translate says "monogenic", which I would infer to be the author's term for "similar to the companion matrix of a polynomial." However, I can't say that I've come across the term "Monogenic Matrix" before, plus this would just be the Frobenius normal form / rational canonical form of a matrix with only one block.</p> <p>Is this a reasonable translation/interpretation?</p>
Derek Elkins left SE
305,738
<p>Before focusing on the specific question, I'd like to provide some context. First, every (non-trivial) topos has a Boolean subtopos. This is essentially what you say, and it roughly corresponds to the double negation interpretation of classical logic into intuitionistic logic. However, one of the things that makes toposes interesting is that many things are toposes, but relatively few things are <em>Boolean</em> toposes. Similarly, (part of) what makes intuitionistic type theory interesting is that it's the internal language of a topos. Restricting to a classical theory discards many relevant examples. For example, as the name "topos" suggests, a significant application is to topology with a topological space giving rise to a topos. Restricting to Boolean toposes is like only considering <a href="https://en.wikipedia.org/wiki/Stone_space" rel="noreferrer">Stone spaces</a> which largely defeats the purpose of topology. (One could say this paragraph is an [partial] argument for why the law of excluded middle is <em>not</em> a "good principle", though it is certainly convenient when it holds.)</p> <p>The way predicates are interpreted in categorical logic for an arbitrary category is as subobjects. A predicate $P$ on "individuals" of sort $A$ is viewed as a subobject of the object $A$ in a category, i.e. as a(n equivalence class of) monomorphism(s) $P \hookrightarrow A$. A subobject classifier allows us to reify these subobjects as <em>terms</em>. That is, the <em>formula</em> $a:A\vdash P(a)$ becomes the <em>term</em> $a:A \vdash \chi_P(a):\Omega$. We can then recover the formula as $a:A\vdash P(a)\Leftrightarrow \chi_P(a)=_\Omega\top$. In fact, this equivalence is the heart of what a subobject classifier is. Characteristic functions $\chi_P$ that factor through $\mathbf{2}$ correspond to <em>decidable</em> predicates, i.e. predicates for which the law of excluded middle holds. 
As a formula, this is $a:A\vdash P(a)\lor\neg P(a)$; via a characteristic function, it is $a:A\vdash (\chi_P(a)\lor\neg\chi_P(a))=_\Omega\top$. The former states that $P$ is a <a href="https://ncatlab.org/nlab/show/complemented+subobject" rel="noreferrer">complemented or decidable subobject</a> of $A$. In the latter, $\lor$ and $\neg$ are operations on $\Omega$ rather than logical connectives.</p> <p>A topos where $\Omega \ncong \mathbf{2}$ is one whose internal logic has some propositions that are not decidable in the above sense. If you restrict yourself to Boolean-valued characteristic functions, then you are restricting yourself to only the decidable propositions in your internal logic. This is a completely coherent thing to do, but it means there are formulas you can state which will have no Boolean-valued characteristic function. They will, however, always have an $\Omega$-valued characteristic function. If you <em>identify</em> predicates with Boolean-valued characteristic functions, then what you are doing is equivalent to interpreting into a Boolean subtopos of whatever topos you started with.</p>
1,690,092
<p>$a_{n} = \frac{1}{\sqrt{n^2+n}} + \frac{1}{\sqrt{n^2+n+1}} + ... + \frac{1}{\sqrt{n^2+2n-1}}$</p> <p>and I need to check whether this sequence converges to a limit without finding the limit itself. I am thinking of using the squeeze theorem to show that it converges to something (I suspect '$1$').</p> <p>But I wrote down $a_{n+1}$ and $a_{n-1}$ and it doesn't get me anywhere...</p>
Dac0
291,786
<p>You're right, you just have to consider that $$\frac{n}{\sqrt{n^2+n}}\ge a_{n} \ge \frac{n}{\sqrt{n^2+2n-1}}$$ Then take the limit and you're done. </p>
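The squeeze can also be checked numerically (a quick sanity check; the bounds and $a_n$ itself all approach $1$):

```python
import math

def a(n):
    # n terms: 1/sqrt(n^2+n) + ... + 1/sqrt(n^2+2n-1)
    return sum(1 / math.sqrt(n * n + j) for j in range(n, 2 * n))

for n in (10, 100, 10_000):
    lower = n / math.sqrt(n * n + 2 * n - 1)
    upper = n / math.sqrt(n * n + n)
    assert lower <= a(n) <= upper
    print(n, a(n))   # tends to 1
```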
626,928
<p>I took a linear algebra course this semester (as you've probably noticed looking at my previously asked questions!). We had a session on preconditioning: what preconditioners are good for and how to construct them for matrices with special properties. It was a really short introduction for such an important research topic, so I'm not sure I got familiar with them. I need some elementary sources on it to be able to construct preconditioners for famous matrices and to make sure I fully understand the concept. Any help would be greatly appreciated. </p>
Han de Bruijn
96,057
<p>Look what I have here on my bookshelves: the original thesis by Henk (H.A.) van der Vorst where it all started, <A HREF="http://books.google.nl/books/about/Preconditioning_by_Incomplete_Decomposit.html?id=9Ad7NwAACAAJ&amp;redir_esc=y" rel="nofollow">titled</A>: <I>Preconditioning by Incomplete Decompositions</I> (1982). If you can lay your hands on it ..</p>
40,500
<blockquote> <p>What are the most fundamental/useful/interesting ways in which the concepts of Brownian motion, martingales and Markov chains are related?</p> </blockquote> <p>I'm a graduate student doing a crash course in probability and stochastic analysis. At the moment, the world of probability is a confusing blur, but I'm starting with a grounding in the basic theory of Markov chains, martingales and Brownian motion. While I've done a fair amount of analysis, I have almost no experience in these other matters and while understanding the definitions on their own isn't too difficult, the big picture is a long way away.</p> <p>I would like to <strong>gather together results and heuristics</strong>, each of which links together two or more of Brownian motion, martingales and Markov chains in some way. Answers which <strong>relate probability to real or complex analysis</strong> would also be welcome, such as "Result X about martingales is much like the basic fact Y about sequences".</p> <p>The thread may go on to contain a Big List in which each answer is the poster's favourite as yet unspecified result of the form "This expression related to a Markov chain is always a martingale because blah. It represents the intuitive idea that blah".</p> <p>Because I know little, I can't gauge the worthiness of this question very well so apologies in advance if it is deemed untenable by the MO police.</p>
The Bridge
2,642
<p>Hi, </p> <p>Regarding martingales, you can see them as fair games. This means that if the (martingale) process represents your (random) wealth, you should not be able to design a strategy to increase your current wealth, no matter what the outcome of the sample space is.</p> <p>Brownian motion can be seen as a limit of rather simple random walks, but I'm sure that you know about this. </p> <p>Markov processes "disconnect" the future and the past of the process conditionally on the present value of the process. Here "disconnect" means that functions of past and of future values of the process are independent conditionally on the present value of the process.</p> <p>Does that make things clearer?</p>
2,372,743
<p>Show that $$\lim_{x\to\infty} \int_{1}^x e^{-y^2} dy$$ exists and lies in $[e^{-4},1]$.</p> <p>I cannot find a good lower bound, and I must show it without using double integration. I totally don't know how to solve it.</p>
ty.
360,063
<p>I have a funny answer, if you find computing a Fourier transform without integrating fun. :D</p> <p>Note that $e^{-y^2}$ is monotonically decreasing as $|y|\rightarrow\infty$ and that the function is always positive. That means \begin{equation} 0&lt;\int_{1}^{\infty}e^{-y^2}dy&lt;\int_{0}^{\infty}e^{-y^2}dy. \end{equation} If we find the value of the right integral and see that it is indeed smaller than $1$, and if we find a piece of area larger than $e^{-4}$, we are done. So let's do it!</p> <p>I will give the function a name: $f:\mathbb{R}\rightarrow\mathbb{R},\quad f(y)=e^{-y^2/2}$.</p> <p>First note that the function is even, $f(-y)=f(y)$, so $$\int_{\mathbb{R}}f(y)dy=2\int_{0}^{\infty}f(y)dy.$$ Consider the Fourier transform (=FT) of it (which exists because $f$ is in Schwartz space) $$\tilde{f}(k)=F(f)(k)=a\int_{\mathbb{R}}f(y)e^{-iky}dy$$ and the inverse Fourier transform $$f(y)=F^{-1}\left(\tilde{f}\right)(y)=a\int_{\mathbb{R}}\tilde{f}(k)e^{iky}dk,$$ where $a=\frac{1}{\sqrt{2\pi}}$. OK, let's note that $$f'(y)=-yf(y).$$ We know that $\tilde{f}(k)$ exists. So let me calculate, weirdly enough, its derivative $$ \frac{d}{dk}\tilde{f}(k)=\frac{d}{dk}a\int_{\mathbb{R}}f(y)e^{-iky}dy=a\int_{\mathbb{R}}\frac{\partial}{\partial k}f(y)e^{-iky}dy=a\int_{\mathbb{R}}f(y)e^{-iky}(-iy)dy=(*). $$ From the differential equation above we substitute to get $f'(y)$ in the integrand: $$ (*)=ai\int_{\mathbb{R}}f'(y)e^{-iky}dy. $$ Now with the product rule/partial integration we get $$ (*)=ai\left(f(y)e^{-iky}\vert_{-\infty}^{\infty}-\int_{\mathbb{R}} f(y)e^{-iky}(-ik)dy\right)=-k\tilde{f}(k).$$ There! The FT of $f$ satisfies the same linear differential equation as $f$. So we can conclude that $f=\tilde{f}$, i.e. $$\tilde{f}(k)=e^{-k^2/2}.$$ If you are pedantic, you would say the integration constant of the solution to $$\tilde{f}'(k)=-k\tilde{f}(k)$$ has nothing to do with the FT. That's true, but with the inverse FT you will see the constant is 1. 
Luckily, I am not pedantic.</p> <p>Moving on, let's calculate the integral using the substitution $u=\sqrt{2}y$: $$\int_{0}^{\infty}e^{-y^2}dy=\frac{1}{2}\int_{\mathbb{R}}e^{-y^2}dy=\frac{1}{2}\int_{\mathbb{R}}e^{-u^2/2}\frac{1}{\sqrt{2}}du=\frac{\sqrt{2\pi}}{\sqrt{2\pi}}\frac{1}{2}\int_{\mathbb{R}}e^{-u^2/2}\frac{1}{\sqrt{2}}e^{-i0u}du=\frac{\sqrt{\pi}}{2}\tilde{f}(0)=\frac{\sqrt{\pi}}{2}&lt;1. $$ Now let's verify the lower bound $e^{-4}$: the rectangle with length 1 and height $e^{-4}$ fits underneath the graph of $e^{-y^2}$ for $y\in[1,2],$ so $$ e^{-4}&lt;\int_{1}^{\infty}e^{-y^2}dy&lt;1. $$</p>
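A numeric check of both bounds (midpoint rule on $[1,10]$; the tail past $10$ is below $e^{-100}$, and `math.erfc` gives the closed form $\frac{\sqrt{\pi}}{2}\operatorname{erfc}(1)$ for comparison):

```python
import math

# midpoint rule for the integral of e^{-y^2} over [1, 10]
N = 100_000
h = 9 / N
I = h * sum(math.exp(-(1 + (k + 0.5) * h) ** 2) for k in range(N))

assert math.exp(-4) < I < 1                        # the claimed bounds
closed = math.sqrt(math.pi) / 2 * math.erfc(1.0)   # exact value of the integral
assert abs(I - closed) < 1e-8
```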
1,893,609
<p>I am trying to show that $A=\{(x,y) \in \Bbb{R}^2 \mid -1 &lt; x &lt; 1, -1&lt; y &lt; 1 \}$ is an open set algebraically. </p> <p>Let $a_0 = (x_o,y_o) \in A$. Suppose that $r = \min\{1-|x_o|, 1-|y_o|\}$ and choose $a = (x,y) \in D_r(a_0)$. Then</p> <p>Edit: I am looking for the proof of the algebraic implication that $\|a-a_0\| = \sqrt {(x-x_o)^2+(y-y_o)^2} &lt; r \Rightarrow|x| &lt; 1 , |y| &lt; 1 $</p>
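Not a proof, but the implication can be sanity-checked numerically with the radius $r=\min\{1-|x_o|,1-|y_o|\}$ from the attempt (the sample points are my own, chosen at random):

```python
import math
import random

random.seed(0)
for _ in range(1000):
    x0, y0 = random.uniform(-1, 1), random.uniform(-1, 1)
    r = min(1 - abs(x0), 1 - abs(y0))
    # pick a point a = (x, y) with ||a - a0|| < r
    t = random.uniform(0, 2 * math.pi)
    s = r * random.random() * 0.999
    x, y = x0 + s * math.cos(t), y0 + s * math.sin(t)
    assert abs(x) < 1 and abs(y) < 1   # a stays inside A
```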
Narasimham
95,860
<p>Unless you <em>delete/discard</em> one of the four given points, you do not have a unique circle with a center and radius. This is over-determined but consistent. </p>
4,133,760
<p>Dr. Strang, in his book <em>Linear Algebra and Its Applications</em> (p. 108), says, when talking about the left inverse of an <span class="math-container">$m$</span> by <span class="math-container">$n$</span> matrix:</p> <blockquote> <p><strong>UNIQUENESS:</strong> For a full column rank <span class="math-container">$r=n . A x=b$</span> has at most one solution <span class="math-container">$x$</span> for every <span class="math-container">$b$</span> if and only if the columns are linearly independent. Then <span class="math-container">$A$</span> has an <span class="math-container">$n$</span> by <span class="math-container">$m$</span> left-inverse <span class="math-container">$B$</span> such that <span class="math-container">$B A=I_{n}$</span>. This is possible only if <span class="math-container">$m \geq n$</span>.</p> </blockquote> <p>I understand why there can be at most one solution for full column rank, but how does that lead to <span class="math-container">$A$</span> having a left inverse?</p> <p>I'd be grateful if someone could help or hint at the answer.</p>
hm2020
858,083
<p><strong>Question:</strong> &quot;I understand why there can be at most one solution for a full column rank but how does that lead to A having a left inverse? I'd be grateful if someone could help or hint at the answer.&quot;</p> <p><strong>Answer:</strong> Let <span class="math-container">$k$</span> be the field of real numbers and <span class="math-container">$V:=k^n, W:=k^m$</span>. Since the equation <span class="math-container">$Ax=b$</span> has at most one solution for every <span class="math-container">$b\in W$</span>, the equation <span class="math-container">$Ax=0$</span> has the unique solution <span class="math-container">$x=0$</span>. Hence the map <span class="math-container">$A: V \rightarrow W$</span> is an injection, and it follows that <span class="math-container">$n \leq m$</span>. We get an exact sequence of <span class="math-container">$k$</span>-vector spaces</p> <p><span class="math-container">$$0 \rightarrow V \rightarrow W \rightarrow^p W/V \rightarrow 0$$</span></p> <p>and we may choose a section <span class="math-container">$s$</span> of <span class="math-container">$p$</span>. This is a <span class="math-container">$k$</span>-linear map <span class="math-container">$s: W/V \rightarrow W$</span> with <span class="math-container">$ p \circ s = Id$</span>. This gives an idempotent endomorphism of <span class="math-container">$W$</span>: <span class="math-container">$u:=s \circ p$</span> with <span class="math-container">$u^2=u$</span>. From this it follows that we may write</p> <p><span class="math-container">$$W \cong V \oplus Im(u)$$</span></p> <p>and the projection map <span class="math-container">$p_V: W \cong V \oplus Im(u) \rightarrow V$</span> is a left inverse to the inclusion map defined by the matrix <span class="math-container">$A$</span>. If you choose a basis for <span class="math-container">$W$</span>, you get a matrix <span class="math-container">$B$</span> with <span class="math-container">$BA=Id_n$</span>.</p>
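For a concrete numerical illustration (my own example matrix; when $A$ has full column rank, the Gram matrix $A^TA$ is invertible and $B=(A^TA)^{-1}A^T$ is one explicit left inverse):

```python
from fractions import Fraction as F

# a 3x2 matrix with full column rank (example chosen by me)
A = [[F(1), F(0)],
     [F(0), F(1)],
     [F(1), F(1)]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

At = [list(r) for r in zip(*A)]           # A^T
G = mat_mul(At, A)                        # Gram matrix A^T A (2x2, invertible)
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[ G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det,  G[0][0] / det]]
B = mat_mul(Ginv, At)                     # explicit left inverse (A^T A)^{-1} A^T

assert mat_mul(B, A) == [[F(1), F(0)], [F(0), F(1)]]   # B A = I_2
```

Note that $AB$ is only a projection onto the column space of $A$, not the identity on $k^3$, which is why $B$ is a one-sided inverse.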
4,480,570
<p>I am reading over the proof of <strong>Lemma 10.32 (Local Frame Criterion for Subbundles)</strong> in Lee's <em>Introduction to Smooth Manifolds</em>.</p> <p>The lemma says</p> <blockquote> <p>Let <span class="math-container">$\pi: E \rightarrow M$</span> be a smooth vector bundle and suppose that for each <span class="math-container">$p\in M$</span> we are given an <span class="math-container">$m$</span>-dimensional linear subspace <span class="math-container">$D_p \subseteq E_p$</span>. Then <span class="math-container">$D = \cup_{p \in M} D_p \subseteq E$</span> is a smooth subbundle of <span class="math-container">$E$</span> iff each point of <span class="math-container">$M$</span> has a neighborhood <span class="math-container">$U$</span> on which there exist smooth local sections <span class="math-container">$\sigma_1, \cdots, \sigma_m: U \rightarrow E$</span> with the property that <span class="math-container">$\sigma_1(q), \cdots, \sigma_m(q)$</span> form a basis for <span class="math-container">$D_q$</span> at each <span class="math-container">$q \in U$</span>.</p> </blockquote> <p>Overall I understand the proof of this lemma, except for the part where we need to show that <span class="math-container">$D$</span> is an embedded submanifold (with or without boundary) of <span class="math-container">$E$</span>. Professor Lee's proof says that</p> <blockquote> <p>it suffices to show that each <span class="math-container">$p \in M$</span> has a neighborhood <span class="math-container">$U$</span> such that <span class="math-container">$D \cap \pi^{-1}(U)$</span> is an embedded submanifold (possibly with boundary) in <span class="math-container">$\pi^{-1}(U) \subseteq E$</span>.</p> </blockquote> <p>It is not obvious to me why showing this is sufficient. Could someone explain the logic to me?</p> <p>Edit: Here's my attempt to reason it out: By Theorem 5.8, if <span class="math-container">$D ∩ \pi^{-1}(U)$</span> is an embedded submanifold in <span class="math-container">$\pi^{-1}(U)$</span>, it satisfies the local k-slice condition. Now because <span class="math-container">$D$</span> is a union of <span class="math-container">$D ∩ \pi^{-1}(U)$</span> over different neighborhoods of <span class="math-container">$p \in M$</span>, it satisfies the local k-slice condition as well, and hence again by Theorem 5.8, <span class="math-container">$D$</span> is an embedded submanifold.</p> <p>Please let me know if anything is wrong and how it can be corrected.</p> <p>Thank you very much.</p> <p>Here's a screenshot of the Lemma and its (partial) proof: <a href="https://i.stack.imgur.com/87mUp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/87mUp.jpg" alt="enter image description here" /></a></p>
Leonhard Euler
998,037
<p>Unfortunately, Theorem 5.8 only works for smooth manifolds without boundary, but here <span class="math-container">$E$</span> probably has boundary. A better theorem is Theorem 5.51, but it also requires <span class="math-container">$M$</span> to be a smooth manifold without boundary.</p>
2,362,942
<p>How could I notate a matrix rotation?</p> <p>Example: $ A = \begin{pmatrix} a &amp; b \\ c &amp; d \end{pmatrix},\:\:\: A_{\text{rotated}} = \begin{pmatrix} c &amp; a \\ d &amp; b\end{pmatrix}$. </p> <p>Notice the whole matrix is "rotated" clockwise. Is there any notation for this, and any way to compute it generally via basic matrix operations such as addition and multiplication or other?</p>
Angina Seng
436,618
<p>Your rotated matrix is $$A^t\pmatrix{0&amp;1\\1&amp;0}.$$</p>
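<p>A quick numeric check of this identity with NumPy (the concrete entries stand in for the symbolic ones); note that the same recipe, transpose and then reverse the columns via the exchange matrix, gives the clockwise quarter-turn for any square matrix:</p>

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])            # stands in for [[a, b], [c, d]]
J = np.array([[0, 1],
              [1, 0]])            # the exchange matrix (reversed identity)

rotated = A.T @ J                 # transpose, then swap the columns
print(rotated)                    # [[3 1] [4 2]], i.e. [[c, a], [d, b]]

# The same formula rotates any n x n matrix a quarter-turn clockwise:
B = np.arange(9).reshape(3, 3)
Jn = np.eye(3, dtype=int)[::-1]   # n x n exchange matrix
assert np.array_equal(B.T @ Jn, np.rot90(B, -1))
```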
46
<p>I have solved a couple of questions myself in the past, and I think some of them are interesting to the public and will most likely appear in the future. One example of this is the question of how to enable antialiasing in the Linux frontend, for which there is no native support right now. My question would now be whether it would be appropriate to post these as new questions and then immediately answer them myself.</p>
Andy Ross
43
<p>Most of these answers say, "If it is interesting, ask it." But what do we mean by "interesting"? Or more appropriately, interesting to whom? </p> <p>If we want only Mathematica experts to visit the site then we should probably ask questions only an expert would ask. I'm of the thinking that we probably want some frequently asked beginning and intermediate sorts of questions to draw new users to the site. </p> <p>I say this for two reasons. 1: We don't want to scare people from asking simple questions. 2: We want to draw people in who are doing a web search.</p> <p>When I started with Mathematica most of my problems dealt with simple list manipulation and importing data. I also found the simple shorthand used in examples (e.g. #, /@, *, __, /., /;) maddeningly confusing. </p> <p>I wouldn't really be interested in a question about these things now, but I bet many new users would be.</p>
133,936
<p>I am trying to understand a part of the following theorem:</p> <blockquote> <p><strong>Theorem.</strong> Assume that $f:[a,b]\to\mathbb{R}$ is bounded, and let $c\in(a,b)$. Then, $f$ is integrable on $[a,b]$ if and only if $f$ is integrable on $[a,c]$ and $[c,b]$. In this case, we have $$\int_a^bf=\int_a^cf+\int_c^bf.$$ <em>Proof.</em> If $f$ is integrable on $[a,b]$, then for every $\epsilon&gt;0$ there exists a partition $P$ such that $U(f,P)-L(f,P)&lt;\epsilon$. Because refining a partition can only potentially bring the upper and lower sums closer together, we can simply add $c$ to $P$ if it is not already there. Then, let $P_1=P\cap[a,c]$ be a partition of $[a,c]$, and $P_2=P\cap[c,b]$ be a partition of $[c,b]$. It follows that $$U(f,P_1)-L(f,P_1)&lt;\epsilon\text{ and }U(f,P_2)-L(f,P_2)&lt;\epsilon,$$ implying that $f$ is integrable on $[a,c]$ and $[c,b]$.</p> <p>[...]</p> </blockquote> <p>How does that last expression "follow"? Neither $P_1$ nor $P_2$ is a refinement of $P$, but they are still somehow less than $\epsilon$; will that not make their difference larger? That is, $$U(f,P_i)-L(f,P_i)\geqslant U(f,P)-L(f,P),$$ for $i=1,2$? Thanks in advance!</p>
Community
-1
<p>(Too long for a comment.)</p> <p>Now that the answer has been spelt out, I wanted to clarify this point about refinements:</p> <blockquote> <ol> <li>Neither of the <span class="math-container">$P_i$</span> is a refinement of <span class="math-container">$P$</span>.</li> </ol> </blockquote> <p>Yes, you're right. In fact, the containment goes this way: <span class="math-container">$P_i \subsetneq P$</span>. So, your contention is right.</p> <hr /> <ul> <li><p>You claim this inequality: <span class="math-container">$$U(f,P_i)-L(f,P_i) \geqslant U(f,P)-L(f,P) \tag{$\ast$}$$</span></p> <p>I think the reason you believe this is true is that same containment. But <strong>that simply does not mean that <span class="math-container">$P$</span> is a refinement of <span class="math-container">$P_i$</span>.</strong> First of all, <span class="math-container">$P_i$</span> is a partition of <span class="math-container">$[a,c]$</span> when <span class="math-container">$i=1$</span> and of <span class="math-container">$[c,b]$</span> when <span class="math-container">$i=2$</span>.</p> </li> </ul> <blockquote> <p>So, <span class="math-container">$(\ast)$</span> fails for a nice reason: <span class="math-container">$P$</span> is a bigger set, but not a finer partition, than the <span class="math-container">$P_i$</span>.</p> </blockquote> <hr /> <p>To see that <span class="math-container">$(\ast)$</span> really fails, just note that the terms that appear in the sum for the partition <span class="math-container">$P_i$</span> also appear in the sum for <span class="math-container">$P$</span>. Since the terms involved are non-negative, this gives you the inequality reversed. As <a href="https://math.stackexchange.com/a/133956/21436">one of the answers</a> here points out, <span class="math-container">$P=P_1 \cup P_2$</span> also tells you that the inequality is actually reversed.</p> <p>I hope this helps.</p>
1,318,934
<blockquote> <p>Let $q$ be an odd prime power. Consider the map $f:\Bbb F_{q^3} \rightarrow \Bbb F_{q^3}$, defined by $$f(x)=\alpha x^q+\alpha^q x$$ for some fixed $\alpha \in \Bbb F_{q^3} \setminus \{ 0 \}$. Show that $f$ is a bijection.</p> </blockquote> <p>Hint: If $\beta \in \ker(f) \setminus \{ 0 \}$ consider the relative norm map $N_{\Bbb F_{q^3}/{\Bbb F_q}}(\alpha \beta^q)$.</p>
Jyrki Lahtonen
11,619
<p>I like Adam's solution a lot. I arrived at the scene late, so I'm just adding this as my best guess as to what the hint means.</p> <p>The mapping $f$ is linear over the subfield $\Bbb{F}_q$, so it suffices to show that its kernel is trivial. Assume that there exists a $\beta$ such that $f(\beta)=0$. This implies that $$ \alpha\beta^q=-\alpha^q\beta.\qquad(*) $$ Let's apply the Frobenius automorphism to both sides: $$ \alpha^q\beta^{q^2}=-\alpha^{q^2}\beta^q. $$ Repeat the dose, remembering that raising to the power $q^3$ is the identity mapping: $$ \alpha^{q^2}\beta=-\alpha\beta^{q^2}. $$ Let's multiply these three equations together, and arrive at $$ N(\alpha)N(\beta)=-N(\alpha)N(\beta). $$ Here $N(x)=x\cdot x^q\cdot x^{q^2}$ is the relative norm map. Because $-1\neq1$ (here we need the assumption that $q$ is odd) this implies that $N(\alpha)N(\beta)=0$. But the norm vanishes only at zero, so we can conclude that $\alpha=0$ or $\beta=0$. The former possibility was assumed not to hold, so $\beta=0$. The claim follows.</p> <p>Using Adam's idea we could deduce (assuming $\alpha\beta\neq0$) from (*) directly that $$ \left(\frac\beta\alpha\right)^{q-1}=-1. $$ This leads to a contradiction by Adam's argument.</p>
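<p>The claim can also be sanity-checked by brute force in a small case. The sketch below builds $\Bbb F_{27}=\Bbb F_3[t]/(t^3-t-1)$ and verifies that $f(x)=\alpha x^3+\alpha^3 x$ is a bijection for the assumed choice $\alpha=t$ (an illustration of the theorem for $q=3$, not a proof):</p>

```python
from itertools import product

p = 3  # q = 3, so the field is F_{q^3} = F_27

# Elements of F_27 as coefficient triples a0 + a1*t + a2*t^2,
# with arithmetic modulo the irreducible cubic t^3 - t - 1 over F_3.
elements = list(product(range(p), repeat=3))

def add(u, v):
    return tuple((x + y) % p for x, y in zip(u, v))

def mul(u, v):
    c = [0] * 5
    for i in range(3):
        for j in range(3):
            c[i + j] += u[i] * v[j]
    c[2] += c[4]; c[1] += c[4]   # reduce t^4 = t^2 + t
    c[1] += c[3]; c[0] += c[3]   # reduce t^3 = t + 1
    return tuple(x % p for x in c[:3])

def cube(u):                      # x -> x^q with q = 3
    return mul(u, mul(u, u))

alpha = (0, 1, 0)                 # assumed nonzero choice: alpha = t
alpha_q = cube(alpha)             # alpha^q
f = lambda x: add(mul(alpha, cube(x)), mul(alpha_q, x))

image = {f(x) for x in elements}
print(len(image) == 27)           # True: f is a bijection on F_27
```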
1,688,184
<blockquote> <p>Prove that $2\sqrt 5$ is irrational</p> </blockquote> <p><strong>My attempt:</strong></p> <p>Suppose $$2\sqrt 5=\frac p q\quad\bigg/()^2$$ </p> <p>$$\Longrightarrow 4\cdot 5=\frac{p^2}{q^2}$$</p> <p>$$\Longrightarrow 20\cdot q^2=p^2$$</p> <p>$$\Longrightarrow q\mid p^2$$</p> <p>$$\text{gcd}(p,q)=1$$</p> <p>$$\Longrightarrow \text{gcd}(p^2,q)=1$$</p> <p>How can I proceed?</p>
Galc127
111,334
<p>You have $20|p^2$, thus $2|p^2\wedge 5|p^2$, hence by <a href="https://en.wikipedia.org/wiki/Euclid&#39;s_lemma" rel="nofollow">Euclid's lemma</a> $2|p\wedge 5|p$, hence $10|p$, so we can write $p=10k$ for $k\in\mathbb{Z}$ and then $20q^2=100k^2\Rightarrow q^2=5k^2$, hence $5|q^2$, and by the same lemma $5|q$. Thus $5\mid\text{gcd}(p,q)$, which contradicts $\text{gcd}(p,q)=1$.</p>
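<p>The heart of the argument is that $p^2 = 20q^2$ is impossible in positive integers; one can also see this from prime factorizations, since $20q^2 = 2^2\cdot 5\cdot q^2$ carries an odd exponent of $5$ while every perfect square has even exponents. A small brute-force search (illustrative only, not a proof) is consistent with this:</p>

```python
import math

# If 2*sqrt(5) = p/q with positive integers p, q, then p^2 = 20 q^2.
# Search for a q making 20 q^2 a perfect square:
hits = [q for q in range(1, 5000) if math.isqrt(20 * q * q) ** 2 == 20 * q * q]
print(hits)  # [] -- no q works, as the parity-of-exponents argument predicts
```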
195,006
<p>I am not very familiar with mathematical proofs, or the notation involved, so if it is possible to explain in 8th grade English (or thereabouts), I would really appreciate it.</p> <p>Since I may even be using incorrect terminology, I'll try to explain what the terms I'm using mean in my mind. Please correct my terminology if it is incorrect so that I can speak coherently about this answer, if you would.</p> <p>Sequential infinite set: A group of ordered items that flow in a straight line, of which there are infinitely many. So, all integers from least to greatest would be an example, because they are ordered from least to greatest in a sequential line, but an infinite set of bananas would not, since they are not linearly, sequentially ordered. An infinite set of bananas that were to be eaten one-by-one would be, though, because they are iterated through (eaten) one-by-one (in linear sequence).</p> <p>Sequential infinite subsets: Multiple sets within a sequential infinite set that naturally fall into the same order as the items of the sequential infinite set of which they are subsets. So, for example, the infinite set of all integers from least to greatest can be said to have the following two sequential infinite subsets within it: all negative integers; and all positive integers. They are sequential because the negative set comes before the positive set when ordered as stated. They are infinite because they both contain an infinite quantity of items, and they are subsets because they are within the greater infinite set of all integers.</p> <p>So I'm wondering if every (not some, but every) sequential infinite set contains within it sequential infinite subsets. The subsets (not the items within them) being sequentially ordered is extremely important. Clearly, a person could take any infinite set, remove one item, and have an infinite subset. Put the item back, remove a different item, and you have multiple infinite subsets. 
But I need them to be not only non-overlapping, but also sequential in order.</p> <p>Please let me know if this does not make sense, and thank you for dumbing the answer down for me.</p>
Ross Millikan
1,827
<p>It sounds like you are thinking of your set as a tree and want to know if there is an infinite path. The path through the tree would be the ordering you are talking about. <a href="http://en.wikipedia.org/wiki/K%C3%B6nig%27s_lemma" rel="nofollow">König's lemma</a> assures you that if the tree is finitely branching that there will be an infinite path.</p>
1,147,373
<p>I need to prove that <span class="math-container">$$I = \int^{\infty}_{-\infty}u(x,y) \,dy$$</span> is independent of <span class="math-container">$x$</span> and find its value, where <span class="math-container">$$u(x,y) = \frac{1}{2\pi}\exp\left(+x^2/2-y^2/2\right)K_0\left(\sqrt{(x-y)^2+(-x^2/2+y^2/2)^2}\right)$$</span></p> <p>and <span class="math-container">$K_0$</span> is the modified Bessel function of the second kind with order zero. Evaluating the integral numerically with Mathematica for different values of <span class="math-container">$x$</span> gives the result of <span class="math-container">$2.38$</span>, but I want to know if it is possible to show this analytically.</p> <p>Increasing <span class="math-container">$x$</span> increases the exponential factor, but it also strongly increases the argument of the modified Bessel function, thus reducing its value.</p> <p>To show that the integral is independent of <span class="math-container">$x$</span>, it is sufficient to show that <span class="math-container">$\int^{\infty}_{-\infty}\frac{\partial}{\partial x}u(x,y)\,dy = 0$</span>, but the differentiation only gets uglier and uglier.</p> <p><strong>EDIT</strong> Mathematica test:</p> <pre><code>x = 100
NIntegrate[
 (1/(2 Pi))*Exp[x*x/2 - y*y/2] BesselK[0,
   Sqrt[(x - y)*(x - y) + (x*x/2 - y*y/2)*(x*x/2 - y*y/2)]],
 {y, -Infinity, x, Infinity}, MaxRecursion -&gt; 22]
</code></pre> <p>This gives an answer of <span class="math-container">$0.378936$</span> independent of the choice of <span class="math-container">$x$</span>. In the earlier calculation I missed the factor <span class="math-container">$\frac{1}{2\pi}$</span>.</p>
JJacquelin
108,514
<p>Are you sure that there is no typo in your equation?</p> <p>With : \begin{eqnarray} u(x,y) = \frac{1}{2\pi}\exp\left(+x^2/2-y^2/2\right)K_0\left(\sqrt{(x-y)^2+(-x^2/2+y^2/2)^2}\right) \end{eqnarray}</p> <p>I do not find $I=\int^{\infty}_{-\infty}u(x,y) \,d y \simeq 2.38$, and $I(x)$ is not constant.</p> <p>For example : Case $x=0$ :</p> <p>$$u(0,y) = \frac{1}{2\pi}\exp\left(-y^2/2\right)K_0\left(\sqrt{(-y)^2+(y^2/2)^2}\right)$$ $$I = \int^{\infty}_{-\infty}u(0,y) \,d y \simeq 0.64965$$ Note : this numerical result is not valid because some precautions were not taken at that time. More recent numerical results confirm the value $0.378936$ given by chatur.</p> <p>In the radical, are the powers not of the same order?</p> <p>Note :</p> <p>The numerical computation is hazardous because $K_0(0)=\infty$. </p> <p>The integral is not convergent in the usual sense. But it can be convergent if we consider the Cauchy principal value.</p> <p>During numerical computation, $y$ varies from a low value $&lt;x$ to a high value $&gt;x$. Proceeding by steps, it can happen that $y$ comes very close to $x$ and the argument of $K_0$ comes very close to $0$. In that case, the numerical calculation transitorily involves differences between very big numbers. </p> <p>All depends on how the software detects the singular point and how it manages to treat it as a Cauchy principal value integration. I do not know how Mathematica proceeds. Presently, I cannot say whether the numerical values obtained are significant or not. If it were proved that the results from Mathematica are correct, I would take my hat off! </p> <p>NOTE 2 :</p> <p>As pointed out by GEdgard, the singularity at $x=y$, i.e. at $K_0(0)$, is logarithmic. So there is no major difficulty in the numerical computation, insofar as some precautions are taken. I made a few numerical tests on this point. </p> <p>What is more, for several values of $x$, I computed the derivative $\frac{dI}{dx}$, which is the integral of $\frac{du}{dx}$. 
The results are very close to $0$.</p> <p>This leads one to think that chatur's conjecture $I(x)=$ constant might be exact. Of course, this is not a proof.</p>
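<p>For what it is worth, the numerical claim can be reproduced independently in Python with SciPy, splitting the integration at the logarithmic singularity $y=x$ (this is a numerical check only; it does not settle the conjecture):</p>

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def u(x, y):
    r = np.sqrt((x - y) ** 2 + ((y * y - x * x) / 2) ** 2)
    return np.exp((x * x - y * y) / 2) * kv(0, r) / (2 * np.pi)

def I(x):
    # split at y = x, where K_0 has an integrable log singularity
    left, _ = quad(lambda y: u(x, y), -np.inf, x)
    right, _ = quad(lambda y: u(x, y), x, np.inf)
    return left + right

print(I(0.0), I(1.0))  # both come out near 0.379, supporting I(x) = const
```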
1,225,122
<p>So I am currently studying a course in commutative algebra, and the main objects we are looking at are ideals generated by polynomials in n variables. But the one thing I don't understand when working with these ideals is how we reduce the generating set to something much simpler. For example:</p> <p>Consider the ideal $I$ = $&lt;x^2-4x + 3, x^2 +x -2&gt;$. Since $x -1$ is a common factor of both polynomials in the generating set, we deduce that $I$ is in fact $&lt;x-1&gt;$. So my question is: what is the criterion that applies when we reduce the generating set to something much simpler?</p> <p>Based on what I understand, I am guessing in the above example that since every polynomial is divisible by $x-1$ we can say the ideal is generated by $x-1$ (wouldn't this result in the loss of any elements?). But I am not entirely convinced by my reasoning and would prefer to hear it from someone who understands this stuff better. </p> <p>Also, using the same reasoning as above, can we then say that the ideal $I$ = $&lt;x^3 - x^2 + x&gt;$ = $&lt;x&gt;$ ?</p>
Bill Dubuque
242
<p>The tuple notation is used both for gcds and ideals because they share many of the same laws, e.g. those below that are familiar from wide use in the Euclidean algorithm and related results:</p> <p>$$\quad a(b,c)\, =\, (ab,ac)\qquad \rm [Distributive\ Law]\qquad $$</p> <p>$$ (a,b)\, =\, (a,b')\ \ \ {\rm if}\ \ \ b\equiv b'\!\!\!\pmod a\qquad $$</p> <p>Applied to your example $ $ (where, $ $ in your case, $\ f(x) = x\!+\!2\in\Bbb Z[x])$</p> <p>$$\begin{align}((x\!-\!1)(x\!-\!3),(x\!-\!1)f(x))\, =&amp;\,\ (x\!-\!1)\,(x\!-\!3,\,f(x))\\ =&amp;\,\ (x\!-\!1)\,(x\!-\!3,\,f(3))\ \ {\rm by\ \ mod}\ x\!-\!3\!:\ x\equiv 3\Rightarrow f(x)\equiv f(3)\\ =&amp;\,\ (x\!-\!1)\ \ {\rm when}\ \ f(3)\ \ \ \text{is a unit (invertible)} \end{align}$$</p>
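<p>For readers who want to check such reductions mechanically: over $\Bbb Q$, a Gröbner basis computation (which in one variable just produces the monic gcd) confirms the simplification. A sketch with SymPy, assuming coefficients in $\Bbb Q$:</p>

```python
from sympy import symbols, groebner, gcd

x = symbols('x')
f = x**2 - 4*x + 3          # (x - 1)(x - 3)
g = x**2 + x - 2            # (x - 1)(x + 2)

assert gcd(f, g) == x - 1                          # the common factor
assert list(groebner([f, g], x).exprs) == [x - 1]  # so <f, g> = <x - 1> over Q
```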
757,917
<p>According to <a href="http://www.wolframalpha.com/input/?i=sqrt%285%2bsqrt%2824%29%29-sqrt%282%29%20=%20sqrt%283%29" rel="nofollow">wolfram alpha</a> this is true: $\sqrt{5+\sqrt{24}} = \sqrt{3}+\sqrt{2}$</p> <p>But how do you show this? I know of no rules that work with addition inside square roots.</p> <p>I noticed I could do this:</p> <p>$\sqrt{24} = 2\sqrt{3}\sqrt{2}$</p> <p>But I still don't see how I should show this, since $\sqrt{5+2\sqrt{3}\sqrt{2}} = \sqrt{3}+\sqrt{2}$ still contains that addition</p>
sirfoga
83,083
<p><strong>Hint</strong>: Simply try to square both sides of the equation (since they are both positive numbers).</p>
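<p>Carrying the hint out: $(\sqrt3+\sqrt2)^2 = 3 + 2\sqrt6 + 2 = 5 + \sqrt{24}$, since $\sqrt{24}=2\sqrt6$. SymPy can also denest the radical automatically, which gives a quick independent check of the identity:</p>

```python
from sympy import sqrt, sqrtdenest, expand, simplify

lhs = sqrt(5 + sqrt(24))
denested = sqrtdenest(lhs)
assert simplify(denested - sqrt(2) - sqrt(3)) == 0      # lhs = sqrt(2) + sqrt(3)

# Squaring, as the hint suggests:
assert expand((sqrt(3) + sqrt(2))**2) == 5 + 2*sqrt(6)  # and sqrt(24) = 2*sqrt(6)
```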
1,547,972
<p>Why is $\sin(x^2)$ similar to $x \sin(x)$? </p> <p>I graphed it using desmos, and when I look at it, the behavior as $x$ approaches zero seems to be to oscillate less. </p> <p>Yet as $x$ approaches infinity and negative infinity, $\sin(x^2)$ oscillates between $y=1$ and $y=-1$ while $x \sin(x)$ oscillates between $y=x$ and $y=-x$.</p> <p>I was wondering why these functions are so similar yet so different. I'm in 10th grade and I'm currently learning precalculus, so if answers could be targeted to a precalculus level that would be great.</p>
Thomas Andrews
7,933
<p>When $x$ gets close to zero, $\sin x \approx x-\frac{x^3}{6}$. So $$\sin(x^2)\approx x^2-\frac{x^6}{6}\\x\sin(x)\approx x^2-\frac{x^4}{6}$$</p> <p>Now, when $x$ is small, $x^4$ and $x^6$ are "very small." So the functions are dominated by $x^2$ near $x=0$. Indeed, if you graphed $y=x^2$ alongside, you'd see that both of your functions are close to but smaller than $y=x^2$.</p> <p>Add in $y=\sin^2(x)$, and it will be similar, too.</p> <p>When you get to calculus, this will be explored by studying "power series" for functions.</p> <p>We also see from this approximation that since $\frac{x^6}{6}&lt;\frac{x^4}{6}$ when $x$ is near enough to zero (say $0&lt;|x|&lt;1$), we have $\sin(x^2)&gt;x\sin(x)$.</p>
1,547,972
<p>Why is $\sin(x^2)$ similar to $x \sin(x)$? </p> <p>I graphed it using desmos, and when I look at it, the behavior as $x$ approaches zero seems to be to oscillate less. </p> <p>Yet as $x$ approaches infinity and negative infinity, $\sin(x^2)$ oscillates between $y=1$ and $y=-1$ while $x \sin(x)$ oscillates between $y=x$ and $y=-x$.</p> <p>I was wondering why these functions are so similar yet so different. I'm in 10th grade and I'm currently learning precalculus, so if answers could be targeted to a precalculus level that would be great.</p>
MKBG
293,645
<p>Try finding $ \lim_{x \rightarrow 0} \frac{\sin(x^2)}{x \sin(x)}$; you will see it is finite, thus they behave similarly near $0$, which is important for approximations near $0$. Although $\sin(x^2)$ is bounded and $x \sin(x)$ is not, both are even functions with a local minimum at $x=0$.</p>
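<p>The limit in question can be computed symbolically, and a few numeric values show how the two functions hug $x^2$ near $0$ (a quick check with SymPy, not a proof):</p>

```python
import math
from sympy import symbols, sin, limit

t = symbols('t')
assert limit(sin(t**2) / (t * sin(t)), t, 0) == 1   # same behavior near 0

# Numerically, x*sin(x) < sin(x^2) < x^2 for small nonzero x:
for x in (0.5, 0.2, 0.1):
    assert x * math.sin(x) < math.sin(x * x) < x * x
```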
905,672
<p>Let $S$ be a subset of a group $G$ that contains the identity element $1$ and such that the left cosets $aS$, with $a$ in $G$, partition $G$. Prove that $S$ is a subgroup of $G$.</p> <p>My try:</p> <p>For $h$ in $S$, if I show that $hS=S$, then that would imply that $S$ is closed. </p> <p>Now $hS$ is one of the cells of the partition of $G$, and it contains $h$ since $1$ is in $S$. Also $h$ is in $S$. Hence $h \in S\cap hS$. Moreover, both $S=1S$ and $hS$ are cells of the partition, and two cells are either disjoint or equal. Hence $S=hS$, which says that $S$ is closed. </p> <p>Does this seem alright?</p> <p>Thanks!!</p>
Hamou
165,000
<p>Let $x,y\in S$, and consider the sets $x^{-1}S$ and $y^{-1}S$. Now remark that $1\in x^{-1}S\cap y^{-1}S$ (since $x,y\in S$ and $1=x^{-1}x=y^{-1}y$). As $\{aS\}_{a\in G}$ is a partition of $G$, it follows that $x^{-1}S=y^{-1}S$. Since $x^{-1}\in x^{-1}S$, we get $x^{-1}\in y^{-1}S$: there exists $s\in S$ such that $x^{-1}=y^{-1}s$, so $yx^{-1}=s\in S$. It follows that $S$ is a subgroup of $G$.</p>
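<p>The statement can also be verified exhaustively in a small group. The sketch below runs over every subset of $S_3$ containing the identity, keeps those whose left cosets partition the group, and checks that each survivor is closed under composition; exactly the $6$ subgroups of $S_3$ survive (an illustration, not a substitute for the proof):</p>

```python
from itertools import combinations, permutations

G = list(permutations(range(3)))          # the symmetric group S_3
e = (0, 1, 2)
comp = lambda p, q: tuple(p[q[i]] for i in range(3))

def cosets_partition(S):
    cells = {frozenset(comp(a, s) for s in S) for a in G}
    # every a lies in aS (since 1 is in S), so the cells cover G;
    # they are pairwise disjoint iff their sizes add up to |G|
    return sum(len(c) for c in cells) == len(G)

def closed(S):
    return all(comp(a, b) in S for a in S for b in S)

others = [g for g in G if g != e]
good = [set(rest) | {e}
        for k in range(len(others) + 1)
        for rest in combinations(others, k)
        if cosets_partition(set(rest) | {e})]

assert all(closed(S) for S in good)   # each such S is a subgroup
print(len(good))                      # 6, exactly the subgroups of S_3
```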
2,494,069
<p>All the calculators I've tried overflow at 2^1024. I'm not sure how to go about calculating this number and potentially even larger ones.</p>
Peter
82,961
<p>Calculate $$2048\cdot \ln(2)$$ which is equal to $\ln(2^{2048})$</p>
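<p>To make this concrete: $2048\cdot\log_{10}2\approx 616.51$, so $2^{2048}$ has $617$ digits and its leading digits are $10^{0.51}\approx 3.23$. In a language with arbitrary-precision integers the number can also be computed exactly and compared against the logarithm; a quick check in Python:</p>

```python
import math

exact = 2 ** 2048                                   # Python ints never overflow
mantissa, digits = math.modf(2048 * math.log10(2))  # 0.509..., 616.0

print(int(digits) + 1, len(str(exact)))  # 617 617
print(round(10 ** mantissa, 2))          # 3.23 -> the leading digits of 2**2048
```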
268,635
<p>Consider the triangle formed by randomly distributing three points on a circle. What is the probability that the center of the circle is contained within the triangle?</p>
Brian M. Scott
12,042
<p>The probability is $\frac14$. The argument given in <a href="https://math.stackexchange.com/a/172368/12042">this answer</a> to the corresponding question for $n$-gons applies equally well to circles. <a href="https://math.stackexchange.com/a/1407/12042">This answer</a> to the generalization of this question to arbitrary dimensions may also be of interest: the probability that the convex hull of $n+2$ points in $S^n$ (the unit sphere in $\Bbb R^{n+1}$) contains the origin is $2^{-n-1}$.</p>
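<p>A Monte Carlo simulation agrees with $\frac14$ (simulation only, not a proof); it uses the fact that the center lies inside the triangle iff every arc between consecutive points is less than a semicircle:</p>

```python
import math
import random

random.seed(0)
trials, inside = 200_000, 0
for _ in range(trials):
    a = sorted(random.uniform(0, 2 * math.pi) for _ in range(3))
    arcs = (a[1] - a[0], a[2] - a[1], 2 * math.pi - (a[2] - a[0]))
    if max(arcs) < math.pi:   # no arc contains a semicircle => center inside
        inside += 1

print(inside / trials)        # close to 0.25
```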
3,775,554
<p><strong>Identity for a set X:</strong></p> <p><em>The set X has an identity under the operation if there is an element <strong>j</strong> in set X such that <strong>j * a = a * j = a</strong> for all elements <strong>a</strong> in set X.</em></p> <p>According to my college book, the counting numbers don't have an identity.</p> <p>But for the operation * there is the number 1 (= j) such that a * 1 = 1 * a = a for all elements a in set X.</p> <p><strong>Why do natural numbers have no identity?</strong></p>
Klaus
635,596
<p>The operation is crucial here. <span class="math-container">$(\mathbb{N},\cdot)$</span> does have an identity, namely <span class="math-container">$1$</span>, as you observed. For <span class="math-container">$(\mathbb{N},+)$</span> the identity would be <span class="math-container">$0$</span>, and so it depends on your definition of <span class="math-container">$\mathbb{N}$</span> whether you include <span class="math-container">$0$</span> or not. I assume that your book uses the definition <span class="math-container">$\mathbb{N} = \{1,2,3,\ldots\}$</span>, and so no number <span class="math-container">$j \in \mathbb{N}$</span> satisfies <span class="math-container">$j + a = a$</span> for all <span class="math-container">$a \in \mathbb{N}$</span>.</p>
11,724
<p>What is the compelling need for introducing a theory of $p$-adic integration?</p> <p>Do the existing theories of $p$-adic integration use some kind of analogue of Lebesgue measure? That is, do we put a Lebesgue-type measure on $p$-adic spaces and just integrate real- or complex-valued functions on them, or is something more possible, like integrating $p$-adic valued functions on $p$-adic spaces? What is the machinery used?</p> <p>Then again, does integration on spaces like $\mathbb C_p$ give something more than the usual integration in real analysis? I mean, the integration of complex-valued functions of complex variables, or more precisely of holomorphic functions, is a much more interesting topic than measure theory. Is a similar analogue true in the $p$-adic case?</p> <p>I have also seen it mentioned that Grothendieck's cohomology theories, like etale cohomology, crystalline cohomology, etc., fit into such $p$-adic integration theories. What could possibly be the connection?</p>
Matt E
221
<p>I would normally take $p$-adic integration to mean "integration of $p$-adic valued functions" or "integration of differential forms with some kind of $p$-adic valued functions as coefficients", where the integration is also taking place over some kind of $p$-adic space or manifold.</p> <p>The reasons for wanting such theories are various. One reason is indicated in George S.'s answer: there are <em>known</em> analogues of classical Hodge theory, known as $p$-adic Hodge theory, whose proofs however are not analytic, but rather proceed via arithmetic geometry. One would like to have more analytic ways of thinking about them, and this is one goal of Robert Coleman's theory. (In a recent volume of Asterisque, namely vol. 331, Coleman and Iovita have an article, <a href="http://math.berkeley.edu/~coleman/Hidden/HiddenStr.pdf" rel="nofollow noreferrer">"Hidden structures on semistable curves"</a>, related to this problem.) (Note also that $p$-adic Hodge theory relates $p$-adic etale cohomology to crystalline cohomology, which gives one answer to your question of how $p$-adic integration might be related to those topics.)</p> <p>Another reason is that many integral formulas (involving usual archimedean integrals) appear in the theory of classical $L$-functions attached to automorphic forms, and one would like, at least in certain contexts, to be able to write down $p$-adic analogues so as to construct $p$-adic $L$-functions.</p> <p>As for what machinery is used: in the theory of $p$-adic $L$-functions and related contexts in Iwasawa theory, often nothing more is used than basic computations with Riemann sums. In the material related to $p$-adic Hodge theory, much more substantial theoretical foundations are used: tools from arithmetic geometry, rigid analysis, possibly Berkovich spaces, and related topics. </p>
4,170,150
<p>I have the following two definitions:</p> <p>Let <span class="math-container">$S$</span> be a subset of a Banach space <span class="math-container">$X$</span>.</p> <ol> <li><p>We say that <span class="math-container">$S$</span> is <em>weakly bounded</em> if for every <span class="math-container">$l\in X^*$</span>, the dual space of <span class="math-container">$X$</span>, we have <span class="math-container">$\sup\{|l(s)|:s\in S\}&lt;\infty$</span>.</p> </li> <li><p>We say that <span class="math-container">$S$</span> is <em>strongly bounded</em> if <span class="math-container">$\sup\{||s||:s\in S\}&lt;\infty$</span>.</p> </li> </ol> <p>I need to prove that these definitions are equivalent. I have already proved that 2 implies 1, but I'm stuck on the other direction. I suspect that I have to use the Hahn-Banach theorem, or maybe one of its corollaries, but I don't know how to proceed.</p> <p>Thanks to all of you.</p>
alepopoulo110
351,240
<p>Assume that <span class="math-container">$1$</span> holds. For <span class="math-container">$s\in S$</span> define <span class="math-container">$\phi_s:X^*\to\mathbb{C}$</span> by <span class="math-container">$\phi_s(f)=f(s)$</span>. Then <span class="math-container">$\phi_s$</span> is a bounded linear functional, since <span class="math-container">$|\phi_s(f)|=|f(s)|\leq\|s\|\cdot\|f\|$</span>. Now since <span class="math-container">$$\sup_{s\in S}|\phi_s(f)|&lt;\infty$$</span> for all <span class="math-container">$f\in X^*$</span> by our assumption, we can apply the principle of uniform boundedness and conclude that <span class="math-container">$\sup_{s\in S}\|\phi_s\|&lt;\infty$</span>.</p> <p>If you show that <span class="math-container">$\|\phi_s\|=\|s\|$</span>, you are done (note that it is immediate that <span class="math-container">$\|\phi_s\|\leq\|s\|$</span>). The other inequality follows from Hahn-Banach. Can you do that?</p>
2,459,493
<p>Here's the <span class="math-container">$C^1$</span> norm : </p> <p><span class="math-container">$|| f || = \sup | f | + \sup | f '|$</span></p> <p>where the supremum is taken on <span class="math-container">$[a, b]$</span>. </p> <p>Please, justify your answer (proofs or counterexamples are needed). </p>
Donald Splutterwit
404,247
<p>Hint: How many sequences are there in total?</p> <p>How many sequences are there that do not contain both $a$ and $b$?</p> <p>So ... </p> <p>Alternatively:</p> <p>The surefire way to get the answer is to list the $10$ possibilities and calculate their multiplicities. ($S$ = sequence up to permutation, $M$ = multiplicity, $*= c,d \text{ or } e$)</p> <p>\begin{array}{l|l} S &amp; M \\ \hline aaa &amp; \color{blue}{1} \\ aab &amp; \color{blue}{3} \\ abb &amp; \color{blue}{3} \\ bbb &amp; \color{blue}{1} \\ aa* &amp; \color{blue}{9} \\ ab* &amp; \color{blue}{18} \\ bb* &amp; \color{blue}{9} \\ a** &amp; \color{blue}{27} \\ b** &amp; \color{blue}{27} \\ *** &amp; 27 \\ \hline &amp; 125 \end{array}</p>
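<p>Assuming (as the table suggests) that the count in question is of length-$3$ sequences over $\{a,b,c,d,e\}$ that contain both $a$ and $b$, the hint and the table can both be brute-forced:</p>

```python
from itertools import product

seqs = list(product('abcde', repeat=3))
both = [s for s in seqs if 'a' in s and 'b' in s]

print(len(seqs))               # 125 = 5^3 sequences in total
print(len(both))               # 24 = 3 + 3 + 18 (rows aab, abb, ab*)
print(5**3 - 2 * 4**3 + 3**3)  # 24 again, by inclusion-exclusion
```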
1,302,891
<p>I would like to ask you a question about the following problem.</p> <p>Let $f:(a,b)\rightarrow \mathbb{R}$ be non-decreasing, i.e. $f(x_1)\leq f(x_2)$ whenever $x_1\leq x_2$, and let $c \in (a,b)$. Show that $\lim_{x \ \rightarrow \ c-}{f(x)}$ and $\lim_{x \ \rightarrow \ c+}{f(x)}$ both exist.</p> <p>If $f$ is continuous at $c$ then obviously both limits exist and: $$\lim_{x \ \rightarrow \ c-}{f(x)}=\lim_{x \ \rightarrow \ c+}{f(x)}=f(c)$$ But how would we approach this if $f$ is not continuous at $c$? Thank you guys!</p>
gnometorule
21,386
<p>Restrict attention to a closed subinterval $[a',b'] \subset (a,b)$ that contains $c$ in its interior; on it, the function's amplitude is bounded by $M:= f(b')-f(a') \lt \infty$. Look at the right-side limit at a point of discontinuity $c$ (existence of the left-side limit follows in the same way). </p> <p>Fix $M \gt \epsilon \gt 0$. Pick a point $x_1$ in $(c,b')$. If $y_1 := f(x_1) - f(c) \lt \epsilon$, you are done (monotonicity). If not, pick a second point $x_2:= c + \frac{1}{2}(x_1-c)$ in $(c, x_1)$. Calculate $y_2$ as before, and check if it is small enough. Keep this process going until a $y_i \lt \epsilon$, at which point monotonicity finishes the proof. </p> <p>If this never happens, we have a sequence $\{x_i\}$ converging from the right to $c$ where all $y_i$ defined as above are greater than $\epsilon$. If the sequence $\{f(x_i)\}$ does not converge, it is not Cauchy. This means that there exists an $\epsilon \gt 0$, such that for each $d \gt 0$ there is a pair $m, n \gt d$ s.t. $$f(x_m) - f(x_n) = |f(x_m) - f(x_n)| \ge \epsilon$$, where we assume wlog $m \lt n$ to fit our construction, and use monotonicity. Note that this is not the same $\epsilon$ as above; it's chosen for notational simplicity. Call this first $n$, $N$. </p> <p>But then looking at the sequence only for $k \gt n$, the same applies: we find another pair with indices both larger than $m, n$ whose difference is larger than the <em>same</em> $\epsilon$. Now, for the sum $S$ we have (using 1-4 as indices for simplicity) $$ 2 \epsilon \le S = f(x_1) -f(x_2)+ f(x_3)- f(x_4) \le f(x_1) - f(x_4)$$ ($f(x_3) \le f(x_2)$ as $x_3 \le x_2$), so the sum "telescopes" in this way (this is where monotonicity matters again). Iterate this process, and $S$ goes to $\infty$. Hence, $$f(x_N) -f(c) = f(x_N) - f(x_k) + f(x_k) - f(c) \ge f(x_N) - f(x_k) + \epsilon \rightarrow \infty$$ which is a contradiction, since the amplitude is bounded by $M$. </p> <p>Hence, the sequence $\{f(x_i)\}$ converges, which was to be shown (this is enough by monotonicity again, as all other sequences are sandwiched by this one). </p>
4,618,975
<p>I've read the definition of the ring homomorphism:<br /> <strong>Definition</strong>. Let <span class="math-container">$R$</span> and <span class="math-container">$S$</span> be rings. A ring homomorphism is a function <span class="math-container">$f : R → S$</span> such that:<br /> (a) For all <span class="math-container">$x, y ∈ R, f(x + y) = f(x) + f(y).$</span><br /> (b) For all <span class="math-container">$x, y ∈ R, f(xy) = f(x)f(y)$</span>.<br /> (c) <span class="math-container">$f(1) = 1.$</span></p> <hr /> <p>I want to see some examples of ring homomorphism <span class="math-container">$f : R → S$</span>, where <span class="math-container">$R$</span> is arbitrary. So, I consider <span class="math-container">$S=R/I$</span> , where <span class="math-container">$I$</span> is an ideal of <span class="math-container">$R$</span>. I can also imbed <span class="math-container">$R$</span> in a <em>Cartesian product</em>.</p> <hr /> <p><strong>What else?</strong> Are there some <em>famous</em> ring homomorphisms <span class="math-container">$f : R \to S$</span> for <span class="math-container">$R$</span> arbitrary?</p> <hr /> <p>Thank you.</p>
user 1
133,030
<p><span class="math-container">$f : R \to R[x]$</span>, where <span class="math-container">$x$</span> is an indeterminate.<br /> <span class="math-container">$f : R \to S^{-1} R$</span>, where <span class="math-container">$S$</span> is a multiplicatively closed set (localization).</p>
3,956,392
<p>So the question is as follows:</p> <blockquote> <p>An urn contains m red balls and n blue balls. Two balls are drawn uniformly at random from the urn, without replacement.</p> </blockquote> <blockquote> <p>(a) What is the probability that the first ball drawn is red?</p> </blockquote> <blockquote> <p>(b) What is the probability that the second ball drawn is red?</p> </blockquote> <p>The answer to (a) quite clearly works out to be <span class="math-container">$\frac{m}{(m+n)}$</span>, but the answer to (b) turns out to be the same, and my tutor said this is intuitive by a symmetry argument.</p> <p>i.e. that <span class="math-container">$P(A_1)$</span> = <span class="math-container">$P(A_2)$</span> where <span class="math-container">$A_i$</span> is the event that a red ball is drawn on the ith turn. However I am struggling to see how this is evident, can anyone explain this?</p>
herb steinberg
501,262
<p>The second-ball probability is <span class="math-container">$\frac{m}{m+n}\cdot\frac{m-1}{m+n-1}+\frac{n}{n+m}\cdot\frac{m}{m+n-1}=\frac{m}{n+m}.$</span> This is the same as the first, which, by symmetry, is what it should be.</p>
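The two-case computation above is easy to confirm by brute-force enumeration. A quick sketch in Python (the helper name is my own, not from the answer):

```python
from fractions import Fraction
from itertools import permutations

def p_red_on_draw(m, n, draw):
    """Exact probability that the given draw (1 or 2) is red,
    found by enumerating all ordered pairs of distinct balls."""
    balls = ["R"] * m + ["B"] * n
    outcomes = list(permutations(range(m + n), 2))  # ordered, without replacement
    hits = sum(1 for o in outcomes if balls[o[draw - 1]] == "R")
    return Fraction(hits, len(outcomes))

for m, n in [(3, 5), (1, 1), (7, 2)]:
    assert p_red_on_draw(m, n, 1) == Fraction(m, m + n)
    assert p_red_on_draw(m, n, 2) == Fraction(m, m + n)
```

Both draws come out to exactly $m/(m+n)$, matching the symmetry argument.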
2,555,811
<p>Is there a name for the following type of ordering on some set $S=\{a,b,c\}$ that includes only $&gt;$ and $=$, for example: </p> <p>$$a&gt;b&gt;c$$ $$a&gt;b=c$$ $$a=b&gt;c$$ $$a=b=c$$</p> <p>Is there some name for these orderings? </p> <hr> <p>I know that all these <em>satisfy</em> a <em>total preorder</em> on $S$, since a preorder on $S$ is simply one in which the elements are ordered by the $\geq$ relation. But is there a name for these particular orderings?</p> <p>Are my examples all instances of total orderings, since all members are comparable?</p> <p>Is it okay to call these simply various "orderings" on $S$?</p>
Guy Fsone
385,707
<p>Let $$f(h)= \ln\left(\frac{1}{n}\sum\limits_{k=1}^{n} k^{h}\right)\implies f'(h) =\frac{\left(\frac{1}{n}\sum\limits_{k=1}^{n} k^{h}\ln k \right)}{\left(\frac{1}{n}\sum\limits_{k=1}^{n} k^{h} \right)}$$ $$f(0)= 0 \quad\text{and}\quad f'(0)=\left(\frac{1}{n}\sum\limits_{k=1}^{n} \ln k\right)=\color{blue}{ \frac{1}{n}\ln \left(n!\right) } $$ </p> <p>Let $x=1/h$ then we have</p> <p>$$\lim_{x\to \infty}\left(\frac{1}{n}\sum_{k=1}^{n} k^{1/x}\right)^{nx} =\lim_{h\to 0}\exp\left(\frac{n}{h}\ln\left(\frac{1}{n}\sum\limits_{k=1}^{n} k^{h}\right)\right)\\ =\lim_{h\to 0}\exp\left(n\frac{f(h)}{h}\right) =\color{red}{\exp\left(nf'(0)\right) }=\color{red}{n! } $$</p>
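The limit can be sanity-checked numerically: for a large but finite $x$, the expression should already sit very close to $n!$. A Python sketch (the helper name is mine):

```python
import math

def lhs(n, x):
    # ((1/n) * sum_{k=1}^n k^(1/x))^(n x), which should approach n! as x -> infinity
    return (sum(k ** (1.0 / x) for k in range(1, n + 1)) / n) ** (n * x)

for n in [2, 3, 4]:
    approx = lhs(n, 1e7)
    # relative error at x = 1e7 should be tiny
    assert abs(approx - math.factorial(n)) / math.factorial(n) < 1e-3
```

For example, `lhs(3, 1e7)` is within a fraction of a percent of $3! = 6$.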
83,764
<p>I have come across an interesting property of a dynamical system, being transformed by a map, but I haven't been able to figure out <em>why</em> this is happening (for quite some time now actually). Any help is greatly appreciated. Here goes then:</p> <p>Let M be an n-D manifold and $\dot x=F(x)u_1, F\in \mathbb{R}^{n\times m}, x \in \mathbb{R}^{n}, u_1 \in \mathbb{R}^{m}$ be a control system evolving on M (F is the system matrix i.e. state transition function, and $u_1$ is the input of the system. For all practical purposes $u_1$ is an m-vector from an input space $\mathbb{R}^{m}$). Now let $x=\Psi (y)$ be a coordinate change on M and $u_2=M(y)u_1$ a transformation of the input $u_1$ of the first system. By applying these maps to the system, you get the new equation $\dot y=F(y)u_2$. As you may notice, F is <em>the same</em> in both systems. The problem is <em>why is this happening</em> i.e. for what systems and transformations does this property hold?</p> <h2>A little more elaboration</h2> <p>It is useful to investigate the maps more closely. In the general case one has</p> <p>$\dot x=D\Psi \dot y$<br> $\dot x= F(x)u_1$</p> <p>thus<br> $\dot y=D\Psi ^{-1} F(x)u_1$, (1)</p> <p>where $D\Psi$ is the Jacobian matrix of $\Psi$. In our case it actually turns out that: </p> <p>$\dot y=F(y)M(y)u_1$. (2)</p> <p>You can then consider that $u_2=M(y)u_1$ and get the final system,</p> <p>$\dot y=F(y)u_2$,</p> <p>that is, <em>the same</em> system. By (1),(2) you get,</p> <p>$D\Psi ^{-1} F(x)u_1=F(y)M(y)u_1 \Rightarrow (D\Psi ^{-1} F(x)-F(y)M(y))u_1=0$. </p> <p>Since this holds for every $u_1$, you have the condition,</p> <p>$F(\Psi (y))=D\Psi F(y)M(y)$</p> <p>So, what does this condition imply? What systems F and maps $\Psi$ satisfy this property (of system invariance)? I should note that F is nonlinear and a case study where this actually happens is the kinematic model of a unicycle robot i.e. <a href="http://planning.cs.uiuc.edu/node660.html" rel="noreferrer">this</a>. 
Any ideas?</p>
Issa
32,602
<p>It is not surprising that driftless systems have symmetries. In our previous work we considered only nonlinear systems with drift and classified the symmetries accordingly. In the case of a driftless system $\dot x=F(x)u$ you can interpret $F$ (that's what it is) as a distribution of vector fields $f_1, \dots, f_m$ (the column vectors of the matrix), and whenever such a distribution is involutive ($[f_i, f_j]=\lambda_1(x)f_1+\cdots+\lambda_m(x)f_m$), the system necessarily admits nontrivial symmetries (this is due to the Frobenius Theorem). That's the case you have here with the unicycle, with $f_1=(\cos \theta, \sin \theta, 0), f_2=(0,0, 1)$. I can write this out more extensively, but I hope you get the idea from here.</p> <p>Issa</p>
602,248
<p>This is the system of equations: $$\sqrt { x } +y=7$$ $$\sqrt { y } +x=11$$</p> <p>It's pretty visible that the solution is $(x,y)=(9,4)$.</p> <p>For this, I put $x={ p }^{ 2 }$ and $y={ q }^{ 2 }$. Then I subtracted one equation from the other such that I got $4$ on the RHS and factorized the LHS to get two factors in terms of $p$ and $q$.</p> <p>Then $4$ can be represented as $2\times2$, $4\times1$ or $1\times4$. Comparing the two factors on both sides, I got the solution.</p> <p>As you can see, the major drawback here is that I assumed this system has only integral solutions and then went further. Is there any way I can prove that this system indeed has only integral solutions or is there any other elegant way to solve this question?</p>
rajb245
72,919
<p>If you move things around and square the equations, you have the following two starting equations. $$x=(7-y)^2$$ $$y=(11-x)^2$$</p> <p>If you plug one of these into the other and factor out a term, you get $$ (x-9)(x^3-35x^2+397x-1444)=0. $$</p> <p>This is a quartic equation, with four solutions in general. There is a linear term, $x-9$, and the remaining cubic term. If $x=9$, the linear term goes to zero and the equation is solved. If you plug $x=9$ back into the original equations, you get $y=4$, so this represents the original solution you found. To find the other solutions, we only have to focus on the roots of the remaining cubic term. So we're left with finding the roots of this equation</p> <p>$$ x^3-35x^2+397x-1444=0. $$</p> <p>All cubics have three roots, if you count complex roots and double roots. The possibilities go like this for any cubic:</p> <p>There are three real, distinct roots </p> <p><img src="https://i.stack.imgur.com/1shMc.png" alt="Three roots"></p> <p>There are three real roots, but two of them are merged into a double root</p> <p><img src="https://i.stack.imgur.com/gYo53.png" alt="enter image description here"></p> <p>There is one real root and a pair of complex conjugate roots</p> <p><img src="https://i.stack.imgur.com/TJc0H.png" alt="enter image description here"></p> <p>You can follow along with <a href="https://en.wikipedia.org/wiki/Cubic_function#Roots_of_a_cubic_function" rel="nofollow noreferrer">this procedure from Wikipedia</a> to calculate the exact roots of the equation. Or you can use a computer solver for the roots that does a similar procedure internally:</p> <p><img src="https://i.stack.imgur.com/UBp66.gif" alt="enter image description here"></p> <p>This shows you that this cubic is in the first family with three real, distinct roots, with $x \approx 7.8687, 12.848, 14.283$ and corresponding $y\approx 9.80504, 3.4151, 10.7781$. 
</p> <p><strong>Edit</strong></p> <p>As a user pointed out, the values we just found should be plugged into the original equations to see if they actually solve the equation, or if they are extraneous. It turns out that these three solutions are indeed extraneous, meaning that they are spurious results of squaring the original equations. This leaves the only solution we've found as $(9,4)$. It is necessary that the solutions are a subset of the four presented, because the system is fundamentally fourth order (two quadratic equations), so it can have at most four solutions. The only one of the necessary solutions that is sufficient is $(9,4)$, so this is the unique solution. It consists of two integers, so we have proof that this equation only has integer solutions.</p>
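The extraneousness of the three cubic roots can be checked numerically against the approximate values quoted above. A Python sketch (helper name mine):

```python
from math import sqrt, isclose

# candidate x-values: the exact root 9 plus the three approximate cubic roots
candidates = [9.0, 7.8687, 12.848, 14.283]

def solves_original(x):
    """Check whether (x, y) with y = (11-x)^2 satisfies BOTH original equations."""
    y = (11 - x) ** 2  # forced by squaring the second equation
    return (isclose(sqrt(x) + y, 7, abs_tol=1e-2)
            and isclose(sqrt(y) + x, 11, abs_tol=1e-2))

# only x = 9 (hence (9, 4)) survives; the cubic's roots are all extraneous
assert [solves_original(x) for x in candidates] == [True, False, False, False]
```

Note that $x\approx 12.848$ nearly satisfies the first equation alone but fails the second, which is exactly the sign information lost in squaring.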
3,173,349
<p>The question:</p> <p>Suppose $G$ is a finite group of isometries in the plane. Suppose $p$ is fixed by $G$. Prove that $G$ is conjugate into $O(2)$.</p> <p>What exactly does "conjugate into $O(2)$" mean and how would I proceed?</p>
AnalysisStudent0414
97,327
<p>By reducing mod <span class="math-container">$2$</span> and <span class="math-container">$3$</span> you can show that <span class="math-container">$2$</span> and <span class="math-container">$3$</span> divide <span class="math-container">$m$</span>, hence <span class="math-container">$m^2$</span> has actually to be divisible by <span class="math-container">$36$</span>. But this is not really helpful towards your result.</p> <p>Let <span class="math-container">$p&gt;2$</span> be a prime that divides <span class="math-container">$n$</span>. Then <span class="math-container">$p$</span> does not divide <span class="math-container">$(n+1)$</span> and <span class="math-container">$(n+2)$</span>. If <span class="math-container">$n=p^k \cdot q$</span> with <span class="math-container">$(q,p)=1$</span>, then <span class="math-container">$k$</span> has to be even (since every prime appears with even power in the decomposition of <span class="math-container">$m^2$</span>). We didn't make any assumption on <span class="math-container">$p$</span> except <span class="math-container">$p&gt;2$</span>, so every prime except possibly <span class="math-container">$2$</span> appears with even multiplicity! So <span class="math-container">$n=s^2$</span> or <span class="math-container">$n=2\cdot s^2$</span> for some <span class="math-container">$s \in \mathbb{N}$</span>. </p> <p>We can make the same argument for <span class="math-container">$n+1$</span> and <span class="math-container">$n+2$</span>, so there are <span class="math-container">$t$</span> and <span class="math-container">$u$</span> such that <span class="math-container">$n+1=t^2$</span> or <span class="math-container">$2t^2$</span> and <span class="math-container">$n+2=u^2$</span> or <span class="math-container">$2u^2$</span>. 
Now, note that the difference between any two distinct squares of positive integers is always at least <span class="math-container">$3$</span>.</p> <p>If <span class="math-container">$n$</span> is even, then <span class="math-container">$n+1=t^2$</span>. But then <span class="math-container">$n$</span> cannot be a square, so <span class="math-container">$n=2s^2$</span>, and <span class="math-container">$n+2=2u^2$</span>. But then <span class="math-container">$n+2-n = 2u^2 - 2s^2$</span> so <span class="math-container">$1 = u^2 - s^2$</span>, another contradiction. </p> <p>If <span class="math-container">$n$</span> is odd then <span class="math-container">$n=s^2$</span> and <span class="math-container">$n+2=u^2$</span>, a contradiction.</p>
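The question this answer responds to is not quoted here, but from the argument it evidently concerns showing that $n(n+1)(n+2)=m^2$ has no solutions in positive integers. Assuming that reading, a brute-force search supports the conclusion (Python; helper name mine):

```python
from math import isqrt

def is_square(x):
    """True iff x is a perfect square (exact integer check)."""
    return isqrt(x) ** 2 == x

# brute-force support: n(n+1)(n+2) is never a perfect square for small n
for n in range(1, 100_000):
    assert not is_square(n * (n + 1) * (n + 2))
```

The search is of course not a proof; the case analysis above is what rules solutions out in general.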
4,590,713
<p>If I know the McLaurin series of <span class="math-container">$$f(x)=\sum_{n=0}^\infty a_n x^n,$$</span> can I say something about the McLaurin series of <span class="math-container">$$e^{f(x)}=\sum_{n=0}^\infty b_n x^n?$$</span> In other words, what is the relation between <span class="math-container">$a_n$</span> and <span class="math-container">$b_n$</span>?</p>
KCd
619
<p>Let's aspire to do better than an <span class="math-container">$\Omega$</span>-estimate by getting an asymptotic estimate.</p> <p>Here is a very useful fact for estimating partial sums of sequences: if <span class="math-container">$\{a_n\}$</span> and <span class="math-container">$\{b_n\}$</span> are sequences of positive numbers with <span class="math-container">$a_n \sim b_n$</span> and either sequence of partial sums <span class="math-container">$\sum_{k \leq n} a_k$</span> or <span class="math-container">$\sum_{k \leq n} b_k$</span> tends to <span class="math-container">$\infty$</span>, then they both tend to <span class="math-container">$\infty$</span> and are asymptotic: <span class="math-container">$\sum_{k \leq n} a_n \sim \sum_{k \leq n} b_k$</span> as <span class="math-container">$n \to \infty$</span>. Prove that.</p> <p>Next, check that <span class="math-container">$\int_n^{n+1} \sqrt{x\log x}\,dx \sim \sqrt{n\log n}$</span> as <span class="math-container">$n \to \infty$</span>, so we can apply the above result with <span class="math-container">$a_n = \sqrt{n\log n}$</span> and <span class="math-container">$b_n = \int_n^{n+1}\sqrt{x\log x}\,dx$</span> to get <span class="math-container">$$ \sum_{k=1}^n \sqrt{k\log k} \sim \sum_{k=1}^n \int_k^{k+1} \sqrt{x\log x}\,dx = \int_1^{n+1}\sqrt{x\log x}\,dx. $$</span></p> <p>Since <span class="math-container">$\log x$</span> grows very slowly, we expect heuristically that we can treat <span class="math-container">$\log x$</span> on a large interval <span class="math-container">$[1,y]$</span> as the right endpoint value <span class="math-container">$\log y$</span> for the purpose of making asymptotic estimates, which suggests that <span class="math-container">$$ \int_1^y \sqrt{x\log x}\,dx \stackrel{?}{\sim} \sqrt{\log y}\int_1^y\sqrt{x}\,dx = \frac{2}{3}y^{3/2}\sqrt{\log y}. 
$$</span> Having guessed an asymptotic estimate for <span class="math-container">$\int_1^y \sqrt{x\log x}\,dx$</span>, meaning we guess <span class="math-container">$$ \lim_{y \to \infty} \frac{\int_1^y \sqrt{x\log x}\,dx}{(2/3)y^{3/2}\sqrt{\log y}} \stackrel{?}{=} 1, $$</span> you can <em>prove</em> it is correct by using L'Hospital's rule. Do that.</p> <p>Returning to the sum, where <span class="math-container">$n$</span> runs over positive integers, we obtain <span class="math-container">$$ \sum_{k=1}^n \sqrt{k\log k} \sim \int_1^{n+1}\sqrt{x\log x}\,dx \sim \frac{2}{3}(n+1)^{3/2}\sqrt{\log (n+1)} \sim \frac{2}{3}n^{3/2}\sqrt{\log n}. $$</span></p> <p><strong>Example</strong>. The ratio of <span class="math-container">$\sum_{k=1}^n \sqrt{k\log k}$</span> to <span class="math-container">$(2/3)n^{3/2}\sqrt{\log n}$</span> at <span class="math-container">$n = 10^4$</span>, <span class="math-container">$10^5$</span>, <span class="math-container">$10^6$</span>, <span class="math-container">$10^7$</span>, and <span class="math-container">$10^8$</span> is <span class="math-container">$.962$</span>, <span class="math-container">$.970$</span>, <span class="math-container">$.975$</span>, <span class="math-container">$.978$</span>, and <span class="math-container">$.981$</span>. So the convergence is slow, but steady.</p> <p>For sums of the kind you asked about, it is easy in practice to get asymptotic estimates by the method outlined above, so you should never be satisfied with <span class="math-container">$\Omega$</span>-estimates in those kinds of problems.</p> <p><strong>Remark</strong>. It is not always that case that treating <span class="math-container">$\log x$</span> on a large interval <span class="math-container">$[1,y]$</span> as the &quot;constant&quot; <span class="math-container">$\log y$</span> will lead to the right asymptotic estimate on an integral. For example, consider <span class="math-container">$$ \int_2^y \frac{dx}{x\sqrt{\log x}}. 
$$</span> Heuristically we expect this grows like <span class="math-container">$$ \frac{1}{\sqrt{\log y}}\int_2^y \frac{dx}{x} = \frac{1}{\sqrt{\log y}}(\log y - \log 2) \sim \sqrt{\log y}, $$</span> but the integrand <span class="math-container">$1/(x\sqrt{\log x})$</span> has antiderivative <span class="math-container">$2\sqrt{\log x}$</span>, so <span class="math-container">$$ \int_2^y \frac{dx}{x\sqrt{\log x}} = 2\sqrt{\log y} - 2\sqrt{\log 2} \sim 2\sqrt{\log y} $$</span> and the heuristic guess was off by a factor of 2. But at least we got the right order of magnitude <span class="math-container">$\sqrt{\log y}$</span>, and you can pin down the correct constant factor in the estimate (without knowing the exact antiderivative of the integrand) by computing <span class="math-container">$$ \lim_{y \to \infty} \frac{\int_2^y dx/(x\sqrt{\log x})}{\sqrt{\log y}} = 2 $$</span> using L'Hospital's rule.</p>
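The ratios reported in the example are straightforward to reproduce (Python; helper name mine):

```python
import math

def ratio(n):
    """Partial sum of sqrt(k log k) divided by the asymptotic (2/3) n^{3/2} sqrt(log n)."""
    s = sum(math.sqrt(k * math.log(k)) for k in range(1, n + 1))  # k=1 term is 0
    return s / ((2.0 / 3.0) * n ** 1.5 * math.sqrt(math.log(n)))

# the answer quotes .962 at n = 10^4 and .970 at n = 10^5
assert 0.95 < ratio(10**4) < 0.97
assert ratio(10**5) > ratio(10**4)  # slow but steady convergence toward 1
```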
1,969,479
<blockquote> <p>Prove that the symmetric point of the orthocenter $H$ of a triangle with respect to the midpoint of any side resides on the triangle's circumcircle. <a href="https://i.stack.imgur.com/tkh4c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tkh4c.png" alt="enter image description here"></a></p> </blockquote> <p>In the above figure, it's sufficient to prove that triangle $BHM$ is congruent to triangle $CKM$, but how? I tried to make use of the symmetric point of $H$ with respect to $BC$, which resides on the circumcircle, but failed...</p>
Jack D'Aurizio
44,121
<p>The symmetric point of $H$ with respect to the $BC$-side, say $J$, lies on the circumcircle of $ABC$ since $\widehat{BHC}+\widehat{BAC}=\pi$. If we consider the symmetric point of $J$ with respect to the perpendicular bisector of $BC$ we get $K$, hence $K$ lies on the circumcircle of $ABC$, too, since the circumcircle of $ABC$ is symmetric with respect to the perpendicular bisector of $BC$.</p> <p><a href="https://i.stack.imgur.com/LYXQq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LYXQq.png" alt="enter image description here"></a></p> <p>As an alternative, from the statement it clearly follows that $BHCK$ is a parallelogram, since its diagonals $BC$ and $HK$ intersect at their midpoints. That gives $\widehat{CKB}=\widehat{BHC}=\pi-\widehat{BAC}$, hence $K$ lies on the circumcircle of $ABC$.</p>
1,969,479
<blockquote> <p>Prove that the symmetric point of the orthocenter $H$ of a triangle with respect to the midpoint of any side resides on the triangle's circumcircle. <a href="https://i.stack.imgur.com/tkh4c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tkh4c.png" alt="enter image description here"></a></p> </blockquote> <p>In the above figure, it's sufficient to prove that triangle $BHM$ is congruent to triangle $CKM$, but how? I tried to make use of the symmetric point of $H$ with respect to $BC$, which resides on the circumcircle, but failed...</p>
Robert Z
299,698
<p>A complex number proof. Without loss of generality we may assume that <span class="math-container">$A,B,C$</span> lie on the unit circle of <span class="math-container">$\mathbb{C}$</span> centered at the origin <span class="math-container">$O$</span>, so that <span class="math-container">$O$</span> is the circumcenter and <span class="math-container">$H=A+B+C$</span>. Then for <span class="math-container">$K=-C$</span>, a point of the circumcircle, we have <span class="math-container">$$\frac{H+K}{2}=\frac{A+B}{2}=M,$$</span> i.e. <span class="math-container">$K$</span> is the symmetric point of <span class="math-container">$H$</span> with respect to <span class="math-container">$M$</span>. As a reference I suggest <a href="https://pdfs.semanticscholar.org/3426/bc3cb1da7b3788f64e723d63676a1bdff831.pdf" rel="nofollow noreferrer"><em>Bashing Geometry with Complex Numbers</em></a> by E. Chen.</p>
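The complex-number identities above are easy to spot-check numerically (Python; helper name mine): with the circumcircle as the unit circle, the reflection $K=2M-H$ always lands back on it.

```python
import cmath
import math
import random

def reflected_orthocenter_distance(A, B, C):
    """With the circumcenter at 0, H = A+B+C; reflect H in the midpoint of AB
    and return the distance of the image from the unit circle."""
    H = A + B + C
    M = (A + B) / 2
    K = 2 * M - H          # equals -C algebraically
    return abs(abs(K) - 1)

random.seed(0)
for _ in range(100):
    # random triangle inscribed in the unit circle
    A, B, C = (cmath.exp(1j * random.uniform(0, 2 * math.pi)) for _ in range(3))
    assert reflected_orthocenter_distance(A, B, C) < 1e-9
```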
1,395,093
<blockquote> <p>Prove that if $f\in L^1([0,1],\lambda)$ is not constant almost everywhere then there exists an interval so that $\int_I\!f\,\mathrm{d}\lambda\neq 0$. Here $\lambda$ is the Lebesgue measure.</p> </blockquote> <p>Since this is obviously true for continuous functions, I've been trying to use the fact that continuous functions with compact support are dense in $L^1$, but I'm not sure how to set it up.</p>
Christopher Carl Heckman
261,187
<p>To find critical points, inflection points, etc., just do what you do when you don't have any parameters. The only difference is that the points will involve the value of $a$.</p> <p>For instance, $$f'(x) = \frac{-a}{x^2} + \frac{-2a}{x^3} = \frac{-ax-2a}{x^3}.$$ Now, for which values of $x$ is this expression zero? Etc.</p>
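The function under discussion is not restated in the answer, but the displayed derivative is consistent with $f(x)=\frac{a}{x}+\frac{a}{x^2}$, in which case the only critical point is $x=-2$ for every $a\neq 0$. A quick finite-difference check under that assumption (Python; helper names mine):

```python
def f(x, a):
    # assumed form of the function, reverse-engineered from the quoted f'(x)
    return a / x + a / x**2

def fprime(x, a, h=1e-6):
    # central-difference approximation of f'
    return (f(x + h, a) - f(x - h, a)) / (2 * h)

# the critical point x = -2 does not depend on the parameter a
for a in [1.0, -3.5, 2.7]:
    assert abs(fprime(-2.0, a)) < 1e-6
```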
172,157
<p>Today I was asked if you can determine the divergence of $$\int_0^\infty \frac{e^x}{x}dx$$ using the limit comparison test.</p> <p>I've tried things like $e^x$, $\frac{1}{x}$, I even tried changing bounds by picking $x=\ln u$, then $dx=\frac{1}{u}du$. Then the integral, with bounds changed becomes $\int_1^\infty \frac{1}{\ln u}du$ This didn't help either.</p> <p>This problem intrigued me, so any helpful pointers would be greatly appreciated.</p>
Keith Anker
126,605
<p>Since $e^x &gt; 1$ for $x &gt; 0$, $$\int_\varepsilon^1 \frac{e^x}{x}\,dx &gt; \int_\varepsilon^1 \frac{1}{x}\,dx \to \infty \quad \text{as } \varepsilon \to 0^+,$$ and $$\int_1^N \frac{e^x}{x}\,dx &gt; \int_1^N \frac{1}{x}\,dx \to \infty \quad \text{as } N \to \infty,$$</p> <p>and so we are done.</p>
38,594
<p>One of the definitions of a Galois extension is that $E/K$ is Galois iff $E$ is the splitting field of some <strong>separable</strong> polynomial $f(x) \in K[x]$, yes?</p> <p>I want to understand why the following is true:</p> <blockquote> <p>Let $f(x) \in \mathbb{Q}[x]$ be a polynomial and let $F$ be a splitting field of $f(x)$. Then $F/\mathbb{Q}$ is Galois. </p> </blockquote> <p>My question is: How do we know that $f(x)$ is a separable polynomial? Can you please explain this part?</p>
Patrick Da Silva
10,704
<p>That is because there is a theorem that states that in every perfect field ($\mathbb Q$ and $\mathbb F_p$ are examples), every irreducible polynomial is separable (i.e. all its roots have multiplicity 1). The polynomial $f(x)$ in your assertion is not necessarily separable, but if you write $$ f(x) = (p_1(x))^{a_1} (p_2(x))^{a_2} \cdots (p_n(x))^{a_n}, \quad a_i \ge 1 $$ with the $p_i(x)$ being the distinct irreducible factors of $f$, then $F$ is also the splitting field for $$ g(x) = p_1(x) p_2(x) \cdots p_n(x), $$ since both of these polynomials ($f$ and $g$) have exactly the same roots. The polynomial $g$ is separable since it is a product of distinct irreducible polynomials (see Corollary 34, page 547, of Dummit &amp; Foote, which I recommend if you don't already use it), so that every root of $g$ has multiplicity 1.</p> <p>Therefore, the splitting field of $f$ being Galois over $\mathbb Q$ doesn't mean that $f$ is separable, but rather that some polynomial (namely $g$) divides $f$ and has the same roots as $f$, with $g$ being separable and having the same splitting field as $f$.</p> <p>I hope this helped.</p>
3,963,517
<blockquote> <p>Write the power series of <span class="math-container">$$f(x)=\frac{\sin(3x)}{2x}, \quad x\neq 0.$$</span></p> </blockquote> <p>I tried to get a series for <span class="math-container">$\sin(3x)$</span> around <span class="math-container">$x=0$</span> and then divide the series by <span class="math-container">$2x$</span>.</p> <p>Is that right?</p>
K.defaoite
553,081
<p>Yes, that's completely fine. The Maclaurin series for <span class="math-container">$\sin$</span> is <span class="math-container">$$\sin(t)=t-\frac{t^3}{3!}+\frac{t^5}{5!}+...$$</span> So letting <span class="math-container">$t=3x$</span>, <span class="math-container">$$\frac{1}{2x}\sin(3x)=\frac{1}{2x}\left(3x-\frac{(3x)^3}{3!}+\frac{(3x)^5}{5!}+...\right)$$</span> You can move the <span class="math-container">$(2x)^{-1}$</span> inside: <span class="math-container">$$\frac{\sin(3x)}{2x}=\frac{3^1x^1}{2x}-\frac{3^3x^3}{2x\cdot3!}+\frac{3^5x^5}{2x\cdot5!}+...$$</span> Hopefully you can construct the general term from here?</p>
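For reference, one way to write the general term is $(-1)^j\,3^{2j+1}x^{2j}\big/\big(2\,(2j+1)!\big)$, and a truncation of it agrees numerically with $\sin(3x)/(2x)$. A Python check (helper name mine):

```python
import math

def maclaurin_sin3x_over_2x(x, terms=8):
    # sum_{j>=0} (-1)^j * 3^(2j+1) * x^(2j) / (2 * (2j+1)!)
    return sum((-1) ** j * 3 ** (2 * j + 1) * x ** (2 * j)
               / (2 * math.factorial(2 * j + 1)) for j in range(terms))

for x in [0.05, -0.1, 0.3]:
    assert math.isclose(maclaurin_sin3x_over_2x(x),
                        math.sin(3 * x) / (2 * x), rel_tol=1e-9)
```

The constant term $3/2$ is also the limiting value of $f$ as $x\to 0$.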
3,486,456
<p>Suppose <span class="math-container">$f:X\rightarrow Y$</span> is a homeomorphism. Show that if <span class="math-container">$X$</span> is Hausdorff then so is <span class="math-container">$Y$</span>.</p> <p>My attempt: Let <span class="math-container">$y_1,y_2\in Y$</span> be distinct. By bijectivity of <span class="math-container">$f$</span>, there exist distinct <span class="math-container">$x_1,x_2 \in X$</span> such that <span class="math-container">$f(x_1)=y_1$</span> and <span class="math-container">$f(x_2)=y_2$</span>. Since <span class="math-container">$X$</span> is Hausdorff, there exist disjoint open subsets of <span class="math-container">$X$</span>, <span class="math-container">$V_1$</span> and <span class="math-container">$V_2$</span> containing <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span> respectively. Since <span class="math-container">$f$</span> is a homeomorphism, it is an open map, hence <span class="math-container">$f(V_1)\cap f(V_2)$</span> is the union of two open sets. Since <span class="math-container">$f$</span> is injective, <span class="math-container">$f(V_1 \cap V_2)= f(V_1) \cap f(V_2)= \varnothing$</span>, and <span class="math-container">$f(V_1),f(V_2)$</span> contain <span class="math-container">$y_1,y_2$</span> respectively. Thus <span class="math-container">$Y$</span> is Hausdorff.</p> <p>Is it correct?</p>
Yves Stalder
722,843
<p>There is a little problem with the sentence 'hence <span class="math-container">$f(V_1)\cap f(V_2)$</span>  is the union of two open sets', because you should write <span class="math-container">$f(V_1)\cup f(V_2)$</span> instead of <span class="math-container">$f(V_1)\cap f(V_2)$</span>, and because you should specify that <span class="math-container">$f(V_1)$</span> and <span class="math-container">$f(V_2)$</span> are open sets.</p> <p>One should also note that <span class="math-container">$f(V_1)$</span> and <span class="math-container">$f(V_2)$</span> are disjoint, and contain <span class="math-container">$y_1$</span> and <span class="math-container">$y_2$</span> respectively. But this is done in your text.</p> <p>The rest of your argument is correct.</p>
1,563,004
<p>Assume that we calculate the expected value of some measurements $x=\dfrac {x_1 + x_2 + x_3 + x_4} 4$. What if we don't include $x_3$ and $x_4$, but instead use $x_2$ in place of $x_3$ and $x_4$? Then we get the following expression: $v=\dfrac {x_1 + x_2 + x_2 + x_2} 4$.</p> <p>How do I know if $v$ is an unbiased estimate of $x$?</p> <p>I am not sure how to approach this problem; any ideas are appreciated!</p>
Leucippus
148,155
<p>Using $$\log_{10}(x) = \frac{\ln(x)}{\ln(10)}$$ then \begin{align} D \left[ 10^{x} \, \log_{10}(x) \right] &amp;= D \left[ e^{x \, \ln(10)} \, \frac{\ln(x)}{\ln(10)} \right] \\ &amp;= \frac{1}{\ln(10)} \, \left[ \frac{10^{x}}{x} + \ln(10) \, 10^{x} \, \ln(x) \right] \\ &amp;= 10^{x} \, \left[ \frac{1}{ x \, \ln(10)} + \ln(10) \, \log_{10}(x) \right] \end{align}</p>
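A computed derivative like this one is easy to verify against a central difference (Python; helper names mine):

```python
import math

def g(x):
    return 10**x * math.log10(x)

def deriv_formula(x):
    # the closed form derived above
    return 10**x * (1 / (x * math.log(10)) + math.log(10) * math.log10(x))

for x in [0.5, 1.0, 2.0, 3.0]:
    h = 1e-6
    numeric = (g(x + h) - g(x - h)) / (2 * h)  # central difference
    assert math.isclose(numeric, deriv_formula(x), rel_tol=1e-6)
```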
858,250
<p>I would like to evaluate the following alternating sum of products of binomial coefficients: $$\sum_{k=0}^{m} (-1)^k \binom m k \binom n k .$$ I had the idea to use Pascal recursion to re-express $\binom n k$ so that we always have $m$ as the upper index and I have been able to come up with nice expressions for $$ \sum_{k=0}^{m} (-1)^k \binom m k \binom m {k-j},$$ ($j=0$ gives the central binomial coefficient as is well known and the others turn out to be shifted away from the centre by $j$ steps, up to a sign).</p> <p>However we then end up with another alternating sum of products of these solutions with the binomial coefficients that came out of using the recursion. And this sum seems to be even worse to solve, at least combinatorially. Any help appreciated! Thanks in advance.</p>
Sasha
11,069
<p>Let $c_k = (-1)^k \binom{m}{k} \binom{n}{k}$. Then $$ \frac{c_{k+1}}{c_k} = - \frac{\binom{m}{k+1}}{\binom{m}{k}} \cdot \frac{\binom{n}{k+1}}{\binom{n}{k}} = - \frac{k-m}{k+1} \cdot \frac{k-n}{k+1} $$ Hence $$ c_k = c_0 \prod_{i=0}^{k-1} (-1) \frac{i-m}{i+1} \cdot \frac{i-n}{i+1} = \frac{(-1)^k}{k!} \frac{(-m)_k \cdot (-n)_k}{(1)_k} $$ where $(a)_k$ denotes the <a href="http://en.wikipedia.org/wiki/Pochhammer_symbol" rel="nofollow">Pochhammer symbol</a>. Hence the sum in question can be understood as the <a href="http://en.wikipedia.org/wiki/Hypergeometric_function" rel="nofollow">Gauss hypergeometric function</a>: $$ \sum_{k=0}^m (-1)^k \binom{m}{k} \binom{n}{k} = {}_2F_1\left(-m, -n; 1; -1\right) $$</p>
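The Pochhammer rewriting of the term $c_k$ can be verified exactly for small $m,n$ (Python; helper names mine):

```python
from fractions import Fraction
from math import comb, factorial

def poch(a, k):
    """Pochhammer symbol (rising factorial): (a)_k = a (a+1) ... (a+k-1)."""
    out = Fraction(1)
    for i in range(k):
        out *= a + i
    return out

for m, n in [(4, 7), (5, 5), (6, 2)]:
    for k in range(m + 1):
        lhs = (-1) ** k * comb(m, k) * comb(n, k)
        rhs = Fraction((-1) ** k, factorial(k)) * poch(-m, k) * poch(-n, k) / poch(1, k)
        assert lhs == rhs  # matches c_k term by term
```

Summing either side over $k$ then gives the finite hypergeometric sum ${}_2F_1(-m,-n;1;-1)$.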
109,061
<p>Most people learn in linear algebra that it's possible to calculate the eigenvalues of a matrix by finding the roots of its characteristic polynomial. However, this method is actually very slow, and while it's easy to remember and it's possible for a person to use this method by hand, there are many better techniques available (which do not rely on factoring a polynomial).</p> <p>So I was wondering, why on earth is it actually important to have techniques available to solve polynomial equations? (to be specific, I mean solving over $\mathbb{C}$)</p> <p>I actually used to be fairly interested in how to do it, and I know a lot of the different methods that people use. I was just thinking about it though, and I'm actually not sure what sort of applications there are for those techniques.</p>
Math Gems
75,092
<p>One important consequence of being able to explicitly solve polynomial equations is that it permits great simplifications by <strong>linearizing</strong> what would otherwise be much more complicated <strong>nonlinear</strong> phenomena. The ability to factor polynomials completely into linear factors over $\mathbb C$ enables widespread linearization simplifications of diverse problems. An example familiar to any calculus student is the fact that integration of rational functions is much simpler over $\mathbb C$ (vs. $\mathbb R$) since partial fraction decompositions involve at most linear (vs. quadratic) polynomials in the denominator. Analogously, one may reduce higher-order constant coefficient differential and difference equations (i.e. recurrences) to linear (first-order) equations by factoring them as linear operators over $\mathbb C$ (i.e. "operator algebra").</p> <p>More generally, such <strong>simplification by linearization</strong> was at the heart of the development of abstract algebra. Namely, Dedekind, by abstracting out the essential linear structures (ideals and modules) in number theory, greatly simplified the prior nonlinear Gaussian theory (based on quadratic forms). This enabled him to exploit to the hilt the power of linear algebra. Examples abound of the revolutionary breakthroughs that this brought to number theory and algebra - e.g. it provided the methods needed to generalize the law of quadratic reciprocity to higher-order reciprocity laws - a longstanding problem that motivated much of the early work in number theory and algebra. </p>
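As a tiny illustration of the partial-fractions point: $x^2+1$ is irreducible over $\mathbb R$, but over $\mathbb C$ it factors as $(x-i)(x+i)$, so the decomposition uses only linear denominators, $\frac{1}{x^2+1}=\frac{1}{2i}\left(\frac{1}{x-i}-\frac{1}{x+i}\right)$. A numerical check (Python; a sketch of my own, not from the answer):

```python
import random

random.seed(1)
def pf_error(x):
    """|1/(x^2+1) - (1/(2i)) * (1/(x-i) - 1/(x+i))| at a real point x."""
    lhs = 1.0 / (x * x + 1.0)
    rhs = (1.0 / 2j) * (1.0 / (x - 1j) - 1.0 / (x + 1j))
    return abs(lhs - rhs)

# the linear-over-C decomposition reproduces the real rational function
for _ in range(100):
    assert pf_error(random.uniform(-5.0, 5.0)) < 1e-12
```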
3,830,971
<p>The function <span class="math-container">$f(x)=\cot^{-1} x$</span> is well known to be neither even nor odd because <span class="math-container">$\cot^{-1}(-x)=\pi-\cot^{-1} x$</span>. Its domain is <span class="math-container">$(-\infty, \infty)$</span> and its range is <span class="math-container">$(0, \pi)$</span>. Today, I was surprised to notice that Mathematica treats it as an odd function, and yields its plot as given below:</p> <p><a href="https://i.stack.imgur.com/IcH3w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IcH3w.png" alt="enter image description here" /></a></p> <p>How to reconcile this? I welcome your comments.</p> <p>Edit: I used <code>Plot[ArcCot[x], {x, -3, 3}]</code> to produce the plot.</p>
Martin R
42,969
<p>From <a href="https://mathworld.wolfram.com/InverseCotangent.html" rel="nofollow noreferrer">Inverse Cotangent</a> on Wolfram MathWorld:</p> <blockquote> <p>There are at least two possible conventions for defining the inverse cotangent. This work follows the convention of Abramowitz and Stegun (1972, p. 79) and the Wolfram Language, taking <span class="math-container">$\cot^{-1}x$</span> to have range <span class="math-container">$(-\pi/2,\pi/2]$</span>, a discontinuity at <span class="math-container">$x=0$</span>, and the branch cut placed along the line segment <span class="math-container">$(-i,i)$</span>.</p> <p>This definition is also consistent, as it must be, with the Wolfram Language's definition of <code>ArcTan</code>, so <code>ArcCot[z]</code> is equal to <code>ArcTan[1/z]</code>.</p> <p>A different but common convention (e.g., Zwillinger 1995, p. 466; Bronshtein and Semendyayev, 1997, p. 70; Jeffrey 2000, p. 125) defines the range of <span class="math-container">$\cot^{-1}x$</span> as <span class="math-container">$(0,\pi)$</span>, thus giving a function that is continuous on the real line <span class="math-container">$\Bbb R$</span>.</p> </blockquote> <p>The former definition is what Mathematica uses. Note that with that definition, <span class="math-container">$\cot^{-1}(0) = \pi/2$</span>, so it is an odd function only if you exclude <span class="math-container">$x=0$</span> from the domain.</p> <p>The latter definition satisfies <span class="math-container">$\cot^{-1}(-x)=\pi-\cot^{-1} x$</span> and is not an odd function.</p>
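<p>The difference between the two conventions is easy to check numerically. Here is a small sketch (the helper names are mine; <code>arccot_mma</code> mimics the <code>ArcTan[1/z]</code> definition):</p>

```python
import math

def arccot_mma(x):
    """Mathematica / Abramowitz-Stegun convention: ArcCot[x] = ArcTan[1/x],
    with the value pi/2 at the discontinuity x = 0; range (-pi/2, pi/2]."""
    return math.pi / 2 if x == 0 else math.atan(1 / x)

def arccot_cont(x):
    """The (0, pi) convention, continuous on all of R."""
    return math.pi / 2 - math.atan(x)

x = 1.7
# The first convention is odd away from x = 0 ...
assert math.isclose(arccot_mma(-x), -arccot_mma(x))
# ... while the second satisfies arccot(-x) = pi - arccot(x).
assert math.isclose(arccot_cont(-x), math.pi - arccot_cont(x))
# The two conventions agree for x > 0.
assert math.isclose(arccot_mma(x), arccot_cont(x))
```

<p>So the plot in the question is odd only because of the convention, and only once $x=0$ (where both conventions give $\pi/2$) is excluded.</p>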
1,430,369
<p>I know how to prove this by induction, but the text I'm following shows another way to prove it, and I guess this way is used again in the future. I'm confused by it.</p> <p>So the expression for the sum of the first $n$ numbers is: $$\frac{n(n+1)}{2}$$</p> <p>And this second proof starts out like this. It says since:</p> <p>$$(n+1)^2-n^2=2n+1$$</p> <p>I have absolutely no idea where this expression came from, and the text doesn't explain where it came from either.</p> <p>Then it proceeds to say: \begin{align} 2^2-1^2&amp;=2*1+1 \\ 3^2-2^2&amp;=2*2+1\\ &amp;\dots\\ n^2-(n-1)^2&amp;=2(n-1)+1\\ (n+1)^2-n^2&amp;=2n+1 \end{align} At this point I'm completely lost. But it continues to say "adding and noting the cancellations on the left, we get" \begin{align} (n+1)^2-1&amp;=2(1+2+...+n)+n \\ n^2+n&amp;=2(1+2+...+n) \\ \frac{n(n+1)}{2}&amp;=1+2+...+n \end{align}</p> <p>Which proves it, but I have no clue what has happened. I am entirely new to these math proofs and completely lost. I was great at high school math and calculus, but now I haven't got the slightest clue of what's going on. Thanks</p>
Calvin Khor
80,734
<p>This is a telescoping sum. The use of $(n+1)^2 - n^2 = 2n + 1$ is a clever trick, and it is only clear why we use it once you understand the whole argument. The idea is to first find $\sum_1^n (2k+1) = 2(1+\dots+n)+(1+\dots+1)$ and use this to find $\sum_1^n k =1+\dots+n$. </p> <p>I'm going to colour code the given text above, as well as add a few lines which will hopefully make things clearer.</p> <p>\begin{align} \color{blue}{2^2}-1^2&amp;=2\times 1+1 \\ \color{red}{3^2}-\color{blue}{2^2}&amp;=2\times 2+1\\ \color{green}{4^2}-\color{red}{3^2}&amp;=2\times 3+1\\ \color{orange}{5^2}-\color{green}{4^2}&amp;=2\times 4+1\\ &amp;\dots\\ \color{brown}{n^2}-(n-1)^2&amp;=2(n-1)+1\\ (n+1)^2-\color{brown}{n^2}&amp;=2n+1 \end{align}</p> <p>Hopefully you can see, when you add each coloured pair together, e.g. since one is $+\color{blue}{2^2}$ and the other is $-\color{blue}{2^2}$, they cancel out completely. The same happens with every number you can pair. The only ones that don't cancel are the $-1^2 $ at the start, and also the $(n+1)^2$ at the end. So by adding up all these equations (there are $n$ of them) we get</p> <p>$$ (n+1)^2 - 1 = 2(1+\dots+n) + (1+\dots+1) = 2(1+\dots+n) + n$$</p> <p>Does this help? (Tell me if the remainder of the argument isnt clear as well.)</p>
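<p>If it helps, the whole bookkeeping can be replayed mechanically for a concrete $n$ (a sketch, not part of the proof itself):</p>

```python
# Numerically replaying the telescoping argument for one value of n.
n = 50

# The left sides (k+1)^2 - k^2, k = 1..n, collapse to (n+1)^2 - 1.
lhs = sum((k + 1) ** 2 - k ** 2 for k in range(1, n + 1))
assert lhs == (n + 1) ** 2 - 1

# The right sides 2k + 1, k = 1..n, sum to 2*(1 + ... + n) + n.
rhs = sum(2 * k + 1 for k in range(1, n + 1))
assert lhs == rhs

# Rearranging yields the closed form for the sum of the first n integers.
assert sum(range(1, n + 1)) == n * (n + 1) // 2
```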
2,416,910
<p>Let $A:=\{-k,\ldots,-2,-1,0,1,2,\ldots,k\}$, $k&lt;\infty$, $k\in \mathbb{N}$.</p> <p>Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be such that $f(-x)=-f(x)$ for all $x\in A$ and $f(x+y)=f(x)+f(y)$ for all $x,y\in A$ such that $x+y\in A$.</p> <p>Does it follow that there is $\alpha\neq 0$ such that $f(x)=\alpha x$ for all $x\in A$?</p>
J.G
293,121
<p>The idea is to have a random variable with extreme values that are of opposite sign from the mean, so that shifting by the mean leads to the contributions of these terms to be augmented. To do this, those extreme values must correspondingly have low weight.</p> <p>For an explicit example, let <span class="math-container">$X$</span> be <span class="math-container">$1$</span> with probability <span class="math-container">$.995$</span> and <span class="math-container">$-100$</span> with probability <span class="math-container">$.005$</span>. Then <span class="math-container">$\mathbb{E}[X]=.495$</span>. You can numerically check that <span class="math-container">\begin{gather} \mathbb{E}[\exp(X^2/43.5^2)]=.005\exp(100^2/43.5^2)+.995\exp(1^2/43.5^2)\approx 1.98\ldots\\ \mathbb{E}[\exp((X-\mathbb{E}[X])^2/43.5^2)]=.005\exp((100.495)^2/43.5^2)+.995\exp(.505^2/43.5^2)\approx 2.03. \end{gather}</span> This means that <span class="math-container">\begin{equation} \|X\|_{\psi_2}&lt;43.5&lt;\|X-\mathbb{E}[X]\|_{\psi_2}, \end{equation}</span> so the inequality fails for <span class="math-container">$C=1$</span>. There's likely more extreme examples, but this is just the first one I found from trial and error.</p>
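<p>The two expectations above are easy to reproduce (a sketch of the same numerical check; the variable names are mine):</p>

```python
import math

p, lo, hi = 0.005, -100.0, 1.0      # P(X = -100) = .005, P(X = 1) = .995
mean = (1 - p) * hi + p * lo        # E[X] = 0.495

def psi2_moment(shift, t):
    """E[exp((X - shift)^2 / t^2)] for the two-point distribution above."""
    return (p * math.exp((lo - shift) ** 2 / t ** 2)
            + (1 - p) * math.exp((hi - shift) ** 2 / t ** 2))

t = 43.5
uncentered = psi2_moment(0.0, t)    # about 1.98: stays below 2
centered = psi2_moment(mean, t)     # about 2.03: crosses 2
assert uncentered < 2 < centered
```

<p>Since the sub-Gaussian norm is the smallest $t$ making the moment at most $2$, this confirms $\|X\|_{\psi_2}&lt;43.5&lt;\|X-\mathbb{E}[X]\|_{\psi_2}$.</p>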
157,985
<p>Question from a beginner. I have data containing dates and values of the format:</p> <pre><code> data = {{{2015, 1, 1}, 2}, {{2015, 1, 2}, 3}, {{2015, 2, 1}, 4}, {{2015, 2, 2}, 5}, {{2016, 1, 1}, 6}, {{2016, 1, 2}, 7}} </code></pre> <p>Aim is to multiply the values of each day in a month, e.g. for January 2015, the result should be 2*3=6, for February 2015 4*5=20 and so on. Ideally, the output would be a list of the format {{January 2015, 6}, {February 2015, 20},etc}, but just a list of the results of the multiplications would be fine.</p> <p>To group the data by month, I use:</p> <pre><code>selectElements[list_, start_, end_] := Module[{s = AbsoluteTime@start, e = AbsoluteTime@end}, Select[list, Composition[s &lt;= # &lt;= e &amp;, AbsoluteTime, First]]] </code></pre> <p>I then create a table multiplying the values of the data grouped by month:</p> <pre><code>test1 = Table[Times @@ selectElements[data, {y, m, 1}, {y, m, 31}], {y, 2015, 2016}, {m, 1, 12}] </code></pre> <p>However, this multiplies not only the values, but also the dates themselves giving me:</p> <pre><code> {{{{4060225, 1, 2}, 6}, {{4060225, 4, 2}, 20}, 1, 1, 1, 1, 1, 1, 1, 1,1, 1}, {{{4064256, 1, 1}, 42}, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}} </code></pre> <p>I'm sure there is an easy way to get just the dates and the values I'm interested in (i.e. 6,20,42, ideally with month/year), but so far I couldn't find it. I'd be very grateful for any pointers.</p>
Alan
19,530
<pre><code>GroupBy[data, Part[First[#], ;; 2] &amp;, Apply[Times, Last /@ #] &amp;] </code></pre>
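<p>For readers coming from outside the Wolfram Language, the same month-wise grouping and product can be sketched in Python (an analogue of the logic, not a translation of the <code>GroupBy</code> call above):</p>

```python
from collections import defaultdict

data = [((2015, 1, 1), 2), ((2015, 1, 2), 3), ((2015, 2, 1), 4),
        ((2015, 2, 2), 5), ((2016, 1, 1), 6), ((2016, 1, 2), 7)]

# Group by (year, month) -- the first two parts of each date -- and multiply.
products = defaultdict(lambda: 1)
for (y, m, _day), value in data:
    products[(y, m)] *= value

assert dict(products) == {(2015, 1): 6, (2015, 2): 20, (2016, 1): 42}
```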
157,985
<p>Question from a beginner. I have data containing dates and values of the format:</p> <pre><code> data = {{{2015, 1, 1}, 2}, {{2015, 1, 2}, 3}, {{2015, 2, 1}, 4}, {{2015, 2, 2}, 5}, {{2016, 1, 1}, 6}, {{2016, 1, 2}, 7}} </code></pre> <p>Aim is to multiply the values of each day in a month, e.g. for January 2015, the result should be 2*3=6, for February 2015 4*5=20 and so on. Ideally, the output would be a list of the format {{January 2015, 6}, {February 2015, 20},etc}, but just a list of the results of the multiplications would be fine.</p> <p>To group the data by month, I use:</p> <pre><code>selectElements[list_, start_, end_] := Module[{s = AbsoluteTime@start, e = AbsoluteTime@end}, Select[list, Composition[s &lt;= # &lt;= e &amp;, AbsoluteTime, First]]] </code></pre> <p>I then create a table multiplying the values of the data grouped by month:</p> <pre><code>test1 = Table[Times @@ selectElements[data, {y, m, 1}, {y, m, 31}], {y, 2015, 2016}, {m, 1, 12}] </code></pre> <p>However, this multiplies not only the values, but also the dates themselves giving me:</p> <pre><code> {{{{4060225, 1, 2}, 6}, {{4060225, 4, 2}, 20}, 1, 1, 1, 1, 1, 1, 1, 1,1, 1}, {{{4064256, 1, 1}, 42}, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}} </code></pre> <p>I'm sure there is an easy way to get just the dates and the values I'm interested in (i.e. 6,20,42, ideally with month/year), but so far I couldn't find it. I'd be very grateful for any pointers.</p>
ubpdqn
1,997
<p><a href="http://reference.wolfram.com/language/ref/TimeSeriesAggregate.html" rel="nofollow noreferrer"><code>TimeSeriesAggregate</code></a> can also be used, e.g.:</p> <pre><code>#1[[1, {1, 2}]] -&gt; #2 &amp; @@@TimeSeriesAggregate[data, "Month", Times @@ # &amp;] </code></pre> <p>yielding:</p> <pre><code>{{2015, 1} -&gt; 6, {2015, 2} -&gt; 20, {2016, 1} -&gt; 42} </code></pre>
59,949
<p>How to track <code>Initialization</code> step by step? Any tricky solution is ok. One condition, <code>x</code> should be <code>DynamicModule</code> variable.</p> <pre><code>DynamicModule[{x}, Dynamic[x], Initialization :&gt; ( x = 1; Pause@1; x = 2; Pause@1; x = 3; )] </code></pre> <p>I want the progress to be reflected in <code>Dynamic[x]</code>. </p>
Kuba
5,478
<h3>tl;dr;</h3> <p>Use <code>SynchronousInitialization -&gt; False</code> and, in case of simple <code>Dynamic[x]</code>, add <code>TrackedSymbols</code>: <code>Dynamic[x, TrackedSymbols :&gt; {x}]</code> due to the bug mentioned below.</p> <hr /> <p>Finally I'm able to answer my question with understanding. Two years, no big deal :)</p> <p>So I want <code>Dynamic[x]</code> to reflect the state of the initialization. The last example in WReach's answer does what I need but in a way I don't like. <code>Dynamic</code> with <code>UpdateInterval-&gt;0</code> is a flood of unnecessary calls. Which will still be performed till that <code>Dynamic</code> is visible.</p> <h3>My understanding and solution to the problem</h3> <p>According to my explanation in:</p> <ul> <li><a href="https://mathematica.stackexchange.com/a/120414/5478"><strong>Synchronizing Dynamics with other preemptive evaluations</strong></a></li> </ul> <p><strong>all we need is</strong> to make <code>Initialization</code> code to be evaluated through the <em>MainLink</em>. <strong><code>SynchronousInitialization -&gt; False</code></strong> serves that purpose.</p> <p><strong>Yet that fix is not working</strong>, we still see the initial value and then only the very last at the end.</p> <pre><code>DynamicModule[{x = 0}, Dynamic[x], SynchronousInitialization -&gt; False, Initialization :&gt; ( x = 1; Pause@1; x = 2; Pause@1; x = 3 ) ] </code></pre> <p>At this point, a beginner's morale will drop. This should work. Why isn't it? The answer is, come to StackExchange more often. 
There is a <a href="/questions/tagged/bug" class="post-tag" title="show questions tagged &#39;bug&#39;" rel="tag">bug</a> which interferes:</p> <ul> <li><a href="https://mathematica.stackexchange.com/q/100828/5478"><strong>What is the difference between <code>Dynamic[x]</code> and <code>Dynamic[ h[x] ]</code> for DynamicModule variables?</strong></a></li> </ul> <p>So my minimal example was too minimal:</p> <pre><code>DynamicModule[{x = 0}, Dynamic[ x, TrackedSymbols :&gt; {x}], SynchronousInitialization -&gt; False, Initialization :&gt; ( x = 1; Pause@1; x = 2; Pause@1; x = 3 ) ] </code></pre> <p>Probably the most verbose solution is <code>Dynamic[x, TrackedSymbols :&gt; {x}]</code> which appears to be superfluous but at least it is clear.</p> <p>Use whatever you want but there is usually something more than <code>x</code> so that bug can be missed.</p> <p>Uff, works and fits my understanding of <em>Kernel</em>-<em>FrontEnd</em> communication.</p> <hr /> <h3>Real life example</h3> <pre><code>DynamicModule[{x = 0, init = False}, Dynamic[ If[ init, Overlay[{ Dynamic@ProgressIndicator[Appearance -&gt; &quot;Indeterminate&quot;], Dynamic[x, TrackedSymbols :&gt; {x}] }, All, 1, Alignment -&gt; Center, BaseStyle -&gt; 18 ], x ], TrackedSymbols :&gt; {init, x} ], SynchronousInitialization -&gt; False, Initialization :&gt; ( init = True; x = 1; Pause@1; x = 2; Pause@1; x = 3; Pause[1]; init = False; )] </code></pre> <p><a href="https://i.stack.imgur.com/OskJk.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OskJk.gif" alt="enter image description here" /></a></p>
4,023,366
<p>So the question is pretty much described in the title already. I have to show the following result. I have tried it but am failing to do so. Anyone who can please help me in understanding its proof. I have attached the picture of formulas that may prove to be helpful.</p> <p><span class="math-container">$\exists x.P(x) \lor \exists x.Q(x)$</span> <span class="math-container">$\vdash$</span> <span class="math-container">$\exists x.(P(x) \lor Q(x))$</span></p> <p><a href="https://i.stack.imgur.com/O5Hod.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O5Hod.jpg" alt="enter image description here" /></a></p>
Graham Kemp
135,106
<p>Let us get you <em>started</em> with the derivation for <span class="math-container">$\exists x~.P(x)~\lor~\exists x~.Q(x)\vdash \exists x~.(P(x)\lor Q(x))$</span> using the Gentzen tree system.</p> <p><span class="math-container">$$\dfrac{\exists x~.P(x)~\lor~\exists x~.Q(x)\\\qquad\qquad\vdots}{\exists x~.(P(x)\lor Q(x))}{?}$$</span></p> <hr /> <p>Well, <em>existential statements</em> may be derived either by construction, or by reduction to absurdity.   Which is appropriate here?</p> <p>We have existentials in the premise, so that indicates we may be constructive.   Thus we shall be using existential introduction with witnesses derived from the premise's existentials.</p> <p>But wait!   To access those existentials we will first need to eliminate that disjunction.</p> <p><span class="math-container">$$\dfrac{\lower{1.5ex}{\exists x~.P(x)~\lor~\exists x~.Q(x)}\quad\dfrac{[\exists x~.P(x)]^1\\\quad\vdots}{\exists x~.(P(x)\lor Q(x))}\quad\dfrac{[\exists x~.Q(x)]^2\\\quad\vdots}{\exists x~.(P(x)\lor Q(x))}}{\exists x~.(P(x)\lor Q(x))}{\small\lor\textsf{elim}^{1,2}}$$</span></p> <hr /> <p>Now you can look at the <em>existential</em> introduction and elimination rules, and disjunction introduction while we are at it.<span class="math-container">$$\dfrac{\lower{1.5ex}{\exists z~.\phi(z)}~~\dfrac{[\phi(t)]^1\\\quad\vdots}{\psi}}{\psi}{\small\exists\textsf{elim}^1}\qquad\dfrac{\rho(t)}{\exists z~.\rho(z)}{\small\exists\textsf{intro}}\\~\\~\\\dfrac{\chi(s)}{\chi(s)\lor\varphi(s)}{\small\lor\textsf{intro}}\qquad\dfrac{\varphi(t)}{\chi(t)\lor\varphi(t)}{\small\lor\textsf{intro}}$$</span></p> <p>Just put it together.</p>
412,944
<p>Is it mathematically acceptable to use <a href="https://math.stackexchange.com/questions/403346/prove-if-n2-is-even-then-n-is-even">Prove if $n^2$ is even, then $n$ is even.</a> to conclude since 2 is even then $\sqrt 2$ is even? Further more using that result to also conclude that $\sqrt [n]{2}$ is even for all n?</p> <p>Similar argument for odd numbers should give $\sqrt[n]{k}$ is even or odd when k is even odd.</p> <p>My question is does any of above has been considered under a more formal subject or it is a correct/nonsensical observation ? </p>
Marc van Leeuwen
18,880
<p>The question you linked to starts (quite naturally) with "Suppose $n$ is an integer", which clearly excludes taking $n=\sqrt 2$. Therefore it cannot be used to conclude that $\sqrt2$ is even. Obviously $\sqrt2$ being even is absurd since even numbers in particular have to be integers. I would say your "observation" is sloppy (overlooking the stated conditions) and your following reasoning unfounded (or nonsensical if you prefer).</p>
412,944
<p>Is it mathematically acceptable to use <a href="https://math.stackexchange.com/questions/403346/prove-if-n2-is-even-then-n-is-even">Prove if $n^2$ is even, then $n$ is even.</a> to conclude since 2 is even then $\sqrt 2$ is even? Further more using that result to also conclude that $\sqrt [n]{2}$ is even for all n?</p> <p>Similar argument for odd numbers should give $\sqrt[n]{k}$ is even or odd when k is even odd.</p> <p>My question is does any of above has been considered under a more formal subject or it is a correct/nonsensical observation ? </p>
Peter LeFanu Lumsdaine
2,439
<p>Other answerers are answering several slightly different interpretations of your question; but taking literally what you asked,</p> <blockquote> <p>Is it mathematically acceptable to use “<a href="https://math.stackexchange.com/questions/403346/prove-if-n2-is-even-then-n-is-even">Prove if $n^2$ is even, then $n$ is even.</a>” to conclude since 2 is even then $\sqrt 2$ is even?</p> </blockquote> <p>the answer is a quite interesting “no, but also yes”.</p> <p>Formally, the answer is definitely <em>no</em>: <a href="https://math.stackexchange.com/a/412950/2439">as @john explained</a>, you can’t use that theorem to conclude what you suggest, since the theorem and proof start by assuming that <em>n</em> is an integer, so $\sqrt{2}$ is not a valid value for $n$.</p> <p>However, it is excellent and very acceptable mathematical practice to do what you did, and take that theorem as inspiration for considering a new generalisation of the concepts involved. And, <a href="https://math.stackexchange.com/a/412971/2439">as @KeyIdeas describes</a>, this can lead to some very nice theories.</p> <p><strong>TL;DR: you can’t <em>conclude</em> this from that theorem/proof, but you can certainly be <em>inspired</em> along these lines by them.</strong></p>
399,564
<p>Using Mathematical induction on $k$, prove that for any integer $k\geq 1$,</p> <p>$$(1-x)^{-k}=\sum_{n\geq 0}\binom{n+k-1}{k-1}x^n$$</p> <p>How should I proceed? The tutorial teacher attempted this question and forgot halfway through... <em>facepalm</em></p>
Brian M. Scott
12,042
<p>HINT: Your induction hypothesis is that</p> <p>$$(1-x)^{-k}=\sum_{n\ge 0}\binom{n+k-1}{k-1}x^n\;.$$</p> <p>For the induction step take a look at this calculation:</p> <p>$$\begin{align*} \sum_{n\ge 0}\binom{n+k}kx^n&amp;=\sum_{n\ge 0}\left(\binom{n+k-1}{k-1}+\binom{n+k-1}k\right)x^n\\ &amp;=(1-x)^{-k}+\sum_{n\ge 0}\binom{n+k-1}kx^n\\ &amp;=(1-x)^{-k}+\sum_{n\ge 1}\binom{n+k-1}kx^n\\ &amp;=(1-x)^{-k}+x\sum_{n\ge 1}\binom{n+k-1}kx^{n-1}\\ &amp;=(1-x)^{-k}+x\sum_{n\ge 0}\binom{n+k}kx^n\;. \end{align*}$$</p>
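<p>The Pascal identity used in the first line, and the series identity itself, can both be sanity-checked numerically (my own check; the series is truncated at $N$ terms):</p>

```python
import math

# Pascal's rule, the first step of the induction above.
for n in range(10):
    for k in range(1, 6):
        assert math.comb(n + k, k) == math.comb(n + k - 1, k - 1) + math.comb(n + k - 1, k)

# Partial sums of sum_n C(n+k-1, k-1) x^n approach (1 - x)^(-k) for |x| < 1.
x, N = 0.1, 200
for k in range(1, 6):
    partial = sum(math.comb(n + k - 1, k - 1) * x ** n for n in range(N))
    assert math.isclose(partial, (1 - x) ** (-k), rel_tol=1e-12)
```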
1,413,212
<blockquote> <p>Given $x$, the number of circles of radius $r$, what is a good approximate size for the radius of a bigger circle which they fit in?</p> </blockquote> <p>To explain in actual problem terms: I want to move units in a video game which take up a certain amount of area around them. I don't want those units overlapping, so I want the points they're told to move to to be spaced far enough apart to make sure they aren't. I know how to make sure each point is far enough apart, but I'm picking points randomly in a circle and then checking to make sure they are far enough apart. </p> <p>The problem is that if a circle is too small in size there simply aren't enough valid movement points for all the units, and you get an infinite loop endlessly trying to find an answer that doesn't exist. If the circle is too large you end up with units spaced out over vast areas, way out of proportion. </p> <p>I do not need a perfect minimum-radius answer; I just need to be certain the amount will fit while not being too large. To show that I did think about it, my rough guess was:</p> <p>the square root of (amount needed) * (radius of each circle * 2)</p>
Zenohm
224,178
<p>Since it appears that you're doing basic calculus, I'll put my answer in terms of that.</p> <p>The definition for the slope of a tangent line on a curve using limits is as follows</p> <blockquote> <p>$$ \lim_{x \to a} \frac{f(x)-f(a)}{x-a} = m $$</p> </blockquote> <p>Now let's look at what they give you,</p> <blockquote> <p>$$ \lim_{x \to 1} \frac{f(x)-4}{x-1} = 10 $$</p> </blockquote> <p>Finally, let's look at what they ask of you,</p> <blockquote> <p>$$ \lim_{x \to 1} f(x) = ? $$</p> </blockquote> <p>If you compare the definition with what they've given you, you'll see that the number $4$ plays the role of $f(a)$. Now, since the denominator $x-1$ tends to $0$ while the quotient tends to the finite value $10$, the numerator $f(x)-4$ must tend to $0$ as well:</p> <p>$$\lim_{x \to 1} \left(f(x)-4\right) = \lim_{x \to 1} \frac{f(x)-4}{x-1}\cdot(x-1) = 10 \cdot 0 = 0$$</p> <p>Therefore,</p> <p>$$\lim_{x \to 1} f(x) = 4$$</p> <p>Always remember to look for any correlation between your equations and your problem if you ever get stuck.</p>
2,517,601
<p>I tried to find the residue of the function $$f(z) = \frac{1}{z(1-\cos(z))}$$ at $z=0$. So I did:</p> <p>\begin{align} \hbox{Res}(0)&amp;=\lim_{z \to 0}(z)(f(z))\\ \hbox{Res}(0)&amp;= \lim_{z \to 0} \frac{1}{1-\cos(z)} \end{align}</p> <p>and I got that the residue is $\infty$, which seems to be wrong.</p>
Nosrati
108,128
<p>$$\cos z=1-\dfrac{1}{2!}z^2+\dfrac{1}{4!}z^4-\dfrac{1}{6!}z^6+\cdots$$ then \begin{align} \dfrac{1}{z(1-\cos z)} &amp;= \dfrac{1}{z^3\left(\dfrac{1}{2!}-\dfrac{1}{4!}z^2+\dfrac{1}{6!}z^4-\cdots\right)} \\ &amp;= \dfrac{1}{z^3}\left(2+\dfrac{1}{6}z^2+\dfrac{1}{120}z^4+\cdots\right) \\ &amp;= \dfrac{2}{z^3}+\dfrac{1}{6z}+\dfrac{z}{120}+\cdots \end{align} so $a_{-1}=\dfrac16$.</p>
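<p>The value $a_{-1}=\frac16$ can be double-checked numerically: by the residue theorem, the residue equals $\frac{1}{2\pi i}\oint f(z)\,dz$ over a small circle around $0$, i.e. the average of $z\,f(z)$ over that circle. (A sketch; the radius $1/2$ keeps the other singularities $z=2\pi k$, $k\neq 0$, outside the contour.)</p>

```python
import cmath
import math

def f(z):
    return 1 / (z * (1 - cmath.cos(z)))

# Residue at 0 = (1 / 2 pi i) * contour integral of f over |z| = r,
# i.e. the average of z * f(z) at equally spaced points on the circle.
N, r = 2048, 0.5
total = 0.0
for k in range(N):
    z = r * cmath.exp(2j * math.pi * k / N)
    total += z * f(z)
residue = total / N

assert abs(residue - 1 / 6) < 1e-9  # matches a_{-1} = 1/6
```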
176,167
<p>$\newcommand{\scp}[2]{\langle #1,#2\rangle}\newcommand{\id}{\mathrm{Id}}$ Let $f$ and $g$ be two proper, convex and lower semi-continuous functions (on a Hilbert space $X$ or $X=\mathbb{R}^n$) and let $g$ be continuously differentiable. Consequently, the subdifferential $\partial f$ and the gradient $\nabla g$ are monotone operators, i.e. $$ \scp{\nabla g(x)-\nabla g(y)}{x-y}\geq 0 $$ for all $x,y$ and for $u\in\partial f(x)$ and $v\in\partial f(y)$ $$ \scp{u-v}{x-y}\geq 0. $$ My question:</p> <blockquote> <p>Is the operator $x\mapsto x - (\id + \gamma\partial f)^{-1}(x-\gamma\nabla g(x))$ monotone for $\gamma\geq 0$?</p> </blockquote> <p>Note that I ask for all values $\gamma\geq 0$ specifically for large values.</p> <p>An equivalent question is</p> <blockquote> <p>Does for $\gamma\geq 0$ it holds that $$ \scp{(\id + \gamma\partial f)^{-1}(x-\gamma\nabla g(x)) - (\id + \gamma\partial f)^{-1}(y-\gamma\nabla g(y))}{x-y}\leq \|x-y\|^2 $$</p> </blockquote> <p>Thoughts:</p> <p>For $f=0$ and $g=0$ it's clear (for $g=0$ this follows since the "proximal operator" $x\mapsto (\id + \gamma\partial f)^{-1}(x)$ is non-expansive, for $f=0$, the monotonicity of $\gamma\nabla g$ works in the right direction and does the trick). </p> <p>Also for small $\gamma$ (smaller that $2/L$ if $L$ is the Lipschitz constant of $\nabla g$, if is has one) the thing is clear as then the mapping $x\mapsto (\id + \gamma\partial f)^{-1}(x - \gamma\nabla g(x))$ is again non-expasive (and used as iteration in the so-called proximal gradient method). However, for large $\gamma$ I could not make use of any of these observation since using Cauchy-Schwarz for the inner product ruins the estimate then.</p> <p>Moreover all examples I tried numerically (in various dimensions and for various functions $f$ and $g$) suggested that the claim holds.</p> <p>Intuitively, all these together makes me think that the answer to my questions is yes, but I failed to prove it. 
Also all inequalities in Bauschke/Combettes "Convex analysis and Monotone Operator Theory in Hilbert spaces" I found were not helpful. </p>
Suvrit
8,430
<p>Although not monotone at the operator level (as suggested by C. Mooney's proof), the monotonicity of prox-residual norms is known (probably you are already aware of it). </p> <p>Let $P_\eta^g$ denote the prox-operator for $g$ with index $\eta$, i.e., \begin{equation*} P_\eta^g(y) := \text{argmin}_{x} \tfrac{1}{2\eta}\|x-y\|^2 + g(x). \end{equation*}</p> <p>Let $y, z \in \mathbb{R}^n$ and $\eta &gt; 0$. Define the functions \begin{eqnarray*} p_g(\eta) &amp;:=&amp; \tfrac1\eta\|P_\eta^g(y-\eta z) - y\|\\ q_g(\eta) &amp;:=&amp; \|P_\eta^g(y-\eta z) - y\|. \end{eqnarray*}</p> <blockquote> <p><strong>Theorem.</strong> $p_g$ is a decreasing function of $\eta$ and $q_g$ is an increasing function of $\eta$.</p> </blockquote>
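<p>A one-dimensional sanity check of the theorem (my own sketch, not from the original answer): take $g(x)=x^2/2$, whose prox is $P_\eta^g(v)=v/(1+\eta)$, and evaluate $p_g$ and $q_g$ on a grid of $\eta$ values.</p>

```python
# One-dimensional sketch: g(x) = x^2 / 2, prox_eta(v) = v / (1 + eta).
y, z = 1.0, 0.3

def prox(eta, v):
    """argmin_x (1/(2 eta)) (x - v)^2 + x^2 / 2  =  v / (1 + eta)."""
    return v / (1 + eta)

etas = [0.1 * k for k in range(1, 200)]
p = [abs(prox(eta, y - eta * z) - y) / eta for eta in etas]  # p_g(eta)
q = [abs(prox(eta, y - eta * z) - y) for eta in etas]        # q_g(eta)

# p_g is decreasing and q_g is increasing in eta, as the theorem asserts.
assert all(a >= b for a, b in zip(p, p[1:]))
assert all(a <= b for a, b in zip(q, q[1:]))
```

<p>In this example one can even see it in closed form: $P_\eta^g(y-\eta z)-y=-\eta(y+z)/(1+\eta)$, so $p_g(\eta)=|y+z|/(1+\eta)$ and $q_g(\eta)=\eta|y+z|/(1+\eta)$.</p>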
3,684,158
<p>You are given two circles:</p> <p>Circle G: <span class="math-container">$(x-3)^2 + y^2 = 9$</span></p> <p>Circle H: <span class="math-container">$(x+3)^2 + y^2 = 9$</span></p> <p>Two lines that are tangents to the circles at point <span class="math-container">$A$</span> and <span class="math-container">$B$</span> respectively intersect at a point <span class="math-container">$P$</span> such that <span class="math-container">$AP + BP = 10$</span></p> <p>Find the locus of all points <span class="math-container">$P$</span>.</p> <p><a href="https://i.stack.imgur.com/zKl67.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zKl67.png" alt="enter image description here"></a></p> <hr> <p>This problem is solvable if we set point <span class="math-container">$P = (x,y)$</span> and solve the equation <span class="math-container">$AP + BP = 10$</span>. After substituting <span class="math-container">$GP^2 = AP^2 + 3^2$</span> and <span class="math-container">$HP^2 = BP^2 + 3^2$</span> and getting the following equation for an ellipse </p> <p><span class="math-container">$16x^2 +25y^2 = 625$</span></p> <p>That's a lot of math and algebra to do, so my question is: What is the geometric reasoning behind why is the locus an ellipse (without using analytical geometry) or is there any other elegant proofs that lack heavy calculations?</p>
secavara
512,290
<p>Not really an answer to the question, but I wanted to post this gif that shows the ellipse being formed</p> <p><a href="https://i.stack.imgur.com/m2IbD.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m2IbD.gif" alt="elipse"></a></p>
1,147,773
<p>Do you have any explicit example of an infinite-dimensional vector space with an explicit basis?</p> <p>Not a Hilbert basis, but a family of linearly independent vectors which spans the space: any $x$ in the space is a <strong>finite</strong> linear combination of elements of the basis.</p> <p>In general the existence of such a basis follows from the Axiom of Choice, but I wonder if there is at least one non-trivial (not finite-dimensional) case where we have an explicit construction.</p>
Bernard
202,857
<p>Rings of polynomials over any field in any number of indeterminates (viewed as vector spaces over the field) have the monomials as a basis.</p> <p>The ring of integer-valued polynomials with coefficients in $\mathbf Q$ has as a basis:</p> <p>$$ \Bigl\{1, x, \frac{x(x-1)}2, \dots, \frac{x(x-1)\dotsm(x-k+1)}{k!},\dots\Bigr\}$$</p>
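<p>That these polynomials really are integer-valued at <em>every</em> integer, despite their rational coefficients, is easy to check exactly (my own sketch, using exact rational arithmetic):</p>

```python
from fractions import Fraction
from math import factorial

def basis_poly(k, x):
    """x(x-1)...(x-k+1)/k!, evaluated exactly in rational arithmetic."""
    prod = Fraction(1)
    for j in range(k):
        prod *= x - j
    return prod / factorial(k)

# Every basis element takes integer values at every integer,
# including negative ones.
for k in range(7):
    for x in range(-10, 11):
        assert basis_poly(k, x).denominator == 1
```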
2,796,854
<p>What happens if both the first and second derivatives at a certain point are $0$? Is it an inflection point, an extremum point, or neither? Can we say anything at all about a point in such a case? </p>
Dan Uznanski
167,895
<p>You can tell what it does (assuming this function is well-behaved at the point in question; without that this question becomes basically pointless) based on the order of the lowest non-zero derivative beyond the first.</p> <p>An <strong>inflection point</strong> has a non-zero <strong>odd</strong> derivative: $f(x)=x^3$ has an inflection point at $0$ because its third derivative is non-zero.</p> <p>An <strong>extremum</strong> has a non-zero <strong>even</strong> derivative: $f(x) = x^4$ has an extremum at $0$ because its fourth derivative is non-zero.</p>
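<p>A tiny numeric illustration of the two cases (not a proof, just the standard examples $x^3$ and $x^4$):</p>

```python
# f(x) = x^3: first non-zero derivative at 0 has odd order (three) -> inflection.
# f(x) = x^4: first non-zero derivative at 0 has even order (four) -> extremum.
h = 1e-3

def cube(x):
    return x ** 3

def quart(x):
    return x ** 4

def second_diff(f, x):
    """Central second difference, roughly f''(x) * h^2."""
    return f(x + h) - 2 * f(x) + f(x - h)

# x^4 has a local minimum at 0: both neighbours are strictly higher.
assert quart(-h) > quart(0) < quart(h)

# x^3 changes concavity through 0: the second difference flips sign.
assert second_diff(cube, -0.1) < 0 < second_diff(cube, 0.1)
```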
289,405
<p>Consider an elementary class $\mathcal{K}$. It is quite common in model theory that a structure $K$ in $\mathcal K$ comes with a closure operator $$\text{cl}: \mathcal{P}(K) \to \mathcal{P}(K), $$ which establishes a <a href="https://en.wikipedia.org/wiki/Pregeometry_(model_theory)" rel="nofollow noreferrer">pregeometry</a> on $K$.</p> <p>Any pregeometry yields a notion of dimension, say:</p> <p>$$\text{dim} (K) = \min \{|A|: A \subset K \text{ and } \text{cl}(A) = K\}$$ I am interested in some natural properties shared by dimensions induced by pregeometries.</p> <p>What kind of properties am I looking for? An example is the following:</p> <blockquote> <p>Suppose $K = \bigcup K_i$ (non-redundant increasing chain) and that its dimension is infinite, is it true that $\text{dim}(K) = \sum_i \text{dim}(K_i)$?</p> </blockquote> <p>I already know that there are many results like "trivial geometries are modular" but this is not the kind of result I am looking for. I am looking for structural properties of dimension because I am interested in giving an axiomatic definition of dimension.</p>
Jay Kangel
8,684
<p>Marcel van de Vel's "Theory of Convex Structures" has material on dimension theory. Currently I do not have access to the book and it's been a very long time since I read it. Here is a copy of the table of contents. The second and third from the last items may be of interest.</p> <p>Table of Contents</p> <p>Introduction. List of Frequent Symbols. I. Abstract Convex Structures. Basic concepts. The Hull operator. Half-spaces and separation. Interval spaces. Base-point orders. Modular spaces. Bryant-Webster spaces. II. Convex Invariants. Classical convex invariants. Invariants and product spaces. Invariants in other constructions. Infinite combinatorics. Tverberg numbers. III. Topological Convex Structures. Topology and convexity on the same set. Continuity of the Hull operator. Uniform convex structures. Topo-convex separation. Intrinsic topology. IV. Miscellaneous. Embedding Bryant-Webster spaces into vector spaces. Extremality, pseudo-boundary and pseudo-interior. Continuous selection. Dimension theory. Dimension and convex invariants. Fixed points.</p>
2,967,626
<p>I have a graph that is 5280 units wide, and 5281 is the length of the arc.</p> <p><img src="https://i.stack.imgur.com/4rFK4.png"></p> <p>Knowing the width of this arc, and how long the arc length is, how would I calculate exactly how high the highest point of the arc is from the line at the bottom?</p>
Blue
409
<p>Note that <span class="math-container">$b=z$</span> violates the strict inequality (the right-hand side becomes zero). Consequently, we have <span class="math-container">$0&lt;b &lt; z$</span>, which allows us to write </p> <blockquote> <p><span class="math-container">$$b=z \sin\theta \tag{1}$$</span></p> </blockquote> <p>for some <span class="math-container">$0^\circ &lt; \theta &lt; 90^\circ$</span>. Then the square root reduces immediately to <span class="math-container">$\cos\theta$</span>, and your inequality simplifies to <span class="math-container">$$0 &gt; 2 r^2 z \sin^2\theta - z \left( 2 r^2 - 2 r z \sin\theta\cos\theta \right) \quad\to\quad 0 &gt; 2 r z \cos\theta\;(z\sin\theta - r \cos\theta) \tag{2}$$</span> Now, since <span class="math-container">$z$</span>, <span class="math-container">$r$</span>, <span class="math-container">$\cos\theta$</span> are strictly-positive, <span class="math-container">$(2)$</span> implies <span class="math-container">$0 &gt; z \sin\theta - r\cos\theta$</span>, so that</p> <blockquote> <p><span class="math-container">$$r &gt; z \tan\theta \tag{3}$$</span></p> </blockquote>
1,866,801
<p>Let $A$ be an infinite set, $B\subseteq A$ and $a\in B$. Let $X\subseteq \mathcal{P}(A)$ be an infinite family of subsets of $A$ such that $a\in \bigcap X$.</p> <p>Suppose $\bigcap X\subseteq B$. Is it possible that, for every non-empty finite subfamily $Y\subset X$, $\bigcap Y \not\subseteq B$ ? </p> <p>Thanks for your help</p>
Jack D'Aurizio
44,121
<p>There is an interesting trick: you may couple the two equations by writing $$ e^{ix}+e^{iy} = i \tag{1}$$ hence $e^{ix}$ and $e^{iy}$, which are two points on the unit circle, are symmetric with respect to the imaginary axis. By imposing that their sum has unit norm, we clearly get $\{x,y\}=\left\{\frac{\pi}{6},\frac{5\pi}{6}\right\}$:</p> <p><a href="https://i.stack.imgur.com/h56FI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h56FI.png" alt="enter image description here"></a></p>
3,097,640
<p>So I have the following problem: $a^2 + b^2 = c^5 + c$. I want to prove that the equation has infinitely many relatively prime integer solutions.</p> <p>What I did first was factor the right side to get $c(c^4+1)$.</p>
individ
128,505
<p>In the equation:</p> <p><span class="math-container">$$X^2+Y^2=Z^5+Z$$</span></p> <p>I think this formula should be written in a more general form:</p> <p><span class="math-container">$$Z=a^2+b^2$$</span></p> <p><span class="math-container">$$X=a(a^2+b^2)^2+b$$</span></p> <p><span class="math-container">$$Y=b(a^2+b^2)^2-a$$</span></p> <p>And yet another formula:</p> <p><span class="math-container">$$Z=\frac{a^2+b^2}{2}$$</span></p> <p><span class="math-container">$$X=\frac{(a-b)(a^2+b^2)^2-4(a+b)}{8}$$</span></p> <p><span class="math-container">$$Y=\frac{(a+b)(a^2+b^2)^2+4(a-b)}{8}$$</span></p> <p><span class="math-container">$a,b$</span> are arbitrary integers (chosen so that the fractions are integers).</p> <p>Solutions can also be written as follows:</p> <p><span class="math-container">$$Z=\frac{(a^2+b^2)^2}{2}$$</span></p> <p><span class="math-container">$$X=\frac{((a^2+b^2)^4+4)a^2+2((a^2+b^2)^4-4)ab-((a^2+b^2)^4+4)b^2}{8}$$</span></p> <p><span class="math-container">$$Y=\frac{((a^2+b^2)^4-4)a^2-2((a^2+b^2)^4+4)ab-((a^2+b^2)^4-4)b^2}{8}$$</span></p> <p>where <span class="math-container">$a,b$</span> are any integers.</p> <p>Well, a simple solution:</p> <p><span class="math-container">$$Z=(a^2+b^2)^2$$</span></p> <p><span class="math-container">$$X=a^2+2(a^2+b^2)^4ab-b^2$$</span></p> <p><span class="math-container">$$Y=(a^2+b^2)^4a^2-2ab-(a^2+b^2)^4b^2$$</span></p>
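These identities can be machine-checked with exact integer arithmetic. A small sketch verifying the first family and the last ("simple") family — the two without denominators — over a grid of integers (the function names are mine):

```python
# Check X^2 + Y^2 == Z^5 + Z for two of the parametric families above.
def family_1(a, b):
    Z = a**2 + b**2
    X = a * Z**2 + b
    Y = b * Z**2 - a
    return X, Y, Z

def family_simple(a, b):
    Z = (a**2 + b**2)**2
    w = (a**2 + b**2)**4
    X = a**2 + 2 * w * a * b - b**2
    Y = w * a**2 - 2 * a * b - w * b**2
    return X, Y, Z

for a in range(-5, 6):
    for b in range(-5, 6):
        for fam in (family_1, family_simple):
            X, Y, Z = fam(a, b)
            assert X**2 + Y**2 == Z**5 + Z
print("both families verified for |a|, |b| <= 5")
```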
903,117
<p>I am trying to evaluate $$\int_{-\infty}^{\infty} \frac{\sin(x)^2}{x^2} dx $$ Would a contour work? I have tried using a contour but had no success. Thanks.</p> <p>Edit: About 5 minutes after posting this question I suddenly realised how to solve it. Therefore, sorry about that. But thanks for all the answers anyways.</p>
Empy2
81,790
<p>What happens if you split $\cos2x$ into $e^{2ix}$ and $e^{-2ix}$? The two pieces need different contours - one above the real line, the other below.</p>
651,707
<p>Let $A=(a_{ij})\in \mathbb{M}_n(\mathbb{R})$ be defined by</p> <p>$$ a_{ij} = \begin{cases} i, &amp; \text{if } i+j=n+1 \\ 0, &amp; \text{ otherwise} \end{cases} $$ Compute $\det (A)$</p> <hr> <blockquote> <p>After calculation I get that it may be $(-1)^{n-1}n!$. Am I right?</p> </blockquote>
Danny obama
123,792
<p>Take $n=2$; then, according to the given conditions,</p> <p>$$A=\begin{bmatrix} 0 &amp; 1\\ 2 &amp; 0\\\end{bmatrix}$$ so $\det(A)= -2=-2!$</p> <p>For $n=3$, $\det(A)= -6=-3!$</p> <p>For $n=4$, $\det(A)=24=4!$</p> <p>This suggests the general formula for the determinant of this type of matrix: $$(-1)^{n(n-1)/2}\,n!$$</p> <p>In particular, your guess $(-1)^{n-1}n!$ already fails at $n=4$. I think you can verify the general formula easily. Thanks.</p>
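A quick brute-force confirmation of the sign pattern (a sketch using the Leibniz permutation expansion, so only small $n$ are feasible):

```python
# Verify det(A) = (-1)^{n(n-1)/2} * n! for the matrix with a_ij = i when
# i + j = n + 1 and 0 otherwise (1-based indices), for small n.
from itertools import permutations
from math import factorial

def det(M):
    # Leibniz expansion: sum over permutations of sign * product of entries
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        # sign of the permutation via its inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += (-1) ** inv * prod
    return total

for n in range(1, 7):
    A = [[i + 1 if (i + 1) + (j + 1) == n + 1 else 0 for j in range(n)]
         for i in range(n)]
    assert det(A) == (-1) ** (n * (n - 1) // 2) * factorial(n)
print("formula verified for n = 1..6")
```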
381,309
<p>Let <span class="math-container">$T\in B(\mathcal{H} \otimes \mathcal{H})$</span> where <span class="math-container">$\mathcal{H}$</span> is a Hilbert space. We can define operators <span class="math-container">$$T_{[12]}= T \otimes 1;\quad T_{[23]}= 1 \otimes T$$</span> and if <span class="math-container">$\Sigma: \mathcal{H} \otimes \mathcal{H} \to \mathcal{H} \otimes \mathcal{H}$</span> is the &quot;flip&quot; map, then we can define <span class="math-container">$$T_{[13]}= \Sigma_{[23]}T_{[12]}\Sigma_{[23]}= \Sigma_{[12]}T_{[23]}\Sigma_{[12]}$$</span></p> <p><strong>Question</strong>: Given <span class="math-container">$S,T \in B(\mathcal{H} \otimes \mathcal{H})$</span>, is it true that <span class="math-container">$$(ST)_{[13]}= S_{[13]}T_{[13]}?$$</span></p> <p>I attempted this as follows:</p> <p>We know that the algebraic tensor product <span class="math-container">$B(\mathcal{H}) \odot B(\mathcal{H})$</span> is weak<span class="math-container">$^*$</span>-dense (= <span class="math-container">$\sigma$</span>-weakly dense) in <span class="math-container">$B(\mathcal{H} \otimes \mathcal{H})$</span>. It is easy to see that the identity holds for <span class="math-container">$S,T \in B(\mathcal{H}) \odot B(\mathcal{H})$</span>.</p> <p>Can I conclude from this that the equality holds for all <span class="math-container">$S,T \in B(\mathcal{H}) \overline{\otimes} B(\mathcal{H})= B(\mathcal{H}\otimes \mathcal{H})$</span> (here, the first tensorproduct is the von-Neumann algebraic tensor product).</p> <p>It is natural to try to use results involving weak<span class="math-container">$^*$</span>-continuity and Kaplansky-density-like results, but I'm having trouble finishing the proof. Any ideas?</p>
Alex Ravsky
43,954
<p>For any infinite group <span class="math-container">$G$</span> we can easily construct a Vitali subset <span class="math-container">$V$</span> of <span class="math-container">$G$</span>. Indeed, pick an arbitrary countably infinite subgroup <span class="math-container">$H$</span> of <span class="math-container">$G$</span> and let <span class="math-container">$V$</span> be a subset of <span class="math-container">$G$</span> which intersects each right coset <span class="math-container">$Hg$</span> of <span class="math-container">$H$</span> in exactly one element. Then <span class="math-container">$G$</span> is a disjoint union of a countably infinite family <span class="math-container">$\{hV:h\in H\}$</span> consisting of left translation copies of the set <span class="math-container">$V$</span>. Let <span class="math-container">$\mu$</span> be any countably additive left-invariant probability measure on <span class="math-container">$G$</span>. Suppose for the sake of contradiction that <span class="math-container">$V$</span> is measurable with respect to <span class="math-container">$\mu$</span>. If <span class="math-container">$\mu(V)=0$</span> then <span class="math-container">$\mu(G)=0$</span>, a contradiction. 
If <span class="math-container">$\mu(V)&gt;0$</span> then <span class="math-container">$\mu(G)$</span> is infinite, a contradiction.</p> <p>Remark that the above construction of <span class="math-container">$V$</span> works even for amenable groups, because the left-invariant probability measure required by amenability is only required to be <em>finitely</em> additive.</p> <p>Also remark that the main result of the paper [BGR] easily implies that if <span class="math-container">$G$</span> is a meager Hausdorff (para)topological group then the set <span class="math-container">$V$</span> can be constructed to be nowhere dense in <span class="math-container">$G$</span>.</p> <p><em>References</em></p> <p>[BGR] Taras Banakh, Igor Guran, Alex Ravsky, <em><a href="http://arxiv.org/abs/0908.2225" rel="nofollow noreferrer">Characterizing meager paratopological groups</a></em>, Applied general topology <strong>12</strong>:1 (2011) 27–33.</p>
119,532
<p>I have a <code>ListAnimate</code> where the frames consist of graphics made of lines and points. I want to make sure that, when the animation is paused and then resumed, all frames appear normal again, so if you stop it to rotate the plot (it's 3D), after a loop it looks normal again.</p> <p>My code is the following:</p> <pre><code>ListAnimate[ {Labeled[Show[Graphics3D[g0, ViewPoint -&gt; Front], ImageSize -&gt; Full], S0, Top], Labeled[Show[Graphics3D[g1, ViewPoint -&gt; Front], ImageSize -&gt; Full], S1, Top], ..., Labeled[Show[Graphics3D[g51, ViewPoint -&gt; Front], ImageSize -&gt; Full], S51, Top]}, 4, AnimationRunning -&gt; False] </code></pre> <p>where g1, ..., g51 are the images and S1, ..., S51 are timestamps that I print on the top.</p> <p>I read in another answer that setting <code>PreserveImageOptions -&gt; False</code> should take care of this. However, the option doesn't do anything, I tried all possible placing of it as well as wrapping the whole thing in a <code>Manipulate</code> expression.</p>
ubpdqn
1,997
<p>This is just illustrative.</p> <pre><code>f[z_] := ReIm[z + 1/z] Manipulate[ ParametricPlot[{f[Complex @@ v + r Exp[I t]], ReIm[Complex @@ v + r Exp[I t]]}, {t, 0, 2 Pi}, PlotRange -&gt; {{-5, 5}, {-5, 5}}, Epilog -&gt; {Text[v, {-3, 3}]}, AspectRatio -&gt; Automatic, Frame -&gt; True, PlotLabel -&gt; (Complex @@ v + r Exp[I t])], {{v, {0, 0}}, {-4, -4}, {4, 4}, Locator}, {r, 1, 3}] </code></pre> <p><a href="https://i.stack.imgur.com/23a9V.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/23a9V.gif" alt="enter image description here"></a></p> <p>I have voted for MarcoB's answer.</p>
3,238,054
<blockquote> <p>Find parameter <span class="math-container">$a$</span> for which <span class="math-container">$$\frac{ax^2+3x-4}{a+3x-4x^2}$$</span> takes all real values for <span class="math-container">$x \in \mathbb{R}$</span></p> </blockquote> <p>I have equated the function to a real value, say, k which gets me a quadratic in x. I have then put <span class="math-container">$D\geq 0$</span> (since <span class="math-container">$x \in \mathbb{R}$</span>) which gets me <span class="math-container">$a \geq -9/16$</span></p> <p>How do I proceed further to get other parameter for <span class="math-container">$a $</span>? </p>
Fred
380,717
<p><span class="math-container">$ a \geq -9/16$</span> is not correct. For <span class="math-container">$ a &lt; -9/16$</span> the denominator <span class="math-container">$a+3x-4x^2$</span> has discriminant <span class="math-container">$9+16a&lt;0$</span> and hence no real roots, so the domain of the function <span class="math-container">$f(x)=\frac{ax^2+3x-4}{a+3x-4x^2}$</span> is <span class="math-container">$ \mathbb R.$</span></p>
3,341,471
<p>Let <span class="math-container">$M$</span> be a multiplication operator on <span class="math-container">$L^2(\mathbb{R})$</span> defined by <span class="math-container">$$Mf(x) = m(x) f(x)$$</span> where <span class="math-container">$m(x)$</span> is continuous and bounded. Prove that <span class="math-container">$M$</span> is a bounded operator on <span class="math-container">$L^2(\mathbb{R})$</span> and that its spectrum is given by <span class="math-container">$$\sigma(M) = \overline{\{m(x) : x \in \mathbb{R}\}}.$$</span> Can <span class="math-container">$M$</span> have eigenvalues? </p> <p>My partial answer is below:</p> <p>First, observe that for <span class="math-container">$f \in L^2(\mathbb{R})$</span>, <span class="math-container">$$\| Mf\|^2 = \|m(x) f(x) \|^2_{L^2} = \int_\mathbb{R} (m(x))^2 (f(x))^2 \leq \int_\mathbb{R} \|m\|_\infty^2 (f(x))^2 = \|m\|_\infty^2 \|f\|_{L^2}^2.$$</span> Since <span class="math-container">$m$</span> is bounded, <span class="math-container">$\|m\|_\infty = R$</span> for some constant <span class="math-container">$R$</span>, so <span class="math-container">$M$</span> is bounded. </p> <p>Define the set <span class="math-container">$X$</span> as <span class="math-container">$X = \overline{\{m(x) : x \in \mathbb{R}\}}$</span>. I show <span class="math-container">$X \subset \sigma(M)$</span> by contrapositive. Suppose <span class="math-container">$\lambda \in \rho(M)$</span>. Then <span class="math-container">$(M-\lambda I)$</span> is invertible. For <span class="math-container">$g\in L^2(\mathbb{R})$</span>, <span class="math-container">$(M-\lambda I)g(x) = m(x)g(x) - \lambda g(x)$</span>. Clearly, the inverse of this operator is <span class="math-container">$\frac{1}{M - \lambda I}$</span>. 
As the inverse is well-defined and bounded, <span class="math-container">$m(x)g(x) - \lambda g(x) \not= 0$</span>, which implies that <span class="math-container">$\lambda \not= m(x)$</span> for any <span class="math-container">$x\in\mathbb{R}$</span>. Thus, <span class="math-container">$\lambda \not\in X$</span>. Since the spectrum is closed, it follows that <span class="math-container">$X \subseteq \sigma(M)$</span>. </p> <p>I'm not sure how to proceed to show that <span class="math-container">$\sigma(M) \subseteq X$</span>. </p>
daw
136,544
<p>Your reasoning that <span class="math-container">$X\subset \sigma(M)$</span> is not correct. Your proof only yields that the spectrum of <span class="math-container">$M$</span> contains values <span class="math-container">$\lambda$</span> for which <span class="math-container">$\{x: m(x)=\lambda\}$</span> has positive measure: The equation <span class="math-container">$(M-\lambda I)g=f$</span> implies <span class="math-container">$(m(x)-\lambda)g(x)=f(x)$</span> only for almost all <span class="math-container">$x$</span>.</p> <p>Here is a more expanded proof. Let <span class="math-container">$x_0$</span> be given and suppose <span class="math-container">$m(x_0)\in \rho(M)$</span>. Take <span class="math-container">$\epsilon&gt;0$</span>. Since <span class="math-container">$m$</span> is continuous, there is <span class="math-container">$\delta&gt;0$</span> such that <span class="math-container">$|m(x)-m(x_0)| \le \epsilon$</span> for all <span class="math-container">$|x-x_0|\le\delta$</span>. Define <span class="math-container">$f(x):=\chi_{(x_0-\delta,x_0+\delta)}(x)$</span>. Then there is <span class="math-container">$g$</span> such that <span class="math-container">$(M-m(x_0)I)g=f$</span> or <span class="math-container">$(m(x)-m(x_0))g(x)=f(x)$</span> for almost all <span class="math-container">$x$</span>. This implies <span class="math-container">$m(x)\ne m(x_0)$</span> for almost all <span class="math-container">$x$</span>, as otherwise the equation <span class="math-container">$(M-m(x_0)I)g=f$</span> would have multiple solutions. Then <span class="math-container">$$ |g(x)| = |m(x)-m(x_0)| ^{-1} f(x)\ge \epsilon^{-1} f(x) $$</span> for almost all <span class="math-container">$x$</span>, which implies <span class="math-container">$\|g\|_{L^2} \ge \epsilon^{-1} \|f\|_{L^2}$</span>. 
Now, <span class="math-container">$\epsilon&gt;0$</span> was arbitrary, and <span class="math-container">$(M-m(x_0)I)^{-1}$</span> cannot be bounded, hence <span class="math-container">$m(x_0)\in \sigma(M)$</span>.</p>
4,320,209
<p><em>I am stuck on the following two questions on matrix Algebra:</em></p> <p>Let <span class="math-container">$x$</span> be a vector. Find the following:</p> <ul> <li><span class="math-container">$\frac{\partial}{\partial x}||x\otimes x||^2$</span>, here ||.|| denotes Euclidean norm of a vector and <span class="math-container">$\otimes$</span> denotes the kroneckar product.</li> <li><span class="math-container">$\frac{\partial}{\partial x^T}(\frac{x}{||x||})$</span>.</li> </ul> <p><strong>My try</strong>: <strong>(i)</strong> Let us take <span class="math-container">$x=(x_1,x_2,\dots,x_n)$</span>. Then <span class="math-container">$x\otimes x=\begin{pmatrix}x_1^2 &amp;x_1x_2&amp; x_1x_3 &amp;\dots &amp; x_1x_n \\ x_2x_1 &amp;x_2^2&amp; x_2x_3 &amp;\dots &amp; x_2x_n \\ \dots &amp; \dots&amp; \dots &amp; \dots &amp; \dots\\ \dots &amp; \dots&amp; \dots &amp; \dots &amp; \dots\\ x_nx_1 &amp;x_nx_2&amp; x_nx_3 &amp;\dots &amp; x_n^2 \end{pmatrix}$</span></p> <p>I am confused about how do I take <span class="math-container">$||x\otimes x||$</span> and find its derivative in terms of <span class="math-container">$x$</span>.</p> <p><strong>(II)</strong> Since <span class="math-container">$||x||$</span> is a scalar, so we can write <span class="math-container">$\frac{\partial}{\partial x^T}(\frac{x}{||x||})=\frac{1}{||x||}\frac{\partial}{\partial x^T}(x)$</span>. Here I am stuck how to find the derivative of <span class="math-container">$x$</span> with respect to <span class="math-container">$x^T$</span>.</p> <p>Can someone please help me to clear my doubts and complete the above problems?</p>
Steph
993,428
<p>A slightly different approach is to note that the function you want to differentiate can be written as <span class="math-container">$$ \phi = \| \mathbf{x} \otimes \mathbf{x} \|_2^2 = \| \mathbf{x} \mathbf{x}^T \|^2_F $$</span></p> <p>The differential writes <span class="math-container">$$ d\phi = 2 \mathbf{x} \mathbf{x}^T:d(\mathbf{x} \mathbf{x}^T) = 4 \mathbf{x} \mathbf{x}^T \mathbf{x}:d\mathbf{x} $$</span> The gradient is thus <span class="math-container">$$ \frac{\partial \phi}{\partial \mathbf{x}} = 4 \| \mathbf{x} \|_2^2 \mathbf{x} $$</span></p>
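The closed form $4\|\mathbf{x}\|_2^2\,\mathbf{x}$ is easy to sanity-check against central finite differences, using the identity $\|\mathbf{x}\otimes\mathbf{x}\|_2^2=\sum_{i,j}(x_ix_j)^2$ (a plain-Python sketch; all names are mine):

```python
# Finite-difference check of  d/dx ||x (x) x||_2^2 = 4 ||x||^2 x.
def phi(x):
    # ||x (x) x||^2 = sum over all index pairs (i, j) of (x_i x_j)^2
    return sum((xi * xj) ** 2 for xi in x for xj in x)

def grad_formula(x):
    n2 = sum(xi * xi for xi in x)
    return [4 * n2 * xi for xi in x]

x = [0.3, -1.2, 2.0]
h = 1e-6
for i in range(len(x)):
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    fd = (phi(xp) - phi(xm)) / (2 * h)       # central difference
    assert abs(fd - grad_formula(x)[i]) < 1e-4
print("gradient formula matches central differences")
```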
3,745,097
<p>In my general topology textbook there is the following exercise:</p> <blockquote> <p>If <span class="math-container">$F$</span> is a non-empty countable subset of <span class="math-container">$\mathbb R$</span>, prove that <span class="math-container">$F$</span> is not an open set, but that <span class="math-container">$F$</span> may or may not be a closed set depending on the choice of <span class="math-container">$F$</span>.</p> </blockquote> <p>I already proved that <span class="math-container">$F$</span> is not open in the Euclidean topology, but why is the second part true?</p> <p>If <span class="math-container">$F$</span> is countable then <span class="math-container">$F \sim \mathbb N$</span>. This means that we can list the elements of <span class="math-container">$F$</span>, so we can write: <span class="math-container">$F=\{f_1,...,f_k,...\}$</span></p> <p><span class="math-container">$\mathbb R \setminus F= (-\infty, f_1) \cup \bigcup \limits _{i=1}^{\infty}(f_i,f_{i + 1})$</span></p> <p>We have that <span class="math-container">$(-\infty, f_1) \in \tau$</span> and that every <span class="math-container">$(f_i,f_{i + 1}) \in \tau$</span>. Because the union of elements of <span class="math-container">$\tau$</span> is also an element of <span class="math-container">$\tau$</span>, we have that <span class="math-container">$(-\infty, f_1) \cup \bigcup \limits _{i=1}^{\infty}(f_i,f_{i + 1}) \in \tau$</span>, then <span class="math-container">$F$</span> is closed.</p> <p>Is this correct, because the statement says that &quot;may or may not be a closed set depending on the choice of <span class="math-container">$F$</span>&quot;?</p>
Henno Brandsma
4,280
<p>Well <span class="math-container">$\Bbb Z$</span> is countable and closed, and <span class="math-container">$\Bbb Q$</span> is countable and not closed (even dense). So both are possible for countable sets. All that was asked for, essentially, is two examples.</p>
2,010,329
<p>The function under consideration is:</p> <p>$$y = \int_{x^2}^{\sin x}\cos(t^2)\mathrm d t$$</p> <p>Question asks to find the derivative of the following function. I let $u=\sin(x)$ and then $\tfrac{\mathrm d u}{\mathrm d x}=\cos(x)$. Solved accordingly but only to get the answer as $$ \cos(x)\cos(\sin^2(x))-\cos(x)\cos(x^4) $$ but the answer is given as: $$ \cos(x)\cos(\sin^2(x))-2x\cdot\cos(x^4) $$</p> <p>May I know where I went wrong? Is my substitution wrong in the first place?</p>
Lutz Lehmann
115,115
<p>You have, in fundamental theorem terms, the expression $y=F(\sin x)-F(x^2)$ where $F'(t)=f(t)=\cos(t^2)$. Thus $$ y'(x)=F'(\sin x)·\cos x-F'(x^2)·2x=f(\sin x)·\cos x-f(x^2)·2x $$ per the chain rule.</p>
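A numeric sanity check of this chain-rule answer (a sketch: $y(x)$ is evaluated with a composite Simpson rule and then differentiated by a central difference; all names are mine):

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson rule; n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

def y(x):
    # y(x) = integral from x^2 to sin(x) of cos(t^2) dt
    return simpson(lambda t: math.cos(t * t), x * x, math.sin(x))

def y_prime(x):
    # the chain-rule answer from the text
    return math.cos(x) * math.cos(math.sin(x) ** 2) - 2 * x * math.cos(x ** 4)

x0 = 0.7
h = 1e-5
fd = (y(x0 + h) - y(x0 - h)) / (2 * h)
assert abs(fd - y_prime(x0)) < 1e-6
print("chain-rule answer matches a numeric derivative at x = 0.7")
```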
2,010,329
<p>The function under consideration is:</p> <p>$$y = \int_{x^2}^{\sin x}\cos(t^2)\mathrm d t$$</p> <p>Question asks to find the derivative of the following function. I let $u=\sin(x)$ and then $\tfrac{\mathrm d u}{\mathrm d x}=\cos(x)$. Solved accordingly but only to get the answer as $$ \cos(x)\cos(\sin^2(x))-\cos(x)\cos(x^4) $$ but the answer is given as: $$ \cos(x)\cos(\sin^2(x))-2x\cdot\cos(x^4) $$</p> <p>May I know where I went wrong? Is my substitution wrong in the first place?</p>
RGS
329,832
<p>Let us say that $F $ is a primitive of $cos(t^2) $. Then $y $ can be written as</p> <p>$$y = \int_{x^2}^{sin\ x} cos(t^2) dt = F(sin\ x) - F(x^2) $$</p> <p>Hence you can differentiate $y $ by differentiating the rightmost expression and by making use of the fact that $F'(t) = cos(t^2) $.</p> <p>$$y' = (F(sin\ x))' - (F(x^2))' = (cos\ x)\cdot F'(sin\ x) - 2x F'(x^2) = cos\ x\cdot(cos(sin^2 x)) - 2x(cos\ x^4)$$</p> <p>Without seeing all your calculations it is hard to tell where you went wrong, but it must have been a minor error when differentiating everything. Maybe a chain rule that you got wrong?</p>
220,907
<p>If I have these three lists:</p> <pre><code>list1={{0.01,87.,0.},{0.03,87.,0.18353},{0.1,87.,0.494987},{0.3,87.,0.899803},{1.,87.,1.08076},{3.,87.,1.10593},{10.,87.,1.04781},{10.,87.,1.02449},{10.,87.,0.964193},{30.,87.,1.0602},{30.,87.,1.04075},{30.,87.,1.05987},{100.,87.,1.14661},{100.,87.,1.00639},{100.,87.,1.09384},{300.,87.,1.067},{300.,87.,1.15047},{300.,87.,1.10715},{1000.,87.,1.05152},{1000.,87.,1.06942},{1000.,87.,1.17143},{3000.,87.,1.12162},{10000.,87.,1.13136}} list2={{0.01,75.,0.},{0.03,75.,0.},{0.1,75.,0.0959691},{0.3,75.,0.37954},{1.,75.,0.678807},{3.,75.,0.90385},{10.,75.,0.965262},{10.,75.,1.01025},{10.,75.,1.01675},{30.,75.,1.04836},{30.,75.,1.11345},{30.,75.,1.09146},{100.,75.,1.16961},{100.,75.,1.19018},{100.,75.,1.16968},{300.,75.,1.20834},{300.,75.,1.22955},{300.,75.,1.19569},{1000.,75.,1.25479},{1000.,75.,1.32295},{1000.,75.,1.22151},{3000.,75.,1.28794},{10000.,75.,1.25897}} list3={{0.01,55.,0.},{0.03,55.,0.},{0.1,55.,0.},{0.3,55.,0.},{1.,55.,0.},{3.,55.,0.},{10.,55.,0.},{10.,55.,0.},{10.,55.,0.},{30.,55.,0.},{30.,55.,0.},{30.,55.,0.},{100.,55.,0.},{100.,55.,0.},{100.,55.,0.},{300.,55.,0.0721335},{1000.,55.,0.214175},{3000.,55.,0.748622},{10000.,55.,1.05191}} </code></pre> <p>How can I make the list to combine such as they get organized from smallest to highest number based on the first element of the list (which go from 0.01 to 10.000) as to get <code>{0.01,87,0,0.01,75,0,0.01,55,0,.......10000.,87.,1.13136,10000.,75.,1.25897,10000.,55.,1.05191}</code></p> <p>Thank you!</p>
kglr
125
<pre><code>Flatten @ SortBy[{First}] @ Join[list1, list2, list3] </code></pre> <blockquote> <pre><code>{0.01,87.,0.,0.01,75.,0.,0.01,55.,0.,0.03,87.,0.18353,0.03,75.,0.,0.03,55., 0., 0.1, 87.,0.494987,0.1,75.,0.0959691,0.1,55.,0.,0.3,87.,0.899803,0.3,75.,0.37954,0.3,55., 0., 1., 87.,1.08076,1.,75.,0.678807,1.,55.,0.,3.,87.,1.10593,3.,75.,0.90385,3.,55.,0., 10., 87.,1.04781,10.,87.,1.02449,10.,87.,0.964193,10.,75.,0.965262,10.,75.,1.01025, 10., 75., 1.01675,10.,55.,0.,10.,55.,0.,10.,55.,0.,30.,87.,1.0602,30.,87.,1.04075, 30., 87., 1.05987,30.,75.,1.04836,30.,75.,1.11345,30.,75.,1.09146,30.,55.,0.,30.,55.,0., 30., 55., 0.,100.,87.,1.14661,100.,87.,1.00639,100.,87.,1.09384,100.,75.,1.16961, 100., 75., 1.19018,100.,75.,1.16968,100.,55.,0.,100.,55.,0.,100.,55.,0.,300.,87.,1.067, 300., 87., 1.15047,300.,87.,1.10715,300.,75.,1.20834,300.,75.,1.22955,300.,75.,1.19569, 300., 55., 0.0721335,1000.,87.,1.05152,1000.,87.,1.06942,1000.,87.,1.17143, 1000., 75., 1.25479,1000.,75.,1.32295,1000.,75.,1.22151,1000.,55.,0.214175, 3000., 87., 1.12162,3000.,75.,1.28794,3000.,55.,0.748622,10000.,87.,1.13136, 10000., 75., 1.25897,10000.,55.,1.05191} </code></pre> </blockquote>
4,180,344
<p>I started learning from Algebra by Serge Lang. On page 5, he presented an equation:</p> <blockquote> <p>Let <span class="math-container">$G$</span> be a commutative monoid, and <span class="math-container">$x_1,\ldots,x_n$</span> elements of <span class="math-container">$G$</span>. Let <span class="math-container">$\psi$</span> be a bijection of the set of integers <span class="math-container">$(1,\ldots,n)$</span> onto itself. Then <span class="math-container">$$\prod_{\nu = 1}^n x_{\psi(\nu)} = \prod_{\nu=1}^n x_\nu$$</span></p> </blockquote> <p>In this equation, if the mapping is <span class="math-container">$\psi(\nu) = \nu $</span>, the product is the same value; I don't understand why <span class="math-container">$x_{\psi(\nu)}$</span> is indexed by a mapping rather than by a number.</p>
Shaun
104,041
<p>This is an easy exercise if you use induction on <span class="math-container">$n$</span>. Here <span class="math-container">$\psi$</span> represents a different order of the elements of <span class="math-container">$\{1,\dots, n\}$</span> and hence a different order of the <span class="math-container">$x_\nu$</span> (assuming <span class="math-container">$\psi$</span> is not the identity map, although the result still holds in that case).</p>
1,742,418
<p>So far I have this:</p> <p>What is the chance that at least one die is a 2?</p> <p>There is a $\frac{5}{6}$ chance that a single die does not show a 2, so the chance that none of the four dice shows a 2 is $\frac{5^4}{6^4}$.</p> <p>The probability of getting at least one 2 is $1 - \frac{5^4}{6^4} $</p> <p>Now I have no idea how to proceed from here.</p>
woogie
330,017
<p>You are correct for the first part. You just need to simplify your expression to a single fraction.<br> $\frac{6^4-5^4}{6^4}=\frac{1296-625}{1296}$<br> $=\frac{671}{1296}$ </p>
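The same computation in exact rational arithmetic (a small sketch):

```python
# Exact check: P(at least one 2 among four dice) = 671/1296.
from fractions import Fraction

p_no_two = Fraction(5, 6) ** 4      # no die shows a 2
p_at_least_one = 1 - p_no_two
assert p_at_least_one == Fraction(671, 1296)
print(p_at_least_one)               # 671/1296
```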
215,061
<p>Is it possible to do a 2D plot taking the point coordinates from separate lists of x and y values? These lists were produced by calculations within the same notebook. For a small number of points it is easy to enter x &amp; y values by hand but it gets tedious as the number of points grows (I've done it.). If this is possible, I cannot find it in the documentation. Presently, I have over 40 points to plot and I expect the lists to grow. Any advice would be appreciated.</p>
Mr.Wizard
121
<p>Using your original definitions:</p> <pre><code>fn = Function @@ {replace}; fn[2] </code></pre> <blockquote> <pre><code>{(2 αu)/(2 + ku), 2 Du, (2 αv)/(2 + kv)} </code></pre> </blockquote> <pre><code>fn /@ {1, 2} </code></pre> <blockquote> <pre><code>{{αu/(1 + ku), Du, αv/(1 + kv)}, {(2 αu)/(2 + ku), 2 Du, (2 αv)/(2 + kv)}} </code></pre> </blockquote> <pre><code>fn @ {1, 2} </code></pre> <blockquote> <pre><code>{{αu/(1 + ku), (2 αu)/(2 + ku)}, {Du, 2 Du}, {αv/(1 + kv), (2 αv)/(2 + kv)}} </code></pre> </blockquote>
4,344,538
<p><strong>Lemma.</strong> Let <span class="math-container">$X$</span> be a topological vector space over <span class="math-container">$\mathbb{K}$</span> and <span class="math-container">$v\in X$</span>. Then the following mapping is continuous <span class="math-container">$$\begin{align*} \varphi _v:\mathbb{K}&amp;\rightarrow X \\ \xi &amp;\mapsto \xi v. \end{align*}$$</span></p> <p><em>Proof.</em> For any <span class="math-container">$\xi\in\mathbb{K}$</span>, we have <span class="math-container">$\varphi_v(\xi)=M(\psi _v\left ( \xi \right ))$</span>, where <span class="math-container">$\psi_v:\mathbb{K}\rightarrow\mathbb{K}\times X $</span> given by <span class="math-container">$\psi_v(\xi)=(\xi,v)$</span> is clearly continuous by definition of the product topology, and <span class="math-container">$M:\mathbb{K}\times X\rightarrow X$</span> is the scalar multiplication in the topological vector space <span class="math-container">$X$</span>, which is continuous by definition of a topological vector space. Hence, <span class="math-container">$\varphi_v$</span> is continuous as a composition of continuous mappings.</p> <p>My question is:</p> <ul> <li>Why is <span class="math-container">$\psi_v$</span> continuous by definition of the product topology? I have looked for the definition here <a href="https://en.wikipedia.org/wiki/Product_topology#Definition" rel="nofollow noreferrer">Product topology</a> but there is no information that I need.</li> </ul> <p>Can someone help me? Thanks.</p>
Kritiker der Elche
908,786
<p>Let <span class="math-container">$A,B, C$</span> be topological spaces. A function <span class="math-container">$\phi : A \to B \times C$</span> is continuous iff the functions <span class="math-container">$p_B \circ \phi : A \to B$</span> and <span class="math-container">$p_C \circ \phi : A \to C$</span> are continuous, where <span class="math-container">$p_B : B \times C \to B$</span> and <span class="math-container">$p_C : B \times C \to C$</span> are the projection maps. This is known as the universal property of the product topology on <span class="math-container">$B \times C$</span>.</p> <p>We have <span class="math-container">$p_{\mathbb K} \circ \psi_v = id_{\mathbb K}$</span> which is continuous. Moreover <span class="math-container">$p_X \circ \psi_v$</span> is the constant map with value <span class="math-container">$v$</span> which is also continuous.</p>
2,298,855
<p>Let $u,v\in H^1(\Omega )$ where $\Omega =B_2\backslash B_1$. How can I have continuity and coercivity of $$a(u,v)=\int_{\Omega }\nabla u\cdot \nabla v-\int_{B_2\backslash B_1}3^{-d}uv\ ?$$</p> <p>The best thing I get is $$|a(u,v)|\leq \|\nabla u\|_{L^2}\|\nabla v\|_{L^2}+3^{-d}\|u\|_{L^2}\|v\|_{L^2},$$ but I can't get $|a(u,v)|\leq C\|u\|_{H^1}\|v\|_{H^1}$. And for coercivity, I don't know.</p>
gerw
58,577
<p>The bilinear form $a$ is continuous: your estimate already gives $|a(u,v)| \le (1+3^{-d})\|u\|_{H^1}\|v\|_{H^1}$. But $a$ is not coercive: by plugging in the constant function $u = 1$, we find $a(u,u) = -3^{-d} \, |B_2 \setminus B_1| &lt; 0$. Hence, $a$ cannot be coercive.</p>
2,983,519
<p>I have to calculate the limit of this formula as <span class="math-container">$n\to \infty$</span>.</p> <p><span class="math-container">$$a_n = \frac{1}{\sqrt{n}}\bigl(\frac{1}{\sqrt{n+1}}+\cdots+\frac{1}{\sqrt{2n}}\bigl)$$</span></p> <p>I tried the Squeeze Theorem, but I get something like this:</p> <p><span class="math-container">$$\frac{1}{\sqrt{2}}\leftarrow\frac{n}{\sqrt{2n^2}}\le\frac{1}{\sqrt{n}}\bigl(\frac{1}{\sqrt{n+1}}+\cdots+\frac{1}{\sqrt{2n}}\bigl) \le \frac{n}{\sqrt{n^2+n}}\to1$$</span></p> <p>As you can see, the limits of two other sequences aren't the same. Can you give me some hints? Thank you in advance.</p>
user
505,767
<p>As an alternative by Stolz-Cesaro</p> <p><span class="math-container">$$\frac{b_n}{c_n} = \frac{\frac{1}{\sqrt{n+1}}+\cdots+\frac{1}{\sqrt{2n}}}{\sqrt n}$$</span></p> <p><span class="math-container">$$\frac{b_{n+1}-b_n}{c_{n+1}-c_n} = \frac{\frac{1}{\sqrt{2n+2}}+\frac{1}{\sqrt{2n+1}}-\frac{1}{\sqrt{n+1}}}{\sqrt{n+1}-\sqrt n}$$</span></p> <p>and</p> <p><span class="math-container">$$\frac{\frac{1}{\sqrt{2n+2}}+\frac{1}{\sqrt{2n+1}}-\frac{1}{\sqrt{n+1}}}{\sqrt{n+1}-\sqrt n}\frac{\sqrt{n+1}+\sqrt n}{\sqrt{n+1}+\sqrt n}=$$</span></p> <p><span class="math-container">$$\frac{\sqrt{n+1}+\sqrt n}{\sqrt{2n+2}}+\frac{\sqrt{n+1}+\sqrt n}{\sqrt{2n+1}}-\frac{\sqrt{n+1}+\sqrt n}{\sqrt{n+1}}\to\frac4{\sqrt 2}-2=2\sqrt 2-2$$</span></p>
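A direct numeric check that $a_n \to 2\sqrt 2-2 \approx 0.8284$ (a sketch):

```python
import math

def a(n):
    # a_n = (1/sqrt(n)) * sum_{k=1}^{n} 1/sqrt(n+k)
    return sum(1.0 / math.sqrt(n + k) for k in range(1, n + 1)) / math.sqrt(n)

limit = 2.0 * math.sqrt(2.0) - 2.0
for n in (100, 10000, 200000):
    # the Riemann-sum error of a_n behaves like O(1/n); use a generous bound
    assert abs(a(n) - limit) < 5.0 / n + 1e-6
print(a(200000), "vs", limit)
```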
189,308
<p>I have this problem: Find integration limits and compute the following integral.</p> <p>$$\iiint_D(1-z)\,dx\,dy\,dz \\ D = \{\;(x, y, z) \in R^3\;\ |\;\; x^2 + y^2 + z^2 \le a^2, z\gt0\;\}$$</p> <p>I can compute this as an indefinite integral but finding integration limits beats me. As indefinite integral the result looks like this (hopefully without any careless mistakes):</p> <p>$$\iiint(1-z)\,dx\, dy\, dz \\ = \iint(x(1-z) + C_x)\,dy\, dz\\ = \iint (x - xz + C_x)\,dy\, dz \\ = \int (xy - xyz + yC_x + C_y)\,dz\\ = xyz - \frac{xyz^2}{2} + yzC_x + zC_y + C_z$$</p>
Sasha
11,069
<p>Use linearity: $$ \int_D (1-z) \,\mathrm{d}V = \int_D 1 \,\mathrm{d}V - \int_D z \,\mathrm{d}V =\frac{2 \pi}{3} a^3 - \int_D z \,\mathrm{d}V $$ The latter integral is best evaluated in cylindrical coordinates $(x,y,z) = (r \sin(\phi), r \cos(\phi), z)$. The measure $dV$ factors into $dz$ times the measure on the disk $x^2+y^2 \leqslant a^2-z^2$, whose area is $\pi (a^2-z^2)$, thus $$ \int_D z\, \mathrm{d}V = \int_0^a z \left( \pi (a^2-z^2) \right) \,\mathrm{d} z = \frac{\pi}{4} a^4 $$</p>
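Both pieces can be confirmed numerically for, say, $a=1$ (a sketch; Simpson's rule is exact here since the integrands are cubic polynomials in $z$):

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson rule; n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

a = 1.0
# the slice of the half-ball at height z is a disk of area pi (a^2 - z^2)
moment = simpson(lambda z: z * math.pi * (a * a - z * z), 0.0, a)
assert abs(moment - math.pi * a**4 / 4) < 1e-9
total = simpson(lambda z: (1 - z) * math.pi * (a * a - z * z), 0.0, a)
assert abs(total - (2 * math.pi / 3 * a**3 - math.pi * a**4 / 4)) < 1e-9
print("volume and moment integrals agree with the closed forms")
```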
1,372,779
<p>Given that $x$ is a positive integer, find $x$ in $(E)$.</p> <p>$$\tag{E} j-n=x-n\cdot\left\lceil\frac{x}{n}\right\rceil$$ All $n, j, x$ are positive integers.</p>
syusim
138,951
<p><strong>Hint</strong>: it may help to get rid of the ceiling function by using the division algorithm and letting $x = kn + a$ for some minimal $a$.</p>
827,072
<p>How do I find the equation of a circle which passes through the points $(5,10), (-5,0),(9,-6)$ using the formula $(x-q)^2 + (y-p)^2 = r^2$?</p> <p>I know I need to use that formula but have no idea how to start. I have tried, but I don't think my answer is right.</p>
Fardad Pouran
76,758
<p>Suppose the points are $A,B,C$. Then intersect the equations of perpendicular bisectors of $AB$ and $BC$. This is the center of the desired circle. (with your notation $(p,q)$)</p> <p>Now calculate the distance between $(p,q)$ and $A$. Now $r$ is also found.</p> <p><img src="https://i.stack.imgur.com/iBPC0.jpg" alt="circle"></p>
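The perpendicular-bisector conditions form a $2\times 2$ linear system, since $|X-P|^2=|X-Q|^2$ is linear in $X$. A sketch solving it for the three given points (the helper names are mine):

```python
# Circle through (5,10), (-5,0), (9,-6) via intersecting perpendicular bisectors.
A = (5, 10); B = (-5, 0); C = (9, -6)

def bisector_row(P, Q):
    # |X-P|^2 = |X-Q|^2 reduces to the linear equation 2(Q-P).X = |Q|^2 - |P|^2
    return (2 * (Q[0] - P[0]), 2 * (Q[1] - P[1]),
            Q[0]**2 + Q[1]**2 - P[0]**2 - P[1]**2)

a1, b1, c1 = bisector_row(A, B)
a2, b2, c2 = bisector_row(B, C)
det = a1 * b2 - a2 * b1                 # Cramer's rule for the 2x2 system
q = (c1 * b2 - c2 * b1) / det
p = (a1 * c2 - a2 * c1) / det
r2 = (A[0] - q)**2 + (A[1] - p)**2
print(q, p, r2)                         # center ~ (3.8, 1.2), r^2 ~ 78.88
for P in (A, B, C):
    assert abs((P[0] - q)**2 + (P[1] - p)**2 - r2) < 1e-9
```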
3,379,756
<p>Why does the closed form of the summation <span class="math-container">$$\sum_{i=0}^{n-1} 1$$</span> equal <span class="math-container">$n$</span> instead of <span class="math-container">$n-1$</span>?</p>
cansomeonehelpmeout
413,677
<p><span class="math-container">$$\sum_{i=1}^{n-1}1=n-1$$</span> When <span class="math-container">$i$</span> starts at <span class="math-container">$0$</span> you get an extra term <span class="math-container">$$\sum_{i=0}^{n-1} 1=n$$</span></p>
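A one-line check of the two index ranges:

```python
# Index 0..n-1 gives n terms; index 1..n-1 gives n-1 terms.
n = 10
assert sum(1 for i in range(0, n)) == n        # i = 0, 1, ..., n-1
assert sum(1 for i in range(1, n)) == n - 1    # i = 1, ..., n-1
print("term counts confirmed")
```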
2,322,481
<p>Look at this limit. I think this equality is true, but I'm not sure.</p> <p>$$\lim_{k\to\infty}\frac{\sum_{n=1}^{k} 2^{2\times3^{n}}}{2^{2\times3^{k}}}=1$$ For example, for $k=3$ the ratio is $1.000000000014$.</p> <blockquote> <p>Is this limit <strong>mathematically correct</strong>?</p> </blockquote>
Barry Cipra
86,747
<p>Note that</p> <p>$$2^{2\times3^k}\leqslant\sum_{n=1}^k 2^{2\times3^n}\leqslant2^{2\times3^k}+(k-1)2^{2\times3^{k-1}}$$</p> <p>(both inequalities are strict once $k\geqslant3$), hence</p> <p>$$1\leqslant{\sum_{n=1}^k 2^{2\times3^n}\over2^{2\times3^k}}\leqslant1+{k-1\over2^{2(3^k-3^{k-1})}}=1+{k-1\over16^{3^{k-1}}}\lt1+{k\over2^k}$$</p> <p>where the final inequality is extremely crude, intended mainly to make it easy to see that the Squeeze Theorem tells us the limit as $k\to\infty$ is $1$.</p> <p>Ah, I see José Carlos Santos had much the same idea.</p>
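The bounds can be verified exactly with rational arithmetic (a sketch using Python's `Fraction`; note the right-hand bound is attained at $k=2$, so `<=` is used there):

```python
from fractions import Fraction

def ratio(k):
    # the quotient from the question, as an exact rational number
    s = sum(2 ** (2 * 3 ** n) for n in range(1, k + 1))
    return Fraction(s, 2 ** (2 * 3 ** k))

# squeeze bounds: 1 < ratio(k) <= 1 + (k-1)/16**(3**(k-1)) for k >= 2
for k in range(2, 7):
    r = ratio(k)
    assert 1 < r <= 1 + Fraction(k - 1, 16 ** (3 ** (k - 1)))
```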
2,322,481
<p>Look at this limit. I think this equality is true, but I'm not sure.</p> <p>$$\lim_{k\to\infty}\frac{\sum_{n=1}^{k} 2^{2\times3^{n}}}{2^{2\times3^{k}}}=1$$ For example, for $k=3$ the ratio is $1.000000000014$.</p> <blockquote> <p>Is this limit <strong>mathematically correct</strong>?</p> </blockquote>
zhw.
228,045
<p>Since the denominator $\to \infty,$ Stolz-Cesaro comes into play, and we consider</p> <p>$$\frac{ 2^{2\cdot 3^{k+1}}}{2^{2\cdot 3^{k+1}}- 2^{2\cdot 3^{k}}} = \frac{1}{1-2^{2\cdot3^{k}-2\cdot3^{k+1}}} = \frac{1}{1-2^{-4\cdot3^{k}}} \to 1.$$</p> <p>SC thus implies the limit is $1.$ </p>
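The difference quotient can be checked exactly with rational arithmetic (a small sketch; `sc_ratio` is my name for the Stolz–Cesàro quotient above):

```python
from fractions import Fraction

def sc_ratio(k):
    # (a_{k+1} - a_k) / (b_{k+1} - b_k) for a_k = sum, b_k = 2^(2*3^k)
    num = 2 ** (2 * 3 ** (k + 1))
    den = 2 ** (2 * 3 ** (k + 1)) - 2 ** (2 * 3 ** k)
    return Fraction(num, den)

for k in range(1, 6):
    # matches the simplified form 1 / (1 - 2^(-4*3^k)), which visibly tends to 1
    assert sc_ratio(k) == 1 / (1 - Fraction(1, 2 ** (4 * 3 ** k)))
    assert sc_ratio(k) > 1
```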
1,424,297
<p>Why is $a\overline bc + ab = ab + ac$? I think it has something to do with the rule $a + \overline a = 1$, right?</p>
k170
161,538
<p>$$\lim\limits_{n\to\infty} n^2\left(1-\cos\left(\frac{1}{n}\right)\right)$$ Let $m=\frac1n$, then $$\lim\limits_{m\to 0^+} \frac{1-\cos m}{m^2}=\lim\limits_{m\to 0^+} \frac{2\sin^2\left(\frac{m}{2}\right)}{m^2}$$ Let $k=\frac{m}{2}$, then $$\lim\limits_{k\to 0^+} \frac{2\sin^2 k}{4k^2}=\frac12\lim\limits_{k\to 0^+} \frac{\sin^2 k}{k^2}=\frac12$$</p>
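Numerically, the half-angle rewrite used in the derivation also sidesteps the floating-point cancellation in $1-\cos m$ for small $m$ (a sketch; function names are mine):

```python
import math

def direct(n):
    # n^2 (1 - cos(1/n)) computed literally
    return n * n * (1 - math.cos(1 / n))

def half_angle(n):
    # same quantity via 1 - cos m = 2 sin^2(m/2), as in the derivation
    m = 1 / n
    return 2 * math.sin(m / 2) ** 2 / (m * m)

for n in (10, 100, 10 ** 4):
    assert abs(direct(n) - half_angle(n)) < 1e-6
# the common value tends to 1/2
assert abs(half_angle(10 ** 8) - 0.5) < 1e-12
```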
675,857
<p>I am trying to figure out an easily understandable approach: given a small value of $\phi(n)$, list all possible $n$, with proof. I read the paper linked below, but it is really beyond my level to fathom.</p> <p>Attempt for $\phi(n)=110$:</p> <p>$$n=(2^x)\cdot(3^b)\cdot(11^c)\cdot(23^d)\quad\text{ since }\quad p-1 \mid \phi(n)=110$$ and $x =\{0,1\}$, $b=\{0,1\}$, $c=\{0,1,2\}$, $d=\{0,1\}$.</p> <p>So in total $2\cdot2\cdot3\cdot2 =24$ options to test whether $\phi(n)=110$.</p> <p>I am not sure whether this is enough to show that there are no other numbers.</p> <p>Look at this paper: <a href="http://arxiv.org/pdf/math/0404116v3.pdf" rel="nofollow">http://arxiv.org/pdf/math/0404116v3.pdf</a></p>
Will Jagy
10,400
<p>Note that $$ \phi(n) \geq \sqrt {\frac{n}{2}}, $$ so this is a finite search.</p> <p><a href="https://math.stackexchange.com/questions/301837/is-the-euler-phi-function-bounded-below">Is the Euler phi function bounded below?</a></p>
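The bound makes the search concrete: $\phi(n) \ge \sqrt{n/2}$ means $n \le 2\,\phi(n)^2$, so for $\phi(n)=110$ it suffices to test $n \le 2\cdot110^2 = 24200$. A brute-force sketch (my own trial-division totient, no libraries):

```python
def phi(n):
    # Euler's totient via trial division: phi(n) = n * prod(1 - 1/p) over primes p | n
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

bound = 2 * 110 ** 2          # phi(n) >= sqrt(n/2)  =>  n <= 2 * phi(n)^2
solutions = [n for n in range(1, bound + 1) if phi(n) == 110]
```

This finds exactly $n \in \{121, 242\}$.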
2,335,627
<p>Is the following statement true? If so, how can I prove it?</p> <blockquote> <p>If the power series $\sum_{n=0}^{\infty} a_n x^n$ converges for all $x \in (x_0, x_1)$ then it converges absolutely for all $x \in (-\max\{|x_0|, |x_1|\}, \max\{|x_0|, |x_1|\})$.</p> </blockquote>
Theo Bendit
248,286
<p>The trick is first showing absolute convergence at each point of $(x_0, x_1)$. Once you show that, then for any $y_0 \in (-\max\lbrace |x_0|, |x_1| \rbrace, \max\lbrace |x_0|, |x_1| \rbrace)$, we can find a $y_1 \in (x_0, x_1)$, with $|y_1| &gt; |y_0|$. Then, because of absolute convergence at $y_1$, we can use the comparison test: $$\sum |a_n| |y_0|^n \le \sum |a_n| |y_1|^n &lt; \infty,$$ so $\sum a_n y_0^n$ converges absolutely.</p> <p>Now to show absolute convergence. Take an arbitrary $y_0 \in (x_0, x_1)$ and choose $y_1$ in the same interval such that $|y_1| &gt; |y_0|$. Then, since $\sum a_n y_1^n$ converges, by the divergence test, we must have $a_n y_1^n \rightarrow 0$. It therefore follows that there exists an $N$ such that $$n \ge N \implies |a_n| |y^n_1| &lt; 1 \implies |a_n| |y^n_0| &lt; \left|\frac{y_0}{y_1}\right|^n.$$ Note that $|y_0/y_1| &lt; 1$, so we have a convergent series, so by the comparison test, $\sum a_n y_0^n$ converges absolutely.</p>
3,571,637
<p>I was reading Linear Algebra by Hoffman and Kunze, and encountered this in Theorem 8 of the chapter on Coordinates; the theorem is stated below:</p> <p><a href="https://i.stack.imgur.com/6xFlJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6xFlJ.png" alt="coo"></a></p> <p>What I don't get is the very bottom line (uniqueness), where it says "... it is clear that</p> <p><span class="math-container">$$ \alpha'_{j}=\sum_{i=1}^{n} P_{ij}\alpha_{i}." \tag{a}$$</span></p> <p>Is there any easy way to see why it is clear? I tried many ways, and what I found was that we start from scratch (the same way the proof starts, with <span class="math-container">$\scr \overline{B}$</span>), and then we find an invertible matrix, say <span class="math-container">$Q$</span>, which we don't know is equal to <span class="math-container">$P$</span> or not, such that property (a) above holds with <span class="math-container">$Q_{ij}$</span> instead of <span class="math-container">$P_{ij}$</span>. What we are left to show is that <span class="math-container">$P$</span> = <span class="math-container">$Q$</span>, and we currently have:</p> <p><span class="math-container">$$ x_{i}=\sum_{j=1}^{n} P_{ij} x'_{j}\tag{from (i)}$$</span> </p> <p>and,</p> <p><span class="math-container">$$ x_{i}=\sum_{j=1}^{n} Q_{ij} x'_{j}$$</span></p> <p>So together, </p> <p><span class="math-container">$$ \sum_{j=1}^{n} Q_{ij} x'_{j}=\sum_{j=1}^{n} P_{ij} x'_{j}$$</span></p> <p><span class="math-container">$$ \sum_{j=1}^{n} (Q_{ij} - P_{ij}) x'_{j}= 0$$</span></p> <p>The way I showed that this implies <span class="math-container">$P$</span> = <span class="math-container">$Q$</span> is by assuming <span class="math-container">$P - Q \neq 0^{n\times n}$</span> and finding a contradiction. 
Suppose <span class="math-container">$A := P - Q \neq 0^{n\times n}$</span>. Then we can choose a row <span class="math-container">$r$</span> whose <span class="math-container">$k$</span>-th entry is non-zero; plugging in <span class="math-container">$0$</span> for every entry other than the <span class="math-container">$k$</span>-th (and <span class="math-container">$1$</span> for the <span class="math-container">$k$</span>-th), we are left with something non-zero equal to <span class="math-container">$0$</span>, which is a contradiction. So <span class="math-container">$A = 0^{n\times n}$</span> and <span class="math-container">$P = Q$</span>, and since <span class="math-container">$(a)$</span> holds for <span class="math-container">$Q$</span> and <span class="math-container">$P = Q$</span>, it follows that <span class="math-container">$(a)$</span> holds for <span class="math-container">$P$</span>. </p> <p>I'm pretty sure the proof is correct, since we can plug in various values for <span class="math-container">$x'_{1}, ... , x'_{n} \in F$</span>, and since <span class="math-container">$F$</span> is a field it certainly contains <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. But this proof seems lengthy and is not as immediate as Hoffman and Kunze make it sound; I think I'm missing something here, and would be very thankful for a good explanation. Thanks!</p>
Trevor Gunn
437,127
<p>Representing <span class="math-container">$\alpha_j'$</span> in the ordered basis <span class="math-container">$(\alpha_1',\dots,\alpha_n')$</span> will just give you the standard basis vector <span class="math-container">$e_j$</span> because <span class="math-container">$\alpha_j' = 0\alpha_1' + \dots + 1\alpha_j' + \dots + 0 \alpha_n'$</span>. Multiplying <span class="math-container">$P$</span> by <span class="math-container">$e_j$</span> gives you the <span class="math-container">$j$</span>-th column of <span class="math-container">$P$</span>. So equation (i) says that</p> <p><span class="math-container">$$ [\alpha_j']_{\mathcal B} = \begin{bmatrix} P_{1,j} \\ \vdots \\ P_{n,j} \end{bmatrix}. $$</span></p> <p>By definition of <span class="math-container">$[\alpha_j']_{\mathcal B}$</span>, this means that <span class="math-container">$$ \alpha_j' = P_{1,j}\alpha_1 + \dots + P_{n,j}\alpha_n = \sum_{i = 1}^n P_{i,j} \alpha_i. $$</span></p>
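A tiny numeric sketch of this (my own $2\times2$ example, with $\mathcal B$ the standard basis of $\mathbb R^2$, so the $\alpha_j'$ are literally the columns of $P$):

```python
# B = standard basis of R^2; B' = (a1', a2') with a_j' = sum_i P[i][j] * a_i,
# i.e. a_j' is the j-th column of P
P = [[1.0, 2.0],
     [3.0, 4.0]]
a1p = (P[0][0], P[1][0])   # first column of P
a2p = (P[0][1], P[1][1])   # second column of P

# a vector with B'-coordinates (x1', x2') has B-coordinates x = P x'  (equation (i))
x1p, x2p = 5.0, -1.0
v = (x1p * a1p[0] + x2p * a2p[0],       # build the vector from the basis B'
     x1p * a1p[1] + x2p * a2p[1])
x = (P[0][0] * x1p + P[0][1] * x2p,     # apply P to the coordinate vector
     P[1][0] * x1p + P[1][1] * x2p)
assert v == x
```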
915,218
<p>I'd like some help figuring out how to calculate $n$ points of the form $(x,\sin(x))$ for $x\in[0,2\pi)$, such that the Cartesian distance between them (the distance between each pair of points if you draw a straight line between them) is the same.</p> <p><strong>My background:</strong> I know math up to and through Algebra, have a fairly good grasp of Trig, know the Pythagorean Theorem, but only know the basic principles of Calculus (area under a curve, acceleration of acceleration, etc).</p> <p>I'd like enough information so that I can either write a computer algorithm to compute the points directly (if possible) or, failing that, write an iterative search function that converges on the proper points. It would also be nice if you could explain how it works, although I could probably figure that out myself if I had the right equation.</p>
amon
72,417
<p>The trivial solution is to have all points be the same point, such that the distance is zero. The point $(0, 0)$ lies on the sine curve and therefore represents a solution.</p> <p>For non-zero distances between the points, things are a bit more complicated. We start with:</p> <ul> <li>The number of points $n$ with $n &gt; 0$.</li> <li>A starting point $r_1$. We can always use $r_1 = (0, \sin 0) = (0, 0)$.</li> <li>A constant distance $d$. Formally, this must be in the range $(0, \frac 1 n \int_{(r_1)_x}^{2\pi}|\sin x|\,\mathrm dx)$, but let's settle for the simpler $d=\frac 1 n$ instead.</li> </ul> <p>Then, given a point $r_i =: (x_a, y_a)$, we can figure out the next point $r_{i+1} =: (x_b, y_b)$:</p> <ul> <li><p>We know that each point is of the form $(x, \sin x)$.</p></li> <li><p>The distance between the two points is $d = |r_{i+1} - r_i|$.</p></li> </ul> <p>The point $r_i$ is known, and point $r_{i+1} = (x_b, \sin x_b)$ with $x_b &gt; x_a$. To calculate $r_{i+1}$, we then have to solve the following equation for $x_b$:</p> <p>$$ \begin{align} d &amp;= |r_{i+1} - r_i| \\\Leftrightarrow\quad d^2 &amp;= (x_b - x_a)^2 + (\sin x_b - y_a)^2 \end{align} $$</p> <p>Note that this includes both $x_b$ and $\sin x_b$, which makes this uncomfortably … numeric … to solve. Doing so is left as a programming exercise for the reader.</p>
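Taking up the programming exercise: a minimal iterative sketch (my own function and variable names) that marches right from $r_i$ until the chord first reaches length $d$, then bisects on the squared-distance equation above:

```python
import math

def next_point(xa, ya, d, step=1e-3):
    # f(x): squared chord length from (xa, ya) to (x, sin x), minus d^2
    f = lambda x: (x - xa) ** 2 + (math.sin(x) - ya) ** 2 - d ** 2
    lo = xa
    while f(lo + step) < 0:            # march right until the chord first reaches d
        lo += step
    hi = lo + step                     # now f(lo) < 0 <= f(hi): a bracket
    for _ in range(60):                # bisection refinement
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    xb = (lo + hi) / 2
    return xb, math.sin(xb)

# place n points on the sine curve, starting from (0, sin 0), spacing d = 1/n
n = 8
d = 1.0 / n
pts = [(0.0, math.sin(0.0))]
for _ in range(n - 1):
    xa, ya = pts[-1]
    pts.append(next_point(xa, ya, d))
```

Bisection works here because the squared chord length grows through $d^2$ at the first crossing; for large $d$ a more careful bracketing step would be needed.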