627,258
<p>Hello everybody,<br> I'm trying to find another approach to topology in order to justify the axiomatization of topology. My idea was as follows:</p> <p>Given an <strong>arbitrary</strong> collection of subsets of some space: $\mathcal{C}\in\mathcal{P}^2(\Omega)$<br> Define a closure operator by: $\overline{A}:=\bigcap_{A\subseteq C\in\mathcal{C}}C$<br> This gives rise to a topology, apart from the space itself being open.<br> However, considering the space as being equipped with a notion of being close, all topological questions can be studied - as in topological spaces.<br> <em>(I left out the details as being part of my research)</em></p> <p>So my question is:<br> <em>What could go badly wrong if a collection satisfied all the axioms for open sets, except that the entire space is not necessarily open?</em></p> <p>Thanks for your help! Cheers Alex</p>
GEdgar
442
<p>This can be done. But it is not very interesting. Any of these equivalent versions of "topology" could be used its starting point:</p> <p>(1) In axioms for open sets, do not require the whole space to be open</p> <p>(2) In axioms for closed sets, do not require the empty set to be closed</p> <p>(3) In axioms for a "closure" operation $\tau$, do not require $\tau(\varnothing) = \varnothing$</p> <p>(4) In axioms for "neighborhood", do not require the collection of neighborhoods of a point to be nonempty</p> <p>(5) In description of net convergence, do not require a directed set to be nonempty</p> <p><strong>added</strong> </p> <p>Nothing "really bad" happens. But also nothing interesting happens. We can write an almost topological space $X$ uniquely as a disjoint union, $T \cup Z$, where $T$ is the "topological part", the maximal open set in $X$, a topological space in the usual sense; and $Z$ is the "extra part", the closure of the empty set in $X$, which has no topological structure at all. </p> <p>Maybe the only use is as a counterexample, showing why we should require a directed set to be nonempty. </p>
132,003
<p>I have a time-consuming function that is going to be iterated in a <code>Nest</code> or <code>NestList</code> and I would like to know if there is a good way to monitor the progress. I have found a partial work-around, but it requires an extra global variable (n). </p> <pre><code>fun[x_] := Module[{}, n++; Pause[1]]; ProgressIndicator[Dynamic[n/5]] n = 0; NestList[fun, Null, 5] </code></pre> <p>Besides being poor coding practice, this is a problem because when I call the Nest from different places in the larger code (for example, make two copies of the above and execute both), all the progress indicators move synchronously, rather than being limited to the <code>NestList</code> that is actually executing.</p>
MarcoB
27,951
<p>Inspired by <a href="https://mathematica.stackexchange.com/a/227/27951">this older post</a> by Andrew Moylan, I tried combining <code>Monitor</code> with your <code>ProgressIndicator</code>, then wrapping the whole thing in a <code>DynamicModule</code> to limit the scope of the <code>n</code> variable:</p> <pre><code>Clear[fun] DynamicModule[{n}, n = 0; fun[x_] := Module[{}, n++; Pause[1]]; Monitor[ NestList[fun, Null, 5], ProgressIndicator[Dynamic[n/5]] ] ] </code></pre> <p>This may become cumbersome for larger code, though. Alternatively, I was wondering if you would consider assigning an explicit context to the global <code>n</code> counter, with different contexts used for different copies of the <code>Nest</code> code, effectively separating them. Still rather cumbersome though.</p>
73,785
<p>I am new to Mathematica, and I have read this <a href="https://mathematica.stackexchange.com/questions/29203/determine-the-2d-fourier-transform-of-an-image">post</a> to understand how to perform a Fourier transform on an image. My mission is to extract information on the typical distance between the black patches in the image I have attached here. The code that I attach here gives me the Fourier transform, but I don't know how to extract the wavenumber values from the Fourier transform.</p> <p>I have used this piece of code</p> <pre><code>img = Import["example.jpg"]; Image[img, ImageSize -&gt; 300] data = ImageData[img];(*get data*) {nRow, nCol, nChannel} = Dimensions[data]; d = data[[All, All, 2]]; d = d*(-1)^Table[i + j, {i, nRow}, {j, nCol}]; fw = Fourier[d, FourierParameters -&gt; {1, 1}]; (*adjust for better viewing as needed*) fudgeFactor = 100; abs = fudgeFactor*Log[1 + Abs@fw]; Labeled[Image[abs/Max[abs], ImageSize -&gt; 300],Style["Magnitude spectrum", 18]] </code></pre> <p>I have the following image on which I would like to perform this analysis - <img src="https://i.stack.imgur.com/hALsH.jpg" alt="enter image description here"></p>
Anton Antonov
34,008
<p>This is not an answer, more of an extended comment...</p> <blockquote> <p>My mission is to extract information on the typical distance between the black patches in the image I have attached here.</p> </blockquote> <p>Do we have to use the Fourier transform for this?</p> <p>For example, we can get the required estimate with these commands:</p> <pre><code>img = Import["http://i.stack.imgur.com/hALsH.jpg"]; Row[{"Image dimensions:", ImageDimensions[img]}] imgB = ColorNegate@Binarize[img, 0.4]; Block[{img = imgB}, Show[img, Graphics[{Red, Thick, Circle[#[[1]], #[[2]]] &amp; /@ ComponentMeasurements[ img, {"Centroid", "EquivalentDiskRadius"}][[All, 2]]}]] ] </code></pre> <p><a href="https://i.stack.imgur.com/ytj9u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ytj9um.png" alt=""></a></p> <pre><code>comps = ComponentMeasurements[ imgB, {"Centroid", "EquivalentDiskRadius"}][[All, 2]]; dists = Flatten@ Map[Take[Sort[#], UpTo[5]] &amp;, Outer[EuclideanDistance[#1[[1]], #2[[1]]] - (#1[[2]] + #2[[2]]) &amp;, comps, comps, 1]]; qs = Range[0, 1, 0.25]; TableForm[{qs, Quantile[dists, qs]}] Histogram[dists] </code></pre> <p><a href="https://i.stack.imgur.com/cx365.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cx365.png" alt=""></a></p>
313,030
<p>I often find myself writing a definition which requires a proof. You are defining a term and, contextually, need to prove that the definition makes sense. </p> <p>How can you express that? What about a definition with a proof?</p> <p>Sometimes one can write the definition and then the theorem. But it often happens that definitions which should stay together need to be split because a theorem is required in between.</p> <p>A tentative example:</p> <p><strong>Definition</strong> (rational numbers) Let <span class="math-container">$\sim$</span> be the equivalence relation on <span class="math-container">$\mathbb Z^*\times \mathbb Z$</span> given by <span class="math-container">$$ (q,p) \sim (q',p') \iff pq' = p'q. $$</span> We define <span class="math-container">$\mathbb Q= (\mathbb Z^*\times \mathbb Z)/\sim$</span>. On <span class="math-container">$\mathbb Q$</span> we define addition and multiplication as follows <span class="math-container">$$ [(q,p)] + [(q',p')] = [(qq',pq'+p'q)] \\ [(q,p)] \cdot[(q',p')] = [(qq',pp')] $$</span> With these operations and choosing <span class="math-container">$0_\mathbb Q=[(1,0)]$</span>, and <span class="math-container">$1_\mathbb Q=[(1,1)]$</span>, it turns out that <span class="math-container">$\mathbb Q$</span> is a field.</p> <p><strong>Proof.</strong> We are going to prove that <span class="math-container">$\sim$</span> is indeed an equivalence relation, that addition and multiplication are well defined and that the resulting set is a field. [...]</p>
Francois Ziegler
19,276
<p>Dixmier always solves this as follows, e.g. in <em>C*-algebras</em> — surely one possible example of good exposition (E. C. Lance’s translator’s preface: “With its clear and straightforward style, this remains the best book from which to learn about <em>C*</em>-algebras”):</p> <blockquote> <p><strong>16.1. The compact group associated with a topological group</strong></p> <p><strong>16.1.1. Theorem.</strong> <em>Let <span class="math-container">$G$</span> be a topological group. There exists a compact group <span class="math-container">$\Sigma$</span> and a continuous morphism <span class="math-container">$\alpha:G\to\Sigma$</span> possessing the following property:</em> (...). <em>Furthermore, the pair <span class="math-container">$(\Sigma,\alpha)$</span> is determined up to isomorphism by this property.</em></p> <p>(... Long proof goes here ...)</p> <p><strong>16.1.2. Definition.</strong> The group <span class="math-container">$\Sigma$</span> is called the <em>compact group associated with</em> <span class="math-container">$G$</span>, and <span class="math-container">$\alpha$</span> is called the <em>canonical morphism of <span class="math-container">$G$</span> into <span class="math-container">$\Sigma$</span></em>.</p> </blockquote> <p>The book has about two dozen “X.Y.2” definitions like this — e.g. no less than six over pp. 116–123 (disjoint representations, factor representation, quasi-equivalent representations, type I, multiplicity-free, multiplicity), and almost the last statement in the book is a definition (Plancherel measure).</p> <hr> <p><strong>Added:</strong> I took the above example as perhaps the closest to yours (constructed object). 
In fact Dixmier’s relative Bourbaki does yours <em>exactly</em>, just the same down to the use of a subtitle (Algebra I.2.4 “Monoid of fractions”, I.8.12 <a href="https://archive.org/details/ElementsOfMathematics-AlgebraPart1/page/n136" rel="nofollow noreferrer">“Rings of fractions”</a>, I.9.4 “The field of rational numbers”):</p> <blockquote> <p><strong>12. Rings of fractions</strong></p> <p><strong>Theorem 4.</strong> (...)</p> <p><strong>Definition 8.</strong> <em>The ring defined in Theorem</em> 4 <em>is called the ring of fractions</em> (...)</p> </blockquote> <p>Of course, this “classic” way is not the only one: I also agree with Nik Weaver’s answer, and with this contrasting epigraph in Reed &amp; Simon, <em>Functional Analysis</em> (my emphasis): “A good definition should be the <strong><em>hypothesis</em></strong> of a theorem. (J. Glimm)”</p>
356,353
<p>I learned in my Intro Algebraic Number Theory class that there exist infinitely many integer pairs $(x,y)$ that satisfy the hyperbola $x^2-ny^2=1$; just consider that there are infinitely many units in $\mathcal{O}_{\mathbb{Q}(\sqrt{n})}$, and their norms satisfy the desired equation. Although this is a nice connection, I was wondering if it is possible to reach the solution without using high-powered Algebraic Number Theory. And more generally, does the same result hold true for $x^2-ny^2=k$ where $k$ is any integer? And how would one solve that?</p>
Zander
25,711
<p>See Wikipedia on <a href="http://en.wikipedia.org/wiki/Pell%27s_equation" rel="nofollow">Pell's equation</a>, also <a href="http://mathworld.wolfram.com/PellEquation.html" rel="nofollow">MathWorld</a>.</p> <p>If there's one solution with $x,y\ge1$ then see Brahmagupta's method or the section "Additional solutions from the fundamental solution."</p> <p>If $(x_1,y_1)$ and $(x_2,y_2)$ are solutions then $(x_1 x_2+n y_1 y_2,x_1y_2+x_2y_1)$ is another larger solution.</p> <p>If $n$ is not a square then there's always a solution (and hence infinitely many), see MathWorld for how you can find it from a continued fraction expansion.</p> <p>For $x^2-ny^2=k$ the same argument can get you infinitely many solutions given a fundamental one, but whether or not one exists is more complicated.</p>
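Brahmagupta's composition rule quoted in this answer can be checked numerically. A minimal Python sketch, assuming the well-known fundamental solution $(3, 2)$ for $n = 2$:

```python
# Sketch: generate solutions of x^2 - n*y^2 = 1 by composing a known
# solution with the fundamental one (Brahmagupta's identity from the answer).
def compose(s1, s2, n):
    """(x1, y1), (x2, y2) -> (x1*x2 + n*y1*y2, x1*y2 + x2*y1)."""
    (x1, y1), (x2, y2) = s1, s2
    return (x1 * x2 + n * y1 * y2, x1 * y2 + x2 * y1)

def pell_solutions(fundamental, n, count):
    """Yield `count` solutions, starting from the fundamental one."""
    sol = fundamental
    for _ in range(count):
        yield sol
        sol = compose(sol, fundamental, n)

# Example: n = 2 with fundamental solution (3, 2).
sols = list(pell_solutions((3, 2), 2, 5))
assert all(x * x - 2 * y * y == 1 for x, y in sols)
```

Each composition step produces a strictly larger solution, which is the "infinitely many from one" argument in the answer.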
61,798
<p>Are there any generalisations of the identity $\sum\limits_{k=1}^n {k^3} = \bigg(\sum\limits_{k=1}^n k\bigg)^2$ ?</p> <p>For example can $\sum {k^m} = \left(\sum k\right)^n$ be valid for anything other than $m=3 , n=2$ ?</p> <p>If not, is there a deeper reason for this identity to be true only for the case $m=3 , n=2$?</p>
Davide Giraudo
9,849
<p>We can't have a relationship of the form $$\forall n\in\mathbb N^*, \sum_{k=1}^nk^a=\left(\sum_{k=1}^nk^b\right)^c$$ for $a,b,c\in\mathbb N$, except in the case $c=1$ and $a=b$ or $a=3$, $b=1$ and $c=2$. Indeed, we can write $$\sum_{k=1}^nk^a =n^{a+1}\frac 1n\sum_{k=1}^n\left(\dfrac kn\right)^a$$ hence $$\sum_{k=1}^nk^a\;\overset{\scriptsize +\infty}{\large \sim}\;n^{a+1}\int_0^1t^adt=\dfrac{n^{a+1}}{a+1}$$ and if we have the initial equality we should have $a+1 =(b+1)c$ and $a+1=(b+1)^c$. In particular, $(b+1)^{c-1}=c$. If $c&gt;1$, then $c= (b+1)^{c-1}\geq 2^{c-1}\geq c$, and we should have $c=2$ and $b=1$, therefore $a=3$.</p>
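Both the special case $a=3$, $b=1$, $c=2$ and the asymptotic $\sum_{k\le n} k^a \sim n^{a+1}/(a+1)$ used in the argument can be spot-checked; a small Python sketch:

```python
# Check the identity sum k^3 = (sum k)^2 and the asymptotic
# sum_{k<=n} k^a ~ n^(a+1)/(a+1) used in the answer's argument.
def power_sum(n, a):
    return sum(k ** a for k in range(1, n + 1))

# The identity holds exactly for every n tested.
assert all(power_sum(n, 3) == power_sum(n, 1) ** 2 for n in range(1, 200))

# Asymptotics: the ratio tends to 1 as n grows (a = 4 shown here).
n, a = 10_000, 4
ratio = power_sum(n, a) / (n ** (a + 1) / (a + 1))
assert abs(ratio - 1) < 1e-3
```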
888,101
<p>Suppose I am asked to show that some topology is not metrizable. What exactly do I have to prove for that?</p>
Giuseppe Negro
8,157
<p>If a topology is metrizable, then the "diagonal sequence trick" is available. This means that if you have a sequence $$ x_{(n)} \to x,\qquad n \to \infty $$ and each term of the sequence is the limit of another sequence, belonging to a "good" set $G$: $$ G\ni x^{k}_{(n)}\to x_{(n)}, \qquad k\to \infty $$ then you can construct a "diagonal" sequence $x^{k(n)}_{(n)}\in G$ which converges to the original limit point $x$. In particular, $x$ is in the sequential closure of the good set $G$. </p> <p>This fact is usually phrased in topological terms as "topological closure and sequential closure are the same". So the first thing to try when proving that a topology is not metrizable is showing that the diagonal sequence trick does not work. </p> <p>P.S.: I have in mind von Neumann's proof that the weak topology on $L^2(\mathbb{S}^1)$-space is not metrizable. It goes as follows. Let $$G=\left\{ e^{int}+n e^{imt}\ | n,m\in \mathbb{N}_{\ge 1}\right\}$$ be the "good" set in $L^2(\mathbb{S}^1)$. The question is whether we can or we cannot reach $0$ by taking one weak limit in the good set. The answer is negative because, roughly speaking, to reach $0$ we need to let $n$ go to infinity, otherwise the term $e^{int}$ would never vanish in the weak limit. But letting $n$ go to infinity the term $ne^{imt}$ diverges in norm, hence it is not bounded in weak sense as well. So $0$ is not in the sequential closure of $G$. </p> <p>However, if we let $n$ be fixed and we take a limit in $m$, then the term $ne^{imt}$ vanishes in the weak limit. With the above terminology we have $$ e^{int}+ne^{imt}=x^m_{(n)}\rightharpoonup x_{(n)}=e^{int}, \qquad m\to \infty.$$ If we now let $n\to \infty$ we get $$ x_{(n)}=e^{int}\rightharpoonup 0,$$ but we have already seen that $0$ is not in the sequential closure of $G$. So the diagonal sequence trick is not available in $L^2(\mathbb{S}^1)$-space with weak topology. Hence that topology is not metrizable. </p>
1,285,213
<p>Let $f\in P_2(\mathbb R)$, the space of second-order polynomials with real coefficients, and let the linear operator $T$ be defined as $T[f(x)] = f(0)+f(1)(x+x^2)$.</p> <p>Is $T$ diagonalizable? If so, find a basis $\beta$ of $P_2(\mathbb R)$ in which $[T]_\beta$ is a diagonal matrix.</p>
Brian Fitzpatrick
56,960
<p>Let \begin{align*} e_1(t) &amp;= 1 &amp; e_2(t) &amp;= t &amp; e_3(t) &amp;= t^2 \end{align*} and note that $\alpha=\{e_1,e_2,e_3\}$ is a basis for $P_2(\Bbb R)$. Furthermore, note that \begin{array}{lclclcl} T(e_1) &amp; = &amp; \color{red}{1}\,e_1 &amp; + &amp; \color{green}{1}\,e_2 &amp; + &amp; \color{blue}{1}\,e_3 \\ T(e_2) &amp; = &amp; \color{red}{0}\,e_1 &amp; + &amp; \color{green}{1}\,e_2 &amp; + &amp; \color{blue}{1}\,e_3 \\ T(e_3) &amp; = &amp; \color{red}{0}\,e_1 &amp; + &amp; \color{green}{1}\,e_2 &amp; + &amp; \color{blue}{1}\,e_3 \end{array} This implies that the matrix of $T$ relative to $\alpha$ is $$ [T]_\alpha= \begin{bmatrix} \color{red}{1} &amp; \color{red}{0} &amp; \color{red}{0} \\ \color{green}{1} &amp; \color{green}{1} &amp; \color{green}{1} \\ \color{blue}{1} &amp; \color{blue}{1} &amp; \color{blue}{1} \end{bmatrix} $$ Now, $T$ is diagonalizable if and only if $[T]_\alpha$ is diagonalizable. Can you determine if $[T]_\alpha$ is diagonalizable? If so, what is the change of basis matrix and how does it relate to your desired basis $\beta$?</p>
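As a quick sanity check (plain Python, no external libraries assumed), the characteristic polynomial of $[T]_\alpha$ can be evaluated directly at a few points:

```python
# Evaluate det([T]_alpha - t*I) and confirm that t = 0, 1, 2 are roots,
# so [T]_alpha has three distinct eigenvalues and is diagonalizable.
M = [[1, 0, 0],
     [1, 1, 1],
     [1, 1, 1]]

def det3(A):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def char_poly(t):
    return det3([[M[i][j] - (t if i == j else 0) for j in range(3)]
                 for i in range(3)])

assert [char_poly(t) for t in (0, 1, 2)] == [0, 0, 0]
```

A cubic with three distinct roots factors completely, so the eigenvalues are exactly $0$, $1$, $2$, and the answer's diagonalizability question has a positive answer.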
1,285,213
<p>Let $f\in P_2(\mathbb R)$, the space of second-order polynomials with real coefficients, and let the linear operator $T$ be defined as $T[f(x)] = f(0)+f(1)(x+x^2)$.</p> <p>Is $T$ diagonalizable? If so, find a basis $\beta$ of $P_2(\mathbb R)$ in which $[T]_\beta$ is a diagonal matrix.</p>
Hagen von Eitzen
39,174
<p>We need to find eigenvectors. Since the image of $T$ is only two-dimensional, it is clear that one eigenvalue must be $0$. Indeed, $f(x)=x\cdot(x-1)=x^2-x$ is obviously in the kernel of $T$ (and the kernel consists precisely of the multiples of this - why?)</p> <p>It should be clear from the definition of $T$ that all eigenvectors to nonzero eigenvalues have the form $f(x)=a+bx+bx^2$ (because $T[f]$ has this form). Observe that $f(x)=a+bx+bx^2$ implies $f(0)=a$ and $f(1)=a+2b$, so that $T[f]=a+(a+2b)(x+x^2)$, $T[f](0)=a$, and $T[f](1)=3a+4b$. Therefore if $a\ne0$ then the eigenvalue $\lambda$ must be $1$ and we must have $b=a+2b$, i.e., $b=-a$. We conclude that the eigenvectors of eigenvalue $1$ are precisely the multiples of $1-x-x^2$. There remains the case $a=0$, with necessarily $b\ne0$; we see that such an $f$ is an eigenvector of eigenvalue $2$. </p> <p>In conclusion, we have found a basis of eigenvectors $$\beta\begin{cases}&amp;x^2-x&amp;\lambda=0\\ &amp;x^2+x-1&amp;\lambda=1\\ &amp;x^2+x&amp;\lambda=2\end{cases}$$ so that $T$ with respect to this basis is in fact diagonal: $$[T]_\beta=\begin{pmatrix}0&amp;0&amp;0\\0&amp;1&amp;0\\0&amp;0&amp;2\end{pmatrix}.$$</p>
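The three claimed eigenvectors can be verified directly from the definition of $T$. A minimal Python sketch, representing a polynomial $c_0 + c_1 x + c_2 x^2$ as a coefficient tuple:

```python
# Verify T[f] = f(0) + f(1)(x + x^2) on the claimed eigenvectors.
def T(f):
    c0, c1, c2 = f
    f0 = c0               # f(0)
    f1 = c0 + c1 + c2     # f(1)
    return (f0, f1, f1)   # coefficients of f(0) + f(1)*x + f(1)*x^2

eigenpairs = [
    ((0, -1, 1), 0),   # x^2 - x      -> eigenvalue 0
    ((-1, 1, 1), 1),   # x^2 + x - 1  -> eigenvalue 1
    ((0, 1, 1), 2),    # x^2 + x      -> eigenvalue 2
]
for f, lam in eigenpairs:
    assert T(f) == tuple(lam * c for c in f)
```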
1,673,452
<p>Let $\{a_j\}_{j=1}^N$ be a finite set of positive real numbers. Suppose </p> <p>$$\sum_{j=1}^{N} a_j = A,$$ prove</p> <p>$$\sum_{j=1}^{N} \frac{1}{a_j} \geq \frac{N^2}{A}.$$ </p> <p>Hints on how to proceed?</p>
Lutz Lehmann
115,115
<p>Try the Cauchy-Schwarz inequality. This gives a three-line proof. $$ \left(\sum_{i=1}^Nx_iy_i\right)^2\le\sum_{i=1}^Nx_i^2\cdot\sum_{i=1}^Ny_i^2 $$ Now choose $x_i,y_i$ so that one recognizes the sums in the task and that $x_iy_i=1$.</p>
1,673,452
<p>Let $\{a_j\}_{j=1}^N$ be a finite set of positive real numbers. Suppose </p> <p>$$\sum_{j=1}^{N} a_j = A,$$ prove</p> <p>$$\sum_{j=1}^{N} \frac{1}{a_j} \geq \frac{N^2}{A}.$$ </p> <p>Hints on how to proceed?</p>
Svetoslav
254,733
<p>$$N=\sqrt{a_1}\cdot\frac{1}{\sqrt{a_1}}+\cdots+\sqrt{a_N}\cdot\frac{1}{\sqrt{a_N}}\leq \sqrt{a_1+\cdots+a_N}\sqrt{\frac{1}{a_1}+\cdots+\frac{1}{a_N}}$$ Now square both sides of the inequality.</p>
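The inequality $\sum 1/a_j \ge N^2/A$ can be spot-checked numerically; a small Python sketch (the random seed is assumed only for reproducibility):

```python
import random

# Check sum(1/a_j) >= N^2 / sum(a_j) for random positive inputs,
# with equality when all a_j are equal (the Cauchy-Schwarz equality case).
rng = random.Random(0)
for _ in range(100):
    a = [rng.uniform(0.1, 10.0) for _ in range(20)]
    assert sum(1 / x for x in a) >= len(a) ** 2 / sum(a)

# Equality case: a constant sequence.
a = [3.0] * 7
assert abs(sum(1 / x for x in a) - len(a) ** 2 / sum(a)) < 1e-9
```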
2,603,799
<p>Good morning, I need help with this exercise.</p> <blockquote> <p>Prove that every tangent plane to the cone $x^2+y^2=z^2$ passes through the origin.</p> </blockquote> <p><strong>My work:</strong></p> <p>Let $f:\mathbb{R}^3\rightarrow\mathbb{R}$ be defined by $f(x,y,z)=x^2+y^2-z^2$</p> <p>Then,</p> <p>$\nabla f(x,y,z)=(2x,2y,-2z)$</p> <p>Let $(a,b,c)\in\mathbb{R}^3$ then $\nabla f(a,b,c)=(2a,2b,-2c)$</p> <p>By definition, the equation of the tangent plane is</p> <p>\begin{eqnarray} \langle(2a,2b,-2c),(x-a,y-b,z-c)\rangle &amp;=&amp; 2a(x-a)+2b(y-b)-2c(z-c)\\ &amp;=&amp;2ax-2a^2+2by-2b^2-2cz+2c^2 \\ &amp;=&amp;0 \end{eqnarray}</p> <p>I'm stuck at this step; can someone help me?</p>
user
505,767
<p>The equation of the tangent plane is</p> <p>$$z-z_0=f_x(x_0,y_0)(x-x_0)+f_y(x_0,y_0)(y-y_0)$$</p> <p>For the cone we have</p> <p>$$f(x,y)=\sqrt{x^2+y^2}\implies f_x=\frac{x}{\sqrt{x^2+y^2}} \quad f_y=\frac{y}{\sqrt{x^2+y^2}}$$</p> <p>Thus the tangent plane at $(x_0,y_0,z_0)$ is $$z-z_0=\frac{x_0}{z_0} (x-x_0)+\frac{y_0}{z_0}(y-y_0)$$ $$z\cdot z_0-z_0^2=x\cdot x_0-x_0^2+y\cdot y_0-y_0^2$$ $$z\cdot z_0=x\cdot x_0+y\cdot y_0$$</p>
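A quick numeric sanity check of the final plane equation $z\,z_0 = x\,x_0 + y\,y_0$ (plain Python sketch; the sample point is arbitrary):

```python
import math

# Tangent plane to z = sqrt(x^2 + y^2) at (x0, y0, z0): z*z0 = x*x0 + y*y0.
# It passes through the origin and touches the cone along the whole
# ruling through (x0, y0, z0).
x0, y0 = 3.0, 4.0
z0 = math.hypot(x0, y0)   # z0 = 5.0, so (x0, y0, z0) lies on the cone

def on_plane(x, y, z, tol=1e-9):
    return abs(z * z0 - (x * x0 + y * y0)) < tol

# The origin lies on the plane:
assert on_plane(0.0, 0.0, 0.0)

# Every point t*(x0, y0, z0) of the ruling is on both the cone and the plane:
for t in (0.5, 1.0, 2.0):
    x, y, z = t * x0, t * y0, t * z0
    assert abs(x * x + y * y - z * z) < 1e-9   # on the cone
    assert on_plane(x, y, z)                   # on the plane
```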
3,319,629
<p>The question is from a practice exam I am currently trying to do: <a href="https://i.stack.imgur.com/VWQhs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VWQhs.png" alt="enter image description here"></a></p> <p>I am really not sure how to go about this one. In essence, I'd imagine that the idea is to find the area of the greater curve, and then from that subtract the calculated area of the smaller one. How do I go about doing this? I figure you can do this with one iterated integral?</p> <p>Further, if possible, as an aside to this question, I am also wondering how to find the area of just <strong>one</strong> of these curves with iterated integrals?</p> <p>Thank you very much!</p>
Toby Mak
285,313
<p>You can just calculate the area of the <span class="math-container">$\text{outer curve} - \text{inner curve}$</span>:</p> <p><span class="math-container">$$\frac{1}{2} \int_0^{2\pi} \big(4 - \cos(3\theta) \big)^2 - \big(2 + \sin(3 \theta) \big)^2 \, d\theta$$</span></p>
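The integral above can be evaluated numerically as a check; a plain-Python midpoint-rule sketch (the closed form $12\pi$ comes from expanding the squares, as noted in the comments):

```python
import math

# Midpoint-rule evaluation of
#   (1/2) * integral_0^{2pi} of (4 - cos 3t)^2 - (2 + sin 3t)^2 dt.
# Expanding: the cos/sin terms average to zero over a full period and
# cos^2 - sin^2 = cos 6t also averages to zero, leaving (1/2)(12)(2pi) = 12pi.
N = 100_000
h = 2 * math.pi / N
area = 0.5 * h * sum(
    (4 - math.cos(3 * t)) ** 2 - (2 + math.sin(3 * t)) ** 2
    for t in (h * (k + 0.5) for k in range(N))
)
assert abs(area - 12 * math.pi) < 1e-6
```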
2,981,063
<p>I have seen this statement in a quiz:</p> <blockquote> <p>Let <span class="math-container">$X_i$</span> denote state <span class="math-container">$i$</span> in a Markov chain. It is necessarily true that <span class="math-container">$X_{i+1}$</span> and <span class="math-container">$X_{i-1}$</span> are uncorrelated.</p> </blockquote> <p>Apparently, this statement is <strong>false</strong> but I can't figure out why. I thought that for Markov Chains the <strong>past and future states</strong> are independent <strong>given the present</strong>. Did I misunderstand this?</p>
grand_chat
215,011
<p>The key phrase is "given the present". If past and future are independent given the present, it doesn't follow that past and future are unconditionally independent.</p> <p>For example, consider the simple random walk that takes a step either left or right with equal probability. If you know where I am today, then the knowledge of where I was yesterday won't affect where you think I'll be tomorrow. OTOH if you <em>don't</em> know where I am today, then knowing where I was yesterday will affect where you think I will be tomorrow, since tomorrow I can be at most two steps away from where I was yesterday.</p>
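The random-walk example can be made exact by enumerating all eight equally likely step sequences. A small Python sketch, writing $X_t$ for the position after $t$ steps starting from $0$:

```python
from itertools import product

# Simple random walk from 0: X_t = s_1 + ... + s_t with steps s_i = +-1.
# Enumerate all 2^3 equally likely step sequences and compute
# Cov(X_1, X_3) exactly.
paths = list(product((-1, 1), repeat=3))
x1 = [s[0] for s in paths]         # X_1 on each path
x3 = [sum(s) for s in paths]       # X_3 on each path

n = len(paths)
e1 = sum(x1) / n                   # E[X_1] = 0
e3 = sum(x3) / n                   # E[X_3] = 0
cov = sum(a * b for a, b in zip(x1, x3)) / n - e1 * e3

assert e1 == 0 and e3 == 0
assert cov == 1.0                  # nonzero: X_1 and X_3 are correlated
```

So past and future are correlated unconditionally, even though they are independent given the present.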
16,725
<p>I was directed just now to a post with the following abbreviated time-line:</p> <ul> <li>Question was posted <strong>21</strong> hours ago</li> <li>Question was closed as "unclear what you are asking" <strong>19</strong> hours ago</li> <li>Question was deleted by the votes of three 10K users <strong>4</strong> hours ago. </li> </ul> <p>Yes, I get it: the original question is very vague and it was not at all clear what the OP is asking. But there were some users commenting and trying to help the OP formulate it into a mathematical question. The whole purpose of having posts put "On-Hold" versus "Closed" is that we are supposed to give new users a chance to edit their questions to a form that fits the community norm. This rapid-fire deletion runs entirely contrary to that. </p> <p><strong>Request</strong>: Can we please be a little bit more generous about deleting bad posts? </p> <p>It is one thing to delete a low-quality orphan which the OP abandoned, but I feel that over-zealous deletions of recent posts (which are not obviously spam or offensive) is unfair to the new users and creates an unwelcoming atmosphere. </p> <p>Worse, the OP is now essentially deprived of a chance of learning from his mistakes: if he cannot see the comments he cannot know why his earlier question is closed and deleted<sup>1</sup>! And sure enough my attention was brought to this question because the OP simply posted his question again<sup>2</sup>, identically to the original version that was closed and deleted. So the net effect is that the deletion of the original post is <em>counterproductive</em> to our goal of having clear, well-formed question on this site. </p> <hr> <p><sup>1</sup> A user can see his <a href="https://meta.stackexchange.com/questions/185491/what-is-the-deleted-recent-questions-page-in-the-user-profile">own recently deleted question</a>, though it may not be immediately obvious to new users how to do that. 
A user can also see his own deleted question at any time provided that the user saved the URL. </p> <p><sup>2</sup> I should also add that footnote 1 notwithstanding, in the case that caught my attention the original poster used an unregistered account for the first post. This made it additionally difficult to see the comments on the deleted post. </p>
Community
-1
<p>Concrete suggestion: let's avoid voting to delete questions that display [on hold] unless they are of the spam/offensive/trolling/joke kinds. </p> <p>Also, given that the system automatically deletes nonpositively scored closed posts that got no positively scored answer, I see no reason to spend <a href="http://meta.math.stackexchange.com/questions/16425/fate-of-the-homework-tag-the-community-voted-now-what/16438#comment59962_16438">expensive delete votes</a> on such questions. I suggest focusing on the <a href="http://meta.math.stackexchange.com/q/16255/">backlog</a> instead, in particular on the <a href="https://math.stackexchange.com/search?page=90&amp;tab=votes&amp;q=closed%3a1%20locked%3a0%20duplicate%3a0">downvoted closed non-duplicate posts</a> where automatic deletion is prevented only by an answer worthy of Wolfram Alpha Pro. </p>
3,489,086
<p>I was given this integral:</p> <p><span class="math-container">$$\int^{\infty}_{0}\frac{\arctan(x)}{x}dx$$</span></p> <p>As the title says, I have to find out whether it is convergent or not. So far, I have tried integrating by parts and substituting <span class="math-container">${\arctan(x)}$</span>, and neither got me anywhere.</p>
Community
-1
<p><span class="math-container">$$\int_0^\infty\frac{\arctan(x)}{x}dx=\int_{-\infty}^\infty\arctan(e^t)\,dt$$</span> but this integrand does not vanish as <span class="math-container">$t\to\infty$</span> (it tends to <span class="math-container">$\pi/2$</span>), so the integral diverges.</p>
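Numerically, the tail of the original integral grows like $\frac{\pi}{2}\ln N$, which confirms divergence. A plain-Python midpoint-rule sketch:

```python
import math

def tail_integral(a, b, n=100_000):
    """Midpoint-rule approximation of the integral of arctan(x)/x over [a, b]."""
    h = (b - a) / n
    return h * sum(math.atan(a + h * (k + 0.5)) / (a + h * (k + 0.5))
                   for k in range(n))

# For large x, arctan(x)/x is roughly (pi/2)/x, so each decade contributes
# about (pi/2) ln 10 -- the partial integrals never level off.
decade = tail_integral(100.0, 1000.0)
assert abs(decade - (math.pi / 2) * math.log(10)) < 0.02
```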
30,141
<p>I need to define a function <code>fun</code>, and then re-define this function iteratively. The code is given at the end.</p> <p>First, a function <code>fun[x_, y_, d_]</code> is defined, which is a polynomial in $x$ and $y$, and $d$ is the degree of this polynomial.</p> <p>My goal is to modify <code>fun</code> according to some of its coefficients, see the definition <code>fun2[x_, y_, d_]</code> for example.</p> <p>The problem is that these coefficients are $0$ if $d$ is not substituted by a "real" number, like $d = 1$. See the code between the definitions of <code>fun</code> and <code>fun2</code>.</p> <p>Just after the definition of <code>fun2</code>, I compute <code>fun2[x, y, d]</code> and it is the same as <code>fun1[x, y, d]</code>. The reason is that $d$ is not assigned a value. But <code>fun2[x, y, 1]</code> gives the desired answer, which is different from <code>fun[x, y, 1]</code>.</p> <p>The problem is that I want to repeat this process many times, say</p> <pre><code>fun3 = fun2 + Coefficient[fun2, x^2] fun4 = fun3 + Coefficient[fun3, x^3] .... fun"d" = fun2 + Coefficient[fun"d-1", x^(d-1)] </code></pre> <p>(This is just an example, the real process I need is far more complicated and cannot be defined in one go. In this example, one can just use <code>Sum[Coefficient...]</code>.)</p> <p>Of course I don't want to introduce so many functions. I want to define <code>fun</code> and modify it in a for loop. But the code below shows that <code>Coefficient[fun[x, y, d], ...]</code> is always $0$, since $d$ is not assigned a value.</p> <p>I can't use a variable like <code>temp</code> in this for loop, because</p> <pre><code>For [i =... .... temp = fun[x, y, d] temp = temp + Coefficient[temp, x^i] ... 
</code></pre> <p>will just give <code>fun[x, y, d]</code>, as <code>Coefficient[temp, x^i]</code> is always $0$ as mentioned above.</p> <p><strong><em>I need a suggestion on how to do such work more elegantly.</em></strong></p> <p><img src="https://i.stack.imgur.com/vA7ZU.png" alt="screen shot"></p>
Hector
8,803
<p>Try the following code:</p> <pre><code>fun[n_Integer?Positive][x_, y_, d_Integer] := fun[n - 1][x, y, d] + Coefficient[fun[n - 1][x, y, d], x, n - 1 ]; fun[0][x_, y_, d_Integer] := Sum[(i + j) x^i y^j Subscript[a, i, j], {i, d}, {j, d}]; Table[fun[n][x, y, 1], {n, 0, 2}] // TableForm </code></pre>
30,141
<p>I need to define a function <code>fun</code>, and then re-define this function iteratively. The code is given at the end.</p> <p>First, a function <code>fun[x_, y_, d_]</code> is defined, which is a polynomial in $x$ and $y$, and $d$ is the degree of this polynomial.</p> <p>My goal is to modify <code>fun</code> according to some of its coefficients, see the definition <code>fun2[x_, y_, d_]</code> for example.</p> <p>The problem is that these coefficients are $0$ if $d$ is not substituted by a "real" number, like $d = 1$. See the code between the definitions of <code>fun</code> and <code>fun2</code>.</p> <p>Just after the definition of <code>fun2</code>, I compute <code>fun2[x, y, d]</code> and it is the same as <code>fun1[x, y, d]</code>. The reason is that $d$ is not assigned a value. But <code>fun2[x, y, 1]</code> gives the desired answer, which is different from <code>fun[x, y, 1]</code>.</p> <p>The problem is that I want to repeat this process many times, say</p> <pre><code>fun3 = fun2 + Coefficient[fun2, x^2] fun4 = fun3 + Coefficient[fun3, x^3] .... fun"d" = fun2 + Coefficient[fun"d-1", x^(d-1)] </code></pre> <p>(This is just an example, the real process I need is far more complicated and cannot be defined in one go. In this example, one can just use <code>Sum[Coefficient...]</code>.)</p> <p>Of course I don't want to introduce so many functions. I want to define <code>fun</code> and modify it in a for loop. But the code below shows that <code>Coefficient[fun[x, y, d], ...]</code> is always $0$, since $d$ is not assigned a value.</p> <p>I can't use a variable like <code>temp</code> in this for loop, because</p> <pre><code>For [i =... .... temp = fun[x, y, d] temp = temp + Coefficient[temp, x^i] ... 
</code></pre> <p>will just give <code>fun[x, y, d]</code>, as <code>Coefficient[temp, x^i]</code> is always $0$ as mentioned above.</p> <p><strong><em>I need a suggestion on how to do such work more elegantly.</em></strong></p> <p><img src="https://i.stack.imgur.com/vA7ZU.png" alt="screen shot"></p>
ubpdqn
1,997
<p>Another way (using the same definition of f[x, y, d]):</p> <pre><code> nestpol[n_Integer, d_Integer] := NestList[{#[[1]] + 1, #[[2]] + Coefficient[#[[2]], x^(#[[1]])]} &amp;, {1, f[x, y, d]}, n][[;; , 2]] </code></pre> <p>note (i) if you just want the last value, just change NestList to Nest (ii) in Hector's answer (based on no constant term in the polynomial) f[0][x,y,d]=f[1][x,y,d], so nestpol[N,d] yields the same result as Table[fun[n][x, y, 1], {n, 1,N}] </p>
55,482
<p>I write code that creates a compiled function, and then call that function over and over to generate a list. I run this code on a remote server via a batch job, and will run several instances of it. Sometimes when I make changes to the code, I make a mistake, and inside the compiled function is an undefined variable, such that when the function is called I get the following error messages (repeated several times)</p> <pre><code> CompiledFunction::cfse: Compiled expression w should be a machine-size complex number. CompiledFunction::cfex: Could not complete external evaluation at instruction 18; proceeding with uncompiled evaluation. </code></pre> <p>This causes massive memory usage (which puts me on the system administrator's bad side), and the results are garbage since there was a mistake in the code. Is there any way to force the code to abort and quit the program rather than proceed with uncompiled evaluation?</p>
Jason B.
9,490
<p>Adding this option for <code>Compile</code></p> <pre><code>"RuntimeOptions" -&gt; {"RuntimeErrorHandler" -&gt;Function[Throw[$Failed]]} </code></pre> <p>will cause it to abort evaluation if any error messages come up. To more directly control the memory usage, and stay on the sysadmin's good side, wrap the call to the compiled function with <code>MemoryConstrained</code>, which causes it to abort if the memory goes above a certain threshold.</p>
3,853,351
<p>Given an n-dimensional ellipsoid in <span class="math-container">$\mathbb{R}^n$</span>, is any orthogonal projection of it to a subspace also an ellipsoid? Here, an ellipsoid is defined as</p> <p><span class="math-container">$$\Delta_{A, c}=\{x\in \Bbb R^n\,:\, x^TAx\le c\}$$</span></p> <p>where <span class="math-container">$A$</span> is a symmetric positive definite n by n matrix, and <span class="math-container">$c &gt; 0$</span>.</p> <p>I'm just thinking about this because it gives a nice visual way to think about least-norm regression.</p> <p>I note that SVD proves immediately that any linear image (not just an orthogonal projection) of an ellipsoid is also an ellipsoid, however there might be a more geometrically clever proof when the linear map is an orthogonal projection.</p>
Narasimham
95,860
<p>Indeed, ellipsoids cast ellipse-shaped shadows on the ground.</p> <p>The illumination terminator (the curve of tangency points separating the lit part from the unlit part) lies in a <em>plane</em>, i.e. satisfies a <em>first</em>-degree equation. The intersection of any conicoid with a plane is a conic section: eliminating one variable between the two equations leaves a single <em>second</em>-degree equation.</p> <p><a href="https://i.stack.imgur.com/Bu5zj.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Bu5zj.png" alt="enter image description here" /></a></p>
2,929,203
<p>Suppose we define the relation <span class="math-container">$∼$</span> by <span class="math-container">$v∼w$</span> (where <span class="math-container">$v$</span> and <span class="math-container">$w$</span> are arbitrary elements in <span class="math-container">$R^n$</span>) if there exists a matrix <span class="math-container">$$A∈ GL_n(R)$$</span> such that <span class="math-container">$v=Aw$</span>. What are the equivalence classes for <span class="math-container">$∼$</span> in this case? NOTE: <span class="math-container">$GL_n(R)$</span> is the set of all <span class="math-container">$n×n$</span> matrices with <span class="math-container">$\det≠0$</span>.</p>
Acccumulation
476,070
<p>Given any unit vectors, there is a nonsingular rotation matrix that takes the first vector to the other. Given any non-zero, non-unit vector, there is some non-singular scaling matrix that takes the vector to a unit vector. So given two arbitrary non-zero vectors <span class="math-container">$v$</span> and <span class="math-container">$w$</span>, there are <span class="math-container">$S_v$</span> and <span class="math-container">$S_w$</span> such that <span class="math-container">$S_vv$</span> and <span class="math-container">$S_ww$</span> are unit vectors, and <span class="math-container">$R$</span> such that <span class="math-container">$R(S_vv)=S_ww$</span>. Then <span class="math-container">$w=(S_w^{-1}RS_v)v$</span>. So taking <span class="math-container">$A=(S_w^{-1}RS_v)^{-1}$</span>, which is invertible as a product of invertible matrices, we have that <span class="math-container">$v=Aw$</span>, so given any non-zero vectors <span class="math-container">$v$</span>, <span class="math-container">$w$</span>, <span class="math-container">$v$</span>~<span class="math-container">$w$</span>; the set of non-zero vectors is one equivalence class. Since <span class="math-container">$A0=0$</span> for every matrix <span class="math-container">$A$</span>, the zero vector is related only to itself, so <span class="math-container">$\{0\}$</span> is the other equivalence class.</p>
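<p>As an illustration (my addition, not part of the answer), the scale-rotate-scale construction can be carried out numerically in 2D; <code>build_A</code> is a hypothetical helper name, and the matrix it returns satisfies <span class="math-container">$Av=w$</span>, so its inverse realizes <span class="math-container">$v=A^{-1}w$</span>:</p>

```python
import math

# Sketch of the proof's construction in R^2: scale v to a unit vector,
# rotate it onto the direction of w, then scale up to w's length.
# The composite matrix A is invertible and satisfies A v = w.
def build_A(v, w):
    nv, nw = math.hypot(*v), math.hypot(*w)      # lengths |v|, |w|
    c = (v[0] * w[0] + v[1] * w[1]) / (nv * nw)  # cos of the angle between them
    s = (v[0] * w[1] - v[1] * w[0]) / (nv * nw)  # sin of the angle between them
    r = nw / nv                                  # net scaling factor
    return [[r * c, -r * s],
            [r * s,  r * c]]

v, w = (3.0, 4.0), (-1.0, 2.0)
A = build_A(v, w)
Av = (A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1])
print(Av)  # equals w, so v ~ w
```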
3,582,585
<p>Consider the experiment of throwing a die, if a multiple of 3 comes up, throw the die again and if any other number comes, toss a coin. Find the conditional probability of the event <strong>‘the coin shows a tail’</strong>, given that <em>‘at least one throw of die shows a 3’</em>.</p> <p>I don't know how to deal with this kind of problem. Moreover, there are three answers I am encountering. The first answer is zero, which I intuitively feel wrong, the second answer is <span class="math-container">$\frac{1}{10}$</span> and third is <span class="math-container">$\frac{1}{2}$</span> </p>
John Omielan
602,049
<p>You have</p> <p><span class="math-container">$$3^{15a} = 5^{5b} = 15^{3c} = 3^{3c}5^{3c} \tag{1}\label{eq1A}$$</span></p> <p>Taking natural logarithms (although any other logarithm base, e.g., common (i.e., base <span class="math-container">$10$</span>), will also work) gives</p> <p><span class="math-container">$$15a\ln(3) = 5b\ln(5) = 3c\ln(3) + 3c\ln(5) \tag{2}\label{eq2A}$$</span></p> <p>This gives, using the left &amp; right and then the middle &amp; right parts,</p> <p><span class="math-container">$$15a\ln(3) = 3c\ln(3) + 3c\ln(5) \implies (15a - 3c)\ln(3) = 3c\ln(5) \tag{3}\label{eq3A}$$</span></p> <p><span class="math-container">$$5b\ln(5) = 3c\ln(3) + 3c\ln(5) \implies 3c\ln(3) = (5b - 3c)\ln(5) \tag{4}\label{eq4A}$$</span></p> <p>Eliminate the <span class="math-container">$\ln(3)$</span> terms by multiplying \eqref{eq3A} by <span class="math-container">$3c$</span> and subtracting \eqref{eq4A} multiplied by <span class="math-container">$15a - 3c$</span> to get</p> <p><span class="math-container">$$((3c)(3c) - (15a - 3c)(5b - 3c))\ln(5) = 0 \tag{5}\label{eq5A}$$</span></p> <p>Since <span class="math-container">$\ln(5)$</span> is not <span class="math-container">$0$</span>, you have</p> <p><span class="math-container">$$\begin{equation}\begin{aligned} (3c)(3c) - (15a - 3c)(5b - 3c) &amp; = 0 \\ 9c^2 - 75ab + 15bc + 45ac - 9c^2 &amp; = 0 \\ -15(5ab - bc - 3ac) &amp; = 0 \\ 5ab - bc - 3ac &amp; = 0 \end{aligned}\end{equation}\tag{6}\label{eq6A}$$</span></p>
1,419,209
<p>How do I evaluate this (find the sum)? It's been a while since I did this kind of calculus.</p> <p>$$\sum_{i=0}^\infty \frac{i}{4^i}$$</p>
GAVD
255,061
<p>Let me try. </p> <p>Set $$S = \sum_{i\geq 0}\frac{i}{4^i}.$$</p> <p>Then we have $$4S = \sum_{i \geq 0} \frac{i+1}{4^i} = \sum_{i\geq 0}\frac{i}{4^i} + \sum_{i\geq 0}\frac{1}{4^i} = S + \frac{1}{1-\frac{1}{4}}$$</p> <p>So, $3S = \frac{4}{3}$. It implies that $$S = \frac{4}{9}.$$</p>
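<p>A quick numeric sanity check of this value (my addition, not part of the answer):</p>

```python
# Partial sums of sum_{i>=0} i/4^i converge rapidly to 4/9;
# terms beyond i = 100 are far below machine precision.
S = sum(i / 4**i for i in range(100))
print(abs(S - 4/9))  # essentially zero
```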
2,077,664
<p>I can show that $3^{3^{3^n}}\equiv7\pmod{10}$ since</p> <p>$3^1\equiv3\pmod{10}$</p> <p>$3^2\equiv9\pmod{10}$</p> <p>$3^3\equiv7\pmod{10}$</p> <p>$3^4\equiv1\pmod{10}$</p> <p>Thus, it reduces to $3^{(3^{3^n}\mod4)}$. I can then notice that</p> <p>$3^1\equiv3\pmod4$</p> <p>$3^2\equiv1\pmod4$</p> <p>Reducing it down to $3^{(3^{(3^n\mod2)}\mod4)}=3^{(3^1\mod4)}=3^3\equiv7\pmod{10}$</p> <p>However, this is tedious and not capable of solving the following problem:</p> <p>$$3^{3^{3^{\dots}}}\pmod{100}$$</p> <p>where the power tower keeps going up until the value becomes fixed for all further power towers. How would I approach this problem?</p> <p>To clarify a bit, we take the exponents at the top and work our way down. For example,</p> <p>$$3^{3^3}=3^{27}\ne(3^3)^3$$</p> <p>For clearer notation, $3^{3^{3^{\dots}}}=3\uparrow(3\uparrow(3\uparrow(\dots(3\uparrow n)\dots)))$, and we take as many $\uparrow$'s as required such that for all $n,k\in\mathbb N$ we have</p> <p>$$3\uparrow(3\uparrow(3\uparrow(\dots(3\uparrow n)\dots)))\equiv3\uparrow(3\uparrow(3\uparrow(\dots(3\uparrow k)\dots)))\pmod{100}$$</p> <p>In <a href="https://en.wikipedia.org/wiki/Knuth&#39;s_up-arrow_notation" rel="nofollow noreferrer">Knuth's up arrow notation</a>.</p>
N. S.
9,176
<p>We know that $$3^2 \equiv 1 \pmod{4} \\ 3^{20} \equiv 1 \pmod{25}$$ with the last following from Euler Theorem. Therefore $$3^{20} \equiv 1 \pmod{100}$$</p> <p>The problem then reduces to finding the powers of $3 \pmod{20}$. </p> <p>Again $$3^2 \equiv 1 \pmod{4} \\ 3^4 \equiv 1 \pmod{5} \\$$</p> <p>Therefore $3^4 \equiv 1 \pmod{20}$.</p> <p>We thus have $$3^{3^{3 ^{...}}} \equiv 3^{3^{3 ^{...}} \pmod{20}} \equiv 3^{3^{3 ^{...} \pmod{4}}} \pmod{100} \equiv 3^{3^{3 }} \pmod{100}\\ \equiv 3^{27}\equiv 3^{7} \pmod{100}$$ which is easy to calculate.</p>
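<p>The reduction can be double-checked numerically (my addition, not part of the answer); the height-3 tower <span class="math-container">$3^{27}$</span> is small enough to hold exactly, and Python's three-argument <code>pow</code> handles the height-4 tower:</p>

```python
# Heights 3 and 4 of the tower already agree mod 100, exactly as the
# chain of moduli 100 -> 20 -> 4 predicts.
e = 3**27                  # the height-3 tower, held exactly
h3 = pow(3, 3**3, 100)     # 3^(3^3) mod 100
h4 = pow(3, e, 100)        # 3^(3^(3^3)) mod 100
assert h3 == h4 == pow(3, 7, 100)
print(h3)
```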
2,077,664
<p>I can show that $3^{3^{3^n}}\equiv7\pmod{10}$ since</p> <p>$3^1\equiv3\pmod{10}$</p> <p>$3^2\equiv9\pmod{10}$</p> <p>$3^3\equiv7\pmod{10}$</p> <p>$3^4\equiv1\pmod{10}$</p> <p>Thus, it reduces to $3^{(3^{3^n}\mod4)}$. I can then notice that</p> <p>$3^1\equiv3\pmod4$</p> <p>$3^2\equiv1\pmod4$</p> <p>Reducing it down to $3^{(3^{(3^n\mod2)}\mod4)}=3^{(3^1\mod4)}=3^3\equiv7\pmod{10}$</p> <p>However, this is tedious and not capable of solving the following problem:</p> <p>$$3^{3^{3^{\dots}}}\pmod{100}$$</p> <p>where the power tower keeps going up until the value becomes fixed for all further power towers. How would I take this problem?</p> <p>To clarify a bit, we take the exponents at the top and work our way down. For example,</p> <p>$$3^{3^3}=3^{27}\ne(3^3)^3$$</p> <p>For clearer notation, $3^{3^{3^{\dots}}}=3\uparrow(3\uparrow(3\uparrow(\dots(3\uparrow n)\dots)))$, and we take as many $\uparrow$'s required such that for all $n,k\in\mathbb N$ we have</p> <p>$$3\uparrow(3\uparrow(3\uparrow(\dots(3\uparrow n)\dots)))\equiv3\uparrow(3\uparrow(3\uparrow(\dots(3\uparrow k)\dots)))\pmod{100}$$</p> <p>In <a href="https://en.wikipedia.org/wiki/Knuth&#39;s_up-arrow_notation" rel="nofollow noreferrer">Knuth's up arrow notation</a>.</p>
lab bhattacharjee
33,337
<p>As $3\equiv-1\pmod4,3^{2n+1}\equiv-1\equiv3$ for any integer $n\ge0$</p> <p>So, $3^{3^{3^{\cdots}}}$ can be written as $\displaystyle3^{4k+3}$ or $\displaystyle3^{3^{(4b+3)}}$</p> <p>Now,$\displaystyle3^{4k+3}=27(10-1)^{2k}=27(1-10)^{2k}$</p> <p>and $\displaystyle(1-10)^{2k}\equiv1-\binom{2k}110\pmod{10^2}\equiv1-20k$</p> <p>So, it is sufficient to find $4k+3=3^{(4b+3)}\pmod5$</p> <p>$\displaystyle4k+3=3^{(4b+3)}=3^3(3^4)^b\equiv2\cdot1^b\pmod5\equiv2$</p> <p>$\displaystyle\implies4k\equiv-1\pmod5\equiv4\implies k\equiv1$ as $(4,5)=1$</p> <p>$\displaystyle\implies(1-10)^{2k}\equiv1-20\cdot1\equiv81\pmod{100}$</p> <p>$\displaystyle\implies3^{3^{(4b+3)}}\equiv27\cdot81\pmod{100}\equiv?$</p>
251,182
<p>Is 13 a quadratic residue of 257? Note that 257 is prime.</p> <p>I have tried doing it. My study guide says it is true. But I keep getting false. </p>
Bill Dubuque
242
<p>$\rm mod\ 257\!:\ 13 \,\equiv\, 13\!-\!257 \,\equiv\, -61\cdot 4 \,\equiv\, 196\cdot 4\,\equiv\,49\cdot 4\cdot 4 \,\equiv\, 28^2\ \ $ (took $\,&lt; 10$ secs mentally)</p> <p><strong>Remark</strong> $\ $ Because of the <em>law of small numbers</em>, such negative twiddling and pulling out small square factors often succeeds for small problems.</p>
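<p>A one-line check of this congruence, plus Euler's criterion for good measure (my addition, not part of the answer):</p>

```python
# 28^2 = 784 = 3*257 + 13, so 13 is a square mod 257.
assert 28**2 % 257 == 13
# Euler's criterion: 13^((257-1)/2) ≡ 1 (mod 257) iff 13 is a residue.
assert pow(13, 128, 257) == 1
print("13 is a quadratic residue mod 257")
```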
939,237
<p>Prove $n^2 &lt; n!$ for $n \ge 4$.</p> <p>This is what I have gotten so far.</p> <p>Basis step: $p(4)$ is true. Inductive hypothesis: assume $p(k)$ is true for $k \ge 4$.</p> <p>Inductive step $p(k+1)$ : $(k+1)^2 &lt; (k+1)!$</p> <p>$$(k+1)^2 =k^2 + 2k + 1 &lt; k! + 2k +1$$</p> <p>Can someone please explain the last step? It is from the text, and I need help understanding it. Forgive me for the formatting errors; I'm still learning.</p>
user164515
166,497
<p>$$n^n \geq n!$$</p> <p>Proof: Let $n\in\mathbb{N}$. Then $$ n^n = n\cdot n\cdot n\cdot...\cdot n$$ whereas $$ n! = n\cdot(n-1)\cdot (n-2)\cdot...\cdot1$$</p> <p>For each term in the product you can compare $$n = n $$ $$n &gt; n-1 $$ $$n &gt; n-2 $$ and so on. Thus $n^n \geq n!$ </p>
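<p>A quick numeric check of the term-by-term comparison (my addition, not part of the answer):</p>

```python
import math

# n^n dominates n! because each of the n factors n is >= the matching
# factor n, n-1, ..., 1 of the factorial.
for n in range(1, 15):
    assert n**n >= math.factorial(n)
print("n^n >= n! verified for n = 1..14")
```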
1,006,562
<p>So I am trying to figure out the limit</p> <p>$$\lim_{x\to 0} \tan x \csc (2x)$$</p> <p>I am not sure what action needs to be done to solve this and would appreciate any help to solving this. </p>
André Nicolas
6,312
<p><strong>Hint:</strong> We have $\csc(2x)=\frac{1}{\sin(2x)}$ and $\sin(2x)=2\sin x\cos x$.</p>
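<p>A numeric probe of the limit, using the hint's rewriting (my addition, not part of the answer):</p>

```python
import math

# tan(x) * csc(2x) = tan(x) / sin(2x); evaluate for shrinking x and
# watch the values settle toward the limit.
for x in [1e-2, 1e-4, 1e-6]:
    print(x, math.tan(x) / math.sin(2 * x))
```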
844,832
<p>How do I find the derivative of this function? $$ 7\sinh(\ln t)$$</p> <p>I don't know where to start, so I looked it up in Wolfram Alpha, which showed $$ 7\left(\frac{-1 + t^2}{2t}\right), $$ and I did not get that. How did they jump from $$ 7\sinh(\ln t) $$ to this step? Is there an identity that I am missing?</p>
Hayden
27,496
<p>Assuming you know the derivatives of $\ln t$ and $\mathrm{sinh} t$, then you can use the Chain Rule, which states that $(f\circ g)'(t)=g'(t)f'(g(t))$.</p>
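<p>Concretely (my addition, a sketch of the chain rule at work): with <span class="math-container">$g(t)=\ln t$</span> and <span class="math-container">$f(u)=7\sinh u$</span>, the rule gives <span class="math-container">$\frac{d}{dt}\,7\sinh(\ln t)=\frac{7\cosh(\ln t)}{t}$</span>, which a finite difference confirms:</p>

```python
import math

f = lambda t: 7 * math.sinh(math.log(t))
# chain rule: g'(t) * f'(g(t)) with g = log and f = 7*sinh
df = lambda t: 7 * math.cosh(math.log(t)) / t

t, h = 2.0, 1e-6
numeric = (f(t + h) - f(t - h)) / (2 * h)  # central difference estimate
print(numeric, df(t))
```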
2,373,073
<p>Let $a, b, c$ be distinct integers, and let $P$ be a polynomial with integer coefficients. Show that it is impossible that $P(a)=b$, $P(b)=c$, and $P(c)=a$ at the same time. </p>
problembuster
356,262
<p>Assume otherwise.</p> <p>Since $x-y$ divides $P(x)-P(y)$ for any integers $x, y$ (by the remainder theorem), $a-b$ divides $b-c$, $b-c$ divides $c-a$, and $c-a$ divides $a-b$.</p> <p>Chaining these divisibilities, each of $a-b$, $b-c$, $c-a$ divides the other two, so all three have the same absolute value.</p> <p>If $c-a = -(a-b)$ then $b = c$, contradicting distinctness; hence $c-a = a-b$, therefore $2a = b+c$.</p> <p>Likewise, if $b-c = -(c-a)$ then $a = b$; hence $b-c = c-a$, therefore $2c = a+b$.</p> <p>Subtracting both equations:</p> <p>$2(a-c) = -(a-c)$</p> <p>$a-c=0$</p> <p>$a=c$ </p> <p>Contradiction! Therefore, we are done.</p>
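<p>The divisibility fact driving the proof, that <span class="math-container">$a-b$</span> divides <span class="math-container">$P(a)-P(b)$</span> for integer-coefficient polynomials, can be spot-checked numerically (my addition; the polynomial is an arbitrary sample):</p>

```python
# P(x) = 7x^3 + 4x^2 - x + 3, an arbitrary integer-coefficient polynomial.
def P(x):
    return 7 * x**3 + 4 * x**2 - x + 3

# For every pair of distinct integers on a grid, (a-b) | P(a)-P(b).
for a in range(-6, 7):
    for b in range(-6, 7):
        if a != b:
            assert (P(a) - P(b)) % (a - b) == 0
print("divisibility holds on the sample grid")
```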
208,008
<p>Let $k$ and $n$ be two fixed integers. Let $C$ denote the circle with radius $4n$ (in the plane $\mathbb{R}^2$). Suppose $\{C_1,C_2\}$ is a set of two arbitrary tangent circles with radius $2n$ in $C$. Also, let $\{C_{11},C_{12}\}$ and $\{C_{21},C_{22}\}$ be sets of two arbitrary tangent circles with radius $n$ in $C_1$ and $C_2$, respectively. Is there a finite set of $k$ points in the circle $C$ such that each of $C_1$ and $C_2$ contains an odd number of points and each of $C_{11}$, $C_{12}$, $C_{21}$ and $C_{22}$ contains an even number of points?</p> <p>In the following I drew a picture of the problem at a fixed time, since the circles can revolve inside the original circles while remaining tangent at all times:</p> <p><img src="https://i.stack.imgur.com/7z52i.jpg" alt="enter image description here"></p> <p>Motivation: Actually, one of my friends is working on the effects of critical points in a bounded area in the plane. He is an engineer and does not like abstract mathematics. Based on his problem, I determined some special points in the bounded area and gave some properties to these points. After that, we defined these points with their properties in the computer and computed some special parameters of the phenomenon on that bounded area by simple line integrals and other mathematical tools. Actually, I helped him with all of this procedure. But in my private time, I abstracted his problem into the version you can see here.</p> <p>In my opinion, for arbitrary $k$ the answer is no, and if these points exist, I think they have symmetry in the plane.</p> <p>I think that, in general, the claim below is true:</p> <p>Let $S$ be a set of points distributed in the circle $C$ in such a way that any two tangent circles $C_1$ and $C_2$ in $C$ contain an even number of points of $S$. Then $S$ contains an even number of points.</p> <p>I appreciate any answer or helpful comment. </p>
Ilya Bogdanov
17,581
<p>Do I misunderstand something, or does such an easy argument work?</p> <p><b>1.</b> Assume that such a set <span class="math-container">$S$</span> of <span class="math-container">$k$</span> points exists. Choose an arbitrary position of <span class="math-container">$C_1$</span> such that its center <span class="math-container">$O$</span> is not in <span class="math-container">$S$</span>. The remaining points inside <span class="math-container">$C_1$</span> are split into (open) semicircles (of possible circles <span class="math-container">$C_{11}$</span>) joining <span class="math-container">$O$</span> with the boundary of <span class="math-container">$C_1$</span> and "facing clockwise". Each such semicircle <span class="math-container">$c$</span> should contain an even number of points of <span class="math-container">$S$</span>, otherwise the position of <span class="math-container">$C_{11}$</span> containing <span class="math-container">$c$</span> and the one just a bit clockwise to it would contain numbers of points having different parities. Thus the total number of points inside <span class="math-container">$C_1$</span> is also even.</p> <p><b>REMARK.</b> The condition of each semicircle <span class="math-container">$c$</span> containing an even number of points in <span class="math-container">$S$</span>, along with the similar condition for counterclockwise semicircles, is equivalent to that of every <span class="math-container">$C_{11}$</span> containing a number of points in <span class="math-container">$S$</span> of fixed parity. Such examples can be easily constructed in many ways [here was a not-quite-correct explanation], one of which is shown in the picture below. 
A dashed circle contains an even number of green points, and while it rotates around the center it gets and loses an even number of them at every moment.</p> <p><img src="https://i.stack.imgur.com/K03H0.jpg" alt="Points distribution"> </p> <p><b>2.</b> The same argument works for the general question about <span class="math-container">$C$</span>, <span class="math-container">$C_1$</span>, and <span class="math-container">$C_2$</span>, unless the center <span class="math-container">$O$</span> of <span class="math-container">$C$</span> lies in <span class="math-container">$S$</span>. As far as I understand, in this question you do not fix the radii of <span class="math-container">$C_1$</span> and <span class="math-container">$C_2$</span>, otherwise an example from the Remark above can be completed by <span class="math-container">$O$</span> (In the picture below, there is an odd number of green points, counting the center).</p> <p>If the radii may vary and <span class="math-container">$O$</span> is in <span class="math-container">$S$</span>, then one may choose <span class="math-container">$C_1$</span> and <span class="math-container">$C_2$</span> of equal radii so that they do not pass through other points of <span class="math-container">$S$</span>, and then increase one radius and decrease the other so as to include <span class="math-container">$O$</span> inside <span class="math-container">$C_1$</span>.</p>
2,428,243
<p>How can I evaluate this product?</p> <p>$$\prod_{i=1}^{\infty} {(n^{-i})}^{n^{-i}}$$</p> <p>Unfortunately, I have no idea.</p>
Community
-1
<p><strong>Hint:</strong></p> <p>Taking the logarithm,</p> <p>$$\log p(n)=-\log n\sum_{i=1}^\infty i\,n^{-i}.$$</p> <p>This summation, which is a modified geometric series, has a closed-form formula.</p>
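<p>The closed form alluded to is the standard identity <span class="math-container">$\sum_{i\ge1} i\,x^i=\frac{x}{(1-x)^2}$</span> for <span class="math-container">$|x|&lt;1$</span>; a numeric check with <span class="math-container">$x=\frac14$</span> (my addition, not part of the hint):</p>

```python
# Compare a truncated "modified geometric series" against its closed form.
x = 0.25
partial = sum(i * x**i for i in range(1, 200))  # truncation error is negligible
closed = x / (1 - x)**2
print(partial, closed)
```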
2,335,831
<p>I am trying to implement an Extended Kalman Filter (EKF) and it is becoming harder than I thought.</p> <p>I have one question. I noticed that the covariance matrix, which should be updated on each iteration, is not symmetric. I am debugging through MATLAB. I know that P should be symmetric and stay symmetric.</p> <p>What does it mean if the covariance matrix is not symmetric? Where could the error be? </p> <p><strong>EDIT:</strong> Any covariance matrix MUST be symmetric no matter what. The symmetry comes from its definition. The covariance tells you how two variables are related, and therefore if $x$ is related to $y$, then $y$ is related to $x$ in the same way.</p> <p>My problem was that I was checking with the MATLAB function <code>issymmetric(A)</code> if the matrix $A$ was symmetric. Apparently that function checks for <em>exact</em> symmetry, and therefore if the matrices involved are computed numerically and are not exactly symmetric, it will give you <code>false</code>/<code>0</code> as a result. But, if for the same matrix I checked <code>A-A'</code>, I had something of the order of <code>1e-15</code>.</p>
Forrest Voight
36,097
<p>A quick ad-hoc fix that (in my experience) works great is to simply "symmetrize" the $P$ matrix every time you calculate a new potentially asymmetric value for it by doing:</p> <p>$P'=\dfrac{P+P^T}{2}$</p>
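<p>A minimal sketch of the fix in Python (my addition; plain nested lists stand in for the MATLAB matrix):</p>

```python
# A covariance estimate whose tiny asymmetry mimics floating-point round-off.
P = [[2.0, 0.3 + 1e-15],
     [0.3, 1.0]]
n = len(P)
# Symmetrize: P' = (P + P^T) / 2. Since IEEE addition is commutative,
# the result is *exactly* symmetric, not just symmetric to tolerance.
P_sym = [[(P[i][j] + P[j][i]) / 2 for j in range(n)] for i in range(n)]
assert all(P_sym[i][j] == P_sym[j][i] for i in range(n) for j in range(n))
print("P' is exactly symmetric")
```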
3,265,403
<p>While trying to compute the line integral of a function along a path $K$, I need to parametrize my path $K$ in terms of a single variable, let's say this single variable will be <span class="math-container">$t$</span>. My path is defined by the following set: <span class="math-container">$$K=\{(x,y)\in(0,\infty)\times[-42,42]|x^2-y^2=1600\}$$</span> I know how to calculate the line integral, that is not my issue. My problem is to parametrize <span class="math-container">$x^2-y^2=1600$</span>. I tried using the identities: <span class="math-container">$$\sin^2(t)+\cos^2(t)=1$$</span> <span class="math-container">$$\sec^2(t)-\tan^2(t)=1$$</span> But I did not get anywhere with my parametrization (see below for my poor attempt at parametrizing). I would welcome any help/hints, and if you happen to know some good reading to learn more about parametrization, I am also interested. <span class="math-container">$$r(t)=1600\sec^2(t)-1600\tan^2(t)=1600$$</span> for <span class="math-container">$$x=40\sec(t) \land y=40\tan(t)$$</span></p>
Pjotr5
157,405
<p>I think that using trigonometric function is overcomplicating it in this case. You can let <span class="math-container">$y$</span> correspond to a parameter <span class="math-container">$t$</span>, then, since <span class="math-container">$x$</span> is given to be positive, we can say that <span class="math-container">$x$</span> is the following positive root <span class="math-container">$$x = \sqrt{1600 + t^2}.$$</span> Your parameterised curve is subsequently given by: <span class="math-container">$$\left\{\left(\sqrt{1600 + t^2},t\right): t \in [-42,42]\right\}.$$</span></p> <hr> <p>Letting <span class="math-container">$y = 40\sinh(t)$</span> is also an option, in which case the parameterisation is given by <span class="math-container">$$\left(40 \cosh(t), 40 \sinh(t) \right).$$</span> This perhaps looks more appealing although finding the correct bounds on <span class="math-container">$t$</span> now involves inverse hyperbolic functions, which I will leave up to you if you are willing to do it.</p>
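<p>Both parameterisations can be checked to stay on the hyperbola (my addition, not part of the answer):</p>

```python
import math

# x = sqrt(1600 + t^2), y = t  satisfies x^2 - y^2 = 1600 identically.
for t in [-42.0, -10.0, 0.0, 10.0, 42.0]:
    x, y = math.sqrt(1600 + t**2), t
    assert abs(x**2 - y**2 - 1600) < 1e-9
# Hyperbolic version: (40*cosh(t), 40*sinh(t)), via cosh^2 - sinh^2 = 1.
for t in [-2.0, 0.0, 1.5]:
    x, y = 40 * math.cosh(t), 40 * math.sinh(t)
    assert abs(x**2 - y**2 - 1600) < 1e-6
print("both parameterisations lie on x^2 - y^2 = 1600")
```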
666,217
<p>If $a^2+b^2 \le 2$ then show that $a+b \le2$</p> <p>I tried to transform the first inequality to $(a+b)^2\le 2+2ab$ then $\frac{a+b}{2} \le \sqrt{1+ab}$ and I thought about applying $AM-GM$ here but without result</p>
SomeStrangeUser
68,387
<p>First note that if $a=0$ or $b=0$, then the question is easy. So assume that both are non-zero. Consider the value $ab$. We can show that we must have $|ab|\le1$. Assume to the contrary that $|ab| &gt; 1$. This means $$\frac{1}{|b|}&lt;|a|,$$ and hence, squaring, $\frac{1}{b^2}&lt;a^2$. Expanding $$0\le(b^2-1)^2$$ we get $$2\le \frac{1}{b^2}+b^2.$$ Thus, using the original inequality, we see $$ 2\le \frac{1}{b^2}+b^2\lt a^2+b^2\le2, $$ which is a contradiction. Hence $|ab|\le1$, and we can continue your original inequality by writing: $$ a+b\le \sqrt{2+2ab}\le \sqrt{2+2}=2 $$</p>
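<p>A brute-force spot check of the implication (my addition, not part of the answer):</p>

```python
import random

random.seed(0)
# Sample random points; whenever a^2 + b^2 <= 2 holds, a + b <= 2 must too.
for _ in range(100000):
    a, b = random.uniform(-2, 2), random.uniform(-2, 2)
    if a * a + b * b <= 2:
        assert a + b <= 2
print("no counterexample found")
```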
1,748,751
<p>By K values, I mean the values described here:</p> <p><a href="https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods#Explicit_Runge.E2.80.93Kutta_methods" rel="nofollow">https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods#Explicit_Runge.E2.80.93Kutta_methods</a></p> <p>I know how the K values in the Runge-Kutta method can be proven to be correct, by comparing their taylor expansion with the taylor expansion of the function to be approximated, but how were they originally figured out? </p> <p>I think I understand the Runge-Kutta method derivation when you have the derivative in terms of one variable f'(t). It seems to be a direct consequence of Simpson's rule and its higher order equivalents. But when it is some form of first order differential equation (i.e. f'(t, y(t))), I am still lost. Is there an equivalent of Simpson's rule for multivariable functions? </p>
William Oliver
89,122
<p>I figured out the answer to my question, with the help of Peter in the comments of this question. I decided to post what I've found here in case it might help other people (because I can't seem to find a good explanation of this anywhere else online). </p> <p>First of all, for those who do not know, the <a href="https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods" rel="noreferrer">Runge-Kutta method</a> is a method of solving first order differential equations numerically. Explicitly, our task is this: </p> <blockquote> <p>Given a derivative $\frac{\mathrm{dy}}{\mathrm{dx}} = y'(x, y)$ and an initial value for its antiderivative $y(x_n) = y_n$, approximate $y(x_{n+1}) = y_{n+1}$ where the approximation gets better as $x_{n+1} - x_n$ goes to $0$.</p> </blockquote> <p>I've found that looking at the more general problem first makes this difficult to understand, so instead we will look at the slightly simpler case where the derivative is only a function of $x$ and not of $y$. In other words, $\frac{\mathrm{dy}}{\mathrm{dx}} = y'(x)$</p> <p>The most basic and straightforward way of doing this is called <a href="https://en.wikipedia.org/wiki/Euler_method" rel="noreferrer">Euler's Method</a>. Essentially it works by moving in the direction of the slope of $y$ at $x_n$. Symbolically: </p> <p>$$ y_{n+1} \approx y_n + (x_{n+1} - x_n)y'(x_n) $$</p> <p><a href="https://i.stack.imgur.com/Jap59.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/Jap59.gif" alt="enter image description here"></a></p> <p>As you can see by the image, this is <em>okay</em> at approximating the next point, but it isn't <em>great</em>. We'd like to somehow find a better approximation.</p> <p>So here is where I was getting confused. Remember how I said that Euler's method works by moving in the direction of the tangent line? Well, that is true, but it isn't very helpful, as it doesn't really allow us to make our approximation better. 
We are given the function that gives us the tangent line at any $x$ value <em>exactly</em> so there is no room for improvement other than making $x_{n+1} - x_n$ smaller. Instead, we should notice that, in our approximation, if we move things around a bit, we get the following</p> <p>$$ y_{n+1} - y_n \approx (x_{n+1} - x_n)y'(x_n) $$ Why is this important? Well $y_{n+1} - y_n$ is actually just the integral of $y'(x)$ from $x_n$ to $x_{n+1}$. So in reality, Euler's method is just a direct result of the <a href="https://en.wikipedia.org/wiki/Riemann_sum" rel="noreferrer">Riemann Sum</a> approximation of the integral!</p> <p>$$ \int_{x_n}^{x_{n+1}} y'(x) \mathrm{dx} \approx (x_{n+1} - x_n)y'(x_n) $$</p> <p>From this, we can generalize Euler's method as direct result of the <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus" rel="noreferrer">Fundamental Theorem of Calculus</a>. In other words</p> <p>$$ y_{n+1} = y_n + \int_{x_n}^{x_{n+1}} y'(x) \mathrm{dx} $$</p> <p>And better approximations of the integral lead to better approximations of $y_{n+1}$! This is the basis of the Runge-Kutta method. </p> <p>How do we get better approximations of the integral? Well, a Riemann Sum uses the area under a constant that goes through one point of the function to approximate the integral.</p> <p><a href="https://i.stack.imgur.com/JgIDJ.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/JgIDJ.gif" alt="Left Riemann Sum"></a></p> <p>But a better approximation would be the <a href="https://en.wikipedia.org/wiki/Trapezoidal_rule" rel="noreferrer">Trapezoidal method</a> which uses the area under a straight line that goes through two points. 
</p> <p><a href="https://i.stack.imgur.com/ZhXk0.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/ZhXk0.gif" alt="Trapezoidal Method"></a></p> <p>An even better approximation would be <a href="https://en.wikipedia.org/wiki/Simpson%27s_rule" rel="noreferrer">Simpson's method</a>, which uses a parabola that goes through three points. </p> <p><a href="https://i.stack.imgur.com/6vDMx.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6vDMx.png" alt="enter image description here"></a></p> <p>The pattern here is that we can use a polynomial of degree $n-1$ through $n$ points to get better and better approximations of the integral as $n$ increases (the fact that these approximations get better is a direct result of <a href="https://en.wikipedia.org/wiki/Taylor%27s_theorem" rel="noreferrer">Taylor's Theorem</a>). So what makes this better than using a smaller step size in Euler's method? Well, because this is a result of Taylor's Theorem, with Euler's method, the approximation gets better linearly as we decrease the step size, while with a method of order $n$, if the step size is $\delta$, the error shrinks approximately like $\delta^n$.</p> <p>However, when we try to apply these methods to the more general case of $\frac{\mathrm{dy}}{\mathrm{dx}} = y'(x, y)$ we run into a slight problem. We are only given one value of $y$ (namely $y(x_0)$) but we need $n$ points on $y'$ for an $n$th order Runge-Kutta. So what do we do? </p> <p>Unfortunately, we have to approximate other values of $y$ with lower order methods before we can use the higher order methods. We lose a bit of accuracy here, but as it turns out, this is generally not too big of a deal, because the approximation still gets better with $\delta^n$. </p> <p>So let's say $R_n(y', y_{n-1})$ is the function that gets the next value, $y_{n}$, using the $n$th order Runge-Kutta method. 
Then, the following conditions hold:</p> <p>$$R_n(y', y_{n-1}) \approx R_n(y', R_{n-1}(y', y_{n-2}))$$ $$R_1(y', y_0) = y_0 + (x_1 - x_0) y'(x_0, y_0)$$</p> <p>And this gives rise to the "K" values mentioned in the question.</p>
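<p>The whole story can be condensed into a short sketch (my addition, not part of the answer): the classical 4th-order tableau, whose <code>k1..k4</code> are exactly the "K values" in question, compared against Euler on <span class="math-container">$y'=y$</span>, <span class="math-container">$y(0)=1$</span>, for which <span class="math-container">$y(1)=e$</span>:</p>

```python
import math

def euler(f, y0, x0, x1, n):
    # first-order method: follow the slope at the left endpoint of each step
    h, y, x = (x1 - x0) / n, y0, x0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def rk4(f, y0, x0, x1, n):
    # classical 4th-order Runge-Kutta; k1..k4 sample the slope inside the step
    h, y, x = (x1 - x0) / n, y0, x0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

f = lambda x, y: y  # y' = y, exact solution y(x) = e^x
err_euler = abs(euler(f, 1.0, 0.0, 1.0, 100) - math.e)
err_rk4 = abs(rk4(f, 1.0, 0.0, 1.0, 100) - math.e)
print(err_euler, err_rk4)  # RK4 is far more accurate for the same step count
```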
1,748,751
<p>By K values, I mean the values described here:</p> <p><a href="https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods#Explicit_Runge.E2.80.93Kutta_methods" rel="nofollow">https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods#Explicit_Runge.E2.80.93Kutta_methods</a></p> <p>I know how the K values in the Runge-Kutta method can be proven to be correct, by comparing their taylor expansion with the taylor expansion of the function to be approximated, but how were they originally figured out? </p> <p>I think I understand the Runge-Kutta method derivation when you have the derivative in terms of one variable f'(t). It seems to be a direct consequence of Simpson's rule and its higher order equivalents. But when it is some form of first order differential equation (i.e. f'(t, y(t))), I am still lost. Is there an equivalent of Simpson's rule for multivariable functions? </p>
Community
-1
<p>From the original article ("Beitrag zur näherungsweisen Integration totaler Differentialgleichungen"), Kutta used the method of <em>indeterminate coefficients</em>, expressing the increment of the function when moving from $f(x,y)$ to $f(x+\delta,y+\Delta)$, where $\Delta$ is estimated in terms of several intermediate approximations of $f$. Using Taylor's theorem to first order, he adjusted the coefficients to achieve a match with the differential equation.</p> <p>He referred neither to polynomial approximations nor to Simpson's rule.</p>
1,637,879
<p>can you help me identify the mistake I'm making while integrating?</p> <p>Question:</p> <p>$$\int{\frac{2dx}{x\sqrt{4x^2-1}}}, x&gt;\frac{1}{2}$$</p> <p>my solution</p> <p>$$\int{\frac{2dx}{x\sqrt{4x^2-1}}}=2\int{\frac{dx}{x\sqrt{(2x)^2-1}}}$$</p> <p>let $$u=2x, x=1/2u, du=2dx, 1/2du=dx$$</p> <p>$$=\frac{2}{2}\int{\frac{du}{1/2u\sqrt{u^2-1}}}$$</p> <p>$$=2\int{\frac{du}{u\sqrt{u^2-1}}}$$</p> <p>It is known $\int{\frac{dx}{x\sqrt{x^2-a^{2}}}}=\frac{1}{a}sec^{-1}{|\frac{x}{a}|}+C$</p> <p>so</p> <p>$$=2\int{\frac{du}{u\sqrt{u^2-1}}}=2(sec^{-1}u)+C$$</p> <p>$$=2(sec^{-1}2x)+C$$</p> <p>unfortunately Wolfram Alpha says the answer is $$-2(tan^{-1}\frac{1}{\sqrt{4x^{2}-1}})+C$$</p> <ol> <li><p>Are these answers equivalent?</p></li> <li><p>What identities should i use to test equivalence?</p></li> <li><p>If i made a mistake, where is it?</p></li> </ol> <p>Thanks staxers</p>
John B
301,742
<blockquote> <p>The equality of the <strong>time average</strong> and the <strong>space average</strong> essentially means that each trajectory travels through the space so randomly that everything happens as if it reached everywhere and, moreover, spent time in each region proportional to the size of that region.</p> </blockquote> <p>In order to be a bit more precise, we need to go to the foundations of ergodic theory.</p> <p>Ergodic theory is essentially the study of maps and flows preserving a measure. This includes the study of the stochastic properties of the dynamics, such as ergodicity. The origins go back to statistical mechanics with an attempt to apply probability theory to conservative mechanical systems (recall that any Hamiltonian system preserves the Liouville measure, and thus the natural relation to ergodic theory).</p> <p>Boltzmann's ergodic hypothesis corresponds to assuming that typical points in a given energy level have a time average equal to the space average on that energy level (energy levels of conservative systems are invariant and so we cannot escape from them). From the mathematical point of view this requires the notion of ergodicity, which simply means that any invariant set has either zero or full measure (say Liouville measure).</p> <p>In another (in fact more basic) direction, the existence of a finite invariant measure gives rise to the concept of <strong>qualitative</strong> Poincaré recurrence (strictly speaking this is unrelated to ergodicity). So, although it is true that almost "every phase space trajectory comes arbitrarily close to every phase space point with the same values of all conserved variables as the initial point of the trajectory", this statement is unrelated to ergodicity.</p>
1,530,702
<p>Can anybody help me out with getting an expression of the values of $\lambda$ for a matrix $A$ for which $det(A-\lambda I)$ equals the determinant of a matrix with on the main diagonal $-\lambda$, on the diagonal above the main diagonal $\dfrac{1}{2}$ and on the diagonal under the main diagonal $\frac{1}{2} \lambda$.</p>
mvw
86,776
<p>The determinant of the tridiagonal matrix can be expressed by the recurrence (<a href="https://en.wikipedia.org/wiki/Tridiagonal_matrix#Determinant" rel="nofollow">link</a>): $$ f_n = -\lambda f_{n-1} -\frac{1}{4}\lambda f_{n-2} \quad (*) $$ and the initial values $f_0=1$, $f_{-1}= 0$.</p> <p>For $\lambda = 0$ the determinant vanishes, so assume $\lambda \ne 0$ in what follows.</p> <p>The recurrence gives $f_1 = -\lambda$, $f_2 = \lambda^2 - \frac{1}{4} \lambda$, $f_3 = -\lambda^3 + \frac{1}{4} \lambda^2+\frac{1}{4}\lambda^2=-\lambda^3+\frac{1}{2}\lambda^2$ and so on.</p> <p><strong>Solving the recurrence relation:</strong></p> <p>This is a homogeneous linear recurrence relation with characteristic polynomial $$ p(t) = t^2 + \lambda t + \lambda/4 \\ $$ with roots $$ 0 = (t + \lambda/2)^2 + (\lambda -\lambda^2)/4 \iff \\ t_{1,2} = \frac{\pm \sqrt{\lambda(\lambda-1)}-\lambda}{2} $$</p> <p><strong>Case two roots:</strong></p> <p>For $\lambda \ne 1$ the two roots are distinct, and this leads to solutions $$ f_n = \frac{1}{2^n} \left( c_1 \left(\sqrt{\lambda(\lambda-1)}-\lambda\right)^n + c_2 \left(-\sqrt{\lambda(\lambda-1)}-\lambda\right)^n \right) $$ Inserting $f_0$ and $f_1$ gives $$ 1=c_1+c_2 \\ -\lambda = c_1 t_1 + c_2 t_2 = c_1(t_1-t_2)+ t_2 $$ thus $$ c_1 = \frac{t_2+\lambda}{t_2-t_1} \quad\quad c_2 = \frac{t_1+\lambda}{t_1-t_2} $$ We have \begin{align} t_2 + \lambda &amp;= \frac{-\sqrt{\lambda(\lambda-1)}-\lambda}{2} + \lambda = \frac{-\sqrt{\lambda(\lambda-1)}+\lambda}{2} \\ t_2 - t_1 &amp;= \frac{-\sqrt{\lambda(\lambda-1)}-\lambda}{2} - \frac{\sqrt{\lambda(\lambda-1)}-\lambda}{2} = -\sqrt{\lambda(\lambda-1)} \end{align} so \begin{align} c_1 &amp;= -\frac{-\sqrt{\lambda(\lambda-1)}+\lambda}{2\sqrt{\lambda(\lambda-1)}} = \frac{1}{2} - \frac{1}{2}\sqrt{\frac{\lambda}{\lambda-1}} \\ c_2 &amp;= \frac{1}{2} + \frac{1}{2}\sqrt{\frac{\lambda}{\lambda-1}} \end{align}</p> <p><strong>Case one root:</strong></p> <p>For $\lambda = 1$ we have only one root $$ t = -\frac{1}{2} $$ and try $$ f_n = \left(-\frac{1}{2}\right)^n (c_3 + c_4 n) $$ Inserting
$f_0 = 1$ gives $c_3 = 1$, and then $f_1 = -\frac{1}{2}(c_3 + c_4) = -\lambda = -1$ gives $c_4 = 1$. </p> <p><strong>Result:</strong></p> <p>This results in the determinant value $$ f_n = \begin{cases} 0 &amp; ; \lambda = 0 \\ \left(-\frac{1}{2}\right)^n \left[ 1 + n \right] &amp; ; \lambda = 1 \\ \frac{1}{2^{n+1}} \left[ \left(1 - \sqrt{\frac{\lambda}{\lambda-1}}\right) \left(\sqrt{\lambda(\lambda-1)}-\lambda\right)^n + \\ \left(1 + \sqrt{\frac{\lambda}{\lambda-1}}\right) \left(-\sqrt{\lambda(\lambda-1)}-\lambda\right)^n \right] &amp; ; \text{else} \\ \end{cases} $$</p> <p><strong>Test:</strong></p> <p>The $f_n$ were calculated via the recursive equation $(*)$; $g(n)$ and $h(n)$ are the two-root and one-root cases of the result formula above. Both evaluations should match.</p> <p>Two roots, $n = 3$:</p> <pre><code>(%i) [f3, expand(radcan(g(3)))]; (%o) [L^2/2 - L^3, L^2/2 - L^3] </code></pre> <p>One root, $n = 3$:</p> <pre><code>(%i) [ev(f3, L=1), expand(radcan(h(3)))]; (%o) [-1/2, -1/2] </code></pre> <p>Two roots, $n = 8$:</p> <pre><code>(%i) [f8, expand(radcan(g(8)))]; (%o) [L^8 - 7*L^7/4 + 15*L^6/16 - 5*L^5/32 + L^4/256, L^8 - 7*L^7/4 + 15*L^6/16 - 5*L^5/32 + L^4/256] </code></pre> <p>One root, $n = 8$:</p> <pre><code>(%i) [ev(f8, L=1), expand(radcan(h(8)))]; (%o) [9/256, 9/256] </code></pre>
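As an independent cross-check (my own addition, not part of the answer), the recurrence, a direct determinant computation, and the two-root closed form can all be compared numerically; the sample values $\lambda = 2$ and $n = 8$ are arbitrary choices.

```python
# Sketch: compare the recurrence f_n = -L f_{n-1} - (L/4) f_{n-2} with a
# direct Gaussian-elimination determinant of the tridiagonal matrix
# (diagonal -L, superdiagonal 1/2, subdiagonal L/2) and with the two-root
# closed form, at the sample value L = 2.
import math

L, n = 2.0, 8

# 1) recurrence, starting from f_0 = 1, f_1 = -L
f = [1.0, -L]
for _ in range(2, n + 1):
    f.append(-L * f[-1] - (L / 4) * f[-2])
f_rec = f[n]

# 2) direct determinant via Gaussian elimination (no pivoting needed here)
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = -L
    if i + 1 < n:
        A[i][i + 1] = 0.5
        A[i + 1][i] = L / 2
det = 1.0
for k in range(n):
    det *= A[k][k]                       # pivot after previous eliminations
    for i in range(k + 1, n):
        m = A[i][k] / A[k][k]
        for j in range(k, n):
            A[i][j] -= m * A[k][j]
f_det = det

# 3) closed form for two distinct roots (valid since L(L-1) = 2 > 0)
D = math.sqrt(L * (L - 1))
t1, t2 = (D - L) / 2, (-D - L) / 2
c1 = 0.5 - 0.5 * math.sqrt(L / (L - 1))
c2 = 0.5 + 0.5 * math.sqrt(L / (L - 1))
f_closed = c1 * t1 ** n + c2 * t2 ** n

print(f_rec, f_det, f_closed)            # all three should agree (~87.0625)
```

All three routes give the same value, which matches the Maxima checks above once $L = 2$ is substituted into the $n = 8$ polynomial.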
1,517,456
<blockquote> <p>Rudin Chp. 5 q. 13:</p> <p>Suppose <span class="math-container">$a$</span> and <span class="math-container">$c$</span> are real numbers, <span class="math-container">$c &gt; 0$</span>, and <span class="math-container">$f$</span> is defined on <span class="math-container">$[-1, 1]$</span> by</p> <p><span class="math-container">$$f(x) = x^a \sin(|x|^{-c}), x≠0$$</span> <span class="math-container">$$f(x) = 0, x=0$$</span></p> <p>(b) <span class="math-container">$f'(0)$</span> exists iff <span class="math-container">$a &gt; 1$</span></p> </blockquote> <p>To me, it seems quite clear that <span class="math-container">$a&gt;1$</span> would work because it is intuitively clear that <span class="math-container">$f(x) → 0$</span> as <span class="math-container">$x → 0$</span>. The function <span class="math-container">$\sin(u)$</span> has a range of <span class="math-container">$[-1, 1]$</span>, so while <span class="math-container">$\sin(|x|^{-c})$</span> will oscillate infinitely as <span class="math-container">$x→0$</span>, <span class="math-container">$x^a → 0$</span> for <span class="math-container">$a &gt; 0$</span>. It is clear that this is continuous for <span class="math-container">$a&gt;0$</span>.</p> <p>But I need to show that it is differentiable for <span class="math-container">$x=0$</span> iff <span class="math-container">$a&gt;1$</span>. And this is where I have gotten stuck. I am able to show that it is <em>not</em> differentiable for <span class="math-container">$a ≤ 1$</span>. But when I try to show it is differentiable for <span class="math-container">$a&gt;1$</span>, I fail to do so. 
I tried to differentiate <span class="math-container">$f(x)$</span> in general (<span class="math-container">$f'(x)$</span>) then show it will not work as <span class="math-container">$x→0$</span> for <span class="math-container">$a≤1$</span>, but this method does not work with <span class="math-container">$a&gt;1$</span>, and I end up with <span class="math-container">$x^{a+1} / |x|^{-c-2}$</span> (plus unimportant constants and cosine). And that is bad because, for example, if a = 2 and c = 10, that limit clearly diverges to infinity.</p> <p>A fellow student claimed to used the definition of the derivative to solve this and I tried this:</p> <p><span class="math-container">$$f'(x) = \lim_{t→x} \frac{f(t) - f(x)}{t-x} = \lim_{t→x} \frac{t^a \sin|t|^{-c} - x^a \sin|x|^{-c}}{t-x}$$</span></p> <p>And we are interested in only <span class="math-container">$f'(0)$</span>, so we can simply:</p> <p><span class="math-container">$$f'(0) = \lim_{t→0} \frac{f(t) - f(0)}{t-0} = \lim_{t→0} \frac{t^a \sin|t|^{-c} - 0}{t}= \lim_{t→0} t^{a-1} \sin|t|^{-c}$$</span> Assume <span class="math-container">$a&gt;1$</span> <span class="math-container">$$=\left[\lim_{t→0} t^{a-1}\right]\left[\lim_{t→0}\sin |t|^{-c}\right] = 0\left[\lim_{t→0}\sin|t|^{-c}\right] = 0$$</span> This is clear because <span class="math-container">$\sin(u)$</span> has a range of [-1, 1].</p> <p>Clearly in the case that <span class="math-container">$a≤1$</span>, this will diverge.</p> <p>Is this all that I need to do? I don't understand why my first method did not work but the second did, if that is indeed all I must do.</p> <p>I am worried about the main concept, not about how my “proof” looks. I can write it out MUCH better on paper, I am struggling to format this well on the computer (and sorry for this!)</p>
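As a quick numerical sanity check of the limit computed above (an illustration only, not a proof; the sample values of $a$ and $c$ are my own choices):

```python
# Sketch: the difference quotient (f(t) - f(0)) / t = t^(a-1) * sin(|t|^(-c))
# should shrink to 0 as t -> 0 when a > 1, but keeps oscillating with
# magnitude up to 1 when a = 1.
import math

def quotient(t, a, c):
    return t ** (a - 1) * math.sin(abs(t) ** (-c))

ts = [10.0 ** (-k) for k in range(1, 8)]
vals_a2 = [abs(quotient(t, a=2, c=10)) for t in ts]   # a > 1: bounded by t
vals_a1 = [abs(quotient(t, a=1, c=10)) for t in ts]   # a = 1: just |sin(...)|

print(vals_a2)   # each value is at most the corresponding t, so tends to 0
print(vals_a1)   # stays somewhere in [0, 1], no limit
```

The bound $|t^{a-1}\sin|t|^{-c}| \le |t|^{a-1}$ visible here is exactly the squeeze that makes the limit argument work for $a > 1$.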
Thomas Andrews
7,933
<p>This is a "between" argument, skipping field theory, but using linear algebra.</p> <p>If $\sqrt[3]{2}\in\mathbb Q(\sqrt[4]{5})$, then multiplication by $\sqrt[3]{2}$ is a linear transformation, $T,$ of $\mathbb Q(\sqrt[4]{5})$ as a vector space over $\mathbb Q$. That linear transformation has a minimal polynomial, which is a factor of its degree-four characteristic polynomial.</p> <p>The minimal polynomial is easily seen to be $x^3-2$: if $p(T)=0$ then $x^3-2$ has to divide $p(x)$.</p> <p>That means the characteristic polynomial must be of the form $(x-a)(x^3-2)$ and have rational coefficients. Is that possible? This would mean that $a$ would have to be both a root of $x^3-2$ and rational, which is impossible. (Every root of the characteristic polynomial of a linear transformation must be a root of its minimal polynomial.)</p>
364,800
<p>Let <span class="math-container">$V$</span> be a connected smooth complex projective curve of negative Euler characteristic. Can there exist a connected smooth complex algebraic curve <span class="math-container">$U$</span> such that there is a non-constant holomorphic map <span class="math-container">$U\to V$</span> but no non-constant holomorphic map from the compactification of <span class="math-container">$U$</span> to <span class="math-container">$V$</span>? Note that we are not merely asking that the map <span class="math-container">$U\to V$</span> does not extend.</p> <p>EDIT: the question considers the smooth compactification of <span class="math-container">$U$</span>.</p>
Francesco Polizzi
7,460
<p>The title and the body of the question seem to ask two different things. Let me give an answer to the second one. Since you are not asking that the compactification of <span class="math-container">$U$</span> is smooth, we can build up an example as follows.</p> <p>Let <span class="math-container">$\bar{U}$</span> be a curve with one cusp <span class="math-container">$p$</span> such that its normalization is a projective genus <span class="math-container">$2$</span> curve <span class="math-container">$V$</span>, and let <span class="math-container">$\nu \colon V \to \bar{U}$</span> be the normalization map. If <span class="math-container">$U=\bar{U} - \{p\}$</span> is the smooth locus of <span class="math-container">$\bar{U}$</span> and <span class="math-container">$q=\nu^{-1}(p)$</span>, the restriction <span class="math-container">$\nu \colon V-\{q\} \to U$</span> is an isomorphism, and so is <span class="math-container">$\nu^{-1} \colon U \to V-\{q\}$</span>.</p> <p>Composing with the inclusion <span class="math-container">$V-\{q\} \to V$</span>, we get a smooth algebraic map <span class="math-container">$U \to V$</span>, whose image is <span class="math-container">$V - \{q\}$</span>. This cannot be extended to a holomorphic map <span class="math-container">$\bar{U} \to V$</span>, since <span class="math-container">$\bar{U}$</span> and <span class="math-container">$V$</span> are not biholomorphic.</p>
3,400,766
<p>I know that by considering the projection <span class="math-container">$q : \mathbb{R}^2 \to \mathbb{R}$</span>, <span class="math-container">$(x, y) \to x$</span>, and the closed subset </p> <p><span class="math-container">$$G = \left\{(x, y) : y \ge \frac 1 x, x &gt; 0\right\}$$</span></p> <p>one can prove that <span class="math-container">$q$</span> is not a closed map. </p> <p>But I'm having some difficulty in understanding why we can consider the projection map <span class="math-container">$q$</span> as a natural map to the quotient space. </p> <p>The definition I have learnt for the natural map is<br> <span class="math-container">$Q:\mathbb{X}\rightarrow\mathbb{X}/\mathbb{M}$</span> such that<br> <span class="math-container">$Q(x)=x+\mathbb{M}$</span>.<br> So why is this the same as the projection?</p>
DanielWainfleet
254,665
<p>The set of equivalence classes of <span class="math-container">$\sim,$</span> where <span class="math-container">$(x,y)\sim (x',y')\iff x=x',$</span> is <span class="math-container">$\Bbb R^2/\Bbb M=\{\{x\}\times \Bbb R: x\in \Bbb R\}.$</span></p> <p>The quotient topology on <span class="math-container">$\Bbb R^2/\Bbb M$</span> is the <span class="math-container">$\subset$</span>-largest topology such that <span class="math-container">$q((x,y))=\{x\}\times \Bbb R$</span> is continuous.</p> <p>The product topology on <span class="math-container">$\Bbb R^2$</span> is the <span class="math-container">$\subset$</span>-smallest topology on <span class="math-container">$\Bbb R^2$</span> such that both <span class="math-container">$p_1((x,y))=x$</span> and <span class="math-container">$p_2((x,y))=y$</span> are continuous.</p> <p>So the map <span class="math-container">$\psi(\{x\}\times \Bbb R)=(x,0)$</span> is a homeomorphism to the sub-space <span class="math-container">$\Bbb R\times \{0\}$</span> (a sub-space of <span class="math-container">$\Bbb R^2$</span>).</p> <p>The maps <span class="math-container">$q$</span> and <span class="math-container">$p_1^*((x,y))=(x,0)$</span> are not the same functions, but can be called "the same for topological purposes", as <span class="math-container">$\psi$</span> is a homeomorphism and <span class="math-container">$\psi\circ q=p_1^*.$</span></p>
172,080
<p>Here is a fun integral I am trying to evaluate:</p> <p>$$\int_{0}^{\infty}\frac{\sin^{2n+1}(x)}{x} \ dx=\frac{\pi \binom{2n}{n}}{2^{2n+1}}.$$</p> <p>I thought about integrating by parts $2n$ times and then using the binomial theorem for $\sin(x)$, that is, using $\dfrac{e^{ix}-e^{-ix}}{2i}$ form in the binomial series.</p> <p>But, I am having a rough time getting it set up correctly. Then, again, there is probably a better approach. </p> <p>$$\frac{1}{(2n)!}\int_{0}^{\infty}\frac{1}{(2i)^{2n}}\sum_{k=0}^{n}(-1)^{2n+1-k}\binom{2n}{k}\frac{d^{2n}}{dx^{2n}}(e^{i(2k-2n-1)x})\frac{dx}{x^{1-2n}}$$</p> <p>or something like that. I doubt if that is anywhere close, but is my initial idea of using the binomial series for sin valid or is there a better way?.</p> <p>Thanks everyone.</p>
robjohn
13,854
<p>Since $\dfrac{\sin^{2n+1}(x)}{x}$ is an even function, we can integrate over the whole real line and divide by $2$.</p> <p>Write $\sin(x)=\dfrac{e^{ix}-e^{-ix}}{2i}$. Since there are no singularities and the integrand vanishes as $|x|\to\infty$, we can move the path of integration in the direction of $-i$. Expand using the binomial theorem, and close the paths of integration in two ways: for the integrands with $e^{+ikx}$ circle back counter-clockwise around the upper half-plane ($\gamma^+$); for the integrands with $e^{-ikx}$ circle back clockwise around the lower half-plane ($\gamma^-$).</p> <p>Note that $\gamma^-$ contains no poles, so those integrals can be ignored.</p> <p>We will use the identity $$ \begin{align} \sum_{k=0}^m(-1)^k\binom{n}{k} &amp;=\sum_{k=0}^m(-1)^k\binom{n}{k}\binom{m-k}{m-k}\\ &amp;=(-1)^m\sum_{k=0}^m\binom{n}{k}\binom{-1}{m-k}\\ &amp;=(-1)^m\binom{n-1}{m} \end{align} $$ Finally, to the point: $$ \begin{align} \int_0^\infty\sin^{2n+1}(x)\frac{\mathrm{d}x}{x} &amp;=\frac12\int_{-\infty}^\infty\sin^{2n+1}(x)\frac{\mathrm{d}x}{x}\\ &amp;=\left(-\frac14\right)^{n+1}i\int_{-\infty}^\infty\left(e^{ix}-e^{-ix}\right)^{2n+1}\frac{\mathrm{d}x}{x}\\ &amp;=\left(-\frac14\right)^{n+1}i\sum_{k=0}^{n}(-1)^k\binom{2n+1}{k}\int_{\gamma^+}e^{ix(2n-2k+1)}\frac{\mathrm{d}x}{x}\\ &amp;+\left(-\frac14\right)^{n+1}i\sum_{k=n+1}^{2n+1}(-1)^k\binom{2n+1}{k}\int_{\gamma^-}e^{ix(2n-2k+1)}\frac{\mathrm{d}x}{x}\\ &amp;=\left(-\frac14\right)^{n+1}i\sum_{k=0}^{n}(-1)^k\binom{2n+1}{k}2\pi i\\ &amp;=\left(-\frac14\right)^{n}\frac{\pi}{2}\sum_{k=0}^{n}(-1)^k\binom{2n+1}{k}\\ &amp;=\left(-\frac14\right)^{n}\frac{\pi}{2}(-1)^n\binom{2n}{n}\\ &amp;=\frac{1}{4^n}\frac{\pi}{2}\binom{2n}{n} \end{align} $$</p>
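The key combinatorial identity used above can be confirmed by brute force (my own quick check, not part of the derivation):

```python
# Sketch: verify sum_{k=0}^m (-1)^k C(n, k) = (-1)^m C(n-1, m) for small
# n and m, and the specialization n -> 2n+1, m -> n that appears in the
# final step of the contour-integral evaluation.
import math

def alt_sum(n, m):
    return sum((-1) ** k * math.comb(n, k) for k in range(m + 1))

identity_ok = all(
    alt_sum(n, m) == (-1) ** m * math.comb(n - 1, m)
    for n in range(1, 12) for m in range(n)
)

# With n -> 2n+1 and m -> n this is exactly the sum collapsing to
# (-1)^n C(2n, n) in the last lines of the computation.
step_ok = all(
    alt_sum(2 * n + 1, n) == (-1) ** n * math.comb(2 * n, n)
    for n in range(10)
)

print(identity_ok, step_ok)   # True True
```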
1,896,024
<p><span class="math-container">$f(n) = 2n^2 + n$</span></p> <p><span class="math-container">$g(n) = O(n^2)$</span></p> <p>The question is to find the mistake in the following process:</p> <blockquote> <p><span class="math-container">$f(n) = O(n^2) + O(n)$</span></p> <p><span class="math-container">$f(n) - g(n) = O(n^2) + O(n) - O(n^2)$</span></p> <p><span class="math-container">$f(n)-g(n) = O(n)$</span></p> </blockquote> <p>From how I understand it, Big-Oh represents the upper bound on the number of operations (as <span class="math-container">$n$</span> tends to a very large value). So, the difference between a term of order <span class="math-container">$n^2$</span> and another term of order <span class="math-container">$n^2$</span> should be negligible if <span class="math-container">$n$</span> is very large.</p> <p>But the individual steps seem correct. It seems to me that the mistake is that when doing the subtraction at large values, the <span class="math-container">$O(n)$</span> will also get consumed.</p> <p>I need clarification on whether I'm correct. If I'm not, then where is the mistake?</p>
naslundx
130,817
<p><strong>Hint</strong></p> <p>What if $g(n) = n^2$?</p> <p>What does $f(n) - g(n)$ simplify to, and is it $O(n)$ or $O(n^2)$?</p> <p><strong>Clarification</strong></p> <p>With the above, we get $f(n) - g(n) = 2n^2 + n - n^2 = n^2 + n = O(n^2)$.</p> <p>Remember that, by definition, $f(n) = O(n^p)$ means $f(n) \leq Cn^p$ for some constant $C$ once $n$ is large enough. This still holds true for any general $g(n) = O(n^2)$.</p>
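The hint can be made concrete numerically (my own illustration, using the particular choice $g(n) = n^2$ from the hint):

```python
# Sketch: with f(n) = 2n^2 + n and the particular O(n^2) function
# g(n) = n^2, the difference f(n) - g(n) = n^2 + n grows like n^2,
# not like n, so the claimed cancellation of the O(n^2) terms fails.
f = lambda n: 2 * n * n + n
g = lambda n: n * n

for n in (10, 1_000, 100_000):
    d = f(n) - g(n)
    print(n, d / (n * n), d / n)   # ratio to n^2 approaches 1; ratio to n blows up
```

The point is that $O(n^2) - O(n^2)$ only bounds the difference by $O(n^2)$; it does not force the quadratic parts to cancel.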
1,077,594
<p>Let $C[a,b]$ be the space of continuous functions on $[a,b]$ with the norm $$ \left\Vert{f}\right\Vert=\max_{a \leq t \leq b}\left| f(t)\right| $$</p> <p>Then $C[a,b]$ is a Banach space. </p> <p>Let's view $C^1[a,b]$ as a subspace of it. My question is, is this $C^1[a,b]$ a Banach space?</p> <p>I think it is, since every Cauchy sequence $\{f_n\}$ in $C^1[a,b]$ is also a Cauchy sequence in $C[a,b]$, so it converges to a function $f$ in $C[a,b]$. But convergence in $C[a,b]$ is uniform, so $f$ is in $C[a,b]$ too, from which it follows that $C^1[a,b]$ is complete, i.e. a Banach space.</p> <p>However, I just read a theorem named <a href="http://en.wikipedia.org/wiki/Closed_graph_theorem" rel="nofollow">Closed Graph Theorem</a>, stating that</p> <blockquote> <p>(Closed Graph Theorem) Let $X$ and $Y$ be two Banach spaces, and $T$ a closed linear operator from $A\subset X$ to $Y$. If $A$ is closed in $X$, then $T$ is continuous.</p> </blockquote> <p>Applying this theorem to the above case, let $X=C^1[a,b]$, $Y=C[a,b]$ and $T=\frac{d}{dt}$ from $X$ to $Y$. We can prove that $T$ is a closed linear operator. Note that $X$ is closed in $X$, so by the above theorem $T$ is continuous.</p> <p>However, it is easy to prove that the differential operator is NOT continuous.</p> <p>I am sure the Closed Graph Theorem and the last statement are true, so I think $C^1[a,b]$ is not Banach.</p> <p>Could anyone tell me why?</p>
Ron Gordon
53,268
<p>The cosine function does not vanish on the semicircle as $R \to \infty$; in fact, it does the opposite. You need to either 1) take the real part of $e^{i x}$ in the upper half plane, or 2) use $\cos{x} = (e^{i x}+e^{-i x})/2$ and use both the upper and lower half planes, respectively.</p>
2,523,112
<p>Let $f\left(x\right)$ be differentiable on the interval $\left(a,b\right)$ with $f'\left(x\right)&gt;0$ on that interval. If $\underset{x\rightarrow a+}{\lim}f\left(x\right)=0$, is $f\left(x\right)&gt;0$ on that interval?</p> <p>I think this proposition is true by intuition, but I wonder whether that intuition is strictly correct mathematically, and what conditions would have to be added to make it true. I can't fully trust my intuition, due to the possibility of a flaw.</p>
Paramanand Singh
72,031
<p>This is a consequence of the mean value theorem. Redefine $f$ at $a$ by $f(a) =0$ so that $f$ is continuous on $[a, b) $. If $x\in(a, b) $ then we have via the mean value theorem $$f(x) =f(a) +(x-a) f'(c_{x} )&gt;0$$ where $c_{x} $ is some point in $(a, x) $ depending on $x$. </p>
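A numerical illustration (my own example, not part of the proof): take $f(x) = x - \sin x$ on $(0, \pi)$, which has $f'(x) = 1 - \cos x &gt; 0$ there and $f(x) \to 0$ as $x \to 0^+$.

```python
# Sketch: f(x) = x - sin(x) on (0, pi) satisfies the hypotheses:
# f'(x) = 1 - cos(x) > 0 and f(x) -> 0 as x -> 0+.  Sampling confirms
# f > 0 everywhere on the interval, as the mean value theorem argument
# predicts.
import math

f = lambda x: x - math.sin(x)
fprime = lambda x: 1 - math.cos(x)

xs = [math.pi * k / 1000 for k in range(1, 1000)]   # sample of (0, pi)
all_positive = all(f(x) > 0 for x in xs)
derivative_positive = all(fprime(x) > 0 for x in xs)
print(all_positive, derivative_positive)   # True True
```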
4,612
<p>I would like to make a slope field. Here is the code</p> <pre><code>slopefield = VectorPlot[{1, .005 * p*(10 - p) }, {t, -1.5, 20}, {p, -10, 16}, Ticks -&gt; None, AxesLabel -&gt; {t, p}, Axes -&gt; True, VectorScale -&gt; {Tiny, Automatic, None}, VectorPoints -&gt; 15] </code></pre> <p>I solved the differential equations and plotted the curves manually. Three questions:</p> <ol> <li>Is there an easier way to do it?</li> <li><code>Ticks -&gt; None</code> doesn't seem to work. I still get labels for the tick marks.</li> <li>I'd like to selectively label 2 tick marks.</li> </ol>
Heike
46
<p>I'm assuming here that the curves you mentioned are streamlines of the vector field. You can plot those automatically without having to solve any differential equations by using the option <code>StreamPoints</code>; for example, to plot the streamlines going through the points</p> <pre><code>points = Transpose@ArrayPad[{Range[-10, 16, 2]}, {{1, 0}, {0, 0}}] (* ==&gt; {{0, -10}, {0, -8}, {0, -6}, {0, -4}, {0, -2}, {0, 0}, {0, 2}, {0, 4}, {0, 6}, {0, 8}, {0, 10}, {0, 12}, {0, 14}, {0, 16}} *) </code></pre> <p>you can do</p> <pre><code>slopefield = VectorPlot[{1, .005*p*(10 - p)}, {t, -1.5, 20}, {p, -10, 16}, FrameTicks -&gt; None, AxesLabel -&gt; {t, p}, Axes -&gt; True, VectorScale -&gt; {Tiny, Automatic, None}, VectorPoints -&gt; 15, StreamPoints -&gt; points, StreamStyle -&gt; {Red, "Line"}] </code></pre> <p><img src="https://i.stack.imgur.com/0viS2.png" alt="Mathematica graphics"></p>
2,913,017
<p>Imagine I have a real random variable $X$ with some distribution (continuous, discrete, or continuous with atoms).</p> <p>Now imagine I have i.i.d. copies $X_1,...,X_n$, all independent and each distributed as $X$.</p> <p>My claim is:</p> <p>$$\mathbb{P}(X_2&gt;X_1)=\mathbb{P}(X_2&lt;X_1)$$ My second claim is the following: if I order them by size, so that $X_{(1)}&lt;X_{(2)}&lt;\ldots&lt;X_{(n)}$, and I define the interval $I_n=[X_{(1)},X_{(n)}]$, then I claim:</p> <p>$$\mathbb{P}(X_{n+1}&lt;X_{(1)})=\mathbb{P}(X_{n+1}&gt;X_{(n)})$$</p> <p>So the probability that the $(n+1)$-th number exceeds the interval on the left equals the probability that it exceeds it on the right.</p> <p>I guess the first one is true, but the second one is not.</p> <p>E.g., assume $X$ can take the values 0 and 1, and assume $n=3$. Then</p> <p>$$\mathbb{P}(X_3&gt;{X_1,X_2})=\mathbb P (X_3=1)\mathbb P (X_2=0)\mathbb P (X_1=0)=\mathbb P (X=1)\mathbb P (X=0)\mathbb P (X=0)$$ but also $$\mathbb{P}(X_3&lt;{X_1,X_2})=\mathbb P (X_3=0)\mathbb P (X_2=1)\mathbb P (X_1=1)=\mathbb P (X=0)\mathbb P (X=1)\mathbb P (X=1)$$</p> <p>which is generally not the same. But what I am wondering is whether there are simple conditions under which it would become true.</p>
leonbloy
312
<p>Let $Y_n =\min(X_1,X_2, \ldots, X_n)$, and let $y_n=\sum_{i=1}^n[X_i=Y_n]$ count the number of elements that attain that minimum. Analogously, let $Z_n$ and $z_n$ be the maximum and maximum-count.</p> <p>Then, by symmetry $P( X_{n} = Y_n \wedge y_n=1)=P(X_n=Y_n) P(y_n=1 \mid X_n=Y_n)=\frac{1}{n} P(y_n=1)$</p> <p>Then, essentially you are asking if $P(y_n=1)=P(z_n=1)$ , that is, if the probability of having a single maximum equals the probability of having a single minimum. This is not true in general.</p> <p>It's true for a continuous variable (continuous CDF) because in that case the probability of having a single extremum equals $1$. It's also true for a symmetric (around the median) random variable. I'm not sure if there's a simple characterization of the CDFs for which it is true in general.</p> <p>Added:</p> <p>Let $F(x) = P(X \le x)$ be the CDF, and let $p(x)= F(x) - F(x^-)$. </p> <p>Then the probability of having a single minimum in $n+1$ realizations equals</p> <p>$$A=p(y_{n+1}=1)= \int \left(\frac{1-F(x)}{1-F(x^-)}\right)^n dF(x)= \int \left(1-\frac{p(x)}{1+p(x)-F(x)}\right)^n dF(x) \tag{1}$$</p> <p>Similarly, for the maximum:</p> <p>$$B=p(z_{n+1}=1)= \int \left(\frac{F(x^-)}{F(x)}\right)^n dF(x) = \int \left(1- \frac{p(x)}{F(x)}\right)^n dF(x) \tag{2}$$</p> <p>If $F(x)$ has finitely many discontinuities at $x_i$, $i=1,2\cdots k$ (perhaps the result is also valid for more general settings), we can write $F(x)=F_c(x) + \sum_i p(x_i)u(x-x_i)$ where $F_c(x)$ is continuous and $u(\cdot)$ is the unit-step function.
Then</p> <p>$$\begin{align} A &amp;=\sum_i p(x_i) \left(1-\frac{p(x_i)}{1+p(x_i)-F(x_i)}\right)^n +F_c(+\infty)\\ &amp;=1- \sum_i p(x_i)\left[1- \left(1-\frac{p(x_i)}{1+p(x_i)-F(x_i)}\right)^n \right]\tag{3} \end{align} $$</p> <p>$$\begin{align} B&amp;=\sum_i p(x_i) \left(1- \frac{p(x_i)}{F(x_i)}\right)^n +F_c(+\infty)\\ &amp;=1- \sum_i p(x_i)\left[1- \left(1- \frac{p(x_i)}{F(x_i)}\right)^n \right] \tag{4} \end{align}$$</p> <p>Of course, $A=B=1$ if $F(x)$ is continuous. Also, $A=B$ if the probability (both the continuous and the discrete parts!) is symmetric. There's not much more to say in general, I think...</p>
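The qualitative claim, that a unique minimum and a unique maximum are equally likely for symmetric distributions but not in general, can be checked by exact enumeration for a small discrete example (my own illustration; the Bernoulli parameter $0.7$ and the three draws are arbitrary choices):

```python
# Sketch: for n+1 = 3 i.i.d. draws from a two-point distribution with
# P(X=1) = p and P(X=0) = 1-p, compute by exact enumeration the
# probability that the minimum is attained exactly once (y = 1) and
# likewise for the maximum (z = 1).
from itertools import product

def unique_extrema_probs(p, n_draws=3):
    q = 1 - p
    p_y1 = p_z1 = 0.0
    for outcome in product((0, 1), repeat=n_draws):
        prob = 1.0
        for x in outcome:
            prob *= p if x == 1 else q
        if outcome.count(min(outcome)) == 1:
            p_y1 += prob
        if outcome.count(max(outcome)) == 1:
            p_z1 += prob
    return p_y1, p_z1

print(unique_extrema_probs(0.7))   # approx (0.441, 0.189): asymmetric, unequal
print(unique_extrema_probs(0.5))   # approx (0.375, 0.375): symmetric, equal
```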
3,262,714
<p>So, I need an exponential function of the form <span class="math-container">$e^{-ax}$</span> that is 1 at <span class="math-container">$x=0$</span> and approaches <span class="math-container">$0.3$</span> as <span class="math-container">$x \rightarrow \infty$</span>. I tried doing <span class="math-container">$e^{-ax} + 0.3$</span>, but that only led to the function starting at <span class="math-container">$1.3$</span> (although it did approach <span class="math-container">$0.3$</span> as <span class="math-container">$x \rightarrow \infty$</span>).</p> <p>The answer is probably really simple but I can't seem to figure it out.</p>
Peter
82,961
<p>Choose the function <span class="math-container">$$f(x)=0.3+0.7e^{-x}$$</span></p>
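More generally (my own remark, not part of the answer), to pin a function of the form $a + b\,e^{-x}$ to the constraints $f(0) = y_0$ and $f(x) \to y_\infty$, take $a = y_\infty$ and $b = y_0 - y_\infty$:

```python
# Sketch: build f(x) = a + b*exp(-x) from the endpoint constraints
# f(0) = y0 and f(x) -> y_inf as x -> infinity, which force
# a = y_inf and b = y0 - y_inf.
import math

def make_decay(y0, y_inf):
    a, b = y_inf, y0 - y_inf
    return lambda x: a + b * math.exp(-x)

f = make_decay(1.0, 0.3)     # the function 0.3 + 0.7 e^{-x} from the answer
print(f(0.0))                # 1.0
print(f(50.0))               # very close to 0.3
```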
1,134,854
<blockquote> <p>In complex analysis, let $a, b&gt;0$ in $\mathbb R$, $f(s)=\int^{b}_{a}1/t^s dt$, then $f$ is holomorphic for $Re(s)&gt;0$.</p> </blockquote> <p>If $s\neq 1$, then $f(s)=\frac{b^{1-s}}{1-s}-\frac{a^{1-s}}{1-s}$, but if $s=1$, then $f(s)=\ln\big(\frac{b}{a}\big)$; these seem quite different in form, so how does one prove that $f$ is holomorphic?</p>
Christian Blatter
1,303
<p>Assume $0&lt;a&lt;b$ and write $$f(s):=\int_a^b {1\over {\mathstrut t}^s}\&gt;dt=\int_a^b e^{-s\,\log t}\&gt;dt=\int_{\log a}^{\log b} e^{(1-s)u}\&gt;du\ .$$ Now it's obvious that $f$ is an entire function: we can differentiate under the integral sign.</p>
184,564
<p>If $\frac{a}{c} &gt; \frac{b}{d}$, then the mediant of these two fractions is defined as $\frac{a+b}{c+d}$ and can be shown to lie strictly between the two fractions. </p> <p>My question is: can you prove the following property of mediants? If $\left|\frac{a}{c} - x\right| &gt; \left|x - \frac{b}{d}\right|$, then $\left|\frac{b}{d} - m\right| &lt; |m - x|$, where $m = \frac{a+b}{c+d}$ is the mediant, for any $x$ that lies strictly between $\frac{a}{c}$ and $\frac{b}{d}$.</p>
bartgol
33,868
<p>In my opinion, trying to learn CFD before having at least a basic knowledge of Numerical Analysis is like trying to learn multiplication before addition: it's not impossible, but not the best idea.</p> <p>For Numerical Analysis, I studied from <a href="http://books.google.com/books?id=31m4ahn_KfkC&amp;printsec=frontcover&amp;dq=numerical%20mathematics&amp;hl=en&amp;sa=X&amp;ei=7U68ULDzJJTA9gSyjoCwAQ&amp;ved=0CDAQ6AEwAA" rel="nofollow">this</a> book and I think it's a fair book: it does not go too much into detail, but still gives you a complete picture of the main topics in Numerical Analysis. If you then want to learn something more about, say, Runge-Kutta methods, you can always look into the references or look for a more specific book.</p> <p>As for CFD, I find <a href="http://books.google.com/books?id=j7WmQ1yFjkAC&amp;printsec=frontcover&amp;dq=numerical%20methods%20for%20partial%20differential%20equations%20quarteroni&amp;hl=en&amp;sa=X&amp;ei=wk-8UJT-E4uc9QTAq4GIBA&amp;ved=0CC0Q6AEwAA#v=onepage&amp;q&amp;f=false" rel="nofollow">this</a> a good book. It also has a quick review of basic Numerical Linear Algebra at the beginning...</p> <p>By the way, you will need software to run CFD simulations, because it's not really feasible to develop your own 3D code.</p>
2,056,309
<p>$\textbf{Question}$: Let $f$ be absolutely continuous on the interval $[\epsilon, 1]$ for $0&lt;\epsilon&lt;1$. Does the continuity of $f$ at 0 imply that $f$ is absolutely continuous on $[0,1]$? What if f is also of bounded variation on $[0,1]$?</p> <p>$\textbf{Attempt}$:</p> <p>My thoughts are that $f$ is NOT absolutely continuous on $[0,1]$.</p> <p>The definition from my textbook states that a function $F$ defined on $[a,b]$ is <em>absolutely continuous</em> if for any $\epsilon&gt;0$ there exists $\delta &gt;0$ so that $\sum_{k=1}^{N}|F(b_{k})-F(a_{k})|&lt;\epsilon$ whenever $\sum_{k=1}^{N}(b_{k}-a_{k})&lt;\delta$ and the intervals $(a_{k},b_{k})$ are disjoint.</p> <p>So, I think that the point $0$ is not included in this definition - $f$ is differentiable a.e. which does not necessarily include the boundary at the point 0, even if the point itself exists.</p> <p>Is this correct? My guess is then that the bounded variation assumption will make $F$ absolutely continuous on $[0,1]$ but I am not sure why.</p>
user251257
251,257
<p>Let $h = \mathbb 1_{(0,1]}$. For $x\in \mathbb R$ define $$ g(x) = \sum_{n=1}^\infty 2^n h(x \cdot 2^n - 1) \frac{(-1)^{n+1}}{n}$$ and $$ f(x) = \int_0^x g(t) dt. $$</p> <ol> <li><p>Then, $g$ is improper Riemann integrable and thus $f$ is continuous. </p></li> <li><p>Further, on every compact interval without $0$, $g$ is bounded and thus $f$ is absolutely continuous (in fact Lipschitz). </p></li> <li><p>But, on any neighborhood of $0$, $g$ is not Lebesgue integrable, thus $f$ is not absolutely continuous. </p></li> </ol> <p><strong>For the part that $f$ is additionally of bounded variation:</strong> </p> <p>Notice that $f$ is AC if and only if $f$ is continuous, is BV, and maps measure zero sets to measure zero sets. Let $N\subset [0,1]$ have measure zero. Then, $N \cap [1/n, 1]$ has measure zero too and $$ |f(N)| = \lim_{n\to \infty} |f(N \cap [1/n, 1])| = 0. $$ That is, $f$ is AC on $[0,1]$. </p>
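The mechanism behind the counterexample can be seen numerically (my own illustration): on the dyadic block $(2^{-n}, 2^{1-n}]$ the function $g$ is constant, so the block integrals form the alternating harmonic series, whose absolute values diverge while the signed sums converge.

```python
# Sketch: on (2^-n, 2^(1-n)], g equals 2^n * (-1)^(n+1) / n, and the block
# has length 2^-n, so the integral of g over the block is (-1)^(n+1)/n.
# The absolute values sum like the harmonic series (no Lebesgue
# integrability near 0), while the signed sums converge to log 2
# (improper Riemann integrability, hence continuity of f at 0).
import math

N = 100_000
block_integrals = [(-1) ** (n + 1) / n for n in range(1, N + 1)]

signed_sum = sum(block_integrals)                      # ~ log 2 = f(1)
absolute_sum = sum(abs(t) for t in block_integrals)    # ~ log N, unbounded in N

print(signed_sum, math.log(2))   # close to each other
print(absolute_sum)              # already large, keeps growing with N
```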
54,506
<p><a href="http://www.hardocp.com/news/2011/07/29/batman_equation/" rel="noreferrer">HardOCP</a> has an image with an equation which apparently draws the Batman logo. Is this for real?</p> <p><img src="https://i.stack.imgur.com/VYKfg.jpg" alt="Batman logo"></p> <p><strong>Batman Equation in text form:</strong> \begin{align} &amp;\left(\left(\frac x7\right)^2\sqrt{\frac{||x|-3|}{|x|-3}}+\left(\frac y3\right)^2\sqrt{\frac{\left|y+\frac{3\sqrt{33}}7\right|}{y+\frac{3\sqrt{33}}7}}-1 \right) \\ &amp;\qquad \qquad \left(\left|\frac x2\right|-\left(\frac{3\sqrt{33}-7}{112}\right)x^2-3+\sqrt{1-(||x|-2|-1)^2}-y \right) \\ &amp;\qquad \qquad \left(9\sqrt{\frac{|(|x|-1)(|x|-.75)|}{(1-|x|)(|x|-.75)}}-8|x|-y\right)\left(3|x|+.75\sqrt{\frac{|(|x|-.75)(|x|-.5)|}{(.75-|x|)(|x|-.5)}}-y \right) \\ &amp;\qquad \qquad \left(2.25\sqrt{\frac{|(x-.5)(x+.5)|}{(.5-x)(.5+x)}}-y \right) \\ &amp;\qquad \qquad \left(\frac{6\sqrt{10}}7+(1.5-.5|x|)\sqrt{\frac{||x|-1|}{|x|-1}} -\frac{6\sqrt{10}}{14}\sqrt{4-(|x|-1)^2}-y\right)=0 \end{align}</p>
J. M. ain't a mathematician
498
<p>In fact, the five linear pieces that consist the "head" (corresponding to the third, fourth, and fifth pieces in Shreevatsa's answer) can be expressed in a less complicated manner, like so:</p> <p>$$y=\frac{\sqrt{\mathrm{sign}(1-|x|)}}{2}\left(3\left(\left|x-\frac12\right|+\left|x+\frac12\right|+6\right)-11\left(\left|x-\frac34\right|+\left|x+\frac34\right|\right)\right)$$</p> <p>This can be derived by noting that the functions</p> <p>$$\begin{cases}f(x)&amp;\text{if }x&lt;c\\g(x)&amp;\text{if }c&lt;x\end{cases}$$</p> <p>and $f(x)+(g(x)-f(x))U(x-c)$ (where $U(x)$ is the unit step function) are equivalent, and using the "relation"</p> <p>$$U(x)=\frac{x+|x|}{2x}$$</p> <hr> <p>Note that the elliptic sections (both ends of the "wings", corresponding to the first piece in Shreevatsa's answer) were cut along the lines $y=-\frac37\left((2\sqrt{10}+\sqrt{33})|x|-8\sqrt{10}-3\sqrt{33}\right)$, so the elliptic potion can alternatively be expressed as</p> <p>$$\left(\left(\frac{x}{7}\right)^2+\left(\frac{y}{3}\right)^2-1\right)\sqrt{\mathrm{sign}\left(y+\frac37\left((2\sqrt{10}+\sqrt{33})|x|-8\sqrt{10}-3\sqrt{33}\right)\right)}=0$$</p> <hr> <p>Theoretically, since all you have are arcs of linear and quadratic curves, the chimera can be expressed <em>parametrically</em> using rational B-splines, but I'll leave that for someone else to explore...</p>
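As a quick consistency check (my own addition, not part of the answer), the combined absolute-value formula can be evaluated against the three linear pieces it encodes, read off from the formula itself on the three subintervals of $|x| &lt; 1$:

```python
# Sketch: the combined formula for the "head" should reduce to
# y = 2.25 on |x| < 0.5, y = 3|x| + 0.75 on 0.5 < |x| < 0.75, and
# y = 9 - 8|x| on 0.75 < |x| < 1.
def head(x):
    # sqrt(sign(1 - |x|)) equals 1 for |x| < 1, so it is dropped here
    assert abs(x) < 1
    return 0.5 * (3 * (abs(x - 0.5) + abs(x + 0.5) + 6)
                  - 11 * (abs(x - 0.75) + abs(x + 0.75)))

def piecewise(x):
    ax = abs(x)
    if ax < 0.5:
        return 2.25
    if ax < 0.75:
        return 3 * ax + 0.75
    return 9 - 8 * ax

samples = [-0.95, -0.6, -0.3, 0.0, 0.3, 0.6, 0.95]
agree = all(abs(head(x) - piecewise(x)) < 1e-9 for x in samples)
print(agree)   # True
```

The sums of shifted absolute values switch slope exactly at $|x| = 0.5$ and $|x| = 0.75$, which is what stitches the three segments together in one expression.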
54,506
<p><a href="http://www.hardocp.com/news/2011/07/29/batman_equation/" rel="noreferrer">HardOCP</a> has an image with an equation which apparently draws the Batman logo. Is this for real?</p> <p><img src="https://i.stack.imgur.com/VYKfg.jpg" alt="Batman logo"></p> <p><strong>Batman Equation in text form:</strong> \begin{align} &amp;\left(\left(\frac x7\right)^2\sqrt{\frac{||x|-3|}{|x|-3}}+\left(\frac y3\right)^2\sqrt{\frac{\left|y+\frac{3\sqrt{33}}7\right|}{y+\frac{3\sqrt{33}}7}}-1 \right) \\ &amp;\qquad \qquad \left(\left|\frac x2\right|-\left(\frac{3\sqrt{33}-7}{112}\right)x^2-3+\sqrt{1-(||x|-2|-1)^2}-y \right) \\ &amp;\qquad \qquad \left(9\sqrt{\frac{|(|x|-1)(|x|-.75)|}{(1-|x|)(|x|-.75)}}-8|x|-y\right)\left(3|x|+.75\sqrt{\frac{|(|x|-.75)(|x|-.5)|}{(.75-|x|)(|x|-.5)}}-y \right) \\ &amp;\qquad \qquad \left(2.25\sqrt{\frac{|(x-.5)(x+.5)|}{(.5-x)(.5+x)}}-y \right) \\ &amp;\qquad \qquad \left(\frac{6\sqrt{10}}7+(1.5-.5|x|)\sqrt{\frac{||x|-1|}{|x|-1}} -\frac{6\sqrt{10}}{14}\sqrt{4-(|x|-1)^2}-y\right)=0 \end{align}</p>
Community
-1
<p>The following is what I got from the equations using MATLAB: <img src="https://i.stack.imgur.com/vHI8K.jpg" alt="enter image description here"></p> <hr> <p>Here is the M-File (thanks to this <a href="https://gist.github.com/1119139">link</a>):</p> <pre><code>clf; clc; clear all; syms x y eq1 = ((x/7)^2*sqrt(abs(abs(x)-3)/(abs(x)-3))+(y/3)^2*sqrt(abs(y+3/7*sqrt(33))/(y+3/7*sqrt(33)))-1); eq2 = (abs(x/2)-((3*sqrt(33)-7)/112)*x^2-3+sqrt(1-(abs(abs(x)-2)-1)^2)-y); eq3 = (9*sqrt(abs((abs(x)-1)*(abs(x)-.75))/((1-abs(x))*(abs(x)-.75)))-8*abs(x)-y); eq4 = (3*abs(x)+.75*sqrt(abs((abs(x)-.75)*(abs(x)-.5))/((.75-abs(x))*(abs(x)-.5)))-y); eq5 = (2.25*sqrt(abs((x-.5)*(x+.5))/((.5-x)*(.5+x)))-y); eq6 = (6*sqrt(10)/7+(1.5-.5*abs(x))*sqrt(abs(abs(x)-1)/(abs(x)-1))-(6*sqrt(10)/14)*sqrt(4-(abs(x)-1)^2)-y); axes('Xlim', [-7.25 7.25], 'Ylim', [-5 5]); hold on ezplot(eq1,[-8 8 -3*sqrt(33)/7 6-4*sqrt(33)/7]); ezplot(eq2,[-4 4]); ezplot(eq3,[-1 -0.75 -5 5]); ezplot(eq3,[0.75 1 -5 5]); ezplot(eq4,[-0.75 0.75 2.25 5]); ezplot(eq5,[-0.5 0.5 -5 5]); ezplot(eq6,[-3 -1 -5 5]); ezplot(eq6,[1 3 -5 5]); colormap([0 0 1]) title('Batman'); xlabel(''); ylabel(''); hold off </code></pre>
54,506
<p><a href="http://www.hardocp.com/news/2011/07/29/batman_equation/" rel="noreferrer">HardOCP</a> has an image with an equation which apparently draws the Batman logo. Is this for real?</p> <p><img src="https://i.stack.imgur.com/VYKfg.jpg" alt="Batman logo"></p> <p><strong>Batman Equation in text form:</strong> \begin{align} &amp;\left(\left(\frac x7\right)^2\sqrt{\frac{||x|-3|}{|x|-3}}+\left(\frac y3\right)^2\sqrt{\frac{\left|y+\frac{3\sqrt{33}}7\right|}{y+\frac{3\sqrt{33}}7}}-1 \right) \\ &amp;\qquad \qquad \left(\left|\frac x2\right|-\left(\frac{3\sqrt{33}-7}{112}\right)x^2-3+\sqrt{1-(||x|-2|-1)^2}-y \right) \\ &amp;\qquad \qquad \left(3\sqrt{\frac{|(|x|-1)(|x|-.75)|}{(1-|x|)(|x|-.75)}}-8|x|-y\right)\left(3|x|+.75\sqrt{\frac{|(|x|-.75)(|x|-.5)|}{(.75-|x|)(|x|-.5)}}-y \right) \\ &amp;\qquad \qquad \left(2.25\sqrt{\frac{(x-.5)(x+.5)}{(.5-x)(.5+x)}}-y \right) \\ &amp;\qquad \qquad \left(\frac{6\sqrt{10}}7+(1.5-.5|x|)\sqrt{\frac{||x|-1|}{|x|-1}} -\frac{6\sqrt{10}}{14}\sqrt{4-(|x|-1)^2}-y\right)=0 \end{align}</p>
copper.hat
27,978
<p>The 'Batman equation' above relies on an artifact of the plotting software used which blithely ignores the fact that the value $\sqrt{\frac{|x|}{x}}$ is undefined when $x=0$. Indeed, since we’re dealing with real numbers, this value is really only defined when $x&gt;0$. It seems a little ‘sneaky’ to rely on the solver to ignore complex values and also to conveniently ignore undefined values.</p> <p>A nicer solution would be one that is unequivocally defined everywhere (in the real, as opposed to complex, world). Furthermore, a nice solution would be ‘robust’ in that small variations (such as those arising from, say, roundoff) would perturb the solution slightly (as opposed to eliminating large chunks).</p> <p>Try the following in Maxima (actually wxmaxima) which is free. The resulting plot is not quite as nice as the plot above (the lines around the head don’t have that nice ‘straight line’ look), but seems more ‘legitimate’ to me (in that any reasonable solver should plot a similar shape). Please excuse the code mess.</p> <pre><code>/* [wxMaxima batch file version 1] [ DO NOT EDIT BY HAND! ]*/ /* [ Created with wxMaxima version 0.8.5 ] */ /* [wxMaxima: input start ] */ load(draw); /* [wxMaxima: input end ] */ /* [wxMaxima: input start ] */ f(a,b,x,y):=a*x^2+b*y^2; /* [wxMaxima: input end ] */ /* [wxMaxima: input start ] */ c1:sqrt(26); /* [wxMaxima: input end ] */ /* [wxMaxima: input start ] */ draw2d(implicit( f(1/36,1/9,x,y) +max(0,2-f(1.5,1,x+3,y+2.7)) +max(0,2-f(1.5,1,x-3,y+2.7)) +max(0,2-f(1.9,1/1.7,(5*(x+1)+(y+3.5))/c1,(-(x+1)+5*(y+3.5))/c1)) +max(0,2-f(1.9,1/1.7,(5*(x-1)-(y+3.5))/c1,((x-1)+5*(y+3.5))/c1)) +max(0,2-((1.1*(x-2))^4-(y-2.1))) +max(0,2-((1.1*(x+2))^4-(y-2.1))) +max(0,2-((1.5*x)^8-(y-3.5))) -1, x,-6,6,y,-4,4)); /* [wxMaxima: input end ] */ /* Maxima can't load/batch files which end with a comment! 
*/ "Created with wxMaxima"$ </code></pre> <p>The resulting plot is: <img src="https://i.stack.imgur.com/dMzTL.png" alt="enter image description here"></p> <p>(Note that this is, more or less, a copy of the entry I made on <a href="http://blog.makezine.com">http://blog.makezine.com</a>.)</p>
2,461,615
<p>I am still at college. I need to solve this problem.</p> <p>The total amount to receive in 1 year is 17500 CAD. And the university pays its students each 2 weeks (26 payments per year). </p> <p>How much does a student have to receive for 4 months? I have calculated this in 2 ways (both seem ok) but results are different. Which one is the right one and why? </p> <pre><code>a) 17500CAD / 12 months = 1458.33CAD each month 1458.33CAD x 4 months = 5833 (total amount of money in 4 months) If money has to be given each 2 weeks: 5833 / 8 = 729.125 CAD b) 17500 / 26 = 673.08 each 2 weeks 673.08 x 8 = 5384.62 (total amount of money in 4 months) </code></pre> <p>I think the right one is a), because b) is assuming the student has been receiving money for the whole year (26 payments). But it is not the case.</p> <p>Thank you</p>
mlc
360,141
<p>a) assumes 4 weeks per month, which is equivalent to a 48-week year.</p> <p>b) assumes a 52-week year (26 two-week payment periods).</p> <p>Given that the question explicitly relates payments to "weeks", the second convention seems preferable. The correct answer (in my opinion) is b).</p>
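The two conventions are easy to compare numerically (a sketch; the figures come from the question):

```python
yearly = 17500

# convention a): 12 months of 4 weeks each (a 48-week year);
# 4 months' worth of money split into 8 biweekly payments
per_payment_a = yearly / 12 * 4 / 8

# convention b): 26 biweekly payments over a 52-week year
per_payment_b = yearly / 26

print(round(per_payment_a, 2))  # 729.17
print(round(per_payment_b, 2))  # 673.08
```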
914,936
<p>Does anyone know where I can find the posthumously published (I think) chapter 8 of Gauss's Disquisitiones Arithmaticae?</p>
Beans on Toast
257,517
<p>Go to archive.org and look up Gauss' Werke, Band 1. This is in German and it includes the unfinished notes that would have become part of Section 8. But I would strongly recommend reading Mathews' book on number theory first because it attempts to go over the content of Gauss' DA in a more up-to-date and accessible fashion.</p>
3,395,044
<p>Wikipedia states:</p> <blockquote> <p>In mathematics, a <strong>formal power series</strong> is a generalization of a <strong>polynomial</strong>, where the number of terms is allowed to be infinite; this implies giving up the possibility of replacing the variable in the <strong>polynomial</strong> with an arbitrary number. Thus a <strong>formal power series</strong> differs from a <strong>polynomial</strong> in that it may have infinitely many terms, and differs from a <strong>power series</strong>, whose variables can take on numerical values. </p> </blockquote> <p>What I am getting from this is that in both polynomials and formal power series, the variables "don't represent numbers". But I'm not exactly sure what this means, or what they <em>do</em> represent. Also it seems to be inconsistent with how I've been using polynomials, which is very much as "variables representing numbers".</p> <p>So basically I'm conceptually confused about what this means, and can't really understand how they're being used.</p>
Eric Towers
123,905
<p>The Wikipedia definitions are not perfect. Let's go at this a little differently.</p> <p>A formal polynomial is a finite ordered list of coefficients. We use powers of an indeterminate to mark where in the list each coefficient is. For instance (where I write out all the numbers that are normally elided) <span class="math-container">$1 x^2 + 2 x^1 + 3 x^0$</span> is a different polynomial from <span class="math-container">$3 x^2 + 2 x^1 + 1 x^0$</span> because, for instance, the coefficients in the <span class="math-container">$x^2$</span> place are different. Note the similarity between this notion and <a href="https://en.wikipedia.org/wiki/Positional_notation" rel="nofollow noreferrer">positional notation</a> for writing numbers.</p> <p>A formal power series is a potentially infinite ordered list of coefficients. We use powers of an indeterminate to mark where in the list each coefficient is. Note that formal polynomials are formal power series (where, once the power of the marking is high enough, all the coefficients are zero).</p> <p>Notice that for formal objects, the indeterminate is just a marking tool; it is not a "variable" or something that can be "evaluated". We can extend these ideas. We can evaluate these objects by specializing the indeterminate to a value. When you do this, you get polynomials and power series.</p> <p>For power series, a new idea comes into play: <a href="https://en.wikipedia.org/wiki/Convergent_series" rel="nofollow noreferrer">convergence</a>. Some lists of coefficients and certain specializations of the indeterminate may not produce an infinite sum that has a value. An easy example is specializing the formal power series with all coefficients set to $1$ by setting the indeterminate also to $1$, so the power series is a sum of infinitely many copies of $1$, which does not converge.
So, while a formal power series has no convergence issues (because you never pretend to evaluate a formal power series), a power series has issues of convergence.</p> <p>Notice that because we are writing (finite or infinite) lists as formal power objects, we can add them (just like in positional notation), subtract, and multiply them, which operations should seem familiar. We can also interleave them, reverse them, concatenate them, sort them, and do other operations that are sensible for lists, which perhaps seems less familiar from the context of polynomials.</p>
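The coefficient-list view of formal polynomials can be made concrete: multiplication is just a convolution of the two lists, with no evaluation of the indeterminate anywhere. A minimal sketch (the helper name and the sample product are mine, not from the answer):

```python
def poly_mul(a, b):
    # multiply two formal polynomials given as coefficient lists
    # (index = power of the indeterminate); the indeterminate is
    # never evaluated, only used to line up positions
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (1 + 2x) * (3 + x^2) = 3 + 6x + x^2 + 2x^3
print(poly_mul([1, 2], [3, 0, 1]))  # [3, 6, 1, 2]
```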
2,216,601
<p>Alright so I have this Transformation that I know isn't one to one transformation, but I'm not sure why. </p> <p>A Transformation is defined as $f(x,y)=(x+y, 2x+2y)$.</p> <p>Now my knowledge is that you need to fulfill the 2 conditions: Additivity and the scalar multiplication one. I tried both of them and somehow both of them are met perfectly. </p> <p>However, the transformation is NOT linear. This is because the column vectors of the transformation are linearly dependent. </p> <p>So how am I supposed to relate these 2 seemingly unrelated conjectures to check the one-one transformation ? </p>
gt6989b
16,192
<p><strong>HINT</strong></p> <p>Note you can write $$ f(x,y) = \begin{bmatrix} 1 &amp; 1 \\ 2 &amp; 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}, $$ and since $f$ can be represented by matrix multiplication, it must be linear, since matrix multiplication is linear...</p> <p>As for it being one-to-one, you need to make sure for each $\begin{bmatrix} v \\ w \end{bmatrix} \in \mathbb{R}^2$, there is at most one solution to $f(x,y) = \begin{bmatrix} v \\ w \end{bmatrix}$. You are right, the fact that the underlying matrix is not invertible must play a crucial role in that.</p>
2,216,601
<p>Alright so I have this Transformation that I know isn't one to one transformation, but I'm not sure why. </p> <p>A Transformation is defined as $f(x,y)=(x+y, 2x+2y)$.</p> <p>Now my knowledge is that you need to fulfill the 2 conditions: Additivity and the scalar multiplication one. I tried both of them and somehow both of them are met perfectly. </p> <p>However, the transformation is NOT linear. This is because the column vectors of the transformation are linearly dependent. </p> <p>So how am I supposed to relate these 2 seemingly unrelated conjectures to check the one-one transformation ? </p>
Swistack
251,704
<p>The transformation $f$ is linear indeed, but it is not one-to-one transformation. <em>One-to-one</em> means that the system \begin{equation*} x + y = u \\ 2(x + y) = v \end{equation*} has a unique solution for variables $(x, y)$, i.e. they can be expressed trough the variables $u$ and $v$. But the determinant of this linear system is equal to 0 (the rows are linearly dependent), which means that the transfomation is not one-to-one.</p>
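The determinant condition can be checked concretely (a sketch; the function `f` and the sample inputs are mine): the matrix of $f$ has determinant $0$, and two distinct inputs collide, so $f$ is not one-to-one.

```python
def f(x, y):
    # the map f(x, y) = (x + y, 2x + 2y)
    return (x + y, 2 * x + 2 * y)

# determinant of the underlying matrix [[1, 1], [2, 2]]
det = 1 * 2 - 1 * 2
print(det)                   # 0
print(f(1, 0), f(0, 1))      # both (1, 2), so f is not one-to-one
```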
2,136,079
<p>A cone $K$, where $K ⊆\Bbb R^n$ , is pointed; which means that it contains no line (or equivalently, $(x ∈ K~\land~ −x∈K) ~\to~ x=\vec 0$.</p>
Fred
380,717
<p>Here is a picture (in 3D) of a cone which is not a pointed cone:</p> <p><a href="https://upload.wikimedia.org/wikipedia/commons/thumb/7/72/DoubleCone.png/1024px-DoubleCone.png" rel="noreferrer">https://upload.wikimedia.org/wikipedia/commons/thumb/7/72/DoubleCone.png/1024px-DoubleCone.png</a></p> <p>Here is a picture (in 3D) of a cone which is a pointed cone:</p> <p><a href="https://upload.wikimedia.org/wikipedia/commons/e/e7/Circular-pyramid.png" rel="noreferrer">https://upload.wikimedia.org/wikipedia/commons/e/e7/Circular-pyramid.png</a></p>
3,839,878
<p>Am currently doing a question that asks about the relationship between a quadratic and its discriminant.</p> <p>If we know that the quadratic <span class="math-container">$ax^2+bx+c$</span> is a perfect square, then can we say anything about the discriminant?</p> <p>Specifically, can we be sure that the discriminant equals 0?</p> <p>So far, I have tried to complete the square for the general quadratic, and got to:</p> <p><span class="math-container">$a((x+\frac{b}{2a})^2-\frac{b^2}{4a^2}+\frac ca)$</span></p> <p>But am now stuck. What should I do next, or is there a totally different route I should be taking?</p>
David Kipper
764,256
<p>Since the quadratic is a perfect square, this tells you that a root of that quadratic will have multiplicity 2: you took a linear factor and squared it. If the quadratic vanishes at <span class="math-container">$x=t$</span>, then so does this linear factor, but we have that factor twice after squaring, hence the multiplicity.</p> <p>So, we can deduce that the discriminant is 0.</p>
3,839,878
<p>Am currently doing a question that asks about the relationship between a quadratic and its discriminant.</p> <p>If we know that the quadratic <span class="math-container">$ax^2+bx+c$</span> is a perfect square, then can we say anything about the discriminant?</p> <p>Specifically, can we be sure that the discriminant equals 0?</p> <p>So far, I have tried to complete the square for the general quadratic, and got to:</p> <p><span class="math-container">$a((x+\frac{b}{2a})^2-\frac{b^2}{4a^2}+\frac ca)$</span></p> <p>But am now stuck. What should I do next, or is there a totally different route I should be taking?</p>
sirous
346,566
<p>If the quadratic is a perfect square then it has two equal roots, say $d$, and we may write:</p> <p><span class="math-container">$(x-d)^2=x^2-2dx +d^2$</span></p> <p>Then:</p> <p><span class="math-container">$\Delta=4d^2-4d^2=0$</span></p> <p>That is, the discriminant is zero when the quadratic is a perfect square.</p>
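The same computation works with a leading coefficient: for $a(x-d)^2 = ax^2 - 2adx + ad^2$ the discriminant is $(-2ad)^2 - 4a\cdot ad^2 = 0$. A quick brute-force check over many integer cases (a sketch; the helper name is mine):

```python
import random

def discriminant_of_square(a, d):
    # expand a*(x - d)^2 = a*x^2 + b*x + c and return b^2 - 4ac
    b, c = -2 * a * d, a * d ** 2
    return b * b - 4 * a * c

for _ in range(1000):
    a = random.randint(1, 50)
    d = random.randint(-50, 50)
    assert discriminant_of_square(a, d) == 0
print("discriminant is 0 in every case")
```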
977,232
<p>We have in and out degree of a directed graph G. if G does not includes loop (edge from one vertex to itself) and does not include multiple edge (from each vertex to another vertex at most one directed edge), we want to check for how many of the following we have a corresponding graph. the vertex number start from 1 to n and the degree sequence are sort by vertex numbers.</p> <p>a) $d_{in}=(0,1,2,3), d_{out}=(2,2,1,1)$</p> <p>b) $d_{in}=(2,2,1), d_{out}=(2,2,1)$</p> <p>c) $d_{in}=(1,1,2,3,3), d_{out}=(2,2,3, 1,2)$</p> <p>I want to find a nice way instead of drawing graph.</p> <p>for (C):</p> <p><img src="https://i.stack.imgur.com/nR4VK.jpg" alt="enter image description here"></p>
Arthur
15,500
<p>You're on the right track, but you're not supposed to set the elements to be equal to one another. That would solve the problem "which $t$ makes this vector <em>equal</em> to that vector", which is, as you've discovered, impossible.</p> <p>What you <em>should</em> do is set one vector equal to a scalar multiple of another, which is done like this: $$ s(2,-3,1)=(t^2, -3t, \sqrt{6-t}) $$ which gives you three equations in two unknowns.</p>
4,569,910
<p><span class="math-container">$ABC$</span> is a right-angled triangle (<span class="math-container">$\measuredangle ACB=90^\circ$</span>). Point <span class="math-container">$O$</span> is inside the triangle such that <span class="math-container">$S_{ABO}=S_{BOC}=S_{AOC}$</span>. If <span class="math-container">$AO^2+BO^2=k^2,k&gt;0$</span>, find <span class="math-container">$CO$</span>. <a href="https://i.stack.imgur.com/YEj0a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YEj0a.png" alt="enter image description here" /></a></p> <p>The most intuitive thing is to note that <span class="math-container">$AO^2+BO^2=k^2$</span> is part of the cosine rule for triangle <span class="math-container">$AOB$</span> and the side <span class="math-container">$AB:$</span> <span class="math-container">$$AB^2=c^2=AO^2+BO^2-2AO.BO\cos\measuredangle AOB\\ =k^2-2AO.BO\cos\measuredangle AOB$$</span> From here if we can tell what <span class="math-container">$2AO.BO\cos\measuredangle AOB$</span> is in terms of <span class="math-container">$k$</span>, we have found the hypotenuse of the triangle (with the given parameter). I wasn't able to figure out how this can done.</p> <p>Something else that came into my mind: does the equality of the areas mean that <span class="math-container">$O$</span> is the centroid of the triangle? If so, can some solve the problem without using that fact?</p>
Alan
175,602
<p>First off, we know there has to be at least one white stone in each, or you would cap out at <span class="math-container">$\frac 2 3$</span>, which is less than what you have.</p> <p>We can limit what we look at by just considering how many boxes have black stones in them.</p> <p>The first case is putting all the bad apples in one box. Obviously in that case the best thing to do is to only put a single white in the other 2 and use the remaining 21 to dilute the influence of the 7, which is what you achieved above.</p> <p>If we have 2 cases with black stones, then we only need 1 white in the third box, so we have 22 white to distribute amongst the two remaining, mixed with 7 black stones.<br /> So, your black stones could be 6 and 1, 5 and 2, or 4 and 3. Obviously no single box may have a loss potential bigger than <span class="math-container">$\frac 1 4$</span>, so at a floor we need to triple the white stones:</p> <p>Box 1: 6 black and 18 white.<br /> Box 2: 1 black and 3 white.</p> <p>This gives each of these the same <span class="math-container">$\frac 1 4$</span> as before, totaling double the loss potential! We only have a single white stone left to add to bring one of them slightly below <span class="math-container">$\frac 1 4$</span>, so we have a loss potential of <span class="math-container">$\frac 1 12$</span> from the better worst box, and something nonzero from the better box; thus this is a worse scenario. Similarly with the other configurations: you need 3&times; as many white as black, so 21 white, 7 black... only one box can be improved a tiny bit.</p> <p>Now let's say you have black stones in all 3 boxes. Once again, the very worst box we can allow has a loss potential of <span class="math-container">$\frac 1 4$</span>, so we need to use 21 white to match the 7 black... leaving us only 2 white stones left to improve the odds.
The best we can do is improve the odds of 1 or 2 of the boxes, leaving the third at a <span class="math-container">$\frac 1 4$</span> failure rate, and the other two at nonzero failure rates; thus all are worse than the single-black-box case.</p>
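The case analysis can be confirmed by brute force. A sketch, assuming the underlying problem is the one the answer describes: 7 black and 23 white stones split among 3 nonempty boxes, a box chosen uniformly at random, then a stone drawn uniformly from it, maximizing the probability of drawing white.

```python
from fractions import Fraction
from itertools import product

best = Fraction(0)
# enumerate every way to place 7 black and 23 white stones in 3 boxes
for b1, b2 in product(range(8), repeat=2):
    b3 = 7 - b1 - b2
    if b3 < 0:
        continue
    for w1, w2 in product(range(24), repeat=2):
        w3 = 23 - w1 - w2
        if w3 < 0:
            continue
        boxes = [(w1, b1), (w2, b2), (w3, b3)]
        if any(w + b == 0 for w, b in boxes):
            continue  # every box must be nonempty
        p = sum(Fraction(w, w + b) for w, b in boxes) / 3
        best = max(best, p)

print(best)  # 11/12
```

The maximum comes from the single-black-box configuration described above: two boxes with one white stone each, and one box with the remaining 21 white and all 7 black, giving $\frac13\left(1 + 1 + \frac{21}{28}\right) = \frac{11}{12}$.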
4,314,162
<p>Assume k is a finite field with n elements, how many elements are in the projective line <span class="math-container">$\mathbb{P}^{1}(k)$</span> and how do I work this out?</p> <p>I know that an element of <span class="math-container">$\mathbb{P}^{1}(k)$</span> is represented by <span class="math-container">$[a, b]$</span>, where <span class="math-container">$a, b \in k$</span>, not both of the coordinates are 0, and two elements <span class="math-container">$[a, b]$</span> and <span class="math-container">$[c, d]$</span> are equal if for some <span class="math-container">$\lambda \in k^{*}$</span> we have <span class="math-container">$a=\lambda c, b=\lambda d$</span></p> <p>However, I’m not sure how I can use this to work out the number of elements?</p> <p>Likewise how would I advance this to work out the number of elements in <span class="math-container">$\mathbb{P}^{2}(k)$</span> where the elements are the triples [a,b,c] ?</p>
Ethan Bolker
72,858
<p>If you want to think geometrically you can work it out this way.</p> <p>The affine line over a field of cardinality <span class="math-container">$q$</span> is the one dimensional vector space over the field, so has <span class="math-container">$q$</span> elements. Add the point at infinity to construct the projective line with <span class="math-container">$q+1$</span> elements.</p> <p>You can proceed inductively up the dimensions. The projective plane is the affine plane with a projective line at infinity, so it has <span class="math-container">$q^2 + q + 1$</span> points.</p> <p>This is the lovely seven point projective plane on the two element field:</p> <p><a href="https://i.stack.imgur.com/6YuHD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6YuHD.png" alt="enter image description here" /></a></p> <p><a href="https://en.wikipedia.org/wiki/Projective_plane" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Projective_plane</a></p>
4,314,162
<p>Assume k is a finite field with n elements, how many elements are in the projective line <span class="math-container">$\mathbb{P}^{1}(k)$</span> and how do I work this out?</p> <p>I know that an element of <span class="math-container">$\mathbb{P}^{1}(k)$</span> is represented by <span class="math-container">$[a, b]$</span>, where <span class="math-container">$a, b \in k$</span>, not both of the coordinates are 0, and two elements <span class="math-container">$[a, b]$</span> and <span class="math-container">$[c, d]$</span> are equal if for some <span class="math-container">$\lambda \in k^{*}$</span> we have <span class="math-container">$a=\lambda c, b=\lambda d$</span></p> <p>However, I’m not sure how I can use this to work out the number of elements?</p> <p>Likewise how would I advance this to work out the number of elements in <span class="math-container">$\mathbb{P}^{2}(k)$</span> where the elements are the triples [a,b,c] ?</p>
Servaes
30,382
<p>Your description of a projective line can be paraphrased as follows:</p> <blockquote> <p>The projective line over a field <span class="math-container">$k$</span> consists of all nonzero points in <span class="math-container">$k^2$</span>, where two points are considered 'the same' if they are on the same line through the origin.</p> </blockquote> <p>Then if <span class="math-container">$k$</span> is a finite field of <span class="math-container">$q$</span> elements, there are <span class="math-container">$q^2-1$</span> nonzero points in <span class="math-container">$k^2$</span>. Each line through the origin contains <span class="math-container">$q-1$</span> nonzero points, so this gives <span class="math-container">$\frac{q^2-1}{q-1}$</span> distinct points for the projective line over <span class="math-container">$k$</span>.</p> <p>This also generalizes to projective space over finite fields; projective <span class="math-container">$d$</span>-space over a finite field of <span class="math-container">$q$</span> elements contains precisely <span class="math-container">$\frac{q^{d+1}-1}{q-1}$</span> points.</p>
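For prime fields, the formula can be checked by direct enumeration (a sketch; `projective_points` is a hypothetical helper and handles only $\mathbb{Z}/p$ with $p$ prime):

```python
from itertools import product

def projective_points(p, d):
    # count points of P^d over Z/p (p prime) by enumerating nonzero
    # (d+1)-tuples and collapsing each line through the origin to the
    # representative whose first nonzero entry is 1
    seen = set()
    for v in product(range(p), repeat=d + 1):
        if any(v):
            i = next(j for j, x in enumerate(v) if x)
            inv = pow(v[i], -1, p)  # modular inverse (Python 3.8+)
            seen.add(tuple(x * inv % p for x in v))
    return len(seen)

print(projective_points(5, 1))  # 6  = 5 + 1
print(projective_points(3, 2))  # 13 = (3^3 - 1)/(3 - 1)
```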
579,907
<p>Let $G$ be a group and let $H$ be a normal subgroup.</p> <p>Prove that if $S\subseteq G$ generates $G$, then the set $\{sH\mid s∈S\} ⊆ G/H$ generates $G/H$.</p> <p>I have no idea how to deal with the question above. Can somebody please give me some help?</p>
Berci
41,488
<p>I think the best way to view any quotient set $A/\sim$ is that it contains the elements of the set $A$ itself, but with the <em>equality sign replaced</em> by $\sim$ (we <em>consider</em> $a$ and $b$ equal in $A/\sim$ if $a\sim b$).</p> <p>In your case the set is the group $G$ and $x\sim y$ iff $x^{-1}y\in H$, and replacing $=$ by this relation in $G$ gives exactly the quotient group $G/H$. (Observe that $y\in xH \iff x^{-1}y\in H$.)</p> <p>Now take an element of $G/H$, i.e. suppose a $g\in G$ is given (formally you would write $gH\in G/H$). As $S$ generates $G$, we have in particular $g=s_1s_2s_3\dots s_k$ for some elements $s_i\in S\cup S^{-1}$ in $G$, and the same continues to hold in $G/H$.</p> <p>(Formally, you would write $s_1H\cdot s_2H\cdot \ldots\cdot s_kH=gH$.)</p>
14,007
<p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p> <p>My question essentially boils down to: </p> <blockquote> <p>What are tips/tricks/techniques for creating quiz and exam questions that both</p> <ol> <li>test students at various levels of Bloom's hierarchy and</li> <li>minimize the amount of work for the grader</li> </ol> <p>?</p> </blockquote> <p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p> <p>I have some ideas:</p> <ul> <li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li> <li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li> <li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li> </ul> <p>I'm curious to hear what other things people have used.</p>
JTP - Apologise to Monica
64
<p>SOH-CAH-TOA </p> <p>How to remember the ratios for the 3 main trig functions. </p>
14,007
<p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p> <p>My question essentially boils down to: </p> <blockquote> <p>What are tips/tricks/techniques for creating quiz and exam questions that both</p> <ol> <li>test students at various levels of Bloom's hierarchy and</li> <li>minimize the amount of work for the grader</li> </ol> <p>?</p> </blockquote> <p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p> <p>I have some ideas:</p> <ul> <li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li> <li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li> <li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li> </ul> <p>I'm curious to hear what other things people have used.</p>
Valerij
9,876
<p>In the 11th grade our math teacher taught us following:</p> <blockquote> <p>was the maiden brave</p> <p>the belly stays concave</p> <p>but unprotected sex</p> <p>makes the belly convex</p> </blockquote>
14,007
<p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p> <p>My question essentially boils down to: </p> <blockquote> <p>What are tips/tricks/techniques for creating quiz and exam questions that both</p> <ol> <li>test students at various levels of Bloom's hierarchy and</li> <li>minimize the amount of work for the grader</li> </ol> <p>?</p> </blockquote> <p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p> <p>I have some ideas:</p> <ul> <li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li> <li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li> <li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li> </ul> <p>I'm curious to hear what other things people have used.</p>
Rodrigo Zepeda
5,922
<p>This only makes sense in Spanish but it's pretty fun. For integration by parts, $$ \int u dv = u v - \int v du $$</p> <blockquote> <p><strong>S</strong>i <strong>u</strong>n <strong>d</strong>ía <strong>v</strong>i <strong>u</strong>na <strong>v</strong>aca <strong>menos</strong> <strong>s</strong>exy <strong>v</strong>estida <strong>d</strong>e <strong>u</strong>niforme</p> </blockquote> <p>Which translates roughly to:</p> <blockquote> <p>I saw one day a not-so-sexy cow wearing a uniform. </p> </blockquote> <p>This mnemonic has generated <a href="http://img.desmotivaciones.es/201105/vacauniforme2.jpg" rel="nofollow noreferrer">awesome memes</a>. </p>
1,657,694
<p>The Algorithms course I am taking on Coursera does not require discrete math to find discrete sums. Dr. Sedgewick recommends replacing sums with integrals in order to get basic estimates.</p> <p>For example: $$\sum _{ i=1 }^{ N }{ i } \sim \int _{ x=1 }^{ N }{ x } dx \sim \frac { 1 }{ 2 } N^2$$</p> <p>How would I go about doing this for the problem below?</p> <p>What is the order of growth of the worst case running time of the following code fragment as a function of N?</p> <p><code>int sum = 0; for (int i = 1; i &lt;= N; i++) for (int j = 1; j &lt;= i*i*i; j++) sum++;</code></p> <p>What I've got so far:</p> <p>$$\sum _{ i=1 }^{ N }{ } \sum _{ j=1 }^{ i^{ 3 } }{ } 1\approx \int _{ i=1 }^{ N }{ } dx\int _{ j=1 }^{ i^{ 3 } }{ } 1djdi$$</p> <p>I'm not sure if this is correct, and I'm confused as to how to set these types of problems up. I've taken integral calculus, but it was almost 6 months ago. A hint in the right direction would go a long way. </p>
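Applying the integral heuristic from the question: the statement `sum++` runs $\sum_{i=1}^N i^3 \sim \int_1^N x^3\,dx \sim \frac{N^4}{4}$ times, so the worst-case running time grows as $N^4$. A quick numeric sanity check (a sketch, not part of the course material):

```python
def inner_ops(N):
    # total number of times `sum++` runs: the inner loop body executes
    # i^3 times for each i, so this is the sum of i^3 for i = 1..N
    return sum(i ** 3 for i in range(1, N + 1))

N = 200
exact = inner_ops(N)       # equals (N*(N+1)/2)^2 exactly
estimate = N ** 4 / 4      # from the integral of x^3
print(exact / estimate)    # close to 1 for large N
```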
Davey
314,780
<p>See <a href="http://www.redalyc.org/articulo.oa?id=46815211" rel="nofollow">http://www.redalyc.org/articulo.oa?id=46815211</a>. The answer is thus not necessarily. I haven't read the construction yet, but at least it's not in Spanish. Great question, by the way.</p>
1,451,745
<p>Can someone check my logic here. </p> <p><strong>Question:</strong> How many ways are there to choose a an $k$ person committee from a group of $n$ people? </p> <p><strong>Answer 1:</strong> there are ${n \choose k}$ ways. </p> <p><strong>Answer 2:</strong> condition on eligibility. Assume the creator of the committee is already in the committee. This leaves us with choosing $k - 1$ people from a group of $n - 1$ potentially eligible people. If all remaining people are eligible, there are ${n - 1 \choose k - 1}$ possible committees, if there are $n - 2$ eligible people, there are ${n - 2 \choose k - 1}$ committees, if there are $n - 3$ eligible people, there are ${n - 3 \choose k - 1}$ committees,..., if there are $k - 1$ eligible people there are ${k - 1 \choose k - 1}$ committees. Therefore,$${n - 1 \choose k - 1} + {n - 2 \choose k - 1} + {n - 3 \choose k - 1} + \dots + {k - 1 \choose k - 1} = {n \choose k}$$.</p>
Brian M. Scott
12,042
<p>It’s not clear whether your argument works or not, because you’ve not told us what you mean by <em>eligible</em>. Here is one way to make your argument valid.</p> <p>Suppose that that the $n$ potential members all have different ages, and that the committee must be created by its oldest member. In other words, the creator may choose <em>only</em> from those potential members who are younger than he is. In other words, we’re defining <em>eligible</em> to mean <em>younger than the creator</em>.</p> <p>If he’s the oldest person in the pool of potential members, he can pick any $k-1$ of the other $n-1$ people in the pool: all $n-1$ of them are younger than he. Of course this can be done in $\binom{n-1}{k-1}$ ways. If he’s only the second-oldest, then $n-2$ members of the pool are younger, and he can choose $k-1$ of them in $\binom{n-2}{k-1}$ ways. In general, if he’s the $i$-th oldest person in the pool, then $n-i$ are younger, and he can choose the rest of the committee in $\binom{n-i}{k-1}$ ways. With these added details you get a legitimate argument that the sum on the lefthand side counts the $k$-member committees that can be formed from the pool of $n$ people.</p>
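<p>For readers who like to double-check such counting identities mechanically, here is a small Python sketch (my addition, not part of the original answer) verifying $\sum_{i=k-1}^{n-1}\binom{i}{k-1}=\binom{n}{k}$ for small values:</p>

```python
from math import comb  # Python 3.8+

# Verify the hockey-stick identity behind the committee argument:
# C(n-1, k-1) + C(n-2, k-1) + ... + C(k-1, k-1) == C(n, k)
def committee_sum(n, k):
    return sum(comb(i, k - 1) for i in range(k - 1, n))

for n in range(1, 12):
    for k in range(1, n + 1):
        assert committee_sum(n, k) == comb(n, k)
```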
546,276
<p>Let $\{s_n\}$ be a sequence in $\mathbb{R}$, and assume that $s_n \rightarrow s$. Prove that $s^k_n\rightarrow s^k$ for every $k \in\mathbb{N}$</p> <p>Ok, so we need $|s^k_n - s^k| &lt; \varepsilon$. I rewrote this as</p> <p>$$|s_ns^{k-1}_n - ss^{k-1}|=|(s_n-s)(s^{k-1}_n + s^{k-1}) -s_ns^{k-1}+ss_n^{k-1}|$$</p> <p>But this seems really messy. What should I use here: $|s_n - s| &lt; \varepsilon?$</p> <p>Help!</p>
Fly by Night
38,495
<p><strong>Hint</strong></p> <p>Apply the chain rule and implicit differentiation to find the gradient. If you have the graph with equation $y=\operatorname{f}^{-1}(x)$ then it also has the equation $\operatorname{f}(y)=x$.</p>
2,262,167
<p>The question in the title has been considered for finite groups $G$ and $H$, but I do not know its current status, i.e. how far it is known whether $G$ and $H$ must be isomorphic. I have two simple questions regarding it.</p> <p><strong>Q.0</strong> If $\mathbb{Z}[G]\cong \mathbb{Z}[H]$ then $|G|$ should be $|H|$; because $G$ is a free basis for the <em>free additive abelian group $\mathbb{Z}[G]$</em>, am I right? (I am asking this because in Isaacs's character theory book I saw a somewhat different argument, not too lengthy, but I was thinking of the above natural argument.)</p> <p><strong>Q.1</strong> Are there known examples of finite groups $G\ncong H$ with $\mathbb{Z}[G]\cong \mathbb{Z}[H]$? (In his character theory book, Isaacs states that for metabelian groups $G,H$, $\mathbb{Z}[G]\cong \mathbb{Z}[H]$ implies $G\cong H$; it was proved by Whitcomb in $1970$; but the book has not been further revised, so I don't know what has been done after $1970$.)</p>
Dietrich Burde
83,966
<p>This isomorphism problem was stated by Higman in his thesis in $1940$: $$ \mathbb{Z}[G]\cong\mathbb{Z}[H] \Longrightarrow G\cong H ? $$ It is true for nilpotent groups and for metabelian groups (Whitcomb $1968$), and was disproved in general by Hertweck in $2001$; see <a href="https://math.stackexchange.com/questions/632372/minimal-counterexamples-of-the-isomorphism-problem-for-integral-group-rings">this question</a>.</p>
1,656,136
<p>I'm trying to track down an example of a ring in which there exists an infinite chain of ideals under inclusion. (i.e. $I_1 \subsetneq I_2 \subsetneq I_3 \subsetneq...$)</p>
egreg
62,967
<p>The classical example is the ring of polynomials in a countable number of indeterminates: $k[x_1,x_2,\dots,x_n,\dotsc]$ and the chain is $$ 0\subsetneq (x_1)\subsetneq(x_1,x_2)\subsetneq\dotsb $$</p> <p>Note that $x_{n+1}\notin(x_1,\dots,x_n)$ by a standard argument: suppose $$ x_{n+1}=f_1 x_1 + f_2x_2 + \dots + f_nx_n $$ where $f_i\in k[x_1,x_2,\dots,x_n,\dotsc]$ (for $i=1,2,\dots,n$). Substitute $x_i=0$ for $i=1,2,\dots,n$; this gives $x_{n+1}=0$, a contradiction.</p>
2,905,022
<p>I recently stumbled upon the problem $3\sqrt{x-1}+\sqrt{3x+1}=2$, where I am supposed to solve the equation for x. My problem with this equation though, is that I do not know where to start in order to be able to solve it. Could you please give me a hint (or two) on what I should try first in order to solve this equation?</p> <p><strong>Note</strong> that I only want hints.</p> <p>Thanks for the help!</p>
amsmath
487,169
<p>Put $f(x) := 3\sqrt{x-1}+\sqrt{3x+1}$ on $[1,\infty)$. Then $f'(x) &gt; 0$ and $f(1) = 2$.</p>
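<p>To make the hint concrete, here is a small numerical illustration (my addition, not part of the original hint): $f(1)=2$ exactly, and $f$ is strictly increasing on $[1,\infty)$, so $x=1$ is the unique solution of $f(x)=2$.</p>

```python
import math

def f(x):
    return 3 * math.sqrt(x - 1) + math.sqrt(3 * x + 1)

# f(1) = 2 exactly, and f is strictly increasing on its domain [1, oo),
# so x = 1 is the only solution of f(x) = 2.
assert math.isclose(f(1), 2)
samples = [f(1 + t / 10) for t in range(50)]
assert all(a < b for a, b in zip(samples, samples[1:]))
```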
2,664,341
<blockquote> <p>Simplify $$\frac{1}{\sqrt[3]1+\sqrt[3]2+\sqrt[3]4}+\frac{1}{\sqrt[3]4+\sqrt[3]6+\sqrt[3]9}+\frac{1}{\sqrt[3]9+\sqrt[3]{12}+\sqrt[3]{16}}$$</p> </blockquote> <p>I have no idea how to do this. I tried using the idea of multiplying the conjugate to every term, but I seem to be getting no where. Is there any hint as to how to approach this?</p>
drhab
75,923
<p>If $\nu(A):=\int_A f_+d\lambda$ and $\mu(A):=\int_A f_-d\lambda$ then $\nu$ and $\mu$ are both finite measures so that: $$\lim_{n\to\infty}\nu(E_n)=\nu(E)&lt;\infty\text{ and }\lim_{n\to\infty}\mu(E_n)=\mu(E)&lt;\infty$$ So: $$\lim_{n\to\infty}\int_{E_n} fd\lambda=\lim_{n\to\infty}[\nu(E_n)-\mu(E_n)]=\nu(E)-\mu(E)=\int_Efd\lambda$$</p>
2,195,197
<blockquote> <p>A circle goes through $(5,1)$ and is tangent to $x-2y+6=0$ and $x-2y-4=0$. What is the circle's equation?</p> </blockquote> <p>All I know is that the tangents are parallel, which means I can calculate the radius as half the distance between them: $\sqrt5$. So my equation is $$(x-p)^2+(y-q)^2=5$$ How can I get the locations of the centre? (I think there are 2 solutions.)</p>
Parcly Taxel
357,390
<p>We can place an additional constraint on the circle centre $(p,q)$. It has to lie on the line parallel to the two tangents and equidistant from them: $$p-2q+1=0\quad p=2q-1$$ Then since the circle passes through $(5,1)$: $$(5-(2q-1))^2+(1-q)^2=5$$ $$25-10(2q-1)+(2q-1)^2+1-2q+q^2=5$$ $$25-20q+10+4q^2-4q+1+1-2q+q^2=5$$ $$5q^2-26q+32=0$$ $$(q-2)(5q-16)=0$$ $$q=2\text{ or }\frac{16}5$$ Therefore the two possible centres are $(3,2)$ and $\left(\frac{27}5,\frac{16}5\right)$, leading to the circle equations $$(x-3)^2+(y-2)^2=5$$ $$\left(x-\frac{27}5\right)^2+\left(y-\frac{16}5\right)^2=5$$</p>
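<p>A quick Python verification of both solutions (an addition of mine, not in the original answer): each centre is at distance $\sqrt5$ from both tangent lines and at distance $\sqrt5$ from the point $(5,1)$.</p>

```python
import math

def dist_to_line(p, q, c):
    # distance from (p, q) to the line x - 2y + c = 0
    return abs(p - 2 * q + c) / math.sqrt(5)

for p, q in [(3, 2), (27 / 5, 16 / 5)]:
    # passes through (5, 1) with radius^2 = 5
    assert math.isclose((5 - p) ** 2 + (1 - q) ** 2, 5)
    # tangent to both lines: distance equals the radius sqrt(5)
    assert math.isclose(dist_to_line(p, q, 6), math.sqrt(5))
    assert math.isclose(dist_to_line(p, q, -4), math.sqrt(5))
```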
213,916
<p>Let $ D\subset \mathbb{C}$ be open, bounded, connected and with smooth boundary. Let $f$ be a nonconstant holomorphic function in a neighborhood of the closure of $D$ , such that $|f(z)|=c \forall z\in \partial D$, show that $f$ takes on each value $a$, such that $|a| &lt; |c| $ at least once in $D$.</p>
froggie
23,685
<p>I should preface this by saying there <em>must</em> be a better solution than this.</p> <p>Let $B$ be the open disk $B = \{z\in \mathbb{C} : |z|&lt;c\}$. We want to show $B\subset f(D)$. Define $S = \{z\in B : z\notin f(D)\}$. First note that $S$ is closed in $B$, since by the open mapping theorem $f(D)$ is open, and $S = B\smallsetminus f(D)$. </p> <p>We want to show next that $S$ is open in $B$. If $w\in S$, then, from the assumption that $|f(z)| = c$ for all $z\in \partial D$, we have that $w\notin f(\overline{D})$. Thus $S = B\smallsetminus f(\overline{D})$. Since $f(\overline{D})$ is compact and hence closed, we conclude $S$ is open in $B$.</p> <p>Since $B$ is connected, it follows that $S = \varnothing$ or $S = B$. Note that the latter case cannot happen. Indeed, if $S = B$, then the maximum modulus principle implies that $f(D)\subset \{|z| = c\}$, which is not possible by the open mapping theorem. Thus $S = \varnothing$, completing the proof.</p> <p>I don't see how the assumption of smoothness on the boundary comes in, so maybe there's a mistake in here somewhere.</p>
633,858
<p>If $G$ is a cyclic group of order $24$, how many elements of order $4$ are in $G$? I can't work out how to find this, step by step. </p>
Xucheng Zhang
119,182
<p><strong>Lemma:</strong> If $o(g)=n$, then $o(g^k)=\dfrac{n}{\gcd(n,k)}$.</p> <p>Assume $G=\langle g \rangle$ and $o(g)=24$. Notice that every element of $G$ has the form $g^k$ for some integer $k$, so by the lemma, $o(g^k)=\dfrac{24}{\gcd(24,k)}$. Setting $\dfrac{24}{\gcd(24,k)}=4$, i.e., $\gcd(24,k)=6$, we can easily conclude that this only holds for $k=6$ or $k=18$. Hence there are only two elements of order $4$ in $G$, namely $g^6$ and $g^{18}$.</p>
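<p>The same count can be confirmed by brute force; here is a short Python sketch (my addition) using the lemma's formula for the order of $g^k$:</p>

```python
from math import gcd

# Order of g^k in a cyclic group <g> of order 24 is 24 // gcd(24, k);
# list the exponents k in 1..23 giving order exactly 4.
order_four = [k for k in range(1, 24) if 24 // gcd(24, k) == 4]
assert order_four == [6, 18]  # exactly two elements: g^6 and g^18
```

<p>In general, a cyclic group of order $n$ has exactly $\varphi(d)$ elements of order $d$ for each divisor $d\mid n$; here $\varphi(4)=2$.</p>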
321,230
<p>Suppose $Z$ is a topological space; and $X$ is dense in $Z$. Then do we have $W(X)= W(Z)$? Where $W(X)$, $W(Z)$ denote the weight of the $X$ and $Z$ respectively. </p> <p><strong>What I've tried:</strong> On one hand, $W(X)\le W(Z)$, clearly; On the other hand, for any open set $U$ of $Z$, we have $U\cap X$, an open set in $X$, because $X$ is dense in $Z$. So $W(X)= W(Z)$. Am I right?</p> <p>Thanks ahead.</p>
Brian M. Scott
12,042
<p>Arthur’s example shows that $w(Z)$ can be as large as $2^{w(X)}$. If $Z$ is regular, this is the biggest possible value for $w(Z)$. In that case $Z$ has a base $\mathscr{R}$ of $w(Z)$ regular open sets, and if $U\subseteq Z$ is regular open, then $U=\operatorname{int}_Z\operatorname{cl}_Z(U\cap X)$, so each $U\in\mathscr{R}$ is uniquely determined by $U\cap X$. If $\mathscr{B}$ is a base for $X$ of cardinality $w(X)$, $U\cap X=\bigcup\mathscr{B}_U$ for some $\mathscr{B}_U\subseteq\mathscr{B}$, and $U$ is uniquely determined by $\mathscr{B}_U$. Thus, $w(Z)=|\mathscr{R}|\le 2^{|\mathscr{B}|}=2^{w(X)}$.</p> <p>If $Z$ is not regular, however, $w(Z)$ can be much larger. Let $\kappa$ be any infinite cardinal, let $Z$ be a set of cardinality $\kappa$, and give $Z$ the cofinite topology; clearly $w(Z)=|Z|=\kappa$. Now let $X\subseteq Z$ with $|X|=\omega$. Then $X$ is dense in $Z$, and $w(X)=\omega$.</p>
784,032
<p>Find the remainder when $6!$ is divided by 7.</p> <p>I know that you can answer this question by computing $6! = 720$ and then using short division, but is there a way to find the remainder without using short division?</p>
lab bhattacharjee
33,337
<p>As $7$ is prime, use <a href="http://www.proofwiki.org/wiki/Wilson%27s_Theorem">Wilson's Theorem</a> $$(p-1)!\equiv-1\pmod p$$ for prime $p$</p> <p>Now, $\displaystyle -1\equiv p-1\pmod p$</p>
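<p>A short check of Wilson's theorem for small primes (added for illustration, not part of the original answer):</p>

```python
from math import factorial

# Wilson's theorem: (p-1)! ≡ -1 ≡ p-1 (mod p) for every prime p.
for p in (2, 3, 5, 7, 11, 13):
    assert factorial(p - 1) % p == p - 1

# In particular 6! = 720 leaves remainder 6 when divided by 7.
assert factorial(6) % 7 == 6
```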
2,136,937
<p>Find $$\lim_{z \to \exp(i \pi/3)} \dfrac{z^3+8}{z^4+4z+16}$$</p> <p>Note that $$z=\exp(\pi i/3)=\cos(\pi/3)+i\sin(\pi/3)=\dfrac{1}{2}+i\dfrac{\sqrt{3}}{2}$$ $$z^2=\exp(2\pi i/3)=\cos(2\pi/3)+i\sin(2\pi/3)=-\dfrac{1}{2}+i\dfrac{\sqrt{3}}{2}$$ $$z^3=\exp(3\pi i/3)=\cos(\pi)+i\sin(\pi)=1$$ $$z^4=\exp(4\pi i/3)=\cos(4\pi/3)+i\sin(4\pi/3)=-\dfrac{1}{2}-i\dfrac{\sqrt{3}}{2}$$</p> <p>So, \begin{equation*} \begin{aligned} \lim_{z \to \exp(i \pi/3)} \dfrac{z^3+8}{z^4+4z+16} &amp; = \dfrac{1+8}{-\dfrac{1}{2}-i\dfrac{\sqrt{3}}{2}+4\left(-\dfrac{1}{2}+\dfrac{\sqrt{3}}{2}\right)+16} \\ &amp; = \dfrac{9}{\dfrac{27}{2}+i\frac{3\sqrt{3}}{2}} \\ &amp; = \dfrac{6}{9+i\sqrt{3}} \\ &amp; = \dfrac{9}{14}-i\dfrac{\sqrt{3}}{2} \\ \end{aligned} \end{equation*}</p> <p>But, when I check my answer on wolframalpha, their answer is $$\dfrac{245}{626}-i\dfrac{21\sqrt{3}}{626}.$$</p> <p>Can someone tell me what I am doing wrong?</p>
user414998
414,998
<p>While both Spivak's and Apostol's books are rigorous in that they include complete proofs, Spivak's has a heavier emphasis on theoretical questions, and its exercises are much harder. Spivak's book also has a complete solution manual. Spivak's book can be considered one of the best introductions to rigorous mathematics.</p> <p>But on balance, if your real interest is physics, my recommendation would be Apostol's book. Apostol also covers much more material after the basic single-variable stuff (at the end of Vol. 1 and throughout Vol. 2), and all of this is important for physics later on.</p> <p>Facility with rigorous math is useful for higher-level physics, but only rarely for introductory physics classes, so if you find you want (or need) to move faster for some reason, it would be okay to use a calculus book that is conceptual and provides proofs where they're easy, but avoids theoretical questions concerning limits and the like. One good option for this would be Lang's <em>A First Course in Calculus</em>.</p>
3,897,689
<p>I have the equation: <span class="math-container">$$y'+2y\:=1$$</span></p> <p>and I solve it the usual way for a first-order differential equation: <span class="math-container">$$y'\:=1-2y$$</span> <span class="math-container">$$\frac{dy}{dx}=1-2y$$</span> <span class="math-container">$$\int \:\frac{1}{1-2y}dy=\int \:1dx$$</span> <span class="math-container">$$-\frac{1}{2}\int \:-\frac{2}{1-2y}dy=\int \:1dx$$</span> and, using the integral formula: <a href="https://i.stack.imgur.com/pjKMZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pjKMZ.png" alt="enter image description here" /></a> <span class="math-container">$$-\frac{1}{2}\ln\left(\left|1-2y\right|\right)=x+\ln\left(c\right)$$</span> Why does <a href="https://he.symbolab.com/solver/step-by-step/%5Cleft%7C1-2y%5Cright%7C%3Dx" rel="nofollow noreferrer">Symbolab</a> omit the absolute value operator and write: <a href="https://i.stack.imgur.com/52jay.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/52jay.png" alt="enter image description here" /></a></p>
xpaul
66,420
<p>Since <span class="math-container">$$\int\frac{1}{1-2y}dy=\int 1dx, \text{ or }-\int\frac{1}{2y-1}dy=\int1dx $$</span> one has <span class="math-container">$$ -\frac12\ln|2y-1|=x+C $$</span> or <span class="math-container">$$ \ln|2y-1|=-2x-2C$$</span> So <span class="math-container">$$ 2y-1=\pm e^{-2C}e^{-2x}$$</span> Let <span class="math-container">$k=\pm e^{-2C}$</span>. Then <span class="math-container">$$ y=\frac12+\frac12ke^{-2x}. $$</span> Since <span class="math-container">$k$</span> absorbs the signs, it does not matter if you have absolute value for <span class="math-container">$2y-1$</span> or not.</p>
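<p>A numerical cross-check of this general solution (my addition; it approximates <span class="math-container">$y'$</span> by a central difference and verifies that the residual of <span class="math-container">$y'+2y-1$</span> vanishes for several values of <span class="math-container">$k$</span>):</p>

```python
import math

def y(x, k):
    # general solution y = 1/2 + (1/2) k e^{-2x}
    return 0.5 + 0.5 * k * math.exp(-2 * x)

def residual(x, k, h=1e-6):
    # y' + 2y - 1, with y' approximated by a central difference
    dy = (y(x + h, k) - y(x - h, k)) / (2 * h)
    return dy + 2 * y(x, k) - 1

for k in (-3.0, 0.0, 2.5):
    for x in (-1.0, 0.0, 1.0):
        assert abs(residual(x, k)) < 1e-6
```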
4,264,558
<p>I calculated the homogeneous solution already; I'm just struggling a bit with the right side. Would <span class="math-container">$y_p$</span> be <span class="math-container">$= ++e^x$</span> or <span class="math-container">$= ++e^{2x}$</span>?</p> <p>Would the power in front of the root be the roots found from the homogeneous part?</p> <p>Sorry, it's been a while since I did ODEs. All help is appreciated.</p>
David P
49,975
<p><span class="math-container">$$\lim_{x\to\infty} x^2(1-\cos(1/x)) = \lim_{x\to\infty} \dfrac{1-\cos(1/x)}{1/x^2} \to {0\over 0}$$</span></p> <p>Now you can apply L'Hopital's rule</p> <p><span class="math-container">$$\lim_{x\to\infty} \dfrac{1-\cos(1/x)}{1/x^2} =\lim_{x\to\infty} \dfrac{-\sin(1/x)/x^2}{-2/x^3} = \dfrac{1}{2} \lim_{x\to\infty} \dfrac{\sin(1/x)}{1/x}$$</span></p> <p>One more application of L'H:</p> <p><span class="math-container">$$\frac{1}{2}\lim_{x\to\infty} \dfrac{\sin(1/x)}{1/x} = \frac{1}{2}\lim_{x\to\infty} \dfrac{\cos(1/x)/x^2}{1/x^2} =\frac{1}{2}\lim_{x\to\infty} \cos(1/x) =\frac{1}{2}\cdot1=\frac{1}{2}$$</span></p>
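<p>A numerical check of the value <span class="math-container">$\tfrac12$</span> (my addition; by the Taylor expansion the expression equals <span class="math-container">$\tfrac12-\tfrac{1}{24x^2}+\cdots$</span> for large <span class="math-container">$x$</span>):</p>

```python
import math

def g(x):
    return x * x * (1 - math.cos(1 / x))

# As x grows, x^2 (1 - cos(1/x)) approaches 1/2;
# the deviation is about 1/(24 x^2).
for x in (1e2, 1e3, 1e4):
    assert abs(g(x) - 0.5) < 1 / x
```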
3,814,195
<p>As an applied science student, I've been taught math as a tool. And although I've been studying <strong>a lot</strong> throughout the years, I always felt like I was missing depth. Then I read geodude's answer on this <a href="https://math.stackexchange.com/questions/721364/why-dont-taylor-series-represent-the-entire-function">post</a>, which cited these beautiful quotes:</p> <blockquote> <p>You might want to do calculus in <span class="math-container">$\mathbb{R}$</span>, but the functions themselves naturally live in <span class="math-container">$\mathbb{C}$</span></p> </blockquote> <blockquote> <p>Even in <span class="math-container">$\mathbb{R}$</span>, and in the most practical and applied problems, you can hear distant echos of the complex behavior of the functions. It's their nature, you can't change it.</p> </blockquote> <p>And although pieces of complex analysis are well known even to the most applied scientist (e.g. Euler's identity), these quotes really helped me understand why my math knowledge is so shallow. It seems I share the same worries with other engineers (<a href="https://math.stackexchange.com/questions/1658577/whats-the-best-way-for-an-engineer-to-learn-real-math">What&#39;s the best way for an engineer to learn &quot;real&quot; math?</a>), and I've found many beautiful and informative answers about diving deeper into mathematics, but none of them (as far as I could spot) addressed complex analysis. And as I think I am lost in the labyrinth of math knowledge, I ask this question:</p> <p>How can one who has a basic knowledge of real analysis approach complex analysis? Where do I start? Are there any books you would recommend?</p>
Community
-1
<p>I found the book by Gamelin &quot;Complex Analysis&quot; together with &quot;Visual Complex Analysis&quot; by T. Needham to be a good combination.</p> <p>Also if you read German, then I recommend chapters 1-4 from &quot;Funktionentheorie&quot; Busam/Freitag (not sure if it exists in English).</p> <p>There is also &quot;Complex Analysis&quot; by Stein which people recommend, but I have not read.</p>
94,440
<p>In Sean Carroll's <em>Spacetime and Geometry</em>, a formula is given as $${\nabla _\mu }{\nabla _\sigma }{K^\rho } = {R^\rho }_{\sigma \mu \nu }{K^\nu },$$</p> <p>where $K^\mu$ is a Killing vector satisfying Killing's equation ${\nabla _\mu }{K_\nu } +{\nabla _\nu }{K_\mu }=0$ and the convention of Riemann curvature tensor is</p> <p>$$\left[\nabla_{\mu},\nabla_{\nu}\right]V^{\rho}={R^\rho}_{\sigma\mu\nu}V^{\sigma}.$$</p> <p>So how to prove the this formula (the connection is Levi-Civita)?</p>
Zhen Lin
5,191
<p>Permit me the use of Latin indices instead of Greek indices and the convention <span class="math-container">$\nabla_a K_b=K_{b;a} $</span>. So we wish to prove <span class="math-container">$\newcommand{\Tud}[3]{{#1}^{#2}_{\phantom{#2}{#3}}}$</span> <span class="math-container">$$\Tud{K}{a}{;b c} = \Tud{R}{a}{b c d} K^d$$</span> where <span class="math-container">$$\Tud{V}{a}{;b c} - \Tud{V}{a}{;c b} = \Tud{R}{a}{d c b} V^d$$</span> and <span class="math-container">$$K_{a ; b} + K_{b ; a} = 0$$</span></p> <p>Differentiating the last equation, we get <span class="math-container">$$K_{a ; b c} + K_{b ; a c} = 0$$</span> so, relabelling and summing, <span class="math-container">$$K_{a ; b c} + K_{b ; a c} - K_{b ; c a} - K_{c ; b a} + K_{c ; a b} + K_{a ; c b} = 0$$</span> hence, <span class="math-container">$$K_{a; b c} + K_{a ; c b} = R_{b d a c} K^d + R_{c d a b} K^d$$</span> By the interchange symmetry <span class="math-container">$R_{a b c d} = R_{c d a b}$</span>, and raising indices, we get <span class="math-container">$$\Tud{K}{a}{;b c} - \Tud{R}{a}{b c d} K^d = -(\Tud{K}{a}{;c b} - \Tud{R}{a}{c b d} K^d)$$</span></p> <p>On the other hand, by the first Bianchi identity and antisymmetry, we have <span class="math-container">$$\Tud{R}{a}{d c b} = \Tud{R}{a}{b c d} + \Tud{R}{a}{c d b}$$</span> Hence we get <span class="math-container">$$\Tud{K}{a}{;b c} = \Tud{K}{a}{; c b} + \Tud{R}{a}{d c b} K^d = \Tud{K}{a}{;c b} + \Tud{R}{a}{b c d} K^d + \Tud{R}{a}{c d b} K^d$$</span> and therefore <span class="math-container">$$\Tud{K}{a}{;b c} - \Tud{R}{a}{b c d} K^d = \Tud{K}{a}{;c b} - \Tud{R}{a}{c b d} K^d$$</span> The conclusion follows.</p>
2,468,326
<p>I want to read <a href="https://www.amazon.co.uk/Introduction-Cyclotomic-Fields-Graduate-Mathematics/dp/0387947620" rel="noreferrer">Lawrence Washington's An <em>Introduction to Cyclotomic Fields</em></a>. However, my knowledge of algebraic number theory doesn't extend farther than what's found in <a href="https://www.amazon.co.uk/Classical-Introduction-Modern-Number-Theory/dp/038797329X/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1507759261&amp;sr=1-1&amp;keywords=ireland%20and%20rosen%20number%20theory" rel="noreferrer"><em>A Classical Introduction to Modern Number Theory</em> - Ireland and Rosen</a>. Does anyone know whether this will be sufficient or whether I'll have to learn more about algebraic number theory before I can get to it? </p>
lhf
589
<p>The preface to the first edition of Washington's book says:</p> <blockquote> <p>The reader is assumed to have had at least one semester of algebraic number theory (though one of my students took such a course concurrently). In particular, the following terms should be familiar: Dedekind domain, class number, discriminant, units, ramification, local field. Occasionally one needs the fact that ramification can be computed locally. However, one who has a good background in algebra should be able to survive by talking to the local algebraic number theorist. I have not assumed class field theory; the basic facts are summarized in an appendix. For most of the book, one only needs the fact that the Galois group of the maximal unramified abelian extension is isomorphic to the ideal class group, and variants of this statement.</p> </blockquote> <p>It seems you'll have to learn much more about algebraic number theory than covered by Ireland and Rosen. </p>
4,228,512
<p>The question is: does the sequence of characteristic functions <span class="math-container">$f_k(x) := \chi_{[-\frac{1}{k}, \frac{1}{k}]}(x)$</span> converge in the distributional sense to the Dirac delta?</p> <p>In order to answer I followed this approach, but I fear I'm neglecting something important in my lines:</p> <p>First of all, <span class="math-container">$f_k\in L^1_{loc}(\mathbb{R})$</span>, so we can write the action of the associated distribution as <span class="math-container">$$\langle T_k(x),\psi \rangle= \int_\mathbb{R}\chi_{[-\frac{1}{k}, \frac{1}{k}]}(x)\cdot\psi(x)dx=\int_{-\frac{1}{k}}^\frac{1}{k}\psi(x)dx$$</span> for every test function <span class="math-container">$\psi \in C^\infty_c(\mathbb{R})$</span>. Then I computed the limit as: <span class="math-container">$$\displaystyle{\lim_{k\to \infty}\langle T_k(x),\psi\rangle =\lim_{k\to \infty} \int_{-\frac{1}{k}}^\frac{1}{k}{\psi(x)dx} =\lim_{k\to \infty} \int_{-1}^1{\frac{1}{k}\cdot\psi(\frac{y}{k})dy}}$$</span> and applied the Lebesgue dominated convergence theorem, saying that <span class="math-container">$\frac{|\psi(\frac{y}{k})|}{|k|} \le \sup_{x\in [-1,1]}|\psi(x)| &lt;\infty$</span> and <span class="math-container">$\displaystyle{\lim_{k\to \infty}{\frac{\psi(\frac{y}{k})}{k}} = 0 }$</span>. Then I deduced that the sequence <span class="math-container">$T_k$</span> converges to the distribution associated to the zero function.</p> <p>Is my proof correct? Any check or observation would be really appreciated.</p>
paul garrett
12,291
<p>Your argument is correct. Indeed, characteristic functions <span class="math-container">$\chi_\varepsilon$</span> of smaller and smaller intervals <span class="math-container">$[-\varepsilon,\varepsilon]$</span> do not converge to <span class="math-container">$\delta$</span> (in a distributional sense), but, rather, to <span class="math-container">$0$</span>. Still, the renormalizations to have &quot;total mass&quot; <span class="math-container">$1$</span>, namely, <span class="math-container">${1\over 2\varepsilon}\chi_{\varepsilon}$</span>, <em>do</em> converge to <span class="math-container">$\delta$</span>, by a similar computation.</p>
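<p>The contrast between the unnormalized and normalized sequences can be illustrated numerically (a sketch I am adding; it uses the test function <span class="math-container">$\psi(x)=e^{-x^2}$</span>, which is not compactly supported but is enough for the picture):</p>

```python
import math

def psi(x):
    # smooth test function (not compactly supported, but fine here)
    return math.exp(-x * x)

def pairing(k, n=20001):
    # trapezoid rule for  integral_{-1/k}^{1/k} psi(x) dx  =  <T_k, psi>
    a, b = -1 / k, 1 / k
    h = (b - a) / (n - 1)
    return h * (sum(psi(a + i * h) for i in range(n)) - 0.5 * (psi(a) + psi(b)))

for k in (10, 100, 1000):
    assert pairing(k) < 3 / k                          # <T_k, psi> -> 0
    assert abs(k / 2 * pairing(k) - psi(0)) < 1 / k    # renormalized -> psi(0)
```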
330,991
<p>Many things in math can be formulated quite differently; see the list of statements equivalent to RH <a href="https://mathoverflow.net/questions/39944/collection-of-equivalent-forms-of-riemann-hypothesis">here</a>, for example, with RH formulated as a bound on the lcm of consecutive integers, as an integral equality, etc.</p> <p>I am wondering about equivalent formulations of the P vs. NP problem. Formulations that are very much different from questions such as &quot;Is TSP in P?&quot;; formulations that may seem unrelated to complexity theory.</p>
none
140,370
<p><a href="https://www.cs.toronto.edu/~sacook/homepage/ptime.pdf" rel="nofollow noreferrer">https://www.cs.toronto.edu/~sacook/homepage/ptime.pdf</a></p> <p>The above paper (1991) gives a syntactic method for enumerating all the PTIME functions. P != NP is the proposition that none of the functions in that enumeration recognize 3SAT. That suggests various half-baked proof ideas that I'm sure lots of people have thought of, so they presumably don't work, though who knows.</p>
338,535
<p>Suppose that $f$ is a function defined on the set of natural numbers such that $$f(1)+ 2^2f(2)+ 3^2f(3)+...+n^2f(n) = n^3f(n)$$ for all positive integers $n$. Given that $f(1)= 2013$, find the value of $f(2013)$.</p>
Eric Naslund
6,075
<p><strong>Hint:</strong> Try computing some small examples first. When $n=2$, we get that $$f(1)+2^{2}f(2)=2^{3}f(2) $$ $$\Rightarrow 2^2f(2)=f(1).$$ Using the previous case and the given equation, when $n=3,$ we have that </p> <p>$$f(1)+2^2f(2)+3^{2}f(3)=3^{3}f(3)$$ $$\Rightarrow f(1)+f(1)+3^{2}f(3)=3^{3}f(3),$$ $$\Rightarrow 18f(3)=2f(1),$$ $$\Rightarrow 3^{2}f(3)=f(1).$$ </p> <p>Using the previous cases and the given equation, when $n=4$ we have </p> <p>$$f(1)+2^2f(2)+3^2f(3)+4^{2}f(4)=4^{3}f(4), $$ $$\Rightarrow f(1)+f(1)+f(1)+4^{2}f(4)=4^{3}f(4), $$ $$\Rightarrow 48f(4)=3f(1),$$ $$\Rightarrow 4^2f(4)=f(1).$$</p> <p>Do you see a pattern? What is happening here? Now that you have a conjecture, try to prove it.</p>
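<p>The conjectured pattern <span class="math-container">$n^2 f(n)=f(1)$</span> can also be confirmed by running the recurrence with exact rational arithmetic (a check I am adding, not part of the original hint):</p>

```python
from fractions import Fraction

# Run the recurrence  n^3 f(n) = f(1) + 2^2 f(2) + ... + n^2 f(n)
# exactly: solving for f(n) gives  f(n) = (previous partial sum) / (n^3 - n^2).
f1 = Fraction(2013)
f = {1: f1}
partial = f1  # running value of sum_{i<=n} i^2 f(i)
for n in range(2, 50):
    f[n] = partial / (n**3 - n**2)
    partial += n**2 * f[n]
    assert n**2 * f[n] == f1  # the pattern from the worked cases persists
```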
1,740,151
<p>$$\lim\limits_{x \to \pi} \frac{e^{\sin x} -1}{x-\pi}$$</p> <p>I found $-1$ as the answer, and what I did was: </p> <p>$\lim\limits_{x \to \pi} \frac{e^{\sin x} -1}{x-\pi}$ $\Rightarrow$ $\lim \frac{f(x) - f(a)}{x-a}$ $\Rightarrow$ $f(x)=e^{\sin x}$ </p> <p>$f(a)=1$ </p> <p>$x=x$ </p> <p>and $a=\pi$ </p> <p>So I concluded that the limit would be the same as the derivative of $f(x)$ at $a=\pi$, so I did:</p> <p>$\frac{d}{dx} (e^{\sin x})\Big|_{x=\pi}$ $\Rightarrow$ $-1$</p> <p>But isn't that the same as using L'Hôpital's rule? </p>
Matthé van der Lee
75,745
<p>As to your question (1), the answer is: <em>all</em> of AC is needed.</p> <p><strong>Lemma</strong>: if $f:A\times B\to C\cup D$ is injective, $C\cap D=\varnothing$, and at least one of $A,B,C,D$ is well ordered, we have $|A|\leq|C|\vee|B|\leq|D|$. (By symmetry, one has $|A|\leq|D|\vee|B|\leq|C|$ as well!).</p> <p><strong>Proof</strong>: By symmetry, we may assume either $A$ or $C$ can be well ordered.</p> <p>Let $R$ well order $A$. If $\forall_{b\in B}\exists_{a\in A}f(a,b)\in D$, then $b\mapsto f(a,b)$ gives an injection $B\to D$, where for $b\in B$ we take $a$ to be the $R$-minimal $x\in A$ such that $f(x,b)\in D$. If not, it follows that $\exists_{b\in B}\forall_{a\in A}f(a,b)\in C$, and in that case the assignment $a\mapsto f(a,b)$ injects $A$ into $C$ for such a $b$.</p> <p>Now let $S$ well order $C$. If $\forall_{a\in A}\exists_{b\in B}f(a,b)\in C$, we get an injection $A\to C$ by letting $a\mapsto$ the $S$-minimal $c\in C$ for which there exists a $b\in B$ with $f(a,b)=c$. And if $\exists_{a\in A}\forall_{b\in B}f(a,b)\in D$, then for such an $a$ the prescription $b\mapsto f(a,b)$ defines an injection $B\to D$.</p> <p>Hence $|A|\leq|C|\vee|B|\leq|D|$ in both situations. Reversing the roles of $C$ and $D$ in the $R$ wo $A$ case, and of $A$ and $B$ in the $S$ wo $C$ case, we find that, equally, $|A|\leq|D|\vee|B|\leq|C|$, $\Box$.</p> <p>EDIT: the condition $C\cap D=\varnothing$ was never used in the proof, so it can be discarded.</p> <p><strong>Proposition</strong>: $(\forall_{X,Y}(|X|&lt;|Y|\Rightarrow|X|^{2}&lt;|Y|^{2}))\Rightarrow AC$</p> <p><strong>Proof</strong>: assuming the antecedent, we show that every set can be well ordered. (As is well known, the well ordering principle is equivalent to the AC.) Let $A$ be a set which we wish to well-order. We can assume $|A|\geq\aleph_{0}$ (else replace $A$ with $A\cup\mathbb{N}$; a well ordering for that set will induce one for $A$). Put $B:=A^{\mathbb{N}}$. Then $|A|\leq|B|$ and $|B|^{2}=|B|$. 
We now have to become slightly technical, and consider $C:=\Gamma(B)$ where $\Gamma$ is <strong>Hartogs' function</strong>; that is, $C$ is the set of all ordinals $\alpha$ that have an injection $\alpha\to B$. The facts that $\Gamma(B)$ is indeed a set, and that it is the smallest ordinal that does <em>not</em> allow an injection $\to B$, can be found in most text books on set theory.</p> <p>Write $\kappa:=|B|$ and $\mu:=|C|$. Since both are $\geq\aleph_{0}$, one has $\kappa+\mu\leq\kappa.\mu$ (indeed, we find that $\kappa.\mu=(\kappa+1).(\mu+1)=\kappa.\mu+\kappa+\mu+1\geq\kappa+\mu$). Let us assume $\kappa+\mu=\kappa.\mu$. As C is an ordinal, the Lemmma applies, giving $\kappa\leq\mu\vee\mu\leq\kappa$. The latter option is impossible, as no injection $C\to B$ can exist. So $\kappa\leq\mu$, that is, an injection $B\to C$ exists, and as $C$ is well ordered, $B$ inherits a well ordering from $C$ via this injection. But $|A|\leq|B|$, so that $A$ inherits a well ordering from the one on $B$ via an injection $A\to B$.</p> <p>Now assume $\kappa+\mu&lt;\kappa.\mu$. By our hypothesis, this yields $(\kappa+\mu)^{2}&lt;(\kappa.\mu)^{2}=\kappa^{2}.\mu^{2}=\kappa.\mu$. The latter equality follows from the fact that $|B|^{2}=|B|$, as we have already remarked, while $|C|^{2}=|C|$ since $C$ is an infinite ordinal. However, one has: $(\kappa+\mu)^{2}=\kappa^{2}+2\kappa.\mu+\mu^{2}=\kappa+\kappa.\mu+\mu$; again, $2|C|=|C|$ since $C$ is an infinite ordinal. Thus $(\kappa+\mu)^{2}=\kappa+\kappa.\mu+\mu=\kappa+\kappa.\mu+\mu+1=(\kappa+1)(\mu+1)=\kappa.\mu$, in contradiction with $(\kappa+\mu)^{2}&lt;\kappa.\mu$, $\Box$.</p>
3,453,408
<p>I'm reading through some lecture notes and see this in the context of solving ODEs: <span class="math-container">$$\int\frac{dy}{y}=\int\frac{dx}{x} \rightarrow \ln{|y|}=\ln{|x|}+\ln{|C|}$$</span> why is the constant of integration natural logged here?</p>
Eric Towers
123,905
<p>In this form, it is evident that you can rewrite the result as <span class="math-container">$\ln \left| C x \right|$</span>. Perhaps this is less evident from the form <span class="math-container">$\ln |x| + C$</span>.</p>
238,547
<p>I have a PDE like</p> <pre><code>D[h[x1, x2], x1]*a[x1,x2]+D[h[x1,x2], x2]*b[x1,x2] + c[x1,x2] == h[x1,x2] s.t. gradient(h(0,0))==0 </code></pre> <p>where a,b,c are known functions of x1 and x2, and h is the function to be solved for. x1 and x2 are both in [-2, 2]. For some selected a,b,c, DSolveValue can give me perfect analytical solutions, like (a=0.1*x1, b=x2-x1^2, c=-x1^2)</p> <pre><code>eq1 = -x1^2 + D[h1[x1, x2], x1]*(0.1)*x1 + D[h1[x1, x2], x2]*(x2 - x1^2) == h1[x1, x2]; s1 = DSolveValue[eq1, h1[x1, x2], {x1, x2}] (*x1^2. (-1.25 + x1^8. C[1][(0.25 (-5. x1^2 + 4. x2))/x1^10])*) </code></pre> <p>For this analytical solution, I then use the gradient constraint to figure out what C[1] should look like and finally get h1=-1.25*x1^2, which is the solution I want. But for other a,b,c, DSolveValue reports that &quot;Inverse functions are being used by Solve, so solutions may not be found&quot;. For example (a=x2, b=Sin[x1]+x2, c=0):</p> <pre><code>eq2 = D[h2[x1, x2], x1]*x2 + D[h2[x1, x2], x2]*(Sin[x1] + x2) == h2[x1, x2]; s2 = DSolveValue[eq2, h2[x1, x2], {x1, x2}] </code></pre> <p>The output just repeats my command and doesn't give a solution. I'm aware that this is because h2 doesn't have an analytical solution. How can I get a numerical solution under these circumstances?</p>
bbgodfrey
1,063
<p>Linear PDEs typically are solved by the method of characteristics. For the PDE in the question, the ODEs of the characteristics are</p> <pre><code>{x2'[s] == Sin[x1[s]] + x2[s], x1'[s] == x2[s], h'[s] == h[s]} </code></pre> <p>Attempting to solve these ODEs with <code>DSolve</code> is unsuccessful, returning the <code>Solve::ifun</code> error message. That this is the same error cited in the question is not surprising, because <code>DSolve</code> itself undoubtedly uses the method of characteristics when attempting to solve a linear or quasilinear PDE. (See questions <a href="https://mathematica.stackexchange.com/a/238375/1063">238346</a> and <a href="https://mathematica.stackexchange.com/a/238863/1063">238808</a> for examples in which the characteristics, and therefore the PDE, can be solved symbolically.) However, <code>NDSolve</code> can evaluate the characteristics with little difficulty. Focus first on the ODEs for <code>(x1, x2}</code>.</p> <pre><code>sp = ParametricNDSolveValue[{x2'[s] == Sin[x1[s]] + x2[s], x1'[s] == x2[s], x1[0] == x10, x2[0] == x20}, {x1[s], x2[s]}, {s, -10, 10}, {x10, x20}]; Table[sp[n, -2], {n, -4.03, 2, .1}]; pt = ParametricPlot[%, {s, -10, 10}, PlotRange -&gt; {{-2, 2}, {-2, 2}}, ImageSize -&gt; Large, FrameLabel -&gt; {x1, x2}, LabelStyle -&gt; {15, Bold, Black}, PlotStyle -&gt; RGBColor[0.368417, 0.506779, 0.709798]]; </code></pre> <p>Before displaying the result, let us superimpose (in black) the two stream lines that pass through the origin.</p> <pre><code>NDSolveValue[{x2'[s] == Sin[x1[s]] + x2[s], x1'[s] == x2[s], x1[10^-5] == 10^-5, x2[10^-5] == 10^-5 1/2 (1 - Sqrt[5])}, {x1[s], x2[s]}, {s, -50, 50}]; pt0 = ParametricPlot[%, {s, -50, 50}, PlotStyle -&gt; Black, PlotRange -&gt; {{-2, 2}, {-2, 2}}]; Show[pt, pt /. Line[z_] -&gt; Line[-z], pt0, pt0 /. 
Line[z_] -&gt; Line[-z]] </code></pre> <p><a href="https://i.stack.imgur.com/Tp2eB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tp2eB.png" alt="enter image description here" /></a></p> <p>(The substitution, <code>Line[z_] -&gt; Line[-z]</code> reflects the solution in the origin, reducing computations by a factor of two.) Now, the point of evaluating the characteristics is that <code>h2[x1, x2]</code> satisfies</p> <pre><code>DSolveValue[h'[s] == h[s], h[s], s] (* E^s C[1] *) </code></pre> <p>along each stream line, with <code>C[1]</code> a constant that can vary from stream line to stream line. Consequently, if the value of <code>h2[x1, x2]</code> is known anywhere on each stream line, its value is known everywhere. The question above specifies only that the gradient of <code>h2[x1, x2]</code> vanishes at the origin. From the PDE, it immediately follows that <code>h2[0, 0]</code> also vanishes, and the value of <code>h2[x1, x2]</code> must be zero everywhere along the black lines. Moreover, <code>h[x1, x2]</code> also must vanish on characteristics passing infinitesimally close to the origin. However, values of <code>h2[x1, x2]</code> on other stream lines are unspecified.</p> <p>For completeness, here is a plot of the stream lines over a larger domain.</p> <pre><code>rl = Table[{Line[z_] :&gt; Line[{2 nn Pi + #[[1]], #[[2]]} &amp; /@ z]} /. nn -&gt; n, {n, -2, 2}]; Table[sp[n, -4], {n, -6 Pi, 3 Pi, Pi/5}]; ParametricPlot[%, {s, -10, 10}, PlotRange -&gt; {{-10, 10}, {-4, 4}}, ImageSize -&gt; Large, FrameLabel -&gt; {x1, x2}, LabelStyle -&gt; {15, Bold, Black}, PlotStyle -&gt; RGBColor[0.368417, 0.506779, 0.709798]]; Show[%, % /. Line[z_] -&gt; Line[-z], Replace[{pt0, pt0 /. 
Line[z_] -&gt; Line[-z]}, rl, All]] </code></pre> <p><a href="https://i.stack.imgur.com/jjrsi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jjrsi.png" alt="enter image description here" /></a></p> <p>As would be expected, the structure of the stream lines is periodic in <code>x1</code> with period <code>2 Pi</code>. Finally, it should be noted that <code>StreamPlot</code> can produce similar plots, although with less detail.</p> <p><strong>Response to OP's comment</strong></p> <p><a href="https://mathematica.stackexchange.com/users/75790/le-zheng">Le ZHENG</a> asked in a comment below about the initial conditions in</p> <pre><code> NDSolveValue[{x2'[s] == Sin[x1[s]] + x2[s], x1'[s] == x2[s], x1[10^-5] == 10^-5, x2[10^-5] == 10^-5 1/2 (1 - Sqrt[5])}, {x1[s], x2[s]}, {s, -50, 50}]; </code></pre> <p>used above. They were obtained by combining the first two characteristic equations at the beginning of the answer to obtain</p> <pre><code>x2'[x1] == (Sin[x1] + x2[x1])/x2[x1] </code></pre> <p>For stream lines passing through the origin, <code>x2'[x1]</code> is equal to <code>x2[x1]/x1</code>. With these expressions set to <code>a</code> for convenience, the preceding equation becomes <code>a == 1/a + 1</code>, which has the solutions</p> <pre><code>Solve[a == 1/a + 1, a] // Flatten (* {a -&gt; 1/2 (1 - Sqrt[5]), a -&gt; 1/2 (1 + Sqrt[5])} *) </code></pre> <p>Hence, very near the origin <code>x2 = x1 1/2 (1 - Sqrt[5])</code> for one stream line and <code>x2 = x1 1/2 (1 + Sqrt[5])</code> for the other. I chose &quot;very near&quot; to be <code>x1 = 10^-5</code>. And, because the set of characteristic equations is autonomous (not explicitly dependent on <code>s</code>), I could choose <code>s</code> corresponding to <code>x1 = 10^-5</code> to be any value whatsoever. I chose it equal to <code>x1</code>.</p> <p>By the way, no stream lines pass through <code>{n Pi, 0}</code>, with <code>n</code> an odd integer.
Instead, they spiral around those points with exponentially decreasing radii.</p> <p><strong>Sample Complete Numerical Solution</strong></p> <p>As a sample solution, choose <code>h[x1,x2] = (x1^2 +x2^2)/4</code> along the x and y axes, which is sufficient to completely determine <code>h</code> and satisfies the requirements that <code>h</code> and its gradients vanish at the origin. It is convenient for this computation to eliminate <code>s</code> from the characteristic ODEs in favor of <code>x1</code> or <code>x2</code>.</p> <pre><code>sp1 = ParametricNDSolveValue[{x2'[x1] == (Sin[x1] + x2[x1])/x2[x1], h'[x1] == h[x1]/x2[x1], x2[0] == x0, h[0] == x0^2/4, WhenEvent[x2[x1] &gt; 3.5, &quot;StopIntegration&quot;]}, {x1, x2[x1], h[x1]}, {x1, -2, 2}, x0]; sp2 = ParametricNDSolveValue[{x1'[x2] == x2/(Sin[x1[x2]] + x2), h'[x2] == h[x2]/(Sin[x1[x2]] + x2), x1[0] == x0, h[0] == x0^2/4, WhenEvent[x1[x2] &gt; 2, &quot;StopIntegration&quot;]}, {x1[x2], x2, h[x2]}, {x2, -2, 2}, x0]; plt1 = Show@Table[tem = sp1[n]; ParametricPlot3D[tem, Evaluate[Join[{x1}, Flatten[Last[tem] /. x1 -&gt; &quot;Domain&quot;]]], PlotRange -&gt; {{-2, 2}, {-2, 2}, {0, 1.5}}, AxesLabel -&gt; {x1, x2, h}, ImageSize -&gt; Large, LabelStyle -&gt; {15, Bold, Black}], {n, .01, 3.5, .05}]; plt2 = Show@Table[tem = sp2[n]; ParametricPlot3D[tem, Evaluate[Join[{x2}, Flatten[Last[tem] /. x2 -&gt; &quot;Domain&quot;]]], PlotRange -&gt; {{-2, 2}, {-2, 2}, {0, 1.5}}, AxesLabel -&gt; {x1, x2, h}, ImageSize -&gt; Large, LabelStyle -&gt; {15, Bold, Black}], {n, .01, 2, .05}]; Show[plt1, plt1 /. Line[z_] :&gt; Line[Map[{-#[[1]], -#[[2]], #[[3]]} &amp;, z]], plt2, plt2 /. 
Line[z_] :&gt; Line[Map[{-#[[1]], -#[[2]], #[[3]]} &amp;, z]]] </code></pre> <p><a href="https://i.stack.imgur.com/efnqu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/efnqu.png" alt="enter image description here" /></a></p> <p>As expected, <code>h = 0</code> on the stream lines passing through the origin and those infinitesimally close. The solution also can be displayed with <code>ListPlot3D</code>.</p> <pre><code>Flatten[Cases[%, Line[z_] -&gt; z, Infinity], 1]; ListPlot3D[%, PlotRange -&gt; {{-2, 2}, {-2, 2}, {0, 1.5}}, AxesLabel -&gt; {x1, x2, h}, ImageSize -&gt; Large, LabelStyle -&gt; {15, Bold, Black}] </code></pre> <p><a href="https://i.stack.imgur.com/Awlp1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Awlp1.png" alt="enter image description here" /></a></p>
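<p>For readers who want to check the characteristic computation outside Mathematica, here is a minimal Python sketch (my own translation of the ODEs above, not part of the original answer), following one of the black stream lines from a point very near the origin with a hand-rolled Runge-Kutta integrator:</p>

```python
import math

def rhs(y):
    """Characteristic ODEs of the PDE: x1' = x2, x2' = sin(x1) + x2, h' = h."""
    x1, x2, h = y
    return [x2, math.sin(x1) + x2, h]

def rk4_step(y, ds):
    """One classical 4th-order Runge-Kutta step for the autonomous system."""
    k1 = rhs(y)
    k2 = rhs([yi + ds / 2 * ki for yi, ki in zip(y, k1)])
    k3 = rhs([yi + ds / 2 * ki for yi, ki in zip(y, k2)])
    k4 = rhs([yi + ds * ki for yi, ki in zip(y, k3)])
    return [yi + ds / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Start "very near" the origin on the stream line of slope (1 - Sqrt[5])/2,
# with h = 0 there (h and its gradient vanish at the origin).
eps = 1e-5
y = [eps, eps * (1 - math.sqrt(5)) / 2, 0.0]
for _ in range(5000):        # integrate to s = 5 with step 1e-3
    y = rk4_step(y, 1e-3)
# h' = h with h = 0 initially, so h stays 0 along the whole stream line.
print(y[2])  # 0.0
```

<p>This confirms numerically the argument above: <code>h</code> vanishes identically along the characteristics through the origin.</p>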
403,631
<p>$a^n \rightarrow 0$ as $n \rightarrow \infty$ for $\left|a\right| &lt; 1 $ <br/> Hint: $u_{2n}$ = $u_{n}^2$</p> <p>I have no idea how to prove this; it looks obvious, but I found the proof is really hard... I am doing a real analysis course with a lot of proofs, and I am stuck here. Any advice? Does practice make perfect? </p>
A.S
24,829
<p>Proof sketch: Maybe try showing that $|\frac{1}{a^n}| \to \infty$ (assume $a \neq 0$; the case $a = 0$ is trivial).</p> <p>Fix $p = |\frac{1}{a}| &gt; 1$, and let $p = ( 1 + b )$ with $b &gt; 0$. Show by induction that $p^n \ge 1 + nb$, and conclude the statement above using the Archimedean property of the reals.</p>
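<p>Written out (my wording, filling in the sketch), the induction is Bernoulli's inequality, and the Archimedean property finishes the argument:</p>

```latex
\begin{align*}
\text{Base case: }& p^{1} = 1 + b \ge 1 + 1\cdot b.\\
\text{Inductive step: }& p^{n+1} = p^{n}(1+b) \ge (1+nb)(1+b)
   = 1+(n+1)b+nb^{2} \ge 1+(n+1)b.\\
\text{Conclusion: }& \text{given } \varepsilon>0,\ \text{pick } n_0 > \frac{1/\varepsilon - 1}{b}
   \text{ (Archimedean property); then for } n \ge n_0,\\
&\left|\frac{1}{a^{n}}\right| = p^{n} \ge 1+nb > \frac{1}{\varepsilon}
   \quad\Longrightarrow\quad |a^{n}| < \varepsilon.
\end{align*}
```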
403,631
<p>$a^n \rightarrow 0$ as $n \rightarrow \infty$ for $\left|a\right| &lt; 1 $ <br/> Hint: $u_{2n}$ = $u_{n}^2$</p> <p>I have no idea how to prove this; it looks obvious, but I found the proof is really hard... I am doing a real analysis course with a lot of proofs, and I am stuck here. Any advice? Does practice make perfect? </p>
xavierm02
10,385
<p>Let $u_n = a^n$ with $|a|&lt; 1$</p> <p>Let $v_n=|u_n|$</p> <p>$v_{n+1} = |a| v_n \le v_n$ so $(v_n)$ is decreasing</p> <p>$0 \le v_n$ so $(v_n)$ is bounded below</p> <p>Since $(v_n)$ is bounded below and decreasing, it converges.</p> <hr> <p>Now let $v_\infty=\lim\limits_{n\to \infty}v_n$</p> <p>You have $v_{n+1} = |a| v_n$</p> <p>By making $n\to \infty$ on both sides (which you can do because $x\mapsto |a|x$ is continuous) you get:</p> <p>$v_\infty = |a| v_\infty$</p> <p>$0 = (|a|-1) v_\infty$</p> <p>But $|a|&lt;1$ so $|a|-1\not= 0$ so $v_\infty=0$</p> <hr> <p>That is $|u_n|\underset{n\to\infty}{\longrightarrow}0$</p> <p>So we can conclude that $u_n\underset{n\to\infty}{\longrightarrow}0$</p>
177,519
<p>Let $\mathfrak{g}$ be a simple Lie algebra over $\mathbb{C}$ and let $\hat{\mathfrak{g}}$ be the Kac-Moody algebra obtained as the canonical central extension of the algebraic loop algebra $\mathfrak{g} \otimes \mathbb{C}[t,t^{-1}]$. In a sequence of papers, Kazhdan and Lusztig constructed a braided monoidal structure on (a certain subcategory of) the category of representations of $\hat{\mathfrak{g}}$ of central charge $k - h$ where $k \in \mathbb{C}^* \;\backslash\; \mathbb{Q}_{\geq 0}$ and $h$ is the Coxeter number of $\mathfrak{g}$. They then showed that the resulting braided category is equivalent to the braided category of finite dimensional representations of the quantum group $U_q(\mathfrak{g})$ for $q = e^{\frac{\pi i}{k}}$. </p> <p>My question then is this: is there any conceptual explanation as to why these two braided categories should be equivalent (which does not resort to computing both sides and seeing that they are the same)? The representations of $\hat{\mathfrak{g}}$ of various central charges can be considered as twists of the representation theory of the loop algebra $\mathfrak{g} \otimes \mathbb{C}[t,t^{-1}]$. On the other hand, the representation theory of $U_q(\mathfrak{g})$ is a braided deformation (which can be thought of as a form of twisting) of the representation theory of $\mathfrak{g}$ itself. Moreover, the equivalence above only holds for non-trivially deformed/twisted cases. The limiting case of the representations of $\mathfrak{g}$ is recovered by (carefully) taking $q=1$, which corresponds to $k \rightarrow \infty$ and hence does not participate in the game. On the other hand, to obtain central charge $0$ we would need to take $k=h$, which is also excluded (as the proof of Kazhdan-Lusztig assumes $k \notin \mathbb{Q}_{\geq 0}$). Is there any reason why these two Lie algebras would have the same twisted/deformed representations, but not the same representations?</p>
Makoto Yamashita
9,942
<p>This is not a very sophisticated answer, but in a way there isn't much to choose from among the representation categories of "deformations of $U(\mathfrak{g})$." For $\mathfrak{g}=\mathfrak{sl}_n$, this can be made precise by [1] as follows. Any semisimple $\mathbb{C}$-linear monoidal category with the fusion rule of $\mathrm{SL}(n)$ has to be a twist of the finite dimensional admissible representations of $U_q(\mathfrak{sl}_n)$ for some $q$, and the twisting data is discrete, given by 3-cohomology of the Pontrjagin dual of the center of $\mathrm{SL}(n)$. If you impose the existence of a braiding, the twisting class has to be of order two, so you either have the representation category of $\mathrm{SL}_q(n)$ or "twisting by parity" when $n$ is even. Other cases are probably the same; see for example [2].</p> <ol> <li>David Kazhdan and Hans Wenzl, Reconstructing monoidal categories, I. M. Gel′fand Seminar, Adv. Soviet Math., vol. 16, Amer. Math. Soc., Providence, RI, 1993, pp. 111–136. MR 1237835 (95e:18007)</li> <li><em>Imre Tuba and Hans Wenzl</em>, <a href="http://dx.doi.org/10.1515/crll.2005.2005.581.31" rel="noreferrer"><strong>On braided tensor categories of type $BCD$</strong></a>, <em>J. Reine Angew. Math.</em> <strong>581</strong> (2005), 31--69.</li> </ol>
599,126
<p>The question is to check which of the following holds (only one option is correct) for a continuous bounded function $f:\mathbb{R}\rightarrow \mathbb{R}$.</p> <ul> <li>$f$ has to be uniformly continuous.</li> <li>there exists an $x\in \mathbb{R}$ such that $f(x)=x$.</li> <li>$f$ can not be increasing.</li> <li>$\lim_{x\rightarrow \infty}f(x)$ exists.</li> </ul> <p>What I have done so far:</p> <ul> <li>$f(x)=\sin(x^3)$ is a continuous function, bounded by $1$, which is not uniformly continuous.</li> <li>suppose $f$ is bounded by $M&gt;0$; then restrict $f: [-M,M]\rightarrow [-M,M]$. This function is bounded and continuous, so it has a fixed point.</li> <li>I could not say much about the third option "$f$ can not be increasing". I think this may also be true, since it seems an increasing $f$ cannot be bounded, but I am not sure.</li> <li>I also believed that $\lim_{x\rightarrow \infty}f(x)$ exists: as $f$ is bounded, it should have a limit at infinity. But then I feel the function can fluctuate so much that the limit need not exist. I am not so sure.</li> </ul> <p>So, I am sure the second option is correct, and the fourth option is probably wrong, but I am not so sure about the third option.</p> <p>Please help me clear this up.</p> <p>Thank You. :)</p>
Igor Rivin
109,865
<p>Well, for $x$ really, really large, what can you say about $f(x) - x?$ </p> <p>For $x$ really, really small, what can you say about $f(x) - x?$</p>
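<p>Completing the hint (a sketch, in my words): let $M$ bound $|f|$ and set $g(x) = f(x) - x$, which is continuous. Then</p>

```latex
\begin{align*}
g(M+1)    &= f(M+1) - (M+1) \le M - (M+1) = -1 < 0,\\
g(-(M+1)) &= f(-(M+1)) + (M+1) \ge -M + (M+1) = 1 > 0,
\end{align*}
```

<p>so the intermediate value theorem gives an $x$ with $g(x)=0$, i.e. $f(x)=x$: the second option holds. (For the record, $\arctan x$ is bounded, continuous and increasing, ruling out the third option, and $\sin x$ rules out the fourth.)</p>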
1,512,171
<p>I want to show that there exists a diffeomorphism $\phi$ such that the following diagram commutes: $$ \require{AMScd} \begin{CD} TS^1 @&gt;{\phi}&gt;&gt; S^1\times\mathbb{R}\\ @V{\pi}VV @V{\pi_1}VV \\ S^1 @&gt;{id_{S^1}}&gt;&gt; S^1 \end{CD}$$ where $\pi$ is the associated projection of $TS^1$, and $\pi_1(x,y)=x$ is the standard projection onto the first component.</p> <p>A hint was given along with the exercise that I should find a nowhere vanishing vector field on $S^1$. However, I don't know how to find one exactly, or what to do after finding such a vector field. I have seen an analogous example where $\phi$ was given without justification, with $S^1$ and $\mathbb{R}$ both replaced by $\mathbb{R}^n$. The definition of that $\phi$ was:$$\phi(a^i\frac{\partial}{\partial x^i}(p)) = (p,(a^1,...,a^n)).$$Perhaps the nowhere vanishing vector field on $S^1$ is used in an analogous formula?</p> <p>Could anyone give some additional hints or a sketch of a proof?</p> <p><strong>EDIT:</strong> Thinking about it, if I get the nowhere vanishing vector field, say, $u$, then because $S^1$ is a 1-manifold, I have that $T_pS^1$ is 1-dimensional as well. So that means that $T_pS^1$ is spanned by $u_p$. So I am thinking we use, $\forall v_p\in TS^1$, the unique coefficient $\alpha\in\mathbb{R}$ such that $v_p = \alpha u_p$. So perhaps:$$\phi(v_p)=(p,\alpha),$$is our diffeomorphism? In that case, is there a condition met by $S^1$ that guarantees it has a nowhere vanishing vector field (i.e. I don't have to find an exact formula for one)?</p>
dannum
152,081
<p>It depends on whether you're talking about a Riemann integral or a Lebesgue integral. </p> <p>If we are talking about a Riemann integral, the answer is that we cannot define the integral, because any sub-interval of $[0,1]$ - no matter how small - contains both a rational and an irrational. For this reason the upper integral and the lower integral will not be the same (here $\sup f = 1$ and $\inf f = 0$ on any sub-interval). </p> <p>If you are talking about a Lebesgue integral, the answer is yes, because integrating over $[0,1]$ and $[0,1] \setminus \mathbb Q$ gives the same answer. You'd need to know a little bit of measure theory to completely understand the details, but the formulation of the Lebesgue integral tells you that integrating over a set of measure zero (the rationals, for example) gives zero, and thus $$\int_{[0,1]} f(t) dt = \int_{[0,1] \setminus \mathbb Q} f(t) dt + \int_{\mathbb Q} f(t) dt = \int_{[0,1] \setminus \mathbb Q} 1 dt + 0 = 1$$ </p> <p>I'm omitting some details, but hopefully this provides some insight.</p>
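<p>A small Python sketch of the Riemann-side failure (mine, not part of the answer): with $f$ equal to $1$ at irrationals and $0$ at rationals, tagging every subinterval of the uniform partition at the rational point $k/n$ gives sum $0$, while tagging at the irrational point $k/n + \sqrt{2}/(2n)$ gives sum $1$, for every $n$ — so the Riemann sums have no common limit. Since floats cannot distinguish rationals from irrationals, the rationality of each tag is passed in symbolically as a flag:</p>

```python
from fractions import Fraction

def f(tag_is_rational):
    """Value of f at a tag point: 0 at rationals, 1 at irrationals."""
    return 0 if tag_is_rational else 1

def riemann_sum(n, tags_rational):
    """Riemann sum over the uniform partition of [0,1] into n pieces,
    with every tag rational (k/n) or irrational (k/n + sqrt(2)/(2n))."""
    dx = Fraction(1, n)
    return sum(f(tags_rational) * dx for _ in range(n))

for n in (10, 100, 1000):
    print(n, riemann_sum(n, True), riemann_sum(n, False))  # always 0 and 1
```

<p>Exact rational arithmetic via <code>Fraction</code> makes the point cleanly: refining the partition never brings the two families of sums together.</p>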