Dataset schema:
- qid: int64 (values 1 to 4.65M)
- question: large_string (lengths 27 to 36.3k)
- author: large_string (lengths 3 to 36)
- author_id: int64 (values -1 to 1.16M)
- answer: large_string (lengths 18 to 63k)
2,496,114
<p>Show that $ a \equiv 1 \pmod{2^3 } \Rightarrow a^{2^{3-2}} \equiv 1 \pmod{2^3} $</p> <p>Show that $ a \equiv 1 \pmod{2^4 } \Rightarrow a^{2^{4-2}} \equiv 1 \pmod{2^4} $</p> <p><strong>Answer:</strong></p> <p>$ a \equiv 1 \pmod{2^3} \\ \Rightarrow a^2 \equiv 1 \pmod{2^3} \\ \Rightarrow a^{2^{3-2}}=a^{2^1} \equiv 1 \pmod{2^3} $</p> <p><strong>Am I right?</strong></p>
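As a quick numerical sanity check (not part of the original question), the claims can be verified by brute force in Python; `power_is_one` is a helper name introduced here, and the last assertion checks the classical stronger statement for arbitrary odd $a$:

```python
def power_is_one(a: int, n: int) -> bool:
    """Does a^(2^(n-2)) leave remainder 1 modulo 2^n?"""
    return pow(a, 2 ** (n - 2), 2 ** n) == 1

# the stated special cases: a ≡ 1 (mod 2^n) for n = 3 and n = 4
assert all(power_is_one(a, 3) for a in range(1, 200, 8))
assert all(power_is_one(a, 4) for a in range(1, 200, 16))

# the classical stronger statement: any odd a works
assert all(power_is_one(a, n) for n in (3, 4, 5) for a in range(1, 200, 2))
```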
Community
-1
<p>Hint: Combine the <a href="https://en.wikipedia.org/wiki/Dirichlet_kernel" rel="nofollow noreferrer">Dirichlet kernel</a> with the following inequality:</p> <p>$$\frac{1}{2n+1}\geq\frac{1}{(2n+1)^2}\left|\frac{\sin((n+0.5)x)}{\sin(0.5x)}\right|\geq \frac{1}{(2n+1)^2}\left|\frac{\sin((n+0.5)x)}{0.5x}\right|$$</p> <p>And use the same inductive reasoning as Emil Artin used in his proof related to the Gamma function.</p>
1,173,002
<p>Do I have to use the diagonalization of A?</p>
Brian M. Scott
12,042
<p>$10$ dominoes:</p> <p>$$\begin{array}{|c|c|c|} \hline \;\;&amp;&amp;&amp;&amp;5&amp;&amp;8&amp;\\ \hline &amp;1&amp;1&amp;&amp;5&amp;&amp;8&amp;\\ \hline &amp;&amp;&amp;&amp;&amp;&amp;&amp;\\ \hline &amp;2&amp;&amp;4&amp;&amp;7&amp;&amp;10\\ \hline &amp;2&amp;&amp;4&amp;&amp;7&amp;&amp;10\\ \hline &amp;&amp;&amp;&amp;&amp;&amp;&amp;\\ \hline &amp;3&amp;3&amp;&amp;6&amp;&amp;9&amp;\\ \hline &amp;&amp;&amp;&amp;6&amp;&amp;9&amp;\\ \hline \end{array}$$</p> <p>I placed dominoes $1$ and $3$ for maximal coverage. It’s wasteful to have parallel dominoes with two spaces between them, so I tried turning dominoes $2$ and $4$ the other way. At that point the rest more or less just fell into place.</p> <p>Alternatively, dominoes $5$ and $6$ could be turned horizontal in the second and seventh rows and dominoes $8$ and $9$ moved over to the last column:</p> <p>$$\begin{array}{|c|c|c|} \hline \;\;&amp;&amp;&amp;&amp;&amp;&amp;\;\;&amp;8\\ \hline &amp;1&amp;1&amp;&amp;5&amp;5&amp;&amp;8\\ \hline &amp;&amp;&amp;&amp;&amp;&amp;&amp;\\ \hline &amp;2&amp;&amp;4&amp;&amp;7&amp;&amp;10\\ \hline &amp;2&amp;&amp;4&amp;&amp;7&amp;&amp;10\\ \hline &amp;&amp;&amp;&amp;&amp;&amp;&amp;\\ \hline &amp;3&amp;3&amp;&amp;6&amp;6&amp;&amp;9\\ \hline &amp;&amp;&amp;&amp;&amp;&amp;&amp;9\\ \hline \end{array}$$</p>
908,083
<p>I'd like to know what methods I can apply to simplify the fraction $\frac{4x + 2}{12 x ^2}$.</p> <p>Is it valid to divide above and below by 2? (I didn't know it, but GeoGebra's Simplify apparently does this.)</p> <p>Thanks in advance</p>
recursive recursion
118,924
<p>You can factor out a two from the numerator and denominator to get $$\frac{2x+1}{6x^2}.$$ Multiplying or dividing the numerator and the denominator by the same nonzero number will never affect the value of the fraction.</p>
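A quick spot-check of this simplification with Python's `fractions` module (exact rational arithmetic; the helper names are ours, not from the answer):

```python
from fractions import Fraction

def original(x):
    # (4x + 2) / (12 x^2), the fraction from the question
    return (4 * x + 2) / (12 * x ** 2)

def simplified(x):
    # (2x + 1) / (6 x^2), after dividing top and bottom by 2
    return (2 * x + 1) / (6 * x ** 2)

# exact rational arithmetic: the two expressions agree wherever defined
samples = [Fraction(1, 3), Fraction(7, 2), Fraction(-5, 7), Fraction(100)]
assert all(original(x) == simplified(x) for x in samples)
```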
1,982,216
<p>Consider the operator $B: L^1\left(\mathbb{R^+} \right)\to L^1\left(\mathbb{R^+} \right)$ defined for each $f\in L^1\left(\mathbb{R^+} \right)$ by $$(Bf)(t)=\int_0^\infty\alpha (t,s)f(s)ds, \ \ \ \text{for} \ \ t\geq 0$$ where $\alpha:\mathbb{R^+} \times \mathbb{R^+}\to\mathbb{R}$ is a real function satisfying $$\left|\alpha(t,s)\right| \leq\beta(t) \ \ \ \ \text{for all} \ \ t,s\geq 0,$$</p> <p>where $\beta:\mathbb{R^+} \to\mathbb{R^+}$ is a positive integrable function.</p> <p>Then $B$ defines a bounded operator on $L^1\left(\mathbb{R^+} \right)$. Now if we suppose that the function $\alpha$ is constant with respect to the second argument, i.e. $\alpha(t,s)=\gamma(t)$ for all $t,s\geq0$, then one can see that $B$ is a finite rank operator and thus compact.</p> <p>Now in the general case ($\alpha$ not constant with respect to the second argument), can we say that $B$ is a compact operator? How can we prove this if it is true? Is there a reference that deals with the compactness of such operators?</p>
PhoemueX
151,552
<p>In general, such an operator need not be compact. Indeed, let $g \in L^1 ((0,\infty)) $ be arbitrary and define $$ \alpha (t,s) = g (t) \sum_{n=1 }^\infty 1_{[n,n+1)}(s) e^{2\pi i n t} $$ and consider the sequence $f_n = 1_{[n,n+1)} $ which is bounded in $L^1$. Then $B f_n (t) = g (t) e^{2\pi i n t} $, which converges weakly in $L^1$ to $0$, but has norm $\|B f_n \|_1 = \| g\|_1$ for all $n $.</p> <p>Hence, $(B f_n) _n $ does not have a convergent subsequence, so that $B $ is not compact.</p> <hr> <p>As an extension of the answer above, here is a <strong>complete characterization</strong> of those functions $\alpha$ which yield a compact operator $B$.</p> <p>In the following, I will assume that $\alpha$ is measurable with respect to the Borel-$\sigma$-algebra on $\left(0,\infty\right)\times\left(0,\infty\right)$. I claim that the operator $B$ is compact if and only if there is a null-set $N\subset\left(0,\infty\right)$ such that the set $$ K_{0}:=\left\{ \alpha\left(\cdot,s\right)\,:\, s\in\left(0,\infty\right)\setminus N\right\} \subset L^{1}\left(\left(0,\infty\right)\right) $$ is totally bounded, i.e., $K:=\overline{K_{0}}\subset L^{1}\left(\left(0,\infty\right)\right)$ is compact. Once this characterization is known, you can use the result linked in the answer by @DavideGiraudo to obtain a further simplification.</p> <p>Indeed, the assumption $|\alpha (t,s)| \leq \beta (t) $ ensures that $K_0$ always satisfies the 1-tightness condition (condition (ii) of the paper by Hanche-Olsen and Holden) and is bounded (condition (i)). Finally, if we extend each $\alpha (\cdot , s) $ to all of $\Bbb {R} $ by setting it to zero outside $(0,\infty) $, then condition (iii) becomes $$ \inf_{\delta &gt;0} \sup_{|y| &lt; \delta} \mathrm {esssup}_{s \in (0,\infty)} \int_{\Bbb {R}} |\alpha (t+y, s) - \alpha (t,s)| dt = 0. 
\qquad (\dagger) $$ The condition given by @DavideGiraudo implies this (more general) condition as follows:</p> <p>\begin{eqnarray*} &amp; &amp; \int_{\mathbb{R}}\left|\alpha\left(t+y,s\right)-\alpha\left(t,s\right)\right|dt\\ &amp; = &amp; \int_{-\delta}^{0}\left|\alpha\left(t+y,s\right)\right|dt+\int_{0}^{R}\left|\alpha\left(t+y,s\right)-\alpha\left(t,s\right)\right|dt+\int_{R}^{\infty}\left|\alpha\left(t+y,s\right)-\alpha\left(t,s\right)\right|dt\\ &amp; \leq &amp; \int_{-\delta}^{0}\beta\left(t+y\right)dt+R\cdot\sup_{\left|t-t'\right|&lt;\delta}\sup_{s\geq0}\left|\alpha\left(t,s\right)-\alpha\left(t',s\right)\right|+2\int_{R}^{\infty}\beta\left(t\right)dt\\ \left({\scriptstyle \text{for }R\text{ large enough}}\right) &amp; &lt; &amp; \int_{y-\delta}^{y}\beta\left(t\right)dt+R\cdot\sup_{\left|t-t'\right|&lt;\delta}\sup_{s\geq0}\left|\alpha\left(t,s\right)-\alpha\left(t',s\right)\right|+2\varepsilon\\ \left({\scriptstyle \text{for }\delta=\delta\left(R\right)\text{ small enough}}\right) &amp; \leq &amp; \int_{-2\delta}^{\delta}\beta\left(t\right)dt+3\varepsilon\\ \left({\scriptstyle \text{for }\delta\text{ even smaller}}\right) &amp; \leq &amp; 4\varepsilon. \end{eqnarray*} Using a similar (but easier) argument, one can also show that it suffices to restrict the integral in $(\dagger)$ to $\int_0^\infty$ instead of $\int_{\Bbb{R}}$.</p> <hr> <p>It remains to prove the claimed equivalence from above. Let us show both implications separately.</p> <p>"$\Rightarrow$": Here, we assume that $B$ is compact. Hence, the set $L:=\overline{B\left(B_{1}\left(0\right)\right)}\subset L^{1}\left(\left(0,\infty\right)\right)$ is compact, where $B_{1}\left(0\right)\subset L^{1}\left(\left(0,\infty\right)\right)$ denotes the closed unit ball. It is not hard to see that the map $$ \alpha:\left(0,\infty\right)\to L^{1}\left(\left(0,\infty\right)\right),s\mapsto\alpha\left(\cdot,s\right) $$ is measurable. 
Indeed, $L^{1}\left(\left(0,\infty\right)\right)$ is separable and for arbitrary $g\in L^{\infty}\left(\left(0,\infty\right)\right)\cong\left[L^{1}\left(\left(0,\infty\right)\right)\right]^{\ast}$, the map $$ s\mapsto\left\langle g,\alpha\left(\cdot,s\right)\right\rangle =\int_{0}^{\infty}\alpha\left(t,s\right)\cdot g\left(t\right)dt $$ is measurable as a consequence of the Fubini theorem, since $\left|\alpha\left(t,s\right)\cdot g\left(t\right)\right|\leq\left\Vert g\right\Vert _{L^{\infty}}\cdot\beta\left(t\right)\in L^{1}\left(\left(0,\infty\right)\right)$. Now, measurability of the map $\alpha$ defined above follows from Pettis's theorem (cf. <a href="https://en.wikipedia.org/wiki/Weakly_measurable_function" rel="nofollow">https://en.wikipedia.org/wiki/Weakly_measurable_function</a>).</p> <p>Using this measurability, it is not hard to see that we have $$ Bf=\int_{0}^{\infty}\alpha\left(\cdot,s\right)\cdot f\left(s\right)ds\qquad\forall f\in L^{1}\left(\left(0,\infty\right)\right), $$ where the right-hand side is a Bochner integral. Note that $\alpha\in L^{\infty}\left(\left(0,\infty\right);L^{1}\left(\left(0,\infty\right)\right)\right)\hookrightarrow L_{{\rm loc}}^{1}\left(\left(0,\infty\right);L^{1}\left(\left(0,\infty\right)\right)\right)$, so that the Lebesgue differentiation theorem yields a null-set $N\subset\left(0,\infty\right)$ such that for all $s_{0}\in\left(0,\infty\right)\setminus N$, we have $$ \alpha\left(\cdot,s_{0}\right)=\lim_{\theta\downarrow0}\frac{1}{2\theta}\cdot\int_{s_{0}-\theta}^{s_{0}+\theta}\alpha\left(\cdot,s\right)ds=\lim_{\theta\downarrow0}B\left[\frac{1}{2\theta}\cdot\chi_{\left(s_{0}-\theta,s_{0}+\theta\right)}\right]\in\overline{B\left(B_{1}\left(0\right)\right)}=L. 
$$ All in all, we have shown that the set $K_{0}$ defined above satisfies $K_{0}\subset L$ and is hence totally bounded.</p> <p>To see that the Lebesgue differentiation theorem also applies to vector-valued functions, assume that $\varrho\in L_{{\rm loc}}^{1}\left(\mathbb{R}^{d};X\right)$, where $X$ is a separable Banach space. Let $\left\{ x_{n}\,:\, n\in\mathbb{N}\right\} \subset X$ be dense. For each $n\in\mathbb{N}$, the function $t\mapsto\left\Vert \varrho\left(t\right)-x_{n}\right\Vert $ is in $L_{{\rm loc}}^{1}\left(\mathbb{R}^{d};\mathbb{R}\right)$, so that the usual form of Lebesgue's differentiation theorem yields a null-set $N_{n}\subset\mathbb{R}^{d}$ such that $\left\Vert \varrho\left(t\right)-x_{n}\right\Vert =\lim_{r\downarrow0}\frac{1}{\lambda\left(B_{r}\left(t\right)\right)}\int_{B_{r}\left(t\right)}\left\Vert \varrho\left(s\right)-x_{n}\right\Vert ds$ for all $t\in\mathbb{R}^{d}\setminus N_{n}$. Now, let $N:=\bigcup_{n\in\mathbb{N}}N_{n}$ and let $t\in\mathbb{R}^{d}\setminus N$ be arbitrary and $\varepsilon&gt;0$. There is some $n\in\mathbb{N}$ with $\left\Vert \varrho\left(t\right)-x_{n}\right\Vert &lt;\varepsilon$. 
Hence, \begin{eqnarray*} &amp; &amp; \frac{1}{\lambda\left(B_{r}\left(t\right)\right)}\int_{B_{r}\left(t\right)}\left\Vert \varrho\left(t\right)-\varrho\left(s\right)\right\Vert ds\\ &amp; \leq &amp; \frac{1}{\lambda\left(B_{r}\left(t\right)\right)}\left[\int_{B_{r}\left(t\right)}\left\Vert \varrho\left(t\right)-x_{n}\right\Vert ds+\int_{B_{r}\left(t\right)}\left\Vert x_{n}-\varrho\left(s\right)\right\Vert ds\right]\\ &amp; \leq &amp; \varepsilon+\frac{1}{\lambda\left(B_{r}\left(t\right)\right)}\int_{B_{r}\left(t\right)}\left\Vert x_{n}-\varrho\left(s\right)\right\Vert ds\xrightarrow[r\downarrow0]{}\varepsilon+\left\Vert \varrho\left(t\right)-x_{n}\right\Vert &lt;2\varepsilon, \end{eqnarray*} which shows $\frac{1}{\lambda\left(B_{r}\left(t\right)\right)}\int_{B_{r}\left(t\right)}\left\Vert \varrho\left(t\right)-\varrho\left(s\right)\right\Vert ds\xrightarrow[r\downarrow0]{}0$ and in particular $\varrho\left(t\right)=\lim_{r\downarrow0}\frac{1}{\lambda\left(B_{r}\left(t\right)\right)}\int_{B_{r}\left(t\right)}\varrho\left(s\right)ds$.</p> <p>"$\Leftarrow$": Here, we assume that there is a null-set $N\subset\left(0,\infty\right)$ such that the set $K_{0}$ from above is totally bounded, so that $K=\overline{K_{0}}$ is compact. Hence, so is $K_{1}:=\left\{ \theta\cdot f\,:\,\left|\theta\right|\leq1\text{ and }f\in K\right\} $, since $\mathbb{K}\times L^{1}\left(\left(0,\infty\right)\right)\to L^{1}\left(\left(0,\infty\right)\right),\left(\theta,f\right)\mapsto\theta\cdot f$ is continuous, where $\mathbb{K}\in\left\{ \mathbb{R},\mathbb{C}\right\} $. Hence, also the closed convex hull $K_{2}:=\overline{{\rm co}\, K_{1}}\subset L^{1}\left(\left(0,\infty\right)\right)$ is compact, see e.g. Theorem 5.35 in the book "Infinite Dimensional Analysis" by Aliprantis and Border.</p> <p>I claim that $B$ maps the closed unit ball $B_{1}\left(0\right)\subset L^{1}\left(\left(0,\infty\right)\right)$ into $K_{2}$. This is clearly sufficient for compactness of $B$. 
Assume that this fails. Then there is some $f\in B_{1}\left(0\right)$ such that $g:=Bf\notin K_{2}$. Since $K_{2}$ is compact and convex, as is $\left\{ g\right\} $, the "Strong Separation Hyperplane Theorem" (Theorem 5.79 from the same book) yields a nonzero, continuous linear functional $\varphi\in\left[L^{1}\left(\left(0,\infty\right)\right)\right]^{\ast}$ which strongly separates $K_{2}$ and $\left\{ g\right\} $. $\varphi$ is given by integration against some function $h\in L^{\infty}\left(\left(0,\infty\right)\right)$. Strong separation means that there is some $\theta\in\mathbb{R}$ and $\varepsilon&gt;0$ satisfying $\varphi\left(g\right)\geq\theta+\varepsilon$ and $\varphi\left(F\right)\leq\theta$ for all $F\in K_{2}$. Note that $0\in K_{1}\subset K_{2}$, so that $0=\varphi\left(0\right)\leq\theta$.</p> <p>Now note \begin{eqnarray*} \theta+\varepsilon &amp; \leq &amp; \varphi\left(g\right)=\varphi\left(Bf\right)\\ &amp; = &amp; \int_{0}^{\infty}h\left(t\right)\cdot\int_{0}^{\infty}\alpha\left(t,s\right)\cdot f\left(s\right)ds\, dt\\ \left(\text{Fubini}\right) &amp; = &amp; \int_{0}^{\infty}f\left(s\right)\cdot\int_{0}^{\infty}\alpha\left(t,s\right)\cdot h\left(t\right)dt\, ds\\ &amp; = &amp; \int_{\left\{ s\,:\, f\left(s\right)\neq0\right\} }\left|f\left(s\right)\right|\cdot\int_{0}^{\infty}\alpha\left(t,s\right)\cdot\frac{f\left(s\right)}{\left|f\left(s\right)\right|}\cdot h\left(t\right)dt\, ds\\ &amp; = &amp; \int_{\left\{ s\,:\, f\left(s\right)\neq0\right\} }\left|f\left(s\right)\right|\varphi\left(\underbrace{\alpha\left(\cdot,s\right)\cdot\frac{f\left(s\right)}{\left|f\left(s\right)\right|}}_{\in K_{1}\subset K_{2}\text{ for }s\in\left(0,\infty\right)\setminus N}\right)\, ds\\ &amp; \leq &amp; \int_{\left\{ s\,:\, f\left(s\right)\neq0\right\} }\left|f\left(s\right)\right|\cdot\theta\, ds\\ &amp; = &amp; \theta\cdot\left\Vert f\right\Vert _{L^{1}}\\ \left(\text{since }\theta\geq0\right) &amp; \leq &amp; \theta, \end{eqnarray*} a contradiction. 
Hence, $B\left(B_{1}\left(0\right)\right)\subset K_{2}$, as desired.</p>
4,394,676
<p>Solve</p> <p><span class="math-container">$$\frac{dy}{dx}=\cos(x-y)$$</span></p> <p>So I know I need to make the substitution <span class="math-container">$u=x-y$</span> but then what's <span class="math-container">$du$</span>, is it <span class="math-container">$du=dx-dy$</span>?</p> <p>Or do I rewrite <span class="math-container">$$\frac{dy}{dx}=\frac{dy}{du}\cdot\frac{du}{dx}$$</span></p> <p>Really stuck on this one.</p>
PrincessEev
597,568
<p>Don't overthink things. In this context, <span class="math-container">$y$</span> is a function of <span class="math-container">$x$</span>; finding <span class="math-container">$du$</span> amounts to first finding <span class="math-container">$u'$</span>, and clearly</p> <p><span class="math-container">$$u(x) = x - y(x) \implies \frac{du}{dx} = \frac{d}{dx} x - \frac{d}{dx} y(x) = 1 - y'$$</span></p> <p>and therefore</p> <p><span class="math-container">$$du = (1 - y') dx$$</span></p>
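One can numerically confirm that the substitution is consistent: integrating the original ODE and the substituted one should give $u(x)=x-y(x)$. A minimal RK4 sketch (the step count and the test value $y(0)=1$ are arbitrary choices of ours):

```python
import math

def rk4(f, x0, y0, x1, n=1000):
    """Classical Runge-Kutta integration of y' = f(x, y) from x0 to x1."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# original equation y' = cos(x - y), with test value y(0) = 1
y1 = rk4(lambda x, y: math.cos(x - y), 0.0, 1.0, 1.0)
# substituted equation u' = 1 - cos(u), with u(0) = 0 - y(0) = -1
u1 = rk4(lambda x, u: 1 - math.cos(u), 0.0, -1.0, 1.0)

# consistency of the substitution u = x - y at x = 1
assert abs((1.0 - y1) - u1) < 1e-8
```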
238,970
<p>Recently I've come across 'tetration' in my studies of math, and I've become intrigued by how these can be evaluated when the "tetration number" is not whole. For those who do not know, tetration is the next in the sequence of iterated operations (the first three being addition, multiplication, and exponentiation, while the operation after tetration is pentation).</p> <p>As an example, 2 with a tetration number of 2 is equal to $$2^2$$ 3 with a tetration number of 3 is equal to $$3^{3^3}$$ and so forth.</p> <p>My question is simply, or maybe not so simply: what is the value of a number "raised" to a fractional tetration number? What would the value of 3 with a tetration number of 4/3 be?</p> <p>Thanks for anyone's insight</p>
Douglas Shamlin
453,618
<p>This may be a possible answer.</p> <p>Let $f(x)=\log_{10}(x+1)$ and $g(x)=10^x-1$ (the inverse function of $f$).<br> Then let $f^n(x) = f(f(\cdots(f(x))\cdots))$ with $n$ copies of $f$, and similarly for $g$.</p> <p>$$^{x+2}10\approx\lim_{n\to \infty} g^n(f^n(10^{10})\cdot(\ln10)^x)$$</p> <p>The values behave fairly well for positive $x$-values. With $x=0.5$ and $n=40$, I got $^{2.5}10\approx4.106483157\times10^{294}$. With $x=1$ and $n=40$, I got $^310\approx9.881444237\times10^{9,999,999,999}$ (not exact because I only did 40 iterations).</p>
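The answer's formula can be re-computed directly in Python, provided `log1p`/`expm1` are used so that precision survives the iterations near the fixed point at $0$; the function name `tet10` and the iteration count are our choices:

```python
import math

LN10 = math.log(10.0)

def f(x):
    # log10(x + 1), accurate near x = 0 via log1p
    return math.log1p(x) / LN10

def g(x):
    # 10**x - 1, accurate near x = 0 via expm1 (inverse of f)
    return math.expm1(x * LN10)

def tet10(height, n=40):
    """Approximate 10 tetrated to `height` via g^n(f^n(10^10) * (ln 10)^x)."""
    x = height - 2
    v = 1e10
    for _ in range(n):
        v = f(v)
    v *= LN10 ** x
    for _ in range(n):
        v = g(v)
    return v

# height 2 must recover ^2 10 = 10^10 (f and g are inverses, so x = 0 is exact)
assert abs(tet10(2.0) - 1e10) < 1.0
# height 2.5 reproduces the answer's reported value of about 4.106e294
assert 294 < math.log10(tet10(2.5)) < 295
```

Note that $^{2.5}10\approx4.1\times10^{294}$ still fits in a double, which is why this particular value can be reproduced without working in logarithms.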
238,970
<p>Recently I've come across 'tetration' in my studies of math, and I've become intrigued by how these can be evaluated when the "tetration number" is not whole. For those who do not know, tetration is the next in the sequence of iterated operations (the first three being addition, multiplication, and exponentiation, while the operation after tetration is pentation).</p> <p>As an example, 2 with a tetration number of 2 is equal to $$2^2$$ 3 with a tetration number of 3 is equal to $$3^{3^3}$$ and so forth.</p> <p>My question is simply, or maybe not so simply: what is the value of a number "raised" to a fractional tetration number? What would the value of 3 with a tetration number of 4/3 be?</p> <p>Thanks for anyone's insight</p>
Vessel
714,488
<p>We want to find a solution for <span class="math-container">$z$</span> within the following equation: <span class="math-container">$$ _{}^yx=z $$</span> Taking the super logarithm (slog) of base <span class="math-container">$x$</span> on both sides gives: <span class="math-container">$$ y=\text{slog} _x(z) $$</span> <a href="https://en.m.wikipedia.org/wiki/Super-logarithm#Quadratic_approximation" rel="nofollow noreferrer">Wikipedia</a> shows a quadratic approximation of the super logarithm as follows: <span class="math-container">$$ \text{slog}_a(b)\approx\begin{cases}\text{slog}_a(a^b)-1\text{, if }b\leq 0\\ -1+\frac{2\ln a}{1+\ln a}b+\frac{1-\ln a}{1+\ln a}b^2\text{, if }0&lt;b\leq 1\\ \text{slog}_a(\log _a(b))\text{, if }b&gt;1\end{cases} $$</span> Assuming that the condition <span class="math-container">$0&lt;z\leq 1$</span> is true, our equation then becomes: <span class="math-container">$$ y\approx -1+\frac{2\ln x}{1+\ln x}z+\frac{1-\ln x}{1+\ln x}z^2 $$</span> Rearranging gives: <span class="math-container">$$ \frac{1-\ln x}{1+\ln x}z^2+\frac{2\ln x}{1+\ln x}z-y-1\approx 0 $$</span> Using the quadratic formula and simplifying gives the final result of: <span class="math-container">$$ z\approx1+\frac{1\mp\sqrt{1+y\left(1-\ln ^2x\right)} }{\ln x -1} $$</span> Again, assuming that the following condition is true: <span class="math-container">$$ 0&lt;z\leq 1\text{.}\\ \text{(I would greatly appreciate if someone found boundaries for $x$ and $y$ to satisfy this condition)} $$</span></p>
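A round-trip check of the derivation above: feeding $z$ through the quadratic approximation of $\text{slog}_x$ and back through the derived formula should recover $z$ up to floating point. The helper names are ours; in these tests the top (minus) sign of the $\mp$ is the branch that works for $0<z\le 1$:

```python
import math

def slog_quadratic(z, x):
    """Wikipedia's quadratic approximation of slog_x(z), valid for 0 < z <= 1."""
    L = math.log(x)
    return -1 + (2 * L / (1 + L)) * z + ((1 - L) / (1 + L)) * z * z

def z_from_y(y, x):
    """Inverse via the quadratic formula, taking the minus branch."""
    L = math.log(x)
    return 1 + (1 - math.sqrt(1 + y * (1 - L * L))) / (L - 1)

# round trip: z -> y -> z should be the identity up to floating point
for x in (2.0, 3.0, 10.0):          # avoid x = e, where ln(x) - 1 = 0
    for z in (0.1, 0.5, 0.9, 1.0):
        assert abs(z_from_y(slog_quadratic(z, x), x) - z) < 1e-9
```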
1,072,639
<p>A function is <b>bijective</b> if it is both <b>surjective</b> and <strong>injective</strong>. Is there a term for when a function is both <strong>not surjective</strong> and <strong>not injective</strong>?</p>
Ilmari Karonen
9,602
<p>As far as I know, there isn't. The concept of a "non-surjective and non-injective function" just doesn't generally arise often enough to need a special term.</p>
2,329,751
<p>I have this definition: $f:\mathbb{R}^n \to \mathbb{R}^m$ is differentiable at $a\in\mathbb{R}^n$ if there exists a linear transformation $\mu:\mathbb{R}^n\to\mathbb{R}^m$ such that</p> <p>$\lim_{h \to 0} \frac{|f(a+h)-f(a)-\mu(h)|}{|h|} = 0$.</p> <p>My questions are: what is the linear transformation $\mu(h)$ for? What does it mean, and where does it come from? Why is it necessary?</p> <p>Can anyone explain the definition to me a bit better? Thanks</p>
Horstenson
60,224
<p>The linear map $\mu$ is the derivative of $f$ at $a$. It's the best linear approximation of $f$ near $a$.</p> <p>For a detailed answer look <a href="https://math.stackexchange.com/questions/1784262/how-is-the-derivative-truly-literally-the-best-linear-approximation-near-a-po">here</a>.</p>
2,329,751
<p>I have this definition: $f:\mathbb{R}^n \to \mathbb{R}^m$ is differentiable at $a\in\mathbb{R}^n$ if there exists a linear transformation $\mu:\mathbb{R}^n\to\mathbb{R}^m$ such that</p> <p>$\lim_{h \to 0} \frac{|f(a+h)-f(a)-\mu(h)|}{|h|} = 0$.</p> <p>My questions are: what is the linear transformation $\mu(h)$ for? What does it mean, and where does it come from? Why is it necessary?</p> <p>Can anyone explain the definition to me a bit better? Thanks</p>
Christian Blatter
1,303
<p>One can talk about these things in a denominator-free way. The essential point is the following: </p> <p>Assume that a point of interest $p$ in the domain of $f$ is given, and you want to know how the values of $f$ behave in the immediate neighborhood of $p$. If $f$ is a nice function then moving away by $h$ from $p$ results in an increment $f(p+h)-f(p)$ which is <strong>in first approximation a linear function</strong> of the displacement vector $h$: $$f(p+h)-f(p)\approx Ah\qquad(h\to0)\ .\tag{1}$$ Such a statement only has real content when the error implied by the $\approx$ sign is for $|h|\ll1$ <strong>essentially smaller</strong> than the term $Ah$ appearing in the formula $(1)$. Now in most cases $Ah$ will be of order of magnitude $|h|$. We therefore require that $$f(p+h)-f(p)=Ah+o\bigl(|h|\bigr)\qquad(h\to0)\ .\tag{2}$$ If $(2)$ holds in a given situation then $f$ is called <em>differentiable</em> at $p$, and the linear map $A$ (it is uniquely determined) is called the <em>derivative</em> or <em>tangent map</em> of $f$ at $p$.</p>
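For a concrete quadratic map such as $f(x,y)=(x^2,xy)$ (our example, not from the answer), the $o(|h|)$ behaviour of the remainder can be observed numerically: along $h=(t,t)$ the ratio $|f(p+h)-f(p)-Ah|/|h|$ works out to exactly $t$, so it vanishes as $h\to 0$:

```python
import math

def f(p):
    # example map f(x, y) = (x^2, x*y)
    x, y = p
    return (x * x, x * y)

def jacobian_apply(p, h):
    # derivative of f at p, as a linear map applied to h
    x, y = p
    h1, h2 = h
    return (2 * x * h1, y * h1 + x * h2)

def remainder_ratio(p, t):
    """|f(p + h) - f(p) - A h| / |h| along h = (t, t)."""
    fp, fph = f(p), f((p[0] + t, p[1] + t))
    ah = jacobian_apply(p, (t, t))
    r1, r2 = fph[0] - fp[0] - ah[0], fph[1] - fp[1] - ah[1]
    return math.hypot(r1, r2) / math.hypot(t, t)

# for this quadratic f the ratio equals t, so it goes to zero with h
p = (1.0, 2.0)
assert abs(remainder_ratio(p, 1e-3) - 1e-3) < 1e-9
assert remainder_ratio(p, 1e-4) < remainder_ratio(p, 1e-3)
```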
109,922
<p>Let $B_n$ denote the group of signed permutations on $n$ letters. Is there a good explanation or understandable way to see why $$ \sum_{w\in B_n}q^{\text{inv}(w)}=(2n)_q(2n-2)_q\cdots(2)_q? $$</p> <p>I've been thinking about it on and off while reading through Taylor's <em>Geometry of the Classical Groups</em>, but don't understand why this identity holds. I appreciate any explanation. Thanks!</p>
Tommy
59,433
<p>There are multiple ways to generalize inv to $B_n$, and the literature is not always consistent. Here is one such generalization (denoted $\mathrm{inv_B}$ for clarity), and a corresponding proof. As in hoyland's answer, the approach is to show a bijection between permutations and inversion sequences, where the corresponding statistic is much simpler to count.</p> <p>Let $\bar w$ be a signed permutation, and $w$ be the same permutation stripped of signs. Let $\sigma_i$ be 1 if the $i$th position in $w$ is signed, 0 if it is unsigned. Let $\mathrm{inv}(w)$ be the standard inversion statistic on unsigned permutations.</p> <p>Define $$\mathrm{inv}_B(\bar w)=\mathrm{inv}(w)+\sum_{i=1}^n i\sigma_i$$</p> <p>For $w\in S_n$, the inversion sequence of $w$ is defined as $a=(a_1,\cdots,a_n)$, where $a_i=|\{0\le j\le i-1\mid w_j&gt;w_i\}|$. It is easy to see that $$|a|\equiv\sum_{i=1}^n a_i=\mathrm{inv}(w).$$ This produces a bijection between permutations and inversion sequences, although I will not prove that here. Now for $\bar w\in B_n$, let the inversion sequence of $\bar w$ be $(b_1,\cdots,b_n)$, where $b_i=a_i+i\sigma_i$. Thus we have $0\le b_i\le 2i-1$. By the way we've defined things it follows that $$|b|\equiv\sum_{i=1}^n b_i=\mathrm{inv_B}(w).$$ It's straightforward to see that the modified inversion sequences are also bijective assuming the original inversion sequences were. Denoting the set of inversion sequences of length $n$ as $I_n$, we have $$\sum_{w\in B_n}q^{\mathrm{inv_B}(w)}=\sum_{b\in I_n}q^{|b|}=\prod_{i=1}^n [2i]_q.$$ I hope this definition of inv is equal, or at least equivalent to the one you were using. </p>
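For small $n$, the identity (with this definition of $\mathrm{inv_B}$) can also be checked by brute-force enumeration; the function names below are ours:

```python
from itertools import permutations, product
from collections import Counter

def inv(w):
    """Number of inversions of a sequence."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def inv_b_distribution(n):
    """Coefficient list of sum over B_n of q^inv_B, by direct enumeration."""
    counts = Counter()
    for w in permutations(range(1, n + 1)):
        for signs in product((0, 1), repeat=n):
            stat = inv(w) + sum((i + 1) * s for i, s in enumerate(signs))
            counts[stat] += 1
    return [counts[d] for d in range(n * n + 1)]   # max statistic is n^2

def q_product(n):
    """Coefficient list of [2]_q [4]_q ... [2n]_q."""
    coeffs = [1]
    for i in range(1, n + 1):
        new = [0] * (len(coeffs) + 2 * i - 1)
        for d, c in enumerate(coeffs):
            for k in range(2 * i):
                new[d + k] += c
        coeffs = new
    return coeffs

assert inv_b_distribution(2) == q_product(2) == [1, 2, 2, 2, 1]
assert inv_b_distribution(3) == q_product(3)
```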
1,820,690
<p>Find two numbers, $A$ and $B$, both smaller than $100$, that have a lowest common multiple of $450$ and a highest common factor of $15$.</p> <p>I know that this involves the formula</p> <p>$A \times B = \mathrm{LCM} \times \mathrm{HCF}$</p> <p>But I don't quite understand the above formula, so I just memorised it, and that is why I can't apply it now. Can anyone explain how this formula is derived? Thanks a lot in advance!</p>
Claude Leibovici
82,404
<p><strong>Hint</strong></p> <p>$$z^{10}-z^5-992=0\implies Z^2-Z-992=0\implies(Z+31)(Z-32)=0$$ using $Z=z^5$.</p> <p>I am sure that you can take it from here.</p>
1,820,690
<p>Find two numbers, $A$ and $B$, both smaller than $100$, that have a lowest common multiple of $450$ and a highest common factor of $15$.</p> <p>I know that this involves the formula</p> <p>$A \times B = \mathrm{LCM} \times \mathrm{HCF}$</p> <p>But I don't quite understand the above formula, so I just memorised it, and that is why I can't apply it now. Can anyone explain how this formula is derived? Thanks a lot in advance!</p>
Jack's wasted life
117,135
<p>The equation is quadratic in $z^5$ with one positive and one negative root. $$ z^5=a\implies z=a^{1\over5}e^{2\pi i{k\over5}}, 0\le k&lt;5\\ \Re z=a^{1\over5}\cos(2\pi k/5) $$ If $a&gt;0,$ we have to make the cosine part negative which leaves us with $k=2,3$.</p> <p>If $a&lt;0,$ we have to make the cosine part positive which leaves us with $k=0,1,4$.</p> <p>So there are $5$ roots with negative real part.</p>
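A numerical confirmation of the count, using the explicit roots of $z^5=32$ and $z^5=-31$ (the factorization from the other answer):

```python
import cmath

# Z^2 - Z - 992 = (Z + 31)(Z - 32), so z^5 = 32 or z^5 = -31
roots = [2 * cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]
roots += [31 ** 0.2 * cmath.exp(1j * cmath.pi * (2 * k + 1) / 5) for k in range(5)]

# every candidate really solves z^10 - z^5 - 992 = 0
assert max(abs(z ** 10 - z ** 5 - 992) for z in roots) < 1e-6

# exactly five roots have negative real part (k = 2, 3 and k = 1, 2, 3)
assert sum(1 for z in roots if z.real < 0) == 5
```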
1,424,913
<p>I am trying to solve the following problem.</p> <blockquote> <p>Let $G$ be a group. If $M, N \subset G$ are such that $x^{-1} M x = M$ and $x^{-1} N x = N$ for all $x \in G$ and $M \cap N = \{1\}$, prove that $m n = n m$ for all $m \in M, n \in N$.</p> </blockquote> <p>I have already proven it for the specific case in which $M, N$ are subgroups of $G$ (see proof below). However, I can't prove it without this hypothesis. I think $M, N$ must be subgroups for the result to hold, but I can't find a counterexample to show this as well.</p> <p>Any help with a proof or counterexample is much appreciated.</p> <hr> <p>Current proof outline:</p> <blockquote> <p>$$ (m \cdot n)\cdot (n \cdot m)^{-1} = (m \cdot n)\cdot (m^{-1} \cdot n^{-1}) =: k$$ $ k = (m \cdot n \cdot m^{-1}) \cdot n^{-1} \in N$, because $(m \cdot n \cdot m^{-1}) \in N $ and $n^{-1} \in N$ (and $N$ is subgroup).</p> <p>$ k = m \cdot (n \cdot m^{-1} \cdot n^{-1}) \in M$, because $m \in M $ and $(n \cdot m^{-1} \cdot n^{-1}) \in M$ (and $M$ is subgroup).</p> <p>So $k \in M \cap N = \{1\}$, which implies $k = 1$ and $$ (m \cdot n)\cdot (n \cdot m)^{-1} = 1 $$ $$ (m \cdot n)\cdot (n \cdot m)^{-1}\cdot (n \cdot m) = 1 \cdot (n \cdot m) $$ $$ m \cdot n = n \cdot m $$</p> </blockquote>
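For the subgroup case, the statement can be sanity-checked on a concrete example such as $G=S_3\times S_3$ with $M=S_3\times\{e\}$ and $N=\{e\}\times S_3$ (a test case of our choosing, not from the question):

```python
from itertools import permutations

# S_3 as permutations of {0, 1, 2}; G = S_3 x S_3 (a concrete test case)
S3 = list(permutations(range(3)))
e = (0, 1, 2)

def mul(p, q):
    # composition (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0, 0, 0]
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

G = [(p, q) for p in S3 for q in S3]
gmul = lambda a, b: (mul(a[0], b[0]), mul(a[1], b[1]))
ginv = lambda a: (inv(a[0]), inv(a[1]))

M = {(p, e) for p in S3}   # S_3 x {e}
N = {(e, q) for q in S3}   # {e} x S_3

# hypotheses: x^{-1} M x = M, x^{-1} N x = N for all x, trivial intersection
assert all({gmul(gmul(ginv(x), m), x) for m in M} == M for x in G)
assert all({gmul(gmul(ginv(x), n), x) for n in N} == N for x in G)
assert M & N == {(e, e)}

# conclusion: mn = nm for all m in M, n in N
assert all(gmul(m, n) == gmul(n, m) for m in M for n in N)
```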
David Quinn
187,299
<p>The minimum of $\cosh x$ occurs when $x=0$, so in this case you require $\frac{xy}{2}=0$, from which you can conclude that either $x=0$ or $y=0$ (or both).</p>
88,159
<p>Define $\omega=e^{i \pi /4}$. Is there an elegant way of showing that $20^{1/4} \omega^3$ is not inside $\mathbb{Q}(20^{1/4} \omega)$?</p> <p>The way I am doing it is by observing that $20^{1/4} \omega$ is algebraic over $\mathbb{Q}$ with minimal polynomial $x^4+20$, and I assume that $20^{1/4} \omega^3$ is in the span over $\mathbb{Q}$ of $1, \alpha, \alpha^2,\alpha^3$, with $\alpha=20^{1/4} \omega$. But it is very messy to arrive at a contradiction.</p> <p>Thanks.</p>
pki
17,464
<p>Here's one way that I can think of. We want to show that $\omega^2\not\in \mathbb{Q}(20^{1/4}\omega)$. For this, it suffices to show that $x^4+20$ is irreducible over $\mathbb{Q}(i)=\mathbb{Q}(\omega^2)$.</p> <p>Any factorization in $\mathbb{Q}(i)$ will actually be a factorization in the Gaussian integers $\mathbb{Z}[i]$ by Gauss's lemma. Assume that $x^4+20$ is not irreducible over $\mathbb{Z}[i]$. Since no root of $x^4+20$ is contained in $\mathbb{Z}[i]$, the polynomial would need to factorize as a product of quadratic polynomials. But the only quadratic polynomials dividing it in $\mathbb{C}[x]$, hence in $\mathbb{Z}[i][x]$, are $x^2\pm i\sqrt{20}$, which are not in $\mathbb{Z}[i][x]$, so it must be irreducible.</p> <p>If $\omega^2=i\in \mathbb{Q}(20^{1/4}\omega)$, then it follows by the irreducibility of $x^4+20$ that $\mathbb{Q}(20^{1/4}\omega)/\mathbb{Q}(i)$ is of degree $4$, hence $\mathbb{Q}(20^{1/4}\omega)/\mathbb{Q}$ is of degree $8$, which is a contradiction.</p>
550,441
<p>Say I roll a 6-sided die until the sum of the rolls exceeds $X$. What is the expected number of rolls, $E(\text{rolls})$?</p>
mjqxxxx
5,546
<p>Let $L(x)$ be the expected number of rolls to reach $x$. Clearly $L(x)=0$ for $x\le 0$, and for $x\ge 1$ the expected number is one more than the expected number remaining after the next roll. That is, $$ L(x)=I_{+}(x) + \frac{1}{6}\sum_{k=1}^{6}L(x-k), $$ where $I_{+}(x)$ is $1$ for positive $x$ and zero otherwise. Now consider the <a href="http://en.wikipedia.org/wiki/Z-transform" rel="nofollow">Z-transform</a> of $L(x)$, defined by $\tilde{L}(z)=\sum_{x}L(x)z^{-x}$. Multiplying both sides of the previous equation by $z^{-x}$ and summing over $x$ gives $$ \tilde{L}(z)=\sum_{x=1}^{\infty}z^{-x}+\tilde{L}(z)\left(\frac{1}{6}\sum_{k=1}^{6}z^{-k}\right)=\frac{1}{z-1}+\frac{z^5+z^4+z^3+z^2+z+1}{6z^6}\tilde{L}(z), $$ or $$ \tilde{L}(z)=\frac{6z^6}{6z^7-7z^6+1}. $$ The inverse Z-transform is then $$ L(x)=\frac{1}{2\pi i}\oint\tilde{L}(z)z^{x-1}dz, $$ or the sum of the residues of ${6z^6}/(6z^7-7z^6+1)$. It turns out that $\tilde{L}(z)$ has a second-order pole at $z=1$ and simple poles at five other locations: $$ \tilde{L}(z)= \frac{\frac{2}{7} + \frac{16}{21}(z-1)}{(z-1)^2} + \sum_{j=1}^{5}\frac{R_{j}}{z-\omega_j}, $$ where $R_j$ are the residues and $\omega_j$ are the locations of the respective poles (the largest $|\omega_j|$ is about $0.73$, and the residues are $0.1$ or so). The result is then $$ L(x)=\frac{2}{7}x + \frac{10}{21} + \sum_{j=1}^{5}R_{j}\omega_j^{x-1}. $$ The corrections to the leading (linear and constant) terms are exponentially decaying with $x$; for $x=10$, they sum to about $-0.01$.</p>
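The exact recursion and the leading terms $\frac{2}{7}x+\frac{10}{21}$ can be compared numerically; exact rational arithmetic avoids roundoff in the recursion (function name is ours):

```python
from fractions import Fraction

def expected_rolls(x_max):
    """L(x) = 1 + (1/6) * sum_{k=1}^{6} L(x - k), with L(x) = 0 for x <= 0."""
    L = {x: Fraction(0) for x in range(-5, 1)}
    for x in range(1, x_max + 1):
        L[x] = 1 + Fraction(1, 6) * sum(L[x - k] for k in range(1, 7))
    return L

L = expected_rolls(30)
assert L[1] == 1                       # one roll always reaches a total of 1
assert L[2] == Fraction(7, 6)

# leading terms 2x/7 + 10/21; the correction decays exponentially in x
assert abs(float(L[10]) - (2 * 10 / 7 + 10 / 21)) < 0.02
assert abs(float(L[30]) - (2 * 30 / 7 + 10 / 21)) < 1e-3
```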
3,931,246
<p>I want to find the range of <span class="math-container">$x$</span> on which <span class="math-container">$f$</span> is decreasing, where <span class="math-container">$$f(x)=\int_0^{x^2-x}e^{t^2-1}dt$$</span></p> <p>Let <span class="math-container">$u=x^2-x$</span>, then <span class="math-container">$\frac{du}{dx}=2x-1$</span>, then <span class="math-container">$$f'(x)=\frac{d}{dx}\int_0^{x^2-x}e^{t^2-1}dt=\frac{du}{dx}\frac{d}{du}\int_0^{x^2-x}e^{t^2-1}dt=(2x-1)e^{x^4-2x^3+x^2-1}$$</span></p> <p>Since <span class="math-container">$e^{x^4-2x^3+x^2-1}&gt;0$</span> for all <span class="math-container">$x\in \Bbb R$</span> and <span class="math-container">$2x-1&lt;0\iff x&lt;\frac{1}{2}$</span>, <span class="math-container">$f$</span> is decreasing on <span class="math-container">$(-\infty,\frac{1}{2})$</span>.</p> <p>Furthermore, since <span class="math-container">$f$</span> is increasing on <span class="math-container">$(\frac{1}{2},\infty)$</span>, <span class="math-container">$f$</span> is differentiable at <span class="math-container">$x=\frac{1}{2}$</span>, and <span class="math-container">$f'(\frac{1}{2})=0$</span>, <span class="math-container">$f$</span> attains its minimum value at <span class="math-container">$x=\frac{1}{2}$</span>.</p> <p>Am I right?</p>
Z Ahmed
671,540
<p><span class="math-container">$f(x)=\int_{0}^{x^2-x} e^{t^2-1} dt \implies f'(x)= (2x-1) e^{(x^2-x)^2-1} &gt;0 ~\text{if}~ x&gt;1/2$</span>. Hence <span class="math-container">$f(x)$</span> is increasing for <span class="math-container">$x&gt;1/2$</span> and decreasing for <span class="math-container">$x&lt;1/2$</span>. Yes, you are right: there is a minimum at <span class="math-container">$x=1/2$</span>. This one point does not matter; you may also say that <span class="math-container">$f(x)$</span> is increasing on <span class="math-container">$[1/2,\infty)$</span> and decreasing on <span class="math-container">$(-\infty, 1/2]$</span>.</p> <p>Note: whether a function is increasing or decreasing is decided by two points (not one). For instance, if <span class="math-container">$f(x)$</span> is decreasing, then <span class="math-container">$x_1&gt;x_2 \implies f(x_1) &lt; f(x_2)$</span>.</p>
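A numeric cross-check of $f'$ using composite Simpson quadrature and central differences (helper names ours): at $x=1$ the formula gives $f'(1)=e^{-1}$, and since $u=x^2-x$ is symmetric about $x=\tfrac12$, the difference quotient vanishes there:

```python
import math

def simpson(fn, a, b, n=200):
    """Composite Simpson's rule; works for b < a as an oriented integral."""
    h = (b - a) / n
    s = fn(a) + fn(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(a + i * h)
    return s * h / 3

integrand = lambda t: math.exp(t * t - 1)

def F(x):
    # f(x) = integral of e^{t^2 - 1} from 0 to x^2 - x
    return simpson(integrand, 0.0, x * x - x)

h = 1e-5
# central difference at x = 1 vs. the formula f'(1) = (2*1 - 1) * e^{0 - 1}
assert abs((F(1 + h) - F(1 - h)) / (2 * h) - math.exp(-1)) < 1e-5
# the derivative vanishes at the critical point x = 1/2
assert abs((F(0.5 + h) - F(0.5 - h)) / (2 * h)) < 1e-9
```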
4,463,559
<p>Let <span class="math-container">$K \subset \mathbb{R}^n\times [a,b]$</span> a compact subset. For each <span class="math-container">$t \in [a,b]$</span>, let <span class="math-container">$K_t= \{x \in \mathbb{R}^n : (x,t) \in K\}$</span>. Suppose that, for all <span class="math-container">$t \in [a,b]$</span>, <span class="math-container">$K_t$</span> has measure zero in <span class="math-container">$\mathbb{R}^n$</span>. Thus, <span class="math-container">$K$</span> has measure zero in <span class="math-container">$\mathbb{R}^{n+1}$</span>.</p> <p>The definition of measure zero in <span class="math-container">$\mathbb{R}^n$</span> here is:</p> <p><em><span class="math-container">$A$</span> has measure zero if given <span class="math-container">$\epsilon &gt; 0$</span>, there is <span class="math-container">$\{Q_i\}_{i \in \mathbb{N}}$</span> rectangles in <span class="math-container">$\mathbb{R}^n$</span> such that <span class="math-container">$$A \subset \bigcup_{i \in \mathbb{N}} Q_i,$$</span> and, <span class="math-container">$$\sum_{i \in \mathbb{N}}v(Q_i) &lt; \epsilon,$$</span> where <span class="math-container">$v(Q) = (b_1 - a_1) \cdots (b_n - a_n)$</span>, if <span class="math-container">$Q = [a_1,b_1]\times \cdots \times [a_n, b_n]$</span>. The previous summation is called total volume of the cover.</em></p> <p>There are some interesting points here:</p> <ol> <li>If <span class="math-container">$K_t$</span> has measure zero in <span class="math-container">$\mathbb{R}^n$</span>, then <span class="math-container">$K_t \times \{t\}$</span> has measure zero in <span class="math-container">$\mathbb{R}^{n+1}$</span>.</li> <li>For every <span class="math-container">$t \in [a,b]$</span>, <span class="math-container">$K_t$</span> is compact. 
Then, <span class="math-container">$K_t \times \{t\}$</span> is compact.</li> <li><span class="math-container">$K = \bigcup_{t \in [a,b]}K_t \times \{t\}$</span>.</li> </ol> <p>Then, I tried to take a finite open cover (of rectangles) of <span class="math-container">$K_t \times \{t\}$</span> whose total volume is less than <span class="math-container">$\epsilon '$</span>. Then, use it to construct an open cover of <span class="math-container">$K$</span>, and use Lindelöf's theorem to get a countable cover of <span class="math-container">$K$</span>. But I'm stuck because I can't prove that the total volume is less than <span class="math-container">$\epsilon$</span>.</p>
Mason
752,243
<p>By Tonelli's theorem, <span class="math-container">$$m(K) = \int_{\mathbb{R}}\int_{\mathbb{R}^n}1_K(x, t)\,dx\,dt = \int_{\mathbb{R}}m(K_t)\,dt.$$</span> Hence <span class="math-container">$m(K) = 0$</span> if and only if <span class="math-container">$m(K_t) = 0$</span> for almost every <span class="math-container">$t \in \mathbb{R}$</span>.</p>
947,290
<p>In a cyclic group of order 8, show that every element has a cube root. So for each $a\in G$ there is an element $x \in G$ with $x^3=a.$</p> <p>Also show in general that if $G=\langle a\rangle$ is a cyclic group of order $m$ and $(k,m)=1$, then each element in $G$ has a $k$th root. What element will $a^k$ generate? Use this to express any element as a $k$th power.</p> <p>Where do I begin? For the first one, is it just through closure essentially? And the second one I'm stuck on. Where do I begin? I know that the gcd of $k$ and $m$ is 1, so $kx+my=1$ with $x,y\in \Bbb Z$. Thank you.</p>
Timbuc
118,527
<p>Since $\;\text{gcd}\;(3,8)=1\;$ , if $\;G=\langle z\rangle\;$ then also $\;G=\langle z^3\rangle\;$ , and from here for each $\;a\in G\;$ there exists $\;k\in\Bbb Z\;$ so that we have</p> <p>$$a=(z^3)^k= (z^k)^3$$</p> <p>Take just $\;x=z^k\;$</p>
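To make the argument concrete, here is a quick computational sanity check (an editorial illustration, not part of the original answer). Writing the cyclic group of order 8 additively as $\Bbb Z/8\Bbb Z$, cubing becomes multiplication by 3, and since $\gcd(3,8)=1$ this map is a bijection, so every element is a cube:

```python
# Cyclic group of order 8, written additively as Z/8Z.
# "Cubing" x in multiplicative notation is 3*x in additive notation.
n, k = 8, 3
cubes = sorted({(k * x) % n for x in range(n)})

# gcd(3, 8) = 1, so multiplication by 3 permutes Z/8Z:
# every element has a cube root.
assert cubes == list(range(n))
```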
57,281
<p><strong>Bug introduced in 10.0 and fixed in 10.3</strong></p> <hr> <p>I'm having trouble calculating the median of a <code>Dataset[]</code> in <em>Mathematica</em> 10.</p> <p>The situation is as follows. Consider a dataset that was defined as follows:</p> <pre><code>dataset = Dataset[{&lt;|"a"-&gt;1,"b"-&gt;2|&gt;,&lt;|"a"-&gt;3,"b"-&gt;4|&gt;}]; </code></pre> <p>The mean and variance of columns <code>a</code> and <code>b</code> can now be calculated by</p> <pre><code>mean = dataset[Mean, {"a","b"}] var = dataset[Variance, {"a","b"}] </code></pre> <p>That works perfectly, but</p> <pre><code>med = dataset[Median, {"a","b"}] </code></pre> <p>returns a <code>Failure[]</code>! Somehow, <code>Median[]</code> is not compatible with a list of associations as its argument while the other functions are.</p> <p>Can someone explain why this happens and maybe help with a solution?</p>
Taliesin Beynon
7,140
<p>Median itself doesn't work on a list of associations:</p> <pre><code>In[9]:= Median[{&lt;|"a" -&gt; 1, "b" -&gt; 2|&gt;, &lt;|"a" -&gt; 3, "b" -&gt; 4|&gt;}] During evaluation of In[9]:= Median::rectn: Rectangular array of real numbers is expected at position 1 in Median[{&lt;|a-&gt;1,b-&gt;2|&gt;,&lt;|a-&gt;3,b-&gt;4|&gt;}]. &gt;&gt; Out[9]= Median[{&lt;|"a" -&gt; 1, "b" -&gt; 2|&gt;, &lt;|"a" -&gt; 3, "b" -&gt; 4|&gt;}] </code></pre>
317,753
<p>I am taking real analysis in university. I find that it is difficult to prove some certain questions. What I want to ask is:</p> <ul> <li>How do we come out with a proof? Do we use some intuitive idea first and then write it down formally?</li> <li>What books do you recommended for an undergraduate who is studying real analysis? Are there any books which explain the motivation of theorems? </li> </ul>
Community
-1
<p><a href="http://rads.stackoverflow.com/amzn/click/0471317160" rel="nofollow">Real Analysis, by Gerald B. Folland (Author).</a></p>
223,631
<p>I'm using NeumannValue boundary conditions for a 3d FEA using NDSolveValue. In one area I have positive flux and in another area I have negative flux. In theory these should balance out (I set the flux inversely proportional to their relative areas) to a net flux of 0, but because of mesh and numerical inaccuracies they don't. Is there a way to constrain total flux = 0 and just set a constant flux for one of my areas?</p> <p>edit: here are my boundary conditions:</p> <pre><code>Subscript[Γ, 1] = NeumannValue[-1, (Abs[x] - 1)^2 + (Abs[y] - 1)^2 &lt; (650/1000)^2 &amp;&amp; z &lt; -0.199 ]; Subscript[Γ, 2] = NeumannValue[4, x^2 + y^2 + (z + 1/5)^2 &lt; (650/1000/2)^2 ]; </code></pre> <p>and my equations:</p> <pre><code>Dcof = 9000 ufun3d = NDSolveValue[ {D[u[t, x, y, z], t] - Dcof Laplacian[u[t, x, y, z], {x, y, z}] == Subscript[Γ, 1] + Subscript[Γ, 2], u[0, x, y, z] == 0}, u, {t, 0, 10 }, {x, y, z} ∈ em]; </code></pre> <p>and my element mesh:</p> <pre><code>a = ImplicitRegion[True, {{x, -1, 1}, {y, -1, 1}, {z, 0, 1}}]; b = Cylinder[{{0, 0, -1/5}, {0, 0, 0}}, (650/1000)/2]; c = Cylinder[{{1, 1, -1/5}, {1, 1, 0}}, 650/1000]; d = Cylinder[{{-1, 1, -1/5}, {-1, 1, 0}}, 650/1000]; e = Cylinder[{{1, -1, -1/5}, {1, -1, 0}}, 650/1000]; f = Cylinder[{{-1, -1, -1/5}, {-1, -1, 0}}, 650/1000]; r = RegionUnion[a,b,c,d,e,f]; boundingbox = ImplicitRegion[True, {{x, -1, 1}, {y, -1, 1}, {z, -1/5, 1}}]; r2 = RegionIntersection[r,boundingbox] em = ToElementMesh[r2]; </code></pre> <p>And this is what my mesh looks like from the bottom up. 
</p> <p><a href="https://i.stack.imgur.com/Q66fl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q66fl.png" alt="enter image description here"></a> edit2: I figured I should add a plot of what I think is "wrong" too.<br> Plotting the diagonal cross section, I'd expect the values to be centered around 0, but they're all negative.</p> <pre><code>ContourPlot[ufun3d[5, xy, xy, z], {xy, -1 , 1 }, {z, -0.2, 1}, ClippingStyle -&gt; Automatic, PlotLegends -&gt; Automatic] </code></pre> <p><a href="https://i.stack.imgur.com/SfwIa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SfwIa.png" alt="enter image description here"></a></p>
Alex Trounev
58,388
<p>We can use mesh of first order for 3D visualization and short time for visibility. We also change boundary conditions:</p> <pre><code>Needs["NDSolve`FEM`"]; a = ImplicitRegion[True, {{x, -1, 1}, {y, -1, 1}, {z, 0, 1}}]; b = Cylinder[{{0, 0, -1/5}, {0, 0, 0}}, (650/1000)/2]; c = Cylinder[{{1, 1, -1/5}, {1, 1, 0}}, 650/1000]; d = Cylinder[{{-1, 1, -1/5}, {-1, 1, 0}}, 650/1000]; e = Cylinder[{{1, -1, -1/5}, {1, -1, 0}}, 650/1000]; f = Cylinder[{{-1, -1, -1/5}, {-1, -1, 0}}, 650/1000]; r = RegionUnion[a, b, c, d, e, f]; boundingbox = ImplicitRegion[True, {{x, -1, 1}, {y, -1, 1}, {z, -1/5, 1}}]; r2 = RegionIntersection[r, boundingbox]; em = ToElementMesh[r2, "MeshOrder" -&gt; 1, MaxCellMeasure -&gt; 10^-4]; Subscript[\[CapitalGamma], 1] = NeumannValue[-1, z == -1/5 &amp;&amp; x^2 + y^2 &gt; (650/1000/2)^2]; Subscript[\[CapitalGamma], 2] = NeumannValue[4, z == -1/5 &amp;&amp; x^2 + y^2 &lt; (650/1000/2)^2]; Dcof = 9000; ufun3d = NDSolveValue[{D[u[t, x, y, z], t] - Dcof Laplacian[u[t, x, y, z], {x, y, z}] == Subscript[\[CapitalGamma], 1] + Subscript[\[CapitalGamma], 2], u[0, x, y, z] == 0}, u, {t, 0, 10^-3}, {x, y, z} \[Element] em]; DensityPlot3D[ ufun3d[1/1000, x, y, z], {x, 0, 1}, {y, 0, 1}, {z, -1, 1}, ColorFunction -&gt; "Rainbow", OpacityFunction -&gt; None, BoxRatios -&gt; {1, 1, 1}, PlotPoints -&gt; 50, Boxed -&gt; False, PlotLegends -&gt; Automatic, Axes -&gt; False] </code></pre> <p><a href="https://i.stack.imgur.com/7Xsjb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Xsjb.png" alt="Figure 1"></a></p> <p>General view of 3D distribution from different points</p> <pre><code>DensityPlot3D[ufun3d[1/1000, x, y, z], {x, y, z} \[Element] em, ColorFunction -&gt; "Rainbow", OpacityFunction -&gt; None, BoxRatios -&gt; Automatic, PlotPoints -&gt; 50, Boxed -&gt; False, Axes -&gt; False] </code></pre> <p><a href="https://i.stack.imgur.com/ATzaU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ATzaU.png" alt="Figure 2"></a></p>
176,340
<p>I am running an iterative routine that I want to export to a file while each iteration is computed, instead of storing everything in memory and then exporting to a file. </p> <p>My solution is to write to an "m" file that saves the values in the usual array format that mathematica understands (e.g. {{2,1},{3,1}} for a 2x2 matrix with the obvious contents). To do that though, I also need to write the "commas" and the brackets "{","}" manually.</p> <p>In any case, here is a sample code that achieves that but in a, quite likely, not very clever, efficient and readable way:</p> <pre><code>sm = 3; rm = 3; prior = 0; SetDirectory[NotebookDirectory[]]; DeleteFile["test.m"] stream = OpenAppend["test.m"]; Do[next = prior + s + r; If[r == 1 &amp;&amp; s == 1, WriteString[stream, "{"]]; If[r == 1, WriteString[stream, "{"]]; WriteString[stream, ToString[next]]; prior = next; If[s &lt; sm, If[r == rm, WriteString[stream, "},"]; prior = 0, WriteString[stream, ","]], If[r == rm, WriteString[stream, "}"], WriteString[stream, ","]]]; If[r == rm &amp;&amp; s == sm, WriteString[stream, "}"]] ;, {s, 1, sm}, {r, 1, sm}] Close[stream] </code></pre> <p>This generates an "m" file that when I open I can immediately process by defining a matrix with the written data to make further analysis later. 
It looks like this for the above code:</p> <p><a href="https://i.stack.imgur.com/Z4l3w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z4l3w.png" alt="enter image description here"></a></p> <p>The problem is that my actual code includes three iterating indices (and the actual expression for the calculation is much more complex), so the situation becomes very complicated with this simple solution (mainly, too many If commands need to be introduced).</p> <p>So, the question is: is there a way to make this code shorter, more elegant, clever, efficient and readable, so that it is easily generalised and debugged?</p> <p>Note that this question is also related to <a href="https://mathematica.stackexchange.com/questions/175967/exporting-result-of-calculation-to-file-at-each-step-of-iterative-routine">this question</a> I asked a few days ago.</p> <p>Thanks.</p>
Bill
18,890
<p>Assuming one row will fit in memory but the matrix will not, use <code>Table</code> to construct each row and supply the necessary delimiters with the minimum calculation, storage and <code>If</code>.</p> <pre><code>sm = 3; rm = 3; SetDirectory[NotebookDirectory[]]; DeleteFile["test.m"]; stream = OpenAppend["test.m"]; WriteString[stream, "{"]; Do[ prior = 0; (*initialization for each row*) WriteString[stream, ToString[Table[prior = prior + s + r, {s, 1, sm}]]]; (*one row*) If[r &lt; rm, WriteString[stream, ",\n"]] (*one row per line*) ,{r, 1, rm} ]; WriteString[stream, "}"]; Close[stream] </code></pre>
526,837
<p>Let $(\Omega, {\cal B}, P )$ be a probability space, $( \mathbb{R}, {\cal R} )$ the usual measurable space of reals and its Borel $\sigma$- algebra, and $X : \Omega \rightarrow \mathbb{R}$ a random variable.</p> <p>The meaning of $ P( X = a) $ is intuitive when $X$ is a discrete random variable, because it's the definition of the probability mass function. I am not sure if my question makes sense, but how should I think of $ P( X = a) $ when $X$ is a continuous random variable? </p>
Madrit Zhaku
34,867
<p>$$ \int\frac{-4x}{2x+1}dx=-4\int\frac{x}{2x+1}dx=-4\int\frac{\frac{2x+1}{2}-\frac{1}{2}}{2x+1}dx=-2\int\frac{2x+1-1}{2x+1}dx $$ $$ =-2\int\left(1-\frac{1}{2x+1}\right)dx=-2\int dx+2\int\frac{1}{2x+1}dx=-2x+\ln|2x+1|+C. $$</p> <p>$C$ is an integration constant, which can be any real number; e.g. in your case $C=-1$. The derivative of any constant is $0$, i.e. $C'=0$ for all $C\in\mathbb{R}$. </p>
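As a numerical cross-check of the result above (an editorial addition, not part of the original answer), the derivative of $-2x+\ln|2x+1|$ should reproduce the integrand $-4x/(2x+1)$:

```python
import math

def F(x):
    # Antiderivative obtained above, with C = 0.
    return -2 * x + math.log(abs(2 * x + 1))

def integrand(x):
    return -4 * x / (2 * x + 1)

# A central finite difference of F should match the integrand closely.
h = 1e-6
checks = [0.3, 1.0, 2.5, -2.0]
for x in checks:
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-5
```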
22,101
<p>The general rule used in LaTeX doesn't work: for example, typing <code>M\"{o}bius</code> and <code>Cram\'{e}r</code> doesn't give the desired outputs.</p>
quid
85,306
<p>It would be a misconception to consider the source of a post as somehow analogous to the source of a LaTeX document (not only diacritics but quite literally <em>nothing</em> works <em>except</em> of course the math-environment). Instead it is better to think of the source of an HTML page. The basic formatting is done via Markdown and HTML. </p> <p>Only there is extra support for typesetting mathematics via MathJax, whose syntax happens to be similar to the one of LaTeX.</p> <p>As a consequence special characters can be inserted for example </p> <ol> <li><p>by some direct method; for instance, for many a user "é" or "ö" are just as easy to type as "n" (it is just some button on their keyboard), some combination of keystrokes (which one depends on the set-up), or copy-paste.</p></li> <li><p>using the methods provided by HTML, such as using the name of HTML entities with the syntax <code>&amp;name;</code> where "name" is the name of the character, which at least sometimes is not that hard to remember. For a list see for example <a href="https://en.wikipedia.org/wiki/List_of_XML_and_HTML_character_entity_references" rel="nofollow">the relevant Wikipedia page</a>.</p></li> </ol> <p>I mention 1. as in LaTeX by default this is in fact not possible. For 2. it should be noted that this does not work in titles and comments, but creating the character in (the preview of) the body and copy-pasting is an option. </p>
3,371,302
<p>I am trying to find all algebraic expressions for <span class="math-container">${i}^{1/4}$</span>.</p> <p>Using de Moivre's formula, I managed to get this: </p> <blockquote> <p><span class="math-container">${i}^{1/4}=\cos(\frac{\pi}{8})+i \sin(\frac{\pi}{8})=\sqrt{\frac{1+\frac{1}{\sqrt{2}}}{2}} + i \sqrt{\frac{1-\frac{1}{\sqrt{2}}}{2}}$</span></p> </blockquote> <p>What about the other expressions?</p>
Community
-1
<p>Let this first root be <span class="math-container">$z$</span>, and the other ones <span class="math-container">$zw$</span>. Then</p> <p><span class="math-container">$$i=(zw)^4=z^4w^4=iw^4$$</span> and <span class="math-container">$w^4=1$</span>. Hence you multiply <span class="math-container">$z$</span> by the fourth roots of unity, <span class="math-container">$1,i,-1,-i$</span>.</p>
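Numerically (an editorial check, not part of the original answer), multiplying the principal root $e^{i\pi/8}$ by the fourth roots of unity does give four fourth roots of $i$:

```python
import cmath

z = cmath.exp(1j * cmath.pi / 8)           # principal fourth root of i
roots = [z * w for w in (1, 1j, -1, -1j)]  # times the fourth roots of unity

for r in roots:
    assert abs(r**4 - 1j) < 1e-12  # each candidate really satisfies r^4 = i
```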
4,272,964
<p>I want to solve the following equation over the complex numbers:</p> <p><span class="math-container">$$z^2 + \bar z = \frac 1 2$$</span></p> <p><strong>My work so far</strong></p> <p>Apparently I have a problem with transforming the equation above into a form that is easy to solve. I tried to multiply both sides by <span class="math-container">$z$</span> and use the fact that <span class="math-container">$z\bar z = |z|^2$</span>, but it doesn't seem a great idea. After that I tried the following:</p> <p><span class="math-container">$$\bar z = \frac 1 2 - z^2 \Rightarrow |z| = \left| \frac 1 2 - z^2\right|$$</span></p> <p>and then rewriting <span class="math-container">$z = \operatorname{Re}(z) + i\operatorname{Im}(z)$</span>, but the result was also not satisfying. Could you please give me a hand with solving this equation?</p>
dxiv
291,201
<p>Taking conjugates <span class="math-container">$\,z^2 + \bar z = \frac 1 2 = \bar z ^2 + z\,$</span>, then eliminating <span class="math-container">$\bar z = \frac{1}{2}-z^2$</span> between the two:</p> <p><span class="math-container">$$ \begin{align} \left(\frac{1}{2}-z^2\right)^2+z &amp;= \frac{1}{2} \\ \iff\;\;\;\; (2z^2-1)^2+4z&amp;=2 \\ \iff\;\;\;\; 4z^4-4z^2+4z-1&amp;=0 \\ \iff\;\;\;\; 4z^4-(2z-1)^2&amp;=0 \\ \iff\;\;\;\; \left(2z^2-2z+1\right)\left(2z^2+2z-1\right)&amp;=0 \end{align} $$</span></p>
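A small numerical check of this factorization (editorial addition, not part of the original answer): each root of the two quadratic factors should satisfy the original equation <span class="math-container">$z^2+\bar z=\frac12$</span>.

```python
import cmath

def quad_roots(a, b, c):
    # Roots of a*z^2 + b*z + c = 0 over the complex numbers.
    d = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + d) / (2 * a), (-b - d) / (2 * a)]

roots = quad_roots(2, -2, 1) + quad_roots(2, 2, -1)
assert len(roots) == 4
for z in roots:
    # Each root satisfies the original equation z^2 + conj(z) = 1/2.
    assert abs(z**2 + z.conjugate() - 0.5) < 1e-12
```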
2,849,017
<p>\begin{align} dA &amp; = 2RR\,dv = 2R^2\,dv \\[8pt] A &amp; = \int_0^\pi 2R^2\,dv \\[8pt] \text{arclength} &amp; = R\,dv \end{align}</p> <p><a href="https://i.stack.imgur.com/YBIX5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YBIX5.png" alt="enter image description here"></a></p> <p>The area of a circle with radius $R$ is $\pi R^2$.</p> <p>I am trying to discover new correct methods for finding the area of a circle. Using rectangles is not something new per se, but summing infinitesimal rectangles going through the center with the angle as a parameter I have not seen before. Anyway, I have encountered a problem:</p> <p>Let $dA$ be the infinitesimal rectangle of width $R\,dv$ and length $2R$. As seen in the picture, the position of the rectangle is determined by the angle $V$. If, for example, $V=0$ the rectangle will be positioned "horizontal", and if $V=\frac{\pi}{2}$ the rectangle will be positioned "vertical". </p> <p>It does seem logical to me that if we add all the rectangles going from $V: 0$ to $\pi$ the sum will be the area of the circle. But we get $2\pi R^2$, double the area. Why?</p> <p><strong>Update:</strong> WOW! I have not been given the reason for why my method failed, but I figured it out myself! So by drawing the rectangle corresponding to $v$ and $v+dv$ I saw and realized that they overlapped each other, which is bad if we want the exact result. But by symmetry I found that the formed overlap is a parallelogram with $base=R$ and $height=R$, thus every overlap is worth $Area= base*height = R^2$. Subtracting this in the final integral gives us A $= \int_0^\pi 2R^2 - R^2\,dv = \pi R^2$</p> <p><strong>Update2</strong></p> <p>I may be right for the wrong reasons here. The area of the overlap can't possibly be $R^2$; that seems too large considering that the area of the rectangle itself is smaller than that when $dv$ is infinitesimal. 
</p> <p><a href="https://i.stack.imgur.com/IlZGw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IlZGw.png" alt="enter image description here"></a></p>
user
505,767
<p>It is not a correct set up to calculate the area since the width of the rectangle varies with $v$.</p> <p>As an alternative we can use</p> <p>$$A=2\int_0^R 2\sqrt{R^2-y^2}\,dy$$</p> <p><a href="https://i.stack.imgur.com/gBlGK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gBlGK.jpg" alt="enter image description here"></a></p>
1,865,364
<p>After having seen a lengthy and painful calculation showing $\operatorname{Gal}(\mathbb Q[e^\frac{2\pi i}3, \sqrt[\leftroot{-2}\uproot{2}3]{2}]/\mathbb Q)\cong S_3$, I'm wondering whether there's a slick proof $\operatorname{Gal}(\mathbb Q[e^\frac{2\pi i}p, \sqrt[\leftroot{-2}\uproot{2}p]{2}]/\mathbb Q)\cong S_p$ for odd prime $p$, because these calculations are getting intractable fast.</p> <p>What are some slick proofs of this fact (assuming it is indeed correct).</p> <p><strong>Correction:</strong> What <strong>IS</strong> $\operatorname{Gal}(\mathbb Q[e^\frac{2\pi i}p, \sqrt[\leftroot{-2}\uproot{2}p]{2}]/\mathbb Q)$ for prime $p$?</p>
DonAntonio
31,254
<p>The field $\;K:=\Bbb Q\left(\zeta:=e^{2\pi i/p},\,\sqrt[p]2\right)\;$ is the splitting field of $\;f(x):=x^p-2\in\Bbb Q[x]\;$ , and since this is an irreducible polynomial (why?) then $\;G:=Gal(K/\Bbb Q)\;$ acts transitively on its roots, which are $\;\alpha_k:=\sqrt[p]2\,\zeta^k\;,\;\;k=0,1,2,...,p-1\;$ . In particular $\;[K:\Bbb Q]=p(p-1)\;$ .</p> <p>Now take the (Galois) subextension $\;E:=\Bbb Q(\zeta)/\Bbb Q\;$ . This is the cyclotomic extension of the rationals of order $\;\phi(p)=p-1\;$ , and it is of course cyclic of that order since in fact $\;Gal(E/\Bbb Q)\cong\left(\Bbb Z/p\Bbb Z\right)^*\;$ . Since $\;E/\Bbb Q\;$ is Galois, $\;N:=Gal(K/E)\lhd G\;$ , and $\;N\;$ is cyclic of order $\;p\;$ with $\;G/N\cong Gal(E/\Bbb Q)\cong C_{p-1}\;$ .</p> <p>Likewise, the (non-Galois) subextension $\;F:=\Bbb Q(\sqrt[p]2)/\Bbb Q\;$ is of order $\;p\;$ (why is this extension <strong>not</strong> normal?), and $\;H:=Gal(K/F)\;$ has order $\;p-1\;$ .</p> <p>Finally, observe that $\;G=N\cdot H\;$ and $\;N\cap H=1\;$ by order considerations, and since $\;N\lhd G\;$ we then have a semidirect product $\;G\cong C_p\rtimes C_{p-1}\;$</p>
2,943,461
<p>I'm stumped on a math puzzle and I can't find an answer to it anywhere! A man is filling a pool from 3 hoses. Hose A could fill it in 2 hours, hose B could fill it in 3 hours and hose C can fill it in 6 hours. However, there is a blockage in hose A, so the guy starts by using hoses B and C. When the blockage in hose A has been cleared, hoses B and C are turned off and hose A starts being used. How long does the pool take to fill? Any help would be strongly appreciated :)</p>
Bram28
256,001
<p>Let's use meta-logic for this problem:</p> <p>Assuming there is an answer to this problem at all, it must be true that it doesn't make a difference as to whether the guy uses both hoses B and C, or just hose A, for if there was a difference, then given that we are not told how long the blockage lasted, the problem would not be solvable.</p> <p>Therefore, by meta-logic: we might as well assume that the guy was using just hose A, meaning it will take him 2 hours.</p> <p>Of course, a true solution to the problem will have to include a verification that indeed hoses B and C together will indeed fill up the pool just as fast as hose A alone ...</p>
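For completeness, here is the missing verification (an editorial addition): hoses B and C together fill $\frac13+\frac16=\frac12$ of the pool per hour, exactly hose A's rate, so the 2-hour answer holds no matter when the switch happens.

```python
from fractions import Fraction

rate_A = Fraction(1, 2)                    # hose A: 1 pool per 2 hours
rate_BC = Fraction(1, 3) + Fraction(1, 6)  # hoses B and C running together

assert rate_A == rate_BC  # switching hoses changes nothing
assert 1 / rate_A == 2    # the pool takes 2 hours either way
```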
3,433,277
<blockquote> <p>It is given <span class="math-container">$f:\mathbb R \rightarrow \mathbb R$</span> <span class="math-container">$$f(x):=\tan^{-1}(x+1)+ \cot^{-1}(x)$$</span> <span class="math-container">$\mathcal R_f=?$</span></p> </blockquote> <p>So far, I've learned <span class="math-container">$\tan$</span> and <span class="math-container">$\cot$</span> are complementary functions, therefore <span class="math-container">$$\tan^{-1}(x) + \cot^{-1}(x)=\frac{\pi}{2}.$$</span></p> <p>I entered a loop using <span class="math-container">$$\tan(x) =\frac{1}{\cot(x)}\;.$$</span></p> <p>Can I use <span class="math-container">$\tan$</span> for the whole expression and from there on use the <span class="math-container">$\tan$</span> addition formula?</p> <blockquote> <p>Is there a way of finding the image without using derivatives and limits?</p> </blockquote>
Mani khurana
724,374
<p>Note that <span class="math-container">$\cot^{-1}(\pm \infty)= 0$</span> and <span class="math-container">$\tan^{-1} (\pm \infty) =\pm \pi/2$</span>, <span class="math-container">$\cot^{-1}(0^+)= \pi/2$</span>, so the infimum of the given function <span class="math-container">$$f(x)= \cot^{-1}x +\tan^{-1} (1+x)$$</span> is <span class="math-container">$-\pi/2$</span> (approached as <span class="math-container">$x\to-\infty$</span>, but not attained). Its (global) maximum value occurs at <span class="math-container">$x=0$</span> and is <span class="math-container">$f(0)=\pi/2+\pi/4= 3\pi/4.$</span> So the range of the given function is <span class="math-container">$(-\pi/2, 3\pi/4]$</span>. </p> <p>See the attached figure, where <span class="math-container">$f(0)=3\pi/4=2.3561...$</span>. The upper horizontal line depicts this value and the lower horizontal line depicts the lower bound of <span class="math-container">$-\pi/2.$</span></p> <p><a href="https://i.stack.imgur.com/ZgLCC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZgLCC.png" alt="enter image description here"></a></p>
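The claimed range can be probed numerically (an editorial addition; this assumes the <span class="math-container">$\cot^{-1}$</span> convention with range <span class="math-container">$(-\pi/2,\pi/2]$</span> used in the answer, under which <span class="math-container">$\cot^{-1}(\pm\infty)=0$</span>):

```python
import math

def acot(x):
    # arccot with range (-pi/2, pi/2], the convention used in the answer,
    # so that acot(x) -> 0 as |x| -> infinity and acot(0+) = pi/2.
    return math.pi / 2 if x == 0 else math.atan(1 / x)

def f(x):
    return acot(x) + math.atan(1 + x)

assert abs(f(0) - 3 * math.pi / 4) < 1e-12  # maximum 3*pi/4 attained at x = 0

xs = [t / 100 for t in range(-100000, 100001)]
vals = [f(x) for x in xs]
assert max(vals) <= 3 * math.pi / 4 + 1e-9  # never exceeds 3*pi/4
assert all(v > -math.pi / 2 for v in vals)  # stays strictly above -pi/2 ...
assert min(vals) < -math.pi / 2 + 1e-2      # ... while getting close to it
```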
686,981
<p>So I have to solve the equation $$y^2=4\tag{1.9.88 unit 3*}$$</p> <p>I did this: $$y^2=4 \text{ means } \sqrt{y^2}=\sqrt{4} \implies y=2$$</p> <p>But I have a problem: $y$ can be either negative or positive, so I need to do: $$\sqrt{y^2}=|y|=2 \implies y=2 \text{ or } y=-2$$</p> <p>Is it right?</p>
user2369284
91,771
<p>Yes, it is right. I'll recommend a better way to approach this. Just factorize it.</p> <p>$(y-2)(y+2) = 0$</p> <p>$y = 2,-2$</p>
995,159
<p>I have a matrix $(a_{j,k})_{j,k\in\mathbb{N}}$ given by:</p> <p>$ a_{j,k} = \dfrac{1 -e^{-jk}}{jk + 1}$</p> <p>and I need to show that this induces a bounded operator on $\ell^2$. I'm pretty sure Schur's test is inconclusive. So my guess is to use the Hilbert-Schmidt test, which states that if</p> <p>$\sum\limits_{j=1}^{\infty}\sum\limits_{k=1}^{\infty} \left|\dfrac{1 -e^{-jk}}{jk + 1}\right|^2 &lt; \infty$, then $(a_{j,k})$ induces a bounded operator. </p> <p>However, I'm not sure how to do this summation - can anyone give me a hint?</p> <p>Many thanks </p>
Mustafa Said
90,927
<p>$\left|\frac{1-e^{-jk}}{jk+1}\right|^2 \leq \frac{1}{j^2k^2}$ for all $j,k \geq 1$, since $0 \leq 1-e^{-jk} \leq 1$ and $jk+1 \geq jk$. Hence $$\sum_{j=1}^{\infty}\sum_{k=1}^{\infty} \left|\frac{1-e^{-jk}}{jk+1}\right|^2 \leq \left(\sum_{j=1}^{\infty}\frac{1}{j^2}\right)^2 = \left(\frac{\pi^2}{6}\right)^2 &lt; \infty.$$</p>
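A quick numerical illustration of this comparison (an editorial addition, not part of the original answer): the partial double sums stay below $\left(\pi^2/6\right)^2$.

```python
import math

bound = (math.pi**2 / 6) ** 2  # (sum of 1/j^2)^2, from the comparison above
N = 200
partial = sum(
    ((1 - math.exp(-j * k)) / (j * k + 1)) ** 2
    for j in range(1, N + 1)
    for k in range(1, N + 1)
)
assert partial < bound  # partial sums are bounded, consistent with convergence
```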
2,171,237
<p>$f(x)$ is continuous on $[0,\pi]$ and $\int_0^\pi{f(x)\sin xdx} = \int_0^\pi{f(x)\cos xdx} = 1.$</p> <p>Find $\min\int_0^\pi {f^2(x)dx}.$</p> <p>I try to solve this problem by this: $$\begin{array}{l} {\left( {\int\limits_0^\pi {f(x)\sin xdx} } \right)^2} \le \left( {\int\limits_0^\pi {{f^2}(x){{\sin }^2}xdx} } \right)\left( {\int\limits_0^\pi {dx} } \right) \le \pi \int\limits_0^\pi {{f^2}(x){{\sin }^2}xdx} \\ {\left( {\int\limits_0^\pi {f(x)\cos xdx} } \right)^2} \le \pi \int\limits_0^\pi {{f^2}(x){{\cos }^2}xdx} \\ \Rightarrow \pi \int\limits_0^\pi {{f^2}(x)\left( {{{\sin }^2}x + {{\cos }^2}x} \right)dx} \ge 1 + 1 = 2 \end{array}$$ The thing is I can't find $f(x)$ to let the equation happens. Any help? Thank you in advance.</p>
Martin R
42,969
<p>Define $$ g(x) = f(x) - \frac 2\pi \sin x - \frac 2\pi \cos x \, . $$ Then $$ \int_0^\pi g(x) \cos x \, dx= \int_0^\pi f(x) \cos x \, dx - \frac 2\pi \int_0^\pi \sin x \cos x \, dx - \frac 2\pi \int_0^\pi \cos^2 x \, dx \\ = 1 - 0 - 1 = 0 $$ and similarly, $$ \int_0^\pi g(x) \sin x \, dx= 0 \, . $$ Therefore $$ \int_0^\pi f^2(x) = \int_0^\pi \left( g(x) + \frac 2\pi \sin x + \frac 2\pi \cos x\right)^2 \, dx \\ = \int_0^\pi g^2(x) \, dx + \frac{4}{\pi^2} \int_0^\pi \sin^2(x) \, dx + \frac{4}{\pi^2} \int_0^\pi \cos^2(x) \, dx $$ because all integrals with the "mixed terms" from expanding the square vanish.</p> <p>It follows that $$ \int_0^\pi f^2(x) \ge \frac{4}{\pi^2} \int_0^\pi \sin^2(x) \, dx + \frac{4}{\pi^2} \int_0^\pi \cos^2(x) \, dx = \frac 4 \pi \, . $$ Equality holds if $g(x) = 0$, i.e. for the function $$ f(x) = \frac 2\pi \sin x + \frac 2\pi \cos x \, . $$</p> <p><em>(This is essentially the same proof as given by Jacky Chong, but without using the theory of Fourier series.)</em></p> <hr> <p>Another way to look at the problem is to consider the set of all real-valued continuous functions on $[0, \pi]$ as <a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="nofollow noreferrer"><em>inner product space</em></a> with the scalar product $$ \langle g, h \rangle = \int_0^\pi g(x)h(x) \, dx \, . $$ What we did is to define $g$ as the <em>orthogonal projection</em> of $f$ on the subspace spanned by $\{ \sin, \cos \}$: $$ g = f - \frac{\langle f, \sin \rangle}{\langle \sin, \sin \rangle} \sin - \frac{\langle f, \cos \rangle}{\langle \cos, \cos \rangle} \cos = f - \frac 2 \pi \sin - \frac 2 \pi cos $$ and concluded that $$ \| f \|^2 = \langle f, f \rangle = \langle g, g \rangle + \frac{4}{\pi^2} \langle \sin, \sin \rangle + \frac{4}{\pi^2} \langle \cos, \cos \rangle = \| g \|^2 + \frac 4 \pi \, . $$</p>
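One can confirm numerically that the minimizer found above satisfies both constraints and attains the value $4/\pi$ (an editorial addition; a simple midpoint rule is used for the integrals):

```python
import math

def f(x):
    # The minimizing function f(x) = (2/pi)(sin x + cos x).
    return (2 / math.pi) * (math.sin(x) + math.cos(x))

def integrate(g, a, b, n=20000):
    # Midpoint rule; accurate enough for these smooth integrands.
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

I_sin = integrate(lambda x: f(x) * math.sin(x), 0, math.pi)
I_cos = integrate(lambda x: f(x) * math.cos(x), 0, math.pi)
I_sq = integrate(lambda x: f(x) ** 2, 0, math.pi)

assert abs(I_sin - 1) < 1e-6           # constraint on the sine moment
assert abs(I_cos - 1) < 1e-6           # constraint on the cosine moment
assert abs(I_sq - 4 / math.pi) < 1e-6  # minimum value 4/pi
```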
152,295
<p>What is the definition of picture changing operation? What is a standard reference where it is defined - not just used?</p>
José Figueroa-O'Farrill
394
<p>Although it’s behind an Elsevier pay-wall, there is one paper which explains in cohomological terms the picture-changing operator in the context of string field theory. If I remember correctly it is a kind of connecting homomorphism. The paper in question is “<a href="http://dx.doi.org/10.1016/0370-2693%2888%2991319-6" rel="noreferrer">Picture changing operation and BRST cohomology in superstring field theory</a>” by Francisco Narganes-Quijano. </p>
2,657,053
<blockquote> <p>Suppose I know that $$\sum_{i=1}^n i^2=\frac{n(n+1)(2n+1)}{6}\,\,\,\, \tag{1} $$ How can I prove the the following? $$ \sum_{i=0}^{n-1} i^2=\frac{n(n-1)(2n-1)}{6} $$</p> </blockquote> <hr> <p>I have looked up the solution to the other problem but it seems to be a bit confusing to me. Is it possible to find a solution derived from equation 1 if you did <strong>NOT</strong> know this part: $$ \frac{n(n-1)(2n-1)}{6} $$</p>
user
505,767
<p>Simply note that</p> <p>$$\sum_{i=0}^{n-1} i^2=\frac{n(n-1)(2n-1)}{6}=\left(\sum_{i=1}^n i^2\right)-n^2$$</p> <p>indeed</p> <p>$$\sum_{i=0}^{n-1} i^2=\sum_{i=1}^{n-1} i^2=\left(\sum_{i=1}^n i^2\right)-n^2=\frac{n(n+1)(2n+1)}{6}-n^2=\frac{n(n-1)(2n-1)}{6}$$</p>
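This identity is easy to spot-check (an editorial addition, not part of the original answer):

```python
def closed_form(n):
    # n(n-1)(2n-1)/6 is always an integer, so integer division is exact.
    return n * (n - 1) * (2 * n - 1) // 6

for n in range(1, 200):
    assert sum(i * i for i in range(n)) == closed_form(n)  # i runs 0..n-1
```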
3,520,327
<p>Currently in Calculus II and I was introduced to hyperbolic trigonometric functions and it threw me for a loop. I’m really confused on their MEANING... and what they represent. I can use the formulas for them easily but it doesn’t actually make sense to me. Can someone please help me out? Are there any good books you can recommend as well?</p>
Arthur
15,500
<p>That is the gist of it, yeah.</p> <p>There are, in practice, several ways to do this, and here is a short summary. A direct proof uses intermediate, already-known implications chained together like this. <span class="math-container">$$ p\to p_1\\ p_1\to p_2\\ \vdots\\ p_n\to q $$</span></p> <p>A contrapositive proof is a direct proof of the statement <span class="math-container">$$ \text{not }q\to\text{not }p $$</span> and a proof by contradiction is a direct proof of the statement <span class="math-container">$$ (p\text{ and not }q)\to \text{contradiction / absurdity} $$</span></p>
1,281,967
<p>This is a dumb question I know.</p> <p>If I have matrix equation $Ax = b$ where $A$ is a square matrix and $x,b$ are vectors, and I know $A$ and $b$, I am solving for $x$.</p> <p>But multiplication is not commutative in matrix math. Would it be correct to state that I can solve for $A^{-1}Ax = A^{-1}b \implies x = A^{-1}b$?</p>
Stefan Perko
166,694
<p>Usually, if you are solving for $x$, the easiest method, which is also safe to use, is Gaussian elimination. (<a href="http://en.wikipedia.org/wiki/Gaussian_elimination" rel="nofollow">http://en.wikipedia.org/wiki/Gaussian_elimination</a>) It also tells you if there are no solutions or infinitely many.</p> <p>Inverting a matrix is only sometimes possible, and almost never faster (if you are only solving one system involving said matrix) for $3\times 3$ or higher. If $A$ is invertible, then there is of course always a unique solution $x$.</p>
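To illustrate the suggestion, here is a minimal sketch of Gaussian elimination with partial pivoting (an editorial addition, not production code; it assumes a dense real square matrix and a fixed singularity tolerance):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    A is a list of row lists, b a list; both are left unmodified.
    Raises ValueError if A is (numerically) singular.
    """
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular (no unique solution)")
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# Example: x + y = 5, x - y = -1 has the unique solution (2, 3).
sol = solve([[1.0, 1.0], [1.0, -1.0]], [5.0, -1.0])
assert all(abs(v - w) < 1e-9 for v, w in zip(sol, [2.0, 3.0]))
```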
1,944,628
<p>Let $(X,{\mathcal T}_X)$ and $(Y,{\mathcal T}_Y)$ be topologiclal spaces, and let $f,g:(X,{\mathcal T}_X)\to(Y,{\mathcal T}_Y)$ be continuous maps. </p> <p>Define the equality set as $$E(f,g) = \{x\in X \ | \ f(x) = g(x) \}$$</p> <p>I have worked out that if $(Y,{\mathcal T}_Y)$ is Hausdorff, then $E(f,g)$ is ${\mathcal T}_X$-closed (see <a href="https://math.stackexchange.com/questions/199617/the-set-of-points-where-two-maps-agree-is-closed">this answer</a>).</p> <p>In order to get a better understanding I am trying to find examples of continuous maps $f,g$ with $E(f,g)$ not closed. My understanding is that this only occurs for some maps where the target space $(Y,{\mathcal T}_Y)$ is not Hausdorff.</p>
DanielWainfleet
254,665
<p>An example where $Y$ is a $T_1$ space but not Hausdorff. Let $T_R$ be the usual topology on the reals $R$. Let $Q$ be the rationals. Let $Y=Q\cup ((R$ \ $Q)\times \{0,1\}).$</p> <p>For $b\subset R$ let $b^*=(b\cap Q)\cup ((b$ \ $Q)\times \{0,1\}).$ Let $B=\{b^*$ \ $c : b\in T_R$ and $c$ is finite $\}.$ Then $B$ is a base for a topology $T_Y$ on $Y.$ And $(Y,T_Y)$ is a $T_1$ space but not a $T_2$ space.</p> <p>Define $f:R\to Y$ and $g:R\to Y$ by: </p> <p>(i) $f(q)=g(q)=q$ if $q\in Q .$</p> <p>(ii) If $r\in R$ \ $Q$ then $f(r)=(r,0)$ and $g(r)=(r,1).$</p> <p>Then $f$ and $g$ are continuous, as can be verified by checking that $f^{-1}d$ and $g^{-1}d$ are open in $R$ for each $d\in B.$ But $\{x: f(x)=g(x)\}=Q$ which is not closed in $R.$</p> <p>Another example (in which $Y$ is not $T_1$): Let $T_Y$ be the coarse (indiscrete) topology on $Y, $ where $Y$ has at least 2 members. Any function into $Y$ is continuous. Let $A$ be any non-closed subset of a space $X$ . Let $y_1\in Y. $ Let $f(x)=y_1$ for all $x\in X.$ Let $g(x)=y_1$ for $x\in A$ and $g(x)\in Y$ \ $\{y_1\}$ for $x\in X$ \ $A.$ Then $\{x:f(x)=g(x)\}=A.$</p>
1,082,390
<p>$$\lim_{x \to \infty} \left(\sqrt{4x^2+5x} - \sqrt{4x^2+x}\ \right)$$</p> <p>I have a lot of approaches, but it seems that I get stuck in all of them, unfortunately. For example, I tried to multiply both the numerator and denominator by the conjugate $\left(\sqrt{4x^2+5x} + \sqrt{4x^2+x}\right)$; then I get $\displaystyle \frac{4x}{\sqrt{4x^2+5x} + \sqrt{4x^2+x}}$, but I can conclude nothing from it. </p>
Thomas Andrews
7,933
<p>Note that $$\sqrt{4x^2+5x}-\sqrt{4x^2+x} = \frac{4x}{\sqrt{4x^2+5x}+\sqrt{4x^2+x}}$$</p> <p>And the denominator is between $2x+2x=4x$ and $(2x+\frac{5}{4})+(2x+\frac{1}{4})=4x+\frac{3}{2}$, so the fraction is squeezed between $\frac{4x}{4x+3/2}$ and $1$. So the limit must be $1$.</p> <p>Alternatively, show that:</p> <p>$$\lim \left(2x+\frac{5}{4}-\sqrt{4x^2+5x}\right) = 0$$</p> <p>and</p> <p>$$\lim \left(2x+\frac{1}{4}-\sqrt{4x^2+x}\right) = 0$$</p> <p>Then again deduce that the limit of the difference must be $0$, so the limit you are seeking is $\frac{5}{4}-\frac 14=1$.</p>
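The squeeze is easy to watch numerically. The following is a small illustrative Python check (my addition, with arbitrary sample points — not part of the original answer):

```python
import math

def diff(x):
    """sqrt(4x^2 + 5x) - sqrt(4x^2 + x), the expression whose limit is sought."""
    return math.sqrt(4 * x * x + 5 * x) - math.sqrt(4 * x * x + x)

# The denominator bound gives 4x/(4x + 3/2) < diff(x) < 1 for x > 0,
# so diff(x) is squeezed toward 1 as x grows.
for x in (10, 100, 1000):
    assert 4 * x / (4 * x + 1.5) < diff(x) < 1
```

Evaluating `diff` at large arguments (say `diff(1_000_000)`) shows the value creeping up toward $1$ from below, consistent with the squeeze.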
2,426,897
<p>Let $\mathbb{H}$ be the ring of real quaternions and $Z(\mathbb{H})$ be its center. Of course $Z(\mathbb{H})=\mathbb{R}$. </p> <p>Suppose $a+bi+cj+dk$, $x+yi+zj+wk \in \mathbb{H}$ such that $(a+bi+cj+dk)(x+yi+zj+wk) \in Z(\mathbb{H})$. </p> <p>Does it imply $(x+yi+zj+wk)(a+bi+cj+dk) \in Z(\mathbb{H})$?</p>
Jack D'Aurizio
44,121
<p>$\sqrt{2017}\approx\sqrt{2000}=20\sqrt{5}\approx 20\cdot 2.236 \approx 44.7$ and $$44^2 = 1936,\qquad 45^2=2025$$ hence $\sqrt{2017}\in\color{red}{\left(44,45\right)}$.</p>
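The bracketing step uses only integer arithmetic, so it can be confirmed exactly in code. This is an illustrative snippet I am adding, not part of the original answer:

```python
import math

# Squaring the candidate bounds needs no floating point at all.
assert 44 ** 2 == 1936 and 45 ** 2 == 2025
assert 44 ** 2 < 2017 < 45 ** 2

# math.isqrt computes the exact floor of the integer square root.
assert math.isqrt(2017) == 44
```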
3,282,400
<p>I would like to illustrate my confusion about this topic by building up the issue from more or less first principles. Let <span class="math-container">$U \subseteq \mathbb{R}^n$</span> be an open subset and let <span class="math-container">$f:U \to \mathbb{R}^m$</span>. We say that <span class="math-container">$f$</span> is totally differentiable at <span class="math-container">$a \in U$</span> if there exists a linear function <span class="math-container">$D_{a}f:\mathbb{R}^n \to \mathbb{R}^m$</span> such that the following holds.</p> <p><span class="math-container">$\lim_{x \to a} \frac{||f(x)-f(a)-D_{a}f(x-a)||}{||x-a||} = 0$</span></p> <p>We call <span class="math-container">$D_{a}f$</span> the total derivative of <span class="math-container">$f$</span> at <span class="math-container">$a$</span>. My question concerns how we then define the derivative function. For the familiar case in which <span class="math-container">$n=m=1$</span>, this is simple since the total derivative reduces to just</p> <p><span class="math-container">$(D_{a}f)(b) = \big(\frac{df}{dx}\big{\rvert}_a\big)b$</span></p> <p>for all <span class="math-container">$b \in \mathbb{R}$</span>. Here, <span class="math-container">$\frac{df}{dx}\big{\rvert}_a$</span> denotes the usual definition of the derivative of <span class="math-container">$f$</span> at <span class="math-container">$a$</span>. We then define the derivative function <span class="math-container">$f':U \to \mathbb{R}$</span> as</p> <p><span class="math-container">$f'(y) := \frac{df}{dx}\big{\rvert}_y$</span></p> <p>for all <span class="math-container">$y \in U$</span>. Let us now consider the case when <span class="math-container">$m=1$</span>. 
Representing the action of <span class="math-container">$D_{a}f$</span> via the Jacobian matrix, we have</p> <p><span class="math-container">$(D_{a}f)(b) = \begin{bmatrix} \frac{\partial{f}}{\partial{x_1}}\big{\rvert}_a &amp; \cdots &amp; \frac{\partial{f}}{\partial{x_n}}\big{\rvert}_a \end{bmatrix} \begin{bmatrix} b_1 \\\ \vdots \\\ b_n \end{bmatrix}$</span></p> <p>for all <span class="math-container">$b=(b_1,\cdots,b_n)\in\mathbb{R}^n$</span>. This suggests that we define the derivative function <span class="math-container">$Df:U \to \mathbb{R}^n$</span> as</p> <p><span class="math-container">$(Df)(y) := \big(\frac{\partial{f}}{\partial{x_1}}\big{\rvert}_y, \cdots, \frac{\partial{f}}{\partial{x_n}}\big{\rvert}_y\big)$</span></p> <p>for all <span class="math-container">$y \in U$</span>. We could perhaps make a similar argument for the case in which <span class="math-container">$n=1$</span>. How does this extend to cases in which the Jacobian matrix is not a simple column or row vector? What definition of the derivative function is most natural in those cases? By natural, I mean that the definition should allow important identities like the product rule and chain rule to retain their obvious forms.</p>
Daniel Kawai
466,883
<p>I will use the result in <a href="https://math.stackexchange.com/questions/66253/proving-ab-afbfa-fb-f/4038289#4038289">this page</a>. Let <span class="math-container">$\pi:A\rightarrow A/B$</span> be the canonical surjection.</p> <p>It is sufficient to prove:</p> <p><span class="math-container">$$[A_f:A^g][B_g:B^f][(A/B)_g:(A/B)^f]=[A_g:A^f][B_f:B^g][(A/B)_f:(A/B)^g].$$</span></p> <p>But:</p> <p><span class="math-container">$$[A_f:A^g]=[(A_f)^\pi:(A^g)^\pi][(A_f)_\pi:(A^g)_\pi]$$</span></p> <p><span class="math-container">$$[B_g:B^f]=[(A_\pi)_g:(A_\pi)^f]=[(A_g)_\pi:(A^f)_\pi][(A^f)_\pi:(A_\pi)^f]$$</span></p> <p><span class="math-container">$$[(A/B)_g:(A/B)^f]=[(A^\pi)_g:(A^\pi)^f]=[(A^\pi)_g:(A_g)^\pi][(A_g)^\pi:(A^f)^\pi]$$</span></p> <p>Analogously:</p> <p><span class="math-container">$$[A_g:A^f]=[(A_g)^\pi:(A^f)^\pi][(A_g)_\pi:(A^f)_\pi]$$</span></p> <p><span class="math-container">$$[B_f:B^g]=[(A_f)_\pi:(A^g)_\pi][(A^g)_\pi:(A_\pi)^g]$$</span></p> <p><span class="math-container">$$[(A/B)_f:(A/B)^g]=[(A^\pi)_f:(A_f)^\pi][(A_f)^\pi:(A^g)^\pi]$$</span></p> <p>So the problem is equivalent to prove that:</p> <p><span class="math-container">$$[(A_f)^\pi:(A^g)^\pi][(A_f)_\pi:(A^g)_\pi][(A_g)_\pi:(A^f)_\pi][(A^f)_\pi:(A_\pi)^f][(A^\pi)_g:(A_g)^\pi][(A_g)^\pi:(A^f)^\pi]$$</span></p> <p>is equal to:</p> <p><span class="math-container">$$[(A_g)^\pi:(A^f)^\pi][(A_g)_\pi:(A^f)_\pi][(A_f)_\pi:(A^g)_\pi][(A^g)_\pi:(A_\pi)^g][(A^\pi)_f:(A_f)^\pi][(A_f)^\pi:(A^g)^\pi].$$</span></p> <p>So it suffices to prove that:</p> <p><span class="math-container">$$[(A^f)_\pi:(A_\pi)^f][(A^\pi)_g:(A_g)^\pi]=[(A^g)_\pi:(A_\pi)^g][(A^\pi)_f:(A_f)^\pi].$$</span></p> <p>We will prove that <span class="math-container">$[(A^f)_\pi:(A_\pi)^f]=[(A^\pi)_f:(A_f)^\pi]$</span>. Let <span class="math-container">$P$</span> be a representant system of cosets of <span class="math-container">$(A_\pi)^f$</span> in <span class="math-container">$(A^f)_\pi$</span>. 
For <span class="math-container">$p\in P$</span> choose an element <span class="math-container">$a_p\in A$</span> such that <span class="math-container">$f(a_p)=p$</span>. We will prove that the family <span class="math-container">$(\pi(a_p))_{p\in P}$</span> is a representant system of cosets of <span class="math-container">$(A_f)^\pi$</span> in <span class="math-container">$(A^\pi)_f$</span>.</p> <p>For <span class="math-container">$y\in (A^\pi)_f$</span>, then there is <span class="math-container">$x\in A$</span> such that <span class="math-container">$y=\pi(x)$</span>, so <span class="math-container">$0=f(y)=f(\pi(x))=\pi(f(x))$</span>, so <span class="math-container">$f(x)\in(A^f)_\pi$</span>, so there is <span class="math-container">$p\in P$</span> such that <span class="math-container">$f(x)\in p+(A_\pi)^f$</span>, so there is <span class="math-container">$b\in A_\pi$</span> such that <span class="math-container">$f(x)=f(a_p)+f(b)$</span>, so <span class="math-container">$x-a_p-b\in A_f$</span>, so <span class="math-container">$\pi(x)-\pi(a_p)-\pi(b)\in (A_f)^\pi$</span>, so <span class="math-container">$y\in\pi(a_p)+(A_f)^\pi$</span>.</p> <p>If <span class="math-container">$\pi(a_p)+(A_f)^\pi=\pi(a_q)+(A_f)^\pi$</span>, then <span class="math-container">$\pi(a_p)-\pi(a_q)\in(A_f)^\pi$</span>, so there is <span class="math-container">$x\in A_f$</span> such that <span class="math-container">$\pi(a_p)-\pi(a_q)=\pi(x)$</span>, so <span class="math-container">$a_p-a_q-x\in A_\pi$</span>, so <span class="math-container">$f(a_p)-f(a_q)-f(x)\in(A_\pi)^f$</span>, so <span class="math-container">$p-q\in(A_\pi)^f$</span>, so <span class="math-container">$p+(A_\pi)^f=q+(A_\pi)^f$</span>, so <span class="math-container">$p=q$</span>.</p>
317,160
<p>If $ f(x) = \frac{1}{(1+x)\sqrt x} $, how do I find all $ p &gt; 0 $ such that $$ \int^{\infty}_0 |f(x)|^p dx &lt; \infty $$ The integral is with respect to Lebesgue measure. Any solution or hints would be helpful. The answer is that the integral converges iff $ p\in (\frac{2}{3}, 2) $.</p>
Hanul Jeon
53,976
<p><strong>Hint:</strong> Split the integral as $$\int_0^{\infty} f(x)^p dx =\int_0^1 f(x)^p dx+\int_1^{\infty} f(x)^p dx$$</p> <p>So if $0&lt;p&lt;1$, we need only consider the integral $\int_1^{\infty} f(x)^p dx$, and $$\int_1^{\infty} f(x)^p dx &lt; \int_1^\infty \left(\frac{1}{x\sqrt{x}}\right)^p dx$$ </p> <p>And $$\int_1^\infty f(x)^p dx &gt; \int_1^\infty \frac{1}{(1+x)^{3p/2}} dx .$$</p>
317,160
<p>If $ f(x) = \frac{1}{(1+x)\sqrt x} $, how do I find all $ p &gt; 0 $ such that $$ \int^{\infty}_0 |f(x)|^p dx &lt; \infty $$ The integral is with respect to Lebesgue measure. Any solution or hints would be helpful. The answer is that the integral converges iff $ p\in (\frac{2}{3}, 2) $.</p>
Julien
38,053
<p>At $+\infty$, you have $$ |f(x)|^p\sim \frac{1}{x^{3p/2}} $$ which converges if and only if $3p/2&gt;1$.</p> <p>At $0$, $$ |f(x)|^p\sim\frac{1}{x^{p/2}} $$ which converges if and only if $p/2&lt;1$.</p> <p>So your integral converges if and only if $$ \frac{2}{3}&lt;p&lt;2. $$</p>
4,500,163
<blockquote> <p>Take two positive integers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> that are not multiples of <span class="math-container">$5.$</span> Then, construct a list in the following fashion: let the first term be <span class="math-container">$5,$</span> and starting with the second number, each number is obtained by multiplying the previous number on the list by <span class="math-container">$a$</span> and adding <span class="math-container">$b.$</span> What is the maximum number of primes that the list can contain before obtaining the first composite number?</p> </blockquote> <hr /> <p>I first started the problem by writing out a couple of terms in the sequence: <span class="math-container">$$5, 5a + b, 5a^2 + ab + b, 5a^3 + a^2b + ab + b, 5a^4 + a^3b + a^2b + ab + b, \cdots$$</span> I then tried interpreting each term modulo <span class="math-container">$5$</span>, but considering each <span class="math-container">$a, b \pmod{5}$</span> and expanding was far too long. Considering the terms modulo <span class="math-container">$2$</span> also didn't work since there always exists a way to make the term <span class="math-container">$\not\equiv 0 \pmod{2}.$</span></p> <p>Is there a better way to try and solve this problem besides grueling casework on <span class="math-container">$a, b \pmod{5}$</span>?</p>
Bill Dubuque
242
<p>Though you've accepted an answer a day prior, it's worth strong emphasis that problems like this can be solved more simply (and more generally) using basic ideas about permutation cycles (if these are unfamiliar then see the alternative direct proof in a Remark below). Bringing to the fore the innate <span class="math-container">$\rm\color{#0a0}{periodic (cycle)}$</span> structure of the sequence <span class="math-container">$\!\bmod 5$</span> yields the very simple proof below.</p> <p><strong>Proof</strong> <span class="math-container">$\, $</span> Your sequence <span class="math-container">$\:\!c_n$</span> is generated by iteratively applying <span class="math-container">$\:\!f(x) = ax\:\!\!+\!b\,$</span> starting at <span class="math-container">$\,\color{}{c_0 = 5},\,$</span> i.e. <span class="math-container">$\,c_n = f^n(c_0).\,$</span> Viewed <span class="math-container">$\!\bmod 5\:\!$</span> note <span class="math-container">$f\,$</span> is invertible: <span class="math-container">$\,f^{-1}(x) \equiv (x\!-\!b)/a,\,$</span> by <span class="math-container">$\,a\not\equiv 0,\,$</span> so <span class="math-container">$f\,$</span> is a <span class="math-container">$\rm\color{#0a0}{permutation}$</span>, and all <span class="math-container">$\:\!c_k\:\!$</span> lie in the <span class="math-container">$\:\!\rm\color{#0a0}{orbit\, ({\it cycle})}$</span> <span class="math-container">$f^k(\color{}{c_0})$</span> of length at most <span class="math-container">$\:\!\color{#c00}5 = |\Bbb Z_5|.\,$</span> So the initial value <span class="math-container">$\,c_0\equiv 0\pmod{\!5}\,$</span> repeats as <span class="math-container">$\,c_k\equiv 0\,$</span> for <span class="math-container">$\,0&lt; k\le \color{#c00}5,\,$</span> so <span class="math-container">$\,5\mid c_k,\,$</span> so <span class="math-container">$\,c_k\,$</span> is composite (by <span class="math-container">$c_k$</span> is increasing so <span class="math-container">$\,k&gt;0\Rightarrow c_k&gt; 
c_0\!=\!5).\,$</span> Thus there are at most <span class="math-container">$\,\color{#c00}5\,$</span> initial primes in <span class="math-container">$\,c_k.\ \ \small\bf QED$</span></p> <hr /> <p><strong>Remark</strong> <span class="math-container">$ $</span> If permutation cycles are unknown then we can directly prove the needed <span class="math-container">$\rm\color{#0a0}{periodicity}$</span>.</p> <p><strong>Lemma</strong> <span class="math-container">$ $</span> If <span class="math-container">$f$</span> is an invertible map on <span class="math-container">$\Bbb Z_n = \Bbb Z \bmod n$</span> then <span class="math-container">$f^k(a)\equiv a\pmod{\! n}\,$</span> for some <span class="math-container">$\,k\le n$</span>.</p> <p><strong>Proof</strong> <span class="math-container">$\, $</span> Pigeonhole <span class="math-container">$\Rightarrow$</span> among the <span class="math-container">$\,n\!+\!1\,$</span> values <span class="math-container">$\,a,f(a),f^2(a),\cdots, f^n(a)\,$</span> two must be congruent: for some <span class="math-container">$\,0\le i&lt; j\le n\!:$</span> <span class="math-container">$\,f^j(a)\equiv f^i(a)\,$</span> so <span class="math-container">$f^{j-i}(a)\equiv a\,$</span> by: <span class="math-container">$ $</span> (<span class="math-container">$i$</span> times) apply <span class="math-container">$f^{-1}$</span> (to cancel <span class="math-container">$f^i)$</span>.</p> <hr /> <p>This approach does not require <em>solving</em> the recurrence (in closed form). Rather, we only need to prove that the function <span class="math-container">$f\:\!$</span> generating the recurrence is <em>invertible</em>. Then we may apply a <em>fundamental</em> fact - that the cycle graph of an invertible map on a finite set consists of <em>purely periodic cycles</em>, i.e. shape oh (o) vs. rho (<span class="math-container">$\rho$</span>), i.e. no <em>preperiod</em> part as in the initial tail of the rho. 
This basic result is often overlooked, leading to unnecessarily complex proofs - which essentially amount to <strong>reinventing the wheel (cycle)</strong>. See <a href="https://math.stackexchange.com/search?tab=votes&amp;q=user%3a242%20wheel%20cycle">here</a> for further discussion and examples, including some <em>nonlinear</em> recurrences.</p>
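The Lemma's pigeonhole bound is easy to verify by brute force for the map in question. The following is an illustrative Python sketch (my own addition, with arbitrary parameter ranges), checking that iterating $f(x)=ax+b \bmod 5$ from $0$ returns to $0$ within $5$ steps whenever $5\nmid a$:

```python
def first_return(a, b, m=5, start=0):
    """Smallest k in 1..m with f^k(start) == start for f(x) = a*x + b mod m,
    or None if the orbit does not return within m steps."""
    x = start
    for k in range(1, m + 1):
        x = (a * x + b) % m
        if x == start:
            return k
    return None

# f is invertible mod 5 exactly when gcd(a, 5) = 1, i.e. a in {1, 2, 3, 4};
# then the orbit of 0 under f is a pure cycle of length at most 5.
for a in range(1, 5):
    for b in range(5):
        k = first_return(a, b)
        assert k is not None and 1 <= k <= 5
```

For example, with $a=1,\ b=1$ the orbit of $0$ is $0\to1\to2\to3\to4\to0$, a cycle of the maximal length $5$.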
3,393,466
<p>I am in the final year of my undergraduate degree in mathematics at a prestigious institute for mathematics. However, a thing that I have noticed is that I seem to be slower than my classmates in reading mathematics. As in, however much I try, I seem to finish my work at the last moment and I rarely find any time for extra reading. Are there any suggestions or tips that you know of that I could try? Or is it advisable to skip details in favour of saving time?</p>
Piquito
219,998
<p>It seems to me that over time you will know more about mathematics than your fellow students. What is probably happening is that you have a certain cultural sense of mathematics and that you read much more than what you need to do your homework, while the other students focus on what they are asked for and nothing more. It happened to me like that: I ended up knowing more math than my fellow students, but they outdid me on homework. Am I wrong?</p>
3,393,466
<p>I am in the final year of my undergraduate degree in mathematics at a prestigious institute for mathematics. However, a thing that I have noticed is that I seem to be slower than my classmates in reading mathematics. As in, however much I try, I seem to finish my work at the last moment and I rarely find any time for extra reading. Are there any suggestions or tips that you know of that I could try? Or is it advisable to skip details in favour of saving time?</p>
hal4math
699,910
<p>The question and information given are maybe a bit too vague for a satisfying and meaningful answer. But that shall not stop me from trying: </p> <p>I think every one of us knows that there are these phrases in math like "easy to see" or similar ones that can occupy one's attention for hours and clearly lead to taking longer to finish a text than just taking them for granted, for example. So if you take that effort and your fellow students, for example, didn't, then sure, everything is fine. Also, are you trying to understand every proof in every detail while reading a text for the first time? I am pretty sure not all of your peers are doing that. And I have found there are just different "types". For example, I tend to need a rough overview of what I am dealing with before I can deep-dive into more involved proofs and details. I also like to go through a text many times because I have a bad memory and this makes me repeat stuff, but of course this also means my first reading will be quite quick but also quite shallow. (So I first take the bird's perspective and then go into the frog's perspective.) </p> <p>But even if you hypothetically were "objectively" slower than everyone else but still got it in the end, what would be the problem with that? Math is a big part of my life and it is many things for me, but certainly not a competition. And I certainly don't think you need to be good <strong>and</strong> quick to find a meaningful occupation. </p> <p>Also, you mentioned you are studying at a <em>prestigious</em> university (whatever that means), so being "slower" is still a relative term, right? You also mentioned you are in your undergraduate studies. So to be frank here, you have very likely just scratched the surface of modern mathematics. And I think that is good and bad news (at least I am conflicted about it), because so far you have only been exposed to the well-digested content of mathematics, so maybe you will excel at more recent developments. I have written something to this effect here before, but I am fairly certain that studying math is not a sprint but a (lifelong) marathon, and a bachelor's is, from what I am experiencing, only the first few kilometers. </p> <p>That is all to say: hang in there and keep trying. The very reason that you care enough to ask this here indicates you are on a good path anyway.</p>
1,994,922
<p>Given that $B_1, B_2,\ldots$ are independent and bounded variables with $E(B_i) = 0$ for all $i=1,2,\ldots$, define $S_n = B_1+ B_2+\ldots + B_n$ with variance $s_n^2\rightarrow \infty$. Prove that $\frac{S_n}{s_n}$ satisfies a central limit theorem, i.e. converges in distribution to a standard normal.</p> <p><strong>My attempt:</strong> Since the variables are not assumed i.i.d., I try to prove that this sequence satisfies the Lindeberg condition; then, applying the Lindeberg-Feller theorem, we are done. So I need to show that for each $\varepsilon&gt;0$, $$\lim_{n\to +\infty} \frac 1{\sum_{j=1}^n\sigma_j^2 }\sum_{j=1}^n\mathbb E\left[B_j^2\mathbf 1\left\{ \left|B_j\right|^2\gt \varepsilon\sum_{i=1}^n \sigma_i^2\right\}\right]=0 .$$</p> <p>Since $\sigma_i^{2} = E(B_i^2)$ for all $i=1,2,\ldots$ and the $B_i$ are uniformly bounded, say $|B_j|\leq M$ for all $j$, we have $$\lim_{n\to +\infty} \frac 1{\sum_{j=1}^n\sigma_j^2 }\sum_{j=1}^n\mathbb E\left[B_j^2\mathbf 1\left\{ \left|B_j\right|^2\gt \varepsilon\sum_{i=1}^n \sigma_i^2\right\}\right]\leq \lim_{n\to +\infty} \frac 1{\sum_{j=1}^n\sigma_j^2 } M^2\sum_{j=1}^n \mathbb E\left[\mathbf 1\left\{ \left|B_j\right|^2\gt \varepsilon\sum_{i=1}^n \sigma_i^2\right\}\right] .$$ As $n\rightarrow \infty$, $s_n^2\rightarrow \infty$, so each indicator $\mathbf 1\left\{ \left|B_j\right|^2\gt \varepsilon\sum_{i=1}^n \sigma_i^2\right\}\rightarrow 0$. And we would be done if we could show that $\sum_{j=1}^n \mathbb E\left[\mathbf 1\left\{ \left|B_j\right|^2\gt \varepsilon\sum_{i=1}^n \sigma_i^2\right\}\right]\rightarrow 0$ as $n\rightarrow\infty$. But this might not be true (compare the harmonic series: terms tending to $0$ need not have a bounded sum) unless there is something that I was missing.</p> <p><strong>My question:</strong> Could someone please help me overcome this last step? In case I am on the wrong track, please let me know as well.</p>
grand_chat
215,011
<p>You are on the right track. Notice that (given $\varepsilon&gt;0$) there exists $N$ such that $$ I\left( |B_j|^2&gt;\varepsilon s_n^2\right) = 0 $$ for <strong>all</strong> $j$ and all $n\ge N$. (The reason is that $s_n$ is a deterministic sequence tending to infinity, while the $B$'s are bounded, so eventually the inequality is not satisfied.) This means that for $n\ge N$ the sum $$ \sum_{j=1}^n \mathbb E\left[\mathbf 1\left( \left|B_j\right|^2\gt \varepsilon s_n^2\right)\right] $$ vanishes, so in particular it is bounded and the Lindeberg condition follows.</p>
327,750
<p>$$\bigcup_{n=1}^\infty A_n = \bigcup_{n=1}^\infty (A_{1}^c \cap\cdots\cap A_{n-1}^c \cap A_n)$$</p> <p>The result is obvious enough, but how does one prove it?</p>
Andreas Blass
48,510
<p>The $n$'s on the right side should be $i$'s (or the $i$ should be $n$).</p> <p>For any $x$, the statement "there is an $i\in\mathbb N$ such that $x\in A_i$" is equivalent to "there is a smallest $i\in\mathbb N$ such that $x\in A_i$".</p>
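For intuition, the identity can be checked mechanically on finite families of sets. Here is an illustrative Python sketch (my own addition, with an arbitrary sample family) that builds the disjoint pieces $A_1^c\cap\cdots\cap A_{n-1}^c\cap A_n$ and verifies they tile the union:

```python
# Finite Python sets standing in for the family A_1, A_2, ...
A = [{1, 2}, {2, 3}, {3, 4, 5}, {1, 5, 6}]

pieces = []
seen = set()                   # union of A_1, ..., A_{n-1} processed so far
for An in A:
    pieces.append(An - seen)   # A_1^c ∩ ... ∩ A_{n-1}^c ∩ A_n
    seen |= An

# Same union as the original family...
assert set().union(*pieces) == set().union(*A)
# ...and the pieces are pairwise disjoint: each element lands in the piece
# indexed by the smallest n with x in A_n.
for i in range(len(pieces)):
    for j in range(i + 1, len(pieces)):
        assert pieces[i].isdisjoint(pieces[j])
```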
201,381
<p>I have basic training in Fourier and harmonic analysis, and I want to enter and work in an area of number theory that is close to analysis (and that is of some interest to current researchers). </p> <blockquote> <p>Can you suggest some fundamental papers (or books), so that after reading them I will, hopefully, have something to work on (I mean, a chance of discovering something new)?</p> </blockquote>
Community
-1
<p>I think it is remarkable that nobody mentioned André Weil's <em>Basic Number Theory</em> until now. André Weil made both marvellous contributions to harmonic analysis on locally compact Abelian groups and to number theory.</p> <p>In <em>Basic Number Theory</em>, familiarity with number theory is not a prerequisite. However, the reader is expected to be familiar with the basic theory of locally compact Abelian groups and the Haar measure on such groups since these methods are extensively used to prove results in number theory. I think this approach is non-standard, but it beautifully shows the application of harmonic analysis to number theory.</p>
528,456
<p>I have a question regarding L'hospital's rule. </p> <p>Why can I apply L'hospital's rule to $$\lim_{x\to 0}\frac{\sin 2x}{ x}$$ and not to $$\lim_{x\to 0} \frac{\sin x}{x}~~?$$</p>
amWhy
9,003
<p>You <strong>can</strong> apply l'Hôpital's rule in <em>both</em> cases! </p> <p>You can apply l'Hôpital's rule whenever you have an <a href="http://en.wikipedia.org/wiki/Indeterminate_form#List_of_indeterminate_forms">indeterminate form</a>.</p>
528,456
<p>I have a question regarding L'hospital's rule. </p> <p>Why can I apply L'hospital's rule to $$\lim_{x\to 0}\frac{\sin 2x}{ x}$$ and not to $$\lim_{x\to 0} \frac{\sin x}{x}~~?$$</p>
Arthur
15,500
<p>The reason you cannot use L'Hopital on the $\sin(x)/x$ limit has nothing to do with calculus, and more with logic, and the problem is subtle.</p> <p>To use L'Hopital you need to know the derivative of $\sin(x)$. What is that derivative? You'd say $\cos(x)$ on reflex, and then you'd miss the problem. See, to calculate the derivative of $\sin(x)$ in the first place, you need to calculate $$ \lim_{h \to 0}\frac{\sin(x + h) - \sin(x)}{h} $$ We use the formula for $\sin(u + v)$ and simplify the fraction: $$ \frac{\sin(x + h) - \sin(x)}{h} = \frac{\sin(x)\cos(h) + \sin(h)\cos(x) - \sin(x)}{h}\\\\ = \frac{\cos(h) - 1}{h}\sin(x) + \frac{ \sin(h)}{h}\cos(x) $$ Thus to justify the use of L'Hopital, you need to know the limit of the two fractions as $h \to 0$, one of which is the limit we tried to use L'Hopital on in the first place.</p> <p>If someone says that you cannot use it on $\sin(x)/x$, but you <em>can</em> use it on $\sin(2x)/x$, then that is because the former is in some way <em>needed</em> to justify L'Hopital in the first place, while when solving the latter you implicitly assume somehow that the derivative of $\sin(x)$ is already known. It all comes down to trying to read the mind of whoever poses the problem and try to see how heavy machinery they allow you to use.</p>
2,062,398
<p>Could you tell me if my translation to symbolic logic is correct? </p> <p>Thank you so much! Here is the problem:</p> <p>To check that a given integer $n &gt; 1$ is a prime, prove that it is enough to show that $n$ is not divisible by any prime $p$ with $p \le \sqrt{n}$.</p> <p>$$\forall p \in P ~\forall n \in N ~(p \nmid n \land p\le \sqrt{n} \land n&gt;1 \rightarrow n \in P )$$</p>
marwalix
441
<p>Assume it is rational. So there exists $p,q\in \Bbb{Z}^*$ such that</p> <p>$${p\over q}=\sqrt{2}+\sqrt[3]{5}$$</p> <p>Putting $\sqrt{2}$ on the L.H.S and cubing one gets</p> <p>$$\left(p-q\sqrt{2}\right)^3=5q^3$$</p> <p>And this leads to</p> <p>$$p^3+6pq^2-5q^3=\left(3p^2q+2q^3\right)\sqrt{2}$$</p> <p>And this is impossible because it would mean</p> <p>$$\sqrt{2}={p^3+6pq^2-5q^3\over 3p^2q+2q^3}\in \Bbb{Q}$$</p>
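The key expansion $(p-q\sqrt2)^3=(p^3+6pq^2)-(3p^2q+2q^3)\sqrt2$ can be spot-checked numerically. This is an illustrative snippet I am adding (the loop bounds are arbitrary), not part of the original answer:

```python
import math
import random

s = math.sqrt(2.0)
random.seed(0)
for _ in range(1000):
    p = random.randint(-50, 50)
    q = random.randint(1, 50)
    lhs = (p - q * s) ** 3
    # Expanding the cube and using s**2 = 2, s**3 = 2*sqrt(2) gives:
    rhs = (p ** 3 + 6 * p * q ** 2) - (3 * p ** 2 * q + 2 * q ** 3) * s
    # The two sides agree up to floating-point rounding.
    assert math.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-6)
```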
3,224,475
<p>Let <span class="math-container">$\mathbb{Z}_8$</span> be the ring whose elements are the integers modulo 8, with the operations <span class="math-container">$+$</span> and <span class="math-container">$.$</span> being addition and multiplication modulo 8, respectively. I want to find <span class="math-container">$a$</span> for every <span class="math-container">$0\neq b \in \mathbb{Z}_8$</span> such that <span class="math-container">$$a.b+b=0.$$</span> </p> <p>PS: I can write <span class="math-container">$a.b+b=(a+1).b=0$</span>, which gives <span class="math-container">$a+1=0$</span>, which implies <span class="math-container">$a=-1=8-1=7$</span> over <span class="math-container">$\mathbb{Z}_8$</span>. But when I check element-wise, for <span class="math-container">$b=2,4,6$</span> we can also have <span class="math-container">$a=3$</span> as a solution. I didn't expect two solutions when solving the equation <span class="math-container">$a.b+b=(a+1).b=0$</span>. I see this happens with non-unit elements, but why is it so?</p>
Ethan Bolker
72,858
<p>Here's an answer to the question</p> <blockquote> <p>How come the time complexity of Binary Search is <span class="math-container">$\log n$</span>?</p> </blockquote> <p>that describes informally what's going on in the binary tree in the question and in the video (which I have not watched).</p> <p>You want to know how long binary search will take on input of size <span class="math-container">$n$</span>, as a function of <span class="math-container">$n$</span>.</p> <p>At each stage of the search (pass through the body of the <code>while</code> loop) you split the input in half, so you successively reduce the size of the problem (<code>h-l</code>) this way: <span class="math-container">$$ n, n/2, n/4, n/8 \ldots . $$</span> (Strictly speaking, you round those to integers.)</p> <p>Clearly you will be done when the input is <span class="math-container">$1$</span>, for there's just one place. That index is the answer.</p> <p>So you want the number of steps <span class="math-container">$k$</span> such that <span class="math-container">$n/2^k \le 1$</span>. That's the smallest <span class="math-container">$k$</span> for which <span class="math-container">$2^k \ge n$</span>. The definition of the logarithm says that <span class="math-container">$k$</span> is about <span class="math-container">$\log_2(n)$</span>, so binary search has that complexity.</p>
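To see the halving in action, here is a small self-contained Python sketch (an illustration I am adding, not part of the original answer; the variable names follow the `l`/`h` convention mentioned above). It counts loop iterations and checks they never exceed the logarithmic bound:

```python
import math

def binary_search(arr, target):
    """Return (index of target or -1, number of loop iterations) for sorted arr."""
    l, h = 0, len(arr) - 1
    steps = 0
    while l <= h:
        steps += 1
        mid = (l + h) // 2
        if arr[mid] == target:
            return mid, steps
        elif arr[mid] < target:
            l = mid + 1
        else:
            h = mid - 1
    return -1, steps

n = 1024
arr = list(range(n))
worst = max(binary_search(arr, t)[1] for t in range(n))
# The interval size (h - l + 1) is halved each pass, so the iteration
# count never exceeds floor(log2(n)) + 1.
assert worst <= math.floor(math.log2(n)) + 1
```

Searching all $1024$ targets exhaustively, the worst case stays within the $\lfloor\log_2 n\rfloor + 1 = 11$ bound.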
3,224,475
<p>Let <span class="math-container">$\mathbb{Z}_8$</span> be the ring containing elements integer modulo 8 with operation <span class="math-container">$+$</span> and <span class="math-container">$.$</span> being addition and multiplication modulo 8 resp. I want to find <span class="math-container">$a$</span> for every <span class="math-container">$0\neq b \in \mathbb{Z}_8$</span> such that <span class="math-container">$$a.b+b=0.$$</span> </p> <p>PS- I can write <span class="math-container">$a.b+b=(a+1).b=0$</span>, which gives <span class="math-container">$a+1=0$</span> implies <span class="math-container">$a=-1=8-1=7$</span> over <span class="math-container">$\mathbb{Z}_8$</span>. But when I check element wise, for <span class="math-container">$b=2,4,6$</span> we can have also <span class="math-container">$a=3$</span> as a solution. I didn’t expect 2 solutions when I am solving the equation <span class="math-container">$a.b+b=(a+1).b=0$</span>. I see this is happening with non unit elements but Why is it so?</p>
Ariel Serranoni
253,958
<p>First, it is important to note that the running time of an algorithm is usually represented as function of the input size. Then, we 'measure' the complexity by fitting this function into a class of functions. For instance, if <span class="math-container">$T(n)$</span> is the function describing your algorithm's running time and <span class="math-container">$g\colon\mathbb{N}\to\mathbb{R}$</span> is another function then <span class="math-container">$$T\in O(g) \iff \text { there exist } c,n_0\in\mathbb{R}_{++} \text{ such that } T(n)\leq c g(n) \text{ for each } n\geq n_0. $$</span></p> <p>Similarly, we say that</p> <p><span class="math-container">$$T\in \Omega(g) \iff \text { there exist } c,n_0\in\mathbb{R}_{++} \text{ such that } T(n)\geq c g(n) \text{ for each } n\geq n_0. $$</span></p> <p>If <span class="math-container">$T$</span> belongs to both <span class="math-container">$O(g)$</span> and <span class="math-container">$\Omega(g)$</span> then we say that <span class="math-container">$T\in\Theta(g)$</span>. Let's conclude that for the binary search algorithm we have a running time of <span class="math-container">$\Theta(\log(n))$</span>. Note that we always solve a subproblem in constant time and then we are given a subproblem of size <span class="math-container">$\frac{n}{2}$</span>. Thus, the running time of binary search is described by the recursive function <span class="math-container">$$T(n)=T\Big(\frac{n}{2}\Big)+\alpha.$$</span> Solving the equation above gives us that <span class="math-container">$T(n)=\alpha\log_2(n)$</span>. Choosing constants <span class="math-container">$c=\alpha$</span> and <span class="math-container">$n_0=1$</span>, you can easily conclude that the running time of binary search is <span class="math-container">$\Theta(\log(n))$</span>.</p>
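The solution of the recurrence can also be checked empirically: repeatedly halving $n$ with integer division takes exactly $\lfloor\log_2 n\rfloor$ steps to reach $1$. This is an illustrative snippet I am adding, not part of the original answer:

```python
def halvings(n):
    """Number of integer halvings needed to bring n down to 1."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

# For every positive n, the halving count equals floor(log2(n)),
# which is n.bit_length() - 1 in exact integer arithmetic.
for n in range(1, 10_000):
    assert halvings(n) == n.bit_length() - 1
```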
2,875,907
<p>There is a set of rods of lengths <span class="math-container">$1,2,3,4 \dots N$</span>. Two players take turns to choose 3 rods that compose a triangle with non-zero area. After that, these particular 3 rods are removed. If it is not possible to compose a triangle, then the player to move loses.</p> <p>Who has a winning strategy?</p> <hr> <p>[Edit] Some easy observations:</p> <ul> <li>We get a triangle of non-zero area if and only if the lengths of the chosen rods, say <span class="math-container">$a&lt;b&lt;c$</span>, satisfy the strict triangle inequality <span class="math-container">$a+b&gt;c$</span>. It may be easier to use this in the form <span class="math-container">$a&gt;c-b$</span>, which can be interpreted as stating that the shortest chosen rod must be longer than the length gap between the two longer ones.</li> <li>The rod of length one can never be used because <span class="math-container">$a=1$</span> makes it impossible to satisfy the inequalities in the previous bullet. We can simply pretend that the rod of length one is not part of the game.</li> <li>When <span class="math-container">$N=7$</span> removing the triple <span class="math-container">$\{3,5,7\}$</span> leaves the other player with rods of lengths <span class="math-container">$\{2,4,6\}$</span> and no legal moves. This position is a win for the first player.</li> <li>When <span class="math-container">$N=8$</span> removing the triple <span class="math-container">$\{4,6,7\}$</span> similarly leaves the second player with an impossible task. The collection of lengths <span class="math-container">$\{2,3,5,8\}$</span> has (just barely) too long gaps for the second player to use either <span class="math-container">$2$</span> or <span class="math-container">$3$</span> in the role of <span class="math-container">$a$</span>. This is also a win for the first player.</li> <li>On the other hand when <span class="math-container">$N=9$</span> the game plays out differently. 
After removing the triple of rods used by the first player, five rods of lengths <span class="math-container">$2\le x_1&lt;x_2&lt;x_3&lt;x_4&lt;x_5\le9$</span> remain. Here <span class="math-container">$x_2\ge3$</span>. Because <span class="math-container">$x_3\ge4$</span> and <span class="math-container">$x_5\le9$</span> we have <span class="math-container">$x_3+2x_2&gt;x_5$</span>. This means that either <span class="math-container">$x_4-x_3$</span> or <span class="math-container">$x_5-x_4$</span> must be less than <span class="math-container">$x_2$</span>. Therefore the second player can pick either the rods of lengths <span class="math-container">$\{x_2,x_3,x_4\}$</span> or the rods of lengths <span class="math-container">$\{x_2,x_4,x_5\}$</span>. After having removed those rods, only two remain, so the second player wins in this case.</li> </ul> <p>But what happens in the general case? [/Edit, JL]</p>
Kaban-5
580,885
<p>This is a partial answer that explains that the first player wins if $n \!\!\! \mod \!\! 6 \in \{0, 4, 5 \}$.</p> <p>The first thing to note is that $1$ can never participate in any triangle, so we can pretend that it does not exist and the game is played on rods of lengths $2, 3, \ldots, n$. The main idea of the first player's strategy will be to always eliminate the three shortest rods. It turns out that the second player often can't interfere with such a strategy.</p> <p>1) $n \!\!\! \mod \!\! 6 = 0$. Then $(n - 1) \!\!\! \mod \!\! 6 = 5$ (there are only $n - 1$ rods that can participate in the game). The strategy of the first player will be to always eliminate the three shortest rods. If this is always possible, the first player will win. We may note that there will always be at least $5$ rods before any turn of the first player. Now, consider the three smallest rods with lengths $x_1 &lt; x_2 &lt; x_3$. If they don't form a triangle, then $x_3 \geqslant x_1 + x_2$. Because the first player only eliminated rods with lengths less than $x_1$ before this moment, they eliminated at most $x_1 - 2$ rods (from $2$ to $x_1 - 1$). However, the second player has eliminated at least $x_3 - x_2 - 1 \geqslant x_1 - 1$ rods (from $x_2 + 1$ to $x_3 - 1$). This is impossible: before any turn of the first player both players have removed the same number of rods, so the second player can never have eliminated more rods than the first.</p> <p>2) $n \!\!\! \mod \!\! 6 = 5$, $(n - 1) \!\!\! \mod \!\! 6 = 4$. We will use the same strategy: there will always be at least $4$ rods on any move of the first player.</p> <p>3) $n \!\!\! \mod \!\! 6 = 4$, $(n - 1) \!\!\! \mod \!\! 6 = 3$. We will use the same strategy: there will always be at least $3$ rods on any move of the first player.</p> <p>4) [I still did not solve this case.] $n \!\!\! \mod \!\! 6 = 1$, $(n - 1) \!\!\! \mod \!\! 6 = 0$. This case is interesting and probably can shed light on the cases $n \!\!\! \mod \!\! 6 = 2$ and $n \!\!\! \mod \!\! 6 = 3$: if the first player always eliminates the three shortest rods, they will lose. 
I tried to use slight modifications of this strategy, but failed.</p> <p><strong>UPD</strong></p> <p>I found a proof along the same lines that the second player wins if $n \!\!\! \mod \!\! 6 = 3$ <br>(so $(n - 1) \!\!\! \mod \!\! 6 = 2$).</p> <p>Indeed, the second player will use the following strategy (except, maybe, for one move somewhere along the way; after that move the second player will continue using the same strategy): on each move, eliminate three shortest rods with length at least $4$. Obviously, the second player will win if they can keep it up: there is always at least $5$ rods before any move of the second player, so at least $3$ of them have the length at least $4$. </p> <p>What could go wrong? Suppose that the three shortest rods with lengths at least $4$ are $x_1 &lt; x_2 &lt; x_3$ and $x_3 \geqslant x_1 + x_2$. First thing to note is that the second player eliminated only rods with lengths from $4$ to $x_1 - 1$ so far, so not more than $x_1 - 4$. Therefore, the first player definitely eliminated all rods from $x_1 + 1$ to $x_2 - 1$ and from $x_2 + 1$ to $x_3 - 1 \geqslant x_1 + x_2 - 1$, so at least $(x_2 - x_1 - 1) + (x_3 - x_2 - 1) \geqslant x_3 - x_1 - 2 \geqslant x_2 - 2 \geqslant (x_1 - 1) = (x_1 - 4) + 3$. But there is no way the first player could eliminate more than $(x_1 - 4) + 3$ rods in the meantime (because the second player eliminated only $x_1 - 4$). Therefore, $x_2 = x_1 + 1$ and $x_3 = x_1 + x_2$ and the second player eliminated all rods with lengths from $4$ to $x_1 - 1$ (and nothing else) and the first player eliminated all rods with lengths from $x_1 + 1$ to $x_3 - 1$ (and nothing else). </p> <p>In this case, rods with lengths $2$ and $3$ are still here, so the second player will eliminate rods with lengths $2, x_1$ and $x_2 = x_1 + 1$. After that, only the following rods remain: $3$ and all rods from $x_3 = x_1 + x_2$ to $n$. 
Similar analysis shows that nothing can stop the second player from always eliminating three shortest rods with length at least $4$ from now on.</p>
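<p>(Appended sketch, not part of the original argument: the small cases can be verified by a brute-force game solver. It agrees with the hand analysis — first player wins for $N=7,8$, second player wins for $N=9$ — and with the case $n \bmod 6 = 4$ above, e.g. $N=10$.)</p>

```python
from functools import lru_cache
from itertools import combinations

def first_player_wins(n):
    """True if the player to move wins the rod game with rods 1..n.

    The rod of length 1 can never be in a triangle, so it is simply ignored.
    """
    @lru_cache(maxsize=None)
    def wins(rods):  # rods: sorted tuple of remaining lengths
        for a, b, c in combinations(rods, 3):
            if a + b > c:  # strict triangle inequality (a < b < c)
                rest = tuple(x for x in rods if x not in (a, b, c))
                if not wins(rest):
                    return True
        return False  # no legal move, or every move leaves a win for the opponent

    return wins(tuple(range(2, n + 1)))
```

<p>With memoisation this handles small $N$ instantly; the state space grows quickly, so it is only a check, not a proof of the general pattern.</p>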
1,968,978
<p>Let $f=(f_0,f_1,f_2...)$ and $g=(g_0,g_1,g_2,...)$ be sequences in $F^{\infty}$. We define multiplication $fg$ by expressing the $n$-th component $(fg)_n=\sum_{i=0}^ng_if_{n-i}$. If $h=(h_0,h_1,h_2,...)$ is also in $F^{\infty}$, we want to show multiplication is associative. Hoffman and Kunze give the following calculation:</p> <p>\begin{align} [(fg)h]_n&amp;=\sum_{i=0}^n(fg)_ih_{n-i}\\ &amp;=\sum_{i=0}^n(\sum_{j=0}^if_jg_{i-j})h_{n-i}\\ &amp;=\sum_{i=0}^n\sum_{j=0}^if_ig_{i-j}h_{n-i}\\ &amp;=\sum_{j=0}^nf_j\sum_{i=0}^{n-j}g_ih_{n-i-j}\\ &amp;=\sum_{j=0}^nf_j(gh)_{n-j}=[f(gh)]_n. \end{align} My question is regarding the second to last equality. I'm getting $\sum_{j=0}^n\sum_{i=0}^{n-j}f_{i+j}g_ih_{n-i-j}$. Is my calculation wrong, or is the one in the book wrong? If the book is wrong, it doesn't look like we have associativity, so I'm a bit confused.</p>
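<p>(Not part of the original question — a quick numeric check, using finite sequences padded with zeros, that the product defined above is associative; this supports the book's chain of equalities being sound.)</p>

```python
import random

def seq_mult(f, g):
    """(fg)_n = sum_{i=0}^n g_i f_{n-i} for finite sequences padded with zeros."""
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

random.seed(1)
f = [random.randint(-9, 9) for _ in range(6)]
g = [random.randint(-9, 9) for _ in range(5)]
h = [random.randint(-9, 9) for _ in range(7)]
left = seq_mult(seq_mult(f, g), h)    # (fg)h
right = seq_mult(f, seq_mult(g, h))   # f(gh)
```
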
E.H.E
187,799
<p>$$y^2+19y=216$$ $$y(y+19)=(8)(27)$$ so $$y=8$$ (and factoring $y^2+19y-216=(y-8)(y+27)$ shows the other root is $y=-27$).</p>
2,431,548
<p>Okay, so, my teacher gave us this worksheet of "harder/unusual probability questions", and Q.5 is real tough. I'm studying at GCSE level, so it'd be appreciated if all you stellar mathematicians explained it in a way that a 15 year old would understand. Thanks!</p> <p>So, John has an empty box. He puts some red counters and some blue counters into the box. </p> <p>The ratio of the number of red counters to blue counters is 1:4</p> <p>Linda takes out, at random, 2 counters from the box.</p> <p>The probability that she takes out 2 red counters is 6/155</p> <p>How many red counters did John put into the box?</p>
drhab
75,923
<p>I assume that the first counter taken by Linda is not put back in the box.</p> <p>Suppose that there are $n$ red counters in the box. </p> <p>Then there are $4n$ blue counters in the box, and $5n$ counters in total.</p> <p>Then the probability that the first counter taken by Linda is red is $\frac{n}{5n}=\frac15$.</p> <p><em>If this happens</em> (smells like conditional probability) then after the first draw there are $n-1$ red counters in the box and $5n-1$ counters in total.</p> <p>So on her second draw the probability of a red counter is then $\frac{n-1}{5n-1}$.</p> <p>That together gives a probability of $\frac15\frac{n-1}{5n-1}$ of drawing $2$ red counters and it remains now to solve the equation:$$\frac15\frac{n-1}{5n-1}=\frac6{155}$$</p> <p>Can you do that yourself?</p> <hr> <p>If for $i=1,2$ the event that the $i$-th counter taken is red is denoted by $R_i$ then we have found the equation:$$\frac6{155}=P(R_1\cap R_2)=P(R_1)P(R_2\mid R_1)=\frac15\frac{n-1}{5n-1}$$which gives you the opportunity to find $n$.</p>
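<p>(Appended sketch: the equation can also be confirmed by a search with exact rational arithmetic — the probability is strictly increasing in $n$, so the solution is unique.)</p>

```python
from fractions import Fraction

def two_red_probability(n):
    """P(both draws red) with n red and 4n blue counters, drawn without replacement."""
    return Fraction(n, 5 * n) * Fraction(n - 1, 5 * n - 1)

target = Fraction(6, 155)
solutions = [n for n in range(2, 200) if two_red_probability(n) == target]
```
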
2,431,548
<p>Okay, so, my teacher gave us this worksheet of "harder/unusual probability questions", and Q.5 is real tough. I'm studying at GCSE level, so it'd be appreciated if all you stellar mathematicians explained it in a way that a 15 year old would understand. Thanks!</p> <p>So, John has an empty box. He puts some red counters and some blue counters into the box. </p> <p>The ratio of the number of red counters to blue counters is 1:4</p> <p>Linda takes out, at random, 2 counters from the box.</p> <p>The probability that she takes out 2 red counters is 6/155</p> <p>How many red counters did John put into the box?</p>
joshuaheckroodt
464,094
<p>The rule of thumb in probability is that the word <strong>and</strong> implies multiplication, and <strong>or</strong> implies addition. Seeing as Linda is picking one red counter <strong>and</strong> one red counter, you know that it's going to be the two probabilities of a red counter being picked multiplied by each other.</p> <p>From here, let's call the number of red counters $r$, the number of blue counters $b$ and the total number of counters $r+b$. Given this, initially there was a $\displaystyle \frac{r}{r+b}$ chance of picking a red counter, and the next time there was a $\displaystyle \frac{r-1}{r+b-1}$ chance of picking a red counter (I'm assuming Linda has not replaced the red counter she took out initially). Given this, you can infer:</p> <p>$$\frac{r}{r+b} \cdot\frac{r-1}{r+b-1}=\frac{6}{155}$$ from here, before simplifying anything, you know (for the ratio) that $\displaystyle \frac{r}{b}=\frac{1}{4}$, which is useful because it implies that $\displaystyle b=4r$, and hence the question becomes: $$\frac{r}{r+4r}\cdot\frac{r-1}{r+4r-1}=\frac{6}{155}$$ $$\frac{1}{5}\cdot\frac{r-1}{5r-1}=\frac{6}{155}$$ $$\frac{r-1}{25r-5}=\frac{6}{155}$$ $$155r-155=150r-30$$ $$5r=125$$ $$r=25$$</p> <p>Hence there are $25$ red counters in the box. I completed my GCSEs last year, and as far as I'm concerned this should all make sense to you.</p> <p>P.S.: which exam board are you sitting? Edexcel by any chance?</p>
328,670
<p>Suppose $I$ is an ideal of a ring $R$ and $J$ is an ideal of $I$. Is there any counterexample showing that $J$ need not be an ideal of $R$? The hint given in the book is to consider a polynomial ring with coefficients from a field. Thanks.</p>
Thomas Andrews
7,933
<p>Consider $R=\mathbb Q[x]$, and $I=xR$ be the most obvious ideal of $R$.</p> <p>Note that we can define $J$ as a subset of $I$ to be an ideal of $I$ if $J$ is a subgroup of $(I,+)$ and $IJ\subseteq J$. Find a $J$ that is a super-set of $x^2R$ but does not contain all of $I=xR$.</p>
1,437,287
<p>On <a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow">Wikipedia</a> it is stated that, by differentiating the geometric series, the following formula holds:</p> <p>$$ \sum_n n q^{n-1} = {1\over (1-q)^2}$$</p> <p>Does this not require a proof? It seems to me that because the series is infinite it is not clear that differentiation commutes with taking the limit. </p> <blockquote> <p>How to prove this?</p> </blockquote>
AnthonyCaterini
152,553
<p>I think the key point is not absolute convergence by itself, but that a power series converges uniformly on compact subsets of its disc of convergence; the differentiated series is again a power series with the same radius of convergence, and this locally uniform convergence is what justifies exchanging differentiation with the limit.</p>
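<p>(A numerical sanity check, not a proof: partial sums of the term-by-term derivative $\sum_{n\ge1} n q^{n-1}$ should approach $1/(1-q)^2$.)</p>

```python
def diff_series_partial_sum(q, terms):
    """Partial sum of sum_{n>=1} n*q^(n-1), the term-by-term derivative."""
    return sum(n * q ** (n - 1) for n in range(1, terms + 1))

q = 0.5
approx = diff_series_partial_sum(q, 200)
exact = 1.0 / (1.0 - q) ** 2   # equals 4.0 for q = 0.5
```
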
3,344,728
<p>Let <span class="math-container">$S$</span> be the set of all real numbers except <span class="math-container">$-1$</span>. Define <span class="math-container">$*$</span> on <span class="math-container">$S$</span> by <span class="math-container">$$a*b=a+b+ab.$$</span></p> <p>Goal: Show that <span class="math-container">$*$</span> gives a binary operation on S.</p> <p>In order to prove that <span class="math-container">$*$</span> is a binary operation, I need to prove that <span class="math-container">$S$</span> is closed under <span class="math-container">$*$</span>, so I tried to prove that <span class="math-container">$a+b+ab$</span> never equals <span class="math-container">$-1$</span>. I cannot figure out, however, how to do this. </p>
Luiz Cordeiro
58,818
<p>We do it by contradiction: Let <span class="math-container">$a,b\in S$</span>, i.e., <span class="math-container">$a,b\neq -1$</span>, but suppose that <span class="math-container">$a*b=-1$</span>. This means that <span class="math-container">\begin{align*} a+b+ab&amp;=-1\\ b+ab&amp;=-1-a\\ b(1+a)&amp;=-(1+a)\tag{$\star$} \end{align*}</span> Since <span class="math-container">$a\neq -1$</span>, then <span class="math-container">$(1+a)\neq 0$</span>, so <span class="math-container">$1+a$</span> is invertible. We can thus cancel the term <span class="math-container">$(1+a)$</span> from both sides of equation <span class="math-container">$(\star)$</span> above and obtain <span class="math-container">$$b=-1$$</span> which contradicts the hypothesis that <span class="math-container">$b\neq-1$</span>.</p> <p>Therefore, for <span class="math-container">$a,b\neq -1$</span> we also have <span class="math-container">$a*b\neq -1$</span>, i.e., <span class="math-container">$a*b\in S$</span>.</p>
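<p>(Appended sketch: the same closure fact follows from the identity <span class="math-container">$1+a*b=(1+a)(1+b)$</span>, since a product of nonzero factors is nonzero. A quick check with exact rationals:)</p>

```python
from fractions import Fraction
import random

def star(a, b):
    """The operation a*b = a + b + ab."""
    return a + b + a * b

random.seed(0)
samples = []
for _ in range(200):
    a = Fraction(random.randint(-50, 50), random.randint(1, 50))
    b = Fraction(random.randint(-50, 50), random.randint(1, 50))
    if a != -1 and b != -1:
        samples.append((a, b))

identity_holds = all(1 + star(a, b) == (1 + a) * (1 + b) for a, b in samples)
closed = all(star(a, b) != -1 for a, b in samples)  # closure in S
```
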
4,467,841
<p>For a complex number <span class="math-container">$z=a+bi$</span> and a positive real value <span class="math-container">$R$</span>, we have <span class="math-container">$e^{Rbi}=\cos(Rb)+i\sin(Rb)$</span>. I am struggling to understand this since no matter how large <span class="math-container">$b$</span> or <span class="math-container">$R$</span> is, we have <span class="math-container">$|e^{Rbi}| = 1$</span>. What is the best way to understand this intuitively? For instance, in an applied sense, is it true that <span class="math-container">$$\bigg|\sum_{z: \ a, b \geq 0}e^{Rbi}f(z)\bigg|\leq \bigg|\sum_{z: \ a, b \geq 0}f(z)\bigg|,$$</span> where <span class="math-container">$f$</span> is some generic function and <span class="math-container">$\sum_{z: \ a, b \geq 0}f(z)&gt;0$</span>? This makes sense to me because each <span class="math-container">$|e^{Rbi}|$</span> is no larger than <span class="math-container">$1$</span>.</p>
Integral fan
977,478
<p>Based on what you have given, <span class="math-container">$Rb$</span> is a real number. I'll call it <span class="math-container">$\theta$</span>. I may be interpreting the question incorrectly, but then all that is happening is the fact that <span class="math-container">$e^{i\theta}$</span> represents a number on the complex unit circle by Euler's formula.</p> <p>The fact that <span class="math-container">$Rb = \theta$</span> can be arbitrary large is just the fact that any point on the circle can be specified by an arbitrary large angle <span class="math-container">$\theta$</span>.</p> <p>For example, <span class="math-container">$z=1$</span> can be reached by choosing an angle from the real axis <span class="math-container">$\theta = 0, 2\pi, 4\pi, 6 \pi, ..., 1000 \pi, etc.$</span>. One could also choose the negatives of this set.</p> <p>It's a matter of geometry and the periodicity of trigonometric functions that <span class="math-container">$\cos(\theta) + i \sin(\theta)$</span> will be confined to the unit circle.</p>
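<p>(A quick numerical illustration of the boundedness and the <span class="math-container">$2\pi$</span>-periodicity, using Python's <code>cmath</code>:)</p>

```python
import cmath
import math

theta = 12345.678           # an arbitrarily large angle R*b
w = cmath.exp(1j * theta)   # always lies on the unit circle
modulus = abs(w)
period_shift = cmath.exp(1j * (theta + 2 * math.pi))  # the same point again
```
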
88,788
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/51292/relation-of-this-antisymmetric-matrix-r-beginpmatrix-01-10-endpmatr">Relation of this antisymmetric matrix $r = \begin{pmatrix} 0&amp;1\\ -1&amp;0 \end{pmatrix}$ to $i$</a> </p> </blockquote> <p>Let $H$ be the subset of $M_2(\mathbb R)$ consisting of all matrices of the form $\begin{pmatrix}a &amp; -b \\ b &amp; a\end{pmatrix}$ for $a, b \in \mathbb R$. </p> <ul> <li>Show that $(\mathbb C,+)$ is isomorphic to $(H,+)$.</li> <li>Show that $(\mathbb C, \times)$ is isomorphic to $(H, \times)$.</li> </ul> <p>$H$ is said to be a matrix representation of the complex numbers.</p> <p>I beg some help please. I fail even to define one to one functions mapping $\mathbb C$ onto $H$. All the best.</p>
Deven Ware
14,334
<p>Our matrices are of the form $$\left(\begin{smallmatrix} a &amp; -b \\ b &amp; a\end{smallmatrix}\right)$$ While our complex numbers are of the form $a + bi$ </p> <p>Both of these depend on $a, b$ </p> <p>Can you see a way to map from $\mathbb{C} \rightarrow H$ ? </p> <p>Hint: For the harder one, multiplication $((a+bi)(c+di)) = (ac +adi + cbi -bd)$ so to have an isomorphism we will want $f(ac - bd + (ad + cb)i) = f((a+bi))f((c+di))$ </p> <p>we'll try the only choice that really makes sense $a + bi \mapsto \left(\begin{smallmatrix} a&amp; -b \\ b &amp; a\end{smallmatrix}\right)$</p> <p>Then $$f(ac - bd + (ad + cb)i) = \left(\begin{smallmatrix} (ac - bd) &amp; (-ad -cb) \\ (ad + cb) &amp; (ac - bd)\end{smallmatrix}\right)= \left(\begin{smallmatrix} a &amp; - b \\ b &amp; a\end{smallmatrix}\right)\left(\begin{smallmatrix} c &amp; -d \\ d &amp; c\end{smallmatrix}\right) = f(a + bi)f(c + di)$$ </p> <p>Now of course, you have to show that this is one-to-one and onto (although that shouldn't be that hard) and I believe the addition should be similar. </p>
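<p>(Appended sketch: the hinted map can be checked numerically with plain 2×2 matrices — both the additive and the multiplicative homomorphism property.)</p>

```python
def to_matrix(z):
    """Map a+bi to the 2x2 real matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def matmul(m, n):
    """Plain 2x2 matrix multiplication."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = complex(2, 3), complex(-1, 4)
mult_lhs = to_matrix(z * w)                        # f(zw)
mult_rhs = matmul(to_matrix(z), to_matrix(w))      # f(z) f(w)
add_lhs = to_matrix(z + w)                         # f(z + w)
add_rhs = [[to_matrix(z)[i][j] + to_matrix(w)[i][j] for j in range(2)]
           for i in range(2)]                      # f(z) + f(w)
```
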
1,255,970
<p>What is $$\int_{K} e^{a \cdot x+ b \cdot y} \,d\mu(x,y)$$ where $K$ is the Koch curve and $\mu(x,y)$ is a uniform measure <a href="http://wwwf.imperial.ac.uk/~jswlamb/M345PA46/%5BB%5D%20chap%20IX.pdf" rel="nofollow noreferrer">look here</a>.</p> <p><strong>Attempt:</strong> I can evaluate the integral numerically and I have derived a method to integrate $e^x$ over some Cantor sets, <a href="https://math.stackexchange.com/q/1248373/219489">look here</a>. When I tried using that method to integrate over the Koch curve, I ended up unable to express the integral directly in terms of itself. <a href="https://math.stackexchange.com/questions/1256554/integral-of-a-function-over-the-koch-curve-is-it-rigourous-enough">Here's</a> a proof that integration can be done over the Koch curve...</p> <p><strong>Information:</strong> I'd like a symbolic answer if it's available, but infinite series/products for this integral are great too. If there's a reference that actually handles <strong>this</strong> specific function over fractals and derives a symbolic result, that's good too. Also feel free to change $K$ to any other (non-trivial of course ;) ) variant of the Koch curve if that makes it easier to compute. I warn only that because the goal is to integrate over <em>any</em> fractal rather than just one or two special examples, you shouldn't pick needlessly trivial examples...</p> <p><strong>Motivation:</strong> The derivation of this result allows for integration over a fractal; however, the actual reason this is useful is the usefulness of the exponential function. For instance, the concept of average temperature over a fractal is a very interesting one. $e^x$ type functions allow for rudimentary temperature fields to be constructed and theoretically integrated over fractals. $e^x$ type functions are useful for many kinds of problems, but they seem to be difficult to integrate over fractals. 
In addition, developing a theory for integrals over fractals requires a large library of results, and $e^x$ should definitely be included in that list of integrable functions.</p>
GEdgar
442
<p>Not an answer yet, just some thoughts. </p> <p>Say our Koch curve $K$ starts at $(0,0)$, ends at $(1,0)$ and the midpoint is at $(1/2, 1/(2\sqrt{3}\;))$. Mark seems to have used this, since his computation with $a=b=1$ agrees with mine.</p> <p>Self-similarity is described by two maps of the plane to itself: $$ L(x,y) = \left(\frac{x}{2}+\frac{y}{2\sqrt{3}},\frac{x}{2\sqrt{3}}-\frac{y}{2}\right), \\ R(x,y) = \left(\frac{x}{2}-\frac{y}{2\sqrt{3}}+\frac{1}{2},-\frac{x}{2\sqrt{3}}-\frac{y}{2}+\frac{1}{2\sqrt{3}}\right), $$ So $L(K)$ is the left half and $R(K)$ is the right half. Set $K$ is the unique nonempty compact set with $K = L(K) \cup R(K)$. Map $L$ shrinks by factor $1/\sqrt{3}$, reflects in the $x$-axis, rotates by $\pi/6$, and fixes the point $(0,0)$. Map $R$ shrinks by factor $1/\sqrt{3}$, reflects in the $x$-axis, rotates by $-\pi/6$ and fixes the point $(1,0)$.</p> <p>The measure $\mu$ on $K$ is made up of two parts, which are images of $\mu$ under $L, R$, respectively, with half the measure. That is, for integrable $f$ we have $$ \int_K f\,d\mu = \int_{L(K)} f\,d\mu+\int_{R(K)} f\,d\mu = \frac{1}{2}\int_K f\circ L\,d\mu + \frac{1}{2}\int_K f\circ R\,d\mu $$</p> <p>Now if we write $$ q(a,b) := \int_K e^{ax+by}d\mu(x,y) $$ the self-similarity shows $$ q(a,b) = \frac{1}{2}q\left(\frac{a}{2}+\frac{b}{2\sqrt{3}}, \frac{a}{2\sqrt{3}}-\frac{b}{2}\right)+\frac{1}{2}\exp\left(\frac{a}{2}+\frac{b}{2\sqrt{3}}\right)q\left(\frac{a}{2}-\frac{b}{2\sqrt{3}},-\frac{a}{2\sqrt{3}}-\frac{b}{2}\right) $$</p> <p>We could use this recursively to evaluate $q(a,b)$ numerically. At each iteration, the point $(a,b)$ where $q$ should be evaluated moves closer to the origin by factor $1/\sqrt{3}$. We stop when we are "close enough" to $(0,0)$, since we know $q(0,0)=1$. But, of course, at each iteration the number of exponentials we have to evaluate doubles, so it is a slow method.</p>
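<p>(A sketch of that recursion in code — my own addition, with one refinement: instead of stopping with the crude value $q\approx 1$, the leaf uses the first-order value $e^{a\bar x+b\bar y}$, where $(\bar x,\bar y)=(1/2,\,1/(6\sqrt3))$ is the mean of $\mu$, obtained by solving $m=\frac12L(m)+\frac12R(m)$. This keeps the recursion depth small. As a consistency check: since the curve is symmetric about $x=1/2$, we should have $q(a,b)=e^a\,q(-a,b)$.)</p>

```python
import math

S3 = math.sqrt(3.0)
MX, MY = 0.5, 1.0 / (6.0 * S3)   # mean of mu, from m = (L(m) + R(m)) / 2

def q(a, b, eps=1e-3):
    """Approximate q(a,b) = integral of exp(a*x + b*y) d-mu over the Koch curve."""
    if math.hypot(a, b) < eps:
        return math.exp(a * MX + b * MY)   # first-order leaf value
    # the argument (a,b) shrinks by exactly 1/sqrt(3) at each level
    left = q(a / 2 + b / (2 * S3), a / (2 * S3) - b / 2, eps)
    right = q(a / 2 - b / (2 * S3), -a / (2 * S3) - b / 2, eps)
    return 0.5 * left + 0.5 * math.exp(a / 2 + b / (2 * S3)) * right

q11 = q(1.0, 1.0)
sym = math.e * q(-1.0, 1.0)   # should match q11 by the x -> 1-x symmetry
```
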
1,157,007
<p>I know that $f$ and $g$ have a pole of order $k$ at $z=0$. $f-g$ is holomorphic at $\infty$.</p> <p>I need to prove that:</p> <p>$$\oint_{|z|=R} (f-g)' dz = 0$$</p> <p>Any help?</p> <p>Note: $f$ and $g$ only have a singularity at $z=0$</p>
Jason
195,308
<p>As Git Gud alluded to in their comment, simply parameterise the curve and calculate the integral directly. The usual parameterisation is $\gamma:[0,2\pi]\rightarrow\mathbb C:\ t\mapsto Re^{it}$. Letting $h=f-g$, we have $$\oint_Ch'(z)\ \mathrm dz=\int_0^{2\pi}h'(\gamma(t))\gamma'(t)\ \mathrm dt=\int_0^{2\pi}(h\circ\gamma)'(t) \mathrm dt\\=(h\circ\gamma)(2\pi)-(h\circ\gamma)(0)=h(R)-h(R)=0.$$</p>
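<p>(A numerical illustration of the same fact — the derivative of a function holomorphic on an annulus has zero residue, so its contour integral vanishes even when the function itself has a nonzero one. The sample $f,g$ below are my own choice.)</p>

```python
import cmath
import math

def f(z): return 1 / z**2 + 3 / z   # pole of order 2 at 0
def g(z): return 1 / z**2 + 1 / z   # pole of order 2 at 0
def h_prime(z): return -2 / z**2    # (f-g)'(z), since f - g = 2/z

R, N = 2.0, 4000
total, total_h = 0j, 0j
for k in range(N):                         # trapezoid rule on |z| = R
    t = 2 * math.pi * k / N
    gamma = R * cmath.exp(1j * t)          # gamma(t) = R e^{it}
    dgamma = 1j * R * cmath.exp(1j * t)    # gamma'(t)
    total += h_prime(gamma) * dgamma * (2 * math.pi / N)
    total_h += (f(gamma) - g(gamma)) * dgamma * (2 * math.pi / N)
```

<p>The integral of $(f-g)'$ is numerically $0$, while $\oint (f-g)\,dz = 4\pi i$ is not — it is differentiation that kills the residue.</p>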
3,457,277
<p>Why is $\pi$ a transcendental number if <span class="math-container">$\pi$</span> also satisfies an equation like the one below, whose root tends to <span class="math-container">$x =\pi/3$</span> as <span class="math-container">$n$</span> tends to infinity? <span class="math-container">$$\Biggl(\biggl(\Bigl(\bigl((x^2-2^{(n2^1+1)})^2-2^{(n2^2+1)}\bigr)^2-2^{(n2^3+1)}\Bigr)^2...-2^{(n2^{(n-1)}+1)}\biggr)^2-2^{n2^{n}+1}\Biggr)^2 = 3*2^{n2^{(n+1)}}$$</span> </p> <p>For <span class="math-container">$n = 1$</span> the equation is <span class="math-container">$$(x^2-2^3)^2 = 3*2^4$$</span> and its root is <span class="math-container">$x_1$</span> = 1.03527618041. </p> <p>For <span class="math-container">$n = 2$</span> the equation is <span class="math-container">$$((x^2-2^5)^2-2^9)^2 = 3*2^{16}$$</span> and its root is <span class="math-container">$x_2$</span> = 1.04420953776041, and as we increase the value of <span class="math-container">$n$</span> the root tends to <span class="math-container">$\pi/3$</span>.</p> <p>For <span class="math-container">$n = 12$</span>, <span class="math-container">$3x_{12}$</span> will be 3.14159264, which is somewhat close to <span class="math-container">$\pi$</span>.</p> <p>Another, simplified version of this equation is <span class="math-container">$${\Biggl(\biggl(\Bigl(\bigl(\frac{\pi}{3*2^n}\bigr)^2-2\Bigr)^2-2\biggr)^2...-2\Biggr)^2} = 3$$</span> Here <span class="math-container">$n$</span> is the number of single-powered twos in the above equation. </p> <p>The solution of the above equation will be <span class="math-container">$$\pi = 3*\Biggl(\sqrt{2-\sqrt{2+\sqrt{2+\sqrt{2+....\sqrt{2+\sqrt{3}}}}}}\Biggr)*2^n$$</span> and <span class="math-container">$$n =\text{ number of $2$ inside largest square root}$$</span> So if <span class="math-container">$\pi$</span> can also be represented as a root of an algebraic equation, then why is <span class="math-container">$\pi$</span> transcendental?</p>
Matt Samuel
187,867
<p>Algebra for the most part is not concerned with sequences and limits. A real or complex number is said to be algebraic if it is a root of a single polynomial equation of finite degree with integer coefficients. This is something that <span class="math-container">$\pi$</span> is not.</p> <p>If you are willing to use limits, then every number is "algebraic" : you can always find a sequence of polynomial equations with integer coefficients whose roots converge to any number. For real numbers, the polynomials can even be linear! So in a sense this is not interesting. The specific equations you outline are interesting, but the conclusion is not as it is true for any number. </p>
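<p>(Appended sketch: the nested-radical expression from the question can be evaluated directly. Each finite truncation is an algebraic number, but — as the answer explains — it is only the <em>sequence</em>, not any single term, that converges to <span class="math-container">$\pi$</span>.)</p>

```python
import math

def nested_radical_pi(n):
    """3 * 2^n * sqrt(2 - sqrt(2 + ... sqrt(2 + sqrt(3)))), with n twos
    inside the largest square root. Each value is algebraic; the limit is pi."""
    c = math.sqrt(3.0)
    for _ in range(n - 1):
        c = math.sqrt(2.0 + c)
    return 3 * 2**n * math.sqrt(2.0 - c)
```

<p>For very large <span class="math-container">$n$</span> the subtraction <span class="math-container">$2-c$</span> loses floating-point precision, so moderate <span class="math-container">$n$</span> is used below.</p>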
1,569,543
<blockquote> <p>Prove or disprove: $(\ln n)^2 \in O(\ln(n^2)).$ </p> </blockquote> <p>I think I would start with expanding the left side. How would I go about this?</p>
Marco Bellocchi
169,978
<p>You can say something about trees on at least 3 vertices with $\lambda_2=1$.</p> <p>In particular, if your graph $T$ is a tree on at least 3 vertices, then $\lambda_2=1$ if and only if $T$ is a star.</p>
2,808,159
<p><strong>The question is:</strong> </p> <blockquote> <p>A half cylinder with the square part on the $xy$-plane, and the length $h$ parallel to the $x$-axis. The position of the center of the square part on the $xy$-plane is $(x,y)=(0,0)$. <img src="https://i.stack.imgur.com/fB5le.jpg" alt="Image description"> </p> </blockquote> <p>$S_1$ is the curved portion of the half-cylinder $z=(r^2-y^2)^{1/2}$ of length $h$.<br> $S_2$ and $S_3$ are the two semicircular plane end pieces.<br> $S_4$ is the rectangular portion of the $xy$-plane </p> <p>Gauss' law: $$\iint_S\mathbf E\cdot \mathbf{\hat{n}}\,dS=\frac{q}{\epsilon_0}$$ $\mathbf E$ is the electric field $\left(\frac{\text{Newton}}{\text{Coulomb}}\right)$.<br> $\mathbf{\hat{n}}$ is the unit normal vector.<br> $dS$ is an increment of the surface area $\left(\text{meter}^2\right)$.<br> $q$ is the total charge enclosed by the half-cylinder $\left(\text{Coulomb}\right)$.<br> $\epsilon_0$ is the permitivity of free space, a constant equal to $8.854\times10^{-12}\,\frac{\text{Coulomb}^2}{\text{Newton}\,\text{meter}^2}$. </p> <p>The electrostatic field is: $$\mathbf{E}=\lambda(x\mathbf{i}+y\mathbf{j})\;\text{,}$$ where $\lambda$ is a constant.</p> <p>Use this formula to calculate the part of the total charge $q$ for the curved portion $S_1$ of the half-cylinder: $$\iint_S\mathbf E\cdot \mathbf{\hat{n}}\,dS=\frac{q}{\epsilon_0}=\iint_R\left\{-E_x[x,y,f(x,y)]\frac{\partial f}{\partial x} -E_y[x,y,f(x,y)]\frac{\partial f}{\partial y} +E_z[x,y,f(x,y)] \right\}\,dx\,dy$$</p> <p>The goal is to find the total charge $q$ enclosed by the half-cylinder, expressed in terms of $\lambda$, $r$ and $h$. 
</p> <p><strong>The solution should be:</strong><br> $$\pi r^2\lambda h\epsilon_0$$</p> <p><strong>This is what I've tried:</strong><br> First calculate Gauss' law for $S_1$: \begin{align} f(x,y)&amp;=z=(r^2-y^2)^{1/2}=\sqrt{(r^2-y^2)} \\ \frac{\partial f}{\partial x}&amp;=\frac12(r^2-y^2)^{-\frac12}\cdot 0=0 \\ \frac{\partial f}{\partial y}&amp;=\frac12(r^2-y^2)^{-\frac12}\cdot -2y=-\frac{y}{\sqrt{(r^2-y^2)}}=-\frac yz \\ \\ \mathbf{E}&amp;=\lambda(x\mathbf{i}+y\mathbf{j}) \\ E_x[x,y,f(x,y)]&amp;=\lambda x \\ E_y[x,y,f(x,y)]&amp;=\lambda y \\ E_z[x,y,f(x,y)]&amp;=0 \\ \\ \text{length}&amp;=h \\ \end{align}</p> <p>Using the formula<br> $$\iint_R\left\{-E_x[x,y,f(x,y)]\frac{\partial f}{\partial x} -E_y[x,y,f(x,y)]\frac{\partial f}{\partial y} +E_z[x,y,f(x,y)] \right\}\,dx\,dy$$ we get:<br> \begin{align} &amp;\iint_R\left\{-\lambda x\cdot 0-\lambda y\cdot -\frac{y}{z} + 0\right\}\,dx\,dy \\ &amp;=\iint_R\frac{\lambda y^2}{z}\,dx\,dy \\ &amp;=\lambda\iint_R\frac{y^2}{\sqrt{r^2-y^2}}\,dx\,dy \\ \end{align}</p> <p>Since the length is $h$ and the length is parallel to the $x$-axis: \begin{align} &amp;\lambda \int_R\int_0^h\frac{y^2}{\sqrt{r^2-y^2}}\,dx\,dy \\ &amp;=\lambda\int_R\left[\frac{y^2x}{\sqrt{r^2-y^2}}\right]_0^h\,dy \\ &amp;=\lambda\int_R\frac{y^2h}{\sqrt{r^2-y^2}}\,dy \\ \end{align}</p> <p>Substitute:<br> \begin{align} y&amp;=r\sin\theta \\ \theta&amp;=\arcsin\left(\frac1r y\right) \\ \frac{dy}{d\theta}&amp;=\frac{d}{d\theta}\left(r\sin\theta\right)=r\cos\theta \\ dy&amp;=r\cos(\theta)\,d\theta \\ \\ &amp;\lambda\int\frac{hr^2\sin^2\theta}{\sqrt{r^2-r^2\sin^2\theta}}\cdot r\cos(\theta)\,d\theta \\ &amp;=\lambda h\int\frac{r^3\sin^2\theta\cos\theta}{r\sqrt{1-\sin^2\theta}}\,d\theta \\ &amp;=\lambda hr^2\int\frac{\sin^2\theta\cos\theta}{\sqrt{\cos^2\theta}}\,d\theta \\ &amp;=\lambda hr^2\int\frac{\sin^2\theta\cos\theta}{\cos\theta}\,d\theta \\ &amp;=\lambda hr^2\int\sin^2\theta\,d\theta \\ &amp;=\lambda hr^2\int\frac{1-\cos2\theta}{2}\,d\theta \\ 
&amp;=\frac12\lambda hr^2\int\left(1-\cos2\theta\right)\,d\theta \\ &amp;=\frac12\lambda hr^2\left[\int1\,d\theta-\int\cos2\theta\,d\theta\right] \\ &amp;=\frac12\lambda hr^2\left[\theta-\frac12\sin2\theta\right] \\ \\ \text{substitute back } \theta=\arcsin\left(\frac1r y\right)\text{:} \\ &amp;=\frac12\lambda hr^2\left[\arcsin\left(\frac1r y\right)-\frac12\sin\left(2\arcsin\left(\frac1r y\right)\right)\right] \\ \text{the boundaries of }y\text{ are }-r\text{ and }r\text{:} \\ &amp;=\frac12\lambda hr^2\left[\arcsin\left(\frac{y}{r}\right)-\frac12\sin\left(2\arcsin\left(\frac{y}{r}\right)\right)\right]_{-r}^r \\ &amp;=\frac12\lambda hr^2\left[\left(\frac{\pi}{2}-0\right) - \left(-\frac{\pi}{2}-0\right)\right] \\ &amp;=\frac12\pi\lambda hr^2 \end{align}</p> <p>Calculate Gauss' law for $S_2$ and $S_3$:<br> The surfaces of $S_2$ and $S_3$ are equal. </p> <p>Since:<br> $\bullet$ the position of the center of the square part on the $xy$-plane is $(x,y)=(0,0)$, the direction of the electrostatic field at both surfaces is opposite: $(\lambda x \mathbf{i})$,<br> $\bullet$ and the unit normal vectors are in opposite direction,<br> the addition of the result of Gauss' law will not be equal to 0. </p> <p>The surface of each of the surfaces is $\frac12 \pi r^2$.<br> The electric field in the $x$-direction is $\lambda x\mathbf{i}$.<br> $x$ for $S_2$ = $\frac12 h$.<br> $x$ for $S_3$ = $-\frac12 h$.<br> $\mathbf{\hat{n}}$ for $S_2$ = $\mathbf{i}$.<br> $\mathbf{\hat{n}}$ for $S_3$ = $-\mathbf{i}$. 
</p> <p>Therefore for $S_2$: \begin{align} \mathbf{E}\cdot \mathbf{\hat{n}} \times \text{surface area}&amp;=\lambda x\mathbf{i} \cdot \mathbf{i} \times \frac12 \pi r^2 \\ &amp;=\lambda \frac12 h\mathbf{i} \cdot \mathbf{i} \times \frac12 \pi r^2 \\ &amp;=\frac14 \pi\lambda hr^2 \\ \end{align}</p> <p>And for $S_3$: \begin{align} \mathbf{E}\cdot \mathbf{\hat{n}} \times \text{surface area}&amp;=\lambda x\mathbf{i} \cdot -\mathbf{i} \times \frac12 \pi r^2 \\ &amp;=\lambda (-\frac12 h)\mathbf{i} \cdot -\mathbf{i} \times \frac12 \pi r^2 \\ &amp;=\frac14 \pi\lambda hr^2 \\ \end{align}</p> <p>Calculate Gauss' law for $S_4$:<br> Since $S_4$ lies in the $xy$-plane, the electrostatic field $\mathbf{E}=\lambda(x\mathbf{i}+y\mathbf{j})\;\text{,}$ lies parallel to the surface, thus the result of Gauss' law is $0$.</p> <p>The net result is:<br> \begin{align} \iint_S\mathbf E\cdot \mathbf{\hat{n}}\,dS&amp;=\frac{q}{\epsilon_0} \\ &amp;=\frac12 \pi\lambda hr^2 + \frac14 \pi\lambda hr^2 + \frac14 \pi\lambda hr^2 +0 \\ &amp;=\pi\lambda hr^2 \\ \end{align}</p> <p>The total charge $q$ enclosed in the half-cylinder is thus:<br> $$q=\pi\lambda hr^2 \epsilon_0$$. </p> <p>This solves the problem I was having.</p>
mr_e_man
472,818
<p>"the boundaries of $y$ are $0$ and $r$"</p> <p>There's your problem.</p> <p>It should be $-r$ and $r$.</p>
2,736,323
<blockquote> <p>Given that $Y \sim U(2, 5)$ and $Z = 3Y - 4$, what is the distribution for $Z$?</p> </blockquote> <p>I've worked out that for $Y \sim N(2, 5)$, $Z \sim N(2, 45)$ since </p> <p>$$\mu=3\cdot2 - 4 = 2$$</p> <p>and </p> <p>$$\sigma^2=3^2 \cdot 5 = 45$$</p> <p>I'm wondering how the working differs when we have a uniform distribution, rather than a normal distribution? </p> <p><em>Sorry if a similar question has been asked before - I could not find anything on my search!</em></p> <p>Thanks!</p>
drhab
75,923
<p>Let $a,b\in\mathbb R$ with $a&lt;b$, and let $U$ be uniformly distributed over $(0,1)$.</p> <hr> <p>In general a random variable $X$ is uniformly distributed over interval $(a,b)$ if and only if its CDF can be prescribed by:</p> <ul> <li>$x\mapsto0$ if $x\leq a$</li> <li>$x\mapsto\frac{x-a}{b-a}$ if $a&lt;x\leq b$</li> <li>$x\mapsto1$ otherwise.</li> </ul> <p>Now observe that on the basis of this it can easily be deduced that $a+(b-a)U$ is uniformly distributed over interval $(a,b)$.</p> <p>Actually we can write $X=a+(b-a)V$ where $V=\frac{X-a}{b-a}$ and where $V$ is uniformly distributed over $(0,1)$.</p> <p>Applying this to your question we conclude that $Y=2+3V$ where $V$ is uniformly distributed over $(0,1)$ and consequently $$Z=3Y-4=3[2+3V]-4=2+9V$$allowing the conclusion that $Z$ is uniformly distributed over $(2,11)$.</p>
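<p>(A quick simulation check of the conclusion, added as a sketch: samples of $Z=3Y-4$ should fill $(2,11)$ with mean $6.5$.)</p>

```python
import random

random.seed(0)
# V ~ U(0,1), Y = 2 + 3V, Z = 3Y - 4
samples = [3 * (2 + 3 * random.random()) - 4 for _ in range(200_000)]
mean_z = sum(samples) / len(samples)
lo, hi = min(samples), max(samples)
```
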
1,850,418
<p>An argument has two parts, the set of all premises, and the conclusion drawn from said premise. Now since there's only 1 conclusion, it would be weird to choose a name for the 'second' part of the argument. However, what is the first part called? I used to think that this was actually called the premise, however that turns out not to be the case. Anyway, what <em>is</em> it called? Because I feel like it needs a name.</p> <pre><code>Argument Premise 1¯| Premise 2 | &lt;--- What's this called? Premise 3_| Conclusion </code></pre> <p>What is the part containing premise 1, 2, and 3?</p> <p><strong>Edit:</strong> I just realized that actually, perhaps one conclusion doesn't necessarily need to be drawn, perhaps there can be several conclusions, if so, what is the set of all conclusions called?</p>
user21820
21,820
<p>In natural language arguments, you have finitely many premises. This means that you can put them all together in a single premise that is the conjunction of all the premises.</p> <p>However, in general, you can treat the premises as a <strong>collection</strong> (which may not be finite). This collection is in first-order logic called the <strong>axioms</strong> of the formal system in question, and we write "$X \vdash φ$" to mean that from the axioms $X$ we can prove $φ$. The collection of all the possible conclusions that can be proven from the axioms $X$ is called the <strong>consequences</strong> of $X$, but I'm not sure whether there's a common notation for it. The collection of all models of $X$ is denoted by "$Md(X)$" and the sentences satisfied by a collection $C$ of models is denoted by "$Th(C)$", and so $Th(Md(X))$ would be the collection of all theorems proven by $X$.</p>
1,242,075
<p>I have some function $g$. I know that $g \in C^1[a,b]$, so $g'(x)$ exists. I want to know if $g: [a,b] \to [a,b]$ is onto. How can I find out if this is true or not?</p> <p>P.S. I am not saying all $g$ have the said property; I want to have some kind of test to distinguish functions with this property from functions without it.</p>
Andrew D. Hwang
86,418
<p>Not sure this is the type of criterion you're seeking, but if $g:[a, b] \to [a, b]$ is continuously-differentiable (continuous in particular), then by the Intermediate Value Theorem, $g$ is surjective if and only if $g$ achieves the values $a$ and $b$, i.e., there exist numbers $x_{\min}$ and $x_{\max}$ in $[a, b]$ such that $g(x_{\min}) = a$ and $g(x_{\max}) = b$. By elementary calculus, such points must be endpoints of the domain, or critical points of $g$ (i.e., solutions of $g'(x) = 0$).</p> <p>Procedurally, evaluate $g(a)$ and $g(b)$; if necessary, find all solutions of $g'(x) = 0$ (there may be infinitely many, of course...) and for each, evaluate $g(x)$. Your function is surjective if and only if the numbers $a$ and $b$ occur among these function values.</p>
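The procedure in this answer can be sketched in code (my own illustration; the example function $g$ and its critical points are my choices, not from the answer):

```python
import math

def is_surjective(g, a, b, critical_points, tol=1e-9):
    """Criterion from the answer: a continuously-differentiable g:[a,b]->[a,b]
    is onto iff the values a and b both occur among g evaluated at the
    endpoints and at the critical points (solutions of g'(x) = 0)."""
    candidates = [a, b] + list(critical_points)
    values = [g(x) for x in candidates]
    hits_a = any(abs(v - a) <= tol for v in values)
    hits_b = any(abs(v - b) <= tol for v in values)
    return hits_a and hits_b

# Example of my own choosing: g(x) = (1 - cos(2*pi*x))/2 maps [0,1] into [0,1];
# g'(x) = pi*sin(2*pi*x) vanishes at x = 0, 1/2, 1, where g takes values 0, 1, 0.
g = lambda x: (1 - math.cos(2 * math.pi * x)) / 2
onto = is_surjective(g, 0.0, 1.0, [0.0, 0.5, 1.0])
not_onto = is_surjective(lambda x: x / 2, 0.0, 1.0, [])  # never reaches 1
```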
2,916,306
<p>Let $f(x,y) = x\ln(x) + y\ln(y)$ be defined on space </p> <p>$S = \{(x,y) \in \mathbb{R}^2| x&gt; 0, y &gt; 0, x + y = 1\}$.</p> <p>My question is, how do I take the partial derivative for this function, given that the parameters are coupled through $x+y = 1$.</p> <p>A first idea would be to do it ignoring the coupling constraint. For this, we will get,</p> <p>$\dfrac{\partial f(x,y)}{\partial x} = \dfrac{\partial x\ln(x) + y\ln(y)}{\partial x} = \ln(x) + x/x = \ln(x) + 1$</p> <p>If we do not ignore the coupling constraint, and instead substitute $y = 1-x$, we will get,</p> <p>$\dfrac{\partial f(x,y)}{\partial x} = \dfrac{\partial x\ln(x) + (1-x)\ln(1-x)}{\partial x} = \ln(x) + 1 + \dfrac{1}{1-x} - \ln(1-x) - \dfrac{x}{1-x}$</p> <p>Am I doing this correctly? </p> <p>Why do I get two different expressions of the gradient?</p>
Hans Lundmark
1,242
<p>Taking the partial of $f(x,y)$ with respect to $x$ means that you vary $x$ while holding $y$ constant, but you can't do that if you insist on fulfilling the constraint $x+y=1$, so it doesn't really make sense to talk about partial derivatives in this situation.</p> <p>(You can eliminate $y$ and get a one-variable function $g(x)=f(x,1-x)$, but what you are computing then is an <em>ordinary</em> derivative $g'(x)$.)</p>
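A numerical check of the ordinary derivative route (my addition, not part of the original exchange): eliminating $y$ gives $g(x)=x\ln x+(1-x)\ln(1-x)$, whose derivative simplifies to $\ln x + 1 - \ln(1-x) - 1 = \ln\bigl(x/(1-x)\bigr)$.

```python
import math

def f(x, y):
    return x * math.log(x) + y * math.log(y)

def g(x):
    # One-variable function obtained by eliminating y via the constraint y = 1 - x.
    return f(x, 1 - x)

def dg(x, h=1e-6):
    # Central-difference approximation of the *ordinary* derivative g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

x0 = 0.3
numeric = dg(x0)
analytic = math.log(x0 / (1 - x0))  # g'(x) = ln(x/(1-x)) for 0 < x < 1
```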
3,848,179
<blockquote> <p>The velocity <span class="math-container">$v$</span> of a freefalling skydiver is modeled by the differential equation</p> <p><span class="math-container">$$ m\frac{dv}{dt} = mg - kv^2,$$</span></p> <p>where <span class="math-container">$m$</span> is the mass of the skydiver, <span class="math-container">$g$</span> is the gravitational constant, and <span class="math-container">$k$</span> is the drag coefficient determined by the position of the diver during the dive. Find the general solution of the differential equation.</p> </blockquote> <p>So is it my job to solve for velocity here (<span class="math-container">$v$</span>)? or am I missing something?</p>
Sage Stark
745,622
<p>Hint: Remember that the integral is equivalent to the signed area under a curve in the two-dimensional case. Also remember that <span class="math-container">$x^2+y^2=r^2$</span> is the formula for a circle of radius <span class="math-container">$r$</span> centered at the origin.</p>
809,499
<p>The no. of real solution of the equation $\sin x+2\sin 2x-\sin 3x = 3,$ where $x\in (0,\pi)$.</p> <p>$\bf{My\; Try::}$ Given $\left(\sin x-\sin 3x\right)+2\sin 2x = 3$</p> <p>$\Rightarrow -2\cos 2x\cdot \sin x+2\sin 2x = 3\Rightarrow -2\cos 2x\cdot \sin x+4\sin x\cdot \cos x = 3$</p> <p>$\Rightarrow 2\sin x\cdot \left(-\cos 2x+2\cos x\right)=3$</p> <p>Now I did not understand how can i solve it.</p> <p>Help me</p> <p>Thanks</p>
DonAntonio
31,254
<p>$$\begin{cases}\sin 2x=2\sin x\cos x\\{}\\\sin 3x=\sin2x\cos x+\sin x\cos2x=2\sin x\cos^2x+\sin x(1-2\sin^2x)\end{cases}$$</p> <p>Thus we get</p> <p>$$0=\sin x+4\sin x\cos x-2\sin x\cos^2x-\sin x+2\sin^3x$$</p> <p>Divide al through by $\;\sin x\;$ (why can we?):</p> <p>$$4\cos x-2\cos^2x+2\sin^2x=0\iff2\cos x-\cos^2x+1-\cos^2x=0\iff$$</p> <p>$$2\cos^2x-2\cos x-1=0\iff \ldots$$</p>
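Independently of the algebra, the original equation can be checked numerically (my addition): a dense scan of $\sin x+2\sin 2x-\sin 3x$ over $(0,\pi)$ shows its maximum stays below $3$ (around $2.73$), a useful cross-check on whatever roots the final quadratic in $\cos x$ produces.

```python
import math

def lhs(x):
    return math.sin(x) + 2 * math.sin(2 * x) - math.sin(3 * x)

# Dense scan of (0, pi); the maximum of the left-hand side stays below 3.
xs = [math.pi * k / 100000 for k in range(1, 100000)]
max_lhs = max(lhs(x) for x in xs)
```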
1,928,259
<p>I have the following problem: </p> <blockquote> <p>The function $f(x)$ is odd, its period is $5$ and $f(-8) = 1$. What is $f(18)$?</p> </blockquote> <p>So, $f(-8) = f(-8 + 5) = 1$. I also know that you could replace $(-8)$ with $(-3)$ and still get the same result of $1$.</p> <p>I'm just learning about periods. My grasp on it still isn't very impressive. I understand that even functions are symmetric about the y axis and odd functions are symmetric about the origin, but my brain just isn't making the connection on this one.</p> <p>Please help!</p> <p>-Jon</p>
Gordon
169,372
<p>$\left(\frac{\partial f}{\partial y} \right)_x = \frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y} \right)$. This is analogous to writing $\frac{\partial f}{\partial x}=f_x$.</p>
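One concrete way to see the value asked about (my own illustration): the period and oddness give $f(18)=f(3)=f(-2)=-f(2)=-f(-8+10)=-f(-8)=-1$. Any specific odd, period-5 function scaled so that $f(-8)=1$ must agree; below I use $f(x)=c\sin(2\pi x/5)$, which is my own model, not from the question.

```python
import math

# Odd, period-5 model function, scaled so that f(-8) = 1.
c = 1 / math.sin(2 * math.pi * (-8) / 5)

def f(x):
    return c * math.sin(2 * math.pi * x / 5)

val = f(18)
# General argument (no model needed):
# f(18) = f(3) = f(-2)   (period 5)
#       = -f(2)          (odd)
#       = -f(-8 + 10) = -f(-8) = -1.
```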
20,972
<p>Find the values of $x \in \mathbb{Z}$ such that there is no prime number between $x$ and $x^2$. Is there any such number?</p>
Ross Millikan
1,827
<p>Despite the comments about Bertrand's postulate, there is still the range $-\sqrt{2} \le x \le \sqrt{2}$. If you want $x$ a natural number, there is $1$ and maybe $0$.</p>
20,972
<p>Find the values of $x \in \mathbb{Z}$ such that there is no prime number between $x$ and $x^2$. Is there any such number?</p>
Fixee
7,162
<p>Given the current wording of the question, you can set $x$ to any integer in $\{-1, 0, 1\}$ and there will be no prime between $x$ and $x^2$. For any other integer $x$, there will always be a prime between $x$ and $x^2$ (as noted in the comments to your question).</p>
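A small brute-force check (my addition) agreeing with both answers above: among the integers tested, only $-1$, $0$, and $1$ have no prime strictly between $x$ and $x^2$.

```python
def is_prime(n):
    # Simple trial-division primality test; negatives, 0 and 1 are not prime.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def prime_strictly_between(a, b):
    lo, hi = min(a, b), max(a, b)
    return any(is_prime(k) for k in range(lo + 1, hi))

# Integers in [-200, 200] with no prime strictly between x and x^2.
exceptional = [x for x in range(-200, 201) if not prime_strictly_between(x, x * x)]
```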
2,061,547
<p>I am solving for the zeroes of the function:</p> <blockquote> <p>$$\frac{\cos(x)(3\cos^2(x)-1)}{(1+\cos^2(x))^2}$$</p> </blockquote> <p>I found the zeroes of the function by setting $\cos(x)=0$ and $3\cos^2(x)-1=0$.</p> <p>For $3\cos^2(x)-1=0$ I solved it and got $x=\cos^{-1}(\frac{\sqrt3}{3})$, but my calculator only gives one solution, $x=.955$, while when I graphed it I got another solution at $x=2.186$. How would I get the solution I didn't get with the calculator?</p>
layman
131,740
<p>This is one of those "follow your nose" proofs where you write one step down and see where you can go from there, and do it for each step. Here is what I came up with:</p> <p>We want to show $\{1/n \}_{n=1}^{\infty}$ has $0$ as its only limit point. Assuming you can prove $0$ is a limit point, let's show that there are no other limit points.</p> <p>Well, let $r$ be any nonzero real number. To be a limit point, we know for each $\epsilon &gt; 0$, $(r - \epsilon, r + \epsilon)$ should contain an element of $\{1/n\}_{n=1}^{\infty}$ which is not equal to $r$.</p> <p>Well, if $r = \frac{1}{n}$ for some $n$, we can choose our $\epsilon$ small enough so that $(\frac{1}{n} - \epsilon, \frac{1}{n} + \epsilon)$ doesn't contain any other elements in $\{ 1/n \}_{n=1}^{\infty}$. For example, take $\epsilon = \frac{1}{2n(n+1)}$, which is smaller than the distance from $\frac{1}{n}$ to any other point of the sequence.</p> <p>If $r \neq \frac{1}{n}$ for some $n$, then either $r &gt; 1$, $r &lt; 0$, or $r$ is in the set $(\frac{1}{2},1) \cup (\frac{1}{3},\frac{1}{2}) \cup (\frac{1}{4}, \frac{1}{3}) \cup (\frac{1}{5}, \frac{1}{4}) \cup \dots$.</p> <p>If $r &gt; 1$, choose $\epsilon$ small enough so that $(r - \epsilon, r + \epsilon)$ is contained in $(1,\infty)$ (which we can do since $(1,\infty)$ is open). Then $(r - \epsilon, r + \epsilon)$ doesn't contain any elements of the form $\frac{1}{n}$.</p> <p>If $r &lt; 0$, choose $\epsilon$ small enough so that $(r - \epsilon, r + \epsilon)$ is contained in $(-\infty, 0)$ (which we can do since $(-\infty, 0)$ is open). 
Then $(r - \epsilon, r + \epsilon)$ doesn't contain any elements of the form $\frac{1}{n}$.</p> <p>If $r$ is in $(\frac{1}{n + 1}, \frac{1}{n})$ for some $n$, then since this is an open interval, we can find $\epsilon$ small enough so that $(r - \epsilon, r + \epsilon)$ is also contained in $(\frac{1}{n + 1}, \frac{1}{n})$, implying $(r - \epsilon, r + \epsilon)$ doesn't contain any elements of the form $\frac{1}{n}$.</p> <p>So, in every case of $r$ being nonzero, we found that $r$ can't be a limit point of $\{ \frac{1}{n} \}_{n=1}^{\infty}$ since we are able to find a neighborhood around $r$ which doesn't contain elements of $\{\frac{1}{n} \}_{n=1}^{\infty}$.</p>
3,264,693
<p>For context, I have been relearning a lot of math through the lovely website Brilliant.org. One of their sections covers complex numbers and tries to intuitively introduce Euler's Formula and complex exponentiation by pulling features from polar coordinates, trigonometry, real number exponentiation, and vector space transformations.</p> <p>While I am now decently familiar with how complex exponentiation behaves (i.e. inducing rotation), I am slightly confused by the following. </p> <p><span class="math-container">$ 2^3 z$</span> can be viewed as stretching the complex number <span class="math-container">$z$</span> by <span class="math-container">$2^3$</span>. This could be rewritten as <span class="math-container">$8z$</span>. Therefore, Brilliant.org suggests that exponentiation of real numbers can be thought of as stretching a vector just like real number multiplication would. (<strong>check - understood</strong>)</p> <p>Brilliant.org then demonstrates that multiplying <span class="math-container">$z_1$</span> by another complex number <span class="math-container">$z_2$</span> is equivalent to first stretching <span class="math-container">$z_1$</span> by the magnitude of <span class="math-container">$z_2$</span> and then rotating <span class="math-container">$z_1$</span> by the angle that <span class="math-container">$z_2$</span> creates with the real axis counterclockwise. (<strong>check - understood</strong>)</p> <p>However, this is where I get confused. Why does, for example, <span class="math-container">$2^{2i}* z$</span> cause purely rotation of z but <span class="math-container">$2i*z$</span> does not (i.e. it causes stretching, too, in addition to rotation)?</p> <p>To me, the fact that <span class="math-container">$2^{(2i+3)}$</span> causes both rotation and stretching makes perfect sense because we can rewrite this as <span class="math-container">$(2^3)*(2^{(2i)})$</span>. 
As previously noted by Brilliant.org, exponentiation by real numbers can be thought of as stretching.</p> <p><strong>Here is the crux of my issue:</strong></p> <blockquote> <p>I understand that the magnitude of the imaginary number in the exponent (for example, the <span class="math-container">$'2'$</span> in <span class="math-container">$e^{2i}$</span> ) can be thought of as a rate of speed...but why does this interpretation '<strong>drop</strong>' when we are doing something like <span class="math-container">$2i * z$</span>. i.e. <strong>Why is the <span class="math-container">$2$</span> in <span class="math-container">$2i*z$</span> not also treated like a rate of rotation but instead treated like a magnitude of stretching ?</strong></p> </blockquote> <p>My math skill is not particularly high level so if anyone can offer as much of an intuitive answer as possible, it would be greatly appreciated!</p> <p>Edit 1: I guess another way of expressing this question is as follows: </p> <p>Why does a duality exist between real number exponentiation and real number multiplication but a duality does not exist between imaginary number exponentiation and imaginary number multiplication (i.e. imaginary number multiplication can cause stretching in addition to rotation)?</p> <p>Edit 2: While I accept that Euler's formula is a way of proving that exponentiation of purely imaginary numbers has a magnitude of 1 and therefore does not invoke stretching, that is not the sort of answer I am looking for. My question is aimed at identifying what was specified in Edit 1. </p> <p>Edit 3: Here is a picture that helps clarify my point of confusion. 
<a href="https://i.stack.imgur.com/9XeY0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9XeY0.png" alt="Lack of Duality Between Exponentiation and Multiplication"></a></p> <p>Edit 4: The question that was asked in this post <a href="https://math.stackexchange.com/questions/1540062/which-general-physical-transformation-to-the-number-space-does-exponentiation-re">Which general physical transformation to the number space does exponentiation represent?</a> is sort of the theme that I am going for. The answer that was given to this post, however, omits a reference to the complex numbers. </p>
David K
139,123
<p>The failure of duality is that there was never really duality there in the first place.</p> <p>It's true that in <em>most cases,</em> the vector from the origin to <span class="math-container">$c^{a+bi} z$</span> is rotated and stretched (or shrunk) relative to the vector from the origin to <span class="math-container">$z.$</span></p> <p>But sometimes it is not stretched or shrunk: specifically, when <span class="math-container">$a = 0.$</span></p> <p>It is also true that in <em>most cases,</em> the vector from the origin to <span class="math-container">$(a+bi) z$</span> is rotated and stretched (or shrunk) relative to the vector from the origin to <span class="math-container">$z.$</span></p> <p>But sometimes that vector is not stretched or shrunk either: specifically, when <span class="math-container">$a^2 + b^2 = 1.$</span></p> <p>The key observation to me is that when you write something like <span class="math-container">$c^{a+bi},$</span> you identify a point in the complex plane using the parameters <span class="math-container">$a$</span> and <span class="math-container">$b$</span> somewhat like polar coordinates. Parameter <span class="math-container">$a$</span> dictates the radius <span class="math-container">$r$</span>, parameter <span class="math-container">$b$</span> dictates the angle <span class="math-container">$\theta.$</span></p> <p>When you write <span class="math-container">$a + bi,$</span> however, the parameters <span class="math-container">$a$</span> and <span class="math-container">$b$</span> act like Cartesian coordinates <span class="math-container">$x$</span> and <span class="math-container">$y$</span> of a point in the complex plane.</p> <p>Polar coordinates don't work like Cartesian coordinates, and vice versa. They both identify points in a plane, that's mostly what they have in common. 
So when you wrote that both <span class="math-container">$c^{a+bi} z$</span> and <span class="math-container">$(a+bi) z$</span> stretch a vector as well as rotating it, that was (mostly!) true, but the way that <span class="math-container">$a$</span> and <span class="math-container">$b$</span> contributed to making it be a rotation or a stretching was completely different in each case.</p> <p>What really makes multiplication not stretch or shrink a vector, is you have to make sure the thing you multiply <span class="math-container">$z$</span> by is a complex number on the "unit circle." You can get this either by choosing <span class="math-container">$r=1$</span> in polar coordinates (which corresponds to <span class="math-container">$a=0$</span> in the formula <span class="math-container">$c^{a+bi}$</span>) or by choosing <span class="math-container">$x^2 + y^2 = 1.$</span></p>
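The contrast can be made concrete with a two-line numerical check (my own illustration): a purely imaginary exponent yields a factor of modulus exactly $1$ (pure rotation), while the purely imaginary multiplier $2i$ has modulus $2$ and therefore stretches as well.

```python
# Purely imaginary exponent: |2**(2i)| = 1, so multiplying by it only rotates
# (polar-style coordinates: a = 0 forces r = 1).
rotator = 2 ** 2j
# Purely imaginary multiplier: |2i| = 2 (Cartesian-style: a^2 + b^2 = 0 + 4),
# so it rotates by 90 degrees AND doubles the length.
multiplier = 2j

z = 3 + 4j  # |z| = 5
w_exp = rotator * z
w_mul = multiplier * z
```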
3,232,296
<ol> <li><p>For u, v ∈ ℝⁿ, we have ‖u−v‖≤‖u+v‖. </p></li> <li><p>The dot product of two vectors is a vector. </p></li> <li><p>For u, v ∈ ℝⁿ, we have ‖u−v‖≤‖u‖+‖v‖. </p></li> <li><p>A homogeneous system of linear equations with more equations than variables will always have at least one parameter in its solution. </p></li> <li><p>Given a non-zero vector v, there exist exactly two unit vectors that are parallel to v.</p></li> </ol> <p>My answers were</p> <ol> <li>FALSE because if we assumed that a= (-1,-2) and b= (3,4) it would make the statement false </li> <li>FALSE because the dot product of 2 vectors is a scalar </li> <li>FALSE this would have the same assumption as for question 1 </li> <li>FALSE I am not sure </li> <li>TRUE I am not sure </li> </ol> <p>I am not sure which one of my answers is/are wrong </p>
Kyle
647,271
<p>Number 3 is incorrect. Why? Because of the well-known fact that <span class="math-container">$|\bf{x} + \bf{y}| \le |\bf{x}| + |\bf{y}|$</span> (the Triangle Inequality).</p> <p>In particular, <span class="math-container">$|\bf{u} - \bf{v}| = |\bf{u} + (- \bf{v})| \le |\bf{u}| + |- \bf{v}| = |\bf{u}| + |\bf{v}|$</span>.</p>
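A quick empirical check of the inequality used here, $|\mathbf{u}-\mathbf{v}|\le|\mathbf{u}|+|\mathbf{v}|$, on random 3-vectors (my addition):

```python
import math
import random

def norm(v):
    return math.sqrt(sum(x * x for x in v))

rng = random.Random(1)
violations = 0
for _ in range(1000):
    u = [rng.uniform(-10, 10) for _ in range(3)]
    v = [rng.uniform(-10, 10) for _ in range(3)]
    diff = [a - b for a, b in zip(u, v)]
    # The triangle inequality predicts |u - v| <= |u| + |v| always.
    if norm(diff) > norm(u) + norm(v) + 1e-12:
        violations += 1
```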
4,156,482
<blockquote> <p>Can every continuous function <span class="math-container">$f(x)$</span> from <span class="math-container">$\mathbb{R}\to \mathbb{R}$</span> be continuously &quot;transformed&quot; into a differentiable function?</p> </blockquote> <p>More precisely is there always a continuous (non constant) <span class="math-container">$g(x)$</span> such that <span class="math-container">$g(f(x))$</span> is differentiable?<br></p> <ul> <li>This seems to hold for simple functions, for instance the function <span class="math-container">$f(x)=|x|$</span> can be transformed into a differentiable function by the function <span class="math-container">$g(x)=x^2$</span>. <br></li> <li>If <span class="math-container">$g(x)$</span> is additionally required to be increasing everywhere and differentiable then the answer seems to be <em><strong>no</strong></em> by the inverse function theorem, because of the existence of continuous nowhere differentiable functions.</li> </ul>
Troposphere
907,303
<p>Inspired by <a href="https://math.stackexchange.com/a/4157185/907303">Frank's</a> notion of &quot;corner&quot;, let's define</p> <ul> <li>The function <span class="math-container">$f$</span> has a <strong>&quot;flat&quot;</strong> at <span class="math-container">$x$</span> iff for every <span class="math-container">$\delta&gt;0$</span> there is an <span class="math-container">$x_1$</span> with <span class="math-container">$0&lt;|x-x_1|&lt;\delta$</span> such that <span class="math-container">$f(x_1)=f(x)$</span>. (In other words, <span class="math-container">$x$</span> is a limit point of the fiber of <span class="math-container">$f$</span> it's a member of).</li> </ul> <p>One easily proves:</p> <ol> <li><p>If <span class="math-container">$f$</span> is differentiable at <span class="math-container">$x$</span> and has a flat at <span class="math-container">$x$</span>, then <span class="math-container">$f'(x)=0$</span>.</p> </li> <li><p>The points where <span class="math-container">$f$</span> has flats depend only on the set of fibers of <span class="math-container">$f$</span> -- or, in other words, on the equivalence relation <span class="math-container">$x\sim_f y \Leftrightarrow f(x)=f(y)$</span>.</p> </li> <li><p>If <span class="math-container">$f$</span> has a flat at <span class="math-container">$x$</span> and <span class="math-container">$g$</span> is an <em>arbitrary</em> function, then <span class="math-container">$g\circ f$</span> also has a flat at <span class="math-container">$x$</span>.</p> </li> </ol> <p>I will construct a continuous surjection <span class="math-container">$f:\mathbb R\to \mathbb R$</span> where every <span class="math-container">$x$</span> is flat (that is, in more dignified terms, a function whose fibers are all perfect sets). 
This property guarantees that <span class="math-container">$g\circ f$</span> can only be differentiable if it is constant.</p> <hr /> <p>First, let <span class="math-container">$w:[0,1]\to[0,1]$</span> be a zig-zag function that goes from <span class="math-container">$0$</span> up to <span class="math-container">$1$</span>, then down to <span class="math-container">$0$</span> and back up to <span class="math-container">$1$</span>:</p> <p><span class="math-container">$$ w(x) = \begin{cases} 3x &amp; 0 \le x \le 1/3 \\ 2 - 3x &amp; 1/3 \le x \le 2/3 \\ 3x - 2 &amp; 2/3 \le x \le 1 \end{cases} $$</span></p> <p>Stack an infinite sequence of <span class="math-container">$w$</span>s to get <span class="math-container">$u:\mathbb R\to \mathbb R$</span>: <span class="math-container">$$ u(x) = \lfloor x\rfloor + w\bigl(x-\lfloor x\rfloor\bigr) $$</span> We note that <span class="math-container">$u$</span> is continuous and <span class="math-container">$|u(x)-x|&lt;1$</span> everywhere.</p> <p>Now for each <span class="math-container">$n\ge 0$</span>, let <span class="math-container">$$ h_n(x) = \frac{u(2^n x)}{2^n} $$</span> This looks like <span class="math-container">$u$</span>, except that we have &quot;zoomed out&quot; by a factor of <span class="math-container">$2^n$</span> so we have more and smaller wiggles around <span class="math-container">$x=y$</span>. 
In particular, <span class="math-container">$|h_n(x)-x|\le 2^{-n}$</span>.</p> <p>Now form the <em>infinite composition</em> of all the <span class="math-container">$h_n$</span>'s: <span class="math-container">$$ f = \lim_{n\to\infty} h_n \circ h_{n-1} \circ \cdots \circ h_1 \circ h_0 $$</span> or, written in more detail: <span class="math-container">$$ f_0(x) = x \qquad\qquad f_{n+1}(x) = h_n(f_n(x)) \qquad\qquad f(x) = \lim_{n\to\infty} f_n(x) $$</span> The limit exists pointwise because the bound on the wiggles on each <span class="math-container">$h_n$</span> makes the sequence Cauchy -- and this argument actually bounds the difference <em>uniformly</em>. So <span class="math-container">$f$</span> is <em>continuous</em>, being a uniform limit of continuous functions.</p> <hr /> <p>To see that <span class="math-container">$f$</span> has flats everywhere, it is convenient to define <span class="math-container">$$ k_n(x) = 2^n f_n(x) $$</span> We then have the recurrence <span class="math-container">$$ k_0(x) = x \qquad\qquad k_n(x) = 2u(k_{n-1}(x)) $$</span> In other words, we're just iterating <span class="math-container">$2u$</span>. 
We don't need to worry about taking the limit anymore, because the fibers of the <span class="math-container">$k_n$</span>s (and therefore also of <span class="math-container">$f_n$</span>) will tell us enough to conclude that <span class="math-container">$f$</span> has flats everywhere.</p> <p>Note that <span class="math-container">$k_n$</span> is piecewise linear, with each linear piece having slope <span class="math-container">$\pm 6^n$</span>, and it changes direction only at points where <span class="math-container">$k_n(x)$</span> is an (even) integer.</p> <p>Now suppose we're given <span class="math-container">$x$</span> and we're looking for a <span class="math-container">$x_1$</span> within <span class="math-container">$\delta=6^{-n}$</span> of <span class="math-container">$x$</span> where <span class="math-container">$f(x_1)=f(x)$</span>. We fast-forward to <span class="math-container">$k_n$</span> and find that <span class="math-container">$x$</span> lies in a closed interval <span class="math-container">$[y,y+6^{-n}]$</span> where <span class="math-container">$k_n(x)$</span> either ascends linearly from an integer to the next or descends from an integer to the previous (namely, <span class="math-container">$y=6^{-n}\lfloor 6^n x\rfloor$</span> will work). This means that <span class="math-container">$k_{n+1}$</span> restricted to this interval attains each value in its image at least twice. So we can find an <span class="math-container">$x_1\ne x$</span> in the interval such that <span class="math-container">$k_{n+1}(x_1) = k_{n+1}(x)$</span> and therefore also <span class="math-container">$f(x_1)=f(x)$</span>, as required.</p>
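The two quantitative bounds used in the construction, $|u(x)-x|<1$ and $|h_n(x)-x|\le 2^{-n}$, can be spot-checked numerically (my addition; the sample grid is arbitrary):

```python
import math

def w(t):
    # Zig-zag on [0, 1]: up, down, up, exactly as defined in the answer.
    if t <= 1 / 3:
        return 3 * t
    if t <= 2 / 3:
        return 2 - 3 * t
    return 3 * t - 2

def u(x):
    return math.floor(x) + w(x - math.floor(x))

def h(n, x):
    return u(2 ** n * x) / 2 ** n

# Spot-check the two bounds on a grid of sample points.
xs = [k / 97 for k in range(-300, 300)]
bound_u = max(abs(u(x) - x) for x in xs)       # should be < 1 (in fact <= 2/3)
bound_h3 = max(abs(h(3, x) - x) for x in xs)   # should be <= 2**-3
```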
2,483,188
<p>I am facing this problem: </p> <p><strong>Turn into cartesian form:</strong></p> <p>$$\dfrac{1-e^{i\pi/2}}{1 + e^{i\pi/2}}$$</p> <p>I've tried to operate and I've come up to this:</p> <p>$$\dfrac{1-2e^{i\pi/2} + e^{i\pi}}{1 - e^{i\pi}}$$</p> <p>I do not know how to go on, and I've tried to operate with the cartesian form of the initial quotient, but I come up with an expression similar. I'm stucked.</p>
zhw.
228,045
<p>Hint: Let $h(x) = x^{-1/4}.$ Consider the $F\in X^*$ given by $F(f) = \langle f,h\rangle.$</p>
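For the quotient in the question itself: since $e^{i\pi/2}=i$, multiplying numerator and denominator by the conjugate gives $\frac{1-i}{1+i}=\frac{(1-i)^2}{2}=\frac{1-2i-1}{2}=-i$, which a one-line numerical check confirms (my addition):

```python
import cmath

z = cmath.exp(1j * cmath.pi / 2)   # e^{i*pi/2} = i (up to rounding)
val = (1 - z) / (1 + z)            # expected Cartesian form: -i
```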
1,376,981
<p>The context is as follows: I am asking this question because I would like feedback; I am a beginner to mathematical proofs.</p> <p>We wish to show $\sum\limits_{k=1}^{n}kq^{k-1} = \frac{1-(n+1)q^{n} + nq^{n+1}}{(1-q)^{2}}$ for arbitrary $q&gt;0$, $q\neq1$. My attempt:</p> <p>Let us first consider the trivial base case $n=1$ which, in the series, is clearly equal to one. Upon simplifying the right side using basic algebra, we see that the fraction is also equal to one. The base case has been shown. </p> <p>Now, we must show the inductive step, meaning that if the equality holds for some $n$ it necessarily holds for $n+1$. Let $r=n+1$. We add to the series the next term $r*q^{r-1} = (n+1)q^{n}$ (to both sides of the equation). Thus, </p> <p>$\frac{1-(n+1)q^{n} + nq^{n+1}}{(1-q)^{2}} + (n+1)q^{n}$ $\Rightarrow$ $\frac{1-(n+1)q^{n} + nq^{n+1} + (1-q)^2(n+1)(q^n)}{(1-q)^{2}}$</p> <p>$\Rightarrow$ $\frac{1-(n+1)q^{n} + nq^{n+1} + (n+1)(q^n - 2q^{n+1} + q^{n+2})}{(1-q)^{2}}$ $\Rightarrow$ $\frac{1 + nq^{n+1} + (n+1)(q^{n+2} - 2q^{n+1})}{(1-q)^{2}}$ $= \frac{1 - (n+2)q^{n+1} + (n+1)(q^{n+2})}{(1-q)^{2}}$</p> <p>Substituting $r=n+1$ the final expression can be simplified to $\frac{1 - (r+1)q^{r} + (r)(q^{r+1})}{(1-q)^{2}}$</p> <p>Therefore it's clear that if the original equality is true for some number $n$, it is also true for the number $r=n+1$ and this process can be repeated ad infinitum. The "first" case was the base case and the rest follows logically. </p>
David Quinn
187,299
<p>Start with the standard formula for the sum of the first $n$ terms of a geometric series and differentiate both sides. This will give you the formula you have</p>
1,376,981
<p>The context is as follows: I am asking this question because I would like feedback; I am a beginner to mathematical proofs.</p> <p>We wish to show $\sum\limits_{k=1}^{n}kq^{k-1} = \frac{1-(n+1)q^{n} + nq^{n+1}}{(1-q)^{2}}$ for arbitrary $q&gt;0$, $q\neq1$. My attempt:</p> <p>Let us first consider the trivial base case $n=1$ which, in the series, is clearly equal to one. Upon simplifying the right side using basic algebra, we see that the fraction is also equal to one. The base case has been shown. </p> <p>Now, we must show the inductive step, meaning that if the equality holds for some $n$ it necessarily holds for $n+1$. Let $r=n+1$. We add to the series the next term $r*q^{r-1} = (n+1)q^{n}$ (to both sides of the equation). Thus, </p> <p>$\frac{1-(n+1)q^{n} + nq^{n+1}}{(1-q)^{2}} + (n+1)q^{n}$ $\Rightarrow$ $\frac{1-(n+1)q^{n} + nq^{n+1} + (1-q)^2(n+1)(q^n)}{(1-q)^{2}}$</p> <p>$\Rightarrow$ $\frac{1-(n+1)q^{n} + nq^{n+1} + (n+1)(q^n - 2q^{n+1} + q^{n+2})}{(1-q)^{2}}$ $\Rightarrow$ $\frac{1 + nq^{n+1} + (n+1)(q^{n+2} - 2q^{n+1})}{(1-q)^{2}}$ $= \frac{1 - (n+2)q^{n+1} + (n+1)(q^{n+2})}{(1-q)^{2}}$</p> <p>Substituting $r=n+1$ the final expression can be simplified to $\frac{1 - (r+1)q^{r} + (r)(q^{r+1})}{(1-q)^{2}}$</p> <p>Therefore it's clear that if the original equality is true for some number $n$, it is also true for the number $r=n+1$ and this process can be repeated ad infinitum. The "first" case was the base case and the rest follows logically. </p>
tired
101,233
<p>The finite geometric series is given by</p> <p>$$ G(q,n)=\sum_{k=0}^nq^k=\frac{1-q^{n+1}}{1-q} $$ for constant $q$ such that $|q|&lt;1$.</p> <p>Now $\frac{d}{dq}G(q,n)$ is the series you are looking for...</p>
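A numerical sanity check of the closed form (my addition). Note the finite-sum identity actually holds for any $q\neq1$, not only $|q|<1$:

```python
def lhs(q, n):
    # Partial sum of k*q^(k-1).
    return sum(k * q ** (k - 1) for k in range(1, n + 1))

def rhs(q, n):
    # Claimed closed form.
    return (1 - (n + 1) * q ** n + n * q ** (n + 1)) / (1 - q) ** 2

checks = [(q, n) for q in (0.3, 0.9, 2.0, 5.0) for n in (1, 2, 5, 10)]
max_err = max(abs(lhs(q, n) - rhs(q, n)) for q, n in checks)
```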
2,705,980
<p>I have the following problem: \begin{cases} y(x) =\left(\dfrac14\right)\left(\dfrac{\mathrm dy}{\mathrm dx}\right)^2 \\ y(0)=0 \end{cases} Which can be written as:</p> <p>$$ \pm 2\sqrt{y} = \frac{dy}{dx} $$</p> <p>I then take the positive case and treat it as an autonomous, separable ODE. I get $f(x)=x^2$ as my solution.</p> <p>In order to solve this problem, I have to multiply each side of the equation by $\frac{1}{\sqrt{y}}$. But since the solution to this IVP is $y(x)=x^2$, zero is in the image of $f(x)$. So at a particular point $1/\sqrt{y}$ is not defined. But the <strong>solution</strong> is defined at $y =0$.</p> <p>In fact, $y(x)= 0$ for all x is another solution. But aside from this solution the non-trivial solution is defined at zero also.</p> <p>So is it wrong to multiply across by $1/\sqrt{y}$? And if so how else do I approach this question?</p>
Arian
172,588
<p>Since $\{Av_1,...,Av_n\}$ is linearly independent then $\{v_1,...,v_n\}$ is linearly independent. Suppose other wise. Let $a_1,...,a_n\in\mathbb{R}$ not all zero such that $a_1v_1+...+a_nv_n=0$ then by linearity of $A$ one gets $$0=A(0)=A(a_1v_1+...+a_nv_n)=a_1Av_1+...+a_nAv_n$$ thus a contradiction. Having established linear independence of $\{v_1,...,v_n\}$ we claim that this set spans $\mathbb{R}^n$. This is immediate since there are $n$ such vectors and can easily construct an isomorphism $\varphi:\mathbb{R}^n\to\mathbb{R}^n$ such that $\varphi(e_k):=v_k$ for $k=1,...,n$ where $\{e_k\}$ is the usual basis of $\mathbb{R}^n$. Now suppose $A$ is singular then there is an $x\in\mathbb{R}^n$ and $x\neq0$ such that $Ax=0$. But there are constants $\{b_k\}$ not all zero with $x=b_1v_1+...+b_nv_n$. This implies $$0=Ax=A(b_1v_1+...+b_nv_n)=b_1Av_1+...+b_nAv_n$$ which is a contradiction to linear independence of $\{Av_1,...,Av_n\}$.</p>
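A direct check (my addition, independent of the reply above) that both candidate solutions from the question, $y(x)=x^2$ and $y\equiv 0$, satisfy $y=\frac14\left(\frac{dy}{dx}\right)^2$ with $y(0)=0$:

```python
def residual(y, dy, x):
    # Residual of y = (1/4) * (dy/dx)^2 at the point x.
    return y(x) - 0.25 * dy(x) ** 2

y1, dy1 = (lambda x: x * x), (lambda x: 2 * x)   # non-trivial solution y = x^2
y2, dy2 = (lambda x: 0.0), (lambda x: 0.0)       # trivial solution y = 0

max_res = max(abs(residual(y, dy, x))
              for (y, dy) in [(y1, dy1), (y2, dy2)]
              for x in [k / 10 for k in range(-20, 21)])
```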
848,229
<p>Two teams take part in a KO tournament with $n$ rounds. Assuming that the teams win all their games until they are paired together, what is the probability that they meet in the final?</p> <p>I figured out that the solution is </p> <p>$P_n=\prod_{j=2}^n (1-\frac{1}{2^j-1})=\frac{2^{n-1}}{2^n-1}$</p> <p>So, $\lim_{n-&gt;\infty} P_n = \frac{1}{2}$</p> <p>I tried to understand the result intuitively because I would have expected a much lower probability. </p>
Claude Leibovici
82,404
<p>What you have done is very good. As commented by G. H. Faust, using exponentials make things much simpler. Consider $$\int e^{a x}e^{i b x}~dx=\int e^{(a+i b) x}~dx=\frac{e^{(a+i b)x}}{a+i b}=\frac {e^{a x}}{a^2+b^2}(a-ib)e^{i b x}$$ Expanding, we then get $$\int e^{a x}e^{i b x}~dx=\frac {e^{a x}}{a^2+b^2}\Big([a \cos(bx)+b\sin(bx)]+i[a \sin(bx)-b\cos(bx)]\Big)$$</p> <p>So, the real part gives $$\int e^{ax} \cos (bx)~dx=\frac {e^{a x}}{a^2+b^2}\Big(a \cos(bx)+b\sin(bx)\Big)$$ and the imaginary part gives $$\int e^{ax} \sin (bx)~dx=\frac {e^{a x}}{a^2+b^2}\Big(a \sin(bx)-b\cos(bx)\Big)$$</p>
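The two antiderivative formulas at the end can be sanity-checked by numerical differentiation (my addition; the values of $a$ and $b$ are arbitrary test constants):

```python
import math

a, b = 1.3, 2.7  # arbitrary constants for the check

def F_cos(x):
    # Claimed antiderivative of e^{ax} cos(bx).
    return math.exp(a * x) * (a * math.cos(b * x) + b * math.sin(b * x)) / (a * a + b * b)

def F_sin(x):
    # Claimed antiderivative of e^{ax} sin(bx).
    return math.exp(a * x) * (a * math.sin(b * x) - b * math.cos(b * x)) / (a * a + b * b)

def dF(F, x, h=1e-6):
    # Central-difference derivative.
    return (F(x + h) - F(x - h)) / (2 * h)

xs = [k / 7 for k in range(-10, 11)]
err_cos = max(abs(dF(F_cos, x) - math.exp(a * x) * math.cos(b * x)) for x in xs)
err_sin = max(abs(dF(F_sin, x) - math.exp(a * x) * math.sin(b * x)) for x in xs)
```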
1,581,456
<p>Given functions $g,h: A \rightarrow B$ and a set C that contains at least two elements, with $f \circ g = f \circ h$ for all $f:B \rightarrow C$. Prove that $g = h$. </p> <p>My logic is to take C = B and h(x) =x for all x in particular and the result follows immediately. But, I don't see the use of the condition on C. Please somebody help.</p>
will
253,085
<p>Given positive integer $n$ that defines $z := \exp(\pi i/n),$ we can generalize the oddly periodic sum with $$ S_n(x) := \sum_{k=0}^{n-1}\frac{2k+1}{1-xz^{2k+1}}. $$ The power series when $|x| &lt; 1$ is $$ S_n(x) = \sum_{E=0}^{\infty}x^E\sum_{k=0}^{n-1}(2k+1)z^{2kE+E}. $$ This simplifies with the weighted sum, $$ \left.\frac{d}{dr}(\frac{r-r^{2n+1}}{1-r^2})\right|_{r=z^E} = \sum_{k=0}^{n-1}(2k+1)z^{2kE},\ \mbox{ to} $$ $$ S_n(x) = \sum_{E=0}^{\infty}\frac{1-z^{2E}-(2n+1)z^{2En}+(2n+1)z^{2En+2E}+ 2z^{2E}-2z^{2En+2E}}{(1-z^{2E})^2}(xz)^E. $$ We observe that $z^{2E}\to1$ correctly reduces the fraction to $n^2,\ $ the period is (almost) $n,\ $ and $$ S_n(x) = \frac{1}{1+x^n}\left( n^2 - 2n\sum_{E=1}^{n-1} \frac{(xz)^E}{1-z^{2E}} \right). $$ Before we take $x\to1,\ $ we need to recall $$ z=\exp(\pi i/n)\ \implies\ n = 1+\sum_{E=1}^{n-1}\frac{2}{1-z^{2E}}. $$ We can evaluate our analytic sum at $$ S_n(1) = \frac{n}{2}\left(1 + 2\sum_{E=1}^{n-1}\frac{1-z^E}{1-z^{2E}} \right) $$ to find the desired relationship $$ \sum_{k=0}^{n-1}\frac{2k+1}{1-z^{2k+1}}\ =\ \sum_{k=0}^{n-1}\frac{n}{1+z^k}. $$</p>
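The final identity can be verified numerically for small $n$ (my addition):

```python
import cmath

def lhs(n):
    z = cmath.exp(1j * cmath.pi / n)
    return sum((2 * k + 1) / (1 - z ** (2 * k + 1)) for k in range(n))

def rhs(n):
    z = cmath.exp(1j * cmath.pi / n)
    return sum(n / (1 + z ** k) for k in range(n))

# No denominator vanishes: z^(2k+1) != 1 since 2k+1 is odd, and z^k != -1 for k < n.
max_err = max(abs(lhs(n) - rhs(n)) for n in range(1, 10))
```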
77,311
<p>I am a first-time user of <em>Mathematica</em> (V10). I know it's easy to install palettes, but uninstalling them drives me crazy. I want to delete one. Can anyone help me do that? </p>
Anton Antonov
34,008
<p>Open the folder:</p> <pre class="lang-mathematica prettyprint-override"><code>SystemOpen[FileNameJoin[{$UserBaseDirectory, &quot;SystemFiles&quot;, &quot;FrontEnd&quot;, &quot;Palettes&quot;}]] </code></pre> <p>and delete the palettes you want.</p> <p>If you do not find the palette you want to remove, try <code>$UserBaseDirectory</code> or <code>$InstallationDirectory</code>.</p>
211,175
<p>In Gradshteyn and Ryzhik, (specifically starting with the section 3.13) there are several results involving integrals of polynomials inside square root. These are given in terms of combinations of elliptic integrals. See for instance: <a href="https://i.stack.imgur.com/6Cqyb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Cqyb.jpg" alt="Gradshteyn and Ryzhik excerpt"></a></p> <p>where <span class="math-container">$F[\alpha, p]$</span> is the elliptic integral of first kind. I tried to reproduce the first result above in Mathematica (version 12) but failed. I would appreciate if anyone could point out what I am doing wrong. My first attempt is</p> <pre><code>Integrate[1/Sqrt[(a - x) (b - x) (c - x)], {x, -Infinity, u}, Assumptions -&gt; { a &gt; b &gt; c &gt;= u}] </code></pre> <p>which returned no result:</p> <p><a href="https://i.stack.imgur.com/Wdama.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wdama.jpg" alt="unevaluated"></a></p> <p>Then I tried without integration limits, and take the limits after</p> <pre><code>Integrate[1/Sqrt[(a - x) (b - x) (c - x)], x, Assumptions -&gt; { a &gt; b &gt; c}] </code></pre> <p>giving:</p> <p><a href="https://i.stack.imgur.com/0Ci4P.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Ci4P.jpg" alt="complicated result"></a></p> <p>taking now the upper limit and simplifying </p> <pre><code>Simplify[Limit[(2 (a - x)^(3/2) Sqrt[(b - x)/(a - x)] Sqrt[(c - x)/(a - x)] EllipticF[ArcSin[Sqrt[a - b]/Sqrt[a - x]], (a - c)/(a - b)])/(Sqrt[a - b] Sqrt[(a - x) (-b + x) (-c + x)]), x -&gt; u], Assumptions -&gt; { a &gt; b &gt; c &gt;= u}] </code></pre> <p>giving:</p> <p><a href="https://i.stack.imgur.com/uqFwh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uqFwh.jpg" alt="another result"></a></p> <p>whereas the integral vanishes when <span class="math-container">$x\rightarrow -\infty$</span>. 
Clearly, the above result given by Mathematica differs from Gradshteyn and Ryzhik's. The two results match if the substitution <span class="math-container">$b \rightarrow c$</span>, <span class="math-container">$c \rightarrow b$</span> is made, but this would then be at odds with the condition <span class="math-container">$a &gt; b &gt; c$</span>. </p>
bbgodfrey
1,063
<p><strong>Important Edit Made</strong></p> <p>Mathematica can perform the 3.31 integral, if <code>Assumptions</code> is changed from <code>{ a &gt; b &gt; c &gt;= u}</code> to <code>{a &gt; b &gt; c &gt; u}</code>.</p> <pre><code>s = Integrate[1/Sqrt[(a - x) (b - x) (c - x)], {x, -Infinity, u}, Assumptions -&gt; {a &gt; b &gt; c &gt; u}] (* (2 EllipticF[ArcSin[Sqrt[(a - b)/(a - u)]], (a - c)/(a - b)])/Sqrt[a - b] *) </code></pre> <p>To compare this with the expression as given in <em>Gradshteyn and Ryzhik</em> (7th edition, 2007), it is important to realize that this compendium defines the second argument of <em>Elliptic Integral F</em> differently from Mathematica. Comparing the second example under "Possible Issues" in the <code>EllipticF</code> <a href="http://reference.wolfram.com/language/ref/EllipticF.html" rel="noreferrer">documentation</a> with 11.112 #2 of <em>Gradshteyn and Ryzhik</em> indicates that <code>p^2</code> (see question) should be used as the second argument of the <em>Gradshteyn and Ryzhik</em> expression when employing the Mathematica representation of <code>EllipticF</code>; i.e.,</p> <pre><code>gr = 2 EllipticF[ArcSin[Sqrt[(a - c)/(a - u)]], (a - b)/(a - c)]/Sqrt[a - c] </code></pre> <p>which differs from <code>s</code> only by the interchange of <code>b</code> and <code>c</code>. But, it is obvious from the integrand of the integral above that the relative order of <code>{a, b, c}</code> is irrelevant. As verification, I have evaluated <code>s</code> and <code>gr</code> numerically for several parameters, for instance,</p> <pre><code>vals = Thread[{a, b, c, u} -&gt; Reverse@Sort@RandomReal[{-5, 5}, 4]] (* {a -&gt; 3.47807, b -&gt; 2.65797, c -&gt; -1.04855, u -&gt; -1.17253} *) Chop[gr /. vals] (* 1.38108 *) Chop[s /. vals] (* 1.38108 *) </code></pre> <p>and in each case they are the same and agree with the original integral evaluated numerically.</p> <pre><code>NIntegrate[1/Sqrt[(a - x) (b - x) (c - x)] /. 
vals, {x, -Infinity, u /. vals}, Method -&gt; {Automatic, "SymbolicProcessing" -&gt; False}] (* 1.38108 *) </code></pre> <p>Therefore, the apparent discrepancy between the result in <em>Gradshteyn and Ryzhik</em> (7th edition, 2007) and the corresponding Mathematica result is due merely to differences in notation.</p>
4,499,058
<blockquote> <p>Let <span class="math-container">$A:L_2[0,1]\to L_2[0,1]$</span> be defined by<span class="math-container">$$Ax(t)=\int \limits _0^1t\left (s-\frac{1}{2}\right )x(s)\,ds\quad \forall t\in [0,1].$$</span>Compute the adjoint and the norm of <span class="math-container">$A$</span></p> </blockquote> <p>This is my progress so far:</p> <p>First, I think that <span class="math-container">$\|A\|=\dfrac{1}{2}$</span>; so far I have only argued that:</p> <p>Let <span class="math-container">$x\in L_2[0,1]$</span> then<span class="math-container">$$\|Ax(t)\|_2=\left (\int \limits _0^1t\left (s-\frac{1}{2}\right )x(s)\,ds\right )^{1/2}\leq \frac{1}{2}\|x\|_2.$$</span>Is this correct?<br> For the other inequality I try to find <span class="math-container">$x$</span> with norm <span class="math-container">$1$</span> such that <span class="math-container">$\|Ax\|=\dfrac{1}{2}$</span>.<br> Any hint or help for this step or for computing the adjoint will be greatly appreciated.</p>
Ryszard Szwarc
715,896
<p>The operator is of the form <span class="math-container">$$ Ax=\langle x, g\rangle \,f$$</span> where <span class="math-container">$f(t)=t$</span> and <span class="math-container">$g(t)=t-{1\over 2}.$</span> Thus the range is one-dimensional. Its norm is equal <span class="math-container">$$\|A\|=\|f\|_2\|g\|_2={1\over \sqrt{3}}{1\over \sqrt{12}}={1\over 6}$$</span> The adjoint operator can be calculated as follows <span class="math-container">$$\langle x,A^*y\rangle =\langle Ax,y\rangle=\langle x,g\rangle \langle f,y\rangle=\langle x,\langle y,f\rangle g\rangle $$</span> Hence <span class="math-container">$$A^*y =\langle y,f\rangle g$$</span> i.e. the functions <span class="math-container">$f$</span> and <span class="math-container">$g$</span> swapped. The operator <span class="math-container">$A$</span> is not self-adjoint if <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are linearly independent, like in the OP question. Explicitly we get <span class="math-container">$$(A^*x)(t)=\left (t-{1\over 2}\right )\int\limits_0^1 s\,x(s)\,ds$$</span> <strong>Remark</strong> For operator of the form <span class="math-container">$$(Ax)(t)=\int\limits_0^1k(s,t)x(s)\,ds$$</span> the adjoint operator is given by</p> <p><span class="math-container">$$(A^*x)(t)=\int\limits_0^1\bar{k}(t,s)x(s)\,ds$$</span> Thus <span class="math-container">$A^*=A$</span> if and only if <span class="math-container">$k(s,t)=\bar{k}(t,s).$</span> In the OP question we have <span class="math-container">$k(s,t)=t(s-{1\over 2}), $</span> hence <span class="math-container">$k(s,t)\neq k(t,s).$</span></p>
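Since the operator is rank one, the norm formula is easy to sanity-check numerically (a Python sketch, standard library only; the midpoint-rule discretization and all names are my own):

```python
import math

def l2_norm(func, m=100000):
    # Midpoint-rule approximation of the L^2[0,1] norm of func
    h = 1.0 / m
    return math.sqrt(h * sum(func((i + 0.5) * h) ** 2 for i in range(m)))

f = lambda t: t          # Ax = <x, g> f
g = lambda t: t - 0.5

norm_A = l2_norm(f) * l2_norm(g)   # rank-one operator: ||A|| = ||f||_2 ||g||_2
assert abs(norm_A - 1.0 / 6.0) < 1e-8
```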
313,025
<p>I got two problems asking for the proof of the limit: </p> <blockquote> <p>Prove the following limit: <br/>$$\sup_{x\ge 0}\ x e^{x^2}\int_x^\infty e^{-t^2} \, dt={1\over 2}.$$</p> </blockquote> <p>and, </p> <blockquote> <p>Prove the following limit: <br/>$$\sup_{x\gt 0}\ x\int_0^\infty {e^{-px}\over {p+1}} \, dp=1.$$</p> </blockquote> <p>I feel that these two problems are of the same kind. Would anyone please help me with one of them, so I may figure out the other one? Many thanks! </p>
robjohn
13,854
<p>After a suitable substitution, both limits are easily handled by <a href="http://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="nofollow">Dominated Convergence</a> or <a href="http://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem" rel="nofollow">Monotone Convergence</a>.</p> <hr> <p><strong>The First Integral</strong></p> <p>Substituting $t\mapsto\sqrt{t+x^2}$, $$ \begin{align} xe^{x^2}\int_x^\infty e^{-t^2}\,\mathrm{d}t &amp;=xe^{x^2}\int_0^\infty e^{-t-x^2}\frac{\mathrm{d}t}{2\sqrt{t+x^2}}\\ &amp;=\frac12\int_0^\infty e^{-t}\frac{\mathrm{d}t}{\sqrt{1+t/x^2}}\tag{1} \end{align} $$ Equation $(1)$ shows that the integrand is increasing with $x$ and by Dominated or Monotone Convergence, the limit is $$ \frac12\int_0^\infty e^{-t}\,\mathrm{d}t=\frac12\tag{2} $$</p> <hr> <p><strong>The Second Integral</strong></p> <p>Substituting $p\mapsto p/x$, $$ \begin{align} x\int_0^\infty\frac{e^{-px}}{p+1}\,\mathrm{d}p &amp;=\int_0^\infty\frac{e^{-p}}{1+p/x}\,\mathrm{d}p\tag{3} \end{align} $$ Equation $(3)$ shows that the integrand is increasing with $x$ and by Dominated or Monotone Convergence, the limit is $$ \int_0^\infty e^{-p}\,\mathrm{d}p=1\tag{4} $$</p>
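For the first integral, one can watch the expression increase toward $1/2$ numerically (a Python sketch using the standard-library `math.erfc`, since $\int_x^\infty e^{-t^2}\,dt=\frac{\sqrt{\pi}}{2}\operatorname{erfc}(x)$; the sample points are arbitrary):

```python
import math

def first_expr(x):
    # x * exp(x^2) * Integral_x^inf exp(-t^2) dt, via
    # Integral_x^inf exp(-t^2) dt = (sqrt(pi)/2) * erfc(x)
    return x * math.exp(x * x) * (math.sqrt(math.pi) / 2.0) * math.erfc(x)

vals = [first_expr(x) for x in (1.0, 2.0, 5.0, 10.0, 20.0)]
assert all(vals[i] < vals[i + 1] for i in range(len(vals) - 1))  # increasing in x
assert all(v < 0.5 for v in vals)                                # bounded by 1/2
assert abs(vals[-1] - 0.5) < 1e-3                                # approaches 1/2
```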
2,650,628
<p>The equation $\log_e(x) + \log_e(1+x) =0$ can be written as:</p> <p>a) $x^2+x-e=0$</p> <p>b) $x^2+x-1=0$</p> <p>c) $x^2+x+1=0$</p> <p>d) $x^2+xe-e=0$</p> <p>I tried differentiating both sides, then it becomes $\frac{1}{x}+\frac{1}{1+x}=0$, but I don't get any of the answers.</p>
PM.
416,252
<p>Since $\log(A) + \log(B) = \log(AB)$ $$ \begin{align} \log(x) + \log(1+x) &amp;=0\\ \Rightarrow \log{\left( x(1+x)\right)} &amp;=0 \\ \Rightarrow x(1+x) &amp;= 1 \\ \Rightarrow x^2 + x -1 &amp;=0 \end{align} $$</p>
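A quick numerical check (a Python sketch; the variable names are mine): the positive root of $x^2+x-1=0$ does satisfy the original equation.

```python
import math

# Positive root of x^2 + x - 1 = 0 (the log equation requires x > 0)
x = (math.sqrt(5.0) - 1.0) / 2.0
assert abs(x * x + x - 1.0) < 1e-12
assert abs(math.log(x) + math.log(1.0 + x)) < 1e-12
```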
941,709
<p><strong>Question:</strong> Let $X$ be any set with at least two elements. Assume that the only open subsets of $X$ are the empty set $\emptyset$ and $X$ itself. Which subsets of $X$ are closed? Which subsets of $X$ are compact?</p> <p><strong>My thoughts:</strong> Both $\emptyset$ and $X$ have to be closed subsets, as their complements are both open, and by definition a set is closed if its complement is open.</p> <p>The empty set is compact as it is a finite set, and $X$ is also compact as it has a finite number of closed subsets, thus is bounded and closed. Am I in any way correct with these thoughts?</p>
Asaf Karagila
622
<p>The equivalence between compactness and bounded+closed is only true in metric spaces. In fact not even that, just in a particular class of metric spaces, not even in all of them. For general metric spaces we need to strengthen bounded to "totally bounded".</p> <p>And in general for topological spaces we don't have a notion of boundedness, so compactness is instead characterized using open sets:</p> <blockquote> <p>$A$ is compact if and only if whenever $\{U_i\mid i\in I\}$ is a collection of open sets such that $A\subseteq\bigcup_{i\in I}U_i$, there are some $i_1,\ldots,i_n\in I$ such that $A\subseteq\bigcup_{j=1}^n U_{i_j}$.</p> </blockquote> <p>Meaning that $A$ is compact if whenever it is covered by open sets, we can find just a finite subset of this cover which already covers $A$.</p> <hr> <p>If $A$ is a subset of a finite set, how many subsets does that finite set has? What sort of open covers (regardless to the topology!) could it even have?</p>
2,750,931
<p>I'm in the process of exploring Bra Ket notation. In it, I often find operators in the form $\lvert a\rangle\langle b\rvert$, which can be thought of as multiplying a column vector $a$ with a row vector $b$.</p> <p>This strikes me as a construction which should probably have a name that I can research to understand the properties of matrices formed this way, but I'm having trouble finding sources that name such matrices.</p> <p>What is it called when a matrix can be decomposed into the product of a column vector and a row vector? I'd like to look up the properties of such a matrix.</p> <p>$$M=\begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \\ \end{pmatrix} \begin{pmatrix} b_0 &amp; b_1 &amp; \ldots &amp; b_n \\ \end{pmatrix}$$</p>
Misha Lavrov
383,078
<p>The class of matrices are precisely the matrices with rank $1$, and the matrix $a b^{\mathsf T}$ specifically is called the <em>outer product</em> of $a$ and $b$ (by analogy with the inner product $a^{\mathsf T}b$, which gives a scalar).</p> <p>Especially in the context of quantum states, it is also common to identify the vector space of $n \times n$ matrices with the <a href="https://en.wikipedia.org/wiki/Tensor_product" rel="nofollow noreferrer">tensor product</a> $\mathbb R^n \otimes \mathbb R^n$, in which case the matrix $a b^{\mathsf T}$ corresponds to the pure tensor $a \otimes b$.</p>
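A small plain-Python demonstration (a sketch with arbitrary vectors of my choosing): an outer product is nonzero, yet every $2\times 2$ minor vanishes, which is exactly the rank-$1$ property.

```python
# Outer product M = a b^T for two arbitrary vectors: M is nonzero,
# but every 2x2 minor vanishes, so M has rank 1.
a = [1.0, -2.0, 3.0]
b = [4.0, 0.5, -1.0]
M = [[ai * bj for bj in b] for ai in a]

assert any(entry != 0.0 for row in M for entry in row)   # M is not the zero matrix
for i in range(3):
    for j in range(i + 1, 3):
        for k in range(3):
            for l in range(k + 1, 3):
                minor = M[i][k] * M[j][l] - M[i][l] * M[j][k]
                assert abs(minor) < 1e-12                # rank <= 1
```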
2,247,498
<p>Imagine a circle of radius R in 3D space with a line l running through its center C in a direction perpendicular to the plane of the circle. Basically, like the axle of a wheel. </p> <p>From a given point P that is not on the circle or on l, a ray extends to intersect both l and the circle. What would be the equations used to find the intersection points the ray makes with the circle and l? You are given the coordinates of C and P, the radius R and the orientation of l.</p> <p>I am trying to model looking from a point P onto a wheel-axis shape and find, from point of view P, the point of the edge of the circle that would appear to intersect with its axis. Of course it doesn't, but that is how this 3d structure would appear in a 2d image if a camera was situated at point P.</p>
Joseph O'Rourke
237
<p>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <a href="https://i.stack.imgur.com/hqGN7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hqGN7.jpg" alt="DiskVec"></a> <hr /> Rotate everything so that the disk lies in the $xy$-plane with its center at the origin, and the line $L$ coincides with the $z$-axis. One more rotation about the $z$-axis places $p$ so that it projects onto the $x$-axis. Now it is easy to connect $p$ through $L$ to the circle. Then reverse all rotations.</p>
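The first rotation step can be sketched in code (Python, standard library only; the Rodrigues-formula construction and every name below are my own illustration, not part of the original answer): build the rotation taking the disk's unit normal onto the $z$-axis, after which the disk lies in the $xy$-plane.

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def rotate(v, axis, c, s):
    # Rodrigues' rotation formula: rotate v about the unit axis
    # by the angle whose cosine is c and sine is s.
    k_cross_v = cross(axis, v)
    k_dot_v = sum(a * b for a, b in zip(axis, v))
    return tuple(v[i] * c + k_cross_v[i] * s + axis[i] * k_dot_v * (1 - c)
                 for i in range(3))

def align_disk(normal):
    # Rotation (axis, cos, sin) taking the unit disk normal onto +z.
    # Assumes the normal is not already parallel to the z-axis.
    zhat = (0.0, 0.0, 1.0)
    raw_axis = cross(normal, zhat)
    s = math.sqrt(sum(a * a for a in raw_axis))   # sin(theta) = |n x zhat|
    c = normal[2]                                 # cos(theta) = n . zhat
    axis = tuple(a / s for a in raw_axis)
    return axis, c, s

n = tuple(x / math.sqrt(14.0) for x in (1.0, 2.0, 3.0))  # arbitrary unit normal
axis, c, s = align_disk(n)
rotated = rotate(n, axis, c, s)
assert all(abs(rotated[i] - (0.0, 0.0, 1.0)[i]) < 1e-12 for i in range(3))
```

The same `rotate` call (with the sine negated) undoes the rotation at the end, as the answer's last step requires.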
586,112
<p>Consider the following statement: $\{\{\varnothing\}\} \subset \{\{\varnothing\},\{\varnothing\}\}$</p> <p>I think the above statement is false: $\{\{\varnothing\}\}$ is a subset of $\{\{\varnothing\},\{\varnothing\}\}$, but to be a proper subset there must be some element in $\{\{\varnothing\},\{\varnothing\}\}$ which is not in $\{\{\varnothing\}\}$, and this is not the case here, so it is false.</p> <p>Is my explanation and answer right or not?</p>
Community
-1
<p>(J. Stewart, <em>Calculus</em>, p. 391) I believe Stewart defines an antiderivative as an indefinite integral. </p> <p><img src="https://i.stack.imgur.com/wy1kk.png" alt="enter image description here"></p>
159,585
<p>This is kind of a plain question, but I just can't get something.</p> <p>For the congruence and a prime number $p$: $(p+5)(p-1) \equiv 0\pmod {16}$.</p> <p>How come, in addition to the solutions $$\begin{align*} p &amp;\equiv 11\pmod{16}\\ p &amp;\equiv 1\pmod {16} \end{align*}$$ we also have $$\begin{align*} p &amp;\equiv 9\pmod {16}\\ p &amp;\equiv 3\pmod {16}\ ? \end{align*}$$</p> <p>Where do the last two come from? Are there always 4 solutions? I can see that they satisfy the equation, but how can I calculate them?</p> <p>Thanks</p>
Community
-1
<p>First note that $p$ has to be odd. Else, $(p+5)$ and $(p-1)$ are both odd.</p> <p>Let $p = 2k+1$. Then we need $16 \vert (2k+6)(2k)$ i.e. $4 \vert k(k+3)$.</p> <p>Since $k$ and $k+3$ are of opposite parity, we need $4|k$ or $4|(k+3)$.</p> <p>Hence, $k = 4m$ or $k = 4m+1$. This gives us $ p = 2(4m) + 1$ or $p = 2(4m+1)+1$.</p> <p>Hence, we get that $$p = 8m +1 \text{ or }8m+3$$ which matches your claim.</p> <p><strong>EDIT</strong></p> <p>You have obtained the first two solutions i.e. $p = 16m+1$ and $p=16m + 11$ by looking at the cases $16 \vert (p-1)$ (or) $16 \vert (p+5)$ respectively.</p> <p>However, note that you are leaving out the following possibilities.</p> <ol> <li>$2 \vert (p+5)$ and $8 \vert (p-1)$. This combination also implies $16 \vert (p+5)(p-1)$</li> <li>$4 \vert (p+5)$ and $4 \vert (p-1)$. This combination also implies $16 \vert (p+5)(p-1)$</li> <li>$8 \vert (p+5)$ and $2 \vert (p-1)$. This combination also implies $16 \vert (p+5)(p-1)$</li> </ol> <p>Out of the above possibilities, the second one can be ruled out: if $4 \vert (p+5)$ and $4 \vert (p-1)$, then $4 \vert ((p+5)-(p-1))$ i.e. $4 \vert 6$, which is not possible.</p> <p>The first possibility gives us $ p = 8m+1$ while the third possibility gives us $p = 8m +3$.</p> <p>Combining this with your answer, we get that $$p = 8m +1 \text{ or }8m+3$$</p> <p>In general, when you want to analyze $a \vert bc$, you need to write $a = d_1 d_2$, where $d_1,d_2 \in \mathbb{Z}$, and then look at the cases $d_1 \vert b$ and $d_2 \vert c$.</p>
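The four residues are easy to confirm by brute force over one period (a Python sketch):

```python
# Brute force over a full period: which residues p (mod 16) satisfy
# (p + 5)(p - 1) == 0 (mod 16)?
solutions = [p for p in range(16) if (p + 5) * (p - 1) % 16 == 0]
assert solutions == [1, 3, 9, 11]                 # exactly four residues
assert all(p % 8 in (1, 3) for p in solutions)    # i.e. p = 8m + 1 or 8m + 3
```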
159,585
<p>This is kind of a plain question, but I just can't get something.</p> <p>For the congruence and a prime number $p$: $(p+5)(p-1) \equiv 0\pmod {16}$.</p> <p>How come, in addition to the solutions $$\begin{align*} p &amp;\equiv 11\pmod{16}\\ p &amp;\equiv 1\pmod {16} \end{align*}$$ we also have $$\begin{align*} p &amp;\equiv 9\pmod {16}\\ p &amp;\equiv 3\pmod {16}\ ? \end{align*}$$</p> <p>Where do the last two come from? Are there always 4 solutions? I can see that they satisfy the equation, but how can I calculate them?</p> <p>Thanks</p>
Theorem
24,598
<p>The first two solutions can be seen easily, i.e. you have $p\equiv-5,1\equiv11,17 \pmod{16}$. To find the next two solutions, note that $p$ may instead satisfy $(p+5)(p-1)=16$; the solutions of this quadratic equation are $p=3,-7 \equiv 3,9\pmod {16}$.</p>
2,878,412
<p>I've been working on a problem that involves discovering valid methods of expressing natural numbers as Roman numerals, and I came across a few oddities in the numbering system.</p> <p>For example, the number 5 could be most succinctly expressed as $\texttt{V}$, but as per the rules I've seen online, could also be expressed as $\texttt{IVI}$. </p> <p>Are there any rules that bar the second expression from being valid? Or are the rules for Roman numerals such that multiple valid expressions can denote the same number?</p> <h1>Edit</h1> <p>A sample set of rules I've seen online:</p> <ol> <li>Only one I, X, and C can be used as the leading numeral in part of a subtractive pair.</li> <li>I can only be placed before V or X in a subtractive pair.</li> <li>X can only be placed before L or C in a subtractive pair.</li> <li>C can only be placed before D or M in a subtractive pair.</li> <li>Other than subtractive pairs, numerals must be in descending order (meaning that if you drop the first term of each subtractive pair, then the numerals will be in descending order).</li> <li>M, C, and X cannot be equalled or exceeded by smaller denominations.</li> <li>D, L, and V can each only appear once.</li> <li>Only M can be repeated 4 or more times.</li> </ol>
Community
-1
<p>Wikipedia explicitly enumerates the patterns for the units, the tens, and the hundreds.</p> <p><a href="https://en.wikipedia.org/wiki/Roman_numerals#Basic_decimal_pattern" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Roman_numerals#Basic_decimal_pattern</a></p> <p>This doesn't leave room for extravagant expressions, though some variants are described. Subtractive-additive forms were of course not used.</p>
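For what it's worth, the basic decimal pattern can be encoded as a regular expression (a Python sketch; the regex is my paraphrase of the usual standard-form convention, not taken verbatim from Wikipedia). Under it, `V` is accepted while `IVI` is rejected:

```python
import re

# Standard-form Roman numerals (basic decimal pattern: thousands, hundreds,
# tens, units). Note that the empty string also matches this pattern.
ROMAN = re.compile(r"M{0,3}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})")

assert ROMAN.fullmatch("V")          # 5 in standard form
assert not ROMAN.fullmatch("IVI")    # rejected under the standard convention
assert not ROMAN.fullmatch("IIII")   # additive variant, also rejected
assert ROMAN.fullmatch("MCMXCIV")    # 1994
```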