3,143,670
<p>I'm not entirely sure how to proceed on this question. I believe I am supposed to use a triangle inequality with epsilons and <span class="math-container">$m$</span>, <span class="math-container">$n \geq N$</span> to get <span class="math-container">$N_1$</span> and <span class="math-container">$N_2$</span> before setting <span class="math-container">$N = \max\{N_1, N_2\}$</span>. The absolute value in the question is making this tricky for me. I'm wondering how to go about this. Thanks!</p>
JoseSquare
643,097
<p><strong>Hint</strong></p> <p>You have to prove that for every <span class="math-container">$\epsilon &gt;0$</span> there exists <span class="math-container">$N \in \Bbb{N}$</span> such that if <span class="math-container">$n,m &gt; N$</span> then <span class="math-container">$|x_n -y_n -x_m + y_m| &lt; \epsilon$</span>.</p> <p>Using the triangle inequality we get </p> <p><span class="math-container">$$|(x_n -x_m)+(y_m-y_n)|\leq |x_n - x_m| + |y_n -y_m| $$</span></p> <p>Since <span class="math-container">$\{x_n\}$</span> and <span class="math-container">$\{y_n\}$</span> are Cauchy you can bound each term by <span class="math-container">$\frac{\epsilon}{2}$</span> and the proof is almost finished.</p> <p><strong>Edit:</strong></p> <p>As said in the comments, I forgot you want <span class="math-container">$\{|x_n -y_n|\}$</span>, not <span class="math-container">$\{x_n -y_n\}$</span> as I thought. Then all you need is the previous step shown in the answer by @String</p>
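For completeness, the step referred to in the edit is the reverse triangle inequality, which reduces the absolute-value case to the same bound; with $N_1, N_2$ chosen so each term is below $\epsilon/2$ and $N=\max\{N_1,N_2\}$:

```latex
\bigl|\,|x_n - y_n| - |x_m - y_m|\,\bigr|
  \le \bigl|(x_n - y_n) - (x_m - y_m)\bigr|
  \le |x_n - x_m| + |y_n - y_m|
  < \tfrac{\epsilon}{2} + \tfrac{\epsilon}{2} = \epsilon
  \qquad (n, m > N).
```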
804,483
<p>The following integrals look like they might have a closed form, but Mathematica could not find one. Can they be calculated, perhaps by differentiating under the integral sign?</p> <p>$$I_1 = \int_{-\infty }^{\infty } \frac{\sin (x)}{x \cosh (x)} \, dx$$ $$I_2 = \int_{-\infty }^{\infty } \frac{\sin ^2(x)}{x \sinh (x)} \, dx$$</p>
Graham Hesketh
66,912
<p>For the first one we need: $$\int _{-1/2}^{1/2}\!{{\rm e}^{2\,iax}}{da}={\frac {\sin \left( x \right) }{x}}\tag{1}$$ $$ \frac{1}{\cosh \left( x \right)}=-2\,\sum _{n=1}^{\infty } \left( -1 \right) ^{n}{{\rm e}^{- \left| x \right| \left( 2\,n-1 \right) }}\tag{2}$$ $$\int _{-\infty }^{\infty }\!{{\rm e}^{2\,iax}}{{\rm e}^{- \left| x \right| \left( 2\,n-1 \right) }}{dx}=- \frac{1}{\left( 2\,ia-2\,n+1 \right) }- \frac{1}{\left( -2\,ia-2\,n+1 \right)}\tag{3}$$ $$-2\,\sum _{n=1}^{\infty } \left( -1 \right) ^{n} \left(- \frac{1}{\left( 2\,ia-2\,n+1 \right) }- \frac{1}{\left( -2\,ia-2\,n+1 \right)} \right) ={ \frac {\pi }{\cosh \left( \pi \,a \right) }}\tag{4}$$ we get:</p> <p>$$ \begin{aligned} \int _{-\infty }^{\infty }\!{\frac {\sin \left( x \right) }{x\cosh \left( x \right) }}{dx}&amp;=\int _{-1/2}^{1/2}\!{\frac {\pi }{\cosh \left( \pi \,a \right) }}{da}\\ &amp;=2\,\arctan \left( { {\rm e}^{1/2\,\pi }} \right)-2\,\arctan \left( {{\rm e}^{-1/2\,\pi }} \right)\\ &amp;=2\,\arctan \left( \sinh \left( \frac{1}{2}\,\pi \right) \right) \end{aligned}\tag{5}$$ where the last part follows from $(2)$ and the Taylor series for arctan: $$\arctan \left( x \right) =\sum _{n=0}^{\infty }{\frac { \left( -1 \right) ^{n}{x}^{2\,n+1}}{2\,n+1}}\tag{6}$$</p> <p>For the second one we need: $$ \frac{1}{\sinh \left( x \right) }=2\,\sum _{n=1}^{\infty } {{\rm e}^{-x \left( 2\,n-1 \right) }}\tag{7}$$ $${\frac { \sin^2 \left( x \right)}{x}}=-\frac{1}{2}\,\sum _{ m=1}^{\infty }{\frac { \left( -1 \right) ^{m}{2}^{2\,m}{x}^{2\,m-1}}{ \left( 2\,m \right) !}}\tag{8}$$ $$\int _{0}^{\infty }\!{x}^{2\,m-1}{{\rm e}^{-x \left( 2\,n-1 \right) }} {dx}={\frac { \left( 2\,m-1 \right) !}{ \left( 2\,n-1 \right) ^{2\,m}} }\tag{9}$$ $$\cot \left( z \right) -\frac{1}{z}=-\frac{2}{\pi}\,\sum _{m=1}^{\infty }\zeta \left( 2\,m \right) \left( {\frac {z}{\pi }} \right) ^{2\,m-1}\tag{10}$$ From $(7,8,9)$: $$ \begin{aligned} \int _{0}^{\infty }\!{\frac { \sin^2 \left( x \right)}{x\sinh \left( x \right) }}{dx}&amp;=-\frac{1}{2}\,\sum _{m=1}^{\infty } \left( \frac{\left( -4 \right) ^{m}}{m}\sum _{n=1}^{\infty }{\frac {1}{\left( 2\,n-1 \right) ^{2\,m}}} \right)\\ &amp;=-\frac{1}{2}\sum _{m=1}^{\infty }\,{\frac {\zeta \left( 2\,m \right) \left( {4}^{m}-1 \right) \left( -1 \right) ^{m}}{m}} \end{aligned}\tag{11}$$ and after integrating $(10)$ once we know that: $$\ln \left( {\frac {\sin \left( z \right) }{z}} \right) =-\sum _{m=1}^ {\infty }\frac{\zeta \left( 2\,m \right)}{m} \left( {\frac {z}{\pi }} \right) ^{2\,m}\tag{12} $$ so by comparing $(11)$ with $(12)$ we know that: $$ \begin{aligned}\int _{0}^{\infty }\!{\frac { \sin^2 \left( x \right) }{x\sinh \left( x \right) }}{dx}&amp;=\frac{1}{2}\,\ln\!\left( \frac{1}{2}\,{\frac {\sinh \left( 2\,\pi \right) }{\sinh \left( \pi \right) }} \right)\\ &amp;=\frac{1}{2}\,\ln \left( \cosh \left( \pi \right) \right) \end{aligned}$$ </p>
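Both closed forms can be sanity-checked numerically; a minimal sketch in Python (pure stdlib; the cutoff at $x=40$ and the step count are arbitrary choices, and both integrands are even with a removable singularity at the origin):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# Limits at x = 0: sin(x)/(x cosh x) -> 1 and sin^2(x)/(x sinh x) -> 1.
f1 = lambda x: math.sin(x) / (x * math.cosh(x)) if x else 1.0
f2 = lambda x: math.sin(x) ** 2 / (x * math.sinh(x)) if x else 1.0

I1 = 2 * simpson(f1, 0.0, 40.0, 40000)       # integral over the whole line
I2_half = simpson(f2, 0.0, 40.0, 40000)      # integral over [0, oo)

assert abs(I1 - 2 * math.atan(math.sinh(math.pi / 2))) < 1e-8      # matches (5)
assert abs(I2_half - 0.5 * math.log(math.cosh(math.pi))) < 1e-8    # final result
```

The tails beyond $x=40$ are of order $e^{-40}$ and therefore negligible at this tolerance.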
804,483
<p>The following integrals look like they might have a closed form, but Mathematica could not find one. Can they be calculated, perhaps by differentiating under the integral sign?</p> <p>$$I_1 = \int_{-\infty }^{\infty } \frac{\sin (x)}{x \cosh (x)} \, dx$$ $$I_2 = \int_{-\infty }^{\infty } \frac{\sin ^2(x)}{x \sinh (x)} \, dx$$</p>
Random Variable
16,033
<p>First I'm going to evaluate $$\int_{-\infty}^{\infty} \frac{\cos ax}{\cosh x} \ dx .$$</p> <p>Integrate the function $ \displaystyle f(z) = \frac{e^{iaz}}{\cosh z}$ around a rectangle on the complex plane with vertices at $z= R$, $ z= R + i \pi$, $z= -R + i \pi$, and $z= - R$.</p> <p>As $R \to \infty$, $ \displaystyle \int f(z) \ dz$ vanishes on the left and right sides of the rectangle.</p> <p>So going around the rectangle counterclockwise, we get</p> <p>$$ \int_{-\infty}^{\infty} f(x) \ dx + \int_{\infty}^{-\infty} f(t + i \pi) \ dt = 2 \pi i \ \text{Res} [f(z),i \pi /2] ,$$</p> <p>which implies</p> <p>$$ (1+ e^{- a \pi}) \int_{-\infty}^{\infty} \frac{e^{iax}}{\cosh x} \ dx = 2 \pi i \lim_{z \to i \pi /2} \frac{e^{iaz}}{\sinh z} = 2 \pi \ e^{- a \pi /2} .$$</p> <p>And equating the real parts on both sides of the equation, we get</p> <p>$$ \int_{-\infty}^{\infty} \frac{\cos ax}{\cosh x} \ dx = \frac{2 \pi}{e^{a \pi /2} + e^{- a \pi/2}} = \pi \ \text{sech} \left( \frac{a \pi}{2}\right) .$$</p> <p>Then, integrating over the parameter (with $s$ as the dummy variable),</p> <p>$$ \begin{align} \int_{0}^{a} \int_{-\infty}^{\infty} \frac{\cos sx}{\cosh x} \ dx \ ds &amp;= \int_{-\infty}^{\infty} \int_{0}^{a} \frac{\cos sx}{\cosh x} \ ds \ dx \\ &amp;= \int_{-\infty}^{\infty} \frac{\sin ax}{x \cosh x} \ dx \\ &amp;= \pi \int_{0}^{a} \text{sech} \left(\frac{s \pi}{2} \right) \ ds \\ &amp;= 2 \int_{0}^{a \pi /2} \text{sech}(u) \ du \\ &amp;= 4 \int_{0}^{a \pi /2} \frac{e^{u}}{1+e^{2u}} \ du \\ &amp;= 4 \int_{1}^{e^{a \pi /2}} \frac{1}{1+w^{2}} \ dw \\ &amp;= 4 \left(\arctan (e^{a \pi /2}) - \frac{\pi}{4} \right) . \end{align}$$</p> <p>Therefore,</p> <p>$$ \int_{-\infty}^{\infty} \frac{\sin x}{x \cosh x} \ dx = 4 \arctan (e^{\pi /2}) - \pi \approx 2.3217507819 . $$</p>
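The key identity $\int_{-\infty}^{\infty}\cos(ax)\,\mathrm{sech}(x)\,dx=\pi\,\mathrm{sech}(a\pi/2)$ can be spot-checked numerically; a small sketch (composite Simpson's rule, with an arbitrary cutoff at $x=40$ where the integrand is negligible):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

checks = []
for a in (0.5, 1.0, 2.0):
    # the integrand is even in x, so double the [0, 40] integral
    numeric = 2 * simpson(lambda x: math.cos(a * x) / math.cosh(x), 0.0, 40.0, 40000)
    checks.append((numeric, math.pi / math.cosh(a * math.pi / 2)))

assert all(abs(u - v) < 1e-8 for u, v in checks)
```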
42,040
<p>Suppose the polynomial $t^k - a$ has a root (hence splits) in $\mathbb{Q}(\zeta_k)$. For which $k$ does it follow that one of the roots of $t^k - a$ is rational? In particular, are there infinitely many such $k$? </p> <p>A counting argument shows this is true whenever $k$ has the property that $\varphi(k)$ is a power of a prime relatively prime to $k$. Unfortunately, I think it's an open problem whether there are infinitely many such $k$. </p> <p><strong>Motivation:</strong> If enough $k$ have this property then I think I can complete my solution to <a href="https://math.stackexchange.com/questions/41774/is-an-integer-uniquely-determined-by-its-multiplicative-order-mod-every-prime/42022#42022">"Is an integer uniquely determined by its multiplicative order mod every prime?"</a></p>
Matt E
221
<p>This is an amplification of Gerry Myerson's answer, which may be helpful.</p> <p>You are asking about the kernel of the map $\mathbb Q^{\times}/(\mathbb Q^{\times})^k \to L^{\times}/(L^{\times})^k,$ where $L =\mathbb Q(\zeta_k)$.</p> <p>In general, for any field $K$ of char. prime to $k$, there is a natural isomorphism $K^{\times}/(K^{\times})^k \cong H^1(G_K,\mu_k)$. (This is the content of Kummer theory, and follows from Hilbert's Thm. 90.)</p> <p>Thus if $L$ is a Galois extension of $K$, the kernel of the map $K^{\times}/(K^{\times})^k \to L^{\times}/(L^{\times})^k$ is naturally identified with the kernel of the restriction map $H^1(G_K,\mu_k) \to H^1(G_L,\mu_k)$, which by the inflation-restriction exact sequence is equal to $H^1(Gal(L/K),\mu_k(L))$ (where $\mu_k(L)$ denotes the subgroup of $\mu_k$ consisting of elements which lie in $L$).</p> <p>If we apply this with $K = \mathbb Q$ and $L = \mathbb Q(\zeta_k)$, we find that the kernel you are interested in is identified with $H^1((\mathbb Z/k)^{\times},\mu_k)$, which is not too hard to compute.</p> <p>One approach to the computation is as follows: If we factor $k$ into a product of powers of distinct primes, say $k = \prod p^n,$ then $\mu_k = \oplus \mu_{p^n},$ and so we are reduced to computing $$H^1((\mathbb Z/m)^{\times} \times (\mathbb Z/p^n)^{\times}, \mu_{p^n})$$ for each $p$ (where, after having chosen a particular $p$, I have written $k = m p^n$, with $m$ coprime to $p$). One can compute this lots of ways, e.g. via the Künneth formula.</p> <p>The key facts are that if $p$ is odd then (since the mod $p$ cyclotomic character is distinct from the trivial character) $H^i((\mathbb Z/p^n)^{\times},\mu_{p^n})$ vanishes for all $i$, while if $p = 2$ and $n \geq 1$ (resp. $n \geq 2$), then $H^0((\mathbb Z/2^n)^{\times},\mu_{2^n})$ (resp. 
$H^1((\mathbb Z/2^n)^{\times},\mu_{2^n})$) has order two.</p> <p>From these, one deduces that $H^1((\mathbb Z/k)^{\times},\mu_k)$ is trivial if $k$ is odd; is a product of $l$ cyclic groups of order $2$ if $k$ is exactly divisible by $2$, and is divisible by $l$ distinct odd primes; and is a product of $l+1$ cyclic groups of order $2$ if $k$ is divisible by $4$, and by $l$ distinct odd primes.</p> <p>Here are the concrete interpretations: </p> <ol> <li><p>If $k$ is odd, then any element of $\mathbb Q^{\times}$ which becomes a $k$th power in $\mathbb Q(\zeta_k)$ was already a $k$th power in $\mathbb Q$.</p></li> <li><p>If $k = 2m,$ where $m$ is odd, divisible by primes $p_1,\ldots,p_l$, then any element of $\mathbb Q^{\times}$ which becomes a $k$th power in $\mathbb Q(\zeta_k) = \mathbb Q(\zeta_m)$ is a product of powers of $p_1^m,\ldots,p_l^m$.</p></li> <li><p>If $k = 2^n m,$ where $m$ is odd, divisible by primes $p_1,\ldots,p_l$, and $n \geq 4,$ then any element of $\mathbb Q^{\times}$ which becomes a $k$th power in $\mathbb Q(\zeta_k)$ is a product of powers of $p_1^{2^{n-1}m},\ldots,p_l^{2^{n-1}m}, (-4)^{2^{n-2}m}.$</p></li> </ol> <p>Of course, one doesn't need group cohomology to work this out. The advantage of the group cohomology approach, though, is that it's completely systematic. (Except perhaps for the concrete interpretation part, which involves making Kummer theory effective; although in the particular case you are interested in, it's pretty easy to see directly that the specified elements become $k$th powers in $L$, and the problem is just to show that there are no other elements that do, for which the abstract cohomology computations suffice.)</p> <hr> <p>Answer to your question: it follows for odd $k$ that $a$ was already a rational $k$th power.</p>
1,977,588
<p>In books like Calculus (Larson), in the statements of theorems like Rolle's theorem, when they talk about continuity they use the closed interval [a,b], but when they talk about differentiability they use the open interval (a,b). </p> <p>Why are closed intervals used for continuity and open intervals for differentiability?</p> <p>Why can't you say "differentiable on the closed interval [a,b]"?</p> <p><a href="https://i.stack.imgur.com/FX2a4.jpg" rel="noreferrer">Rolle's Theorem definition</a></p>
Aloizio Macedo
59,234
<p>It is not that "closed intervals are used for continuity and open intervals for differentiability" (more on this one later). It is that, <strong>for Rolle's Theorem</strong> (and the Mean Value Theorem), we <em>need</em> those hypotheses. </p> <p>In the proof, we use that a continuous function on $[a,b]$ attains a maximum. And we only need differentiability inside, so we do not need to make further assumptions about differentiability on the boundary (again, more on this later). And it is a nice exercise to see that if you relax any hypothesis of Rolle's Theorem you no longer have a true general statement.</p> <p>Now, continuity can be talked about in far more general settings. More particularly, we can talk about continuity on any subset of the real numbers in a rather canonical fashion (no need for intervals, closed or open or whatever).</p> <p>Differentiability is a little trickier. It is common to define differentiability only on open sets when we are in Euclidean space (not only open <em>intervals</em>, but open sets in general). This is partly due to the fact that being able to differentiate from every direction is a must in some theorems and some basic facts which we would like to have. However, there are cases for which talking about differentiability, in some sense, on "not-open" sets is useful and/or a must. This is true for example when talking about functions on the closed half-space (which enhances its discussion on manifolds with boundaries), or when talking about closed submanifolds of some manifold. </p> <p>In your particular setting, we can define differentiability on $[a,b]$ in many ways. Firstly, we can simply require that the limit which defines the derivative exists at the endpoints (however, it will be only a one-sided limit). Or we can say that $f$ is differentiable on $[a,b]$ if there exists a differentiable function $g$ on an open set containing $[a,b]$ such that $g|_{[a,b]}=f$. 
Instead of discussing this further, I'll just say that differentiability is more subtle than continuity with respect to its domains.</p>
127,808
<p>I have <a href="https://math.stackexchange.com/questions/356925/a-basis-of-the-symmetric-power-consisting-of-powers">asked this question on math.se</a>, but did not get an answer - I was quite surprised because I thought that lots of people must have thought about this before:</p> <p>Let $V$ be a complex vector space with basis $x_1,\ldots,x_n\in V$. Denote by $v_1\odot\cdots\odot v_k$ the image of $v_1\otimes\cdots\otimes v_k$ in the symmetric power $\newcommand{\Sym}{\mathrm{Sym}}\Sym^k(V)$. It is well-known that the elements $v^{\odot k}$ for $v\in V$ generate this space (see, for instance, <a href="https://math.stackexchange.com/questions/137912/can-e-n-always-be-written-as-a-linear-combination-of-n-th-powers-of-linear-p/138411#138411">this answer on math.se</a>), so they must contain a basis. </p> <p>In other words, let $N=\binom{n+k-1}k$, then there must be $v_1,\ldots,v_N\in V$ with $$\mathrm{Sym}^k V = \mathbb Cv_1^{\odot k} \oplus \cdots \oplus \mathbb C v_N^{\odot k}.$$ I am looking for an explicit description of such a basis. Is such a description known? Is there maybe even a <em>"nice"</em> or somewhat <em>"natural"</em> choice for the $v_i$ as linear combinations of the $x_i$?</p>
Abdelmalek Abdesselam
7,410
<p>I would look up a book on the calculus of finite differences in a multivariate setting. The claim here is to show that for any multi-index $\alpha=(\alpha_1,\ldots,\alpha_n)$ of length $k$ one can express the multiple derivative at zero $$ \left(\frac{\partial}{\partial t}\right)^\alpha \ (t_1x_1+\cdots+ t_n x_n)^k $$ as a linear combination of finite difference analogous expressions which should only involve the evaluation of $(t_1x_1+\cdots+ t_n x_n)^k$ at integer points $(t_1,\ldots,t_n)$ with nonnegative coordinates adding up to $k$. This is the same as the above candidate basis considered by you and Peter. I don't know if there exists a multivariate analogue of the <a href="http://en.wikipedia.org/wiki/Finite_difference" rel="nofollow">Newton series</a>. If so then this would immediately imply the wanted statement.</p> <hr> <p>Edit: Apparently there is such a formula due to Lascoux and Schutzenberger, see Theorem 9.6.1 page 148 in the book "Symmetric functions and combinatorial operators on polynomials" by Alain Lascoux. Another source on the web is <a href="http://phalanstere.univ-mlv.fr/~al/ARTICLES/NewtonInterp.ps.gz" rel="nofollow">here</a>. It also has the required property here which is that the number of finite differences taken is the same as the degree of the multiplying Schubert polynomial.</p> <hr> <p>Edit: @Jesko you're right it is a bit more complicated than what I said. Also, the Lascoux-Schutzenberger formula might not be the simplest to use here.</p> <p>First note that expressions $$ \prod_{i=1}^{n-1} (x_i-x_n)^{\beta_i}\ \times\ (kx_n)^{k-|\beta|}\ , $$ where $\beta$ ranges over multiindices with $n-1$ components and length $|\beta|\le k$, form a basis. Now you get the latter as derivatives $$ \left(\frac{\partial}{\partial t}\right)^{\beta} \ \left(t_1x_1+\cdots+ t_{n-1} x_{n-1}+\left(k-\sum_{i=1}^{n-1}t_i\right)x_n\right)^k $$ at $t=0$.</p> <p>Call $f(t_1,\ldots,t_{n-1})$ the polynomial function to be hit with derivatives. 
One has a multivariate Newton expansion for it: $$ f(t)=\sum_{m} (t-a)^m \partial^m f(a_{11},a_{21},\ldots,a_{n-1,1}) $$ as follows. Here $a$ stands for a matrix of indeterminates $(a_{ij})$ with $1\le i\le n-1$ and $1\le j\le d$, with $d$ high enough. Let $\partial_{ij}$ denote the divided difference operator acting on functions of these indeterminates as $$ \partial_{ij} g=\frac{1}{a_{i,j+1}-a_{ij}}\left( g({\rm argument\ with\ }a_{i,j+1}\ {\rm and}\ a_{ij}\ {\rm exchanged})- g \right)\ . $$ The notation $m=(m_1,\ldots,m_{n-1})$ is for a multiindex with nonnegative entries. We also write the corresponding operator $$ \partial^m = \prod_{i=1}^{n-1} \left(\partial_{i, m_i} \cdots\partial_{i,2}\partial_{i,1}\right) $$ noting that finite difference operators concerning different groups of variables commute. Finally $$ (t-a)^m=\prod_{i=1}^{n-1} \left((t_i-a_{i,m_i})\cdots(t_i-a_{i,2})(t_i-a_{i,1})\right)\ . $$ The formula basically amounts to applying Newton's univariate formula in each coordinate direction separately. Now use this with the choice $a_{i,j}=j-1$, then take the beta derivative in the $t$'s and that should be it.</p>
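In one variable, the Newton forward-difference series appealed to above reconstructs a degree-$k$ polynomial from its values at the integer points $0,1,\dots,k$ alone, which is the shape of the argument; a sketch (the sample cubic and evaluation points are arbitrary choices):

```python
def newton_forward(values, t):
    """Evaluate the interpolating polynomial through (0, values[0]),
    (1, values[1]), ... at t, via Newton's forward-difference series
    p(t) = sum_j C(t, j) * Delta^j p(0)."""
    diffs, coeffs = list(values), []
    while diffs:
        coeffs.append(diffs[0])                      # Delta^j p(0)
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    total, binom = 0.0, 1.0                          # binom = C(t, j)
    for j, c in enumerate(coeffs):
        total += c * binom
        binom *= (t - j) / (j + 1)                   # C(t, j+1) from C(t, j)
    return total

p = lambda t: t ** 3 - 2 * t + 5                     # arbitrary cubic example
vals = [p(t) for t in range(4)]                      # values at 0, 1, 2, 3 suffice
assert abs(newton_forward(vals, 10) - p(10)) < 1e-9
assert abs(newton_forward(vals, 2.5) - p(2.5)) < 1e-9
```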
22,207
<p>How to make a defined symbol stay in symbol form?</p> <pre><code>w = 3; g = 4; {w, g}[[2]] </code></pre> <blockquote> <p><code>4</code></p> </blockquote> <p>I want the output to be <strong><code>g</code></strong> and not <code>4</code>. For example, if I want to save different definitions by <code>DumpSave</code> in different files like below:</p> <p><code>Table[DumpSave["/Users/simonlausen/Desktop/Input/ex"&lt;&gt;ToString[i]&lt;&gt;".mx", {w,g}[[i]]],{i,1,2}]</code></p> <p>Any suggestions?</p>
Jacob Akkerboom
4,330
<p>I suppose this answer does not have much added value over that of Jens, but I'll post it anyway. As a remark about the part of the question about DumpSave, an alternative method to that of Jens is the following. I find that in cases where things get evaluated that you don't want evaluated, it helps to temporarily replace the function that you want to "do in the end", in this case DumpSave, by Hold. Then in the end you can replace the Hold by your function (DumpSave) by using Apply (@@). In this case you would proceed as follows:</p> <pre><code>myList = {Hold[w], Hold[g]}; Table[ DumpSave @@ Delete[ Hold[Evaluate[ "/Users/simonlausen/Desktop/Input/ex" &lt;&gt; ToString[iii] &lt;&gt; ".mx", myList[[iii]]]] , {2, 0} ] , {iii, 1, 2} ] </code></pre>
2,951,825
<p>I want to show formally that </p> <p><span class="math-container">$$M =\{(t, \vert t \vert) \text{ }\vert t \in \mathbb{R} \} $$</span> </p> <p>is not a smooth <span class="math-container">$C^{\infty}$</span>-submanifold of <span class="math-container">$\mathbb{R}^2$</span>. </p> <p>My attempt: Intuitively it's clear that the problem is the origin <span class="math-container">$(0,0)$</span>. Indeed, <span class="math-container">$M$</span> is the graph of the absolute value function, which is not differentiable at the origin. </p> <p>But I have some problems showing that <span class="math-container">$M$</span> isn't a smooth manifold in a rigorous formal way. </p> <p>In the lecture we are working with the following definition: <span class="math-container">$M$</span> is an <span class="math-container">$n$</span>-dimensional smooth (so <span class="math-container">$C^{\infty}$</span>) submanifold of <span class="math-container">$\mathbb{R}^{n+k}$</span> iff for every <span class="math-container">$p \in M$</span> there exist an open subset <span class="math-container">$U \subset \mathbb{R}^{n+k}$</span> with <span class="math-container">$p \in U$</span>, an open <span class="math-container">$V \subset \mathbb{R}^n$</span> and a smooth function <span class="math-container">$\gamma \in C^{\infty}(V,U)$</span> with the following properties:</p> <p><span class="math-container">$\gamma(V) = U \cap M$</span></p> <p><span class="math-container">$rank(D\gamma \vert _v) = n$</span> at every <span class="math-container">$v \in V$</span>, where <span class="math-container">$D\gamma \vert _v$</span> is the differential of <span class="math-container">$\gamma$</span> at <span class="math-container">$v$</span></p> <p><span class="math-container">$\gamma$</span> is a homeomorphism from <span class="math-container">$V$</span> to <span class="math-container">$M \cap U$</span></p> <p>I know that there are some other equivalent definitions of smooth manifolds, but I want to know how to get a contradiction using this criterion.</p> <p>The problem is to show that no such function exists: if I take some <span class="math-container">$\gamma$</span> whose image contains <span class="math-container">$(0,0)$</span>, how do I show that there is some <span class="math-container">$v_0 \in V$</span> with <span class="math-container">$rank(D\gamma \vert _{v_0}) = 0$</span>?</p> <p>Another idea would be to get a contradiction by showing that <span class="math-container">$\gamma'$</span> can't be continuous along some path, right? But here I also don't see how to construct the contradiction formally using the submanifold criterion above. </p>
Ernie060
592,621
<p>It's not always easy to show that a subset isn't a smooth submanifold using the definition. May I suggest another approach?</p> <p>Maybe you have seen a version of the Implicit Function Theorem like this:</p> <blockquote> <p>Let <span class="math-container">$S\subset\mathbb{R}^2$</span> be a submanifold (curve) and <span class="math-container">$p\in S$</span>. Then there is a neighborhood <span class="math-container">$V$</span> of <span class="math-container">$p$</span> in <span class="math-container">$S$</span> such that <span class="math-container">$V$</span> is the graph of a differentiable function of the form <span class="math-container">$y=f(x)$</span> or <span class="math-container">$x=g(y)$</span>.</p> </blockquote> <p>Suppose <span class="math-container">$M$</span> is a smooth submanifold (curve). Then around <span class="math-container">$(0,0)$</span> it locally is the graph of a function. This function cannot be of the form <span class="math-container">$x=g(y)$</span>, since projection of the curve on the <span class="math-container">$y$</span>-axis is not surjective. So locally the curve must be of the form <span class="math-container">$(t,f(t))$</span>. But this implies that <span class="math-container">$f(t)=|t|$</span>. Since this <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$0$</span>, we arrive at a contradiction and must conclude that <span class="math-container">$M$</span> is not a smooth submanifold.</p>
3,800,521
<p>Let <span class="math-container">$x=\tan y$</span>, then <span class="math-container">$$ \begin{align*}\sin^{-1} (\sin 2y )+\tan^{-1} \tan 2y &amp;=4y\\ &amp;=4\tan^{-1} (-10)\\\end{align*}$$</span></p> <p>Given answer is <span class="math-container">$0$</span></p> <p>What’s wrong here?</p>
19aksh
668,124
<p>We can't bluntly take <span class="math-container">$\sin^{-1}(\sin 2y) = 2y$</span> and so with <span class="math-container">$\tan^{-1}(\tan 2y)$</span>, because we don't know the value of <span class="math-container">$2y$</span> and the range in which it lies.</p> <p><a href="https://i.stack.imgur.com/MijvP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MijvP.png" alt="enter image description here" /></a> So, substitute directly.</p> <p><span class="math-container">$f(-10) = \sin^{-1}\left(\dfrac{-20}{101}\right)+\tan^{-1}\left(\dfrac{20}{99}\right) = -\sin^{-1}\left(\dfrac{20}{101}\right)+\tan^{-1}\left(\dfrac{20}{99}\right)$</span></p> <p>Now let, <span class="math-container">$\tan z = \dfrac{20}{99} = \dfrac{20/101}{99/101}\Rightarrow \sin z =\dfrac{20}{101} \Rightarrow z = \sin^{-1}\left(\dfrac{20}{101}\right)$</span></p> <p>(Here <span class="math-container">$0&lt;\tan^{-1}\left(\dfrac{20}{99}\right)&lt;\dfrac{\pi}2$</span>)</p> <p>So, we have <span class="math-container">$f(-10) = -\sin^{-1}\left(\dfrac{20}{101}\right) + \sin^{-1}\left(\dfrac{20}{101}\right) = 0$</span></p>
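The point about ranges can also be checked numerically: with $y=\tan^{-1}(-10)$, $2y\approx-2.94$ lies outside $(-\pi/2,\pi/2)$, so $\sin^{-1}(\sin 2y)=-\pi-2y$ while $\tan^{-1}(\tan 2y)=2y+\pi$, and the two branch shifts cancel:

```python
from math import asin, atan, sin, tan

y = atan(-10)                                  # tan(y) = -10
f_val = asin(sin(2 * y)) + atan(tan(2 * y))

assert abs(sin(2 * y) - (-20 / 101)) < 1e-12   # 2t/(1+t^2) at t = -10
assert abs(tan(2 * y) - (20 / 99)) < 1e-12     # 2t/(1-t^2) at t = -10
assert abs(f_val) < 1e-12                      # the corrections cancel: f(-10) = 0
```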
2,667,230
<p>Let $(X, d)$ be a complete metric space. Let $f : X \to X$ be a function such that for all distinct $x, y \in X$,</p> <p>$d(f^k(x), f^k(y)) &lt; c \cdot d(x, y)$, for some real number $c &lt; 1$ and an integer $k &gt; 1$. Show that $f$ has a unique fixed point. </p> <p>My attempt: suppose $f(x) = x$ and $f(y) = y$. Then $f^k(x) = f(f(\cdots f(x)\cdots)) = x$ and similarly $f^k(y) = f(f(\cdots f(y)\cdots)) = y$.</p> <p>Here I get $d(x, y) = d(f^k(x), f^k(y)) &lt; c \cdot d(x, y)$, which forces $d(x, y) = 0$, so $x = y$ and the fixed point, if it exists, is unique.</p> <p>Is my proof correct or not? Please tell me, and if it is not correct, please help me.</p>
epi163sqrt
132,007
<p>We interpret the problem as follows: Given is the alphabet $V=\{1,2\}$. Find the number of strings consisting of characters of $V$ of length $n\geq 0$ so that each occurrence of $1$ is followed by <em>at least</em> $d$ characters $2$. We do so by encoding the problem using generating functions.</p> <blockquote> <p>Each of the admissible strings starts with zero or more $2$'s. This can be encoded as \begin{align*} 1+z+z^2+\cdots=\frac{1}{1-z}\tag{1} \end{align*}</p> <p>Each so created string can be followed by zero or more $1$'s, whereby each occurrence of $1$ is replaced by $1$ followed by at least $d$ $2$'s. This can be encoded as \begin{align*} 1+z(z^d+z^{d+1}+\cdots)+z^2(z^d+z^{d+1}+\cdots)^2+\cdots&amp;=\frac{1}{1-z\left(z^d+z^{d+1}+\cdots\right)}\\ &amp;=\frac{1}{1-\frac{z^{d+1}}{1-z}}\tag{2} \end{align*}</p> </blockquote> <p>Multiplying (1) and (2) together we get a generating function $A(z)$ where $[z^n]$, i.e. the coefficient of $z^n$, contains the number of admissible strings of length $n$.</p> <blockquote> <p>We obtain \begin{align*} \color{blue}{[z^n]A(z)}&amp;=[z^n]\left(\frac{1}{1-z}\cdot\frac{1}{1-\frac{z^{d+1}}{1-z}}\right)=[z^n]\frac{1}{1-z(1+z^d)}\\ &amp;=[z^n]\sum_{j=0}^\infty z^j(1+z^d)^j\tag{3}\\ &amp;=\sum_{j=0}^n [z^{n-j}](1+z^d)^j\tag{4}\\ &amp;=\sum_{j=0}^n [z^j](1+z^d)^{n-j}\tag{5}\\ &amp;=\sum_{j=0}^n[z^j]\sum_{k=0}^{n-j}\binom{n-j}{k}z^{dk}\tag{6}\\ &amp;=\sum_{j=0}^{\left\lfloor\frac{n}{d}\right\rfloor}[z^{dj}]\sum_{k=0}^{n-dj}\binom{n-dj}{k}z^{dk}\tag{7}\\ &amp;\,\,\color{blue}{=\sum_{j=0}^{\left\lfloor\frac{n}{d}\right\rfloor}\binom{n-dj}{j}}\tag{8} \end{align*}</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (3) we apply the geometric series expansion.</p></li> <li><p>In (4) we use the linearity of the <em>coefficient of</em> operator and apply the rule $[z^{p-q}]A(z)=[z^p]z^qA(z)$. 
We also set the upper limit of the series to $n$ since the exponent of $z$ is non-negative.</p></li> <li><p>In (5) we change the order of summation $j\to n-j$.</p></li> <li><p>In (6) we apply the binomial theorem.</p></li> <li><p>In (7) we observe that we need only multiples of $d$ as exponent.</p></li> <li><p>In (8) we select the coefficient accordingly.</p></li> </ul>
345,310
<p>This is computed based on the following recursive formula <span class="math-container">$$w_n=\frac{\lambda_nw_{n+1}+\mu_nw_{n-1}+1}{\lambda_n+\mu_n}$$</span> where: <span class="math-container">$n$</span> is the initial state, State <span class="math-container">$0$</span> is absorbing, <span class="math-container">$\lambda_n$</span> and <span class="math-container">$\mu_n$</span> are the up and down rates respectively and <span class="math-container">$$\sum_{n=0}^\infty\prod_{j=1}^n\frac{\mu_j}{\lambda_j}$$</span>diverges (to make extinction certain). To get the recursion started, we need <span class="math-container">$w_0=0$</span> and <span class="math-container">$$w_1=\frac{1}{\mu_1}\sum_{n=0}^\infty\prod_{j=1}^n\frac{\lambda_j}{\mu_{j+1}}$$</span>The derivation of the last formula can be found in S. Karlin's classic book "A first course in stochastic processes". The last step of his proof requires showing that <span class="math-container">$$\lim_{n\to\infty}\prod_{j=1}^n\frac{\lambda_j}{\mu_j}(w_n-w_{n+1})=0$$</span>Proving this is, according to Karlin, "more involved but still possible" (but he does not do it). How does one prove that the last limit must equal <span class="math-container">$0$</span>?</p>
Honza
141,969
<p>A solution for <span class="math-container">$w_i$</span> can be built directly by defining <span class="math-container">$$\delta_i=w_{i+1}-w_i$$</span> where <span class="math-container">$\delta_i$</span> is clearly the expected time to reach State <span class="math-container">$i$</span> (for the first time) from State <span class="math-container">$i+1$</span>.</p> <p>We then need to solve <span class="math-container">$$\delta_i=\frac{\mu_i}{\lambda_i}\delta_{i-1}-\frac{1}{\lambda_i}$$</span></p> <p>The general solution is<span class="math-container">$$\delta_i=\sum_{n=i+1}^\infty\frac{1}{\lambda_n}\prod_{j=i+1}^n\frac{\lambda_j}{\mu_j}+c\prod_{j=1}^i\frac{\mu_j}{\lambda_j}$$</span>Realizing that <span class="math-container">$\delta_i$</span> cannot be a function of a rate corresponding to a state lower than State <span class="math-container">$i+1$</span>, <span class="math-container">$c$</span> must be equal to <span class="math-container">$0$</span>. The rest easily follows.</p>
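As a sanity check, take constant rates $\lambda_n=\lambda<\mu=\mu_n$ (an assumption chosen to make the closed form elementary): the series for $w_1$ collapses to $1/(\mu-\lambda)$, and running the recursion forward then reproduces $w_n=n/(\mu-\lambda)$, which indeed satisfies $w_n=(\lambda w_{n+1}+\mu w_{n-1}+1)/(\lambda+\mu)$:

```python
# Constant rates lam < mu: extinction is certain and w_n = n / (mu - lam).
lam, mu = 1.0, 2.0

# w_1 = (1/mu) * sum_n prod_{j<=n} lam_j / mu_{j+1} = (1/mu) * sum (lam/mu)^n
w1 = (1 / mu) * sum((lam / mu) ** n for n in range(200))
assert abs(w1 - 1 / (mu - lam)) < 1e-12

# forward recursion: w_{n+1} = ((lam + mu) w_n - mu w_{n-1} - 1) / lam
w_prev, w = 0.0, w1
for n in range(1, 20):
    w_prev, w = w, ((lam + mu) * w - mu * w_prev - 1) / lam
    assert abs(w - (n + 1) / (mu - lam)) < 1e-6    # w_n = n / (mu - lam)
```

Note the forward iteration amplifies rounding error by $(\mu/\lambda)^n$, which is exactly the delicacy behind the limit Karlin leaves to the reader.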
896,940
<p>I tried 9D + (-10D):</p> <p>9D = 0000 1001</p> <p>10D = 0000 1010</p> <p>Invert 10D = 1111 0101, and adding 1 it becomes 1111 0110.</p> <p>After that, add up 9D + (-10D) == 0000 1001 + 1111 0110, but the answer equals 1111 1111, which is 255 in decimal. The answer should be -1, right? Did anything go wrong?</p> <p>Thank you very much. </p>
cjferes
89,603
<p>In two's complement representation for binary numbers, the number 1111 1111 represents -1. You misinterpreted the result as a "normal" (unsigned) binary number.</p> <p>In two's complement, binary numbers of $n$ bits represent values ranging from $-2^{n-1}$ to $2^{n-1}-1$.</p>
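A quick sketch of the two readings of the same 8-bit pattern:

```python
BITS = 8
MASK = (1 << BITS) - 1                    # 0xFF

def to_twos(x):
    """Encode a small signed integer as an 8-bit two's-complement pattern."""
    return x & MASK

def from_twos(u):
    """Decode: the top bit carries weight -2^(BITS-1) = -128."""
    return u - (1 << BITS) if u & (1 << (BITS - 1)) else u

s = (to_twos(9) + to_twos(-10)) & MASK    # 0000 1001 + 1111 0110
assert to_twos(-10) == 0b11110110         # "invert the bits and add 1"
assert s == 0b11111111                    # 255 when read as unsigned...
assert from_twos(s) == -1                 # ...but -1 in two's complement
```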
151,864
<p>I would like to generate a random password of a defined length which can easily be typed in with a standard keyboard.</p> <p>As a start I tried the following:</p> <pre><code>SeedRandom["pass"]; StringJoin[RandomChoice[CharacterRange[33, 126], 10]] (* "=IP@7mbYcB" *) </code></pre> <p>Do you know other solutions?</p>
yohbs
367
<p>Here's something which is nice and might be easy to remember:</p> <pre><code>StringJoin @@ RandomSample[#, Length@#] &amp;@ Flatten@{IntegerString@RandomInteger[{10, 999}], Capitalize /@ RandomWord[3], RandomSample[Characters@"!@_%$^=+*.", 2] } </code></pre> <p>Select examples:</p> <pre><code>"Tearless+PostdoctoralDragon=635" (*DEFINITELY MY FAVORITE!!*) "Workpiece.Monopolize908Moderate=" "Venereal854RebelliouslyProportionality%." </code></pre> <h2>EDIT</h2> <p>To increase the entropy, you may want to make spelling mistakes. Here's a great way to produce pronounceable non-words:</p> <pre><code>spoilWord[word_String] := Transliterate@ Transliterate[word, RandomChoice@{"Hebrew", "Arabic", "Japanese", "Korean","Greek"} ] </code></pre> <p>Example: </p> <pre><code>spoilWord@RandomWord[] (*"matelialismeu"*) </code></pre>
771,959
<p>Let $p$ be prime, $n \in \mathbb{N}$ and $p \nmid n$. </p> <p>$\Phi_n$ is the $n$-th cyclotomic polynomial.</p> <p>How can I find the maximum $n \in \mathbb{N}$ (with $p \nmid n$) so that $\Phi_n$ splits into linear factors over $\mathbb{Z}/(p)$?</p>
Kaj Hansen
138,538
<p>$a \equiv b \pmod{n} \iff n|(a-b)$. Knowing this, then certainly $n|r(a-b)$.</p> <p>Hence, $n|(ra-rb) \iff ra \equiv rb \pmod{n}$.</p>
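The implication can also be checked exhaustively for small values (a brute-force sketch, added only to build confidence in the algebra):

```python
# If n divides (a - b), then n divides r*(a - b) = ra - rb.
ok = all(
    (r * a - r * b) % n == 0
    for n in range(1, 13)
    for a in range(-12, 13)
    for b in range(-12, 13)
    for r in range(-6, 7)
    if (a - b) % n == 0
)
```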
6,887
<p>Let $x_1, x_2, \ldots$ be iid draws from a Laplace distribution with scale parameter $b$. Is there a relatively nice closed form for the distribution of $x_1+x_2+\cdots+x_n$? I've seen a derivation floating around for when $b=1$, but I couldn't figure out a generalisation. </p>
David Bar Moshe
1,059
<p>The distribution of the $n$-th convolution of the Laplace distribution can be computed from the characteristic function (see on <a href="https://en.wikipedia.org/wiki/Laplace_distribution" rel="nofollow">Wikipedia</a>): $$\frac{\exp(i \mu t)}{1+b^2 t^2} \,.$$ The characteristic function of the $n$-th convolution becomes: $$\frac{\exp(i n \mu t)}{(1+b^2 t^2)^n} = \frac{\exp(i n \mu t)}{(1 - i b t)^n (1 + i b t)^n} \,.$$ The inverse Fourier transform can be computed using the residue theorem. The integration contour is closed from the upper or lower half plane according to the sign of $(x-n \mu)$.</p>
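As a numerical illustration for $n=2$, $b=1$, $\mu=0$: carrying out the residue computation at the double pole $t=i$ gives the density $\tfrac14(1+|x|)e^{-|x|}$ (my own computation; the answer only outlines the method). A quadrature cross-check in Python:

```python
import math

def laplace_pdf(t, b=1.0):
    return math.exp(-abs(t) / b) / (2 * b)

def conv2(x, steps=4000, span=40.0):
    # Midpoint-rule convolution: density of X1 + X2 at x for iid Laplace(0, 1)
    h = 2 * span / steps
    total = 0.0
    for i in range(steps):
        t = -span + (i + 0.5) * h
        total += laplace_pdf(t) * laplace_pdf(x - t)
    return total * h

def closed_form(x):
    # Density obtained from the double pole of 1/(1 + t^2)^2 at t = i
    return (1 + abs(x)) * math.exp(-abs(x)) / 4

max_err = max(abs(conv2(x) - closed_form(x)) for x in (-2.0, 0.0, 0.5, 3.0))
```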
2,138,009
<p>Let $f(z)=(1+i)z+1$. Then $f(z)=\sqrt 2 e^{i\pi/4}z+1$ and thus $f=t\circ h\circ r$ where $t$ is the translation of vector $1$, $r$ the rotation of center $0$ and angle $\pi/4$ and $h$ the homothety of parameter $\sqrt 2$. I found a fixed point $z=\frac{-1}{2}+\frac{i}{2}$.</p> <p>1) What if $f\circ f\circ f\circ f\circ f$ ? It looks to be a translation in the direction of $i$ and a homothety of parameter 4, but is there an easy way to prove it (without calculation)?</p> <p>2) Let $z_0=i$, $z_1=\sqrt 3+2i$ and $z_2=\sqrt 3-2i$, and consider the triangle $T=\Delta (z_0,z_1, z_2)$. Let $g_n=\underbrace{f\circ ...\circ f}_{n\ times}$. What is the smallest $n$ s.t. $$Area(g_n(T))\geq 100 Area(T) \ \ ?$$</p> <p>First, $Area(T)= 2\sqrt 3$. I think that $g_n(T)$ is the same triangle with side lengths bigger by $n\sqrt 2$, and thus I would say that $$Area(g_n(T))=\frac{n4\sqrt 2\cdot n\sqrt 2\sqrt 3}{2}=4\sqrt 3n^2.$$ Therefore, $$4\sqrt 3n^2\geq 200\sqrt 3\implies n^2\geq 50\implies n\geq 8.$$ We conclude that the smallest $n$ is $n=8$. Is it correct? </p>
Intelligenti pauca
255,730
<p>The fixed point is $z_0=i$ and rewriting $f$ as $$ f(z)-i=(1+i)(z-i)=\sqrt2 e^{i\pi/4}(z-i) $$ you can see that $f$ is a rotation of $\pi/4$ and a homothetic transformation of ratio $\sqrt2$, both with center $z_0$.</p> <p>It is then obvious that: $$ f^n(z)-i=(1+i)^n(z-i)=2^{n/2}e^{in\pi/4}(z-i), $$ that is $f^n$ is a rotation of $n\pi/4$ and a homothetic transformation of ratio $(\sqrt2)^n$, both with center $z_0=i$.</p> <p>1) For $n=4$, in particular: $f^4(z)-i=-4(z-i)$.</p> <p>2) As the area scales as the square of a side, you must have $2^{n/2}\ge10$, that is $n\ge7$.</p>
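Iterating $f$ on the triangle numerically confirms both the fixed point $z_0=i$ and the value $n=7$ (an illustrative Python check, not part of the argument):

```python
def area(z0, z1, z2):
    # Shoelace formula with complex vertices
    return abs((z1 - z0).real * (z2 - z0).imag - (z1 - z0).imag * (z2 - z0).real) / 2

f = lambda z: (1 + 1j) * z + 1
tri = [1j, 3 ** 0.5 + 2j, 3 ** 0.5 - 2j]
base = area(*tri)

n = 0
while area(*tri) < 100 * base:
    tri = [f(z) for z in tri]
    n += 1

fixed_point = f(1j)
```

Each application of $f$ multiplies the area by $|1+i|^2=2$, so the loop stops at the first $n$ with $2^n\ge 100$, i.e. $n=7$.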
427,835
<p>Which website/journal/magazine would you recommend to keep up with advances in applied mathematics? More specifically my interest are:</p> <ul> <li>multivariate/spatial interpolation</li> <li>numerical methods</li> <li>computational geometry</li> <li>geostatistics</li> <li>etc</li> </ul> <p>I am looking for a fairly high-level and broad ranging source of info. </p>
lhf
589
<p>Try the <a href="http://www.siam.org/journals/sirev.php" rel="nofollow">SIAM Review</a>. It features Survey and Review papers of wide interest. </p>
3,101,098
<p>From 11, 12 in the book Logic in Computer Science by M. Ryan and M. Huth:</p> <blockquote> <p>"What we are saying is: let’s make the assumption of ¬q. To do this, we open a box and put ¬q at the top. Then we continue applying other rules as normal, for example to obtain ¬p. But this still depends on the assumption of ¬q, so it goes inside the box. Finally, we are ready to apply →i. It allows us to conclude ¬q → ¬p, but that conclusion no longer depends on the assumption ¬q. Compare this with saying that ‘If you are French, then you are European.’ The truth of this sentence does not depend on whether anybody is French or not. Therefore, we write the conclusion ¬q → ¬p outside the box."</p> </blockquote> <p>My question is about the scope of assumptions in propositional logic and proving techniques. I am not sure I fully understand what this text is trying to say. </p> <p>How can an assumption only have scope inside the box, yet once you finish what you want to prove, the result is no longer part of the assumption box and is accessible universally in the proof? WHY is this possible? Why does it not break things in the proof? This looks too convenient and random.</p> <p>Secondly, I do not understand how the French and European example connects to what is written in this text. If somebody could please connect this example to what the author is actually trying to explain through it. </p>
Mauro ALLEGRANZA
108,274
<p>In the calculus there are different types of rules; some allow us to "discharge" assumptions, like e.g. <span class="math-container">$\to$</span>-intro; others do not.</p> <p>The "mechanism" is quite simple: we can make whatever assumption we want, but every "result" we get by applying the rules to it will depend on the assumption made.</p> <p>This "mechanism" is made visible through the box-device: the box is opened with the assumption, and all the formulas derived by way of rules inside the box are dependent on the assumption.</p> <p>This means that we have correctly derived them, <em>provided that</em> the assumption holds.</p> <p>When we use a rule that allows us to discharge assumptions, we step outside the box and the result is no longer dependent on the assumption we have discharged. </p> <hr> <p>Consider this simple example:</p> <p>Assumption: "<span class="math-container">$n$</span> is divisible by <span class="math-container">$4$</span>" (i.e. <span class="math-container">$n= 4 \times k$</span>, for some <span class="math-container">$k$</span>)</p> <p>Here we are assuming a "hypothesis" (it is not true in general that every number is divisible by four).</p> <p>Then we apply some simple "arithmetical transformations": <span class="math-container">$n=(4 \times k)=(2 \times 2 ) \times k= 2 \times (2 \times k) = 2 \times l$</span>.</p> <p>Thus, we have derived: <span class="math-container">$n= 2 \times l$</span>, for some <span class="math-container">$l$</span>, which means: "<span class="math-container">$n$</span> is divisible by <span class="math-container">$2$</span>".</p> <p>Now we can apply <span class="math-container">$\to$</span>-intro and conclude with:</p> <blockquote> <p>"if <span class="math-container">$n$</span> is divisible by <span class="math-container">$4$</span>, then <span class="math-container">$n$</span> is divisible by <span class="math-container">$2$</span>".</p> </blockquote> <p>What have we done? We have discharged the initial assumption (closed the box) and proved the general result that holds for every <span class="math-container">$n$</span>.</p>
2,834,195
<p>Using the method of characteristics on a PDE system, I have gotten a parametric differential equation $$ \frac{dy}{dx} = \frac{y - xy}{1 + xy - x}. $$ where $x$ and $y$ are both functions of a third variable $t$. How could I use Mathematica to solve for the solution curve that $(x(t), y(t))$ follows? There is a similar approach done here: <a href="https://math.stackexchange.com/questions/2358468/does-this-simple-2d-dynamical-system-have-a-conserved-quantity">Does this simple 2D dynamical system have a conserved quantity?</a>.</p> <p>EDIT: The system where this came from is \begin{align*} \frac{dx}{dt} &amp;= x - xy \\ \frac{dy}{dt} &amp;= 1 + xy - y \end{align*}</p> <p>EDIT2: Just to be clear, I know how to draw solution curves with Mathematica. What I am wondering if there is a way to solve the equation analytically and get a closed form curve as the solution. </p>
Community
-1
<p><a href="https://en.m.wikipedia.org/wiki/Stereographic_projection" rel="nofollow noreferrer">Stereographic projection </a> is easy to visualize for $S^2\setminus \{p\}$; and the notion can be extended to $S^n\setminus \{p\}$...</p> <p>This "shows" that $S^n$ can be thought of as the one-point compactification of $\mathbb R^n$ (or in your case, $S^n\setminus \{p\}$).</p>
2,834,195
<p>Using the method of characteristics on a PDE system, I have gotten a parametric differential equation $$ \frac{dy}{dx} = \frac{y - xy}{1 + xy - x}. $$ where $x$ and $y$ are both functions of a third variable $t$. How could I use Mathematica to solve for the solution curve that $(x(t), y(t))$ follows? There is a similar approach done here: <a href="https://math.stackexchange.com/questions/2358468/does-this-simple-2d-dynamical-system-have-a-conserved-quantity">Does this simple 2D dynamical system have a conserved quantity?</a>.</p> <p>EDIT: The system where this came from is \begin{align*} \frac{dx}{dt} &amp;= x - xy \\ \frac{dy}{dt} &amp;= 1 + xy - y \end{align*}</p> <p>EDIT2: Just to be clear, I know how to draw solution curves with Mathematica. What I am wondering if there is a way to solve the equation analytically and get a closed form curve as the solution. </p>
Henno Brandsma
4,280
<p>If that last theorem is allowed to use, your statement is an immediate corollary of it. In your setup you only need note that $X=\mathbb{S}^n\setminus \{p\}$ is locally compact, and the identity is the homeomorphism, and then the conclusion is that $\mathbb{S}^n$ is homeomorphic to the one-point compactification of $X$, as required. It's way easier than reproving part of that theorem as you do in the first part, slightly sloppily.</p>
1,427,816
<p>This is kind of a philosophical question, I guess. But are the elements of the topological closure inside the linear space $X$ all the time? Or do they become apparent only when we introduce the topology? Do we, hence, introduce the topology to control these elements of the space, which are there but out of control when we only have algebraic structure? I.e. (I think) is the linear space smaller than, or the "same" as, the space with the topology? Let's assume we have a reasonable topology.</p>
Ian
83,396
<p>One subtle difference between metric spaces and topological spaces is that the "completion of a topological space" is not a well-defined notion. </p> <p>Of course the closure is well-defined at the level of topological spaces. But unlike the completion, the closure is not really a unary operation, it is a binary operation: we take the closure of a set $A$ <em>in</em> an ambient space $X$. This $X$ could be $A$, but then the closure of $A$ is just $A$. For instance the closure of $(0,1)$ in itself is $(0,1)$; thus the situation might not be so geometrically intuitive if we don't have some assumptions about the ambient space.</p> <p>But in metric spaces, we can talk about completion without an ambient space. For instance, in $L^2([0,\pi])$, we can take the finite linear combinations of $e^{inx}$ for integers $n$. The completion of this is all of $L^2([0,\pi])$. (This is a slightly weaker statement than "everything in $L^2([0,\pi])$ has a $L^2$-convergent Fourier series", which is also true.)</p> <p>Notably, the completion of a metric space cannot be determined by looking only at the induced topology. For example, compare $\mathbb{R}$ with the standard metric and $\mathbb{R}$ with the metric $d(x,y)=|\arctan(x)-\arctan(y)|$. These both generate the standard topology, but in the latter, $x_n=n$ and $x_n=-n$ are Cauchy. Thus the completion of the latter contains a point "at $+\infty$" and a point "at $-\infty$".</p> <p>This difference can get obscured when we work with an ambient space which is a complete metric space like $\mathbb{R}^n$. In this situation the completion and the closure coincide.</p>
4,289,129
<p>Let <span class="math-container">$H$</span> be a group with identity <span class="math-container">$1_H$</span> that is generated by 2 elements <span class="math-container">$a,b$</span> that commute (<span class="math-container">$ab=ba$</span>) and where each has at most order <span class="math-container">$3$</span>. In symbols (I hope I translated correctly):</p> <p><span class="math-container">$$H=\langle a,b\rangle \ \text{, where} \ a^3=b^3=1_H=a^{-1}b^{-1}ab$$</span></p> <p>Assuming <span class="math-container">$H$</span> has exactly order 9 and assuming <span class="math-container">$\{a,b,1_H\}$</span> are all distinct, <strong>what is <span class="math-container">$H$</span> isomorphic to?</strong> (<a href="https://groupprops.subwiki.org/wiki/Groups_of_order_9" rel="nofollow noreferrer">order 9 possibilities</a> are <span class="math-container">$\mathbb Z_9$</span> and <span class="math-container">$\mathbb Z_3 \times \mathbb Z_3$</span>)</p> <hr /> <p><strong>For order 9</strong>: Assuming <span class="math-container">$H$</span> is of order 9, I believe <span class="math-container">$H$</span> is isomorphic to <span class="math-container">$\mathbb Z_3 \times \mathbb Z_3$</span>.</p> <p>Construct a map <span class="math-container">$\gamma: \mathbb Z_3^2 \to H$</span>, <span class="math-container">$\gamma(c \times d)=b^ca^d$</span>, where <span class="math-container">$c,d \in \{0,1,2\}$</span>.</p> <p>Show <span class="math-container">$\gamma$</span> is bijective: obvious</p> <p>Show <span class="math-container">$\gamma$</span> is a homomorphism: For each <span class="math-container">$c,d,e,f \in \{0,1,2\}$</span>, we must show that</p> <p>(<strong>notation</strong>: instead of <span class="math-container">$(c,d) \in \mathbb Z_3^2$</span>, I'll say <span class="math-container">$c \times d$</span>)</p> <p><span class="math-container">$$\gamma(c \times d + e \times f) = \gamma(c \times d) \gamma(e \times f)$$</span></p> <p>I believe this is equivalent to</p> 
<p><span class="math-container">$$b^{c+e}a^{d+f} = b^ca^d b^ea^f$$</span></p> <p>Finally, because <span class="math-container">$a^db^e=b^ea^d$</span> for <span class="math-container">$d,e \in \{0,1,2\}$</span>, we have that</p> <p><span class="math-container">$$RHS = b^ca^d b^ea^f = b^cb^e a^da^f = LHS$$</span></p>
Shaun
104,041
<p>Since, by definition of a presentation, the presentation</p> <p><span class="math-container">$$P=\langle a,b\mid a^3,b^3, ab=ba\rangle$$</span></p> <p>defines a group that maps onto <span class="math-container">$H$</span>, and that group defined by <span class="math-container">$P$</span> is <span class="math-container">$\Bbb Z_3\times \Bbb Z_3$</span>, we must have that <span class="math-container">$\lvert H\rvert\le 9$</span>.</p>
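A brute-force closure computation (illustrative Python, modelling $a=(1,0)$ and $b=(0,1)$ in $\Bbb Z_3\times\Bbb Z_3$) confirms that this group is generated by two commuting elements of order $3$ and has exactly $9$ elements:

```python
a, b = (1, 0), (0, 1)

def mul(x, y):
    # Componentwise addition mod 3: the operation of Z_3 x Z_3
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)

# Generate the subgroup generated by a and b
H = {(0, 0)}
frontier = {a, b}
while frontier:
    H |= frontier
    frontier = {mul(x, g) for x in H for g in (a, b)} - H

all_order_divides_3 = all(mul(mul(x, x), x) == (0, 0) for x in H)
commutes = all(mul(x, y) == mul(y, x) for x in H for y in H)
```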
4,098,682
<p>I am trying to prove this following theorem about multiplying left cosets.</p> <blockquote> <p>Let <span class="math-container">$H \subset G$</span> a subgroup and <span class="math-container">$G/H$</span> the set of left cosets of <span class="math-container">$H$</span> in <span class="math-container">$G$</span>. We can define a group structure on <span class="math-container">$G/H$</span> by setting <span class="math-container">$aH \cdot bH = abH$</span>, which is well-defined if and only if <span class="math-container">$H$</span> is a normal subgroup of <span class="math-container">$G$</span>.</p> </blockquote> <p>Here is my attempt.</p> <blockquote> <p>Suppose that <span class="math-container">$H$</span> is normal in <span class="math-container">$G$</span>. Then <span class="math-container">$aHa^{-1} = H$</span> for all <span class="math-container">$a \in G$</span>, i.e., <span class="math-container">$aH = Ha$</span> for all <span class="math-container">$a \in G$</span>. We then have <span class="math-container">\begin{align*} aH \cdot bH = (Ha)(bH) = H(ab)H = (ab)HH = (ab)H \end{align*}</span> for all <span class="math-container">$a \in G$</span>. Conversely, if <span class="math-container">$H$</span> is not normal, there exists <span class="math-container">$a \in G$</span> and <span class="math-container">$h \in H$</span> such that <span class="math-container">$aha^{-1} \not \in H$</span>. This operation, if it were well-defined, would give <span class="math-container">\begin{align*} (aH)(a^{-1} H) = (aa^{-1})H = eH = H. \end{align*}</span> Taking <span class="math-container">$ah \in aH$</span> and <span class="math-container">$a^{-1} e = a^{-1} \in a^{-1} H$</span>, we get <span class="math-container">$aha^{-1} \in (aH)(a^{-1} h)$</span> but <span class="math-container">$aha^{-1} \not \in H$</span>, a contradiction.</p> </blockquote> <p>I worry I could be waving my hands a bit too much. How does this look? I would appreciate any criticisms.</p>
Arturo Magidin
742
<p>I don’t much like your first argument, to be honest...</p> <p>Proving that multiplication “is well defined” means proving that if <span class="math-container">$aH=a’H$</span> and <span class="math-container">$bH=b’H$</span>, then <span class="math-container">$abH = a’b’H$</span>. I’m not sure your argument establishes that; certainly not as written.</p> <p>The second is correct, but you wrap it around a “proof by fake contradiction” making it more obscure than it needs to be. (That’s what I call a direct proof in which you essentially add an “assume not” on top, and a “a contradiction” at the bottom). To prove that if coset multiplication is well defined then <span class="math-container">$H$</span> is normal, let <span class="math-container">$a\in G$</span> and <span class="math-container">$h\in H$</span>. Then <span class="math-container">$$aha^{-1} = aha^{-1}e\in aHa^{-1}H = (aa^{-1})H = eH = H,$$</span> proving that <span class="math-container">$H$</span> is normal. No need to argue by contradiction, when you have a perfectly valid direct proof there in the middle of it.</p> <p>To prove well-definedness, assume <span class="math-container">$H$</span> is normal, and that <span class="math-container">$aH = a’H$</span>, <span class="math-container">$bH=b’H$</span>. Then: <span class="math-container">$$\begin{align*} abH &amp;= a(bH) = a(b’H) = a(Hb’) = (aH)b’ \\ &amp;= (a’H)b’ = a’(Hb’) = a’(b’H)\\ &amp;= (a’b’)H. \end{align*}$$</span></p> <p>You may find <a href="https://math.stackexchange.com/questions/14282/why-do-we-define-quotient-groups-for-normal-subgroups-only">this extensive discussion on this</a> interesting; or you may not.</p>
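Both directions can be seen concretely in $S_3$ (a small Python experiment, not part of the proof): coset multiplication is well defined for the normal subgroup $A_3$, but fails for the non-normal subgroup generated by a transposition.

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]], permutations stored as tuples over {0, 1, 2}
    return tuple(p[q[i]] for i in range(3))

G = list(permutations(range(3)))
# Even permutations (inversion count is even) form A_3, which is normal
A3 = [p for p in G
      if sum(1 for i in range(3) for j in range(i) if p[j] > p[i]) % 2 == 0]
H_bad = [(0, 1, 2), (1, 0, 2)]   # generated by the transposition (0 1): not normal

def coset(g, H):
    return frozenset(compose(g, h) for h in H)

def well_defined(H):
    # aH = a'H and bH = b'H must force abH = a'b'H
    for a in G:
        for a2 in coset(a, H):
            for b in G:
                for b2 in coset(b, H):
                    if coset(compose(a, b), H) != coset(compose(a2, b2), H):
                        return False
    return True

normal_ok = well_defined(A3)
nonnormal_ok = well_defined(H_bad)
```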
1,920,994
<p>My calculus teacher gave us this interesting problem: Calculate</p> <p>$$ \int_{0}^{1}F(x)\,dx,\ $$ where $$F(x) = \int_{1}^{x}e^{-t^2}\,dt $$</p> <p>The only thing I can think of is using the Taylor series for $e^{-t^2}$ and go from there, but since we've never talked about uniform convergence and term by term integration, I suppose that there is an easier way to do this.</p>
Claude Leibovici
82,404
<p>You could do it directly. Since $$\int e^{-t^2}\,dt=\frac{\sqrt{\pi }}{2} \text{erf}(t)$$ $$F(x) = \int_{1}^{x}e^{-t^2}\,dt=\frac{\sqrt{\pi }}{2} (\text{erf}(x)-\text{erf}(1))$$ Now, integrating by parts $$\int \text{erf}(x)\,dx=x \,\text{erf}(x)+\frac{e^{-x^2}}{\sqrt{\pi }}$$ </p> <p>I am sure that you can take it from here.</p>
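Carrying the hint to the end, integration by parts gives $\int_0^1 F(x)\,dx = [xF(x)]_0^1-\int_0^1 xe^{-x^2}\,dx = \frac{e^{-1}-1}{2}$, using $F(1)=0$ (this last step is my completion, not spelled out in the answer). A numerical cross-check in Python:

```python
import math

def F(x, steps=2000):
    # Simpson's rule for F(x) = integral from 1 to x of e^{-t^2} dt (steps even)
    if x == 1:
        return 0.0
    h = (x - 1) / steps
    s = math.exp(-1.0) + math.exp(-x * x)
    for i in range(1, steps):
        t = 1 + i * h
        s += (4 if i % 2 else 2) * math.exp(-t * t)
    return s * h / 3

def outer_integral(steps=200):
    # Simpson's rule for the integral of F over [0, 1]
    h = 1 / steps
    s = F(0.0) + F(1.0)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * F(i * h)
    return s * h / 3

numeric = outer_integral()
closed_form = (math.exp(-1.0) - 1) / 2
```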
2,631,342
<p>$$\lim_{x\rightarrow 14}\frac{\sqrt{x-5}-3}{x-14}$$</p> <p>How do I evaluate the limit when I put x = 14 and I got 0/0?</p>
Dr. Sonnhard Graubner
175,066
<p>write $$\frac{\sqrt{x-5}-3}{x-14}\cdot \frac{\sqrt{x-5}+3}{\sqrt{x-5}+3}$$</p>
2,631,342
<p>$$\lim_{x\rightarrow 14}\frac{\sqrt{x-5}-3}{x-14}$$</p> <p>How do I evaluate the limit when I put x = 14 and I got 0/0?</p>
ajotatxe
132,456
<p>Hint:</p> <p>$$x-14=(\sqrt{x-5}+3)(\sqrt{x-5}-3)$$</p>
2,631,342
<p>$$\lim_{x\rightarrow 14}\frac{\sqrt{x-5}-3}{x-14}$$</p> <p>How do I evaluate the limit when I put x = 14 and I got 0/0?</p>
E.H.E
187,799
<p>By L'Hôpital's rule</p> <p>$$\lim_{x\rightarrow 14}\frac{\sqrt{x-5}-3}{x-14}=\lim_{x\rightarrow 14}\frac{\frac{1}{2}}{\sqrt{x-5}}=\frac{1}{6}$$</p>
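All three approaches agree with a direct numerical check of the limit (illustrative Python, approaching $14$ from both sides):

```python
def g(x):
    return ((x - 5) ** 0.5 - 3) / (x - 14)

estimates = [g(14 + h) for h in (1e-3, -1e-3, 1e-6, -1e-6)]
```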
1,168,446
<p>I have the following nonlinear differential equation (I am using $y$ as shorthand for $f(x)$):</p> <p>$$\sin(y - y') = y''$$</p> <p>I have tried the following:</p> <p>$$\cos(y - y')(y'-y'') = y'''$$ $$-\sin(y - y')(y'-y'')^2 + \cos(y - y')(y''-y''') = y''''$$ $$-y''(y'-y'')^2 + \dfrac{y'''}{y'-y''}(y''-y''') = y''''$$ $$-y''(y'-y'')^3 + y'''(y''-y''') = y''''(y'-y'')$$</p> <p>But this looks pretty unhelpful. Is there a better way to solve this equation?</p>
abel
9,252
<p>I don't know how useful this is to you, but here it is. We will make the change of variable $$y-y' = u.$$ Then the differential equation $y'' = \sin (y-y')$ can be transformed into $$\sin u = y''= y'-u'=y-u-u'$$ and now we have two first order equations:</p> <p>$$\begin{align}\frac{dy}{dx} &amp;= y - u\\ \frac{du}{dx} &amp;= y - u -\sin u\end{align}$$ </p> <p>The equilibrium solutions $ u = k \pi,\ y = k\pi$ are saddles with eigenvalues $\frac{-1 \pm \sqrt5}2$ for $k$ even and unstable spirals with eigenvalues $\frac{1 \pm \sqrt 3 i}2$ for $k$ odd.</p>
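The linearization claims can be verified numerically. The Jacobian of the system at an equilibrium is $\begin{pmatrix}1&-1\\1&-1-\cos u\end{pmatrix}$ (the Jacobian itself is my computation from the system above); an illustrative Python check of its eigenvalues:

```python
import cmath
import math

def eigenvalues_at(u):
    # Jacobian of (y' = y - u, u' = y - u - sin u) at an equilibrium y = u = k*pi
    tr = 1 + (-1 - math.cos(u))
    det = 1 * (-1 - math.cos(u)) - (-1) * 1
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

even = eigenvalues_at(0.0)        # k even: saddle, (-1 ± sqrt 5)/2
odd = eigenvalues_at(math.pi)     # k odd: unstable spiral, (1 ± i sqrt 3)/2
```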
251,705
<p>I would like to find the residue of $$f(z)=\frac{e^{iz}}{z\,(z^2+1)^2}$$ at $z=i$. One way to do it is simply to take the derivative of $\frac{e^{iz}}{z\,(z^2+1)^2}$. Another is to find the Laurent expansion of the function.</p> <p>I managed to do it using the first way, and the answer is $-3/(4e)$. However, I'm out of ideas as to how to find the expansion.</p> <p>Any help is greatly appreciated.</p>
Ivan Lerner
40,086
<p>Use the formula for the residue (the coefficient $a_{-1}$ of the Laurent series):$$a_{-1}=\frac{1}{(m-1)!}\lim_{z\to z_0}\frac{d^{m-1}}{dz^{m-1}}\left((z-z_0)^mf(z)\right)$$ where $m$ is the order of the pole. You can get to this formula by taking the Taylor expansion of the function $f(z)(z-z_0)^m$, which is holomorphic near $z_0$, and reading off the coefficients of the resulting Laurent expansion.</p> <p>In your problem the pole at $z=i$ has order two, so the formula is:$$a_{-1}=\frac{d}{dz}\left(\frac{e^{iz}}{z(z+i)^2}\right)\Bigg|_{z=i}=-\frac{3}{4e}$$</p>
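Both routes can be sanity-checked against a direct numerical contour integral, $a_{-1}=\frac{1}{2\pi i}\oint f(z)\,dz$ over a small circle around $z=i$ (illustrative Python; the trapezoid rule converges extremely fast for periodic integrands):

```python
import cmath
import math

def f(z):
    return cmath.exp(1j * z) / (z * (z * z + 1) ** 2)

N, r = 4000, 0.3   # r = 0.3 keeps the circle clear of the poles at 0 and -i
residue = 0
for k in range(N):
    t = 2 * math.pi * k / N
    z = 1j + r * cmath.exp(1j * t)
    residue += f(z) * 1j * r * cmath.exp(1j * t) * (2 * math.pi / N)
residue /= 2j * math.pi
```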
405,205
<p>Some friends and I have a family of polynomials (in one variable) with rational coefficients and we would very much like a formula for them. Grasping at straws, we computed many examples and wrote them in the basis of binomial coefficients. Specifically, I mean the basis <span class="math-container">$\left\{\binom{x}{0},\binom{x}{1},\binom{x}{2},\ldots\right\}$</span> of the ring of rational polynomials over <span class="math-container">$x$</span>. We were surprised to find that our polynomials all expand positively and integrally in that basis.</p> <p>That is, if one of our polynomials <span class="math-container">$p(x)$</span> of degree d is written as <span class="math-container">$\sum_{k=0}^da_k\binom{x}k$</span>, each of the <span class="math-container">$a_k$</span> is a nonnegative integer.</p> <p>We can't make much sense of the coefficients. But we're wondering if the positivity of our polynomials in the binomial coefficients basis is a sort of &quot;shadow&quot; of some other stronger phenomenon. Suppose there were some other basis <span class="math-container">$\{b_0(x),b_1(x),\ldots\}$</span> such that each <span class="math-container">$b_i(x)$</span> expands nonnegatively in the basis of binomial coefficients. If our polynomials expand nonnegatively (and in some understandable way) in the basis <span class="math-container">$\{b_0(x),b_1(x),\ldots\}$</span>, that's our desired formula.</p> <p>If you're thinking &quot;They're still grasping at straws&quot;, you're right. But it can't hurt to ask:</p> <p>What bases for the polynomial ring should we try? Are there some well-known bases that expand positively in the binomial coefficients basis?</p>
Per Alexandersson
1,056
<p>We conjecture that the coefficients of Jack polynomials can be expressed nicely in this basis, see <a href="https://arxiv.org/pdf/1810.12763.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1810.12763.pdf</a></p> <p>Also, there is a close connection with rook polynomials and hit polynomials, as well as the relation between Ehrhart polynomials and the <span class="math-container">$h^*$</span>-vector, which uses this type of polynomials.</p>
1,112,081
<p>Does $\int_0^\infty e^{-x}\sqrt{x}dx$ converge? Thanks in advance.</p>
Vim
191,404
<p>Though I don't know how to calculate the exact value, it is quite easy to show it does converge.<br/> One basic fact in improper integrals is that, whether it converges or not depends completely on how the integrated function behaves at the "bad" points (say, infinity or points where the function is not defined) and has nothing to do with the "good" points.(where it is good defined)<br/> For this function $f(x)=e^{-x} \sqrt {x}$, the only bad point is $+\infty$. All we have to do is take a look at $$f(x)=\frac{\sqrt x}{e^x}$$ as $x\to +\infty$.<br/> Review another famous fact that $\forall r \in \mathbb R^+$, however large, $$\frac{x^r}{e^x} \to 0^+$$ as $x\to +\infty$<br/> Therefore, we have good reason to claim that $\exists M \in R^+$ such that $\forall x&gt;M$ $$f(x)=\frac{\sqrt x}{e^x}&lt;\frac{1}{x^2}$$ Since $$\int_{M}^{+\infty}\frac{1}{x^2} dx=\frac{1}{M}$$ We now obtain $$\int_{0}^{+\infty} e^{-x} \sqrt {x} dx=\int_{0}^{M} e^{-x} \sqrt {x} dx+\int_{M}^{+\infty} e^{-x} \sqrt {x} dx&lt;\int_{0}^{M} e^{-x} \sqrt {x} dx+\frac{1}{M}$$ And $\int_{0}^{M} e^{-x} \sqrt {x} dx$ is a proper integral, and $\frac{1}M$ is a given number, therefore $\int_{0}^{+\infty} e^{-x} \sqrt {x} dx$ converges.</p>
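Numerically the integral settles down quickly, consistent with convergence; as an aside, the exact value is $\Gamma(3/2)=\sqrt{\pi}/2$ (this exact value is my addition and is not needed for the convergence argument). A Python check:

```python
import math

def integral_to(b, steps=20000):
    # Simpson's rule for the integral of sqrt(x) e^{-x} over [0, b] (steps even)
    h = b / steps
    s = 0.0 + math.sqrt(b) * math.exp(-b)   # the integrand vanishes at x = 0
    for i in range(1, steps):
        x = i * h
        s += (4 if i % 2 else 2) * math.sqrt(x) * math.exp(-x)
    return s * h / 3

partial_40 = integral_to(40.0)
partial_60 = integral_to(60.0)
exact = math.sqrt(math.pi) / 2
```

Extending the upper limit from 40 to 60 changes essentially nothing, which is the numerical face of the exponential tail bound in the answer.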
7,237
<p>this came up in class yesterday and I feel like my explanation could have been more clear/rigorous. The students were given the task of finding the zeros of the following equation $$6x^2 = 12x$$ and one of the students did $$\frac{6x^2}{6x}=\frac{12x}{6x}$$ $$x = 2$$ which is a valid solution but this method eliminates the other solution of $$ x = 0$$ When the student brought it up, I explained to the student that if $x = 0, 2$ and we divide by $6x$ there is a possibility that we would be dividing by 0 which is undefined. The student, very reasonably, responded "Well, obviously I didn't know that zero was an answer when I was doing the problem". The student understands why we can't divide by 0 but is still struggling with how that connects to dividing by $x$. I went on to explain that by dividing by $x$ you are "dropping a solution" because the problem, which was quadratic, is now linear. Again, this didn't seem to click with the student. Does anyone have maybe an axiom/law/theorem that I can show the student to give a rigorous reason as to why you can't just divide by $x$?</p>
Frank Newman
5,104
<p>It comes up frequently in solving trigonometric equations such as:</p> <p><span class="math-container">$2\sin x\cos x=\sin x$</span></p> <p>Students often divide by <span class="math-container">$\sin x$</span>. I find myself using this line a lot:</p> <p>&quot;<em>If you're ever tempted to divide both sides by a variable expression, then what you probably need to do is use addition or subtraction to get everything on one side equal to <span class="math-container">$0$</span> on the other side</em>.&quot;</p> <p>Sometimes I explain why, and sometimes I don't. If I'm teaching a class, I will certainly explain why. If I'm tutoring a student who I see for just one hour a week, it depends on whether I feel we have time to go into it.</p> <p>When I do explain why, I say, like the OP says, that you can't divide by a variable expression because it might be zero, and we can't divide by zero. If a student were to say, as in the OP's experience, &quot;well obviously I didn't know that zero was an answer when I was doing the problem,&quot; I might point out that this is a little like saying, &quot;well obviously I didn't know the gun was loaded when I fired it.&quot; Any risk of disaster is reason enough for caution.</p>
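The trigonometric example can be made concrete (a small Python illustration of why factoring beats dividing):

```python
import math

# 2 sin x cos x = sin x  is equivalent to  sin x (2 cos x - 1) = 0 on [0, 2*pi)
solutions = [0.0, math.pi,                     # the sin x = 0 family
             math.pi / 3, 5 * math.pi / 3]     # the cos x = 1/2 family

residuals = [abs(2 * math.sin(x) * math.cos(x) - math.sin(x)) for x in solutions]

# Dividing both sides by sin x keeps only the cos x = 1/2 family,
# silently discarding x = 0 and x = pi.
lost_if_divided = [x for x in solutions if abs(math.sin(x)) < 1e-12]
```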
1,323,845
<p>For a nonnegative integer $n$, a composition of $n$ means a partition in which the order of the parts matters.</p> <p>Consider the generating function $$C(x) = \sum_{n=0}^{\infty} c_nx^n,$$ where $c_n$ is the number of distinct compositions of $n$ (note that $c_0=1$ by convention).</p> <p>What is the value of $C\left(\tfrac 15\right)$?</p> <hr> <p>How can I start this?</p>
Matematleta
138,929
<p>In fact, $X$ is not even connected: </p> <p>Since $B$ contains more than one point, choose $b_{0}, b_{1}\in B$ and let $d(b_{0}, b_{1})=r&gt;0$. Choose $0&lt;s&lt;r$, such that $d(b_{0},x)\neq s$ for any $x\in X$. This is possible since $B$ is countable and since $\left \{ x\in X:d(x,b_{0})=s \right \}\subseteq B$</p> <p>But now, the sets $\left \{ x\in X:d(x,b_{0})&lt;s \right \}$ and $\left \{ x\in X:d(x,b_{0})&gt;s \right \}$ are open, disjoint and their union is $X$. That is, they form a separation of $X$.</p>
2,638,679
<p><a href="https://i.stack.imgur.com/S4p0Y.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S4p0Y.jpg" alt="enter image description here"></a></p> <p>Due apologies for this rustic image. But while drawing this lattice arrangement about the "square numbers" , I discovered a pattern here wherein if I add the alternate red dots (as depicted in the image above) to the square number, I get the next square number. For instance, $4 + 5(red\ dot) = 9$ , $9+7(red\ dot)=16$, $16+9(red\ dot)=25$, $25+11(red\ dot)=36$, $36+13 (red\ dot)=49$.</p> <p>The red dotted numbers themselves have a pattern as is obvious from the image. Is there any mathematical explanation to this pattern.</p>
Fred
380,717
<p>Use the following nice and easy formula: $$\begin{pmatrix} a &amp; b\\ c &amp; d \end{pmatrix}^{-1}=\frac{1}{ad-bc}\begin{pmatrix} d &amp; -b\\ -c &amp; a \end{pmatrix}.$$</p>
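The formula translates directly into code (a hypothetical helper, written only for illustration):

```python
def inverse_2x2(a, b, c, d):
    # [[a, b], [c, d]]^{-1} = (1 / (ad - bc)) [[d, -b], [-c, a]]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular: ad - bc = 0")
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul_2x2(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [5.0, 3.0]]
I2 = matmul_2x2(A, inverse_2x2(2.0, 1.0, 5.0, 3.0))   # should be the identity
```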
121,450
<p>I am trying to prove that the series <span class="math-container">$\sum \dfrac {1} {\left( m_{1}^{2}+m_{2}^{2}+\cdots +m_{r }^{2}\right)^{\mu} } $</span> in which the summation extends over all positive and negative integral values and zero values of <span class="math-container">$m_1, m_2,\dots, m_r$</span>, except the set of simultaneous zero values, is absolutely convergent if <span class="math-container">$\mu &gt; \dfrac {r} {2}$</span>.</p> <p>Any help with a proof strategy would be much appreciated.</p>
Eric Naslund
6,075
<p>Here is a way I like. We can rewrite your sum as $$\sum_{\boldsymbol{m}\in\mathbb{Z}^{r}\backslash\{\boldsymbol{0}\}}\frac{1}{\|\boldsymbol{m}\|_{2}^{r+\epsilon}}$$ where $\epsilon&gt;0.$ Then since $$\|x\|_{2}\geq\max_{i}|x_{i}|,$$ by using the comparison test, we know that our original series will converge if $$\sum_{\boldsymbol{m}\in\mathbb{Z}^{r}\backslash\{\boldsymbol{0}\}}\frac{1}{\max_{i}|\boldsymbol{m}_{i}|^{r+\epsilon}}$$ converges. Since the set of all $\boldsymbol{m}$ with $k-1\leq\max_{i}|\boldsymbol{m}_{i}|\leq k$ has size $\leq Ck^{r-1}$ for some constant $C,$ (it is the surface of an $r$ dimensional cube) we see that the above is bounded by $$\sum_{k=1}^{\infty}C\frac{k^{r-1}}{k^{r+\epsilon}}\leq C\sum_{k=1}^{\infty}\frac{1}{k^{1+\epsilon}}$$ which converges.</p>
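The cube-surface count used above is easy to verify directly: the number of $\boldsymbol m\in\mathbb Z^r$ with $\max_i|\boldsymbol m_i|=k$ is exactly $(2k+1)^r-(2k-1)^r$, which is $O(k^{r-1})$ (a small Python check, added for illustration):

```python
from itertools import product

def shell_count(r, k):
    # Lattice points with sup-norm exactly k (the surface of a cube)
    return sum(
        1
        for m in product(range(-k, k + 1), repeat=r)
        if max(abs(x) for x in m) == k
    )

checks = [
    shell_count(r, k) == (2 * k + 1) ** r - (2 * k - 1) ** r
    for r in (1, 2, 3)
    for k in (1, 2, 3)
]
```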
122,546
<p>There is a famous proof of the Sum of integers, supposedly put forward by Gauss.</p> <p>$$S=\sum\limits_{i=1}^{n}i=1+2+3+\cdots+(n-2)+(n-1)+n$$</p> <p>$$2S=(1+n)+(2+(n-1))+\cdots+(n+1)$$</p> <p>$$S=\frac{n(1+n)}{2}$$</p> <p>I was looking for a similar proof for when $S=\sum\limits_{i=1}^{n}i^2$.</p> <p>I've tried the same approach of adding the summation to itself in reverse, and I've found this:</p> <p>$$2S=(1^2+n^2)+(2^2+n^2+1^2-2n)+(3^2+n^2+2^2-4n)+\cdots+(n^2+n^2+(n-1)^2-2(n-1)n)$$</p> <p>From which I noted I could extract the original sum;</p> <p>$$2S-S=(1^2+n^2)+(2^2+n^2-2n)+(3^2+n^2-4n)+\cdots+(n^2+n^2-2(n-1)n)-n^2$$</p> <p>Then if I collect all the $n$ terms;</p> <p>$$2S-S=n\cdot (n-1)^2 +(1^2)+(2^2-2n)+(3^2-4n)+\cdots+(n^2-2(n-1)n)$$</p> <p>But then I realised I still had the original sum in there, and taking that out meant I no longer had a sum term to extract.</p> <p>Have I made a mistake here? How can I arrive at the answer of $\dfrac{n (n + 1) (2 n + 1)}{6}$ using a method similar to the one I expound on above? <strong>I.e. following Gauss' line of reasoning</strong>?</p>
Henry
6,460
<p>You can use something similar, though it requires work at the end. </p> <p>If $S_n = 1^2 +2^2 + \cdots + n^2$ then $$S_{2n}-2S_n = ((2n)^2 - 1^2) + ((2n-1)^2-2^2) +\cdots +((n+1)^2-n^2)$$</p> <p>$$=(2n+1)(2n-1 + 2n-3 + \cdots +1) = (2n+1)n^2$$ using the Gaussian trick in the middle. </p> <p>Similarly $$S_{2n+1}-2S_n = (2n+1)(n+1)^2$$</p> <p>So for example to work out $S_9$, you start </p> <p>$$S_0=0^2=0$$</p> <p>$$S_1=1 + 2S_0 = 1$$</p> <p>$$S_2=3+2S_1=5$$</p> <p>$$S_4=25+2S_2=30$$</p> <p>$$S_9 = 225+2S_4 = 285$$</p> <p>but clearly there are easier ways.</p>
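The two identities give a complete recursion for $S_n$, which can be checked against the closed form (illustrative Python):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n):
    # Sum of the first n squares via the halving identities above
    if n == 0:
        return 0
    m = n // 2
    if n % 2 == 0:
        return (2 * m + 1) * m * m + 2 * S(m)        # S_{2m} - 2 S_m = (2m+1) m^2
    return (2 * m + 1) * (m + 1) ** 2 + 2 * S(m)     # S_{2m+1} - 2 S_m = (2m+1)(m+1)^2
```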
1,109,759
<p>I.e, prove $\lVert f+g \rVert\ \le \lVert f \rVert + \lVert g \rVert$ for all $f,g$ in $C^\infty [0,1]$, $$\lVert f \rVert =(\int_0^1 \lvert f(x) \rvert ^2 dx)^{1/2}$$</p> <p>I think we're supposed to use Cauchy-Schwarz: $\lvert \int_0^1 f(x)g(x) dx \rvert \le \left( \int_0^1 \lvert f(x) \rvert ^2 dx \right)^{1/2} \left( \int_0^1 \lvert g(x) \rvert ^2 dx \right) ^{1/2}$</p> <p>So far I've got $\lVert f+g \rVert\ = \left( \int_0^1 \left( \lvert f(x) + g(x) \rvert \right) ^2 dx \right) ^{1/2} \le \left( \int_0^1 \left( \lvert f(x) \rvert + \lvert g(x) \rvert \right) ^2 dx \right) ^{1/2} = \left( \int_0^1 (\lvert f(x) \rvert)^2 + (\lvert g(x) \rvert)^2 + 2 \lvert f(x) \rvert \lvert g(x) \rvert dx \right)^{1/2} \le \left( \int_0^1 (\lvert f(x) \rvert)^2 dx \right) ^{1/2} + \left( \int_0^1 (\lvert g(x) \rvert)^2 dx \right) ^{1/2} + \left( 2 \int_0^1 \lvert f(x) \rvert \lvert g(x) \rvert dx \right)^{1/2}$</p> <p>I'm also not sure about the last step...</p>
Arch
208,530
<p>I think Alex actually intends to prove that $||f||$ is indeed a norm.</p> <p>Just one comment: Use $||f+g||^2 $ to avoid square roots.</p> <blockquote> <p>$||f+g||^2 = ||f||^2 + ||g||^2 +2 \langle f,g\rangle \leq ||f||^2 + ||g||^2 +2 ||f||\,||g|| = (||f||+||g||)^2$,</p> </blockquote> <p>where the middle step is Cauchy-Schwarz, and we are done.</p>
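<p>For a quick numerical illustration (my own addition, approximating the $L^2$ norm on $[0,1]$ by a midpoint Riemann sum), one can check the triangle inequality on sample smooth functions:</p>

```python
# Approximate ||h|| = (int_0^1 h(x)^2 dx)^{1/2} by a midpoint Riemann sum
# and verify ||f+g|| <= ||f|| + ||g|| for a pair of smooth functions.
import math

def l2_norm(h, n=10000):
    dx = 1.0 / n
    return math.sqrt(sum(h((i + 0.5) * dx) ** 2 for i in range(n)) * dx)

f = math.sin
g = lambda x: x * x - 3.0

lhs = l2_norm(lambda x: f(x) + g(x))
rhs = l2_norm(f) + l2_norm(g)
assert lhs <= rhs + 1e-12
```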
1,524,349
<p>This is Problem 45 in Chapter 19 in Michael Spivak's book "Calculus".</p> <ol start="45"> <li>(a) Suppose that $\frac {f(x)} x$ is integrable on every interval $[a, b]$ for $0 &lt; a &lt; b$, and that $\lim_{x\to0}f(x)=A$ and $\lim_{x\to\infty}f(x)=B$. Prove that for all $\alpha, \beta &gt; 0$ we have</li> </ol> <p>$\int_0^\infty \frac {f(\alpha x) - f(\beta x)}{x}dx = (A-B)\log(\frac \beta \alpha)$.</p> <p>(b) Now suppose instead that $\int_a^\infty\frac{f(x)}{x}dx$ converges for all $a&gt;0$ and that $\lim_{x\to0}f(x)=A$. Prove that</p> <p>$\int_0^\infty \frac {f(\alpha x) - f(\beta x)}{x}dx=A\log(\frac \beta \alpha)$.</p>
Mark Viola
218,419
<p><strong>HINT:</strong></p> <p>Since $\frac{f(x)}{x}$ is an arbitrary integrable function, it can be approximated in the $\ell^1$ norm by a compactly supported smooth function $\frac{g(x)}{x}$. So, for all $\epsilon&gt;0$, </p> <p>$$\int_a^b \left|\frac{f(x)}{x}-\frac{g(x)}{x}\right|\,dx&lt;\epsilon$$</p> <p>Then use</p> <p>$$\int_{x_1}^{x_2} \int_\alpha^\beta g'(xy)\,dy\,dx=\int_{x_1}^{x_2}\int_\alpha^\beta \frac1x\frac{\partial g(xy)}{\partial y}\,dy\,dx= \int_\alpha^\beta \int_{x_1}^{x_2} \frac1y\frac{\partial g(xy)}{\partial x}\,dx\,dy$$</p>
2,359,292
<p>I have been working on a problem in Quantum Mechanics and I have encountered an equation as given below.</p> <p>$$\frac{d\hat A(t)}{dt} = \hat F(t)\hat A(t)$$</p> <p>Where ^ denotes that it is an operator.</p> <p>How will this differential equation be solved? Will the usual rules for linear homogeneous first order differential equations with variable coefficients apply here?</p>
Fabian
7,266
<p>You can solve it by iteration (assuming convergence). Assuming that you are interested in the solution with the initial condition $\hat A(0)= I$, the iterative solution reads $$\hat A(t) = I +\int_0^t\hat F(t_1)\,dt_1 + \int_0^t\int_0^{t_1}\hat F(t_1) \hat F(t_2)\,dt_2\,dt_1 + \cdots \tag{1}$$</p> <p>For convenience, one might introduce the concept of the <a href="https://en.wikipedia.org/wiki/Ordered_exponential" rel="nofollow noreferrer">ordered exponential</a>. With that the solution assumes the compact form $$\hat A(t) = \mathcal{T} \left\{\exp\left[ \int_0^t \hat F(t')\,dt'\right] \right\}$$ where $\mathcal{T}$ indicates that when expanding the exponential, the $\hat F$ in the individual terms should be ordered according to their time argument (and thus reproducing (1)).</p>
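<p>In the scalar case the operators commute, the ordering $\mathcal{T}$ is trivial, and the series (1) collapses to the ordinary exponential of $\int_0^t F$. The sketch below (names are mine, a sanity check rather than part of the answer) verifies this for $F(t)=t$, where the $k$-th term of the series is $(t^2/2)^k/k!$ and the solution is $e^{t^2/2}$.</p>

```python
# Scalar check of the iterative series: for F(t) = t the k-th Dyson term
# is (t^2/2)^k / k!, so the truncated series should match exp(t^2/2).
import math

def dyson_scalar(t, terms=20):
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (t * t / 2) / (k + 1)  # next series term
    return total

t = 1.3
assert abs(dyson_scalar(t) - math.exp(t * t / 2)) < 1e-9
```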
4,045,074
<p><strong>Let <span class="math-container">$X$</span> be the random variable whose cumulative distribution function is <span class="math-container">$$ F_X (x) = \begin{cases} 0, &amp; \text{for} \space x\lt 0 \\ \frac{1}{2}, &amp; \text{for} \space 0\le x\le 1 \\ 1, &amp; \text{for} \space x\gt 1 \\ \end{cases}. $$</span> Let <span class="math-container">$Y$</span> be a random variable independent of <span class="math-container">$X$</span> and uniformly distributed over the interval <span class="math-container">$(0,1)$</span>. Define the random variable <span class="math-container">$Z$</span> as <span class="math-container">$$ Z = \begin {cases} X, &amp; \text{if} \space X\le \frac{1}{2} \\ Y, &amp; \text{if} \space X\gt \frac{1}{2} \\ \end{cases} $$</span> Determine <span class="math-container">$\mathbb{P} (Z\le \frac{1}{5})$</span>.</strong></p> <p>I believe that <span class="math-container">$X$</span> only takes the discrete values <span class="math-container">$0$</span> and <span class="math-container">$1$</span> with equal probability, but I'm not entirely sure. By intuition, I think that the answer is <span class="math-container">$\frac{1}{2}$</span>. I'm unsure about this question, so any advice would be appreciated.</p>
TravorLZH
748,964
<p>Without the knowledge of partial summation, we just use the traditional summation by parts:</p> <p>Let <span class="math-container">$\pi(x)$</span> denote the number of prime numbers less than or equal to <span class="math-container">$x$</span>, so for all <span class="math-container">$n\in\mathbb Z^{&gt;0}$</span></p> <p><span class="math-container">$$ \pi(n)-\pi(n-1)= \begin{cases} 1 &amp; n\text{ is prime} \\ 0 &amp; \text{otherwise} \end{cases} $$</span></p> <p>Applying this to our problem, we have</p> <p><span class="math-container">$$ \begin{aligned} \sum_{p\le N}\frac1p &amp;=\sum_{n=1}^N{\pi(n)-\pi(n-1)\over n}={\pi(N)\over N}+\sum_{n=1}^{N-1}{\pi(n)\over n}-\sum_{n=1}^N{\pi(n-1)\over n} \\ &amp;={\pi(N)\over N}+\sum_{n=2}^{N-1}{\pi(n)\over n}-\sum_{n=2}^{N-1}{\pi(n)\over n+1}={\pi(N)\over N}-\sum_{n=2}^{N-1}\pi(n)\left[{1\over n+1}-\frac1n\right] \\ &amp;={\pi(N)\over N}+\sum_{n=2}^{N-1}\pi(n)\int_n^{n+1}{\mathrm dt\over t^2}={\pi(N)\over N}+\int_2^N{\pi(t)\over t^2}\mathrm dt \end{aligned} $$</span></p> <p>Let <span class="math-container">$N=\lfloor x\rfloor$</span>, then</p> <p><span class="math-container">$$ \int_N^x{\pi(t)\over t^2}\mathrm dt=\pi(N)\int_N^x{\mathrm dt\over t^2}={\pi(N)\over N}-{\pi(x)\over x} $$</span></p> <p>As a result, the above formula applies to all <span class="math-container">$x\in\mathbb R_{&gt;0}$</span>:</p> <p><span class="math-container">$$ \sum_{p\le x}\frac1p={\pi(x)\over x}+\int_2^x{\pi(t)\over t^2}\mathrm dt $$</span></p> <p>By the prime number theorem, we know that there exists a positive constant <span class="math-container">$K$</span> such that</p> <p><span class="math-container">$$ \left|\pi(x)-{x\over\log x}\right|\le{Kx\over\log^2x} $$</span></p> <p>As a result, using big O notation, the above thing becomes</p> <p><span class="math-container">$$ \begin{aligned} \sum_{p\le x}\frac1p &amp;={1\over\log x}+\int_2^x{\mathrm dt\over t\log t}+\int_2^x\mathcal O\left(1\over t\log^2t\right)\mathrm dt+\mathcal 
O\left(1\over\log^2 x\right) \\ &amp;=\log\log x+\mathcal O(1) \end{aligned} $$</span></p> <p>We can immediately conclude that the error term is <span class="math-container">$\mathcal O(1)$</span> because each of the remaining terms is evidently bounded by a constant.</p> <p>P.S. Indeed a stronger result, Mertens' theorem, which states</p> <p><span class="math-container">$$ \sum_{p\le x}\frac1p=\log\log x+B_1+\mathcal O\left(1\over\log x\right) $$</span></p> <p>can be proven using elementary methods without the prime number theorem, but it requires more technical details.</p>
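<p>The asymptotic $\sum_{p\le x}\frac1p=\log\log x+\mathcal O(1)$ can be observed numerically; the script below (a sanity check of mine, not a proof) sieves the primes up to $10^5$ and confirms that the difference stays near the Mertens constant $B_1\approx 0.2615$.</p>

```python
# Sieve the primes up to x and compare sum(1/p) with log log x.
import math

def primes_upto(n):
    # simple sieve of Eratosthenes
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i in range(2, n + 1) if is_prime[i]]

x = 100_000
s = sum(1.0 / p for p in primes_upto(x))
err = s - math.log(math.log(x))
assert 0.2 < err < 0.35  # consistent with B_1 = 0.26149...
```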
1,932,961
<p>Prove by mathematical induction that $$\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}$$ holds $\forall n\in\mathbb{N}$.</p> <hr> <p>(1) Assume that $n=1$. Then left side is $1^2 =1$ and right side is $6/6 = 1$, so both sides are equal and the expression holds for $n = 1$.</p> <p>(2) Let $k \in \mathbb{N}$ be given. Assume that the expression holds for $n = k$. Then for $n = k+1$ we get $$\sum_{i = 1}^{k+1} i^2 = \left(\sum_{i = 1}^{k} i^2\right) + (k+1)^2 = \frac{k(k+1)(2k+1)}{6} + k^2 + 2k + 1 = \frac{2k^3 + 9k^2 + 13k + 6}{6}.$$ Factoring the result we get that $\frac{2k^3 + 9k^2 + 13k + 6}{6} = \frac{(k+1)(k+2)(2k+3)}{6}$ and thus the expression holds for $n = k+1$.</p> <p>Combining (1) and (2) we can conclude that the expression holds $\forall n \in \mathbb{N}$.</p> <hr> <p>I have a few questions:</p> <ol> <li>Is my proof correct?</li> <li>If you were a math professor, is this style of writing math proofs right and sufficient for a freshman? Or is there something I missed?</li> </ol>
Sathasivam K
355,833
<p>Since $xyz=1$, either exactly two of $x,y,z$ are negative or all three are positive, and in both cases all three are nonzero.</p> <p>CASE 1: If $x,y,z$ are all positive, I hope that you may easily prove it.</p> <p>CASE 2: If $x,y$ are negative, then we have $$x&gt;x-1$$ but $$ x^2≤(x-1)^2,$$ since if we take $x=\frac{1}{2}$ then $x^2=(x-1)^2$, and for $x&lt;\frac{1}{2}$ the inequality is strict. Therefore $$ 0&lt;\frac{x^2}{(x-1)^2}&lt;1.$$</p> <p>Similarly it follows for $y$. For $z$ we have $$\frac{z^2}{(z-1)^2}≤1$$ if $z≤\frac{1}{2}$, else $z&gt;1$; therefore in total we have $$ \frac{x^2}{(x-1)^2}+ \frac{y^2}{(y-1)^2} + \frac{z^2}{(z-1)^2} ≥1, $$ hence proved.</p>
185,177
<p>Let $X$ be a smooth finite type separated connected Deligne-Mumford stack over $\mathbb C$.</p> <p>Does there exist a finite etale morphism $Y\to X$ with $Y$ a scheme?</p> <p>What if $X$ is an algebraic space (i.e., trivial stabilizers)?</p> <p>Edit: I changed the old question to a different question which should be more clear. An answer to the new question would help a lot in answering the old question.</p>
Niels
11,682
<p>To give a simpler example than Daniel's, you can just consider for X a projective line with a single orbifold point. By Riemann-Hurwitz X is simply connected and so there is no non-trivial finite étale morphism Y→X. This holds over an algebraically closed field of characteristic zero, say (but would work in characteristic p as well by defining precisely X as a stack of roots in the sense of Vistoli - see Charles Cadman, Using stacks to impose tangency conditions on curves, for the precise definition).</p> <p>Also, you may want to consider the following closely related notion, taken from</p> <p>Fundamental Groups of Algebraic Stacks Behrang Noohi <a href="http://arxiv.org/abs/math/0201021" rel="nofollow">http://arxiv.org/abs/math/0201021</a></p> <p>"An algebraic stack being uniformizable means that it has a finite étale representable cover by an algebraic space (roughly speaking, its “universal cover” is an algebraic space)."</p> <p>The author proceeds to show that, roughly, a DM stack X is uniformizable iff all morphisms from the stabilizers to the fundamental group of X are injective.</p>
104,375
<p>How am I supposed to transform the following function in order to apply the Laplace transform?</p> <p>$f(t) = t[u(t)-u(t-1)]+2t[u(t-1) - u(t-2)]$</p> <p>I know that it has to be like this:</p> <p>$L\{f(t-t_0)u(t-t_0)\} = e^{-st_0}F(s), F(s) = L\{f(t)\}$</p>
Blah
6,721
<p>An exercise in set theory (if $k$ runs through $\mathbb Z$, then so does $-k$):</p> <p>$[a] = \\ \{b \in \mathbb{Z} \text{ such that there exists }k \in \mathbb{Z} \text{ such that }a-b=3k\}=\\ \{b \in \mathbb{Z} \text{ such that there exists }k \in \mathbb{Z} \text{ such that }b=a-3k\}=\\ \{b \in \mathbb{Z} \text{ such that there exists }k \in \mathbb{Z} \text{ such that }b=a+3(-k)\}=\\ \{a+3k | k \in \mathbb{Z}\}=\\ \{\dots,a-6,a-3,a,a+3,a+6,\dots\}$</p> <p>You should have written $ \{\ldots, -5, -2, 1, 4, 7, 10,\ldots\} $ instead of $ \{1,4,7,10,-2,-5,\ldots\} $</p>
837,570
<p>Prove that $\arctan{x}=\frac{1}{x^2}$ has only one solution on the set of real numbers.</p> <p>I need some help with it, would greatly appreciate it.</p>
DSinghvi
148,018
<p>Infer from the graph. Draw the graph of $y=1/x^2$ on paper and then draw the graph of $\arctan(x)$; wherever they intersect is your solution, and the number of points of intersection is your number of solutions. This answer is given on the assumption that you know the basic graphs of $1/x^2$ and $\arctan(x)$.</p>
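<p>A numerical companion to the graphical argument (my own addition): on $(0,\infty)$ the function $h(x)=\arctan(x)-1/x^2$ is strictly increasing, since both $\arctan(x)$ and $-1/x^2$ are, and it changes sign between $1$ and $2$, so there is exactly one positive root; for $x\le 0$ there is no root since $\arctan(x)\le 0$ while $1/x^2$ is positive. Bisection locates the root:</p>

```python
# Locate the unique positive solution of arctan(x) = 1/x^2 by bisection.
import math

def h(x):
    return math.atan(x) - 1.0 / (x * x)

assert h(1) < 0 < h(2)   # sign change => a root in (1, 2)

lo, hi = 1.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if h(mid) < 0:
        lo = mid
    else:
        hi = mid
root = 0.5 * (lo + hi)
assert abs(h(root)) < 1e-9
```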
2,631,230
<p>So, I'm studying mathematics on my own and I took a book about Proofs in Abstract Mathematics with the following exercise:</p> <p>For each $k\in\Bbb{N}$ we have that $\Bbb{N}_k$ is finite</p> <p>Just to give some context on what theorems and definitions we can use:</p> <ol> <li>Definition: $\Bbb{N}_k = \{1, 2, ..., k \} $</li> <li>Definition: A set $S$ is infinite iff there exists a one-to-one but not onto $\ f:S\to S$</li> <li>Definition: $A\sim B$ means $A$ is equipotent to (or same cardinality of) $B$</li> <li>Theorem: if $A$ is infinite and $A\sim B$, then $B$ is infinite</li> <li>Theorem: if $A$ is infinite and $f:A\to B$ is one-to-one, then $B$ is infinite</li> <li>Theorem: Let $\ f:A \to B$ be one-to-one and $C\subseteq A$ then $\ g:C \to B$, $\ g(x)=f(x)\ $ for any $\ x\in C$, is also one-to-one</li> <li>Lemma: Let $k\in\Bbb{N}$, then $\Bbb{N}_k- \{x\} \sim \Bbb{N}_{k-1}$ for any $x\in \Bbb{N}_k$</li> </ol> <p>What I did was:</p> <p>Suppose that $\Bbb{N}_k$ is not finite for every $k\in\Bbb{N}$, then by the Well-Ordering Principle, there is a smallest element $k\in\Bbb{N}$ such that $\Bbb{N}_k$ is infinite. Let $x_0\in\Bbb{N}_k\ $ be the smallest element of $\Bbb{N}_k$ and define $C=\Bbb{N}_k - \{x_0\}$. Let $f:\Bbb{N}_k \to C\ $ be $\ f(n)=n+1$. We will prove that $f$ is one-to-one. Let $x_1,x_2\in\Bbb{N}_k$ such that $f(x_1)=f(x_2)$, then $x_1+1=x_2+1$. Hence $x_1=x_2$, which proves that $f$ is one-to-one. Thus we have that $C$ is infinite. Then $C\sim \Bbb{N}_{k-1}$ and thus we must have that $\Bbb{N}_{k-1}$ is infinite. However this contradicts our hypothesis that $k$ is the least element such that $\Bbb{N}_k$ is infinite. Thus it must be that for each $k\in \Bbb{N}$ we have $\Bbb{N}_k$ is finite.</p> <p>My question is if the proof above, especially when creating the function $f:\Bbb{N}_k\to C$, has any flaw. The book explicitly says we should use the 6th theorem listed above, but I didn't find any explicit use of it.
Maybe there is another way to prove it?</p> <p><strong>Edited:</strong> </p> <p>As some of you commented, the proof above was wrong. The function I created was not well defined, since $f(k)=k+1$ does not lie in $C$. I think this one is correct:</p> <p>If $\Bbb{N}_k$ is not finite for every $k \in \Bbb{N}$, then by the Well-Ordering principle there exists a least element $k \in \Bbb{N}$ such that $\Bbb{N}_k$ is infinite. By definition, there exists $f:\Bbb{N}_k \to \Bbb{N}_k$ such that $f$ is one-to-one but not onto. Then, because $f$ is not onto, there exists $y\in\Bbb{N}_k$ such that $y\neq f(x)$ for every $x\in \Bbb{N}_k$. Pick $x_0\neq y$ and define $A=\Bbb{N}_k-\{x_0\}$. Let $g:A\to A$ be defined as: $$g(x)= \begin{cases} f(x) \ if \ f(x)\neq x_0 \\ f(x_0) \ if \ f(x) = x_0 \end{cases}$$</p> <p>We will prove that $g$ is one-to-one but not onto. </p> <p>First we show $g$ is one-to-one. Let $x_1,x_2 \in A$ such that $x_1\neq x_2$. Since $f$ is one-to-one, $f(x_1)\neq f(x_2)$. If $f(x_1)=x_0$, then $f(x_2)\neq x_0$. Hence $g(x_1)=f(x_0)$ and $g(x_2)=f(x_2)$. Since $x_0\neq x_2$, then $f(x_0)\neq f(x_2)$ and thus $g(x_1)\neq g(x_2)$. Without loss of generality, if $f(x_2)=x_0$, then $g(x_1)\neq g(x_2)$. If $f(x_1)\neq x_0$ and $f(x_2)\neq x_0$, then $g(x_1)=f(x_1)$ and $g(x_2)=f(x_2)$. Hence $g(x_1)\neq g(x_2)$. We have that $g$ is one-to-one.</p> <p>We now show that $g$ is not onto. Note that, because $x_0\neq y$ and $y\neq f(x)$ for all $x\in\Bbb{N}_k$, we have $y\in A=\Bbb{N}_k-\{x_0\}$. Let $x\in A$. If $f(x)=x_0$, then $g(x)=f(x_0)\neq y$. If $f(x)\neq x_0$, then $g(x)=f(x)\neq y$. Hence there exists $y \in A$ such that for any $x \in A$ we have $g(x)\neq y$. Thus, $g$ is not onto.</p> <p>We have demonstrated that $g:A\to A$ is one-to-one, but not onto, hence A is infinite by definition. Given $A=\Bbb{N}_k-\{x_0\}$ and our lemma, we have that $\Bbb{N}_{k-1}$ is also infinite.
However this contradicts our hypothesis that $k$ is the smallest element such that $\Bbb{N}_k$ is infinite. Hence it must be that for every $k\in\Bbb{N}$ we have $\Bbb{N}_k$ is finite.</p> <p>Sorry if my proof writing is bad in any way. If you have any stylistic suggestion, or any suggestion at all, I would gladly read it :) </p>
Ng Chung Tak
299,599
<p>\begin{align} E(Z) &amp;= \int_{0}^{1} \int_{0}^{1} Z f(x,y) \, dx \, dy \\ &amp;= \int_{0}^{1} \int_{0}^{1} 4xy\sqrt{x^2+y^2} \, dx \, dy \\ &amp;= \int_{0}^{1} 2y \left( \int_{0}^{1} 2x\sqrt{x^2+y^2} \, dx \right) dy \\ &amp;= \int_{0}^{1} 2y \left( \int_{0}^{1} \sqrt{u+y^2} \, du \right) dy \tag{$u=x^2$} \\ &amp;= \int_{0}^{1} 2y \left[ \frac{2}{3} (u+y^{2})^{3/2} \right]_{u=0}^{1} dy \\ &amp;= \int_{0}^{1} \frac{4y}{3} \left[ (1+y^{2})^{3/2}-y^{3} \right] dy \\ &amp;= \frac{4}{15} \left[ (1+y^{2})^{5/2}-y^{5} \right]_{y=0}^{1} \\ &amp;= \frac{8}{15}(2\sqrt{2}-1) \end{align}</p>
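<p>A midpoint-rule sanity check (my own addition) that the computed value $\frac{8}{15}(2\sqrt2-1)\approx 0.97516$ is right for the density $f(x,y)=4xy$ on the unit square:</p>

```python
# Approximate E(Z) = int int 4xy sqrt(x^2 + y^2) dx dy on [0,1]^2 with a
# midpoint rule and compare against (8/15)(2 sqrt(2) - 1).
import math

n = 400
dx = 1.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * dx
    for j in range(n):
        y = (j + 0.5) * dx
        total += 4 * x * y * math.sqrt(x * x + y * y) * dx * dx

exact = 8.0 / 15.0 * (2.0 * math.sqrt(2.0) - 1.0)
assert abs(total - exact) < 1e-3
```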
424,514
<p>Suppose one has a generating function <span class="math-container">$$F(z) = \sum_{k\ge 0} f(k) z^k$$</span> for some <span class="math-container">$f:\mathbb{Z}\rightarrow \mathbb{Z}$</span>. Is there a way to express an iteration of <span class="math-container">$f$</span> in terms of <span class="math-container">$F(z)$</span>. E.g., <span class="math-container">$$G(z) = \sum_{k\ge 0} f(f(k)) z^k$$</span> Can <span class="math-container">$G(z)$</span> be expressed in terms of <span class="math-container">$F(z)$</span>?</p>
Gerald Edgar
454
<p>That seems really unlikely.<br /> For example, <span class="math-container">$$F(z)=\sum_{k=0}^\infty 2^kz^k = \frac{1}{1-2z}$$</span> is a rational function, but <span class="math-container">$$G(z)=\sum_{k=0}^\infty 2^{2^k}z^k$$</span> has radius of convergence <span class="math-container">$0$</span>.</p>
1,793,231
<p>Can you please help me on this question? $\DeclareMathOperator{\adj}{adj}$</p> <p>$A$ is a real $n \times n$ matrix; show that:</p> <p>$\adj(\adj(A)) = (\det A)^{n-2}A$</p> <p>I don't know which of the expressions below might help</p> <p>$$ \adj(A)A = \det(A)I\\ (\adj(A))_{ij} = (-1)^{i+j}\det(A(i|j)) $$</p> <p><em>Editor's note: adjoint here refers to the <a href="https://en.wikipedia.org/wiki/Adjugate_matrix" rel="nofollow">classical adjoint</a>.</em></p>
Ian
83,396
<p>I would discourage you from using the word "adjoint" in this context. This is an accepted usage of the word, but there is another concept in linear algebra which is <em>always</em> referred to by the word "adjoint". The two can be easily confused. An unambiguous word that can be used in this context is "adjugate", and I would encourage you to use this word.</p> <p>Anyway, you know by the first property you stated that when $A$ is invertible, the adjugate of $A$ is a multiple of the inverse of $A$. So the adjugate of the adjugate is a multiple of the inverse of $A^{-1}$, so it is a multiple of $A$. All you need to keep track of is what the constant factors in each of these steps were.</p> <p>When $A$ is not invertible the situation is quite simple, the result follows from the fact that $\operatorname{det}(A)=0$.</p>
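<p>The identity $\operatorname{adj}(\operatorname{adj}(A))=(\det A)^{n-2}A$ can be checked concretely; the sketch below (pure-Python helpers of my own) verifies it for a $3\times3$ integer matrix, where the factor is just $\det A$.</p>

```python
# Check adj(adj(A)) = det(A)^(n-2) A for n = 3 on a sample integer matrix.
def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def adj3(m):
    # adjugate = transpose of the cofactor matrix
    def minor(r, s):
        rows = [m[i] for i in range(3) if i != r]
        sub = [[row[j] for j in range(3) if j != s] for row in rows]
        return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]
    return [[(-1) ** (r + s) * minor(s, r) for s in range(3)] for r in range(3)]

A = [[2, 1, 0], [1, 3, 4], [0, 2, 5]]
d = det3(A)                        # det(A) = 9 here
lhs = adj3(adj3(A))
rhs = [[d * A[i][j] for j in range(3)] for i in range(3)]
assert lhs == rhs
```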
3,491,028
<p>Problem:<br> Suppose that <span class="math-container">$x_1$</span>, <span class="math-container">$x_2$</span> and <span class="math-container">$x_3$</span> are independent uniformly distributed on the interval <span class="math-container">$[1,3]$</span>. What is the probability that <span class="math-container">$x_1 + x_2 + x_3 &lt; 8$</span>.<br> Answer:<br> Let <span class="math-container">$p$</span> be the probability we seek. The density for these three random variables is: <span class="math-container">$$ f(x) = \begin{cases} \frac{1}{2} &amp; \text{for } 1 \leq x \leq 3 \\ 0, &amp; \text{otherwise } \end{cases} $$</span> <span class="math-container">\begin{align*} p &amp;= \int_{1}^{3} \int_{1}^{5-x_1} \int_{1}^{8-x_1-x_2} \left( \frac{1}{2}\right)^3 \, dx_3 \, dx_2 \, dx_1 \\ p &amp;= \int_{1}^{3} \int_{1}^{5-x_1} \frac{x_3}{8} \, \Big|_{x_3 = 1}^{x_3 = 8-x_1-x_2} \, dx_2 \, dx_1 \\ p &amp;= \int_{1}^{3} \int_{1}^{5-x_1} \frac{8 - x_1 - x_2}{8} - \frac{1}{8} \, dx_2 dx_1 \\ p &amp;= \int_{1}^{3} \int_{1}^{5-x_1} \frac{7 - x_1 - x_2}{8} \, dx_2 \, dx_1 \\ p &amp;= \int_{1}^{3} \frac{7x_2 - x_1 x_2 - \frac{x_2^2}{2}}{8} \Big|_{1}^{5-x_1} \, dx_1 \\ p &amp;= \int_{1}^{3} \frac{7(5-x_1) - x_1(5-x_1) - \frac{(5-x_1)^2}{2} }{8} - \frac{1}{8} \, dx_1 \\ p &amp;= \int_{1}^{3} \frac{14(5-x_1) - 2x_1(5-x_1) - (5-x_1)^2 - 2 }{16} \, dx_1 \\ p &amp;= \int_{1}^{3} \frac{ 70 - 14x_1 - 2x_1(5-x_1) - ( 25 - 10x_1 + x_1^2 ) - 2 }{16} \, dx_1 \\ p &amp;= \int_{1}^{3} \frac{ 70 - 14x_1 - 2x_1(5-x_1) - 25 + 10x_1 - x_1^2 - 2 }{16} \, dx_1 \\ p &amp;= \int_{1}^{3} \frac{ - 14x_1 - 2x_1(5-x_1) + 10x_1 - x_1^2 + 43 }{16} \, dx_1 \\ p &amp;= \int_{1}^{3} \frac{ -4x_1 - 2x_1(5-x_1) - x_1^2 + 43 }{16} \, dx_1 \\ p &amp;= \int_{1}^{3} \frac{ -4x_1 - 10x_1 + 2x_1^2 - x_1^2 + 43 }{16} \, dx_1 \\ p &amp;= \int_{1}^{3} \frac{ x_1^2 - 14x_1 + 43 }{16} \, dx_1 \\ p &amp;= \int_{1}^{3} \frac{ x_1^2 - 14x_1 }{16} \, dx_1 + (3-1)\left( \frac{43}{16} \right) \\ p &amp;= \int_{1}^{3} \frac{ 
x_1^2 - 14x_1 }{16} \, dx_1 + \frac{43}{8} \\ p &amp;= \left( \frac{1}{16 }\right) \int_{1}^{3} ( x_1^2 - 14x_1 ) \, dx_1 + \frac{43}{8} \\ \end{align*}</span> Using an online integral calculator, I find: <span class="math-container">$$ \int_{1}^{3} ( x_1^2 - 14x_1 ) \, dx_1 = - \frac{142}{3} $$</span> <span class="math-container">\begin{align*} p &amp;= \left( \frac{1}{16 }\right) \left( - \frac{142}{3} \right) \,+ \frac{43}{8} \\ p &amp;= -\frac{71}{3(8)} + \frac{43}{8} = \frac{139 - 71}{24} \end{align*}</span> Since <span class="math-container">$p$</span> is greater than <span class="math-container">$1$</span>, my answer cannot be right. Were did I go wrong?</p> <p>I would also like to know if I setup the integral correctly.</p> <p>I ran the following R script:</p> <pre><code>count = 0 limit = 10*1000*1000 for ( i in 1:limit ) { num = sum( runif( 3, 1, 3 ) ) if ( num &lt;= 8 ) count = count + 1 } </code></pre> <p>The result was around 0.979. Therefore, I question the answer of <span class="math-container">$\frac{7}{8}$</span>. </p>
antkam
546,005
<p>I disagree with the other answer (and OP, and another commenter) that the <span class="math-container">$x_2$</span> limit has to be <span class="math-container">$\min(5-x_1, 3)$</span>. Why should it be that? <span class="math-container">$x_2$</span> can be the entire range <span class="math-container">$[1,3]$</span>. There is zero reason to restrict <span class="math-container">$x_1+x_2 \le 5$</span> because, for any <span class="math-container">$(x_1,x_2) \in [1,3]^2$</span> we can account for <span class="math-container">$x_1+x_2+x_3 \le 8$</span> simply by integrating <span class="math-container">$x_3 \in [1, \min(8-x_1-x_2,3)]$</span>. E.g. the point <span class="math-container">$(3,3,1.9)$</span> is a part of the event (i.e. it satisfies the inequality) but is not part of the integral if we use the limits <span class="math-container">$x_2 \in [1, \min(5-x_1, 3)] = [1, 2]$</span>.</p> <p>I.e. I think the correct integral should be:</p> <p><span class="math-container">$$\int_1^3 dx_1 \int_1^3 dx_2 \int_1^{\min(8-x_1-x_2,3)} \frac18 dx_3 = {47 \over 48}$$</span></p> <p>as evaluated by <a href="https://www.wolframalpha.com/input/?i=+int+%281%2F8%29+dz+dy+dx%2C+z%3D1..min%288-x-y%2C3%29+%2C+y%3D1..3%2C+x%3D1..3" rel="nofollow noreferrer">wolfram alpha</a>. Note that <span class="math-container">$7/8$</span> has to be way off because </p> <p><span class="math-container">$$\frac18 = P(x_1 &gt; 2) P( x_2 &gt; 2) P(x_3 &gt; 2)$$</span></p> <p>but it is very obvious that <span class="math-container">$x_1, x_2, x_3 &gt; 2$</span> are <em>necessary</em> but very <em>insufficient</em> for <span class="math-container">$x_1 + x_2 + x_3 &gt; 8$</span>.</p>
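<p>A deterministic grid estimate (my own addition, independent of the asker's Monte Carlo run) agrees with $47/48\approx 0.9792$:</p>

```python
# Midpoint-grid estimate of P(x1 + x2 + x3 < 8) for three independent
# uniforms on [1, 3]; the exact value is 1 - 1/48 = 47/48.
n = 80
dx = 2.0 / n
pts = [1.0 + (i + 0.5) * dx for i in range(n)]
hits = sum(1 for a in pts for b in pts for c in pts if a + b + c < 8)
p = hits / n ** 3
assert abs(p - 47.0 / 48.0) < 5e-3
```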
1,768,700
<p>According to my knowledge, to prove that $24^{31}$ is congruent to $23^{32}$ mod 19, we must show that their difference is divisible by 19, i.e. their remainders mod 19 must be equal. Please correct me if I'm wrong.</p> <p>So, I was able to reduce $23^{32}$ and find its mod 19, which is 17, but I am having a bit of a problem with $24^{31}$ since 31 is a prime number and I do not know how to break it down. Please help me with that. </p>
user5713492
316,404
<p>With perhaps a little less arithmetic, $2^2=4\equiv23\pmod{19}$, and $4\times5=20\equiv1\pmod{19}$, so $24\equiv5\equiv4^{-1}\equiv2^{-2}\pmod{19}$. By Fermat's little theorem, $$23^{32}=2^{2\times32}=2^{64}\equiv2^{64-7\times18}\equiv2^{-62}\equiv2^{-2\times31}\equiv24^{31}\pmod{19}$$</p>
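<p>For a direct machine check (my own addition), Python's three-argument <code>pow</code> confirms both the congruence and the intermediate facts used above:</p>

```python
# Modular exponentiation check of 24^31 == 23^32 (mod 19).
assert pow(24, 31, 19) == pow(23, 32, 19) == 17
assert (2 * 2) % 19 == 23 % 19   # 2^2 = 4 == 23 (mod 19)
assert (4 * 5) % 19 == 1         # so 5 = 4^{-1} (mod 19), and 24 == 5
assert pow(2, 18, 19) == 1       # Fermat's little theorem
```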
823,055
<p>This may be a naive question. I am reading the definition of differentiability of a function $f:\mathbb{R^n}\rightarrow \mathbb{R^m}$ in the book Calculus on Manifolds. I already know that all norms on $\mathbb{R}^n$ induce the same metric topology. If we change the norms in the definition (for example we can use the Manhattan norm), does the set of differentiable functions change?</p> <p>I already know that all norms on $\mathbb{R}^n$ induce the same metric topology but that doesn't seem to imply a negative answer to my question.</p> <p>Another Question: If the set of differentiable functions changes, is there any reason why we are defining differentiability using the Pythagorean norm? </p> <p>Thank you</p>
Lee Mosher
26,501
<p>$M \times [0,1]$ is homeomorphic to the "solid Klein bottle", and its boundary is the ordinary 2-dimensional Klein bottle, which is nonorientable.</p>
1,837,356
<p>I'm reading <a href="http://www.careerbless.com/aptitude/qa/permutations_combinations_imp8.php" rel="nofollow">this passage</a> and wondering why</p> <p>Number of ways in which k identical balls can be distributed into n distinct boxes =</p> <p>$$\binom {k+n-1}{n-1}$$</p> <p>could someone explain it to me please?</p>
pancini
252,495
<p>Imagine you lay out $k$ balls in a straight line. Then you divide them up into boxes by setting out markers splitting them up. For example, if you have $10$ balls and $3$ boxes, you might do</p> <p>$$\text{b, b, MARKER, b, b, b, MARKER, b, b, b, b, b}$$</p> <p>and this sequence means two balls in the first box, three in the second, and five in the third.</p> <p>Now hopefully it's clear that determining how many balls to put into which box is akin to placing $n-1$ dividers between them. So if you take a line of $n+k-1$ spots, you have to choose $n-1$ spots to put the dividers.</p>
1,837,356
<p>I'm reading <a href="http://www.careerbless.com/aptitude/qa/permutations_combinations_imp8.php" rel="nofollow">this passage</a> and wondering why</p> <p>Number of ways in which k identical balls can be distributed into n distinct boxes =</p> <p>$$\binom {k+n-1}{n-1}$$</p> <p>could someone explain it to me please?</p>
true blue anil
22,388
<p>This is what is called "stars and bars" combinatorics</p> <p>Suppose there are $15$ balls, and $3$ boxes.</p> <p>The balls could be variously distributed, e.g.</p> <p>$\Large\bullet\bullet\bullet+\bullet\bullet\bullet\bullet\bullet+\bullet\bullet\bullet\bullet\bullet\bullet\bullet= 15$</p> <p>I have used $+$ for a divider ("bar" in stars and bars)<br> Note that for $3$ boxes, only $2$ dividers are needed, for $n$ boxes, only $n-1$ will be needed</p> <p>You could have situations here where $1-2$ boxes remain empty, e.g.</p> <p>$\Large 0 +\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet +\bullet\bullet\bullet\bullet\bullet\bullet\bullet= 15\;$ [ Box $1$ empty]</p> <p>or $\;\Large\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet+\; 0 +\bullet\bullet\bullet\bullet\bullet\bullet\bullet= 15\;$ [ Box $2$ empty]</p> <p>or $\;\Large\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet +\;0 +\;0 = 15\;$ [ Boxes $2$ and $3$ empty ] </p> <p>The balls are $15$ "stars", and the $+'s$ are $2$ "bars",<br> which have to be placed somewhere in the $15+2$ symbols, thus the generalised formula $\binom{k+n-1}{n-1}$</p>
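<p>A brute-force confirmation of the formula (my own script): enumerate all $n$-tuples of non-negative integers summing to $k$ and compare with $\binom{k+n-1}{n-1}$.</p>

```python
# Count distributions of k identical balls into n distinct boxes directly
# and compare with the stars-and-bars formula C(k + n - 1, n - 1).
from math import comb
from itertools import product

def count_distributions(k, n):
    # number of n-tuples of non-negative integers summing to k
    return sum(1 for t in product(range(k + 1), repeat=n) if sum(t) == k)

assert count_distributions(15, 3) == comb(17, 2) == 136
for k in range(7):
    for n in range(1, 4):
        assert count_distributions(k, n) == comb(k + n - 1, n - 1)
```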
1,522,929
<p>For every fixed $t\ge 0$ I need to prove that the sequence $\big\{n\big(t^{\frac{1}{n}}-1\big) \big\}_{n\in \Bbb N}$ is non-increasing, i.e. $$n\big(t^{\frac{1}{n}}-1\big)\ge (n+1)\big(t^{\frac{1}{n+1}}-1\big)\;\ \forall n\in \Bbb N$$ I'm trying by induction over $n$, but got stuck in the proof for $n+1$: <br/> For $n=2$ it's clear that it follows, since $$t-1\ge 2(t^{1/2}-1)\Leftrightarrow t-1\ge 2t^{1/2}-2\Leftrightarrow t+1\ge 2t^{1/2}\Leftrightarrow t^2+2t+1\ge 4t\Leftrightarrow t^2-2t+1\ge 0\Leftrightarrow (t-1)^2\ge 0$$</p> <p>So, we suppose that $\;\ n\big(t^{\frac{1}{n}}-1\big)\ge (n+1)\big(t^{\frac{1}{n+1}}-1\big)\;\ $ is valid. (I.H.) <br/> So I need to prove that: $$(n+1)\big(t^{\frac{1}{n+1}}-1\big)\ge (n+2)\big(t^{\frac{1}{n+2}}-1\big)\ $$</p> <p>But expanding everything has not led me anywhere helpful. Any ideas or different approaches to prove this will be appreciated.</p>
Max0815
595,084
<p>Here is another solution with a different contour.</p> <p>Let <span class="math-container">$$I=\int^{\infty}_{-\infty}e^{ix^2}\text{ d}x$$</span> and let <span class="math-container">$$f(z)=e^{iz^2}$$</span> Note that our function is even.</p> <p>In user279043's answer, the contour they chose was (what I would presume) based on the fact that the complex integrand can be rewritten like this <span class="math-container">$$\exp\left(z^2e^{\frac{i\pi}2}\right)=\exp\left(\left(ze^{\frac{i\pi}4}\right)^2\right)$$</span> which implies that the integrand is well behaved along the ray <span class="math-container">$e^{\frac{i\pi}4}$</span>.</p> <p>However, we can also note that along the imaginary axis, our function is similarly well behaved. Consider the following contour shown below</p> <p><a href="https://i.stack.imgur.com/fAmuc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fAmuc.png" alt="enter image description here" /></a></p> <p>consisting of the paths <span class="math-container">$$\mathcal{C}=B+\Gamma+L$$</span> Thus, our contour integral about said contour would be <span class="math-container">$$\oint_{\mathcal{C}}f(z)\text{ d}z=\int_B+\int_{\Gamma}+\int_Lf(z)\text{ d}z=0$$</span> Each path can be parameterized as follows <span class="math-container">\begin{alignat*}{5} B&amp;:\text{ }z=x,\qquad &amp;\text{d}z&amp;=\text{d}x,\qquad &amp;x&amp;\in[0, R\,]\\ \Gamma &amp;:\text{ }z=Re^{i\theta},\qquad &amp;\text{d}z&amp;=iRe^{i\theta}\text{ d}\theta,\qquad &amp;\theta&amp;\in\left[0, \frac{\pi}{2}\right]\\ L&amp;:\text{ }z=iy,\qquad &amp;\text{d}z&amp;=i\text{d}y,\qquad &amp;y&amp;\in[\,R, 0] \end{alignat*}</span> Now let's evaluate each integral. 
The integral about <span class="math-container">$B$</span> is just half of <span class="math-container">$I$</span> <span class="math-container">$$\lim_{R\to+\infty}\int_Bf(z)\text{ d}z=\lim_{R\to+\infty}\int^{R}_0e^{ix^2}\text{ d}x=\frac{1}{2}I$$</span> The integral about <span class="math-container">$\Gamma$</span> vanishes along our integration interval using typical inequalities. Note that I put in red whatever goes to <span class="math-container">$1$</span>. <span class="math-container">\begin{align} \left|\int_{\Gamma}f(z)\text{ d}z\right|&amp;\le\int^{\frac{\pi}2}_{0}\left|e^{iR^2e^{2i\theta}}\right|\cdot\color{red}{|i|}|R|\color{red}{\left|e^{i\theta}\right|}\text{ d}\theta\\ &amp;\le\int^{\frac{\pi}2}_{0}R\color{red}{\left|e^{iR^2\cos(2\theta)}\right|} \left|e^{-R^2\sin(2\theta)}\right| \text{ d}\theta\\ &amp;\le\int^{\frac{\pi}4}_{0}R e^{-R^2\cdot \frac{4\theta}{\pi}} \text{ d}\theta+\int^{\frac{\pi}2}_{\frac{\pi}4}R e^{R^2\cdot \frac{4\left(\theta-\frac{\pi}{2}\right)}{\pi}} \text{ d}\theta \end{align}</span> Which goes to <span class="math-container">$0$</span> when we take the limit as <span class="math-container">$R$</span> goes to infinity. 
The third line comes from Jordan's inequality.</p> <p>Lastly, the integral along <span class="math-container">$L$</span> gives <span class="math-container">$$\lim_{R\to+\infty}\int_Lf(z)\text{ d}z=\lim_{R\to+\infty}\int^{0}_{R}e^{i(iy)^2}\cdot i\text{ d}y=-i\int^{\infty}_0e^{-iy^2}\text{ d}y$$</span> So we have <span class="math-container">$$\oint_{\mathcal{C}}f(z)\text{ d}z=\int_B+\int_{\Gamma}+\int_Lf(z)\text{ d}z=\frac12 I-i\int^{\infty}_0e^{-iy^2}\text{ d}y=0$$</span> Lastly, we can rearrange our equation and solve as follows <span class="math-container">\begin{align} \frac12 I&amp;=i\int^{\infty}_0e^{-iy^2}\text{ d}y\\ \implies\left(\frac{I}{2i}\right)^2&amp;=\int^{\infty}_0e^{-iy^2}\text{ d}y\cdot\int^{\infty}_0e^{-it^2}\text{ d}t\\ &amp;=\int^{\infty}_0\int^{\infty}_0e^{-i(y^2+t^2)}\text{ d}y\text{ d}t\\ &amp;=\int^{\frac{\pi}2}_0\int_0^{\infty}e^{-ir^2}\cdot r\text{ d}r\text{ d}\theta=\frac{\pi}{2}\int_0^{\infty}re^{-ir^2}\text{ d}r\\ &amp;=\frac{\pi}{4i}\int^{\infty}_0e^{-u}\text{ d}u=\frac{\pi}{4i}\cdot 1\\ \implies \frac{I}{2i}&amp;=\sqrt{\frac{\pi}{4i}}\\ \implies I&amp;=2i\cdot\frac{\sqrt{\pi}}{2}\cdot e^{-\frac{i\pi}{4}}=\sqrt{\pi}e^{\frac{i\pi}4}=(1+i)\sqrt{\frac{\pi}{2}} \end{align}</span> we can see that line 4 follows from the polar coordinate change of variables <span class="math-container">$$y=r\cos(\theta),\,\,t=r\sin(\theta),\qquad J_f=\left[\begin{array}{cc} \cos(\theta)&amp; -r\sin(\theta) \\ \sin(\theta)&amp; r\cos(\theta) \end{array}\right],\qquad \left|\det\left(J_f\right)\right|=r$$</span> and line 5 follows from this simple u-sub <span class="math-container">$$u= ir^2,\qquad\frac{\text{d}u}{\text{d}r}=2ir,\qquad\text{d}r=\frac{\text{d}u}{2ir}$$</span></p> <p>Hence <span class="math-container">$$I=\boxed{(1+i)\sqrt{\frac{\pi}{2}}}$$</span></p>
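<p>A numerical cross-check of the final value (my own addition, not part of the answer): regularizing with $\epsilon&gt;0$, one has $\int_0^\infty e^{(i-\epsilon)x^2}\,\mathrm dx=\frac12\sqrt{\pi/(\epsilon-i)}$, which tends to $\frac12(1+i)\sqrt{\pi/2}=I/2$ as $\epsilon\to0$. A trapezoid sum reproduces the closed form for $\epsilon=0.1$:</p>

```python
# Damped Fresnel check: int_0^inf exp(-(eps - i) x^2) dx = (1/2) sqrt(pi/(eps - i)).
import cmath

eps = 0.1                  # damping parameter (my choice)
a = eps - 1j
dx = 1e-3
n = 25_000                 # integrand ~ exp(-eps x^2) is negligible beyond x = 25
approx = 0.5 * dx          # trapezoid contribution of f(0) = 1
approx += sum(cmath.exp(-a * (k * dx) ** 2) for k in range(1, n)) * dx
exact = 0.5 * cmath.sqrt(cmath.pi / a)
assert abs(approx - exact) < 1e-3
```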
1,522,929
<p>For every fixed $t\ge 0$ I need to prove that the sequence $\big\{n\big(t^{\frac{1}{n}}-1\big) \big\}_{n\in \Bbb N}$ is non-increasing, i.e. $$n\big(t^{\frac{1}{n}}-1\big)\ge (n+1)\big(t^{\frac{1}{n+1}}-1\big)\;\ \forall n\in \Bbb N$$ I'm trying by induction over $n$, but got stuck in the proof for $n+1$: <br/> For $n=1$ it's clear that it holds, since $$t-1\ge 2(t^{1/2}-1)\Leftrightarrow t-1\ge 2t^{1/2}-2\Leftrightarrow t+1\ge 2t^{1/2}\Leftrightarrow t^2+2t+1\ge 4t\Leftrightarrow t^2-2t+1\ge 0\Leftrightarrow (t-1)^2\ge 0$$</p> <p>So, we suppose that $\;\ n\big(t^{\frac{1}{n}}-1\big)\ge (n+1)\big(t^{\frac{1}{n+1}}-1\big)\;\ $ is valid. (I.H.) <br/> So I need to prove that: $$(n+1)\big(t^{\frac{1}{n+1}}-1\big)\ge (n+2)\big(t^{\frac{1}{n+2}}-1\big)\ $$</p> <p>But expanding everything has not led me anywhere helpful. Any ideas or different approaches to prove this will be appreciated.</p>
K.defaoite
553,081
<h2>Another method.</h2> <p><span class="math-container">$$\int_{-\infty}^\infty\exp(ix^2)\mathrm dx=2\int_0^{\infty} \exp(ix^2)\mathrm dx\tag{1}$$</span></p> <p>Let <span class="math-container">$-z=ix^2\implies x=(iz)^{1/2}\implies \mathrm dx=\frac{i^{1/2}}{2}z^{-1/2}\mathrm dz$</span> hence <span class="math-container">$$\int_0^{\infty} \exp(ix^2)\mathrm dx=\frac{i^{1/2}}{2}\int_0^{-i\infty}z^{-1/2}\exp(-z)\mathrm dz\tag{2}$$</span></p> <p>This is <em>almost</em> the Gamma function, but the limits of integration are wrong. We need to somehow argue that we can switch the <span class="math-container">$\int_0^{-i\infty}$</span> to <span class="math-container">$\int_0^\infty$</span>. We consider the following contour in the complex plane:</p> <p><a href="https://i.stack.imgur.com/5cVhj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5cVhj.png" alt="contour" /></a></p> <p>We call this closed contour <span class="math-container">$C(r,R)$</span>, a union of the four curves <span class="math-container">$C_1,C_2,C_3,C_4~(r,R)$</span>. Explicitly, <span class="math-container">$$C_1(r,R)=\{t\mid t\in[R,r]\} \\ C_2(r)=\{re^{it}\mid t\in[0,-\pi/2]\} \\ C_3(r,R)=\{-it\mid t\in[r,R]\} \\ C_4(R)=\{Re^{it}\mid t\in[3\pi/2,2\pi]\} \\ C(r,R)=C_1(r,R)\cup C_2(r)\cup C_3(r,R)\cup C_4(R)$$</span></p> <p>Because we have an integrand that is analytic everywhere in the interior and on the boundary of <span class="math-container">$C(r,R)$</span>, we know that <span class="math-container">$$\oint\limits_{C(r,R)}z^{-1/2}\exp(-z)\mathrm dz=0$$</span></p> <p>Obviously, what we'd like to show is that the integrals on <span class="math-container">$C_2$</span> and <span class="math-container">$C_4$</span> go to <span class="math-container">$0$</span> as <span class="math-container">$r\to 0,R\to\infty$</span>. Let's have a look at <span class="math-container">$C_2$</span> first. 
Being that the integration goes in the counter-clockwise direction, we actually traverse <span class="math-container">$C_2$</span> in the <strong>clockwise</strong> direction and so we can parameterize the integration along <span class="math-container">$C_2$</span> as <span class="math-container">$$\int\limits_{C_2(r)}z^{-1/2}\exp(-z)\mathrm dz=\int_{0}^{-\pi/2}(re^{it})^{-1/2}\exp(-re^{it})ire^{it}\mathrm dt$$</span></p> <p>Simplifying things a little, <span class="math-container">$$\int\limits_{C_2(r)}z^{-1/2}\exp(-z)\mathrm dz=-ir^{1/2}\int_{0}^{\pi/2}e^{it/2}\exp(-re^{-it})\mathrm dt$$</span></p> <p>As <span class="math-container">$r\to 0$</span> we use a Taylor expansion on the <span class="math-container">$\exp$</span>: <span class="math-container">$$\exp(-re^{-it})=1-re^{-it}+\mathrm O(r^2)$$</span></p> <p>So <span class="math-container">$$\int\limits_{C_2(r)}z^{-1/2}\exp(-z)\mathrm dz=-ir^{1/2}\int_{0}^{\pi/2}e^{it/2}\exp(-re^{-it})\mathrm dt \\ =-ir^{1/2}\int_0^{\pi/2}e^{it/2}\mathrm dt+ir^{1/2}\int_0^{\pi/2}e^{it/2}re^{it}\mathrm dt +\mathrm O(r^2)\\ =ir^{1/2}\cdot(\text{integral not depending on}~r)+ir^{3/2}\cdot(\text{integral not depending on}~r)+\mathrm O(r^2) \\ =\mathrm O(r^{1/2})\to 0~\text{as}~r\to 0.$$</span></p> <p>Now we look at <span class="math-container">$C_4$</span>. 
This path is indeed counter-clockwise so we parameterize as such: <span class="math-container">$$\int\limits_{C_4(R)}z^{-1/2}\exp(-z)\mathrm dz=\int_{3\pi/2}^{2\pi}(Re^{it})^{-1/2}\exp(-Re^{it})iRe^{it}\mathrm dt \\ =iR^{1/2}\int_{3\pi/2}^{2\pi}e^{it/2}\exp(-Re^{it})\mathrm dt$$</span></p> <p>Expanding with Euler's formula, <span class="math-container">$$\int\limits_{C_4(R)}z^{-1/2}\exp(-z)\mathrm dz=iR^{1/2}\int_{3\pi/2}^{2\pi} e^{it/2}\exp(-R\cos t-iR\sin t)\mathrm dt \\ =iR^{1/2}\int_{3\pi/2}^{2\pi} e^{it/2}\exp(-R\cos t)\exp(-iR\sin t)\mathrm dt $$</span></p> <p>We know that <span class="math-container">$|\exp(-iR\sin t)|=|e^{it/2}|=1$</span> and so via the <a href="https://en.wikipedia.org/wiki/Estimation_lemma" rel="nofollow noreferrer">estimation lemma</a> we know that <span class="math-container">$$\left|\int_{C_4(R)}z^{-1/2}\exp(-z)\mathrm dz\right|= R^{1/2}\left|\int_{3\pi/2}^{2\pi} e^{it/2}\exp(-R\cos t)\exp(-iR\sin t)\mathrm dt\right| \\ \leq R^{1/2}\left|\int_{3\pi/2}^{2\pi}\exp(-R\cos t)\mathrm dt\right|\tag{3}$$</span></p> <p>Past this point, the simple bound from the estimation lemma is actually not enough here, as it would only give us <span class="math-container">$$\left|\int_{C_4(R)}z^{-1/2}\exp(-z)\mathrm dz\right|\leq \frac{\pi}{2}R^{1/2}$$</span></p> <p>Which still goes to <span class="math-container">$\infty$</span> as <span class="math-container">$R\to \infty$</span>. So we need to find a better way to bound the integral <span class="math-container">$\int_{3\pi/2}^{2\pi}\exp(-R\cos t)\mathrm dt$</span>. Due to periodicity, <span class="math-container">$$\int_{3\pi/2}^{2\pi}\exp(-R\cos t)\mathrm dt=\int_{-\pi/2}^{0}\exp(-R\cos t)\mathrm dt$$</span></p> <p>And then the evenness of the integrand,<br /> <span class="math-container">$$\int_{-\pi/2}^{0}\exp(-R\cos t)\mathrm dt=\int_0^{\pi/2}\exp(-R\cos t)\mathrm dt$$</span></p> <p>Here we will need to consult some mathematical literature. 
It turns out that <span class="math-container">$$\int_0^{\pi/2}\exp(-R\cos t)\mathrm dt=-\frac{\pi}{2}M_0(R)$$</span></p> <p>Where <span class="math-container">$M_0$</span> is a zeroth-order <a href="https://dlmf.nist.gov/11.2" rel="nofollow noreferrer">Modified Struve function.</a> It has the <a href="https://dlmf.nist.gov/11.6" rel="nofollow noreferrer">asymptotic expansion</a> <span class="math-container">$$M_0(z)\asymp\frac{-2}{\pi z}+\mathrm O(|z|^{-2}) \\ \text{as}~|z|\to\infty\\ (\operatorname{Re}z&gt;0)$$</span> Which means, going back to <span class="math-container">$(3)$</span>, <span class="math-container">$$\left|\int_{C_4(R)}z^{-1/2}\exp(-z)\mathrm dz\right|\leq -R^{1/2}\frac{\pi}{2}M_0(R)\asymp \frac{1}{R^{1/2}}$$</span> Which allows us to conclude <span class="math-container">$$\left|\int_{C_4(R)}z^{-1/2}\exp(-z)\mathrm dz\right|\to 0 \\ \text{as}~R\to\infty$$</span></p> <p>Therefore, <span class="math-container">$$0=\oint_{C(r,R)}z^{-1/2}\exp(-z)\mathrm dz =\left(\int\limits_{C_1(r,R)}+\int\limits_{C_2(r)}+\int\limits_{C_3(r,R)}+\int\limits_{C_4(R)}\right)z^{-1/2}\exp(-z)\mathrm dz \\ \to \left(\int\limits_{C_1(r,R)}+\int\limits_{C_3(r,R)}\right)z^{-1/2}\exp(-z)\mathrm dz~~\text{as}~r\to 0~,~R\to\infty$$</span> This finally allows us to conclude <span class="math-container">$$\int_0^{-i\infty}z^{-1/2}\exp(-z)\mathrm dz=-\int_\infty^0 z^{-1/2}\exp(-z)\mathrm dz \\ =\int_0^\infty z^{1/2-1}\exp(-z)\mathrm dz \\ =\Gamma(1/2)=\sqrt{\pi}$$</span> Going all the way back to the beginning, this means <span class="math-container">$$\int_{-\infty}^\infty\exp(ix^2)\mathrm dx=i^{1/2}\sqrt{\pi}=\sqrt{\frac{\pi}{2}}+i\sqrt{\frac{\pi}{2}}$$</span> Which instantly gives us the famous <a href="https://en.wikipedia.org/wiki/Fresnel_integral#Limits_as_x_approaches_infinity" rel="nofollow noreferrer">Fresnel Integrals</a>: <span class="math-container">$$\boxed{\int_{-\infty}^\infty \cos(x^2)\mathrm dx=\int_{-\infty}^\infty \sin(x^2)\mathrm
dx=\sqrt{\frac{\pi}{2}}}$$</span></p> <hr /> <p>While lengthier than the other responses, this answer is, in my opinion, far more direct than the others and I think is an accurate reflection of the calculations one would have to go through if one had not seen the problem beforehand. It's my first full attempt at solving this problem, anyhow.</p>
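The decay of the $C_4$ contribution can also be checked numerically, with no Struve-function machinery at all: the integral $\int_0^{\pi/2}e^{-R\cos t}\,\mathrm dt$ behaves like $1/R$ for large $R$ (a Watson's-lemma style estimate; this check is my addition, not part of the answer), so multiplying by $R^{1/2}$ still sends it to zero.

```python
import math

def arc_integral(R, N=200_000):
    """Trapezoidal rule for int_0^{pi/2} exp(-R cos t) dt, the integral
    that controls the C_4 bound after the periodicity/evenness steps."""
    a, b = 0.0, math.pi / 2
    h = (b - a) / N
    s = 0.5 * (math.exp(-R * math.cos(a)) + math.exp(-R * math.cos(b)))
    for k in range(1, N):
        s += math.exp(-R * math.cos(a + k * h))
    return s * h

vals = {R: arc_integral(R) for R in (10.0, 100.0, 1000.0)}
for R, v in vals.items():
    print(R, v, R * v, math.sqrt(R) * v)

# R * integral stays near 1, hence sqrt(R) * integral -> 0:
assert all(0.9 < R * v < 1.1 for R, v in vals.items())
assert math.sqrt(1000.0) * vals[1000.0] < math.sqrt(10.0) * vals[10.0] / 5
```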
4,001,031
<p>(For all those that it may concern, this is not a duplicate of my previous post, but starts in a similar way.)</p> <p>A triangle with side lengths a, b, c with a height (h) that intercepts the hypotenuse (c) at (x, y) such that it is split into two side lengths, c = m + n, we can derive Pythagoras' theorem using the area of a right triangle and the slope equations of the height and the hypotenuse.</p> <p><img src="https://upload.wikimedia.org/wikipedia/commons/f/fb/Pythagoras_similar_triangles_simplified.svg" alt="Text" /></p> <p>We begin by finding the coordinate (x,y) by using the concept of the area formula, ab = hc:</p> <p>x - value:</p> <p><span class="math-container">$bx = hm$</span></p> <p><span class="math-container">$x= \frac{hm}{b}$</span></p> <p><span class="math-container">$x = \frac{am}{c}$</span></p> <p>y - value:</p> <p><span class="math-container">$ay = hn$</span></p> <p><span class="math-container">$y = \frac{hn}{a}$</span></p> <p><span class="math-container">$y = \frac{bn}{c}$</span></p> <p>which gives us:</p> <p><span class="math-container">$(\frac{am}{c}, \frac{bn}{c})$</span></p> <p>We can also find the (x,y) coordinates using the slope equations of the height and hypotenuse.</p> <p>Height's equation:</p> <p><span class="math-container">$y = \frac{b}{a}x$</span></p> <p>Hypotenuse's equation:</p> <p><span class="math-container">$y = \frac{-a}{b}x + a$</span></p> <p>Now we can determine the (x, y) intercept.</p> <p>x - value:</p> <p><span class="math-container">$\frac{b}{a}x = \frac{-a}{b}x + a$</span></p> <p><span class="math-container">$\frac{b}{a}x + \frac{a}{b}x = a$</span></p> <p><span class="math-container">$x(a^2 + b^2) = a^2b$</span></p> <p><span class="math-container">$x = \frac{a^2b}{a^2 + b^2}$</span></p> <p>y - value:</p> <p><span class="math-container">$y = \frac{b}{a}(\frac{a^2b}{a^2 + b^2})$</span></p> <p><span class="math-container">$y = \frac{ab^2}{a^2 + b^2}$</span></p> <p>Which gives us:</p> <p><span
class="math-container">$(\frac{a^2b}{a^2 + b^2} ,\frac{ab^2}{a^2 + b^2})$</span></p> <p>Using <span class="math-container">$(\frac{bm}{c},\frac{an}{c}) ,(\frac{ab^2}{a^2 + b^2} , \frac{a^2b}{a^2 + b^2})$</span> there are 2 equalities:</p> <p><span class="math-container">$\frac{bm}{c} = \frac{a^2b}{a^2 + b^2}$</span></p> <p><span class="math-container">$\frac{an}{c} = \frac{ab^2}{a^2 + b^2}$</span></p> <p>Which after isolating and eliminating c becomes:</p> <p><span class="math-container">$a^2m = b^2n$</span></p> <p>This gives us two expressions:</p> <p><span class="math-container">$\sqrt{\frac{n}{m}} = \frac{a}{b}$</span></p> <p><span class="math-container">$\sqrt{\frac{m}{n}} = \frac{b}{a}$</span></p> <p>We substitute these into <span class="math-container">$c = m + n$</span>:</p> <p><span class="math-container">$c = m + n$</span></p> <p><span class="math-container">$\frac{c}{\sqrt{m}{n}} = \sqrt{\frac{m}{n}} + \sqrt{\frac{n}{m}}$</span></p> <p><span class="math-container">$\frac{ab}{\sqrt{m}{n}}c = a^2 + b^2 $</span></p> <p>Using <span class="math-container">$h$</span>, this can be written as:</p> <p><span class="math-container">$1 = (\frac{\sqrt{m}{n}}{h})(\frac{a^2 + b^2}{c^2})$</span></p> <p>or:</p> <p><span class="math-container">$1 = (\frac{h}{\sqrt{m}{n}})(\frac{c^2 }{a^2 + b^2})$</span></p> <p>Note, this leaves us with only two possibilities, the fractions are either inverses, or the numerator and denominator are equal. 
We know <span class="math-container">$h = ab/c$</span> and <span class="math-container">$h &lt; c$</span>, so <span class="math-container">$h ≠ a^2 + b^2$</span>, the denominator can also be split into <span class="math-container">$hc, c$</span> but we know <span class="math-container">$hc = ab$</span> so <span class="math-container">$ab ≠ a^2 + b^2$</span>, and we know the side lengths <span class="math-container">$m,n$</span> are smaller than <span class="math-container">$a,b$</span> which means the numerator and denominator are equal in the case of:</p> <p><span class="math-container">$a^2 + b^2 = c^2$</span></p>
J.G.
56,861
<p>As with your previous question, you have given a valid proof of Pythagoras... if I've followed your argument correctly. First, I'll condense it, if only for my own benefit. (I also swap round <span class="math-container">$a,\,b$</span>, because traditionally these are respectively opposite <span class="math-container">$A,\,B$</span>.)</p> <blockquote> <p>With <span class="math-container">$C=O$</span>, the hypotenuse <span class="math-container">$y=b(1-x/a)$</span> meets <span class="math-container">$y=ax/b$</span> at <span class="math-container">$x=ab^2/(a^2+b^2)$</span>, a proportion <span class="math-container">$b^2/(a^2+b^2)$</span> of the way from <span class="math-container">$A$</span> to <span class="math-container">$B$</span>, so <span class="math-container">$AH=b^2c/(a^2+b^2)$</span>; <span class="math-container">$HB$</span> follows similarly. Equating two expressions for <span class="math-container">$\cos\theta$</span> (i.e. using similar triangles), <span class="math-container">$c=\frac{(a^2+b^2)h}{ab}$</span>, which by area formulae is <span class="math-container">$(a^2+b^2)/c$</span>.</p> </blockquote> <p>Of the &quot;standard&quot; proofs I know, yours is most similar to <a href="https://en.wikipedia.org/wiki/Pythagorean_theorem#Proof_using_similar_triangles" rel="nofollow noreferrer">this</a>. But that proof doesn't even need area formulae: similar triangles give<span class="math-container">$$a^2=c\cdot BH,\,b^2=c\cdot HA\implies a^2+b^2=AB\cdot AB=c^2.$$</span>@S.Dolan's answer gives similar time-saving tips, albeit still using area formulae.</p>
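The similar-triangle relations quoted here ($a^2 = c\cdot BH$, $b^2 = c\cdot HA$) are easy to verify with exact rational arithmetic on a 3-4-5 triangle. An illustrative check, not part of the answer:

```python
from fractions import Fraction as F

# Right triangle with legs a, b and hypotenuse c (3-4-5, so c is rational).
a, b = F(3), F(4)
c = F(5)                      # sqrt(a^2 + b^2), rational for this triple
h = a * b / c                 # altitude to the hypotenuse (area formula)

# Segments cut off by the foot H of the altitude, from similar triangles:
#   a^2 = c * BH  and  b^2 = c * HA
BH = a * a / c
HA = b * b / c

assert BH + HA == c           # the two pieces reassemble the hypotenuse
assert h * h == BH * HA       # geometric-mean relation for the altitude
assert a * a + b * b == c * (BH + HA) == c * c   # Pythagoras drops out
```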
23,846
<p>I'm stuck with this algebra question.</p> <p>I try to prove that the exterior algebra $R$ over $k^d$, that is, the $k$-algebra that is generated by $x_1,\ldots,x_d$ and $x_ix_j=- x_jx_i$ for each $i,j$, has just one simple module which is not faithful.</p> <p>I think the only simple module is $k$, but I am not really sure if my idea does work or not.</p> <p>Can I use the fact $(x_i)^2=0$ for all $i$, then $k$ has cyclic subrings? If yes, then HOW?</p> <p>Also, one more question: if $k$ is finitely generated, is that enough to say that $R$ is Artinian? Thank you</p>
Mariano Suárez-Álvarez
274
<p><em>(I will be assuming $k$ is a field; if it is not, then you will need some hypothesis on it for your statement to be true)</em></p> <p>Suppose $R$ is a ring and that $S$ is a non-zero simple left $R$-module. Pick a non-zero element $s_0\in S;$ then the map $\phi:r\in R\mapsto rs_0\in S$ is a surjective map of left $R$-modules. Let $I\subseteq R$ be its kernel, a left ideal.</p> <p>Now suppose $t\in R$ is a homogeneous element of positive degree. It is easy to see that the subset $tS=\{ts:s\in S\}$ is a submodule of $S$. Since $S$ is simple, then either $tS=0$, and in that case $t\in I$, or $tS=S$. </p> <p>I want to check that the second cannot occur. Suppose otherwise: then there exists an $s\in S$ such that $ts=s_0$ and, since $s_0$ generates $S$, there also exists an $r\in R$ such that $s=rs_0$; it follows that $s_0=rts_0$ or, in other words, that $(1-rt)s_0=0$. Now it turns out that the element $1-rt$ is invertible in $R$ (since $t$ has positive degree, $rt$ is a sum of homogeneous elements of positive degree, and then $(rt)^{d+1}=0$: it follows that $\sum_{n\geq0}(rt)^n$ is a finite sum and the usual argument shows that it is the inverse of $1-rt$), so we see that $s_0=0$, which is absurd.</p> <p>We thus conclude that every homogeneous element of positive degree in $R$ is in fact contained in $I$. Now we observe that the span of the elements of positive degree is a left ideal of codimension $1$ in $R$, so it is the only <em>proper</em> left ideal which contains all the homogeneous elements of positive degree; $I$ must be that ideal, and there is in fact, as we wanted, exactly one isomorphism class of simple modules.</p> <p><strong>NB:</strong> if you know a little more about ring theory---specifically, of Artinian algebras---we can recast this argument as follows: let $R$ be your exterior algebra on $d$ generators, and let $I$ be the ideal generated by all homogeneous elements of positive degree.
It is pretty obvious that $I^{d+1}=0$, so that $I$ is a nilpotent ideal; on the other hand, it is also clear that $R/I$ is isomorphic to $k$ as a ring, so that it is in particular semisimple. It follows from a well-known characterization of the Jacobson radical that $I$ <em>is</em> the Jacobson radical of $R$. As a consequence, there is a bijection between the isoclasses of simple $R$-modules and the isoclasses of simple $R/I$-modules, and there is only one of the latter.</p>
1,685,895
<blockquote> <blockquote> <p>Question: Find a value of $n$ such that the coefficients of $x^7$ and $x^8$ in the expansion of $\displaystyle \left(2+\frac{x}{3}\right)^{n}$ are equal.</p> </blockquote> </blockquote> <hr> <p>My attempt:</p> <p>$\displaystyle \binom{n}{7}=\binom{n}{8} $</p> <p>$$ n(n-1)(n-2)(n-3)(n-4)(n-5)(n-6) \times 2^{n-7} \times (\frac{1}{3})^7= n(n-1)(n-2)(n-3)(n-4)(n-5)(n-6)(n-7) \times 2^{n-8} \times (\frac{1}{3})^8 $$</p> <p>$$ \frac{6}{7!} = \frac{n-7}{40320} $$</p> <p>$$ n-7 = 48 $$</p> <p>$$ n=55 $$</p>
Archis Welankar
275,884
<p>The general term of $(a+b)^n$ is $$t_{r+1}={n\choose r}a^{n-r}b^{r}$$ With $a=2$ and $b=\frac{x}{3}$, plug in $r$ as $7,8$, equate the two coefficients, and you will get it</p>
1,685,895
<blockquote> <blockquote> <p>Question: Find a value of $n$ such that the coefficients of $x^7$ and $x^8$ in the expansion of $\displaystyle \left(2+\frac{x}{3}\right)^{n}$ are equal.</p> </blockquote> </blockquote> <hr> <p>My attempt:</p> <p>$\displaystyle \binom{n}{7}=\binom{n}{8} $</p> <p>$$ n(n-1)(n-2)(n-3)(n-4)(n-5)(n-6) \times 2^{n-7} \times (\frac{1}{3})^7= n(n-1)(n-2)(n-3)(n-4)(n-5)(n-6)(n-7) \times 2^{n-8} \times (\frac{1}{3})^8 $$</p> <p>$$ \frac{6}{7!} = \frac{n-7}{40320} $$</p> <p>$$ n-7 = 48 $$</p> <p>$$ n=55 $$</p>
Uri Goren
203,575
<p>The coefficient of $x^7$ is $$\binom{n}{7}\frac{2^{n-7}}{3^7}$$ And the coefficient of $x^8$ is $$\binom{n}{8}\frac{2^{n-8}}{3^8}$$ Comparing them we get: $$\binom{n}{8}=2\cdot 3\binom{n}{7}=6\binom{n}{7}$$ Since $\binom{n}{8}=\frac{n-7}{8}\binom{n}{7}$, this gives $n-7=48$, i.e. $n=55$.</p>
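A quick exact check that $n=55$ works, using Python's integer and rational arithmetic (my addition):

```python
from fractions import Fraction
from math import comb

n = 55

def coeff(r):
    # coefficient of x^r in (2 + x/3)^n, as an exact rational
    return comb(n, r) * Fraction(2) ** (n - r) / Fraction(3) ** r

# The coefficients of x^7 and x^8 agree exactly at n = 55 ...
assert coeff(7) == coeff(8)
# ... and the relation C(n,8) = 6 C(n,7) pins n down uniquely:
assert comb(n, 8) == 6 * comb(n, 7)
assert all(comb(m, 8) != 6 * comb(m, 7) for m in range(8, 200) if m != 55)
print(coeff(7))
```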
401,002
<p>$\forall x \neg A \implies \neg \exists xA$<br> I won't ask you to solve this for me, but can you please give some guidelines on how to approach a proof in NDFOL?<br> There are many tricks that the TA shows in class, that I could not dream of...</p> <p>P.S. I managed to prove $\neg \exists xA \implies \forall x \neg A$ but could not get on from there.<br> Thanks!</p> <hr> <p>After the proposed answer, let me see if I got this correct: </p> <ol> <li>$\exists x A \implies \exists x A$ (<strike>axiom</strike> assumption)</li> <li>$\exists x A \implies A$ (from 1)</li> <li>$\exists x A, \forall x \neg A \implies \forall x \neg A$ (<strike>axiom</strike> assumption)</li> <li>$\exists x A, \forall x \neg A \implies \neg A$ ($\forall$ extraction, from 3)</li> <li>$\forall x \neg A \implies \neg \exists x A$ (from 2,4)</li> </ol> <p>Am I correct?<br> I could not understand the justification going from (1) to (2)</p>
Dan Christensen
3,515
<p>Stated in a slightly different way from LF...</p> <ol> <li><p>$\forall x \neg A(x)$ (Assume)</p></li> <li><p>$A(y)$ (Assume)</p></li> <li><p>$\neg A(y)$ (Universal Specification, line 1)</p></li> <li><p>$A(y) \wedge \neg A(y)$ (Join, lines 2, 3)</p></li> <li><p>$\neg\exists x A(x) $ (Conclusion, line 2)</p></li> <li><p>$\forall x \neg A(x) \rightarrow \neg\exists x A(x)$ (Conclusion, line 1)</p></li> </ol>
2,677
<p>If <em>G</em> is a group, its <strong>abelianization</strong> is the abelian group <em>A</em> and the map <em>G</em> &rarr; <em>A</em> such that any map <em>G</em> &rarr; <em>B</em> with <em>B</em> abelian factors through <em>A</em>. Abelianization is a functor, and in general a very lossy operation. The map <em>G</em> &rarr; <em>A</em> is always a surjection/quotient, because we can construct <em>A</em> by dividing <em>G</em> by the minimal normal subgroup that contains all commutators <em>ghg<sup>-1</sup>h<sup>-1</sup></em> for <em>g,h</em>&isin;<em>G</em>.</p> <p>If <em>V</em> is a finite-dimensional (super)vector space over a field <em>K</em>, then the abelianization of GL(<em>V</em>) is isomorphic to the multiplicative group <em>K</em><sup>*</sup> of non-zero numbers in <em>K</em>. Indeed, the determinant exhibits the desired isomorphism.</p> <p>Here are two questions I'm curious about:</p> <ol> <li>What can be said about the abelianizations of other (finite-dimensional) Lie groups?</li> <li>If <em>V</em> is an infinite-dimensional vector space, what can be said about the abelianization of GL(<em>V</em>)? Most infinite-dimensional vector spaces have some analytic structure, e.g. topological vector spaces, and so it's reasonable to ask that the operators in GL(<em>V</em>) should preserve that structure; you are welcome to take your favorite type of infinite-dimensional vector space and your favorite type of GL(<em>V</em>), if you want.</li> </ol>
Eric Wofsey
75
<p>I don't have anything to say about specific examples, but here are some general remarks. A way to construct the abelianization of any compact group is to consider its image under the product of all its 1-dimensional unitary representations. This is because a compact abelian group is characterized by its set of characters by Pontrjagin duality. More generally, you can construct the double Pontrjagin dual of a locally compact group to get its locally compact abelianization as a subgroup of a space of maps to U(1) with the compact-open topology.</p>
2,555,861
<p>I am reading up on <strong>Fraleigh's</strong> <em>A First Course in Abstract Algebra</em>, and he says ($H$ subgroup of $G$) $Hg=gH$ $iff$ $i_g[H]=H$ $iff$ $H$ is invariant under all inner automorphisms. I look up invariant and I find this definition:</p> <p>"Firstly, if one has a group G acting on a mathematical object (or set of objects) X, then one may ask which points x are unchanged, "invariant" under the group action, or under an element g of the group." from <a href="https://en.wikipedia.org/wiki/Invariant_(mathematics)" rel="nofollow noreferrer">Invariant Description Wiki</a>. </p> <p>First I am wondering if that means the elements of $H$ do not change but change positions (hence the permutation) or is $H$ the identity under all inner automorphisms of $G$.</p> <p>EDIT: Too many questions asked by me, I will ask them separately. </p>
openspace
243,510
<p>You could try this : let $\delta x = \frac{1}{n}$, then $x_{k} = \frac{k}{n}$. Now we can consider $\displaystyle \sum_{k=0}^{n-1}\frac{1}{n}(e^{x_{k+1}}-e^{x_{k}})$, then consider $\displaystyle\sum \frac{e^{k/n}(e^{1/n}-1)}{n}$. Now estimate $e^{1/n}$ and find the ''$\lim\sum$''</p>
838,690
<p>True or false question</p> <p>If B is a subset of A then {B} is an element of power set A. </p> <p>I think this is true.</p> <p>Because B is {1,2} say A {1,2,3} then power set of includes </p> <p>$\{\{1\},\{2\},\{3\},\{1,2\},\{1,3\},\{3,2\},\{1,2,3\},\emptyset\}$</p> <p>Unless {B} means $\{\{1,2\}\}$</p>
Avraham
91,378
<p>The definition of the power set of a set $A$ is the set of all subsets of $A$, including $A$ itself and the null set. As $B$ is a subset of $A$ in your question, then yes, $B$ is an element of the power set of $A$. Note, however, that $\{B\}$ (the set whose only element is $B$) is not an element of the power set of $A$ but a <em>subset</em> of it; so the statement as literally written, with $\{B\}$, is false.</p>
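To make the element-versus-subset distinction concrete, a small check with itertools (illustrative code, not from the original answer): $B$ is an element of the power set of $A$, while $\{B\}$ is a subset of the power set, not an element of it.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

A = frozenset({1, 2, 3})
B = frozenset({1, 2})
P_A = powerset(A)

assert len(P_A) == 8               # 2^3 subsets, including {} and A itself
assert B in P_A                    # B, a subset of A, is an *element* of P(A)
assert {B} <= P_A                  # {B} is a *subset* of P(A) ...
assert frozenset({B}) not in P_A   # ... but not an element of it
```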
2,357,272
<p>Find out the sum of the following infinite series $$\frac{3}{2^2(1)(2)} + \frac{4}{2^3(2)(3)} +\dots+\frac{r+2}{2^{r+1}(r)(r+1)}+\cdots $$ up to $r\to\infty$.</p> <p>MY TRY:- I tried to split $r+2$ as $[(r+1) +{(r+1)-r}]$ so that I can cancel one term from each terms in the numerator. Then I got an expression which was like Harmonic-Geometric series. But I could not do further any more after this.</p>
Dr. Sonnhard Graubner
175,066
<p>Prove by induction that the partial sums satisfy $$\sum_{i=1}^n\frac{i+2}{2^{i+1}i(i+1)}=\frac{2^{-n-1} \left(2^n n+2^n-1\right)}{n+1}=\frac12-\frac{1}{2^{n+1}(n+1)}$$ Letting $n\to\infty$, the sum of the series is $\frac12$.</p>
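The claimed closed form can be verified exactly for the first few $n$ (an illustrative check with exact rationals, my addition):

```python
from fractions import Fraction as F

def term(i):
    # i-th term of the series: (i + 2) / (2^{i+1} i (i+1))
    return F(i + 2, 2 ** (i + 1) * i * (i + 1))

def closed_form(n):
    # claimed partial sum: 2^{-n-1} (2^n n + 2^n - 1) / (n + 1)
    return F(2 ** n * n + 2 ** n - 1, 2 ** (n + 1) * (n + 1))

partial = F(0)
for n in range(1, 30):
    partial += term(n)
    assert partial == closed_form(n)
    # the closed form rearranges to 1/2 - 1/(2^{n+1} (n+1)) ...
    assert closed_form(n) == F(1, 2) - F(1, 2 ** (n + 1) * (n + 1))
# ... so the infinite sum is 1/2.
print(float(partial))   # ~ 0.4999999999
```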
617,163
<p>I need to find a proper definition of a quantile. It says: a $p$-th quantile $x_p$ is a number that satisfies the following conditions: $$ 0&lt;p&lt;1 $$ and $$ P(X \le x_{p}) \ge p $$ and $$ P(X \ge x_{p}) \ge 1-p $$ Is this definition right?</p>
alexjo
103,399
<p>The profit is $\pi(p,q)=pq-c(q)$ where $p$ is the selling price, $q$ is the quantity sold and $c(q)$ the cost to produce the quantity $q$. So you have $\pi(p,250)=250p-c(250)$ and you know that $\pi(p,250)=50p$; then you have $c(250)=200p$ and finally $$ \frac{\pi(p,250)}{c(250)}=\frac{250p-200p}{200p}=\frac{50}{200}=25\% $$</p>
617,163
<p>I need to find a proper definition of a quantile. It says: a $p$-th quantile $x_p$ is a number that satisfies the following conditions: $$ 0&lt;p&lt;1 $$ and $$ P(X \le x_{p}) \ge p $$ and $$ P(X \ge x_{p}) \ge 1-p $$ Is this definition right?</p>
okarin
112,825
<p>Total Money: Price of $250$ chairs </p> <p>Gain: Price of $250 - 200 = 50$ chairs</p> <p>Profit Percent: $\frac{\text{gain}}{\text{spent}} = \frac{250 - 200}{250 - (250 - 200)} = \frac{50}{200} = 25\%$</p>
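The arithmetic is easy to confirm in code, with an arbitrary sample price $p$ (the percentage does not depend on $p$; this check is purely illustrative):

```python
from fractions import Fraction as F

p = F(7)                       # any positive selling price works
revenue = 250 * p              # price of 250 chairs
cost = revenue - 50 * p        # profit equals the price of 50 chairs
profit = revenue - cost

assert profit == 50 * p
assert cost == 200 * p
assert profit / cost == F(1, 4)   # 50p / 200p = 25%, independent of p
```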
1,038,198
<p>How do you prove that $8 \cos{(x)}\cos{(2x)}\cos{(3x)} - 1 = \dfrac{\sin{(7x)}}{\sin{(x)}}$?</p>
Community
-1
<p>We have</p> <p>$$8 \sin x\cos{(x)}\cos{(2x)}\cos{(3x)} =4\sin(2x)\cos(2x)\cos(3x)=2\sin(4x)\cos(3x)$$ Moreover</p> <p>$$\sin x+\sin(7x)=2\sin\left(\frac{x+7x}{2}\right)\cos\left(\frac{7x-x}{2}\right)=\cdots$$ and the result follows easily.</p>
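A numeric spot-check of the identity at a few arbitrary points where $\sin x \neq 0$ (my addition, not part of the derivation):

```python
import math

def lhs(x):
    return 8 * math.cos(x) * math.cos(2 * x) * math.cos(3 * x) - 1

def rhs(x):
    return math.sin(7 * x) / math.sin(x)

# Avoid multiples of pi, where the right-hand side is undefined.
for x in (0.3, 1.1, 2.5, -0.7, 5.9):
    assert abs(lhs(x) - rhs(x)) < 1e-12, x
```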
62,000
<p>Let $I,J,K$ be three non-void sets, and let $\gamma$:$I\times J\times K\rightarrow\mathbb{N}$. Is there some nonempty set $X$, together with some functions {$\{ f_{i}:X\rightarrow X;i\in I\} $}, some subsets {$\{ \Omega_{j}\subset X;j\in J\} $}, and some points {$\{p_{k}\in X;k\in K} $} s.t. $\mid f_{i}^{-1}\left(p_{k}\right)\cap\Omega_{j}\mid=\gamma\left(i,j,k\right)$ $\left(i\in I,j\in J,k\in K\right)$, and $\mid f_{i}^{-1}\left(p\right)\mid\leq\mid\mathbb{R\mid}$$\left(i\in I,p\in X\right)$ ? In other words, is $\gamma$ ''representable'' as the number of solutions of some ''reasonable'' equations? [An elementary problem, indeed.] </p>
Gerhard Paseman
3,402
<p>Consider the following construction. Let $Y$ be a subset of $X$ such that $Y$ is (equipollent to) $I \times K \times \omega$. I think of it as $I$-many copies of an array with $K$-many rows and each row has countably many elements. The $k$th row in the $i$th array is the preimage of $p_k$ under $f_i$. (For $h$ not equal to $i$, let $f_i$ send the $k$th row in the $h$th array to, say, the first element in that row, or perhaps instead to some subset of elements in that row, under the condition that those images are disjoint from the set of $p_k$.) For the sets $\Omega_j$, pick precisely $\gamma(i,j,k)$ elements from the $k$th row in the $i$th array and put them into $\Omega_j$. So far, we have achieved that the preimage of every point in the range of $f_i$ is at most countably infinite, for every $i$. We also have the desired condition on the intersection of the preimage of $p_k$ under $f_i$ with the set $\Omega_j$.</p> <p>Now everything is done except for deciding where to put the $p_k$. As long as you avoid sending $f_i(x)$ to a $p_k$ for $x$ outside the ith array, you can label some of the array elements with $p_k$; this should be doable because you have control of how $f_i$ acts outside the $i$th array.</p> <p>Alternatively, let the $p_k$ be disjoint from $Y$ and the $\Omega_j$, and let $f_i$ send the $p_k$ to themselves, or to some other set disjoint from the $\Omega_j$.</p> <p>Gerhard "Ask Me About System Design" Paseman, 2011.04.17</p>
2,481,767
<p>Let A={$3m-1|m\in Z$} and B={$4m+2|m\in Z$} and let $f:A\rightarrow B$ is defined by </p> <p>$f(x)=\frac{4(x+1)}{3}-2$ . Is f surjective?</p> <p>I'm not really sure how to prove this. By trying out certain values it seems it's surjective. This is my work so far:</p> <p>$f(x)=y \iff \frac{4(x+1)}{3}-2 = y \iff x=\frac{3y+2}{4}$</p> <p>If we substitute $y=4m+2$ then $x=\frac{3(4m+2)+2}{4} \iff x=\frac{12m+8}{4} \iff x=3m+2$. Although this is not exactly $ A = 3m-1$ it seems that no matter which number for m you choose you basically get the same set in the end. </p> <p>Same if we do $f(A)=B \iff f(3m-1)=4m+2 \iff \frac{4(3m-1+1)}{3}-2=4m+2 \iff $</p> <p>$\iff 4m-2=4m+2$. Obviously these two are not equal yet they yield the same exact sets since they are infinite. So is f surjective? It seems like it , but these two proofs are not exactly very precise.</p>
copper.hat
27,978
<p>Solve $f(3n-1) = 4m+2$ to get $n=m+1$.</p> <p>In particular, for any $b \in B$ there is some $a \in A$ such that $f(a) = b$.</p> <p>In fact, it is unique.</p> <p>In particular, it is not hard to compute $f^{-1}(b) = {3b+2 \over 4}$.</p>
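A brute-force confirmation over a window of both sets (my code; the formula for $f$ is cleared of denominators so all arithmetic stays in integers):

```python
def f(x):
    # f(x) = 4(x+1)/3 - 2, applied only to elements of A = {3m - 1},
    # for which the numerator below is always divisible by 3.
    num = 4 * (x + 1) - 6
    assert num % 3 == 0
    return num // 3

# Every b = 4m + 2 in B is hit by a = 3(m+1) - 1 in A (the n = m+1 above):
for m in range(-50, 50):
    b = 4 * m + 2
    a = 3 * (m + 1) - 1
    assert f(a) == b
    # and the preimage inside a window of A is unique:
    hits = [x for x in (3 * k - 1 for k in range(-200, 200)) if f(x) == b]
    assert hits == [a]
```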
3,657,075
<p><a href="https://i.stack.imgur.com/ytcQ3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ytcQ3.png" alt="enter image description here"></a></p> <blockquote> <p>In the given figure <span class="math-container">$\angle BAE, \angle BCD$</span> and <span class="math-container">$\angle CDE$</span> are right angles and <span class="math-container">$AB = 4, BC=3, CD=4$</span> and <span class="math-container">$DE=5$</span>. What is the value of <span class="math-container">$\angle ABD$</span>?</p> </blockquote> <p>I found this problem in a sheet of contest math problems. <strong>However</strong> the problem stated in the sheet was "<strong>Find the value of <span class="math-container">$AE$</span></strong> ". I proved the problem using <span class="math-container">$BE^2 = 8^2+4^2$</span>. Then I noticed that it could be proved by making rectangle <span class="math-container">$ABED'$</span>. I tried to prove why <span class="math-container">$\angle ABD$</span> is a right angle only with the given conditions but failed. Is the <span class="math-container">$\angle ABD$</span> even a right angle (then how to prove it) or it can have different values? </p>
marty cohen
13,079
<p>BD = 5.</p> <p>Drawing a perpendicular from D to AE at G, DG = 4 so EG = 3 and AG = 5 so AE = 8.</p> <p>As to the angles, EDG = CDB (both 3-4-5) so CDE = AED = CBD so BDG = 90 so ABD = 90.</p>
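One consistent coordinate realization of the figure (my reconstruction, with $A$ at the origin and $AE$ on the $x$-axis; all lengths are scaled by 5 so every check is exact integer arithmetic) confirms each step of this answer:

```python
import math

# Figure coordinates scaled by 5: A, B, C, D, E realize AB = 4, BC = 3,
# CD = 4, DE = 5 after dividing all lengths by 5.
A, B, C, D, E = (0, 0), (0, 20), (9, 8), (25, 20), (40, 0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

def dot_at(P, Q, R):
    """Dot product of rays Q->P and Q->R; zero iff angle PQR is right."""
    return (P[0] - Q[0]) * (R[0] - Q[0]) + (P[1] - Q[1]) * (R[1] - Q[1])

# The given data:
assert (dist(A, B), dist(B, C), dist(C, D), dist(D, E)) == (20, 15, 20, 25)
assert dot_at(B, A, E) == dot_at(B, C, D) == dot_at(C, D, E) == 0

# The claims made in the answer (all at scale 5):
G = (D[0], 0)                       # foot of the perpendicular from D to AE
assert dist(B, D) == 25             # BD = 5
assert (dist(D, G), dist(E, G), dist(A, G)) == (20, 15, 25)  # DG=4, EG=3, AG=5
assert dist(A, E) == 40             # AE = 8
assert dot_at(A, B, D) == 0         # angle ABD is right
```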
3,657,075
<p><a href="https://i.stack.imgur.com/ytcQ3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ytcQ3.png" alt="enter image description here"></a></p> <blockquote> <p>In the given figure <span class="math-container">$\angle BAE, \angle BCD$</span> and <span class="math-container">$\angle CDE$</span> are right angles and <span class="math-container">$AB = 4, BC=3, CD=4$</span> and <span class="math-container">$DE=5$</span>. What is the value of <span class="math-container">$\angle ABD$</span>?</p> </blockquote> <p>I found this problem in a sheet of contest math problems. <strong>However</strong> the problem stated in the sheet was "<strong>Find the value of <span class="math-container">$AE$</span></strong> ". I proved the problem using <span class="math-container">$BE^2 = 8^2+4^2$</span>. Then I noticed that it could be proved by making rectangle <span class="math-container">$ABED'$</span>. I tried to prove why <span class="math-container">$\angle ABD$</span> is a right angle only with the given conditions but failed. Is the <span class="math-container">$\angle ABD$</span> even a right angle (then how to prove it) or it can have different values? </p>
Calvin Lin
54,563
<p>The original question is under-defined. </p> <p>E.g. the following configuration satisfies all three stated right angles and all four stated lengths:<br> <span class="math-container">$ A = (0,0), B = (0, 4), C = (3, 4), D = (3, 0), E = (-2, 0) $</span> which gives us <span class="math-container">$ AE = 2$</span>. </p> <hr> <p>What we have is <span class="math-container">$AE = 8 \Leftrightarrow \angle ABD = 90^ \circ$</span>. </p> <p>As Quanto's solution indicates: If <span class="math-container">$ AE = 8$</span>, then <span class="math-container">$\angle ABD = 90^ \circ$</span>. </p> <p>If <span class="math-container">$ \angle ABD = 90^ \circ$</span>, then we can easily show <span class="math-container">$AE = 8$</span> (E.g. by coordinate geometry, or length chasing.)</p>
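For concreteness, here is an explicit configuration (my coordinates, offered as an illustration) in which all three stated right angles and all four stated lengths hold, yet $AE = 2$ and $\angle ABD$ is not right, confirming that the stated data alone do not determine $AE$:

```python
import math

# E lands on the other side of A here, so AE = 2 instead of 8.
A, B, C, D, E = (0, 0), (0, 4), (3, 4), (3, 0), (-2, 0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

def dot_at(P, Q, R):
    """Dot product of rays Q->P and Q->R; zero iff angle PQR is right."""
    return (P[0] - Q[0]) * (R[0] - Q[0]) + (P[1] - Q[1]) * (R[1] - Q[1])

assert (dist(A, B), dist(B, C), dist(C, D), dist(D, E)) == (4, 3, 4, 5)
assert dot_at(B, A, E) == dot_at(B, C, D) == dot_at(C, D, E) == 0
assert dist(A, E) == 2             # not 8!
assert dot_at(A, B, D) != 0        # and angle ABD is not right
```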
1,768,100
<p>I have started studying field theory and I have a question. Somewhere I saw that a finite field with $p^n$ elements has a subfield of order $p^m$, where $m$ is a divisor of $n$. My question: if it is a field, then how can it have a proper subfield? Since it is a field, it doesn't have any proper nonzero ideal, so how can it have a subfield?</p>
paf
333,517
<p>You should parametrize your line segment as $$\gamma : t\mapsto t(1+i)+(1-t)(-i)$$ when $t\in [0;1]$. Then, you have to replace in your integral $z$ by $\gamma(t)$, $dz$ by $\gamma'(t)dt$ and $L$ by $[0;1]$ (the same as a standard change of variables for real integrals) and you should be able to compute the integral without trouble.</p>
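<p>As a concrete sanity check (the integrand from the original question is not shown here, so $f(z)=z$ is an assumed integrand chosen purely for illustration), the substitution can be carried out numerically with a midpoint rule and compared against the antiderivative $z^2/2$ evaluated at the endpoints:</p>

```python
# Numeric check of the parametrization; f(z) = z is an assumed
# integrand chosen only for illustration.
def gamma(t):
    # the segment from -i (at t = 0) to 1 + i (at t = 1)
    return t * (1 + 1j) + (1 - t) * (0 - 1j)

def contour_integral(f, n=100_000):
    dgamma = (1 + 1j) - (0 - 1j)      # gamma'(t) = 1 + 2i, constant here
    h = 1.0 / n
    # midpoint rule for the real integral over t in [0, 1]
    return sum(f(gamma((k + 0.5) * h)) * dgamma * h for k in range(n))

approx = contour_integral(lambda z: z)
exact = (gamma(1.0) ** 2 - gamma(0.0) ** 2) / 2    # antiderivative z^2 / 2
```

<p>Both evaluations agree, which is exactly the change-of-variables mechanism described above.</p>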
66,670
<p>I want to use:</p> <pre><code>demand = {1.92, 2.07, 2.37, 2.72, 2.87}*10^6; NSolve[SetV == demand[[1]]/(Cpf (1 - χ)), χ] </code></pre> <p>I want to make a vector of solutions for chi (χ) given each of the demand vector components.</p>
Bob Hanlon
9,362
<p>Bounding the range of n resolves the issue with <code>Maximize</code></p> <pre><code>Maximize[{(3 n + 4)/(2 n + 1), Element[n, Integers], -100 &lt;= n &lt;= 100}, n] </code></pre> <blockquote> <p>{4, {n -> 0}}</p> </blockquote> <p>Or,</p> <pre><code>Maximize[{(3 n + 4)/(2 n + 1), -100 &lt;= n &lt;= 100}, n, Integers] </code></pre> <blockquote> <p>{4, {n -> 0}}</p> </blockquote> <p>Any large value for the range bound will work since</p> <pre><code>Limit[(3 n + 4)/(2 n + 1), n -&gt; #] &amp; /@ {Infinity, -Infinity} </code></pre> <blockquote> <p>{3/2, 3/2}</p> </blockquote> <pre><code>DiscretePlot[ (3 n + 4)/(2 n + 1), {n, -10, 10}, PlotRange -&gt; All] </code></pre> <p><img src="https://i.stack.imgur.com/PEOXY.png" alt="enter image description here"></p>
2,516,023
<blockquote> <p>Why does taking logarithms on both sides of $0&lt;r&lt;s$ reverse the inequality for logarithms with base $a$, $0&lt;a&lt;1$?</p> </blockquote> <p>I would like some intuition on why this works. I tried graphing $\log_{0.5}(x)$ on Desmos, for example, and if the graph were true this would be evident from the graph, but the graph seems wrong because I don't get why as $x\to0^+$, $y\to \infty$ since $0.5^x$ should be $\le 0.5$ where $0&lt;x&lt;1$.</p>
marty cohen
13,079
<p>Because $\log_a(b) =\dfrac{\log_c b}{\log_c a} $ for any $a, b, c &gt; 0$.</p> <p>The usual thing is to choose $c=e$ or $c=10$; the key point is that in both cases $c &gt; 1$.</p> <p>Therefore, if $b &gt; 1$ and $0 &lt; a &lt; 1$ then $\log_c b &gt; 0$ and $\log_c a &lt; 0$ so that $\log_a b &lt; 0$ and the usual inequalities are reversed.</p>
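<p>A quick numerical illustration of the sign flip (the values $a=0.5$, $r=2$, $s=3$ are arbitrary choices with $0&lt;a&lt;1$ and $0&lt;r&lt;s$): since $\ln a&lt;0$, dividing by $\log_c a$ reverses the comparison.</p>

```python
import math

a, r, s = 0.5, 2.0, 3.0          # 0 < a < 1 and 0 < r < s
log_r = math.log(r, a)           # log base a via change of base
log_s = math.log(s, a)
# math.log(a) < 0, so the order of log_r and log_s is reversed
```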
148,313
<p>Someone has claimed that he has constructed a quaternion representation of the one dimensional (along the x axis) Lorentz Boost.</p> <p>His quaternion Lorentz Boost is $v'=hvh^*+ 1/2( [hhv]^*-[h^*h^*v^*]^*)$ where h is (sinh(x),cosh(x),0,0). He derived this odd transform by substituting the hyperbolic sine and cosine for the sine and cosine in the usual unit quaternion rotation $v'=hvh^*$ and then subtracting out unwanted factors. You can see the short "proof" here: </p> <p><a href="http://visualphysics.org/preprints/qmn10091026" rel="nofollow">http://visualphysics.org/preprints/qmn10091026</a></p> <p>I have argued that, whereas the quaternion rotations form a group, his newly devised transform probably does not. Two transformations of the form $v'=hvh^*+ 1/2( [hhv]^*-[h^*h^*v^*]^*)$ almost certainly do not make another transformation of the form $v'=fvf^*+ 1/2( [ffv]^*-[f^*f^*v^*]^*)$ He has not attempted to prove that and he won't because he thinks it is unnecessary.</p> <p>He has responded that, even if I'm correct, he still has created a Lorentz Boost despite the fact that he has not created a group. I've argued that that it is essential that two Lorentz Boosts make another Lorentz Boost. Without that group structure there is no Boost. Is this correct? </p> <p>Can you have a Lorentz Boost along the x axis without having the group structure Boost+Boost=Boost? (for the case of Boosts along the x axis. I know that, for boosts in different directions, Boost+Boost=Rotation)</p>
Ronald
27,884
<p>Aye, this is <a href="https://en.wikipedia.org/wiki/Yao%27s_Millionaires%27_Problem">Yao's Millionaires Problem</a>!</p>
2,098,810
<p>In a triangle, what is the ratio of the distance between a vertex and the orthocenter to the distance of the circumcenter from the side opposite that vertex?</p>
szw1710
130,298
<p>Another solution could be given by the <a href="https://en.wikipedia.org/wiki/Hermite%E2%80%93Hadamard_inequality" rel="nofollow noreferrer">Hermite-Hadamard inequality</a>. </p> <p>It is easy to verify that $f(x)=\sqrt{\strut 1+x^2}$ is convex (by $f''&gt;0$). Then $$f(0)\leqslant \frac{1}{2}\int\limits_{-1}^1 f(x)\text{d}x\leqslant \frac{f(-1)+f(1)}{2}.$$ For our function we have $$1\leqslant \frac{1}{2}\int\limits_{-1}^1\sqrt{\strut 1+x^2}\text{d}x\leqslant \sqrt{2},$$ which is the desired inequality.</p>
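<p>Both bounds can be checked numerically with the standard library (the closed form $\int_{-1}^1\sqrt{1+x^2}\,dx=\sqrt2+\operatorname{arsinh}1$ is used only for comparison):</p>

```python
import math

def f(x):
    return math.sqrt(1 + x * x)

def midpoint(func, a, b, n=200_000):
    # simple midpoint rule, accurate enough for this smooth integrand
    h = (b - a) / n
    return sum(func(a + (k + 0.5) * h) for k in range(n)) * h

mean_value = 0.5 * midpoint(f, -1.0, 1.0)
closed_form = 0.5 * (math.sqrt(2) + math.asinh(1.0))
# Hermite-Hadamard predicts f(0) <= mean_value <= (f(-1) + f(1)) / 2
```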
1,805,615
<p>I have one problem. I am sure it is not complicated, but I only need help to see am I, at least, on the right path.</p> <p><strong>Problem: Let $S=Span\{(0,-2,3),(1,1,1),(2, -2, 8)\}\subseteq \mathbb R^3$. Find subspace $T$ of space $\mathbb R^3$ so that $\mathbb R^3=S \oplus T$.</strong></p> <p>Here is what I have done so far:</p> <ol> <li>Since $S$ is span of vectors $(0,-2,3),(1,1,1),(2, -2, 8)$, that means that $S$ has all vectors that are linear combination of those three vectors.</li> <li>We are searching for subspace $T$, but we need to keep in mind that $\mathbb R^3=S \oplus T$, which means that S$\cap T=\overrightarrow 0$. So, $T$ would have all those vectors that cannot be a result of linear combination of vectors from $S$.</li> <li>After that, I placed vectors from $S$ into a matrix:\begin{bmatrix} 0 &amp; 1 &amp; 2 \\ -2 &amp; 1 &amp; -2 \\ 3 &amp; 1 &amp; 8 \\ \end{bmatrix} and I found its rank is $2$ which means that $\dim S=2$.We also know that $\dim\mathbb R^3=3$. Now, based on formula $\dim\mathbb R^3=\dim(S\oplus T)=\dim S+\dim T$, we get that dimension of $T$ should be $1$. That would mean that $T$ needs to have, of course, $3$ vectors, but the rank of matrix $[T]$ should be one. Am I right to assume those $3$ vectors would be something like $$T=\{(\alpha_0 a,\alpha_0 b,\alpha_0 c),(\alpha_1 a, \alpha_1 b, \alpha_1 c),(\alpha_2 a, \alpha_2 b, \alpha_2 c)\}$$, where $\alpha_0, \alpha_1$ and $\alpha_2$ are scalars? For example: $$[T]=\begin{bmatrix} 2 &amp; 7 &amp; 1 \\ 4 &amp; 14 &amp; 2 \\ 8 &amp; 28 &amp; 4 \\ \end{bmatrix}$$ Rank of that matrix would be one, making dimension of $T$ to be one. So, $T$ <em>is span over one vector</em>.</li> </ol> <p>I searched here and found a similar problem but I guess I am not sure of how $T$ would really look like. Would $T$ be $span$ over vector $(1,0,0)$ because span over that vector cannot produce any in space $S$? Can $T$ be span over any vector that is making a base in $\mathbb R^3$? 
For example, $T=\operatorname{span}\{(1,1,0)\}$?</p> <p>Thank you.</p> <p>Edit: changed last question.</p>
DonAntonio
31,254
<p>An idea: indeed, $\;\dim\mathcal S=\dim\text{Span}\,S=2\;$ ,so why won't you reduce your matrix (say, by rows to make it easier) to check what vector to take out (the one lin. dep. in the other two) and begin to check what vector to add in order to make the whole thing linearly independent?:</p> <p>$$\begin{pmatrix}1&amp;1&amp;1\\2&amp;-2&amp;8\\0&amp;-2&amp;3\end{pmatrix}\stackrel{R_2-2R_1}\rightarrow\begin{pmatrix}1&amp;1&amp;1\\0&amp;-4&amp;6\\0&amp;-2&amp;3\end{pmatrix}\stackrel{R_3-\frac12R_2}\rightarrow\begin{pmatrix}1&amp;1&amp;1\\0&amp;-4&amp;6\\0&amp;\;0&amp;0\end{pmatrix}$$</p> <p>and we see the third vector is lin. dep. in the first two, so you can add for example vector $\;(0,0,1)\;$ , and then</p> <p>$$R^3=\mathcal S\oplus T\;,\;\;\text{with}\;\;T:=\text{Span}\,(0,0,1)$$</p> <p>You can also choose $\;(0,1,0)\;$ instead, or infinite different vectors that'll work (how to know what? You can take a general vector $\;(a,b,c)\;$ and put it in the above matrix instead of the third row, and again reduce the matrix and check when the third row doesn't become all zeros)</p> <p>Observe: $\;T\;$ does <strong>not</strong> need "to have three vectors", whatever that means: it has to be, by what we did above, a one-dimensional subspace generated, of course, by one single non-zero vector.</p>
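<p>The rank bookkeeping above can be verified mechanically; the small exact-arithmetic rank routine below is only an illustration of the check, not part of the derivation:</p>

```python
from fractions import Fraction

def rank(rows):
    """Row-reduction rank over the rationals (exact arithmetic)."""
    m = [[Fraction(v) for v in row] for row in rows]
    rk = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(rk, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and m[i][col] != 0:
                factor = m[i][col] / m[rk][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

S = [(0, -2, 3), (1, 1, 1), (2, -2, 8)]
```

<p>Adding $(0,0,1)$ or $(0,1,0)$ raises the rank to $3$, while adding a vector already in the span, such as $(2,0,5)=2(1,1,1)+(0,-2,3)$, does not; that is exactly the "reduce and check the third row" test described at the end.</p>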
349,309
<p>I seem to be short on examples for $I$-adic completions of rings.</p> <p>I know that a ring is $I$-adically complete if the canonical homomorphism into the inverse limit is an isomorphism. My thinking and searching on the internet has been surprisingly fruitless, though, for examples where the map is either surjective and not injective, or injective and not surjective. (Am I mistaken that both are possible?)</p> <blockquote> <p>So, the main question here is for one or more useful examples of both of these types of $I$-adically incomplete rings.</p> </blockquote> <p>If possible, it would be nice to have the surjective-not-injective example be with respect to an ideal $I$ which contains a nonzero idempotent $e$.</p>
rschwieb
29,335
<p>One thing I learned much later is: if <span class="math-container">$I$</span> is a nilpotent ideal, then <span class="math-container">$R$</span> is <span class="math-container">$I$</span>-adically complete.</p> <p>Thinking of the completion as a subring of <span class="math-container">$\prod R/I^n$</span>, it's clear that at a certain <span class="math-container">$n$</span> the tail is just copies of <span class="math-container">$R$</span>, and the congruence property that distinguishes elements of the product that are in the limit says that the tail of any given element is a constant sequence. Then it's not hard to see that the <span class="math-container">$r$</span> defining the constant tail is what maps onto your element. So the canonical map is surjective. It's obviously injective too, so there's the isomorphism.</p> <p>This provides a nice large class of <span class="math-container">$I$</span>-adically complete rings. It's also worth noting that you can't weaken &quot;nilpotent&quot; to &quot;nil&quot; or even &quot;T-nilpotent&quot;: there are counterexamples which aren't <span class="math-container">$I$</span>-adically complete in those cases.</p>
3,903,774
<p><span class="math-container">$30$</span> red balls and <span class="math-container">$20$</span> black balls are being distributed to <span class="math-container">$5$</span> kids, so that each kid gets at least one red ball. In how many ways can we distribute balls?</p> <p>Circle the correct answers:</p> <p>a) <span class="math-container">$\binom{29}{4}$</span> <span class="math-container">$\binom{24}{20}$</span></p> <p>b) <span class="math-container">$\binom{29}{5}$</span> <span class="math-container">$\binom{24}{5}$</span></p> <p>c)<span class="math-container">$|Sur(N_{30},N_{5})|S(20,5)$</span> Note: <span class="math-container">$|Sur(N_{30},N_{5})|$</span> is the number of surjections, and <span class="math-container">$S(20,5)$</span> is a Stirling number of the second kind</p> <p>d) None of <span class="math-container">$3$</span> previous answers are correct</p> <p>My approach:</p> <p>First I gave each of <span class="math-container">$5$</span> kids one red ball, which leaves me with <span class="math-container">$25$</span> red balls. Now I used the stars and bars method to distribute the balls I am left with.</p> <p>Red balls: <span class="math-container">$x_1+x_2+x_3+x_4+x_5=25,x_i\geq 0, i=1,..,5$</span>. This equation has <span class="math-container">$\binom{5+25-1}{25}=\binom{29}{25}=\binom{29}{4}$</span>.</p> <p>Black balls: <span class="math-container">$x_1+x_2+x_3+x_4+x_5=20,x_i\geq 0, i=1,..,5$</span>. This equation has <span class="math-container">$\binom{5+20-1}{20}=\binom{24}{20}=\binom{24}{4}$</span>.</p> <p>So I would say a) is the correct answer.. am I right?</p>
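<p>A standard-library sanity check of the stars-and-bars counts (the brute force is only run on a scaled-down instance; the helper name <code>count_weak_compositions</code> is mine, introduced just for the check):</p>

```python
from itertools import product
from math import comb

def count_weak_compositions(total, parts):
    # brute-force count of nonnegative solutions x1 + ... + xk = total
    return sum(1 for xs in product(range(total + 1), repeat=parts)
               if sum(xs) == total)

# scaled-down check of stars and bars: x1 + ... + x5 = 6
small = count_weak_compositions(6, 5)    # should equal C(6+5-1, 5-1)

# the counts used above
red = comb(29, 4)      # solutions of x1 + ... + x5 = 25
black = comb(24, 4)    # solutions of x1 + ... + x5 = 20
```

<p>Both counts match the formula, and $\binom{24}{20}=\binom{24}{4}$, which is consistent with choice a).</p>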
Pietro Paparella
414,530
<p>That’s the square of the two-norm (aka the Euclidean norm).</p>
340,264
<p>Given that</p> <p>$L\{J_0(t)\}=\dfrac{1}{\sqrt{s^2+1}}$</p> <p>where $J_0(t)=\sum\limits_{n=0}^{\infty}\dfrac{(-1)^n}{(n!)^2}\left(\dfrac{t}{2}\right)^{2n}$,</p> <p>find the Laplace transform of $tJ_0(t)$.</p> <p>$L\{tJ_0(t)\}=\;?$</p>
azimut
61,691
<p><strong>Hint:</strong></p> <p>Show that if $y^2\equiv 2\mod p^n$, there is a solution of $z^2\equiv 2\mod p^{n+1}$ with $z\equiv y\mod p^n$ (this technique is called Hensel lift).</p>
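<p>To see the lift in action for a concrete prime (taking $p=7$, where $3^2\equiv 2\pmod 7$; the prime is chosen only for illustration), a brute-force search confirms that every solution mod $7^n$ is the reduction of a solution mod $7^{n+1}$:</p>

```python
p = 7

def sqrt2_mod(modulus):
    # all z with z^2 = 2 (mod modulus), by brute force
    return sorted(z for z in range(modulus) if (z * z - 2) % modulus == 0)

sols = {n: sqrt2_mod(p ** n) for n in (1, 2, 3)}

# every solution y mod p^n lifts: some z mod p^(n+1) reduces to y
lifts = all(any(z % p ** n == y for z in sols[n + 1])
            for n in (1, 2) for y in sols[n])
```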
1,230,159
<p>Where can I find a complete proof to the fact that the integral closure of $\mathbb{Z}$ in $\mathbb{Q}(i)$ is $\mathbb{Z}[i]$ (the Gaussian integers are the integral closure of $\mathbb{Z}$ in the Gaussian rationals)? For such a seemingly standard fact, I can not seem to find a complete proof of this anywhere. Yes, I am aware that this question has been asked on math.stackexchange before, but there was no reference to a complete proof, nor was a complete proof ever supplied. Any help would be appreciated, thanks.</p>
user26857
121,097
<p>If $z\in\mathbb Q[i]$ is integral over $\mathbb Z$, then it's integral over $\mathbb Z[i]$. But $\mathbb Z[i]$ is a UFD, so it's integrally closed. It follows $z\in\mathbb Z[i]$. (Recall or prove that $\mathbb Q[i]$ is the field of fractions of $\mathbb Z[i]$.)</p> <p>Conversely, for $z\in\mathbb Z[i]$, $z=m+in$, $m,n\in\mathbb Z$, you can easily see that $z^2-2mz+m^2+n^2=0$, so $z$ is integral over $\mathbb Z$.</p>
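<p>The monic relation in the second paragraph is an identity, since $z^2=(m^2-n^2)+2imn$ and $2mz=2m^2+2imn$; a small numeric spot-check (the ranges are arbitrary) confirms it:</p>

```python
def witness(m, n):
    # value of z^2 - 2 m z + (m^2 + n^2) at z = m + n i
    z = complex(m, n)
    return z * z - 2 * m * z + (m * m + n * n)

residuals = [witness(m, n) for m in range(-5, 6) for n in range(-5, 6)]
```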
1,211,978
<p>I cannot find the roots of the characteristic equation to get a solution. I only know the basic way to solve these equations. I factored out an $r^2$.</p> <p>$2r^5-7r^4+12r^3-8r^2 = 0$</p> <p>$r^2(2r^3-7r^2+12r-8) = 0$</p>
Pieter21
170,149
<p>Check Wolfram alpha for further factorization.</p> <p><a href="https://www.wolframalpha.com/input/?i=2x%5E3-7x%5E2%2B12x-8%3D0&amp;lk=4&amp;num=2" rel="nofollow">https://www.wolframalpha.com/input/?i=2x%5E3-7x%5E2%2B12x-8%3D0&amp;lk=4&amp;num=2</a></p> <p>Are you sure you have all signs right? Also the other answer?!</p>
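<p>One way to see why no nice factorisation appears (and why the sign check is a fair question): the rational root theorem candidates for $2r^3-7r^2+12r-8$ all fail, so the cubic factor has no rational root. A standard-library check:</p>

```python
from fractions import Fraction

def cubic(r):
    return 2 * r**3 - 7 * r**2 + 12 * r - 8

# rational root theorem: candidates are +-(divisor of 8)/(divisor of 2)
candidates = {Fraction(sign * num, den)
              for num in (1, 2, 4, 8) for den in (1, 2) for sign in (1, -1)}
rational_roots = sorted(c for c in candidates if cubic(c) == 0)
```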
729,444
<p>Let $l_1 = [1,\cdots,n]$ and $l_2 = [randint(1,n)_1,\cdots,randint(1,n)_m]$ be two lists, where $randint(1,n)_i\neq randint(1,n)_j \,\,\, \forall i\neq j$ and $n&gt;m$. How can I find the number of elements $x\in l_1$ to select such that the probability that a selected $x$ lies in $l_2$ is $1/2$? I'm trying to use the birthday paradox but I can't get it to work.</p> <p>$randint(x,y)$ picks a random number between $x$ and $y$.</p>
hmakholm left over Monica
14,366
<p>A proof sketch could be:</p> <p><em>1. Every (nonzero) vector is an eigenvector.</em> Let $v\ne 0$ and suppose $Tv$ is not a multiple of $v$. Then $v$ and $Tv$ are linearly independent; extend $\langle v,Tv\rangle$ to a basis $\langle v, Tv, v_3,v_4,\ldots,v_n\rangle$. By assumption $T$ has the same matrix representation $M$ in this basis and in the basis $\langle v,v+Tv,v_3,v_4,\ldots,v_n\rangle$. But that means that the first column of $M$ is simultaneously $(0,1,0,\ldots,0)^{\mathsf t}$ and $(-1,1,0,\ldots,0)^{\mathsf t}$, which is absurd.</p> <p><em>2. All eigenvalues are the same.</em> Since every vector is an eigenvector, there exists an eigenbasis. Therefore $M$ is diagonal. It can only be invariant under permutations of the basis vectors if all of the diagonal entries are equal..</p> <p>Therefore $T$ must be scalar multiplication by the common eigenvalue.</p>
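<p>Step 1 can be made concrete in dimension $2$ (the matrix $T=\operatorname{diag}(1,2)$ and the vector $v=(1,1)$ are arbitrary choices for illustration): the bases $\langle v,Tv\rangle$ and $\langle v,v+Tv\rangle$ produce exactly the two first columns claimed above, while a scalar matrix is represented identically in both.</p>

```python
from fractions import Fraction

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

def representation(T, b1, b2):
    # matrix of T in the basis whose columns are b1 and b2
    P = [[b1[0], b2[0]], [b1[1], b2[1]]]
    return mat_mul(mat_inv(P), mat_mul(T, P))

F = Fraction
T = [[F(1), F(0)], [F(0), F(2)]]         # not a scalar matrix
v, Tv = (F(1), F(1)), (F(1), F(2))       # Tv is not a multiple of v
w = (v[0] + Tv[0], v[1] + Tv[1])         # v + Tv

M1 = representation(T, v, Tv)            # first column (0, 1)^t
M2 = representation(T, v, w)             # first column (-1, 1)^t

S = [[F(3), F(0)], [F(0), F(3)]]         # scalar: 3 * identity
N1 = representation(S, v, Tv)
N2 = representation(S, v, w)
```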
2,532,280
<p>If an N×N (N≥3) Hermitian matrix <strong>A</strong> meets the following conditions: </p> <ol> <li><strong>A</strong> is positive semi-definite (not positive definite, i.e. <strong>A</strong> has at least M zero eigenvalues, where M is a given parameter with 1≤M≤N-1).</li> <li>The sum along each off diagonal is 0, and the main diagonal elements are non-negative, as shown in the figure (with N=4 as an example).</li> </ol> <p>What, then, is the general solution for <strong>A</strong>?</p> <p>For example, a particular solution for <strong>A</strong> is $$ \begin{pmatrix} I_{M'} &amp; 0 \\ 0 &amp; 0 \\ \end{pmatrix} $$ where M≤N-M'≤N-1. This is just one particular solution; I wonder what the general solution is under these two conditions.</p> <p><a href="https://i.stack.imgur.com/biXfS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/biXfS.png" alt="enter image description here"></a></p>
gen-ℤ ready to perish
347,062
<p>$$ Q(x) = 4x^2 + (5k+3)x + \left(2k^2-1\right) = 0 \\ Q(x) = Ax^2 + Bx + C = 0 \\ $$</p> <p>For the zeroes of the quadratic $Q(x)$ to be the same in magnitude but opposite in sign, then $Q(x)$ must be symmetrical about the axis $x=0$. The key here is to know that the axis of symmetry of a parabola is $x=-b/(2a)$.</p> <p>This implies that</p> <p>$$\begin{align} \frac{-B}{2A} &amp;= 0 \\ -B &amp;= 0 \\ B &amp;= 0 \\ 5k+3 &amp;= 0 \\ 5k &amp;= -3 \\ k &amp;= -\frac35 \\ \end{align}$$</p>
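<p>A quick exact-arithmetic confirmation (standard-library <code>fractions</code> only): with $k=-3/5$ the linear coefficient vanishes, so the root sum $-B/A$ is $0$, and the product $C/A$ is negative, giving the two real roots $\pm\sqrt7/10$.</p>

```python
from fractions import Fraction

k = Fraction(-3, 5)
A = Fraction(4)
B = 5 * k + 3            # vanishes for this k
C = 2 * k**2 - 1

root_sum = -B / A        # Vieta: r1 + r2
root_product = C / A     # Vieta: r1 * r2  (negative => opposite signs)
discriminant = B**2 - 4 * A * C
```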
293,341
<p>My apologies if this question is more appropriate for mathisfun.com, but I can only get so far reading about combinatorics and set theory before the interlocking logic becomes totally blurred. If this is a totally fundamental concept, feel free just to name it so I can read and understand the math myself.</p> <p>So the goal is to minimize repetition of questions on a quiz to avoid (or really to slow down) the creation of a master key. This is for a client and I've explained that to make this truly realistic the number of questions in the master pool would need to be huge, but I want to show them the math behind their idea.</p> <p>So they suggested having a 20 question pool with a given set being a 5-member subset. I figured out that the total number of unique quizzes <span class="math-container">$\binom{20}{5}$</span> would be <span class="math-container">$\frac{20!}{5!(20-5)!}$</span> or 15504 unique quizzes. But I know that most of those quizzes will be near identical and that it won't take as long for cheaters to see all 20 questions to make the key. To prove this to myself (without knowing the math), I simplified the total combinations to <span class="math-container">$\binom{4}{3}$</span>, like so:</p> <p>{a, b, c, d} = { {a,b,c}; {a,b,d}; {b,c,d}; {a,c,d} }</p> <p>And I see that it only takes seeing any 2 quizzes to see all 4 members of the master set. So knowing that the number of combinations (binomial coefficient!) is not equivalent to number of unique appearances of the master-set, I'd like to know the actual math involved to show the client that while they have a ton of quizzes, it only takes <span class="math-container">$x$</span> quizzes to know all members.</p> <p>Thanks as always.</p> <h2>Addendum</h2> <p>A bit more research has introduced me to the NP-complete problem known as Exact Cover, which would be (if I'm reading it right) a precise set of subsets which have a union equal to the original master-set.
I just want to clarify that this constraint of perfect overlap is not necessary for my question, only the minimum number of subsets that would result in a union that has all master-set members, regardless of repetition, in order to demonstrate how many subsets are needed to know the original set (with the assumption that the seeker of the master-set knows the total membership count). I tweaked my micro-experiment from <span class="math-container">$\binom{4}{3}$</span> to <span class="math-container">$\binom{4}{2}$</span> resulting in 6 combinations and the ability to derive the master-set no longer being possible with a specific number of arbitrary subsets. Instead I get:</p> <p>{a, b, c, d} = { {ab} ; {ac} ; {ad} ; {bc} ; {bd} ; {cd} }</p> <p>which could derive the master set using the first three (<span class="math-container">$a$</span>) groups, or the exact cover of <span class="math-container">${ {a,b}; {c,d} }$</span>. This has me thinking that the minimum subsets needed to derive the original set is equal to the number of subsets where any given member occurs (so in this case the 3 subsets containing <span class="math-container">$a$</span>), but this doesn't match up to the <span class="math-container">$\binom{4}{3}$</span> case, where it can be found with 2 subsets.
The next obvious solution (to me) is that the minimum number needed to derive the master-set (blindly) is half of the total number of subsets, but I would really want a link to a proof or a simple-english demonstration on how a pool of 20 questions would require 7752 subsets to know with certainty that all 20 members have appeared at least once.</p> <p>Again, thanks.</p> <h2>Question as Probability:</h2> <p>I have a bag of Scrabble tiles and I know the following:</p> <ol> <li>The bag contains 20 tiles,</li> <li>Each tile is unique (no two tiles have the same character),</li> <li>The tiles come from a much larger (and otherwise irrelevant) set of an expansion set including numbers and non-Roman alphabet characters, thus removing any advantage of knowing that this set of 20 comes from a larger-but-limited set (in other words, the characters are only informative to each other and I may get all Klingon or a mix of Chinese and Tamil. I should not assume anything about the set other than what is in the bag).</li> </ol> <p>I am allowed to perform the following steps in the order given as many times as I want:</p> <ol> <li>Pull out 5 tiles,</li> <li>Write down the characters drawn,</li> <li>Return the tiles to the bag.</li> <li>Lather, Rinse, Repeat.</li> </ol> <p>Also: I have magical fingers that prevent me from drawing the same set of 5 twice, thus reducing the number of draws from infinity to 15504 possible draws.</p> <p>My objective is to have all 20 characters written down eventually and then stop drawing characters.</p> <p>I know that the total number of unique combinations I could draw is <span class="math-container">$\binom{20}{5}$</span> which is 15504. I also know that the minimum draws required is equal to <span class="math-container">$\lceil{20}/{5}\rceil$</span>, which would be very lucky. What I am interested in is the maximum number of draws required to reveal all 20 characters.</p>
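<p>The two micro-experiments above can be checked by exhaustive enumeration, and the same covering argument settles the worst case for the full setting: the largest family of 5-question quizzes whose union still misses a question consists of all $\binom{19}{5}=11628$ quizzes avoiding one fixed question, so $11629$ distinct draws force every question to appear at least once. A standard-library sketch:</p>

```python
from itertools import combinations
from math import comb

master = frozenset('abcd')
triples = [frozenset(c) for c in combinations(master, 3)]
pairs = [frozenset(c) for c in combinations(master, 2)]

# any two distinct 3-subsets of a 4-set already cover the whole set ...
triples_always_cover = all(a | b == master
                           for a, b in combinations(triples, 2))
# ... but two 2-subsets can fail to
pairs_can_miss = any(a | b != master for a, b in combinations(pairs, 2))

total_quizzes = comb(20, 5)      # 15504 possible 5-question quizzes
worst_case = comb(19, 5) + 1     # draws that guarantee full coverage
```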
Red Banana
25,805
<p>It is a good thing to try different books, in my experience as a self-learner I found that a lot of traditionally aclaimed books are incredibly hard, there's always an author that can help you to grasp core ideas easily, for example, in calculus I read a little of the <a href="https://rads.stackoverflow.com/amzn/click/com/0312185480" rel="nofollow noreferrer" rel="nofollow noreferrer"><em>calculus made easy</em></a> by Silvanus Thompson.</p> <p>Springer has a lot of titles on proofs, and there are also some books you should look:</p> <blockquote> <p><strong>Bridge to Abstract Mathematics: Mathematical Proof and Structures</strong> - <em>Ronald P. Morash</em></p> <ul> <li>This is a really nice book, it made a lot of things on set theory, logic and proofs a little easier to me.</li> </ul> <p><strong>How to Solve it</strong> - <em>George Pólya</em></p> <ul> <li>This is a classic book, I guess you must be aquainted with it.</li> </ul> <p><strong>HOW TO PROVE IT: A Structured Approach</strong> - <em>Daniel J.Velleman</em></p> <ul> <li>I'm about to read this one, it seems to have a nice purpose.</li> </ul> <p><strong>Linear Algebra As an Introduction to Abstract Mathematics</strong> - <em>Isaiah Lankham, Bruno Nachtergaele &amp; Anne Schilling</em></p> <ul> <li>I dont remember how I found this book but perhaps it may be of help to your case,I found it in my library and it seems to be a mix of Linear Algebra and proofs, it seems nice for your case.</li> </ul> </blockquote> <p>There's a class of books that may be also helpful for your case, the transitions to advanced mathematics:</p> <blockquote> <p><strong>Mathematical Proofs: A Transition to Advanced Mathematics</strong> - <em>Gary Chartrand &amp; Albert D. Polimeni &amp; Ping Zhang</em></p> <p><strong>A Transition to Advanced Mathematics: A Survey Course</strong> - <em>William Johnston &amp; Alex M. 
McAllister</em></p> <p><strong>A Transition to Advanced Mathematics</strong> - <em>Douglas Smith &amp; Maurice Eggen &amp; Richard St. Andre</em></p> </blockquote> <p>Also some references on real analysis:</p> <blockquote> <p><strong>A First Course in Mathematical Analysis</strong> - David Alexander Brannan</p> <ul> <li>I really loved this book, as the author says in the preface: <em>Changes in the school curriculum over the last few decades have resulted in many students finding Analysis very difficult. The author believes that Analysis nowadays has an unjustified reputation for being hard, caused by the traditional university approach of providing students with a highly polished exposition in lectures and associated textbooks that make it impossible for the average learner to grasp the core ideas.</em></li> </ul> <p><strong>Introduction to Real Analysis</strong> - Robert G. Bartle &amp; Donald R. Sherbert</p> <ul> <li>This is also a good one, a little harder than the first one but still nice.</li> </ul> </blockquote> <p>I was thinking about this answer and I reminded of one thing that I took a lot of time to understand: the concept of the <em>best book</em>. The best book is the one that makes you learn. In an analysis course, most people will tell you to read Rudin's book, for calculus they'll say you to read Apostol's book, this is kinda invalid and it really depends on your background in mathematics, it's good to remember that such books were written in different circumstances and that the authors presume that the readers know some things. I'm not discrediting these books, they're nice, but it will be <em>much</em> better if you learn with something easier and then try to read these hard books later. Always try to find books that are compatible with your mind, this will make your mathematical experience a lot better. You can also try to read topics spoken by different people: Having trouble with one author's definition of sequence? 
Try to read the definition by another author; I'm doing this with the books I mentioned: when something is hard in Sherbert's book I read what Brannan has to say about it.</p> <p>I hope it helps.</p>
2,155,652
<p>I have a question regarding this proof my professor gave us. For the third property, I understand the proof up to the sentence "If $x \in E'$, i.e., x is a limit point of E." Well, I also understand that if x is not in F, then x can't be a limit point since F is closed. After that, I don't fully understand it. Could someone please give me an explanation?</p> <p>Thank you in advance.</p> <p><a href="https://i.stack.imgur.com/yECP4.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yECP4.jpg" alt="enter image description here"></a></p>
fleablood
280,126
<p>Before we even start, notice that $E\subset F$ implies $E' \subset F'$, because....</p> <p>If every neighborhood of $x$ contains a point of $E$, that very same point of $E$ is also a point of $F$, so every neighborhood of $x$ contains a point of $F$.</p> <p>Now, that comment you don't understand (and to tell the truth, I didn't either from the notes) doesn't matter. If $x\in E'$, then either $x\in E \subset F$ or $x\in E' \subset F' \subset F$.</p>
2,414,011
<p>In my recent works in PDEs, I'm interested in finding a family of cut-off functions satisfying following properties:</p> <p>For each $\varepsilon &gt;0$, find a function ${\psi _\varepsilon } \in {C^\infty }\left( \mathbb{R} \right)$ which is a non-decreasing function on $\mathbb{R}$ such that:</p> <ol> <li>${\psi _\varepsilon }\left( x \right) = \left\{ {\begin{array}{*{20}{l}} {0 \mbox{ if } x \le \varepsilon ,}\\ {1\mbox{ if } x \ge 2\varepsilon ,} \end{array}} \right.$ and</li> <li>The function $x \mapsto x{\psi _\varepsilon }'\left( x \right)$ is bounded uniformly with respect to $\varepsilon$ as $\varepsilon \to 0$.</li> </ol> <p>The main problem here is ${\psi _\varepsilon }'\left( x \right) \to \infty $ for some $x \in \left( {\varepsilon ,2\varepsilon } \right)$ as $\varepsilon \to 0$. I also start with <a href="https://en.wikipedia.org/wiki/Non-analytic_smooth_function" rel="nofollow noreferrer">this function</a> to define explicitly ${\psi _\varepsilon }$ in the interval $\left( {\varepsilon ,2\varepsilon } \right)$ but my attempts to adjust the referenced function failed. </p> <p>Can you find an example of these cut-off functions?</p> <p>Thanks in advanced.</p>
username
948,485
<p>Take the construction proposed <a href="https://math.stackexchange.com/a/4365567/948485">here</a> and use <span class="math-container">$\psi_\epsilon(x)=f(\frac x\epsilon)$</span>. Then, <span class="math-container">$\psi^\prime_\epsilon $</span> is supported on <span class="math-container">$(\epsilon,2\epsilon)$</span>, and bounded by <span class="math-container">$(1+s)\epsilon^{-1}$</span> for any <span class="math-container">$s&gt;0$</span> you wish a priori. As a consequence, <span class="math-container">$0\leq x\psi^\prime_\epsilon &lt; 2\epsilon(1+s)\epsilon^{-1} = 2(1+s)$</span>, uniformly.</p>
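<p>The linked construction is not reproduced here, but any smooth transition works; the standard bump-quotient $f(t)=g(t-1)/\bigl(g(t-1)+g(2-t)\bigr)$ with $g(s)=e^{-1/s}$ for $s&gt;0$ and $g(s)=0$ for $s\le0$ is used below purely as an illustration. The point of the numerical check is the substitution $x\,\psi_\varepsilon'(x)=\tfrac{x}{\varepsilon}f'(\tfrac{x}{\varepsilon})$, whose maximum is $\sup_t t f'(t)$, independent of $\varepsilon$:</p>

```python
import math

def g(s):
    return math.exp(-1.0 / s) if s > 0 else 0.0

def f(t):
    # smooth, nondecreasing: 0 for t <= 1, 1 for t >= 2
    return g(t - 1.0) / (g(t - 1.0) + g(2.0 - t))

def psi(eps, x):
    return f(x / eps)

def max_x_dpsi(eps, n=2000):
    # max of x * psi_eps'(x) over the transition zone (eps, 2*eps),
    # derivative estimated by central differences
    h = 1e-6 * eps
    best = 0.0
    for k in range(1, n):
        x = eps * (1.0 + k / n)
        d = (psi(eps, x + h) - psi(eps, x - h)) / (2.0 * h)
        best = max(best, x * d)
    return best

bounds = [max_x_dpsi(e) for e in (1.0, 0.1, 0.001)]
```

<p>The bound is essentially identical for every $\varepsilon$, which is the uniformity asked for in condition 2.</p>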
809,516
<p>I need to calculate </p> <p>$$\lim_{x \to \infty} \frac{((2x)!)^4}{(4x)! ((x+5)!)^2 ((x-5)!)^2}.$$</p> <p>Even though I used Stirling's approximation and Wolfram Alpha, they did not help.</p> <p>How can I calculate this?</p> <p>My expectation of the output is about $0.07$.</p> <p>Thank you in advance.</p>
Leucippus
148,155
<p>The limit as given in the problem is equal to zero. This is shown by the following. </p> <p>Using $\Gamma(1+x) = x \Gamma(x)$ the expression to evaluate is seen as \begin{align} \phi_{n} &amp;= \frac{\Gamma^{4}(2n+1) }{ \Gamma(4n+1) \Gamma^{2}(n+6) \Gamma^{2}(n-4)} \\ &amp;= \frac{n^{2}(n-1)^{2}(n-2)^{2}(n-3)^{2}(n-4)^{2}}{(n+1)^{2}(n+2)^{2}(n+3)^{2}(n+4)^{2}(n+5)^{2}} \ \frac{\Gamma^{4}(2n+1)}{\Gamma(4n+1) \Gamma^{4}(n+1)}. \end{align} Now, by using Stirling's approximation, namely, \begin{align} \Gamma(n+1) \approx \sqrt{2 \pi} \ n^{n+1/2} \ e^{-n} \end{align} this expression becomes \begin{align} \phi_{n} &amp;= \frac{\left(1-\frac{1}{n}\right)^{2}\left(1-\frac{2}{n}\right)^{2} \left(1-\frac{3}{n}\right)^{2}\left(1-\frac{4}{n}\right)^{2}}{ \left(1+\frac{1}{n}\right)^{2} \left(1+\frac{2}{n}\right)^{2} \left(1+\frac{3}{n}\right)^{2} \left(1+\frac{4}{n}\right)^{2} \left(1+\frac{5}{n}\right)^{2}} \ \sqrt{ \frac{2}{\pi n} } \end{align} Taking the limit as $n \rightarrow \infty$ leads to \begin{align} \lim_{n \rightarrow \infty} \frac{\Gamma^{4}(2n+1) }{ \Gamma(4n+1) \Gamma^{2}(n+6) \Gamma^{2}(n-4)} = 0. \end{align}</p>
324,385
<p>I'm going through Wallace Clarke Boyden's <a href="http://books.google.com/books?id=OhMAAAAAYAAJ&amp;pg=PA71#v=onepage&amp;q&amp;f=false" rel="noreferrer">A First Book in Algebra</a>, and there's a section on finding the square root of a perfect square polynomial, eg. <span class="math-container">$4x^2-12xy+9y^2=(2x-3y)^2$</span>. He describes an algorithm for finding the square root of such a polynomial when it's not immediately apparent, but despite my best efforts, I find the language indecipherable. Can anyone clarify the process he's describing? The example I'm currently wrestling with is <span class="math-container">$x^6-2x^5+5x^4-6x^3+6x^2-4x+1$</span>.</p> <p>It's a lot of language to parse, but if anyone wants to take a stab at it, here's the original text:</p> <blockquote> <p><em>To find the square root of a polynomial, arrange the terms with reference to the powers of some number; take the square root of the first term of the polynomial for the first term of the root, and subtract its square from the polynomial; divide the first term of the remainder by twice the root found for the next term of the root, and add the quotient to the trial divisor; multiply the complete divisor by the second term of the root, and subtract the product from the remainder. If there is still a remainder, consider the root already found as one term, and proceed as before.</em></p> </blockquote> <p>I did some hunting online but didn't turn up anything useful. Is it possible this is an outdated method that's been abandoned for something cleaner?</p>
Steven Alexis Gregory
75,410
<p>If you know that the polynomial is a perfect square, then the square root algorithm works. For example</p> <hr> <p>$$\sqrt{x^6 - 6x^5 + 17x^4 - 36x^3 + 52x^2 - 48x + 36}$$</p> <hr> <p>\begin{array}{lcccccccccccccc} &amp;&amp;x^3 &amp;&amp; -3x^2 &amp;&amp; +4x &amp;&amp; -6\\ &amp;&amp;---&amp;---&amp;---&amp;---&amp;---&amp;---&amp;---\\ x^3 &amp;|&amp; x^6 &amp; -6x^5 &amp; +17x^4 &amp; -36x^3 &amp; +52x^2 &amp; -48x &amp; +36\\ &amp;&amp; x^6\\ &amp;&amp;---&amp;---&amp;---\\ &amp;&amp;&amp; -6x^5 &amp; +17x^4 \\ (2)x^3-3x^2 &amp;|&amp; &amp;-6x^5 &amp;+9x^4\\ &amp;&amp;&amp;---&amp;---&amp;---&amp;---\\ &amp;&amp;&amp;&amp; +8x^4 &amp;-36x^3 &amp;+52x^2\\ (2)x^3-(2)3x^2+ 4x &amp;|&amp;&amp;&amp;+8x^4 &amp;-24x^3 &amp;+16x^2\\ &amp;&amp;&amp;&amp;---&amp;---&amp;---&amp;---&amp;---\\ &amp;&amp;&amp;&amp; &amp;-12x^3 &amp;+36x^2 &amp;-48x &amp;+36\\ (2)x^3-(2)3x^2+ (2)4x - 6&amp;|&amp;&amp;&amp;&amp; -12x^3 &amp;+36x^2 &amp;-48x &amp;+36\\ &amp;&amp;&amp;&amp;&amp;---&amp;---&amp;---&amp;---\\ \end{array}</p> <p>$\text{STEP}\;1.\qquad$ Compute the square root of the leading term (x^6) and put it, (x^3), in the two $\phantom{\text{STEP}\;1.}\qquad$ places shown.</p> <p>$\text{STEP}\;2.\qquad$ Subtract and bring down the next two terms.</p> <p>$\text{STEP}\;3.\qquad$ Double the currently displayed quotient $(x^3 \to (2)x^3)$ Then add a new term, $X$, $\phantom{\text{STEP}\;3.}\qquad$ to the quotient such that $X(2x^3 + X)$ will remove the first term, $(-6x^5)$, in the $\phantom{\text{STEP}\;3.}\qquad$ current partial remainder.</p> <p>$\text{STEP}\;4.\qquad$ Repeat steps $2$ and $3$ until done.</p>
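<p>The quoted procedure mechanises directly. The sketch below (coefficient lists, highest power first, exact rational arithmetic; it assumes the input really is a perfect square with a perfect-square integer leading coefficient) recovers both the worked example and the polynomial from the question:</p>

```python
from fractions import Fraction
from math import isqrt

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = [Fraction(0)] * (n - len(p)) + list(p)
    q = [Fraction(0)] * (n - len(q)) + list(q)
    return [a - b for a, b in zip(p, q)]

def strip(p):
    i = 0
    while i < len(p) and p[i] == 0:
        i += 1
    return p[i:]

def poly_sqrt(p):
    """Boyden's procedure: take the square root of the leading term,
    then repeatedly divide the leading term of the remainder by twice
    the leading term of the root found so far."""
    p = [Fraction(c) for c in p]
    half = (len(p) - 1) // 2
    root = [Fraction(isqrt(p[0].numerator))]   # sqrt of the leading term
    while True:
        padded = root + [Fraction(0)] * (half + 1 - len(root))
        rem = strip(poly_sub(p, poly_mul(padded, padded)))
        if not rem:
            return padded
        exp = len(rem) - 1               # exponent of the remainder's LT
        pos = 2 * half - exp             # slot of the next root coefficient
        root += [Fraction(0)] * (pos - len(root)) + [rem[0] / (2 * root[0])]
```

<p>For the polynomial in the question it returns the coefficients of $x^3-x^2+2x-1$, and squaring that polynomial reproduces the input, so the "consider the root already found as one term" loop terminates exactly when the remainder vanishes.</p>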
3,376,443
<p>A bin has 2 white balls and 3 black balls. You play a game as follows: you draw balls one at a time without replacement. Every time you draw a white ball, you win a dollar, but every time you draw a black ball, you lose a dollar. You can stop the game at any time. Devise a strategy for playing this game which results in an expected profit.</p> <p>According to my reasoning the best strategy is not to play the game at all: the expected value at every extraction remains the same and it's always negative <span class="math-container">$$E[X]=\left(\frac{2}{5}\right)*1 +\left(\frac{3}{5}\right)*(-1) =-\frac{1}{5}$$</span> So since I can treat it as a sum of expectations of random variables, the best strategy is not to play…so the best expected profit is zero dollars, right?</p>
Sasha Kozachinskiy
547,528
<p>Assume that we have <span class="math-container">$a$</span> white balls and <span class="math-container">$b$</span> black balls. We can choose between two things: to play or not to play. If we choose not to play, our profit is <span class="math-container">$0$</span>. Now assume that we choose to play. With probability <span class="math-container">$\frac{a}{a + b}$</span> we gain one dollar and we are left with <span class="math-container">$a - 1$</span> white balls and <span class="math-container">$b$</span> black balls. Alternatively, with probability <span class="math-container">$\frac{b}{a + b}$</span> we lose one dollar and we are left with <span class="math-container">$a$</span> white balls and <span class="math-container">$b - 1$</span> black balls. </p> <p>This gives the following recurrence formula for <span class="math-container">$P_{a,b}$</span>, the best expected profit we can get for <span class="math-container">$a$</span> white balls and <span class="math-container">$b$</span> black balls: <span class="math-container">$$P_{a,b} = \max\left\{0, \frac{a}{a+b}\left(1 + P_{a-1,b}\right) + \frac{b}{a + b} \left(-1 + P_{a,b - 1}\right)\right\}$$</span></p> <p>Now, if we have <span class="math-container">$a$</span> white balls and no black balls, then obviously the best we can do is to win <span class="math-container">$a$</span> dollars. On the other hand, if there are no white balls, then we should not play at all. 
I.e., <span class="math-container">$P_{a,0} = a, P_{0, b} = 0$</span>.</p> <p>Using these formulas, we obtain:</p> <p><span class="math-container">$$P_{0,0} = P_{0,1} = P_{0,2}=P_{0,3} = 0,$$</span> <span class="math-container">$$P_{1,0} = 1, P_{2,0} = 2,$$</span> <span class="math-container">$$P_{1,1} = \max\left\{0, \frac{1}{2}(1 + P_{0,1}) + \frac{1}{2}(-1 + P_{1,0})\right\} = \frac{1}{2}, $$</span> <span class="math-container">$$P_{2,1} = \max\left\{0, \frac{2}{3}(1 + P_{1,1}) + \frac{1}{3}(-1 + P_{2,0})\right\} = \frac{4}{3},$$</span> <span class="math-container">$$P_{1,2} = \max\left\{0, \frac{1}{3}(1 + P_{0,2}) + \frac{2}{3}(-1 + P_{1,1})\right\} = 0,$$</span> <span class="math-container">$$P_{1,3} = \max\left\{0, \frac{1}{4}(1 + P_{0,3}) + \frac{3}{4}(-1 + P_{1,2})\right\} = 0,$$</span> <span class="math-container">$$P_{2,2} = \max\left\{0, \frac{1}{2}(1 + P_{1,2}) + \frac{1}{2}(-1 + P_{2,1})\right\} = \frac{2}{3},$$</span> <span class="math-container">$$P_{2,3} = \max\left\{0, \frac{2}{5}(1 + P_{1,3}) + \frac{3}{5}(-1 + P_{2,2})\right\} = \frac{1}{5},$$</span></p> <p>i.e., on average we can gain <span class="math-container">$1/5$</span> dollars in the initial game. And the strategy is as follows. Look at the numbers above. Assume that we are left with <span class="math-container">$a$</span> white balls and <span class="math-container">$b$</span> black balls. If <span class="math-container">$P_{a,b} &gt; 0$</span>, draw a ball, otherwise stop.</p>
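<p>The recurrence translates directly into a short memoized program; a Python sketch (exact arithmetic via fractions; names are mine):</p>

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def P(a, b):
    """Best expected profit with a white and b black balls remaining."""
    if a == 0:
        return Fraction(0)            # only losses remain: stop playing
    if b == 0:
        return Fraction(a)            # only wins remain: draw every ball
    n = a + b
    play = Fraction(a, n) * (1 + P(a - 1, b)) + Fraction(b, n) * (-1 + P(a, b - 1))
    return max(Fraction(0), play)     # play only if it beats stopping

print(P(2, 3))   # 1/5
```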
3,376,443
<p>A bin has 2 white balls and 3 black balls. You play a game as follows: you draw balls one at a time without replacement. Every time you draw a white ball, you win a dollar, but every time you draw a black ball, you lose a dollar. You can stop the game at any time. Devise a strategy for playing this game which results in an expected profit.</p> <p>According to my reasoning the best strategy is not to play the game at all: the expected value at every extraction remains the same and it's always negative <span class="math-container">$$E[X]=\left(\frac{2}{5}\right)*1 +\left(\frac{3}{5}\right)*(-1) =-\frac{1}{5}$$</span> So since I can treat it as a sum of expectations of random variables, the best strategy is not to play…so the best expected profit is zero dollars, right?</p>
A.J.
654,406
<p>There may be a more elegant and/or general way to do this, but here's a brute-force approach.</p> <p>Consider the following strategy: Draw balls until you've drawn more white balls than black, or if that's no longer possible, until you've drawn both white balls.</p> <p>Under this strategy, only the following outcomes are possible:</p> <p><span class="math-container">$\begin{align}\\[1ex] \text{W:} \quad &amp; \text{win \$1} &amp;&amp; p=\dfrac{2}{5}\\[1ex] \text{BWW:} \quad &amp; \text{win \$1} &amp;&amp; p= \dfrac{3}{5}\cdot\dfrac{1}{2}\cdot\dfrac{1}{3}=\dfrac{1}{10}\\[6ex] \text{BWBW:} \quad &amp; \text{break even} &amp;&amp; p= \dfrac{3}{5}\cdot\dfrac{1}{2}\cdot\dfrac{2}{3}\cdot\dfrac{1}{2}=\dfrac{1}{10}\\[1ex] \text{BBWW:} \quad &amp; \text{break even} &amp;&amp; p= \dfrac{3}{5}\cdot\dfrac{1}{2}\cdot\dfrac{2}{3}\cdot\dfrac{1}{2}=\dfrac{1}{10}\\[6ex] \text{BWBBW:} \quad &amp; \text{lose \$1} &amp;&amp; p= \dfrac{3}{5}\cdot\dfrac{1}{2}\cdot\dfrac{2}{3}\cdot\dfrac{1}{2}\cdot1=\dfrac{1}{10}\\[1ex] \text{BBWBW:} \quad &amp; \text{lose \$1} &amp;&amp; p= \dfrac{3}{5}\cdot\dfrac{1}{2}\cdot\dfrac{2}{3}\cdot\dfrac{1}{2}\cdot1=\dfrac{1}{10}\\[1ex] \text{BBBWW:} \quad &amp; \text{lose \$1} &amp;&amp; p= \dfrac{3}{5}\cdot\dfrac{1}{2}\cdot\dfrac{1}{3}\cdot1\cdot1=\dfrac{1}{10}\\[1ex] \\ \end{align}$</span></p> <p>Then your expected winnings are</p> <p><span class="math-container">$\left( \dfrac{2}{5} + \dfrac{1}{10} \right) * (1) + \left( \dfrac{1}{10} + \dfrac{1}{10} + \dfrac{1}{10}\right) * (-1) = \boxed{\dfrac{1}{5}}$</span></p>
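<p>This case analysis can also be checked exhaustively in a few lines; a Python sketch (my own addition, not part of the answer) that averages the strategy's profit over the ten equally likely draw orders:</p>

```python
from fractions import Fraction
from itertools import permutations

orders = set(permutations("WWBBB"))        # 10 distinct, equally likely orders
total = Fraction(0)
for order in orders:
    profit, whites = 0, 0
    for ball in order:
        profit += 1 if ball == "W" else -1
        whites += (ball == "W")
        # stop once ahead (more whites than blacks), or once both whites are drawn
        if profit > 0 or whites == 2:
            break
    total += Fraction(profit, len(orders))
print(total)   # 1/5
```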
3,376,443
<p>A bin has 2 white balls and 3 black balls. You play a game as follows: you draw balls one at a time without replacement. Every time you draw a white ball, you win a dollar, but every time you draw a black ball, you lose a dollar. You can stop the game at any time. Devise a strategy for playing this game which results in an expected profit.</p> <p>According to my reasoning the best strategy is not to play the game at all: the expected value at every extraction remains the same and it's always negative <span class="math-container">$$E[X]=\left(\frac{2}{5}\right)*1 +\left(\frac{3}{5}\right)*(-1) =-\frac{1}{5}$$</span> So since I can treat it as a sum of expectations of random variables, the best strategy is not to play…so the best expected profit is zero dollars, right?</p>
leonbloy
312
<p>Draw the possible paths. The rectangles represent the number of white/black balls. Transitions to the left correspond to a white ball extracted (plus one). The numbers in blue are the probabilities. </p> <p>It's clear that when we have zero white balls we should stop, and when we have zero black balls we should continue until extracting all balls. Hence we can readily label the "terminal" states with their final scores (in red). In the <span class="math-container">$(W=1,B=2)$</span> case there is a tie; either choice gives the same expected score.</p> <p><a href="https://i.stack.imgur.com/NwcaX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NwcaX.jpg" alt="enter image description here"></a></p> <p>Then we label the internal states, going upwards. For each state, we have the option of stopping there (computed score in red, top) or of continuing (computed expected score in red, bottom). We strike out the worse score and keep the better one. We go up until the initial state. </p> <p><a href="https://i.stack.imgur.com/6Rjx2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Rjx2.jpg" alt="enter image description here"></a></p> <p>The expected score is then <span class="math-container">$1/5$</span>. </p> <p>The diagram also shows the optimal strategy: in each (internal) state, we choose to stop if the struck-out number is at the bottom; otherwise we continue.</p>
2,094,596
<p>I'm asking myself why indeterminate forms arise, and why limits that apparently give us indeterminate forms can be resolved with some arithmetic tricks. Why $$\begin{equation*} \lim_{x \rightarrow +\infty} \frac{x+1}{x-1}=\frac{+\infty}{+\infty} \end{equation*} $$</p> <p>and if I do a simple operation,</p> <p>$$\begin{equation*} \lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(1-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(1-\frac{1}{x})}=1 \end{equation*} $$</p> <p>I understand the logic of the process, but I can't understand why we get different results by "not" changing anything.</p>
Ben Grossmann
81,360
<p>Really, this has to do with the definition of continuity. The function $Q(x,y) = x/y$ is continuous except at $y = 0$. Thus, whenever $f(t) \to L_f$ and $g(t) \to L_g \neq 0$, we have $$ \lim_{t \to a} \frac{f(t)}{g(t)} = \lim_{t \to a} Q(f(t),g(t)) = \lim_{(x,y) \to (L_f,L_g)}Q(x,y) = Q(L_f,L_g) $$ However, $Q$ is <strong>not</strong> continuous at $(0,0)$. In particular, $\lim_{(x,y) \to (0,0)}Q(x,y)$ does not exist. We therefore state that $Q(0,0) = 0/0$ is an <strong>indeterminate form</strong>.</p> <p>Similarly, $Q$ is "discontinuous at $\infty$", since $\lim_{(x,y) \to (\infty, \infty)}Q(x,y)$ does not exist. So, $\infty/\infty$ is an indeterminate form.</p>
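<p>A quick sympy check of this point (my own illustration, not part of the answer): approaching $(0,0)$ along different lines gives $Q$ different limiting values, which is exactly why $0/0$ carries no single value, while the limit from the question evaluates fine despite its $\infty/\infty$ form:</p>

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
Q = x / y   # continuous wherever y != 0

# two different paths into (0,0) give two different values:
print(sp.limit(Q.subs({x: t, y: t}), t, 0))      # 1
print(sp.limit(Q.subs({x: t, y: 2*t}), t, 0))    # 1/2

# the limit from the question, resolved without any indeterminacy:
print(sp.limit((x + 1) / (x - 1), x, sp.oo))     # 1
```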
715,361
<p>Let $\Omega$ be a bounded domain and $f_n\in L^2(\Omega)$ be a sequence such that $$\int_\Omega f_nq\operatorname{dx}\leq C&lt;\infty\qquad \text{for all}\quad q\in H^1(\Omega),\ \|q\|_{H^1(\Omega)}\leq1,\ n\in\mathbb{N}.\quad (1) $$ Is it then possible to conclude that $$ \sup_{n\in\mathbb{N}}\|f_n\|_{L^2(\Omega)}\leq C. $$</p> <p>Here, $H^1(\Omega)$ denotes the Sobolev space $H^1(\Omega)$.</p> <p>Obviously, this statement would be true if we were to replace (1) with $$\int_\Omega f_nq\operatorname{dx}\leq C&lt;\infty\qquad \text{for all}\quad q\in L^2(\Omega),\ \|q\|_{L^2(\Omega)}\leq1,\ n\in\mathbb{N}. $$</p> <p>and maybe the dense and compact embedding $H^1(\Omega)\hookrightarrow L^2(\Omega)$ is of help, but I'm not sure.</p> <p>Edit: By now I'm pretty sure that this statement doesn't hold. We only have a bound in the dual of $H^1(\Omega)$. But so far I have not managed to put together a conclusive argument!</p>
Mark Bennet
2,906
<p>Here's another way to look at it. You can also recast the equation as follows, without cancelling anything or multiplying or dividing by anything which might be zero: $$0=\frac {x-4}{x-1}-\left(\frac {1-4}{x-1}\right)=\frac {x-1}{x-1} $$Now do you see what is going on?</p>
262,173
<p>Consider $x^2 + y^2 = r^2$. Then take the square of this to give $(x^2 + y^2)^2 = r^4$. Clearly, from this $r^4 \neq x^4 + y^4$. </p> <p>But consider: let $x=a^2, y = b^2$ and $r = c^2$. Sub this into the first eqn to get $(a^2)^2 + (b^2)^2 = (c^2)^2$. $x = a^2 =&gt; a = |x|,$ and similarly for $b.$</p> <p>Now put this in to give $|x|^4 + |y|^4 = r^4 =&gt; (-x)^4 + (-y)^4 = r^4 $ or $ (x)^4 + (y)^4 = r^4,$ both of which give $ x^4 + y^4 = r^4$. Where is the flaw in this argument?</p> <p>Many thanks.</p>
Did
6,179
<p>The fact that $x=a^2$ falls quite far short of implying that $a=|x|$ (second paragraph).</p>
716,036
<blockquote> <p>Suppose that a curve $\mathbf\gamma$ in $\mathbb R^3$ has constant strictly positive curvature function $\mathbf\kappa(s)$, and constant non-zero torsion function $\mathbf\tau(s)$. Prove that the curve is a helix.</p> </blockquote> <p>I think it is easier to work backward here. First I can show that a helix satisfies the two conditions on curvature and torsion. Second, I want to use the fundamental theorem of curves to show that a curve satisfying these two conditions must be a helix. However, there is a gap here. The fundamental theorem requires the curvature and torsion functions to uniquely identify a curve up to rigid motion. However, this question only gives a qualitative description of the two functions. How can I bridge this gap? Thank you! </p>
Ted Shifrin
71,348
<p>You have it. If $\kappa=a/c^2$ and $\tau=b/c^2$, where $c=\sqrt{a^2+b^2}$, then your curve is congruent to (differs by a rigid motion from) the circular helix $\alpha(t)=(a\cos t, a\sin t, bt)$. Given $\kappa$ and $\tau$, you can determine $a$ and $b$ by algebra: since $\kappa^2+\tau^2=1/c^2$, we get $a=\kappa/(\kappa^2+\tau^2)$ and $b=\tau/(\kappa^2+\tau^2)$.</p>
1,001,320
<p>I was wondering how to do an inequality problem involving QM-AM-GM-HM.</p> <p>Question: For positive $a$, $b$, $c$ such that $\frac{a}{2}+b+2c=3$, find the maximum of $\min\left\{ \frac{1}{2}ab, ac, 2bc \right\}$.</p> <p>I was thinking maybe apply AM-GM, however, I'm not sure what to plug in. Any help would be appreciated, thanks!</p>
Community
-1
<p><strong>Hint:</strong></p> <p>$$\frac{\frac{a}{2}+b}{2}\ge\sqrt{\frac{ab}{2}}\iff \left(\frac{\frac{a}{2}+b}{2}\right)^2\ge\frac{ab}{2}$$</p> <p>$$\frac{2c+b}{2}\ge\sqrt{2bc}\iff \left(\frac{2c+b}{2}\right)^2 \ge 2bc$$</p> <p>$$\frac{\frac{a}{2}+2c}{2}\ge\sqrt{ac}\iff \left(\frac{\frac{a}{2}+2c}{2}\right)^2\ge ac$$</p> <p><strong>Note:</strong> $$\frac{x+y}{2}\ge\sqrt{xy}$$ Equality occurs when $x=y$.</p>
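<p>A brute-force numerical check of where this hint leads (my own addition, not part of the hint): grid-searching the constraint set suggests the maximum of $\min\left\{\frac{1}{2}ab, ac, 2bc\right\}$ is $1$, attained near $a=2$, $b=1$, $c=\frac{1}{2}$, where the three pairs in the hint's inequalities are all equal:</p>

```python
# grid search over the constraint a/2 + b + 2c = 3 with a, b, c > 0
best, arg = 0.0, None
n = 200
for i in range(1, n):
    for j in range(1, n):
        a = 6.0 * i / n                  # a/2 sweeps (0, 3)
        b = 3.0 * j / n
        c = (3.0 - a / 2 - b) / 2        # forced by the constraint
        if c <= 0:
            continue
        m = min(a * b / 2, a * c, 2 * b * c)
        if m > best:
            best, arg = m, (a, b, c)
print(best, arg)   # best is just under 1, near (2, 1, 0.5)
```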
59,828
<p>Is there a way to display the variable name instead of its value? For example, <code>varname = 1; function[varname];</code> should output <code>varname</code> instead of <code>1</code>.</p>
eldo
14,254
<pre><code>varname = 1; SetAttributes[ShowName, HoldAll] ShowName[name_] := Row[{"The name is ", HoldForm @ name, " and its value is ", ReleaseHold @ name}] ShowName @ varname </code></pre> <blockquote> <p>The name is varname and its value is 1</p> </blockquote> <p>Or simply</p> <pre><code>HoldForm @ varname </code></pre> <blockquote> <p>varname</p> </blockquote>
4,513,368
<p>The following question seems quite simple, but I am having a hard time proving it rigorously.</p> <p>Consider <span class="math-container">$n\in\mathbb{N}$</span> vertices, for example <span class="math-container">$\{v_1,\ldots, v_n\}$</span>. I have some further information on these vertices, namely that each of them has at least one edge connecting it to itself or to one of the other vertices.</p> <p>Now I would like to prove that there must be a subset of <span class="math-container">$\{v_1,\ldots, v_n\}$</span> which forms a connected graph. Even the full set <span class="math-container">$\{v_1,\ldots, v_n\}$</span> would be allowed.</p> <p>One more thing to consider: I call a subset of <span class="math-container">$\{v_1,\ldots, v_n\}$</span> that contains only one element connected if that vertex is connected to itself by an edge.</p> <p>Ideally, I would like to prove this by contradiction: &quot;Suppose no subset forms a connected graph.&quot; However, I cannot find the contradiction that follows from this false assumption.</p> <p>Any help is appreciated! Thank you in advance!</p>
User5678
632,875
<p>A graph <span class="math-container">$G:=(V,E)$</span> is connected if there exists a path from any node to any other node in the graph.</p> <p>The main property <span class="math-container">$P$</span> of the graph <span class="math-container">$G$</span> you are looking at is that every vertex is associated with at least one edge, possibly to itself. Specifically,</p> <p><span class="math-container">$$P\equiv\forall v \in V\; \exists e_{ij} \in E: i=v \text{ or } j=v$$</span></p> <p>You also state that a single node with an edge to itself counts as connected, which is consistent with the definition of a connected graph.</p> <p>Your conjecture <span class="math-container">$C$</span> is that there is at least one connected <em>subgraph</em> in <span class="math-container">$G$</span>:</p> <p><span class="math-container">$$C \equiv \exists V_H\subseteq V: G_H:=(V_H,E_H)\;\;\text{ is connected}$$</span></p> <p>I'll consider two cases.</p> <p><strong>Case 1: Trivial Case</strong></p> <p>If <span class="math-container">$\exists v \in V: e_{vv} \in E$</span> then <span class="math-container">$G_H=(\{v\},\{e_{vv}\})$</span> is a connected subgraph; therefore, <span class="math-container">$C$</span> is true.</p> <p><strong>Case 2: No loops</strong></p> <p>If there are no self loops, then every node is connected to at least one other node. Therefore,</p> <p><span class="math-container">$\exists v,w \in V: e_{vw} \in E \implies G_H=(\{v,w\},\{e_{vw}\})$</span> is a connected subgraph.</p> <p>Therefore, <span class="math-container">$C$</span> is true <span class="math-container">$\square$</span>.</p>
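<p>The two cases are constructive, so they translate into a short sketch (hypothetical Python; the function name and representation are mine): given a vertex set and edge set satisfying the property, a connected subgraph can always be produced explicitly.</p>

```python
def connected_subgraph(vertices, edges):
    """Given that every vertex meets at least one edge, return a small
    connected subgraph (vertex set, edge set) of G = (vertices, edges)."""
    for (u, v) in edges:
        if u == v:                        # Case 1: a self-loop is connected
            return ({u}, {(u, v)})
    u, v = next(iter(edges))              # Case 2: any single edge is connected
    return ({u, v}, {(u, v)})

print(connected_subgraph({1, 2, 3}, {(1, 2), (3, 3)}))   # ({3}, {(3, 3)})
```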
1,022,380
<p>In the link below (formulas (34)-(40)) there are some definitions of the Dirac delta function in terms of other functions, such as the Airy function, the Bessel function of the first kind, Laguerre polynomials, etc.</p> <p><a href="http://mathworld.wolfram.com/DeltaFunction.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/DeltaFunction.html</a></p> <p>Is there any definition of the Dirac delta function in terms of the sech (hyperbolic secant) or cosh (hyperbolic cosine) functions?</p> <p>Please include your references. Thanks!</p>
Ross Millikan
1,827
<p>The delta "function" $\delta(x)$ is supposed to be zero as $x$ gets large in either direction, so basing one on $\cosh(x)$ is hard because $\cosh(x) \to +\infty$ as $x$ gets large in either direction. That makes $\operatorname{sech} (x)$ a good candidate. Since $\int_{-\infty}^{+\infty} \operatorname{sech} x dx=\pi$, you can use $\lim_{k \to \infty}\frac k \pi \operatorname{sech} (kx)$ as a delta "function".</p>
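<p>A quick numerical sanity check of this family (my own addition): each $f_k(x)=\frac{k}{\pi}\operatorname{sech}(kx)$ has total integral $1$, and as $k$ grows the mass concentrates at $x=0$, as a delta sequence should:</p>

```python
import math

def mass(k, radius, n=50000):
    """Midpoint-rule integral of (k/pi)*sech(k*x) over |x| < radius."""
    lo, hi = -50.0 / k, 50.0 / k          # sech(50) ~ 1e-21: tails negligible
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        if abs(x) < radius:
            total += (k / math.pi) / math.cosh(k * x) * dx
    return total

for k in (1, 10, 100):
    print(k, round(mass(k, float("inf")), 4), round(mass(k, 0.1), 4))
# the first column stays ~1.0; the second climbs toward 1.0 as k grows
```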
494,227
<blockquote> <p>If I had $6$ feet of fencing, could I fence a region that has area $3$ square feet?</p> </blockquote> <p>So, I must decide whether there is a closed curve in the plane, of length $6$ feet, that bounds a region of area $3$ square feet. </p> <p>How can I prove or disprove this? </p>
user71352
71,352
<p>Recall that the isoperimetric inequality states that the length of a closed curve $L$ and the enclosed area $A$ satisfy $4\pi A\le L^{2}$.</p> <p>So if you could find such a curve in the plane then by the isoperimetric inequality $12\pi=4\pi(3)\le6^{2}=36$.</p> <p>But $\pi&gt;3$ so $12\pi&gt;36$. Contradiction. Hence no such curve exists.</p>
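<p>The arithmetic here is a one-liner to verify (my own addition): the isoperimetric bound caps the enclosable area at $L^2/(4\pi)=9/\pi\approx 2.86$ square feet, short of the required $3$:</p>

```python
import math

L = 6.0
max_area = L**2 / (4 * math.pi)   # isoperimetric inequality: A <= L^2 / (4*pi)
print(max_area)                   # about 2.8648
print(max_area < 3)               # True: 6 feet of fence cannot enclose 3 sq ft
```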