| qid | question | author | author_id | answer |
|---|---|---|---|---|
3,289,401 | <p>As an example in MATLAB</p>
<pre><code>[U,S,V]=svd(randn(3,2)+1j*randn(3,2))
assert(isreal(V(1,:)))
</code></pre>
<p>Why is the first row of V purely real?</p>
| Community | -1 | <p>In Cartesian coordinates it's <span class="math-container">$(x_1,\dots,x_{n+1})\to (\frac{Rx_1}{R-x_{n+1}},\dots,\frac {Rx_n}{R-x_{n+1}},0)$</span>.</p>
<p>See <a href="https://en.m.wikipedia.org/wiki/Stereographic_projection" rel="nofollow noreferrer">here</a>.</p>
|
2,968,235 | <p><span class="math-container">$\log_3 4$</span> and <span class="math-container">$\log_7 10$</span>: which of these two logarithms is greater?</p>
<p>I figured out that both are between <span class="math-container">$1$</span> and <span class="math-container">$2$</span>, then between <span class="math-container">$1$</span> and <span class="math-container">$1.5$</span>. And then <span class="math-container">$\log_34$</span> is greater than <span class="math-container">$1.25$</span>, and <span class="math-container">$\log_710$</span> is smaller than <span class="math-container">$1.25$</span>. However, that method doesn't work for every example, and I wonder if there's an easier way to solve this? </p>
| Michael Rozenberg | 190,319 | <p>We'll show that <span class="math-container">$$\log_34>\log_710$$</span> or, equivalently,
<span class="math-container">$$4>3^{\log_710},$$</span> which is true because <span class="math-container">$$\log_710<1.2$$</span> and <span class="math-container">$$4>3^{1.2}.$$</span></p>
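<p>The two estimates in this argument are easy to spot-check numerically. The following Python sketch is my addition, not part of the original answer:</p>

```python
import math

# Numeric spot-check of the answer's two estimates:
#   log_7(10) < 1.2  and  4 > 3**1.2,  hence  log_3(4) > log_7(10).
log3_4 = math.log(4, 3)    # ≈ 1.2619
log7_10 = math.log(10, 7)  # ≈ 1.1833

assert log7_10 < 1.2
assert 4 > 3 ** 1.2        # 3**1.2 ≈ 3.737
assert log3_4 > log7_10
print(log3_4, log7_10)
```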
|
321,916 | <p>In order to define the Lebesgue integral, we have to develop some measure theory. This takes some effort in the classroom, after which we need the additional effort of defining the Lebesgue integral (which also adds a layer of complexity). Why do we do it this way? </p>
<p>The first question is to what extent are the notions different. I believe that a bounded measurable function can have a non-measurable "area under graph" (it should be doable by transfinite induction), but I am not completely sure, so treat it as a part of my question. (EDIT: I was very wrong. The two notions coincide and the argument is very straightforward, see Nik Weaver's answer and one of the comments).</p>
<p>What are the advantages of the Lebesgue integration over area-under-graph integration? I believe that behaviour under limits may be indeed worse. Is it indeed the main reason? Or maybe we could develop integration with this alternative approach?</p>
<p>Note that if a non-negative function has a measurable area under graph, then the area under the graph is the same as the Lebesgue integral by <a href="https://en.wikipedia.org/wiki/Fubini%27s_theorem" rel="noreferrer">Fubini's theorem</a>, so the two integrals shouldn't behave very differently.</p>
<p>EDIT: I see that my question might be poorly worded. By "area under the graph", I mean the measure of the set of points <span class="math-container">$(x,y) \in E \times \mathbb{R}$</span> where <span class="math-container">$E$</span> is a measure space and <span class="math-container">$y \leq f(x)$</span>. I assume that <span class="math-container">$f$</span> is non-negative, but this is also assumed in the standard definition of the Lebesgue integral. We extend this to an arbitrary function by looking at the positive and the negative part separately.</p>
<p>The motivation for my question concerns mostly teaching. It seems that the struggle to define measurable functions, understand their behaviour, etc. might be really alleviated if directly after defining measure, we define integral without introducing any additional notions.</p>
| cgodfrey | 113,296 | <p>As others have pointed out, the Lebesgue integral is still computing the area under the graph. I'd just like to point out how it computes that area in a different way than the Riemann integral. For the sake of example let <span class="math-container">$f: I \to \mathbb{R}_{\geq 0}$</span> be a non-negative function on the unit interval <span class="math-container">$I := [0,1]$</span>. </p>
<p>The Riemann-sum recipe for the integral is: take a "partition" <span class="math-container">$I = \bigcup_{i=0}^{N-1} [t_i, t_{i+1}]$</span> of the <em>domain</em> <span class="math-container">$I$</span>, where <span class="math-container">$0 = t_0 < t_1 < t_2 < \dots < t_N = 1$</span>. Then on each sub-interval <span class="math-container">$[t_i, t_{i+1}]$</span> of the partition, approximate <span class="math-container">$f$</span> by a constant <span class="math-container">$c_i$</span> (usually either <span class="math-container">$\inf_{t \in [t_i, t_{i+1}]} f(t)$</span> for the "lower sum" or <span class="math-container">$\sup_{t \in [t_i, t_{i+1}]} f(t)$</span> for the "upper sum"). The resulting approximation of the integral is <span class="math-container">$\sum_i c_i (t_{i+1} - t_i)$</span>. Finally we take limits over finer and finer partitions of <span class="math-container">$I$</span>.</p>
<p>On the other hand, the "Lebesgue-sum" recipe is: take a partition of the <em>co-domain</em> <span class="math-container">$\mathbb{R}$</span>, for instance by choosing <span class="math-container">$N \gg 0$</span> and writing <span class="math-container">$\mathbb{R} = \bigcup_{n \in \mathbb{Z}} [\frac{n}{2^N}, \frac{n+1}{2^N})$</span>. Now on each of the <em>pre-images</em> <span class="math-container">$f^{-1}([\frac{n}{2^N}, \frac{n+1}{2^N}]) \subset I$</span> approximate <span class="math-container">$f$</span> by a constant. In this case there is a fixed choice (presumably made by Lebesgue): we use <span class="math-container">$\frac{n}{2^N}$</span>. Note that the <span class="math-container">$f^{-1}([\frac{n}{2^N}, \frac{n+1}{2^N}))$</span> partition the interval <span class="math-container">$I$</span>, but since there's no reason for these sets to be intervals we can't calculate their length via subtraction, like we did with "<span class="math-container">$(t_{i+1} - t_i)$</span>". Hence the whole concept of a measure. If <span class="math-container">$\mu$</span> is the Lebesgue measure on <span class="math-container">$I$</span>, our approximation of the integral is <span class="math-container">$\sum_n \frac{n}{2^N} \mu(f^{-1}([\frac{n}{2^N}, \frac{n+1}{2^N})))$</span>. Lastly we take limits over finer and finer partitions of <span class="math-container">$\mathbb{R}$</span> (e.g. let <span class="math-container">$N \to \infty$</span>).</p>
<p><em>Vastly</em> oversimplifying things:</p>
<ul>
<li>Riemann integral: vertical rectangles.</li>
<li>Lebesgue integral: horizontal (unions of) rectangles.</li>
</ul>
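<p>The two recipes above can be contrasted in a few lines of code. This is only an illustrative sketch of my own: the test function <span class="math-container">$f(x)=x^2$</span> on <span class="math-container">$[0,1]$</span> and the grid-based approximation of Lebesgue measure are assumptions for the demo, not part of the answer. Both sums converge to <span class="math-container">$\int_0^1 x^2\,dx = 1/3$</span>.</p>

```python
import math

def riemann_lower_sum(f, n):
    # partition the DOMAIN: 0 = t_0 < t_1 < ... < t_n = 1, uniform here;
    # approximate f on [t_i, t_{i+1}] by its value at the left endpoint
    # (a valid lower sum because f below is increasing)
    return sum(f(i / n) * (1 / n) for i in range(n))

def lebesgue_sum(f, big_n, samples=100_000):
    # partition the CODOMAIN into slabs [m/2**N, (m+1)/2**N); approximate the
    # measure of each pre-image by the fraction of grid points landing in it
    h = 1 / samples
    total = 0.0
    for i in range(samples):
        y = f((i + 0.5) * h)
        m = math.floor(y * 2 ** big_n)   # index of the slab containing y
        total += (m / 2 ** big_n) * h    # constant m/2**N times measure h
    return total

f = lambda x: x * x
print(riemann_lower_sum(f, 1000))  # ≈ 1/3 from below
print(lebesgue_sum(f, 12))         # ≈ 1/3 from below
```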
|
1,296,230 | <p>This is from Lang's <em>Algebra</em> (page 251)</p>
<blockquote>
<p><strong>Proposition 6.11</strong> <em>Let <span class="math-container">$E/F$</span> be a normal field extension. Let <span class="math-container">$E^G$</span> be the fixed field of <span class="math-container">$\operatorname{Aut}(E/F)$</span>. Then, <span class="math-container">$E^G$</span> is purely inseparable over <span class="math-container">$F$</span> and <span class="math-container">$E$</span> is separable over <span class="math-container">$E^G$</span>.</em></p>
</blockquote>
<p>And below is a corollary of this theorem:</p>
<blockquote>
<p><strong>Corollary 6.12.</strong> <em>Let <span class="math-container">$F$</span> be a field with characteristic <span class="math-container">$p\neq 0$</span> such that <span class="math-container">$F^p=F$</span>. Then, every algebraic extension <span class="math-container">$E$</span> of <span class="math-container">$F$</span> is separable and <span class="math-container">$E^p=E$</span>.</em></p>
</blockquote>
<p>How is this a corollary of the above theorem?</p>
<p>Lang states that "Every algebraic extension is contained in a normal extension, so Proposition 6.11 can be applied to get this", but how?</p>
<p>Let <span class="math-container">$E$</span> be an algebraic extension of <span class="math-container">$F$</span>. Then, there is a field extension <span class="math-container">$L$</span> of <span class="math-container">$E$</span> such that <span class="math-container">$L/F$</span> is normal.</p>
<p>Let <span class="math-container">$\phi\colon F\to F:a\mapsto a^p$</span>.</p>
<p>Then, by hypothesis, <span class="math-container">$\phi$</span> is a field automorphism of <span class="math-container">$F$</span>.</p>
<p>Then, <span class="math-container">$\phi$</span> can be extended to a field monomorphism <span class="math-container">$\sigma \colon \bar F \to \bar F$</span>, but since <span class="math-container">$\phi$</span> is not fixing <span class="math-container">$F$</span>, I don't get what this has to do with Proposition 6.11.</p>
<p>These are what all I know. How do I proceed to prove the corollary?</p>
| reuns | 276,986 | <p>Let <span class="math-container">$a \in \overline{F}$</span>, <span class="math-container">$F(a) \cong F[x]/(h(x))$</span>.</p>
<p>If <span class="math-container">$F(a)/F$</span> is not separable then <span class="math-container">$\gcd(h ,h') \ne 1$</span>, thus <span class="math-container">$h' = 0$</span> (since otherwise <span class="math-container">$\gcd(h,h')$</span> would be a nontrivial factor of <span class="math-container">$h$</span> of smaller degree, contradicting the irreducibility of <span class="math-container">$h$</span>) and <span class="math-container">$h(x) = g(x^p)$</span> for some <span class="math-container">$g \in F[x]$</span>.</p>
<p>If also <span class="math-container">$F = F^p$</span>, we can take <span class="math-container">$f(x) \in F[x]$</span> such that <span class="math-container">$f^p(x) = g(x)$</span> (<span class="math-container">$p$</span>-th power of the coefficients) so that <span class="math-container">$h(x) =f^p(x^p)=( f(x))^p$</span>.</p>
<p>But then <span class="math-container">$h(a)=(f(a))^p = 0$</span> implies <span class="math-container">$f(a) = 0$</span>, and since <span class="math-container">$\deg(f) = \frac{\deg(h)}{p}$</span> it contradicts that <span class="math-container">$h$</span> was the minimal polynomial of <span class="math-container">$a$</span>. </p>
<p>Hence <span class="math-container">$F(a)/F$</span> must be separable.</p>
|
1,251,537 | <p>$f:[a,b] \to R$ is continuous and $\int_a^b{f(x)g(x)dx}=0$ for every continuous function $g:[a,b]\to R$ with $g(a)=g(b)=0$. Must $f$ vanish identically?</p>
<hr>
<p>Using integration by parts I got the form
$\int_a^b g(x)f(x)-g'(x)F(x)\,dx=0$, where $F'(x)=f(x)$.</p>
| Crostul | 160,300 | <p>The answer is yes. To prove it, define, for $n$ large enough,
$$g_n:\left[ a+\frac{1}{n} , b-\frac{1}{n}\right] \longrightarrow \Bbb{R} \quad \quad g_n(x)=f(x)$$
and then define $f_n:[a,b]\longrightarrow \Bbb{R}$ extending $g_n$ in such a way that $f_n(a)=0=f_n(b)$ and the $f_n$ are uniformly bounded.</p>
<p>Then $$0=\int_a^bf(x)f_n(x) dx \to \int_a^bf(x)f(x) dx$$
so that $\int f^2 =0$ implies that $f$ is $0$.</p>
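<p>The convergence $\int_a^b f f_n \to \int_a^b f^2$ can be illustrated numerically. In the sketch below, $f(x)=\cos x$ on $[0,1]$ and the linear edge ramps of width $1/n$ are my own assumed choices of extension; the code only illustrates the limit, not the hypothesis of the problem (for which the limit forces $\int f^2=0$).</p>

```python
import math

a, b = 0.0, 1.0
f = math.cos                      # assumed example; any continuous f works

def f_n(x, n):
    # equals f on [a + 1/n, b - 1/n]; linear ramps to 0 at the endpoints,
    # so f_n(a) = f_n(b) = 0 and |f_n| <= max |f| uniformly in n
    lo, hi = a + 1.0 / n, b - 1.0 / n
    if x < lo:
        return f(lo) * (x - a) * n
    if x > hi:
        return f(hi) * (b - x) * n
    return f(x)

def midpoint(g, steps=100_000):
    # midpoint-rule approximation of the integral over [a, b]
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

target = midpoint(lambda x: f(x) ** 2)                    # ∫ f^2
for n in (10, 100, 1000):
    err = abs(midpoint(lambda x, n=n: f(x) * f_n(x, n)) - target)
    print(n, err)                                         # errors shrink with n
```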
|
1,251,537 | <p>$f:[a,b] \to R$ is continuous and $\int_a^b{f(x)g(x)dx}=0$ for every continuous function $g:[a,b]\to R$ with $g(a)=g(b)=0$. Must $f$ vanish identically?</p>
<hr>
<p>Using integration by parts I got the form:
$\int_a^bg(x)f(x)-g'(x)F(x)=0$. Where $F'(x)=f(x)$.</p>
| celtschk | 34,930 | <p>Assume that for some $x_0\in(a,b)$ we have $f(x_0)>0$. Since $f$ is assumed to be continuous, this means that there is an $\epsilon>0$ such that $f(x)>0$ in $(x_0-\epsilon,x_0+\epsilon)$. Now let
$$g(x)=\begin{cases}
0 & x\notin(x_0-\epsilon,x_0+\epsilon)\\
x-x_0+\epsilon & x_0-\epsilon \le x \le x_0\\
x_0-x+\epsilon & x_0 < x \le x_0+\epsilon
\end{cases}$$
It is easy to see that $g(x)$ is continuous. Now $f(x)g(x)>0$ for $x_0-\epsilon < x < x_0+\epsilon$ and $=0$ otherwise. Therefore clearly $\int_a^b f(x)g(x)\,\mathrm dx > 0$, in contradiction to the claim that this integral vanishes for every continuous $g(x)$.</p>
<p>With an analogous argument we also get that $f(x_0)<0$ is not possible.</p>
<p>Thus we have proven that $f(x)=0$ for every $x\in (a,b)$. However, since $f$ is continuous, it follows that $f(a)=f(b)=0$ as well.</p>
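<p>The tent function in this argument is concrete enough to check numerically. The sketch below uses assumed sample data of my own ($f(x)=\sin \pi x$ on $[0,1]$, $x_0=0.5$, $\epsilon=0.2$) to confirm that $\int_a^b f g > 0$ for the constructed $g$:</p>

```python
import math

a, b = 0.0, 1.0
x0, eps = 0.5, 0.2
f = lambda x: math.sin(math.pi * x)   # continuous with f(x0) > 0

def g(x):
    # the piecewise-linear tent from the answer, supported on (x0-eps, x0+eps)
    if not (x0 - eps < x < x0 + eps):
        return 0.0
    return x - x0 + eps if x <= x0 else x0 - x + eps

n = 100_000
h = (b - a) / n
integral = sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h
print(integral)   # strictly positive, so the hypothesis rules out f(x0) > 0
```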
|
162,630 | <p>Let $\mathbb{G}$ be a reductive group defined over a number field $K$, let $Z$ be its center, and let $\mathbb{A}:=\mathbb{A}_K$ be the ring of adeles of $K$. Reasonably, we care about the $\mathbb{G}(\mathbb{A})$-representation: $L^2(\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A}))$. It naturally contains the sub-representations $$L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash\mathbb{G}(\mathbb{A}),\omega):=\{f\in L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash\mathbb{G}(\mathbb{A}))|\,\,\,|f|\in L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A})), \forall z\in Z(\mathbb{A}), g \in \mathbb{G}(\mathbb{A})\,\,\, f(zg)=\omega(z)f(g)\} $$</p>
<p>for every $\omega$ a unitary character of $Z(\mathbb{A})$. In fact $L^2(\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A}))$ is the direct integral of these subrepresentations.</p>
<p>I understand that it is generally desirable to deal with $L^2(\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A}))$ by decomposing it into the cuspidal part, which is going to be discrete, and the Eisenstein part, which is (I think!) continuous. In order to define this cuspidal part, people define $L^2_0(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G} (\mathbb{A}),\omega)$ to be the subrepresentation of $L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G} (\mathbb{A}),\omega)$ consisting of all functions $f$ such that for every $K$-parabolic subgroup $\mathbb{P}$ of $\mathbb{G}$, with unipotent radical $N$, the integral $\int_{N(K)\backslash N(\mathbb{A})} f(gn)dn$ is $0$ for almost all $g\in\mathbb{G}(\mathbb{A})$.</p>
<p>The definition of a cuspidal representation is then an irreducible unitary subrepresentation of $L^2_0(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G} (\mathbb{A}), \omega)$ for some central character $\omega$.</p>
<p>I feel that I really do not understand the intuition behind the condition with the parabolic subgroups. Parabolic subgroups and their unipotent radicals seem like very formal constructions to me, but I bet there is some geometric intuition that I'm missing. Is there some geometry that should be in the back of my mind that explains the condition $\int_{N(K)\backslash N(\mathbb{A})} f(gn)dn=0$? How does this condition relate to being zeros at the cusps via the classic definition of cusp-forms? </p>
| paul garrett | 15,629 | <p>First, one should be a little careful about saying that $L^2(G_k\backslash G_\mathbb A)$ has $L^2(Z_\mathbb A G_k\backslash G_\mathbb A,\omega)$ inside it... since appearing as direct integral "integrands" is not a very strong commitment. If $G$ has non-compact center, $L^2(G_k\backslash G_\mathbb A)$ will have no discrete spectrum at all... which gives the wrong impression (by Selberg et al's proof of various forms of "Weyl's Law" for arithmetic subgroups, namely, that the bulk of the spectrum is cuspidal, hence, discrete).</p>
<p>That Gelfand condition about integrals over all unipotent radicals being $0$ is far from obviously "the right thing". It is less a leap to understand that instead of "cusps" we should think of "parabolics" (etc).</p>
<p>The constant terms along <em>various</em> parabolics correspond to "going to infinity" in the variety of fashions possible in general, in higher-rank groups.</p>
<p>Still, yes, it is mildly amazing that vanishing of <em>all</em> constant terms guarantees discreteness. In general, it is easy to fail to prove this... :)</p>
<p>I think that Y. Colin de Verdiere's argument, cast into general form by Jacquet, that appears in Moeglin-Waldspurger's book, is potentially the clearest in terms of describing the causality, as it shows a somewhat more general thing: the space of square-integrable automorphic forms ($K$-finite, $\mathfrak z$-finite) all of whose various constant terms vanish above some fixed height(s) is already discrete. This follows by proving that the resolvent for Casimir on such a space is compact, which follows by proving a sort of Rellich compactness lemma for an inclusion of a Sobolev $H^1$ into $H^o=L^2$, as appears in Lax-Phillips' book on Automorphic Scattering, about page 204 and following. (The earlier parts are not essential to understanding what's happening just there.)</p>
<p>The rough explanation I've heard, and sometimes repeat, although it doesn't truly explain so much, is that (eventual) vanishing of all constant terms says that the given afm has $0$ "average mass" "at infinity", so that it behaves as though it lived on a compact manifold, where a simpler Rellich lemma would apply (by a smooth partition of unity, and reducing to the essentially elementary case of a product of circles, and Fourier series).</p>
<p>The historical version of "holomorphic cuspform" (also in the Siegel and Hilbert modular cases) played on some good fortune, in some regards. If it seems lucky, you're probably right.</p>
<p>EDIT: in response to comment/query... No, there is no general rubric that says that parabolic subgroups determine spectral features of automorphic forms. Plausibly, in a different universe, the stratification of automorphic $L^2$ could be different. Thus, although the rational-rank-one case was relatively easy to (optimistically) extrapolate from the $SL_2(\mathbb R)$ case, where the cross-sections going out to the point-cusps were elementary objects, all the pseudo-down-to-earth ideas about "going to infinity" and "cusp" that seemed to be decisive for elliptic modular forms, and for Maass' waveforms, rather abruptly not only "fail", but fail qualitatively.</p>
<p>Thus (to my mind) Langlands' earning his spot at IAS in the 1960s, for, among other things, carrying out Selberg's highly-optimistic sketch of automorphic spectral decompositions. A number of important, critical surprises: there're not just two sorts of things, cuspforms and "continuous spectrum", and then a little leftover, constants, but a whole range of things. Yes, as has only been proven in recent years, the discrete spectrum dominates, and the discrete spectrum is dominated by cuspforms. But, first, there are cuspidal-data Eisenstein series, apparently not anticipated by Selberg. But, as is the subtlest part of Langlands' SLN 544, and addressed completely only for $GL_n$, in Moeglin-Waldspurger's 1989 paper, there are many non-constant $L^2$ residues of Eisenstein series, for $GL_n$ at least called Speh forms, because Birgit Speh discovered the corresponding repns of real Lie groups $GL_n(\mathbb R)$.</p>
<p>The "constant term" along a parabolic $P$ is the trivial-character Fourier component along the unipotent radical of $P$. It is not obvious that this shadow of the thing should be important, but, yes, one proves... "the theory of the constant term"... that the aggregate of the constant terms of a $K$-finite, $\mathfrak z$-finite automorphic form determines its asymptotic behavior "at infinity".</p>
<p>Indeed, decompositions along other subgroups are very interesting, especially for number theoretic applications.</p>
<p>However, as it happens, it seems that no other decompositions-along-subgroups adequately distinguish non-compact quotients from compact... and <em>compact</em> quotients have discrete spectrum with respect to their invariant Laplacian, or Laplacian on suitable vector bundles.</p>
<p>Perhaps in a different universe the non-compactness of interesting arithmetic quotients would have been mediated by different sorts of subgroups, but in this universe the collection of all parabolics seems to do the job. Yes, this was not obvious, and Gelfand deserves substantial credit for formulating things this way...</p>
|
162,630 | <p>Let $\mathbb{G}$ be a reductive group defined over a number field $K$, let $Z$ be its center, and let $\mathbb{A}:=\mathbb{A}_K$ be the ring of adeles of $K$. Reasonably, we care about the $\mathbb{G}(\mathbb{A})$-representation: $L^2(\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A}))$. It naturally contains the sub-representations $$L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash\mathbb{G}(\mathbb{A}),\omega):=\{f\in L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash\mathbb{G}(\mathbb{A}))|\,\,\,|f|\in L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A})), \forall z\in Z(\mathbb{A}), g \in \mathbb{G}(\mathbb{A})\,\,\, f(zg)=\omega(z)f(g)\} $$</p>
<p>for every $\omega$ a unitary character of $Z(\mathbb{A})$. In fact $L^2(\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A}))$ is the direct integral of these subrepresentations.</p>
<p>I understand that it is generally desirable to deal with $L^2(\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A}))$ by decomposing it into the cuspidal part, which is going to be discrete, and the Eisenstein part, which is (I think!) continuous. In order to define this cuspidal part, people define $L^2_0(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G} (\mathbb{A}),\omega)$ to be the subrepresentation of $L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G} (\mathbb{A}),\omega)$ consisting of all functions $f$ such that for every $K$-parabolic subgroup $\mathbb{P}$ of $\mathbb{G}$, with unipotent radical $N$, the integral $\int_{N(K)\backslash N(\mathbb{A})} f(gn)dn$ is $0$ for almost all $g\in\mathbb{G}(\mathbb{A})$.</p>
<p>The definition of a cuspidal representation is then an irreducible unitary subrepresentation of $L^2_0(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G} (\mathbb{A}), \omega)$ for some central character $\omega$.</p>
<p>I feel that I really do not understand the intuition behind the condition with the parabolic subgroups. Parabolic subgroups and their unipotent radicals seem like very formal constructions to me, but I bet there is some geometric intuition that I'm missing. Is there some geometry that should be in the back of my mind that explains the condition $\int_{N(K)\backslash N(\mathbb{A})} f(gn)dn=0$? How does this condition relate to being zeros at the cusps via the classic definition of cusp-forms? </p>
| Marc Palm | 10,400 | <p>In addition to Paul Garrett's answer, I address your last paragraph in a special example:</p>
<p>Strong approximation gives a homeomorphism
$SL_2(Z) \backslash H \cong Z(A) GL_2(Q) \backslash GL_2(A) / \prod_p GL_2(Z_p) \times O(2)$.</p>
<p>Let $f$ correspond to $\tilde{f}$. This translates to</p>
<p>$$ \int_{0}^1 f( y + t)\; d t = \int\limits_{N(F)\backslash N(A)} \tilde{f}(ng_y)\; dn.$$</p>
<p>This implies that the zero-th Fourier coefficient vanishes, but that does not imply that the functions vanish at the cusps for Maass forms. It only does for modular forms.</p>
|
746,180 | <p>I'm working through Stephen Abbott's wonderful <em>Understanding Analysis</em> in preparation for entering a math undergrad degree this fall. A personal note about me: Friends and family tell me I tend to be periphrastic; if there's a long-winded, inelegant way of explaining myself, I'll find it. As I work through Abbott's book, I wonder: Are all the steps I'm taking (even to solve simple problems near the beginning of the book) necessary, or is my brain just doing what it always does by finding the most round-about way to do things? So I'd like to have someone critique a simple proof to see if I'm doing something wrong, or if this really is the way things are done in real analysis. </p>
<blockquote>
<p>Exercise 2.2.5. Let $\lfloor x\rfloor$ be the greatest integer less than or equal to $x$. Find $\lim_{n\to \infty} a_n$ and supply proofs if $a_n=\lfloor \frac 1n \rfloor$.</p>
</blockquote>
<p>In the preceding chapter, Abbott has already shown that $\lim_{n \to \infty} \frac 1n =0$, so we can take this as given. Then we note that since $n \lt (n+1)$ $\forall n \in \mathbb N$, we have $\frac 1n \gt \frac 1{n+1} \gt 0$, $\forall n \in \mathbb N$. And since by inspection $a_1 = 1$, we have </p>
<p>$$1\gt \frac 1{n+1} \gt \frac 1{n+2} \gt \frac 1{n+3} \gt \cdots \gt0,$$
so that $a_n = 0$ for $ n \ge2$. Finally, since $|a_n-0|=0$ for $n \ge2$, we must have $|a_n - 0| \lt \epsilon$, $\forall \epsilon \gt 0$ and $n \ge2$. Therefore $\lim_{n \to \infty} a_n=0$.</p>
<p>Is this correct? Have I included any unnecessary steps? It just seems so pathologically nit-picky! And I feel the same way about most of the other exercises in the book. Thanks for your help!</p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>Using <a href="http://en.wikipedia.org/wiki/Trigonometric_substitution" rel="nofollow">Trigonometric Substitution</a> $$x=2\tan\theta$$</p>
|
746,180 | <p>I'm working through Stephen Abbott's wonderful <em>Understanding Analysis</em> in preparation for entering a math undergrad degree this fall. A personal note about me: Friends and family tell me I tend to be periphrastic; if there's a long-winded, inelegant way of explaining myself, I'll find it. As I work through Abbott's book, I wonder: Are all the steps I'm taking (even to solve simple problems near the beginning of the book) necessary, or is my brain just doing what it always does by finding the most round-about way to do things? So I'd like to have someone critique a simple proof to see if I'm doing something wrong, or if this really is the way things are done in real analysis. </p>
<blockquote>
<p>Exercise 2.2.5. Let $\lfloor x\rfloor$ be the greatest integer less than or equal to $x$. Find $\lim_{n\to \infty} a_n$ and supply proofs if $a_n=\lfloor \frac 1n \rfloor$.</p>
</blockquote>
<p>In the preceding chapter, Abbott has already shown that $\lim_{n \to \infty} \frac 1n =0$, so we can take this as given. Then we note that since $n \lt (n+1)$ $\forall n \in \mathbb N$, we have $\frac 1n \gt \frac 1{n+1} \gt 0$, $\forall n \in \mathbb N$. And since by inspection $a_1 = 1$, we have </p>
<p>$$1\gt \frac 1{n+1} \gt \frac 1{n+2} \gt \frac 1{n+3} \gt \cdots \gt0,$$
so that $a_n = 0$ for $ n \ge2$. Finally, since $|a_n-0|=0$ for $n \ge2$, we must have $|a_n - 0| \lt \epsilon$, $\forall \epsilon \gt 0$ and $n \ge2$. Therefore $\lim_{n \to \infty} a_n=0$.</p>
<p>Is this correct? Have I included any unnecessary steps? It just seems so pathologically nit-picky! And I feel the same way about most of the other exercises in the book. Thanks for your help!</p>
| Community | -1 | <p>Put $x = 2 \tan t $, then $dx = 2 \sec^2 t dt $. and $\sqrt{x^2 +4} = \sqrt{ 4 \tan^2 t + 4 } = 2 \sec t$ hence,</p>
<p>$$ \int \frac{dx}{x^2 \sqrt{x^2+4}} = \int \frac{2 \sec^2 t dt}{4 \tan^2 t 2 \sec t} = \frac{1}{4} \int \frac{ \sec t dt }{\tan^2 t} = \frac{1}{4} \int \frac{\frac{1}{\cos t}}{\frac{\sin^2t}{\cos^2 t}} = \frac{1}{4} \int \frac{\cos t dt}{\sin^2 t} = \frac{1}{4} \int \frac{d ( \sin t)}{\sin^2 t} = \frac{-1}{4}\frac{1}{\sin t} + C$$</p>
|
746,180 | <p>I'm working through Stephen Abbott's wonderful <em>Understanding Analysis</em> in preparation for entering a math undergrad degree this fall. A personal note about me: Friends and family tell me I tend to be periphrastic; if there's a long-winded, inelegant way of explaining myself, I'll find it. As I work through Abbott's book, I wonder: Are all the steps I'm taking (even to solve simple problems near the beginning of the book) necessary, or is my brain just doing what it always does by finding the most round-about way to do things? So I'd like to have someone critique a simple proof to see if I'm doing something wrong, or if this really is the way things are done in real analysis. </p>
<blockquote>
<p>Exercise 2.2.5. Let $\lfloor x\rfloor$ be the greatest integer less than or equal to $x$. Find $\lim_{n\to \infty} a_n$ and supply proofs if $a_n=\lfloor \frac 1n \rfloor$.</p>
</blockquote>
<p>In the preceding chapter, Abbott has already shown that $\lim_{n \to \infty} \frac 1n =0$, so we can take this as given. Then we note that since $n \lt (n+1)$ $\forall n \in \mathbb N$, we have $\frac 1n \gt \frac 1{n+1} \gt 0$, $\forall n \in \mathbb N$. And since by inspection $a_1 = 1$, we have </p>
<p>$$1\gt \frac 1{n+1} \gt \frac 1{n+2} \gt \frac 1{n+3} \gt \cdots \gt0,$$
so that $a_n = 0$ for $ n \ge2$. Finally, since $|a_n-0|=0$ for $n \ge2$, we must have $|a_n - 0| \lt \epsilon$, $\forall \epsilon \gt 0$ and $n \ge2$. Therefore $\lim_{n \to \infty} a_n=0$.</p>
<p>Is this correct? Have I included any unnecessary steps? It just seems so pathologically nit-picky! And I feel the same way about most of the other exercises in the book. Thanks for your help!</p>
| Artem | 29,547 | <p>Hint:
$$
x^2\sqrt{x^2+4}=x^3\sqrt{1+\frac{4}{x^2}}\,,\quad d\left(\frac{1}{x^2}\right)=-\frac{2}{x^3}d x
$$</p>
|
506,152 | <p>Is $$\frac{a+b}{c+d}<\frac{a}{c}+\frac{b}{d}$$
for $a,b,c,d>0$</p>
<p>If it is true, then can we generalize?</p>
<p>EDIT:typing mistake corrected.</p>
<p>EDIT, WILL JAGY. Apparently the <strong>real question</strong> is
Is $$\color{magenta}{\frac{a+b}{c+d}<\frac{a}{c}+\frac{b}{d}}$$
for $a,b,c,d>0,$ where letters on the left hand side and in the <strong>numerator</strong> stay in the <strong>numerator</strong> on the right-hand side, and letters on the left hand side and in the <strong>denominator</strong> stay in the <strong>denominator</strong> on the right-hand side.</p>
| Bob Anderson | 97,156 | <p>A slightly different approach:</p>
<p>Multiply both sides by (c+d), which we can do without altering the inequality because c and d are positive:</p>
<p>$$ a+b < \frac{a(c+d)}{c} +\frac{b(c+d)}{d}$$
$$ a+b < \frac{ac}{c} +\frac{ad}{c} +\frac{bc}{d} +\frac{bd}{d}$$
$$ a+b < a + \frac{ad}{c} +\frac{bc}{d} +b$$
$$ a+b < a+b +\frac{ad}{c}+\frac{bc}{d}$$
$$ \frac{ad}{c} +\frac{bc}{d} > 0$$</p>
<p>This is clearly always true because both terms must be > 0.</p>
<p>This same basic outline works for a 3 term version of this:
$$\frac{a+b+c}{d+e+f} < \frac{a}{d}+\frac{b}{e} +\frac{c}{f}$$</p>
<p>and will clearly work for any number of terms because after multiplying by the denominator on the left hand side, you will always spit out on the right hand side, exactly the left hand side numerator plus some additional terms which must be positive.</p>
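<p>A randomized sanity check of the n-term generalization described above (a sketch, not a proof; the sampling ranges are arbitrary choices of mine):</p>

```python
import random

random.seed(0)
for _ in range(1000):
    n = random.randint(2, 6)
    nums = [random.uniform(0.01, 10.0) for _ in range(n)]   # numerators
    dens = [random.uniform(0.01, 10.0) for _ in range(n)]   # denominators
    lhs = sum(nums) / sum(dens)
    rhs = sum(a / c for a, c in zip(nums, dens))
    assert lhs < rhs
print("inequality held in all 1000 random trials")
```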
|
506,152 | <p>Is $$\frac{a+b}{c+d}<\frac{a}{c}+\frac{b}{d}$$
for $a,b,c,d>0$</p>
<p>If it is true, then can we generalize?</p>
<p>EDIT:typing mistake corrected.</p>
<p>EDIT, WILL JAGY. Apparently the <strong>real question</strong> is
Is $$\color{magenta}{\frac{a+b}{c+d}<\frac{a}{c}+\frac{b}{d}}$$
for $a,b,c,d>0,$ where letters on the left hand side and in the <strong>numerator</strong> stay in the <strong>numerator</strong> on the right-hand side, and letters on the left hand side and in the <strong>denominator</strong> stay in the <strong>denominator</strong> on the right-hand side.</p>
| Hypergeometricx | 168,053 | <p>Let <span class="math-container">$a=\lambda c$</span> and <span class="math-container">$b=\mu d$</span>, where <span class="math-container">$\lambda, \mu>0$</span>.
<span class="math-container">$$\frac {a+b}{c+d}=\frac {\lambda c+\mu d}{c+d}=\frac {\lambda (c+d)+\mu (c+d)-(\lambda d+\mu c)}{c+d}=\lambda + \mu -\frac {\lambda d+\mu c}{c+d}<\lambda + \mu = \frac ac+\frac bd
$$</span></p>
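<p>A numeric spot-check of the decomposition above (the sample values are my own arbitrary choices):</p>

```python
# with a = lam*c and b = mu*d, verify the rewritten middle expression and
# the final strict inequality (a+b)/(c+d) < a/c + b/d
lam, mu = 0.7, 2.3
c, d = 1.9, 0.4
a, b = lam * c, mu * d

lhs = (a + b) / (c + d)
middle = lam + mu - (lam * d + mu * c) / (c + d)   # the rewritten middle term
rhs = a / c + b / d                                # equals lam + mu

assert abs(lhs - middle) < 1e-12
assert abs(rhs - (lam + mu)) < 1e-12
assert lhs < rhs
print(lhs, rhs)
```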
|
3,162,464 | <p>I need help making an OGF for <span class="math-container">$1 + x^i + x^{2i}+...+x^{ki}$</span>. I already know how to verify that <span class="math-container">$1 +x +x^2+...+x^k$</span> can be written as <span class="math-container">$({1-x^{k+1}})/({1-x})$</span>. I'm wondering if there is any connection between the two?</p>
<p>Any help would be greatly appreciated. Thanks!</p>
| Peter Foreman | 631,494 | <p>Hint: make the substitution <span class="math-container">$u=x^i$</span>.</p>
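<p>Spelled out, the hint gives <span class="math-container">$1 + x^i + \dots + x^{ki} = \frac{1-x^{(k+1)i}}{1-x^i}$</span>, by applying the known identity to <span class="math-container">$u=x^i$</span>. A quick numeric sketch (the sample points are arbitrary, chosen with <span class="math-container">$x^i \ne 1$</span>):</p>

```python
for x in (0.3, 0.7, 1.5):
    for i in (2, 3):
        for k in (4, 5):
            lhs = sum(x ** (j * i) for j in range(k + 1))   # 1 + x^i + ... + x^(ki)
            rhs = (1 - x ** ((k + 1) * i)) / (1 - x ** i)   # substitution u = x^i
            assert abs(lhs - rhs) < 1e-9
print("closed form agrees on all sample points")
```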
|
858,952 | <p>related to <a href="https://math.stackexchange.com/questions/830599/one-sided-limit-lim-x-rightarrow-0-fx-where-wolfram-alpha-does-not-hel">this question</a>:</p>
<p>Is there an easy closed-form term for</p>
<p>$$\sum_{j=k}^{\infty} \frac{x^j}{j!}e^{-x},$$</p>
<p>thus when the sum starts at a constant $k$ instead of $1$?</p>
<p>EDIT:
Thanks for your help. Is there a chance to simplify this sum further? Because this is not really what I expect when I talk about a closed-form term. </p>
<p>A little bit more context might help, maybe:</p>
<p>I have $$f(n,p)=\sum_{j=k}^{\infty} \frac{(np)^j}{j!} e^{-np}$$ and it is claimed that the partial derivative is $$\frac{\partial f(n,p)}{\partial n}=\frac{p (np)^{k-1}}{(k-1)!}e^{-np}$$ but I have no idea how to get there. </p>
<p>Because to me: </p>
<p>$$\frac{\partial f(n,p)}{\partial n}=\sum_{j=k}^{\infty} \left( \frac{p (np)^{j-1}}{j!} e^{-np} -\frac{p (np)^j}{j!} e^{-np} \right)$$
but then I am stuck.</p>
| mookid | 131,738 | <p>Yes:</p>
<p>$$e^{-x}\sum_{j=k}^\infty \frac 1{j!} x^j =
e^{-x}\left[\exp x - \sum_{j=0}^{k-1} \frac 1{j!} x^j \right]
=e^{-x}\int_0^x \frac{(x-t)^{k-1}}{(k-1)!} e^{t} dt
$$</p>
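<p>A numerical cross-check of the last identity (a pure-Python sketch, not part of the original answer; the values $x=2.5$, $k=4$ are illustrative, and the integral is evaluated with composite Simpson's rule):</p>

```python
import math

# Left side: Poisson tail  e^{-x} * sum_{j >= k} x^j / j!   (truncated far out),
# with the terms updated iteratively to avoid huge factorials.
def tail_sum(x, k, terms=60):
    term = x ** k / math.factorial(k)
    total = 0.0
    for j in range(k, k + terms):
        total += term
        term *= x / (j + 1)
    return math.exp(-x) * total

# Right side: e^{-x} * integral_0^x (x - t)^(k-1) / (k-1)! * e^t dt,
# evaluated with composite Simpson's rule on n subintervals (n even).
def tail_integral(x, k, n=2000):
    h = x / n
    f = lambda t: (x - t) ** (k - 1) / math.factorial(k - 1) * math.exp(t)
    s = f(0) + f(x)
    s += 4 * sum(f((2 * m - 1) * h) for m in range(1, n // 2 + 1))
    s += 2 * sum(f(2 * m * h) for m in range(1, n // 2))
    return math.exp(-x) * s * h / 3

lhs = tail_sum(2.5, 4)
rhs = tail_integral(2.5, 4)
```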
|
858,952 | <p>related to <a href="https://math.stackexchange.com/questions/830599/one-sided-limit-lim-x-rightarrow-0-fx-where-wolfram-alpha-does-not-hel">this question</a>:</p>
<p>Is there an easy closed-form term for</p>
<p>$$\sum_{j=k}^{\infty} \frac{x^j}{j!}e^{-x},$$</p>
<p>thus when the sum starts at a constant $k$ instead of $1$?</p>
<p>EDIT:
Thanks for your help. Is there a chance to simplify this sum? Because this is not really what I expect when I talk about a closed-form term. </p>
<p>A little bit more context might help, maybe:</p>
<p>I have $$f(n,p)=\sum_{j=k}^{\infty} \frac{(np)^j}{j!} e^{-np}$$ and it is claimed that the partial derivative is $$\frac{\delta f(n,p)}{\delta n}=\frac{p (np)^{k-1}}{(k-1)!}e^{-np}$$ but I have no idea how to get to this. </p>
<p>Because to me: </p>
<p>$$\frac{\delta f(n,p)}{\delta n}=\sum_{j=k}^{\infty} \left( \frac{p (np)^{j-1}}{j!} e^{-np} -\frac{p (np)^j}{j!} e^{-np} \right)$$
but then I am stuck.</p>
| Did | 6,179 | <p>You just made a mistake when differentiating $$f(x)=\sum_{j=k}^{\infty} \frac{(xp)^j}{j!} e^{-xp}.$$
The actual derivative is
$$f'(x)=\sum_{j=k}^{\infty} \left( \color{red}{j}\,\frac{p\,(xp)^{j-1}}{j!} e^{-xp} -\frac{p\,(xp)^j}{j!} e^{-xp} \right),$$
that is, provided $k\geqslant1$, using the change of indices $\ell=j-1$ in the first summation,
$$f'(x)=\sum_{\ell=k-1}^{\infty}\frac{p(xp)^{\ell}}{\ell!} e^{-xp} -\sum_{j=k}^{\infty} \frac{p (xp)^j}{j!} e^{-xp}=\frac{p(xp)^{k-1}}{(k-1)!} e^{-xp}.$$</p>
|
1,075,215 | <p>Question: An actuary is studying the prevalence of three health risk factors, denoted by A, B, and C, within a population of women. For each of the three factors, the probability is 0.1 that a woman in the population only has this risk factor (and no others). For any two of three factors, the probability is 0.12 that she has exactly two of these risk factors (but not the other). The probability that a woman has all three risk factors given that she has A and B, is (1/3). What is the probability that a woman has none of the three risk factors, given that she does not have risk factor A?</p>
<p>My attempt: I wrote the "probability that a woman has none of the three risk factors, given that she does not have risk factor A" as Pr(A'andB'andC'|A') as Pr(A'andB'andC'andA')/Pr(A') which just simplifies to Pr(A'andB'andC')/Pr(A') where Pr(A') = (1-.1) = .9. I'm not entirely sure where to go on from there. I also tried to draw a Venn Diagram with three intersecting circles where Pr(AandB'andC') = .1 (same for B and C), but that didn't really get me anywhere The answer is 0.467 (rounded). Can you guys please show me what I'm doing wrong or what I should be doing?</p>
<p>Thank you guys so much!</p>
| turkeyhundt | 115,823 | <p>You can divide the sample into 8 pools.
$$
\begin{array}{|c|c|} \hline
\text{Factor}& \text{Probability} \\ \hline
\text{A} & .1 \\ \hline
\text{B} & .1 \\ \hline
\text{C} & .1 \\ \hline
\text{AB} & .12 \\ \hline
\text{AC} & .12 \\ \hline
\text{BC} & .12 \\ \hline
\text{ABC} & .06 \\ \hline
\text{NONE} & .28 \\ \hline
\end{array}$$</p>
<p>So you want $$\frac{\text{NONE}}{\text{B}+\text{C}+\text{BC}+\text{NONE}}=\frac{.28}{.1+.1+.12+.28}\approx.467$$</p>
<p>Note, $\text{ABC}=.06$ because, of the pool that has both A and B, $\frac{2}{3}$ must be AB only and $\frac{1}{3}$ must be ABC, so that $\Pr(\text{ABC}\mid A\cap B)=\frac{1}{3}$.</p>
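<p>The arithmetic above can be reproduced with a short script (a sketch restating the table, not part of the original answer):</p>

```python
# Reconstruct the eight pools from the given data and verify the table above.
p_only = 0.1                 # each of A, B, C alone
p_two = 0.12                 # each pair exactly (e.g. AB but not C)
# Pr(ABC | A and B) = 1/3  =>  p_abc / (p_two + p_abc) = 1/3  =>  p_abc = p_two / 2
p_abc = p_two / 2
p_none = 1 - 3 * p_only - 3 * p_two - p_abc

# Conditional probability of no risk factor, given "not A":
answer = p_none / (2 * p_only + p_two + p_none)
```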
|
3,197,262 | <p>When I was solving <span class="math-container">$ \operatorname{Cov}(X,E(X\mid Y)) = \operatorname{var}(E(X\mid Y))$</span>, I noticed that <span class="math-container">$E(X\mid Y)$</span> was treated as a function of <span class="math-container">$Y$</span>.
My thinking is that <span class="math-container">$E(X\mid Y)$</span> takes values determined by <span class="math-container">$ \operatorname{Range}(Y) $</span>: for each value of <span class="math-container">$Y$</span>, it gives the conditional expectation of <span class="math-container">$X$</span>. Is this correct?</p>
| Surb | 154,545 | <p>An easy way to clarify : if <span class="math-container">$f(y)=\mathbb E[X\mid Y=y]$</span>, then <span class="math-container">$\mathbb E[X\mid Y]=f(Y)$</span>.</p>
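<p>A tiny discrete illustration (a sketch with a made-up joint pmf, not from the original question): compute <span class="math-container">$f(y)=\mathbb E[X\mid Y=y]$</span>, so that <span class="math-container">$\mathbb E[X\mid Y]=f(Y)$</span>, and check the tower property <span class="math-container">$\mathbb E[f(Y)]=\mathbb E[X]$</span>.</p>

```python
from fractions import Fraction as F

# A small made-up joint pmf p(x, y) on {0,1} x {0,1}, to illustrate that
# E[X | Y] is the random variable f(Y) with f(y) = E[X | Y = y].
pmf = {(0, 0): F(1, 8), (1, 0): F(3, 8), (0, 1): F(1, 4), (1, 1): F(1, 4)}

def marginal_Y(y):
    return sum(p for (x, yy), p in pmf.items() if yy == y)

def cond_exp_given(y):
    # f(y) = E[X | Y = y]
    return sum(x * p for (x, yy), p in pmf.items() if yy == y) / marginal_Y(y)

f0 = cond_exp_given(0)    # f(0) = 3/4
f1 = cond_exp_given(1)    # f(1) = 1/2

# Tower property: E[f(Y)] = E[X]
ex = sum(x * p for (x, y), p in pmf.items())
efY = sum(cond_exp_given(y) * marginal_Y(y) for y in (0, 1))
```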
|
1,780,797 | <p>According to <a href="https://en.wikipedia.org/wiki/Null_set#Lebesgue_measure" rel="nofollow noreferrer">Wikipedia</a>, the straight line <span class="math-container">$\mathbb{R}$</span> is a null set in <span class="math-container">$\mathbb{R}^2$</span>.</p>
<p>That means, the line <span class="math-container">$\mathbb{R}$</span> can be contained in <span class="math-container">$\bigcup_{k=1}^\infty B_k$</span>, where <span class="math-container">$B_k$</span> are open disks and their total measure of all the <span class="math-container">$B_k$</span> is less than <span class="math-container">$\epsilon$</span>.</p>
<p>Question 1: How can this be done? Any explicit construction to show this?</p>
<p>Question 2: Since the intersection of <span class="math-container">$B_k$</span> with <span class="math-container">$\mathbb{R}$</span> is an open interval <span class="math-container">$I_k$</span>, doesn't this mean that <span class="math-container">$\mathbb{R}$</span> can be covered by union of intervals <span class="math-container">$I_k$</span> whose total length is arbitrarily small? (Which according to my <a href="https://math.stackexchange.com/questions/1780580/what-is-wrong-in-this-proof-that-mathbbr-has-measure-zero">previous question</a> is impossible?)</p>
<p>Sincere thanks for any help. I am puzzled by this.</p>
| Benjamin Lindqvist | 96,816 | <p>Here's an attempt.</p>
<p>$$D(P_X||P_{X+Y}) = \mathbb{E}[\log \frac{P_X(X)}{P_{X+Y}(X+Y)}] = \mathbb{E}[\log \frac{P_X(X)P_Y(Y)}{P_{X+Y}(X+Y)P_Y(Y)}] \\
= \mathbb{E}[\log \frac{P_X(X)}{P_Y(Y)}] + \mathbb{E}[\log \frac{P_Y(Y)}{P_{X+Y}(X+Y)}] = \infty$$</p>
<p>because $g$ is uniform and has $g(x)=0$ for some $x$ such that $f(x)>0$.</p>
|
2,590,165 | <p>How to show $f(x)$=$\frac{1}{1+x^2}$ is uniform continuous on $\Bbb R$. </p>
<p>Although, of course for any interval $[a,b]$, this function is continuous and bounded, therefore also uniformly continuous. Following <strong>Continuous Extension Theorem</strong> it is uniformly continuous on any $(a,b)$. Therefore proceeding this way, we can show it is uniformly continuous on $ \Bbb R$. </p>
<p>I wish to prove the same analytically. I assumed there exists $x,u \in \Bbb R$, such that $ |x-u|< \delta$. </p>
<p>Now,</p>
<p>$|f(x)-f(u)|$=$\frac {|x^2-u^2|}{|(1+x^2)(1+u^2)|}$ $\le$ $\frac{|x-u||x+u|}{x^2u^2}$ $\le$ $\delta$$\frac{|x+u|}{x^2u^2}$.</p>
<p>Here I stuck. I wish to find an $\epsilon$ so that the $|f(x)-f(y)|\lt \epsilon$, where $\delta$ depends only on $\epsilon$, not on $x$. But unable to do that. Tried to apply A.M-G.M inequality but could not find a fruitful result. What to do? </p>
| Andreas | 317,854 | <p>So you need (the values in parantheses are now positive)</p>
<p>$$
{e^{xB}}(1-xB) \le {e^{xA}}(1-xA)
$$</p>
<p>Since $B>A$, you want that $
f(q) = {e^{xq}}(1-xq)$ is decreasing with $q$. The derivative is </p>
<p>$
f'(q) = -x^2q \; {e^{xq}}$</p>
<p>and this is clearly negative for $x,q \in (0,1)$. That's it.</p>
|
3,819,658 | <p>Calculate, <span class="math-container">$$\lim\limits_{(x,y)\to (0,0)} \dfrac{x^4}{(x^2+y^4)\sqrt{x^2+y^2}},$$</span> if there exist.</p>
<p>My attempt:</p>
<p>I have tried several paths, for instance: <span class="math-container">$x=0$</span>, <span class="math-container">$y=0$</span>, <span class="math-container">$y=x^m$</span>. In all the cases I got that the limit is <span class="math-container">$0$</span>. But I couldn't figure out how to prove it. Any suggestion?</p>
| user | 505,767 | <p>Assuming <span class="math-container">$|y|<1$</span> (which holds near the origin, so that <span class="math-container">$y^4\le y^2$</span>) and using polar coordinates <span class="math-container">$x=r\cos\theta$</span>, <span class="math-container">$y=r\sin\theta$</span>, we have</p>
<p><span class="math-container">$$\dfrac{x^4}{(x^2+y^4)\sqrt{x^2+y^2}} \le \dfrac{x^4}{(x^2+y^2)\sqrt{x^2+y^2}} =r \cos^4\theta \to 0$$</span></p>
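<p>A numerical check of this bound (a sketch, not part of the original answer): since <span class="math-container">$f\le r$</span> on each circle of radius <span class="math-container">$r$</span>, the maximum over small circles should shrink with <span class="math-container">$r$</span>.</p>

```python
import math

# f(x, y) = x^4 / ((x^2 + y^4) * sqrt(x^2 + y^2)); the polar bound above gives
# f <= r on the circle of radius r (for |y| < 1), so circle maxima shrink with r.
def f(x, y):
    return x ** 4 / ((x ** 2 + y ** 4) * math.hypot(x, y))

def max_on_circle(r, samples=2000):
    best = 0.0
    for j in range(samples):
        th = 2 * math.pi * j / samples
        best = max(best, f(r * math.cos(th), r * math.sin(th)))
    return best

maxima = [max_on_circle(r) for r in (0.1, 0.01, 0.001)]
```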
|
1,052,180 | <p>I need to find a connected graph $G = (V, E), |V| \geq 3$ such that every power of its adjacency matrix contains zeroes.</p>
<p>I know that such a graph will be a path, and the adjacency matrix for even and odd powers would look like this (let's say for $|V| = 3$):</p>
<p>$M=
\left[ {\begin{array}{ccccc}
0 & 1 & 0\\
1 & 0 & 1\\
0 & 1 & 0\\
\end{array} } \right]
$</p>
<p>$
M*M=
\left[ {\begin{array}{ccccc}
1 & 0 & 1\\
0 & 2 & 0\\
1 & 0 & 1\\
\end{array} } \right]
$</p>
<p>$
M*M*M=
\left[ {\begin{array}{ccccc}
0 & 2 & 0\\
2 & 0 & 2\\
0 & 2 & 0\\
\end{array} } \right]
$</p>
<p>$
M*M*M*M=
\left[ {\begin{array}{ccccc}
2 & 0 & 2\\
0 & 4 & 0\\
2 & 0 & 2\\
\end{array} } \right]
$
etc...</p>
<p>Zeroes and the nonzero entries swap positions between even and odd powers of the matrix.</p>
<p>I don't know how to prove that there will always be a zero entry in any power of the adjacency matrix of the path graph with $n$ vertices. Maybe use induction (I am not sure how to proceed with induction, though)?</p>
| Robert Israel | 8,508 | <p>Hint: if the graph is bipartite (with parts $A$ and $B$), any walk starting in part $A$ will be in part $B$ after an odd number of steps and in part $A$ after an even number of steps.</p>
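<p>The hint can be checked mechanically (a pure-Python sketch, not part of the original answer): for the path graph, odd powers vanish on same-part entries and even powers on cross-part entries, so every power keeps some zero entries.</p>

```python
# Adjacency matrix of the path graph on n vertices, and a check that every
# power of it still contains zero entries (the path is bipartite).
def path_adj(n):
    return [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def has_zero(A):
    return any(0 in row for row in A)

M = path_adj(5)
P = [row[:] for row in M]
all_powers_have_zero = True
for _ in range(10):               # checks M^1 through M^10
    if not has_zero(P):
        all_powers_have_zero = False
    P = matmul(P, M)
```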
|
3,454,682 | <p>I know that <span class="math-container">${\langle x, y \rangle}$</span> means the inner product but I've stumbled upon the notation <span class="math-container">${\langle x, y \rangle}_a$</span> with <span class="math-container">$a \in \mathbb{R}$</span> and I can't figure out what it means. Usually what's in the subscript isn't a number, but the denotion of some vector space, e.g. <span class="math-container">$V$</span>.</p>
<p>The context is this problem from an exam in the introductory course in linear algebra at our university:</p>
<p>Let <span class="math-container">${\langle,\rangle}_1$</span> and <span class="math-container">${\langle,\rangle}_2$</span> be two inner product structures on a finite-dimensional vector space <span class="math-container">$V$</span>. Show that there is a linear map <span class="math-container">$T: V \to V$</span> such that</p>
<p><span class="math-container">${\langle x,y \rangle}_1$</span> = <span class="math-container">${\langle T(x), y \rangle}_2$</span></p>
<p>for all <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in <span class="math-container">$V$</span>.</p>
| Masacroso | 173,262 | <p>In your context the notation <span class="math-container">$\langle f,g \rangle_1$</span> or <span class="math-container">$\langle f,g \rangle_2$</span> is just a way to name two distinct inner products, that is all. The numbers don't have a "mathematical" meaning; they are just names, tags.</p>
<p>We could say also that there are two inner products, represented as <span class="math-container">$\langle f,g\rangle_{\text{ foo }}$</span> and <span class="math-container">$\langle f,g \rangle_{\text{ bar }}$</span> to distinguish them.</p>
|
2,250,469 | <p>Let n $\geq$ 4 be an integer. Determine the number of permutations of
$\{1, 2, . . . , n\}$, in which $1$ and $2$ are next to each other, with $1$ to the left of $2$.<br>
I can't make sense of this problem statement. The way I see it, if $n$ is an integer, then the pair $1,2$ could be formed by any pair with the form $\overline{...x_{i-2}x_{i-1}x_i1}, \overline{2y_{1}y_2y_3...}$ or a number with the form $\overline{...x_{i-2}x_{i-1}x_i12x_{i+1}x_{i+2}x_{i+3}..}$ where the $x$'s and $y$'s are some mysterious digits. Can anyone explain this problem?</p>
| JMoravitz | 179,297 | <p>The problem statement requires that we count the number of arrangements such that</p>
<ul>
<li><p>The arrangement is a permutation of $\{1,2,\dots,n\}$ (<em>i.e. it uses each and every number from $\{1,2,\dots,n\}$ exactly once</em>)</p></li>
<li><p>$1$ and $2$ are adjacent, i.e. they appear right next to one another</p></li>
<li><p>$1$ appears to the left of $2$</p></li>
</ul>
<p>For $n=3$ the whole list is: $123,312$</p>
<p>For $n=4$ the whole list is: $1234,1243,3124,4123,3412,4312$</p>
<p>For $n=5$ the list begins as: $12345,12354,12435,12453,\dots$ with several more yet unwritten.</p>
<hr>
<p>To solve, apply multiplication principle to the following steps:</p>
<ul>
<li><p>Choose the location in the arrangement which the $1$ will occupy (<em>remember that you must leave enough space to the right for the $2$ to occupy as well</em>)</p></li>
<li><p>Place the two adjacent to the right of the $1$</p></li>
<li><p>From left-to-right fill in the remaining empty spaces in the arrangement with one of the remaining digits.</p></li>
</ul>
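<p>A brute-force check of the count (a sketch, not part of the original answer): the multiplication principle above gives $(n-1)\cdot(n-2)!=(n-1)!$ such permutations, which matches direct enumeration for small $n$.</p>

```python
from itertools import permutations
from math import factorial

# Count permutations of {1,...,n} in which 1 is immediately followed by 2.
def count_adjacent_12(n):
    total = 0
    for p in permutations(range(1, n + 1)):
        i = p.index(1)
        if i + 1 < n and p[i + 1] == 2:
            total += 1
    return total

counts = {n: count_adjacent_12(n) for n in range(3, 7)}
```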
|
2,262,371 | <p>If $a,b,c$ are positive real numbers, prove that
$$\frac{2}{a+b}+\frac{2}{b+c}+ \frac{2}{c+a}≥ \frac{9}{a+b+c}$$</p>
| Lazy Lee | 430,040 | <p>By Cauchy-Schwartz. $$\sum_{cyc}\frac{1}{a+b}\sum_{cyc}(a+b)\geq (1+1+1)^2=9\implies 2\cdot\sum_{cyc}\frac{1}{a+b}\geq2\cdot \frac{9}{\sum_{cyc}(a+b)}=\frac{9}{a+b+c}$$</p>
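<p>A quick numerical spot check of the inequality (a sketch, not part of the original answer), including the equality case $a=b=c$:</p>

```python
import random

# Check 2/(a+b) + 2/(b+c) + 2/(c+a) >= 9/(a+b+c) on random positive triples.
random.seed(0)

def lhs(a, b, c):
    return 2 / (a + b) + 2 / (b + c) + 2 / (c + a)

def rhs(a, b, c):
    return 9 / (a + b + c)

ok = True
for _ in range(10000):
    a, b, c = (random.uniform(0.01, 10) for _ in range(3))
    if lhs(a, b, c) < rhs(a, b, c) - 1e-12:
        ok = False

# Equality should hold (up to rounding) when a = b = c.
equality_gap = lhs(1.7, 1.7, 1.7) - rhs(1.7, 1.7, 1.7)
```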
|
1,073,628 | <p>I am trying to find generating functions which will give me a power logarithm. </p>
<p>I am trying to find generating sums in the form</p>
<p>$$\sum_{n=1}^{\infty} a_n\,x^n = -\frac{\log^2(1-x)}{1-x}$$</p>
<p>or </p>
<p>$$\sum_{n=1}^{\infty} a_n\,x^n = \frac{\log^2(x)}{x}.$$</p>
<p>Something, which will return $\log^3$ in the end. </p>
<p>Help is required! </p>
<p>Thanks</p>
| Dr. Wolfgang Hintze | 198,592 | <p><strong>Extension to general powers $(-\log (1-x))^n$</strong></p>
<p>Exploiting the powerful idea of parametric differentiation presented here by Felix Marin, I was able to find expressions for arbitrary powers $n$ and provide examples for $n = 1..8$. </p>
<p>The results for $n\ge 4$ seem to be new (please correct me if not).</p>
<p>The derivation of the general term is a nice exercise in special functions and combinatorics which I shall present later. </p>
<p><strong>Results</strong></p>
<p>Let $n = 1, 2, 3, ...$ be a natural number and define a series expansion as follows</p>
<p>$$\frac{1}{n}(-\log (1-x))^n=\sum _{k=1}^{\infty } \frac{x^k}{k} a(n,k)\tag{1}$$</p>
<p>Then the coefficients for the first few $n$ are in the format $\{n,a(n,k)\}$</p>
<p>$$\left(
\begin{array}{c}
\{1,1\} \\
\left\{2,H_{k-1}\right\} \\
\left\{3,\left(H_{k-1}\right){}^2-H_{k-1}^{(2)}\right\} \\
\left\{4,\left(H_{k-1}\right){}^3-3 H_{k-1}^{(2)} H_{k-1}+2 H_{k-1}^{(3)}\right\} \\
\left\{5,\left(H_{k-1}\right){}^4-6 H_{k-1}^{(2)} \left(H_{k-1}\right){}^2+8 H_{k-1}^{(3)} H_{k-1}+3 \left(H_{k-1}^{(2)}\right){}^2-6 H_{k-1}^{(4)}\right\} \\
\left\{6,\left(H_{k-1}\right){}^5-10 H_{k-1}^{(2)} \left(H_{k-1}\right){}^3+20 H_{k-1}^{(3)} \left(H_{k-1}\right){}^2+15 \left(\left(H_{k-1}^{(2)}\right){}^2-2 H_{k-1}^{(4)}\right) H_{k-1}-20 H_{k-1}^{(2)} H_{k-1}^{(3)}+24 H_{k-1}^{(5)}\right\} \\
\left\{7,\left(H_{k-1}\right){}^6-15 H_{k-1}^{(2)} \left(H_{k-1}\right){}^4+40 H_{k-1}^{(3)} \left(H_{k-1}\right){}^3+45 \left(\left(H_{k-1}^{(2)}\right){}^2-2 H_{k-1}^{(4)}\right) \left(H_{k-1}\right){}^2-24 \left(5 H_{k-1}^{(2)} H_{k-1}^{(3)}-6 H_{k-1}^{(5)}\right) H_{k-1}+5 \left(-3 \left(H_{k-1}^{(2)}\right){}^3+18 H_{k-1}^{(4)} H_{k-1}^{(2)}+8 \left(\left(H_{k-1}^{(3)}\right){}^2-3 H_{k-1}^{(6)}\right)\right)\right\} \\
\left\{8,\left(H_{k-1}\right){}^7-21 H_{k-1}^{(2)} \left(H_{k-1}\right){}^5+70 H_{k-1}^{(3)} \left(H_{k-1}\right){}^4+105 \left(\left(H_{k-1}^{(2)}\right){}^2-2 H_{k-1}^{(4)}\right) \left(H_{k-1}\right){}^3-84 \left(5 H_{k-1}^{(2)} H_{k-1}^{(3)}-6 H_{k-1}^{(5)}\right) \left(H_{k-1}\right){}^2-35 \left(3 \left(H_{k-1}^{(2)}\right){}^3-18 H_{k-1}^{(4)} H_{k-1}^{(2)}-8 \left(H_{k-1}^{(3)}\right){}^2+24 H_{k-1}^{(6)}\right) H_{k-1}+6 \left(35 H_{k-1}^{(3)} \left(H_{k-1}^{(2)}\right){}^2-84 H_{k-1}^{(5)} H_{k-1}^{(2)}-70 H_{k-1}^{(3)} H_{k-1}^{(4)}+120 H_{k-1}^{(7)}\right)\right\} \\
\end{array}
\right)\tag{2}$$</p>
<p><strong>Discussion</strong></p>
<p>By differentiating (1) with respect to $x$ we immediately obtain the interesting expansions </p>
<p>$$\frac{(-\log (1-x))^n}{1-x}=\sum _{k=0}^{\infty } x^k a(n,k+1)\tag{3}$$</p>
<p>This expansion is closely related to generating functions for various harmonic numbers.</p>
<p>EDIT 26.11.17</p>
<p>A direct approach would be multiplication of power series. </p>
<p>From</p>
<p>$$(-\log (1-x))^n=\left(\sum _{k=1}^{\infty } \frac{x^k}{k}\right){}^n$$</p>
<p>and comparison with $(1)$ we conclude that</p>
<p>$$a(n,m)=\frac{m}{n} \sum _{\substack{k_1+\cdots+k_n=m\\ k_i\ge 1}} \frac{1}{\prod _{i=1}^n k_{i}}$$</p>
<p><strong>Derivation</strong></p>
<p>Ready on paper. To be typed in here. </p>
<p>I shall wait a while, however, in case others wish to attempt the derivation.</p>
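<p>The table $(2)$ can be cross-checked mechanically for small $n$ (a pure-Python sketch with exact rationals, not part of the original derivation): convolve the series for $-\log(1-x)$ with itself and compare with the harmonic-number expressions $a(2,k)=H_{k-1}$ and $a(3,k)=\left(H_{k-1}\right)^2-H_{k-1}^{(2)}$.</p>

```python
from fractions import Fraction as F

# Coefficient list of (-log(1-x))^n up to x^top, by convolving the series
# sum_{k>=1} x^k / k with itself (exact rational arithmetic).
def log_power_coeffs(n, top):
    base = [F(0)] + [F(1, k) for k in range(1, top + 1)]   # index = power of x
    coef = base[:]
    for _ in range(n - 1):
        new = [F(0)] * (top + 1)
        for i in range(1, top + 1):
            for j in range(1, top + 1 - i):
                new[i + j] += coef[i] * base[j]
        coef = new
    return coef

def harmonic(m, power=1):
    # generalized harmonic number H_m^(power)
    return sum(F(1, j ** power) for j in range(1, m + 1))

top = 8
c2 = log_power_coeffs(2, top)
c3 = log_power_coeffs(3, top)
# By (1), the coefficient of x^k in (1/n)(-log(1-x))^n equals a(n,k)/k.
a2 = [None] + [F(1, 2) * c2[k] * k for k in range(1, top + 1)]
a3 = [None] + [F(1, 3) * c3[k] * k for k in range(1, top + 1)]
```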
|
2,300,382 | <p>I cannot think of a non-constructible algebraic number of degree $4$ over $\Bbb Q$ so far. I wish if I can find such an example. Could some one tell me some such numbers with justification? Also I would like to know the track of working out such an example. Any help or reference would be appreciate. Thanks in advance!</p>
| José Carlos Santos | 446,262 | <p>One problem which leads to such a number is <a href="https://en.wikipedia.org/wiki/Alhazen%27s_problem" rel="nofollow noreferrer">Alhazen's billiard problem</a>.</p>
|
200,093 | <p>I have a BLDC electric motor, I'm currently trying to control via a <code>PIDTune</code>. This is mostly an attempt to reduce (remove) a small run away drift that ends up showing up in the motor signal <code>u[t]</code>.</p>
<p>I've modelled this via:</p>
<pre><code>ssm = StateSpaceModel[\[ScriptCapitalJ] \[Phi]''[t] + \[ScriptCapitalR] \[Phi]'[t] == \[ScriptCapitalT] u[t], {{\[Phi][t], 0}, {\[Phi]'[t], 0}, {u[t], 1}}, u[t], \[Phi]'[t], t]
</code></pre>
<p>And simulated: </p>
<pre><code>params = { \[ScriptCapitalJ] -> 4.63 10^-5, \[ScriptCapitalR] -> 1 10^-5, \[ScriptCapitalT] -> 0.0335};
Plot[Evaluate[OutputResponse[ssm /. params, 1, {t, 0, 12}]], {t, 0, 12}]
</code></pre>
<p><a href="https://i.stack.imgur.com/vSYOC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vSYOC.png" alt="plot"></a></p>
<p>This is a nice model response and mirrors the response of the real motor almost exactly. </p>
<p>So I tried to create a control system to add to the control signal and bring the system relatively quickly back to zero. </p>
<pre><code>control = PIDTune[ssm /. params , {"PID"}]
</code></pre>
<p>But I continue to get the following error:</p>
<pre><code>PIDTune::infgains: Unable to compute finite controller parameters because a denominator in the tuning formula is effectively zero.
</code></pre>
<p>I have tried <em>all</em> tuning methods within the documentation, however I continue to get errors.</p>
<p>Changing to a "PD" control</p>
<pre><code>control = PIDTune[ssm /. params , {"PD"}]
</code></pre>
<p>Gives me control system, however when adding it to the feedback and then seeing the response I get a different error:</p>
<pre><code>simul = SystemsModelFeedbackConnect[ssm, control] /. params
OutputResponse[simul, UnitStep[t - 3], {t, 0, 12}]
OutputResponse::irregss: A solution could not be found for the irregular state-space model with a characteristic equation of zero.
</code></pre>
<p>The error messages don't really make any sense to me...or explain what the issue is with the model...being that it simulates reality quite well....How can I relieve these errors, or create a feedback loop via <code>PIDTune</code> for my system?</p>
<p>Thank you for the help!</p>
<p>There is a similar example with a dcmotor within the documentation for <code>PIDTune</code> for reference which works fine (albeit a different tfm):</p>
<pre><code>dcMotor = TransferFunctionModel[Unevaluated[{{k/(s ((j s + b) (l s + r) + k^2))}}], s, SamplingPeriod ->None, SystemsModelLabels -> {{None}, {None}}] /. pars;
PIDTune[dcMotor, "PID", "PIDData"]
</code></pre>
<p><strong>Update</strong></p>
<p>As per M.K.'s suggestion, I have changed the ssm slightly, or rather rewritten it to come directly to the equation of motion for the angular velocity omega, instead of the motor's angle phi. This change simplifies the ssm and allows <code>PIDTune</code> to come up with a solution.</p>
<p>As a small explanation, the ODE is derived via <a href="http://www.site.uottawa.ca/~rhabash/StateSpaceModelBLDC.pdf" rel="nofollow noreferrer">equation 6 of this paper</a> as a simplified motor model for control via the amperage u[t]. It is a relatively 'standard' equation and can be found in many papers. J and R were found via nonlinear fitting of the motor driven at different amperages. As such, the model params J, T, R are quite accurate. </p>
<pre><code>ssmnew = StateSpaceModel[\[ScriptCapitalJ] \[Omega]'[t] + \[ScriptCapitalR] \[Omega][t] == \[ScriptCapitalT] u[t], {{\[Omega][t], 0}}, {{u[t]}}, {\[Omega][t]}, t]
control = PIDTune[ssmnew /. params, {"PID"}]
loop = SystemsModelFeedbackConnect[ssmnew, control] /. params
test1 = OutputResponse[loop, UnitStep[t - 4], {t, 0, 12}]
</code></pre>
<p>or </p>
<pre><code> test2 = OutputResponse[control /. params, UnitStep[t - 3], {t, 0, 10}]
</code></pre>
<p>Unfortunately at this point, I am now getting either new errors, or a response that is completely wrong, using inputs of <code>UnitStep</code> or just <code>1</code></p>
<pre><code>NDSolve::ndsz: At t == 4.000000000000114`, step size is effectively zero; singularity or stiff system suspected.
</code></pre>
<p>or </p>
<pre><code>NDSolve::irfail: Unable to reduce the index of the system to 0 or 1.
</code></pre>
<p><a href="https://i.stack.imgur.com/Chmos.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Chmos.png" alt="plot4"></a></p>
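<p>As a sanity check of the first-order model itself (a pure-Python sketch, independent of Mathematica; it only uses the fitted values from <code>params</code> above): for a unit input the analytic step response is $\omega(t)=\frac{T}{R}\left(1-e^{-Rt/J}\right)$, which explicit Euler integration should reproduce.</p>

```python
import math

# First-order motor model  J w'(t) + R w(t) = T u(t)  with u = 1 (unit input),
# using the fitted parameter values from `params` above.
J, R, T = 4.63e-5, 1e-5, 0.0335

def omega_analytic(t):
    return (T / R) * (1 - math.exp(-R * t / J))

def omega_euler(t_end, dt=1e-3):
    # Explicit Euler integration of w' = (T - R w) / J
    w, t = 0.0, 0.0
    while t < t_end - 1e-12:
        w += dt * (T - R * w) / J
        t += dt
    return w

w12 = omega_analytic(12.0)
w12_num = omega_euler(12.0)
```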
| Suba Thomas | 5,998 | <p>The first thing is that I find it odd that your model has current as the input and speed as the output? Typically, it's <a href="https://reference.wolfram.com/language/MicrocontrollerKit/workflow/MotorSpeedControl.html" rel="nofollow noreferrer">voltage to speed</a>, and also <a href="https://reference.wolfram.com/language/ref/PIDTune.html#1220886867" rel="nofollow noreferrer">voltage to position</a>.</p>
<p>However, the dominant pole approach seems to work for your model.</p>
<pre><code>pid = PIDTune[ssm /. params, {Automatic, "DominantPole"}, "PIDData"];
pid["Feedback"]
</code></pre>
<blockquote>
<p><a href="https://i.stack.imgur.com/fN2dx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fN2dx.png" alt="enter image description here"></a></p>
</blockquote>
<pre><code>OutputResponse[pid["ReferenceOutput"], UnitStep[t] - UnitStep[t - 4], {t,
0, 10}];
Plot[%, {t, 0, 10}]
</code></pre>
<blockquote>
<p><a href="https://i.stack.imgur.com/ANBE5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ANBE5.png" alt="enter image description here"></a></p>
</blockquote>
|
789,407 | <p>If the roots of the equation $$ax^2-bx+c=0$$ lie in the interval $(0,1)$, find the minimum possible value of $abc$. </p>
<p><strong>Edit:</strong> I forgot to mention in the question that $a$, $b$, and $c$ are natural numbers. Sorry for the inconvenience.<br>
<strong>Edit 2:</strong> As Hagen von Eitzen pointed out, double roots are not allowed; I forgot to mention that too. Extremely sorry :(</p>
<blockquote>
<p>I tried to use $D > 0$, where $D$ is the discriminant but I can't further analyze in terms of the coefficients. Thanks in advance!</p>
</blockquote>
| apt1002 | 106,285 | <p>The answer is $a=4$, $b=4$, $c=1$, giving $x = \frac12$ (twice), and a product $abc=16$. Exhaustive search through all $1 \leq a,b,c \leq 16$ gave no better answer.</p>
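<p>The exhaustive search described above can be reproduced with a short script (a sketch, not part of the original answer; note that, as in this answer, the double root $x=\frac12$ is allowed):</p>

```python
import math

# Brute force over natural numbers 1 <= a, b, c <= 16: both roots of
# a x^2 - b x + c = 0 must be real and lie in the open interval (0, 1).
def roots_in_unit_interval(a, b, c):
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    s = math.sqrt(disc)
    r1, r2 = (b - s) / (2 * a), (b + s) / (2 * a)
    return 0 < r1 < 1 and 0 < r2 < 1

best = min((a * b * c, a, b, c)
           for a in range(1, 17) for b in range(1, 17) for c in range(1, 17)
           if roots_in_unit_interval(a, b, c))
```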
|
2,877,085 | <p>I think that the Abel or Dirichlet test could be used, but I have no idea how.</p>
<p>$$ \sum_{n=1}^{\infty} (-1)^n\frac{3n-2}{n+1}\frac{1}{n^{1/2}} .$$</p>
| marty cohen | 13,079 | <p>Write the fraction as
$3-5/(n+1)$.</p>
<p>The first resulting sum, $\sum (-1)^n \frac{3}{\sqrt n}$, converges by the Leibniz test,
and the second, $\sum (-1)^n \frac{5}{(n+1)\sqrt n}$, converges absolutely since its terms are $O(n^{-3/2})$.</p>
|
1,392,205 | <p>The equation of line $A$ is $3x + 6y - 1 = 0$. Give the equation of a line that passes through the point $(5,1)$ that is</p>
<ol>
<li><p>Perpendicular to line $A$.</p></li>
<li><p>Parallel to line $A$.</p></li>
</ol>
<p>Attempting to find the parallel,</p>
<p>I tried $$y = -\frac{1}{2}x + \frac{1}{6}$$</p>
<p>$$y - (1) = -\frac{1}{2}(x-5)$$</p>
<p>$$Y = -\frac{1}{2}x - \frac{1}{10} - \frac{1}{10}$$</p>
<p>$$y = -\frac{1}{2}x$$</p>
| heropup | 118,193 | <p>You have a problem with your geometric PMF: the sum from $x = 0$ to $\infty$ is not equal to $1$. As such, you must write either</p>
<p>$$\Pr[X = x] = (1/4)^x (3/4), \quad x = 0, 1, 2, \ldots,$$ or $$\Pr[X = x] = (1/4)^{x-1} (3/4), \quad x = 1, 2, 3, \ldots.$$ Which one you mean, I cannot tell, and because the supports are different, the resulting probability will be very different. <strong>I will assume the latter.</strong></p>
<p>That said....</p>
<hr>
<p>When you can get the exact distribution of the sum, why use an approximation to get the probability, especially when the number of terms is tractable?</p>
<p>A geometric distribution counts the random number of trials in a series of independent Bernoulli trials with probability of "success" $p$ until the first success is observed. In your case, the probability of success is $p = 3/4$ and we are counting the total number of trials, are observed, including the success. The PMF is $$\Pr[X = x] = (1-p)^{x-1} p, \quad x = 1, 2, 3, \ldots.$$</p>
<p>The negative binomial distribution counts the random number of trials in a series of independent Bernoulli trials with probability of success $p$ until the $r^{\rm th}$ success is observed, for $r \ge 1$. When $r = 1$, we get a geometric distribution. Under this definition, we see that the sum of $r$ IID geometric variables $$S_r = X_1 + X_2 + \cdots + X_r$$ is negative binomial with parameters $p$ and $r$, where $p$ is inherited from the underlying geometric distribution for the individual $X_i$s.</p>
<p>It is not difficult to reason that $$\Pr[S_r = x] = \binom{x-1}{r-1} (1-p)^{x-r} p^r, \quad x = r, r+1, r+2, \ldots.$$ This is because in any sequence of $x$ trials such that the $r^{\rm th}$ success is observed on the final trial, there are $\binom{x-1}{r-1}$ ways to choose which of the $x-1$ trials are counted among the $r-1$ previous successes.</p>
<p>It follows that in your case, $r = 36$, and $$\Pr[46 \le S_{36} \le 49] = \sum_{x=46}^{49} \binom{x-1}{35} (1/4)^{x-36} (3/4)^{36},$$ a sum requiring only four terms.</p>
<hr>
<p>By comparison, using a normal approximation with continuity correction, the mean is $\mu = r/p = 48$, and standard deviation is $\sigma = \sqrt{r(1-p)/p^2} = 4$; we find $$\Pr[46 \le S_{36} \le 49] \approx \Pr\left[\frac{46 - 48 - 0.5}{4} \le \frac{S_{36} - \mu}{\sigma} \le \frac{49 - 48 + 0.5}{4}\right] \approx \Pr[-0.625 \le Z \le 0.375] \approx 0.380184.$$ This deviates from the precise probability above by about $0.008$.</p>
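<p>The exact four-term sum and the normal approximation can be compared directly (a sketch, not part of the original answer):</p>

```python
from math import comb, sqrt, erf

# Exact negative binomial probability, r = 36, p = 3/4:
# Pr[46 <= S_36 <= 49] = sum_{x=46}^{49} C(x-1, 35) (1/4)^(x-36) (3/4)^36
p, r = 0.75, 36
exact = sum(comb(x - 1, r - 1) * (1 - p) ** (x - r) * p ** r
            for x in range(46, 50))

# Normal approximation with continuity correction (mu = 48, sigma = 4).
Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
approx = Phi(0.375) - Phi(-0.625)
```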
|
587,077 | <p>Given any prime $p$. Prove that $(p-1)! \equiv -1 \pmod p$.</p>
<p>How to prove this?</p>
| Akshaj Kadaveru | 100,840 | <p>Consider the set of residues $\{1, 2, 3, \cdots , p-1\}$, and consider an arbitrary element $\alpha$. </p>
<p><strong>Lemma 1: There exist $\gamma$ such that $\alpha \gamma \equiv 1 \pmod p$</strong></p>
<p>Proof: Consider $$1, p+1, 2p+1, 3p+1, \cdots , (\alpha-1)p + 1$$ If two are equivalent modulo $\alpha$, we would have $\alpha|\beta p$ with $\beta$ the difference between two integers among $0, 1, \cdots , \alpha - 1$. Since $\alpha < p$, $\gcd(\alpha, p) = 1$, so we must have $\alpha | \beta$, impossible because $\beta < \alpha$. Therefore, all are distinct modulo $\alpha$. Since there are $\alpha$ numbers, one must be divisible by $\alpha$. We have $\alpha | \delta p + 1$. Take $\gamma = \dfrac{\delta p + 1}{\alpha}$, and we have $\alpha \gamma \equiv 1 \pmod{p}$ as desired.</p>
<p><strong>Lemma 2: There exists exactly one $\gamma$ between $1$ and $p-1$ such that $\alpha \gamma \equiv 1 \pmod p$</strong></p>
<p>Proof: We know there exists at least one. Suppose $x$ and $y$ satisfied $x\alpha \equiv 1 \pmod{p}$ and $y\alpha \equiv 1 \pmod{p}$, with both between $1$ and $p-1$. Therefore, we would have $(x-y)\alpha \equiv 0 \pmod{p}$. Since $\alpha$ is less than $p$, we have $x \equiv y \pmod{p}$, impossible given that $x,y \in [1, p-1]$.</p>
<p><strong>Remark</strong>: We denote this $\gamma$ as $\alpha^{-1}$ or the <em>inverse</em> of $\alpha$</p>
<p><strong>Main Proof</strong></p>
<p>If $\alpha = \alpha^{-1}$, we have $\alpha^2 \equiv 1 \pmod p$ or $(\alpha + 1)(\alpha - 1) \equiv 0 \pmod{p}$. Therefore, at least one must be divisible by $p$, so $\alpha \equiv \pm 1 \pmod{p}$. All other values of $\alpha$ have inverses not equal to themselves.</p>
<p>When calculating $(p-1)! = \displaystyle\prod_{i=1}^{p-1} i$, pair each residue other than $1$ and $p-1$ with their inverses. By definition, they cancel to result in $1$. We are left with $$ (p-1)! \equiv 1 \cdot (p-1) \equiv 1 \cdot (-1) = -1 \pmod{p}$$ which is our result</p>
<p><strong>Remark:</strong> this is called <em>Wilson's Theorem</em>.</p>
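<p>A quick computational check of Wilson's theorem (a sketch, not part of the original proof):</p>

```python
from math import factorial

# (p-1)! ≡ -1 (mod p) for primes; for composite n > 4, (n-1)! ≡ 0 (mod n),
# so the congruence fails there.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
wilson_holds = all(factorial(p - 1) % p == p - 1 for p in primes)
composite_fails = all(factorial(n - 1) % n != n - 1 for n in (6, 8, 9, 10, 12))
```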
|
257,978 | <p>Is there any non-monoid ring which has no maximal ideal?</p>
<p>We know that every commutative ring with identity has at least one maximal ideal; this is proved as a very easy theorem in Advanced Algebra 1 when studying modules.</p>
<p>We say a ring $R$ is a monoid (i.e. unital) if it has a multiplicative identity element; that is, if we denote this element by $1_{R}$, we have: $\forall r\in R;\: r.1_{R}=1_{R}.r=r$</p>
| Julian Kuelshammer | 15,416 | <p>Not every non-unital ring (or rng) has a maximal ideal. For example take $(\mathbb{Q},+)$ with trivial multiplication, i.e. $xy=0$ for all $x,y\in \mathbb{Q}$, then a maximal ideal is nothing more than a maximal subgroup. See <a href="https://math.stackexchange.com/questions/234995/mathbbq-has-no-maximal-subgroup">this question</a> why such a group does not exist.</p>
|
3,173,242 | <p><strong>Context:</strong></p>
<p>In the context of circuit theory and graph theory, suppose we have a graph <span class="math-container">$G,$</span> then <a href="https://en.wikipedia.org/wiki/Laplacian_matrix" rel="nofollow noreferrer">the Laplacian (Kirchhoff) matrix</a> <span class="math-container">$L$</span> is defined as follows: </p>
<p><span class="math-container">$$
L = D-A \tag{1}
$$</span>
where <span class="math-container">$D$</span> is the degree matrix and <span class="math-container">$A$</span> the adjacency matrix. Alternatively, in terms of the incidence matrix <span class="math-container">$M$</span> it can be also expressed as:</p>
<p><span class="math-container">$$
L = M^T M \tag{2}
$$</span></p>
<p>I am interested in the case where the graph <span class="math-container">$G$</span> is connected. Then if we partition <span class="math-container">$G$</span> into connected subgraphs that are themselves connected, the Laplacian can then be represented as a block matrix. Let's assume we partition the graph into two parts, the boundary nodes <span class="math-container">$B$</span> and the connection nodes <span class="math-container">$C.$</span> Then <span class="math-container">$|V(G)|=|V(G_B)|+|V(G_C)|.$</span> The corresponding Laplacian matrix can be put in the form of the following block matrix:</p>
<p><span class="math-container">$$
L = \begin{bmatrix}
L_{BB} & L_{BC} \\
L_{CB} & L_{CC}
\end{bmatrix}
\tag{3}
$$</span></p>
<p>One often is interested in reducing the size of the network/graph, using a scheme such as the Kron reduction, which relies on taking the <a href="https://en.wikipedia.org/wiki/Schur_complement" rel="nofollow noreferrer">Schur complement</a> (Laplacian matrices are <a href="https://en.wikipedia.org/wiki/M-matrix" rel="nofollow noreferrer">M-Matrices</a>) of <span class="math-container">$L$</span> with respect to one of its blocks. But that is only possible if the submatrix of the chosen block is invertible, and in general this is assumed to follow from the connectedness of the graph.</p>
<p><strong>Questions:</strong></p>
<ol>
<li>Does the fact that <span class="math-container">$G$</span> and its subcomponents (<span class="math-container">$B$</span> and <span class="math-container">$C$</span>) are connected ensure that the submatrices of the block representation of the Laplacian <span class="math-container">$L$</span> are invertible? (does this follow from a known theorem?) And therefore the Schur complements can be safely taken. </li>
<li>In relation to connected graphs and their corresponding incidence matrices <span class="math-container">$M$</span> [*], is there a simple way to see that the kernel of <span class="math-container">$M^T$</span> is given by <span class="math-container">$\text{ker } M^T=\text{span } \mathbb{1}$</span>?</li>
</ol>
<p>[*]: The incidence matrix <span class="math-container">$M$</span> is usually defined with each column corresponding to an edge of <span class="math-container">$G,$</span> where then each column contains exactly one <span class="math-container">$1$</span> at the row corresponding to the tail-node of the edge and a <span class="math-container">$-1$</span> at the row corresponding to the head-node of the edge.</p>
| Misha Lavrov | 383,078 | <p>For your second question, if we think of <span class="math-container">$M^{\mathsf T}$</span> as a linear transformation, it takes an element of <span class="math-container">$\mathbb R^V$</span> (an assignment of a real scalar to every vertex) to an element of <span class="math-container">$\mathbb R^E$</span> (an assignment of a real scalar to every edge) by giving every edge the difference of the values on its endpoints.</p>
<p>So <span class="math-container">$\ker M^{\mathsf T}$</span> consists of all elements of <span class="math-container">$\mathbb R^V$</span> such that for every edge, the difference of values on its endpoints is <span class="math-container">$0$</span>: the values on the endpoints are equal.</p>
<p>As a result, once we pick a value to put on one vertex, if we want to get an element of <span class="math-container">$\ker M^{\mathsf T}$</span>, that value propagates to every other vertex in the same connected component. (All neighbors of the starting vertex must have the same value, and then all of their neighbors must have that value, and so on.) When the graph is connected, this means that each element of <span class="math-container">$\ker M^{\mathsf T}$</span> must be a multiple of <span class="math-container">$\mathbf 1_V$</span>.</p>
<p>(In general, the same argument tells us that <span class="math-container">$\ker M^{\mathsf T}$</span> is generated by elements of <span class="math-container">$\mathbb R^V$</span> which are <span class="math-container">$1$</span> on a connected component of the graph, and <span class="math-container">$0$</span> elsewhere.)</p>
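<p>This argument can be checked numerically on a small example (a path graph, chosen just for illustration): the all-ones vector is annihilated by <span class="math-container">$M^{\mathsf T}$</span>, and since the rank of <span class="math-container">$M^{\mathsf T}$</span> is <span class="math-container">$|V|-1$</span> for a connected graph, its kernel is exactly the span of <span class="math-container">$\mathbf 1$</span>:</p>

```python
import numpy as np

# Oriented incidence matrix of the path 0-1-2-3:
# rows = vertices, columns = edges, +1 at the tail and -1 at the head.
M = np.array([[ 1,  0,  0],
              [-1,  1,  0],
              [ 0, -1,  1],
              [ 0,  0, -1]], float)

print(M.T @ np.ones(4))              # [0, 0, 0]: the all-ones vector is in ker M^T
print(np.linalg.matrix_rank(M.T))    # 3 = |V| - 1, so ker M^T is one-dimensional
```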
|
4,549,070 | <p>How can I prove this without using Stirling's formula?</p>
<p><span class="math-container">$${n\choose an} \le 2^{nH(a)}$$</span>
<span class="math-container">$$H(a) := -a\log_2a -(1-a)\log_2(1-a)$$</span></p>
| Qiaochu Yuan | 232 | <p>Let <span class="math-container">$X = \text{Bin}(n, \frac{1}{2})$</span> be a binomial random variable with <span class="math-container">$p = \frac{1}{2}$</span>. The <a href="https://math.stackexchange.com/a/4546041/232">Chernoff bound</a> applied to <span class="math-container">$X$</span> gives that for <span class="math-container">$a \ge \frac{1}{2}$</span> we have</p>
<p><span class="math-container">$$\mathbb{P}(X \ge an) \le 2^{-n KL(a, \frac{1}{2})}$$</span></p>
<p>where</p>
<p><span class="math-container">$$\begin{align*} -KL \left( a, \frac{1}{2} \right) &= - a \log_2 \frac{a}{\frac{1}{2}} - (1 - a) \log_2 \frac{1-a}{\frac{1}{2}} \\
&= H(a) + \log_2 \frac{1}{2} = H(a) - 1 \end{align*}$$</span></p>
<p>which gives</p>
<p><span class="math-container">$$\mathbb{P}(X \ge an) \le 2^{n H(a) - n}$$</span></p>
<p>and similarly if <span class="math-container">$a \le \frac{1}{2}$</span> we get</p>
<p><span class="math-container">$$\mathbb{P}(X \le an) \le 2^{n H(1-a) - n} = 2^{n H(a) - n}.$$</span></p>
<p>Since <span class="math-container">$\mathbb{P}(X \ge an)$</span> is <span class="math-container">$2^{-n}$</span> times a sum of binomial coefficients starting at <span class="math-container">${n \choose an}$</span>, and similarly for <span class="math-container">$\mathbb{P}(X \le an)$</span>, we get a bound slightly stronger than the desired bound by multiplying by <span class="math-container">$2^n$</span>, although only for values of <span class="math-container">$a$</span> such that <span class="math-container">$an$</span> is an integer.</p>
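<p>A quick brute-force check of the resulting inequality <span class="math-container">${n\choose an} \le 2^{nH(a)}$</span> (illustration only, for one modest value of <span class="math-container">$n$</span>):</p>

```python
from math import comb, log2

def H(a):
    """Binary entropy in bits."""
    return -a*log2(a) - (1 - a)*log2(1 - a)

n = 100
for k in range(1, n):                # k = a*n with a = k/n in (0, 1)
    assert comb(n, k) <= 2**(n*H(k/n))
print("bound verified for n =", n)
```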
|
4,364,686 | <p>I have a point in a 3D coordinate system 1 (CS1). There can be two situations: the point is constant or the point is moving along a straight line from one known position to another at constant speed.</p>
<p>The CS1 is rotating in another (static) 3D coordinate system (CS2). The rotations of CS1 are known, i.e. the starting and ending angles are known, and the angular speeds are constant, so we can get a precise rotation matrix at any moment of time.</p>
<p>I need to find the length of the point's trajectory in the CS2.</p>
<p>In the simplest case, when the point isn't moving in CS1 and CS1 is rotating around one axis of CS2, the trajectory is a simple arc. In more complex cases, my current solution is to find a few points along the way (given the point's position in CS1 and the rotation angles of CS1 in CS2) and interpolate them with a cubic spline, then take the length of the spline.</p>
<p>Is there a more precise and/or straightforward way to find the trajectory of the point in CS2? Thanks.</p>
| bubba | 31,744 | <p>I’d recommend that you just calculate a large number of points and then interpolate them with a polyline. If you need more accuracy, use more points. The nice thing about polylines is that their arclengths are very easy to calculate.</p>
<p>Your idea of using a (cubic) spline won’t help very much, because computing the length of a cubic spline requires numerical integration techniques.</p>
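<p>For instance, in the simplest case mentioned in the question (a fixed point in CS1, rotation about one axis) the polyline estimate converges quickly to the exact arc length <span class="math-container">$2\pi r$</span> (a made-up numerical illustration):</p>

```python
import numpy as np

# Point fixed in CS1 at distance r from the rotation axis; one full turn
# about the z-axis of CS2, so the exact trajectory length is 2*pi*r.
r = 2.0
N = 10_000                               # number of polyline segments
t = np.linspace(0.0, 2*np.pi, N + 1)
pts = np.column_stack([r*np.cos(t), r*np.sin(t), np.zeros_like(t)])

length = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
print(length, 2*np.pi*r)                 # agree to about 1e-7
```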
|
3,661,474 | <p><span class="math-container">$ h:R^{N+1} \to [0 , \infty)$</span> , <span class="math-container">$ h $</span> is measurable</p>
<p><span class="math-container">$ g:R^{N+1} \to [0 , \infty)$</span> , <span class="math-container">$ g $</span> is measurable</p>
<p><span class="math-container">$x,y \in R^N$</span></p>
<p><span class="math-container">$$h (x, x^2) h (y, y^2)= g (x+y, x^2+y^2)$$</span></p>
<p>Where <span class="math-container">$x^2$</span> is the dot product <span class="math-container">$x.x=|x|^2$</span></p>
<p>(1) Can it be shown that <span class="math-container">$h(0,0) \neq 0$</span>?</p>
<p>(2) What is the solution of <strong>1</strong> assuming only that <span class="math-container">$h$</span> is measurable?</p>
<p><strong>Comment</strong> :</p>
<p>I was only able to show <span class="math-container">$h(x,x^2)=Ae^{b.x+cx^2}$</span> under 2 conditions:</p>
<ol>
<li><span class="math-container">$h$</span> is finite and measurable</li>
<li><span class="math-container">$h(x,x^2) >0 \text{ whenever $|x-a|^2<r^2$}$</span></li>
</ol>
<p>where <span class="math-container">$b,a \in R^N, r>0,c \in R$</span> and <span class="math-container">$A=h(0,0)$</span></p>
<p>clearly <span class="math-container">$h(a,a^2) > 0$</span></p>
<p><span class="math-container">$h(x+a,|x+a|^2)h(a,a^2)=g(x+2a,|x+a|^2+a^2)$</span></p>
<p><span class="math-container">$h(y+a,|y+a|^2)h(a,a^2)=g(y+2a,|y+a|^2+a^2)$</span></p>
<p><span class="math-container">$h(x+a,|x+a|^2)h(y+a,|y+a|^2)h^2(a,a^2)=g(x+2a,|x+a|^2+a^2)g(y+2a,|y+a|^2+a^2)$</span></p>
<p><span class="math-container">$h^2(a,a^2)g(x+y+2a,|x+a|^2+|y+a|^2)=g(x+2a,|x+a|^2+a^2)g(y+2a,|y+a|^2+a^2)$</span></p>
<p>let <span class="math-container">$f(x,x^2)=\frac{g(x+2a,|x+a|^2+a^2)}{h(a,a^2)}=\frac{g(x+2a,x^2+2x\cdot a + 2a^2)}{h(a,a^2)}$</span></p>
<p>clearly <span class="math-container">$f(0,0)=\frac{g(2a,2a^2)}{h(a,a^2)}=h(a,a^2)>0$</span></p>
<p>Also clearly <span class="math-container">$f(x,x^2)f(y,y^2)=g(x+y+2a,x^2+y^2+2a^2+2a\cdot (x+y))$</span></p>
<p>Let <span class="math-container">$G(x+y,x^2+y^2)=g(x+y+2a,x^2+y^2+2a^2+2a\cdot (x+y))$</span></p>
<p><span class="math-container">$$\text{ Therefore $f(x,x^2)f(y,y^2)=G(x+y,x^2+y^2)$ } $$</span></p>
<p>Note <span class="math-container">$$f(x,x^2)=h(x+a,|x+a|^2)$$</span></p>
<p><span class="math-container">$$f (x, x^2) f (y, y^2)= G (x+y, x^2+y^2) \tag{1}$$</span></p>
<p>plugging <span class="math-container">$y=0$</span> in <strong>1</strong>: <span class="math-container">$f(x,x^2)f(0,0)=G(x,x^2)$</span>, also <span class="math-container">$f(y,y^2)f(0,0)=G(y,y^2)$</span></p>
<p>and multiply the two equations and use <strong>1</strong> to obtain <strong>2</strong></p>
<p><span class="math-container">$$f^2(0,0)G(x+y,x^2+y^2)=G(x,x^2)G(y,y^2) \tag{2}$$</span></p>
<p>use <strong>1</strong> to obtain this two equations</p>
<p><span class="math-container">$$G(0,2x^2)=f(x,x^2)f(-x,x^2) \tag{3}$$</span></p>
<p><span class="math-container">$$G(0,2y^2)=f(y,y^2)f(-y,y^2) \tag{4}$$</span></p>
<p>for <span class="math-container">$x.y=0 $</span>,
<span class="math-container">$$G(0,2x^2)G(0,2y^2)=f(x,x^2)f(-y,y^2)f(y,y^2)f(-x,x^2)=G(x-y,x^2+y^2)G(y-x,x^2+y^2)$$</span></p>
<p><span class="math-container">$$G(x-y,x^2+y^2)G(y-x,x^2+y^2)=f^2(0,0)f(x-y,x^2+y^2)f(y-x,x^2+y^2)=f^2(0,0)G(0,2x^2+2y^2)$$</span></p>
<p>So <span class="math-container">$G(0,2y^2)G(0,2x^2)=f^2(0,0)G(0,2x^2+2y^2) \tag{5}$</span></p>
<p>Therefore, plugging <span class="math-container">$x^2=y^2$</span> into the above, we get: <span class="math-container">$$G^2(0,2x^2)=f^2(0,0)G(0,4x^2)\tag{6}$$</span></p>
<p>Applying 6 recursively,</p>
<p><span class="math-container">$$G^{2^{n+1}}(0,\frac{y^2}{2^{n+1}})=f^{2n+2}(0,0)G(0,y^2) \text{ for every $n \in N$} \tag{7}$$</span></p>
<p>under condition 2, <span class="math-container">$f(0,0)\neq 0$</span>, then it can be shown <span class="math-container">$f>0$</span> everywhere as done below :</p>
<p>from <strong>1</strong> , <span class="math-container">$f^2(0,0)G(x+y,x^2+y^2)=G(x,x^2)G(y,y^2)$</span></p>
<p><span class="math-container">$f^2(0,0)G(0,2x^2)=G(x,x^2)G(-x,x^2)$</span></p>
<p>from <strong>7</strong></p>
<p><span class="math-container">$$f^2(0,0)G(0,2x^2)=\frac{f^2(0,0)G^{2^{n+1}}(0,\frac{2x^2}{2^{n+1}})}{f^{2n+2}(0,0)}$$</span></p>
<p><span class="math-container">$$\frac{f^2(0,0)G^{2^{n+1}}(0,\frac{2x^2}{2^{n+1}})}{f^{2n+2}(0,0)}=G(x,x^2)G(-x,x^2)$$</span></p>
<p>By condition 2 <span class="math-container">$\lim_{n \to \infty}G(0,\frac{2x^2}{2^{n+1}})>0$</span></p>
<p>for large <span class="math-container">$n$</span> the left hand side of the above equation is greater than zero, so <span class="math-container">$G(x,x^2)>0$</span> implying <span class="math-container">$f(x,x^2)>0$</span></p>
<p><span class="math-container">$logf(x,x^2)+logf(y,y^2)=logG(x+y,x^2+y^2)$</span> and can easily be converted into a Cauchy functional equation</p>
<p><span class="math-container">$f_1(x,x^2)=logf(x,x^2)-logf(0,0)$</span>, so <span class="math-container">$f_1(0,0)=0$</span></p>
<p><span class="math-container">$G_1(x,x^2)=logG(x,x^2)-2logf(0,0)=logG(x,x^2)-logG(0,0)$</span>, so <span class="math-container">$G_1(0,0)=0$</span></p>
<p><span class="math-container">$f_1(x,x^2)+f_1(y,y^2)=G_1(x+y,x^2+y^2)$</span></p>
<p>plugging <span class="math-container">$y=0$</span> into the above gives <span class="math-container">$f_1(x,x^2)=G_1(x,x^2)$</span></p>
<p>Now <span class="math-container">$f_1(x,x^2)+f_1(y,y^2)=f_1(x+y,x^2+y^2)$</span></p>
<p>let <span class="math-container">$n$</span> be the number of components of <span class="math-container">$x$</span>; for <span class="math-container">$i \in N$</span>, let the <span class="math-container">$i$</span>th components of <span class="math-container">$x,y$</span> be <span class="math-container">$x_i,y_i$</span> respectively</p>
<p>That is to say if <span class="math-container">$x= \langle a_1, a_2, \ldots, a_n \rangle $</span></p>
<p><span class="math-container">$x_i= \langle 0, \ldots, 0, a_i, 0, \ldots, 0\rangle $</span> (with <span class="math-container">$a_i$</span> in the <span class="math-container">$i$</span>th position),
and so on</p>
<p>for <span class="math-container">$x.y=0$</span>, <span class="math-container">$\sum_{i=1}^n x_iy_i=0$</span></p>
<p><span class="math-container">$f_1(x,x^2)=\sum_{i=1}^nf_1(x_i,x_i^2)$</span></p>
<p><span class="math-container">$f_1(y,y^2)=\sum_{i=1}^nf_1(y_i,y_i^2)$</span></p>
<p><span class="math-container">$f_1(x+y,x^2+y^2)=\sum_{i=1}^nf_1(x_i,x_i^2)+\sum_{i=1}^nf_1(y_i,y_i^2)$</span></p>
<p><span class="math-container">$f_1(\sum_{i=1}^n(x_i+y_i),\sum_{i=1}^n(x_i^2+y_i^2))=\sum_{i=1}^nf_1(x_i+y_i,x_i^2+y_i^2)$</span></p>
<p>let <span class="math-container">$u_i=x_i+y_i,v_i=x_i^2+y_i^2$</span></p>
<p><span class="math-container">$u_i,v_i$</span> can be taken as independent variables under the condition <span class="math-container">$u_i^2 \le 2v_i$</span></p>
<p><span class="math-container">$f_1(\sum_{i=1}^nu_i,\sum_{i=1}^nv_i)=\sum_{i=1}^nf_1(u_i,v_i)$</span></p>
<p>now we can swap two variables for example <span class="math-container">$v_1$</span> and <span class="math-container">$v_2$</span></p>
<p>we have <span class="math-container">$f_1(u_1,v_1)+f_1(u_2,v_2)=f_1(u_1,v_2)+f_1(u_2,v_1)$</span> provided that <span class="math-container">$u_1^2\le 2v_1,u_1^2 \le 2v_2,u_2^2\le 2v_1,u_2^2 \le 2v_2$</span></p>
<p>now taking <span class="math-container">$u_2=0$</span> and <span class="math-container">$v_2$</span> constant, i.e. <span class="math-container">$v_2=v_o$</span></p>
<p><span class="math-container">$f_1(u_1,v_1)=f_1(u_1,v_o)-f_1(0,v_o)+f_1(0,v_1)$</span></p>
<p>define <span class="math-container">$p(u_1)=f_1(u_1,v_o)-f_1(0,v_o)$</span>, <span class="math-container">$\rho(v_1)=f_1(0,v_1)$</span></p>
<p><span class="math-container">$f_1(u_1,v_1)=p(u_1)+f_1(0,v_1)$</span></p>
<p>now we can set <span class="math-container">$u_1=x_1,v_1=x_1^2$</span> because the inequality <span class="math-container">$u_1^2\le 2v_1$</span> is still satisfied</p>
<p>To get <span class="math-container">$f_1(x_1,x_1^2)=p(x_1)+\rho (x_1^2)$</span></p>
<p>Noting that <span class="math-container">$ G(0,2y^2)G(0,2x^2)=f^2(0,0)G(0,2x^2+2y^2)$</span> as derived above</p>
<p>it implies that for <span class="math-container">$d,c \ge 0,G(0,d)G(0,c)=f^2(0,0)G(0,d+c)$</span></p>
<p><span class="math-container">$logG(0,d)+logG(0,c)=2logf(0,0)+logG(0,d+c)$</span></p>
<p>Noting <span class="math-container">$logG(0,0)=2logf(0,0)$</span></p>
<p><span class="math-container">$G_1(0,d)=logG(0,d)-logG(0,0)$</span></p>
<p><span class="math-container">$G_1(0,d)+G_1(0,c)=G_1(0,d+c)$</span></p>
<p>but <span class="math-container">$f_1(0,d)=G_1(0,d)$</span></p>
<p>so <span class="math-container">$\rho (x_1^2)+\rho (y_1^2)=\rho (x_1^2+y_1^2)$</span>, which is a Cauchy functional equation with solution <span class="math-container">$\rho (x_1^2)=c_1x_1^2$</span></p>
<p>because <span class="math-container">$\rho (x^2)+\rho (y^2)=\rho (x^2+y^2)$</span>, <span class="math-container">$c_i=c$</span> for all <span class="math-container">$i \in N$</span></p>
<p><span class="math-container">$f_1(x_1,x_1^2)=p(x_1)+cx_1^2$</span></p>
<p><span class="math-container">$f_1(x_1+y_1,x_1^2+y_1^2)=f_1(x_1,x_1^2)+f_1(y_1,y_1^2)$</span></p>
<p><span class="math-container">$p(x_1+y_1)+\rho(x_1^2+y_1^2)=p(x_1)+p(y_1)+\rho(x_1^2)+\rho(y_1^2)$</span></p>
<p>This means <span class="math-container">$p(x_1)+p(y_1)=p(x_1+y_1)$</span>, which is also a Cauchy functional equation, with solution <span class="math-container">$p (x_1)=b_1x_1$</span></p>
<p>Therefore <span class="math-container">$f_1(x,y)=\sum_{i=1}^nb_ix_i+c\sum_{i=1}^nx_i^2=b.x+cx^2$</span></p>
<p>And <span class="math-container">$f(x,x^2)= Ae^{b.x+cx^2}$</span> where <span class="math-container">$A=f(0,0)$</span></p>
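<p>As a purely numerical sanity check (separate from the derivation above), the form <span class="math-container">$h(x,x^2)=Ae^{b.x+cx^2}$</span> does satisfy the functional equation, with the induced <span class="math-container">$g(s,q)=A^2e^{b.s+cq}$</span> (the constants below are made up for illustration):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A, c = 1.7, -0.3                 # illustrative constants
b = rng.normal(size=3)           # illustrative vector b in R^3

def h(x):                        # claimed form h(x, |x|^2) = A exp(b.x + c|x|^2)
    return A*np.exp(b @ x + c*(x @ x))

def g(s, q):                     # induced g(s, q) = A^2 exp(b.s + c q)
    return A*A*np.exp(b @ s + c*q)

for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    assert np.isclose(h(x)*h(y), g(x + y, x @ x + y @ y))
print("h(x,x^2) h(y,y^2) = g(x+y, x^2+y^2) on 100 random pairs")
```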
| ibnAbu | 334,224 | <p>Excluding the trivial case <span class="math-container">$h=0$</span> a.e., it must be that <span class="math-container">$h$</span> is positive on some set of positive measure.</p>
<p>The functional equation can be converted to</p>
<p><span class="math-container">$$f (x, x^2) f (y, y^2)= G (x+y, x^2+y^2) \text{with $f(0,0)>0$} \tag{1}$$</span></p>
<p>using equation 5 :<span class="math-container">$G(0,2y^2)G(0,2x^2)=f^2(0,0)G(0,2x^2+2y^2) \tag{5}$</span></p>
<p><span class="math-container">$G(0,d)G(0,c)=f^2(0,0)G(0,d+c) $</span>, where <span class="math-container">$d,c \ge 0$</span></p>
<p>according to equation above if <span class="math-container">$d>0$</span> and <span class="math-container">$G(0,d) >0$</span>, then <span class="math-container">$G(0,c)>0$</span> for all <span class="math-container">$c>0$</span></p>
<p>using equation 1, for <span class="math-container">$x.y=0$</span> we have <span class="math-container">$f(0,0)f(x+y, x^2+y^2)=f(x, x^2) f (y, y^2)$</span>;
similarly, for <span class="math-container">$s.z=0$</span> we have <span class="math-container">$f(0,0)f(s+z, s^2+z^2)=f(s, s^2) f (z, z^2)$</span></p>
<p>combining the two equations and using 1: <span class="math-container">$f(0,0)f(x+y+s+z, x^2+y^2+s^2+z^2)=f(x+s, x^2+s^2)f(y+z, y^2+z^2)$</span></p>
<p>now let <span class="math-container">$u=x+s,v=y+z,w=x^2+s^2,t=y^2+z^2$</span></p>
<p>we have <span class="math-container">$$f(0,0)f(u+v, w+t)=f(u, w)f(v,t)$$</span></p>
<p><span class="math-container">$u,v,w,t$</span> can be taken as independent variables under the condition:</p>
<p><span class="math-container">$u^2 \le 2w$</span> ,<span class="math-container">$v^2 \le 2t$</span></p>
<p>take <span class="math-container">$v=0$</span> , <span class="math-container">$t=\frac{x^2}{2}$</span>,<span class="math-container">$u=x$</span> , <span class="math-container">$w=\frac{x^2}{2}$</span></p>
<p>we have <span class="math-container">$$f(0,0)f(x, x^2)=f(x, \frac{x^2}{2})f(0,\frac{x^2}{2})$$</span></p>
<p>because there is a point <span class="math-container">$d\neq 0$</span> with <span class="math-container">$f(d, d^2)>0$</span>, <span class="math-container">$f(0,0)f(0,\frac{d^2}{2})=G(0,\frac{d^2}{2})>0$</span></p>
<p><span class="math-container">$f(x,x^2)f(-x,x^2)=G(0,2x^2)>0$</span> implies <span class="math-container">$f(x,x^2)>0$</span> everywhere</p>
<p>and equation <span class="math-container">$1$</span> can be converted to Cauchy functional equation as done in my comment</p>
|
83,246 | <p>Let H be a separable and infinite-dimensional Hilbert space and let B be the closed ball
of H having unit radius, whose center is at the origin h of H. Suppose one would like to
know how much of B can be "filled up" by any of its compact subsets-since B itself
(although closed and bounded) is not compact. Let E be the set of all positive real
numbers z for which there exists a compact subset C of B such that all points of B lie at
a distance from C (in the metric of H) which is not greater than z. The greatest lower bound of E would be a measure of this "filling up". My question is: what is this greatest
lower bound?
I believe that it is 1 but cannot prove it. Clearly 1 belongs to E since we can take for
C any compact subset of B that contains h. I can prove that no positive real number less
than one-half the square root of 2 belongs to E. But this is as far as I have been able to
get. If 1 is the right answer, it would show that no compact subset of B can "fill up" any
more of B than the set containing only the point h.</p>
| Pietro Majer | 6,101 | <p>What you are describing is exactly the ball measure of non-compactness of the closed bounded set $B$:
$$\alpha(B):=\inf \{r>0 \,:\, \exists F\subset B \text{ finite, s.t. } B\subset \cup_{x\in F} B(x,r) \}\, .$$
It can be viewed as the point-set distance $\inf_{F\in \mathcal{F}}d(B,F)$ between $B$ (as a point of the metric space $\mathcal{H}$ of all closed bounded subsets of $H$, endowed with the Hausdorff distance) and the set $\mathcal{F}\subset \mathcal{H}$ of all finite subsets of $H$. Since $\mathcal{F}$ is dense, w.r.t. the Hausdorff distance, in the set $\mathcal{K}\subset \mathcal{H}$ of all compact subsets, one may equivalently use $\mathcal{K}$ instead of $\mathcal{F}$. </p>
<p>That the ball measure of the unit ball in an infinite dimensional Banach space is 1 and not less follows immediately from a simple rescaling argument. If you can cover $B$ with $N$ balls of radius $\theta < 1$, then you can also cover it by $N^2$ balls of radius $\theta^2$, and by $N^k$ balls of radius $\theta^k$, for any $k$, which implies that $B$ is totally bounded, hence relatively compact, a contradiction.</p>
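<p>(To see where the asker's lower bound of one-half the square root of 2 comes from: the orthonormal basis vectors <span class="math-container">$e_1,e_2,\dots$</span> of <span class="math-container">$H$</span> lie in <span class="math-container">$B$</span> and are pairwise at distance <span class="math-container">$\sqrt2$</span>, so a ball of radius <span class="math-container">$r<\sqrt2/2$</span> contains at most one of them, and no compact set can stay within such an <span class="math-container">$r$</span> of all of them. A finite-dimensional illustration of the <span class="math-container">$\sqrt2$</span>-separation:)</p>

```python
import numpy as np

# The first d orthonormal basis vectors of H, viewed inside R^d.
d = 50
E = np.eye(d)
D = np.linalg.norm(E[:, None, :] - E[None, :, :], axis=2)   # pairwise distances
off_diag = D[~np.eye(d, dtype=bool)]
print(off_diag.min(), off_diag.max())   # both sqrt(2), about 1.4142
```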
|
3,528,237 | <p>I am just being introduced to quantifiers in logic and my lecturer was going through the following two statements. The question is to determine which, if any, is/are true.</p>
<ol>
<li><span class="math-container">$(\forall x \in \mathbb{R})(\exists y \in \mathbb{R})[x + y = 0]$</span></li>
<li><span class="math-container">$(\exists x \in \mathbb{R})(\forall y \in \mathbb{R})[x + y = 0]$</span></li>
</ol>
<p>Clearly, the first statement is true; we can just let <span class="math-container">$y = -x$</span>. However, my lecturer says that the second statement is false. I cannot wrap my head around why that is the case. If we can take <span class="math-container">$y = -x$</span> in the first, why can we not do the same for the second i.e. let <span class="math-container">$x = -y$</span>? In fact, how is the second statement any different from the first?</p>
<p>Any intuitive explanations/examples would be greatly appreciated!</p>
| Community | -1 | <p>If you translate the sentences into English, you can see the differences between them. </p>
<p>The first sentence says "Every real number has an additive inverse." That is, for every real number <span class="math-container">$x$</span>, there is a real number <span class="math-container">$y$</span> such that <span class="math-container">$x+y=0$</span>. As you say, this is clearly true since we can take <span class="math-container">$y:=-x$</span>.</p>
<p>By contrast, the second sentence says "There is a real number such that all real numbers are its additive inverse." That is, first we choose the value of <span class="math-container">$x$</span>, and then every value of <span class="math-container">$y$</span> must then satisfy <span class="math-container">$x+y=0$</span>. So your lecturer is right; once we specify the value of <span class="math-container">$x$</span> (which we have to do first since it is the first quantified variable in the expression), there is only one additive inverse of <span class="math-container">$x$</span> and so any other real number would not satisfy the equation.</p>
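<p>The difference can also be checked mechanically on a small finite domain (an illustrative computation about a finite analogue, not a proof about the reals):</p>

```python
# Compare the two statements over the finite domain D = {-2, -1, 0, 1, 2}.
D = range(-2, 3)

stmt1 = all(any(x + y == 0 for y in D) for x in D)   # (forall x)(exists y)[x + y = 0]
stmt2 = any(all(x + y == 0 for y in D) for x in D)   # (exists x)(forall y)[x + y = 0]

print(stmt1, stmt2)   # True False
```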
|
4,386,952 | <p>Informally, mathematicians treat the integers as a subset of the rational numbers.</p>
<p>But according to the standard, formal construction of <span class="math-container">$\mathbb{Q}$</span>, <span class="math-container">$\mathbb{Q}$</span> is a set of equivalence classes over <span class="math-container">$\mathbb{Z} \times \mathbb{Z}^∗$</span>. So <span class="math-container">$0_Z \neq 0_Q$</span>.</p>
<p>When mathematicians freely convert between <span class="math-container">$\mathbb{Z}$</span> and <span class="math-container">$\mathbb{Q}$</span>, they are really making use of some canonical embedding <span class="math-container">$f : \mathbb{Z} \rightarrow \mathbb{Q}$</span> which maps <span class="math-container">$x$</span> to the equivalence class containing <span class="math-container">$(x, 1)$</span>.</p>
<p>Mathematicians implicitly use these sorts of embeddings all of the time, and do not spend their time fiddling with the minutia. People do not care if their "integer" <span class="math-container">$x$</span> is in <span class="math-container">$\mathbb{Z}$</span> or in <span class="math-container">$f[\mathbb{Z}]$</span>, and interchange between the two as-needed. For all intents and purposes these two sets are "equivalent".</p>
<p>Do any theorem provers handle these sorts of relationships gracefully? Are there systems/languages which support these intuitive equivalences and don't require humans to manually fiddle with and keep track of embeddings?</p>
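<p>As a down-to-earth analogue from ordinary programming (not a proof assistant), Python's numeric tower hides exactly this kind of canonical embedding: integers and <code>Fraction</code>s are distinct types, yet they compare and combine transparently. Proof assistants typically address the same issue with automatic coercion mechanisms:</p>

```python
from fractions import Fraction

x = 3                    # "x in Z"
q = Fraction(3, 1)       # "f(x) in Q": the class of (3, 1)

# The embedding is invisible in ordinary use:
assert x == q
assert q + 2 == Fraction(5, 1)
assert Fraction(1, 2) + x == Fraction(7, 2)
print("Z embeds in Q transparently")
```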
| Lazy | 958,820 | <p>One approach to this would be: If we knew that <span class="math-container">$g$</span> was differentiable, then we would know from the product rule that <span class="math-container">$(fg)'(a) = f'(a)g(a) + f(a)g'(a) = f'(a)g(a)$</span>, by <span class="math-container">$f(a)=0$</span>. Since this does not depend on <span class="math-container">$g'$</span> anymore, we might expect that this is in fact the derivative of <span class="math-container">$fg$</span> when <span class="math-container">$g$</span> is only continuous.</p>
<p>As you've already seen, <span class="math-container">$f(x)g(x)-f(a)g(a) = f(x)g(x)$</span>, which is the same as <span class="math-container">$(f(x)-f(a))g(x)$</span>. Thus <span class="math-container">$(f(x)g(x)-f(a)g(a))/(x-a) = (f(x)-f(a))/(x-a) \cdot g(x)$</span>. But as <span class="math-container">$(f(x)-f(a))/(x-a)$</span> tends to <span class="math-container">$f'(a)$</span> and, by continuity, <span class="math-container">$g(x)$</span> tends to <span class="math-container">$g(a)$</span>, the product tends to <span class="math-container">$f'(a)g(a)$</span>.</p>
|
2,514,236 | <p>For example, the matrix could have finitely many rows and columns, but each row/column has uncountably many elements and you can do the standard matrix multiplication by taking care to match up the entries with corresponding pairs of real number indices. </p>
<p>Do such objects exist and has there been any work on them?</p>
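<p>(One standard way to make "uncountably many entries" precise is to index rows and columns by <span class="math-container">$[0,1]$</span> and view the matrix as a kernel <span class="math-container">$K(s,t)$</span>; the row-times-column sum then becomes an integral, <span class="math-container">$(K_1K_2)(s,u)=\int K_1(s,t)K_2(t,u)\,dt$</span>, which is the setting of integral operators. A discretized sketch, with made-up kernels chosen only for illustration:)</p>

```python
import numpy as np

# "Matrices" indexed by [0,1]x[0,1], i.e. kernels K(s,t); the product
# (K1 K2)(s,u) = integral over t of K1(s,t) K2(t,u) dt, approximated
# here by a midpoint rule on an n-point grid.
n = 2000
t = (np.arange(n) + 0.5) / n
dt = 1.0 / n

K1 = np.exp(-np.abs(t[:, None] - t[None, :]))   # K1(s,t) = exp(-|s - t|)
K2 = t[:, None] * t[None, :]                    # K2(t,u) = t*u

P = K1 @ K2 * dt                                # discretized kernel product

# Closed form for comparison: (K1 K2)(s,u) = u*(2s + e^(-s) - 2 e^(s-1)).
i, j = 1000, 400
s, u = t[i], t[j]
print(P[i, j], u*(2*s + np.exp(-s) - 2*np.exp(s - 1)))
```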
| rschwieb | 29,335 | <p>Infinite matrix rings are pretty interesting. I know several interesting examples.</p>
<p>The simplest is the ring of column-finite matrices over a field $F$, which is isomorphic to the ring of linear transformations of an infinite dimensional $F$ vector space. <a href="http://ringtheory.herokuapp.com/rings/ring/15/" rel="nofollow noreferrer">here are some of its properties</a>. There is also, of course, the ring of row-finite matrices, and the row-and-column finite matrix rings. You could also consider the matrices with finitely many nonzero entries, and generate the ring containing those and the identity matrix.</p>
<p>There is also a close-knit cluster of three examples due to Bergman that all live inside infinite matrix rings. Recently, K. C. O'Meara described an algebra "containing" these examples which makes explaining them a lot easier. <a href="http://ringtheory.herokuapp.com/expanded-details/omearas-matrix-algebra/" rel="nofollow noreferrer">Here is the description of that algebra</a>. You can find links to the Bergman examples at that link as well.</p>
<p>Personally, I'm curious about infinite upper-triangular matrix rings. I asked <a href="https://math.stackexchange.com/q/1372406/29335">a question about a ring like that</a> but haven't gotten any feedback on it.</p>
|
39,466 | <p>I could not solve this problem:</p>
<blockquote>
<p>Prove that for a non-Archimedian field $K$ with completion $L$, $$\left\{|x|\in\mathbb R \mid x\in K\right\} =\left\{|x|\in\mathbb R \mid x\in L\right\}$$</p>
</blockquote>
<p>I considered a Cauchy sequence in $K$ with norms having limit $l$, but I could not construct an element of $K$ with norm $l$ from the sequence.</p>
<p>Will anyone please show how to prove it?</p>
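<p>A sketch of the standard route (filling in the step the question asks about, using only the strong triangle inequality):</p>

```latex
\textbf{Lemma (ultrametric).} If $|a|\neq|b|$ then $|a+b|=\max(|a|,|b|)$.
Say $|a|>|b|$. Then $|a+b|\le\max(|a|,|b|)=|a|$, while
\[
  |a| = |(a+b)-b| \le \max(|a+b|,|b|),
\]
and since $|b|<|a|$ this forces $|a+b|=|a|$.

\textbf{Application.} Let $x_n\in K$ with $x_n\to x\in L$ and $x\neq 0$.
Choose $n$ with $|x-x_n|<|x|$. From $|x|\le\max(|x-x_n|,|x_n|)$ and
$|x-x_n|<|x|$ we get $|x_n|>|x-x_n|$, so the lemma applied to
$x=(x-x_n)+x_n$ gives
\[
  |x| = \max(|x-x_n|,\,|x_n|) = |x_n| .
\]
Hence every value $|x|$ with $x\in L$, $x\neq 0$, is already attained on
$K$; the value $0$ is attained by $0\in K$.
```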
| Eric Naslund | 6,075 | <p><strong>Hint:</strong> If the sequence $a_k$ is bounded, then there exists $M$ such that $|a_k|\leq M$ for all $k$.</p>
<p>Then, what can you say about $|\sum_{k=1}^\infty a_k x^k|$ and $\sum_{k=1}^\infty Mx^k$? What is the radius of convergence of the second one?</p>
|
102,427 | <p>I just coded a simple simulation module that looks at the evolution of a continuous trait in a haploid asexually reproducing population under density dependent competition in discrete time (i.e. non-overlapping generations, using recurrence equations). What I am interested in is finding out whether evolution would always favour selecting for increased intrinsic growth rates R, perhaps eventually pushing the population to go extinct (a scenario known as "evolutionary suicide"), or if instead there would be selection for restrained growth rates, e.g. due to the fact that more selfish lineages with higher intrinsic growth rates would more often move into the chaotic regime and go extinct faster.</p>
<p>The module I have takes as arguments the desired fitness function (e.g. the discrete logistic model), the initial trait values of the individuals in the 1st generation, the mutation rate, the standard deviation of the normally distributed perturbation applied to mutants, and the number of generations to run the simulation:</p>
<pre><code>EvolveHapl[fitnessfunc_, initpop_, mutrate_, stdev_, generations_] :=
Module[{ndist, traitvalues, currpopsize, fastPoisson, fitnessinds,
numberoffspring, nrmutants, rnoise, rndelem},
ndist = NormalDistribution[0, stdev] ;
traitvalues = Table[{}, {generations + 1}]; (*
list of lists containing ind trait values in each generation *)
traitvalues[[1]] = initpop;
currpopsize = Length[traitvalues[[1]]];
(* fast Poisson random number generator *)
fastPoisson =
Compile[{{\[Lambda], _Real}},
Module[{b = 1., i, a = Exp[-\[Lambda]]},
For[i = 0, b >= a, i++, b *= RandomReal[]];
i - 1], RuntimeAttributes -> {Listable},
Parallelization -> True];
Do[fitnessinds =
Table[fitnessfunc[traitvalues[[gen - 1]][[i]], currpopsize], {i,
1, currpopsize}]; (*
fitness of every individual in the population,
in mean number of offspring *)
numberoffspring = fastPoisson[fitnessinds]; (*
absolute number of offspring that every individual produces *)
traitvalues[[gen]] =
Flatten[Table[
Table[traitvalues[[gen - 1]][[i]], {j, 1,
numberoffspring[[i]]}], {i, 1, currpopsize}]]; (*
expected offspring trait values before mutation *)
currpopsize = Length[traitvalues[[gen]]]; (*
new population size *)
nrmutants =
RandomVariate[BinomialDistribution[currpopsize, mutrate]]; (*
nr of offspring that should mutate *)
rnoise = RandomReal[ndist, nrmutants]; (*
noise to be added to the trait values of the mutants *)
Do[rndelem = RandomInteger[{1, currpopsize}];
traitvalues[[gen]][[rndelem]] =
Max[traitvalues[[gen]][[rndelem]] + rnoise[[i]], 0];, {i, 1,
nrmutants}];, (* mutate trait values *)
{gen, 2, generations + 1}];
Return[traitvalues ]];
</code></pre>
<p>And to plot the resulting list of individual trait values I use</p>
<pre><code>PlotResult[traitvalues_] := Module[{},
generations = Length[traitvalues];
Print["Mean phenotype at the beginning : " <>
ToString[Mean[traitvalues[[1]]]]];
Print["Maximum mean phenotype at any generation : " <>
ToString[
Max[Table[
Mean[traitvalues[[i]]], {i, 1, generations}] /. {Mean[{}] ->
0}]]];
Print["Mean phenotype after " <> ToString[ngens] <>
" generations : " <>
ToString[
Mean[traitvalues[[generations]]] /. {Mean[{}] ->
"- (population extinct)"}]];
Print["Final population size : " <>
ToString[Length[traitvalues[[generations]]]]];
maxscaleN =
Max[Table[{Length[traitvalues[[i]]]}, {i, 1, generations}]];
minscaleN =
Min[Table[{Length[traitvalues[[i]]]}, {i, 1, generations}]];
maxscaleP = Max[Flatten[traitvalues]];
GraphicsRow[{Show[
ArrayPlot[
Table[BinCounts[
traitvalues[[i]], {0, maxscaleP + 0.5, 0.05}], {i, 1,
generations}]/
Table[Length[traitvalues[[i]]] + 0.00001, {i, 1, generations}],
DataRange -> {{0, maxscaleP + 0.5}, {1, generations}},
DataReversed -> True], Frame -> True,
FrameLabel -> {"Phenotype frequency", "Generation"},
FrameTicks -> True, AspectRatio -> 2],
ListPlot[
Table[{Length[traitvalues[[i]]], i}, {i, 1, generations}],
Joined -> True, Frame -> True,
FrameLabel -> {"Population size", "Generation"},
AspectRatio -> 2,
PlotRange -> {{Clip[minscaleN - 50, {0, \[Infinity]}],
maxscaleN + 50}, {0, generations}}]}]
]
</code></pre>
<p>Running it, however, is quite slow, e.g.</p>
<pre><code>psize = 300; ngens = 5000; mutrate = 0.01; stdev = 0.05; K = 2*psize;
f[R_, popsize_] := Max[(1 + R *(1 - popsize/K)), 0.00001];
First@AbsoluteTiming[
traitvalues =
EvolveHapl[ f, RandomReal[{2.5, 2.6}, psize], mutrate, stdev,
ngens];]
PlotResult[traitvalues]
204.02
</code></pre>
<p><a href="https://i.stack.imgur.com/iblgg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/iblgg.png" alt="enter image description here"></a></p>
<p>I was just wondering if there might be any obvious ways to speed up this routine in Mathematica. E.g. could I get rid of all my <code>Do[]</code> and <code>Table[]</code> loops somehow? Or could this whole routine, or parts of it, be compiled to speed it up? I've already replaced the Poisson number generation with a faster compiled version and used <code>Max[]</code> rather than <code>Clip[]</code> in my original post, but this hasn't improved speed as much as I had hoped... Or would it be better to go to C or C++ for this kind of simple simulation?</p>
| Simon Woods | 862 | <p>Here is a modified version of your code. On my PC it completes your example run in about 5 seconds.</p>
<p>I won't try to describe every change but will point out the major features. Some of the changes are stylistic rather than performance-based. This is not a criticism of your style but a reflection of the way I broke the original code down in order to understand it.</p>
<p><strong>A quick note on Map</strong></p>
<p>The most widespread change was to use <a href="http://reference.wolfram.com/language/ref/Map.html" rel="noreferrer"><code>Map</code></a> in a lot of places where you had used <code>Table</code>. Generally in <em>Mathematica</em> whenever you find yourself using <code>Table</code> to iterate over the elements in a pre-existing list, you can usually find a more functional approach. As a simple example</p>
<pre><code>Table[Length[traitvalues[[i]]], {i, 1, generations}]
</code></pre>
<p>can be written as</p>
<pre><code>Length /@ traitvalues
</code></pre>
<p>Often this change will provide a performance boost, and it makes the code more readable too (once you are accustomed to the notation)</p>
<p><strong>The altered code</strong></p>
<p>The main <code>Do</code> loop in <code>EvolveHapl</code> is an iteration to compute the next generation of <code>traitvalues</code> from the previous. This can be done very neatly using <a href="http://reference.wolfram.com/language/ref/NestList.html" rel="noreferrer"><code>NestList</code></a>. My version of <code>EvolveHapl</code> just calls <code>NestList</code> to do all the work:</p>
<pre><code>EvolveHapl[fitnessfunc_, initpop_, mutrate_, stdev_, generations_] :=
NestList[genStep[#, fitnessfunc, mutrate, stdev] &, initpop, generations]
</code></pre>
<p>The function <code>genStep</code> computes the new population from the current one:</p>
<pre><code>genStep[tv_, fitnessfunc_, mutrate_, stdev_] :=
Module[{n, fitnessinds, numberoffspring, newtv},
n = Length @ tv;
fitnessinds = fitnessfunc[tv, n];
numberoffspring = fastPoisson[fitnessinds];
newtv = Join @@ MapThread[ConstantArray, {tv, numberoffspring}];
addNoise[newtv, mutrate, stdev]]
</code></pre>
<p>The most significant speed-up is in the calculation of <code>fitnessinds</code> - I pass the <em>whole</em> population to the fitness function. Obviously this requires that the fitness function is able to operate on a list. In your case, it can (provided we use the original version with <code>Clip</code> rather than <code>Max</code>)</p>
<p><code>genStep</code> calls another function <code>addNoise</code> to do the mutations:</p>
<pre><code>addNoise[traitvalues_, mutrate_, stdev_] :=
Module[{tv, n, nrmutants, rnoise, rndelem},
tv = traitvalues;
n = Length[tv];
nrmutants = RandomVariate[BinomialDistribution[n, mutrate]];
rnoise = RandomReal[NormalDistribution[0, stdev], nrmutants];
  rndelem = RandomSample[Range[n], nrmutants];
  tv[[rndelem]] += rnoise;
  Clip[tv, {0, ∞}]]
</code></pre>
<p>One thing to point out about <code>addNoise</code> is that I used <code>RandomSample</code> to identify which elements to mutate. Unlike <code>RandomChoice</code>, which samples with replacement, <code>RandomSample</code> never picks the same element more than once (which I assume is desirable).</p>
<p>I pulled <code>fastPoisson</code> out as a global function, no changes to it but I'll copy it here for completeness:</p>
<pre><code>fastPoisson = Compile[{{λ, _Real}},
Module[{b = 1., i, a = Exp[-λ]},
For[i = 0, b >= a, i++, b *= RandomReal[]]; i - 1],
RuntimeAttributes -> {Listable}, Parallelization -> True];
</code></pre>
<p>I made some minor changes to the <code>PlotResults</code> function, mostly converting to use <code>Map</code> where possible.</p>
<pre><code>PlotResult[traitvalues_] :=
Module[{generations, pop, maxscaleN, minscaleN, maxscaleP, frequencydata},
generations = Length[traitvalues];
Print["Mean phenotype at the beginning : " <>
ToString[Mean[traitvalues[[1]]]]];
Print["Maximum mean phenotype at any generation : " <>
ToString[Max[Mean /@ traitvalues /. Mean[{}] -> 0]]];
Print["Mean phenotype after " <> ToString[generations] <> " generations : " <>
ToString[Mean[traitvalues[[generations]]] /. Mean[{}] -> "- (population extinct)"]];
Print["Final population size : " <>
ToString[Length[traitvalues[[generations]]]]];
pop = Length /@ traitvalues;
maxscaleN = Max[pop];
minscaleN = Min[pop];
maxscaleP = Max[traitvalues];
frequencydata = (BinCounts[#, {0, maxscaleP + 0.5, 0.05}] & /@ traitvalues)/(pop + 0.00001);
GraphicsRow[{
Show[ArrayPlot[frequencydata,
DataRange -> {{0, maxscaleP + 0.5}, {1, generations}},
DataReversed -> True], Frame -> True, FrameTicks -> True, AspectRatio -> 2,
FrameLabel -> {"Phenotype frequency", "Generation"}],
ListPlot[Transpose[{pop, Range[generations]}],
Joined -> True, Frame -> True,
FrameLabel -> {"Population size", "Generation"}, AspectRatio -> 2,
PlotRange -> {{Clip[minscaleN - 50, {0, ∞}], maxscaleN + 50}, {0, generations}}]}]]
</code></pre>
<p><strong>Test</strong></p>
<p>As I mentioned I defined the fitness function using <code>Clip</code> so that it could work on a whole population at once. So the full test code is:</p>
<pre><code>psize = 300; ngens = 5000; mutrate = 0.01; stdev = 0.05; K = 2*psize;
f[R_, popsize_] := Clip[(1 + R (1 - popsize/K)), {0.000001, ∞}];
First@AbsoluteTiming[
traitvalues = EvolveHapl[f, RandomReal[{2.5, 2.6}, psize], mutrate, stdev, ngens];]
</code></pre>
<p><a href="https://i.stack.imgur.com/hHyY0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hHyY0.png" alt="enter image description here"></a></p>
|
2,795,777 | <p>I encountered this problem in one of my linear algebra homeworks (Linear Algebra with Applications 5th Ed 1.3.44):</p>
<p>Consider a $n \times m$ matrix $A$, such that $n > m$. Show there is a vector $b$ in $\mathbb{R}^{n}$ such that the system $Ax=b$ is inconsistent.</p>
<p>I have a strong intuition as to why this is true, because the transformation matrix maps a vector in $\mathbb{R}^{m}$ to $\mathbb{R}^{n}$, so it is going from a lower dimension to a higher one. When the $m$ components in $x$ vary, they at most parameterize an $m$-dimensional subspace of $\mathbb{R}^{n}$. However, my "proof" (included below) feels very hand-wavy and sloppy, and it may also be incorrect in a number of places. I'd appreciate some pointers on how to formalize proofs of this type so they are rigorous enough to write on a homework or test, ideally illustrated with this example.</p>
<p>My proof:</p>
<p>Consider the case where $A$ has at least $m$ linearly independent row vectors. Using elementary row operations, rearrange $A$ to $A'$, so these $m$ row vectors are the first $m$ rows. $b'$ will refer to the vector $b$ under the same rearrangement of rows. If we place the first $m$ rows in reduced row echelon form using only elementary operations on the first $m$ rows, the augmented matrix $[A'|b']$ will have the following form, where $x_{i}$ is the $i$-th element of the solution vector $x$.</p>
<p>$$\begin{bmatrix}
1 & 0 & \dots & 0 & x_{1} \\
0 & 1 & \dots & 0 & x_{2}\\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \dots & 1 & x_{m} \\
a'_{m+1, 1} & a'_{m+1, 2} & \dots & a'_{m+1, m} & b'_{m+1} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a'_{n,1} & a'_{n, 2} & \dots & a'_{n,m} & b'_{n}
\end{bmatrix}$$</p>
<p>Now consider the $(m+1)$-th row. To eliminate the coefficients in this row, we would need $x_{m+1} = b'_{m+1} - \sum_{i=1}^{m} x_{i}\cdot a_{m+1,i}$, because eliminating each coefficient involves scaling by $a_{m+1,i}$ and then subtracting. The system is inconsistent whenever $x_{m+1} \neq 0$, so we choose any $b'_{m+1}$ for which this holds (there are infinitely many) to obtain a $b'$ that makes $A'x=b'$ inconsistent. Then we unswap the rows to turn $b'$ back into $b$, and we have found a vector that makes our system inconsistent.</p>
| Hagen von Eitzen | 39,174 | <p>Note that $A^{10}=10^{10}I$, which rules out most answers.</p>
|
2,795,777 | <p>I encountered this problem in one of my linear algebra homeworks (Linear Algebra with Applications 5th Ed 1.3.44):</p>
<p>Consider a $n \times m$ matrix $A$, such that $n > m$. Show there is a vector $b$ in $\mathbb{R}^{n}$ such that the system $Ax=b$ is inconsistent.</p>
<p>I have a strong intuition as to why this is true, because the transformation matrix maps a vector in $\mathbb{R}^{m}$ to $\mathbb{R}^{n}$, so it is going from a lower dimension to a higher one. When the $m$ components in $x$ vary, they at most parameterize an $m$-dimensional subspace of $\mathbb{R}^{n}$. However, my "proof" (included below) feels very hand-wavy and sloppy, and it may also be incorrect in a number of places. I'd appreciate some pointers on how to formalize proofs of this type so they are rigorous enough to write on a homework or test, ideally illustrated with this example.</p>
<p>My proof:</p>
<p>Consider the case where $A$ has at least $m$ linearly independent row vectors. Using elementary row operations, rearrange $A$ to $A'$, so these $m$ row vectors are the first $m$ rows. $b'$ will refer to the vector $b$ under the same rearrangement of rows. If we place the first $m$ rows in reduced row echelon form using only elementary operations on the first $m$ rows, the augmented matrix $[A'|b']$ will have the following form, where $x_{i}$ is the $i$-th element of the solution vector $x$.</p>
<p>$$\begin{bmatrix}
1 & 0 & \dots & 0 & x_{1} \\
0 & 1 & \dots & 0 & x_{2}\\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \dots & 1 & x_{m} \\
a'_{m+1, 1} & a'_{m+1, 2} & \dots & a'_{m+1, m} & b'_{m+1} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a'_{n,1} & a'_{n, 2} & \dots & a'_{n,m} & b'_{n}
\end{bmatrix}$$</p>
<p>Now consider the $(m+1)$-th row. To eliminate the coefficients in this row, we would need $x_{m+1} = b'_{m+1} - \sum_{i=1}^{m} x_{i}\cdot a_{m+1,i}$, because eliminating each coefficient involves scaling by $a_{m+1,i}$ and then subtracting. The system is inconsistent whenever $x_{m+1} \neq 0$, so we choose any $b'_{m+1}$ for which this holds (there are infinitely many) to obtain a $b'$ that makes $A'x=b'$ inconsistent. Then we unswap the rows to turn $b'$ back into $b$, and we have found a vector that makes our system inconsistent.</p>
| mechanodroid | 144,766 | <p>Your matrix is the transpose of the <a href="https://en.wikipedia.org/wiki/Companion_matrix" rel="nofollow noreferrer">companion matrix</a> so its characteristic and minimal polynomial are equal to $x^{10} - 10^{10}$.</p>
<p>Therefore, the spectrum of $A$ is precisely the set of real roots of $x^{10} - 10^{10}$, which are $\pm 10$.</p>
|
529,886 | <p>In the context of learning about comparison theorem, using integrals to determine convergence and learning about exponential series (That's what $n^p$ is called right?).</p>
| newzad | 76,526 | <ol>
<li>Say $m(EAB)=\alpha$ then $m(DAF)=30-\alpha$, $m(DFA)=60+\alpha$ and $m(FEC)=30+\alpha$.</li>
<li>Join $M$ and $E$.</li>
<li>$m(MEA)=m(MEF)=30$, therefore $m(MEC)=60+\alpha$</li>
<li>We see that $m(MEC)=m(MFD)=60+\alpha$ and $m(EMA)=m(ECF)=90$.</li>
<li>As a result we can say that a circle pass through the points $M, E, C, F$.</li>
<li>Finally because of the $MECF$ cyclic quadrilateral, $m(MCE)=m(MFE)=60$ </li>
<li>You can see that $MEBA$ is also a cyclic quadrilateral, therefore $m(MAE)=m(MBE)=60^\circ$; or you can join $M$ and $D$ to see $m(MCE)=m(MBE)$</li>
</ol>
|
890,313 | <p>Say the probability of an event occurring is 1/1000, and there are 1000 trials.</p>
<p>What's the expected number of events that occur? </p>
<p>I got to an answer in a quick script by doing the above 100,000 times and averaging the results. I got 0.99895, which seems like it makes sense. How would I use math to get right to this answer? The only thing I can think of to calculate is the probability that an event never occurs, which would be 0.999^1000, but I am stuck there. </p>
| drhab | 75,923 | <p>Give the trials the numbers $1,2,\dots,1000$.</p>
<p>Let $X_i$ take value $1$ if the event is occurring at the $i$-th trial and value $0$ otherwise. </p>
<p>$$X:=X_1+\cdots+X_{1000}$$ is the number of events that occur. This with:</p>
<p>$$\mathbb E(X_i)=1\times P(X_i=1)+0\times P(X_i=0)=1\times\frac{1}{1000}+0\times\frac{999}{1000}=\frac{1}{1000}$$
for each $i\in\{1,\dots,1000\}$ and $$\mathbb E(X)=\mathbb E(X_1+\cdots+X_{1000})=\mathbb E(X_1)+\cdots+\mathbb E(X_{1000})=\frac{1}{1000}+\cdots+\frac{1}{1000}=1$$</p>
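<p>The linearity argument can be cross-checked against a simulation like the asker's. The following is a quick sketch of mine, not part of the original answer; it assumes <code>numpy</code> is available and uses an arbitrary seed:</p>

```python
import numpy as np

n, p, reps = 1000, 1 / 1000, 100_000

# Exact value from linearity of expectation: E[X] = n * p
exact = n * p

# Monte Carlo estimate, mirroring the asker's script
rng = np.random.default_rng(0)
estimate = rng.binomial(n, p, size=reps).mean()

print(exact)      # 1.0
print(estimate)   # should land close to 1
```

<p>The simulated mean lands near the asker's <code>0.99895</code>, exactly as linearity predicts.</p>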
|
1,348,099 | <p>We know that the cross product gives a vector that is orthogonal to the other two vectors. Let this vector be denoted by $$\vec{v} \times \vec{u} = \vec{n}.$$
Then $$\vec{n}\cdot \vec{u} = 0.$$ Everything is okay up to here. But then how do we choose between the two possible orthogonal vectors, $$\vec{n}$$ or $$-\vec{n}?$$ Why do we follow the right-hand rule? </p>
| Vincent | 147,033 | <p>Let $u:=mg+kx \implies x= \frac{u-mg}{k}$ <em>and</em> $du=k\,dx$ then $$\int\frac{x\cdot dx}{mg+kx}=\int\frac{\frac{u-mg}{k}\cdot \frac{du}{k}}{u}=\frac{1}{k^2}\int\frac{u-mg}{u}\,du=\frac{1}{k^2}\int 1-\frac{mg}{u}\,du=\frac{1}{k^2}[u-mg\ln(u)]+C,$$ where $C$ is an arbitrary constant. Substituting $u=mg+kx$ back into the result above yields the answer, $$\frac{1}{k^2}[mg+kx-mg\ln(mg+kx)]+C=\frac{mg}{k^2}(1-\ln(mg+kx))+\frac{x}{k}+C$$ as desired.</p>
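<p>The result can be sanity-checked numerically by differentiating the claimed antiderivative; a short sketch of mine (the sample values for $m$, $g$, $k$ are arbitrary):</p>

```python
import math

m, g, k = 2.0, 9.8, 3.0

def F(x):
    # the claimed antiderivative (dropping the constant C)
    return m * g / k**2 * (1 - math.log(m * g + k * x)) + x / k

def integrand(x):
    return x / (m * g + k * x)

# the central-difference derivative of F should match the integrand
h = 1e-6
for x in (0.5, 1.0, 2.0, 5.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-6
print("F'(x) matches the integrand at the sample points")
```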
|
3,393,193 | <p>I am asked to answer the following:</p>
<p>Let <span class="math-container">$f:\mathbb{Z}\to\mathbb{Z}$</span> be defined by <span class="math-container">$f(x) = 2x$</span>.</p>
<ul>
<li>Write down infinitely many functions <span class="math-container">$g:\mathbb{Z}\to\mathbb{Z}$</span> such that <span class="math-container">$g\circ f = \mathrm{Id}_{\mathbb{Z}}$</span></li>
</ul>
<p>I thought that the right reasoning was simply to find the inverse of <span class="math-container">$f$</span>, since that is the function satisfying <span class="math-container">$f\circ g = \mathrm{Id}_{\mathbb{Z}}$</span>.
Hence, <span class="math-container">$g(x) = \frac{x}{2}$</span>. </p>
<p>However, I'm asked for infinitely many functions, which made me think of introducing a parameter <span class="math-container">$k$</span>. And this is exactly where I am stuck. </p>
<p>Thanks in advance for your help.</p>
| Floris Claassens | 638,208 | <p>You continue if the expected value of continuing is higher than your current roll. </p>
<p>On the third roll, the expected value of continuing equals
<span class="math-container">$$C_{3}=\frac{1}{6}(1+2+3+4+5+6)=\frac{7}{2}=4-\frac{1}{2},$$</span>
so you continue if you roll lower than <span class="math-container">$4$</span>.</p>
<p>On the second roll, the expected value of continuing equals
<span class="math-container">$$C_{2}=\frac{1}{6}(4+5+6)+\frac{1}{2}C_{3}=\frac{17}{4}=5-\frac{3}{4},$$</span>
so you continue if you roll lower than <span class="math-container">$5$</span>.</p>
<p>On the first roll, the expected value of continuing equals
<span class="math-container">$$C_{1}=\frac{1}{6}(5+6)+\frac{2}{3}C_{2}=\frac{22}{12}+\frac{34}{12}=\frac{56}{12}=\frac{14}{3}=5-\frac{1}{3},$$</span>
so you continue if you roll lower than <span class="math-container">$5$</span>.</p>
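<p>The same backward induction can be carried out mechanically with exact fractions; this is a sketch of mine, not part of the original answer:</p>

```python
from fractions import Fraction

def continuation_values(total_rolls):
    """Backward induction: c is the expected payoff of giving up the
    current roll; vals lists c for 1, 2, ... remaining rolls."""
    c = Fraction(7, 2)          # a single final roll is worth 3.5
    vals = [c]
    for _ in range(total_rolls - 1):
        keep = [v for v in range(1, 7) if v > c]   # keep a roll that beats c
        c = Fraction(sum(keep), 6) + Fraction(6 - len(keep), 6) * c
        vals.append(c)
    return vals

print(continuation_values(3))   # [Fraction(7, 2), Fraction(17, 4), Fraction(14, 3)]
```

<p>Each value determines the keep/continue threshold at the corresponding stage.</p>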
|
1,887,856 | <p>When is a group homomorphism</p>
<p>$$\varphi:\Bbb Z/2\Bbb Z\to \Bbb Z/2\Bbb Z\oplus \Bbb Z/2\Bbb Z$$</p>
<p>an "isomorphism onto the first summand" ?</p>
<p>Is the map $\varphi:1\mapsto (1,1)$ an isomorphism onto the first summand?</p>
| Mr. Chip | 52,718 | <p>Such a $\varphi$ is an isomorphism onto the first summand in case it is injective (hence bijective onto its image) and its image equals the first summand, i.e. the subset $\mathbb{Z}/2\mathbb{Z} \oplus 0 = \{ (0,0), (1,0) \}$. So your map is not such an isomorphism.</p>
|
1,887,856 | <p>When is a group homomorphism</p>
<p>$$\varphi:\Bbb Z/2\Bbb Z\to \Bbb Z/2\Bbb Z\oplus \Bbb Z/2\Bbb Z$$</p>
<p>an "isomorphism onto the first summand" ?</p>
<p>Is the map $\varphi:1\mapsto (1,1)$ an isomorphism onto the first summand?</p>
| Zev Chonoles | 264 | <p>To say that a map
$$\varphi:A\to B\oplus C$$
is an isomorphism onto the first summand means that there is an isomorphism $\psi:A\to B$ for which $\varphi(a)=(\psi(a),0)$. </p>
<p>In other words, $\varphi$ is an isomorphism onto the first summand when both of the following are true:</p>
<ol>
<li>the image of $\varphi$ is equal to the first summand, i.e. the subgroup (or subring, etc.) $B\oplus 0$ of $B\oplus C$, and moreover,</li>
<li>considering the function $\varphi$ as a map solely onto its image (i.e., restricting the codomain of $\varphi$), it is an isomorphism.</li>
</ol>
<p>Thus, the map $\varphi:\mathbb{Z}/2\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$ defined by $\varphi(1)=(1,1)$ is <strong><em>not</em></strong> an isomorphism onto the first summand, since the image of this $\varphi$ is not the subgroup $\mathbb{Z}/2\mathbb{Z}\oplus 0$ of $\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$.</p>
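<p>Since the groups here are tiny, one can simply enumerate all four homomorphisms (each is determined by the image of $1$) and test condition 1; a small sketch of mine (for these two-element groups, a matching image already forces condition 2):</p>

```python
from itertools import product

first_summand = {(0, 0), (1, 0)}

# A homomorphism Z/2Z -> Z/2Z x Z/2Z is determined by where 1 goes;
# every element of the target has order dividing 2, so all four choices work.
for t in product((0, 1), repeat=2):
    phi = {0: (0, 0), 1: t}
    onto_first = set(phi.values()) == first_summand
    print(t, onto_first)   # only (1, 0) maps onto the first summand
```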
|
2,569,096 | <p>The problem goes as follows:
$$
P=\left(
\begin{matrix}
a & 0.6\\
1-a & 0.4\\
\end{matrix}
\right)
$$</p>
<blockquote>
<p>Determine the value of the parameter $a \in [0,1]$ for which $P$ does <strong>not</strong> have an inverse.</p>
</blockquote>
<p>So then I know the value of $a$ lies between $0$ and $1$, inclusively. And since I don't get information on the current states of the transitional values, this has to be done algebraically. </p>
<p>Is that correct? </p>
<p>In that case, as it is a "political" transition matrix: L = left swing, R = right swing. </p>
<p>$$P \cdot\left(\begin{array}{c} x\\ y\end{array} \right) = \left( \begin{array}{c} L \\ R\end{array} \right)$$</p>
<p>$$ax + .6y = L \\x(1-a) + .4y = R$$</p>
<p>$$ax = L - .6y \\x(1-a)= R - .4y$$ $$a = \frac{L - .6y}{x} \\$$</p>
<p>Or am I lost? </p>
<p>Thanks beforehand for help.</p>
| kam | 514,050 | <p>A matrix with determinant $0$ has no inverse, so $P^{-1}$ fails to exist exactly when $\det(P)=0$:</p>
<p>$$0.4a-0.6(1-a)=0 \implies 0.4a+0.6a-0.6=0 \implies a=0.6.$$</p>
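<p>A quick exact check over a grid of candidate values (a sketch of mine, using rationals to avoid floating-point noise):</p>

```python
from fractions import Fraction as F

def det(a):
    # det P = 0.4*a - 0.6*(1 - a), which simplifies to a - 0.6
    return a * F(4, 10) - F(6, 10) * (1 - a)

singular = [a for a in (F(i, 100) for i in range(101)) if det(a) == 0]
print(singular)   # [Fraction(3, 5)]
```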
|
213,665 | <p><strong>I've tried 3 methods but all failed to do that.</strong></p>
<p>1st Method</p>
<pre><code>Apply[Flatten, {1, {2, {3, 4}, 5}, 6}, {2}]
</code></pre>
<p>2nd Method</p>
<pre><code>Map[Flatten, {1, {2, {3, 4}, 5}, 6}, {2}]
</code></pre>
<p>3rd Method</p>
<pre><code>Flatten[{1, {2, {3, 4}, 5}, 6}, {2}]
</code></pre>
<p>I want to get <code>{1, {2, 3, 4, 5}, 6}</code></p>
| kglr | 125 | <pre><code>lst = {1, {2, {3, 4}, 5}, 6};
FlattenAt[lst, {2, 2}]
</code></pre>
<blockquote>
<p>{1, {2, 3, 4, 5}, 6}</p>
</blockquote>
<p>Also</p>
<pre><code>Map[## & @@ # &, lst, {2}]
</code></pre>
<blockquote>
<p>{1, {2, 3, 4, 5}, 6}</p>
</blockquote>
<pre><code>Replace[lst, List -> Sequence, {3}, Heads -> True]
</code></pre>
<blockquote>
<p>{1, {2, 3, 4, 5}, 6}</p>
</blockquote>
<p>And</p>
<pre><code>☺ = ## & @@@ # & /@ # &;
☺ @ lst
</code></pre>
<blockquote>
<p>{1, {2, 3, 4, 5}, 6}</p>
</blockquote>
|
2,966,392 | <p>Suppose <span class="math-container">$\lim_{n\rightarrow\infty }z_n=z$</span>.<br>
Prove <span class="math-container">$\lim_{n\rightarrow\infty}\operatorname{Re}(z_n)=\operatorname{Re}(z)$</span></p>
<p>Here <span class="math-container">$z\in\mathbb{C}$</span> and <span class="math-container">$(z_n)$</span> is a complex sequence.</p>
<p><b>My work</b></p>
<p>Let <span class="math-container">$\epsilon >0$</span>.</p>
<p>By hypothesis there exists <span class="math-container">$N\in\mathbb{N}$</span> such that if <span class="math-container">$n\geq N$</span> then <span class="math-container">$|z_n-z|<\epsilon.$</span></p>
<p>We know <span class="math-container">$|\operatorname{Re}(z_n)|\leq|z_n|$</span> and <span class="math-container">$|\operatorname{Re}(z)|\leq|z|$</span>, so</p>
<p><span class="math-container">$|\operatorname{Re}(z_n)-\operatorname{Re}(z)|\leq|\operatorname{Re}(z_n)|+|\operatorname{Re}(z)|\leq|z_n|+|z|.$</span></p>
<p>Here I'm a little stuck. Can someone help me?</p>
| José Carlos Santos | 446,262 | <p>What you should do is<span class="math-container">\begin{align}\bigl\lvert\operatorname{Re}(z_n)-\operatorname{Re}(z)\bigr\rvert&=\bigl\lvert\operatorname{Re}(z_n-z)\bigr\rvert\\&\leqslant\lvert z_n-z\rvert\\&<\varepsilon.\end{align}</span></p>
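<p>The key inequality $\lvert\operatorname{Re}(w)\rvert\leq\lvert w\rvert$, applied to $w=z_n-z$, is easy to spot-check numerically; a throwaway sketch of mine with an arbitrary sample sequence:</p>

```python
# Spot-check |Re(z_n) - Re(z)| <= |z_n - z| for a sequence z_n -> z
z = 2 + 3j
for n in range(1, 8):
    zn = z + (1 - 1j) / n          # an arbitrary sequence converging to z
    assert abs(zn.real - z.real) <= abs(zn - z)
print("|Re(z_n) - Re(z)| <= |z_n - z| holds at the sample points")
```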
|
1,177,583 | <p>I have a question similar to <a href="https://math.stackexchange.com/questions/757525/picture-behind-so3-so2-simeq-s2/" title="one">this one</a>, but that question is not answered. The question is to show that $SO(3)/SO(2)$ is isomorphic to the 2-sphere:
$$
SO(3)/SO(2)\cong S^2
$$
How does one establish the isomorphism?
Similarly, how do I show that the following is also an isomorphism:
$$
SO(3)/O(2)\cong \mathbb{R}P^2
$$
Thank you very much in advance.</p>
| Daniel Valenzuela | 156,302 | <p>Consider $SO(R^3)$, which acts on $R^3$ by rotations; this restricts to an action on $S^2$. For every point $x\in S^2$ we have a unique orthogonal plane $V$, hence $SO(V)\subset SO(R^3)$ will fix $x$. It is easy to see that in fact $Stab(x)=SO(V) \cong SO(2)$. Hence we have a fiber bundle
$$
SO(3) \to S^2
$$</p>
<p>with fiber $SO(2)$. The map is basically just fixing a point in $S^2$, e.g. $(0,0,1)$, and considering its image under the group action. More formally: a group action is a map $G\times S^2 \to S^2$, which you can restrict to $G\times \{*\}$. Since the bundle and its fiber are Lie groups, this induces an isomorphism $$SO(2) \to SO(3) \stackrel \cong \to S^2 \cong SO(3)/SO(2) $$</p>
<p>Now we compose $SO(3) \to S^2 \to S^2/\mathbb Z_2 = RP^2$. The fiber will be twice as much as before. It is easy and a nice exercise to fill in the details, that the fiber is $O(2)$.</p>
|
1,177,583 | <p>I have a question similar to <a href="https://math.stackexchange.com/questions/757525/picture-behind-so3-so2-simeq-s2/" title="one">this one</a>, but that question is not answered. The question is to show that $SO(3)/SO(2)$ is isomorphic to the 2-sphere:
$$
SO(3)/SO(2)\cong S^2
$$
How does one establish the isomorphism?
Similarly, how do I show that the following is also an isomorphism:
$$
SO(3)/O(2)\cong \mathbb{R}P^2
$$
Thank you very much in advance.</p>
| Analogue Multiplexer | 241,272 | <p>$\bullet \space \mathbf{SO(3) / SO(2) \simeq S^2}:$</p>
<p>Consider a fundamental representation of the Lie group $G := SO(3)$. Any element $M$ of $G$ can be written as a linear map $M : \mathbb{R}^3 \rightarrow \mathbb{R}^3$ such that $M^{-1} = M^T$ and $\det(M) = 1$. We can easily restrict to $M : S^2 \rightarrow S^2$. For any arbitrary $x \in S^2 \subset \mathbb{R}^3$ we write $x = (x_1,x_2,x_3)$ and $-x = (-x_1,-x_2,-x_3)$, so that $x_1^2 + x_2^2 + x_3^2 = 1$.</p>
<p>Let now $\iota : SO(2) \rightarrow SO(3)$ be some embedding such that $\iota(SO(2))$ is a subgroup of $SO(3)$. Note that there is some $x \in S^2$ such that $\iota(SO(2)) (x) = x$ and $\iota(SO(2)) (-x) = -x$. Thus $\iota(SO(2))$ is a stabilizer $G_x = G_{-x} \subset G$, so that $G_x x = x$ and $G_{-x} (-x) = -x$. Let now $g \in G - G_x$, thus $g \in SO(3)$ but $g \not \in \iota(SO(2))$. Then $g G_x \subset G$ is a left coset of $G_x$, so that $g G_x \cap G_x = \emptyset$. Then
\begin{equation}
y := g x = g G_x x = (g G_x g^{-1}) g x = G_y y.
\end{equation}</p>
<p>Note that $g G_x$ is a subset of $G$ but not a subgroup. But it should be clear that $G_y$ is some conjugate of $G_x$. Then $G_y \simeq G_x$ and $G_x \cap G_y = e$ if $y \not \in \{x,-x\}$, where $e$ is the identity element of $SO(3)$. Also note that $g G_x g^{-1} = G_{-x} = G_x$ for any $g \in SO(3)$ such that $g x = -x$. Then $g^2 = e$, so that $g^{-1} = g$ and $g h g^{-1} = g h g = h^{-1}$ for any $h \in G_x$.</p>
<p>For any $y \in S^2$ there exists an element $g \in G$ such that $y = g x$. Now it should be clear that the left coset space (i.e. the smooth set of left cosets of $G_x$) is isomorphic to $S^2$. Then we can say that there is a principal fiber bundle $(SO(3),S^2,\pi,SO(2))$ with surjective map $\pi : SO(3) \rightarrow S^2$, with a short exact sequence:
\begin{equation}
1 \rightarrow SO(2) \rightarrow SO(3) \rightarrow SO(3) / \iota(SO(2)) \simeq S^2 \rightarrow 0.
\end{equation}</p>
<p>(This is similar to the principal fiber bundle $(SU(2),S^2,\pi,U(1))$.) Now note that any $x \in S^2$ induces a pair $\{x,-x\} \subset S^2$, so that $\{x,-x\} \in \mathbb{R} P^2$. Then it is straight forward to see that
\begin{equation}
SO(3) = G = \cup_{\{x,-x\} \in \mathbb{R} P^2} G_x.
\end{equation}</p>
<p>$\bullet \space \mathbf{SO(3) / O(2) \simeq \mathbb{R} P^2}:$</p>
<p>There is no proper embedding of $O(2)$ into $SO(3)$ with a fundamental representation. Consider a projective representation $SO(3) : \mathbb{R} P^2 \rightarrow \mathbb{R} P^2$.</p>
<p>Let $L \in O(2)$ and $l := \det(L)$, so that $l \in \{1,-1\}$. Let now $\iota : O(2) \rightarrow SO(3)$ be some embedding such that $\iota(O(2))$ is a subgroup of $SO(3)$. Now define $M := \iota(L) \in \iota(O(2))$ such that $\det(M) = 1$:
\begin{equation}
L =
\left(
\begin{array}{cc}
L_{1 1} & L_{1 2} \\
L_{2 1} & L_{2 2}
\end{array}
\right)
\Rightarrow M =
\left(
\begin{array}{ccc}
L_{1 1} & L_{1 2} & 0 \\
L_{2 1} & L_{2 2} & 0 \\
0 & 0 & l
\end{array}
\right)
.
\end{equation}
Note that this is just an arbitrary embedding; there is no canonical one. As discussed: for any $x \in S^2$ there exists an element $g$ such that $g x = -x$, thus also $g (-x) = x$. There is a projection $S^2 \rightarrow \mathbb{R} P^2$ so that this action turns into $g \{x,-x\} = \{-x,x\} = \{x,-x\}$. This shows that in this case $g$ is also an element of the stabilizer. All these $g$ generate an extension to the stabilizer we already constructed, related to the fundamental representation. This extended stabilizer can really be regarded as a proper embedding from $O(2)$ to $SO(3)$. Thus:
\begin{equation}
\iota(O(2)) = G_{\{x,-x\}} = G_{\{-x,x\}} \simeq O(2).
\end{equation}</p>
<p>It should be clear that if $l = 1$, then $M$ acts like $SO(2)$ and we may assume that $g = e$. If $l = -1$, then $g$ generates an axis $a(g) \in \mathbb{R} P^2$ which is perpendicular to the ${\{x,-x\}}$ axis. This axis $a(g)$ generates the direction of a mirror. There is a principal fiber bundle $(SO(3),\mathbb{R} P^2,\pi,O(2))$ with surjective map $\pi : SO(3) \rightarrow \mathbb{R} P^2$, with a short exact sequence:
\begin{equation}
1 \rightarrow O(2) \rightarrow SO(3) \rightarrow SO(3) / \iota(O(2)) \simeq \mathbb{R} P^2 \rightarrow 0.
\end{equation}</p>
|
1,260,260 | <blockquote>
<p>Find, with proof, the smallest value of $N$ such that $$x^N \ge \ln x$$ for all $0 < x < \infty$. </p>
</blockquote>
<p>I thought of taking the natural logarithm of both sides and then differentiating. This gave me $N \ge \frac 1{\ln x}$. However, is there a better way to do this?</p>
<p>Please note that I would like to see only a <em>hint</em>, not a complete solution.</p>
<p>For what it's worth, I am told that the answer is $N \ge \frac 1e$.</p>
| shalop | 224,467 | <p>Hint for one possible approach:</p>
<p>Consider the function $f:(0,\infty) \to \mathbb{R}$ defined by $f(x) = \frac{\log x}{x^N}$. Take the derivative, and you find that $f$ attains a global maximum value at $x=e^{\frac{1}{N}}$.</p>
<p>Now solve the following equation for $N$: $$f(e^{\frac{1}{N}})=1$$ Solving this gives you $N = \frac{1}{e}$, and I'll leave it to you to think about why solving this equation gives you the minimal $N$-value that you desire.</p>
|
163,672 | <p>Is there a characterization of boolean functions $f:\{-1,1\}^n \longrightarrow \{-1,1\}$,
so that $\mathbf{Inf_i}[f]=\frac{1} {2}$, for all $1\leq i\leq n$? Is it known how many such functions there are? </p>
| karpasi | 35,660 | <p>Apparently such functions have been studied before, in cryptography. This condition is called the Strict Avalanche Criterion, and a guy called Daniel K. Biss proved a lower bound of $2^{2^{n-o(1)}}$.</p>
<p><a href="http://ac.els-cdn.com/S0012365X97001805/1-s2.0-S0012365X97001805-main.pdf?_tid=5d5aa18e-fd17-11e3-91cc-00000aacb35e&acdnat=1403776434_39a67c01ad84f7a6f980f85a9c23ca91" rel="nofollow">http://ac.els-cdn.com/S0012365X97001805/1-s2.0-S0012365X97001805-main.pdf?_tid=5d5aa18e-fd17-11e3-91cc-00000aacb35e&acdnat=1403776434_39a67c01ad84f7a6f980f85a9c23ca91</a></p>
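<p>For intuition, the condition $\mathbf{Inf_i}[f]=\frac{1}{2}$ is easy to test exhaustively for small $n$: for instance, majority on three bits satisfies it. A sketch of mine, written over $\{0,1\}$ rather than $\{-1,1\}$ (influences are unchanged by the relabeling):</p>

```python
from itertools import product

def influence(f, n, i):
    # Inf_i[f] = Pr_x[f(x) != f(x with coordinate i flipped)]
    flips = 0
    for x in product((0, 1), repeat=n):
        y = list(x)
        y[i] ^= 1
        flips += f(x) != f(tuple(y))
    return flips / 2 ** n

maj3 = lambda x: int(sum(x) >= 2)
print([influence(maj3, 3, i) for i in range(3)])   # [0.5, 0.5, 0.5]
```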
|
3,975,895 | <p>Let <span class="math-container">$a,b,c\in\mathbb{Z}$</span>, <span class="math-container">$1<a<10$</span>, <span class="math-container">$c$</span> is a prime number and <span class="math-container">$f(x)=ax^2+bx+c$</span>. If <span class="math-container">$f(f(1))=f(f(2))=f(f(3))$</span>, find <span class="math-container">$f'(f(1))+f(f'(2))+f'(f(3))$</span></p>
<p>My attempt:
<span class="math-container">\begin{align*}
f'(x)&=2ax+b\\
(f(f(x)))'&=f'(f(x))f'(x)\\
f'(f(x))&=\frac{(f(f(x)))'}{f'(x)}\\
\end{align*}</span></p>
| Thehx | 469,873 | <p>Okay here's the outline of how you solve this.</p>
<p>first, you write down the system of equations
<span class="math-container">$$
0=f(f(2))-f(f(1))=(3a+b)*(b+5a^2+3ab+2ac) \\
0=f(f(3))-f(f(2))=(5a+b)*(b+13a^2+5ab+2ac)
$$</span>
and this leaves you with four possibilities:</p>
<p>(a) <span class="math-container">$(3a+b=0)$</span> and <span class="math-container">$(5a+b=0)$</span> which is impossible since <span class="math-container">$a \ne 0$</span></p>
<p>(b) <span class="math-container">$(3a+b=0)$</span> and <span class="math-container">$(b+13a^2+5ab+2ac=0)$</span>
in which case we obtain <span class="math-container">$b=-3a$</span>, then our second condition turns into <span class="math-container">$2ac-2a^2-3a=0$</span>, which further yields <span class="math-container">$c=a+\frac{3}{2}$</span>, and we see no acceptable solutions here since both <span class="math-container">$a$</span> and <span class="math-container">$c$</span> are asked to be integers.</p>
<p>(c) <span class="math-container">$(b+13a^2+5ab+2ac=0)$</span> and <span class="math-container">$(b+5a^2+3ab+2ac=0)$</span>, which, if we subtract one equation from the other, gives us <span class="math-container">$b=-4a$</span> and, shortly after, <span class="math-container">$c=\frac{7a+4}{2}$</span>; we can then just check the values of <span class="math-container">$a$</span> from <span class="math-container">$2$</span> to <span class="math-container">$9$</span> to find the only solution at <span class="math-container">$a=6, b=-24, c=23$</span>.</p>

<p>(d) <span class="math-container">$(5a+b=0)$</span> and <span class="math-container">$(b+5a^2+3ab+2ac=0)$</span>, in which case we get <span class="math-container">$b=-5a$</span>; the second condition simplifies to <span class="math-container">$10a-2c+5=0$</span>, which gives us <span class="math-container">$c=5a+\frac{5}{2}$</span>, and we see that <span class="math-container">$c$</span> can never be an integer when <span class="math-container">$a$</span> is an integer (which it always is).</p>
<p>This leaves us with the only acceptable solution found in (c). As I already mentioned above, putting <span class="math-container">$(a,b,c)=(6,-24,23)$</span> into <span class="math-container">$f'(f(1))+f(f'(2))+f'(f(3))$</span> gives you <span class="math-container">$95$</span>.</p>
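<p>The solution in (c) and the final value are easy to verify directly; a short sketch of mine:</p>

```python
def check(a, b, c):
    f = lambda x: a * x * x + b * x + c
    df = lambda x: 2 * a * x + b          # f'(x)
    assert f(f(1)) == f(f(2)) == f(f(3))
    return df(f(1)) + f(df(2)) + df(f(3))

print(check(6, -24, 23))   # 95
```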
|
787,894 | <p>Find the values of $x,y$ for which $x^2 + y^2$ takes the minimum value where $(x+5)^2 +(y-12)^2 =14$.</p>
<p>Tried Cauchy-Schwarz and AM - GM , unable to do.</p>
| Macavity | 58,320 | <p>Another way is triangle inequality (essentially think of the triangle between the origin, the centre of the circle and any point on the circle):</p>
<p>$$\sqrt{x^2+y^2} +\sqrt{(x+5)^2+(y-12)^2} \ge \sqrt{(-5)^2+(12)^2} \implies x^2+y^2 \ge \left(13 - \sqrt{14}\right)^2$$</p>
|
3,844,235 | <p>Suppose a matrix <span class="math-container">$A \in \text{Mat}_{2\times 2}(\mathbb{F}_5)$</span> has characteristic polynomial <span class="math-container">$x^2 - x +1$</span>. Is <span class="math-container">$A$</span> diagonalizable over <span class="math-container">$\mathbb{F}_5$</span>?</p>
<p>Normally, I would just check to see if the geometric multiplicity and algebraic multiplicity are equal for each eigenspace, but over <span class="math-container">$\mathbb{F}_5$</span>, I am not even sure what the eigenvalues are!</p>
| Misha Lavrov | 383,078 | <p>If your matrix were diagonalizable, then there would be a diagonal matrix <span class="math-container">$$\Lambda = \begin{bmatrix}\lambda_1 & 0 \\ 0 & \lambda_2\end{bmatrix} \in \text{Mat}_{2 \times 2}(\mathbb F_5)$$</span> with the same characteristic polynomial. But then, <span class="math-container">$\Lambda^2 - \Lambda + I = 0_{2\times 2}$</span> implies <span class="math-container">$\lambda_i^2 - \lambda_i + 1 = 0$</span> for <span class="math-container">$i=1,2$</span>, and there are no such elements in <span class="math-container">$\mathbb F_5$</span>.</p>
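A direct check that $x^2-x+1$ has no root in $\mathbb F_5$ (a small Python sketch):

```python
# x^2 - x + 1 has no root modulo 5, so no eigenvalues exist in F_5.
roots = [x for x in range(5) if (x * x - x + 1) % 5 == 0]
print(roots)  # []
```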
|
595,552 | <p>Let $R$ be a ring. Prove that each element of $R$ is either a unit or a nilpotent element iff the ring $R$ has a unique prime ideal.</p>
<p>Please give me some hints.</p>
| rschwieb | 29,335 | <p>$F[x]/(x^3)$ consists of units and nilpotent elements, but has four ideals, so this suggests you meant something more like <em>unique prime ideal</em>.</p>
<p>This is indeed true for commutative rings. The hypothesis that nonunits are nilpotent means that the nilradical is a maximal ideal. But considering that all prime ideals contain the nilradical, the nilradical is precisely the one prime ideal in the ring.</p>
<p>Conversely, if you assume the ring has one prime ideal, then there is clearly only one maximal ideal, and everything inside it is a nonunit, hence nilpotent.</p>
<p>The statement is false for noncommutative rings. $M_2(R)$ has exactly one prime ideal: $\{0\}$. Needless to say there are non-nilpotent non-units in this ring (for example $\begin{bmatrix}1&0\\0&0\end{bmatrix}$.)</p>
<p>It might be interesting though to follow up and see if any of the new one-sided prime ideal definitions makes this work in noncommutative rings.</p>
|
595,552 | <p>Let $R$ be a ring. Prove that each element of $R$ is either a unit or a nilpotent element iff the ring $R$ has a unique prime ideal.</p>
<p>Please give me some hints.</p>
| Truong | 100,751 | <p><a href="http://am-solutions.wikispaces.com/Solutions+to+Chapter+1" rel="noreferrer">http://am-solutions.wikispaces.com/Solutions+to+Chapter+1</a></p>
<p>"Let $A$ be a ring, $R$ its nilradical. Show that the following are equivalent:</p>
<p>1) $A$ has exactly one prime ideal;</p>
<p>2) every element of $A$ is either a unit or nilpotent;</p>
<p>3) $A/R$ is a field.</p>
<p>Proof. 1) ⇒ 2). Observe that $R$, which is the intersection of the prime ideals, is equal to the given prime ideal; and that $A$ is a local ring. Thus $A - R = A^*$ and by definition $R$ consists of all nilpotent elements.</p>

<p>2) ⇒ 3). The quotient map $A \to A/R$ is surjective. Since ring homomorphisms map units to units, $x \in A/R$ is either $0$ or a unit.</p>

<p>3) ⇒ 1). All prime ideals contain $R$, and $R$ is a maximal ideal: hence there is one prime ideal. "</p>
|
3,086,218 | <p>The second order differential equation is given by -</p>
<p><span class="math-container">$ \frac{d^{2}y}{dx^{2}} + \sin (x+y) = \sin x$</span> </p>
<p>Is this a homogeneous differential equation <span class="math-container">$?$</span></p>
<p>Well, I guess this is not a homogeneous differential equation since the form of this equation is not <span class="math-container">$a(x)y'' + b(x)y' +c(x)y = 0$</span>.
But the given answer says that it's homogeneous.
How can this equation be homogeneous?</p>
| Lutz Lehmann | 115,115 | <p>You are correct, as it is not a linear ODE, it is neither homogeneous nor inhomogeneous.</p>
<p>The cited characterization is most likely based on the fact that <span class="math-container">$y=0$</span> is a solution, but that is only a necessary condition for linearity, not a sufficient one.</p>
|
947,191 | <p>Show that $\sum _{n=1 } ^{\infty } (n \pi + \pi/2)^{-1 } $ diverges.</p>
<p>Both the root test and the ratio test is inconclusive. Can you suggest a series for the series comparison test?</p>
<p>Thanks in advance!</p>
| Community | -1 | <p>$$\frac{1}{n\pi +\frac{\pi}{2}} \geq \frac{1}{n\pi +n\pi}\geq \frac{1}{8n} =\frac{1}{8}\cdot\frac{1}{n}$$</p>
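The termwise comparison can be checked numerically (a Python sketch; both partial sums grow like a multiple of $\log N$):

```python
import math

# Partial sums of 1/(n*pi + pi/2) versus the harmonic comparison (1/8) * H_N.
N = 100000
s = sum(1.0 / (n * math.pi + math.pi / 2) for n in range(1, N + 1))
h = sum(1.0 / n for n in range(1, N + 1)) / 8
print(s, h)  # s exceeds h, and both grow without bound like log N
```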
|
<p>I understand that $\lim_{\theta\to0}(\sin(\theta)/\theta) = 1$, but what is $x$ when
$\lim_{\theta\to0}(\tan(\theta)/\theta) = x$, where $x$ is a real constant value?</p>

<p>Please help me, I will be eternally grateful :D</p>
| amWhy | 9,003 | <p>We have that $\dfrac{\tan\theta}{\theta}=\dfrac{1}{\cos\theta}\cdot \dfrac {\sin\theta}{\theta}.$ </p>
<p>Then recall that the limit of a product is equal to the product of the limits (when those limits do in fact exist.) </p>
<p>$$\lim_{\theta\to 0} \dfrac{\tan\theta}{\theta}= \lim_{\theta\to 0} \dfrac{\sin\theta}{(\cos\theta)\cdot \theta}=\lim_{\theta \to 0} \dfrac{1}{\cos\theta}\cdot \dfrac {\sin\theta}{\theta} = \lim_{\theta \to 0} \dfrac 1{\cos \theta} \cdot \lim_{\theta\to 0} \dfrac{\sin \theta}{\theta} = 1\cdot 1 = 1.$$</p>
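A quick numerical sanity check of the limit (a Python sketch; the error behaves like $\theta^2/3$):

```python
import math

# tan(x)/x approaches 1 as x -> 0.
for x in (0.1, 0.01, 0.001):
    print(x, math.tan(x) / x)
# ratios: 1.00334..., 1.0000333..., 1.000000333...
```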
|
<p>I understand that $\lim_{\theta\to0}(\sin(\theta)/\theta) = 1$, but what is $x$ when
$\lim_{\theta\to0}(\tan(\theta)/\theta) = x$, where $x$ is a real constant value?</p>

<p>Please help me, I will be eternally grateful :D</p>
| Billie | 48,863 | <p>I'll use $x$ instead of $\theta$.</p>
<p>Use the identity:</p>
<p>$$\tan(x) = \frac{\sin x}{\cos x}$$</p>
<p>By limit rules,</p>
<p>$$\lim_{x \ \to 0} \frac{f(x)}{g(x)} = \frac{\lim_{x \ \to 0} f(x)}{\lim_{x \ \to 0} g(x)}$$</p>
<p>Thus:</p>
<p>$$\lim_{x \to 0} \sin x = 0.$$
$$\lim_{x \to 0} \cos x = 1.$$</p>
<p>You want to find $$\lim_{x \to 0} \frac{\tan x}{x}$$ which is $$\lim_{x \to 0} \frac{\sin x}{\cos x} \cdot \frac{1}{x},$$ an indeterminate form of type $\frac{0}{0}$, so we have to use L'Hôpital's rule.</p>
<p>$$\lim_{x \to 0} \frac{(\sin x)'}{(x \cos x)'} = \lim_{x \to 0} \frac{\cos x}{\cos x - x \sin x} = \frac{1}{1} = 1.$$</p>
|
2,791,863 | <p>We need to calculate the limit
$$
\lim _{n\rightarrow \infty}((4^n+3)^{1/n}-(3^n+4)^{1/n})^{n3^n}
$$</p>
<p>I have tried taking the logarithm, but the limit doesn't seem to arrive at any familiar form.</p>
| user | 505,767 | <p>Following the hint by <a href="https://math.stackexchange.com/a/2791877/505767">Alex</a> with some more detail, we have that</p>
<ul>
<li>$(4^n+3)^{1/n}=4\left(1+\frac3{4^n}\right)^\frac1n=4e^{\frac1n\log \left(1+\frac3{4^n}\right)}=4\left(1+\frac3{n4^n}+o\left(\frac1{n4^n}\right)\right)=\\=4+\frac{12}{n4^n}+o\left(\frac{12}{n4^{n}}\right)$</li>
<li>$(3^n+4)^{1/n}=3\left(1+\frac4{3^n}\right)^\frac1n=3e^{\frac1n\log \left(1+\frac4{3^n}\right)}=3\left(1+\frac4{n3^n}+o\left(\frac1{n3^n}\right)\right)=\\=3+\frac{12}{n3^n}+o\left(\frac{12}{n3^{n}}\right)$</li>
</ul>
<p>then</p>
<p>$$( (4^n+3)^{1/n}-(3^n+4)^{1/n} )^{n3^n}=\left(1-\frac{12}{n3^n}+o\left(\frac{12}{n3^{n}}\right)\right)^{n3^n}=$$</p>
<p>$$=\left[\left(1-\frac{12}{n3^n}+o\left(\frac{12}{n3^{n}}\right)\right)^{\frac{1}{-\frac{12}{n3^n}+o\left(\frac{12}{n3^{n}}\right)}}\right]^{-12+o(1)}\to e^{-12}$$</p>
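The limit can be checked to a few digits with high-precision arithmetic (a Python sketch using the standard `decimal` module; the precision and the choice $n=30$ are mine, and convergence is slow — the leading correction to the exponent is $12(3/4)^n$):

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 60  # enough digits to resolve a difference of order 1e-15

def term(n):
    N = Decimal(n)
    a = (Decimal(4) ** n + 3) ** (1 / N)   # (4^n + 3)^(1/n)
    b = (Decimal(3) ** n + 4) ** (1 / N)   # (3^n + 4)^(1/n)
    return (a - b) ** (n * 3 ** n)

print(float(term(30)), math.exp(-12))  # ≈ 6.16e-06 vs 6.14e-06
```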
|
239,202 | <p>Let $\Gamma$ be a $C^2$ compact submanifold of $\mathbb{R}^n$. Consider the distance function $\delta$ from $\Gamma$. It is well known that, for sufficiently small $\varepsilon>0$, $\delta$ is $C^2$ on $\{ 0<\delta < \varepsilon\}$, and that it satisfies the eikonal equation</p>
<p>$$ \| \nabla \delta \| = 1, \qquad \text{with} \qquad \delta|_{\Gamma} = 0. $$</p>
<p>Now recall Bochner's formula, valid for a smooth function $u \in C^\infty(M)$ on a general Riemannian manifold. It reads:</p>
<p>$$ \frac{1}{2} \Delta\left( \| \nabla u\|^2 \right) = \nabla u \cdot \nabla \Delta u + \| \mathrm{Hess} (u) \|^2_{\mathrm{HS}} + \mathrm{Ric}(\nabla u, \nabla u). \qquad (\star)$$</p>
<p>Here $\| \cdot \|_{\mathrm{HS}}$ is the Hilbert-Schmidt norm and $\Delta$ is the Laplace-Beltrami operator. When we specify $(\star)$ to $M = \mathbb{R}^n$ and $u$ is a <em>smooth</em> solution of the eikonal equation, we have the following:</p>
<p>$$ \nabla u \cdot \nabla \Delta u = - \| \mathrm{Hess} (u) \|^2_{\mathrm{HS}}. \qquad (\star\star)$$</p>
<p>Observe that the r.h.s. requires only $C^2$ regularity of $u$, while the l.h.s., a priori, requires a further derivative ($\nabla u \cdot \nabla \Delta u$ is indeed the directional derivative of $\Delta u$ in the direction $\nabla u$).</p>
<p><strong>Q: Is it true, even if $\delta$ is a priori only a $C^2$ solution of the eikonal equation, that $\Delta \delta$ admits a directional derivative in the direction of $\nabla \delta$?</strong> Or, in other words, is it true that $(\star\star)$ holds for $\delta$?</p>
<p>The same question indeed can be posed in the Riemannian setting.</p>
<p>P.S: A direct computation with the distance from the $C^2$ (but not $C^3$) surface in $\mathbb{R}^2$ given by $y= x^{5/2}$ seems to support this claim.</p>
<p>This could be a standard fact about the regularity of solutions of the eikonal equation, but I have not been able to find any reference on this precise point.</p>
| Anton Petrunin | 1,441 | <p>If $u$ is smooth, the left-hand-side in $(\star\star)$ could be rewritten as $(\nabla u)(\Delta u)$. Assuming that $u$ is $C^2$ this expression $(\nabla u)(\Delta u)$ is always defined, while your original expression $\nabla u \cdot \nabla \Delta u$ might be undefined.</p>
<p>Indeed assume $\gamma$ is a unit-speed geodesic such that $u\circ\gamma(t)\equiv t +\mathrm{const}$, or equivalently $\dot\gamma(t)=\nabla_{\gamma(t)}u$.</p>
<p>Then $H(t)=\mathrm{Hess}_{\gamma(t)}u$ satisfies Riccati equation
$$H'+H^2=0$$
Taking the trace your equation follows.</p>
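Spelling out the trace step (a sketch; flat case, so the Riccati equation carries no curvature term — in the Riemannian setting it picks up a term contributing $\mathrm{Ric}(\nabla u,\nabla u)$):

```latex
% Trace of H' + H^2 = 0 along gamma, using gamma'(t) = nabla u:
\operatorname{tr} H(t) = \Delta u(\gamma(t)), \qquad
\operatorname{tr}\bigl(H'(t)\bigr) = \frac{d}{dt}\,\Delta u(\gamma(t))
   = \nabla u \cdot \nabla \Delta u, \qquad
\operatorname{tr}\bigl(H(t)^2\bigr) = \|\mathrm{Hess}(u)\|_{\mathrm{HS}}^2,
\quad\Longrightarrow\quad
\nabla u \cdot \nabla \Delta u + \|\mathrm{Hess}(u)\|_{\mathrm{HS}}^2 = 0,
```

which is the identity in question.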
<p><em>Hope it helps.</em></p>
|
760,032 | <p>Consider the integral
\begin{equation}
I(x)= \frac{1}{\pi} \int^{\pi}_{0} \sin(x\sin t) \,dt
\end{equation}
show that
\begin{equation}
I(x)= \frac{2x}{\pi} +O(x^{3})
\end{equation}
as $x\rightarrow0$.</p>
<p>I have used the Maclaurin series expansion of $I(x)$, but it did not work.
Please help me.</p>
| DeepSea | 101,504 | <p>$\sin(x\sin t) = x\sin t - \dfrac{(x\sin t)^3}{3!} + \cdots$, and integrating term by term should give the answer.</p>
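Carrying out the first step of that term-by-term integration (a sketch; the cubic term is $O(x^3)$ uniformly in $t$ since $|\sin t|\le 1$):

```latex
I(x) = \frac{1}{\pi}\int_0^{\pi} x\sin t \,dt + O(x^3)
     = \frac{x}{\pi}\Bigl[-\cos t\Bigr]_0^{\pi} + O(x^3)
     = \frac{2x}{\pi} + O(x^3), \qquad x \to 0.
```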
|
1,532,275 | <p>The kernel of a monoid homomorphism $f : M \to M'$ is the submonoid $\{m \in M : f(m)=1\}$. (This should not be confused with the kernel pair, which is often also named the kernel.)</p>
<p><em>Question.</em> Which submonoids $N$ of a given monoid $M$ arise as the kernel of a monoid homomorphism? (If necessary, let us assume that $M$ is commutative.)</p>
<p>Here is a necessary condition: If $xy \in N$, then $x \in N \Leftrightarrow y \in N$.</p>
| Thomas Andrews | 7,933 | <p>Hint: $$\frac{m}{k\cdots (k+m)} = \frac {1}{k\cdots (k+m-1)} - \frac{1}{(k+1)\cdots (k+m)}$$</p>
|
2,879,035 | <p>$f(x) = \int_{1}^{\infty} \frac{2}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2} dx$</p>
<p>find $P(X > 1)$</p>
<p>This is $X$ ~ $Norm(0, 1)$.</p>
<p>$P(X > 1) = 1 - P(X \leq 1) = 1 - 2 \phi(1) = 1-2(1-\phi(-1)) = 1 - 2(1-0.1587) = -0.6826$. </p>
<p>Yikes. Negative number. What am I doing wrong? </p>
| callculus42 | 144,421 | <p>Let's say the pdf is $$f(x)=\frac{2}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}$$</p>
<p>(without the integral)</p>
<p>Now you want to calculate $P(X> 1)$ which is </p>
<p>$\int_{1}^{\infty} \frac{2}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2} dx$</p>
<p>First we can factor out $2$. </p>
<p>$2\cdot \int_{1}^{\infty} \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2} dx=2\cdot P(Y>1)$, where $Y$ is standard normally distributed, $Y\sim\mathcal N(0,1)$</p>

<p>Since $Y$ is symmetrically distributed around $0$, we can say that $2\cdot P(Y> 1)= P(|Y|>1)$.</p>
<p>From the <a href="https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule" rel="nofollow noreferrer">68–95–99.7 rule</a> you probably know that $1-P(|Y|>1)=P(|Y|<1)=0.6827$</p>
<p>Consequently $P(|Y|>1)=1-0.6827=0.3173=P(X>1)$</p>
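A numerical cross-check with the standard library (a Python sketch; `erfc` gives the Gaussian tail probability, and the question's density carries an extra factor of $2$):

```python
import math

# P(Y > 1) for standard normal Y, via the complementary error function:
tail = 0.5 * math.erfc(1 / math.sqrt(2))   # ≈ 0.15866
answer = 2 * tail                          # the question's density has a factor 2
print(answer)  # ≈ 0.31731
```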
|
4,498,263 | <p>I know that a group action is transitive when there is one orbit. Say that <span class="math-container">$G$</span> is a group acting on the set <span class="math-container">$A$</span>. The identity element of <span class="math-container">$G$</span> will clearly create <span class="math-container">$|A|$</span>-many orbits. But the other elements will each create their own set of orbits. Will all of these elements of <span class="math-container">$G$</span> give the same total number of orbits?</p>
| JBL | 1,080,305 | <p>The right answer to the question is "Have you tried looking at any concrete examples?" I would start with some obvious ones like the symmetric group <span class="math-container">$S_3$</span> acting on <span class="math-container">$\{1, 2, 3\}$</span>, the alternating group <span class="math-container">$A_4$</span> acting on <span class="math-container">$\{1, 2, 3, 4\}$</span>, and the cyclic group <span class="math-container">$C_6$</span> acting on <span class="math-container">$\{1, 2, 3, 4, 5, 6\}$</span> by the operation "add 1 modulo 6". Look at all the elements in these groups, and consider the orbits into which they divide the set. The second example may be particularly instructive.</p>
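To make such experiments concrete, here is a small Python sketch (my own illustration) that computes the orbits of the cyclic group generated by a single permutation, given as a dict from each point to its image:

```python
def orbits(perm):
    """Orbits of the cyclic group generated by the permutation `perm`."""
    seen, orbs = set(), []
    for start in perm:
        if start in seen:
            continue
        orb, x = [], start
        while x not in seen:
            seen.add(x)
            orb.append(x)
            x = perm[x]
        orbs.append(orb)
    return orbs

# In A_4 acting on {1,2,3,4}: the identity gives 4 orbits,
# while the 3-cycle (1 2 3) gives only 2 -- so the counts differ.
print(orbits({1: 1, 2: 2, 3: 3, 4: 4}))  # [[1], [2], [3], [4]]
print(orbits({1: 2, 2: 3, 3: 1, 4: 4}))  # [[1, 2, 3], [4]]
```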
<p>(It is amazing how much energy is devoted in comments to pointless pedantry about whether or not an element divides the set into orbits, given that (1) everyone complaining about this phrasing should be capable of understanding what the OP means via the unimportant transition from an element to the cyclic group it generates, and (2) this point is irrelevant to the question being asked.)</p>
|
2,007,224 | <p>Analysis problem:</p>
<p><strong>Let $f$ and $g$ be differentiable on $ \mathbb R$. Suppose that $f(0)=g(0)$ and that $f'(x) \le g'(x)$ for all $x \ge 0$. Show that $f(x) \le g(x)$ for all $x \ge 0$.</strong></p>
<p>Is my proof correct?</p>
<p>I am trying to use the Generalized Mean Value Theorem:</p>
<p><a href="https://i.stack.imgur.com/SOfvH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SOfvH.jpg" alt="Generalized Mean Value Theorem"></a></p>
<p>As $f$ and $g$ are differentiable on $\mathbb R$, $f$ and $g$ are continuous on $\mathbb R$ and we can use the Generalized Mean Value Theorem. Using the starting condition $f(0)=g(0)$, we have that for any $b$ that is greater than $0$, there exists a $c$ element of $(0,b)$ such that</p>
<p>$f' (c) g(b) = g' (c) f(b)$</p>
<p>By the starting conditions,</p>
<p>$f'(x)$ is less than or equal to $g'(x)$ for all $x$ greater than or equal to $0$</p>

<p>Therefore, $f(b)$ is less than or equal to $g(b)$ for any $b$ element of $(0, b)$</p>

<p>As $b$ is any number bigger than $0$</p>

<p><strong>$f(x)$ is less than or equal to $g(x)$ for any $x$ greater than or equal to $0$. Q.E.D.</strong></p>
| Learnmore | 294,365 | <p>Let $h(x)=f(x)-g(x);x\in [0,\infty)$</p>
<p>$h^{'}(x)=f^{'}(x)-g^{'}(x)\le 0\implies h$ is decreasing on $[0,\infty)\implies h(x)\le h(0)=0\ \forall x\in [0,\infty)\implies f(x)\le g(x)$</p>
|
29,766 | <p>I'm looking for a news site for Mathematics which particularly covers recently solved mathematical problems together with the unsolved ones. Is there a good site MO users can suggest me or is my only bet just to google for them?</p>
| Willie Wong | 3,948 | <p>arXiv.org</p>
<p>Any paper worth reading <em>should</em> include some background material and a description of general progress in its introduction section. This is especially true of papers that actually <em>solve</em> a problem, rather than chipping away at some small technicality. </p>
|
821,845 | <p>As the title says, why are those two equivalent? I can find a simple derivation (using natural deduction) of $\bot$ from $\neg\neg\bot$, but i fail at proving the other implication.</p>
| dtldarek | 26,306 | <p>The $\bot \to \neg\neg\bot$ implication is a special case of $\forall \alpha.\ \forall \beta.\ \alpha \to (\beta \to \alpha)$.<br>
Below are both derivations, using the <a href="http://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence" rel="nofollow">Curry-Howard isomorphism</a>. </p>
<p>Formula
$$((\bot \to \bot) \to \bot) \to \bot$$
is a special case of
$$\forall \beta.\ \forall \alpha.\ ((\alpha \to \alpha) \to \beta) \to \beta $$
which can be proved by $\lambda$-term
$$\lambda f.\ f\ (\lambda x.\ x)$$</p>
<p>The other implication</p>
<p>$$\bot \to ((\bot \to \bot) \to \bot)$$</p>
<p>is a special case of</p>
<p>$$\forall \beta.\ \forall \alpha.\ \beta \to (\alpha \to \beta)$$</p>
<p>which can be proved by $\lambda$-term</p>
<p>$$\lambda x. \lambda f.\ x$$</p>
<p>I hope this helps $\ddot\smile$</p>
|
925,140 | <p>$$f(x)=\frac { x }{ x+4 } $$</p>
<p>I am not sure how to go about solving this but here is what I have done so far:</p>
<p>$$y=\frac { x }{ x+4 } $$</p>
<p>$$(x+4)y=\frac { x }{ x+4 } (x+4)$$</p>
<p>$$yx+4y=x$$</p>
<p>I feel stuck now. Where do I go from here?</p>
| Adi Dani | 12,848 | <p>$$y=\frac { x }{ x+4 },x\neq-4 $$
$$y(x+4)=x$$
$$xy+4y=x$$
$$xy-x=-4y$$
$$x(y-1)=-4y,y\neq1$$
$$x=\frac{-4y}{y-1}$$
$$x=\frac{4y}{1-y}$$</p>
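A quick round-trip check of the derived inverse (a Python sketch):

```python
def f(x):
    return x / (x + 4)        # original function, x != -4

def f_inv(y):
    return 4 * y / (1 - y)    # derived inverse, y != 1

for y in (0.3, -2.0, 0.9):
    assert abs(f(f_inv(y)) - y) < 1e-12
print("round trip ok")
```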
|
47,890 | <p>Hey, so I'm programming something that finds the angle of a line between 0 and 180 degrees based on two points.... </p>

<p>The equation to find the answer is <code>Angle = sin-1((1/Hypotenuse)*B)</code> where B is the vertical side of the triangle formed and the hypotenuse is the distance between point 1 and 2. </p>
<p>However, the inverse sine function in my program only takes and outputs radians, so instead the equation to get degrees becomes </p>
<p><code>(Angle = sin-1(((1/Hypotenuse)*B *3.14) /180) *180) /3.14</code></p>
<p>This does not, however, seem to be right for some reason, as when putting in the parameters of <code>Hypotenuse=150</code>, <code>B=149.6</code> I get the answer of 85.8 (right) for the original equation and then 0.9973 degrees for the new equation?? </p>
<p><strong>Please help me fix this!</strong></p>
| Arturo Magidin | 742 | <p>If $B$ is the length of the opposite side, and $H$ is the length of the hypotenuse, then $B/H$ is the sine of the angle. This is <em>not</em> measured in either degrees or radians; it's the value of the sine.</p>

<p>If you take $\arcsin(B/H)$, this will be given in radians. To convert to degrees, you multiply by $180/\pi$. So what you want is:
$$\mathrm{angle} = \arcsin\Biggl(\left(\frac{1}{\text{hypotenuse}}\right)*B\Biggr)*180\Bigm/\pi.$$</p>
<p>$3.14$ is a very rough approximation to $\pi$.</p>
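In most languages the conversion is a built-in; for example, a Python sketch (the asker's language is not specified):

```python
import math

hypotenuse, b = 150.0, 149.6
angle_rad = math.asin(b / hypotenuse)
angle_deg = math.degrees(angle_rad)      # same as angle_rad * 180 / math.pi
print(angle_deg)  # ≈ 85.8
```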
|
3,578,191 | <p>Without tables or a calculator, find the value of <span class="math-container">$\displaystyle\frac{(\sqrt5 +2)^6 - (\sqrt5 - 2)^6}{8\sqrt5}$</span>.</p>
<p>I do not understand how the positive/negative signs are obtained as shown in the book; is there a formula for expanding these kind of things (what kind of expression is it, by the way?)?</p>
<p><a href="https://i.stack.imgur.com/TZjZo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TZjZo.png" alt="enter image description here"></a></p>
<p>This is my solution:</p>
<p><span class="math-container">$\displaystyle\frac{(\sqrt5 +2)^6 - (\sqrt5 - 2)^6}{8\sqrt5}$</span></p>
<p><span class="math-container">$= \displaystyle\frac{[(\sqrt5+2)^3+(\sqrt5-2)^3][(\sqrt5+2)^3-(\sqrt5-2)^3]}{8\sqrt5}$</span></p>
<p><span class="math-container">$=\displaystyle\frac{(\sqrt5+2+\sqrt5-2)[(\sqrt5+2)^2\color{red}{+}(\sqrt5+2)(\sqrt5-2)+(\sqrt5-2)^2](\sqrt5+2-\sqrt5+2)[(\sqrt5+2)^2\color{red}{-}(\sqrt5+2)(\sqrt5-2)+(\sqrt5-2)^2]}{8\sqrt5}$</span></p>
<p><span class="math-container">$=\displaystyle\frac{[2\sqrt5(5+4\sqrt5+4+\color{red}{5-4}+5-4\sqrt5+4][4(5+4\sqrt5+4\color{red}{-(5-4)}+(5-4\sqrt5+4)]}{8\sqrt5}$</span></p>
<p><span class="math-container">$=\displaystyle\frac{2584\sqrt5}{8\sqrt5}$</span></p>
<p><span class="math-container">$=323$</span></p>
<p>Because of the multiplication, I still got the same answer as given in the book. However, is the book or I correct in terms of the positive/negative signs(in red)?</p>
| J. W. Tanner | 615,567 | <p>The book solution used the formulas for the sum and difference of two cubes, </p>
<p><span class="math-container">$x^3+y^3=(x+y)(x^2-xy+y^2)$</span> and <span class="math-container">$x^3-y^3=(x-y)(x^2+xy+y^2),$</span></p>
<p>with <span class="math-container">$x=\sqrt5+2$</span> and <span class="math-container">$y=\sqrt5-2$</span>.</p>
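A floating-point sanity check of the value (a Python sketch):

```python
from math import sqrt

val = ((sqrt(5) + 2) ** 6 - (sqrt(5) - 2) ** 6) / (8 * sqrt(5))
print(val)  # ≈ 323.0
```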
|
3,691,692 | <p>Find all real values of <span class="math-container">$a$</span> such that <span class="math-container">$x^2+(a+i)x-5i=0$</span> has at least one real solution.</p>
<p><span class="math-container">$$x^2+(a+i)x-5i=0$$</span></p>
<p>I have tried two ways of solving this and cannot seem to find a real solution.</p>
<p>First if I just solve for <span class="math-container">$a$</span>, I get <span class="math-container">$$a=-x+i\frac{5-x}{x}$$</span>
Which is a complex solution, not a real solution...</p>
<p>Then I tried using the fact that <span class="math-container">$x^2+(a+i)x-5i=0$</span> is in quadratic form of <span class="math-container">$x^2+px+q=0$</span> with <span class="math-container">$p=(a+i)$</span> and <span class="math-container">$q=5i$</span></p>
<p>So I transform <span class="math-container">$$x^2+(a+i)x-5i=0$$</span> to <span class="math-container">$$(x+\frac{a+i}{2})^2=(\frac{a+i}{2})^2+5i$$</span></p>
<p>Now it is in the form that one side is the square of the other but I don't know how to find the roots since I'm not sure if I'm supposed to convert <span class="math-container">$(\frac{a+i}{2})^2+5i$</span> to polar form since I can't take the modulus of <span class="math-container">$(\frac{a+i}{2})^2+5i$</span> (or at least I don't know how).</p>
<p>At this point I feel like I'm just using the wrong method; if anyone could guide me in the right direction I would very much appreciate it. Thank you.</p>
| Barry Cipra | 86,747 | <p>Taking a different approach entirely, note that</p>
<p><span class="math-container">$$\left({b\over a}+{d\over c}\right)\left({a\over b}+{c\over d}\right)=1+{ad\over bc}+{bc\over ad}+1$$</span></p>
<p>Thus, letting <span class="math-container">$ad/bc=x$</span> and noting that <span class="math-container">$x\gt0$</span>, we see that</p>
<p><span class="math-container">$$\left({b\over a}+{d\over c}\right)\left({a\over b}+{c\over d}\right)\ge4\iff x+{1\over x}\ge2\iff x^2-2x+1\ge0\iff(x-1)^2\ge0$$</span></p>
<p>(Note, the stipulation <span class="math-container">$x\gt0$</span> is important when multiplying both sides of the inequality <span class="math-container">$x+1/x\ge2$</span> by <span class="math-container">$x$</span> to get to <span class="math-container">$x^2+1\ge2x$</span>.)</p>
|
4,506,151 | <p>Determine all functions <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> such that <span class="math-container">$$f(x f(x+y))+f(f(y) f(x+y))=(x+y)^{2}, \forall x,y \in \mathbb{R} \tag1)$$</span></p>
<p>My approach:
Let <span class="math-container">$x=0$</span>, we get
<span class="math-container">$$f(0)+f\left((f(y))^2\right)=y^2$$</span>
<span class="math-container">$\Rightarrow$</span>
<span class="math-container">$$f\left((f(y))^2\right)=y^2-f(0)\tag2 $$</span>
Let us assume <span class="math-container">$f(0)=k \ne 0$</span></p>
<p>Put <span class="math-container">$y=0$</span> above, we get
<span class="math-container">$$f(k^2)=-k$$</span>
Also put <span class="math-container">$y=-x$</span> in <span class="math-container">$(1)$</span>, we get
<span class="math-container">$$f(kf(x))+f(kf(-x))=0, \forall x \in \mathbb{R}$$</span>
Put <span class="math-container">$x=0$</span> above we get
<span class="math-container">$$f(k^2)=0$$</span>
<span class="math-container">$\Rightarrow$</span>
<span class="math-container">$f(k^2)$</span> has two different images <span class="math-container">$0,-k$</span> which contradicts that <span class="math-container">$f$</span> is a function. Hence <span class="math-container">$k=0 \Rightarrow f(0)=0$</span>.
So from <span class="math-container">$(2)$</span> we get:
<span class="math-container">$$f\left((f(y))^2\right)=y^2 \cdots (3)$$</span>
Now put <span class="math-container">$y=0, x=f(x)$</span> in <span class="math-container">$(1)$</span>, and use the fact <span class="math-container">$f(0)=0$</span>,we get
<span class="math-container">$$f\left((f(x))^2\right)=(f(x))^2$$</span>
Since <span class="math-container">$x$</span> is dummy variable, we get <span class="math-container">$$f\left((f(y))^2\right)=(f(y))^2 \cdots (4)$$</span>
From <span class="math-container">$(3),(4)$</span>, we get <span class="math-container">$$f(x)=\pm x$$</span></p>
<p>I just want to ask, is my approach fine? If not where is the flaw? Also other approaches are welcomed.</p>
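As a numerical probe of that conclusion (a Python sketch): the global choice $f(x)=x$ does satisfy $(1)$ on a sample grid, while the global choice $f(x)=-x$ does not — so the pointwise conclusion $f(x)=\pm x$ needs more care.

```python
def check(f, samples):
    """Does f(x*f(x+y)) + f(f(y)*f(x+y)) == (x+y)^2 hold on the sample grid?"""
    return all(
        abs(f(x * f(x + y)) + f(f(y) * f(x + y)) - (x + y) ** 2) < 1e-9
        for x in samples for y in samples
    )

samples = [-2.0, -1.0, 0.0, 0.5, 1.0, 3.0]
print(check(lambda x: x, samples))   # True
print(check(lambda x: -x, samples))  # False
```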
| RDK | 979,028 | <blockquote>
<p><span class="math-container">$f: \Bbb{R \to R}, f(xf(x+y))+f(f(y)f(x+y))=(x+y)^2.$</span></p>
</blockquote>
<p>My attempt was to show that <span class="math-container">$f(0)=0$</span>, and I did it.</p>
<p><span class="math-container">\begin{align}
P(0, y): \; & f(0)+f(f(y)^2)=y^2. \\
P(0, 0): \; & f(0)+f(f(0)^2)=0. \\
& \text{let } f(0)=k. \\
\ \\
\Rightarrow \; & k+f(f(y)^2)=y^2, k+f(k^2)=0. \\
\ \\
& \text{let } f(a)^2=f(b)^2. \\
P(0, a): \; & k+f(f(a)^2)=a^2. \\
P(0, b): \; & k+f(f(b)^2)=b^2. \\
\therefore \; & f(a)^2=f(b)^2 \Leftrightarrow a^2=b^2. \\
\ \\
P(k, -k): \; & f(kf(0))+f(f(-k)f(0))=0. \\
\Rightarrow \; & f(k^2)+f(kf(-k))=0, f(kf(-k))=k. \\
\ \\
P(-k, 0): \; & f(-kf(-k))+f(kf(-k))=k^2. \\
\Rightarrow \; & f(-kf(-k))=k^2-k. \\
P(k, -2k): \; & f(kf(-k))+f(f(-2k)f(-k))=k^2. \\
\Rightarrow \; & f(f(-2k)f(-k))=k^2-k. \\
\ \\
\Rightarrow \; & f(-kf(-k))=f(f(-2k)f(-k)). \\
\Rightarrow \; & f(-kf(-k))^2=f(f(-2k)f(-k))^2. \\
\ \\
\therefore \; & k^2f(-k)^2=f(-2k)^2f(-k)^2, k^2=f(-2k)^2. \\
\Rightarrow \; & f(0)^2=f(-2k)^2, 4k^2=0, k=0. \\
\therefore \; & f(0)=0. \\
\ \\
\ \\
\end{align}</span></p>
|
2,256,534 | <p>As I just started learning the different rules of differentiation, I have some burning questions, such as the ones below. I'm required to differentiate the following with respect to $x$.</p>
<blockquote>
<p><strong>1)</strong>
$$\frac{2x^2+4x}{x}$$
<strong>2)</strong>
$$\frac{(1-x)(x-2)}{x}$$</p>
</blockquote>
<p>For 1. It is easy to bring the $x$ up and then differentiate from there.</p>

<p>For 2. Once I bring the $x$ up, I'm stuck, as I can't differentiate it from
$x^{-1}((1-x)(x-2))$.
I was told not to change the question by expanding $(1-x)(x-2)$.
So is there any way that I can use the rule of addition and subtraction of functions to solve this?</p>
| projectilemotion | 323,432 | <p>I think this is what you mean by "I was told not to change the question".</p>
<p>This is to answer question <strong>2)</strong>. I assumed that you know how to do <strong>1)</strong>. Please correct me if I am wrong.</p>
<hr>
<p>Since you cannot expand the $(1-x)(x-2)$, start by applying the <a href="https://en.wikipedia.org/wiki/Product_rule" rel="nofollow noreferrer">product rule</a> on the numerator:
$$(uv)'=u'v+uv'$$
Therefore, we have:
$$\begin{matrix} u=1-x & v=x-2 \\ u'=-1 & v'=1 \end{matrix}$$
Hence, the derivative of what is on the numerator is:
$$\frac{d}{dx}((1-x)(x-2))=2-x+1-x=3-2x \tag{1}$$</p>
<hr>
<p>We can now apply the <a href="https://en.wikipedia.org/wiki/Quotient_rule" rel="nofollow noreferrer">quotient rule</a>:
$$\left(\frac{f}{g}\right)'=\frac{f'g-fg'}{g^2}$$
Letting:
$$\begin{matrix} f=(1-x)(x-2) & g=x \\ f'=3-2x & g'=1\end{matrix}$$
Note that we obtained $f'$ from equation $(1)$.</p>
<p>Applying the quotient rule should give you the result you require after simplification.</p>
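For reference, carrying out that simplification (a sketch; it agrees with differentiating the rewritten form $-x+3-\frac{2}{x}$ directly):

```latex
\frac{d}{dx}\,\frac{(1-x)(x-2)}{x}
  = \frac{(3-2x)\,x - (1-x)(x-2)}{x^{2}}
  = \frac{3x-2x^{2}-\left(-x^{2}+3x-2\right)}{x^{2}}
  = \frac{2-x^{2}}{x^{2}}
  = \frac{2}{x^{2}} - 1.
```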
|
3,743,743 | <p>I have this condition:</p>
<p><strong>(A is true OR B is true OR C is true) OR (A is false AND B is false AND C is false)</strong></p>
<p><em>(edit: It's been pointed out that this formula is wrong for what I want)</em></p>
<p>So as the title says, I want the condition to be true if only 1 of A, B or C is true, or if they're all false.</p>
<p>Is there a better way to write this condition?</p>
<p>edit 2: The context is a SQL Server validation check.</p>
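The intended predicate — true exactly when at most one of the three flags is set — can be sketched as follows (in Python rather than SQL, purely for illustration):

```python
def at_most_one(a, b, c):
    # booleans sum as 0/1, so this is true for "all false" or "exactly one true"
    return a + b + c <= 1

print(at_most_one(False, False, False))  # True
print(at_most_one(True, False, False))   # True
print(at_most_one(True, True, False))    # False
```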
<p>Thanks.</p>
| mathreadler | 213,607 | <p>To invert any matrix <span class="math-container">$\bf X$</span> you can reformulate it into</p>
<p>"Find the matrix which if multiplied by <span class="math-container">$\bf X$</span> gets closest to <span class="math-container">$\bf I$</span>".</p>
<p>To solve that problem, we can set up the following equation system.</p>
<p><span class="math-container">$$\min_{\bf v}\|{\bf M_{R(X)}} {\bf v} - \text{vec}({\bf I})\|_2^2$$</span></p>
<p>where <span class="math-container">$\text{vec}({\bf I})$</span> is vectorization of the identity matrix, <span class="math-container">$\bf v$</span> is vectorization of <span class="math-container">${\bf X}^{-1}$</span> which we are solving for and <span class="math-container">$\bf M_{R(X)}$</span> represents multiplication (from the right) by matrix <span class="math-container">$\bf X$</span>. If no inverse exists, then this should still find a closest approximation to an inverse in the 2-norm sense.</p>
<p>This will be a linear least squares problem that should converge in worst case same number of iterations as matrix size.</p>
<p>But, if we run some iterative solver and have a good initial guess, then it can go much faster than that.</p>
<p>Information on how to construct <span class="math-container">$\bf M_{R(X)}$</span> should exist at the wikipedia entry for <a href="https://en.wikipedia.org/wiki/Kronecker_product#Matrix_equations" rel="nofollow noreferrer">Kronecker Product</a>.</p>
<p>If you look at this:</p>
<p><span class="math-container">$$({\bf B^T\otimes A})\text{vec}({\bf X}) = \text{vec}({\bf AXB}) = \text{vec}({\bf C})$$</span></p>
<p>We can rewrite it: <span class="math-container">$$({\bf B^T\otimes A})\underset{\bf v}{\underbrace{\text{vec}({\bf X})}}-\underset{\text{vec}({\bf I})}{\underbrace{\text{vec}({\bf C})}} = {\bf 0}$$</span></p>
<p>Maybe now it becomes clearer what <span class="math-container">$\bf M_{R(X)}$</span> should be.</p>
<hr />
<p><strong>Edit</strong>:</p>
<p>If we combine this with an iterative Krylov subspace solver, we are allowed to choose an initial "guess" for the solution to the equation system.</p>
<p>So let us assume we have found one solution <span class="math-container">$X_1 = (A^TA + \mu_1 I)^{-1}$</span>.</p>
<p>We can now use <span class="math-container">$X_1$</span> as initial guess when solving for <span class="math-container">$X_2 = (A^TA+\mu_2 I)^{-1}$</span></p>
|
1,652,758 | <p>the question (not homework) I am trying to answer is, in part:</p>
<blockquote>
<p><em>Let $f$ be an analytic function that maps the open unit disk $D$ into
itself and vanishes at the origin. Prove that the inequality $$|f(z)| + |f(−z)| ≤ 2 |z^2| $$ is strict, except at the origin, unless f has the
form $f(z) = λz^2$ for some $λ$ a constant of absolute value one</em>.</p>
</blockquote>
<p>I applied Schwarz' lemma to obtain the inequality. Below is my answer:</p>
<blockquote>
<p><em>It is clear that the inequality holds at the origin. The hypotheses given for $f$ are the same as those required for Schwarz' lemma to apply to $f$; the lemma clearly applies to both $f(z)$ and $f(-z)$. Thus I have
$$|f(z) + f(-z)| \leq |f(z)| + |f(-z)| \leq |z| + |z| = 2|z|$$
Divide both sides by $|2z|$ (I have assumed $z\neq 0$):
$$\frac{|f(z) + f(-z)|}{|2z|} \leq 1$$
This fact shows that the function $(f(z) + f(-z))/ 2z$ has a removable singularity at $z = 0$ (since it is bounded in a neighbourhood of that point). Calling the analytic continuation $g(z)$, $g$ is a holomorphic map from $D$ to $D$ and vanishes at the origin; to see why, expand $f(z) + f(-z)$ into the sum of two power series, note that the first two terms vanish, and conclude that $g$ has a zero of order at least one at the origin. So Schwarz' lemma applies to $g(z)$, and in particular $|g(z)| \leq |z|$. But this fact directly implies the desired inequality.</em></p>
</blockquote>
<p><strong>The problem is</strong> that I have come most of the way to proving the strict inequality, but cannot prove that the given form is the only possible form for $f(z)$. I proved that if $|f(c) + f(-c)| = 2|c^2|$ for some $c$ not the origin, then the constructed function $g(z)$ is a rotation by Schwarz' lemma, which means that $f(z) + f(-z) = \lambda z^2$. This means that all even-index coefficients of the power series of $f$ must be zero, but it does <em>not</em> rule out the possibility that there are odd-index coefficients. None of the standard tricks like Cauchy inequalities work since the domain is the unit disc. I also tried looking at the other implications of Schwarz (i.e., that $|f'(0)| < 1$) and that, too, led nowhere. What am I missing here?</p>
| Memo Flota | 928,734 | <p>You already have that <span class="math-container">$f(z) +f(-z) = 2\lambda z^2$</span>. Let <span class="math-container">$h:D \longrightarrow \mathbb{C}$</span> be a function defined by <span class="math-container">$h(z) = \lambda z^2 -f(-z)$</span>. Hence, <span class="math-container">$h$</span> is analytic and <span class="math-container">$f(z) = \lambda z^2 +h(z)$</span>. We are going to show that <span class="math-container">$h \equiv 0$</span>.</p>
<p>First, <span class="math-container">$h$</span> is odd because
<span class="math-container">\begin{align*}
h(-z) &= \lambda (-z)^2 -f(z) \\
&= \lambda (z)^2 -\big( \lambda z^2 +h(z) \big) \\
&= -h(z).
\end{align*}</span></p>
<p>On the other hand, <span class="math-container">$f$</span> satisfies the Schwarz Lemma hypotheses, so <span class="math-container">$\forall z \in D, \ |f(z)| \leq |z|$</span> and
<span class="math-container">\begin{equation} \label{07:eq:01}
\left| \lambda z^2 +h(z) \right| = |f(z)| \leq |z|.
\end{equation}</span>
Furthermore <span class="math-container">$|f(-z)| \leq |z|$</span>, thus
<span class="math-container">\begin{equation} \label{07:eq:02}
\left| \lambda z^2 -h(z) \right|
= \left| \lambda (-z)^2 +h(-z) \right|
= |f(-z)|
\leq |z|.
\end{equation}</span></p>
<p>Because of the Parallelogram Law, we have
<span class="math-container">\begin{align*}
2|\lambda z^2|^2 +2|h(z)|^2
&= \left| \lambda z^2 +h(z) \right|^2 + \left| \lambda z^2 -h(z) \right|^2 \\
& \leq 2|z|^2 .
\end{align*}</span>
It follows that <span class="math-container">$\forall z \in D, \ |h(z)| \leq \sqrt{|z|^2 -|z|^4}.$</span> The square root is well-defined because <span class="math-container">$z \in D$</span> implies <span class="math-container">$0<|z|<1$</span>, then <span class="math-container">$0 < |z|^4 < |z|^2$</span>.</p>
<p>Finally, we argue as in the proof of the Schwarz Lemma. Let <span class="math-container">$r \in \mathbb{R}$</span> be such that <span class="math-container">$0<r<1$</span> and <span class="math-container">$B = B_{r}(0)$</span>. If <span class="math-container">$z \in \partial B$</span>, then <span class="math-container">$|z|=r<1$</span>, so <span class="math-container">$h$</span> is well-defined on all of <span class="math-container">$\overline{B}$</span>. By the Maximum Modulus Principle we conclude that
<span class="math-container">$$
\max_{z \in \overline{B}} |h(z)|
= \max_{z \in \partial B} |h(z)|
\leq \sqrt{r^2 -r^4}.
$$</span>
As <span class="math-container">$r \rightarrow 1$</span> we get <span class="math-container">$|h(z)|\leq 0$</span> for all <span class="math-container">$z \in D$</span>. Thus <span class="math-container">$h \equiv 0$</span> and <span class="math-container">$f(z) = \lambda z^2$</span>.</p>
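<p>As a quick numerical sanity check (my addition, not part of the proof): for $f(z)=\lambda z^2$ with $|\lambda|=1$ the bound $|f(z)|+|f(-z)|\le 2|z|^2$ is attained, while for a made-up $f$ with an odd term, say $f(z)=z^2/2+z^3/4$ (which maps $D$ into $D$ and vanishes at $0$), the inequality is strict away from the origin.</p>

```python
import cmath

lam = cmath.exp(1j * 0.7)          # any unimodular constant, |lam| = 1

def f_eq(z):                       # equality case: f(z) = lam * z^2
    return lam * z * z

def f_strict(z):                   # made-up example with an odd term
    return z * z / 2 + z ** 3 / 4

for z in (0.5, 0.3 + 0.4j, -0.8j):
    lhs_eq = abs(f_eq(z)) + abs(f_eq(-z))
    lhs_st = abs(f_strict(z)) + abs(f_strict(-z))
    bound = 2 * abs(z) ** 2
    assert abs(lhs_eq - bound) < 1e-12   # bound attained for lam * z^2
    assert lhs_st < bound                # strict for the odd-term example
```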
|
1,715,265 | <p>I've tried a method similar to showing that $\mathbb{Q}(\sqrt2, \sqrt3)$ is a primitive field extension, but the cube root of 2 just makes it a nightmare.</p>
<p>Thanks in advance </p>
| Mathmo123 | 154,802 | <p>Let $\alpha=\sqrt2+\sqrt[3]2$. It's clear that $\mathbb Q(\alpha)$ is a subextension of $\mathbb Q(\sqrt2,\sqrt[3]2)$. All that remains is to show that $\mathbb Q(\alpha)$ has degree $6$ over $\mathbb Q$.</p>
<p>You could do this by explicitly calculating the minimal polynomial of $\alpha$ over $\mathbb Q$, or by observing that
$$(\alpha-\sqrt2)^3=2,$$ which can be used to deduce that $\mathbb Q(\alpha)$ is a degree $3$ extension of $\mathbb Q(\sqrt2)$. </p>
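<p>To see the degree-$6$ claim concretely, one can square away the radicals as the hint suggests: $(\alpha-\sqrt2)^3=2$ gives $\alpha^3+6\alpha-2=\sqrt2\,(3\alpha^2+2)$, and squaring yields the candidate minimal polynomial $x^6-6x^4-4x^3+12x^2-24x-4$. A quick numerical check of this (my own verification, not from the answer):</p>

```python
# Check that alpha = sqrt(2) + 2^(1/3) is a root of
#   p(x) = x^6 - 6x^4 - 4x^3 + 12x^2 - 24x - 4
# (obtained by squaring away the radicals), and that p has no rational
# root, consistent with [Q(alpha):Q] = 6.

alpha = 2 ** 0.5 + 2 ** (1 / 3)
coeffs = [1, 0, -6, -4, 12, -24, -4]       # x^6, x^5, ..., constant

def p(x):
    acc = 0
    for c in coeffs:                        # Horner evaluation
        acc = acc * x + c
    return acc

assert abs(p(alpha)) < 1e-9

# rational root test: any rational root would have to be +-1, +-2, +-4
assert all(p(r) != 0 for r in (1, -1, 2, -2, 4, -4))
```

(The rational-root test alone does not prove irreducibility; the degree count in the answer is what settles it.)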
|
1,758,159 | <p>$A$ is a symmetric (or skew-symmetric) matrix and $B$ is a nonsingular matrix.
What can I say about $$BAB^T?$$</p>
| Robert Israel | 8,508 | <p>Hint: the transpose of a product is the product of transposes in reverse order.</p>
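<p>Spelling the hint out with a small check of my own (the matrices are illustrative): $(BAB^T)^T = (B^T)^T A^T B^T = B A^T B^T$, so $BAB^T$ is symmetric when $A$ is, and skew-symmetric when $A$ is.</p>

```python
# Verify (B A B^T)^T = B A^T B^T on small examples, so symmetry
# (A^T = A) and skew-symmetry (A^T = -A) are preserved by A -> B A B^T.

def T(M):                                   # transpose
    return [list(row) for row in zip(*M)]

def mul(X, Y):                              # matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

B = [[2, 1], [1, 1]]                        # nonsingular (det = 1)
A_sym = [[1, 3], [3, -2]]
A_skew = [[0, 5], [-5, 0]]

C = mul(mul(B, A_sym), T(B))
D = mul(mul(B, A_skew), T(B))

assert C == T(C)                                        # still symmetric
assert D == [[-x for x in row] for row in T(D)]         # still skew-symmetric
```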
|
739,960 | <ol>
<li><p>$ \log_a{b} \times \log_b{a} = $ ?</p></li>
<li><p>$ \log_a{b} + \log_b{a} = \sqrt{29} $</p></li>
</ol>
<p>What is $ \log_a{b} - \log_b{a} = $ ?</p>
<p>3.</p>
<p>What is b in the following:</p>
<p>$$ \log_b{3} + \log_b{11} + \log_b{61} = 1 $$</p>
<p>and</p>
<p>4.</p>
<p>$$ \frac{1}{log_2{x}} + \frac{1}{log_{25}{x}} - \frac{3}{\log_8{x}} = \frac{1}{\log_b{x}} $$
What is b?</p>
<p>Can anyone help me solve these?</p>
| Asimov | 137,446 | <p>The way to start all of these and turn them into simple algebra is that $\log_ab=\frac{\log_x b}{\log_x a}$ Using that formula, all of these become basic algebra. Give it a try and comment what you get.</p>
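<p>A quick numerical illustration of that formula (my example values): with $\log_a b=\frac{\ln b}{\ln a}$ one immediately gets $\log_a b\cdot\log_b a=1$ (problem 1), and problem 3 collapses to $\log_b(3\cdot 11\cdot 61)=1$, i.e. $b=2013$.</p>

```python
import math

def log(base, x):
    return math.log(x) / math.log(base)    # change of base

a, b = 5.0, 7.0
assert abs(log(a, b) * log(b, a) - 1) < 1e-12      # problem 1

# problem 3: log_b 3 + log_b 11 + log_b 61 = log_b(3*11*61) = 1  =>  b = 2013
b3 = 3 * 11 * 61
assert b3 == 2013
assert abs(log(b3, 3) + log(b3, 11) + log(b3, 61) - 1) < 1e-12
```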
|
1,329,078 | <p>I am having problems classifying the differential equation $y''=y(x^2)$ into categories like homogeneous, exact, Bernoulli, separable and non-exact, so that I could find the general solution. </p>
<p>Or could someone help me find the solution? </p>
| user2661923 | 464,411 | <p><strong>Tools</strong><br></p>
<ul>
<li><p><a href="https://en.wikipedia.org/wiki/Binomial_theorem" rel="nofollow noreferrer">Binomial Theorem</a> <br>
For <span class="math-container">$\displaystyle k \in \Bbb{Z_{\geq 0}} ~: ~2^k = (1 + 1)^k = \sum_{r=0}^k \binom{k}{r}$</span>.</p>
</li>
<li><p><a href="https://en.wikipedia.org/wiki/Hockey-stick_identity" rel="nofollow noreferrer">Hockey Stick Identity</a> <br>
For <span class="math-container">$\displaystyle k \in \Bbb{Z_{\geq 0}}, ~r \in \{0,1,\cdots, k\}, ~: ~\sum_{i=r}^k \binom{i}{r} = \binom{k+1}{r+1}$</span>.</p>
</li>
</ul>
<hr />
<p>Using the Binomial Theorem:</p>
<p><span class="math-container">$\displaystyle \sum_{k=0}^{n-1} 2^k = \sum_{k=0}^{n-1} \left[\sum_{r=0}^k \binom{k}{r}\right]$</span>.</p>
<p>The above double summation may be represented by the following two dimensional table, where each inner summation is represented by a specific row.</p>
<p><span class="math-container">\begin{array}{l l l l l l }
\binom{0}{0} \\
\binom{1}{0} & \binom{1}{1} \\
\binom{2}{0} & \binom{2}{1} & \binom{2}{2} \\
\binom{3}{0} & \binom{3}{1} & \binom{3}{2} & \binom{3}{3} \\
\cdots \\
\binom{n-1}{0} & \binom{n-1}{1} & \binom{n-1}{2} & \binom{n-1}{3} & \cdots \binom{n-1}{n-1}\\
\end{array}</span></p>
<p>The above table may be alternatively expressed as a double summation, where each inner summation is represented by a specific column.</p>
<p><span class="math-container">$\displaystyle \sum_{r=0}^{n-1} \left[\sum_{i=r}^{n-1}\binom{i}{r}\right]$</span>.</p>
<p>Using the Hockey Stick Identity, this equals :</p>
<p><span class="math-container">$$\sum_{r=0}^{n-1} \binom{n}{r+1} = \sum_{r=1}^n \binom{n}{r}.\tag1 $$</span></p>
<p>Re-applying the Binomial Theorem against the RHS of (1) above, it may be re-expressed as</p>
<p><span class="math-container">$\displaystyle \left[\sum_{r=0}^n \binom{n}{r}\right] - \binom{n}{0} = 2^n - \binom{n}{0} = 2^n - 1.$</span></p>
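<p>Since the derivation only rearranges a finite double sum, it is easy to check mechanically (this verification snippet is my addition):</p>

```python
from math import comb

# lhs: the geometric sum; rows: the binomial table summed row by row
# (Binomial Theorem); cols: the same table summed column by column
# (Hockey Stick Identity). All three must equal 2^n - 1.
for n in range(1, 12):
    lhs = sum(2 ** k for k in range(n))
    rows = sum(comb(k, r) for k in range(n) for r in range(k + 1))
    cols = sum(comb(n, r + 1) for r in range(n))
    assert lhs == rows == cols == 2 ** n - 1
```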
|
2,079,822 | <p>I am asked to find the maximum velocity of a mass. </p>
<p>I know that the equation for maximum acceleration is </p>
<p>$$a = w^2A$$</p>
<p>However I do not know how to find the maximum velocity. Is velocity just the same as acceleration? </p>
| Noah Schweber | 28,111 | <p>Velocity is not the same as acceleration. Acceleration is a measure of how your velocity changes over time: speed up, and acceleration is positive, etc. This is similar to how velocity measures how your <em>position</em> changes over time. Indeed, velocity is the derivative of position, and acceleration is the derivative of velocity.</p>
<p>Here's a hint for the problem: when velocity is maximal, what do you think the <em>acceleration</em> is going to be? </p>
|
2,079,822 | <p>I am asked to find the maximum velocity of a mass. </p>
<p>I know that the equation for maximum acceleration is </p>
<p>$$a = w^2A$$</p>
<p>However I do not know how to find the maximum velocity. Is velocity just the same as acceleration? </p>
| mvw | 86,776 | <p>If the position is given by $x = x(t)$ then the velocity is $v = \dot{x}$ and the acceleration $a = \dot{v} = \ddot{x}$. The dot is the Newton-style notation for the derivative with respect to time. </p>
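<p>For the simple harmonic motion the question is about, $x(t)=A\sin(\omega t)$ gives $v(t)=\dot x=A\omega\cos(\omega t)$ and $a(t)=\ddot x=-A\omega^2\sin(\omega t)$, so $v_{\max}=\omega A$ while $a_{\max}=\omega^2 A$. A small numerical check (the parameter values are mine):</p>

```python
import math

A, w = 2.0, 3.0                                  # made-up amplitude and angular frequency
T = 2 * math.pi / w
ts = [i * T / 20000 for i in range(20001)]       # sample one full period

v_max = max(abs(A * w * math.cos(w * t)) for t in ts)
a_max = max(abs(-A * w * w * math.sin(w * t)) for t in ts)

assert abs(v_max - w * A) < 1e-6                 # v_max = omega * A
assert abs(a_max - w * w * A) < 1e-6             # a_max = omega^2 * A
```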
|
213,405 | <p>So here's the question:</p>
<blockquote>
<p>Given a collection of points $(x_1,y_1), (x_2,y_2),\ldots,(x_n,y_n)$, let
$x=(x_1,x_2,\ldots,x_n)^T$, $y=(y_1,y_2,\ldots,y_n)^T$,
$\bar{x}=\frac{1}{n} \sum\limits_{i=1}^n x_i$, $\bar{y}=\frac{1}{n} \sum\limits_{i=1}^n y_i$.<br>
Let $y=c_0+c_1x$ be the linear function that gives the best least squares fit to the points. Show that if $\bar{x}=0$, then
$c_0=\bar{y}$ and $c_1=\frac{x^Ty}{x^Tx}$.</p>
</blockquote>
<p>I've managed to do all the problems in this least squares chapter but this one has me completely and utterly stumped. I'm not entirely sure what the question is even telling me in terms of information nor do I get what it's asking. Any ideas on where to start?</p>
| Community | -1 | <p>Your textbook <em>surely</em> states what they mean by the phrase.</p>
<p>But to take a guess, they're probably referring to a "residue number system", where you represent not-too-large integers as a sequence of residue classes modulo a set of moduli (usually primes).</p>
<p>(to do the reverse conversion, from the residue number system to decimal (or other representations) typically involves the Chinese Remainder Theorem)</p>
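<p>A minimal sketch of such a residue number system in Python (the moduli and values are my own illustration): a number is stored as its residues modulo pairwise-coprime moduli, addition and multiplication act componentwise, and the Chinese Remainder Theorem converts back.</p>

```python
# Residue number system sketch: moduli must be pairwise coprime.
MODULI = [3, 5, 7]                       # can represent 0 .. 3*5*7 - 1 = 104

def to_rns(n):
    return [n % m for m in MODULI]

def crt(residues):
    """Reconstruct the unique n in [0, prod(MODULI)) with n % m_i == r_i."""
    M = 1
    for m in MODULI:
        M *= m
    n = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        n += r * Mi * pow(Mi, -1, m)     # modular inverse (Python >= 3.8)
    return n % M

a, b = 17, 30
sum_rns = [(x + y) % m for x, y, m in zip(to_rns(a), to_rns(b), MODULI)]
prod_rns = [(x * y) % m for x, y, m in zip(to_rns(a), to_rns(b), MODULI)]

assert crt(sum_rns) == (a + b) % 105     # componentwise add, then CRT back
assert crt(prod_rns) == (a * b) % 105    # same for multiplication
```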
|
3,648,485 | <p>I want to determine all the points where <span class="math-container">$g(x) = |\sin(2x)|$</span> is differentiable. </p>
<p>A function is differentiable at a point if the left and right limits exist and are equal.</p>
<p>So it follows that <span class="math-container">$g(x)$</span> is differentiable for all <span class="math-container">$x$</span> except where <span class="math-container">$g(x) = 0$</span>. For example, the derivative of <span class="math-container">$|\sin(2x)|$</span> does not exist at <span class="math-container">$x=0$</span>.</p>
<p>Is this correct?</p>
| Robert Israel | 8,508 | <p>It will depend on the starting point.<br>
If <span class="math-container">$x_1 > x_0$</span>, then you can show by induction that <span class="math-container">$x_{n+1} > x_n$</span>. But in that case it won't converge.</p>
|
3,648,485 | <p>I want to determine all the points where <span class="math-container">$g(x) = |\sin(2x)|$</span> is differentiable. </p>
<p>A function is differentiable at a point if the left and right limits exist and are equal.</p>
<p>So it follows that <span class="math-container">$g(x)$</span> is differentiable for all <span class="math-container">$x$</span> except where <span class="math-container">$g(x) = 0$</span>. For example, the derivative of <span class="math-container">$|\sin(2x)|$</span> does not exist at <span class="math-container">$x=0$</span>.</p>
<p>Is this correct?</p>
| Aman Pandey | 469,000 | <p>First send complete question.
What is the initial value <span class="math-container">$x_0$</span>?
Put <span class="math-container">$x_{n+1}=x_n=A$</span>.Then solve the equation. Which point is convergent point depend on the nature of the sequence. This is the method of this sort of problems.</p>
|
2,435 | <p>I'm not sure we already have something similar, but I'm working on more code inspections for the IntelliJ plugin and it's always a good idea to ask the community. Since it doesn't really fit on main, I'm posting it here on Meta.</p>
<p>Linting is an excellent way to point developers to probable errors they might have overlooked. With a dynamic language like Mathematica's, we are a bit restricted in what we can do, since we cannot evaluate code and since most things require evaluation to be sure whether they are a bug or not. Nevertheless, there are checks we can do. For instance, <code>If[a=b, ..]</code> is most likely a bug, and even if the developer knew what they were doing, it is bad style.</p>
<p>There are trickier examples like <code>If[a<5,...]</code>. This looks okay, but since <code>a<5</code> stays unevaluated if the comparison cannot be carried out, it is a source of error: you end up with an unevaluated <code>If</code> expression in your (wrong) result, and debugging might be complicated.</p>
<p>In both examples, wrapping <code>TrueQ</code> around the condition resolves the issue and although there might still be a bug, at least you can be sure your <code>If</code> expression is evaluated to some branch.
Other common sources of error are, e.g. <code>x_?testFunc[#]&</code> or implicit multiplication through linebreaks.</p>
<p><strong>Question:</strong> What are common bugs in your code and could they have been pointed out by a linter? If you like to share your thoughts, please provide one issue per answer, so that others can vote. I'm looking forward to your suggestions and see if I can implement some of them in IntelliJ.</p>
<hr>
<p>Example issue: With the <a href="https://mathematica.stackexchange.com/a/176489/187">alternative layout for packages</a> which was pointed out by Leonid, we can use <em>directives</em> for a static code analyzer to easily export symbols or declare them as package symbols. As Leonid pointed out, the directives need to be on their own source-line with nothing else on it. So for the directives</p>
<pre><code>PackageScope["myFunc"]
PackageExport["MyExportedFunc"]
</code></pre>
<p>I implemented the following rules</p>
<ol>
<li>They need to be on their own source line with nothing else on it</li>
<li>Their string argument must be a valid identifier</li>
</ol>
<p><a href="https://i.stack.imgur.com/3bO61.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3bO61.gif" alt="enter image description here"></a></p>
| b3m2a1 | 38,205 | <blockquote>
<h1>Status Completed</h1>
</blockquote>
<p>I often accidentally include a <code>;</code> at the end of a line in the variable declaration for <code>Module/Block/DynamicModule/etc</code> or screw them up somehow. It'd be nice if IntelliJ could catch fundamental, silly errors like that.</p>
<p>E.g. I do this all the time:</p>
<pre><code>Module[{a, b, c; d=10, f, 12},
...
]
</code></pre>
<p>And presumably both of those would be pretty easy for the IDE to catch.</p>
<p><strong>Comment halirutan:</strong> I have implemented this. Here you see 3 warnings</p>
<p><img src="https://i.imgur.com/IF68kea.png" alt="img"></p>
<p>The first warning on <code>a</code> is because it is not used inside the <code>Module</code> body. The other two warnings are because it is not a valid variable declaration or assignment.</p>
<p>This will be published in <strong>version 2019.1.2</strong></p>
|
3,977,687 | <p>A coin of radius 1 cm is tossed onto a plane surface that has been tessellated by right triangles whose sides are 8 cm, 15 cm, and 17 cm long. Find the probability that the coin lands within a triangle.</p>
<p>I know that this has to do with similarity because the inner triangle that is formed by the area where the coin can land is similar to the outer triangle. Therefore, I know the angles of this triangle, but I am not sure how to find one side of this triangle.</p>
| cr001 | 254,175 | <p><a href="https://i.stack.imgur.com/zJiPK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zJiPK.png" alt="enter image description here" /></a></p>
<p>First since <span class="math-container">$I$</span> is the incenter of <span class="math-container">$\triangle ABC$</span>, you can find <span class="math-container">$$IH={2S_{ABC}\over P_{ABC}}={120\over 40}={3}$$</span> where <span class="math-container">$S$</span> denotes area and <span class="math-container">$P$</span> denotes perimeter.</p>
<p>Next you find <span class="math-container">$$IM=IH-1=2$$</span></p>
<p>Notice <span class="math-container">$I$</span> is also the incenter of the inner triangle. Now you know the side ratio between the two triangles is <span class="math-container">$2\over 3$</span>.</p>
|
2,300,613 | <p>I tried to calculate a few derivatives, but I can't get $f^{(n)}(z)$ from them. Any other way? </p>
<p>$$f(z)=\frac{e^z}{1-z}\text{ at }z_0=0$$</p>
| sharding4 | 254,075 | <p>Standard fact: if $f(z)$ has the expansion $\sum_{n=0}^{\infty}a_nz^n$, then the series for $\frac{f(z)}{1-z}$ is $\sum_{n=0}^{\infty}\left(\sum_{k=0}^{n}a_k\right)z^n$.</p>
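<p>For $f(z)=e^z$, the function in the question, this Cauchy-product claim can be checked with exact rational arithmetic (this snippet is my addition): with $a_k=\frac1{k!}$ and $c_n=\sum_{k\le n}a_k$, multiplying $\sum c_nz^n$ back by $(1-z)$ must reproduce $\sum a_nz^n$.</p>

```python
from fractions import Fraction as F
from math import factorial

N = 10
a = [F(1, factorial(k)) for k in range(N)]        # coefficients of e^z
c = [sum(a[: n + 1], F(0)) for n in range(N)]     # claimed coefficients of e^z/(1-z)

# multiplying back by (1 - z): coefficient n is c_n - c_{n-1}, which must be a_n
assert c[0] == a[0]
assert all(c[n] - c[n - 1] == a[n] for n in range(1, N))
```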
|
598,962 | <p>I have to determine the following:</p>
<p>$$\lim_{x \rightarrow 0}\frac{9}{x}\left(\frac{3}{(x+3)^3}-\frac{1}{9}\right)$$</p>
<p>I've got so far:</p>
<p>$$\lim_{x \rightarrow 0}\frac{9}{x}\left(\frac{3}{(x+3)^3}-\frac{1}{9}\right)= \lim_{x \rightarrow 0}\left(\frac{27}{x(x+3)^3}-\frac{1}{x}\right)=\lim_{x \rightarrow 0} \left(\frac{27-(x+3)^3}{x(x+3)^3}\right)=\cdots$$</p>
<p>How to go on? I've got $\frac{\infty}{0}...$</p>
| Shahar | 114,474 | <p>When you plug in $0$ to $x$, you see that the answer is $0/0$. You have to use L'Hospital's Rule, which says</p>
<p>$$\lim_{x\to a} \frac{f(x)}{g(x)} = \lim_{x\to a} \frac{f'(x)}{g'(x)}.$$</p>
<p>This applies only to $0/0$ or $\infty/\infty$.</p>
<p>Hence, you just need to take the derivative of the top and bottom until you get an answer that is either the answer or you can't use L'Hospital's anymore.</p>
<p>So let's take the derivatives of the numerator and denominator:</p>
<p>$$\frac{d}{dx}\left(27-(x+3)^3\right)= -3(x+3)^2$$</p>
<p>$$\frac{d}{dx}\left(x(x+3)^3\right) = x(3(x+3)^2) + (x+3)^3$$</p>
<p>So now we can just plug zero in!</p>
<p>$$\lim_{x\to 0} \frac{-3(x+3)^2}{3x(x+3)^2 + (x+3)^3} = -27/27 = -1$$</p>
|
4,385,908 | <p>For an ideal <span class="math-container">$I$</span> in <span class="math-container">$A = \mathbb{C}[x, y, z]$</span> set <span class="math-container">$$Z_{xy}(I) = \{(a, b) \in \mathbb{C}^2: f(a, b, z) = 0\text{ for all }f \in I\}.$$</span></p>
<p>Let
<span class="math-container">$$J = \{f(x, y): f(a, b) = 0\text{ for all }(a, b) \in Z_{xy}(I)\}.$$</span></p>
<p>Prove <span class="math-container">$$Z_{xy}(I) \times \mathbb{C} = Z(I) = \{(a, b, c) \in \mathbb{C}^3: f(a, b, c) = 0\text{ for all }f \in I\} \iff \operatorname{rad}I = JA.$$</span></p>
<p>I'm a bit confused about the definition of <span class="math-container">$J$</span>. I'm not sure where the <span class="math-container">$f$</span> is supposed to come from since in the other sets in this problem the function <span class="math-container">$f$</span> took on 3 arguments rather than 2.</p>
| clay | 157,069 | <p>Conditional expectation is defined with the property such that for any <span class="math-container">$A \in \mathcal{F}_n$</span>, <span class="math-container">$\int_A X_n \, dP = \int_A Y \, dP$</span>.</p>
<p>Consider sets of the form <span class="math-container">$A = \{k \} \in \mathcal{F}_n$</span> which exist for <span class="math-container">$1 \le k \le n-1$</span>. Each <span class="math-container">$k$</span> forms its own atom in <span class="math-container">$\mathcal{F}_n$</span> so <span class="math-container">$X_n(k)$</span> can have a distinct value.</p>
<p><span class="math-container">\begin{align*}
X_n^{-1}(Y(k)) &= \{k\} \\
\int_{\{k\}} X_n dP &= \int_{\{k\}} Y dP \\
X_n(k) 2^{-k} &= Y(k) 2^{-k} \\
X_n(k) &= Y(k) \\
\end{align*}</span></p>
<p>Consider the set of the form <span class="math-container">$A = \{ n, n+1, n+2, \ldots \} \in \mathcal{F}_n$</span> which exists for <span class="math-container">$n \le k$</span>. All of these values of <span class="math-container">$k \ge n$</span> form one single atom in <span class="math-container">$\mathcal{F}_n$</span>, so <span class="math-container">$X_n(k)$</span> must produce the same value for all such values of <span class="math-container">$k$</span>.</p>
<p><span class="math-container">\begin{align*}
X_n^{-1}(Y(k)) &= \{ n, n+1, n+2, \ldots \} \\
\int_{\{ n, n+1, n+2, \ldots \}} X_n dP &= \int_{\{ n, n+1, n+2, \ldots \}} Y dP \\
\sum\limits_{k=n}^\infty X_n(k) 2^{-k} &= \sum\limits_{r=n}^\infty Y(r) 2^{-r} \\
X_n(k) 2^{1-n} &= \sum\limits_{r=n}^\infty Y(r) 2^{-r} \\
X_n(k) &= \sum\limits_{r=n}^\infty Y(r) 2^{n-r-1} \\
\end{align*}</span></p>
<p>Tying this together yields:</p>
<p><span class="math-container">\begin{align*}
X_n(k) &= \begin{cases}
Y(k) & \text{if} \; 1 \le k \le n - 1 \\
\sum\limits_{r=n}^\infty 2^{n-r-1} Y(r) & \text{if} \; n \le k \\
\end{cases} \\
\end{align*}</span></p>
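<p>The defining property of conditional expectation can be checked with exact arithmetic for a concrete $Y$ (this verification is mine; I take $Y(r)=r$ truncated to $r\le R$ so all sums are finite):</p>

```python
from fractions import Fraction as F

# P({k}) = 2^{-k}; take Y(r) = r for r <= R and 0 afterwards, so sums are finite.
R, n = 40, 5
P = {k: F(1, 2 ** k) for k in range(1, R + 1)}
Y = {k: F(k) for k in range(1, R + 1)}

tail = sum(F(1, 2 ** (r - n + 1)) * Y[r] for r in range(n, R + 1))  # sum_r 2^{n-r-1} Y(r)
X = {k: (Y[k] if k < n else tail) for k in range(1, R + 1)}

# defining property on the singleton atoms {k}, k < n
assert all(X[k] * P[k] == Y[k] * P[k] for k in range(1, n))

# defining property on the tail atom {n, n+1, ...}: X equals the constant
# `tail` there, and the leftover mass beyond R is exactly 2^{-R}
lhs = sum(X[k] * P[k] for k in range(n, R + 1)) + tail * F(1, 2 ** R)
rhs = sum(Y[k] * P[k] for k in range(n, R + 1))
assert lhs == rhs
```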
|
91,739 | <p>I have 2 groups: </p>
<ul>
<li>the general linear group of $ k \times k $ matrices with $\cdot$</li>
<li>the group of upper triangular $ n \times n $ matrices with 1 on the main diagonal, also with $\cdot$</li>
</ul>
<p>Is there an isomorphism for any non-trivial $n,k$, i.e. $n \neq 2 \text{ or } k \neq 1$, over $\mathbb{R}$ or $\mathbb{Q}$?</p>
<p>If no, how can I prove it?</p>
| tungprime | 20,973 | <p>Another reason is that if the field has characteristic $0$, then no element (except the identity matrix) of the group of upper triangular matrices with 1 on the main diagonal has finite order. However, there are many matrices in $ GL_k(F)$ that have finite order. For instance, those with $-1$ or $1$ on the main diagonal and $0$ elsewhere. </p>
<p>This argument doesn't work, however, for fields of positive characteristic.</p>
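<p>To make both order claims concrete, here is a small demonstration of my own ($2\times 2$ matrices with integer entries, so everything lives over $\Bbb Q$): the $n$-th power of the unipotent matrix $U=\begin{pmatrix}1&1\\0&1\end{pmatrix}$ carries $n$ in the corner, so $U$ has infinite order, while $\operatorname{diag}(-1,1)\in GL_2$ has order $2$.</p>

```python
def mul(X, Y):                       # 2x2 integer matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
U = [[1, 1], [0, 1]]                 # upper triangular with 1s on the diagonal
D = [[-1, 0], [0, 1]]                # finite-order element of GL_2

M = I
for n in range(1, 50):
    M = mul(M, U)
    assert M == [[1, n], [0, 1]]     # U^n != I for every n >= 1

assert mul(D, D) == I                # D has order 2
```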
|
434,290 | <p>According to the <a href="http://arxiv.org/abs/0910.5922" rel="nofollow">equation 4</a>,
$$\phi(0,t)= \frac{A_0}{(1+\frac{2t^2}{R^4})^{3/4}}\cos \left(\sqrt2 t+ \frac{3}{2}\tan^{-1}\left[\frac{\sqrt2 t}{R^2}\right]\right)\tag{1}$$
what conditions makes, $$\cos \left(\sqrt2 t+ \frac{3}{2}\tan^{-1}\left[\frac{\sqrt2 t}{R^2}\right]\right)=1$$
so the equation (1) will be </p>
<p>$$\phi(0,t)= \frac{A_0}{(1+\frac{2t^2}{R^4})^{3/4}}$$
The author used the <a href="http://arxiv.org/abs/hep-ph/9503217" rel="nofollow">article reference</a> to establish the equation
$$\frac{1}{2} \Gamma_{lin}= \frac{1}{\tau_{linear}} \approx \frac{1.196}{\omega_{mass}} \approx \frac{.846}{R^2}$$
but I didn't find that argument there; can you explain this a bit, please?</p>
| Start wearing purple | 73,025 | <p>One possible way is to introduce
$$ I(s)=\frac{1}{16}\int_0^{\infty}\frac{y^{s-\frac34}dy}{1+y}.\tag{1}$$
The integral you are looking for is obtained as $I'(0)$ after the change of variables $y=x^4$.</p>
<p>Let us make another change of variables in (1): $\displaystyle t=\frac{y}{1+y}\Longleftrightarrow y=\frac{t}{1-t},\ dy=\frac{dt}{(1-t)^2}$. This gives
\begin{align}
I(s)&=\frac{1}{16}\int_0^1t\cdot\left(\frac{t}{1-t}\right)^{s-\frac74}\cdot \frac{dt}{(1-t)^2}\\
&=\frac{1}{16}\int_0^1t^{s-\frac34}(1-t)^{-s-\frac{1}{4}}dt\\
&=\frac{1}{16}B\left(s+\frac14,-s+\frac34\right)\\
&=\frac{1}{16}\Gamma\left(s+\frac14\right)\Gamma\left(-s+\frac34\right)\\
&=\frac{\pi}{16\sin\pi\left(s+\frac14\right)}.
\end{align}
Differentiating this with respect to $s$, we indeed get
$$I'(0)=-\frac{\pi^2\cos\frac{\pi}{4}}{16\sin^2\frac{\pi}{4}}=-\frac{\pi^2\sqrt{2}}{16}.$$</p>
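<p>As a numerical cross-check (my addition): undoing the substitution $y=x^4$ from the start of the answer, the integral in question is $I'(0)=\int_0^\infty\frac{\ln x}{1+x^4}\,dx$, and substituting $x=e^t$ turns it into a rapidly decaying integral that plain Simpson's rule handles.</p>

```python
import math

# I'(0) = int_0^inf ln(x)/(1+x^4) dx; substitute x = e^t to get
#   int_{-inf}^{inf} t e^t / (1 + e^{4t}) dt, which decays exponentially.
def g(t):
    return t * math.exp(t) / (1.0 + math.exp(4.0 * t))

a, b, N = -30.0, 30.0, 12000          # composite Simpson on a generous window
h = (b - a) / N
s = g(a) + g(b)
for i in range(1, N):
    s += (4 if i % 2 else 2) * g(a + i * h)
numeric = s * h / 3

exact = -math.pi ** 2 * math.sqrt(2) / 16     # the value derived above
assert abs(numeric - exact) < 1e-6
```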
|