| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,794,724 | <blockquote>
<p>Let $F:U\rightarrow W$ be a linear transformation from the vector
space $U$ to the vector space $W$. Show that the image space of $F$,</p>
<p>$$V(F)=\{w\in W:w=F(u) \ \ \text{for some} \ \ u\in U\},$$</p>
<p>is a subspace of $W$.</p>
</blockquote>
<p>Okay, I know that in order for $M$ to be a subspace of a vector space $V$, $M$ has to be </p>
<ol>
<li>non-empty</li>
<li>closed under addition with vectors and multiplication with scalars.</li>
</ol>
<p>So I have to show that $V(F)$ is non-empty and closed under addition with vectors and multiplication with scalars. </p>
<p>Can someone break down to me how this is done? I don't really understand what is being stated in the curly brackets and how to apply that to show 1. and 2.</p>
| Community | -1 | <p>The curly brackets are just defining the image of $F$. $F$ is a mapping: it takes each element of $U$ to an element of $W$. $V(F)$ is just the set of elements of $W$ that get mapped to by an element of $U$. In other words, take each $u \in U$ and apply $F$. We get $F(u)$, which is an element of $W$. Collect all the elements of $W$ obtained this way and call this set $V(F)$.</p>
<p>Now take two elements of $V(F)$. Call them $w_1$ and $w_2$. This means there is a $u_1$ and $u_2$ in $U$ such that $F(u_1)=w_1$ and $F(u_2)=w_2$. We then have
$$w_1 + w_2 = F(u_1) + F(u_2) = F(u_1+u_2)$$
Thus $w_1 + w_2$ is in $V(F)$ since it gets mapped to by $u_1+u_2$. </p>
<p>Proceed similarly to finish the proof that $V(F)$ is a subspace.</p>
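For completeness, the steps left to the reader can be sketched as follows (an editor's addition, using the same notation as the answer):

```latex
% Non-emptiness: 0_W = F(0_U), so 0_W \in V(F).
% Closure under scalar multiplication: let w \in V(F) and let c be a scalar.
% Then w = F(u) for some u \in U, and by linearity of F,
c\,w \;=\; c\,F(u) \;=\; F(c\,u),
% so cw is the image of cu \in U, hence cw \in V(F).
```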
|
1,799,366 | <p>I'm trying to solve the following exercise:</p>
<blockquote>
<p>Let $\mu$ be a probability distribution on $\mathbb{R}$ having second moment $\sigma^2<\infty$ such that if $X$ and $Y$ are independent with law $\mu$ then the law of $(X+Y)/\sqrt{2}$ is also $\mu$. Show that $\mu =\mathcal{N}(0,1)$.
Hint: apply the central limit theorem to packs of $2^n$ variables.</p>
</blockquote>
<p>My attempt:</p>
<p>So let $Z_n=(X_1,Y_1)+\cdots+(X_n,Y_n)$; then $\mathbb{E}(Z_n)=n\,\mathbb{E}(Z_1)$, and for $n\to \infty$
$$T_n=\frac{Z_n-n\,\mathbb{E}(Z_1)}{\sqrt{\sigma^2n}}\xrightarrow{\mathcal{D}}\mathcal{N}(0,1)$$
converges in distribution to the normal distribution.</p>
<p>Now I don't see how to prove $\mu=\mathcal{N}(0,1)$ from this. I also do not understand what "packs" of $2^n$ variables are. Is it $Z_n=(X_n,Y_n)+\dots$?</p>
| Dark | 208,508 | <p><strong>Hint:</strong> without relying on the CLT, you can use <a href="https://en.wikipedia.org/wiki/Characteristic_function_%28probability_theory%29" rel="nofollow noreferrer">characteristic functions</a>.</p>
<p>Let $f$ be the characteristic function of the law $\mu$. </p>
<p>You can easily show that $f$ satisfies <a href="https://math.stackexchange.com/questions/1203081/solving-fx2-f-sqrt2x">this functional equation</a> which can be solved to find that $f$ is the characteristic function of $\mathcal{N}(0,1)$.</p>
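To make the hint concrete, here is the functional equation in question (an editor's sketch, assuming the normalization $\sigma^2=1$; note the stability assumption itself forces mean $0$, since $\mathbb{E}[(X+Y)/\sqrt2]=\sqrt2\,\mathbb{E}X$ must equal $\mathbb{E}X$):

```latex
% Independence of X and Y, both with characteristic function f, gives
f(t) \;=\; \mathbb{E}\,e^{it(X+Y)/\sqrt{2}} \;=\; f\!\left(\tfrac{t}{\sqrt{2}}\right)^{2}.
% Iterating n times: f(t) = f(t/2^{n/2})^{2^n}.
% With mean 0 and variance 1, f(s) = 1 - s^2/2 + o(s^2) as s -> 0, so
f(t) \;=\; \lim_{n\to\infty}\left(1-\frac{t^{2}}{2^{n+1}}+o\!\left(2^{-n}\right)\right)^{2^{n}} \;=\; e^{-t^{2}/2},
% the characteristic function of N(0,1).
```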
|
3,162,338 | <p>Consider <span class="math-container">$ x_1, x_2, ..., x_n \in \mathbb{R}$</span>.</p>
<p>We have to prove that each <span class="math-container">$\sqrt{x_i}$</span> is rational if the sum <span class="math-container">$\sqrt{x_1} + \ldots + \sqrt{x_n}$</span> is rational. </p>
<p>I think that I could prove it using the fact that only the sum of opposite irrational numbers is rational, for example <span class="math-container">$ \sqrt 2 + (2-\sqrt 2) = 2 $</span>, because if <span class="math-container">$ x_1, x_2, ..., x_n \in \mathbb{R} $</span> then <span class="math-container">$ \sqrt{-1} \neq - \sqrt 1$</span>. </p>
| Bill Dubuque | 242 | <p>As mentioned there are simple counterexamples. However the following version is true</p>
<p><strong>Theorem</strong> <span class="math-container">$\rm\ \sqrt{c_1}+\cdots+\!\sqrt{c_{n}} \in K\ \Rightarrow \sqrt{c_i}\in K\:$</span> for all <span class="math-container">$\rm i,\:$</span> if <span class="math-container">$\rm\: 0 < c_i\in K$</span> an ordered field.</p>
<p><strong>Proof</strong> <span class="math-container">$ $</span> See <a href="https://math.stackexchange.com/a/136655/242">this answer</a> for a simple inductive proof.</p>
|
25,337 | <p>If you want to compute crystalline cohomology of a smooth proper variety $X$ over a perfect field $k$ of characteristic $p$, the first thing you might want to try is to lift $X$ to the Witt ring $W_k$ of $k$. If that succeeds, compute de Rham cohomology of the lift over $W_k$ instead, which in general will be much easier to do. Neglecting torsion, this de Rham cohomology is the same as the crystalline cohomology of $X$.</p>
<p>I would like to have an example at hand where this approach fails: Can you give an example for</p>
<blockquote>
<p>A smooth proper variety $X$ over the finite field with $p$ elements, such that there is no smooth proper scheme of finite type over $\mathbb Z_p$ whose special fibre is $X$.</p>
</blockquote>
<p>The reason why such examples <em>have</em> to exist is metamathematical: if there weren't any, the pain one undergoes constructing crystalline cohomology would be unnecessary.</p>
| George McNinch | 4,653 | <p>This paper of Serre gives an example (I've just pasted I. Barsotti's Mathematical Reviews review).
(The paper can be found in Serre's "Collected Works", vol. II, 1960-1971.)</p>
<blockquote>
<p>Serre, Jean-Pierre. Exemples de variétés projectives en caractéristique $p$ non relevables en caractéristique zéro. (French) Proc. Nat. Acad. Sci. U.S.A. 47 (1961), 108–109.</p>
<p>An example of a non-singular
projective variety $X_0$, over an
algebraically closed field $k$ of
characteristic $p$, which is not the
image, $\text{mod}\,p$, of any variety
$X$ over a complete local ring of
characteristic 0 with $k$ as residue
field. The variety $X_0$ is obtained
by selecting, in a 5-dimensional
projective space $S$, and for $p>5$, a
non-singular variety $Y_0$ which has
no fixed point for an abelian finite
subgroup $G$ with at least 5
generators of period $p$, of the group
$\Pi(k)$ of projective transformations
of $S$, but which is transformed into
itself by $G$; then $X_0=Y_0/G$. The
reason for the impossibility is that
$\Pi(K)$, for a $K$ of characteristic
0, does not contain a subgroup
isomorphic to $G$. {Misprint: on the
last line on p. 108 one should read
$s(\sigma)=\exp(h(\sigma)N)$.}</p>
</blockquote>
|
1,109,443 | <p>I'm currently studying for an algebra exam and I have some example questions from the last few years. And I can't find a solution to this one:</p>
<blockquote>
<p>Give three examples of complex numbers where z = -z</p>
</blockquote>
<p>The only complex number I can think of is 0. Because it is a complex number, isn't it? Like 0 + 0i.</p>
<p>What two other complex numbers can be given as examples?</p>
<p>Edit: Well, I'm pretty sure it's z = -z. I have only this low-resolution picture, but you can see it in the first task: <a href="https://i.imgur.com/2EuugPZ.jpg" rel="nofollow noreferrer">http://i.imgur.com/2EuugPZ.jpg</a>. Yeah, I know it's all in Polish, but you have to believe me it says to find three examples.</p>
<p>Edit 2: Okay, now I see that it might actually say $\bar{z} = -z$.</p>
| FooF | 37,849 | <p>I believe that strictly from a statistical-genetic perspective, the most we can do is to compare genes and give a probabilistic value of somebody belonging to a certain genetic pool. Just consider mutations and any inter-race breeding that has happened over the millennia to highlight the fuzziness of the notion of defining somebody's race in any exact quantifiable terms. In addition, the notion of being <em>exactly</em> <code>1/2</code> Cherokee from a statistical-genetic perspective does not make sense since the distribution of genes is not 50%:50%, as very insightfully pointed out in the comments by @SteveJessop. There seems to be no mathematically sensible way to say somebody is 100% or 0% Cherokee, even less <code>1/n</code> Cherokee for any choice of integer value <code>n > 1</code>, by just looking at their genetic makeup. </p>
<p>Thus I would say the definition of being a proud member of Cherokee is a matter of social and personal identification. This definition is gray, but at least it gives us sample members of Human Race that we can meaningfully call fully Cherokee or totally not being a Cherokee (while ignoring the grey area cases), which validates the use of fractions in defining somebody's degree of "Cherokeeness".</p>
<p>After thus laying foundational definitions, we refer to the curious cases of children with three parents <a href="http://en.wikipedia.org/wiki/Three-parent_baby" rel="nofollow">http://en.wikipedia.org/wiki/Three-parent_baby</a>:</p>
<blockquote>
<p>Three-parent babies are human offspring with three genetic parents, created through a
specialized form of In vitro fertilisation in which the future baby's mitochondrial DNA
comes from a third party.</p>
</blockquote>
<p>See also <a href="http://www.bbc.com/news/magazine-28986843" rel="nofollow">http://www.bbc.com/news/magazine-28986843</a> for the young case of Mrs. Saarinen.</p>
<p>Such a child would have DNA from three people, and in a sense have three genetic parents. Let's assume one of the parents of such a child would be a Cherokee and two others "totally not Cherokees", and furthermore, to make the case stronger, let's assume all three genetic parents would also participate in the upbringing and support of the child, so that there really would be three parents in the strongest imaginable and possible sense. We could argue (though not with any mathematical rigor) that the resulting offspring would be 1/3 Cherokee. Now if she or he were to produce an offspring, it would make some sense (even if not mathematically very rigorous) to say that offspring would be 1/6 Cherokee. Pairing a 1/6 Cherokee with a non-Cherokee parent would give us an offspring that might want to call himself/herself 1/12 Cherokee.</p>
<p>Logically speaking, though, the mentioned case cannot be 1/12 Cherokee by this avenue because this controversial treatment option has not been in existence long enough for any such three parent child to have a grandchild.</p>
|
1,109,443 | <p>I'm currently studying for an algebra exam and I have some example questions from the last few years. And I can't find a solution to this one:</p>
<blockquote>
<p>Give three examples of complex numbers where z = -z</p>
</blockquote>
<p>The only complex number I can think of is 0. Because it is a complex number, isn't it? Like 0 + 0i.</p>
<p>What two other complex numbers can be given as examples?</p>
<p>Edit: Well, I'm pretty sure it's z = -z. I have only this low-resolution picture, but you can see it in the first task: <a href="https://i.imgur.com/2EuugPZ.jpg" rel="nofollow noreferrer">http://i.imgur.com/2EuugPZ.jpg</a>. Yeah, I know it's all in Polish, but you have to believe me it says to find three examples.</p>
<p>Edit 2: Okay, now I see that it might actually say $\bar{z} = -z$.</p>
| Blake | 209,173 | <p>Not without inbreeding.
If we assume no shared ancestry paths, then each generation you go back doubles the number of ancestors, giving 2^n ancestors in generation n.
Taking the previous 95 generations and reducing each of these counts modulo 12 reveals that no member of (this part of) the series is divisible by 12. Interestingly, the residues alternate between 4 and 8.
While this is not a complete mathematical proof, beyond a few generations it is hard to determine race with much certainty anyway.</p>
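The claim about the residues can be checked directly (an editor's addition, not part of the original answer):

```python
# Check that 2**n mod 12 alternates between 4 and 8 for n >= 2
# (2**1 = 2 is the only exception among positive exponents),
# so 2**n is never divisible by 12.
residues = [pow(2, n, 12) for n in range(2, 97)]  # generations 2..96

assert all(r == 4 for r in residues[0::2])  # even exponents give 4
assert all(r == 8 for r in residues[1::2])  # odd exponents give 8
assert 0 not in residues                    # never divisible by 12
print(residues[:6])  # [4, 8, 4, 8, 4, 8]
```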
|
1,109,443 | <p>I'm currently studying for an algebra exam and I have some example questions from the last few years. And I can't find a solution to this one:</p>
<blockquote>
<p>Give three examples of complex numbers where z = -z</p>
</blockquote>
<p>The only complex number I can think of is 0. Because it is a complex number, isn't it? Like 0 + 0i.</p>
<p>What two other complex numbers can be given as examples?</p>
<p>Edit: Well, I'm pretty sure it's z = -z. I have only this low-resolution picture, but you can see it in the first task: <a href="https://i.imgur.com/2EuugPZ.jpg" rel="nofollow noreferrer">http://i.imgur.com/2EuugPZ.jpg</a>. Yeah, I know it's all in Polish, but you have to believe me it says to find three examples.</p>
<p>Edit 2: Okay, now I see that it might actually say $\bar{z} = -z$.</p>
| tfitzger | 209,227 | <p>Genealogy and ancestry is fun. My sister did one of those DNA tests that places your geographic location genetically. Here are her results:</p>
<p>30% Great Britain,
29% Scandinavia,
20% Ireland,
8% Europe West,
6% Finland/NW Russia,
4% Europe East,
3% Iberian Peninsula.</p>
<p>Obviously, those don't fit nicely into the 1/(2^x) model. Part of this is because they are larger regions (Scandinavia covers multiple countries, for example). Another part of this is the fact that, genetically, these regions are similar enough to have some crossover.</p>
|
97,261 | <p>This semester, I will be taking a senior undergrad course in advanced calculus "real analysis of several variables", and we will be covering topics like: </p>
<p>-Differentiability.
-Open mapping theorem.
-Implicit function theorem.
-Lagrange multipliers. Submanifolds.
-Integrals.
-Integration on surfaces.
-Stokes theorem, Gauss theorem.</p>
<p>I need to know if any of you know good textbooks that contain practice problems with full solutions or hints that can be used to understand the material. Most of the textbooks I have found cover only the material, with few examples.</p>
| ItsNotObvious | 9,450 | <p>A couple of references that come to mind that satisfy your criteria for exercise solutions are:</p>
<p>(1) <a href="http://rads.stackoverflow.com/amzn/click/0857291912" rel="nofollow">Multivariable Analysis</a> by Shirali and Vasudeva. Most problems are provided with complete solutions.</p>
<p>(2) <a href="http://rads.stackoverflow.com/amzn/click/0470148241" rel="nofollow">Analysis in Vector Spaces</a> by Akcoglu, et al. This comes with a student solutions manual that contains solutions to all of the odd-numbered exercises. The biggest drawback to this text is, unfortunately, the price - which has almost doubled since I purchased it last year (!)</p>
<p>In both cases, the problems are very good and are at a level commensurate with the material. Both of these texts cover the inverse/implicit function theorems and integration including Stokes, the divergence theorem, etc. Also, both use the language of differential forms for the development of integration theory. </p>
|
3,172,693 | <p>Can anybody help me with this equation? I can't find a way to factorize for finding a value of <span class="math-container">$d$</span> as a function of <span class="math-container">$a$</span>:</p>
<p><span class="math-container">$$d^3 - 2\cdot d^2\cdot a^2 + d\cdot a^4 - a^2 = 0$$</span></p>
<p>Another form:</p>
<p><span class="math-container">$$d=\frac{a^2}{(a^2-d)^2}$$</span></p>
<p>Maybe this equation has no solution. I don't know. The equation came out of some calculation involving the golden ratio.</p>
<p>Thanks for your help.</p>
| Ma Joad | 516,814 | <p>That is because for
<span class="math-container">$$\frac{as^2+bs+c}{s^2(s+2)}=\frac{A}{s^2}+\frac{B}{(s+2)},$$</span>
the left hand side has three parameters <span class="math-container">$a,b,c$</span>, but the right hand side only has two parameters <span class="math-container">$A,B$</span>. And if you try to solve for TWO values from THREE equations, it will usually lead to a contradiction. So a third term on the right is needed. Even though this is not obvious in your question, you should think of 1 as a degree-2 polynomial.</p>
<p>Or more simply, consider the example
<span class="math-container">$$
\frac{s+1}{s^2}=\frac{1}{s^2}+\frac{1}{s}
$$</span></p>
|
3,172,693 | <p>Can anybody help me with this equation? I can't find a way to factorize for finding a value of <span class="math-container">$d$</span> as a function of <span class="math-container">$a$</span>:</p>
<p><span class="math-container">$$d^3 - 2\cdot d^2\cdot a^2 + d\cdot a^4 - a^2 = 0$$</span></p>
<p>Another form:</p>
<p><span class="math-container">$$d=\frac{a^2}{(a^2-d)^2}$$</span></p>
<p>Maybe this equation has no solution. I don't know. The equation came out of some calculation involving the golden ratio.</p>
<p>Thanks for your help.</p>
| John Joy | 140,156 | <p>Suppose we multiplied both sides of the second equation by <span class="math-container">$s^2(s+2)$</span>, giving us the equivalent equation:
<span class="math-container">$$1 = 0s^2 + 0s + 1= A(s+2) + Bs^2 = Bs^2 + As + 2A$$</span>
Notice that on the RHS of this equation, the constant term is not independent of the coefficient of the <span class="math-container">$s$</span> term. This dependency, in turn, causes a contradiction.</p>
<p>Now let's try the same thing with your first equation. This generates the equivalent equation, whose coefficients can be uniquely determined:
<span class="math-container">$$\begin{align}
1 &= As(s+2) + B(s+2)+Cs^2\\
0s^2+0s+1 &= (A+C)s^2+(2A+B)s+2B\\
\end{align}$$</span></p>
<p>The reason, of course, is because <span class="math-container">${\{1,s,s^2 \}}$</span> represents an independent set that spans the set of all polynomials up to degree two, requiring three parameters to determine its coefficients uniquely.</p>
<p>I hope this helps.</p>
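Carrying the matching of coefficients through gives $A=-\tfrac14$, $B=\tfrac12$, $C=\tfrac14$ (an editor's check, solved from $A+C=0$, $2A+B=0$, $2B=1$); exact rational arithmetic confirms the decomposition:

```python
from fractions import Fraction as F

# Coefficients solved from A + C = 0, 2A + B = 0, 2B = 1.
A, B, C = F(-1, 4), F(1, 2), F(1, 4)

# Verify 1/(s^2 (s+2)) = A/s + B/s^2 + C/(s+2) exactly at several points
# (avoiding the poles s = 0 and s = -2).
for s in (F(1), F(2), F(-1), F(7, 3), F(-5, 2)):
    lhs = 1 / (s**2 * (s + 2))
    rhs = A / s + B / s**2 + C / (s + 2)
    assert lhs == rhs
```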
|
2,515,939 | <p>So, I just need a hint for proving
$$\lim_{n\to \infty} \int_0^1 e^{-nx^2}\, dx = 0$$ </p>
<p>I think maybe the easiest way is to pass the limit inside, because $e^{-nx^2}$ is uniformly convergent on $[0,1]$, but I'm new to that theorem, and have very limited experience with uniform convergence. Furthermore, I don't want to integrate the Taylor expansion, because I'm not familiar with that. So, I want to prove it in a way I'm more familiar with, if possible. So far I've tried: </p>
<ol>
<li><p>Show that $e^{-nx^2}$ is a monontone decreasing sequence with limit $0$. Then use the monotone property of integrals but I think this argument would just end circularly with passing the limit out of the integration operator. </p></li>
<li><p>Bound $e^{-nx^2}$ by 0 and some other $f(x)$ like $\cos^n x$ or $(1-\frac{x^2}{2})^n$ and then use the squeeze theorem. But the integrals of those functions seem to be a little bit out of my math range to analyze. </p></li>
</ol>
<p>But I have a feeling that there's something much simpler here that I'm missing.</p>
| Jack D'Aurizio | 44,121 | <p>$$0\leq\int_{0}^{1}e^{-nx^2}\,dx \leq \int_{0}^{1}\frac{dx}{1+n x^2} = \frac{\arctan\sqrt{n}}{\sqrt{n}}\leq \frac{\pi}{2\sqrt{n}}.$$</p>
|
3,518,285 | <p>I started studying the book of Daniel Huybrechts, Complex Geometry An Introduction. I tried studying <a href="https://mathoverflow.net/questions/13089/why-do-so-many-textbooks-have-so-much-technical-detail-and-so-little-enlightenme">backwards</a> as much as possible, but I have been stuck on the concepts of <a href="https://en.wikipedia.org/wiki/Linear_complex_structure" rel="nofollow noreferrer">almost complex structures</a> and <a href="https://en.wikipedia.org/wiki/Complexification" rel="nofollow noreferrer">complexification</a>. I have studied several books and articles on the matter including ones by <a href="https://kconrad.math.uconn.edu/blurbs/linmultialg/complexification.pdf" rel="nofollow noreferrer">Keith Conrad</a>, <a href="https://individual.utoronto.ca/jordanbell/notes/complexification.pdf" rel="nofollow noreferrer">Jordan Bell</a>, <a href="http://www.physics.rutgers.edu/~gmoore/618Spring2019/GTLect2-LinearAlgebra-2019.pdf" rel="nofollow noreferrer">Gregory W. Moore</a>, <a href="https://www.springer.com/gp/book/9780387728285" rel="nofollow noreferrer">Steven Roman</a>, <a href="https://rads.stackoverflow.com/amzn/click/com/2881246834" rel="nofollow noreferrer" rel="nofollow noreferrer">Suetin, Kostrikin and Mainin</a>, <a href="https://www.springer.com/gp/book/9783319115108" rel="nofollow noreferrer">Gauthier</a></p>
<p>I have several questions on the concepts of almost complex structures and complexification. Here is one:</p>
<p>I understand for a finite dimensional <span class="math-container">$\mathbb R-$</span>vector space <span class="math-container">$V=(V,\text{Add}_V: V^2 \to V,s_V: \mathbb R \times V \to V)$</span>, the following are equivalent</p>
<ol>
<li><span class="math-container">$\dim V$</span> even</li>
<li><span class="math-container">$V$</span> has an almost complex structure <span class="math-container">$J: V \to V$</span></li>
<li><span class="math-container">$V$</span> has a complex structure <span class="math-container">$s_V^{\#}: \mathbb C \times V \to V$</span> that agrees with its real structure: <span class="math-container">$s_V^{\#} (r,v)=s_V(r,v)$</span>, for any <span class="math-container">$r \in \mathbb R$</span> and <span class="math-container">$v \in V$</span></li>
<li>if and only if <span class="math-container">$V \cong \mathbb R^{2n} \cong (\mathbb R^{n})^2$</span> for some positive integer <span class="math-container">$n$</span> (that turns out to be half of <span class="math-container">$\dim V$</span>) if and only if <span class="math-container">$V \cong$</span> (maybe even <span class="math-container">$=$</span>) <span class="math-container">$W^2=W \bigoplus W$</span> for some <span class="math-container">$\mathbb R-$</span>vector space <span class="math-container">$W$</span>.</li>
</ol>
<p>The last condition makes me think that the property 'even-dimensional' for finite-dimensional <span class="math-container">$V$</span> is generalised by the property '<span class="math-container">$V \cong W^2$</span> for some <span class="math-container">$\mathbb R-$</span>vector space <span class="math-container">$W$</span>' for finite or infinite dimensional <span class="math-container">$V$</span>.</p>
<p>Question: For <span class="math-container">$V$</span> finite or infinite dimensional <span class="math-container">$\mathbb R-$</span>vector space, are the following equivalent?</p>
<ol start="5">
<li><p><span class="math-container">$V$</span> has an almost complex structure <span class="math-container">$J: V \to V$</span></p></li>
<li><p>Externally, <span class="math-container">$V \cong$</span> (maybe even <span class="math-container">$=$</span>) <span class="math-container">$W^2=W \bigoplus W$</span> for some <span class="math-container">$\mathbb R-$</span> vector space <span class="math-container">$W$</span></p></li>
<li><p>Internally, <span class="math-container">$V=S \bigoplus U$</span> for some <span class="math-container">$\mathbb R-$</span> vector subspaces <span class="math-container">$S$</span> and <span class="math-container">$U$</span> of <span class="math-container">$V$</span> with <span class="math-container">$S \cong U$</span> (and <span class="math-container">$S \cap U = \{0_V\}$</span>)</p></li>
</ol>
| WoolierThanThou | 686,397 | <p>GreginGre's solution is, of course, perfectly lovely, but if we're just killing this with choice, I guess you can also prove it as follows:</p>
<p>Let <span class="math-container">$V$</span> be infinite dimensional and, using Zorn's Lemma, let <span class="math-container">$\{e_i\}_{i\in I}$</span> be a basis for <span class="math-container">$V$</span>. Using choice again, there exists <span class="math-container">$I_1$</span> and <span class="math-container">$I_2$</span> such that both <span class="math-container">$I_1\cap I_2=\emptyset,$</span> <span class="math-container">$I_1\cup I_2=I$</span> and there exists a bijection <span class="math-container">$\varphi: I_1\to I_2$</span>. Thus, let <span class="math-container">$S=\textrm{span}\{e_i\}_{i\in I_1}$</span> and <span class="math-container">$U=\textrm{span}\{e_i\}_{i\in I_2}$</span>. Then, <span class="math-container">$V=S\oplus U$</span> and <span class="math-container">$A:S\to U$</span> given by <span class="math-container">$e_i\mapsto e_{\varphi(i)}$</span> is a linear isomorphism of the two. This just proves that any infinite dimensional vector space admits such a decomposition, so there is only something to prove in the finite dimensional case.</p>
|
355,552 | <p>How would you compute the first $k$ digits of the first $n$th Fibonacci numbers (say, calculate the first 10 digits of the first 10000 Fibonacci numbers) without computing (storing) the whole numbers ?</p>
<p>A trivial approach would be to store the exact value of all the numbers (with approximately $0.2n$ digits for the $n$th number) but this requires performing additions over numbers with <em>many</em> digits (and also a lot of storage), even if $k$ is small. Perhaps there's a way to accomplish this using only smart approximations that lead to precise results for the first $k$ digits.</p>
<p>Thanks in advance.</p>
| Community | -1 | <p>We have
$$F_n = \dfrac{\left(\dfrac{1+\sqrt5}2\right)^n - \left(\dfrac{1-\sqrt5}2\right)^n}{\sqrt{5}}$$
Hence,
$$F_n = \begin{cases} \left\lceil{\dfrac{\left(\dfrac{1+\sqrt5}2\right)^n}{\sqrt{5}}} \right\rceil & \text{if $n$ is odd}\\ \left\lfloor{\dfrac{\left(\dfrac{1+\sqrt5}2\right)^n}{\sqrt{5}}} \right\rfloor & \text{if $n$ is even}\end{cases}$$
Now compute $n\log_{10}(\phi)-\log_{10}\sqrt{5}$ and use its fractional part to compute the first desired number of digits of $F_n$ from the formula above.</p>
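The last step can be sketched in code (an editor's illustration; double precision is adequate for roughly the first ten digits when $n$ is at most a few thousand, away from digit-boundary cases):

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def leading_digits(n, k):
    """First k digits of F(n), assuming F(n) has at least k digits.

    Uses log10 F(n) ~ n*log10(phi) - log10(sqrt(5)); the neglected
    psi^n term is already negligible for moderate n.
    """
    log10_fn = n * math.log10(PHI) - math.log10(5) / 2
    frac = log10_fn - math.floor(log10_fn)
    return int(10 ** (frac + k - 1))

def fib(n):
    """Exact F(n), used only to check the approximation."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in (50, 100, 300):
    assert str(leading_digits(n, 5)) == str(fib(n))[:5]
```

Note that only $O(1)$ storage is needed per value, versus the roughly $0.2n$ digits of the exact $F_n$.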
|
1,588,665 | <p>I have been reading up on finding the eigenvectors and eigenvalues of a symmetric matrix lately and I am totally unsure of <strong>how and why</strong> it works. Given a matrix, I can find its eigenvectors and values like a machine but the problem is, I have no intuition of how it works.</p>
<p>1) I understand that $v^tAv$ is the equation of an ellipse in matrix form</p>
<p>2) I understand how Lagrangian multipliers work</p>
<p>Can someone please show me the proof that finding the eigenvectors of such a matrix gives the principal components?</p>
<p>I am following this topic.
<a href="https://math.stackexchange.com/questions/87199/maximizing-symmetric-matrices-v-s-non-symmetric-matrices">Maximizing symmetric matrices v.s. non-symmetric matrices</a> I know how to find the eigenvalues of the matrix but <strong>I have no idea how it works</strong></p>
| Element118 | 274,478 | <p>You divided by $x$. Note that since you cannot divide by $0$, dividing by $x$ amounts to asserting that $x$ is not $0$.</p>
<p>Of course, this argument is not rigorous; consider $x^3-x^2=0$. Even when we divide by $x$, we have $x(x-1)=0$, so $0$ and $1$ are solutions to the equation after dividing by $x$.</p>
<p>In summary, the best way is to check if whatever you are dividing by is $0$, if it is, you should try factoring the expression instead.</p>
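As a tiny illustration of factoring rather than dividing (an editor's addition):

```python
# x**3 - x**2 = 0: dividing by x silently assumes x != 0.
# Factoring instead keeps every root: x**2 * (x - 1) = 0.
def p(x):
    return x**3 - x**2

roots = [x for x in range(-10, 11) if p(x) == 0]
assert roots == [0, 1]  # both roots survive
```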
|
2,895,655 | <blockquote>
<p>Four coins of different colour are thrown. If three out of these show heads then find the probability that the remaining one shows tails. </p>
</blockquote>
<p>My approach:</p>
<p>$A$: The event in which 3 heads appear in 3 coins out of 4</p>
<p>$B$: The event in which the 4th coin shows tails</p>
<p>thus we need to find $P(B \mid A)$</p>
<p>and we know that $P(B \mid A)= \frac{P(A \cap B)}{P(A)}$</p>
<p>The ways in which 3 out of 4 coins can be chosen= $^4C_3$</p>
<p>$P(A)= ^4C_3 (\frac{1}{2})^3$</p>
<p>and</p>
<p>$ P(A \cap B)= ^4C_3 (\frac{1}{2})^4 $</p>
<p>so</p>
<p>$ P(B \mid A)= \frac{1}{2} $</p>
<p>However the answer given is $\frac {4}{5}$. What am I doing wrong?</p>
| dan post | 532,731 | <p>The sample space is the number of ways that, at minimum, three coins are heads. There are 5 ways this can happen -- namely, all heads (1 way) or one of the four coins being tails (4 ways). Of these 4 of the 5 ways will have one tail. You should be able to work out the actual steps involved from this.</p>
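The answer's counting can be confirmed by enumerating all $2^4$ equally likely outcomes (an editor's illustration, reading the condition as "at least three of the four coins show heads"):

```python
from itertools import product

outcomes = list(product("HT", repeat=4))   # 16 equally likely outcomes
at_least_3_heads = [o for o in outcomes if o.count("H") >= 3]
exactly_one_tail = [o for o in at_least_3_heads if o.count("T") == 1]

assert len(at_least_3_heads) == 5  # HHHH, plus 4 ways to place one tail
assert len(exactly_one_tail) == 4
# P(remaining coin shows tails | three show heads) = 4/5
```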
|
2,430,482 | <p>I'm struggling to find the maximum of this function $f:\mathbb{R}^n\times\mathbb{R}^n \rightarrow \mathbb{R}$</p>
<p>$$ f(x,y) = \frac{n+1}{2} \sum_{i=1}^n x_i\,y_i - \sum_{i=1}^n x_i \sum_{i=1}^n y_i,$$</p>
<p>where $x_i,y_i\in[0,1]$ for $i=1,...,n$. It reminded me to Chebyshev's sum inequality, but it didn't help much since I <em>can't</em> sort the variables $x$ and $y$. Any help is welcomed.</p>
| kimchi lover | 457,779 | <p>Here is a stab. Let $x_i=y_i=1$ for $i\le k$ and $x_i=y_i=0$ for $i>k$. Then $f=(n+1)k/2-k^2 = ((n+1)/2-k)k$ which is largest when $k\approx n/4$, giving $f\approx n^2/16$. </p>
<p>This is only a lower bound on the desired maximum. It does not purport to actually answer the original question. </p>
<p>ADDED several hours later, and again the next day: Towards a proof.</p>
<p>Note first that the domain of $f$ is the compact set $[0,1]^{2n}$, so the continuous function $f$ attains its maximal value. The "stab" examples above show this maximum value is positive.</p>
<p>Following zwim, look at the restricted problem where $X=\sum x_i$ and $Y=\sum y_i$ are fixed. Note that at a maximum both $X<n$ and $Y<n$ because otherwise $f\le0$. Then $f(x,y)=(n+1)\sum x_iy_i/2 - XY$ is maximized when <strong>first</strong>, the $x_i$ and the $y_i$ are ordered the same (lets say non-increasing, so $1\ge x_1\ge x_2\ge \cdots \ge x_n\ge0$, and similarly for the $y_i$) and <strong>second</strong>, as many of the $x_i$ and $y_i$ values are equal to $1$ as can be, consistent with the $X$ and $Y$ sum constraints. Given that, we have $x_1=x_2=\cdots x_k = 1$ and $ x_{k+1}=u$ with $0\le u<1$ and the rest of the $x_i=0$; similarly $y_1=y_2=\cdots = y_l=1$, $y_{l+1}=v$ with $0\le v <1$, and the rest of the $y_i=0$. (The cases $k=n$ or $l=n$ cannot occur if $X<n$ and $Y<n$.) Clearly $k = \lfloor X\rfloor,$ $l = \lfloor Y\rfloor$, $u = X-k$, $ v=Y-l,$ and $f(x,y) = (n+1)(k+u y_{k+1})/2 - XY$, if we assume $X\le Y$ so $k\le l$. Denote this last expression by $g(X,Y)$, as it depends only on $X$ and $Y$. For given $X$ and $Y$, if $l>k$ we can replace the $y_i$ values for $i>k+1$ with $0$, which leaves the positive term $\sum x_i y_i$ unaffected but possibly reduces $Y$ and its negative effect in the formula for $f$. Thus we may as well assume that $l=k$, and then $g(X,Y') = (n+1) (k+uv)/2 - (k+u)(k+v)$, where $Y'$ is the reduced $Y$. Now, for given $k$, this expression is bi-linear in $u$ and $v$, which is maximized at one of the 4 corners of the $(u,v)$ square $[0,1)^2$. Relaxing the $(u,v)$ maximization to the domain $[0,1]^2$, we see the maximal value of $f$ is attained as one of
$$f=(n+1)(k+1)/2-(k+1)^2\tag 1$$ $$f=(n+1)k/2 - (k+1)k\tag 2$$ or $$f=(n+1)k/2 - k^2.\tag 3$$</p>
<p>Clearly (2) is not as big as (3), so it is out of the running. And (1) is the same as (3) with a different value of $k$. The conclusion: it looks like the bound I showed is in fact the correct answer to the original question. This is borne out by computer experiments, which suggest that $k=\lfloor (n+2)/4\rfloor$ and $(u,v)=(0,0)$ are always the optimizers.</p>
<p>ADDED 18 Sept, revised 19 Sept: So the optimum is achieved by any integer $k$ maximizing (3) above, subject to $0\le k \le n$. If we drop the integrality constraint the optimum is achieved by $k=(n+1)/4$, and with the integrality constraint by the (or more precisely, <em>any</em>) integer closest to $(n+1)/4$. <em>One</em> formula for such a $k$ is $k= \lfloor(n+2)/4\rfloor$ and another is $k=\lfloor(n+3)/4\rfloor.$
(Other formulas are possible using other symbols.)
When $n\equiv1 \pmod 4$ these formulas deliver distinct integer values equally close to $(n+1)/4$. If $n\equiv 0,2,\text{ or } 3\pmod 4$ the two formulas agree. Consider, for example, the cases $n=100,$ $101,$ $102,$ and $103$. In these cases we need $k$ to be a closest integer to $25.25,$ $25.5,$ $25.75,$ and $26,$ respectively. Clearly the corresponding values of $k$ are: $25$; $25$ <em>or</em> $26$; $26$; and $26$. The $\lfloor(n+2)/4\rfloor$ formula delivers $25,$ $25,$ $26,$ $26$, and the $\lfloor(n+3)/4\rfloor$ formula delivers $25,$ $26,$ $26,$ $26$, respectively.</p>
<p>Thank you, user480959 for a most interesting puzzle, and zwim for suggesting a line of attack.</p>
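The claim that $k=\lfloor(n+2)/4\rfloor$ (equivalently $\lfloor(n+3)/4\rfloor$) maximizes expression (3) can be verified by brute force (an editor's check, not part of the original answer):

```python
def f3(n, k):
    """Twice expression (3): 2*((n+1)*k/2 - k**2), kept in integers."""
    return k * (n + 1 - 2 * k)

for n in range(1, 500):
    best = max(f3(n, k) for k in range(n + 1))
    # Both closed forms attain the maximum; they differ only when
    # n = 1 (mod 4), where there is a tie between the two candidates.
    assert f3(n, (n + 2) // 4) == best
    assert f3(n, (n + 3) // 4) == best
```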
|
2,234,744 | <p>Please help me find the value of the following integral:<br>
$$\frac{(5050)\int^1_0(1-x^{50})^{100} dx}{\int^1_0(1-x^{50})^{101} dx}$$
I tried solving both numerator and denominator by integration by parts, but it isn't giving me a conclusive solution. Any other suggestions?</p>
| Chappers | 221,811 | <p>Let's try integrating by parts and see what happens.
$$ \int_0^1 (1-x^n)^m \, dx = \left[ x(1-x^n)^m \right]_0^1 - \int_0^1 -nmx^n(1-x^n)^{m-1} \, dx = 0+ nm\int_0^1 x^n(1-x^n)^{m-1} \, dx. $$
We can fiddle with the right-hand side to get it into a more familiar form:
$$ \int_0^1 x^n(1-x^n)^{m-1} \, dx = \int_0^1 \left( 1 -(1-x^n) \right) (1-x^n)^{m-1} \, dx = \int_0^1 (1-x^n)^{m-1} \, dx - \int_0^1 (1-x^n)^{m} \, dx. $$
Collecting the copies of the integrals together, we find
$$ (1+nm)\int_0^1 (1-x^n)^m \, dx = nm \int_0^1 (1-x^n)^{m-1} \, dx, $$
or
$$ \frac{\int_0^1 (1-x^n)^{m-1} \, dx}{\int_0^1 (1-x^n)^m \, dx} = \frac{1+nm}{nm}. $$
Putting $n=50$, $m=101$, the right-hand side becomes $ 5051/5050 $.</p>
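<p>Multiplying by $5050$, the requested value is therefore $5051$. A quick numerical cross-check in Python (midpoint-rule quadrature; a check only, not part of the derivation):</p>

```python
def midpoint_integral(f, a, b, steps=100_000):
    # Simple midpoint-rule quadrature; good enough for this smooth integrand.
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

num = midpoint_integral(lambda x: (1 - x**50) ** 100, 0.0, 1.0)
den = midpoint_integral(lambda x: (1 - x**50) ** 101, 0.0, 1.0)
ratio = num / den          # should be close to 5051/5050
value = 5050 * ratio       # should be close to 5051
```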
|
470,617 | <ol>
<li><p>Two competitors won $n$ votes each.
How many ways are there to count the $2n$ votes, in a way that one competitor is always ahead of the other?</p></li>
<li><p>One competitor won $a$ votes, and the other won $b$ votes. $a>b$.
How many ways are there to count the votes, in a way that the first competitor is always ahead of the other?
(They can have the same amount of votes along the way)</p></li>
</ol>
<p>I know that the first question is the same as the number of different legal strings built of brackets, which is equal to the Catalan number; $\frac{1}{n+1}{2n\choose n}$, by the proof with the grid.</p>
<p>I am unsure about how to go about solving the second problem.</p>
| xbh | 514,490 | <p>I provide one more solution, where we don't use sines and cosines. </p>
<p>First, some preparation. </p>
<p>We all know that
$$
\tan (x \pm y) = \frac {\tan (x) \pm \tan (y)} {1 \mp \tan (x)\tan (y)}.
$$
Then
$$
\cot (x - y) = \frac {\cot (x) \cot (y) + 1} { \cot (y) - \cot (x)},
$$
or
$$
\cot (x) \cot (y) = \cot(x-y) (\cot(y) -\cot(x)) - 1
$$
Especially,
$$
\cot (2x) = \frac {\cot^2 (x) -1} {2 \cot (x)},
$$
which yields
$$
\cot^2(x) - 1 = 2\cot(x)\cot(2x).
$$
Also,
$$
\csc^2(x) = \frac {\sin^2(x) + \cos^2(x)} {\sin ^2(x)} = 1 + \cot^2(x).
$$
Now let's start. Let $x = \pi /7$.
$$
P := \csc^2(x) + \csc^2(2x) + \csc^2(4x) = 3 + \cot^2(x) + \cot^2(2x) + \cot^2(3x).
$$
By the formula above,
$$
P = 6 + 2(\cot(x) \cot(2x) + \cot(2x) \cot (4x) + \cot (4x) \cot (8x)) =: 6 +2Q.
$$
Now change the angle: since $\cot(\pi \pm y) = \mp\cot (y)$, we have
$$
Q = \cot(x)\cot(2x) - \cot(2x) \cot (3x) - \cot(3x) \cot (x).
$$
Now,
\begin{align*}
Q &= 1+ \cot(x)(\cot(x) - \cot(2x)) - \cot(x) (\cot(2x) -\cot(3x)) - \cot(2x)(\cot(x) - \cot (3x))\\
&= 1+ \cot(x) (\cot(x) -3\cot(2x) + \cot(3x)) + \cot(2x) \cot(3x)\\
&= \cot(x) (\cot(x) -3\cot(2x) + \cot(3x)) + (\cot(2x)-\cot(3x))\cot(x) \\
&= \cot(x)(\cot(x) -2\cot(2x))\\
&= \cot^2(x) - 2\cot(2x) \cot(x)\\
&=1.
\end{align*}
Therefore $P = 6+2Q=8$. </p>
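<p>As a numerical check of the result (not a proof), one can evaluate the sum directly:</p>

```python
import math

x = math.pi / 7
P = sum(1 / math.sin(k * x) ** 2 for k in (1, 2, 4))
# The derivation above gives P = 8 exactly.
assert abs(P - 8) < 1e-9
```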
|
42,957 | <p>I am an "old" programmer used to <em>Fortran</em> and <em>Pascal</em>. I can't get rid of <code>For</code>, <code>Do</code> and <code>While</code> loops, but I know <em>Mathematica</em> can do things much faster!</p>
<p>I am using the following code</p>
<pre><code>SeedRandom[3]
n = 10;
v1 = Range[n];
v2 = RandomReal[250., n];
a = {};
Do[
Do[
AppendTo[a, (v2[[i]] - v2[[j]])/(v1[[i]] - v1[[j]])
],
{j, i - 1, 1, -1}], {i, n, 2, -1}
]; // Timing
</code></pre>
<p>If <code>n</code> is small, it runs fast enough, but for bigger <code>n</code> it slows down. I usually deal with <code>n > 600</code>.</p>
<p>How can the code be made faster?</p>
| Simon Woods | 862 | <p>A more functional approach:</p>
<pre><code>a = With[{f = Subtract @@@ Subsets[Reverse@#, {2}] &}, f[v2]/f[v1]]
</code></pre>
<p>For a bit more speed you could do this:</p>
<pre><code>ii = Join @@ Table[ConstantArray[i, i - 1], {i, n, 2, -1}];
jj = Join @@ Table[Range[j, 1, -1], {j, n - 1, 1, -1}];
a = Divide[Subtract[v2[[ii]], v2[[jj]]], Subtract[v1[[ii]], v1[[jj]]]];
</code></pre>
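<p>For comparison only (not a Mathematica solution), the same "ordered pairs of the reversed list" idea behind <code>Subsets[Reverse@#, {2}]</code> can be sketched in plain Python; the pair ordering produced by <code>combinations</code> matches the original nested <code>Do</code> loops exactly:</p>

```python
from itertools import combinations

def slopes_loops(v1, v2):
    # Direct translation of the nested Do loops: i from n down to 2, j from i-1 down to 1.
    n = len(v1)
    out = []
    for i in range(n - 1, 0, -1):
        for j in range(i - 1, -1, -1):
            out.append((v2[i] - v2[j]) / (v1[i] - v1[j]))
    return out

def slopes_pairs(v1, v2):
    # 2-element subsets of the reversed lists, mirroring Subsets[Reverse@#, {2}].
    d2 = [a - b for a, b in combinations(list(reversed(v2)), 2)]
    d1 = [a - b for a, b in combinations(list(reversed(v1)), 2)]
    return [x / y for x, y in zip(d2, d1)]

v1 = list(range(1, 11))
v2 = [3.7, 1.2, 250.0, 42.5, 17.3, 99.9, 8.8, 61.0, 5.5, 130.4]
assert slopes_loops(v1, v2) == slopes_pairs(v1, v2)
```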
|
130,306 | <p>I am trying to make a relatively complex 3D plot in order to show the variation of a curve with a parameter. Here is the code</p>
<pre><code>AnsNf[x_, nf_] = (2 \[Pi] x^4)/((11 - (2 (2 + nf))/3) (1 + 1/2 x^6 Log[4. x^2])) + (14.298 (1 + (1.81 - 0.292 nf) x^2 - 2.276 x^2 Log[x^2/(1 + x^2)]))/(1 + (9.926 + 1.795 nf) x^2 + (1.1 - 4.964 nf) x^4 + (22.412 +5.612 nf) x^6);
AnsNfnoc[x_, nf_] = (2 \[Pi] x^4)/((11 - (2 (2 + nf))/3) (1 + 1/2 x^6 Log[4. x^2])) + (14.298 (1 + (1.81` - 0.292 nf) x^2 - 0.569 x^2 Log[x^2/(1 + x^2)]))/(1 + (9.926 + 1.795 nf) x^2 + (1.1 - 4.964 nf) x^4 + (22.412 +5.612 nf) x^6);
AnsatzINf[t_, i_] := t^2*AnsNf[t, i - 1];
AnsatzINfNoc[t_, i_] := t^2*AnsNfnoc[t, i - 1];
colsandthick = {{RGBColor[175/255, 0, 28/255], Thickness[0.004]}, {RGBColor[14/255, 95/255, 177/255], Thickness[0.004]}, {RGBColor[130/255, 120/255, 106/255], Thickness[0.004]}, {RGBColor[0/255, 102/255, 128/255],Thickness[0.004]}};
colsandthickanddot = {{RGBColor[175/255, 0, 28/255], Thickness[0.004],CapForm["Round"],Dashing[{1*^-10, 0.01}]}, {RGBColor[14/255, 95/255, 177/255],Thickness[0.004], CapForm["Round"],Dashing[{1*^-10, 0.01}]}, {RGBColor[130/255, 120/255, 106/255],Thickness[0.004], CapForm["Round"],Dashing[{1*^-10, 0.01}]}, {RGBColor[0/255, 102/255, 128/255],Thickness[0.004], CapForm["Round"], Dashing[{1*^-10, 0.01}]}};
p1 = Graphics3D[Table[{Plot[{AnsatzINf[t, i], AnsatzINfNoc[t, i]}, {t, 0, 5},PlotStyle -> {colsandthick[[i]], colsandthickanddot[[i]]}][[
1]]} /. {x_?NumericQ, y_?NumericQ} :> {x, i, y}, {i, 1, 4}], Axes -> {True, True, True}, Boxed -> {Left, Bottom, Back},BoxRatios -> {1, 1, 0.5},FaceGrids -> {{0, 0, -1}, {0, 1, 0}, {-1, 0, 0}},AxesStyle ->Directive[FontFamily -> "Helvetica", FontSize -> 16,Thickness[0.003], Black],FaceGridsStyle ->Directive[GrayLevel[0.3, 1], AbsoluteDashing[{1, 2}]], ViewPoint -> {2.477268549689875`, -2.189130098344112`,0.566436179318843`}, ViewVertical -> {0, 0, 1},ImageSize -> Large]
</code></pre>
<p>This code produces the following plot<a href="https://i.stack.imgur.com/MWq8a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MWq8a.png" alt="enter image description here"></a> which is already weird because each continuous curve should be accompanied by a lower line (specified by the function "AnsNfnoc") that should be rendered through a dotted line as specified by the "colsandthickanddot" styling option.</p>
<p>While I could live with this (but why is this so?), the real problem comes when I export the plot in pdf: as shown below, the dotted curves are now rendered, but the dashing is unevenly spaced. <a href="https://i.stack.imgur.com/GdvmM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GdvmM.jpg" alt="enter image description here"></a></p>
<p>I am under the impression that this is due to some mesh applied when rendering the 3D plot, but do you have any idea how this could be corrected in such a way that the space between the dots is rendered evenly?</p>
| DPF | 34,167 | <p>Just add the <code>Dashed</code> directive before the <code>Plot</code>:</p>
<pre><code>[...] Table[{Dashed, Plot[{AnsatzINf[t, [...]
</code></pre>
<p>Knowing that, you can divide your <code>Plot</code> and only add the <code>Dashed</code> directive before the curves you want.</p>
<p><a href="https://i.stack.imgur.com/Qk529.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qk529.png" alt="enter image description here"></a></p>
|
2,849,450 | <p>Let's consider a generic linear programming problem. Is it possible that the decision variables of the objective function assume (at the optimal solution) irrational values?</p>
<p>Also, is it possible that some entries of the $A$ matrix are irrational?</p>
| Johan Löfberg | 37,404 | <p>If the problem is described with rational data, there is always a rational optimal solution. I don't have any reference immediately, but it is a standard result. Search on rational data linear program, polynomial complexity etc, and you will find a lot of material.</p>
<p>Edit: I see my answer was a bit unclear. If the solution to the rational LP is unique, it is rational. If it is non-unique, you can always generate an irrational solution by taking a linear combination of two rational solutions $\alpha x_1 + (1-\alpha)x_2$ where $\alpha$ is an irrational number between $0$ and $1$.</p>
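<p>A toy illustration of the non-uniqueness remark (my own example, not from a reference): maximize $x+y$ subject to $x+y\le 1$, $x,y\ge0$. The rational vertices $(1,0)$ and $(0,1)$ are both optimal, and an irrational convex combination of them is still feasible and optimal:</p>

```python
import math

x1, x2 = (1.0, 0.0), (0.0, 1.0)      # two rational optimal vertices
alpha = 1 / math.sqrt(2)             # an irrational weight in (0, 1)
x = tuple(alpha * a + (1 - alpha) * b for a, b in zip(x1, x2))

objective = x[0] + x[1]
assert abs(objective - 1.0) < 1e-12                          # still optimal
assert x[0] >= 0 and x[1] >= 0 and objective <= 1 + 1e-12    # still feasible
```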
|
1,272,124 | <p>we know that $1+2+3+4+5.....+n=n(n+1)/2$</p>
<p>I spent a lot of time trying to get a formula for this sum but I could not get it :</p>
<p>$( 2 + 3 + . . . + 2n)$</p>
<p>I tried to write the sum of some few terms.. Of course I saw some pattern between the sums but still the formula I Got didn't give a correct sum for other terms.</p>
<p>Is there another way of solving this question?</p>
| gt6989b | 16,192 | <p>Different sums appear in the title and in the body.
The title sum is
$$ 2+4+\ldots+2n = 2\left(1+\ldots+n\right),$$
which is easy, since you yourself said what that sum is.</p>
<p>The body sum is
$$1+2+\ldots+2n = 1+2+\ldots+M$$
with $M=2n$; now plug back into the same formula.</p>
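<p>Both closed forms are easy to sanity-check numerically; a quick Python loop (just a verification of the formulas above):</p>

```python
def even_sum(n):
    # 2 + 4 + ... + 2n = 2(1 + 2 + ... + n) = n(n + 1)
    return sum(range(2, 2 * n + 1, 2))

def full_sum(n):
    # 1 + 2 + ... + 2n = M(M + 1)/2 with M = 2n, i.e. n(2n + 1)
    return sum(range(1, 2 * n + 1))

for n in range(1, 200):
    assert even_sum(n) == n * (n + 1)
    assert full_sum(n) == n * (2 * n + 1)
```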
|
2,445,693 | <p>I know that the derivative of $n^x$ is $n^x\times\ln n$ so i tried to show that with the definition of derivative:$$f'\left(x\right)=\dfrac{df}{dx}\left[n^x\right]\text{ for }n\in\mathbb{R}\\{=\lim_{h\rightarrow0}\dfrac{f\left(x+h\right)-f\left(x\right)}{h}}{=\lim_{h\rightarrow0}\frac{n^{x+h}-n^x}{h}}{=\lim_{h\rightarrow0}\frac{n^x\left(n^h-1\right)}{h}}{=n^x\lim_{h\rightarrow0}\frac{n^h-1}{h}}$$ now I can calculate the limit, lets:$$g\left(h\right)=\frac{n^h-1}{h}$$ $$g\left(0\right)=\frac{n^0-1}{0}=\frac{0}{0}$$$$\therefore g(0)=\frac{\dfrac{d}{dh}\left[n^h-1\right]}{\dfrac{d}{dh}\left[h\right]}=\frac{\dfrac{df\left(0\right)}{dh}\left[n^h\right]}{1}=\dfrac{df\left(0\right)}{dh}\left[n^h\right]$$
So in the end I get: $$\dfrac{df}{dx}\left[n^x\right]=n^x\dfrac{df\left(0\right)}{dx}\left[n^x\right]$$
So my question is: how can I prove that $$\dfrac{df\left(0\right)}{dx}\left[n^x\right]=\ln n$$</p>
<h1>edit:</h1>
<p>I got two answers that show this using the fact that $\lim_{z \rightarrow 0}\dfrac{e^z-1}{z}=1$. So how can I prove it using the other definitions of $e$? I know it is a definition, but how can I show that this $e$ is equal to the $e$ of $\sum_{n=0}^\infty \frac{1}{n!}$?</p>
| Community | -1 | <p>If you don't have a definition of the logarithm handy (or suitable properties taken for granted), you cannot obtain the stated result because the logarithm will not appear by magic from the computation.</p>
<p>Assume that the formula $n^x=e^{x \log n}$ is not allowed. Then to define the powers, you can work via rationals</p>
<p>$$n^{p/q}=\sqrt[q]{n^p}$$ and extend to the reals by continuity.</p>
<p>Using this approach, you obtain</p>
<p>$$\lim_{h\to0}\frac{n^h-1}h=\lim_{m\to\infty}m(\sqrt[m]n-1)$$</p>
<p>and you can take this as <em>a definition of the logarithm</em>.</p>
<p>$$\log n:=\lim_{m\to\infty}m(\sqrt[m]n-1).$$</p>
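<p>Numerically this limit does agree with the usual natural logarithm (a consistency check, not a proof):</p>

```python
import math

def log_via_roots(n, m=10**7):
    # Approximates lim_{m -> infinity} m (n^(1/m) - 1).
    return m * (n ** (1.0 / m) - 1.0)

for n in (2.0, 5.0, 10.0):
    assert abs(log_via_roots(n) - math.log(n)) < 1e-4
```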
|
1,581,756 | <p>Find the general solution of $$z(px-qy)=y^2-x^2$$ Let $F(x,y,z,p,q)=z(px-qy)+x^2-y^2$. This gives $$F_x=zp+2x$$
$$F_y=-zq-2y$$
$$F_z=px-qy$$
$$F_p=zx$$
$$F_q=-zy$$
By Charpit's method we have $$\frac{dx}{zx}=\frac{dy}{-zy}=\frac{dz}{z(px-qy)}=\frac{dp}{-zp-2x-p^2x+pqy}=\frac{dq}{zq+2y-pxy+q^2y}$$</p>
<p>By equating the first two I am getting $xy=k$.</p>
<p>But I am not able to solve the last two. </p>
<p>Thanks for the help!!</p>
| Ross Millikan | 1,827 | <p>The reason that numbers with more than four divisors are multiples of numbers with exactly four divisors are that the numbers exactly four divisors are of the form $pq$ for distinct primes $p,q$ or $p^3$ for prime $p$. To have more than four, the number has to be of the form $p^4, p^2q, \text{ or } pqr$ for primes $p,q,r$ or more complex. All of these are multiples of a number with exactly four factors. Conversely, any multiple of a number $pq$ or $p^3$ will have more than four divisors.</p>
<p>A number with more than five divisors will be a multiple of a number in your OEIS sequence, because this sequence is all numbers of the form $pq$ or $p^4$. As $pqr$ has eight divisors and $p^2q$ and $p^5$ have six, all multiples of the elements of this sequence will have more than five divisors. </p>
<p>Added: for the fact that numbers with more that three divisors are not exactly the numbers that are multiples of numbers with exactly three divisors can be shown by example: $6$ has four divisors, while $1,2,3$ do not have exactly three. The general case is that a number of the form $pq$, with $p,q$ distinct primes, has factors $1,p,q,pq$ and none of $1,p,q$ have exactly three divisors.</p>
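<p>These divisor-count claims can be verified by brute force for small numbers (a mechanical check of the structural argument above):</p>

```python
def num_divisors(n):
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 1 if d * d == n else 2
        d += 1
    return count

def has_divisor_with_exactly_four(n):
    return any(n % m == 0 and num_divisors(m) == 4 for m in range(2, n + 1))

LIMIT = 1000
for n in range(2, LIMIT):
    # A number with more than four divisors is a multiple of one with exactly four...
    if num_divisors(n) > 4:
        assert has_divisor_with_exactly_four(n)
    # ...and every proper multiple of a number with exactly four divisors has more than four.
    if num_divisors(n) == 4:
        for mult in range(2 * n, LIMIT, n):
            assert num_divisors(mult) > 4
```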
|
3,564,476 | <p>I'm stuck on the following problem: </p>
<p><span class="math-container">$$\int_{\frac\pi{12}}^{\frac\pi2}(1-\cos4x)\cos2x\>dx$$</span></p>
<p>I think I can use the double angle formulas here but I'm not sure how to apply it, or even if it's the right approach. I'm also not sure if 1-cos4x can be translated into anything. </p>
<p><a href="https://i.stack.imgur.com/8oKQW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8oKQW.png" alt="enter image description here"></a></p>
<p>A step-through would be greatly appreciated. </p>
| Quanto | 686,284 | <p>Use the double angle identity to write <span class="math-container">$1-\cos 4x = 2\sin^2 2x$</span>,</p>
<p><span class="math-container">$$\int_{\frac\pi{12}}^{\frac\pi2}(1-\cos4x)\cos2xdx
=\int_{\frac\pi{12}}^{\frac{\pi}2}\sin^2 2x\>d(\sin 2x)= \frac13 \sin^32x\bigg|_ {\frac\pi{12}}^{\frac{\pi}2}=-\frac1{24}$$</span></p>
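<p>A numerical check of the value <span class="math-container">$-\frac1{24}\approx-0.0417$</span> (midpoint rule; verification only):</p>

```python
import math

def f(x):
    return (1 - math.cos(4 * x)) * math.cos(2 * x)

a, b, steps = math.pi / 12, math.pi / 2, 100_000
h = (b - a) / steps
integral = h * sum(f(a + (i + 0.5) * h) for i in range(steps))
assert abs(integral - (-1 / 24)) < 1e-8
```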
|
1,455,969 | <p><a href="https://i.stack.imgur.com/5O0d8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5O0d8.png" alt="enter image description here"></a></p>
<p>Hello! I'm having problems trying to figure this out. Here is what I did: I used the implication equivalence and De Morgan's law to simplify this proposition. I then used the associative and commutative laws because the operators were all disjunctions and conjunctions. </p>
<p>The picture below is basically proof of my attempt at trying to solve this. There is no need to follow along and attempt to correct my work. I fear that that may be too time consuming considering how rough my work is. Hints/answer is much appreciated. </p>
<p>EDIT: Once again, no need to correct my wrong. It's really rough. I'll keep it much neater next time. </p>
<p><a href="https://i.stack.imgur.com/F5a90.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F5a90.jpg" alt="enter image description here"></a></p>
| Eugene Zhang | 215,082 | <p>First we prove that
$$
\{\neg(r\to p)\lor[(\neg q\to\neg p)\land(r\to q)]\}\iff \neg p \lor q
$$
Let $A\iff\{\neg(r\to p)\lor[(\neg q\to\neg p)\land(r\to q)]\}$. Then
\begin{align}
A&\iff\{\neg(\neg r\lor p)\lor[(\neg q\to\neg p)\land(r\to q)]\}\tag1
\\
&\iff\{(r\land\neg p)\lor[(\neg q\to\neg p)\land(r\to q)]\}\tag2
\\
&\iff\{(r\land\neg p)\lor[(p\to q)\land(r\to q)]\}\tag3
\\
&\iff\{(r\land\neg p)\lor(p\to q)\}\land\{(r\land\neg p)\lor(r\to q)\}\tag4
\\
&\iff\{(r\land\neg p)\lor(\neg p\lor q)\}\land\{(r\land\neg p)\lor(\neg r\lor q)\}\tag5
\\
&\iff\{[r\lor(\neg p\lor q)]\land[\neg p\lor(\neg p\lor q)]\}\land\{[r\lor(\neg r\lor q)]\land[\neg p\lor(\neg r\lor q)]\}\tag6
\\
&\iff\{[r\lor(\neg p\lor q)]\land[\neg p\lor q]\}\land\{[(r\lor\neg r)\lor q]\land[\neg p\lor(\neg r\lor q)]\}\tag7
\\
&\iff\{[r\lor(\neg p\lor q)]\land[\neg p\lor q]\}\land\{1\land[\neg p\lor(\neg r\lor q)]\}\tag8
\\
&\iff\{[r\lor(\neg p\lor q)]\land[\neg p\lor(\neg r\lor q)]\}\land[\neg p\lor q]\tag9
\\
&\iff\{[r\lor(\neg p\lor q)]\land[\neg r\lor(\neg p\lor q)]\}\land[\neg p\lor q]\tag{10}
\\
&\iff\{0\lor(\neg p\lor q)\}\land[\neg p\lor q]\tag{11}
\\
&\iff(\neg p\lor q)\land(\neg p\lor q)\tag{12}
\\
&\iff\neg p\lor q
\end{align}
$(1)$ is for $r\to p\iff \neg r\lor p$. </p>
<p>$(2)$ is for De Morgan's law and double negation.</p>
<p>$(3)$ is for $(\neg q\to\neg p)\iff (p\to q)$.</p>
<p>$(4)$ is for distributivity.</p>
<p>$(5)$ is for $r\to q\iff \neg r\lor q$.</p>
<p>$(6)$ is for distributivity.</p>
<p>$(7)$ is for idempotence of disjunction and associativity. </p>
<p>$(8)$ is for $r\lor \neg r\iff 1$.</p>
<p>$(9)$ is for $1\land B\iff B$.</p>
<p>$(10)$ is for associativity and commutativity. </p>
<p>$(11)$ is for distributivity and $r\land \neg r\iff 0$.</p>
<p>$(12)$ is for idempotence of conjunction.</p>
<p>So
$$
\{\{\neg(r\to p)\lor[(\neg q\to\neg p)\land(r\to q)]\}\to(\neg p\lor q)\}\iff\{(\neg p\lor q)\to(\neg p\lor q)\}
$$
Since
\begin{align}
\{(\neg p\lor q)\to(\neg p\lor q)\}&\iff\{\neg[(\neg p\lor q)]\lor(\neg p\lor q)\}
\\
&\iff\{(p\land \neg q)\lor(\neg p\lor q)\}
\\
&\iff\{[(p\lor(\neg p\lor q)]\land [\neg q\lor(\neg p\lor q)]\}
\\
&\iff\{[(p\lor\neg p)\lor q)]\land [(\neg q\lor q)\lor \neg p)]\}
\\
&\iff\{(1\lor q)\land (1\lor \neg p)\}
\\
&\iff\{1\land 1\}
\\
&\iff 1
\end{align}
$\{(\neg p\lor q)\to(\neg p\lor q)\}$ is a tautology. So
$$
\{\neg(r\to p)\lor[(\neg q\to\neg p)\land(r\to q)]\}\to(\neg p\lor q)\iff 1
$$
It is a tautology too.</p>
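<p>Since only three propositional variables occur, the conclusion can also be confirmed mechanically by an exhaustive truth table (a check of the result, not a replacement for the derivation):</p>

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def statement(p, q, r):
    left = (not implies(r, p)) or (implies(not q, not p) and implies(r, q))
    return implies(left, (not p) or q)

# True under all 8 assignments, hence a tautology.
assert all(statement(p, q, r) for p, q, r in product([False, True], repeat=3))
```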
|
1,455,969 | <p><a href="https://i.stack.imgur.com/5O0d8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5O0d8.png" alt="enter image description here"></a></p>
<p>Hello! I'm having problems trying to figure this out. Here is what I did: I used the implication equivalence and De Morgan's law to simplify this proposition. I then used the associative and commutative laws because the operators were all disjunctions and conjunctions. </p>
<p>The picture below is basically proof of my attempt at trying to solve this. There is no need to follow along and attempt to correct my work. I fear that that may be too time consuming considering how rough my work is. Hints/answer is much appreciated. </p>
<p>EDIT: Once again, no need to correct my wrong. It's really rough. I'll keep it much neater next time. </p>
<p><a href="https://i.stack.imgur.com/F5a90.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F5a90.jpg" alt="enter image description here"></a></p>
| MarnixKlooster ReinstateMonica | 11,994 | <p>I would treat this as a simplification problem: start with the left hand side of the statement, and work towards the right hand side.$
\newcommand{\calc}{\begin{align} \quad &}
\newcommand{\op}[1]{\\ #1 \quad & \quad \unicode{x201c}}
\newcommand{\hints}[1]{\mbox{#1} \\ \quad & \quad \phantom{\unicode{x201c}} }
\newcommand{\hint}[1]{\mbox{#1} \unicode{x201d} \\ \quad & }
\newcommand{\endcalc}{\end{align}}
\newcommand{\ref}[1]{\text{(#1)}}
\newcommand{\equiv}{\leftrightarrow}
\newcommand{\then}{\rightarrow}
\newcommand{\followsfrom}{\leftarrow}
\newcommand{\true}{\text{true}}
\newcommand{\false}{\text{false}}
$</p>
<p>This leads to the following straightforward calculation:</p>
<p>$$\calc
\lnot(r\then p) \;\lor\; \big((\lnot q\then\lnot p) \land (r\then q)\big)
\op\equiv\hints{write all $\;\psi \then \phi\;$ as $\;\lnot \psi \lor \phi\;$; remove $\;\lnot \lnot {}\;$}
\hint{-- usually $\;\then\;$ is not easy to calculate with}
\lnot(\lnot r \lor p) \;\lor\; \big((q \lor \lnot p) \land (\lnot r \lor q)\big)
\op\equiv\hints{simplify left hand part using DeMorgan;}
\hints{simplify right hand part by extracting common $\;{} \lor q\;$}
\hint{-- note that we can leave out one pair of parentheses}
(r \land \lnot p) \;\lor\; (\lnot p \land \lnot r) \;\lor\; q
\op\equiv\hint{simplify by extracting common $\;{} \land \lnot p\;$}
\big((r \;\lor\; \lnot r) \land \lnot p\big) \;\lor\; q
\op\equiv\hint{simplify using excluded middle}
(\true \land \lnot p) \;\lor\; q
\op\equiv\hint{simplify}
\lnot p \;\lor\; q
\endcalc$$</p>
<p>This completes the proof of the original statement, since we've proven the stronger statement
$$
\lnot(r\then p) \;\lor\; \big((\lnot q\then\lnot p) \land (r\then q)\big)
\;\;\equiv\;\;
\lnot p \;\lor\; q$$</p>
|
24,927 | <p>Does this mean that the first homotopy group in some sense contains more information than the higher homotopy groups? Is there another generalization of the fundamental group that can give rise to non-commutative groups in such a way that these groups contain more information than the higher homotopy groups? </p>
| Ben Lerner | 1,769 | <p>One geometrical fact underlying the non-commutativity of the fundamental group is the following: two objects on a line can't switch relative position (i.e. left and right) through homotopy, as they are unable to "pass" each other. Two objects in a higher dimensional space can, however; so intuitively, it seems that a homotopy theory based on mappings of $I^n$ will naturally be abelian for $n \ge 2$.</p>
|
678,768 | <p>"Let $A$, $B$ be two infinite sets. Suppose that $f: A \to B$ is injective. Show that there exists a surjective map $g: B \to A$"</p>
<p>I am not sure how to go about this proof. I am trying to gather information to help me, and deduce as much as I can:</p>
<ul>
<li>Since $f$ is injective we know that $|A| \leq |B|$.</li>
<li>If $g$ is surjective then $|A| \geq |B|$.</li>
<li>Thus $|A| = |B|$ (for this to hold).</li>
<li>This suggests that we cannot have one countable and one uncountable set.</li>
</ul>
<p>I would start by assuming by contradiction that no such surjective map exists. I am thinking perhaps by removing the elements $b$ from $B$ that are not mapped to from $A$ we would have an injective map $g: A \to B\setminus\{b\}$. However this is as far as I have got. Can someone tell me if I am on the right track and if so where to go from here, or if I have got the completely wrong train of thought? Thanks</p>
| copper.hat | 27,978 | <p>Let $B_1 = f(A), B_2 = B \setminus B_1$, $a_0 \in A$.</p>
<p>For $x \in B_1$, let $g(x)$ be the unique element $a \in A$ such that $f(a) = x$.</p>
<p>For $x \in B_2$, let $g(x) = a_0$. </p>
<p>Then $g(B) = A$.</p>
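<p>The construction is concrete enough to illustrate with finite stand-ins (the sets in the problem are infinite, but the recipe for $g$ is identical):</p>

```python
A = {"a", "b", "c"}
B = {1, 2, 3, 4, 5}
f = {"a": 1, "b": 3, "c": 5}   # an injection A -> B
a0 = "a"                        # any fixed element of A

# g sends each element of f(A) to its unique preimage, and everything else to a0.
inverse = {v: k for k, v in f.items()}
g = {x: inverse.get(x, a0) for x in B}

assert set(g.values()) == A                 # g is surjective onto A
assert all(g[f[a]] == a for a in A)         # g o f is the identity on A
```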
|
1,843,274 | <p>Good evening to everyone. So I have this inequality: $$\frac{\left(1-x\right)}{x^2+x} <0 $$ It becomes $$ \frac{\left(1-x\right)}{x^2+x} <0 \rightarrow \left(1-x\right)\left(x^2+x\right)<0 \rightarrow x^3-x>0 \rightarrow x\left(x^2-1\right)>0 $$ Therefore from the first $ x>0 $, from the second $ x_1 = 1 $ and $x_2=-1$ therefore $ x $ belongs to $(-\infty,-1)$ and $(1,\infty)$ therefore $x$ belongs to $(1,\infty)$. But on the answer sheet it shows that it's defined on $(-1,0)$ and $(1,\infty)$. Where I am wrong? Thanks for any response.</p>
| Jared | 138,018 | <p>Once you write this $\frac{1 - x}{x(1 + x)} < 0$ you will find that the critical point are at $x = \{-1, 0, 1\}$. You can simply plug in $x = -2$ to find that:</p>
<p>$$
\frac{1 + 2}{4 - 2} = \frac{3}{2} > 0
$$</p>
<p>Therefore this expression is $> 0$ when $x < -1$, $< 0$ when $-1 < x < 0$, $>0$ when $0 < x < 1$, and $<0$ when $x > 1$.</p>
<p>Therefore this is true when:</p>
<p>$$
(-1, 0) \cup (1, \infty)
$$</p>
<p><strong>Comment:</strong></p>
<p>Take a long hard look at my answer. It's compact, but it's clear if you take enough time to look at it. My answer relies on the fact that a sign change requires an "odd" factor (all of your factors are odd). As soon as you establish the sign of <em>any</em> region you have established the sign of <em>all</em> regions!</p>
<p><strong>A more "Rigorous" Way:</strong></p>
<p>A more rigorous way is to show where each of the factors are positive or negative. The following picture shows the three factors and their sign chart when $x = -1$, $x = 0$, and $x = 1$ (the three vertical lines).</p>
<p>\begin{align}
1- x > 0 \rightarrow x < 1 \\
x > 0 \rightarrow x > 0\\
1 + x > 0 \rightarrow x > -1
\end{align}</p>
<p>This shows a sign chart where we have:</p>
<p><a href="https://i.stack.imgur.com/oX6d3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oX6d3.png" alt="enter image description here"></a></p>
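<p>A quick numerical spot check of the sign chart, sampling one point from each region (verification only):</p>

```python
def f(x):
    return (1 - x) / (x * x + x)

# Regions from the sign chart: x < -1, (-1, 0), (0, 1), x > 1.
assert f(-2.0) > 0
assert f(-0.5) < 0
assert f(0.5) > 0
assert f(2.0) < 0
```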
|
9,513 | <p>I'm a very novice user of <em>Mathematica</em>. Is it possible to export Mathematica code directly into $\LaTeX$? I'm interested only in exporting mathematical formulas. Also, from which version of the program is this possible?</p>
| soandos | 605 | <p>Select the text, right-click, and select Copy As -> LaTeX.</p>
<p><img src="https://i.stack.imgur.com/cW29C.jpg" alt="enter image description here"></p>
|
855,227 | <p>I'm trying to understand Taylor's Theorem for functions of $n$ variables, but all this higher dimensionality is causing me trouble. One of my problems is understanding the higher order differentials. For example, if I have a function $f(x, y)$, then its first differential is: </p>
<p>$$df = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy.$$</p>
<p>To me this quantity is saying that: </p>
<blockquote>
<p>A differential change in the value of function $f(x,y)$ is equal to how fast function $f(x,y)$ is changing
with respect to $x$ multiplied by a differential change in the
$x$-coordinate plus how fast function $f(x,y)$ is changing with
respect to $y$ multiplied by a differential change in the
$y$-coordinate.</p>
</blockquote>
<p>This seems intuitive. But when we get into higher order differentials I get confused: </p>
<p>$$d^2f= \frac{\partial^2 f}{\partial y ^2}dy^2 + 2\frac{\partial^2 f}{\partial y \partial x}dy\:dx + \frac{\partial^2 f}{\partial x ^2}dx^2$$</p>
<p>How would one interpret this quantity? What about even higher order differentials, say $d^3f$ or $d^{1500}f$? =) </p>
<p>Thank you for any help! =) </p>
| Christian Blatter | 1,303 | <p>The "quantity" $$d^rf({\bf z}):=\sum_{k=0}^r{r\choose k}{\partial ^rf({\bf z})\over\partial x^k\partial y^{r-k}}dx^k\,dy^{r-k}$$
is a <strong>homogeneous polynomial</strong> of degree $r$ in the variables $dx$, $dy$ with coefficients the various $r$th order partial derivatives of $f$ at the given point ${\bf z}$. (Originally the polynomial had $2^r$ terms, but only $r+1$ of them really different.) This polynomial collects all terms of total degree $r$ in the Taylor expansion of $f$ at ${\bf z}$:
$$\eqalign{j^n_{\bf z}f(d{\bf z})&=\sum_{r=0}^n {1\over r!}d^rf({\bf z}) \cr&= f({\bf z})+
\bigl(f_x({\bf z})dx+ f_y({\bf z}) dy\bigr)+{1\over2}\bigl(f_{xx}({\bf z})dx^2+2 f_{xy}({\bf z})dx\,dy+ f_{yy}({\bf z})dy^2\bigr)\cr&\ \ \ +{1\over6}\bigl(f_{xxx}({\bf z})dx^3+3 f_{xxy}({\bf z})dx^2\,dy+3 f_{xyy}({\bf z})dx\,dy^2+ f_{yyy}({\bf z})dy^3\bigr)+{1\over24}\ldots\cr}$$</p>
|
4,289,381 | <p>I am trying to answer a question about line integrals, I have had a go at it but I am not sure where I am supposed to incorporate the line integral into my solution.</p>
<p><span class="math-container">$$ \mathbf{V} = xy\hat{\mathbf{x}} - xy^2\hat{\mathbf{y}}$$</span>
<span class="math-container">$$ \mathrm{d}\mathbf{l} = \hat{\mathbf{x}}\mathrm{d}x + \hat{\mathbf{y}}\mathrm{d}y $$</span>
<span class="math-container">$$ \int_C\!\mathbf{V}\cdot\mathrm{d}\mathbf{l} = \int\!xy\,\mathrm{d}x - \int\!xy^2\,\mathrm{d}y = \left[ \frac{x^2y}{2}\right ]_?^? - \left[ \frac{xy^3}{3} \right]_?^? $$</span></p>
<p>I have a feeling that the parabola in question must come into play in the limits of the integrals, although I don't know how. The parabola in question is <span class="math-container">$y = \frac{x^2}{3}$</span>, and the endpoints of the line integral are <span class="math-container">$a=(0,0)$</span> and <span class="math-container">$b=(3,3)$</span>.</p>
| AlanD | 356,933 | <p>In this case, it's as simple as plugging in <span class="math-container">$y=\frac{x^2}{3}$</span> into the first integral and integrating from <span class="math-container">$x=0$</span> to <span class="math-container">$x=3$</span>. In the second integral, do the same thing, but plug in <span class="math-container">$dy=\frac{2x}{3}dx$</span>. In other words, parameterize the whole curve in terms of <span class="math-container">$x$</span>.</p>
<p>In other examples (like circles), it may be more convenient to parameterize <span class="math-container">$x$</span> and <span class="math-container">$y$</span> with a different variable.</p>
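<p>A numerical sketch of that parameterization in Python (the closed form $\frac{x^4}{12}-\frac{2x^7}{189}$ used as a cross-check below is my own evaluation of the resulting single-variable integral, not something stated above):</p>

```python
def integrand(x):
    # On y = x^2/3: V . dl = x*y dx - x*y^2 dy, with dy = (2x/3) dx.
    y = x * x / 3
    return x * y - x * y * y * (2 * x / 3)

steps = 100_000
h = 3.0 / steps
line_integral = h * sum(integrand((i + 0.5) * h) for i in range(steps))  # midpoint rule

exact = 3**4 / 12 - 2 * 3**7 / 189   # antiderivative x^4/12 - 2x^7/189 evaluated at x = 3
assert abs(line_integral - exact) < 1e-4
```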
|
2,970,787 | <blockquote>
<p>Find <span class="math-container">$\lim_{x\to -\infty} \frac{(x-1)^2}{x+1}$</span></p>
</blockquote>
<p>If I divide whole expression by maximum power i.e. <span class="math-container">$x^2$</span> I get,<span class="math-container">$$\lim_{x\to -\infty} \frac{(1-\frac1x)^2}{\frac1x+\frac{1}{x^2}}$$</span>
Numerator tends to <span class="math-container">$1$</span> ,Denominator tends to <span class="math-container">$0$</span></p>
<p>So I get the answer as <span class="math-container">$+\infty$</span></p>
<p>But when I plot the graph it tends to <span class="math-container">$-\infty$</span></p>
<p>What am I missing here? Can someone give me the precise steps that I should write in such a case? Thank you very much!</p>
<p>NOTE: I cannot use L'hopital for finding this limit.</p>
| KM101 | 596,598 | <p><span class="math-container">$$\lim_{x \to -\infty} \frac{x^2-2x+1}{x+1} = \lim_{x \to -\infty} \frac{1-\frac{2}{x}+\frac{1}{x^2}}{\frac{1}{x}+\frac{1}{x^2}}$$</span>
As you mentioned, the numerator tends to <span class="math-container">$1$</span>. However, notice that the denominator tends to <span class="math-container">$0^-$</span>.
<span class="math-container">$$\vert x\vert > 1 \implies \biggr\vert \frac{1}{x}\biggr\vert > \biggr\vert\frac{1}{x^2}\biggr\vert$$</span>
<span class="math-container">$$\frac{1}{x}+\frac{1}{x^2} < 0$$</span>
Hence, the denominator tends to <span class="math-container">$0^-$</span> (<span class="math-container">$0$</span> from the negative side). Therefore, the limit is <span class="math-container">$-\infty$</span>.</p>
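<p>A direct numerical look at large negative <span class="math-container">$x$</span> agrees (verification only):</p>

```python
def f(x):
    return (x - 1) ** 2 / (x + 1)

# Numerator large and positive, denominator large and negative, so f is very negative.
for x in (-1e3, -1e6, -1e9):
    assert f(x) < 0
    assert f(10 * x) < f(x)   # decreases without bound
```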
|
1,728,097 | <p>So i have this integral : $$ \int_0^\infty e^{-xy} dy = -\frac{1}{x} \Big[ e^{-xy} \Big]_0^\infty$$
The integration part is fine, but I'm not sure what I get with the limits; can someone explain this?</p>
<p>Thanks </p>
| Michael Hardy | 11,667 | <p>To be explicit: In this case $-\frac{1}{x} \Big[ e^{-xy} \Big]_{y:=0}^{y:=\infty}$ means $-\frac{1}{x} \Big[ e^{-xy} \Big]_{y\,:=\,0}^{y\,:=\,\infty}$.</p>
<p>It does <b>not</b> mean $-\frac{1}{x} \Big[ e^{-xy} \Big]_{x\,:=\,0}^{x\,:=\,\infty}$.</p>
|
341,202 | <p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p>
<p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p>
<p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p>
<blockquote>
<p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p>
</blockquote>
| Novice | 97,093 | <p>Consider the following example:</p>
<p>Let $\overline{xyz}$ be a three-digit number with digits $x$, $y$, $z$.</p>
<p>Since $100 \equiv 10 \equiv 1 \pmod 3$, we have $$\overline{xyz} = 100x + 10y + z \equiv x + y + z \pmod 3.$$</p>
<p>Therefore $\overline{xyz} \equiv 0 \pmod 3$ if and only if $x + y + z \equiv 0 \pmod 3$; that is, the number is divisible by $3$ exactly when the sum of its digits is.</p>
<p>You can generalize this to $n$-digit numbers: express the number in powers of $10$. Since every power of $10$ is $\equiv 1 \pmod 3$, the number is divisible by $3$ if and only if its digit sum is.</p>
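<p>The rule itself is easy to test exhaustively over a range of integers (a check, not a proof):</p>

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

for n in range(1, 100_000):
    assert (n % 3 == 0) == (digit_sum(n) % 3 == 0)
```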
|
341,202 | <p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p>
<p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p>
<p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p>
<blockquote>
<p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p>
</blockquote>
| Learner | 280,290 | <p>This can be easily proven by "Digital Root" concept.</p>
<p>Digital root: A digit obtained by adding digits of number until a single digit is obtained.</p>
<p>The natural numbers are partitioned into $9$ equivalence classes by their digital root.</p>
<p>Any number with digital root $1$ can be written as $1+9n$,
any number with digital root $2$ as $2+9n$, and so on.</p>
<p>(the reason for writing like this is: 9 is identity in the case of finding digital root)</p>
<p><strong>So a number whose digit sum is $3$ (i.e., whose digital root is $3$) can be written as $3+9i$,
which is divisible by $3$; this is clear from the representation.</strong></p>
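A quick sketch checking the digital-root description numerically; note that the multiples of $3$ are exactly the numbers with digital root $3$, $6$, or $9$ (a slight extension of the case singled out above, and the helper name is mine):

```python
def digital_root(n: int) -> int:
    """Repeatedly sum decimal digits until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

# The representation above in closed form: digital root r means n = r + 9*i,
# equivalently digital_root(n) == 1 + (n - 1) % 9.
for n in range(1, 5000):
    assert digital_root(n) == 1 + (n - 1) % 9

# Multiples of 3 are exactly the numbers with digital root 3, 6, or 9.
for n in range(1, 5000):
    assert (n % 3 == 0) == (digital_root(n) in (3, 6, 9))
```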
|
1,687,147 | <blockquote>
<p>A category <span class="math-container">$\mathsf C$</span> consists of the following three mathematical entities:</p>
<ul>
<li><p>A class <span class="math-container">$\operatorname{ob}(\mathsf{C})$</span>, whose elements are called objects;</p>
</li>
<li><p>A class <span class="math-container">$\hom(\mathsf{C})$</span>, whose elements are called morphisms or maps or arrows. Each morphism <span class="math-container">$f$</span> has a source object <span class="math-container">$a$</span> and target object <span class="math-container">$b$</span>.</p>
</li>
<li><p>A binary operation <span class="math-container">$\circ$</span>, called composition of morphisms, such that for any three objects <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$c$</span>, we have <span class="math-container">$\hom(b, c) \times \hom(a, b) \to \hom(a, c)$</span>. The composition of <span class="math-container">$f : a \to b$</span> and <span class="math-container">$g : b \to c$</span> is written as <span class="math-container">$g \circ f$</span> or <span class="math-container">$gf$</span>, governed by two axioms: [...]</p>
</li>
</ul>
</blockquote>
<p>What the exact meaning of 'consist of' in the first sentence? Of course, I know the usual meaning. However, since it is not a mathematical term, I don't know the mathematical meaning of 'consists of'.</p>
| C. Dubussy | 310,801 | <p>To be formal, you can say that a category is a triple $(Ob(C), Hom(C), \circ)$ such that, etc ...</p>
<p>The notion of triple is perfectly and formaly defined in set theory. </p>
<p>Of course, I use the definition of category which states that $Ob(C)$ and $Hom(C)$ must be sets. To work with this definition, one usually uses Grothendieck Universes. </p>
|
563,431 | <p>Find the absolute maximum and minimum of $f(x,y)= y^2-2xy+x^3-x$ on the region bounded by the curve $y=x^2$ and the line $y=4$. You must use Lagrange Multipliers to study the function on the curve $y=x^2$.</p>
<p>I'm unsure how to approach this because $y=4$ is given. Is this a trick question?</p>
| Community | -1 | <p><strong>Hint</strong>: Consider a direct product of cyclic groups whose orders multiply to $52$, but such that the product is not cyclic. The fact that</p>
<p>$$52 = 2^2 \cdot 13$$</p>
<p>is highly relevant here.</p>
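The hint can be checked concretely: one such non-cyclic product is $\mathbb{Z}_2 \times \mathbb{Z}_{26}$ (using $\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_{13}$ would work equally well). A small sketch confirming that no element has order $52$, so the group of order $52$ is not cyclic:

```python
from itertools import product
from math import gcd, lcm

# The order of a in Z_n is n // gcd(a, n); the order of (a, b) in Z_2 x Z_26
# is the lcm of the component orders. The maximum is 26 < 52, so not cyclic.
orders = {lcm(2 // gcd(a, 2), 26 // gcd(b, 26))
          for a, b in product(range(2), range(26))}
assert max(orders) == 26
print(sorted(orders))  # [1, 2, 13, 26]
```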
|
2,987,994 | <p>I'm looking for definite integrals that are solvable using the method of differentiation under the integral sign (also called the Feynman Trick) in order to practice using this technique.</p>
<p>Does anyone know of any good ones to tackle?</p>
| clathratus | 583,016 | <p>A few good ones are:
<span class="math-container">$$\int_0^\infty e^{-\frac{x^2}{y^2}-y^2}dx$$</span>
<span class="math-container">$$\int_0^\infty \frac{1-\cos(xy)}{x^2}dx$$</span>
<span class="math-container">$$\int_0^\infty \frac{dx}{(x^2+p)^{n+1}}$$</span>
<span class="math-container">$$\int_{0}^{\infty}e^{-x^2}dx$$</span>
<span class="math-container">$$\int_0^\infty \cos(x^2)dx$$</span>
<span class="math-container">$$\int_0^\infty \sin(x^2)dx$$</span>
<span class="math-container">$$\int_0^\infty \frac{\sin^2x}{x^2(x^2+1)}dx$$</span>
<span class="math-container">$$\int_0^{\pi/2} x\cot x\ dx$$</span>
That should keep you busy for a while ;)</p>
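While practicing, it helps to check each closed form numerically. A sketch verifying the Gaussian integral from the list against its known value $\sqrt{\pi}/2$ with a plain midpoint rule (the step count and the truncation at $x=10$ are arbitrary choices; the tail beyond $10$ is negligible):

```python
import math

def integral(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# A classic target of the parameter-differentiation trick applied to
# I(t) = ∫_0^∞ e^{-t x^2} dx:  ∫_0^∞ e^{-x^2} dx = √π / 2.
approx = integral(lambda x: math.exp(-x**2), 0.0, 10.0)
exact = math.sqrt(math.pi) / 2
assert abs(approx - exact) < 1e-6
print(exact)  # ≈ 0.8862269254528
```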
|
1,162,315 | <blockquote>
<p>(b) An electrical circuit comprises three closed loops giving the following equations for the currents $i_1, i_2$ and $i_3$</p>
<p>\begin{align*}
i_1 + 8i_2 + 3i_3 &= -31\\
3i_1 - 2i_2 + i_3 &= -5\\
2i_1 - 3i_2 + 2i_3 &= 6
\end{align*}</p>
</blockquote>
<p>This is the system I need to solve. How do I solve for all three?</p>
<p>Any help would be of great help. But I need step by step instructions for each unknown. Thanks</p>
| Henrik supports the community | 193,386 | <p>Isolate one of the variables (e.g. $i_1$) in the first equation, and substitute the result in the second, which becomes an equation with two unknowns ($i_2$ and $i_3$). Isolate one of them (e.g. $i_2$). Then substitute both results in the third equation, which (if you do it correctly) becomes an equation with only one unknown ($i_3$). Hopefully you know how to solve that, and then you just put the result into the expression you found for $i_2$, and when you have that, put both values into the expression for $i_1$.</p>
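The isolate-and-substitute procedure described above is exactly what Gaussian elimination mechanizes. A sketch solving this particular system with exact rational arithmetic (the variable names are illustrative):

```python
from fractions import Fraction

# Solve the 3x3 system by Gaussian elimination with exact arithmetic.
# Each row is [coef_i1, coef_i2, coef_i3, rhs].
A = [[Fraction(1), Fraction(8), Fraction(3), Fraction(-31)],
     [Fraction(3), Fraction(-2), Fraction(1), Fraction(-5)],
     [Fraction(2), Fraction(-3), Fraction(2), Fraction(6)]]

n = 3
for col in range(n):                      # forward elimination
    for row in range(col + 1, n):
        factor = A[row][col] / A[col][col]
        A[row] = [a - factor * b for a, b in zip(A[row], A[col])]

x = [Fraction(0)] * n                     # back substitution
for row in reversed(range(n)):
    s = A[row][n] - sum(A[row][c] * x[c] for c in range(row + 1, n))
    x[row] = s / A[row][row]

print(x)  # [Fraction(-5, 1), Fraction(-4, 1), Fraction(2, 1)]
```

So $i_1=-5$, $i_2=-4$, $i_3=2$, which you can confirm by substituting back into all three equations.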
|
1,810,055 | <p>I want to find the Galois groups of the following polynomials over $\mathbb{Q}$. The specific problems I am having is finding the roots of the first polynomial and dealing with a degree $6$ polynomial.</p>
<blockquote>
<p>$X^3-3X+1$</p>
</blockquote>
<p>Do we first need to find its roots, then construct a splitting field $L$, then calculate $Gal(L/\mathbb{Q})$?</p>
<p>I am having difficulties finding roots. If we let the reduced cubic be: $U^2+qU+\frac{p^3}{27}=U^2+U+\frac{27}{27}=U^2+U+1$. The roots of this are: $x=\frac{-1 \pm \sqrt{-3}}{2}$</p>
<p>How do we use this to find the roots of the cubic?</p>
<p>Once I can decompose the polynomial I know that the Galois group will be $\{e\}, Z_2, A_3$ or $S_3$ depending on the degree of the splitting field and and how many linear factors there are,</p>
<blockquote>
<p>$(X^3-2)(X^2+3)$</p>
</blockquote>
<p>I have never encountered finding the Galois group of a degree $6$ polynomial but I am guessing that since it is factorised this eases things somewhat.</p>
<p>Let $f(X)=(X^3-2)(X^2+3)=(X-\sqrt[3]{2})(X^2+aX+b)(X-\sqrt{-3})(X+\sqrt{3})$</p>
<p>I am not sure how to find the coefficients of $X^2+aX+b$. Is it irreducible? </p>
<p>Let $L$ be the splitting field of $f(X)$ over $\mathbb{Q}$ then (assuming $X^2+aX+b$ is irreducible) $L=\mathbb{Q}(\sqrt[3]{2}, \sqrt{-3})$. </p>
<p>If this is true what would $[\mathbb{Q}(\sqrt[3]{2}, \sqrt{-3}), \mathbb{Q}]$ be?</p>
<p>I think this degree would be the order of the Galois group, so it could narrow down to one of $S_3, S_4, A_3, A_4...$ etc</p>
| Dietrich Burde | 83,966 | <p>You do not need to know the roots of the cubic to find its Galois group. You should consult an algebra book about Galois groups and discriminants here. Then the solution is as follows.
By the Rational Root Theorem you know that $x^3-3x+1$ is irreducible and its discriminant is $81$, which is a square in $\mathbb{Q}$. Therefore the Galois group of $x^3-3x+1$ is the alternating group $A_3$. If the discriminant of a cubic is not a square, and the polynomial is irreducible, then its Galois group is $S_3$. This is the case, for example, for $x^3+3x+1$.</p>
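The discriminant criterion is easy to check by hand or by machine: for a depressed cubic $x^3+px+q$ the discriminant is $-4p^3-27q^2$. A small sketch covering both examples from the answer (the helper names are mine):

```python
from math import isqrt

def cubic_discriminant(p: int, q: int) -> int:
    """Discriminant of the depressed cubic x^3 + p*x + q."""
    return -4 * p**3 - 27 * q**2

def is_square(n: int) -> bool:
    return n >= 0 and isqrt(n) ** 2 == n

# x^3 - 3x + 1: discriminant 81 = 9^2, a rational square, so Galois group A_3.
assert cubic_discriminant(-3, 1) == 81 and is_square(81)

# x^3 + 3x + 1: discriminant -135, not a square, so Galois group S_3.
assert cubic_discriminant(3, 1) == -135 and not is_square(-135)
```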
|
3,415,331 | <p>It is easy to show, that (for continuous functions f)
<span class="math-container">$$\exists c>0,\exists\alpha>1, \forall x\in \mathbb{R}: |x|^\alpha |f(x)| <c \implies \int |f|dx <\infty$$</span></p>
<p>The question is, whether or not this is also a neccessary condition. I could not come up with any counterexamples (<span class="math-container">$1/(x \log x)$</span> is not integrable yet). But I am also not able to prove it. And it is difficult to search for even though someone has probably already asked this question.</p>
<hr>
<p>UPDATE: It appears this is not true in general even for smooth functions <span class="math-container">$f\in C^\infty$</span> due to the ability to create smooth bumps of fixed hight in regular intervals with a decreasing base, making them integrable (See comments and answers). So in order to avoid that: what about monotonous functions?</p>
<p>I am trying to understand wether or not there is a function with a decrease "between" <span class="math-container">$1/x$</span> and <span class="math-container">$1/x^\alpha$</span>. Sorry for moving the goalposts.</p>
<hr>
<p>Proof, that it is sufficient:
<span class="math-container">$$\int |f(x)| dx \le \int_{|x|<1} |f(x)| dx + \int_{|x|>1} |x|^{-\alpha}|x|^\alpha |f(x)|dx<2\|f\|_{\infty,[-1,1]} + c\int_{|x|>1} |x|^{-\alpha}dx<\infty$$</span></p>
<p>Fix due to the helpful question whether or not <span class="math-container">$|x|^{-\alpha}$</span> is integrable.</p>
| Botond | 281,471 | <p>A better example than my measure-zero one in the comment: pick your favorite summable sequence <span class="math-container">$(a_n)$</span> and your favorite positive number <span class="math-container">$b$</span>, and construct a function whose graph has, for each <span class="math-container">$n$</span>, a right triangle with base <span class="math-container">$a_n$</span> and height <span class="math-container">$b$</span>. Its integral will be <span class="math-container">$\sum_n a_n b/2 <+\infty$</span>, but the function does not go to zero at infinity.</p>
|
572,137 | <p>If $\psi:G \to H$ is a surjective homomorphism, then $|\{g \in G: \psi(g)=h_1\}| = |\{g \in G: \psi(g)=h_2\}|, \forall h_1,h_2 \in H.$ </p>
<p>Could anyone advise on the proof? If $\psi$ is injective, then the result follows. So, what happens if $\psi$ is not injective? By 2nd isomorphism thm, $G/Ker(\psi) \cong H.$ Is this the correct start? Thank you. </p>
| egreg | 62,967 | <p>Hint: if $\psi(g_1)=\psi(g_2)$, then $\psi(g_1g_2^{-1})=1$, so $g_1g_2^{-1}\in\ker\psi$. If $K=\ker\psi$, then $g_1K=g_2K$. What about the converse?</p>
<hr>
<p>Since DonAntonio spoiled the fun, here's the complete answer.</p>
<p>Consider $K=\ker\psi$; then we can factor $\psi$ as $\tilde\psi\circ\pi$, where $\pi\colon G\to G/K$ is the projection and $\tilde\psi$ is injective; it's also surjective, because $\psi$ is.</p>
<p>Thus, for $h\in H$, $\{g\in G:\psi(g)=h\}=\pi^{-1}(\tilde\psi^{-1}(h))=\pi^{-1}(gK)=gK$ where $g$ is any element such that $\psi(g)=h$. So the cardinality is just $|gK|=|K|$.</p>
<hr>
<p>A less “theoretical” way to look at the business is to observe that $\psi(g_1)=\psi(g_2)$ if and only if $g_1g_2^{-1}\in K=\ker\psi$, so if and only if $g_1K=g_2K$ (this is implicit in the above computation). Thus the inverse image of $h\in H$ is just a coset $gK$, where $\psi(g)=h$. Since all cosets have the same cardinality, the result follows.</p>
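The coset picture can be illustrated on a tiny concrete case; a sketch using the surjection $\psi\colon \mathbb{Z}_6 \to \mathbb{Z}_3$, $g \mapsto g \bmod 3$ (my own illustrative example, not from the answer):

```python
# Every fiber of a surjective homomorphism has |ker psi| elements, and each
# fiber is a coset of the kernel. Check this for psi: Z6 -> Z3, g -> g mod 3.
G = range(6)
psi = lambda g: g % 3

fibers = {h: [g for g in G if psi(g) == h] for h in range(3)}
print(fibers)  # {0: [0, 3], 1: [1, 4], 2: [2, 5]}

kernel = fibers[0]
assert all(len(f) == len(kernel) for f in fibers.values())

# Each fiber is the coset g + kernel for any g in the fiber:
for h, f in fibers.items():
    g = f[0]
    assert sorted((g + k) % 6 for k in kernel) == f
```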
|
2,372,698 | <p>If a function $f(x)$ is continuous on the closed interval $\left [ a,b \right ]$ then its bounded on this interval........the proof for this theorem i have is: </p>
<p>Since it's continuous on $\left [ a,b \right ]$ if we pick a random point on this interval let it be $c$ </p>
<p>$\implies$ $\forall$ $\epsilon$ $> 0$ , $\exists$ $\delta$($\epsilon$,c) $> 0$ s.t $\left | x-c \right|$ $<$ $\delta$ $\implies$ $\left | f(x)-f(c) \right|$ $<$ $\epsilon$<br>
$-$ $\epsilon$ $< f(x)-f(c) <$ $\epsilon$<br>
$f(c)-$ $\epsilon$ $<$ $f(x)$ $<$ $f(c)+$ $\epsilon$ </p>
<p>Take $M =$ $\left |f(c) \right|$ $\in$ $+$ $\mathbb{R}$<br>
$\forall$ $M$ $\in$ $\mathbb{R}$
$f(x) < M$ </p>
<p>$\therefore$ $f(x)$ is bounded</p>
<p>Is there something missing with this proof ?<br>
because i could'nt really understand it</p>
| Hellen | 464,638 | <p>That proof is only showing that $f$ is locally bounded (bounded in a neighborhood of any given point).</p>
<p>The $M$ (which we better denote as $M_c$ to emphasize that it depends on the point) should really be $|f(c)|+\epsilon$ in that argument.</p>
<p>And the argument shows that for each $c$ there is $M_c$ and an interval $|x-c|<\delta_c$ in which the bound holds.</p>
<p>Taking the corresponding interval for every point, gives us a bunch of open intervals that cover all of $[a,b]$. Since this is compact, one can choose finitely many of them that still cover $[a,b]$. Now, if we take the corresponding $M$'s bounds in each of them and take the maximum, that gives a bound that does hold in all the interval $[a,b]$. </p>
|
3,527,785 | <p>I'm reading James Anderson's <em>Automata Theory with Modern Applications. Here:</em></p>
<blockquote>
<p><a href="https://i.stack.imgur.com/sFWNh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sFWNh.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/k9zne.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k9zne.png" alt="enter image description here" /></a></p>
</blockquote>
<p>And I tried to prove the following theorem (for prefix codes).</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/S5BgC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S5BgC.png" alt="enter image description here" /></a></p>
</blockquote>
<p>I tried in the following way: Suppose <span class="math-container">$C$</span> is a prefix code which is not uniquely decipherable, that is there is a string <span class="math-container">$u \in C$</span> with two different expressions <span class="math-container">$u=ab=cd$</span>. But <span class="math-container">$u=vw$</span> and hence <span class="math-container">$vw=ab=cd$</span> where <span class="math-container">$w= \lambda$</span> and <span class="math-container">$\lambda$</span> is the empty word, therefore <span class="math-container">$v=a=c$</span> and <span class="math-container">$\lambda=b=d$</span> which contradicts your hypothesis that <span class="math-container">$u$</span> is not uniquely decipherable.</p>
<p>Is this correct? I am confused because I paired <span class="math-container">$v=a=c$</span> and <span class="math-container">$\lambda=b=d$</span> and I'm not sure if that is valid.</p>
| J.-E. Pin | 89,374 | <p>Let <span class="math-container">$1$</span> be the empty word. First of all, one needs to discard the case <span class="math-container">$C = \{1\}$</span> for the result to be correct. Indeed, if <span class="math-container">$C = \{1\}$</span>, then <span class="math-container">$C$</span> is a prefix code, but <span class="math-container">$C^*$</span> is not a free monoid.</p>
<p>Let now <span class="math-container">$C$</span> be a nonempty prefix code such that <span class="math-container">$C \not= \{1\}$</span>. Then <span class="math-container">$C$</span> does not contain <span class="math-container">$1$</span>, since <span class="math-container">$1$</span> is a prefix of every word. Let <span class="math-container">$w$</span> be a word of minimal length having two <span class="math-container">$C$</span>-factorizations
<span class="math-container">$$
w = c_1 \dotsm c_n = c'_1 \dotsm c'_m
$$</span>
Both <span class="math-container">$c_1$</span> and <span class="math-container">$c'_1$</span> are nonempty words and since <span class="math-container">$w$</span> has minimal length, <span class="math-container">$c_1 \ne c'_1$</span>. Thus either <span class="math-container">$c_1$</span> is a prefix of <span class="math-container">$c'_1$</span>, or the other way around. Contradiction.</p>
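The uniqueness established by the proof is what makes greedy left-to-right decoding well defined for a prefix code: at every position at most one codeword can match. A sketch with an illustrative prefix code $C=\{0,10,11\}$ (the code and function names are my own example, not from the answer):

```python
# Greedy decoding over a prefix code: since no codeword is a prefix of
# another, at most one codeword can start at any position of the input.
C = {"0", "10", "11"}

def decode(w: str) -> list:
    out, i = [], 0
    while i < len(w):
        matches = [c for c in C if w.startswith(c, i)]
        assert len(matches) <= 1, "not a prefix code"
        if not matches:
            raise ValueError("word is not in C*")
        out.append(matches[0])
        i += len(matches[0])
    return out

print(decode("100110"))  # ['10', '0', '11', '0']
```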
|
3,662,466 | <p>Given a sequence with the terms </p>
<p><span class="math-container">$$
a_{n}=\left\{\begin{array}{ll}
n, & \text { if } n \text { even } \\
\frac{1}{n}, & \text { if } n \text { odd }
\end{array}\right.
$$</span></p>
<p>Prove <span class="math-container">$\limsup _{n \rightarrow \infty} a_{n} = \infty$</span>.</p>
<p>I will like to have some help. Intuitively this makes sense: the upper bound of the set consists of <span class="math-container">$a_n$</span> is <span class="math-container">$\infty$</span>. Because there is infinitely many even natural numbers. I can also explain this a bit more thoroughly verbally. But is it possible to make an <span class="math-container">$\epsilon$</span>-proof? I have a good grasp of <span class="math-container">$\epsilon$</span>-proof when it comes to convergence of sequences, but I do not have any experience with limes superior.</p>
<p>Please help,</p>
<p>Kind regards</p>
| Peter Szilas | 408,605 | <p><span class="math-container">$b_n:= \sup \{a_k: k\ge n \}\ge n,$</span> since there is an even <span class="math-container">$k\in\{n,n+1\}$</span> with <span class="math-container">$a_k=k\ge n$</span>.</p>
<p><span class="math-container">$\lim \sup \{a_n\}=\lim_{n \rightarrow \infty}b_n \ge \lim_{n \rightarrow \infty} n =\infty.$</span></p>
|
300,753 | <p>Revision in response to early comments. Users of set theory need an <em>implementation</em> (in case "model" means something different) of the axioms. I would expect something like this:</p>
<blockquote>
<p>An <em>implementation</em> consists of a "collection-of-elements" $X$, and a relation
(logical pairing) $E:X\times X\to \{0,1\}$. A logical function $h:X\to\{0,1\}$
is a <em>set</em> if it is of the form $x\mapsto E[x,a]$ for some $a\in X$. Sets are
required to satisfy the following axioms: ....</p>
</blockquote>
<p>The background "collection-of-elements" needs some properties to even get started. For instance "of the form $x\mapsto E[x,a]$" needs first-order quantification. Mathematical standards of precision seem to require <em>some</em> discussion, but so far I haven't seen anything like this. The first version of this question got answers like $X$ is "the domain of discourse" (philosophy??), "everything" (naive set theory?) "a set" (circular), and "type theory" (a postponement, not a solution). Is this a missing definition? Taking it seriously seems to give a rather fruitful perspective. </p>
| Mikhail Katz | 28,128 | <p>Thinking about the distinction between language and metalanguage may be helpful here. When one describes set theory as possessing a single binary relation denoted $\in$, one is operating at the level of metalanguage. Specifying axioms satisfied by $\in$ is at the level of the language. At this stage sets could be beer mugs as Hilbert famously said in a slightly different context. </p>
<p>Next, one assumes the existence of a model of the language, and interprets the meaning of the language, or more precisely of the theory expressed in the language, in that model (no more beer mugs). </p>
<p>In my experience, traditionally trained mathematicians (who have never taken a logic course) have great difficulty with the language/metalanguage and theory/model distinctions. This is because some of them tend to think of mathematics as "one great monolithic thing" and introducing such dichotomies goes counter to that philosophy. I don't think Paul Halmos ever overcame his suspicious attitude toward the standard dichotomies in logic; for details see <a href="http://dx.doi.org/10.1007/s11787-016-0153-0" rel="nofollow noreferrer">this 2016 publication in <em>Logica Universalis</em></a>.</p>
<p>As far as the OP's comment to the effect that "Philosophical analysis of the question is unhelpful" I would agree in the sense that there is a lot of unhelpful philosophy of mathematics out there; a sterling example is the work of Hide Ishiguro on Leibniz which manages to combine bad mathematics, bad history, and bad philosophy in a single chapter 5; see <a href="http://dx.doi.org/10.1086/685645" rel="nofollow noreferrer">this 2016 publication in <em>History of Philosophy of Science</em></a>. On the other hand, the OP's problem with alleged "circularity" is based precisely on certain philosophical <em>partis pris</em> as I tried to suggest above.</p>
<p>Note 1. In response to the new version of the question that shifts the emphasis somewhat to functions and relations, note that it may be helpful to consult the article</p>
<blockquote>
<p>Leinster, Tom. Rethinking set theory. Amer. Math. Monthly 121 (2014), no. 5, 403–415</p>
</blockquote>
<p>which seeks to present an accessible introduction to a category-theoretic approach to the foundations focusing on functions (instead of points and sets).</p>
|
300,753 | <p>Revision in response to early comments. Users of set theory need an <em>implementation</em> (in case "model" means something different) of the axioms. I would expect something like this:</p>
<blockquote>
<p>An <em>implementation</em> consists of a "collection-of-elements" $X$, and a relation
(logical pairing) $E:X\times X\to \{0,1\}$. A logical function $h:X\to\{0,1\}$
is a <em>set</em> if it is of the form $x\mapsto E[x,a]$ for some $a\in X$. Sets are
required to satisfy the following axioms: ....</p>
</blockquote>
<p>The background "collection-of-elements" needs some properties to even get started. For instance "of the form $x\mapsto E[x,a]$" needs first-order quantification. Mathematical standards of precision seem to require <em>some</em> discussion, but so far I haven't seen anything like this. The first version of this question got answers like $X$ is "the domain of discourse" (philosophy??), "everything" (naive set theory?) "a set" (circular), and "type theory" (a postponement, not a solution). Is this a missing definition? Taking it seriously seems to give a rather fruitful perspective. </p>
| Greg S | 124,813 | <p>The relevant quantifiers and relations in mathematical axioms should be understood as predicate logic. In the case of ordinary first-order predicate logic, the membership operator $\in$ is defined as a binary relation over the <strong>universe</strong>, or <em>domain of discourse</em>, sometimes denoted $\Omega$.</p>
<p>Ordinarily, you can consider $\in$ to be a function and $\Omega$ to be a set of possible elements*. Yet as you've noticed, if you try to use mathematics founded in set theory (e.g. set-based domains and functions) to interpret set axioms, you'll introduce a form of circularity**.</p>
<p>One solution is to consider logic to be valid independent of mathematics. In the case of ZF or other systems, the axiomatization is first-order predicate logic. So long as first-order logic works, you don't need a mathematical interpretation.</p>
<p>Alternatively, you can consider sets as <strong>primitive</strong> and foundational to mathematics. ZFC is an example of how to interpret sets as <strong>primitive notions</strong> equivalent to objects in a formal logic and suitable as part of a foundation of mathematics. In this case, set axioms could be a description of <em>non-foundational</em> sets which are defined in terms of the <strong>primitive</strong> <em>foundational</em> sets used in definitions.</p>
<hr>
<p>*Usually, objects in the domain of discourse could be anything in ordinary first-order logic, or for set membership, anything of which you could ask "is this a member of that set?". But in the context of mathematics it could be limited to defined mathematical objects, or in the context of pure set theory reduce to only sets.</p>
<p>**Actually, circularity isn't necessarily a problem as long as the axioms are <em>satisfiable</em>.</p>
|
300,753 | <p>Revision in response to early comments. Users of set theory need an <em>implementation</em> (in case "model" means something different) of the axioms. I would expect something like this:</p>
<blockquote>
<p>An <em>implementation</em> consists of a "collection-of-elements" $X$, and a relation
(logical pairing) $E:X\times X\to \{0,1\}$. A logical function $h:X\to\{0,1\}$
is a <em>set</em> if it is of the form $x\mapsto E[x,a]$ for some $a\in X$. Sets are
required to satisfy the following axioms: ....</p>
</blockquote>
<p>The background "collection-of-elements" needs some properties to even get started. For instance "of the form $x\mapsto E[x,a]$" needs first-order quantification. Mathematical standards of precision seem to require <em>some</em> discussion, but so far I haven't seen anything like this. The first version of this question got answers like $X$ is "the domain of discourse" (philosophy??), "everything" (naive set theory?) "a set" (circular), and "type theory" (a postponement, not a solution). Is this a missing definition? Taking it seriously seems to give a rather fruitful perspective. </p>
| Zuhair Al-Johar | 95,347 | <p>This is in response to the new edited version of the question.</p>
<p>You are using "<a href="https://en.wikipedia.org/wiki/Indicator_function" rel="nofollow noreferrer">indicator</a>" functions on $X$, but with respect to membership in $X$ instead of subset-hood of $X$ [although one had better take $X$ to be transitive, so that every member of $X$ is a subset of $X$]. </p>
<p>This is more complex, your $X$ is what we usually think of as a domain of a model, your relation $E$ is the membership relation of the model which is defined on the domain, and your $\in$ is the element-hood in the domain, possibly this approach can work, but what's the point of it really. I mean why not take the simpler way of saying that we have a non empty collection $X$ and stipulate ordered pairing as a primitive, axiomatize $\forall a,b \in X (\langle a,b \rangle \in X)$ and of course axiomatize the basic property of ordered pairs, then let $E$ be a non empty collection of ordered pairs in $X$, then Define the atomic formula $x \ E \ y$</p>
<p>$$ x \ E \ y \iff \langle x,y \rangle \in E$$</p>
<p>Then write the axioms in terms of atomic formulas using $E$, with all their quantifiers bounded by $X$. Of course 'sets' are defined simply as 'elements of $X$' [i.e., $a$ is a set iff $ a \in X$]</p>
<p>Those axioms would serve to lay down the basis for characterization of $E$.</p>
<p>It needs to be noticed that the customary $\in$ spoken about in ZFC would be the relation $E$ here, since the axioms will speak about $E$, I mean "Extensionality, pairing, union,..." all would be characterizing the relation $E$</p>
<p>To me that's simpler than taking indicator functions on the whole domain relative to elements in that domain; those functions would lie outside of the domain itself, so how, for example, would you quantify over those functions (which you call sets)? If you quantify over them, then you enter the arena of second-order logic. If you won't quantify over them, then you may use your constant logical pairing function $E$, and possibly another constant one-place function symbol $h(a): X \to \{0,1\}$ for $a \in X$; then present the axioms quantified over elements of $X$, writing formulas in terms of $h$ and $E$. Not that easy, but it can be done, I think. You need to have ordered pairs $\langle,\rangle$ as primitives, symbols $0,1$ as constants, and $\in$ and preferably $=$ as primitives. It can be done, I suppose, but I don't know what the point of this is; it appears more complex to me.</p>
|
2,763,974 | <blockquote>
<p>Find $\displaystyle \lim_{(x,y)\to(0,0)} x^2\sin(\frac{1}{xy}) $ if exists, and find $\displaystyle\lim_{x\to 0}(\lim_{y\to 0} x^2\sin(\frac{1}{xy}) ), \displaystyle\lim_{y\to 0}(\lim_{x\to 0} x^2\sin(\frac{1}{xy}) )$ if they exist.</p>
</blockquote>
<p>Hey everyone. I've tried using the squeeze theorem and found $0 \le |x^2\sin(\frac{1}{xy})| \le |x^2|\cdot 1 \xrightarrow{x\to0} 0 $ and so the "double" limit exists and equals zero.
Now, I know $\lim_{x\to 0}\sin(\frac{1}{ax})$ diverges, so both $\lim_{y\to 0}(\lim_{x\to 0} x^2\sin(\frac{1}{xy}) )$ and $\lim_{x\to 0}(\lim_{y\to 0} x^2\sin(\frac{1}{xy}) )$ do not exist(?) </p>
<p>I don't think I understand multi-variable limits, I would love your help on this basic one so I can understand better. Thanks in advance :) </p>
| user | 505,767 | <p>You only need simply to note that</p>
<p>$$0\le \left|x^2\sin(\frac{1}{xy})\right|\le x^2\to 0$$</p>
<p>then conclude by squeeze theorem.</p>
|
3,527,004 | <p>As stated in the title, I want <span class="math-container">$f(x)=\frac{1}{x^2}$</span> to be expanded as a series with powers of <span class="math-container">$(x+2)$</span>. </p>
<p>Let <span class="math-container">$u=x+2$</span>. Then <span class="math-container">$f(x)=\frac{1}{x^2}=\frac{1}{(u-2)^2}$</span></p>
<p>Note that <span class="math-container">$$\int \frac{1}{(u-2)^2}du=\int (u-2)^{-2}du=-\frac{1}{u-2} + C$$</span></p>
<p>Therefore, <span class="math-container">$\frac{d}{du} (-\frac{1}{u-2})= \frac{1}{x^2}$</span> and</p>
<p><span class="math-container">$$\frac{d}{du} (-\frac{1}{u-2})= \frac{d}{du} (-\frac{1}{-2(1-\frac{u}{2})})=\frac{d}{du}(\frac{1}{2} \frac{1}{1-\frac{u}{2}})=\frac{d}{du} \Bigg( \frac{1}{2} \sum_{n=0}^\infty \bigg(\frac{u}{2}\bigg)^n\Bigg)$$</span></p>
<p><span class="math-container">$$= \frac{d}{du} \Bigg(\sum_{n=0}^\infty \frac{u^n}{2^{n+1}}\Bigg)= \frac{d}{dx} \Bigg(\sum_{n=0}^\infty \frac{(x+2)^n}{2^{n+1}}\Bigg)= \sum_{n=0}^\infty \frac{d}{dx} \bigg(\frac{(x+2)^n}{2^{n+1}}\bigg)=$$</span></p>
<p><span class="math-container">$$\sum_{n=0}^\infty \frac{n}{2^{n+1}} (x+2)^{n-1}$$</span></p>
<p>From this we can conclude that </p>
<p><span class="math-container">$$f(x)=\frac{1}{x^2}=\sum_{n=0}^\infty \frac{n}{2^{n+1}} (x+2)^{n-1}$$</span></p>
<p>Is this solution correct?</p>
| giobrach | 332,594 | <p>Why not a Taylor series expansion at <span class="math-container">$x=-2$</span>? That would be
<span class="math-container">$$f(x) = \sum_{n\geq 0} \frac{f^{(n)}(-2)}{n!}(x+2)^n, $$</span>
with radius of convergence of <span class="math-container">$2$</span>. You may calculate the <span class="math-container">$n$</span>-th derivative of <span class="math-container">$f(x)=1/x^2$</span> to find
<span class="math-container">$$\frac{d^nf}{dx^n}(x)= \frac{(-2)(-3) \cdots (-2-n+1)}{x^{n+2}} = (-1)^n \frac{(n+1)!}{x^{n+2}} $$</span>
and so
<span class="math-container">$$\frac{1}{x^2} = \sum_{n\geq 0} \frac{(n+1)}{2^{n+2}}(x+2)^n, \qquad x\in(-4,0) .$$</span></p>
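The final expansion can be checked numerically at a sample point inside the interval of convergence $(-4,0)$; the choice $x=-1.5$ (so $x+2=0.5$) below is arbitrary:

```python
# Check 1/x^2 = sum_{n>=0} (n+1)/2^(n+2) * (x+2)^n numerically at x = -1.5.
x = -1.5
u = x + 2
partial = sum((n + 1) / 2 ** (n + 2) * u ** n for n in range(200))
assert abs(partial - 1 / x**2) < 1e-9
print(partial, 1 / x**2)  # both ≈ 0.4444444444444444
```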
|
2,702,726 | <p>Find the absolute minimum and maximum values of,</p>
<p>$$f(x) = 2 \sin(x) + \cos^2 (x) \text{ on } [0, 2\pi]$$</p>
<p>What I did so far is</p>
<p>$$f'(x) = 2\cos(x) -2 \cos(x) \sin(x)$$</p>
<p>Could someone please help me get started?</p>
| Siong Thye Goh | 306,553 | <p>Method $1$:</p>
<p>Continuing from what you have so far, factor the derivative to get</p>
<p>$$\cos(x)(1-\sin(x))=0$$</p>
<p>Find the stationary points, evaluate the function at the stationary points as well as at the boundaries, and conclude which values are the minimum and maximum.</p>
<p>Method $2$:</p>
<p>\begin{align}
f(x)&=2\sin(x)+\cos^2(x)\\
&=2 \sin(x)+1-\sin^2(x)\\
&=-\sin^2(x)+2\sin(x)+1 \\
&=-(\sin(x)-1)^2+2
\end{align}</p>
<p>$$-(-1-1)^2+2\le-(\sin(x)-1)^2+2 \le -(1-1)^2+2$$</p>
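Both methods predict a maximum of $2$ (at $x=\pi/2$, where $\sin x=1$) and a minimum of $-2$ (at $x=3\pi/2$, where $\sin x=-1$). A sketch checking this on a fine grid over $[0,2\pi]$ (the grid size is an arbitrary choice):

```python
import math

# f(x) = 2 sin x + cos^2 x = 2 - (sin x - 1)^2; expect max 2, min -2 on [0, 2pi].
f = lambda x: 2 * math.sin(x) + math.cos(x) ** 2

xs = [2 * math.pi * k / 100_000 for k in range(100_001)]
vals = [f(x) for x in xs]
assert abs(max(vals) - 2) < 1e-6
assert abs(min(vals) + 2) < 1e-6
print(round(max(vals), 6), round(min(vals), 6))  # 2.0 -2.0
```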
|
226,265 | <blockquote>
<p>Suppose <span class="math-container">$(X,d)$</span> is a metric space. Does every open cover of <span class="math-container">$X$</span> have a minimal subcover with respect to inclusion?</p>
</blockquote>
<p>In other words:</p>
<blockquote>
<p>If <span class="math-container">$\mathcal{O}$</span> is an open cover of a metric space <span class="math-container">$X$</span>, then does there exist an open cover <span class="math-container">$\mathcal{U} \subseteq \mathcal{O}$</span> such that, if <span class="math-container">$\mathcal{U}' \subsetneq \mathcal{U}$</span>, then <span class="math-container">$\mathcal{U}'$</span> does <strong>not</strong> cover <span class="math-container">$X$</span> ?</p>
</blockquote>
| Brian M. Scott | 12,042 | <p>As other answers have pointed out, there are easy counterexamples. What <strong>is</strong> true is that if $\mathscr{U}$ is an open cover of a metric space $X$, then $\mathscr{U}$ has an irreducible open <em>refinement</em>: that is, there is an open cover $\mathscr{R}$ of $X$ such that </p>
<ol>
<li>for each $R\in\mathscr{R}$ there is a $U\in\mathscr{U}$ such that $R\subseteq U$, and </li>
<li>for each $R\in\mathscr{R}$, the family $\mathscr{R}\setminus\{R\}$ no longer covers $X$.</li>
</ol>
<p>This is a consequence of two well-known theorems. First, every metric space is paracompact, so every open cover of a metric space has a locally finite open refinement. Secondly, every point finite cover of a set (and <em>a fortiori</em> every locally finite cover) has an irreducible subcover. </p>
|
1,349 | <p>In this question here the OP asks for hints for a problem rather than a full proof.</p>
<p><a href="https://math.stackexchange.com/questions/14477">Proof of subfactorial formula $!n = n!- \sum_{i=1}^{n} {{n} \choose {i}} \quad!(n-i)$</a></p>
<p>Now, while I would like to respect that request, I also feel that questions on this site are not intended just for the OP's benefit. This leads me to the question...</p>
<blockquote>
<p><strong>Question</strong>: Is there any way to use some form of <em>spoiler space</em>, so that it's possible to post the answer for the other readers' benefit, but at the same time hiding it from those who do not want it?</p>
</blockquote>
<p>My attempted "look at the previous version of this post" turned out a disaster. I've seen people use rot13, but that seems like a lot of fuss (and clashes with the mathematics).</p>
<p>On some sites they use white text on white background for spoily material, which, when you select with the mouse, reveals the text. Is that possible?</p>
<hr />
<p>Testing:</p>
<blockquote>
<p>! Spoiler Space</p>
<p>! More spoiler space</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> Spoiler Space
More spoiler space</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> What happens if I write a really long sentence. Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> Maybe it'll involve some maths like <span class="math-container">$E=mc^2$</span> or exclamation marks <span class="math-container">$n!=n \times (n-1)!$</span>.</p>
</blockquote>
| J. M. ain't a mathematician | 498 | <p>Some more testing:</p>
<p>1.</p>
<blockquote class="spoiler">
<p> <em>italicized text goes here</em></p>
</blockquote>
<p>2.</p>
<blockquote class="spoiler">
<p> <strong>bold text goes here</strong></p>
</blockquote>
<p>3.</p>
<blockquote class="spoiler">
<p> Let's try $\LaTeX$ <em>italicized text</em> <strong>bold text</strong> and <a href="https://math.stackexchange.com/">hyperlinks</a>.</p>
</blockquote>
<p>4.</p>
<blockquote class="spoiler">
<p> $$\color{red}{\text{Does}}\;\color{green}{\text{colored}}\;\color{blue}\LaTeX\;\color{yellow}{\text{work?}}$$ But <em>black</em> <strong>text</strong> and $\LaTeX$ should still be obscured.</p>
</blockquote>
<p>I'll edit this when I think of more stress tests.</p>
|
1,349 | <p>In this question here the OP asks for hints for a problem rather than a full proof.</p>
<p><a href="https://math.stackexchange.com/questions/14477">Proof of subfactorial formula $!n = n!- \sum_{i=1}^{n} {{n} \choose {i}} \quad!(n-i)$</a></p>
<p>Now, while I would like to respect that request, I also feel that questions on this site are not intended just for the OP's benefit. This leads me to the question...</p>
<blockquote>
<p><strong>Question</strong>: Is there any way to use some form of <em>spoiler space</em>, so that it's possible to post the answer for the other readers' benefit, but at the same time hiding it from those who do not want it?</p>
</blockquote>
<p>My attempted "look at the previous version of this post" turned out a disaster. I've seen people use rot13, but that seems like a lot of fuss (and clashes with the mathematics).</p>
<p>On some sites they use white text on white background for spoily material, which, when you select with the mouse, reveals the text. Is that possible?</p>
<hr />
<p>Testing:</p>
<blockquote>
<p>! Spoiler Space</p>
<p>! More spoiler space</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> Spoiler Space
More spoiler space</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> What happens if I write a really long sentence. Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test</p>
</blockquote>
<hr />
<blockquote class="spoiler">
<p> Maybe it'll involve some maths like <span class="math-container">$E=mc^2$</span> or exclamation marks <span class="math-container">$n!=n \times (n-1)!$</span>.</p>
</blockquote>
| Mariano Suárez-Álvarez | 274 | <p>I would very much prefer that we did <em>not</em> use this feature. It is simply too distracting. </p>
<p>I am really surprised that this has been deemed acceptable UI!</p>
|
3,417,227 | <p><strong>Problem</strong>:</p>
<p>Let <span class="math-container">$f : \Bbb R \to \Bbb R$</span> be a differentiable function such that <span class="math-container">$f(0) = 0$</span>; compute </p>
<p><span class="math-container">$$\lim_{r\to 0^{+}} \iint_{x^2 + y^2 \leq r^2} {3 \over 2\pi r^3} f(\sqrt{x^2+y^2}) \,dx\,dy$$</span></p>
<p><strong>Progress</strong>:</p>
<p>I figure the solution to this is likely 0 because the domain is shrinking to a point, so intuitively there cannot be any resulting volume. I got as far as converting the problem into polar coordinates, which makes it look much cleaner, but I was unsure how to integrate the product <span class="math-container">$f(\rho)\rho \,d\rho$</span>. Integration by parts did not give a useful result. Could someone please demonstrate how to obtain that integral, or present another solution?</p>
| ling | 670,949 | <p>It is easy to get an answer by L'Hôpital's rule.
<span class="math-container">\begin{align}
&\quad \lim_{r\to0^+}\iint_{x^2+y^2\leq r^2} \frac{3}{2\pi r^3} f\left(\sqrt{x^2+y^2}\right)\, dxdy \\&=\lim_{r\to0^+}\int_0^r\int_{\partial B(0,\rho)} \frac{3}{2\pi r^3} f\left(\rho\right)\,dS d\rho\\&= \lim_{r\to0^+}\int_0^r \frac{3\rho f(\rho)}{r^3}\,d\rho\\&=\lim_{r\to0^+}\frac{\int_0^r 3\rho f(\rho)\,d\rho}{r^3}\\&=\lim_{r\to0^+}\frac{3rf(r)}{3r^2}\\&=\lim_{r\to0^+}\frac{f(r)}{r}\\&=f’(0).
\end{align}</span>
The last equality follows from the definition of the derivative.</p>
|
540,135 | <p>$\newcommand{\lcm}{\operatorname{lcm}}$Is $\lcm(a,b,c)=\lcm(\lcm(a,b),c)$?</p>
<p>I managed to show thus far, that $a,b,c\mid\lcm(\lcm(a,b),c)$, yet I'm unable to prove, that $\lcm(\lcm(a,b),c)$ is the lowest such number... </p>
| Carsten S | 90,962 | <p>What remains is to show that $\operatorname{lcm}(\operatorname{lcm}(a,b),c)\mid\operatorname{lcm}(a,b,c)$. In other words, you want to show that the least (in fact, any) common multiple of $a$, $b$, and $c$ is a multiple of the least common multiple of $\operatorname{lcm}(a,b)$ and $c$. This follows from the fact that a common multiple of two numbers is a multiple of their least common multiple.</p>
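<p>As a quick sanity check of the identity (not part of the proof), one can compare the nested lcm against a brute-force least common multiple for small inputs. A minimal Python sketch; the helper <code>lcm3_brute</code> is just for this illustration:</p>

```python
from math import gcd

def lcm(a, b):
    # standard two-argument lcm via gcd
    return a * b // gcd(a, b)

def lcm3_brute(a, b, c):
    # smallest positive common multiple of a, b, c by direct search
    m = max(a, b, c)
    while m % a or m % b or m % c:
        m += 1
    return m

assert all(lcm(lcm(a, b), c) == lcm3_brute(a, b, c)
           for a in range(1, 10) for b in range(1, 10) for c in range(1, 10))
```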
|
74,108 | <p>Background: I was trying to convert a MATLAB code (fluid simulation, SPH method) into a <em>Mathematica</em> one, but the speed difference is huge.</p>
<p>MATLAB code:</p>
<pre class="lang-matlab prettyprint-override"><code>function s = initializeDensity2(s)
nTotal = s.params.nTotal; %# particles
h = s.params.h;
h2Sq = (2*h)^2;
for ind1 = 1:nTotal %loop over all receiving particles; one at a time
%particle i is the receiving particle; the host particle
%particle j is the sending particle
xi = s.particles.pos(ind1,1);
yi = s.particles.pos(ind1,2);
xj = s.particles.pos(:,1); %all others
yj = s.particles.pos(:,2); %all others
mj = s.particles.mass; %all others
rSq = (xi-xj).^2+(yi-yj).^2;
%Boolean mask returns values where r^2 < (2h)^2
mask1 = rSq<h2Sq;
rSq = rSq(mask1);
mTemp = mj(mask1);
densityTemp = mTemp.*liuQuartic(sqrt(rSq),h);
s.particles.density(ind1) = sum(densityTemp);
end
</code></pre>
<p>And the corresponding <em>Mathematica</em> code:</p>
<pre><code>Needs["HierarchicalClustering`"]
computeDistance[pos_] :=
DistanceMatrix[pos, DistanceFunction -> EuclideanDistance];
initializeDensity[distance_] :=
uniMass*Total/@(liuQuartic[#,h]&/@Pick[distance,Boole[Map[#<2h&,distance,{2}]],1])
initializeDensity[computeDistance[totalPos]]
</code></pre>
<p>The data are coordinates of 1119 points, in the form of <code>{{x1,y1},{x2,y2}...}</code>, stored in <code>s.particles.pos</code> and <code>totalPos</code> respectively. And <code>liuQuartic</code> is just a polynomial function. The complete MATLAB code is much longer than this, but it can run about 160 complete time steps in 60 seconds, whereas the <em>Mathematica</em> code listed above alone takes about 3 seconds to run. I don't know why there is such a huge speed difference. Any thoughts are appreciated. Thanks.</p>
<p>Edit:</p>
<p>The <code>liuQuartic</code> is defined as</p>
<pre><code>liuQuartic[r_,h_]:=15/(7Pi*h^2) (2/3-(9r^2)/(8h^2)+(19r^3)/(24h^3)-(5r^4)/(32h^4))
</code></pre>
<p>and example data can be obtained by</p>
<pre><code>h=2*10^-3;conWidth=0.4;conHeight=0.16;totalStep=6000;uniDensity=1000;uniMass=1000*Pi*h^2;refDensity=1400;gamma=7;vf=0.07;eta=0.01;cs=vf/eta;B=refDensity*cs^2/gamma;gravity=-9.8;mu=0.02;beta=0.15;dt=0.00005;epsilon=0.5;
iniFreePts=Block[{},Table[{-conWidth/3+i,1.95h+j},{i,10h,conWidth/3-2h,1.5h},{j,0,0.05,1.5h}]//Flatten[#,1]&];
leftWallIniPts=Block[{x,y},y=Table[i,{i,conHeight/2-0.5h,0.2h,-0.5h}];x=ConstantArray[-conWidth/3,Length[y]];Thread[List[x,y]]];
botWallIniPts=Block[{x,y},x=Table[i,{i,-conWidth/3,-0.4h,h}];y=ConstantArray[0,Length[x]];Thread[List[x,y]]];
incWallIniPts=Block[{x,y},Table[{i,0.2125i},{i,0,(2conWidth)/3,h}]];
rightWallIniPts=Block[{x,y},y=Table[i,{i,Last[incWallIniPts][[2]]+h,conHeight/2,h}];x=ConstantArray[Last[incWallIniPts][[1]],Length[y]];Thread[List[x,y]]];
topWallIniPts=Block[{x,y},x=Table[i,{i,-conWidth/3+0.7h,(2conWidth)/3-0.7h,h}];y=ConstantArray[conHeight/2,Length[x]];Thread[List[x,y]]];
freePos = iniFreePts;
wallPos = leftWallIniPts~Join~botWallIniPts~Join~incWallIniPts~Join~rightWallIniPts~Join~topWallIniPts;
totalPos = freePos~Join~wallPos;
</code></pre>
<p>where <code>conWidth=0.4</code>, <code>conHeight=0.16</code> and <code>h=0.002</code></p>
| Henrik Schumacher | 38,178 | <p>Here is a slightly improved version of <a href="https://mathematica.stackexchange.com/users/1871">xzczd</a>'s code (I call the function <code>cinitializeDensity</code> in the following) that does not require computing the <code>DistanceMatrix</code> beforehand. Moreover, I tried to suppress some type casts within the <code>CompiledFunction</code> and to exploit parallelization.</p>
<pre><code>Block[{r, h},
cinitializeDensity2 =
With[{code = N[liuQuartic[Sqrt[r], h]], m = N[uniMass], g = Compile`GetElement},
Compile[{{x, _Real, 1}, {y, _Real, 2}, {h, _Real}},
Block[{r, sum = 0., x1, x2},
x1 = g[x, 1];
x2 = g[x, 2];
Do[
r = (x1 - g[y, j, 1])^2 + (x2 - g[y, j, 2])^2;
sum += If[r < 4. h^2, code, 0.],
{j, 1, Length[y]}];
m sum
],
CompilationTarget -> "C",
Parallelization -> True,
RuntimeOptions -> "Speed",
RuntimeAttributes -> {Listable}
]]
];
</code></pre>
<p>Along with packing, this leads to further speedup:</p>
<pre><code>ptotalPos = Developer`ToPackedArray[N[totalPos]];
a = initializeDensity[computeDistance[ptotalPos]]; // AbsoluteTiming // First
b = cinitializeDensity[computeDistance[ptotalPos], h]; // AbsoluteTiming // First
c = cinitializeDensity2[ptotalPos, ptotalPos, h]; // AbsoluteTiming // First
a == b == c
</code></pre>
<blockquote>
<p>1.28708</p>
<p>0.006039</p>
<p>0.000844</p>
<p>True</p>
</blockquote>
<p>For even longer lists, it might be worthwhile to delegate the distance checks to <code>Nearest</code>:</p>
<pre><code>Block[{r, h},
cinitializeDensity3 =
With[{code = N[liuQuartic[Sqrt[r], h]], m = N[uniMass],
g = Compile`GetElement},
Compile[{{x, _Real, 1}, {y, _Real, 2}, {h, _Real}},
Block[{r, sum = 0., x1, x2},
x1 = g[x, 1];
x2 = g[x, 2];
Do[
r = (x1 - g[y, j, 1])^2 + (x2 - g[y, j, 2])^2;
sum += code,
{j, 1, Length[y]}];
m sum
],
CompilationTarget -> "C",
Parallelization -> True,
RuntimeOptions -> "Speed",
RuntimeAttributes -> {Listable}
]]
];
d = cinitializeDensity3[ptotalPos, Nearest[ptotalPos, ptotalPos, {∞, 2 h}], h]; // AbsoluteTiming // First
a == d
</code></pre>
<blockquote>
<p>0.00126</p>
<p>True</p>
</blockquote>
|
3,264,333 | <p>I am working on my scholarship exam practice and not sure how to begin. Please assume math knowledge at high school or pre-university level.</p>
<blockquote>
<p>Let <span class="math-container">$a$</span> be a real constant. If the constant term of <span class="math-container">$(x^3 + \frac{a}{x^2})^5$</span> is equal to <span class="math-container">$-270$</span>, then <span class="math-container">$a=$</span>......</p>
</blockquote>
<p>Could you please give a hint for this question? The answer provided is <span class="math-container">$-3$</span>.</p>
| user10354138 | 592,552 | <p><strong>Hint</strong>: binomial expand <span class="math-container">$(x^3+ax^{-2})^5$</span>.</p>
|
3,264,333 | <p>I am working on my scholarship exam practice and not sure how to begin. Please assume math knowledge at high school or pre-university level.</p>
<blockquote>
<p>Let <span class="math-container">$a$</span> be a real constant. If the constant term of <span class="math-container">$(x^3 + \frac{a}{x^2})^5$</span> is equal to <span class="math-container">$-270$</span>, then <span class="math-container">$a=$</span>......</p>
</blockquote>
<p>Could you please give a hint for this question? The answer provided is <span class="math-container">$-3$</span>.</p>
| lab bhattacharjee | 33,337 | <p>Hint:</p>
<p>The general term <span class="math-container">$T_{r+1}$</span> is <span class="math-container">$$\binom5r(x^3)^{5-r}\left(\dfrac a{x^2}\right)^r=\binom5ra^rx^{3\cdot5-3r-2r}$$</span></p>
<p>For the constant term, the exponent of <span class="math-container">$x$</span> will be <span class="math-container">$?$</span></p>
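<p>As a quick numerical check of the answer key's value <span class="math-container">$a=-3$</span> (a minimal Python sketch, not part of the hint), using the standard-library binomial <code>math.comb</code>:</p>

```python
from math import comb

a = -3
# coefficient of x^0: term r contributes exponent 3*(5-r) - 2r = 15 - 5r
constant_term = sum(comb(5, r) * a ** r for r in range(6) if 15 - 5 * r == 0)
assert constant_term == -270  # only r = 3 contributes: comb(5,3) * (-3)**3
```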
|
19,148 | <p>I always find the strong law of large numbers hard to motivate to students, especially non-mathematicians. The weak law (giving convergence in probability) is so much easier to prove; why is it worth so much trouble to upgrade the conclusion to almost sure convergence?</p>
<p>I think it comes down to not having a good sense of why, practically speaking, a.s. convergence is better than convergence i.p. Sure, I can prove that one implies the other and not conversely, but the counterexamples feel contrived. I understand the advantages of a.s. convergence on a technical level, but not on the level of everyday life.</p>
<p>So my question: how would you explain to, say, an engineer, the significance of having a.s. convergence as opposed to i.p.? Is there a "real-life" example of bad behavior that we're ruling out?</p>
| Steve Huntsman | 1,847 | <p>You might use the <a href="http://en.wikipedia.org/wiki/Kolmogorov%27s_three-series_theorem" rel="nofollow">three-series theorem</a> to elaborate on a.s. convergence. This approach would also have the advantage of <a href="http://en.wikipedia.org/wiki/Kolmogorov%27s_three-series_theorem#Other_applications" rel="nofollow">working towards the SLLN</a>.</p>
|
2,227,709 | <p>Find the solutions of $x^2+2x+3 \equiv0 \pmod{198}$.</p>
<p>I have no idea how to approach this problem. I have a small hint that we should consider $x^2+2x+3 \equiv0 \pmod{12}$.</p>
| P Vanchinathan | 28,915 | <p>Substitute $y=x+1$. The equation is transformed to $y^2+2\equiv 0 \pmod{198}$, i.e. $y^2 \equiv 196 \pmod{198}$. As $196=14^2$, we have $y=14$, so $x=y-1=13$ is a solution.</p>
|
2,227,709 | <p>Find the solutions of $x^2+2x+3 \equiv0 \pmod{198}$.</p>
<p>I have no idea how to approach this problem. I have a small hint that we should consider $x^2+2x+3 \equiv0 \pmod{12}$.</p>
| lioness99a | 401,264 | <p>We can rewrite the equation as \begin{align}x^2+2x+3&\equiv0\mod198\\
(x+1)^2-1+3&\equiv0\mod 198\\
(x+1)^2&\equiv-2\mod198\\
(x+1)^2&\equiv196\mod198\end{align}</p>
<p>We let $y=x+1$ and we now need to solve \begin{align}y^2&\equiv-2\mod 198\\
&\equiv196\mod198\end{align}</p>
<p>We can see that $196=14^2$ and so we can say that $y\equiv14$ is a solution. Therefore, we can also say that $y\equiv-14\mod198\equiv184$ is also a solution</p>
<p>So, we have found two values for $x$ so far: $x\in\{13,183\}$</p>
<p>We can use <a href="https://www.wolframalpha.com/input/?i=y%5E2%3D196mod+198" rel="nofollow noreferrer">WolframAlpha</a> to find that there are two more solutions for $y$: $y\equiv58$ and $y\equiv-58\mod 198\equiv140$ however I'm not entirely sure how you compute these without using trial and error</p>
<p>This gives us the final two values for $x$: $x\in\{13,57,139,183\}$</p>
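<p>The four residues can be confirmed, and the "trial and error" made explicit, by a one-line search over all residues mod 198 (a quick Python check):</p>

```python
# brute-force search over the full residue system mod 198
solutions = [x for x in range(198) if (x * x + 2 * x + 3) % 198 == 0]
assert solutions == [13, 57, 139, 183]
```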
|
4,149,355 | <p>Are there any stochastic processes <span class="math-container">$(X_t)_{t \in \mathbb{R}^d}$</span> such that</p>
<ol>
<li>almost surely paths are continuous but nowhere differentiable and</li>
<li>sampling of <span class="math-container">$n$</span> points <span class="math-container">$X_{t_1},\dots,X_{t_n}$</span> on a path can be done in <span class="math-container">$O(n)$</span> time?</li>
</ol>
<p>Most sampling techniques I have in mind require at least <span class="math-container">$O(n \log n)$</span> time.</p>
<p>On the other hand, all linear-time samplers I know of create processes for which 1) fails, like Perlin noise, white noise, pink noise, etc.</p>
<p>Is it even theoretically possible? So this might be a very fundamental question.</p>
| OwlToday | 443,828 | <p>One could also consider a geometric perspective. The tangent line to the circle <span class="math-container">$x^2+y^2=1$</span> at the point <span class="math-container">$(\frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2})$</span> is the line <span class="math-container">$x+y = \sqrt{2}$</span>.</p>
<p><a href="https://i.stack.imgur.com/Meos1m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Meos1m.png" alt="enter image description here" /></a></p>
<p>For a given point <span class="math-container">$P = (x_0,y_0)$</span> on the circle, if <span class="math-container">$c$</span> is the value of <span class="math-container">$x_0+y_0$</span>, then the line <span class="math-container">$x+y = c$</span> intersects the circle in <span class="math-container">$P$</span>. This line is parallel to the line <span class="math-container">$x+y = \sqrt{2}$</span>, and <span class="math-container">$c$</span> will be less than or equal to <span class="math-container">$\sqrt{2}$</span>.</p>
|
4,149,355 | <p>Are there any stochastic processes <span class="math-container">$(X_t)_{t \in \mathbb{R}^d}$</span> such that</p>
<ol>
<li>almost surely paths are continuous but nowhere differentiable and</li>
<li>sampling of <span class="math-container">$n$</span> points <span class="math-container">$X_{t_1},\dots,X_{t_n}$</span> on a path can be done in <span class="math-container">$O(n)$</span> time?</li>
</ol>
<p>Most sampling techniques I have in mind require at least <span class="math-container">$O(n \log n)$</span> time.</p>
<p>On the other hand, all linear-time samplers I know of create processes for which 1) fails, like Perlin noise, white noise, pink noise, etc.</p>
<p>Is it even theoretically possible? So this might be a very fundamental question.</p>
| Community | -1 | <p>Suppose <span class="math-container">$x+y>\sqrt{2}$</span>. Squaring both (positive) sides gives</p>
<p><span class="math-container">$$ x^2+y^2+2xy>2, $$</span></p>
<p>and since <span class="math-container">$x^2+y^2=1$</span> on the unit circle,</p>
<p><span class="math-container">$$ 2xy>1. $$</span></p>
<p>On the other hand, <span class="math-container">$(x-y)^2<0$</span> can never hold, and</p>
<p><span class="math-container">$$ (x-y)^2 = x^2-2xy+y^2 = 1-2xy. $$</span></p>
<p>But <span class="math-container">$2xy>1$</span> would make <span class="math-container">$(x-y)^2<0$</span>, which is false. Thus our assumption that <span class="math-container">$x+y>\sqrt{2}$</span> is wrong.</p>
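<p>A numerical spot check of the conclusion (not a proof): on the unit circle, <span class="math-container">$x+y=\cos t+\sin t$</span> never exceeds <span class="math-container">$\sqrt 2$</span>, and the bound is attained at <span class="math-container">$t=\pi/4$</span>. A minimal Python sketch:</p>

```python
import math

# sample x + y = cos t + sin t around the unit circle
values = [math.cos(t) + math.sin(t)
          for t in (2 * math.pi * k / 100_000 for k in range(100_000))]
assert max(values) <= math.sqrt(2) + 1e-12
assert abs(max(values) - math.sqrt(2)) < 1e-6  # maximum is attained near t = pi/4
```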
|
2,072,473 | <p>I managed to prove the statement:</p>
<blockquote>
<p>If $f: A\to B$ and $g: B\to C$ are surjective, then $g\circ f$ is surjective.</p>
</blockquote>
<p>But now I require a counterexample to the converse of this statement. I am not sure how to formulate the counterexample. Similarly I need a counterexample of the statement being "injective" instead of "surjective".</p>
| Dominik | 259,493 | <p><strong>Hint:</strong> A function $h: A \to \{0\}$ is surjective for any nonempty set $A$.</p>
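<p>Concretely, the hint yields a finite counterexample that can be checked mechanically. A minimal Python sketch; the helper <code>is_surjective</code> is just for this illustration:</p>

```python
A, B, C = {0}, {0, 1}, {0}
f = {0: 0}          # f: A -> B misses 1, so f is NOT surjective
g = {0: 0, 1: 0}    # g: B -> C maps everything onto {0}, surjective

def is_surjective(fn, codomain):
    return set(fn.values()) == codomain

comp = {a: g[f[a]] for a in A}   # g o f
assert is_surjective(comp, C)    # the composite is surjective...
assert not is_surjective(f, B)   # ...even though f is not
```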
|
2,072,473 | <p>I managed to prove the statement:</p>
<blockquote>
<p>If $f: A\to B$ and $g: B\to C$ are surjective, then $g\circ f$ is surjective.</p>
</blockquote>
<p>But now I require a counterexample to the converse of this statement. I am not sure how to formulate the counterexample. Similarly I need a counterexample of the statement being "injective" instead of "surjective".</p>
| MPW | 113,214 | <p><strong>Hints:</strong> You may find it useful to use the characterizations
$$f:A\to B \textrm{ surjective }\iff f \textrm{ has a right inverse } g:B\to A \textrm{ with } f\circ g=\operatorname{Id}_B$$
and (for nonempty $A$)
$$f:A\to B \textrm{ injective }\iff f \textrm{ has a left inverse } g:B\to A \textrm{ with } g\circ f=\operatorname{Id}_A$$</p>
|
2,119,761 | <p>Assume that $f:\mathbb{R}\rightarrow \mathbb{R}$ is a continuous function and $g:\mathbb{R}\rightarrow \mathbb{R}$ is a uniformly continuous, bounded function.</p>
<p>I have to prove that $f\circ g$ is uniformly continuous.
I tried the following:
since $f$ is continuous, for every $\epsilon>0$ there exists $\delta_1>0$ such that $|x-x_0|<\delta_1$ implies $|f(x)-f(x_0)|<\epsilon$.
From the definition of uniform continuity of $g$, I can find $\delta>0$ so that $|x-y|<\delta$ implies $|g(x)-g(y)|<\delta_1$,
which only shows that $f\circ g$ is continuous, not uniformly continuous, since $\delta_1$ may depend on the point.
I don't know how to use that $g$ is bounded.
Thank you very much.</p>
| Robert Israel | 8,508 | <p>Since $g$ is bounded, its range is contained in a finite closed interval. $f$ is uniformly continuous on that interval.</p>
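<p>Spelled out, the answer's one-line argument is the Heine–Cantor step (a sketch):</p>

```latex
\begin{itemize}
  \item $g$ bounded $\Rightarrow g(\mathbb{R}) \subseteq [-M,M]$ for some $M>0$.
  \item $f$ is continuous on the compact interval $[-M,M]$, hence uniformly
        continuous there (Heine--Cantor): $\forall \varepsilon>0\ \exists \delta_1>0:\
        |u-v|<\delta_1 \Rightarrow |f(u)-f(v)|<\varepsilon$ for all $u,v\in[-M,M]$.
  \item $g$ uniformly continuous: $\exists \delta>0:\ |x-y|<\delta \Rightarrow
        |g(x)-g(y)|<\delta_1$.
  \item Hence $|x-y|<\delta \Rightarrow |f(g(x))-f(g(y))|<\varepsilon$, i.e.\
        $f\circ g$ is uniformly continuous.
\end{itemize}
```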
|
2,560,556 | <p>Let $X,Y,Z$ be topological spaces.
Let $p:X\rightarrow Y$ be a continuous surjection such that a map $f:Y\rightarrow Z$ is continuous if and only if $f\circ p:X\rightarrow Z$ is continuous.</p>
<p>I want to prove that this makes $p$ a quotient map. </p>
<p>My thoughts:</p>
<p>Since $p$ is a continuous surjection, all I need is for $p$ to also be open.</p>
<p>If I can show that $p^{-1}$ exists and is continuous, then $p$ must be open, and therefore a quotient map.
Since $p$ is surjective, I know that $p$ at least has a right inverse, so some function $g$ exists such that $p\circ g = Id_Y$.</p>
<p>I don't know how to proceed, however. Am I on the right track?</p>
| Alex Provost | 59,556 | <p>I assume you want the property to hold for <em>all</em> spaces $Z$. In this case, pick $Z = Y$ as sets, endowed with the quotient topology for $p$. Let $f:Y \to Z$ be the identity map. We will show that $f$ is a homeomorphism, and hence that $Y$ also has the quotient topology.</p>
<p>First, $\tilde p =f \circ p$ is continuous because $p$ is, hence by the universal property $f$ is also continuous.</p>
<p>Next, we may factor the continuous map $p$ as $$p = f^{-1} \circ f \circ p =f^{-1} \circ \tilde p,$$</p>
<p>and by the corresponding universal property for $\tilde p$, this means that $f^{-1}$ is continuous.</p>
|
3,432,911 | <p>My argument is as follows:</p>
<p>Let <span class="math-container">$R$</span> be a commutative ring with unity, <span class="math-container">$I$</span> an ideal of <span class="math-container">$R$</span>.
If <span class="math-container">$(R/I)^n\cong (R/I)^m$</span> as <span class="math-container">$R$</span>-modules, then it follows that they are isomorphic as <span class="math-container">$R/I$</span>-modules because the isomorphism factors through the quotient. We observe that these are both free <span class="math-container">$R/I$</span>-modules with bases <span class="math-container">$${\mathfrak{B}_1=\{\delta_{1j}+I, \delta_{2j}+I, \dots, \delta_{nj}+I\} \;\;\text{ and }\;\; \mathfrak{B}_2=\{\delta_{1j}+I, \delta_{2j}+I, \dots, \delta_{mj}+I\}}$$</span> respectively.
Then the isomorphism between <span class="math-container">$(R/I)^n \cong (R/I)^m$</span> as <span class="math-container">$R$</span>-modules induces an isomorphism of these free modules, meaning there is a bijection between the elements of <span class="math-container">$\mathfrak{B}_1$</span> and <span class="math-container">$\mathfrak{B}_2$</span>. It follows then that <span class="math-container">$n=m$</span>.</p>
| Martin R | 42,969 | <p><span class="math-container">$z=0$</span> is mapped to <span class="math-container">$f(0) = \infty$</span>. For <span class="math-container">$z \ne 0$</span> we have with <span class="math-container">$w = \frac 1z$</span>
<span class="math-container">$$
|z-1| = 1 \iff |\frac 1w - 1| = 1 \iff |w-1|^2 = |w|^2 \\
\iff |w|^2 - 2 \operatorname{Re}w + 1 = |w|^2 \iff \operatorname{Re}w = \frac 12 \, ,
$$</span>
where we have used the formula
<span class="math-container">$$
|z_1 + z_2|^2 = |z_1|^2 + 2 \operatorname{Re}(z_1 \bar z_2) + |z_2|^2 \, .
$$</span></p>
<p>Therefore the image is the (extended) line
<span class="math-container">$$ \{ w \mid \operatorname{Re}w = \frac 12 \} \cup \{ \infty \} \, .$$</span></p>
<hr>
<p>If you are familiar with <a href="https://en.wikipedia.org/wiki/M%C3%B6bius_transformation" rel="nofollow noreferrer">Möbius transformations</a> then you can argue that the image of the circle</p>
<ul>
<li>must be a line (because Möbius transformations maps circles to circles or lines, and <span class="math-container">$f(0) = \infty$</span>),</li>
<li>pass through <span class="math-container">$w = \frac 12$</span> (because <span class="math-container">$f(2) = \frac 12$</span>),</li>
<li>be orthogonal to the real axis (because Möbius transformations preserve angles and <span class="math-container">$f$</span> maps the real axis onto itself),</li>
</ul>
<p>and that leaves only the (extended) line <span class="math-container">$x= \frac 12$</span> plus the point at infinity.</p>
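<p>The claim is also easy to verify numerically: sampling points on <span class="math-container">$|z-1|=1$</span> and applying <span class="math-container">$w=1/z$</span> always lands on <span class="math-container">$\operatorname{Re}w=\tfrac12$</span> (a quick Python check, skipping the pole at <span class="math-container">$z=0$</span>):</p>

```python
import cmath
import math

for k in range(1, 360):
    z = 1 + cmath.exp(1j * math.radians(k))   # a point on |z - 1| = 1
    if abs(z) < 1e-9:                         # skip z = 0, which maps to infinity
        continue
    w = 1 / z
    assert abs(w.real - 0.5) < 1e-9           # image lies on Re w = 1/2
```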
|
2,924,380 | <p><span class="math-container">$\sum_{k=0}^{n}{k\binom{n}{k}}=n2^{n-1}$</span></p>
<p><span class="math-container">$n2^{n-1} = \frac{n}{2}2^{n} = \frac{n}{2}(1+1)^n = \frac{n}{2}\sum_{k=0}^{n}{\binom{n}{k}}$</span></p>
<p>That's all I got so far, I don't know how to proceed</p>
| Olivier Oloa | 118,798 | <p><strong>Hint</strong>. One has
<span class="math-container">$$
k{\binom{n}{k}}=n{\binom{n-1}{k-1}},\quad n>0,k>0.
$$</span></p>
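<p>Both the target identity and the hinted absorption identity can be checked numerically with Python's standard-library <code>math.comb</code> (a quick sanity check, not a proof):</p>

```python
from math import comb

for n in range(1, 20):
    # the identity to prove: sum_k k*C(n,k) = n * 2^(n-1)
    assert sum(k * comb(n, k) for k in range(n + 1)) == n * 2 ** (n - 1)
    # the hint: k*C(n,k) = n*C(n-1,k-1) for each k
    assert all(k * comb(n, k) == n * comb(n - 1, k - 1) for k in range(1, n + 1))
```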
|
3,424,189 | <p>I'm trying to calculate the integral <span class="math-container">$$\int_0^1 \frac{\sin\Big(a \cdot \ln(x)\Big)\cdot \sin \Big(b \cdot \ln(x)\Big)}{\ln(x)} dx, $$</span>
but am stuck. Using the product-to-sum (Simpson's) formulas I got to
<span class="math-container">$$\int_0^1 \frac{\cos\Big((a-b) \cdot \ln(x)\Big) - \cos \Big((a+b) \cdot \ln(x)\Big)}{2\ln(x)} dx, $$</span>
but alas, that also got me nowhere. Does anyone have any ideas? </p>
| ComplexYetTrivial | 570,419 | <p>For <span class="math-container">$c \in \mathbb{R}$</span> we have
<span class="math-container">\begin{align}
\int \limits_0^\infty \frac{1 - \cos(c t)}{t} \, \mathrm{e}^{-t} \, \mathrm{d} t &= \int \limits_0^\infty \int \limits_0^c \sin(u t) \, \mathrm{d} u \, \mathrm{e}^{-t} \, \mathrm{d} t = \int \limits_0^c \int \limits_0^\infty \sin(u t) \mathrm{e}^{-t} \, \mathrm{d} t \, \mathrm{d} u = \int \limits_0^c \frac{u}{1+u^2} \, \mathrm{d} u \\
&= \frac{1}{2} \ln(1 + c^2) \, ,
\end{align}</span>
so
<span class="math-container">\begin{align}
\int \limits_0^1 \frac{\sin[a \ln(x)] \sin[b \ln(x)]}{\ln(x)} \, \mathrm{d} x &= \int \limits_0^1 \frac{\cos[(a+b)\ln(x)] - \cos[(a-b) \ln(x)]}{- 2 \ln(x)} \, \mathrm{d} x \\
&\!\!\!\stackrel{x = \mathrm{e}^{-t}}{=} \int \limits_0^\infty \frac{\left(1 - \cos[(a - b) t]\right) - \left(1 - \cos[(a+b) t]\right)}{2t} \, \mathrm{e}^{-t} \, \mathrm{d} t \\
&= \frac{1}{4} \left(\ln[1 + (a-b)^2] - \ln[1 + (a+b)^2]\right) \\
&= \frac{1}{4} \ln \left(\frac{1+(a-b)^2}{1+(a+b)^2}\right) \, .
\end{align}</span></p>
|
3,795,932 | <p><span class="math-container">$\mathbb {Z}G $</span> is not Artinian where <span class="math-container">$ G$</span> is a finite group.</p>
<p>I know that <span class="math-container">$\mathbb {Z} $</span> is not Artinian but <span class="math-container">$\mathbb {Z} $</span> is not an ideal of the group ring. So How to see this? Any help would be appreciated!</p>
| Kavi Rama Murthy | 142,385 | <p>Both are correct. <span class="math-container">$\ln |2x-2|=\ln 2+\ln |x-1|$</span> and you can absorb <span class="math-container">$(\ln 2) /2$</span> into the constant.</p>
|
2,907,771 | <p>I tried coming up with a proof of compactness of $[0,1]$ in $\mathbb{R}$ and thought of the following method. please let me know if it is correct or how it could be made more correct.</p>
<p>For any open cover of $[0,1]$ there exists an $\epsilon$ such that $[0,\epsilon)$ is contained in one open set of the cover. Using the least upper bound property of the reals, there exists $l_1$ such that $[0,l_1)$ is contained in one open set $U_1$ and for any $m>l_1$, $[0,m)$ is not contained in one open set. </p>
<p>Now there exists some $l_2$ such that $[l_1,l_2)$ is contained in one open set $U_2$ and for any $m>l_2$, $[l_1,m)$ is not contained in one open set.</p>
<p>This way one forms an increasing sequence of real numbers $(0,l_1,l_2,\ldots)$; this sequence must converge to some $l$.</p>
<p>If $l\ne1$, pick an $\epsilon$-ball around $l$ such that $(l-\epsilon, l+\epsilon)$ is covered by one open set.
Since $l$ is the limit of the sequence $(0,l_1,l_2,\ldots)$, the interval $(l-\epsilon, l+\epsilon)$ contains all but finitely many $l_n$'s. </p>
<p>Let $l_m$ be the first entry in the sequence $(0,l_1,l_2,\ldots)$ which belongs to $(l-\epsilon, l+\epsilon)$, </p>
<p>and form a new increasing sequence $(0,l_1,l_2,\ldots,l_m,l,\ldots)$ by the same method. This way one gets an increasing sequence not bounded by any $l_n<1$. </p>
<p>So there exists an increasing sequence $(0,l_1,l_2,\ldots)$ getting arbitrarily close to $1$, together with its corresponding sequence of open sets $(U_1,U_2,\ldots)$.</p>
<p>Since there exists one open set containing $(1-\epsilon, 1]$, all but finitely many $l_n$'s are contained in that open set. So finitely many open sets $(U_1,U_2,\ldots,U_n)$ along with the open set covering $(1-\epsilon, 1]$ cover $[0,1]$.</p>
| Christian Blatter | 1,303 | <p>A proof along your lines is possible, but you have to be more greedy when choosing the $U_k$. Your algorithm could stop short long before the right end is reached, and you would have to restart with no guarantee of success.</p>
<p>We are given a family ${\cal U}$ of open sets $U\subset{\mathbb R}$ that together cover the interval $[0,1]$. Put $x_0:=0$ and choose recursively points $x_k\in\>]0,1]$ as follows: </p>
<p>Assume that $x_0$,$x_1$, $\ldots$, $x_m$ have been choosen.</p>
<ul>
<li>If $x_m=1$, stop.</li>
<li>If $x_m<1$, put
$$x_{m+1}:=\sup\bigl\{x\leq 1\bigm| \exists\, U\in{\cal U}: \ [x_m,x]\subset U\bigr\}>x_m\ .$$ Since there is an open $U\in{\cal U}$ with $x_m\in U$ we can be sure that $x_{m+1}>x_m$. </li>
</ul>
<p>I claim that this process will stop at a finite $m$. If not, consider the point $\xi:=\lim_{m\to\infty} x_m\leq1$. There is an open $U\in{\cal U}$ covering $\xi$
and therewith a point $x_m<\xi$. The inequalities $x_m<x_{m+1}<\xi$ then violate the choice of $x_{m+1}$.</p>
<p>We therefore may assume $x_m=1$ for some $m\geq1$. There is an $U_m\in{\cal U}$ covering $x_m=1$ and therewith an interval $J:=\>]1-\delta,1]$, $\>\delta>0$. By definition of $x_m$ we then can find an $U_{m-1}\in{\cal U}$ covering $[x_{m-1},x]$ for an $x\in J$. This $U_{m-1}$ will also cover an interval $J':=\>]x_{m-1}-\delta,x_{m-1}]$, $\>\delta>0$. By definition of $x_{m-1}$ we then can find an $U_{m-2}$, such that $\ldots$, etcetera. Proceeding in this way we obtain a finite sequence of open sets $U_k\in{\cal U}$ $\>(m\geq k\geq0)$ which together cover the interval $[0,1]$.</p>
|
199,695 | <p>I believe the answer is $\frac12(n-1)^2$, but I couldn't confirm by googling, and I'm not confident in my ability to derive the formula myself.</p>
| mdp | 25,159 | <p>A clique has an edge for each pair of vertices, so there is one edge for each choice of two vertices from the $n$. So the number of edges is:</p>
<p>$$\binom{n}{2}=\frac{n!}{2!\times(n-2)!}=\frac{1}{2}n(n-1)$$</p>
<p><strong>Edit:</strong> Inspired by Belgi, I'll give a third way of counting this! Each vertex is connected to $n-1$ other vertices, which gives $n(n-1)$ times that an edge is joined to a vertex. As each edge is joined to exactly two vertices, there must be $\frac{1}{2}n(n-1)$ edges.</p>
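<p>As a quick sanity check of this count, here is a short Python sketch (an illustration added here, not part of the original answer): it lists all vertex pairs of <span class="math-container">$K_n$</span> and compares with the closed form <span class="math-container">$\frac{1}{2}n(n-1)$</span>.</p>

```python
from itertools import combinations

# Count the edges of the complete graph K_n by listing all vertex pairs,
# then compare with the closed form n(n-1)/2.
counts = [len(list(combinations(range(n), 2))) for n in range(2, 8)]
formula = [n * (n - 1) // 2 for n in range(2, 8)]

print(counts)             # [1, 3, 6, 10, 15, 21]
print(counts == formula)  # True
```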
|
3,738,789 | <p>I know if I stick two pins on a paper, and trace a taut loop around them, I get an ellipse. With one pin, I get a circle. Question is, are there names for shapes I get if I trace a taut loop around 3, 4, 5, ..., k pins, assuming the pins are not collinear, and the polygon formed by joining them is convex i.e. every pin stretches the loop at least one point as I trace around the pins? General formula for the locus? Any good, readable references to this? (I am not a mathematician, just a hobbyist.) Thank you.</p>
| rschwieb | 29,335 | <p>Someone will tell me if there is an exotic case I'm not foreseeing, but I think usually it will be a smooth union of "ellipse arcs."</p>
<p>Assuming the string is large enough to circumscribe the entire collection of pins, it could still be that it is taut around the outermost pins, and that would make a polygon. So we'll also assume there is a little slack to allow more than just that.</p>
<p>Pins within the interior of the convex hull of the set of pins won't be able to affect the shape at all, so you may as well assume the pins lie on the perimeter of a convex polygon.</p>
<p>Since the loop isn't taut, I think at any given time there will be two pins closest to the stylus drawing the arc, along the loop which act like the foci of an ellipse. As the stylus moves, the loop may contact a new pin which acts like the focus of a different ellipse, and likewise a pin that used to be in contact may lose contact with the string as another pin further along takes up the role of a focus.</p>
<p>I don't think a formula for the locus around <span class="math-container">$n$</span>-pins sounds very feasible, but you could pursue trying to describe it as the boundary of a union of ellipses. The thing is that I think the number of ellipses involved depends on how much slack there is in the string. If it is very slack, then two pins very close to each other on the edge might never act as a pair of foci, but if you shrink the string enough, then they will.</p>
<p>For a little while I considered what just the union of all possible ellipses would tell us, but then I considered just a triangle. If two points form the foci of an ellipse, and the stylus is currently on the <em>opposite</em> side of the ellipse from the third triangle point, the third triangle point will be taking up slack, preventing the full ellipse from being drawn. So the answer is not just something as simple as "the boundary of the union of all possible ellipses."</p>
<p>That leads me to this: at any given point of time, the string will form a polygon, one of whose points is the stylus. Then the arc currently being drawn is a piece of an ellipse determined by two points of contact with the polygon. The convex hull of the polygon will be bound within the shape being drawn. So the shape is the boundary of the union of these "polygons with ellipse bumps."</p>
|
<p>I'm reading through some lecture notes to prepare myself for analysis next semester and stumbled upon the following exercises: </p>
<p>a) Prove that $\lim_{x\to0} f(x)=b$ is equivalent to the statement $\lim_{x\to0} f(x^3)=b$.</p>
<p>b) Give an example of a map where $\lim_{x\to0} f(x^2)$ exists, but $\lim_{x\to0} f(x)$ does not. </p>
<p>for b) I was thinking about the following piecewise function: </p>
<p>$f(x)=\begin{cases} -1 & x < 0 \\
1 & x \geq0
\end{cases}$</p>
<p>is this a good example?</p>
<p>for (a), I don't have any concrete tools to work with, I can't write down any explicit $\epsilon$ or $\delta$, so what can I do?</p>
| cj1996 | 316,862 | <p>For a) you can use the following fact in an epsilon-delta proof: the map <span class="math-container">$x \mapsto x^3$</span> is a bijection of <span class="math-container">$\mathbb{R}$</span> onto itself fixing <span class="math-container">$0$</span>, with <span class="math-container">$|x| < \delta \iff |x^3| < \delta^3$</span>.</p>
<p>So a statement <span class="math-container">$p(x)$</span> holds for all <span class="math-container">$x$</span> near <span class="math-container">$0$</span> if and only if <span class="math-container">$p(x^3)$</span> holds for all <span class="math-container">$x$</span> near <span class="math-container">$0$</span>.</p>
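<p>As a numerical illustration of part (b), here is a Python sketch (an added illustration, using the step function proposed in the question): near <span class="math-container">$0$</span>, <span class="math-container">$f(x^2)$</span> is constantly <span class="math-container">$1$</span>, so its limit exists, while <span class="math-container">$f(x)$</span> takes both values <span class="math-container">$-1$</span> and <span class="math-container">$1$</span>.</p>

```python
def f(x):
    # the candidate from the question: f(x) = -1 for x < 0, and 1 for x >= 0
    return -1 if x < 0 else 1

xs = [t * 1e-3 for t in range(-5, 6) if t != 0]  # small nonzero sample points
vals_sq = {f(x * x) for x in xs}  # f(x^2) near 0: always 1, limit exists
vals = {f(x) for x in xs}         # f(x) near 0: both -1 and 1, no limit

print(sorted(vals_sq), sorted(vals))  # [1] [-1, 1]
```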
|
2,543,834 | <p>Ok, so in my differential equations class we've been doing problems which more or less amount to solving equations of the form:</p>
<p><span class="math-container">$$\frac{dY}{dt} = AY$$</span></p>
<p>Where <span class="math-container">$A$</span> is just some <span class="math-container">$2\times2$</span> linear transformation and <span class="math-container">$Y$</span> is a parametric vector function defined more specifically as</p>
<p><span class="math-container">$$Y(t) = \begin{bmatrix}
x(t) \\
y(t)
\end{bmatrix}$$</span></p>
<p>The end result, assuming that there exists <span class="math-container">$\lambda_1, \lambda_2 \ne 0; \lambda_1 \ne \lambda_2$</span> which define the eigen values for A, is a definition for <span class="math-container">$Y(t)$</span> of the form,</p>
<p><span class="math-container">$$Y(t) = k_1e^{\lambda_1t}\vec{V_1} + k_2e^{\lambda_2t}\vec{V_2}$$</span></p>
<p>Where <span class="math-container">$\vec{V_1}, \vec{V_2}$</span> are the corresponding eigen vectors to their respective eigen values and <span class="math-container">$k_1, k_2$</span> are just some constants.</p>
<p>For solutions which involve either <span class="math-container">$k_1 = 0$</span> or <span class="math-container">$k_2 = 0$</span>, the end result is a straight-line solution. The rest are exponential curves within the vector space defined by the eigen vectors.</p>
<p>My understanding of eigen vectors, from a linear algebra class I took a year ago, so far is as follows (roughly):</p>
<ul>
<li><p>geometrically speaking, an eigenvector is any vector whose direction after transformation by some matrix <span class="math-container">$A$</span> remains the same. It's only scaled and/or negated.</p>
</li>
<li><p>Every eigenvector for some matrix <span class="math-container">$A$</span> composes a subspace which in turn defines the <em>eigen space</em> for <span class="math-container">$A$</span>'s vector basis.</p>
</li>
<li><p>therefore, the eigenvectors which <span class="math-container">$span(A)$</span> are linearly independent and define a coordinate space which also exists within <span class="math-container">$A$</span>.</p>
</li>
</ul>
<p>Regardless of whether or not the above is correct (if there's a mistake, any clarification/correction would be appreciated), what is it about eigenvectors <em>specifically</em> which allows for them to be used to solve these forms of differential equations?</p>
| Community | -1 | <p>The basic idea here is to find a new set of variables, $X(t)$ and $Y(t)$, related to the original variables $x(t)$ and $y(t)$ by a linear transformation, so that the differential equations for $X(t)$ and $Y(t)$ are <em>decoupled</em>:
$$
\dot X = \lambda_1 X\, ,\qquad \dot Y=\lambda_2 Y\, . \tag{1}
$$
In other words, <em>assuming</em>
$$
\left(\begin{array}{c}
x\\ y \end{array}\right)= M \left(\begin{array}{c} X\\ Y\end{array}\right)
$$
where $M$ is invertible, we have
\begin{align}
\frac{d}{dt}\left(\begin{array}{c}
x\\y \end{array}\right)&= A \left(\begin{array}{c}
x \\ y\end{array}\right)\, ,\\
M \frac{d}{dt}
\left(\begin{array}{c} X \\ Y\end{array}\right)&= A M
\left(\begin{array}{c} X\\Y\end{array}\right)\, ,\\
\frac{d}{dt}\left(\begin{array}{c} X\\ Y\end{array}\right)&=
M^{-1} A M \left(\begin{array}{c} X\\ Y\end{array}\right)\, .\tag{2}
\end{align}
Thus if $M^{-1} A M$ is diagonal with non-zero entries $\lambda_1,\lambda_2$, the system (2) is solved by (1), and $\lambda_1,\lambda_2$ are the eigenvalues of $A$ (by construction). The new coordinates $X$ and $Y$ are by construction eigenvectors of $A$. These new coordinates are “special” in that they have an especially simple evolution. Having solved in terms of the special coordinates $X,Y$, one can then go back to the original variable $x,y$ using $M$.</p>
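<p>To illustrate the decoupling numerically, here is a small Python sketch (added for illustration; the matrix <span class="math-container">$A$</span>, the constants <span class="math-container">$k_1,k_2$</span>, and the step size are arbitrary choices): it builds <span class="math-container">$Y(t)=k_1e^{\lambda_1 t}\vec{V_1}+k_2e^{\lambda_2 t}\vec{V_2}$</span> and checks <span class="math-container">$\dot Y = AY$</span> with a central difference.</p>

```python
import math

# Hypothetical 2x2 example: A = [[0, 1], [1, 0]] has eigenvalue 1 with
# eigenvector (1, 1), and eigenvalue -1 with eigenvector (1, -1).
A = [[0.0, 1.0], [1.0, 0.0]]
lam1, v1 = 1.0, (1.0, 1.0)
lam2, v2 = -1.0, (1.0, -1.0)
k1, k2 = 0.7, -1.3      # arbitrary constants (fixed by initial conditions)

def Y(t):
    """Y(t) = k1 e^{lam1 t} v1 + k2 e^{lam2 t} v2."""
    c1, c2 = k1 * math.exp(lam1 * t), k2 * math.exp(lam2 * t)
    return (c1 * v1[0] + c2 * v2[0], c1 * v1[1] + c2 * v2[1])

def matvec(mat, y):
    return (mat[0][0] * y[0] + mat[0][1] * y[1],
            mat[1][0] * y[0] + mat[1][1] * y[1])

# Central difference approximation of dY/dt should match A Y(t).
t, h = 0.4, 1e-6
yp, ym = Y(t + h), Y(t - h)
dY = ((yp[0] - ym[0]) / (2 * h), (yp[1] - ym[1]) / (2 * h))
Ay = matvec(A, Y(t))
err = max(abs(dY[0] - Ay[0]), abs(dY[1] - Ay[1]))
print(err < 1e-5)  # True
```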
|
287,859 | <p>Prove that $\lim\limits_{x\rightarrow+\infty}\frac{x^k}{a^x} = 0\ (a>1,k>0)$.</p>
<p>P.S. This problem comes from my analysis book. You may use the definition of limits or invoke the Heine theorem for help. <em>It means the proof should only use some basic properties and definition of limits rather than more complicated approaches.</em></p>
| Mhenni Benghorbal | 35,472 | <p>Applying the <a href="https://math.stackexchange.com/questions/214001/what-are-common-methods-techniques-can-be-used-to-prove-that-limit-of-an-infinit/214014#214014">result</a></p>
<p><strong>Theorem</strong>: If $\{a_n\}$ is a sequence such that $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}= a\,,$ then</p>
<p>1) if $|a|<1$, then $\lim_{n\to \infty}a_n =0 \,,$ </p>
<p>2) if $ a>1$, then $\lim_{n\to \infty}|a_n| =\infty \,,$</p>
<p>Letting $b_x=\frac{x^k}{a^x}$, we have</p>
<p>$$ \lim_{x\to \infty}\frac{b_{x+1}}{b_x}= \lim_{x\to \infty}\frac{(x+1)^k}{a^{x+1}}\frac{a^x}{x^k} = \lim_{x\to \infty}\frac{1}{a}(1+\frac{1}{x})^k = \frac{1}{a} < 1, $$</p>
<p>which implies by part $(1)$ of the theorem that $\lim_{x \to \infty}b_x=0. $</p>
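<p>A quick numerical illustration in Python (added here; the parameters <span class="math-container">$a=1.5$</span>, <span class="math-container">$k=3$</span> are arbitrary choices with <span class="math-container">$a>1$</span>, <span class="math-container">$k>0$</span>): <span class="math-container">$b_x$</span> decays rapidly and the ratio <span class="math-container">$b_{x+1}/b_x$</span> approaches <span class="math-container">$1/a$</span>.</p>

```python
import math

# Hypothetical parameters with a > 1, k > 0; b(x) = x^k / a^x.
a, k = 1.5, 3.0
def b(x):
    return x**k / a**x

tail = b(200)                                # should be tiny
ratio_err = abs(b(1001) / b(1000) - 1 / a)   # ratio tends to 1/a
print(tail < 1e-20, ratio_err < 1e-2)        # True True
```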
|
9,930 | <p>One of the standard parts of homological algebra is "diagram chasing", or equivalent arguments with universal properties in abelian categories. Is there a rigorous theory of diagram chasing, and ideally also an algorithm?</p>
<p>To be precise about what I mean, a diagram is a directed graph $D$ whose vertices are labeled by objects in an abelian category, and whose arrows are labeled by morphisms. The diagram might have various triangles, and we can require that certain triangles commute or anticommute. We can require that certain arrows vanish, which can be used to ask that certain compositions vanish. We can require that certain compositions are exact. Maybe some of the arrows are sums or direct sums of other arrows, and maybe some of the vertices are projective or injective objects. Then a diagram "lemma" is a construction of another diagram $D'$, with some new objects and arrows constructed from those of $D$, or at least some new restrictions.</p>
<p>As described so far, the diagram $D$ can express a functor from any category $\mathcal{C}$ to the abelian category $\mathcal{A}$. This looks too general for a reasonable algorithm. So let's take the case that $D$ is acyclic and finite. This is still too general to yield a complete classification of diagram structures, since acyclic diagrams include all acyclic quivers, and some of these have a "wild" representation theory. (For example, three arrows from $A$ to $B$ are a wild quiver. The representations of this quiver are not tractable, even working over a field.) In this case, I'm not asking for a full classification, only in a restricted algebraic theory that captures what is taught as diagram chasing.</p>
<p>Maybe the properties of a diagram that I listed in the second paragraph already yield a wild theory. It's fine to ditch some of them as necessary to have a tractable answer. Or to restrict to the category $\textbf{Vect}(k)$ if necessary, although I am interested in greater generality than that.</p>
<p>To make an analogy, there is a theory of Lie bracket words. There is an algorithm related to <a href="http://en.wikipedia.org/wiki/Lyndon_word" rel="noreferrer">Lyndon words</a> that tells you when two sums of Lie bracket words are formally equal via the Jacobi identity. This is a satisfactory answer, even though it is not a classification of actual Lie algebras. In the case of commutative diagrams, I don't know a reasonable set of axioms — maybe they are related to triangulated categories — much less an algorithm to characterize their formal implications.</p>
<p>(This question was inspired by a mathoverflow question about <a href="https://mathoverflow.net/questions/6749/">George Bergman's salamander lemma</a>.)</p>
<hr>
<p>David's reference is interesting and it could be a part of what I had in mind with my question, but it is not the main part. My thinking is that diagram chasing is boring, and that ideally there would be an algorithm to obtain all finite diagram chasing arguments, at least in the acyclic case. Here is a simplification of the question that is entirely rigorous.</p>
<p>Suppose that the diagram $D$ is finite and acyclic and that all pairs of paths commute, so that it is equivalent to a functor from a finite <a href="http://en.wikipedia.org/wiki/Partially_ordered_set#In_category_theory" rel="noreferrer">poset category</a> $\mathcal{P}$ to the abelian category $\mathcal{A}$. Suppose that the only other decorations of $D$ are that: (1) certain arrows are the zero morphism, (2) certain vertices are the zero object, and (3) certain composable pairs of arrows are exact. (Actually condition 2 can be forced by conditions 1 and 3.) Then is there an algorithm to determine all pairs of arrows that are forced to be exact? Can it be done in polynomial time?</p>
<p>This rigorous simplification does not consider many of the possible features of lemmas in homological algebra. Nothing is said about projective or injective objects, taking kernels and cokernels, taking direct sums of objects and morphisms (or more generally finite limits and colimits), or making connecting morphisms. For example, it does not include the <a href="http://en.wikipedia.org/wiki/Snake_lemma" rel="noreferrer">snake lemma</a>. It also does not include diagrams in which only some pairs of paths commute. But it is enough to express the monomorphism and epimorphism conditions, so it includes for instance the <a href="http://en.wikipedia.org/wiki/Five_lemma" rel="noreferrer">five lemma</a>.</p>
| Harrison Brown | 382 | <p>This was originally going to be a rather different post, but I realized that my argument could perhaps be adapted into an (ineffective) algorithm, if it's possible to patch up the hole. At the very least, I should say some things which might be obvious but wasn't to me until I saw it, and so might not be obvious to someone else either.</p>
<p>Let's fix the abelian category to be the category of finite-dimensional vector spaces over GF(2). We have a decorated diagram of the type Greg describes. Then it's easy to see that the <em>non</em>-exactness of two composable arrows is in RE -- we can have a prover just give us a counterexample, and since everything's finite-dimensional, we're good. So the "diagram chasing decision problem" is in coRE.</p>
<p>If we could effectively bound the dimensions of the vector spaces in a counterexample, of course, we'd have that the problem was recursive. At least for finite-dimensional vector spaces over a finite field. But I don't know how this is possible. My question, though: is it possible to get around the need for an effective bound somehow? I think you can get a complexity-theoretic improvement by letting the prover compute everything as a black box and "checking for honesty" on logarithmic-sized subspaces, but I might be misremembering. Is there some extraordinarily clever way to do something like this for computability? (I'm thinking about results like the necklace reconstruction problem, or problem B6 on this year's Putnam exam, although in both of those cases the "uniform bound" hides a lot of information that doesn't seem to have anyplace to go in this scenario...)</p>
<p>Actually, I guess really the only scenario to worry about would be if the dimensions in the smallest counterexample to exactness of some pair grew faster than any computable function of the size of the diagram, right? Otherwise we have that the problem is in co-NSPACE(BigFunction(n)), and so it's in NSPACE(BigFunction(n)). That kind of growth rate seems implausible, but again I have no idea how to prove it's impossible.</p>
|
114,438 | <p>I am interested in knowing whether there is a definition for the symbol of a PDO which is NOT linear.
In Wikipedia and in the book I am reading (An Introduction to Partial Differential Equations by Renardy-Rogers) I only found the definition for linear PDOs.</p>
<p>Here is the Wikipedia link:</p>
<p><a href="http://en.wikipedia.org/wiki/Symbol_of_a_differential_operator" rel="nofollow">http://en.wikipedia.org/wiki/Symbol_of_a_differential_operator</a></p>
| youler | 25,895 | <p>The symbol of a nonlinear differential operator is defined as the symbol of its linearization.</p>
|
<p>Let $m,n\in N^{+}$ be such that</p>
<p>$$\sqrt{37}+\sqrt{47}<\dfrac{n}{m}<\sqrt{41}+\sqrt{43}$$
Find the minimum value of $m$. </p>
<p>My try: since
$$(\sqrt{37}+\sqrt{47})m<n<(\sqrt{43}+\sqrt{41})m$$
then
$$\dfrac{10m}{\sqrt{47}-\sqrt{37}}<n<\dfrac{2m}{\sqrt{43}-\sqrt{41}}$$</p>
<p>(Maybe this problem uses a Pell equation?)</p>
<p>Here I am stuck. Thank you very much.</p>
| Community | -1 | <p>This provides only a partial answer. What you have gives us
$$\dfrac{\sqrt{43}-\sqrt{41}}2n < m < \dfrac{\sqrt{47}-\sqrt{37}}{10}n$$
Hence, a sufficient condition is that if we ensure that $\dfrac{\sqrt{47}-\sqrt{37}}{10}n -\dfrac{\sqrt{43}-\sqrt{41}}2n > 1$, there is definitely an integer $m$. Hence, $n \geq 7573$. Choosing $n=7573$, gives $m=585$. However, this does not ensure that $m$ is a minimum. All we can say is that the $m$ we are after is $\leq 585$.</p>
|
<p>Let $m,n\in N^{+}$ be such that</p>
<p>$$\sqrt{37}+\sqrt{47}<\dfrac{n}{m}<\sqrt{41}+\sqrt{43}$$
Find the minimum value of $m$. </p>
<p>My try: since
$$(\sqrt{37}+\sqrt{47})m<n<(\sqrt{43}+\sqrt{41})m$$
then
$$\dfrac{10m}{\sqrt{47}-\sqrt{37}}<n<\dfrac{2m}{\sqrt{43}-\sqrt{41}}$$</p>
<p>(Maybe this problem uses a Pell equation?)</p>
<p>Here I am stuck. Thank you very much.</p>
| N. S. | 9,176 | <p>Since</p>
<p>$$\sqrt{37}+\sqrt{47}=12.9384....$$
$$\sqrt{41}+\sqrt{43}=12.9605....$$</p>
<p>Write</p>
<p>$$\frac{n}{m}=13-\frac{k}{m}$$ Then</p>
<p>$$13-0.0616< 13-\frac{k}{m}< 13-0.039$$
Hence
$$.0616 > \frac{k}{m} \geq \frac{1}{m}$$</p>
<p>This proves that</p>
<p>$$m \geq \frac{1}{0.0616}=16.23$$</p>
<p>Thus, $m \geq 17$.</p>
<p>For $17$ it is easy to show that $13-\frac{1}{17}$ has the desired property.</p>
<p><strong>P.S.</strong> If $m > \frac{1}{b-a}$ then it is trivial to prove that there exists an $n$ so that $a\leq \frac{n}{m} <b$. This simple result, shows that any $m \geq 45$ works, and reduces the problem to a finite computation: check which $1 \leq m \leq 44$ works...</p>
<p><strong>Edit</strong> Fixed some mistakes in the computations...</p>
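<p>The finite computation can be sketched in Python (an added illustration, not part of the original answer); it confirms that <span class="math-container">$m=17$</span> is minimal, with <span class="math-container">$n=220$</span>, i.e. <span class="math-container">$\frac{n}{m}=13-\frac{1}{17}$</span>.</p>

```python
import math

lo = math.sqrt(37) + math.sqrt(47)   # ≈ 12.93845
hi = math.sqrt(41) + math.sqrt(43)   # ≈ 12.96056

def works(m):
    # Is there an integer n with lo*m < n < hi*m ?
    n = math.floor(hi * m)           # largest integer below hi*m
    return n > lo * m

m = next(m for m in range(1, 45) if works(m))
n = math.floor(hi * m)
print(m, n)  # 17 220
```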
|
1,336,869 | <p>Does every mod p have at least one element with a non-identical inverse?</p>
<p>I very much suspect this is true, but how can I prove it? For example, in mod 5, some elements have inverses that are not themselves ${2,3}$ and some have themselves as inverses ${1,4}$. Am I assured that every prime $p\gt 2$ will have at least one element that is not its own inverse (almost certainly yes)? How do I prove that?</p>
| Hagen von Eitzen | 39,174 | <p>$x$ is its own inverse $\bmod p$ iff $x^2\equiv 1\pmod p$ and this has only the two solutions $\pm1\pmod p$ (put differently, $p\mid x^2-1=(x-1)(x+1)$ implies $p\mid x-1$ or $p\mid x+1$).</p>
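<p>A brute-force Python illustration of this fact (added here, not part of the original answer): for each listed odd prime, the only self-inverse residues are <span class="math-container">$1$</span> and <span class="math-container">$p-1$</span>.</p>

```python
def self_inverses(p):
    """Elements of {1, ..., p-1} that are their own inverse mod p."""
    return [x for x in range(1, p) if (x * x) % p == 1]

results = {p: self_inverses(p) for p in (3, 5, 7, 11, 13, 101)}
print(results[5], results[101])  # [1, 4] [1, 100]
```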
|
2,886,675 | <p>I suspect the following is exactly true ( for positive $\alpha$ )</p>
<p>\begin{equation}
\sum_{n=1}^\infty e^{- \alpha n^2 }= \frac{1}{2} \sqrt { \frac{ \pi}{ \alpha} }
\end{equation}</p>
<p>If the above is exactly true, then I would like to know a proof of it.
I accept that showing a particular limit is true may be far more difficult than just applying a general theorem to show that the limit exists. Also, as the result involves $\pi$, this makes me think the proof could well be a long one, BUT … ?</p>
<p>To give some context, the above series crops up in calculating the 'One Particle Translational Partition Function' for the quantum mechanical 'Particle In A Box'.</p>
| Community | -1 | <p>Consider the Riemann sum $$\lim_{m\to\infty}\frac 1m\sum_{n=0}^\infty e^{-(n/m)^2}=\int_0^\infty e^{-x^2}dx=\frac{\sqrt\pi}2.$$</p>
<p>Then with the substitution $m^2\alpha=1$,</p>
<p>$$\lim_{\alpha\to0}\sqrt\alpha\sum_{n=0}^\infty e^{-\alpha n^2}=\frac{\sqrt\pi}2.$$
(Note that the starting index $n=0$ or $n=1$ makes no difference as a finite sum of terms will cancel out when multiplied by $\sqrt\alpha$.)</p>
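<p>A numerical Python check of this limit (added for illustration; the value <span class="math-container">$\alpha=10^{-6}$</span> and the truncation point are arbitrary choices): terms beyond <span class="math-container">$n\approx 10/\sqrt\alpha$</span> contribute only amounts of size <span class="math-container">$\sim e^{-100}$</span>, so the finite sum is an accurate stand-in for the series.</p>

```python
import math

# For small alpha, sqrt(alpha) * sum_{n>=0} e^{-alpha n^2} should be
# close to sqrt(pi)/2.
alpha = 1e-6
N = int(10 / math.sqrt(alpha))   # terms beyond this are ~e^{-100}
s = sum(math.exp(-alpha * n * n) for n in range(N + 1))
val = math.sqrt(alpha) * s
target = math.sqrt(math.pi) / 2

print(abs(val - target) < 1e-3)  # True
```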
|
<p>Let T be a totally ordered set that is <strong>finite</strong>. Does it follow that the minimum and maximum of T exist?
Since T is finite, I believe there exists a minimal element of T. From that, it may be possible to show that the minimal element is the minimum, but I am not quite sure whether this is the right approach. </p>
| Caleb Stanford | 68,107 | <p><strong>Claim:</strong> A totally ordered set with at least one <em>minimal</em> element has a <em>minimum</em> element.</p>
<p><strong>Proof sketch:</strong> Let $b$ be the minimal element. If $b$ were not in fact the minimum, then by definition of minimum, "$b \le a$ for all $a$" would be false. Pick some $a$ for which $b \le a$ is not true, and derive a contradiction using totality.</p>
|
246,114 | <p>A Latin Square is a square of size <strong>n × n</strong> containing numbers <strong>1</strong> to <strong>n</strong> inclusive. Each number occurs once in each row and column.</p>
<p>An example of a 3 × 3 Latin Square is:</p>
<p><span class="math-container">$$
\left(
\begin{array}{ccc}
1 & 2 & 3 \\
3 & 1 & 2 \\
2 & 3 & 1 \\
\end{array}
\right)
$$</span>
Another is:
<span class="math-container">$$
\left(
\begin{array}{ccc}
3 & 1 & 2 \\
2 & 3 & 1 \\
1 & 2 & 3 \\
\end{array}
\right)
$$</span></p>
<p>My code works when the order is less than 5:</p>
<pre><code>n=4;
Dimensions[ans=Permutations[Permutations[Range[n]],{n}]//
Select[AllTrue[Join[#,Transpose@#],DuplicateFreeQ]&]]//AbsoluteTiming
</code></pre>
<blockquote>
<p><code>{0.947582, {576, 4, 4}}</code></p>
</blockquote>
<p>When the order is 5, there is not enough memory. Is there a better way to get all 5×5 Latin squares?</p>
| Roman | 26,598 | <p>Adding lines one-by-one and continuing only if the newly added line does not give any column duplications. This highly unoptimized code takes about a minute for <span class="math-container">$n=5$</span> (thanks @chyanog for speedup!):</p>
<pre><code>addline[lines_] :=
Select[Append[lines, #] & /@ Permutations[Range[Length[Transpose[lines]]]],
AllTrue[DuplicateFreeQ]@*Transpose]
latinsquares[n_] := Nest[Join @@ addline /@ # &,
Transpose[{Permutations[Range[n]]}],
n - 1]
latinsquares[5]
(* {{{1,2,3,4,5},{2,1,4,5,3},{3,4,5,1,2},{4,5,2,3,1},{5,3,1,2,4}},
{{1,2,3,4,5},{2,1,4,5,3},{3,4,5,1,2},{5,3,1,2,4},{4,5,2,3,1}},
{{1,2,3,4,5},{2,1,4,5,3},{3,4,5,2,1},{4,5,1,3,2},{5,3,2,1,4}},
...
{{5,4,3,2,1},{4,5,2,1,3},{3,2,1,5,4},{2,1,4,3,5},{1,3,5,4,2}}} *)
</code></pre>
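<p>For comparison, the same add-a-row search can be sketched in Python (an added illustration, not an alternative from the original answer); the counts of <span class="math-container">$12$</span> squares of order <span class="math-container">$3$</span> and <span class="math-container">$576$</span> of order <span class="math-container">$4$</span> match the known values.</p>

```python
from itertools import permutations

def latin_squares(n):
    """Enumerate all n x n Latin squares by adding one row at a time,
    keeping a candidate row only if it repeats no symbol in any column."""
    rows = list(permutations(range(1, n + 1)))
    out = []
    def extend(square):
        if len(square) == n:
            out.append(square)
            return
        for r in rows:
            if all(r[c] not in (row[c] for row in square) for c in range(n)):
                extend(square + [r])
    extend([])
    return out

n3, n4 = len(latin_squares(3)), len(latin_squares(4))
print(n3, n4)  # 12 576
```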
|
552,395 | <p>I have $f(x)$=$(2x,e^x)$
what does this notation mean? Notation: $Df(\frac{∂}{∂x})$</p>
<p>Certainly $Df(x)$=$(2,e^x)$
but how can I replace $x$ with $\frac{∂}{∂x}$?</p>
<p>In particular, how can I make sense of $e^{\frac{∂}{∂x}}$?</p>
| Han de Bruijn | 96,057 | <p>The keyword is <I>Operational Calculus</I> (let Google be your friend). <BR>
The following reference is an introductory exposition of the subject:<P> <A HREF="http://www.alternatievewiskunde.nl/jaar2004/uitboek.pdf" rel="nofollow">Re: Why exp(-st) in the Laplace Transform?</A><P>
About the second part of your question. Any differentiable function can be developed into a Taylor series:
$$
f(x+a) = f(x) + a.f'(x) + \frac{1}{2} a^2.f''(x) + \frac{1}{3!} a^3 f'''(x) + \cdots
$$
Write as follows:
$$
f(x + a) = \left[ 1 + a\frac{d}{dx} + \frac{1}{2} a^2\frac{d^2}{dx^2} + \frac{1}{3!} a^3\frac{d^3}{dx^3} +\cdots \right] f(x) $$
$$ = \left[ 1 + \left(a\frac{d}{dx}\right) + \frac{1}{2} \left(a\frac{d}{dx}\right)^2 + \frac{1}{3!} \left(a\frac{d}{dx}\right)^3 +\cdots \right] f(x) $$
The series expansion of $e^x$ is recognized in the expression between the square brackets. Therefore
we write symbolically:
$$
f(x+a) = e^{a\frac{d}{dx}} f(x)
$$</p>
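<p>For a polynomial the series terminates, so the identity can be checked exactly in a few lines of Python (an added illustration; the cubic, the shift <span class="math-container">$a$</span>, and the point <span class="math-container">$x$</span> are arbitrary choices):</p>

```python
import math

def deriv(coeffs):
    """Derivative of a polynomial given as [c0, c1, ...] (coeff of x^i)."""
    return [i * c for i, c in enumerate(coeffs)][1:] or [0.0]

def evalp(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

p = [5.0, -2.0, 0.0, 3.0]   # hypothetical cubic p(x) = 5 - 2x + 3x^3
a, x = 1.7, 0.4

# Apply 1 + a d/dx + (a d/dx)^2/2! + ...; for a cubic it stops at j = 3.
total, q = 0.0, p
for j in range(len(p)):
    total += a**j / math.factorial(j) * evalp(q, x)
    q = deriv(q)

diff = abs(total - evalp(p, x + a))
print(diff < 1e-12)  # True
```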
|
121,897 | <p>I want to check whether a user entered the function with all the specified variables. To do that, I replace the variables with some values and check whether the result is a number via a <code>Do</code> loop. I am thinking there might be a more elegant way of doing it, such as <a href="http://reference.wolfram.com/language/ref/ReplaceList.html" rel="nofollow"><code>ReplaceList</code></a>, but it is not working the way I want. </p>
<p>Let's assume </p>
<pre><code> u = z^2 Sin[π x] + z^4 Cos[π y] + I y^6 Cos[2 π y] + w;
(*and user give variables as *)
vas = {x, y, z, w};
(* I need to check if all the variables are in the function *)
Do[
u = u /. vas[[i]] -> 1.1;
(* 1.1 is within where the function is going to get \
evaluated *)
If[i == 4, numc9 = NumericQ[u]; Print[numc9];];
(* if numc9 False either there infinity or one of \
the variables in the list is not present in the function or function \
has extra variable(s) *)
Print[u];
, {i, 4}]
</code></pre>
<p>Is there a more elegant way of doing it?</p>
<p><strong>EDIT I</strong></p>
<p>After @Mr.Wizard's answer I realized that my question was not covering everything I wanted. @Mr.Wizard's answer works if I am only checking that all the variables are present in u. However, at the same time I want to check that there are no extra variables in u, because at the end I want to evaluate u using vars, and if u has an extra variable I won't get a value. </p>
<p>For example:</p>
<pre><code> u = z^2 Sin[π x] + z^4 Cos[π y] + I y^6 Cos[2 π y] + w + z^p;
vas = {z, x, y, p};
</code></pre>
<p>The Level and FreeQ commands give all the variables in the function u. After that you check whether all the variables in vas are present in this list of variables coming from Level or FreeQ, and in the example above they are. </p>
<p>In this situation @J.M.'s undocumented command does what I need. Otherwise I will need to stick with my Do loop.</p>
| Mr.Wizard | 121 | <h3>Update</h3>
<p>Following your updated question I recommend filtering <a href="http://reference.wolfram.com/language/ref/Level.html" rel="nofollow noreferrer"><code>Level</code></a> output with <a href="http://reference.wolfram.com/language/ref/Variables.html" rel="nofollow noreferrer"><code>Variables</code></a> as I proposed <a href="https://mathematica.stackexchange.com/a/30048/121">here</a>. Then <a href="http://reference.wolfram.com/language/ref/Sort.html" rel="nofollow noreferrer"><code>Sort</code></a> <code>vas</code> and check for equivalence.</p>
<pre><code>Sort[vas] === Variables @ Level[u, {-1}]
</code></pre>
<h3>Original proposals</h3>
<p>The first idea that came to mind:</p>
<pre><code>Level[u, {-1}] ⋂ vas === Sort[vas]
</code></pre>
<p>Equivalently:</p>
<pre><code>Complement[vas, Level[u, {-1}]] === {}
</code></pre>
<p>A method using <code>FreeQ</code></p>
<pre><code>Nor @@ Through @ Map[FreeQ][vas][u]
</code></pre>
<p>Or without version 10 Operator Forms:</p>
<pre><code>Nor @@ Through @ (FreeQ /@ vas)[u]
</code></pre>
|
121,897 | <p>I want to check whether a user entered the function with all the specified variables. To do that, I replace the variables with some values and check whether the result is a number via a <code>Do</code> loop. I am thinking there might be a more elegant way of doing it, such as <a href="http://reference.wolfram.com/language/ref/ReplaceList.html" rel="nofollow"><code>ReplaceList</code></a>, but it is not working the way I want. </p>
<p>Let's assume </p>
<pre><code> u = z^2 Sin[π x] + z^4 Cos[π y] + I y^6 Cos[2 π y] + w;
(*and user give variables as *)
vas = {x, y, z, w};
(* I need to check if all the variables are in the function *)
Do[
u = u /. vas[[i]] -> 1.1;
(* 1.1 is within where the function is going to get \
evaluated *)
If[i == 4, numc9 = NumericQ[u]; Print[numc9];];
(* if numc9 False either there infinity or one of \
the variables in the list is not present in the function or function \
has extra variable(s) *)
Print[u];
, {i, 4}]
</code></pre>
<p>Is there a more elegant way of doing it?</p>
<p><strong>EDIT I</strong></p>
<p>After @Mr.Wizard's answer I realized that my question was not covering everything I wanted. @Mr.Wizard's answer works if I am only checking that all the variables are present in u. However, at the same time I want to check that there are no extra variables in u, because at the end I want to evaluate u using vars, and if u has an extra variable I won't get a value. </p>
<p>For example:</p>
<pre><code> u = z^2 Sin[π x] + z^4 Cos[π y] + I y^6 Cos[2 π y] + w + z^p;
vas = {z, x, y, p};
</code></pre>
<p>The Level and FreeQ commands give all the variables in the function u. After that you check whether all the variables in vas are present in this list of variables coming from Level or FreeQ, and in the example above they are. </p>
<p>In this situation @J.M.'s undocumented command does what I need. Otherwise I will need to stick with my Do loop.</p>
| Michael E2 | 4,999 | <p>Since it is user input and guessing from the loop that the OP is calculating something numerical, I think <strong><em>it is better to check that the user has entered a numerical function</em></strong>, not just an expression with only legal variables. I asked about constant functions and other functions such as <code>2 + y^3</code> that can be considered functions of <code>{x, y}</code> but have no dependency on <code>x</code>. Such functions come up in plotting and integrating and so forth. For instance,</p>
<pre><code>Plot3D[2 + y^3, {x, 0, 2}, {y, -1, 1}]
NIntegrate[2 + y^3, {x, 0, 2}, {y, -1, 1}]
</code></pre>
<p><a href="https://mathematica.stackexchange.com/questions/121897/checking-if-all-the-variables-are-present-in-the-defined-function?noredirect=1#comment331101_121897">The OP's comment</a>, on the one hand, seems to affirm that such a function is acceptable, and on the other, it says that it is also acceptable that <a href="https://mathematica.stackexchange.com/a/121900/4999">J.M.'s answer</a> would reject such a function.</p>
<p>Well, I thought that, instead of trying for a third time to make myself clear in a comment, I would post an alternative that works like the OP's <code>Do</code> loop, which accepts constant functions and functions constant with respect to some of the variables. <code>NIntegrate</code> substitutes numeric values and checks whether a numeric value results. This is what the OP's <code>Do</code> loop does, but the loop is awkward, in that it prints out results instead of returning a value.</p>
<p>The OP mentioned that the user input will actually have the form of iterator-intervals like <code>NIntegrate</code>:</p>
<pre><code>input = {{x, 1, 3}, {y, 2, 4}, {z, 3, 5}, {w, 4, 6}};
</code></pre>
<p>from which the value for <code>vas</code> will be constructed, perhaps by <code>vas = input[[All, 1]]</code>. One can construct a substitution directly from <code>input</code> to plug in values that are supposed to be valid, as indicated by the intervals. These can be plugged into <code>u</code> to check that it gets a numerical value.</p>
<p>The idea is to pick a random number from each interval to substitute. It is still a numerical check, and there is always the chance of hitting an edge case where it hits a singularity or causes something to cancel out (consider <code>a (x - 3) /. x -> 3.</code> which makes the variable <code>a</code> disappear). But I think one can see that it is highly improbable.</p>
<pre><code>ClearAll[sub2];
sub2[{v_, v1_?NumericQ, v2_?NumericQ}] := v -> RandomReal[{v1, v2}];
SeedRandom[0]; (* optional, for reproducibility *)
sub2 /@ input
(*
{x -> 2.304935615948057`,
y -> 3.266140712502736`,
z -> 4.36562617373332`,
w -> 5.132703662186646`}
*)
</code></pre>
<p>Testing <code>u</code> would be carried out as above:</p>
<pre><code>NumericQ[u /. sub2 /@ input]
(* True *)
</code></pre>
<p>Example with not all variables specified (<code>Most@input</code> drops the last variable-interval):</p>
<pre><code>NumericQ[u /. sub2 /@ Most@input]
(* False *)
</code></pre>
|
3,299,492 | <p>Is there any nice characterization of the class of polynomials that can be written in the following form for some <span class="math-container">$c_i , d_i \in \mathbb{N}$</span>? Alternatively, where can I read more about these? Do they have a name?
<span class="math-container">$$c_1 + \left( c_2 + \left( \dots (c_k + x^{d_k}) \dots \right)^{d_2} \right)^{d_1}$$</span></p>
<p>For instance, it is not possible to write <span class="math-container">$1 + x + x^2$</span> in this way, but it is possible to write <span class="math-container">$1 + 2x + x^2$</span> or <span class="math-container">$0 + x^3$</span>.</p>
<p><em>For some context:</em> two actions on the set of polynomials <span class="math-container">$A \times \mathbb{N}[x] \to \mathbb{N}[x]$</span> and <span class="math-container">$B \times \mathbb{N}[x] \to \mathbb{N}[x]$</span> can be combined into a single one <span class="math-container">$\left<A,B\right> \times \mathbb{N}[x] \to \mathbb{N}[x]$</span> that takes a word of elements of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> and applies the corresponding actions in order. In the case of multiplication and exponentiation, we can see that the class of polynomials
<span class="math-container">$$c_1 \left( c_2 \left( \dots (c_k x^{d_k}) \dots \right)^{d_2} \right)^{d_1}$$</span>
can be just described as the polynomials of the form <span class="math-container">$cx^d$</span>. I do not expect such a simple characterization in the case of sums and exponentials, but I would like to know if this class of polynomials has been described or studied somewhere.</p>
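<p>One way to experiment with small cases is a brute-force search over bounded <span class="math-container">$c_i, d_i$</span>. Below is an illustrative Python sketch (the bounds <code>cmax</code>, <code>dmax</code> and the layer count are arbitrary choices, so a miss only shows non-expressibility within those bounds); each nesting layer is the map <span class="math-container">$p \mapsto c + p^d$</span> applied starting from <span class="math-container">$x$</span>:</p>

```python
def pmul(a, b):
    """Multiply two polynomials given as coefficient sequences (low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def ppow(a, d):
    """Raise a coefficient-sequence polynomial to the d-th power."""
    r = [1]
    for _ in range(d):
        r = pmul(r, a)
    return r

def step(p, c, d):
    """One nesting layer: p(x) -> c + p(x)^d."""
    q = ppow(p, d)
    q[0] += c
    return tuple(q)

def reachable(layers, cmax, dmax):
    """All polynomials expressible with `layers` nestings, 0 <= c <= cmax, 1 <= d <= dmax."""
    frontier = {(0, 1)}  # the polynomial x, as (constant term, x-coefficient)
    seen = set()
    for _ in range(layers):
        frontier = {step(p, c, d)
                    for p in frontier
                    for c in range(cmax + 1)
                    for d in range(1, dmax + 1)}
        seen |= frontier
    return seen

polys = reachable(2, 3, 3)
print((1, 2, 1) in polys)  # True:  1 + 2x + x^2 = 0 + (1 + x^1)^2
print((1, 1, 1) in polys)  # False: 1 + x + x^2 is not found
</code_interrupt>```

<p>For quadratics the search also suggests the reason: any reachable quadratic has the form <span class="math-container">$a + (b + x)^2$</span>, so its <span class="math-container">$x$</span>-coefficient <span class="math-container">$2b$</span> is always even, ruling out <span class="math-container">$1 + x + x^2$</span>.</p>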
| Chinnapparaj R | 378,881 | <p>Radius of convergence <span class="math-container">$r$</span> means <span class="math-container">$$r=\sup_{x}\left\{|x|: \sum n^n x^n< \infty\right\}$$</span> That is, <span class="math-container">$r$</span> is the supremum of <span class="math-container">$|x|$</span> over all numbers <span class="math-container">$x$</span> for which the series converges. Here <span class="math-container">$r=0$</span> means that the series converges only at <span class="math-container">$x=0$</span>.</p>
|
3,426,907 | <p>I'm not a mathematician, but a programmer who loves solving math puzzles, so please forgive me if I don't use the correct terms.</p>
<p>Imagine there are 100 lotteries with 100 tickets each. The lotteries have no connection to each other, and each lottery has exactly one prize. I buy 1 ticket from each lottery, so in the end I have 100 tickets from 100 lotteries.</p>
<p>I know how to calculate the probability P(x) of getting x prizes, with x in the range 0..100. Adding up all the probabilities gives 1.0.</p>
<p>My question is: what is the expected number of prizes I'll win? How do I calculate this correctly?</p>
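<p>A quick way to see the answer (a sketch; it assumes, as stated, that the lotteries are independent and each ticket wins with probability 1/100): the number of prizes is Binomial(100, 1/100), and by linearity of expectation the expected count is simply 100 · 1/100 = 1. The two computations agree, as a Python check shows:</p>

```python
from math import comb

n, p = 100, 1 / 100  # 100 lotteries, each ticket wins with probability 1/100
# sum over the full binomial distribution: E[X] = sum_k k * P(X = k)
expected = sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
print(round(expected, 10))  # 1.0 -- matches n * p from linearity of expectation
```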
| Sumanta | 591,889 | <p>Option <span class="math-container">$2.$</span> is not correct. Consider the set <span class="math-container">$A=\big\{(x,\sin\big(\frac{\pi}{x}\big):0<x\leq 1\big\}$</span> in <span class="math-container">$X=\Bbb R^2$</span>. Then <span class="math-container">$\overline A=A\cup \big(\{0\}\times[-1,1]\big)$</span> which is not path connected. </p>
<p>To prove this, note that <span class="math-container">$A$</span> is path connected, as it is the graph of a continuous function on the path-connected set <span class="math-container">$(0,1]$</span>.</p>
<p>Next, suppose for contradiction that <span class="math-container">$\overline A$</span> is path connected. Then there is a path <span class="math-container">$\gamma :[0,1]\to \overline A$</span> with <span class="math-container">$\gamma(0)=(0,0)$</span> and <span class="math-container">$\gamma(1)=(1,0)$</span>. So <span class="math-container">$\pi_1\gamma,\pi_2\gamma$</span> are continuous, where <span class="math-container">$\pi_1,\pi_2:\Bbb R^2\to \Bbb R$</span> are the projections onto the first and second coordinates. But <span class="math-container">$\pi_1\gamma$</span> takes all values of the form <span class="math-container">$\frac{1}{n},n\in\Bbb N$</span> by the <strong>intermediate value property</strong> of <span class="math-container">$\pi_1\gamma$</span>, since <span class="math-container">$\pi_1\gamma(0)=0$</span> and <span class="math-container">$\pi_1\gamma(1)=1$</span>. So <span class="math-container">$\pi_2\gamma$</span> assumes each of the values <span class="math-container">$\pm 1$</span> in every neighbourhood of <span class="math-container">$0\in [0,1]$</span>. Hence there is <strong>no</strong> neighbourhood <span class="math-container">$[0,\delta)$</span> of <span class="math-container">$0$</span> in <span class="math-container">$[0,1]$</span> which maps into <span class="math-container">$\big(-\frac{1}{2},\frac{1}{2}\big)$</span> under <span class="math-container">$\pi_2\gamma$</span>; that is, <span class="math-container">$\pi_2\gamma$</span> is discontinuous at <span class="math-container">$0$</span>, a contradiction.</p>
|
66,314 | <p>This is very similar to my earlier question <a href="https://mathematica.stackexchange.com/questions/60069/one-to-many-lists-merge">One to Many Lists Merge</a> but somehow different. I have two lists, first column in each list represents its key. I want to merge these two lists. The only problem is that these two lists have some common keys but not all keys are the same and that they are of different lengths. For example, </p>
<pre><code>list1 = {{1, a, aa}, {2, b, bb}, {3, c, cc}, {4, d, dd}, {6, f,
ff}, {7, g, gg}, {13, j, jj}};
list2 = {{1, 10, 100, 1000}, {2, 20, 200, 2000}, {5, 50, 500,
5000}, {6, 60, 600, 6000}, {7, 70, 700, 7000}, {9, 90, 900,
9000}};
</code></pre>
<p>I am trying to merge these lists using their keys. If the key doesn't match, one list should have missing values such as -99.99. The result I am looking for the above two list would be</p>
<pre><code>answerlist = {{1, a, aa, 10, 100, 1000}, {2, b, bb, 20, 200,
2000}, {3, c, cc, -99.99, -99.99, -99.99}, {4, d,
dd, -99.99, -99.99, -99.99}, {5, -99.99, -99.99, 50, 500,
5000}, {6, f, ff, 60, 600, 6000}, {7, g, gg, 70, 700,
7000}, {9, -99.99, -99.99, 90, 900, 9000}, {13, j,
jj, -99.99, -99.99, -99.99}};
</code></pre>
<p>Thank you for your time and support in advance. </p>
| Caitlin J. Ramsey | 22,323 | <p>This should get you close. (Note: if you are not familiar with the new Association functionality in Mathematica 10.1, I start here by building an association from each list, and then work with them as associations until the final step.)</p>
<pre><code>list1 = {{1, a, aa}, {2, b, bb}, {3, c, cc}, {4, d, dd}, {6, f,
ff}, {7, g, gg}, {13, j, jj}};
list2 = {{1, 10, 100, 1000}, {2, 20, 200, 2000}, {5, 50, 500,
5000}, {6, 60, 600, 6000}, {7, 70, 700, 7000}, {9, 90, 900,
9000}};
</code></pre>
<p>Take the first element of each sub - list as the keys:</p>
<pre><code>in := list1Keys = list1[[1 ;; -1, 1]]
in := list2Keys = list2[[1 ;; -1, 1]]
out := {1, 2, 3, 4, 6, 7, 13}
out := {1, 2, 5, 6, 7, 9}
</code></pre>
<p>Build an association between the keys and the remaining elements in each sub - list :</p>
<pre><code>in := assoc1 = AssociationThread[list1Keys, list1[[1 ;; -1, 1 ;; -1]]]
out := <|1 -> {1, a, aa}, 2 -> {2, b, bb}, 3 -> {3, c, cc}, ...|>
in := assoc2 = AssociationThread[list2Keys, list2[[1 ;; -1, 1 ;; -1]]]
out := <|1 -> {1, 10, 100, 1000}, 2 -> {2, 20, 200, 2000}, 5 -> {5, 50, 500, 5000}, ...|>
</code></pre>
<p>KeyUnion makes a list of associations having the same keys:</p>
<pre><code>in := assocList = KeyUnion[{assoc1, assoc2}]
out := {<|1 -> {1, a, aa}, 2 -> {2, b, bb}, ..., 5 -> Missing["KeyAbsent", 5], ...|>, <|1 -> {1, 10, 100, 1000}, 2 -> {2, 20, 200, 2000}, 3 -> Missing["KeyAbsent", 3],...|>}
</code></pre>
<p>To collect the two value lists for each key into a single entry, you might do something like this: </p>
<pre><code>in := Merge[{assocList[[1]], assocList[[2]]}, Identity]
in := merged = Values[%] /. _Missing -> {-9999}
</code></pre>
<p>To get...</p>
<pre><code>out := {{{1, a, aa}, {1, 10, 100, 1000}}, ..., {{3, c, cc,},{-9999}}, ...,{{-9999}, {5, 50, 500, 5000}}, ...}
</code></pre>
<p>And then Union to combine the two lists, before "padding" as appropriate if you require each sublist to have the same number of items. </p>
<pre><code>in := Union[merged[[#, 1]], merged[[#, 2]]] & /@ {1, 2, 3, 4, 5, 6, 7, 8, 9}
out := {{1, 10, 100, 1000, a, aa}, ..., {-9999, 3, c, cc},{-9999, 4, d, dd}, ..., {-9999, 13, j, jj}, {-9999, 5, 50, 500, 5000}, {-9999,9, 90, 900, 9000}}
</code></pre>
<p>I should note, in the Union step, I'm treating these as a "set." I realize this may not work if you needed the values to be in a particular order within the list. That can be achieved in a few more lines of code -- if you need assistance with that, please comment and let me know. </p>
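<p>For readers comparing approaches: the same key-union logic is compact to express with plain dictionaries. This is a hedged Python sketch of the question's task (using the question's fill value <code>-99.99</code>), shown only for comparison, not as a replacement for the Mathematica solution above:</p>

```python
fill = -99.99
list1 = [[1, 'a', 'aa'], [2, 'b', 'bb'], [3, 'c', 'cc'], [4, 'd', 'dd'],
         [6, 'f', 'ff'], [7, 'g', 'gg'], [13, 'j', 'jj']]
list2 = [[1, 10, 100, 1000], [2, 20, 200, 2000], [5, 50, 500, 5000],
         [6, 60, 600, 6000], [7, 70, 700, 7000], [9, 90, 900, 9000]]

m1 = {row[0]: row[1:] for row in list1}   # key -> 2 value columns
m2 = {row[0]: row[1:] for row in list2}   # key -> 3 value columns

# union of keys, filling missing entries with the sentinel value
merged = [[k, *m1.get(k, [fill] * 2), *m2.get(k, [fill] * 3)]
          for k in sorted(set(m1) | set(m2))]
print(merged[2])  # [3, 'c', 'cc', -99.99, -99.99, -99.99]
```

<p>Each row carries the key, then the two columns from <code>list1</code> and the three from <code>list2</code>, with the fill value wherever a key is absent.</p>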
|
3,259,658 | <p>Stokes' Theorem states that, "For a smooth oriented region <span class="math-container">$V \subseteq \mathbb{R}^3$</span> and a smooth vector field defined on <span class="math-container">$V \cup \partial V$</span>, where <span class="math-container">$\partial V$</span> is the boundary curve for <span class="math-container">$V$</span>, <span class="math-container">$\iint_{V} \operatorname{curl} \vec F \cdot d\vec S = \int_{\partial V} \vec F \cdot d\vec r$</span>." My question is: do we always take <span class="math-container">$\partial V$</span> to be the curve when <span class="math-container">$V$</span> is projected onto the <span class="math-container">$xy$</span> plane? (So if <span class="math-container">$V$</span> is a unit sphere, say, then <span class="math-container">$\partial V$</span> is the unit circle.) If so, why do we do so?</p>
| Adam Latosiński | 653,715 | <p>No. <span class="math-container">$\partial V$</span> is a curve located somewhere in <span class="math-container">$\mathbb R^3$</span>, it's not projected on any plane.</p>
<p>In the case when <span class="math-container">$V$</span> is a unit sphere, it doesn't have a boundary at all, and <span class="math-container">$\partial V = \varnothing$</span>.</p>
<p>If you want <span class="math-container">$\partial V$</span> to be a unit circle, you'd need <span class="math-container">$V$</span> to be, for example, a half-sphere, or, a unit disk.</p>
|
3,259,658 | <p>Stokes' Theorem states that, "For a smooth oriented region <span class="math-container">$V \subseteq \mathbb{R}^3$</span> and a smooth vector field defined on <span class="math-container">$V \cup \partial V$</span>, where <span class="math-container">$\partial V$</span> is the boundary curve for <span class="math-container">$V$</span>, <span class="math-container">$\iint_{V} \operatorname{curl} \vec F \cdot d\vec S = \int_{\partial V} \vec F \cdot d\vec r$</span>." My question is: do we always take <span class="math-container">$\partial V$</span> to be the curve when <span class="math-container">$V$</span> is projected onto the <span class="math-container">$xy$</span> plane? (So if <span class="math-container">$V$</span> is a unit sphere, say, then <span class="math-container">$\partial V$</span> is the unit circle.) If so, why do we do so?</p>
| alan23273850 | 397,319 | <p>See the <a href="https://mathinsight.org/stokes_theorem_idea" rel="nofollow noreferrer">Math Insight</a> website. On the website there is a toy applet you can play with.</p>
<p>The curve need not lie on a plane, and it is NOT a projection on a plane. It can only be the boundary of an "open" surface, so you should split your "ellipsoid" into two parts and apply the theorem to each part separately.</p>
<p>In a calculus course, the boundary (curve) is usually on a plane. It's only because of simplicity of calculation. It's not related to the theorem description.</p>
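<p>As a concrete worked check (an illustrative example added here; the field and surface are my own choice): take the upper unit half-sphere <span class="math-container">$S$</span>, whose boundary is the unit circle in the <span class="math-container">$xy$</span>-plane, and <span class="math-container">$\vec F = (-y, x, 0)$</span>. Both sides of Stokes' theorem give <span class="math-container">$2\pi$</span>:</p>

```latex
\nabla \times \vec F = (0, 0, 2), \qquad
\iint_S (\nabla \times \vec F) \cdot d\vec S
  = \iint_{x^2 + y^2 \le 1} 2 \, dA = 2\pi,
\qquad
\oint_{\partial S} \vec F \cdot d\vec r
  = \int_0^{2\pi} \bigl(\sin^2 t + \cos^2 t\bigr) \, dt = 2\pi,
```

<p>with <span class="math-container">$\vec r(t) = (\cos t, \sin t, 0)$</span> parametrizing the boundary circle; note the boundary here happens to lie in a plane, but nothing forces that in general.</p>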
|
1,446,168 | <p>Let $x$ and $y$ be two random variables. </p>
<p>Suppose $m$ is a random variable that is independent of $x$ and has the following distribution:</p>
<p>$$\text{Pr}(m = 1|x) = 0.5,$$ $$\text{Pr}(m = -1|x) = 0.5.$$</p>
<p>Let $y$ be given by: $$y= \begin{cases}
0 & \text{if } x\geq 0 \\
m & \text{if } x<0
\end{cases}$$</p>
<p>To show that $x$ and $y$ are not independent, can I use a more rigorous method other than just observing that it's the way $y$ is defined that clearly makes it dependent on $x?$ </p>
<p>I wanted to use these kinds formula for no independence: $f(x,y) \neq f(x)f(y)$ or $f(y|x) \neq f(y)$. </p>
<p>Given the information, is it possible to even find $f(x)?$ I think it's possible to find $f(y|x)$ and $f(y)$ given the information, but not sure how they would be different. </p>
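<p>Purely as an illustration (the problem doesn't fix a law for $x$, so $q = \text{Pr}(x<0)$ below is an assumption), one can tabulate $f(y)$ and $f(y|x<0)$ and see that they differ, which is exactly the $f(y|x) \neq f(y)$ criterion. A Python sketch:</p>

```python
q = 0.5  # ASSUMPTION: Pr(x < 0) = q; any 0 < q < 1 makes the same point

# marginal distribution of y: y = 0 iff x >= 0, else y = m = +/-1 fairly
p_y = {0: 1 - q, 1: q / 2, -1: q / 2}
# conditional distribution of y given x < 0
p_y_given_x_neg = {0: 0.0, 1: 0.5, -1: 0.5}

print(p_y[0], p_y_given_x_neg[0])  # 0.5 0.0 -- unequal, so y depends on x
```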
| Anthony Peter | 58,540 | <p>Again, this is false. Take $$ C = \prod_{i=1}^{n} [0,1]_i \subset \mathbb{R}^n.$$ Then, $C$ is a perfect set in $\mathbb{R}^n$, so every point of $C$ is a limit point. Ergo, there does not exist a point such that it can be surrounded by a neighborhood not containing another point of $C$. In general, any compact connected subset of $\mathbb{R}^n$ will not have an isolated point. There are many more examples, however.</p>
|
1,315,450 | <blockquote>
<p>Prove that for all $n\in\mathbb{N}$ and $x>0$,
$$2^{-1+\frac{1}{n}}\left(x+1\right)\leq\left(x^{n}+1\right)^{\frac{1}{n}}$$</p>
</blockquote>
<p>The last class was about Taylor polynomials of functions, so I thought this might give me a solution, but looking at the derivatives, the only thing I could think would be useful is looking at the $(n-1)$th degree polynomial of </p>
<p>$$\frac{\left(x+1\right)^n}{2^{n-1}}-\left(x^{n}+1\right)$$</p>
<p>(Which I got by raising to the $n$th power)</p>
<p>Though this gave me the ugly expression (for some $c$ in $[0,x]$):</p>
<p>$$f\left(x\right)=\frac{1}{2^{n-1}}+\frac{n}{2^{n-1}}x+\frac{n\left(n-1\right)}{2^{n-1}2!}x^{2}+\dots+\frac{n\left(n-1\right)\cdots\left(3\right)}{2^{n-1}\left(n-2\right)!}x^{n-2}+\left(\frac{n!\left(c+1\right)}{2^{n-1}}-n!c\right)\frac{1}{\left(n-1\right)!}x^{n-1}$$</p>
<p>Which I have no idea how to "make negative". So I tried falling back to induction, but after the pretty obvious base case of $2$ I had no idea how to continue.</p>
<p>Any tips/hints on how to prove it?</p>
| Bernard | 202,857 | <p>This is the <em>power mean inequality</em>: the p-power mean of $a$ and $b$ is
$$m_p(a,b)=\biggl(\frac{a^p+b^p}2\biggr)^{\!\tfrac1p}$$
and if $p\le q$, $m_p(a,b)\le m_q(a,b)$. Of course, $m_1$ is the usual arithmetic mean, and $m_{-1}$ is the harmonic mean.</p>
<p>Here you can rewrite the inequality as:
$$\frac{x+1}2\le\smash{\biggl(\frac{x^n+1}2\biggr)^{\!\tfrac1n}},\enspace \text{i. e.}\quad m_1(x,1)\le m_n(x,1).$$</p>
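<p>A quick numerical sanity check of this on a small grid (a sketch; the helper name <code>power_mean</code> is mine):</p>

```python
def power_mean(p, a, b):
    """p-power mean of a and b."""
    return ((a ** p + b ** p) / 2) ** (1 / p)

# check m_1(x, 1) <= m_n(x, 1), i.e. the inequality in question
for x in [0.1, 0.5, 1.0, 2.0, 10.0]:
    for n in [2, 3, 5, 10]:
        assert power_mean(1, x, 1) <= power_mean(n, x, 1) + 1e-12
print("inequality holds on the sampled grid")
```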
|
202,379 | <p>Suppose for some constants $\alpha,\beta,\gamma$ that we're given the following ODE: $$\alpha y''+\beta xy'+\gamma y=0.$$ Now, I know how to find the general solution for $y(x)$ if any of $\alpha,\beta,\gamma$ should turn out to be $0$, but I've just ended up with the ODE $$2y''+xy'+y=0.$$ Can anybody give me the first (few) step(s) of a general procedure one can use for such ODEs?</p>
| Mhenni Benghorbal | 35,472 | <p>Assume a solution of the form </p>
<p>$$ y(x)=\sum_{k=0}^{\infty} a_k x^{k+\alpha} \,,$$</p>
<p>and plug it into the differential equation, then try to find a recurrence relation for the $a_k$. Of course, you need to determine $\alpha$ as a first step. The well-known power series method for second-order ODEs is the <a href="http://en.wikipedia.org/wiki/Frobenius_method" rel="nofollow">Frobenius method</a>.</p>
|
269,398 | <p>I have Python code for data analysis that uses the <a href="https://matplotlib.org/3.5.0/tutorials/colors/colormaps.html" rel="nofollow noreferrer">"seismic" color scale</a> for 2D density plots. However, I also need to do some other plots with Mathematica (because of packages etc.), for which I would like the same color scale. Unfortunately, the closest-resembling color scale in Mathematica (the temperature map) is still quite different from the one in Python.
Do you have any suggestions on how to "export/import" a color scale between Python and Mathematica? The same approach could then be applied to any other color map.</p>
| Syed | 81,355 | <p>After some trial and error:</p>
<pre><code>seismic[x_] :=
Blend[{Black, Darker@Blue, Blue, White, Red, Darker@Red,
Darker@Darker@Red}, x]
LinearGradientImage[Function[x, seismic[x]], {300, 30}]
</code></pre>
<p>As an example:</p>
<pre><code>Plot3D[2 Sin[x + Cos[y]], {x, -5, 5}, {y, -5, 5}
, ColorFunction -> seismic]
</code></pre>
<p><a href="https://i.stack.imgur.com/oxIUh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxIUh.png" alt="enter image description here" /></a></p>
|