| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,570,193 | <p>Let $X$ be a random variable and $\;f_X(x)=c6^{-x^2}\;\forall x\in\Bbb R$ its pdf. What I'm trying to compute is $\sqrt {Var(X)}$. I've got that $c=\sqrt{\frac{\ln(6)}{\pi}}$ for $f_X(x)$ to be a pdf and also that $\Bbb E(X)=0$. So my problem reduces to computing $\Bbb E(X^2)$ where</p>
<p>$$\Bbb E(X^2)=\sqrt{\frac{\ln(6)}{\pi}}\int_{-\infty}^\infty x^2e^{-x^2\ln(6)}dx$$</p>
<p>but I got stuck since I can't find a change of variables that leaves a constant multiplied by the integral of a $N(0,1)$ density.</p>
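Before hunting for the change of variables, the target value can be sanity-checked numerically. Here is a short Python sketch (my addition, not part of the question): since the pdf is Gaussian with $2\sigma^2 = 1/\ln 6$, one expects $\Bbb E(X^2) = \frac{1}{2\ln 6}$, and a midpoint Riemann sum of $x^2 f_X(x)$ should reproduce it.

```python
import math

# Numeric check (my addition): integrate x^2 * f_X(x) with a midpoint
# Riemann sum and compare with the closed form E[X^2] = 1/(2 ln 6).
ln6 = math.log(6)
c = math.sqrt(ln6 / math.pi)  # normalizing constant from the question

def second_moment(a=-10.0, b=10.0, n=200_000):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        total += x * x * c * math.exp(-ln6 * x * x) * h
    return total

print(second_moment(), 1 / (2 * ln6))   # both ~ 0.2791
print(math.sqrt(1 / (2 * ln6)))         # the requested sd, ~ 0.528
```

The integration interval $[-10,10]$ is wide enough here because the density decays like $6^{-x^2}$.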
| Najib Idrissi | 10,014 | <p>Yes, $1 = u(1_{\mathbb{K}})$. You have a short exact sequence
$$0 \to \ker(\epsilon) \to C \xrightarrow{\epsilon} \mathbb{K} \to 0$$
and it is split by the coaugmentation $u : \mathbb{K} \to C$, so by the <a href="https://en.wikipedia.org/wiki/Splitting_lemma" rel="nofollow">splitting lemma</a>, $C \cong \ker(\epsilon) \oplus \mathbb{K}$, and $\mathbb{K}$ is identified as a submodule of $C$ as the image of $u$.</p>
<p>You simply need to compute $(\epsilon \otimes \operatorname{id})(\bar{\Delta}(x))$ and $(\operatorname{id} \otimes \epsilon)(\bar{\Delta}(x))$ to check that $\bar{\Delta}(x)$ is in $\bar{C} \otimes \bar{C}$. But for example
$$(\epsilon \otimes \operatorname{id})(\bar{\Delta}(x)) = (\epsilon \otimes \operatorname{id})(\Delta(x)) - \epsilon(x) \otimes 1 - \epsilon(1) \otimes x;$$
since $x \in \bar{C} = \ker(\epsilon)$, $\epsilon(x) \otimes 1 = 0$; since $\epsilon \circ u = \operatorname{id}$, $\epsilon(1) = 1_{\mathbb{K}}$; and by the counit relations, $(\epsilon \otimes \operatorname{id})(\Delta(x)) = x$. So you're left with $x - 0 - x = 0$, and so $\bar{\Delta}(x) \in \bar{C} \otimes C$. Similarly $(\operatorname{id} \otimes \epsilon)(\bar{\Delta}(x)) = 0$ so $\bar{\Delta}(x) \in \bar{C} \otimes \bar{C}$.</p>
|
9,930 | <p>One of the standard parts of homological algebra is "diagram chasing", or equivalent arguments with universal properties in abelian categories. Is there a rigorous theory of diagram chasing, and ideally also an algorithm?</p>
<p>To be precise about what I mean, a diagram is a directed graph $D$ whose vertices are labeled by objects in an abelian category, and whose arrows are labeled by morphisms. The diagram might have various triangles, and we can require that certain triangles commute or anticommute. We can require that certain arrows vanish, which can be used to ask that certain compositions vanish. We can require that certain compositions are exact. Maybe some of the arrows are sums or direct sums of other arrows, and maybe some of the vertices are projective or injective objects. Then a diagram "lemma" is a construction of another diagram $D'$, with some new objects and arrows constructed from those of $D$, or at least some new restrictions.</p>
<p>As described so far, the diagram $D$ can express a functor from any category $\mathcal{C}$ to the abelian category $\mathcal{A}$. This looks too general for a reasonable algorithm. So let's take the case that $D$ is acyclic and finite. This is still too general to yield a complete classification of diagram structures, since acyclic diagrams include all acyclic quivers, and some of these have a "wild" representation theory. (For example, three arrows from $A$ to $B$ are a wild quiver. The representations of this quiver are not tractable, even working over a field.) In this case, I'm not asking for a full classification, only in a restricted algebraic theory that captures what is taught as diagram chasing.</p>
<p>Maybe the properties of a diagram that I listed in the second paragraph already yield a wild theory. It's fine to ditch some of them as necessary to have a tractable answer. Or to restrict to the category $\textbf{Vect}(k)$ if necessary, although I am interested in greater generality than that.</p>
<p>To make an analogy, there is a theory of Lie bracket words. There is an algorithm related to <a href="http://en.wikipedia.org/wiki/Lyndon_word" rel="noreferrer">Lyndon words</a> that tells you when two sums of Lie bracket words are formally equal via the Jacobi identity. This is a satisfactory answer, even though it is not a classification of actual Lie algebras. In the case of commutative diagrams, I don't know a reasonable set of axioms — maybe they are related to triangulated categories — much less an algorithm to characterize their formal implications.</p>
<p>(This question was inspired by a mathoverflow question about <a href="https://mathoverflow.net/questions/6749/">George Bergman's salamander lemma</a>.)</p>
<hr>
<p>David's reference is interesting and it could be a part of what I had in mind with my question, but it is not the main part. My thinking is that diagram chasing is boring, and that ideally there would be an algorithm to obtain all finite diagram chasing arguments, at least in the acyclic case. Here is a simplification of the question that is entirely rigorous.</p>
<p>Suppose that the diagram $D$ is finite and acyclic and that all pairs of paths commute, so that it is equivalent to a functor from a finite <a href="http://en.wikipedia.org/wiki/Partially_ordered_set#In_category_theory" rel="noreferrer">poset category</a> $\mathcal{P}$ to the abelian category $\mathcal{A}$. Suppose that the only other decorations of $D$ are that: (1) certain arrows are the zero morphism, (2) certain vertices are the zero object, and (3) certain composable pairs of arrows are exact. (Actually condition 2 can be forced by conditions 1 and 3.) Then is there an algorithm to determine all pairs of arrows that are forced to be exact? Can it be done in polynomial time?</p>
<p>This rigorous simplification does not consider many of the possible features of lemmas in homological algebra. Nothing is said about projective or injective objects, taking kernels and cokernels, taking direct sums of objects and morphisms (or more generally finite limits and colimits), or making connecting morphisms. For example, it does not include the <a href="http://en.wikipedia.org/wiki/Snake_lemma" rel="noreferrer">snake lemma</a>. It also does not include diagrams in which only some pairs of paths commute. But it is enough to express the monomorphism and epimorphism conditions, so it includes for instance the <a href="http://en.wikipedia.org/wiki/Five_lemma" rel="noreferrer">five lemma</a>.</p>
| arsmath | 3,711 | <p>This is an old question, but I think I have a reasonably complete solution to this problem. It gives you a criterion for when you can answer a question by a diagram chase, and a procedure for actually doing it. I will sketch the answer here, and if anyone is still interested in this question, I will make a more formal write-up.</p>
<p>Diagram chase arguments make sense for pointed sets, which I will think of as vector spaces over <span class="math-container">$F_1$</span> (the field of one element). Maps between pointed sets are required to have well-defined kernels and cokernels, i.e. they are partial bijections for nonzero elements. In this setting, diagram chases are even simpler because when you chase an element backward or forward, there's only one possibility.</p>
<p>So what good does this do us? It turns out that when the diagram is simple enough,
theorems over <span class="math-container">$F_1$</span> can be lifted to arbitrary abelian categories. I will call these diagrams “chaseable”. You can associate with each object in the diagram the lattice of subobjects containing the zero object, the whole object, and closed under taking images and inverse images. If that lattice is finite and distributive for each object, then the diagram is chaseable. The main reason it would fail to be finite is if the diagram has loops. It is easy to find cases where it fails to be distributive (the <span class="math-container">$D_3$</span> quiver is a ready example), but double complexes give you lattices that are both finite and distributive. (To me this “explains” the prominence of double complexes: they are much simpler than arbitrary diagrams.)</p>
<p>Once you associate a finite distributive lattice to each object, you can use the Birkhoff representation theorem to understand the lattice. You can associate to each join-irreducible element a one-element generator of a <span class="math-container">$F_1$</span>-vector space. If you follow that element around, you get an indecomposable representation of the diagram over <span class="math-container">$F_1$</span>. If a condition such as being exact holds for indecomposable <span class="math-container">$F_1$</span> representations, then it holds for the diagram.</p>
<p>This is sufficient to prove results such as the five or nine lemmas. Results that imply the existence of maps, such as the snake lemma or the connecting homomorphism for long exact sequences, require slightly more work (the join-irreducible elements have an order, and you need to keep track of that order), but it is doable.</p>
<p>You can also use the same technique to reprove the characterization of finite-dimensional indecomposable modules over string algebras -- the indecomposable representations over a field correspond to the indecomposable representations over <span class="math-container">$F_1$</span>.</p>
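To make the pointed-set picture concrete, here is a toy Python encoding (my own illustration, not from the answer): an $F_1$ "linear map" is a partial bijection on the nonzero elements, so chasing an element forward or backward is just a dictionary lookup, with missing keys meaning "sent to the basepoint".

```python
# Toy model (my addition): a map of pointed sets is stored as a dict on the
# nonzero elements it does NOT kill; elements absent from the dict are sent
# to the basepoint 0.

def compose(g, f):
    # Chase each element forward through f, then g; drop anything killed.
    return {x: g[f[x]] for x in f if f[x] in g}

def kernel(f, domain):
    # Nonzero elements of the domain that f sends to the basepoint.
    return {x for x in domain if x not in f}

f = {"a": "p", "b": "q"}            # "c" is killed by f
g = {"p": "u"}                      # "q" is killed by g
print(compose(g, f))                # {'a': 'u'}
print(kernel(f, {"a", "b", "c"}))   # {'c'}
```

Because each element has at most one image and at most one preimage, "the chase" in this model is deterministic, which is the simplification the answer exploits.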
|
3,547,995 | <p>I've been trying to figure out the way to solve this for a while now, and I'm hoping someone could point me in the right direction to find the answer (or show me how to solve this).</p>
<p>The problem I'm having is with this expression: <span class="math-container">$(2i-2)^{38}$</span>, and I need to evaluate it using de Moivre's formula ($i$ is the imaginary unit, in case that wasn't clear). Now obviously I know it would be unwise to expand right away, because it would turn into an extremely long calculation.</p>
<p>The farthest I got with it is <span class="math-container">$2(i-1)^{38}$</span> and <span class="math-container">$(2\cdot(-1)^{1/2}-2)^{38}$</span>.</p>
<p>I'm hoping I'm headed in the right direction but I'm stuck. Could someone please show me the way to solve this?</p>
| MPW | 113,214 | <p><strong>Hint:</strong> Start by writing <span class="math-container">$-2+2i$</span> in the form <span class="math-container">$re^{\theta i}$</span>.</p>
<p>Then your answer will be <span class="math-container">$r^{38}e^{38\theta i}$</span>.</p>
<p>For <span class="math-container">$z=x+iy$</span>, we have <span class="math-container">$r^2= x^2+y^2$</span> and <span class="math-container">$\tan \theta = y/x$</span>.</p>
|
3,547,995 | <p>I've been trying to figure out the way to solve this for a while now, and I'm hoping someone could point me in the right direction to find the answer (or show me how to solve this).</p>
<p>The problem I'm having is with this expression: <span class="math-container">$(2i-2)^{38}$</span>, and I need to evaluate it using de Moivre's formula ($i$ is the imaginary unit, in case that wasn't clear). Now obviously I know it would be unwise to expand right away, because it would turn into an extremely long calculation.</p>
<p>The farthest I got with it is <span class="math-container">$2(i-1)^{38}$</span> and <span class="math-container">$(2\cdot(-1)^{1/2}-2)^{38}$</span>.</p>
<p>I'm hoping I'm headed in the right direction but I'm stuck. Could someone please show me the way to solve this?</p>
| giobrach | 332,594 | <p><strong>Hint.</strong> Use the fact that
<span class="math-container">$$ -2 + 2i = 2\sqrt 2\cos \frac{3\pi}4 + 2i\sqrt 2 \sin \frac{3\pi}4 = 2 \sqrt 2 e^{3\pi i/4}.$$</span></p>
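The hinted polar-form computation can be checked mechanically; here is a short Python sketch (my addition) that applies de Moivre to $-2+2i$ and compares with the direct power.

```python
import cmath

# Check (my addition): write -2+2i in polar form, apply de Moivre, and
# compare with the direct power. Since 38*(3*pi/4) = 28*pi + pi/2, the
# exact value is (2*sqrt(2))^38 * i = 2^57 * i.
z = -2 + 2j
r, theta = cmath.polar(z)              # r = 2*sqrt(2), theta = 3*pi/4
w = r ** 38 * cmath.exp(1j * 38 * theta)
print(w)        # ~ 2^57 * i
print(z ** 38)  # direct power agrees
```

Both prints agree (up to floating-point noise) with the exact answer $2^{57}i$.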
|
2,251,964 | <p><strong>question(s):</strong></p>
<p>Choose any real or complex clifford algebra $\mathcal{Cl}_{p,q}$. <a href="https://en.wikipedia.org/wiki/Classification_of_Clifford_algebras" rel="nofollow noreferrer">It's known</a> that there is some $A \simeq \mathcal{Cl}_{p,q}$, where $A$ is either a matrix ring $M(n,R)$ or a direct sum of matrix rings $M(n,R)\oplus M(n,R)$, for $n \geq 1$ and $R \in \{\mathbb R, \mathbb C, \mathbb H\}$ such that $\dim(\mathcal{Cl}_{p,q}) = \dim(A)$ (as k-algebras). </p>
<ol>
<li><p>Given an element $X\in \mathcal{Cl}_{p,q}$, defined on a standard basis, how can I find an explicit isomorphism $f:\mathcal{Cl}(p,q)\to A$, that preserves algebraic properties such as grade interaction? What is the "character" (qualitatively or otherwise) of such an isomorphism? Is it part of some group like the general linear or orthogonal groups?</p></li>
<li><p>Conversely, say $y^{i,j} \in R$ is an entry of some matrix $Y\in A$, and $y^{i,j}_k \in \mathbb R$ is a component of $y^{i,j}$. Let's call it a component of $A$ as well. What is the relationship of $y^{i,j}_k$ to $\mathcal{Cl}_{p,q}$? Does it have a single grade that can be determined by its grade in $R$? Or is there something more nuanced going on? How is it related to other components of $A$?</p></li>
<li><p>There are cases where a given $A$ is isomorphic to multiple Clifford algebras. Then these Clifford algebras must be isomorphic to one another. For example: $\mathcal{Cl}_{7,0} \simeq \mathcal{Cl}_{5,2} \simeq \mathcal{Cl}_{3,4} \simeq \mathcal{Cl}_{1,6} \simeq M(8,\mathbb C)$. What's going on here? Is there a mathematical term for these kinds of special isomorphisms between Clifford algebras in general?</p></li>
</ol>
<p>BONUS QUESTION: how is this problem referred to in the academic math literature? With my amateur math education, I have scoured the "matrix representations of Clifford algebras" literature, and mostly found material about real matrix representations of real Clifford algebras with real matrix generators, etc., but that's not what I'm looking for. How do I distinguish?</p>
<p><strong>context:</strong></p>
<p>I have been using sage and sympy (computer algebra systems) to compute symbolic monomial representations of products for various Clifford algebras. These are then used to generate GPU code for geometric algebra usage.</p>
<p>It works nicely for Clifford algebras with up to 7 generating dimensions, and for low-grade computations like planar rotation I can get up to 8 or 9 generating dimensions. I've been successful in producing reasonably fast 8-dimensional planar rotation functions in GLSL.</p>
<p>Recently, I was reading about Bott periodicity and the classification of Clifford algebras. Using the equations given on the wikipedia page "classification of Clifford algebras", I tried generating some of these isomorphic algebras.</p>
<p>For whatever reason, they are much, much faster to generate, likely due to the ubiquity of matrix multiplication. But I have absolutely no idea how, in general, I would construct a generic multivector in them. For example, generating a symbolic representation of $M(4,\mathbb H)$ is very quick in Sage. Presumably there is some isomorphism between this and the 64 dimensional $\mathcal{Cl}_{2,4}$. But how do I use it?</p>
| arctic tern | 296,782 | <p>I can't say I understand the context you gave behind your question, but I think most of your questions can be answered by a primer on Clifford algebras. One thing I want to point out is that I use the opposite sign convention from you.</p>
<p>When making $\mathbb{C}$, mathematicians adjoined a square root of negative one to $\mathbb{R}$. When making the quaternions $\mathbb{H}$, Hamilton adjoined a new square root of negative one, called $j$, to $\mathbb{C}$, and eventually determined that in order for the obvious choice of norm to be multiplicative we would need $k:=ij$ to be linearly independent of $1,i,j$ (thus jutting out in a fourth dimension) and for $i$ and $j$ to anticommute rather than commute (meaning $ij=-ji$).</p>
<p>Clifford algebras extend this idea. I call the Clifford algebra $C\ell(n)$ the free associative algebra generated by $n$ anticommuting square roots of negative one. If we call them $e_1,\cdots,e_n$ and let $v$ be any vector in their span, we easily compute $v^2=-\|v\|^2$ (where $\|\cdot\|^2$ comes from the standard basis here). This inspires a generalization: if $(V,q)$ is any quadratic space (so $q$ is a quadratic form on $V$, meaning $q(x)=b(x,x)$ for some symmetric bilinear form $b$), then $C\ell(V,q)$ is the tensor algebra on $V$ modulo (the two-sided ideal generated by) the relations $v\otimes v=-q(v)1$.</p>
<p>We usually only consider nondegenerate bilinear forms on a vector space. At the other extreme is the completely degenerate form which always equals $0$; this gives the algebra isomorphism $C\ell(V,0)\cong\Lambda V$, known better as the exterior algebra on $V$. Even if $q$ is not identically $0$, there is still always a canonical vector space isomorphism $\Lambda V\to C\ell(V,q)$, given by</p>
<p>$$ v_1\wedge \cdots\wedge v_k\mapsto v_1\cdots v_k $$</p>
<p>whenever $v_1,\cdots,v_k$ are orthogonal (meaning $b(v,w)=0$ or $q(v+w)=q(v)+q(w)$).</p>
<p>While the tensor algebra $TV$ is $\mathbb{N}$-graded, $C\ell(V)$ is not. Indeed, $v^2=-q(v)$ seems to lie in both the ostensible $2$ and $0$ graded components; in general, $C\ell(V)$ is only $\mathbb{Z}/2\mathbb{Z}$-graded: the even component are those elements expressible as a sum of products of evenly many vectors, and similarly for the odd component. This makes it something called a <em>superalgebra</em>. While $\Lambda V$ is supercommutative, $C\ell(V,q)$ isn't in general since odd elements commute with themselves.</p>
<p>Quadratic spaces together with quadratic-form-preserving linear maps as the morphisms form a category $\mathsf{QVect}$. It has a monoidal operation on it: $(V_1,q)\oplus(V_2,q_2)=(V_1\oplus V_2,q_1\oplus q_2)$, with quadratic form defined by $(q_1\oplus q_2)(v_1,v_2)=q_1(v_1)+q_2(v_2)$ (so it contains $V_1$ and $V_2$ as orthogonal subspaces). The assignment $(V,q)\mapsto C\ell(V,q)$ is functorial from $\mathsf{QVect}$ to $\mathsf{SAlg}$, the category of superalgebras. The latter category has its own monoidal operation, the <em>super</em> tensor product with $(a_1\otimes b_1)(a_2\otimes b_2)=\pm (a_1a_2\otimes b_1b_2)$ with $(-)$ sign if both $b_1,a_2$ are odd and $(+)$ sign otherwise inside $A\widehat{\otimes}B$. Then the $C\ell$ functor is actually monoidal; </p>
<p>$$ C\ell(V_1\oplus V_2,q_1\oplus q_2) \cong C\ell(V_1,q_1) ~\widehat{\otimes}~ C\ell(V_2,q_2). $$</p>
<p>Clifford algebras, like tensor products, satisfy a universal property. If $(V,q)$ is a quadratic vector space and $A$ is any algebra and we have a linear map $\phi:V\to A$ satisfying $\phi(v)^2=-q(v)1_A$, then it extends to an algebra homomorphism $C\ell(V,q)\to A$ (via $v_1\cdots v_k\mapsto \phi(v_1)\cdots\phi(v_k)$).</p>
<p>Sylvester's law of inertia classifies nondegenerate quadratic forms on real vector spaces according to an invariant called its signature $(p,q)$: there is a basis in which</p>
<p>$$ q(x)=x_1^2+\cdots+x_p^2-x_{p+1}^2-\cdots-x_{p+q}^2 . $$</p>
<p>We call the corresponding clifford algebra $C\ell(p,q)$ (there are other naming conventions too).</p>
<p>$\bullet$ Vacuously, $C\ell(0,0)=\mathbb{R}$. </p>
<p>$\bullet$ We already know $C\ell(1,0)=\mathbb{C}$,</p>
<p>$\bullet$ We already know $C\ell(2,0)=\mathbb{H}$,</p>
<p>$$ 1\leftrightarrow 1, \quad i\leftrightarrow e_1, \quad j\leftrightarrow e_2, \quad ij\leftrightarrow e_1e_2. $$</p>
<p>$\bullet$ What about $C\ell(0,1)$? This is $\mathbb{R}[x]/(x^2-1)\cong\mathbb{R}[x]/(x-1)\oplus\mathbb{R}[x]/(x+1)$ by the Chinese Remainder Theorem. If $C\ell(0,1)$ is generated by $f$ satisfying $f^2=1$, then $(1+f)(1-f)=0$ and one checks $(1\pm f)^2=2(1\pm f)$, so $(1\pm f)/2$ are orthogonal idempotents in which case</p>
<p>$$ a+bf = (a+b)\frac{1+f}{2}+(a-b)\frac{1-f}{2} \leftrightarrow (a+b,a-b) $$</p>
<p>establishing $C\ell(0,1)\cong\mathbb{R}^2$ (our notation for direct product of rings).</p>
<p>$\bullet$ How about $C\ell(1,1)$ generated by $e,f$ with $e^2=-1$, $f^2=+1$, $ef=-fe$? In this case,</p>
<p>$$ 1\leftrightarrow \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \phantom{f}e\leftrightarrow \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}, $$</p>
<p>$$ f\leftrightarrow \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad ef\leftrightarrow \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}, $$</p>
<p>establishing $C\ell(1,1)\cong\mathbb{R}(2)$ (common notation for $M_2(\mathbb{R})$ in the literature).</p>
<p>$\bullet$ And then $C\ell(0,2)$ generated by $f_1,f_2$ with $f_1^2=f_2^2=1$, $f_1f_2=-f_2f_1$. In this case,</p>
<p>$$ 1\leftrightarrow \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad f_1\leftrightarrow \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, $$</p>
<p>$$ f_2 \leftrightarrow \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}, \quad f_1f_2\leftrightarrow \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, $$</p>
<p>establishing $C\ell(0,2)\cong\mathbb{R}(2)$ as well.</p>
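The $2\times 2$ representations above can be verified mechanically; a minimal Python sketch (my addition) checks the defining generator relations for both $C\ell(1,1)$ and $C\ell(0,2)$.

```python
# Check (my addition): the 2x2 matrices above satisfy e^2 = -I, f^2 = +I,
# ef = -fe for Cl(1,1), and f1^2 = f2^2 = I, f1 f2 = -f2 f1 for Cl(0,2).

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def neg(A):
    return [[-x for x in row] for row in A]

I2 = [[1, 0], [0, 1]]
e  = [[0, -1], [1, 0]]   # Cl(1,1) generator with e^2 = -I
f  = [[0, 1], [1, 0]]    # Cl(1,1) generator with f^2 = +I
f1 = [[0, 1], [1, 0]]    # Cl(0,2) generators, both squaring to +I
f2 = [[-1, 0], [0, 1]]

print(mul(e, e) == neg(I2), mul(f, f) == I2, mul(e, f) == neg(mul(f, e)))
print(mul(f1, f1) == I2, mul(f2, f2) == I2, mul(f1, f2) == neg(mul(f2, f1)))
```

Both lines print `True True True`, confirming the anticommutation relations the tables rely on.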
<p>So far, this is our table of Clifford algebras:</p>
<p>$$ \begin{array}{|c|c|c|c|c|} \hline (p,q) & ~~~0~~~ & 1 & 2 & ~~~3~~~ \\ \hline 0 & \mathbb{R} & \mathbb{R}^2 & \mathbb{R}(2) & \\ \hline 1 & \mathbb{C} & \mathbb{R}(2) & & \\ \hline 2 & \mathbb{H} & & & \\ \hline 3 \\ \hline \end{array} $$</p>
<p>At this point, we should look at (normal, nonsuper) tensor products of $\mathbb{R},\mathbb{C},\mathbb{H}$. The obvious cases are $\mathbb{R}\otimes\mathbb{K}\cong\mathbb{K}$ for each $\mathbb{K}$. The first nontrivial one is </p>
<p>$$\mathbb{C}\otimes_{\mathbb{R}}\mathbb{C}=\mathbb{C}\otimes_{\mathbb{R}}\frac{\mathbb{R}[x]}{(x^2+1)} \cong \frac{\mathbb{C}[x]}{(x^2+1)} \cong \frac{\mathbb{C}[x]}{(x-i)}\oplus \frac{\mathbb{C}[x]}{(x+i)} \cong \mathbb{C}\oplus\mathbb{C}$$</p>
<p>by the Chinese Remainder Theorem. Indeed, </p>
<p>$$ 1\otimes 1\leftrightarrow (1,1), \quad i\otimes i\leftrightarrow (-1,1) \\ i\otimes 1\leftrightarrow (i,i), \quad 1\otimes i\leftrightarrow (i,-i) $$</p>
<p>establishes $\mathbb{C}\otimes\mathbb{C}\cong\mathbb{C}^2$ as algebras.</p>
<p>Now, $\mathbb{H}$ is a module over itself from both the left and the right, and these actions commute (this is the associative property, $a(xb)=(ax)b$), which induces a map $\mathbb{H}\otimes\mathbb{H}\to \mathbb{R}(4)$; I'll let you compute what this does to a basis (and thus by dimensions establishes an algebra isomorphism); I do a similar trick, regarding $\mathbb{H}$ as a right vector space over $\mathbb{C}$, <a href="https://math.stackexchange.com/questions/1213431/tensor-product-between-quaternions-and-complex-numbers/2084209#2084209">here</a> in order to establish the algebra isomorphism $\mathbb{H}\otimes\mathbb{C}\cong\mathbb{C}(2)$. Thus, we have this full table:</p>
<p>$$ \begin{array}{c||c|c|c} \otimes & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \hline \mathbb{R} & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \hline \mathbb{C} & \mathbb{C} & \mathbb{C}^2 & \mathbb{C}(2) \\ \hline \mathbb{H} & \mathbb{H} & \mathbb{C}(2) & \mathbb{R}(4) \end{array} $$</p>
<p>Moreover, we have $\mathbb{R}(n)\otimes\mathbb{K}\cong\mathbb{K}(n)$ for any algebra $\mathbb{K}$ (including $\mathbb{K}=\mathbb{R}^2$ and $\mathbb{H}^2$, for example, in which case e.g. $\mathbb{C}^2(n)\cong\mathbb{C}(n)^2$). I will assume you can figure out what these isomorphisms are.</p>
<p>For a general Clifford algebra $C\ell(p,q)$ generated by $e_1,\cdots,e_p$ (square roots of $-1$) and $f_1,\cdots,f_q$ (square roots of $+1$), all pairwise anticommuting, we may call $\omega=e_1\cdots e_p f_1\cdots f_q$ the orientation element (which element of the algebra this represents is actually independent of choice of basis for the vector space; it only depends on the <em>orientation</em> the basis induces on the space). Check</p>
<p>$$ \omega^2=\begin{cases} +1 & p-q\equiv 0,3 \mod4 \\ -1 & p-q\equiv 1,2 \mod 4 \end{cases} $$</p>
<p>Suppose we have quadratic spaces $(V_1,q_1),(V_2,q_2)$ and the orientation element $\omega$ of $C\ell(V_1,q_1)$ anticommutes with all $v_1\in V_1$ (so $\dim V_1$ is even). Consider the map </p>
<p>$$\phi:V_1\oplus V_2\to C\ell(V_1,q_1)\otimes C\ell(V_2,q_2)$$</p>
<p>(usual tensor product) given by extending</p>
<p>$$ v_1 \mapsto v_1\otimes 1 \\ v_2\mapsto \omega\otimes v_2 $$</p>
<p>Check this satisfies</p>
<p>$$ \begin{array}{ll} \phi(v_1,v_2)^2 & =(v_1\otimes 1+\omega\otimes v_2)^2 \\ & = v_1^2\otimes 1+v_1\omega\otimes v_2+\omega v_1\otimes v_2+\omega^2\otimes v_2^2 \\ & = (q_1+\omega^2q_2)(v_1,v_2).\end{array} $$</p>
<p>The universal property gives an algebra isomorphism $$C\ell(V_1\oplus V_2,q_1\oplus\omega^2q_2)\to C\ell(V_1,q_1)\otimes C\ell(V_2,q_2).$$</p>
<p>If we pick $(V_1,q_1)$ to have one of the signatures $(2,0),(1,1),(0,2)$ we get</p>
<p>$$ \begin{array}{l} C\ell(2,0) \otimes C\ell(r,s) \cong C\ell(s+2,r) \\ C\ell(1,1)\otimes C\ell(r,s)\cong C\ell(r+1,s+1) \\ C\ell(0,2)\otimes C\ell(r,s)\cong C\ell(s,r+2). \end{array} $$</p>
<p>(This is why it was important we worked out $C\ell(p,q)$ for $p+q\le 2$.) You should be able to use this to fill out the table of Clifford algebras for $0\le p,q\le 8$. If you do, you will notice some patterns.</p>
<p>For example, if you ignore the matrices part and just focus on scalars, they are $8$-periodic, proceeding $\mathbb{R},\mathbb{C},\mathbb{H},\mathbb{H}^2,\mathbb{H},\mathbb{C},\mathbb{R},\mathbb{R}^2$. (The "ignoring the matrices part" can be formalized using Morita equivalence.) John Baez calls this the "Clifford clock." Notice the axis of symmetry of this clock is offset by $1$; this is related to the fact the even subalgebra of $C\ell(p,q)$ is isomorphic to $C\ell(p-1,q)$, and spin representations thus use Clifford algebra representations from one less space dimension.</p>
<p>Anyway, that's a long enough answer for the classification I think.</p>
|
2,886,675 | <p>I suspect the following is exactly true ( for positive $\alpha$ )</p>
<p>\begin{equation}
\sum_{n=1}^\infty e^{- \alpha n^2 }= \frac{1}{2} \sqrt { \frac{ \pi}{ \alpha} }
\end{equation}</p>
<p>If the above is exactly true, then I would like to know a proof of it.
I accept that showing a particular limit holds exactly may be far more difficult than just applying a general theorem to show that the limit exists. Also, since the result involves $\pi$, I suspect the proof could well be a long one, BUT … ?</p>
<p>To give some context, the above series crops up in calculating the 'One Particle Translational Partition Function' for the quantum mechanical 'Particle In A Box'.</p>
| Somos | 438,089 | <p>The infinite sum is $\, (\theta_3(0,e^{-\alpha})-1)/2 \,$ where $\, \theta_3 \,$ is a Jacobi theta function. Only for small values of $\, \alpha \,$ is it approximately $\, \sqrt{\pi /\alpha}/2. \,$ Define
$\, f(\alpha) := \theta_3(0,e^{-\alpha}). \,$
Then using modular relations
$\, f(\alpha) = \sqrt{\pi/\alpha}f(\pi^2/\alpha). \,$ Since $\, f(\alpha) \to 1 \, $ as $\, \alpha \to +\infty, \,$ this explains the close approximation when $\alpha$ is a small positive number.</p>
<p>Define $\, s(\alpha) := \sum_{n=1}^\infty
e^{-\alpha n^2}. \,$ Then the result
$\ s(\alpha) = -1/2 + \sqrt{\pi/\alpha}(1/2 + s(\pi^2/\alpha)). \,$ shows the exact relation between $\, s(\alpha) \,$ and
$\, s(\pi^2/\alpha). \,$</p>
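The stated modular relation is easy to check numerically; here is a short Python sketch (my addition) that also illustrates why $\tfrac12\sqrt{\pi/\alpha}$ is only an approximation, accurate for small $\alpha$.

```python
import math

# Numeric check (my addition) of
#   s(alpha) = -1/2 + sqrt(pi/alpha) * (1/2 + s(pi^2/alpha)).

def s(alpha, terms=200):
    return sum(math.exp(-alpha * n * n) for n in range(1, terms + 1))

alpha = 2.0
lhs = s(alpha)
rhs = -0.5 + math.sqrt(math.pi / alpha) * (0.5 + s(math.pi ** 2 / alpha))
print(lhs, rhs)   # agree to machine precision

# For small alpha, s(pi^2/alpha) is tiny and the -1/2 term is negligible
# relative to sqrt(pi/alpha)/2, giving the questioner's approximation:
print(s(0.001), 0.5 * math.sqrt(math.pi / 0.001))
```

The first pair of numbers agrees to machine precision; the second pair differs by roughly $\tfrac12$, which is exactly the neglected constant term.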
|
2,660,934 | <p>Find $$\lim_{x \to \infty} x\sin\left(\frac{11}{x}\right)$$</p>
<p>We know $-1\le \sin \frac{11}{x} \le 1 $ </p>
<p>Therefore, $x\rightarrow \infty $
And so limit of this function does not exist. </p>
<p>Am I on the right track? Any help is much appreciated.</p>
| Atmos | 516,446 | <p>$$\sin(11/x)\underset{(+\infty)}{\sim}11/x$$
What can you deduce?</p>
<p>Note: what you have stated is true; however, after taking the product with $x$, you cannot conclude from it whether the expression converges or diverges.</p>
|
2,660,934 | <p>Find $$\lim_{x \to \infty} x\sin\left(\frac{11}{x}\right)$$</p>
<p>We know $-1\le \sin \frac{11}{x} \le 1 $ </p>
<p>Therefore, $x\rightarrow \infty $
And so limit of this function does not exist. </p>
<p>Am I on the right track? Any help is much appreciated.</p>
| Sri-Amirthan Theivendran | 302,692 | <p>No the reasoning doesn't follow. If limit exists, then using your reasoning all we can say is it is between $-\infty$ and $\infty$. Make the change of variables $u=1/x$, and note that the limit is equivalent to
$$
\lim_{u\to 0^+}\frac{\sin 11u}{u}=11\lim_{u\to 0^+}\frac{\sin 11u}{11 u}
$$
and now use the well-known limit
$$
\lim_{x\to 0}\frac{\sin x}{x}=1
$$</p>
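A quick numeric illustration (my addition) of the substitution argument: evaluating $x\sin(11/x)$ for growing $x$ shows the values approaching $11$.

```python
import math

# Numeric illustration (my addition): x * sin(11/x) -> 11 as x -> infinity.
def f(x):
    return x * math.sin(11 / x)

for x in (10.0, 1e3, 1e6, 1e9):
    print(x, f(x))
```

The printed values increase toward $11$, consistent with $\lim_{u\to 0^+}\frac{\sin 11u}{u}=11$.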
|
2,660,934 | <p>Find $$\lim_{x \to \infty} x\sin\left(\frac{11}{x}\right)$$</p>
<p>We know $-1\le \sin \frac{11}{x} \le 1 $ </p>
<p>Therefore, $x\rightarrow \infty $
And so limit of this function does not exist. </p>
<p>Am I on the right track? Any help is much appreciated.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Write $$11\frac{\sin(\frac{11}{x})}{\frac{11}{x}}$$ and the limit is $11$.</p>
|
2,660,934 | <p>Find $$\lim_{x \to \infty} x\sin\left(\frac{11}{x}\right)$$</p>
<p>We know $-1\le \sin \frac{11}{x} \le 1 $ </p>
<p>Therefore, $x\rightarrow \infty $
And so limit of this function does not exist. </p>
<p>Am I on the right track? Any help is much appreciated.</p>
| Andreas Lenz | 533,869 | <p>Your conclusion is not directly correct, since you are neglecting the $x$ in the denominator inside the $\sin \frac{11}{x}$.</p>
<p>Write</p>
<p>$$ \lim_{x \rightarrow \infty} x \sin \frac{11}{x} = \lim_{x \rightarrow \infty} \frac {\sin \frac{11}{x}}{\frac{1}{x}}. $$</p>
<p>and L'Hôpital's rule is applicable.</p>
|
246,114 | <p>A Latin Square is a square of size <strong>n × n</strong> containing numbers <strong>1</strong> to <strong>n</strong> inclusive. Each number occurs once in each row and column.</p>
<p>An example of a 3 × 3 Latin Square is:</p>
<p><span class="math-container">$$
\left(
\begin{array}{ccc}
1 & 2 & 3 \\
3 & 1 & 2 \\
2 & 3 & 1 \\
\end{array}
\right)
$$</span>
Another is:
<span class="math-container">$$
\left(
\begin{array}{ccc}
3 & 1 & 2 \\
2 & 3 & 1 \\
1 & 2 & 3 \\
\end{array}
\right)
$$</span></p>
<p>My code can work when the order is less than 5</p>
<pre><code>n=4;
Dimensions[ans=Permutations[Permutations[Range[n]],{n}]//
Select[AllTrue[Join[#,Transpose@#],DuplicateFreeQ]&]]//AbsoluteTiming
</code></pre>
<blockquote>
<p><code>{0.947582, {576, 4, 4}}</code></p>
</blockquote>
<p>When the order is 5, the memory is not enough, I want to know if there is a better way to get all 5×5 Latin squares?</p>
yode | 21,532 | <p>Since <a href="http://web.math.ucsb.edu/%7Epadraic/mathcamp_2012/latin_squares/MC2012_LatinSquares_lecture3.pdf" rel="nofollow noreferrer">any group table</a> is a Latin square, we can start from group tables. And there is only 1 group of order 5:</p>
<pre><code>FiniteGroupCount[5]
</code></pre>
<blockquote>
<p>1</p>
</blockquote>
<p>We know it has to be <span class="math-container">$C_5$</span>. So we can use it to construct <span class="math-container">$120$</span> Latin squares with <span class="math-container">$S_5$</span>:</p>
<pre><code>M = Permute[GroupMultiplicationTable[CyclicGroup[5]], SymmetricGroup[5]]
</code></pre>
<blockquote>
<pre><code>(* {{{1,2,3,4,5},{2,1,4,5,3},{3,4,5,1,2},{4,5,2,3,1},{5,3,1,2,4}},
{{1,2,3,4,5},{2,1,4,5,3},{3,4,5,1,2},{5,3,1,2,4},{4,5,2,3,1}},
{{1,2,3,4,5},{2,1,4,5,3},{3,4,5,2,1},{4,5,1,3,2},{5,3,2,1,4}},
...
{{5,4,3,2,1},{4,5,2,1,3},{3,2,1,5,4},{2,1,4,3,5},{1,3,5,4,2}}} *)
</code></pre>
</blockquote>
<p>Or we can exchange the symbols among them to get <span class="math-container">$2880$</span>:</p>
<pre><code>DeleteDuplicates[Function[{m, cyc}, Permute[#, cyc] & /@ m] @@@
Tuples[{M, GroupElements[SymmetricGroup[5]]}]]
</code></pre>
<p>This is not all <span class="math-container">$161280$</span> Latin squares, but it is much faster.</p>
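Out of curiosity, the full count can also be reproduced by direct backtracking; here is a sketch (my addition, in Python rather than Mathematica) that builds the square row by row, pruning rows that clash with a column.

```python
from itertools import permutations

# Row-by-row backtracking count of n x n Latin squares (my addition):
# keep a set of used values per column and only try candidate rows that
# avoid all column clashes.

def count_latin(n):
    perms = list(permutations(range(1, n + 1)))
    cols = [set() for _ in range(n)]

    def extend(depth):
        if depth == n:
            return 1
        total = 0
        for row in perms:
            if all(row[j] not in cols[j] for j in range(n)):
                for j in range(n):
                    cols[j].add(row[j])
                total += extend(depth + 1)
                for j in range(n):
                    cols[j].discard(row[j])
        return total

    return extend(0)

print(count_latin(4))   # 576, matching the timing output in the question
# count_latin(5) returns 161280; it takes a few seconds in pure Python.
```

Unlike the filter-all-tuples approach in the question, this never materializes the full search space, so memory stays small even at order 5.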
|
2,368,813 | <p>I'm in a bit of trouble: I want to calculate the cross ratio of 4 points $ A B C D $ that are on a circle.</p>
<p>Sadly, "officially" it has to be calculated with A B C D as complex numbers, and Geometer's Sketchpad (the geometry program I am used to) doesn't know about complex numbers.</p>
<p>Now I am wondering:
the cross ratio of 4 points on a circle is a real number,<br>
so is there really so much difference between</p>
<p>$$ \frac { (A-C)(B-D) }{ (A-D)(B-C) } \text{ (complex method) }$$
And
$$ \frac { |AC||BD| }{ |AD||BC| } \text{ (distance method). }$$</p>
<p>Where X-Y is the complex difference between X and Y</p>
<p>and |XY| is the distance between X and Y</p>
<p>if it is already given that the four points lie on a circle?</p>
<p>I think that the absolute values of the two calculations are the same, but am I right (and can we prove it)?</p>
<p>ADDED LATER :</p>
<p>on Cave 's suggestion I used a circle inversion to move the four points on the circle to four points on a line and then the formulas give the same value ( if I did it correctly)</p>
<p>What I did:</p>
<p>I took a diameter of the circle.
This diameter intersects the circle in O and U;
u is the line through U perpendicular to OU.</p>
<p>Project the 4 points onto line u with centre O (draw a ray through O and the point, the new point is where this ray intersects line u)</p>
<p>And calculate the cross ratio of the four new points.</p>
<p>This "projection" method and the earlier "distance" method give the same value but does this prove anything?</p>
| robjohn | 13,854 | <p>For $A,B,C,D$ on the unit circle,
$$
\begin{align}
\frac{(A-C)(B-D)}{(A-D)(B-C)}
&=\frac{\left(e^{ia}-e^{ic}\right)\left(e^{ib}-e^{id}\right)}{\left(e^{ia}-e^{id}\right)\left(e^{ib}-e^{ic}\right)}\\
&=\frac{\left(e^{i(a-c)}-1\right)\left(e^{i(b-d)}-1\right)}{\left(e^{i(a-d)}-1\right)\left(e^{i(b-c)}-1\right)}\\
&=\frac{e^{i\frac{a-c}2}2i\sin\left(\frac{a-c}2\right)e^{i\frac{b-d}2}2i\sin\left(\frac{b-d}2\right)}{e^{i\frac{a-d}2}2i\sin\left(\frac{a-d}2\right)e^{i\frac{b-c}2}2i\sin\left(\frac{b-c}2\right)}\\
&=\frac{\sin\left(\frac{a-c}2\right)\sin\left(\frac{b-d}2\right)}{\sin\left(\frac{a-d}2\right)\sin\left(\frac{b-c}2\right)}\tag{1}
\end{align}
$$
which is indeed real.</p>
<hr>
<p>For $A,C$ on the unit circle,
$$
\begin{align}
|A-C|
&=\sqrt{\left(e^{ia}-e^{ic}\right)\left(e^{-ia}-e^{-ic}\right)}\\[3pt]
&=\sqrt{2-2\cos(a-c)}\\
&=2\sin\left(\frac{a-c}2\right)\tag{2}
\end{align}
$$
Therefore,
$$
\begin{align}
\frac{|A-C||B-D|}{|A-D||B-C|}
&=\frac{2\sin\left(\frac{a-c}2\right)2\sin\left(\frac{b-d}2\right)}{2\sin\left(\frac{a-d}2\right)2\sin\left(\frac{b-c}2\right)}\\
&=\frac{\sin\left(\frac{a-c}2\right)\sin\left(\frac{b-d}2\right)}{\sin\left(\frac{a-d}2\right)\sin\left(\frac{b-c}2\right)}\tag{3}
\end{align}
$$
which is obviously real, but not so obviously, the same as $(1)$.</p>
<hr>
<p><strong>When is the Complex Cross-Ratio Real?</strong></p>
<p><a href="https://i.stack.imgur.com/cyRWN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cyRWN.png" alt="enter image description here"></a></p>
<p>$$
\begin{align}
\frac{A-C}{A-D}
&=\frac{C-A}{D-A}\\
&=\frac{|C-A|}{|D-A|}\,e^{-i\alpha}
\end{align}
$$
$$
\begin{align}
\frac{B-D}{B-C}
&=\frac{D-B}{C-B}\\
&=\frac{|D-B|}{|C-B|}\,e^{-i\beta}
\end{align}
$$
Therefore,
$$
\frac{(A-C)(B-D)}{(A-D)(B-C)}
=\frac{|C-A||D-B|}{|D-A||C-B|}\,e^{-i(\alpha+\beta)}
$$
$\alpha+\beta$ is an integer multiple of $\pi$ ($\pi$) if and only if $A$ and $B$ are on opposing arcs of a circle with $C$ and $D$.</p>
<hr>
<p><a href="https://i.stack.imgur.com/w0Cmv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w0Cmv.png" alt="enter image description here"></a></p>
<p>$$
\begin{align}
\frac{A-C}{A-D}
&=\frac{C-A}{D-A}\\
&=\frac{|C-A|}{|D-A|}\,e^{-i\alpha}
\end{align}
$$
$$
\begin{align}
\frac{B-D}{B-C}
&=\frac{D-B}{C-B}\\
&=\frac{|D-B|}{|C-B|}\,e^{i\beta}
\end{align}
$$
Therefore,
$$
\frac{(A-C)(B-D)}{(A-D)(B-C)}
=\frac{|C-A||D-B|}{|D-A||C-B|}\,e^{-i(\alpha-\beta)}
$$
$\alpha-\beta$ is an integer multiple of $\pi$ ($0$) if and only if $A$ and $B$ are on the same arc of a circle with $C$ and $D$.</p>
|
1,164,037 | <p>The question I propose is this: For an indexing set $I = \mathbb{N}$, or $I = \mathbb{Z}$, and some alphabet $A$, we can define a left shift $\sigma : A^{I} \to A^{I}$ by $\sigma(a_{k})_{k \in I} = (a_{k + 1})_{k \in I}$, because there exists a unique successor $\min \{ i \in I : i > i_{0} \}$ for every $i_{0} \in I$. But one could not do the same if we made, say, $I = \mathbb{Q}$. Is there a way to characterize this in the language of orderings, i.e. a way to characterize the existence of a unique successor in a directed set such that one could sensibly define a "left shift" on a net $s: I \to A$? Does this place restrictions on, say, the cardinality of $I$?</p>
| Brian M. Scott | 12,042 | <p>Let $\langle I,\preceq\rangle$ be a linear order; all that’s required is that each $i\in I$ have an immediate successor $i^+$ in the order $\preceq$: $i\prec i^+$, and there is no $j\in I$ such that $i\prec j\prec i^+$.</p>
<p>Define a relation $\sim$ on $I$ as follows: for $i,j\in I$, $i\sim j$ iff either $i\preceq j$ and $[i,j]$ is finite, or $j\preceq i$ and $[j,i]$ is finite, where the intervals are of course taken with respect to the order $\preceq$. It’s easy to check that $\sim$ is an equivalence relation. For $i\in I$ let $[i]$ be the $\sim$-equivalence class of $i$; it’s also pretty straightforward to check that either $[i]$ has no least element and is order-isomorphic to $\Bbb Z$, or $[i]$ has a least element and is order-isomorphic to $\Bbb N$. </p>
<p>Let $\mathscr{I}=\{[i]:i\in I\}$; $\preceq$ induces a natural linear order $\sqsubseteq$ on $\mathscr{I}$ by $[i]\sqsubseteq[j]$ iff $i\sim j$ or $i\prec j$, and it turns out that $\langle\mathscr{I},\sqsubseteq\rangle$ can be any linear order whatsoever. </p>
<p>To see this, let $\langle L,\le\rangle$ be a linear order, and let $L_0$ be any subset of $L$. Let</p>
<p>$$I=\{\langle x,n\rangle\in L\times\Bbb Z:n\ge 0\text{ if }x\in L_0\}\;,$$</p>
<p>and let $\preceq$ be the lexicographic order on $I$: $\langle x,m\rangle\preceq\langle y,n\rangle$ iff $x<y$, or $x=y$ and $m\le n$. Then </p>
<p>$$[\langle x,n\rangle]=\begin{cases}
\{x\}\times\Bbb N,&\text{if }x\in L_0\\
\{x\}\times\Bbb Z,&\text{if }x\in L\setminus L_0\;,
\end{cases}$$</p>
<p>and $\langle x,n\rangle^+=\langle x,n+1\rangle$ for each $\langle x,n\rangle\in I$.</p>
<p>In particular, $I$ can be something quite different from a well-order, e.g. $\Bbb Q\times\Bbb Z$ ordered lexicographically.</p>
<p>Up to here I’ve assumed that you want a linearly ordered index set, but it should be clear that $L$ can just as well be a partial order.</p>
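<p>The closing example can be made concrete in code (a hypothetical Python sketch, not part of the answer): take <span class="math-container">$\Bbb Q\times\Bbb Z$</span> with the lexicographic order; every element has an immediate successor, so a "left shift" of a net <span class="math-container">$s:I\to A$</span> is well-defined even though the index set is not well-ordered.</p>

```python
from fractions import Fraction

def successor(i):
    """Immediate successor of (x, n) in Q x Z under the lexicographic order."""
    x, n = i
    return (x, n + 1)

def lex_less(a, b):
    return a[0] < b[0] or (a[0] == b[0] and a[1] < b[1])

def shift(s):
    """Left shift of a net s : I -> A, defined by (sigma s)(i) = s(successor(i))."""
    return lambda i: s(successor(i))

i = (Fraction(1, 2), 7)
assert lex_less(i, successor(i))           # i strictly precedes its successor

s = lambda j: j[1] % 2                     # a toy net into the alphabet {0, 1}
assert shift(s)(i) == s((Fraction(1, 2), 8))
```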
|
4,364,421 | <p>Are all solutions of the equation <span class="math-container">$x^2-4My^2=K^2$</span> multiples of <span class="math-container">$K$</span>? I am considering <span class="math-container">$M$</span> not a perfect square. All tests in Python suggest it is true, but...</p>
<p>My code:</p>
<pre><code>for x in range(1, 8000):
    for y in range(1, 8000):
        if x*x - 20*y*y == 36:   # here 4M = 20 and K^2 = 36
            print(x, y)
</code></pre>
<p>Result: <span class="math-container">$
54, 12\\
966, 216$</span></p>
<p>All multiples of <span class="math-container">$6$</span>.</p>
<p>Can anyone give an example of a solution coprime (except for a factor of 2) with K?</p>
| Will Jagy | 10,400 | <p><span class="math-container">$$ 21^2 - 20 \cdot 4^2 = 11^2 $$</span>
<span class="math-container">$$ 21^2 - 20 \cdot 2^2 = 19^2 $$</span>
<span class="math-container">$$ 61^2 - 20 \cdot 12^2 = 29^2 $$</span>
<span class="math-container">$$ 41^2 - 20 \cdot 6^2 = 31^2 $$</span>
<span class="math-container">$$ 49^2 - 20 \cdot 6^2 = 41^2 $$</span>
<span class="math-container">$$ 69^2 - 20 \cdot 8^2 = 59^2 $$</span>
<span class="math-container">$$ 101^2 - 20 \cdot 18^2 = 61^2 $$</span>
<span class="math-container">$$ 89^2 - 20 \cdot 12^2 = 71^2 $$</span>
<span class="math-container">$$ 81^2 - 20 \cdot 4^2 = 79^2 $$</span>
<span class="math-container">$$ 161^2 - 20 \cdot 30^2 = 89^2 $$</span>
<span class="math-container">$$ 141^2 - 20 \cdot 22^2 = 101^2 $$</span>
<span class="math-container">$$ 141^2 - 20 \cdot 20^2 = 109^2 $$</span></p>
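<p>Counterexamples like these can also be found with a short brute-force search, in the spirit of the question's Python loop (a hypothetical sketch; here <span class="math-container">$M=5$</span>, so <span class="math-container">$4M=20$</span>, and we keep solutions with <span class="math-container">$\gcd(x,z)\le 2$</span>):</p>

```python
from math import gcd, isqrt

solutions = []
for x in range(1, 200):
    for y in range(1, x):
        r = x * x - 20 * y * y             # want r to be a perfect square z^2
        if r <= 0:
            continue
        z = isqrt(r)
        if z * z == r and gcd(x, z) <= 2:  # x and z coprime up to a factor of 2
            solutions.append((x, y, z))

print(solutions[:5])
```

<p>The list includes, e.g., <span class="math-container">$(21,4,11)$</span> and <span class="math-container">$(61,12,29)$</span> from above.</p>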
|
786,643 | <p>Considering $$\int\frac{\ln(x+1)}{2(x+1)}dx$$ I first solved it by seeing it as similar to the derivative of $\ln^2(x+1)$, so multiplying by $\frac22$ the solution is $$\int\frac{\ln(x+1)}{2(x+1)}dx=\frac{\ln^2(x+1)}{4}+const.$$ But then we can solve it using the by-parts method, and this is the solution that I found:
$$\frac12\int\frac{\ln(x+1)}{(x+1)}dx=\frac12\ln(x+1)\ln(x+1)-\frac12\int\frac{\ln(x+1)}{x+1}dx$$
Seeing it as an equation, I brought the integral $-\frac12\int\frac{\ln(x+1)}{x+1}dx$ to the left, so that I obtain $$\int\frac{\ln(x+1)}{(x+1)}dx=\frac12\ln(x+1)\ln(x+1)+const.$$ so $$\int\frac{\ln(x+1)}{(x+1)}dx=\frac12\ln^2(x+1)+const.$$ I know that the first solution is correct, but both ways of solving seem to be correct. How is this possible? Where is the mistake? Thank you in advance for your help!</p>
| Shobhit | 79,894 | <p>Divide the last step by $2$ to get the desired answer: your second computation evaluates $\int\frac{\ln(x+1)}{x+1}dx$, i.e. the integral without the factor $\frac12$, so halving it recovers $\frac{\ln^2(x+1)}{4}$. Also note that $2\cdot const=const$, since the constant of integration is arbitrary.</p>
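<p>A quick numerical cross-check (a sketch using a central-difference derivative, not part of the original answer) confirms that $\frac{\ln^2(x+1)}{4}$ differentiates back to the original integrand, so the two methods agree once the second result is halved:</p>

```python
from math import log

def F(x):
    """Candidate antiderivative: ln(x+1)^2 / 4."""
    return log(x + 1) ** 2 / 4

def f(x):
    """Integrand: ln(x+1) / (2 (x+1))."""
    return log(x + 1) / (2 * (x + 1))

h = 1e-6
for x in (0.5, 1.0, 3.0):
    numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert abs(numeric - f(x)) < 1e-8
```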
|
121,897 | <p>I want to check whether a user has input the function with all the specified variables or not. For that I chose to replace the variables with some values and check whether the result is a number via a Do loop. I am thinking there might be a more elegant way of doing it, such as <a href="http://reference.wolfram.com/language/ref/ReplaceList.html" rel="nofollow"><code>ReplaceList</code></a>, but it is not working the way I want.</p>
<p>Let's assume</p>
<pre><code>u = z^2 Sin[π x] + z^4 Cos[π y] + I y^6 Cos[2 π y] + w;
(* and the user gives the variables as *)
vas = {x, y, z, w};
(* I need to check if all the variables are in the function *)
Do[
 u = u /. vas[[i]] -> 1.1;
 (* 1.1 lies in the region where the function will be evaluated *)
 If[i == 4, numc9 = NumericQ[u]; Print[numc9];];
 (* if numc9 is False, either there is an infinity, or one of the
    variables in the list is not present in the function, or the
    function has extra variable(s) *)
 Print[u];
 , {i, 4}]
</code></pre>
<p>Is there more elegant way doing it?</p>
<p><strong>EDIT I</strong></p>
<p>After @Mr.Wizard's answer I realized that my question was not covering everything I wanted. @Mr.Wizard's answer works if I am only checking that all the variables are present in u. However, at the same time I want to check that there are no extra variables in u, because at the end I want to evaluate u using vars, and if u has an extra variable I won't get a value at the end.</p>
<p>For example:</p>
<pre><code> u = z^2 Sin[π x] + z^4 Cos[π y] + I y^6 Cos[2 π y] + w + z^p;
vas = {z, x, y, p};
</code></pre>
<p>The <code>Level</code> and <code>FreeQ</code> commands give all the variables in the function u. After that you check whether all the variables in vas are present in this list of variables coming from <code>Level</code> or <code>FreeQ</code>, and in the example above they are.</p>
<p>In this situation @J.M.'s undocumented command does what I need; otherwise I will need to stick with my Do loop.</p>
| BoLe | 6,555 | <p>I think it's safe to extract all symbols from the underlying expression. <code>Cases</code> doesn't look at expression heads by default, so e.g. <code>Plus</code> and <code>Power</code> aren't returned. Complement that with constant-like symbols like <code>Pi</code> which are not to be checked as variables, and finally do a check with <code>ContainsExactly</code>.</p>
<pre><code>check[expr_, vars_] :=
With[{constants = {Pi, E, GoldenRatio, Infinity}},
Module[{in},
in = Cases[expr, _Symbol, Infinity]~Complement~constants;
ContainsExactly[in, vars]]]
</code></pre>
<p><code>ContainsExactly</code> is a 10.2 feature, I think one can use <code>Union[in] === Union[vars]</code> instead.</p>
<h1>Update</h1>
<p>I have a feeling you're fresh with the system, so I'd like to present you with another method that's readily used to collect information about the various parts of an expression, in this case here collecting all interesting symbols.</p>
<pre><code>test = z^2 Sin[Pi x] + z^4 Cos[Pi y] + I y^6 Cos[2 Pi y] + w;
vars = {x, y, z, w, p};
</code></pre>
<p>Go through the expression and check every part at every level, and store it if that part is some of the variables you're checking for.</p>
<pre><code>scan = Scan[
If[MemberQ[vars, #], Sow[#]] &, test, Infinity] // Reap
</code></pre>
<blockquote>
<p>{Null, {{w, z, y, y, y, z, x}}}</p>
</blockquote>
<p><code>Union</code> will discard the duplicates and sort the result, so (<code>p</code> is missing from <code>test</code>):</p>
<pre><code>Union[scan[[2, 1]]] === Union[vars]
</code></pre>
<blockquote>
<p>False</p>
</blockquote>
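<p>For comparison, the same two-way check (no missing and no extra variables) can be sketched outside Mathematica, for instance in Python with the standard <code>ast</code> module (a hypothetical analogue; function names and known constants are excluded, just like <code>Pi</code> in the code above):</p>

```python
import ast

CONSTANTS = frozenset({'pi', 'e', 'I'})

def variables(src):
    """Names appearing as variables (not call targets, not constants)."""
    tree = ast.parse(src, mode='eval')
    called = {n.func.id for n in ast.walk(tree)
              if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
    names = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return names - called - CONSTANTS

def check(src, vars_):
    # set equality: all variables present AND no extras (like ContainsExactly)
    return variables(src) == set(vars_)

u = 'z**2*sin(pi*x) + z**4*cos(pi*y) + I*y**6*cos(2*pi*y) + w'
assert check(u, ['x', 'y', 'z', 'w'])                    # exact match
assert not check(u + ' + z**p', ['x', 'y', 'z', 'w'])    # extra variable p
```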
|
381,036 | <p>I must show that $f(x)=p\sqrt{x}$, $p>0$, is continuous on the interval [0,1). </p>
<p>I'm not sure how I show that a function is continuous on an interval, as opposed to at a particular point. </p>
| Sugata Adhya | 36,242 | <h2>Hint:</h2>
<p>Choose $c\in[0,1);$</p>
<ul>
<li><p>$c>0:|x-c|<\delta\implies|\sqrt x-\sqrt c||\sqrt x +\sqrt c|<\delta\implies|\sqrt x-\sqrt c|<\dfrac{\delta}{\sqrt x+\sqrt c}\le\dfrac{\delta}{\sqrt c};$</p></li>
<li><p>$c=0:0\le x<\epsilon^2\implies0\le\sqrt x<\epsilon.$</p></li>
</ul>
|
3,299,492 | <p>Is there any nice characterization of the class of polynomials that can be written in the following form for some <span class="math-container">$c_i , d_i \in \mathbb{N}$</span>? Alternatively, where can I read more about these? Do they have a name?
<span class="math-container">$$c_1 + \left( c_2 + \left( \dots (c_k + x^{d_k}) \dots \right)^{d_2} \right)^{d_1}$$</span></p>
<p>For instance, it is not possible to write <span class="math-container">$1 + x + x^2$</span> in this way, but it is possible to write <span class="math-container">$1 + 2x + x^2$</span> or <span class="math-container">$0 + x^3$</span>.</p>
<p><em>For some context:</em> two actions on the set of polynomials <span class="math-container">$A \times \mathbb{N}[x] \to \mathbb{N}[x]$</span>, and <span class="math-container">$B \times \mathbb{N}[x] \to \mathbb{N}[x]$</span> can be combined into a single one <span class="math-container">$\left<A,B\right> \times \mathbb{N}[x] \to \mathbb{N}[x]$</span> that takes a word of elements on <span class="math-container">$A$</span> and <span class="math-container">$B$</span> and applies the multiple actions in order. In the case of multiplication and exponential, we can see that the class of polynomials
<span class="math-container">$$c_1 \left( c_2 \left( \dots (c_k x^{d_k}) \dots \right)^{d_2} \right)^{d_1}$$</span>
can be just described as the polynomials of the form <span class="math-container">$cx^d$</span>. I do not expect such a simple characterization in the case of sums and exponentials, but I would like to know if this class of polynomials has been described or studied somewhere.</p>
| José Carlos Santos | 446,262 | <p>No. It means that it converges if and only if <span class="math-container">$x=0$</span>.</p>
|
3,970,488 | <p>While solving a differential equation I encountered this derivative: let <span class="math-container">$$z=\frac {dt}{dx} $$</span> I don't understand how they obtain <span class="math-container">$$ \frac {dz}{dx}=z^3\frac {d^2x}{dt^2}$$</span></p>
| johnnyb | 298,360 | <p>First, it looks like you are off by a minus sign. So it should be:</p>
<p><span class="math-container">$$\frac{dz}{dx} = -z^3\frac{d^2x}{dt^2}$$</span></p>
<p>Then, it follows naturally from a more algebraic view of differentials. The standard notation for the second derivative of <span class="math-container">$x$</span> with respect to <span class="math-container">$t$</span> is <span class="math-container">$\frac{d^2x}{dt^2}$</span>. The problem is that this is not algebraically manipulable. If you find the second derivative by actually applying the quotient rule to the first derivative of <span class="math-container">$x$</span> with respect to <span class="math-container">$t$</span>, the result winds up being <span class="math-container">$\frac{d^2x}{dt^2} - \frac{dx}{dt}\frac{d^2t}{dt^2}$</span>.</p>
<p>So, if <span class="math-container">$z$</span> is <span class="math-container">$\frac{dt}{dx}$</span> then what is actually being multiplied (taking into account my suspected negative sign) is:</p>
<p><span class="math-container">$$(-1)\left(\frac{dt}{dx}\right)^3\left(\frac{d^2x}{dt^2} - \frac{dx}{dt}\frac{d^2t}{dt^2}\right) \\
(-1)\frac{dt^3}{dx^3}\left(\frac{d^2x}{dt^2} - \frac{dx}{dt}\frac{d^2t}{dt^2}\right) \\
(-1)\left(\frac{d^2x}{dx^2}\frac{dt}{dx} - \frac{d^2t}{dx^2}\right) \\
\frac{d^2t}{dx^2} - \frac{d^2x}{dx^2}\frac{dt}{dx}
$$</span>
This is the second derivative of <span class="math-container">$t$</span> with respect to <span class="math-container">$x$</span> (using the algebraically manipulable form of the notation for the second derivative). Since <span class="math-container">$z = \frac{dt}{dx}$</span> (the first derivative of <span class="math-container">$t$</span> with respect to <span class="math-container">$x$</span>), <span class="math-container">$\frac{dz}{dx}$</span> is the second derivative of <span class="math-container">$t$</span> with respect to <span class="math-container">$x$</span>, which, as we showed, is equal to your formula.</p>
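<p>A numerical sanity check of the corrected sign (a hypothetical sketch, using the concrete substitution <span class="math-container">$t=x^2$</span>, so <span class="math-container">$z=\frac{dt}{dx}=2x$</span> and <span class="math-container">$\frac{dz}{dx}=2$</span>, while <span class="math-container">$x=\sqrt t$</span> gives <span class="math-container">$\frac{d^2x}{dt^2}=-\frac14 t^{-3/2}$</span>):</p>

```python
# verify dz/dx == -z^3 * d2x/dt2 for the substitution t = x^2 at a sample point
x = 1.7
t = x * x
z = 2 * x                        # z = dt/dx
dz_dx = 2.0                      # derivative of 2x with respect to x
d2x_dt2 = -0.25 * t ** -1.5      # second derivative of x(t) = sqrt(t)
assert abs(dz_dx - (-(z ** 3) * d2x_dt2)) < 1e-9
```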
|
115,387 | <p>Have two series, just a quick check of some simple series:</p>
<p>$\sum _{1}^{\infty} \frac {1}{\sqrt {2n^{2}-3}}$</p>
<p>Considering
$\frac {1}{\sqrt {2n^{2}-3}}$ > $\frac {1}{\sqrt {4n^{2}}}$ = $\frac {1}{2n}$</p>
<p>Since
$\sum _{1}^{\infty} \frac {1}{2n}$ $\rightarrow$ Diverges, hence by the camparsion test we have that $\sum _{1}^{\infty} \frac {1}{\sqrt {2n^{2}-3}}$
diverges.</p>
<p>The second one is
$\sum _{1}^{\infty} (-1)^{n}(1+\frac{1}{n})^{n}$</p>
<p>Considering $(-1)^{n}(1+\frac{1}{n})^{n} \le |(-1)^{n}(1+\frac{1}{n})^{n}|= (1+\frac{1}{n})^{n}$</p>
<p>We know that $(1+\frac{1}{n})^{n}$ converges, hence we have that $\sum _{1}^{\infty}(1+\frac{1}{n})^{n}$ converges, so again by the comparison test we have that $\sum _{1}^{\infty} (-1)^{n}(1+\frac{1}{n})^{n}$
converges absolutely $\Rightarrow$ converges,</p>
<p>many thanks in advance.</p>
| Bruce George | 31,367 | <p>The first series you mention diverges, and the reason you give is correct. However, it is not correct to say "$\to$ Diverges", as this would be like saying "approaches diverges" (or "converges to diverges"), which makes no sense. You could say that it "diverges to infinity" or write $\to +\infty$. Anyway, this is minor. Your argument is fine.</p>
<p>On the other hand, the second series diverges, and your argument is incorrect. This is because the <em>sequence</em> $(1+\frac1n)^n$ converges, but that is not what is being asked. You are asked to add up together the terms of the sequence, which is a completely different thing.</p>
<p>The usual way of dealing with the second example is to use the $n$-th term test, which says that <em>if a series converges, its $n$th term converges to 0</em>. Here, the terms are $(-1)^n\left(1+\cfrac1n\right)^n$ which do not converge to 0 because they do not converge at all. In fact, in absolute value, they approach $e$. It follows that the series <em>diverges</em>.</p>
<hr>
<p>Note that you do not need to know or recognize that the terms of the second series in absolute value approach $e$ to conclude that the series diverges. Just note that
$$ \left(1+\frac1n\right)^n\ge 1, $$
so also $|(-1)^n(1+\frac1n)^n|\ge1$, and the terms cannot converge to 0.</p>
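<p>A quick numerical look (a sketch) makes the failure of the $n$th term test visible: $\left(1+\frac1n\right)^n$ increases toward $e\approx 2.718$, not toward $0$:</p>

```python
from math import e

terms = [(1 + 1 / n) ** n for n in (10, 100, 1000, 10000)]
print(terms)

# every term is at least 1, so the alternating terms cannot tend to 0
assert all(t >= 1 for t in terms)
assert abs(terms[-1] - e) < 1e-3
```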
|
921,144 | <p>Can you give me an example of a function on a metric space which is continuous but not uniformly continuous? The definitions are almost the same for both terms. This is what I found on wiki: ''The difference between being uniformly continuous, and being simply continuous at every point, is that in uniform continuity the value of $\delta$ depends only on $\varepsilon$ and not on the point in the domain.'' But in both definitions there's only $\exists \delta >0$ </p>
| creative | 166,713 | <p><em>Generally, for continuity, when we write $\delta$, we mean that $\delta=\delta(\epsilon,x_0),x_0\in D$. Similarly, for uniform continuity we mean $\delta=\delta(\epsilon)$. This notation is consistent. It is taken for granted that we understand the situation to which $\delta$ refers.</em> </p>
<p><em>Now for the example :</em><br>
<em>$f(x)=x^2$ in $\mathbb{R}$ is continuous but not uniformly continuous. But $f(x)=x$ is uniformly continuous in $\mathbb{R}$.</em> </p>
<p><strong>Note that :</strong> <em>Uniform continuity $\implies$ continuity, but converse is not true.</em></p>
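<p>A small numerical sketch (not part of the answer) shows why one fixed $\delta$ cannot serve every point for $f(x)=x^2$: the error $|f(x_0+\delta)-f(x_0)|=\delta(2x_0+\delta)$ grows with $x_0$.</p>

```python
delta = 0.1

def err(x0):
    """|f(x0 + delta) - f(x0)| for f(x) = x^2."""
    return abs((x0 + delta) ** 2 - x0 ** 2)

assert err(1.0) < 1.0      # delta = 0.1 keeps the error below eps = 1 near x0 = 1
assert err(10.0) > 1.0     # ... but the same delta fails at x0 = 10
```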
|
1,919,880 | <p>Let $B=\begin{bmatrix}
0 & 0 & 0 & -8 \\
1&0&0 & 16\\
0 &1&0& -14\\
0&0&1&6
\end{bmatrix}$</p>
<p>Consider B a real matrix. Find its Jordan form. So, the characteristic polynomial for B is $(x^2-2x+2)(x-2)^2$. Suppose $B$ represents $T$ in the standard basis. By the primary decomposition theorem, we have that $V$ is the direct sum of $K=Ker(T^2-2T+2I)$ and $L=Ker[(T-2)^2]$. $L$ has dimension 2 (direct computation here), so that $K$ should have dimension 2, too. Defining an operator $N$ over $Null(T-2)^2$ as $N=T-2$, we have that $N$ is nilpotent (it "dies" when raised to the second power), and also $[T]_B=[N]_B+2[I]_B$, for every basis of L. In particular, there exists a "cyclic basis for N" in L, i.e., a basis for which $N$ can be represented as
\begin{bmatrix}
0 & 0 \\
1 & 0 \\
\end{bmatrix}
And then, the matrix of T with respect to that same basis is</p>
<p>\begin{bmatrix}
2 & 0 \\
1 & 2 \\
\end{bmatrix}</p>
<p>Since the rest can't be factored into linear factors, I guess we can just try to find a basis for K, and the matrix will look something like this:</p>
<p>$D=\begin{bmatrix}
2 & 0 & 0 & 0 \\
1&2&0 & 0\\
0 &0&*& *\\
0&0&*&*
\end{bmatrix}$</p>
<p>The problem is, my friends, that when I compute $B^2-2B+2I$ through WA, and then row reduce it, it says that it is equivalent to the identity matrix, so I can't extract any basis for K. What's wrong here?</p>
| Christiaan Hattingh | 90,019 | <p>The matrix you start with is a companion matrix, so you know that the characteristic polynomial and the minimal polynomial are the same. As you have also noticed, the characteristic polynomial does not split over the real number field, and it follows that the Jordan form for this matrix does not exist over the real numbers; the Jordan form as given by астон вілла олоф мэллбэрг is correct.
The next best thing is the rational canonical form (also known as the classical canonical form). $B$ is similar to
$$ \begin{bmatrix}
2 & 0 & 0& 0 \\
1& 2 & 0 & 0 \\
0& 0 & 0&-2 \\
0&0&1&2
\end{bmatrix}.$$</p>
<p>Notice that the lower right block is a companion matrix for the polynomial $x^2-2x+2$. For more on the rational canonical form, you can see, for example, <a href="https://math.stackexchange.com/questions/1754556/how-do-i-get-the-rational-canonical-form-from-the-minimal-and-characteristic-pol/1755755#1755755">this answer</a>.</p>
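<p>A small cross-check (a hypothetical sketch using exact rational arithmetic and the Faddeev-LeVerrier recursion, not part of the original answer) confirms that the characteristic polynomial of the companion matrix $B$ is $x^4-6x^3+14x^2-16x+8=(x^2-2x+2)(x-2)^2$:</p>

```python
from fractions import Fraction

def charpoly(A):
    """Coefficients of det(xI - A), leading coefficient first (Faddeev-LeVerrier)."""
    n = len(A)
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # identity
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        M = [[sum(A[i][t] * M[t][j] for t in range(n)) for j in range(n)]
             for i in range(n)]                       # M <- A . M
        c = -sum(M[i][i] for i in range(n)) / k       # next coefficient
        coeffs.append(c)
        for i in range(n):
            M[i][i] += c                              # M <- M + c I
    return coeffs

B = [[Fraction(v) for v in row] for row in
     [[0, 0, 0, -8], [1, 0, 0, 16], [0, 1, 0, -14], [0, 0, 1, 6]]]
assert charpoly(B) == [1, -6, 14, -16, 8]   # (x^2 - 2x + 2)(x - 2)^2
```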
|
701,241 | <p><code>¬(p∨q)∧(p∨r)</code> Does this mean the negation of both <code>(p∨q)</code> and <code>(p∨r)</code>, or just <code>(p∨q)</code>?
If it were just <code>p∨q</code>, it would make more sense to me for the negation to be inside the brackets, like <code>(¬p∨q)</code>, but maybe that's just the programmer in me. I have also seen <code>(¬p∨¬q)</code>; does that mean the same as <code>¬(p∨q)</code>? It starts to get rather confusing.</p>
<p>Does it have to do with order assesed for example look at <code>(pvq)</code> then its negation.</p>
| Senex Ægypti Parvi | 89,020 | <p>The scope of $\neg$ used just before $($ extends to the $)$ which logically CLOSES OUT the grouping which was begun by the subject $($. </p>
<p>As to your comment about $(\neg p\vee\neg q)$, $\neg(p\wedge q)$ would be the proper way to move the $\neg$ outside the parentheses.</p>
|
542,808 | <p>I had a little back and forth with my logic professor earlier today about proving a number is irrational. I proposed that 1 + an irrational number is always irrational, thus if I could prove that 1 + irrational number is irrational, then it stood to reason that was also proving that the number in question was irrational.</p>
<p>E.g. $\sqrt2 + 1$ can be expressed as a continued fraction, and by looking at the fraction, it can be seen that $\sqrt2 + 1$ is irrational. I suggested that because of this, $\sqrt2$ is also irrational.</p>
<p>My professor said this is not always true, but I can't think of an example that suggests this.</p>
<p>If $x+1$ is irrational, is $x$ always irrational? </p>
<p>Actually, a better question is: if $x$ is irrational, is $x+n$ irrational, provided $n$ is a rational number?</p>
| Asaf Karagila | 622 | <p>Note that the sum of two rationals is always rational, and that if $n$ is rational then $-n$ is rational. Now suppose that $x$ is any number and $n$ is rational.</p>
<p>Suppose $x+n$ is rational, then $(x+n)+(-n)$ is rational. Therefore $x+(n+(-n))$ is rational. Therefore $x+0$ is rational, and finally $x$ is rational.</p>
|
52,848 | <p>Let $f(z)=\sum_{n\geq 0}a_n z^n$ be a Taylor series with rational coefficients, with infinitely many non-zero $a_n$, which converges
in a small neighborhood around $0$. Furthermore, assume that
\begin{align*}
f(z)=\frac{P(z)}{Q(z)},
\end{align*}
where $P(z)$ and $Q(z)$ are <em>coprime monic complex polynomials</em>. By developing $\frac{P(z)}{Q(z)}$ as a power series around $0$ and comparing it with $f(z)$, we obtain infinitely many polynomial equations in the roots of $P(z)$ and $Q(z)$ which are equal to rational numbers, so this seems to force the roots of $P(z)$ and $Q(z)$ to be algebraic numbers.</p>
<p>Q: How does one prove this rigorously?</p>
| Gjergji Zaimi | 2,384 | <p>Let there be two fields $k\subset K$, and let $f\in k[[x]]$ be a formal power series with coefficients in $k$. If $f\in K(x)$ (rational functions with coefficients in $K$) then $f\in k(x)$. A proof of this is given in J.S. Milne's notes on Etale Cohomology (lemma 27.9).</p>
|
52,848 | <p>Let $f(z)=\sum_{n\geq 0}a_n z^n$ be a Taylor series with rational coefficients, with infinitely many non-zero $a_n$, which converges
in a small neighborhood around $0$. Furthermore, assume that
\begin{align*}
f(z)=\frac{P(z)}{Q(z)},
\end{align*}
where $P(z)$ and $Q(z)$ are <em>coprime monic complex polynomials</em>. By developing $\frac{P(z)}{Q(z)}$ as a power series around $0$ and comparing it with $f(z)$, we obtain infinitely many polynomial equations in the roots of $P(z)$ and $Q(z)$ which are equal to rational numbers, so this seems to force the roots of $P(z)$ and $Q(z)$ to be algebraic numbers.</p>
<p>Q: How does one prove this rigorously?</p>
| Hugo Chapdelaine | 11,765 | <p>Well, I think there is a simpler argument. For a power series $g(x)\in\mathbb{C}[[x]]$
and $\sigma\in Aut(\mathbb{C})$ (note that except for the complex conjugation or the identity, $\sigma$ is not continuous!) we may define the power series with coefficients twisted by $\sigma$, which we denote by $g^{\sigma}(x)$. Now an element of $Aut(\mathbb{C})$ respects finite sums and products, so it follows
that for all $\sigma\in Aut(\mathbb{C})$ one has
$$
f^{\sigma}(z)=\frac{P^{\sigma}(z)}{Q^{\sigma}(z)}.
$$
From this (and the unique factorization of $\mathbf{C}[x]$) it follows that $P(z)$ and $Q(z)$ have rational coefficients.</p>
|
3,751,780 | <p>Given positive real numbers <span class="math-container">$a, b, c$</span> with <span class="math-container">$ab + bc + ca = 1.$</span> Prove that <span class="math-container">$$ \sqrt{a^{2} + 1} + \sqrt{b^{2} + 1} + \sqrt{c^{2} + 1}\leq 2(a+b+c).$$</span></p>
<p>I have no idea how to prove this inequality.</p>
| farruhota | 425,072 | <p>Alternatively, square both sides:
<span class="math-container">$$\small{2\left[\sqrt{(a^2+1)(b^2+1)}+\sqrt{(b^2+1)(c^2+1)}+\sqrt{(c^2+1)(a^2+1)}\right]}\le \\
3(a^2+b^2+c^2)+5$$</span>
By AM-GM:
<span class="math-container">$$2\sqrt{(a^2+1)(b^2+1)}\le a^2+b^2+2$$</span>
Hence, we need to prove:
<span class="math-container">$$2(a^2+b^2+c^2)+6\le 3(a^2+b^2+c^2)+5 \Rightarrow \\
1\le a^2+b^2+c^2$$</span>
which is true, because:
<span class="math-container">$$1=ab+bc+ca\le a^2+b^2+c^2.$$</span></p>
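<p>A randomized spot-check (a sketch; <span class="math-container">$c$</span> is solved from the constraint <span class="math-container">$ab+bc+ca=1$</span>, which requires <span class="math-container">$ab<1$</span>) is consistent with the inequality, which is an equality at <span class="math-container">$a=b=c=\frac{1}{\sqrt3}$</span>:</p>

```python
from math import sqrt
from random import Random

rng = Random(0)
for _ in range(1000):
    a = rng.uniform(0.01, 3.0)
    b = rng.uniform(0.01, 3.0)
    if a * b >= 1:
        continue
    c = (1 - a * b) / (a + b)          # enforces ab + bc + ca = 1 with c > 0
    lhs = sqrt(a * a + 1) + sqrt(b * b + 1) + sqrt(c * c + 1)
    assert lhs <= 2 * (a + b + c) + 1e-9
```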
|
2,932,305 | <p>What are the intercepts of the planes <span class="math-container">$x = 0$</span> and <span class="math-container">$2y + 3z = 12$</span>? The word intercept is confusing me because I don't understand if I should say they intersect at point <span class="math-container">$(0,6,0)$</span> or the intercept is at <span class="math-container">$y=6$</span>. </p>
<p><a href="https://i.stack.imgur.com/b3Gue.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b3Gue.png" alt="enter image description here"></a></p>
| vanmeri | 218,238 | <p>In <span class="math-container">$\mathbb{R}^3$</span>, it is the line
<span class="math-container">$(x, y, z) = t(0,-3,2) + (0,6,0)$</span>.
You may find this by taking any vector <span class="math-container">$(x, y, z)$</span> and asking when it satisfies both equations.
The planes aren't parallel.</p>
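<p>A quick check (a sketch) confirms that every point of the parametric line satisfies both plane equations:</p>

```python
# points of the line (x, y, z) = t*(0, -3, 2) + (0, 6, 0)
for t in (-2.0, 0.0, 1.0, 5.0):
    x, y, z = 0.0, -3.0 * t + 6.0, 2.0 * t
    assert x == 0.0                          # lies in the plane x = 0
    assert abs(2 * y + 3 * z - 12) < 1e-12   # lies in the plane 2y + 3z = 12
```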
|
272,144 | <p>Consider a multi-valued function <span class="math-container">$f(z)=\sqrt{(z-a)(z+\bar a)}, \Im{a}>0,\Re{a}>0$</span>. To make the function single-valued, one needs to make a branch cut. Suppose <span class="math-container">$a=e^{i\theta}$</span>; my choice of the branch cut is the arc <span class="math-container">$e^{it},t\in (\theta,\pi-\theta)$</span>. This uniquely defines my function <span class="math-container">$f(z)$</span>. Now I want to study the level curves of <span class="math-container">$f$</span>; how can I visualize them in Mathematica?</p>
<p><strong>Note:</strong> How to choose a branch so that the cut is part of a level curve (say <span class="math-container">$\Im f=0$</span>)?</p>
<p><strong>Update:</strong> In fact, since we know the effect of crossing the cut is a change of sign, we can define the radical by the following Riemann-Hilbert problem:
<span class="math-container">$$R_+(z)=-R_-(z),\quad z\in \Gamma,$$</span>
where <span class="math-container">$\Gamma$</span> is any branch cuts you want. Then up to some proper normalization, the solution is
<span class="math-container">$$\exp\{h(z)+C_\Gamma(\log(-1))(z)\},$$</span>
where <span class="math-container">$C_\Gamma$</span> is the Cauchy transform. If the branch cut is properly parametrized, the integral can be computed in Mathematica using <code>NIntegrate</code>.</p>
| josh | 81,539 | <p><strong>Edit3: Added a ComplexContourPlot of level curves over radial branch region. See below</strong></p>
<p>With these problems I find it helpful to draw the function in its entirety, then decide how to cut out an analytically-continuous, single-valued section of it. Unfortunately, in this case it's a little difficult to do this with the built-in functions, as they rely on default branch cuts which cause imperfections in the plots when applied globally; but we can plot around the cut to remedy this. First, identify the default branch cut of the function. I'll use <span class="math-container">$\theta=\pi/4$</span>:</p>
<pre><code>f[z_, a_] := Sqrt[(z - a) (z + Conjugate[a])]
theta = Pi/4;
Reduce[Arg[(z - Exp[I theta]) (z + Exp[-I theta])] == Pi, z]
(* (-(1/Sqrt[2]) < Re[z] < 0 && Im[z] == 1/Sqrt[2]) ||
Re[z] == 0 || (0 < Re[z] < 1/Sqrt[2] && Im[z] == 1/Sqrt[2]) *)
</code></pre>
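<p>This locus can be cross-checked numerically with Python's <code>cmath</code> (a hypothetical sketch, separate from the Mathematica session: the default cut of the square root sits where the radicand is a negative real, i.e. where its argument equals <span class="math-container">$\pi$</span>):</p>

```python
import cmath

a = cmath.exp(1j * cmath.pi / 4)             # theta = pi/4

def radicand(z):
    return (z - a) * (z + a.conjugate())

# on the segment Im z = 1/sqrt(2), |Re z| < 1/sqrt(2) the radicand equals
# Re(z)^2 - 1/2, a negative real number, so the default cut is crossed there
for x in (-0.5, -0.2, 0.3, 0.6):
    val = radicand(complex(x, 0.5 ** 0.5))
    assert abs(val.imag) < 1e-12 and val.real < 0
```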
<p>That's basically the segment in the z-plane from the point <span class="math-container">$(-1/\sqrt{2},1/\sqrt{2})$</span> to <span class="math-container">$(1/\sqrt{2},1/\sqrt{2})$</span>, together with the imaginary axis (where, per the <code>Reduce</code> output, the radicand is also a negative real). Now we can use <code>Plot3D</code> to plot around this locus and thereby never incur the default branch-cut (a little messy, but if you take your time and study the plots you'll understand it). Below I plot 16 color-coded sections of the function Im[f] to make it easier to see the surfaces.</p>
<pre><code> sa = Plot3D[
Im@f[z, Exp[I theta]] /. z -> x + I y, {x, 1/Sqrt[2], 2}, {y,
1/Sqrt[2], 2}, PlotStyle -> Darker@Yellow];
sb = Plot3D[
Im@f[z, Exp[I theta]] /. z -> x + I y, {x, 0, 1/Sqrt[2]}, {y,
1/Sqrt[2], 2}, PlotStyle -> Orange];
sc = Plot3D[-Im@f[z, Exp[I theta]] /. z -> x + I y, {x, -1/Sqrt[2],
0}, {y, 1/Sqrt[2], 2}, PlotStyle -> Red];
sd = Plot3D[-Im@f[z, Exp[I theta]] /.
z -> x + I y, {x, -2, -1/Sqrt[2]}, {y, 1/Sqrt[2], 2},
PlotStyle -> Green];
se = Plot3D[-Im@f[z, Exp[I theta]] /.
z -> x + I y, {x, -2, -1/Sqrt[2]}, {y, -2, 1/Sqrt[2]},
PlotStyle -> Yellow];
sf = Plot3D[-Im@f[z, Exp[I theta]] /. z -> x + I y, {x, -1/Sqrt[2],
0}, {y, -2, 1/Sqrt[2]}, PlotStyle -> Purple];
sg = Plot3D[
Im@f[z, Exp[I theta]] /. z -> x + I y, {x, 0, 1/Sqrt[2]}, {y, -2,
1/Sqrt[2]}, PlotStyle -> Magenta];
sh = Plot3D[
Im@f[z, Exp[I theta]] /. z -> x + I y, {x, 1/Sqrt[2], 2}, {y, -2,
1/Sqrt[2]}, PlotStyle -> Brown];
sa2 = Plot3D[-Im@f[z, Exp[I theta]] /. z -> x + I y, {x, 1/Sqrt[2],
2}, {y, 1/Sqrt[2], 2}, PlotStyle -> Darker@Yellow];
sb2 = Plot3D[-Im@f[z, Exp[I theta]] /. z -> x + I y, {x, 0,
1/Sqrt[2]}, {y, 1/Sqrt[2], 2}, PlotStyle -> Orange];
sc2 = Plot3D[
Im@f[z, Exp[I theta]] /. z -> x + I y, {x, -1/Sqrt[2], 0}, {y,
1/Sqrt[2], 2}, PlotStyle -> Red];
sd2 = Plot3D[
Im@f[z, Exp[I theta]] /. z -> x + I y, {x, -2, -1/Sqrt[2]}, {y,
1/Sqrt[2], 2}, PlotStyle -> Green];
se2 = Plot3D[
Im@f[z, Exp[I theta]] /. z -> x + I y, {x, -2, -1/Sqrt[2]}, {y, -2,
1/Sqrt[2]}, PlotStyle -> Yellow];
sf2 = Plot3D[
Im@f[z, Exp[I theta]] /. z -> x + I y, {x, -1/Sqrt[2], 0}, {y, -2,
1/Sqrt[2]}, PlotStyle -> Purple];
sg2 = Plot3D[-Im@f[z, Exp[I theta]] /. z -> x + I y, {x, 0,
1/Sqrt[2]}, {y, -2, 1/Sqrt[2]}, PlotStyle -> Magenta];
sh2 = Plot3D[-Im@f[z, Exp[I theta]] /. z -> x + I y, {x, 1/Sqrt[2],
2}, {y, -2, 1/Sqrt[2]}, PlotStyle -> Brown];
multiBranchPlot=Show[{sa, sb, sc, sd, se, sf, sg, sh, sa2, sb2, sc2, sd2, se2, sf2,
sg2, sh2}, PlotRange -> 2, BoxRatios -> {1, 1, 1},
PlotLabel -> Style["Im(f)", 16, Bold, Black]]
</code></pre>
<p><a href="https://i.stack.imgur.com/kMbWH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kMbWH.jpg" alt="enter image description here" /></a></p>
<p>Now it's easy to plot a single-valued section of the function with branch-cut for example <span class="math-container">$(-1/\sqrt{2},1/\sqrt{2})$</span>:</p>
<p><a href="https://i.stack.imgur.com/57nuZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/57nuZ.jpg" alt="enter image description here" /></a></p>
<p>Or a single-valued section with branch cuts <span class="math-container">$(-\infty,-1/\sqrt{2})\bigcup (1/\sqrt{2},\infty)$</span>:
<a href="https://i.stack.imgur.com/npszP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/npszP.jpg" alt="enter image description here" /></a></p>
<p>And here is one of two (analytically continuous, single-valued) branches described by the OP in the region <span class="math-container">$(\theta,\pi-\theta)$</span>:</p>
<pre><code>theColor = Darker@Blue;
pp1 = ParametricPlot3D[{Re@z, Im@z, Im@f[z, Exp[I theta]]} /.
z -> r Exp[I t], {r, 0, 2}, {t, theta, Pi/2},
RegionFunction -> Function[{x, y, z}, y > 1/Sqrt[2] + 0.005],
PlotPoints -> 100, PlotStyle -> theColor];
pp2 = ParametricPlot3D[{Re@z, Im@z, -Im@f[z, Exp[I theta]]} /.
z -> r Exp[I t], {r, 0, 2}, {t, theta, Pi/2},
RegionFunction -> Function[{x, y, z}, y < 1/Sqrt[2] - 0.005],
PlotPoints -> 100, PlotStyle -> theColor];
pp3 = ParametricPlot3D[{Re@z, Im@z, -Im@f[z, Exp[I theta]]} /.
z -> r Exp[I t], {r, 0, 2}, {t, Pi/2, Pi - theta},
RegionFunction -> Function[{x, y, z}, y > 1/Sqrt[2] + 0.005],
PlotPoints -> 100, PlotStyle -> theColor];
pp4 = ParametricPlot3D[{Re@z, Im@z, Im@f[z, Exp[I theta]]} /.
z -> r Exp[I t], {r, 0, 2}, {t, Pi/2, Pi - theta},
RegionFunction -> Function[{x, y, z}, y < 1/Sqrt[2] - 0.005],
PlotPoints -> 100, PlotStyle -> theColor];
radialBranch =
Show[{pp1, pp2, pp3, pp4}, PlotRange -> 2, BoxRatios -> {1, 1, 1}]
comboPlot =
Show[{radialBranch, multiBranchPlot, radialPlot},
PlotLabel -> Style["Im(f) with radial branch", 16, Black, Bold]]
</code></pre>
<p><a href="https://i.stack.imgur.com/cqKQy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cqKQy.jpg" alt="enter image description here" /></a></p>
<p><strong>Edit 3: Level curve plot over radial branch defined by sector <span class="math-container">$e^{it}, \theta<t<\pi-\theta$</span>:</strong></p>
<p>This is the code to generate level curves over the sector branch described above. We have to use <code>ComplexContourPlot</code> over the appropriate regions to ensure continuity of the level curves. The two black diagonal lines are the branch cuts of the indicated branch, and the red points are the singular points. Note the branch cuts do not actually touch the singular points; they only appear to in the plot.</p>
<pre><code>line1G = Graphics@Line[{{0, 0}, ReIm@(-2 + 2 I)}];
line2G = Graphics@Line[{{0, 0}, ReIm@(2 + 2 I)}];
s1G = Graphics@{PointSize[0.025], Red, Point@{-1/Sqrt[2], 1/Sqrt[2]}};
s2G = Graphics@{PointSize[0.025], Red, Point@{1/Sqrt[2], 1/Sqrt[2]}};
ccpTable = Table[
ccp1 =
ComplexContourPlot[
Im@f[z, Exp[I theta]] == levelVal, {z, 0, 2 + 2 I}];
ccp2 =
ComplexContourPlot[-Im@f[z, Exp[I theta]] == levelVal, {z, -2,
2 I}];
{ccp1, ccp2},
{levelVal, 1/10, 15/10, 1/10}
];
ccpTable2 = Table[
ccp1 =
ComplexContourPlot[
Im@f[z, Exp[I theta]] == levelVal, {z, 0, 2 + 2 I}];
ccp2 =
ComplexContourPlot[-Im@f[z, Exp[I theta]] == levelVal, {z, -2,
2 I}];
{ccp1, ccp2},
{levelVal, -15/10, -1/10, 1/10}
];
Show[{ccpTable, line1G, line2G, s1G, s2G, ccpTable2}, PlotRange -> 2,
PlotLabel -> Style["Level curves on Im(f)", 16, Bold, Black]]
</code></pre>
<p><a href="https://i.stack.imgur.com/icpSl.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/icpSl.jpg" alt="enter image description here" /></a></p>
|
80,056 | <p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p>
<p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and to do all sorts of things with them that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with chalk on a board that makes material easier to follow and understand (I should add the disclaimer that here I am relying on only a few reports from students and colleagues).</p>
<p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p>
<p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
| David White | 11,540 | <p>I took a course at PCMI some years ago from <a href="http://people.reed.edu/~davidp/homepage/teaching.html" rel="nofollow">David Perkinson</a> (Reed College). He did an amazing job and single-handedly convinced me it was possible to teach well from slides. Check out <a href="http://people.reed.edu/~davidp/pcmi/index.html" rel="nofollow">this link</a> to see examples of his slides. As the other answers have mentioned, it seems necessary to use slides only in conjunction with the board. Perkinson did this, but also included a useful trick: he created handouts from the slides for students to write on, but left blank spots in those handouts so they had to write the proofs themselves based on what he said, showed on the slides, and wrote on the board. </p>
<p>Professor Perkinson is also a wizard of sorts with Mathematica, and he was able to create awesome graphics using it. I don't think his Mathematica code is online, but I'll bet he'd be willing to share if someone emailed him. He may also have tricks to reduce prep time, as this was the sort of thing he liked thinking about.</p>
|
424,675 | <p>Just one simple question:</p>
<p>Let $\tau =(56789)(3456)(234)(12)$.</p>
<p>How many elements does the conjugacy class of $\tau$ contain? How do you solve this exercise?</p>
<p>The first step is to write it as a product of disjoint cycles, I guess. What's next? :)</p>
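<p>For a concrete check (my own sketch, assuming the ambient group is $S_9$ and that the cycles compose right to left; the cycle type, and hence the count, comes out the same under the opposite convention), composing the cycles gives the disjoint-cycle type $4+3+2$, so the class has $\frac{9!}{4\cdot 3\cdot 2}=15120$ elements:</p>

```python
from math import factorial

# the cycles of tau, applied rightmost-first (the other convention gives the
# same cycle type, hence the same class size)
cycles = [(5, 6, 7, 8, 9), (3, 4, 5, 6), (2, 3, 4), (1, 2)]

def apply_cycle(cycle, x):
    if x in cycle:
        return cycle[(cycle.index(x) + 1) % len(cycle)]
    return x

def tau(x):
    for cycle in reversed(cycles):
        x = apply_cycle(cycle, x)
    return x

# extract the disjoint-cycle type of tau as a permutation of {1, ..., 9}
seen, cycle_type = set(), []
for start in range(1, 10):
    if start not in seen:
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x, length = tau(x), length + 1
        cycle_type.append(length)

# class size in S_9: 9! divided by the product over lengths k of k^(m_k) * m_k!
size = factorial(9)
for k in set(cycle_type):
    m = cycle_type.count(k)
    size //= k**m * factorial(m)
```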
| Amzoti | 38,839 | <p>This is a difficult question to answer.</p>
<blockquote>
<p>"The FDM is the oldest and is based upon the application of a local
Taylor expansion to approximate the differential equations. The FDM
uses a topologically square network of lines to construct the
discretization of the PDE. This is a potential bottleneck of the
method when handling complex geometries in multiple dimensions. This
issue motivated the use of an integral form of the PDEs and
subsequently the development of the finite element and finite volume
techniques."
(<a href="http://www2.imperial.ac.uk/ssherw/spectralhp/papers/HandBook.pdf" rel="noreferrer">http://www2.imperial.ac.uk/ssherw/spectralhp/papers/HandBook.pdf</a>)</p>
</blockquote>
<p>Here are two references to review so you can get a better feel for these methods.</p>
<ul>
<li><p><a href="http://files.campus.edublogs.org/blog.nus.edu.sg/dist/4/1978/files/2012/01/CN4118R_Final_Report_U080118W_OliverYeo-1r6dfjw.pdf" rel="noreferrer">http://files.campus.edublogs.org/blog.nus.edu.sg/dist/4/1978/files/2012/01/CN4118R_Final_Report_U080118W_OliverYeo-1r6dfjw.pdf</a> (see page 10 for a very nice comparison in the types of problems they were interested in - computational fluid dynamics)</p></li>
<li><p>There are some nice references for these methods at <a href="http://www2.imperial.ac.uk/ssherw/spectralhp/papers/HandBook.pdf" rel="noreferrer">http://www2.imperial.ac.uk/ssherw/spectralhp/papers/HandBook.pdf</a> (See section 7 for very nice references)</p></li>
</ul>
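<p>To make the finite-difference idea concrete, here is a minimal 1-D sketch (my own illustration, not taken from the references above): the standard 3-point central difference for $-u''=1$ on $(0,1)$ with $u(0)=u(1)=0$, solved with the Thomas (tridiagonal) algorithm. Since the exact solution $u(x)=x(1-x)/2$ is quadratic, the truncation error term involving $u''''$ vanishes, and the scheme reproduces the exact values at the nodes up to rounding.</p>

```python
# Solve -u'' = 1 on (0, 1), u(0) = u(1) = 0, exact solution u(x) = x(1 - x)/2,
# using the 3-point central difference on a uniform grid of n interior points.
def solve_poisson_1d(n):
    h = 1.0 / (n + 1)
    a = [-1.0] * n          # sub-diagonal of -u_{i-1} + 2 u_i - u_{i+1} = h^2
    b = [2.0] * n           # diagonal
    c = [-1.0] * n          # super-diagonal
    d = [h * h] * n         # right-hand side f(x) = 1, scaled by h^2
    # Thomas algorithm: forward elimination ...
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # ... and back substitution
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [(i + 1) * h for i in range(n)], u

xs, u = solve_poisson_1d(99)
err = max(abs(ui - x * (1 - x) / 2) for x, ui in zip(xs, u))
```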
|
466,576 | <p>Some conventional math notations seem arbitrary to English speakers but were mnemonic to non-English speakers who started them. To give a simple example, Z is the symbol for integers because of the German word <em>Zahl(en)</em> 'number(s)'. What are some more examples?</p>
| rschwieb | 29,335 | <p>$\ln()$ for "logarithmus naturalis"?</p>
<p>My advisor also told me that the "socle of a ring" makes a little more sense when you know that "socle" is an architecture term for the support underneath a column or pedestal, and so the socle of a ring acts as a kind of "support for the ring." In some languages, the word for "pedestal" is something like "socle," so the meaning is less hidden there.</p>
<h2>Added</h2>
<p>When I put "socle" into google translate, it autodetects it as "plinth" which is a relatively better-known word in English. It turns into "zócalo" in Spanish, sòcol in Catalan, Sockel in German, zoccolo in Italian, cokół in Polish, soco in Romanian, and 虹晶 in Mandarin.</p>
|
1,516,363 | <p>If $f: (0,+ \infty) \rightarrow \mathbb{R}$ is continuous and $$f(x + y) = f(x) + f(y) ,$$ then $f$ is linear? </p>
<p>I saw somewhere that if $f$ is continuous, one can drop the condition $f(ax)= a f(x), \forall a, x \in \mathbb{R}$. Is this true?</p>
| JMP | 210,189 | <p>Let $y=(a-1)x$ so that $f(x+y)=f(x+(a-1)x)=f(ax)=f(x)+f((a-1)x)$</p>
<p>Write $(a-1)x$ as $(a-2)x+x)$ so that $f((a-1)x)=f((a-2)x)+f(x)$</p>
<p>Repeat this $k$ times until $0\le a-k \lt 1$, and let $a_1=a-k$.</p>
<p>So far we have $f(ax)=kf(x)+f(a_1)$</p>
<p>We continue for example by considering $a_1-0.1k_1$, $a_2-0.01k_2$ and so on.</p>
<p>This results in $f(ax)=kf(x)+k_1f(x)+\dots = (k+k_1/10+k_2/100+\dots)f(x)=af(x)$ by continuity.</p>
|
2,236,846 | <p>For $x\in A[a,b]$</p>
<p>$\sup_{x\in A}|f(x)|\ge\int_{a}^{b}f(x)dx$</p>
<p>I'm just wondering if this is an analysis result or if the result is slightly different to this?</p>
<p>Sorry I just realised it was a greater than sign not an equals!</p>
| Community | -1 | <p>No, for a counterexample take $f(x)= 0$ if $x\neq a$ and $f(a)=1$. Then $$\sup_{[a,b]}f(x)=1\neq 0=\int_a^b f(x)dx$$</p>
|
1,354,745 | <p>Let polynomial $p(z)=z^2+az+b$ be such that $a$and $b$ are complex numbers and $|p(z)|=1$ whenever $|z|=1$. Prove that $a=0$ and $b=0$.</p>
<p>I could not make much progress.
I let $z=e^{i\theta}$ and $a=a_1+ib_1$ and $b=a_2+ib_2$ </p>
<p>Using these values in $p(z)$ I got
$|p(z)|^2=1=(\cos (2\theta)-a_2\sin (\theta)+a_1\cos (\theta)+b_1)^2+(\sin(2\theta)+a_1\sin (\theta)+a_2\cos (\theta)+b_2)^2.$</p>
<p>But I don't see how to proceed further, nor can I think of any other approach.
So, someone please help.
I don't know complex analysis, so it would be more helpful if someone could provide hints/solutions that don't use complex analysis.</p>
| Mercy King | 23,304 | <p>For every $z\in \mathbb{C}$ we have
$$
|p(z)|^2=(z^2+az+b)(\bar{z}^2+\bar{a}\bar{z}+\bar{b})=|z|^4+a\bar{z}|z|^2+\bar{a}z|z|^2+|a|^2|z|^2+\bar{b}z^2+b\bar{z}^2+a\bar{b}z+\bar{a}b\bar{z}+|b|^2,
$$
in particular, when $|z|=1$, we have:
$$
|p(z)|^2=\bar{b}z^2+b\bar{z}^2+(a+\bar{a}b)\bar{z}+(\bar{a}+a\bar{b})z+|a|^2+|b|^2+1.
$$
Since $|p(z)|=1$ when $|z|=1$, we have:
$$\tag{1}
\bar{b}z^2+b\bar{z}^2+(a+\bar{a}b)\bar{z}+(\bar{a}+a\bar{b})z+|a|^2+|b|^2=0.
$$
Putting $z=-1,1,-i,i$ in (1), we get:
$$
\left\{
\begin{array}{lcc}
\bar{b}+b-(a+\bar{a}b)-(\bar{a}+a\bar{b})+|a|^2+|b|^2&=&0\\
\bar{b}+b+(a+\bar{a}b)+(\bar{a}+a\bar{b})+|a|^2+|b|^2&=&0\\
-\bar{b}-b-i(a+\bar{a}b)-i(\bar{a}+a\bar{b})+|a|^2+|b|^2&=&0\\
-\bar{b}-b+i(a+\bar{a}b)+i(\bar{a}+a\bar{b})+|a|^2+|b|^2&=&0
\end{array}\right.,
$$
and combining these four identities we have:
$$
4(|a|^2+|b|^2)=0.
$$
Thus $a=b=0$.</p>
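<p>A quick numerical sanity check of the last step (my own addition): summing $(1)$ over the four points $z=\pm1,\pm i$ kills the $z$, $\bar{z}$, $z^2$, $\bar{z}^2$ terms, so $\sum_{z\in\{\pm1,\pm i\}}\left(|p(z)|^2-1\right)=4(|a|^2+|b|^2)$ identically in $a,b$:</p>

```python
import random

random.seed(0)

def p(z, a, b):
    return z * z + a * z + b

# summing |p(z)|^2 - 1 over z = -1, 1, -i, i: the z, zbar, z^2, zbar^2 terms
# cancel and only 4(|a|^2 + |b|^2) survives, for every a and b
pts = (-1, 1, -1j, 1j)
max_dev = 0.0
for _ in range(200):
    a = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    b = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    lhs = sum(abs(p(z, a, b)) ** 2 - 1 for z in pts)
    rhs = 4 * (abs(a) ** 2 + abs(b) ** 2)
    max_dev = max(max_dev, abs(lhs - rhs))
```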
|
2,864,992 | <p>It starts by someone asking an exercise question that whether negation of</p>
<pre><code>2 is a rational number
</code></pre>
<p>is</p>
<pre><code>2 is an irrational number
</code></pre>
<p>Their argument is that they consider it incorrect if we include complex numbers, because a number that is not rational might be complex rather than irrational. </p>
<p>My argument is that, because the first proposition is always true (2 could not be anything but a rational number), any false sentence could serve as the negation of the first proposition, including the given sentence above. They said what I do is not "negation" in their sense.</p>
<p>So my question is: is it true that every false statement is the negation of some true statement? </p>
| robjohn | 13,854 | <p>Let
$$
f(x)=\log_2(1-x)+\sum_{k=0}^\infty x^{2^k}\tag1
$$
then $f(0)=0$ and
$$
f\!\left(x^2\right)=\log_2\left(1-x^2\right)+\sum_{k=1}^\infty x^{2^k}\tag2
$$
and therefore,
$$
f(x)-f\!\left(x^2\right)=x-\log_2(1+x)\tag3
$$
Thus, for $x\in(0,1)$,
$$
\begin{align}
f(1)
&=f(1)-f(0)\\[12pt]
&=\sum_{k=-\infty}^\infty\left[f\!\left(x^{2^k}\right)-f\!\left(x^{2^{k+1}}\right)\right]\\
&=\sum_{k=-\infty}^\infty\left(x^{2^k}-\log_2\left(1+x^{2^k}\right)\right)\tag4
\end{align}
$$
Expanding $\log(1+x)$ into its Taylor Series in $x$, we get
$$
\begin{align}
\int_0^1x^{a-1}\log(1+x)\,\mathrm{d}x
&=\int_0^1\sum_{k=1}^\infty\frac{(-1)^{k-1}x^{a-1+k}}k\,\mathrm{d}x\\
&=\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k(k+a)}\\
&=\frac1a\sum_{k=1}^\infty(-1)^{k-1}\left(\frac1k-\frac1{k+a}\right)\\
&=\frac1a\left(\sum_{k=1}^\infty\left(\frac1k-\frac1{k+a}\right)-2\sum_{k=1}^\infty\left(\frac1{2k}-\frac1{2k+a}\right)\right)\\
&=\frac1a\left(\sum_{k=1}^\infty\left(\frac1k-\frac1{k+a}\right)-\sum_{k=1}^\infty\left(\frac1k-\frac1{k+a/2}\right)\right)\\[3pt]
&=\frac{H(a)-H(a/2)}a\tag5
\end{align}
$$
where $H(a)$ are the <a href="https://math.stackexchange.com/a/2724620">Extended Harmonic Numbers</a>. Apply $(5)$ to get
$$
\int_0^1\log_2\left(1+x^{2^k}\right)\,\mathrm{d}x=\frac{H\!\left(2^{-k}\right)-H\!\left(2^{-k-1}\right)}{\log(2)}\tag6
$$
Integration of a monomial gives
$$
\int_0^1x^{2^k}\,\mathrm{d}x=\frac1{2^k+1}\tag7
$$
Integrating $(4)$ over $[0,1]$ and using $(6)$ and $(7)$ yields
$$
\begin{align}
f(1)
&=\lim_{n\to\infty}\left[\sum_{k=-n}^n\frac1{2^k+1}-\frac{H\!\left(2^n\right)-H\!\left(2^{-n-1}\right)}{\log(2)}\right]\\
&=\lim_{n\to\infty}\left[\frac12+n-\frac{\gamma+n\log(2)+O\!\left(2^{-n}\right)}{\log(2)}\right]\\[3pt]
&=\frac12-\frac{\gamma}{\log(2)}\tag8
\end{align}
$$</p>
<hr>
<p><strong>Problem with the Use of Equation $\boldsymbol{(3)}$</strong></p>
<p>As pointed out by Michael, the use of equation $(3)$ above ignores the fact that
$$
g(x)-g\!\left(x^2\right)=0\tag9
$$
does not mean $g(x)=0$. In fact, for any $1$-periodic $h$, i.e. $h(x)=h(x+1)$,
$$
g(x)=h\!\left(\log_2(-\log(x))\right)\tag{10}
$$
satisfies $(9)$. I have encountered this misbehavior before in <a href="https://math.stackexchange.com/a/84550">Does the family of series have a limit?</a> and <a href="https://math.stackexchange.com/a/2757375">Find $f'(0)$ if $f(x)+f(2x)=x\space\space\forall x$</a>.</p>
<p>Thus, the value given in $(8)$ is an average of the values of $f(1)$ given by $(4)$.</p>
<p>The function given in $(4)$ for $x=2^{-2^{-t}}$ has period $1$ in $t$. I have computed $f(1)$ from $(4)$ for $x\in\left[\frac14,\frac12\right]$; that is, the full period $t\in[-1,0]$. I get a plot very similar to that of Michael:</p>
<p><a href="https://i.stack.imgur.com/aKSZF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aKSZF.png" alt="enter image description here"></a></p>
<p>which oscillates between $-0.33274775$ and $-0.33274460$. The horizontal line is
$$
\frac12-\frac\gamma{\log(2)}=-0.33274618
$$
which is pretty close to the average of the minimum and maximum.</p>
<p>I am still looking for an a priori method to compute this oscillation.</p>
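<p>A quick numerical check of $(4)$ and $(8)$ (my own addition): truncating the bilateral sum at $|k|\le 40$ for several $x$ in the period $x\in[1/4,1/2]$ gives values clustered tightly around $\frac12-\frac{\gamma}{\log(2)}\approx-0.3327462$, with only the tiny oscillation discussed above:</p>

```python
import math

EULER_GAMMA = 0.5772156649015329

def f_of_1(x, kmax=40):
    # the bilateral sum (4): sum over k in Z of (x^(2^k) - log2(1 + x^(2^k)));
    # terms vanish rapidly in both directions, so truncation at |k| <= 40 is ample
    total = 0.0
    for k in range(-kmax, kmax + 1):
        t = x ** (2.0 ** k)
        total += t - math.log2(1.0 + t)
    return total

mean_value = 0.5 - EULER_GAMMA / math.log(2)   # about -0.3327462
samples = [f_of_1(x) for x in (0.26, 0.30, 0.35, 0.40, 0.45, 0.50)]
```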
|
272,114 | <p>Yesterday, my uncle asked me this question:</p>
<blockquote>
<p>Prove that $x = 2$ is the unique solution to $3^x + 4^x = 5^x$ where $x \in \mathbb{R}$.</p>
</blockquote>
<p>How can we do this? Note that this is not a Diophantine equation, since $x \in \mathbb{R}$, in case you are thinking of Fermat's Last Theorem.</p>
| Did | 6,179 | <p>One looks for roots of the function $f:x\mapsto a^x+1-b^x$ with $a=\frac34$ and $b=\frac54$.</p>
<ul>
<li>Since $a\lt1$, the function $x\mapsto a^x$ is decreasing. </li>
<li>Since $b\gt1$, the function $x\mapsto b^x$ is increasing. </li>
<li>Hence the function $f$ is decreasing.</li>
<li>And $f(\pm\infty)=\mp\infty$.</li>
</ul>
<p>As such, the function $f$ has exactly one root. Since $f(0)=1$ this root is positive.</p>
|
2,826,313 | <p>So I have been given the following equation : $z^6-5z^3+1=0$. I have to calculate the number of zeros (given $|z|>2$). I already have the following:</p>
<p>$|z^6| = 64$ and $|-5z^3+1| \leq 41$ for $|z|=2$. By Rouché's theorem, since $|z^6|>|-5z^3+1|$ on $|z|=2$ and $z^6$ has six zeros there (one zero of order six), the function $z^6-5z^3+1$ also has six zeros in $|z|<2$. However, how do I count the zeros $\textit{outside}$ the disk? Is there a standard way to do this? </p>
<p>Thanks in advance.</p>
| Henrik supports the community | 193,386 | <p>For polynomials (and other functions where you know the total number of roots) the standard way is simply to subtract the number of zeroes inside or on the boundary of the disc, which in this case turns out to be $6-6$.</p>
<p>For more complex functions you can sometimes use Rouche's theorem on $f(\frac{1}{z})$, but I'm not aware of any way that could really be called standard.</p>
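<p>For this particular polynomial one can double-check the count $6-6=0$ directly (my own sketch): substituting $w=z^3$ reduces the equation to a quadratic with roots $w=\frac{5\pm\sqrt{21}}{2}$, and all six roots of the original polynomial have modulus about $1.69$ or $0.59$, so indeed none lie outside $|z|=2$:</p>

```python
import cmath
import math

# z^6 - 5 z^3 + 1 is quadratic in w = z^3, with real positive roots
w_roots = ((5 + math.sqrt(21)) / 2, (5 - math.sqrt(21)) / 2)
roots = []
for w in w_roots:
    r = w ** (1.0 / 3.0)                      # real cube root of w > 0
    for m in range(3):
        roots.append(r * cmath.exp(2j * math.pi * m / 3))

residual = max(abs(z**6 - 5 * z**3 + 1) for z in roots)
largest_modulus = max(abs(z) for z in roots)
outside = sum(1 for z in roots if abs(z) > 2)
```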
|
3,002,668 | <p>I have to solve this inequality:</p>
<p><span class="math-container">$$5 ≤ 4|x − 1| + |2 − 3x|$$</span></p>
<p>and prove its solution with one (or 2 or 3) of this sentences:</p>
<p><span class="math-container">$$∀x∀y |xy| = |x||y|$$</span></p>
<p><span class="math-container">$$∀x∀y(y ≤ |x| ↔ y ≤ x ∨ y ≤ −x)$$</span></p>
<p><span class="math-container">$$∀x∀y(|x| ≤ y ↔ x ≤ y ∧ −x ≤ y)$$</span></p>
<p>The solution of the inequality is:</p>
<p><span class="math-container">$$\left(-\infty, \tfrac{1}{7}\right] \cup \left[\tfrac{11}{7}, \infty\right)$$</span></p>
<p>But I have a hard time with proving the solution with the sentence. E.g. if I choose the second one I get this:</p>
<p>5 ≤ 4(x-1) + (2-3x) ∨ 5 ≤ - [4(x-1) + (2-3x)] <=> </p>
<p>5 ≤ 4x - 4 + 2 - 3x ∨ 5 ≤ -(4x-4+2-3x) <=> </p>
<p>5 ≤ x-2 ∨ 5 ≤ -x + 2 <=> 7 ≤ x ∨ x ≤ 3</p>
<p>and that is wrong.</p>
<p>Can someone help me out, please? Sorry for bad English, that is not my first language.</p>
<p>EDIT:
Sentences should be applicable. I have another inequality which is already solved and it was done like this:</p>
<p>|3x| ≤ |2x − 1|</p>
<p>(x ∈ R | −1 ≤ x ≤1/5)</p>
<p>Sentences:</p>
<p>∀x∀y(y ≤ |x| ↔ y ≤ x ∨ y ≤ −x) (1)</p>
<p>∀x∀y(|x| ≤ y ↔ x ≤ y ∧ −x ≤ y) (2)</p>
<p>Solution:
We make another sentence which we have to prove:</p>
<p>∀x( |3x| ≤ |2x − 1| ↔ −1 ≤ x ≤ 1/5) (3)</p>
<p>|3x| ≤ |2x − 1| ⇔ |3x| ≤ 2x − 1 ∨ |3x| ≤ −(2x − 1) ⇔</p>
<p>3x ≤ 2x − 1 ∧ −3x ≤ 2x − 1 ∨ 3x ≤ −(2x − 1) ∧ −3x ≤ −(2x − 1) ⇔</p>
<p>x ≤ −1 ∧ 1 ≤ 5x ∨ 5x ≤ 1 ∧ −1 ≤ x ⇔</p>
<p>1/5 ≤ x ≤ −1 ∨ −1 ≤ x ≤1/5⇔ −1 ≤ x ≤1/5</p>
<p>So we proved sentence (3)</p>
| farruhota | 425,072 | <p>The sentences (rules) you stated will not be sufficient (applicable), because the given inequality is not a single absolute value, but a sum of two.</p>
<p>The standard <strong>algebraic</strong> method to solve the absolute value inequality is to divide into intervals:
<span class="math-container">$$5 ≤ 4|x − 1| + |2 − 3x| \Rightarrow \\
\begin{align}
1) \ &\begin{cases}x\le \frac23\\ 5\le -4(x-1)+(2-3x) \end{cases} \Rightarrow \ \ \begin{cases}x\le \frac23\\ x\le \frac1{7}\end{cases} \Rightarrow x\in (-\infty,\frac17] \ \ \text{or} \\
2) \ &\begin{cases}\frac23< x<1 \\ 5\le -4(x-1)-(2-3x) \end{cases}\ \Rightarrow \ \ \begin{cases}\frac23< x<1\\ x\le -3\end{cases} \Rightarrow x\in \emptyset\ \ \ \text{or} \\
3) \ &\begin{cases}1\le x \\ 5\le 4(x-1)-(2-3x) \end{cases} \Rightarrow \ \ \begin{cases}1\le x\\ x\ge \frac{11}7\end{cases} \Rightarrow x\in [\frac{11}7,+\infty)\ \ \ \ \ \ \ \ \end{align}$$</span>
Hence, the final solution is: <span class="math-container">$$x\in (-\infty,\frac17]\cup [\frac{11}7,+\infty).$$</span></p>
<p>==============================================================</p>
<p>Let's try:
<span class="math-container">$$5 ≤ 4|x − 1| + |2 − 3x| \iff \\
4|x-1|\ge 5-|2-3x| \iff \\
\big[4(x-1)\ge 5-|2-3x| \big] \ \ \lor \ \ \big[-4(x-1)\ge 5-|2-3x|\big] \iff \\
\big[|2-3x|\ge 9-4x \big] \ \ \lor \ \ \big[|2-3x|\ge 1+4x\big] \iff \\
\big[2-3x\ge 9-4x \lor -(2-3x)\ge 9-4x\big] \lor \big[2-3x\ge 1+4x \lor -(2-3x)\ge 1+4x\big] \iff \\
\big[x\ge 7 \lor x\ge \frac{11}7\big] \lor \big[x\le \frac17 \lor x\le -3\big] \iff \\
\big[x\ge \frac{11}7\big] \lor \big[x\le \frac17\big] \iff \\
x\in (-\infty,\frac17]\cup [\frac{11}7,+\infty)$$</span></p>
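<p>A quick exact check of the final answer (my own addition, in rational arithmetic so that the closed endpoints $x=\frac17$ and $x=\frac{11}7$ are tested exactly):</p>

```python
from fractions import Fraction

def holds(x):
    # the original inequality, evaluated exactly in rational arithmetic
    return 5 <= 4 * abs(x - 1) + abs(2 - 3 * x)

def in_solution_set(x):
    return x <= Fraction(1, 7) or x >= Fraction(11, 7)

# scan a fine rational grid (step 1/700) that contains both endpoints exactly
grid = [Fraction(k, 700) for k in range(-2100, 2101)]
mismatches = [x for x in grid if holds(x) != in_solution_set(x)]
```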
|
3,076,253 | <p>Problem: Let <span class="math-container">$I=[0,1]$</span> be the closed unit interval. Suppose <span class="math-container">$f$</span> is a continuous mapping of <span class="math-container">$I$</span> into <span class="math-container">$I$</span>. Prove that <span class="math-container">$f(x)=x$</span> for at least one <span class="math-container">$x \in I$</span>. </p>
<p>Attempt: </p>
<p>We have a known result: </p>
<p>Let <span class="math-container">$g$</span> be a continuous real valued function on a metric space <span class="math-container">$X$</span>. Then <span class="math-container">$Z(g)=\{p \in X: g(p)=0 \}$</span>, i.e. the zero set of <span class="math-container">$g$</span> is closed. [Proof is simple using "inverse image of a closed set is closed under continuous map"] </p>
<p>Now, we construct <span class="math-container">$F:I \to I$</span>, such that <span class="math-container">$F(x) =f(x)-x$</span>, which is definitely continuous, being a linear combination of two continuous functions.<br>
[It is assumed that <span class="math-container">$F(x)=f(x)-x>0$</span>, <span class="math-container">$\forall x\in I$</span>. [ If <span class="math-container">$x>f(x)$</span>, consider the function <span class="math-container">$F_1(x)=x-f(x)$</span> ]. Otherwise, if the function <span class="math-container">$F(x)$</span> changes sign somewhere within the interval, it must attain the value <span class="math-container">$0$</span>, giving us <span class="math-container">$f(x)=x$</span> ].</p>
<p>But we know that <span class="math-container">$Z(F)$</span> is closed, hence it cannot be <span class="math-container">$\phi$</span>. We are done. </p>
<p>Is this at all a valid proof? </p>
<p>Edit: Being doubtful, I write up another approach:</p>
<p>The function <span class="math-container">$f$</span> maps into <span class="math-container">$I$</span>, i.e. itself. Hence, to be <span class="math-container">$f(x)>x$</span> for every value of <span class="math-container">$x$</span>, its range set would have to exceed <span class="math-container">$I$</span>, and the best case scenario would be the identity map, for which <span class="math-container">$f(x)=x$</span> for all <span class="math-container">$x$</span> . Otherwise, <span class="math-container">$F(x)$</span> must change sign. </p>
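<p>For a concrete illustration of the sign-change argument (a numerical sketch with a sample map of my own choosing, $f(x)=\cos x$, which is continuous and maps $[0,1]$ into $[\cos 1, 1]\subset[0,1]$): here $F(x)=f(x)-x$ satisfies $F(0)>0>F(1)$, and bisection locates the fixed point.</p>

```python
import math

def f(x):
    # a sample continuous map of [0, 1] into itself: cos maps [0, 1] into [cos 1, 1]
    return math.cos(x)

def F(x):
    return f(x) - x

# F(0) = 1 > 0 and F(1) = cos(1) - 1 < 0, so the IVT yields a root of F,
# i.e. a fixed point of f; bisection locates it
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2.0
    if F(mid) > 0:
        lo = mid
    else:
        hi = mid
fixed_point = (lo + hi) / 2.0
```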
| Anthony Ter | 588,654 | <p>If <span class="math-container">$\sigma \in S_n$</span> then <span class="math-container">$\sigma \circ \tau^{-1} \in S_n$</span>, and <span class="math-container">$\tau^{-1} \circ \sigma \in S_n$</span> so multiplying by <span class="math-container">$\sigma$</span> on either side is a surjective map <span class="math-container">$S_n \rightarrow S_n$</span>.</p>
|
1,090,620 | <p>I don't know how to solve this limit</p>
<p>$$ \lim_{y\to0} \frac{x e^ { \frac{-x^2}{y^2}}}{y^2}$$</p>
<p>$\frac{1}{e^ { \frac{x^2}{y^2}}} \to 0$</p>
<p>but $\frac{x}{y^2} \to +\infty$</p>
<p>Does this limit present the indeterminate form $0 \cdot \infty$?</p>
| Jack D'Aurizio | 44,121 | <p>$$\lim_{y\to 0}\frac{x}{y^2}e^{-\frac{x^2}{y^2}} = \lim_{n\to +\infty} nx e^{-nx^2}\leq\lim_{n\to +\infty}\frac{nx}{\left(1+\frac{n}{2}x^2\right)^2}=0. $$</p>
|
513,500 | <p>Suppose $f,g$ are analytic functions in domain $D$.If $fg=0$, I want to prove either $f(z)=0$ or $g(z)=0$. </p>
| Anupam | 84,126 | <p>If possible, let $f,g$ be not identically zero functions. Let $K$ be any compact subset of $D$. If $Z_f$ and $Z_g$ denote the set zeros of $f$ and $g$, then $K\cap Z_f$ and $K\cap Z_g$ are finite sets. Now we have, $\vert K\cap Z_{fg}\vert\leq \vert K\cap Z_f\vert+\vert K\cap Z_g\vert<\infty$. It follows that $Z_{fg}$ is discrete and so $fg$ is not identically zero.</p>
|
1,111,935 | <p>For a given $n \in \Bbb N$, how do you find the minimum $m \in \Bbb N$ which satisfies the inequality below?</p>
<p>$$3^{3^{3^{3^{\unicode{x22F0}^{3}}}}} (m \text{ times}) > 9^{9^{9^{9^{\unicode{x22F0}^{9}}}}} (n \text{ times})$$</p>
<p>What I have tried so far is decomposing the $9$ on the right side as $3\cdot 3$ or as $3^2$, but neither way got me very far, and I couldn't find a pattern.</p>
| Booldy | 261,261 | <p>Let $a_1=3,\;b_1=9,\;a_{n+1}=3^{a_n},\;b_{n+1}=9^{b_n}$; obviously $a_n, b_n \in \Bbb N$.</p>
<p>$a_1<b_1$. Suppose $a_n<b_n$; then</p>
<p>$$a_{n+1}=3^{a_n}<3^{b_n}<9^{b_n}=b_{n+1}$$</p>
<p>Also $a_2>2b_1$. Suppose $a_n>2b_{n-1}$; then</p>
<p>$$a_{n+1}=3^{a_n}\ge3^{2b_{n-1}+1}=3\cdot 9^{b_{n-1}}=3b_n>2b_n$$</p>
<p>Thus by induction $a_n<b_n<\dfrac {a_{n+1}} 2<a_{n+1}$ for every $n$, so the smallest $m$ that works is $m=n+1$.</p>
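<p>A quick integer check of the interlacing at the first few levels (my own sketch; beyond $a_3$ the towers are compared through base-$3$ logarithms, using $\log_3 a_{n+1}=a_n$ and $\log_3 b_{n+1}=2b_n$):</p>

```python
# a_1 = 3, a_{n+1} = 3^(a_n)  (tower of 3s);  b_1 = 9, b_{n+1} = 9^(b_n)  (tower of 9s)
a1, b1 = 3, 9
a2, b2 = 3**a1, 9**b1              # 27 and 387420489
a3 = 3**a2                         # 3^27 = 7625597484987

# next level via log base 3:  log3(a_3) = a_2,  log3(b_3) = 2*b_2,  log3(a_4) = a_3
a3_lt_b3 = a2 < 2 * b2
b3_lt_a4 = 2 * b2 < a3
```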
|
2,917,299 | <p><strong>Let $h: R\to S$ be a ring homomorphism. Let $P\subset R$ be a prime ideal.</strong>
<strong>Give an example to show that in general $h(P)$ is not an ideal of $S$</strong></p>
<p>The first thing I thought of is to take $R=\mathbb{Z}$ and $P=(2)$, but I do not know how to choose $S$, or whether this works at all; any help is appreciated.</p>
<p><em>Note:</em> $R$ and $S$ are commutative rings with unit.</p>
| Bajo Fondo | 411,935 | <p>You can consider $R=\mathbb{R}[x]$ and $S=\mathbb{C}$ and $h:R \to S$ as in $h(p)=p(1)$.</p>
<p>You can see that $\langle x\rangle$ is prime in $R$, and $h(\langle x\rangle)=\mathbb{R}$, which is not an ideal of $\mathbb{C}$.</p>
|
3,287,710 | <p>I want to calculate the length of a clothoid segment from the following available information.</p>
<ol>
<li>initial radius of clothoid segment </li>
<li>final radius of clothoid segment</li>
<li>angle (I am not really sure which angle this is, and it's not documented anywhere)</li>
</ol>
<p>As a test case: I need to find the length of a clothoid (left) that starts at <span class="math-container">$(1000, 0)$</span> and ends at approximately <span class="math-container">$(3911.5, 943.3)$</span>. The arguments are: <span class="math-container">$initialRadius=10000$</span>, <span class="math-container">$endRadius=2500$</span>, <span class="math-container">$angle=45(\text{deg})$</span>.</p>
<p>Previously I have worked on a similar problem where initial radius, final radius, and length are given. So I want to get the length so I can solve it the same way.</p>
<p>I am working on a map conversion problem. The format does not specify what are the details of this angle parameter.</p>
<p>Please help. I have been stuck at this for 2 days now.</p>
| Robert Israel | 8,508 | <p>Switch to polar coordinates <span class="math-container">$x = r \cos(\theta)$</span>, <span class="math-container">$y=r \sin(\theta)$</span>, and you can explicitly parametrize your curve as <span class="math-container">$r = R(\theta)$</span>. The <a href="https://math.stackexchange.com/questions/76708/how-to-determine-a-shape-is-convex-by-giving-polar-form-polynomial-equation/76745#76745">criterion</a> for a smooth polar curve
to be convex is <span class="math-container">$r^2 + 2 (r')^2 - r r'' \ge 0$</span> for all <span class="math-container">$\theta$</span>, which is rather a mess here. Numerically minimizing <span class="math-container">$\alpha$</span> subject to the constraint <span class="math-container">$R^2 + 2 (R')^2 - R R'' = 0$</span>, I find that the critical <span class="math-container">$\alpha$</span> is approximately <span class="math-container">$1.44224957028754$</span>.</p>
|
818,161 | <p>Suppose that repetitions are not allowed.</p>
<p>There are $6 \cdot 5 \cdot 4 \cdot 3 = 360$ four-digit numbers that can be formed from the digits $1,2,3,5,7,8$.</p>
<p>How many of them contain the digits $3$ and $5$?</p>
<p>I thought that I could subtract from the total number of numbers those,that do not contain $3 \text{ and } 5$.I thought that the latter is equal to $4 \cdot 3 \cdot 2$,because then we can only use the numbers $1,2,7,8$.</p>
<p>So,the result would be $6 \cdot 5 \cdot 4 \cdot 3-4 \cdot 3 \cdot 2=336$,but I found in my textbook an other result..</p>
<p>Is the way I did it wrong??</p>
| André Nicolas | 6,312 | <p>You have used a correct Stars and Bars argument to show that there are $462$ ways to distribute $6$ objects in $6$ boxes. </p>
<p>However, if we assume that the dice are fair, and do not influence each other, then these $462$ possibilities are <em>not all equally likely</em>.</p>
<p>Let us look at a much smaller example, two identical coins. There are $3$ ways to distribute these into $2$ boxes. However, in this case it is fairly clear that the probability of two heads is $\frac{1}{4}$ and not $\frac{1}{3}$. For whether the coins are distinguishable or not, it should make no difference to the probability if we toss the coins sequentially and not simultaneously. And sequential tossing tells us the probability is $\frac{1}{2}\cdot\frac{1}{2}$.</p>
<p>Roughly speaking, an extreme Stars and Bars case, such as all balls in the third box, has smaller probability than any specific more "even" distribution.</p>
<p><strong>Remark:</strong> To see more informally that a probability model based on distinguishable dice is appropriate, assume that otherwise indistinguishable dice are made different by writing IDs on them with invisible ink. The writing should not affect the probability of events that do not involve the IDs, such as all six numbers being obtained. </p>
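<p>A small enumeration makes the non-uniformity explicit (my own addition): for two fair coins the $4$ ordered outcomes give $P(\text{two heads})=\frac14$, and for six dice the $462$ multisets are realized by very different numbers of the $6^6=46656$ equally likely ordered outcomes, e.g. the extreme multiset $\{1,1,1,1,1,1\}$ by a single outcome but $\{1,2,3,4,5,6\}$ by $6!=720$ of them:</p>

```python
from itertools import product
from math import comb

# two fair coins: four equally likely ordered outcomes, so P(HH) = 1/4, not 1/3
coin_outcomes = list(product("HT", repeat=2))
p_two_heads = sum(1 for o in coin_outcomes if o == ("H", "H")) / len(coin_outcomes)

# six dice: stars and bars counts the multisets, C(11, 5) = 462, but the
# equally likely sample space is the 6^6 ordered outcomes
n_multisets = comb(11, 5)
ordered = list(product(range(1, 7), repeat=6))
all_ones = sum(1 for o in ordered if o == (1,) * 6)          # the extreme multiset
all_distinct = sum(1 for o in ordered if len(set(o)) == 6)   # the multiset {1,...,6}
```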
|
1,705,481 | <blockquote>
<p>$$\sum_{n=1}^{\infty} \frac{\ln(n)}{n^2}$$</p>
</blockquote>
<p>I have tried the limit comparison test: with $\frac{1}{n}$ I got $0$, and with $\frac{1}{n^2}$ I got $\infty$.</p>
<p>What should I try?</p>
| Olivier Oloa | 118,798 | <p>Since $ x \mapsto \dfrac{\ln x}{x^2}$ is positive and decreasing for $x \ge \sqrt{e}$, one may use the <a href="https://en.wikipedia.org/wiki/Integral_test_for_convergence" rel="nofollow">integral test</a>:
$$
\sum_{n=1}^N\frac{\log\left(n\right)}{n^{2}}\leq \frac{\log\left(N\right)}{N^{2}}+\int_1^N\frac{\log\left(t\right)}{t^2}dt, \quad N\geq1,
$$ and observe that, as $N \to \infty$,
$$
0<\int_1^N\frac{\log\left(t\right)}{t^2}dt=1-\frac{1+\log N}N \to 1.
$$</p>
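<p>A quick numerical check (my own addition): the partial sums stay below the stated bound and increase to a limit just below $1$ (in fact to $-\zeta'(2)\approx 0.93755$):</p>

```python
import math

def partial_sum(N):
    return sum(math.log(n) / (n * n) for n in range(1, N + 1))

def bound(N):
    # the right-hand side above: log(N)/N^2 + integral from 1 to N of log(t)/t^2 dt
    return math.log(N) / (N * N) + 1.0 - (1.0 + math.log(N)) / N

Ns = (10, 100, 1000, 10000)
sums = [partial_sum(N) for N in Ns]
```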
|
1,705,481 | <blockquote>
<p>$$\sum_{n=1}^{\infty} \frac{\ln(n)}{n^2}$$</p>
</blockquote>
<p>I have tried the limit comparison test: with $\frac{1}{n}$ I got $0$, and with $\frac{1}{n^2}$ I got $\infty$.</p>
<p>What should I try?</p>
| Solumilkyu | 297,490 | <p>We first claim that there exists a positive integer $N$ such that
$$\ln(n)\leq n^{1/2},\quad\forall n>N.$$
By L'Hopital's Rule,
$$\lim_{n\rightarrow\infty}\frac{\ln(n)}{n^{1/2}}=
\lim_{n\rightarrow\infty}\frac{1/n}{\frac{1}{2}n^{-1/2}}=
2\lim_{n\rightarrow\infty}\frac{1}{n^{1/2}}=0.$$
It follows that there exists a positive integer $N$, such that for every $n>N$,
$$\frac{\ln(n)}{n^{1/2}}\leq 1\quad\mbox{or}\quad \ln(n)\leq n^{1/2},$$
as desired. Therefore
$$\sum_{n=1}^\infty\frac{\ln(n)}{n^2}=
\sum_{n=1}^N\frac{\ln(n)}{n^2}+\sum_{n=N+1}^\infty\frac{\ln(n)}{n^2}
\leq
\sum_{n=1}^N\frac{\ln(n)}{n^2}+\sum_{n=N+1}^\infty\frac{1}{n^{3/2}},$$
where the right series converges. Hence $\sum_{n=1}^\infty\frac{\ln(n)}{n^2}$
converges.</p>
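<p>A quick numerical sketch of the key inequality: in fact $\ln(n)\le\sqrt{n}$ holds for every $n\ge 1$, since the minimum of $\sqrt{x}-\ln x$ on $(0,\infty)$ is attained at $x=4$ and is positive, so one could even take $N=1$ (the range below is illustrative):</p>

```python
import math

# check ln(n) <= sqrt(n) over a large range; the minimum of sqrt(x) - ln(x)
# on (0, inf) is at x = 4, where it equals 2 - ln(4) > 0
worst = min(math.sqrt(n) - math.log(n) for n in range(1, 10001))
print(worst)  # positive, attained at n = 4
```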
|
671,160 | <p>I was in maths class and found an interesting question:
find the limit $\lim_{(x,y)\to (0,0)} \frac{2x}{x^2+x+y^2}$, if it exists. One of my friends did this question by transforming to polar coordinates; the limit became $$\lim_{r\to 0}\frac{2\cos \theta}{r^2+\cos\theta}=\frac{2\cos \theta}{\cos\theta}=2$$ and he argued that this method sweeps out the entire plane, so the computation is valid. I instead chose the family of paths given by $y^2=mx$, substituted into the limit, and obtained $\lim_{x\to0}\frac{2}{x+m+1}=\frac{2}{m+1}$, which depends on $m$ and hence means the limit does not exist. Which is correct? Is there a general method to establish the existence and value of such a limit? I know the $\epsilon$-$\delta$ method, but for that one must already know the limit, so how does one find the limit in the first place? Does the polar conversion help with this problem? And if one method shows that the limit exists, how can we be sure that every other method yields the same existence and the same value?</p>
| Martingalo | 127,445 | <p>Your computation is right. If you substitute $y^2=mx$ you get that the limit depends on the path you take, hence it does not exist. Another way to show that it does not exist is taking the iterated limits:</p>
<p>$$\lim_{x\to 0}\lim_{y\to 0}\frac{2x}{x^2+x+y^2}=2$$
while</p>
<p>$$\lim_{y\to 0}\lim_{x\to 0}\frac{2x}{x^2+x+y^2}=0$$</p>
<p>so it does not exist.</p>
<p>The error in the argument with polar coordinates is that the cancellation $\frac{2\cos \theta}{\cos\theta}=2$ fails at the angles $\theta=\pi/2+k\pi$, $k\in \mathbb{Z}$, where $\cos\theta=0$; moreover, along a general path $\theta$ may vary with $r$.</p>
<p>A standard method to see if there is a limit is to do exactly what you did and make sure it holds for any arbitrary $r>0$ and angle $\theta \in [0,2\pi]$.</p>
<p>Also, trying to upperbound: Fix $\varepsilon>0$ and find $\delta$ such that for all $(x,y)$ with $\|(x,y)\| = \sqrt{x^2+ y^2} < \delta$ then</p>
<p>$$\left|\frac{2x}{x^2+x+y^2}\right| < \varepsilon$$
but the latter one might be tricky.</p>
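<p>A small numerical sketch (the sample values of $m$ and the base point are illustrative) makes the path dependence visible: along $y^2=mx$ the function approaches $2/(m+1)$, which varies with $m$.</p>

```python
def f(x, y):
    return 2 * x / (x ** 2 + x + y ** 2)

# approach (0,0) along the parabolas y^2 = m*x; the value tends to 2/(m+1)
for m in (0.0, 1.0, 3.0):
    x = 1e-9
    y = (m * x) ** 0.5
    print(m, f(x, y))
```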
|
671,160 | <p>I was in maths class and found an interesting question:
find the limit $\lim_{(x,y)\to (0,0)} \frac{2x}{x^2+x+y^2}$, if it exists. One of my friends did this question by transforming to polar coordinates; the limit became $$\lim_{r\to 0}\frac{2\cos \theta}{r^2+\cos\theta}=\frac{2\cos \theta}{\cos\theta}=2$$ and he argued that this method sweeps out the entire plane, so the computation is valid. I instead chose the family of paths given by $y^2=mx$, substituted into the limit, and obtained $\lim_{x\to0}\frac{2}{x+m+1}=\frac{2}{m+1}$, which depends on $m$ and hence means the limit does not exist. Which is correct? Is there a general method to establish the existence and value of such a limit? I know the $\epsilon$-$\delta$ method, but for that one must already know the limit, so how does one find the limit in the first place? Does the polar conversion help with this problem? And if one method shows that the limit exists, how can we be sure that every other method yields the same existence and the same value?</p>
| copper.hat | 27,978 | <p>There is no limit. Choose $x>0, m>0$, and $y=\sqrt{mx}$, then ${2x \over x^2 +x + y^2} = {2x \over x^2+(1+m)x} = { 2 \over x+(1+m)}$, and $\lim_ {x \downarrow 0} { 2 \over x+(1+m)} = { 2 \over 1+m}$. Since $m>0$ was arbitrary, there is no limit of the original function.</p>
<p>Your friend's technique would need to allow $\theta$ to be arbitrary as well, otherwise the limit is only taken over radial lines.</p>
|
45,163 | <p>I would like to get recommendations for a text on "advanced" vector analysis. By "advanced", I mean that the discussion should take place in the context of Riemannian manifolds and should provide coordinate-free definitions of divergence, curl, etc. I would like something that has rigorous theory but also plenty of concrete examples and a mixture of theoretical and concrete exercises.</p>
<p>The text that I have seen that comes closest to what I'm looking for is Janich's <a href="http://rads.stackoverflow.com/amzn/click/1441931449" rel="noreferrer">Vector Analysis</a>. The Hatcheresque style of writing in this particular text though isn't really suitable for me.</p>
<p>Looking forward to your recommendations, thanks.</p>
| J126 | 2,838 | <p>You might want to check out <a href="http://rads.stackoverflow.com/amzn/click/0486640396" rel="nofollow">Tensor Analysis on Manifolds</a> by Bishop and Goldberg.</p>
|
45,163 | <p>I would like to get recommendations for a text on "advanced" vector analysis. By "advanced", I mean that the discussion should take place in the context of Riemannian manifolds and should provide coordinate-free definitions of divergence, curl, etc. I would like something that has rigorous theory but also plenty of concrete examples and a mixture of theoretical and concrete exercises.</p>
<p>The text that I have seen that comes closest to what I'm looking for is Janich's <a href="http://rads.stackoverflow.com/amzn/click/1441931449" rel="noreferrer">Vector Analysis</a>. The Hatcheresque style of writing in this particular text though isn't really suitable for me.</p>
<p>Looking forward to your recommendations, thanks.</p>
| ItsNotObvious | 9,450 | <p>I have actually found something that comes pretty close to what I was looking for: Morita's <a href="http://rads.stackoverflow.com/amzn/click/0821810456">Geometry of Differential Forms</a>. While not a full-blown Riemannian geometry text, it seems to strike a nice balance between theory and computation and discusses many of the same topics discussed in the Janich book referenced in my question. In addition to concrete examples, it also has detailed solutions to the exercises. </p>
|
4,147,126 | <p>Let <span class="math-container">$\ p_n\ $</span> be the <span class="math-container">$\ n$</span>-th prime number.</p>
<blockquote>
<p>Does the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem" rel="nofollow noreferrer">prime number theorem</a> ,</p>
<p><span class="math-container">$\Large{\lim_{x\to\infty}\frac{\pi(x)}{\left[ \frac{x}{\log(x)}\right]} = 1},$</span></p>
<p>imply that:</p>
<p><span class="math-container">$ \displaystyle\lim_{n\to\infty}\ \frac{p_n}{p_{n+1}} = 1\ ?$</span></p>
</blockquote>
<p>Edit: I totally get where the vote-to-closes come from and I kind of agree with them. Yeah this is not the question I intended to ask actually. I think I've done an X-Y communication thingy. I'll leave the question and accept the answer though. But I have learned something about prime numbers along the way in reading the answers...</p>
| OmG | 356,329 | <p>You can also use results from the <a href="https://en.wikipedia.org/wiki/Prime_gap" rel="nofollow noreferrer">prime gap</a> problem. Here, as <span class="math-container">$\lim_{n\to\infty} \frac{g_n}{p_n} = 0$</span> and <span class="math-container">$g_n = p_{n+1} - p_n$</span>, you can conclude that <span class="math-container">$\lim_{n\to\infty}\frac{p_{n+1}}{p_n} = 1$</span>.</p>
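<p>A quick empirical sketch (the sieve limit and the bound are illustrative): sieve the primes and inspect consecutive ratios; the largest ratios occur among the very first primes, while near $10^5$ the ratios are already within $10^{-3}$ of $1$.</p>

```python
def primes_up_to(limit):
    # sieve of Eratosthenes
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag]

ps = primes_up_to(100000)
ratios = [ps[i + 1] / ps[i] for i in range(len(ps) - 1)]
print(max(ratios[:10]), max(ratios[-10:]))  # early ratios vs late ratios
```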
|
3,980,845 | <p>Given <span class="math-container">$X,Y$</span> i.i.d., where <span class="math-container">$\mathbb{P}(X>x)=e^{-x}$</span> for <span class="math-container">$x\geq0$</span> and <span class="math-container">$\mathbb{P}(X>x)=1$</span> for all <span class="math-container">$x<0$</span>,
and <span class="math-container">$V=\min(X,Y)$</span>,<br />
determine how <span class="math-container">$\mathbb{E}[V|X]$</span> is distributed.</p>
<p>I've found that <span class="math-container">$F_{V|X=x}(v)=\left\{\begin{array}{rcl} 0&v\leq0\\1-e^{-v}&0\leq v< x\\1&\text{else}\end{array}\right.$</span><br />
I've tried using the formula <span class="math-container">$\mathbb{E}[V|X=x]=\int_{-\infty}^{\infty}vf_{V|X=x}(v)\,dv$</span> and got <span class="math-container">$\mathbb{E}[V|X]=-xe^{-x}-e^{-x}+1$</span>, but the answer I am comparing against says <span class="math-container">$\mathbb{E}[V|X]\sim U(0,1)$</span>.<br />
I am not sure how to get this distribution. Any help?</p>
| StubbornAtom | 321,264 | <p>The conditional pdf <span class="math-container">$f_{V\mid X}$</span> that you write is not defined since the joint density <span class="math-container">$f_{V,X}$</span> does not exist wrt Lebesgue measure. This is because <span class="math-container">$V=X$</span> has a positive probability.</p>
<p>You can write <span class="math-container">$$V=\min(X,Y)=X\mathbf1_{X<Y}+Y\mathbf1_{X>Y}\,,$$</span></p>
<p>where <span class="math-container">$\mathbf1_A$</span> is an indicator variable.</p>
<p>Now</p>
<p><span class="math-container">\begin{align}
\mathbb E\left[V\mid X\right]&=\mathbb E\left[X\mathbf1_{X<Y}\mid X\right]+\mathbb E\left[Y\mathbf1_{X>Y}\mid X\right]
\\&=X\mathbb E\left[\mathbf1_{X<Y}\mid X\right]+\mathbb E\left[Y\mathbf1_{X>Y}\mid X\right]
\end{align}</span></p>
<p>For fixed <span class="math-container">$x>0$</span>, we have <span class="math-container">$$\mathbb E\left[\mathbf1_{x<Y}\right]=\mathbb P\left(Y>x\right)=e^{-x}$$</span> and <span class="math-container">$$\mathbb E\left[Y\mathbf1_{x>Y}\right]=\int_0^xye^{-y}\,\mathrm{d}y=1-e^{-x}-xe^{-x}$$</span></p>
<p>This suggests that <span class="math-container">$$\mathbb E\left[V\mid X\right]=Xe^{-X}+1-e^{-X}-Xe^{-X}=1-e^{-X}$$</span></p>
<p>You can verify this has a uniform distribution on <span class="math-container">$(0,1)$</span>.</p>
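<p>A Monte Carlo sketch of that last verification (the sample size and seed are arbitrary choices): since <span class="math-container">$X$</span> has CDF <span class="math-container">$1-e^{-x}$</span>, the probability integral transform says <span class="math-container">$1-e^{-X}$</span> is uniform on <span class="math-container">$(0,1)$</span>.</p>

```python
import math
import random

random.seed(1)
# X ~ Exp(1); transform to the claimed conditional expectation 1 - exp(-X)
samples = [1 - math.exp(-random.expovariate(1.0)) for _ in range(200000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # close to 1/2 and 1/12, the moments of U(0,1)
```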
|
182,510 | <p>Is there a continuous probability measure $\sigma$ with full support on the unit circle in the complex plane such that $\hat{\sigma}(n_k)\rightarrow1$ as $k\rightarrow\infty$ for some increasing sequence of integers $n_k$?</p>
| Michael Hardy | 11,667 | <p>If I'm not mistaken, the <a href="http://en.wikipedia.org/wiki/Riemann%E2%80%93Lebesgue_lemma" rel="nofollow">Riemann–Lebesgue lemma</a> rules this out if "continuous" means absolutely continuous with respect to Lebesgue measure. I'm not sure what happens if you mix in a "continuous singular" measure.</p>
<p>(If you concentrate measure $1$ at a point, you get every coefficient of the Fourier series equal to $1$, so your hypothesis of full support certainly can't be dropped.)</p>
|
655,981 | <p>How to calculate this complex integral?
$$\int_0^{2\pi}\cot(t-ia)dt,a>0$$</p>
<p>I got that the integral is $2\pi i$ if $|a|<1$ and $0$ if $a>1$,
yet friends of mine got $2\pi i$ regardless of the value of $a$.
I am looking for the correct approach.</p>
| Ron Gordon | 53,268 | <p>Expand the cotangent to get that the integral is a ratio of sines and cosines:
$$\int_0^{2 \pi} dt \frac{\cosh{a} \cos{t} + i \sinh{a} \sin{t}}{\cosh{a} \sin{t} - i \sinh{a} \cos{t}} $$</p>
<p>Now use the usual $z=e^{i t}$, $dt = -i dz/z$ to get the integral is equal to</p>
<p>$$\oint_{|z|=1} \frac{dz}{z} \frac{e^a z^2+e^{-a}}{e^a z^2-e^{-a}}$$</p>
<p>Assume $a \gt 0$; then the poles of the integrand that lie within the unit circle are at $z=0$ and $z=\pm e^{-a}$. The residues from these poles are, respectively, $-1$, $1$, and $1$, so that the sum of the residues is $1$ and the integral is, by the residue theorem, $i 2 \pi$. </p>
<p>Note that when $a \lt 0$, the only pole within the unit circle is at the origin, so the sum of the residues in that case is $-1$. Thus, the value of the integral is</p>
<p>$$\int_0^{2 \pi} dt \, \cot{(t-i a)} = i 2 \pi \, \operatorname*{sgn}{a} \quad (a \ne 0)$$</p>
|
935,506 | <p>I'm a bit puzzled by this one.</p>
<p>The domain $X = S(0,1)\cup S(3,1)$ (where $S(\alpha, \rho)$ is a circular area with it's center at $\alpha$ and radius $\rho$). So the domain is basically two circles with radius 1 and centers at 0 and 3.</p>
<p>I'm supposed to find an analytic function $f$ defined on $X$ whose imaginary part is constant but such that $f$ itself is not constant.</p>
<p>Where do I start?</p>
| Olivier Oloa | 118,798 | <p>You may write
$$
\begin{align}
\int 4x \sqrt{1 - x^4} dx & =2\int 2x \sqrt{1 - (x^2)^2} dx \\\\
& =2\int \sqrt{1 - u^2} du \\\\
& =2\int \cos t \:\sqrt{1 - \sin^2 t} \: dt \\\\
& =2\int \cos^2 t \: dt \\\\
& =t+\frac12 \sin (2t)+C\\\\
& =t+\sin t \cos t+C\\\\
& =\arcsin (x^2)+x^2\:\sqrt{1 - x^4}+C,\\\\
\end{align}
$$
on an appropriate interval.</p>
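<p>A numerical sketch (the sample points are arbitrary) confirms the antiderivative on $(-1,1)$ by a central-difference check:</p>

```python
import math

def F(x):
    # claimed antiderivative: arcsin(x^2) + x^2 * sqrt(1 - x^4)
    return math.asin(x * x) + x * x * math.sqrt(1 - x ** 4)

def f(x):
    # the integrand 4x * sqrt(1 - x^4)
    return 4 * x * math.sqrt(1 - x ** 4)

h = 1e-6
for x in (0.1, 0.3, 0.7):
    print(x, (F(x + h) - F(x - h)) / (2 * h), f(x))  # the two columns agree
```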
|
2,195,287 | <blockquote>
<p>Knowing that $p$ is prime and $n$ is a natural number show that
$$n^{41}\equiv n\bmod 55$$
using Fermat's little theorem
$$n^p\equiv n\bmod p$$</p>
</blockquote>
<p>If the exercise were to show that
$$n^{41}\equiv n\bmod 11$$ I would just apply $n^{11}\equiv n \bmod 11$ repeatedly to reduce the exponent and easily prove that the congruence is true, but I cannot apply the same logic when I have $\bmod55$, since $55$ is not prime and Fermat's little theorem does not apply to it directly.</p>
<p>Any hint?</p>
| Joffan | 206,402 | <p>You have two Fermat's Little Theorem results that you can use:</p>
<p>$$n^5 \equiv n \bmod 5 \\ n^{11} \equiv n \bmod 11 $$</p>
<p>Then successive application of these - for example, $n^9 \equiv n^5n^4 \equiv n\cdot n^4 \equiv n^5 \equiv n \bmod 5$ - gives </p>
<p>$$n^{41} \equiv n \bmod 5 \\ n^{41} \equiv n \bmod 11 $$</p>
<p>And the Chinese remainder theorem gives </p>
<p>$$n^{41} \equiv n \bmod 55 $$</p>
<p>as required.</p>
<p>(Note that you can also show $n^{21} \equiv n \bmod 55$, foreshadowing <a href="https://en.wikipedia.org/wiki/Carmichael_function#Carmichael.27s_theorem" rel="nofollow noreferrer">Carmichael's theorem</a>)</p>
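<p>Since the modulus is tiny, both congruences can also be verified exhaustively over a complete residue system (a quick sketch):</p>

```python
# check n^41 = n (mod 55) for every residue class, which suffices for all n
assert all(pow(n, 41, 55) == n % 55 for n in range(55))

# the sharper congruence n^21 = n (mod 55) suggested by Carmichael's theorem
# (lambda(55) = lcm(4, 10) = 20) also holds
assert all(pow(n, 21, 55) == n % 55 for n in range(55))
print("both congruences verified")
```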
|
2,751,909 | <blockquote>
<p>Let $f$ be a non-negative differentiable function such that $f'$ is continuous and
$\displaystyle\int_{0}^{\infty}f(x)\,dx$ and $\displaystyle\int_{0}^{\infty}f'(x)\,dx$ exist.</p>
<p>Prove or give a counterexample: $f'(x)\overset{x\rightarrow
\infty}{\rightarrow} 0$</p>
</blockquote>
<p><strong>Note:</strong> I think it is not true but I couldn't find a counterexample.</p>
| Robert Z | 299,698 | <p>Hint. Take as $P_n$ the uniform partition $x_k=\frac{k}{n}$ with $k=-n,\dots,n$ where $n$ is a positive integer. Then, since $f(x)=x$ is strictly increasing, it follows that
$$U(f,P_n)=\frac{1}{n}\sum_{k=-n+1}^n\frac{k}{n}\quad\mbox{and}\quad L(f,P_n)=\frac{1}{n}\sum_{k=-n}^{n-1}\frac{k}{n}.$$
Are you able to find $n$ such that $U(f,P_n)-L(f,P_n)<\epsilon$?</p>
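<p>An editorial sketch of the computation the hint points to: for $f(x)=x$ on $[-1,1]$ the difference telescopes to $U(f,P_n)-L(f,P_n)=\frac{2}{n}$, so any $n>2/\epsilon$ works. Exact arithmetic confirms this:</p>

```python
from fractions import Fraction

def upper_minus_lower(n):
    # U(f, P_n) - L(f, P_n) for f(x) = x on [-1, 1] with the uniform partition
    U = Fraction(1, n) * sum(Fraction(k, n) for k in range(-n + 1, n + 1))
    L = Fraction(1, n) * sum(Fraction(k, n) for k in range(-n, n))
    return U - L

for n in (1, 5, 100):
    print(n, upper_minus_lower(n))  # equals 2/n
```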
|
2,611,382 | <p>Solve the equation,</p>
<blockquote>
<p>$$
\sin^{-1}x+\sin^{-1}(1-x)=\cos^{-1}x
$$</p>
</blockquote>
<p><strong>My Attempt:</strong>
$$
\cos\Big[ \sin^{-1}x+\sin^{-1}(1-x) \Big]=x\\
\cos\big(\sin^{-1}x\big)\cos\big(\sin^{-1}(1-x)\big)-\sin\big(\sin^{-1}x\big)\sin\big(\sin^{-1}(1-x)\big)=x\\
\sqrt{1-x^2}.\sqrt{2x-x^2}-x.(1-x)=x\\
\sqrt{2x-x^2-2x^3+x^4}=2x-x^2\\
\sqrt{x^4-2x^3-x^2+2x}=\sqrt{4x^2-4x^3+x^4}\\
x(2x^2-5x+2)=0\\
\implies x=0\quad or \quad x=2\quad or \quad x=\frac{1}{2}
$$
The actual solutions exclude $x=2$; i.e., the solutions are $x=0$ or $x=\frac{1}{2}$.
I think the extraneous solution appears because of the squaring of the term $2x-x^2$ in the steps. </p>
<p>So, how do you solve it while avoiding the extraneous solutions in similar problems?</p>
<p><strong>Note:</strong> I don't want to substitute the solutions back to find the wrong ones.</p>
| Michael Rozenberg | 190,319 | <p>The domain gives
$$-1\leq x\leq1$$ and $$-1\leq1-x\leq1,$$ which gives $$0\leq x\leq1,$$
which says that the answer is $$\left\{\frac{1}{2},0\right\}.$$
I think it's better after your third step to write
$$\sqrt{2x-x^2}=\sqrt{1-x^2}$$ or $x=0$.</p>
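<p>The two candidate solutions, and the rejection of $x=2$, can be confirmed numerically (a quick sketch; the tolerance is arbitrary):</p>

```python
import math

def satisfies(x):
    # check arcsin(x) + arcsin(1 - x) == arccos(x) up to rounding
    return abs(math.asin(x) + math.asin(1 - x) - math.acos(x)) < 1e-12

print(satisfies(0.0), satisfies(0.5))
# x = 2 never reaches the check: math.asin(2) raises a domain error
```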
|
2,184,056 | <p>To compute the oblique asymptote as $x \to +\infty$, we can first compute $\mathop {\lim }\limits_{x \to + \infty } \frac{{f(x)}}{x}$, if it exists; if $\mathop {\lim }\limits_{x \to + \infty } \frac{{f(x)}}{x} = k$, then we can further compute $\mathop {\lim }\limits_{x \to + \infty } (f(x) - kx)=b$, and if this exists then the asymptote is $y = kx + b$.</p>
<p>But I am wondering: does the existence of $\mathop {\lim }\limits_{x \to + \infty } \frac{{f(x)}}{x}$ always imply the existence of the second limit $\mathop {\lim }\limits_{x \to + \infty } (f(x) - kx)$ and hence the asymptote? If not, any counterexample is appreciated.</p>
| levap | 32,262 | <p>It doesn't. Consider for example $f(x) := x + \sin(x)$. Then</p>
<p>$$ \lim_{x \to \infty} \frac{f(x)}{x} = \lim_{x \to \infty} \frac{x + \sin(x)}{x} = 1 + \lim_{x \to \infty} \frac{\sin(x)}{x} = 1 $$</p>
<p>because $\sin$ is bounded. It is clear that $g(x) = x$ has an asymptote at infinity but by adding the bounded oscillating function $\sin x$ to $g$, it is clear that the resulting function $f$ won't have an asymptote at infinity (because it oscillates around $g(x) = x$ as $x$ approaches infinity). And indeed,</p>
<p>$$ \lim_{x \to \infty} f(x) - x = \lim_{x \to \infty} \sin x $$</p>
<p>doesn't exist.</p>
|
410,013 | <p><strong>Short question:</strong> Is there a standard term for a set <span class="math-container">$F$</span> such that there does not exist a surjection <span class="math-container">$F \twoheadrightarrow \omega$</span> (in the context of ZF)?</p>
<p><strong>More detailed version:</strong> Consider the following four notions of “finiteness” in ZF, the third of which is the one I am asking about and will be arbitrarily named “P-finite” here:</p>
<ul>
<li><p>“<span class="math-container">$F$</span> is <strong>finite</strong>” means any of the following equivalent statements:</p>
<ul>
<li><p>there exists <span class="math-container">$n\in\omega$</span> and a bijection <span class="math-container">$n \xrightarrow{\sim} F$</span>,</p>
</li>
<li><p>there exists a bijection <span class="math-container">$E \xrightarrow{\sim} F$</span> with <span class="math-container">$E\subseteq\omega$</span> and no bijection <span class="math-container">$\omega \xrightarrow{\sim} F$</span>,</p>
</li>
<li><p>every nonempty subset of <span class="math-container">$\mathscr{P}(F)$</span> has a maximal element.</p>
</li>
</ul>
</li>
<li><p>“<span class="math-container">$F$</span> is <strong>T-finite</strong>” means:</p>
<ul>
<li>every chain in <span class="math-container">$\mathscr{P}(F)$</span> has a maximal element.</li>
</ul>
</li>
<li><p>“<span class="math-container">$F$</span> is <strong>P-finite</strong>” [nonstandard terminology which I'd like a standard term form] means any of the following equivalent statements:</p>
<ul>
<li><p><span class="math-container">$\mathscr{P}(F)$</span> is Noetherian under inclusion (i.e., any increasing sequence <span class="math-container">$A_0 \subseteq A_1 \subseteq A_2 \subseteq \cdots$</span> of subsets of <span class="math-container">$F$</span> is stationary),</p>
</li>
<li><p><span class="math-container">$\mathscr{P}(F)$</span> is Artinian under inclusion (i.e., any decreasing sequence <span class="math-container">$A_0 \supseteq A_1 \supseteq A_2 \supseteq \cdots$</span> of subsets of <span class="math-container">$F$</span> is stationary),</p>
</li>
<li><p>there does not exist a surjection <span class="math-container">$F \twoheadrightarrow \omega$</span>.</p>
</li>
</ul>
</li>
<li><p>“<span class="math-container">$F$</span> is <strong>D-finite</strong>” (i.e., Dedekind-finite) means any of the following equivalent statements:</p>
<ul>
<li><p>there is no bijection of <span class="math-container">$F$</span> with a proper subset of it,</p>
</li>
<li><p>there is no injection <span class="math-container">$\omega \hookrightarrow F$</span>.</p>
</li>
</ul>
</li>
</ul>
<p>(I gave several equivalent conditions to emphasize the parallel between these four notions.)</p>
<p>We have finite <span class="math-container">$\Rightarrow$</span> T-finite <span class="math-container">$\Rightarrow$</span> P-finite <span class="math-container">$\Rightarrow$</span> D-finite, and none of the implications I just wrote is reversible. (To construct a permutation model with a P-finite set that is not T-finite, start with a set of atoms in bijection with <span class="math-container">$\mathbb{R}$</span> and use the group of permutations given by continuous increasing bijections <span class="math-container">$\mathbb{R} \xrightarrow{\sim} \mathbb{R}$</span> and the normal subgroup given by pointwise stabilizers of finite sets.)</p>
<p>Surely these four notions, and the implications and nonimplications I just mentioned must appear somewhere in the literature, as well as possibly others. My question is, what is the standard name for “P-finiteness”, and where are its properties, including what I just wrote, discussed in greater detail?</p>
| Asaf Karagila | 7,206 | <p>The term you might find in the literature is "weakly Dedekind finite", since a set that maps onto <span class="math-container">$\omega$</span> is weakly Dedekind <em>infinite</em>.</p>
<p>I'd expect that you'll call these "strongly Dedekind finite". Alas, these terms were coined before my arrival to this world. There is also "dually Dedekind finite", since we are defining Dedekind finitenesss with the dual notion of <span class="math-container">$\leq^*$</span>, but that's not exactly the same thing either, since that refers to the case where "every surjection is a bijection", which is not equivalent, as it turns out, to "does not map onto <span class="math-container">$\omega$</span>".</p>
|
410,013 | <p><strong>Short question:</strong> Is there a standard term for a set <span class="math-container">$F$</span> such that there does not exist a surjection <span class="math-container">$F \twoheadrightarrow \omega$</span> (in the context of ZF)?</p>
<p><strong>More detailed version:</strong> Consider the following four notions of “finiteness” in ZF, the third of which is the one I am asking about and will be arbitrarily named “P-finite” here:</p>
<ul>
<li><p>“<span class="math-container">$F$</span> is <strong>finite</strong>” means any of the following equivalent statements:</p>
<ul>
<li><p>there exists <span class="math-container">$n\in\omega$</span> and a bijection <span class="math-container">$n \xrightarrow{\sim} F$</span>,</p>
</li>
<li><p>there exists a bijection <span class="math-container">$E \xrightarrow{\sim} F$</span> with <span class="math-container">$E\subseteq\omega$</span> and no bijection <span class="math-container">$\omega \xrightarrow{\sim} F$</span>,</p>
</li>
<li><p>every nonempty subset of <span class="math-container">$\mathscr{P}(F)$</span> has a maximal element.</p>
</li>
</ul>
</li>
<li><p>“<span class="math-container">$F$</span> is <strong>T-finite</strong>” means:</p>
<ul>
<li>every chain in <span class="math-container">$\mathscr{P}(F)$</span> has a maximal element.</li>
</ul>
</li>
<li><p>“<span class="math-container">$F$</span> is <strong>P-finite</strong>” [nonstandard terminology which I'd like a standard term form] means any of the following equivalent statements:</p>
<ul>
<li><p><span class="math-container">$\mathscr{P}(F)$</span> is Noetherian under inclusion (i.e., any increasing sequence <span class="math-container">$A_0 \subseteq A_1 \subseteq A_2 \subseteq \cdots$</span> of subsets of <span class="math-container">$F$</span> is stationary),</p>
</li>
<li><p><span class="math-container">$\mathscr{P}(F)$</span> is Artinian under inclusion (i.e., any decreasing sequence <span class="math-container">$A_0 \supseteq A_1 \supseteq A_2 \supseteq \cdots$</span> of subsets of <span class="math-container">$F$</span> is stationary),</p>
</li>
<li><p>there does not exist a surjection <span class="math-container">$F \twoheadrightarrow \omega$</span>.</p>
</li>
</ul>
</li>
<li><p>“<span class="math-container">$F$</span> is <strong>D-finite</strong>” (i.e., Dedekind-finite) means any of the following equivalent statements:</p>
<ul>
<li><p>there is no bijection of <span class="math-container">$F$</span> with a proper subset of it,</p>
</li>
<li><p>there is no injection <span class="math-container">$\omega \hookrightarrow F$</span>.</p>
</li>
</ul>
</li>
</ul>
<p>(I gave several equivalent conditions to emphasize the parallel between these four notions.)</p>
<p>We have finite <span class="math-container">$\Rightarrow$</span> T-finite <span class="math-container">$\Rightarrow$</span> P-finite <span class="math-container">$\Rightarrow$</span> D-finite, and none of the implications I just wrote is reversible. (To construct a permutation model with a P-finite set that is not T-finite, start with a set of atoms in bijection with <span class="math-container">$\mathbb{R}$</span> and use the group of permutations given by continuous increasing bijections <span class="math-container">$\mathbb{R} \xrightarrow{\sim} \mathbb{R}$</span> and the normal subgroup given by pointwise stabilizers of finite sets.)</p>
<p>Surely these four notions, and the implications and nonimplications I just mentioned must appear somewhere in the literature, as well as possibly others. My question is, what is the standard name for “P-finiteness”, and where are its properties, including what I just wrote, discussed in greater detail?</p>
| Goldstern | 14,915 | <p>My old paper</p>
<ul>
<li>"Strongly amorphous sets and dual Dedekind infinity",
Math. Logic Quart. 43 (1997), no. 1, 39–44,</li>
<li><a href="https://onlinelibrary.wiley.com/doi/10.1002/malq.19970430105" rel="nofollow noreferrer">https://onlinelibrary.wiley.com/doi/10.1002/malq.19970430105</a></li>
<li>preprint at arXiv:math/9504201</li>
</ul>
<p>contains some (mildly interesting) constructions, and a (slightly more interesting) bibliography about notions of finiteness.</p>
|
2,098,693 | <p>Full Question: Five balls are randomly chosen, without replacement, from an urn that contains $5$
red, $6$ white, and $7$ blue balls. What is the probability of getting at least one ball of
each colour?</p>
<p>I have been trying to answer this by taking the complement of the event but it is getting quite complex. Any help?</p>
| Arthur | 15,500 | <p>Can you calculate the probability that you never draw a white ball? Never a red ball? Never a blue ball? Adding those together is <em>almost</em> the correct answer.</p>
<p>Here is what is missing: The case where you draw <em>only</em> white was counted twice (once as part of "never red", and once as part of "never blue"), so you need to subtract once the probability of only drawing white. The same thing is true for blue and for red, so you do the same there.</p>
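<p>The inclusion-exclusion described above can be carried out directly (a sketch; the helper name is mine, the counts come from the question):</p>

```python
from math import comb

red, white, blue, draw = 5, 6, 7, 5
total = comb(red + white + blue, draw)

def p_only_from(*groups):
    # probability that all drawn balls come from the listed colour groups
    return comb(sum(groups), draw) / total

# inclusion-exclusion over the events "no red", "no white", "no blue"
p_missing_a_colour = (
    p_only_from(white, blue)   # no red
    + p_only_from(red, blue)   # no white
    + p_only_from(red, white)  # no blue
    - p_only_from(blue)        # only blue, counted in "no red" and "no white"
    - p_only_from(white)       # only white
    - p_only_from(red)         # only red
)
p_all_three = 1 - p_missing_a_colour
print(p_all_three)
```

<p>The triple intersection (all three colours missing) is empty, so no term is needed for it.</p>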
|
2,098,693 | <p>Full Question: Five balls are randomly chosen, without replacement, from an urn that contains $5$
red, $6$ white, and $7$ blue balls. What is the probability of getting at least one ball of
each colour?</p>
<p>I have been trying to answer this by taking the complement of the event but it is getting quite complex. Any help?</p>
| Song Xie | 407,081 | <p>The key is to count the cases in which at least one color is missing. For no red it is (5 out of 13); for the other two colors, (5 out of 12) and (5 out of 11). We sum these and subtract the doubly counted cases, namely drawing only red, only white, or only blue: (5 out of 5), (5 out of 6) and (5 out of 7) respectively. So the number of cases with at least one color absent is</p>
<p>(5 out of 13) + (5 out of 12) + (5 out of 11) - (5 out of 5) - (5 out of 6) - (5 out of 7)</p>
|
3,426,756 | <p>From a point <span class="math-container">$O$</span> on the circle <span class="math-container">$x^2+y^2=d^2$</span>, tangents <span class="math-container">$OP$</span> and <span class="math-container">$OQ$</span> are drawn to the ellipse <span class="math-container">$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$</span>, <span class="math-container">$a>b$</span>. Show that the locus of the midpoint of chord PQ is given by <span class="math-container">$$x^2+y^2=d^2\bigg[\frac{x^2}{a^2}+\frac{y^2}{b^2}\bigg]^2$$</span></p>
<p>I recognize that the locus of a chord whose midpoint is at <span class="math-container">$(h,k)$</span> is given by <span class="math-container">$\frac{xh}{a^2}+\frac{yk}{b^2}=\frac{h^2}{a^2}+\frac{k^2}{b^2}$</span></p>
<p>I also recognize that PQ is the chord of contact, but to find its equation using the chord of contact formula I would require the coordinates of point O which I do not have.</p>
<p>Here I am getting the equation in terms of <span class="math-container">$x,y,h,k$</span>, but to find the locus I need the equation entirely in the form of <span class="math-container">$h,k$</span>, right? So how do I eliminate <span class="math-container">$x,y$</span> from the equation of the locus of the midpoint?</p>
| Jan-Magnus Økland | 28,956 | <p>Still not what you want, but uses <a href="https://www.cut-the-knot.org/Generalization/JoachimsthalsNotations.shtml" rel="nofollow noreferrer">Joachimsthals notations</a>.</p>
<p>The locus is the midpoint of the two intersection points of <span class="math-container">$$s=\frac{x^2}{a^2}+\frac{y^2}{b^2}-1=0$$</span> and (from <span class="math-container">$s_1^2=s \cdot s_{11}$</span>) <span class="math-container">$$(\frac{x(O)x}{a^2}+\frac{y(O)y}{b^2}-1)^2=(\frac{x^2}{a^2}+\frac{y^2}{b^2}-1)(\frac{x(O)^2}{a^2}+\frac{y(O)^2}{b^2}-1)$$</span> which (since the square roots cancel in <span class="math-container">$\frac{x_1+x_2}{2}$</span> and <span class="math-container">$\frac{y_1+y_2}{2}$</span>) is <span class="math-container">$$(x,y)=(\frac{x(O)}{\frac{x(O)^2}{a^2}+\frac{y(O)^2}{b^2}},\frac{y(O)}{\frac{x(O)^2}{a^2}+\frac{y(O)^2}{b^2}}),$$</span> where <span class="math-container">$x(O)^2+y(O)^2=d^2$</span> since <span class="math-container">$O$</span> is on that circle.</p>
<p>Writing <span class="math-container">$h=x(O), k=y(O)$</span> into <a href="http://habanero.math.cornell.edu:3690/" rel="nofollow noreferrer">M2</a></p>
<pre><code>R=QQ[a,b,d]
S=R[h,k,x,y,MonomialOrder=>Eliminate 2]  -- elimination order: eliminate h,k first
I=ideal(h^2+k^2-d^2,(b^2*h^2+a^2*k^2)*x-a^2*b^2*h,(b^2*h^2+a^2*k^2)*y-a^2*b^2*k)  -- circle constraint plus the cleared-denominator midpoint relations
gens gb I  -- the h,k-free Groebner basis element gives the locus
</code></pre>
<p>yields <span class="math-container">$$a^6b^6d^2(d^2(\frac{x^2}{a^2}+\frac{y^2}{b^2})^2-(x^2+y^2)).$$</span></p>
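<p>The parametrization of the midpoint can also be checked numerically against the claimed locus (the values of <span class="math-container">$a,b,d$</span> below are arbitrary, with <span class="math-container">$d>a$</span> so tangents exist):</p>

```python
import math

a, b, d = 3.0, 2.0, 5.0

def midpoint(theta):
    # O = (h, k) = (d cos(theta), d sin(theta)); midpoint of the chord of contact
    h, k = d * math.cos(theta), d * math.sin(theta)
    s = h * h / a ** 2 + k * k / b ** 2
    return h / s, k / s

def locus_residual(x, y):
    # x^2 + y^2 - d^2 (x^2/a^2 + y^2/b^2)^2, which should vanish on the locus
    return x * x + y * y - d * d * (x * x / a ** 2 + y * y / b ** 2) ** 2

for theta in (0.3, 1.2, 2.5):
    print(locus_residual(*midpoint(theta)))  # essentially zero
```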
|
1,338,980 | <p>Suppose you have a set of data $\{x_i\}$ and $\{y_i\}$ with $i=0,\dots,N$. In order to find two parameters $a,b$ such that the line
$$
y=ax+b,
$$
gives the best linear fit, one proceeds by minimizing the quantity
$$
\sum_i^N[y_i-ax_i-b]^2
$$
with respect to $a,b$, obtaining well-known results. </p>
<p>Imagine now that we want a fit with a function of the form
$$
y=ax^p+b.
$$
After some manipulation one obtain the following relations
$$
a=\frac{N\sum_i(y_ix_i^p)-\sum_iy_i\cdot\sum_ix_i^p}{N\sum_i(x_i^p)^2-\left(\sum_ix_i^p\right)^2},
$$
$$
b=\frac{1}{N}[\sum_iy_i-a\sum_ix_i^p]
$$
and
$$
\frac{1}{N}[N\sum_i(y_ix_i^p\ln x_i)-\sum_iy_i\cdot\sum_ix_i^p\ln x_i]=\frac{a}{N}[N\sum_i(x_i^p)^2\ln x_i-\sum_ix_i^p\cdot\sum_ix_i^p\ln x_i].
$$
To me it seems that from this it is nearly impossible to extract the exponent $p$. Am I correct?</p>
| Community | -1 | <p>If you fix $p$ and perform a linear fit on the pairs $(x_i^p,y_i)$, you can compute the residual error (unexplained variance). This defines an objective function $\epsilon(p)$, which you can then minimize by means of a numerical method such as the golden-section search.</p>
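<p>A minimal sketch of this approach (the synthetic data, the search bracket, and the assumption that the residual is unimodal near its minimum are all illustrative choices):</p>

```python
import math

def fit_linear(u, y):
    # ordinary least squares for y against u; returns (a, b, sse)
    n = len(u)
    mu, my = sum(u) / n, sum(y) / n
    a = sum((ui - mu) * (yi - my) for ui, yi in zip(u, y)) / \
        sum((ui - mu) ** 2 for ui in u)
    b = my - a * mu
    sse = sum((yi - a * ui - b) ** 2 for ui, yi in zip(u, y))
    return a, b, sse

def residual(p, x, y):
    # unexplained variance after fitting y against x^p
    return fit_linear([xi ** p for xi in x], y)[2]

def golden_section(f, lo, hi, tol=1e-8):
    # golden-section search for the minimiser of a unimodal f on [lo, hi]
    g = (math.sqrt(5) - 1) / 2
    c, d = hi - g * (hi - lo), lo + g * (hi - lo)
    while hi - lo > tol:
        if f(c) < f(d):
            hi, d = d, c
            c = hi - g * (hi - lo)
        else:
            lo, c = c, d
            d = lo + g * (hi - lo)
    return (lo + hi) / 2

# synthetic data from y = 2*x^1.5 + 3 (purely illustrative)
x = [0.5 + 0.1 * i for i in range(20)]
y = [2 * xi ** 1.5 + 3 for xi in x]

# coarse scan to bracket the minimiser, then golden-section refinement
grid = [0.5 + 0.05 * i for i in range(51)]
best = min(grid, key=lambda q: residual(q, x, y))
p = golden_section(lambda q: residual(q, x, y), best - 0.05, best + 0.05)
print(p)  # close to 1.5
```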
|
131,524 | <p>I am an amateur mathematician, and certainly not a set theorist, but there seems to me to be an easy way around the reflexive paradox: Add to set theory the primitive $A(x,y)$, which we may think of as meaning that $x$ is allowed to belong to $y$ and the axiom</p>
<p>$\forall x,\forall y, x\in y \rightarrow A(x,y)$</p>
<p>and modify the axiom schema for abstraction to read, given any wff $\phi(y)$ in which $x$ is not a free variable,</p>
<p>$\exists x,\forall y, y\in x \leftrightarrow A(y,x) \wedge \phi(y)$ </p>
<p>Then if we try to construct the set $B$ of all sets not belonging to themselves, we get</p>
<p>$\forall x, x\in B \leftrightarrow A(x,B) \wedge x\notin x$</p>
<p>Then, instead of the reflexive paradox, we get</p>
<p>$B\in B \leftrightarrow A(B,B) \wedge B\notin B$</p>
<p>which is a consistent statement that implies both $B\notin B$ and $\neg A(B,B)$. Moreover, since $B$ is arbitrary, it follows that no set can be a member of itself.</p>
<p>Now, this all looks correct to me, but I can not believe that such a simple trick has been overlooked for over a century. So I have to believe that either its been done and I am simply unaware of it, or I've made a mistake that is staring me in the face and I just can't see it. Can someone set me straight on this?</p>
| Richard Thrasher | 34,276 | <p>I've just read an interesting paper that addresses the question of how best to remove paradoxes from the naive abstraction axiom. Reference is:</p>
<p>Goldstein, L. 2013. Paradoxical partners: semantical brides and
set-theoretical grooms. Analysis 73: 33-37.</p>
<p>His idea is quite simple. Keep the abstraction axiom as is,</p>
<p>$\exists y,\forall x,x\in y\leftrightarrow \phi(x)$</p>
<p>and add the restriction that one can only make substitutions for $y$, $x$ and $\phi$ that don't reduce the expression $x\in y\leftrightarrow \phi(x)$ to either a tautology or a contradiction.</p>
<p>This seems like a simple and elegant solution, though it does place a small but reasonable burden on the user of the axiom.</p>
|
1,338,832 | <p>Assume we have a group consisting of both women and men. (In my example it is 67 women and 43 men but that is not important.) The women are indistinguishable and the men are also indistinguishable.</p>
<p>In how many ways can we pick a subgroup consisting of $n$ women and $n$ men, i.e., the same number of women and men?</p>
<ul>
<li><p>For $n = 1$ I found the answer to be $2 = 2 \cdot 1$. ($\{(m,w), (w,m)\}$)</p></li>
<li><p>For $n = 2$ I found the answer to be $6 = 3 \cdot 2$. ($\{(m,m,w,w), (w,w,m,m), (m,w,w,m), (w,m,m,w), (m,w,m,w), (w,m,w,m)\}$.)</p></li>
</ul>
<p>Therefore, I assume that for an arbitrary number $n$, the answer is $(n + 1) \cdot n$.</p>
<p>How do I prove this?</p>
<p><strong>Update</strong></p>
<p>My assumption is wrong.</p>
| drhab | 75,923 | <p>There are $2n$ spots and exactly $n$ of them must be chosen (e.g. for men to take them). </p>
<p>This can be done on $$\binom{2n}n$$ distinct ways.</p>
<hr>
<p><strong>Edit</strong>: </p>
<p>Let's do it for $n=3$. Give the men the numbers $1,2,3$ and give the women the numbers $4,5,6$. Then there are $6!$ arrangements, but e.g. arrangement $135462$ gives the same result as $326541$ (that is $mmwwwm$). So this result $mmwwwm$ is counted more than once. How many times is it counted? We can arrange $123$ on $3!$ ways (the men) and we can arrange $456$ on $3!$ ways (the women). That gives $3!3!$ possibilities that end up in $mmwwwm$. This is the case for each of these combinations so we must divide $6!$ by $3!3!$ to come to the real number.</p>
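<p>A brute-force check of this count (an illustrative sketch, enumerating all arrangements directly):</p>

```python
from itertools import product
from math import comb, factorial

def count_orderings(n):
    """Brute force: length-2n strings over {m, w} with exactly n m's."""
    return sum(1 for s in product("mw", repeat=2 * n) if s.count("m") == n)

# the brute-force count matches both C(2n, n) and (2n)!/(n! n!)
for n in range(1, 6):
    assert count_orderings(n) == comb(2 * n, n) == factorial(2 * n) // (factorial(n) ** 2)
```

<p>For <span class="math-container">$n=3$</span> this gives <span class="math-container">$6!/(3!3!)=20$</span>, as in the argument above.</p>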
|
1,030,335 | <blockquote>
<p>Let <span class="math-container">$n$</span> and <span class="math-container">$r$</span> be positive integers with <span class="math-container">$n \ge r$</span>. Prove that:</p>
<p><span class="math-container">$$\binom{r}{r} + \binom{r+1}{r} + \cdots + \binom{n}{r} = \binom{n+1}{r+1}.$$</span></p>
</blockquote>
<p>Tried proving it by induction but got stuck. Any help with proving it by induction or any other proof technique is appreciated.</p>
| arindam mitra | 30,981 | <h2>Proving by induction</h2>
<h3>base case</h3>
<p>For <span class="math-container">$n=r$</span> the left side is <span class="math-container">$\begin{pmatrix}r\\r\end{pmatrix}=1=\begin{pmatrix}r+1\\r+1\end{pmatrix}$</span>, which is the right side.</p>
<h3>inductive step</h3>
<p><span class="math-container">$$\begin{pmatrix}r\\r\end{pmatrix} + \begin{pmatrix}r+1\\r\end{pmatrix} + \dots + \begin{pmatrix}n\\r\end{pmatrix} =\begin{pmatrix}n\\r+1\end{pmatrix} + \begin{pmatrix}n\\r\end{pmatrix}$$</span> [by the induction hypothesis applied to the first <span class="math-container">$n-r$</span> terms, i.e. the identity for <span class="math-container">$n-1$</span>]</p>
<p>We Know that, <span class="math-container">$$\begin{pmatrix}n\\r+1\end{pmatrix}+ \begin{pmatrix}n\\r\end{pmatrix} = \begin{pmatrix}n+1\\r+1\end{pmatrix}$$</span> see <a href="http://en.wikipedia.org/wiki/Binomial_coefficient#Recursive_formula." rel="nofollow noreferrer">here</a></p>
<p>Hence proved.</p>
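<p>The identity is also easy to verify mechanically over a range of values (an illustrative sketch):</p>

```python
from math import comb

def left_side(n, r):
    """The sum C(r,r) + C(r+1,r) + ... + C(n,r)."""
    return sum(comb(k, r) for k in range(r, n + 1))

# hockey-stick identity: the sum telescopes to C(n+1, r+1)
for r in range(1, 8):
    for n in range(r, 15):
        assert left_side(n, r) == comb(n + 1, r + 1)
```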
|
2,779,152 | <p>Consider a Poisson process with rate $\lambda$ in a given time interval $[0,T]$. The inter-arrival time between successive arrivals is negative exponential distributed with mean $\frac{1}{\lambda}$ such that $X_1 >0$, and $\sum_{i=1}^\text{Last} X_i < T$, where $X$ represents inter-arrival time.</p>
<p>What about the distribution of time between Last arrival and ending time $T$? Is it also negative exponential distributed and has a mean value of $\frac{1}{\lambda}$? Can we study time segment $[0,T]$ of Poisson process in the backward direction too? In the forward direction, time between $t=0$ and first arrival is negative exponential distributed. In the backward direction, Last arrival is the first arrival and is the time between $t=T$ and Last is also negative exponential distributed. Is there any way to justify this? or some reference?</p>
| Michael Hardy | 11,667 | <p>Use the recurrence relation to prove by mathematical induction that $a_{n+1} \ge \dfrac 3 2 a_n$ for $n\ge 3.$ Deduce from that, that $a_n \ge \left(\frac 3 2\right)^{n-3} \cdot 2$ for $n\ge 3.$ Hence
$$
\frac 1 {a_n} \le 2 \cdot \left( \frac 2 3 \right)^{n-3}
$$
so you have a comparison with a geometric series.</p>
|
3,168,787 | <p>I am trying to solve the following exercise in Rudin's "Principles of Mathematical Analysis" book (Ex 4.1):</p>
<blockquote>
<p>Suppose <span class="math-container">$f$</span> is a real function defined on <span class="math-container">$R^1$</span> which satisfies
<span class="math-container">$$\lim_{h\to 0}[f(x+h)-f(x-h)]=0$$</span>
for every <span class="math-container">$x\in R^1$</span>. Does this imply that <span class="math-container">$f$</span> is continuous?</p>
</blockquote>
<p>The answer to this question is simply no, and it can be proved by using the function <span class="math-container">$f(x) = 1$</span> if <span class="math-container">$x\in \mathbb{Z}$</span> and <span class="math-container">$f(x) =0$</span>, otherwise.</p>
<p>However, I am a little bit confused by this result, since I obtain the contrary by using the definitions of limits and continuity. In particular, I have the following derivation:</p>
<blockquote>
<p>Define the function <span class="math-container">$$g(h) := f(x+h) - f(x-h)$$</span> for a fixed x. Then the hypothesis implies that <span class="math-container">$$\lim_{h\to 0}g(h)=0.$$</span> By using the definition of the limit, we have that <span class="math-container">$\forall \varepsilon >0$</span>, <span class="math-container">$\exists \delta>0$</span>, such that <span class="math-container">$|g(h)| < \varepsilon$</span> and <span class="math-container">$|h|< \delta/2$</span>.</p>
<p>Now let <span class="math-container">$p, q \in \mathbb{R}$</span>. Assume <span class="math-container">$p < q$</span>, without loss of generality. Define <span class="math-container">$h := \frac{q-p}{2}$</span>, <span class="math-container">$x := \frac{q+p}{2}$</span>. Fix <span class="math-container">$\varepsilon >0$</span>. Then, for <span class="math-container">$$|f(q)- f(p)| = |f(x+h)-f(x-h)| < \varepsilon,$$</span>
we know that
<span class="math-container">$$|q-p| = |x+h-x+h| = 2 |h| < \delta.$$</span> Hence, <span class="math-container">$f$</span> is continuous.</p>
</blockquote>
<p>My question is: in which part of this derivation I am making a mistake?</p>
<p>Please note that there have been other questions regarding this exercise, for instance see <a href="https://math.stackexchange.com/questions/2021594/baby-rudin-chapter-4-exercise-1">this</a>. However, I have a different concern, whose solution I think could be useful for other people.</p>
| Community | -1 | <p>Let's look at it in a sample case. We want to prove by DCT that <span class="math-container">$$\lim_{\varepsilon\to0^+} \int_0^\infty e^{-y/\varepsilon}\,dy=0$$</span></p>
<p>This is the case if and only if for all sequences <span class="math-container">$\varepsilon_n\to 0^+$</span> it holds <span class="math-container">$$\lim_{n\to\infty}\int_0^\infty e^{-y/\varepsilon_n}\,dy=0$$</span></p>
<p>And now you can use DCT on each of these sequences. Of course, the limiting function will always be the zero function and you may consider the dominating function <span class="math-container">$e^{-x}$</span>.</p>
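<p>As a numerical sanity check of this limit (a sketch using a composite trapezoid rule with an arbitrary truncation point; the exact value of each integral is <span class="math-container">$\varepsilon$</span>, so the integrals do tend to <span class="math-container">$0$</span>):</p>

```python
import math

def dct_integral(eps, upper=60.0, steps=200000):
    """Composite trapezoid rule for the integral of e^(-y/eps) over [0, upper];
    the tail beyond `upper` is negligible for the eps values used here."""
    h = upper / steps
    s = 0.5 * (1.0 + math.exp(-upper / eps))
    s += sum(math.exp(-i * h / eps) for i in range(1, steps))
    return s * h

# the exact value is eps, so the numerical integrals shrink with eps
for eps in (1.0, 0.1, 0.01):
    assert abs(dct_integral(eps) - eps) < 1e-4
```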
|
3,168,787 | <p>I am trying to solve the following exercise in Rudin's "Principles of Mathematical Analysis" book (Ex 4.1):</p>
<blockquote>
<p>Suppose <span class="math-container">$f$</span> is a real function defined on <span class="math-container">$R^1$</span> which satisfies
<span class="math-container">$$\lim_{h\to 0}[f(x+h)-f(x-h)]=0$$</span>
for every <span class="math-container">$x\in R^1$</span>. Does this imply that <span class="math-container">$f$</span> is continuous?</p>
</blockquote>
<p>The answer to this question is simply no, and it can be proved by using the function <span class="math-container">$f(x) = 1$</span> if <span class="math-container">$x\in \mathbb{Z}$</span> and <span class="math-container">$f(x) =0$</span>, otherwise.</p>
<p>However, I am a little bit confused by this result, since I obtain the contrary by using the definitions of limits and continuity. In particular, I have the following derivation:</p>
<blockquote>
<p>Define the function <span class="math-container">$$g(h) := f(x+h) - f(x-h)$$</span> for a fixed x. Then the hypothesis implies that <span class="math-container">$$\lim_{h\to 0}g(h)=0.$$</span> By using the definition of the limit, we have that <span class="math-container">$\forall \varepsilon >0$</span>, <span class="math-container">$\exists \delta>0$</span>, such that <span class="math-container">$|g(h)| < \varepsilon$</span> and <span class="math-container">$|h|< \delta/2$</span>.</p>
<p>Now let <span class="math-container">$p, q \in \mathbb{R}$</span>. Assume <span class="math-container">$p < q$</span>, without loss of generality. Define <span class="math-container">$h := \frac{q-p}{2}$</span>, <span class="math-container">$x := \frac{q+p}{2}$</span>. Fix <span class="math-container">$\varepsilon >0$</span>. Then, for <span class="math-container">$$|f(q)- f(p)| = |f(x+h)-f(x-h)| < \varepsilon,$$</span>
we know that
<span class="math-container">$$|q-p| = |x+h-x+h| = 2 |h| < \delta.$$</span> Hence, <span class="math-container">$f$</span> is continuous.</p>
</blockquote>
<p>My question is: in which part of this derivation I am making a mistake?</p>
<p>Please note that there have been other questions regarding this exercise, for instance see <a href="https://math.stackexchange.com/questions/2021594/baby-rudin-chapter-4-exercise-1">this</a>. However, I have a different concern, whose solution I think could be useful for other people.</p>
| Alex Ortiz | 305,215 | <p>The statement of the dominated convergence theorem (DCT) is as follows:</p>
<blockquote>
<p><strong>"Sequential" DCT.</strong> Suppose <span class="math-container">$\{f_n\}_{n=1}^\infty$</span> is a sequence of (measurable) functions such that <span class="math-container">$|f_n| \le g$</span> for some integrable function <span class="math-container">$g$</span> and all <span class="math-container">$n$</span>, and <span class="math-container">$\lim_{n\to\infty}f_n = f$</span> pointwise almost everywhere. Then, <span class="math-container">$f$</span> is an integrable function and <span class="math-container">$\int |f-f_n| \to 0$</span>. In particular, <span class="math-container">$\lim_{n\to\infty}\int f_n = \int f$</span> (by the triangle inequality). This can be written as
<span class="math-container">$$ \lim_{n\to\infty}\int f_n = \int \lim_{n\to\infty} f_n.$$</span></p>
</blockquote>
<p>(The statement and conclusion of the monotone convergence theorem are similar, but it has a somewhat different set of hypotheses.)</p>
<p>As you note, the statements of these theorems involve <em>sequences</em> of functions, i.e., a <span class="math-container">$1$</span>-discrete-parameter family of functions <span class="math-container">$\{f_n\}_{n=1}^\infty$</span>. To apply these theorems to a <span class="math-container">$1$</span>-continuous-parameter family of functions, say <span class="math-container">$\{f_\epsilon\}_{0<\epsilon<\epsilon_0}$</span>, one typically uses a characterization of limits involving a continuous parameter in terms of sequences:</p>
<blockquote>
<p><strong>Proposition.</strong> If <span class="math-container">$f$</span> is a function, then
<span class="math-container">$$\lim_{\epsilon\to0^+}f(\epsilon) = L \iff \lim_{n\to\infty}f(a_n) = L\quad \text{for $\mathbf{all}$ sequences $a_n\to 0^+$.}$$</span></p>
</blockquote>
<p>With this characterization, we can formulate a version of the dominated convergence theorem involving continuous-parameter families of functions (note that I use quotations to title these versions of the DCT because these names are not standard as far as I know):</p>
<blockquote>
<p><strong>"Continuous" DCT.</strong> Suppose <span class="math-container">$\{f_\epsilon\}_{0<\epsilon<\epsilon_0}$</span> is a <span class="math-container">$1$</span>-continuous-parameter family of (measurable) functions such that <span class="math-container">$|f_\epsilon| \le g$</span> for some integrable function <span class="math-container">$g$</span> and all <span class="math-container">$0<\epsilon<\epsilon_0$</span>, and <span class="math-container">$\lim_{\epsilon\to0^+}f_\epsilon=f$</span> pointwise almost everywhere. Then, <span class="math-container">$f$</span> is an integrable function and <span class="math-container">$\int |f-f_\epsilon|\to 0$</span> as <span class="math-container">$\epsilon\to 0^+$</span>. In particular,
<span class="math-container">$$ \lim_{\epsilon\to0^+}\int f_\epsilon = \int \lim_{\epsilon\to0^+} f_\epsilon.$$</span></p>
</blockquote>
<p>The way we use the continuous DCT in practice is by picking an <strong>arbitrary sequence <span class="math-container">$\pmb{a_n\to 0^+}$</span></strong> and showing that the hypotheses of the "sequential" DCT are satisfied for this arbitrary sequence <span class="math-container">$a_n$</span>, using only the assumption that <span class="math-container">$a_n\to 0^+$</span> and properties of the family <span class="math-container">$\{f_\epsilon\}$</span> that are known to us.</p>
|
1,658,577 | <p>I'm an electrical/computer engineering student and have taken fair number of engineering math courses. In addition to Calc 1/2/3 (differential, integral and multivariable respectfully), I've also taken a course on linear algebra, basic differential equations, basic complex analysis, probability and signal processing (which was essentially a course on different integral transforms).</p>
<p>I'm really interested in learning rigorous math, however the math courses I've taken so far have been very applied - they've been taught with a focus on solving problems instead of proving theorems. I would have to relearn most of what I've been taught, this time with a focus on proofs. </p>
<p>However, I'm afraid that if I spend a while relearning content I already know, I'll soon become bored and lose motivation. However, I don't think not revisiting topics I already know is a good idea, because it would be next to impossible to learn higher level math without knowing lower level math from a proof based point of view.</p>
| runaround | 310,548 | <p>Real Analysis->Elementary Number Theory->Group Theory. The rest is your own interests.</p>
|
1,658,577 | <p>I'm an electrical/computer engineering student and have taken fair number of engineering math courses. In addition to Calc 1/2/3 (differential, integral and multivariable respectfully), I've also taken a course on linear algebra, basic differential equations, basic complex analysis, probability and signal processing (which was essentially a course on different integral transforms).</p>
<p>I'm really interested in learning rigorous math, however the math courses I've taken so far have been very applied - they've been taught with a focus on solving problems instead of proving theorems. I would have to relearn most of what I've been taught, this time with a focus on proofs. </p>
<p>However, I'm afraid that if I spend a while relearning content I already know, I'll soon become bored and lose motivation. However, I don't think not revisiting topics I already know is a good idea, because it would be next to impossible to learn higher level math without knowing lower level math from a proof based point of view.</p>
| Dave L. Renfro | 13,130 | <p>Consider going through <a href="http://rads.stackoverflow.com/amzn/click/0914098918" rel="nofollow noreferrer"><strong>Calculus</strong></a> by Michael Spivak or <a href="http://rads.stackoverflow.com/amzn/click/354065058X" rel="nofollow noreferrer"><strong>Introduction to Calculus and Analysis</strong></a> (Volume I) by Richard Courant and Fritz John. You may initially think you know most of the material in these books (because you can differentiate and integrate some standard functions, etc.), I think if you really hit these books hard by reading for understanding all the proofs and attempting as many of the exercises as you have time for, then you'll find they contain quite a bit that you are NOT familiar with. Given your engineering background, Courant would be my pick for you. See my answer to <a href="https://math.stackexchange.com/questions/79865/difficulty-level-of-courants-book">Difficulty level of Courant's book</a>. See also the comments <a href="https://www.physicsforums.com/threads/question-about-courants-introduction-to-calculus-and-analysis.609169/" rel="nofollow noreferrer">here</a>.</p>
<p>Another suggestion is to get one of the comprehensive advanced calculus texts from about a generation ago, one that includes a rigorous review of elementary calculus before launching into an extensive coverage of sequences and series, vector calculus, elementary differential geometry, possibly some complex variables, etc., such as <a href="http://rads.stackoverflow.com/amzn/click/0471025666" rel="nofollow noreferrer"><strong>Advanced Calculus</strong></a> by Angus E. Taylor and W. Robert Mann, or <a href="http://rads.stackoverflow.com/amzn/click/1577663020" rel="nofollow noreferrer"><strong>Advanced Calculus</strong></a> by R. Creighton Buck. In past generations, the 2-semester sequence courses out of such a book tended to be the primary transition (and weed-out course) for undergraduate students to transition from elementary calculus and ODE's to upper level mathematics. Because the mathematics curriculum has gotten fuller in the past few decades (more discrete math, probability, and previously non-existent courses in the emerging discipline of computer science), these 2-semester sequence courses have gradually been phased out and replaced by more targeted 1-semester "transition to advanced mathematics" courses that have much less depth and far more focus on mathematical grammar issues and basic proof methods than the earlier advanced calculus courses, plus the "transition to advanced mathematics" courses are typically taken in the U.S. during one's 2nd undergraduate year rather than the 3rd undergraduate year in which the advanced calculus courses were typically taken.</p>
|
19,261 | <p>Every simple graph $G$ can be represented ("drawn") by numbers in the following way:</p>
<ol>
<li><p>Assign to each vertex $v_i$ a number $n_i$ such that all $n_i$, $n_j$ are coprime whenever $i\neq j$. Let $V$ be the set of numbers thus assigned. <br/></p></li>
<li><p>Assign to each maximal clique $C_j$ a unique prime number $p_j$ which is coprime to every number in $V$.</p></li>
<li><p>Assign to each vertex $v_i$ the product $N_i$ of its number $n_i$ and the prime numbers $p_k$ of the maximal cliques it belongs to.</p></li>
</ol>
<blockquote>
<p>Then $v_i$, $v_j$ are adjacent iff $N_i$
and $N_j$ are not coprime,</p>
</blockquote>
<p>i.e. there is a (maximal) clique they both belong to. <strong>Edit:</strong> It's enough to assign $n_i = 1$ when $v_i$ is not isolated and does not share all of its cliques with another vertex.</p>
<p>Being free in assigning the numbers $n_i$ and $p_j$ lets arise a lot of possibilites, but also the following question:</p>
<blockquote>
<p><strong>QUESTION</strong></p>
<p>Can the numbers be assigned <em>systematically</em> such that the greatest $N_i$
is minimal (among all that do the job) — and if so: how?</p>
</blockquote>
<p>It is obvious that the $n_i$ in the first step have to be primes for the greatest $N_i$ to be minimal. I have taken the more general approach for other - partly <a href="https://mathoverflow.net/questions/19076/bringing-number-and-graph-theory-together-a-conjecture-on-prime-numbers/19080#19080">answered </a> - questions like "Can the numbers be assigned such that the set $\lbrace N_i \rbrace_{i=1,..,n}$ fulfills such-and-such conditions?"</p>
| David Bar Moshe | 1,059 | <p>Here is an example where topological objects are constructed from geometrical data through representation theory. Let G/P be a flag variety of a complex Lie group G. Let G0 be a real form of G, and D be an open orbit of G0 in G/P. The Dolbeault cohomology spaces H^n(D, L) of line bundles over D carry irreducible representations of G0 which can
be constructed from geometrical data of the orbit. Here is a review <a href="http://cauchy.math.okstate.edu/~zierau/papers/parkcity/Zierau.pdf" rel="nofollow">article</a> on the subject. When G0 is compact, this construction reduces to the famous Bott-Borel-Weil theorem.</p>
|
581,257 | <p>I would like to see a proof of when equality holds in <a href="https://en.wikipedia.org/wiki/Minkowski_inequality" rel="nofollow noreferrer">Minkowski's inequality</a>.</p>
<blockquote>
<p><strong>Minkowski's inequality.</strong> If <span class="math-container">$1\le p<\infty$</span> and <span class="math-container">$f,g\in L^p$</span>, then <span class="math-container">$$\|f+g\|_p \le \|f\|_p + \|g\|_p.$$</span></p>
</blockquote>
<p>The proof is quite different for when <span class="math-container">$p=1$</span> and when <span class="math-container">$1<p<\infty$</span>. Could someone provide a reference? Thanks!</p>
| Daniel Fischer | 83,702 | <p>For <span class="math-container">$p = 1$</span>, the proof uses the triangle inequality, <span class="math-container">$\lvert f(x) + g(x)\rvert \leqslant \lvert f(x)\rvert + \lvert g(x)\rvert$</span>, and the monotonicity of the integral. You have equality <span class="math-container">$\lVert f+g\rVert_1 = \lVert f\rVert_1 + \lVert g\rVert_1$</span> if and only if you have equality <span class="math-container">$\lvert f(x) + g(x)\rvert = \lvert f(x)\rvert + \lvert g(x)\rvert$</span> almost everywhere. That means that almost everywhere at least one of the two functions attains the value <span class="math-container">$0$</span>, or both values "point in the same direction", that is, have the same argument. You can simply formalise that as <span class="math-container">$f(x)\cdot\overline{g(x)} \geqslant 0$</span> almost everywhere.</p>
<p>For <span class="math-container">$1 < p < \infty$</span>, in addition to the triangle inequality, the proof of Minkowski's inequality also uses Hölder's inequality,</p>
<p><span class="math-container">$$\begin{align}
\int \lvert f+g\rvert^p\,d\mu &\leqslant \int \lvert f\rvert\cdot\lvert f+g\rvert^{p-1}\,d\mu + \int \lvert g\rvert\cdot\lvert f+g\rvert^{p-1}\,d\mu\tag{1}\\
&\leqslant \lVert f\rVert_p \lVert f+g\rVert_p^{p-1} + \lVert g\rVert_p \lVert f+g\rVert_p^{p-1}.\tag{2}
\end{align}$$</span></p>
<p>You then have equality <span class="math-container">$\lVert f+g\rVert_p = \lVert f\rVert_p + \lVert g\rVert_p$</span> if and only equality holds in <span class="math-container">$(1)$</span>, and for both terms in <span class="math-container">$(2)$</span>.
Equality in <span class="math-container">$(1)$</span> is nearly the same as for the <span class="math-container">$p=1$</span> case, that gives a restriction <span class="math-container">$f(x)\cdot\overline{g(x)} \geqslant 0$</span> almost everywhere, except maybe on the set where <span class="math-container">$f(x)+g(x) = 0$</span> (but equality in Hölder's inequality forces <span class="math-container">$f(x) = 0$</span> and <span class="math-container">$g(x) = 0$</span> almost everywhere on that set, so at the end we really must have <span class="math-container">$f(x)\cdot \overline{g(x)} \geqslant 0$</span> almost everywhere if <span class="math-container">$\lVert f+g\rVert_p = \lVert f\rVert_p + \lVert g\rVert_p$</span>). For equality of the terms in <span class="math-container">$(2)$</span>, if <span class="math-container">$f+g = 0$</span> almost everywhere, we must have <span class="math-container">$f=g=0$</span> almost everywhere, and if <span class="math-container">$\lVert f+g\rVert_p > 0$</span>, we have equality if and only there are constants <span class="math-container">$\alpha,\beta \geqslant 0$</span> with <span class="math-container">$\lvert f\rvert^p = \alpha \lvert f+g\rvert^p$</span>, and <span class="math-container">$\lvert g\rvert^p = \beta\lvert f+g\rvert^p$</span> almost everywhere. We can of course take <span class="math-container">$p$</span>-th roots, and together with the condition imposed by the triangle inequality, we obtain that</p>
<p><span class="math-container">$$\lVert f+g\rVert_p = \lVert f\rVert_p + \lVert g\rVert_p$$</span></p>
<p>holds if and only if there are non-negative real constants <span class="math-container">$\alpha,\beta$</span>, not both zero, such that <span class="math-container">$\alpha f(x) = \beta g(x)$</span> almost everywhere.</p>
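<p>The characterization can be sanity-checked numerically on discrete <span class="math-container">$\ell^p$</span> vectors (an illustrative sketch; the vectors are my own examples):</p>

```python
def lp_norm(v, p):
    """Discrete l^p norm of a finite vector."""
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

f = [1.0, -2.0, 3.0, 0.5]
g_eq = [2.0 * t for t in f]          # g = 2f: a nonnegative multiple, equality case
g_neq = [1.0, 1.0, 1.0, 1.0]         # not a nonnegative multiple of f

for p in (1, 2, 3.5):
    # equality: ||f + 2f|| = ||f|| + ||2f|| for every p
    s_eq = [a + b for a, b in zip(f, g_eq)]
    assert abs(lp_norm(s_eq, p) - (lp_norm(f, p) + lp_norm(g_eq, p))) < 1e-8
    # strict inequality when the proportionality condition fails
    s_neq = [a + b for a, b in zip(f, g_neq)]
    assert lp_norm(s_neq, p) < lp_norm(f, p) + lp_norm(g_neq, p) - 1e-6
```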
|
3,386,999 | <p>How can I find the values of <span class="math-container">$n\in \mathbb{N}$</span> that make the fraction <span class="math-container">$\frac{2n^{7}+1}{3n^{3}+2}$</span> reducible?</p>
<p>I don't have any ideas or hints for how to solve this question.</p>
<p>I think we must write <span class="math-container">$2n^{7}+1=k(3n^{3}+2)$</span> with <span class="math-container">$k≠1$</span>.</p>
| Hw Chu | 507,264 | <p>Let <span class="math-container">$d = \gcd(2n^7+1, 3n^3+2)$</span>. Then since <span class="math-container">$2n^7+1 \ | \ 2^3n^{21}+1$</span> and <span class="math-container">$3n^3 + 2 \ | \ 3^7n^{21}+2^7$</span>, we must have <span class="math-container">$$d \ | \ 3^7(2^3n^{21}+1) - 2^3(3^7n^{21}+2^7) \quad\Rightarrow\quad d \ | \ 1163.$$</span></p>
<p>Since <span class="math-container">$1163$</span> is a prime, if the fraction is reducible, <span class="math-container">$1163 \ | \ 3n^3 + 2$</span>. Since <span class="math-container">$1163 \equiv 2 \pmod 3$</span>, <span class="math-container">$n^3 \equiv -2\cdot 3^{-1} \equiv 387 \pmod{1163}$</span> has one unique solution modulo <span class="math-container">$1163$</span>.</p>
<p>Fermat's little theorem tells you that <span class="math-container">$n \equiv n^{1163} \equiv n^{2325} \equiv n^{3\times 775}\pmod{1163}$</span>. So the answer should be <span class="math-container">$n \equiv 387^{775} \pmod{1163}$</span>.</p>
<p>This is hard to solve by hand, so if you believe saulspatz's computation that <span class="math-container">$n \equiv 435 \pmod{1163}$</span>, that is all the possible solutions.</p>
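<p>A quick sanity check of this computation in Python (note <span class="math-container">$3^{-1}\equiv 388$</span> and <span class="math-container">$-2\cdot 388\equiv 387 \pmod{1163}$</span>; the range of <span class="math-container">$n$</span> tested is arbitrary):</p>

```python
from math import gcd

p = 1163
inv3 = pow(3, -1, p)                   # 3^{-1} mod p (Python 3.8+)
target = (-2 * inv3) % p               # the required value of n^3 mod p
assert (3 * 775) % (p - 1) == 1        # so x -> x^775 inverts cubing, by Fermat
n0 = pow(target, 775, p)
assert pow(n0, 3, p) == target
assert (2 * n0**7 + 1) % p == 0 and (3 * n0**3 + 2) % p == 0
# reducibility happens exactly on the residue class n ≡ n0 (mod p)
for n in range(1, 1200):
    assert (gcd(2 * n**7 + 1, 3 * n**3 + 2) > 1) == (n % p == n0)
```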
|
3,386,999 | <p>How can I find the values of <span class="math-container">$n\in \mathbb{N}$</span> that make the fraction <span class="math-container">$\frac{2n^{7}+1}{3n^{3}+2}$</span> reducible?</p>
<p>I don't have any ideas or hints for how to solve this question.</p>
<p>I think we must write <span class="math-container">$2n^{7}+1=k(3n^{3}+2)$</span> with <span class="math-container">$k≠1$</span>.</p>
| DanielWainfleet | 254,665 | <p>Suppose <span class="math-container">$p$</span> is prime.</p>
<p>If <span class="math-container">$p$</span> divides <span class="math-container">$2n^7+1$</span> & <span class="math-container">$3n^3+2$</span></p>
<p>then <span class="math-container">$p$</span> divides <span class="math-container">$2(2n^7+1)-(3n^3+2)=n^3(4n^4-3)$</span> </p>
<p>then <span class="math-container">$p$</span> divides <span class="math-container">$4n^4-3$</span> ( See Footnote )</p>
<p>then <span class="math-container">$p$</span> divides <span class="math-container">$2(4n^4-3)+3(3n^3+2)= n^3(8n+9)$</span></p>
<p>then <span class="math-container">$p$</span> divides <span class="math-container">$8n+9$</span> (See Footnote)</p>
<p>then <span class="math-container">$p$</span> divides <span class="math-container">$9(3n^3+2)-2(8n+9)=n(27n^2-16)$</span></p>
<p>then <span class="math-container">$p$</span> divides <span class="math-container">$27n^2-16$</span> (See Footnote)</p>
<p>then <span class="math-container">$p$</span> divides <span class="math-container">$9(27n^2-16)+16(8n+9)=n(243n+128)$</span></p>
<p>then <span class="math-container">$p$</span> divides <span class="math-container">$243n+128$</span> (See Footnote)</p>
<p>then <span class="math-container">$p$</span> divides <span class="math-container">$9(243n+128)-128(8n+9)=1163n$</span>; and since <span class="math-container">$p$</span> divides <span class="math-container">$243n+128$</span>, <span class="math-container">$p$</span> cannot divide <span class="math-container">$n$</span> (else <span class="math-container">$p$</span> would divide <span class="math-container">$128$</span>, forcing <span class="math-container">$p=2$</span>, which is impossible because <span class="math-container">$p$</span> divides the odd number <span class="math-container">$8n+9$</span>), so <span class="math-container">$p$</span> divides <span class="math-container">$1163$</span></p>
<p>then <span class="math-container">$p=1163$</span> because <span class="math-container">$p$</span> and <span class="math-container">$1163$</span> are both prime</p>
<p>then <span class="math-container">$8n+9\equiv 0 \mod 1163$</span> so <span class="math-container">$n\equiv 435 \mod 1163$</span></p>
<p>So <span class="math-container">$\gcd (2n^7+1, 3n^3+2)>1\implies n\equiv 435 \mod 1163.$</span> And we may verify that <span class="math-container">$n\equiv 435 \mod 1163\implies 2n^7+1\equiv 3n^3+2\equiv 0 \mod 1163\implies \gcd (2n^7+1,3n^3+2)>1.$</span></p>
<p>Footnote. Suppose <span class="math-container">$A,B,C, D$</span> are integers with <span class="math-container">$A>0$</span> and <span class="math-container">$C>0,$</span> and <span class="math-container">$p$</span> divides <span class="math-container">$n^A(Bn^C+D).$</span> Since <span class="math-container">$p$</span> is prime, <span class="math-container">$p$</span> divides <span class="math-container">$n^A$</span> or <span class="math-container">$p$</span> divides <span class="math-container">$Bn^C+D.$</span> Now if prime <span class="math-container">$p$</span> divides <span class="math-container">$n^A$</span> then <span class="math-container">$p$</span> divides <span class="math-container">$n$</span> and hence <span class="math-container">$p$</span> divides <span class="math-container">$2n^7$</span>, BUT if <span class="math-container">$p$</span> also divides <span class="math-container">$2n^7+1$</span> then the prime <span class="math-container">$p>1$</span> divides <span class="math-container">$(2n^7+1)-(2n^7)=1,$</span> which is absurd. So instead, <span class="math-container">$p$</span> must divide <span class="math-container">$Bn^C+D.$</span></p>
|
3,386,999 | <p>How can I find the values of <span class="math-container">$n\in \mathbb{N}$</span> that make the fraction <span class="math-container">$\frac{2n^{7}+1}{3n^{3}+2}$</span> reducible?</p>
<p>I don't have any ideas or hints for how to solve this question.</p>
<p>I think we must write <span class="math-container">$2n^{7}+1=k(3n^{3}+2)$</span> with <span class="math-container">$k≠1$</span>.</p>
| Bill Dubuque | 242 | <p>This gcd is computable <em>purely mechanically</em> by a slight generalization of the Euclidean algorithm which allows us to scale by integers <span class="math-container">$\,c\,$</span> coprime to the gcd during the modular reduction step, i.e.</p>
<p><span class="math-container">$$\bbox[8px,border:2px solid #c00]{(a,b)\, = \,(a,\,cb\bmod a)\ \ \ {\rm if}\ \ \ (a,c) = 1}\qquad\qquad $$</span></p>
<p>which is true since <span class="math-container">$\,(a,c)= 1\,\Rightarrow\, (a,\,cb\bmod a) = (a,cb) = (a,b)\ $</span> by Euclid. When computing the gcd of polynomials <span class="math-container">$\,f(n),g(n)$</span> with integer coef's, we can use such scalings to force the lead coef of the dividend to be divisible by the lead coef of the divisor, which enables the division to be performed with integer (vs. fraction) arithmetic. Let's do that in the example at hand (but you may find it helpful to first study the simpler examples <a href="https://math.stackexchange.com/a/4098638/242">here</a> and <a href="https://math.stackexchange.com/a/3881931/242">here</a>).</p>
<p><span class="math-container">$\!\begin{align}{\rm{This\,\ yields}\!: \ \ \ }(3n^3\!+2,\,2n^7\!+1) &\,=\, (3n^3\!+2,\,\color{#0a0}{8n+9})\ \ \ {\rm by}\ [\![1]\!]\ \text{ below, $\,c = 3^2$}\\[.4em]
&\,=\, (\color{#90f}{-1163},\ \ 8n+9)\ \ \ {\rm by}\, [\![2]\!]\ \text{ below, $\,c = 8^3$}\end{align}$</span>
<span class="math-container">$\!\!\!\begin{align}
&\bmod\:\! \color{#c00}{3n^3\!+2}^{\phantom{|^{|^|}}}\!\!\!\!\!:\,\ 3^2(2n^7\!+1)\equiv 2n(\color{#c00}{3n^3})^2\!+9\,\equiv\, \color{#0a0}{8n+9},\, \ {\rm by}\ \ \color{#c00}{3n^3\equiv -2}\,\qquad [\![1]\!]\\[.4em]
&\bmod\:\! \color{#0a0}{8n+9}\!:\ \ 8^3(3n^3\!+2)\equiv 3(\color{#0a0}{8n})^3\!+ 2(8^3) \equiv \color{#90f}{-1163},\ \ {\rm by}\ \ \color{#0a0}{8n\,\equiv \,-9}\:\!\qquad[\![2]\!] \end{align}$</span></p>
<p>So the gcd <span class="math-container">$\!=\! (1163,8n\!+\!9)\! >\! 1\! \iff\!$</span> prime <span class="math-container">$p\! =\! 1163\mid 8n\!+\!9\!$</span> <span class="math-container">$\iff\! \bbox[5px,border:1px solid #0a0]{n\equiv \color{90f}{435}\pmod{\!p}}\,$</span> by</p>
<p><span class="math-container">$\!\!\bmod 1163\!:\,\ n\equiv\dfrac{\!\!-9}8\!\equiv 3\left[\dfrac{-3}8\right] \equiv 3\left[\dfrac{1160}8\right]\equiv 3[145]\equiv 435.\ \ $</span> [See <a href="https://math.stackexchange.com/a/3434593/242">here</a> for 5 more ways]</p>
<p><strong>Remark</strong> <span class="math-container">$ $</span> Above <span class="math-container">$\,k \bmod a\,$</span> denotes some "simpler" <span class="math-container">$k'$</span> such that <span class="math-container">$\,k'\equiv k\pmod{\!a},\,$</span> not necessarily the <em>least</em> nonnegative such value. Here we are "simplifying" by reducing the degree in <span class="math-container">$\,n,\,$</span> i.e. essentially using the Euclidean algorithm for <em>polynomials</em> (the scaling by <span class="math-container">$\,c\,$</span> corresponds to a fraction-free form of the algorithm using the <a href="https://math.stackexchange.com/a/116037/242">Nonmonic Division Algorithm</a>).</p>
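<p>As a sanity check, here is an editorial Python sketch (not part of the original derivation) that brute-forces the claim with the standard library's <code>gcd</code>:</p>

```python
from math import gcd

p = 1163  # the prime produced by the fraction-free Euclidean algorithm above

# The fraction (2n^7+1)/(3n^3+2) should be reducible exactly when n ≡ 435 (mod p).
reducible = [n for n in range(p) if gcd(2*n**7 + 1, 3*n**3 + 2) > 1]
print(reducible)  # expect [435]
```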
|
61,933 | <p>Consider a lattice in R^3.
Is there some "canonical" way, or ways, to choose a basis in it?</p>
<p>I mean in R^2 we can choose a basis |h_1| < |h_2| and |(h_2, h_1)| < 1/2 |h_1|.
Considering lattices with fixed determinant and up to unitary transformations we get standard picture of the PSL(2,Z) acting on the upper half plane, which has a fundamental domain Im (tau)>1 Re(tau) <1/2. </p>
<p>What are the similar results for other small dimensions R^3, R^4, C^4, C^8 ?
What are the algorithms to find such a lattice reductions ?</p>
| Henry Cohn | 4,720 | <p>In higher dimensions, there doesn't seem to be anything as nice as in two dimensions: the fundamental domains get substantially more complicated and the algorithms become much less efficient. However, there are still some beautiful results. For example, Minkowski reduction is a natural generalization of the two-dimensional case. See, for example, Chapter 2 of <em>Computational geometry of positive definite quadratic forms</em> by Achill Schürmann. Minkowski reduction defines a fundamental domain, which is in fact a polyhedral cone in the space of positive-definite matrices, but the facets of this cone are known only up through seven dimensions. (It is most naturally defined using infinitely many constraints, only finitely many of which are needed in any given dimension, but figuring out exactly which ones are needed is difficult.)</p>
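<p>For concreteness, here is an editorial sketch (mine, not from the answer) of the two-dimensional case the question starts from: the Lagrange–Gauss algorithm, which produces a basis with $|u|\le|v|$ and $|\langle u,v\rangle|\le|u|^2/2$ — the two-dimensional Minkowski condition. The higher-dimensional reductions discussed above are substantially harder.</p>

```python
def lagrange_gauss(u, v):
    """Reduce a 2D integer lattice basis (u, v) so that |u| <= |v| and
    |<u, v>| <= |u|^2 / 2."""
    def dot(a, b): return a[0]*b[0] + a[1]*b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        m = round(dot(u, v) / dot(u, u))   # nearest-integer Gram coefficient
        v = (v[0] - m*u[0], v[1] - m*u[1])
        if dot(u, u) <= dot(v, v):
            return u, v
        u, v = v, u

print(lagrange_gauss((1, 0), (100, 1)))  # a reduced basis for the same lattice
```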
|
936,200 | <p>Suppose that x_0 is a real number and x_n = [1+x_(n-1)]/2 for all natural n. Use the Monotone Convergence Theorem to prove x_n → 1 as n grows.</p>
<p>Can someone please help me? I don't know what to assume since I don't know if it is increasing or decreasing when x_0 < 1 and when x_0 > 1.
Any hint/help would really help. Thank you.</p>
| Christian Blatter | 1,303 | <p>As for "$\in$" see the other answers. Concerning the idea of a variable:</p>
<p>Each letter, say $y$, denoting a variable comes a priori with its <em>domain</em> $D_y$, a certain set. We are allowed to replace this $y$ in the formula by any element $a\in D_y$ and obtain a proposition about constants which is either true or false.</p>
<p>See also here:</p>
<p><a href="https://math.stackexchange.com/questions/128656/high-school-math-definition-of-a-variable-the-first-step-from-the-concrete-into/214419?noredirect=1#comment1933976_214419">High school math definition of a variable: the first step from the concrete into the abstract...</a></p>
<p>and here:</p>
<p><a href="https://math.stackexchange.com/questions/24284/is-the-variable-in-let-y-fx-free-bound-or-neither/24434#24434">Is the 'variable' in 'let $y=f(x)$' free, bound, or neither?</a></p>
|
936,200 | <p>Suppose that x_0 is a real number and x_n = [1+x_(n-1)]/2 for all natural n. Use the Monotone Convergence Theorem to prove x_n → 1 as n grows.</p>
<p>Can someone please help me? I don't know what to assume since I don't know if it is increasing or decreasing when x_0 < 1 and when x_0 > 1.
Any hint/help would really help. Thank you.</p>
| Frunobulax | 93,252 | <p>The most general ways to "pronounce" $\in$ certainly are "is an element of" or "is a member of". However, in a case like this one where the set is not only finite but also very small it might make sense to read "$y \in \{1,2,3\}$" as "$y$ is either $1$ or $2$ or $3$" or "$y$ is one of the values $1$, $2$, and $3$".</p>
<p>This is in accordance with, say, reading "$y \in \mathbb{N}$" as "$y$ is a natural number".</p>
|
317,175 | <p>What tools would we like to use here? Is there any easy way to establish the limit?</p>
<p>$$\sum_{k=1}^{\infty}{1 \over k^{2}}\,\cot\left(1 \over k\right)$$</p>
<p>Thanks!</p>
<p>Sis!</p>
| mrf | 19,440 | <p>The series diverges. Let $a_k = \dfrac 1{k^2} \cot \dfrac 1k$ and $b_k = \dfrac1k$.</p>
<p>Then $\lim_{k\to\infty} \dfrac{a_k}{b_k} = 1$, so your series diverges by the limit comparison test (since $a_k \ge 0$).</p>
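<p>A quick numerical illustration of the ratio $a_k/b_k\to1$ (an editorial Python sketch, not part of the original answer):</p>

```python
import math

a = lambda k: math.cos(1/k) / (k**2 * math.sin(1/k))  # a_k = (1/k^2) cot(1/k)
b = lambda k: 1/k                                     # b_k = 1/k

for k in (10, 1000, 100000):
    print(k, a(k) / b(k))  # the ratio approaches 1
```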
|
966,798 | <p>How do I solve the following equation for $0 \le x \le 360$:</p>
<p>$$
2\cos2x-4\sin x\cos x=\sqrt{6}
$$</p>
<p>I tried different methods. The first was to get things in the form of $R\cos(x \mp \alpha)$:</p>
<p>$$
2\cos2x-2(2\sin x\cos x)=\sqrt{6}\\
2\cos2x-2\sin2x=\sqrt{6}\\
R = \sqrt{4} = 2 \\
\alpha = \arctan \frac{2}{2} = 45\\
\therefore \cos(2x + 45) = \frac{\sqrt6}{2}
$$</p>
<p>which is impossible. I then tried to use t-substitution, where:</p>
<p>$$
t = \tan\frac{x}{2}, \sin x=\frac{2t}{1+t^2}, \cos x =\frac{1-t^2}{1+t^2}
$$</p>
<p>but the algebra got unreasonably complicated. What am I missing?</p>
| Umberto P. | 67,536 | <p>It is just for convenience. </p>
<p>Suppose (for example) that you want $\delta^2 + \delta < \epsilon$. You can factor the left-hand side to write $\delta(\delta + 1) < \epsilon$. Let $m$ be any positive number. As long as $\delta < m$ you have $\delta(\delta + 1) < \delta (m+1)$, and if <em>in addition</em> you have $\delta < \dfrac{\epsilon}{m+1}$ you arrive at $\delta(\delta + 1) < \delta(m+1) < \dfrac{\epsilon}{m+1}(m+1) = \epsilon$. </p>
<p>What this means is that if both $\delta < m$ and $\delta < \dfrac{\epsilon}{m+1}$, then $\delta^2 + \delta < \epsilon$. That is,
$$ \delta < \min \left\{ m,\dfrac{\epsilon}{m+1} \right\} \implies \delta^2 + \delta < \epsilon.$$</p>
<p>You don't need the min function to do this, in general. You can solve the inequality by other means. For instance, you can add $\dfrac 14$ to both sides of the example inequality to get
$$ \delta^2 + \delta + \frac 14 < \epsilon + \frac 14$$ so that $$\left( \delta + \frac 12 \right)^2 < \epsilon + \frac 14.$$ The solution to this is an interval:
$$
- \sqrt{ \epsilon + \frac 14} < \delta + \frac 12 < \sqrt{ \epsilon + \frac 14}
$$
so for <em>positive</em> $\delta$,
$$
\delta < \sqrt{ \epsilon + \frac 14} - \frac 12 \implies \delta^2 + \delta < \epsilon.
$$</p>
<p>In almost all cases it is simpler to impose conditions on $\delta$ and use a min.</p>
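<p>The "min trick" above is easy to test numerically (an editorial Python sketch; any positive $m$ works):</p>

```python
import random

random.seed(0)
for _ in range(1000):
    eps = random.uniform(1e-6, 10)
    m = random.uniform(1e-3, 5)
    delta = 0.999 * min(m, eps / (m + 1))  # any positive delta below the min
    assert delta**2 + delta < eps          # the implication from the answer
print("ok")
```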
|
245,312 | <p>Let $\kappa>0$ be a cardinal and let $(X,\tau)$ be a topological space. We say that $X$ is $\kappa$-<em>homogeneous</em> if</p>
<ol>
<li>$|X| \geq \kappa$, and</li>
<li>whenever $A,B\subseteq X$ are subsets with $|A|=|B|=\kappa$ and $\psi:A\to B$ is a bijective map, then there is a homeomorphism $\varphi: X\to X$ such that $\varphi|_A = \psi$.</li>
</ol>
<p><strong>Questions</strong>: Is it true that for $0<\alpha < \beta$ there is a space $X$ such that $|X|\geq \beta$, and $X$ is $\alpha$-homogeneous, but not $\beta$-homogeneous? Is there even such a space that is $T_2$? Also it would be nice to see an example for $\alpha=1, \beta=2$. And I was wondering whether there is a standard name for $\kappa$-homogeneous spaces. (Not all of these questions have to be answered for acceptance of answer.)</p>
| Joel David Hamkins | 1,946 | <p>This is a great question!</p>
<p>The disjoint union of two circles is $1$-homogeneous, but not $2$-homogeneous. It is $1$-homogenous, since you can swap any two points and extend this to a homeomorphism (basically, "all points look alike"). But it is not $2$-homogeneous, since you can let $A$ be two points from one circle, and let $B$ be two points from different circles; there is no way to extend a bijection of $A$ with $B$ to a homeomorphism of $X$ (not all pairs look alike). </p>
<p>The real line $\mathbb{R}$ is $2$-homogeneous, but not $3$-homogeneous. For $2$-homogeneity, given any two pairs of reals $a,b$ and $x,y$, then no matter how you map these bijectively, you can extend to a homeomorphism of the line by affine translation (all pairs look alike). But the line is not $3$-homogeneous, since we can biject the triples $0,1,2$ mapping to $0,2,1$, respectively; this does not extend to a homeomorphism, since it doesn't respect between-ness (not all triples look alike). </p>
<p>The unit circle is $3$-homogeneous, but not $4$-homogeneous (thanks to Andreas Blass in the comments). For $3$-homogeneity, given two triples of points, we can match up the first in each by rotating the circle, and then match up the other two by stretching or by flipping and stretching, depending on whether the orientation was preserved or not (all triples look alike). It is not $4$ homogeneous, since we can have four points in clockwise rotation, and then try to fix the first two and swap the other two; this cannot extend to a homeomorphism, since fixing the first two fixes the orientation, which is not respected by swapping the other two (not all quadruples look alike). </p>
<p>I don't know examples yet that are $4$-homogeneous, but not $5$-homogeneous, or $n$-homogeneous but not $n+1$-homogeneous, for $n\geq 4$. </p>
<p>The real plane $\mathbb{R}^2$ appears to be $n$-homogeneous for every finite $n$, but not $\omega$-homogeneous (thanks again to Andreas). It is $n$-homogeneous, because given any two sets of $n$ points, we can imagine the plane made of stretchable latex and simply pull the points each to their desired targets, with the rest of plane getting stretched as it will. One can see this inductively, handling one additional point at a time: having moved any finitely many points, nail them down through the latex; now any additional point can be stretched to any desired target, before also nailing it down, and so on (so all $n$-tuples look alike). It is not $\omega$-homogeneous, since a countable dense set can be bijected with a countable bounded set, and this will not extend to a homeomorphism. </p>
<p>Meanwhile, the infinite case is settled. For any infinite cardinal $\beta$, the discrete space of size $\beta$ is $\alpha$-homogeneous for every $\alpha<\beta$ but not $\beta$-homogeneous, in the OP's terminology. Any bijection of small subsets can be extended to a permutation, since there are $\beta$ many points left over, but if $X$ has size $\beta$, then we can take $A=X$ and $B=X-\{a\}$, which are bijective, but this bijection cannot be extended to a permutation of $X$.</p>
<p>In particular, the countable discrete space is $n$-homogeneous for every finite $n$, but not $\omega$-homogeneous, just like $\mathbb{R}^2$. </p>
|
1,068,631 | <p>I want to find the solutions of the equation $$\left[z- \left( 4+\frac{1}{2}i\right)\right]^k = 1 $$ </p>
<p>in terms of roots of unity.</p>
<p>When I try to solve this, I get
\begin{align*}z - 4 - \dfrac i2 &= 1\\
z-\dfrac{i}{2}&=5\\
\dfrac{2z-i}2 &= 5\\
z&= 5 + \dfrac i2\end{align*}</p>
<p>Is this the right approach?</p>
<p>I want to do the same for $$\left[z-\left(4+\frac{1}{2}i\right)\right]^k = 2$$</p>
<p>as well.</p>
| The Artist | 154,018 | <p><strong>Things you need to know (Hints):</strong></p>
<p>$$\left(z-(4+\frac{1}{2}i)\right)^k=\color{Crimson}{1}$$</p>
<p>$$ \color{crimson}{\cos 2\pi n+ i \sin 2\pi n =1}$$</p>
<p>$$\text{Where n is an Integer}$$</p>
<hr>
<p>Also you should know De Moivre's Theorem:</p>
<p>$$( \cos \theta+ i \sin \theta)^b= \cos b\theta+ i \sin b\theta$$</p>
<hr>
<p>Also when it comes to second part: </p>
<p>$$\left(z-(4+\frac{1}{2}i)\right)^k=2$$</p>
<p>$$\left(z-(4+\frac{1}{2}i)\right)^k=2(\color{Crimson}{1})$$</p>
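<p>Once the hints are unwound, the $k$ solutions are $z = 4+\tfrac12 i$ plus the $k$-th roots of the right-hand side. A hedged editorial sketch in Python (the function name is mine, not from the answer):</p>

```python
import cmath

def solutions(k, w=1):
    """Solutions of (z - (4 + i/2))^k = w: the k-th roots of w, shifted."""
    c = 4 + 0.5j
    r = abs(w) ** (1 / k)
    t = cmath.phase(w)
    return [c + r * cmath.exp(1j * (t + 2 * cmath.pi * n) / k) for n in range(k)]

for z in solutions(3):
    print(z, (z - (4 + 0.5j))**3)  # each power comes back to 1, up to rounding
```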
|
4,319,590 | <blockquote>
<p>Let <span class="math-container">$H$</span> be a subgroup of <span class="math-container">$G$</span> and <span class="math-container">$x,y \in G$</span>. Show that <span class="math-container">$x(Hy)=(xH)y.$</span></p>
</blockquote>
<p>I have that <span class="math-container">$Hy=\{hy \mid h \in H\}$</span> so wouldn't <span class="math-container">$x(Hy)=\{x hy \mid h \in H\}$</span>? If so there doesn't seem to be much to be shown since if this holds I suppose that <span class="math-container">$(xH)y=\{x h y \mid h \in H\}$</span> would also hold and these two are clearly the same sets? Am I misinterpreting the set <span class="math-container">$x(Hy)$</span>? Should this be <span class="math-container">$\{xhy \mid h \in H, y \in G\}$</span> for fixed <span class="math-container">$y$</span>?</p>
| Michael Hardy | 11,667 | <p>If <span class="math-container">$w\in x(Hy),$</span> then for some <span class="math-container">$v\in Hy,$</span> <span class="math-container">$w=xv.$</span> Since <span class="math-container">$v\in Hy,$</span> for some <span class="math-container">$h\in H,$</span> <span class="math-container">$v=hy.$</span> So <span class="math-container">$w = xhy.$</span> So for some <span class="math-container">$u\in xH,$</span> <span class="math-container">$w=uy.$</span> Thus <span class="math-container">$w\in (xH)y.$</span></p>
<p>Therefore <span class="math-container">$x(Hy)\subseteq (xH)y.$</span></p>
<p>The inclusion in the other direction can be shown in the same way.</p>
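<p>The element-chasing argument above can also be checked exhaustively in a small nonabelian group (an editorial Python sketch using $S_3$; the permutation encoding is mine):</p>

```python
from itertools import permutations

# Permutations of {0,1,2} as tuples p, with composition (p∘q)(i) = p[q[i]].
G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))

H = [(0, 1, 2), (1, 0, 2)]  # the subgroup {e, (0 1)} of S3
for x in G:
    for y in G:
        left = {compose(x, compose(h, y)) for h in H}   # x(Hy)
        right = {compose(compose(x, h), y) for h in H}  # (xH)y
        assert left == right
print("x(Hy) = (xH)y for all x, y in S3")
```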
|
590,891 | <p>I'm going back to school and haven't taken a math class in years, so I'm brushing up on the basics.</p>
<p>The text states $\frac{g(t + \Delta t)^2}{2} = \frac{gt^2}{2} + \frac{g}{2}\left(2t\Delta t + \Delta t^2\right)$.</p>
<p>(Sorry for the lack of formatting. I'll probably get slammed, but I couldn't figure it out on my phone...)</p>
<p>My question is: how did they arrive at that conclusion? I've spent the last four hours trying to work it out to no avail. I'm very discouraged at this point, so any clarification would be very helpful.</p>
| ILoveMath | 42,344 | <p>Well, by using $(a + b)^2 = a^2 + 2ab + b^2$, then</p>
<p>$$\frac{g (t + \Delta t)^2}{2} = \frac{g}{2}[t^2 + 2t \Delta t + \Delta^2t]= \frac{gt^2}{2} + \frac{g}{2}[2t \Delta t + \Delta^2 t ]$$</p>
<p>Actually, $(a+b)^2 = a^2 + 2ab + b^2$ is an algebraic identity . But let see it geometrically what it means. consider the following square</p>
<p><img src="https://i.stack.imgur.com/QBUN7.png" alt="enter image description here"></p>
<p>Let $A$ denote the total area of the square, then $A = (a+b)(a+b) = (a+b)^2$ and let $A_1,A_2,A_3,A_4$ denote the area of the little squares, then
$A_1 = ba$, $A_2 = a^2$, $A_3 = ab$ and $A_4 = b^2$.</p>
<p>But, </p>
<p>$$ A = A_1 + A_2 + A_3 + A_4 \implies ba + a^2 + ab + b^2 = (a+b)^2 $$</p>
<p>$$ \therefore (a+b)^2 = a^2 + 2ab + b^2 $$</p>
|
2,359,621 | <p>Consider $f:\mathbb{R}^2 \rightarrow \mathbb{R}$ where</p>
<p>$$f(x,y):=\begin{cases}
\frac{x^3}{x^2+y^2} & \textit{ if } (x,y)\neq (0,0) \\
0 & \textit{ if } (x,y)= (0,0)
\end{cases} $$</p>
<p>If one wants to show the continuity of $f$, I mainly want to show that </p>
<p>$$ \lim\limits_{(x,y)\rightarrow0}\frac{x^3}{x^2+y^2}=0$$</p>
<p>But what does $\lim\limits_{(x,y)\rightarrow0}$ mean? Is it equal to $\lim\limits_{(x,y)\rightarrow0}=\lim\limits_{||(x,y)||\rightarrow0}$ or does it mean $\lim\limits_{x\rightarrow0}\lim\limits_{y\rightarrow0}$?</p>
<p>If so, how does one show that the above function tends to zero?</p>
| dromastyx | 453,578 | <p>$$\lim_{(x,y)\rightarrow (0,0)}f(x,y)=L$$
means that for all $\epsilon>0$ there exists a $\delta>0$ such that
$$0<\sqrt {x^2+y^2}<\delta \implies |f(x,y)-L|< \epsilon$$</p>
<p>In your case let $\delta=\epsilon$.</p>
<p>$$\left|\frac{x^3}{x^2+y^2}\right|=|x|\cdot \left|\frac{x^2}{x^2+y^2}\right| \leq |x|\cdot1=|x|\leq \sqrt{x^2+y^2}<\delta$$</p>
<p>Now we can see that the aforementioned implication holds.</p>
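<p>The chain of inequalities is easy to spot-check numerically (an editorial Python sketch):</p>

```python
import math, random

random.seed(1)
for _ in range(10000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x == 0 and y == 0:
        continue
    f = x**3 / (x**2 + y**2)
    # |x^3/(x^2+y^2)| <= |x| <= sqrt(x^2+y^2), as in the answer
    assert abs(f) <= math.sqrt(x**2 + y**2) + 1e-12
print("ok")
```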
|
2,797,709 | <p>How is $\; 4 \cos^2 (t/2) \sin(1000t) = 2 \sin(1000t) + 2\sin(1000t)\cos t\,$? This is actually part of a much bigger physics problem, so I need to solve it from the LHS quickly. Is there an easy method by which I can do this?</p>
| Bernard | 202,857 | <p>Use the <em>linearisation formula</em>: $\qquad\cos^2x=\dfrac{1+\cos 2x}2$.</p>
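<p>With that formula, $4\cos^2(t/2) = 2(1+\cos t)$ and the identity follows at once; a one-line numerical check (editorial sketch, the test point is arbitrary):</p>

```python
import math

t = 0.83  # an arbitrary test point
lhs = 4 * math.cos(t / 2)**2 * math.sin(1000 * t)
rhs = 2 * math.sin(1000 * t) + 2 * math.sin(1000 * t) * math.cos(t)
print(lhs - rhs)  # essentially 0
```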
|
829,449 | <p>I am confused on the concept of extensionality versus intensionality. When we say 2<3 is True, we say that 2<3 can be demonstrated by a mathematical proof. So, according to mathematical logic, it is true. Yet, when we consider x(x+1) and X^2 + X, we can say that the x is the same for = 1. However, we call this intensional since the two expressions are true for the same value. This I understand. However, what I am having difficulty with is the claim that numbers are by their very nature abstract objects. So, how is it that there exists any truth values for mathematical statements? I know this seems like a general question but I am having difficulty in wrapping my head around the fact since a proposition about an abstract object by its very nature is intensional. Why then is the number 1 fixed. Is it simply because we agree that 1 is 1 and nothing else? And, does mathematical logic itself establish the meaning of 1?</p>
| cnick | 133,048 | <p>I am a fan of collisions. Get some simple euclidean shapes (billiard balls, or dice on ice) and show her how to calculate the angles that they'll move in after hitting each other.</p>
|
1,649,053 | <p>In the figure, $AD\perp DE$ and $BE\perp ED$. $C$ is the midpoint of $AB$. How to prove that $$CD=CE$$<a href="https://i.stack.imgur.com/ZtAA0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZtAA0.png" alt="enter image description here"></a></p>
| Svetoslav | 254,733 | <p>Let $H$ be on $DE$ such that $CH\perp DE$. You have that $C$ is the midpoint of $AB$, and $CH||AD||BE\Rightarrow DH=EH$. Then $\triangle DHC\cong\triangle EHC$ because $DH=EH$, $CH$ is common and $\angle DHC=\angle EHC=90^\circ$. Hence $CD=CE$.</p>
|
2,458,184 | <p>Let $a, b$ be non-negative integers and $p\ge3$ be a prime number. If $a^2+b^2$ and $a+b$ are divisible by $p$ does it mean $a$ and $b$ are always divisible by $p$?</p>
| Prasun Biswas | 215,900 | <p>Suppose $p$ is an odd prime.</p>
<p>Note that $a^2+b^2=(a+b)^2-2ab$ and use Euclid's lemma to conclude that $p$ must divide $a$ or $b$.</p>
<p>Now, assume that <strong>only one</strong> of $a,b$ (WLOG say $a$) is divisible by $p$.</p>
<p>Since $p\mid a+b$ and $p\mid a$, we get $p\mid (a+b)-a=b$, i.e., $p\mid b$, a contradiction.</p>
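<p>A brute-force confirmation of the statement for small odd primes (an editorial Python sketch):</p>

```python
for p in (3, 5, 7, 11, 13):
    for a in range(150):
        for b in range(150):
            if (a*a + b*b) % p == 0 and (a + b) % p == 0:
                assert a % p == 0 and b % p == 0
print("ok")
```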
|
2,227,047 | <p>For any $x=x_1, \dotsc, x_n$, $y=y_1, \dotsc, y_n$ in $\mathbf E^n$, define $\|x-y\|=\max_{1 \le k \le n}|x_k-y_k|$. Let $f\colon\mathbf E^n \to \mathbf E^n$ be given by $f(x)=y$, where $y_k= \sum_{i=1}^n a_{ki} x_i + b_k$ where $k =1,2, \dotsc,n$. Under what conditions is $f$ a contraction mapping?</p>
<p>Any hint or solution for this question? I am a beginner in this course and cannot understand it clearly.</p>
| mickep | 97,236 | <p>Maybe something like this: First, we write $\alpha=a+ib$ and use the fact that the integrand is even. Thus, the integral equals
$$
2\int_0^{+\infty}\frac{1}{\sqrt{(1+ax^2)^2+(bx^2)^2}}\,dx
$$
Playing a bit with that expression, we find that this equals
$$
\biggl[\frac{1}{(a^2+b^2)^{1/4}} F\Bigl(2\arctan\bigl((a^2+b^2)^{1/4}x\bigr),\frac{1}{2}-\frac{a}{2\sqrt{a^2+b^2}}\Bigr)\biggr]_0^{+\infty}
$$
where $F$ denotes the <a href="http://functions.wolfram.com/EllipticIntegrals/EllipticF/" rel="nofollow noreferrer">incomplete elliptic integral of the first kind</a> (<code>EllipticF</code> in Mathematica, other conventions are also used). The limit $x=0$ gives no contribution, and, using the upper limit, one gets
$$
\frac{2}{(a^2+b^2)^{1/4}}K\Bigl(\frac{1}{2}-\frac{a}{2\sqrt{a^2+b^2}}\Bigr)
$$
where $K$ denotes the <a href="http://functions.wolfram.com/EllipticIntegrals/EllipticK/" rel="nofollow noreferrer">complete elliptic integral of the first kind</a>, (<code>EllipticK</code> in Mathematica).</p>
|
157,731 | <p>I have a coupled PDE. How can I convert the equation from $(x,t)$ to $(p,t)$, the Fourier space in MATHEMATICA? </p>
<p>\begin{equation}
\frac{\partial c}{\partial t} +\frac{\partial d}{\partial t} = -4\gamma(\frac{\partial a}{\partial x} +x (\frac{\partial c}{\partial x} +\frac{\partial d}{\partial x}) - \frac{\partial^2 c}{\partial x^2} - \frac{\partial^2 d}{\partial x^2})
\end{equation}</p>
<p>$\gamma$ is a constant. How can I write the corresponding equation in Fourier space?</p>
<pre><code>Derivative[0, 1][c][x, t] +
Derivative[0, 1][d][x,
t] == -4 \[Gamma] (Derivative[1, 0][a][x, t] +
x (Derivative[1, 0][c][x, t] + Derivative[1, 0][d][x, t]) -
Derivative[2, 0][c][x, t] - Derivative[2, 0][d][x, t])
</code></pre>
| Alexei Boulbitch | 788 | <p>I did not notice the factor <code>x</code>. Sorry, below I repair this with the third rule.
Try the following. First introduce three simple rules:</p>
<pre><code>rule1 = D[y_[x, t], {x, n_Integer}] :> -I^n*p^n*y[p, t];
rule2 = y_[x, t] :> y[p, t];
rule3 = x*y_[p, t] :> I*D[y[p, t], p]
</code></pre>
<p>And then apply them to your expression:</p>
<pre><code> expr = Derivative[0, 1][c][x, t] +
Derivative[0, 1][d][x,
t] == -4 \[Gamma] (Derivative[1, 0][a][x, t] +
x (Derivative[1, 0][c][x, t] + Derivative[1, 0][d][x, t]) -
Derivative[2, 0][c][x, t] - Derivative[2, 0][d][x, t]);
expr1=expr /. rule1 /. rule2//Expand
expr1 /. rule3
</code></pre>
<p><a href="https://i.stack.imgur.com/Pgi4q.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pgi4q.jpg" alt="enter image description here"></a></p>
<p>Have fun!</p>
|
724,900 | <p>Assuming $y(x)$ is differentiable. </p>
<p>Then, what is the formula for the derivative ${d\over dx}f(x,y(x))$?</p>
<p>I examined some examples but got no clue.</p>
| Community | -1 | <p>First, compute $\mathrm{d}f(x,z)$ without assuming any relationship between $x,z$. Suppose you compute this to be</p>
<p>$$ \mathrm{d} f(x,z) = f_1(x,z) \mathrm{d}x + f_2(x,z) \mathrm{d}z $$</p>
<p>Now, we can substitute in the dependence $z = y(x)$. We need</p>
<p>$$ \mathrm{d}z = y'(x) \mathrm{d}x $$</p>
<p>and now substituting, we get</p>
<p>$$\begin{align} \mathrm{d} f(x, y(x))
&= f_1(x, y(x)) \mathrm{d}x + f_2(x, y(x)) y'(x) \mathrm{d} x
\\&= \left(f_1(x, y(x)) + f_2(x, y(x)) y'(x) \right)\mathrm{d} x \end{align}$$</p>
<p>from which we infer</p>
<p>$$ \frac{\mathrm{d} f(x, y(x))}{\mathrm{d} x} = f_1(x, y(x)) + f_2(x, y(x)) y'(x) $$</p>
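<p>This chain rule can be checked against a finite difference (an editorial sketch; the particular $f$ and $y$ below are my choices, not from the answer):</p>

```python
import math

# Concrete instance: f(x, z) = x*z^2 and y(x) = sin(x).
f  = lambda x, z: x * z**2
f1 = lambda x, z: z**2        # partial derivative in the first slot
f2 = lambda x, z: 2 * x * z   # partial derivative in the second slot

x0, h = 1.3, 1e-6
numeric = (f(x0 + h, math.sin(x0 + h)) - f(x0 - h, math.sin(x0 - h))) / (2 * h)
analytic = f1(x0, math.sin(x0)) + f2(x0, math.sin(x0)) * math.cos(x0)
print(numeric, analytic)  # the two values agree closely
```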
|
3,120,729 | <p>I came across this exercise:</p>
<blockquote>
<p>Prove that
<span class="math-container">$$\tan x+2\tan2x+4\tan4x+8\cot8x=\cot x$$</span></p>
</blockquote>
<p>Proving this seems tedious but doable, I think, by exploiting double angle identities several times, and presumably several terms on the left hand side would vanish or otherwise reduce to <span class="math-container">$\cot x$</span>.</p>
<p>I started to wonder if the pattern holds, and several plots for the first few powers of <span class="math-container">$2$</span> seem to suggest so. I thought perhaps it would be easier to prove the more general statement:</p>
<blockquote>
<p>For <span class="math-container">$n\in\{0,1,2,3,\ldots\}$</span>, prove that
<span class="math-container">$$2^{n+1}\cot(2^{n+1}x)+\sum_{k=0}^n2^k\tan(2^kx)=\cot x$$</span></p>
</blockquote>
<p>Presented this way, a proof by induction seems to be the smart way to do it.</p>
<p><strong>Base case:</strong> Trivial, we have</p>
<p><span class="math-container">$$\tan x+2\cot2x=\frac{\sin x}{\cos x}+\frac{2\cos2x}{\sin2x}=\frac{\cos^2x}{\sin x\cos x}=\cot x$$</span></p>
<p><strong>Induction hypothesis:</strong> Assume that</p>
<p><span class="math-container">$$2^{N+1}\cot(2^{N+1}x)+\sum_{k=0}^N2^k\tan(2^kx)=\cot x$$</span></p>
<p><strong>Inductive step:</strong> For <span class="math-container">$n=N+1$</span>, we have</p>
<p><span class="math-container">$$\begin{align*}
2^{N+2}\cot(2^{N+2}x)+\sum_{k=0}^{N+1}2^k\tan(2^kx)&=2^{N+2}\cot(2^{N+2}x)+2^{N+1}\tan(2^{N+1}x)+\sum_{k=0}^N2^k\tan(2^kx)\\[1ex]
&=2^{N+2}\cot(2^{N+2}x)+2^{N+1}\tan(2^{N+1}x)-2^{N+1}\cot(2^{N+1}x)+\cot x
\end{align*}$$</span></p>
<p>To complete the proof, we need to show</p>
<p><span class="math-container">$$2^{N+2}\cot(2^{N+2}x)+2^{N+1}\tan(2^{N+1}x)-2^{N+1}\cot(2^{N+1}x)=0$$</span></p>
<p>I noticed that if I ignore the common factor of <span class="math-container">$2^{N+1}$</span> and make the substitution <span class="math-container">$y=2^{N+1}x$</span>, this reduces to the base case,</p>
<p><span class="math-container">$$2^{N+1}\left(2\cot2y+\tan y-\cot y\right)=0$$</span></p>
<p>and this appears to complete the proof, and the original statement is true.</p>
<p>First question: <strong>Is the substitution a valid step in proving the identity?</strong></p>
<p>Second question: <strong>Is there a nifty way to prove the special case for <span class="math-container">$n=2$</span>?</strong></p>
| lab bhattacharjee | 33,337 | <p>Hint:</p>
<p><span class="math-container">$$\cot y-\tan y=\dfrac{\cos^2y-\sin^2y}{\cos y\sin y}=\dfrac{\cos2y}{\dfrac{\sin2y}2}=?$$</span></p>
|
1,368,073 | <p>Halmos, in Naive Set Theory, on page 19, provides a definition of intersection restricted to subsets of $E$, where $C$ is the collection of the sets intersected. The point is to allow the case where $C$ is $\emptyset$, which with this definition of intersection gives $E$ as the result. </p>
<blockquote>
<p>$\{x \in E: x \in X$ for every $X$ in $C\}$</p>
</blockquote>
<p>My problem lies in interpreting the sentence. I wanted to read it as:</p>
<blockquote>
<p>"Elements x in E, given that: Element x is in X for every X in C"</p>
</blockquote>
<p>My brain, tuned by a number of popular programming languages, wants to evaluate the terms in the condition reading from left to right. And clearly, no element $x$ will be in any $X$ if $C$ is $\emptyset$, and if the condition is evaluated to false, $E$ will not be the result of the intersection.</p>
<p>After struggling for a while, I figured that I had to read the sentence as:</p>
<blockquote>
<p>"Elements x in E, given that: For all X that are in C, x is in all of them"</p>
</blockquote>
<p>The <em>for</em> part of the condition has to be the pivotal one. It has to be the first term you evaluate. In analogy with common programming languages.</p>
<p>Questions:</p>
<ol>
<li>Is my new reading and conclusion correct?</li>
<li>How does one learn the order of evaluation in set theoretic expressions?</li>
</ol>
<p>Edit: Corrected after discussion with coldnumber.</p>
<p>Edit 2: Upon rereading the previous chapter, I've found that Halmos actually explains his "for every". The condition "$x \in X$ for every $X$ in $C$" actually means "for all $X$ (if $X \in C$, then $x \in X$)" -- which seems to give an unambiguous order of evaluation.</p>
| coldnumber | 251,386 | <p>Looking at the book, I see that $C$ is a collection of subsets of $E$, and the definition in your question is the intersection of all the elements of $C$.
Note that a subset $X$ of $E$ is not a <em>subset</em> of $C$; it is an <em>element</em> of $C$.</p>
<p>Your initial reading is correct. The set $N=\{x \in E: x \in X$ for all $X \in C\}$ contains the elements of $E$ that are in every $X \subset E$ that is an element of $C$.</p>
<p>This means that, for example, if $E=\{a,b\}$ and $C = \{\{a\}, E\}$, then $N = \{a\}$</p>
<p>On the other hand, if $E=\{a,b\}$ and $C = \{\varnothing, \{a\}, E\}$, then $N = \varnothing$, because $\varnothing$ contains no element of $E$. </p>
<p>Or if $C = \{\{b\}, \{a\}, E\}$, then $N =\varnothing$, because there is no element of $E$ that is in both $\{a\}$ and $\{b\}$. </p>
<p>In general, $N =\varnothing$ when $C$ contains subsets of $E$ that are disjoint. </p>
<hr>
<p>EDIT:
Now, if $C = \varnothing$, and we look for $N = \{x \in E: x \in X \;\forall X \in C\}$, it follows that $N=E$, because to find an element $x$ of $E$ that does not satisfy the condition "$x \in X \; \forall X \in \varnothing$" we would have to find an element $X \in \varnothing$ that does not contain $x$, which is impossible, so every $x \in E$ satisfies the condition.</p>
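<p>Halmos's definition translates directly into a one-line computation (an editorial Python sketch; note how the empty collection returns all of $E$, exactly because the condition is vacuously true):</p>

```python
def big_intersection(E, C):
    """{x in E : x in X for every X in C} — Halmos's restricted intersection."""
    return {x for x in E if all(x in X for X in C)}

E = {'a', 'b'}
print(big_intersection(E, [{'a'}, E]))  # {'a'}
print(big_intersection(E, []) == E)     # True: vacuous condition gives all of E
```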
|
2,659,781 | <p>I saw a problem yesterday which can easily be solved using fractions. But the problem is for 4th grade children, and I don't know how to solve it using what they have learned.</p>
<p>I tried to solve it using the graphic method (segments). Here's the problem:</p>
<p>A team of workers has to finish a road. On the first day, they built <code>3/4</code> of the road plus <code>2 meters</code>; on the second day, <code>3/4</code> of the remaining road plus <code>2 meters</code>. On the last day, the remaining length was <code>1 meter</code>. What was the length of the road?</p>
| fleablood | 280,126 | <p>Time 1: They did $\frac 34$ of the road. $\frac 14$ is left.</p>
<p>Time 1a: they did $2$ meters.</p>
<p>Time 2: They did $\frac 34$ of what was left. What remains is $\frac 14$.</p>
<p>Time 2a: they did $2$ meters.</p>
<p>Time 3: 1 meter is left.</p>
<p>Work backwards:</p>
<p>The $1$ meter of time 3) and the $2$ meters of time 2a) is $3$ meters.</p>
<p>That $3$ meters is the $\frac 14$ that was left at Time 2.</p>
<p>So $3$ meters is $\frac 14$ of what? Of $12$ meters. So that's what they had left when they started time 2).</p>
<p>At time 1a: they did $2$ meters, so that and the $12$ meters is $14$ meters.</p>
<p>That is the $\frac 14$ they had left at time 1.</p>
<p>So $14$ meters is $\frac 14$ of what? Of $4*14= 56$ meters. So $56$ meters is the total work.</p>
<p>Verify:</p>
<p>Day 1: they do $\frac 34$ of $56$ plus $2$; that is $\frac 34*56 + 2 = 42+2 =44$. There is $56 -44 = 12$ meters left.</p>
<p>Day 2: they do $\frac 34$ of $12$ plus $2$ that is $\frac 34*12 + 2 = 11$. There is $12-11= 1 $ meter left.</p>
<p>Day 3: there is $1$ meter left.</p>
<p>Verify a 2nd time just to make sure. Day 1: $44$, Day 2: $11$, Day 3: $1$. $44 + 11 + 1 =56$ meters.</p>
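<p>Running the schedule forwards confirms the backwards reasoning (an editorial Python sketch):</p>

```python
def remaining(total):
    r = total
    for _ in range(2):        # the two full working days
        r -= 3 * r / 4 + 2    # build 3/4 of what is left, plus 2 meters
    return r

print(remaining(56))  # 1.0 meter left for the last day
```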
|