qid | question | author | author_id | answer |
|---|---|---|---|---|
3,408,082 | <blockquote>
<p><span class="math-container">$\textbf{Definition}$</span>: We say <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> is <em>intersecting</em> if for every nonempty <span class="math-container">$A \subset \mathbb{R}$</span>, <span class="math-container">$f[A] \cap A \neq \varnothing$</span>.</p>
</blockquote>
<p>There is only one intersecting function: the identity. The reason for this is that <span class="math-container">$f[\{x\}] \cap \{x\} \neq \varnothing$</span> forces <span class="math-container">$f(x)=x$</span>.</p>
<p>If we impose restrictions on the subsets <span class="math-container">$A$</span> we consider (say, for instance, we rule out singletons), must <span class="math-container">$f$</span> still be the identity? Let's try this. We can first introduce some definitions.</p>
<blockquote>
<p><span class="math-container">$\textbf{Definition:}$</span> For any cardinal <span class="math-container">$\ell$</span>, we say <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> is <span class="math-container">$\ell$</span>-<em>intersecting</em> if for every nonempty <span class="math-container">$A \subset \mathbb{R}$</span> with <span class="math-container">$|A| \geq \ell$</span>, <span class="math-container">$f[A] \cap A \neq \varnothing$</span>.</p>
<p><span class="math-container">$\textbf{Definition:}$</span> For any <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span>, its <em>deviation</em> is the cardinality of the set
<span class="math-container">$\{x \in \mathbb{R} : f(x) \neq x\}$</span>.</p>
</blockquote>
<p>I'm going to write down some results for the <span class="math-container">$2$</span>-intersecting case. Suppose <span class="math-container">$f$</span> has the property that for every <span class="math-container">$A \subset \mathbb{R}$</span> with <span class="math-container">$|A| \geq 2$</span>, <span class="math-container">$f[A] \cap A \neq \varnothing$</span>. There indeed exists a non-identity example here, one with deviation <span class="math-container">$3$</span>, in fact. Let <span class="math-container">$f(x) = x$</span> for <span class="math-container">$x \in \mathbb{R} \setminus \{1,2,3\}$</span> and let <span class="math-container">$f(1)=2, f(2)=3, f(3)=1$</span>. Is there a <span class="math-container">$2$</span>-intersecting <span class="math-container">$f$</span> with deviation <span class="math-container">$\geq 4$</span>? No. The argument for this is combinatorial. Suppose an <span class="math-container">$f$</span> with this property existed, and let <span class="math-container">$A=\{x_1, x_2, x_3, x_4\}$</span> be a set of four elements with <span class="math-container">$f(x_i) \neq x_i$</span> for each <span class="math-container">$x_i \in A$</span>. We first note that <span class="math-container">$f$</span> restricted to <span class="math-container">$A$</span> defines a function <span class="math-container">$f_{A}:A \to A$</span>. To justify this claim, we can argue by contradiction. Suppose for some <span class="math-container">$x_i \in A$</span> that <span class="math-container">$f(x_i) = y \notin A$</span>. Let <span class="math-container">$x_j, x_k$</span> be distinct elements in <span class="math-container">$A$</span>, also both distinct from <span class="math-container">$x_i$</span>.
Then since <span class="math-container">$f[\{x_i, x_j\}] \cap \{x_i, x_j\} \neq \varnothing$</span> and <span class="math-container">$f[\{x_i, x_k\}] \cap \{x_i, x_k\} \neq \varnothing$</span>, this implies <span class="math-container">$f(x_k) = f(x_j) = x_i$</span>, which implies <span class="math-container">$f[\{x_k, x_j\}] \cap \{x_k, x_j\} = \varnothing$</span>, a contradiction. We also note the restricted function <span class="math-container">$f_{A}$</span> is injective. To see this, suppose, say, <span class="math-container">$f(x_i) = x_{m}$</span> and <span class="math-container">$f(x_j) = x_{m}$</span>, where <span class="math-container">$i, j, m$</span> are pairwise distinct. Then <span class="math-container">$f[\{x_i, x_j\}] \cap \{x_i, x_j\} = \varnothing$</span>, a contradiction. Since <span class="math-container">$A$</span> is finite, this implies it's bijective too. Hence <span class="math-container">$f_{A}$</span> corresponds to a permutation in <span class="math-container">$S_4$</span> without fixed points. It cannot be a product of two <span class="math-container">$2$</span>-cycles, since if <span class="math-container">$x_i, x_j$</span> lie in distinct <span class="math-container">$2$</span>-cycles, then <span class="math-container">$f[\{x_i, x_j\}] \cap \{x_i, x_j\} = \varnothing$</span>. Hence it must be a <span class="math-container">$4$</span>-cycle. But then <span class="math-container">$f[\{x_{i_2}, x_{i_4}\}] \cap \{x_{i_2}, x_{i_4}\} = \varnothing$</span>, where <span class="math-container">$x_{i_2}, x_{i_4}$</span> are respectively the second and fourth elements of the cycle. This establishes the contradiction.</p>
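<p>The case analysis above can also be checked mechanically. Here is a quick brute-force sanity check (a sketch, not part of the proof; the integer labels just encode the points):</p>

```python
from itertools import combinations, product

# The deviation-3 example: f(1)=2, f(2)=3, f(3)=1, identity elsewhere.
# Any A containing a fixed point of f trivially satisfies f[A] ∩ A ≠ ∅,
# so only the subsets of {1, 2, 3} of size >= 2 need checking.
f = {1: 2, 2: 3, 3: 1}
for r in (2, 3):
    for A in combinations((1, 2, 3), r):
        assert {f[a] for a in A} & set(A)

# No 2-intersecting f has deviation >= 4: every fixed-point-free self-map
# of a 4-element set already fails on some pair {a, b}, i.e. has
# f[{a, b}] ∩ {a, b} = ∅.  (A pair failing for the restriction f_A also
# fails for f itself.)
pts = range(4)
fp_free = [v for v in product(pts, repeat=4) if all(v[a] != a for a in pts)]
assert len(fp_free) == 3 ** 4          # three allowed values per point
for v in fp_free:
    assert any(not ({v[a], v[b]} & {a, b})
               for a, b in combinations(pts, 2))
print("checks pass")
```

A counting argument shows why the second loop must succeed: each of the four points contributes one "arrow", so at most four of the six pairs can be covered.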
<p>What eludes me is the general case. The combinatorics seem to get harder if <span class="math-container">$\ell \geq 3$</span>.</p>
<blockquote>
<p><span class="math-container">$\textbf{Problem 1:}$</span> The finite case. Let <span class="math-container">$\ell$</span> be a finite cardinal (i.e., a positive integer). Then is it true that any <span class="math-container">$\ell$</span>-intersecting map <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> has finite deviation? If this is true (which I suspect), is there a closed form for the largest possible deviation in terms of <span class="math-container">$\ell$</span>?</p>
<p><span class="math-container">$\textbf{Problem 2:}$</span> The infinite case. What is the maximum deviation of an <span class="math-container">$\aleph_{0}$</span>-intersecting map <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span>? What is the maximum deviation of a <span class="math-container">$\mathfrak{c}$</span>-intersecting map <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span>? In particular, are there such functions with infinite deviation?</p>
</blockquote>
<p>You will notice that we don't really use any of the analytic or algebraic structure of <span class="math-container">$\mathbb{R}$</span> here, so really these notions can be generalized to functions <span class="math-container">$f:X \to Y$</span> for arbitrary sets <span class="math-container">$X,Y$</span>. We could, however, ask different kinds of questions similar to the ones described above which make use of the structure of <span class="math-container">$\mathbb{R}$</span>. Instead of considering subsets <span class="math-container">$A$</span> with sufficiently large cardinality, we could alternatively consider subsets <span class="math-container">$A$</span> which are (nondegenerate) intervals, as has been suggested in the comments, or possibly subsets which are nonempty open sets. In a broad sense, we're interested in finding "highly non-identity" functions <span class="math-container">$f$</span> satisfying <span class="math-container">$f[A] \cap A \neq \varnothing$</span> for <span class="math-container">$A$</span> in some 'large' collection of subsets of <span class="math-container">$\mathbb{R}$</span>. If you have the solution to a different problem, but one which is similar in the broad sense described above, you're free to share it.</p>
| Misha Lavrov | 383,078 | <p>In <a href="https://math.stackexchange.com/a/3408428/383078">antkam's answer</a>, it's conjectured that an <span class="math-container">$\ell$</span>-intersecting function has deviation at most <span class="math-container">$3(\ell-1)$</span>. I will prove that in this answer. It's tight, by considering a permutation with <span class="math-container">$\ell-1$</span> <span class="math-container">$3$</span>-cycles.</p>
<hr>
<p>Suppose that <span class="math-container">$f$</span> is <span class="math-container">$\ell$</span>-intersecting. Consider the directed graph <span class="math-container">$G$</span> with vertex set <span class="math-container">$V = \{ x : f(x) \ne x\}$</span> (the non-fixed points of <span class="math-container">$f$</span>) and an edge from <span class="math-container">$x$</span> to <span class="math-container">$f(x)$</span> whenever <span class="math-container">$x\in V$</span> and <span class="math-container">$f(x) \in V$</span>.</p>
<p>The weak components of <span class="math-container">$G$</span> are either </p>
<ul>
<li>trees oriented toward a root <span class="math-container">$r$</span> such that <span class="math-container">$f(r) \notin V$</span>, or </li>
<li>unicyclic components: a directed cycle, with possibly a tree rooted at each of the cycle's vertices. </li>
</ul>
<p>A set <span class="math-container">$A$</span> such that <span class="math-container">$f[A] \cap A = \varnothing$</span> is exactly a subset of <span class="math-container">$V$</span> which is an independent set in <span class="math-container">$G$</span>: we can pick at most one endpoint of each edge. This ignores edge orientations, so from now on, we will not care about those.</p>
<p>All trees are bipartite, so for every component that's a tree, we can <span class="math-container">$2$</span>-color it, and add the larger of the two color classes to <span class="math-container">$A$</span>. The same works for a unicyclic component with an even cycle. In such cases, a <span class="math-container">$k$</span>-vertex component contributes at least <span class="math-container">$\frac k2$</span> vertices to <span class="math-container">$A$</span>.</p>
<p>In other unicyclic components, we can delete one vertex from the cycle and be left with a bipartite graph, which we can treat as above. In such cases, a <span class="math-container">$k$</span>-vertex component contributes at least <span class="math-container">$\frac{k-1}{2}$</span> vertices to <span class="math-container">$A$</span>. When <span class="math-container">$k$</span> is even, this guarantees <span class="math-container">$\frac k2$</span> vertices, same as before. When <span class="math-container">$k$</span> is odd, because <span class="math-container">$k > 1$</span> for any component with a cycle, we have <span class="math-container">$k\ge3$</span> and therefore <span class="math-container">$\frac{k-1}{2} \ge \frac k3$</span>.</p>
<p>So in all cases, a <span class="math-container">$k$</span>-vertex component can contribute at least <span class="math-container">$\frac k3$</span> vertices to an independent set <span class="math-container">$A$</span>; therefore we can find an independent set of total size at least <span class="math-container">$\frac{|V|}{3}$</span>. However, because <span class="math-container">$f$</span> is <span class="math-container">$\ell$</span>-intersecting, we cannot find an independent set of size <span class="math-container">$\ell$</span>. Therefore <span class="math-container">$\frac{|V|}{3} \le \ell-1$</span>, or <span class="math-container">$|V| \le 3(\ell-1)$</span>. Since <span class="math-container">$|V|$</span> is exactly the deviation of <span class="math-container">$f$</span>, we are done.</p>
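<p>The tightness claim is easy to test by computer. The sketch below (hypothetical point labels <code>0, 1, 2, ...</code>; not part of the proof) builds the permutation with <span class="math-container">$\ell-1$</span> disjoint <span class="math-container">$3$</span>-cycles and verifies by brute force that it is <span class="math-container">$\ell$</span>-intersecting for small <span class="math-container">$\ell$</span>: by pigeonhole, any <span class="math-container">$\ell$</span> points spread over <span class="math-container">$\ell-1$</span> cycles include two points of one <span class="math-container">$3$</span>-cycle, and any two points of a <span class="math-container">$3$</span>-cycle meet their image.</p>

```python
from itertools import combinations

def three_cycles(num_cycles):
    """A permutation acting as disjoint 3-cycles on {0, ..., 3*num_cycles - 1}."""
    f = {}
    for c in range(num_cycles):
        a, b, d = 3 * c, 3 * c + 1, 3 * c + 2
        f[a], f[b], f[d] = b, d, a
    return f

# For ell = 2, 3, 4: the permutation with (ell - 1) 3-cycles has deviation
# 3*(ell - 1), yet every size-ell subset of its support meets its image.
# (Subsets touching a fixed point are trivial, and supersets of an
# intersecting set still intersect, so size ell is the only case to check.)
for ell in (2, 3, 4):
    f = three_cycles(ell - 1)
    support = sorted(f)
    assert len(support) == 3 * (ell - 1)
    for A in combinations(support, ell):
        assert {f[a] for a in A} & set(A)
print("tightness verified for ell = 2, 3, 4")
```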
|
<p>Suppose we have an <span class="math-container">$n \times n$</span> matrix, <span class="math-container">$A$</span>. This matrix represents a linear function from <span class="math-container">$\Bbb R^n$</span> to <span class="math-container">$\Bbb R^n$</span>. Let's say we found a subspace spanned by the vectors <span class="math-container">$u_1, u_2, \dots u_k$</span> where <span class="math-container">$k<n$</span> such that <span class="math-container">$A$</span> maps any vector from the span of <span class="math-container">$u_1, u_2, \dots u_k$</span> back to this same span. Now, we want to define a linear transformation that does exactly what <span class="math-container">$A$</span> does but defined from the span of <span class="math-container">$u_1, u_2, \dots u_k$</span> to the span of <span class="math-container">$u_1, u_2, \dots u_k$</span>.</p>
<p>I read in a proof I'm trying to follow that the standard way to do this is to define a transformation matrix <span class="math-container">$A'$</span> that is <span class="math-container">$k \times k$</span> and whose columns are the vectors <span class="math-container">$A u_j$</span> for <span class="math-container">$j = 1, \dots, k$</span>, expressed in the coordinate system of <span class="math-container">$u_1, u_2 \dots u_k$</span>. This leads to the conclusion that:</p>
<p><span class="math-container">$$A' = U^T A U$$</span></p>
<p>Is this statement correct? And if so, any intuition for why it's true and how do I prove it?</p>
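<p>A quick numerical sanity check of the formula (a sketch, assuming the <span class="math-container">$u_j$</span> are <em>orthonormal</em> and stacked as the columns of <span class="math-container">$U$</span>; for a non-orthonormal basis, <span class="math-container">$U^T A U$</span> is not the right matrix and a Gram-matrix correction is needed):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2

# Random orthonormal basis of R^n; the first k columns span our subspace.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
U, W = Q[:, :k], Q[:, k:]

# Build A so that span(U) is A-invariant: A acts as M on span(U)
# and as N on the orthogonal complement.
M = rng.standard_normal((k, k))
N = rng.standard_normal((n - k, n - k))
A = U @ M @ U.T + W @ N @ W.T

A_prime = U.T @ A @ U              # the claimed k x k restriction
assert np.allclose(A_prime, M)

# A' reproduces the action of A on the subspace, in U-coordinates:
c = rng.standard_normal(k)         # coordinates of a vector in span(U)
assert np.allclose(A @ (U @ c), U @ (A_prime @ c))
print("restriction formula verified for orthonormal U")
```

The check also shows where orthonormality enters: the derivation uses <span class="math-container">$U^T U = I$</span>.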
| lisyarus | 135,314 | <p><em>I'll omit the surjectivity condition, because the analysis isn't much harder in this case.</em></p>
<p>First, let's deal with <span class="math-container">$f(1)$</span>.</p>
<p>If <span class="math-container">$f(1) = 0$</span>, then <span class="math-container">$\forall x\,\, f(x)=f(x\cdot 1)=f(x)+f(1)+f(x)f(1)=f(x)$</span>. So, no new relations are gained.</p>
<p>If <span class="math-container">$f(1) = 1$</span>, then <span class="math-container">$\forall x\,\, f(x)=f(x\cdot 1)=f(x)+f(1)+f(x)f(1)=f(x)+1+f(x) = 1$</span>. Thus, in this case <span class="math-container">$\forall x\,\, f(x)=1$</span>.</p>
<p>Now, assume that <span class="math-container">$f(1)=0$</span>. The expression <span class="math-container">$f(x)+f(y)+f(x)f(y)$</span> is the standard realization of disjunction (logical or) in <span class="math-container">$\mathbb Z/2\mathbb Z$</span> arithmetic (which you probably already know, but it's nice to stress this explicitly); its value is <span class="math-container">$1$</span> precisely when at least one of <span class="math-container">$f(x),f(y)$</span> is <span class="math-container">$1$</span>.</p>
<p>What it means is that the value of <span class="math-container">$f(n)$</span> is exactly determined by the values of <span class="math-container">$f(p)$</span> for <span class="math-container">$p | n$</span>. So, pick any subset <span class="math-container">$S$</span> of primes (even empty or infinite) and define <span class="math-container">$f(x) = 1$</span> iff <span class="math-container">$\exists p\in S\,\,p|x$</span>. It is then a simple induction argument that <span class="math-container">$f(x)$</span> so defined satisfies the desired relation.</p>
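<p>A small sanity check of this construction (a sketch; the particular set <span class="math-container">$S$</span> below is an arbitrary choice for testing):</p>

```python
# A hypothetical choice of S; any set of primes (even empty) should work.
S = {2, 5}

def f(x):
    # f(x) = 1 iff some prime in S divides x; values taken in Z/2Z.
    return 1 if any(x % p == 0 for p in S) else 0

# Check f(xy) = f(x) + f(y) + f(x)f(y)  (mod 2), i.e. f(xy) = f(x) OR f(y).
for x in range(1, 60):
    for y in range(1, 60):
        assert f(x * y) == (f(x) + f(y) + f(x) * f(y)) % 2
print("relation holds on the tested range")
```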
|
<p>I love watches, and I had an idea for a weird kind of watch movement (all of the stuff that moves the hands). It is made up of a central wheel, with one of the hands connected to it (in this case, it will be the hour hand). This hand goes through a pivot, and then displays the time. I attached a video of a 3D mock-up <a href="https://youtu.be/5-OmzIeNhwg" rel="noreferrer">here</a>, because it is kinda hard to explain. My question is: are there any functions that would graph the movement of the end of the hand? I don't want to make the real prototype just yet.</p>
| Ivo Terek | 118,056 | <p>Not quite. Although the index balance is correct, in <span class="math-container">${\rm d}x^\mu = g^{\mu\nu}\partial_\nu$</span> you have that the left side is a <span class="math-container">$1$</span>-form, while the right side is a vector field. What happens here is that if <span class="math-container">$(M,g)$</span> is your pseudo-Riemannian manifold (which I assume that in your case should just be a spacetime), the metric induces the so called musical isomorphisms <span class="math-container">$\flat\colon T_x M \to T_x^*M$</span> and <span class="math-container">$\sharp \colon T^*_xM \to T_xM$</span> for every point <span class="math-container">$x \in M$</span>. Said isomorphisms are characterized by <span class="math-container">$v_\flat(w) = g_x(v,w)$</span> for <span class="math-container">$v, w \in T_xM$</span>, and also for <span class="math-container">$\xi \in T_x^*M$</span>, the vector <span class="math-container">$\xi^\sharp \in T_xM$</span> is characterized by the relation <span class="math-container">$\xi(v) = g_x(v, \xi^\sharp)$</span>, for all <span class="math-container">$v \in T_xM$</span>. You can put all of these isomorphisms together to get maps <span class="math-container">$\flat : TM \to T^*M$</span> and <span class="math-container">$\sharp\colon T^*M\to TM$</span> between tangent bundles (which I'm still denoting by the same symbols), and then they can be "upgraded" to the section level, yielding isomorphisms <span class="math-container">$\flat\colon \mathfrak{X}(M) \to \Omega^1(M)$</span> and <span class="math-container">$\sharp\colon \Omega^1(M) \to \mathfrak{X}(M)$</span> between vector fields and <span class="math-container">$1$</span>-forms. 
What you actually have for a given coordinate system are the formulas <span class="math-container">$${\rm d}x^\mu = g^{\mu\nu} (\partial_\nu)_\flat \quad\mbox{and}\quad \partial_\mu = g_{\mu \nu}({\rm d}x^\nu)^\sharp.$$</span>What happens is that people just sweep these isomorphisms under the rug and write just the formulas you have seen in your textbook. For example, to prove the first one, you just have to check that both sides act the same in an arbitrary coordinate vector field <span class="math-container">$\partial_\lambda$</span>, say. Indeed, we have <span class="math-container">$$g^{\mu\nu}(\partial_\nu)_\flat(\partial_\lambda) = g^{\mu\nu}g(\partial_\nu,\partial_\lambda) = g^{\mu\nu}g_{\nu\lambda} = \delta^\mu_{\;\lambda} = {\rm d}x^\mu(\partial_\lambda),$$</span>as wanted. The second one follows from the first one (up to index relabeling) by doing <span class="math-container">$${\rm d}x^\mu = g^{\mu\nu}(\partial_\nu)_\flat \implies g_{\lambda \mu}{\rm d}x^\mu = g_{\lambda \mu}g^{\mu\nu}(\partial_\nu)_\flat = \delta^\nu_{\lambda}(\partial_\nu)_\flat = (\partial_\lambda)_\flat \implies \partial_\lambda = (g_{\lambda\mu}{\rm d}x^\mu)^\sharp = g_{\lambda\mu}({\rm d}x^\mu)^\sharp.$$</span>(Relabel <span class="math-container">$\mu \to \nu$</span> and <span class="math-container">$\lambda \to \mu$</span> in this order if it makes you more comfortable in the last equality above.)</p>
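<p>For readers who like to see the index gymnastics concretely, here is a small numerical illustration (an assumed toy example using the Minkowski metric; it only checks the component identities in a fixed coordinate basis, nothing coordinate-free):</p>

```python
import numpy as np

# A concrete (assumed) example: the Minkowski metric on R^4, with tensors
# represented by their component matrices in a fixed coordinate basis.
g = np.diag([-1.0, 1.0, 1.0, 1.0])     # g_{mu nu}
g_inv = np.linalg.inv(g)               # g^{mu nu}

# (partial_nu)_flat has components g_{nu lambda}, and dx^mu has components
# delta^mu_lambda, so dx^mu = g^{mu nu} (partial_nu)_flat reads
# g^{mu nu} g_{nu lambda} = delta^mu_lambda:
assert np.allclose(g_inv @ g, np.eye(4))

# Round trip through the musical isomorphisms: lowering then raising
# a vector's components returns the original vector.
v = np.array([1.0, 2.0, -0.5, 3.0])
v_flat = g @ v                         # components of v_flat
assert np.allclose(g_inv @ v_flat, v)
print("component identities verified")
```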
|
1,938 | <p>I no longer want to help a particular user (picakhu) due to repeated comments that my considered answers are not helpful, changes in questions after I answer, and the user's pattern of not voting things up (26 questions, only 25 up votes) which I think is antisocial. I have deleted several of my answers to that user's questions, but I can't do that to <a href="https://math.stackexchange.com/questions/29355/optimizing-group-spending">an answer the user has accepted</a>. Could a moderator please delete that answer for me?</p>
<p>I realize that others might benefit from an answer. However, I also don't want to reward or encourage this pattern of behavior, while the content of that answer is contained in most introductory texts on the subject. </p>
| Willie Wong | 1,543 | <p>@Douglas: please don't do what you are requesting the moderators to do. Remember that this is a Q and A website, and it seeks to document questions and answers <strong>in a way that is easily searchable and useful to other people</strong> besides just the OP and the answerer. Deliberately removing content from the site due to a (personal) dispute you have with another user <em>can be</em> interpreted as site vandalism. </p>
<p>You are of course free to ignore any user's questions in the future. But retroactively doing what you are doing in spite is something I think <strong>should not be done</strong>, and is something that warrants a warning.</p>
<p>In general, I am not going to go around policing and asking people not to delete their questions or answers: people have all sorts of reasons for doing so. But let me just say this now. <strong>If it becomes clear such deletions are done in bad-faith</strong> (not for improving the content or presentation of the site, not to remove irrelevant comments and answers, but actually resulting in the site being less useful for everyone else) (which you have freely admitted above), the moderators <strong>may view the action as grounds for suspension</strong>. </p>
<p>For the time being I'll overlook what you've just been doing: the fact that those answers you removed have not been accepted can be a justification for removing them (that the OP, and presumably other readers, do not find it to be useful). But if your answers have been genuinely accepted, then there's relatively little doubt that <em>removing</em> them will actually remove useful content from the website. </p>
|
1,938 | <p>I no longer want to help a particular user (picakhu) due to repeated comments that my considered answers are not helpful, changes in questions after I answer, and the user's pattern of not voting things up (26 questions, only 25 up votes) which I think is antisocial. I have deleted several of my answers to that user's questions, but I can't do that to <a href="https://math.stackexchange.com/questions/29355/optimizing-group-spending">an answer the user has accepted</a>. Could a moderator please delete that answer for me?</p>
<p>I realize that others might benefit from an answer. However, I also don't want to reward or encourage this pattern of behavior, while the content of that answer is contained in most introductory texts on the subject. </p>
| Alex B. | 3,212 | <p>This got too long for a comment.</p>
<p>I find the logic advocated by Willie Wong quite strange and even alarming. It is effectively saying that a user loses control over his own contribution as soon as he posts it, since a moderator reserves the right to suspend him, should he choose to exercise this control. There are all sorts of reasons one might have for removing <strong>one's own</strong> contribution, even while believing that it might have been useful. This has nothing to do with vandalism but is actually something that is explicitly supported by the software.</p>
<p>I have also <a href="https://math.stackexchange.com/questions/13662/probability-problems">removed content</a> in the past because a user failed to appreciate it. Arguing over the validity of the argument with a user who refuses to think for himself and sometimes having to read his repeated assertions that the answer was wrong or useless (as confirmed by an IMO gold medallist friend of his) is more investment in the site than I was prepared to make. I still believe that the original post that I removed was the most useful one in the whole thread. Nevertheless, suspending me for "vandalism" would be completely ludicrous. I cannot see the posts removed by Douglas, but they were his contributions and it is his right to do with them whatever the software allows. <strong>The internet is too public a place, and a moderator has no right to threaten to suspend a user who wishes to control his own internet footprint.</strong></p>
|
887,473 | <p>I have been struggling with the following claim:</p>
<p>Let $A_n$ be a sequence of compact sets and $A$ a compact set. $A=\limsup_n A_n=\liminf_n A_n$ iff $d_H(A_n,A)\to 0$, where $d_H(\cdot,\cdot)$ is the Hausdorff metric.</p>
<p>$\liminf$ and $\limsup$ are defined by $\liminf_n A_n=\left\{y\in Y:\forall \varepsilon>0,\ \exists N,\ n\geq N \implies B_{\varepsilon}(y)\cap A_n\neq\emptyset \right\}$ and $\limsup_n A_n=\left\{y\in Y: \forall\varepsilon>0,\ \forall N,\ \exists n\geq N \text{ so that } B_\varepsilon(y)\cap A_n\neq\emptyset \right\}$.</p>
<p>Also $d_H (X,Y)=\max\left(\inf\left\{\epsilon>0:Y\subset\bigcup_{x\in X}B_\epsilon(x) \right\}, \inf\left\{\epsilon>0:X\subset\bigcup_{y\in Y}B_\epsilon(y) \right\} \right)$.</p>
<p>$\Rightarrow$: Let $\varepsilon>0$ be given. For each $a\in A$ there exists $N(a,\varepsilon)$ so that $n\geq N$ implies $B_\varepsilon(a)\cap A_n\neq\emptyset$. By compactness of $A$ we can find finitely many $a_1,a_2,\dots,a_k$ and associated $N_1,N_2,\dots,N_k$ so that $A\subset\cup_{i=1}^k B_\varepsilon(a_i)$. Let $N=\max_{1\leq i\leq k}N_i$. I want to show that $n\geq N$ implies $A_n\subset A^\varepsilon$, but couldn't manage.</p>
<p>$\Leftarrow$: I want to show $A\subset\liminf A_n\subset\limsup A_n\subset A$. Let $\varepsilon>0$ be given. There exists $N$ such that $d_H(A_n,A)<\varepsilon$ for all $n\geq N$. Pick $a\in A$, so $a\in A_n^{\varepsilon}$, which implies $B_{\varepsilon}(a)\cap A_n\neq\emptyset$ for all such large $n$. Thus $a\in\liminf A_n$. To finish this side, I need to show $\limsup A_n\subset A$, but again couldn't find the solution. Thanks for any help!</p>
| GEdgar | 442 | <p>Let's think about this. Some directions don't work. In space $A=[0,1]$, which is compact, let $A_n = \{ k/2^n : k=0,1,\dots,2^n\}$. Then $d_H(A_n,A) \to 0$. But $\limsup A_n = \liminf A_n = \bigcup A_n$ is not compact at all.</p>
|
2,715,374 | <p>We know that \begin{equation*}
a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cfrac{1}{\ddots+\cfrac{1}{a_n}}}}}=[a_0,a_1, \cdots, a_n]
\end{equation*}</p>
<p>Let $\frac{p_n}{q_n}=[a_0,a_1, \cdots, a_n]$.</p>
<blockquote>
<p>How to prove that $$
\begin{pmatrix}
p_n & p_{n-1} \\
q_n & q_{n-1} \\
\end{pmatrix}=\begin{pmatrix}
a_0 &1 \\
1 & 0 \\
\end{pmatrix}\begin{pmatrix}
a_1 &1 \\
1 & 0 \\
\end{pmatrix}\cdots\begin{pmatrix}
a_n &1 \\
1 & 0 \\
\end{pmatrix}
$$.</p>
</blockquote>
<p>I have checked that the formula holds for $n=0,1,2,3$. I think it could be done by induction, but after assuming the case $k=n-1$, the calculation gets messy when I try to prove the case $k=n$. Please help me out in proving this.</p>
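<p>The identity can be checked numerically before attempting the induction (a sketch; the test word is arbitrary):</p>

```python
from fractions import Fraction

def convergent(a):
    """Evaluate [a_0; a_1, ..., a_n] from the inside out, exactly."""
    value = Fraction(a[-1])
    for ai in reversed(a[:-1]):
        value = ai + 1 / value
    return value

def mat_mul(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def matrix_product(a):
    M = [[1, 0], [0, 1]]
    for ai in a:
        M = mat_mul(M, [[ai, 1], [1, 0]])
    return M

a = [3, 7, 15, 1, 292]                 # an arbitrary test word
M = matrix_product(a)
p_n, q_n = M[0][0], M[1][0]            # claimed first column: (p_n, q_n)
p_prev, q_prev = M[0][1], M[1][1]      # claimed second column: (p_{n-1}, q_{n-1})
assert Fraction(p_n, q_n) == convergent(a)
assert Fraction(p_prev, q_prev) == convergent(a[:-1])
```

The determinant of each factor is $-1$, which incidentally recovers $p_n q_{n-1} - p_{n-1} q_n = (-1)^{n+1}$, so the fractions $\frac{p_n}{q_n}$ produced this way are automatically in lowest terms.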
| StuartMN | 439,545 | <p>But with $x=0$, $0x=1x$ ($=0$), so line 2 is a fallacy (false).</p>
|
<p>I'm trying to prove that $\mathbb Z_p^*$ ($p$ prime) is a group, using Fermat's little theorem to show that every element is invertible.</p>
<p>Thus, using Fermat's little theorem, for each $a\in \mathbb Z_p^*$ we have $a^{p-1}\equiv1 \pmod p$. The problem is to prove that $p-1$ is the least positive integer for which $a^{p-1}\equiv1 \pmod p$.</p>
<p><strong>Remark:</strong> $\mathbb Z_p^*$ is $\{\overline 1,...,\overline {p-1}\}$ with multiplication.</p>
<p>I need help.</p>
<p>Thanks a lot.</p>
| Mr.Lilly | 67,969 | <p>Please check my inverse solution.</p>
<p>Let $\overline{x} \in \mathbb{Z}_{p}^*$. Then $(x,p)=1$, that is, there exist $k, q \in \mathbb{Z}$ such that $kx + qp=1$. This implies $\overline{k}\,\overline{x}=\overline{1}$. Suppose that $\overline{k}=\overline{0}$. Then $\overline{0}=\overline{1}$, a contradiction. Thus $\overline{k} \not = \overline{0}$, i.e. $p \nmid k$.</p>
<p>Now write $k=mp+r$ where $0 \leq r < p$. Since $p \nmid k$, we have $r \neq 0$, so $0 < r < p$ and $(r,p)=1$. Hence $\overline{r} \in \mathbb{Z}_p^*$, and since $k \equiv r \pmod p$, we get $\overline{r}\,\overline{x} = \overline{k}\,\overline{x} = \overline{1}$.</p>
<p>Therefore every element of $\mathbb{Z}_p^*$ has an inverse.</p>
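<p>The Bézout step in this argument is exactly the extended Euclidean algorithm, so the whole construction can be exercised numerically (a sketch; <code>inverse_mod</code> is a made-up name):</p>

```python
def inverse_mod(x, p):
    """Bezout: find k with k*x + q*p == 1, then reduce k mod p."""
    # Extended Euclid for gcd(x, p); old_s tracks the coefficient of x.
    old_r, r = x, p
    old_s, s = 1, 0
    while r:
        quot = old_r // r
        old_r, r = r, old_r - quot * r
        old_s, s = s, old_s - quot * s
    assert old_r == 1            # gcd(x, p) = 1 since p is prime and p does not divide x
    return old_s % p             # the representative r with 0 < r < p

p = 13
for x in range(1, p):
    assert (x * inverse_mod(x, p)) % p == 1
print("all elements of Z_13* are invertible")
```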
|
3,516,241 | <p>Consider the equation:</p>
<p><span class="math-container">$$ x ^ 4 - (2m - 1) x^ 2 + 4m -5 = 0 $$</span></p>
<p>with <span class="math-container">$m \in \mathbb{R}$</span>. I have to find the values of <span class="math-container">$m$</span> such that the given equation has all of its roots real.</p>
<p>This is what I did:</p>
<p>Let <span class="math-container">$ u = x^2, \hspace{.25cm} u\ge 0$</span></p>
<p>We get:</p>
<p><span class="math-container">$$ u ^ 2 - (2m - 1)u + 4m -5 = 0 $$</span></p>
<p>Now since we have </p>
<p><span class="math-container">$$ u = x ^ 2$$</span></p>
<p>That means</p>
<p><span class="math-container">$$x = \pm \sqrt{u}$$</span></p>
<p>That means that the roots <span class="math-container">$x$</span> are real only if <span class="math-container">$u \ge 0$</span>.</p>
<p>So we need to find the values of <span class="math-container">$m$</span> such that all <span class="math-container">$u$</span>'s are <span class="math-container">$\ge 0$</span>. If all <span class="math-container">$u$</span>'s are <span class="math-container">$\ge 0$</span>, that means that the sum of <span class="math-container">$u$</span>'s is <span class="math-container">$\ge 0$</span> <strong>and</strong> the product of <span class="math-container">$u$</span>'s is <span class="math-container">$ \ge 0 $</span>. Using Vieta's formulas</p>
<p><span class="math-container">$$S = u_1 + u_2 = - \dfrac{b}{a} \hspace{2cm} P = u_1 \cdot u_2 = \dfrac{c}{a}$$</span></p>
<p>where <span class="math-container">$a, b$</span> and <span class="math-container">$c$</span> are the coefficients of the quadratic, we can solve for <span class="math-container">$m$</span>. We get:</p>
<p><span class="math-container">$$S = - \dfrac{-(2m - 1)}{1} = 2m - 1$$</span></p>
<p>We need <span class="math-container">$S \ge 0$</span>, so that means <span class="math-container">$m \ge \dfrac{1}{2}$</span> <span class="math-container">$(1)$</span></p>
<p><span class="math-container">$$P = \dfrac{4m - 5 }{1} = 4m - 5$$</span></p>
<p>We need <span class="math-container">$P \ge 0$</span>, so that means <span class="math-container">$m \ge \dfrac{5}{4}$</span> <span class="math-container">$(2)$</span></p>
<p>Intersecting <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> we get the final answer:</p>
<p><span class="math-container">$$ m \in \bigg [ \dfrac{5}{4}, \infty \bigg )$$</span></p>
<p>My question is: Is this correct? Is my reasoning sound? Is there another way (maybe even a better way!) to solve this?</p>
| Quanto | 686,284 | <p>One approach is to express <span class="math-container">$m$</span> as a function of <span class="math-container">$x$</span>,</p>
<p><span class="math-container">$$m(x)=\frac{x^4+x^2-5}{2x^2-4}
=\frac12\left(x^2+3+\frac1{x^2-2}\right)$$</span></p>
<p>Then, set <span class="math-container">$m'(x)=0$</span> to get </p>
<p><span class="math-container">$$x(x^2-1)(x^2-3)=0$$</span></p>
<p>which identifies the local extrema at <span class="math-container">$x=0, \pm 1,\>\pm \sqrt3$</span>. It is straightforward to verify that <span class="math-container">$m(x)$</span> has the local minima <span class="math-container">$m(\pm \sqrt3)=\frac72$</span> and local maxima <span class="math-container">$m(0)<m(\pm 1)=\frac32$</span>. (See the plot below.)</p>
<p>Thus, the values of <span class="math-container">$m$</span> for real <span class="math-container">$x$</span> are</p>
<p><span class="math-container">$$m\in (-\infty, \frac32]\cup [\frac72, \infty)$$</span></p>
<p>Note the above result only assumes that the equation has at least one real root, not that all of its roots are real. There may be some subtlety in interpreting the problem. If all four roots are required to be real, the lower bound of <span class="math-container">$m$</span> is <span class="math-container">$m(0)=\frac54$</span> (attained, with <span class="math-container">$x=0$</span> a double root) and the values would be <span class="math-container">$m\in [\frac54, \frac32]\cup [\frac72, \infty)$</span>. For <span class="math-container">$m\in (-\infty, \frac54)$</span>, the equation has only one pair of real roots.</p>
<p><a href="https://i.stack.imgur.com/SLY2V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SLY2V.png" alt="enter image description here"></a></p>
|
2,892,342 | <p>Given two adjacent sides and all four angles of a quadrilateral, what is the most efficient way to calculate the angle that is made between a side and the diagonal of the quadrilateral that crosses (but does not necessarily bisect) the angle in between the two known sides?</p>
<p>Other known information:</p>
<ul>
<li>The two angles that touch one and only one known side are right angles.</li>
<li>The angle that touches both known sides equals $n-m$</li>
<li>The angle that doesn't touch any known sides equals $180-n+m$, which can be inferred through the above statement and the rule that states that the interior angles of a quadrilateral must add up to $360$, although this is also known from other aspects of the broader problem</li>
<li>$n$ and $m$ cannot easily be explained with words. See the picture at the end of this post.</li>
</ul>
<p>From what I can tell, the most efficient solution to this problem is to: solve for the OTHER diagonal using the law of cosines and the two known sides $x$ and $y$ from the sketch; use the law of sines and/or cosines to solve for the parts of angles $A$ and $C$ that make up the left-most triangle made by that diagonal; find the other parts of angles $A$ and $C$ by using $90-A$ and $90-C$, respectively, since $A$ and $C$ are both right angles; then use the law of sines once more to find sides $AB$ and $BC$; and FINALLY use the law of sines to find any of the four thetas. Seems tedious. Am I missing something?</p>
<p>Here is an awful sketch I made of the problem:
<a href="https://i.stack.imgur.com/hfkoG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hfkoG.png" alt="Here is an awful sketch I made of the problem"></a></p>
| Jack D'Aurizio | 44,121 | <p>Let us assume that $D$ lies at the origin and $\widehat{CDA}=\theta$. Then the coordinates of $A$ are $(x\cos\theta,x\sin\theta)$ and the line through $A$ which is orthogonal to $DA$ has equation $y(t)=-\frac{\cos\theta}{\sin\theta}t+\left(x\sin\theta+x\frac{\cos^2\theta}{\sin\theta}\right) $. It follows that the length of $CB$ is given by
$$ -\frac{\cos\theta}{\sin\theta} y + \left(x\sin\theta+x\frac{\cos^2\theta}{\sin\theta}\right)=\frac{x-y\cos\theta}{\sin\theta} $$
and by symmetry the length of $AB$ is given by $\frac{y-x\cos\theta}{\sin\theta}$.<br></p>
<p>In equivalent terms: <a href="https://en.wikipedia.org/wiki/Complete_quadrangle" rel="nofollow noreferrer">complete the quadrilateral</a>, notice the relevant proportions then remove the attached pieces.</p>
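The two closed-form lengths can be cross-checked numerically by placing the configuration in coordinates as in the answer ($D$ at the origin; it is additionally assumed here, as a sketch, that $DC$ of length $y$ lies along the $x$-axis, so the two right angles determine $B$):

```python
import math

def side_lengths(x, y, theta):
    """Place D at the origin with DC of length y along the x-axis and DA of
    length x at angle theta.  B is the intersection of the perpendicular to
    DA at A with the perpendicular to DC at C.  Returns (|CB|, |AB|)."""
    A = (x * math.cos(theta), x * math.sin(theta))
    C = (y, 0.0)
    # perpendicular to DC at C is the vertical line X = y; intersect it with
    # the line through A orthogonal to DA: (P - A) . (cos t, sin t) = 0
    By = A[1] - math.cos(theta) * (y - A[0]) / math.sin(theta)
    B = (y, By)
    CB = math.hypot(B[0] - C[0], B[1] - C[1])
    AB = math.hypot(B[0] - A[0], B[1] - A[1])
    return CB, AB

x, y, theta = 3.0, 2.0, 1.1
CB, AB = side_lengths(x, y, theta)
assert abs(CB - (x - y * math.cos(theta)) / math.sin(theta)) < 1e-12
assert abs(AB - (y - x * math.cos(theta)) / math.sin(theta)) < 1e-12
```

The check assumes parameter values for which both expressions are positive, i.e. the quadrilateral is convex.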
|
441,374 | <p>Let $K_{\alpha}(z)$ be the <a href="https://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions:_I.CE.B1_.2C_K.CE.B1" rel="nofollow noreferrer">modified Bessel function of the second kind of order $\alpha$</a>.</p>
<p>I need to compute the following integral:</p>
<p>$$\int_0^\infty\;\;K_0\left(\sqrt{a(k^2+b)}\right)dk$$ </p>
<p>where $a>0$ and $b>0$. </p>
<p>I have tried several substitutions and played around a lot in Mathematica, and can't seem to solve this. Perhaps an integral representation of $K_{0}(z)$ would be helpful here.</p>
<p>Even if this can't be done exactly, a sensible approximation strategy would also be useful. </p>
<p>Any advice would be greatly appreciated. Thanks in advance for your time!</p>
| doraemonpaul | 30,938 | <p>$\int_0^\infty K_0\left(\sqrt{a(k^2+b)}\right)dk$</p>
<p>$=\int_0^\infty\int_0^\infty e^{-\sqrt{a(k^2+b)}\cosh t}~dt~dk$</p>
<p>$=\int_0^\infty\int_0^\infty e^{-(\sqrt{a}\cosh t)\sqrt{k^2+b}}~dk~dt$</p>
<p>$=\int_0^\infty\int_0^\infty e^{-(\sqrt{a}\cosh t)\sqrt{(\sqrt{b}\sinh u)^2+b}}~d(\sqrt{b}\sinh u)~dt$</p>
<p>$=\sqrt{b}\int_0^\infty\int_0^\infty e^{-\sqrt{a}\sqrt{b}\cosh t\cosh u}\cosh u~du~dt$</p>
<p>$=\sqrt{b}\int_0^\infty K_1(\sqrt{a}\sqrt{b}\cosh t)~dt$</p>
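The chain of equalities can be checked numerically from the two integral representations used above, $K_0(z)=\int_0^\infty e^{-z\cosh t}\,dt$ and $K_1(z)=\int_0^\infty e^{-z\cosh t}\cosh t\,dt$, without any special-function library. A crude trapezoidal sketch for $a=b=1$ (the grid sizes and cutoffs are arbitrary choices):

```python
import math

def trap(f, lo, hi, n):
    """Composite trapezoidal rule on [lo, hi] with n panels."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

def K0(z):
    # integral representation K_0(z) = int_0^oo exp(-z cosh t) dt
    return trap(lambda t: math.exp(-z * math.cosh(t)), 0.0, 12.0, 600)

def K1(z):
    # integral representation K_1(z) = int_0^oo exp(-z cosh t) cosh t dt
    return trap(lambda t: math.exp(-z * math.cosh(t)) * math.cosh(t), 0.0, 12.0, 600)

a = b = 1.0
lhs = trap(lambda k: K0(math.sqrt(a * (k * k + b))), 0.0, 20.0, 800)
rhs = math.sqrt(b) * trap(lambda t: K1(math.sqrt(a * b) * math.cosh(t)), 0.0, 6.0, 300)
assert abs(lhs - rhs) < 5e-3 * rhs   # first and last lines of the chain agree
```

Both sides come out near $0.578$ here, consistent with the known closed form $\frac{\pi}{2\sqrt a}\,e^{-\sqrt{ab}}$ that this kind of reduction ultimately leads to.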
|
598,635 | <p>Prove the two Identities for
$-1 < r < 1$</p>
<p>$$\sum_{n=0}^{\infty} r^n\cos n\theta =\frac{1-r\cos\theta}{1-2r\cos\theta+r^2}$$</p>
<p>$$\sum_{n=0}^{\infty} r^n\sin{n\theta}=\frac{r \sin\theta }{1-2r\cos\theta+r^2}$$</p>
<p>Sorry could not figure out how to format equations</p>
| Igor Rivin | 109,865 | <p>Hint: $\exp(i x) = \cos(x ) + i \sin(x).$</p>
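Following the hint: $\sum_{n\ge 0} r^n e^{in\theta}$ is a geometric series with ratio $re^{i\theta}$, and taking real and imaginary parts of $\frac{1}{1-re^{i\theta}}$ yields the two closed forms. A quick numeric check (a sketch with truncated partial sums):

```python
import math

def cos_series(r, theta, N=4000):
    # partial sum of sum_{n>=0} r^n cos(n*theta)
    return sum(r**n * math.cos(n * theta) for n in range(N))

def sin_series(r, theta, N=4000):
    # partial sum of sum_{n>=0} r^n sin(n*theta)
    return sum(r**n * math.sin(n * theta) for n in range(N))

for r, theta in [(0.7, 1.3), (-0.5, 0.4), (0.9, 2.0)]:
    denom = 1 - 2 * r * math.cos(theta) + r * r
    assert abs(cos_series(r, theta) - (1 - r * math.cos(theta)) / denom) < 1e-9
    assert abs(sin_series(r, theta) - r * math.sin(theta) / denom) < 1e-9
```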
|
3,271,675 | <p>Let <span class="math-container">$p$</span> be a prime of the form <span class="math-container">$p = a^2 + b^2$</span> with <span class="math-container">$a,b \in \mathbb{Z}$</span> and <span class="math-container">$a$</span> an odd prime. Prove that <span class="math-container">$(a/p) =1$</span></p>
<p>Could anyone give me a hint for the solution please?</p>
| user10354138 | 592,552 | <p><strong>Hint</strong>: Note that <span class="math-container">$p\equiv 1\pmod 4$</span>, so quadratic reciprocity gives <span class="math-container">$\left(\frac{a}p\right)=\left(\frac{p}a\right)$</span>.</p>
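The statement is easy to check numerically. Below is a small brute-force verification (a sketch; Euler's criterion $a^{(p-1)/2}\equiv\left(\frac ap\right)\pmod p$ is used to compute the Legendre symbol):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

checked = 0
for a in range(3, 60, 2):              # odd primes a
    if not is_prime(a):
        continue
    for b in range(1, 60):
        p = a * a + b * b
        if is_prime(p):                # primes of the form a^2 + b^2
            assert legendre(a, p) == 1
            checked += 1
assert checked > 20
```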
|
1,714,902 | <p>(Question edited to shorten and clarify it, see the history for the original)</p>
<p>Suppose we are given two $n\times n$ matrices $A$ and $B$. I am interested in finding the closest matrix to $B$ that can be achieved by multiplying $A$ with orthogonal matrices. To be precise, the problem is</p>
<p>$$\begin{align}
\min_{U,V}\ & \|UAV^T-B\|_F \\
\text{s.t.}\ & U^TU = I \\
& V^TV = I,
\end{align}$$
where $\|\cdot\|_F$ is the Frobenius norm.</p>
<p>Without loss of generality*, we can restrict our attention to <em>diagonal</em> matrices with nonnegative diagonal entries $a_1,a_2,\ldots,a_n$ and $b_1,b_2,\ldots,b_n$. My hypothesis is that in this case the optimal $UAV^T$ is still diagonal, with its entries being the permutation of $a_i$ which minimizes $\sum_i (a_{\pi_i} - b_i)^2$. In other words, $U=V=P$, where $P$ is the permutation matrix corresponding to said permutation $\pi$. This appears to be true based on numerical tests, but I don't know how to prove it. Is there an elegant proof?</p>
<hr>
<p>*For arbitrary $A$ and $B$, take their singular value decompositions $A=U_A\Sigma_AV_A^T$ and $B=U_B\Sigma_BV_B^T$ to obtain
$$\begin{align}
\|UAV^T-B\|_F &= \|UU_A\Sigma_AV_AV^T-U_B\Sigma_BV_B^T\|_F \\
&= \|U'\Sigma_AV'^T-\Sigma_B\|_F,
\end{align}$$
where $U'=U_B^{-1}UU_A$ and $V'=V_B^{-1}VV_A$ are orthogonal. So we can work with $\Sigma_A$ and $\Sigma_B$ instead.</p>
| Community | -1 | <p>Independently of @loup blanc's answer, I found a more elementary partial solution showing that the $2\times2$ case implies the general $n\times n$ case.</p>
<p>The desired proposition is equivalent to the following: For any $n\times n$ matrix $A$ and diagonal $n\times n$ matrix $B$, a matrix $\tilde A=UAV^T$ which minimizes $\|\tilde A-B\|$ is diagonal.</p>
<p>Suppose the hypothesis is true for $2\times2$ matrices. In the $n\times n$ case, suppose $A$ is not diagonal, that is, it has some nonzero off-diagonal entry. We will show that $A$ cannot be optimal.</p>
<p>Without loss of generality, take the nonzero off-diagonal entry to lie in the upper left $2\times 2$ block. Divide $A$ and $B$ into blocks
$$A = \begin{bmatrix}A_{11} & A_{12} \\ A_{21} & A_{22}\end{bmatrix}, \qquad
B = \begin{bmatrix}B_{11} & 0 \\ 0 & B_{22}\end{bmatrix},$$
where the diagonal blocks are of size $2\times2$ and $(n-2)\times(n-2)$ respectively.</p>
<p>By assumption, the $2\times2$ problem with non-diagonal $A_{11}$ and diagonal $B_{11}$ has a solution $\tilde A_{11}=U_{11}A_{11}V_{11}^T$ such that $\tilde A_{11}$ is diagonal and $\|\tilde A_{11}-B_{11}\|^2<\|A_{11}-B_{11}\|^2$.</p>
<p>Consider orthogonal matrices
$$U = \begin{bmatrix}U_{11} & 0 \\ 0 & I\end{bmatrix}, \qquad
V = \begin{bmatrix}V_{11} & 0 \\ 0 & I\end{bmatrix}.$$
Then
$$\begin{align}
\|UAV^T-B\|^2 &= \left\|\begin{bmatrix}U_{11}A_{11}V_{11}^T & U_{11}A_{12} \\ A_{21}V_{11}^T & A_{22}\end{bmatrix} - \begin{bmatrix}B_{11} & 0 \\ 0 & B_{22}\end{bmatrix}\right\|^2 \\
&= \|U_{11}A_{11}V_{11}^T - B_{11}\|^2 + \|U_{11}A_{12}\|^2 + \|A_{21}V_{11}^T\|^2 + \|A_{22}-B_{22}\|^2 \\
&= \|\tilde A_{11} - B_{11}\|^2 + \|A_{12}\|^2 + \|A_{21}\|^2 + \|A_{22}-B_{22}\|^2 \\
&< \|A_{11} - B_{11}\|^2 + \|A_{12}\|^2 + \|A_{21}\|^2 + \|A_{22}-B_{22}\|^2 \\
&= \|A - B\|^2.
\end{align}$$
Thus, $UAV^T$ has a better objective value than $A$, so the latter cannot be optimal.</p>
|
737,915 | <p>I'm reading Calculus: Basic Concepts for High School Students and am trying to digest the definition of 'limit of function'. There are two details that I am struggling to fully accept:</p>
<ol>
<li><p>If you are supposed to pick an interval $(a - \delta, a + \delta)$ but $a$ can be an undefined point at the end of the domain, what happens to the other half of the interval? Is it just ignored/irrelevant?</p></li>
<li><p>The function used as an example is $f(x) = \sqrt{x}$ and from what I can tell the limit of the point $a$ always matches the value of $f(a)$. I cannot see how this would be different for other functions given the way that the limit is calculated - if someone could share an example of a function that has a different limit at $x = a$ than the value $f(a)$ I would be grateful.</p></li>
</ol>
| T_O | 84,702 | <ol>
<li><p>Yes, if your function is defined on $[a,b]$ and you want to compute the limit in $b$, you can only consider what is happening in $[b-\delta, b]$</p></li>
<li><p>Look at the function floor(x), which is defined as the greatest integer less than or equal to x. Here is the graph:</p></li>
</ol>
<p><img src="https://i.stack.imgur.com/Nibz4.png" alt="enter image description here"></p>
<p>The limit at 1 does not properly exist. The right-sided limit at 1 is 1, which agrees with the value of f at 1; however, the left-sided limit at 1 is 0, which is not the value of f at 1.</p>
|
3,121,361 | <p>Given <span class="math-container">$G$</span> has elements in the interval <span class="math-container">$(-c, c)$</span>. Group operation is defined as:
<span class="math-container">$$x\cdot y = \frac{x + y}{1 + \frac{xy}{c^2}}$$</span></p>
<p>How to prove closure property to prove that G is a group?</p>
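One way to see closure (and the rest of the group axioms) is the substitution $x = c\tanh u$: the operation then becomes addition of the $u$'s, because $\tanh(u+v)=\frac{\tanh u+\tanh v}{1+\tanh u\tanh v}$. A numeric sanity check of this (a sketch; $c=3$ and the sample points are arbitrary choices):

```python
import math

c = 3.0

def op(x, y):
    # the given operation on (-c, c)
    return (x + y) / (1 + x * y / (c * c))

vals = [-2.9, -1.5, 0.0, 0.4, 2.2, 2.89]
for x in vals:
    for y in vals:
        z = op(x, y)
        assert -c < z < c                               # closure
        u, v = math.atanh(x / c), math.atanh(y / c)
        assert abs(z - c * math.tanh(u + v)) < 1e-12    # op is tanh-addition
        for w in vals:                                  # associativity sample
            assert abs(op(op(x, y), w) - op(x, op(y, w))) < 1e-9
        assert abs(op(x, 0.0) - x) < 1e-15              # identity is 0
        assert abs(op(x, -x)) < 1e-15                   # inverse of x is -x
```

This is only a spot check, of course; the $\tanh$ substitution is also the cleanest route to a full proof.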
| user90369 | 332,823 | <p><em>Without</em> using <span class="math-container">$\,b_n\,$</span>, you can of course use what the others have written!</p>
<p>There is nothing to add.</p>
<hr>
<p>Given: <span class="math-container">$\,a_n\,$</span> monotone increasing and <span class="math-container">$\,b_n\,$</span> monotone decreasing and <span class="math-container">$\,a_n<b_n\,$</span> for all <span class="math-container">$\,n>0$</span></p>
<p>Proof: <span class="math-container">$~2= a_1 \leq a_n \leq \lim\limits_{n\to\infty} a_n = \lim\limits_{n\to\infty} b_n \leq b_n \leq b_1 = 4$</span></p>
<p>Therefore <span class="math-container">$\{a_n\}$</span> is bounded. That's really the shortest way.</p>
|
2,555,499 | <p>Let $v_1=(1,1)$ and $v_2=(-1,1)$ vectors in $\mathbb{R}^2$. They are <strong>clearly linearly independent</strong> since each is not an scalar multiple of the other. The following information about a linear transformation $f: \mathbb{R}^2 \to \mathbb{R}^2$ is given: $$f(v_1)=10 \cdot v_1 \text{ and } f(v_2)=4 \cdot v_2$$</p>
<ol>
<li><p><strong>Give the transformation matrix $_vF_v$ with respect to ordered basis $\mathcal{B}=(v_1,v_2)$</strong></p></li>
<li><p><strong>Give the transformation matrix $_eF_e$ with respect to the ordered standard basis $e=(e_1,e_2)$ of $\mathbb{R}^2$</strong></p></li>
</ol>
<p>Recall that
$$ \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}^{-1}=\begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix} $$
We need a matrix $_eF_e$ such that:
$$_eF_e\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}=\begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix}=\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 10 & 0 \\ 0 & 4 \end{bmatrix}$$
then
$$_eF_e=\begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}^{-1}
=\begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix}\begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix}=\begin{bmatrix} 7 & 3 \\ 3 & 7 \end{bmatrix}$$
Okay so I'm pretty sure that $$_eF_e=_eF_v \cdot _vF_v \cdot _vF_e$$
And i figured I could find $_eF_e=\begin{bmatrix} ? & ? \\ ? & ? \end{bmatrix}$ in the following equation $$\begin{bmatrix} ? & ? \\ ? & ? \end{bmatrix} \text{ } \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}= \begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix} \\ \Rightarrow
\begin{bmatrix} 10 & -4 \\ 10 & 4 \end{bmatrix} \text{ } \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & \frac{1}{2} \end{bmatrix}= \begin{bmatrix} 7 & 3 \\ 3 & 7 \end{bmatrix} \\ \Rightarrow {}_eF_e=\begin{bmatrix} 7 & 3 \\ 3 & 7 \end{bmatrix} $$</p>
<p>Now, how can I find ${}_v{F}_v$? I got a feeling that I'm making it more difficult than necessary</p>
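The computation above checks out, and ${}_vF_v$ is just the diagonal matrix of eigenvalues: by definition, its columns are the coordinates of $f(v_1)=10v_1$ and $f(v_2)=4v_2$ in the basis $(v_1,v_2)$, which gives $\begin{bmatrix} 10 & 0 \\ 0 & 4 \end{bmatrix}$ directly. A quick numeric confirmation of both matrices (a sketch in plain Python):

```python
def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Fe = [[7, 3], [3, 7]]      # computed matrix w.r.t. the standard basis
P = [[1, -1], [1, 1]]      # columns are v1, v2
Pinv = [[0.5, 0.5], [-0.5, 0.5]]

# f(v1) = 10*v1 and f(v2) = 4*v2 in standard coordinates:
assert matmul(Fe, P) == [[10, -4], [10, 4]]

# change of basis: the matrix of f w.r.t. (v1, v2) is diagonal
Fv = matmul(Pinv, matmul(Fe, P))
assert Fv == [[10.0, 0.0], [0.0, 4.0]]
```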
| wendy.krieger | 78,024 | <p>My trick is to make the large operators × and ÷ and the small operators (mn and m/n) separate. The small operators are applied as multiplication then division.</p>
<p>So $ab/cd=\frac {ab}{cd}$. In essence the small operators are parts of the same 'word'. </p>
<p>The large operators are always applied to the first numerator, </p>
<p>so, $a\times b\div c \times d = \frac{abd}{c}$</p>
<p>This is what closest matches the intent of these operations with least brackets. </p>
|
1,071,564 | <p>Let's a call a directed simple graph $G$ on $n$ labelled vertices <strong>good</strong> if every vertex has outdegree 1 and, when considered as if it were undirected, it is connected. How many good graphs of size $n$ are there?</p>
<p>Here's my work so far. Let's call this number $T(n)$. Clearly, $T(2) = 1$: there's only the loop on two vertices.</p>
<p><img src="https://i.stack.imgur.com/NRt8v.png" alt="T(2)"></p>
<p>We also have that $T(3) = 8$. We can count them using the following argument: let's call a <strong>possible shape</strong> a directed simple graph on $n$ <em>unlabelled</em> vertices which is good. For $n = 3$ we have the following shapes:</p>
<p><img src="https://i.stack.imgur.com/Cbtpp.png" alt="T(3)"></p>
<p>There are $3!$ <em>labelled</em> good graphs of the first shape: fix the outside vertex in 3 possible ways, then fix the loop in two possible ways. There are also $\frac{3!}{3}$ <em>labelled</em> good graphs of the second shape: it's simply the number of cycles on 3 elements. So in total we have: $$T(3) = 3! + \frac{3!}{3} = 8\text{.}$$</p>
<p>We also know that $T(4) = 78$. Let's list all possible shapes:</p>
<p><img src="https://i.stack.imgur.com/SdBDj.png" alt="T(4)"></p>
<p>From top left to bottom right, it's easy to check that we have $4!$ <em>labelled</em> good graphs of the first shape, $2\cdot {4 \choose 2}$ of the second, $2\cdot {4 \choose 2}$ of the third, $4!$ of the fourth and $\frac{4!}{4}$ of the last. In total: $$T(4) = 4! + \left(2\cdot {4 \choose 2} + 2\cdot {4 \choose 2}\right) + 4! + \frac{4!}{4} = 3\cdot 4! + \frac{4!}{4}\text{.}$$</p>
<p><s>I <em>think</em> that $T(5) = 884$, but I won't draw all possible shapes or count their labelings for brevity.</s></p>
<p>I computed $T(5)$ again, and now I get 944. This also invalidates the following conjecture.</p>
<p><strong>CONJECTURE [DISPROVEN]:</strong> I'm <em>conjecturing</em> that there's a simple-ish formula for $T(n)$. It's something like $$T(n) = (2^{n-2} - 1) \cdot n! + \frac{n!}{n} + S(n)$$ where $S(n)$ is some function I currently don't understand such that $S(2) = S(3) = S(4) = 0$, while $S(5) = 5\cdot 4$.</p>
| jschnei | 113,308 | <p>The graphs you are describing are known as simple (directed) pseudotrees; see <a href="http://en.wikipedia.org/wiki/Pseudoforest" rel="nofollow">http://en.wikipedia.org/wiki/Pseudoforest</a>. There doesn't appear to be a 'nice' closed form for these trees. Wikipedia/OEIS gives the number of connected undirected graphs with $n$ vertices and $n$ edges as</p>
<p>$$f(n) = \sum_{k=1}^n \frac{(-1)^{k-1}}{k} \sum_{n_1+\cdots+n_k=n} \frac{n!}{n_1! \cdots n_k!} \binom{\binom{n_1}{2}+\cdots +\binom{n_k}{2}}{n}.$$</p>
<p>(values at <a href="http://oeis.org/A057500" rel="nofollow">http://oeis.org/A057500</a>). </p>
<p>Now, this is not quite what you want, since you want to count the number of such directed graphs. Luckily, these two quantities are easily related. </p>
<p>Note that every pseudotree can be decomposed into a directed cycle and several directed trees attached to points on this cycle. The edges in the directed trees must always point towards the cycle, so their direction is fixed given the undirected graph. On the other hand, any cycle with more than two vertices can be directed in exactly two ways. Therefore, if $g(n)$ is the number of simple labelled directed pseudotrees (the quantity you want) and if $h(n)$ is the number of undirected pseudotrees with a cycle of size 2, then $g(n) = 2f(n) - h(n)$.</p>
<p>We can get an expression (although unfortunately not a closed form one) for $h(n)$ by observing that an undirected pseudotree with a cycle of size 2 is constructed by choosing 2 vertices to belong to the cycle, splitting the remaining vertices into 2 sets (depending on which cycle vertex they attach to), and building a labelled rooted forest on each of those sets. By Cayley's formula, there are $(n+1)^{n-1}$ rooted forests on $n$ labelled nodes, so this gives</p>
<p>$$h(n) = \binom{n}{2}\sum_{k=0}^{n-2}\binom{n-2}{k}(k+1)^{k-1}(n-k-1)^{n-k-3}$$</p>
<p>We can in fact generalize this logic to get a sum similar to that for $f(n)$ directly for $g(n)$; the only difference is that if our cycle is of length $k$, we must partition the remaining vertices into $k$ sets instead of 2. This gives</p>
<p>$$g(n) = \sum_{k=2}^{n}\binom{n}{k}(k-1)!\sum_{n_1+\cdots+n_{k}=n-k}\dfrac{(n-k)!}{n_1!\cdots n_k!}(n_1+1)^{n_1-1}\cdots(n_k+1)^{n_k-1}$$</p>
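As a sanity check on the small values in the question, the counts $T(n)$ can be brute-forced directly from the definition (every vertex gets one out-neighbour other than itself; keep the maps whose underlying undirected graph is connected). A sketch:

```python
from itertools import product

def T(n):
    """Brute-force count of labelled digraphs in which every vertex has
    outdegree 1, there are no self-loops (simplicity), and the underlying
    undirected graph is connected."""
    count = 0
    for f in product(range(n), repeat=n):
        if any(f[v] == v for v in range(n)):
            continue  # self-loop: not a simple digraph
        parent = list(range(n))  # union-find over the underlying edges
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        for v in range(n):
            ra, rb = find(v), find(f[v])
            if ra != rb:
                parent[ra] = rb
        if len({find(v) for v in range(n)}) == 1:
            count += 1
    return count

# matches T(2)=1, T(3)=8, T(4)=78 from the question and the corrected T(5)=944
assert [T(n) for n in (2, 3, 4, 5)] == [1, 8, 78, 944]
```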
|
206,780 | <p>Let $f:X\to Y$ is a measurable function. Banach indicatrix
$$
N(y,f) = \#\{x\in X \mid f(x) = y\}
$$
is the number of the pre-images of $y$ under $f$. If there are infinitely many pre-images then $N(y,f) = \infty$. </p>
<p>Let $X\subset\mathbb R^n$, $Y\subset\mathbb R^m$ with Lebesgue measure.</p>
<p><em>I am interested to know if $N(y,f)$ is a measurable function (?)</em> </p>
<ul>
<li>If $X$ is an interval (say $X=[a,b]$) and $f$ is a continuous function, the answer is some how positive (<a href="https://math.stackexchange.com/q/68635/23566">https://math.stackexchange.com/q/68635/23566</a>).</li>
<li>In Federer's Geometric measure theory we find following theorem </li>
</ul>
<blockquote>
<p>Let $X$ be a separable metric space and let $f(A)$ be $\mu$-measurable for all Borel subsets $A$ of $X$.
Let $\zeta(S) = \mu(f(S))$ for $S\subset X$ and let $\psi$ be the measure on $X$ defined by the Carathéodory construction from $\zeta$. Then
$$
\psi(A) = \int\limits_{A}N(y,f)\, d\mu_{Y}
$$
for every Borel set $A\subset X$.</p>
</blockquote>
<p><em>Does it say anything about measurability of $N(y,f)$ ?</em> </p>
| R W | 8,588 | <p>This is a corollary of Rokhlin's theorem on classification of measurable partitions in Lebesgue spaces from his famous paper "On the fundamental ideas of measure theory". The Russian original is <a href="http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=sm&paperid=5995&option_lang=rus" rel="nofollow">available</a>. The statement you need is (I) at the beginning of p.138. Alas, the way measure theory is still taught nowadays, most mathematical students are nor aware of Rokhlin's theory.</p>
|
2,611,656 | <p>Suppose $x = 1/t$. So now $x$ is a function of $t$, i.e., $x(t)$.</p>
<p>So $$\frac{dx(t)}{dt} = -t^{-2} \Rightarrow dx(t) = -t^{-2}dt$$</p>
<p>This problem is from the textbook: <code>advanced mathematical methods for scientists and engineers</code></p>
<p><a href="https://i.stack.imgur.com/SqKnQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SqKnQ.png" alt="enter image description here"></a></p>
<p>How to go from "$dx = -t^{-2}dt$" to "$\frac{d}{dx} = -t^2\frac{d}{dt}$"? </p>
<p>It seems that I just divide the previous term by $1$ and then multiply it by $d$. </p>
<p>However, it seems unrealistic to me; can anyone please explain this carefully to me? More specifically, can I multiply $d$, which is like an operator to me. thanks!</p>
| David Reed | 444,890 | <p>You use the inverse-function theorem and the chain rule:</p>
<p>Inverse function theorem says $$\frac{dx}{dt} = -t^{-2} \to \frac{dt}{dx} = \frac{1}{\frac{dx}{dt}}= -t^2
$$</p>
<p>The chain rule says : $$\frac{d}{dx} = \frac{dt}{dx}\frac{d}{dt} = -t^2\frac{d}{dt}$$</p>
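The operator identity can also be sanity-checked numerically: for any smooth test function $h$, $\frac{dh}{dx}$ at $x_0$ should equal $-t^2\frac{d}{dt}\,h(1/t)$ at $t_0=1/x_0$. A sketch with central finite differences ($h=\sin$ is an arbitrary choice):

```python
import math

def num_diff(g, s, eps=1e-6):
    """Central finite-difference approximation to g'(s)."""
    return (g(s + eps) - g(s - eps)) / (2 * eps)

h = math.sin             # an arbitrary smooth test function of x
x0 = 0.8
t0 = 1.0 / x0            # the substitution x = 1/t

lhs = num_diff(h, x0)                               # (d/dx) h  at x0
rhs = -t0**2 * num_diff(lambda t: h(1.0 / t), t0)   # -t^2 (d/dt) h(1/t)
assert abs(lhs - rhs) < 1e-6
```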
|
1,010 | <p>For periodic/symmetric tilings, it seems somewhat "obvious" to me that it just comes down to working out the right group of symmetries for each of the relevant shapes/tiles, but it's not clear to me whether that carries over in any nice algebraic way to more complicated objects such as a <a href="http://en.wikipedia.org/wiki/Penrose_tiling" rel="noreferrer">Penrose tiling</a>;
following <a href="http://en.wikipedia.org/wiki/Quasicrystal" rel="noreferrer">wikipedia</a> just leads to the statement that groupoids come into play, but with no references to example constructions! Moreover, at least naively thinking about it, it seems any such algebraic approach should naturally also apply to fractals.</p>
<ol>
<li>what references am I somehow not able to find that do a good job talking about this further?</li>
<li>is my "intuition" correct that the mathematical structures for at least some classes of fractals and quasicrystals are equivalent?</li>
</ol>
| Alex Fink | 30 | <p>For periodic tilings, Bill Thurston and JH Conway would say that it's better to think about the orbifolds of tilings than their symmetry groups: this is the approach to the classification of plane symmetry groups and several other things Conway and Burgiel and Goodman-Strauss take in the beautiful <em>The symmetries of things</em>, which comes out pretty slick, I'd say. </p>
<p>I have no idea if this goes through to aperiodic tilings. </p>
|
737,692 | <p>I'm working on this question:</p>
<blockquote>
<p>Rewrite the following summation using sigma notation and then compute it
using the technique of telescoping summation.
$$\frac{1}{2*5}+\frac{1}{3*6}+\frac{1}{4*7}+...+\frac{1}{(n-2)(n+1)}+\frac{1}{(n-1)(n+2)} $$</p>
</blockquote>
<p>My work:
I replaced the $n$'s with $i$'s and added the sigma to it $$\sum_{i=1}^n \frac{1}{(i-2)(i+1)}+\frac{1}{(i-1)(i+2)} $$
would it be wrong to replace every $i$ with $\frac{n(n+1)}{2}$? And then simplify the resulting equation. </p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>$$\frac1{(i-2)(i+1)}=\frac13\cdot\frac{i+1-(i-2)}{(i-2)(i+1)}=\frac13\left(\frac1{i-2}-\frac1{i+1}\right)$$</p>
<p>Similarly, $$\frac1{(i-1)(i+2)}=\cdots=\frac13\left(\frac1{i-1}-\frac1{i+2}\right)$$</p>
<p>Set $i=1,2,\cdots,n-1,n$ find the surviving terms </p>
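Using the hint, each term telescopes with step 3. Writing the original sum as $\sum_{j=2}^{n-1}\frac{1}{j(j+3)}$ (the terms $\frac1{2\cdot5},\frac1{3\cdot6},\dots,\frac1{(n-1)(n+2)}$ are exactly $j=2,\dots,n-1$), only six fractions survive. An exact check with rationals, as a sketch:

```python
from fractions import Fraction as F

def direct_sum(n):
    # 1/(2*5) + 1/(3*6) + ... + 1/((n-1)(n+2))
    return sum(F(1, j * (j + 3)) for j in range(2, n))

def closed_form(n):
    # telescoping 1/(j(j+3)) = (1/3)(1/j - 1/(j+3)): six terms survive
    return (F(1, 2) + F(1, 3) + F(1, 4) - F(1, n) - F(1, n + 1) - F(1, n + 2)) / 3

for n in (4, 10, 100):
    assert direct_sum(n) == closed_form(n)
```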
|
2,137,706 | <p>$$\sum_{k=0}^m {n \choose k} {n-k \choose m-k} = 2^m {n \choose m}, m<n$$</p>
<p>I know that $2^m$ represents the number of subsets of a set of length $m$, which I can see there being a connection to the ${n \choose k}$ term, but I can't see how the combination it's multiplied by affects this.</p>
| Felix Marin | 85,343 | <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>An analytical approach:</p>
</blockquote>
<p>\begin{align}
\sum_{k = 0}^{m}{n \choose k}{n - k \choose m - k} & =
\bracks{z^{m}}\sum_{\ell = 0}^{\infty}z^{\ell}
\bracks{\sum_{k = 0}^{\ell}{n \choose k}{n - k \choose \ell - k}} =
\bracks{z^{m}}\sum_{k = 0}^{\infty}{n \choose k}
\sum_{\ell = k}^{\infty}{n - k \choose \ell - k}z^{\ell}
\\[5mm] & =
\bracks{z^{m}}\sum_{k = 0}^{\infty}{n \choose k}
\sum_{\ell = 0}^{\infty}{n - k \choose \ell}z^{\ell + k} =
\bracks{z^{m}}\sum_{k = 0}^{\infty}{n \choose k}z^{k}
\sum_{\ell = 0}^{\infty}{n - k \choose \ell}z^{\ell}
\\[5mm] & =
\bracks{z^{m}}\sum_{k = 0}^{\infty}{n \choose k}z^{k}\pars{1 + z}^{n - k} =
\bracks{z^{m}}\bracks{\pars{1 + z}^{n}
\sum_{k = 0}^{\infty}{n \choose k}\pars{z \over 1 + z}^{k}}
\\[5mm] & =
\bracks{z^{m}}\bracks{\pars{1 + z}^{n}\pars{1 + {z \over 1 + z}}^{n}} =
\bracks{z^{m}}\pars{1 + 2z}^{n}
=
\bracks{z^{m}}\sum_{\ell = 0}^{n}{n \choose \ell}\pars{2z}^{\ell}
\\[5mm] & =
\bbx{\ds{{n \choose m}2^{m}}}
\end{align}</p>
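A brute-force check of the identity with exact integer arithmetic (a sketch; `math.comb` requires Python 3.8+):

```python
from math import comb

for n in range(1, 13):
    for m in range(n):  # m < n, as in the question
        lhs = sum(comb(n, k) * comb(n - k, m - k) for k in range(m + 1))
        assert lhs == 2**m * comb(n, m)
```

Combinatorially, $\binom nk\binom{n-k}{m-k}$ counts the ways to mark $k$ elements and then choose $m-k$ more from the rest, i.e. to pick $m$ of $n$ elements with a marked subset of any size $k$; summing over $k$ is the same as first choosing the $m$ elements and then marking an arbitrary subset, giving the $2^m$.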
|
1,310,233 | <p>Fundamental theorem of calculus states that the derivative of the
integral is the original function, meaning that
$$
f(x)=\frac{d}{dx}\int_{a}^{x}f(y)dy.\tag{*}
$$
To motivate the statement of the Lebesgue differentiation theorem, observe
that (*) may be written in terms of symmetric differences as
$$
f(x)=\lim_{r\to 0^+}\frac{1}{2r}\int_{x-r}^{x+r}f(y)dy.\tag{**}
$$
An $n$-dimensional version of (**) is
$$
f(x)=\lim_{r\to 0^+}\frac{1}{|B(x,r)|}\int_{B(x,r)}f(y)dy.\tag{***}
$$</p>
<p>where the integral is with respect $n$-dimensional Lebesgue measure. The Lebesgue differentiation theorem states that (***) holds pointwise $\mu$-a.e. for any locally integrable function $f$.</p>
<p>My question is: how can we derive (**) from (*)? Define $F(x)=\int_{a}^{x}f(y)dy$. Then the quotient
$$
\frac{F(x+r)-F(x)}{r}=\frac{\int_{a}^{x+r}f(y)dy-\int_{a}^{x}f(y)dy}{r}=\frac{1}{r}\int_{x}^{x+r}f(y)dy
$$
How could we say that
$$\frac{1}{r}\int_{x}^{x+r}f(y)dy\overset{?}{=}\frac{1}{2r}\int_{x-r}^{x+r}f(y)dy$$ </p>
| Community | -1 | <p>The last equality as stated is not true. But we don't need that. </p>
<p>In general we have </p>
<p>$$g'(x) = \lim_{r\to 0}\frac{g(x+r) - g(x-r)}{2r}$$</p>
<p>for any function differentiable at $x$ (Can you show that?). Put $g(x) = \int_a^x f(s) \, ds$, by $(*)$, </p>
<p>\begin{equation}
\begin{split}
f(x) &= \frac{d}{dx} \int_a^x f(s) \, ds \\
&= \lim_{r\to 0} \frac{1}{2r} \left(\int_a^{x+r} f(s) \, ds - \int_a^{x-r} f(s)\, ds\right)\\
&=\lim_{r\to 0} \frac{1}{2r} \int_{x-r}^{x+r} f(s) \, ds.
\end{split}
\end{equation}</p>
|
690,465 | <p>So we are learning trigonometry in school and I would like to ask for a little help with these. I would really appreciate it if somebody could explain to me how to solve such equations :)</p>
<ul>
<li><p>$\sin 3x \cdot \cos 3x = \sin 2x$</p></li>
<li><p>$2( 1 + \sin^6 x + \cos^6 x ) - 3(\sin^4 x + \cos^4 x) - \cos x = 0$</p></li>
<li><p>$3 \sin^2 x - 4 \sin x \cdot \cos x + 5 \cos^2 x = 2$</p></li>
<li><p>$\sin^2 x - \sin^4 x + \cos^4 x = 1$</p></li>
</ul>
<p>In our student book they're explained poorly in 2 pages. I tried to find a solution on the web, but still couldn't find similar examples. All we got from our teacher was a paper with a few formulas, and we basically have no idea when to use them. I would show what I've tried, but the problem is that I have no idea how to even start solving such equations.</p>
| Alan | 54,910 | <p>For the second:</p>
<p>$$
2\left(1+\sin^6 x+\cos^6 x\right)-3\left( \sin^4 x + \cos^4 x\right) - \cos x= 0 \\
2\left(1+\left( \sin^2 x + \cos^2 x \right) \left(\sin^4 x - \sin^2 x \cos^2 x + \cos^4 x\right)\right)-3\left( \sin^4 x + \cos^4 x\right) - \cos x = 0 \\
2\left(1+\sin^4 x - \sin^2 x \cos^2 x + \cos^4 x\right)-3\left( \sin^4 x + \cos^4 x\right) - \cos x= 0 \\
2+2\sin^4 x - 2\sin^2 x \cos^2 x + 2\cos^4 x-3\sin^4 x -3\cos^4 x - \cos x= 0 \\
2-\sin^4 x - 2\sin^2 x \cos^2 x - \cos^4 x - \cos x= 0 \\
2-\left(\sin^4 x + 2\sin^2 x \cos^2 x + \cos^4 x \right)- \cos x= 0 \\
2-\left(\sin^2 x + \cos^2 x \right)^2- \cos x= 0 \\
2-1- \cos x= 0 \\
\cos x=1 \\
$$</p>
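The key step in the computation above is that $2(1+\sin^6x+\cos^6x)-3(\sin^4x+\cos^4x)$ is identically $1$, so the equation collapses to $\cos x=1$. A quick numeric check of that identity:

```python
import math

def lhs_without_cos(x):
    # 2(1 + sin^6 x + cos^6 x) - 3(sin^4 x + cos^4 x), which should be 1
    s, c = math.sin(x), math.cos(x)
    return 2 * (1 + s**6 + c**6) - 3 * (s**4 + c**4)

for x in (0.0, 0.3, 1.0, 2.5, -4.0, 12.7):
    assert abs(lhs_without_cos(x) - 1.0) < 1e-12
```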
|
4,485,550 | <p>I'm interested in playing with nonwell-founded variants of set theory and weaker/different axioms of induction/extensionality.</p>
<p>I have a hunch coalgebraic methods could better handle weirdness like modelling homotopy type theory.</p>
<p>I also have just been interested in the idea of coinduction as primitive as opposed to induction.</p>
<p>Weak limits like weak function spaces are also useful in Computer Science for higher order abstract syntax. So set-theory minus induction/extensionality could lead to cleaner encodings of stuff like the lambda calculus.</p>
<p>But I really don't know where to start with nonwell-founded set theory in the first place. Unfortunately nonwell-founded set theory seems a bit esoteric.</p>
| Peter Smith | 35,151 | <p>One version of non-well founded set theory arises when we replace the Axiom of Foundation with the Anti-Foundation Axiom (AFA), explored in an influential book written by Peter Aczel in 1988.</p>
<p>The obvious place to start finding out more about this is the <em>Stanford Encyclopaedia</em> article <a href="https://plato.stanford.edu/entries/nonwellfounded-set-theory/" rel="noreferrer">https://plato.stanford.edu/entries/nonwellfounded-set-theory/</a></p>
|
4,804 | <p>Using the Unanswered tab, I was surprised to find a large number of questions that were already answered <em>in the answer box</em>, correctly, and thoroughly. But if the answer(s) is never upvoted, the question remains in the tab where it does not belong. To make things worse, the Community user occasionally bumps such already-answered questions to the front page, wasting the screen real estate. </p>
<p>By now I've read a number of such answers (to make sure they are indeed correct) and subsequently upvoted: this is by far the easiest way to reduce the number of Unanswered questions on the site (currently 8409). Of course, 30 upvotes per day only go so far, which is why I write this, hoping that more users will find a little time for this kind of clean-up. </p>
<p>Why so many question owners do not upvote correct answers to their questions, I have no idea. </p>
<p>As an aside (but in line with the topic), I'd like to recognize J.M. as the first MSE user to vote 10000 times.</p>
<p><img src="https://i.stack.imgur.com/LLUqk.png" alt="10K votes"></p>
| Community | -1 | <p>Another reason seems to be that the more specialized or advanced a question is, the fewer people will recognize (or be interested in) a correct answer. I got many upvotes and accepted-answer marks on absolutely trivial statements, while sometimes I (quite obviously) put lots of work into answers on special topics for which no one cared later on. I do not complain about this (these are the answers which are fun for me, actually); it's just an observation, and if I could not live with it I'd stop answering questions here.</p>
<p>I do appreciate and support your request to clean up a bit, however.</p>
|
2,405,767 | <p>This is Velleman's exercise 3.6.8.b (<strong>And of course not a duplicate of</strong> <a href="https://math.stackexchange.com/questions/253446/uniqueness-proof-for-forall-a-in-mathcalpu-existsb-in-mathcalpu-f">Uniqueness proof for $\forall A\in\mathcal{P}(U)\ \exists!B\in\mathcal{P}(U)\ \forall C\in\mathcal{P}(U)\ (C\setminus A=C\cap B)$</a>): </p>
<p>Prove that $∀A ∈ \mathscr P(U)∃!B ∈ \mathscr P(U) ∀C ∈ \mathscr P(U) (C ∩ A = C \setminus B)$.</p>
<p><em>$\mathscr P$ is used to denote the power set.</em></p>
<p><em>$∃!B$ means that "there exists a unique set B such that..."</em> </p>
<p>And here's my proof of it:</p>
<p>Proof. </p>
<p>Existence. Let $B = (U\setminus A) ∈ \mathscr P(U)$. Then clearly for all $A ∈ \mathscr P(U)$ and $C ∈ \mathscr P(U)$, $C\setminus B = C\setminus (U\setminus A) = C ∩ A$. </p>
<p>Uniqueness. To see that $B$ is unique, we choose some $B'$ in $\mathscr P(U)$ such that for all $A ∈ \mathscr P(U)$ and $C ∈ \mathscr P(U)$, $C ∩ A = C \setminus B'$. Then in particular, taking $C = U$, we can conclude that $U ∩ A = U \setminus B'$ which is equivalent to $A = U \setminus B'$ and thus $B' = U\setminus A = B$.</p>
<p>Here is my question:</p>
<p>Is my proof valid? Particularly "$C\setminus (U\setminus A) = C ∩ A$" and "$A = U \setminus B'$ and thus $B' = U\setminus A = B$" parts. In other words, do the mentioned parts need more explanation\justification? </p>
<p>Thanks.</p>
| Raffaele | 83,382 | <p>$x=r \cos\phi;\;y=r\sin\phi$</p>
<p>$r=\sqrt{x^2+y^2}$</p>
<p>The equations can be written as</p>
<p>$r\cos\phi+ r(r\sin\phi)=2$ and then</p>
<p>$x+y\sqrt{x^2+y^2}=2$</p>
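To double-check the conversion, one can sample points on the polar curve (for each $\phi$, solve $\sin\phi\,r^2+\cos\phi\,r-2=0$ for $r>0$) and verify they satisfy the Cartesian equation. A sketch:

```python
import math

def polar_residual(r, phi):
    # the polar equation:  r*cos(phi) + r*(r*sin(phi)) = 2
    return r * math.cos(phi) + r * r * math.sin(phi) - 2

def cartesian_residual(x, y):
    # the Cartesian equation:  x + y*sqrt(x^2 + y^2) = 2
    return x + y * math.hypot(x, y) - 2

for phi in (0.3, 0.8, 1.2, 2.0):
    s, c = math.sin(phi), math.cos(phi)
    r = (-c + math.sqrt(c * c + 8 * s)) / (2 * s)  # positive root in r
    assert abs(polar_residual(r, phi)) < 1e-12
    x, y = r * c, r * s
    assert abs(cartesian_residual(x, y)) < 1e-9
```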
|
97,946 | <p>I want to prove the following:</p>
<p>Let $G$ be a finite abelian $p$-group that is not cyclic.
Let $L \ne \{1\}$ be a subgroup of $G$ and let $U$ be a maximal subgroup of $L$; then there exists a maximal subgroup $M$ of $G$ such that $U \leq M$ and $L \nleq M$.</p>
<p>Proof.
If $L=G$ then we are done. Suppose $L \ne G$. Let $|G|=p^{n}$; then $|L|=p^{n-i}$ and $|U|=p^{n-i-1}$ for some $0< i < n$. There is $x_{1} \in G$ such that $x_{1} \notin L$. Thus $|U\langle x_{1}\rangle|=p^{n-i}$ and $U\langle x_{1}\rangle$ does not contain $L$. There is $x_{2} \in G$ such that $x_{2} \notin L$ and $x_{2} \notin U\langle x_{1}\rangle$. Thus $|U\langle x_{1}\rangle\langle x_{2}\rangle|=p^{n-i+1}$. Continuing like this, we get that $U\langle x_{1}\rangle \langle x_{2}\rangle\cdots \langle x_{i}\rangle$, of order $p^{n-1}$, is a maximal subgroup of $G$. The problem is, I am not sure that this subgroup does not contain $L$.</p>
<p>Thanks in advance.</p>
| Geoff Robinson | 14,450 | <p>This is false in general. Consider the case (which you can reduce to by isomorphism theorems) that $U =1$ and $L$ then necessarily has order $p.$ You are asking for the existence of a complement to $L$ in $G$, for you would have $G = L \times M$ if there were such a maximal subgroup $M.$ There is such a subgroup $M$ if and only if $L$ is not contained in $\Phi(G),$ the Frattini subgroup of $G.$ More precisely, in general you are guaranteed to find the maximal subgroup you want if and only if $L/U$ is not contained in the Frattini subgroup of $G/U.$ Given a subgroup $U$, unless $G/U$ is elementary Abelian (i.e. an Abelian $p$-group of exponent $p$), you can always find a subgroup $L$ containing $U$ with $[L:U] = p$ such that every maximal subgroup of $G$ containing $U$ will also contain $L.$</p>
|
2,725,455 | <p>Probably this is pretty simple (or even trivial), but I'm stuck.</p>
<p>If $H\leq G$ is a subgroup, does it follow that $hH=Hh$ if $h\in H$? I can't prove it or find a counter-example. If anyone could help me, I'd be grateful!</p>
| GNUSupporter 8964民主女神 地下教會 | 290,189 | <p>$$hH=\{hh' \mid h' \in H\}, \color{red}{H}h = \{h'h \mid h' \in H\}$$</p>
<p>Note that $hh' = \color{red}{hh'h^{-1}}\cdot h$, so $hH \subseteq Hh$. A similar trick shows the reverse inclusion.</p>
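<p>The conjugation trick above can be checked mechanically on a small concrete group. The sketch below is my own (not part of the answer); it verifies $hH=Hh=H$ for every $h$ in the cyclic subgroup of $S_3$ generated by a 3-cycle:</p>

```python
def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]: apply q first, then p; permutations as tuples
    return tuple(p[q[i]] for i in range(len(q)))

# H: the cyclic subgroup of S_3 generated by the 3-cycle (0 1 2)
H = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}

for h in H:
    left = {compose(h, k) for k in H}    # hH
    right = {compose(k, h) for k in H}   # Hh
    assert left == right == H            # hH = Hh = H whenever h ∈ H
```

<p>Both cosets collapse to $H$ itself, since a subgroup is closed under multiplication by its own elements.</p>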
|
2,725,455 | <p>Probably this is pretty simple (or even trivial), but I'm stuck.</p>
<p>If $H\leq G$ is a subgroup, does it follow that $hH=Hh$ if $h\in H$? I can't prove it or find a counter-example. If anyone could help me, I'd be grateful!</p>
| Joaquim Nabuco | 549,679 | <p>They are always equal to $H$, so it always holds.</p>
|
4,425,234 | <p>Let <span class="math-container">$f:[1,\infty)\rightarrow [1,\infty)$</span> be a function such that for every <span class="math-container">$x\in [1,\infty)$</span>, <span class="math-container">$f(f(x))=2x^{2}-3x+2$</span>. I am required to show that <span class="math-container">$f$</span> is bijective and also to study the injectivity of the function <span class="math-container">$g:[1,\infty)\rightarrow \mathbb{R}$</span>, <span class="math-container">$g(x)=x^{2}+(f(x)-4)x-2f(x)+7$</span>, for every <span class="math-container">$x\in\mathbb{R}$</span>.</p>
<p>For the first task I selected <span class="math-container">$x,y \in [1,\infty)$</span> such that <span class="math-container">$x\neq y$</span>. Then, <span class="math-container">$f(f(x))=f(f(y))$</span> iff <span class="math-container">$2x^{2}-3x=2y^{2}-3y$</span>, i.e. <span class="math-container">$(x-y)\bigl(2(x+y)-3\bigr)=0$</span>; since <span class="math-container">$x,y \geq 1$</span> we have <span class="math-container">$2(x+y)-3>0$</span>, so <span class="math-container">$x=y$</span>, contradicting <span class="math-container">$x \neq y$</span>. Thus <span class="math-container">$f\circ f$</span> is injective, and hence so is <span class="math-container">$f$</span>.</p>
<p>For every <span class="math-container">$x \in [1,\infty)$</span>, we want to show that there is a <span class="math-container">$z$</span> in <span class="math-container">$[1,\infty)$</span> such that <span class="math-container">$z=2x^{2}-3x+2$</span>; because <span class="math-container">$2x^2-3x+2=2x(x-1)-(x-1)+1=(2x-1)(x-1)+1$</span> and <span class="math-container">$x\geq 1$</span>, then <span class="math-container">$2x\geq 1$</span> and <span class="math-container">$z \geq 1$</span>, so there exists <span class="math-container">$z \in [1,\infty)$</span> such that <span class="math-container">$z=2x^{2}-3x+2$</span>. Thus the function is surjective.</p>
<p>I am quite clueless on how to study the injectivity of the other function, not knowing who <span class="math-container">$f$</span> is and what properties does it have.</p>
| FlipTack | 396,317 | <p>I would say Euler's method is actually the most intuitive of the numerical methods!</p>
<p>We have a function <span class="math-container">$f$</span> that tells us the derivative of a solution to the ODE passing through our current point <span class="math-container">$(t_n, y_n)$</span>. What is the derivative? In a small enough interval around <span class="math-container">$y_n$</span>, it's approximately the "rise over run", so if <span class="math-container">$h = t_{n+1} - t_n$</span> is 'small' then we can use this approximation:</p>
<p><span class="math-container">$$ \frac{y(t_{n+1}) - y_n}{t_{n+1} - t_n} \simeq f(t_n, y_n)$$</span></p>
<p><span class="math-container">$$ \implies y(t_{n+1}) \simeq y_n + h \cdot f(t_n, y_n)$$</span></p>
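<p>The update rule above is only a few lines of code. Here is a minimal sketch of mine (not part of the answer); the test problem $y'=y$, $y(0)=1$, whose exact solution is $e^t$, is an illustrative choice:</p>

```python
def euler(f, t0, y0, h, steps):
    # y_{n+1} = y_n + h * f(t_n, y_n)
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# approximate y(1) = e for y' = y, y(0) = 1
approx = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
```

<p>With $h=0.001$ over $[0,1]$ this approximates $e\approx 2.71828$ to about two decimal places.</p>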
|
2,156,331 | <p>Consider the discrete topology $\tau$ on $X:= \{ a,b,c,d,e \}$. Find a subbasis for $\tau$ which does not contain any singleton sets.</p>
<p>The definition of subbasis is as follows: </p>
<blockquote>
<p><strong>Definition:</strong> A <em>subbasis</em> $S$ for a topology on $X$ is a collection of subsets of $X$ whose union is $X$.</p>
</blockquote>
<p>So let $S$ be equal to the collection of $\{a,b\}$, $\{c,d\}$ and $\{d,e\}$. </p>
<p>Clearly the union of these three elements is $X$. </p>
<p>So should $S$, as defined, be taken as a subbasis? Please check the answer I posted in a comment.</p>
| Ibrahim | 696,130 | <p>The correct subbasis should be <span class="math-container">$\{\{a,b\},\{b,c\},\{c,d\},\{d,e\},\{a,e\}\}$</span>: its pairwise intersections recover every singleton.</p>
|
499,171 | <p>Let $\{x_n\}$ be "any" sequence containing all rationals. I have to prove that every real number is the limit of some subsequence. I know that the rationals are dense in the reals. But isn't the order of the rationals in the sequence creating a problem here? How do I pick rationals from this sequence? </p>
| Daniel Montealegre | 24,005 | <p>Pick any real number you want, call it $r$. Then pick the first rational in your sequence that has difference with $r$ of less than $1$; say your number was $x_{n_1}$. Then pick the next rational that appears after $n_1$ in your sequence that has difference with $r$ of less than $1/2$. This can be done since you are just ignoring the first $n_1$ numbers of your sequence, and there are infinitely many rationals that are within $1/2$ units of $r$. Call this number $x_{n_2}$. Can you continue the process?</p>
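<p>The greedy construction can be sketched in code. Everything below is my own illustration, not from the answer: the particular enumeration of the rationals, the halving tolerances (the answer's $1, 1/2, \dots$ could equally be $1/n$), and all names are choices of mine.</p>

```python
import math
from fractions import Fraction

def rationals():
    # one concrete sequence containing every rational (with repetitions):
    # for each height h, emit all ±p/q with p + q == h
    h = 1
    while True:
        for q in range(1, h + 1):
            p = h - q
            yield Fraction(p, q)
            if p:
                yield Fraction(-p, q)
        h += 1

def converging_subsequence(r, terms):
    picks, tol = [], 1.0
    for x in rationals():            # indices automatically increase
        if abs(float(x) - r) < tol:
            picks.append(x)
            tol /= 2                 # next pick must be twice as close
            if len(picks) == terms:
                return picks

approx = converging_subsequence(math.sqrt(2), 10)   # a subsequence tending to √2
```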
|
297,036 | <p>If $f'(x) = \sin{\dfrac{\pi e^x}{2}}$ and $f(0)= 1$, then what will be $f(2)$?</p>
<p>This is what I tried to find the antiderivative of $f'(x)$ with u-substitution, </p>
<p>$$
\begin{align}
u &=\frac{\pi e^x}{2} \\
\frac{2}{\pi}du &=e^x dx
\end{align}
$$</p>
<p>I don't know what to do next.</p>
| ILoveMath | 42,344 | <p>Some thoughts: Apply mean value theorem to $f$ on the interval $[0,2]$ to obtain:</p>
<p>$$ \frac{f(2) - f(0)}{2} = \sin (\frac{\pi e^c}{2}) $$</p>
<p>for some $c$ in the interval. Then, we have that $f(2) = 2 \sin (\frac{\pi e^c}{2}) + 1$</p>
<p><img src="https://i.stack.imgur.com/iaMYy.png" alt="enter image description here"></p>
|
297,036 | <p>If $f'(x) = \sin{\dfrac{\pi e^x}{2}}$ and $f(0)= 1$, then what will be $f(2)$?</p>
<p>This is what I tried to find the antiderivative of $f'(x)$ with u-substitution, </p>
<p>$$
\begin{align}
u &=\frac{\pi e^x}{2} \\
\frac{2}{\pi}du &=e^x dx
\end{align}
$$</p>
<p>I don't know what to do next.</p>
| Mhenni Benghorbal | 35,472 | <p>Note that</p>
<p>$$ {f'(x) = \sin{\frac{\pi e^x}{2}}}\implies \int_{0}^{x}f'(t)dt = \int_{0}^{x}\sin\left(\frac{\pi e^t}{2}\right)dt $$</p>
<p>$$\implies f(x)=f(0)+\int_{0}^{x}\sin\left(\frac{\pi e^t}{2} \right)dt $$</p>
<p>$$ \implies f(2)=1+\int_{0}^{2}\sin\left(\frac{\pi e^t}{2} \right)dt \longrightarrow (*)$$</p>
<p>$$=1-\operatorname{Si} \left( \frac{\pi}{2} \right) +\operatorname{Si} \left( \frac{e^{2}\pi}{2} \right) \approx 1.157117528,$$</p>
<p>where $Si$ is the <a href="http://en.wikipedia.org/wiki/Trigonometric_integral" rel="nofollow">sine integral</a>.</p>
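<p>The closed form can be sanity-checked numerically. Below is a quick check I added (plain Python with composite Simpson's rule; not part of the original answer) that $1+\int_0^2 \sin(\pi e^t/2)\,dt$ indeed comes out near $1.1571$:</p>

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule on [a, b]; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

f2 = 1 + simpson(lambda t: math.sin(math.pi * math.exp(t) / 2), 0, 2)
```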
|
1,256,460 | <p>I want to solve the following problem: </p>
<p>$$u_{xx}(x,y)+u_{yy}(x,y)=0, 0<x<\pi, y>0 \\ u(0,y)=u(\pi, y)=0, y>0 \\ u(x,0)=\sin x +\sin^3 x, 0<x<\pi$$ </p>
<p>$u$ bounded </p>
<p>I have done the following: </p>
<p>$$u(x,y)=X(x)Y(y)$$ </p>
<p>We get the following two problems: </p>
<p>$$X''(x)+\lambda X(x)=0 \ \ \ \ \ (1) \ X(0)=X(\pi)=0$$ </p>
<p>$$Y''(y)-\lambda Y(y)=0 \ \ \ \ \ (2)$$ </p>
<p>To solve the problem $(1)$ we do the following: </p>
<p>The characteristic equation is $\mu^2 +\lambda =0$. </p>
<ul>
<li><p>$\lambda <0$: $\mu=\pm \sqrt{-\lambda}$</p>
<p>$X(x)=c_1e^{\sqrt{-\lambda}x}+c_2e^{-\sqrt{-\lambda}x}$ </p>
<p>$X(0)=0 \Rightarrow c_1+c_2=0 \Rightarrow c_1=-c_2$ </p>
<p>$X(\pi)=0 \Rightarrow c_1e^{\sqrt{-\lambda}\pi}+c_2 e^{-\sqrt{-\lambda}\pi}=0 \Rightarrow c_2(-e^{\sqrt{-\lambda}\pi}+e^{-\sqrt{-\lambda}\pi})=0 \Rightarrow c_1=c_2=0$ </p></li>
<li><p>$\lambda=0$: </p>
<p>$X(x)=c_1 x+c_2$ </p>
<p>$X(0)=0 \Rightarrow c_2=0 \Rightarrow X(x)=c_1x$ </p>
<p>$X(\pi)=0 \Rightarrow c_1 \pi=0 \Rightarrow c_1=0$ </p></li>
<li><p>$\lambda >0$ : </p>
<p>$X(x)=c_1 \cos (\sqrt{\lambda} x)+c_2 \sin (\sqrt{\lambda}x)$ </p>
<p>$X(0)=0 \Rightarrow c_1=0 \Rightarrow X(x)=c_2 \sin (\sqrt{\lambda}x)$ </p>
<p>$X(\pi)=0 \Rightarrow \sin (\sqrt{\lambda}\pi)=0 \Rightarrow \sqrt{\lambda}\pi=k\pi \Rightarrow \lambda=k^2$ </p></li>
</ul>
<p>For the problem $(2)$ we have the following: </p>
<p>$Y(y)=c_1 e^{ky}+c_2 e^{-ky}$ </p>
<p>The general solution is the following: </p>
<p>$$u(x,y)=\sum_{k=1}^{\infty}a_k( e^{ky}+ e^{-ky}) \sin (kx) $$ </p>
<p>$$u(x,0)=\sin x+\sin^3 x=\sin x+\frac{3}{4}\sin x-\frac{1}{4}\sin (3x)=\frac{7}{4}\sin x-\frac{1}{4}\sin (3x) \\ \Rightarrow \frac{7}{4}\sin x-\frac{1}{4}\sin (3x)=\sum_{k=1}^{\infty}2a_k\sin (kx) \\ \Rightarrow 2a_1=\frac{7}{4} \Rightarrow a_1=\frac{7}{8},\quad 2a_3=-\frac{1}{4} \Rightarrow a_3=-\frac{1}{8},\quad a_k=0 \text{ for } k=2,4,5,6,7, 8, \dots $$ </p>
<p>Is this correct?? </p>
| MaxV | 464,821 | <p>First of all, that formula is wrong, because after a few minutes a coffee is always cold, and here after 30 minutes the cup of coffee is still at <strong>61°C</strong>!!! </p>
<p><strong>I mean this is a killer coffee!!!</strong> <em>(normal people over 49°C get permanent injuries!)</em></p>
<p>However, here we have 2 solutions very close: <em>(use <a href="http://asciimath.org/" rel="nofollow noreferrer">http://asciimath.org/</a> to see formulas)</em></p>
<p>math average:<br>
<code>T_0 = 95°C
T_30 = 61.16°C
T_m_av = (95+61.16) / 2 = 78.08 °C</code> </p>
<p>Integral average:</p>
<pre><code>T_i_av = 1/30 int_0^30 (20 + 75e^(-t/50)) dt = 76.40 °C
</code></pre>
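<p>Both averages can be recomputed directly. This is my own check (assuming the cooling model $T(t)=20+75e^{-t/50}$ used in the answer, and evaluating the integral with its exact antiderivative):</p>

```python
import math

def T(t):
    return 20 + 75 * math.exp(-t / 50)

arith_avg = (T(0) + T(30)) / 2                                  # (95 + 61.16) / 2
integral_avg = 20 + 75 * (50 / 30) * (1 - math.exp(-30 / 50))   # (1/30)∫₀³⁰ T dt

assert abs(arith_avg - 78.08) < 0.01
assert abs(integral_avg - 76.40) < 0.01
```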
|
2,414,472 | <blockquote>
<p>Let $(a_n)_{n\geq2}$ be a sequence defined as
$$
a_2=1,\qquad a_{n+1}=\frac{n^2-1}{n^2}a_n.
$$
Show that
$$
a_n=\frac{n}{2(n-1)},\quad\forall n\geq2
$$
and determine $\lim_{n\rightarrow+\infty}a_n$.</p>
</blockquote>
<p>I cannot show that $a_n$ is $\frac{1}{2}\frac{n}{n-1}$. Any help? </p>
<p>Thank You</p>
| SC30 | 352,208 | <p>$a_{n+1}=(1-\frac{1}{n^2})a_n=(1-\frac{1}{n^2})(1-\frac{1}{(n-1)^2}) a_{n-1}=\cdots=\prod_{k=2}^n (1-\frac{1}{k^2})$, where the last step follows from the fact that $a_2=1$. Now $\frac{k^2-1}{k^2}=\frac{(k-1)(k+1)}{k^2}$, so writing down the first few terms you'll convince yourself that everything cancels except $\frac{1}{2}$ at the beginning and $\frac{n+1}{n}$ at the end, obtaining the desired result.</p>
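<p>The telescoping can be sanity-checked with exact rational arithmetic. This sketch is mine, not part of the answer:</p>

```python
from fractions import Fraction

def a(n):
    # the recurrence: a_2 = 1, a_{m+1} = (m^2 - 1)/m^2 * a_m
    val = Fraction(1)
    for m in range(2, n):
        val *= Fraction(m * m - 1, m * m)
    return val

# the closed form a_n = n / (2(n-1)) holds exactly
assert all(a(n) == Fraction(n, 2 * (n - 1)) for n in range(2, 100))
```

<p>In particular $a_n \to \frac12$, which is the limit asked for.</p>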
|
2,414,472 | <blockquote>
<p>Let $(a_n)_{n\geq2}$ be a sequence defined as
$$
a_2=1,\qquad a_{n+1}=\frac{n^2-1}{n^2}a_n.
$$
Show that
$$
a_n=\frac{n}{2(n-1)},\quad\forall n\geq2
$$
and determine $\lim_{n\rightarrow+\infty}a_n$.</p>
</blockquote>
<p>I cannot show that $a_n$ is $\frac{1}{2}\frac{n}{n-1}$. Some helps? </p>
<p>Thank You</p>
| farruhota | 425,072 | <p>Hint: You can substitute the common term formula into the difference equation to verify it.</p>
|
2,569,267 | <p><a href="https://gowers.wordpress.com/2011/10/16/permutations/" rel="nofollow noreferrer">This</a> article claims:</p>
<blockquote>
<p>we simply replace the number 1 by 2, the number 2 by 4, and the number 4 by 1</p>
<p>....I start with the numbers arranged as follows: 1 2 3 4 5 6. After doing the permutation (124) the numbers are arranged as 2 4 3 1 5 6.</p>
</blockquote>
<p>I always thought <span class="math-container">$(124)$</span> was read left to right as "1 goes to 2, 2 goes to 4, and 4 goes to 1" and therefore the outcome should be 4, 1, 3, 2, 5, 6.</p>
<p>According to my understanding, the article did the permutation reading from right to left. Is the blog following a convention of reading right to left, or do I just have it wrong?</p>
| xxxxxxxxx | 252,194 | <p>You seem to be reading it as saying "1 goes to position 2", but the convention is that it should be read as "1 gets replaced by 2", or "object 1 becomes object 2". It helps to view these permutations in an alternate form (where for each number, we write the image underneath).
$$\begin{pmatrix}
1 & 2 & 3 & 4 & 5 & 6\\
2 & 4 & 3 & 1 & 5 & 6
\end{pmatrix}$$
If we think of having the numbers in front of us as physical objects, laid out in order, after applying the permutation to these objects $2 \ 4 \ 3 \ 1 \ 5 \ 6$ is what we will see in front of us.</p>
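<p>The two readings can be made concrete in code. This sketch is mine (not from the answer); it implements both conventions and reproduces the two arrangements discussed above:</p>

```python
def as_replacement(arrangement, cycle):
    # blog convention: each object is replaced by its image (1 -> 2 -> 4 -> 1)
    image = {cycle[i]: cycle[(i + 1) % len(cycle)] for i in range(len(cycle))}
    return [image.get(x, x) for x in arrangement]

def as_positions(arrangement, cycle):
    # the "1 goes to position 2" reading from the question
    result = list(arrangement)
    for i in range(len(cycle)):
        src, dst = cycle[i], cycle[(i + 1) % len(cycle)]
        result[dst - 1] = arrangement[src - 1]
    return result

start = [1, 2, 3, 4, 5, 6]
assert as_replacement(start, (1, 2, 4)) == [2, 4, 3, 1, 5, 6]  # the blog's result
assert as_positions(start, (1, 2, 4)) == [4, 1, 3, 2, 5, 6]    # the asker's expectation
```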
|
499,652 | <p>I saw this a lot in physics textbooks, but today I am curious about it and want to know if anyone can show me a formal mathematical proof of this statement? Thanks!</p>
| Eric Auld | 76,333 | <p>The one-sentence answer is that the Taylor series for tangent at zero is $x + O(x^3)$. So it is actually quite a good approximation.</p>
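<p>Numerically, the error of the small-angle approximation is indeed cubic in $x$, consistent with $\tan x = x + O(x^3)$. A quick check of mine:</p>

```python
import math

# |tan(x) - x| is about x^3/3 for small x, so it is bounded by x^3
for x in (0.1, 0.05, 0.01):
    assert abs(math.tan(x) - x) < x ** 3
```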
|
2,761,151 | <p>In the formula below, where does the $\frac{4}{3}$ come from and what happened to the $3$? How did they get the far right answer? Taken from Stewart Early Transcendentals Calculus textbook.</p>
<p>$$\sum^\infty_{n=1} 2^{2n}3^{1-n}=\sum^\infty_{n=1}(2^2)^{n}3^{-(n-1)}=\sum^\infty_{n=1}\frac{4^n}{3^{n-1}}=\sum_{n=1}^\infty4\left(\frac{4}{3}\right)^{n-1}$$</p>
| MPW | 113,214 | <p>$$\frac{4^n}{3^{n-1}}=\frac{4^{1+(n-1)}}{3^{n-1}}$$</p>
<p>$$=\frac{4^1\cdot 4^{n-1}}{3^{n-1}}$$</p>
<p>$$=4\cdot \frac{4^{n-1}}{3^{n-1}}$$</p>
<p>$$=4\cdot\big(\frac{4}{3}\big)^{n-1}$$</p>
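<p>The chain of identities can be verified exactly with rational arithmetic (my check, not from the book or the answer). Note that the resulting common ratio is $4/3 > 1$, which is why this geometric series diverges:</p>

```python
from fractions import Fraction

for n in range(1, 40):
    original = Fraction(2 ** (2 * n), 3 ** (n - 1))   # 2^{2n} 3^{1-n} = 4^n / 3^{n-1}
    rewritten = 4 * Fraction(4, 3) ** (n - 1)         # 4 * (4/3)^{n-1}
    assert original == rewritten
```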
|
281,288 | <p><img src="https://i.imgur.com/0C2Jl.jpg" alt="curved line graph"></p>
<p>In this curved line graph, I need to be able to make a formula that can tell me the interpolated value at any point on the curved path given one Data input.</p>
<p>So for example if I wanted to know what value the line was at exactly half way between Point 2 and Point 3, I can eyeball it and tell it would be somewhere around 3.0 for a value, and to get it more exact I could use a ruler and some math. But that is the long way of finding 1 point on the curve at a time. Is there a generic formula that can take any arbitrary curved path with a set of points (a curved path that has no real pattern except that it's a collection of splines with known points to interpolate between) and turn it into a mathematical formula that you could input a Data 1 and it spits out the Data 2 value of the curve where the Data points meet, or vice versa?</p>
<p>For example, <br>
Input Data 1 to math formula = Point 2.5<br>
Data 2 = [Computed by math formula] 3.0</p>
<p>or</p>
<p>Input Data 2 to math formula = 3.0<br>
Data 1 = [Computed by math formula] Point 2.5</p>
<p>Just need the method to develop the math formula!</p>
| bubba | 31,744 | <p>If you just have four data points (or some smallish number, anyway), then the easiest approach is probably something called "Lagrange interpolation". </p>
<p>The Wikipedia page has a bunch of formulas that might be hard to understand, but the examples should be pretty clear:</p>
<p><a href="http://en.wikipedia.org/wiki/Lagrange_polynomial" rel="nofollow">http://en.wikipedia.org/wiki/Lagrange_polynomial</a> </p>
<p>For four given data points, you will get a cubic (degree three) equation of the form $y = ax^3 + bx^2 + cx + d$, which you can then use to get intermediate values anywhere.</p>
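<p>A minimal implementation sketch of mine (the names and the four sample points are illustrative stand-ins for the question's graph, not actual data). Four points determine a unique cubic, which can then be evaluated anywhere in between:</p>

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for xj, _ in points[:i] + points[i + 1:]:
            term *= (x - xj) / (xi - xj)   # the i-th Lagrange basis factor
        total += term
    return total

# hypothetical (Point, Data) samples loosely matching the sketch in the question
pts = [(1, 1.0), (2, 2.5), (3, 3.5), (4, 3.0)]
midpoint_value = lagrange(pts, 2.5)   # interpolated value between points 2 and 3
```

<p>With these made-up samples the midpoint comes out to $3.125$, close to the "around 3.0" eyeballed in the question.</p>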
|
2,878,448 | <p>What could be a possible approach to find the proof of:</p>
<blockquote>
<p>$\binom{2k+1}{k}$ is odd when $k=2^m-1$, otherwise $\binom{2k+1}{k}$ is even.</p>
</blockquote>
<p>I have seen some similar problems in <a href="https://math.stackexchange.com/questions/317163/prove-if-n-2k-1-then-binomni-is-odd-for-0-leq-i-leq-n">https://math.stackexchange.com/questions/317163</a> and <a href="https://math.stackexchange.com/questions/2046338/2k-1-choose-a-is-odd?noredirect=1&lq=1">https://math.stackexchange.com/questions/2046338</a>, but I still don't know that why$\binom{2k+1}{k}$ is even when $k \neq 2^m-1$. </p>
<p>Any answer will be appreciated. Thanks!</p>
| Batominovski | 72,152 | <p>Let
$$k=2^{r_1}+2^{r_2}+\ldots+2^{r_n}$$
where $r_1,r_2,\ldots,r_n$ are nonnegative integers such that $r_1<r_2<\ldots<r_n$. Thus,
$$2k+1=2^0+2^{r_1+1}+2^{r_2+1}+\ldots+2^{r_n+1}\,.$$
If there exists $j\in\{1,2,\ldots,n\}$ such that $r_j\neq r_{j-1}+1$ (here, $r_{0}:=-1$), then the bit corresponding to $2^{r_j}$ in $2k+1$ is $0$, whilst the bit corresponding to $2^{r_j}$ in $k$ is $1$, and $\displaystyle \binom{0}{1}=0$. By <a href="https://en.wikipedia.org/wiki/Lucas%27s_theorem" rel="nofollow noreferrer">Lucas's Theorem</a>, we conclude that $$\binom{2k+1}{k}\equiv 0\pmod{2}\,.$$
On the other hand, suppose that $r_j=r_{j-1}+1$ for $j=1,2,\ldots,n$. Then, $r_j=j-1$ for each $j=1,2,\ldots,n$, making $k=2^n-1$. By Lucas's Theorem, we get that
$$\binom{2k+1}{k}\equiv\binom{1}{0}\cdot\Biggl(\binom{1}{1}\Biggr)^n=1\pmod{2}\,.$$</p>
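<p>The statement is easy to test exhaustively for small $k$; note that $k=2^m-1$ is exactly the condition that $k+1$ is a power of two. My check via Python's <code>math.comb</code> (not part of the answer):</p>

```python
from math import comb

def is_pow2_minus_1(k):
    return (k + 1) & k == 0   # true iff k + 1 is a power of two, i.e. k = 2^m - 1

for k in range(1, 300):
    assert (comb(2 * k + 1, k) % 2 == 1) == is_pow2_minus_1(k)
```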
|
2,031,699 | <p>Let $A,B$ be open subsets of $\mathbb{R}^n$. </p>
<p>Does the following equality hold?</p>
<p>$$\partial(A\cap B)= (\bar A \cap \partial B) \cup (\partial A \cap \bar B)$$</p>
<p>Edit: Thanks for showing me in the answers that above formula fails if $A$ and $B$ are disjoint but their boundaries still intersect. I was able to come up with a similar formula which avoids this case
$$[\partial(A\cap B)]\setminus(\partial A \cap \partial B)= (A \cap \partial B) \cup (\partial A \cap B),$$
which I was able to prove and which suffices for what I need to do.</p>
<p>However, when showing that $ (A \cap \partial B) \cup (\partial A \cap B)\subseteq \partial(A\cap B)$, I needed to assume that the topology is induced by a metric. I wonder if the formula still holds in an arbitrary topological space.</p>
| Eugene Zhang | 215,082 | <p>This is not true generally unless <span class="math-container">$\overline{A\cap B}=\overline{A}\cap \overline{B}$</span>.
<span class="math-container">\begin{align}
\partial (A\cap B)&= \overline{A\cap B}-(A\cap B)^{o}
\\
&=(\overline{A}\cap \overline{B})-(A^{o}\cap B^{o})
\\
&=(\overline{A}\cap \overline{B})\cap(A^{o}\cap B^{o})^c
\\
&=(\overline{A}\cap \overline{B})\cap(A^{o^c}\cup B^{o^c})
\\
&=(\overline{A}\cap \overline{B}\cap A^{o^c})\cup(\overline{A}\cap \overline{B}\cap B^{o^c})
\\
&=(\overline{A}\cap \partial B)\cup (\overline{B}\cap \partial A)
\end{align}</span>
By <a href="https://math.stackexchange.com/questions/356758/when-is-the-closure-of-an-intersection-equal-to-the-intersection-of-closures">this post</a>, <span class="math-container">$\overline{A\cap B}=\overline{A}\cap \overline{B}$</span> implies discrete space. So this is impossible in <span class="math-container">$\Bbb{R}^n$</span>. However, we can prove generally
<span class="math-container">$$
\partial(A\cap B)\subset (\bar A \cap \partial B) \cup (\partial A \cap \bar B)
$$</span>
for it is always true that <span class="math-container">$\overline{A\cap B}\subset \overline{A}\cap \overline{B}$</span>. This can be done easily by replacing "<span class="math-container">$=$</span>" with "<span class="math-container">$\subset $</span>" at 2nd line of above proof and rest follows.</p>
|
902,592 | <p>Consider,</p>
<p>$$ \displaystyle x\frac{\partial u}{\partial x}+\frac{\partial
u}{\partial t} = 0 $$</p>
<p>with initial values $ t = 0 : \ u(x, 0) = f(x) $ and calculate the
solution $ u(x,t) $ of the above Cauchy problem using the method of
characteristics.</p>
<p>And here is the solution, I will point out where i am stuck:</p>
<ol>
<li>We parametrise the initial conditions by $\mathbb n:x_0(\mathbb n)=\mathbb n$, $t_0(\mathbb n)=0, u_0(\mathbb n)=f(\mathbb n)$</li>
<li><p>Solve the characteristic equations</p>
<p>$$
\matrix{
\frac{\partial x(\sigma,\mathbb n)}{\partial \sigma} = x, && x(0,\mathbb n)=n \\
\frac{\partial t(\sigma,\mathbb n)}{\partial \sigma} = 1, && x(0,\mathbb n)=n \\
t(0,\mathbb n)=0: && x(\sigma,\mathbb n)=e^{\sigma}\mathbb n \\
t(\sigma,\mathbb n)=\sigma
}
$$
<strong>This is where I am stuck and confused.</strong></p>
<p>How did they get $x(\sigma,\mathbb n)=e^{\sigma}\mathbb n$? I just cannot see where the $e$ came from; please forgive my stupidity, but can someone please tell me how they got this solution? When I integrate I do not get this!</p>
<p><strong>I will put the rest of the solution so the reader can follow, I am only stuck with the part mentioned though</strong></p></li>
<li><p>Calculate $\sigma$ and $\mathbb n$ in terms of $x$ and $t$ (coordinate change)</p>
<p>$$ \sigma = t, \qquad \mathbb n = xe^{-t} $$</p></li>
<li><p>Solve the compatibility condition $ \frac{\partial u}{\partial \sigma} = 0, u(0,\mathbb n) = f(\mathbb n) $.</p>
<p>Hence, $u(\sigma)=f(\mathbb n)$</p></li>
<li>Hence we have $u(x,t)=f(xe^{-t})$</li>
</ol>
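<p>On the point of confusion: $\partial x/\partial\sigma = x$ with $x(0,\mathbb n)=\mathbb n$ is the standard exponential-growth ODE ($dx/x=d\sigma$, so $\ln x=\sigma+\ln\mathbb n$), which is where $x(\sigma,\mathbb n)=e^{\sigma}\mathbb n$ and hence the $e$ comes from. The final formula is also easy to verify numerically: for any differentiable $f$, $u(x,t)=f(xe^{-t})$ satisfies $xu_x+u_t=0$, since $u_x=f'(xe^{-t})e^{-t}$ and $u_t=-xf'(xe^{-t})e^{-t}$. A finite-difference spot check of mine (the choice of $f$ is arbitrary, not from the text):</p>

```python
import math

def f(s):
    return math.sin(s) + s ** 2      # any smooth profile works here

def u(x, t):
    return f(x * math.exp(-t))       # the solution from step 5

h = 1e-6
for x, t in [(0.5, 0.3), (1.7, 1.1), (-2.0, 0.8)]:
    ux = (u(x + h, t) - u(x - h, t)) / (2 * h)   # central differences
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
    assert abs(x * ux + ut) < 1e-5               # residual of x*u_x + u_t
```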
| atomteori | 156,639 | <p>If you'll abide the strangeness of it, group theory (Lie theory) provides an answer. Use this stretching group:
$$
G(x,t,u)=(\lambda x,\lambda^\beta t, \lambda^\alpha u),\qquad \lambda_o=1
$$$\lambda_o=1$ is the unit transformation without which no group is complete. Plug these transformed variables into the equation and you get
$$
\lambda x \frac{\lambda^\alpha \partial u}{\lambda \partial x}+\frac{\lambda^\alpha \partial u}{\lambda^\beta \partial t}=0
$$Here you can see, for invariance to this group to exist, $\alpha=\alpha -\beta$. Therefore, $\beta =0$ and $\alpha$ can be any value, so we'll leave it as a variable. Since
$$
\lambda^\alpha u(x,t)=u(\lambda x,t)
$$we can take the partial derivative of both sides w.r.t. $\lambda$ and set $\lambda =\lambda_o =1$ to get
$$
\alpha u=xu_x+0u_t
$$The characteristic equation is
$$
\frac{du}{\alpha u}=\frac{dx}{x}=\frac{dt}{0}
$$Two independent integrals are $\frac{u}{x^\alpha}$ and $t$. These are, in fact, stabilizers of the transformation group, and according to Sophus Lie they form an embedded manifold within your solution manifold. Hence, the most general solution to your PDE is to take one stabilizer and set it equal to a function of the other. This results in
$$
u=x^\alpha f(t)
$$Now we take partial derivatives.
$$
u_x=\alpha x^{\alpha -1}f
$$
$$
u_t=x^\alpha f_t
$$Plugging these back into the PDE, we arrive at
$$
\alpha x^\alpha f+x^\alpha f_t=0
$$The x-terms drop out (as they must, else you are in error) and the equation simplifies to
$$
\frac{f_t}{f}=-\alpha
$$Here comes your exponential term.
$$
\frac{d}{dt}ln(f)=-\alpha
$$
$$
ln(f)=-\alpha t+C\rightarrow f=Ce^{-\alpha t}
$$Since $u=x^\alpha f(t)$, we have arrived at your answer.
$$
u=C(xe^{-t})^\alpha
$$The only reason this works is because an infinite continuous group (in this case an infinite cyclic group) preserves the structure of any smooth manifold to which it is invariant. An entire family of groups where $x'=\lambda x$, $t'=t$ and $u'=\lambda^\alpha u$ is invariant to your PDE, a very high degree of symmetry. </p>
|
1,895,323 | <p>Recently, I had a mock-test of a Mathematics Olympiad. There was a question which not only I but my friends too were not able to solve. The question goes like this: </p>
<p>If,
$$ \frac{1}{a} + \frac{1}{b} + \frac{1}{c} = \frac{1}{a+b+c} $$<br>
Then what is the value of<br>
$$ \frac{1}{a^5} + \frac{1}{b^5} + \frac{1}{c^5} $$ </p>
<p>To solve this question, I tried a variety of ways, like:<br>
1) transposing variables in the first equation and<br>
2) raising both sides of the first equation to the fifth power. But I was unable to find the solution. </p>
<p>The options were -- (a) 1 , (b) 0 , (c) $ \frac{1}{a^5 + b^5 + c^5} $ , (d) None of them. </p>
<p>So, I require any possible help. And, a complete answer would be most welcome. Thanks in advance.</p>
| Doug M | 317,162 | <p>$\frac 1a + \frac 1b + \frac 1c = \frac 1{a+b+c}\\
\frac {ab +ac + bc}{abc}= \frac 1{a+b+c}\\
(a+b+c)(ab +ac + bc) = abc\\
a^2b + a^2c + ab^2 +ac^2 + b^2c + bc^2 + 2abc = 0\\
(a^2b + ab^2) + c(a^2 + b^2 + 2ab + cb + ca)= 0\\
ab (a+b) + c((a+b)^2 + c(a+b))= 0\\
(a+b)(ab + c(a+b) + c^2) = 0\\
(a+b)(a(b + c) + c (b+c)) = 0\\
(a+b)(b+c)(a + c) = 0$</p>
<p>$a+b = 0$ or $a+c = 0$ or $b+c =0$</p>
<p>Suppose $a=-b$</p>
<p>then
$\frac 1{a^5} + \frac 1{b^5} + \frac 1{c^5} = \frac 1{c^5}$</p>
<p>And going through the other possibilities, it becomes clear that (c) is correct.</p>
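<p>The key step is the classical identity $(a+b+c)(ab+bc+ca)-abc=(a+b)(b+c)(c+a)$, which the derivation above establishes. A numerical spot check of mine (not part of the answer):</p>

```python
import random

random.seed(0)
for _ in range(200):
    a, b, c = (random.randint(-50, 50) for _ in range(3))
    lhs = (a + b + c) * (a * b + b * c + c * a) - a * b * c
    rhs = (a + b) * (b + c) * (c + a)
    assert lhs == rhs
```

<p>With $a=-b$, both $\frac{1}{a^5}+\frac{1}{b^5}+\frac{1}{c^5}$ and $\frac{1}{a^5+b^5+c^5}$ reduce to $\frac{1}{c^5}$, since $a^5=-b^5$, confirming option (c).</p>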
|
1,423,449 | <p>Find all extrema for the function $f(x)=-\frac{x^{3}}{3}+x^{2}-x+4$ on the domain $x \in [-3.3]$.</p>
<p><strong>Solution:</strong> $f'(x)=-x^{2}+2x-1 = 0 \implies (x-1)^{2}=0 \implies x^{*}=1$. </p>
<p>Is that it? </p>
| MathAdam | 266,049 | <p>Hint:</p>
<p>Next, I think you need to show that this isn't merely an inflection point, such as $(0,0)$ on $y=x^3$. </p>
<p>To do this, try taking the second derivative of $f'(x)=-x^{2}+2x-1$ </p>
<p>$$f''(x)=-2x+2$$ </p>
<p>then substitute $x:=1$ to test the point you discovered. Is it a relative maximum, a minimum or an inflection point? </p>
<p>$$-2(1)+2=0$$ </p>
<p>That the result is $0$ shows that the slope is neither decreasing nor increasing at this point. At a relative maximum or minimum you would have a non-zero result. So you are dealing with an inflection point. Since this is the only critical point (as identified by the first derivative), the function is monotonically increasing or decreasing throughout the interval. Calculate the values for $f(x)$ at the extrema, and you have your max/mins. (Since $f(-3)>f(3)$, you know the function is decreasing.)</p>
<p>The following diagram bears this out. The inflection point is marked. The function can be seen to be monotonically decreasing. </p>
<p><a href="https://i.stack.imgur.com/yk7zF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yk7zF.png" alt="enter image description here"></a></p>
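<p>A small numeric check of the reasoning above (my own sketch, not part of the answer): the only critical point $x=1$ gives $f'(1)=f''(1)=0$, $f'\le 0$ throughout, and the endpoint values confirm the maximum and minimum on $[-3,3]$:</p>

```python
def f(x):   return -x**3 / 3 + x**2 - x + 4
def fp(x):  return -x**2 + 2*x - 1          # f'(x) = -(x-1)^2 <= 0
def fpp(x): return -2*x + 2

assert fp(1) == 0 and fpp(1) == 0       # inflection point, not a local max/min
assert f(-3) == 25 and f(3) == 1        # max f(-3) = 25, min f(3) = 1
assert all(fp(x / 10) <= 0 for x in range(-30, 31))   # f is decreasing on [-3, 3]
```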
|
2,842,217 | <p>I'm looking to understand the tangent Taylor series, but I'm struggling to understand how to use long division to divide one series (sine) by the other (cosine). I also can't find examples of the tangent series much beyond $x^5$ (Wikipedia and YouTube videos both stop at the second or third term), which is not enough for me to see any pattern. ($x^3/3 + 2x^5/15$ tells me nothing.)</p>
<p>Wiki says Bernoulli numbers, which I plan on studying next, but seriously, I could really use an example of the tangent series out to terms 5-6 just to get a ballpark of what's going on before I start plug and pray. If someone can explain why the long division of the series spits out $x^3/3$ instead of $x^3/3x^2$, that would help too,</p>
<p>because I took x^3/6 divided by x^2/2 and got 2x^3/6x^2, following the logic that 4/2 divided by 3/5 = 2/0.6 or 20/6. So I multiplied my top and bottom terms for the numerator, and my two middle terms for the denominator (4x5)/(2x3) = correct.</p>
<p>But when i do that with terms in the taylor series I'm doing something wrong. does that first x from sine divided by that first 1 from cosine have anything to do with it?</p>
<p>Completely lost. </p>
| farruhota | 425,072 | <p>An alternative and straightforward method is:
$$\begin{align}y&=\tan x \ (=0)\\
y'&=\frac{1}{\cos^2 x}=1+\tan^2x=1+y^2 \ (=1)\\
y''&=2yy'=2y(1+y^2)=2y+2y^3 \ (=0) \\
y'''&=2y'+6y^2y'=2+8y^2+6y^4 \ (=2)\\
y^{(4)}&=16yy'+24y^3y'=16y+40y^3+24y^5 \ (=0)\\
y^{(5)}&=16+120y^2y'+120y^4y'=16+136y^2+240y^4+120y^6 \ (=16)\\
y^{(6)}&=272yy'+960y^3y'+720y^5y'=272y+1232y^3+1680y^5+720y^7 \ (=0)\\
y^{(7)}&=272y'+3696y^2y'+8400y^4y'+5040y^6y'=272+3968y^2+\cdots \ (=272)\end{align}$$
Hence:
$$\begin{align}\tan x&=0+\frac{1}{1!}x+\frac{0}{2!}x^2+\frac{2}{3!}x^3+\frac{0}{4!}x^4+\frac{16}{5!}x^5+\frac{0}{6!}x^6+\frac{272}{7!}x^7+\cdots\\
&=x+\frac{1}{3}x^3+\frac{2}{15}x^5+\frac{17}{315}x^7+\cdots\end{align}$$
Note: You can continue as far as you want, though the computation gets tedious. <a href="http://www.wolframalpha.com/input/?i=taylor%20tanx" rel="nofollow noreferrer">WA</a> shows the expansion to many more terms (press on "More terms" button).</p>
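<p>The table of derivatives can be generated mechanically: writing $y^{(n)}=P_n(y)$ as a polynomial in $y=\tan x$, the recursion is $P_{n+1}(y)=P_n'(y)\,(1+y^2)$, and the value at $x=0$ is the constant term $P_n(0)$. A sketch of mine (not from the answer):</p>

```python
def step(p):
    # p: coefficient list of P_n(y); return coefficients of P_n'(y) * (1 + y^2)
    dp = [i * c for i, c in enumerate(p)][1:]   # formal derivative P_n'
    out = [0] * (len(dp) + 2)
    for i, c in enumerate(dp):
        out[i] += c          # P_n'(y) * 1
        out[i + 2] += c      # P_n'(y) * y^2
    return out

p = [0, 1]                   # P_0(y) = y, i.e. tan x itself
derivs = [p[0]]              # n-th derivative of tan at 0
for _ in range(7):
    p = step(p)
    derivs.append(p[0])

assert derivs == [0, 1, 0, 2, 0, 16, 0, 272]   # matches the table above
```

<p>Dividing by $n!$ recovers the series coefficients $1, \tfrac13, \tfrac{2}{15}, \tfrac{17}{315}$.</p>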
|
3,991,691 | <p>I'm having some trouble proving the following:</p>
<blockquote>
<p>Let <span class="math-container">$d$</span> be the smallest positive integer such that <span class="math-container">$a^d \equiv 1 \pmod m$</span>, for <span class="math-container">$a \in \mathbb Z$</span> and <span class="math-container">$m \in \mathbb N$</span> and with <span class="math-container">$\gcd(a,m) = 1$</span>. Prove that, if <span class="math-container">$a^n \equiv 1 \pmod m$</span> then <span class="math-container">$d\mid n$</span>.</p>
</blockquote>
<p>The first thing that came to my mind was Euler's theorem but I couldn't conclude anything because I'm not very skilled when it comes to using Euler's totient function. Can someone give me any tips or show me how to solve this?</p>
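<p>One standard route that avoids Euler's theorem entirely is the division algorithm: write $n=qd+r$ with $0\le r<d$; since $a^n\equiv 1$ and $a^d\equiv 1$, also $a^r=a^{n-qd}\equiv 1 \pmod m$, so minimality of $d$ forces $r=0$, i.e. $d\mid n$. An empirical check of the claim (my own sketch, not from the question):</p>

```python
import math

def order(a, m):
    # smallest d >= 1 with a^d ≡ 1 (mod m); assumes gcd(a, m) == 1
    d, x = 1, a % m
    while x != 1:
        x = x * a % m
        d += 1
    return d

for m in (7, 12, 100, 101):
    for a in range(2, m):
        if math.gcd(a, m) != 1:
            continue
        d = order(a, m)
        for n in range(1, 3 * d + 1):
            # a^n ≡ 1 exactly when d divides n
            assert (pow(a, n, m) == 1) == (n % d == 0)
```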
| J. W. Tanner | 615,567 | <p>It's <span class="math-container">$(5y-2x)^2-21y^2=-20$</span>, which is a Pell type equation.</p>
<hr />
<p>I got that by completing the square:</p>
<p><span class="math-container">$x^2-5xy+y^2=-5\implies \left(x-\frac52y\right)^2-\frac{21}4y^2=-5\implies (5y-2x)^2-21y^2=-20$</span>.</p>
<p>So it's <span class="math-container">$X^2-21Y^2=-20$</span>, with <span class="math-container">$X=5y-2x$</span> and <span class="math-container">$Y=y$</span>.</p>
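<p>Completing the square here amounts to the exact polynomial identity $(5y-2x)^2-21y^2=4(x^2-5xy+y^2)$, so multiplying the original equation by $4$ gives $X^2-21Y^2=-20$. A quick check of mine (not part of the answer):</p>

```python
import random

random.seed(1)
for _ in range(500):
    x, y = random.randint(-100, 100), random.randint(-100, 100)
    assert (5*y - 2*x)**2 - 21*y**2 == 4 * (x**2 - 5*x*y + y**2)
```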
|
3,991,691 | <p>I'm having some trouble proving the following:</p>
<blockquote>
<p>Let <span class="math-container">$d$</span> be the smallest positive integer such that <span class="math-container">$a^d \equiv 1 \pmod m$</span>, for <span class="math-container">$a \in \mathbb Z$</span> and <span class="math-container">$m \in \mathbb N$</span> and with <span class="math-container">$\gcd(a,m) = 1$</span>. Prove that, if <span class="math-container">$a^n \equiv 1 \pmod m$</span> then <span class="math-container">$d\mid n$</span>.</p>
</blockquote>
<p>The first thing that came to my mind was Euler's theorem but I couldn't conclude anything because I'm not very skilled when it comes to using Euler's totient function. Can someone give me any tips or show me how to solve this?</p>
| Will Jagy | 10,400 | <p>The question as given is perfect for a technique from contest mathematics called <strong>Vieta Jumping</strong>. This is a special case of automorphism of quadratic forms. It has the virtue that it can be justified using nothing worse that the quadratic formula, and it does not require the use of square roots either. If we have a solution in positive integers to <span class="math-container">$x^2 - 5xy + y^2 = -5$</span> we get new ones using either
<span class="math-container">$$ (x,y) \mapsto (5y-x,y) $$</span> or</p>
<p><span class="math-container">$$ (x,y) \mapsto (x,5x -y) $$</span></p>
<p>Note that repeating one of the "jumps" twice in a row goes back to the original solution.</p>
<p>A "fundamental" solution is one that minimizes <span class="math-container">$x+y$</span> as much as possible. That is, fundamental when both
<span class="math-container">$$ x+y \leq 5y - x + y $$</span> and
<span class="math-container">$$ x+y \leq x +5x - y . $$</span>
The first one becomes
<span class="math-container">$$ 2x \leq 5 y, $$</span> the second
<span class="math-container">$$ 2y \leq 5x .$$</span><br />
Altogether, fundamental solutions are on the arc <span class="math-container">$x^2 - 5xy + y^2 $</span> with
<span class="math-container">$$\frac{2x}{5} \leq y \leq \frac{5x}{2} $$</span></p>
<p>As you can see, there are the two integer points between the slanted lines, those being <span class="math-container">$(2,1)$</span> and <span class="math-container">$(1,2).$</span> Every solution in positive integers reduces to one of these; in turn, jumping up from one of the fundamental solutions generates all solutions.</p>
<p>The linear recurrence <span class="math-container">$w_{n+2} = 5 w_{n+1} - w_n$</span> that applies to every other number in a sequence comes from the trace and determinant of
<span class="math-container">$$
\left(
\begin{array}{r}
5 & -1 \\
1 & 0 \\
\end{array}
\right)
$$</span>
being <span class="math-container">$5$</span> and <span class="math-container">$1.$</span></p>
<p><a href="https://i.stack.imgur.com/CnKMZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CnKMZ.png" alt="enter image description here" /></a></p>
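<p>The jumps can be exercised directly: each map preserves $x^2-5xy+y^2$ (substitute $x\mapsto 5y-x$ and expand), and on positive solutions $5y-x=(y^2+5)/x>0$, so positivity is preserved too. A generation sketch of mine (not from the answer), starting from the fundamental solutions $(2,1)$ and $(1,2)$:</p>

```python
def norm(x, y):
    return x*x - 5*x*y + y*y

solutions = {(2, 1), (1, 2)}        # the two fundamental solutions
frontier = list(solutions)
for _ in range(6):                  # jump "up" a few generations
    nxt = []
    for x, y in frontier:
        for cand in ((5*y - x, y), (x, 5*x - y)):   # the two Vieta jumps
            if cand not in solutions:
                solutions.add(cand)
                nxt.append(cand)
    frontier = nxt

assert all(norm(x, y) == -5 for x, y in solutions)
```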
|
3,713,395 | <p>I would like the definition of a modular form with complex multiplication and if possible a reference.
Thank you ! </p>
| Homieomorphism | 553,656 | <p>A newform <span class="math-container">$f=\sum_{n=1}^\infty a(n)q^n$</span> of level <span class="math-container">$N$</span> and weight <span class="math-container">$k$</span> has complex multiplication if there is an imaginary quadratic field <span class="math-container">$K$</span> such that <span class="math-container">$a(p)=0$</span> whenever <span class="math-container">$p$</span> is a prime which is inert in <span class="math-container">$K$</span>. The field <span class="math-container">$K$</span> is then unique (if the weight <span class="math-container">$k \geq 2$</span>), and one says that <span class="math-container">$f$</span> has CM by <span class="math-container">$K$</span>.</p>
|
96,799 | <pre><code>NDSolve[{ (-y''[r]/1880) + (470 (0.04077)^2 r^2 - 48 + 1/(1880 r^2)) y[r] == 0,
y[0] = 0, y'[0] = 0}, y, {r, -4, 4}]
</code></pre>
<p>I use this but get errors and am unable to get a plot.</p>
<hr>
<p><em>Update</em></p>
<p>Even after I fix the syntax error,</p>
<pre><code>NDSolve[{(-y''[r]/1880) + (470 (0.04077)^2 r^2 - 48 + 1/(1880 r^2)) y[r] == 0,
y[0] == 0, y'[0] == 0}, y, {r, -4, 4}]
</code></pre>
<p>I still get <code>Power::infy</code> (<code>1/0.^2</code>), <code>Infinity::indet</code>, and <code>NDSolve::ndnum</code> (non-numerical derivative) errors. Is there a way to integrate and plot this ODE?</p>
| user21 | 18,437 | <p>There is a singularity at <code>r==0</code> you have to deal with.</p>
<pre><code>{(-y''[r]/1880) + (470 (0.04077)^2 r^2 - 48 + 1/(1880 r^2)) y[r] == 0,
y[0] == 0, y'[0] == 0} /. r -> 0
{False, y[0] == 0, Derivative[1][y][0] == 0}
Power::infy : "\"Infinite expression \[NoBreak]1/0^2\[NoBreak] \
encountered.\""
</code></pre>
<p>One way around this may be to integrate from an <code>r > 0</code> to <code>r==4</code>.</p>
<pre><code>ode = NDSolveValue[{Derivative[2][y][r] - (1880*470*0.04077^2*r^2 - 48 + 1/1880 r^2)*y[r] == 0,
   y[10^(-10)] == 0, Derivative[1][y][10^(-10)] == 0},
  y[r], {r, -4, 4}];
Plot[ode, {r, -4, 4}]
</code></pre>
<p>The solution seems to be 0. everywhere.</p>
<pre><code>ode /. r -> 0
0.`
</code></pre>
|
96,799 | <pre><code>NDSolve[{ (-y''[r]/1880) + (470 (0.04077)^2 r^2 - 48 + 1/(1880 r^2)) y[r] == 0,
y[0] = 0, y'[0] = 0}, y, {r, -4, 4}]
</code></pre>
<p>I use this but get errors and am unable to get a plot.</p>
<hr>
<p><em>Update</em></p>
<p>Even after I fix the syntax error,</p>
<pre><code>NDSolve[{(-y''[r]/1880) + (470 (0.04077)^2 r^2 - 48 + 1/(1880 r^2)) y[r] == 0,
y[0] == 0, y'[0] == 0}, y, {r, -4, 4}]
</code></pre>
<p>I still get <code>Power::infy</code> (<code>1/0.^2</code>), <code>Infinity::indet</code>, and <code>NDSolve::ndnum</code> (non-numerical derivative) errors. Is there a way to integrate and plot this ODE?</p>
| Michael E2 | 4,999 | <p><em>[Update notice: I had left in the previous code initial conditions for <code>NDSolve</code> from working through the OP's problem and forgot to generalize them. They are now fixed.]</em></p>
<h3>Introduction</h3>
<p>The <a href="http://mathworld.wolfram.com/FrobeniusMethod.html" rel="nofollow noreferrer">method of Frobenius</a> obtains a series solution to an second-order, linear ODE $y''+P\,y'+Q\,y=0$ at a regular singular point $x_0$ in the form
$$y(x) = (x-x_0)^m u(x) = (x-x_0)^m \sum_{k=0}^\infty a_k (x-x_0)^k\,,$$
where $u(x)$ is analytic in a neighborhood $x_0$ and $m$ is a root of the <a href="http://mathworld.wolfram.com/IndicialEquation.html" rel="nofollow noreferrer">indicial equation</a>.</p>
<p>If we make the substitution $y(x) \mapsto x^m u(x)$ in the ODE, we should expect to get a differential equation for $u$ that is nonsingular in a neighborhood of $x_0$.</p>
<p>One should not expect to be able to solve every initial-value problem at a singularity. The factor $(x-x_0)^m$ may force $y(x)$ or $y'(x)$ to be zero or to go to infinity at $x = x_0$. If there are two independent solutions and one of them is infinite at $x_0$, then it is impossible to obtain every initial condition from a linear combination of the two. In the OP's problem, there are two independent Frobenius-series solutions; one of them it turns out satisfies $y(0)=y'(0)=0$.</p>
<h3>OP's Example</h3>
<p>The function (currently) <code>frobeniusNDSolve</code> returns two independent solutions from the singular point <code>r == 0</code> of the OP's ODE over the interval <code>{-4, 4}</code>.</p>
<pre><code>opODE = (-y''[r]/1880) + (470 (Rationalize@0.04077)^2 r^2 - 48 + 1/(1880 r^2)) y[r];
sol = frobeniusNDSolve[opODE, y, {r, 0, {-4, 4}}]
</code></pre>
<p><img src="https://i.stack.imgur.com/IawGZ.png" alt="Mathematica graphics"></p>
<p>Two independent solutions are returned. They are highly oscillatory and one is much, much larger than the other.</p>
<pre><code>Plot[(y[r] /. sol)/{1000000, 1} // Evaluate, {r, -0.05, 0.05}]
</code></pre>
<p><img src="https://i.stack.imgur.com/wmDBV.png" alt="Mathematica graphics"></p>
<p>Any linear combination of them is also a solution. Here is a solution to the IVP <code>y[1] == 1, y'[1] == -1</code>.</p>
<pre><code>ivpsol = {a, b} /. First@NSolve[{1, -1} == ({a, b}.({y[1], y'[1]} /. sol))]
(* {-0.0236628, 14203.7} *)
Plot[ivpsol.(y[r] /. sol) // Evaluate, {r, 0.999995, 1.000005}, AspectRatio -> Automatic]
Plot[ivpsol.(y[r] /. sol) // Evaluate, {r, 0., 1.2}]
</code></pre>
<p><img src="https://i.stack.imgur.com/pSfAo.png" alt="Mathematica graphics">
<img src="https://i.stack.imgur.com/AcLzC.png" alt="Mathematica graphics"></p>
<h3>Code dump</h3>
<p>This is a rather minimal implementation. It handles the easy case in which the indicial equation of a second-order, linear differential equation has distinct roots whose difference is not an integer. Otherwise, it will return unevaluated. Also missing is the full <code>NDSolve</code> syntax for the return value.</p>
<blockquote>
<p><code>frobeniusNDSolve[ode, y, {x, x0, x2}, ndsolveopts]</code><br>
<code>frobeniusNDSolve[ode, y, {x, x0, {x1, x2}}, ndsolveopts]</code><br>
returns two independent solutions to the linear second-order <code>ode</code>, valid over <code>x0 <= x <= x2</code> or <code>x1 <= x <= x2</code> if possible. The point <code>x0</code> should be a regular singular point with distinct roots of the indicial equation not differing by an integer.</p>
</blockquote>
<pre><code>ClearAll[frobeniusODE];
frobeniusODE[ode_, y_ -> u_, {x_, x0_}, m_: \[FormalM]] :=
y''[x] - (y''[x] /. First@Solve[ode == 0, y''[x]]) /.
y -> ((# - x0)^m u[#] &) // Collect[#, (x - x0)^m] &
ClearAll[linearODEQ];
linearODEQ[ode_, y_, x_] := Length@ coefficientsODE[ode, y, x] === 2;
ClearAll[orderODE];
orderODE[ode_] := Module[{d},
d = Cases[ode, Derivative[n : __][_][_] :> {n}, Infinity];
If[Max[Length /@ d] === 1,
Max@ d, (* ODE *)
$Failed] (* not an ODE *)
]
ClearAll[coefficientsODE];
mem : coefficientsODE[ode_, y_, x_] :=
mem = With[{order = orderODE[ode]},
With[{yp = Derivative[order][y][x]},
CoefficientArrays[yp /. Solve[ode == 0, yp],
Table[Derivative[n][y][x], {n, 0, order - 1}]
]
]];
ClearAll[indicialCoefficients];
indicialCoefficients[ode_, y_, {x_, x0_}] :=
With[{res = -Limit[#, x -> x0] & /@
({(x - x0)^2, x - x0} First@ Last@ coefficientsODE[ode, y, x])},
res /; VectorQ[res, NumericQ]
];
ClearAll[indicialRoots];
indicialRoots[ode_, y_, {x_, x0_}, m_: \[FormalM]] /;
linearODEQ[ode, y, x] && orderODE[ode] == 2 :=
Module[{c},
c = indicialCoefficients[ode, y, {x, x0}];
Solve[m (m - 1) + c.{1, m} == 0, m] /; FreeQ[c, indicialCoefficients]
]
(* Produces two independent solutions *)
Clear[frobeniusNDSolve];
Options[frobeniusNDSolve] = Options[NDSolve];
abs[x_] := Piecewise[{{x, x >= 0}, {-x, x < 0}}];
frobeniusNDSolve[ode_, y_, {x_, x0_, x2_?NumericQ}, opts : OptionsPattern[]] :=
frobeniusNDSolve[ode, y, {x, x0, {x0, x2}}, opts];
frobeniusNDSolve[ode_, y_, {x_, x0_, {x1_, x2_}},
opts : OptionsPattern[]] :=
Module[{roots, u, m, ode2, ode3, icp, sol},
roots = indicialRoots[ode /. Equal -> Subtract, y, {x, x0}];
(ode2 = Collect[u''[x] - (u''[x] /.
First@ Solve[
frobeniusODE[ode /. Equal -> Subtract, y -> u, {x, x0}, #] == 0, u''[x]]),
{u[x], u'[x], u''[x]}, Simplify];
ode3 = ode2 /.
With[{up = Coefficient[ode2, u'[x]],
uq = Coefficient[ode2, u[x]]},
{up u'[x] -> Piecewise[{{up, x != x0}}] u'[x],
uq u[x] -> Piecewise[{{uq, x != x0}}] u[x]}
];
icp = Solve[SeriesCoefficient[ode2, {x, x0, -1}] == 0 /. u[x0] -> 1,
u'[x0]];
If[icp === {{}},
icp = 0,
If[MatchQ[icp, {{_Rule}}],
icp = u'[x0] /. First@icp,
icp = $Failed]];
sol = NDSolve[
{ode3 == 0, {u[x0], u'[x0]} == {1, icp}},
u, {x, x1, x2}, opts];
{y -> Function @@ {x, abs[x - x0]^# u[x] /. First[sol]}}) & /@
Flatten@ Values[roots] /;
FreeQ[roots, indicialRoots] && ! IntegerQ@ First@ Differences[Flatten@ Values[roots]]
]
</code></pre>
<p>The code also works if the indicial roots are conjugate complex numbers. To get independent solutions, one should take the real and imaginary parts. (This could be made automatic, but as yet, I haven't programmed it.)</p>
|
3,613,950 | <blockquote>
<p>Let <span class="math-container">$S$</span> be the set of all subsets of <span class="math-container">$\{1, 2, \ldots, n\}$</span>. Two different sets are chosen at random from <span class="math-container">$S$</span>. What is the probability that
the two subsets have exactly two elements in common?</p>
</blockquote>
<p><strong>My attempt</strong></p>
<p>I found that the size of the sample space is <span class="math-container">$|\Omega| = \dfrac{2^n\left(2^n-1\right)}{2}$</span></p>
<p>Then I tried to find the number of ways to select two subsets sharing two equal elements:</p>
<p>The number of ways to choose one subset that contains <span class="math-container">$i, j$</span> is <span class="math-container">$2^{n-2}$</span>.</p>
<p>The number of ways to choose another subset that contains <span class="math-container">$i, j$</span> and is different from the previous is: <span class="math-container">$\sum{2^{n-k-1}}$</span></p>
<p>However, I failed to put those two together to obtain correct result. I wonder whether there is another approach to this problem or how my method could have been continued.</p>
<p>Thanks in advance.</p>
| mathcounterexamples.net | 187,663 | <p><strong>Hint</strong></p>
<p>Fréchet derivative of <span class="math-container">$f(X) =\mathbf{a}^TX^2\mathbf{a}$</span> is given by</p>
<p><span class="math-container">$$\partial_{X_0}f(h) = \mathbf{a}^TX_0 h\mathbf{a} + \mathbf{a}^Th X_0 \mathbf{a}$$</span></p>
|
400,926 | <p>Maybe you can help here. There is kind of a lengthy setup to understand what the question is asking. There is a paper I'm reading, and in one section of it I can't make heads or tails of the result. The reference is "Global Carleman Estimates for Waves and Applications" by Baudouin, Buhan, Ervedoza. </p>
<hr>
<p>The setup (taken from the paper) : Suppose $p \in L^{\infty}(\Omega \times (-T,T))$. Given initial data $(y_0^{-T},y_1^{-T}) \in L^2(\Omega)\times H^{-1}(\Omega)$, find a function $u \in L^2(\Gamma_0 \times (-T,T))$ such that the solution $y$ of</p>
<p>\begin{eqnarray}
\partial_t^2y -\Delta y + py = 0 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{ in } \Omega \times(-T,T) \\
y = u|_{\Gamma_0} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{ on } \partial \Omega \times (-T,T)\\
y(-T) = y_0^{-T}, \partial_ty(-T) = y_1^{-T} \ \ \ \ \ \ \ \ \ \ \ \ \ \text{ in } \Omega
\end{eqnarray}
satisfies $y(T) = \partial_ty(T) = 0$. </p>
<hr>
<p>There is a claim that we can get an explicit form for $u$ and $y$. Let $\phi = e^{\lambda \psi}$, where $\psi(x,t) = |x-x_0|^2 - \beta t^2 +C$. For $s$ a parameter, define the functional
$$K_{s,p}(z) = \frac{1}{2s}\int_{-T}^{T}\int_{\Omega}e^{2s\phi}|\partial_t^2z - \Delta z + pz|^2 dx \ dt + \frac{1}{2}\int_{-T}^{T}\int_{\Gamma_0}e^{2s\phi}|\partial_{\nu}z|^2 d \sigma dt $$ $$+\langle(y_0^{-T},y_1^{-T}),(z(-T), \partial_t z(-T))\rangle_{(L^2 \times H^{-1}) \times (H_0^1 \times L^2)}$$</p>
<p>Here, $\langle(y_0^{-T},y_1^{-T}),(z(-T), \partial_t z(-T))\rangle_{(L^2 \times H^{-1}) \times (H_0^1 \times L^2)} = \int_{\Omega}{y_0^{-T}z_1^{-T} dx} - \langle y_1^{-T},z_0^{-T}\rangle_{H^{-1} \times H_0^1}$,
and
$\langle y_1^{-T},z_0^{-T}\rangle_{H^{-1} \times H_0^1} = \int_{\Omega} \nabla(-\Delta_d)^{-1}y_1^{-T}\cdot \nabla z_0^{-T} dx$
where $\Delta_d$ is the Laplace operator with Dirichlet boundary conditions.</p>
<p>Part of the paper shows that $K_{s,p}$ has a unique minimizer $Z[s,p]$, for each $s,p$.</p>
<hr>
<p>The setup is above. Now come the two parts I don't get.<br>
(1). The paper claims that the Euler-Lagrange equation given by the minimization of $K_{s,p}$ is that, for every admissible $z$,
$$\frac{1}{s}\int_{-T}^{T}\int_{\Omega}e^{2s\phi}(\partial_t^2z - \Delta z + pz)(\partial_t^2Z -\Delta Z +pZ) dx \ dt + \int_{-T}^{T}\int_{\Gamma_0}e^{2s\phi}\partial_{\nu}z \partial_{\nu}Z d\sigma dt$$ $$+ \langle(y_0^{-T},y_1^{-T}),(z(-T), \partial_t z(-T))\rangle_{(L^2 \times H^{-1}) \times (H_0^1 \times L^2)} = 0$$</p>
<p>I don't understand how this result is obtained. From what I know, the Euler-Lagrange equations are as follows (from Evans' book). If $I[w] = \int L(Dw(x),w(x),x)$, and we call these variables $p, z, x$ respectively, then the Euler-Lagrange equations satisfy $-\sum{({L_{p_i}(Du,u,x)})_{x_i}} + L_z(Du,u,x) = 0$. When I try to do this to $K_{s,p}$, I get a huge mess, because it seems like we need to use the product rule. I don't get how it simplifies to this form, and why the third term $\langle\cdot,\cdot\rangle$ stays the same.</p>
<p>(2) Let $Y = \frac{1}{s}e^{2s\phi}(\partial_t^2 - \Delta + p)Z[s,p]$, and let $U[s,p] = e^{2s\phi}\partial_{\nu}Z[s,p]|_{\Gamma_0}$. </p>
<p>Then, we get
$$\int_{-T}^{T}\int_{\Omega}e^{2s\phi}(\partial_t^2z - \Delta z + pz)Y dx \ dt + \int_{-T}^{T}\int_{\Gamma_0}e^{2s\phi}\partial_{\nu}z U d\sigma dt$$ $$+ \langle(y_0^{-T},y_1^{-T}),(z(-T), \partial_t z(-T))\rangle_{(L^2 \times H^{-1}) \times (H_0^1 \times L^2)} = 0$$</p>
<p>The paper claims that this is the dual formulation of the problem. What does this mean exactly, and how does this help us show that Y,U works as a solution?</p>
<p>Any help is greatly appreciated. Thanks in advance</p>
| guacho | 77,946 | <p>Consider $g(\epsilon)_{z,v}=K_{s,p}(z+\epsilon v)$ for $v$ an arbitrary function in the appropriate space. Now take the derivative in $\epsilon$ and evaluate at $\epsilon =0$. This is the directional (Gateaux) derivative of the functional along the line given by $v$; setting it equal to zero for every $v$ yields exactly the Euler-Lagrange equation stated in the paper.</p>
|
100,739 | <p>Let $a\in (1,e)\cup(e,\infty).$ I'd like to show that the equation $a^x=x^a$ has exactly two positive solutions, and one is larger and one smaller than $e.$ Is it even possible to show? I think I've tried everything.</p>
| Ilya | 5,887 | <p>For positive numbers, your equation is equivalent to $\sqrt[a]{a} = \sqrt[x]{x}$, so you have to consider the graph of the function
$$
y = \sqrt[x]{x}
$$
together with horizontal lines $y = \mathrm{const}$.</p>
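<p>To make this concrete: taking logarithms, for positive numbers $a^x=x^a$ is equivalent to $\frac{\ln a}{a}=\frac{\ln x}{x}$, and $g(x)=\frac{\ln x}{x}$ increases on $(0,e)$, decreases on $(e,\infty)$, and peaks at $g(e)=1/e$, so each admissible $a$ yields exactly one solution on each side of $e$. A quick numerical check in Python (an illustrative sketch, not part of the original answer):</p>

```python
import math

def g(x):
    # ln(x)/x; for positive a, x:  a^x = x^a  <=>  g(a) = g(x)
    return math.log(x) / x

def bisect(f, lo, hi, steps=200):
    # plain bisection; assumes f changes sign on [lo, hi]
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

a = 2.0                                                # a in (1, e)
c = g(a)
small = bisect(lambda x: g(x) - c, 1.000001, math.e)   # root below e
large = bisect(lambda x: g(x) - c, math.e, 100.0)      # root above e
```

<p>For $a=2$ the two roots are $x=2$ and $x=4$, since $2^4=4^2=16$.</p>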
|
1,645,361 | <p>I am aware that the union of subspaces does not necessarily yield a subspace. However, I am confused about the following question: </p>
<blockquote>
<p>(i) Let $U, U'$ be subspaces of a vector space $V$ (both not equal to $V$). Prove that the union of $U$ and $U'$ does not equal $V$.<br>
(ii) Find an example of $V$ and $U,U',U''$ contained in $V$ (all not equal to $V$) such that the union of $U,U'$ and $U''$ is equal to $V$. </p>
</blockquote>
<p>Thank you.</p>
| Bernard | 202,857 | <p>This is perfectly impossible if the base field is infinite, and is known as the <em>avoidance lemma for subspaces</em>: </p>
<blockquote>
<p>If a subspace of a vector space over an infinite field is contained in a finite union of subspaces, it is contained in one of them.</p>
</blockquote>
<p>You can find a proof in my answer to <a href="https://math.stackexchange.com/questions/1165705/a-vectorspace-over-an-infinite-field-is-not-a-finite-union-of-proper-subspaces/1186139#1186139">this question</a>.</p>
<p>Note that the analogous result for subgroups of a group holds only for the union of two subgroups.</p>
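<p>For part (ii) of the question, the base field therefore must be finite. The smallest example is $V=\mathbb{F}_2^2$, the union of its three one-dimensional subspaces. A brute-force check in Python (an illustrative sketch, not from the original answer):</p>

```python
from itertools import product

# V = (F_2)^2: the four vectors over the two-element field
V = set(product((0, 1), repeat=2))

# the three proper (one-dimensional) subspaces of V
U1 = {(0, 0), (1, 0)}
U2 = {(0, 0), (0, 1)}
U3 = {(0, 0), (1, 1)}

# each U_i is a proper subset, yet their union is all of V
assert all(U < V for U in (U1, U2, U3))
assert U1 | U2 | U3 == V
```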
|
3,354,566 | <p>I see integrals defined as anti-derivatives but for some reason I haven't come across the reverse. Both seem equally implied by the fundamental theorem of calculus.</p>
<p>This emerged as a sticking point in <a href="https://math.stackexchange.com/questions/3354502/are-integrals-thought-of-as-antiderivatives-to-avoid-using-faulhaber">this question</a>.</p>
| symplectomorphic | 23,611 | <p>In a sense your question is very natural. Let's take an informal approach to it, and then see where the technicalities arise. (That's how a lot of research mathematics works, by the way! Have an intuitive idea, and then try to implement it carefully. The devil is always in the details.)</p>
<p>So, one way to tell the familiar story of one-variable calculus is as follows:</p>
<ol>
<li>Define the <strong>derivative</strong> <span class="math-container">$f'$</span> of a function <span class="math-container">$f$</span> as the limit of the difference quotient, <span class="math-container">$h^{-1}(f(x+h)-f(x))$</span>, as <span class="math-container">$h\to0$</span>.</li>
<li>Define <em>an</em> <strong>anti-derivative</strong> of a function <span class="math-container">$f$</span> as a function <span class="math-container">$F$</span> for which <span class="math-container">$F'=f$</span>. </li>
<li>Define the <strong>definite integral</strong> of a function <span class="math-container">$f$</span> over <span class="math-container">$[a,b]$</span>, say as the limit of Riemann sums.</li>
<li>Discover that (2) and (3) are related, in the sense that
<span class="math-container">$$\int_a^bf=F(b)-F(a)$$</span>
so long as <span class="math-container">$F$</span> is any anti-derivative of <span class="math-container">$f$</span>.</li>
</ol>
<hr>
<p>Now, your idea is that you can imagine doing this the other way around, as follows:</p>
<ol>
<li>Define the <strong>definite integral</strong> of a function <span class="math-container">$f$</span> over an interval <span class="math-container">$[a,b]$</span>, say as a limit of Riemann sums.</li>
<li>Define <em>an</em> <strong>anti-integral</strong> of a function <span class="math-container">$F$</span> as a function <span class="math-container">$f$</span> for which
<span class="math-container">$$F(x)-F(0)=\int_0^xf$$</span></li>
<li>Define the <strong>derivative</strong> of a function, as the limit of the difference quotient.</li>
<li>Discover that (2) and (3) are related, in the sense that
one anti-integral of <span class="math-container">$f$</span> is just <span class="math-container">$f'$</span>, so long as <span class="math-container">$f'$</span> is defined.</li>
</ol>
<hr>
<p>The trouble in both stories arises in steps 2 and 4. In both versions, step 4 is a form of the Fundamental Theorem.</p>
<p><em>The Problem with Step 2</em></p>
<p>In both the standard and the flipped story, step 2 poses existence and uniqueness problems.</p>
<p>In the standard story, an anti-derivative of <span class="math-container">$f$</span> may not even exist; one sufficient condition is to require that <span class="math-container">$f$</span> be continuous, but that is not necessary. And even if you do require that <span class="math-container">$f$</span> be continuous, you're always going to have non-uniqueness. Thus "anti-differentiation" construed as an operation is not really a bona fide "inverse" operation, because it is not single-valued. Or in other words, differentiation is not injective: it identifies many different functions. (Exactly which functions it identifies depends on the topology of the domain they're defined on.)</p>
<p>In the flipped story, again note that we certainly will never have uniqueness. Given any anti-integral <span class="math-container">$f$</span>, you can find infinitely many others by changing the values of <span class="math-container">$f$</span> at a set of measure zero. We also aren't guaranteed existence of an anti-integral for a given <span class="math-container">$F$</span>, and this time not even the continuity of <span class="math-container">$F$</span> will serve as a sufficient condition. What we need is even stronger, "<a href="https://en.wikipedia.org/wiki/Absolute_continuity" rel="noreferrer">absolute continuity</a>."</p>
<p><em>The Problem with Step 4</em></p>
<p>In the standard story, the catch is in "so long as <span class="math-container">$F$</span> is any anti-derivative of <span class="math-container">$f$</span>." The problem is that <a href="https://math.stackexchange.com/questions/239324/is-integrability-equivalent-to-having-antiderivative">not every Riemann integrable function has an anti-derivative</a>. If we want to guarantee an anti-derivative, we can impose the additional hypothesis that <span class="math-container">$f$</span> is continuous (which is again sufficient but not necessary).</p>
<p>A similar problem arises in the flipped scenario: given an arbitrary <span class="math-container">$f$</span>, it might not have an anti-integral. The <a href="https://en.wikipedia.org/wiki/Absolute_continuity#Equivalent_definitions" rel="noreferrer">fundamental theorem for Lebesgue integrals</a> shows that it's both necessary and sufficient to require that <span class="math-container">$f$</span> be absolutely continuous, at least when we work with the Lebesgue definite integral instead of the Riemann definite integral. But given the fact that integrals are not sensitive to values on a set of measure zero, the best conclusion we can draw in that case is that an anti-integral of <span class="math-container">$f$</span> equals <span class="math-container">$f'$</span> "almost everywhere" (meaning, everywhere except at a set of measure zero).</p>
<hr>
<p><strong>The Upshot</strong></p>
<p>Note that even in the familiar story, we don't <em>define</em> integrals as anti-derivatives. Thus you should not expect we could <em>define</em> derivatives as anti-integrals. The essential obstruction to this sort of definition is existence and uniqueness.</p>
<p>In <em>both</em> scenarios, we first specify the seemingly unrelated limit-based definitions of derivatives and definite integrals. We then <em>discover</em> a relationship about how anti-derivatives are related to integrals (the standard story) or how anti-integrals are related to derivatives (the flipped story), assuming enough regularity of the functions involved to resolve the existence and uniqueness problems.</p>
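<p>As a sanity check of step 4 in the flipped story, one can verify numerically that a derivative behaves as an anti-integral: a Riemann sum of $f'$ over $[0,x]$ recovers $f(x)-f(0)$. A small Python sketch (illustrative only, taking $f=\sin$ as an assumed example):</p>

```python
import math

def riemann(f, a, b, n=100_000):
    # midpoint Riemann sum approximating the definite integral of f on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# f = sin, f' = cos: the integral of f' over [0, x] should equal f(x) - f(0)
x = 1.0
assert abs(riemann(math.cos, 0.0, x) - (math.sin(x) - math.sin(0.0))) < 1e-9
```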
|
553,845 | <p>Could we assert that if $H$ is a subgroup of $G$, then the factor group $N_G(H)/C_G(H)$ is isomorphic to a subgroup of ${\rm Inn}(H)$ instead of ${\rm Aut}(H)$?</p>
| Nicky Hekster | 9,605 | <p>There is another approach that is less known than the "N/C" theorem. <br>The <a href="http://en.wikipedia.org/wiki/Outer_automorphism_group" rel="nofollow">outer automorphisms</a> of a group $H$, ${\rm Out}(H)$, are defined as the quotient group ${\rm Aut}(H)/{\rm Inn}(H)$. Note that the elements of ${\rm Out}(H)$ are <em>cosets</em> of automorphisms of $H$, and not themselves automorphisms. Since $H$ and $C_G(H)$ are both normal in $N_G(H)$, the subgroup $HC_G(H) \unlhd N_G(H)$. Observe that $N_G(H)/HC_G(H) \cong (N_G(H)/C_G(H))/(HC_G(H)/C_G(H))$ and $HC_G(H)/C_G(H) \cong H/(H \cap C_G(H)) = H/C_H(H)= H/Z(H) \cong {\rm Inn}(H).$ From this, one can easily show that<p>$N_G(H)/HC_G(H) \hookrightarrow {\rm Out}(H)$.<p>And this is interesting for instance when both ${\rm Out}(H)$ and $Z(H)$ are trivial (such groups are called <em>complete</em>), then $N_G(H)$ becomes a direct product of $H$ and $C_G(H)$. By the way, the symmetric groups $S_n$ are all complete, except for $n=2$ or $6$.</p>
|
395,685 | <p>I recall seeing a quote by William Thurston where he stated that the Geometrization conjecture was almost certain to be true and predicted that it would be proven by curvature flow methods. I don't remember the exact date, but it was from after Hamilton introduced the Ricci flow but well before Perelman's work. Unfortunately, most of the results for Geometrization and Ricci flow are from 2003 or after. Does anyone know if the quote I'm referring to actually exists, and if so, where to find it?</p>
<p>There is a quote from Thurston lauding Perelman's work, which suggests that he thought the Ricci flow was a promising approach, but I thought there was one from before as well.</p>
<blockquote>
<p>That the geometrization conjecture is true is not a surprise. That a
proof like Perelman's could be valid is not a surprise: it has a
certain rightness and inevitability, long dreamed of by many people
(including me). What is surprising, wonderful and amazing is that someone – Perelman – succeeded in rigorously analyzing and controlling this process, despite the many hurdles, challenges and potential pitfalls.</p>
</blockquote>
<p>Thanks in advance.</p>
| Carlo Beenakker | 11,260 | <p>This 1994 paper by Thurston may or may not be the source you are thinking of, but it is a thoughtful essay that conveys the confidence Thurston had in his conjecture (albeit without referring to curvature flow):</p>
<blockquote>
<p>The full geometrization conjecture is still a conjecture. It has been
proven for many cases, and is supported by a great deal of computer
evidence as well, but it has not been proven in generality. I am
convinced that the general proof will be discovered; I hope before too
many more years. At that point, proofs of special cases are likely to
become obsolete.</p>
</blockquote>
<p><A HREF="https://arxiv.org/abs/math/9404236" rel="noreferrer">On proof and progress in mathematics</A></p>
|
897,756 | <p>How can I solve the following trigonometric inequation?</p>
<p>$$\sin\left(x\right)\ne \sin\left(y\right)\>,\>x,y\in \mathbb{R}$$</p>
<p>Why I'm asking this question... I was doing my calculus homework, trying to plot the domain of the function $f\left(x,y\right)=\frac{x-y}{sin\left(x\right)-sin\left(y\right)}$ and figured out I'd have to solve the inequation $\sin\left(x\right)\ne\sin\left(y\right)$... I was able to come to the answer $y\ne x +2\cdot k\cdot \pi \>,\>k \in \mathbb{N}$. However, the answer on the textbook also includes $y\ne -x +2\cdot k\cdot \pi + \pi \>,\>k \in \mathbb{N}$, so I thought that I was probably doing something wrong while solving that inequation.</p>
| Cookie | 111,793 | <p>\begin{align*}
\lim_{h \rightarrow 0} \frac{\sin(x+h)-\sin x}h &=\lim_{h \rightarrow 0} \frac{(\sin x \cos h + \cos x \sin h)-\sin x}h & \text{trigonometric sum formula} \\
&=\lim_{h \rightarrow 0} \frac{\sin x(\cos h-1) + \cos x \sin h}h &\text{shuffle terms in numerator} \\
&=\lim_{h \rightarrow 0} \left(\frac{\sin x(\cos h-1)}h + \frac{\cos x \sin h}h \right) & \text{break the fraction} \\
&=\lim_{h \rightarrow 0} \frac{\sin x(\cos h-1)}h +\lim_{h \rightarrow 0} \frac{\cos x \sin h}h \\
&=\sin x \lim_{h \rightarrow 0} \frac{\cos h-1}h + \cos x \lim_{h \rightarrow 0} \frac{\sin h}h \\
&= \sin x \cdot 0 + \cos x \cdot 1 & \text{apply limit identities}\\
&= \cos x. & \text{simplify}
\end{align*}</p>
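<p>The result can be sanity-checked numerically with a symmetric difference quotient; a short Python sketch (illustrative only):</p>

```python
import math

def symmetric_diff(f, x, h=1e-6):
    # central difference quotient approximating f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# the derivative of sin should match cos at a few sample points
for x in (0.0, 0.5, 1.3, 2.0):
    assert abs(symmetric_diff(math.sin, x) - math.cos(x)) < 1e-8
```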
|
1,708,900 | <p>Does a closed form exist for </p>
<blockquote>
<p>$$\sum \limits_{n=0}^{\infty} \frac{1}{(kn)!}$$</p>
</blockquote>
<p>in terms of $k$ and other functions? The best that I have been able to do is solve the case where $k=1$, since the sum is just the infinite series for $e$. I would guess that any closed form must involve the exponential function, but am at a loss to prove it.</p>
| robjohn | 13,854 | <p>This is a different approach to the idea in David Ullrich's answer.</p>
<hr>
<p>As long as $\frac nk\not\in\mathbb{Z}$,
$$
\begin{align}
\sum_{j=0}^{k-1}e^{2\pi ij\frac nk}
&=\frac{e^{2\pi in}-1}{e^{2\pi i\frac nk}-1}\\
&=0
\end{align}
$$
if $\frac nk\in\mathbb{Z}$, then
$$
\begin{align}
\sum_{j=0}^{k-1}e^{2\pi ij\frac nk}
&=\sum_{j=0}^{k-1}1\\
&=k
\end{align}
$$
Thus,
$$
\begin{align}
\sum_{n=0}^\infty\frac1{(kn)!}
&=\sum_{n=0}^\infty\overbrace{\left(\frac1k\sum_{j=0}^{k-1}e^{2\pi ij\frac nk}\right)}^{1\iff\frac nk\in\mathbb{Z}}\frac1{n!}\\
&=\frac1k\sum_{j=0}^{k-1}\sum_{n=0}^\infty\frac{\left(e^{2\pi ij/k}\right)^n}{n!}\\
&=\frac1k\sum_{j=0}^{k-1}e^{\large e^{2\pi ij/k}}\\
&=\frac1k\sum_{j=0}^{k-1}e^{\cos(2\pi j/k)+i\sin(2\pi j/k)}\\
&=\frac1k\sum_{j=0}^{k-1}e^{\cos(2\pi j/k)}\left(\cos(\sin(2\pi j/k))+i\sin(\sin(2\pi j/k))\right)\\
&=\frac1k\sum_{j=0}^{k-1}e^{\cos(2\pi j/k)}\cos(\sin(2\pi j/k))
\end{align}
$$
The last step follows since the imaginary parts of the $j$ and $k-j$ terms cancel.</p>
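<p>The final real form is easy to confirm numerically; a quick Python check (illustrative, not part of the derivation):</p>

```python
import math

def direct(k, terms=40):
    # partial sum of  sum_{n >= 0} 1/(kn)!
    return sum(1 / math.factorial(k * n) for n in range(terms))

def closed(k):
    # (1/k) * sum_{j < k} exp(cos(2 pi j/k)) * cos(sin(2 pi j/k))
    return sum(
        math.exp(math.cos(2 * math.pi * j / k)) * math.cos(math.sin(2 * math.pi * j / k))
        for j in range(k)
    ) / k

# k = 1 gives e and k = 2 gives cosh(1); higher k match the formula as well
for k in (1, 2, 3, 5):
    assert abs(direct(k) - closed(k)) < 1e-12
```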
|
1,708,900 | <p>Does a closed form exist for </p>
<blockquote>
<p>$$\sum \limits_{n=0}^{\infty} \frac{1}{(kn)!}$$</p>
</blockquote>
<p>in terms of $k$ and other functions? The best that I have been able to do is solve the case where $k=1$, since the sum is just the infinite series for $e$. I would guess that any closed form must involve the exponential function, but am at a loss to prove it.</p>
| Mariusz Iwaniuk | 276,773 | <p>Sum expressed by a special function:</p>
<p>$$\color{red}{\sum _{n=0}^{\infty } \frac{1}{(k n)!}}=\sum _{n=0}^{\infty } \frac{1}{\Gamma (k
n+1)}=\color{red}{E_{k,1}(1)}$$</p>
<p>where $\color{red}{E_{k,1}(1)}$ is the generalized <a href="http://mathworld.wolfram.com/Mittag-LefflerFunction.html" rel="nofollow noreferrer">Mittag-Leffler</a> function.</p>
|
38,731 | <p>The <a href="http://en.wikipedia.org/wiki/Ramanujan_summation">Ramanujan Summation</a> of some infinite sums is consistent with a couple of sets of values of the Riemann zeta function. We have, for instance, $$\zeta(-2k)=\sum_{n=1}^{\infty} n^{2k} = 0 \ (\mathfrak{R}) $$ (for positive integer $k$) and $$\zeta(-(2k-1))=\sum_{n=1}^{\infty} n^{2k-1}=-\frac{B_{2k}}{2k} \ (\mathfrak{R})$$ (again, $k \in \mathbb{N} $). Here, $B_k$ is the $k$'th <a href="http://en.wikipedia.org/wiki/Bernoulli_number">Bernoulli number</a>. However, it does not hold when, for example, $$\sum_{n=1}^{\infty} \frac{1}{n}=\gamma \ (\mathfrak{R})$$ (here $\gamma$ denotes the Euler-Mascheroni constant), as it is not equal to $$\zeta(1)=\infty$$.
<p>Question: Are the first two examples I stated the only instances in which the Ramanujan summation of some infinite series coincides with the values of the Riemann zeta function?</p>
| Anixx | 2,513 | <p>You should note that the Cauchy principal value of $\zeta(1)$ is $\gamma$:</p>
<p>$$\lim_{h\to0}\frac{\zeta(1+h)+\zeta(1-h)}2=\gamma$$</p>
<p>Saying $\zeta(1)=\infty$ is wrong because zeta has no limit at that point (except for directional limits).</p>
|
1,748,001 | <p>I need to find a relation between $\sqrt{x+ia}$ and $\sqrt{\sqrt{x^2+a^2}+x}$</p>
<p>where $a>0$ $x\in \mathbb{R}$</p>
<p>Thank you</p>
| Brian | 331,755 | <p>You can treat $\frac{1}{3x^{2/3}}$ as $\frac{1}{3}x^{-2/3}$</p>
<p>Similarly, you can treat $\frac{2}{3x^{5/3}}$ as $\frac{2}{3}x^{-5/3}$</p>
|
1,251,914 | <p>I do not understand how to set up the following problem:</p>
<p>"Forces of 20 lb and 32 lb make an angle of 52 degrees with each other. find the magnitude of the resultant force."</p>
<p>An actual picture would really help.</p>
| ParaH2 | 164,924 | <p>This man can! <a href="https://www.youtube.com/watch?v=M9sbdrPVfOQ" rel="nofollow">https://www.youtube.com/watch?v=M9sbdrPVfOQ</a> :) </p>
<p>But I have never managed it myself; I think one would need a 4D brain to really understand.</p>
<p>You can also have a look at these:</p>
<p>A really interesting conference talk <a href="https://www.youtube.com/watch?v=1wAaI_6b9JE" rel="nofollow">Here</a></p>
<p>And a short video <a href="https://www.youtube.com/watch?v=uDaKzQNlMFw" rel="nofollow">Here</a></p>
|
1,251,914 | <p>I do not understand how to set up the following problem:</p>
<p>"Forces of 20 lb and 32 lb make an angle of 52 degrees with each other. find the magnitude of the resultant force."</p>
<p>An actual picture would really help.</p>
| David K | 139,123 | <p>There have been people who reportedly can visualize things in four dimensions
as easily as other people can in three. It's rare, however.
Moreover, visualizing four dimensions may not help much when you
want to solve a problem in five dimensions or more.
So as Henning Makholm's answer states, to do anything really useful
in higher dimensions you need a mathematical
model in which you can formally work out the answers.</p>
<p>It is a nice mental exercise to try actually to visualize four-dimensional
objects, however, so I recommend not to stop trying.
One way to do this is to try to "construct" well-shaped
four-dimensional objects.
Consider the following pattern (from <a href="http://www.math.union.edu/~dpvc/talks/2000-11-22.funchal/cube-unfolded.html" rel="noreferrer">http://www.math.union.edu/~dpvc/talks/2000-11-22.funchal/cube-unfolded.html</a>) that folds into a three-dimensional cube:</p>
<p><img src="https://i.stack.imgur.com/6z99t.gif" alt="unfolded cube"></p>
<p>The pattern itself fits in two dimensions, in a flat plane, but in order
to assemble the cube you have to make parts of the pattern
"pop out" from that plane so that you can join the edges that
need to be joined.</p>
<p>By analogy, the following (from <a href="http://im-possible.info/english/articles/hypercube/" rel="noreferrer">http://im-possible.info/english/articles/hypercube/</a>) is a three-dimensional pattern from which a four-dimensional cube (known as a hypercube or tesseract) might be constructed:</p>
<p><img src="https://i.stack.imgur.com/nVh72.gif" alt="enter image description here"></p>
<p>In the assembled tesseract, cubes that are joined only at one edge in this
pattern need to be joined at the faces adjacent to that edge. To do this,
you have to make the cubes "pop out" of three-dimensional space.
The fun part is trying to imagine where the cubes can "pop out" to.</p>
<p>In Robert Heinlein's story, "And He Built a Crooked House," someone builds
a house in this shape and it folds up into a real tesseract with people inside.
Some of the details of the story explore how the rooms become connected in the folded tesseract.
There is at least one line of sight in which someone could see themselves
as if they were some distance away.</p>
|
4,581,539 | <p>Consider the task of proving that <span class="math-container">$|z+w|\leq |z|+|w|$</span>, where <span class="math-container">$z$</span> and <span class="math-container">$w$</span> are complex numbers.</p>
<p>We can consider three cases:</p>
<ol>
<li><span class="math-container">$|z|$</span> or <span class="math-container">$|w|$</span> equal to <span class="math-container">$0$</span></li>
<li><span class="math-container">$z=\lambda w$</span>, <span class="math-container">$\lambda \in \mathbb{R}$</span></li>
<li><span class="math-container">$z\neq \lambda w$</span>, <span class="math-container">$\lambda \in \mathbb{R}$</span></li>
</ol>
<p>My question is about case (2) specifically.</p>
<p>There are similar questions <a href="https://math.stackexchange.com/questions/1671369/triangle-inequality-about-complex-numbers-special-case">here</a> and <a href="https://math.stackexchange.com/questions/1968175/if-z-and-w-are-two-complex-numbers-prove-that-zw-zw">here</a> but those solutions are different from the one below, which I am asking about.</p>
<p><strong>My proof of case (2) is</strong></p>
<p><span class="math-container">$$|z+w| = |\lambda w + w|=|(1+\lambda)w|$$</span></p>
<p><span class="math-container">$$=|((1+\lambda)w_1, (1+\lambda)w_2)|$$</span></p>
<p><span class="math-container">$$=\sqrt{(1+\lambda)^2 (w_1^2+w_2^2)}$$</span></p>
<p><span class="math-container">$$=|1+\lambda||w|$$</span></p>
<p><span class="math-container">$$\leq (1+|\lambda|)|w|$$</span></p>
<p><span class="math-container">$$=|w|+|z|$$</span></p>
<p>where I used <span class="math-container">$|z|=\sqrt{\lambda^2(w_1^2+w_2^2)}=|\lambda||w|$</span>.</p>
<p>But then I noticed that in Spivak's <em>Calculus</em> he says to consider separately the cases <span class="math-container">$\lambda>0$</span> and <span class="math-container">$\lambda<0$</span>, and I am not doing this.</p>
<p><strong>My questions then are:</strong></p>
<ul>
<li>is the proof above incorrect?</li>
<li>why do we need to consider the cases separately?</li>
</ul>
| Siong Thye Goh | 306,553 | <p>A sequence of continuous functions need not converge to a continuous function.</p>
<p>We can consider a simpler example:</p>
<p><span class="math-container">$$f_n(x)=x^n, x\in [0,1]$$</span></p>
<p>The limit is not a continuous function.</p>
<p>Fourier series allow us to construct many more such examples: just take a piecewise continuous periodic function that is not continuous; each approximating trigonometric polynomial is certainly continuous, but the limit is not.</p>
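<p>A small numerical illustration (an added Python sketch) of this example: each $f_n(x)=x^n$ is continuous on $[0,1]$, but the pointwise limit jumps at $x=1$.</p>

```python
# f_n(x) = x^n on [0, 1]: every f_n is continuous, but the pointwise limit
# is 0 for 0 <= x < 1 and 1 at x = 1, hence discontinuous.
def f(n, x):
    return x ** n

n = 1000
print(f(n, 0.5) < 1e-100)  # True: tends to 0
print(f(n, 0.99) < 1e-3)   # True: still tends to 0 for any fixed x < 1
print(f(n, 1.0) == 1.0)    # True: the limit at x = 1 is 1
```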
|
4,581,539 | <p>Consider the task of proving that <span class="math-container">$|z+w|\leq |z|+|w|$</span>, where <span class="math-container">$z$</span> and <span class="math-container">$w$</span> are complex numbers.</p>
<p>We can consider three cases:</p>
<ol>
<li><span class="math-container">$|z|$</span> or <span class="math-container">$|w|$</span> equal to <span class="math-container">$0$</span></li>
<li><span class="math-container">$z=\lambda w$</span>, <span class="math-container">$\lambda \in \mathbb{R}$</span></li>
<li><span class="math-container">$z\neq \lambda w$</span>, <span class="math-container">$\lambda \in \mathbb{R}$</span></li>
</ol>
<p>My question is about case (2) specifically.</p>
<p>There are similar questions <a href="https://math.stackexchange.com/questions/1671369/triangle-inequality-about-complex-numbers-special-case">here</a> and <a href="https://math.stackexchange.com/questions/1968175/if-z-and-w-are-two-complex-numbers-prove-that-zw-zw">here</a> but those solutions are different from the one below, which I am asking about.</p>
<p><strong>My proof of case (2) is</strong></p>
<p><span class="math-container">$$|z+w| = |\lambda w + w|=|(1+\lambda)w|$$</span></p>
<p><span class="math-container">$$=|((1+\lambda)w_1, (1+\lambda)w_2)|$$</span></p>
<p><span class="math-container">$$=\sqrt{(1+\lambda)^2 (w_1^2+w_2^2)}$$</span></p>
<p><span class="math-container">$$=|1+\lambda||w|$$</span></p>
<p><span class="math-container">$$\leq (1+|\lambda|)|w|$$</span></p>
<p><span class="math-container">$$=|w|+|z|$$</span></p>
<p>where I used <span class="math-container">$|z|=\sqrt{\lambda^2(w_1^2+w_2^2)}=|\lambda||w|$</span>.</p>
<p>But then I noticed that in Spivak's <em>Calculus</em> he says to consider separately the cases <span class="math-container">$\lambda>0$</span> and <span class="math-container">$\lambda<0$</span>, and I am not doing this.</p>
<p><strong>My questions then are:</strong></p>
<ul>
<li>is the proof above incorrect?</li>
<li>why do we need to consider the cases separately?</li>
</ul>
| Jam | 161,490 | <blockquote>
<p>"Each term in the summation is obviously continuous… Thus, I would expect the infinite sum to be continuous as well."</p>
</blockquote>
<p>Err… why?</p>
<p>Properties that hold for all <em>members</em> of a sequence need not hold for the <em>limit</em> of the sequence (see <a href="https://math.stackexchange.com/questions/757384/which-properties-do-still-hold-for-the-limit-of-a-sequence-of-functions">Q757384</a>). A natural example can be found with the truncated decimal expansions of <span class="math-container">$\sqrt{2}$</span>. All of
<span class="math-container">$$1,\,1.4,\,1.41,\,1.414,\ldots$$</span></p>
<p>are obviously rational. But the same cannot be said for <span class="math-container">$\sqrt{2}$</span>, their limit.</p>
<p>To put it another way, being the limit of a sequence doesn't imply being similar to the sequence in other respects. The only property a limit is actually concerned with is distance or size, such that <span class="math-container">$|x_n-L|$</span> can be made arbitrarily small for sufficiently large <span class="math-container">$n$</span>. Hence, any other property not directly tied to size can be lost in the limit (e.g., integrality or rationality for numbers, continuity or differentiability for functions, etc.).</p>
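<p>The truncation example can be checked directly (an added Python sketch, not part of the original answer): every truncation is an exact rational, and the error drops below $10^{-k}$.</p>

```python
from fractions import Fraction
import math

# Truncations of the decimal expansion of sqrt(2): each one is exactly
# rational, yet their limit sqrt(2) is irrational.
truncations = [Fraction(int(math.sqrt(2) * 10 ** k), 10 ** k) for k in range(7)]
for k, t in enumerate(truncations):
    assert abs(float(t) - math.sqrt(2)) < 10.0 ** (-k)  # error below 10^-k

print(truncations[:4])  # [Fraction(1, 1), Fraction(7, 5), Fraction(141, 100), Fraction(707, 500)]
```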
|
2,559,564 | <blockquote>
<p>A nonempty subfamily $\mathcal{F}$ of $Z(X)$ is called $z$-filter on $X$ provided that</p>
<ol>
<li>$ \emptyset \not \in \mathcal{F}$ </li>
<li>If $z_{1} , z_{2} \in \mathcal{F}$ , then $z_{1} \cap z_{2} \in \mathcal{F}$ </li>
<li>If $ z \in \mathcal{F} , z^{*} \in Z(X) , z^{*} \supset z$ , then $ z^{*} \in \mathcal{F}$</li>
</ol>
<p><strong>The family $Z[ C(X)] =Z(X) =\{ Z(f) : f \in C \} $ is all zero-sets in $X$.
$Z(f) = \{ x \in X : f(x) = 0 \}$</strong></p>
</blockquote>
<p>So, my question is:</p>
<blockquote>
<p>The following are equivalent for a $z$-filter $\mathcal{F}$. </p>
<ol>
<li>$\mathcal{F}$ is prime. </li>
<li>whenever the union of two zero-sets is all of $X$, at least one of them belongs to $\mathcal{F}$. </li>
<li>Given $z_{1},
z_{2} \in Z(X) $, there exists $z \in \mathcal{F} $ such that one of $z
\cap z_{1} , z \cap z_{2}$ contains the other.</li>
</ol>
</blockquote>
| Peter Elias | 392,689 | <p>2 $\Rightarrow$ 3: If $z_1,z_2\in Z(X)$ then we have $f_1,f_2\in C(X)$ such that $Z(f_1)=z_1$ and $Z(f_2)=z_2$. For $x\in X$, let $g_1(x)=\max\{0,|f_1(x)|-|f_2(x)|\}$, $g_2(x)=\max\{0,|f_2(x)|-|f_1(x)|\}$. Then $g_1,g_2\in C(X)$ and $Z(g_1)\cup Z(g_2)=X$, hence 2. implies that there is some $z\in\{Z(g_1),Z(g_2)\}\cap\mathcal{F}$. If $z=Z(g_1)$ then $z\cap z_1=\{x\in X\colon 0=|f_1(x)|\le|f_2(x)|\}$ and $z\cap z_2=\{x\in X\colon|f_1(x)|\le|f_2(x)|=0\}$, hence $z\cap z_2\subseteq z\cap z_1$. Similarly, if $z=Z(g_2)$ then $z\cap z_1\subseteq z\cap z_2$.</p>
<p>3 $\Rightarrow$ 2: If $z_1,z_2\in Z(X)$ and $z_1\cup z_2=X$ then by 3. there is some $z\in\mathcal{F}$ such that (without a loss of generality) $z\cap z_1\subseteq z\cap z_2$. We have $z=z\cap(z_1\cup z_2)=z\cap z_2\subseteq z_2$, hence $z_2\in\mathcal{F}$.</p>
<p>Together with Henno's answer we obtain the equivalence.</p>
|
305,166 | <p>If two undirected graphs are identical except that one has an additional loop at vertex $A$, do they actually have the same complement?</p>
| Mathemagician1234 | 7,012 | <p>Well, technically, by the terminology I know, these are <em>multigraphs</em> and not graphs. In this particular case, I don't think the notion applies, since I think only simple graphs have complements. Think about it: the complement of this multigraph would have loops on the adjacent vertices without edges in the complement, and this is clearly nonisomorphic to the complement of the original simple graph, let alone the same complement graph. I'm not even sure the complement is well defined in the case of pseudographs. </p>
<p>For the case of simple graphs, the answer is clearly no, since a complement graph is defined on the exact same set of vertices, with the edges of the complement graph constructed on the nonadjacent vertices of the original graph. Therefore any 2 graphs with the same complement would have to have the exact same number of vertices and the exact same adjacency relations on those vertices. In other words, the only 2 simple graphs that have the same complement are identical. So although I'm not certain, this reasoning seems to indicate the answer is no. </p>
|
4,475,082 | <p>Problem:</p>
<ul>
<li>Three-of-a-kind poker hand: Three cards have one rank and the remaining two cards have
two other ranks. e.g. {2♥, 2♠, 2♣, 5♣, K♦}</li>
</ul>
<p>Calculate the probability of drawing this kind of poker hand.</p>
<p>My confusion: When choosing the three ranks, the explanation used <span class="math-container">$13 \choose 1$</span> and <span class="math-container">$12 \choose 2$</span>. I used <span class="math-container">$13 \choose 3$</span> instead which ends up being wrong. I do not know why.</p>
| utobi | 220,145 | <p>Here is another way to solve it through unordered samples.</p>
<p>We are looking for hands of the kind <span class="math-container">$x_1$</span>-<span class="math-container">$x_2$</span>-<span class="math-container">$x_3$</span>-<span class="math-container">$y$</span>-<span class="math-container">$z$</span>, where <span class="math-container">$x_1,x_2,x_3$</span> are all of the same face value (although of a different suit), whereas <span class="math-container">$y,z$</span> are different face values.</p>
<p>To work with unordered hands, let's fix the order of the cards as above, i.e. three of a kind are the first three cards followed by two other different kinds.</p>
<p>There are 13 possible face values (2, 3, <span class="math-container">$\ldots$</span>, K, A), and for each face value, there are <span class="math-container">${4\choose 3}$</span> ways to select 3 cards out of 4, disregarding order and without replacement. This fills <span class="math-container">$x_1$</span>-<span class="math-container">$x_2$</span>-<span class="math-container">$x_3$</span>.</p>
<p>For <span class="math-container">$y$</span>, there are 48 possibilities, since three cards of one face value have already been drawn and the remaining card of that value cannot be used. For <span class="math-container">$z$</span>, there are 44 possibilities, since the other three cards sharing the face value chosen for <span class="math-container">$y$</span> cannot be used either.</p>
<p>However, we are not done yet, i.e. <span class="math-container">$13{4\choose 3}48\cdot44$</span> is not quite right, because this number counts poker hands such as 4s-4c-4h-2s-3h and 4s-4c-4h-3h-2s separately, although they are obviously indistinguishable since order doesn't matter. But the last two cards can be ordered in 2! ways. Dividing by <span class="math-container">$2!$</span>, we remove those hands that differ only in the ordering of the last two cards.</p>
<p>The right number of poker hands with a three-of-a-kind is thus</p>
<p><span class="math-container">$$13{4\choose 3}\frac{48\cdot44}{2!},$$</span></p>
<p>and the required probability is</p>
<p><span class="math-container">$$
\frac{13 {4\choose 3}\frac{48\cdot44}{2!}}{{52 \choose 5}}.
$$</span></p>
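<p>The two counting approaches can be cross-checked numerically (an added Python sketch). Using <span class="math-container">$\binom{13}{1}\binom{12}{2}$</span> distinguishes which rank forms the triple; a plain <span class="math-container">$\binom{13}{3}$</span> would not, which is why it undercounts by a factor of 3.</p>

```python
from math import comb

# Rank-first count: pick the triple's rank, its 3 suits, the two side ranks,
# and one suit for each side card.
count_ranks = comb(13, 1) * comb(4, 3) * comb(12, 2) * 4 * 4
# utobi's count: order the two side cards (48 * 44), then divide by 2!.
count_ordered = 13 * comb(4, 3) * 48 * 44 // 2
assert count_ranks == count_ordered == 54912

prob = count_ordered / comb(52, 5)
print(round(prob, 5))  # 0.02113
```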
|
1,190,083 | <p>A positive element x of a C*-algebra A is a self-adjoint element whose spectrum is contained in the non-negative reals. If there's a faithful finite-dimensional representation of A where the involution is conjugate transposition, I think the second condition just means that x can be thought of as a matrix with positive eigenvalues, so it is self-adjoint*. Are there examples of C*-algebras with elements that have non-negative real spectra but that are not self-adjoint? What is the reason for not counting such elements as positive?</p>
<p>*This isn't true, but I'm leaving it in in case other people make the same mistake.</p>
| Qiaochu Yuan | 232 | <p>Sure. For example, any $n \times n$ nilpotent matrix has all eigenvalues zero, so has non-negative real spectrum as an element of $M_n(\mathbb{C})$, but no nonzero nilpotent matrix can be self-adjoint by the spectral theorem. In fact the first reason that occurs to me to not consider these positive is precisely that we don't have the benefit of various nice tools if we drop this condition, such as the spectral theorem and the functional calculus. </p>
<p>The functional calculus implies, for example, that we can treat a positive element $A$ more or less as if it were a positive real number, and in particular we can do things like take its square root $\sqrt{A}$. That's not an operation that makes sense on nilpotent matrices, and in fact not all nilpotent matrices even have square roots. </p>
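<p>A concrete check with a <span class="math-container">$2\times 2$</span> nilpotent matrix (an added NumPy sketch):</p>

```python
import numpy as np

# N is nilpotent (N @ N = 0), so its spectrum is {0}, which lies in the
# non-negative reals, yet N is not self-adjoint.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])

assert np.allclose(N @ N, 0)                 # nilpotent
assert np.allclose(np.linalg.eigvals(N), 0)  # spectrum is {0}
assert not np.allclose(N, N.conj().T)        # not self-adjoint
print("nilpotent, spectrum {0}, not self-adjoint")
```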
|
2,521,331 | <p>I need to show that when $X,Y$ are any metric spaces and
<br>
$f:X \ni x \mapsto a \in Y$ is constant, then $f$ is continuous. </p>
<p>$(X,\tau_{1}),(Y,\tau_{2}) $ - topological spaces : $f: X\to Y$.
I know a definition : $f: X\to Y $ is continuous if $ \forall_{W \in \tau_{2}}\ f^{-1}[W] \in \tau_{1} $ . <br></p>
<p>Maybe let $U$ be open in $Y$; then $\mathrm{id}_X^{-1}(U) = U$<br>
$\mathrm{const}^{-1}(U)= \begin{cases}
X , & a\in U \\
\emptyset , & a \notin U
\end{cases}$
?</p>
| William Elliot | 426,203 | <p>Let $U$ be an open set of $Y$.<br>
Case one. If $a \in U$, what is the inverse image of $U$?<br>
Case two. If $a \notin U$, what is the inverse image of $U$? </p>
<p>Do not guess, but use the definition of the inverse image to calculate the inverse image of a set under a function. Also look up the fact that a function is continuous iff for every open subset of the codomain, the inverse image is open. Enjoy the workout.</p>
|
2,366,610 | <p>Let $U$ be an $n \times n$ unitary matrix and $X$ an $n \times n$ real symmetric matrix. Suppose that $$U^\dagger X U = X$$ for all real symmetric $X$, then what are the allowed unitaries $U$? It seems that the only possible $U$ is some phase multiple of the identity $U=aI$ where $|a|=1$ but I'm not able to show that this is the only allowed unitary.</p>
| G.H.lee | 445,037 | <p>Let $\sin x = y $ , $ \sinh x = y + z$</p>
<p>Then $L = \lim_{x\rightarrow 0}\frac{\sinh^{-1}(y+z) - \sinh^{-1}(y)}{z} = {\sinh^{-1}}' (0) = 1 $</p>
<p>($z \rightarrow 0 $ as $ x \rightarrow 0$)</p>
|
172,058 | <p>I'm wondering whether there is certain relationship between the largest eigenvalue of a positive matrix(every element is positive, not neccesarily positive definite) $A$, $\rho(A)$ and that of $A∘A^T$, $\rho( A∘A^T)$, where $∘$ denotes hadamard product.</p>
<p>Here's a result I find for many numerical cases. I create a matrix of size $n$ whose elements are uniformly drawn from $[0,M]$, as $n$ gets large (>20), $\rho(A)\rightarrow 2M\rho( A∘A^T)$.</p>
<p>I've read some papers on the bound of eigenvalue of $A∘B$, yet none of them mention the special case of $A∘A^T$. I'm wondering whether there's a theory about this and moreover, whether this result could be extended to general linear operators, such as integral operators $T(f(x))=\int k(x,y)f(y)dy$ and $T(f(x))=\int k(x,y)k(y,x)f(y)dy$</p>
<p>Any reference is appreciated. Thanks in advance!</p>
| ofer zeitouni | 35,520 | <p>If I understand correctly the question, the answer is that no reasonable such function exists. Take the matrix that is zero everywhere except that $A_{i,i+1}=1$, $i=1,\ldots,n-1$, and $A_{n,1}=1$. Then $\rho(A)=1$ but
$\rho(A\circ A^T)=0$.</p>
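<p>This counterexample is easy to verify numerically (an added NumPy sketch; <code>A * A.T</code> is the entrywise product):</p>

```python
import numpy as np

# Cyclic shift: ones on the superdiagonal plus the corner entry A[n-1, 0].
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = 1.0
A[n - 1, 0] = 1.0

def rho(M):
    return max(abs(np.linalg.eigvals(M)))

H = A * A.T  # Hadamard product; the supports of A and A^T are disjoint

print(abs(rho(A) - 1.0) < 1e-9)  # True: eigenvalues are the n-th roots of unity
print(rho(H) < 1e-12)            # True: A∘A^T is the zero matrix, so rho = 0
```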
|
2,761,509 | <p>I hope it's not a duplicate, but I've been searching for this problem for some time on this site and I couldn't find anything. My problem is why a number $\in(-1,0)$ raised to the power $n$ tends to $0$ as $n\to\infty$. For example, let's take
$$\lim_{n\to \infty} \left(\frac{-1}{2}\right)^n$$
Which is equivalent to
$$\left(\frac{-1}{2}\right)^\infty=(-1)^\infty\left(\frac{1}{2}\right)^\infty=0(-1)^\infty$$
But if a sequence converges all its subsequences converge to the same limit.
And $(-1)^{2n}$ is a subsequence of $(-1)^n$ that converges to $1$, while $(-1)^{2n + 1}$ is a subsequence of $(-1)^n$ that converges to $-1$. So $(-1)^\infty$ does not exist. It remains that $$\lim_{n\to \infty} \left(\frac{-1}{2}\right)^n=0\cdot(\text{DNE})$$ </p>
| fleablood | 280,126 | <p>If $\lim a_n = L$ exists and $a_n = b_n*c_n$, it does not follow that $\lim b_n$ exists and, indeed, we can <em>ALWAYS</em> find counterexamples. (Ex: $\lim \frac 1{2^n} = 0$ and $\frac 1{2^n} = 2^n\cdot\frac 1{2^{2n}}$, but $\lim 2^n$ is not finite.)</p>
<p>So the fact that $(\frac {-1}2)^n = (-1)^{n}*(\frac {1}2)^n$ while $\lim (-1)^n$ does not exist is utterly irrelevant.</p>
<p>===</p>
<p>What is true is that if $a_n = b_n*c_n$ and $\lim b_n$ and $\lim c_n$ both exist, then $\lim a_n = \lim b_n*\lim c_n$. </p>
<p>And if $a_n = b_n*c_n$ and $\lim a_n$ exists and $\lim c_n$ exists <em>AND</em> $\lim c_n \ne 0$, then $\lim b_n = \frac {\lim a_n}{\lim c_n}$. </p>
<p>Neither of those are the case here.</p>
|
69,902 | <p>I'm VERY new to Mathematica programming (and by new I mean two days), and was solving Project Euler question 14, which states:</p>
<blockquote>
<p>Which starting number, under one million, produces the longest [Collatz] chain?</p>
</blockquote>
<p>Now don't take this question wrong. <strong>I am not asking for a solution, I am simply wondering why my proposed solution is taking so long to produce an answer. It does eventually produce the correct solution to the problem.</strong></p>
<p>My code is below:</p>
<pre><code>collatzLength[x_] := Module[{c, n}, (For[n = x; c = 1, n != 1, c += 1,
If[EvenQ[n], n = n/2, n = 3*n + 1]]); c]
Last@Flatten@(MaximalBy[Transpose@{(collatzLength /@
Range[1000000]), Range[1000000]}, First])
</code></pre>
<p>It seems that the <code>collatzLength /@ Range[1000000]</code> is what is taking so long, so I am wondering how I can improve the collatz function (or any of the code) so that it completes in a reasonable timeframe.</p>
| mgamer | 19,726 | <p>You'll find a lot of Mathematica code on the internet regarding this problem. Your code generates the Collatz sequence for every number without taking into account that there are a lot of duplicate calculations. You can approach it via </p>
<pre><code>collatz[n_] := collatz[n] = If[EvenQ[n], n/2, 3*n + 1]
</code></pre>
<p>to remember the calculations, then... </p>
<pre><code>collatzSequence[n_] := NestWhileList[collatz, n, #1 > 1 &]
</code></pre>
<p>and</p>
<pre><code>Length /@ (collatzSequence /@ Range[2, 1000000]) // Max
</code></pre>
<p>to calculate.</p>
<p>Speed could be improved by compiling the definition of <code>collatz</code></p>
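<p>The same memoization idea in Python (an added sketch, not the Mathematica code above), using <code>functools.lru_cache</code> so shared chain tails are computed only once:</p>

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(10000)

@lru_cache(maxsize=None)
def collatz_length(n):
    # Number of terms in the Collatz chain from n down to 1.
    if n == 1:
        return 1
    return 1 + collatz_length(n // 2 if n % 2 == 0 else 3 * n + 1)

assert collatz_length(27) == 112  # the classic example: a 112-term chain

# Record-setting start below 100000; the same search below one million
# (the question's range) yields the well-known answer 837799.
best = max(range(1, 100000), key=collatz_length)
print(best)  # 77031
```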
|
69,902 | <p>I'm VERY new to Mathematica programming (and by new I mean two days), and was solving Project Euler question 14, which states:</p>
<blockquote>
<p>Which starting number, under one million, produces the longest [Collatz] chain?</p>
</blockquote>
<p>Now don't take this question wrong. <strong>I am not asking for a solution, I am simply wondering why my proposed solution is taking so long to produce an answer. It does eventually produce the correct solution to the problem.</strong></p>
<p>My code is below:</p>
<pre><code>collatzLength[x_] := Module[{c, n}, (For[n = x; c = 1, n != 1, c += 1,
If[EvenQ[n], n = n/2, n = 3*n + 1]]); c]
Last@Flatten@(MaximalBy[Transpose@{(collatzLength /@
Range[1000000]), Range[1000000]}, First])
</code></pre>
<p>It seems that the <code>collatzLength /@ Range[1000000]</code> is what is taking so long, so I am wondering how I can improve the collatz function (or any of the code) so that it completes in a reasonable timeframe.</p>
| DumpsterDoofus | 9,697 | <p>For extra brute force, just <code>Compile</code> it to C code:</p>
<pre><code>collatzLength =
Compile[{{x, _Integer}},
Module[{c,
n}, (For[n = x; c = 1, n != 1, c += 1,
If[EvenQ[n], n = Round[n/2], n = 3*n + 1]]); c],
CompilationTarget -> "C", RuntimeAttributes -> {Listable}]
</code></pre>
<p>It computes the first million lengths in under 2 seconds:</p>
<pre><code>First@AbsoluteTiming@collatzLength[Range[1000000]]
(*1.248002*)
</code></pre>
<p>For more info on what functions are compilable, see <a href="https://mathematica.stackexchange.com/questions/1096/list-of-compilable-functions">this question</a>.</p>
|
8,052 | <p>I wonder how you teachers walk the line between justifying mathematics because of
its many—and sometimes surprising—applications, and justifying it as the study
of one of the great intellectual and creative achievements of humankind?</p>
<p>I have quoted to my students G.H. Hardy's famous line,</p>
<blockquote>
<p>The Theory of Numbers has always been regarded as one of the most obviously useless branches of Pure Mathematics.</p>
</blockquote>
<p>and then contrasted this with the role number theory plays in contemporary
cryptography.
But I feel slightly guilty in doing so, because I believe that even without
the applications to cryptography that Hardy could not foresee—if in fact
number theory were completely "useless"—it
would nevertheless be well-worth studying for anyone.</p>
<p>One provocation is
Andrew Hacker's influential article in
the <em>NYTimes</em>,
<a href="http://www.nytimes.com/2012/07/29/opinion/sunday/is-algebra-necessary.html?_r=0" rel="noreferrer">Is Algebra Necessary?</a>
I believe your cultural education is not complete unless you understand
something of the achievements of mathematics, even if pure and useless.
But this is a difficult argument to make when you are teaching students how
to factor quadratic polynomials, e.g.,
<a href="https://matheducators.stackexchange.com/q/8020/511">The sum - product problem</a>.</p>
<p>So, to repeat, how do you walk this line?</p>
| Gerald Edgar | 127 | <p>Maybe teachers should have some of <a href="http://www.ams.org/samplings/posters/posters" rel="nofollow">these posters</a> (available free from the AMS) hanging around the place.</p>
|
4,281,028 | <p>I have the following problem, which was asked in a <a href="https://math.stackexchange.com/questions/584375/cumulative-distribution-function-word-problem">similar question</a> but it doesn't help me.</p>
<p><strong>A dart is equally likely to land at any point inside a circular target of unit radius.
Let <span class="math-container">$r$</span> and <span class="math-container">$\phi$</span> be the radius and the angle of the point</strong></p>
<p><strong>(a) Find the joint cdf of r and <span class="math-container">$\phi$</span></strong></p>
<p><strong>(b) Find the marginal cdf of R and <span class="math-container">$\phi$</span>.</strong></p>
<p>I have the solution (always for <span class="math-container">$0\leq r \leq 1$</span> and <span class="math-container">$0 \leq\phi \leq 2\pi$</span>), which is <span class="math-container">$$F(R, \phi)=\frac{r^2\phi}{2\pi}$$</span></p>
<p>Since it is equal to the area of the pie slice, <span class="math-container">$\frac{r^2\phi}{2}$</span>, divided by the area of the circle, <span class="math-container">$\pi$</span>. I, however, don't understand this.</p>
<p>The angle is a uniform random variable between <span class="math-container">$0$</span> and <span class="math-container">$2\pi$</span>, so it makes sense for its cdf to be <span class="math-container">$\frac{\phi}{2\pi}$</span>. However, the radius is a uniform between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>, so, as far as I know, its cdf should be just <span class="math-container">$r$</span>, not <span class="math-container">$r^2$</span>. I can understand the area of the pie divided between the total area being the join cdf, but mathematically, there is something I am missing to justify the <span class="math-container">$r^2$</span>. Can someone help me with this?</p>
<p>Also, my solution states that <span class="math-container">$F(r)=r^2$</span>, so the problem is in the second bulletpoint too.</p>
| tommik | 791,458 | <p>Reading your question: the joint distribution is UNIFORM on the unit disk, so the joint pdf is simply the reciprocal of the area,</p>
<p><span class="math-container">$$f_{XY}(x,y)=\frac{1}{\pi}$$</span></p>
<p>In order to find the pdf of the vector <span class="math-container">$(R;\Theta)$</span>, indicating radius and angle respectively, you can simply pass to polar coordinates, obtaining</p>
<p><span class="math-container">$$f_{P \Theta}(\rho;\theta)=\frac{\rho}{\pi}$$</span></p>
<p>now integrating you get your desired joint cdf</p>
<hr />
<p>To find the marginals, integrate out the other variable:</p>
<p><span class="math-container">$$f_{P}(\rho)=2\rho$$</span></p>
<p>with CDF</p>
<p><span class="math-container">$$F_{P}(\rho)=\int_0^\rho 2t dt=\rho^2$$</span></p>
<p>and</p>
<p><span class="math-container">$$f_{\Theta}(\theta)=\int_0^1 \frac{\rho}{\pi}d\rho=\frac{1}{2\pi}$$</span></p>
<p>which is uniform in <span class="math-container">$\theta \in (0;2\pi)$</span></p>
|
2,798,847 | <p>I would like to prove that $\|e^A-e^B\| \leq \|A-B\|e^{max\{\|A\|,\|B\|\}}$, where $A,B \in \mathbb{R}^{n \times n}$.</p>
<p>So far I was able to create the first difference term, but I have no idea how to incorporate the max norm.
I've read <a href="https://math.stackexchange.com/questions/2262000/inequality-norm-of-difference-in-exponential-of-matrices?newreg=15d63d96ac0f4ad08a10eb1a97a1a7d4">this</a> post, where the Fréchet calculus was mentioned, but I'm still stuck.</p>
<p>Any help would be appreciated.
Thank you in advance!</p>
| Áron Fehér | 565,498 | <p>So after some sleepless nights I came up with what I hope is the answer.
First, let's use the Taylor series expansion:
$\|e^A-e^B\| = \|\displaystyle\sum_{k=0}^\infty\frac{A^k-B^k}{k!}\|$</p>
<p>For scalars one would use the factorization $x^k-y^k=(x-y)(x^{k-1}+x^{k-2}y+\cdots+xy^{k-2}+y^{k-1})$, but since $A$ and $B$ need not commute, we instead use the telescoping identity
$A^k-B^k=\displaystyle\sum_{j=0}^{k-1}A^{j}(A-B)B^{k-1-j}$, which holds for arbitrary square matrices.</p>
<p>Applying the triangle inequality $\|x+y\| \leq \|x\|+\|y\|$ and the submultiplicativity of the norm, $\|XY\| \leq \|X\|\|Y\|$, to each summand gives $\|A^k-B^k\| \leq \|A-B\|\left(\|A\|^{k-1}+\|A\|^{k-2}\|B\|+\cdots+\|A\|\|B\|^{k-2}+\|B\|^{k-1}\right)$, so that</p>
<p>$\|e^A-e^B\|\leq \|A-B\|\displaystyle\sum_{k=0}^\infty\frac{\|A\|^{k-1}+\|A\|^{k-2}\|B\|+\cdots+\|A\|\|B\|^{k-2}+\|B\|^{k-1}}{k!}=\|A-B\|S.$
If $\|A\| \geq \|B\|$, then $S\leq\displaystyle\sum_{k=1}^\infty\frac{k\|A\|^{k-1}}{k!}=\displaystyle\sum_{k=1}^\infty\frac{\|A\|^{k-1}}{(k-1)!}=e^{\|A\|}$.
Similarly if $\|A\| \leq \|B\|$ then $S \leq e^{\|B\|}$, from which we can state that $S \leq e^{max\{\|A\|,\|B\|\}}$, for every $A,B\in \mathbb{R}^{n\times n}$.</p>
<p>This proves my initial problem.</p>
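<p>The inequality can also be probed numerically (an added sketch; the matrix exponential is computed here by a truncated Taylor series, which is adequate for the moderate norms involved):</p>

```python
import numpy as np

def expm_series(M, terms=80):
    # Truncated Taylor series of e^M; fine for the norms drawn below.
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

op = lambda M: np.linalg.norm(M, 2)  # spectral norm, which is submultiplicative

rng = np.random.default_rng(0)
for _ in range(200):
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    lhs = op(expm_series(A) - expm_series(B))
    rhs = op(A - B) * np.exp(max(op(A), op(B)))
    assert lhs <= rhs + 1e-8
print("bound holds on 200 random pairs")
```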
|
1,379,188 | <p>The Riemann distance function $d(p,q)$ is usually defined as the infimum of the lengths of all <strong>piecewise</strong> smooth paths between $p$ and $q$.</p>
<p><strong>Does it change if we take the infimum only over smooth paths?</strong>
(Note that if a smooth manifold is connected, <a href="https://math.stackexchange.com/a/134129/104576">then it is smoothly path connected</a>).</p>
<p>I am quite certain the distance does not change. I think that every piecewise smooth path can be approximated by a smooth path.</p>
<p>Around any singular point of the original path, we can take a coordinate ball, and somehow create a smoothing of the relevant segment of the path which is not much longer than the original. </p>
<p>An explicit construction such as this can be found <a href="https://math.stackexchange.com/a/134129/104576">here</a>. However, the point there is only to show smooth path connectivity, and we also need some bound on the "added length". </p>
<p><strong>Partial Result (Reduction to the case of Euclidean metric):</strong></p>
<p>I show that the specific Riemannian metric does not matter. That is, if we can create a smoothing with small elongation measured by one metric $g_1$ then we can do the same for any other metric $g_2$. </p>
<p>Hence it is enough to prove the claim for $\mathbb{R}^n$ with the standard metric. </p>
<p>Proof:</p>
<p>Since the question is local (we focus around some point $p$ of non-smoothness of the original piecewise-smooth path) we can take an orthonormal frame for $g_1$, denoted by $E_i$. write $g_{ij}=g_2(E_i,E_j)$, I want to find $\text{max} \{g_2(v,v)|v\in \mathbb{S}^{n-1}_{g_1}\} = \text{max} \{g_2(v,v)|v=x^iE_i , x=(x^1,...,x^n) \in \mathbb{S}^{n-1}_{Euclidean}\} = \text{max} \{g_{ij}x^ix^j| \sum(x^i)^2=1 \} = \text{max} \{x^T \cdot G \cdot x | \|x\|=1 \} = \text{max}{\lambda(G)}$. </p>
<p>Since <a href="https://math.stackexchange.com/a/63206/104576">the roots of a polynomial are continuous in terms of its coefficients</a>, and the coefficients of the characteristic polynomial of a matrix depend continuously on the matrix entries, it follows that the eigenvalues of a matrix depend continuously on the matrix entries. Hence, since the matrix $g_{ij}(q)$ is a continuous function of $q$, it follows that if we restrict to a small enough compact neighbourhood of $p$, the function $f(q)= \text{max}\,{\lambda(g_{ij}(q))}$ is continuous and in particular bounded by some constant $C$. Hence for any path $\gamma$ which is contained in a small enough neighbourhood of $p$, $L_{g_2}(\gamma) \le \sqrt C L_{g_1}(\gamma)$.</p>
<p>In particular we can take $g_1$ to be the pullback metric of the standard Euclidean metric via some coordinate ball around $p$. Now solving the problem for the Euclidean case (which implies solving it for $g_1$), we obtain a solution for an arbitrary $g_2$ as required.</p>
| mlk | 155,406 | <p>If you already reduced this problem to the $\mathbb{R}^n$ case, then we should be able to tackle it with the usual analytical methods. The following is probably a bit of technical overkill but should work.</p>
<p>As far as I can see, the only problem is to smoothly connect two pieces with an arbitrarily small loss of length. Assume we have two smooth paths
$$p_1: [a,0] \to \mathbb{R}^n $$
and
$$p_2: [0,b] \to \mathbb{R}^n $$
such that $p_1(0)=p_2(0)$. </p>
<p>Now, for an arbitrarily small $\varepsilon > 0$, we want to construct $\tilde{p}:[a,b]\to \mathbb{R}^n$ such that $\tilde{p}(a)=p_1(a)$, $\tilde{p}(b)=p_2(b)$ and $$\mathop{Length}(\tilde{p}) \leq \mathop{Length}(p_1)+ \mathop{Length}(p_2)+\varepsilon$$
For this, fix a small $\delta > 0$ (in the end $\delta \leq \varepsilon/18$ will do) and construct a partition of unity: Let
$$\phi_1(t) := \begin{cases} 1 & \text{ for } t\leq -\delta \\ 0 & \text{ for } t\geq \delta \\ \frac{e^{-1/(\delta-t)}}{ e^{-1/(\delta-t)}+e^{-1/(t+\delta)}} & \text{ for }t \in (-\delta,\delta) \end{cases} $$
and $\phi_2(t) = 1-\phi_1(t)$. Then the $\phi_i(t)$ are smooth and $|\dot{\phi}_i(t)|\leq \frac{4}{\delta}$. (I haven't checked the details here, but such a function definitely exists.)</p>
<p>I think we can safely assume both parts to be parameterised by arc length (that is, $|\dot{p}_i(t)| = 1$ for all $t$), and we can smoothly extend them (for example by their Taylor series) a bit, so that they are defined on $[a,\delta]$ and $[-\delta,b]$.</p>
<p>Now we define
$$\tilde{p}:[a,b]\to \mathbb{R}^n; t \mapsto \phi_1(t) p_1(t)+\phi_2(t) p_2(t)$$
(technically we need to extend the $p_i$ to all of $[a,b]$, but the corresponding $\phi_i$ is $0$ anyway, so this works as a definition)
Then $\tilde{p}$ is smooth since it is built from smooth functions and it is equal to $p_1$ on $[a,-\delta]$ and $p_2$ on $[\delta,b]$, so we need only to care about the middle.</p>
<p>But now for $t\in[-\delta,\delta]$
$$ |\dot{\tilde{p}}(t)| = |\dot{\phi}_1(t)p_1(t)+\dot{\phi}_2(t)p_2(t)+\phi_1(t)\dot{p}_1(t) + \phi_2(t)\dot{p}_2(t)|$$
so via the triangle inequality, and noting that $\dot{\phi}_2= -\dot{\phi}_1$,
$$ \leq \underbrace{|\dot{\phi}_1(t)|}_{\leq 4/\delta} |p_1(t)-p_2(t)| + \phi_1 (t) \underbrace{|\dot{p}_1(t)|}_{=1} + \phi_2(t) \underbrace{|\dot{p}_2(t)|}_{=1}$$
$$ \leq 4/\delta \,|p_1(t)-p_2(t)| + \phi_1(t)+\phi_2(t)$$
thus, since $p_1(0)=p_2(0)$ and $|\dot{p}_i|=1$ implies $|p_1(t)-p_2(t)| \leq 2|t| \leq 2\delta$:
$$\leq 4/\delta \cdot 2\delta + 1 = 9$$</p>
<p>So we only need to integrate:
$$\mathop{Length}(\tilde{p}) = \int_a^b |\dot{\tilde{p}}(t)| dt $$
$$= \int_a^{-\delta} |\dot{\tilde{p}}(t)| dt + \int_\delta^b |\dot{\tilde{p}}(t)| dt + \int_{-\delta}^{\delta} |\dot{\tilde{p}}(t)| dt $$
$$ \leq \mathop{Length}(p_1) + \mathop{Length}(p_2) + 2\delta \cdot 9$$
which is what we wanted. </p>
<p>Of course we are actually not on $\mathbb{R}^n$ but on some open subset, but for a small enough $\delta$ this should pose no problem.</p>
<p><strong>edit:</strong>
Maybe a slight change: if you do not believe in extending $p_1$ and $p_2$, it is also possible to just reparametrize them in such a way that $p_1(\delta) = p_2(-\delta)$, since we only need $p_1(t)$ and $p_2(t)$ to be close for small $t$ and not actually to be equal anywhere.</p>
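<p>A numerical illustration of the construction (a Python sketch; the two unit-speed segments, the particular bump function, and $\delta = 0.05$ are all illustrative choices): glue $p_1(t)=(t,0)$, $t\in[-1,0]$, to $p_2(t)=(0,t)$, $t\in[0,1]$, and check that the glued curve's length exceeds $\mathop{Length}(p_1)+\mathop{Length}(p_2)=2$ by only a small amount of order $\delta$.</p>

```python
import math

DELTA = 0.05  # half-width of the blending window (illustrative choice)

def sigma(s):
    # building block for a smooth step: exp(-1/s) for s > 0, else 0
    return math.exp(-1.0 / s) if s > 1e-12 else 0.0

def phi1(t):
    # smooth partition of unity: 1 for t <= -DELTA, 0 for t >= DELTA
    num = sigma(DELTA - t)
    return num / (num + sigma(t + DELTA))

def p_tilde(t):
    # blend p1(t) = (t, 0) and p2(t) = (0, t), both unit speed, meeting at 0
    a = phi1(t)
    return (a * t, (1.0 - a) * t)

def polyline_length(f, t0, t1, n=20000):
    # approximate arc length by a fine polygonal approximation
    pts = [f(t0 + (t1 - t0) * k / n) for k in range(n + 1)]
    return sum(math.dist(pts[k], pts[k + 1]) for k in range(n))

L = polyline_length(p_tilde, -1.0, 1.0)
# Length(p1) + Length(p2) = 2; the glued curve is only O(DELTA) longer
```

<p>With $\delta=0.05$ the measured length comes out just above $2$, comfortably within the $2\delta\cdot 9$ overshoot that the estimate allows.</p>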
|
1,257,193 | <p>Let $f:[0,\infty)\to\mathbb{R}$ continuous and $\lim\limits_{x\to\infty}f(x)=a$. Claim: $\lim\limits_{x\to\infty}\frac{1}{x}\int_0^xf(t)dt=a$.</p>
<p>My try: It is $\int_0^xf(t)dt=F(x)-F(0)$ (by the fundamental theorem of calculus), and $F(x)\approx ax+b$ for large $x$, because $\lim\limits_{x\to\infty}f(x)=a$.
What can I do next? Or do you have an idea how to prove it? Regards</p>
| Tom-Tom | 116,182 | <p>Let us go back to the rigorous definition of the limit and it works straightforwardly.
For any $\epsilon>0$, there is an $X>0$ such that $\forall x>X$, $|f(x)-a|<\epsilon/2$. For such an $x$, let us compute
$$\begin{split}
\left\lvert-a+\frac1x\int_0^xf(t)\mathrm dt\right\rvert&=\frac1x\left\lvert-aX+\int_0^Xf(t)\mathrm dt+\int_X^x(f(t)-a)\mathrm dt\right\rvert\\&\leq
\frac bx+\frac1x\int_X^x|f(t)-a|\mathrm dt\quad\left(\text{with}\;b=\left|\int_0^Xf(t)\mathrm dt-aX\right|\right)\\
&\leq\frac bx+\frac{x-X}x\frac\epsilon2\\&\leq\frac{b}x+\frac\epsilon2.\end{split}$$
For all $x>2\frac b\epsilon$ we have $\frac{b}x<\frac\epsilon2$. Therefore, for all $x>\max\{X,2\frac b\epsilon\}$, $$\left\lvert\frac1x\int_0^xf(t)\mathrm dt-a\right\rvert<\epsilon.$$</p>
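<p>A numerical illustration of the statement (a Python sketch; the test function $f(t)=2+\frac{1}{1+t}$, which has limit $a=2$, is an arbitrary choice): the running average $\frac1x\int_0^x f(t)\,dt$ visibly creeps toward $a$.</p>

```python
import math

def f(t):
    # sample function with limit a = 2 at infinity (illustrative choice)
    return 2.0 + 1.0 / (1.0 + t)

def average(x, n=100_000):
    # (1/x) * integral_0^x f(t) dt via the midpoint rule
    h = x / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h / x

# the running average approaches a = 2 as x grows:
vals = [average(x) for x in (10.0, 100.0, 1000.0)]
```

<p>Here the average has the closed form $2+\frac{\ln(1+x)}{x}$, so the decay toward $2$ can be checked exactly.</p>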
|
2,737,869 | <p>Determine the value of real parameter $p$ </p>
<p>in such a way that the equation</p>
<p>$$\sqrt{x^2+2p} = p+x $$ </p>
<p>has just one real solution</p>
<p>a. $p \ne 0$</p>
<p>b. There is no such value of parameter $p$</p>
<p>c. None of the remaining possibilities is correct.</p>
<p>d. $p\in [−2,\infty)$</p>
<p>e. $p\in [−2,0)\cup(0,\infty)$</p>
<p>I thought the answer is A but it isn't. help me!</p>
| Angina Seng | 436,618 | <p>By the Heine-Borel theorem, if an infinite sequence of open intervals
covers the compact set $[0,1]$, then some finite subcollection does.
It is not hard to prove that if intervals $I_1,\ldots,I_n$
cover $[0,1]$ then the sum of their lengths is at least $1$.</p>
|
258,205 | <p>I want to know if $\displaystyle{\int_{0}^{+\infty}\frac{e^{-x} - e^{-2x}}{x}dx}$ is finite, or in the other words, if the function $\displaystyle{\frac{e^{-x} - e^{-2x}}{x}}$ is integrable in the neighborhood of zero.</p>
| Ron Gordon | 53,268 | <p>In general, when $f$ is "well-behaved" at zero and infinity:</p>
<p>$$\int_0^{\infty} dx \frac{f(a x) - f(b x)}{x} = (f(\infty)-f(0)) \log{\frac{a}{b}}$$</p>
<p>You can see this from this (rough) "proof":</p>
<p>$$\begin{align}\int_0^{\infty} dx \frac{f(a x) - f(b x)}{x} &= \int_0^{\infty} dx \: \int_b^a du \, \frac{d}{du} f(u x) \\ &= \int_b^a du \: \int_0^{\infty} dx \, \frac{d}{dx} f(u x)\\ &= \int_b^a \frac{du}{u} (f(\infty)-f(0)) \end{align}$$</p>
<p>The result follows. In this case, $f(x) = e^{-x}$, $a=1$, and $b=2$; the integral is then</p>
<p>$$(0-1)\log{\frac{1}{2}} = \log{2}$$</p>
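<p>A numeric spot-check of this value (a Python sketch; the truncation of the integration range to $[10^{-9}, 40]$ is an arbitrary choice that keeps the error negligible, since the integrand extends continuously to $0$ with value $1$ and decays like $e^{-x}$):</p>

```python
import math

def integrand(x):
    # (e^{-x} - e^{-2x}) / x extends continuously to x = 0 with value 1
    return (math.exp(-x) - math.exp(-2 * x)) / x

def simpson(f, a, b, n):
    # composite Simpson's rule on n subintervals (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k + 1) * h) for k in range(n // 2))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

val = simpson(integrand, 1e-9, 40.0, 200_000)
# should be close to log 2 = 0.693147...
```
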
|
3,576,026 | <p>I was solving a problem and got down to this:
<span class="math-container">$$\lim_{n \to \infty} \arctan\left(\frac{-\sum_{k=0}^n\frac{1}{1+k^2}}{\sum_{k=0}^n \frac{k}{1+k^2}}\right)$$</span>
After this, I said that, since the series in the denominator diverges and the one in the numerator converges, the result is <span class="math-container">$0$</span>. But the person who gave the question asked me why I am allowed to swap the limit and the summation.<br>
I think he meant taking the limit inside the function and then distributing it over both the numerator and denominator, but I am not sure; please confirm.<br>
In the case he meant what I have understood, though, I don't really know the answer. Can someone hint me elements of it please? (my knowledge base is Calc1 and what I have accumulated thus far of Calc 2 material)</p>
<p>Thank you very much!</p>
| Community | -1 | <p>You can swap the limit and the function because the arc tangent function is <em>continuous</em>. Then you have the limit of a ratio such that the numerator is bounded, while the denominator diverges to infinity.</p>
<p>Hence your limit is <span class="math-container">$$\arctan(0)=0.$$</span></p>
<hr>
<p>You might as well use that <span class="math-container">$|\arctan x|\le|x|$</span> and squeeze.</p>
|
1,572,954 | <p>What is the only ordered pair of numbers $(x,y)$ which, for all $a$ and $b$, satisfies </p>
<p>$$x^a y^b=\left(\frac34\right)^{a-b} \text{and } x^b y^a=\left(\frac34\right)^{b-a}$$</p>
<p>I started off with the trivial cases, $a=0$ and $b=0$ and you get $1=1$ on both sides, so that works.</p>
<p>I can't seem to find any more cases. Any ideas?</p>
| Brian Tung | 224,454 | <p>By inspection,</p>
<p>$$
x = x^1y^0 = \left(\frac{3}{4}\right)^{1-0} = \frac{3}{4}
$$</p>
<p>and</p>
<p>$$
y = x^0y^1 = \left(\frac{3}{4}\right)^{0-1} = \frac{4}{3}
$$</p>
<p>Note that this only suffices to identify $x$ and $y$ provided the condition stated actually holds; it does not prove that this $x$ and $y$ work for all $a$ and $b$, so if you need to demonstrate that, there is more work to be done (though it is straightforward).</p>
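<p>The "more work" is a quick brute-force check over a grid of exponents (a Python sketch; the range of integers tested is an arbitrary choice):</p>

```python
# check x^a y^b = (3/4)^(a-b) and x^b y^a = (3/4)^(b-a) over a grid
x, y = 3 / 4, 4 / 3
ok = all(
    abs(x ** a * y ** b - (3 / 4) ** (a - b)) < 1e-12
    and abs(x ** b * y ** a - (3 / 4) ** (b - a)) < 1e-12
    for a in range(-3, 4)
    for b in range(-3, 4)
)
```

<p>Of course the real reason it works for all $a$ and $b$ is that $y = 1/x$, so $x^ay^b = x^{a-b} = (3/4)^{a-b}$ identically.</p>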
|
7,871 | <p>I'm trying to make a demonstration of how rounding to different numbers of digits affects things but I can't find a way to round numbers to a specified number of digits. </p>
<p>The <code>Round</code> function only rounds to the nearest integer, and that is not always what I want. Other ways seem to only change the way the numbers are displayed, not how they are internally stored. </p>
<p>I want to throw away precision, but it seems Mathematica doesn't want to allow me to do this. As an example: I would like to round 3.4647 to just 3.5 or 3.46. </p>
<p>There must be some way to do this, but I can't for the life of me find it.</p>
| Artes | 184 | <pre><code>round1[x_, n_] := Ceiling[10^n x]/10^n // N
round2[x_, n_] := Floor[10^n x]/10^n // N
round1[3.4647, 1]
round2[3.4647, 2]
</code></pre>
<blockquote>
<pre><code>3.5
3.46
</code></pre>
</blockquote>
|
7,871 | <p>I'm trying to make a demonstration of how rounding to different numbers of digits affects things but I can't find a way to round numbers to a specified number of digits. </p>
<p>The <code>Round</code> function only rounds to the nearest integer, and that is not always what I want. Other ways seem to only change the way the numbers are displayed, not how they are internally stored. </p>
<p>I want to throw away precision, but it seems Mathematica doesn't want to allow me to do this. As an example: I would like to round 3.4647 to just 3.5 or 3.46. </p>
<p>There must be some way to do this, but I can't for the life of me find it.</p>
| hlren | 54,486 | <p>Another solution is</p>
<pre><code>Round1[x_, n_] := With[{m = Round[Log10[Abs[x]]]}, Round[x 10^(n - m)] 10.^(m - n)];
(*m estimates the scale of x, n sets the number of digits kept; Abs allows negative numbers.*)
Round1[-3.46473*10^-15, 4] // InputForm
(*-3.465*^-15*)
</code></pre>
|
4,473,632 | <p>Let <span class="math-container">$f:\mathbb R^+\to\mathbb R$</span> be a continuous function satisfying <span class="math-container">$f(a)+f(b)\ge f(2\sqrt{ab})$</span> for all <span class="math-container">$a,b>0$</span>; is <span class="math-container">$f$</span> differentiable?</p>
<p>Moreover, if for all <span class="math-container">$a_1,a_2,\cdots,a_n>0$</span> there holds <span class="math-container">$$\sum_if(a_i)\ge f\left(n\sqrt[n]{\prod_ia_i}\right)\\$$</span> is <span class="math-container">$f$</span> differentiable?</p>
| Chris Sanders | 309,566 | <p>Easy example: find <span class="math-container">$f$</span> such that <span class="math-container">$2\leq f(x)\leq 3$</span> for all <span class="math-container">$x\in\mathbb{R}$</span> and <span class="math-container">$f$</span> is everywhere continuous but nowhere differentiable; then <span class="math-container">$f(a)+f(b)\geq 4 > 3 \geq f(2\sqrt{ab})$</span> holds automatically.</p>
<p>You may refer to <a href="https://en.wikipedia.org/wiki/Weierstrass_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Weierstrass_function</a></p>
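<p>A concrete sketch of such an $f$ (Python; the parameters $a=1/2$, $b=7$ and the 30-term truncation are illustrative choices — the true $f$ is the infinite sum, which Hardy's criterion $ab\ge 1$ makes nowhere differentiable, while the code can only evaluate a partial sum): take $f(x) = \frac52 + \frac14\sum_{n\ge 0} 2^{-n}\cos(7^n\pi x)$, so that $2\le f(x)\le 3$ everywhere.</p>

```python
import math

def f(x, terms=30):
    # truncated Weierstrass-type sum; the full series (terms -> infinity)
    # is continuous everywhere and nowhere differentiable
    w = sum(0.5 ** n * math.cos(7 ** n * math.pi * x) for n in range(terms))
    return 2.5 + w / 4.0  # |w| <= 2, so f stays inside [2, 3]

samples = [f(k / 997.0) for k in range(2000)]
```
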
|
2,653,829 | <blockquote>
<p>How can I show $(x^2+1, y^2+1)$ is not maximal in $\mathbb R[x,y]$?</p>
</blockquote>
<p>I know I can mod out the ideal one piece at a time and show $\mathbb C[x]/(x^2+1)$ is not a field since $(x^2+1)$ is not maximal in $\mathbb C[x]$, <strong>but is there another way of showing this?</strong></p>
| Angina Seng | 436,618 | <p>The zeros of these polynomials are the complex points $(i,i)$, $(i,-i)$,
$(-i,i)$ and $(-i,-i)$. These fall into two orbits under the action
of complex conjugation, viz., $(i,i)$ and $(-i,-i)$ which also satisfy
$x=y$, and $ (i,-i)$ and $(-i,i)$ which also satisfy
$x=-y$. It follows that say, $\left<x^2+1,y^2+1,x-y\right>$ is a proper
ideal containing $\left<x^2+1,y^2+1\right>$ as it still has zeroes
over the complex numbers (e.g., $(i,i)$) but $\left<x^2+1,y^2+1\right>$
has a zero (e.g., $(i,-i)$) which is not a zero of
$\left<x^2+1,y^2+1,x-y\right>$.</p>
|
141,655 | <blockquote>
<p>What is the chance that at least two people were born on the same day
of the week if there are 3 people in the room?</p>
</blockquote>
<p>I'm wondering if my solution is accurate, as my answer was different than the solution I found:</p>
<p>Probability that there are at least 2 people in the room born on the same day = 1 - (No one was born on the same day) - (Exactly one person was born on the same day)</p>
<p>There are $\binom{3}{2}$ different pairs of people. Each pair shares a birthday with probability $1/7$ and has different birthdays with probability $6/7$. Thus:</p>
<p>$$1 – (6/7)^3 – 3(1/7)(6/7)^2 = 0.0553$$</p>
<p>Thanks for any help!</p>
| Brian M. Scott | 12,042 | <p>Label the three people $A,B$, and $C$. Suppose that no two were born on the same day of the week. $A$ can be born on any day of the week. The probability that $B$ was born on a different day is $\frac67$. (We are of course assuming that the seven days are equally likely, though I believe that in fact this isn't the case.) Given that $A$ and $B$ were born on different days of the week, the probability that $C$ was born on one of the remaining $5$ days of the week is $\frac57$. Thus, the probability that they were born on three different days of the week is $\frac67\cdot\frac57=\frac{30}{49}$, and the probability that at least two of them were born on the same day of the week is $1-\frac{30}{49}=\frac{19}{49}$.</p>
<p>You can check this as follows. There are $7^3=343$ possible ways of assigning days of the week to $A,B$, and $C$. $7\cdot6\cdot5=210$ of these result in $A,B$, and $C$ being assigned different days of the week, so the remaining $343-210=133$ assignments give at least two of the three people the same day of the week. Since all assignments are (assumed to be) equally likely, the probability of getting one of those is $\frac{133}{343}=\frac{19}{49}$.</p>
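<p>The same check can be automated by enumerating all $7^3$ equally likely assignments (a Python sketch):</p>

```python
from fractions import Fraction
from itertools import product

# enumerate all 7^3 equally likely assignments of weekdays to A, B, C
total = 0
collisions = 0
for days in product(range(7), repeat=3):
    total += 1
    if len(set(days)) < 3:  # at least two people share a weekday
        collisions += 1

prob = Fraction(collisions, total)  # reduces 133/343 to 19/49
```
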
|
3,168,662 | <p>How do you evaluate <span class="math-container">$\int_{|z|=1} \frac{\sin(z)}{z^2+(3-i)z-3i}dz$</span> ? </p>
<p>Here is my thought process: </p>
<p>I want to use <a href="http://mathworld.wolfram.com/CauchyIntegralFormula.html" rel="nofollow noreferrer">Cauchy's Integral Formula</a>, but in order to use it I need to find the poles and make sure one of them is in the interior of the unit circle (the contour we are supposed to integrate over).</p>
<p>First, I need to set the denominator in the integral to <span class="math-container">$0$</span> and solve the <a href="http://mathworld.wolfram.com/QuadraticEquation.html" rel="nofollow noreferrer">quadratic equation</a>: <span class="math-container">$z^2+(3-i)z-3i=0$</span></p>
<p>Using Wolfram, here: <a href="https://www.wolframalpha.com/input/?i=solve+z%5E2+%2B+(3-i)z+-3i+%3D+0" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=solve+z%5E2+%2B+(3-i)z+-3i+%3D+0</a></p>
<p>I get <span class="math-container">$z=-3$</span> and <span class="math-container">$z=i$</span></p>
<p>A requirement for Cauchy's Integral Formula is one of these poles must be in the interior of the curve we are integrating over (in this case the unit circle). However, <span class="math-container">$z=-3$</span> is outside the unit circle and <span class="math-container">$z=i$</span> is on the unit circle itself, not in its interior. So I cannot use Cauchy's Integral Formula. </p>
<p>What other options do I have?</p>
| jmerry | 619,637 | <p>Since <span class="math-container">$\sin(i)\neq 0$</span>, we're integrating across a pole. That is, we're integrating something comparable to <span class="math-container">$\frac{c}{z-i}$</span> on a path that approaches <span class="math-container">$i$</span> on two sides. The integral diverges, by comparison to <span class="math-container">$\int_0^1 \frac1x\,dx$</span>.</p>
<p>Now, there is still a way to assign a value. If we cut out a symmetric piece of the curve around <span class="math-container">$i$</span>, and let that cut tend to zero, we are left with a <a href="https://en.wikipedia.org/wiki/Cauchy_principal_value" rel="nofollow noreferrer">principal value</a> integral. As can be seen by closing the curve with a small semicircle on either side, this principal value contributes to the integral as if the residue there were halved.</p>
|
2,838,037 | <p>For the set $A=\{0\} \cup \{\frac 1n \mid n \in \mathbb N\}$, I understand that $\{\frac 1n \mid n \in \mathbb N\}$ is open and closed in $A$ because it is a union of all the connected components $\{\frac 1n\}$ in $A$ for all $n \in \mathbb N$. Even though $\{0\}$ is also a connected component of $A$, why is $\{0\}$ closed but not open? I thought $\{0\}$ is closed and open in $A$ as well just like each $\{\frac 1n\}$.</p>
| Henno Brandsma | 4,280 | <p>Let $X$ be an uncountable set with the cocountable topology. As all convergent sequences in $X$ are eventually constant and then have that constant as its limit, all subsets are sequentially closed. But only $X$ and at most countable subsets are closed. </p>
|
1,746,363 | <p>I have what is maybe an easy problem. I am not sure whether it is true that [$\mathbb Z_2[x]/f\mathbb Z_2[x]: \mathbb Z_2$]=deg $f$, where $f \in \mathbb Z_2[x]$ is irreducible. Can anybody help me? Thanks</p>
| Sharky | 332,069 | <p><strong>Trig by Reference Triangles</strong>:</p>
<p>The angle $\frac{5 \pi}{6}$ radians equals $150^o$. This simple conversion can be done by remembering that $180^o = \pi$ $rad $.</p>
<p>To solve for</p>
<p>$h'(x)=12 sin^2x cosx$
$h'( \frac{ 5 \pi}{6} )=12 sin^2 ( \frac{5 \pi}{6}) cos( \frac{5 \pi}{6} )$</p>
<p>we can use right-angled reference triangles for the standard angles. E.g. here our reference triangle is the $(90^o, 60^o, 30^o)$ triangle in the second quadrant of the cartesian plane. However, we are more interested in the radian form of the triangle ($ \frac{\pi}{2}, \frac{ \pi}{3}, \frac{\pi}{6}) $. Its side lengths are $(2, \sqrt{3}, 1)$, and since the triangle lies in the second quadrant, the $x$-coordinate (and hence the cosine) is negative. </p>
<p>$sin (\frac{5 \pi}{6} ) = \frac{1}{2}$</p>
<p>and, </p>
<p>$cos (\frac{5 \pi}{6} ) = - \frac{ \sqrt {3}}{2},$</p>
<p>so $h'( \frac{5 \pi}{6} ) = 12 \cdot \frac{1}{4} \cdot \left( - \frac{ \sqrt {3}}{2} \right) = - \frac{3 \sqrt{3}}{2}$.</p>
<p>(See link:<a href="http://www.regentsprep.org/regents/math/algtrig/ATT3/referenceTriangles.htm" rel="nofollow">Reference Triangles</a>)</p>
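<p>A quick numeric cross-check (a Python sketch), using the standard values $\sin(\frac{5\pi}{6})=\frac12$ and $\cos(\frac{5\pi}{6})=-\frac{\sqrt3}{2}$, which give $h'(\frac{5\pi}{6})=12\cdot\frac14\cdot(-\frac{\sqrt3}{2})=-\frac{3\sqrt3}{2}$:</p>

```python
import math

t = 5 * math.pi / 6
hprime = 12 * math.sin(t) ** 2 * math.cos(t)  # h'(t) = 12 sin^2(t) cos(t)
exact = -3 * math.sqrt(3) / 2
```
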
|
3,087,570 | <p>The "school identities with derivatives", like
<span class="math-container">$$
(x^2)'=2x
$$</span>
are not identities in the normal sense, since they do not admit substitutions. For example, if we substitute <span class="math-container">$1$</span> for <span class="math-container">$x$</span> in the identity above, the resulting equality will not be true:
<span class="math-container">$$
(1^2)'=2\cdot 1.
$$</span>
That is why when explaining this to my students I present the derivative in the left side as a formal operation with strings of symbols (and interpret the identity as the equality of strings of symbols). </p>
<p>This however takes a lot of supplementary discussion, and the proofs look very bulky; I have no feeling that this is a good way to explain the matter. In addition, people's reactions to <a href="https://math.stackexchange.com/questions/1501585/calculus-as-a-structure-in-the-sense-of-model-theory">this question of mine</a> make me think that there are no texts to which I could refer when I take this point of view.</p>
<p>I want to ask people who teach mathematics how they bypass this difficulty. Are there tricks for introducing rigor into the "elementary identities with derivatives" (and similarly with integrals)?</p>
<p>EDIT. It seems to me I have to explain in more detail my own understanding of how this can be bypassed. I don't follow this idea accurately, in detail, but my "naive explanations" are the following. I describe Calculus as a first-order language with a list of variables (<span class="math-container">$x$</span>, <span class="math-container">$y$</span>,...) and a list of functional symbols (<span class="math-container">$+$</span>, <span class="math-container">$-$</span>, <span class="math-container">$\sin$</span>, <span class="math-container">$\cos$</span>, ...), and the functions which are not defined everywhere, like <span class="math-container">$x^y$</span>, are interpreted as relation symbols (of course this requires a lot of preparation and discussion; that is why I usually omit these details, and that is why I don't like this way). After that the derivative is introduced as a formal operation on <a href="https://en.wikipedia.org/wiki/First-order_logic#Terms" rel="nofollow noreferrer">terms</a> (expressions) of this language, and finally I prove that this operation coincides with the usual derivative on "elementary functions" (i.e. on the functions which are defined by terms of this language). </p>
<p>Derek Elkins suggests a simpler way, namely, to declare <span class="math-container">$x$</span> a notation of the function <span class="math-container">$t\mapsto t$</span>. Are there texts where this is done consistently? (I mean, with examples, exercises, discussions of corollaries...)</p>
<p>@Rebellos, your identity
<span class="math-container">$$
\frac{d}{dx}(x^2)\Big|_{x=1}=2\cdot 1
$$</span>
becomes true either if you understand the derivative as I describe, i.e. as an operation on expressions (i.e. on terms of the first order language), since in this case it becomes a corollary of the equality
<span class="math-container">$$
\frac{d}{dx}(x^2)=2\cdot x,
$$</span>
or if by substitution you mean something special, not what people usually mean, i.e. not the result of replacing <span class="math-container">$x$</span> by <span class="math-container">$1$</span> everywhere in the expression (and in this case you should explain this manipulation, because I don't understand it). Anyway, note that your point is not what Derek Elkins suggests, since for him <span class="math-container">$x$</span> denotes the function <span class="math-container">$t\mapsto t$</span>, which can't be substituted by <span class="math-container">$1$</span>. </p>
| ncmathsadist | 4,154 | <p>The domain of the derivative is functions, not numbers. That's what's at the bottom of this.</p>
|
3,787,167 | <p>Let <span class="math-container">$\{a_{jk}\}$</span> be an infinite matrix such that corresponding mapping <span class="math-container">$$A:(x_i) \mapsto (\sum_{j=1}^\infty a_{ij}x_j)$$</span> is well defined linear operator <span class="math-container">$A:l^2\to l^2$</span>.
I need help with showing that this operator will be bounded. I guess it means that i need to check if a unit sphere maps to something bounded, so i need to manage to get some inequality on coefficients of matrix that will allow to write a straight sequence of inequalities and get desired bound. But I don't understand how to get bound from operator being well defined.</p>
| Bart Michels | 43,288 | <p>This follows from the fact that the pointwise limit of bounded operators is bounded, which follows from the uniform boundedness principle:</p>
<p><a href="https://math.stackexchange.com/questions/2542884">If a sequence of bounded operator converges pointwise, then it is bounded in norm</a></p>
|
888,101 | <p>Suppose I am asked to show that some topological space is not metrizable. What exactly do I have to prove?</p>
| Alexander Golys | 789,572 | <p>Another possible way is to find homeomorphic embedding of known not-metrizable space. This is for instance a nice way of proving not-metrazibility of ordered square (by embedding of the real line with lower-limit topology)</p>
|