4,116,252
<p>I'm trying to prove (or disprove) the following:</p> <p><span class="math-container">$$ \sum_{i=1}^{N} \sum_{j=1}^{N} c_i c_j K_{ij} \geq 0$$</span> where <span class="math-container">$c \in \mathbb{R}^N$</span>, and <span class="math-container">$K_{ij}$</span> is referring to a <a href="https://en.wikipedia.org/wiki/Kernel_method" rel="noreferrer">kernel matrix</a>:</p> <p><span class="math-container">$$K_{ij} = K(x_i,x_j) = \frac{\sum_{k=1}^{N} \min(x_{ik}, x_{jk})}{\sum_{k=1}^{N} \max(x_{ik}, x_{jk})}$$</span> Here, <span class="math-container">$x \in \mathbb{R}^N \geq 0$</span>.</p> <p>I'm basically trying to prove that <span class="math-container">$K_{ij}$</span> is a positive definite matrix, so I can use it as a Kernel, but I'm really stuck trying to work with <span class="math-container">$\max$</span>.</p> <p>Edit: the function I'm referring to is:</p> <p><span class="math-container">$$K(u,v) = \frac{\sum_{k=1}^{N} \min(u_{k}, v_{k})}{\sum_{k=1}^{N} \max(u_{k}, v_{k})}$$</span> where <span class="math-container">$u, v \in \mathbb{R}^N \geq 0$</span></p>
g g
249,524
<p><strong>Warning: Only (very) partial answer!</strong></p> <p>For <span class="math-container">$N=1$</span> and <span class="math-container">$u,v&gt;0$</span> the function <span class="math-container">$K$</span> is indeed positive definite in the sense normally used for a Kernel function (see <a href="https://en.wikipedia.org/wiki/Positive-definite_kernel#Definition" rel="nofollow noreferrer">here</a>). This standard definition is different from the OP's since it requires the Kernel matrix to be positive for any set of <span class="math-container">$n$</span> points (instead of only for <span class="math-container">$N$</span> points as in the OP's question). This makes the general case already non-trivial for <span class="math-container">$N=1$</span>, while it is of course obvious for <span class="math-container">$n=N=1$</span>.</p> <p>Observe that <span class="math-container">$$K(u,v) = \frac{\min(u,v)}{\max(u,v)}= \begin{cases} \frac{u}{v}\text{ if }u\leq v\\ \frac{v}{u}\text{ if }u\geq v. \end{cases}$$</span> Now set <span class="math-container">$u=\exp x$</span> and <span class="math-container">$v = \exp y$</span> and write <span class="math-container">$$ K(u,v)=K(\exp x, \exp y) = \exp \left(-\lvert x-y \rvert \right). $$</span> It is well known that <span class="math-container">$\exp \left(-\lvert x-y \rvert \right)$</span> is a positive definite Kernel. 
This kernel is known as the <a href="https://en.wikipedia.org/wiki/Reproducing_kernel_Hilbert_space#Common_examples" rel="nofollow noreferrer">Laplacian</a> or <a href="https://en.wikipedia.org/wiki/Mat%C3%A9rn_covariance_function" rel="nofollow noreferrer">Matérn-<span class="math-container">$\frac{1}{2}$</span></a> kernel.</p> <p>With respect to the domain of definition: Note that you can extend this Kernel continuously to <span class="math-container">$\mathbb{R}^{\geq 0}\times \mathbb{R}^{\geq 0} \setminus \{(0,0)\}$</span> by setting <span class="math-container">$K(0,u)=0$</span> for <span class="math-container">$u\neq 0$</span>. But this fails at <span class="math-container">$(0,0)$</span> because <span class="math-container">$K(u,u)=1$</span> for <span class="math-container">$u\neq 0.$</span></p>
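As a quick numerical sanity check of the <span class="math-container">$N=1$</span> claim (my addition, not part of the answer): draw random positive scalars, form the Gram matrix <span class="math-container">$K_{ij}=\min(u_i,u_j)/\max(u_i,u_j)$</span>, and confirm its eigenvalues are nonnegative, assuming numpy is available.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(0.1, 10.0, size=50)        # 50 random positive scalars

# Gram matrix K_ij = min(u_i, u_j) / max(u_i, u_j) = exp(-|log u_i - log u_j|)
K = np.minimum.outer(u, u) / np.maximum.outer(u, u)

eigvals = np.linalg.eigvalsh(K)            # K is symmetric
print(eigvals.min())                       # nonnegative up to rounding error
```

Of course this only tests positive semidefiniteness on one sample, but it is consistent with the Laplacian-kernel argument above.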
279,064
<p>I find this question interesting, but need to get it out of my system: is the space of connections (modulo gauge) on a compact four-manifold paracompact, in the Sobolev topology?</p> <p>If so, I believe it would admit partitions of unity, which would surely make life easier in gauge theory. But I haven't seen the experts make use of such a fact. I have also heard that spaces of curves (as used in symplectic geometry) do not always admit partitions of unity.</p> <p>The question occurred to me while reading about the first of the "five gaps" described by the late Abbas Bahri: <a href="http://sites.math.rutgers.edu/~abahri/papers/five%20gaps.pdf" rel="noreferrer">http://sites.math.rutgers.edu/~abahri/papers/five%20gaps.pdf</a></p> <p>As Bahri himself points out, this so-called "gap" has been filled in several different ways (via the Freed-Uhlenbeck theorem or via holonomy perturbations; that is, if the original approach is indeed flawed). Still, I think it would be useful for the younger generation to understand the core of the difficulty. I had wondered if it can be summed up in a negative answer to this question.</p>
Vít Tuček
6,818
<p>I assume that by Sobolev topology you mean the topology induced by the Sobolev norm. Since all normed spaces are metric spaces, the affirmative answer to your question follows from the fact that all metric spaces are paracompact. See e.g. <em>A new proof that metric spaces are paracompact</em> by Mary Ellen Rudin (<a href="http://www.ams.org/journals/proc/1969-020-02/S0002-9939-1969-0236876-3/S0002-9939-1969-0236876-3.pdf" rel="nofollow noreferrer">pdf</a>).</p> <p><em>edit</em></p> <p>I missed that you are asking for connections modulo gauge equivalence. In that case I can refer you to Theorem 2 of <a href="https://arxiv.org/abs/1012.3180" rel="nofollow noreferrer">arXiv:1012.3180</a> where it is proved* that the moduli space of all <em>irreducible</em> connections is a locally Hausdorff Hilbert manifold and that the space of all <em>irreducible</em> connections forms a $\mathrm{Gau}$-principal bundle over it. </p> <p>* In the setting of Lie algebroids and their connections, which subsume many classical cases.</p>
3,748,392
<blockquote> <p>Let <span class="math-container">$G=\mathbb{Z}_{7}\rtimes_{\rho}\mathbb{Z}_{6}$</span> with <span class="math-container">$|\ker\rho| = 2$</span>. How many <span class="math-container">$3$</span>-Sylow subgroups are there in <span class="math-container">$G$</span>?</p> </blockquote> <p>I know that the number is <span class="math-container">$1$</span> or <span class="math-container">$7$</span>, but I'm stuck.</p>
dan_fulea
550,003
<p>The question is compactly answered and accepted already, as this message is written, so consider it as a comment. We can in fact construct explicitly the group <span class="math-container">$G$</span> as a group of matrices over the field <span class="math-container">$F=\Bbb F_7$</span> with seven elements. Note first that the two groups <span class="math-container">$N=(\Bbb F_7,+)$</span> and <span class="math-container">$H=(\Bbb F_7^\times,\cdot)$</span> (obtained from <span class="math-container">$F$</span> by applying one or the other forgetful functor) are isomorphic to the groups from the OP involved in the semidirect product. We have an action by multiplication of <span class="math-container">$H=(\Bbb F_7^\times,\cdot)$</span> on <span class="math-container">$N=(\Bbb F_7,+)$</span>. Now we let each unit <span class="math-container">$u$</span> act on an <span class="math-container">$x$</span> by the rule: <span class="math-container">$$ \rho_u(x)=u^2x\ . $$</span> It is clear that only <span class="math-container">$\pm1$</span> act trivially, so the kernel of <span class="math-container">$\rho$</span> has two elements. The multiplication of two elements <span class="math-container">$(x,u)\in G$</span> and <span class="math-container">$(y,v)\in G$</span>, <span class="math-container">$x,y\in N=\Bbb F_7$</span>, <span class="math-container">$u,v\in H=\Bbb F^\times_7$</span>, can be written <span class="math-container">$$ (x,u)(y,v)=(x+u^2y,uv)\ . $$</span> We also have a matrix representation for <span class="math-container">$(x,u)$</span> as the <span class="math-container">$3\times 3$</span> matrix over <span class="math-container">$\Bbb F_7$</span>: <span class="math-container">$$ A(x,u)= \begin{bmatrix} 1 \\ \cdot &amp; u\\ x &amp;\cdot &amp;u^2 \end{bmatrix} \ . 
$$</span> Now it is simple to exhibit all elements <span class="math-container">$(x,u)$</span> of order <span class="math-container">$3$</span> in this group, corresponding to matrices of the above shape of (multiplicative) order <span class="math-container">$3$</span>. First of all, <span class="math-container">$u$</span> has order <span class="math-container">$1$</span> or <span class="math-container">$3$</span>, so <span class="math-container">$u\in\{1,2,4\}$</span>. The <span class="math-container">$1$</span> is easily excluded.</p> <p>Then the computation of <span class="math-container">$$ (x,u)^3=(\ x(1+u^2+u^4)\ ,\ u^3\ ) $$</span> shows that each choice of <span class="math-container">$x$</span> is possible. So we have <span class="math-container">$2\times 7$</span> elements of order exactly three, concluding that there are <span class="math-container">$7$</span> (<span class="math-container">$3$</span>-Sylow) subgroups with <span class="math-container">$3$</span> elements.</p> <hr /> <p>Computer aid, here <a href="https://www.sagemath.org" rel="nofollow noreferrer">sage</a>:</p> <pre><code>sage: F = GF(7)
sage: A = matrix(F, 3, 3, [1,0,0, 0,1,0, 1,0,1])
sage: B = matrix(F, 3, 3, [1,0,0, 0,3,0, 0,0,9])
sage: # 3 is a generator of (F*, .)
sage: G = MatrixGroup([A, B])
sage: G.order()
42
sage: G.structure_description()
'C2 x (C7 : C3)'
sage: P = G.as_permutation_group()
sage: for W in P.subgroups():
....:     if W.order() == 3:
....:         print(f&quot;Subgroup of order 3. Is it normal? {W.is_normal()}&quot;)
....:
Subgroup of order 3. Is it normal? False
Subgroup of order 3. Is it normal? False
Subgroup of order 3. Is it normal? False
Subgroup of order 3. Is it normal? False
Subgroup of order 3. Is it normal? False
Subgroup of order 3. Is it normal? False
Subgroup of order 3. Is it normal? False
sage:
</code></pre>
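The count can also be cross-checked without sage (a dependency-free sketch of mine, using the multiplication rule $(x,u)(y,v)=(x+u^2y,\,uv)$ from the answer):

```python
# Realize G concretely as pairs (x, u), x in Z_7, u in Z_7^*, and count
# the elements of order 3 by brute force.
def mul(p, q):
    (x, u), (y, v) = p, q
    return ((x + u * u * y) % 7, (u * v) % 7)

identity = (0, 1)
elements = [(x, u) for x in range(7) for u in range(1, 7)]

def order(g):
    n, h = 1, g
    while h != identity:
        h = mul(h, g)
        n += 1
    return n

order3 = [g for g in elements if order(g) == 3]
# 14 elements of order 3, two per subgroup -> 7 Sylow 3-subgroups
print(len(order3), len(order3) // 2)
```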
3,003,672
<p>Say I have an infinite 2D grid (e.g. a procedurally generated world) and I want to get a unique number for each integer coordinate pair. How would I accomplish this?</p> <p>My idea is to use a square spiral, but I can't find a way to make a formula for the unique number other than an algorithm that just goes in a square spiral and stops at the wanted coords.</p> <p>The application for this conversion could be, for example, a way to save an n-dimensional shape to a file where each line represents a chunk of the shape (by using <span class="math-container">$u(x, y, z) = u(u(x, y), u(y, z))$</span>), or to have a unique random seed for each integer point (e.g. a way to hash an integer vector to a data point in an n-dimensional array)</p>
Trebor
584,396
<p>There are also tools from number theory. We can first map all integers to non-negative ones, which is easy: just take <span class="math-container">$$f(n)=\left\{\begin{align}&amp;2n&amp;n\ge0\\&amp;-2n-1&amp;n&lt;0\end{align}\right.$$</span> as Ross pointed out. Now let us take the pair <span class="math-container">$(m,n)$</span> to <span class="math-container">$2^{f(m)}3^{f(n)}$</span>. Since we can uniquely decompose positive integers into prime factors, this function is invertible, and you have your result.</p>
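A minimal sketch of this encoding (the helper names `fold`, `encode`, `decode` are mine): fold $\mathbb Z$ into $\mathbb N$ with $f$, then pack the pair as $2^{f(m)}3^{f(n)}$; unique factorization makes it invertible.

```python
def fold(n):                     # f(n): 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    return 2 * n if n >= 0 else -2 * n - 1

def unfold(k):                   # inverse of fold
    return k // 2 if k % 2 == 0 else -(k + 1) // 2

def encode(m, n):
    return 2 ** fold(m) * 3 ** fold(n)

def decode(c):                   # strip the powers of 2, then of 3
    a = 0
    while c % 2 == 0:
        c //= 2
        a += 1
    b = 0
    while c % 3 == 0:
        c //= 3
        b += 1
    return unfold(a), unfold(b)

assert all(decode(encode(m, n)) == (m, n)
           for m in range(-10, 11) for n in range(-10, 11))
```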
Jungkwuen An
788,678
<p>If you want to convert <span class="math-container">$(a,b)$</span> into <span class="math-container">$c$</span> (<span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span> all positive integers), take <span class="math-container">$$c = 2^{a-1} * (2*b-1)$$</span> This maps <span class="math-container">$(a,b)$</span> to <span class="math-container">$c$</span> uniquely, and it is invertible. Moreover, <span class="math-container">$c$</span> ranges over all positive integers, which means there is no wasted room (memory), unlike, e.g., <span class="math-container">$$c = 2^{a-1} * 3^{b-1}$$</span></p>
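A short sketch of this map and its inverse (helper names mine): every positive integer factors uniquely as a power of two times an odd number, so decoding just strips the even part of $c$.

```python
def encode(a, b):
    # 2^(a-1) times an odd number: the "even part / odd part" bijection
    return 2 ** (a - 1) * (2 * b - 1)

def decode(c):
    a = 1
    while c % 2 == 0:
        c //= 2
        a += 1
    return a, (c + 1) // 2

# Round trip, and the smallest codes fill 1, 2, 3, ... with no gaps:
codes = sorted(encode(a, b) for a in range(1, 6) for b in range(1, 6))
print(codes[:10])   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
assert all(decode(encode(a, b)) == (a, b) for a in range(1, 20) for b in range(1, 20))
```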
2,482,341
<p>I have tried to solve $\frac{\mathrm{d}}{\mathrm{dx}}\int_{0}^{x^2}e^{x+t}\mathrm{dt}$ in two different ways and I'm getting two answers. Please let me know the mistake: </p> <p><strong>Method One</strong><br> Let $F(t)$ be the antiderivative of $e^{x+t}$.<br> Thus $F^{'}(t)=e^{x+t}$ </p> <p>So </p> <p>\begin{align} \frac{\mathrm{d}}{\mathrm{d}x}\int_{0}^{x^2}e^{x+t}\mathrm{dt} &amp; = \frac{\mathrm{d }}{\mathrm{dx}}(F(x^2)-F(0)) \nonumber \\ &amp; = F^{'}(x^2)\cdot2x -F^{'}(0) \nonumber \\ &amp; = e^{x+x^2}\cdot2x-e^{x} \nonumber \end{align} But consider the following<br> <strong>Method Two:</strong><br> \begin{align} \text{Let } I &amp;=\int_{0}^{x^2}e^{x+t}\mathrm{dt} \\ &amp;=e^{x}\int_{0}^{x^2}e^{t}\mathrm{dt} \\ &amp;=e^{x}(e^{x^2}-1)\\ &amp;=e^{x+x^2}-e^{x} \end{align}</p> <p>\begin{align} \text{Thus } \frac{\mathrm{d}I}{\mathrm{d}x} &amp;=(2x+1)e^{x+x^2}-e^{x} \\ \end{align}</p> <p>Thank you in advance</p>
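The two candidate answers can be compared numerically (my addition, not part of the post): differentiate Method Two's closed form $I(x)=e^{x+x^2}-e^x$ by a central finite difference and compare against $(2x+1)e^{x+x^2}-e^x$.

```python
import math

def I(x):                                   # closed form from Method Two
    return math.exp(x + x * x) - math.exp(x)

def dI(x):                                  # Method Two's derivative, (2x+1)e^{x+x^2} - e^x
    return (2 * x + 1) * math.exp(x + x * x) - math.exp(x)

h = 1e-6
for x in [0.0, 0.5, 1.0]:
    numeric = (I(x + h) - I(x - h)) / (2 * h)   # central difference
    assert math.isclose(numeric, dI(x), rel_tol=1e-6, abs_tol=1e-6)
print("numeric derivative agrees with Method Two")
```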
Zhuoran He
485,692
<p>Since $AB$ and $BC$ have the smaller upper bounds, they should be set to their maxima. Then if $AC$ is also set to its maximum, angle $B$ becomes obtuse ($2^2+3^2&lt;4^2$), which is suboptimal. So we set angle $B$ to $90^\circ$ and the maximum area should be $\frac{1}{2}\times 2\times 3=3$. Once the maximum is found (guessed), it's not hard to prove it:</p> <p>$$S=\frac{1}{2}|AB||BC|\sin B\leq\frac{1}{2}|AB||BC|\leq\frac{1}{2}\times 2\times 3=3.$$</p> <p>No calculus, just simple trigonometry.</p>
656,423
<p>This is a really simple problem but I am unsure if I have proved it properly.</p> <p>By contradiction:</p> <p>Suppose that $x \geq 1$ and $x&lt; \sqrt{x}$. Then $x\cdot x \geq x \cdot 1$ and $x^2 &lt; x$ (squaring both sides), which is a contradiction.</p>
user76568
74,917
<p>As a contrapositive, assuming $x$ is not negative: $$\sqrt{x} &gt; x \implies x&gt;x^2 \land x \neq0 \implies 1&gt;x$$ (the 1st implication is by squaring (which is obviously an increasing function here), the 2nd is by dividing by $x$) </p> <p>So, equivalently: $$x \geq 1 \implies x \geq\sqrt{x}$$</p> <p>Alternatively, for a more <strong>direct</strong> proof, you can rely on $\sqrt{\cdot}$ being an increasing function on the domain $\mathbb R_{\ge 0}$: $$x \geq 1 \implies x^2 \geq x \implies x \geq\sqrt{x}$$ (the 1st implication is by multiplying by $x$, the 2nd by taking the square root, relying on it being increasing)</p>
816,249
<p>I just need some references which study examples of skew-adjoint differential operators generating strongly continuous unitary groups of operators, and their applications to partial differential equations.</p> <p>The example I know is the differential operator defined on the Hilbert space $H=L^2(\mathbb{R})$ by $$Af=f',$$ which has as domain $$D(A)=\{f \in L^2(\mathbb{R}) \text{ absolutely continuous, with } f'\in L^2(\mathbb{R}) \}.$$</p> <p>This operator generates a strongly continuous unitary group: $$(U(t)f)(s)=f(s+t).$$ By unitary I mean $U(t)^{-1}=U(t)^*$. By <a href="http://en.wikipedia.org/wiki/Stone%27s_theorem_on_one-parameter_unitary_groups" rel="nofollow">Stone's Theorem</a>, this implies that $A$ must be skew-adjoint.</p>
Disintegrating By Parts
112,478
<p>If you have any densely-defined selfadjoint linear operator $A$ on a complex Hilbert space $X$, then $e^{itA}$ is a unitary semigroup, which is really $e^{tB}$ where $B^{\star}=-B$.</p> <p>More generally, suppose that $U : [0,\infty)\rightarrow \mathscr{L}(X)$ is an isometric $C^{0}$ semigroup, meaning that $$ \begin{align} &amp; 1.\;\; U(t)U(t')=U(t+t'),\;\;\; t, t' \ge 0;\\ &amp; 2.\;\; U(0)=I;\\ &amp; 3.\;\; \|U(t)x\|=\|x\|,\;\; t \ge 0;\\ &amp; 4.\;\; \lim_{t\downarrow 0}U(t)x=x,\;\; x \in X; \end{align} $$ It turns out that these assumptions are enough to guarantee that the following is a dense linear subspace of $X$: $$ \mathcal{D}(A)=\left\{ x \in X\; :\; Ax=\lim_{h\downarrow 0}\frac{1}{h}(U(h)-I)x\mbox{ exists}\right\} $$ Automatically, $U(t)\mathcal{D}(A)\subseteq\mathcal{D}(A)$ for all $t \ge 0$, and the right derivative of $U(t)x$ is $\frac{d}{dt}U(t)x= AU(t)x=U(t)Ax$ for all $x \in\mathcal{D}(A)$. Note that property (3) implies that the right derivative also satisfies the following for all $t \ge 0$ and $x \in \mathcal{D}(A)$: $$ \frac{d}{dt}\|U(t)x\|^{2}=\frac{d}{dt}(U(t)x,U(t)x)=(U(t)Ax,U(t)x)+(U(t)x,U(t)Ax)=0. $$ In particular, setting $t=0$ and applying $U(0)=I$ gives $$ (Ax,x)+(x,Ax)=0,\;\;\; x \in \mathcal{D}(A). $$ By the polarization identity for complex Hilbert spaces, the above gives $$ (Ax,y)=-(x,Ay),\;\;\; x,y \in \mathcal{D}(A). $$ So $A$ is antisymmetric because of property (3). In fact, property (3) is equivalent to the antisymmetry of $A$. Equivalently, $iA$ is symmetric, i.e., $((iA)x,y)=(x,(iA)y)$ for all $x,y \in \mathcal{D}(A)$. If $A$ is bounded, then $iA$ will be self-adjoint. If $A$ is not bounded, it is possible that $(iA)^{\star} \ne (iA)$ because the adjoint $(iA)^{\star}$ may have a larger domain, even though the two agree on the smaller domain $\mathcal{D}(A)$. 
This is sometimes written as $(iA)\prec (iA)^{\star}$ to mean that the operator on the right is a proper extension of the one on the left.</p> <p>If you assume that $U(t)$ is unitary for all $t \ge 0$ (which is (3)+surjective), then $(iA)=(iA)^{\star}$. Conversely, if $(iA)=(iA)^{\star}$, then $U(t)$ is unitary for all $t \ge 0$.</p>
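A finite-dimensional illustration of this picture (my addition): for a skew-Hermitian matrix $A$ (so $A^\star=-A$), the matrix $H=iA$ is Hermitian, and $U(t)=e^{tA}=Ve^{-it\Lambda}V^\star$ (from the eigendecomposition $H=V\Lambda V^\star$) is unitary for every $t$ and satisfies the group law.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M - M.conj().T                      # skew-Hermitian: A* = -A

H = 1j * A                              # Hermitian, H = iA
w, V = np.linalg.eigh(H)                # H = V diag(w) V*

def U(t):
    # e^{tA} = e^{-itH} = V e^{-it diag(w)} V*
    return V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T

t = 0.7
Ut = U(t)
assert np.allclose(Ut @ Ut.conj().T, np.eye(4))   # unitary
assert np.allclose(U(t) @ U(0.3), U(t + 0.3))     # group law
```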
4,146,858
<blockquote> <p>Q) For every twice differentiable function <span class="math-container">$f:\mathbb{R}\longrightarrow [-2,2] $</span> with <span class="math-container">$[f(0)]^2+[f'(0)]^2=85$</span> , which of the following statement(s) is(are) TRUE?</p> </blockquote> <blockquote> <p>(A) There exists <span class="math-container">$r,s \in\mathbb{R}$</span> , where <span class="math-container">$r&lt;s$</span> , such that f is one-one on the open interval <span class="math-container">$(r,s)$</span></p> </blockquote> <blockquote> <p>(B) There exists <span class="math-container">$x_0 \in (-4,0)$</span> such that <span class="math-container">$|f'(x_0)|\leq 1$</span></p> </blockquote> <blockquote> <p>(C) <span class="math-container">$\lim_{x\to \infty}f(x)=1$</span></p> </blockquote> <blockquote> <p>(D) There exists <span class="math-container">$\alpha \in (-4,4)$</span> such that <span class="math-container">$f(\alpha)+f''(\alpha)=0$</span> and <span class="math-container">$f'(\alpha)\neq 0$</span></p> </blockquote> <p>I have problem with option B. Here's the given solution: <a href="https://i.stack.imgur.com/DV1ND.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DV1ND.jpg" alt="enter image description here" /></a></p> <p>But I think this is wrong because <span class="math-container">$f(0)$</span> cannot be equal to <span class="math-container">$2$</span> as that gives us the value of <span class="math-container">$|f'(0)|=9$</span> (from the condition given in the question). If the slope is positive at <span class="math-container">$x=0$</span> and the function is achieving its highest value there then for the points in the right neighbourhood of <span class="math-container">$x=0$</span> the slope would have to abruptly change to <span class="math-container">$0$</span> otherwise the function would obtain values greater than <span class="math-container">$2$</span> which are not in its co-domain. 
But it cannot abruptly change either, because it's given to be a twice differentiable function.</p> <p>And I took <span class="math-container">$f'(0)=9$</span> only, but of course a similar argument can be made with the negative value, and a similar argument can be made if they took <span class="math-container">$f(0)=-2$</span> and <span class="math-container">$f(-4)=2$</span> in the solution.</p> <p>My question-</p> <ol> <li>Is the given solution wrong because of what I just said? Or did I go wrong somewhere? If not, can this be put into some more concrete words, or is there a theorem related to this?</li> <li>Can an alternate solution be proposed for option B?</li> </ol> <p>I'm not fully confident about this because this is a JEE Advanced 2018 question and no objections were made against it that I'm aware of.</p>
KCd
619
<p>Logically the quotient rule in calculus is not needed, since it can be derived from the product rule, the power rule, and the chain rule every time, e.g., <span class="math-container">$(1/g)' = (g^{-1})' = -g^{-2}g' = -g'/g^2$</span>. But most students learn the quotient rule and don't have trouble after practicing it (and then they have to learn not to confuse it with the ratio of derivatives in L'Hopital's rule later).</p> <p>I once met a very famous mathematician who does <em>not</em> know the quotient rule: he learned math in Europe, where university courses often begin with analysis rather than elementary calculus, and he never teaches freshman calculus, so he has no reason to make contact with the quotient rule. I was discussing something with him, and when the derivative of a ratio was needed he found it with the product rule and told me he didn't know another way and didn't care if there is another way.</p>
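The derivation in the first paragraph is easy to check numerically (my addition): compute $(f/g)'$ once with the quotient rule and once via the product and chain rules applied to $f\cdot g^{-1}$; with $f=\sin$, $g=\cos$ both should agree with $(\tan x)'=\sec^2 x$.

```python
import math

f, df = math.sin, math.cos
g, dg = math.cos, lambda x: -math.sin(x)

def quotient_rule(x):
    return (df(x) * g(x) - f(x) * dg(x)) / g(x) ** 2

def product_chain_rule(x):
    # (f * g^{-1})' = f' g^{-1} + f * (-g^{-2} g')
    return df(x) / g(x) + f(x) * (-dg(x) / g(x) ** 2)

for x in [0.1, 0.5, 1.0]:
    assert math.isclose(quotient_rule(x), product_chain_rule(x))
    assert math.isclose(quotient_rule(x), 1 / math.cos(x) ** 2)  # (tan x)' = sec^2 x
```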
2,984,918
<p>How can I prove this? </p> <blockquote> <p>Prove that for any two positive integers <span class="math-container">$a,b$</span> there are two positive integers <span class="math-container">$x,y$</span> satisfying the following equation: <span class="math-container">$$\binom{x+y}{2}=ax+by$$</span></p> </blockquote> <p>My idea was that <span class="math-container">$\binom{x+y}{2}=x\,\dfrac{x+2y-1}{2}+\dfrac{y(y-1)}{2}$</span> and to choose <span class="math-container">$x,y$</span> such that <span class="math-container">$2a=x+2y-1, 2b=y-1$</span>, but using this idea <span class="math-container">$x,y$</span> won’t always be positive. </p>
Oldboy
401,277
<p><strong>Case 1:</strong> <span class="math-container">$a=b$</span>. </p> <p><span class="math-container">$${x+y \choose 2}=ax+ay$$</span></p> <p><span class="math-container">$$\frac{(x+y)(x+y-1)}{2}=a(x+y)$$</span></p> <p><span class="math-container">$$x+y-1=2a$$</span></p> <p><span class="math-container">$$x+y=2a+1$$</span></p> <p>Basically, you can set <span class="math-container">$y=1$</span> and this gives <span class="math-container">$x=2a$</span>.</p> <p><strong>Case 2:</strong> <span class="math-container">$a\gt b$</span> </p> <p>Introduce the substitution: </p> <p><span class="math-container">$$x=u+v\tag{1}$$</span></p> <p><span class="math-container">$$y=u-v\tag{2}$$</span></p> <p>So the equation:</p> <p><span class="math-container">$${x+y \choose 2}=ax+by$$</span></p> <p>...now becomes:</p> <p><span class="math-container">$${2u \choose 2}=\frac{2u(2u-1)}{2}=a(u+v)+b(u-v)$$</span></p> <p>You can now express <em>v</em> as a function of <em>u</em>:</p> <p><span class="math-container">$$v=\frac{u(2u-a-b-1)}{a-b}\tag{3}$$</span></p> <p>Now replace (3) into (1) and (2) and the result is:</p> <p><span class="math-container">$$x={u(2u-2b-1) \over a-b}\tag{4}$$</span></p> <p><span class="math-container">$$y={u(2a+1-2u) \over a-b}\tag{5}$$</span></p> <p>From (4), using the condition that <span class="math-container">$x&gt;0$</span> we have:</p> <p><span class="math-container">$$u&gt;b+\frac12\tag{6}$$</span></p> <p>From (5), using the condition that <span class="math-container">$y&gt;0$</span> we get:</p> <p><span class="math-container">$$u &lt; a + \frac 12\tag{7}$$</span></p> <p>If we limit the choice of parameter <span class="math-container">$u$</span> to natural numbers we have:</p> <p><span class="math-container">$$b+1\le u \le a\tag{8}$$</span></p> <p>Is it possible to choose the value for <span class="math-container">$u$</span> so that expressions (4) and (5) produce positive integer values? Actually, it is! 
Available values for <span class="math-container">$u$</span> are in the set of <span class="math-container">$a-b$</span> <strong>consecutive</strong> integer numbers, from <span class="math-container">$b$</span> (exclusive) to <span class="math-container">$a$</span> (inclusive). Obviously, one of those numbers must be divisible by <span class="math-container">$a-b$</span>. That value of <span class="math-container">$u$</span> will cancel the denominator in (4) and (5) thus generating integer values for <span class="math-container">$x$</span> and <span class="math-container">$y$</span>!</p> <p>Example: <span class="math-container">$a=5, b=2$</span>. We have to pick the value for <span class="math-container">$u$</span> that is greater than 2, less or equal to 5 and divisible by 5-2=3. The only possible choice is <span class="math-container">$u=3$</span>. From (4) and (5) we get: <span class="math-container">$x=1,y=5$</span>. One can easily check that <span class="math-container">${1+5 \choose 2}=5\times 1+2\times 5$</span>.</p> <p><strong>Case 3:</strong> <span class="math-container">$a\lt b$</span></p> <p>This is not a new case at all. You can always convert this type of problem into the following form:</p> <p><span class="math-container">$${y+x \choose 2}=by+ax$$</span></p> <p>Now replace <span class="math-container">$x$</span> with <span class="math-container">$y$</span> and <span class="math-container">$y$</span> with <span class="math-container">$x$</span> and you'll get the case (2) that we already considered.</p> <p>BTW, excellent problem, it's too bad that it did not get enough attention.</p>
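The whole construction can be packaged into a small program (helper names mine) that also verifies the result by brute force over many $(a,b)$:

```python
from math import comb

def solve(a, b):
    if a == b:
        return 2 * a, 1                      # Case 1: x = 2a, y = 1
    if a < b:
        y, x = solve(b, a)                   # Case 3: swap the roles of a and b
        return x, y
    # Case 2: pick the unique u in {b+1, ..., a} divisible by a - b
    u = next(u for u in range(b + 1, a + 1) if u % (a - b) == 0)
    x = u * (2 * u - 2 * b - 1) // (a - b)   # eq. (4)
    y = u * (2 * a + 1 - 2 * u) // (a - b)   # eq. (5)
    return x, y

for a in range(1, 30):
    for b in range(1, 30):
        x, y = solve(a, b)
        assert x > 0 and y > 0
        assert comb(x + y, 2) == a * x + b * y

print(solve(5, 2))   # (1, 5), matching the worked example
```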
2,065,639
<p>$\displaystyle \int_a^b (x-a)(x-b)\,dx=-\frac{1}{6}(b-a)^3$</p> <p>$\displaystyle \int_a^{(a+b)/2} (x-a)(x-\frac{a+b}{2})(x-b)\, dx=\frac{1}{64}(b-a)^4$ </p> <p>Instead of expanding the integrand, or doing integration by part, is there any faster way to compute this kind of integral?</p>
Matthew Leingang
2,785
<p>You can do this with substitution and <a href="https://en.wikipedia.org/wiki/Cavalieri%27s_quadrature_formula" rel="nofollow noreferrer">Cavalieri's formula</a>: $$ \int_0^1 u^n \,du = \frac{1}{n+1} $$</p> <p>For the first one, let $u= \frac{x-a}{b-a}$. Then $x = (b-a)u +a$, which means $x-a = (b-a)u$ and $x-b = (b-a)(u-1)$. Also $dx=(b-a)\,du$. So \begin{align*} \int_a^b (x-a)(x-b)\,dx &amp;= (b-a)^3 \int_0^1 u (u-1)\,du = (b-a)^3 \int_0^1 (u^2 - u)\,du \\ &amp;= (b-a)^3 \left(\frac{1}{3}-\frac{1}{2}\right) = -\frac{1}{6}(b-a)^3 \end{align*}</p> <p>For the second one, let $u= \frac{2}{a-b}\left(x-\frac{a+b}{2}\right)$. Then: \begin{align*} x-a &amp;= \frac{a-b}{2}(u-1) \\ x-b &amp;= \frac{a-b}{2}(u+1) \\ x - \frac{a+b}{2} &amp;= \frac{a-b}{2}u \\ dx &amp;= \frac{a-b}{2}\,du \end{align*} So \begin{align*} \int_a^{(a+b)/2} (x-a)\left(x-\frac{a+b}{2}\right)(x-b)\,dx &amp;= \left(\frac{a-b}{2}\right)^4\int_1^0(u-1)u(u+1)\,du = -\frac{(b-a)^4}{16}\int_0^1 (u^3-u)\,du \\ &amp;= -\frac{(b-a)^4}{16}\left(\frac{1}{4}-\frac{1}{2}\right) = \frac{1}{64}(b-a)^4 \end{align*}</p> <p>This has less integration, but it took some time to get the substitution right.</p>
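Both closed forms can be cross-checked cheaply (my addition): Simpson's rule is exact for polynomials of degree at most three, so a single application of it reproduces each integral exactly.

```python
import math

def simpson(f, lo, hi):
    # Simpson's rule on one interval; exact for cubics and below
    mid = (lo + hi) / 2
    return (hi - lo) / 6 * (f(lo) + 4 * f(mid) + f(hi))

a, b = 1.0, 4.0
m = (a + b) / 2

I1 = simpson(lambda x: (x - a) * (x - b), a, b)
assert math.isclose(I1, -(b - a) ** 3 / 6)

I2 = simpson(lambda x: (x - a) * (x - m) * (x - b), a, m)
assert math.isclose(I2, (b - a) ** 4 / 64)
```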
93,099
<p>Consider $n$ points generated randomly and uniformly on a unit square. What is the expected value of the area (as a function of $n$) enclosed by the convex hull of the set of points?</p>
Community
-1
<p><strong>Update:</strong> By a result of Buchta (Zufallspolygone in konvexen Vielecken, Crelle, 1984; available on digizeitschriften.de) there is a general formula for this expected value, it is $$1 -\frac{8}{3(n+1)} \bigl( \sum_{k=1}^{n+1} \frac{1}{k} (1 - \frac{1}{2^k}) - \frac{1}{(n+1)2^{n+1}} \bigr) $$ yielding (starting with $n=3$): $11/144$, $11/72$, $79/360$, $199/720$, and so on.</p> <p>The paper contains in fact a more general result, where the problem is solved for any convex $m$-gon; not just the square. </p> <p>For asymptotics see other answer(s).</p> <p>--</p> <p>Old version (highly incomplete and wrong guess)</p> <p>For $n=3$ the expected value is $11/144$ and for $n=4$ it is $11/72$. </p> <p>This information is taken from a <a href="http://www.math.kth.se/~johanph/area12.pdf">somewhat recent paper (2004) by Johan Philip</a> where the respective distribution functions are studied in detail. I did not see any mention of exact values for other small values of $n$ there (the asymptotic result given already is mentioned though), so they might be unknown. </p>
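Buchta's formula is easy to evaluate exactly with rational arithmetic (my addition), reproducing the values listed above:

```python
from fractions import Fraction

def expected_hull_area(n):
    # 1 - 8/(3(n+1)) * ( sum_{k=1}^{n+1} (1/k)(1 - 1/2^k) - 1/((n+1) 2^{n+1}) )
    s = sum(Fraction(1, k) * (1 - Fraction(1, 2 ** k)) for k in range(1, n + 2))
    return 1 - Fraction(8, 3 * (n + 1)) * (s - Fraction(1, (n + 1) * 2 ** (n + 1)))

print([expected_hull_area(n) for n in range(3, 7)])
# [Fraction(11, 144), Fraction(11, 72), Fraction(79, 360), Fraction(199, 720)]
```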
2,546,487
<p>We say $f(x)=x\, \mbox{sgn}(x)$ is continuous at $x=0$ since $f(0)=0$, but it is not differentiable at $x=0$. Furthermore, we say $g(x)=x^2\mbox{sgn}(x)$ is continuous and differentiable at $x=0$. But why or how? How can $g'(0)=0$ when $g(0)=0$?</p>
Green
357,732
<p>It's because all the 1's are indistinguishable. For example, if you have 1001 and the 1's and 0's were distinguishable, you have $4!$ options, but since they aren't you have $4!$ divided by $2! * 2!$ (which is equivalent to 4 choose 2). </p> <p>As a result, to find out the total number of ways to have r 1's in a string of length n, you have to find the total way to choose r positions out of the n total to assign 1 to.</p>
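A brute-force check of this count (my addition): enumerate all binary strings of length $n$ and confirm that exactly $\binom{n}{r}$ of them contain $r$ ones.

```python
from itertools import product
from math import comb

def count_strings(n, r):
    # count length-n binary strings with exactly r ones
    return sum(1 for s in product("01", repeat=n) if s.count("1") == r)

assert count_strings(4, 2) == comb(4, 2) == 6   # e.g. 1001 is one of the 6
assert all(count_strings(n, r) == comb(n, r)
           for n in range(1, 8) for r in range(n + 1))
```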
vrugtehagel
304,329
<p>Yes the order matters, but we still use $c(n,r)$ (more commonly denoted ${n\choose r}$) because we essentially want to find the number of ways to pick $r$ zeroes out of a string with length $n$ and change those to $1$'s, resulting in the amount of strings with exactly $r$ ones.</p>
1,953,251
<blockquote> <p>For what x does the exponential series $P_c(x) = \sum^\infty_{n=0} (-1)^{n+1}\cdot n\cdot x^n$ converge?</p> </blockquote> <p><strong>What I got so far:</strong></p> <p>$\sum^\infty_{n=0} (-1)^{n+1}\cdot n\cdot x^n = (-1)\sum^\infty_{n=0} (-1)^{n}\cdot (\sqrt[n]n)^n\cdot x^n = (-1)\sum^\infty_{n=0} ((-1)\cdot (\sqrt[n]n)\cdot x)^n$</p> <p>We know that $|((-1)\cdot (\sqrt[n]n)\cdot x)|$ must be &lt; 1 in order to converge <em>(geometric sum)</em>:</p> <p>$ |((-1)\cdot (\sqrt[n]n)\cdot x)| &lt; 1$ </p> <p>Case 1: $ ((-1)\cdot (\sqrt[n]n)\cdot x) &lt; 1 \Leftrightarrow (-1)\cdot (\sqrt[n]n)\cdot x &lt; 1 \Leftrightarrow (-1)\cdot x &lt; \frac{1}{(\sqrt[n]n)} \Leftrightarrow x &gt; \frac{-1}{(\sqrt[n]n)}$</p> <p>Case 2: $ ((\sqrt[n]n)\cdot x) &lt; 1 \Leftrightarrow (\sqrt[n]n)\cdot x &lt; 1 \Leftrightarrow x &lt; \frac{1}{(\sqrt[n]n)}$</p> <p>We know that $x &lt; |\frac{-1}{(\sqrt[n]n)}| = |\frac{1}{(\sqrt[n]n)}| = \frac{1}{(\sqrt[n]n)} \le \pm 1 $</p> <p>So $x &lt; (-1) &lt; 1$. <strong>Is this way right</strong>, must $x$ be $&lt; -1$ in order for $P_c(x) $ to converge? Is there a simpler way to get to a solution? Thanks in advance!</p>
DonAntonio
31,254
<p>Cauchy-Hadamard:</p> <p>$$\lim_{n\to\infty}\sqrt[n]{\left|(-1)^{n+1}n\right|}=\lim_{n\to\infty}\sqrt[n]n=1$$</p> <p>so the series converges for $\;|x|&lt;1\;$ .</p>
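A numerical illustration of this radius of convergence (my addition): inside $|x|<1$ the partial sums settle to the closed form $x/(1+x)^2$, obtained by differentiating the geometric series $\sum_{n\ge 0}(-x)^n = 1/(1+x)$ term by term.

```python
import math

def partial_sum(x, terms=500):
    return sum((-1) ** (n + 1) * n * x ** n for n in range(1, terms))

for x in [0.5, -0.5, 0.9]:
    assert math.isclose(partial_sum(x), x / (1 + x) ** 2, rel_tol=1e-9)
print("partial sums match x/(1+x)^2 inside |x| < 1")
```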
4,380,124
<p>I understand that the double integral is <a href="https://i.stack.imgur.com/29e8B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/29e8B.png" alt="enter image description here" /></a> However what confuses me is when I try to visualize why this formula only accounts for the region inside the bounds and not the whole rectangular region. <a href="https://i.stack.imgur.com/Zs2t1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zs2t1.jpg" alt="enter image description here" /></a></p> <p>My guess is it has to do with one of the bounds being (x) but I have a hard time visualizing that. Could someone maybe give me the 2 variable analog to this equation of the single integral, and show how it only looks at points within the specified region <a href="https://i.stack.imgur.com/KrnII.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KrnII.png" alt="enter image description here" /></a></p>
Hans Lundmark
1,242
<p>The formula in your first picture seems to refer to double integrals over rectangles only. But if you know how to integrate over a rectangle, you can define the double integral over a non-rectangular bounded set <span class="math-container">$M$</span> by taking a rectangle <span class="math-container">$D$</span> which contains <span class="math-container">$M$</span>, setting <span class="math-container">$$ g(x,y) = \begin{cases} f(x,y), &amp; (x,y) \in M ,\\ 0, &amp; \text{otherwise} , \end{cases} $$</span> and defining <span class="math-container">$$ \iint_M f(x,y) \, dxdy := \iint_D g(x,y) \, dxdy $$</span> (provided that the integral on the right-hand side exists).</p>
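The definition can be illustrated numerically; a midpoint-rule sketch, where the triangle $M$ and the integrand are hypothetical examples (the exact value $\int_0^1\int_0^x (x+y)\,dy\,dx = 1/2$ is easy to verify by hand):

```python
# Midpoint-rule sketch of the indicator trick: integrate f(x, y) = x + y over
# the triangle M = {0 <= y <= x <= 1} by extending it with g = f * 1_M to the
# square D = [0, 1]^2.
def double_integral(f, n=400):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * h, (j + 0.5) * h  # midpoint of each cell of D
            total += f(x, y) * h * h
    return total

f = lambda x, y: x + y
g = lambda x, y: f(x, y) if y <= x else 0.0  # g vanishes outside M
print(double_integral(g))  # ~0.5, the exact value of the integral over M
```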
3,639,192
<p>In their article on the <a href="https://en.wikipedia.org/wiki/Brauer_group#Galois_cohomology" rel="nofollow noreferrer">Brauer group</a> Wikipedia writes:</p> <blockquote> <p>Since all central simple algebras over a field <span class="math-container">$K$</span> become isomorphic to the matrix algebra over a separable closure of <span class="math-container">$K$</span>, the set of isomorphism classes of central simple algebras of degree <span class="math-container">$n$</span> over <span class="math-container">$K$</span> can be identified with the Galois cohomology set <span class="math-container">$H^1(K, \mathrm{PGL}(n))$</span>.</p> </blockquote> <p>I understand all the words here, but the reasoning goes too quick for me to follow it. Could someone explain why this is true?</p> <p>I believe <span class="math-container">$H^1(K, \mathrm{PGL}(n))$</span> is really the group cohomology <span class="math-container">$H^1(G, \mathrm{PGL}(n,K))$</span> where <span class="math-container">$G$</span> is the Galois group of the separable closure of the field <span class="math-container">$K$</span>. The group cohomology <span class="math-container">$H^1(G,M)$</span> is explained <a href="https://en.wikipedia.org/wiki/Group_cohomology#H_1" rel="nofollow noreferrer">here</a>. Perhaps it's not explained in sufficient generality, since they seem to define <span class="math-container">$H^1(G,M)$</span> only where <span class="math-container">$M$</span> is an abelian group acted on by a group <span class="math-container">$G$</span>. 
But I think the same thing should work for any set acted on by <span class="math-container">$G$</span>, and I know how <span class="math-container">$PGL(n,K)$</span> is acted on by the Galois group <span class="math-container">$G$</span>.</p> <p>I guess I need to see how a</p> <ul> <li>central simple algebra <span class="math-container">$A$</span> over a field <span class="math-container">$K$</span> that becomes isomorphic to an <span class="math-container">$n \times n$</span> matrix algebra when tensored with the separable closure of <span class="math-container">$A$</span> </li> </ul> <p>gives rise to a</p> <ul> <li>1-cocycle <span class="math-container">$c_A \colon G \to M$</span></li> </ul> <p>and why isomorphic algebras of this sort give cocycles that differ by a coboundary. (Also how to go back.)</p>
Quanto
686,284
<p>Let <span class="math-container">$y=(\frac35)^x$</span>. Then, divide the equation by <span class="math-container">$15^x$</span> to get</p> <p><span class="math-container">$$y-\frac 1y+1=0$$</span></p> <p>Solve to get</p> <p><span class="math-container">$$y = \frac{-1\pm\sqrt5}2$$</span></p> <p>Since <span class="math-container">$y=(\frac35)^x&gt;0$</span>, only the positive root <span class="math-container">$y = \frac{\sqrt5-1}2$</span> is admissible. Use <span class="math-container">$x\ln\frac35 =\ln y$</span> to obtain the solution</p> <p><span class="math-container">$$x= \frac{ \ln \frac{\sqrt5-1}2}{\ln\frac35}$$</span></p>
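A quick numerical check (a sketch; it verifies only the relations stated in this answer, since the original equation is not quoted here):

```python
import math

# With x = ln((sqrt 5 - 1)/2) / ln(3/5), the substitution y = (3/5)^x
# should land exactly on the positive root of y - 1/y + 1 = 0.
x = math.log((math.sqrt(5) - 1) / 2) / math.log(3 / 5)
y = (3 / 5) ** x
print(x)              # ~0.94
print(y - 1 / y + 1)  # ~0
```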
1,176,098
<p>Here are some of my ideas:</p> <p><strong>1. Addition Formula:</strong> <span class="math-container">$\sin{x}$</span> and <span class="math-container">$\cos{x}$</span> are the unique functions satisfying:</p> <ul> <li><p><span class="math-container">$\sin(x + y) = \sin x \cos y + \cos x \sin y $</span></p> </li> <li><p><span class="math-container">$\cos(x + y) = \cos x \cos y - \sin x \sin y$</span></p> </li> <li><p><span class="math-container">$\sin 0 = 0\quad$</span> and <span class="math-container">$\quad\displaystyle{\lim_{x \rightarrow 0} \frac{\sin x }{x} = 1}$</span></p> </li> <li><p><span class="math-container">$\cos 0 = 1\quad$</span> and <span class="math-container">$\quad\displaystyle{\lim_{x \rightarrow 0} \frac{1-\cos x}{x} = 0}$</span></p> </li> </ul> <p><strong>2. Taylor Series:</strong></p> <ul> <li><p><span class="math-container">$\displaystyle{\sin x = \sum_{n = 0}^{\infty} \frac{(-1)^n}{(2n+1)!}\;x^{2n+1}}$</span></p> </li> <li><p><span class="math-container">$\displaystyle{\cos x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!}\;x^{2n}}$</span></p> </li> </ul> <p><strong>3. Differential Equations:</strong> <span class="math-container">$\sin(x)$</span> and <span class="math-container">$\cos(x)$</span> are the unique solutions to <span class="math-container">$y'' = -y$</span>, where <span class="math-container">$\sin(0) = \cos^\prime(0) = 0$</span> and <span class="math-container">$\sin^\prime(0) = \cos(0) = 1$</span>.</p> <p><strong>4. 
Inverse Formula:</strong> We have:</p> <p><span class="math-container">$$\begin{align} \arcsin x &amp;= \phantom{\frac{\pi}{2} + } \int_0^x \frac{1}{\sqrt{1 - t^2}}\, dt \\[6pt] \arccos x &amp;= \frac{\pi}{2} - \int_0^x \frac{1}{\sqrt{1 - t^2}}\, dt \end{align}$$</span></p> <p>Then <span class="math-container">$\sin x$</span> is the inverse of <span class="math-container">$\arcsin x$</span>, extended appropriately to the real line, and <span class="math-container">$\cos x$</span> is similar.</p> <p><strong>Question:</strong> Are there any others that you like? In particular, are there any good rigorous ones coming from the original geometric definition?</p>
Rory Daulton
161,807
<p>The book <em>Principles of Mathematical Analysis</em> (also called <em>Baby Rudin</em>) by Walter Rudin, Second edition, pages 167-169, briefly develops the theory of trigonometric functions. This is after developing the theory of series of complex numbers as well as the theory of exponential and logarithmic functions, so the additional analysis can be quite brief.</p> <p>Rudin defines the complex exponential function $E(z)$ by</p> <p>$$E(z)=\sum_{n=0}^{\infty} \frac{z^n}{n!}$$</p> <p>then defines the functions cosine and sine (which he initially calls $C(x)$ and $S(x)$) of a real variable by</p> <p>$$C(x)=\frac 12[E(ix)+E(-ix)], \quad S(x)=\frac 1{2i}[E(ix)-E(-ix)]$$</p> <p>He then derives all the usual properties (both trigonometric and analytic) of sine and cosine in just two pages. All his proofs are simple and clear, except for one: showing that there are positive numbers $x$ such that $C(x)=0$. That can be proved simply in other ways, so the analytic exposition of trigonometry is rigorous, brief, and clear. Along the way he also shows the usual parameterization of the unit circle and shows the usual high-school definitions of sine and cosine.</p> <p>Rudin's conclusion is,</p> <blockquote> <p>It should be stressed that we derived the basic properties of the trigonometric functions from [the definitions of $E(x)$, $C(x)$, and $S(x)$], without any appeal to the geometric notion of angle.</p> </blockquote> <p>(I replaced references to specific equations with their meaning in this quote.)</p> <hr> <p>Regarding your question "In particular, are there any good rigorous ones coming from the original geometric definition?":</p> <p>The "original geometric definition" is not rigorous unless you have a good definition of the length of a circular arc as well as a good axiom system for the real numbers or an equivalent. Euclid did not provide such a system, though he tried. The explanation I gave from Baby Rudin defines geometry and trigonometry from analysis. 
Developing a rigorous geometry that includes all the trigonometric ideas without starting from analysis is very difficult. Even <a href="http://en.wikipedia.org/wiki/Hilbert%27s_axioms">Hilbert's axioms</a> for geometry <a href="http://en.wikipedia.org/wiki/Hilbert%27s_axioms#Application">had its problems</a>. I have never seen a good rigorous development of trigonometry from modern formal geometry, though it must exist somewhere.</p>
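Rudin's definitions are easy to check numerically; a minimal sketch with truncated series (the 40-term cutoff and the sample point are arbitrary choices):

```python
import math

# E(z) ~ sum_{n<40} z^n / n!, then C(x) = (E(ix)+E(-ix))/2 and
# S(x) = (E(ix)-E(-ix))/(2i), as in Rudin's construction.
def E(z, terms=40):
    s, term = 0, 1
    for n in range(terms):
        s += term            # term holds z^n / n!
        term *= z / (n + 1)
    return s

x = 1.3
C = (E(1j * x) + E(-1j * x)) / 2
S = (E(1j * x) - E(-1j * x)) / (2j)
print(C.real, math.cos(x))  # agree to machine precision
print(S.real, math.sin(x))
```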
4,116,134
<p>Find angle between <span class="math-container">$y=\sin x$</span> and <span class="math-container">$y=\cos x$</span> at their intersection point.</p> <p>Intersection points are <span class="math-container">$\frac{\pi}{4}+\pi k$</span> and to find angle between them we need to compute derivatives at intersection points but then I can't combine them to get an answer which is <span class="math-container">$\arctan2\sqrt2$</span>. Will be thankful for your help.</p>
Steven Alexis Gregory
75,410
<p>The slope of the tangent line to <span class="math-container">$y=\sin x$</span> at <span class="math-container">$x = \frac{\pi}{4}+\pi k$</span> is <span class="math-container">$$m_s = \cos\left(\frac{\pi}{4}+\pi k \right) = \frac{\cos \pi k - \sin \pi k}{\sqrt 2} = \frac{(-1)^k}{\sqrt 2} $$</span></p> <p>The slope of the tangent line to <span class="math-container">$y=\cos x$</span> at <span class="math-container">$x = \frac{\pi}{4}+\pi k$</span> is <span class="math-container">$$m_c = -\sin\left(\frac{\pi}{4}+\pi k \right) = -\frac{\cos \pi k + \sin \pi k}{\sqrt 2} = -\frac{(-1)^k}{\sqrt 2} $$</span></p> <p>The angle between the two lines is described by</p> <p><span class="math-container">$$ \tan \theta = \left| \dfrac{m_s - m_c}{1 + m_s m_c}\right | = \dfrac{\sqrt 2}{\left(\frac 12 \right)} = 2\sqrt 2$$</span></p> <p>So <span class="math-container">$\theta = \arctan 2\sqrt 2 \approx 70.53^\circ$</span></p>
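A numerical check of the computation (plain Python; the slopes come from the derivatives $\cos x$ and $-\sin x$ at the intersection points):

```python
import math

for k in range(3):
    x = math.pi / 4 + math.pi * k          # an intersection point
    m_s = math.cos(x)                      # slope of y = sin x there
    m_c = -math.sin(x)                     # slope of y = cos x there
    tan_theta = abs((m_s - m_c) / (1 + m_s * m_c))
    print(k, tan_theta, math.degrees(math.atan(tan_theta)))
    # tan_theta is 2*sqrt(2) ~ 2.828 for every k, so theta ~ 70.53 degrees
```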
47,246
<p>I got some text scraps with this <em>structure</em></p> <pre><code>"Focal Plane: 198' Active Aid to Navigation: Yes *Latitude: 35.250 N *Longitude: -75.529 W" </code></pre> <p>But some of them lack parts like this</p> <pre><code>"Focal Plane: 198' Active Aid to Navigation: Yes *Longitude: -75.529 W" </code></pre> <p>Using regular expression matching, I want to extract all the info available, including getting a tag "NotAvailable" for missing data.</p> <p>I wrote this code, but it can't tell when a pattern does not match -- should the pattern in the middle not match, my code just skips it and goes on to the next pattern:</p> <pre><code>ptrFocal = "(?&lt;=\\Focal Plane: )(.*?)(?=\\')"; ptrLat = "(?&lt;=\\*Latitude: )(.*?)(?=[a-zA-Z])"; ptrLon = "(?&lt;=\\*Longitude: )(.*?)(?=[a-zA-Z])"; r = StringTrim /@ StringCases[text, RegularExpression[ptrFocal &lt;&gt; "|" &lt;&gt; ptrLat &lt;&gt; "|" &lt;&gt; ptrLon] ]; </code></pre> <p>I would like some pointers on how to do this.</p> <p><strong>EDIT:</strong></p> <p>The expected transformation of the example data I gave above is</p> <blockquote> <pre><code>{198, 35.250, -75.529} {198, "NotAvailable", -75.529} </code></pre> </blockquote>
Kuba
5,478
<pre><code>str1 = "Focal Plane: 198' Active Aid to Navigation: Yes *Latitude: \ 35.250 N *Longitude: -75.529 W"; str2 = "Focal Plane: 198.12' Active Aid to Navigation: Yes \ *Longitude: -75.529 W" </code></pre> <p>With string patterns because I do not use regex :) </p> <pre><code>record[string_] := Map[ StringCases[string, # ~~ x : NumberString :&gt; x] /. {} -&gt; "NotAvailable" &amp;, {"Focal Plane: ", "Latitude: ", "Longitude: "}] // Flatten record@str1 record@str2 </code></pre> <blockquote> <pre><code>{"198", "35.250", "-75.529"} {"198.12", "NotAvailable", "-75.529"} </code></pre> </blockquote> <p>You can add <code>ToExpression /@ StringCases[...</code> if you need numbers not strings.</p>
47,246
<p>I got some text scraps with this <em>structure</em></p> <pre><code>"Focal Plane: 198' Active Aid to Navigation: Yes *Latitude: 35.250 N *Longitude: -75.529 W" </code></pre> <p>But some of them lack parts like this</p> <pre><code>"Focal Plane: 198' Active Aid to Navigation: Yes *Longitude: -75.529 W" </code></pre> <p>Using regular expression matching, I want to extract all the info available, including getting a tag "NotAvailable" for missing data.</p> <p>I wrote this code, but it can't tell when a pattern does not match -- should the pattern in the middle not match, my code just skips it and goes on to the next pattern:</p> <pre><code>ptrFocal = "(?&lt;=\\Focal Plane: )(.*?)(?=\\')"; ptrLat = "(?&lt;=\\*Latitude: )(.*?)(?=[a-zA-Z])"; ptrLon = "(?&lt;=\\*Longitude: )(.*?)(?=[a-zA-Z])"; r = StringTrim /@ StringCases[text, RegularExpression[ptrFocal &lt;&gt; "|" &lt;&gt; ptrLat &lt;&gt; "|" &lt;&gt; ptrLon] ]; </code></pre> <p>I would like some pointers on how to do this.</p> <p><strong>EDIT:</strong></p> <p>The expected transformation of the example data I gave above is</p> <blockquote> <pre><code>{198, 35.250, -75.529} {198, "NotAvailable", -75.529} </code></pre> </blockquote>
Mr.Wizard
121
<p>A variation of Kuba's method, using a single <code>StringCases</code> pass with post processing:</p> <pre><code>str2 = "Focal Plane: 198.12' Active Aid to Navigation: Yes *Longitude: -75.529 W" fields = {"Focal Plane: ", "Latitude: ", "Longitude: "}; StringCases[str2, a : fields ~~ x : NumberString :&gt; (a -&gt; ToExpression[x])] fields /. % /. _String :&gt; "NotAvailable" </code></pre> <blockquote> <pre><code>"Focal Plane: 198.12' Active Aid to Navigation: Yes *Longitude: -75.529 W" {"Focal Plane: " -&gt; 198.12, "Longitude: " -&gt; -75.529} {198.12, "NotAvailable", -75.529} </code></pre> </blockquote>
3,155,229
<p>Stumbled across this weird phenomenon using the equation <span class="math-container">$y = \frac{1}{x} $</span>.</p> <p><strong>Surface Area:</strong> When you calculate the surface area under the curve from 1 to <span class="math-container">$\infty$</span></p> <p><span class="math-container">$$\int_1^\infty \frac{1}{x}dx = \lim_{a \to \infty} \int_1^a \frac{1}{x}dx = \lim_{a \to \infty} \left[\ln\left|x\right|\right]^a_1 = \lim_{a \to \infty} (\ln\left|a\right|-\ln\left|1\right|) = \infty$$</span></p> <p><strong>Volume of revolution:</strong> When you calculate the volume of the revolution from 1 to <span class="math-container">$\infty$</span></p> <p><span class="math-container">$$\pi\int_1^\infty \left(\frac{1}{x}\right)^2dx = \pi\lim_{a \to \infty} \int_1^a \frac{1}{x^2}dx = \pi\lim_{a \to \infty}\left[-\frac{1}{x}\right]^a_1 = \pi \cdot (1-0) = \pi $$</span></p> <p>How can it be that an object with an infinite surface area under its curve has a finite volume when you rotate it around the axis?</p> <p>I get the math behind it and I'm assuming there is nothing wrong with the math. But it seems very counter-intuitive because if you rotate an infinite surface area just a little fraction it should have an infinite volume, that's what my intuition tells me. So can someone explain to me why this isn't like that, that an infinite surface area rotated around the axis can have a finite volume?</p>
Daan Seuntjens
655,854
<p>As @Minus One-Twelfth pointed out in the comments: this phenomenon is called Gabriel's horn.</p> <p>Gabriel's horn is a geometric figure which has infinite surface area but finite volume.</p> <p><a href="https://i.stack.imgur.com/zHAvX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zHAvX.png" alt="gabriel's horn" /></a></p> <p>How you can interpret the phenomenon:</p> <p>You can treat the horn as a stack of disks on top of each other with diminishing radii. Every disk has a radius <span class="math-container">$r = \frac{1}{x}$</span> and an area of <span class="math-container">$\pi r^2$</span> or <span class="math-container">$\frac{\pi}{x^2}$</span>.</p> <ul> <li>The sum of all the radii creates a series that behaves like the area under the curve <span class="math-container">$\frac{1}{x}$</span>.</li> <li>The sum of the areas of all the disks creates a series that behaves like the volume of the revolution.</li> </ul> <p>The series <span class="math-container">$\frac{1}{x}$</span> diverges but <span class="math-container">$\frac{1}{x^2}$</span> converges. So the area under the curve is infinite and the volume of the revolution is finite.</p> <p>This creates a paradox: you could fill the inside of the horn with a fixed volume of paint, but couldn't paint the inside surface of the horn. This paradox can be explained by using a 'mathematically correct paint', meaning that the paint can be spread out infinitely thin. Therefore, a finite volume of paint can paint an infinite surface.</p>
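The divergence/convergence contrast is easy to see numerically; a midpoint-rule sketch (the truncation points A are arbitrary):

```python
import math

def integrate(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h  # midpoint rule

for A in (10, 100, 1000):
    area = integrate(lambda x: 1 / x, 1, A)                   # ~ ln A, unbounded
    volume = math.pi * integrate(lambda x: 1 / x ** 2, 1, A)  # -> pi
    print(A, area, volume)
```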
4,269,434
<p>I want to compute the gradient of the function <span class="math-container">$f(\vec{x}) = \|\vec{x} - \vec{a}\|$</span>. I gave it a try, but the result is kind of strange to me.</p> <p>Here are my steps: <span class="math-container">\begin{align*} \nabla\|\vec{x} - \vec{a}\| &amp; = \nabla\sqrt{\sum_{i = 1}^{n}(x_i - a_i)^2} \\ &amp; = \frac{\nabla \sum_{i = 1}^{n} (x_i - a_i)^2}{2\sqrt{\sum_{i = 1}^{n}(x_i - a_i)^2}}\\ \end{align*}</span></p> <p>Since the gradient of <span class="math-container">$\sum_{i = 1}^{n} (x_i - a_i)^2$</span> gives back a vector, I think this should really be the original vector scaled by 2, <span class="math-container">$2(\vec{x} - \vec{a})$</span>, and then <span class="math-container">\begin{align*} \nabla\|\vec{x} - \vec{a}\| &amp; = \frac{\nabla \sum_{i = 1}^{n} (x_i - a_i)^2}{2\sqrt{\sum_{i = 1}^{n}(x_i - a_i)^2}}\\ &amp; = \frac{2(\vec{x}-\vec{a})}{2\sqrt{\sum_{i = 1}^{n}(x_i - a_i)^2}}\\ &amp; = \frac{\vec{x} - \vec{a}}{\| \vec{x} - \vec{a} \|_2} \end{align*}</span></p> <p>So it seems that the gradient of the function <span class="math-container">$f(\vec{x}) = \|\vec{x} - \vec{a}\|$</span> is really doing vector normalization? Am I making a mistake here? Can someone check this with me?</p>
Steph
993,428
<p>The differential approach is much simpler with almost no computation.</p> <p>Denote your scalar function <span class="math-container">$f = \| \mathbf{z} \|$</span> where <span class="math-container">$\mathbf{z=x-a}$</span>.</p> <p>From the relation <span class="math-container">$f^2 = \mathbf{z} : \mathbf{z}$</span>, it is easy to obtain <span class="math-container">$2 f df = 2 \mathbf{z}:d\mathbf{z}$</span> and thus <span class="math-container">$$ df = \frac{1}{f} \mathbf{z} : d\mathbf{z} = \frac{1}{f} \mathbf{z} : d\mathbf{x} $$</span> The gradient is <span class="math-container">$$ \frac{\partial f}{\partial \mathbf{x}}= \frac{1}{f} \left( \mathbf{x-a} \right) $$</span></p>
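A finite-difference check of the result (the sample points x and a below are arbitrary):

```python
import math

# grad ||x - a|| should equal (x - a)/||x - a||; compare against central
# differences at an arbitrary point.
def f(x, a):
    return math.sqrt(sum((xi - ai) ** 2 for xi, ai in zip(x, a)))

x = [1.0, 2.0, 3.0]
a = [0.5, -1.0, 2.0]
h = 1e-6
numeric = []
for i in range(len(x)):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    numeric.append((f(xp, a) - f(xm, a)) / (2 * h))
norm = f(x, a)
analytic = [(xi - ai) / norm for xi, ai in zip(x, a)]
print(numeric)
print(analytic)  # agrees with numeric to high precision
```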
1,794,855
<p>I need to prove that for every three integers $(a,b,c)$, $\gcd(a-b,b-c) = \gcd(a-b,a-c)$, assuming that $a \ne b$.</p> <p>Having:</p> <p>$d_1 = \gcd(a-b,b-c)$</p> <p>$d_2 = \gcd(a-b,a-c)$</p> <p>How do I prove $d_1 = d_2$?</p>
clark
33,325
<p>Let $X_n=B(n,p)$ be a binomially distributed random variable. Also notice that $X_n=Y_1+Y_2+\cdots+ Y_n$ where $Y_i$ are i.i.d. Bernoulli with parameter $p$.</p> <p>Now observe that \begin{align} \sum_{k=0}^n k\frac{n!}{k!\,(n-k)!}p^k(1-p)^{n-k}&amp;= \operatorname{E}(X_n)\\ &amp;= \operatorname{E}( Y_1+Y_2+\cdots + Y_n)\\ &amp;=\operatorname{E}( Y_1)+\operatorname{E}(Y_2)+\cdots +\operatorname{E}(Y_n)\\ &amp;=np \end{align}</p>
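The identity is easy to confirm numerically (a sketch with arbitrary n and p):

```python
from math import comb

# sum_k k * C(n,k) * p^k * (1-p)^(n-k) should equal n*p.
n, p = 10, 0.3
s = sum(k * comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1))
print(s, n * p)  # both 3.0 up to rounding
```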
3,065,572
<p>Function <span class="math-container">$f: (0, \infty) \to \mathbb{R}$</span> is continuous. For every positive <span class="math-container">$x$</span> we have <span class="math-container">$\lim\limits_{n\to\infty}f\left(\frac{x}{n}\right)=0$</span>. Prove that <span class="math-container">$\lim\limits_{x \to 0}f(x)=0$</span>. I have tried to deduce something from definition of continuity, but with no effect.</p>
EuxhenH
281,807
<p>Proof is wrong. You need <span class="math-container">$A\subseteq B$</span> and <span class="math-container">$B\subseteq A$</span> to show that <span class="math-container">$A=B$</span>. </p> <p>Take an element <span class="math-container">$x\in A$</span> and see what that implies.</p>
267,051
<p>Games appear in pure mathematics, for example, <a href="https://en.wikipedia.org/wiki/Ehrenfeucht%E2%80%93Fra%C3%AFss%C3%A9_game" rel="noreferrer">Ehrenfeucht–Fraïssé game</a> (in mathematical logic) and <a href="https://en.wikipedia.org/wiki/Banach%E2%80%93Mazur_game" rel="noreferrer">Banach–Mazur game</a> (in topology).</p> <p>But the Game Theory behind those applications is not so deep, and we don't need to know any fundamental theorems for them, except maybe Zermelo's theorem.</p> <p>Are there applications of <strong>advanced</strong> (anything beyond the basic definitions) game theory ideas in pure mathematics?</p> <p>Thanks!</p>
Igor Rivin
11,142
<p>Well, here is <a href="https://arxiv.org/abs/1408.1790" rel="noreferrer">one.</a> (A game-theoretic proof of Erdos-Feller-Kolmogorov-Petrowsky law of the iterated logarithm for fair-coin tossing, 2014).</p> <p>And this:</p> <p><em>Tanaka, Kazuyuki</em>, <a href="http://dx.doi.org/10.1002/malq.19920380127" rel="noreferrer"><strong>A game-theoretic proof of analytic Ramsey theorem</strong></a>, Z. Math. Logik Grundlagen Math. 38, No.4, 301-304 (1992). <a href="https://zbmath.org/?q=an:0798.03050" rel="noreferrer">ZBL0798.03050</a>.</p> <p>And <a href="http://logic.dorais.org/archives/72" rel="noreferrer">this very nice post</a> by Francois Dorais (who used to be an active contributor here) on Fraisse's theorem.</p> <p>And this ancient wisdom:</p> <p><em>Marshall, A.W.; Olkin, I.</em>, <a href="http://dx.doi.org/10.2140/pjm.1961.11.1421" rel="noreferrer"><strong>Game theoretic proof that Chebyshev inequalities are sharp</strong></a>, Pac. J. Math. 11, 1421-1429 (1961). <a href="https://zbmath.org/?q=an:0118.13703" rel="noreferrer">ZBL0118.13703</a>.</p>
52,802
<p>This is partly a programming and partly a combinatorics question.</p> <p>I'm working in a language that unfortunately doesn't support array structures. I've run into a problem where I need to sort my variables in increasing order.</p> <p>Since the language has functions for the minimum and maximum of two inputs (but the language does not allow me to nest them, e.g. <code>min(a, min(b, c))</code> is disallowed), I thought this might be one way towards my problem.</p> <p>If, for instance, I have two variables $a$ and $b$, I only need one temporary variable so that $a$ ends up being less than or equal to $b$:</p> <pre><code>t = min(a, b); b = max(a, b); a = t; </code></pre> <p>for three variables $a,b,c$, the situation is a little more complicated, but only one temporary variable still suffices so that $a \leq b \leq c$:</p> <pre><code>a = min(a, b); t = max(a, b); c = max(t, c); t = min(t, c); b = max(a, t); a = min(a, t); </code></pre> <p>Not having a strong combinatorics background, however, I don't know how to generalize the above constructions if I have $n$ variables in general. In particular, is there a way to figure out how many temporary variables I would need to sort out $n$ variables, and to figure out what is the minimum number of assignment statements needed for sorting?</p> <p>Thanks in advance!</p>
Martin Sleziak
8,297
<p>I thought it might be helpful to add a completely different viewpoint. (Feel free to ignore this answer, if other answers are clearer for you or if this post contains some things that you have not studied yet.)</p> <p>We work with subsets of some given set $M$. Such subsets can be identified with function from $M$ to $\{0,1\}$, namely, there is a bijection between $\mathcal P(M)$ and $\{0,1\}^M$ given by $$A\mapsto\chi_A$$ where $\chi_A$ is the characteristic function of the set $A$:</p> <p>$$\chi_A(x)= \begin{cases} 1,&amp; x\in A\\ 0,&amp; x\notin A \end{cases} $$</p> <p>So we have a bijection between $\mathcal P(M)$ and the ring $\mathbb Z_2^M$ (with the componentwise operations). Let us have a look which set operations correspond to the ring operations $\oplus$ and $\odot$. </p> <p>$$\chi_{A\triangle B}=\chi_A\oplus\chi_B$$ (I prefer the symbol $\triangle$ as the notation for symmetric difference.) $$\chi_{A\cap B}=\chi_A\odot\chi_B$$</p> <p>So $(\mathcal P(M),\triangle,\cap)$ is a ring isomorphic to $(\mathbb Z_2^M, \oplus, \odot)$. If we work with the second ring, we see that the identity for the operation $\oplus$ is $(0,0,\dots,0)$, which means that the identity for $\triangle$ is the corresponding set, which is $\emptyset$. </p> <p>I am mentioning this basically for the reason that it illustrates the situation when instead of proving some things about one structure (in this case it is $(\mathcal P(M),\triangle,\cap)$) it might be helpful to find an isomorphic structure which we are already familiar with.</p>
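The correspondence between the set operations and the ring operations can be checked concretely (the ground set M and the sets A, B below are arbitrary examples):

```python
# chi maps a subset of M to its characteristic vector; symmetric difference
# should become componentwise XOR and intersection componentwise product.
M = range(8)
A = {1, 2, 5}
B = {2, 3, 5, 7}

chi = lambda S: [1 if m in S else 0 for m in M]

assert chi(A ^ B) == [a ^ b for a, b in zip(chi(A), chi(B))]  # symmetric difference
assert chi(A & B) == [a * b for a, b in zip(chi(A), chi(B))]  # intersection
assert A ^ set() == A  # the empty set is the identity for symmetric difference
print("identities hold on this example")
```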
1,560,539
<p>I'm trying to show that the partial sums of $\log(j)$ satisfy $\sum_{j=1}^n \log(j) = n\log(n) - n + \text{O}(\log(n))$</p> <p>I know that $$\int_1^n\log(x)dx = n\log(n) - n + 1$$</p> <p>so that this number is pretty close to what I want.</p> <p>Now I look at the difference between sum and integral of log:</p> <p>$$\sum_{j=1}^n \log(j) - \int_1^n \log(x)dx$$</p> <p>My work:</p> <p>Working out the arithmetic and simplifying as much as possible, I am now at $$\sum_{j=1}^n \log(j) - \int_1^n \log(x)dx = \log(n) + \sum_{j=1}^{n-1} \big[\log(\large \frac{1}{1+\frac{1}{j}}) - \log(1 + \frac{1}{j})(j) + 1]$$ </p> <p>Where can I go from here? I think I would want to show that this difference, i.e. the R.H.S., is actually $O(\log(n))$ ...</p> <p>Thanks,</p>
marty cohen
13,079
<p>You have $\log(1+1/j) = 1/j+O(1/j^2)$, so $\sum_{j=2}^n \log\frac1{1+1/j} \approx \sum_{j=2}^n (-1/j+O(1/j^2)) \approx -\log(n)+O(1) $ and (I think you mean $n$ in this expression instead of $j$) $n\log(1+1/n) \approx 1$, which, I think, gives you what you want.</p>
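A numerical check of the claimed asymptotics (a sketch; it uses the added facts that $\sum_{j\le n}\log j = \ln\Gamma(n+1)$ and that the full Stirling correction is $\tfrac12\log(2\pi n)$, which is indeed $O(\log n)$):

```python
import math

n = 1000
log_sum = math.lgamma(n + 1)              # equals sum of log j for j = 1..n
diff = log_sum - (n * math.log(n) - n)    # should be O(log n)
print(diff, 0.5 * math.log(2 * math.pi * n))  # ~4.373 vs ~4.373
```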
1,315,199
<p>From the wikipedia article on sine waves:</p> <blockquote> <p>The sine wave is important in physics because it retains its wave shape when added to another sine wave of the same frequency and arbitrary phase and magnitude. It is the only periodic waveform that has this property. This property leads to its importance in Fourier analysis and makes it acoustically unique.</p> </blockquote> <p>I don't follow these statements. If you add two sine waves together with identical phase, isn't the amplitude doubled? Sure, it's still a sine wave, but isn't the same true of a square wave? If I add two square waves with equal phase, it's still a square wave, just with doubled amplitude. It has retained its shape. What am I missing?</p>
nullUser
17,459
<p>The statement means that $$\alpha_1 \sin(\omega x + \delta_1) + \alpha_2\sin(\omega x + \delta_2)$$ can be expressed as another sine wave of the form $\alpha_3\sin(\omega x + \delta_3)$, e.g. in the case $\alpha_1=\alpha_2 = 1$ $$ 2\cos(\frac{\delta_2}{2}-\frac{\delta_1}{2})\sin(\omega x + \frac{\delta_1}{2} + \frac{\delta_2}{2}) $$</p> <p>Note that the cosine term out front does not depend on $x$, it is just a constant in terms of $\delta_1$ and $\delta_2$.</p>
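A quick numerical check of the displayed identity in the case $\alpha_1=\alpha_2=1$ (the phase offset and sample points are arbitrary choices):

```python
import math

phi = 0.7  # delta_2 - delta_1, an arbitrary phase offset
for x in (0.0, 1.1, 2.5, -3.0):
    lhs = math.sin(x) + math.sin(x + phi)
    rhs = 2 * math.cos(phi / 2) * math.sin(x + phi / 2)
    print(x, lhs, rhs)  # equal up to rounding
```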
1,315,199
<p>From the wikipedia article on sine waves:</p> <blockquote> <p>The sine wave is important in physics because it retains its wave shape when added to another sine wave of the same frequency and arbitrary phase and magnitude. It is the only periodic waveform that has this property. This property leads to its importance in Fourier analysis and makes it acoustically unique.</p> </blockquote> <p>I don't follow these statements. If you add two sine waves together with identical phase, isn't the amplitude doubled? Sure, it's still a sine wave, but isn't the same true of a square wave? If I add two square waves with equal phase, it's still a square wave, just with doubled amplitude. It has retained its shape. What am I missing?</p>
Zach Effman
135,903
<p>This is referring to the sum identity for sines: $\sin(\alpha) + \sin(\beta) = 2\cos(\frac{\alpha-\beta}{2})\sin(\frac{\alpha+\beta}{2})$. </p> <p>From this we see that we can add sine waves of the same frequency but different phases and still get a sine wave of the original frequency. Specifically, applying the above to $\sin(x)$ and $\sin(x+\phi)$, we get $\sin(x)+\sin(x+\phi) = 2\cos(\frac{\phi}{2})\sin(x+\frac{\phi}{2})$. A little work generalizes this to arbitrary sinusoids of the same frequency.</p>
97,318
<p>I use this code but it doesn't work.</p> <pre><code>ZZ = { {1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}, {13, 14, 15, 16} } ; ZZ // MatrixForm K = ConstantArray[0, {4, 4, 4}]; K // MatrixForm For[ i = 1, i = 4, i++, For [j = 1, j = 4, j++, K[[i, j, 1]] = ZZ[[i, 1]] ; K[[i, j, 2]] = ZZ[[i, 2]] ; K[[i, j, 3]] = ZZ[[i, 3]] ; K[[i, j, 4]] = ZZ[[i, 4]] ; ] ]; K[[3, 2, 1]] </code></pre> <p>What am I doing wrong?</p>
e.doroskevic
18,696
<p>The immediate bug is the loop test: <code>For[i = 1, i = 4, i++, ...]</code> uses <code>=</code> (assignment) where it needs the comparison <code>i &lt;= 4</code>, so the test never evaluates to <code>True</code> and the body never runs. That said, idiomatic <em>Mathematica</em> avoids explicit loops. Try something like: </p> <pre><code>Table[10 i + j, {i, 4}, {j, 3}] // MatrixForm </code></pre> <p>Please refer to: <a href="https://reference.wolfram.com/language/ref/Table.html" rel="nofollow">https://reference.wolfram.com/language/ref/Table.html</a></p>
3,829,377
<p>We know the trace of a matrix is the sum of the eigenvalues of the given matrix. Suppose we know the characteristic polynomial of the matrix; is there any result which gives us the sum of the positive eigenvalues of the matrix?</p> <p>Note that I need the sum of only the positive eigenvalues...not all eigenvalues.</p>
Misha Lavrov
383,078
<p>This is essentially just as hard as finding the individual eigenvalues...</p> <p>...in particular, because if it were easy, you could <em>use</em> it to find the individual eigenvalues.</p> <p>From the characteristic polynomial of a matrix <span class="math-container">$A$</span>, it is easy to get the characteristic polynomial of <span class="math-container">$A-kI$</span>, and the sum of all the positive eigenvalues of <span class="math-container">$A-kI$</span> is equal to <span class="math-container">$\sum_{\lambda &gt; k}(\lambda - k)$</span>, the sum over the eigenvalues <span class="math-container">$\lambda$</span> of <span class="math-container">$A$</span> greater than <span class="math-container">$k$</span>, each shifted down by <span class="math-container">$k$</span>. In particular, if <span class="math-container">$k$</span> is between the largest eigenvalue and the second-largest, this sum will be <span class="math-container">$\lambda_{\max}-k$</span>, and adding <span class="math-container">$k$</span> back recovers the largest eigenvalue of <span class="math-container">$A$</span>.</p> <p>We can find such a <span class="math-container">$k$</span> with binary search (note that in particular, the sum of the positive eigenvalues of <span class="math-container">$A$</span> is an upper bound on the largest eigenvalue of <span class="math-container">$A$</span>, so we have a range to work with).</p> <p>Once we figure out the largest eigenvalue of <span class="math-container">$A$</span>, we could account for it and use a similar process to find the second-largest eigenvalue, third-largest, and so on.</p> <hr /> <p>Note that &quot;just as hard as finding the individual eigenvalues&quot; means that in particular, there can be no exact formula for finding this sum, when the matrix is <span class="math-container">$5\times 5$</span> or larger - in contrast to the sum of all eigenvalues, which is just the trace.</p>
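The bisection idea can be sketched as follows. The positive-eigenvalue-sum oracle is simulated from a known spectrum purely for illustration (and the sketch assumes the largest eigenvalue is positive); in the real setting the oracle would have to be extracted from the shifted characteristic polynomial, which is exactly the hard part:

```python
# Bisection for the largest eigenvalue, given an oracle for the sum of the
# positive eigenvalues of A - k*I. That sum is positive exactly when
# lambda_max > k, so bisection on k converges to lambda_max.
eigenvalues = [4.0, 1.5, -2.0]      # pretend these are hidden inside A

def pos_sum_shifted(k):
    """Sum of the positive eigenvalues of A - k*I (simulated)."""
    return sum(max(lam - k, 0.0) for lam in eigenvalues)

lo, hi = 0.0, pos_sum_shifted(0.0)  # the positive-eigenvalue sum bounds lambda_max
for _ in range(60):
    mid = (lo + hi) / 2
    if pos_sum_shifted(mid) > 0:
        lo = mid
    else:
        hi = mid
print(lo)  # ~4.0, the largest eigenvalue
```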
4,419,565
<p>Consider the function <span class="math-container">$f(x)$</span> and let <span class="math-container">$g(x)=f(cx)$</span>.</p> <p>By the definition of derivative</p> <p><span class="math-container">$$f'(x)=\frac{df(x)}{dx}=\lim\limits_{h \to 0} \frac{f(x+h)-f(x)}{h}\tag{1}$$</span></p> <p>and so the definition of <span class="math-container">$f'(cx)$</span> is</p> <p><span class="math-container">$$f'(cx)=\frac{df(cx)}{d(cx)}=\lim\limits_{h \to 0} \frac{f(cx+h)-f(cx)}{h}\tag{2}$$</span></p> <p>Also</p> <p><span class="math-container">$$g'(x) = \lim\limits_{h \to 0} \frac{g(x+h)-g(x)}{h}$$</span> <span class="math-container">$$=\lim\limits_{h \to 0} \frac{f(c(x+h))-f(cx)}{h}$$</span> <span class="math-container">$$=\lim\limits_{h \to 0} \frac{f(cx+ch)-f(cx)}{h}$$</span> <span class="math-container">$$=c\lim\limits_{h \to 0} \frac{f(cx+ch)-f(cx)}{ch}$$</span> <span class="math-container">$$=cf'(cx)$$</span></p> <p>Hence</p> <p><span class="math-container">$$g'(x)=cf'(cx)$$</span></p> <p>I am not sure if I am seeing an inexistent ambiguity, but <span class="math-container">$f'(cx)$</span> seems like ambiguous notation.</p> <p>As defined above in <span class="math-container">$(2)$</span>, <span class="math-container">$f'(cx)$</span> means the derivative of <span class="math-container">$f$</span> relative to <span class="math-container">$cx$</span>, evaluated at a point we call <span class="math-container">$cx$</span>, .</p> <p>Note that this is different than the derivative of <span class="math-container">$f$</span> relative to <span class="math-container">$x$</span> evaluated at a point <span class="math-container">$cx$</span>: this derivative is <span class="math-container">$g'(x)=\frac{df(cx)}{dx}$</span>. In &quot;prime&quot; notation, how do we denote this latter derivative? 
It would seem to be <span class="math-container">$f'(cx)$</span>, but I think either this is incorrect, or the definition given in <span class="math-container">$(2)$</span> is somehow non-standard.</p> <p>For example, let <span class="math-container">$$f(x)=3x^3$$</span> <span class="math-container">$$g(x)=f(cx)=3c^3x^3$$</span></p> <p>Then</p> <p><span class="math-container">$$f'(x)=\frac{df(x)}{dx}=9x^2$$</span> <span class="math-container">$$\left.\frac{df(x)}{dx}\right \vert_{x=cx}=9c^2x^2\ \ (=f'(cx)???)$$</span> <span class="math-container">$$g'(x)=\frac{df(cx)}{dx}=c\frac{df(cx)}{d(cx)}=c\cdot f'(cx)=c\cdot 9c^2x^2= 9c^3x^2\tag{3}$$</span></p> <p>In this example, <span class="math-container">$f'(cx)=\frac{df(cx)}{d(cx)}=9c^2x^2$</span> according to definition I gave in <span class="math-container">$(2)$</span>.</p> <p>But perhaps more intuitively, it could also be <span class="math-container">$f'(cx)= \left.\frac{df(x)}{dx}\right \vert_{x=cx}=9c^2x^2$</span></p> <p>Note that both uses of <span class="math-container">$f'(cx)$</span> lead to the same result in this example.</p> <p>Which use of <span class="math-container">$f'(cx)$</span> is the &quot;correct&quot; or &quot;standard&quot; one?</p>
Carl Schildkraut
253,966
<p>This is definitely potentially confusing, but not an ambiguity.</p> <p>The object written <span class="math-container">$f$</span> is a function, which takes an input and gives an output. The object written <span class="math-container">$f'$</span> is also a function, which is defined using the function <span class="math-container">$f$</span>. If <span class="math-container">$f$</span> is the function which squares its input (usually written like <span class="math-container">$f(x)=x^2$</span>), then <span class="math-container">$f'$</span> is the function which doubles its input (usually written like <span class="math-container">$f'(x)=2x$</span>). It's also true that <span class="math-container">$f'(cx)=2cx$</span>, because <span class="math-container">$f'(cx)$</span> does not mean &quot;the derivative of <span class="math-container">$f(cx)$</span>,&quot; it means &quot;the function <span class="math-container">$f'$</span> evaluated at an input of <span class="math-container">$cx$</span>.&quot;</p> <p>Your expression (2) is a correct application of the definition (1), although it would be odd to call it a definition itself. I do not know of a way to write &quot;the derivative of <span class="math-container">$f(cx)$</span> as a function of <span class="math-container">$x$</span>&quot; in prime notation, besides by writing a new function <span class="math-container">$g(x)$</span> defined by the rule <span class="math-container">$g(x)=f(cx)$</span>, from which &quot;the derivative of <span class="math-container">$f(cx)$</span> as a function of <span class="math-container">$x$</span>&quot; is <span class="math-container">$g'(x)$</span>.</p>
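<p>A quick numerical check of this reading, using the cubic $f(x)=3x^3$ from the question (a finite-difference sketch, not exact symbolic differentiation):</p>

```python
def f(t):
    return 3 * t**3

def fprime(t):          # f', the derivative of f, evaluated at its own input t
    return 9 * t**2

c, x, h = 2.0, 1.5, 1e-6

# g(x) = f(cx); differentiate g at x by a central difference.
g_prime = (f(c * (x + h)) - f(c * (x - h))) / (2 * h)

# Chain rule: g'(x) = c * f'(cx), where f'(cx) means "f' evaluated at cx".
assert abs(g_prime - c * fprime(c * x)) < 1e-4
```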
4,419,565
<p>Consider the function <span class="math-container">$f(x)$</span> and let <span class="math-container">$g(x)=f(cx)$</span>.</p> <p>By the definition of derivative</p> <p><span class="math-container">$$f'(x)=\frac{df(x)}{dx}=\lim\limits_{h \to 0} \frac{f(x+h)-f(x)}{h}\tag{1}$$</span></p> <p>and so the definition of <span class="math-container">$f'(cx)$</span> is</p> <p><span class="math-container">$$f'(cx)=\frac{df(cx)}{d(cx)}=\lim\limits_{h \to 0} \frac{f(cx+h)-f(cx)}{h}\tag{2}$$</span></p> <p>Also</p> <p><span class="math-container">$$g'(x) = \lim\limits_{h \to 0} \frac{g(x+h)-g(x)}{h}$$</span> <span class="math-container">$$=\lim\limits_{h \to 0} \frac{f(c(x+h))-f(cx)}{h}$$</span> <span class="math-container">$$=\lim\limits_{h \to 0} \frac{f(cx+ch)-f(cx)}{h}$$</span> <span class="math-container">$$=c\lim\limits_{h \to 0} \frac{f(cx+ch)-f(cx)}{ch}$$</span> <span class="math-container">$$=cf'(cx)$$</span></p> <p>Hence</p> <p><span class="math-container">$$g'(x)=cf'(cx)$$</span></p> <p>I am not sure if I am seeing a nonexistent ambiguity, but <span class="math-container">$f'(cx)$</span> seems like ambiguous notation.</p> <p>As defined above in <span class="math-container">$(2)$</span>, <span class="math-container">$f'(cx)$</span> means the derivative of <span class="math-container">$f$</span> relative to <span class="math-container">$cx$</span>, evaluated at a point we call <span class="math-container">$cx$</span>.</p> <p>Note that this is different from the derivative of <span class="math-container">$f$</span> relative to <span class="math-container">$x$</span> evaluated at a point <span class="math-container">$cx$</span>: this derivative is <span class="math-container">$g'(x)=\frac{df(cx)}{dx}$</span>. In &quot;prime&quot; notation, how do we denote this latter derivative? 
It would seem to be <span class="math-container">$f'(cx)$</span>, but I think either this is incorrect, or the definition given in <span class="math-container">$(2)$</span> is somehow non-standard.</p> <p>For example, let <span class="math-container">$$f(x)=3x^3$$</span> <span class="math-container">$$g(x)=f(cx)=3c^3x^3$$</span></p> <p>Then</p> <p><span class="math-container">$$f'(x)=\frac{df(x)}{dx}=9x^2$$</span> <span class="math-container">$$\left.\frac{df(x)}{dx}\right \vert_{x=cx}=9c^2x^2\ \ (=f'(cx)???)$$</span> <span class="math-container">$$g'(x)=\frac{df(cx)}{dx}=c\frac{df(cx)}{d(cx)}=c\cdot f'(cx)=c\cdot 9c^2x^2= 9c^3x^2\tag{3}$$</span></p> <p>In this example, <span class="math-container">$f'(cx)=\frac{df(cx)}{d(cx)}=9c^2x^2$</span> according to definition I gave in <span class="math-container">$(2)$</span>.</p> <p>But perhaps more intuitively, it could also be <span class="math-container">$f'(cx)= \left.\frac{df(x)}{dx}\right \vert_{x=cx}=9c^2x^2$</span></p> <p>Note that both uses of <span class="math-container">$f'(cx)$</span> lead to the same result in this example.</p> <p>Which use of <span class="math-container">$f'(cx)$</span> is the &quot;correct&quot; or &quot;standard&quot; one?</p>
Jackozee Hakkiuz
497,717
<p>I just want to emphasize one thing. You don't take the derivative of a function &quot;relative to something&quot;. The derivative of a function <span class="math-container">$f:\mathbb R \to \mathbb R$</span> is always taken &quot;with respect to&quot; one thing and one thing only: its input.</p> <p>If you want to see how the quantity <span class="math-container">$f(cx)$</span> varies when <span class="math-container">$x$</span> varies, then you should think of it as defining a new function <span class="math-container">$g$</span> by <span class="math-container">$g(x)=f(cx)$</span> (because you want to vary <span class="math-container">$x$</span>) and then differentiating this new function <span class="math-container">$g$</span>.</p> <p>My main point is that you have <strong>one</strong> operation that acts on functions and spits out functions: <strong>the derivative</strong>. It takes <span class="math-container">$f$</span> and gives you <span class="math-container">$f'$</span> back. It takes <span class="math-container">$g$</span> and gives you <span class="math-container">$g'$</span> back. After that, you can evaluate these functions wherever you want. The relation that you correctly found</p> <p><span class="math-container">$$g'(x) = cf'(cx)$$</span></p> <p>reads like this: when you evaluate the function <span class="math-container">$g'$</span> at the point <span class="math-container">$x$</span>, you obtain the same number as if you were to evaluate the function <span class="math-container">$f'$</span> at the point <span class="math-container">$cx$</span> and multiply the result by <span class="math-container">$c$</span>.</p> <p>That's it. No &quot;with respect to&quot;s involved.</p>
1,683,414
<p>$$\begin{cases} x^2 = yz + 1 \\ y^2 = xz + 2 \\ z^2 = xy + 4 \end{cases} $$</p> <p>How to solve above system of equations in real numbers? I have multiplied all the equations by 2 and added them, then got $(x - y)^2 + (y - z)^2 + (x - z)^2 = 14$, but it leads to nowhere.</p>
Robert Israel
8,508
<p>Maple indicates that a Groebner basis for the corresponding polynomials is $[z^2 - 4, y, z + 2 x]$. Thus there are two solutions: $x = \pm 1, y = 0, z = \mp 2$.</p>
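<p>A direct check (not part of the original computation) that the two claimed solutions satisfy both the system and the Groebner-basis relations:</p>

```python
def residuals(x, y, z):
    # Left-hand side minus right-hand side of each equation in the system.
    return (x**2 - (y*z + 1), y**2 - (x*z + 2), z**2 - (x*y + 4))

for x, y, z in [(1, 0, -2), (-1, 0, 2)]:
    assert residuals(x, y, z) == (0, 0, 0)
    # Both solutions also satisfy the basis relations y = 0, z + 2x = 0, z^2 = 4.
    assert y == 0 and z + 2*x == 0 and z**2 == 4
```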
2,482,868
<p>I am trying to find</p> <p>$$ \lim\limits_{x\to \infty }[( 1+x^{p+1})^{\frac1{p+1}}-(1+x^p)^{\frac1p}],$$</p> <p>where $p&gt;0$. I have tried to factor out as</p> <p>$$(1+x^{p+1})^{\frac1{p+1}}- \left( 1+x^{p}\right)^{\frac{1}{p}} =x\left(1+\frac{1}{x^{p+1}}\right)^{\frac{1}{p+1}}- x\left(1+\frac{1}{x^{p}}\right)^{\frac{1}{p}},$$ but still was not able to make progress. Any other approach to this is welcome.</p>
Adayah
149,178
<p>We can use the expansion</p> <p>$$(1+y)^{\alpha} = \sum_{n=0}^{\infty} \binom{\alpha}{n} \cdot y^n.$$</p> <p>Taking $y = \frac{1}{x^p}, \ \alpha = \frac{1}{p}$, we get </p> <p>$$\left( 1 + \frac{1}{x^p} \right)^{\frac{1}{p}} = 1 + \frac{1}{p x^p} + o \left( \frac{1}{x^{p}} \right).$$</p> <p>Similarly</p> <p>$$\left(1+\frac{1}{x^{p+1}} \right)^{\frac{1}{p+1}} = 1+\frac{1}{(p+1)x^{p+1}} + o \left( \frac{1}{x^{p+1}} \right) = 1 + o \left( \frac{1}{x^p} \right).$$</p> <p>Hence</p> <p>$$\begin{align*} \left( 1+x^{p+1} \right)^{\frac{1}{p+1}} - \left(1+x^p\right)^{\frac{1}{p}} &amp; = x \left[ \left( 1 + \frac{1}{x^p} \cdot o(1) \right) - \left( 1 + \frac{1}{x^p} \left( \frac{1}{p} + o(1) \right) \right) \right] \\[1ex] &amp; = -\frac{x}{x^p} \left( \frac{1}{p} + o(1) \right). \end{align*}$$</p> <ul> <li>If $p &gt; 1$, the limit is $0$.</li> <li>If $p = 1$, the limit is $-1$.</li> <li>If $p &lt; 1$, the limit is $-\infty$.</li> </ul>
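<p>A numeric sanity check of the three cases at a large value of $x$:</p>

```python
def gap(x, p):
    # (1 + x^(p+1))^(1/(p+1)) - (1 + x^p)^(1/p)
    return (1 + x**(p + 1))**(1 / (p + 1)) - (1 + x**p)**(1 / p)

x = 1e6
assert abs(gap(x, 2)) < 1e-3          # p > 1: tends to 0
assert abs(gap(x, 1) + 1) < 1e-3      # p = 1: tends to -1
assert gap(x, 0.5) < -1000            # p < 1: tends to -infinity
```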
685,642
<p>I am trying to find a basis for the set of all $n \times n$ matrices with trace $0$. I know that part of that basis will be matrices with $1$ in only one entry and $0$ for all others for entries outside the diagonal, as they are not relevant.</p> <p>I don't understand though how to generalize for the entries on the diagonal. Maybe just one matrix with $1$ in the $(1, 1)$ position and a $-1$ in all other $n - 1$ positions?</p>
sunspots
110,953
<p>Consider the subspace <span class="math-container">$W = \{ A \in M_{n \times n}(F) : tr(A) = 0\} = \{ A \in M_{n \times n}(F) : \sum_{i=1}^{n} A_{ii} = 0\}.$</span> Now, we apply the standard representation of <span class="math-container">$M_{n \times n}(F)$</span> with respect to the matrix units <span class="math-container">$\beta,$</span> which is an isomorphism <span class="math-container">$\phi_{\beta}$</span> from <span class="math-container">$M_{n \times n}(F)$</span> onto <span class="math-container">$F^{n^{2}}.$</span> Namely, we express each element of <span class="math-container">$W$</span> as a coordinate vector relative to <span class="math-container">$\beta,$</span> i.e., <span class="math-container">$$\phi_{\beta}(W) = \{ (A_{11},A_{12},\dots,A_{21},A_{22},\dots,A_{nn}) \in F^{n^{2}} : \sum_{i=1}^{n} A_{ii} = 0\}.$$</span></p> <p>Next, we encode the linear constraint <span class="math-container">$\sum_{i=1}^{n} A_{ii} = 0$</span> into the following matrix <span class="math-container">$$B = \begin{pmatrix} 1 &amp; 0 &amp; \cdots &amp; 0 &amp; 1 &amp; \cdots &amp; 1\end{pmatrix},$$</span> since the solution set of the corresponding homogeneous system: <span class="math-container">$$B \begin{pmatrix} A_{11} \\ A_{12} \\ \vdots \\ A_{21} \\ A_{22} \\ \vdots \\ A_{nn} \end{pmatrix} = 0$$</span> satisfies the constraint. Now, we identify the solution set as the null space <span class="math-container">$N(B),$</span> then we apply the dimension theorem (<span class="math-container">$dim(N(B)) = nullity(B) = dim(F^{n^{2}}) - rank(B)$</span>). Thus, the problem is to find a subset of the null space with exactly <span class="math-container">$n^{2} - 1$</span> linearly independent vectors.</p> <p>To obtain this subset, we identify the <span class="math-container">$B_{11} = 1$</span> entry as the pivot variable. Next, we identify the remaining entries as the free variables. 
Then, we obtain a solution <span class="math-container">$$\{ (0,1,\dots,0,0,\dots,0),\dots, (-1,0,\dots,0,1,\dots,0),\dots,(-1,0,\dots,0,0,\dots,1)\},$$</span> where we have expressed the vectors as <span class="math-container">$n^{2}$</span>-tuples. Lastly, we apply the inverse of the standard representation <span class="math-container">$\phi_{\beta}^{-1}$</span> to the solution in order to return square matrices instead of coordinate vectors.</p>
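<p>A small computational sketch for $n=3$ (my own illustration, pure Python): build the $n^2-1=8$ basis matrices described above and verify that each has trace zero and that they are linearly independent.</p>

```python
n = 3

def unit(i, j):
    # Matrix unit E_ij: 1 in entry (i, j), 0 elsewhere.
    M = [[0.0] * n for _ in range(n)]
    M[i][j] = 1.0
    return M

basis = []
# Off-diagonal free variables: the matrix units E_ij with i != j.
for i in range(n):
    for j in range(n):
        if i != j:
            basis.append(unit(i, j))
# Diagonal free variables A_kk (k >= 1): put 1 there and -1 at the pivot A_00.
for k in range(1, n):
    M = unit(k, k)
    M[0][0] = -1.0
    basis.append(M)

assert len(basis) == n * n - 1
assert all(sum(M[i][i] for i in range(n)) == 0 for M in basis)

# Linear independence: flatten each matrix to an n^2-vector, then Gauss-eliminate.
rows = [sum(M, []) for M in basis]
rank = 0
for col in range(n * n):
    piv = next((i for i in range(rank, len(rows)) if abs(rows[i][col]) > 1e-9), None)
    if piv is None:
        continue
    rows[rank], rows[piv] = rows[piv], rows[rank]
    for i in range(len(rows)):
        if i != rank and abs(rows[i][col]) > 1e-9:
            f = rows[i][col] / rows[rank][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
    rank += 1

assert rank == n * n - 1     # the 8 matrices are linearly independent
```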
3,489,642
<blockquote> <p>Let <span class="math-container">$D_{2\cdot 8}$</span> be given by the group presentation <span class="math-container">$\langle x,y\mid xy = yx^{-1} , y^2 = e, x^8 = e\rangle$</span>. Let <span class="math-container">$G = F_{\{x,y\}}$</span> be the free group on two generators and <span class="math-container">$N = \langle\{xyx^{-1}y,y^2,x^8\}\rangle$</span>. Then</p> <ol> <li><span class="math-container">$N$</span> is a normal subgroup of <span class="math-container">$G$</span>.</li> <li>The quotient group <span class="math-container">$G/N$</span> is isomorphic to <span class="math-container">$D_{2\cdot 8}$</span>.</li> </ol> </blockquote> <p>For 1. I need to show that <span class="math-container">$gng^{-1}\in N$</span> for all <span class="math-container">$g\in G$</span>, <span class="math-container">$n\in N$</span>, i.e. that <span class="math-container">$N$</span> is closed under conjugation by elements of <span class="math-container">$G$</span>. But for example I don't see how we could write <span class="math-container">$yx^8y^{-1}$</span> as a product of elements of <span class="math-container">$\{xyx^{-1}y, y^2,x^8\}$</span> and their inverses.</p> <p>For 2. I see that the quotient group must consist of <span class="math-container">$2\cdot8=16$</span> equivalence classes. But I don't see how we find these explicitly.</p> <p>I believe it suffices to exhibit a group homomorphism <span class="math-container">$\varphi$</span> such that <span class="math-container">$\ker\varphi = \langle\{xyx^{-1}y,y^2,x^8\}\rangle$</span> and <span class="math-container">$\mathrm{Im}\varphi\cong G/N$</span>, by the first isomorphism theorem. But I am not sure how to define this homomorphism exactly.</p>
Con
682,304
<p>Hint: Define <span class="math-container">$\varphi \colon F_{\lbrace x,y \rbrace} \rightarrow D_{2 \cdot 8}$</span>, by sending <span class="math-container">$x$</span> and <span class="math-container">$y$</span> to the two generators of the dihedral group that you also called <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in the presentation above. The kernel will then be given by the relations of the dihedral group which is exactly what you want and what you can see from the presentation.</p>
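<p>One way to check the resulting presentation concretely is to realize $D_{2\cdot 8}$ as permutations of the vertices of a regular octagon. The sketch below is my own illustration; the names <code>r</code> and <code>s</code> stand for the images of $x$ and $y$.</p>

```python
N = 8

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations stored as tuples of images.
    return tuple(p[q[i]] for i in range(N))

def inverse(p):
    out = [0] * N
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)

e = tuple(range(N))
r = tuple((i + 1) % N for i in range(N))   # image of x: rotation by one vertex
s = tuple((-i) % N for i in range(N))      # image of y: a reflection

# Generate the subgroup generated by r and s by closure.
group = {e}
frontier = [e]
while frontier:
    nxt = []
    for g in frontier:
        for h in (r, s):
            c = compose(g, h)
            if c not in group:
                group.add(c)
                nxt.append(c)
    frontier = nxt

assert len(group) == 16                          # |D_{2*8}| = 16
assert compose(s, s) == e                        # y^2 = e
rk = e
for _ in range(8):
    rk = compose(rk, r)
assert rk == e                                   # x^8 = e
assert compose(r, s) == compose(s, inverse(r))   # xy = yx^{-1}
```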
1,412,862
<p>The author of my textbook has an unsatisfactory proof when it is describing the properties of the closure of a set. I'm using <span class="math-container">$E^*$</span> for E closure. Also, <span class="math-container">$E'$</span> indicates the set of limit points of <span class="math-container">$E$</span>.</p> <blockquote> <p><strong>Theorem:</strong> <span class="math-container">$E^*\subset F$</span> for every closed set <span class="math-container">$F\subset X$</span> such that <span class="math-container">$E\subset F$</span></p> <p><em>Proof</em>: If <span class="math-container">$F$</span> is closed and <span class="math-container">$E\subset F$</span>, then <span class="math-container">$F'\subset F$</span>, hence <span class="math-container">$E'\subset F$</span>. Thus <span class="math-container">$E^*\subset F$</span>.</p> </blockquote> <p>My question is, why is he able to conclude that <span class="math-container">$F$</span> contains <span class="math-container">$E$</span>'? Why is <span class="math-container">$E'$</span> a subset of <span class="math-container">$F'$</span>?</p>
Gabriel Sanfins
247,708
<p>The main implication we will use when proving this theorem is that:</p> <p>If $E\subset F $ then $ E^* \subset F^*$.</p> <p><em>Proof:</em> take $x \in E^*$, then for every neighborhood $V$ of $x$, there is a point $y \in E$ such that $y \in V$ as well. But we have that $E \subset F$, so $y \in F$ and $y \in V$. From this, we conclude that $x \in F^*$ and we have the result.</p> <p>The other fact we will use in proving the main result is that $F$ is closed if and only if $F=F^*$.</p> <p>Now let's prove the result:</p> <p><em>Proof:</em> Suppose that $E \subset F$ and $F$ is closed, thus by the previous result, we have that $E^* \subset F^* = F$. And we have the result.</p>
4,154,205
<p>I study maths as a hobby. I am stuck on this question:</p> <p>Find the values of c for which the line <span class="math-container">$2x-3y = c$</span> is a tangent to the curve <span class="math-container">$x^2+2y^2=2$</span> and find the equation of the line joining the points of contact.</p> <p>I have established that <span class="math-container">$c=\pm \sqrt{17}$</span>.</p> <p>To get the line joining the points of contact I thought one way would be to start by finding the points of contact.</p> <p>For <span class="math-container">$c =\sqrt{17}$</span> the equation of the line is <span class="math-container">$y = \frac{2x-\sqrt{17}}{3}$</span> and the equation of the curve is <span class="math-container">$y = \sqrt{\frac{2-x^2}{2}}$</span></p> <p>At the tangent</p> <p><span class="math-container">$\frac{2x-\sqrt{17}}{3} = \sqrt{\frac{2-x^2}{2}}$</span></p> <p><span class="math-container">$\rightarrow \frac{4x^2-4x\sqrt{17}+17}{9} = \frac{2-x^2}{2}$</span></p> <p><span class="math-container">$\rightarrow 17x^2-8x\sqrt{17}+16=0$</span></p> <p><span class="math-container">$\rightarrow x = \frac{8\sqrt{17}\pm\sqrt{1088-1088}}{34}=\frac{8\sqrt{17}}{34}$</span> but proceeding in this fashion seems very messy and I don't see how it will lead to the answer given in the book as <span class="math-container">$3x+4y=0$</span></p>
Community
-1
<p>I think there is a subtle distinction between your case and one where I would use WLOG in my writings.</p> <p>For me, WLOG is for when you have a number of cases that are dealt with in a nearly identical manner, so that you'd really like to treat them as a single case. For instance, if I am dealing with a rectangular prism for some reason, I can assume WLOG that no side is longer than the length and no side is shorter than the height. I can do that because I can reorient the box to turn six potential cases into one.</p> <p>Your example is different, because we deal with empty sets in a different way than we deal with non-empty sets. I agree with your instinct to wave them away with a single phrase as it is a trivial case. But my personal writing style would be to say something like this:</p> <blockquote> <p>If any of the terms of the family <span class="math-container">$X$</span> were empty, they would obviously contribute nothing to <span class="math-container">$\bigcup_{i\in I}X_i$</span>. Therefore, for the remainder of this paragraph, let us assume that every term <span class="math-container">$X_i$</span> is non-empty....</p> </blockquote> <p>That being said, I doubt many people would raise their eyebrows if you used WLOG to describe this circumstance, and none would be so troubled as to let it harm their understanding of your larger argument. But I learned that my two favorite undergraduate professors have passed on since the last time I looked, and this seems like the sort of rhetorical pedantry that they would love to point out. ^_^</p>
6,219
<p>First we craft a function to return the quadrant boundary based on <a href="http://en.wikipedia.org/wiki/Oppermann%27s_conjecture" rel="noreferrer">Oppermann's Conjecture</a></p> <pre><code>a[n_] := (Mod[n, 2] + n^2 + 2 n)/4 </code></pre> <p>Then we create a few lists</p> <pre><code>r = 10; q = 1; q1 = Table[a[q + 4 (n - 1)] &lt;-&gt; a[q + 4 (n)], {n, 1, r}]; q = 2; q2 = Table[a[q + 4 (n - 1)] &lt;-&gt; a[q + 4 (n)], {n, 1, r}]; q = 3; q3 = Table[a[q + 4 (n - 1)] &lt;-&gt; a[q + 4 (n)], {n, 1, r}]; q = 4; q4 = Table[a[q + 4 (n - 1)] &lt;-&gt; a[q + 4 (n)], {n, 1, r}]; u = Flatten[Table[{(n - 1) &lt;-&gt; n}, {n, 2, a[4 + 4 r] + 1}]]; </code></pre> <p>We produce the normal <a href="http://mathworld.wolfram.com/PrimeSpiral.html" rel="noreferrer">Ulam's Spiral</a></p> <pre><code>Graph[u] </code></pre> <p><img src="https://i.stack.imgur.com/eDqsa.png" alt="Ulam&#39;s Spiral"></p> <p>We don't get the spiral when we attempt to combine the diagonal lists by using this</p> <pre><code>Graph[Union[u, q1, q2, q3, q4]] </code></pre> <p><img src="https://i.stack.imgur.com/RaRF3.png" alt="Ulam Bad Spiral"></p> <p>How can we overlay the diagonals onto the spiral?</p>
Heike
46
<p>Starting with <code>Graph[u]</code> you can extract the coordinates of the vertices as follows</p> <pre><code>gr = Graph[u]; crds = AbsoluteOptions[gr, VertexCoordinates]; </code></pre> <p>The graph including the diagonals can then be drawn according to</p> <pre><code>Graph[VertexList[gr], Union[u, q1, q2, q3, q4], crds] </code></pre> <p><img src="https://i.stack.imgur.com/tXEha.png" alt="Mathematica graphics"></p>
6,219
<p>First we craft a function to return the quadrant boundary based on <a href="http://en.wikipedia.org/wiki/Oppermann%27s_conjecture" rel="noreferrer">Oppermann's Conjecture</a></p> <pre><code>a[n_] := (Mod[n, 2] + n^2 + 2 n)/4 </code></pre> <p>Then we create a few lists</p> <pre><code>r = 10; q = 1; q1 = Table[a[q + 4 (n - 1)] &lt;-&gt; a[q + 4 (n)], {n, 1, r}]; q = 2; q2 = Table[a[q + 4 (n - 1)] &lt;-&gt; a[q + 4 (n)], {n, 1, r}]; q = 3; q3 = Table[a[q + 4 (n - 1)] &lt;-&gt; a[q + 4 (n)], {n, 1, r}]; q = 4; q4 = Table[a[q + 4 (n - 1)] &lt;-&gt; a[q + 4 (n)], {n, 1, r}]; u = Flatten[Table[{(n - 1) &lt;-&gt; n}, {n, 2, a[4 + 4 r] + 1}]]; </code></pre> <p>We produce the normal <a href="http://mathworld.wolfram.com/PrimeSpiral.html" rel="noreferrer">Ulam's Spiral</a></p> <pre><code>Graph[u] </code></pre> <p><img src="https://i.stack.imgur.com/eDqsa.png" alt="Ulam&#39;s Spiral"></p> <p>We don't get the spiral when we attempt to combine the diagonal lists by using this</p> <pre><code>Graph[Union[u, q1, q2, q3, q4]] </code></pre> <p><img src="https://i.stack.imgur.com/RaRF3.png" alt="Ulam Bad Spiral"></p> <p>How can we overlay the diagonals onto the spiral?</p>
kglr
125
<p>You can also use <code>GraphEmbedding</code> to get the vertex coordinates:</p> <pre><code>Graph[VertexList[g = Graph[u]], Union[u, q1, q2, q3, q4], VertexCoordinates -&gt; GraphEmbedding[g]] </code></pre> <p><a href="https://i.stack.imgur.com/BjC1R.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BjC1R.jpg" alt="enter image description here"></a></p>
1,992,789
<blockquote> <p>Let $\{x_n\}$ be a sequence that does not converge and let L be a real number. Prove that there exist $\epsilon &gt;0$ and a sub-sequence $\{x_{p_n}\}$ of $\{x_n\}$ such that $|x_{p_n}-L|&gt;\epsilon$ for all n.</p> </blockquote> <p>I don't have any idea on how to prove this. Any advice and suggestions will be greatly appreciated.</p>
Robert Z
299,698
<p>Use the <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow">inclusion-exclusion principle</a>. </p> <p>There are $\frac{8!}{2^4}$ 8-digit numbers that can be formed using two 1s, two 2s, two 3s, and two 4s.</p> <p>i) How many of these 8-digit numbers have two adjacent 1s? There are $7\cdot \frac{6!}{2^3}$ such numbers.</p> <p>ii) How many of these 8-digit numbers have two adjacent 1s and two adjacent 2s in this order? There are $15\cdot \frac{4!}{2^2}$ such numbers.</p> <p>iii) How many of these 8-digit numbers have two adjacent 1s, two adjacent 2s and two adjacent 3s in this order? There are $10\cdot \frac{2!}{2^1}$ such numbers.</p> <p>iv) How many of these 8-digit numbers have two adjacent 1s, two adjacent 2s, two adjacent 3s and two adjacent 4s in this order? There is only $1$ such number.</p> <p>Hence the final result is (why?): $$\frac{8!}{2^4}-4\cdot7\cdot \frac{6!}{2^3} +(4\cdot 3)\cdot 15\cdot \frac{4!}{2^2} -(4\cdot 3\cdot 2)\cdot 10\cdot \frac{2!}{2^1} +(4!)\cdot 1\cdot \frac{0!}{2^0}=864.$$</p>
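<p>A brute-force check of this count (enumerating all distinct arrangements and discarding those with an adjacent equal pair):</p>

```python
from itertools import permutations

digits = [1, 1, 2, 2, 3, 3, 4, 4]
arrangements = set(permutations(digits))      # 8!/2^4 = 2520 distinct arrangements
assert len(arrangements) == 2520

good = [p for p in arrangements
        if all(p[i] != p[i + 1] for i in range(len(p) - 1))]
assert len(good) == 864                       # matches the inclusion-exclusion total
```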
1,027,235
<p>Let N = 12345678910111213141516171819. How can I use modular arithmetic to show that N is (or isn't) divisible by 11? In general, how can I apply modular arithmetic to determine the divisibility of an integer by a smaller integer? I am finding modular arithmetic very confusing and unintuitive. I can understand "simple" modular arithmetic like the 24-hour day etc. but when it comes down to finding the modulo of high raised powers, or checking divisibility of large integers, I am totally lost. </p>
Jessica B
81,247
<p>You want to find what that number is mod 11. The good thing is, you don't have to get the answer in one go. You just keep applying the rule 'any multiple of 11 goes away'. Notice that at the start of the number you have some power of 10 times 12. That is therefore the same as that power of 10 times 1. So replace the 12 by a 1. Now your number starts with a 13, which can be replaced with a 2...</p>
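<p>This digit-replacement trick is just a left-to-right reduction of the running prefix mod $11$; a short sketch:</p>

```python
def mod11(digits: str) -> int:
    r = 0
    for ch in digits:
        # Append the next digit to the running remainder and discard
        # multiples of 11 -- the "replace 12 by 1, 13 by 2, ..." trick.
        r = (r * 10 + int(ch)) % 11
    return r

s = "12345678910111213141516171819"
assert mod11(s) == int(s) % 11
assert mod11(s) == 7     # remainder 7, so N is not divisible by 11
```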
1,027,235
<p>Let N = 12345678910111213141516171819. How can I use modular arithmetic to show that N is (or isn't) divisible by 11? In general, how can I apply modular arithmetic to determine the divisibility of an integer by a smaller integer? I am finding modular arithmetic very confusing and unintuitive. I can understand "simple" modular arithmetic like the 24-hour day etc. but when it comes down to finding the modulo of high raised powers, or checking divisibility of large integers, I am totally lost. </p>
Community
-1
<p><strong>Hint</strong>:</p> <p>$f(x)=\displaystyle\sum_{i=0}^mc_ix^i\ \ (c_i \in \mathbb{Z} \ \forall i)\land a\equiv b \pmod n \implies f(a)\equiv f(b) \pmod n$</p>
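<p>The alternating-digit-sum test for $11$ is an instance of this hint: writing $N=f(10)$ with the digits of $N$ as coefficients, $10 \equiv -1 \pmod{11}$ gives $N \equiv f(-1) \pmod{11}$. A quick check:</p>

```python
N = 12345678910111213141516171819
digits = [int(ch) for ch in str(N)]   # coefficients of f, highest power first

# f(10) rebuilds N; f(-1) is the alternating digit sum.
f10 = sum(d * 10**k for k, d in enumerate(reversed(digits)))
fm1 = sum(d * (-1)**k for k, d in enumerate(reversed(digits)))

assert f10 == N
assert N % 11 == fm1 % 11   # f(10) and f(-1) agree mod 11, since 10 = -1 mod 11
```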
3,305,677
<p>I am learning Asymptotic complexity of functions from CLRS. I know that exponentiation functions like <span class="math-container">$a^n$</span>,<span class="math-container">$(a&gt;0)$</span> are faster than <span class="math-container">$n!$</span> But what about <span class="math-container">$a^{a^n}$</span> vs <span class="math-container">$n!$</span> How do they compare? A proof would really help me understand the concept. Thanks!</p>
logdev
566,485
<p>How do they compare? <span class="math-container">$n!=\mathcal{O}(a^{a^n})$</span> (assuming <span class="math-container">$a &gt; 1$</span>).</p> <p>Proof: If we bring down both <span class="math-container">$n!$</span> and <span class="math-container">$a^{a^n}$</span> to <span class="math-container">$\log$</span> scale, then <span class="math-container">$n!$</span> in <span class="math-container">$\log$</span> scale will be approximately <span class="math-container">$n\log n$</span> (as <span class="math-container">$\log(n!)\approx n\log n$</span> using Stirling's approximation) and <span class="math-container">$a^{a^n}$</span> will be brought down to <span class="math-container">$a^n\log a$</span>.</p> <p><span class="math-container">$n\log n$</span> grows slower than <span class="math-container">$a^n\log a$</span>, which implies <span class="math-container">$n!=\mathcal{O}(a^{a^n})$</span>.</p>
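<p>A numeric illustration of the two $\log$-scale quantities, taking $a=2$ (my own sketch; <code>lgamma(n+1)</code> computes $\log(n!)$):</p>

```python
from math import lgamma, log

a = 2
ratios = []
for n in (10, 20, 30, 40):
    log_fact = lgamma(n + 1)        # log(n!)
    log_tower = a**n * log(a)       # log(a^(a^n)) = a^n * log(a)
    ratios.append(log_fact / log_tower)

# n log n is dwarfed by a^n log a, and the gap keeps widening,
# consistent with n! = O(a^(a^n)).
assert all(r < 1 for r in ratios)
assert ratios == sorted(ratios, reverse=True)
```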
3,933,011
<p>I know how to solve basic problems of this kind. But in this problem</p> <p><span class="math-container">$\displaystyle{\frac{1}{2\pi i} \oint_{|z|=1} \frac{\cos \left(e^{-z}\right)}{z^2} dz}$</span>,</p> <p>I don't know how to start. Can somebody help me or guide me, or just give me a hint? I would appreciate the help. Thanks.</p>
Thusle Gadelankz
840,795
<p>You could also do this using the residue theorem directly, as the title asks for. I wish to emphasize that I am aware that the calculation is completely analogous, and that I simply show this as an alternative since it appears from the comments and the title that the asker might be more comfortable with the residue theorem as the starting point here. I urge them, however, to think about why these two approaches are equivalent in this case.</p> <p>Observe that <span class="math-container">$f(z) = \frac{\cos(e^{-z})}{z^2}$</span> has a pole of order 2 at <span class="math-container">$z = 0$</span> with residue</p> <p><span class="math-container">$$\text{Res}(0) = \lim_{z \rightarrow 0} \frac{d}{dz} z^2 f(z) = \lim_{z \rightarrow 0} \sin(e^{-z})e^{-z} = \sin(1).$$</span> By the residue theorem we then have</p> <p><span class="math-container">$$\frac{1}{2\pi i} \int_{|z|=1} \frac{\cos(e^{-z})}{z^2} dz = \sin(1) $$</span></p>
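<p>A numeric cross-check of this value by direct quadrature of the contour integral over $|z|=1$ (the trapezoidal rule on the periodic parametrization converges very quickly here):</p>

```python
import cmath
from math import pi, sin

M = 2000                                     # quadrature points on |z| = 1
total = 0j
for k in range(M):
    theta = 2 * pi * k / M
    z = cmath.exp(1j * theta)
    dz = 1j * z * (2 * pi / M)               # dz = i e^{i theta} d theta
    total += cmath.cos(cmath.exp(-z)) / z**2 * dz

value = total / (2j * pi)
assert abs(value - sin(1)) < 1e-8            # matches Res(0) = sin(1)
```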
454,426
<blockquote> <p>In set theory and combinatorics, the cardinal number $n^m$ is the size of the set of functions from a set of size m into a set of size $n$.</p> </blockquote> <p>I read this from this <a href="http://en.wikipedia.org/wiki/Empty_product#0_raised_to_the_0th_power" rel="nofollow noreferrer">Wikipedia page</a>.</p> <p>I don't understand, however, why this is true. I reason with this example in which $M$ is a set of size $5$, and $N$ is a set of size $3$. For each element in set $M$, there are three functions to map the element from the set of size $5$ to an element in the set of size $3$. </p> <p>By my reasoning, that means the total number of functions is just $3*5$, i.e. $3$ functions for each of the $5$ elements in the set. Why is it actually $3^5$? I saw on <a href="https://math.stackexchange.com/questions/209361/size-of-the-set-of-functions-from-x-to-y">this thread</a> that the number of functions from a set of size $n$ to a set of size $m$ is equivalent to "How many $m$-digit numbers can I form using the digits $1,2,...,n$ and allowing repetition?" I know how to answer that question, but I don't know why it's the same thing as finding the number of functions from the size $n$ set to the size $m$ set. </p>
Ben Grossmann
81,360
<p>Here's an attempt to explain it understandably through induction:</p> <p>Suppose we want to make a function from $\{a,b,c\}$ to $\{1,2,3,4,5\}$. How many choices do we have?</p> <p>Well, we have to start by choosing some $f(a)$. We have $5$ choices for that. We also have $5$ choices for $f(b)$. However, we can make this choice independently of $f(a)$, which means that for every function on $\{a\}$, there are $5$ choices for $f(b)$. So, there are $5\times 5=25$ choices for $f$ on $\{a,b\}$.</p> <p>Now what about $\{a,b,c\}$? Well, there are $25$ choices for $f$ on $\{a,b\}$, and each such choice can be extended by choosing one of $5$ options for $f(c)$. So, there are $25\times5=5\times5\times5=5^3$ options for a function on $\{a,b,c\}$.</p> <p>Hope that explanation helps a little bit.</p>
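<p>This independent-choice count is easy to confirm by brute force, enumerating every assignment of an output to each input:</p>

```python
from itertools import product

domain = ['a', 'b', 'c']
codomain = [1, 2, 3, 4, 5]

# Each function is one choice of output per input; product enumerates them all.
functions = [dict(zip(domain, outputs))
             for outputs in product(codomain, repeat=len(domain))]

assert len(functions) == 5 ** 3    # 125 functions from a 3-set to a 5-set
```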
63,348
<p>This question arises from a discussion with my friends on a commonly encountered IQ test question: "What's the next number in this series 2,6,12,20,...". Here a "number" usually means an integer. I was wondering whether there is a systematic way to solve such problems. Let us call a point in the plane an integer point if all of its components are integers. I want to know the following:</p> <p>Given a finite set of integer points, can we always find a corresponding polynomial that passes through all these points and maps integers to integers? </p>
Aaron Meyerowitz
8,008
<p>Just to make a few comments:</p> <p>1) As noted, if we have a list of values $a_0,a_1,\cdots, a_n$ of integers then there is a (unique) polynomial $f(t)$ of degree no more than $n$ which maps integers to integers and such that $f(k)=a_k$ for $0 \le k \le n$.</p> <p>2) There is a method involving the differences and differences of the differences etc. which reveals $f(n)$ as an integer linear combination of the polynomials $\binom{t}{j}$ for $0 \le j \le n$. Furthermore, the polynomials of this form are exactly the polynomials sending integers to integers. These (specialized) Newton polynomials are very similar to the Taylor series, which uses the basis $\frac{t^k}{k!}$.</p> <p>3) If you just want the next term (as predicted by this polynomial) then you don't need to explicitly find the polynomial, just extend the differences. Many test takers realize this. $$\begin{matrix}2&amp;\ &amp;6&amp;\ &amp;12&amp;\ &amp;20&amp;\ &amp;\mathbf{30}\\\ &amp;4&amp;&amp;6&amp;&amp;8&amp;&amp;\mathbf{10}&amp;\\\ &amp;&amp;2&amp;&amp;2&amp;&amp;\mathbf{2}&amp;&amp;\end{matrix}$$ corresponds to $f(n)=2+4n+2\binom{n}{2}=n^2+3n+2$</p> <p>4) There is also a polynomial of degree $4$ that gives $2,6,12,20,\mathbf{2011}$ so there is no unique extension.</p> <p>5) If the given sequence is $1,2,4,8,?$ then the polynomial interpolation gives $15$ next from $\binom n0+\binom n1+\binom n2+\binom n3$ although most tests would favor another continuation.</p>
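<p>The difference-table extension in point 3 is easy to mechanize; the helper below is my own sketch:</p>

```python
def next_term(seq):
    # Build the difference table down to a constant row, then extend each
    # row by one entry from the bottom up (Newton forward differences).
    rows = [list(seq)]
    while any(v != rows[-1][0] for v in rows[-1]):
        rows.append([b - a for a, b in zip(rows[-1], rows[-1][1:])])
    total = 0
    for row in reversed(rows):
        total += row[-1]
    return total

assert next_term([2, 6, 12, 20]) == 30
# Leading column 2, 4, 2 gives f(n) = 2*C(n,0) + 4*C(n,1) + 2*C(n,2) = n^2 + 3n + 2.
assert [n * n + 3 * n + 2 for n in range(5)] == [2, 6, 12, 20, 30]
# The 1, 2, 4, 8 sequence from point 5 extends to 15 the same way.
assert next_term([1, 2, 4, 8]) == 15
```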
1,577,174
<p>If $a, b \in \mathbb{C}$, then we have the standard triangle inequality for the difference:</p> <p>$$||a| - |b|| \le |a - b|.$$</p> <p>I am wondering if this inequality generalizes to exponents greater than one.</p> <blockquote> <p>My question is, for $1 &lt; p &lt; \infty$, does there exist a constant $C_p$ such that $||a|^p - |b|^p| \le C_p|a - b|^p$ for all $a, b \in \mathbb{C}$?</p> </blockquote> <p>I am aware of the "standard" triangle inequality for $p &gt; 1$:</p> <p>$$|a +b|^p \le 2^{p}(|a|^p + |b|^p),$$</p> <p>and that $2^p$ may not be the sharpest constant possible. If I try to use this estimate to prove, say, that $|a|^p - |b|^p \le C_p |a - b|^p$, I get stuck with an extra term that I'm not sure what to do with.</p> <p>$$|a|^p - |b|^p \le 2^p|a -b|^p + (2^p -1)|b|^p.$$</p> <p>A resolution on this matter is greatly appreciated.</p>
zhw.
228,045
<p>Hint: Is $(x+1)^2 -x^2$ bounded on $(0,\infty)?$</p>
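To spell the hint out numerically: taking $a=x+1$, $b=x$ forces $C_p \ge (x+1)^p - x^p$, which is unbounded as $x\to\infty$, so no such constant exists. A quick sketch (variable names are my own):

```python
p = 2
ratios = []
for x in [1.0, 10.0, 100.0, 1000.0]:
    a, b = x + 1, x
    # | |a|^p - |b|^p | / |a - b|^p, which here is just (x+1)^2 - x^2 = 2x + 1
    ratios.append(abs(abs(a) ** p - abs(b) ** p) / abs(a - b) ** p)
print(ratios)  # [3.0, 21.0, 201.0, 2001.0] -- no finite C_p works
```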
743,819
<p>I'm studying for my exam and I came across the following draw-without-replacement problem:</p> <blockquote> <p><span class="math-container">$N$</span> boxes filled with red and green balls. The box <span class="math-container">$r$</span> contains <span class="math-container">$r-1$</span> red balls and <span class="math-container">$N-r$</span> green balls. We pick a box at random and we take <span class="math-container">$2$</span> random balls inside it, without putting them back.</p> <p><strong>1)</strong> What is the probability that the second ball is green?</p> <p><strong>2)</strong> What is the probability that the second ball is green, knowing the first one is green?</p> </blockquote> <p>I don't know where to start; all those dependences (on <span class="math-container">$r$</span> and <span class="math-container">$N$</span>) are blowing my mind.</p> <p>I don't know if I should consider it as a binomial law (with Bernoulli: ball = green, <span class="math-container">$n=2, p = ?$</span>) or go with the formula <span class="math-container">$$p(X=k)=\frac{C_{m}^{k} C_{N-m}^{n-k}}{C_{N}^{n}}$$</span> or something else...</p> <p>Could someone advise me ?</p>
drhab
75,923
<p>1) The number of green balls in total equals the number of red balls. Picking out a box at random, taking out $2$ balls and then looking at the second is actually 'the same' as picking out one ball out of a 'big' box that contains all balls. The procedure followed has no influence at all on the chances of a ball to be picked out. Each of the balls has the same probability to show up as the 'elected one' here. So the probability that this ball is green is $\frac{1}{2}$. Likewise the probability that the first ball is green is also $\frac{1}{2}$. If $G_{i}$ denotes the event that the $i$-th ball taken out is green then $P\left(G_{1}\right)=P\left(G_{2}\right)=\frac{1}{2}$.</p> <p>2) This is more complicated. To be found is $P\left(G_{2}\mid G_{1}\right)$ and based on 1) we can start with: $P\left(G_{2}\mid G_{1}\right)=\frac{P\left(G_{2}\cap G_{1}\right)}{P\left(G_{1}\right)}=2P\left(G_{2}\cap G_{1}\right)$. So now it comes to calculating $P\left(G_{2}\cap G_{1}\right)$. </p> <p>For <em>notational convenience</em> we will calculate $P\left(R_{2}\mid R_{1}\right)$ instead of $P\left(G_{2}\mid G_{1}\right)$. This is with the understanding that $P\left(R_{2}\mid R_{1}\right)=P\left(G_{2}\mid G_{1}\right)$ because of symmetry. Here $R_i$ denotes the event that the $i$-th ball drawn is red.</p> <p>Let $R$ denote the index of the box that is chosen at random. Then $$P\left(R_{2}\cap R_{1}\right)=\sum_{r=1}^{N}P\left(R_{2}\cap R_{1}\mid R=r\right)P\left(R=r\right)=\frac{1}{N}\sum_{r=1}^{N}P\left(R_{2}\cap R_{1}\mid R=r\right)$$</p> <p>Here $P\left(R_{2}\cap R_{1}\mid R=r\right)=\frac{r-1}{N-1}\frac{r-2}{N-2}$ so that $P\left(R_{2}\cap R_{1}\right)=\frac{1}{N\left(N-1\right)\left(N-2\right)}\sum_{r=3}^{N}\left(r-1\right)\left(r-2\right)$. By induction it can be shown that $\sum_{r=3}^{N}\left(r-1\right)\left(r-2\right)=\frac{1}{3}N\left(N-1\right)\left(N-2\right)$ leading to $P\left(R_{2}\cap R_{1}\right)=\frac{1}{3}$ and finally $P\left(R_{2}\mid R_{1}\right)=\frac{2}{3}$. 
</p> <p>This result triggers the expectation that there is a 'shortcut' for this route, as there is in case 1). </p> <hr> <p>To proceed with my last remark about a shortcut for case 2), here is some effort in that direction.</p> <p>Think of boxes that contain ordered pairs of coloured balls. Every box contains $\left(N-1\right)\left(N-2\right)$ of these pairs. In box $r\in\left\{ 1,\dots,N\right\} $ there are $\left(r-1\right)\left(r-2\right)$ of the sort red-red, $\left(r-1\right)\left(N-r\right)$ of sort red-green, $\left(N-r\right)\left(r-1\right)$ of sort green-red and $\left(N-r\right)\left(N-r-1\right)$ of sort green-green. So in total there are $N\left(N-1\right)\left(N-2\right)$ pairs. It can be calculated that $\frac{1}{3}N\left(N-1\right)\left(N-2\right)$ are of sort green-green, and of course $\frac{1}{2}N\left(N-1\right)\left(N-2\right)$ of the pairs have a green first ball. As in 1) it now comes to picking an ordered pair out of a big box containing all ordered pairs of balls. This is because each ordered pair has equal probability to be chosen. The procedure followed to get this far does not disturb that. That gives $$P\left\{ \text{second green}\mid\text{first green}\right\} =\frac{P\left\{ \text{second green and first green}\right\} }{P\left\{ \text{first green}\right\} }=\frac{\frac{1}{3}}{\frac{1}{2}}=\frac{2}{3}$$</p>
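The two answers ($\frac{1}{2}$ and $\frac{2}{3}$) are easy to corroborate with a quick Monte Carlo simulation of the experiment (a sketch of my own, here with $N=10$):

```python
import random

random.seed(0)
N, trials = 10, 200_000
first_green = second_green = both_green = 0
for _ in range(trials):
    r = random.randint(1, N)                  # choose a box uniformly
    box = ['R'] * (r - 1) + ['G'] * (N - r)   # box r: r-1 red, N-r green
    b1, b2 = random.sample(box, 2)            # two draws without replacement
    first_green += (b1 == 'G')
    second_green += (b2 == 'G')
    both_green += (b1 == 'G' and b2 == 'G')

print(second_green / trials)     # close to 1/2
print(both_green / first_green)  # close to 2/3
```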
1,521,649
<p>I need to show that at most finitely many terms of this sequence are greater than or equal to $c$. </p> <p>I don't know if it is the wording of the problem but I don't know what this is asking me to do. Help on this would be amazing! And thank you in advance.</p>
Brevan Ellefsen
269,764
<p>$$x = \log_b(a^n)$$ $$b^x=a^n$$ $$(b^x)^{1/n}=(a^n)^{1/n}$$ $$b^{x/n}=a$$ $$x/n = \log_b(a)$$ $$x = n \log_b(a)$$ $$\log_b(a^n) = n\log_b(a)$$</p>
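A numeric spot-check of the derived rule (the values are arbitrary):

```python
import math

a, b, n = 7.0, 3.0, 5
lhs = math.log(a ** n, b)  # log_b(a^n)
rhs = n * math.log(a, b)   # n * log_b(a)
print(lhs, rhs)            # the two agree up to rounding
```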
1,521,649
<p>I need to show that at most finitely many terms of this sequence are greater than or equal to $c$. </p> <p>I don't know if it is the wording of the problem but I don't know what this is asking me to do. Help on this would be amazing! And thank you in advance.</p>
randomgirl
209,647
<p>$\ln(x)=\int_1^x \frac{1}{t} dt \\ \text{ so } \ln(x^r)=\int_1^{x^r} \frac{1}{t} dt \\ \text{ and then differentiating both sides gives} \\ [\ln(x^r)]'=(x^r)' \cdot \frac{1}{x^r}=\frac{r}{x} \\ \text{ now integrating both sides ... see if you can finish from here } \ln(x^r)=... \\ $</p>
1,570,754
<p>Let $I$ be an interval and $f\colon I \to \mathbb{R}$ a differentiable function. Suppose the following definitions:</p> <p>For $x_0 \in I$ the point $(x_0,f(x_0))$ is called <em>saddle point</em> if $f'(x_0) = 0$ but $x_0$ is not a local extremum of $f$.</p> <p>For $x_W \in I$ the point $(x_W,f(x_W))$ is called <em>point of inflection</em> if there is a neighborhood $U$ of $x_W$ in $I$ such that $f'$ is strictly monotonically increasing (resp. decreasing) for $x &lt; x_W$ on $U$ and strictly monotonically decreasing (resp. increasing) for $x &gt; x_W$ on $U$.</p> <p>What is the logical relation between saddle points and points of inflection? </p> <p>My first intuitive guess was that a point $(x,f(x))$ is a saddle point <em>iff</em> it is a point of inflection and $f'(x) = 0$. However the implication "$\implies$" seems to be wrong. Consider the following counterexample: $$ f(x) = \begin{cases} x^4 \cdot \sin\left(\frac{1}{x}\right) &amp; x \neq 0 \\ 0 &amp; x = 0 \end{cases} $$</p> <p>Then $(0,0)$ is a saddle point but not a point of inflection because the derivative oscillates on every neighborhood of $0$.</p> <p>Is this correct so far? Is the other implication true? If so, how to prove it?</p>
zhw.
228,045
<p>Your example does indeed show that a saddle point need not be an inflection point. (The function $x^2\sin(1/x)$ also works, but your example has the virtue of being continuously differentiable.)</p> <p>In the other direction, if $(a,f(a))$ is a point of inflection and $f'(a) = 0,$ then $(a,f(a))$ is a saddle point. To see this, suppose WLOG that for some small $\delta &gt; 0,$ $f'$ strictly increases in $[a-\delta,a]$ and strictly decreases in $[a,a+\delta].$ In $[a-\delta,a)$ we have $f'(x)&lt;0,$ because these values must be less than $f'(a)=0.$ The same reasoning shows that $f'(x) &lt; 0$ for $x\in (a,a+\delta].$ The mean value theorem then shows $f$ strictly decreases on both $[a-\delta,a]$ and $[a,a+\delta].$ Hence $f$ strictly decreases on $[a-\delta,a+\delta].$ It follows that $f(a)$ is neither a local max. nor min. for $f$ at $a.$</p>
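The oscillation in the counterexample is easy to exhibit numerically: for $f(x)=x^4\sin(1/x)$ one has $f'(x)=4x^3\sin(1/x)-x^2\cos(1/x)$ for $x\neq 0$, and along the sequences $x_n=1/(2\pi n)$ and $x_n=1/((2n+1)\pi)$ the derivative has opposite signs arbitrarily close to $0$:

```python
import math

def fprime(x):
    # derivative of x^4 * sin(1/x), valid for x != 0
    return 4 * x ** 3 * math.sin(1 / x) - x ** 2 * math.cos(1 / x)

neg = [fprime(1 / (2 * math.pi * n)) for n in range(1, 6)]        # cos(1/x) = +1
pos = [fprime(1 / ((2 * n + 1) * math.pi)) for n in range(1, 6)]  # cos(1/x) = -1
print(all(v < 0 for v in neg), all(v > 0 for v in pos))  # True True
```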
1,914,752
<p>Dividing by a whole number I can describe by simply saying: split this "cookie" into two pieces; you now have half a cookie. </p> <p>Does anyone have an easy way to describe dividing by a fraction? 1/2 divided by 1/2 is 1.</p>
John
362,662
<p>1/2 is half of a cookie. Dividing 1/2 by 1/2 is simply asking how many times half a cookie fits into half a cookie, which is one time. Equivalently: how many halves of a cookie do you need to get half of a cookie? One (one half of a cookie) again.</p>
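The arithmetic behind the picture can be checked with exact rational numbers:

```python
from fractions import Fraction

half = Fraction(1, 2)
print(half / half)            # 1: half a cookie fits into half a cookie exactly once
print(Fraction(3, 4) / half)  # 3/2: half fits one and a half times into 3/4
```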
115,385
<p>I heard that computation results can be very sensitive to the choice of random number generator. </p> <ol> <li><p>I wonder whether it is worthwhile to program one's own Mersenne Twister or other pseudo-random routines to get a good number generator. Also, I don't see why I should not trust native or library generators such as random.uniform() in numpy, or rand() in C++. I understand that I can build generators on my own for distributions other than uniform (inverse CDF method, polar method). But is it evil to use one built-in generator for uniform sampling? </p></li> <li><p>What is wrong with the default 'time' seed? Should one re-seed, and how frequently in a code sample (and why)?</p></li> <li><p>Maybe you have some good links on these topics!</p></li> </ol> <p>--edit More precisely, I need random numbers for multistart optimization routines, and for uniform space sampling to initialize some other optimization routine parameters. I also need random numbers for Monte Carlo methods (sensitivity analysis). I hope these details help clarify the scope of the question.</p> <p>Kind regards</p>
hmakholm left over Monica
14,366
<p>The answers to those questions depend <em>completely</em> on what you need the pseudorandom numbers <em>for</em>.</p> <p>In some applications, such as cryptography, one needs to use very, very random numbers, and it is essential that nobody can <em>predict</em> which number your generator produced, and also that nobody can <em>deduce</em> afterward which numbers were produced, except to the extent you explicitly tell them. In such a setting, seeding with the current time is obviously worthless, because an attacker can just set his computer's time to when he knows you (say) generated your key pair and then run the same random generator to figure out what it must have been.</p> <p>In most other cases you care less about your random numbers being secret, and the CPU load that goes into ensuring that they are would be ill spent.</p> <p>In yet other applications, such as some Monte Carlo simulations, you actually <em>want</em> the sequence of random numbers to be predictable and reproducible such that you can cross-check the computation later, or if you want to see how slightly different starting conditions would develop under the same "random" external influences.</p> <p>It's all, completely, in what your needs are. And until you understand this connection better, it would certainly be a waste of effort to program your own generator just because you've heard that (in some situations!) it is the "right thing" to do.</p>
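For the reproducible-simulation case in particular, the usual pattern is a single explicit, recorded seed per run rather than re-seeding from the clock. A minimal sketch with Python's standard library (the same idea applies to numpy's generators):

```python
import random

def run_experiment(seed):
    # One explicit seed per run; no re-seeding mid-computation,
    # and no time-based seed when results must be replayable.
    rng = random.Random(seed)
    samples = [rng.uniform(0.0, 1.0) for _ in range(1000)]
    return sum(samples) / len(samples)

a = run_experiment(42)
b = run_experiment(42)  # same seed: the run can be replayed exactly
c = run_experiment(43)  # different seed: an independent run
print(a == b, a != c)   # True True
```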
28,456
<p>I built a <code>Graph</code> based on the permutations of city's connections from:</p> <pre><code>largUSCities = Select[CityData[{All, "USA"}], CityData[#, "Population"] &gt; 600000 &amp;];
uScityCoords = CityData[#, "Coordinates"] &amp; /@ largUSCities;
Graph[#[[1]] -&gt; #[[2]] &amp; /@ Permutations[largUSCities, {2}],
 VertexCoordinates -&gt; Reverse[uScityCoords, 2], VertexStyle -&gt; Red,
 Prolog -&gt; {LightBrown, CountryData["USA", "FullPolygon"]}, ImageSize -&gt; 650]
</code></pre> <p>It looks like this: <img src="https://i.stack.imgur.com/1ea6c.png" alt="graph"></p> <p>My question: is there any way to have the Graph like this? <img src="https://i.stack.imgur.com/ajwOT.png" alt="enter image description here"></p>
E3labs
8,351
<p>I asked about something similar, but it's with data for municipalities in Brazil. Since I couldn't find the names and locations for them on WolframAlpha (well, at least not elegantly), I resorted to the sledgehammer approach: I took the image of the map, added a layer to it, and then put dots where I wanted the nodes to go. After that I could have used MorphologicalGraph to get their positions and then have the background of my graph be an image. Here is the question: <a href="https://mathematica.stackexchange.com/questions/27987/adding-an-image-background-to-a-graph-that-uses-vertexposition">Adding an image background to a graph that uses vertexposition</a></p>
1,723,718
<p>Knowing $f(x,y) = 2x^2 +3y^2 -7x +15y$, one simply proves $$|f(x,y)|\leq 5(x^2+y^2)+22 \sqrt{x^2 + y^2}$$ How can I use this info to compute $$ \lim_{(x,y)\to(0,0)} \frac{f(x,y) - 2(x^2+y^2)^{1/4}}{(x^2+y^2)^{1/4}}\;\;\; ?$$</p> <p>Thanks!</p>
Klint Qinami
318,882
<p>This can be done quite easily if you convert to polar coordinates.</p> <p>We convert </p> <p>$$\lim_{(x, y) \to (0, 0)} \frac{2x^2 + 3y^2 - 7x + 15y - 2(x^2 + y^2)^{\frac{1}{4}}}{(x^2 + y^2)^{\frac{1}{4}}}$$</p> <p>into </p> <p>$$\lim_{r \to 0} \frac{2r^2\cos^2 \theta + 3r^2 \sin^2 \theta - 7r \cos \theta + 15 r \sin \theta - 2 \sqrt{r}}{\sqrt{r}}$$</p> <p>$$\lim_{r \to 0} \left( 2 r^{\frac{3}{2}}\cos^2 \theta + 3 r^{\frac{3}{2}} \sin^2 \theta -7 \sqrt{r} \cos \theta + 15 \sqrt{r} \sin \theta - 2 \right)$$</p> <p>$$= -2$$</p>
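A quick numerical check, approaching the origin along several directions, is consistent with the value $-2$ (a sketch of my own):

```python
import math

def g(x, y):
    # (f(x, y) - 2 * (x^2 + y^2)^(1/4)) / (x^2 + y^2)^(1/4)
    f = 2 * x ** 2 + 3 * y ** 2 - 7 * x + 15 * y
    root4 = (x ** 2 + y ** 2) ** 0.25
    return f / root4 - 2

r = 1e-8
closest = [g(r * math.cos(t), r * math.sin(t)) for t in (0.0, 1.0, 2.5, 4.0)]
print(closest)  # every entry is within a few thousandths of -2
```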
839,431
<p>Can someone be kind enough to show me the steps to integrate this? I'm sure it's by parts, but how do I split up the sine part: $$x\sin(1+2x)$$</p>
Aapeli
77,427
<p>Let $u=x$ and $v'=\sin(1+2x)$, then $u'=1$, $v=-\frac{\cos(1+2x)}{2}$: \begin{align}\int{uv'\ dx}&amp;=uv-\int{u'v\ dx}\\ \int{x\sin(1+2x)\ dx}&amp;=-\frac{x\cos(1+2x)}{2}-\int{-\frac{\cos(1+2x)}{2}\ dx}+C\\ &amp;=-\frac{x\cos(1+2x)}{2}+\frac{\sin(1+2x)}{4}+C\\ \therefore\int{x\sin(1+2x)\ dx}&amp;=\frac{1}{4}\sin(1+2x)-\frac{1}{2}x\cos(1+2x)+C \end{align}</p>
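One can verify the antiderivative numerically: a central difference of the result should reproduce the integrand $x\sin(1+2x)$ at any point:

```python
import math

def F(x):
    # candidate antiderivative: sin(1+2x)/4 - x*cos(1+2x)/2
    return math.sin(1 + 2 * x) / 4 - x * math.cos(1 + 2 * x) / 2

def f(x):
    return x * math.sin(1 + 2 * x)

h = 1e-6
errs = [abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) for x in (-2.0, -0.3, 0.7, 1.9)]
print(max(errs))  # tiny: F'(x) matches x*sin(1+2x)
```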
13,989
<p>Suppose $E_1$ and $E_2$ are elliptic curves defined over $\mathbb{Q}$. Now we know that both curves are isomorphic over $\mathbb{C}$ iff they have the same $j$-invariant.</p> <p>But $E_1$ and $E_2$ could also be isomorphic over a subfield of $\mathbb{C}$, as is the case for $E$ and its quadratic twist $E_d$. Now the general question is:</p> <blockquote> <p>$E_1$ and $E_2$ defined over $\mathbb{Q}$ and isomorphic over $\mathbb{C}$. Let $K$ be the smallest subfield of $\mathbb{C}$ such that $E_1$ and $E_2$ become isomorphic over $K$. What can be said about $K$? Is it always a finite extension of $\mathbb{Q}$? If so, what can be said about the extension $K|\mathbb{Q}$?</p> </blockquote> <p>My second question goes in something like the opposite direction. I start again with quadratic twists. Let $E$ be an elliptic curve over $\mathbb{Q}$ and consider the quadratic extension $\mathbb{Q}(\sqrt{d})|\mathbb{Q}$. Describe the curves over $\mathbb{Q}$ (or isomorphism classes over $\mathbb{Q}$) which become isomorphic to $E$ over $\mathbb{Q}(\sqrt{d})$. I think the answer is $E$ and $E_d$. Again I would like to know what happens if we take a larger extension.</p> <blockquote> <p>Let $E$ be an elliptic curve over $\mathbb{Q}$ and $K|\mathbb{Q}$ a finite extension. Describe the isomorphism classes of elliptic curves over $\mathbb{Q}$ which become isomorphic to $E$ over $K$.</p> </blockquote> <p>I have no idea what is the right context to answer such questions.</p>
Felipe Voloch
2,290
<p>The answer is a bit more complicated if $j=0,1728$ because the corresponding elliptic curves have a bigger automorphism group, so I'll leave those out and let you (or others) deal with this case. If $j \ne 0,1728$, then the automorphism group of $E$ is of order $2$ and all other elliptic curves isomorphic to $E$ over an extension are quadratic twists. It seems that you know what happens in this case. This is well-discussed in Silverman's book.</p>
13,989
<p>Suppose $E_1$ and $E_2$ are elliptic curves defined over $\mathbb{Q}$. Now we know that both curves are isomorphic over $\mathbb{C}$ iff they have the same $j$-invariant.</p> <p>But $E_1$ and $E_2$ could also be isomorphic over a subfield of $\mathbb{C}$, as is the case for $E$ and its quadratic twist $E_d$. Now the general question is:</p> <blockquote> <p>$E_1$ and $E_2$ defined over $\mathbb{Q}$ and isomorphic over $\mathbb{C}$. Let $K$ be the smallest subfield of $\mathbb{C}$ such that $E_1$ and $E_2$ become isomorphic over $K$. What can be said about $K$? Is it always a finite extension of $\mathbb{Q}$? If so, what can be said about the extension $K|\mathbb{Q}$?</p> </blockquote> <p>My second question goes in something like the opposite direction. I start again with quadratic twists. Let $E$ be an elliptic curve over $\mathbb{Q}$ and consider the quadratic extension $\mathbb{Q}(\sqrt{d})|\mathbb{Q}$. Describe the curves over $\mathbb{Q}$ (or isomorphism classes over $\mathbb{Q}$) which become isomorphic to $E$ over $\mathbb{Q}(\sqrt{d})$. I think the answer is $E$ and $E_d$. Again I would like to know what happens if we take a larger extension.</p> <blockquote> <p>Let $E$ be an elliptic curve over $\mathbb{Q}$ and $K|\mathbb{Q}$ a finite extension. Describe the isomorphism classes of elliptic curves over $\mathbb{Q}$ which become isomorphic to $E$ over $K$.</p> </blockquote> <p>I have no idea what is the right context to answer such questions.</p>
stankewicz
3,384
<p>Question 1: Putting both curves in, say, Legendre normal form (or else appealing to the Lefschetz principle) shows that if the two curves are isomorphic over <span class="math-container">$\mathbf{C}$</span> then they are isomorphic over <span class="math-container">$\overline{\mathbf{Q}}$</span>. Now we could say that for instance <span class="math-container">$E_2$</span> is an element of <span class="math-container">$H^1(G_{\overline{\mathbf{Q}}}, Isom(E_1))$</span> where we let <span class="math-container">$Isom(E_1)$</span> be the group of isomorphisms of <span class="math-container">$E_1$</span> as a curve over <span class="math-container">$\mathbf{Q}$</span> (as in Silverman, to distinguish from <span class="math-container">$Aut(E_1)$</span>, the automorphisms of <span class="math-container">$E_1$</span> as an <em>elliptic curve</em> over <span class="math-container">$\mathbf{Q}$</span>, that is, automorphisms fixing the identity point). However, <span class="math-container">$E_2$</span> is also a principal homogeneous space for a unique curve over <span class="math-container">$\mathbf{Q}$</span> with a rational point, which of course has to be <span class="math-container">$E_2$</span>, so the cocycle <span class="math-container">$E_2$</span> represents could be taken to have values in <span class="math-container">$Aut(E_1)$</span>. Now <span class="math-container">$Aut(E_1)$</span> is well known to be of order 6, 4 or 2 depending on whether the <span class="math-container">$j$</span>-invariant of <span class="math-container">$E_1$</span> is 0, 1728 or anything else, respectively. Moreover the order of the cocycle representing <span class="math-container">$E_2$</span> (which we now see must divide 2, 4 or 6) must be the degree of the minimal field extension <span class="math-container">$K$</span> over which <span class="math-container">$E_1$</span> is isomorphic to <span class="math-container">$E_2$</span>. 
So <span class="math-container">$K$</span> must be degree 2,3,4 or 6 unless I've made an error somewhere.</p> <p>Question 2: If you restrict your focus to just elliptic curves, yes your idea is right. If it's a quadratic extension, you have exactly 1 non-isomorphic companion. If you have a higher degree number field, you have nothing but composites of the quadratic case unless your elliptic curve has j invariant 0 or 1728.</p> <p>Notice I am very explicitly using your choice of the word elliptic curve for both of these answers.</p>
2,759,407
<p>Suppose there's an exam with 5 questions. If the probability that Student $1$ correctly answers question $i$ is $P_{1.i}$, then</p> <p>$P_{1.1} = 0.3$ , $P_{1.2} = 0.4$ , $P_{1.3} = 0.9$, $P_{1.4} = 0.7$ , $P_{1.5} = 0.1$</p> <p>For Student $2$, </p> <p>$P_{2.1} = 0.4$ , $P_{2.2} = 0.5$ , $P_{2.3} = 0.2$, $P_{2.4} = 0.8$ , $P_{2.5} = 0.1$</p> <p>What is the probability that Student $1$ performs better than Student $2$?</p> <p>How to solve something like that? I want an expression to do this.</p>
BruceET
221,800
<p><strong>Comment:</strong> I wish you success with @HennoBrandsma's approach (+1). It seems there will be some bookkeeping in considering all the possibilities. In case it is of any use (e.g., for checking intermediate results), here are simulated distributions for Student 1, Student 2, and Difference scores. </p> <p>It is reasonable to expect the simulated probabilities to be accurate to two or three places. </p> <p>Although Student 1 will do slightly better <em>on average,</em> it seems that there is slightly less than a 50:50 chance for Student 1 to do <em>better</em> than Student 2. However, there are about 3 chances in 4 that Student 1 will do <em>as well or better.</em></p> <pre><code>set.seed(429)
m = 10^6; p1=c(3,4,9,7,1)/10; p2=c(4,5,2,8,1)/10
s1 = replicate(m, sum(rbinom(5, 1, p1)))
round(table(s1)/m,3)
s1
    0     1     2     3     4     5
0.011 0.142 0.398 0.340 0.101 0.008
s2 = replicate(m, sum(rbinom(5, 1, p2)))
round(table(s2)/m,3)
s2
    0     1     2     3     4     5
0.043 0.260 0.407 0.236 0.051 0.003
d = s1 - s2
round(table(d)/m,3)
d
   -5    -4    -3    -2    -1     0     1     2     3     4     5
0.000 0.001 0.011 0.060 0.172 0.285 0.272 0.148 0.044 0.006 0.000
mean(s1 &gt; s2)
[1] 0.471404
mean(s1 &gt;= s2)
[1] 0.756087
par(mfrow=c(1,3))
hist(s1, prob=T, br=(0:6)-.5, col="skyblue2", main="Student 1 Scores")
hist(s2, prob=T, br=(0:6)-.5, col="skyblue2", main="Student 2 Scores")
hist(d, prob=T, br=(-5:6)-.5, col="skyblue2", main="Difference in Scores")
abline(v = .5, col="red", lwd=3, lty="dashed")
par(mfrow=c(1,1))
</code></pre> <p><a href="https://i.stack.imgur.com/mwN7N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mwN7N.png" alt="enter image description here"></a></p>
3,540,045
<p>The definite integral, <span class="math-container">$$\int_0^{\pi}3\sin^2t\cos^4t\:dt$$</span></p> <p><strong>My question</strong>: for the trigonometric integral above the answer is <span class="math-container">$\frac{3\pi}{16}$</span>. What I want to know is how can I compute these integrals easily. Is there more than one way to solve it? If so, is the key to solving these integrals just recognizing some trig identities and using u-sub until it looks like a simpler integral?</p> <p><strong>Here's what I tried (Why doesn't it work!):</strong></p> <p>I rewrote the integrand as: <span class="math-container">$3(1-\cos^2t)\cos^4t\:dt$</span> then foiled it in,</p> <p><span class="math-container">$3\cos^4t-3\cos^6t dt$</span> , then I used the power rule and multiplied through by the chain rule and then did </p> <p><span class="math-container">$F(\pi)-F(0)$</span> and got the answer: <span class="math-container">$\frac{6}{35}$</span></p> <p>Why does this not work?!?</p>
Community
-1
<p>Use <span class="math-container">$$\frac{1+\cos\left(ax\right)}{2}=\cos^{2}\left(\frac{ax}{2}\right)$$</span></p> <p>Then we have:</p> <p><span class="math-container">$$\int_{0}^{\pi}3\cos^{4}\left(t\right)dt-\int_{0}^{\pi}3\cos^{6}\left(t\right)dt=\frac{3}{4}\int_{0}^{\pi}\left(1+\cos\left(2t\right)\right)^{2}dt-\frac{3}{8}\int_{0}^{\pi}\left(1+\cos\left(2t\right)\right)^{3}dt$$</span></p> <p>Can you take it from here?</p> <p>Or using reduction formula for <span class="math-container">$\int_{0}^{\pi}3\left(\cos\left(x\right)\right)^{4}dx$</span> follows:</p> <p><span class="math-container">$$=3\left(\frac{1}{4}\cos^{3}\left(x\right)\sin\left(x\right)\Big|_0^\pi+\frac{3}{4}\int_{0}^{\pi}\cos^{2}\left(x\right)dx)\right)$$</span><span class="math-container">$$=3\left(\frac{3}{8}\int_{0}^{\pi}1+\cos\left(2x\right)dx\right)$$</span><span class="math-container">$$=\frac{9\pi}{8}$$</span></p> <p>Use this for the other one.</p>
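As a sanity check on the stated value $3\pi/16$ for the original integral, a plain composite Simpson's rule agrees (my own sketch):

```python
import math

def integrand(t):
    return 3 * math.sin(t) ** 2 * math.cos(t) ** 4

n = 2000                 # number of subintervals (must be even)
h = math.pi / n
s = integrand(0.0) + integrand(math.pi)
s += sum((4 if k % 2 else 2) * integrand(k * h) for k in range(1, n))
approx = s * h / 3
print(approx, 3 * math.pi / 16)  # the two agree to many digits
```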
3,540,045
<p>The definite integral, <span class="math-container">$$\int_0^{\pi}3\sin^2t\cos^4t\:dt$$</span></p> <p><strong>My question</strong>: for the trigonometric integral above the answer is <span class="math-container">$\frac{3\pi}{16}$</span>. What I want to know is how can I compute these integrals easily. Is there more than one way to solve it? If so, is the key to solving these integrals just recognizing some trig identities and using u-sub until it looks like a simpler integral?</p> <p><strong>Here's what I tried (Why doesn't it work!):</strong></p> <p>I rewrote the integrand as: <span class="math-container">$3(1-\cos^2t)\cos^4t\:dt$</span> then foiled it in,</p> <p><span class="math-container">$3\cos^4t-3\cos^6t dt$</span> , then I used the power rule and multiplied through by the chain rule and then did </p> <p><span class="math-container">$F(\pi)-F(0)$</span> and got the answer: <span class="math-container">$\frac{6}{35}$</span></p> <p>Why does this not work?!?</p>
emonHR
565,609
<p><strong>Approach <span class="math-container">$(1)$</span></strong> <br> Let <span class="math-container">\begin{align} y&amp;=\cos t+i\sin t\implies y^n&amp;=\cos nt+i\sin nt\\ \frac{1}{y}&amp;=\cos t-i\sin t\implies \frac{1}{y^n}&amp;=\cos nt-i\sin nt\\ \end{align}</span> Then <span class="math-container">\begin{align} y+\frac{1}{y}=2\cos t \qquad y^n+\frac{1}{y^n}=2\cos nt\\ y-\frac{1}{y}=2i\sin t \qquad y^n-\frac{1}{y^n}=2i\sin nt\\ \end{align}</span> Now you can form your integrand using<br> <span class="math-container">$(2i\sin t)^2=-4\sin^2t$</span><br> <span class="math-container">$(2\cos t)^4=16\cos^4t$</span> <br> <span class="math-container">$$3\sin^2t\cos^4t=3\times\frac{1}{-4}\times\frac{1}{16}\left(y-\frac{1}{y}\right)^2\left(y+\frac{1}{y}\right)^4$$</span> Now use <a href="https://en.wikipedia.org/wiki/Binomial_theorem#Theorem_statement" rel="nofollow noreferrer">binomial expansion</a> and integrate term by term. Why is this approach better?</p> <blockquote class="spoiler"> <p>Polynomials are much easier to integrate </p> </blockquote> <p><strong>Approach <span class="math-container">$(2)$</span></strong> <br></p> <p>Instead of converting the integrand into a polynomial, you can convert it to exponentials using the <a href="https://en.wikipedia.org/wiki/Euler&#39;s_formula#Relationship_to_trigonometry" rel="nofollow noreferrer">Euler formula.</a> <hr> It seems you are struggling with <a href="https://math.stackexchange.com/q/193435/565609">$\int\cos^nt\:dt$</a>. Don't use the power rule directly here. You can use the <a href="https://en.wikipedia.org/wiki/Integration_by_reduction_formulae#How_to_compute_the_integral" rel="nofollow noreferrer">reduction formula</a>, which comes from <a href="https://en.wikipedia.org/wiki/Integration_by_parts" rel="nofollow noreferrer">integration by parts</a>.</p>
175,971
<p>Let $F$ be a field. What is $\operatorname{Spec}(F)$? I know that $\operatorname{Spec}(R)$ for a ring $R$ is the set of prime ideals of $R$. But a field doesn't have any non-trivial ideals.</p> <p>Thanks a lot!</p>
Rudy the Reindeer
5,798
<p>As you say $\mathrm{Spec}(R)$ is defined to be the set of all prime ideals of $R$. If $R$ is a field, the only proper ideal is $0$ hence you get $\mathrm{Spec}(F) = \{0\}$. </p> <p>It gets more interesting if your space is a ring that is not a field, like for example $R = \mathbb Z$. Then you can endow it with the following topology: each closed set in the space corresponds to an ideal $J$ of $R$, defined as $C(J) = \{ p \mid p \text{ a prime ideal of } R \text{ such that } J \subset p\}$.</p> <p>Now what does $\mathrm{Spec}(\mathbb Z)$ endowed with this topology look like? Well, first of all, the points in our space correspond to prime ideals and since $\mathbb Z$ is a principal ideal domain, each point looks like $p\mathbb Z$. Note that the zero ideal $\{0\}$ is prime if and only if the ring is an integral domain, so in this case, zero is also a point in our space.</p> <p>Next we want to know what closed sets look like. For this, let's stick a prime ideal into $C(\cdot)$ and see what comes out: $C(p\mathbb Z) = \{ p \mathbb Z \}$ which means, every singleton set in our space is closed.</p> <p>Now what are the closed sets corresponding to non-prime ideals? Well, $n$ has only a finite number of prime divisors and each point in $C(n\mathbb Z)$ corresponds to a divisor of $n$: $C(n\mathbb Z) = \{ p \mathbb Z \mid p \mathbb Z \text{ a prime ideal containing } n \mathbb Z \}$. </p> <p>Now we know that all closed sets are finite. </p> <p><strong>Edit</strong> (I apologise for the blunder kindly pointed out by Rene and t.b.)</p> <p>You need to be careful about what open sets, i.e. complements of closed sets look like. You can easily trick yourself into believing that since a set is closed if and only if it's finite, $\mathrm{Spec}(\mathbb Z)$ has the cofinite topology. But this is false. To see this note that if we indeed had the cofinite topology, $\mathrm{Spec}(\mathbb Z) \setminus \{\langle 0 \rangle \}$ would be open. 
But for this to be true, $\langle 0 \rangle$ would have to be closed which means that we would have to have an ideal $I$ in $\mathbb Z$ such that the only prime ideal it is contained in is $\langle 0 \rangle$. But this implies that $I = \langle 0 \rangle$ which implies that $I$ is contained in every prime ideal. Hence there is no ideal $I$ such that $C(I) = \{ \langle 0 \rangle \}$. </p> <p>As pointed out in Rene's answer, it boils down to all open sets contain zero since the complement of a closed set, $C(n\mathbb Z)^c$, is all prime ideals contained in $n \mathbb Z$ which always includes zero since we're in an integral domain so that the zero ideal is prime.</p>
3,669,700
<blockquote> <p>Let <span class="math-container">$f(x)$</span> be a monic, cubic polynomial with <span class="math-container">$f(0)=-2$</span> and <span class="math-container">$f(1)=−5$</span>. If the sum of all solutions to <span class="math-container">$f(x+1)=0$</span> and to <span class="math-container">$f\big(\frac1x\big)=0$</span> are the same, what is <span class="math-container">$f(2)$</span>?</p> </blockquote> <p>From <span class="math-container">$f(0)$</span> I got that <span class="math-container">$f(x)=x^3+ax^2+bx-2$</span> and from <span class="math-container">$f(1)=-5$</span> that <span class="math-container">$a+b = -4$</span> however I'm not sure how to use the info about the transformations to find <span class="math-container">$f(2).$</span> It seems that <span class="math-container">$(x+1)$</span> is a root for <span class="math-container">$f(x+1)$</span> and the same logic applies for <span class="math-container">$f\big(\frac1x\big)$</span>?</p> <p>Should I use Vieta's here or what's the appropriate way to go?</p>
AryanSonwatikar
571,692
<p>When <span class="math-container">$f(x+1)=0$</span>, we have, <span class="math-container">$$(x+1)^3+a(x+1)^2+b(x+1)-2=0$$</span> Whose sum of roots can be obtained by "negative of coefficient of <span class="math-container">$x^2$</span> upon coefficient of <span class="math-container">$x^3$</span>", which comes out to be <span class="math-container">$-\frac{3+a}{1}$</span>. Similarly, when <span class="math-container">$f(\frac 1x)=0$</span>, we have, <span class="math-container">$$\frac {1}{x^3}+a\frac{1}{x^2}+b\frac{1}{x}-2=0$$</span> Since <span class="math-container">$x\neq 0$</span>, we can rearrange and get <span class="math-container">$$-2x^3+bx^2+ax+1=0$$</span> Whose sum of roots is simply <span class="math-container">$\frac b2$</span>. So, <span class="math-container">$$\frac b2 = -3-a$$</span> Use that with the condition <span class="math-container">$f(1)=-5$</span> and solve for <span class="math-container">$a,b$</span>. Then substitute <span class="math-container">$x=2$</span>. Ta-da!</p>
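Carrying out the last step (spoiler): substituting $b=-4-a$ (from $f(1)=-5$) into $\frac b2=-3-a$ gives $a=b=-2$. A quick check of all the conditions:

```python
# From f(1) = -5: a + b = -4.  From the equal root sums: b/2 = -3 - a.
# Substituting b = -4 - a gives (-4 - a)/2 = -3 - a, so a = -2, b = -2.
a, b = -2, -2

def f(x):
    return x ** 3 + a * x ** 2 + b * x - 2

print(f(0), f(1), f(2))  # -2 -5 -6
```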
81,588
<p>A certain function contains the points $(-3,5)$ and $(5,2)$. We are asked to find this function; of course this will be simplest if we consider the point-slope form of a line,</p> <p>$$y-y_1=m(x-x_1)$$</p> <p>but could we find a more general form of equation, for example quadratic or cubic?</p>
Greg Martin
16,078
<p>I think the following idea works. Let $f(x) = x^p-x+a$. The key observation is that $f(x+1)=f(x)$ in the field of $p$ elements. Now factor $f(x) = g_1(x) \cdots g_k(x)$ as a product of irreducibles. Sending $x$ to $x+1$ must therefore permute the factors $\{ g_1(x), \dots, g_k(x) \}$. But sending $x$ to $x+1$ $p$ times in a row comes back to the original polynomial, so this permutation of the $k$ factors has order dividing $p$. It follows that either every $g_j(x)$ is fixed by sending $x$ to $x+1$ - which I think is a property that no nonconstant polynomial of degree less than $p$ can have, but that needs proof - or else there are $k=p$ factors, which can only happen in the case $a=0$.</p>
2,506,182
<p>The question speaks for itself. Since the question comes from a contest environment, one where the use of calculators is obviously not allowed, can anyone perhaps supply me with an easy way to calculate the first or last digit in such situations?</p> <p>My intuition said that I can look at powers of $3$ with an exponent ending in a $1$, so I looked at $3^1=3$, $3^{11}=177,147$ and $3^{21}= 10,460,353,203$. So there is a slight pattern, but I'm not sure if it holds, and even if it does I will have shown it holds just for exponents ending in a $1$, so I was wondering whether there is an easier way of knowing.</p> <p>Any help is appreciated, thank you.</p>
Cornman
439,383
<p>We observe $3^{2011}\mod 10$ to get the last digit.</p> <p>It is $3^{2011}=3\cdot 3^{2010}=3\cdot 9^{1005}\equiv 3\cdot (-1)^{1005}\equiv -3\equiv 7\mod 10$</p>
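This is easy to confirm with Python's three-argument `pow`, which performs fast modular exponentiation (a verification aid only; no calculator is needed for the hand computation):

```python
# Last digit of 3^2011 is 3^2011 mod 10.
last_digit = pow(3, 2011, 10)
print(last_digit)  # 7

# The units digits of powers of 3 cycle with period 4: 3, 9, 7, 1.
cycle = [pow(3, k, 10) for k in range(1, 5)]
print(cycle)  # [3, 9, 7, 1]
```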
2,506,182
<p>The question speaks for itself. Since the question comes from a contest environment, one where the use of calculators is obviously not allowed, can anyone perhaps supply me with an easy way to calculate the first or last digit in such situations?</p> <p>My intuition said that I can look at powers of $3$ with an exponent ending in a $1$, so I looked at $3^1=3$, $3^{11}=177,147$ and $3^{21}= 10,460,353,203$. So there is a slight pattern, but I'm not sure if it holds, and even if it does I will have shown it holds just for exponents ending in a $1$, so I was wondering whether there is an easier way of knowing.</p> <p>Any help is appreciated, thank you.</p>
Community
-1
<p>Actually $3^1=3$, $3^2=9$, $3^3=27$, $3^4=81$, so the units digits of powers of $3$ cycle through $3, 9, 7, 1$ with period four. To find the last digit of $3^{2011}$, divide $2011$ by $4$; the remainder is $3$, so the units digit is the same as the units digit of $3^3=27$. Hence the answer is $7$.</p>
9,010
<p>I hear people use these words relatively interchangeably. I'd believe that any differentiable manifold can also be made into a variety (which data, if I understand correctly, implicitly includes an ambient space?), but it's unclear to me whether the only non-varietable manifolds should be those that don't admit smooth structures. I'd hope there's more to it than that.</p> <p>I've heard too that affine schemes are to schemes local coordinates are to manifolds, so maybe my question should be about schemes instead -- I don't even know enough to know...</p>
Matt E
221
<p>In English (as opposed to French, in which language variety and manifold are synonyms) the word <em>variety</em> is short for <em>algebraic variety</em>. The main differences, then, between (algebraic) varieties and (smooth) manifolds are that:</p> <p>(i) Varieties are cut out in their ambient (affine or projective) space as the zero loci of polynomial functions, rather than simply as the zero loci of smooth functions. This gives them a more rigid structure. (Here I am thinking just of quasi-projective varieties; there are objects that people call varieties which can't be immersed into projective space, but there is no need to think about them when you are just learning the subject. Also, a manifold need not be regarded as lying in an ambient Euclidean space, but can always be immersed into one, and can then be thought of as being cut out as the zero locus of smooth functions.) </p> <p>The rigidity of varieties is reflected in the definition of isomorphism: we define two varieties to be isomorphic if we can find polynomial maps giving rise to mutually inverse bijections from one to the other, while two manifolds are isomorphic (i.e. diffeomorphic) if we can find smooth maps giving rise to mutually inverse bijections between them. It turns out, for example, that the only diffeomorphism invariant of a compact connected surface is its genus $g$, while if we look at smooth connected projective curves over the complex numbers (which, when we forget the variety structure and just think of them as manifolds, <em>are</em> compact connected surfaces --- note that one complex dimension gives two real dimensions) then the genus is not a complete invariant. For a fixed genus $g \geq 2$, there is a $6g-6$-dimensional family of non-isomorphic curves of genus $g$. (When $g = 1$ there is a $2$-dimensional family, and when $g = 0$, the variety structure is actually uniquely determined.
Also, by "dimension" here I mean real dimension; but these families have their own natural algebraic variety structures, of half the dimension --- i.e. there is a $3 g - 3$ dimensional variety parameterizing isomorphism classes of genus $g$ curves when $g \geq 2$. Again the halving of dimension reflects the difference between real and complex dimension.)</p> <p>(ii) Varieties can admit singularities, whereas we stipulate that manifolds be non-singular (i.e. locally Euclidean). Here it is useful to think about the fact that the critical locus of a (collection of) smooth function(s) can be pretty nasty, and so if we consider the zero loci of smooth functions and allow singularities, we will allow extremely nasty singularities. On the other hand, the critical locus of a (collection) of polynomial(s) is not so bad (e.g. it is always of codimension at least one in the zero locus), and so allowing singularities in the theory turns out to be okay (and in fact to be more than okay; it turns out to be one of the more powerful features of algebraic geometry).</p>
2,949,224
<p>How do I show that if <span class="math-container">$\sqrt{n}(X_n - \theta)$</span> converges in distribution, then <span class="math-container">$X_n$</span> converges in probability to <span class="math-container">$\theta$</span>? </p> <p>Setting <span class="math-container">$Y_n = \sqrt{n}(X_n - \theta)$</span>, convergence in distribution (to a random variable <span class="math-container">$Y$</span>) means: <span class="math-container">$P(Y_n \leq y) \rightarrow P(Y \leq y)$</span>. </p> <p>Convergence in probability requires that <span class="math-container">$P(|X_n - \theta| \geq \epsilon) \rightarrow 0 $</span> </p> <p>My reasoning so far is the following. Given convergence in distribution I can use Prohorov's theorem that <span class="math-container">$P(|Y_n|&gt;M)&lt; \epsilon$</span>, for some positive <span class="math-container">$M$</span> and some positive <span class="math-container">$\epsilon$</span>. Now, I need to show that this translates into <span class="math-container">$P(|X_n - \theta| \geq \epsilon) \rightarrow 0$</span> and this will be convergence in probability. I'm quite stuck however; any hints are appreciated.</p>
Kavi Rama Murthy
142,385
<p>You have almost finished the proof. Let <span class="math-container">$\delta &gt;0$</span>. <span class="math-container">$P\{|X_n-\theta| &gt;\delta\} \leq P\{|Y_n| &gt;M\}$</span> for all <span class="math-container">$n$</span> such that <span class="math-container">$\delta \sqrt n &gt;M$</span>, hence for all sufficiently large <span class="math-container">$n$</span>. So <span class="math-container">$P\{|X_n-\theta| &gt;\delta\} &lt;\epsilon$</span> for all sufficiently large <span class="math-container">$n$</span>.</p>
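The mechanism of this argument can be seen numerically. As an illustration of mine (not part of the proof), take any bounded sequence $|Y_n|\le M$, say $Y_n=\sin n$ with $M=1$, and set $X_n=\theta+Y_n/\sqrt n$; once $\delta\sqrt n$ exceeds $M$, the deviation $|X_n-\theta|$ is forced below $\delta$:

```python
import math

theta = 5.0
M = 1.0

# Any bounded sequence |Y_n| <= M stands in for the tight Y_n of the proof.
def X(n):
    return theta + math.sin(n) / math.sqrt(n)

for delta in (0.1, 0.01):
    # once delta * sqrt(n) > M, i.e. n > (M/delta)^2, we have |X_n - theta| <= delta
    N = int((M / delta) ** 2) + 1
    assert all(abs(X(n) - theta) <= delta for n in range(N, N + 1000))
```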
3,657,428
<p>My textbook says that</p> <blockquote> <p>If <span class="math-container">$f(x)$</span> is piecewise continuous on <span class="math-container">$(a,b)$</span> and satisfies <span class="math-container">$f(x) = \frac{1}{2} [f(x_{-})+f(x_{+})]$</span> for all <span class="math-container">$x\in(a,b)$</span>, and if <span class="math-container">$f(x_{0})\neq 0$</span>, then <span class="math-container">$|f(x)|&gt;0$</span> on some interval containing <span class="math-container">$x_{0}$</span>.</p> </blockquote> <p>Why is this true?</p> <p>Edit:</p> <p>My attempt to prove this claim:</p> <p>Suppose <span class="math-container">$f \in PC (a,b), f (x) = \frac{1}{2} [f(x-) + f(x+)]$</span>, and <span class="math-container">$f(x_{0}) \neq 0$</span> for some <span class="math-container">$x_{0} \in (a,b)$</span>.</p> <p>Case I: If <span class="math-container">$f$</span> is continuous at <span class="math-container">$x_{0}$</span>, then by the definition of continuity, there is a neighborhood <span class="math-container">$U$</span> containing <span class="math-container">$x_{0}$</span> where <span class="math-container">$f(x)\neq0$</span> for <span class="math-container">$x \in U$</span>.</p> <p>Case II: If <span class="math-container">$x_{0}$</span> is a point of discontinuity, then because <span class="math-container">$f$</span> is piecewise continuous, the values <span class="math-container">$$f(x-)=\lim_{\varepsilon\to0+} f(x-\varepsilon), \quad f(x+) = \lim_{\varepsilon \to 0+} f(x+\varepsilon)$$</span></p> <p>always exist. For fixed <span class="math-container">$x_{0} \in (a,b)$</span> such that <span class="math-container">$f(x_{0})\neq0$</span>, the condition <span class="math-container">$f(x_{0}) = \frac{1}{2} [f(x_{0}^{-}) + f(x_{0}^{+})]$</span> implies that <span class="math-container">$f(x_{0}^{-})$</span> and <span class="math-container">$f(x_{0}^{+})$</span> are not simultaneously zero. 
Then:</p> <p>II.i) if <span class="math-container">$f(x_{0}^{-})$</span> is zero, choose the interval <span class="math-container">$[x_{0},x_{0}+\varepsilon)$</span>.</p> <p>II.ii) if <span class="math-container">$f(x_{0}^{+})$</span> is zero, choose the interval <span class="math-container">$[x_{0}-\varepsilon, x_{0})$</span>.</p> <p>II.iii) if <span class="math-container">$f(x_{0}^{-}), f(x_{0}^{+})$</span> are not zero, choose the interval <span class="math-container">$(x_{0}-\varepsilon, x_{0}+\varepsilon)$</span>.</p>
Community
-1
<p>Observe that <span class="math-container">$\lvert f(x)\rvert&gt;0$</span> and <span class="math-container">$f(x)\ne0$</span> are the same thing.</p> <p>If <span class="math-container">$f$</span> is continuous at <span class="math-container">$x_0$</span>, then by definition of continuity <span class="math-container">$\{x\in\Bbb R\,:\, f(x)\ne0\}$</span> must contain a neighbourhood of <span class="math-container">$x_0$</span>.</p> <p>If <span class="math-container">$f$</span> is not continuous at <span class="math-container">$x_0$</span>, then there cannot be two sequences <span class="math-container">$(z_n)_{n\in\Bbb N}$</span> and <span class="math-container">$(y_n)_{n\in\Bbb N}$</span> such that <span class="math-container">$f(y_n)=f(z_n)=0$</span>, <span class="math-container">$z_n\nearrow x_0$</span> and <span class="math-container">$y_n\searrow x_0$</span>. In fact, if such were the case, then necessarily <span class="math-container">$f(x_0^+)=f(x_0^-)=0$</span>, and therefore <span class="math-container">$f(x_0)=0$</span>.</p> <p>If there is no sequence <span class="math-container">$(y_n)_{n\in\Bbb N}$</span>, then <span class="math-container">$\{x\,:\, f(x)\ne0\}$</span> contains an interval in the form <span class="math-container">$[x_0,x_0+\varepsilon)$</span> for some <span class="math-container">$\varepsilon&gt;0$</span>. If there is no sequence <span class="math-container">$(z_n)_{n\in\Bbb N}$</span>, then it contains <span class="math-container">$(x_0-\varepsilon,x_0]$</span>.</p>
1,725,945
<p>I'm reading the proof of "Fundamental Theorem of Finite Abelian Groups" in Herstein Abstract Algebra, and I've found this statement in the proof that I don't see very clear.</p> <p>Let $A$ be a normal subgroup of $G$. And suppose $b\in G$ and the order of $b$ is prime $p$, and $b$ is not in $A$. Then $A \cap (b)=(e)$.</p>
egreg
62,967
<p>$A\cap(b)$ is a subgroup of $(b)$. Since $b$ has prime order, the only subgroups of $(b)$ are $(e)$ and $(b)$ (by Lagrange's theorem).</p> <p>If $A\cap(b)=(b)$ we have $(b)\subseteq A$, so $b\in A$. Since, by assumption, $b\notin A$, we must have $A\cap(b)=(e)$.</p>
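As a concrete sanity check (a toy example of mine, not from Herstein): in $G=S_3$, take the normal subgroup $A=A_3$ and $b=(1\,2)$, which has prime order $2$ and is not in $A$:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); a permutation is a tuple mapping index -> image
    return tuple(p[q[i]] for i in range(len(q)))

e = (0, 1, 2)
G = list(permutations(range(3)))   # S_3
A = [e, (1, 2, 0), (2, 0, 1)]      # A_3, a normal subgroup of S_3
b = (1, 0, 2)                      # the transposition (1 2); prime order 2
assert b in G and b not in A

# cyclic subgroup (b) generated by b
gen, x = [e], b
while x != e:
    gen.append(x)
    x = compose(x, b)
assert len(gen) == 2               # |(b)| = 2, a prime

intersection = [g for g in gen if g in A]
assert intersection == [e]         # A ∩ (b) = (e), as the argument predicts
```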
736,749
<p>Let us consider the Fourier transform of the <span class="math-container">$\mathrm{sinc}$</span> function. As I know, it is equal to a rectangular function in the frequency domain, and I want to derive it myself; I know there is a lot of material about this, but I want to learn it by myself. We have the <span class="math-container">$\mathrm{sinc}$</span> function, which is defined as <span class="math-container">$$ \mathrm{sinc}(\omega_0\,t) = \sin(\omega_0\,t)/(\omega_0\,t). $$</span> Its Fourier transform <span class="math-container">$$ \int_{\Bbb R} \sin(\omega_0\,t) \,e^{-i\,\omega\,t}/(\omega_0\,t)\,\mathrm dt $$</span> can be represented as <span class="math-container">$$ \int_{\Bbb R} \sin(\omega_0\,t)\,(\cos(\omega\,t) - i \,\sin(\omega\,t))/(\omega_0\,t)\,\mathrm dt. $$</span> Because we can distribute in the brackets, and the integral of a difference equals the difference of the integrals, we get <span class="math-container">$$ \int_{\Bbb R} \sin(\omega_0 \,t) \cos(\omega\,t)/(\omega_0\,t)\,\mathrm dt - \int_{\Bbb R} \sin(\omega_0\,t) \sin(\omega\,t)/(\omega_0\,t)\,\mathrm dt $$</span></p> <p>but the first integral is zero, right? Because sine and cosine are orthogonal; so how could I continue? Please help me.</p>
CyTex
30,854
<p>We know that the Fourier transform of $\mathrm{sinc}(z)$ is,</p> <p>\begin{equation*} \int_{-\infty}^{+\infty} {{\sin(z)}\over{z}} e^{-i \omega z} dz\end{equation*} </p> <p>and</p> <p>\begin{equation*} \int_{-\infty}^{+\infty} {{\sin(z)}\over{z}} e^{-i \omega z} dz = \int_{-\infty}^{+\infty} {{e^{iz}-e^{-iz}}\over{2iz}} e^{-i \omega z} dz. \end{equation*} </p> <p>So,</p> <p>(1) \begin{equation*} \int_{-\infty}^{+\infty} {{\sin(z)}\over{z}} e^{-i \omega z} dz={1 \over {2i}} \int_{-\infty}^{+\infty} {{e^{i(1-\omega)z}}\over{z}} dz + {1 \over {2i}} \int_{+\infty}^{-\infty} {{e^{-i(1+\omega)z}}\over{z}} dz. \end{equation*}</p> <p>Let us consider the first term. When $\omega &lt; 1$, i.e. $(1-\omega)&gt;0$, we can choose the path below for the contour integration.</p> <p><a href="https://i.stack.imgur.com/BQ09V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BQ09V.png" alt="enter image description here"></a></p> <p>Using the method of residues, we take a contour containing no singular point and separate the path into four parts, namely A, B, C and D, shown as the red letters in the figure. Thus,</p> <p>(2) \begin{equation*} \begin{split} \oint {{e^{i(1-\omega)z}}\over{z}} dz=\lim_{\substack{{\rho \to 0}\\{R\to\infty}}} \int_{-R}^{-\rho}{{e^{i(1-\omega)z}}\over{z}} dz+ \lim_{\rho \to 0} \int_{B}{{e^{i(1-\omega)z}}\over{z}} dz +\lim_{\substack{{\rho \to 0}\\{R\to\infty}}} \int^{R}_{\rho}{{e^{i(1-\omega)z}}\over{z}} dz\\ +\lim_{R \to \infty} \int_{D}{{e^{i(1-\omega)z}}\over{z}} dz = 2\pi i {\mathrm{Res}} f(z)=0. \end{split} \end{equation*}</p> <p>Also, since $(1-\omega)&gt;0$, i.e. $\omega &lt; 1$, as $R \to \infty$ and $\rho \to 0$ it is easy to prove, using Jordan’s lemma, that the fourth term is equal to zero.
Thus we can rewrite the remaining terms as,</p> <p>\begin{equation*} \int_{-\infty}^{0_{-}}{{e^{i(1-\omega)z}}\over{z}} dz+\int^{+\infty}_{0_{+}}{{e^{i(1-\omega)z}}\over{z}} dz =\int_{-\infty}^{+\infty}{{e^{i(1-\omega)z}}\over{z}} dz =-\lim_{\rho \to 0} \int_{B}{{e^{i(1-\omega)z}}\over{z}} dz.\end{equation*} </p> <p>To calculate the value of the term on the right-hand side, I use $z=\rho e^{i\theta}$, so $dz=i \rho e^{i\theta}\,d\theta$; then we have,</p> <p>\begin{equation*} \lim_{\rho \to 0} \int_{B}{{e^{i(1-\omega)z}}\over{z}} dz=\lim_{\rho \to 0}\int_{\pi}^0 \exp[i (1-\omega) \rho e^{i \theta}] i d\theta=\int_{\pi}^0id\theta=i(0-\pi)=-i \pi.\end{equation*} </p> <p>So, in the end, we get the value of the first integral in Eq. (1),</p> <p>(3) \begin{equation*} \int_{-\infty}^{+\infty}{{e^{i(1-\omega)z}}\over{z}} dz = i\pi,\quad \omega&lt;1. \end{equation*}</p> <p>When $\omega&gt;1$, Jordan’s lemma no longer makes the fourth term in Eq. (2) vanish, so we have to choose another path to carry out the integration. I chose the path below to deal with this situation,</p> <h2><a href="https://i.stack.imgur.com/Pgq05.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pgq05.png" alt="enter image description here"></a></h2> <p>It is not hard to calculate in the same way as before,</p> <p>(4) \begin{equation*} \begin{split} \oint {{e^{i(1-\omega)z}}\over{z}} dz=\lim_{\substack{{\rho \to 0}\\{R\to\infty}}} \int_{R}^{\rho}{{e^{i(1-\omega)z}}\over{z}} dz+ \lim_{\rho \to 0} \int_{B}{{e^{i(1-\omega)z}}\over{z}} dz +\lim_{\substack{{\rho \to 0}\\{R\to\infty}}} \int^{-R}_{-\rho}{{e^{i(1-\omega)z}}\over{z}} dz\\ +\lim_{R \to \infty}\int_{D}{{e^{i(1-\omega)z}}\over{z}} dz = 2\pi i {\mathrm{Res}} f(z)=0, \end{split} \end{equation*}</p> <p>and here again I use the same substitution, $z=R e^{i\theta}$, so $dz=i R e^{i\theta}\,d\theta$.
Because $\omega&gt;1$,</p> <p>\begin{equation*}\lim_{R \to \infty} \int_{D}{{e^{i(1-\omega)z}}\over{z}} dz=\lim_{R \to \infty}\int_{\pi}^0 \exp[i (1-\omega) R e^{i \theta}] i d\theta=\int_{\pi}^0 0 \times id\theta=0,\end{equation*}</p> <p>\begin{equation*} \lim_{\rho \to 0} \int_{B}{{e^{i(1-\omega)z}}\over{z}} dz=\lim_{\rho \to 0}\int_{0}^{-\pi} \exp[i (1-\omega) \rho e^{i \theta}] i d\theta=\int_{0}^{-\pi} i d\theta=i(-\pi-0)=-i \pi.\end{equation*} </p> <p>So, in the end,</p> <p>(5) \begin{equation*} \int_{-\infty}^{+\infty}{{e^{i(1-\omega)z}}\over{z}} dz = -i\pi,\quad \omega&gt;1. \end{equation*}</p> <p>Now, let’s move on to the second term in Eq. (1). Following the same steps, using the contour above the real axis I get,</p> <p>(6) \begin{equation*} \int_{+\infty}^{-\infty}{{e^{-i(1+\omega)z}}\over{z}} dz = \lim_{\rho \to 0}\int_{\pi}^0 \exp[-i (1+\omega) \rho e^{i \theta}] i d\theta = -i\pi,\quad \omega&lt; -1; \end{equation*}</p> <p>and using the contour below the real axis I get,</p> <p>(7) \begin{equation*} \int_{+\infty}^{-\infty}{{e^{-i(1+\omega)z}}\over{z}} dz = \lim_{\rho \to 0}\int^{\pi}_0 \exp[-i(1+\omega) \rho e^{i \theta}] i d\theta = i\pi,\quad \omega&gt;-1. \end{equation*}</p> <p>Finally, we can write the answer as follows: if $-1&lt; \omega&lt;1$,</p> <p>\begin{equation*}\int_{-\infty}^{+\infty} {{\sin(z)}\over{z}} e^{-i \omega z} dz={1 \over {2i}} \int_{-\infty}^{+\infty} {{e^{i(1-\omega)z}}\over{z}} dz + {1 \over {2i}} \int_{+\infty}^{-\infty} {{e^{-i(1+\omega)z}}\over{z}} dz={{\pi}\over2}+{{\pi}\over2}=\pi;\end{equation*}</p> <p>and if $\omega&gt;1$ or $\omega&lt;-1$ (upper and lower signs respectively),</p> <p>\begin{equation*}\int_{-\infty}^{+\infty} {{\sin(z)}\over{z}} e^{-i \omega z} dz={1 \over {2i}} \int_{-\infty}^{+\infty} {{e^{i(1-\omega)z}}\over{z}} dz + {1 \over {2i}} \int_{+\infty}^{-\infty} {{e^{-i(1+\omega)z}}\over{z}} dz=\mp{{\pi}\over2}\pm{{\pi}\over2}=0.\end{equation*}</p> <p>(Note: if we define the Fourier transform as \begin{equation*}\int_{-\infty}^{+\infty} {{\sin(z)}\over{z}} e^{-2\pi i \omega z} dz \end{equation*} instead, then the result could be a little different.)</p>
736,749
<p>Let us consider the Fourier transform of the <span class="math-container">$\mathrm{sinc}$</span> function. As I know, it is equal to a rectangular function in the frequency domain, and I want to derive it myself; I know there is a lot of material about this, but I want to learn it by myself. We have the <span class="math-container">$\mathrm{sinc}$</span> function, which is defined as <span class="math-container">$$ \mathrm{sinc}(\omega_0\,t) = \sin(\omega_0\,t)/(\omega_0\,t). $$</span> Its Fourier transform <span class="math-container">$$ \int_{\Bbb R} \sin(\omega_0\,t) \,e^{-i\,\omega\,t}/(\omega_0\,t)\,\mathrm dt $$</span> can be represented as <span class="math-container">$$ \int_{\Bbb R} \sin(\omega_0\,t)\,(\cos(\omega\,t) - i \,\sin(\omega\,t))/(\omega_0\,t)\,\mathrm dt. $$</span> Because we can distribute in the brackets, and the integral of a difference equals the difference of the integrals, we get <span class="math-container">$$ \int_{\Bbb R} \sin(\omega_0 \,t) \cos(\omega\,t)/(\omega_0\,t)\,\mathrm dt - \int_{\Bbb R} \sin(\omega_0\,t) \sin(\omega\,t)/(\omega_0\,t)\,\mathrm dt $$</span></p> <p>but the first integral is zero, right? Because sine and cosine are orthogonal; so how could I continue? Please help me.</p>
LL 3.14
731,946
<p>This simple method is missing. Since for <span class="math-container">$\omega_0\geq 0$</span>, <span class="math-container">$$ \int_{\Bbb R} \mathbf{1}_{[-\omega_0,\omega_0]}(\omega) \,e^{i\,\omega\,t}\,\mathrm d \omega = \int_{-\omega_0}^{\omega_0} \,e^{i\,\omega\,t}\,\mathrm d \omega = \frac{e^{i\, \omega_0\,t} - e^{-i\,\omega_0\,t}}{i\, t} = \frac{2\sin(\omega_0\,t)}{t} = 2\,\omega_0\,\mathrm{sinc}(\omega_0\,t) $$</span> one deduces by the Fourier inversion theorem that <span class="math-container">$$ \int_{\Bbb R} \mathrm{sinc}(\omega_0\,t) \,e^{-i\,\omega\,t}\,\mathrm d t = \frac{\pi}{\omega_0}\,\mathbf{1}_{[-\omega_0,\omega_0]}(\omega). $$</span></p>
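The first displayed integral is easy to sanity-check numerically; a midpoint-rule sketch of mine (toy parameters) confirms that it equals $2\sin(\omega_0 t)/t$:

```python
import math

def rect_transform(t, w0=1.0, n=20000):
    # midpoint rule for the integral of e^{i w t} over [-w0, w0];
    # the imaginary (sine) part cancels by symmetry
    h = 2 * w0 / n
    return sum(math.cos((-w0 + (k + 0.5) * h) * t) for k in range(n)) * h

for t in (0.5, 1.0, 2.0):
    exact = 2 * math.sin(t) / t    # = 2 * w0 * sinc(w0 * t) with w0 = 1
    assert abs(rect_transform(t) - exact) < 1e-6
```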
2,494,232
<p>I'm trying to find out if $\sum_{n=1}^\infty {{1\over \sqrt{n}}-{1\over{\sqrt{n+1}}}}$ is divergent or convergent.</p> <p>Here are some rules my book gives that I will try to follow:</p> <p><a href="https://i.stack.imgur.com/TNV6x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TNV6x.png" alt="enter image description here"></a></p> <p>Looking at those I can see that it isn't #1, since it's not in p-series form; it isn't geometric, so that rules out #2; #3 looks like a good fit, and anyway I don't think the next 5 steps will apply.</p> <p>Using #3, I'll need to split it up into $a_n$ and $b_n$, so I'll go ahead and combine them into one term: $$\frac{\sqrt{n+1} - \sqrt{n}}{\sqrt{n}\sqrt{n+1}}$$ And it is at this point that I'm stuck. I need to figure out the highest powers of $n$ in the numerator and denominator.</p> <p>If I'm on the wrong path I would appreciate some help; otherwise, I just need to figure out how to get $a_n$ and $b_n$. Thank you.</p>
Mark Viola
218,419
<p><strong>Without recognizing the telescoping nature of the series</strong>, we have </p> <p>$$\sqrt{n+1}-\sqrt n=\frac{1}{\sqrt{n+1}+\sqrt n}$$</p> <p>Hence, we have </p> <p>$$\frac{1}{\sqrt{n}}-\frac1{\sqrt{n+1}}=\frac{1}{\sqrt{n}\sqrt{n+1}\left(\sqrt{n+1}+\sqrt n\right)}\le \frac{1}{2n^{3/2}}$$</p> <p>Inasmuch as the series $\sum_{n=1}^\infty \frac{1}{n^{3/2}}$ converges, the series of interest does likewise.</p>
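Numerically, the partial sums match the telescoped closed form $1-1/\sqrt{N+1}$ exactly, and tend to $1$ (a quick check of mine, independent of the comparison argument):

```python
import math

def partial_sum(N):
    return sum(1/math.sqrt(n) - 1/math.sqrt(n + 1) for n in range(1, N + 1))

for N in (10, 1000, 100000):
    telescoped = 1 - 1/math.sqrt(N + 1)
    assert abs(partial_sum(N) - telescoped) < 1e-9  # the sum telescopes
assert abs(partial_sum(100000) - 1) < 0.01          # and converges to 1
```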
2,181,540
<p>What approach can we use to solve the following differential equation?</p> <p>$$y''(x)- \frac{y'(x)^2}{y(x)} + \frac{y'(x)}{x}=0$$</p> <p>The known solution is $y(x) = c_2\, x^{c_1}$.</p>
Mike
17,976
<p>It dawned on me that the first couple terms look very close to a quotient rule. It seems this equation falls apart when you divide both sides by $y$.</p> <p>$$\frac{yy''-y'^2}{y^2}+\frac{y'}{xy}=0$$</p> <p>$$\left(\frac{y'}{y}\right)'+\frac{y'}{xy}=0$$</p> <p>$$x\left(\frac{y'}{y}\right)'+\frac{y'}y=\left(\frac{xy'}{y}\right)'=0$$</p> <p>$$\frac{y'}{y}=\frac{c_1}{x}$$ $$\ln y=c_1\ln x+c_2$$ $$y=c_3x^{c_1}$$</p>
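The family $y=c_3x^{c_1}$ can be verified by substituting the exact derivatives; the residual of the equation vanishes identically (a small check of mine):

```python
# For y = c * x**p: y' = c p x**(p-1) and y'' = c p (p-1) x**(p-2), so
# y'' - y'^2/y + y'/x = c p x**(p-2) * ((p - 1) - p + 1) = 0 identically.
def residual(x, c, p):
    y   = c * x**p
    yp  = c * p * x**(p - 1)
    ypp = c * p * (p - 1) * x**(p - 2)
    return ypp - yp**2 / y + yp / x

for c, p in ((2.0, 3.0), (1.5, -0.5), (4.0, 1.0)):
    for x in (0.5, 1.0, 2.0, 10.0):
        assert abs(residual(x, c, p)) < 1e-9
```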
2,485,425
<p>If $A$ is a non empty subset of the reals and $f$ is a bounded function from $A$ to the reals, how can we show that:</p> <blockquote> <p>$\sup|f(x)| - \inf|f(x)| \le \sup(f(x)) - \inf(f(x))$?</p> </blockquote> <p>I started by stating that since $f$ is bounded, $\inf(f(x)) \le f(x) \le \sup(f(x))$. And then $|f(x)| \le \max\{|\sup(f(x))|,|\inf(f(x))|\}$.</p> <p>So $\sup|f(x)| = \max\{|\sup(f(x))|,|\inf(f(x))|\}$ and $\inf|f(x)| = \max\{\min\{|\sup(f(x))|,|\inf(f(x))|\},0\}$.</p> <p>I feel like my $\inf|f(x)|$ is wrong though, and I don't know where to get it.</p> <p>Any help would be really appreciated!</p>
gen-ℤ ready to perish
347,062
<p>I assert that the equator is <em>not</em> a line.</p> <p>From <a href="http://mathworld.wolfram.com/Line.html" rel="nofollow noreferrer"><em>Wolfram MathWorld</em></a>:</p> <blockquote> <p>A line is a straight one-dimensional figure having no thickness and extending infinitely in both directions.</p> </blockquote> <p>If you look at Earth in terms of orthogonal unit vectors (i.e., $\mathbf i$, $\mathbf j$, $\mathbf k$ or equivalents) then the equator occupies at least two dimensions.</p> <p>If you look at Earth in terms of <a href="http://mathworld.wolfram.com/SphericalCoordinates.html" rel="nofollow noreferrer">spherical coordinates</a>, where the origin is Earth’s center, then the equator is of the form $r=k$ for a constant $k$. Thus only one dimension is used to define it. However, its finiteness/infiniteness is debatable:</p> <ul> <li>If your coordinate system is defined such that $\theta$ is restricted to vary on $[0,2\pi)$ then the equator has a finite length of $2\pi k$.</li> <li>If your coordinate system is defined such that $\theta$ can vary across $(-\infty,\infty)$ then the locus (or set of points defined by a criterion in some way) $r=k$ is infinite.</li> </ul> <p>However, notice that $r=k$ for $\theta\in[0,2\pi)$ and $r=k$ for $\theta\in(-\infty,\infty)$ — though they appear differently in numerical form — overlap entirely on a graph. This is part of the reason why $\theta$ is almost always restricted to $[0,2\pi)$ (which is mentioned in the linked article).</p> <p>Thus, in the conventional sense, the equator is not a line.</p> <p>You also ask if a circle is an infinite line. In response to that, I ask you, what is the length of that line, or in other words, its circumference? There are many reasons to not think of a circle as an infinite line. I’ll leave that investigation to you; perhaps you’d like to write a paper on it one day....</p>
1,593,282
<p>Say we have the function $f:A \rightarrow B$ which is pictured below.<a href="https://i.stack.imgur.com/WfCxs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WfCxs.jpg" alt="enter image description here"></a></p> <p>This function is not bijective, so the inverse function $f^{-1}: B \rightarrow A$ does not exist. However, we can see that elements $d$ and $e$ in set $B$ are each mapped to by a single element in $A$, so they are kind of "nice" in this sense, almost like a bijective function.</p> <p>Would it be acceptable to use the notation $f^{-1}(d)=6$ and $f^{-1}(e)=7$ here, even though not all the elements of $B$ have a single inverse? Or does the use of $f^{-1}$ always imply that the entire function $f$ has an inverse?</p>
Gregory Grant
217,398
<p>Yes, that's perfectly reasonable, and it's done quite regularly. BUT you have to interpret $f^{-1}(x)$ as a <em>set</em>, a subset of the domain, and not as an element of the domain. That's the case even if the set has only one element. Just make sure you don't treat $f^{-1}$ as an actual function and the notation is fine.</p>
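In code, this "preimage as a set" convention is natural. Here is a sketch with a function given as a dict — only the pairs $6\mapsto d$ and $7\mapsto e$ are taken from the question; the rest of the mapping is invented for illustration:

```python
# A toy version of the pictured situation: only the pairs 6 -> 'd' and 7 -> 'e'
# are taken from the question; the rest of the mapping is made up.
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c', 5: 'c', 6: 'd', 7: 'e'}

def preimage(f, y):
    # f^{-1}(y) as a *set* (possibly empty, possibly with several elements)
    return {x for x, fx in f.items() if fx == y}

assert preimage(f, 'a') == {1, 2}  # several preimages: f is not injective
assert preimage(f, 'd') == {6}     # the "nice" case from the question
assert preimage(f, 'e') == {7}
```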
3,098,245
<p><span class="math-container">$$ \frac{d}{dx}e^x =\frac{d}{dx} \sum_{n=0}^{ \infty} \frac{x^n}{n!}$$</span> <span class="math-container">$$ \sum_{n=0}^{ \infty} \frac{nx^{n-1}}{n!}$$</span> <span class="math-container">$$ \sum_{n=0}^{ \infty} \frac{x^{n-1}}{(n-1)!}$$</span> This isn't as straightforward as I thought it would be. I'm assuming there is some way to shift indices to complete the proof, but I'm not sure exactly how. </p>
David C. Ullrich
248,223
<p>Seems to me people are giving correct versions of the calculation, hence showing that your conclusion is wrong, but without pinning down exactly where the error in your version is. The error is here: The "identity" <span class="math-container">$$\frac k{k!}=\frac1{(k-1)!}$$</span>is not true if <span class="math-container">$k=0$</span>. (You obtain that identity by cancelling a <span class="math-container">$k$</span> in the numerator and denominator. Ignoring the fact that if <span class="math-container">$k=0$</span> there <em>is</em> no factor of <span class="math-container">$k$</span> in <span class="math-container">$k!$</span>, in general you cannot "cancel" zeroes: <span class="math-container">$1(0)=2(0)$</span>, hence <span class="math-container">$1=2$</span>.)</p>
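For contrast, here is the computation with the $n=0$ term handled correctly: the constant term differentiates to zero, so the sum starts at $n=1$, and the index shift $m=n-1$ returns the original series:

```latex
\frac{d}{dx}\sum_{n=0}^{\infty}\frac{x^n}{n!}
  =\sum_{n=1}^{\infty}\frac{n\,x^{n-1}}{n!}
  =\sum_{n=1}^{\infty}\frac{x^{n-1}}{(n-1)!}
  =\sum_{m=0}^{\infty}\frac{x^m}{m!}
  =e^x
```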
3,323,170
<p>This question was originally posted <a href="https://crypto.stackexchange.com/q/72456/62225">here</a> on Crypto StackExchange. As suggested by an answer I am posting it here to help get a better perspective on the math side.</p> <blockquote> <p>Public-key cryptography was not invented until the 1970's. Apart from the idea not existing earlier (<a href="https://crypto.stackexchange.com/a/59542/62225">as talked about here</a>), is there any reason it could not have been used earlier? For example, are there forms that are easy enough to perform by hand but complicated enough to not be solved (easily) by hand?</p> </blockquote>
Henno Brandsma
4,280
<p>I think that a system like RSA would have been impractical in pre-computer days. How to generate large enough primes? There were tables for the smaller primes, but your opponent has the same tables, so then trial division would be a threat... </p> <p>Also, modular exponentiation is no party to do by hand either. And encoding a message to a number is awkward too. I posit that it's impractical to do by hand for parameters that would have been considered safe. </p>
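To put a rough number on "no party to do by hand": square-and-multiply costs about one squaring per bit of the exponent plus one multiplication per set bit. A toy sketch of mine (the parameters are illustrative only, far smaller than anything that would be considered safe):

```python
def modexp_count(base, exp, mod):
    # square-and-multiply, counting the modular multiplications a human would do
    result, count = 1, 0
    b = base % mod
    while exp:
        if exp & 1:
            result = (result * b) % mod
            count += 1
        b = (b * b) % mod
        count += 1
        exp >>= 1
    return result, count

r, ops = modexp_count(42, 65537, 3233)  # 3233 = 61 * 53, a toy modulus
assert r == pow(42, 65537, 3233)
print(ops)  # 19: 17 squarings plus 2 multiplications for the 17-bit exponent
```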
2,230,897
<blockquote> <p>A line passing through $P=(\sqrt3,0)$ and making an angle of $\pi/3$ with the positive direction of x axis cuts the parabola $y^2=x+2$ at A and B, then:<br> (a)$PA+PB=2/3$<br> (b)$|PA-PB|=2/3$<br> (c)$(PA)(PB)=\frac{4(2+\sqrt3)}{3}$<br> (d)$\frac{1}{PA}+\frac{1}{PB}=\frac{2-\sqrt3}{2}$ </p> </blockquote> <p>Equation of line:<br> $y=\sqrt3(x-\sqrt3)$<br> $y=\sqrt3x-3$ </p> <p>substituting $y=\sqrt3x-3$ in $y^2=x+2$ </p> <p>$(\sqrt3x-3)^2=x+2$<br> $3x^2-6\sqrt3x+9=x+2$<br> $3x^2-(6\sqrt3+1)x+7=0$ </p> <p>$x_A=\frac{(6\sqrt3+1)+\sqrt{(6\sqrt3+1)^2-84}}{6}$<br> $x_B=\frac{(6\sqrt3+1)-\sqrt{(6\sqrt3+1)^2-84}}{6}$ </p> <p>But this gives a very complicated value of $x$ and that makes me feel that there ought to be a shorter way to solve this question, just can't figure out what it is.</p>
Mick
42,351
<p>Let the feet of A and B on the x-axis be A’ and B’ respectively.</p> <p><a href="https://i.stack.imgur.com/rV4mQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rV4mQ.png" alt="enter image description here"></a></p> <p>Compute $A’P$. It is … $\dfrac {1}{6} + \dfrac {\sqrt{\delta}}{6}$, where $\delta = (6\sqrt3+1)^2-84$ is the discriminant of the quadratic above.</p> <p>Similarly, $PB’ = -\dfrac {1}{6} + \dfrac {\sqrt{\delta}}{6}$.</p> <p>Then, $|PA – PB| = 2 \times |A’P – PB’| = \dfrac {2}{3}$.</p>
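The options can also be confirmed numerically from the intersection points (my own check, using the quadratic derived in the question):

```python
import math

s3 = math.sqrt(3)
# roots of 3x^2 - (6*sqrt(3) + 1)x + 7 = 0 (the quadratic from the question)
B = 6*s3 + 1
disc = B**2 - 4*3*7
xA = (B + math.sqrt(disc)) / 6
xB = (B - math.sqrt(disc)) / 6

def dist_from_P(x):
    y = s3 * (x - s3)              # point on the line y = sqrt(3)(x - sqrt(3))
    return math.hypot(x - s3, y)

PA, PB = dist_from_P(xA), dist_from_P(xB)
assert abs(abs(PA - PB) - 2/3) < 1e-9          # option (b)
assert abs(PA * PB - 4*(2 + s3)/3) < 1e-9      # option (c)
```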
3,600,528
<p>Is there a general formula for determining the multiplicity of <span class="math-container">$2$</span> in <span class="math-container">$n!\;?$</span> I was working on a sequence containing subsequences of 0, 1: a 0 stands for an even quotient, a 1 for an odd quotient. Start with k = 3 (k should be odd at the start); if k is odd find (k-1)/2, otherwise k/2. This subsequence goes on until we reach 1.</p> <p>Assign 0 or 1 according as the quotient is even or odd, respectively. Take k(n+1) = k(n) + 2, with k(n) odd and n >= 1, and do this for all k >= 3. A 1 is added before each subsequence, as each subsequence is generated by an odd integer >= 3. The sequence goes on like this: 11, 101, 111, 1001, 1101, 1011, 1111, 10001, 11001, 10101, 11101, 10011, 11011, 10111, 11111, ...</p> <p>The subsequences for increasing k are replicas of earlier subsequences with the steps to reach 1 minus one step (i.e. the number of bits minus one), with one extra bit of 1 or 0 depending on k.</p> <p>Example: for k = 5 the subsequence is 101, as the quotient in the first step is 2, and in the second step is 1.</p> <p>What is the sequence at the nth step? Also, can this sequence help in determining the multiplicity of 2 in n!?</p>
Bernard
202,857
<p>You have <em>Legendre's formula</em>: for any prime <span class="math-container">$p$</span>, the multiplicity of <span class="math-container">$p$</span> in <span class="math-container">$n!$</span> is <span class="math-container">$$v_p(n!)=\biggl\lfloor\frac{n}{p}\biggr\rfloor+\biggl\lfloor\frac{n}{p^2}\biggr\rfloor+\biggl\lfloor\frac{n}{p^3}\biggr\rfloor+\dotsm$$</span>(of course, this is a finite sum).</p>
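Legendre's formula is easy to check numerically; here is a small illustration of my own (not part of the original answer), comparing it against directly counting prime factors:

```python
def legendre(n, p):
    """Multiplicity of the prime p in n!, via Legendre's formula."""
    v, pk = 0, p
    while pk <= n:
        v += n // pk
        pk *= p
    return v

def brute_force(n, p):
    """Count factors of p in n! by factoring each term directly."""
    v = 0
    for m in range(2, n + 1):
        while m % p == 0:
            v += 1
            m //= p
    return v

print(legendre(10, 2))   # floor(10/2) + floor(10/4) + floor(10/8) = 5 + 2 + 1 = 8
```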
182,101
<p>With respect to assignments/definitions, when is it appropriate to use $\equiv$ as in </p> <blockquote> <p>$$M \equiv \max\{b_1, b_2, \dots, b_n\}$$</p> </blockquote> <p>which I encountered in my analysis textbook as opposed to the "colon equals" sign, where this example is taken from Terence Tao's <a href="http://terrytao.wordpress.com/">blog</a> :</p> <blockquote> <p>$$S(x, \alpha):= \sum_{p\le x} e(\alpha p) $$</p> </blockquote> <p>Is it user-background dependent, or are there certain circumstances in which one is more appropriate than the other?</p>
Belgi
21,335
<p>$x:=y$ means $x$ is defined to be $y$.</p> <p>The notation $\equiv$ is also (sometimes) used to mean that, but it also has other uses, such as $4\equiv0$ (mod 2).</p> <p>I have encountered $:=$ a lot more often than $\equiv$, and it is my personal favourite.</p> <p>There is also the notation $\overset{\Delta}{=}$ to mean "equal by definition".</p> <p>By the way, some people also use the notation $x=:y$ to mean $y$ is defined to be $x$.</p>
165,328
<p>What is the difference between $\cap$ and $\setminus$ symbols for operations on sets?</p>
Zev Chonoles
264
<p>Here is the <a href="http://en.wikipedia.org/wiki/Union_%28set_theory%29" rel="nofollow">Wikipedia article on $\cup$</a>, the <a href="http://en.wikipedia.org/wiki/Intersection_%28set_theory%29" rel="nofollow">Wikipedia article on $\cap$</a>, and the <a href="http://en.wikipedia.org/wiki/Complement_%28set_theory%29#Relative_complement" rel="nofollow">Wikipedia article on $\setminus$</a>.</p> <p>Given two sets $A$ and $B$, the sets $A\cup B$, $A\cap B$, and $A\setminus B$ are defined as $$A\cup B=\{x\mid x\in A\;\text{ or }\;x\in B\}$$ $$A\cap B=\{x\mid x\in A\;\text{ and }\;x\in B\}$$ $$A\setminus B=\{x\mid x\in A\;\text{ and }x\notin B\}$$ For example, if $A=\{1,2,3,4\}$ and $B=\{3,4,5,6\}$, then $$A\cup B=\{1,2,3,4,5,6\},$$ $$A\cap B=\{3,4\},$$ $$A\setminus B=\{1,2\}.$$</p>
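The same worked example can be rendered in code (my own addition); Python's built-in sets use `|`, `&amp;`, and `-` for union, intersection, and difference:

```python
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

print(A | B)   # union: {1, 2, 3, 4, 5, 6}
print(A & B)   # intersection: {3, 4}
print(A - B)   # set difference: {1, 2}
```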
165,328
<p>What is the difference between $\cap$ and $\setminus$ symbols for operations on sets?</p>
Aru Ray
13,129
<p>(Answer to the edited question):</p> <p>$\cap$ is for set intersection and $\backslash$ is for set difference. I'm sure you can look up the wikipedia entries for them. Here is a more descriptive example: </p> <p>Suppose $A$ is the set of families with pet cats, and $B$ is the set of families with pet dogs. $A \cap B$ will consist of families with at least one cat AND at least one dog. $A \backslash B$ will consist of families with at least one cat but NO dogs. </p>
3,247,176
<p>I have this statement:</p> <blockquote> <p>It can be assured that | p | ≤ 2.4, if it is known that:</p> <p>(1) -2.7 ≤ p &lt; 2.3</p> <p>(2) -2.2 &lt; p ≤ 2.6</p> </blockquote> <p>My development was:</p> <p>First, <span class="math-container">$ -2.4 \leq p \leq 2.4$</span></p> <p>With <span class="math-container">$1)$</span> by itself, that can't be ensured; the same argument applies to <span class="math-container">$2)$</span>.</p> <p>Now, I will use <span class="math-container">$1)$</span> and <span class="math-container">$2)$</span> together; the intersection of these intervals is <span class="math-container">$(-2.2, 2.3)$</span>. So, this also does not allow me to ensure that | p | ≤ 2.4, since there are some numbers that are outside the intersection of these two intervals; for example, 2.35 is outside this interval. </p> <p>But according to the guide, the correct answer must be <span class="math-container">$1), 2)$</span> together, and I don't know why.</p> <p>Thanks in advance.</p>
fleablood
280,126
<p>You're getting confused by a <em>strong</em> premise implying a <em>weak</em> conclusion. </p> <p>1 and 2 together say precisely: <span class="math-container">$p\in (-2.2,2.3) $</span></p> <p>And you are asked to conclude <span class="math-container">$p\in [-2.4,2.4] $</span>.</p> <p>That's simply a matter of noting that <span class="math-container">$(-2.2, 2.3) \subset [-2.4,2.4] $</span>. And if <span class="math-container">$p $</span> is in the smaller subset, then we can conclude <span class="math-container">$p $</span> must therefore also be in the bigger superset.</p> <p>Or in other words:</p> <p><span class="math-container">$-2.2 &lt; p &lt; 2.3\implies $</span></p> <p><span class="math-container">$-2.4\le -2.2 &lt; p &lt;2.3 \le 2.4\implies $</span></p> <p><span class="math-container">$-2.4\le p \le 2.4$</span></p> <p>An analogy:</p> <p>We can conclude <span class="math-container">$p $</span> is a somewhat green article of clothing if we know both </p> <p>1: <span class="math-container">$p$</span> is a hat.</p> <p>2: <span class="math-container">$p$</span> is precisely the color an emerald takes when viewed at noon on the summer solstice on the equator.</p> <p>In your question, 1 tells you that <span class="math-container">$p &lt;2.3$</span>, so you can conclude <span class="math-container">$p\le 2.4$</span>. (<span class="math-container">$p $</span> is a hat, so you can conclude <span class="math-container">$p $</span> is an article of clothing.)</p> <p>And 2 tells you that <span class="math-container">$-2.2 &lt;p $</span>, so you can conclude <span class="math-container">$-2.4\le p $</span>. (<span class="math-container">$p $</span> is the color of emeralds, so you can conclude <span class="math-container">$p $</span> is green.)</p> <p>Together 1 and 2 tell you <span class="math-container">$-2.2&lt;p &lt;2.3$</span>, so you can conclude <span class="math-container">$-2.4\le p\le 2.4$</span>.
(Together you know <span class="math-container">$p $</span> is a hat the color of emeralds in a certain condition so you can conclude <span class="math-container">$p $</span> is a green article of clothing.)</p> <p>Strong premise. Weak result.</p>
1,994,021
<p>In a research article it is written that the following limit is equal to zero: $$\lim_{x \to 0 }\frac{d}{2^{b+c/x}-1}\left[a2^{b+c/x}-a-a\frac{c\ln{(2)}2^{b+c/x}}{2x}-\frac{c\ln{(2)}}{2x^2}\frac{2^{b+c/x}}{\sqrt{2^{b+c/x}-1}}\right]\left(e^{-ax\sqrt{2^{b+c/x}-1}}\right)=0$$ where $a,b,c,d$ are all positive constants. I am unable to prove it. Please help me get there. Many thanks in advance.</p>
Sil
290,240
<p>First notice that the function under limit is not defined in the left neighborhood of $0$, because assuming $x&lt;0$, the expression under square root has to be non negative, i.e. $2^{b+c/x}-1 \geq 0$, and you can put these together to show that $x \leq -c/b$ then. So I'll assume you want to compute limit for $x \to 0^+$.</p> <p>Next thing, and it is not necessary but personally I find it more intuitive, is to replace $x$ with $\frac{1}{x}$ and calculate the limit as $x\to\infty$.</p> <p>After this and several algebraic manipulations, you would get</p> <p>$$ \lim_{x \to \infty} -cd\frac{\ln 2}{2} f(x)g(x) $$</p> <p>where $$f(x) = \frac{a}{x}-\frac{2a}{c\ln 2 x^2}+\frac{a}{x(2^{b+cx}-1)}+\frac{1}{\sqrt{2^{b+cx}-1}}+\frac{1}{(2^{b+cx}-1)^{3/2}}$$ $$g(x) = \frac{x^2}{\mathrm{e}^{a\frac{\sqrt{2^{b+cx}}}{x}}}$$</p> <p>Now it is enough to show that $\lim_{x \to \infty} f(x) = 0$ and $\lim_{x \to \infty} g(x) = 0$.</p> <p>The $\lim_{x \to \infty} f(x) = 0$ is straightforward since each of the fractions converge to $0$ (notice that $2^{b+cx}-1 \to \infty$ as $x \to \infty$).</p> <p>The $\lim_{x \to \infty} g(x) = 0$ is a bit more tricky, but notice that $\sqrt{2^{b+cx}}$ grows faster than $x^2$, i.e. for sufficiently large $x$ you have</p> <p>$$ \sqrt{2^{b+cx}} \geq x^2 $$</p> <p>Applying it to the $g(x)$, you get </p> <p>$$ 0 \leq \frac{x^2}{\mathrm{e}^{a\frac{\sqrt{2^{b+cx}}}{x}}} \leq \frac{x^2}{\mathrm{e}^{ax}} $$</p> <p>Now it is clear the right side converge to $0$ (since exponential function grows faster than any polynomial, same principle as we just used), therefore the left side also converge to $0$.</p>
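As a rough numerical sanity check (my own addition, with the arbitrary sample choice $a=b=c=d=1$), evaluating the original expression at small positive $x$ shows the very rapid collapse to $0$ driven by the exponential factor:

```python
import math

def expr(x, a=1.0, b=1.0, c=1.0, d=1.0):
    # The expression from the question, with arbitrary sample constants.
    t = 2.0 ** (b + c / x)                      # 2^(b + c/x)
    bracket = (a * t - a
               - a * c * math.log(2) * t / (2 * x)
               - c * math.log(2) / (2 * x ** 2) * t / math.sqrt(t - 1))
    return d / (t - 1) * bracket * math.exp(-a * x * math.sqrt(t - 1))

print(expr(0.1), expr(0.05))   # the second value is already negligibly small
```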
413,882
<p>Let $\mathbb F$ be a field and $\mathbb F[x]$ the ring of polynomials with coefficients in $\mathbb F$. Let $p(x)$ be an irreducible polynomial in $\mathbb F[x]$. Let $k$ be a positive integer and consider the vector space $V$, over the field $\frac{\mathbb F[x]}{(p(x))}$with basis </p> <p>$$1, p(x), p(x)^2, \ldots, p(x)^{k-1}.$$</p> <p>That is, $$V=\left\{q_0+q_1p(x)+\cdots +q_{k-1}p(x)^{k-1}\;\;:\;\;q_i\in\frac{\mathbb F[x]}{(p(x))}\right\}.$$ </p> <p>Define in $V$ a product modulo $p(x)^k$. That is, if </p> <p>$$v=v_0+v_1p(x)+\cdots +v_{k-1}p(x)^{k-1}\;\;\text{and}\;\;u=u_0+u_1p(x)+\cdots +u_{k-1}p(x)^{k-1},$$ then we multiply $v$ by $u$ the obvious way, using the distributivity and assuming that $p(x)^k=0$. This makes $V$ an algebra. My question is:</p> <blockquote> <p>Is this algebra $V$ isomorphic to $\frac{\mathbb F[x]}{(p(x)^k)}$? </p> </blockquote>
Community
-1
<p>If I understand well $V=L^k$, where $L=\mathbb F[X]/(p)$ is a field. On $V$ you define a multiplication by $$(a_0,a_1,\dots,a_{k-1})(b_0,b_1,\dots,b_{k-1})=(a_0b_0,a_0b_1+a_1b_0,\dots,a_0b_{k-1}+\cdots+a_{k-1}b_0).$$</p> <p>It's easy to see that $V\simeq L[Y]/(Y^k)$ and thus your problem reduces to the following: </p> <blockquote> <p>Let $\mathbb F$ be a field and $p\in\mathbb F[X]$ irreducible. Set $L=\mathbb F[X]/(p)$. Is it true that $L[Y]/(Y^k)$ is isomorphic to $\mathbb F[X]/(p^k)$?</p> </blockquote> <p>Denote $L$ by $\mathbb F(t)$, where $t$ is the residue class of $X$ modulo $(p)$. (Obviously $p(t)=0$.) I'll prove that $$L[Y]/(Y^k)\text{ is isomorphic to } \mathbb F[X]/(p^k)$$ whenever $p'(t)\neq 0$. (This happens, for instance, if the characteristic of $\mathbb F$ is $0$.)</p> <p>Define a ring homomorphism $\varphi:\mathbb F[X]\to L[Y]/(Y^k)$ by sending $X$ to $y+t$, where $y$ denotes the residue class of $Y$ modulo $(Y^k)$. We have $\varphi(p(X))=p(y+t)=p(t)+yh(y)$, $h$ being a polynomial with coefficients in $L$ and having the property that $h(0)=p'(t)$. Since $p(t)=0$ we get $\varphi(p(X))=yh(y)$ and using that $y^k=0$ we obtain $\varphi(p^k(X))=0$, that is, $p^k(X)\in\ker\varphi$. But $\ker\varphi$ is a principal ideal (of $\mathbb F[X]$), so $\ker\varphi=(f(X))$ for some $f\in\mathbb F[X]$. Since $f(X)\mid p^k(X)$ we must have $f(X)=p^i(X)$ for $1\le i\le k$. Suppose $i&lt;k$. Then $\varphi(p^i(X))=0$ which is equivalent to $y^ih^i(y)=0$, that is, $Y^ih^i(Y)\in(Y^k)$. We thus get $Y\mid h(Y)$, a contradiction (with $h(0)\neq 0$). As a consequence we have proved that $\ker\varphi=(p^k(X))$. It remains to prove that $\varphi$ is surjective. But this follows easily observing that $\varphi$ is a homomorphism of $\mathbb F$-vector spaces of the same dimension (equal to $k\deg p$).</p>
413,882
<p>Let $\mathbb F$ be a field and $\mathbb F[x]$ the ring of polynomials with coefficients in $\mathbb F$. Let $p(x)$ be an irreducible polynomial in $\mathbb F[x]$. Let $k$ be a positive integer and consider the vector space $V$, over the field $\frac{\mathbb F[x]}{(p(x))}$with basis </p> <p>$$1, p(x), p(x)^2, \ldots, p(x)^{k-1}.$$</p> <p>That is, $$V=\left\{q_0+q_1p(x)+\cdots +q_{k-1}p(x)^{k-1}\;\;:\;\;q_i\in\frac{\mathbb F[x]}{(p(x))}\right\}.$$ </p> <p>Define in $V$ a product modulo $p(x)^k$. That is, if </p> <p>$$v=v_0+v_1p(x)+\cdots +v_{k-1}p(x)^{k-1}\;\;\text{and}\;\;u=u_0+u_1p(x)+\cdots +u_{k-1}p(x)^{k-1},$$ then we multiply $v$ by $u$ the obvious way, using the distributivity and assuming that $p(x)^k=0$. This makes $V$ an algebra. My question is:</p> <blockquote> <p>Is this algebra $V$ isomorphic to $\frac{\mathbb F[x]}{(p(x)^k)}$? </p> </blockquote>
Jim
56,747
<p>No, the algebra you've defined is isomorphic to $\mathbb F[x]/x^k$, where $x$ is represented by $p(x) \in V$. It has dimension $k$ whereas $\mathbb F[x]/p(x)^k$ has dimension $k\cdot\deg p$.</p>
223,008
<p>Ok so my teacher said we can use this sentence: <strong>If $a$ is not a multiple of $5$, then $a^2$ is not a multiple of $5$ either.</strong></p> <p>to prove this sentence: <strong>If $a^2$ is a multiple of $5$, then $a$ itself is a multiple of $5$.</strong></p> <p>I don't understand the logic behind it. I mean, what's the link between them? How can we conclude the 2nd sentence to be true if the 1st one is true?</p> <p>Thanks a lot guys!</p>
Brian M. Scott
12,042
<p>This is an example of an implication and its contrapositive. The contrapositive of an implication $\varphi\to\psi$ is the implication $\lnot\psi\to\lnot\varphi$; in words, the contrapositive of $$\text{if }\varphi\text{ is true},\text{ then }\psi\text{ is true}\tag{1}$$ is $$\text{if }\psi\text{ is not true},\text{ then }\varphi\text{ is not true}\;.\tag{2}$$</p> <p>(Here $\varphi$ and $\psi$ are any statements.) </p> <p>Suppose that you know that $(1)$ is true: whenever $\varphi$ is true, so is $\psi$. Now you discover that $\psi$ is false. Could $\varphi$ be true? No, because if it were, then you know that $\psi$ would be true as well. Thus, if $\psi$ is false you can conclude that $\varphi$ must be false as well $-$ which is $(2)$ in slightly different words.</p> <p>In your case $\varphi$ is $$a\text{ is not a multiple of }5$$ and $\psi$ is $$a^2\text{ is not a multiple of }5\;.$$</p> <p>You know that if $a$ is not a multiple of $5$, then neither is $a^2$. Suppose, now, that someone hands you an $a^2$ that <strong>is</strong> a multiple of $5$. Could $a$ fail to be a multiple of $5$? No: if $a$ were not a multiple of $5$, then $a^2$ would not be a multiple of $5$, and we know that this particular $a^2$ <strong>is</strong> a multiple of $5$. Since $a$ either is or is not a multiple of $5$, and we’ve ruled out the second possibility, we conclude that $a$ is also a multiple of $5$.</p>
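The equivalence of an implication and its contrapositive can also be checked mechanically; here is a small illustration of my own in code, including the concrete instance from the question:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# phi -> psi has the same truth value as (not psi) -> (not phi)
# under every assignment of truth values.
for phi, psi in product([True, False], repeat=2):
    assert implies(phi, psi) == implies(not psi, not phi)

# The concrete instance: whenever a is not a multiple of 5, neither is a^2;
# equivalently, a^2 being a multiple of 5 forces a to be one.
for a in range(1, 1000):
    if (a * a) % 5 == 0:
        assert a % 5 == 0
print("contrapositive equivalence verified")
```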
10,622
<p>Given an open set $U \subset \mathbb R ^n $, there exists an exhaustion by compact sets, i.e. a sequence of compact sets $K_i$, s.t.</p> <p>$\cup _{i=0}^{\infty} K_i = U$ and $\forall i \in \mathbb N : K_i \subset K_{i+1} ^{\circ}$ </p> <p>We can imagine that different exhaustions by compact sets 'propagate' at different speeds through the various parts of $U$.</p> <p>Let us call an exhaustion $(K_i)_i$ 'stronger' than another exhaustion $(L_i)_i$ whenever we have</p> <p>$\forall i \in \mathbb N \exists j \in \mathbb N : L_i \subset K_j$.</p> <p>We call two exhaustions equivalent if each one is stronger than the other - i.e. for each compact set in the first, we have a compact superset in the second, and vice versa.</p> <p><strong>Question</strong>: Are all exhaustions of $U$ equivalent?</p> <p>Purpose: You encounter various settings where you define a certain structure by such an exhaustion. If the above question has a positive answer, this would spare one from wondering whether such a construction is independent of the chosen exhaustion.</p>
shuhalo
3,557
<p>Ok, one version of my own, more technical.</p> <p>Let $V$ be any compact set included in $U$. It remains to show that $\exists i \in \mathbb N : V \subset K_i$.</p> <p>Suppose there is no index $i$ such that $V \subset K_i$.</p> <p>Then the sets $V_i := V \setminus K_i$ form a nested sequence of non-empty bounded sets with $V_i \cap K_i = \{\}$.</p> <p>Let $x_i \in V_i$ be a sequence of points, which exists by assumption. This sequence is bounded, and w.l.o.g. it is convergent.</p> <p>Hence let $x_i \rightarrow x$. Since $x \in V \subset U$, $\exists k \in \mathbb N : \exists \epsilon &gt; 0 : B_{\epsilon}(x) \subset {K_k}^{\circ}$.</p> <p>As $x_i$ converges, we may discard finitely many leading terms, so w.l.o.g. $\forall i : x_i \in B_{\epsilon/2}(x) \subset B_{\epsilon}(x) \subset K_k$.</p> <p>In particular $x_k \in K_k$, while $x_k \in V_k$ and $V_k \cap K_k = \{\}$ - a contradiction.</p> <p>Hence there is an $i$ such that $V \subset K_i$. As $V$ was an arbitrary compact set, we are done.</p>
3,535,316
<p><span class="math-container">$$\int_{0}^{\pi}e^{x}\cos^{3}(x)dx$$</span></p> <p>I tried to solve it by parts.I took <span class="math-container">$f(x)=\cos^{3}(x)$</span> so <span class="math-container">$f'(x)=-3\cos^{2}x\sin x$</span> and <span class="math-container">$g'(x)=e^{x}$</span> and I got&quot;</p> <p><span class="math-container">$$\int_{0}^{\pi}e^{x}\cos^{3}(x)dx=e^{x}\cos(x)+3\int_{0}^{\pi}e^{x}\cos^{2}(x)\sin(x)dx.$$</span></p> <p>How to approach the second integral?</p>
user577215664
475,762
<p>Use the definition of the exponential: <span class="math-container">$$ \begin{align} E=&amp;\sum_{n=0}^\infty \frac {n+1}{n!} \\ E=&amp;\sum_{n=0}^\infty \frac {1}{n!}+\sum_{n=1}^\infty \frac {n}{n!} \\ E=&amp;e+\sum_{n=1}^\infty \frac {1}{(n-1)!} \\ E=&amp;e+\sum_{n=0}^\infty \frac {1}{n!} \\ \implies E=&amp;2e \end{align} $$</span></p>
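A quick numerical check of the value derived above (my own addition):

```python
import math

# Partial sum of sum_{n>=0} (n+1)/n!  -- 50 terms are far more than enough
E = sum((n + 1) / math.factorial(n) for n in range(50))
print(E, 2 * math.e)   # both are about 5.43656
```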
3,166,419
<blockquote> <p>A fair coin is tossed three times in succession. If at least one of the tosses has resulted in Heads, what is the probability that at least one of the tosses resulted in Tails?</p> </blockquote> <p>My argument and answer: The coin was flipped thrice, and one of them was heads. So we have two unknown trials. The coin flips are all independent of each other, and so there is no useful information to be derived from the fact that one of them was heads. The probability of getting at least one Tails in these two trials is <span class="math-container">$ \frac 12 + \frac 12 - \frac 14 = \frac 34 $</span>. </p> <p>The given answer: <span class="math-container">$ \frac 67 $</span>. The answer proceeds as follows: Initially the sample space consists of 8 events. We now know that one of those events can't happen (TTT can't happen because one of them was heads). 6 of the remaining 7 events have at least one tail, and so the probability is <span class="math-container">$ \frac 67 $</span>.</p> <p>Why is my answer wrong? What am I missing?</p>
Jacob
644,834
<p>The answer <span class="math-container">$\frac{6}{7}$</span> is correct. Whatever the issue in your reasoning, I feel it must lie in the statement "there is no useful information" in saying one of the 3 flips was heads. To me it seems there is a difference between reading off the results of coins already flipped and predicting (as you do in your argument) the results of future flips based on previous ones. Clearly, as you mention, each flip is independent, but perhaps information about some of the flips is useful if all of the flips have <em>already been made</em>.</p> <p>To illustrate this point, I think it helps to take this problem to the extreme. Suppose we perform a million coin flips. I don't tell you the results of each coin flip, but I do tell you that 999,999 of the flips yielded heads. I now ask you to guess the result of the remaining flip. What should you answer? (I recommend thinking about this yourself, then look at my answer below.)</p> <p><strong>Answer:</strong> You may suspect that you have a 50-50 shot, but that is not the case. The crucial point is I did not tell you <em>which</em> of the 999,999 coins were heads. Had I made this specification it would be equally likely. However, there are, in fact, one million possible sequences of flips in which only one is tails. Here they are: <span class="math-container">\begin{equation} T\, H\, H\, H\, ...\, H \\ H\, T\, H\, H\, ...\, H \\ H\, H\, T\, H\, ...\, H \\ . \\ . \\ . \\ H\, H\, H\, H\, ...\, T \\ \end{equation}</span></p> <p>In contrast, only one sequence has all heads. Assuming sequences of flips are equally likely, we must conclude that guessing tails is the appropriate choice. Notice that we cannot solve this problem correctly by reducing the size of our string of flips to one, similar to your method. We had to consider the flips as part of a larger sequence. I hope this helps!</p>
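The conditional probability for the original three-flip problem can be confirmed by exhaustively enumerating the eight equally likely outcomes (a small check of my own):

```python
from itertools import product
from fractions import Fraction

outcomes = list(product("HT", repeat=3))           # 8 equally likely sequences
with_heads = [o for o in outcomes if "H" in o]     # condition: at least one H
favorable = [o for o in with_heads if "T" in o]    # ...and at least one T

p = Fraction(len(favorable), len(with_heads))
print(p)   # 6/7
```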
1,761,527
<p>Let $f,g,h:X\to\mathbb{R}$ be such that $f(x)\leq g(x)\leq h(x)$ for all $x\in X$. Suppose $f$ and $h$ are differentiable at $a$ and $h(a)=f(a)$. Then $g$ is differentiable at $a$ and $g'(a)=f'(a).$</p> <p>How can I prove that $g$ is differentiable at $a$? Thanks</p>
hmakholm left over Monica
14,366
<p>This is not even close to being true -- for example a counterexample could be $$ f(x) = -4 \qquad g(x)=\arctan(x) \qquad h(x)=4 $$ (where $g'(a)$ exists for all $a$ but never equals $f'(a)$) or $$ f(x) = 0 \qquad g(x)=\begin{cases}1 &amp; x\in\mathbb Q \\ 0 &amp; x\notin\mathbb Q \end{cases} \qquad h(x)=1 $$ (where $g'(a)$ doesn't exist anywhere).</p> <hr> <p><em>For the edited question where the assumption is $f(a)=h(a)$ rather than $f'(a)=h'(a)$:</em></p> <p>Without loss of generality we can assume that $f(x)$ is constant, $f(x)=c$ everywhere -- otherwise just subtract $f(x)$ from all three functions.</p> <p>Then since $f(a)=h(a)=c$ we must also have $g(a)=c$.</p> <p>Furthermore neither $g$ nor $h$ can be less than the constant value of $f$, so $$ \frac{h(x)-h(a)}{x-a} $$ is either zero or has the same sign as $x-a$ -- so if it has a limit for $x\to a$ (which it must because $h$ is differentiable), this limit must be $0$.</p> <p>Now since $g$ is between $f$ and $h$, $$\frac{g(x)-g(a)}{x-a}$$ is always between $0=\frac{f(x)-c}{x-a}$ and $\frac{h(x)-c}{x-a}$ which both converge to $0$, so by the squeeze theorem it too converges to $0$.</p>
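A concrete numerical illustration of this squeeze argument (my own addition, using $f(x)=-x^2$, $h(x)=x^2$, $a=0$, so that $f(a)=h(a)=0$): for any $g$ squeezed between them, the difference quotient at $0$ is bounded by $|x|$, hence it converges to $0$.

```python
import math

def f(x): return -x * x
def h(x): return x * x

def g(x):
    # some function squeezed between f and h (|g(x)| <= x^2)
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

for x in [0.1, 0.01, 0.001, 1e-6]:
    assert f(x) <= g(x) <= h(x)
    q = (g(x) - g(0)) / x        # difference quotient at a = 0
    assert abs(q) <= abs(x)      # hence q -> 0, i.e. g'(0) = 0
print("difference quotient squeezed to 0")
```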
794,842
<p>Let the statement $?PQR$ be determined by the following truth-table.</p> <pre><code>P Q R  ?PQR
T T T  T
T T F  F
T F T  F
T F F  T
F T T  T
F T F  T
F F T  F
F F F  T
</code></pre> <ol> <li>After ‘Answer:’ below, give a logically equivalent sentence of ?PQR in FOL. But here’s the catch: you may only use the Boolean connectives (i.e. ¬, Ʌ, and V) in the sentence you give. I am trying to figure this assignment out, but cannot figure out the formula for determining the sentence. Any ideas?</li> </ol>
Asaf Karagila
622
<p><strong>HINT:</strong></p> <p>For every line whose value is $\tt T$, write a sentence describe the assignment to the variables, e.g. $P\land Q\land R$ describes the first line of the table. </p> <p>Then take the disjunction of these sentences, and show that the result has exactly this truth table.</p> <p>You might want to simplify that result.</p>
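Following this hint mechanically (my own sketch; the truth table is copied from the question), one can generate the disjunction and confirm it reproduces ?PQR on every assignment:

```python
# Truth table of ?PQR from the question, keyed by (P, Q, R); 1 = T, 0 = F
table = {
    (1, 1, 1): 1, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 0, (0, 0, 0): 1,
}

def dnf(p, q, r):
    # Disjunction of one conjunct per True row: each conjunct asserts
    # the exact assignment of that row (P or not-P, and so on).
    return any(
        all((v if lit else not v) for v, lit in zip((p, q, r), row))
        for row, val in table.items() if val
    )

# The disjunction agrees with the table on every assignment.
assert all(dnf(*row) == bool(val) for row, val in table.items())
print("DNF reproduces the truth table")
```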
4,623,022
<p>I have a question that I have been curious about for years.</p> <p>In differential geometry, since the exterior derivative satisfies the property <span class="math-container">$d^2=0$</span>, we can build de Rham cohomology from it.</p> <p>Now write <span class="math-container">$\iota_X:\Omega^n\rightarrow\Omega^{n-1}$</span> for the interior derivative (also called the interior product) with a vector field <span class="math-container">$X$</span>; then <span class="math-container">$\iota_X^2=0$</span> holds. Can we build a homology from this, for a suitable vector field <span class="math-container">$X$</span>?</p> <p>And if such a homology can be constructed, does it have any useful properties, like the de Rham theorem?</p> <p>I would really appreciate it if you could let me know.</p>
Quaere Verum
484,350
<p>This is really a comment, but far too long for one. I have also wondered about this, so I thought I would get started with the simplest possible examples. If anyone has a good interpretation for what these cohomology groups are, I would also love to know it.</p> <p>For the simplest example, let's take <span class="math-container">$S^1$</span> as our manifold. The (co)tangent bundles are both trivial. Write their generators over <span class="math-container">$C^\infty(S^1)$</span> as <span class="math-container">$\partial_\theta$</span> and <span class="math-container">$d\theta$</span>. Take a vector field <span class="math-container">$v=g\partial_\theta$</span> for some <span class="math-container">$g\in C^\infty(S^1)$</span>. We consider the chain complex <span class="math-container">$$0\to\Omega^1(S^1)\xrightarrow{\iota_v}C^\infty(S^1)\to 0$$</span> Write <span class="math-container">$\eta\in\Omega^1$</span> as <span class="math-container">$\eta=fd\theta$</span>. Then <span class="math-container">$\iota_v(\eta)=fg$</span>. Therefore, <span class="math-container">$$\ker\iota_v=\{\eta=fd\theta\in \Omega^1(S^1)|fg=0\}=\text{Ann}_{C^\infty(S^1)}\langle g\rangle$$</span> That is, <span class="math-container">$H^1(S^1,g\partial_\theta)$</span> is the annihilator of the ideal generated by <span class="math-container">$g$</span>, as a module over <span class="math-container">$C^\infty(S^1)$</span>. By definition, interior multiplication acts trivially on functions. So to find <span class="math-container">$H^0(S^1,g\partial_\theta)$</span>, we only need to know the image of <span class="math-container">$\iota_v$</span>. Clearly, we have <span class="math-container">$$\text{im }\iota_v=\{fg\mid f\in C^\infty(S^1)\}$$</span> since <span class="math-container">$fd\theta$</span> defines a <span class="math-container">$1$</span>-form on <span class="math-container">$S^1$</span>. 
As such, we get <span class="math-container">$$H^0(S^1,g\partial_\theta)=C^\infty(S^1)/\langle g\rangle$$</span> If this were algebraic geometry, we could think about <span class="math-container">$H^1(S^1,g\partial_\theta)$</span> as the annihilator of an ideal sheaf, and of <span class="math-container">$H^0(S^1,g\partial_\theta)$</span> as the coordinate ring of the subscheme defined by said ideal sheaf.</p> <p>At any rate, these vector spaces are not finite dimensional, so I would say this is not a particularly &quot;nice&quot; cohomology. It reminds me of Poisson cohomology, which is likewise ill-understood even for simple manifolds.</p> <p>Another case to look at, is a manifold <span class="math-container">$X$</span> with an action of a Lie group <span class="math-container">$G$</span>. Then we can look at the fundamental vector fields <span class="math-container">$v_\xi$</span> for <span class="math-container">$\xi\in\mathfrak{g}$</span> and ask ourselves what these cohomology groups might represent. The easiest case to consider, then, would again be a torus, say <span class="math-container">$T^2=S^1\times S^1$</span>, viewed as a principal <span class="math-container">$S^1$</span>-bundle over the circle. Then we can look at the vector field <span class="math-container">$v=\partial_{\theta_2}$</span>, where I denote the coordinates by <span class="math-container">$(\theta_1,\theta_2)$</span>. This is the fundamental vector field which corresponds to <span class="math-container">$1\in\mathfrak{u}(1)\cong\mathbb{R}$</span>. Once again, <span class="math-container">$\Omega^2(T^2)$</span> and <span class="math-container">$\Omega^1(T^2)$</span> are free modules over <span class="math-container">$C^\infty(T^2)$</span>, so computations become very easy (and maybe this is why nothing interesting is happening). 
Similar reasoning to the above shows that, this time, we get <span class="math-container">$H^2(T^2,v)=0$</span>, <span class="math-container">$H^1(T^2,v)= C^\infty(T^2)\cdot d\theta_1/(-C^\infty(T^2)\cdot d\theta_1)=0$</span> and <span class="math-container">$H^0(T^2,v)=C^\infty(T^2)/C^\infty(T^2)=0$</span>.</p> <p>Perhaps if we took the Hopf fibration <span class="math-container">$S^3\to S^2$</span> as a principal <span class="math-container">$S^1$</span>-bundle, it would yield something more interesting. This is a little bit more involved.</p> <p><strong>EDIT:</strong> The response by Mariano Suárez-Álvarez is of course correct, and it tells us that this (co)ohomology will be trivial whenever the vector field is nowhere vanishing. As such, it is not particularly interesting to look at, I believe. This is largely due to the fact that the operation of interior multiplication is really a &quot;pointwise&quot; operation, not a local operation like differentiation. Hence it acts trivially on <span class="math-container">$C^\infty(X)$</span>, which is why it does not give &quot;interesting&quot; local information about the manifold <span class="math-container">$X$</span>.</p> <p>It also answers what happens on the Hopf bundle: the cohomology will again be trivial, because the group action is free. Rather, when we have an <span class="math-container">$S^1$</span>-action (or an <span class="math-container">$\mathbb{R}$</span>-action) on a manifold <span class="math-container">$X$</span>, the cohomology that you obtain from this complex, using infinitesimal generators for the action, is going to tell you something about the fixed point locus of the action. I suppose that's about as satisfactory of an answer that one could hope for, in this case.</p>
1,557,039
<h2>Background</h2> <p>I am a software engineer and I have been picking up combinatorics as I go along. I am going through a combinatorics book for self study and this chapter is absolutely destroying me. Sadly, I confess it makes little sense to me. I don't care if I look stupid, I want to understand how to solve these problems. </p> <p>I am studying counting with repetition. That is, generalized binomial coefficients and generating functions. </p> <h2>Problem</h2> <p>Suppose that an unlimited amount of jelly beans is available in each of the five different colors: red, green, yellow, white, and black. </p> <ul> <li>How many ways are there to select from twenty jelly beans? </li> <li>How many ways are there to select twenty jelly beans if we must select at least two jelly beans of each color? </li> </ul> <h2>Attempted Solution</h2> <ul> <li><p>How many ways are there to select from twenty jelly beans?</p> <p>$5^{20}$</p></li> <li><p>How many ways are there to select twenty jelly beans if we must select at least two jelly beans of each color? </p></li> </ul> <p>I keep thinking of applying the hypergeometric distribution here, but I think this is dead wrong. The entire chapter is on series, so I am confused as to what these series are and why they are being applied to solve these problems? The above solutions (some I am too embarrassed to share) didn't pass the smell test at all :( </p>
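For what it's worth, here is a sketch of my own (not from the book) of the standard stars-and-bars counts, under the assumption that a "selection" here means an unordered multiset of colors:

```python
from math import comb

# 20 indistinguishable picks distributed among 5 colors: C(20+5-1, 5-1)
total = comb(20 + 5 - 1, 5 - 1)

# "At least two of each color": pre-place 2 beans of each color (10 beans),
# then distribute the remaining 10 freely: C(10+5-1, 5-1)
constrained = comb(10 + 5 - 1, 5 - 1)

print(total, constrained)   # 10626 1001
```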
Domenico Vuono
227,073
<p>For $n=1$: $3^2=9\equiv 1 \pmod 4$. Now suppose that $$3^{2n}\equiv 1\pmod 4$$ and you have to demonstrate that $3^{2n+2}\equiv 1\pmod 4$. Indeed, $3^{2n+2}=3^{2n}\cdot 3^2\equiv 1\cdot 1\equiv 1\pmod 4$.</p>
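A quick computational check of the statement being proved (my own addition; the induction above is the actual proof):

```python
# 3^(2n) = 9^n, and 9 ≡ 1 (mod 4), so every such power is ≡ 1 (mod 4)
for n in range(1, 200):
    assert pow(3, 2 * n, 4) == 1
print("verified for n = 1..199")
```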
307,458
<p>Let <span class="math-container">$\mathbf{x}_1, \dots, \mathbf{x}_n \in \mathbb{R}^d$</span> be <span class="math-container">$n$</span> given vectors. Define the function</p> <p><span class="math-container">$$ \mathcal{K}(\mathbf{x},\mathbf{y}) := \alpha\exp\left(-\frac{\|\mathbf{x}-\mathbf{y}\|^2}{2\sigma^2}\right) $$</span></p> <p>where <span class="math-container">$\alpha$</span> and <span class="math-container">$\sigma$</span> are given constants. Now define the <span class="math-container">$n\times 1$</span> vector</p> <p><span class="math-container">$$ \mathcal{K}_n(\mathbf{x}) := \begin{bmatrix} \mathcal{K}(\mathbf{x},\mathbf{x}_1) &amp; \dots &amp; \mathcal{K}(\mathbf{x},\mathbf{x}_n) \end{bmatrix}^\top $$</span></p> <p>Let <span class="math-container">$\mathbf{A}$</span> be any <span class="math-container">$n \times n$</span> positive definite matrix and <span class="math-container">$\mathbf{b}$</span> be any <span class="math-container">$n \times 1$</span> real vector. Let <span class="math-container">$\lambda &gt; 0$</span> be given. Consider the optimization problem</p> <p><span class="math-container">$$ \max_{\mathbf{x} \in \mathbb{R}^d} \quad\mathcal{K}_n(\mathbf{x})^\top\mathbf{b}-\lambda\mathcal{K}_n(\mathbf{x})^\top\mathbf{A}\mathcal{K}_n(\mathbf{x}) $$</span></p> <p>Is this optimization problem known in literature? How do I do gradient ascent on this?</p>
dchatter
3,600
<p>I don't see any connection between your problem and MPC.</p> <p>Take a look at optimization via the technique known as stochastic approximation; it is extremely popular today for several reasons. Check out "Optimization Methods for Large-Scale Machine Learning" by Bottou, Curtis, and Nocedal on arXiv: <a href="https://arxiv.org/abs/1606.04838" rel="nofollow noreferrer">https://arxiv.org/abs/1606.04838</a>, especially Section 3.</p>
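To address the gradient-ascent part of the question concretely, here is a minimal sketch of my own (not from the referenced survey). It assumes $\mathbf{A}$ is symmetric positive definite and uses randomly generated placeholder data; the gradient follows from the chain rule through the Jacobian $\partial\mathcal{K}_n/\partial\mathbf{x}$, whose $i$-th row is $\mathcal{K}(\mathbf{x},\mathbf{x}_i)(\mathbf{x}_i-\mathbf{x})/\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha, sigma, lam = 5, 3, 1.0, 1.0, 0.5
X = rng.normal(size=(n, d))                       # the given x_1 ... x_n
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)                       # symmetric positive definite
b = rng.normal(size=n)

def K(x):
    # Vector of Gaussian kernel values K(x, x_i), i = 1..n
    return alpha * np.exp(-np.sum((x - X) ** 2, axis=1) / (2 * sigma ** 2))

def objective(x):
    k = K(x)
    return k @ b - lam * k @ A @ k

def grad(x):
    k = K(x)
    J = k[:, None] * (X - x) / sigma ** 2         # Jacobian dK/dx, shape (n, d)
    return J.T @ (b - 2 * lam * A @ k)            # uses symmetry of A

x = np.zeros(d)
for _ in range(300):
    x = x + 1e-2 * grad(x)                        # plain gradient ascent
print(objective(x))
```

The objective is non-concave in general (sums of Gaussian bumps), so this only finds a local maximizer; automatic differentiation would give the same gradient without the hand derivation.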
1,811,081
<blockquote> <p>Let $1,x_{1},x_{2},x_{3},\ldots,x_{n-1}$ be the $\bf{n^{th}}$ roots of unity. Find: $$\frac{1}{1-x_{1}}+\frac{1}{1-x_{2}}+\cdots+\frac{1}{1-x_{n-1}}$$</p> </blockquote> <p>$\bf{My\; Try::}$ Given $x=(1)^{\frac{1}{n}}\Rightarrow x^n=1\Rightarrow x^n-1=0$</p> <p>Now put $\displaystyle y = \frac{1}{1-x}\Rightarrow 1-x=\frac{1}{y}\Rightarrow x=\frac{y-1}{y}$</p> <p>Putting that value into $\displaystyle x^n-1=0\;,$ we get $\displaystyle \left(\frac{y-1}{y}\right)^n-1=0$</p> <p>So we get $(y-1)^n-y^n=0\Rightarrow \displaystyle \left\{y^n-\binom{n}{1}y^{n-1}+\binom{n}{2}y^{n-2}-\cdots\right\}-y^n=0$</p> <p>So $\displaystyle \binom{n}{1}y^{n-1}-\binom{n}{2}y^{n-2}+\cdots+(-1)^{n+1}=0$ has roots $y=y_{1}\;,y_{2}\;,y_{3}\;,\ldots,y_{n-1}$</p> <p>So by Vieta's formulas, $$y_{1}+y_{2}+y_{3}+\cdots+y_{n-1} = \binom{n}{2}\Big/\binom{n}{1} =\frac{n-1}{2}\;,$$ where $\displaystyle y_{i} = \frac{1}{1-x_{i}}\;\forall i\in \left\{1,2,3,\ldots,n-1\right\}$</p> <p>My question is: can we solve it in a less complex way? If yes, then please explain. Thanks.</p>
pmichel31415
326,840
<p>Otherwise you could write $x_i=e^{j\frac {i2\pi} {n}}$ ($j^2=-1$)</p> <p>Then $$y_i=\frac {e^{-j\frac {i\pi} {n}}} {e^{-j\frac {i\pi} {n}}-e^{j\frac {i\pi} {n}}}=\frac {e^{-j\frac {i\pi} {n}}}{-2j\sin(\frac {i\pi} {n})}=\frac j2\cot\left(\frac {i\pi} {n}\right)+\frac 1 2$$</p> <p>Now consider the fact that, if $x_i$ is an $n^{th}$ root of unity, so is $\bar{x}_i$; so if $x_i\neq \bar x_i$, your sum contains $y_i+\bar y_i=2\mathcal {Re}(y_i)=1$ exactly $\frac {n-1} {2}$ times if $n$ is odd, or $\frac {n-2} {2}$ times if $n$ is even (but then the remaining term is $\frac 1 {1-(-1)}=\frac 1 2$).</p> <p>So in both cases the result is $\frac {n-1} 2$</p> <p>I don't know if this can be considered "less complex" but it certainly involves more "down to earth" calculations IMHO.</p>
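A numerical confirmation of the result (my own addition):

```python
import cmath

for n in range(2, 30):
    roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(1, n)]  # skip x = 1
    s = sum(1 / (1 - x) for x in roots)
    assert abs(s - (n - 1) / 2) < 1e-9
print("sum of 1/(1 - x_k) over nontrivial n-th roots of unity = (n-1)/2")
```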