3,098,838
<blockquote> <p>The displacement of a particle varies according to <span class="math-container">$x=3(\cos t +\sin t)$</span>. Then find the amplitude of the oscillation of the particle.</p> </blockquote> <p>Can someone kindly explain the concept of amplitude and oscillation and how to solve it?</p> <p>Any hints for solving the problem would be helpful.</p>
mathcounterexamples.net
187,663
<p>You must have</p> <p><span class="math-container">$$\langle z,z \rangle = ax^2 + 2bxy + dy^2 &gt;0$$</span> for all <span class="math-container">$z\neq 0$</span>, in particular for all <span class="math-container">$z=(x,1)^T$</span>.</p> <p>Hence the discriminant of the trinomial</p> <p><span class="math-container">$$a x^2 +2bx +d$$</span> has to be strictly negative. This is exactly what you’re looking for.</p>
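As a numeric illustration of the criterion (not part of the original answer): for a symmetric form with $a&gt;0$, a negative discriminant $b^2-ad&lt;0$ goes together with positivity on every sampled nonzero vector, while a positive discriminant produces vectors where the form is non-positive. The helper name `form_positive` is illustrative.

```python
import random

def form_positive(a, b, d, trials=2000):
    """Sample nonzero vectors z = (x, y) and check <z,z> = a x^2 + 2 b x y + d y^2 > 0."""
    random.seed(0)
    for _ in range(trials):
        x, y = random.uniform(-5, 5), random.uniform(-5, 5)
        if abs(x) + abs(y) < 1e-9:   # skip the (near-)zero vector
            continue
        if a * x * x + 2 * b * x * y + d * y * y <= 0:
            return False
    return True

# a > 0 and discriminant b^2 - a d = 1 - 6 < 0: the form stays positive on all samples
assert form_positive(2.0, 1.0, 3.0)

# discriminant 9 - 1 > 0: the trinomial a x^2 + 2 b x + d has real roots,
# so the form takes non-positive values somewhere (e.g. z = (1, -1))
assert not form_positive(1.0, 3.0, 1.0)
```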
3,098,838
<blockquote> <p>The displacement of a particle varies according to <span class="math-container">$x=3(\cos t +\sin t)$</span>. Then find the amplitude of the oscillation of the particle.</p> </blockquote> <p>Can someone kindly explain the concept of amplitude and oscillation and how to solve it?</p> <p>Any hints for solving the problem would be helpful.</p>
Bernard
202,857
<p>You don't have to. If the homogeneous quadratic polynomial <span class="math-container">$\;ax^2+2bxy+dy^2\;$</span> takes positive values for all <span class="math-container">$(x,y)\ne (0,0)$</span>, it means its (reduced) discriminant <span class="math-container">$\;\delta'=b^2-ad &lt;0$</span>, and this discriminant is also <span class="math-container">$\;-\begin{vmatrix} a&amp;b\\b&amp;d\end{vmatrix}$</span>.</p>
158,549
<p>Denote by $\Sigma$ the collection of all $(S, \succeq)$ where $S \subset \mathbb{R}$ is compact and $\succeq$ is an arbitrary total order on $S$.</p> <p>Does there exist a function $f: \mathbb{R} \to \mathbb{R}$ such that for all $(S, \succeq) \in \Sigma$ there exists a compact interval $I$ with the properties that</p> <ul> <li>$f(I) = S$</li> <li>$x \geq y$ implies $f(x) \succeq f(y)$ for all $x,y \in I$?</li> </ul> <p>If so, how regular can we take $f$ to be? The motivation is that basically, I am trying to construct the analogue of a <a href="http://en.wikipedia.org/wiki/Normal_number" rel="nofollow">normal sequence</a> but on $\mathbb{R}$ instead of $\mathbb{N}$.</p> <p>EDIT: As Brian M. Scott points out, this is not possible if the orderings have no greatest and least elements. However, since adding this assumption doesn't go against the intuition of generalizing normal sequences, I am still interested in the answer if we restrict the various total orders to have minimal and maximal elements.</p> <p>Thanks in advance.</p>
Brian M. Scott
12,042
<p>If I understand correctly what you’re asking, the answer is <em>no</em>. Let $\preceq$ be a linear order on $[0,1]$ having no last element; no compact subset of $\Bbb R$ can be mapped onto $[0,1]$ in such a way that $f(x)\preceq f(y)$ whenever $x\le y$, because every compact subset of $\Bbb R$ has a last element with respect to the usual order. That is, if $I$ is a compact subset of $\Bbb R$, let $u=\max I$, and let $x\in[0,1]$ be such that $f(u)\prec x$; then $x\notin\operatorname{ran}f$, so $f[I]\ne[0,1]$.</p> <p><strong>Added:</strong> Restricting the linear orders to those with first and last elements doesn’t help. Well-order $[0,1]=\{y_\xi:\xi&lt;2^\omega+1\}$. Now let $I$ be any compact interval, say $[a,b]$, and let $f:I\to[0,1]$ be an order-preserving surjection. Clearly $f(b)=y_{2^\omega}$ is the last element in the well-ordering of $[0,1]$. Let $\langle x_n:n\in\omega\rangle$ be a strictly increasing sequence in $[a,b]$ converging to $b$. Then $\langle f(x_n):n\in\omega\rangle$ is an increasing sequence in the well-ordering of $[0,1]$, and it has a supremum, say $y_\alpha$. Then $y_\alpha&lt;2^\omega$, since $\operatorname{cf}2^\omega&gt;\omega$, and no $y_\beta$ such that $\alpha&lt;\beta&lt;2^\omega$ is in the range of $f$.</p> <p>Actually, this requires a bit of modification if $f$ is not required to be strictly order-preserving (i.e., a bijection). In that case let $B=f^{-1}[\{y_{2^\omega}\}]$; $B$ must be of the form $(c,b]$ or $[c,b]$ for some $c\in I$. If $B=[c,b]$, replace $b$ by $c$ in the previous paragraph. If $B=(c,b]$, note that nothing between $f(c)$ and $f(b)$ is in the range of $f$.</p>
158,549
<p>Denote by $\Sigma$ the collection of all $(S, \succeq)$ where $S \subset \mathbb{R}$ is compact and $\succeq$ is an arbitrary total order on $S$.</p> <p>Does there exist a function $f: \mathbb{R} \to \mathbb{R}$ such that for all $(S, \succeq) \in \Sigma$ there exists a compact interval $I$ with the properties that</p> <ul> <li>$f(I) = S$</li> <li>$x \geq y$ implies $f(x) \succeq f(y)$ for all $x,y \in I$?</li> </ul> <p>If so, how regular can we take $f$ to be? The motivation is that basically, I am trying to construct the analogue of a <a href="http://en.wikipedia.org/wiki/Normal_number" rel="nofollow">normal sequence</a> but on $\mathbb{R}$ instead of $\mathbb{N}$.</p> <p>EDIT: As Brian M. Scott points out, this is not possible if the orderings have no greatest and least elements. However, since adding this assumption doesn't go against the intuition of generalizing normal sequences, I am still interested in the answer if we restrict the various total orders to have minimal and maximal elements.</p> <p>Thanks in advance.</p>
William
13,579
<p>If you have the property that if $x &lt; y$, then $f(x) \prec f(y)$, all strictly, then no.</p> <p>Let $S$ be an infinite compact subset of $\mathbb{R}$. Let $\aleph$ be a cardinal in bijection with $S$. That is, there is a bijection $f : \aleph \rightarrow S$. From this bijection, you can define a well-ordering on $S$ that is order-isomorphic to $\aleph$. So for our purpose, we can just think of $S$ as $\aleph$. </p> <p>Assume such an $f$ exists. $0 \in \aleph$ and $1 \in \aleph$. Then there exist $x &lt; y \in \mathbb{R}$ such that $f(x) = 0$ and $f(y) = 1$. However, there are infinitely many points between $x$ and $y$ in the usual ordering on $\mathbb{R}$, while in the ordering of $\aleph$ there are no elements between $0$ and $1$. Therefore, for every one of the infinitely many $z$ such that $x &lt; z &lt; y$, you must have $f(z) = 0$ or $f(z) = 1$. </p> <p>So if you want strict ordering to be preserved, then you cannot have such a function for any possible ordering. However, with $\leq$ you can have some functions, but they are not very interesting. </p> <p>Moreover, you want a single function that works for all $(S, \preceq)$ in the sense that there is some compact interval $I$ with your property. I would guess this is not possible. There are a lot of orderings of any compact $S$. For instance, given any ordering of $S$, you can make a modified linear ordering that switches the ordering of two elements. This switch would require you to find a different interval $I$ that would witness your property. My intuition is that there are so many orderings and only countably many disjoint compact intervals, that there are just not enough intervals for you to define your function $f$ to make it work for all of them. But again, this is not a proof, just intuition. </p>
648,607
<p>I would like to determine whether the following series is absolutely convergent or not. I'm not sure how to begin in general. I would say no, because when taking the absolute value of the fraction and adding all of them together, the series doesn't converge. Could someone give me a general road map for how to manage this?</p> <p>$$\sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1}$$</p>
Pankaj Sejwal
33,578
<p>Two tests are needed to check for convergence of an alternating series such as this one:</p> <p>a) Its terms decrease monotonically.</p> <p>b) Its terms tend to 0 as <span class="math-container">$n$</span> approaches infinity.</p> <p>For the first, let <span class="math-container">$f(x)= \frac { 1 }{ 2x+1 } $</span>. Then <span class="math-container">$f'(x) = \frac { -2 }{ (2x+1)^2 } &lt; 0$</span>, so the terms decrease monotonically.</p> <p>For the second test, <span class="math-container">$\lim_{ n\to\infty } \frac { 1 }{ 2n+1 } = 0$</span>. Both conditions are satisfied, so the series converges. But, as the other answer has stated, it is not absolutely convergent: comparing <span class="math-container">$\sum \frac{1}{2n+1}$</span> with a <span class="math-container">$p$</span>-series, the <span class="math-container">$p$</span>-test gives divergence because the power <span class="math-container">$p=1$</span> is not <span class="math-container">$&gt;1$</span>.</p>
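A quick numerical sanity check of both claims (not part of the original answer; the limiting value $\pi/4$ comes from the Leibniz formula, quoted here as a known fact):

```python
import math

# Partial sums of sum (-1)^n / (2n+1): the alternating series test guarantees
# convergence; the limit pi/4 is the classical Leibniz series value.
def partial_sum(N):
    return sum((-1) ** n / (2 * n + 1) for n in range(N + 1))

# the alternating partial sums close in on pi/4 (error <= next omitted term)
assert abs(partial_sum(100000) - math.pi / 4) < 1e-5

# The series of absolute values behaves like half the harmonic series:
# sum 1/(2n+1) up to N grows like ln(N)/2, so it diverges.
def abs_sum(N):
    return sum(1 / (2 * n + 1) for n in range(N + 1))

assert abs_sum(100000) > abs_sum(1000) + 2   # keeps growing without bound
```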
3,466,680
<p>I'm solving a problem in ODE:</p> <blockquote> <p>Solve in <span class="math-container">$\left (-\dfrac{\pi}{2},\dfrac{\pi}{2} \right )$</span> the ODE <span class="math-container">$y''(t) \cos t + y (t) \cos t=1$</span></p> </blockquote> <p>In my lecture, we are given three theorems:</p> <blockquote> <p><span class="math-container">$\textbf{Theorem 1} \quad$</span> If <span class="math-container">$y_1$</span> and <span class="math-container">$y_2$</span> are two linearly independent solutions to <span class="math-container">$y''+ay'+by=c$</span>. Then</p> <p>i) The system <span class="math-container">$$\begin{bmatrix}y_1 &amp; y_2 \\ y_1' &amp; y_2' \\ \end{bmatrix} \begin{bmatrix}h \\ k \\ \end{bmatrix} = \begin{bmatrix}0 \\ c \\ \end{bmatrix}$$</span> (in which the unknown functions are <span class="math-container">$h$</span> and <span class="math-container">$k$</span>) has a unique solution.</p> <p>ii) All the solutions <span class="math-container">$s$</span> are of the form <span class="math-container">$t \mapsto H y_1 +K y_2$</span> where <span class="math-container">$H,K$</span> are anti-derivatives of <span class="math-container">$h$</span> and <span class="math-container">$k$</span>.</p> </blockquote> <p>and</p> <blockquote> <p><span class="math-container">$\textbf{Theorem 2} \quad$</span> Homogeneous case</p> <p>If we can find a solution <span class="math-container">$y_1$</span> to <span class="math-container">$y''+ay'+by=0$</span>, then we can determine another solution <span class="math-container">$y_2$</span> by using undetermined constant method to look for a solution of the form <span class="math-container">$y_2 = \lambda y_1$</span> in which <span class="math-container">$\lambda$</span> is a function.</p> </blockquote> <p>and</p> <blockquote> <p><span class="math-container">$\textbf{Theorem 3} \quad$</span> Superposition Principle</p> <p>Consider <span class="math-container">$y''+ay'+by= c_i \quad (E_i)$</span> in which <span 
class="math-container">$c_i$</span> are functions. If <span class="math-container">$y_i$</span> is the solution to <span class="math-container">$(E_i)$</span> then <span class="math-container">$\sum_{i=1}^n \alpha_i y_i$</span> is the solution to <span class="math-container">$y''+ay'+by=\sum_{i=1}^n \alpha_i c_i$</span>.</p> </blockquote> <hr /> <p>I'm unable to apply those theorems to solve this ODE. Unfortunately, my professor's never solved an example with non-constant coefficients in class. The lectures are very likely to contain typos. I'm sorry for that because I'm unable to recognize them.</p> <p>Could you please elaborate on how to solve this ODE?</p>
mathsdiscussion.com
694,428
<p><span class="math-container">$$y''\cos x+y\cos x=1$$</span> <span class="math-container">$$y''\cos x-y'\sin x+y'\sin x+y\cos x=1$$</span> <span class="math-container">$$(y'\cos x)'+(y\sin x)'=1$$</span> <span class="math-container">$$y'\cos x+y\sin x=x+c$$</span> <span class="math-container">$$y'+y\tan x=(x+c)\sec x$$</span> Now this is a first-order linear differential equation.</p>
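Solving that first-order equation leads to the standard particular solution $y = x\sin x + \cos x \ln(\cos x)$ of $y''+y=\sec x$ on $(-\pi/2,\pi/2)$. That closed form is quoted here as an assumption (the answer above stops at the first-order reduction); a finite-difference check that it satisfies the original ODE:

```python
import math

# Candidate particular solution of y'' + y = sec x on (-pi/2, pi/2)
def y(x):
    return x * math.sin(x) + math.cos(x) * math.log(math.cos(x))

def second_derivative(f, x, h=1e-5):
    # central finite difference approximation of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

# check y'' cos x + y cos x = 1, i.e. y'' + y = sec x, at several points
for x in (-1.0, -0.3, 0.2, 0.9):
    lhs = second_derivative(y, x) + y(x)
    rhs = 1 / math.cos(x)
    assert abs(lhs - rhs) < 1e-4
```

The general solution then adds the homogeneous part $c_1\cos x + c_2\sin x$.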
462,983
<h2>The Question:</h2> <p>This is a very fundamental and commonly used result in linear algebra, but I haven't been able to find a proof or prove it myself. The statement is as follows:</p> <blockquote> <p>Let $A$ be an $n\times n$ square matrix, and suppose that $B=\operatorname{LeftInv}(A)$ is a matrix such that $BA=I$. Prove that $AB=I$. That is, prove that a matrix commutes with its inverse, i.e. that the left-inverse is also the right-inverse.</p> </blockquote> <h2>My thoughts so far:</h2> <p>This is particularly annoying to me because it seems like it should be easy.</p> <p>We have a similar statement for group multiplication, but the commutativity of inverses is often presented as part of the definition. Does this property necessarily follow from the associativity of multiplication? I've noticed that from associativity, we have $$ \left(A\operatorname{LeftInv}(A)\right)A=A\left(\operatorname{LeftInv}(A)A\right) $$ But is that enough?</p> <p>It might help to talk about <a href="http://en.wikipedia.org/wiki/Generalized_inverse" rel="nofollow">generalized inverses</a>.</p>
xavierm02
10,385
<p>Your notation $A^{-1}$ is confusing because it makes you think of it as a two-sided inverse, but we only know it's a left-inverse.</p> <p>Let's call $B$ the matrix such that $BA=I$. You want to prove $AB=I$.</p> <p>First, you need to prove that there is a $C$ such that $AC=I$. To do that, you can use the determinant, but there must be another way. [EDIT] There are several methods <a href="https://math.stackexchange.com/questions/3852/if-ab-i-then-ba-i">here</a>. The simplest (imo) is the one using the fact that the matrix has full rank.[/EDIT]</p> <p>Then you have that $B=BI=B(AC)=(BA)C=IC=C$, so you get $B=C$ and therefore $AB=I$.</p>
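A concrete sketch of the argument for the $2\times 2$ case (pure Python, no library): the adjugate formula produces a right inverse $C$ (it exists because $\det A \neq 0$), and the associativity chain $B = B(AC) = (BA)C = C$ forces the left and right inverses to coincide.

```python
def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 1.0]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # det A = 1, nonzero

# adjugate inverse: the explicit C with A C = I
C = [[A[1][1] / det, -A[0][1] / det],
     [-A[1][0] / det, A[0][0] / det]]

I = [[1.0, 0.0], [0.0, 1.0]]
assert matmul(C, A) == I   # C is a left inverse of A...
assert matmul(A, C) == I   # ...and the same matrix is a right inverse
```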
4,339,772
<p>The problem is stated as:</p> <blockquote> <p>Show that <span class="math-container">$\int_{0}^{n} \left (1-\frac{x}{n} \right ) ^n \ln(x) dx = \frac{n}{n+1} \left (\ln(n) - 1 - 1/2 -...- 1/{(n+1)} \right )$</span></p> </blockquote> <p><strong>My attempt</strong></p> <p>First of all, we make the substitution <span class="math-container">$1-\frac{x}{n} = t$</span>, we then have that the integral can be rewritten as:</p> <p><span class="math-container">$\int_{1}^{0} -n t^n \ln(n(1-t)) dt = \int_{0}^{1} n t^n \ln(n(1-t)) dt$</span></p> <p>Using logarithmic laws, we can split the integral into two separate ones as follows:</p> <p><span class="math-container">$\int_{0}^{1} n t^n \ln(n(1-t)) dt = \int_{0}^{1} n t^n \ln(n) dt + \int_{0}^{1} n t^n \ln(1-t) dt$</span></p> <p>We calculate each integral from the sum above:</p> <p><span class="math-container">$ I_1 := \int_{0}^{1} n t^n \ln(n) dt = \frac{n}{n+1}\ln(n)$</span></p> <p><span class="math-container">$ I_2 := \int_{0}^{1} n t^n \ln(1-t) dt = -n\int_{0}^{1} t^n \sum_{k=1}^{\infty}\frac{t^k}{k} dt$</span></p> <p>Since the radius of convergence of <span class="math-container">$\sum_{k=1}^{\infty}\frac{t^k}{k}$</span> is 1, and we are integrating from <span class="math-container">$0$</span> to <span class="math-container">$1$</span>, we can interchange the order of limit operations. 
Meaning, we can calculate the integral first.</p> <p><span class="math-container">$ I_2 = -n\sum_{k=1}^{\infty}\int_{0}^{1}\frac{t^{(n+k)}}{k} dt = -\sum_{k=1}^{\infty} \frac{n}{k(n+k+1)} = \frac{-n}{n+1} \sum_{k=1}^{\infty} \frac{n+1}{k(n+k+1)}$</span></p> <p>Using partial fraction decomposition, we have that <span class="math-container">$I_2$</span> can be written as:</p> <p><span class="math-container">$\frac{-n}{n+1}\sum_{k=1}^{\infty} \frac{n+1}{k(n+k+1)} = \frac{-n}{n+1} \sum_{k=1}^{\infty} \frac{1}{k} + \frac{n}{n+1}\sum_{k=1}^{\infty} \frac{1}{n+k+1}$</span></p> <p>Putting it all together we get:</p> <p><span class="math-container">$I_1 + I_2 = \frac{n}{n+1} \left ( \ln(n) - \sum_{k=1}^{\infty} \frac{1}{k} + \sum_{k=1}^{\infty} \frac{1}{n+k+1} \right )$</span></p> <p>This is indeed close to the result sought. However, I don't really know what to do with the last sums, nor do I see what I did wrong in choosing <span class="math-container">$\infty$</span> as an upper limit in the summation. I see that the sum of <span class="math-container">$1/k$'s</span> diverges, but how can I avoid this?</p> <p>Thank you for any help in completing the last step of this problem.</p>
Kavi Rama Murthy
142,385
<p>Hint: <span class="math-container">$\sum_{k=1}^{\infty} \frac{1}{k}$</span> and <span class="math-container">$ \sum_{k=1}^{\infty} \frac{1}{n+k+1} $</span> are both infinite, so you cannot write these sums separately. Instead, you should write <span class="math-container">$\lim_{N \to \infty} [-\sum_{k=1}^{N} \frac{1}{k} +\sum_{k=1}^{N} \frac{1}{n+k+1}]$</span>. Now <span class="math-container">$[-\sum_{k=1}^{N} \frac{1}{k} +\sum_{k=1}^{N} \frac{1}{n+k+1}]$</span> simplifies to <span class="math-container">$\frac 1 {N+1}+\frac 1 {N+2}+\dots+\frac 1 {n+N+1}-(1+\frac 1 2+\frac 1 3+\dots+\frac 1 {n+1})$</span> (for <span class="math-container">$N&gt;n+1$</span>). Note that <span class="math-container">$\frac 1 {N+1}+\frac 1 {N+2}+\dots+\frac 1 {n+N+1} \to 0$</span> as <span class="math-container">$N \to \infty$</span>.</p>
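The hint can be checked numerically (an illustration, not part of the original answer): for fixed $n$, the truncated difference converges to $-(1+1/2+\dots+1/(n+1))$, because the leftover tail of about $n+1$ terms near $1/N$ vanishes.

```python
# -sum_{k=1}^N 1/k + sum_{k=1}^N 1/(n+k+1), truncated at N
def truncated_difference(n, N):
    return (-sum(1 / k for k in range(1, N + 1))
            + sum(1 / (n + k + 1) for k in range(1, N + 1)))

def harmonic(m):
    # 1 + 1/2 + ... + 1/m
    return sum(1 / k for k in range(1, m + 1))

n = 5
# the limit should be -(1 + 1/2 + ... + 1/(n+1)); the tail is ~ (n+1)/N
assert abs(truncated_difference(n, 10**6) + harmonic(n + 1)) < 1e-4
```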
2,778,422
<p>The generalized $\lambda-\text{eigenspace}$ is defined by: $V^f_{(\lambda)}=\bigl\lbrace v\in V\mid\exists j\,\text{ such that }\,(f-\lambda)^jv=0 \bigr\rbrace$. Suppose that $V$ is a vector space over the field $k$ and $f,g\in \operatorname{End}_k(V)$ satisfy $f\circ g=g\circ f$. Show that $g(V^f_{(\lambda)})\subseteq V^f_{(\lambda)}$.</p> <p>Own work: Well, I tried choosing an element $v\in V^f_{(\lambda)}$, and I have to see whether $(f-\lambda)^j(g(v))=0$. But I don't know how to link the commutativity of $f$ and $g$ with the application of $(f-\lambda)^j(g(v))$.</p>
Maxime Ramzi
408,637
<p><strong>Hint</strong>: Show by induction that $f^n\circ g = g\circ f^n$; then conclude that $g$ commutes with all elements of $k[f]$.</p>
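A small concrete illustration of the hint (mine, not part of the answer): take $g$ to be a polynomial in $f$, so the two matrices commute by construction, and check $f^n g = g f^n$ for several $n$, which is exactly the inductive step that makes $g$ commute with every $(f-\lambda)^j$.

```python
def matmul(X, Y):
    # 2x2 integer matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

f = [[1, 1], [0, 2]]
I2 = [[1, 0], [0, 1]]

# g = f^2 + 3 f + I, an element of k[f], hence g commutes with f
g = [[f2 + 3 * f1 + i for f2, f1, i in zip(r2, r1, ri)]
     for r2, r1, ri in zip(matmul(f, f), f, I2)]

power = I2
for n in range(1, 6):
    power = matmul(power, f)                     # power = f^n
    assert matmul(power, g) == matmul(g, power)  # f^n g = g f^n
```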
2,555,861
<p>I am reading up on <strong>Fraleigh's</strong> <em>A First Course in Abstract Algebra</em>, and he says ($H$ a subgroup of $G$) that $Hg=gH$ iff $i_g[H]=H$ iff $H$ is invariant under all inner automorphisms. I looked up invariant and found this definition:</p> <p>"Firstly, if one has a group G acting on a mathematical object (or set of objects) X, then one may ask which points x are unchanged, "invariant" under the group action, or under an element g of the group." from <a href="https://en.wikipedia.org/wiki/Invariant_(mathematics)" rel="nofollow noreferrer">Invariant Description Wiki</a>. </p> <p>First I am wondering whether that means the elements of $H$ do not change but change positions (hence the permutation), or whether $H$ is the identity under all inner automorphisms of $G$.</p> <p>EDIT: Too many questions asked by me, I will ask them separately. </p>
Michael Hardy
11,667
<p>I'm guessing you mean this is to be done by limits of Riemann sums.</p> <p>If $x$ goes from $0$ to $1$ by steps of $\Delta x=1/n,$ then at the $i$th step we have $x=0 + i/n,$ and so</p> <p>$$ \int_0^1 e^x\,dx = \lim_{n\to\infty} \frac 1 n \sum_{i=1}^n e^{0 \,+\, i/n}. $$ And \begin{align} \frac 1 n \sum_{i=1}^n e^{0\,+\,i/n} = {} &amp; \frac 1 n \times \Big( \text{a sum of a geometric series} \Big) \\[10pt] = {} &amp; \frac 1 n \times\text{first term} \times \left( \frac{(\text{common ratio})^{\text{number of terms}}-1}{(\text{common ratio})-1} \right) \\[10pt] = {} &amp; \frac 1 n \cdot e^{1/n} \left( \frac{e^{n/n}-1}{e^{1/n}-1} \right) \\[10pt] = {} &amp; (e-1) \cdot e^{1/n} \cdot \frac 1 {n(e^{1/n}-1)} \end{align} And then we have $e^{1/n}\to1$ as $n\to\infty,$ and $\dfrac 1 {n(e^{1/n} - 1)} \to 1 $ as $n\to\infty.$</p> <p>There are a number of ways to establish that last limit. Here's one: $$ \lim_{n\to\infty} \frac{e^{1/n}-1}{1/n} = \lim_{\Delta x\,\to\,0} \frac{e^{0\,+\,\Delta x} - e^0}{\Delta x} = \lim_{\Delta x\,\to\,0} \frac{\Delta e^x}{\Delta x} = \left. \frac d {dx} e^x \right|_{x=0} = e^0 = 1. $$</p>
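The Riemann sum and the closed geometric-series form above can be compared numerically (an illustration, not part of the original answer):

```python
import math

# right Riemann sum (1/n) * sum_{i=1}^n e^{i/n}
def riemann(n):
    return sum(math.exp(i / n) for i in range(1, n + 1)) / n

# the closed form (e - 1) * e^{1/n} * 1 / (n (e^{1/n} - 1)) from the answer
def closed_form(n):
    return (math.e - 1) * math.exp(1 / n) / (n * (math.exp(1 / n) - 1))

# the algebra is exact, so the two agree to rounding error for every n
for n in (10, 1000):
    assert abs(riemann(n) - closed_form(n)) < 1e-9

# and the limit is e - 1, the value of the integral
assert abs(riemann(10**6) - (math.e - 1)) < 1e-5
```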
1,423,252
<p>The proposition is:</p> <blockquote> <p>If $\lim S_n = L$ and for every $n$, $S_n$ is in the interval $[a,b]$, then $L$ is also in $[a,b]$.</p> </blockquote> <p>I have proved this effectively, but now the question is to provide a counterexample to the stronger assumption, for the interval $(a,b)$. </p> <p>Basically, where this proposition does not guarantee that $L$ is also in $(a,b)$.</p>
Ludolila
60,678
<p><strong>Hint:</strong></p> <p>Think about the sequence ${1\over n}$. Can you figure out the interval for the counter example?</p>
2,041,839
<p>$$\int_{1}^{x}\frac{dt}{\sqrt{t^3-1}}$$ does this have a closed form involving jacobi elliptic functions of parameter $k$?</p> <p><strong>N.B</strong> I tried with the change of variables $t=1+k\frac{1-u}{1+u}$. But this leads no where. <a href="http://mathworld.wolfram.com/JacobiEllipticFunctions.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/JacobiEllipticFunctions.html</a></p> <p><strong>update</strong> the above integral is equivalent to $$\int\limits_{0}^{\sec^{-1}x^{\frac{3}{2}}}\sec^{\frac{2}{3}}tdt$$.</p>
Matias Heikkilä
66,856
<p>The notation $\mathrm{d}x$ should not be attempted to be taken too literally. <a href="https://en.wikipedia.org/wiki/Non-standard_analysis" rel="nofollow noreferrer">While it can be made precise</a> it's usually considered a relic from time when things were not as precisely defined as they are these days. You will run in all kinds of troubles if you try to naively assign it a deeper meaning.</p> <p>I've never seen $\mathrm{d}x^2$ though.</p>
2,688,608
<p>Assume the matrix </p> <p>$$A= \begin{bmatrix} -1&amp;0&amp;0&amp;0&amp;0\\ -1&amp;1&amp;-2&amp;0&amp;1\\ -1&amp;0&amp;-1&amp;0&amp;1\\ 0&amp;1&amp;-1&amp;1&amp;0\\ 0&amp;0&amp;0&amp;0&amp;-1 \end{bmatrix} $$</p> <p>Its Jordan Canonical Form is $$J= \begin{bmatrix} -1&amp;1&amp;0&amp;0&amp;0\\ 0&amp;-1&amp;0&amp;0&amp;0\\ 0&amp;0&amp;-1&amp;0&amp;0\\ 0&amp;0&amp;0&amp;1&amp;1\\ 0&amp;0&amp;0&amp;0&amp;1 \end{bmatrix} $$</p> <p>I am trying to find a nonsingular $P$, let $P=\begin{bmatrix}\mathbf{p}_1&amp;\mathbf{p}_2&amp;\mathbf{p}_3&amp;\mathbf{p}_4&amp;\mathbf{p}_5\end{bmatrix}$ s.t. $J=P^{-1}AP\Leftrightarrow AP=PJ$. I came across the Wikipedia article on JCF and I think I need to find the generalized eigenvectors so that $AP=PJ=\begin{bmatrix}-\mathbf{p_1}&amp;\mathbf{p_1}-\mathbf{p_2}&amp;-\mathbf{p_3}&amp;\mathbf{p_4}&amp;\mathbf{p_4}+\mathbf{p_5}\end{bmatrix}$ yielding the systems $$(A+I)\mathbf{p_1}=\mathbf{0}$$ $$(A+I)^2\mathbf{p_2}=\mathbf{0}$$ $$(A+I)\mathbf{p_3}=\mathbf{0}$$ $$(A-I)\mathbf{p_4}=\mathbf{0}$$ $$(A-I)^2\mathbf{p_5}=\mathbf{0}$$</p> <p>I solved each of these systems making sure that the vectors $\mathbf{p_i}$ I chose are linearly independent. So I chose $$P=\begin{bmatrix}\mathbf{p}_1&amp;\mathbf{p}_2&amp;\mathbf{p}_3&amp;\mathbf{p}_4&amp;\mathbf{p}_5\end{bmatrix}=\begin{bmatrix}1&amp;2&amp;-2&amp;0&amp;0\\1&amp;1&amp;2&amp;0&amp;1\\1&amp;1&amp;2&amp;0&amp;0\\0&amp;0&amp;0&amp;1&amp;1\\1&amp;1&amp;-2&amp;0&amp;0\end{bmatrix}$$ which, even though it is nonsingular, does not give me $AP=PJ$.</p> <p>What am I doing wrong?</p>
user
505,767
<p>Note that the condition $AP=PJ$ is equivalent to</p> <ul> <li>$Ap_1=-p_1 \to p_1$</li> <li>$Ap_2=p_1-p_2\to p_2$</li> <li>$Ap_3=-p_3\to p_3$</li> <li>$Ap_4=p_4 \to p_4$</li> <li>$Ap_5=p_4+p_5 \to p_5$</li> </ul> <p>Since the setup is equivalent, from your results it seems that there is something wrong in the calculation: indeed $Ap_2\neq p_1-p_2$.</p> <p>Notably from</p> <ul> <li>$Ap_1=-p_1 \implies (A+I)p_1=0$</li> <li>$Ap_2=p_1-p_2\implies (A+I)p_2=p_1$</li> </ul> <p>we obtain</p> <ul> <li>$p_1=(0,1,1,0,0)$</li> <li>$p_2=(-1,0,0,0,0)$</li> </ul> <p>from</p> <ul> <li>$Ap_3=-p_3 \implies (A+I)p_3=0$</li> </ul> <p>excluding $p_1$ we obtain</p> <ul> <li>$p_3=(1,0,0,0,1)$</li> </ul> <p>and from</p> <ul> <li><p>$Ap_4=p_4 \implies (A-I)p_4=0$</p></li> <li><p>$Ap_5=p_4+p_5 \implies (A-I)p_5=p_4$</p></li> </ul> <p>we obtain</p> <ul> <li>$p_4=(0,0,0,-1,0)$</li> <li>$p_5=(0,-1,0,0,0)$</li> </ul>
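The five column relations can be verified mechanically in plain Python (an illustration added here, not part of the original answer); with these vectors, $AP = PJ$ holds exactly:

```python
A = [[-1, 0, 0, 0, 0],
     [-1, 1, -2, 0, 1],
     [-1, 0, -1, 0, 1],
     [0, 1, -1, 1, 0],
     [0, 0, 0, 0, -1]]

p1 = [0, 1, 1, 0, 0]
p2 = [-1, 0, 0, 0, 0]
p3 = [1, 0, 0, 0, 1]
p4 = [0, 0, 0, -1, 0]
p5 = [0, -1, 0, 0, 0]

def apply(M, v):
    # matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(5)) for i in range(5)]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def neg(v):
    return [-a for a in v]

assert apply(A, p1) == neg(p1)           # A p1 = -p1
assert apply(A, p2) == add(p1, neg(p2))  # A p2 = p1 - p2
assert apply(A, p3) == neg(p3)           # A p3 = -p3
assert apply(A, p4) == p4                # A p4 = p4
assert apply(A, p5) == add(p4, p5)       # A p5 = p4 + p5
```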
2,357,272
<p>Find the sum of the following infinite series $$\frac{3}{2^2(1)(2)} + \frac{4}{2^3(2)(3)} +\dots+\frac{r+2}{2^{r+1}(r)(r+1)}+\cdots $$ as $r\to\infty$.</p> <p>MY TRY: I tried to split $r+2$ as $[(r+1) +{(r+1)-r}]$ so that I can cancel one term from each term in the numerator. Then I got an expression which was like a harmonic-geometric series, but I could not get any further after this.</p>
B. Goddard
362,009
<p>If you do the partial fraction expansion, the summand becomes</p> <p>$$\frac{1}{2^{r+1}}\left( \frac{2}{r} - \frac{1}{r+1}\right) = \frac{1}{2^r r} - \frac{1}{2^{r+1}(r+1)}, $$</p> <p>so the series is telescoping. All terms cancel except the first, and so the sum equals $\frac12$.</p>
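The telescoping can be confirmed numerically (an illustration, not part of the original answer): the partial sum up to $N$ collapses to $\frac12 - \frac{1}{2^{N+1}(N+1)}$.

```python
# partial sums of (r+2) / (2^(r+1) r (r+1))
def partial_sum(N):
    return sum((r + 2) / (2 ** (r + 1) * r * (r + 1)) for r in range(1, N + 1))

# telescoped closed form: 1/2 - 1/(2^(N+1) (N+1))
for N in (1, 5, 30):
    assert abs(partial_sum(N) - (0.5 - 1 / (2 ** (N + 1) * (N + 1)))) < 1e-12

# and the series value is 1/2
assert abs(partial_sum(60) - 0.5) < 1e-12
```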
547,971
<p>I have to show that for $f,g$ analytic on some domain and $a$ a double zero of $g$, we have:</p> <p>$$\operatorname{Res} \left(\frac{f(z)}{g(z)}, z=a\right) = \frac{6f'(a)g''(a)-2f(a)g'''(a)}{3[g''(a)]^2}.$$</p> <p>The problem is that direct calculation using the formula (for pole of order $2$):</p> <p>$$\operatorname{Res}(h(z),z=a)=\lim_{z \to a} \frac{d}{dz}\left( (z-a)^2h(z) \right)$$</p> <p>is extremely ugly, given that we're dealing with a quotient. Is there some sort of trick to make the calculation more manageable?</p>
Ron Gordon
53,268
<p>Because $a$ is a double zero of $g(z)$, write</p> <p>$$g(z) = (z-a)^2 p(z)$$</p> <p>where $p(a) \ne 0$ and is analytic, etc. etc.</p> <p>Then</p> <p>$$\operatorname*{Res}_{z=a} \frac{f(z)}{g(z)} = \left [\frac{d}{dz} \frac{f(z)}{p(z)} \right ]_{z=a}$$</p> <p>Now,</p> <p>$$\frac{d}{dz} \frac{f(z)}{p(z)} = \frac{f'(z) p(z)-f(z) p'(z)}{p(z)^2}$$</p> <p>Also, note that</p> <p>$$g(z) = \frac12 g''(a) (z-a)^2 + \frac16 g'''(a) (z-a)^3+\cdots = p(a) (z-a)^2 + p'(a) (z-a)^3+\cdots$$</p> <p>Therefore</p> <p>$$p(a) = \frac12 g''(a)$$</p> <p>and</p> <p>$$p'(a) = \frac16 g'''(a)$$</p> <p>Thus</p> <p>$$\operatorname*{Res}_{z=a} \frac{f(z)}{g(z)} = \frac{f'(a) \frac12 g''(a) - f(a) \frac16 g'''(a)}{\frac14 [g''(a)]^2}$$</p> <p>which is equivalent to the formula you seek.</p>
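The formula can be sanity-checked on a concrete example of my choosing (not from the original answer): $f(z)=e^z$ and $g(z)=\sin^2 z$, which has a double zero at $a=0$ with $g''(0)=2$ and $g'''(0)=0$, so the formula predicts $(6\cdot1\cdot2-0)/(3\cdot4)=1$. A numerical contour integral around $0$ agrees:

```python
import cmath
import math

# residue predicted by the formula for f = e^z, g = sin^2 z at a = 0:
# f(0) = f'(0) = 1, g''(0) = 2, g'''(0) = 0
formula_value = (6 * 1 * 2 - 2 * 1 * 0) / (3 * 2 ** 2)   # = 1.0

# residue via (1 / 2 pi i) * contour integral of f/g over a small circle;
# the only singularity inside |z| = 0.5 is z = 0
def contour_residue(radius=0.5, steps=20000):
    total = 0j
    for k in range(steps):
        t = 2 * math.pi * k / steps
        z = radius * cmath.exp(1j * t)
        dz = 1j * z * (2 * math.pi / steps)   # dz along the circle
        total += cmath.exp(z) / cmath.sin(z) ** 2 * dz
    return total / (2j * math.pi)

assert abs(contour_residue() - formula_value) < 1e-6
```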
638,244
<p>In any (simple) type theory there are <strong>base types</strong> (e.g. the type of <em>individuals</em> and the type of <em>propositions</em>) and <strong>type builders</strong> (e.g. $\rightarrow$, which takes two types $t,t'$ and yields the type of <em>functions</em> $t \rightarrow t'$). </p> <p>For each type in such a type theory there is a rooted ordered tree with </p> <ul> <li>base types as labels of leaves and </li> <li>type builders as labels of non-leaf nodes<br/> (<em>"by which type builder is this node built?"</em>)</li> </ul> <p>that shows how the corresponding type (i.e. the root) is built from base types.</p> <blockquote> <p>What is the "official" name of such a tree when the base types are ignored (i.e. the labels of the leaves)? <br/><br/>[Formally: <em>Two types have the same <strong>???</strong> when their corresponding trees are isomorphic up to labelling of the leaves.</em>]</p> </blockquote> <p>(Something like "type constructor" or "type construction" or "construction type"?)</p> <p>As long as there is only <em>one</em> base type, this question is not very interesting. And when there are several base types, but of quite different nature - like <em>individuals</em> and <em>propositions</em> - it's not very interesting either. </p> <p>But what I think of is a type theory with several base types of the <em>same</em> kind (or nature) &mdash; like chemical elements. 
This leads to another question:</p> <blockquote> <p>(How) can/are "base types <em>of the same kind</em>" be captured in type theory?</p> </blockquote> <hr> <p>(See also: <a href="http://en.wikipedia.org/wiki/Context-free_grammar#Derivations_and_syntax_trees" rel="nofollow">context-free grammars</a>, <a href="http://en.wikipedia.org/wiki/Parse_tree" rel="nofollow">parse tree</a>, <a href="http://en.wikipedia.org/wiki/Abstract_syntax_tree" rel="nofollow">syntax tree</a>, <a href="http://en.wikipedia.org/wiki/Atomism" rel="nofollow">atomism</a>/<a href="http://en.wikipedia.org/wiki/Reductionism" rel="nofollow">reductionism</a>.)</p>
Basil
36,042
<p>I don't know about "official", but from what I understand you speak of a special kind of <em>type contexts</em>, where the "holes"---that is, the type placeholders in the context---are to be filled exclusively by <em>base</em> types (I'm borrowing the term <em>context</em> from some old experience of mine in algebraic semantics, where contexts make sense for trees in general).</p> <p>To illustrate what I have in mind, consider a type system $\mathcal{T}$ built with arrows over, say, $\mathbb{N}$. I also need a symbol for the hole, the type placeholder; so I introduce a special type variable, say $\omega$, to stand for an unspecified base type---this is so to speak a base pseudo-type. I can then define the <em>type contexts</em> over $\mathcal{T}$ inductively by (i) $\omega$ is a type context, and (ii) if $\rho$ and $\sigma$ are types or contexts then $\rho \rightarrow \sigma$ is also a type context. Examples would be $\omega$, $(\mathbb{N} \rightarrow \omega) \rightarrow \mathbb{N}$, $\mathbb{N} \rightarrow (\omega \rightarrow \omega)$, $(\omega \rightarrow \omega) \rightarrow \omega$ et cetera.</p> <p>Note that you would possibly need more base pseudo-types than one, if you have different proper base types and you want to reserve the right to fill in different base types to different placeholders.</p> <p>Note also (aha, now that I read your original post again, I understand that this is most likely what you mean) that if you want that <em>only</em> the type constructors are specified---here, the arrow constructor---and leave <em>all</em> proper leaves out of the picture, you can accommodate the inductive definition above by allowing no proper types in the inductive step. 
Again, I don't know about "official", but the term <em>skeleton</em> comes to mind.</p> <p>As to the second question about "kinds" (and again borrowing from algebraic semantics), you might try introducing different <em>sorts</em> of base types (the term to look up would be <em>many-sorted algebras</em>): if you want to play with, say, two kinds of types, $k$ and $l$, then I intuitively understand that $k = \{ \alpha_1, \ldots, \alpha_m \}$ and $l = \{ \beta_1, \ldots, \beta_n \}$; that is, all $\alpha_i$'s and $\beta_j$'s are base types of the kind $k$ and $l$ respectively. Surely, an inductive understanding of the special type contexts that you have in mind above, can be had in this generalized setting as well. EDIT: In this case of course your type constructors should be metatyped by kinds. My chemistry is unfortunately rusted to dirt these days, but (in danger of making a fool of myself) say that you can form some "bond" $b$ between "elements" of the kind $k$, $l$, and $k$ again, but not of the kind $k$, $l$, $l$; then you should specify your bond's "arity" by $b : k \times l \times k$. Something along these lines anyway.</p> <p>Hope this algebraic-semantics take helps.</p>
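A tiny executable model of the "skeleton" idea (my illustration; the names `arrow`, `skeleton`, and `HOLE` are made up for this sketch, not standard terminology): types are base-type names or arrow pairs, and the skeleton replaces every leaf with a single placeholder, so two types "have the same skeleton" exactly when the resulting trees are equal.

```python
HOLE = "*"   # the base pseudo-type standing in for any leaf

def arrow(t1, t2):
    # the arrow type builder, represented as a tagged pair
    return ("->", t1, t2)

def skeleton(t):
    if isinstance(t, str):          # a base type: forget its name
        return HOLE
    _, t1, t2 = t                   # an arrow type: recurse on both sides
    return ("->", skeleton(t1), skeleton(t2))

# (N -> N) -> P and (P -> N) -> N share a skeleton; N -> (N -> N) does not.
a = arrow(arrow("N", "N"), "P")
b = arrow(arrow("P", "N"), "N")
c = arrow("N", arrow("N", "N"))
assert skeleton(a) == skeleton(b)
assert skeleton(a) != skeleton(c)
```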
2,275,604
<p>We can break up the circle into an infinite number of rings with perimeter up to $2\pi r$. For a given circle of radius $r$, the outside ring has perimeter $2\pi r$ and the smallest one has, of course, perimeter $0$. We can add up the areas of all the rings using the arithmetic series concept; we can take the average of the first term and the last term and multiply by the number of terms. Therefore, we can find the area of the circle as $\frac{2\pi r + 0}{2} \times r = \pi r^2$.</p> <p>I thought of this proof and was wondering if it was valid. I know the phrase "arithmetic series" to add up/integrate the rings together is not really the right wording, but is it the right idea?</p>
Google X
441,387
<p>This would go solidly into the category of "plausibility argument", not proof. The fundamental idea could be formalized into a proof, but this goes for many (if not most) plausibility arguments. To do it properly would require a lot of work to remove ambiguity from statements like "we can add up all of the infinite rings using the arithmetic series concept." You would essentially need to build up calculus. </p>
2,275,604
<p>We can break up the circle into an infinite number of rings with perimeter up to $2\pi r$. For a given circle of radius $r$, the outside ring has perimeter $2\pi r$ and the smallest one has, of course, perimeter $0$. We can add up the areas of all the rings using the arithmetic series concept; we can take the average of the first term and the last term and multiply by the number of terms. Therefore, we can find the area of the circle as $\frac{2\pi r + 0}{2} \times r = \pi r^2$.</p> <p>I thought of this proof and was wondering if it was valid. I know the phrase "arithmetic series" to add up/integrate the rings together is not really the right wording, but is it the right idea?</p>
Community
-1
<p>For a slightly more detailed argument (not rigorous yet, but closer), consider a large integer radius $r$ and accumulate the contributions of $r$ rings, each $1$ unit wide.</p> <p>$$A\approx\sum_{s=0}^{r-1}2\pi\left(s+\frac12\right)=2\pi\frac{(r-1)r+r}2=\pi r^2$$</p> <p>by the triangular numbers formula. Then by similarity, the formula extends to arbitrary radii.</p> <p>But it still remains to prove that the area of a ring is well approximated by its perimeter, i.e. that you can "straighten" it...</p>
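The finite ring-sum above is easy to check numerically (a quick sketch of mine, not part of the original answer):

```python
import math

def ring_sum(r):
    # total area of r unit-wide rings, each counted as its
    # mid-radius circumference 2*pi*(s + 1/2)
    return sum(2 * math.pi * (s + 0.5) for s in range(r))

# the arithmetic series collapses to pi * r^2 exactly
for r in (10, 100, 1000):
    assert math.isclose(ring_sum(r), math.pi * r**2, rel_tol=1e-9)
```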
3,230,957
<p>Q5. Calculate the eigenvalues and eigenvectors of the following matrix</p> <p><span class="math-container">$$\left(\begin{matrix} 3 &amp; \sqrt{2} \\ \sqrt{2} &amp; 2 \end{matrix}\right)$$</span></p> <p>It is a <span class="math-container">$2 \times 2$</span> matrix with square-root entries.</p>
zbrads2
655,480
<p>The characteristic polynomial is <span class="math-container">$$(3-\lambda)(2-\lambda)-2.$$</span> Can you see how to get that?</p> <p>Now set this polynomial equal to zero and solve for <span class="math-container">$\lambda$</span> to get the eigenvalues. If we let <span class="math-container">$A$</span> be your matrix, then you can find the eigenvectors by solving <span class="math-container">$(A-\lambda I)v=0$</span>, which you can do by augmenting a column of zeroes onto the matrix <span class="math-container">$A-\lambda I$</span> and doing some row reduction.</p>
3,230,957
<p>Q5. Calculate the eigenvalues and eigenvectors of the following matrix</p> <p><span class="math-container">$$\left(\begin{matrix} 3 &amp; \sqrt{2} \\ \sqrt{2} &amp; 2 \end{matrix}\right)$$</span></p> <p>It is a <span class="math-container">$2 \times 2$</span> matrix with square-root entries.</p>
Community
-1
<p>We have for the eigenvalues:</p> <p><span class="math-container">$$ \det (\begin{bmatrix} 3 &amp; \sqrt{2} \\ \sqrt{2} &amp; 2 \\ \end{bmatrix} - \lambda \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; 1 \\ \end{bmatrix})=0$$</span></p> <p>This gives <span class="math-container">$\lambda^2-5\lambda+4=0$</span>, so <span class="math-container">$\lambda=1,4$</span>, which are our eigenvalues.</p> <p>Then you can try the last bit for the eigenvectors: we have <span class="math-container">$(A-\lambda I)v=0$</span>, where <span class="math-container">$A$</span> is the original matrix and <span class="math-container">$\lambda$</span> ranges over the two eigenvalues we have found. Then you solve this to get your eigenvectors!</p>
3,230,957
<p>Q5. Calculate the eigenvalues and eigenvectors of the following matrix</p> <p><span class="math-container">$$\left(\begin{matrix} 3 &amp; \sqrt{2} \\ \sqrt{2} &amp; 2 \end{matrix}\right)$$</span></p> <p>It is a <span class="math-container">$2 \times 2$</span> matrix with square-root entries.</p>
IamKnull
610,697
<p>Find eigenvalues from the characteristic polynomial:</p> <p><span class="math-container">$\left|\begin{matrix} 3-\lambda &amp; \sqrt{2} \\ \sqrt{2} &amp; 2-\lambda \end{matrix}\right| =\lambda^2-5\lambda+4=(\lambda-1)(\lambda-4)$</span></p> <h2><span class="math-container">$\lambda_1=1\;\; \lambda_2=4$</span></h2> <p>For every λ we find its own vector(s):</p> <p><span class="math-container">$\lambda_1=1$</span></p> <p><span class="math-container">$A-\lambda_1I=\left(\begin{matrix} 2 &amp; \sqrt{2} \\ \sqrt{2} &amp; 1 \end{matrix}\right)$</span></p> <p><span class="math-container">$(A-\lambda I)v=0$</span></p> <p>So we have a homogeneous system of linear equations, which we solve by Gaussian elimination:</p> <p><span class="math-container">$\left(\begin{matrix} 2 &amp; \sqrt{2} &amp; 0 \\ \sqrt{2} &amp; 1 &amp; 0 \end{matrix}\right)$</span></p> <p><span class="math-container">$\begin{matrix} x_1 &amp; +\frac{\sqrt{2}}{2}x_2 &amp; = &amp; 0 \end{matrix}$</span></p> <p>General solution: <span class="math-container">$X=\left(\begin{matrix} -\frac{\sqrt{2}}{2}x_2 \\ x_2 \end{matrix}\right)$</span></p> <p>Let <span class="math-container">$x_2=1,\; v_1=\left(\begin{matrix} -\frac{\sqrt{2}}{2} \\ 1 \end{matrix}\right)$</span></p> <hr> <p><span class="math-container">$\lambda_2=4$</span></p> <p><span class="math-container">$A-\lambda_2I=\left(\begin{matrix} -1 &amp; \sqrt{2} \\ \sqrt{2} &amp; -2 \end{matrix}\right)$</span></p> <p><span class="math-container">$(A-\lambda I)v=0$</span></p> <p>So we have a homogeneous system of linear equations, which we solve by Gaussian elimination:</p> <p><span class="math-container">$\left(\begin{matrix} -1 &amp; \sqrt{2} &amp; 0 \\ \sqrt{2} &amp; -2 &amp; 0 \end{matrix}\right)$</span></p> <p><span class="math-container">$\begin{matrix} x_1 &amp; -\sqrt{2}x_2 &amp; = &amp; 0 \end{matrix}$</span></p> <p>General solution: <span class="math-container">$X=\left(\begin{matrix} \sqrt{2}x_2 \\ x_2 \end{matrix}\right)$</span></p> <p>Let <span class="math-container">$x_2=1,\; v_2=\left(\begin{matrix} \sqrt{2} \\ 1 \end{matrix}\right)$</span></p>
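A quick numerical cross-check of these eigenpairs (my own addition, pure Python so it needs no libraries):

```python
import math

s2 = math.sqrt(2)
A = [[3.0, s2], [s2, 2.0]]

def matvec(M, v):
    # 2x2 matrix-vector product
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

# the eigenpairs found by hand above
pairs = [(1.0, [-s2/2, 1.0]), (4.0, [s2, 1.0])]
for lam, v in pairs:
    Av = matvec(A, v)
    assert all(math.isclose(Av[i], lam * v[i], abs_tol=1e-12) for i in range(2))

# roots of the characteristic polynomial lambda^2 - 5*lambda + 4
roots = {(5 - math.sqrt(25 - 16)) / 2, (5 + math.sqrt(25 - 16)) / 2}
assert roots == {1.0, 4.0}
```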
212,240
<p>I'm a beginner in the area of free boundary problems. Let me first give some background:</p> <p>$\Omega \subset \mathbb{R}^n$ is an open connected set, and locally $\partial \Omega$ is a Lipschitz graph. Consider the convex set $$K:=\{v \in L^1_{loc}(\Omega): \nabla v \in L^2(\Omega) \,, v=u^0 \mbox{ on } \partial \Omega\},$$ where $u^0\ge0,u^0 \in L^1_{loc}(\Omega)$, and $\nabla u^0 \in L^2(\Omega)$. </p> <p>We are looking for the minimizer $u$ of the functional $$J(v):=\int_{\Omega}(|\nabla v|^2+\chi_{\{v&gt;0\}})$$ in the class $K$.</p> <p>It is proved that the minimizer $u$ exists and satisfies the following properties: $$u \ge 0 , \, \Delta u=0 \, \mbox{on the open set $\{u&gt;0\}$}, \, \mbox{and $u$ is subharmonic},$$ see sections 1-2 in the paper by Alt and Caffarelli <a href="ftp://eudml.org/doc/152360" rel="noreferrer">here</a>.</p> <p>It is also proved in sections 3-5 that $\partial\{u&gt;0\}$ has locally finite $\mathcal{H}^{n-1}$ measure. However, a lot of intermediate theorems such as Corollary 3.3 and Remark 4.2 are based on the fact that $|\partial\{u&gt;0\}|=0$, that is, $|\partial\{u=0\}|=0$. This fact is not proved in the paper, and generally it is not true if $u$ is merely continuous.</p> <p>Now my question is, why is $|\partial\{u=0\}|=0$ true? I've been stuck on it for a couple of days.</p> <p>Another related question is, what conditions on a general function $u$, which is not necessarily the minimum of the functional $J$, can guarantee that $|\partial\{u=0\}|=0$? Is the assumption that $u$ is a Sobolev function enough? What if $u$ is subharmonic?</p> <p>Any suggestions would be appreciated. Thanks!</p>
jfbonder
43,444
<p>Since $\partial\{u&gt;0\}$ has locally finite $\mathcal{H}^{n-1}$ measure, its $n$-dimensional Lebesgue measure vanishes locally: $|\partial\{u&gt;0\}\cap B_r|=0$ for every ball $B_r$, and hence $|\partial\{u&gt;0\}|=0$.</p>
4,037,697
<p><span class="math-container">$a_n $</span> is a sequence defined this way: <a href="https://i.stack.imgur.com/nHdiD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nHdiD.png" alt="enter image description here" /></a></p> <p>and we define: <a href="https://i.stack.imgur.com/b01hX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b01hX.png" alt="enter image description here" /></a></p> <p>I need to prove that the following holds: <a href="https://i.stack.imgur.com/YMuE6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YMuE6.png" alt="enter image description here" /></a></p> <p>I have an exam soon with a lot of similar questions, but I have no idea where to start.</p>
Chrystomath
84,081
<p>Let <span class="math-container">$A=RU$</span> be the polar decomposition of <span class="math-container">$A$</span>, where <span class="math-container">$R$</span> is positive semi-definite and <span class="math-container">$U$</span> is unitary.</p> <p>Then <span class="math-container">$R^2=A^TA=\lambda^2I$</span>; being diagonalizable this forces <span class="math-container">$R=|\lambda|I$</span> and thus <span class="math-container">$A=|\lambda|U$</span>, a similarity transformation.</p>
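A small numerical illustration of the conclusion (my own sketch; constructing $A$ as a scaled rotation is just one example of a matrix with $A^TA=\lambda^2 I$):

```python
import math

lam = 3.0
theta = 0.7
# A = lam * U for an orthogonal (rotation) matrix U, so A^T A = lam^2 * I
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
A = [[lam * U[i][j] for j in range(2)] for i in range(2)]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

AtA = mat_mul(transpose(A), A)
assert all(math.isclose(AtA[i][j], lam**2 if i == j else 0.0, abs_tol=1e-12)
           for i in range(2) for j in range(2))

# conversely, whenever A^T A = lam^2 I, the matrix A/|lam| is orthogonal,
# so the positive factor R in the polar decomposition A = RU is |lam| * I
V = [[A[i][j] / abs(lam) for j in range(2)] for i in range(2)]
VtV = mat_mul(transpose(V), V)
assert all(math.isclose(VtV[i][j], 1.0 if i == j else 0.0, abs_tol=1e-12)
           for i in range(2) for j in range(2))
```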
1,768,100
<p>I have started studying field theory and I have a question. Somewhere I saw that a finite field with $p^n$ elements has a subfield of order $p^m$, where $m$ is a divisor of $n$. My question is: if it is a field, then how can it have a proper subfield? Since it is a field, it doesn't have any proper ideals, so how can it have a subfield?</p>
Martín-Blas Pérez Pinilla
98,199
<p>Parametrize the path: $$z(t) = x(t) + iy(t) = -i(1-t) + (1+i)t,\qquad t\in[0,1].$$ $$\int_{L}\left( \overline {z}+1\right)dz = \int_0^1(\overline{z(t)}+1)z'(t)\,dt =\cdots$$</p>
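A quick numerical check of this parametrization (my own addition; carrying out the integral by hand gives $\frac32+3i$, since $z'(t)=1+2i$ is constant and the remaining integrand is linear in $t$):

```python
# trapezoid rule for the parametrized integral; the integrand is linear
# in t, so the rule is exact up to floating-point roundoff
N = 20000
dt = 1.0 / N
total = 0j
for k in range(N + 1):
    t = k * dt
    z = -1j * (1 - t) + (1 + 1j) * t     # z(t) = t + i(2t - 1)
    f = (z.conjugate() + 1) * (1 + 2j)   # (conj(z) + 1) * z'(t)
    total += (0.5 if k in (0, N) else 1.0) * f
integral = total * dt

assert abs(integral - (1.5 + 3j)) < 1e-8
```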
416,940
<p><span class="math-container">$\DeclareMathOperator\Spec{Spec}\newcommand{\perf}{\mathrm{perf}}\DeclareMathOperator\SHC{SHC}$</span>I have just finished reading the paper &quot;The spectrum of prime ideals in tensor triangulated categories&quot; in which Balmer proposes his notion of spectrum which nowadays is considered central in the understanding and classification of the homotopy categories which we want to study in the concrete mathematical practice (to name a few examples: the <span class="math-container">$G$</span>-equivariant stable homotopy category for <span class="math-container">$G$</span> a compact Lie group, or the derived category of quasi-coherent sheaves on a scheme).</p> <p>Since I am not familiar with this notion I wanted to ask here various questions about the underlying ideas of such concept.</p> <p>(1) I noticed that all the examples proposed by Balmer in his paper deal with compact objects, in the sense that the proposed tensor triangulated categories (t.t. categories from now on) can be identified with the full-subcategories of compact objects in a larger t.t. category. And from what I remember every other example which I read in different sources does the same thing: we study the Balmer spectrum of compact objects in a larger t.t. category. Balmer does not explicitly state that this must be the case, indeed his definition does not require the involved objects to be compact a priori.</p> <p>For this abstract machinery to work we only need the t.t. category to be essentially small. I could think that this is the problem: in general we cannot guarantee that the t.t. category we are interested in is essentially small so we restrict to the subcategory of its compact objects for this property to be more likely.</p> <p>But I have other reasons to believe that this justification is not completely correct: if we indulge in the intuition suggested by the choice of words, we should think of the support of an object in our t.t. 
category as a higher categorical analogue of the usual support of a function. Fixing the domain of our functions to be compact spaces ensures that the support will also be compact. So if we consider also non-compact objects the support could be non &quot;topologically small&quot;.</p> <p>Thus I am inclined to believe that for complete t.t. categories either the Balmer spectrum is too big to be computed or it is not the correct notion we want to use to classify their tensor subcategories.</p> <p>(2) Related to the previous question: if the proposed notion of Balmer spectrum should be applied only to categories of compact objects, what can we deduce about the whole category of possibly non-compact objects? Suppose we consider an essentially small t.t. category <span class="math-container">$\mathcal{T}$</span> and we manage to compute the Balmer spectrum of <span class="math-container">$\mathcal{T}^c$</span>, can we deduce any information regarding the thick tensor ideals or localizing tensor ideals of <span class="math-container">$\mathcal{T}$</span>?</p> <p>Two classical examples of this are <span class="math-container">$D(R)$</span>, the derived category of a commutative ring <span class="math-container">$R$</span>, and <span class="math-container">$\SHC$</span>, the stable homotopy category. For <span class="math-container">$D^{\perf}(R)$</span> this is homeomorphic to the usual Zariski spectrum <span class="math-container">$\Spec(R)$</span>, while for <span class="math-container">$\SHC^\mathrm{c}$</span> we have the classification provided by the thick subcategory theorem from chromatic homotopy theory. But I have never seen a classification (even partial) of their thick tensor subcategories or thick localizing subcategories.</p> <p>(3) What information does the Balmer spectrum encode? Balmer proves that there is a bijection between the Thomason subsets of this spectrum and the radical thick tensor ideals of the t.t. category. But other than this?
At first I expected that if two t.t. categories had isomorphic spectrum then they would have a sufficiently compatible t.t. structure. Then I found the following interesting example: we have that the Balmer spectrum of the category of compact rational <span class="math-container">$S^1$</span>-equivariant spectra is homeomorphic to <span class="math-container">$\Spec(\mathbb{Z})$</span>. If <span class="math-container">$H \leq S^1$</span> is a closed subgroup then the kernel of <span class="math-container">$\phi^H$</span>, the non-equivariant geometric <span class="math-container">$H$</span>-fixed points, provides a Balmer prime. Then <span class="math-container">$\ker \phi^{S^1}$</span> corresponds to the generic point <span class="math-container">$(0)$</span>, while <span class="math-container">$\ker \phi^{C_n}$</span> can be mapped to <span class="math-container">$(p_n)$</span> where we order the prime numbers <span class="math-container">$\{p_n : n \geq 1 \}$</span>.</p> <p>Therefore <span class="math-container">$S^1\text{-}\SHC^\mathrm{c}_{\mathbb{Q}}$</span> and <span class="math-container">$D^{\perf}(\mathbb{Z})$</span> have the same Balmer spectrum, but they are very different t.t. categories: for one, the latter has a compact generator given by the tensor unit, while this is not the case in the former category. I would have thought that the t.t. structure would have been more rigid with respect to the Balmer spectrum, but this seems not to be the case.</p> <p>If you wanted a more precise question: if two t.t. categories have homeomorphic Balmer spectra, can we translate this to any information on the two categories? What if the homeomorphism is induced by a monoidal exact functor? Can we deduce it is fully faithful, essentially surjective or any other property?</p> <p>I hope that my questions are not too vague or naïve.</p>
Drew Heard
16,785
<p>To help with (3), let me point out that in 'nice' situations (e.g., in the derived category of a noetherian commutative ring), the Balmer spectrum classifies <em>all</em> localizing tensor ideals of the category, in terms of arbitrary subsets of the spectrum (so the topology plays no role). This uses a theory of support developed by <a href="https://www.math.ucla.edu/%7Ebalmer/Pubfile/Idemp.pdf" rel="nofollow noreferrer">Balmer--Favi</a> and <a href="https://arxiv.org/abs/1105.4692" rel="nofollow noreferrer">Stevenson</a>. Along with Beren Sanders and Tobias Barthel, we investigated when this occurs in some detail in a <a href="https://arxiv.org/abs/2106.15540" rel="nofollow noreferrer">recent preprint</a>.</p>
1,742,768
<p>How to examine convergence of $\sum_{n=1}^{\infty}(\sqrt[n]{a} - \frac{\sqrt[n]{b}+\sqrt[n]{c}}{2})$ for $a, b, c&gt; 0$ using Taylor's theorem?</p>
Martin Argerami
22,857
<p>You have, using Taylor's polynomial, $$ a^{1/n}=e^{\frac1n\,\log a}=1+\frac1n\log a+\frac{e^{c_n}}{2n^2}\,\log^2 a, $$ where $c_n$ lies between $0$ and $\frac1n\,\log a$. So \begin{align} \sum_{n=1}^{\infty}(\sqrt[n]{a} - \frac{\sqrt[n]{b}+\sqrt[n]{c}}{2}) &amp;=\sum_{n=1}^\infty\frac1n\,\left(\log a-\frac12\log b-\frac12\,\log c\right)+\frac{1}{2n^2}\left(e^{c_n}\log^2 a-\frac{e^{d_n}}2\log^2b-\frac{e^{f_n}}2\,\log^2c\right)\\ \ \\ &amp;=\sum_{n=1}^\infty\frac1n\,\left(\log \frac a{(bc)^{1/2}}\right)+\frac{1}{2n^2}\left(e^{c_n}\log^2 a-\frac{e^{d_n}}2\log^2b-\frac{e^{f_n}}2\,\log^2c\right).\\ \ \\ \end{align} The series of the second terms will always converge because of the $n^2$ and the fact that the exponentials are bounded by fixed numbers. So the convergence is decided by $$\sum_{n=1}^\infty\frac1n\,\left(\log \frac a{(bc)^{1/2}}\right).$$ If the log is any nonzero number, you get a divergent series. Thus convergence happens if and only if $a/(bc)^{1/2}=1$, that is $$ a^2=bc. $$</p>
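A numerical illustration of this dichotomy (my own addition): with $a^2=bc$, e.g. $a=2$, $b=1$, $c=4$, the terms decay like $1/n^2$ and the partial sums settle; breaking the condition makes them drift like a harmonic series.

```python
def term(n, a, b, c):
    return a**(1/n) - (b**(1/n) + c**(1/n)) / 2

# a^2 = bc: the 1/n contributions cancel and the series converges
s_conv = [sum(term(n, 2, 1, 4) for n in range(1, N))
          for N in (10**3, 10**4, 10**5)]
assert abs(s_conv[-1] - s_conv[-2]) < 1e-3   # partial sums have stabilized

# a^2 != bc: partial sums behave like log(a / sqrt(bc)) * H_N and keep growing
s_div = [sum(term(n, 3, 1, 4) for n in range(1, N))
         for N in (10**3, 10**4, 10**5)]
assert s_div[-1] - s_div[-2] > 0.1           # still drifting upward
```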
1,760,687
<p>Can anyone explain me why this equality is true?</p> <p>$x^k(1-x)^{-k} = \sum_{n = k}^{\infty}{{n-1}\choose{k-1}}x^n$</p> <p>I really don't see how any manipulation could give me this result. </p> <p>Thanks!</p>
marty cohen
13,079
<p>Yes, with initial term 1, difference 3, and length 2.</p> <p>Note that <em>any</em> sequence of length 2 is a linear progression, and that any sequence of length $n$ is a progression of order $n-1$ (i.e., has its $(n-1)$-st differences constant).</p>
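The identity in the question itself, $x^k(1-x)^{-k} = \sum_{n \ge k}\binom{n-1}{k-1}x^n$, can be verified by computing Taylor coefficients directly (my own check, via repeated Cauchy products with the geometric series):

```python
from math import comb

def series_coeffs(k, N):
    # Taylor coefficients of x^k * (1-x)^(-k) up to order N
    geom = [1] * (N + 1)                 # 1/(1-x) = sum_n x^n
    p = [1] + [0] * N                    # start with the constant series 1
    for _ in range(k):                   # multiply by 1/(1-x), k times
        p = [sum(p[i] * geom[n - i] for i in range(n + 1))
             for n in range(N + 1)]
    return [0] * k + p[: N + 1 - k]      # multiply by x^k (shift by k)

N, k = 20, 4
coeffs = series_coeffs(k, N)
assert all(coeffs[n] == comb(n - 1, k - 1) for n in range(k, N + 1))
assert all(c == 0 for c in coeffs[:k])
```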
4,492,566
<blockquote> <p>To which degree must I rotate a parabola for it to be no longer the graph of a function?</p> </blockquote> <p>I have no problem with narrowing the question down by only concerning the standard parabola: <span class="math-container">$$f(x)=x^2.$$</span></p> <p>I am looking for a specific angle measure. One such measure must exist as the reflection of <span class="math-container">$f$</span> over the line <span class="math-container">$y=x$</span> is certainly no longer well-defined. I realize that preferentially I should ask the question on this site with a bit of work put into it but, alas, I have no intuition for where to start. I suppose I know immediately that it must be less than <span class="math-container">$45^\circ$</span> as such a rotation will cross the y-axis at <span class="math-container">$(0,0)$</span> and <span class="math-container">$(0,\sqrt{2})$</span>.</p> <p>Any insight on how to proceed?</p>
5xum
112,884
<p>Rotating the parabola even by the smallest angle will cause it to no longer be well defined.</p> <p>Intuitively, you can prove this for yourself by considering the fact that the derivative of a parabola is unbounded. This means that the parabola becomes arbitrarily &quot;steep&quot; for large (or small) values of <span class="math-container">$x$</span>, i.e. its angle being closer and closer to <span class="math-container">$90^\circ$</span>, and rotating it by even a little will tip it over the <span class="math-container">$90$</span> degrees.</p> <hr /> <p>For a formal proof, first, we need to explain exactly what a rotation of a parabola is. In general, a rotation in <span class="math-container">$\mathbb R^2$</span> is multiplication with a rotation matrix, which has, for a rotation by <span class="math-container">$\phi$</span>, the form <span class="math-container">$$\begin{bmatrix}\cos\phi&amp;-\sin\phi\\\sin\phi&amp;\cos\phi\end{bmatrix}$$</span></p> <p>In other words, if we start with a parabola <span class="math-container">$P= \{(x,y)|x\in\mathbb R\land y=x^2\}$</span>, then the parabola, rotated by an angle of <span class="math-container">$\phi$</span>, is</p> <p><span class="math-container">$$\begin{align}P_\phi &amp;= \left.\left\{\begin{bmatrix}\cos\phi&amp;-\sin\phi\\\sin\phi&amp;\cos\phi\end{bmatrix}\cdot\begin{bmatrix}x\\y\end{bmatrix}\right| x\in\mathbb R, y=x^2\right\}\\ &amp;=\{(x\cos\phi - y\sin\phi, x\sin\phi + y\cos\phi)|x\in\mathbb R, y=x^2\}\\ &amp;= \{(x\cos\phi-x^2\sin\phi, x\sin\phi + x^2\cos\phi)| x\in\mathbb R\}\end{align}.$$</span></p> <hr /> <p>The question now is which values of <span class="math-container">$\phi$</span> construct a well defined parabola <span class="math-container">$P_\phi$</span>, where by &quot;well defined&quot;, we mean &quot;it is a graph of a function&quot;, i.e., for each <span class="math-container">$\overline x\in\mathbb R$</span>, there exists exactly one value <span class="math-container">$\overline 
y$</span> such that <span class="math-container">$(\overline x,\overline y)\in P_\phi$</span>.</p> <p>Clearly, if <span class="math-container">$\phi = 0$</span>, we have <span class="math-container">$P_0=\{(x, x^2)|x\in\mathbb R\}$</span> which is well defined, because for every <span class="math-container">$\overline x$</span>, the value <span class="math-container">$\overline y=\overline x^2$</span> is the unique value required for <span class="math-container">$(\overline x,\overline y)$</span> to be in <span class="math-container">$P_0$</span>. Also, if <span class="math-container">$\phi=\pi$</span>, then <span class="math-container">$P_\pi = \{(-x, -x^2)|x\in\mathbb R\}$</span> is also well defined because if <span class="math-container">$(\overline x,\overline y)\in P_\pi$</span>, then <span class="math-container">$\overline y=-\overline x^2$</span>.</p> <hr /> <p>Now, observe what happens if <span class="math-container">$\phi\notin\{0,\pi\}$</span>. For now, let's assume that <span class="math-container">$\phi\in(0,\frac\pi2)$</span>. In that case, <span class="math-container">$\sin\phi\neq 0$</span>, which means that the equation <span class="math-container">$$x\cos\phi-x^2\sin\phi=0$$</span> has <strong>two</strong> solutions. One solution is <span class="math-container">$x=0$</span>, the other is <span class="math-container">$x=\frac{\cos\phi}{\sin\phi} = \cot\phi$</span>.</p> <p>This means that, if we take <span class="math-container">$\overline x=0$</span>, there are two values of <span class="math-container">$x$</span> that can create a point <span class="math-container">$(\overline x, \overline y)$</span>, and we have two possible values of <span class="math-container">$\overline y$</span> as well. 
One is <span class="math-container">$\overline y_1 = 0$</span>, the other is <span class="math-container">$$\overline y_2 = x\sin\phi + x^2\cos\phi = \frac{\cos\phi}{\sin\phi} \sin\phi + \left(\frac{\cos\phi}{\sin\phi}\right)^2\cos\phi =\cos\phi + \frac{\cos^3\phi}{\sin^2\phi}$$</span></p> <p>and, because <span class="math-container">$\phi\in(0,\frac\pi2)$</span>, we know that <span class="math-container">$\overline y_2&gt;0$</span>, which means <span class="math-container">$\overline y_2\neq \overline y_1$</span>, and therefore <span class="math-container">$P_\phi$</span> is not a graph of a function.</p> <hr /> <p>Note that the cases where <span class="math-container">$\phi$</span> lies in one of the other three quadrants can be solved similarly to the one above, or you can use symmetry to translate all of the other three cases to the one already solved above.</p>
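The two colliding points can be checked numerically for any small angle (a sketch of mine following the computation above):

```python
import math

def rotate(x, y, phi):
    # rotation of the point (x, y) about the origin by angle phi
    return (x * math.cos(phi) - y * math.sin(phi),
            x * math.sin(phi) + y * math.cos(phi))

phi = 0.1                        # any angle in (0, pi/2) works
x1, x2 = 0.0, 1 / math.tan(phi)  # the two parameter values found above

X1, Y1 = rotate(x1, x1**2, phi)
X2, Y2 = rotate(x2, x2**2, phi)

assert abs(X1 - X2) < 1e-9  # both points land on the vertical line x = 0...
assert abs(Y2 - Y1) > 1     # ...with different heights: not a function graph
```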
4,492,566
<blockquote> <p>To which degree must I rotate a parabola for it to be no longer the graph of a function?</p> </blockquote> <p>I have no problem with narrowing the question down by only concerning the standard parabola: <span class="math-container">$$f(x)=x^2.$$</span></p> <p>I am looking for a specific angle measure. One such measure must exist as the reflection of <span class="math-container">$f$</span> over the line <span class="math-container">$y=x$</span> is certainly no longer well-defined. I realize that preferentially I should ask the question on this site with a bit of work put into it but, alas, I have no intuition for where to start. I suppose I know immediately that it must be less than <span class="math-container">$45^\circ$</span> as such a rotation will cross the y-axis at <span class="math-container">$(0,0)$</span> and <span class="math-container">$(0,\sqrt{2})$</span>.</p> <p>Any insight on how to proceed?</p>
Carsten S
90,962
<p>A part of the question is: how much do we have to rotate <span class="math-container">$y = x^2$</span> around the origin such that it hits the <span class="math-container">$y$</span>-axis in a second point, in addition to the origin? To simplify this, instead of rotating the parabola (by <span class="math-container">$\varphi$</span>) we can rotate the <span class="math-container">$y$</span>-axis (by <span class="math-container">$-\varphi$</span>). Now any non-vertical line through the origin has an equation <span class="math-container">$y=ax$</span> and <span class="math-container">$$x^2=ax$$</span> has a solution at <span class="math-container">$x=a$</span>, so the line hits the parabola at <span class="math-container">$(a, a^2)$</span>. We see that we cannot rotate the parabola even a tiny bit without losing the property that it is the graph of a function.</p> <p>(In the above you may have noticed that I glossed over two cases: For <span class="math-container">$\varphi=\pi$</span> the <span class="math-container">$y$</span>-axis is taken to itself, so it is vertical, and of course the parabola is mapped to <span class="math-container">$y=-x^2$</span>, the graph of a function. And in the case <span class="math-container">$a=0$</span>, corresponding to <span class="math-container">$\varphi=\pi/2$</span> or <span class="math-container">$\varphi=3\pi/2$</span>, the point <span class="math-container">$(a,a^2)=(0,0)$</span> is the origin, so we would have to use a different line to show that the rotated parabola is not the graph of a function. But this is obvious.)</p>
4,492,566
<blockquote> <p>To which degree must I rotate a parabola for it to be no longer the graph of a function?</p> </blockquote> <p>I have no problem with narrowing the question down by only concerning the standard parabola: <span class="math-container">$$f(x)=x^2.$$</span></p> <p>I am looking for a specific angle measure. One such measure must exist as the reflection of <span class="math-container">$f$</span> over the line <span class="math-container">$y=x$</span> is certainly no longer well-defined. I realize that preferentially I should ask the question on this site with a bit of work put into it but, alas, I have no intuition for where to start. I suppose I know immediately that it must be less than <span class="math-container">$45^\circ$</span> as such a rotation will cross the y-axis at <span class="math-container">$(0,0)$</span> and <span class="math-container">$(0,\sqrt{2})$</span>.</p> <p>Any insight on how to proceed?</p>
whuber
1,489
<p>Without any loss of generality, assume the center of rotation is the origin, so that the first coordinate of the image of any point <span class="math-container">$(x,y)$</span> under a rotation by <span class="math-container">$\theta$</span> equals <span class="math-container">$x\cos\theta - y\sin\theta.$</span></p> <p>Because</p> <p><span class="math-container">$$\cot\theta \cos\theta - \cot^2\theta \sin\theta = \frac{\cos\theta}{\sin\theta}\cos\theta - \frac{\cos^2\theta}{\sin^2\theta}\sin\theta = 0,$$</span></p> <p>the point <span class="math-container">$(\cot\theta,\cot^2\theta)$</span> on the parabola is mapped by a rotation of <span class="math-container">$\theta$</span> to a point with coordinates of the form <span class="math-container">$(0,y).$</span></p> <p>Provided <span class="math-container">$\theta$</span> is not an integral multiple of <span class="math-container">$\pi/2,$</span> such a point exists and has positive distance from the origin, showing that <span class="math-container">$(0,y)\ne(0,0).$</span> However, because <span class="math-container">$(0,0)$</span> is also on the rotated parabola, <strong>the rotated parabola does not define a function in any neighborhood of <span class="math-container">$0.$</span></strong></p>
48,864
<p>I can't resist asking this companion question to the <a href="https://mathoverflow.net/questions/48771/proofs-that-require-fundamentally-new-ways-of-thinking"> one of Gowers</a>. There, Tim Dokchitser suggested the idea of Grothendieck topologies as a fundamentally new insight. But Gowers' original motivation is to probe the boundary between a human's way of thinking and that of a computer. I argued, therefore, that Grothendieck topologies might be more natural to computers, in some sense, than to humans. It seems Grothendieck always encouraged people to think of an object in terms of the category that surrounds it, rather than its internal structure. That is, even the most lovable mathematical structure might be represented simply as a symbol $A$, and its special properties encoded in arrows $A\rightarrow B$ and $C\rightarrow A$, that is, a grand combinatorial network. I'm tempted to say that the idea of a Grothendieck topology is something of an obvious corollary of this framework. It's not something I've devoted much thought to, but it seems this is exactly the kind of reasoning more agreeable to a computer than to a woolly, touchy-feelly thinker like me.</p> <p>So the actual question is, what other mathematical insights do you know that might come more naturally to a computer than to a human? I won't try here to define computers and humans, for lack of competence. I don't think having a deep knowledge of computers is really a prerequisite for the question or for an answer. But it would be nice if your examples were connected to substantial mathematics. </p> <p>I see that this question is subjective (but not argumentative in intent), so if you wish to close it on those grounds, that's fine.</p> <p>Added, 11 December: Being a faulty human, I had an inexplicable attachment to the past tense. But, being weak-willed on top of it all, I am bowing to peer pressure and changing the title.</p>
Daniel Moskovich
2,051
<p>A <a href="http://en.wikipedia.org/wiki/Simplicial_set">simplicial set</a> is surely an idea which would be more natural to a computer. Breaking a shape up into simplices is still something a human would do, because simplices are contractible geometric objects whose gluings one can explicitly describe. But to pass from this to finite strings with face and degeneracy maps, and then to base your theory on that, is pure computer-thought... and, like any good computer idea, extremely pretty.</p>
2,014,366
<p>First of all, it's not an exam sheet or anything like that; I'm just preparing myself on quantifiers.</p> <p>I couldn't find a similar task to this one, so I had to ask here.</p> <hr> <p>Let '$x \mathrel{\heartsuit} y$' stand for 'x loves y'. Rewrite the sentence 'Someone loves everyone' using quantifiers in two different ways.</p>
Jan Schultke
379,035
<p>I can only think of one way: $$\exists x \forall y:x \heartsuit y$$ </p>
340,264
<p>Given that</p> <p>$L\{J_0(t)\}=\dfrac{1}{\sqrt{s^2+1}},$</p> <p>where $J_0(t)=\sum\limits^{\infty}_{n=0}\dfrac{(-1)^n}{(n!)^2}\left(\dfrac{t}{2}\right)^{2n}$,</p> <p>find the Laplace transform of $tJ_0(t)$.</p> <p>$L\{tJ_0(t)\}=$ ________?</p>
Andreas Blass
48,510
<p>You want the $p$-adic infinite sum $x=x_0+x_1p+x_2p^2+\dots$ to satisfy $x^2=2$. That will require each of the finite partial sums $S_n=x_0+x_1p+x_2p^2+\dots+x_np^n$ to satisfy ${S_n}^2\equiv 2\pmod{p^{n+1}}$, because the omitted terms of the infinite series will contribute only terms divisible by $p^{n+1}$ to $x^2$. I suggest choosing the $x_n$'s one after the other, inductively, so that, when you're trying to choose $x_n$, you already know $x_0,\dots,x_{n-1}$. The $y$ that you're given serves as $x_0$ to start the induction. Choosing $x_n$ will amount, after some manipulations, to solving a congruence modulo $p$ which (fortunately) turns out to be linear in the unknown $x_n$, with a coefficient of $2y$, which is (fortunately) not $0$ modulo $p$. </p>
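The inductive construction described here can be carried out digit by digit; a Python sketch of mine for $p=7$, where $3^2\equiv 2 \pmod 7$, so we may take $y=3$ (the modular inverse via `pow(..., -1, p)` needs Python 3.8+):

```python
p = 7
S = 3          # x_0 = y: 3^2 = 9 = 2 (mod 7)
digits = 30
for n in range(1, digits):
    pn = p ** n
    # invariant: S^2 = 2 (mod p^n).  Choosing the next digit x_n amounts to
    # the linear congruence 2*S*x_n = (2 - S^2)/p^n  (mod p)
    t = (2 - S * S) // pn            # exact division, by the invariant
    x = (t * pow(2 * S, -1, p)) % p  # the coefficient 2y is nonzero mod p
    S += x * pn

assert pow(S, 2, p ** digits) == 2   # S^2 = 2 (mod 7^30)
```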
1,230,159
<p>Where can I find a complete proof to the fact that the integral closure of $\mathbb{Z}$ in $\mathbb{Q}(i)$ is $\mathbb{Z}[i]$ (the Gaussian integers are the integral closure of $\mathbb{Z}$ in the Gaussian rationals)? For such a seemingly standard fact, I can not seem to find a complete proof of this anywhere. Yes, I am aware that this question has been asked on math.stackexchange before, but there was no reference to a complete proof, nor was a complete proof ever supplied. Any help would be appreciated, thanks.</p>
hunter
108,129
<p>Here is an elementary proof from scratch. If $\alpha = a + bi$ is an algebraic integer, then $\overline{\alpha}$ will be as well, since any polynomial over $\mathbb{Q}$ having $\alpha$ as a root also has $\overline{\alpha}$ as a root.</p> <p>Therefore its trace $\alpha + \overline{\alpha} = 2a$ and its norm $\alpha\overline{\alpha} = a^2 + b^2$ will be algebraic integers as well. Conversely, if this holds, then $\alpha$ is indeed an algebraic integer, because it satisfies the polynomial $$ x^2 - Tr(\alpha)x + N(\alpha) \in \mathbb{Z}[x] $$</p> <p>Thus, using that a rational algebraic integer is an ordinary integer (because $\mathbb{Z}$ is integrally closed in $\mathbb{Q}$), it is necessary and sufficient that $2a$ be an integer and that $a^2 + b^2$ be an integer. Clearly it suffices for $a, b \in \mathbb{Z}$, i.e., clearly $\mathbb{Z}[i]$ is contained in the integral closure, and we need only show the necessity. So we need to show: if $a, b$ are rational numbers such that $2a$ and $a^2 + b^2$ are integers, then $a$ and $b$ are themselves integers.</p> <p>Let's write $a = m/2$ and $b = c/d$, where $m, c$, and $d$ are integers. We can assume that $c$ and $d$ have no common prime factors and $d$ is positive. Let $n = a^2 + b^2$, which we also know is an integer. So $$ m^2/4 + c^2/d^2 = n $$ and thus $$ m^2d^2 + 4c^2 = 4nd^2. $$ Work mod $d^2$. Since $c$ is prime to $d$, we know $c^2$ is prime to $d^2$ and therefore $4$ is equivalent to $0$ mod $d^2$, which shows that $d$ is either $1$ or $2$.</p> <p>If $d$ is $1$, then working mod $4$ we see $m$ is even, so now both $a$ and $b$ are integers as desired. So suppose $d$ is two. We get</p> <p>$$ m^2 + c^2 = 4n. $$ Work mod $4$ again. Since the only squares mod $4$ are $0$ and $1$, we see both $m^2$ and $c^2$ are $0$ mod $4$. But $c$ can't be even since it's prime to $d$, so we eliminate this case.</p>
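The arithmetic core of the proof, that $2a\in\mathbb{Z}$ and $a^2+b^2\in\mathbb{Z}$ force $a,b\in\mathbb{Z}$, can be sanity-checked by brute force over small denominators (my own addition, using exact rational arithmetic):

```python
from fractions import Fraction

def is_int(q):
    # a Fraction is an integer exactly when its reduced denominator is 1
    return q.denominator == 1

# scan a = m/2 (so 2a is an integer by construction) and b = c/d
for m in range(-20, 21):
    a = Fraction(m, 2)
    for c in range(-20, 21):
        for d in range(1, 11):
            b = Fraction(c, d)
            if is_int(a * a + b * b):
                # whenever the norm is an integer, a and b must be too
                assert is_int(a) and is_int(b)
```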
2,418,448
<p>Let $\mathbb{N}^{\mathbb{N}}$ be the set of all sequences of positive integers. For $a=(n_1,n_2,\cdots),b=(m_1,m_2,\cdots)$ define $d(a,b)=\frac{1}{\min(i:n_i\neq m_i)}, a\neq b$, $d(a,a)=0$</p> <hr> <p>It can be easily shown that $d(a,b)\leq \max\{d(a,c),d(b,c)\}$, I want to prove that if two balls in this metric space intersect, then one is contained in another</p>
Neal
20,569
<p>Let $x\in \Bbb{N}^\Bbb{N}$. The ball $B_x(r)$ is equal to all the sequences $z$ such that the first $\lfloor \frac{1}{r}\rfloor$ elements of $z$ coincide with those of $x$.</p> <p>Let us suppose $B_x(r) \cap B_y(s) \neq \emptyset$, with (say) $r \le s$, so that $\lfloor \frac{1}{r}\rfloor \ge \lfloor \frac{1}{s}\rfloor$. Then there exists some $z$ such that the first $\lfloor \frac{1}{r}\rfloor$ elements of $z$ coincide with those of $x$ and also the first $\lfloor \frac{1}{s}\rfloor$ elements of $z$ coincide with those of $y$.</p> <p>Therefore the first $\lfloor \frac{1}{s}\rfloor$ elements of $z$ coincide with those of both $x$ and $y$, so we have $x\in B_y(s)$. Because every element of $B_x(r)$ has its first $\lfloor \frac{1}{r}\rfloor$ elements coinciding with those of $x$, it must also coincide with $y$ on the first $\lfloor \frac{1}{s}\rfloor$ elements, so $B_x(r)\subset B_y(s)$.</p>
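The nesting of balls can also be checked exhaustively on a small finite model (tuples of length 4 over $\{1,2\}$, on which the same formula for $d$ is still an ultrametric; the choice of lengths and radii below is arbitrary):

```python
from itertools import product
from fractions import Fraction

def d(a, b):
    # Ultrametric on finite tuples: 1/(first 1-based index where they differ).
    for i, (x, y) in enumerate(zip(a, b), start=1):
        if x != y:
            return Fraction(1, i)
    return Fraction(0)

points = list(product((1, 2), repeat=4))
radii = [Fraction(1, k) for k in range(1, 6)] + [Fraction(7, 10)]

def ball(x, r):
    return frozenset(z for z in points if d(x, z) < r)

balls = [ball(x, r) for x in points for r in radii]

# If two open balls intersect, one contains the other.
for B1 in balls:
    for B2 in balls:
        if B1 & B2:
            assert B1 <= B2 or B2 <= B1
print("every pair of intersecting balls is nested:", len(balls), "balls checked")
```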
1,764,106
<p>In my book I have the following definition for subgroups of a group $G$ generated by $A$, a subset of G:</p> <p>$$\langle A\rangle=\{x_1^{\epsilon_1}x_2^{\epsilon_2}...x_n^{\epsilon_n}\mid x_i\in A,~\epsilon_i\in\mathbb Z,~ x_i \neq x_{i+1} ,~ n=1,2,3...\}$$</p> <p>I have no trouble understanding this. But then we define the subgroup of an abelian group $G$ generated by a subset $A$:</p> <p>$$\langle A\rangle=\{x_1^{\epsilon_1}x_2^{\epsilon_2}...x_k^{\epsilon_k}\mid x_i\in A,~\epsilon_i\in\mathbb Z \text{ for each } i\}$$</p> <p>I fail to understand how this implies that we can commute the $x_i$'s. Could somebody please explain to me how the commutative property follows from this?</p>
Ken Duna
318,831
<p>You've got to remember that you are assuming that $G$ is abelian from the start. Therefore the elements of $A$ commute with each other.</p>
293,341
<p>My apologies if this question is more appropriate for mathisfun.com, but I can only get so far reading about combinatorics and set theory before the interlocking logic becomes totally blurred. If this is a totally fundamental concept, feel free just to name it so I can read and understand the math myself.</p> <p>So the goal is to minimize repetition of questions on a quiz to avoid (or really to slow down) the creation of a master key. This is for a client and I've explained that to make this truly realistic the number of questions in the master pool would need to be huge, but I want to show them the math behind their idea.</p> <p>So they suggested having a 20 question pool with a given set being a 5-member subset. I figured out that the total number of unique quizzes <span class="math-container">$\binom{20}{5}$</span> would be <span class="math-container">$\frac{20!}{5!(20-5)!}$</span> or 15504 unique quizzes. But I know that most of those quizzes will be near identical and that it won't take as long for cheaters to see all 20 questions to make the key. To prove this to myself (without knowing the math), I simplified the total combinations to <span class="math-container">$\binom{4}{3}$</span>, like so:</p> <p>{a, b, c, d} = { {a,b,c}; {a,b,d}; {b,c,d}; {a,c,d} }</p> <p>And I see that it only takes seeing any 2 quizzes to see all 4 members of the master set. So knowing that the number of combinations (binomial coefficient!) is not equivalent to the number of unique appearances of the master-set, I'd like to know the actual math involved to show the client that while they have a ton of quizzes, it only takes <span class="math-container">$x$</span> to know all members.</p> <p>Thanks as always.</p> <h2>Addendum</h2> <p>A bit more research has introduced me to the NP-complete problem known as Exact Cover, which would be (if I'm reading it right) a precise set of subsets which have a union equal to the original master-set.
I just want to clarify that this constraint of perfect overlap is not necessary for my question, only the minimum number of subsets that would result in a union that has all master-set members, regardless of repetition, in order to demonstrate how many subsets are needed to know the original set (with the assumption that the seeker of the master-set knows the total membership count). I tweaked my micro-experiment from <span class="math-container">$\binom{4}{3}$</span> to <span class="math-container">$\binom{4}{2}$</span> resulting in 6 combinations and the ability to derive the master-set no longer being possible with a specific number of arbitrary subsets. Instead I get:</p> <p>{a, b, c, d} = { {ab} ; {ac} ; {ad} ; {bc} ; {bd} ; {cd} }</p> <p>which could derive the master set using the first three (<span class="math-container">$a$</span>) groups, or the exact cover of <span class="math-container">${ {a,b}; {c,d} }$</span>. This has me thinking that the minimum subsets needed to derive the original set is equal to the number of subsets where any given member occurs (so in this case 3 <span class="math-container">$a$</span>s, but this doesn't match up to the <span class="math-container">$\binom{4}{3}$</span>, where it can be found with 2 subsets. 
The next obvious solution (to me) is that the minimum number needed to derive the master-set (blindly) is half of the total number of subsets, but I would really want a link to a proof or a simple-english demonstration on how a pool of 20 questions would require 7752 subsets to know with certainty that all 20 members have appeared at least once.</p> <p>Again, thanks.</p> <h2>Question as Probability:</h2> <p>I have a bag of Scrabble tiles and I know the following:</p> <ol> <li>The bag contains 20 tiles,</li> <li>Each tile is unique (no two tiles have the same character),</li> <li>The tiles come from a much larger (and otherwise irrelevant) set of an expansion set including numbers and non-Roman alphabet characters, thus removing any advantage of knowing that this set of 20 comes from a larger-but-limited set (in other words, the characters are only informative to each other and I may get all Klingon or a mix of Chinese and Tamil. I should not assume anything about the set other than what is in the bag).</li> </ol> <p>I am allowed to perform the following steps in the order given as many times as I want:</p> <ol> <li>Pull out 5 tiles,</li> <li>Write down the characters drawn,</li> <li>Return the tiles to the bag.</li> <li>Lather, Rinse, Repeat.</li> </ol> <p>Also: I have magical fingers that prevent me from drawing the same set of 5 twice, thus reducing the number of draws from infinity to 15504 possible draws.</p> <p>My objective is to have all 20 characters written down eventually and then stop drawing characters.</p> <p>I know that the total number of unique combinations I could draw is <span class="math-container">$\binom{20}{5}$</span> which is 15504. I also know that the minimum draws required is equal to <span class="math-container">$\lceil{20}/{5}\rceil$</span>, which would be very lucky. What I am interested in is the maximum number of draws required to reveal all 20 characters.</p>
Peter Smith
35,151
<p>If it's the very idea of (fairly) rigorous proof that is bugging you, then can I warmly suggest looking at the excellent</p> <blockquote> <p>Daniel J. Velleman, <em>How to Prove it: A Structured Approach</em> (CUP, 1994 and much reprinted, and now into a second edition).</p> </blockquote> <p>From the blurb: "Many students have trouble the first time they take a mathematics course in which proofs play a significant role. This new edition of Velleman's successful text will prepare students to make the transition from solving problems to proving theorems by teaching them the techniques needed to read and write proofs." Which is, from what you say, exactly what you need.</p>
2,414,011
<p>In my recent work in PDEs, I'm interested in finding a family of cut-off functions satisfying the following properties:</p> <p>For each $\varepsilon &gt;0$, find a function ${\psi _\varepsilon } \in {C^\infty }\left( \mathbb{R} \right)$ which is a non-decreasing function on $\mathbb{R}$ such that:</p> <ol> <li>${\psi _\varepsilon }\left( x \right) = \left\{ {\begin{array}{*{20}{l}} {0 \mbox{ if } x \le \varepsilon ,}\\ {1\mbox{ if } x \ge 2\varepsilon ,} \end{array}} \right.$ and</li> <li>The function $x \mapsto x{\psi _\varepsilon }'\left( x \right)$ is bounded uniformly with respect to $\varepsilon$ as $\varepsilon \to 0$.</li> </ol> <p>The main problem here is ${\psi _\varepsilon }'\left( x \right) \to \infty $ for some $x \in \left( {\varepsilon ,2\varepsilon } \right)$ as $\varepsilon \to 0$. I also started with <a href="https://en.wikipedia.org/wiki/Non-analytic_smooth_function" rel="nofollow noreferrer">this function</a> to define ${\psi _\varepsilon }$ explicitly in the interval $\left( {\varepsilon ,2\varepsilon } \right)$, but my attempts to adjust the referenced function failed. </p> <p>Can you find an example of these cut-off functions?</p> <p>Thanks in advance.</p>
Joey Zou
260,918
<p>Let $\psi\in C^{\infty}(\mathbb{R})$ be any non-decreasing smooth function satisfying $\psi(x) = 0$ for $x\le 1$ and $\psi(x) = 1$ for $x\ge 2$, and set $\psi_{\epsilon}(x) = \psi(x/\epsilon)$, so that $\psi_{\epsilon}(x) = 0$ for $x\le\epsilon$ and $\psi_{\epsilon}(x) = 1$ for $x\ge 2\epsilon$. Then $\psi_{\epsilon}'(x) = \frac{1}{\epsilon}\psi'(\frac{x}{\epsilon})$. Notice that $\psi'$ is zero outside $[1,2]$, and hence $$\max\limits_{x\in\mathbb{R}}{|x\psi_{\epsilon}'(x)|} = \max\limits_{x\in\mathbb{R}}{\left|\frac{x}{\epsilon}\psi'\left(\frac{x}{\epsilon}\right)\right|} = \max\limits_{\frac{x}{\epsilon}\in[1,2]}{\left|\frac{x}{\epsilon}\psi'\left(\frac{x}{\epsilon}\right)\right|} = \max\limits_{x\in[1,2]}{|x\psi'(x)|}, $$ i.e. $x\mapsto x\psi_{\epsilon}'(x)$ is uniformly bounded in $\epsilon$.</p>
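The uniform bound can be sanity-checked numerically with a concrete choice of $\psi$ (the standard bump-based transition function; the grid size, finite-difference step, and tolerance below are my own arbitrary choices):

```python
import math

def h(t):
    return math.exp(-1.0 / t) if t > 0 else 0.0

def psi(u):
    # Smooth non-decreasing transition: 0 for u <= 1, 1 for u >= 2.
    return h(u - 1.0) / (h(u - 1.0) + h(2.0 - u))

def max_x_dpsi(eps, n=2000):
    # max over x in [eps, 2*eps] of |x * psi_eps'(x)|, with psi_eps(x) = psi(x/eps),
    # using a central difference whose step scales with eps.
    best = 0.0
    delta = 1e-6 * eps
    for i in range(n + 1):
        x = eps * (1.0 + i / n)
        dpsi = (psi((x + delta) / eps) - psi((x - delta) / eps)) / (2.0 * delta)
        best = max(best, abs(x * dpsi))
    return best

m1 = max_x_dpsi(0.5)
m2 = max_x_dpsi(0.005)
print(m1, m2)   # essentially equal: the bound does not depend on eps
assert m1 > 0 and abs(m1 - m2) < 1e-6 * m1
```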
69,948
<p>Has anyone ever created a "pairing function" (possibly non-injective) with the property of being nondecreasing with respect to the product of its arguments, for integers n>=2, m>=2? (We can also assume that n and m are bounded by an integer K, if useful):</p> <p>n m > n' m' => p(n,m) > p(n',m') </p> <p>If so, what does it look like? Does it have a name? </p> <p>-Luna</p>
ma11hew28
48,031
<p>How about these unordered pairing functions?</p> <p>For positive integers as arguments and where argument order doesn't matter:</p> <ol> <li><p>Here's an <a href="http://www.mattdipasquale.com/blog/2014/03/09/unique-unordered-pairing-function/" rel="nofollow">unordered pairing function</a>:</p> <p>$&lt;x, y&gt; = x * y + trunc(\frac{(|x - y| - 1)^2}{4}) = &lt;y, x&gt;$</p></li> <li><p>For x ≠ y, here's a <a href="http://www.mattdipasquale.com/blog/2014/03/09/unique-unordered-pairing-function/" rel="nofollow">unique unordered pairing function</a>:</p> <pre><code>&lt;x, y&gt; = if x &lt; y: x * (y - 1) + trunc((y - x - 2)^2 / 4) if x &gt; y: (x - 1) * y + trunc((x - y - 2)^2 / 4) = &lt;y, x&gt; </code></pre></li> </ol>
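For the $x \ne y$ formula, a short script can confirm injectivity on an initial range (a brute-force check over a small window, not a proof; the bound 60 is arbitrary):

```python
def pair(x, y):
    # Unique unordered pairing for distinct positive integers; symmetric because
    # the arguments are sorted first, matching the two-branch formula above.
    if x > y:
        x, y = y, x
    return x * (y - 1) + (y - x - 2) ** 2 // 4

values = {}
for x in range(1, 61):
    for y in range(x + 1, 61):
        v = pair(x, y)
        assert v not in values, (values.get(v), (x, y))
        values[v] = (x, y)
        assert pair(y, x) == v          # order does not matter

# On the anti-diagonals x + y <= 11 the values are exactly 1..25, consecutive,
# consistent with the "unique" (bijective) claim.
small = sorted(pair(x, y) for x in range(1, 11)
               for y in range(x + 1, 11) if x + y <= 11)
print(small == list(range(1, 26)), len(values))
```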
1,611,730
<p>I am a linguist, not a mathematician, so I apologize if there's something wrong with my terminology and/or notation.</p> <p>I have two structures that I want to merge (partially or completely). To generate a list of all possible combinations, I compute the Cartesian product of the two sets of objects, which gives me a set of pairs, and then I compute the [1, ..., <em>n</em>]-fold Cartesian product of my set of pairs with itself where <em>n</em> is the highest cardinality out of the two structures (here, 5).</p> <p><a href="https://i.stack.imgur.com/WO79P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WO79P.png" alt="Two structures with 5 objects each"></a></p> <pre><code>A = (a1, a2, a3, a4, a5) and B = (b1, b2, b3, b4, b5) </code></pre> <p>Basically, I'm generating 1-tuples like ((a1, b1)), ((a1, b2)), ..., ((a5, b5)) that merge one pair of objects, 2-tuples that merge two pairs of objects, etc. up to <em>n</em>-tuples. I end up with $\sum\limits_{i=1}^{n} (\bar{\bar{A}}\times \bar{\bar{B}})^i$ tuples, where $\bar{\bar{A}}$ and $\bar{\bar{B}}$ represent the cardinalities of the two structures.</p> <p>In this case, I get 10172525 tuples, which is way too high for my needs. I filter my list to only keep tuples if the pairs they contain are all different and cannot be found in another tuple with the same length. This removes up to 99% of the original tuples. For example:</p> <p>((a1, b1), (a2, b2), (a3, b3))<br> ((a1, b1), (a3, b3), (a2, b2)) has the same pairs as the preceding tuple, but in a different order<br> ((a1, b1), (a1, b1), (a3, b3)) has the pair (a1, b1) more than once</p> <p>I'm looking for an equation that will help me predict the number of unique tuples. For 1-tuples, there's $\bar{\bar{P}}$ unique tuples where $\bar{\bar{P}}$ is the number of pairs. For 2-tuples, I do $\frac{\bar{\bar{P}}\times (\bar{\bar{P}}-1)}{2}$. 
I'm sure there's an equation that works for any tuple length, but I can't figure it out.</p> <p>Here are the numbers of tuples generated vs. unique tuples for two structures with 4 objects each:</p> <pre><code>+--------+-----------+--------+ | Length | generated | unique | +--------+-----------+--------+ | 1 | 16 | 16 | | 2 | 256 | 120 | | 3 | 4096 | 560 | | 4 | 65536 | 1820 | | Total | 69904 | 2516 | +--------+-----------+--------+ </code></pre>
bof
111,012
<p>If $P$ is an $n$-element set ($\bar{\bar P}=n$) then the number of $k$-element subsets of $P$ (that's unordered subsets, no repetitions) is given by the <a href="https://en.wikipedia.org/wiki/Binomial_coefficient" rel="nofollow">binomial coefficient</a> $$\binom nk=\frac{n!}{k!(n-k)!}=\frac{n(n-1)(n-2)\cdots(n-k+1)}{k!}.$$</p>
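This reproduces the questioner's table for the $4\times 4$ example, where $P$ has $n = 16$ pairs (a quick check with the standard library):

```python
from math import comb

n = 16  # 4 objects x 4 objects = 16 possible pairs
unique = [comb(n, k) for k in range(1, 5)]
print(unique, sum(unique))      # [16, 120, 560, 1820] 2516
assert unique == [16, 120, 560, 1820]
assert sum(unique) == 2516

# And for the 20-question pool with 5-question quizzes mentioned earlier:
assert comb(20, 5) == 15504
```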
2,094,596
<p>I'm questioning myself as to why indeterminate forms arise, and why limits that apparently give us indeterminate forms can be resolved with some arithmetic tricks. Why $$\begin{equation*} \lim_{x \rightarrow +\infty} \frac{x+1}{x-1}=\frac{+\infty}{+\infty} \end{equation*} $$</p> <p>and if I do a simple operation,</p> <p>$$\begin{equation*} \lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(1-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(1-\frac{1}{x})}=1 \end{equation*} $$</p> <p>I understand the logic of the process, but I can't understand why we get different results by "not" changing anything.</p>
Olivier Oloa
118,798
<p>An <em>indeterminate form</em> just means that we have to take a closer look to understand what happens. Taking a variant of your example, we have $$ "\lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\frac{+\infty}{+\infty}" $$ then <em>seeing things in more detail</em>: $$ \lim_{x \rightarrow +\infty} \frac{x+1}{2x-1}=\lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(2-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(2-\frac{1}{x})}=\frac12. $$ The indeterminate form is the same, the result is different. An <em>indeterminate form</em> means the result is <em>not</em> automatic, many results are possible.</p>
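The "closer look" is also visible numerically: evaluating at larger and larger $x$ shows the ratio settling at $1/2$, not at some automatic value (a quick illustration):

```python
def f(x):
    return (x + 1) / (2 * x - 1)

for x in (10.0, 1e3, 1e6, 1e8):
    print(x, f(x))      # approaches 1/2 as x grows

assert abs(f(1e8) - 0.5) < 1e-7
```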
1,040,136
<p>Just a quick question:</p> <p>Is the size of the set of real numbers from 1 to 2 greater, or equal in size to the number of real numbers between 1 and 10?</p> <p>I'm a Physicist so I'm not totally clued up on Mathematical jargon pertaining to set theory...</p>
erfan soheil
195,909
<p>If $ a &lt; b$ and $c &lt; d $, then $card ([a,b]) = card ([c,d])$.</p> <p>Proof: define $ f :[a,b]\to [c,d]$ by</p> <p>$f(x)= \frac{d-c}{b-a}x- \frac{a(d-c)-c(b-a)}{b-a}$. This is an increasing affine map with $f(a)=c$ and $f(b)=d$, hence a bijection, so the proof is complete.</p>
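The affine map can be checked with exact rational arithmetic; here is a sketch for $a=1$, $b=2$, $c=1$, $d=10$, matching the $[1,2]$ versus $[1,10]$ question:

```python
from fractions import Fraction as F

a, b, c, d = F(1), F(2), F(1), F(10)

def f(x):
    # f(x) = (d - c)/(b - a) * x - (a(d - c) - c(b - a))/(b - a)
    return (d - c) / (b - a) * x - (a * (d - c) - c * (b - a)) / (b - a)

assert f(a) == c and f(b) == d                           # endpoints map to endpoints
xs = [a + (b - a) * F(i, 10) for i in range(11)]
assert all(f(xs[i]) < f(xs[i + 1]) for i in range(10))   # strictly increasing => injective
print("f maps [1,2] onto [1,10] bijectively, slope", (d - c) / (b - a))
```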
1,721,565
<p>I'm having trouble seeing what is going on with the chain rule below. I have tried to show my working as much as possible for you to better understand my issue here.</p> <p>So:</p> <p>Find $dy/dx$ for $y=(x^2-x)^3$ <br> Bringing the power to the front gives $3(x^2-x)^2 * (2x-1)$</p> <p>Where did the $-1$ come from in $2x-1$?</p> <p>How did they get that? </p> <p>Thanks!</p>
Santiago
326,828
<p>Since $(x^n)' = n x^{n-1}$, we have $(x^2)' = 2x$ and $x' = 1$; since differentiation of functions is additive, $(x^2 - x)' = 2x - 1$. The $-1$ is just the derivative of the inner function's $-x$ term, which the chain rule tells you to multiply by.</p>
1,555,548
<p>There are $8$ people and they want to sit in a bus which has $2$ single front seats and $4$ sets of $3$ seats with $1$ person that is always the designated driver. How many ways are there for the people to sit in the bus?</p> <p>I solved it by using:</p> <p>$6!*(\binom{9}{3}) - 4((6*5*4*3)*2(\binom{4}{2})+(6*5*4*3*2)(\binom{3}{2}) + 6!(\binom{2}{2})) + 7!*(\binom{10}{3})-4((7*6*5*4)*3!*(\binom{5}{2})+(7*6*5*4*3)*2!*(\binom{4}{2})+(7*6*5*4*3*2)(\binom{3}{2}) + 7!(\binom{2}{2})) = \boxed{233280}$</p> <p>I did complementary counting and took out the cases where there were more than $3$ people in a set of rows. Can anyone tell me if my answer is right?</p>
DJohnM
58,220
<p>Looking at another, viable(?) interpretation:</p> <p>Take the one driver and put him/her in the driver's seat (one of the two single front seats). That leaves seven <strong>distinguishable</strong> people and $13$ <strong>distinguishable seats</strong>.</p> <p>Add six <strong>identical</strong> empty boxes to the seven passengers. Arrange these $13$ items in $13!$ ways. Then divide by $6!$ to eliminate double counting from switching around the empty boxes..</p> <p>These $13$ items file onto the bus in some constant, defined way. The empty boxes take up empty seats.</p>
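Under this interpretation the count is $13!/6! = P(13,7)$, which can be cross-checked against a brute-force count on a smaller, hypothetical instance (3 passengers, 4 seats):

```python
from math import factorial, perm
from itertools import permutations

# 7 distinguishable passengers into 13 distinguishable seats:
assert factorial(13) // factorial(6) == perm(13, 7) == 8648640

# Sanity check of the "identical empty boxes" argument on a small case:
# 3 passengers + 1 empty box arranged in 4! ways, divided by 1! box swaps.
brute = sum(1 for _ in permutations(range(4), 3))
assert brute == perm(4, 3) == factorial(4) // factorial(1) == 24

print("13!/6! =", perm(13, 7))
```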
52,364
<p>In addition to my previous post, <a href="https://mathematica.stackexchange.com/questions/52295/problem-regarding-3d-plot-of-a-moebius-strip-from-a-set-of-2d-points">regarding plotting the surface of a Möbius strip</a>, I now realised, that some of the eigenmodes for a Möbius strip are either oscillations of a scalar field OR a vector field, meaning that not all eigenmodes of this object can be shown in 3D as wobbles in the direction normal to the surface, because the Möbius strip is not orientable, like a cylinder is.</p> <p>Now that I have learned this fact, I have to do the following:</p> <ol> <li><p>Represent all <strong>transversal</strong> oscillations as wobbles in the direction normal to the surface. This means that these are oscillations of a vector field with only one component. This has been successfully done with the help in my previous question.</p></li> <li><p>I need to represent all the <strong>longitudinal</strong> oscillations as oscillations of a scalar field, which is usually done by showing changes of color on an object, like shown in this example for a cylinder: <img src="https://i.stack.imgur.com/egcXZ.png" alt="cylinder example"></p></li> </ol> <p><strong>Problem</strong></p> <p>I would like to plot 4D data <span class="math-container">$(x,y,z,F)$</span> as a set of polygons that have been made using <code>DelaunayTriangulation[]</code> of a 2D mesh in 3D using <code>Graphics3D</code>for the case of a Möbius strip, where <span class="math-container">$(x,y,z)$</span> are the data points and <span class="math-container">$F$</span> is the color.</p> <p>In my previous question I noted that <code>ListSurfacePlot3D</code> and <code>ListPlot3D</code> are not working properly for the case of the Möbius strip. The solution was to plot the strip as a set of polygons, generated by the mesh. 
A uniformly colored polygon surface looks like this: <img src="https://i.stack.imgur.com/cc4b3.png" alt="Möbius strip"></p> <p><strong>What I want</strong></p> <p>I would like to color these polygons with the color equal to the average value of <span class="math-container">$F$</span> for the surrounding 3 points.</p> <p><strong>Here is the provided data and code</strong></p> <p>Original 2D data of a rectangular <span class="math-container">$[0,1]\times[0,1]$</span> membrane:</p> <pre><code>data2D = Import["https://pastebin.com/raw/0Liw8F1r", "NB"]; </code></pre> <p>Transformed 4D data parametrised as a Möbius strip with color data as the 4th column:</p> <pre><code>data4D = Import["https://pastebin.com/raw/zgrCRiQh", "NB"]; </code></pre> <p>The code to draw the strip without the color data <span class="math-container">$F$</span>:</p> <pre><code>tri = First @ Cases[ListDensityPlot[Join[#, {0}] &amp; /@ data2D], Polygon[idx_] :&gt; idx, Infinity]; Graphics3D[GraphicsComplex[data4D[[;; , 1 ;; 3]], {Yellow, EdgeForm[], Polygon /@ tri}], Boxed -&gt; False, AspectRatio -&gt; 1, BoxRatios -&gt; Automatic, SphericalRegion -&gt; True, ImageSize -&gt; Large, PlotRange -&gt; 0.9 {{-1, 1}, {-1, 1}, {-0.5, 0.5}}] </code></pre> <p>I tried to find some answers before posting the question, but all solutions had either discrete points or smooth plots in 3D, and I need to color the specific polygons.</p> <p>Thank you all for your help, I hope that the problem is clearly stated.</p>
chuy
237
<p>Something like this?</p> <pre><code>data2D = Import["http://pastebin.com/raw.php?i=0Liw8F1r", "NB"]; data4D = Import["http://pastebin.com/raw.php?i=zgrCRiQh", "NB"]; </code></pre> <p>Find <code>min</code> and <code>max</code> values for the color data</p> <pre><code>{min, max} = {Min[#], Max[#]} &amp;@data4D[[All, -1]]; </code></pre> <p>Now, I place a color in front of each polygon (like {color1, polygon1, color2, polygon2, ...}) when specifying the graphics used by <code>GraphicsComplex</code>. The color1 is the average of the color data (according to <code>data4D</code>) specified at each vertex of polygon1.</p> <pre><code>Labeled[Graphics3D[ GraphicsComplex[ data4D[[;; , 1 ;; 3]], {EdgeForm[], Map[{ColorData["DarkRainbow"][ Rescale[Mean[Part[data4D, #1, -1]], {min, max}]], Polygon[#1]} &amp;, tri]}], Boxed -&gt; False, AspectRatio -&gt; 1, BoxRatios -&gt; Automatic, SphericalRegion -&gt; True, PlotRange -&gt; 0.9 {{-1, 1}, {-1, 1}, {-0.5, 0.5}}, ImageSize -&gt; Large], BarLegend[{"DarkRainbow", {min, max}}], Right] </code></pre> <p><img src="https://i.stack.imgur.com/DBBcZ.png" alt="enter image description here"></p>
52,364
<p>In addition to my previous post, <a href="https://mathematica.stackexchange.com/questions/52295/problem-regarding-3d-plot-of-a-moebius-strip-from-a-set-of-2d-points">regarding plotting the surface of a Möbius strip</a>, I now realised, that some of the eigenmodes for a Möbius strip are either oscillations of a scalar field OR a vector field, meaning that not all eigenmodes of this object can be shown in 3D as wobbles in the direction normal to the surface, because the Möbius strip is not orientable, like a cylinder is.</p> <p>Now that I have learned this fact, I have to do the following:</p> <ol> <li><p>Represent all <strong>transversal</strong> oscillations as wobbles in the direction normal to the surface. This means that these are oscillations of a vector field with only one component. This has been successfully done with the help in my previous question.</p></li> <li><p>I need to represent all the <strong>longitudinal</strong> oscillations as oscillations of a scalar field, which is usually done by showing changes of color on an object, like shown in this example for a cylinder: <img src="https://i.stack.imgur.com/egcXZ.png" alt="cylinder example"></p></li> </ol> <p><strong>Problem</strong></p> <p>I would like to plot 4D data <span class="math-container">$(x,y,z,F)$</span> as a set of polygons that have been made using <code>DelaunayTriangulation[]</code> of a 2D mesh in 3D using <code>Graphics3D</code>for the case of a Möbius strip, where <span class="math-container">$(x,y,z)$</span> are the data points and <span class="math-container">$F$</span> is the color.</p> <p>In my previous question I noted that <code>ListSurfacePlot3D</code> and <code>ListPlot3D</code> are not working properly for the case of the Möbius strip. The solution was to plot the strip as a set of polygons, generated by the mesh. 
A uniformly colored polygon surface looks like this: <img src="https://i.stack.imgur.com/cc4b3.png" alt="Möbius strip"></p> <p><strong>What I want</strong></p> <p>I would like to color these polygons with the color equal to the average value of <span class="math-container">$F$</span> for the surrounding 3 points.</p> <p><strong>Here is the provided data and code</strong></p> <p>Original 2D data of a rectangular <span class="math-container">$[0,1]\times[0,1]$</span> membrane:</p> <pre><code>data2D = Import["https://pastebin.com/raw/0Liw8F1r", "NB"]; </code></pre> <p>Transformed 4D data parametrised as a Möbius strip with color data as the 4th column:</p> <pre><code>data4D = Import["https://pastebin.com/raw/zgrCRiQh", "NB"]; </code></pre> <p>The code to draw the strip without the color data <span class="math-container">$F$</span>:</p> <pre><code>tri = First @ Cases[ListDensityPlot[Join[#, {0}] &amp; /@ data2D], Polygon[idx_] :&gt; idx, Infinity]; Graphics3D[GraphicsComplex[data4D[[;; , 1 ;; 3]], {Yellow, EdgeForm[], Polygon /@ tri}], Boxed -&gt; False, AspectRatio -&gt; 1, BoxRatios -&gt; Automatic, SphericalRegion -&gt; True, ImageSize -&gt; Large, PlotRange -&gt; 0.9 {{-1, 1}, {-1, 1}, {-0.5, 0.5}}] </code></pre> <p>I tried to find some answers before posting the question, but all solutions had either discrete points or smooth plots in 3D, and I need to color the specific polygons.</p> <p>Thank you all for your help, I hope that the problem is clearly stated.</p>
J. M.'s persistent exhaustion
50
<p>Nowadays, one can use <code>DelaunayMesh[]</code> directly for mesh triangulation instead of <code>ListDensityPlot[]</code>:</p> <pre><code>data2D = Import["http://pastebin.com/raw.php?i=0Liw8F1r", "NB"]; data4D = Import["http://pastebin.com/raw.php?i=zgrCRiQh", "NB"]; pts = Drop[data4D, None, -1]; dm = DelaunayMesh[data2D]; tri = First /@ MeshCells[dm, 2]; nrms = Table[Flatten[Last[SingularValueDecomposition[Standardize[pts[[t]], Mean, 1 &amp;], -1, Tolerance -&gt; 0]]], {t, tri}]; vc = ColorData["DarkRainbow"] /@ Rescale[data4D[[All, -1]]]; vn = Table[Normalize[Mean[nrms[[id]]]], {id, dm["VertexFaceConnectivity"]}]; Graphics3D[GraphicsComplex[pts, {EdgeForm[], Polygon[tri]}, VertexColors -&gt; vc, VertexNormals -&gt; vn], Boxed -&gt; False, Lighting -&gt; "Neutral", PlotRange -&gt; 0.9 {{-1, 1}, {-1, 1}, {-0.5, 0.5}}, SphericalRegion -&gt; True] </code></pre> <p><img src="https://i.stack.imgur.com/BcFYN.png" alt="colored Möbius strip"></p> <p>The clash of bright and dark colors is due to the nonorientability, which is manifested in the directions of the estimated normals. (See <a href="https://mathematica.stackexchange.com/a/14218">this answer</a> for another illustration.)</p>
2,774,923
<blockquote> <p>$ABC$ is a triangle where $AE$ and $EB$ are angle bisectors, $|EC| = 5$, $|DE| = 3$, $|AB| = 9$. Find the perimeter of the triangle $ABC$. <a href="https://i.stack.imgur.com/4nsiM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4nsiM.jpg" alt="enter image description here"></a></p> </blockquote> <p>I realized that the length of the side $|DC| = 8$. In the $\triangle BEC$, we have a special $3-4-5$ triangle. This is where I'm stuck. Could you share your thoughts? </p>
Intelligenti pauca
255,730
<p>A simpler proof that such a triangle cannot exist, as also claimed by @g.kov.</p> <p>The <a href="https://www.cut-the-knot.org/Curriculum/Geometry/LocusCircle.shtml" rel="nofollow noreferrer">locus of points</a> $P$ such that $PC/PD=5/3$ is a circle of radius $7.5$ passing through $E$ and with center on line $CD$. Hence $A$ and $B$ must belong to that circle and $AB$ is a chord passing through $D$.</p> <p>But the shortest such chord is the one perpendicular to $CD$ ($HI$ in diagram below), whose length is $12$. Hence $AB$ cannot have length $9$ as stated in the problem.</p> <p><a href="https://i.stack.imgur.com/K6l7q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K6l7q.png" alt="enter image description here"></a></p>
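Placing $C=(0,0)$ and $D=(8,0)$, the Apollonius circle has center $(12.5, 0)$ and radius $7.5$, and the claims can be verified numerically (the coordinates are my own choice, not taken from the original figure):

```python
from math import cos, sin, pi, hypot

cx, r = 12.5, 7.5          # Apollonius circle for PC/PD = 5/3 with C=(0,0), D=(8,0)
C, D = (0.0, 0.0), (8.0, 0.0)

for k in range(360):
    t = 2 * pi * k / 360
    P = (cx + r * cos(t), r * sin(t))
    PC = hypot(P[0] - C[0], P[1] - C[1])
    PD = hypot(P[0] - D[0], P[1] - D[1])
    assert abs(PC / PD - 5.0 / 3.0) < 1e-9   # every point of the circle has ratio 5:3

# E = (5, 0) lies on the circle, with EC = 5 and ED = 3:
assert abs(hypot(5 - cx, 0) - r) < 1e-12

# Shortest chord through D is perpendicular to CD; its half-length is
# sqrt(r^2 - dist(center, D)^2) = sqrt(56.25 - 20.25) = 6, so the chord is 12 > 9.
half = (r**2 - (cx - D[0])**2) ** 0.5
print("shortest chord through D:", 2 * half)
assert abs(2 * half - 12.0) < 1e-9
```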
1,192,357
<blockquote> <p>What is the limit of $f(a,b) =\frac{a^\beta}{a^2 + b^2}$ as $(a,b) \to (0,0)$?</p> </blockquote> <p>Clearly the answer depends on the value of $\beta$. For $\beta &gt; 2$, we can deduce via inequalities that $\lim_{(a,b) \to (0,0)} f(a,b) = 0$.</p> <p>However, for $\beta &lt; 0$, the answer is less obvious. How would you approach this case?</p>
LaBird
220,474
<p><strong>Added: Assume $\beta$ is an integer.</strong></p> <p>Case $1$: If $\beta &gt; 2$, then the numerator is more dominant than the denominator, hence the limit of $f(a, b)$ when $(a, b) \rightarrow (0,0)$ is $0$.</p> <p>Case $2$: If $\beta &lt; 2$, then the denominator is more dominant than the numerator, and since $a^2 + b^2$ is always positive but $a^\beta$ can be positive or negative, there are $2$ cases: </p> <ul> <li><p>Case $2a$: When $\beta$ is even, $a^\beta$ is always positive. So the limit of $f(a, b)$ when $(a, b) \rightarrow (0,0)$ is $+\infty$.</p></li> <li><p>Case $2b$: When $\beta$ is odd, $a^\beta$ follows the sign of $a$. So the limit of $f(a, b)$ when $(a, b) \rightarrow (0,0)$ does not exist.</p></li> </ul> <p>Case $3$: If $\beta = 2$, then the limit does not exist: along the path $b = 0$ we get $f(a, 0) = 1$, along $a = 0$ we get $f(0, b) = 0$, and along $b = a$ we get $\frac{1}{2}$. Since different paths give different values, $f(a, b)$ has no limit when $(a, b) \rightarrow (0,0)$.</p> <p><strong>Added</strong>: If $\beta$ is not an integer, then $a^\beta$ is undefined when $a &lt; 0$, so the limit of $f(a, b)$ when $(a, b) \rightarrow (0,0)$ does not exist.</p>
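The borderline case $\beta = 2$ is the delicate one and is worth probing numerically along several paths (a quick sketch):

```python
def f(a, b):
    return a**2 / (a**2 + b**2)

for t in (1e-1, 1e-3, 1e-6):
    along_b0 = f(t, 0.0)   # path b = 0: always 1
    along_a0 = f(0.0, t)   # path a = 0: always 0
    along_ab = f(t, t)     # path b = a: always 1/2
    print(t, along_b0, along_a0, along_ab)
    assert along_b0 == 1.0 and along_a0 == 0.0 and along_ab == 0.5

# Different paths give different values, so the two-variable limit at (0,0)
# does not exist when beta = 2.
```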
1,001,320
<p>I was wondering how to do an inequality problem involving QM-AM-GM-HM.</p> <p>Question: For positive $a$, $b$, $c$ such that $\frac{a}{2}+b+2c=3$, find the maximum of $\min\left\{ \frac{1}{2}ab, ac, 2bc \right\}$.</p> <p>I was thinking maybe apply AM-GM, however, I'm not sure what to plug in. Any help would be appreciated, thanks!</p>
vadim123
73,324
<p>Applying AM-GM $$1=\frac{(a/2)+b+2c}{3}\ge \sqrt[3]{(a/2)b(2c)}=\sqrt[3]{abc}$$ We square both sides to get $$\sqrt[3]{a^2b^2c^2}\le 1 ~~~~~~(1)$$</p> <p>Now suppose $\min\{ab/2,ac,2bc\}=k$. We take the geometric mean of these three: $$\sqrt[3]{a^2b^2c^2}=\sqrt[3]{(ab/2)ac(2bc)}\ge \sqrt[3]{k^3}=k ~~~~~~~(2)$$</p> <p>Combining (1) and (2), we get $k\le 1$. In fact $k=1$ can be achieved, by taking $a=2, b=1, c=0.5$. Hence the desired answer is $1$.</p>
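A random search over the constraint surface supports the bound and the equality case (a numerical sanity check; the sampling scheme and tolerance are my own):

```python
import random

random.seed(0)
best = 0.0
for _ in range(100_000):
    b = random.uniform(0, 3)
    c = random.uniform(0, 1.5)
    if b + 2 * c >= 3:
        continue
    a = 2 * (3 - b - 2 * c)          # enforce a/2 + b + 2c = 3 with a > 0
    k = min(a * b / 2, a * c, 2 * b * c)
    assert k <= 1 + 1e-12
    best = max(best, k)

# The bound k = 1 is attained at (a, b, c) = (2, 1, 1/2):
a, b, c = 2.0, 1.0, 0.5
assert a / 2 + b + 2 * c == 3.0
assert min(a * b / 2, a * c, 2 * b * c) == 1.0
print("max of min over samples:", best)
```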
2,026,486
<p>Let $h,g$ be <em>not</em> injective functions, can the function $f:\mathbb{R}\rightarrow \mathbb{R}^2$ such that $f(x) = (h(x), g(x))$ be injective?</p> <p>I know that if I pick the polynomials $h$ and $g$ carefully, $f$ may fail to be injective. For example, I can check whether the zeros of the polynomials collide or not, but I am really not sure whether there exists some other, nontrivial counterexample to disprove injectivity of $f$.</p>
marco2013
79,890
<p>Let $h(x)=x^2$, and $g(x)=(x+1)^2$. $h$ and $g$ are not injective.</p> <p>But,if $f(x)=f(y)$, we have $x^2=y^2$ and $(x+1)^2=(y+1)^2$.</p> <p>So, $(x+1)^2-x^2-1=(y+1)^2-y^2-1$.</p> <p>And then $2x=2y$. So $x=y$.</p> <p>So, $f$ is injective.</p>
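The example can be checked mechanically on a range of integers (a quick brute-force confirmation):

```python
# h(x) = x^2 and g(x) = (x+1)^2 are each non-injective (h(-1) = h(1), g(-2) = g(0)),
# yet x -> (h(x), g(x)) is injective: the pairs below are pairwise distinct.
xs = range(-200, 201)
pairs = [(x * x, (x + 1) ** 2) for x in xs]
assert len(set(pairs)) == len(pairs)
assert (-1) ** 2 == 1 ** 2 and (-2 + 1) ** 2 == (0 + 1) ** 2   # h, g not injective
print("f = (h, g) is injective on", len(pairs), "sample points")
```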
4,521,199
<blockquote> <p><strong>Theorem 8.15</strong>: If <span class="math-container">$f$</span> is a continuous and <span class="math-container">$2\pi$</span>-periodic function and if <span class="math-container">$\epsilon&gt;0$</span> is fixed, then there exists a trigonometric polynomial <span class="math-container">$P$</span> such that <span class="math-container">$$\left|P(x)-f(x)\right|&lt;\epsilon$$</span> for all real <span class="math-container">$x$</span>.<br /> <em>Proof</em>: If we identify <span class="math-container">$x$</span> and <span class="math-container">$x+2\pi$</span>, we may regard the <span class="math-container">$2\pi$</span>-periodic functions on <span class="math-container">$\mathbb{R}^1$</span> as functions on the unit circle <span class="math-container">$T$</span>, by means of the mapping <span class="math-container">$x\rightarrow e^{ix}$</span>. The trigonometric polynomials, i.e., the functions of the form <span class="math-container">$$Q(x)=\sum^N_{-N} c_ne^{inx}\qquad(x\mbox{ real})$$</span> form a self-adjoint algebra <span class="math-container">$\mathscr{A}$</span>, which separates points on <span class="math-container">$T$</span>, and which vanishes at no point of <span class="math-container">$T$</span>. Since <span class="math-container">$T$</span> is compact, the Stone-Weierstrass theorem tells us that <span class="math-container">$\mathscr{A}$</span> is dense in <span class="math-container">$\mathscr{C(T)}$</span>. This is exactly what the theorem asserts.</p> </blockquote> <p>I have already seen the proof of this theorem discussed in the site, however I have never found a satisfactory explanation of the passages omitted by Rudin (more advanced tools are usually used to answer the existing questions but that does not help). 
I have tried proposing my own proof; however, I immediately realized it was flawed (it implicitly assumed the map <span class="math-container">$z \to e^{ix}, x \in [0,2\pi[$</span> to be continuous).</p> <p>I think that Rudin is being a little too elliptic here. In fact, I interpreted his first statement as &quot;we can define a continuous function g on <span class="math-container">$S^1$</span> such that: <span class="math-container">$$g(z) = f(x), \hspace{5mm} \text{$x$ is the unique element of $[0,2\pi[$ such that $z=e^{ix}$ "}$$</span> and then the rest of the proof proceeds smoothly. My problem is that there is a non-trivial property of <span class="math-container">$g$</span> that must be proved in order to make use of the Stone-Weierstrass Theorem, that is, that <span class="math-container">$g$</span> is continuous. Rudin doesn't even acknowledge this, and so far I have not managed to prove that it is actually continuous. Therefore I am not even sure that this is the right interpretation of what is written.</p> <p>If someone could give me some guidance that would be very appreciated!</p>
Paul Frost
349,785
<p>Define <span class="math-container">$$\phi : \mathbb R \to S^1, \phi(x) = e^{ix} = \cos x + i \sin x. $$</span> It is well-known that this map is a continuous surjection such that <span class="math-container">$\phi(x) =\phi(y)$</span> iff <span class="math-container">$x - y = 2k \pi$</span> for some <span class="math-container">$k \in \mathbb Z$</span>. It is moreover an <em>open map</em>. See for example <a href="https://math.stackexchange.com/q/3873485">Open sets on the unit circle $S^1$</a>. Hence <span class="math-container">$\phi$</span> is a <em>quotient map</em>. This implies that it induces a bijection between the set of continuous maps <span class="math-container">$f : \mathbb R \to X$</span> with period <span class="math-container">$2\pi$</span> and the set of continuous maps <span class="math-container">$\bar f : S^1 \to X$</span>. Here <span class="math-container">$X$</span> is any topological space.</p> <p><strong>Update:</strong></p> <p>If you want to avoid the concept of quotient map, you can argue as follows. It is clear that <span class="math-container">$\phi$</span> induces a bijection <span class="math-container">$$\phi_* : \mathcal P_{2\pi} (\mathbb R,X) \to \mathcal F (S^1,X) $$</span> where <span class="math-container">$X$</span> is any topological space, <span class="math-container">$\mathcal P_{2\pi} (\mathbb R,X)$</span> denotes the set of all (not necessarily continuous) functions <span class="math-container">$f :\mathbb R \to X$</span> with period <span class="math-container">$2\pi$</span> and <span class="math-container">$\mathcal F (S^1,X)$</span> denotes the set of all functions <span class="math-container">$g : S^1 \to X$</span>. In fact, <span class="math-container">$\phi_*(f)$</span> is the unique function such that <span class="math-container">$\phi_*(f) \circ \phi = f$</span>. We have <span class="math-container">$\phi_*(g \circ \phi) = g$</span> which shows that <span class="math-container">$\phi_*$</span> is surjective. 
Moreover, if <span class="math-container">$\phi_*(f_1) = \phi_*(f_2)$</span>, then <span class="math-container">$f_1 = \phi_*(f_1) \circ \phi = \phi_*(f_2) \circ \phi = f_2$</span> which shows that <span class="math-container">$\phi_*$</span> is injective.</p> <p>Let us prove the following useful</p> <p><strong>Lemma.</strong> <span class="math-container">$\psi : (-\pi/2,\pi/2) \stackrel{\phi}{\to} S^1_r$</span> is a homeomorphism onto the right open half-circle <span class="math-container">$S^1_r = \{z \in S^1 \mid \operatorname{Re} z &gt; 0 \}$</span>.</p> <p>Proof. <span class="math-container">$\psi$</span> is well-defined because <span class="math-container">$\psi(x) = \cos x + i\sin x$</span>. The map <span class="math-container">$p : S^1_r \to (-1,1), p(z) = \operatorname{Im} z$</span>, is a homeomorphism (its inverse <span class="math-container">$p^{-1}$</span> is given by <span class="math-container">$p^{-1}(t) = \sqrt{1-t^2} +it$</span>). Thus it suffices to show that <span class="math-container">$\bar \psi = p \circ \phi : (-\pi/2,\pi/2) \to (-1,1)$</span> is a homeomorphism. But <span class="math-container">$\bar \psi(x) = \sin x$</span> which finishes the proof since <span class="math-container">$\sin : (-\pi/2,\pi/2) \to (-1,1)$</span> is well-known to be a homeomorphism.</p> <p>Let us now show that <span class="math-container">$\phi_*(f)$</span> is continuous iff <span class="math-container">$f$</span> is continuous.</p> <ol> <li><p>If <span class="math-container">$\phi_*(f)$</span> is continuous, then trivially <span class="math-container">$f = \phi_*(f) \circ \phi$</span> is continuous.</p> </li> <li><p>Let <span class="math-container">$f$</span> be continuous. We want to show that <span class="math-container">$\phi_*(f)$</span> is continuous. 
Continuity on <span class="math-container">$S^1_r$</span> (in particular in <span class="math-container">$1 \in S^1$</span>) follows from the Lemma: We have <span class="math-container">$$\phi_*(f) \mid_{S^1_r} = \phi_*(f)\circ \psi \circ \psi^{-1} = \phi_*(f) \circ \phi \mid_{(-\pi/2,\pi/2)} \circ \psi^{-1} = f \mid_{(-\pi/2,\pi/2)} \circ \psi^{-1} .$$</span> Now let <span class="math-container">$z \in S^1$</span> be arbitrary. We want to show that <span class="math-container">$\phi_*(f)$</span> is continuous in <span class="math-container">$z$</span>. The map <span class="math-container">$\mu_z : S^1 \to S^1, \mu_z(w) = z \cdot w$</span>, is a homeomorphism (its inverse is <span class="math-container">$\mu_{z^{-1}}$</span>). Since <span class="math-container">$\mu_z(1) = z$</span>, it suffices to show that <span class="math-container">$\phi_*(f) \circ \mu_z^{-1}$</span> is continuous in <span class="math-container">$1$</span>. Write <span class="math-container">$z = e^{i\xi}$</span>. The map <span class="math-container">$\bar f : \mathbb R \to X, \bar f(x) = f(x-\xi)$</span>, is continuous with period <span class="math-container">$2\pi$</span>, thus we know that <span class="math-container">$\phi_*(\bar f)$</span> is continuous in <span class="math-container">$1$</span>. We claim that <span class="math-container">$\phi_*(f) \circ \mu_z^{-1} = \phi_*(\bar f)$</span> which finishes the proof. By definition of <span class="math-container">$\phi_*$</span> we have to show that <span class="math-container">$\phi_*(f) \circ \mu_z^{-1} \circ \phi = \phi_*(\bar f) \circ \phi = \bar f$</span>. But <span class="math-container">$$\phi_*(f)(\mu_z^{-1}(\phi(x))) = \phi_*(f)(\mu_z^{-1}(e^{ix})) = \phi_*(f)(e^{ix}/z) = \phi_*(f)(e^{i(x-\xi)}) = \phi_*(f)(\phi(x-\xi)) = (\phi_*(f) \circ \phi)(x-\xi) = f(x-\xi)= \bar f(x) .$$</span></p> </li> </ol>
52,841
<p>In classical mechanics, momentum and position can be paired together to form a symplectic manifold. If you take the simple harmonic oscillator with energy $H = (k/2)x^2 + (m/2)\dot{x}^2$, the orbits are ellipses. How is the vector field determined by the (symplectic) gradient, then? </p> <p>Also, does anyone know an interpretation for the area inside a closed curve in phase space?</p>
Vít Tuček
6,818
<p>You can view $\mathbb{R}^{2n}$ as a quotient of the real Heisenberg group $\mathcal{H}^{2n+1}$ modulo its center. For a closed loop $\alpha$ in $\mathbb{R}^{2n}$ and a point in $\mathcal{H}^{2n+1}$ over $\alpha(0)$ there's unique lift $\tilde{\alpha}$ of $\alpha$ to $\mathcal{H}^{2n+1}$ going through this point. The symplectic area enclosed by $\alpha$ expresses the signed distance from $\tilde{\alpha}(0)$ to $\tilde{\alpha}(1)$ with respect to a left invariant Riemannian metric on $\mathcal{H}^{2n+1}$.</p>
2,826,850
<p>$x,y$ and $z$ are consecutive integers such that $\frac {1}{x}+ \frac {1}{y}+ \frac {1}{z} \gt \frac {1}{45} $. What is the biggest value of $x+y+z$?</p> <p>I assumed that $x$ was the smallest number so that I could express the other numbers as $x+1$ and $x+2$, and in the end I got to a cubic function but I didn't know how to find its roots. I probably didn't do anything important, so I'd appreciate it if you give me any hints or help. Thanks in advance.</p> <p>I have thought about using the AM-HM inequality.</p>
fleablood
280,126
<p>Just do it. If $z\le {3*45} $ then $\frac 1x+\frac 1y+\frac 1z &gt;3\frac 1 z\ge \frac 1 {45}$, so any such $z-2=x &lt;z-1=y &lt;z\le 135$ will do, and the largest $x+y+z $ will be when $z=135$.</p> <p>Alternatively, if $x\ge 135$ then $\frac 1x+\frac 1y+\frac 1z &lt;3\frac 1 x\le \frac 1 {45}$, so none with $ z=x+2&gt;y=x+1&gt;x\ge 135$ will work.</p> <p>So it's just a matter of seeing if $x=134,y=135,z=136$ will be in range. Or in other words, whether $\frac 1{134}+\frac1 {136} &gt;\frac 2 {135} $ or not.</p> <p>$ \frac 1{134}+\frac1 {136}=\frac {136+134}{(135-1)(135+1)}=\frac {2*135}{135^2-1}&gt;\frac {2*135}{135^2}=\frac {2}{135} $ so indeed $\frac 1{134}+\frac1 {135}+\frac1 {136}&gt;\frac 1 {45} $.</p> <p>And that's the largest the terms can be.</p> <p>So the largest $x+y+z=134+135+136=3*135=405$.</p> <p>.....</p> <p>A takeaway from this seems to be that $\frac 1 {avg(x_i)}\le avg(\frac 1 {x_i}) $. Can we prove that?</p>
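The endpoint arithmetic can be confirmed with exact rational arithmetic (restricting, as the answer implicitly does, to positive $x$):

```python
# Exact search for the largest x with 1/x + 1/(x+1) + 1/(x+2) > 1/45,
# using Fraction so no floating-point rounding can creep in.
from fractions import Fraction

def works(x):
    return sum(Fraction(1, x + i) for i in range(3)) > Fraction(1, 45)

largest_x = max(x for x in range(1, 500) if works(x))
largest_sum = largest_x + (largest_x + 1) + (largest_x + 2)
```

This reproduces the boundary case checked by hand: $x=134$ works and $x=135$ does not.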
4,577,651
<p>From Gallian's &quot;Contemporary Abstract Algebra&quot;, Part 2 Chapter 5.</p> <p>It looks like using Lagrange's theorem would work, since <span class="math-container">$|S_n| = n!$</span> and <span class="math-container">$\langle\alpha\rangle$</span> is a subgroup of <span class="math-container">$S_n$</span>. However, that hasn't been covered in the book at this point, so I'm assuming a different solution is expected.</p> <p><span class="math-container">$\alpha$</span> can be broken up into disjoint cycles <span class="math-container">$\alpha_1\dots\alpha_m$</span> such that <span class="math-container">$|\alpha_1| + \dots +|\alpha_m| = n$</span>, and then <span class="math-container">$|\alpha| = \operatorname{lcm}(|\alpha_1|, \dots, |\alpha_m|)$</span>. Don't know how to continue, though.</p>
cigar
1,070,376
<p><strong>Hint</strong></p> <p>The order of any cycle of size <span class="math-container">$k$</span> is <span class="math-container">$k$</span>, and thus less than or equal to <span class="math-container">$n$</span>. And thus divides <span class="math-container">$n!$</span>.</p> <p>Then there's the <em>universal property of the lcm</em>: <span class="math-container">$a,b\mid c\implies \rm{lcm}(a,b)\mid c$</span>.</p> <p>Thus we go around (or under) Lagrange.</p>
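The hint can be checked exhaustively for small $n$: a sketch that computes each permutation's order as the lcm of its disjoint cycle lengths and verifies it divides $n!$.

```python
from itertools import permutations
from math import factorial, gcd

def order(perm):
    # Order of a permutation given as a tuple (i -> perm[i]):
    # the lcm of the lengths of its disjoint cycles.
    seen, result = [False] * len(perm), 1
    for i in range(len(perm)):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            result = result * length // gcd(result, length)  # lcm(result, length)
    return result

# Exhaustive check for n <= 6: the order of every permutation divides n!.
all_divide = all(
    factorial(n) % order(p) == 0
    for n in range(1, 7)
    for p in permutations(range(n))
)
```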
1,204,128
<p>Two sides of a triangle are 6 m and 8 m in length and the angle between them is increasing at a rate of $0.06$ rad/s. Find the rate at which the area of the triangle is increasing when the angle between the sides of fixed length is $\large\frac {\pi}{3}$ rad. </p>
egreg
62,967
<p>No, it's not sufficiently rigorous: either you make an $\varepsilon$-$\delta$ proof or appeal to a known theorem.</p> <p>For instance, you can observe that $$ |f(x)|\le|2x| $$ and appeal to the squeeze theorem.</p> <hr> <p>For an $\varepsilon$-$\delta$ proof, consider $\varepsilon&gt;0$ and set $\delta=\varepsilon/2$. If $0&lt;x&lt;\delta$, we have two cases: either $x$ is rational, so $|f(x)-0|=|2x|=2|x|&lt;2\delta=\varepsilon$, or $x$ is irrational, so $|f(x)-0|=|-2x|=2|x|&lt;2\delta=\varepsilon$.</p> <p>You can see that it's not really different from the “squeeze theorem” proof.</p>
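The $\delta=\varepsilon/2$ choice in the $\varepsilon$-$\delta$ proof can be illustrated numerically. Floats cannot encode "rational versus irrational", so this sketch simply checks both branches $2x$ and $-2x$ against the bound:

```python
# With delta = eps/2, both possible values of f on (0, delta) lie within eps
# of 0, since |f(x)| <= 2|x| < 2*delta = eps on either branch.
def branches_within_eps(eps, samples=1000):
    delta = eps / 2
    for k in range(1, samples + 1):
        x = delta * k / (samples + 1)  # points strictly inside (0, delta)
        if not (abs(2 * x) < eps and abs(-2 * x) < eps):
            return False
    return True

ok = all(branches_within_eps(eps) for eps in (1.0, 0.25, 1e-9))
```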
1,204,128
<p>Two sides of a triangle are 6 m and 8 m in length and the angle between them is increasing at a rate of $0.06$ rad/s. Find the rate at which the area of the triangle is increasing when the angle between the sides of fixed length is $\large\frac {\pi}{3}$ rad. </p>
Jack M
30,481
<p>First of all, your conclusion should not be that $f(0)=0$, it should be that $\lim_{x\to 0} f(x) = 0$.</p> <p>Your proof isn't really rigorous, although the idea is correct. What you're implicitly assuming is something like the following proposition:</p> <blockquote> <p>If $\lim_{x\to a} g(x) = L$ and $\lim_{x\to a} h(x) = L$, and if $$f(x)=\begin{cases} g(x),&amp;x\in A\\ h(x),&amp;x\in \mathbb R \setminus A\end{cases}$$ Then $\lim_{x\to a} f(x) = L$.</p> </blockquote> <p>Which is true, but you should at the very least state it explicitly even if you don't take the time to prove it, otherwise your proof seems like an appeal to intuition.</p>
2,262,094
<p>In Sheldon Axler's <em>Linear Algebra Done Right</em>, page 35, he gave a proof of "Length of linearly independent list &lt;= length of spanning list" (see reference below) by using a process of adding a vector from the independent list to the spanning list and then cancelling a vector from the spanning list to form a new spanning list.</p> <p>My question falls into the <em>Linear Dependence Lemma</em> part. The <em>Linear Dependence Lemma</em> only tells us that we can remove some $v_j$ from the dependent list, but the proof he gives in the first picture says we can remove one of the $w$'s; why not one of the $u$'s? How can he be so sure about that?</p> <blockquote> <p>My current understanding: </p> <p>Step 1: By the definition of <em>spanning list</em>, it's easy to see that $u_1$ can be written as a linear combination of the spanning list $w_1,...,w_n$, say $u_1=a_1w_1+...+a_nw_n$. Meanwhile, the coefficients $a_1,...,a_n$ cannot all equal $0$, or else $u_1=0$, contradicting the presumption that $u_1,...,u_n$ is linearly independent. Let $a_j\neq 0$; we can thus rewrite $u_1=a_1w_1+...+a_nw_n$ into $w_j=-\frac{a_1}{a_j}w_1+...-\frac{a_{j-1}}{a_j}w_{j-1}-\frac{a_{j+1}}{a_j}w_{j+1}-...-\frac{a_n}{a_j}w_n+\frac{1}{a_j}u_1$, implying $w_j\in span\{w_1,...,w_{j-1},w_{j+1},...,w_n,u_1\}$, which is why we can remove one of the $w$'s.</p> <p>Step $j$ ($j\ge 2$): likewise, the coefficients $a_1,...,a_{n-j+1}$ in $u_j=a_1w_1+...+a_{n-j+1}w_{n-j+1}+b_1u_1+...+b_{j-1}u_{j-1}$ cannot all equal zero, or else this contradicts the presumption that the $u$'s are linearly independent. This is why we can always remove some $w$'s in each step.</p> </blockquote> <p>Please verify my understanding; this does not seem obvious to me, and it took me time to figure it out. 
However, I'm not so sure that I'm right, so please point out any mistakes and replace my understanding with a better explanation. Thanks in advance.</p> <p>Reference 1:<a href="https://i.stack.imgur.com/38TVT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/38TVT.png" alt="Proof of &quot;Length of linearly independent list &lt;=length of spanning list&quot;"></a> Reference 2: <em>Linear Dependence Lemma</em><a href="https://i.stack.imgur.com/5KRBK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5KRBK.png" alt="Linear Dependence Lemma"></a></p>
Tengu
58,951
<p>The main thing to notice is the order in which the list is written.</p> <p>The claim in your step 1 that we could remove $u_1$ is false. Note that the Linear Dependence Lemma says there exists a vector $v_j$ in the list such that it belongs to the span of all <strong>previous</strong> vectors $v_1, \ldots, v_{j-1}$. Notice all $v_1, \ldots, v_{j-1}$ are to the <strong>left</strong> of $v_j$ in the list $(v_1, \ldots, v_m)$. </p> <p>Hence, if we write the list as $$u_1,w_1, \ldots, w_n$$ in this exact order, we can't remove $u_1$, because all $w_i$ are to the <strong>right</strong> of $u_1$ in the list. Hence, even if $u_1$ is a linear combination of $w_1, \ldots,w_n$, $u_1$ is still not the corresponding $v_j$ satisfying the Linear Dependence Lemma.</p> <blockquote> <p>Why can't we remove $u_i$?</p> </blockquote> <p>All the steps give a general list $$u_1, \ldots, u_k,w_1, \ldots, w_l$$ that is linearly dependent. The Linear Dependence Lemma says that in this list there exists a vector $v$ such that $v$ belongs to the span of all previous vectors in the list. If this $v=u_i$ for some $i$, then $v \in \text{span}(u_1, \ldots, u_{i-1})$, which contradicts the condition that $(u_1, \ldots, u_k)$ is linearly independent. Thus, this $v$ must be different from every $u_i$, so $v=w_i$ for some $i$. With this and condition (b) of the Linear Dependence Lemma, we find that we can remove $v=w_i$. Thus, we can't remove $u_i$.</p>
903,656
<p>An urn has $2$ balls and each ball could be green, red or black. We draw a ball and it was green; then it was returned to the urn. What is the probability that the next ball is red? </p> <p>My attempt: I think it is just a probability of $1/4$ because we have 4 colors in total, but on the other hand I think I need to use conditional probability:</p> <p>$$P(R|V)= {P(R\bigcap V)\over P(V)}$$</p> <p>where $P(V)$ is the probability of drawing a green ball and $P(R)$ is the probability of drawing a red ball, but I am not so sure which one would be the correct approach to the problem.</p> <p>I would really appreciate your help :)</p>
Jack D'Aurizio
44,121
<p>There are just three colors, not four. Anyway, half of the times you re-draw the first ball, that is green. In the other case, you draw the other ball that may be green, red or black with equal probability, $\frac{1}{3}$. Hence the probability to draw a red ball the second time is $\frac{1}{2}\cdot\frac{1}{3}=\frac{1}{6}$.</p>
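A Monte Carlo sketch of the same computation (assuming, as the answer does, that each ball's colour is independently uniform over the three colours):

```python
import random

# Simulate: colour both balls uniformly, condition on the first draw being
# green, return the ball, and record how often the second draw is red.
random.seed(42)
N = 300_000
conditioned = red_second = 0
for _ in range(N):
    urn = [random.choice("GRB") for _ in range(2)]
    first = random.randrange(2)
    if urn[first] != "G":
        continue                     # condition on the first draw being green
    conditioned += 1
    second = random.randrange(2)     # ball returned, so both balls are in play
    if urn[second] == "R":
        red_second += 1

estimate = red_second / conditioned  # should be close to 1/6
```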
2,410,517
<p>I feel like I'm missing something very simple here, but I'm confused at how Rudin proved Theorem 2.27 c:</p> <p>If <span class="math-container">$X$</span> is a metric space and <span class="math-container">$E\subset X$</span>, then <span class="math-container">$\overline{E}\subset F$</span> for every closed set <span class="math-container">$F\subset X$</span> such that <span class="math-container">$E\subset F$</span>. Note: <span class="math-container">$\overline{E}$</span> denotes the closure of <span class="math-container">$E$</span>; in other words, <span class="math-container">$\overline{E} = E \cup E'$</span>, where <span class="math-container">$E'$</span> is the set of limit points of <span class="math-container">$E$</span>.</p> <p>Proof: If <span class="math-container">$F$</span> is closed and <span class="math-container">$F \supset E$</span>, then <span class="math-container">$F\supset F'$</span>, hence <span class="math-container">$F\supset E'$</span>. Thus <span class="math-container">$F \supset \overline{E}$</span>.</p> <p>What I'm confused about is how we know <span class="math-container">$F \supset E'$</span> from the previous facts?</p>
Perturbative
266,135
<p>It's not too hard to prove that for any sets <span class="math-container">$A \subseteq B$</span> in a metric space <span class="math-container">$(X, d)$</span>, it follows that <span class="math-container">$A' \subseteq B'$</span>. With the above result at hand, we can prove Theorem <span class="math-container">$2.27$</span>(c).</p> <p><em>Proof:</em> To prove this theorem choose a closed set <span class="math-container">$F$</span> that contains <span class="math-container">$E$</span>, this means that <span class="math-container">$E \subseteq F$</span>. </p> <p>Observe that since <span class="math-container">$F$</span> is closed <span class="math-container">$F = \overline{F} = F \cup F'$</span>. Also note that since <span class="math-container">$E \subseteq F$</span> it follows that <span class="math-container">$E' \subseteq F'$</span>. It then follows by elementary set theory that <span class="math-container">$E \cup E' \subseteq F \cup F'$</span>. But this implies that <span class="math-container">$\overline{E} \subseteq \overline{F} = F$</span> (since <span class="math-container">$\overline{E} = E \cup E'$</span>) which concludes the proof. <span class="math-container">$\square$</span></p> <hr> <p>As an aside, Rudin's exposition can be very terse at times, so don't be discouraged if some things seem simple at first but you can't quite flesh out their details.</p>
2,410,517
<p>I feel like I'm missing something very simple here, but I'm confused at how Rudin proved Theorem 2.27 c:</p> <p>If <span class="math-container">$X$</span> is a metric space and <span class="math-container">$E\subset X$</span>, then <span class="math-container">$\overline{E}\subset F$</span> for every closed set <span class="math-container">$F\subset X$</span> such that <span class="math-container">$E\subset F$</span>. Note: <span class="math-container">$\overline{E}$</span> denotes the closure of <span class="math-container">$E$</span>; in other words, <span class="math-container">$\overline{E} = E \cup E'$</span>, where <span class="math-container">$E'$</span> is the set of limit points of <span class="math-container">$E$</span>.</p> <p>Proof: If <span class="math-container">$F$</span> is closed and <span class="math-container">$F \supset E$</span>, then <span class="math-container">$F\supset F'$</span>, hence <span class="math-container">$F\supset E'$</span>. Thus <span class="math-container">$F \supset \overline{E}$</span>.</p> <p>What I'm confused about is how we know <span class="math-container">$F \supset E'$</span> from the previous facts?</p>
Avik Chakravarty
630,919
<p>Let <span class="math-container">$x \in E'$</span>. Then <span class="math-container">$\forall r &gt; 0$</span>, <span class="math-container">$N_r(x) \cap(E-\{x\}) \ne \emptyset$</span>. Since <span class="math-container">$E \subset F$</span>, we then have <span class="math-container">$\forall r &gt; 0$</span>, <span class="math-container">$N_r(x) \cap(F-\{x\}) \ne \emptyset$</span>. Thus, <span class="math-container">$x \in F'$</span>. But <span class="math-container">$F$</span> is closed, so <span class="math-container">$F' \subset F$</span>. So, <span class="math-container">$x \in F$</span>. Thus, <span class="math-container">$E' \subset F$</span>. Finally, <span class="math-container">$\bar{E} \subset F$</span>.</p>
894,159
<p>I was assigned the following problem: find the value of $$\sum_{k=1}^{n} k \binom {n} {k}$$ by using the derivative of $(1+x)^n$, but I'm basically clueless. Can anyone give me a hint?</p>
quid
85,306
<p>Recall that $$ (1+x)^n = \sum_{k=0}^{n} x^k \binom {n} {k} $$ and thus $$ ((1+x)^n)' = \sum_{k=1}^{n} k x^{k-1} \binom {n} {k} $$ Now, calculate the left-hand side, and then think which value of $x$ could be a good choice.</p>
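For anyone who wants to check the resulting identity numerically (spoiler for the hint: the good choice turns out to be $x=1$), a quick sketch:

```python
from math import comb

# Setting x = 1 in the differentiated identity gives sum_k k*C(n,k) = n*2^(n-1).
def lhs(n):
    return sum(k * comb(n, k) for k in range(1, n + 1))

def rhs(n):
    return n * 2 ** (n - 1)  # n(1+x)^(n-1) evaluated at x = 1

identity_holds = all(lhs(n) == rhs(n) for n in range(1, 25))
```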
2,303,106
<p>I was looking at this question posted here some time ago. <a href="https://math.stackexchange.com/questions/1353893/how-to-prove-plancherels-formula">How to Prove Plancherel&#39;s Formula?</a></p> <p>I get it until in the third line he practically says that $\int _{- \infty}^{+\infty} e^{i(\omega - \omega')t} dt= 2 \pi \delta(\omega - \omega')$.</p> <p>I mean, I would understand if we were integrating over a period of length $2 \pi$, but here the integration is over $\mathbb{R}$. </p> <p>P.S. I would have asked this directly to the author of the post, but it's been over a year since he last logged in.</p>
Conversely
394,051
<p>I think he used that $$ \hat{\delta}(\omega) = 1, $$ so $$\int _{-\infty}^{+\infty} e^{i(\omega-\omega ')t} dt $$ is the inverse transform of $1$ evaluated at $\omega - \omega'$, which equals $2\pi\,\delta(\omega - \omega')$; the factor $2\pi$ comes from the definition of the inverse transform.</p>
2,303,106
<p>I was looking at this question posted here some time ago. <a href="https://math.stackexchange.com/questions/1353893/how-to-prove-plancherels-formula">How to Prove Plancherel&#39;s Formula?</a></p> <p>I get it until in the third line he practically says that $\int _{- \infty}^{+\infty} e^{i(\omega - \omega')t} dt= 2 \pi \delta(\omega - \omega')$.</p> <p>I mean, I would understand if we were integrating over a period of length $2 \pi$, but here the integration is over $\mathbb{R}$. </p> <p>P.S. I would have asked this directly to the author of the post, but it's been over a year since he last logged in.</p>
Disintegrating By Parts
112,478
<p>A classical way to interpret what you have is through the Fourier transform and its inverse. If $f$ is continuous at $x$ where it has left- and right-hand derivatives, and if $f$ is suitably integrable on $\mathbb{R}$, then $$ \lim_{R\rightarrow\infty}\frac{1}{\sqrt{2\pi}}\int_{-R}^{R}\hat{f}(s)e^{isx}ds = f(x). $$ This can be written as \begin{align} f(x)&amp;=\lim_{R\rightarrow\infty}\frac{1}{2\pi}\int_{-R}^{R}\int_{-\infty}^{\infty}f(t)e^{-ist}dt e^{isx}ds \\ &amp;=\lim_{R\rightarrow\infty}\int_{-\infty}^{\infty}\left(\frac{1}{2\pi}\int_{-R}^{R}e^{is(x-t)}ds\right)f(t)dt \end{align} This is being represented in a short-hand form as $$ \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{is(x-t)}ds=\delta(x-t). $$ There are several ways to interpret the above, but none of them including treating the integral by itself.</p> <p>The symmetric truncated integral is $$ \frac{1}{2\pi}\int_{-R}^{R} e^{is(x-t)}ds = \frac{1}{\pi}\frac{\sin(R(x-t))}{x-t}. $$ So you're really looking at a very classical limit of an integral: $$ \lim_{R\uparrow\infty}\frac{1}{\pi}\int_{-\infty}^{\infty}f(t)\frac{\sin(R(x-t))}{x-t}dt = f(x). $$</p>
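The final limit can be probed numerically. A rough sketch (the Gaussian test function, the cutoff $R=60$, the truncation window $[-10,10]$ and the grid size are all arbitrary illustrative choices):

```python
import numpy as np

def dirichlet_approx(f, x, R, a=-10.0, b=10.0, n=400_001):
    # Approximate (1/pi) * integral of f(t) sin(R(x-t))/(x-t) dt by a Riemann
    # sum. np.sinc(z) = sin(pi z)/(pi z), so (R/pi)*sinc(R*u/pi) equals
    # sin(R u)/(pi u) with the removable singularity at u = 0 handled for us.
    t = np.linspace(a, b, n)
    kernel = (R / np.pi) * np.sinc(R * (x - t) / np.pi)
    return float(np.sum(f(t) * kernel) * (t[1] - t[0]))

f = lambda t: np.exp(-t ** 2)   # smooth, rapidly decaying test function
x = 0.5
error = abs(dirichlet_approx(f, x, R=60.0) - f(x))
```

For this smooth, fast-decaying $f$, the approximation already sits essentially on top of $f(x)$ at a moderate cutoff.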
1,826,964
<blockquote> <p>A fair die is tossed n times (for large n). Assume tosses are independent. What is the probability that the sum of the face showing is $6n-3$?</p> </blockquote> <p>Is there a way to do this without random variables explicitly? This is in a basic probability theory reviewer, and random variables was not yet discussed. It's Larsen and Marx before Chapter 3 (where random variables starts)</p> <p>One thing I tried:</p> <p>$$P(X_1 = 3) = 1/6$$</p> <p>$$P(\sum_{i=1}^{2} X_i = 9) = (1/6)^2 4$$</p> <p>$$P(\sum_{i=1}^{3} X_i = 15) = (1/6)^3 (10)$$</p> <p>$$P(\sum_{i=1}^{4} X_i = 21) = (1/6)^4 (20)$$</p> <p>I'm not seeing any pattern for the multiplicand of $(1/6)^n$</p> <p>$$P(\sum_{i=1}^{n} X_i = 6n-3) = (1/6)^n (?)$$</p> <p>Another thing I tried:</p> <p>$$P(\sum_{i=1}^{n} X_i = 6n-3)$$</p> <p>$$= P(X_i = 6 \ \text{except for 3 which are 5's}) \tag{1}$$</p> <p>$$+ P(X_i = 6 \ \text{except for 2 which are a 4 and a 5}) \tag{2}$$</p> <p>$$+ P(X_i = 6 \ \text{except for 1 which is 3}) \tag{3}$$</p> <p>where I think</p> <p>$$(1) = (1/6)^{n-3}(1/6)^{3}\binom{n}{3}$$</p> <p>$$(2) = (1/6)^{n-3}(1/6)^{2}\binom{n}{2}(1/6)^{1}\binom{n-1}{1}$$</p> <p>$$(3) = (1/6)^{n-2}(1/6)^{1}\binom{n}{1}$$</p> <p>Is any approach going somewhere? If not, please suggest</p>
drhab
75,923
<p>If $Y_i:=6-X_i$ then you are looking for $P(Y_1+\cdots+Y_n=3)$. </p> <p>Observe that (presuming the die is $6$-sided) the $Y_i$ take values in $\{0,1,2,3,4,5\}$.</p> <p>The answer is:</p> <p>$$\left[\binom{n}3+n(n-1)+n\right]\left(\frac16\right)^n$$</p> <p>$\binom{n}3$ corresponds with $n$-tuples having on $3$ spots a $1$ and on the other spot(s) a $0$.</p> <p>$n(n-1)$ corresponds with $n$-tuples having on $1$ spot a $1$, on $1$ spot a $2$ and on the other spot(s) a $0$.</p> <p>$n$ corresponds with $n$-tuples having on $1$ spot a $3$, and on the other spot(s) a $0$.</p> <p>Using the convention that $\binom{n}3=0$ if $3\notin\{0,\dots,n\}$ the answer works for every positive integer $n$.</p>
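The bracketed count can be verified against brute-force enumeration for small $n$; it also matches the hand counts $4, 10, 20$ in the question:

```python
from itertools import product
from math import comb

def brute_count(n):
    # number of n-tuples of faces 1..6 with sum 6n - 3
    return sum(1 for faces in product(range(1, 7), repeat=n)
               if sum(faces) == 6 * n - 3)

def formula(n):
    return comb(n, 3) + n * (n - 1) + n

matches = all(brute_count(n) == formula(n) for n in range(1, 7))
prob_n4 = formula(4) / 6 ** 4   # P(sum = 21) for n = 4
```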
2,909,480
<p>Please notice the following before reading: the following text is translated from Swedish and it may contain wrong wording. Also note that I am a first-year student at a university, in the sense that my knowledge of mathematics is limited.</p> <p>Translated text:</p> <p><strong>Example 4.4</strong> Show that for all integers $n$ it is true that $n^3 - n$ is evenly divisible by $3$.<sup>1</sup> </p> <p>Here we are faced with a statement about <em>all integers</em>, not just the positive ones. But it is enough to treat the cases when $n$ is non-negative, for if $n$ is negative, put $m = -n$. Then $m$ is positive, $n^3 - n = -(m^3 - m)$, and if $3$ divides $a$, then $3$ also divides $-a$.</p> <p>Here there is also a statement for $n = 0$, so that we have a sequence $p_0, p_1, p_2, \; \ldots$ of statements; that the first statement has the number $0$ and not $1$ is of course of no deeper significance. Statement number $0$ says that $0^3 - 0$, which equals $0$, is evenly divisible by $3$, which obviously is true. If statement number $n$ is true, i.e. $n^3 - n = 3b$ for some integer $b$, then statement number $n+1$ must also be true, for</p> <p>$ \begin{split} (n + 1)^3 - (n + 1) &amp; = n^3 - n + 3n^2 + 3n \\ &amp; = 3b + 3n^2 + 3n \\ &amp; =3(b + n^2 + n) \end{split} $</p> <p>and $b + n^2 + n$ is an integer. What we were supposed to show now follows from the induction principle. $\square$</p> <p><p>1. That an integer $a$ is "evenly divisible by 3" is everyday language rather than mathematical. The precise meaning is that there exists another integer $b$ such that $a = 3b$.</p></p> <p><strong>In the above written text</strong>, I understand everything (or at least I think so) except </p> <p>$ \begin{split} (n + 1)^3 - (n + 1) &amp; = n^3 - n + 3n^2 + 3n \\ &amp; = 3b + 3n^2 + 3n \\ &amp; =3(b + n^2 + n). \end{split} $</p> <p>Could someone please explain what happened, because I am totally lost?</p>
BCLC
140,308
<p>We want to show the following proposition:</p> <p>$$k^3 - k \ \text{is always divisible by 3 for positive integers} \ k \tag{*}$$</p> <p>The set of positive integers has a special property: if some proposition, such as Proposition (*), is</p> <ol> <li><p>true for the first positive integer, $n=1$ (analogy: the first domino is knocked over), and</p></li> <li><p>true for the $k=n+1$th positive integer, assuming, for the sake of argument, that the same property is true for the $k=n$th positive integer (analogy: the $n+1$th domino is knocked over if its predecessor, the $n$th domino, is knocked over first),</p></li> </ol> <p>then the proposition is true for <em>every</em> positive integer.</p> <p>To better understand this, consider that unlike the positive integers, sets like the real numbers or $(0,1) \cup \{7\}$ don't have this special property that the positive integers do. (Analogy: we can imagine countably infinite dominoes, one for each of the positive integers, but can you imagine uncountably infinite dominoes, one for each of the numbers in $(0,1) \cup \{7\}$?)</p> <p>Now, back to the positive integers. Showing $(1)$ is easy. To show $(2)$, we pretend the proposition is true for some arbitrary positive integer, say $k_{n}=n=7$ (the first equality reads that the $n$th positive integer is $n$; the second reads that $n=7$). Then, we want to show the proposition is true for the next positive integer, $k_{n+1}=n+1=7+1=8$.</p> <p>Often this is done by considering the expression for $n+1$ and then manipulating it to come up with the expression for $n$. 
This can be seen in the proof in your question post and the rest of this post.</p> <hr> <p>Now for your question...</p> <p>Underbrace to the rescue!</p> <ol> <li>Let's prove $\begin{split} (n + 1)^3 - (n + 1) &amp; = n^3 - n + 3n^2 + 3n \end{split}$</li> </ol> <p>Pf:</p> <p>$$LHS = (n + 1)^3 - (n + 1) = (n + 1)^2(n+1) - (n + 1)$$</p> <p>$$ = (n^2+2n+1)(n+1) - (n + 1)$$</p> <p>$$ = (n^3+3n^2+3n+1) - (n + 1)$$</p> <p>$$ = n^3+3n^2+3n+1 - (n + 1)$$</p> <p>$$ = n^3+3n^2+3n \underbrace{+1 - n} - 1$$</p> <p>$$ = n^3+3n^2+3n \overbrace{- n +1} - 1$$</p> <p>$$ = n^3+3n^2+3n - n +0$$</p> <p>$$ = n^3+3n^2+3n - n$$</p> <p>$$ = n^3 - n+3n^2+3n = RHS$$</p> <p>QED</p> <ol start="2"> <li>Let's prove $\begin{split} n^3 - n + 3n^2 + 3n = 3b + 3n^2 + 3n \end{split}$ (and understand what's going on).</li> </ol> <p>Pf:</p> <p>$$LHS = n^3 - n + 3n^2 + 3n$$</p> <p>$$ = \underbrace{n^3 - n}_{\text{We assume for the sake of (inductive) argument that this is divisible by 3.}} + 3n^2 + 3n$$</p> <p>Now, something's being divisible by $3$ means that it is a multiple of $3$, i.e. $\text{something}=3b$ for some integer $b$. For example, $6$ is divisible by $3$ because $6$ is the double of $3$, i.e. $6=3b$ for $b=2$. $312$ is divisible by $3$ because $312$ is a multiple of $3$ because it is the $104$-ble of $3$, meaning $312=3b$ for $b=104$. $0$ is divisible by $3$ because $0=3b$ for $b=0$ itself. 
Hence, we have that</p> <p>$$\underbrace{n^3 - n}_{\text{We assume for the sake of (inductive) argument that this is divisible by 3.}} + 3n^2 + 3n$$</p> <p>$$=\underbrace{n^3 - n}_{\text{We assume for the sake of (inductive) argument that this is a multiple of 3.}} + 3n^2 + 3n$$</p> <p>$$=\underbrace{n^3 - n}_{\text{We assume for the sake of (inductive) argument that this is equal to 3b, for some integer b.}} + 3n^2 + 3n$$</p> <p>$$=\overbrace{3b} + 3n^2 + 3n = RHS$$</p> <ol start="3"> <li>Let's prove $3b + 3n^2 + 3n = 3(b + n^2 + n)$ (and understand what's going on).</li> </ol> <p>$$LHS = 3b + 3n^2 + 3n$$</p> <p>$$=\underbrace{3b}_{\text{Hey look, this expression has a '3' in it. That means, it's a multiple of 3.}} + 3n^2 + 3n$$</p> <p>$$=3b + \underbrace{3n^2}_{\text{Hey look, this expression has a '3' in it. That means, it's a multiple of 3.}} + 3n$$</p> <p>$$=3b + 3n^2 + \underbrace{3n}_{\text{Hey look, this expression has a '3' in it. That means, it's a multiple of 3.}}$$</p> <p>So, let's take out $3$ from all of them.</p> <p>$$ =3(b + n^2 + n) = RHS$$</p> <hr> <p>So, what just happened?</p> <p>We assumed for the sake of argument that $n^3 - n$ is divisible by 3 and wanted to show that $(n+1)^3 - (n+1)$ is divisible by 3. 
Well, we were able to rewrite $(n+1)^3 - (n+1)$ as</p> <p>$$(n+1)^3 - (n+1) = n^3 - n + 3n^2 + 3n$$</p> <p>$$= \underbrace{n^3 - n}_{\text{divisible by 3 by assumption}} + 3n^2 + 3n$$</p> <p>$$= n^3 - n + \underbrace{3n^2}_{\text{divisible by 3 because it has '3' as a factor}} + 3n$$</p> <p>$$= n^3 - n + 3n^2 + \underbrace{3n}_{\text{divisible by 3 because it has '3' as a factor}} = (**)$$</p> <p>Now, we can end here by saying that the finite sum of things that are divisible by 3 is another thing that is divisible by 3, or we don't have to take that for granted and rewrite $n^3-n$ as</p> <p>$$n^3-n=3b, \text{for some integer b}$$</p> <p>Thus, </p> <p>$$(**) = \underbrace{3b}_{n^3-n} + 3n^2 + 3n = (***)$$</p> <p>While all the terms have a factor of 3, we're still not taking for granted that the finite sum of things that are divisible by 3 is another thing that is divisible by 3, so one last step:</p> <p>$$(***) = 3(b+ n^2 + n)$$</p> <p>Therefore, $(n+1)^3 - (n+1)$ is divisible by 3 assuming for the sake of argument that $n^3 - n$ is divisible by 3. Specifically, we have shown this by writing $(n+1)^3 - (n+1)$ as sum of</p> <ol> <li><p>$n^3 - n$,</p></li> <li><p>some number with 3 in it</p></li> <li><p>some number with 3 in it</p></li> </ol>
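For anyone who likes to double-check algebra by machine, here is a small Python sketch (an addition for illustration, not part of the argument above) that verifies both the splitting identity and the divisibility claim for many values of $n$:

```python
# Sanity check: the induction step splits (n+1)^3 - (n+1) into the
# inductive piece n^3 - n plus the visibly-divisible piece 3n^2 + 3n.

def decompose(n):
    """Return the two pieces used in the induction step."""
    inductive_part = n**3 - n        # assumed divisible by 3 in the proof
    extra_part = 3 * n**2 + 3 * n    # divisible by 3 because of the factor 3
    return inductive_part, extra_part

for n in range(200):
    assert (n + 1)**3 - (n + 1) == sum(decompose(n))
    assert (n**3 - n) % 3 == 0
```

Of course, a finite check is no substitute for the induction itself; it only confirms the bookkeeping.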
2,325,565
<p>Is it possible to calculate probability density function from a data set of values? I assume this should be some kind of a function fitting exercise. </p>
Nick Peterson
81,839
<p>No, not quite. It's easy to come up with infinitely many density functions that COULD have led to a given finite set of observations; it's even possible to come up with infinitely many that make the outcome you observed 'likely'. So, without making more assumptions, you're pretty much stuck.</p> <p>That said, we can make some assumptions, and use those assumptions to come up with a potential PDF that seems reasonable.</p> <p>One common method for approximating PDFs in this fashion is Kernel Density Estimation (KDE). The idea is that you choose a "bandwidth" $h&gt;0$, and a "kernel" function $K$ such that (1) $K(x)\geq 0$ for all $x$, (2) the area under $K$ is $1$, and (3) the mean of $K$ is $0$; then, if your data points are $x_1,\ldots,x_n\in\mathbb{R}$, you define $$ \hat{f}(x):=\sum_{i=1}^{n}\frac{1}{nh}K\left(\frac{x-x_i}{h}\right). $$ It is pretty straightforward to check that this $\hat{f}$ is a density function.</p> <p>Why is this a reasonable thing to do? The intuition is that you generally assume $K$ is the density of a random variable that's centered at $0$; then this resulting density function is going to have "high" density at exactly the points that you chose, but also assign some density to the points NEAR those observed data points. The bandwidth $h$ controls how tightly packed the density is around the observed points; if $h$ is really small, then the density will be really tightly packed around the observed points, whereas large $h$ will spread it out more.</p> <p>It is very common to use the density of the standard normal as the kernel function.</p>
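As an illustration, the KDE formula above can be implemented in a few lines of plain Python (a minimal sketch with a Gaussian kernel; the observations `obs` and bandwidth `h=0.5` are made-up values):

```python
import math

def kde(x, data, h):
    """Gaussian-kernel density estimate: (1/(n*h)) * sum of K((x - x_i)/h)."""
    n = len(data)
    norm = math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) / norm for xi in data) / (n * h)

# A handful of hypothetical observations; the estimate should integrate to about 1.
obs = [1.0, 1.2, 2.5, 2.7, 2.9]
dx = 0.01
area = sum(kde(-3 + i * dx, obs, h=0.5) for i in range(1001)) * dx  # Riemann sum
```

In practice one would typically reach for a library routine such as `scipy.stats.gaussian_kde` rather than hand-rolling the sum, but the formula is exactly the one displayed above.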
2,325,565
<p>Is it possible to calculate probability density function from a data set of values? I assume this should be some kind of a function fitting exercise. </p>
spaceisdarkgreen
397,125
<p>The simplest way is to make a histogram of the data and then normalize it so it has area one. A more sophisticated way is to use a <a href="https://en.wikipedia.org/wiki/Kernel_density_estimation" rel="nofollow noreferrer">kernel density estimator</a>.</p>
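A minimal sketch of the first suggestion, normalizing a histogram so the bars enclose total area one (the sample data here is invented for illustration):

```python
def density_histogram(data, bins, lo, hi):
    """Histogram of `data` over [lo, hi), rescaled so the total area is 1."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in data:
        k = int((x - lo) / width)   # index of the bin containing x
        if 0 <= k < bins:
            counts[k] += 1
    n = len(data)
    # dividing each count by n*width makes sum(heights) * width == 1
    return [c / (n * width) for c in counts]

heights = density_histogram([0.1, 0.2, 0.2, 0.9], bins=2, lo=0.0, hi=1.0)
```

The same effect is available directly via `numpy.histogram(..., density=True)`.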
2,626,920
<p>I am working on a homework problem and I am given this situation:</p> <p>Let $A$ be the event that the $r$ numbers we obtain are all different from each other. So, for example, if $n = 3$ and $r = 2$ the sample space is $S = \{(1, 1),(1, 2),(1, 3),(2, 1),(2, 2),(2, 3),(3, 1),(3, 2),(3, 3)\}$ and the event $A$ is $A = \{(1, 2),(1, 3),(2, 1),(2, 3),(3, 1),(3, 2)\}$.</p> <p>My task is to solve for the general case and put together a formula. </p> <p>For the random experiment described above, find the probability $P(A)$ for a general $n$ and $r$. [Hint: If $r = 1$, we don't choose any duplicate numbers, so $P(A) = 1$. If $r &gt; n$, then our choice of $r$ numbers must contain some duplicates, so $P(A) = 0$. The interesting case is when $2 \leq r \leq n$.]</p> <p>I found this relatively simple to do while programming on <em>R</em>, however I do not know where to begin when putting a formula together for the general case. Any explanations would be helpful!</p>
Ethan Bolker
72,858
<p>Hint. You'll do better by thinking rather than (brute force) programming.</p> <p>Can you calculate the size of the sample space? That depends only on $n$. You've already done $n=3$ and found $9$ elements.</p> <p>Now how many ways can you choose $r$ different elements in order from among $n$? You found $6$ when $n=3$ and $r=2$. Work out a few more cases and find the pattern.</p>
2,626,920
<p>I am working on a homework problem and I am given this situation:</p> <p>Let $A$ be the event that the $r$ numbers we obtain are all different from each other. So, for example, if $n = 3$ and $r = 2$ the sample space is $S = \{(1, 1),(1, 2),(1, 3),(2, 1),(2, 2),(2, 3),(3, 1),(3, 2),(3, 3)\}$ and the event $A$ is $A = \{(1, 2),(1, 3),(2, 1),(2, 3),(3, 1),(3, 2)\}$.</p> <p>My task is to solve for the general case and put together a formula. </p> <p>For the random experiment described above, find the probability $P(A)$ for a general $n$ and $r$. [Hint: If $r = 1$, we don't choose any duplicate numbers, so $P(A) = 1$. If $r &gt; n$, then our choice of $r$ numbers must contain some duplicates, so $P(A) = 0$. The interesting case is when $2 \leq r \leq n$.]</p> <p>I found this relatively simple to do while programming on <em>R</em>, however I do not know where to begin when putting a formula together for the general case. Any explanations would be helpful!</p>
Mostafa Ayaz
518,023
<p>Since the outcomes here are ordered $r$-tuples, order matters, so the count of favourable outcomes is the number of $r$-permutations rather than the binomial coefficient $\binom{n}{r}$: there are $$n(n-1)\cdots(n-r+1)=\frac{n!}{(n-r)!}$$ ordered selections without repetition, and hence for $1\le r\le n$ $$P(A)=\frac{n!}{(n-r)!\,n^r}.$$</p>
2,626,920
<p>I am working on a homework problem and I am given this situation:</p> <p>Let $A$ be the event that the $r$ numbers we obtain are all different from each other. So, for example, if $n = 3$ and $r = 2$ the sample space is $S = \{(1, 1),(1, 2),(1, 3),(2, 1),(2, 2),(2, 3),(3, 1),(3, 2),(3, 3)\}$ and the event $A$ is $A = \{(1, 2),(1, 3),(2, 1),(2, 3),(3, 1),(3, 2)\}$.</p> <p>My task is to solve for the general case and put together a formula. </p> <p>For the random experiment described above, find the probability $P(A)$ for a general $n$ and $r$. [Hint: If $r = 1$, we don't choose any duplicate numbers, so $P(A) = 1$. If $r &gt; n$, then our choice of $r$ numbers must contain some duplicates, so $P(A) = 0$. The interesting case is when $2 \leq r \leq n$.]</p> <p>I found this relatively simple to do while programming on <em>R</em>, however I do not know where to begin when putting a formula together for the general case. Any explanations would be helpful!</p>
Community
-1
<p>I get:</p> <p>The sample space has size $n^r$.</p> <p>The number of possibilities for success is $n(n-1)(n-2)\cdots(n-(r-1))=\frac{n!}{(n-r)!}$, so for $2 \leq r \leq n$, $$P(A)=\frac{n!}{(n-r)!\,n^r}.$$</p>
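As a sketch, the ordered count $n(n-1)\cdots(n-(r-1))$ simplifies to $n!/(n-r)!$, and the resulting probability can be cross-checked against direct enumeration for small cases:

```python
from fractions import Fraction
from itertools import product

def p_all_different(n, r):
    """P(A) for r independent uniform draws from {1,...,n}: n!/((n-r)! * n^r)."""
    if r > n:
        return Fraction(0)
    favourable = 1
    for k in range(r):
        favourable *= n - k          # n * (n-1) * ... * (n-r+1)
    return Fraction(favourable, n**r)

# Cross-check against brute-force enumeration of all r-tuples
for n in range(1, 5):
    for r in range(1, 6):
        count = sum(1 for t in product(range(n), repeat=r) if len(set(t)) == r)
        assert p_all_different(n, r) == Fraction(count, n**r)
```

The worked example from the question comes out right: $n=3$, $r=2$ gives $6/9 = 2/3$.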
114,909
<p>What is known about normal subgroups of $SL_2(\mathbb{C}[X])$? Can one hope for a congruence subgroup property, i.e. that every (non-central) normal subgroup contains the kernel of the reduction modulo some ideal of $\mathbb{C}[X]$?</p>
Andrei Smolensky
5,018
<p>Most likely, the analog of the standard "sandwich classification" of normal subgroups of $SL_2(\mathbb{C}[X])$ involves the notion of a radix, which gives the more careful form of a congruence subgroup.</p> <p>See the paper <a href="http://www.ams.org/journals/bull/1991-24-01/S0273-0979-1991-15968-9/S0273-0979-1991-15968-9.pdf" rel="nofollow">D. Costa, G. Keller, Normal subgroups of $SL(2,A)$</a>, where this is done for $A$ a ring of stable rank 1 or a Dedekind ring of arithmetic type. $\mathbb{C}[X]$ is neither of them, but it does have stable rank 2, so the ideas from the paper might be helpful.</p>
25,363
<p>In what way and with what utility is the law of excluded middle usually disposed of in intuitionistic type theory and its descendants? I am thinking here of topos theory and its ilk, namely synthetic differential geometry and the use of topoi in algebraic geometry (this is a more palatable restructuring, perhaps), where free use of these "¬⊨P∨¬P" theories is necessarily everywhere--freely utilized at every turn, one might say. But why and how are such theories first formulated, and what do they look like in the purely logical sense?</p> <p>You will have to forgive me; I began as a student in philosophy (not even that of mathematics), and the law of excluded middle is something that was imbibed with my mother's milk, as it were. This is more of a philosophical issue than a mathematical one, but being the renaissance guys/gals that you all are, I thought that perhaps this could generate some fruitful discussion. </p>
Andrej Bauer
1,176
<p>You make a couple of basic mistakes in your question. Perhaps you should correct them and ask again because I am not entirely sure what it is you are asking:</p> <ol> <li><p>Topos theory does <em>not</em> "freely use $P \lor \lnot P$", and neither does synthetic differential geometry. In fact, topos theorists are quite careful about <em>not</em> using the law of excluded middle, while synthetic differential geometry proves the negation of the law of excluded middle.</p></li> <li><p>As far as I know, the law of excluded middle is $P \lor \lnot P$, while the law of non-contradiction is $\lnot (P \land \lnot P)$. These two are <em>not</em> equivalent (unless you already believe in the law of excluded middle, in which case the whole discussion is trivial). The principle of non-contradiction is of course intuitionistically valid. So you seem to be confusing two different logical principles.</p></li> </ol> <p>If I had to guess what you asked, I would say you are wondering why anyone in their right mind would want to be agnostic about the law of excluded middle (intuitionistic logic) or even deny it (synthetic differential geometry). Aren't people who do so just plain crazy?</p> <p>To understand why someone might work without the law of excluded middle, the best thing is to study their theories. Probably you cannot afford to devote several years of your life to the study of topos theory. For an executive summary of synthetic differential geometry and its interplay with logic I recommend <a href="http://publish.uwo.ca/~jbell/">John Bell</a>'s texts on synthetic differential geometry, such as <a href="http://publish.uwo.ca/~jbell/invitation%20to%20SIA.pdf">this one</a>.</p> <p>Let me try an analogy. Imagine a mathematician who studies commutative groups and has never heard of the non-commutative ones. One day he meets another mathematician who shows him non-commutative groups. How will the first mathematician react? 
I imagine he will go through all <a href="http://en.wikipedia.org/wiki/K%C3%BCbler-Ross_model">the usual phases</a>:</p> <ol> <li><strong>Denial:</strong> these are not groups!</li> <li><strong>Anger:</strong> why are you destroying my groups? I hate you!</li> <li><strong>Bargaining:</strong> can we at least analyze non-commutative group in terms of their "commutative representations" (whatever that would mean)?</li> <li><strong>Depression:</strong> this is hopeless, I wasted my life studying the wrong groups. I might as well study point-set topology.</li> <li><strong>Acceptance:</strong> it's kind of cool that the symmetries of a cube form a group.</li> </ol> <p>I am at stage 5 with regards to intuitionistic logic. Where are you?</p>
87,948
<p>Let $\mu_t, t \geq 0,$ be a family of probability measures on the real line. One can assume whatever one wishes about them, although typically they will be continuous in some topology (usually at least the topology of weak convergence of measures), and they will be absolutely continuous with respect to Lebesgue measure. The basic question is as follows:</p> <p>Is there a Markov process $X_t$ such that its marginal distribution at each time is $\mu_t$?</p> <p>An obvious example is when $$d \mu_t = \frac{e^{-x^2/2t}}{\sqrt{2 \pi t}} dx$$ and $\mu_0 = \delta_0$, in which case we know that Brownian motion is such a Markov process. I am curious to know if there is any general theory along these lines.</p> <h2>Edit</h2> <p>As per Byron's comment below, I would like the Markov process to be continuous. Ideally I would like to have an SDE description of the process. </p> <p>The SDE description actually suggests one possible answer: simply compute and play with the time and space derivatives of the density function to see if they satisfy some sort of parabolic equation (like the heat equation), use this to get the adjoint of the generator, and then compute the generator itself. This is a very plausible option, but I was hoping that there might be something more systematic.</p>
Community
-1
<p>If you are willing to drop continuity in the parameter $t$, then you could let $(X_t)$ be independent with distribution $\mu_t$. </p>
295,618
<p>Problem A: Please fill each blank with a number such that all the statements are true:</p> <p>0 appears in all these statements $____$ time(s)<br> 1 appears in all these statements $____$ time(s)<br> 2 appears in all these statements $____$ time(s)<br> 3 appears in all these statements $____$ time(s)<br> 4 appears in all these statements $____$ time(s)<br> 5 appears in all these statements $____$ time(s)<br> 6 appears in all these statements $____$ time(s)<br> 7 appears in all these statements $____$ time(s)<br> 8 appears in all these statements $____$ time(s)<br> 9 appears in all these statements $____$ time(s) </p> <p>Note: they are treated as numbers, not digits. e.g. 11 counts as occurrence of 11, but not two 1.</p> <p><strong>EDIT</strong><br> How do number of solutions behave, with respect to number of statements there are? I need a sketch of the proof.</p>
JB King
8,950
<p>Note this does carry the assumption of counting the occurrence of the number at the start of the statement as well, and not strictly the numbers in the boxes, as there is a bit of interpretation there:</p> <p>The first one has at least one solution: the values $1,7,3,2,1,1,1,2,1,1$, reading down the blanks for $0$ through $9$. Here 1 appears 7 times: once as the label of its own line, and as the filled-in value on every line except the 1, 2, 3 and 7 lines. 2 appears 3 times: as its own label and as the stated count of the 3s and of the 7s. 3 appears twice: as its own label and as the stated count of the 2s.</p> <hr> <p>The first one would appear to be easily generalized: if one wanted to take away the line with 9s, then the number of 1s would drop to 6 and the 2 that was on the 7s would come down one row. This could be repeated up to a few times I'd think. 0,1,5, and 6 would give 4 times that a 1 would appear, so for at least 7 rows this can be done. One could add lines for 10, 11, and so on, which would increase the number of 1s, and then the 2 that is on the 7s would shift up.</p>
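Under the same counting convention (line labels count as occurrences, and entries are compared as whole numbers), the claimed solution can be verified mechanically with a short Python sketch:

```python
def check(values):
    """Verify a filled-in answer to Problem A.

    values[d] is the number written in the blank of the line for d.
    Every line label d and every filled-in value counts as one occurrence
    of that number (whole numbers, not digits, per the note in the problem).
    """
    numbers = list(range(10)) + list(values)   # the ten labels plus the blanks
    return all(numbers.count(d) == values[d] for d in range(10))

# The solution quoted above, read digit by digit:
assert check([1, 7, 3, 2, 1, 1, 1, 2, 1, 1])
```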
295,618
<p>Problem A: Please fill each blank with a number such that all the statements are true:</p> <p>0 appears in all these statements $____$ time(s)<br> 1 appears in all these statements $____$ time(s)<br> 2 appears in all these statements $____$ time(s)<br> 3 appears in all these statements $____$ time(s)<br> 4 appears in all these statements $____$ time(s)<br> 5 appears in all these statements $____$ time(s)<br> 6 appears in all these statements $____$ time(s)<br> 7 appears in all these statements $____$ time(s)<br> 8 appears in all these statements $____$ time(s)<br> 9 appears in all these statements $____$ time(s) </p> <p>Note: they are treated as numbers, not digits. e.g. 11 counts as occurrence of 11, but not two 1.</p> <p><strong>EDIT</strong><br> How do number of solutions behave, with respect to number of statements there are? I need a sketch of the proof.</p>
Erick Wong
30,402
<p>For the revised B there are exactly 3 solutions consisting of single digits: $173311121291$, $174121121291$, $191311111391$. No single-digit solutions were found to the original B, and the solution to A is unique within this class.</p>
2,096,711
<p>I need help finding a closed form of this finite sum. I'm not sure how to deal with sums that include division in it.</p> <p>$$\sum_{i=1}^n \frac{2^i}{2^n}$$</p> <p>Here's one of the attempts I made and it turned out to be wrong:</p> <p>$$\frac{1}{2^n}\sum_{i=1}^n {2^i} = \frac{1}{2^n} (2^{n +1} - 1) = \frac{2^{n + 1} - 1} {2^n}$$</p> <p>And then from there, simplifying it ended up with just a constant.</p> <p>I also tried it in which I moved $2^{-n}$ to the outside of the sigma notation and went from there:</p> <p>$$2^{-n}\sum_{i=1}^n {2^i} = 2^{-n} (2^{n +1} - 1) = 2 - 2^{-n}$$</p> <p>I plugged the equation in to Wolfram Alpha to check my answers. It gave me $2 - 2^{1-n}$, which is close to what I got in that second method. I need help finding the error in my math. I keep looking over it and I guess I'm just not seeing something.</p>
fonfonx
247,205
<p>Your first attempt was a good idea but you made some mistakes in your computations.</p> <p>Since the denominator does not depend on $i$ you can take it out of the sum and you get</p> <p>$$\sum_{i=1}^n \frac{2^i}{2^n}=\frac{1}{2^n}\sum_{i=1}^n {2^i} = \frac{1}{2^n} 2 (2^{n} - 1) = \frac{2^{n} - 1} {2^{n-1}}=2-2^{1-n}$$</p>
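The corrected closed form is easy to machine-check with exact rational arithmetic (a quick sketch, not needed for the derivation):

```python
from fractions import Fraction

def direct(n):
    """The sum as written: sum of 2^i / 2^n for i = 1..n."""
    return sum(Fraction(2**i, 2**n) for i in range(1, n + 1))

def closed_form(n):
    """The closed form 2 - 2^(1-n)."""
    return 2 - Fraction(2, 2**n)

for n in range(1, 30):
    assert direct(n) == closed_form(n)
```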
2,353,272
<p>Suppose that we are given the function $f(x)$ in the following product form: $$f(x) = \prod_{k = -K}^K (1-a^k x)\,,$$ Where $a$ is some real number. </p> <p>I would like to find the expansion coefficients $c_n$, such that: $$f(x) = \sum_{n = 0}^{2K+1} c_n x^n\,.$$</p> <p>A closed form solution for $c_n$, or at least a relation between the coefficients $c_n$ (e.g. between $c_n$ and $c_{n+1}$) would be great! </p>
G Cab
317,234
<p>We have that $$ \bbox[lightyellow] { \eqalign{ &amp; f(x) = \prod\limits_{k\, = \, - K}^K {\left( {1 - a^{\;k} x} \right)} = \prod\limits_{k\, = \, - K}^K {a^{\;k} \left( {a^{\; - k} - x} \right)} = \cr &amp; = \left( {\prod\limits_{k\, = \, - K}^K {a^{\;k} } } \right)\;\prod\limits_{k\, = \, - K}^K {\left( {a^{\; - k} - x} \right)} = \left( {\prod\limits_{k\, = \, - K}^K {a^{\;k} } } \right)\;\left( { - 1} \right)^{2K + 1} \prod\limits_{k\, = \, - K}^K {\left( {x - a^{\; - k} } \right)} = \cr &amp; = - \left( {\prod\limits_{k\, = \, - K}^K {a^{\;k} } } \right)\;\prod\limits_{k\, = \, - K}^K {\left( {x - a^{\; - k} } \right)} = - \prod\limits_{k\, = \, - K}^K {\left( {x - a^{\; - k} } \right)} = - \prod\limits_{k\, = \, - K}^K {\left( {x - a^{\;k} } \right)} = \cr &amp; = - \left( {x - 1} \right)\prod\limits_{k\, = \,1}^K {\left( {x - a^{\;k} } \right)} \prod\limits_{k\, = \,1}^K {\left( {x - \left( {{1 \over a}} \right)^{\;k} } \right)} = \cr &amp; = - {1 \over {\left( {x - 1} \right)}}\prod\limits_{k\, = \,0}^K {\left( {x - a^{\;k} } \right)} \prod\limits_{k\, = \,0}^K {\left( {x - \left( {{1 \over a}} \right)^{\;k} } \right)} = \cr &amp; = {{\left( { - 1} \right)^{\,K} } \over {\left( {x - 1} \right)}}a^{\, - \,\left( {\scriptstyle K + 1 \atop \scriptstyle 2} \right)} \left( {\prod\limits_{k\, = \,0}^K {\left( {x - a^{\;k} } \right)} } \right)^{\,2} \cr} }$$ because $$ \prod\limits_{k\, = \,0}^K {\left( {x - a^{\;k} } \right)} = \left( { - 1} \right)^{\,K + 1} a^{\,\left( {\scriptstyle K + 1 \atop \scriptstyle 2} \right)} \prod\limits_{k\, = \,0}^K {\left( {x - \left( {{1 \over a}} \right)^{\;k} } \right)} $$</p> <p>Then the $$ \bbox[lightyellow] { \prod\limits_{k\, = \,0}^{n - 1} {\left( {1 - xq^{\;k} } \right)} }$$ is the <a href="https://en.wikipedia.org/wiki/Q-Pochhammer_symbol" rel="nofollow noreferrer"><em>q-Pochammer Symbol</em></a> $(x;\,q)_n$.</p> <p>We can get the coefficients of the polynomial $$ \bbox[lightyellow] { f(x) = - \prod\limits_{k\, = \, 
- K}^K {\left( {x - a^{\;k} } \right)} = - \sum\limits_{j\, = \,0}^{2K + 1} {c_{\,j} \;x^{\,j} } }$$</p> <p>by means of the <a href="https://en.wikipedia.org/wiki/Vieta&#39;s_formulas" rel="nofollow noreferrer">Vieta's Formulas</a> $$ \bbox[lightyellow] { \left\{ \matrix{ c_{\;2K + 1} = 1 \hfill \cr \left( { - 1} \right)^{\,k} c_{\;2K + 1 - k} \quad \left| {\;1 \le k} \right.\quad = \sum\limits_{ - K\, \le \,j_{\,1} \, &lt; \,j_{\,2} \, &lt; \, \cdots \, &lt; \,j_{\,k} \, \le \,K} {a^{\;j_{\,1} \, + \,j_{\,2} \, + \, \cdots \, + \,j_{\,k} } } \hfill \cr} \right. }$$</p> <p>Now $$ \bbox[lightyellow] { \sum\limits_{\left\{ \begin{subarray}{l} \;1\, \leqslant \,l\, \leqslant \,k \\ \; - K\, \leqslant \,j_{\,1} \, &lt; \,j_{\,2} \, &lt; \, \cdots \, &lt; \,j_{\,k} \, \leqslant \,K \end{subarray} \right.} {\;j_{\,l} \,\,} = - k\left( {K + 1} \right) + \sum\limits_{\left\{ \begin{subarray}{l} \;1\, \leqslant \,l\, \leqslant \,k \\ \;1\, \leqslant \,j_{\,1} \, &lt; \,j_{\,2} \, &lt; \, \cdots \, &lt; \,j_{\,k} \, \leqslant \,2K + 1 \end{subarray} \right.} {\;j_{\,l} \,\,} }$$ and we are left to evaluate the last sum.<br> That is, the sum of the elements of all the k-subsets of the set $\left\{ {1,\,2,\, \cdots ,\,2K + 1} \right\}$.</p> <p>The total number of k-subsets is $$ N_{k - subs} = \left( \matrix{ 2K + 1 \cr k \cr} \right) $$ The number of k-subsets containing a given element $j$ clearly is $$ N_{j\, \in \;k - subs} = \left( \matrix{ 2K \cr k - 1 \cr} \right) $$</p> <p>Thus the sum we are looking for is $$ \bbox[lightyellow] { \begin{gathered} S(k,2K + 1) = - k\left( {K + 1} \right) + \sum\limits_{\left\{ \begin{subarray}{l} \;1\, \leqslant \,l\, \leqslant \,k \\ \;1\, \leqslant \,j_{\,1} \, &lt; \,j_{\,2} \, &lt; \, \cdots \, &lt; \,j_{\,k} \, \leqslant \,2K + 1 \end{subarray} \right.} {\;j_{\,l} \,\,} = \hfill \\ = - k\left( {K + 1} \right) + \left( \begin{gathered} 2K \\ k - 1 \\ \end{gathered} \right)\sum\limits_{\;1\, \leqslant \,j\, \leqslant \,2K + 1} 
{\;j\,} = \left( \begin{gathered} 2K \\ k - 1 \\ \end{gathered} \right)\left( \begin{gathered} 2K + 2 \\ 2 \\ \end{gathered} \right) - k\left( {K + 1} \right) \hfill \\ \end{gathered} }$$</p>
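To sanity-check coefficient formulas like the Vieta expressions above, one can build the $c_j$ numerically by multiplying in one linear factor at a time (a sketch in plain Python floats; for $a=1$ the product collapses to $(1-x)^{2K+1}$, which gives a convenient consistency test):

```python
from math import comb

def product_coeffs(a, K):
    """Coefficients c_0 .. c_{2K+1} of f(x) = prod_{k=-K}^{K} (1 - a^k x)."""
    coeffs = [1.0]                       # start from the constant polynomial 1
    for k in range(-K, K + 1):
        root = float(a) ** k
        new = [0.0] * (len(coeffs) + 1)  # multiply current polynomial by (1 - root*x)
        for i, c in enumerate(coeffs):
            new[i] += c
            new[i + 1] -= root * c
        coeffs = new
    return coeffs

# Consistency check: a = 1 gives the signed binomial coefficients of (1 - x)^(2K+1).
K = 3
assert product_coeffs(1, K) == [(-1) ** j * comb(2 * K + 1, j)
                                for j in range(2 * K + 2)]
```

For symbolic rather than numeric coefficients, the same iteration can be run with `sympy.expand` over the product.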
359,277
<p>Can you find a function which satisfies $f(ab)=\frac{f(a)}{f(b)}$? For example, $\log(x)$ satisfies the condition $f(ab)=f(a)+f(b)$, and $x^2$ satisfies $f(ab)=f(a)f(b)$.</p>
Andreas Caranti
58,401
<p>Assuming the function is defined on non-zero real numbers, and takes all non-zero values (but please do see below for a generalization), one has first $$ f(1) = f(1 \cdot 1) = \frac{f(1)}{f(1)} = 1, $$ and then for all $x$ $$ f(x) = f( 1 \cdot x) = \frac{1}{f(x)}, $$ so that $f(x) \in \{1, -1 \}$. Then $$ f(x y) = \frac{f(x)}{f(y)} = f(x) f(y), $$ so that we get that $$ f(x) = 1, \qquad\text{or}\qquad f(x) = \operatorname{sgn}(x). $$</p> <blockquote> <p><strong>Addendum</strong> One may consider the same problem for $f: G \to H$, where $G, H$ are groups (multiplicatively written, with identity $1$), and then it is possibly clearer. <em>(See also the comments to OP.)</em></p> <p>The condition is now $f(ab) = f(a) f(b)^{-1}$. Once more, $f(1) = 1$, and $f(x) = f(1 \cdot x) = f(1) f(x)^{-1} = f(x)^{-1}$, so all values of $f$ are involutions (or the identity) and $f$ is a group homomorphism.</p> <p>So in this case we have that $f$ is a morphism of $G$ onto a(n abelian) subgroup of $H$ whose non-identity elements are involutions. (Clearly there is a non-trivial such $f$ if and only if $G$ has a non-trivial quotient of exponent $2$.)</p> </blockquote>
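A quick numeric check of the conclusion on the nonzero reals (a sketch: both solutions found above do satisfy the functional equation at a grid of sample points):

```python
def sgn(x):
    """The sign function on the nonzero reals."""
    return 1 if x > 0 else -1

def const_one(x):
    return 1

# Both maps satisfy f(ab) = f(a) / f(b) for all nonzero a, b:
samples = [-3.5, -1.0, -0.25, 0.5, 1.0, 2.0, 7.0]
for f in (sgn, const_one):
    for a in samples:
        for b in samples:
            assert f(a * b) == f(a) / f(b)
```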
2,524,809
<p>I am trying to solve the following problem:</p> <blockquote> <p>Show that $\frac{dy}{dt}=f(y/t)$ is equal to $t\frac{dv}{dt}+v=f(v)$, (which is a separable differential equation) by using substitution of $y = t \cdot v$ or $v =\frac{y}{t}$. </p> </blockquote> <p>I did the following:</p> <p>By using the chain-rule, we can write down $\frac{dy}{dt} = \frac{dy}{dv} \cdot\frac{dv}{dt}$. The first part of this product, $\frac{dy}{dv}$ is equal to $t$, as $y=t \cdot v$. Using substitution, we can also see that $f(y/t)=f(v)$. Thus, we have found the following equation: $ t\cdot\frac{dv}{dt} = f(v)$.</p> <p>My question is what I did wrong, what did I do to lose the '$+ v$' part of the equation?</p> <p>Thanks for your help,</p> <p>K. Kamal</p> <hr> <p>If anyone is still interested, I forgot that v is a function of t and therefore we need to use the product rule.</p>
aleden
468,742
<p>To implicitly differentiate, you must apply the chain rule:</p> <p>$$\frac{\rm d}{\rm dx}(x^2+y^2)=\frac{\rm d}{\rm dx}(16)$$ $$2x+2y\frac{\rm dy}{\rm dx}=0$$ $$\frac{\rm dy}{\rm dx}=-\frac{x}{y}$$</p>
2,524,809
<p>I am trying to solve the following problem:</p> <blockquote> <p>Show that $\frac{dy}{dt}=f(y/t)$ is equal to $t\frac{dv}{dt}+v=f(v)$, (which is a separable differential equation) by using substitution of $y = t \cdot v$ or $v =\frac{y}{t}$. </p> </blockquote> <p>I did the following:</p> <p>By using the chain-rule, we can write down $\frac{dy}{dt} = \frac{dy}{dv} \cdot\frac{dv}{dt}$. The first part of this product, $\frac{dy}{dv}$ is equal to $t$, as $y=t \cdot v$. Using substitution, we can also see that $f(y/t)=f(v)$. Thus, we have found the following equation: $ t\cdot\frac{dv}{dt} = f(v)$.</p> <p>My question is what I did wrong, what did I do to lose the '$+ v$' part of the equation?</p> <p>Thanks for your help,</p> <p>K. Kamal</p> <hr> <p>If anyone is still interested, I forgot that v is a function of t and therefore we need to use the product rule.</p>
Faraad Armwood
317,914
<p>Use the chain rule. Here you are assuming $y$ is a function of $x$, i.e. you have $y = y(x)$. It now follows that $y^2 = (y(x))^2 = (g \circ y)(x)$, where $g(u)=u^2$ is the squaring function, and so by the chain rule,</p> <p>$$ \frac{d}{dx}y^2 = g'(y(x)) \cdot y'(x) = 2y \cdot \frac{d y}{dx}$$</p>
3,534,566
<p>I want to know if my answer is equivalent to the one in the back of the book. if so what was the algebra? if not then what happened?</p> <p><span class="math-container">$$x^2y'+ 2xy = 5y^3$$</span></p> <p><span class="math-container">$$y' = -\frac{2y}{x} + \frac{5y^3}{x^2}$$</span></p> <p><span class="math-container">$n = 3$</span></p> <p><span class="math-container">$v = y^{-2}$</span></p> <p><span class="math-container">$-\frac{1}{2}v'=y^{-3}$</span></p> <p><span class="math-container">$$\frac{-1}{2}v'-\frac{2}{x}v = \frac{5}{x^2}$$</span></p> <p><span class="math-container">$$v'+\frac{4}{x}v=\frac{-10}{x^2}$$</span></p> <p>this is now a first order linear ODE where:</p> <p><span class="math-container">$$y(x)=\frac{1}{u(x)}\int u(x)q(x)$$</span></p> <p><span class="math-container">$u(x)=e^{4\int\frac{1}{x}}=x^4$</span></p> <p><span class="math-container">$q(x) = \frac{-10}{x^2}$</span></p> <p><span class="math-container">$$\frac{1}{x^4}\int x^4 \frac{-10}{x^2}=\frac{1}{x^4}\frac{-10x^{2+1}}{2+1}+C$$</span></p> <p>which leaves us with :</p> <p><span class="math-container">$$\frac{1}{y^2} = \frac{-10}{3x}+x^{-4}C$$</span></p> <p>naturally </p> <p><span class="math-container">$$y^2= \frac{1}{\frac{-10}{3x}+x^{-4}C}$$</span></p> <p>The book states the answer as being:</p> <p><span class="math-container">$$y^2= \frac{x}{2+Cx^5}$$</span> </p>
nonuser
463,553
<p>Write:<span class="math-container">$$(x^2y)' = 5y^3$$</span> Now let <span class="math-container">$z=x^2y$</span> then we get <span class="math-container">$$z' = {5z^3\over x^6}\implies {z'\over z^3} = 5x^{-6}$$</span> So after integrating both sides we get <span class="math-container">$$ -{z^{-2}\over 2} = -x^{-5}+c'\implies {1\over 2x^4y^2} = {1-c'x^5\over x^5}$$</span></p> <p>So <span class="math-container">$$ y^2 = {x\over 2+cx^5}$$</span> where <span class="math-container">$c=-2c'$</span>.</p>
3,534,566
<p>I want to know if my answer is equivalent to the one in the back of the book. if so what was the algebra? if not then what happened?</p> <p><span class="math-container">$$x^2y'+ 2xy = 5y^3$$</span></p> <p><span class="math-container">$$y' = -\frac{2y}{x} + \frac{5y^3}{x^2}$$</span></p> <p><span class="math-container">$n = 3$</span></p> <p><span class="math-container">$v = y^{-2}$</span></p> <p><span class="math-container">$-\frac{1}{2}v'=y^{-3}$</span></p> <p><span class="math-container">$$\frac{-1}{2}v'-\frac{2}{x}v = \frac{5}{x^2}$$</span></p> <p><span class="math-container">$$v'+\frac{4}{x}v=\frac{-10}{x^2}$$</span></p> <p>this is now a first order linear ODE where:</p> <p><span class="math-container">$$y(x)=\frac{1}{u(x)}\int u(x)q(x)$$</span></p> <p><span class="math-container">$u(x)=e^{4\int\frac{1}{x}}=x^4$</span></p> <p><span class="math-container">$q(x) = \frac{-10}{x^2}$</span></p> <p><span class="math-container">$$\frac{1}{x^4}\int x^4 \frac{-10}{x^2}=\frac{1}{x^4}\frac{-10x^{2+1}}{2+1}+C$$</span></p> <p>which leaves us with :</p> <p><span class="math-container">$$\frac{1}{y^2} = \frac{-10}{3x}+x^{-4}C$$</span></p> <p>naturally </p> <p><span class="math-container">$$y^2= \frac{1}{\frac{-10}{3x}+x^{-4}C}$$</span></p> <p>The book states the answer as being:</p> <p><span class="math-container">$$y^2= \frac{x}{2+Cx^5}$$</span> </p>
Lutz Lehmann
115,115
<p>You switched one sign too many in <span class="math-container">$$ -\frac12v'+\frac2xv=\frac5{x^2} $$</span> Then <span class="math-container">$$ \left(\frac{v}{x^4}\right)'=\frac{v'}{x^4}-\frac{4v}{x^5}=-\frac{10}{x^6} \implies \frac{v}{x^4}=\frac2{x^5}+C $$</span> etc.</p>
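The book's answer can also be checked numerically: substitute $y(x)=\sqrt{x/(2+Cx^5)}$ into $x^2y'+2xy-5y^3$ and confirm the residual vanishes (a sketch using a central finite difference for $y'$, tested at a few arbitrary points with $x>0$ and $C\ge 0$ so the square root is real):

```python
def y(x, C):
    """The book's answer, positive branch: y = sqrt(x / (2 + C x^5))."""
    return (x / (2 + C * x**5)) ** 0.5

def residual(x, C, h=1e-6):
    """x^2 y' + 2 x y - 5 y^3, with y' approximated by a central difference."""
    yp = (y(x + h, C) - y(x - h, C)) / (2 * h)
    return x**2 * yp + 2 * x * y(x, C) - 5 * y(x, C) ** 3

# The residual should vanish up to finite-difference error for any constant C.
for C in (0.0, 1.0, 3.0):
    for x in (0.5, 1.0, 1.7):
        assert abs(residual(x, C)) < 1e-6
```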
63,525
<p>I asked this question in math.stackexchange but I didn't have much luck. It might be more appropiate for this forum. Let $z_1,z_2,…,z_n$ be i.i.d random points on the unit circle ($|z_i|=1$) with uniform distribution on the unit circle. Consider the random polynomial $P(z)$ given by $$ P(z)=\prod_{i=1}^{n}(z−z_i). $$ Let $m$ be the maximum absolute value of $P(z)$ on the unit circle $m=\max\{|P(z)|:|z|=1\}$.</p> <p>How can I estimate $m$? More specifically, I would like to prove that there exist $\alpha&gt;0$ such that the following holds almost surely as $n\to\infty$ $$ m\geq \exp(\alpha\sqrt{n}). $$ Or at least that for every $\epsilon&gt;0$ there exists $n$ sufficiently large such that $$ \mathbb{P}(m\geq\exp(\alpha\sqrt{n}))&gt;1-\epsilon $$ for some $\alpha$ independent on $n$.</p> <p>Any idea of what can be useful here?</p>
Johan Wästlund
14,302
<p>Here is a more careful (EDIT: even more careful!) argument that gives an affirmative answer to the weaker version of the question (as stated in the edit to my previous post, I doubt that the stronger version is true).</p> <p>The argument uses the following lemma, which ought to be known. If someone has a reference, please leave a comment.</p> <p>Lemma: Let $a_1,\dots, a_n$ be real numbers with each $a_i\geq 1$, and let $X_1,\dots,X_n$ be independent random variables, each uniform on $\pm 1$. Let $I$ be an interval of length $2r$. Then $$Pr(a_1X_1+\cdots+a_nX_n\in I) \leq \frac{1+r}{\sqrt{\pi n/2}}.$$</p> <p>Proof: Let $f(X)$ denote $a_1X_1+\cdots+a_nX_n$. In the Boolean lattice of all assignments of $\pm 1$ to the variables $X_1,\dots,X_n$, consider a random walk starting from the point where all $X_i$'s are $-1$, and moving in $n$ steps to the point where they are all $+1$, in each step choosing uniformly and independently of the history a variable which is $-1$ and changing it to $+1$. </p> <p>What is the expectation of the number $N(I)$ of steps of this walk at which $f(X) \in I$? On one hand, $N(I)\leq 1+r$, since $f(X)$ increases by at least 2 in each step.</p> <p>On the other hand, the probability that the walk passes through any given point in the Boolean lattice is at least $2^{-n}\sqrt{\pi n/2}$ (this probability is minimized at the middle level(s) of the lattice, and the claim follows by well-known estimates of the central binomial coefficient). Therefore $$EN(I) \geq \frac{\#\{X:f(X)\in I\}}{2^n}\cdot \sqrt{\pi n/2} = Pr(f(X)\in I) \cdot \sqrt{\pi n/2}.$$</p> <p>It follows that $$Pr(f(X)\in I) \leq \frac{1+r}{\sqrt{\pi n/2}}. \qquad \square$$</p> <p>As was explained in the earlier post, we can randomly choose $n$ pairs of opposite points $\{z_i, -z_i\}$, then find $z$ with $\left|P(z)P(-z)\right|=1$ given only this information, and finally fix the $z_i$'s by $n$ independent coin flips. 
</p> <p>In order to apply the lemma, we want to have, before the coin flipping, $n/2$ pairs $z_i, -z_i$ making an angle of say at most $60$ degrees with $z, -z$, so that each of the $n/2$ corresponding coin flips determine the sign of a term of at least $\log 3$ in $\log\left|P(z)\right| - \log\left|P(-z)\right|$. Actually, after choosing the $n$ pairs $z_i, -z_i$, this a. a. s. holds for every $z$. The idea is to divide the circle into, say, 100 equally large sectors. With high probability, every pair of opposite sectors will contain at least $n/51$ pairs (as opposed to the expected number, $n/50$).</p> <p>We now condition on the outcomes of the coin flips for the smaller terms (pairs $z_i, -z_i$ more or less orthogonal to $z$). The lemma above tells us that for any interval $I$ of length $4\alpha\sqrt{n}$, the probability that $\log\left|P(z)\right| - \log\left|P(-z)\right| \in I$ is at most $4\alpha/\sqrt{\pi} + O(1/\sqrt{n})$.</p> <p>In particular, with probability at least $1-O(\alpha)$, the absolute value of $\log\left|P(z)\right| - \log\left|P(-z)\right|$ is at least $2\alpha\sqrt{n}$, and since $\log\left|P(z)\right|=-\log\left|P(-z)\right|$, $\max\left(\log\left|P(z)\right|, \log\left|P(-z)\right|\right)\geq \alpha\sqrt{n}$ as required.</p>
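For experimenting with the conjectured growth rate (a Monte-Carlo sketch, not part of the proof), one can approximate $m$ by evaluating $|P(z)|$ on a grid of points of the unit circle; plotting the result against $\exp(\alpha\sqrt{n})$ for growing $n$ gives a feel for the bound:

```python
import cmath
import math
import random

def max_abs_on_circle(roots, grid=512):
    """Max of |P(z)| over `grid` equally spaced points on the unit circle,
    where P(z) = prod (z - r) over the given roots."""
    best = 0.0
    for g in range(grid):
        z = cmath.exp(2j * math.pi * g / grid)
        p = 1 + 0j
        for r in roots:
            p *= z - r
        best = max(best, abs(p))
    return best

def random_roots(n, seed=0):
    """n i.i.d. uniform points on the unit circle."""
    rng = random.Random(seed)
    return [cmath.exp(2j * math.pi * rng.random()) for _ in range(n)]

m = max_abs_on_circle(random_roots(200))
```

Note the grid value only bounds $m$ from below; a grid noticeably finer than $1/n$ is needed to resolve the peaks of $|P|$.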
2,214,137
<p>How many positive integer solutions does the equation $a+b+c=100$ have if we require $a&lt;b&lt;c$?</p> <p>I know how to solve the problem if it was just $a+b+c=100$ but the fact it has the restriction $a&lt;b&lt;c$ is throwing me off.</p> <p>How would I solve this?</p>
Bérénice
317,086
<p>$a&lt;b&lt;c$ can be encoded by $b=a+x$ and $c=b+y=a+(x+y)$, where $a,x,y&gt;0$: $$a + b + c = 100\iff a+a+x+a+x+y=100 \iff 3a+2x+y=100$$</p> <p>Let $a_k$ be the number of solutions of the diophantine equation $3a+2x+y=k$, where $a,x,y&gt;0$. </p> <p>$$\sum_{k=0}^\infty a_k t^k=\sum_{k=0}^\infty\Big(\sum_{3a+2x+y=k\\a,x,y&gt;0}t^k\Big)=\Big(\sum_{k=1}^\infty t^{3k}\Big)\Big(\sum_{k=1}^\infty t^{2k}\Big)\Big(\sum_{k=1}^\infty t^{k}\Big)=\frac{t^6}{(1-t^3)(1-t^2)(1-t)}$$</p> <p>We now calculate $a_k$ explicitly: $$\frac{t^6}{(1-t^3)(1-t^2)(1-t)}\\=\frac{t + 2}{9 (t^2 + t + 1)} - \frac{89}{72 (t - 1)} + \frac{1}{8 (t + 1)} - \frac{3}{4 (t - 1)^2} -\frac{1}{6 (t - 1)^3}-1\\=\sum_{n=1}^\infty t^n\frac{1}{72} \left(47 + 9 (-1)^n + 8 e^{-2 i n \pi/3} + 8 e^{2 i n \pi/3} + 6 n^2-36n\right)\\=\sum_{n=1}^\infty t^n\frac{1}{72} \left(47 + 9 (-1)^n + 16\cos\left(\frac{2\pi n}{3}\right) + 6 n^2-36n\right)$$</p> <p>So: $$a_{100}=\frac{1}{72} (47+9 -8 +60000-3600)=784$$</p>
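As a quick sanity check (my own, not part of the derivation), a brute-force enumeration confirms the value $a_{100}=784$:

```python
# Count triples of positive integers with a < b < c and a + b + c = 100.
count = sum(
    1
    for a in range(1, 100)
    for b in range(a + 1, 100)
    if (c := 100 - a - b) > b   # c > b >= 2 also guarantees c is positive
)
print(count)  # 784
```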
2,214,137
<p>How many positive integer solutions does the equation $a+b+c=100$ have if we require $a&lt;b&lt;c$?</p> <p>I know how to solve the problem if it was just $a+b+c=100$ but the fact it has the restriction $a&lt;b&lt;c$ is throwing me off.</p> <p>How would I solve this?</p>
Vik78
304,290
<p>First we count the number of triples of positive integers $(a, b, c)$ with $a + b + c = 100$. There are $\binom{99}{2}$ of them, which results from an elementary application of stars and bars. Just line up a hundred dots, place a bar in between the $a$th and $(a +1)$th dot, and then place another bar $b$ dots to the right of the first one. Then $c$ is the number of dots to the right of the second bar. There are 99 spaces in between a hundred dots and you're choosing two of them to place bars in.</p> <p>This is an overcount since it doesn't account for the condition $a &lt; b &lt; c$. First we should subtract the number of triples in which $a, b, c$ are not all distinct. There are 49 triples such that $a = b$: to count them, choose $x$ with $ 0 &lt; x&lt;50$, set $a = b = x$ and $c = 100-2x$. This accounts for all possible cases since if $a = b \ge 50$ then $c$ is not positive. There are also 49 triples with $a = c$, or with $b = c$, respectively. There are no triples with $a = b = c$, so the total number of triples with $a , b, c$ not all distinct is $3*49 = 147$.</p> <p>The number of triples of positive integers $(a, b, c)$ with $a, b, c$ distinct and $a + b + c = 100$ is therefore $\binom{99}{2} -147$. Given a triple $(a, b, c)$ with $a&lt; b &lt; c$ and $a + b + c = 100$, there are six distinct ways to permute $a, b, c$, all of which our current estimate accounts for. Therefore we should divide by 6 to throw out all the triples that are in the wrong order. The final answer is $\frac{1}{6}(\binom{99}{2} -147) = 784$.</p>
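Each count in this argument can be verified by direct enumeration (my own sanity check, not part of the answer):

```python
from itertools import product
from math import comb

# All ordered triples of positive integers summing to 100.
triples = [(a, b, 100 - a - b)
           for a, b in product(range(1, 99), repeat=2)
           if 100 - a - b >= 1]
assert len(triples) == comb(99, 2)                  # 4851, by stars and bars
# Triples in which a, b, c are not all distinct.
not_distinct = [t for t in triples if len(set(t)) < 3]
assert len(not_distinct) == 3 * 49                  # 147, and none with a = b = c
print((comb(99, 2) - 147) // 6)                     # 784
```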
892,758
<p>Theorem: Let $(X,\mathscr{T})$ be a topological space. If $E$ is connected and $K$ is such that $E\subseteq K\subseteq\mathrm{cl}(E)$, then $K$ is connected. (Cl(E) is closure of E)</p> <p>Question: Consider the standard topology on $\mathbf{R}$. Let $\mathbf{E}$ = (2,4). Then cl($\mathbf{E}$)= [2,4]. Let $\mathbf{B}$ = [2,4) My claim is that B is disconnected since I can partition it with [2,3) and [3,4).</p> <p>What am I doing wrong?</p> <p>(I have just started studying topology on my own so it is very possible that mine is a stupid mistake.)</p>
Mohammad Khosravi
87,886
<p>The point is that $[3,4)$ is not an open set in $[2,4)$ with the subspace topology. In the definition of connectedness, $X$ is disconnected when there exist two disjoint non-empty <em>open</em> sets $A$ and $B$ with $X = A\cup B$; if no such sets exist, $X$ is connected. Your two pieces do not give such a partition, since $[3,4)$ is not open.</p>
892,758
<p>Theorem: Let $(X,\mathscr{T})$ be a topological space. If $E$ is connected and $K$ is such that $E\subseteq K\subseteq\mathrm{cl}(E)$, then $K$ is connected. (Cl(E) is closure of E)</p> <p>Question: Consider the standard topology on $\mathbf{R}$. Let $\mathbf{E}$ = (2,4). Then cl($\mathbf{E}$)= [2,4]. Let $\mathbf{B}$ = [2,4) My claim is that B is disconnected since I can partition it with [2,3) and [3,4).</p> <p>What am I doing wrong?</p> <p>(I have just started studying topology on my own so it is very possible that mine is a stupid mistake.)</p>
Henno Brandsma
4,280
<p>A set is only disconnected when we can partition it into two disjoint non-empty <em>open</em> sets (open in the subspace topology), not just two sets. And $[3,4)$ is not open in $[2,4)$: any open subset of $\mathbb{R}$ that contains $3$ also contains points of $[2,4)$ smaller than $3$.</p> <p>Why don't you partition $(2,4)$ into $(2,3)$ and $[3,4)$, for example? The same would hold, and still you believe/know that $(2,4)$ is connected.</p>
2,405,905
<p>Let $R$ be a commutative semi-local ring (finitely many maximal ideals) such that $R/P$ is finite for every prime ideal $P$ of $R$ ; then is it true that $R$ is Artinian ring ? From the assumed condition , we get that $R$ has Krull dimension 0 ; so it is enough to ask : Is $R$ a Noetherian ring ? From the semi-local and $0$ Krull dimension condition , it also follows that $R$ has finite Spectrum . But I am unable to say whether all this really implies $R$ is Noetherian or not .</p>
Elle Najt
54,092
<p>As you say, the ring is zero-dimensional because each $R/P$ is a finite domain, hence a field. Hence we get a map $R \to \prod R/m_i$, running over the maximal ideals. The target here is a product of finite fields, hence the image is Noetherian. The kernel is the nilradical of $R$, since the nilradical is the intersection of all primes, which in this case reduces to the intersection of all maximal ideals.</p> <p>So if $R$ is reduced, we are done.</p> <p>If you can prove that the nilradical in your case is Noetherian, then you can conclude from the two-out-of-three property of the Noetherian condition in short exact sequences.</p> <p>I <em>think</em> you can get a non-reduced counterexample like this:</p> <p>$R = \mathbb{F}_p[x_1, x_2, \ldots, x_n, \ldots] / (x_1, x_2, \ldots)^2$.</p> <p>This is a dimension zero local ring and the quotient by the maximal ideal is the finite field $\mathbb{F}_p$. </p> <p>(In words: $R$ is the quotient of the countably infinite dimensional polynomial ring over $\mathbb{F}_p$ by the square of the maximal ideal corresponding to zero.)</p> <p>However, there is an infinite ascending chain of ideals $(x_1) \subset (x_1,x_2) \subset (x_1,x_2,x_3) \subset \ldots$, so $R$ is not Noetherian.</p>
398,176
<p>I had a calculus course this semester in which I was taught that the integration of the area gives the size (volume):</p> <p>$$V = \int\limits_a^b {A(x)dx}$$</p> <p>But this doesn't seem to work with the square. Since the size of the area of the square is $x^2$ then $A(x) = {x^2}$, then: </p> <p>$$V = \int\limits_{ - r}^r {{x^2}dx} = \left[ {\frac{{{x^3}}}{3}} \right]_{ - r}^r = \frac{{{r^3}}}{3} - \frac{{ - {r^3}}}{3} = \frac{2}{3}{r^3}$$</p> <p>It's clear that this is not the volume of the cube. Why is this the case? Am I misunderstanding something?</p>
response
76,635
<p>It should be:</p> <p>$$V = \int_0^a a^2 dz$$</p> <p>where $a$ is the length of one of the sides of the square.</p> <p>Or using your notation:</p> <p>$$V = \int_0^x x^2 dz$$</p> <p>where $z$ is the dimension over which you are integrating.</p>
398,176
<p>I had a calculus course this semester in which I was taught that the integration of the area gives the size (volume):</p> <p>$$V = \int\limits_a^b {A(x)dx}$$</p> <p>But this doesn't seem to work with the square. Since the size of the area of the square is $x^2$ then $A(x) = {x^2}$, then: </p> <p>$$V = \int\limits_{ - r}^r {{x^2}dx} = \left[ {\frac{{{x^3}}}{3}} \right]_{ - r}^r = \frac{{{r^3}}}{3} - \frac{{ - {r^3}}}{3} = \frac{2}{3}{r^3}$$</p> <p>It's clear that this is not the volume of the cube. Why is this the case? Am I misunderstanding something?</p>
Harish Chandra Rajpoot
210,295
<p>In general, if a solid has cross-sectional area $A$ which is constant along its normal length, say $L$, then the volume of such a solid is $$\color{blue}{V=\int_{0}^L A\,dx=A\int_{0}^L dx=A[L-0]=AL}$$ In fact, the cross-sectional area of a cube is not varying with the distance $x$: it is $a^2$, constant along the entire length $x=a$. Hence the volume $V$ of the cube is $$V=\int_{0}^a a^2\,dx=a^2\int_{0}^a dx=a^2[a-0]=a^3$$ </p>
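Numerically (my own check), a Riemann sum over the constant cross-section recovers $a^3$, while integrating the varying area $x^2$ from $-r$ to $r$, as in the question, gives $\tfrac{2}{3}r^3$: the volume of a different solid (two square pyramids joined tip to tip), not a cube:

```python
a = 2.0                    # side of the cube, so r = a / 2 = 1
n = 100_000
h = a / n
# Correct setup: every cross-section of the cube has the constant area a**2.
cube = sum(a ** 2 * h for _ in range(n))
# Setup from the question: the area x**2 varies with the integration variable,
# which describes two square pyramids joined at their apexes, not a cube.
pyramids = sum((-a / 2 + (i + 0.5) * h) ** 2 * h for i in range(n))
print(cube, pyramids)      # about 8.0 and about 0.6667
assert abs(cube - a ** 3) < 1e-6
assert abs(pyramids - 2 * (a / 2) ** 3 / 3) < 1e-4
```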
27,965
<p>I'm looking at <a href="https://math.stackexchange.com/questions/2669893/calculating-the-sums-of-series">this question</a>. I gave the answer that was accepted. Please bear in mind that, when I answered this question, it was a different edit. In particular, there were more parts to the question.</p> <p>The reason I'm here is because my question got a few up votes as well as a few down votes, attaining a score of $-1$ at its nadir. I was wondering, was I justified in giving this answer?</p> <p>I'm aware of the guidelines about good questions for the site, and also aware that this question fails them. I've also seen <a href="https://math.meta.stackexchange.com/questions/27259/is-it-acceptable-to-answer-a-poor-quality-question">this meta post</a>.</p> <p>The reason I gave my answer is because I believed the asker really did have little clue about how to go about these questions. I figured that all the asker needed was one good, fully justified worked example, and they could do the rest on their own. The asker's comment at the end vindicated this view, and they changed the question so that my answer would more comfortably fit it!</p> <p>I understand that it's generally not preferable to reward poor quality questions on the site. I don't want the quality of questions to (further) drop. But, I do think that, for many people, these first steps are the most daunting. It's very hard for a new student to produce the work they've done, when they haven't done any. First and foremost, I want to help people on this site, as I'm sure most other people here do as well, and part of me finds it hard to accept that these people, who are genuinely confused at the first hurdle, cannot get help.</p> <p>I know there are ways around it too. I know that a more savvy user of the site knows that they can mitigate this, for example, by copying out the relevant formulae they know. However, this is not something that new users pick up easily. 
I don't think it necessarily indicates laziness, or a lack of due consideration of a problem, merely an unfamiliarity with the subtler workings of this site.</p> <p>Bear in mind, I only answered one part of the question too. I wasn't giving out all the answers for all the parts, for the asker to present as their work. The asker would still be required to do most of the work in order to answer the question. I was just using one part as an example to impart the necessary tools for the asker to answer the question.</p> <p>So, with all that said, was I justified in giving this answer? Or were the downvotes justified instead?</p>
Ethan Bolker
72,858
<p>Interesting question.</p> <p>In general, I try not to answer short questions showing no work - commenting instead with boilerplate welcoming a new user (often that's the case, though not here) and a "show your work" prompt. There are folks here who will jump in with quick correct answers. I will often comment disapprovingly on those, though don't usually downvote. They are after all correct.</p> <p>But occasionally I sense genuine confusion coupled with an ability and willingness to learn. Then I may invest some time (as you did) in an answer I hope is instructive as well as correct. Sometimes my words vanish into the void. Sometimes the OP turns a corner and offers genuine thanks (whether or not s/he knows about accepting or upvoting). Those are the times when I feel rewarded and appreciated. That's what happened to you this time. </p> <p>You weren't wrong. There really is no "wrong" or "right" on MSE, just varying views of proper conventions. Keep on helping.</p>
2,508,508
<p>Let $x_1 \in \mathbb{R}$ with $ x_1&gt;1$, and let $x_{k+1}=2- \frac{1}{x_k}$ for all $k \in \mathbb{N}$. Show that the sequence $(x_k)_k$ is monotone and bounded and find its limit.</p> <p>I am not sure how to start this problem.</p>
Doug M
317,162
<p>By induction, $x_k&gt;1$ for every $k$:</p> <p>$x_1 &gt; 1$, and if $x_k &gt; 1$ then $\frac {1}{x_{k}} &lt; 1$, so $x_{k+1} = 2 - \frac {1}{x_k} &gt; 1$.</p> <p>$\{x_k\}$ is monotonic: we will show that $x_{k+1} - x_{k} &lt; 0$.</p> <p>$x_{k+1} - x_{k}\\ = 2 - \frac {1}{x_k} - x_{k}\\ = \frac {2x_k -1 - x_k^2}{x_k}\\ = -\frac {(x_k - 1)^2}{x_k}$</p> <p>and since $x_k &gt; 1$, $-\frac {(x_k - 1)^2}{x_k} &lt; 0$.</p> <p>The sequence is monotonic and bounded, therefore convergent: $\{x_k\}$ converges to some $x$. Passing to the limit in the recurrence:</p> <p>$x = 2 - \frac 1x\\ x^2 - 2x + 1 = 0\\ (x-1)^2 = 0\\ x = 1$</p>
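Numerically (my own check), iterating the recurrence from any $x_1 > 1$ gives a strictly decreasing sequence that stays above $1$ and creeps down to the limit $1$:

```python
x = 5.0                      # any starting value x_1 > 1
seq = [x]
for _ in range(200):
    x = 2 - 1 / x
    seq.append(x)
# strictly decreasing and bounded below by 1
assert all(a > b > 1 for a, b in zip(seq, seq[1:]))
print(seq[:4], seq[-1])      # approaches 1, slowly: x_k - 1 behaves like 1/k
assert abs(seq[-1] - 1) < 0.01
```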
3,521,224
<p>Let <span class="math-container">$(U_1,U_2,...) , (V_1,V_2,...)$</span> be two independent sequences of i.i.d. Uniform (0, 1) random variables. Define the stopping time <span class="math-container">$N = \min\left\{n\geqslant 1 : U_n \leqslant V^2_n\right\}$</span>.</p> <p>Obtain <span class="math-container">$P(N = n)$</span> and <span class="math-container">$P(V_N \leqslant v)$</span> for <span class="math-container">$n = 1,2,\ldots$</span> and <span class="math-container">$0 \leqslant v \leqslant 1$</span>.</p> <p>I know that I should use conditioning in order to get the probability. </p> <p>I also know that if <span class="math-container">$U_1 \leqslant V_1^2$</span> then <span class="math-container">$N=1$</span>.</p>
user8675309
735,806
<p>To solve with minimal calculation, and focusing on your comment "I know that I should use conditioning in order to get the probability". </p> <p>It is common to try to do "first step analysis" in these sorts of problems. Letting <span class="math-container">$A$</span> be the event <span class="math-container">$\{V_1^2 \gt U_1\}$</span> (ignoring the zero probability set where <span class="math-container">$V_1^2 = U_1$</span>), introduce the indicator (Bernoulli) random variable <span class="math-container">$\mathbb I_A$</span>: </p> <p><span class="math-container">$p = E\Big[\mathbb I_A\Big] = E\Big[E\big[\mathbb I_A\big \vert U_1\big]\Big]$</span><br> and in particular, for <span class="math-container">$x \in[0,1]$</span><br> <span class="math-container">$E\big[\mathbb I_A\big \vert U_1 = x\big] = Pr(V_1^2 \gt U_1 =x) = Pr(V_1 \gt \sqrt{x}) = 1 - \sqrt{x}$</span><br> which is given by the complementary CDF of <span class="math-container">$V_1$</span>. Each <span class="math-container">$x \in [0,1]$</span> has density 1, which gives </p> <p><span class="math-container">$p = E\Big[\mathbb I_A\Big] = E\Big[E\big[\mathbb I_A\big \vert U_1\big]\Big] = \int_{0}^{1} (1-\sqrt{x})dx = 1-\int_{0}^{1} \sqrt{x}dx =\frac{1}{3}$</span> </p> <p>This is a Bernoulli process, so it is immediate that <span class="math-container">$N$</span> has a geometric distribution with success parameter <span class="math-container">$p$</span>, i.e. <span class="math-container">$P(N = n) = \left(\frac{2}{3}\right)^{n-1}\frac{1}{3}$</span>. Since the trials are i.i.d., <span class="math-container">$V_N$</span> is distributed as <span class="math-container">$V_1$</span> conditioned on success, so <span class="math-container">$P(V_N \leqslant v) = \frac{Pr(V_1 \leqslant v,\; U_1 \leqslant V_1^2)}{p} = \frac{\int_0^v t^2\,dt}{1/3} = v^3$</span>. </p>
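A Monte-Carlo simulation (my own check, with an arbitrary seed for reproducibility) agrees with $p = 1/3$ and with $V_N$ having the distribution of $V_1$ conditioned on success, i.e. $P(V_N \leqslant v)=v^3$:

```python
import random

random.seed(0)
trials = 100_000
first_try = 0            # trials with N = 1, to estimate p
vn_values = []           # realisations of V_N
for _ in range(trials):
    n = 0
    while True:
        n += 1
        u, v = random.random(), random.random()
        if u <= v * v:   # the stopping condition U_n <= V_n**2
            break
    first_try += n == 1
    vn_values.append(v)

p_hat = first_try / trials
cdf_half = sum(v <= 0.5 for v in vn_values) / trials
print(p_hat, cdf_half)   # near 1/3 = P(N = 1) and near 0.5**3 = 0.125
assert abs(p_hat - 1 / 3) < 0.01
assert abs(cdf_half - 0.125) < 0.01
```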
4,316,780
<p>Let <span class="math-container">$X$</span> be a real Banach space. Let <span class="math-container">$J \colon X \to 2^{X^*}$</span> be its (normalized) duality map, <span class="math-container">$$ J(x) = \{ x^* \in X^* \colon \langle x^* , x \rangle =||x|| \ ||x^*||, \ || x^* ||=||x|| \} , \ x \in X.$$</span> Assume that <span class="math-container">$X$</span> is smooth, so that <span class="math-container">$J(x)= \{j(x)\}$</span> is a singleton. Fix <span class="math-container">$x,y \in X$</span>. It is known that, if <span class="math-container">$$||x|| \le ||x+ry||, \tag 1$$</span> for every <span class="math-container">$r \in \mathbb R$</span>, then <span class="math-container">$\langle j(x),y \rangle =0$</span>. What happens if we only take <span class="math-container">$r \ge0$</span> in <span class="math-container">$(1)$</span>? I expect that <span class="math-container">$ \langle j(x),y \rangle \ge 0$</span>, since that is the case for Hilbert spaces. Indeed, if <span class="math-container">$X$</span> is a Hilbert space with inner product <span class="math-container">$(\cdot,\cdot) $</span>, then <span class="math-container">$J$</span> is just the identity map, and <span class="math-container">$(1)$</span> implies that <span class="math-container">$$ r^2 ||y||^2 + 2r (x,y) \ge 0, $$</span> for every <span class="math-container">$r \ge 0$</span>. Dividing by <span class="math-container">$r$</span> and then letting <span class="math-container">$r \to 0$</span> we obtain that <span class="math-container">$(x,y) \ge 0$</span>.</p>
daw
136,544
<p>Let <span class="math-container">$f(x) = \|x\|$</span>. Then (1) implies <span class="math-container">$$ f'(x; y) \ge 0. $$</span> Now <span class="math-container">$f$</span> is a continuous convex function whose subdifferential at <span class="math-container">$x \ne 0$</span> is <span class="math-container">$J(x)/\|x\|$</span>, a singleton by smoothness, so the directional derivative is <span class="math-container">$f'(x; y) = \langle j(x), y\rangle/\|x\|$</span>, and hence <span class="math-container">$\langle j(x), y\rangle \ge 0$</span>.</p>
1,171,980
<p>I would like to show the following</p> <blockquote> <p>$$-x-x^2 \le \log(1-x) \le -x, \quad x \in [0,1/2].$$</p> </blockquote> <p>I know that for $|x|&lt;1$, we have $\log(1-x)=-\left(x+\frac{x^2}{2}+\cdots\right)$. The inequality on the right follows because the difference is $\frac{x^2}{2}+ \frac{x^3}{3} + \cdots \ge 0$.</p> <p>For the inequality on the left, the difference is $\frac{x^2}{2}-\left(\frac{x^3}{3}+\frac{x^4}{4} + \cdots\right)$. How do I show this is nonnegative?</p>
Jack D'Aurizio
44,121
<p>$$\frac{d}{dx}\left(\log(1-x)+x+x^2\right) = 1+2x-\frac{1}{1-x}=\frac{x(1-2x)}{1-x} $$ is a non-negative function on $\left[0,\frac{1}{2}\right]$. Since $\log(1-x)+x+x^2$ vanishes at $x=0$ and is non-decreasing on $\left[0,\frac{1}{2}\right]$, it is non-negative there, hence the LHS-inequality follows.</p>
1,171,980
<p>I would like to show the following</p> <blockquote> <p>$$-x-x^2 \le \log(1-x) \le -x, \quad x \in [0,1/2].$$</p> </blockquote> <p>I know that for $|x|&lt;1$, we have $\log(1-x)=-\left(x+\frac{x^2}{2}+\cdots\right)$. The inequality on the right follows because the difference is $\frac{x^2}{2}+ \frac{x^3}{3} + \cdots \ge 0$.</p> <p>For the inequality on the left, the difference is $\frac{x^2}{2}-\left(\frac{x^3}{3}+\frac{x^4}{4} + \cdots\right)$. How do I show this is nonnegative?</p>
marty cohen
13,079
<p>$\begin{array}{ll} -\ln(1-x) &amp;=\sum_{k=1}^{\infty} \frac{x^k}{k}\\ &amp;=x+\sum_{k=2}^{\infty} \frac{x^k}{k}\\ &amp;=x+x^2\sum_{k=2}^{\infty} \frac{x^{k-2}}{k}\\ &amp;=x+x^2\sum_{k=0}^{\infty} \frac{x^{k}}{k+2}\\ &amp;\le x+x^2\sum_{k=0}^{\infty} \frac{x^{k}}{2}\\ &amp;= x+\frac{x^2}{2}\sum_{k=0}^{\infty} x^{k}\\ &amp;= x+\frac{x^2}{2}\frac1{1-x}\\ &amp;= x+\frac{x^2}{2(1-x)}\\ &amp;\le x+x^2 \quad \text{if $0 \le x \le \frac12$}\\ \end{array} $</p>
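Both bounds can be checked numerically on a grid over $[0,1/2]$ (my own sanity check):

```python
import math

for i in range(501):
    x = i / 1000                       # grid over [0, 0.5]
    lower, middle, upper = -x - x * x, math.log(1 - x), -x
    assert lower <= middle <= upper, x
print("both inequalities hold on the grid")
```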
562,707
<p>This is a famous rudimentary problem : how to use mathematical operations (not any other temporary variable or storage) to swap two integers A and B. The most well-known way is the following:</p> <pre><code>A = A + B B = A - B A = A - B </code></pre> <p>What are some of the alternative set of operations to achieve this?</p>
Mark S.
26,369
<p>I'm not sure if you're asking for all solutions or not, but one of the most famous solutions is <a href="http://en.wikipedia.org/wiki/XOR_swap_algorithm" rel="nofollow">by using binary xor three times</a>. $A=A\oplus B,B=A\oplus B,A=A\oplus B$.</p>
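In Python (my own illustration), the same three XOR assignments swap two integers without a temporary:

```python
a, b = 37, 2214
a ^= b   # a now holds a XOR b
b ^= a   # b becomes the original a
a ^= b   # a becomes the original b
print(a, b)  # 2214 37
assert (a, b) == (2214, 37)
```

One classic caveat: if both operands are the same storage location (possible in C through pointer aliasing), the XOR swap zeroes the value instead of swapping.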
2,645,948
<p>I was studying neighbourhood methods from Overholt's book on analytic number theory (p. 42). There, to estimate $Q(x)=\sum_{n \leq x}\mu^2(n)$, the following statement is used: </p> <p>$$\sum_{j^2\leq x} \mu(j)\left[\frac x {j^2}\right]=x\sum_{j\leq \sqrt x}\frac {\mu(j)} {j^2}+ O(\sqrt x).$$</p> <p>I do not see how to prove this statement. I know that $\left[\frac x {j^2}\right]=\frac x {j^2}-\left\{\frac x {j^2}\right\}$ and $\left\{\frac x {j^2}\right\}\leq 1$. Can I get something from here?</p>
Matthew Conroy
2,937
<p>Since $[x]=x-\{x\}$, we have $[x]=x+O(1)$, and so $$ \sum_{j^2\leq x} \mu(j)\left[\frac x {j^2}\right]= \sum_{j^2\leq x} \mu(j) \left(\frac{x}{j^2} + O(1) \right) =\sum_{j^2\leq x} \mu(j) \frac{x}{j^2} + O(\sqrt{x}) = x \sum_{j \le \sqrt{x}} \frac{\mu(j)}{j^2} +O(\sqrt{x}). $$</p>
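The exact identity that underlies this estimate, $\sum_{j^2\leq x}\mu(j)\left[\frac{x}{j^2}\right]=Q(x)$, can be verified computationally (my own check, using a simple Möbius sieve):

```python
import math

def mobius_upto(n):
    """Sieve the Moebius function mu(1), ..., mu(n)."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(2 * p, n + 1, p):
                is_prime[m] = False
            for m in range(p, n + 1, p):
                mu[m] *= -1
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0
    return mu

x = 1000
root = math.isqrt(x)
mu = mobius_upto(root)
identity_sum = sum(mu[j] * (x // (j * j)) for j in range(1, root + 1))

def squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

direct = sum(1 for n in range(1, x + 1) if squarefree(n))
print(identity_sum, direct)        # the two counts of squarefree n <= x agree
assert identity_sum == direct
assert abs(direct / x - 6 / math.pi ** 2) < 0.01   # density is near 6 / pi^2
```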
533,399
<p>Starting with the classical propositional logic, is there a rather canonical way to prove that $$p\wedge q=q\wedge p$$ for the commutativity of the conjunction, and analogously for the other properties and connectives, other than using truth tables, visualizing with Venn diagrams akin to <a href="http://en.wikipedia.org/wiki/Logical_conjunction" rel="nofollow">Wikipedia's approach</a>, or verbal philosophical reasoning?</p> <p>Put another way, can we well-define the connectives from a deeper foundation than that?</p> <p>For example, in set theory, we define the intersection of two sets $A$, $B$ as $$A\cap B:=\{x\,|\,x\in A\wedge x\in B\}$$ to then move on and prove that $\cap$ is commutative. By doing so we simply delegate the proof to the very propositional (or whichever) logic we defined the operator with $$A\cap B\overset{\mathrm{def}}{=}\{x\,|\,x\in A\wedge x\in B\}\overset{\mathrm{com}}{=}\{x\,|\,x\in B\wedge x\in A\}\overset{\mathrm{def}}{=}B\cap A.\square$$</p>
user43208
43,208
<p>I'm not sure this will satisfy you, but a categorically-minded way to characterize meets $a \wedge b$ and joins $a \vee b$ is via universal properties: </p> <p>$$x \leq a \wedge b \;\;\; \text{iff}\;\;\; x \leq a,\; x \leq b$$ </p> <p>$$a \vee b \leq x\;\;\; \text{iff}\;\;\; a \leq x,\; b \leq x$$</p> <p>for any $x$. These are general definitions in the theory of posets or preorders, but for propositions, we can think of $\leq$ as denoting the entailment relation. The pair of entailments on the right (for each of $\wedge, \vee$) simply means both are asserted. </p> <p>In that case, one can prove $a \wedge b = b \wedge a$. For, we have </p> <p>$$x \leq a \wedge b\;\;\; \text{iff}\;\;\; x \leq a, x \leq b\;\;\; \text{iff}\;\;\; x \leq b \wedge a.$$ </p> <p>Now, since $a \wedge b \leq a \wedge b$, we can put $x = a \wedge b$ and reason forward to conclude $a \wedge b \leq b \wedge a$. Similarly, putting $x = b \wedge a$ and reasoning backward, we conclude $b \wedge a \leq a \wedge b$. Thus, if we take propositions to be equal if they entail one another (i.e., if we assume the antisymmetry axiom for posets), we derive $a \wedge b = b \wedge a$. Similarly we can prove $a \vee b = b \vee a$. </p> <p>A similar "universality argument" can be used to prove that $\wedge, \vee$ are associative, idempotent, etc. </p> <p>Once we have universal characterizations for $\wedge, \vee$, we can add a third that characterizes negation </p> <p>$$a \wedge b \leq c\;\;\; \text{iff}\;\;\; a \leq (\neg b) \vee c$$ </p> <p>and in this way we get classical propositional logic (more exactly, we'd add in two more to characterize the top element $\top$ ("true") and $\bot$ ("false")). </p>
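The universal property, and the commutativity it forces, can be checked exhaustively on a small concrete example (my own illustration: the divisors of $60$ ordered by divisibility, where the meet is $\gcd$ and the join is $\operatorname{lcm}$):

```python
from math import gcd

divisors = [d for d in range(1, 61) if 60 % d == 0]

def leq(x, y):
    return y % x == 0           # x <= y  means  "x divides y"

def lcm(x, y):
    return x * y // gcd(x, y)

for a in divisors:
    for b in divisors:
        for x in divisors:
            # universal property of the meet: x <= a meet b  iff  x <= a and x <= b
            assert leq(x, gcd(a, b)) == (leq(x, a) and leq(x, b))
            # universal property of the join: a join b <= x  iff  a <= x and b <= x
            assert leq(lcm(a, b), x) == (leq(a, x) and leq(b, x))
        # commutativity, exactly as derived from the universal properties
        assert gcd(a, b) == gcd(b, a) and lcm(a, b) == lcm(b, a)
print("universal properties verified on the divisor lattice of 60")
```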
2,222,514
<p>A straight line OL rotates around the point O with a constant angular velocity $\omega$. A point M moves along the line OL with a speed proportional to the distance OM. Find the equation of the curve described by the point M.</p> <p>As it says the angular velocity is constant, which I think means $$\theta' = \text{constant} = \omega$$</p> <p>and after integrating I get $$\theta = \omega t + \theta_0$$</p> <p>What can I do about the linear velocity?</p> <p>Can someone help me with this problem?</p>
Jens
307,210
<p>We know that $$\frac{dr}{dt}=kr\tag{1}$$</p> <p>There is a <a href="http://tutorial.math.lamar.edu/Classes/DE/Linear.aspx" rel="nofollow noreferrer">long way</a> to determine that this means $$r=Ce^{kt}\tag{2}$$</p> <p>or there is the shorter way of simply inserting answer $2$ into the differential equation $1$ and seeing that it works.</p> <p>We also know that $$\theta = \omega t\tag{3}$$</p> <p>Isolating $t$ from equation $3$ and inserting in equation $2$ we get $$r=Ce^{\frac{k}{\omega}\theta}$$</p> <p>or the equation of a <a href="https://en.wikipedia.org/wiki/Logarithmic_spiral" rel="nofollow noreferrer">logarithmic spiral</a>.</p>
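A quick numerical integration (my own check, with arbitrarily chosen constants) confirms that the curve is the logarithmic spiral $r = Ce^{\frac{k}{\omega}\theta}$:

```python
import math

k, omega, C = 0.5, 2.0, 1.0     # arbitrary illustrative constants
dt = 1e-4
r, t = C, 0.0                   # initial condition r(0) = C
while t < 1.0:
    r += k * r * dt             # Euler step for dr/dt = k * r
    t += dt
theta = omega * t
spiral = C * math.exp(k / omega * theta)   # closed form r = C e^{(k/omega) theta}
print(r, spiral)                # the two values agree closely
assert abs(r - spiral) < 1e-3
```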