618,823
<p>Reading a book I encountered the following claim, which I don't understand. Let $X$ be a smooth projective curve over $\mathbb{C}$, and $q\in X$ a rational point. Denote by $\pi_i: X^n\to X$ the $i$-th projection of the cartesian $n$-product of the curve onto $X$ itself. The claim is that</p> <blockquote> <p>The line bundle $\bigotimes_{i=1}^n \pi_i^* \mathcal{O}_X(q) $ is clearly ample.</p> </blockquote> <p>Could you point me in the right direction please? Is there a specific criterion for ampleness that I should immediately see is satisfied?</p> <p>I know that, since each $\pi_i$ is finite and surjective, every $\pi_i^*\mathcal{O}_X(q)$ is ample because each $\mathcal{O}_X(q)$ is. But how can one conclude from here that the tensor product of them is?</p> <p>PS: Is there any way to see this geometrically?</p>
Community
-1
<p>The result you want is the following.</p> <blockquote> <blockquote> <p><strong>Hartshorne II Ex. 7.5 (c):</strong> If $\mathcal{L}, \mathcal{M}$ are two ample line bundles on a Noetherian scheme $X$, then $\mathcal{L}\otimes \mathcal{M}$ is ample. </p> </blockquote> </blockquote> <p><strong>Proof:</strong> As $\mathcal{M}$ is ample there is some $m &gt; 0$ for which $\mathcal{M}^{\otimes m}$ is generated by its global sections. Now $\mathcal{L}^{\otimes m}$ is still ample in view of Proposition 7.5. Thus $\mathcal{M}^{\otimes m} \otimes \mathcal{L}^{\otimes m} = (\mathcal{M} \otimes \mathcal{L})^{\otimes m}$ is ample by part (a). By Proposition 7.5 again we conclude $\mathcal{M} \otimes \mathcal{L}$ is ample. </p> <blockquote> <blockquote> <p><strong>Exercise:</strong> Prove part (a) of the exercise above, namely if $\mathcal{L}$ is ample and $\mathcal{M}$ is globally generated then $\mathcal{L}\otimes \mathcal{M}$ is ample.</p> </blockquote> </blockquote>
4,614,269
<p>Consider the curve <span class="math-container">$y=\frac{5x}{x-3}$</span>; we want to find its asymptotes, if any.</p> <p><strong>We are only taking rectilinear asymptotes into consideration.</strong></p> <p>My solution goes like this:</p> <blockquote> <p>We know that a straight line <span class="math-container">$x=a$</span>, parallel to the <span class="math-container">$y$</span> axis, can be a vertical asymptote of a branch of the curve <span class="math-container">$y=f(x)$</span> iff <span class="math-container">$f(x)\to\infty$</span> when <span class="math-container">$x\to a+0$</span>, <span class="math-container">$x\to a-0$</span> or <span class="math-container">$x\to a$</span>. Similarly, a straight line <span class="math-container">$y=a$</span>, parallel to the <span class="math-container">$x$</span> axis, can be a horizontal asymptote of a branch of the curve <span class="math-container">$x=\phi(y)$</span> iff <span class="math-container">$x\to\infty$</span> when <span class="math-container">$y\to a+0$</span>, <span class="math-container">$y\to a-0$</span> or <span class="math-container">$y\to a$</span>. Using these lemmas, we obtain <span class="math-container">$x=3$</span> and <span class="math-container">$y=5$</span> as the rectilinear asymptotes. This is because, with <span class="math-container">$y=\frac{5x}{x-3}=f(x)$</span>, as <span class="math-container">$x\to 3$</span> we have <span class="math-container">$f(x)\to \infty$</span>. Also, since <span class="math-container">$y=\frac{5x}{x-3}$</span>, we get <span class="math-container">$xy-3y=5x$</span> or <span class="math-container">$xy-5x=3y$</span>, hence <span class="math-container">$x=\frac{3y}{y-5}=\phi(y)$</span>, due to which <span class="math-container">$\phi(y)\to \infty$</span> as <span class="math-container">$y\to 5$</span>.</p> </blockquote> <p>Now, I tried checking whether there is any <em>oblique asymptote</em>. 
We know that <span class="math-container">$y=mx+c$</span> is an oblique asymptote of <span class="math-container">$y=f(x)$</span> iff <span class="math-container">$\exists $</span> a finite <span class="math-container">$m=\lim_{|x|\to\infty}\frac{y}{x}$</span> and <span class="math-container">$c=\lim_{|x|\to\infty} (y-mx)$</span>. Now, we have the function <span class="math-container">$y=\frac{5x}{x-3}$</span> and hence <span class="math-container">$\frac{y}{x}=\frac{5}{x-3}$</span>, so <span class="math-container">$m=\lim_{|x|\to\infty}\frac{y}{x}=\lim_{|x|\to\infty}\frac{5}{x-3}=0$</span>. Hence, <span class="math-container">$c=\lim_{|x|\to\infty} (y-mx)=\lim_{|x|\to\infty}\frac{5x}{x-3}=\lim_{|x|\to\infty}\frac{5}{1-\frac 3x}=5$</span>. So, the oblique asymptote of the given curve is <span class="math-container">$y=mx+c=5$</span>, which is the same as the horizontal asymptote, parallel to the <span class="math-container">$x$</span>-axis, as shown above. Is the solution correct? If not, where is it going wrong?</p>
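The limits above can be checked numerically; here is a small sketch of my own (plugging in a large $x$ as a stand-in for $|x|\to\infty$):

```python
def f(x):
    return 5*x / (x - 3)

X = 1e9  # a stand-in for x -> +/- infinity

# horizontal asymptote: f(x) -> 5 as |x| -> infinity
assert abs(f(X) - 5) < 1e-6
assert abs(f(-X) - 5) < 1e-6

# vertical asymptote at x = 3: f blows up from either side
assert f(3 + 1e-9) > 1e9
assert f(3 - 1e-9) < -1e9

# oblique asymptote y = m*x + c: the slope m = lim f(x)/x vanishes,
# so c = lim f(x) = 5 and the "oblique" asymptote is just y = 5
m = f(X) / X       # ~ 0
c = f(X) - 0 * X   # with the limiting slope m = 0
assert abs(m) < 1e-6 and abs(c - 5) < 1e-6
print("asymptotes: x = 3, y = 5")
```

This agrees with the conclusion in the solution: the oblique-asymptote recipe reproduces the horizontal asymptote $y=5$, so nothing new appears.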
joriki
6,622
<p>Your proof is OK, but there are some things you could improve:</p> <p>It’s perhaps worth explaining how it is that the two sums count the same thing in much the same way, and thus should have the same number of terms, but don’t. This is because the terms in the second sum with <span class="math-container">$n-i\lt m$</span> are zero; you could change the summation to run from <span class="math-container">$1$</span> to <span class="math-container">$n-m$</span> to make it apparent that it has the same number of terms as the first sum.</p> <p>And to make it a bit easier to understand you could explain that on the left you first pick pink and black from all balls, then pink from pink and black whereas on the right you first pick pink and white from all balls and then pink from pink and white.</p> <p>(I kept writing “punk” instead of “pink” – apparently my fingers know that I like punk and I don’t like pink :-)</p>
314,236
<p>The integral $\int_{-\infty}^{\infty} e^{ix} dx$ diverges. I have read (in a wikipedia article) that the principal value of this integral vanishes: $P \int_{-\infty}^{\infty} e^{ix} dx = 0$. How can one see that?</p> <p>Thank you for your effort!</p>
Diógenes
184,463
<p>You might want to consider this as the distribution:</p> <p>$$P.V.\left(\int_{-\infty}^{\infty}e^{ixt}dx\right)_{t=1}$$</p> <p>Now the distribution needs to be tested against test functions:</p> <p>$$\int_{-\infty}^{\infty}P.V.\left(\int_{-\infty}^{\infty}e^{ixt}dx\right)\phi(t)dt$$</p> <p>which is, by the definition of principal value:</p> <p>$$\int_{-\infty}^{\infty}\left[\lim_{N\to\infty}\frac{e^{ixt}}{it}\right]_{-N}^{N}\phi(t)dt=\int_{-\infty}^{\infty}\lim_{N\to\infty}\left(\frac{e^{iNt}-e^{-iNt}}{it}\right)\phi(t)dt$$</p> <p>Now we change variables to $y=Nt$, so that $dt=dy/N$:</p> <p>$$\int_{-\infty}^{\infty}\frac{2\sin(y)}{y}\lim_{N\to\infty}\phi(y/N)dy$$</p> <p>Finally, because $\phi$ is a test function, it is continuous, and $\int_{-\infty}^{\infty}\frac{\sin y}{y}dy=\pi$, so:</p> <p>$$\int_{-\infty}^{\infty}P.V.\left(\int_{-\infty}^{\infty}e^{ixt}dx\right)\phi(t)dt=2\pi\phi(0)$$</p> <p>Which tells you that:</p> <p>$$P.V.\left(\int_{-\infty}^{\infty}e^{ixt}dx\right)_{t=1}=\left(2\pi\delta(t)\right)_{t=1}=0,$$</p> <p>since the Dirac delta is a distribution that vanishes at every point except zero.</p>
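A quick numeric sanity check of the key identity $\int \frac{2\sin y}{y}\,\phi(y/N)\,dy \to 2\pi\phi(0)$ (my own check, taking the Gaussian $\phi(t)=e^{-t^2}$ as test function and a finite $N$):

```python
import math

# test function phi(t) = exp(-t^2), so the claimed limit is 2*pi*phi(0) = 2*pi
def phi(t):
    return math.exp(-t*t)

N = 10.0           # plays the role of N -> infinity
L, h = 80.0, 1e-3  # window and step; phi(y/N) is negligible for |y| > L

# midpoint rule for I_N = integral of 2*sin(y)/y * phi(y/N) dy
total, y = 0.0, -L + h/2
while y < L:
    total += 2*math.sin(y)/y * phi(y/N) * h
    y += h

print(total, 2*math.pi*phi(0))  # close agreement
```

Already at $N=10$ the integral matches $2\pi\phi(0)$ to about three decimal places, which is consistent with the distributional identity $\mathrm{P.V.}\int e^{ixt}dx = 2\pi\delta(t)$.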
266,981
<p>Wikipedia presents a recursive definition of the Pfaffian of a skew-symmetric matrix as $$ \operatorname{pf}(A)=\sum_{{j=1}\atop{j\neq i}}^{2n}(-1)^{i+j+1+\theta(i-j)}a_{ij}\operatorname{pf}(A_{\hat{\imath}\hat{\jmath}}),$$ where $\theta$ is the Heaviside step function. I have failed to find a proper reference for this result. Is it standard or does it require proper citation? What should one cite?</p>
john
8,751
<p>Given a group $G$, the set of functions $Set(X,G)$ admits a pointwise group structure -- in this way, the category of groups is cotensored as a $Set$-category. Likewise for any of the standard algebraic categories.</p> <p>Similarly, when you encounter a functor category $[C,D]$ equipped with some "pointwise" structure inherited from $D$, you are in the presence of a 2-category (or $Cat$-enriched category) admitting cotensors.</p> <p>For example, the 2-categories of (symmetric) monoidal categories / categories with limits / categories with colimits (with any suitable flavour of morphism between them) admit cotensors -- although they often don't admit all weighted limits.</p>
266,981
<p>Wikipedia presents a recursive definition of the Pfaffian of a skew-symmetric matrix as $$ \operatorname{pf}(A)=\sum_{{j=1}\atop{j\neq i}}^{2n}(-1)^{i+j+1+\theta(i-j)}a_{ij}\operatorname{pf}(A_{\hat{\imath}\hat{\jmath}}),$$ where $\theta$ is the Heaviside step function. I have failed to find a proper reference for this result. Is it standard or does it require proper citation? What should one cite?</p>
ಠ_ಠ
56,938
<p>The category of sheaves of abelian groups on a space $X$ is powered and copowered over abelian groups as follows:</p> <p>let $t: X \to \bullet$ denote the terminal map to the 1-point space. Then the direct image is the global sections functor $t_* = \Gamma(X, -)$ and the inverse image $t^{-1}$ constructs the constant sheaf with given stalk. Let $A_X := t^{-1} A$ denote the constant sheaf with stalk an abelian group $A$.</p> <p>Then by the tensor-Hom and image adjunctions, for sheaves of abelian groups $\mathscr{F}$, $\mathscr{G}$ we have $$\mathrm{Hom}(\mathscr{F}, \mathscr{Hom}(A_X, \mathscr{G})) \cong \mathrm{Hom}(A_X \otimes\mathscr{F}, \mathscr{G}) \cong \mathrm{Hom}(A_X, \mathscr{Hom}(\mathscr{F}, \mathscr{G})) \cong \mathrm{Hom}(A, \mathrm{Hom}(\mathscr{F}, \mathscr{G})).$$</p> <p>So sheaves of abelian groups are copowered over abelian groups by tensoring with the corresponding constant sheaf of abelian groups. They are powered by Hom-ing out of the constant sheaf.</p> <p>The same argument applies for presheaves of abelian groups, <em>mutatis mutandis</em>.</p>
1,331,850
<p>I haven't done something like this in a long time. How do I set something like this up? Can someone help me with the beginning or give me some direction?</p> <p><img src="https://i.stack.imgur.com/jluer.png" alt="enter image description here"> <img src="https://i.stack.imgur.com/dv5yu.png" alt="enter image description here"></p>
Dmoreno
121,008
<p>Your vector field is irrotational, i.e., $\nabla \wedge \mathbf{F} = 0 $, and there exists $G$ such that $\nabla G = \mathbf{F}$. We can see that $G = x^2/ 2 + 2 x y + y^2/2$ and <a href="https://en.wikipedia.org/wiki/Gradient_theorem" rel="nofollow"><strong><em>therefore</em></strong></a>:</p> <p>$$ \int_C \mathbf{F} \cdot \mathrm{d}\mathbf{r} = G(\mathbf{r}(t=3)) - G(\mathbf{r}(t=0)) $$</p> <p>Hope you find this helpful.</p> <p>Cheers!</p>
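As a numeric cross-check of the gradient theorem used here: since $\mathbf{F}=\nabla G$, the line integral depends only on the endpoints. The actual $\mathbf{r}(t)$ is in the problem's images, so the path below is a hypothetical stand-in of my own:

```python
import math

def G(x, y):
    return x*x/2 + 2*x*y + y*y/2

def F(x, y):                       # F = grad G = (x + 2y, 2x + y)
    return (x + 2*y, 2*x + y)

def r(t):                          # hypothetical path for illustration only
    return (math.cos(t), math.sin(t))

def rdot(t):
    return (-math.sin(t), math.cos(t))

# numeric line integral of F . dr over t in [0, 3] (midpoint rule)
n, a, b = 100000, 0.0, 3.0
h = (b - a) / n
I = 0.0
for k in range(n):
    t = a + (k + 0.5)*h
    x, y = r(t)
    fx, fy = F(x, y)
    dx, dy = rdot(t)
    I += (fx*dx + fy*dy)*h

print(I, G(*r(b)) - G(*r(a)))  # the two values agree
```

Whatever path one substitutes, the numeric integral matches $G(\mathbf{r}(3)) - G(\mathbf{r}(0))$, which is the point of the answer.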
992,487
<p>Consider the following problem.</p> <p>A collection of $n$ countries $C_1, \dots, C_n$ sit on an EU commission. Each country $C_i$ is assigned a voting weight $c_i$. A resolution passes if it has the support of a proportion of the panel of at least $A$, taking into account voting weights. Each country $C_i$ has a probability $p_i$ of voting for the resolution, and each country acts independently of the others.</p> <p>The problem is to assign the voting weights so as to maximize the probability that any given resolution will pass. I am interested in answering the question asymptotically under something like the following assumptions.</p> <ol> <li><p>The number of countries $n$ is very large. (Perhaps the EU's $n = 28$ is already not so far from this!)</p></li> <li><p>The proportion of votes held by any one country is bounded above by $M/n$, for some fixed reasonable number $M$.</p></li> <li><p>$p_i &gt; A$ for all $i$.</p></li> <li><p>The probabilities $p_i$ are bounded away from $1$.</p></li> </ol> <p>Perhaps some of these conditions can be relaxed, or perhaps additional assumptions are needed, but these are the ones that seem to be needed for my arguments below.</p> <p>I have tried to answer the question in an approximate and non-rigorous way as follows. </p> <p>Let $X_i$ be the random variable equal to $1$ when country $C_i$ votes for the resolution, and $0$ otherwise. Now let $V = \sum c_i X_i$. By a suitably general version of the central limit theorem (the Berry-Esseen inequality?), $V$ follows approximately a normal distribution with mean $\sum c_i p_i$ and variance $\sum c_i^2 p_i(1-p_i)$. The probability that we would like to maximize is $$P\left( V \geq A\sum c_i \right).$$ </p> <p>If we let $F(z)$ be the cumulative distribution function for the standard normal distribution, this probability can be approximated by $F(z)$ where $$z = \frac{\sum c_i(p_i - A)}{\left[\sum c_i^2 p_i (1-p_i) \right]^{1/2}}. 
$$</p> <p>Considering the gradient of the function $z = z(c_1,\dots,c_n)$ shows that $z$ is maximal when the weights $c_i$ are proportional to the numbers $$\gamma_i = \frac{p_i - A}{p_i(1-p_i)}.$$</p> <p>I conclude that it is plausible that the weights $c_i = \gamma_i$ are close to being optimal.</p> <p>My questions, in descending order of importance, are:</p> <ol> <li><p>Has anything significant been written on this problem, or an equivalent one?</p></li> <li><p>Is my "theorem" correct?</p></li> <li><p>What would a rigorous formulation of the "theorem" look like?</p></li> </ol> <p>EDIT: I've simulated the problem for $A = 0.5$ with 1867 countries with a 50.17% chance of voting in favour and 637 countries with a probability of 50.5%. I gave weight $1$ to each of the first group of countries and weight $c$ to the second. In the graph below, the horizontal axis is for $c$, and the vertical axis for the probability of passing the resolution. The blue curve represents the theoretical probability we would have if the normal approximation worked perfectly, and the red curve experimental data based on 5 million repetitions of the experiment. The maximum for the red graph is not too far from the conjectured optimal value of $\gamma = 2.94$. </p> <p><img src="https://i.stack.imgur.com/LxVwS.jpg" alt="Experimental data for the conjecture"></p> <p>EDIT: In response to a comment, here are some additional details on the maximization of $z$ above. By homogeneity, it makes no difference whether or not we constrain the $c_i$'s to have sum $1$. But if we do, then a compactness argument shows that $z$ must attain a maximum at some point. </p> <p>Now return to unconstrained $c_i$'s. $\partial z/\partial c_i$ has the same sign as $$\frac{\sum c_j^2 p_j (1 - p_j)}{\sum c_j (p_j - A)} \gamma_i - c_i.$$ This shows that where the maximum occurs, all the $c_i$'s must be proportional to $\gamma_i$.</p>
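The conjectured weights and the simulation in the EDIT above can be sketched in a few lines (my own Monte Carlo with far fewer trials than the 5 million quoted, so the pass rate is only a rough estimate):

```python
import random

A = 0.5
# the setup from the EDIT: 1867 countries with p = 0.5017 (weight 1)
# and 637 countries with p = 0.505 (weight c)

def gamma(p):
    # conjectured optimal weight: gamma = (p - A) / (p * (1 - p))
    return (p - A) / (p * (1 - p))

ratio = gamma(0.505) / gamma(0.5017)
print(round(ratio, 2))  # 2.94, the optimum quoted in the question

def pass_rate(c, trials=500, seed=1):
    """Monte Carlo estimate of P(weighted vote share >= A)."""
    rng = random.Random(seed)
    threshold = A * (1867 * 1 + 637 * c)
    passes = 0
    for _ in range(trials):
        v = sum(1 for _ in range(1867) if rng.random() < 0.5017)
        v += c * sum(1 for _ in range(637) if rng.random() < 0.505)
        passes += (v >= threshold)
    return passes / trials

rate = pass_rate(ratio)
print(rate)  # roughly 0.6 with these parameters
```

The deterministic part reproduces the value $\gamma \approx 2.94$ from the EDIT; sweeping `c` around `ratio` (with many more trials) reproduces the red curve described there.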
Nameless
68,482
<p>This is a very interesting problem! On your theorem: I don't understand why you maximize $z$, which is just a density (up to that point everything seems fine). You want to maximize $P(V\ge A)$. You assume for sufficiently many countries that this can be approximated by a normal distribution, invoking some version of the CLT. Thus, $$P(V\ge A)\approx \int^\infty_A \frac{1}{\sigma}\,\phi\left(\frac{x-\sum c_ip_i}{\sigma}\right) dx=1-\Phi\left(\frac{A-\sum c_ip_i}{\sigma}\right), \qquad \sigma=\sqrt{\sum c_i^2 p_i(1-p_i)}.$$ How to maximize this? I am not entirely sure. I remember that the normal distribution is log-concave, and <a href="http://en.wikipedia.org/wiki/Logarithmically_concave_function#Log-concave_distributions" rel="nofollow">the CDF of log-concave functions is log-concave</a>. So if $\log(\Phi(x))$ is concave, then $-\log(\Phi(x))$ is convex. But that doesn't help us here... </p> <p>A few more suggestions:</p> <ul> <li><p>On assumption 2): Why do you need $M$ if you defined weights $c_i$ already?</p></li> <li><p>In your formulation, you could constrain the weights to $\sum_i c_i=1$; then the condition is $V\ge A$, which looks nicer but is not necessary.</p></li> <li><p>On assumption 3): I agree it doesn't make the problem meaningless, but it seems unnecessary - it doesn't change the maximization problem. It just implies that, say, even an equal voting weight distribution would lead to more passes than failures of resolutions. But the problem of $p_i&lt;A$ for some (or even all) countries would still be interesting. Given the normal approximation, there is still a positive probability for the resolution to pass whenever $p_i&gt;0$ for all $i$, but it would be harder. This way you could model "harder resolutions".</p></li> <li><p>On assumptions 1) and 3): if $n\to\infty$, then assumption 3 guarantees that the resolution passes, as long as you have positive weight $c_i$ on all countries. 
Because the expected vote share is above $A$, and asymptotically the expected vote share realizes with probability 1 (some strong law of large numbers). Interesting: if some countries have $p_i&gt;A$ (a positive mass, to be exact) and some $p_i&lt;A$, then as $n\to\infty$ you can guarantee passing of the resolution by giving positive weights to all with $p_i&gt;A$ and none to the others.</p></li> <li><p>Given the previous point, it seems dangerous to talk about "asymptotics" - you just want <em>finitely many, but sufficiently many</em> countries so that you can approximate with a normal distribution, but you don't actually want $n\to\infty$, as the problem then is trivial given assumption 3). Maybe this is what the commenter above meant.</p></li> <li><p>Where can you find something similar? I think your best bet might be the finance literature, where you compute the probability that your portfolio investment return is above some threshold $A$. There are some assets which have a similar structure as these votes (e.g., bonds): either the asset pays a positive dividend or the issuer goes bankrupt and the return is zero - just like your random variable $X_i$. Same for a loan portfolio.</p></li> <li>Your comments about "getting a decision right" reminded me of the <a href="http://en.wikipedia.org/wiki/Condorcet%27s_jury_theorem" rel="nofollow">Condorcet jury theorems</a>. The setting is a special case of yours, where every member has the same voting weight and the majority threshold is $A=1/2$.</li> <li>It also reminded me of the <a href="http://econdse.org/wp-content/uploads/2013/03/jury-pesendorferapsr.pdf" rel="nofollow">Feddersen Pesendorfer game theoretic analysis of unanimity voting rules</a>. Neither of the two is directly related to your problem, though. Again, finance seems to be your best bet.</li> </ul>
992,487
<p>Consider the following problem.</p> <p>A collection of $n$ countries $C_1, \dots, C_n$ sit on an EU commission. Each country $C_i$ is assigned a voting weight $c_i$. A resolution passes if it has the support of a proportion of the panel of at least $A$, taking into account voting weights. Each country $C_i$ has a probability $p_i$ of voting for the resolution, and each country acts independently of the others.</p> <p>The problem is to assign the voting weights so as to maximize the probability that any given resolution will pass. I am interested in answering the question asymptotically under something like the following assumptions.</p> <ol> <li><p>The number of countries $n$ is very large. (Perhaps the EU's $n = 28$ is already not so far from this!)</p></li> <li><p>The proportion of votes held by any one country is bounded above by $M/n$, for some fixed reasonable number $M$.</p></li> <li><p>$p_i &gt; A$ for all $i$.</p></li> <li><p>The probabilities $p_i$ are bounded away from $1$.</p></li> </ol> <p>Perhaps some of these conditions can be relaxed, or perhaps additional assumptions are needed, but these are the ones that seem to be needed for my arguments below.</p> <p>I have tried to answer the question in an approximate and non-rigorous way as follows. </p> <p>Let $X_i$ be the random variable equal to $1$ when country $C_i$ votes for the resolution, and $0$ otherwise. Now let $V = \sum c_i X_i$. By a suitably general version of the central limit theorem (the Berry-Esseen inequality?), $V$ follows approximately a normal distribution with mean $\sum c_i p_i$ and variance $\sum c_i^2 p_i(1-p_i)$. The probability that we would like to maximize is $$P\left( V \geq A\sum c_i \right).$$ </p> <p>If we let $F(z)$ be the cumulative distribution function for the standard normal distribution, this probability can be approximated by $F(z)$ where $$z = \frac{\sum c_i(p_i - A)}{\left[\sum c_i^2 p_i (1-p_i) \right]^{1/2}}. 
$$</p> <p>Considering the gradient of the function $z = z(c_1,\dots,c_n)$ shows that $z$ is maximal when the weights $c_i$ are proportional to the numbers $$\gamma_i = \frac{p_i - A}{p_i(1-p_i)}.$$</p> <p>I conclude that it is plausible that the weights $c_i = \gamma_i$ are close to being optimal.</p> <p>My questions, in descending order of importance, are:</p> <ol> <li><p>Has anything significant been written on this problem, or an equivalent one?</p></li> <li><p>Is my "theorem" correct?</p></li> <li><p>What would a rigorous formulation of the "theorem" look like?</p></li> </ol> <p>EDIT: I've simulated the problem for $A = 0.5$ with 1867 countries with a 50.17% chance of voting in favour and 637 countries with a probability of 50.5%. I gave weight $1$ to each of the first group of countries and weight $c$ to the second. In the graph below, the horizontal axis is for $c$, and the vertical axis for the probability of passing the resolution. The blue curve represents the theoretical probability we would have if the normal approximation worked perfectly, and the red curve experimental data based on 5 million repetitions of the experiment. The maximum for the red graph is not too far from the conjectured optimal value of $\gamma = 2.94$. </p> <p><img src="https://i.stack.imgur.com/LxVwS.jpg" alt="Experimental data for the conjecture"></p> <p>EDIT: In response to a comment, here are some additional details on the maximization of $z$ above. By homogeneity, it makes no difference whether or not we constrain the $c_i$'s to have sum $1$. But if we do, then a compactness argument shows that $z$ must attain a maximum at some point. </p> <p>Now return to unconstrained $c_i$'s. $\partial z/\partial c_i$ has the same sign as $$\frac{\sum c_j^2 p_j (1 - p_j)}{\sum c_j (p_j - A)} \gamma_i - c_i.$$ This shows that where the maximum occurs, all the $c_i$'s must be proportional to $\gamma_i$.</p>
Matt B.
164,029
<p><a href="http://en.m.wikipedia.org/wiki/Roy%27s_safety-first_criterion" rel="nofollow">Roy's safety-first criterion</a> (about related finance problems).</p> <p>Trying to choose a portfolio that maximizes the probability of meeting a threshold is probably close to the problem you're mentioning here.</p> <p>As for the EU, their goal when they set up the system was not too far from what you're describing. The idea was also to prevent the formation of coalitions against France and Germany (if those 2 agree on something, it's very difficult to block it, and conversely, it's very difficult to pass a law without them agreeing).</p>
607,044
<p>I'm looking at some work with Combinatorial Game Theory and I have currently got (P-Position is a previous player win, N-Position is a next player win):</p> <p>Every Terminal Position is a P-Position,</p> <p>For every P-Position, any move will result in an N-Position,</p> <p>For every N-Position, there exists a move that results in a P-Position.</p> <p>These I am okay with; the problem comes when working with the Sprague-Grundy function $g(x)=\operatorname{mex}\{g(y):y \in F(x)\}$, where $F(x)$ is the set of possible moves from $x$ and $\operatorname{mex}$ is the minimum excluded natural number.</p> <p>I can see every Terminal Position has SG value 0: these have $x=0$, and $F(0)$ is the empty set.</p> <p>The problem comes in trying to find a way to prove the remaining two conditions for positions; can anyone give me a hand with these?</p>
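For concreteness, here is how the mex recursion plays out in a small example of my own (not from the question): the subtraction game where a move removes 1, 2 or 3 counters from a heap. The positions with $g(x)=0$ are exactly the P-positions, which is the statement to be proved:

```python
from functools import lru_cache

def mex(s):
    """Minimum excluded natural number of a set of naturals."""
    m = 0
    while m in s:
        m += 1
    return m

def moves(x):
    """F(x): positions reachable from x in the subtraction game {1,2,3}."""
    return [x - k for k in (1, 2, 3) if x - k >= 0]

@lru_cache(maxsize=None)
def grundy(x):
    return mex({grundy(y) for y in moves(x)})

g = [grundy(x) for x in range(12)]
print(g)  # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]

# the three conditions from the question, checked on this game:
assert grundy(0) == 0  # terminal position has g = 0
# from a g = 0 position, every move lands on g > 0 (P -> only N)
assert all(all(grundy(y) > 0 for y in moves(x))
           for x in range(1, 12) if grundy(x) == 0)
# from a g > 0 position, some move lands on g = 0 (N -> some P)
assert all(any(grundy(y) == 0 for y in moves(x))
           for x in range(1, 12) if grundy(x) > 0)
```

The general proof mirrors these checks: if $g(x)=0$, no option $y$ can have $g(y)=0$ (else $0$ would not be excluded is violated the other way: $0\in\{g(y)\}$ would force $g(x)\ne 0$); if $g(x)>0$, then $0$ is not excluded from $\{g(y)\}$, so some option has $g(y)=0$.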
Henry
6,460
<p>$\gamma$ is the limit of the sum of the slightly bigger than triangle pieces of this diagram (from Wikipedia)</p> <p><img src="https://i.stack.imgur.com/LKfZK.png" alt="enter image description here"></p> <p>As $n$ increases, the sum increases, but clearly has an upper bound of $1$ and therefore converges to a limit $\gamma$ less than or equal to $1$. </p> <p>This picture also makes it obvious why $\gamma$ is slightly more than $0.5$</p> <p>In fact the partial sum of the pieces is $H(n)-\log_e(n+1)$ but the difference $\log_e(n+1) - \log_e(n)$ is $O(\frac1n)$, so does not affect the convergence to $\gamma$.</p>
607,044
<p>I'm looking at some work with Combinatorial Game Theory and I have currently got (P-Position is a previous player win, N-Position is a next player win):</p> <p>Every Terminal Position is a P-Position,</p> <p>For every P-Position, any move will result in an N-Position,</p> <p>For every N-Position, there exists a move that results in a P-Position.</p> <p>These I am okay with; the problem comes when working with the Sprague-Grundy function $g(x)=\operatorname{mex}\{g(y):y \in F(x)\}$, where $F(x)$ is the set of possible moves from $x$ and $\operatorname{mex}$ is the minimum excluded natural number.</p> <p>I can see every Terminal Position has SG value 0: these have $x=0$, and $F(0)$ is the empty set.</p> <p>The problem comes in trying to find a way to prove the remaining two conditions for positions; can anyone give me a hand with these?</p>
Community
-1
<p>Let $$u_n=\sum_{k=1}^n\frac 1 k-\log n$$ then $$u_{n}-u_{n-1}=\frac{1}{n}+\log\left(1-\frac 1 n\right)\sim_\infty-\frac{1}{2n^2}$$ so the series $\displaystyle\sum_{n\ge2}(u_{n}-u_{n-1})$ is convergent by asymptotic comparison, and then the sequence $(u_n)_n$ is convergent by telescoping.</p>
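A quick numeric check of both claims above (my own sketch): $u_n$ approaches $\gamma \approx 0.5772$, and the increments really are of size $-\frac{1}{2n^2}$.

```python
import math

def u(n):
    # u_n = H_n - log n
    return sum(1.0/k for k in range(1, n + 1)) - math.log(n)

u_big = u(10**6)
print(u_big)  # ~0.577216, the Euler-Mascheroni constant gamma

# the increments behave like -1/(2 n^2), as in the asymptotic comparison
n = 1000
inc = u(n) - u(n - 1)
print(inc, -1/(2*n**2))  # same order of magnitude, ratio close to 1
```

At $n=1000$ the ratio of the actual increment to $-\frac{1}{2n^2}$ is already within about $0.1\%$ of $1$.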
2,662,717
<p>Let $(f_k)_{k=m}^\infty$ be a sequence of differentiable functions $f_k:[a,b]\rightarrow R$ whose derivatives are continuous. Suppose there exists a sequence $(M_k)_{k=m}^\infty$ in $R$ with $|f_k'|\le M_k$ for all $x\in [a,b], k\geq m,$ and such that $\sum_{k=m}^\infty M_k$ converges. Assume also that there is some $x_0\in [a,b]$ such that $\sum_{k=m}^\infty f_k(x_0)$ converges. </p> <p>Show that $\sum_{k=m}^\infty f_k$ converges uniformly to a differentiable function $f:[a,b]\rightarrow R$ and that $f'(x)=\sum_{k=m}^\infty f_k'(x)$ for all $x\in [a,b]$.</p> <p>$Remark:$ So you are showing that under these assumptions, </p> <p>$\frac{d}{dx}\sum_{k=m}^\infty f_k= \sum_{k=m}^\infty \frac{d}{dx} f_k$</p> <p>$Hint:$ Combine the Weierstrass M-test with Theorem $3.7.1$: Let $[a, b]$ be an interval, and for every integer $n ≥ 1$, let $f_n : [a, b] → R$ be a differentiable function whose derivative $f_n' : [a, b] → R$ is continuous. Suppose that the derivatives $f_n'$ converge uniformly to a function $g : [a, b] → R$. Suppose also that there exists a point $x_0$ such that the limit lim$_{n→∞} f_n (x_0)$ exists. Then the functions $f_n$ converge uniformly to a differentiable function $f$, and the derivative of $f$ equals $g$. </p>
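A numeric illustration of the conclusion, with an example of my own satisfying the hypotheses: $f_k(x)=\sin(kx)/k^3$, so $|f_k'(x)|=|\cos(kx)|/k^2\le M_k=1/k^2$ and $\sum M_k$ converges.

```python
import math

K = 2000  # series truncation for the illustration

def f(x):
    # f(x) = sum of f_k(x) = sin(kx)/k^3; each f_k is C^1 on [a,b]
    return sum(math.sin(k*x)/k**3 for k in range(1, K + 1))

def f_prime(x):
    # term-by-term derivative; |cos(kx)/k^2| <= M_k = 1/k^2, sum M_k < oo
    return sum(math.cos(k*x)/k**2 for k in range(1, K + 1))

x0, h = 0.7, 1e-5
numeric = (f(x0 + h) - f(x0 - h)) / (2*h)  # central difference of the sum
print(numeric, f_prime(x0))  # agree: d/dx of the sum = sum of d/dx
```

The central difference of the summed series matches the summed derivatives to high precision, which is exactly the interchange $\frac{d}{dx}\sum f_k = \sum \frac{d}{dx}f_k$ that the exercise proves.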
Donald Splutterwit
404,247
<p>The neat way to show this is to use matrices \begin{eqnarray*} \begin{bmatrix} 0 &amp;1 \\1 &amp;1 \\ \end{bmatrix}^m = \begin{bmatrix} F_{m-1} &amp;F_m \\F_m &amp;F_{m+1} \\ \end{bmatrix} . \end{eqnarray*} \begin{eqnarray*} \begin{bmatrix} 0 &amp;1 \\1 &amp;1 \\ \end{bmatrix}^n = \begin{bmatrix} F_{n-1} &amp;F_n \\F_n &amp;F_{n+1} \\ \end{bmatrix} . \end{eqnarray*} So \begin{eqnarray*} \begin{bmatrix} F_{m+n-1} &amp;F_{m+n} \\F_{m+n} &amp;F_{m+n+1} \\ \end{bmatrix} =\begin{bmatrix} 0 &amp;1 \\1 &amp;1 \\ \end{bmatrix}^{n+m} =\begin{bmatrix} 0 &amp;1 \\1 &amp;1 \\ \end{bmatrix}^{m} \begin{bmatrix} 0 &amp;1 \\1 &amp;1 \\ \end{bmatrix}^{n} \\ \begin{bmatrix} F_{m-1} &amp;F_m \\F_m &amp;F_{m+1} \\ \end{bmatrix} \begin{bmatrix} F_{n-1} &amp;F_n \\F_n &amp;F_{n+1} \\ \end{bmatrix} =\begin{bmatrix} . &amp;F_{m-1}F_n+F_m F_{n+1} \\. &amp; . \\ \end{bmatrix} . \end{eqnarray*}</p>
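Both the matrix identity and the entry read off from it can be verified directly; a small sketch:

```python
def fib(n):
    a, b = 0, 1          # F_0, F_1
    for _ in range(n):
        a, b = b, a + b
    return a

# the (1,2) entry of the product above: F_{m+n} = F_{m-1} F_n + F_m F_{n+1}
ok = all(
    fib(m + n) == fib(m - 1)*fib(n) + fib(m)*fib(n + 1)
    for m in range(1, 20) for n in range(1, 20)
)
print(ok)  # True

# and the matrix identity itself, e.g. for the 7th power
def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M, P = [[0, 1], [1, 1]], [[1, 0], [0, 1]]
for _ in range(7):
    P = matmul(P, M)
assert P == [[fib(6), fib(7)], [fib(7), fib(8)]]
```

The matrix form also explains why the identity holds: matrix powers multiply, so the addition formula is just the $(1,2)$ entry of $M^m M^n = M^{m+n}$.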
2,027,556
<p>The definition I have of a tensor product of finite dimensional vector spaces $V,W$ over a field $F$ is as follows: Let $v_1, ..., v_m$ be a basis for $V$ and let $w_1,...,w_n$ be a basis for $W$. We define $V \otimes W$ to be the set of <strong>formal linear combinations</strong> of the $mn$ symbols $v_i \otimes w_j$. That is, a typical element of $V \otimes W$ is $$\sum c_{ij}(v_i \otimes w_j).$$ The space $V \otimes W$ is clearly a finite dimensional vector space of dimension $mn$. We define a bilinear map $$B: V \times W \to V \otimes W$$ by the formula $$B(\sum a_iv_i, \sum b_jw_j) = \sum_{i,j}a_ib_j(v_i \otimes w_j). $$</p> <p>Why does $V \otimes W$ have to be a <strong>formal linear combination</strong> of symbols $v_{i} \otimes w_j$; what would be wrong in defining $V \otimes W$ simply as a <strong>linear combination</strong> of symbols $v_i \otimes w_j$?</p> <p>Thanks.</p>
Eric Wofsey
86,856
<p>The term "linear combination" has no meaning unless you are talking about elements of a vector space. The symbols $v_i\otimes w_j$ are no more than symbols, and there is not (yet) any vector space that they are elements of, so it is meaningless to talk about linear combinations of them. However, we can talk about "formal linear combinations", which means roughly "pretend they are elements of a vector space and write down symbols that would represent linear combinations of them".</p> <p>More rigorously, you can say that a "formal linear combination" of the symbols $v_i\otimes w_j$ is just a function from the set of these symbols to $F$. Given such a function $f$, if $f(v_i\otimes w_j)=c_{ij}$, we represent $f$ by the notation $\sum c_{ij}v_i\otimes w_j$. The set of such functions then forms a vector space where addition and scalar multiplication are defined pointwise, and this addition and scalar multiplication corresponds to how you would expect expressions of the form $\sum c_{ij}v_i\otimes w_j$ to behave if they actually were linear combinations.</p>
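The "function from the symbols to $F$" picture can be made concrete in code; here is a small sketch of my own (with $F=\mathbb{R}$ approximated by floats), including the bilinear map $B$ from the question:

```python
from collections import defaultdict

# A formal linear combination of the symbols "v_i⊗w_j", encoded exactly as
# in the answer: a function (here a dict with default value 0) from the
# symbols to the field F.
def combo(pairs):
    d = defaultdict(float)
    for coeff, sym in pairs:
        d[sym] += coeff
    return d

def add(f, g):                       # pointwise addition
    h = defaultdict(float, f)
    for sym, c in g.items():
        h[sym] += c
    return h

def scale(a, f):                     # pointwise scalar multiplication
    return defaultdict(float, {sym: a*c for sym, c in f.items()})

def B(a, b):
    # the bilinear map: B(sum a_i v_i, sum b_j w_j) = sum a_i b_j (v_i⊗w_j)
    return combo([(ai*bj, f"v{i+1}⊗w{j+1}")
                  for i, ai in enumerate(a) for j, bj in enumerate(b)])

t = B([1.0, 2.0], [3.0, 4.0])
print(dict(t))  # {'v1⊗w1': 3.0, 'v1⊗w2': 4.0, 'v2⊗w1': 6.0, 'v2⊗w2': 8.0}

s = add(t, scale(2.0, combo([(1.0, "v1⊗w1")])))
print(dict(s))  # the v1⊗w1 coefficient becomes 5.0
```

The symbols themselves carry no structure here; all the vector-space structure lives in the pointwise operations on the coefficient functions, which is the answer's point.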
4,528,823
<p>I'm trying to understand the proof that every bounded sequence has a convergent subsequence. The proof goes as follows:</p> <blockquote> <p>Let <span class="math-container">$\{a_{n}\}$</span> be a bounded sequence of real numbers. Choose <span class="math-container">$M\ge 0$</span> such that <span class="math-container">$|a_{n}|\le M$</span> for all <span class="math-container">$n \in \mathbb{N}$</span>. Define sets <span class="math-container">$$E_{n}=\overline{\{a_{j} \mid j\ge n}\}.$$</span> Then, <span class="math-container">$E_{n}\subseteq [-M, M]$</span> are a descending sequence of non-empty, closed and bounded subsets of <span class="math-container">$\mathbb{R}$</span>. The nested interval theorem then says that there exists some <span class="math-container">$a$</span> such that <span class="math-container">$$a \in \bigcap_{n\ge 1}E_{n}.$$</span> For each natural number <span class="math-container">$k$</span>, <span class="math-container">$a$</span> is a point of closure of <span class="math-container">$\{a_{j}\mid j\ge k\}$</span>. Hence, for infinitely many indices <span class="math-container">$j\ge n$</span> (shouldn't it be <span class="math-container">$j\ge k$</span>?), <span class="math-container">$$a_{j}\in (a-\frac{1}{k}, a+\frac{1}{k}).$$</span> We may therefore inductively choose a strictly increasing sequence of natural numbers <span class="math-container">$\{n_{k}\}$</span> such that <span class="math-container">$|a-a_{n_{k}}|&lt;1/k$</span> for all <span class="math-container">$k$</span>.</p> </blockquote> <p>My question is how exactly are the points in the sequence being chosen such that they're guaranteed to be strictly increasing? I'm not able to visualize how this process works.</p> <p>Can you can please explain how starting with <span class="math-container">$k=1$</span> we inductively construct the subsequence?</p>
Theo Bendit
248,286
<p>The point <span class="math-container">$a$</span> is in <span class="math-container">$\bigcap_n E_n$</span>, which means it is in every <span class="math-container">$E_n$</span>. The set <span class="math-container">$E_n$</span> is the result of throwing away the first <span class="math-container">$n - 1$</span> (I'm assuming here <span class="math-container">$\Bbb{N}$</span> begins at <span class="math-container">$1$</span>) terms of the sequence, and taking the closure. So, <span class="math-container">$a$</span> is a point that is in the closure of the sequence values, no matter how many we throw away.</p> <p>So, if we put a little interval <span class="math-container">$\left(a - \frac{1}{k}, a + \frac{1}{k}\right)$</span> around <span class="math-container">$a$</span>, there must always be points from the sequence lying in this interval, (once again) no matter how many terms we throw away.</p> <p>Let's take <span class="math-container">$k = 1$</span> to start. Then, we are looking at the interval <span class="math-container">$(a - 1, a + 1)$</span>, which must contain some sequence term (after all, <span class="math-container">$a \in E_1$</span>, which is the closure of all the sequence terms). Pick one, any one, and call the index of the sequence <span class="math-container">$n_1$</span>.</p> <p>Next, the interval <span class="math-container">$\left(a - \frac{1}{2}, a + \frac{1}{2}\right)$</span> must contain sequence terms, no matter how many we throw away. Let us throw the first <span class="math-container">$n_1$</span> terms of the sequence. So, we are considering <span class="math-container">$E_{n_1 + 1}$</span>. Since <span class="math-container">$a \in E_{n_1 + 1}$</span>, there must be some <span class="math-container">$m &gt; n_1$</span> such that <span class="math-container">$a_m \in \left(a - \frac{1}{2}, a + \frac{1}{2}\right)$</span>. We'll let <span class="math-container">$n_2 = m$</span>. 
Note, by construction <span class="math-container">$n_2 &gt; n_1$</span>.</p> <p>Next, with <span class="math-container">$\left(a - \frac{1}{3}, a + \frac{1}{3}\right)$</span>, we use the fact that <span class="math-container">$a \in E_{n_2 + 1}$</span>, and use this to obtain an <span class="math-container">$n_3 &gt; n_2$</span>, such that <span class="math-container">$a_{n_3} \in \left(a - \frac{1}{3}, a + \frac{1}{3}\right)$</span>, and so on, and so on. The constructed sequence of natural numbers increases strictly, and by construction <span class="math-container">$|a - a_{n_k}| &lt; \frac{1}{k}$</span>, as needed.</p> <p>Hope that helps. Let me know if you have follow-up questions.</p>
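<p>To make the induction concrete, here is a small numerical sketch (the sequence and limit point are my own illustrative choices, not part of the proof). For <span class="math-container">$a_n = (-1)^n(1+1/n)$</span>, the number <span class="math-container">$a = 1$</span> lies in every <span class="math-container">$E_n$</span>, and for each <span class="math-container">$k$</span> we search for the first index past the previous one landing in <span class="math-container">$(a - 1/k, a + 1/k)$</span>:</p>

```python
def a(n):
    # a bounded sequence with limit point 1 along the even indices
    return (-1) ** n * (1 + 1 / n)

limit_point = 1.0
indices = []          # the n_k of the proof
prev = 0
for k in range(1, 11):
    n = prev + 1      # only look past the previous index, so n_k is strictly increasing
    while abs(a(n) - limit_point) >= 1 / k:
        n += 1
    indices.append(n)
    prev = n
```

<p>Each search terminates because infinitely many terms lie in the interval, and starting the search at <span class="math-container">$n_{k-1}+1$</span> is exactly what forces <span class="math-container">$n_k &gt; n_{k-1}$</span>.</p>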
189,781
<p>I want to add custom Mesh lines onto a Plot3D, with different MeshStyle than Mesh lines which already exist. As a MWE, starting with a meshed plot</p> <pre><code>Plot3D[Exp[-(x^2+y^2)],{x,-3,3},{y,-3,3},PlotRange-&gt;All,Mesh-&gt;Automatic] </code></pre> <p>I want to go back and add extra mesh (keeping the existing Automatic mesh) across x=0 and y=0, but in a different color. I know this cannot be done with Epilog, since Mesh is a 3D thing and Epilog is for 2D layers. How can one accomplish this?</p>
egwene sedai
655
<p>You can plot another one with the desired mesh only, e.g.</p> <pre><code>f[x_, y_] := Exp[-(x^2 + y^2)] p0 = Plot3D[f[x, y], {x, -3, 3}, {y, -3, 3}, PlotRange -&gt; All, Mesh -&gt; Automatic] p2 = Plot3D[f[x, y], {x, -3, 3}, {y, -3, 3}, Mesh -&gt; {{{0, {Red, Thickness[.01]}}}, {{0, {Red, Thickness[.01]}}}}, PlotRange -&gt; All] Show[{p0, p2}] </code></pre> <p>or using <code>ParametricPlot3D</code>:</p> <pre><code>pmx = ParametricPlot3D[{x, 0, f[x, 0]}, {x, -3, 3}, PlotStyle -&gt; {Red, Thickness[.01]}]; pmy = ParametricPlot3D[{0, y, f[0, y]}, {y, -3, 3}, PlotStyle -&gt; {Red, Thickness[.01]}]; Show[{p0, pmx, pmy}] </code></pre> <p><a href="https://i.stack.imgur.com/2Qm6C.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2Qm6C.png" alt="enter image description here"></a></p>
3,896,327
<p>For the first part of this question, I was asked to find the either/or version and the contrapositive of this statement, which I found as follows:</p> <p>i) either <span class="math-container">$n \leq 7$</span>, or <span class="math-container">$n^2-8n+12$</span> is composite</p> <p>ii) if <span class="math-container">$n^2-8n+12$</span> is not composite, then <span class="math-container">$n \leq 7$</span></p> <p>Then we are asked to prove the statement.</p> <p>I've examined all of our class notes and found mention of proof via factorising &quot;to be covered in more detail later&quot; (but no detail later included - perhaps it is considered too obvious?).</p> <p>As this is such a simple question, judging by the two marks available for it, I'm concerned that asking another student will cause them to inadvertently tell me the answer.</p> <p>I am finding this question hard to google around because findable simple proof examples do not seem to include quadratic expressions.</p> <p>What would the right steps be to approach a proof like this? And/or is there an online resource that shows some similar examples with working/notes? I don't mind handing in an incorrect answer if I'm able to make an attempt, but I'm just so uncertain of the right starting point that I cannot make a start.</p>
Stinking Bishop
700,480
<p><span class="math-container">$$n^2-8n+12=(n-4)^2-2^2=(n-2)(n-6)$$</span></p> <p>Thus, if <span class="math-container">$n&gt;7$</span>, then both of the factors <span class="math-container">$n-2$</span> and <span class="math-container">$n-6$</span> are greater than <span class="math-container">$1$</span> and <span class="math-container">$n^2-8n+12=(n-2)(n-6)$</span> is a nontrivial factorization of <span class="math-container">$n^2-8n+12$</span>.</p>
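<p>As a quick numerical sanity check (illustrative only, not part of the proof), one can verify both the factorisation and the compositeness:</p>

```python
def f(n):
    return n * n - 8 * n + 12

def is_composite(m):
    # m > 1 and m has a nontrivial divisor
    return m > 1 and any(m % d == 0 for d in range(2, int(m ** 0.5) + 1))

# factorisation f(n) = (n-2)(n-6), and f(n) composite for every n > 7
checks = [(f(n) == (n - 2) * (n - 6), is_composite(f(n))) for n in range(8, 101)]
```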
3,288,651
<p>I have a problem understanding a proof about ideals, which states that every ideal in the integers can be generated by a single integer. And with that I realized that I also don't really understand ideals in general and the intuition behind them. </p> <p>So let me start by the definition of an ideal. For <span class="math-container">$a, b \in \mathbb{Z}$</span>, the ideal generated by <span class="math-container">$a$</span> is the set <span class="math-container">$ (a) := \{ua : u \in \mathbb{Z}\} $</span> while the ideal generated by <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is the set <span class="math-container">$(a, b) := \{ua + vb : u,v \in \mathbb{Z}\}$</span>. Here comes my first question: Are those "multiples" of the generators (<span class="math-container">$u$</span> and <span class="math-container">$v$</span>) all possible integers? Or does this apply only to a specific amount of predefined integers? </p> <p>And now comes the proof in question. I added the questions in parenthesis where I had problems following: </p> <p>The lemma states that for <span class="math-container">$a, b \in \mathbb{Z}$</span> (not both 0), <span class="math-container">$ \exists d \in \mathbb{Z}: (a,b) = (d) $</span>. This means in my understanding that every ideal in the integers, no matter how many integers were used to generate it, can be generated only by a single integer. </p> <p><em>Proof</em>: The set <span class="math-container">$(a,b)$</span> must contain some positive numbers (why? The definition of the ideal doesn't state that). By the well-ordering principle, we know that those positive numbers must have a smallest positive number. Let <span class="math-container">$d$</span> be that number. Because <span class="math-container">$d \in (a,b)$</span>, every multiple of <span class="math-container">$d$</span> must also be in <span class="math-container">$(a,b) $</span> (why? 
Is there any definition or lemma or theorem that states that?). Therefore, we have <span class="math-container">$(d) \subseteq (a,b)$</span>. And now to prove the other side <span class="math-container">$\supseteq$</span>: For any <span class="math-container">$c \in (a,b) $</span> , <span class="math-container">$\exists q,r $</span> (are those elements of the integers or of the set <span class="math-container">$(a,b)$</span>? and do any restrictions apply to <span class="math-container">$q$</span>?) where <span class="math-container">$0 \leq r &lt; d$</span> such that <span class="math-container">$c = qd + r$</span> (as far as my understanding goes, this comes from the fact that any integer can be divided by another integer yielding a remainder). Since both <span class="math-container">$c$</span> and <span class="math-container">$d$</span> are in <span class="math-container">$(a,b)$</span>, so is <span class="math-container">$r=c−qd$</span> . Since <span class="math-container">$0≤r&lt;d$</span> and <span class="math-container">$d$</span> is (by assumption) the smallest positive element in <span class="math-container">$(a, b)$</span>, we must have <span class="math-container">$r = 0$</span>. Thus <span class="math-container">$ c = qd ∈ (d)$</span> (how did we conclude that last step?). </p> <p>Thank you for the clarifications. </p>
Wuestenfux
417,848
<p>Well, the proof is correct. What are the basic steps?</p> <ol> <li><p>If <span class="math-container">$I\ne \{0\}$</span> is an ideal of <span class="math-container">$\Bbb Z$</span> and <span class="math-container">$0\ne a\in I$</span>, then <span class="math-container">$-a=(-1)a\in I$</span> and so <span class="math-container">$I$</span> contains a positive integer.</p></li> <li><p>By the well-ordering principle, <span class="math-container">$I$</span> contains a least positive number <span class="math-container">$n$</span>.</p></li> <li><p>Each number <span class="math-container">$a\in I$</span> is a multiple of <span class="math-container">$n$</span>. Indeed, divide <span class="math-container">$a$</span> by <span class="math-container">$n$</span> with remainder: <span class="math-container">$a=qn+r$</span> where <span class="math-container">$0\leq r&lt;n$</span>. Then <span class="math-container">$r = a-qn = a+(-q)n\in I$</span>. But <span class="math-container">$n$</span> is the smallest positive number in <span class="math-container">$I$</span> and so <span class="math-container">$r=0$</span>. The claim follows.</p></li> <li><p>By 3., the ideal <span class="math-container">$I$</span> equals <span class="math-container">$(n)$</span>.</p></li> </ol>
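<p>A small numerical illustration (my own sketch): the least positive element <span class="math-container">$d$</span> of <span class="math-container">$(a,b)$</span> is <span class="math-container">$\gcd(a,b)$</span>, and within a window the set of combinations <span class="math-container">$\{ua+vb\}$</span> coincides with the multiples of <span class="math-container">$d$</span>:</p>

```python
from math import gcd

a, b = 12, 18
d = gcd(a, b)  # the single generator; here 6

N = 100
combos = {u * a + v * b for u in range(-N, N + 1) for v in range(-N, N + 1)}

window = set(range(-a * b, a * b + 1))
ideal_in_window = combos & window
multiples_of_d = {k for k in window if k % d == 0}
```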
2,974,839
<p>I solved this question this way.</p> <p>First, there are two ways that the cards will be alternating:</p> <p>B - R - B - R - B - R</p> <p>R - B - R - B - R - B</p> <p>Second, there are 6! (720) possible orders in which the cards can be dealt.</p> <p>So, the answer is 2/720. Is this correct?</p>
Community
-1
<p>There are actually <span class="math-container">$6!/(3!3!)=720/36=20$</span> ways, so the probability is <span class="math-container">$2/20=1/10$</span>. </p>
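<p>Brute-force enumeration (my own check) confirms both the count of distinguishable orderings and the probability:</p>

```python
from itertools import permutations
from fractions import Fraction

deals = set(permutations("BBBRRR"))  # distinct orderings: 6!/(3!3!) = 20
alternating = [d for d in deals if all(x != y for x, y in zip(d, d[1:]))]
prob = Fraction(len(alternating), len(deals))
```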
2,974,839
<p>I solved this question this way.</p> <p>First, there are two ways that the cards will be alternating:</p> <p>B - R - B - R - B - R</p> <p>R - B - R - B - R - B</p> <p>Second, there are 6! (720) possible orders in which the cards can be dealt.</p> <p>So, the answer is 2/720. Is this correct?</p>
J.G.
56,861
<p>Without loss of generality, the first card is blue. Then choose a red one with probability <span class="math-container">$\frac{3}{5}$</span>, a blue one with probability <span class="math-container">$\frac{2}{4}=\frac{1}{2}$</span>, a red one with probability <span class="math-container">$\frac{2}{3}$</span>, and the last blue one with probability <span class="math-container">$\frac{1}{2}$</span>, followed by the last red one. The result is <span class="math-container">$\frac{2}{5}\cdot(\frac{1}{2})^2=\frac{1}{10}$</span>.</p>
228,389
<p>How and where is it proved that WKL$_0$ proves the compactness theorem for countable models? (This is a follow-up to a comment of F. Dorais.)</p>
Carl Mummert
5,442
<p>The statement for the completeness theorem is due to Harvey Friedman, 1976, "Systems of second order arithmetic with restricted induction II", p. 558 of: Meeting of the Association for Symbolic Logic, John Baldwin, D. A. Martin, Robert I. Soare and W. W. Tait, <em>The Journal of Symbolic Logic</em>, Vol. 41, No. 2 (Jun., 1976), pp. 551-560, <a href="http://www.jstor.org/stable/2272259" rel="nofollow">http://www.jstor.org/stable/2272259</a></p> <p>Friedman had previously worked in systems without restricted induction. He stated the corresponding result for the completeness theorem for WKL in his paper "Some Systems of Second Order Arithmetic and Their Use", <em>Proceedings of the International Congress of Mathematicians, Vancouver, 1974</em>, pp. 235-242, <a href="http://www.mathunion.org/ICM/ICM1974.1/Main/icm1974.1.0235.0242.ocr.pdf" rel="nofollow">http://www.mathunion.org/ICM/ICM1974.1/Main/icm1974.1.0235.0242.ocr.pdf</a></p> <p>As Noah Schweber mentioned, the proof of the completeness theorem in WKL is essentially just a formalization of Henkin's proof of the completeness theorem from ZFC. However, Friedman's theorems show that the completeness theorem for countable first-order theories is <em>equivalent</em> to WKL over RCA (also to $\mathsf{WKL}_0$ over $\mathsf{RCA}_0$), which requires an additional proof for the reversal. </p>
1,515,667
<blockquote> <p>Show that the limit $\lim_{(x,y)\to (0,0)}\frac{2e^x y^2}{x^2+y^2}$ does not exist</p> </blockquote> <p>$$\lim_{(x,y)\to (0,0)}\frac{2e^x y^2}{x^2+y^2}$$</p> <p>Divide numerator and denominator by $y^2$:</p> <p>$$\lim_{(x,y)\to (0,0)}\frac{2e^x}{\frac{x^2}{y^2}+1}$$</p> <p>$$=\frac{2(1)}{\frac{0}{0}+1}$$</p> <p>Since $\frac{0}{0}$ is undefined, this limit does not exist.</p> <p><strong>I am not satisfied with my proof.</strong> Makes me think that what I just did was a simple step, and not acceptable university-level mathematics. Any comments on my proof?</p>
cr001
254,175
<p>Indeed your proof is not valid.</p> <p>Let $y=x$, $\lim_{(x,y)\to (0,0)}\frac{2e^x y^2}{x^2+y^2}=1$</p> <p>Let $y=2x$, $\lim_{(x,y)\to (0,0)}\frac{2e^x y^2}{x^2+y^2}=\large{8\over5}$</p> <p>So it cannot exist.</p>
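<p>A quick numerical check along the two paths (illustrative only) shows the two different limiting values:</p>

```python
from math import exp

def f(x, y):
    return 2 * exp(x) * y ** 2 / (x ** 2 + y ** 2)

ts = [10.0 ** (-k) for k in range(1, 8)]
along_y_eq_x = [f(t, t) for t in ts]       # tends to 1
along_y_eq_2x = [f(t, 2 * t) for t in ts]  # tends to 8/5
```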
3,429,514
<p>So I am having a problem finding the solution to this equation: <span class="math-container">$$x'' +\omega^2 x = f \sin \omega t$$</span> with initial conditions x(0)=0 and x'(0)=0. I found the following explanations of how to solve it: <br> <br></p> <p><em>I had to make the euler formula substitution and then multiply it by t. Ostensibly, this is because the particular solution ansatz for <span class="math-container">$f\sin\omega t$</span> has a term(s) the same as the homogeneous solution. Hence Yp=Cte^(iwt). So, presumably, any g(t) ansatz that gives terms the same as the homogeneous solution must be adjusted accordingly? Was there some simpler way to solve this problem that I overlooked? Please excuse any terminology errors in the preceding.</em> <br> <br> And<br><br> <em>f≠0. This is the case of resonance, so you need to multiply your ansatz by t. So for the particular solution, you need to try <span class="math-container">$x=At\sin\omega t + Bt\cos\omega t$</span></em><br><br> But I do not understand what they are talking about. What I understand is that the general solution is: <span class="math-container">$$C_1 \cos\omega t + C_2\sin\omega t + x_{\text{particular}}$$</span> where to find <span class="math-container">$x_{\text{particular}}$</span> I tried the ansatz <span class="math-container">$x = C\cos\omega t$</span>, but then the left side comes out to zero, forcing f = 0. What am I doing wrong or not understanding? Please help :) I really appreciate it, as I spent a good few hours on this one problem, and many thanks in advance!</p>
Community
-1
<p>A solution of the homogeneous equation will <em>never</em> work as a solution of the non-homogeneous equation for a very simple reason: when plugged in the LHS, it yields <span class="math-container">$0$</span> !</p> <p>So when the RHS belongs to the homogeneous equation, the ansatz must be different. In the case of an equation with constant coefficients, the ansatz will be a polynomial of appropriate degree times the homogeneous solution.</p> <p>Here we can try <span class="math-container">$$(C_1t+C_0)\cos\omega t+(S_1t+S_0)\sin\omega t,$$</span> which gives</p> <p><span class="math-container">$$2\omega S_1\cos(\omega t)-2\omega C_1\sin(\omega t)=f\sin(\omega t).$$</span></p>
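<p>Matching coefficients in the last display gives <span class="math-container">$S_1 = 0$</span> and <span class="math-container">$C_1 = -f/(2\omega)$</span>, so one particular solution is <span class="math-container">$x_p(t) = -\frac{f}{2\omega}\,t\cos\omega t$</span>. A finite-difference check (my own sketch, with arbitrarily chosen values of <span class="math-container">$\omega$</span> and <span class="math-container">$f$</span>) confirms it satisfies the equation:</p>

```python
from math import sin, cos

w, f = 2.0, 3.0  # arbitrary test values

def xp(t):
    # candidate particular solution x_p(t) = -(f/(2w)) t cos(w t)
    return -(f / (2 * w)) * t * cos(w * t)

def residual(t, h=1e-4):
    # x_p'' (central difference) + w^2 x_p - f sin(w t); should be ~0
    second = (xp(t - h) - 2 * xp(t) + xp(t + h)) / h ** 2
    return second + w ** 2 * xp(t) - f * sin(w * t)

residuals = [abs(residual(t)) for t in (0.3, 1.0, 2.7)]
```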
4,295,459
<blockquote> <p>Find the Taylor series of <span class="math-container">$$\frac{1}{(i+z)^2}$$</span> centered at <span class="math-container">$z_0 = i$</span>.</p> </blockquote> <p>Im thinking if I could find the Taylor series for <span class="math-container">$$\frac{1}{i+z}$$</span> I could use that <span class="math-container">$$\frac{d}{dz} \big(-\frac{1}{i+z} \big) = \frac{1}{(i+z)^2}$$</span> However Im struggling with finding the series of 1/(i+z) (I know I should use the geometric series), and also not sure how to make sure the series is centered at <span class="math-container">$z_0 = i$</span>. Any help is appreciated.</p>
GEdgar
442
<p>To find the Taylor series of <span class="math-container">$1/(i+z)$</span> in powers of <span class="math-container">$z-i$</span>, I could write <span class="math-container">$w = z-i$</span> and find the Taylor series of that in powers of <span class="math-container">$w$</span>.</p> <p><span class="math-container">$$ \frac{1}{i+z} = \frac{1}{2i+w} $$</span></p> <p>and recognize this as a geometric series with first term <span class="math-container">$1/(2i)$</span> and ratio <span class="math-container">$-w/(2i)$</span>:</p> <p><span class="math-container">$$ \frac{1}{2i+w} = \frac{1}{2i}\left(\frac{1}{1-\frac{-w}{2i}}\right) =\sum_{n\ge 0}\frac{(-w)^n}{(2i)^{n+1}} =\frac{1}{2i} + \frac{w}{4} + \frac{w^2}{-8i} - \frac{w^3}{16} + \dots $$</span></p>
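<p>A numerical check of the expansion (my own, with an arbitrary point inside the disc of convergence <span class="math-container">$|w| &lt; 2$</span>):</p>

```python
z0 = 1j               # expansion centre
z = 0.9 + 1.4j        # arbitrary point with |z - z0| < 2
w = z - z0

# partial sum of sum_{n>=0} (-w)^n / (2i)^{n+1}
partial = sum((-w) ** n / (2j) ** (n + 1) for n in range(60))
exact = 1 / (1j + z)

coeff_w1 = -1 / (2j) ** 2  # coefficient of w: -1/(2i)^2 = 1/4
```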
395,604
<p>Lets say you have a sequence $S = (0, 1, 2, 3, 4, 5, 6, 7, 8)$</p> <p>And another sequence $T = (0, 1, 2, 3)$</p> <p>Is there any specific mathematical term that defines the relationship between $S$ and $T$, that specifically says that $S$ starts with $T$?</p> <p>I thought $T$ would be called an <em>initial sub-sequence</em>, but this is incorrect because a sub-sequence seems to be a any subset of the elements of the sequence in the same order of the sequence (so even $(2, 4, 6, 8)$ would be a sub-sequence, while I want the <em>prefix</em> sub-sequence part, i.e. a sub-sequence that the sequence starts with)</p>
xavierm02
10,385
<p>In computer science (which is, at least at the beginning, maths), you call them words or strings over an alphabet $X$ which contains your letters.</p> <p>For this alphabet $X$, you define $X^n$ by the usual Cartesian product and $X^*=\bigcup\limits_{n\in \Bbb N}X^n$. $X^*$ is the set of all words written with letters of $X$.</p> <p>Then you can define a product $\cdot$ on $X^*$: $\left(u_1,\dots,u_p\right)\cdot \left(v_1,\dots,v_q\right)=\left(u_1,\dots,u_p,v_1,\dots,v_q\right)$</p> <p>And so saying a word $u$ start with $v$ is saying : $\exists w \in X^*, u=v\cdot w$. And $v$ is therefore called a left factor of $u$, or a prefix.</p>
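<p>In code, the prefix relation is just the existence of such a <span class="math-container">$w$</span>; a minimal sketch (my own, with tuples standing in for words):</p>

```python
def is_prefix(t, s):
    # t is a prefix (left factor) of s iff there exists w with s = t + w,
    # i.e. s starts with t
    return len(t) <= len(s) and s[: len(t)] == t

S = (0, 1, 2, 3, 4, 5, 6, 7, 8)
T = (0, 1, 2, 3)
```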
1,521,739
<p>The following is the Meyers-Serrin theorem and its proof in Evans's <em>Partial Differential Equations</em>:</p> <blockquote> <p><a href="https://i.stack.imgur.com/XnzXY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XnzXY.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/7woqQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7woqQ.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/pBIqX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pBIqX.png" alt="enter image description here"></a></p> </blockquote> <p>Could anyone explain where (for which $x\in U$) is the convolution in step 2 defined and how to get (3) from Theorem 1? </p> <blockquote> <p><a href="https://i.stack.imgur.com/NdZWL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NdZWL.png" alt="enter image description here"></a></p> </blockquote>
benleis
148,578
<p>I'm going to assume the OP wanted the vertex and focus of the original tilted parabola since they already had rotated it to a standard form where those values are easier to find.</p> <p>Rather than rotating, it's convenient to do everything in place since we're looking for the focus and by definition: for any point on the parabola the distance to the focus is equal to the distance to the directrix. </p> <p>First let the focus be F = (a,b) and the directrix the line y = mx + c. We need to find the point P (p, mp + c) on the directrix such that the segment between it<br> and an arbitrary point on the parabola is perpendicular to the directrix i.e. <span class="math-container">$-\dfrac{1} {m} = \dfrac{(y - mp - c)}{x - p}$</span></p> <p>Solving for p we get <span class="math-container">$p = \dfrac{1}{1 + m^2}(my + x- mc)$</span> and <span class="math-container">$P = \left(\dfrac{1}{1 + m^2}(my + x- mc), \dfrac{m}{1 + m^2}(my + x - mc) + c \right)$</span></p> <p>Then our general equation for the parabola is: <span class="math-container">$(x-a)^2 + (y-b)^2 = (x - \dfrac{1}{1 + m^2}(my + x- mc))^2 + (y - \dfrac{m}{1 + m^2}(my + x - mc) - c)^2$</span> </p> <p>This simplifies nicely to <span class="math-container">$(x-a)^2 + (y-b)^2 = \dfrac{1}{1 + m^2}(mx - y + c)^2$</span></p> <p>If we move everything to the lhs we get this form: <span class="math-container">$x^2 + m^2y^2 - 2mxy + (-2(m^2 + 1)a -2c)x + (-2(m^2 + 1)b + 2c)y + ((m^2 + 1)(a^2 + b^2 ) - c^2) = 0$</span></p> <p>So if the coefficient of <span class="math-container">$x^2$</span> in the original equation is 1 we can easily find m. 
This is already the case in our parabola so here m = 1.</p> <p>Now we're left with <span class="math-container">$(x-a)^2 + (y - b)^2 = \dfrac{1}{2}(x - y + c)^2$</span></p> <p>Matching coefficients with the original equation gives us a system of equations:</p> <ol> <li><p><span class="math-container">$2a^2 + 2b^2 - c^2 = 4$</span></p></li> <li><p><span class="math-container">$-4a -2c = 2 \sqrt{2}$</span></p></li> <li><p><span class="math-container">$-4b +2c = -2 \sqrt{2}$</span></p></li> </ol> <p>Solving for c and then substituting back in to find a and b we get:</p> <p><span class="math-container">$(x + \frac{3}{4}\sqrt{2})^2 + (y - \frac{3}{4}\sqrt{2})^2 = \frac{1}{2}(x - y + \frac{\sqrt{2}}{2})^2$</span></p> <p>The first part is done. The focus <span class="math-container">$F = \left( -\dfrac{3}{4}\sqrt{2}, \dfrac{3}{4}\sqrt{2} \right)$</span></p> <p>The vertex is halfway between F and the directrix. So first we look at the axis of symmetry y = -x + d which goes through F. Solving gives d = 0. This intersects the directrix where <span class="math-container">$-x = x + \frac{\sqrt{2}}{2}$</span> at I = <span class="math-container">$\left(-\dfrac{1}{4}\sqrt{2}, \dfrac{1}{4}\sqrt{2}\right)$</span> </p> <p>And the vertex V is at the midpoint of the segment between F and I. <span class="math-container">$V = \left(-\dfrac{\sqrt{2}}{2},\dfrac{\sqrt{2}}{2} \right)$</span> </p>
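<p>A numerical double-check of the final answer (my own): the derived vertex has equal distances to the derived focus and to the directrix <span class="math-container">$y = x + \frac{\sqrt{2}}{2}$</span>, as it must for a point on the parabola.</p>

```python
from math import sqrt

s = sqrt(2)
F = (-3 * s / 4, 3 * s / 4)  # derived focus
c = s / 2                    # directrix x - y + c = 0, i.e. y = x + sqrt(2)/2
V = (-s / 2, s / 2)          # derived vertex

dist_to_focus = sqrt((V[0] - F[0]) ** 2 + (V[1] - F[1]) ** 2)
dist_to_directrix = abs(V[0] - V[1] + c) / sqrt(2)
```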
3,597,829
<p>Edit: I am using the natural logarithm in what follows.</p> <p>I want to figure out how to show by hand that the maximum of <span class="math-container">$$\log(4)c+\log(3)a+\log(2)x$$</span> when <span class="math-container">$$a\geq 0, c\geq 0, x \geq 0, y \geq 0,$$</span> <span class="math-container">$$a+c+x+y=1,$$</span> <span class="math-container">$$(a+c)^2+(x+y)^2+2xc\leq 1-2\gamma,$$</span> where <span class="math-container">$\gamma$</span> is a fixed constant such that <span class="math-container">$4/25\leq \gamma \leq 1/4$</span>, is <span class="math-container">$\frac{\log(6)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(3/2)$</span>, which is given by <span class="math-container">$a=\frac{1+\sqrt{1-4\gamma}}{2}$</span>, <span class="math-container">$x=1-a=\frac{1-\sqrt{1-4\gamma}}{2}$</span>, <span class="math-container">$c=y=0$</span>.</p> <p>I have tried using Lagrange multipliers by changing the last constraint to an equality, but it becomes very messy and I get stuck. </p> <p>I would also be happy if I could simply show that <span class="math-container">$\log(4)c+\log(3)a+\log(2)x \leq \frac{\log(6)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(3/2)$</span> for all such <span class="math-container">$a,c,x,y$</span>. A method I have attempted is to use the concavity of <span class="math-container">$\log(x)$</span>. The equation of the line that passes through <span class="math-container">$(2, \log(2))$</span> and <span class="math-container">$(3, \log(3))$</span> is <span class="math-container">$L(x)=\log(3/2)x-\log(9/8)$</span>. Then by concavity of <span class="math-container">$\log(x)$</span> we have <span class="math-container">$\log(x)\leq L(x)$</span> for all integers <span class="math-container">$x$</span>. 
Then</p> <p><span class="math-container">\begin{align*}&amp;\log(4)c+\log(3)a+\log(2)x\\ &amp;\leq L(4)c+L(3)a+L(2)x\\ &amp;=\log(3/2) (4c+3a+2x+y)-\log(9/8)\\ &amp;=\log(3/2)\left(\frac{5+3}{2}c+\frac{5+1}{2}a+\frac{5-1}{2}x+\frac{5-3}{2}y\right)-\log(9/8)\\ &amp;=\log(3/2)\left(\frac{5+\sqrt{1-4\gamma}}{2}\right)-\log(9/8)+\log(3/2)\left(\frac{a-x+3(c-y)-\sqrt{1-4\gamma}}{2}\right)\\ &amp;=\frac{\log(6)}{2}+\frac{\sqrt{1-4\gamma}}{2}\log(3/2)+\log(3/2)\left(\frac{a-x+3(c-y)-\sqrt{1-4\gamma}}{2}\right).\\ \end{align*}</span> That means that to obtain the inequality I want, it would suffice to show that <span class="math-container">$a-x\leq 3(y-c)+\sqrt{1-4\gamma}$</span>. However, I have not had success in proving this inequality. Any suggestions or help is immensely appreciated. </p> <p>Another option would be to use perturbation techniques, but I do not have experience in this. Thank you for your help.</p>
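<p>The key chord bound <span class="math-container">$\log(n)\le L(n)$</span> at integer points can at least be checked numerically (my own sanity check):</p>

```python
from math import log, isclose

def L(x):
    # secant line through (2, log 2) and (3, log 3)
    return log(1.5) * x - log(9 / 8)

gaps = [L(n) - log(n) for n in range(1, 11)]  # all should be >= 0
```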
Yuri Negometyanov
297,350
<p><span class="math-container">$\color{brown}{\textbf{Preliminary transformations.}}$</span></p> <p>The task is to maximize the function <span class="math-container">$$f(x,a,c) = (x+2c)\log 2 + a\log 3\tag1$$</span> over non-negative <span class="math-container">$a,c,x,y,$</span> under the conditions <span class="math-container">$$x+y+a+c = 1,\quad (x+y)^2+(c+a)^2+2cx\le 1-2\gamma,\tag2$$</span> or <span class="math-container">\begin{cases} x+y+a+c = 1\\ (c+a)(x+y) - cx\ge\gamma\\ x\ge0,\quad y\ge0,\quad a\ge0,\quad c\ge0,\tag3 \end{cases}</span> where <span class="math-container">$$\gamma\in\left[0,\dfrac14\right].\tag4$$</span></p> <p>Denote <span class="math-container">$$b= a+c,\quad z = 1-y,\quad g(z,b,c) = f(z-b,b-c,c),$$</span> then <span class="math-container">$$g(z,b,c) = (z-b+2c)\log2+(b-c)\log3\tag5,$$</span></p> <p><span class="math-container">\begin{cases} b(1-b) - c(z-b)\ge\gamma\\ 0\le c\le b\le z\le1.\tag6 \end{cases}</span></p> <p><span class="math-container">$\color{brown}{\textbf{Searching for the global maximum.}}$</span></p> <p>The maximum of the linear function <span class="math-container">$g(z,b,c)$</span> is attained on the boundary of the feasible region, and when the constraints are linear it is attained at the vertices. 
</p> <p>Since from <span class="math-container">$(5)$</span> <span class="math-container">$$g(z,b,c) = z\log2+b\log\dfrac32+c\log\dfrac43,$$</span> the global maximum is attained at the feasible point farthest from the origin in the corresponding direction.</p> <p><strong>Vertices.</strong></p> <p>If <strong>c=0</strong> then </p> <p><span class="math-container">$$b^2-b + \gamma = 0,$$</span> <span class="math-container">$$\max g_v(z,b,0) = g\left(1,\frac12+\frac{\sqrt{1-4\gamma}}2,0\right),$$</span> <span class="math-container">$$\max g_v(z,b,0) = \left(\frac12-\frac{\sqrt{1-4\gamma}}2\right)\log2 +\left(\frac12+\frac{\sqrt{1-4\gamma}}2\right)\log3.$$</span></p> <p>If <strong>b=c</strong> then <span class="math-container">\begin{cases} b(1-z)=\gamma\\ z = b, \end{cases}</span> <span class="math-container">$$\max g_v(z,b,b) = g\left(\frac12+\frac{\sqrt{1-4\gamma}}2, \frac12+\frac{\sqrt{1-4\gamma}}2,\frac12+\frac{\sqrt{1-4\gamma}}2\right),$$</span> <span class="math-container">$$\max g_v(z,b,b) = \left(1+\sqrt{1-4\gamma}\right)\log2.$$</span></p> <p>If <strong>z=b</strong> then <span class="math-container">\begin{cases} b(1-b)=\gamma\\ c=b, \end{cases}</span> <span class="math-container">$$\max g_v(b,b,c) = \left(1+\sqrt{1-4\gamma}\right)\log2.$$</span></p> <p>If <strong>z=1</strong> then <span class="math-container">\begin{cases} (b-c)(1-b)=\gamma\\ c = 0 \end{cases}</span> <span class="math-container">$$\max g_v(1,b,c) = g\left(1,\frac12+\frac{\sqrt{1-4\gamma}}2,0\right),$$</span> <span class="math-container">$$\max g_v(1,b,c) = \left(\frac12-\frac{\sqrt{1-4\gamma}}2\right)\log2 +\left(\frac12+\frac{\sqrt{1-4\gamma}}2\right)\log3.$$</span></p> <p>The greatest value over vertices is <span class="math-container">$$\color{brown}{\mathbf{g_v(z,b,c) = \begin{cases} \left(1+\sqrt{1-4\gamma}\right)\log2,\quad\text{if}\quad \gamma\in[0,\gamma_v)\\[4pt] \left(\frac12-\frac{\sqrt{1-4\gamma}}2\right)\log2
+\left(\frac12+\frac{\sqrt{1-4\gamma}}2\right)\log3, \quad\text{if}\quad \gamma\in[\gamma_v,0.25], \end{cases}}}\tag7$$</span> where <span class="math-container">$$\color{brown}{\mathbf{\gamma_v= \dfrac14 - \dfrac14\left(\dfrac{\log\,^3/_2}{\log\,^8/_3}\right)^2\approx0.20728.}}\tag8$$</span></p> <p><strong>Optimization task.</strong></p> <p>Optimization task for non-linear constraint <span class="math-container">$(6.1)$</span> can be solved by Lagrange multipliers method, applied to the function <span class="math-container">$$G(z,b,c,\lambda) = (z-b+2c)\log2+(b-c)\log3+\lambda(b-b^2+bc-cz - \gamma).$$</span> The stationary points of <span class="math-container">$G$</span> can be defined from the system <span class="math-container">$G'_z = G'_b = G'_c = G'_\lambda = 0,$</span> or <span class="math-container">\begin{cases} \log2-\lambda c = 0\\ -\log2+\log3+\lambda(1-2b+c) = 0\\ 2\log2-\log3+\lambda(b-z) = 0\\ b(1-b+c)-cz - \gamma = 0. \end{cases}</span></p> <p>Then, taking in account <span class="math-container">$(6.1),$</span> <span class="math-container">$$ \begin{cases} \log2-\lambda c = 0\\ \log3+\lambda(1-2b) = 0\\ \log2 + \lambda(1-b+c-z) = 0\\ b(1-b+c)-cz - \gamma = 0 \end{cases}\Rightarrow \begin{cases} (2b-1)\log2= c\log3\\ z=1-b+2c\\ b(1-b+c) - c(1-b+2c) = \gamma, \end{cases}$$</span> <span class="math-container">\begin{cases} c=r(2b-1)\\ z=(4r-1)b-(2r-1)\\ b(1-b)+c(2b-1-2c) = \gamma\\ 0\le c\le b \le z \le1\\ r=\dfrac{\log2}{\log3} = \log_23\approx0.63093,\tag9 \end{cases}</span></p> <p><span class="math-container">\begin{cases} c=r(2b-1)\\ z=1-b+2c\\ b(1-b)+(1-2r)(2b-1)^2=\gamma\\ b\in\left[\dfrac12,\dfrac{2r}{4r-1}\right]\approx[0.5,0.82814]\\ \gamma\in\left[0,\dfrac{(2r-1)^2}{(4r-1)^2}\right]\approx[0,0.02953],\tag{10} \end{cases}</span></p> <p><span class="math-container">$$b(1-b)+r(1-2r)(4b^2-4b+1) = \gamma,$$</span> <span class="math-container">$$s(b^2-b) + 2r^2-r+\gamma = 0,\quad\text{where}\quad s=8r^2-4r+1\approx1.66086,\tag{11}$$</span> <span 
class="math-container">$$b= \dfrac12 +\dfrac12\sqrt{\dfrac{1-4\gamma}s},\quad c=r\sqrt{\dfrac{1-4\gamma}s},\quad z= \dfrac12 +\dfrac{4r-1}2\sqrt{\dfrac{1-4\gamma}s},$$</span> <span class="math-container">$$g_m(z,b,c) = (4r-1)\sqrt{\dfrac{1-4\gamma}s}\log2+\left(\dfrac12+\dfrac{1-2r}2\sqrt{\dfrac{1-4\gamma}s}\right)\log3\\ = \left(\dfrac12+\dfrac{8r^2-4r+1}2\sqrt{\dfrac{1-4\gamma}s}\right)\log3 = \left(\dfrac12+\dfrac{s}2\sqrt{\dfrac{1-4\gamma}s}\right)\log3,$$</span> <span class="math-container">$$g_m(z,b,c) \le \dfrac{1+\sqrt s}2 \log3 &lt; \dfrac{1+1.3}{2}\log3 = 1.15\log3,$$</span> <span class="math-container">$$g_v(z,b,c) &gt; r\left(1+\sqrt{1-4\cdot0.03}\right)\log3 &gt; 0.63(1+0.9)\log3 &gt; g_m(z,b,c),$$</span></p> <p><span class="math-container">$$\color{brown}{\mathbf{\max g(z,b,c)=g_v(z,b,c).}}$$</span>\tag{12}</p> <p>Thus, the global maximum of <span class="math-container">$f(x,a,c)$</span> under the given conditions <strong>is defined by the formulas</strong> <span class="math-container">$\color{brown}{\mathbf{(7)-(8)}}\ $</span> (see also Wolfram Alpha plot).</p> <p><a href="https://i.stack.imgur.com/wSrrY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wSrrY.png" alt="The global maximum"></a></p>
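<p>A brute-force grid search (my own check, for one admissible value <span class="math-container">$\gamma = 0.24 &gt; \gamma_v$</span>) agrees with the closed form claimed in the question, i.e. the second case of <span class="math-container">$(7)$</span>:</p>

```python
from math import log, sqrt

gamma = 0.24
target = 0.5 * log(6) + 0.5 * sqrt(1 - 4 * gamma) * log(1.5)

best = -1.0
n = 50  # grid step 0.02; the claimed maximiser a = 0.6, x = 0.4 lies on the grid
for i in range(n + 1):
    for j in range(n + 1 - i):
        for k in range(n + 1 - i - j):
            a, c, x = i / n, j / n, k / n
            y = 1 - a - c - x
            if (x + y) ** 2 + (a + c) ** 2 + 2 * x * c <= 1 - 2 * gamma + 1e-12:
                best = max(best, log(4) * c + log(3) * a + log(2) * x)
```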
3,683,739
<p><strong>Question</strong>:</p> <p>I am trying to solve a principal value integral involving a square root. Using Mathematica I can get an answer but I would like to know a general approach to obtain them by hand. To be clear I am interested in a clear explanation of the method not just the solution.</p> <p>The principal value integral is of the form: <span class="math-container">$$ I_{B}\left(y\right)=\frac{1}{\pi} \int_0^B J\left(x\right)\left(\frac{\mathcal{P}}{x+y}+\frac{\mathcal{P}}{x-y}\right)\mathrm{d}x,$$</span> where <span class="math-container">$\mathcal{P}$</span> denotes a principal value integral and <span class="math-container">$0\le y\le B$</span> (we only consider real numbers). I would be interested in a method of solution for <span class="math-container">$J\left(x\right)$</span> given by: <span class="math-container">$$J\left(x\right)=x\sqrt{B^2-x^2}.$$</span></p> <p>Using Mathematica I have ascertained that <span class="math-container">$I\left(y\right)=\frac{B^2}{2}-y^2$</span> and since <span class="math-container">$J\left(x\right)$</span> is odd we have: <span class="math-container">$$ I_{B}\left(y\right)=\frac{1}{\pi} \int_{-B}^B J\left(x\right)\frac{\mathcal{P}}{x-y}\mathrm{d}x,$$</span> so I wonder if it could be solved with something akin to contour integration but I am at a loss as to how to proceed further.</p> <p><strong>Context</strong>:</p> <p>The integral is required to map spectral densities of Hamiltonians used in Open Quantum Systems.</p> <p><strong>Bonus</strong>:</p> <p>I will accept any answer that provides a step-by-step (analytic) method of solution for <span class="math-container">$I$</span> but I would also be interested in methods of solution for a couple of other integrals. 
I can post these as a separate question if people prefer.</p> <p>Firstly: <span class="math-container">$$ L_{\left(C,D\right)}\left(y\right)=\frac{1}{\pi} \int_{C}^{D} J_2\left(x\right)\frac{\mathcal{P}}{x-y}\mathrm{d}x,$$</span> with <span class="math-container">$0\le C&lt;y&lt;D$</span>, all positive real numbers and: <span class="math-container">$$J_2\left(x\right)=\sqrt{\left(D-x\right)\left(x-C\right)}.$$</span> Unfortunately, I have been unable to compute <span class="math-container">$L$</span> with Mathematica, but I have used alternative methods to ascertain <span class="math-container">$L_{\left(C,D\right)}\left(y\right)=\frac{C+D}{2}-y$</span>. </p> <p>And also: <span class="math-container">$$ K_{\left(A,B\right)}\left(y\right)=\frac{1}{\pi} \int_{A}^{B} J_1\left(x\right)\left(\frac{\mathcal{P}}{x+y}+\frac{\mathcal{P}}{x-y}\right)\mathrm{d}x,$$</span> with <span class="math-container">$0\le A&lt;y&lt;B$</span> all positive real numbers and: <span class="math-container">$$J_1\left(x\right)=\sqrt{\left(B^2-x^2\right)\left(x^2-A^2\right)}.$$</span> Since: <span class="math-container">$$ \frac{1}{\pi} \int_{A}^{B} J_1\left(x\right)\left(\frac{1}{x+y}+\frac{1}{x-y}\right)\mathrm{d}x=\frac{1}{\pi} \int_{A}^{B} J_1\left(x\right)\frac{2x}{x^2-y^2}\mathrm{d}x=\frac{1}{\pi} \int_{A^2}^{B^2} J_1\left(\sqrt{w}\right)\frac{1}{w-y^2}\mathrm{d}w=L\left(y^2\right)=\frac{A^2+B^2}{2}-y^2,$$</span> where we have used the substitution <span class="math-container">$w=x^2$</span> and defined <span class="math-container">$C=A^2$</span> and <span class="math-container">$D=B^2$</span>.</p> <p>It is probably also worth noting that: <span class="math-container">$$ L_{\left(C,D\right)}\left(y\right)=\frac{1}{\pi} \int_{0}^{D-C} \sqrt{w}\sqrt{D-C-w}\frac{\mathcal{P}}{w-(y-C)}\mathrm{d}w,$$</span> which can be found using the substitution <span class="math-container">$w=x-C$</span>.</p>
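<p>For what it's worth, the claimed closed form <span class="math-container">$I_{B}(y)=\frac{B^2}{2}-y^2$</span> can be checked numerically (my own sketch) by splitting off the singular part: with <span class="math-container">$f(x)=x\sqrt{B^2-x^2}$</span>, the difference quotient <span class="math-container">$(f(x)-f(y))/(x-y)$</span> is an ordinary integrand, and the leftover principal value is <span class="math-container">$f(y)\log\frac{B-y}{B+y}$</span>.</p>

```python
from math import sqrt, log, pi

B, y = 1.0, 0.4

def f(x):
    return x * sqrt(max(B * B - x * x, 0.0))

# PV of int_{-B}^{B} f(x)/(x-y) dx, via subtraction of the pole
N = 200000
h = 2 * B / N
regular = 0.0
for i in range(N):
    x = -B + (i + 0.5) * h        # midpoint rule; never hits x = y exactly
    regular += (f(x) - f(y)) / (x - y) * h
pv = regular + f(y) * log((B - y) / (B + y))

I_approx = pv / pi
I_claimed = B * B / 2 - y * y     # = 0.34 here
```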
Green Fish
548,931
<p><strong>Preliminary definitions and equivalences</strong>:</p> <p>Firstly we define (as per the question): <span class="math-container">\begin{equation} \begin{split} I_{B}\left(y\right)=&amp;\frac{1}{\pi} \int_{0}^{B} x\sqrt{B^2-x^2}\left(\frac{\mathcal{P}_{x+y}}{x+y}+\frac{\mathcal{P}_{x-y}}{x-y}\right)\mathrm{d}x,\\ K_{\left(A,B\right)}\left(y\right)=&amp;\frac{1}{\pi} \int_{A}^{B} \sqrt{\left(B^2-x^2\right)\left(x^2-A^2\right)}\left(\frac{\mathcal{P}_{x+y}}{x+y}+\frac{\mathcal{P}_{x-y}}{x-y}\right)\mathrm{d}x,\\ L_{\left(C,D\right)}\left(y\right)=&amp;\frac{1}{\pi} \int_{C}^{D} \sqrt{\left(D-x\right)\left(x-C\right)}\frac{\mathcal{P}_{x-y}}{x-y}\mathrm{d}x,\\ M_{E}\left(y\right)=&amp;\frac{1}{\pi} \int_{0}^{E} \sqrt{x}\sqrt{E-x}\frac{\mathcal{P}_{x-y}}{x-y}\mathrm{d}x,\\ N\left(y\right)=&amp;\frac{1}{\pi} \int_{0}^{1} \sqrt{x}\sqrt{1-x}\frac{\mathcal{P}_{x-y}}{x-y}\mathrm{d}x. \end{split} \end{equation}</span> where <span class="math-container">$\frac{\mathcal{P}_{x}}{x}$</span> is a principal value distribution centred at <span class="math-container">$x=0$</span>: <span class="math-container">\begin{equation} \begin{split} \frac{\mathcal{P}_{x}}{x}=\lim_{\varepsilon\rightarrow 0}\left[\frac{x}{x^2+\varepsilon^2}\right]\underset{\mathcal{D}}{=}\begin{cases} \frac{1}{x}, &amp;\text{ for } x\ne 0\\ 0, &amp;\text{ for } x= 0\\ \end{cases} \end{split} \end{equation}</span> where the limit should be taken after integration. We will also use a distributional equality, denoted <span class="math-container">$\underset{\mathcal{D}}{=}$</span>. If two distributions are distributionally equal this means they will have the same value when integrated over any domain (assuming there are no poles in the integrand other than those that appear explicitly in the distributional equality). 
If two distributions are distributionally equal, this means they differ by a finite amount (in fact they can differ by an infinite amount, as long as the divergence is slower than that of <span class="math-container">$1/x$</span> as <span class="math-container">$x\rightarrow 0$</span>) at a finite (or countably infinite) number of points within their domain. Distributional equality holds in such a case because (unless a distribution is infinitely valued at such a point) the contribution of a single point in an integration domain to the full integral is infinitesimal (zero). A commonly used distributional equality is: <span class="math-container">\begin{equation} \begin{split} x\frac{\mathcal{P}_{x}}{x}\underset{\mathcal{D}}{=}1, \end{split} \end{equation}</span> since the two only differ at a single point <span class="math-container">$x=0$</span>, where the former is zero and the latter is one (so they differ by a finite amount (1) at a finite number (1) of points).</p> <p>In all the integrals above we assume that <span class="math-container">$y$</span> is within the integration domain (though it can be at a boundary), e.g. for <span class="math-container">$I_{B}\left(y\right)$</span> we have <span class="math-container">$0\le y\le B$</span> and for <span class="math-container">$K_{\left(A,B\right)}\left(y\right)$</span> we have <span class="math-container">$A\le y\le B$</span>. 
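As a quick illustration (an editorial addition, assuming SciPy), the <span class="math-container">$\varepsilon$</span>-regularised kernel defining the principal-value distribution gives a stable limit when <span class="math-container">$\varepsilon\rightarrow 0$</span> is taken <em>after</em> integration; here it converges to <span class="math-container">$N(0.3)=0.2$</span>, matching the closed form derived later in this answer.

```python
# The regularised kernel (x - y)/((x - y)^2 + eps^2) approximates P_{x-y}/(x-y);
# shrinking eps after integrating reproduces the principal value of N(y).
import numpy as np
from scipy.integrate import quad

def N_eps(y, eps):
    f = lambda x: np.sqrt(x * (1 - x)) * (x - y) / ((x - y)**2 + eps**2)
    val, _ = quad(f, 0, 1, points=(y,), limit=500)  # break point at the near-pole
    return val / np.pi

for eps in (1e-2, 1e-3, 1e-4):
    print(eps, N_eps(0.3, eps))   # converges as eps shrinks
```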
Now we have <span class="math-container">$I_{B}\left(y\right)=K_{\left(0,B\right)}\left(y\right)$</span> by definition; <span class="math-container">\begin{equation} \begin{split} K_{\left(A,B\right)}\left(y\right)=&amp;\frac{1}{\pi} \int_{A}^{B} \sqrt{\left(B^2-x^2\right)\left(x^2-A^2\right)}\left(\frac{\mathcal{P}_{x+y}}{x+y}+\frac{\mathcal{P}_{x-y}}{x-y}\right)\mathrm{d}x\\ =&amp;\frac{1}{\pi} \int_{A}^{B} 2x\sqrt{\left(B^2-x^2\right)\left(x^2-A^2\right)}\frac{\mathcal{P}_{x^2-y^2}}{x^2-y^2}\mathrm{d}x\\ =&amp;\frac{1}{\pi} \int_{A^2}^{B^2} \sqrt{\left(B^2-w\right)\left(w-A^2\right)}\frac{\mathcal{P}_{w-y^2}}{w-y^2}\mathrm{d}w=L_{\left(A^2,B^2\right)}\left(y^2\right), \end{split} \end{equation}</span> using the distributional equality [1]: <span class="math-container">\begin{equation} \begin{split} \frac{\mathcal{P}_{x+y}}{x+y}+\frac{\mathcal{P}_{x-y}}{x-y}\underset{\mathcal{D}}{=}2x\frac{\mathcal{P}_{x^2-y^2}}{x^2-y^2}, \end{split} \end{equation}</span> and the substitution <span class="math-container">$w=x^2$</span> (so <span class="math-container">$2x\mathrm{d} x=\mathrm{d} w$</span>); we also have: <span class="math-container">\begin{equation} \begin{split} L_{\left(C,D\right)}\left(y\right)=&amp;\frac{1}{\pi} \int_{C}^{D} \sqrt{\left(D-x\right)\left(x-C\right)}\frac{\mathcal{P}_{x-y}}{x-y}\mathrm{d}x\\ =&amp;\frac{1}{\pi} \int_{0}^{D-C} \sqrt{\left(D-C-w\right)w}\frac{\mathcal{P}_{w-\left(y-C\right)}}{w-\left(y-C\right)}\mathrm{d}w=M_{D-C}\left(y-C\right),\\ \end{split} \end{equation}</span> where we use the substitution <span class="math-container">$w=x-C$</span>; and lastly [2]: <span class="math-container">\begin{equation} \begin{split} M_{E}\left(y\right)=&amp;\frac{1}{\pi} \int_{0}^{E} \sqrt{x}\sqrt{E-x}\frac{\mathcal{P}_{x-y}}{x-y}\mathrm{d}x=\frac{1}{\pi} \int_{0}^{1} \sqrt{Ew}\sqrt{E-Ew}\frac{\mathcal{P}_{Ew-y}}{Ew-y}E\mathrm{d}w\\ =&amp;E\frac{1}{\pi} \int_{0}^{1} \sqrt{w}\sqrt{1-w}\frac{\mathcal{P}_{w-\frac{y}{E}}}{w-\frac{y}{E}}\mathrm{d}w=E 
N\left(\frac{y}{E}\right),\\ \end{split} \end{equation}</span> where we have used the substitution <span class="math-container">$w=\frac{x}{E}$</span>.</p> <p>So we have: <span class="math-container">\begin{equation} \begin{split} M_{E}\left(y\right)=&amp;E N\left(\frac{y}{E}\right),\\ L_{\left(C,D\right)}\left(y\right)=&amp;M_{D-C}\left(y-C\right)=\left(D-C\right) N\left(\frac{y-C}{D-C}\right)\\ K_{\left(A,B\right)}\left(y\right)=&amp;L_{\left(A^2,B^2\right)}\left(y^2\right)=M_{B^2-A^2}\left(y^2-A^2\right)=\left(B^2-A^2\right) N\left(\frac{y^2-A^2}{B^2-A^2}\right),\\ I_{B}\left(y\right)=&amp;K_{\left(0,B\right)}\left(y\right)=B^2 N\left(\frac{y^2}{B^2}\right), \end{split} \end{equation}</span> and only need to determine a method to integrate <span class="math-container">$N\left(y\right)$</span> to answer all the original questions.</p> <p><strong>Integrations</strong>:</p> <p>If we consider <span class="math-container">$N\left(y\right)$</span> for <span class="math-container">$0&lt;y&lt;1$</span> we find: <span class="math-container">\begin{equation} \begin{split} N\left(y\right)=&amp;\frac{1}{\pi} \int_{0}^{1} \sqrt{x}\sqrt{1-x}\frac{\mathcal{P}_{x-y}}{x-y}\mathrm{d}x=\frac{2}{\pi} \int_{0}^{1} w^2\sqrt{1-w^2}\frac{\mathcal{P}_{w^2-y}}{w^2-y}\mathrm{d}w\\ =&amp;\frac{2}{\pi} \int_{0}^{1} \left(w^2-y+y\right)\sqrt{1-w^2}\frac{\mathcal{P}_{w^2-y}}{w^2-y}\mathrm{d}w\\ =&amp;\frac{2}{\pi} \int_{0}^{1} \sqrt{1-w^2}\mathrm{d}w+\frac{2y}{\pi} \int_{0}^{1} \sqrt{1-w^2}\frac{\mathcal{P}_{w^2-y}}{w^2-y}\mathrm{d}w\\ =&amp;\frac{1}{\pi} \int_{0}^{\frac{\pi}{2}} \left(1+\cos\left(2\theta\right)\right)\mathrm{d}\theta+\frac{2y}{\pi} \int_{0}^{\frac{\pi}{2}} \left(1-\sin^2\theta+y-y\right)\frac{\mathcal{P}_{\sin^2\theta-y}}{\sin^2\theta-y}\mathrm{d}\theta\\ =&amp;\frac{1}{\pi} \left[\theta+\frac{1}{2}\sin\left(2\theta\right)\right]_{0}^{\frac{\pi}{2}}-\frac{2y}{\pi} \int_{0}^{\frac{\pi}{2}} 
\left(\sin^2\theta-y\right)\frac{\mathcal{P}_{\sin^2\theta-y}}{\sin^2\theta-y}\mathrm{d}\theta\\ &amp;+\frac{2y\left(1-y\right)}{\pi} \int_{0}^{\frac{\pi}{2}} \frac{\mathcal{P}_{\sin^2\theta-y}}{\sin^2\theta-y}\mathrm{d}\theta\\ =&amp;\frac{1}{2}-y, \end{split} \end{equation}</span> where we have used the substitution <span class="math-container">$w=\sqrt{x}$</span> (so <span class="math-container">$\mathrm{d}x=2w\mathrm{d}w$</span>) and then <span class="math-container">$w= \sin\theta$</span> (so <span class="math-container">$\mathrm{d}w=\cos\theta\mathrm{d}\theta$</span>); we have also used the distributional equality: <span class="math-container">\begin{equation} \begin{split} \left(x-y\right)\frac{\mathcal{P}_{x-y}}{x-y}&amp;\underset{\mathcal{D}}{=}1,\\ \end{split} \end{equation}</span> the trigonometric identity: <span class="math-container">\begin{equation} \begin{split} 1-\sin^2\theta&amp;=\cos^2\theta=\frac{1}{2}\left(1+\cos\left(2\theta\right)\right), \end{split} \end{equation}</span> and the fact [3]: <span class="math-container">\begin{equation} \begin{split} \int_{0}^{\frac{\pi}{2}} \frac{\mathcal{P}_{\sin^2\theta-y}}{\sin^2\theta-y}\mathrm{d}\theta=0. 
\end{split} \end{equation}</span></p> <p>For <span class="math-container">$y=1$</span> we have: <span class="math-container">\begin{equation} \begin{split} N\left(1\right)=&amp;\frac{1}{\pi} \int_{0}^{1} \sqrt{x}\sqrt{1-x}\frac{\mathcal{P}_{x-1}}{x-1}\mathrm{d}x=-\frac{1}{\pi} \int_{0}^{1} \sqrt{\frac{x}{1-x}}\mathrm{d}x\\ =&amp;-\frac{2}{\pi} \int_{-\frac{\pi}{2}}^{0}\cos^2\theta\mathrm{d}\theta\\ =&amp;-\frac{1}{\pi} \int_{-\frac{\pi}{2}}^{0}\left(1+\cos\left(2\theta\right)\right)\mathrm{d}\theta=-\frac{1}{\pi} \left[\theta+\frac{1}{2}\sin\left(2\theta\right)\right]_{-\frac{\pi}{2}}^{0}=-\frac{1}{2}, \end{split} \end{equation}</span> where we used the distributional equality: <span class="math-container">\begin{equation} \begin{split} \sqrt{1-x}\frac{\mathcal{P}_{x-1}}{x-1}\underset{\mathcal{D}}{=}-\frac{1}{\sqrt{1-x}}, \end{split} \end{equation}</span> we have also used substitutions <span class="math-container">$x=\cos^2 \theta$</span> (so <span class="math-container">$\mathrm{d}x=\left|-2\cos\theta\sin\theta\mathrm{d}\theta\right|$</span>) (where we have preserved the orientation of our integral so we require the modulus sign). 
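The endpoint value just derived can be confirmed with ordinary quadrature (an editorial addition, assuming SciPy): at <span class="math-container">$y=1$</span>, and likewise at <span class="math-container">$y=0$</span> below, the square roots cancel the pole and leave a convergent improper integral.

```python
# Cross-check of N(1) = -1/2 and N(0) = 1/2: the pole is cancelled by the
# square roots at the endpoints, so no principal value is needed.
import numpy as np
from scipy.integrate import quad

N1 = -quad(lambda x: np.sqrt(x / (1 - x)), 0, 1)[0] / np.pi
N0 = quad(lambda x: np.sqrt((1 - x) / x), 0, 1)[0] / np.pi
print(N1, N0)   # -0.5 and 0.5
```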
For <span class="math-container">$y=0$</span> we have: <span class="math-container">\begin{equation} \begin{split} N\left(0\right)=&amp;\frac{1}{\pi} \int_{0}^{1} \sqrt{x}\sqrt{1-x}\frac{\mathcal{P}_{x}}{x}\mathrm{d}x=\frac{1}{\pi} \int_{0}^{1} \sqrt{\frac{1-x}{x}}\mathrm{d}x=\frac{2}{\pi} \int_{-\frac{\pi}{2}}^{0}\sin^2\theta\mathrm{d}\theta\\ =&amp;\frac{1}{\pi} \int_{-\frac{\pi}{2}}^{0}\left(1-\cos\left(2\theta\right)\right)\mathrm{d}\theta=\frac{1}{\pi} \left[\theta-\frac{1}{2}\sin\left(2\theta\right)\right]_{-\frac{\pi}{2}}^{0}=\frac{1}{2}, \end{split} \end{equation}</span> where we used the distributional equality: <span class="math-container">\begin{equation} \begin{split} \sqrt{x}\frac{\mathcal{P}_{x}}{x}\underset{\mathcal{D}}{=}\frac{1}{\sqrt{x}}, \end{split} \end{equation}</span> we have also used substitutions <span class="math-container">$x=\cos^2 \theta$</span> (so <span class="math-container">$\left|\mathrm{d}x\right|=\left|2\cos\theta\sin\theta\mathrm{d}\theta\right|$</span>).</p> <p>Bringing it all together we find: <span class="math-container">\begin{equation} \begin{split} N\left(y\right)=&amp;\frac{1}{2}-y, \end{split} \end{equation}</span> for <span class="math-container">$0\le y\le 1$</span>.</p> <p><strong>Conclusion</strong>:</p> <p>Since <span class="math-container">$N\left(y\right)=\frac{1}{2}-y$</span> for <span class="math-container">$0\le y\le 1$</span> (as derived above) and: <span class="math-container">\begin{equation} \begin{split} M_{E}\left(y\right)=&amp;E N\left(\frac{y}{E}\right),\\ L_{\left(C,D\right)}\left(y\right)=&amp;\left(D-C\right) N\left(\frac{y-C}{D-C}\right)\\ K_{\left(A,B\right)}\left(y\right)=&amp;\left(B^2-A^2\right) N\left(\frac{y^2-A^2}{B^2-A^2}\right),\\ I_{B}\left(y\right)=&amp;B^2 N\left(\frac{y^2}{B^2}\right), \end{split} \end{equation}</span> (as derived in the section before last) we find: <span class="math-container">\begin{equation} \begin{split} M_{E}\left(y\right)=&amp;E 
\left(\frac{1}{2}-\frac{y}{E}\right)=\frac{E}{2}-y,\\ L_{\left(C,D\right)}\left(y\right)=&amp;\left(D-C\right) \left(\frac{1}{2}-\frac{y-C}{D-C}\right)=\frac{D+C}{2}-y\\ K_{\left(A,B\right)}\left(y\right)=&amp;\left(B^2-A^2\right) \left(\frac{1}{2}-\frac{y^2-A^2}{B^2-A^2}\right)= \frac{B^2+A^2}{2}-y^2,\\ I_{B}\left(y\right)=&amp;B^2 \left(\frac{1}{2}-\frac{y^2}{B^2}\right)=\frac{B^2}{2}-y^2, \end{split} \end{equation}</span> for <span class="math-container">$0\le y\le E$</span>, <span class="math-container">$C\le y\le D$</span>, <span class="math-container">$A\le y\le B$</span>, and <span class="math-container">$0\le y\le B$</span> respectively. This completes the derivation.</p> <hr> <p><strong>Footnote [1]</strong>: We were a bit cavalier with our principal value equating: <span class="math-container">\begin{equation} \begin{split} \frac{\mathcal{P}_{x+y}}{x+y}+\frac{\mathcal{P}_{x-y}}{x-y}\underset{\mathcal{D}}{=}2x\frac{\mathcal{P}_{x^2-y^2}}{x^2-y^2}, \end{split} \end{equation}</span> so we shall verify our procedure was valid here: <span class="math-container">\begin{equation} \begin{split} &amp;\frac{\mathcal{P}_{x+y}}{x+y}+\frac{\mathcal{P}_{x-y}}{x-y}=\lim_{\varepsilon_1,\varepsilon_2\rightarrow 0}\left[\frac{x+y}{\left(x+y\right)^2+\varepsilon_1^2}+\frac{x-y}{\left(x-y\right)^2+\varepsilon_2^2}\right]\\ =&amp;\lim_{\varepsilon_1,\varepsilon_2\rightarrow 0}\left[\frac{\left(x^2-y^2\right)\left(x-y\right)+\varepsilon_2^2\left(x+y\right)+\left(x^2-y^2\right)\left(x+y\right)+\varepsilon_1^2\left(x-y\right)}{\left(\left(x+y\right)^2+\varepsilon_1^2\right)\left(\left(x-y\right)^2+\varepsilon_2^2\right)}\right]\\ =&amp;\lim_{\varepsilon_1,\varepsilon_2\rightarrow 0}\left[\frac{2x\left(x^2-y^2\right)+\varepsilon_2^2\left(x+y\right)+\varepsilon_1^2\left(x-y\right)}{\left(x^2-y^2\right)^2+\varepsilon_1^2\left(x-y\right)^2+\varepsilon_2^2\left(x+y\right)^2+\varepsilon_1^2\varepsilon_2^2}\right]\\ =&amp;\begin{cases} \frac{2x}{x^2-y^2}, &amp;\text{ for } x^2\ne y^2\\ 
\lim_{\varepsilon_1,\varepsilon_2\rightarrow 0}\left[\frac{\left(x+y\right)}{\left(x+y\right)^2+\varepsilon_1^2}\right]=\frac{\mathcal{P}_{x+y}}{x+y}=\frac{1}{2y}, &amp;\text{ for } x= y\ne 0\\ \lim_{\varepsilon_1,\varepsilon_2\rightarrow 0}\left[\frac{\left(x-y\right)}{\left(x-y\right)^2+\varepsilon_2^2}\right]=\frac{\mathcal{P}_{x-y}}{x-y}=-\frac{1}{2y}, &amp;\text{ for } x= -y\ne 0\\ \lim_{\varepsilon_1,\varepsilon_2\rightarrow 0}\left[\frac{2x\left(x^2+\varepsilon_2^2+\varepsilon_1^2\right)}{x^4+\left(\varepsilon_1^2 +\varepsilon_2^2\right)x^2+\varepsilon_1^2\varepsilon_2^2}\right]_{x=0}=0, &amp;\text{ for } x= \pm y= 0\\ \end{cases} \end{split} \end{equation}</span> where we use the definition of principal value distributions above, if we compare this to: <span class="math-container">\begin{equation} \begin{split} &amp;2x\frac{\mathcal{P}_{x^2-y^2}}{x^2-y^2}=\lim_{\varepsilon\rightarrow 0}\left[2x\frac{x^2-y^2}{\left(x^2-y^2\right)^2+\varepsilon^2}\right]\\ =&amp;\begin{cases} \frac{2x}{x^2-y^2}, &amp;\text{ for } x^2\ne y^2\\ 0, &amp;\text{ for } x= y\ne 0\\ 0, &amp;\text{ for } x= -y\ne 0\\ \lim_{\varepsilon\rightarrow 0}\left[2x\frac{x^2}{x^4+\varepsilon^2}\right]_{x=0}=0, &amp;\text{ for } x= \pm y= 0\\ \end{cases} \end{split} \end{equation}</span> we see that the distributions are equal everywhere except at two points <span class="math-container">$x=y$</span> and <span class="math-container">$x=-y$</span> (where <span class="math-container">$y\ne 0$</span>), but here they only differ by a finite amount (<span class="math-container">$\pm\frac{1}{2y}$</span>) at a finite number (2) of points (unless <span class="math-container">$y=0$</span> in which case they do not differ at any points) and thus they can be considered to be distributionally equal.</p> <p><strong>Footnote [2]</strong>: We leave proof of the distributional equality: <span class="math-container">\begin{equation} \begin{split} E\frac{\mathcal{P}_{Ew-y}}{Ew-y}\underset{\mathcal{D}}{=} 
\frac{\mathcal{P}_{w-\frac{y}{E}}}{w-\frac{y}{E}}, \end{split} \end{equation}</span> as an exercise for the reader.</p> <p><strong>Footnote [3]</strong>: It would be nice to prove <span class="math-container">$\int_{0}^{\frac{\pi}{2}} \frac{\mathcal{P}_{\sin^2\theta-y}}{\sin^2\theta-y}\mathrm{d}\theta=0 $</span> using symmetry, but we can show it in a somewhat clumsy way for <span class="math-container">$0&lt;y&lt;1$</span>. We have, where we use the second part of the principal value definition (that removes the pole <span class="math-container">$y=\sin\theta$</span>, i.e. <span class="math-container">$\theta=\phi$</span>, from the integration range by taking explicit limits), the result: <span class="math-container">\begin{equation} \begin{split} &amp;\int_{0}^{\frac{\pi}{2}} \frac{\mathcal{P}_{\sin^2\theta-y}}{\sin^2\theta-y}\mathrm{d}\theta=\int_{0}^{\frac{\pi}{2}} \frac{\mathcal{P}_{\sin\left(\theta+\phi\right)\sin\left(\theta-\phi\right)}}{\sin\left(\theta+\phi\right)\sin\left(\theta-\phi\right)}\mathrm{d}\theta\\ =&amp;\lim_{\varepsilon\rightarrow 0}\left[\int_{0}^{\phi-\varepsilon} \frac{1}{\sin\left(\theta+\phi\right)\sin\left(\theta-\phi\right)}\mathrm{d}\theta+\int_{\phi+\varepsilon}^{\frac{\pi}{2}} \frac{1}{\sin\left(\theta+\phi\right)\sin\left(\theta-\phi\right)}\mathrm{d}\theta\right]\\ =&amp;\lim_{\varepsilon\rightarrow 0}\left[\left[\frac{1}{\sin \left(2\phi\right)}\ln\left|\frac{\sin\left(\theta-\phi\right)}{\sin\left(\theta+\phi\right)}\right|\right]_{0}^{\phi-\varepsilon}+\left[\frac{1}{\sin \left(2\phi\right)}\ln\left|\frac{\sin\left(\theta-\phi\right)}{\sin\left(\theta+\phi\right)}\right|\right]_{\phi+\varepsilon}^{\frac{\pi}{2}} \right]\\ =&amp;\frac{1}{\sin \left(2\phi\right)}\lim_{\varepsilon\rightarrow 0}\left[\ln\left|\frac{\sin\left(-\varepsilon\right)}{\sin\left(2\phi-\varepsilon\right)}\right|-\ln\left|\frac{\sin\left(-\phi\right)}{\sin\left(\phi\right)}\right|\right.\\ 
&amp;~~~~~~~~~~~~~~~~~~~~~~~~~\left.+\ln\left|\frac{\sin\left(\frac{\pi}{2}-\phi\right)}{\sin\left(\frac{\pi}{2}+\phi\right)}\right|-\ln\left|\frac{\sin\varepsilon}{\sin\left(2\phi+\varepsilon\right)}\right| \right]\\ =&amp;\frac{1}{\sin \left(2\phi\right)}\lim_{\varepsilon\rightarrow 0}\left[\ln\left|\frac{\sin\varepsilon}{\sin\left(2\phi\right)\cos\varepsilon-\cos\left(2\phi\right)\sin\varepsilon}\right|\right.\\ &amp;~~~~~~~~~~~~~~~~~~~~~~~~~\left.-\ln\left|\frac{\sin\varepsilon}{\sin\left(2\phi\right)\cos\varepsilon+\cos\left(2\phi\right)\sin\varepsilon}\right| \right]\\ =&amp;\frac{1}{\sin \left(2\phi\right)}\lim_{\varepsilon\rightarrow 0}\left[\ln\left|\frac{1+\cot\left(2\phi\right)\varepsilon}{1-\cot\left(2\phi\right)\varepsilon}+\mathcal{O}\left(\varepsilon^2\right)\right| \right]\\ =&amp;\frac{1}{\sin \left(2\phi\right)}\lim_{\varepsilon\rightarrow 0}\left[\ln\left|1+2\cot\left(2\phi\right)\varepsilon+\mathcal{O}\left(\varepsilon^2\right)\right| \right]\\ =&amp;\lim_{\varepsilon\rightarrow 0}\left[\frac{2\cot\left(2\phi\right)}{\sin \left(2\phi\right)}\varepsilon+\mathcal{O}\left(\varepsilon^2\right)\right]=0, \end{split} \end{equation}</span> where we have used the substitution <span class="math-container">$\sin\phi=y$</span> (valid since <span class="math-container">$0&lt;y&lt;1$</span>, it means <span class="math-container">$0&lt;\phi&lt;\frac{\pi}{2}$</span>) and the trigonometric identities <span class="math-container">$\sin^2\theta-\sin^2\phi=\sin\left(\theta+\phi\right)\sin\left(\theta-\phi\right)$</span> and <span class="math-container">$\sin\left(x+y\right)=\sin x\cos y +\cos x\sin y$</span> as well as the Taylor expansions <span class="math-container">$\sin x=x+\mathcal{O}\left(x^3\right)$</span>, <span class="math-container">$\cos x=1+\mathcal{O}\left(x^2\right)$</span>, and <span class="math-container">$\ln\left(1+x\right)=x+\mathcal{O}\left(x^2\right)$</span>. 
We have also used the fact that: <span class="math-container">\begin{equation} \begin{split} &amp;\frac{\mathrm{d}}{\mathrm{d}\theta}\left[\frac{1}{\sin \left(2\phi\right)}\ln\left|\frac{\sin\left(\theta-\phi\right)}{\sin\left(\theta+\phi\right)}\right|\right]\\ =&amp;\frac{1}{\sin \left(2\phi\right)}\frac{\cos\left(\theta-\phi\right)\sin\left(\theta+\phi\right)-\sin\left(\theta-\phi\right)\cos\left(\theta+\phi\right)}{\sin\left(\theta-\phi\right)\sin\left(\theta+\phi\right)}\\ =&amp;\frac{1}{\sin\left(\theta-\phi\right)\sin\left(\theta+\phi\right)}, \end{split} \end{equation}</span> where we have used the trigonometric identity: <span class="math-container">$\sin\left(2\phi\right)=\sin\left(\theta+\phi\right)\cos\left(\theta-\phi\right)-\cos\left(\theta+\phi\right)\sin\left(\theta-\phi\right)$</span>. We were able to substitute the antiderivative because neither of the integrals we considered featured poles (the poles were at <span class="math-container">$\theta =\phi$</span>).</p>
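All four closed forms in the conclusion can be cross-checked numerically; the sketch below (an editorial addition, assuming SciPy) verifies <span class="math-container">$L_{(C,D)}$</span> and <span class="math-container">$K_{(A,B)}$</span> using `quad` with `weight='cauchy'`, which evaluates <span class="math-container">$\mathcal{P}\!\int f(x)/(x-\text{wvar})\,\mathrm{d}x$</span> directly.

```python
# Numerical check of L_{(C,D)}(y) = (C+D)/2 - y and
# K_{(A,B)}(y) = (A^2+B^2)/2 - y^2 via principal-value quadrature.
import numpy as np
from scipy.integrate import quad

def L_num(C, D, y):
    f = lambda x: np.sqrt((D - x) * (x - C))
    return quad(f, C, D, weight='cauchy', wvar=y)[0] / np.pi

def K_num(A, B, y):
    f = lambda x: np.sqrt((B**2 - x**2) * (x**2 - A**2))
    regular = quad(lambda x: f(x) / (x + y), A, B)[0]     # x = -y lies outside [A, B]
    singular = quad(f, A, B, weight='cauchy', wvar=y)[0]  # PV around x = y
    return (regular + singular) / np.pi

print(L_num(1.0, 4.0, 2.2), (1.0 + 4.0) / 2 - 2.2)       # both ~ 0.3
print(K_num(1.0, 3.0, 1.8), (1.0 + 9.0) / 2 - 1.8**2)    # both ~ 1.76
```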
3,683,739
Ron Gordon
53,268
<p>The evaluation of the Cauchy principal value integral via contour integration is relatively straightforward. To begin, consider the contour integral</p> <p><span class="math-container">$$\oint_C dz \, \frac{z \sqrt{z^2-B^2}}{z-y} $$</span></p> <p>where <span class="math-container">$C$</span> is the following contour:</p> <p><a href="https://i.stack.imgur.com/JEJEZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JEJEZ.jpg" alt="enter image description here"></a></p> <p>There are semicircular detours of radius <span class="math-container">$\epsilon$</span> around the branch points at <span class="math-container">$z=\pm B$</span> and the pole at <span class="math-container">$z=y$</span>. Also, the large circle has a radius <span class="math-container">$R$</span>. The pieces of the contour integral as labeled in the figure are as follows. (Yes, there are a lot of pieces, but as you will see, most will vanish or cancel.)</p> <p><span class="math-container">$$\int_{AB} dz \frac{z \sqrt{z^2-B^2}}{z-y} = \int_{-R}^{-B-\epsilon} dx \frac{x \sqrt{x^2-B^2}}{x-y}$$</span></p> <p><span class="math-container">$$\int_{BC} dz \frac{z \sqrt{z^2-B^2}}{z-y} = i \epsilon \int_{\pi}^0 d\phi \, e^{i \phi} \, \frac{(-B+\epsilon e^{i \phi})\sqrt{(-B+\epsilon e^{i \phi})^2-B^2}}{-B+\epsilon e^{i \phi}-y} $$</span></p> <p><span class="math-container">$$\int_{CD} dz \frac{z \sqrt{z^2-B^2}}{z-y} = \int_{-B+\epsilon}^{y-\epsilon} dx \frac{x \sqrt{x^2-B^2}}{x-y} = \int_{-B+\epsilon}^{y-\epsilon} dx \frac{x i \sqrt{B^2-x^2}}{x-y}$$</span></p> <p><span class="math-container">$$\int_{DE} dz \frac{z \sqrt{z^2-B^2}}{z-y} = i \epsilon \int_{\pi}^0 d\phi \, e^{i \phi} \, \frac{(y+\epsilon e^{i \phi})\sqrt{(y+\epsilon e^{i \phi})^2-B^2}}{\epsilon e^{i \phi}} \\ = i \epsilon \int_{\pi}^0 d\phi \, e^{i \phi} \, \frac{(y+\epsilon e^{i \phi})i \sqrt{B^2-(y+\epsilon e^{i \phi})^2}}{\epsilon e^{i \phi}}$$</span></p> <p><span class="math-container">$$\int_{EF} dz \frac{z 
\sqrt{z^2-B^2}}{z-y} = \int_{y+\epsilon}^{B-\epsilon} dx \frac{x \sqrt{x^2-B^2}}{x-y} = \int_{y+\epsilon}^{B-\epsilon} dx \frac{x i \sqrt{B^2-x^2}}{x-y}$$</span></p> <p><span class="math-container">$$\int_{FG} dz \frac{z \sqrt{z^2-B^2}}{z-y} = i \epsilon \int_{\pi}^{-\pi} d\phi \, e^{i \phi} \, \frac{(B+\epsilon e^{i \phi}) \sqrt{(B+\epsilon e^{i \phi})^2-B^2}}{B+\epsilon e^{i \phi}-y} $$</span></p> <p><span class="math-container">$$\int_{GH} dz \frac{z \sqrt{z^2-B^2}}{z-y} = \int_{B-\epsilon}^{y+\epsilon} dx \frac{x \sqrt{x^2-B^2}}{x-y} = \int_{B-\epsilon}^{y+\epsilon} dx \frac{x (-i) \sqrt{B^2-x^2}}{x-y}$$</span></p> <p><span class="math-container">$$\int_{HI} dz \frac{z \sqrt{z^2-B^2}}{z-y} = i \epsilon \int_{0}^{-\pi} d\phi \, e^{i \phi} \, \frac{(y+\epsilon e^{i \phi})\sqrt{(y+\epsilon e^{i \phi})^2-B^2}}{\epsilon e^{i \phi}} \\ = i \epsilon \int_{0}^{-\pi} d\phi \, e^{i \phi} \, \frac{(y+\epsilon e^{i \phi})(-i) \sqrt{B^2-(y+\epsilon e^{i \phi})^2}}{\epsilon e^{i \phi}} $$</span></p> <p><span class="math-container">$$\int_{IJ} dz \frac{z \sqrt{z^2-B^2}}{z-y} = \int_{y-\epsilon}^{-B+\epsilon} dx \frac{x \sqrt{x^2-B^2}}{x-y} = \int_{y-\epsilon}^{-B+\epsilon} dx \frac{x (-i) \sqrt{B^2-x^2}}{x-y}$$</span></p> <p><span class="math-container">$$\int_{JK} dz \frac{z \sqrt{z^2-B^2}}{z-y} = i \epsilon \int_{0}^{-\pi} d\phi \, e^{i \phi} \, \frac{(-B+\epsilon e^{i \phi})\sqrt{(-B+\epsilon e^{i \phi})^2-B^2}}{-B+\epsilon e^{i \phi}-y} $$</span></p> <p><span class="math-container">$$\int_{KL} dz \frac{z \sqrt{z^2-B^2}}{z-y} = \int_{-B-\epsilon}^{-R} dx \frac{x \sqrt{x^2-B^2}}{x-y}$$</span></p> <p><span class="math-container">$$\int_{LA} dz \frac{z \sqrt{z^2-B^2}}{z-y} = i R \int_{-\pi}^{\pi} d\theta \, e^{i \theta} \frac{R e^{i \theta} \sqrt{R^2 e^{i 2 \theta}-B^2}}{R e^{i \theta}-y} $$</span></p> <p>Note that, on the branch above the real axis, <span class="math-container">$-1=e^{i \pi}$</span> and on the branch below the real axis, <span 
class="math-container">$-1=e^{-i \pi}$</span>. Thus, the sign of <span class="math-container">$i$</span> in front of the square root when <span class="math-container">$|x| \lt B$</span> is positive above the real axis and negative below the real axis.</p> <p>First, note that the integrals over <span class="math-container">$AB$</span> and <span class="math-container">$KL$</span> cancel because the square root does not introduce any phase change there.</p> <p>Second, as <span class="math-container">$\epsilon \to 0$</span>, the integrals over <span class="math-container">$BC$</span>, <span class="math-container">$FG$</span>, and <span class="math-container">$JK$</span> vanish.</p> <p>Third, as <span class="math-container">$\epsilon \to 0$</span>, the integrals over <span class="math-container">$DE$</span> and <span class="math-container">$HI$</span> cancel. In this case, the phase difference over the branch cut due to the square root turns what is normally a constructive interference (i.e., the contributions usually add) into a destructive interference (i.e., they cancel.)</p> <p>Fourth, as <span class="math-container">$\epsilon \to 0$</span>, the integrals over <span class="math-container">$CD$</span> and <span class="math-container">$EF$</span> combine to form</p> <p><span class="math-container">$$i PV \int_{-B}^B dx \frac{x \sqrt{B^2-x^2}}{x-y}$$</span></p> <p>and the integrals over <span class="math-container">$GH$</span> and <span class="math-container">$IJ$</span> combine to form</p> <p><span class="math-container">$$-i PV \int_{B}^{-B} dx \frac{x \sqrt{B^2-x^2}}{x-y}$$</span></p> <p>so together, the contribution to the contour integral over these four intervals is</p> <p><span class="math-container">$$i 2 PV \int_{-B}^B dx \frac{x \sqrt{B^2-x^2}}{x-y}$$</span></p> <p>Fifth, the final contribution to the contour integral is the integral over <span class="math-container">$LA$</span> in the limit as <span class="math-container">$R \to \infty$</span>. 
That limit is evaluated as follows:</p> <p><span class="math-container">$$\begin{align} \int_{LA} dz \frac{z \sqrt{z^2-B^2}}{z-y} &amp;= i R \int_{-\pi}^{\pi} d\theta \, e^{i \theta} \frac{R e^{i \theta} \sqrt{R^2 e^{i 2 \theta}-B^2}}{R e^{i \theta}-y} \\ &amp;= i R^2 \int_{-\pi}^{\pi} d\theta \, e^{i 2 \theta} \left (1-\frac{B^2}{R^2 e^{i 2 \theta}} \right )^{1/2} \left (1-\frac{y}{R e^{i \theta}} \right )^{-1} \\ &amp;= i R^2 \int_{-\pi}^{\pi} d\theta \, e^{i 2 \theta} \left [1+y \frac{1}{R e^{i \theta}} -\left (\frac{B^2}{2} - y^2 \right ) \frac1{R^2 e^{i 2 \theta}} + O \left ( \frac1{R^3} \right ) \right ]\end{align}$$</span></p> <p>After integration, the first two contributions inside the brackets vanish. Further, as <span class="math-container">$R \to \infty$</span>, all terms <span class="math-container">$O(1/R^3)$</span> will also vanish. This simply leaves the <span class="math-container">$1/R^2$</span> term in the integrand, and we may finally write an expression for the contour integral:</p> <p><span class="math-container">$$\oint_C dz \, \frac{z \sqrt{z^2-B^2}}{z-y} = i 2 PV \int_{-B}^B dx \frac{x \sqrt{B^2-x^2}}{x-y} - i 2 \pi \left (\frac{B^2}{2} - y^2 \right )$$</span></p> <p>By Cauchy's theorem, the contour integral is equal to zero. Therefore, when <span class="math-container">$y \in (-B,B)$</span></p> <blockquote> <p><span class="math-container">$$\frac1{\pi} PV \int_{-B}^B dx \frac{x \sqrt{B^2-x^2}}{x-y} = \frac{B^2}{2} - y^2$$</span></p> </blockquote> <p>as asserted by the OP.</p> <p><strong>ADDENDUM</strong></p> <p>The case <span class="math-container">$|y| \gt B$</span> is a straightforward application of the residue theorem and the result is</p> <p><span class="math-container">$$\frac1{\pi} \int_{-B}^B dx \frac{x \sqrt{B^2-x^2}}{x-y} = \sqrt{y^2-B^2} \left (y - \sqrt{y^2-B^2} \right ) - \frac{B^2}{2}$$</span></p>
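The addendum formula is also easy to sanity-check numerically (an editorial addition, assuming SciPy): for <span class="math-container">$|y| \gt B$</span> the integrand has no pole on <span class="math-container">$[-B,B]$</span>, so ordinary quadrature suffices.

```python
# Check of (1/pi) * int_{-B}^{B} x sqrt(B^2 - x^2)/(x - y) dx for y > B.
import numpy as np
from scipy.integrate import quad

B, y = 1.0, 1.5   # y outside (-B, B): the integral is an ordinary one
val = quad(lambda x: x * np.sqrt(B**2 - x**2) / (x - y), -B, B)[0] / np.pi
closed = np.sqrt(y**2 - B**2) * (y - np.sqrt(y**2 - B**2)) - B**2 / 2
print(val, closed)   # both ~ -0.073
```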
194,373
<p>Let $\Omega$ be a bounded smooth domain and define $\mathcal{C} = \Omega \times (0,\infty)$. Below, $x$ refers to the variable in $\Omega$ and $y$ to the variable in $(0,\infty)$. The map $\operatorname{tr}_\Omega:H^1(\mathcal C) \to L^2(\Omega)$ refers to the trace operator ($\operatorname{tr}_\Omega u = u(\cdot,0)$ for smooth functions). <img src="https://i.stack.imgur.com/F2lgh.png" alt="enter image description here"></p> <p>How do I know that the constant functions are in that bigger space (let's just take $\epsilon =1$)? They obviously have finite $H^\epsilon(\mathcal{C})$ norm but that is not enough. </p> <p>We can approximate (<a href="https://math.stackexchange.com/questions/1102898/why-does-this-completion-of-a-sobolev-space-contain-constant-functions-please-e">see this</a>) the constant function $1$ by $u_n$, where $u_n(x,y) = 1$ for $y \in [0,n)$ and $u_n(x,y) = 0$ for $y \in [2n, \infty)$, and $u_n(x,y)$ linearly interpolates for $y \in (n,2n)$. <strike>This is Cauchy with respect to the $H^\epsilon$ norm</strike> (<strong>edit: it's not Cauchy</strong>), but how to prove that $1$ is in $H^\epsilon$? I thought we could say $\lVert u_n - 1 \rVert_{H^\epsilon(\mathcal C)} \to 0$ but this is not sensible since $\operatorname{tr}_\Omega$ is only defined for $H^1(\mathcal C)$ functions, and $1$ is not in $H^1(\mathcal C)$.</p>
Joonas Ilmavirta
55,893
<p>The trace only depends on values near the boundary. That is, if $\phi\in C^\infty([0,\infty))$ is one in a neighborhood of zero, then $\operatorname{tr}_\Omega(\phi u)=\operatorname{tr}_\Omega(u)$ for every $u\in H^1(\mathcal C)$. With this in mind, you can formally apply a cut-off to your constant function and treat the trace in that way. Traces do indeed make sense in $H^\epsilon(\mathcal C)$ since multiplication by a compactly supported function brings your function to $H^1(\mathcal C)$.</p> <p>But this is not needed if you just want to show that the inclusion is strict. The point is that $H^\epsilon(\mathcal C)$ is a completion of $H^1(\mathcal C)$. When showing that constant functions are in the completion but not the original space, we should of course remember that they are not in the original space &ndash; if you permit the tautology. The norm $\|\cdot\|_{H^\epsilon(\mathcal C)}$ was only defined for functions in $H^1(\mathcal C)$ by the integral expression in the first place, so the expression $\|u_n-1\|_{H^\epsilon(\mathcal C)}$ (or $\|1\|_{H^\epsilon(\mathcal C)}$) indeed does not make sense for the norm defined on $H^1(\mathcal C)$. (Of course the norm can be naturally extended and the integral expression is exactly the same. You just need to extend the trace map to $\operatorname{tr}_\Omega:H^\epsilon(\mathcal C)\to L^2(\Omega)$ via cut-offs or otherwise to make sense of the expression.)</p> <p>Once you have confirmed (as you seem to have) that your sequence is Cauchy in the norm, then it automatically has a limit in the completion. The sequence converges locally uniformly (in fact, it is eventually constant in any compact set), so it is easy to observe that if it had a limit in $H^1(\mathcal C)$, it would have to be the pointwise limit &ndash; the constant function. But the constant is not in $H^1(\mathcal C)$, so you have indeed shown that the completion contains a point outside the original space. 
This point is a point in a formal completion (an equivalence class of Cauchy sequences) but in this case it is natural to identify with a function (which is not in $H^1(\mathcal C)$).</p>
1,047,489
<p>Let $f(x)$ be a function such that</p> <p>$$f(0) = 0$$</p> <p>$$f(1) = 1$$ $$f(2) = 2$$ $$f(3) = 4$$ and suppose $f'(x)$ is differentiable on $\mathbb{R}$. Prove that there is a number $x$ in the interval $(0,3)$ such that $0 &lt; f''(x) &lt; 1$.</p> <p>I'm really stuck. Thanks.</p>
user193702
193,702
<p>By the Mean Value Theorem on $(2,3)$ there is $\xi$ with $f'(\xi)=\frac{f(3)-f(2)}{3-2}=2$, and on $(0,1)$ there is $\delta$ with $f'(\delta)=\frac{f(1)-f(0)}{1-0}=1$. Applying the Mean Value Theorem to $f'$ on $[\delta,\xi]$ gives a $\theta$ with $$f''(\theta)=\frac{f'(\xi)-f'(\delta)}{\xi-\delta}=\frac{1}{\xi-\delta},$$ and since $\xi-\delta&gt;1$ we conclude $0&lt;f''(\theta)&lt;1$.</p>
243,210
<p>I have difficulty computing the $\bmod$ for $a \in \{1, 2, 3, \ldots, 50\}$. Is there a quick way of doing this?</p>
Hans Engler
9,787
<p>Why don't you try separation of variables? This leads to a closed form solution that exists for all $x &lt; \sqrt{2 \log \frac{3}{2}}$.</p>
25,413
<p>Background - I am tutoring a second year college sophomore for a class titled Single Variable Calculus, and whose curriculum looks to be similar to the AB calculus I tutor in my High School.</p> <p>We are on limits and L’Hôpital’s Rule, and I see this among the questions (note, all the worksheet questions are meant to be solved via L-H rule) - The instruction is</p> <p>&quot;Evaluate the following using L’Hôpital’s Rule&quot;</p> <p><span class="math-container">$$\lim_{x\to 0}\frac{\sin x}x= $$</span></p> <p>I recall, when subbing for a calc teacher, that this is a classic example of the use of the &quot;squeeze theorem&quot; aka &quot;sandwich theorem&quot;. Once it's proven, we'd go on to different arguments of Sine, practice a bit, then move on. It's introduced prior to L-H rule.</p> <p>Given the fast pace of my student self-studying and effort of remote teaching, I'm inclined to ignore this, and move on. My question is whether skipping over Squeeze Theorem is doing her a disservice, and should I (forgive the pun) squeeze it into our next session? At my HS, students have told me it feels like it's introduced, practiced for a few problems, but never seeing again.</p>
user52817
1,680
<p>The squeeze theorem merely introduces a basic framework for future analytic thinking. For example, the inequality</p> <p><span class="math-container">$$\sum_{k=1}^n\frac1k&gt;\int_1^{n+1}\frac{1}{x}\, dx$$</span></p> <p>can be established easily by drawing boxes that fit above the curve <span class="math-container">$y=\frac1x$</span>.</p> <p>Upon knowing that the integral goes to infinity like <span class="math-container">$\ln(x)$</span>, we use &quot;squeeze theorem thinking&quot; to deduce that the harmonic series diverges: the series is squeezed between <span class="math-container">$\ln(n+1)$</span> and infinity as <span class="math-container">$n\to\infty$</span>.</p> <p>The point is that introducing the squeeze theorem sets a basic principle. Then with neuroplasticity, one can reuse and invoke the basic concept in future contexts, perhaps without even realizing explicitly that we are using it!</p> <p>The rationale for including the squeeze theorem in Calculus I is not to narrowly give the learner a tool to evaluate certain limits <em>per se.</em> Applying the squeeze theorem to limits such as <span class="math-container">$\lim_{x\to0}\frac{\sin(x)}{x}$</span> is merely <em>action</em> that helps settle the abstract concept so that it can grow in the brain to become a <em>schema.</em> Analysts use the squeeze theorem subconsciously in all sorts of abstract situations, but they had to develop this capacity.</p>
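The displayed inequality between the harmonic sum and the integral can be spot-checked numerically (a small sanity check of my own, not part of the original answer):

```python
import math

# partial harmonic sums dominate the integral of 1/x from 1 to n+1,
# i.e. H_n > ln(n+1) for every n
for n in (10, 100, 1000):
    H = sum(1/k for k in range(1, n + 1))
    print(n, H, math.log(n + 1))
    assert H > math.log(n + 1)
```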
25,413
<p>Background - I am tutoring a second year college sophomore for a class titled Single Variable Calculus, and whose curriculum looks to be similar to the AB calculus I tutor in my High School.</p> <p>We are on limits and L’Hôpital’s Rule, and I see this among the questions (note, all the worksheet questions are meant to be solved via L-H rule) - The instruction is</p> <p>&quot;Evaluate the following using L’Hôpital’s Rule&quot;</p> <p><span class="math-container">$$\lim_{x\to 0}\frac{\sin x}x= $$</span></p> <p>I recall, when subbing for a calc teacher, that this is a classic example of the use of the &quot;squeeze theorem&quot; aka &quot;sandwich theorem&quot;. Once it's proven, we'd go on to different arguments of Sine, practice a bit, then move on. It's introduced prior to L-H rule.</p> <p>Given the fast pace of my student self-studying and effort of remote teaching, I'm inclined to ignore this, and move on. My question is whether skipping over Squeeze Theorem is doing her a disservice, and should I (forgive the pun) squeeze it into our next session? At my HS, students have told me it feels like it's introduced, practiced for a few problems, but never seeing again.</p>
Xander Henderson
8,571
<p>Most of the other answers in this thread focus on the mathematics. This is appropriate, as this is a Q&amp;A site for mathematics educators. However, I suspect that the question being answered (&quot;Should I teach the squeeze theorem?&quot;) has already been addressed here many times. The distinguishing question here seems to be &quot;Should I, as a tutor, teach a tutee the squeeze theorem when the primary instructor has omitted it?&quot; While this question might be more appropriate for the <a href="https://academia.stackexchange.com/">Academia SE</a>, it doesn't seem off-topic here.</p> <h3>What is your role?</h3> <p>Typically, the role of a tutor is to provide supplemental coaching to a student who is taking a course from a primary instructor. It is the job of the primary instructor to establish the curriculum, the grading schemata, the pace, etc. The job of the tutor is to review the material presented by the primary instructor in order to prepare the student for assessments.</p> <h3>Are <em>you</em> doing harm by omitting the squeeze theorem?</h3> <p>No.</p> <p>As a tutor, you serve your tutees by preparing them to perform well on the assessments provided by their primary instructor. When you help them with material that is likely to be on those assessments, you are doing your job. If you spend time on material that is not likely to be on those assessments, you are, perhaps, helping the student to learn more, but may be harming them in the sense that they may perform less well on assessments. Covering extra material is, likely, neutral at best.</p> <h3>Is the <em>student</em> being harmed by the omission of the squeeze theorem?</h3> <p>In my opinion, yes.</p> <p>As has been pointed out in the comments and in other answers, the squeeze theorem is a fundamental result in analysis (or, perhaps more foundational, is the result that <span class="math-container">$f(x) \le g(x)$</span> implies that <span class="math-container">$\lim f(x) \le \lim g(x)$</span>).
Skipping this result does a disservice to the mathematics, and has the potential to create a kind of &quot;cargo cult&quot; version of mathematics.</p> <p>Thus, in my opinion, if an instructor chooses to omit the squeeze theorem, they are doing a disservice to their students. But that is on the instructor, not you.</p> <h3>What should you do?</h3> <p>Again, this is probably a matter of opinion, but: your role is to prepare your tutees for the assessments their instructor is likely to prepare. This instructor has skipped the squeeze theorem, hence you should probably also skip it, or spend only minimal time on it.</p> <p>You also need to be careful to respect the authority of the instructor. You and the instructor should appear to be a united team in front of the students. Be careful not to criticize the instructor in earshot of your tutees. Doing so only degrades the relationship between the instructor and their students, which is likely to make the overall learning environment worse.</p> <p>On the other hand, it might be worth talking to the instructor. I can imagine many possible results, including:</p> <ul> <li>Students are often unreliable reporters when it comes to describing what has been taught. Perhaps the instructor will tell you that they <em>did</em> spend time on the squeeze theorem, and the student has simply forgotten (in which case, you <em>really</em> need to spend some time on it).</li> <li>Perhaps the instructor had a really good reason for omitting the squeeze theorem. I have difficulty imagining what that reason is, but you might find that they have a persuasive argument.</li> <li>Maybe the instructor just forgot about it (e.g. if they are teaching multiple sections of the course, they may have gotten confused about what was said to one group of students vs another). 
They may welcome the feedback, and take it as an opportunity to fill a gap.</li> <li>Or the instructor could be a real a-hole who doesn't really know what they are doing, skipped the theorem because it is hard and students often struggle with it and they want to make sure that their teaching evaluations are good. That would suck, but such is life.</li> </ul> <p>Whatever the case, having a line of communication between instructor and tutor can be valuable. If it is possible for you to talk to this instructor, I would advise you to do so.</p>
639,665
<p>How can I calculate the inverse of $M$ such that:</p> <p>$M \in M_{2n}(\mathbb{C})$ and $M = \begin{pmatrix} I_n&amp;iI_n \\iI_n&amp;I_n \end{pmatrix}$, and I find that $\det M = 2^n$. I tried to find the cofactor matrix $\operatorname{com} M$ and apply $M^{-1} = \frac{1}{2^n} (\operatorname{com} M)^T$ but I think it's too complicated.</p>
karakfa
14,900
<p>General block form inverse with $A$ and $D$ invertible is</p> <p>$$\begin{pmatrix}A &amp; B \\ C &amp; D\end{pmatrix}^{-1}= \begin{pmatrix}X &amp; -A^{-1}BY \\ -D^{-1}CX &amp; Y\end{pmatrix}$$</p> <p>where $X=(A-BD^{-1}C)^{-1}$ and $Y = (D-CA^{-1}B)^{-1}$.</p> <p>Substituting values will give you $X=Y=\frac12I$ and off-diagonals $-\frac12iI$. Putting it all together, $M^{-1} = \frac12M^*$.</p> <p>($*$ denotes the Hermitian transpose)</p>
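The closed form $M^{-1}=\frac12 M^*$ is easy to verify numerically, e.g. with NumPy (the choice $n=4$ below is arbitrary):

```python
import numpy as np

n = 4
I = np.eye(n)
M = np.block([[I, 1j * I],
              [1j * I, I]])

# M^{-1} = (1/2) M^*  (conjugate transpose), and det M = 2^n
assert np.allclose(np.linalg.inv(M), 0.5 * M.conj().T)
assert np.isclose(np.linalg.det(M), 2**n)
```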
3,985,917
<p>I am trying to show algebraically that <span class="math-container">$8^3&gt;9^{8/3}$</span>. This came from trying to complete the base case of an induction proof.</p> <p>I have struggled because <span class="math-container">$8$</span> and <span class="math-container">$9$</span> cannot be manipulated to be the same base. Otherwise I could just argue that <span class="math-container">$3&gt;\dfrac{8}{3}$</span>.</p> <p>I tried raising both sides to the third power and got <span class="math-container">$8^9&gt;9^8$</span>. I can rewrite this as <span class="math-container">$8^9&gt;9^{9-1}$</span> but I am not sure if this is the right direction.</p>
David Lui
445,002
<p>We want to compare <span class="math-container">$x^y $</span> vs. <span class="math-container">$y^x$</span>, for <span class="math-container">$x, y &gt; e$</span>. Taking log base <span class="math-container">$y$</span> of both sides, we get <span class="math-container">$y \log_y(x) = y \ln(x) / \ln(y)$</span> vs. <span class="math-container">$x$</span>.</p> <p>Note that if <span class="math-container">$x = y$</span>, then <span class="math-container">$y \ln(x) / \ln(y) = x$</span>. Consider <span class="math-container">$y$</span> as a constant and take derivatives. <span class="math-container">$\frac{y}{x \ln(y)}$</span> vs <span class="math-container">$1$</span>.</p> <p>Therefore, if <span class="math-container">$x &gt; y$</span>, then <span class="math-container">$\frac{y}{x \ln(y)} &lt; 1$</span> (Here is where we use <span class="math-container">$y &gt; e$</span>, so that <span class="math-container">$\ln(y) &gt; 1$</span>. Without this assumption, it could be possible that <span class="math-container">$\frac{y}{x \ln(y)} &gt; 1$</span>). Therefore, 1 is bigger. Thus, for <span class="math-container">$x &gt; y$</span>, <span class="math-container">$x &gt; y \ln(x) / \ln(y)$</span>, so <span class="math-container">$y^x$</span> is bigger than <span class="math-container">$x^y$</span>.</p> <p>In general, <span class="math-container">$\text{small}^{\text{big}} &gt; \text{big}^{\text{small}}$</span> for numbers <span class="math-container">$&gt; e$</span></p>
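Tying this back to the original question: raising both sides of $8^3&gt;9^{8/3}$ to the third power gives $8^9&gt;9^8$, which is the case $y=8&lt;x=9$ of the rule above. A quick numerical spot-check (the extra sample pairs are arbitrary choices of mine):

```python
import math

assert 8**3 > 9**(8/3)   # the original base case
assert 8**9 > 9**8       # the equivalent integer form

# small**big > big**small whenever both numbers exceed e
for x, y in ((3, 4), (2.8, 10), (math.e + 0.1, 5)):
    small, big = min(x, y), max(x, y)
    assert small**big > big**small
```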
23,312
<p>What is the importance of eigenvalues/eigenvectors? </p>
911
5,312
<h3>A short explanation</h3> <p>Consider a matrix <span class="math-container">$A$</span>, for example one representing a physical transformation (e.g. a rotation). When this matrix is used to transform a given vector <span class="math-container">$x$</span> the result is <span class="math-container">$y = A x$</span>.</p> <p>Now an interesting question is </p> <blockquote> <p>Are there any vectors <span class="math-container">$x$</span> which do not change their direction under this transformation, but only have their magnitude scaled by some factor <span class="math-container">$ \lambda $</span>?</p> </blockquote> <p>Such a question is of the form <span class="math-container">$$A x = \lambda x $$</span></p> <p>Such special <span class="math-container">$x$</span> are called <em>eigenvectors</em>, and the scaling factor <span class="math-container">$ \lambda $</span> is the corresponding <em>eigenvalue</em>.</p>
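A concrete illustration (my own toy example, not from the answer): for the symmetric matrix below, NumPy finds eigenpairs satisfying $Ax=\lambda x$ exactly as described.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)

# each column of vecs keeps its direction under A, scaled by its eigenvalue
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)

print(sorted(vals))  # eigenvalues 1 and 3
```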
507,827
<p>Let $a_n$ be a positive sequence. Prove that $$\limsup_{n\to \infty} \left(\frac{a_1+a_{n+1}}{a_n}\right)^n\geqslant e.$$</p>
njguliyev
90,209
<p>Since I solved this problem several years ago, I didn't write my solution immediately, so that others could think about this problem. Now I am writing my own solution:</p> <p>It starts as the solution by Ju'x, i.e. we can safely assume that $a_1 = 1$, and suppose, for contradiction, that the opposite inequality holds. Then there exists $N \in \mathbb{N}$ such that $$\frac{1+a_{n+1}}{a_n} &lt; e^{1/n}, \qquad n \ge N.$$ Hence $$a_N &gt; \frac{1}{e^{1/N}} + \frac{a_{N+1}}{e^{1/N}} &gt; \frac{1}{e^{1/N}} + \frac{1}{e^{\frac{1}{N}+\frac{1}{N+1}}} + \frac{a_{N+2}}{e^{\frac{1}{N}+\frac{1}{N+1}}} &gt; \ldots,$$ i.e. $$a_N &gt; \frac{1}{e^{1/N}} + \frac{1}{e^{\frac{1}{N}+\frac{1}{N+1}}} + \ldots + \frac{1}{e^{\frac{1}{N}+\ldots+\frac{1}{N+k}}}, \qquad k \in \mathbb{N}.$$ Using $e^{1/n} &lt; 1 + \dfrac{1}{n-1} = \dfrac{n}{n-1}$ we get $$a_N &gt; (N-1)\left( \frac{1}{N} + \frac{1}{N+1} + \ldots + \frac{1}{N+k}\right), \qquad k \in \mathbb{N},$$ which is impossible, since the harmonic series diverges.</p>
142,677
<p>Consider the following list of equations:</p> <p>$$\begin{align*} x \bmod 2 &amp;= 1\\ x \bmod 3 &amp;= 1\\ x \bmod 5 &amp;= 3 \end{align*}$$</p> <p>How many equations like this do you need to write in order to uniquely determine $x$?</p> <p>Once you have the necessary number of equations, how would you actually determine $x$?</p> <hr/> <p><strong>Update:</strong></p> <p>The "usual" way to describe a number $x$ is by writing</p> <p>$$x = \sum_n 10^n \cdot a_n$$</p> <p>and listing the $a_n$ values that aren't zero. (You can also extend this to some radix other than 10.)</p> <p>What I'm interested in is whether you could instead express a number by listing all its residues against a suitable set of moduli. (And I'm <em>guessing</em> that the prime numbers would constitute such a "suitable set".)</p> <p>If you were to do this, how many terms would you need to quote before a third party would be able to tell which number you're trying to describe?</p> <p>That was my question. However, since it appears that the Chinese remainder theorem is extremely hard, I guess this is a bad way to denote numbers...</p> <p>(It also appears that $x$ will never be uniquely determined without an upper bound.)</p>
jmc
30,839
<p>Note that even if you specify the residue of $x$ modulo all integers $n$, this need not determine an integer.</p> <p>This is where $\hat{\mathbb{Z}}$ comes into the picture. I think you should know this as a fact, just to be complete (no pun intended). But I warn you that the theory going into $\hat{\mathbb{Z}}$ is much more advanced than CRT.</p> <p>If you do want to know more about this, look into <a href="http://en.wikipedia.org/wiki/P-adic_number" rel="nofollow">http://en.wikipedia.org/wiki/P-adic_number</a> . Especially the part on $p$-adic expansion is interesting, because it is analogous in some sense to writing integers in radix 10.</p> <p>Probably the greatest difference is that the sum in the expansion need not be a finite sum.</p>
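For the finite system in the question, the Chinese remainder theorem does give a concrete reconstruction algorithm. A minimal sketch (the helper `crt` is my own illustration; it needs Python ≥ 3.8 for modular inverses via `pow`):

```python
from math import prod

def crt(residues, moduli):
    """Solve x ≡ r_i (mod m_i) for pairwise-coprime moduli."""
    N = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Ni = N // m
        x += r * Ni * pow(Ni, -1, m)  # pow(Ni, -1, m): inverse of Ni mod m
    return x % N

print(crt([1, 1, 3], [2, 3, 5]))  # 13, i.e. x ≡ 13 (mod 30)
```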
1,693,630
<p><a href="https://i.stack.imgur.com/TVeGv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TVeGv.jpg" alt="enter image description here"></a></p> <p>This is my attempt at finding $\frac{d^2y}{dx^2}$. Can some one point out where I'm going wrong here?</p>
John_dydx
82,134
<p>You can approach it this way:</p> <p>Find $\large\frac{dx}{dt}$ and $\large\frac{dy}{dt}$. Note that $\large\frac{dt}{dx}= \frac{1}{\large\frac{dx}{dt}}$ $$\frac{dy}{dx}= \frac{dy}{dt} \cdot \frac{dt}{dx}$$</p> <p>And then differentiate again to find $\frac{d^2y}{dx^2}$</p> <p>$$ \frac{dx}{dt} = 3t^2 - 12$$</p> <p>$$\frac{dy}{dt}= 2t$$</p> <p>$$ \frac{dy}{dx} = \frac{2t}{3t^2-12} $$</p> <p>$$ \frac{d^2y}{dx^2} = \frac{(3t^2-12)(2)-2t(6t)}{(3t^2-12)^2} \times \frac{dt}{dx} = \frac{(3t^2-12)(2)-2t(6t)}{(3t^2-12)^2} \times \frac{1}{3t^2-12}$$ </p> <p>$$\frac{d^2y}{dx^2} = \frac{-2(t^2 + 4)}{9(t^2-4)^3}$$</p>
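The derivatives above are consistent with $x=t^3-12t$, $y=t^2$ (an assumption on my part, since the original problem statement is an image); under that assumption the final formula can be verified symbolically:

```python
import sympy as sp

t = sp.symbols('t')
x = t**3 - 12*t   # consistent with dx/dt = 3t^2 - 12
y = t**2          # consistent with dy/dt = 2t

# d/dx = (d/dt) / (dx/dt), applied twice
dydx = sp.diff(y, t) / sp.diff(x, t)
d2ydx2 = sp.simplify(sp.diff(dydx, t) / sp.diff(x, t))

expected = -2*(t**2 + 4) / (9*(t**2 - 4)**3)
assert sp.simplify(d2ydx2 - expected) == 0
```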
400,749
<p>The Extreme Value Theorem says that if $f(x)$ is continuous on the interval $[a,b]$ then there are two numbers $c$ and $d$ in $[a,b]$ so that $f(c)$ is an absolute maximum for the function and $f(d)$ is an absolute minimum for the function. </p> <p>So, if we have a continuous function on $[a,b]$ we're guaranteed to have both an absolute maximum and an absolute minimum, but functions that aren't continuous can still have either an absolute min or max? </p> <p>For example, the function $f(x)=\frac{1}{x^2}$ on $[-1,1]$ isn't continuous at $x=0$ since the function approaches infinity there, so this function doesn't have an absolute maximum. Another example: suppose a graph is on a closed interval and there is a jump discontinuity at a point $x=c$, and this point is the absolute minimum. </p> <p>The extreme value theorem requires continuity in order for absolute extrema to exist, so why can there be extrema where the function isn't continuous? </p>
user79202
79,202
<p>First of all, $f(x) = x^{-2}$ isn't well defined on the domain $[-1,1]$, specifically where $x = 0$, so you can't really say that it is discontinuous. But if you assign it any value at $x = 0$, so $f(0) := c$ for $c \in \mathbb{R}$, it is discontinuous. Then, as shown by yourself, continuity isn't needed to find a function on an interval $[a,b]$ having absolute extrema. This is reflected by the extreme value theorem, because it only guarantees extreme values for a continuous, real function on an interval $[a,b]$, but does not say that a function, having such extreme values, necessarily is continuous.</p>
703,125
<blockquote> <p>Let $\{A_\alpha\}$ be a collection of connected subspaces of $X$; let $A$ be connectted subspace of $X$. Show that if $A\cap A_\alpha \neq \emptyset$ for all $\alpha$, then $A\cup(\cup A_\alpha)$ is connected.</p> </blockquote> <p>I know this theorem:<img src="https://i.stack.imgur.com/6tILz.png" alt="enter image description here"></p> <p>And as every set in this union has a point in common with $A$, I guess I need to use this. If $\{U_\alpha \}$ was countable I could easily use this theorem with induction, but now I'm not sure if I can use such an argument.</p> <p>I mean $A\cup A_1$ is connected, therefore $A\cup A_1 \cup A_2$ connected and so on.</p>
Kaladin
133,789
<p>Suppose, for contradiction, that $A\cup(\cup A_{\alpha})$ is disconnected. Then it can be written as the union of two disjoint non-empty subsets $U_{1}$ and $U_{2}$, each closed (and hence also open) in the subspace. Since $A$ is connected, it must lie entirely in one of them; say $A\subseteq U_{1}$. Each $A_{\alpha}$ is connected too, and since $A\cap A_{\alpha}\neq \emptyset$, the set $A_{\alpha}$ meets $U_{1}$, so $A_{\alpha}\subseteq U_{1}$ for every $\alpha$. But then $U_{2}=\emptyset$, a contradiction. Hence $A\cup(\cup A_{\alpha})$ is connected.</p>
3,071,751
<p>Is there a parametrization for the figure '8' curve, which is self-intersected?</p>
BadAtAlgebra
611,990
<blockquote> <p><span class="math-container">$x = \frac{a\sqrt{2}\cos(t)}{\sin^2(t) + 1}; \qquad y = \frac{a\sqrt{2}\cos(t)\sin(t)}{\sin^2(t) + 1}$</span></p> </blockquote> <p>Check out this source: </p> <blockquote> <p><a href="https://en.wikipedia.org/wiki/Lemniscate_of_Bernoulli" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Lemniscate_of_Bernoulli</a></p> </blockquote>
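One way to see that this parametrization really traces a lemniscate is to check the implicit Bernoulli equation $(x^2+y^2)^2 = 2a^2(x^2-y^2)$ pointwise (a numerical sanity check of my own):

```python
import math

a = 1.0
for k in range(1, 13):
    t = 0.5 * k
    d = math.sin(t)**2 + 1
    x = a * math.sqrt(2) * math.cos(t) / d
    y = a * math.sqrt(2) * math.cos(t) * math.sin(t) / d
    # Bernoulli lemniscate: (x^2 + y^2)^2 = 2 a^2 (x^2 - y^2)
    assert abs((x*x + y*y)**2 - 2*a*a*(x*x - y*y)) < 1e-12
```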
3,071,751
<p>Is there a parametrization for the figure '8' curve, which is self-intersected?</p>
md2perpe
168,433
<p>An easier parameterization of an <a href="https://www.wolframalpha.com/input/?i=plot%20%7B%20x%3Dsin(2t),%20y%3Dcos(t)%20%7D" rel="nofollow noreferrer">8-like figure</a> is <span class="math-container">$(x,y) = (\sin 2t, \cos t),$</span> where <span class="math-container">$0 \leq t \leq 2\pi.$</span> It can easily be made more 8-like <a href="https://www.wolframalpha.com/input/?i=plot%20%7B%20x%3Dsin(2t),%20y%3D2cos(t)%20%7D" rel="nofollow noreferrer">by scaling</a>.</p>
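The self-intersection of this parameterization is easy to locate: $t=\pi/2$ and $t=3\pi/2$ both map to the origin (a quick check):

```python
import math

curve = lambda t: (math.sin(2 * t), math.cos(t))

p, q = curve(math.pi / 2), curve(3 * math.pi / 2)
# two distinct parameter values hit the same point -> the crossing of the "8"
assert all(abs(u) < 1e-12 for u in p + q)
```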
625,975
<p>I'm just starting to learn computability. Some treatments of the subject use a relation they call $T$, which I <em>think</em> is called the universal recursive relation. It's defined something like this (<a href="http://www.its.caltech.edu/~jclemens/courses/02ma117a/handouts/handout6.pdf" rel="nofollow">http://www.its.caltech.edu/~jclemens/courses/02ma117a/handouts/handout6.pdf</a> p. 3):</p> <blockquote> <p>$T_n(m, c, x_1, \ldots , x_n) \iff \operatorname{Final}(m, c)\land c$ is a computation starting from $\ast 1^{x_1}B\cdots B1^{x_n}$, i.e. $c$ codes a halting computation for the machine m computing a function with inputs $x_1, \ldots , x_n$</p> </blockquote> <p>As I understand this relation, it holds when and only when the program with code $m$ and input $(x_1,\ldots , x_n)$ halts with output $c$. </p> <p>This relation is said to be recursive (and primitive recursive - see p. 2 of the handout I linked). What I don't see is how this is compatible with the halting problem being undecidable. If the relation was recursive, couldn't you use it to check whether the program halts for some given input? (I've probably misunderstood what the relation is but I can't see how else to interpret it.)</p> <p>Thanks a lot.</p>
MJD
25,554
<p>You have misunderstood $c$ here. It is not an output. It is a <em>complete description</em> of a computation, including all the state transitions, tape head movements, and tape modifications that the machine makes on its way to the final state. </p> <p>It is computable to check whether a single step of this long description is correct. Each step contains a description of the tape, where the tape head is, the machine's start state and its end state, and which symbol the machine will write. Checking a single step means to check to make sure the tape matches the description in the previous step, except for the one tape symbol that was changed; checking that the one symbol was the one under the tape head; checking that the new symbol is the one that the machine's transition function said it should write, given its state and the value of the old tape symbol; and so on.</p> <p>$c$ is a long list of such steps, and since it is finite, and the steps can be checked one by one, $c$ can be checked in its entirety.</p> <p>Given a machine $m$, an initial input, and a proposed computation $c$, one can check to see if the machine will in fact compute the way $c$ says and reach a final state at the end. $c$ describes a <em>particular</em> computation, of say $S$ number of steps, and if $m$ would actually compute for longer than $S$, you can easily detect that and halt and report that $c$ is not an accurate description of what the machine would do.</p> <p>In particular if $m$ fails to halt on the given input, you can still be sure to calculate that $c$ was an inaccurate description of what $m$ would have done, and halt and say so.</p> <p>This doesn't help you in deciding <em>if</em> the machine would ever halt. If $m$ never halts, then $c$, being finite, is necessarily an inaccurate description of what $m$ would do, and you can certainly detect this. 
But knowing that $c$ is an <em>inaccurate</em> description of the behavior of $m$ is obviously no use at all if you want to know what $m$ actually does!</p>
4,084,486
<p>I am studying different integral transform methods, and I am confused on why saying things such as <span class="math-container">$$ \mathcal{F}^{-1}[1] = \delta(x) $$</span> is valid? If you actually plug this in, <span class="math-container">$$ \mathcal{F}^{-1}[1] = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{ikx}dx $$</span> this does not converge for any <span class="math-container">${x}$</span> at all. It diverges everywhere. I understand the principal value of the integral when <span class="math-container">${x\neq 0}$</span> is <span class="math-container">$0$</span>, but I don't know why it's valid to just call it <span class="math-container">${\delta(x)}$</span> based on its principal value.</p>
Oliver Díaz
121,671
<p>Some initial remarks:</p> <p>@Mark Viola considers the Fourier transform defined on the space of tempered distributions, which contains not only copies of nice functions (integrable functions for example) but also finite measures, and many other things.</p> <p>The <span class="math-container">$\delta$</span> &quot;function&quot; can also be considered as a finite Borel measure on the line, the one that assigns mass <span class="math-container">$1$</span> to <span class="math-container">$\{0\}$</span> and zero mass to any other set.</p> <hr /> <p>I will restrict the Fourier transform to the set of measures of finite variation (<span class="math-container">$|\mu|(\mathbb{R})&lt;\infty$</span>), which contains the integrable functions (think of <span class="math-container">$f\mapsto \mu_f:=f\,dx$</span>). The Fourier transform there is defined as</p> <p><span class="math-container">$$\widehat{\mu}(t):=\int e^{-ixt}\mu(dx)$$</span></p> <p>With that in mind, for the measure <span class="math-container">$\delta(A)=1$</span> if <span class="math-container">$0\in A$</span> and <span class="math-container">$\delta(A)=0$</span> otherwise, one has <span class="math-container">$$ \widehat{\delta}(t)=\int e^{-ixt}\delta(dx)=e^{-i0t}=1$$</span></p> <ul> <li><p>It turns out that in the space of finite measures, the map <span class="math-container">$\mu\mapsto\widehat{\mu}$</span> is also injective.</p> </li> <li><p>There is a criterion (Bochner's theorem) that says when a continuous function <span class="math-container">$g$</span> is the Fourier transform of a (positive) finite measure, and there is also an inversion formula, which is typically studied in courses on probability - think of characteristic functions of probability distributions.</p> </li> <li><p>In this context, the inverse Fourier transform of <span class="math-container">$g(t)\equiv1$</span> is the measure <span class="math-container">$\delta(dx)$</span>.</p> </li> </ul>
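SymPy's distributional Fourier transform reproduces the same pairing computation (a quick check of my own; SymPy uses the $e^{-2\pi i kx}$ convention, which does not affect the result here):

```python
import sympy as sp

x, k = sp.symbols('x k', real=True)

# the pairing  ∫ e^{-2πikx} δ(dx)  just evaluates the exponential at x = 0
assert sp.fourier_transform(sp.DiracDelta(x), x, k) == 1
```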
30,305
<p>I want to call <code>Range[]</code> with its arguments depending on a condition. Say we have </p> <pre><code>checklength = {5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6} </code></pre> <p>I then want to call <code>Range[]</code> 13 times (the length of <code>checklength</code>) and do <code>Range[5]</code> when <code>checklength[[#]] == 5</code> and <code>Range[2, 6]</code> when <code>checklength[[#]] == 6</code>. <code>If[]</code> would seem an appropriate way to do it, </p> <pre><code>Range[If[checklength[[#]] == 5, 5, XXX]]&amp; /@ Range[13] </code></pre> <p>but I don't know what to put for "XXX", since I need "2,6" there without any brackets. I've tried </p> <pre><code>Range[If[checklength[[#]]==5, 5, Flatten[{2,6}]]]&amp; /@ Range[13] </code></pre> <p>but that doesn't help (in fact if you think about it, it shouldn't!). The problem is, I need an unbracketed pair of numbers to be treated as a single argument and I don't know how to do that. I can think of one quite messy solution, </p> <pre><code>Range[If[checklength[[#]] == 5, 1, 2], If[checklength[[#]] == 5, 5, 6]&amp; /@ Range[13] </code></pre> <p>but I'd be disappointed if there's not a better way to do it. Even though this does the trick, the general question remains of how to treat unbracketed comma separated numbers as a single item.</p>
Michael E2
4,999
<p>Even simpler than mapping or <code>Table</code>, since <a href="http://reference.wolfram.com/mathematica/ref/Range.html" rel="nofollow noreferrer"><code>Range</code></a> is <a href="http://reference.wolfram.com/mathematica/ref/Listable.html" rel="nofollow noreferrer"><code>Listable</code></a>:</p> <pre><code>Range[checklength - 4, checklength] </code></pre> <hr> <p>The above is elegant, but for the sake of speed, here's a faster one:</p> <pre><code># + ConstantArray[Range[-4, 0], {Length@#}] &amp;@ Developer`ToPackedArray @ checklength </code></pre> <p>yet this one is fastest:</p> <pre><code>Transpose @ Table[# + i, {i, -4, 0}] &amp;@ Developer`ToPackedArray @ checklength </code></pre> <p><strong>Timings</strong></p> <p>Comparing the <a href="https://mathematica.stackexchange.com/questions/30305/how-to-use-if-to-select-arguments-min-max-in-range#comment94271_30305">rm-rf</a>/<a href="https://mathematica.stackexchange.com/a/30325">Mr.Wizard</a> replacement method with the two above:</p> <pre><code>n = 10^6; cl = RandomInteger[{5, 6}, {10, n}]; (* ten trials *) { Do[ # + ConstantArray[Range[-4, 0], {Length@#}] &amp;@ Developer`ToPackedArray @ cl[[seed]], {seed, 10}] // Timing // First, Do[ Transpose@Table[# + i, {i, -4, 0}] &amp;@ Developer`ToPackedArray @ cl[[seed]], {seed, 10}] // Timing // First, Do[ cl[[seed]] /. {5 -&gt; Range[5], 6 -&gt; Range[2, 6]}, {seed, 10}] // Timing // First } / 10 </code></pre> <blockquote> <pre><code> ConstantArray Table ReplaceAll V9.0.1 { 0.0629536, 0.0469905, 0.1893714 } V8.0.4 { 0.0297195, 0.0267913, 0.158208 } </code></pre> </blockquote> <p>Somewhat disappointing to see <em>Mma</em> slow down as it gets older. (This holds over data sizes for <code>cl</code> from <code>10^2</code> to <code>10^7</code>.)</p>
926,069
<p>Say that two $m\times n$ matrices, where $m,n\ge 2$, are <em>related</em> if one can be obtained from the other after a finite number of steps, where at each step we add any real number to all elements of any one row or column. For example, $\left(\begin{array}{cc} 0 &amp; 0\\0 &amp; 0 \end{array}\right)$ and $\left(\begin{array}{cc} 1 &amp; 3\\0 &amp; 2 \end{array}\right)$ are related since the latter can be obtained from the former by adding $1s$ to the first row, and then adding $2s$ to the second column.</p> <p><strong>Question</strong> What matrices are related to the $m\times n$ zero matrix? Also, given two related matrices, how can we determine the minimum number of steps to generate one from the other?</p> <hr> <p>The motivation of this question is the transportation problem: it can be shown that transportation problems with related cost matrices have the same optimum solutions. (A matrix of nonnegative elements $(x_{ij})$ is feasible if $\sum_i x_{ij}=d_j$ and $\sum_j x_{ij}=s_i$ for some constants $d_j$ and $s_i$ with $\sum_j d_j=\sum_i s_i$, and is optimal if it minimises over all feasible matrices the sum $\sum_{i,j} x_{ij}c_{ij}$ for a given cost matrix $(c_{ij})$.) </p>
frog
84,997
<p>Your series is the Fourier-Series of the $2\pi$-peridodic extension of the function $$ f(x):=\frac{\pi-x}{2} $$ defined on $[0,2\pi]$, hence the sign will depend on $x$… EDIT: Thus your statement $$\operatorname{sgn}\left(\sum\limits_{k=1}^\infty\frac{\sin (kx)}{k}\right)=\operatorname{sgn}(\sin x)$$ is correct.</p>
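<p>For what it's worth, here is a quick numerical sanity check (my addition, not part of the argument above): the partial sums stay on the same side of zero as $\sin x$ and track the sawtooth $(\pi-x)/2$ on $(0,2\pi)$.</p>

```python
import math

# Partial sums of sum_{k>=1} sin(kx)/k, which converge to (pi - x)/2 on (0, 2*pi)
def partial_sum(x, n=200):
    return sum(math.sin(k * x) / k for k in range(1, n + 1))

for x in (0.5, 1.0, 2.0, 3.0):   # sin(x) > 0 on (0, pi)
    assert partial_sum(x) > 0
for x in (3.5, 4.0, 5.0, 6.0):   # sin(x) < 0 on (pi, 2*pi)
    assert partial_sum(x) < 0

# the values also track the limit function (pi - x)/2
assert abs(partial_sum(1.0) - (math.pi - 1) / 2) < 0.05
```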
128,015
<p>For the function $\frac{1}{x}$ on the real line, one can use a modified principal value integral to consider it as a distribution p.f.$(\frac{1}{x}),$ and one can do a similar construction to make $\frac{1}{x^m}$ into a distribution for $m&gt;1.$ In the complex plane, the function $\frac{1}{z^m}$ is locally integrable for $m=1,$ but for larger $m$ some construction analogous to the one dimensional would have to be done to make it into a distribution. </p> <p>More generally, given a meromorphic function on the plane (or torus), one should be able to consider it as a distribution by integrating against it and subtracting off some delta distributions or derivatives of delta distributions. Is this process explained in detail anywhere? Has anyone computed the Fourier series of such distributions, say for the Weierstrass $\mathfrak{p}$ function on the torus?</p>
user23078
23,078
<p>It is interesting to think of distributions as boundary values of analytic functions, an idea which I believe originated with Sato. You may find the following result helpful.</p> <p>Let $I$ be an open interval on $\mathbb{R}$, and let $$ Z=\{z\in \mathbb{C};\Re z\in I,0&lt;\Im z &lt;\gamma\} $$ be a one-sided complex neighborhood. If $f$ is analytic on $Z$ and there exists a non-negative integer $N$ such that $$ |f(z)|\leq C|\Im z|^{-N},\quad z\in Z, $$ then $f(x+i0)$ exists as a distribution and is of order $N+1$. The condition cannot be relaxed very much: in fact, if we assume that $f(x+i0)\in \mathcal{D'}^n$ (a distribution of order $n$), then we have $$ |f(z)|\leq C|\Im z|^{-n-1} $$</p>
733,101
<p>I've been stuck for a while on this question and haven't found applicable resources.</p> <p>I have 10 choices and can select 3 at a time. I am allowed to repeat choices (combination), but the challenge is that ABA and AAB are not unique.</p> <p>10 choose 3 is the question.</p> <p>I have been working on a smaller set to find a formula. 3 choose 3.</p> <p>I came up with 27 results (if order matters) and 10 results if order doesn't matter. AAA, AAB, AAC, ABB, ACB, ACC, BBB, BBC, BCC, CCC</p> <p>How do I go about solving these problems.</p> <p>My closest hypothesis is choices^slots / slots! == 3^3/3!</p>
André Nicolas
6,312
<p>Let us suppose that we have $m$ distinct object "types," but many objects of each type. You want to select $k$ objects, where selections may be repeated. Consider the equation $$x_1+x_2+\cdots+x_{m}=k.\tag{1}$$ Any selection of $k$ objects, not necessarily distinct, corresponds to a solution of Equation (1) in non-negative integers. Here $x_i$ is the number of Type $i$ objects selected. </p> <p>It is a standard result that Equation (1) has $\binom{m+k-1}{m-1}$ solutions. For a good discussion, please see the Wikipedia article on <a href="http://en.wikipedia.org/wiki/Stars_and_bars" rel="nofollow">Stars and Bars.</a> There have also been many answers on MSE that use Stars and Bars. </p> <p>Note that for your $3$ and $3$ problem, we get $\binom{5}{2}$, which is $10$. For your $10$ and $3$ problem, we get $\binom{12}{9}=\binom{12}{3}=220$. For $3$ objects to be selected, there are simpler ways to do the analysis, but for general $k$ Stars and Bars is the best approach. </p>
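<p>The count can also be confirmed by brute force with Python's <code>itertools.combinations_with_replacement</code> (a check I am adding, not part of the Stars and Bars argument):</p>

```python
from itertools import combinations_with_replacement
from math import comb

# Multisets of size k drawn from m types; the count should be C(m+k-1, m-1).
def count_multisets(m, k):
    return sum(1 for _ in combinations_with_replacement(range(m), k))

assert count_multisets(3, 3) == comb(5, 2) == 10     # the 3-and-3 problem
assert count_multisets(10, 3) == comb(12, 9) == 220  # the 10-and-3 problem
```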
4,642,388
<p>As an enthusiast in Mathematics, and aware of this site's policy, I am fairly sure this question is fated to be closed, yet I have made up my mind that before that happens, someone may be able to answer or at least comment on this work.</p> <p>In 2022, I published a book named <a href="https://rads.stackoverflow.com/amzn/click/com/B0BD2TRSZQ" rel="nofollow noreferrer">A Young Mathematician</a> where I first presented my idea of Imtiaz Germain primes, a variant of Sophie Germain primes. I defined this sequence, in notation explained in my book <a href="https://rads.stackoverflow.com/amzn/click/com/B0BKMVDVLY" rel="nofollow noreferrer">Research of the Century</a>, as follows:</p> <blockquote> <p>The following defines the set IG (named Imtiaz Germain primes). We will enhance this, and give meaning to Imtiaz Germain primes further, with a clear scope.</p> <p><span class="math-container">$$ p \in Y $$</span> <span class="math-container">$$ \dagger p = p $$</span> <span class="math-container">$$ \to 2(p)+1 = \mathbb{P} $$</span> <span class="math-container">$$ \to 2(p)+1 = \mathbb{C} $$</span> <span class="math-container">$$ \to 2(p)+1 = \mathbb{P} $$</span> <span class="math-container">$$ \to p \in IG $$</span></p> </blockquote> <p>Here, <span class="math-container">$p$</span> is a Sophie Germain prime and <span class="math-container">$Y$</span> is defined as follows, according to Research of the Century:</p> <blockquote> <p><span class="math-container">$Y: Y \in \mathbb{N} \wedge \frac{\mathbb{N}}{2} = \phi$</span></p> </blockquote> <p>In basic terms: take a safe prime and apply <span class="math-container">$2p+1$</span> to it; if the result is a composite number, and applying <span class="math-container">$2p+1$</span> to that composite yields a prime, then the original Sophie Germain prime is an Imtiaz Germain prime.</p> <p>The sequence continues as:</p>
<blockquote> <p>3, 23, 29, 53, 113, 233, 293, 419, 593, 653, 659, 683, 1013, 1103, 1223, 1439, 1559, 1583, 1973, 2039, 2273, 2339, 2549, 2753, 3299, 3359, 3593, 3803, 3863, 4019, 4409, 4733, 4793, 4919, 4943, 5003, 5279, 5639, 6173, 6263, 6269, 6323, 6563, 6983, 7433, 7643, 7823, 8243, 8273, 8513, 10253, 10529, 10799, 10883, 11393, 11579, 12329, 12923, 13049, 13619, 13649, 14159, 14879, 16673, 16823, 17579, 17669, 17939, 18443, 18803, 19373, 19913, 20249, 20393, 20693, 20753, 20789, 20879...</p> </blockquote> <p>The book A Young Mathematician checked up to ten million to see whether the terms always end in 3 or 9. See a copy of the book <a href="https://aitzazimtiaz.github.io/A-Young-Mathematician/" rel="nofollow noreferrer">here</a>, which verified the claim in that range. How can I write a proof that such numbers always end in 3 or 9? All I want is a way to start formulating the proof; I do not require a full proof.</p> <p>P.S. I am extremely sorry for taking everyone's time, but I have a passion to share this work; downvotes and close votes are certain enough. I would just appreciate your comments on this idea in general and would love to see a proof. Considering that I am barely 16, I plead guilty to posting a question I should not have, but would love to know you read my idea.</p> <p>Edit 1: 3 is a Sophie Germain prime; it has safe prime 7. Apply <span class="math-container">$2p+1$</span> to it and you get 15 (it needs to be composite); apply <span class="math-container">$2p+1$</span> again and you get 31 (a prime), so 3 is an IG prime. 5 is a Sophie Germain prime, with 11 as its safe prime. Apply <span class="math-container">$2p+1$</span> to it and you get 23, which is prime when it needed to be composite, so 5 is not an IG prime.</p>
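<p>The start of the sequence can be regenerated with a short script (my reading of the definition above: <span class="math-container">$p$</span> and <span class="math-container">$2p+1$</span> prime, <span class="math-container">$2(2p+1)+1$</span> composite, and <span class="math-container">$2(2(2p+1)+1)+1$</span> prime):</p>

```python
# Brute-force regeneration of the first Imtiaz Germain primes as defined above.
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def imtiaz_germain(limit):
    out = []
    for p in range(2, limit):
        if is_prime(p) and is_prime(2 * p + 1):          # Sophie Germain prime
            s = 2 * (2 * p + 1) + 1
            if not is_prime(s) and is_prime(2 * s + 1):  # composite, then prime
                out.append(p)
    return out

print(imtiaz_germain(300))  # [3, 23, 29, 53, 113, 233, 293]
```

<p>Every term produced in this range indeed ends in 3 or 9, matching the list quoted above.</p>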
hasManyStupidQuestions
606,791
<p><strong>Attempt</strong> (<em>community wiki</em>):</p> <p>Note that for 2. I've been sloppy in declaring that <span class="math-container">$\mathfrak{F} \subseteq \mathcal{P}(\mathfrak{M})$</span> because the Tarskian semantics for multi-sorted logic don't require that. In general <span class="math-container">$\mathfrak{F}$</span> could be any set.</p> <p>However, my understanding is that one can assume without loss of generality that <span class="math-container">$\mathfrak{F} \subseteq \mathcal{P}(\mathcal{M})$</span>, in the following way.</p> <p>Let's say we have some structure <span class="math-container">$\tilde{\mathfrak{F}}$</span> for the collections (that may or may not consist of subsets of <span class="math-container">$\mathfrak{M}$</span>). Then we can define a (possibly new) structure <span class="math-container">$\mathfrak{F} \subseteq \mathcal{P}(\mathfrak{M})$</span> via the map <span class="math-container">$\tilde{\mathfrak{F}} \to \mathfrak{F}$</span> with rule of assignment <span class="math-container">$$ \tilde{f} \mapsto \{ m \in \mathfrak{M}: \tilde{E}(m, \tilde{f}) \} ,$$</span> where <span class="math-container">$\tilde{E}$</span> is the interpretation of <span class="math-container">$\tilde{\in}$</span> in <span class="math-container">$\tilde{\mathfrak{F}}$</span>. (I guess I also left it implicit in the question that the interpretation of <span class="math-container">$\tilde{\in}$</span> in any <span class="math-container">$\mathfrak{F} \subseteq \mathcal{P}(\mathfrak{M})$</span> would be the restriction of <span class="math-container">$\in$</span> of the ambient set theory.)</p> <p>Anyway, regarding 1., I am not sure at all how one would prove it via proof-theoretic means, although presumably there is a way. My understanding though is that if one can prove that the answer to 2. 
is correct, then the combination of the soundness and completeness theorem would mean that the model-theoretic result also implicitly proves the proof-theoretic result 1.</p> <p>Specifically, I am asserting that if it is true that all models of <span class="math-container">$\tilde{T}$</span> have as their first-order part a model of <span class="math-container">$T$</span>, and that for every model of <span class="math-container">$T$</span> there is at least one corresponding model of <span class="math-container">$\tilde{T}$</span> with that as its first-order part, then by the Godel completeness theorem this means that the only things <span class="math-container">$\tilde{T}$</span> can prove about statements restricted to <span class="math-container">$\mathcal{L}$</span> are those that can be proved by <span class="math-container">$T$</span>, i.e. <span class="math-container">$\tilde{T}$</span> is conservative over <span class="math-container">$T$</span>. I'm not really certain whether that's correct or precise enough.</p> <p>So for proving 2., the general idea would be that presumably the only new atomic formulas that are introduced by augmenting <span class="math-container">$\mathcal{L}$</span> to <span class="math-container">$\tilde{\mathcal{L}}$</span> are of the form <span class="math-container">$o \tilde{\in} c$</span>. 
But (1) I'm not sure whether it's <em>really</em> true that those are the only new atomic formulas, or whether others are &quot;sneaking in&quot; that I'm not noticing, and (2) if they are the only ones, how to prove that that's the case.</p> <p>If it's true, then basically the proof strategy would be to try to show that all such atomic formulas <span class="math-container">$o \tilde{\in} c$</span> can be &quot;eliminated&quot;, or replaced with statements entirely in <span class="math-container">$\mathcal{L}$</span> which are proven by <span class="math-container">$T$</span>.</p> <p>This seems to divide into three cases: (1) <span class="math-container">$c$</span> corresponds to a (explicitly defined) wff. <span class="math-container">$\psi$</span> via the axiom schema, (2a) <span class="math-container">$c$</span> does not correspond to a wff. but does correspond to an implicitly defined formula, (2b) <span class="math-container">$c$</span> does not correspond to any sort of formula.</p> <p>This is where I start to have a lot of confusion, which seems to be related to that expressed <a href="https://math.stackexchange.com/questions/539105/second-order-logic-and-quantification-over-formulas">in this other related question</a>. Specifically regarding (2b), which I can only think of how to potentially address model-theoretically. Because even though our <em>intended</em> models for the collections sort <span class="math-container">$C$</span> is always <span class="math-container">$\mathsf{Def}_{\mathcal{L}}(\mathfrak{M})$</span>, that doesn't mean we can't have models for <span class="math-container">$C$</span> that are strict supersets of <span class="math-container">$\mathsf{Def}_{\mathcal{L}}(\mathfrak{M})$</span> (i.e. even when using the trick to force the interpretation of <span class="math-container">$C$</span> to be a subset of <span class="math-container">$\mathcal{P}(\mathfrak{M})$</span>). 
And for those possible valuations of <span class="math-container">$C$</span>-sort variables <span class="math-container">$c$</span> corresponding to elements <span class="math-container">$\mathfrak{F} \setminus \mathsf{Def}_{\mathcal{L}}(\mathfrak{M})$</span>, I am very unsure how, if at all, one could reason about them proof-theoretically.</p> <p>For case (1), the proof seems to be easy, i.e. we basically have a definitorial expansion, because we can replace the statement <span class="math-container">$o \in c_{\psi,\bar{\omega}}$</span> with the <span class="math-container">$\mathcal{L}$</span>-sentence <span class="math-container">$\forall^O \bar{\omega} \psi(o, \bar{\omega})$</span> (basically the axiom schema was rigged to accomplish this).</p> <p>For case (2a), apparently the <a href="https://encyclopediaofmath.org/wiki/Beth_definability_theorem" rel="nofollow noreferrer">Beth definability theorem</a> says that in fact it is the same as case (1), i.e. &quot;no new / nonstandard formulas sneak in the back door&quot;. (But I seriously doubt I actually understand the statement of the Beth definability theorem, much less whether I am applying it correctly. I certainly don't understand its generalization, the Craig interpolation theorem.)</p> <p>So again that would seem to only leave (2b) which I can only think to handle model theoretically. But basically because in that case the variable <span class="math-container">$c$</span> doesn't even implicitly define a predicate, that seems to mean it would &quot;correspond to a parameter present in the &quot;second-order part&quot; <span class="math-container">$\mathfrak{F}$</span> part of some models of <span class="math-container">$\tilde{T}$</span> but not present in other models of <span class="math-container">$\tilde{T}$</span>&quot;. (I know that statement doesn't actually make sense.)
So then basically we would conclude by the Godel completeness theorem that <span class="math-container">$\tilde{T}$</span> can not prove anything about statements involving it, in particular the one we care about <span class="math-container">$o \tilde{\in} c$</span>? And because apparently <span class="math-container">$\tilde{T}$</span> can't prove anything about <span class="math-container">$o \tilde{\in} c$</span> in that case, it would follow that <span class="math-container">$\tilde{T}$</span> can't use it to prove anything over the sublanguage <span class="math-container">$\mathcal{L} \subseteq \tilde{\mathcal{L}}$</span>. So in this case too the collection of true sentences over <span class="math-container">$\mathcal{L}$</span> is not expanded by <span class="math-container">$\tilde{T}$</span>?</p> <p>So assuming the argument for (2b) is actually correct, we would get a proof that no additional statements in the language <span class="math-container">$\mathcal{L}$</span> are proved by <span class="math-container">$\tilde{T}$</span>, and thus that <span class="math-container">$\tilde{T}$</span> is conservative over <span class="math-container">$T$</span>?</p>
3,072,995
<p>The only thing I know with this equation is <span class="math-container">$y=\frac{x^2+1}{x+1}=x+1-\frac{2x}{x+1}$</span>.</p> <p>Maybe it can be solved by using inequality.</p>
Community
-1
<p>Out of the given answers, one simple way is to get the <span class="math-container">$x$</span> in terms of <span class="math-container">$y$</span> and solve it :)</p> <p><span class="math-container">$y=\frac{x^2+1}{x+1}$</span> <span class="math-container">$$\Rightarrow yx + y = x^2 +1 \Rightarrow -x^2 + yx +(y-1)=0 \Rightarrow x = \frac{-y \pm\sqrt{y^2 + 4(y-1)}}{-2} = \frac{-y \pm \sqrt{(y-(-2-2\sqrt2))(y - (2\sqrt2-2))}}{-2}$$</span></p> <p>For <span class="math-container">$x$</span> to be real, the value under the square root must be non-negative, so <span class="math-container">$y \le -2-2\sqrt2$</span> or <span class="math-container">$y \ge 2\sqrt2-2$</span>. The lowest value of <span class="math-container">$y$</span> on the branch <span class="math-container">$x&gt;-1$</span> is therefore obtained when the value under the square root equals <span class="math-container">$0$</span>, in which case</p> <p><span class="math-container">$$y_{\min} = 2\sqrt2-2, \qquad x = \frac{-y_{\min}}{-2}=\sqrt2-1.$$</span></p>
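<p>A quick numeric check of that extremum (a sketch I am adding, using the critical point <span class="math-container">$x=\sqrt2-1$</span> where the discriminant vanishes):</p>

```python
import math

# f(x) = (x^2 + 1)/(x + 1); on the branch x > -1 the minimum is 2*sqrt(2) - 2
f = lambda x: (x ** 2 + 1) / (x + 1)
x0 = math.sqrt(2) - 1
y0 = 2 * math.sqrt(2) - 2

assert abs(f(x0) - y0) < 1e-9                   # value at the critical point
assert f(x0 - 0.1) > y0 and f(x0 + 0.1) > y0    # neighbouring values are larger
```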
3,319,122
<p>This is from Tao's Analysis I: </p> <p><a href="https://i.stack.imgur.com/DYQxE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DYQxE.png" alt="enter image description here"></a></p> <p>So far I have managed to show (inductively) that these sets exist for every <span class="math-container">$\mathit{N}\in\mathbb{N}$</span>, but I'm finding it hard to show they're unique.</p> <p>One of the things I'm (also) trying to prove is that <span class="math-container">$\mathit{N}\in\mathit{A_N}$</span>, but I'm stuck on that one too. </p> <p>I'm not allowed to use the ordering of the natural numbers.</p>
Mishikumo2019
631,353
<p>No, the function really is decreasing! Indeed, if you differentiate the function, you have: <span class="math-container">$$f'(x)=-\left(\frac{e^x(x-1)+1}{(e^x-1)^2} \right)\le 0. $$</span> And the function is continuous at <span class="math-container">$x=0$</span>. Indeed: <span class="math-container">$$e^x-1\sim x $$</span> when <span class="math-container">$x$</span> is in a neighbourhood of <span class="math-container">$0$</span>, so: <span class="math-container">$$f(x)\sim\frac{x}{x}\sim 1, $$</span> and by continuity you have <span class="math-container">$f(0)=1$</span>. There must be an error in the way your calculator draws the function, especially when <span class="math-container">$x$</span> is in a neighbourhood of <span class="math-container">$0$</span>.</p>
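<p>The monotonicity can also be confirmed numerically (my check, not needed for the proof), filling the removable singularity at <span class="math-container">$0$</span> with the limit value <span class="math-container">$1$</span>:</p>

```python
import math

def g(x):
    # x / (e^x - 1), with the removable singularity at 0 filled in by g(0) = 1
    if x == 0:
        return 1.0
    return x / (math.exp(x) - 1)

values = [g(i / 10) for i in range(-50, 51)]            # grid on [-5, 5] through 0
assert all(a > b for a, b in zip(values, values[1:]))   # strictly decreasing
```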
2,370,716
<p>Given the positive integer $n\ge 2$, and $x_{i}\ge 0$ such that $$x_{1}+x_{2}+\cdots+x_{n}=1,$$ find the maximum of the value $$x^2_{1}+x^2_{2}+\cdots+x^2_{n}+\sqrt{x_{1}x_{2}\cdots x_{n}}$$</p> <p>I tried $$x_{1}x_{2}\cdots x_{n}\le\left(\dfrac{x_{1}+x_{2}+\cdots+x_{n}}{n}\right)^n=\dfrac{1}{n^n}$$</p>
Michael Rozenberg
190,319
<p>Also, we can argue as follows.</p> <p>Let $x_1=x_2=...=x_{n-1}=0$ and $x_n=1$.</p> <p>Hence, we get the value $1$, and to see that it is maximal we need to prove that $$\sum_{i=1}^nx_i^2+\sqrt{\prod_{i=1}^nx_i}\leq\left(\sum_{i=1}^nx_i\right)^2,$$ which is true by AM-GM.</p> <p>Indeed, let $\prod\limits_{i=1}^nx_i=w^n$, where $w\geq0$, $\sum\limits_{1\leq i&lt;j\leq n}x_ix_j=\frac{n(n-1)}{2}v^2$, where $v\geq0$, and $\sum\limits_{i=1}^nx_i=nu$.</p> <p>Hence, by AM-GM $u\geq w$, $v\geq w$, and we need to prove that $$n^2(n-1)^2v^4\geq w^n$$ or $$n^2(n-1)^2v^4\cdot n^{n-4}u^{n-4}\geq w^n,$$ which is obviously true for $n\geq4$.</p> <p>Thus, it remains to understand what happens for $n=2$ and for $n=3$, which is for you.</p>
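<p>A randomized check (my addition) of the key inequality for $n\geq4$, the range where the AM-GM argument applies:</p>

```python
import math
import random

# For nonnegative x_i with sum 1 and n >= 4, check
# sum x_i^2 + sqrt(prod x_i) <= (sum x_i)^2 = 1.
def value(x):
    return sum(t * t for t in x) + math.sqrt(math.prod(x))

random.seed(0)
for n in (4, 5, 7):
    for _ in range(1000):
        raw = [random.random() for _ in range(n)]
        s = sum(raw)
        x = [r / s for r in raw]         # normalize so the coordinates sum to 1
        assert value(x) <= 1 + 1e-9
```

<p>The value $1$ itself is attained at the corner $(0,\dots,0,1)$, where the product term vanishes.</p>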
2,586,625
<p>Consider the set $$S=\{x\mid x\in S\}.$$</p> <p>For every element $x$, either $x\in S$ or $x\not\in S.$</p> <p>If we know that $x$ is in fact element of $S$, then, by definition, $x\in S$ so it is true that $x\in S$.</p> <p>If we know that $x\not\in S$, then, by definition, $x\not\in S$ so it is true that $x\not\in S$.</p> <p>But if my question is <em>what elements does $S$ contain,</em> then the answer is "not certain."</p>
CopyPasteIt
432,081
<p>There is a concept in $\text{Set Theory}$ of a 'collectivizing relation', in other words, a relation that can be used to define sets.</p> <p>Starting with any set $S$, the relation in</p> <p>$\tag 1 \text{There exists a set } X \text{ such that } X = \{x\mid x\in S \text{ AND } P(x) \}$</p> <p>where $P(x)$ is a statement about $x$, is collectivizing.</p> <p>In particular, with</p> <p>$\tag 2 P(x) \; :\; x = x$</p> <p>we can create a new set</p> <p>$\tag 3 A = \text{Set defined by (1) (Set Builder Schema)}$</p> <p>Exercise: Show that $A = S$.</p> <p>The OP should have written,<br> Starting with a set $S$, what elements are in the set $S^{'} = \{x\mid x\in S\}$?</p> <p>Ans: $S^{'} = S$.</p> <blockquote> <p>In conclusion, you can only make sense of the expression</p> <p>$\tag 4 S = \{x\mid x\in S\}$ in one way:</p> <p>The expression (4) is a statement in set theory (perhaps an abuse of notation). The RHS builds a set, and the statement $S = \text{RHS}$ is either TRUE or FALSE (and here (4) is always TRUE).</p> <p>It is nonsensical to employ (4) to 'build a set $S$'.</p> </blockquote>
2,586,625
<p>Consider the set $$S=\{x\mid x\in S\}.$$</p> <p>For every element $x$, either $x\in S$ or $x\not\in S.$</p> <p>If we know that $x$ is in fact element of $S$, then, by definition, $x\in S$ so it is true that $x\in S$.</p> <p>If we know that $x\not\in S$, then, by definition, $x\not\in S$ so it is true that $x\not\in S$.</p> <p>But if my question is <em>what elements does $S$ contain,</em> then the answer is "not certain."</p>
JDH
413
<p>To define an object, such as a set, means to provide a property that that object and only that object has. </p> <p>In your case, however, it turns out that <em>every</em> set $S$ has the property that $$S=\{x\mid x\in S\}.$$ This is just because every set is the set consisting of its elements. </p> <p>So you haven't successfully defined an object, you haven't picked out a particular set, since your property doesn't hold of one and only one object.</p>
1,583,887
<p>This problem is from An Introduction to Abstract Algebra by Derek John, which I am working through.</p> <p>I am trying to prove that no group of order 1960 is simple, arguing by contradiction, but I got stuck in the middle.</p> <p>Suppose that $|G| = 1960 = 2^3 * 5 * 7^2$; by Sylow theory we have Sylow 2-, 5-, and 7-subgroups of $G$. After some computation I got the following least (nontrivial) values for $n_p$: $n_2 = 5$, $n_7 = 8$, and $n_5 = 56$, but I don't know how to proceed further. </p>
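<p>The candidate Sylow numbers can be double-checked mechanically (a verification script of mine, just applying the congruence and divisibility constraints):</p>

```python
# For |G| = 1960 = 2^3 * 5 * 7^2, the admissible n_p satisfy
# n_p | (|G| / p^k) and n_p == 1 (mod p).
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def sylow_counts(order, p):
    m = order
    while m % p == 0:
        m //= p
    return [d for d in divisors(m) if d % p == 1]

assert sylow_counts(1960, 2) == [1, 5, 7, 35, 49, 245]
assert sylow_counts(1960, 5) == [1, 56, 196]
assert sylow_counts(1960, 7) == [1, 8]
```

<p>So the least nontrivial values are indeed $n_2=5$, $n_5=56$, $n_7=8$.</p>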
Jyrki Lahtonen
11,619
<p>Posting my chatroom solution. It is more complicated than Alex Jordan's argument, but does give a bit more information about the groups of size $1960$. More precisely, it shows that either a Sylow 5-subgroup, denoted $P_5$, or a Sylow 7-subgroup, denoted $P_7$, must be a normal subgroup. Also, we will see that $G$ always has an abelian subgroup of order $245$.</p> <ol> <li>Unless $P_7\unlhd G$ there must be exactly $8$ Sylow 7-subgroups. In that case the normalizer $N_G(P_7)$ has cardinality $245$. So we can assume that $P_5\le N_G(P_7)$.</li> <li>Because $|P_7|=7^2$ we know that $P_7$ must be isomorphic to either $C_{49}$ or to the additive group of the vector space $\Bbb{F}_7^2$. In the former case $Aut(P_7)\cong C_{42}$, in the latter case $Aut(P_7)\cong GL_2(\Bbb{F}_7)$. These have cardinalities $42$ and $42\cdot48$ respectively.</li> <li>So in both cases we see that $P_7$ has no automorphisms of order five. Therefore the conjugation action of the copy of $P_5$ on $P_7$ must be trivial, and $P_5$ centralizes $P_7$.</li> <li>Therefore $C_G(P_5)$ has size at least $245$ for this copy of $P_5$ (and hence for all of them). Thus also $245\mid |N_G(P_5)|$.</li> <li>Hence the number of Sylow $5$-subgroups is a factor of eight. But the only factor of $8$ that is $\equiv1\pmod5$ is $1$, so $P_5\unlhd G$.</li> </ol> <p>Of course, if $P_7\unlhd G$, the argument of steps 2 - 5 still works, and shows that $P_5\le C_G(P_7)$, and $P_5\unlhd G$. So we still have an abelian subgroup $P_5P_7$.</p>
3,561,807
<p>There is this question regarding constrained optimisation. It says, a rectangular parallelepiped has all eight vertices on the ellipsoid <span class="math-container">$x^{2}+3y^{2}+3z^{2}=1$</span>. Using the symmetry about each of the planes, write down the surface area of the parallelepiped and therefore find the maximum surface area.<br> I know that the surface area <span class="math-container">$S(x,y,z) = 8xy+8yz+8zx$</span>. I've also defined the constraint as given <span class="math-container">$g(x) = x^{2}+3y^{2}+3z^{2}=1$</span>. Using the Lagrange multiplier <span class="math-container">$\lambda$</span> I got:<br> <span class="math-container">$8y+8z-2\lambda x=0$</span><br> <span class="math-container">$8x+8z-6\lambda y=0$</span><br> <span class="math-container">$8x+8y-6\lambda z=0$</span><br> What confuses me is what is next in the answer key. They say using symmetry, <span class="math-container">$y=z$</span>. My question is, how are we able to make that argument? I'm slightly confused. Is it because in the ellipsoid equation, the coefficients of <span class="math-container">$y$</span> and <span class="math-container">$z$</span> are equal? Does that mean for a rectangular parallelepiped inscribed in said ellipsoid, the <span class="math-container">$y$</span> and <span class="math-container">$z$</span> coordinates of its vertices will always be equal? If this is the case, can anyone explain to me why?</p>
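<p>For what it's worth, here is one way I convinced myself (my own sketch, not the answer key's): the second and third stationarity equations are symmetric in <span class="math-container">$y$</span> and <span class="math-container">$z$</span> precisely because the ellipsoid has equal coefficients on <span class="math-container">$y^2$</span> and <span class="math-container">$z^2$</span>. Subtracting them gives</p>

```latex
(8x + 8z - 6\lambda y) - (8x + 8y - 6\lambda z) = 0
\;\Longrightarrow\; 8(z - y) + 6\lambda(z - y) = 0
\;\Longrightarrow\; (z - y)(8 + 6\lambda) = 0 .
```

<p>So either <span class="math-container">$y=z$</span> or <span class="math-container">$\lambda=-\frac{4}{3}$</span>; at the maximum with <span class="math-container">$x,y,z&gt;0$</span>, the first equation <span class="math-container">$8y+8z=2\lambda x$</span> forces <span class="math-container">$\lambda&gt;0$</span>, which rules out the second case.</p>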
Robert Lewis
67,071
<p>With <span class="math-container">$n$</span> odd, we have</p> <p><span class="math-container">$n = 2k + 1. \; k \in \Bbb Z; \tag 1$</span></p> <p>then</p> <p><span class="math-container">$n^2 = 4k^2 + 4k + 1, \tag 2$</span></p> <p>whence</p> <p><span class="math-container">$n^2 - 1 = 4k^2 + 4k = 4(k^2 + k) \Longrightarrow 4 \mid n^2 -1. \tag 3$</span></p> <p>One may also write</p> <p><span class="math-container">$n + 1 = 2k + 2, \tag 4$</span></p> <p><span class="math-container">$n - 1 = 2k, \tag 5$</span></p> <p><span class="math-container">$n^2 - 1 = (n + 1)(n - 1) = (2k + 2)(2k) = 2(k + 1)(2k)$</span> <span class="math-container">$= 4k(k + 1) \Longrightarrow 4 \mid n^2 - 1. \tag 6$</span></p> <p>You can pick whichever proof you like the most!</p>
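<p>A mechanical check of the claim for the first hundred odd integers (my addition; the algebra above is of course the actual proof):</p>

```python
# For odd n = 2k + 1 we should have n^2 - 1 = 4k(k + 1), divisible by 4.
for n in range(-99, 100, 2):
    k = (n - 1) // 2
    assert n == 2 * k + 1
    assert n * n - 1 == 4 * k * (k + 1)
    assert (n * n - 1) % 4 == 0
```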
3,398,645
<p>I have a doubt about the value of <span class="math-container">$e^{z}$</span> at <span class="math-container">$\infty$</span>: one of my books states that <span class="math-container">$\lim_{z \to \infty} e^z = \infty $</span>,</p> <p>but another book says the limit doesn't exist. I am confused now.</p> <p>Since <span class="math-container">$e^{z}$</span> is an entire function, it seems that as <span class="math-container">$z \to \infty $</span>, <span class="math-container">$e^{z}$</span> must go to <span class="math-container">$\infty$</span>.</p> <p>Please help.</p>
Kavi Rama Murthy
142,385
<p><span class="math-container">$|e^{i2\pi n}|=1$</span> for all <span class="math-container">$n$</span>, while <span class="math-container">$|i2\pi n| \to \infty$</span>. So it is not true that <span class="math-container">$|e^{z}| \to \infty$</span> as <span class="math-container">$|z| \to \infty$</span>. Of course the limit is <span class="math-container">$\infty$</span> when you take the limit through <span class="math-container">$\{1,2,...\}$</span>, so the limit of <span class="math-container">$e^{z}$</span> as <span class="math-container">$|z| \to \infty$</span> does not exist. </p>
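<p>The two paths can be compared numerically (a small illustration I am adding): along the positive real axis <span class="math-container">$|e^z|$</span> blows up, while at <span class="math-container">$z=2\pi i n$</span> the modulus stays <span class="math-container">$1$</span>.</p>

```python
import cmath
import math

for n in (10, 50, 100):
    on_imaginary_axis = cmath.exp(complex(0.0, 2 * math.pi * n))
    assert abs(abs(on_imaginary_axis) - 1.0) < 1e-9  # modulus 1 along 2*pi*i*n
    assert math.exp(n) > 2.0 ** n                    # unbounded along the reals
```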
3,398,645
<p>I have a doubt about the value of <span class="math-container">$e^{z}$</span> at <span class="math-container">$\infty$</span>: one of my books states that <span class="math-container">$\lim_{z \to \infty} e^z = \infty $</span>,</p> <p>but another book says the limit doesn't exist. I am confused now.</p> <p>Since <span class="math-container">$e^{z}$</span> is an entire function, it seems that as <span class="math-container">$z \to \infty $</span>, <span class="math-container">$e^{z}$</span> must go to <span class="math-container">$\infty$</span>.</p> <p>Please help.</p>
robjohn
13,854
<p>In real analysis, <span class="math-container">$$ \lim\limits_{x\to\infty}e^x=\infty $$</span> because the limit is taken along the positive real axis. Similarly, <span class="math-container">$$ \lim\limits_{x\to-\infty}e^x=0 $$</span> because the limit is taken along the negative real axis.</p> <p>However, in complex analysis, <span class="math-container">$\infty$</span> is taken to be the point added to make the <a href="https://en.wikipedia.org/wiki/Alexandroff_extension" rel="nofollow noreferrer">one-point compactification</a> of the complex plane (the <a href="https://en.wikipedia.org/wiki/Riemann_sphere" rel="nofollow noreferrer">Riemann Sphere</a>). <span class="math-container">$e^z$</span> takes on all non-zero complex values in any neighborhood of <span class="math-container">$\infty$</span>. Therefore, <span class="math-container">$\lim\limits_{z\to\infty}e^z$</span> does not exist.</p>
623,796
<p>What's the domain of the function $f(x) = \sqrt{x^2 - 4x - 5}$ ?</p> <p>Thanks in advance.</p>
mathematics2x2life
79,043
<p>For the function $f$ you gave to be undefined, the input to the square root must be negative. First, notice that $$ x^2-4x-5=(x+1)(x-5) $$ So for this to be negative one of the terms must be negative and the other must be positive. So what numbers satisfy $x+1&gt;0$ and $x-5&lt;0$ or $x+1&lt;0$ and $x-5&gt;0$?</p>
623,796
<p>What's the domain of the function $f(x) = \sqrt{x^2 - 4x - 5}$ ?</p> <p>Thanks in advance.</p>
Emi Matro
88,965
<p>The domain consists of those $x$ for which the expression under the square root is non-negative, because over the reals you cannot take the square root of a negative number. To determine the domain, solve $x^2-4x-5=0$: </p> <p>$(x-5)(x+1)=0 \implies x=5,-1$. These are the roots, so the domain is: $(-\infty,-1] \cup [5,\infty)$. </p>
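<p>A quick numeric confirmation (my addition): the radicand $(x-5)(x+1)$ is non-negative exactly outside the open interval $(-1,5)$.</p>

```python
import math

# x^2 - 4x - 5 = (x - 5)(x + 1) must be >= 0 for sqrt to be defined over the reals
def in_domain(x):
    return x * x - 4 * x - 5 >= 0

assert in_domain(-1) and in_domain(5)            # endpoints are included
assert in_domain(-3) and in_domain(7)
assert not in_domain(0) and not in_domain(4.9)
assert math.sqrt(7 * 7 - 4 * 7 - 5) == 4.0       # e.g. f(7) = sqrt(16) = 4
```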
2,183,809
<p>The problem:</p> <blockquote> <p>If $p$ is prime and $4p^4+1$ is also prime, what can the value of $p$ be?</p> </blockquote> <p>I am sure this is a pretty simple question, but I just can't tackle it. I don't even known how I should begin...</p>
Will Jagy
10,400
<p>$$ 4 p^4 + 1 = (2 p^2 + 2p+1)(2 p^2 - 2p +1) = (2p(p+1)+1) (2p(p-1)+1)$$</p>
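<p>The factorization is easy to verify by machine, and (my remark, not part of the hint) both factors exceed $1$ for every integer $p \ge 2$, so $4p^4+1$ is composite for every prime $p$:</p>

```python
# Sophie Germain-style factorization: 4p^4 + 1 = (2p^2 + 2p + 1)(2p^2 - 2p + 1)
for p in range(2, 200):
    a = 2 * p * p + 2 * p + 1
    b = 2 * p * p - 2 * p + 1
    assert a * b == 4 * p ** 4 + 1
    assert a > 1 and b > 1          # so the product is never prime here
```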
182,091
<p>3D graphics can be easily rotated interactively by clicking and dragging with the mouse.</p> <p>Is there a simple way to achieve the same for animated 3D graphics? I would like to rotate them interactively (in real time) <em>while</em> the animation is running.</p> <hr> <p>Here's an example animation, mostly taken from the documentation.</p> <pre><code>L = 4; sol = NDSolveValue[{D[u[t, x, y], t, t] == D[u[t, x, y], x, x] + D[u[t, x, y], y, y] + Sin[u[t, x, y]], u[t, -L, y] == u[t, L, y], u[t, x, -L] == u[t, x, L], u[0, x, y] == Exp[-(x^2 + y^2)], Derivative[1, 0, 0][u][0, x, y] == 0}, u, {t, 0, L/2}, {x, -L, L}, {y, -L, L}]; Animate[ Plot3D[sol[t, x, y], {x, -L, L}, {y, -L, L}, PlotRange -&gt; {0, 1}, PlotPoints -&gt; 20, MaxRecursion -&gt; 0], {t, 0, L/2} ] </code></pre> <p>When the animation is stopped, I can rotate the graphics. Then if the animation is started again, the rotation is kept.</p> <p>However, I cannot rotate <em>while</em> the animation is running. Is there a relatively easy way to enable this?</p> <p><em>Note:</em> My actual application has an animated plot on the surface of a sphere. The ability to rotate would be very useful.</p>
Kuba
5,478
<p>Here's another approach. There could be a problem in case plot's options change during animation, PlotRange/Ticks etc, currently only initial ones are preserved. Will try to come up with something more general later.</p> <pre><code>DynamicModule[{viewPoint, viewVertical, plot} , plot[t_] := Plot3D[sol[t, x, y], {x, -L, L}, {y, -L, L}, PlotRange -&gt; {0, 1}, PlotPoints -&gt; 20, MaxRecursion -&gt; 0]; Module[{opts}, {viewPoint, viewVertical} = {ViewPoint, ViewVertical} /. Options[Plot3D]; With[{ rest = Sequence @@ Last[plot[0]] }, Animate[ Graphics3D[Dynamic@First@plot[t] , ViewPoint -&gt; Dynamic[viewPoint] , ViewVertical -&gt; Dynamic@viewVertical , SphericalRegion -&gt; True , rest ], {t, 0, L/2 }] ] ] ] </code></pre>
128,122
<p>Original Question: Suppose that $X$ and $Y$ are metric spaces and that $f:X \rightarrow Y$. If $X$ is compact and connected, and if to every $x\in X$ there corresponds an open ball $B_{x}$ such that $x\in B_{x}$ and $f(y)=f(x)$ for all $y\in B_{x}$, prove that $f$ is constant on $X$. </p> <p>Here's my attempt: Cover $X$ by $\bigcup _{x \in X}B_x$. Since $X$ is compact there is a finite sub-covering $\bigcup _{i=1}^N B_{x_i}$ of $X$. Given $x\in X$ there is an $i$ between $1$ and $N$ such that $x\in B_{x_{i}}$. By assumption $f(x)=f(x_{i})$. Since there are only finitely many balls covering $X$, $f(X)$ is finite, say $f(X)=\{a_{1},...,a_{m}\}$.</p> <p>Where do I go from here? I want to show that $f(X)$ is a singleton. Is $X$ a singleton too?</p>
Mike
28,355
<p>Fix $x_0\in X$ and let $U_0=\{x\in X: f(x)=f(x_0)\}$. The hypothesis makes $U_0$ open, and it makes the complement of $U_0$ open for the same reason, so $U_0$ is also closed. A nonempty subset of $X$ that is both open and closed must equal $X$ when $X$ is connected, so $U_0=X$ and $f$ is constant. So yes, the argument does rely tacitly on the connectedness of $X$; compactness is not actually needed.</p>
2,185,072
<p>Let A be the matrix: $$\begin{pmatrix} 1&amp;2&amp;3&amp;2&amp;1&amp;0\\2&amp;4&amp;5&amp;3&amp;3&amp;1\\1&amp;2&amp;2&amp;1&amp;2&amp;1 \end{pmatrix}$$.</p> <p>Show that {$\bigl( \begin{smallmatrix} 1 \\ 4\\3\end{smallmatrix} \bigr)$, $\bigl( \begin{smallmatrix} 3\\4\\1 \end{smallmatrix} \bigr)$} is a basis for the column space of A. Find a "nice" basis for the column space of A. </p> <p>So far, I have row reduced A to $$\begin{pmatrix} 1&amp;2&amp;0&amp;-1&amp;4&amp;3\\0&amp;0&amp;1&amp;1&amp;-1&amp;-1\\0&amp;0&amp;0&amp;0&amp;0&amp;0 \end{pmatrix}$$ where the pivots occur in column 1 and column 3, so {(1,2,1),(3,5,2)} should be a "nice" basis for the column space? I do not see where {$\bigl( \begin{smallmatrix} 1 \\ 4\\3\end{smallmatrix} \bigr)$, $\bigl( \begin{smallmatrix} 3\\4\\1 \end{smallmatrix} \bigr)$} come from though. </p>
amd
265,466
<p>By finding the rref of $A$ you’ve determined that the column space is two-dimensional and that the first and third columns of $A$ form a basis for this space. The two given vectors, $(1,4,3)^T$ and $(3,4,1)^T$, are obviously linearly independent, so all that remains is to show that they also span the column space. You might be able to spot that the given vectors are linear combinations of some of the columns of $A$, which shows that they’re elements of the column space. Taken together with the other facts you have, that’s enough to show that they’re a basis. There’s a more systematic way to go about this, though. </p> <p>In <a href="https://math.stackexchange.com/a/2185251/265466">his answer</a>, dantopa suggests trying to express these two vectors in the basis that you’ve found. If that works, then they span the same space and you’re done. I suggest turning that around and instead seeing if you can express the basis that you’ve found in terms of the two given vectors. That is, solving the system $$\begin{align}a\pmatrix{1\\4\\3}+b\pmatrix{3\\4\\1}&amp;=\pmatrix{1\\2\\1} \\ c\pmatrix{1\\4\\3}+d\pmatrix{3\\4\\1}&amp;=\pmatrix{3\\5\\2}\end{align}$$ which can be written as the single equation $$\pmatrix{1&amp;3\\4&amp;4\\3&amp;1}\pmatrix{a&amp;c\\b&amp;d}=\pmatrix{1&amp;3\\2&amp;5\\1&amp;2}.$$ This can be solved by forming an augmented matrix and row-reducing, just as you’re used to doing. If the system has a solution, you’ll end up with pivots in the first two columns and the required coefficients in the other two. </p>
For the problem at hand, you’d create the augmented matrix $$\left(\begin{array}{cc|cc}1&amp;3 &amp; 1&amp;2&amp;3&amp;2&amp;1&amp;0 \\ 4&amp;4 &amp; 2&amp;4&amp;5&amp;3&amp;3&amp;1 \\ 3&amp;1 &amp; 1&amp;2&amp;2&amp;1&amp;2&amp;1 \end{array}\right)$$ and row-reduce it as usual. </p> <p>As for finding a “nice” basis for the column space, that really depends on what “nice” means. That said, I find that the basis that you get by taking the columns that correspond to pivot columns in the rref doesn’t usually produce a “useful” basis. You can, however, <em>column-reduce</em> the matrix instead, which is more or less what puhsu does in <a href="https://math.stackexchange.com/a/2185088/265466">his answer</a>. Column-reduction is like row-reduction, except that you operate on the columns of the matrix instead of its rows. If you like, you can think of it as row-reducing the transpose and then transposing the result. The non-zero columns of the matrix produced by this process are a basis for the column space. You can see why this works if you remember that the non-zero <em>rows</em> of the rref of a matrix form a basis for its <em>row</em> space, and that the row space of a matrix is equal to the column space of its transpose. </p> <p>For the matrix in this problem, we end up with $$\left(\begin{array}{rc}1&amp;0 &amp; 0&amp;\cdots&amp;0 \\ 0&amp;1 &amp; 0&amp;\cdots&amp;0 \\ -1&amp;1 &amp; 0&amp;\cdots&amp;0 \end{array}\right)$$ so a “nice” basis for the column space might be $(1,0,-1)^T$ and $(0,1,1)^T$. In general, the vectors for a basis computed this way will be sparse, i.e., they will have $r-1$ zeros as components, where $r=\operatorname{rank}A$, and another of the components of each vector will be $1$.</p>
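<p>To make the check concrete, here is a sketch in Python (an exact rational row reduction written from scratch, so the arithmetic is the same as the hand computation; the matrix below is the augmented matrix suggested above):</p>

```python
from fractions import Fraction

def rref(M):
    """Row-reduce M over exact rationals; return (rref, pivot_columns)."""
    M = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(len(M[0])):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]          # normalize the pivot row
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == len(M):
            break
    return M, pivots

# the two candidate basis vectors prepended to the columns of A
aug = [[1, 3, 1, 2, 3, 2, 1, 0],
       [4, 4, 2, 4, 5, 3, 3, 1],
       [3, 1, 1, 2, 2, 1, 2, 1]]
R, pivots = rref(aug)
print(pivots)  # pivot columns; [0, 1] means pivots only in the prepended vectors
print(R[2])    # a zero row confirms the rank is 2
```

<p>The pivots land in the first two columns only and the last row vanishes, which is precisely the certificate that the two prepended vectors form a basis for the column space. As a bonus, the third column of the result reads off $(1,2,1)^T=\frac14 v_1+\frac14 v_2$.</p>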
3,836,059
<p>The following question is a last year's Statistics exam question I tried to solve (without any luck). Any help would be appreciated. Thanks in advance.</p> <p>An Atomic Energy Agency is worried that a particular nuclear plant has leaked radio-active material. They do <span class="math-container">$5$</span> independent Geiger counter measurements in the direct neighbourhood of the reactor. They find the following measurements (per unit time):</p> <p>observation i: 1 2 3 4 5</p> <p>count <span class="math-container">$x_i$</span> : 1 2 6 2 7</p> <p>(I did not know how to implement this into a tabular)</p> <p>The natural background radiation has an average of <span class="math-container">$λ = 2$</span> (per unit time). The agency would only be worried if the radiation rate were on the order of <span class="math-container">$λ = 5$</span>.</p> <p>They therefore decide to test: <span class="math-container">$H_0 : λ ≤ 2$</span> versus : <span class="math-container">$H_1 : λ &gt; 2$</span></p> <p>They want to devise the optimal test to see whether there is any reason for alarm. 
Assuming that the data are realizations of a sample from a Poisson distribution:</p> <p><span class="math-container">$X_1, ..., X_5 ∼ POI(λ)$</span></p> <p>with density: <span class="math-container">$f(x) = e^{-λ}\frac{λ^{x}}{x!}$</span></p> <p>I have two questions I need some help with:</p> <ol> <li><p>Determine a sufficient statistic for the Poisson sample and show that it has a monotone likelihood ratio.</p> </li> <li><p>Derive the uniformly most powerful test of level <span class="math-container">$α = 0.0487$</span> for the test problem.</p> </li> </ol> <p>Because we have a Poisson distribution, I know that we can use: <span class="math-container">$\sum_{i = 1}^{5}X_i \sim Poi(5λ)$</span></p> <p>For the first question, my attempt:</p> <p><span class="math-container">$p(x_1,...,x_5|λ) = \prod_{i = 1}^{5} e^{-λ}\frac{λ^{x_i}}{x_i!} = e^{-5λ}\frac{λ^{x_1 + x_2 + x_3 + x_4 + x_5}}{x_1!x_2!x_3!x_4!x_5!} = h(x_1 +...+ x_5|λ) * g(x_1,x_2,x_3,x_4,x_5) $</span></p> <p><span class="math-container">$h(x_1 +...+ x_5|λ) = e^{-5λ}λ^{x_1 + x_2 + x_3 + x_4 + x_5} $</span></p> <p><span class="math-container">$g(x_1,x_2,x_3,x_4,x_5) = \frac{1}{x_1!x_2!x_3!x_4!x_5!}$</span></p> <p>It follows by the factorization theorem that <span class="math-container">$T(X_1, X_2, X_3,X_4,X_5) = X_1+X_2+X_3+X_4+X_5$</span> is a sufficient statistic.</p> <p>Not sure how to construct a proof to show it has a monotone likelihood ratio.</p>
BruceET
221,800
<p>I will show the test and its result, leaving it to you to justify that it is a LR test based on the sufficient statistic.</p> <p>The sum <span class="math-container">$T$</span> of five readings is 18. Under <span class="math-container">$H_0: \lambda_T = 5(2) = 10,$</span> one has <span class="math-container">$P(T \ge 16) = 0.0487.$</span> So the critical value for a right-tailed test at the 5% level (or below) is <span class="math-container">$c=16,$</span> and <span class="math-container">$H_0$</span> is rejected. The P-value is</p> <p><span class="math-container">$$P(T \ge 18\,|\,\lambda_T=10) = 1 - P(T \le 17\,|\,\lambda_T = 10) = 0.014.$$</span></p> <p>Computations in R, where <code>ppois</code> denotes Poisson CDF are shown below. For such small <span class="math-container">$\lambda$</span> it would not be appropriate to use a normal approximation.</p> <pre><code>1 - ppois(15,10) [1] 0.0487404 1 - ppois(17,10) [1] 0.01427761 </code></pre>
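<p>The same numbers can be reproduced without R from the Poisson pmf alone (a sketch in Python; the cutoffs 16 and 18 are the ones used above):</p>

```python
from math import exp, factorial

def poisson_sf(n, lam):
    """P(T >= n) for T ~ Poisson(lam), computed from the pmf directly."""
    return 1 - sum(exp(-lam) * lam**k / factorial(k) for k in range(n))

alpha = poisson_sf(16, 10)  # size of the test with critical value c = 16
p_val = poisson_sf(18, 10)  # P-value of the observed T = 18
print(alpha, p_val)         # ≈ 0.0487404 and ≈ 0.0142776, matching the R output
```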
287,043
<p>Consider the problem of finding the limit of the following diagram:</p> <p>$$ \require{AMScd} \begin{CD} &amp; &amp; &amp; &amp; E \\ &amp; &amp; &amp; &amp; @VVV \\ &amp;&amp; C @&gt;&gt;&gt; D \\ &amp; &amp; @VVV \\A @&gt;&gt;&gt; B \end{CD} $$</p> <p>The abstract definition of the limit involves an adjunction related to collapsing the entire index category to a point. However, one could break this operation into two stages: first collapsing the upper three objects to a point reduces it to</p> <p>$$ \require{AMScd} \begin{CD} &amp; &amp; C \times_D E \\ &amp; &amp; @VVV \\A @&gt;&gt;&gt; B \end{CD} $$</p> <p>and then we finish computing the limit as $A \times_B (C \times_D E)$.</p> <p>This is a particularly convenient thing, since it implies a way to work locally with more complicated diagrams where you ultimately want a limit &mdash; i.e. take limits or perform other modifications to smaller pieces of the diagram while leaving the rest unchanged.</p> <hr> <p>However, not every variation works out so nicely. 
If we try the same thing but instead collapse the middle three objects to a point, the intermediate diagram becomes</p> <p>$$ \require{AMScd} \begin{CD} &amp; &amp; C \times_D E \\ &amp; &amp; @VVV \\A \times_B C@&gt;&gt;&gt; C \end{CD} $$</p> <p>So, trying to perform <em>this</em> operation isn't local at all; it modifies the value of the diagram at the other two vertices.</p> <p>To clarify what I mean, this diagram together with the appropriate "cone" is (I believe) universal among all diagrams with "cones" of the form</p> <p>$$ \require{AMScd} \begin{CD} &amp; &amp; \bullet &amp;\to&amp; E \\ &amp; &amp; @VVV @VVV \\\bullet @&gt;&gt;&gt; \bullet &amp; \to &amp; D \\ \downarrow &amp; &amp; \downarrow &amp; \searrow @AAA \\ A @&gt;&gt;&gt; B @&lt;&lt;&lt; C \end{CD} $$</p> <hr> <p>It seems clear what the <em>abstract</em> theory behind this sort of calculation should be; just factor the usual adjunction into a sequence of adjunctions.</p> <p>But my interest in such things is very much not in the <em>abstract</em> &mdash; these are the sorts of operations one would like to have as a <em>practical</em> calculus of diagrams.</p> <p>So my question is whether such a calculus is known. Has it been worked out how to predict and recognize which sorts of operations really should be local? Or, for those operations that are not local, how to easily work out how the rest of the diagram gets modified?</p> <p>(and the bonus question: how much of this carries over to <em>homotopy</em> limits?)</p>
Mike Shulman
49
<p>This sort of calculus is central to the abstract study of homotopy limits via <a href="https://ncatlab.org/nlab/show/derivator" rel="noreferrer">derivators</a>. See <a href="https://arxiv.org/abs/1112.3840" rel="noreferrer">this paper</a> and <a href="https://arxiv.org/abs/1306.2072" rel="noreferrer">this one</a> for some examples of "detection lemmas" that decompose limits using Kan extensions in various ways; they all follow from the calculus of homotopy <a href="https://golem.ph.utexas.edu/category/2010/06/exact_squares.html" rel="noreferrer">exact squares</a>.</p>
85,814
<p>how to solve $\pm y \equiv 2x+1 \pmod {13}$ with Chinese remainder theorem or iterative method?</p> <p>It comes from solving $x^2+x+1 \equiv 0 \pmod {13}$ (* ) and background is following:</p> <blockquote> <p>13 is prime. (* ) holds under Euclidean lemma if and only if $4(x^2+x+1) \equiv \pmod {13}$ or if and only if $(2x+1)^2 \equiv -3 \pmod {13}$. So if $p=13$, so by Euler's criterion $[ \frac{-3}{13} ] \equiv (-3)^{\frac{13-1}{2}} = (-3)^6 = 9^3 \equiv (-4)^3 =-64 \equiv 1 \pmod{13} $. Hence equation $y^2 \equiv -3 \pmod{13}$ has two incongruent solution( lemma 4.1.3) $\pm y$ so solutions of the equations $\pm y \equiv 2x+1 \pmod{13}$ are solutions of the equation (* ) So my most important question is how you change equation $\pm y \equiv 2x+1 \pmod{13}$ to the form $ax\equiv b \pmod{13} $ in other words to the form where you can use either Chinese remainder theorem or iterative method to solve $\pm y \equiv 2x+1 \pmod{13}$ and finally (* )? Finally just because of curiosity. Is $[\frac{-3}{7}]\equiv (-3)^3 = -27 \equiv -1 \pmod{7}$? So is mod(-27,7)=1 or -1?</p> </blockquote>
Phira
9,325
<p>Either $\pm y-1$ is divisible by $2$, in which case you divide it by $2$ directly, or $\pm y-1$ is odd, in which case $\pm y+12$ is divisible by $2$ (adding $13\equiv 0 \pmod{13}$ changes the parity), so you divide that by $2$ instead. In both cases you obtain $x$ from $2x\equiv \pm y-1 \pmod{13}$.</p>
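<p>Concretely, here is a quick check in Python (a sketch, brute-forcing the square roots of $-3$ and then carrying out the division by $2$, which mod $13$ is multiplication by $7$):</p>

```python
p = 13
ys = [y for y in range(p) if (y * y + 3) % p == 0]  # square roots of -3 mod 13
inv2 = pow(2, p - 2, p)                             # inverse of 2 mod 13 (Fermat), = 7
xs = sorted({inv2 * (y - 1) % p for y in ys})       # x = (±y - 1)/2 mod 13
print(ys, xs)                                       # [6, 7] [3, 9]
assert all((x * x + x + 1) % p == 0 for x in xs)    # both solve x² + x + 1 ≡ 0 (mod 13)
```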
3,840,643
<p>Assume that given three predicates are presented below:</p> <p><span class="math-container">$H(x)$</span>: <span class="math-container">$x$</span> is a horse</p> <p><span class="math-container">$A(x)$</span>: <span class="math-container">$x$</span> is an animal</p> <p><span class="math-container">$T(x,y)$</span>: <span class="math-container">$x$</span> is a tail of <span class="math-container">$y$</span></p> <p>Then, translate the following inference into an inference using predicate logic expressions and prove whether inference is valid or not (for instance, using natural deduction):</p> <p>Horses are animals.</p> <hr /> <p>Horses' tails are tails of animals.</p> <p>My thoughts: I am quite good at translating predicate logic expressions, but here I struggled to come up with formula for Horses' tails. My initial idea was to consider similar sentence such as &quot;w is a tail of a horse&quot; to form required inference, but it was not successful. Would be welcomed to hear your ideas about this task.</p>
lemontree
344,246
<p>Hints:</p> <p>&quot;<span class="math-container">$x$</span> is a <span class="math-container">$P$</span>'s tail&quot; means that <span class="math-container">$x$</span> is a tail of <span class="math-container">$y$</span> and <span class="math-container">$y$</span> is a <span class="math-container">$P$</span>.</p> <p>&quot;Horses' tails are tails of animals&quot; means that for all tails <span class="math-container">$x$</span> and tail-bearers <span class="math-container">$y$</span>, the tail being a horse's tail implies the tail being an animal's tail (where for &quot;being a <span class="math-container">$P$</span>'s tail&quot; insert the above definition).</p> <p>With the appropriate formalization of this paraphrase, it is possible to find a formal proof of the inference.</p>
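<p>Spelling the hints out (this fills in the exercise, so skip it if you want to work out the formalization yourself), one reading of the premise and conclusion is:</p>

```latex
% Premise: horses are animals
\forall x\,\bigl(H(x) \to A(x)\bigr)

% Conclusion: horses' tails are tails of animals
\forall x\,\Bigl(\exists y\,\bigl(T(x,y) \land H(y)\bigr)
        \to \exists z\,\bigl(T(x,z) \land A(z)\bigr)\Bigr)
```

<p>The natural deduction proof is then short: assume $T(x,y)\land H(y)$ for some $y$, obtain $A(y)$ from the premise, and witness the conclusion with $z:=y$.</p>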
1,785,414
<p>I am trying to find a closed form for the integral $$I=\int_{\frac{\pi}{4}}^{\frac{3\pi}{4}}\frac{\lfloor|\tan x|\rfloor}{|\tan x|}dx$$ So far, my reasoning is thus: write, by symmetry through $x=\pi/2$, $$I=2\sum_{n=1}^{\infty}n\int_{\arctan n}^{\arctan (n+1)}\frac{dx}{|\tan x|}=2\sum_{n=1}^{\infty}n\ln\frac{\sin\arctan(n+1)}{\sin\arctan n}$$ Using $\sin{\arctan {x}}=\frac{x}{\sqrt{1+x^{2}}}$, we get: $$I=2\sum_{n=1}^{\infty}n\ln(\frac{(n+1)\sqrt{1+n^2}}{n\sqrt{1+(n+1)^2}})=\sum_{n=1}^{\infty}n\ln\frac{(n+1)^2(1+n^2)}{n^2(1+(n+1)^2)}=\sum_{n=1}^{\infty}n\ln(1+\frac{2n+1}{n^2(n+1)^2})$$ Expanding the logarithm into an infinite series we get $$I=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\frac{(-1)^{m+1}(2n+1)^m}{mn^{2m-1}(n+1)^{2m}}$$ Here I am a bit stuck.. Does anyone have any suggestions to go further?</p> <p>Thank you.</p> <p>EDIT: keeping in mind the nice answer below, applying summation by parts to $$I_N=2\sum_{n=1}^{N}n\ln\frac{\sin\arctan(n+1)}{\sin\arctan n}=2\sum_{n=1}^{N}n(\ln\sin\arctan(n+1)-\ln\sin\arctan n)$$ gives $$I_N=2((N+1)\ln\sin\arctan(N+1)+\frac{\ln 2}{2}-\sum_{n=1}^{N}\ln\sin\arctan(n+1))$$ hence: $$I-\ln2=-\sum_{n=2}^{\infty}\ln\frac{n^2}{1+n^2}=\sum_{n=2}^{\infty}\ln\frac{1+n^2}{n^2}=\sum_{n=2}^\infty\sum_{m=1}^\infty\frac{(-1)^{m+1}}{mn^{2m}}= \sum_{m=1}^\infty\frac{(-1)^{m+1}}{m}\sum_{n=2}^\infty n^{-2m}=\sum_{m=1}^\infty\frac{(-1)^{m+1}(\zeta(2m)-1)}{m}$$<br> Is this valid and helpful? </p> <p>EDIT 2: Coming back to $$\sum_{n=2}^{\infty}\ln(1+\frac{1}{n^2})=\ln(\prod_{n=2}^{\infty}(1+\frac{1}{n^2}))=\ln(\prod_{n=2}^{\infty}(1-\frac{i^2}{n^2}))=\ln(\prod_{n=1}^{\infty}(1-\frac{i^2}{n^2}))-\ln2$$<br> $$=\ln(\frac{\sin(i\pi)}{i\pi})-\ln2=\ln\frac{\sinh\pi}{\pi}-\ln2$$ hence $I=\ln\frac{\sinh\pi}{\pi}$ </p>
Jack D'Aurizio
44,121
<p>Maybe we are lucky. We may notice that: $$ 1+\frac{2n+1}{n^2(n+1)^2} = 1+\frac{1}{n^2}-\frac{1}{(n+1)^2} $$ and the roots of the polynomial $x^2(x+1)^2+2x+1$ are given by $$ \alpha = \frac{1}{2}\left(-1-\sqrt{2}-\sqrt{2\sqrt{2}-1}\right), $$ $$ \beta = \frac{1}{2}\left(-1-\sqrt{2}+\sqrt{2\sqrt{2}-1}\right), $$ $$ \gamma = \frac{1}{2}\left(-1+\sqrt{2}-i\sqrt{2\sqrt{2}+1}\right), $$ $$ \delta = \frac{1}{2}\left(-1+\sqrt{2}+i\sqrt{2\sqrt{2}+1}\right), $$ so: $$ \sum_{n=1}^{N}\log\left(1+\frac{2n+1}{n^2(n+1)^2}\right)=\log\prod_{n=1}^{N}\frac{(n-\alpha)(n-\beta)(n-\gamma)(n-\delta)}{n^2(n+1)^2}$$ can be written in terms of: $$ \log\prod_{n=1}^{N}\frac{n-\alpha}{n} = \log\frac{\Gamma(N+1-\alpha)}{\Gamma(N+1)\Gamma(1-\alpha)} $$ and through summation by parts the problem boils down to computing:</p> <blockquote> <p>$$ \sum_{N\geq 1}\log\frac{\Gamma(N+1-\alpha)\Gamma(N+1-\beta)\Gamma(N+1-\gamma)\Gamma(N+1-\delta)}{(N+1)^2\Gamma(N+1)^4\Gamma(1-\alpha)\Gamma(1-\beta)\Gamma(1-\gamma)\Gamma(1-\delta)}\tag{1}$$</p> </blockquote> <p>where: $$\log\Gamma(z+1)=-\gamma z+\sum_{n\geq 1}\left(\frac{z}{n}-\log\left(1+\frac{z}{n}\right)\right) $$ probably leads to a massive simplification of $(1)$, or at least the chance to write $(1)$ as a simple integral by exploiting the identities: $$ \log(m)=\int_{0}^{+\infty}\frac{e^{-x}-e^{-mx}}{x}\,dx,\qquad \log\left(1-\frac{\nu}{n}\right)=\int_{0}^{+\infty}\frac{1-e^{\nu x}}{x e^{nx}}\,dx.$$</p> <p>However, by Did's comment we simply have:</p> <blockquote> <p>$$ \log\prod_{n\geq 1}\left(1+\frac{1}{n^2}\right) = \color{red}{\log\frac{\sinh \pi}{\pi}} $$</p> </blockquote> <p>through the Weierstrass product for the $\sinh$ function.</p>
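<p>Whichever route one takes, the closed form is easy to check numerically (a sketch; $N$ and the tolerance are arbitrary choices). The summand $2n\log\frac{\sin\arctan(n+1)}{\sin\arctan n}$ from the question equals $n\left(\log\left(1+\frac1{n^2}\right)-\log\left(1+\frac1{(n+1)^2}\right)\right)$, which is numerically well behaved:</p>

```python
from math import log1p, log, sinh, pi

# partial sums of I = 2*sum_n n*ln(sin(arctan(n+1))/sin(arctan n)),
# with the summand rewritten via sin(arctan t) = t/sqrt(1+t^2)
N = 200_000
partial = sum(n * (log1p(1 / n**2) - log1p(1 / (n + 1)**2))
              for n in range(1, N + 1))
print(partial, log(sinh(pi) / pi))  # both ≈ 1.302; the gap shrinks like 1/N
```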
66,801
<p>In short, I am interested to know of the various approaches one could take to learn modern harmonic analysis in depth. However, the question deserves additional details. Currently, I am reading Loukas Grafakos' "Classical Fourier Analysis" (I have progressed to chapter 3). My intention is to read this book and then proceed to the second volume (by the same author) "Modern Fourier Analysis". I have also studied general analysis at the level of Walter Rudin's "Real and Complex Analysis" (first 15 chapters). In particular, if additional prerequisites are required for recommended references, it would be helpful if you could state them.</p> <p>My request is to know how one should proceed after reading these two volumes and whether there are additional sources that one could use that are helpful to get a deeper understanding of the subject. Also, it would be nice to hear suggestions of some important topics in the subject of harmonic analysis that are current interests of research and references one could use to better understand these topics.</p> <p>However, I understand that as one gets deeper into a subject such as harmonic analysis, one would need to understand several related areas in greater depth such as functional analysis, PDE's and several complex variables. Therefore, suggestions of how one can incorporate these subjects into one's learning of harmonic analysis are welcome. (Of course, since this is mainly a request for a roadmap in harmonic analysis, it might be better to keep any recommendations of references in these subjects at least a little related to harmonic analysis.)</p> <p>In particular, I am interested in various connections between PDE's and harmonic analysis and functional analysis and harmonic analysis. It would be nice to know about references that discuss these connections. </p> <p>Thank you very much!</p> <p><strong>Additional Details</strong>: Thank you for suggesting Stein's books on harmonic analysis! 
However, I am not sure how one should read these books. For example, there seems to be overlap between Grafakos and Stein's books but Stein's "Harmonic Analysis" seems very much like a research monograph and although it is, needless to say, an excellent book, I am not very sure what prerequisites one must have to tackle it. In contrast, the other two books by Stein are more elementary but it would be nice to know of the sort of material that can be found in these two books but that cannot be found in Grafakos. </p>
Anil P
15,700
<p>As a primer, you can also look at the lecture notes on topological groups by Higgins (London Mathematical Society); they are very easy to read.</p>
2,704,955
<p>In my test on complex analysis I encountered following problem:</p> <blockquote> <p>Find $\oint\limits_{|z-\frac{1}{3}|=3} z \text{Im}(z)\text{d}z$</p> </blockquote> <p>So first I observed that function $z\text{Im}(z)$ is not holomorphic at least on real axis. Therefore we have to intgrate using parametrization.</p> <p>First, let's change variable $w = z - \frac{1}{3}$. So we got $\oint\limits_{|w|=3} (w+\frac{1}{3}) \text{Im}(w+\frac{1}{3})\text{d}w = \oint\limits_{|w|=3} (w+\frac{1}{3}) \text{Im}(w)\text{d}w = \frac{1}{2i}\oint\limits_{|w|=3} (w+\frac{1}{3}) (w-\bar w)\text{d}w$.</p> <p>Then by letting $w=3e^{i \phi}$ we transform integral to the form $\frac{1}{2}\int\limits_{0}^{2\pi}(3e^{i \phi}+\frac{1}{3})(3e^{i \phi}-3e^{-i \phi})ie^{i \phi}\text{d}\phi = -\frac{1}{2}\int\limits_{0}^{2\pi}\text{d}\phi=-\pi$.</p> <p>Is my reasoning correct? I don't quite sure about change of variable I made since function is not holomorphic at real axis. Is there any other way how this integral can be evaluated? Thanks! </p>
Mark Viola
218,419
<p>Note that since $\text{Im}(z)=\frac1{2i}(z-\bar z)$, we can write </p> <p>$$z\text{Im}(z)=\frac1{2i}(z^2-|z|^2)$$</p> <p>Since $z^2$ is analytic, we have</p> <p>$$\begin{align} \oint_{|z-\frac13 |=3}z\text{Im}(z)\,dz&amp;=\frac i2\oint_{|z-\frac13 |=3}|z|^2\,dz\\\\ &amp;=-\frac {3}2 \int_0^{2\pi} \left|\frac13 +3e^{i\phi}\right|^2 e^{i\phi}\,d\phi\\\\ &amp;=-\frac {3}2 \int_0^{2\pi} \left(\frac{82}9 +e^{i\phi}+e^{-i\phi}\right)e^{i\phi}\,d\phi\\\\ &amp;=-3\pi \end{align}$$</p>
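<p>The value $-3\pi$ is also easy to confirm numerically (a sketch; since the integrand is a trigonometric polynomial of low degree in $\phi$, even a plain Riemann sum over the parametrization is essentially exact):</p>

```python
import cmath, math

N = 4096  # far more nodes than the degree of the trig polynomial integrand
total = 0j
for k in range(N):
    phi = 2 * math.pi * k / N
    z = 1 / 3 + 3 * cmath.exp(1j * phi)              # the contour |z - 1/3| = 3
    dz = 3j * cmath.exp(1j * phi) * (2 * math.pi / N)
    total += z * z.imag * dz
print(total)  # ≈ -3π ≈ -9.42478, with vanishing imaginary part
```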
1,470,819
<p>Let $f$ be defined (and real-valued) on $[a,b]$. For any $x\in [a,b]$ form a quotient $$\phi(t)=\dfrac{f(t)-f(x)}{t-x} \quad (a&lt;t&lt;b, t\neq x),$$ and define $$f'(x)=\lim \limits_{t\to x}\phi(t),$$ provided this limit exists in accordance with Defintion 4.1. </p> <p>I have one question. Why Rudin considers $t\in (a,b)$? What would be if $t\in [a,b]?$</p>
Fabrice NEYRET
277,841
<p>In the x-semilog plot, the area under the curve is $A = \int e^{x} f(e^{x})\,dx$. The variable change $y = e^{x}$ yields $dx = dy/y$, so $A = \int \frac{y}{y}\,f(y)\,dy = \int f(y)\,dy$.</p>
2,505,171
<p>How many numbers are there if you do not allow leading $0$'s? </p> <p>In how many of the numbers in each case is no digit $j$ in the $j$th place?</p> <p>If leading $0$'s are allowed? </p> <p>If they are not allowed? </p> <p>I know how to answer this if the numbers $0$ through $9$ can be repeated, but I am getting hung up on the "exactly one" part.</p>
Leonhard Euler
481,442
<p>10! (10 factorial)</p> <p>Choose any of the 10 numbers (maybe a 6). Then you have 9 numbers to choose for the next digit, and so on.</p> <p>When you get rid of the leading 0, simply take away the 9! numbers that had a leading 0. </p> <p>i.e. 10! - 9!</p>
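<p>Both counts are easy to verify mechanically (a sketch in Python; the brute-force check is done on the smaller alphabet $\{0,1,2,3\}$ to keep the enumeration tiny):</p>

```python
from itertools import permutations
from math import factorial

# closed form from above: 10! pandigital strings, minus the 9! with a leading 0
print(factorial(10) - factorial(9))  # 3265920

# sanity check of the same reasoning on the alphabet {0,1,2,3}
count = sum(1 for p in permutations("0123") if p[0] != "0")
assert count == factorial(4) - factorial(3)  # 24 - 6 = 18
```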
1,593,679
<p>While proving that $$\int^{\infty}_0 \frac{\sin x}xdx=\frac{\pi}{2}$$ I saw the Laplace Transform proof. <br> It used that $$\cal L\left\{\frac{\sin t}{t}\right\}=\int^\infty_0 \cal L\left\{\sin(t)\right\}d\sigma$$ So to understand it, I tried: $$\cal L\left\{\frac{\sin t}{t}\right\}=\int^\infty_0e^{-st}\frac{\sin t}{t}dt=\int^\infty_0\frac1t\cal L\left\{\sin t\right\}dt$$ But I cannot see how that $\sigma$ emerged and $t^{-1}$ vanished? Also, how do we know that using the Laplace transform, we would get an integral that is equal to the original one ($\int^\infty_0\frac{\sin x}{x}dx$)?</p>
Fabian
7,266
<p>Your formula is a (too) short notation, suppressing the variable of the Laplace transform. It should read $$\mathcal{L}\left\{\frac{\sin t}{t} \right\} (0) = \int_0^\infty \mathcal{L}\{ \sin t\}(\sigma) \,d\sigma.$$</p> <p>This follows from the rule `<a href="https://en.wikipedia.org/wiki/Laplace_transform#Properties_and_theorems" rel="nofollow">Frequency-domain integration</a>'. A proof of this is rather straightforward. If you have troubles, I can provide some more help.</p>
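<p>The frequency-domain integration rule can be sanity-checked numerically with nothing more than $\mathcal{L}\{\sin t\}(\sigma)=\frac{1}{\sigma^2+1}$ (a sketch; the cutoff $S$ and the step count are arbitrary choices):</p>

```python
from math import atan, pi

# ∫₀^S L{sin t}(σ) dσ = ∫₀^S dσ/(1+σ²) = arctan(S) → π/2 as S → ∞,
# which is the value of ∫₀^∞ (sin t)/t dt
S, steps = 100.0, 200_000
h = S / steps
approx = sum(h / (1 + ((k + 0.5) * h) ** 2) for k in range(steps))  # midpoint rule
print(approx, atan(S), pi / 2)
```

<p>The midpoint sum reproduces $\arctan S$ to high accuracy, and $\arctan S$ approaches $\pi/2$ as $S$ grows.</p>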
1,652,747
<p>Ok, so I think I'm getting the hang of this. Is this more or less on the right track?</p> <p>$$e^{z-2}=-ie^2$$ $$e^ze^{-2}=-ie^2$$ $$e^z=-ie^4$$ $$\ln(e^z)=\ln(-ie^4)$$ $$z=\ln|-i|+iarg(-i)+2\pi ik+4$$ $$z=\frac{i\pi}{2}-\frac{i\pi}{2}+4+2\pi ik$$ $$z=4+2\pi ik$$</p>
Kerr
275,679
<p>Note that $e^{z-2}=-ie^2=e^{2-\frac{i \pi}{2}+2\pi ki}$, so $$z-2=2-\frac{i \pi}{2}+2\pi ki$$ $$z=4+i(2\pi k-\frac{ \pi}{2}).$$ In your solution the last two rows seem to be wrong, since $\arg(-i)=-\pi/2$.</p>
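<p>A quick machine check of the solution family (a sketch in Python):</p>

```python
import cmath, math

# the claimed solutions z = 4 + i(2πk − π/2), checked for a few k
for k in (-1, 0, 1):
    z = 4 + 1j * (2 * math.pi * k - math.pi / 2)
    assert abs(cmath.exp(z - 2) - (-1j * math.e ** 2)) < 1e-12
print("all checked values satisfy e^(z-2) = -i e^2")
```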
3,477,795
<p>How to use the absolute value function to translate each of the following statements into a single inequality.</p> <p>(a) <span class="math-container">$\ x ∈ (-4,10) $</span> </p> <p>(b) <span class="math-container">$\ x ∈ (-\infty,2] \cup[9,\infty) $</span></p> <p>I think in the first one the absolute value of <span class="math-container">$\ x$</span> should be greater than 4 and less than 10. is that correct? because the distance from <span class="math-container">$\ x$</span> to <span class="math-container">$\ 0$</span> should be between <span class="math-container">$\ 4$</span> and<span class="math-container">$\ 10$</span> in order for <span class="math-container">$\ x$</span> to belong in this interval.</p>
hamam_Abdallah
369,188
<p><strong>hint</strong></p> <p>Let <span class="math-container">$\epsilon&gt;0$</span> be given and consider the partition <span class="math-container">$\sigma$</span> defined by <span class="math-container">$$\Bigl(1,2-\frac{\epsilon}{7},2+\frac{\epsilon}{7} ,4-\frac{\epsilon}{7}, 4+\frac{\epsilon}{7},7\Bigr)$$</span> then</p> <p><span class="math-container">$$U(f,\sigma)-L(f,\sigma)=$$</span> <span class="math-container">$$(3-2)\frac{2\epsilon}{7}+(3-1)\frac{2\epsilon}{7}=\frac{6\epsilon}{7}&lt;\epsilon$$</span></p>
3,477,795
<p>How to use the absolute value function to translate each of the following statements into a single inequality.</p> <p>(a) <span class="math-container">$\ x ∈ (-4,10) $</span> </p> <p>(b) <span class="math-container">$\ x ∈ (-\infty,2] \cup[9,\infty) $</span></p> <p>I think in the first one the absolute value of <span class="math-container">$\ x$</span> should be greater than 4 and less than 10. is that correct? because the distance from <span class="math-container">$\ x$</span> to <span class="math-container">$\ 0$</span> should be between <span class="math-container">$\ 4$</span> and<span class="math-container">$\ 10$</span> in order for <span class="math-container">$\ x$</span> to belong in this interval.</p>
CyclotomicField
464,974
<p>Since <span class="math-container">$\int_a^b f(x) dx = \int_a^cf(x) dx + \int_c^b f(x)dx$</span> you can split this integral into three parts, as <span class="math-container">$\int_1^7 f(x) dx = \int_1^2 f(x) dx + \int_2^4 f(x)dx + \int_4^7 f(x) dx$</span>. Since <span class="math-container">$f(x)$</span> is constant on each of these intervals they are easily integrated.</p>
192,784
<p>First let me try to describe in more details below the approach of "reordering" digits of Pi, which is used in OEIS A096566</p> <p><a href="https://oeis.org/A096566" rel="nofollow noreferrer">https://oeis.org/A096566</a></p> <p>and what I have done analyzing it so far.</p> <p>I am looking at first 620 "reordered" digits (which is more than currently listed in A096566) of Pi (in decimal representation). The reordering is done in such way that, while looking at consecutive pipeline stream of digits, all "range" of 10 different decimal digits:</p> <p>1,2,...,7,8,9,0</p> <p>in their first occurrence are getting "collected", while all coming "repeating" digits of each kind (1,2,...,7,8,9,0 ) are getting "pushed back" to be written later ... .</p> <p>Then the second "next" unique ten digits are getting collected, that is written, first looking for them in already "pushed back" group and then looking for coming-in (again with all "repeating" digits getting "pushed back") and so on until entire (second) set of all unique digits (1,2,...,7,8,9,0 ) is completely collected (written).</p> <p>In total - I got 62 such sets (covering 62*10 decimal digits) - see all those 62 sets listed below - where each {....} line represents the set of such 10 digits collection.</p> <pre><code> {3,1,4,5,9,2,6,8,7,0} {1,5,3,9,2,8,4,6,7,0} {5,9,3,2,6,4,8,1,7,0} {3,2,9,5,8,4,1,6,7,0} {3,2,8,9,5,1,7,4,6,0} {3,9,5,8,2,4,7,1,6,0} {3,9,4,5,2,8,6,0,1,7} {3,9,4,2,8,6,5,1,0,7} {3,9,2,8,4,6,1,0,5,7} {9,3,8,2,4,6,1,0,5,7} {9,8,3,2,4,6,0,5,1,7} {9,8,3,2,6,4,0,5,1,7} {9,8,2,3,4,6,0,5,1,7} {9,8,2,3,4,5,0,1,6,7} {8,2,9,3,4,1,5,0,6,7} {8,9,2,3,4,1,0,5,6,7} {8,2,3,9,4,0,1,5,6,7} {8,2,4,9,3,1,0,5,6,7} {8,2,1,5,9,4,3,0,6,7} {8,2,4,9,5,3,1,0,6,7} {8,2,4,9,1,5,3,6,0,7} {8,2,4,9,5,3,1,6,0,7} {8,2,9,4,5,3,1,6,0,7} {2,8,4,9,3,5,1,6,0,7} {8,2,9,4,3,1,5,6,0,7} {8,4,2,9,1,5,3,6,0,7} {8,4,2,9,3,6,1,5,0,7} {8,4,2,9,3,6,1,5,0,7} {8,2,4,6,3,9,1,0,5,7} {8,4,2,3,6,9,1,5,0,7} {8,4,2,3,6,1,5,9,0,7} {8,4,2,3,6,1,9,5,0,7} 
{8,4,2,6,3,1,9,5,0,7} {4,8,2,6,9,1,3,5,0,7} {4,2,8,6,1,3,9,0,5,7} {4,2,8,6,9,3,1,0,5,7} {4,8,2,6,1,3,0,5,9,7} {4,8,2,6,3,0,1,5,9,7} {4,8,2,3,6,1,5,0,9,7} {8,2,4,6,3,1,0,5,9,7} {2,4,8,6,1,3,0,5,9,7} {2,4,8,1,6,3,5,9,0,7} {8,2,4,1,3,6,5,9,0,7} {2,8,4,1,5,3,6,9,0,7} {4,2,1,8,3,6,9,5,0,7} {4,2,1,8,3,5,6,9,0,7} {4,1,2,3,8,9,6,5,0,7} {1,4,8,2,3,9,6,5,0,7} {1,4,2,3,8,5,9,6,0,7} {1,4,8,5,2,9,3,6,0,7} {1,4,2,8,3,9,6,5,0,7} {1,4,2,8,9,3,6,5,0,7} {1,2,8,4,9,3,6,0,5,7} {1,2,8,3,4,6,9,5,0,7} {1,3,2,4,8,9,6,5,0,7} {1,4,3,2,9,8,6,0,5,7} {1,3,4,2,9,6,8,0,5,7} {1,4,3,2,9,6,8,5,0,7} {1,4,3,2,9,6,8,5,0,7} {1,4,3,2,9,6,8,5,7,0} {1,3,2,9,4,6,8,7,0,5} {1,2,3,4,6,9,8,7,0,5} </code></pre> <p>Here are results of arithmetic averages which I got for each (out of ten) positions between those 62 sets:</p> <p>first position digits average 4.694538</p> <p>second position digits average 4.306452</p> <p>third position digits average 4</p> <p>forth position digits average 5.048387097</p> <p>fifth position digits average 4.951612903</p> <p>sixth position digits average 4.548387097</p> <p>seventh position digits average 4.161290323</p> <p>eights position digits average 4.274193548</p> <hr> <p>ninths position digits average 2.870967742</p> <p>tenth position digits average 6.14516129</p> <hr> <p>Above results show that for the first eight positions, digits in each position were changing from the set to set progressively more and more randomly - thus averages for those positions are getting closer to 4.5 .</p> <p>However for positions 9 and 10 - such randomization was not achieved yet within first 62 sets ...(though looking outside of presented so far 62 sets data and relying on the known observation that eventually the average digit value in the decimal expansion of Pi comes practically to 4.5), I could "speculate" in advance that it will come to 4.5 eventually and for positions 9 and 10 too ... 
- but it looks like positions 9 and 10 are "randomizing" at a much slower rate than the other 8 positions ... and that might be (or might not be) interesting. </p> <p>I am not sure how many more sets (beyond the 62 which I presented here) are needed to get the arithmetic averages for the "ninth" and "tenth" digit positions (within the set) to reach the same proximity to 4.5 as is already achieved by the first 8 positions within those 62 sets ... .</p> <p>It is also notable that if positions 9 and 10 are averaged together, the average between those two, within the 62 sets available so far, is close to 4.5.</p> <h1>Conclusion and questions</h1> <p>It appears that the first 62 sets listed above show a slight hint of retaining some loose organizational order between predecessor sets and successor sets.</p> <p>But I presume that further "down the road", beyond the first 62 sets, one will see that the level of randomness in the sets' digit composition order gradually increases and adjacent sets become more and more disconnected from each other.</p> <p>What I am trying to say is that, in the case of the digits of Pi (after the reordering discussed above is applied), there appears to exist some sort of transition from initial order (within the first 620 digits) to total randomness ...</p> <p>I used the Maximal information-based nonparametric exploration statistical analysis program (MINE) by David Reshef et al.</p> <p>Applied (by me) "pairwise" to the first 62 terms, MINE shows high values (up to 1) of the maximal information coefficient (MIC), which is a measure of two-variable dependence designed specifically for rapid exploration of many-dimensional data sets.</p> <p>The links (thanks to LVK for the upload) to the Excel spreadsheet, turned into a comma-separated value file (.csv), with the data (62 sets of reordered Pi digits) and the MINE-generated output .csv file, which was produced (at my home Windows-based PC) upon executing</p> <p>java -jar MINE.jar
PiReordered.csv -allPairs cv=1.0</p> <p>correspondingly are </p> <p><a href="https://dl.dropbox.com/u/29863189/PiReordered.csv" rel="nofollow noreferrer">https://dl.dropbox.com/u/29863189/PiReordered.csv</a></p> <p>and</p> <p><a href="https://dl.dropbox.com/u/29863189/PiReordered.csv%2Callpairs%2Ccv%3D1.0%2CB%3Dn%5E0.6%2CResults.csv" rel="nofollow noreferrer">https://dl.dropbox.com/u/29863189/PiReordered.csv%2Callpairs%2Ccv%3D1.0%2CB%3Dn%5E0.6%2CResults.csv</a></p> <p>Does such concept of transition from order to randomness exist ?</p> <p>Could this above observation be statistically confirmed or disproved ?</p> <p>If "yes" - what specific tools / methods of statistical analysis could be applied ?</p> <p>I also received suggestion that in order to test whether this discussed above feature is only characteristic to (some initial digits of) Pi, the same reordering should be applied to the some significant number of randomly generated very long strings of decimal digits - to see if there the same pattern behavior will appear or not - is it useful ?</p> <p>Thanks,</p> <p>Best Regards,</p> <p>Alexander R. Povolotsky</p> <p>PS - in response to LVK's answer and his comment, which I am quotting here "The nth line of your table consists of the digits written in the order of their nth appearance in π. This could be in principle read off the graph by crossing it with the horizontal line y=n and reading off the intersection points from left to right. (In practice this is not convenient due to the low resolution and overlap between the curves.) ..... I don't think there is any statistical method for analysis of the data organized in this way. You'll probably need to devise one yourself. ..... – LVK Sep 10 at 16:02"</p> <p>LVK - thanks for your thoughts and valuable contribution ! I think though that the frequency chart somewhat hides away the positional dependency between the unique digits of the {1,2,...,9,0} set. 
The table presentation, with the columns representing the particular combination of (all) digits (from 1 to 0 in the above-mentioned set) for each consecutive ten-digit collection, is, in my opinion, more revealing in that regard.</p> <p>My questions still stand:</p> <p>1) does some "other" (I would call it "transitional") non-randomness exist for the first few hundred "initial" digits of Pi (beyond the order imposed by the re-arrangement itself), which is revealed by this re-arrangement?</p> <p>2) what (other than MINE) quantitative statistical methods/tools could be used in the analysis of this situation?</p> <p>PPS I am trying to rework the first 3 columns of the already posted MINE results csv file (where the first two columns are "textually enumerated" names of the 10-digit sets, for example "18thSet", and the 3rd column is the MIC value for the two sets identified in the first two columns of the same row) into a three (3)-dimensional "surface" chart, with each column, mentioned above, correspondingly supplying the x, y and z values ...</p> <p>Doing it manually via conversion into a table -- by keeping the first column intact, transposing the second column into the top-most row and filling the table's body with the MIC values from the 3rd column -- is very laborious.</p> <p>I found a discussion at</p> <p><a href="https://stackoverflow.com/questions/7083044/mathematica-csv-to-multidimensional-charts">https://stackoverflow.com/questions/7083044/mathematica-csv-to-multidimensional-charts</a></p> <p>of how to do it with Mathematica, but I don't have it ...</p> <p>Could someone (who has Mathematica) be kind enough to do it (and post the result)?</p>
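<p>For reference, the reordering described in the question is easy to reproduce programmatically. The sketch below (Python; the function name is my own, and the hardcoded string is a prefix of the standard decimal expansion of Pi) extracts the consecutive "rounds of first appearances":</p>

```python
# Reorder the digits of Pi by rounds of first appearance: each round lists
# the ten digits 0-9 in the order they first occur after the previous
# round was completed.
PI_DIGITS = "141592653589793238462643383279502884197169399375105820974944"

def first_appearance_rounds(digits):
    rounds, current, seen = [], [], set()
    for ch in digits:
        d = int(ch)
        if d not in seen:
            seen.add(d)
            current.append(d)
            if len(seen) == 10:  # round complete: all ten digits have appeared
                rounds.append(current)
                current, seen = [], set()
    return rounds

rounds = first_appearance_rounds(PI_DIGITS)
print(rounds[0])  # order of first appearances in the first round
```

<p>Feeding in more digits of Pi produces the 62 (and further) sets discussed above, one list per round.</p>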
binn
39,264
<p>"some sort of transition from initial order (within first 620 digits) to total randomness ... Does such concept of transition from order to randomness exist ?"</p> <p>If the phenomenon is that pi digit summaries are different from your expectation when you use a small sample size but are close to your expectation when you use a large sample size, then maybe various laws of large numbers could explain it. The variance of the sample mean is smaller when the sample size increases.</p>
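<p>This point can be illustrated with a quick simulation (a Python sketch; all names are my own). Per-position averages over random permutations of the digits 0-9 drift toward 4.5 as the number of sets grows, with the spread shrinking roughly like $1/\sqrt{n}$:</p>

```python
import random

# Per-position averages over n random permutations of the digits 0..9.
# By the law of large numbers each position's average tends to 4.5,
# and the variance of the sample mean shrinks like 1/n.
def position_averages(n, rng):
    totals = [0] * 10
    for _ in range(n):
        for pos, digit in enumerate(rng.sample(range(10), 10)):
            totals[pos] += digit
    return [t / n for t in totals]

rng = random.Random(0)
small = position_averages(62, rng)       # same sample size as in the question
large = position_averages(50_000, rng)
print(max(abs(a - 4.5) for a in small))  # typically well away from 0
print(max(abs(a - 4.5) for a in large))  # much closer to 0
```

<p>So deviations of the size seen in the question's 62 sets are entirely consistent with purely random permutations.</p>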
40,500
<blockquote> <p>What are the most fundamental/useful/interesting ways in which the concepts of Brownian motion, martingales and markov chains are related?</p> </blockquote> <p>I'm a graduate student doing a crash course in probability and stochastic analysis. At the moment, the world of probability is a confusing blur, but I'm starting with a grounding in the basic theory of markov chains, martingales and Brownian motion. While I've done a fair amount of analysis, I have almost no experience in these other matters and while understanding the definitions on their own isn't too difficult, the big picture is a long way away.</p> <p>I would like to <strong>gather together results and heuristics</strong>, each of which links together two or more of Brownian motion, martingales and Markov chains in some way. Answers which <strong>relate probability to real or complex analysis</strong> would also be welcome, such as "Result X about martingales is much like the basic fact Y about sequences".</p> <p>The thread may go on to contain a Big List in which each answer is the posters' favourite as yet unspecified result of the form "This expression related to a markov chain is always a martingale because blah. It represents the intuitive idea that blah".</p> <p>Because I know little, I can't gauge the worthiness of this question very well so apologies in advance if it is deemed untenable by the MO police.</p>
Simon Lyons
9,564
<p>Levy's characterisation of Brownian motion:</p> <p>If $X$ is a continuous martingale and $X$ has quadratic variation process $[ X ]_t = t$ then $X$ is a standard Brownian motion.</p>
59,486
<p>Chern-Weil theory tells us that the integral Chern classes of a flat bundle over a compact manifold (i.e. a bundle admitting a flat connection) are all torsion. Given a compact manifold $M$ whose integral cohomology contains torsion, one can then ask which (even-dimensional) torsion classes appear as the Chern classes of flat bundles. What is known about this question? I would be interested both in statements about specific manifolds and about general (non)-realizability results.</p> <p>One specific thing that I know: if $S$ is a non-orientable surface, then there is a flat bundle $E\to S$ whose first Chern class is the generator of $H^2 (S; \mathbb{Z}) = \mathbb{Z}/2$. This shows up, for example, in papers of C.-C. Melissa Liu and Nan-Kuo Ho. As Johannes pointed out in the comments, this also shows that the fundamental class of a product of surfaces can be realized by a flat bundle.</p> <p>However, I suspect that for a product of 3 Klein bottles, not all the 4-dimensional torsion classes can be realized as second Chern classes of flat bundles. In fact, I think I know a proof of this if one restricts to unitary flat connections: the space of unitary representations has too few connected components. </p>
Ben Wieland
4,639
<p>The answer to Tom's formulation is no. It's possible that my argument falls apart if you restrict to finitely generated groups, but I doubt this is essential.</p> <p>Take a group $\Gamma$ so that $B\Gamma^+=K(Q/Z,2n-1)$, i.e., $H^k(\Gamma;Z)=H^k(K(Q/Z,2n-1);Z)$. Since $Ext(Q/Z,Z)=\hat Z$, there are lots of interesting classes in $H^{2n}(\Gamma;Z)$. If we could lift them to flat bundles over $B\Gamma$, then after applying the plus construction and profinite completion, we would have split $K(\hat Z;2n)$ off of $BU^{\hat{}}$. But the torsion homology of the Eilenberg-MacLane space cannot be a retract of the torsion-free homology of $BU$.</p> <p>I wanted to work one prime at a time, but $Ext(Q_p/Z_p,Z)=0$.</p>
2,676,200
<blockquote> <p>A hyperbola has equation $\frac{x^2}{4}-\frac{y^2}{16}=1$. Show that every other line parallel to this asymptote, $y=2x$, intersects the hyperbola exactly once.</p> </blockquote> <p><a href="https://i.stack.imgur.com/gFB6v.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gFB6v.png" alt="enter image description here"></a></p> <p>So here's the hyperbola. The blue line represents the asymptote $y = 2x$. I am not concerned with the other asymptote that has a negative gradient, $y =-2x$, although the same principles would apply. </p> <p>Now the problem can re-phrased as:</p> <p>For the line $y = 2x+c , c \ne 0$</p> <p>Prove the line intersects the hyperbola once exactly for any non-zero c.</p> <p>The orange line represents the case where $c&gt;0$.</p> <p>The black line represents the case where $c&lt;0$.</p> <p>This is geometrically intuitive, however, I am struggling to prove this algebraically.</p> <hr> <p>FIRST ATTEMPT</p> <p>First of all, substitute $y = 2x+c$ into $\frac{x^2}{4}-\frac{y^2}{16}=1$ for $y$.</p> <p>$\frac{x^2}{4}-\frac{(2x+c)^2}{16}=1$</p> <p>Multiply through by 16.</p> <p>$4x^2-(2x+c)^2=16$</p> <p>$4x^2-(4x^2+4cx+c^2)=16$</p> <p>$-4cx-c^2=16$</p> <p>$c^2 + 4cx + 16 = 0$</p> <p>Take the discriminant to determine the number of times the line intersects the hyperbola.</p> <p>$\Delta = B^2 - 4AC$ for a generic quadratic $Ax^2 + Bx + C = 0$.</p> <p>(Using upper case $A$, $B$ and $C$ as lower case $c$ is already taken.)</p> <p>Therefore,</p> <p>For $0x^2 +4cx + (c^2 + 16)=0$ where the quadratic is taken in terms of $x$.</p> <p>$\Delta = (4c)^2 -4(0)(c^2 + 16) = 16c^2$</p> <p>As $c \ne 0$,</p> <p>$16c^2 &gt; 0$</p> <p>Hence all lines parallel to $y = 2x$ must intersect the hyperbola twice.</p> <p>I've managed to disprove what I am trying to prove. Please can someone explain where I have gone wrong.</p>
cansomeonehelpmeout
413,677
<p>You are actually done at $$c^2+4cx+16=0$$</p> <p>Remember that you're solving for $x$, and you get only one such $x$, unless $c=0$.</p>
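<p>One can confirm this symbolically, e.g. with sympy (assuming it is available): the $x^2$ terms cancel, so the substituted equation is linear in $x$ and has a single root whenever $c\neq 0$.</p>

```python
import sympy as sp

# Substituting y = 2x + c into x^2/4 - y^2/16 = 1 leaves a *linear*
# equation in x (the x^2 terms cancel), hence exactly one intersection.
x, c = sp.symbols('x c', real=True)
eq = sp.expand(x**2 / 4 - (2*x + c)**2 / 16 - 1)
print(eq)                       # no x**2 term survives
sols = sp.solve(sp.Eq(eq, 0), x)
print(sols)                     # the single root x = -(c^2 + 16)/(4c)
```

<p>This also shows why the discriminant argument was the wrong tool: with $A=0$ the equation is not quadratic, so $\Delta=B^2-4AC$ says nothing about the number of intersection points.</p>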
11,629
<p>The basic logic course in school gives the impression that logic has both syntactic and semantic aspects. Recently, I have wondered whether the syntax part still plays an essential role in current studies. Below are some of my observations; I hope ideas from the community can make them more complete.</p> <p>Model theory: Even though model theory is stated in the language of logic, it can be viewed as the study of local isomorphism (see Poizat's "A course in Model Theory"). The syntax part is therefore a natural (though perhaps uncomfortable for some) way to view the theory rather than a necessity.</p> <p>Recursion theory: The object of study is the notion of computability in different contexts. If we believe in the Church-Turing Thesis, then these concepts are independent of the formalism chosen. </p> <p>Set theory: The intimate relationship between large cardinals and determinacy perhaps suggests that this is a universal phenomenon. Will this phenomenon disappear if we change the language of mathematics to, for example, category theory?</p> <p>Proof theory: I know too little to say anything.</p> <p>If the observation is true, is it justified to demand that Turing degrees and large cardinals receive the same mathematical status as, for example, prime numbers?</p>
Neel Krishnaswami
1,610
<p>I think your observation is a very good one, but this phenomenon is limited to classical logic and does not continue to hold when we move to intuitionistic or substructural logics.</p> <p>One way of understanding the role of syntax is to take the connectives of logic as explaining what counts as a legitimate proof of that proposition. So a conjunction $A \land B$ can be proven with a proof of $A$ and a proof of $B$, and an implication $A \implies B$ can be can be proven with a proof of $B$, assuming a hypothetical proof of $A$, and so on. Conversely, we also give rules explaining how to use true propositions -- e.g., from $A \land B$, we can re-derive $A$, and we can also rederive $B$. </p> <p>If you work this out formally, you get Gentzen's system of natural deduction. The natural deduction systems for good logics admit a normal form theorem for proofs. The normalization procedure also gives us an equivalence relation on proofs (two proofs are equivalent if they have the same normal form), and it so happens that for classical logic, all proofs are equivalent. (This is a small lie: we can give more refined accounts of equivalence of classical proofs which don't equate everything, but the right answer here is still not entirely settled....)</p> <p>The equivalence of proofs means that we can take the view that the meaning of a classical proposition is its truth value -- i.e., its provability -- and so algebraic models of classical logic contain all the information contained in a classical proposition. We don't need the proofs, and so syntax takes a secondary role. </p> <p>But in intuitionistic and substructural logics like linear logic, not all proofs are equated. This means that we can't take the view that all the relevant information about a proposition is contained in its truth value, and so syntax retains a more important role.</p>
46
<p>I have solved a couple of questions myself in the past, and I think some of them are interesting to the public and will most likely appear in the future. One example for this is the question how to enable antialiasing in the Linux frontend, for which there is no native support right now. My question would now be whether posting these as a new question would be appropriate, and then immediately answer it.</p>
Brett Champion
69
<p>If it's an interesting question, ask it. But give someone else a chance to answer it first -- you may learn something new.</p>
2,975,665
<p>I am asked to find the sum of the series <span class="math-container">$$\sum_{n=0}^\infty\frac{(x+1)^{n+2}}{(n+2)!}$$</span></p> <p>For some reason (that I don't understand) I can't apply the techniques for finding the sum of the series that I usually would to this one? I think the others that I have done have been geometric series so I could just find the first few terms etc... </p> <p>This one is really difficult for me to figure out. </p> <p>As far as my research into this has taken me, it has something to do with writing the series out as a power series about x=a and somehow using this to find a? My teacher told me to take a few derivatives of this? Does she mean to take the derivatives of taylor series or something like that? </p> <p>I am pretty confused as to the method I need to complete this question and online sources and class lecture information has taken me nowhere but back to the stuff I understand that applies to geometric series. What kind of series is this? How do I find its sum?</p>
Claude Leibovici
82,404
<p>What your teacher was meaning is that <span class="math-container">$$\frac d{dx} \sum_{n=0}^\infty\frac{(x+1)^{n+2}}{(n+2)!}= \sum_{n=0}^\infty\frac{(x+1)^{n+1}}{(n+1)!}\tag 1$$</span> <span class="math-container">$$\frac {d^2}{dx^2} \sum_{n=0}^\infty\frac{(x+1)^{n+2}}{(n+2)!}= \sum_{n=0}^\infty\frac{(x+1)^{n}}{(n)!}=e^{x+1}\tag 2$$</span> Integrate both sides of <span class="math-container">$(2)$</span> to get <span class="math-container">$$\int\frac {d^2}{dx^2} \sum_{n=0}^\infty\frac{(x+1)^{n+2}}{(n+2)!}=e^{x+1}+C$$</span> Make <span class="math-container">$x=-1$</span> on both sides to get <span class="math-container">$C=-1$</span>. So <span class="math-container">$$\frac d{dx} \sum_{n=0}^\infty\frac{(x+1)^{n+2}}{(n+2)!}= \sum_{n=0}^\infty\frac{(x+1)^{n+1}}{(n+1)!}=e^{x+1}-1\tag 3$$</span> Integrate both sides of <span class="math-container">$(3)$</span> to get <span class="math-container">$$\sum_{n=0}^\infty\frac{(x+1)^{n+2}}{(n+2)!}=e^{x+1}-x+D$$</span> Make <span class="math-container">$x=-1$</span> on both sides to get <span class="math-container">$D=-2$</span>.</p>
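<p>A quick numerical sanity check of the resulting closed form (a Python sketch, not part of the original derivation):</p>

```python
import math

# Check the closed form: sum_{n>=0} (x+1)^(n+2)/(n+2)! = e^(x+1) - x - 2.
def partial_sum(x, terms=60):
    return sum((x + 1) ** (n + 2) / math.factorial(n + 2) for n in range(terms))

for x in (-0.5, 0.0, 1.0, 2.5):
    print(x, partial_sum(x), math.exp(x + 1) - x - 2)
```

<p>The partial sums and the closed form agree to within floating-point precision at every test point.</p>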
628,236
<p>I am trying to give a name to this axiom in a definition: </p> <p>$(X \bullet R) \sqcup (Y \bullet S) \equiv (X \sqcup Y) \bullet (R \sqcup S)$</p> <p>(for all $X, Y, R, S$) where $\sqcup$ is the join of a lattice and $\bullet$ is some binary operation. It feels related to monotonicity/distributivity but I don't know a standard name for this. Any ideas? So far I have called it "full distributivity". I'd also like to have a name (possibly the same) for this axiom when $\sqcup$ isn't a lattice operation, just some binary operation.</p>
Community
-1
<p>You can see it is a cone. It is irreducible because its smooth part is connected and dense. Compute where it is smooth using the Jacobian.</p>
31,261
<p>Given a 3-manifold $M$, one can define the Kauffman bracket skein module $K_t(M)$ as the $C$-vector space with basis "links (including the empty link) in $M$ up to ambient isotopy," modulo the skein relations, which can be found in the second paragraph of section two of <a href="http://arxiv.org/abs/math/0402102" rel="nofollow">http://arxiv.org/abs/math/0402102</a>. (Side question - how can I draw these relations in Latex?)</p> <p>If $S$ is a surface, then $K_t(S\times [0,1])$ has an algebra structure given by stacking one link on top of another. If $S$ is a boundary component of $M$, then $K_t(M)$ is a (left) $K_t(S\times [0,1])$ module, where the left module structure is given by gluing $S\times \{1\}$ to the copy of $S$ in the boundary of $M$. In this situation, we can define a left module map $K_t(A\times [0,1]) \to K_t(M)$ which is uniquely defined by "(empty link in $S\times [0,1]$) maps to (empty link in $M$)." The "peripheral ideal" is the kernel of this module map, and is a left ideal of $K_t(S\times [0,1])$.</p> <p>The motivation for these definitions comes from knot theory - if $K$ is a knot in $S_3$, then the complement of a small tubular neighborhood of $K$ is a manifold with a torus boundary, and the algebra $K_t(T^2\times [0,1])$ and module $K_t(S^3 \setminus K)$ give information about the knot $K$.</p> <p>Now I can ask my question: Is there a manifold $M$ with a torus boundary such that the peripheral ideal is trivial? </p> <p>I've just recently started learning about knot theory, and I'm having a hard time trying to figure this out. One thing that I do know is that $M$ cannot be of the form $S^3 \setminus K$, because of propositions 7 and 8 in <a href="http://arxiv.org/abs/math/9812048" rel="nofollow">http://arxiv.org/abs/math/9812048</a>. 
I also suspect that $M$ will actually have <b>two</b> boundary components, each of which is a torus, but I don't really have a good reason for this.</p> <p>Also, I suspect this might be a hard question, so any hints about how one might approach it would be helpful.</p>
Charlie Frohman
4,304
<p>There is a gap in the proof that the peripheral ideal is nontrivial in that paper. </p> <p>Thang Le and Stavros came up with a more algebraic way of defining a closely related ideal that they could prove was nontrivial.</p> <p>I think it's a great problem. A good starting point might be to prove it for torus knots. There is a recent paper of Julien Marche that computes the Kauffman bracket skein module of all torus knots, but stops short of understanding the module structure over the skein module of the torus. You might start there.</p> <p>I am willing to conjecture that the peripheral ideal is always nontrivial for any link. In fact, Thang has recently proved a weak form of this.</p> <p>We defined the peripheral ideal to be the extension to the noncommutative torus of the kernel of the inclusion map of skein algebra of the torus into the skein module of the complement of the knot. Via an identification of the skein algebra of the torus with the symmetric part of the noncommutative torus the ideal corresponds to the ideal of the image of the $SL_2\mathbb{C}$-characters of the knot group in the characters of $\mathbb{Z}\times \mathbb{Z}$. We found a way of seeing the colored Jones polynomial of the knot as lying in the dual to the $SL_2\mathbb{C}$-characters of the knot group, and we found that the colored Jones polynomial is in the annihilator of the peripheral ideal.</p> <p>Thang and Stavros stepped back from the picture, and found a formal connection between the Jones polynomial and the noncommutative torus, and then just defined their ideal to be the annihilator of the Jones polynomial. Using formal properties of the $R$-matrix they were able to give an axiomatic proof that their ideal was nontrivial.</p> <p>The conjecture is about the relation between the formal definition of quantum invariants and their concrete realization.
The Kauffman bracket skein module of a knot complement is a deformation quantization of the unreduced scheme of the $SL_2\mathbb{C}$-characters of its fundamental group. The conjecture that the peripheral ideal is nontrivial is motivated by this idea, and by the fact that the $SL_2\mathbb{C}$-character variety of a nontrivial knot is nontrivial, meaning the $A$-ideal is nontrivial. This should mean that the peripheral ideal is nontrivial.</p> <p>The orthogonality between the peripheral ideal and the colored Jones polynomial should lead to data about the $SL_2\mathbb{C}$-character variety of the knot being expressed in the aggregate behavior of the colored Jones polynomial of the knot.</p>
2,164,946
<p>I'm working on arc length calculation and area of surface of revolution in calculus and I'm really quite stuck on the process of how to do this. Here is a particular problem that I'm struggling with:</p> <blockquote> <p>Find the surface area of the surface of revolution generated by revolving the graph $$y=x^3; \qquad 0 \leq x \leq10$$ around the $x$-axis.</p> </blockquote> <p>I've gone through the steps that I've learned to do (listed below) and the steps for the most part seem to make sense, however I keep ending up with incorrect answers. Please help! Below I listed my general process of approaching the problem.</p> <blockquote> <p>$$\begin{align} &amp;y = x^3 \\&amp;y' = 3x^2 \\&amp;(y')^2 = 9x^4\\&amp;1 + (dy/dx)^2 = 1 + 9x^4\end{align}$$</p> </blockquote> <p>Using the formula: </p> <blockquote> <p>$$2\pi y \cdot\int(1 + (dy/dx)^2)^{1/2} dx$$</p> </blockquote> <p>Here's how I set up the integral for the problem:</p> <blockquote> <p>$$2\pi x^3\cdot \int_0^{10}(1+9x^4)^{1/2} dx.$$</p> </blockquote> <p>This came out to $$2\pi x^3\cdot(6/5)\cdot(1+9(10^4))^{3/2}\cdot x^5$$ My final answer was $$2.035785969E16.$$ </p> <p>Please help me understand where I'm going wrong! </p>
Ahmed Al Dahmani
443,872
<p>You should place the $x^3$ inside the integral: it is part of the integrand, not a constant factor that can be pulled out. Review the formula and you should be able to see your mistake :)</p>
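<p>For a concrete check, here is a sketch (Python, not part of the original answer) comparing the closed form of the corrected integral $\int_0^{10} 2\pi x^3\sqrt{1+9x^4}\,dx$ against a crude midpoint quadrature:</p>

```python
import math

# Correct setup keeps x^3 inside the integrand:
#   S = integral_0^10 of 2*pi * x^3 * sqrt(1 + 9x^4) dx.
# With u = 1 + 9x^4 (du = 36 x^3 dx) this has the closed form
#   S = (pi/27) * ((1 + 9*10^4)^(3/2) - 1).
closed = math.pi / 27 * ((1 + 9 * 10**4) ** 1.5 - 1)

def integrand(x):
    return 2 * math.pi * x**3 * math.sqrt(1 + 9 * x**4)

# Midpoint rule with a fine grid as an independent check.
n = 200_000
h = 10.0 / n
numeric = sum(integrand((i + 0.5) * h) for i in range(n)) * h
print(closed, numeric)
```

<p>Both values come out near $3.14\times 10^6$, far from the figure obtained by treating $x^3$ as a constant multiplier.</p>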
1,688,184
<blockquote> <p>Prove that $2\sqrt 5$ is irrational</p> </blockquote> <p><strong>My attempt:</strong></p> <p>Suppose $$2\sqrt 5=\frac p q\quad\bigg/()^2$$ </p> <p>$$\Longrightarrow 4\cdot 5=\frac{p^2}{q^2}$$</p> <p>$$\Longrightarrow 20\cdot q^2=p^2$$</p> <p>$$\Longrightarrow q\mid p^2$$</p> <p>$$\text{gcd}(p,q)=1$$</p> <p>$$\Longrightarrow \text{gcd}(p^2,q)=1$$</p> <p>How can I procced?</p>
Roman83
309,360
<p>$$2\sqrt5 = \frac pq, \quad \gcd(p,q)=1$$ $$20q^2=p^2 \Rightarrow 5|p^2 \Rightarrow 5|p \Rightarrow 25|p^2$$ Let $p=5p_1$ $$20q^2=25p_1^2 \Rightarrow 4q^2=5p_1^2 \Rightarrow 5|q^2 \Rightarrow 5|q$$ $$\gcd (p,q)\geq 5$$ Contradiction.</p>
540,217
<p><strong>Question:</strong></p> <p>$\int ^1_0 \frac {\ln x}{1-x^2}dx$ - converges or diverges?</p> <p><strong>What we did:</strong></p> <p>We tried to compare with $-\frac 1x$ and $-\frac 1{x-1}$ but ended up finding that these convergence tests fail. Our book says this integral diverges, but Wolfram on the other hand says it converges. How come?</p>
Riemann1337
98,640
<p>It converges and its value is $-\pi^2/8$.</p>
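<p>A sketch of a numerical confirmation (Python; the term-by-term integration behind the series value is spelled out in the comments):</p>

```python
import math

# integral_0^1 of ln(x)/(1-x^2) dx: expanding 1/(1-x^2) = sum x^(2n) and
# using integral_0^1 x^(2n) ln x dx = -1/(2n+1)^2 gives
# -sum 1/(2n+1)^2 = -pi^2/8.
series = -sum(1.0 / (2 * n + 1) ** 2 for n in range(200_000))

# Direct midpoint quadrature; the integrand is finite at x = 1 (limit -1/2)
# and its logarithmic singularity at x = 0 is integrable.
m = 400_000
h = 1.0 / m
quad = 0.0
for i in range(m):
    x = (i + 0.5) * h
    quad += math.log(x) / (1.0 - x * x)
quad *= h
print(series, quad, -math.pi**2 / 8)
```

<p>Both the series and the quadrature land on $-\pi^2/8 \approx -1.2337$, so the integral is (absolutely) convergent.</p>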
556,855
<p>Given a $\triangle ABC$ with sides $AB=BC$ and $\angle B=100^\circ $, prove that $$a^3 + b^3 = 3a^2b$$ where $a=AB=BC$ and $b=AC$,</p> <p>I have tried to use simultaneously the sine and cosine rules as well as the Pythagorean Theorem with all my attempts failing to prove that $LHS =RHS$. I would greatly appreciate a hint on how to prove the proposition.</p>
Priyatham
106,406
<p>A straightforward application of the cosine rule should tell you that $$ b = 2a\sin(50^\circ) $$</p> <p>Consider (using $\sin^3\theta=\frac{3\sin\theta-\sin 3\theta}{4}$ and $\sin 150^\circ=\sin 30^\circ$)</p> <p>$$ \begin{equation} \begin{split} a^3 + b^3 - 3a^2b &amp; = a^3(1+8\sin^350-6\sin50) \\ &amp; = a^3(1+8\frac{(3\sin50 - \sin 30)}{4}-6\sin50) \\ &amp; = a^3(1+6\sin50-2\sin30-6\sin50) \\ &amp; = 0 \end{split} \end{equation} $$</p>
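<p>A quick numerical check of the identity (a Python sketch, with $a$ normalized to $1$):</p>

```python
import math

# With angle B = 100 degrees and AB = BC = a, the cosine rule gives
# b = 2a*sin(50 degrees); check that a^3 + b^3 = 3 a^2 b.
a = 1.0
b = 2 * a * math.sin(math.radians(50))
print(a**3 + b**3, 3 * a**2 * b)
```

<p>The two printed values agree to machine precision, as the exact trigonometric argument predicts.</p>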
556,855
<p>Given a $\triangle ABC$ with sides $AB=BC$ and $\angle B=100^\circ $, prove that $$a^3 + b^3 = 3a^2b$$ where $a=AB=BC$ and $b=AC$,</p> <p>I have tried to use simultaneously the sine and cosine rules as well as the Pythagorean Theorem with all my attempts failing to prove that $LHS =RHS$. I would greatly appreciate a hint on how to prove the proposition.</p>
Jonas Kgomo
45,379
<p>We prove $a^3+b^3=3a^2b$ using the cosine rule.</p> <p>$b^2=a^2+a^2-2aa\cos \beta$, where $\beta=100^\circ$</p> <p>$b^2=2a^2(1-\cos \beta)$. Let $t=\frac{b}{a}$, so $t^2=2(1-\cos \beta)$.</p> <p>$a^3+b^3=3a^2b\implies 1+t^3=3t\implies t^3-3t+1=0$</p> <p>$t(t^2-3)=-1\implies t^2(t^2-3)^2=2(1-\cos \beta)\left(2(1-\cos\beta)-3\right)^2\\=2(1-\cos \beta)(1+4\cos \beta+4\cos^2 \beta)\\=2(1+3\cos\beta-4\cos^3\beta)=2+6\cos\beta-8\cos^3\beta=1$</p> <p>so it remains to show that</p> <p>$8\cos^3\beta-6\cos\beta-1=2\cos3\beta-1=0 $</p> <p>which holds when $3\beta=\pm \frac{\pi}{3}+2\pi k,\,k\in \mathbb Z$; indeed for $\beta=100^\circ$ we get $3\beta=300^\circ$ and $\cos 300^\circ=\tfrac12$.</p>
2,284,178
<p>Let the roots of the equation: $2x^3-5x^2+4x+6$ be $\alpha,\beta,\gamma$</p> <ol> <li>State the values of $\alpha+\beta+\gamma,\alpha\gamma+\alpha\beta+\beta\gamma,\alpha\beta\gamma$</li> <li>Hence, or otherwise, determine an equation with integer coefficients which has $\frac{1}{\alpha^2},\frac{1}{\beta^2},\frac{1}{\gamma^2}$ as its roots</li> </ol> <p>For Question 1 I let the roots equal: $(x-\alpha)(x-\beta)(x-\gamma)$ which equals: $x^3-x^2(\alpha+\beta+\gamma)+x(\alpha\gamma+\alpha\beta+\beta\gamma)-\alpha\beta\gamma$</p> <p>I then equated it as:</p> <p>$\alpha+\beta+\gamma =\frac{5}{2}$</p> <p>$\alpha\gamma+\alpha\beta+\beta\gamma=2$</p> <p>$\alpha\beta\gamma=-6$</p> <p>Answering question 2 I went and did:</p> <p>$\frac{1}{\alpha^2}+\frac{1}{\beta^2}+\frac{1}{\gamma^2}$ which equals $\frac{\alpha^2\beta^2+\alpha^2\gamma^2+\beta^2\gamma^2}{\alpha^2\beta^2\gamma^2}=\frac{(\alpha\beta)^2+(\alpha\gamma)^2+(\beta\gamma)^2}{(\alpha\beta\gamma)^2}$</p> <p>that would give me the sum of the roots and </p> <p>$\frac{1}{(\alpha\beta\gamma)^2}$ would give me the product of the roots but I am kinda confused as to how to finish this question.</p>
Donald Splutterwit
404,247
<p>We have \begin{eqnarray*} \alpha+\beta+\gamma =\frac{5}{2} \\ \alpha\beta+\beta\gamma+\gamma\alpha =2 \\ \alpha\beta\gamma =-3 \end{eqnarray*} Now calculate \begin{eqnarray*} (\alpha+\beta+\gamma)^2 =\alpha^2+\beta^2+\gamma^2+2(\alpha\beta+\beta\gamma+\gamma\alpha) \\ (\alpha\beta+\beta\gamma+\gamma\alpha )^2=\alpha^2\beta^2+\beta^2\gamma^2+\gamma^2\alpha^2 +2 \alpha\beta\gamma(\alpha+\beta+\gamma )\\ \end{eqnarray*} So $\alpha^2+\beta^2+\gamma^2=?$ and $\alpha^2\beta^2+\beta^2\gamma^2+\gamma^2\alpha^2=?$. Now \begin{eqnarray*} \frac{1}{\alpha^2}+\frac{1}{\beta^2}+\frac{1}{\gamma^2} =\frac{\alpha^2\beta^2+\beta^2\gamma^2+\gamma^2\alpha^2}{\alpha^2\beta^2\gamma^2} =\frac{?}{?} \\ \frac{1}{\alpha^2\beta^2}+\frac{1}{\beta^2\gamma^2}+\frac{1}{\gamma^2\alpha^2}=\frac{\alpha^2+\beta^2+\gamma^2}{\alpha^2\beta^2\gamma^2} =\frac{?}{?} \\ \frac{1}{\alpha\beta\gamma^2} =\frac{?}{?} \end{eqnarray*} so ...</p> <p>Alternatively rearrange the original equation to $2x^3+4x=5x^2-6$. now square both sides and rearrange \begin{eqnarray*} 4x^6+16x^4+16x^2=25x^4-60x^2+36 \\ 4x^6-9x^4+76x^2-36=0 \\ 4-9x^{-2}+76 x^{-4}-36x^{-6}=0 \end{eqnarray*} So $ \color{red}{36y^3-76y^2+9y-4=0}$.</p>
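<p>As a sanity check, one can compare the two root sets numerically (a sketch assuming numpy is available):</p>

```python
import numpy as np

# The roots of 36y^3 - 76y^2 + 9y - 4 should be exactly 1/x^2 for each
# root x of the original cubic 2x^3 - 5x^2 + 4x + 6.
x_roots = np.roots([2, -5, 4, 6])
y_roots = np.roots([36, -76, 9, -4])
print(np.sort_complex(1 / x_roots**2))
print(np.sort_complex(y_roots))
```

<p>The squaring trick also introduces the roots of the mirror factor $2x^3+5x^2+4x-6$, but since those are the negatives of the original roots and $y=1/x^2$ is even, the $y$-roots are unaffected.</p>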
1,590,262
<p>Let $ABC$ be of triangle with $\angle BAC = 60^\circ$ . Let $P$ be a point in its interior so that $PA=1, PB=2$ and $PC=3$. Find the maximum area of triangle $ABC$.</p> <p>I took reflection of point $P$ about the three sides of triangle and joined them to vertices of triangle. Thus I got a hexagon having area double of triangle, having one angle $120$ and sides $1,1,2,2,3,3$. We have to maximize area of this hexagon. For that, I used some trigonometry but it went very complicated and I couldn't get the solution.</p>
Rory Daulton
161,807
<p>Let $\theta=\measuredangle PAB$ in the triangle you specify. Then $\measuredangle PAC=60°-\theta$.</p> <p><a href="https://i.stack.imgur.com/8gVGc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8gVGc.png" alt="enter image description here"></a></p> <p>By the law of cosines,</p> <p>$$2^2=1^2+c^2-2\cdot 1\cdot c\cdot \cos\theta$$ $$3^2=1^2+b^2-2\cdot 1\cdot b\cdot \cos(60°-\theta)$$</p> <p>Solving those equations for $b$ and $c$,</p> <p>$$c=\cos\theta+\sqrt{\cos^2\theta+3}$$ $$b=\cos(60°-\theta)+\sqrt{\cos^2(60°-\theta)+8}$$</p> <p>Since the triangle's area is $\frac 12bc\sin A$ and $A=60°$, putting them all together we get</p> <p>$$Area=\frac{\sqrt 3}{4}\left(\cos(60°-\theta)+\sqrt{\cos^2(60°-\theta)+8}\right)\left(\cos\theta+\sqrt{\cos^2\theta+3}\right)$$</p> <p>where $0°&lt;\theta&lt;60°$. This looks like a bear to maximize analytically, so I did it numerically with a graph and got</p> <blockquote> <p>$$Area_{max}\approx 4.66441363567 \quad\text{at}\quad \theta\approx 23.413°$$</p> </blockquote> <p>I could get only five significant digits for $\theta$ but the maximum area should be accurate to all shown digits. I checked this with a construction in Geogebra and it checks. WolframAlpha timed out while trying to find an exact maximum value of the triangle's area from that my formula.</p>
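<p>The numerical maximization can be sketched as follows (Python; golden-section search is my choice of method here, not part of the original answer):</p>

```python
import math

# Maximize A(t) = (sqrt(3)/4) * (cos(60deg - t) + sqrt(cos^2(60deg - t) + 8))
#                             * (cos t + sqrt(cos^2 t + 3))
# over 0 < t < 60 degrees, using golden-section search (A is unimodal here).
def area(t):
    u = math.radians(60) - t
    return (math.sqrt(3) / 4
            * (math.cos(u) + math.sqrt(math.cos(u) ** 2 + 8))
            * (math.cos(t) + math.sqrt(math.cos(t) ** 2 + 3)))

lo, hi = 0.0, math.radians(60)
gr = (math.sqrt(5) - 1) / 2  # golden ratio conjugate
for _ in range(200):
    m1 = hi - gr * (hi - lo)
    m2 = lo + gr * (hi - lo)
    if area(m1) < area(m2):
        lo = m1
    else:
        hi = m2
theta = (lo + hi) / 2
print(math.degrees(theta), area(theta))
```

<p>This reproduces $\theta\approx 23.413^\circ$ and the maximum area $\approx 4.66441363567$ quoted above.</p>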
1,590,262
<p>Let $ABC$ be of triangle with $\angle BAC = 60^\circ$ . Let $P$ be a point in its interior so that $PA=1, PB=2$ and $PC=3$. Find the maximum area of triangle $ABC$.</p> <p>I took reflection of point $P$ about the three sides of triangle and joined them to vertices of triangle. Thus I got a hexagon having area double of triangle, having one angle $120$ and sides $1,1,2,2,3,3$. We have to maximize area of this hexagon. For that, I used some trigonometry but it went very complicated and I couldn't get the solution.</p>
achille hui
59,379
<p>Let $\mathcal{A}$ be the area of $\triangle ABC$. Let $\theta$ and $\phi$ be the angles $\angle PAC$ and $\angle BAP$ respectively.<br> We have $\theta + \phi = \angle BAC = \frac{\pi}{3}$. As functions of $\theta$ and $\phi$, the side lengths $b$, $c$ and area $\mathcal{A}$ are:</p> <p>$$ \begin{cases} c(\theta) &amp;= \cos\theta + \sqrt{2^2-\sin^2\theta}\\ b(\phi) &amp;= \cos\phi + \sqrt{3^2-\sin^2\phi} \end{cases} \quad\text{ and }\quad \mathcal{A}(\theta) = \frac{\sqrt{3}}{4} c(\theta)b\left(\frac{\pi}{3}-\theta\right) $$ In order for $\mathcal{A}(\theta)$ to achieve maximum has a particular $\theta$, we need</p> <p>$$\frac{d\mathcal{A}}{d\theta} = 0 \iff \frac{1}{\mathcal{A}}\frac{d\mathcal{A}}{d\theta} = 0 \iff \frac{1}{c}\frac{dc}{d\theta} - \frac{1}{b}\frac{db}{d\phi} = 0 \iff \frac{\sin\theta}{\sqrt{2^2-\sin^2\theta}} - \frac{\sin\phi}{\sqrt{3^2-\sin^2\phi}} = 0$$ This implies $$\frac{\sin\theta}{2} = \frac{\sin\phi}{3} = \frac13 \sin\left(\frac{\pi}{3} - \theta\right) = \frac13 \left(\frac{\sqrt{3}}{2}\cos\theta - \frac12\sin\theta\right) \iff 4\sin\theta = \sqrt{3}\cos\theta$$ and hence $$\theta = \tan^{-1}\left(\frac{\sqrt{3}}{4}\right) \approx 0.4086378550975924 \;\;( \approx 23.41322444637054^\circ )$$</p> <p>Furthermore, we have $\displaystyle\;\frac{\sin\theta}{2} = \frac{\sin\phi}{3} = \frac{\sqrt{3}}{2\sqrt{19}}\;$. Substitute this into the expression for side lengths and area, we get $$ \begin{cases} c &amp;= \frac{4+\sqrt{73}}{\sqrt{19}}\\ b &amp;= \frac{7+3\sqrt{73}}{2\sqrt{19}} \end{cases} \quad\implies\quad \mathcal{A} = \frac{\sqrt{3}}{8}(13+\sqrt{73}) \approx 4.664413635668018 $$ Please note that the condition $\displaystyle\;\frac{\sin\theta}{2} = \frac{\sin\phi}{3}$ is equivalent to $\angle ABP = \angle ACP$. If one can figure out why these two angles equal to each other when $\mathcal{A}$ is maximized, one should be able to derive all the result here w/o using any calculus.</p>
1,977,577
<p>if a function is lebesgue integrable, does it imply that it is measurable? (without any other assumption)</p> <p>The reason why I ask this is because royden, in his book, kind of imply about a measurable function when assuming the function to be lebesgue integrable</p>
True_False
608,343
<p>Actually, this is the converse of the following theorem; you can start from the end of its proof to answer your question:</p> <p>Let <span class="math-container">$f$</span> be a bounded measurable function on a set of finite measure <span class="math-container">$E$</span>. Then <span class="math-container">$f$</span> is integrable over <span class="math-container">$E$</span>. </p> <p>The proof uses the simple approximation lemma* with <span class="math-container">$\epsilon=\frac{1}{n}$</span> to show that there are two simple functions <span class="math-container">$\phi_n$</span> and <span class="math-container">$\psi_n$</span> defined on <span class="math-container">$E$</span> such that <span class="math-container">$0 \leq \phi_n \leq f \leq \psi_n$</span> on <span class="math-container">$E$</span>, and <span class="math-container">$0 \leq \psi_n -\phi_n \leq \frac{1}{n}$</span>. Using the monotonicity and linearity of the integral of simple functions, we can easily show that the lower and upper Lebesgue integrals of <span class="math-container">$f$</span> are equal.</p> <p><strong>For the converse</strong>, use the last line of the proof (the equality of the lower and upper Lebesgue integrals of <span class="math-container">$f$</span>) to produce a sequence of simple functions <span class="math-container">$\phi_n$</span> such that <span class="math-container">$\phi_n \to f$</span> pointwise on <span class="math-container">$E$</span> and <span class="math-container">$|\phi_n| \leq f$</span> on <span class="math-container">$E$</span>. Therefore, by the simple approximation theorem**, <span class="math-container">$f$</span> is measurable. 
<span class="math-container">$\Box$</span> </p> <p>In the Royden-Fitzpatrick book <em>Real Analysis</em>, you can find the following definitions and results:</p> <p>Lower integral = <span class="math-container">$\sup \{ \int_E \phi: \phi \text{ is simple, and } \phi \leq f \text{ on } E\}$</span>.</p> <p>Upper integral = <span class="math-container">$\inf \{ \int_E \psi: \psi \text{ is simple, and } \psi \geq f \text{ on } E\}$</span>.</p> <ul> <li><p><strong>The simple approximation lemma</strong>: Let <span class="math-container">$f$</span> be a measurable real-valued function on <span class="math-container">$E$</span>. Assume <span class="math-container">$f$</span> is bounded on <span class="math-container">$E$</span>. Then for each <span class="math-container">$\epsilon&gt;0$</span>, there are simple functions <span class="math-container">$\phi_{\epsilon}$</span> and <span class="math-container">$\psi_{\epsilon}$</span> defined on <span class="math-container">$E$</span> which have the following approximation properties: <span class="math-container">$0 \leq \phi_{\epsilon} \leq f \leq \psi_{\epsilon}$</span> on <span class="math-container">$E$</span>, and <span class="math-container">$0 \leq \psi_{\epsilon} -\phi_{\epsilon} \leq \epsilon.$</span></p></li> <li><p><strong>The simple approximation theorem</strong>: An extended real-valued function <span class="math-container">$f$</span> on a measurable set <span class="math-container">$E$</span> is measurable if and only if there is a sequence <span class="math-container">$\{ \phi_n \}$</span> of simple functions on <span class="math-container">$E$</span> which converges pointwise on <span class="math-container">$E$</span> to <span class="math-container">$f$</span> and has the property that <span class="math-container">$|\phi_n| \leq f$</span> on <span class="math-container">$E$</span> for all <span class="math-container">$n$</span>.</p></li> </ul>
1,303,183
<p>If I have a vector space $V$ (of dimension $n$) over the real numbers such that $\{v_1,v_2,\ldots,v_n\}$ is a basis for the space (not orthogonal), then I can write any vector $l$ in this space as $l=\sum_i\alpha_iv_i$. Here $\alpha_1,\alpha_2,\ldots,\alpha_n$ are the coefficients that define the vector $l$ with respect to this basis. Can another set of coefficients $\beta_1,\beta_2,\ldots,\beta_n$ give the same vector $l$? If the basis were orthogonal the answer would clearly be no, but I can't prove it for a non-orthogonal basis.</p>
Nick Alger
3,060
<p>Here is an alternative approach based on the existence and uniqueness of the matrix inverse for square matrices with full column rank. It is slightly circular in the sense that proofs of the existence and uniqueness of this matrix inverse usually use arguments similar to the other answers here. However, hopefully this approach will provide some intuition about the meaning of changing from one basis to another.</p> <p>The operation of taking a vector of coefficients and building a linear combination of vectors from it is a linear operation. In particular, it is equivalent to multiplying the coefficient vector with a matrix where the basis vectors are stacked in each column, $$\underbrace{\sum_{i=1}^n \alpha_i v_i}_{l} = \underbrace{\begin{bmatrix}v_1 &amp; v_2 &amp; \dots &amp; v_n\end{bmatrix}}_{P}\underbrace{\begin{bmatrix}\alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix}}_{\alpha}.$$ More generically, $$l = P \alpha,$$ where $l$ is the linear combination, $P$ is the matrix of basis vectors, and $\alpha$ is the vector of coefficients. If the basis vectors are linearly independent, then $P$ is invertible, so the only possible map going the other direction is given by $P^{-1}$; $$\alpha = P^{-1} l.$$</p> <p>There can't be other inverse maps by the uniqueness of matrix inverses, which is the desired result.</p> <p>More generally, if you want to go from a vector written in the standard basis $\{e_i\}_{i=1}^n$ to an alternative basis $\{v_i\}_{i=1}^n$, one multiplies by $P^{-1}$, whereas if you want to go from the $v_i$ basis back to the standard basis you multiply by $P$. </p> <p><img src="https://i.stack.imgur.com/UHnAg.png" alt="Change of basis"></p>
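<p>As a concrete illustration (a small numerical sketch; the particular basis and vector below are arbitrary choices, not from the question), here is the $n = 2$ case worked with the explicit $2 \times 2$ inverse:</p>

```python
# A non-orthogonal but linearly independent basis of R^2 (illustrative choice).
v1 = (1.0, 0.0)
v2 = (1.0, 1.0)

# P stacks the basis vectors as columns: P = [[1, 1], [0, 1]].
p11, p12 = v1[0], v2[0]
p21, p22 = v1[1], v2[1]
det = p11 * p22 - p12 * p21   # nonzero iff v1, v2 are linearly independent

def coefficients(l):
    """The unique (alpha1, alpha2) with l = alpha1*v1 + alpha2*v2,
    i.e. alpha = P^{-1} l via the explicit 2x2 inverse."""
    x, y = l
    return ((p22 * x - p12 * y) / det, (-p21 * x + p11 * y) / det)

l = (3.0, 2.0)
alpha1, alpha2 = coefficients(l)
rebuilt = (alpha1 * v1[0] + alpha2 * v2[0],
           alpha1 * v1[1] + alpha2 * v2[1])   # applying P to alpha recovers l
```

<p>Because the inverse of $P$ is unique, <code>coefficients</code> is the only map sending $l$ back to $(\alpha_1, \alpha_2)$; any other coefficient pair would rebuild a different vector.</p>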
258,215
<p>How can we show that if $f:V\to V$, then for each $m\in \mathbb{N}$ $$\operatorname{im}(f^{m+1})\subset \operatorname{im}(f^m)?$$ Please help, I am stuck on this.</p>
Philip Benj
52,337
<p>Hint: if $y\in \operatorname{im}(f^{m+1})$, then $y=f^{m+1}(x)=f^m(f(x))$ for some $x\in V$, so $y$ is the image under $f^m$ of the point $f(x)\in V$, i.e. $y\in \operatorname{im}(f^m)$.</p>
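<p>A quick way to see the containment $\operatorname{im}(f^{m+1}) \subseteq \operatorname{im}(f^m)$ in action (a sketch; the particular self-map below is an arbitrary example on a finite set, not from the exercise):</p>

```python
# An arbitrary self-map f of the finite set {0,...,4}.
f = {0: 1, 1: 2, 2: 2, 3: 1, 4: 0}

def image_of_iterate(m):
    """The image of f^m, with f^0 taken to be the identity."""
    points = set(f)
    for _ in range(m):
        points = {f[x] for x in points}
    return points

images = [image_of_iterate(m) for m in range(6)]
# im(f^{m+1}) is contained in im(f^m) for every m, as the exercise claims.
nested = all(images[m + 1] <= images[m] for m in range(5))
```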
3,259,193
<p>I have a simple question about notation regarding limits, specifically, <span class="math-container">$$\lim_{\|x\| \rightarrow \infty}f(x).$$</span> </p> <p><strong>Question:</strong> </p> <p><span class="math-container">$\lim_{\|x\| \rightarrow \infty}f(x)$</span>:</p> <p>In words what we are doing is taking the limit as the "norm" of the point <span class="math-container">$x$</span> goes to infinity. </p> <p>My problem here is I want to make sure I am interpreting the idea behind it correctly. So with taking limits one will encounter an expression of the form: <span class="math-container">$$\lim_{x \rightarrow \infty}f(x).$$</span> Here I would visualize our value of <span class="math-container">$x$</span> just tending towards "infinity" on a graph. But I'm having trouble visualizing the behaviour in this form <span class="math-container">$$\lim_{\|x\| \rightarrow \infty}f(x).$$</span></p> <p>What it says to me is that the "distance" of the <span class="math-container">$x$</span> value is going to infinity. So would a way to visualize it be if we had a fixed point <span class="math-container">$x_0$</span> on the number line and we kept on measuring the distance from this fixed point <span class="math-container">$x_0$</span> to some arbitrary point <span class="math-container">$x$</span> that is really far away (infinity away) from <span class="math-container">$x_0$</span>? And if this is a valid way of thinking about it, what is the benefit of writing it in this form versus the other way mentioned? </p>
KcH
36,116
<p>You use the analogy of <span class="math-container">$\lim_{x \rightarrow \infty} f(x)$</span>, saying</p> <blockquote> <p>I would visualize our value of <span class="math-container">$x$</span> just tending towards "infinity" on a graph.</p> </blockquote> <p>This says to me that you are thinking of functions with domain <span class="math-container">$\mathbb{R}$</span>. I will also assume that by <span class="math-container">$||x||$</span> you mean the <a href="https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm" rel="nofollow noreferrer">Euclidean norm</a>. </p> <p>In this setting, I think a better analogy would be <span class="math-container">$$\lim_{x \rightarrow 0 } f(x).$$</span> Here, you must imagine <span class="math-container">$x$</span> approaching 0 from both the left and the right (and observe that it approaches the same value from either side). Similarly, <span class="math-container">$\lim_{||x|| \rightarrow \infty} f(x)$</span> can be visualized as <span class="math-container">$x$</span> tending towards both infinity and negative infinity on a graph (with the notation being meaningful if and only if the graph approaches the same limit on either side).</p> <p>Indeed, this idea generalizes to functions with domain <span class="math-container">$\mathbb{R}^n$</span> for any <span class="math-container">$n \in \mathbb{N}$</span>. Just as <span class="math-container">$\lim_{x \rightarrow 0} f(x)$</span> involves <span class="math-container">$x$</span> approaching the origin from every possible direction, <span class="math-container">$\lim_{||x||\rightarrow \infty} f(x)$</span> involves <span class="math-container">$x$</span> going arbitrarily far away from the origin in every possible direction.</p>
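<p>To make the "both tails must agree" picture concrete, here is a small numerical sketch on $\mathbb{R}$ (the function $f(x) = 1/(1+x^2)$ is just an illustrative choice):</p>

```python
def f(x):
    return 1.0 / (1.0 + x * x)

# ||x|| -> infinity on the real line means |x| grows without bound,
# so x may run off toward +infinity or -infinity.
right_tail = [f(10.0 ** k) for k in range(1, 7)]     # x -> +infinity
left_tail = [f(-(10.0 ** k)) for k in range(1, 7)]   # x -> -infinity

# The limit notation is meaningful here because both tails approach
# the same value (0 in this example).
tails_agree = all(abs(r - l) < 1e-15 for r, l in zip(right_tail, left_tail))
limit_estimate = right_tail[-1]
```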
312,649
<p>Hello, how does one show the following:</p> <p>Let $(X,\tau)$ be a topological space; then a single point is compact but not necessarily closed.</p> <p>Thank you!</p>
Robert Israel
8,508
<p>The topological space $X$ is $T_1$ (i.e. for any two distinct points $x$, $y$, $x$ has an open neighbourhood that does not contain $y$) if and only if all single-point sets are closed.</p>
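<p>A minimal concrete example is the two-point Sierpiński space (chosen here for illustration; it is not mentioned in the question). The check below enumerates its topology directly: the singleton $\{1\}$ is compact, being finite, yet fails to be closed, and correspondingly the space is not $T_1$:</p>

```python
# Sierpinski space: X = {0, 1} with open sets {}, {1}, and X itself.
X = frozenset({0, 1})
open_sets = {frozenset(), frozenset({1}), X}
closed_sets = {X - U for U in open_sets}   # complements of the open sets

# {1} is compact (any finite set is), but it is not closed here:
singleton_is_closed = frozenset({1}) in closed_sets

# T_1 fails: no open set contains 0 without also containing 1.
separates_0_from_1 = any(0 in U and 1 not in U for U in open_sets)
```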
1,549,490
<p>$f(x) = 3x - \frac{1}{x^2}$</p> <p>I am finding this problem to be very tricky:</p> <p><a href="https://i.stack.imgur.com/7RjYG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7RjYG.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/jwUHp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jwUHp.jpg" alt="enter image description here"></a></p>
Mikasa
8,581
<p>Just a hint:</p> <p>Use another version of the derivative: </p> <p>$f'(a)=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}= \lim_{x\to a}\frac{(3x-1/x^2)-(3a-1/a^2)}{x-a}=\cdots=\lim_{x\to a}\frac{3a^2x^2(x-a)+(x-a)(x+a)}{x^2a^2(x-a)}=??$</p>
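<p>Finishing the limit in the hint gives $f'(a) = 3 + \frac{2}{a^3}$; a symmetric difference quotient (a numerical sketch, with the arbitrary test point $a = 2$) agrees with that closed form:</p>

```python
def f(x):
    return 3 * x - 1 / x**2

def fprime(a):
    # Closed form obtained by completing the limit in the hint: f'(a) = 3 + 2/a^3.
    return 3 + 2 / a**3

a = 2.0
h = 1e-6
# Symmetric difference quotient; its error is O(h^2) for smooth f.
difference_quotient = (f(a + h) - f(a - h)) / (2 * h)
```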