[ { "question_id": 5108519, "title": "Seeking other generalisations to the integral $\\int_0^{\\infty} \\frac{\\ln \\left(x+\\frac{1}{x}\\right)}{1+x^2}dx$", "question_text": "The\n\nintegral\n\n$$\n\n\\int_0^{\\infty} \\frac{\\ln \\left(x+\\frac{1}{x}\\right)}{1+x^2} d x=\\pi \\ln 2\n\n$$\n\ninvites me to investigate the integral\n\n$$\n\nI=\\int_0^{\\infty} \\frac{\\ln \\left(x+\\frac{1}{x}\\right)}{x^4+1} d x\n\n$$\n\nFirst of all, via the inverse substitution\n\n$x\\to \\frac{1}{x}$\n\n$$\n\nI=\\int_0^{\\infty} \\frac{x^2\\ln \\left(\\frac{1}{x}+x\\right)}{x^4+1} d x\n\n$$\n\nAveraging these two versions gives\n\n$$\n\n\\begin{aligned}\n\nI & =\\frac{1}{2} \\int_0^{\\infty} \\frac{\\left(1+x^2\\right) \\ln \\left(\\frac{1}{x}+x\\right)}{1+x^4} d x \\\\\n\n& =\\frac{1}{2} \\int_0^{\\infty} \\frac{\\left(1+\\frac{1}{x^2}\\right) \\ln \\left(x+\\frac{1}{x}\\right)}{x^2+\\frac{1}{x^2}} d x \\\\\n\n& =\\frac{1}{4} \\int_0^{\\infty} \\frac{\\ln \\left(x+\\frac{1}{x}\\right)^2}{x^2+\\frac{1}{x^2}} d\\left(x-\\frac{1}{x}\\right) \\\\\n\n& =\\frac{1}{4} \\int_0^{\\infty} \\frac{\\ln \\left[\\left(x-\\frac{1}{x}\\right)^2+4\\right]}{\\left(x-\\frac{1}{x}\\right)^2+2} d\\left(x-\\frac{1}{x}\\right)\n\n\\end{aligned}\n\n$$\n\nThe\n\nGlasser Master Theorem\n\n$$\n\n\\int_0^{\\infty} f\\left(x-\\frac{1}{x}\\right) d x=\\frac{1}{2} \\int_{-\\infty}^{\\infty} f(x) d x\n\n$$\n\nrewrites the integral as:\n\n$$\n\n\\boxed{I\n\n=\\frac{1}{2} \\int_0^{\\infty} \\frac{\\ln \\left(x^2+4\\right)}{x^2+2} d x\n\n=\\frac{\\pi}{2 \\sqrt{2}} \\ln (2+\\sqrt{2})}\n\n$$\n\nusing the answer in the\n\npost\n\n.\n\nYour comments and other generalisations are highly appreciated.", "question_owner": "Lai", "question_link": "https://math.stackexchange.com/questions/5108519/seeking-other-generalisations-to-the-integral-int-0-infty-frac-ln-leftx", "answer": { "answer_id": 5108524, "answer_text": "Assuming that\n\n$n$\n\n is a positive integer (\n\n$n \\geq 2$\n\n for the definite integral), for the 
antiderivative\n\n$$I_n=\\int\\frac{\\log\\left(x+\\frac{1}{x}\\right)}{x^n+1}\\,dx$$\n\n use\n\n$$x^n+1=\\prod_{k=1}^n (x-r_k)$$\n\n and partial fraction decomposition gives\n\n$$I_n=\\sum_{k=1}^n a_k\\,\\int\\frac{\\log \\left(x+\\frac{1}{x}\\right)}{x-r_k}\\,dx$$\n\n$$J_k=\\int\\frac{\\log \\left(x+\\frac{1}{x}\\right)}{x-r_k}\\,dx$$\n\n$$J_k=\\log \\left(r_k+\\frac{1}{r_k}\\right) \\log(x-r_k)+\\text{Li}_2\\left(1-\\frac{x}{r_k}\\right)-\\text{Li}_2\\left(\\frac{r_k-x}{r_k-i}\\right)-\n\n \\text{Li}_2\\left(\\frac{r_k-x}{r_k+i}\\right)$$\n\nRecombine the logarithms and apply the bounds.\n\nIt is tedious but it works.", "answer_owner": "Claude Leibovici", "is_accepted": false, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 2631074, "title": "Triple integral. Spherical coordinates. Too many calculations", "question_text": "I am having trouble with the integral:\n\n$$\n\n \\iiint \\limits_{S} g(x;y;z)dxdydz\\ \\label{orig} \\tag{1}\n\n$$\n\nwhere $g(x;y;z) = \\frac{xyz}{(a^2 + x^2 + y^2 + z^2)^3}$ and the region is given by the inequalities:\n\n$$\n\n (x^2 + y^2 + z^2)^{3/2} \\leqslant 4xy, \\\\\n\n x \\geqslant 0, y \\geqslant 0, z\\geqslant 0\n\n$$\n\nI know one way to solve this. I used a conversion to the\n\nspherical coordinate system\n\n and got the following integral:\n\n$$\n\n \\int\\limits_{0}^{\\pi / 2}d\\varphi \\int\\limits_{0}^{\\pi/2}d\\theta \\int\\limits_{0}^{\\sin2\\varphi(1 - \\cos2\\theta)} \\frac{r^5 \\sin2\\varphi \\sin2\\theta(1 - \\cos2\\theta)}{8(a^2 + r^2)^3} dr\n\n$$\n\nThis integral is solvable, but there are a lot of calculations in the process. My question is:\n\nIs it possible to solve the original integral $\\eqref{orig}$ differently? Is there a more elegant solution? 
Maybe I'm making some mistakes?", "question_owner": "puhsu", "question_link": "https://math.stackexchange.com/questions/2631074/triple-integral-spherical-coordinates-too-much-calculations", "answer": { "answer_id": 5108577, "answer_text": "Here is an alternative path to converting to spherical coordinates. This is not (yet) a complete answer, nor is it a more elegant method IMO, but based on the closed form below, I think there is a good chance of there being a much cleaner solution.\n\nDenote the initial domain of integration by\n\n$A$\n\n, then reduce and transform the integral to new ones over\n\n$B$\n\n the region under\n\n$A$\n\n in the plane\n\n$z=0$\n\n (\n\nplot\n\n);\n\n$C$\n\n the rotation of\n\n$B$\n\n clockwise by\n\n$\\pi/4$\n\n rad about the origin (\n\nplot\n\n);\n\n$D$\n\n the region obtained by the change of variables,\n\n$(s,t)=\\left(u^2+v^2,u^2-v^2\\right)$\n\n (\n\nplot\n\n)\n\n(NB: The double integral over\n\n$D$\n\n as shown above needs an additional factor of\n\n$2$\n\n, though I'm not entirely sure why just yet. Symmetry is a likely culprit. 
This factor is included below.)\n\n$$\\begin{align*}\n\nI(a) &= \\iiint_A \\frac{xyz}{\\left(a^2+x^2+y^2+z^2\\right)^3} \\, dz \\, dy \\, dx \\\\\n\n&= \\iint_B \\frac{xy}4 \\left(\\frac1{\\left(a^2+x^2+y^2\\right)^2} - \\frac1{\\left(a^2+(4xy)^{2/3}\\right)^2}\\right) \\, dy \\, dx \\\\\n\n&= \\iint_C \\frac{u^2-v^2}8 \\left(\\frac1{\\left(a^2+u^2+v^2\\right)^2} - \\frac1{\\left(a^2+2^{2/3}\\left(u^2-v^2\\right)^{2/3}\\right)^2}\\right) \\, dv \\, du \\\\\n\n&= \\iint_D \\frac t{32\\sqrt{s^2-t^2}} \\left(\\frac1{\\left(a^2+s\\right)^2} - \\frac1{\\left(a^2+(2t)^{2/3}\\right)^2}\\right) \\, dt \\, ds \\\\\n\n&= 2 \\int_0^4 \\int_{\\tfrac{s^{3/2}}2}^s \\cdots \\, dt \\, ds \\\\\n\n&= \\frac1{16} \\int_0^4 \\int_\\tfrac{\\sqrt s}2^1 \\frac{st}{\\sqrt{1-t^2}} \\left(\\frac1{\\left(a^2+s\\right)^2} - \\frac1{a^2+(2st)^{2/3}}\\right) \\, dt \\, ds & t\\to st \\\\\n\n&= \\frac12 \\int_0^1 \\int_s^1 \\frac s{\\sqrt{1-t}} \\left(\\frac1{\\left(a^2+4s\\right)^2} - \\frac1{\\left(a^2+4(st)^{2/3}\\right)^2}\\right) \\, dt \\, ds & s\\to4s \\\\\n\n&= \\frac12 \\int_0^1 \\int_0^t \\cdots \\, ds \\, dt & \\text{Fubini} \\\\\n\n&= \\frac12 \\int_0^1 \\int_0^1 \\frac{st^2}{\\sqrt{1-t}} \\left(\\frac1{\\left(a^2+4st\\right)^2} - \\color{red}{\\frac1{\\left(a^2+4s^{2/3}t\\right)^2}}\\right) \\, ds \\, dt & s\\to st \\\\\n\n&= \\frac14 \\int_0^1 \\int_0^1 \\frac{s(2-3s)t^2}{\\sqrt{1-t}\\left(a^2+4st\\right)^2} \\, ds \\, dt & \\color{red}{s\\to s^{3/2}} \\\\\n\n&= \\frac12 \\int_0^1 \\int_0^1 \\frac{s(2-3s)\\left(1-r^2\\right)^2}{\\left(a^2+4\\left(1-r^2\\right)s^2\\right)^2} \\, ds \\, dr & r=\\sqrt{1-t} \\\\\n\n&= -\\frac1{16} \\int_0^1 \\int_0^1 \\left(\\frac32 - \\frac{3a^2+4\\left(1-r^2\\right)}{a^2+4\\left(1-r^2\\right)s} + \\frac{3a^4+8a^2\\left(1-r^2\\right)}{2\\left(a^2+4\\left(1-r^2\\right)s\\right)^2}\\right) \\, ds \\, dr \\\\\n\n&= -\\frac3{32} + \\frac1{64} \\int_0^1 \\left(4+\\frac{3a^2}{1-r^2}\\right) \\log\\frac{a^2+4-4r^2}{a^2} \\, dr \\\\\n\n&\\qquad - \\frac1{32} 
\\int_0^1 \\left(2+\\frac{a^2}{a^2+4-4r^2}\\right) \\, dr \\\\\n\n&= -\\frac5{32} + \\frac1{16} J_1 + \\frac{3a^2}{64} J_2 - \\frac1{32} J_3 \\\\\n\n&= -\\frac9{32} + \\frac{3a^2+16}{32} J_3 + \\frac{3a^2}{64} J_2\n\n\\end{align*}$$\n\nwhere\n\n$$\\begin{align*}\n\nJ_1 &= \\int_0^1 \\log\\frac{a^2+4-4r^2}{a^2} \\, dr \\\\\n\n&= 2 \\int_0^1 \\left(\\frac{a^2+4}{a^2+4-4r^2} - 1\\right) \\, dr & \\text{by parts} \\\\\n\n&= 2\\left(a^2+4\\right) J_3 - 2 \\\\[2ex]\n\nJ_3 &= \\int_0^1 \\frac{dr}{a^2+4-4r^2} \\, dr \\\\\n\n&= \\frac1{2\\sqrt{a^2+4}} \\int_0^\\tfrac2{\\sqrt{a^2+4}} \\frac{dr}{1-r^2} & r\\to\\frac{\\sqrt{a^2+4}}2r \\\\\n\n&= \\frac1{2\\sqrt{a^2+4}} \\operatorname{artanh} \\frac2{\\sqrt{a^2+4}}\n\n\\end{align*}$$\n\nThe missing piece is\n\n$$J_2 = \\int_0^1 \\log\\frac{a^2+4-4r^2}{a^2} \\cdot \\frac{dr}{1-r^2}\n\n\\stackrel{?}= \\operatorname{artanh}^2\\frac2{\\sqrt{a^2+4}}$$\n\nbased on numerical evidence. This most likely follows from reduction of a combination of\n\ndilogarithms\n\n, but I suspect there is a simpler elementary approach to evaluating\n\n$J_2$\n\n. In any case, we have\n\n$$I(a) = -\\frac9{32} + \\frac{3a^2+16}{64\\sqrt{a^2+4}} \\operatorname{artanh} \\frac2{\\sqrt{a^2+4}} + \\frac{3a^2}{64} \\operatorname{artanh}^2\\frac2{\\sqrt{a^2+4}}$$", "answer_owner": "user170231", "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 2066444, "title": "Do Riemann-Stieltjes integrals "iterate"?", "question_text": "Let's say we define: $$h(x) = \\int_a^x f(t)dg(t),$$ then do we have for integrable functions $a$ that: $$\\int_a^b a(u) dh(u) = \\int_a^b a(u)f(u)dg(u) ?$$\n\nI would like to know whether this holds for either the Riemann-Stieltjes or the Lebesgues-Stieltjes integral or any similar integral. Also for the sake of simplicity, feel free to assume that all relevant functions are as \"nice\" as you want, e.g. 
real-analytic.\n\nA yes/no answer would suffice, as would references which either prove or disprove such a result.\n\nAttempt:\n\n In \"nice\" cases, we hope that the behavior of the Riemann-Stieltjes sums will predict the behavior for the integrals, i.e. that the behavior will be respected/preserved by the appropriate limits. So let's write now instead: $$\\int_a^b a(u) dh(u) \\approx \\sum_{i=0}^{n-1} a(x_i) (h(x_{i+1}) - h(x_i)) $$ Then by definition of $h$ we have that: $$h(x) \\approx \\sum_{j=0}^{m-1} f(t_j) (g(t_{j+1}) - g(t_j))$$ In particular for each $i$ we have that (setting $t_m = x_i, t_{m+1}=x_{i+1}$, etc.): $$h(x_{i+1}) - h(x_i) \\approx \\sum_{j=0}^{m} f(t_j) (g(t_{j+1}) - g(t_j)) - \\sum_{j=0}^{m-1} f(t_j) (g(t_{j+1}) - g(t_j)) = f(x_i)(g(x_{i+1})-g(x_i))$$ so that substituting into the above: $$\\int_a^b a(u) dh(u) \\approx \\sum_{i=0}^{n-1} a(x_i) (h(x_{i+1}) - h(x_i)) \\approx \\sum_{i=0}^{n-1}a(x_i)f(x_i)(g(x_{i+1})-g(x_i)) \\approx \\int_a^b a(u)f(u)dg(u). $$ Of course, the above \"argument\" is extremely sloppy and would require considerable effort to be made rigorous, assuming that is even possible.\n\nBut hopefully it suggests why I think the above result may be true -- I had hoped to find it or something similar on the\n\nWikipedia page for the Riemann-Stieltjes integral\n\n, but it is not.\n\nAlso one might expect the identity to be true by sloppily \"applying\" the fundamental theorem of calculus ($h(x)``=\"\\int_a^x f(t)g'(t)dt$ so $h'(u)``=\"f(u)g'(u)$), i.e. when $$\\int_a^b a(u)dh(u) ``=\" \\int_a^b a(u) h'(u) du ``=\" \\int_a^b a(u) f(u) g'(u) du ``=\" \\int_a^b a(u) f(u) dg(u). $$", "question_owner": "Chill2Macht", "question_link": "https://math.stackexchange.com/questions/2066444/do-riemann-stieltjes-integrals-iterate", "answer": { "answer_id": 2562823, "answer_text": "You are using\n\n$a$\n\n to denote both the lower integration limit and the function. 
To avoid confusion let your function\n\n$a$\n\n be written as\n\n$\\alpha$\n\n.\n\nThis is true given\n\nonly\n\n that\n\n$\\alpha$\n\n and\n\n$f$\n\n are Riemann-Stieltjes integrable with respect to\n\n$g$\n\n on\n\n$[a,b]$\n\n and\n\n$g$\n\n is increasing. This can be generalized if\n\n$g$\n\n has bounded variation as well. Note that R-S integrability implies that\n\n$\\alpha$\n\n and\n\n$f$\n\n are also bounded and that the product\n\n$\\alpha f$\n\n is R-S integrable with respect to\n\n$g$\n\n.\n\nTake a partition\n\n$P = (x_0, x_1, \\ldots, x_n)$\n\n of\n\n$[a,b]$\n\n. Any corresponding Riemann-Stieltjes sum with tags\n\n$\\xi_k \\in [x_{k-1},x_k]$\n\n can be written as\n\n$$S(P,\\alpha,h) = \\sum_{k=1}^n \\alpha(\\xi_k)[h(x_k)-h(x_{k-1})] = \\sum_{k=1}^n \\alpha(\\xi_k)\\int_{x_{k-1}}^{x_k}f(u) \\, dg(u)\\\\ = \\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}\\alpha(\\xi_k)f(u) \\, dg(u). $$\n\nWe also have\n\n$$\\int_{a}^{b}\\alpha(u)f(u) \\, dg(u) = \\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}\\alpha(u)f(u) \\, dg(u).$$\n\nThus,\n\n$$\\tag{*}\\left|S(P,\\alpha,h) - \\int_a^b \\alpha(u)f(u) \\, dg(u)\\right| = \\left|\\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}[\\alpha(\\xi_k)-\\alpha(u)]f(u) \\, dg(u)\\right| \\\\ \\leqslant \\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}|\\alpha(\\xi_k)-\\alpha(u)||f(u)| \\, dg(u) \\\\ \\leqslant M(f) \\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}(M_k(\\alpha) - m_k(\\alpha)) \\, dg(u), $$\n\nwhere\n\n$M(f) = \\sup_{u \\in [a,b]} |f(u)|$\n\n,\n\n$M_k(\\alpha) = \\sup_{u \\in [x_{k-1},x_k]} \\alpha(u)$\n\n and\n\n$m_k(\\alpha) = \\inf_{u \\in [x_{k-1},x_k]} \\alpha(u).$\n\nNote that the RHS of (*) can be written in terms of upper and lower Riemann-Stieltjes sums as\n\n$$M(f)\\sum_{k=1}^n \\int_{x_{k-1}}^{x_k}(M_k(\\alpha) - m_k(\\alpha)) \\, dg(u) = M(f)\\sum_{k=1}^n (M_k(\\alpha) - m_k(\\alpha)) [g(x_k) - g(x_{k-1})] \\\\ = M(f)(U(P,\\alpha,g) - L(P,\\alpha,g)).$$\n\nSince\n\n$\\alpha$\n\n is R-S integrable with respect to\n\n$g$\n\n it follows that for 
any\n\n$\\epsilon >0$\n\n there is a partition\n\n$P_\\epsilon$\n\n such that if\n\n$P$\n\n is a refinement then\n\n$U(P,\\alpha,g) - L(P,\\alpha,g) < \\epsilon/M(f)$\n\n and\n\n$$\\left|S(P,\\alpha,h) - \\int_a^b \\alpha(u)f(u) \\, dg(u)\\right|< \\epsilon.$$\n\nTherefore,\n\n$$\\int_a^b \\alpha(u) \\, dh(u)= \\int_a^b \\alpha(u)f(u) \\, dg(u).$$", "answer_owner": "RRL", "is_accepted": true, "score": 6, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108499, "title": "Isi MMath PMB Problem: derivative becomes zero at infinity", "question_text": "Hello Stack exchange I encounter this math problem yesterday. I uploded both the question and my approach , what I wanted to know , where are the possible glitch in my solution. Now why I am asking this beacuse, I upload the same thing on another math paltform and one of the user is saying it's not rigid, however I also think , there might be some lag in the solution. Can someone help and let me know, Thanks\n\n$\\textbf{Here are the clarificatiions I need:}$\n\n$\\textbf{1. 
Is my approach towards the problem correct?}$\n\n$\\textbf{2. If it's not correct, where is the possible glitch?}$\n\nThe part I am deeply concerned about is the\n\n$\\textbf{interchanging of limiting variables}$\n\n after the part \"letting\n\n$h\\rightarrow 0$\n\n\".\n\n$\\textbf{Question:}$\n\n Let\n\n$f:\\mathbb{R} \\rightarrow \\mathbb{R}$\n\n be a differentiable function such that\n\n$f'$\n\n is continuous, and there exist\n\n$a,b\\in\\mathbb{R}$\n\n such that\n\n$$\\lim_{x\\rightarrow \\infty} f(x)=a$$\n\n$$\\lim_{x\\rightarrow \\infty} f'(x)=b$$\n\nShow that\n\n$b=0$\n\n.\n\n$\\textbf{My Solution:}$\n\n We are given\n\n$$\\lim_{x\\rightarrow \\infty} f(x)=a$$\n\nThus, for any fixed\n\n$h\\in\\mathbb{R}$\n\n we have\n\n$$\\lim_{x\\rightarrow \\infty} f(x+h)=a$$\n\nThus,\n\n$$\\lim_{x\\rightarrow \\infty} (f(x+h)-f(x))=(\\lim_{x\\rightarrow \\infty} f(x+h))-(\\lim_{x\\rightarrow \\infty} f(x))=a-a=0$$\n\n(since each of the limits on the left-hand side exists, that is, both\n\n$\\lim_{x\\rightarrow \\infty} f(x)=a$\n\n and\n\n$\\lim_{x\\rightarrow \\infty} f(x+h)=a$\n\n exist for any fixed\n\n$h\\in\\mathbb{R}$\n\n).\n\nThus for any fixed\n\n$0\\neq h\\in\\mathbb{R}$\n\n$$\\lim_{x\\rightarrow \\infty} \\frac{f(x+h)-f(x)}{h}=0$$\n\nNow letting\n\n$h\\rightarrow 0$\n\n, i.e.,\n\n$$\\lim_{h\\rightarrow 0}\\lim_{x\\rightarrow \\infty} \\frac{f(x+h)-f(x)}{h}=0(*)$$\n\nSince\n\n$$\\lim_{x\\rightarrow \\infty}f'(x)=b$$\n\n exists and\n\n$f'$\n\n is continuous from\n\n$\\mathbb{R}\\rightarrow\\mathbb{R}$\n\n, in\n\n$(*)$\n\n we can interchange the limits:\n\n$$\\lim_{x\\rightarrow \\infty} \\lim_{h\\rightarrow 0}\\frac{f(x+h)-f(x)}{h}=0$$\n\nHence,\n\n$$b=\\lim_{x\\rightarrow \\infty} f'(x)=0$$\n\nThis finishes the proof.\n\n$\\blacksquare$", "question_owner": "Safal_DB_Mathogenic", "question_link": "https://math.stackexchange.com/questions/5108499/isi-mmath-pmb-problem-derivative-becomes-zero-at-infinity", "answer": { "answer_id": 
5108555, "answer_text": "You need to be more careful to justify the interchange of limits. Consider the sequence of functions\n\n$g_{n}:[0, \\infty)\\rightarrow\\mathbb{R}$\n\n defined by\n\n$$g_{n}(x) = \\frac{f(x+1/n)-f(x)}{1/n}.$$\n\n We show that the sequence\n\n$(g_{n})$\n\n converges uniformly on\n\n$[0, \\infty)$\n\n. First, note that using the mean value theorem, we have\n\n$g_{n}(x) = f'(x+\\zeta(n, x))$\n\n for some\n\n$0 < \\zeta(n, x) < 1/n$\n\n that depends on both\n\n$n$\n\n and\n\n$x$\n\n. Fix\n\n$\\varepsilon > 0$\n\n and choose\n\n$M$\n\n large enough such that\n\n$|f'(y)-b|\\leq\\varepsilon/2$\n\n for\n\n$y\\geq M$\n\n. Then for\n\n$x\\geq M$\n\n and any\n\n$n, m$\n\n we have\n\n$|g_{n}(x)-g_{m}(x)|\\leq\\varepsilon$\n\n. Using the fact that\n\n$f'$\n\n is uniformly continuous on\n\n$[0, M+1]$\n\n, there exists\n\n$\\delta > 0$\n\n such that\n\n$|f'(y)-f'(z)|\\leq\\varepsilon$\n\n whenever\n\n$|y-z|\\leq\\delta$\n\n and\n\n$y, z\\in [0, M+1]$\n\n. Choose\n\n$N$\n\n large enough so that\n\n$2/N\\leq\\delta$\n\n. Then for\n\n$n, m\\geq N$\n\n and all\n\n$x\\in [0, M]$\n\n we have\n\n$|g_{n}(x)-g_{m}(x)|\\leq\\varepsilon$\n\n.\n\nSince\n\n$(g_{n})$\n\n converges uniformly on\n\n$[0, \\infty)$\n\n we have\n\n\\begin{align}\\lim_{x\\rightarrow\\infty}f'(x) = \\lim_{x\\rightarrow\\infty}\\lim_{n\\rightarrow\\infty}g_{n}(x) = \\lim_{n\\rightarrow\\infty}\\lim_{x\\rightarrow\\infty}g_{n}(x) = 0.\\end{align}\n\nThere is a much simpler proof that does not require the continuity of\n\n$f'$\n\n. Assume for the sake of contradiction that\n\n$\\lim_{x\\rightarrow\\infty}f'(x) = b > 0$\n\n (without loss of generality). Choose\n\n$M$\n\n large enough so that\n\n$f'(x)\\geq b/2$\n\n for\n\n$x\\geq M$\n\n. 
Then, using the mean value theorem for\n\n$x\\geq M$\n\n, we get\n\n$$f(x) = f(M)+f'(\\zeta(x, M))(x-M)\\geq f(M)+b(x-M)/2.$$\n\n Take the limit as\n\n$x\\rightarrow\\infty$\n\n to obtain a contradiction.", "answer_owner": "Karthik Kannan", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108452, "title": "Demonstrate $I(a) = \\int_1^\\infty \\frac{\\sqrt{a+x}}{a+x^2}\\,dx > \\frac{\\pi}{2}\\quad \\forall (a>0)$", "question_text": "I am trying to show that\n\n$$\n\nI(a) = \\int_1^\\infty \\frac{\\sqrt{a+x}}{a+x^2}\\,dx > \\frac{\\pi}{2}\\quad (a>0).\n\n$$\n\nUsing\n\n$x = \\sqrt{a}\\,t$\n\n we get\n\n$$\n\nI(a) = \\int_{1/\\sqrt{a}}^\\infty \\frac{\\sqrt{1 + t/\\sqrt{a}}}{1+t^2}\\,dt.\n\n$$\n\nDifferentiating under the integral sign yields\n\n$$\n\nI'(a) = \\frac{1}{2a^{3/2}} \\frac{\\sqrt{1+1/a}}{1+1/a}\n\n - \\frac{1}{4a^{3/2}} \\int_{1/\\sqrt{a}}^\\infty \\frac{t}{(1+t^2)\\sqrt{1+t/\\sqrt{a}}}\\,dt.\n\n$$\n\nThe first term is positive and the integral term is negative so I need to show the whole expression is negative.\n\nI tried the bound\n\n$$\n\n\\sqrt{1 + t/\\sqrt{a}} \\le 1 + t/\\sqrt{a} \\;\\Rightarrow\\; \\frac{1}{\\sqrt{1 + t/\\sqrt{a}}} \\ge \\frac{1}{1 + t/\\sqrt{a}},\n\n$$\n\nand for\n\n$t \\ge 1/\\sqrt{a}$\n\n also\n\n$$\n\n\\frac{1}{1 + t/\\sqrt{a}} \\ge \\frac{1}{1 + t^2},\n\n$$\n\nso\n\n$$\n\n\\int_{1/\\sqrt{a}}^\\infty \\frac{t}{(1+t^2)\\sqrt{1+t/\\sqrt{a}}}\\,dt\n\n\\ge \\int_{1/\\sqrt{a}}^\\infty \\frac{t}{(1+t^2)^2}\\,dt\n\n = \\frac{1}{2(1+1/a)}.\n\n$$\n\nPlugging this in gives\n\n$$\n\nI'(a) \\le \\frac{1}{2a^{3/2}(1+1/a)} \\Bigl( \\sqrt{1+1/a} - \\tfrac{1}{4} \\Bigr).\n\n$$\n\nBut\n\n$\\sqrt{1+1/a} > 1 > 1/4$\n\n, so the right-hand side is\n\npositive\n\n and the bound is useless for proving\n\n$I'(a)<0$\n\n.\n\nA solution I found claims a tighter lower bound using\n\n$$\n\n\\int_{1/\\sqrt{a}}^\\infty \\frac{t}{(1+t^2)^2}\\,dt\n\n = \\frac{\\pi}{4} - 
\\frac{1}{2}\\arctan\\!\\left(\\frac{1}{\\sqrt{a}}\\right)\n\n - \\frac{1}{2} \\cdot \\frac{1/\\sqrt{a}}{1+1/a},\n\n$$\n\nbut this antiderivative seems to belong to\n\n$\\int \\frac{1}{(1+t^2)^2}\\,dt$\n\n, not to the\n\n$t/(1+t^2)^2$\n\n we actually have.", "question_owner": "Joelle", "question_link": "https://math.stackexchange.com/questions/5108452/demonstrate-ia-int-1-infty-frac-sqrtaxax2-dx-frac-pi2-q", "answer": { "answer_id": 5108490, "answer_text": "$$I(a) = \\int_1^\\infty \\frac{\\sqrt{a+x}}{a+x^2}\\,dx$$\n\n$$\\sqrt{a+x}=u \\qquad \\implies \\qquad I(a)=\\int_{\\sqrt{a+1}}^\\infty \\frac{2 u^2}{u^4-2 a u^2+a\\left(a+1\\right)}\\,du$$\n\nWrite\n\n$$u^4-2 a u^2+a\\left(a+1\\right)=(u^2-\\alpha)(u^2-\\beta) \\quad \\text{with}\\quad (\\alpha,\\beta)=a\\pm i\\sqrt{a}$$\n\nUsing partial fraction decomposition\n\n$$\\frac{2 u^2}{u^4-2 a u^2+a\\left(a+1\\right)}=\\frac{2}{\\alpha -\\beta }\\Big(\\frac{1}{u^2-\\alpha }-\\frac{1}{u^2-\\beta } \\Big)$$\n\n So, two simple integrals.\n\nThe final result is not the most pleasant but, at least expanded for large values of\n\n$a$\n\n, we have\n\n$$I(a)\\sim \\frac \\pi 2+\\frac{\\log (a)-2+4 \\log (2)}{4 \\sqrt{a}}+\\frac{\\pi }{16 a}-\\frac{3 \\log (a)-13+12 \\log (2)}{96 a^{3/2}}+\\cdots$$\n\nTrying for\n\n$a=25$\n\n, the above truncated expansion gives\n\n$1.77781$\n\n while the exact value is\n\n$1.77772$\n\nNotice that\n\n$I(0)=2$\n\n; so a very small range of variation.", "answer_owner": "Claude Leibovici", "is_accepted": false, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1441747, "title": "The definition of strong continuity via joint continuity", "question_text": "A semigroup $S(t)$ on a Banach space $E$ is a family of bounded linear operators $\\{S(t)\\}_{t\\ge 0}$ with the property that $S(t)S(s)=S(t+s)$ for any $s,t\\ge 0$ and that $S(0)=I$. 
A semigroup is furthermore called\n\nstrongly continuous\n\n if the map $(x,t)\\mapsto S(t)x$ is continuous.\n\nI was told that this is equivalent to saying that $t\\to S(t)x$ is continuous for every $x$.\n\nHow can I see the equivalence of the two ways of defining strong continuity?\n\nCould anyone expand on what it really means for $(x,t)\\mapsto S(t)x$ to be continuous? Can one show this via the usual strategy of 2-sided continuity? How could this be the same as saying $t\\to S(t)x$ is continuous for every $x$?\n\nI'd appreciate any help.", "question_owner": "math101", "question_link": "https://math.stackexchange.com/questions/1441747/the-definition-of-strong-continuity-via-joint-continuity", "answer": { "answer_id": 4856698, "answer_text": "To avoid confusion let me give the two definitions of strong continuity different names, and from here on out I will not use the term\n\nstrongly continuous\n\n.\n\nA one-parameter semigroup\n\n$S(t)$\n\n is called\n\ntime continuous\n\n if for all\n\n$x \\in E$\n\n we have that\n\n$\\mathbb{R}_{\\geq 0} \\ni t \\mapsto S(t)x \\in E$\n\n is continuous.\n\nA one-parameter semigroup\n\n$S(t)$\n\n is called\n\neverywhere continuous\n\n if\n\n$E \\times \\mathbb{R}_{\\geq 0} \\ni (x,t) \\mapsto S(t)x \\in E$\n\n is continuous.\n\nWe see that time continuity follows immediately from everywhere continuity.\n\nSo, it seems that everywhere continuity is stronger in this sense.\n\nHowever, we can prove that everywhere continuity follows from time continuity.\n\nPick an arbitrary\n\n$x \\in E$\n\n and\n\n$t>0$\n\n, and fix\n\n$\\varepsilon > 0$\n\n.\n\nBy the time continuity we can find a\n\n$\\delta_1>0$\n\n such that for all\n\n$|s-t|<\\delta_1$\n\n we have that\n\n$||S(s)x - S(t)x|| < \\varepsilon/2$\n\n.\n\nFor any\n\n$y \\in E$\n\n we have, again by time continuity, that\n\n$$ \\sup_{|s-t| \\leq \\delta_1} ||S(s) y|| < \\infty $$\n\nBecause we have bounded linear operators we can apply the uniform boundedness principle and get\n\n$$ M := \\sup_{|s-t| \\leq \\delta_1} ||S(s)|| < 
\\infty $$\n\nNow let\n\n$\\delta_2 = \\varepsilon / (2 M)$\n\n and consider\n\n$||y -x || < \\delta_2$\n\n.\n\nWe then have the chain inequalities\n\n\\begin{align}\n\n||S(s)y - S(t)x||\n\n&= ||S(s)y - S(s)x + S(s)x - S(t)x|| \\\\\\\\\n\n&\\leq ||S(s)y - S(s)x || + ||S(s)x - S(t)x|| \\\\\\\\\n\n&\\leq ||S(s)||\\ ||y - x|| + ||S(s)x - S(t)x|| \\\\\\\\\n\n&\\leq \\varepsilon / 2 + \\varepsilon/2 \\\\\\\\\n\n&= \\varepsilon\n\n\\end{align}\n\nIt seems this is one of the special cases where separate continuity\n\ndoes\n\n imply joint continuity.\n\nThe same question and a shorter answer can be found\n\nhere", "answer_owner": "G. Bellaard", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107138, "title": "General relation between Jordan measure and Riemann integral", "question_text": "Following my\n\nprevious question\n\n, suppose that\n\n$f: A \\to \\mathbb{R}^{\\ge0}$\n\n is a bounded function which is defined over a bounded subset\n\n$A \\subset \\mathbb{R}^n$\n\n. Denote the Jordan inner measure and outer measure of the region between\n\n$f$\n\n and\n\n$0$\n\n by\n\n$m_{∗,(J)}(B)$\n\n and\n\n$m^{∗,(J)} (B)$\n\n respectively. Also denote the Darboux lower integral and upper integral of\n\n$f$\n\n over\n\n$A$\n\n by\n\n$\\underline{\\int_A} f$\n\n and\n\n$\\overline{\\int_A} f$\n\n respectively. I think in general it holds that\n\n$m_{∗,(J)}(B) = \\underline{\\int_A} f$\n\n and\n\n$m^{∗,(J)} (B) = \\overline{\\int_A} f$\n\n, so Riemann integral and Jordan measure are essentially equivalent. 
Is this statement correct?", "question_owner": "S.H.W", "question_link": "https://math.stackexchange.com/questions/5107138/general-relation-between-jordan-measure-and-riemann-integral", "answer": { "answer_id": 5107688, "answer_text": "Yes, you're nearly correct: the Jordan measures of the ordinate set of a nonnegative bounded function equal the respective Darboux integrals in general, but if the domain of the function is merely bounded, not a rectangle, you will also need the condition that the set of discontinuities of the function has Lebesgue measure zero, or else the two may not be equal.\n\nAlso it will be required that $A$ itself is Jordan measurable, but I'm assuming that's already implied in your definition.", "answer_owner": "Guilherme Gondin", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108438, "title": "Finding $\\int e^{-x}\\bigg(\\frac{1+\\tanh(1+x)}{1+x}\\bigg)^2 dx$", "question_text": "I need help with this integral:\n\n$$\\int e^{-x}\\bigg(\\frac{1+\\tanh(1+x)}{1+x}\\bigg)^2 dx$$\n\nMathematica could not find a closed form. I tried Feynman's tricks but could not get rid of the denominator (which was my idea since the numerator alone gives the hypergeometric function).\n\nActually, other sigmoid functions would also work for me, such as\n\n$\\arctan$\n\n or the logistic function, but the square is important. 
However, the\n\n$\\exp$\n\n and denominator are non-negotiable.", "question_owner": "Camilo", "question_link": "https://math.stackexchange.com/questions/5108438/finding-int-e-x-bigg-frac1-tanh1x1x-bigg2-dx", "answer": { "answer_id": null, "answer_text": "(Нет ответов)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1791361, "title": "Is Leibnizian calculus embeddable in first order logic?", "question_text": "We just published an article making what we feel is a plausible case in favor of an affirmative answer in\n\nFoundations of Science\n\n, see preprint\n\nhere\n\n. The basic argument is that while such a requirement may seem very limitative, such an embedding seems possible with a small number of additional ingredients like a black box for returing the sum of a series, and I was curious how well-supported this appears and if there are aspects that may have been overlooked.\n\nCrosslisted at\n\nHSM\n\n without generating much response.\n\nNote.\n\nLeibnizian calculus\n\n is calculus as it was practiced by Leibniz, in the same sense as\n\nEuclidean geometry\n\n could be interpreted as geometry as practiced by Euclid (though the term often has a different meaning). 
An example that we give in the paper of the type of mathematics that would not be Leibnizian calculus is a proof of the extreme value theorem, a 19th century argument.", "question_owner": "Mikhail Katz", "question_link": "https://math.stackexchange.com/questions/1791361/is-leibnizian-calculus-embeddable-in-first-order-logic", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108399, "title": "A few clarifications about multiplication in subgradient calculus", "question_text": "In the\n\nsubgradient calculus linearity properties\n\n, the appropriate side of the addition rule utilizes Minkowski addition of sets. Ordinarily in linearity, a scaling rule agrees with, and\n\nis basically redundant given\n\n, an addition rule (I can't imagine scaling failing for only irrational numbers except by adversarial construction).\n\nIs\n\n$\\alpha\\partial f$\n\n in the subgradient calculus scaling rule a multiplication built on top of Minkowski addition, i.e., does\n\n$2\\partial f$\n\n mean\n\n$\\partial f + \\partial f$\n\n? The alternative is vector-like element-wise scaling of the set, which obviously outputs only a subset of the output under the interpretation of multiplication as repeated Minkowski addition with some continuation for non-integer values of\n\n$\\alpha$\n\n, but it's not obvious to me what that continuation would be.\n\nThen for the affine transformation of variables rule, is matrix multiplication by a set as in\n\n$A^T\\partial f(Ax + b)$\n\n built on top of this \"Minkowski multiplication\" via some basic linear algebra? 
It is not clear how to ask this more precisely because I'm not sure if\n\n$f$\n\n is required to be scalar-valued, vector-valued, matrix-valued, or if this rule is valid for all three so long as the dimensions are valid for any of the three basic notions of matrix multiplication once the semantics of matrix-set multiplication have been unpacked.\n\nAs there also seems to be some notation overloading of\n\n$\\partial$\n\n in the affine transformation of variables rule (perhaps similar to the familiar partial derivative\n\n$\\partial$\n\n notation overloading from multivariable calculus), I think it would be illustrative for an answer to actually apply this rule to a toy example.", "question_owner": "user10478", "question_link": "https://math.stackexchange.com/questions/5108399/a-few-clarifications-about-multiplication-in-subgradient-calculus", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108362, "title": "Showing the strict monotonicity of the function $(a^\\frac{1}{x}+b^\\frac{1}{x})^x$", "question_text": "Let\n\n$a,b\\in \\mathbb{R}$\n\n with\n\n$a,b>0$\n\n. Define the function\n\n$f\\colon [1,\\infty)\\rightarrow \\mathbb{R}$\n\n by\n\n$$\n\nf(x)=(a^\\frac{1}{x}+b^\\frac{1}{x})^x\n\n$$\n\nThis function seems to be strictly increasing, but I've had a hard time showing this. How can this be shown?\n\nEdit: This post was closed without a reason given. I suspect that is the case because it reminded some of math.stackexchange.com/q/4094/42969, but this only shows the weak monotonicity of the function, and is thus not useful in showing this particular result.\n\nSecondly, having perused the guidelines, I realize that I didn't add my own attempts at solving this. 
I will add that I have tried differentiating this function, but it just turns into a mess, and likewise taking the logarithm doesn't provide much value. Not knowing any more reliable ways to determine the monotonicity of a function, I came here for help.", "question_owner": "redib", "question_link": "https://math.stackexchange.com/questions/5108362/showing-the-strict-mononticy-of-the-function-a-frac1xb-frac1xx", "answer": { "answer_id": 5108372, "answer_text": "Since\n\n$f(x) > 0$\n\n for\n\n$a, b > 0$\n\n and\n\n$x \\ge 1$\n\n, we can consider\n\n$$\n\ng(x) = \\ln f(x) = x \\ln\\big(a^{1/x} + b^{1/x}\\big).\n\n$$\n\nDifferentiating, we obtain\n\n$$\n\ng'(x) = \\ln\\big(a^{1/x} + b^{1/x}\\big)\n\n- \\frac{1}{x}\\,\\frac{\\ln(a)\\,a^{1/x} + \\ln(b)\\,b^{1/x}}{a^{1/x} + b^{1/x}}.\n\n$$\n\nLet\n\n$u = a^{1/x} > 0$\n\n and\n\n$v = b^{1/x} > 0$\n\n. Then\n\n$$\n\ng'(x) = \\ln(u + v) - \\frac{u \\ln u + v \\ln v}{u + v}.\n\n$$\n\nMultiplying both sides by\n\n$u + v$\n\n gives\n\n$$\n\n(u + v) g'(x) = u \\ln\\!\\left(\\frac{u + v}{u}\\right) + v \\ln\\!\\left(\\frac{u + v}{v}\\right).\n\n$$\n\nSince each logarithm is positive for\n\n$u, v > 0$\n\n, we have\n\n$g'(x) > 0$\n\n.\n\nBecause\n\n$f'(x) = f(x) g'(x)$\n\n and\n\n$f(x) > 0$\n\n, it follows that\n\n$f'(x) > 0$\n\n.\n\nHence\n\n$f(x)$\n\n is strictly increasing for\n\n$a, b > 0$\n\n and\n\n$x \\ge 1$\n\n.", "answer_owner": "almost_okay", "is_accepted": true, "score": 4, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108171, "title": "Closed form of $\\int_0^{\\infty} \\frac{\\sin (\\tan x) \\cos ^{2n-1} x}{x} d x?$", "question_text": "Attracted by the answer in the\n\npost\n\n$$\\int_0^{\\infty} \\frac{\\sin (\\tan x) }{x} d x = \\frac{\\pi}{2}\\left(1- \\frac 1e \\right) , $$\n\nI started to investigate and surprisingly found that\n\n$$\n\n\\int_0^{\\infty} \\frac{\\sin (\\tan x) \\cos x}{x} d x=\\frac{\\pi}{2}\\left(1-\\frac{1}{e}\\right),\n\n$$\n\nhaving the 
same answer as the first one.\n\nThen I tried a bit further with\n\n$n\\ge 1$\n\n,\n\n$$\n\nI_n=\\int_0^{\\infty} \\frac{\\sin (\\tan x) \\cos ^{2n-1} x}{x} d x\n\n$$\n\nUsing the\n\nLobachevsky Integral Formula\n\n, we have\n\n$$\n\n\\begin{aligned}\n\nI_n & =\\int_0^{\\infty} \\frac{\\sin (\\tan x) \\cos ^{2n-1} x}{x} d x \\\\&= \\int_0^{\\infty} \\frac{\\sin x}{x} \\cdot \\frac{\\sin (\\tan x) \\cos ^{2n-1} x}{\\sin x} d x\\\\\n\n& =\\int_0^{\\frac \\pi 2} \\frac{\\sin (\\tan x) \\cos ^{2n-1} x}{\\sin x} d x\\\\\n\n\\end{aligned}\n\n$$\n\nNow consider the parametrised integral\n\n$$\n\nI(a)=\\int_0^{\\frac \\pi 2} \\frac{\\sin (a\\tan x) \\cos ^{2 n-1} x}{\\sin x} d x\n\n$$\n\nwhose derivative w.r.t.\n\n$a$\n\n is\n\n$$\n\nI^{\\prime}(a)=\\int_0^{\\frac \\pi 2} \\cos ^{2 n-2} x \\cos (a \\tan x) d x .\n\n$$\n\nPutting\n\n$t=\\tan x$\n\n and using the contour integration along anti-clockwise direction of the path\n\n$$\\gamma=\\gamma_{1} \\cup \\gamma_{2} \\textrm{ where } \\gamma_{1}(t)=t+i 0(-R \\leq t \\leq R) \\textrm{ and } \\gamma_{2}(t)=R e^{i t} (00$\n\n,\n\n$f\\left(\\delta,\\sqrt{\\delta}\\right)=-\\dfrac{1}{2}\\delta^2<0$\n\n, so\n\n$f(0,0)=0$\n\n is not an extremum.\n\nFor\n\n$g$\n\n, in every neighborhood of\n\n$(0,0)$\n\n,\n\n$g(\\delta,0)=4\\delta^4>0$\n\n,\n\n$g(\\delta,2\\delta^2)=-2\\delta^4<0$\n\n, so\n\n$g(0,0)=0$\n\n is not an extremum.\n\n(\n\n$\\delta>0$\n\n)", "answer_owner": "JC Q", "is_accepted": true, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108134, "title": "Evaluate $\\lim_{x\\to0}\\frac{x^2\\sin\\frac{1}{x}+3x\\sin\\frac{1}{x}}{x\\sin\\frac{1}{x}}$", "question_text": "Evaluate\n\n$$\\lim_{x\\to0}\\frac{x^2\\sin\\frac{1}{x}+3x\\sin\\frac{1}{x}}{x\\sin\\frac{1}{x}}$$\n\nMy Attempt\n\nLet\n\n$\\frac{1}{x}= t$\n\n and\n\n$x\\to 0 \\implies t\\to \\infty$\n\n. 
So we would need to evaluate this\n\n$$\lim_{t\to \infty}\frac{\frac{1}{t^2}\sin t+3\frac{1}{t}\sin t}{\frac{1}{t}\sin t}=\lim_{t\to \infty}\frac{\frac{1}{t}\sin t(\frac{1}{t}+3)}{\frac{1}{t}\sin t}=3$$\n\nBut someone has suggested that this limit does not exist", "question_owner": "user62498", "question_link": "https://math.stackexchange.com/questions/5108134/evaluate-lim-x-to0-fracx2-sin-frac1x3x-sin-frac1xx-sin-frac1", "answer": { "answer_id": 5108139, "answer_text": "For\n\n$x\neq 0,\ x\neq \frac{1}{n\pi},\ n\in\mathbb{Z}\setminus\{0\},$\n\n$$\frac{x^2\sin\frac{1}{x}+3x\sin\frac{1}{x}}{x\sin\frac{1}{x}}$$\n\n$$=\frac{x^2 + 3x}{x}\left(\frac{\sin\frac{1}{x}}{\sin\frac{1}{x}}\right)$$\n\n$$=\left(\frac{x + 3}{1}\right) 1.$$\n\nNow you take\n\n$\lim_{x\to0}.$\n\nBut if you include\n\n$x= \frac{1}{n\pi},\ n\in\mathbb{Z}\setminus\{0\}$\n\n in the domain, then the limit does not exist (do you see why?).", "answer_owner": "Adam Rubinson", "is_accepted": false, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108198, "title": "Integration by substitution with $x$ in the denominator", "question_text": "I'm going to calculate the following indefinite integral\n\n$$\int 3x\sqrt{5x^2+7}dx$$\n\nusing the change of variable\n\n$u=5x^2+7$\n\n,\n\n$du=10xdx$\n\n.
From this last expression, can I solve for\n\n$dx$\n\n, that is,\n\n$\\dfrac{du}{10x}=dx$\n\n?\n\nIf we can solve for\n\n$x$\n\n, we have:\n\n$$\\int 3x\\sqrt{5x^2+7}dx=\\int 3x\\sqrt{u}\\frac{du}{10x};$$\n\ncan I eliminate\n\n$x$\n\n?", "question_owner": "Octavius", "question_link": "https://math.stackexchange.com/questions/5108198/integration-by-substitution-with-x-in-the-denominator", "answer": { "answer_id": 5108200, "answer_text": "In order to apply the substitution method, you can proceed as follows:\n\n\\begin{align*}\n\n\\int 3x\\sqrt{5x^{2} + 7}\\mathrm{d}x & = \\frac{3}{10}\\int10x\\sqrt{5x^{2} + 7}\\mathrm{d}x\\\\\n\n& = \\frac{3}{10}\\int\\sqrt{5x^{2} + 7}\\mathrm{d}(5x^{2} + 7)\\\\\n\n& = \\frac{3}{10}\\int u^{1/2}\\mathrm{d}u\\\\\n\n& = \\frac{1}{5}u^{3/2} + C\\\\\n\n& = \\frac{1}{5}(5x^{2} + 7)^{3/2} + C\n\n\\end{align*}", "answer_owner": "Átila Correia", "is_accepted": false, "score": 4, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108032, "title": "Evaluate $\\int \\frac{e^x [\\operatorname{Ei}(x) \\sin(\\ln x) - \\operatorname{li}(x) \\cos(\\ln x)]}{x \\ln x} \\, \\mathrm {dx}$", "question_text": "Evaluate:\n\n$$\\int \\frac{e^x [\\operatorname{Ei}(x) \\sin(\\ln x) - \\operatorname{li}(x) \\cos(\\ln x)]}{x \\ln x} \\, \\mathrm {dx}$$\n\nMy approach:\n\n$$\\int \\frac{e^x [\\operatorname{Ei}(x) \\sin(\\ln x) - \\operatorname{li}(x) \\cos(\\ln x)]}{x \\ln x} \\, \\mathrm {dx}$$\n\n$$\\to\\int\\frac{e^x\\operatorname{Ei}(x)\\sin(\\ln x)}{x \\ln x}-\\int\\frac{e^x\\operatorname{li}(x)\\cos(\\ln x)}{x \\ln x}$$\n\nObserve this term:\n\n$$\\int\\frac{e^x\\operatorname{Ei}(x)\\sin(\\ln x)}{x \\ln x}$$\n\nNotice\n\n$$\\frac{d}{dx}(\\operatorname{Ei}(x))=\\frac{e^x}{x}$$\n\nSo I tried some u-sub\n\nlike\n\n$\\frac{\\operatorname{Ei}(x)}{\\ln x}$\n\n,\n\n$\\frac{\\operatorname{li}(x)}{\\ln x}$\n\n but I think it's some other u-substitute. 
(I tried to show effort, but everything stops here. I created this before, but I forgot the trick.)", "question_owner": "Andre Lin", "question_link": "https://math.stackexchange.com/questions/5108032/evaluate-int-fracex-operatornameeix-sin-ln-x-operatornamelix", "answer": { "answer_id": 5108216, "answer_text": "Apologies for answering my own question:\n\nGiven:\n\n$$\int \frac{e^x [\operatorname{Ei}(x) \sin(\ln x) - \operatorname{li}(x) \cos(\ln x)]}{x \ln x} \, \mathrm {dx}$$\n\n$$\n\nf(x) = \frac{e^x}{\ln x} \left[ \operatorname{Ei}(x) \sin(\ln x) - \operatorname{li}(x) \cos(\ln x) \right]\n\n$$\n\nUsing the product rule:\n\n$$\n\nu = \frac{e^x}{\ln x}, \quad v = \operatorname{Ei}(x) \sin(\ln x) - \operatorname{li}(x) \cos(\ln x)\n\n$$\n\n$$f'(x) = u' \cdot v + u \cdot v'$$\n\nDifferentiate them separately:\n\n$$\n\nu' = \frac{e^x (\ln x - \frac{1}{x})}{(\ln x)^2}\n\n$$\n\n$$\n\nv'= \frac{e^x}{x} \sin(\ln x) + \frac{\operatorname{Ei}(x)}{x} \cos(\ln x) - \frac{1}{\ln x} \cos(\ln x) + \frac{\operatorname{li}(x)}{x} \sin(\ln x)\n\n$$\n\nCombine:\n\n$$\n\nf'(x) = \frac{e^x (\ln x - \frac{1}{x})}{(\ln x)^2} \cdot \left[ \operatorname{Ei}(x) \sin(\ln x) - \operatorname{li}(x) \cos(\ln x) \right] + \frac{e^x}{\ln x} \cdot \left[ \frac{e^x}{x} \sin(\ln x) + \frac{\operatorname{Ei}(x)}{x} \cos(\ln x) - \frac{1}{\ln x} \cos(\ln x) + \frac{\operatorname{li}(x)}{x} \sin(\ln x) \right]\n\n$$\n\nSimplify:\n\n$$\n\nf'(x) = \frac{e^x}{x \ln x} \left[ \operatorname{Ei}(x) \sin(\ln x) - \operatorname{li}(x) \cos(\ln x) \right]\n\n$$\n\n(Notice that this is the same as our original integrand)\n\nSo:\n\n$$\n\n\int \frac{e^x \left[ \operatorname{Ei}(x) \sin(\ln x) - \operatorname{li}(x) \cos(\ln x) \right]}{x \ln x} dx = \boxed{{\frac{e^x}{\ln x} \left[ \operatorname{Ei}(x) \sin(\ln x) - \operatorname{li}(x) \cos(\ln x) \right] + C}}\n\n$$", "answer_owner": "Andre Lin",
"is_accepted": true, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108182, "title": "Find the $n^{th}$ derivative of $f(x)=\frac{x}{x^{2}+a^{2}}$", "question_text": "I need clarity in finding out the\n\n$n^{th}$\n\n Derivative of\n\n$$f(x)=\frac{x}{x^{2}+a^{2}}$$\n\nMy Thought\n\nLet's Assume\n\n$x=a\tan\theta$\n\n$$\implies f(x)=\frac{a\tan\theta}{a^{2}\sec^{2}\theta}$$\n\n$$\implies f(x)=\frac{1}{a}(\sin\theta)(\cos\theta)$$\n\n$$\implies f(x)=\frac{1}{2a}(\sin2\theta)$$\n\nNow,\n\n$$f_1(x)=\frac{1}{2a}(2\cos2\theta)$$\n\n$$\implies f_2(x)=\frac{1}{2a}(-4\sin2\theta)$$\n\nTherefore,\n\n$$f_n(x)=\frac{1}{2a}\left(2^n\sin\left(\frac{n\pi}{2}+2\theta\right)\right)$$\n\n$$f_n(x)=\frac{1}{2a}\left(2^n\sin\left(\frac{n\pi}{2}+2\tan^{-1}\left(\frac{x}{a}\right)\right)\right)$$\n\nI Need Clarification on Whether this Approach is Correct or Wrong", "question_owner": "Bachelor", "question_link": "https://math.stackexchange.com/questions/5108182/find-the-nth-derivative-of-fx-fracxx2a2", "answer": { "answer_id": 5108204, "answer_text": "Using partial fractions, you can take advantage of the general rule\n\n$$ \dfrac{\mathrm d^n}{\mathrm dx^n} \frac{1}{x+b} = (-1)^n n!\n\n \frac{1}{(x+b)^{n+1}}.
$$\n\nSince we have\n\n$$\frac{x}{x^2+a^2} = \frac{\frac12}{x+\mathrm i a}\n\n +\frac{\frac12}{x-\mathrm i a}$$\n\nwe immediately get\n\n$$ \begin{split}\dfrac{\mathrm d^n}{\mathrm dx^n} \frac{x}{x^2+a^2}\n\n &= (-1)^n\frac{n!}{2} \left(\frac{1}{(x+\mathrm i a)^{n+1}} + \frac{1}{(x-\mathrm i a)^{n+1}}\right)\\\n\n & = (-1)^n \frac{n!}{(x^2+a^2)^{n+1}}\n\n \left(\sum_{k=0}^{\lfloor (n+1)/2\rfloor}\binom {n+1}{2k} (-1)^k a^{2k}x^{n+1-2k}\right).\n\n\end{split}$$\n\nThe last equality results from the cancellation of odd powers of\n\n$a$\n\nin the sum\n\n$(x+\mathrm i a)^{n+1}+(x-\mathrm i a)^{n+1}$\n\n.", "answer_owner": "Tom-Tom", "is_accepted": false, "score": 4, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4311494, "title": "Can someone explain the actual use of idea of limits in layman terms for me as an absolute beginner in calculus?", "question_text": "As a beginner in calculus I have always struggled in the area of limits, not when I solve higher-order thinking questions, but just in getting the basic idea and the notion of finding limits for a function. It would be a great relief if someone could help me with this query?", "question_owner": "ram kumar", "question_link": "https://math.stackexchange.com/questions/4311494/can-someone-explain-the-actual-use-of-idea-of-limits-in-layman-terms-for-me-as-a", "answer": { "answer_id": 4311549, "answer_text": "I disagree with the comments.\n\nFurther, since this is an interpretation question, I feel justified in providing an answer even though the OP (i.e. original poster) has shown no work.\n\nI will explain the notion of limits in the simplified world of single variable functions, where both the domain and range of the function is some subset of the Real Numbers.
This should give you a reasonable intuitive grasp of the idea behind the limit.\n\nThen, you will have to broaden your intuition to consider functions that have other domains or other ranges.\n\nThe first concept to consider is the notion of a\n\nneighborhood\n\n. The simplest example is to consider a fixed value\n\n$a \in \Bbb{R}$\n\n. Then, for a small positive value\n\n$\delta$\n\n, the neighborhood of radius\n\n$\delta$\n\n around the value\n\n$a$\n\n is regarded as the set of all\n\n$x \in \Bbb{R}$\n\n such that\n\n$-\delta < (x-a) < \delta.$\n\nTypically, the shorthand expression for this is\n\n$|x-a| < \delta.$\n\n Typically, in the definition of a limit, one is concerned with those values of\n\n$x$\n\n that are in the neighborhood of radius\n\n$\delta$\n\n around\n\n$a$\n\n, but where\n\n$x \neq a.$\n\nTypically, this is expressed as\n\n$0 < |x-a| < \delta.$\n\nThen, you have to understand the idea of (for a specific\n\n$\epsilon > 0$\n\n) the neighborhood of radius\n\n$\epsilon$\n\n around some fixed finite value\n\n$L$\n\n.\n\nBasically, this neighborhood is expressed as the set of all\n\n$y$\n\n, such that\n\n$|y - L| < \epsilon.$\n\nNow, you are ready for the intuitive definition of a limit.\n\nSuppose that you see the assertion that\n\n$\displaystyle \lim_{x \to a} f(x) = L$\n\n.\n\nAssigning the variable\n\n$y$\n\n to represent\n\n$f(x)$\n\n, what this assertion signifies is that for any\n\n$\epsilon > 0$\n\n there exists a\n\n$\delta > 0$\n\n such that\n\nIf\n\n$x$\n\n is in a neighborhood of radius\n\n$\delta$\n\n around\n\n$a$\n\n, and\n\n$x \neq a$\n\n,\n\nThen\n\n$y = f(x)$\n\n is in a neighborhood of radius\n\n$\epsilon$\n\n around\n\n$L$\n\n.\n\nMore formally, the assertion is written:\n\n$\displaystyle \lim_{x \to a} f(x) = L$\n\n signifies that\n\nFor all\n\n$\epsilon > 0$\n\n, there exists a\n\n$\delta > 0$\n\n (where the choice of\n\n$\delta$\n\n often depends on the choice of\n\n$\epsilon)$\n\nsuch that\n\n$0
< |x - a| < \\delta \\implies |f(x) - L| < \\epsilon.$\n\nAs a very simple concrete example, suppose that\n\n$f(x) = 2x$\n\n, and you are asked to prove that\n\n$\\displaystyle \\lim_{x\\to 2} f(x) = 4.$\n\nIt turns out that for this particular problem, you can specify\n\n$\\displaystyle \\delta = \\frac{\\epsilon}{2}.$\n\nThen, if\n\n$\\displaystyle 0 < |x - 2| < \\delta = \\frac{\\epsilon}{2}$\n\n then you can conclude that\n\n$|f(x) - 4| = |2x - 4| = 2|x - 2| < 2\\delta = \\epsilon.$\n\nThis constitutes a proof that\n\n$\\displaystyle \\lim_{x \\to 2} f(x) = 4 ~: ~f(x) = 2x.$\n\nThe foundation of the proof was that you were able to identify a relationship between\n\n$\\delta$\n\n and\n\n$\\epsilon ~\\left(\\text{i.e. that} ~\\displaystyle \\delta = \\frac{\\epsilon}{2}\\right)$\n\n that allowed the required constraint to be satisfied.", "answer_owner": "user2661923", "is_accepted": false, "score": 4, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 2701131, "title": "Find the value of $\\theta$ on $\\pi/2 \\le \\theta \\le \\pi$ at which the curve $r=\\theta - \\sin (3\\theta)$ is closest to the pole.", "question_text": "Find the value of $\\theta$ on $\\pi/2 \\le \\theta \\le \\pi$ at which the curve $r=\\theta - \\sin (3\\theta)$ is closest to the pole.\n\nHow can I approach this problem? I thought to find the values of theta where $r=0$, but apparently that's not right. Calculators are allowed.", "question_owner": "space", "question_link": "https://math.stackexchange.com/questions/2701131/find-the-value-of-theta-on-pi-2-le-theta-le-pi-at-which-the-curve-r", "answer": { "answer_id": 2701556, "answer_text": "You are right that the first step is to find where $r=0$. As you have found, the only solution is $\\theta=0$ which is outside your allowed values of $\\theta$, so this step fails. The given curve does not go through the origin.\n\nYour next step is to find where $r$ is a minimum. 
The function for $r$ is continuous so we can use the usual calculus methods. We find the derivative and find where it equals zero.\n\n$$\begin{align}\n\n0 & = \frac d{d\theta}\left( r \right) \\[2ex]\n\n & = \frac d{d\theta}\left( \theta-\sin(3\theta) \right) \\[2ex]\n\n & = 1 - 3\cos(3\theta) \\[2ex]\n\n\cos(3\theta) & = \frac 13 \\[2ex]\n\n3\theta & = 2k\pi\pm\cos^{-1}\left(\frac 13\right) \\[2ex]\n\n\theta & = \frac{2k\pi}3\pm\frac 13\cos^{-1}\left(\frac 13\right) \\[2ex]\n\n\end{align}$$\n\nThe only values of $\theta$ that fit in your required interval have $k=1$:\n\n$$\begin{align}\n\n\theta & = \frac{2\pi}3\pm\frac 13\cos^{-1}\left(\frac 13\right) \\[2ex]\n\n & \approx 1.6840752966129, \quad 2.5047149081735\n\n\end{align}$$\n\nI'll let you finish from here. Note that there is no value of $\theta$ that makes $r$ undefined, so we have found all the critical points. Find which of those two values of $\theta$ has the minimum value of $r$ (the other has a maximum value). Compare that value of $r$ with those at the endpoints of the given interval and find the absolute minimum of $r$ with its corresponding value of $\theta$. Ask if you need more help.\n\nHere is a polar graph of your problem, done on the TI-Nspire CX Graphing Calculator emulator. This confirms that the correct answer is the larger of the two values of $\theta$ above, $2.5047149081735$.
This graph also shows the corresponding value of $r$ and the Cartesian coordinates.", "answer_owner": "Rory Daulton", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108174, "title": "Different (but equivalent) expression of a pullback", "question_text": "Consider the map\n\n$\varphi: M \to N$\n\n,\n\n$x^i$\n\n a coordinate system on\n\n$M$\n\n and\n\n$x'^i$\n\n a coordinate system on\n\n$N$\n\n.\n\n$\alpha$\n\n is a form.\n\nI was given the \"fact\" that\n\n$$(\varphi^*\alpha)_i(p) =\frac{\partial x'^j}{\partial x^i}\big(p\big)\;\alpha_j(x')$$\n\n and\n\n$$(\varphi^*\alpha)_i(p) =\frac{\partial x^k}{\partial x'^i}\big(\varphi(p)\big)\;\alpha_k\big(\varphi(p)\big)$$\n\n are indeed\n\nthe same\n\n relation, even if \"viewed from different perspectives\".\n\nUnfortunately I could not reconcile them. Can you give a suggestion?", "question_owner": "Lo Scrondo", "question_link": "https://math.stackexchange.com/questions/5108174/different-but-equivalent-expression-of-a-pullback", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108048, "title": "Do you determine the number system of a definition (using = or :=) after evaluating, or is it declared beforehand?", "question_text": "When you have a definition (usually using the \"\n\n$:=$\n\n\" or the normal equality symbol \"\n\n$=$\n\n\") in math, do you determine the number system of the output/variable (usually on the LHS of the \"\n\n$:=$\n\n\" or \"\n\n$=$\n\n\" symbol) after evaluating the formula given for it (usually on the RHS of the definition/equality symbol), or do you already have to declare the number system for the output (LHS of equality) beforehand (like when you just state the definition.
So then after evaluating the formula on the RHS, we must find solutions that match our pre-declared number system for the output on the LHS)?\n\nI'm not sure, but I think that since it's a definition, it's defined as whatever the other thing/formula is equal to (and whatever number system it exists in)(on the RHS), so if the formula evaluates to a real or complex or infinite number, then the thing being defined (on the LHS) is also in the real or complex or extended real (for infinite) number systems (i.e., we found out the number systems after evaluating, and we didn't declare it beforehand). But I'm also confused because this contradicts what happens for functions. For example, if we are defining a function (like\n\n$y=\\sqrt{x})$\n\n (or using the := symbol,\n\n$y:=\\sqrt{x}$\n\n), then we must define the number system of the codomain (i.e., the output\n\n$f(x)$\n\n or\n\n$y$\n\n of the function that's being defined) beforehand (like\n\n$y \\in \\Bbb{R}$\n\n or\n\n$y \\in \\Bbb{C}$\n\n). So, for defining functions, the formula/rule for the function doesn't tell us its number system, and we have to declare it beforehand.\n\nAlso (similar question as above), let's say we have something like the limit definition of a derivative or an infinite sum (limit of partial sums). Then do we find the number system of the output after evaluating the limit (i.e., we find out after evaluating the limits that a derivative and infinite sum must be real numbers (or extended reals if the limit goes to infinity, right?)? Or do we have to declare the number system of the output beforehand, when we are just stating the definition (i.e., we must declare that a derivative and infinite sum must be in the real numbers from the beginning, and then we find solutions that exist in the reals by evaluating the limit, which would then verify our original assumption/declaration since we found solutions in the real numbers)? 
But then for this specific method (where we declare the number system beforehand), then if we get a limit of infinity, we define it to be DNE/undefined (since we usually like to work in a real number field), but our original declaration was that a derivative and infinite sum must be real numbers only. But from our formula (on the RHS) and from the definition of a limit, we can get either a real number or infinity (extended reals), so then how would this work (like would infinity be a valid value/solution or not, and would it be an undefined or defined answer)? So basically, whenever we have these types of definitions in math (like formulas), does that mean we find the number system of the output (what we're defining) after evaluating the formula, or do we declare the number system it has to be (then we find solutions in that number system using the formula) beforehand?\n\nAlso (another example related to the same question above), if we have a formula like\n\n$A=\\pi r^2$\n\n (or\n\n$A:=\\pi r^2$\n\n for a definition) (area of a circle), or any other formula (for example, arithmetic mean formula, density formula, velocity/speed formula, integration by parts formula, etc.), then do we determine the number system of the \"object being defined\" (on the LHS) after evaluating the formula (on the RHS), or is it declared beforehand (like for the whole equation or just the LHS object)? For example, for\n\n$A=\\pi r^2$\n\n (or\n\n$A:=\\pi r^2$\n\n), do we determine that area (\n\n$A$\n\n) must be a real number after finding that formula is also a real number (since if\n\n$r$\n\n is a real number, then\n\n$\\pi r^2$\n\n is also a real number based on real number operations) (similar to my explanation in paragraph 2 of how I think definitions work)? 
Or do we have to declare beforehand that area (\n\n$A$\n\n) must be a real number, and then we must find solutions from the formula (\n\n$\\pi r^2$\n\n) that are also real numbers (which is always true for this example since\n\n$\\pi r^2$\n\n is always real) for the equation/definition to be valid (similar to how functions and codomains work)?\n\nSorry for the long question, and if it's confusing. Please let me know if any clarification is needed. Any help regarding the assumptions of existence and number systems in equations/definitions/formulas would be greatly appreciated. Thank you!\n\nEDIT: I am adding these 3 options to my question to make it clearer:\n\nOption #1: Explicitly declaring the number system for the output:\n\n Like we declare beforehand that for the definition\n\n$A:=B$\n\n (or\n\n$A=B$\n\n) where\n\n$A$\n\n is the output and\n\n$B$\n\n is a formula,\n\n$A \\in \\Bbb{R}$\n\n, or we use functional-definition (like\n\n$f:\\Bbb{R} \\to \\Bbb{R}$\n\n, where we define the number system of the output (which would be\n\n$A$\n\n for this example) beforehand as well. 
We also have to declare the number system for the operations and numbers being used for the formula for\n\n$B$\n\n (i.e., we declare the general/ambient number system for the operations).\n\nOption #2: Implicitly declaring the number system for everything:\n\n Like for\n\n$A:=B$\n\n (or\n\n$A=B$\n\n), we declare that the general/ambient number system for the whole equation/definition to be\n\n$\\Bbb{R}$\n\n, so then this would include the operations in the formula for\n\n$B$\n\n, the output of\n\n$B$\n\n, and the value of\n\n$A$\n\n (everything in the equation).\n\nOption #3: Determining the number system for\n\n$A$\n\n after evaluating\n\n$B$\n\n (the RHS):\n\n Like if we have\n\n$A:=B$\n\n (or\n\n$A=B$\n\n, but for this example, this only applies to\n\n$A=B$\n\n (using an equality symbol), we declare that the general/ambient number system for\n\n$B$\n\n is\n\n$\\Bbb{R}$\n\n, so the operations and output for\n\n$B$\n\n must be in\n\n$\\Bbb{R}$\n\n, and since\n\n$A$\n\n is\n\ndefined to be equal to\n\n$B$\n\n (not just equal to\n\n$B$\n\n), then\n\n$A$\n\n must also be in\n\n$\\Bbb{R}$\n\n. Also, I think this option only applies where it is an explicit definition (\n\n$A:=B$\n\n), and usually does not apply for a general equality (\n\n$A=B$\n\n). However, it can sometimes apply to a general equality (\n\n$A=B$\n\n) only if it's similar to a formula or definition, not a relationship (like\n\n$V=IR$\n\n (Ohm's Law) or integration by parts (IBP is a relationship, not a formula/definition, since it's proven from the product rule, so all integrals have to exist beforehand, I think), since these are relationships between variables/quantities, so you need to know the number system for every variable beforehand (i.e., for\n\n$V=IR$\n\n, we need to know\n\n$V, I, R \\in \\Bbb{R}$\n\n, right?)).\n\nSo, which is correct from options 1, 2, and 3, or are all of them correct? 
Thank you!", "question_owner": "Aaditya Visavadiya", "question_link": "https://math.stackexchange.com/questions/5108048/do-you-determine-the-number-system-of-a-definition-using-or-after-evaluat", "answer": { "answer_id": 5108053, "answer_text": "There is no rule that answers your question. The point of written mathematics is to communicate between writer and reader. The writer must provide whatever is necessary. How much is necessary depends on how much context the reader and writer share.\n\nThat principle applies much more generally than when you are dealing with what you call \"different kinds of numbers\". Definitions and formulas appear in more advanced mathematics where the objects need not be numbers of any kind.\n\nThe definition of the derivative looks the same whether you are studying elementary calculus or complex analysis.\n\nIf you are solving a quadratic equation the context matters and the writer should tell you in advance if it's not clear from the surrounding material.\n\nIf you are writing formulas for the areas of geometric objects it's implicit that the variables represent real numbers.\n\nThe domain and the codomain are officially part of the definition of a function. If there's any doubt about what they are in any particular case the author should clarify in advance.", "answer_owner": "Ethan Bolker", "is_accepted": false, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1851459, "title": "Understanding this proof that $\lim\limits_{h\to 0}\frac{\cos(h)-1}{h}=0$", "question_text": "I need help understanding how this limit is proved?
:\n\nShow that $$\lim_{h\to 0} \frac{\cos (h)-1}{h}=0$$\n\nProof\n\n:\n\nUsing the half angle formula, $\cos h = 1-2 \sin^2(h/2)$\n\n$$\lim_{h\to 0} \frac{\cos (h)-1}{h}\\=\lim_{h\to 0}( -\frac{2 \sin^2(h/2)}{h})\\=-\lim_{\theta \to 0}\frac{\sin \theta}{\theta} \sin \theta\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{(Let $\theta=h/2$)} \\ = -(1)(0)\\=0$$\n\nI have no idea how this proof is done, so I apologize for the lack of my own thoughts in this question. I understand limits and know sin, cos, tan, but I am just very lost as to what they did in each step. Can someone please explain all the steps of the proof as well as the half-angle formula? Thanks!", "question_owner": "BlueMagic1923", "question_link": "https://math.stackexchange.com/questions/1851459/understanding-this-proof-that-lim-limits-h-to-0-frac-cosh-1h-0", "answer": { "answer_id": 1851473, "answer_text": "The simplest proof is this:\n\n$$\frac{\cos h-1}h=\frac{(\cos h-1)(\cos h+1)}{(\cos h+1)h}=\frac{\cos^2h-1}{(\cos h+1)h}=-\frac{\sin^2h}{(\cos h+1)h}=-\frac{\sin h}h\cdot\frac{\sin h}{\cos h+1}.$$\n\nThe first fraction tends to $1$, the second tends to $\dfrac 02=0$, hence the limit is $\color{red}0$.\n\nFor the proof you mention, at the third line, you should have\n\n$$=\lim_{h\to 0}\Bigl( -\frac{2 \sin^2(h/2)}{h}\Bigr)=\lim_{h\to 0}\Bigl( -\frac{\sin^2(h/2)}{h/2}\Bigr)=\dots$$", "answer_owner": "Bernard", "is_accepted": false, "score": 10, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4965083, "title": "Understanding \"Both differential and integral calculus make use of the notion of convergence of infinite series to a well-defined limit\".", "question_text": "I was reading the book \"Algorithms for Optimization\" and in the Introduction part of the book it is written that:\n\nModern calculus stems from the developments of Gottfried Wilhelm Leibniz (1646–1716) and Sir Isaac Newton
(1642–1727). Both differential and integral calculus make use of the notion of convergence of infinite series to a well-defined limit.\n\nI'm wondering what the last sentence means. I'm familiar with the Riemann integral for calculating definite integrals, but how are indefinite integrals and differentiation related to the \"notion of convergence of infinite series\"?", "question_owner": "user1380196", "question_link": "https://math.stackexchange.com/questions/4965083/understanding-both-differential-and-integral-calculus-make-use-of-the-notion-of", "answer": { "answer_id": 5108147, "answer_text": "Maybe this fact will go beyond the scope of the question but I think it might be helpful.\n\nLet\n\n$(B, \|\cdot\|)$\n\n be a normed vector space. These kinds of spaces naturally arise in analysis. In particular, we are more interested in complete normed vector spaces, called Banach spaces, in order to ensure that every Cauchy sequence has a limit. This property (completeness) turns out to be crucial when dealing with this kind of space, for example when working with\n\n$L^p$\n\n spaces (but there are many more examples of Banach spaces).\n\nNow, if we want to establish completeness for the normed space\n\n$B$\n\n we would have to prove that every Cauchy sequence converges to an element of the space. It turns out that this condition is equivalent to the requirement that every series that is absolutely convergent in norm actually has a well-defined limit.\n\nPlainly, given\n\n$(v_n) \in B$\n\n this condition can be rephrased as:\n\n$$\n\n\sum_{n=0}^\infty \|v_n\| < \infty \rightarrow \exists v\in B : \lim_{N \to \infty} \sum_{n=0}^N v_n = v\n\n$$\n\nThis simple fact somehow connects the notion of convergent sequence and series.
Indeed, it is only a straightforward application of the fact that for every Cauchy sequence\n\n$(a_n)_{n\in \mathbb{N}}$\n\n, possibly thinning the sequence, you may assume\n\n$\sum_{n=0}^\infty (a_{n+1}-a_n)$\n\n is absolutely convergent in norm.", "answer_owner": "Alessandro", "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1100368, "title": "Closed form for ${\large\int}_0^\infty\frac{x-\sin x}{\left(e^x-1\right)x^2}\,dx$", "question_text": "I'm interested in a closed form for this simple-looking integral:\n\n$$I=\int_0^\infty\frac{x-\sin x}{\left(e^x-1\right)x^2}\,dx$$\n\nNumerically,\n\n$$I\approx0.235708612100161734103782517656481953570915076546754616988...$$\n\nNote that if we try to split the integral into two parts, each with only one term in the numerator, then both parts will be divergent.", "question_owner": "Laila Podlesny", "question_link": "https://math.stackexchange.com/questions/1100368/closed-form-for-large-int-0-infty-fracx-sin-x-leftex-1-rightx2-dx", "answer": { "answer_id": 1100403, "answer_text": "$$\frac{x-\sin x}{x^2}=\sum_{n\geq 1}\frac{(-1)^{n+1}}{(2n+1)!}x^{2n-1},$$\n\nand since:\n\n$$ \int_{0}^{+\infty}\frac{x^{2n-1}}{e^x-1}\,dx = (2n-1)!\cdot \zeta(2n),$$\n\nwe have:\n\n$$\begin{eqnarray*} &&\int_{0}^{+\infty}\frac{x-\sin x}{x^2(e^x-1)}\,dx = \sum_{n\geq 1}\frac{(-1)^{n+1}}{2n(2n+1)}\zeta(2n)\\&=&\color{red}{\sum_{n\geq 1}\left(-1+n\arctan\frac{1}{n}+\frac{1}{2}\,\log\left(1+\frac{1}{n^2}\right)\right)}\\&=&\color{blue}{\log\sqrt{\frac{\sinh \pi}{\pi}}+\sum_{n\geq 1}\left(-1+n\arctan\frac{1}{n}\right)}.\tag{1}\end{eqnarray*} $$\n\nCombining this identity with\n\nrobjohn's answer to another question\n\n, we finally get:\n\n$$\color{purple}{\int_{0}^{+\infty}\frac{x-\sin
x}{x^2(e^x-1)}\\,dx=\\frac{1}{2}+\\frac{5\\pi}{24}-\\log\\sqrt{2\\pi}+\\frac{1}{4\\pi}\\operatorname{Li}_2(e^{-2\\pi})}.\\tag{2}$$\n\nOn the other hand, the identity claimed by user111187,\n\n$$ \\int_{0}^{+\\infty}\\frac{x-\\sin x}{x(e^x-1)} = \\gamma+\\Im\\log\\Gamma(1+i)\\tag{3} $$\n\nfollows from the\n\nintegral representation for the $\\log\\Gamma$ function\n\n and for the\n\nEuler-Mascheroni constant\n\n. By considering the Weierstrass product for the $\\Gamma$ function,\n\n$$\\Gamma(z+1) = e^{-\\gamma z}\\prod_{n\\geq 1}\\left(1+\\frac{z}{n}\\right)^{-1}e^{\\frac{z}{n}}$$\n\nwe have:\n\n$$ \\log\\Gamma(z+1) = -\\gamma z + \\sum_{n\\geq 1}\\left(\\frac{z}{n}-\\log\\left(1+\\frac{z}{n}\\right)\\right)$$\n\nso:\n\n$$ \\int_{0}^{+\\infty}\\frac{x-\\sin x}{x(e^x-1)}\\,dx = \\sum_{n\\geq 1}\\left(\\frac{1}{n}-\\arctan\\frac{1}{n}\\right).\\tag{4}$$", "answer_owner": "Jack D'Aurizio", "is_accepted": true, "score": 15, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108131, "title": "What is the fixed point of $\\sin(\\cos(\\tan(x))) = x$?", "question_text": "The fixed point of\n\n$\\sin(\\cos(\\tan(x))) = x$\n\n?", "question_owner": "Shan yu Liew", "question_link": "https://math.stackexchange.com/questions/5108131/what-is-the-fixed-point-of-sin-cos-tanx-x", "answer": { "answer_id": null, "answer_text": "(Нет ответов)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108129, "title": "How to prove it", "question_text": "Theorem (Riemann).\n\n If\n\n$f(x)$\n\n is Riemann integrable in the interval\n\n$a \\leq x \\leq b$\n\n, then:\n\n$$\\lim_{k \\to +\\infty} \\int_a^b f(x) \\sin kx \\; dx = 0 \\;.$$", "question_owner": "Wayne yeung", "question_link": "https://math.stackexchange.com/questions/5108129/how-to-prove-it", "answer": { "answer_id": null, "answer_text": "(Нет ответов)", "answer_owner": null, 
"is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108112, "title": "F(Z)= 1/(1+Z2) IS MEROMORPHIC OR NOT", "question_text": "F(Z)= 1/(1+Z2) IS MEROMORPHIC OR NOT", "question_owner": "PRAKASH K", "question_link": "https://math.stackexchange.com/questions/5108112/fz-1-1z2-is-meromorphic-or-not", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4429681, "title": "If $\int_n^m{f(x,y)}dy=g(x)$, is there a way to find or approximate $f(x,y)$ given $g(x)$?", "question_text": "If I'm given\n\n$f(x,y)$\n\n, when\n\n$$\int_n^m{f(x,y)}dy=g(x) ,$$\n\n then I know that I can at least approximate\n\n$g(x)$\n\n using a Riemann sum. However, if I am instead given\n\n$g(x)$\n\n I don't know how to even approximate\n\n$f(x,y)$\n\n, other than by trying out every possible function I think that\n\n$f(x,y)$\n\n might be.
Is there a way to find or approximate\n\n$f(x,y)$\n\n that's better than guess and check given\n\n$g(x)$\n\n?", "question_owner": "Anders Gustafson", "question_link": "https://math.stackexchange.com/questions/4429681/if-int-nmfx-ydy-gx-is-there-a-way-to-find-or-approximate-fx-y-giv", "answer": { "answer_id": 4429684, "answer_text": "This equation is far from uniquely determining\n\n$f$\n\n, and there are many solutions.\n\nFor example, if\n\n$\\chi:[n,m]\\to\\mathbb R$\n\n is a continuous function whose integral is\n\n$1$\n\n (and there are many of those), then\n\n$f(x,y) = g(x) \\chi(y)$\n\n is a solution.", "answer_owner": "SolubleFish", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108078, "title": "Reverse-engineering for an integral by setting the function equal to the quotient rule?", "question_text": "I'm currently learning integrals (antiderivative is what we're calling it in class). It's my first day (It's very fun!). I'm trying to solve the integral of\n\n$$\\frac{(x+1)^2}{3x}$$\n\n I understand you can just simplify the terms of\n\n$\\frac{(x+1)^2}{3x}$\n\n into\n\n$\\frac{x^2}{3x} + \\frac{2x}{3x} + \\frac{1}{3x}$\n\n and then easily find the integral, but I'm trying to do it a different way using the quotient rule. I want to set g'h - gh' =\n\n$(x+1)^2$\n\n and\n\n$h^2$\n\n = 3x and then 'reverse engineer' from there. Any way to do it? My professor tried it and was unable to do it, she said it wasn't possible because you can get h and h', but not g and g'. But I want to understand why. I really feel like this more 'rigorous way' should be possible. 
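A small numerical check (mine, not from the answer) of SolubleFish's non-uniqueness construction: if $\chi$ integrates to $1$ on $[n,m]$, then $f(x,y)=g(x)\chi(y)$ satisfies $\int_n^m f(x,y)\,dy=g(x)$. The concrete choices $n=0$, $m=1$, $g(x)=x^2$, $\chi(y)=6y(1-y)$ below are my own illustration.

```python
import math

def g(x):
    return x * x

def chi(y):
    # a continuous function on [0, 1] whose integral is 1
    return 6.0 * y * (1.0 - y)

def f(x, y):
    # SolubleFish's construction: f(x, y) = g(x) * chi(y)
    return g(x) * chi(y)

def simpson(fun, a, b, n):
    h = (b - a) / n
    return (fun(a) + fun(b)
            + 4 * sum(fun(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
            + 2 * sum(fun(a + 2 * i * h) for i in range(1, n // 2))) * h / 3

# ∫_0^1 f(x, y) dy should reproduce g(x) at each fixed x
errs = [abs(simpson(lambda y: f(x, y), 0.0, 1.0, 1000) - g(x))
        for x in (0.0, 0.3, 0.7, 1.5)]
print(max(errs))
```

Since the $y$-integrand is a polynomial of degree 2, Simpson's rule is exact here up to roundoff.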
Or maybe I'm just very confused/lost...\n\nThanks!", "question_owner": "Shaheer Zaighum", "question_link": "https://math.stackexchange.com/questions/5108078/reverse-engineering-for-an-integral-by-setting-the-function-equal-to-the-quotien", "answer": { "answer_id": 5108083, "answer_text": "This idea is of no help, because knowing\n\n$h$\n\n (\n\n$=\\pm\\sqrt{3x}$\n\n) and\n\n$k$\n\n (\n\n$=(x+1)^2$\n\n), the standard method to solve\n\n$g'h-gh'=k$\n\n is to let\n\n$g=fh$\n\n and look for\n\n$f$\n\n such that\n\n$(fh)'h-(fh)h'=k$\n\n, i.e.\n\n$f'=k/h^2$\n\n, which leads you back to your initial problem.", "answer_owner": "Anne Bauval", "is_accepted": true, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107693, "title": "How to extend this function into positive real numbers from natural numbers", "question_text": "So consider the following sum\n\n$$\n\nH(n) = \\sum_{k=1}^{n} \\frac{1}{k}\n\n$$\n\nIf you consider this sum as a function of n, the domain of this function is natural numbers.\n\nHowever, the domain of this function can be extended into positive real numbers using Euler's formula\n\n$$H(n) = \\int_{0}^{1} \\frac{1 - x^{n}}{1 - x} \\, dx$$\n\nThis function above will be identical to the first one for positive integers, but will also be defined for positive real numbers.\n\nThe reason this works is that the expression under the integral is just a sum of geometric sequence formula, and integrating from 0 to 1 gives the sum of harmonic series.\n\n$$\\int_{0}^{1} \\left( 1 + x + x^{2} + x^{3} + \\ldots + x^{n-2} + x^{n-1} \\right) \\, dx$$\n\nBut writing this in the form like this\n\n$\\int_{0}^{1} \\frac{1 - x^{n}}{1 - x} \\, dx$\n\n, then allows n to be any positive real number.\n\nI was wondering if there is any similar (or not so similar) way to extend a more complicated sum function into positive real numbers from natural numbers. 
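A numerical illustration (my own) of Anne Bauval's reduction: with $h(x)=\sqrt{3x}$ and $k(x)=(x+1)^2$, writing $g=fh$ gives $g'h-gh'=f'h^2$, so the condition $g'h-gh'=k$ is exactly $f'=k/h^2=(x+1)^2/(3x)$, the original integrand; its antiderivative $f(x)=(x^2/2+2x+\ln x)/3$ is checked below with finite differences.

```python
import math

def h(x):
    return math.sqrt(3.0 * x)

def k(x):
    return (x + 1.0) ** 2

def f(x):
    # antiderivative of k/h^2 = (x+1)^2/(3x), i.e. the original integral
    return (x * x / 2 + 2 * x + math.log(x)) / 3.0

def g(x):
    # Anne Bauval's substitution g = f * h
    return f(x) * h(x)

def deriv(fun, x, eps=1e-6):
    # central finite difference
    return (fun(x + eps) - fun(x - eps)) / (2 * eps)

# check that g'h - g h' reproduces k(x) = (x+1)^2 at sample points
residuals = [abs(deriv(g, x) * h(x) - g(x) * deriv(h, x) - k(x))
             for x in (0.5, 1.0, 2.0, 5.0)]
print(max(residuals))
```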
Here is the function:\n\n$$P(n) \\;=\\; \\sum_{k=1}^{n+1} \\frac{i^{k}}{(z+i)^{k}\\,\\Gamma(n+2-k)}$$\n\n$i$\n\n is the imaginary unit and\n\n$z$\n\n is just some number. I understand that this expression is much more complicated, than the previous one, but I want to see if I can coherently define for example\n\n$P(2.5)$\n\n,\n\n$P(3.2)$\n\n, etc in terms of z.\n\nI tried using an approach similar to the Euler's approach in harmonic sum case, but could not find an integral which would fit. But maybe I am missing something. If there is any other, more advanced approach I can take, I would gladly look into it as well.", "question_owner": "Egor Zaytsev", "question_link": "https://math.stackexchange.com/questions/5107693/how-to-extend-this-function-into-positive-real-numbers-from-natural-numbers", "answer": { "answer_id": 5107735, "answer_text": "As said in comments, the incomplete gamma function or, better, the exponential integral function are the solutions.\n\n$$P_n(z) \\;=\\; \\sum_{k=1}^{n+1} \\frac{i^{k}}{(z+i)^{k}\\,\\Gamma(n+2-k)}=\\frac{e^{1-i z} (1-i z)\\,\\, E_{-(n+1)}(1-iz)-1}{\\Gamma (n+2)}\\tag 1$$\n\nDefining\n\n$t=(1-iz)$\n\n the expression is just\n\n$$\\frac {e^t}{t^{n+1} }\\,\\frac{\\Gamma (n+2,t)}{\\Gamma (n+2)}$$\n\nI do not think that interpolation is required since the\n\n$n$\n\n in the summation does not play the same role as the\n\n$n$\n\n in the function.\n\nFor a random test, using\n\n$z=\\pi$\n\n, interpolation of the function using for knots\n\n$n=1,2,\\cdots,10$\n\n (this is a very wide range).\n\nThe function gives\n\n$$P_{2.5}(\\pi)=-0.0529129 + 0.103660 \\,i$$\n\nSpline interpolation gives\n\n$-0.0521662 + 0.106454\\,i$\n\nHermite interpolation gives\n\n$ -0.0521662 +0.108183 \\,i$\n\nSimilarly, the function gives\n\n$$P_{3.2}(\\pi)=-0.0357736 + 0.0351827\\, i$$\n\nSpline interpolation gives\n\n$ -0.0360989+0.0345025 \\,i$\n\nHermite interpolation gives\n\n$-0.0365992+ 0.0355283 \\,i$", "answer_owner": "Claude Leibovici", "is_accepted": 
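A numerical check (mine, not in the thread) of Claude Leibovici's closed form (1) for positive integer $n$, where $t=1-iz$, $t\,E_{-(n+1)}(t)=\Gamma(n+2,t)/t^{n+1}$, and the upper incomplete gamma of integer order reduces to the finite sum $\Gamma(n+2,t)=(n+1)!\,e^{-t}\sum_{k=0}^{n+1}t^k/k!$.

```python
import cmath
import math

def P(n: int, z: complex) -> complex:
    # the sum from the question: Σ_{k=1}^{n+1} i^k / ((z+i)^k Γ(n+2-k))
    return sum(1j**k / ((z + 1j)**k * math.gamma(n + 2 - k))
               for k in range(1, n + 2))

def closed_form(n: int, z: complex) -> complex:
    # (e^{t}·Γ(n+2, t)/t^{n+1} - 1)/Γ(n+2) with t = 1 - i z, using the
    # finite form of Γ(n+2, t) valid for integer n
    t = 1 - 1j * z
    upper_gamma = math.factorial(n + 1) * cmath.exp(-t) * \
        sum(t**k / math.factorial(k) for k in range(n + 2))
    return (cmath.exp(t) * upper_gamma / t**(n + 1) - 1) / math.factorial(n + 1)

z = math.pi
err = max(abs(P(n, z) - closed_form(n, z)) for n in (1, 2, 3, 5))
print(err)
```

For integer $n$ the two expressions agree to roundoff; the interesting content of the answer is that the right-hand side also makes sense for non-integer $n$.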
true, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5069339, "title": "Alternatives or generalisations of $\\int_0^1 \\frac{\\ln \\left(x^2-x+1\\right)}{x \\ln x} d x$", "question_text": "The beautiful result of the integral\n\n$$\n\n\\int_0^1 \\frac{\\ln \\left(x^2-x+1\\right)}{x \\ln x} d x=\\ln 2\\ln3\n\n$$\n\nattracts me to tackle it. Noting that\n\n$x^3+1=(x+1)(x^2-x+1)$\n\n, we can split the integral into two parts as:\n\n$$\n\n\\begin{aligned}\n\n\\int_0^1 \\frac{\\ln \\left(x^2-x+1\\right)}{x \\ln x} d x\n\n& =\\int_0^1 \\frac{\\ln \\left(x^3+1\\right)-\\ln(x+1)}{x \\ln x} d x=J(3)-J(1),\n\n\\end{aligned}\n\n$$\n\nwhere\n\n$J(a)=\\int_0^1 \\frac{\\ln \\left(x^a+1\\right)-\\ln \\left(x+1\\right)}{x \\ln x} d x$\n\n whose derivative w.r.t.\n\n$a$\n\n is\n\n$$\n\n\\begin{aligned}\n\nJ^{\\prime}(a) & =\\int_0^1 \\frac{x^a \\ln x}{x \\ln x\\left(x^a+1\\right)} d x \\\\\n\n& =\\int_0^1 \\frac{x^{a-1}}{x^a+1} d x \\\\\n\n& =\\frac{1}{a}\\left[\\ln \\left(x^a+1\\right)\\right]_0^1 \\\\\n\n& =\\frac{1}{a}\\ln 2\n\n\\end{aligned}\n\n$$\n\nIntegrating back yields\n\n$$\n\n\\int_0^1 \\frac{\\ln \\left(x^2-x+1\\right)}{x \\ln x} d x =J(3)-J(1)=\\int_1^3 J^{\\prime}(a) da =\\ln 2 \\int_1^3 \\frac{1}{a} d a =\\ln 2 \\ln 3\n\n$$\n\nGeneralisation 1\n\nFor any natural number\n\n$n$\n\n,\n\n$$\n\n\\begin{aligned}\n\nI_n & =\\int_0^1 \\frac{\\ln \\left(x^{2n}-x^{2 n-1}+x^{2 n-2}-\\cdots+1\\right)}{x \\ln x} d x \\\\\n\n& =\\int_0^1 \\frac{\\ln \\left(x^{2 n+1}+1\\right)-\\ln (x+1)}{x \\ln x} d x\\\\&=\n\n\\int_1^{2 n+1} \\frac{1}{a} \\ln 2 d a\\\\&=\\ln 2 \\ln (2 n+1)\n\n\\end{aligned}\n\n$$\n\nFor example,\n\n$$\n\n\\int_0^1 \\frac{\\ln \\left(x^8-x^7+x^6-x^5+x^4-x^3+x^2-x+1\\right)}{x \\ln x}dx=\\ln 2\\ln 9\n\n$$\n\nGeneralisation 2\n\nFor any natural number\n\n$n$\n\n,\n\n$$\n\n\\int_0^1 \\frac{\\ln (x^{2n}-x^n+1)}{x \\ln x} d x = \\int_0^1 \\frac{\\ln \\left(x^{3 n}+1\\right)-\\ln \\left(x^n+1\\right)}{x \\ln 
x} d x =\\int_n^{3 n} \\frac{1}{a} \\ln 2 d a =\\ln 2 \\ln 3\n\n$$\n\nwhich is\n\nindependent\n\n of the choice of\n\n$n$\n\n.\n\nMy question\n\nAre there any alternatives or generalisations of the integral?\n\nYour comments and alternatives/generalisations are highly appreciated.", "question_owner": "Lai", "question_link": "https://math.stackexchange.com/questions/5069339/alternatives-or-generalisations-of-int-01-frac-ln-leftx2-x1-rightx", "answer": { "answer_id": 5069666, "answer_text": "Here is a result I found (which covers all OP's generalisations):\n\n$$\\forall n\\ge2, V_n:=\\int_{0}^{1}\\left(\\frac{\\ln\\Phi_n(x)}{x\\ln x}+\\frac{\\Lambda_1(n)}{1-x}\\right)dx=\\frac{1}{2}\\Lambda_2(n)$$\n\nWhere\n\n$\\Phi_n(x)$\n\n is the\n\n$n$\n\n-th cyclotomic polynomial,\n\n$\\Lambda_{k}(n)$\n\n is the generalized Von Mangoldt\n\nfunction\n\n.\n\nProof:\n\nFirst, consider the integral\n\n$$I(s)=\\int_{0}^{1}\\left(\\frac{\\ln(1-x^s)}{x\\ln x}+\\frac{\\ln s}{1-x}-\\frac{\\ln(1-x)}{\\ln x}\\right)dx$$\n\nDifferentiate w.r.t.\n\n$s$\n\n$$I'(s)=\\int_{0}^{1}\\left(\\frac{-x^{s-1}}{1-x^{s}}+\\frac{1}{s(1-x)}\\right)dx=\\int_{0}^{1}\\sum_{k=0}^{\\infty}\\left(\\frac{1}{s}x^k-x^{sk+s-1}\\right)dx$$\n\n$$=\\lim_{x\\to 1^{-}}\\sum_{k=0}^{\\infty}\\frac{x^{k+1}-x^{s(k+1)}}{s(k+1)}=\\frac{1}{s}\\lim_{x\\to 1^{-}}\\ln\\left(\\frac{1-x^s}{1-x}\\right)=\\frac{\\ln s}{s}$$\n\nNow, integrate to get\n\n$I(s)$\n\n$$I(s)=I(1)+\\int_{1}^{s}I'(t)dt=I(1)+\\frac{1}{2}\\ln^2s$$\n\nSince\n\n$n\\ge2,\\sum_{d|n}\\mu(d)=0$\n\n and\n\n$\\Phi_n(x)=\\prod_{d|n}(1-x^d)^{\\mu(\\frac{n}{d})}$\n\n , we have:\n\n$$V_n=\\int_{0}^{1}\\left(\\frac{\\ln\\Phi_n(x)}{x\\ln x}+\\frac{\\Lambda_1(n)}{1-x}\\right)dx=\\sum_{d|n}\\mu\\left(\\frac{n}{d}\\right)I(d)$$\n\n$$=\\sum_{d|n}\\mu\\left(\\frac{n}{d}\\right)\\left(I(1)+\\frac{1}{2}\\ln^2 d\\right)=\\frac{1}{2}\\Lambda_2(n)$$\n\nas desired. 
We can also define\n\n$V_1:=I(1)=\int_{0}^{1}\frac{\psi^{(0)}(x+1)+\gamma}{x}dx=1.2577468869...$\n\nExample:\n\n$$V_6=\int_{0}^{1}\left(\frac{\ln\Phi_6(x)}{x\ln x}+\frac{\Lambda_1(6)}{1-x}\right)dx=\int_{0}^{1}\frac{\ln(x^2-x+1)}{x\ln x}dx$$\n\n$$=\frac{1}{2}\Lambda_2(6)=\frac{1}{2}\sum_{d|6}\mu\left(\frac{6}{d}\right)\ln^2 d=\ln(2)\ln(3)$$\n\n For a bonus (see this\n\nanswer\n\n), we can calculate\n\n$\Lambda_{2}(n)$\n\n as follows:\n\n$$\Lambda_{2}(n)=\sum_{d|n}\Lambda_{1}(d)\Lambda_{1}\left(\frac{n}{d}\right)$$\n\nFinally, there is a nice example that I think is worth mentioning (proof left as exercise)\n\n$$\int_{0}^{1}\frac{\ln(x^8+x^7-x^5-x^4-x^3+x+1)}{x\ln x}dx=0$$", "answer_owner": "user953715", "is_accepted": false, "score": 8, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108054, "title": "When showing differentiability at the split of a piecewise function, can one simply use the derivatives of each piece?", "question_text": "Background\n\nLet's say you have a piecewise function like\n\n$$\displaystyle\n\nf(x) = \begin{cases}\n\n x^2+2 & x<0 \\\n\n 2e^x & x \geq 0\n\n \end{cases}\n\n$$\n\nYou may assume continuity is already proven.\n\nGoal\n\nProve or disprove differentiability at\n\n$x=0$\n\n.\n\nMethod 1 - The shortcut\n\nOne easy way is to differentiate the two pieces, and show that they do not equal each other at\n\n$x=0$\n\n$$\displaystyle\n\nf'(x) = \begin{cases}\n\n 2x & x<0 \\\n\n 2e^x & x \geq 0\n\n \end{cases}\n\n$$\n\nEvaluated at\n\n$x=0$\n\n the two cases give different values, thus no differentiability at the split.\n\nMethod 2 - Rigor\n\nEvaluate the limit definition of the derivative from LHS and RHS.\n\nI.e., one would show that\n\n$$\displaystyle\n\n\lim\limits_{h \to 0^-} \frac{f(0+h)-f(0)}{h} \quad \neq \quad \lim\limits_{h \to 0^+} \frac{f(0+h)-f(0)}{h}\n\n$$\n\nand get the same result.\n\nQuestion\n\nIs method 1
always sufficient?", "question_owner": "Alec", "question_link": "https://math.stackexchange.com/questions/5108054/when-showing-differentiability-at-the-split-of-a-piecewise-function-can-one-sim", "answer": { "answer_id": 5108056, "answer_text": "I think that your\n\n$f'(x) = \begin{cases} 2x, x < 0 \\ 2e^x, x \ge 0 \end{cases}$\n\n is incorrect because if it were true then the formula says\n\n$f'(0) = 2e^{0} = 2$\n\n. But\n\n$f'(x)$\n\n does not exist at\n\n$x = 0$\n\n, and your method\n\n$2$\n\n just proved that it doesn't exist. So method\n\n$1$\n\n you talked about is not sufficient.", "answer_owner": "Wang YeFei", "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1162663, "title": "Stuck on proving $\int_{-\infty}^\infty \cos(\frac{\pi}{a}x)\cos(\frac{3\pi}{a} x) \, \mathrm{d}x$ = 0", "question_text": "Can someone please help me to show how\n\n$$\int_{-\infty}^\infty \cos(\frac{\pi}{a}x)\cos(\frac{3\pi}{a} x) \, \mathrm{d}x = 0$$\n\nAttempt:\n\nTrig Identity yields\n\n$$= \frac{1}{2} \int_{-\infty}^\infty \cos(\frac{4\pi}{a}x) + \cos(\frac{2\pi}{a} x) \, \mathrm{d}x$$\n\n$$= \frac{a}{2} (\frac{\sin(\frac{4\pi}{a}x)}{4\pi} + \frac{\sin(\frac{2\pi}{a}x)}{2\pi}) $$ evaluated from $-\infty$ to $\infty$\n\nWhat is a nontrivial way to show that the last expression is zero?\n\nMy course notes say something about stretching of the sine function, not good enough for me.", "question_owner": "Olórin", "question_link": "https://math.stackexchange.com/questions/1162663/stuck-on-proving-int-infty-infty-cos-frac-piax-cos-frac3-pia", "answer": { "answer_id": 1162688, "answer_text": "The limit does not exist.\n\nHowever, in the theory of generalized functions (i.e., distribution theory), the limit\n\n$\lim_{x\to \infty} \sin(ax) = 0$\n\n.\n\nNOTE:\n\nThis is NOT a classical limit, but has a rigorous interpretation in the context of
distribution theory. Formal (non-rigorous) application of distribution theory is used pervasively by physicists and engineers to obtain results (usually correct) very quickly without the need to enforce rigor. Examples include formal applications of the \"Dirac Delta\" and its derivatives and extending the Fourier transform to the space of objects (tempered distributions) that are not\n\n$L^1$\n\n or\n\n$L^2$\n\n functions (e.g., the Fourier transforms of\n\n$1$\n\n,\n\n$H$\n\n,\n\n$|x|^\\alpha$\n\n).", "answer_owner": "Mark Viola", "is_accepted": true, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5103316, "title": "$\\text{erfc}$ Integral", "question_text": "Is there any closed form for\n\n$$I(m,n) = \\int_{0}^{\\infty} \\text{erfc}^n(x^m) \\mathrm dx \\tag1$$\n\nI was able to obtain a closed form when\n\n$n = 1$\n\n.\n\n\\begin{align}\n\nI(m, 1) &= \\int_{0}^{\\infty} \\text{erfc}(x^m) \\mathrm dx \\tag2\\\\ &= \\left[ x \\cdot \\text{erfc}(x^m) \\right]_{0}^{\\infty} - \\int_{0}^{\\infty} x \\left( -\\frac{2m}{\\sqrt{\\pi}} x^{m-1} e^{-x^{2m}} \\right) \\mathrm dx \\tag3 \\\\\n\n&= \\int_{0}^{\\infty} \\frac{2m}{\\sqrt{\\pi}} x^m e^{-x^{2m}} \\mathrm dx \\tag4 \\\\\n\n&= \\frac{2m}{\\sqrt{\\pi}} \\int_{0}^{\\infty} (u^{1/2}) (e^{-u}) \\left( \\frac{\\mathrm du}{2m u^{1 - 1/(2m)}} \\right) \\tag5 \\\\\n\n&= \\frac{1}{\\sqrt{\\pi}} \\int_{0}^{\\infty} u^{(1/(2m) - 1/2)} e^{-u} \\mathrm du \\tag6 \\\\\n\n&= \\frac{1}{\\sqrt{\\pi}} \\Gamma\\left(\\frac{1}{2} + \\frac{1}{2m}\\right) \\tag7\n\n\\end{align}\n\nWhere in\n\n$(2)$\n\n, I used integration by parts; in\n\n$(4)$\n\n the substitution\n\n$u = x^{2m}$\n\n was performed; and in\n\n$(6)$\n\n, I used the definition of the\n\n$\\Gamma$\n\n function. 
Thus the case\n\n$m=1$\n\n admits a rather simple closed form.\n\nHere is the solution for\n\n$I(0.5, 4)$\n\n I saw on\n\nInstagram\n\n.\n\n\\begin{align}\n\nI &= \\int_{0}^{\\infty} \\text{erfc}^4(\\sqrt{x}) \\mathrm dx \\\\ &= \\frac{16}{\\pi^2} \\int_{0}^{\\infty} \\int_{0}^{1} \\int_{0}^{1} \\frac{\\exp \\left ( -z \\left \\{ 2 + x^{-2} + y^{-2} \\right \\}\\right )}{(1+x^2)(1+y^2)} \\mathrm dx \\, \\mathrm dy \\, \\mathrm dz \\\\\n\n&= \\frac{16}{\\pi^2}\\int_{0}^{1} \\int_{0}^{1} \\frac{1}{(1+x^2)(1+y^2)(2 + x^{-2} + y^{-2})} \\mathrm dx \\, \\mathrm dy \\\\\n\n&= \\frac{16}{\\pi^2} \\int_{0}^{1} \\frac{y^2}{(1+y^2)^2} \\left (\\frac{\\pi}{4} - \\frac{1}{\\sqrt{2 + y^{-2}}} \\tan^{-1} \\sqrt{2 + y^{-2}} \\right ) \\mathrm dy \\\\\n\n&= \\frac{4}{\\pi}I_1 - \\frac{16}{\\pi^2} I_2\n\n\\end{align}\n\nwhere\n\n$I_1$\n\n is the nicer\n\n$y$\n\n-looking integral and\n\n$I_2$\n\n is the scarier\n\n$\\tan^{-1}$\n\n integral.\n\n$I_1 = \\pi/8 - 1/4$\n\n is easily solved.\n\n$I_2$\n\n is harder... By making the substitution\n\n$v = \\sqrt{2 + y^{-2}}$\n\n, some algebra, then the substitution\n\n$v = \\sqrt{2} \\cosh t$\n\n, some integration by parts, ... , one is able to deduce the beautiful result that\n\n$$I = \\frac12 - \\frac{1}{\\pi} \\left (6 - \\frac{8}{\\sqrt{3}} \\right )$$\n\nHere\n\n is a related post.\n\nThis led me to believe there could be some general form to the integral\n\n$(1)$\n\n. If not, I would still appreciate the evaluation of a specific subcase of\n\n$I(m,n)$\n\n. 
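A numerical spot-check (mine, not from the post) of the OP's formula (7), $I(m,1)=\frac{1}{\sqrt\pi}\,\Gamma\!\big(\tfrac12+\tfrac{1}{2m}\big)$, here for $m=2$, i.e. $\int_0^\infty \operatorname{erfc}(x^2)\,dx=\Gamma(3/4)/\sqrt\pi$.

```python
import math

def simpson(f, a, b, n):
    h = (b - a) / n
    return (f(a) + f(b)
            + 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
            + 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))) * h / 3

m = 2
# erfc(x^m) decays like e^{-x^{2m}}, so truncating at 8 is harmless
numeric = simpson(lambda x: math.erfc(x**m), 0.0, 8.0, 40_000)
closed = math.gamma(0.5 + 1.0 / (2 * m)) / math.sqrt(math.pi)
print(numeric, closed)  # both ≈ 0.6914
```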
Thanks", "question_owner": "Maxime Jaccon", "question_link": "https://math.stackexchange.com/questions/5103316/texterfc-integral", "answer": { "answer_id": 5103633, "answer_text": "$\\newcommand{\\Li}{\\mathrm{Li}}\n\n\\newcommand{\\logr}[1]{\\log\\left(#1\\right)}\n\n\\newcommand{\\HypF}[4]{{}_{2}F_{1}\\left(\\begin{array}{cc} {#1} ,{#2} \\\\{#3} \\end{array} ;{#4} \\right)}\n\n\\newcommand{\\HypthreeFtwo}[6]{{}_{3}F_{2}\\left(\\begin{array}{cc} {#1} ,{#2} ,{#3} \\\\ {#4} , {#5}\\end{array};{#6}\\right)}\n\n\\renewcommand{\\a}{\\alpha}\n\n\\renewcommand{\\b}{\\beta}\n\n\\newcommand{\\Res}{\\mathbf{Res}}\n\n\\newcommand{\\Z}{\\mathbb{Z}}\n\n\\newcommand{\\Q}{\\mathbb{Q}}\n\n\\newcommand{\\N}{\\mathbb{N}}\n\n\\newcommand{\\R}{\\mathbb{R}}\n\n\\newcommand{\\C}{\\mathbb{C}}\n\n\\newcommand{\\am}{\\mathrm{am}}\n\n\\newcommand{\\sn}{\\mathrm{sn}}\n\n\\newcommand{\\cn}{\\mathrm{cn}}\n\n\\newcommand{\\dn}{\\mathrm{dn}}\n\n\\newcommand{\\ns}{\\mathrm{ns}}\n\n\\newcommand{\\nc}{\\mathrm{nc}}\n\n\\newcommand{\\nd}{\\mathrm{nd}}\n\n\\newcommand{\\scn}{\\mathrm{sc}}\n\n\\newcommand{\\cs}{\\mathrm{cs}}\n\n\\newcommand{\\sd}{\\mathrm{sd}}\n\n\\newcommand{\\ds}{\\mathrm{ds}}\n\n\\newcommand{\\cd}{\\mathrm{cd}}\n\n\\newcommand{\\dc}{\\mathrm{dc}}\n\n\\newcommand{\\dilogarithm}[1]{\\mathrm{Li}_2\\left({#1} \\right) }\n\n\\newcommand{\\trilogarithm}[1]{\\mathrm{Li}_3\\left({#1} \\right) }\n\n\\newcommand{\\polylogarithm}[2]{\\mathrm{Li}_{#1}\\left(#2\\right)}\n\n\\newcommand{\\risingfactorial}[2]{{#1}^{\\overline{#2}}\n\n}\n\n\\newcommand{\\fallingfactorial}[2]{{#1}^{\\underline{#2}}\n\n}\n\n\\renewcommand{\\sl}[1]{\\mathrm{sl}{(#1)}}\n\n\\newcommand{\\lem}{\\varpi}\n\n\\newcommand{\\erf}{\\mathrm{erf}}\n\n\\newcommand{\\erfc}{\\mathrm{erfc}}\n\n\\newcommand{\\cadd}[1][0pt]{\\mathbin{\\genfrac{}{}{#1}{0}{}{+}}}\n\n\\newcommand{\\Cdots}[1][0pt]{\\genfrac{}{}{#1}{0}{\\mbox{}}{\\cdots}}$\n\nAuthor of the solution of\n\n$I(0.5,4)$\n\n here. 
You probably saw the solution on my\n\nInstagram post\n\n. Personally, I highly doubt there will be a general formula even for fixed\n\n$n$\n\n or fixed\n\n$m$\n\n. I will give two particular cases of\n\n$n=3$\n\n$$\\int_0^{\\infty}\\erfc^3(x)\\, dx=\\frac{3}{\\sqrt{\\pi}}-\\frac{6\\sqrt 2}{\\pi^{3/2}}\\tan^{-1}\\sqrt{2}\\tag{1}$$\n\n$$\\int_0^\\infty\\erfc^3(\\sqrt{x})\\, dx=\\frac{1}{2}-\\frac{3-\\sqrt 3}{\\pi}\\tag{2}$$\n\nI will post a detailed solution when I have time but I can tell that I solved both\n\n$(1)$\n\n and\n\n$(2)$\n\n starting by integration by parts.\n\n$(5)$\n\n is used to solve\n\n$(1)$\n\n$$\\int \\erfc (x)\\, dx=x\\erfc(x)-\\frac{e^{-x^2}}{\\sqrt{\\pi}}\\tag{3}$$\n\n$$\\int x\\erfc(x) dx=\\frac{x^2}{2}\\erfc(x)-\\frac{x}{2\\sqrt{\\pi}}e^{-x^2}+\\frac{1}{4}\\erf(x)\\tag{4}$$\n\n$$\\int_0^{\\infty} e^{-yt^2}\\erfc(tx)\\, dt=\\frac{1}{\\sqrt{\\pi y}}\\tan^{-1}\\frac{\\sqrt{y}}{x}\\tag{5}$$", "answer_owner": "Brightsun", "is_accepted": true, "score": 5, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 2498628, "title": "Proof only by transformation that : $ \\int_0^\\infty \\cos(x^2) dx = \\int_0^\\infty \\sin(x^2) dx $", "question_text": "This was a question in our exam and I did not know which change of variables or trick to apply\n\nHow to show by inspection ( change of variables or whatever trick ) that\n\n$$ \\int_0^\\infty \\cos(x^2) dx = \\int_0^\\infty \\sin(x^2) dx \\tag{I} $$\n\nComputing the values of these integrals are known as routine. Further from their values, the equality holds. 
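A numerical check (my own) of Brightsun's stated results (1) and (2). For (2), the substitution $x=u^2$ (my choice, to remove the $\sqrt x$ behaviour at the origin) gives $\int_0^\infty \operatorname{erfc}^3(\sqrt x)\,dx = 2\int_0^\infty u\,\operatorname{erfc}^3(u)\,du$.

```python
import math

def simpson(f, a, b, n):
    h = (b - a) / n
    return (f(a) + f(b)
            + 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
            + 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))) * h / 3

# (1): ∫_0^∞ erfc^3(x) dx = 3/√π - (6√2/π^{3/2})·arctan√2
i1_num = simpson(lambda x: math.erfc(x) ** 3, 0.0, 8.0, 40_000)
i1_closed = 3 / math.sqrt(math.pi) \
    - 6 * math.sqrt(2) / math.pi**1.5 * math.atan(math.sqrt(2))

# (2): ∫_0^∞ erfc^3(√x) dx = 1/2 - (3 - √3)/π, via x = u^2
i2_num = simpson(lambda u: 2 * u * math.erfc(u) ** 3, 0.0, 8.0, 40_000)
i2_closed = 0.5 - (3 - math.sqrt(3)) / math.pi

print(i1_num, i1_closed)  # ≈ 0.2368
print(i2_num, i2_closed)  # ≈ 0.0964
```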
But can we show equality beforehand?\n\nNote\n\n: I am not asking for computation since it can be found\n\nhere\n\nand we have as well that,\n\n$$ \int_0^\infty \cos(x^2) dx = \int_0^\infty \sin(x^2) dx =\sqrt{\frac{\pi}{8}}$$\n\nand the result can be recovered here,\n\nEvaluating $\int_0^\infty \sin x^2\, dx$ with real methods?\n\n.\n\nIs there any trick to prove the equality in (I) without computing the exact values of these integrals beforehand?", "question_owner": "Guy Fsone", "question_link": "https://math.stackexchange.com/questions/2498628/proof-only-by-transformation-that-int-0-infty-cosx2-dx-int-0-infty", "answer": { "answer_id": 2507570, "answer_text": " 0$\n\n and\n\n$J> 0$\n\n. Performing an integration by parts we obtain\n\n$$J = \frac{1}{\sqrt{2}} \int^\infty_0\frac{\sin(2x)}{x^{1/2}}\,dx=\frac{1}{\sqrt{2}}\underbrace{\left[\frac{\sin^2 x}{x^{1/2}}\right]_0^\infty}_{=0} +\frac{1}{2\sqrt{2}} \int^\infty_0\frac{\sin^2 x}{x^{3/2}}\,dx\color{red}{>0}$$\n\nGiven that\n\n$\color{red}{\sin 2x= 2\sin x\cos x =(\sin^2x)'}$\n\n. Similarly we have,\n\n$$I = \frac{1}{\sqrt{2}}\int^\infty_0\frac{\cos(2x)}{\sqrt{x}}\,dx=\frac{1}{2\sqrt{2}}\underbrace{\left[\frac{\sin 2 x}{x^{1/2}}\right]_0^\infty}_{=0} +\frac{1}{4\sqrt{2}} \int^\infty_0\frac{\sin 2 x}{x^{3/2}}\,dx\\=\n\n \frac{1}{4\sqrt{2}}\underbrace{\left[\frac{\sin^2 x}{x^{1/2}}\right]_0^\infty}_{=0} +\frac{3}{8\sqrt{2}} \int^\infty_0\frac{\sin^2 x}{x^{5/2}}\,dx\color{red}{>0}$$\n\nConclusion:\n\n$~~~I^2-J^2 =0$\n\n,\n\n$I>0$\n\n and\n\n$J>0$\n\n imply\n\n$I=J$\n\n.
Note that we did not attempt to compute the value of either\n\n$~~I$\n\n or\n\n$J$\n\n.\n\nExtra to the answer\n\n However, using a similar technique as in the proof above, one can easily arrive at the following\n\n$$\color{blue}{I_tJ_t = \frac\pi8\frac{1}{t^2+1}}$$\n\n from which one gets the following explicit value\n\n$$\color{red}{I^2=J^2= IJ = \lim_{t\to 0}I_tJ_t =\frac{\pi}{8}}$$\n\nSee also\n\nhere\n\n for more on (The Fresnel Integrals Revisited)", "answer_owner": "Guy Fsone", "is_accepted": true, "score": 32, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 9286, "title": "Evaluation of Gaussian integral $\int_{0}^{\infty} \mathrm{e}^{-x^2} dx$", "question_text": "How to prove\n\n $$\int_{0}^{\infty} \mathrm{e}^{-x^2}\, dx = \frac{\sqrt \pi}{2}$$", "question_owner": "Jichao", "question_link": "https://math.stackexchange.com/questions/9286/evaluation-of-gaussian-integral-int-0-infty-mathrme-x2-dx", "answer": { "answer_id": 9292, "answer_text": "This is an old favorite of mine.\n\nDefine $$I=\int_{-\infty}^{+\infty} e^{-x^2} dx$$\n\nThen $$I^2=\bigg(\int_{-\infty}^{+\infty} e^{-x^2} dx\bigg)\bigg(\int_{-\infty}^{+\infty} e^{-y^2} dy\bigg)$$\n\n$$I^2=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}e^{-(x^2+y^2)} dxdy$$\n\nNow change to polar coordinates\n\n$$I^2=\int_{0}^{+2 \pi}\int_{0}^{+\infty}e^{-r^2} rdrd\theta$$\n\nThe $\theta$ integral just gives $2\pi$, while the $r$ integral succumbs to the substitution $u=r^2$\n\n$$I^2=2\pi\int_{0}^{+\infty}e^{-u}du/2=\pi$$\n\nSo $$I=\sqrt{\pi}$$ and your integral is half this by symmetry\n\nI have always wondered if somebody found it this way, or did it first using complex variables and noticed this would work.", "answer_owner": "Ross Millikan", "is_accepted": true, "score": 222, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5108014, "title": "Reference Request: Change of
Variables for Infinite Series", "question_text": "In this post I want to describe a technique for proving the convergence/divergence of infinite series which I have been thinking about. I am curious whether there are texts which describe this technique so I can explore the idea further.\n\nThe idea essentially introduces a way to “change variables” in a given series. I think this is best understood through some examples.\n\nConsider first\n\n$$\\sum_{n=1}^\\infty \\frac{1}{2^{\\sqrt{n}}}.$$\n\n The idea is to sort the terms into chunks based on which consecutive squares they fall between. We rewrite the expression with this idea in mind:\n\n$$\\sum_{k=1}^\\infty \\sum_{k^2\\leq n<(k+1)^2}\\frac{1}{2^{\\sqrt{n}}}.$$\n\n Now on each chunk, we estimate the series from above by the largest term in the chunk times the number of terms in each chunk. This becomes:\n\n$$\\sum_{k=1}^\\infty \\sum_{k^2\\leq n<(k+1)^2}\\frac{1}{2^{\\sqrt{n}}}\\leq \\sum_{k=1}^\\infty \\frac{(k+1)^2-k^2}{2^{\\sqrt{k^2}}}=\\sum_{k=1}^\\infty \\frac{2k+1}{2^k}.$$\n\n The ratio test then implies the bigger series converges, and so our original series converges as well.\n\nHere's another example:\n\n$$\\sum_{n=2}^\\infty \\frac{1}{n^{1+\\frac{1}{\\sqrt{\\log n}}}}=\\sum_{k=1}^\\infty \\sum_{2^k\\leq n<2^{k+1}}\\frac{1}{n^{1+\\frac{1}{\\sqrt{\\log n}}}}\\leq \\sum_{k=1}^\\infty \\frac{2^{k+1}-2^k}{(2^k)^{1+\\frac{1}{\\sqrt{\\log 2^k}}}}=\\sum_{k=1}^\\infty \\frac{2^k}{2^k\\cdot 2^{\\sqrt{k}/\\sqrt{\\log 2}}}=\\sum_{k=1}^\\infty \\frac{1}{\\left(2^{1/\\sqrt{\\log 2}}\\right)^{\\sqrt{k}}}.$$\n\n The steps in this second example are the same as in the first, we just select exponential chunks instead of quadratic ones. We can check that\n\n$$2^{\\frac{1}{\\sqrt{\\log 2}}}\\approx 3.537$$\n\n and so by a simple generalization of the first example we can show that the series\n\n$$\\sum_{n=2}^\\infty \\frac{1}{n^{1+\\frac{1}{\\sqrt{\\log n}}}}$$\n\n converges. 
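A numerical illustration (my own, not from the post) of the chunking bound in the first example above: grouping $n$ into blocks $k^2\le n<(k+1)^2$ of length $2k+1$ gives $\sum_{n\ge 1}2^{-\sqrt n}\le\sum_{k\ge 1}(2k+1)/2^k$, and the majorant sums to exactly $5$.

```python
import math

# left side: the original series, truncated far past where terms
# underflow to 0 in double precision (2^{-sqrt(n)} with sqrt(n) ~ 447)
partial = sum(2.0 ** (-math.sqrt(n)) for n in range(1, 200_000))

# right side: the condensed majorant Σ (2k+1)/2^k, whose exact sum is
# 2·Σ k/2^k + Σ 1/2^k = 2·2 + 1 = 5
bound = sum((2 * k + 1) / 2.0 ** k for k in range(1, 200))

print(partial, bound)  # partial ≈ 3.8, bound ≈ 5
```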
It seems like this technique is really good at understanding \"edge cases\" where the usual convergence tests from calculus are inconclusive. For instance, the ratio test is inconclusive in each of the above examples.\n\nThe above examples suggest the following general result, which is a generalization of the Cauchy condensation test (see\n\nthis post\n\n): Suppose\n\n$(a_n)$\n\n is a nonincreasing sequence, and\n\n$f:\\mathbb{N}\\to\\mathbb{N}$\n\n is a function such that\n\n$f(1)=1$\n\n and\n\n$f(n)\\geq n$\n\n for all\n\n$n$\n\n. If\n\n$$\\sum_{n=1}^\\infty \\Delta[f](n)a_{f(n)}$$\n\n converges, then\n\n$\\sum_{n=1}^\\infty a_n$\n\n converges, where\n\n$\\Delta[f](n)=f(n+1)-f(n)$\n\n is the difference operator. Moreover, if\n\n$$\\sum_{n=1}^\\infty \\Delta[f](n)a_{f(n+1)}$$\n\n diverges, then\n\n$\\sum_{n=1}^\\infty a_n$\n\n diverges. The proof of this statement is just a generalization of the arguments in the above examples, taking note of the fact that we can produce lower bounds on the series by taking the smallest term in each chunk times the number of terms in the chunk. The growth condition\n\n$f(n)\\geq n$\n\n is just to ensure each chunk is nonempty. 
Additionally, if we know some more information about\n\n$f$\n\n, namely that\n\n$C\Delta[f](n)\leq \Delta[f](n+1)$\n\n for some constant\n\n$C>0$\n\n, then\n\n$$C\sum_{n=1}^\infty \Delta[f](n)a_{f(n)}\leq \sum_{n=1}^\infty a_n\leq \sum_{n=1}^\infty \Delta[f](n)a_{f(n)},$$\n\n and so in such a case we recover the same type of explicit bounds as in the Cauchy condensation test.\n\nMy main question is whether there is a source in which this result or a similar generalization of the Cauchy condensation test appears.", "question_owner": "Eli Seamans", "question_link": "https://math.stackexchange.com/questions/5108014/reference-request-change-of-variables-for-infinite-series", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1003056, "title": "For continuous $f,g: [0,1] \to [0,1]$ with $f \circ g = g \circ f$ , there exists $x$ such that $f(x)=g(x)$", "question_text": "I have been stuck on this for hours now:\n\nLet\n\n$f,g: [0,1] \to [0,1]$\n\n be continuous such that\n\n$f \circ g = g \circ f$\n\n. Show that there exists\n\n$x \in [0,1]$\n\n such that\n\n$f(x)=g(x)$\n\nMy attempt\n\n: It is easy to prove that\n\n$f,g$\n\n both have a fixed point in the interval\n\n$[0,1]$\n\n. That means there exist\n\n$a,b \in [0,1]$\n\n such that\n\n$f(a)=a$\n\n and\n\n$g(b)=b$\n\n. Now we also know that\n\n$$f(g(x))=g(f(x)), \text{ for all } x $$\n\nSo I can make use of that and say for example that:\n\n$$f(g(b))=g(f(b))=f(b) $$\n\nWhich shows that\n\n$f(b)$\n\n is yet another fixed point of\n\n$g$\n\n. Similarly, by the same argument I'd get:\n\n$$f(g(a))=g(f(a))=g(a) $$\n\n and therefore\n\n$g(a)$\n\n is yet another fixed point of\n\n$f$\n\n. While this seems great and all, I am very unsure if my next steps are correct:\n\nDefine\n\n$h: [0,1] \to [0,1]$\n\n such that\n\n$h(x)=g(x)-f(x)$\n\n.
Of course\n\n$h$\n\n is continuous, because\n\n$f,g$\n\n are. Then I'd obtain:\n\n$$h(a)=g(a)-f(a)=g(a)-a \geq 0 \\h(b)=g(b)-f(b)=b-g(f(b))... $$\n\nWhere I am not quite sure if those two inequalities are right at all or just misleading me.", "question_owner": "Spaced", "question_link": "https://math.stackexchange.com/questions/1003056/for-continuous-f-g-0-1-to-0-1-with-f-circ-g-g-circ-f-there-exist", "answer": { "answer_id": 1003078, "answer_text": "Your idea is fine. A priori, $h(x)=g(x)-f(x)$ can take on any value $\in[-1,1]$. But if we assume the claim of the problem statement is wrong, it never is $0$, hence by the IVT it is either always $>0$ or always $<0$. Wlog. $h(x)>0$ for all $x$. Especially, whenever $a$ is a fixed point of $f$, we have $g(a)>f(a)=a$ as you showed. As you also showed, $g$ maps fixed points of $f$ to fixed points of $f$. That is $a_0=a$, $a_{n+1}=g(a_n)$ gives us a strictly increasing sequence of fixed points of $f$. As the sequence is bounded, it must converge to some $\tilde a\in[0,1]$. Then by continuity, $g(\tilde a)=\tilde a$ and $f(\tilde a)=\tilde a$, hence $h(\tilde a)=0$, contradiction.", "answer_owner": "Hagen von Eitzen", "is_accepted": true, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1901686, "title": "Doubt in step in the proof of Theorem 6.11 in Rudin's book", "question_text": "I want to understand a step in baby Rudin's theorem 6.11. The theorem says the following: Let\n\n$f$\n\n be Riemann-Stieltjes integrable in\n\n$[a,b]$\n\n. Let\n\n$m\leq f\leq M$\n\n. Let\n\n$\phi:[m,M]\to\mathbb{R}$\n\n be continuous. Then\n\n$h(x)=\phi(f(x))$\n\n is Riemann-Stieltjes integrable in\n\n$[a,b]$\n\n.\n\nThe proof goes like this. Since\n\n$\phi$\n\n is continuous in a compact set it is uniformly continuous in\n\n$[m,M]$\n\n. Let\n\n$\epsilon>0$\n\n.
Then we can pick\n\n$\\delta<\\epsilon$\n\n such that\n\n$|s-t|\\leq\\delta$\n\n implies that\n\n$|\\phi(s)-\\phi(t)|<\\epsilon$\n\n, where\n\n$s,t\\in[m,M]$\n\n.\n\nLet\n\n$P$\n\n be a partition of\n\n$[a,b]$\n\n.\n\n$$\n\na=x_0\\leq x_1\\leq\\ldots\\leq x_n=b\n\n$$\n\nlet\n\n$\\Delta x_i=x_i-x_{i-1}$\n\n. Let\n\n$M_i=\\sup\\{f(\\Delta x_i)\\}$\n\n and\n\n$m_i=\\inf\\{f(\\Delta x_i)\\}$\n\n. Let\n\n$M_i^*=\\sup\\{h(\\Delta x_i)\\}$\n\n and\n\n$m_i^*=\\inf\\{h(\\Delta x_i)\\}$\n\n.\n\nLet's divide the intervals in\n\n$P$\n\n into two categories. If\n\n$M_i-m_i<\\delta$\n\n then\n\n$i\\in A$\n\n. If not,\n\n$i\\in B$\n\n.\n\nOk, so far so good, no problems. The next thing he says is, for\n\n$i\\in A$\n\n our choice of\n\n$\\delta$\n\n shows that\n\n$M_i^*-m_i^*\\leq\\epsilon$\n\n. Can you prove this?", "question_owner": "PhoenixPerson", "question_link": "https://math.stackexchange.com/questions/1901686/doubt-in-step-in-the-proof-of-theorem-6-11-in-rudins-book", "answer": { "answer_id": 1901700, "answer_text": "If $i \\in A$, i.e.
$M_i - m_i < \\delta$, then $|f(x) - f(y)| \\leq M_i - m_i < \\delta$ for all $x,y$ from the $i$-th interval.\n\nThen $|h(x) - h(y)| = |\\phi(f(x)) - \\phi(f(y))| < \\epsilon$ for any $x,y$ from the $i$-th interval -- by the choice of $\\epsilon/\\delta$.\n\n$M_i^* - m_i^* \\stackrel{\\text{def}}{=} \\sup_{\\text{such } x} h(x) - \\inf_{\\text{such } y} h(y) \\leq \\sup_{\\text{such x,y}}|h(x) - h(y)| \\leq \\epsilon$, where \"such\" means \"from the interval $[x_{i-1}, x_i]$\".", "answer_owner": "user66081", "is_accepted": true, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107848, "title": "Explanation on Landau $o$ little for $ \\lim_{x\\to 0} \\frac{\\log(\\cos x) - \\sinh(\\alpha x)}{x^{6\\alpha}} $", "question_text": "Calculate without Hopital\n\n$$\n\n\\lim_{x\\to 0} \\frac{\\log(\\cos x) - \\sinh(\\alpha x)}{x^{6\\alpha}}\n\n$$\n\nFor\n\n$\\alpha>0$\n\n we are in a\n\n$\\dfrac{0}{0}$\n\n case; now\n\n$\\sinh(\\alpha\\,x) \\sim \\alpha\\,x$\n\n, while\n\n$$\n\n\\log(\\cos x) = \\log(1+(\\cos x-1)) \\sim \\cos x - 1 \\sim -\\frac{x^2}{2}\n\n$$\n\nis\n\n$o(x)$\n\n, then is little \"\n\no\n\n\" than\n\n$\\alpha\\,x$\n\n; it follows that\n\n$$\n\n\\lim_{x\\to 0} \\frac{\\color{red}{\\log(\\cos x) - \\sinh(\\alpha x)}}{x^{6\\alpha}}\n\n= \\lim_{x\\to 0} \\frac{\\color{red}{-\\sinh(\\alpha\\,x)}}{x^{6\\alpha}}\n\n= \\lim_{x\\to 0} \\frac{-\\alpha\\,x}{x^{6\\alpha}}\n\n= -\\alpha \\, \\lim_{x\\to 0} x^{1-6\\alpha}.\n\n$$\n\nThis limit is zero for\n\n$1-6\\alpha>0$\n\n, i.e., for\n\n$\\alpha<1/6$\n\n; it is\n\n$-1/6$\n\n for\n\n$\\alpha=1/6$\n\n, and\n\n$-\\infty$\n\n for\n\n$\\alpha>1/6$\n\n.\n\nI have read that with the small “\n\no\n\n” I can write\n\n$$-\\sinh(\\alpha\\,x)$$\n\n instead of\n\n$$\\log(\\cos x) -\\sinh(\\alpha\\,x)$$\n\nWhy, is there a condition? 
Which?\n\nI have done this, as alternative to small \"\n\no\n\n\":\n\nConsider\n\n$$\n\n\\lim_{f(x)\\to 0} \\frac{\\sinh(f(x))}{f(x)} = 1\n\n$$\n\n$$\n\n\\begin{align}\n\n\\lim_{x\\to 0}\\frac{\\log(1-1+\\cos x) - \\sinh(\\alpha x)}{x^{6\\alpha}}\n\n& =\\lim_{x\\to 0}\\frac{\\log(1-(1-\\cos x)) - \\sinh(\\alpha x)}{x^{6\\alpha}}\\notag\\\\\n\n&=\\lim_{x\\to 0} \\frac{\\log(1-(1-\\cos x))}{x^{6\\alpha}}-\\lim_{x\\to 0}\\frac{\\alpha x}{x^{6\\alpha}}\\cdot\\frac{\\sinh(\\alpha x)}{\\alpha x}\\notag\\\\\n\n&=\\lim_{x\\to 0} \\frac{-(1-\\cos x)-\\alpha x}{x^{6\\alpha}}\n\n\\end{align}\n\n$$\n\nIf\n\n$6\\alpha=1$\n\n:\n\n$$\n\n\\lim_{x\\to 0} \\frac{-(1-\\cos x)-\\alpha x}{x^{6\\alpha}}\n\n=\\lim_{x\\to 0} \\frac{-(1-\\cos x)-\\frac{1}{6}x}{x}\n\n=\\lim_{x\\to 0}\\frac{-(1-\\cos x)}{x}-\\frac{1}{6}\n\n=-\\frac{1}{6}\n\n$$\n\nLet\n\n$\\alpha<0$\n\n:\n\n$$\n\n\\lim_{x\\to 0} \\frac{-(1-\\cos x)-\\alpha x}{x^{6\\alpha}}\n\n$$\n\nThen\n\n$x^{6\\alpha}\\to \\infty$\n\n and the numerator tends to\n\n$0$\n\n, so the limit is\n\n$0$\n\n.\n\nFor\n\n$\\alpha>0$\n\n:\n\n$$\n\n\\lim_{x\\to 0} \\frac{-(1-\\cos x)-\\alpha x}{x^{6\\alpha}}\n\n=\\lim_{x\\to 0} \\frac{-(1-\\cos x)}{x^{6\\alpha}}-\\alpha \\lim_{x\\to 0}x^{1-6\\alpha}\n\n$$\n\nThis limit is zero for\n\n$1-6\\alpha>0$\n\n, i.e., for\n\n$\\alpha<1/6$\n\n; it equals\n\n$-1/6$\n\n for\n\n$\\alpha=1/6$\n\n, and it is\n\n$-\\infty$\n\n for\n\n$\\alpha>1/6$\n\n.\n\nFinally:\n\n$$\n\n\\lim_{x\\to 0} \\frac{\\log(\\cos x) - \\sinh(\\alpha x)}{x^{6\\alpha}} =\n\n\\begin{cases}\n\n0 & \\text{for } \\alpha<1/6, \\\\[2mm]\n\n-1/6 & \\text{for } \\alpha=1/6,\\\\[2mm]\n\n-\\infty & \\text{for } \\alpha>1/6.\n\n\\end{cases}\n\n$$", "question_owner": "Sebastiano", "question_link": "https://math.stackexchange.com/questions/5107848/explanation-on-landau-o-little-for-lim-x-to-0-frac-log-cos-x-sinh", "answer": { "answer_id": 5107885, "answer_text": "The process you showed is very clear and complete. 
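As a quick numerical cross-check of the case analysis above (my own check; the sample point and tolerances are arbitrary choices):

```python
import math

# Evaluate (log(cos x) - sinh(a*x)) / x^(6a) near x = 0+: the value should be
# close to -1/6 when a = 1/6 and close to 0 when a < 1/6, matching the cases.
def F(x, a):
    return (math.log(math.cos(x)) - math.sinh(a * x)) / x ** (6 * a)

print(F(1e-3, 1 / 6))   # close to -1/6
print(F(1e-3, 1 / 12))  # close to 0
```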
It is OK to try not to replace infinitesimals with others (I tend not to either), but that does not mean one cannot understand the validity of a replacement.\n\nIf you translate the original process into a non-\"\n\n$o$\n\n\" one, it should look like\n\n$$\n\n\\begin{aligned}\n\n\\lim_{x\\to0}\\frac{\\ln(\\cos x)}{\\sinh(\\alpha x)}&=-\\frac{1}{\\alpha}\\lim_{x\\to0}\\frac{\\ln(1+(\\cos x-1))}{\\cos x-1}\\frac{1-\\cos x}{x^2}\\frac{\\alpha x}{\\sinh(\\alpha x)}x\\\\\n\n&=-\\frac{1}{\\alpha}\\cdot1\\cdot\\frac{1}{2}\\cdot1\\cdot0\\\\\n\n&=0\n\n\\end{aligned}\n\n$$\n\nwhere\n\n$\\lim\\limits_{x\\to0}\\dfrac{\\ln(1+(\\cos x-1))}{\\cos x-1}=1$\n\n is shown by substitution and\n\n$\\ln(1+u)\\sim u$\n\n,\n\n$\\lim\\limits_{x\\to0}\\dfrac{1-\\cos x}{x^2}=\\dfrac{1}{2}$\n\n by\n\n$1-\\cos x\\sim\\dfrac{1}{2}x^2$\n\n, and\n\n$\\lim\\limits_{x\\to0}\\dfrac{\\alpha x}{\\sinh(\\alpha x)}=1$\n\n by substitution and\n\n$\\sinh u\\sim u$\n\n.\n\nThe posted answer introduces the\n\n$o(\\cdot)$\n\n symbol to make it easier. By\n\n$\\log(\\cos x)=o(x)$\n\n and\n\n$\\sinh(\\alpha x)\\sim\\alpha x$\n\n, it shows\n\n$$\n\n\\begin{aligned}\n\n\\lim_{x\\to0}\\frac{\\log(\\cos x)-\\sinh(\\alpha x)}{x^{6\\alpha}}&=\\lim_{x\\to0}\\frac{\\sinh(\\alpha x)}{x^{6\\alpha}}\\left(\\frac{\\log(\\cos x)}{\\sinh(\\alpha x)}-1\\right)\\\\\n\n&=\\lim_{x\\to0}\\frac{\\sinh(\\alpha x)}{x^{6\\alpha}}(0-1)\\\\\n\n&=\\lim_{x\\to0}\\frac{-\\sinh(\\alpha x)}{x^{6\\alpha}}\n\n\\end{aligned}\n\n$$\n\nand from then on discusses the behavior of the latter limit.", "answer_owner": "JC Q", "is_accepted": true, "score": 4, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 2563659, "title": "Fourier series of a piecewise continuous (constant) function.
Is my solution correct?", "question_text": "Given the function\n\n$$\n\n \\phi(x)\n\n=\\begin{cases}\n\n 1 & 00$\n\n$$\\int_0^\\infty x^{n^a}\\,dn=\\Gamma \\left(1+\\frac{1}{a}\\right)\n\n (-\\log (x))^{-1/a} \\tag 1$$\n\nThe summation can be approximated using the simplest form of Euler-Maclaurin summation. Using\n\n$k=-\\log(x)$\n\n, one can write\n\n$$S\\sim \\frac 1a k^{-1/a} \\,\\Gamma \\left(\\frac{1}{a},k\\right)+\\sum_{n=0}^7 b_n\\, a^n \\tag 2$$\n\n where the coefficients are\n\n$$\\left(\n\n\\begin{array}{cc}\n\n n & b_n \\\\\n\n 0 & \\frac{e^{-k}}{2}+1 \\\\\n\n 1 & \\frac{-3+3 e-7 e^2+105 e^3}{1260 e^4} \\\\\n\n 2 & \\frac{-441+250 e-210 e^2}{25200 e^4} \\\\\n\n 3 & \\frac{-116+15 e+12 e^2}{4320 e^4} \\\\\n\n 4 & \\frac{7-4 e}{576 e^4} \\\\\n\n 5 & \\frac{7525-240 e}{302400 e^4} \\\\\n\n 6 & -\\frac{3}{1600 e^4} \\\\\n\n 7 & -\\frac{199}{100800 e^4} \\\\\n\n\\end{array}\n\n\\right)$$\n\nJust a few numbers for illustration\n\n$$\\left(\n\n\\begin{array}{ccccc}\n\n a & x & (1) & (2) & \\text{summation}\\\\\n\n \\frac{1}{5} & \\frac{1}{4} & 23.4371 & 24.2465 & 24.2462 \\\\\n\n \\frac{1}{5} & \\frac{1}{2} & 749.987 & 750.679 & 750.679 \\\\\n\n \\frac{1}{5} & \\frac{3}{4} & 60900.0 & 60900.6 & 60900.6 \\\\\n\n& & & & \\\\\n\n \\frac{1}{4} & \\frac{1}{4} & 6.49815 & 7.29143 & 7.29100 \\\\\n\n \\frac{1}{4} & \\frac{1}{2} & 103.970 & 104.650 & 104.649 \\\\\n\n \\frac{1}{4} & \\frac{3}{4} & 3503.97 & 3504.55 & 3504.55 \\\\\n\n& & & & \\\\\n\n \\frac{1}{3} & \\frac{1}{4} & 2.25209 & 3.01943 & 3.01888 \\\\\n\n \\frac{1}{3} & \\frac{1}{2} & 18.0167 & 18.6764 & 18.6759 \\\\\n\n \\frac{1}{3} & \\frac{3}{4} & 252.007 & 252.585 & 252.581 \\\\\n\n& & & & \\\\\n\n \\frac{1}{2} & \\frac{1}{4} & 1.04068 & 1.76061 & 1.75983 \\\\\n\n \\frac{1}{2} & \\frac{1}{2} & 4.16274 & 4.78883 & 4.78822 \\\\\n\n \\frac{1}{2} & \\frac{3}{4} & 24.1660 & 24.7283 & 24.7224 \\\\\n\n& & & & \\\\\n\n 1 & \\frac{1}{4} & 0.72135 & 1.33464 & 1.33333 \\\\\n\n 1 & \\frac{1}{2} & 1.44270 & 2.00065 &
2.00000 \\\\\n\n 1 & \\frac{3}{4} & 3.47606 & 4.01135 & 4.00000 \\\\\n\n& & & & \\\\\n\n 2 & \\frac{1}{4} & 0.75269 & 1.25830 & 1.25391 \\\\\n\n 2 & \\frac{1}{2} & 1.06447 & 1.56556 & 1.56447 \\\\\n\n 2 & \\frac{3}{4} & 1.65230 & 2.17657 & 2.15230 \\\\\n\n& & & & \\\\\n\n 3 & \\frac{1}{4} & 0.80086 & 1.23990 & 1.25002 \\\\\n\n 3 & \\frac{1}{2} & 1.00902 & 1.47282 & 1.50391 \\\\\n\n 3 & \\frac{3}{4} & 1.35271 & 1.86404 & 1.85054 \\\\\n\n& & & & \\\\\n\n\\end{array}\n\n\\right)$$", "answer_owner": "Claude Leibovici", "is_accepted": false, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107891, "title": "Closed form of $\\Omega = \\int\\limits_3^\\infty {\\frac{{{x^4}}}{{\\sqrt {{{\\left( {{x^4} + 2{x^2} + 4} \\right)}^3}} }}dx} $", "question_text": "I found this problem from a FB page\n\n$$\\Omega = \\int\\limits_3^\\infty {\\frac{{{x^4}}}{{\\sqrt {{{\\left( {{x^4} + 2{x^2} + 4} \\right)}^3}} }}dx}$$\n\nHere what I tried to find the closed form:\n\n$${\\text{We have}}:\\frac{d}{{dx}}\\left( {\\frac{x}{{\\sqrt {{x^4} + 2{x^2} + 4} }}} \\right) = \\frac{{4 - {x^4}}}{{\\sqrt {{{\\left( {{x^4} + 2{x^2} + 4} \\right)}^3}} }}$$\n\n$$\\begin{gathered}\n\n \\Rightarrow \\Omega = \\int\\limits_3^\\infty {\\frac{{{x^4}}}{{\\sqrt {{{\\left( {{x^4} + 2{x^2} + 4} \\right)}^3}} }}dx} = - \\int\\limits_3^\\infty {\\frac{{\\left( {4 - {x^4}} \\right) - 4}}{{\\sqrt {{{\\left( {{x^4} + 2{x^2} + 4} \\right)}^3}} }}dx} \\hfill \\\\\n\n = - \\int\\limits_3^\\infty {\\frac{{4 - {x^4}}}{{\\sqrt {{{\\left( {{x^4} + 2{x^2} + 4} \\right)}^3}} }}dx} + 4\\underbrace {\\int\\limits_3^\\infty {\\frac{1}{{\\sqrt {{{\\left( {{x^4} + 2{x^2} + 4} \\right)}^3}} }}dx} }_I \\hfill \\\\\n\n = - \\left[ {\\frac{x}{{\\sqrt {{x^4} + 2{x^2} + 4} }}} \\right]_3^\\infty + 4I = \\frac{3}{{\\sqrt {103} }} + 4I \\hfill \\\\\n\n\\end{gathered}$$\n\n$$I = \\int\\limits_3^\\infty {\\frac{1}{{\\sqrt {{{\\left( {{x^4} + 2{x^2} + 4} \\right)}^3}} }}dx} 
\\overbrace = ^{\\sqrt 2 x \\to x}\\frac{1}{{4\\sqrt 2 }}\\int\\limits_{\\frac{3}{{\\sqrt 2 }}}^\\infty {\\frac{1}{{\\sqrt {{{\\left( {{x^4} + {x^2} + 1} \\right)}^3}} }}dx}$$\n\n$I$\n\n looks like an elliptic integral, but I can't find a good substitution to reduce this integral.\n\nYour comments and alternatives are highly appreciated.", "question_owner": "OnTheWay", "question_link": "https://math.stackexchange.com/questions/5107891/closed-form-of-omega-int-limits-3-infty-fracx4-sqrt-left", "answer": { "answer_id": 5107904, "answer_text": "The last integral\n\n$$ I=\\frac{dx}{{\\sqrt {{{\\left( {{x^4} + {x^2} + 1} \\right)}^3}} }}$$\n\n is not so bad if you write\n\n$$x^4+x^2+1=(x^2+a)(x^2+b)\\qquad \\text{where} \\qquad (a,b)=\\frac {1\\pm i\\sqrt 3 }2$$\n\nTake a look\n\nhere\n\n and simplify.\n\nIf no mistake on my side\n\n$$I=\\frac{x \\left( (a+b)x^2+(a^2+b^2)\\right)}{a b (a-b)^2\n\n \\sqrt{\\left(x^2+a\\right) \\left(x^2+b\\right)}}+$$\n\n$$\\frac{i}{a \\sqrt{b} (a-b)^2}\\Bigg((a-b) F\\left(i \\sinh\n\n ^{-1}\\left(\\frac{x}{\\sqrt{a}}\\right)|\\frac{a}{b}\\right)+(a+b) E\\left(i\n\n \\sinh ^{-1}\\left(\\frac{x}{\\sqrt{a}}\\right)|\\frac{a}{b}\\right) \\Bigg)$$", "answer_owner": "Claude Leibovici", "is_accepted": false, "score": 4, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107692, "title": "prove $\\left(\\sum_{n=-\\infty}^{+\\infty} q^{n^2}\\right)^2 = 1 + 4\\sum_{n=1}^{\\infty} \\frac{q^n}{1+q^{2n}}$", "question_text": "Let\n\n$$\n\n\\theta(q) = \\sum_{n=-\\infty}^{+\\infty} q^{n^2}.\n\n$$\n\nSo\n\n$$\n\n\\theta(q)^2\n\n= \\left(\\sum_{m\\in \\mathbb{Z}} q^{m^2}\\right)\\left(\\sum_{n\\in \\mathbb{Z}} q^{n^2}\\right)\n\n= \\sum_{N=0}^{\\infty} r_2(N)\\, q^N,\n\n$$\n\nwhere\n\n$r_2(N)$\n\n counts ordered pairs\n\n$(m,n)\\in\\mathbb{Z}^2$\n\n with\n\n$m^2+n^2=N$\n\n, including signs and order, and\n\n$r_2(0)=1$\n\n.\n\nMeaning the identity to prove is equivalent to showing that for all\n\n$N\\ge 
1$\n\n the coefficient of\n\n$q^N$\n\n on the right-hand side equals\n\n$r_2(N)$\n\n:\n\n$$\n\n1 + 4\\sum_{n=1}^{\\infty} \\frac{q^n}{1+q^{2n}}\n\n= 1 + \\sum_{N=1}^{\\infty} \\Big(4\\cdot \\text{coeff}_{q^N}\\Big[\\sum_{n=1}^{\\infty} \\frac{q^n}{1+q^{2n}}\\Big]\\Big)\\, q^N.\n\n$$\n\nHow to move forward with this? Is there a more straightforward way with series or integral manipulations only?", "question_owner": "Boyce.E", "question_link": "https://math.stackexchange.com/questions/5107692/prove-left-sum-n-infty-infty-qn2-right2-1-4-sum-n-1-inf", "answer": { "answer_id": 5107844, "answer_text": "If we express a positive integer in the form\n\n$N=2^{a_0}p_1^{2a_1}\\cdots p_r^{2a_r}q_1^{b_1}\\cdots q_s^{b_s}$\n\n, where\n\n$p_i$\n\n are\n\n$3\\pmod 4$\n\n primes and\n\n$q_i$\n\n are\n\n$1\\pmod 4$\n\n primes, then we have:\n\n$$r_2(N)=\\begin{cases} 0 & \\text{if any $a_i$ is a half-integer}\\\\ 4(b_1+1)(b_2+1)\\cdots (b_s+1) & \\text{if all $a_i$ are integers}\\end{cases}$$\n\n(See\n\nhere\n\n).\n\nWe can write the RHS of the equation as\n\n$$1+4\\sum_{n=1}^\\infty\\frac{q^n}{1+q^{2n}}=1+4\\sum_{n=1}^\\infty\\sum_{i=0}^\\infty (-1)^{i}q^{(2i+1)n},$$\n\nwhich expresses\n\n$\\frac{q^n}{1+q^{2n}}$\n\n as an infinite geometric series. A\n\n$q^N$\n\n term is only reached when\n\n$(2i+1)n=N$\n\n. This means each odd factor of\n\n$N$\n\n gives us a unique contribution to the coefficient when it is reached by\n\n$2i+1$\n\n. If this odd factor is\n\n$1\\pmod 4$\n\n, then\n\n$i$\n\n is even, so the contribution is\n\n$+1$\n\n. Otherwise, it is\n\n$-1$\n\n.\n\nThus, the coefficient of\n\n$q^N$\n\n on the RHS for positive\n\n$N$\n\n is\n\n$$4\\cdot(\\#(\\text{1 mod 4 factors of $N$})-\\#(\\text{3 mod 4 factors of $N$})).$$\n\n(We don't need to worry about\n\n$N=0$\n\n since it's clear both sides match with a coefficient of\n\n$1$\n\n.)\n\nAs above, write\n\n$N=2^{a_0}p_1^{2a_1}\\cdots p_r^{2a_r}q_1^{b_1}\\cdots q_s^{b_s}=2^{a_0}PQ$\n\n.
The property of an odd factor being\n\n$1\\pmod 4$\n\n or\n\n$3\\pmod 4$\n\n is solely determined by its prime factors in\n\n$P$\n\n, since any factor from\n\n$Q$\n\n is\n\n$1\\pmod 4$\n\n. So we just need to compute the desired difference in\n\n$P$\n\n, then multiply by\n\n$(b_1+1)\\cdots (b_s+1)$\n\n, the number of factors of\n\n$Q$\n\n.\n\nIf at least one of the\n\n$a_i$\n\ns is a half-integer, then WLOG let\n\n$a_1$\n\n be a half-integer. We can pair up any factor\n\n$p_1^{e_1}p_2^{e_2}\\cdots p_r^{e_r}$\n\n with\n\n$p_1^{2a_1-e_1}p_2^{e_2}\\cdots p_r^{e_r}$\n\n. If one of them is\n\n$1\\pmod 4$\n\n, then the other must be\n\n$3 \\pmod 4$\n\n since they differ by a factor of\n\n$p_1^{|2a_1-2e_1|}$\n\n, which is\n\n$3 \\pmod 4$\n\n due to\n\n$2a_1-2e_1$\n\n being odd. Thus, the number of\n\n$1\\pmod 4$\n\n and\n\n$3\\pmod 4$\n\n factors in\n\n$P$\n\n is the same, so the coefficient of\n\n$q^N$\n\n is\n\n$0$\n\n.\n\nIf all\n\n$a_i$\n\ns are integers, then consider\n\n$p_1^{2a_1-1}p_2^{2a_2}\\cdots p_r^{2a_r}$\n\n, which must have the same number of\n\n$1\\pmod 4$\n\n and\n\n$3\\pmod 4$\n\n factors from above. Thus, the desired difference is determined by all the factors divisible by\n\n$p_1^{2a_1}$\n\n, which is equivalent to the difference in\n\n$p_2^{2a_2}\\cdots p_r^{2a_r}$\n\n. Repeating this process, it suffices to find the difference in\n\n$p_r^{2a_r}$\n\n, which has\n\n$a_r+1$\n\n factors\n\n$1\\pmod 4$\n\n and\n\n$a_r$\n\n factors\n\n$3\\pmod 4$\n\n, so the difference is\n\n$1$\n\n.
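The target identity can also be spot-checked numerically. This is my own verification, not part of the argument: $r_2(N)$ counted directly over lattice points should equal $4\,(d_1(N)-d_3(N))$, the count extracted from the geometric series $q^n/(1+q^{2n})=\sum_i(-1)^i q^{(2i+1)n}$.

```python
# Compare the q^N coefficient of theta(q)^2 with the divisor-class count.
def r2(N):
    count = 0
    for m in range(-int(N ** 0.5) - 1, int(N ** 0.5) + 2):
        rem = N - m * m
        if rem < 0:
            continue
        n = int(round(rem ** 0.5))
        if n * n == rem:
            count += 2 if n > 0 else 1  # pairs (m, n) and (m, -n), or n = 0
    return count

def rhs(N):
    d1 = sum(1 for d in range(1, N + 1) if N % d == 0 and d % 4 == 1)
    d3 = sum(1 for d in range(1, N + 1) if N % d == 0 and d % 4 == 3)
    return 4 * (d1 - d3)

print(all(r2(N) == rhs(N) for N in range(1, 200)))  # expect True
```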
The coefficient of\n\n$q^N$\n\n is therefore\n\n$4(b_1+1)\\cdots (b_s+1)$\n\n.\n\nThis precisely matches the terms\n\n$r_2(N)q^N$\n\n on the LHS, so we are done.", "answer_owner": "CosmicOscillator", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4326247, "title": "This improper integral doesn't converge, does it?", "question_text": "I am interested in finding out whether my calculations are correct.\n\nI have to solve this exercise: it's an improper integral.\n\nBefore using integration by parts, I've studied the bounds, in order to check where the function is undefined. And I got that when solving the function for\n\n$x = 3$\n\n the denominator is\n\n$0$\n\n, therefore the function is undefined (i.e. division by zero).\n\nTherefore, I've taken the limit as\n\n$x$\n\n approaches\n\n$3$\n\n. And then I've solved the integral with\n\n$x$\n\n as the upper bound.\n\nAs stated above, I've used integration by parts by choosing\n\n$$ u = x $$\n\n (because the derivative of the polynomial, hopefully, is going to become some smaller value) And\n\n$$ dv = (1 / (3 - x)) $$\n\n because the antiderivative is simply equal to\n\n$\\log$\n\n (natural log, i.e. with base\n\n$e$\n\n). Of course, the argument of\n\n$\\log$\n\n must be the absolute value.\n\nNow, by integrating by parts and after having evaluated the limit of the antiderivative, I found that the limit doesn't exist, because the limit of the function evaluated in the upper bound is undefined (i.e. the natural log is undefined for\n\n$x = 3$\n\n). Is it true?
And if it's true, the integral doesn't converge, right?\n\nMy attempt:\n\n$$\n\n\\begin{align*}\n\n\\int_1^3 \\frac{x}{3-x}dx&=\\lim_{x \\rightarrow 3} \\int_1^x x \\cdot \\frac{1}{3-x}dx\\\\\\\\\n\nu=x, v'&=\\frac{1}{3-x}\\\\\n\nu'=dx, v&=\\log(3-x)\\\\\\\\\n\n\\therefore \\int_1^3 \\frac{x}{3-x}dx&=x\\log(3-x)|^x_1 - \\int_1^x 1 \\cdot \\log(3-x)dx\\\\\n\n&= \\lim_{x\\rightarrow3} [ x \\cdot \\log(3-x)|_1^x - \\log(3-x)|_1^x ] = \\underline{\\text{DNE}}\n\n\\end{align*}\n\n$$\n\nTherefore, the integral doesn't converge.", "question_owner": "Gabriel Burzacchini", "question_link": "https://math.stackexchange.com/questions/4326247/this-improper-integral-doesnt-converge-does-it", "answer": { "answer_id": 4326249, "answer_text": "You are correct that the integral does not converge, but you made some mistakes and overcomplicated the solution in general.\n\nThe mistake\n\n: If\n\n$v=\\log(3-x)$\n\n, then\n\n$v'=-\\frac{1}{3-x}$\n\n. You missed a minus sign.\n\nThe overcomplication\n\n:\n\nInstead of using per partes, you can rewrite\n\n$$\\frac{x}{3-x} = \\frac{x-3+3}{3-x} = \\frac{-(3-x)}{3-x} + \\frac{3}{3-x} = \\frac{3}{3-x} - 1$$\n\nand only integrate after this rearrangement. 
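The divergence after the rewrite $\frac{x}{3-x}=\frac{3}{3-x}-1$ is easy to see numerically (my own check): the antiderivative is $-3\ln(3-x)-x$, so the partial integrals grow like $3\ln(2/\varepsilon)$.

```python
import math

# int_1^{3-eps} x/(3-x) dx = 3*log(2/eps) - 2 + eps, unbounded as eps -> 0+.
def partial_integral(eps):
    F = lambda x: -3.0 * math.log(3.0 - x) - x   # antiderivative of 3/(3-x) - 1
    return F(3.0 - eps) - F(1.0)

for eps in (1e-1, 1e-3, 1e-6):
    print(eps, partial_integral(eps))  # keeps growing as eps shrinks
```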
No need for per partes, a simple introduction of a new variable\n\n$u=3-x$\n\n is sufficient and you get (since\n\n$du = -dx$\n\n):\n\n$$\\int_1^3\\frac{x}{3-x}dx = 3\\int_1^3 \\frac{1}{3-x}dx - \\int_1^3 1dx = 3\\int_2^0-\\frac{1}{u}du - 2 = 3\\int_0^2\\frac1udu - 2$$\n\nnow you can either remember that the integral of\n\n$\\frac{1}{u}$\n\n diverges around\n\n$0$\n\n, or you can write it out, since\n\n$$\\int_0^2\\frac1udu=\\lim_{x\\to 0}\\int_x^2\\frac1udu = \\lim_{x\\to 0} (\\ln(2)-\\ln(x))$$\n\n and the limit above does not exist.", "answer_owner": "5xum", "is_accepted": true, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5103934, "title": "On $\\int_0^1\\frac{a+2x}{\\sqrt{x(1-x)(a+x)(1+a+x)}}dx=\\pi$", "question_text": "Let\n\n$a\\ge0$\n\n, show that:\n\n$$ \\int\\limits_0^1\\frac{a+2x}{\\sqrt{x(1-x)(a+x)(1+a+x)}}dx = \\pi \\tag{1} $$\n\nFor\n\n$a=0$\n\n, the integral is:\n\n$\\displaystyle\\int_0^1\\frac{dx}{\\sqrt{1-x^2}}=\\frac{\\pi}{2}$\n\nSplitting the integral into two:\n\n$$\n\n\\begin{align}\n\nI &=\\int\\limits_0^1\\frac{a+2x}{\\sqrt{x(1-x)(a+x)(1+a+x)}}dx \\\\\n\n&=2\\int\\limits_0^1\\sqrt{\\frac{a+x}{x(1-x)(1+a+x)}}\\,dx \\\\\n\n&-a\\int\\limits_0^1\\frac{dx}{\\sqrt{x(1-x)(a+x)(1+a+x)}} \\\\[5mm]\n\nI &= 2 I_1 - a I_2\n\n\\end{align}\n\n$$\n\n$I_1$\n\n &\n\n$I_2$\n\n are elliptic integrals of third & first kind, respectively.\n\nHow to cancel the effect of\n\n$(a)$\n\n to get a constant result of\n\n$(\\pi)$\n\n?\n\nThe integral seems to hold for all\n\n$a\\in\\mathbb{C}$\n\n with\n\n$\\displaystyle I=\\begin{cases} +\\pi &\\,:\\,{\\small\\Re(a)\\ge{\\,\\,0}} \\\\ -\\pi &\\,:\\,{\\small\\Re(a)\\le{-2}} \\end{cases}$", "question_owner": "Hazem Orabi", "question_link": "https://math.stackexchange.com/questions/5103934/on-int-01-fraca2x-sqrtx1-xax1axdx-pi", "answer": { "answer_id": 5103954, "answer_text": "The substitution\n\n$$ t= \\frac{x(a+x)}{1+a} $$\n\nsimplifies the integral to\n\n$$ 
\\int_0^1 \\frac{dt}{\\sqrt{t(1-t)}}$$\n\nwhich is equal to\n\n$\\pi$\n\n.", "answer_owner": "Rishit Garg", "is_accepted": true, "score": 9, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1526107, "title": "Find the relative minimum, relative maximum, and point of inflection", "question_text": "I'm trying to find the relative minimum, relative maximum, and point of inflection:\n\n$$ f(x)= \\frac{x^3}{x^2-64} $$\n\nPlease elaborate on the points of inflection and chart of signs, because I did that and I got two inflection points and the question asks for one. Thanks.", "question_owner": "Jaime Aguilar", "question_link": "https://math.stackexchange.com/questions/1526107/find-the-relative-minimum-relative-maximum-and-point-of-inflection", "answer": { "answer_id": 1526187, "answer_text": "First calculate\n\n\\begin{align}\n\nf'(x) = \\; \\frac{x^2(x^2-192)}{(x^2-64)^2} , \\;\\;\\;\\;\\;\\text{and}\\;\\;\\;\\;\\;\n\nf''(x) = \\;\\frac{128x(x^2+192)}{(x^2-64)^3}.\n\n\\end{align}\n\nAlso, observe that the function is discontinuous at $-8$ and $8$.\n\nPoints of Inflection\n\nDefinition:\n\n A point of inflection is a point at which the curve is continuous and changes from being concave $f''(x)<0$ to convex $f''(x)>0$ or vice versa.\n\nNext, notice that\n\n\\begin{align}\n\nf''(x)>0,& \\;\\; \\text{for all }\\; x \\in (-8,0) \\cup (8,\\infty) \\\\[2ex]\n\nf''(x)<0,& \\;\\; \\text{for all }\\; x \\in (-\\infty,-8) \\cup (0,8)\n\n\\end{align}\n\nSo, there is one inflection point at $0$.\n\nMaxima and Minima\n\nA similar analysis for the first derivative yields\n\n\\begin{align}\n\nf'(x)>0,& \\;\\; \\text{for all }\\; x \\in (-\\infty,-192^{\\frac{1}{2}}) \\cup (192^{\\frac{1}{2}},\\infty) \\\\[2ex]\n\nf'(x)<0,& \\;\\; \\text{for all }\\; x \\in (-192^{\\frac{1}{2}},-8) \\cup (-8,0) \\cup (0,8)\\cup (8,192^{\\frac{1}{2}})\n\n\\end{align}\n\nNotice that the function converges to $-\\infty$ from the left and $\\infty$ from the right as
it approaches $-8$ or $8$. Moreover, it has one local (relative) maximum at $-192^{\\frac{1}{2}}$ and one local minimum at $192^{\\frac{1}{2}}$.", "answer_owner": "mzp", "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107725, "title": "How can I derive a smooth, non-singular force formula from a uniformly dense rod in $\\mathbb{R}^{1}$?", "question_text": "I am working in\n\n$\\mathbb{R}^{1}$\n\n. Somewhere on this line, there is a line segment of uniform density; this line segment is bounded by the interval\n\n$\\lbrack a,b\\rbrack$\n\n. Additionally, somewhere on this line there is a single point\n\n$d$\n\n, which may or may not coincide with the line segment of uniform density.\n\nI attempted to find an equation, which for any given real number\n\n$d$\n\n, will find the collective \"pull\" of the line segment\n\n$\\lbrack a,b\\rbrack$\n\n on\n\n$d$\n\n, as described by a force law that resembles the inverse square law in spirit, but avoids its singularity at\n\n$x = d$\n\n.\n\nMy theory is as follows:\n\nThe endpoints will have the strongest \"pull\" vectors (highest magnitude).\n\nAt the midpoint of\n\n$\\lbrack a,b\\rbrack$\n\n there will be a zero vector, as the opposing forces will cancel.\n\nInterior points not coincident with the center will have an intermediate value, which is dependent on the collective effect of the rod (which varies over the distance it spans).\n\nThe magnitude will taper off as the value\n\n$d$\n\n goes from an endpoint of\n\n$\\lbrack a,b\\rbrack$\n\n in the direction away from its midpoint.\n\nI derived the following equation, which I suspect can be simplified into an integral. 
Let\n\n$n$\n\n be the number of subintervals used to approximate the rod, and let\n\n$o$\n\n be the offset from\n\n$d$\n\n to the center of each subinterval:\n\n$$\n\n\\begin{aligned}\n\no\n\n&=\n\n\\begin{cases}\n\nd-\\bigl(a+c\\cdot\\frac{b-a}{n}\\bigr)\n\n& \\mbox{ if }d-\\bigl(a+c\\cdot\\frac{b-a}{n}\\bigr)\\neq0 \\\\\n\nd-\\Bigl(a+\\bigl(c+\\frac{1}{2}\\bigr)\\cdot\\frac{b-a}{n}\\Bigr)\n\n& \\mbox{ otherwise}\n\n\\end{cases} \\\\\n\nF(d)\n\n&=\n\n\\lim_{n\\to\\infty}\n\n\\sum_{c=0}^{n}\n\n\\frac{o}{(|o| + \\epsilon)^2}\n\n\\cdot \\frac{b-a}{n}\n\n\\end{aligned}\n\n$$\n\nThis works numerically, but the presence of\n\n$\\epsilon$\n\n is unsatisfying. I would like to replace this with a kernel that:\n\nIs based on the actual interval structure of the rod (not just its endpoints),\n\nCancels at the midpoint of the rod,\n\nDecays slowly outside the rod (e.g., like\n\n$1/\\delta$\n\n or slower),\n\nAnd avoids any singularities or artificial smoothing parameters like\n\n$\\epsilon$\n\n.\n\nBelow is a numerical plot of the force function using a discretized rod and the kernel\n\n$\\frac{\\delta}{(|\\delta| + \\epsilon)^2}$\n\n:\n\nIs there a way to derive a closed-form or series-based expression for\n\n$F(d)$\n\n that avoids the need for\n\n$\\epsilon$\n\n, while preserving the physical behaviour described above?\n\nI would very much like the resulting formula to be explicitly grounded in the mass and distance of each infinitesimal sub-interval that comprises the rod, not just a smoothed or symbolic approximation.\n\nMy intuition also seems to indicate that the derivative would be zero at the midpoint. It doesn't make sense to me that its absolute value would be non-differentiable at that point.
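The cancellation properties claimed above can be checked directly on the regularized sum (parameters and the use of subinterval centers throughout are my choices; the centers correspond to the "otherwise" branch of the definition of $o$):

```python
# Finite-n version of F(d): the eps-regularized sum should vanish at the
# midpoint of [a, b] and be antisymmetric about it.
def force(d, a, b, n=20000, eps=1e-6):
    h = (b - a) / n
    total = 0.0
    for c in range(n):
        o = d - (a + (c + 0.5) * h)          # offset to the c-th center
        total += o / (abs(o) + eps) ** 2 * h
    return total

a, b = 0.0, 2.0
print(force(1.0, a, b))                      # ~ 0 at the midpoint
print(force(0.5, a, b) + force(1.5, a, b))   # ~ 0 by mirror symmetry
```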
I could be mistaken about this though, so I say it with hesitation.", "question_owner": "Jasper", "question_link": "https://math.stackexchange.com/questions/5107725/how-can-i-derive-a-smooth-non-singular-force-formula-from-a-uniformly-dense-rod", "answer": { "answer_id": 5107787, "answer_text": "$$F(d)=\\frac{2d-a-b}{2d^2-ad-bd+a^2+b^2}$$", "answer_owner": "Christophe Boilley", "is_accepted": true, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107563, "title": "Inequality involving a differentiable function and its derivative", "question_text": "I am working on the following problem and would appreciate some help or a hint.\n\nProblem:\n\nLet\n\n$f$\n\n be differentiable and\n\n$f'$\n\n be continuous,\n\n$\\alpha \\geq 0$\n\n. Assume that for all\n\n$a \\max\\{f(x), f(y)\\}$\n\n for some\n\n$t_0 \\in (x,y)$\n\n, there will be a point in\n\n$(x, t_0)$\n\n with positive derivative and a point in\n\n$(t_0, y)$\n\n with negative derivative. This will give you the desired contradiction in this case.\n\nFor completeness, the solution for\n\n$\\alpha>0$\n\n:\n\n Since\n\n$t_0$\n\n is an interior maximum point and\n\n$g'$\n\n is continuous, we have\n\n$g'(t)\\ge 0$\n\n in\n\n$(t_0-\\epsilon, t_0)$\n\n and\n\n$g'(t)\\le 0$\n\n in\n\n$(t_0, t_0 +\\epsilon)$\n\n for some\n\n$\\epsilon>0$\n\n. However, if we take\n\n$t_1 \\in (t_0 - \\epsilon, t_0)$\n\n and\n\n$t_2 \\in (t_0, t_0 + \\epsilon)$\n\n we have by the condition given in the problem that\n\n$f'(t_1) \\le \\frac{\\alpha}{2} (t_1 - y)$\n\n or\n\n$f'(t_2) \\ge \\frac{\\alpha}{2} (t_2-x)$\n\n. If the first is the case, then\n\n$g'(t_1) \\le - \\frac{\\alpha}{2}(t_1-x) < 0$\n\n, which is not true. If the second is the case, then\n\n$g'(t_2) \\ge - \\frac{\\alpha}{2}(t_2-y) > 0$\n\n, which is not true.
This is the desired contradiction.", "answer_owner": "Lukas", "is_accepted": false, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4857242, "title": "Using Laplace Transform to solve non-linear ODE for pendulum motion and showing why it cannot be solved", "question_text": "After solving the linear version of the ODE for the Pendulum equation using Laplace transform, I tried to use LT to solve the non-linear ODE for pendulum motion.\n\n$\\frac{d^{2}\\theta}{dt^{2}}+\\frac{g}{l} \\sin\\theta=0$\n\nHowever, I am not very familiar with LT and so i do not really understand how to convert the non-linear term into the Laplace domain? should I just use the sin identity, even though it doesn't look right? Once it is answered, I would also like to know why exactly it cannot be solved using LT.\n\nThanks in advance", "question_owner": "Alex", "question_link": "https://math.stackexchange.com/questions/4857242/using-laplace-transform-to-solve-non-linear-ode-for-pendulum-motion-and-showing", "answer": { "answer_id": 4857367, "answer_text": "i do not really understand how to convert the non-linear term into the Laplace domain.\n\nNobody else really does either. Computing the Laplace transform of even basic nonlinearities requires approximation via infinite series, see\n\nhere\n\n. The only nonlinear way to combine functions that plays nice with the Laplace transform is convolution to my knowledge. 
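The convolution rule mentioned above can be sanity-checked numerically (the example functions are mine): for $u(t)=v(t)=e^{-t}$ supported on $t\ge 0$, one has $(u*v)(t)=t\,e^{-t}$, whose Laplace transform at $s$ should equal $\bigl(\tfrac{1}{s+1}\bigr)^2=\mathcal{L}[u](s)\,\mathcal{L}[v](s)$.

```python
import math

# Composite trapezoid rule for int_0^T f(t) e^{-s t} dt (the tail past T = 60
# is negligible for these integrands).
def laplace(f, s, T=60.0, n=200000):
    h = T / n
    g = lambda t: f(t) * math.exp(-s * t)
    return h * (0.5 * (g(0.0) + g(T)) + sum(g(k * h) for k in range(1, n)))

s = 2.0
lhs = laplace(lambda t: t * math.exp(-t), s)  # L[u*v](s)
rhs = 1.0 / (s + 1.0) ** 2                    # L[u](s) * L[v](s)
print(lhs, rhs)  # both ~ 1/9
```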
The relation is given by\n\n$$\n\n\\mathcal{L}[u*v](s) = \\mathcal{L}[u](s)\\cdot\\mathcal{L}[v](s),\n\n$$\n\nwhere\n\n$$\n\n[u*v](t) = \\int_{-\\infty}^\\infty u(\\tau)v(t-\\tau)~\\mathrm{d}\\tau.\n\n$$", "answer_owner": "whpowell96", "is_accepted": true, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4663395, "title": "Does division by 0 really have anything to do with $\\lim_{x \\to 0} \\frac{1}{x}$?", "question_text": "To my very limited knowledge, division by 0 is undefined precisely because it breaks the field axioms. No dividing by 0 if you want a field. However there do exist structures that are not fields which allow for division by 0. Like the Riemann sphere. Or wheel algebras.\n\nHowever a very common argument/\"proof\" I often hear for why division by 0 is undefined, is that\n\n$\\lim_{x \\to 0+} \\frac{1}{x} = \\infty$\n\n whereas\n\n$\\lim_{x \\to 0-} \\frac{1}{x} = -\\infty$\n\n. And therefore since the limits go the opposite direction, this means we can't say\n\n$\\frac{1}{0} = \\infty$\n\n.\n\nBut I do not understand this line of reasoning. If the left and right limits differ, that would just mean the limit from both sides isn't defined. It wouldn't tell me anything about the function's value\n\nat\n\n 0, only\n\nnear\n\n 0? And I don't see why a function's behaviour near 0 would have anything to do with its value at 0.\n\nIs there a missing step? Or is it just a fallacy?\n\nEdit: A few people just agreeing with me haha, so let me try to put it another way.
Is there anything, anything at all, that we can conclude about division by 0, specifically from the differing left/right limits of\n\n$\\frac{1}{x}$\n\n?", "question_owner": "confusedscreaming", "question_link": "https://math.stackexchange.com/questions/4663395/does-division-by-0-really-have-anything-to-do-with-lim-x-to-0-frac1x", "answer": { "answer_id": 5106901, "answer_text": "Indeed, the reason for leaving\n\n$\\frac{1}{0}$\n\n undefined has nothing to do with continuity of the function\n\n$\\frac{1}{x}$\n\n. Otherwise we would need to leave\n\n$\\lfloor 1 \\rfloor$\n\n undefined, since we can't define the function\n\n$\\lfloor x \\rfloor$\n\n as a continuous function at\n\n$x=1$\n\n.", "answer_owner": "jjagmath", "is_accepted": false, "score": 5, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107710, "title": "Number of possibilities in the known universe?", "question_text": "this may be a rather simple one. I am not a math guy by any means, but you folks will have a good time with this one I think. I will posit what I think is a layman's answer and then you all have at it!\n\nI got this question in my head when researching large, finite numbers. I was looking at Graham's Number at the time. This number seems to be a lot bigger than all the possibilities in the \"observable\" universe because expressing it in digits using only a Planck volume for each digit apparently takes more than all the Planck volumes in the observable universe. A lot more.\n\nSo the title is the question and I think the answer (knowing nothing about sub-atomic particle physics) Would be something like: Total of all sub-atomic (or just 'all') particles in the known universe times the Planck volumes in the observable universe. That total would be then multiplied by the power of the Planck volumes in the observable universe. I think this would put every particle in every combination of every Planck Volume. 
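A back-of-envelope version of the count proposed above is easy to write down. The inputs are rough, commonly quoted orders of magnitude, not measured facts: roughly $10^{80}$ elementary particles and roughly $10^{185}$ Planck volumes in the observable universe. Assigning each particle independently to a Planck volume gives $V^P$ arrangements; the number itself is unwritable, but its digit count is simple arithmetic.

```python
# Assumed orders of magnitude (NOT precise physical values):
P_EXP = 80    # particles ~ 10**80
V_EXP = 185   # Planck volumes ~ 10**185

# log10( (10**V_EXP) ** (10**P_EXP) ) = V_EXP * 10**P_EXP digits, roughly
digits = V_EXP * 10 ** P_EXP
print(digits)  # about 1.85e82 digits -- enormous, yet tiny next to Graham's number
```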
I know this is not exact (and I may be totally off), but you folks can fool with it how you like and tell us the results. And let me know if this is an expressible number, and if so, what that would (roughly) be, taking into account some assumptions about the size of the universe and the mass in the universe. I know these things cannot be fully known at this time, but assigning some values even if they are wrong would be kind of fun.\n\nThe reason I thought about this in the first place is I believe this is the largest number that matters to our existence. Any higher number seems like kind of a waste of time relative to our existence. And it looks like Graham's and others' large numbers like it are much larger than this number. Let me know! Thanks!\n\nGot some info here on the site about the relative size of Graham's number to the universe, but Graham's Number appears to be incalculable in size.\n\nuniverse sized cube and visualising really large numbers", "question_owner": "Themblues", "question_link": "https://math.stackexchange.com/questions/5107710/number-of-possibilities-in-the-known-universe", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5098183, "title": "Can this be done without a calculator?
Show that $\\int_0^1(1-x^2)(1-x^4)(1-x^6)\\dots dx<\\frac{1}{\\sqrt3}$", "question_text": "Let\n\n$I=\\displaystyle\\int_0^1\\prod_{k=1}^\\infty\\left(1-x^{2k}\\right)dx$\n\n.\n\nWithout a calculator, show that\n\n$I<\\frac{1}{\\sqrt3}$\n\n.\n\nAccording to\n\nWolfram\n\n,\n\n$I\\approx 0.999969\\left(\\frac{1}{\\sqrt3}\\right)$\n\n.\n\nHere is a graph of the integrand in blue, and\n\n$y=\\frac{1}{\\sqrt3}$\n\n in red.\n\nContext\n\nI was playing with integrals of power series and stumbled upon this curious numerical result.\n\nMy attempt\n\nAccording to Euler's\n\nPentagonal Number Theorem\n\n,\n\n$$\\prod_{k=1}^\\infty\\left(1-x^{2k}\\right)=\\sum_{k=-\\infty}^\\infty(-1)^kx^{k(3k+1)}$$\n\nSo\n\n$\\begin{align}\n\nI&=\\int_0^1\\prod_{k=1}^\\infty\\left(1-x^{2k}\\right)dx\\\\\n\n&=\\int_0^1\\sum_{k=-\\infty}^\\infty(-1)^kx^{k(3k+1)}dx\\\\\n\n&=\\sum_{k=-\\infty}^\\infty\\frac{(-1)^k}{3k^2+k+1}\\\\\n\n&=1+\\sum_{k=1}^\\infty\\left(\\frac{(-1)^k}{3k^2+k+1}+\\frac{(-1)^{-k}}{3k^2-k+1}\\right)\\\\\n\n&=1+\\sum_{k=1}^\\infty(-1)^k\\frac{6k^2+2}{9k^4+5k^2+1}\n\n\\end{align}$\n\nThen what? 
I'm not sure this is doable without a calculator.\n\nEdit\n\nFrom the OEIS:\n\nA258408\n\n is\n\n$I=\\displaystyle\\int_0^1\\prod_{k=1}^\\infty\\left(1-x^{2k}\\right)dx=\\frac{4\\sqrt{\\frac{3}{11}}\\pi\\sinh \\left(\\frac{\\sqrt{11}\\pi}{6}\\right)}{2\\cosh\\left(\\frac{\\sqrt{11}\\pi}{3}\\right)-1}=0.577332\\dots$\n\nA258232\n\n is\n\n$\\displaystyle\\int_0^1\\prod_{k=1}^\\infty\\left(1-x^k\\right)dx=\\frac{8\\sqrt{\\frac{3}{23}}\\pi\\sinh \\left(\\frac{\\sqrt{23}\\pi}{6}\\right)}{2\\cosh\\left(\\frac{\\sqrt{23}\\pi}{3}\\right)-1}=0.368412\\dots$", "question_owner": "Dan", "question_link": "https://math.stackexchange.com/questions/5098183/can-this-be-done-without-a-calculator-show-that-int-011-x21-x41-x6", "answer": { "answer_id": 5098188, "answer_text": "This is just limited to the computation of the integral.\n\nUse\n\n$$3k^2+k+1=3(k-a)(k-b) \\qquad \\text{with} \\quad (a,b)=-\\frac{1}{6} \\left(1\\pm i \\sqrt{11}\\right)$$\n\n$$3k^2-k+1=3(k-c)(k-d) \\qquad \\text{with} \\quad (c,d)=+\\frac{1}{6} \\left(1\\pm i \\sqrt{11}\\right)$$\n\nThen partial fraction decomposition. 
Sum up to\n\n$n$\n\n to face generalized harmonic numbers and use their asymptotics to obtain\n\n$$S=\\sum_{k=1}^\\infty(-1)^k\\frac{6k^2+2}{9k^4+5k^2+1}$$\n\n$$S=-1-\\frac{i \\pi \\left(\\tan \\left(\\frac{1}{12} \\left(5+i \\sqrt{11}\\right) \\pi \\right)+\\cot\n\n \\left(\\frac{1}{12} \\left(5+i \\sqrt{11}\\right) \\pi \\right)-2 \\csc \\left(\\frac{1}{6}\n\n \\left(\\pi +i \\sqrt{11} \\pi \\right)\\right)\\right)}{2 \\sqrt{11}}$$\n\nExpand the trigonometric functions to get\n\n$$\\small S=-1+4 \\pi \\sqrt{\\frac{3}{11}}\\,\\frac{\\sinh \\left(\\frac{\\sqrt{11} \\pi }{6}\\right)}{2 \\cosh \\left(\\frac{\\sqrt{11} \\pi }{3}\\right)-1}$$\n\n$$I=\\displaystyle\\int_0^1\\prod_{k=1}^\\infty\\left(1-x^{2k}\\right)\\,dx=4 \\pi \\sqrt{\\frac{3}{11}}\\,\\frac{\\sinh \\left(\\frac{\\sqrt{11} \\pi }{6}\\right)}{2 \\cosh \\left(\\frac{\\sqrt{11} \\pi }{3}\\right)-1}$$", "answer_owner": "Claude Leibovici", "is_accepted": false, "score": 15, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107572, "title": "Solving $x^{x^3}=729$", "question_text": "I want to solve\n\n$$x^{x^3}=729$$\n\nso I tried like below:\n\n$$\\log_9{x^{x^3}}=\\log_9{729}\\\\x^3\\log_9x=3\\\\x^3=\\frac{3}{\\log_9x}\\\\x^3=3\\log_x{9}\\\\\\cdots$$\n\nbut I got stumped. Then I tried this:\n\n$$x^{x^3}=729\\\\(x^{x^3}=729)^3\\\\x^{3x^3}=(9^3)^3$$\n\nIt can be rewritten as:\n\n$$(x^3)^{x^3}=9^9\\\\x^3=9\\\\x=\\sqrt[3]{9} $$\n\nThen I checked it with Desmos:\n\nIt seems that I was correct. Anyway, I think my second try was somehow heuristic, and I'm asking for an analytic solution for that type of equation, if it exists. 
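The closed form obtained above for $\int_0^1\prod_{k\ge 1}(1-x^{2k})\,dx$ can be cross-checked numerically against the pentagonal-number series derived in the question (a sketch using only the Python standard library; the truncation point `K` is my choice and is not part of the original posts):

```python
import math

# Closed form from the answer above:
# I = 4*pi*sqrt(3/11)*sinh(sqrt(11)*pi/6) / (2*cosh(sqrt(11)*pi/3) - 1)
closed = (4 * math.pi * math.sqrt(3 / 11) * math.sinh(math.sqrt(11) * math.pi / 6)
          / (2 * math.cosh(math.sqrt(11) * math.pi / 3) - 1))

# Pentagonal-number series from the question:
# I = 1 + sum_{k>=1} (-1)^k (6k^2 + 2) / (9k^4 + 5k^2 + 1)
K = 5000  # truncation point; the alternating tail is O(1/K^2)
series = 1 + sum((-1) ** k * (6 * k * k + 2) / (9 * k ** 4 + 5 * k * k + 1)
                 for k in range(1, K + 1))

print(closed, series)  # both ~0.577332, just below 1/sqrt(3) ~ 0.577350
```

The two values agree to within the truncation error, and both sit just below $1/\sqrt3$, matching the OEIS value $0.577332\dots$ quoted in the question.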
Or another point of view or idea.", "question_owner": "Khosrotash", "question_link": "https://math.stackexchange.com/questions/5107572/solving-xx3-729", "answer": { "answer_id": 5107582, "answer_text": "More amusing would be\n\n$$x^{x^3}=k$$\n\n Take logarithms:\n\n$$x^3\\log(x)=\\log(k) \\implies x^3\\log(x^3)=3\\log(k)$$\n\n which gives\n\n$$x^3=\\frac{3 \\log (k)}{W(3 \\log (k))}$$\n\nEdit\n\nEven more generally, using the same gymnastics,\n\n$$\\left(x^a\\right)^{x^b}=k \\quad \\implies \\quad x=\\Bigg(\\frac {\\frac b a \\log(k) } {W\\left(\\frac{b }{a}\\log (k)\\right) }\\Bigg)^{\\frac 1b}$$\n\n where\n\n$(a,b)$\n\n could be any real numbers.", "answer_owner": "Claude Leibovici", "is_accepted": true, "score": 6, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107573, "title": "Study of a series $ \\sum_{n=1}^{\\infty} \\log \\left[1+\\left(n^{x}-n^{x} \\cos \\frac{1}{n^{2}}\\right)\\right] $", "question_text": "Study, as the real parameter\n\n$x$\n\n varies, the numerical series\n\n$$\n\n\\sum_{n=1}^{\\infty} \\log \\left[1+\\left(n^{x}-n^{x} \\cos \\frac{1}{n^{2}}\\right)\\right]\n\n$$\n\nThe same quantity\n\n$n^x$\n\n in the difference suggests that I can factor it out.\n\n$$\n\na_n=\\log\\left[ 1 + n^x \\left(1 - \\cos\\frac{1}{n^2}\\right)\\right]\n\n=\\log\\left[ 1 + \\frac{\\left(1 - \\cos\\frac{1}{n^2}\\right)}{\\frac1{n^x}}\\right].\n\n$$\n\nNow if\n\n$x\\leq 0$\n\n, for\n\n$n \\to \\infty$\n\n,\n\n$1/n^x \\xrightarrow{n\\to \\infty} +\\infty$\n\n and since\n\n$1 - \\cos t \\sim_0 t^2/2$\n\n for\n\n$t \\to 0$\n\n, we obtain\n\n$$\n\n1 - \\cos \\frac{1}{n^2} \\sim_\\infty \\frac{1}{2 n^4}.\n\n$$\n\nthus\n\n$$\n\na_n=\\log \\left(1+\\frac{\\frac{1}{2 n^4}}{\\frac{1}{n^x}}\\right)\n\n=\\log \\left(1+\\frac{n^x}{2n^4}\\right)\n\n=\\log \\left(1+\\frac{1}{2}n^{x-4}\\right)\n\n$$\n\nSince by assumption\n\n$x\\leq 0$\n\n then\n\n$x-4\\leq -4<0$\n\n, therefore the term\n\n$n^{x-4}\\to 0$
when\n\n$n\\to\\infty$\n\n.\n\nHence, since for\n\n$u\\to 0$\n\n we have\n\n$\\log(1+u) \\sim_0 u$\n\n,\n\n$$\n\n\\log \\left(1+\\frac{1}{2}n^{x-4}\\right) \\sim_\\infty \\frac{1}{2} n^{x-4}.\n\n$$\n\nSince\n\n$p=x-4\\leq -4<-1$\n\n, these are the terms of a convergent $p$-series, so for $x\\leq 0$ the original series should converge.\n\nIf\n\n$x>0$\n\n then\n\n$$\n\na_n=\\log\\left[ 1 + n^x \\left(1 - \\cos\\frac{1}{n^2}\\right)\\right]\n\n$$\n\ngives an indeterminate form, for\n\n$n\\to \\infty$\n\n, of type\n\n$\\infty\\cdot 0$\n\n: what to do?", "question_owner": "Sebastiano", "question_link": "https://math.stackexchange.com/questions/5107573/study-of-a-series-sum-n-1-infty-log-left1-leftnx-nx-cos-fra", "answer": { "answer_id": 5107576, "answer_text": "You have\n\n\\begin{align}\n\na_n:=\\log\\left( 1 + n^x \\left(1 - \\cos\\frac{1}{n^2}\\right)\\right) \\sim n^x\\cdot \\frac{1}{2n^4}=\\frac12 n^{x-4};\n\n\\end{align}\n\nthus\n\n$\\sum a_n$\n\n behaves like\n\n$\\sum n^{x-4}$\n\n, which converges iff\n\n$4-x>1$\n\n.", "answer_owner": "Sine of the Time", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107602, "title": "Why the general solution of $2U_x-U_y=0$ is equivalent to a series and can be simplified to $f(x+2y)$?", "question_text": "The following is an example from my lecture notes:\n\n$$2U_x-U_y=0$$\n\nTo solve this PDE, we use\n\n$U=e^{\\alpha x+\\beta y}$\n\n.
Substituting it in the equation gives:\n\n$$(2\\alpha-\\beta)e^{\\alpha x+\\beta y}=0 \\quad\\Rightarrow\\quad \\beta=2\\alpha$$\n\nSo,\n\n$$U(x,y)= e^{\\alpha x+2\\alpha y}= e^{\\alpha(x+2y)}\\;;\\ \\forall\\alpha$$\n\nHence, the general solution of this equation is a linear combination of this set of solutions:\n\n$$U(x,y)=\\sum_{i=0}^{\\infty}A_ie^{i(x+2y)}=f(x+2y)$$\n\nwhere\n\n$f$\n\n is an arbitrary function.\n\nI have trouble understanding how, from\n\n$e^{\\alpha(x+2y)}\\;;\\ \\forall\\alpha$\n\n we conclude\n\n$U(x,y)= \\sum_{i=0}^{\\infty}A_ie^{i(x+2y)}$\n\n and why it can be generalized to\n\n$f(x+2y)$\n\n?\n\nFor instance, comparing with ODEs, I know that\n\n$y_1=e^{x}$\n\n and\n\n$y_2=e^{2x}$\n\n are solutions to\n\n$y''-3y'+2y=0$\n\n, and the general solution is a linear combination of these two solutions, which is\n\n$y=c_1e^x+c_2e^{2x}$\n\n.\n\nHowever, for the above PDE, I don't understand how the linear combination of the solutions is the given series. I mean, we got\n\n$e^{\\alpha(x+2y)}$\n\n for all values\n\n$\\alpha$\n\n as the solution, but the series incorporates only the non-negative integer values of\n\n$\\alpha$\n\n (renamed to\n\n$i$\n\n later). Additionally, I don't see how the series can be simplified to\n\n$f(x+2y)$\n\n (Sure, it satisfies\n\n$2U_x-U_y=0$\n\n, but how is it derived from the abovementioned series?).", "question_owner": "User", "question_link": "https://math.stackexchange.com/questions/5107602/why-the-general-solution-of-2u-x-u-y-0-is-equivalent-to-a-series-and-can-be-si", "answer": { "answer_id": 5107611, "answer_text": "$$2U_x-U_y=0$$\n\nYou wrote:\n\n$\\color{red}{\\text{To solve this PDE, we use } U=e^{\\alpha x+\\beta y}}$\n\n.\n\nWhy use the exponential function?
This seems to come out of nowhere.\n\nWhy not write instead:\n\nTo solve this PDE, we use\n\n$U=f\\left(\\alpha x+\\beta y\\right)$\n\n.\n\nThis comes out of nowhere as well, but no more than the above.\n\n$$U_x=\\alpha f'(X)\\quad ; \\quad U_y=\\beta f'(X)\\quad \\text{with} \\quad X=\\left(\\alpha x+\\beta y\\right)$$\n\n$$2U_x-U_y=2\\alpha f'(X)-\\beta f'(X)=(2\\alpha-\\beta)f'(X)=0$$\n\nFor example this is satisfied with\n\n$\\alpha=1$\n\n and\n\n$\\beta=2$\n\n leading to\n\n$X=x+2y$\n\n$$U=f\\left(x+ 2y\\right)\\quad \\text{is the general solution.}$$\n\nIsn't it simpler than a sum of exponentials?\n\nNote that the Method of Characteristics avoids such guesswork and is much more general for solving more complicated linear PDEs.", "answer_owner": "JJacquelin", "is_accepted": false, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1405782, "title": "integral inequality involving $\\sup|f'|$", "question_text": "Let $f:[0,1]\\rightarrow \\mathbb R$ be a continuous function, differentiable on $(0,1)$, with the property that there exists $a \\in (0,1]$ such that\n\n$$\\int_{0}^a f(x)dx=0$$\n\nProve that\n\n$$\\left|\\int_{0}^1 f(x)dx \\right|\\le \\dfrac {1-a} 2 \\cdot \\sup_{x\\in (0,1)} |f'(x)|$$\n\nFind the case of equality.\n\nWe have one solution using the Mean Value Theorem. Is there another one?", "question_owner": "Booldy", "question_link": "https://math.stackexchange.com/questions/1405782/integral-inequality-involving-supf", "answer": { "answer_id": 5107613, "answer_text": "We can also use Taylor's Mean Value Theorem to prove this.\n\nLet\n\n$F(x)=\\int_0^x f(t)\\,dt$\n\n.
Then\n\n$F(0)=0$\n\n,\n\n$F'(x)=f(x)$\n\n and\n\n$F''(x)=f'(x)$\n\n.\n\nBy assumption,\n\n$\\int_0^a f(x)\\,dx=0$\n\n, hence\n\n$F(a)=0$\n\n. Also write\n\n$\\sup_{x\\in (0,1)} |f'(x)|=M$\n\n.\n\nApplying Taylor’s theorem with the Lagrange remainder around\n\n$x=a$\n\n, there exist\n\n$\\theta,\\epsilon\\in(0,1)$\n\n such that\n\n$F(1)=F(a)+(1-a)F'(a)+\\frac{(1-a)^2}{2}F''(\\theta)$\n\nand\n\n$F(0)=F(a)-aF'(a)+\\frac{a^2}{2}F''(\\epsilon)$\n\n.\n\nSince\n\n$F(a)=0$\n\n and\n\n$F(0)=0$\n\n, we get\n\n$0=-aF'(a)+\\frac{a^2}{2}F''(\\epsilon)$\n\n,\n\nhence\n\n$F'(a)=\\frac{a}{2}F''(\\epsilon)$\n\n.\n\nSubstituting this into the expression for\n\n$F(1)$\n\n gives\n\n$F(1)=\\frac{a(1-a)}{2}F''(\\epsilon)+\\frac{(1-a)^2}{2}F''(\\theta)$\n\n.\n\nTaking absolute values and using the triangle inequality,\n\n$|F(1)|\\le \\frac{a(1-a)}{2}|F''(\\epsilon)|+\\frac{(1-a)^2}{2}|F''(\\theta)|$\n\n$ \\leq \\frac{a(1-a)}{2}M+\\frac{(1-a)^2}{2}M = \\frac{1-a}{2}M$\n\nFinally, noting that\n\n$F''(x)=f'(x)$\n\n, we obtain\n\n$\\left|\\int_0^1 f(x)\\,dx\\right|\\le \\frac{1-a}{2}\\sup_{0<x<1}|f'(x)|$." }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 2220765, "title": "Prob. 15, Chap. 4 in Baby Rudin: Every continuous open mapping of $\\mathbb{R}^1$ into $\\mathbb{R}^1$ is monotonic", "question_text": "Here is my attempt. Suppose that $f$ is a continuous open mapping of $\\mathbb{R}^1$ into $\\mathbb{R}^1$ but that $f$ is not monotonic. Then there exist points $x < y < z$ such that either\n\n$$\n\n\\begin{align}\n\n& f(x) < f(y) \\ \\mbox{ and } \\ f(y) > f(z), \\\\\n\n& \\mbox{ or } \\qquad f(x) > f(y) \\ \\mbox{ and } \\ f(y) < f(z).\n\n\\end{align}\n\n$$\n\nNow as\n\n$f$\n\n is continuous on the closed interval\n\n$[x, z]$\n\n and as\n\n$[x, z]$\n\n is a compact subset of\n\n$\\mathbb{R}$\n\n, so the image\n\n$f\\left( [x, z] \\right)$\n\n is also a compact --- and hence closed and bounded --- subset of\n\n$\\mathbb{R}$\n\n.\n\nThus the set\n\n$f\\left( [x, z] \\right)$\n\n has a maximum element\n\n$M$\n\n and a minimum element\n\n$m$\n\n.\n\nFirst, assume that\n\n$$x < y < z, \\ \\ f(x) < f(y), \\ \\mbox{ and } \\ f(y) > f(z). $$\n\nThen\n\n$$ f(y) > \\max \\left\\{ f(x), f(z) \\right\\}.
\\ \\tag{1}$$\n\nBut in view of (1) above, we can conclude that the maximum\n\n$M$\n\n of\n\n$f\\left( [x, z] \\right)$\n\n is attained at some interior point\n\n$p$\n\n, say, of\n\n$[x, z]$\n\n.\n\nThen we can conclude that the image set\n\n$f\\left( (x, z ) \\right)$\n\n of the open set\n\n$(x, z)$\n\n in the domain space\n\n$\\mathbb{R}^1$\n\n has a maximum element\n\n$M$\n\n and therefore cannot be open in the codomain space\n\n$\\mathbb{R}^1$\n\n; for no\n\n$\\delta > 0$\n\n can the open interval\n\n$( M-\\delta, M+\\delta)$\n\n be contained in\n\n$f\\left( (x, z ) \\right)$\n\n, which gives rise to a contradiction.\n\nSo we assume that\n\n$$ x < y < z, \\ \\ f(x) > f(y), \\ \\mbox{ and } \\ f(y) < f(z). $$\n\nThen\n\n$$f(y) < \\min \\left\\{ f(x), f(z) \\right\\}. \\ \\tag{2}$$\n\nThen the minimum\n\n$m$\n\n of the set\n\n$f \\left( [x, z] \\right)$\n\n is attained at some interior point\n\n$q$\n\n, say, of\n\n$[x, z]$\n\n, which implies that the image under\n\n$f$\n\n of the open set\n\n$(x, z)$\n\n fails to be open because this image set contains\n\n$m$\n\n but fails to contain the open interval\n\n$(m-\\delta, m+\\delta)$\n\n for any real number\n\n$\\delta > 0$\n\n, which is a contradiction.\n\nHence every continuous open mapping\n\n$f$\n\n of\n\n$\\mathbb{R}^1$\n\n into\n\n$\\mathbb{R}^1$\n\n is monotonic.\n\nIs this proof correct? If so, then what about the presentation? Is the presentation lucid enough too? 
If not, then where does the problem lie?", "question_owner": "Saaqib Mahmood", "question_link": "https://math.stackexchange.com/questions/2220765/prob-15-chap-4-in-baby-rudin-every-continuous-open-mapping-of-mathbbr", "answer": { "answer_id": 2221425, "answer_text": "Hint: Let $a<b$ with $f(a)>f(b)$.", "answer_owner": "xpaul", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 2148777, "title": "Limit of the Riemann sum $ \\sum_{i=1}^n \\sin({i\\pi \\over n}){\\pi \\over n}$", "question_text": "Show that $f: [0, \\pi] \\to \\mathbb R$, $f(x) = \\sin(x)$ is Riemann-integrable and determine $\\int_0^\\pi f$ (e.g. by Riemann sums)\n\nI showed it is Riemann-integrable because it is continuous.\n\nSo, I made an equal partition $P_n$, s. t. $|x_i - x_{i-1}|= {\\pi \\over n}$\n\nand $$S(f, P, Z) = \\sum_{i=1}^n \\sin({i\\pi \\over n}){\\pi \\over n}$$\n\nI've been given a formula $\\sin(a)+\\sin(a+t)+\\sin(a+2t)+...+\\sin(a+(n-1)t) = {\\sin(nt/2) \\over \\sin(t/2)} \\sin(a+{n-1 \\over 2}t)$\n\nwhich holds for all $a, t \\in \\mathbb R$, $t \\neq 0$.\n\nI took out $\\pi \\over n$ from the sum and put it in front of it, ran the formula and got $${{\\sin({n({\\pi \\over n}) \\over 2})} \\over \\sin({{\\pi \\over n} \\over 2})} \\sin({\\pi \\over n} + {n-1 \\over 2}{\\pi \\over n})$$\n\nI rolled it around a bit, but could not get $2$ out of it, any help?", "question_owner": "repulsive23", "question_link": "https://math.stackexchange.com/questions/2148777/limit-of-the-riemann-sum-sum-i-1n-sini-pi-over-n-pi-over-n", "answer": { "answer_id": 2148800, "answer_text": "Since $$\\sum_{k= 1}^{n} \\sin kx= \\frac{\\sin\\left({nx\\over2} \\right)}{\\sin\\left(\\frac x2\\right)}\\sin\\left( \\frac{n+1}{2}x\\right)$$\n\ntaking $x= \\frac \\pi n$ we get,\n\n$$S = {\\pi \\over n}\\sum_{i=1}^n \\sin({i\\pi \\over n}) = 2\n\n{{{\\pi \\over 2n}} \\over \\sin({{\\pi \\over 2n} })} \\sin\\left( {n+1 \\over n}{\\pi \\over
2}\\right)\\to 2 $$\n\nsince $$ \\lim_{n\\to \\infty }{{{\\pi \\over 2n}} \\over \\sin({{\\pi \\over 2n} })} = \\lim_{h\\to 0} \\frac{h}{\\sin h}= 1$$", "answer_owner": "Guy Fsone", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107505, "title": "Determine the nature of the series $ \\sum_{n=1}^{\\infty} \\left( e^{\\frac{1}{n}} - 1 - \\frac{1}{n^4} \\right) $", "question_text": "We have this series\n\n$$\n\n\\sum_{n=1}^{\\infty} \\left( e^{\\frac{1}{n}} - 1 - \\frac{1}{n^4} \\right)\n\n$$\n\nPreface: can the sum be split into three terms?\n\nLet\n\n$a_n = e^{1/n} - 1 - \\frac{1}{n^4}$\n\n:\n\n$$\n\n\\lim_{n \\to \\infty} a_n = \\lim_{n \\to \\infty} \\left( e^{1/n} - 1 - \\frac{1}{n^4} \\right).\n\n$$\n\nSince\n\n$e^{1/n} \\to 1$\n\n and\n\n$\\frac{1}{n^4} \\to 0$\n\n, it follows that\n\n$$\n\n\\lim_{n \\to \\infty} a_n = 1 - 1 - 0 = 0.\n\n$$\n\nThe limit of the general term is zero, so the necessary condition for the convergence of the series is satisfied. Let us see if there exists a convergent majorant series. Observe that\n\n$e^{1/n} - 1$\n\n behaves like\n\n$\\frac{1}{n}$\n\n for large\n\n$n$\n\n, so\n\n$$\n\na_n \\sim_{\\infty} \\frac{1}{n} - \\frac{1}{n^4} \\sim_{\\infty} \\frac{1}{n}.\n\n$$\n\nThe harmonic series diverges, but I think I cannot say that\n\n$\\sum a_n$\n\n diverges.\n\nSo must I use a different criterion?", "question_owner": "Sebastiano", "question_link": "https://math.stackexchange.com/questions/5107505/determine-the-nature-of-the-series-sum-n-1-infty-left-e-frac1n", "answer": { "answer_id": 5107512, "answer_text": "Using\n\n$$ e^x\\ge 1+x $$\n\none has\n\n$$ e^{\\frac{1}{n}} - 1 - \\frac{1}{n^4}\\ge \\frac1n-\\frac1{n^4} $$\n\nand hence\n\n$$ \\sum_{n=1}^\\infty\\bigg(e^{\\frac{1}{n}} - 1 - \\frac{1}{n^4}\\bigg)\\ge \\sum_{n=1}^\\infty\\frac1n-\\sum_{n=1}^\\infty\\frac1{n^4}=\\infty.
$$\n\nSo\n\n$\\sum_{n=1}^\\infty\\bigg(e^{\\frac{1}{n}} - 1 - \\frac{1}{n^4}\\bigg)$\n\n diverges.", "answer_owner": "xpaul", "is_accepted": true, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 484160, "title": "Separate incomplete elliptic integral into real and imaginary parts", "question_text": "I am working on a problem that involves Incomplete Elliptic Integrals of the First and Second kind of the form $F(\\sin^{-1}x~|~m)$ and $E(\\sin^{-1}x~|~m)$ where the parameters $m$, $x$ are real numbers in the range $m>1$ and $1/\\sqrt{m} \\le x \\le 1$.\n\n($x$ and $m$ are related to the commonly used argument $\\phi$ and modulus $k$ by $x \\equiv \\sin \\phi$ and $m \\equiv k^2$)\n\nAs can be seen by plotting them, in this range the real part of the integrals is independent of $x$ while the imaginary part isn't. As an example, see\n\nthis plot for F\n\n and\n\nthis other for E\n\n for a value $m=5$.\n\nWhat I would like to do is to separate the real and imaginary parts of these integrals, at least in this particular range. In other words, finding the real valued functions $f_{re} (m)$, $g_{re} (m)$, $f_{im} (x,m)$ and $g_{im} (x,m)$ that satisfy:\n\n$$\n\nF(\\sin^{-1}x~|~m) \\equiv f_{re} (m) + \\text{i} f_{im} (x,m)\n\n$$\n\n$$\n\nE(\\sin^{-1}x~|~m) \\equiv g_{re} (m) + \\text{i} g_{im} (x,m)\n\n$$\n\nin the range $m>1$ and $1/\\sqrt{m} \\le x \\le 1$.\n\nBy using the Reciprocal Modulus Transformations (see DLMF section 19.7) and taking the limit $x\\rightarrow 1/\\sqrt{m}$, I have found the real parts to be:\n\n$$\n\nf_{re}(m) \\equiv \\frac{1}{\\sqrt{m}} K\\left(\\frac{1}{m}\\right)\n\n$$\n\n$$\n\ng_{re}(m) \\equiv \\sqrt{m} \\left[ E \\left( \\frac{1}{m} \\right) - K \\left( \\frac{1}{m} \\right) \\right] + \\frac{1}{\\sqrt{m}} K\\left(\\frac{1}{m}\\right)\n\n$$\n\nHowever, the imaginary parts $f_{im} (x,m)$, $g_{im} (x,m)$ escape me.
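Back to the series $\sum_n\big(e^{1/n}-1-\frac{1}{n^4}\big)$ settled above: since its terms behave like $1/n$, the partial sums should grow like $\ln N$, which is easy to observe numerically (a sketch; the cutoffs $10^3$ and $10^4$ are illustrative choices of mine, not from the original posts):

```python
import math

def partial_sum(N):
    # math.expm1(x) computes e^x - 1 accurately for small x
    return sum(math.expm1(1.0 / n) - 1.0 / n ** 4 for n in range(1, N + 1))

# If S(N) ~ ln N + const, then S(10N) - S(N) ~ ln 10 ~ 2.3026 for large N.
gap = partial_sum(10_000) - partial_sum(1_000)
print(gap, math.log(10))  # the two values agree to a few decimals
```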
I reckon there should be a way of expressing them in terms of incomplete elliptic integrals with parameters in the real valued range.\n\nIf I use the reciprocal modulus transformations I will bring the parameter inside the range $0<1/m<1$, but then the argument becomes greater than $1$. I have looked everywhere in the literature but I can't seem to find any identity that solves the problem. I could perhaps do something if there was a way of expressing elliptic integrals of complex argument as a combination of elliptic integrals of real argument and purely imaginary argument, but I don't know how it can be done.\n\nDoes someone have any insight on how those imaginary parts could be found?", "question_owner": "m3tro", "question_link": "https://math.stackexchange.com/questions/484160/separate-incomplete-elliptic-integral-into-real-and-imaginary-parts", "answer": { "answer_id": 3917107, "answer_text": "The reciprocal modulus transformation is Byrd and Friedman 114.01:\n\n$$F(\\sin^{-1}x,m)=\\frac1{\\sqrt m}F(\\sin^{-1}x\\sqrt m,1/m)$$\n\n$$E(\\sin^{-1}x,m)=\\frac1{\\sqrt m}(mE(\\sin^{-1}x\\sqrt m,1/m)+(1-m)F(\\sin^{-1}x\\sqrt m,1/m))$$\n\nNow the parameter is in\n\n$[0,1]$\n\n but the sine-amplitude is a real number in\n\n$[1,\\sqrt m]$\n\n. Thus B&F 115.02 applies:\n\n$$F(\\sin^{-1}x\\sqrt m,1/m)=K(1/m)-iF(A,1-1/m)$$\n\n$$E(\\sin^{-1}x\\sqrt m,1/m)=E(1/m)-i\\left(F(A,1-1/m)-E(A,1-1/m)+\\frac{(1-1/m)\\sin A\\cos A}{\\sqrt{1-(1-1/m)\\sin^2A}}\\right)$$\n\nwhere\n\n$$A=\\sin^{-1}\\frac{\\sqrt{mx^2-1}}{x\\sqrt{m-1}}$$\n\nNote that I have flipped the signs of the imaginary parts from B&F to match the values of\n\n$E(\\cdot)$\n\n and\n\n$F(\\cdot)$\n\n as calculated by Mathematica and mpmath.
Finally we get\n\n$$F(\\sin^{-1}x,m)=\\frac1{\\sqrt m}(K(1/m)-iF(A,1-1/m))$$\n\n$$E(\\sin^{-1}x,m)=\\frac1{\\sqrt m}\\left[mE(1/m)+(1-m)K(1/m)\\\\\n\n-i\\left(F(A,1-1/m)-mE(A,1-1/m)+\\frac{(m-1)\\sin A\\cos A}{\\sqrt{1-(1-1/m)\\sin^2A}}\\right)\\right]$$", "answer_owner": "Parcly Taxel", "is_accepted": true, "score": 8, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107506, "title": "calculating an integral", "question_text": "I need to compute the integral\n\n$$\\int \\frac{1 - \\ln u}{(u - \\ln u)^2} \\, du$$\n\nI tried applying the substitution\n\n$w = u - \\ln u$\n\n$$\\frac{dw}{du} = 1 - \\frac{1}{u}, \\quad \\text{so} \\quad dw = \\left(1 - \\frac{1}{u}\\right) du$$\n\nThis means that the integral is\n\n$$\\int \\frac{1- u + w}{w^2} \\cdot \\frac{du}{\\left(1 - \\frac{1}{u}\\right)} = \\int \\frac{1}{w} \\cdot \\frac{du}{1 - \\frac{1}{u}}$$\n\nbut this seems complicated and I don't know how to reduce this expression further. Can someone help me find a better way?", "question_owner": "Demir", "question_link": "https://math.stackexchange.com/questions/5107506/calculating-an-integral", "answer": { "answer_id": 5107520, "answer_text": "I think I may see something, but I think I may have lucked my way into it.
Divide numerator and denominator by\n\n$u^2$\n\n to get\n\n$$\\int\\dfrac{1-\\ln u}{(u-\\ln u)^2}du=\\int\\dfrac{\\frac1{u^2}-\\frac{\\ln u}{u^2}}{(1-\\frac{\\ln u}u)^2}du$$\n\n$$v=1-\\dfrac{\\ln u}u,dv=-(\\frac1{u^2}-\\dfrac{\\ln u}{u^2})du$$\n\nThe integral then becomes\n\n$$-\\int\\dfrac{dv}{v^2}=\\frac1v+C=\\dfrac1{1-\\frac{\\ln u}u}+C=\\frac{u}{u-\\ln u}+C$$", "answer_owner": "Mike", "is_accepted": false, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4282084, "title": "How do I solve for x when given the derivative equation and the slope of the tangent line?", "question_text": "The derivative of a function\n\n$f$\n\n is given by\n\n$f′(x)=0.1x+e^{0.25x}$\n\n. At what value of\n\n$x$\n\n for\n\n$x>0$\n\n does the line tangent to the graph of\n\n$f$\n\n at\n\n$x$\n\n have slope\n\n$2$\n\n ?\n\nThis provides the derivative and slope of the tangent line, but I am not sure how to solve for $x$.", "question_owner": "Maus", "question_link": "https://math.stackexchange.com/questions/4282084/how-do-i-solve-for-x-when-given-the-derivative-equation-and-the-slope-of-the-tan", "answer": { "answer_id": 4282102, "answer_text": "Finding the exact value is hard and even if you have a \"closed form expression\" it'll probably not be \"just some number\" (see for example the result here\n\nhttps://www.wolframalpha.com/input/?i=solve+2+%3D+0.1x%2Be%5E%280.25x%29\n\n), so we'll have to resort to numerics. If this sounds daunting note that you can get a super simple and good approximation of the actual solution in this case:\n\nFirst note that as\n\n$x$\n\n grows, you can basically ignore the linear part of\n\n$f'$\n\n as it'll become negligible in contrast to the exponential one. So let's assume that\n\n$x$\n\n is sufficiently large and solve\n\n$2=e^{x/4} \\iff x = 4 \\ln 2$\n\n. Since\n\n$f'$\n\n is monotonic we expect our actual answer in the domain\n\n$(0, 4 \\ln2)$\n\n.
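The bracketing above is easy to confirm directly: $g(x)=0.1x+e^{x/4}-2$ is negative at $0$ and positive at $4\ln 2$, so plain bisection (a sketch of mine; the answer itself proceeds by Taylor expansion instead) recovers the numerically computed root $x\approx 2.28688$ quoted below:

```python
import math

def g(x):
    # g(x) = f'(x) - 2: its positive root is where the tangent slope equals 2
    return 0.1 * x + math.exp(0.25 * x) - 2.0

lo, hi = 0.0, 4 * math.log(2)  # g(0) = -1 < 0 and g(4 ln 2) = 0.4 ln 2 > 0
for _ in range(60):            # bisection halves the bracket each step
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)

print(lo)  # ~2.28688
```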
Taylor-expand\n\n$f'$\n\n around\n\n$4 \\ln 2 = \\ln 16$\n\n to get\n\n$f'(x) \\approx 2.27726+0.6(x-\\ln 16) + 0.0625 (x - \\ln 16)^2$\n\n (note also that the coefficients in the expansion become quite small as we go to higher degrees). We now set this equal to\n\n$2$\n\n and solve again to find\n\n$x \\approx 2.2858$\n\n or\n\n$x \\approx -6.3406$\n\n. We're clearly after the\n\n$x \\approx 2.2858$\n\n solution. If you want more precision you can repeat this process:\n\nTaylor-expand around the\n\n$x$\n\n value we just found\n\nsolve the resulting polynomial equation\n\nbut the first estimate is already quite a good one, if we consider that the actual value (numerically computed) is at around\n\n$x \\approx 2.28688$\n\n. So via a simple approximation we've managed to find an approximate solution with a relative error of\n\n$\\sim 0.047\\%$\n\n.", "answer_owner": "SV-97", "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107430, "title": "Can a function have a cusp at a point without being twice differentiable?", "question_text": "A function\n\n$f:[a,b]\\to \\mathbb{R}$\n\n is said to have a\n\ncusp\n\n at a point\n\n$c$\n\n if:\n\n$f$\n\n is continuous at\n\n$c$\n\n;\n\nThe one-sided derivatives satisfy\n\n$$\\lim_{x \\to c^-} f'(x) = -\\infty \\quad \\text{and} \\quad \\lim_{x \\to c^+} f'(x) = +\\infty.$$\n\n (or the reverse signs)\n\nThis definition doesn’t require\n\n$f$\n\n to be twice differentiable.\n\nHowever around\n\n$c$\n\n, in many examples, the second derivative can help describe the shape of the cusp — for instance, if the second derivatives on both sides have the same sign, the graph forms a sharp point with a vertical tangent.\n\nMy question is:\n\nCan there exist a function that has a cusp at a point in this sense, but is not twice differentiable at that point (or perhaps not even twice differentiable in any neighborhood of it)?\n\nI couldn't come up with an
example of this, so I tried to prove that it is impossible, but failed.", "question_owner": "pie", "question_link": "https://math.stackexchange.com/questions/5107430/can-a-function-have-a-cusp-at-a-point-without-being-twice-differentiable", "answer": { "answer_id": 5107437, "answer_text": "If we interpret your question as:\n\nIs it true that a function with a cusp at\n\n$c$\n\n is twice differentiable in a punctured neighborhood of\n\n$c$\n\n?\n\nthe answer is no.\n\nYou could start from a piecewise constant function\n\n$g(x)$\n\n, whose envelopes are the functions\n\n$f_1(x)=1/\\sqrt[3]x$\n\n, and\n\n$f_2(x) = 2/\\sqrt[3]x$\n\n (dashed red lines in the picture below), and then define\n\n$$f(x) = \\int_0^x g(t) dt = \\lim_{\\varepsilon\\to 0}\\int_{\\varepsilon}^x g(t)dt.$$\n\nThe function\n\n$f(x)$\n\n thus defined is continuous at\n\n$0$\n\n, and\n\n$$\\lim_{x\\to 0^+} f'(x) = \\lim_{x\\to 0^+} g(x) = +\\infty,$$\n\nand\n\n$$\\lim_{x\\to 0^-} f'(x) = \\lim_{x\\to 0^-} g(x) = -\\infty,$$\n\nas you require, and yet there are points where the first (and of course the second) derivative of\n\n$f$\n\n is not defined, in\n\nevery\n\n (punctured) neighborhood of\n\n$0$\n\n.", "answer_owner": "dfnu", "is_accepted": false, "score": 5, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1030860, "title": "Finding radius when performing shell method", "question_text": "Find the volume of the region generated by revolving $y = -x^3$ and $y = -\\sqrt x$ around the $x$-axis.\n\nI don't understand how the radius component is $-y$; why not $+y$?", "question_owner": "Jermiah", "question_link": "https://math.stackexchange.com/questions/1030860/finding-radius-when-performing-shell-method", "answer": { "answer_id": 1030895, "answer_text": "The region defined by those equations is in the fourth quadrant, where $y$ is negative. The radius must be positive, so it is written as $|y|$ or $-y$.
When $y$ is negative, $-y$ is positive, despite the presence of the minus sign.", "answer_owner": "Rory Daulton", "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5062050, "title": "Is it hard to evaluate the integral $\\int_0^{\\frac{\\pi}{4}} \\tan ^{-1} \\sqrt{\\frac{1-\\tan ^2 x}{2}} d x$ without Feynman’s trick?", "question_text": "Once I was attracted by the decent value of the integral\n\n$$\n\nI=\\int_0^{\\frac{\\pi}{4}} \\tan ^{-1} \\sqrt{\\frac{1-\\tan ^2 x}{2}} d x=\\frac{\\pi^2}{24},\n\n$$\n\nI attempted to get rid of the surd in the integral using\n\n$\\sin \\theta=\\tan x$\n\n to transform the integral into\n\n$$\n\nI=\\int_0^{\\frac{\\pi}{2}} \\tan ^{-1}\\left(\\frac{\\cos \\theta}{\\sqrt{2}}\\right) \\frac{\\cos \\theta}{1+\\sin ^2 \\theta} d \\theta.\n\n$$\n\nFeynman’s trick again came to my mind with the parametrised integral\n\n$$I(a)=\\int_0^{\\frac{\\pi}{2}} \\tan ^{-1}(a \\cos \\theta) \\frac{\\cos \\theta}{1+\\sin ^2 \\theta} d \\theta$$\n\nwhose derivative w.r.t.\n\n$a$\n\n is\n\n$$\n\n\\begin{aligned}\n\nI^{\\prime}(a) & =\\int_0^{\\frac{\\pi}{2}} \\frac{\\cos ^2 \\theta}{\\left(1+a^2 \\cos ^2 \\theta\\right)\\left(1+\\sin ^2 \\theta\\right)} d \\theta \\\\\n\n& =\\int_0^{\\frac{\\pi}{2}} \\frac{\\sec ^2 \\theta d \\theta}{\\left(\\sec ^2 \\theta+a^2\\right)\\left(\\sec ^2 \\theta+\\tan ^2 \\theta\\right)} \\\\\n\n& =\\int_0^{\\infty} \\frac{d t}{\\left(t^2+1+a^2\\right)\\left(2 t^2+1\\right)}, \\quad \\text { where } t=\\tan \\theta \\\\\n\n\\end{aligned}\n\n$$\n\nVia partial fractions, we have\n\n$$\n\n\\begin{aligned}\n\nI^{\\prime}(a) & =\\frac{1}{2 a^2+1} \\int_0^{\\infty}\\left(\\frac{2}{2 t^2+1}-\\frac{1}{t^2+1+a^2}\\right) d t \\\\\n\n& =\\frac{1}{2 a^2+1}\\left[\\sqrt{2} \\tan ^{-1}(\\sqrt{2} t)-\\frac{1}{\\sqrt{1+a^2}} \\tan ^{-1} \\frac{t}{\\sqrt{1+a^2}}\\right]_0^{\\infty} \\\\\n\n& =\\frac{\\pi}{2\\left(2
a^2+1\\right)}\\left(\\sqrt{2}-\\frac{1}{\\sqrt{1+a^2}}\\right)\n\n\\end{aligned}\n\n$$\n\nIntegrating back from\n\n$a=0$\n\n to\n\n$\\frac{1}{\\sqrt 2}$\n\n yields\n\n$$\n\n\\begin{aligned}\n\nI & =\\int_0^{\\frac{1}{\\sqrt{2}}} \\frac{\\pi}{2\\left(2 a^2+1\\right)}\\left(\\sqrt{2}-\\frac{1}{\\sqrt{1+a^2}}\\right) d a \\\\\n\n& =\\frac{\\pi}{2}\\left(\\frac{\\pi}{4}-\\frac{\\pi}{6}\\right) \\quad \\textrm{ via } a=\\tan \\alpha \\\\\n\n& =\\frac{\\pi^2}{24}\n\n\\end{aligned}\n\n$$\n\nMy Question\n\n:\n\nIs it hard to evaluate the integral\n\n$\\int_0^{\\frac{\\pi}{4}} \\tan ^{-1} \\sqrt{\\frac{1-\\tan ^2 x}{2}} d x$\n\n without Feynman’s trick?\n\nYour comments and alternatives are highly appreciated.", "question_owner": "Lai", "question_link": "https://math.stackexchange.com/questions/5062050/is-it-hard-to-evaluate-the-integral-int-0-frac-pi4-tan-1-sqrt-fr", "answer": { "answer_id": 5062069, "answer_text": "Evaluating the integral without Feynman's trick is possible although standard integration techniques like substitution or integration by parts applied directly often lead to more complicated expressions that don't really lead to a nice closed form. An alternative approach exists that avoids introducing a parameter.\n\nSubstitute\n\n$u = \\tan x$\n\n. Then\n\n$dx = \\frac{du}{1+u^2}$\n\n. 
The limits change from\n\n$x=0, \\pi/4$\n\n to\n\n$u=0, 1$\n\n.\n\n$$I = \\int_0^1 \\arctan \\sqrt{\\frac{1-u^2}{2}} \\frac{du}{1+u^2} \\tag{1}$$\n\nUse the integral definition\n\n$\\arctan(z) = \\int_0^1 \\frac{z}{1+z^2 y^2} dy$\n\n.\n\nLet\n\n$z = \\sqrt{\\frac{1-u^2}{2}}$\n\n.\n\n$$I = \\int_0^1 \\left( \\int_0^1 \\frac{\\sqrt{\\frac{1-u^2}{2}}}{1 + \\left(\\frac{1-u^2}{2}\\right) y^2} dy \\right) \\frac{du}{1+u^2}$$\n\n$$I = \\int_0^1 \\int_0^1 \\frac{1}{\\sqrt{2}} \\frac{\\sqrt{1-u^2}}{(1+u^2)\\left(1 + \\frac{y^2}{2}(1-u^2)\\right)} dy \\, du \\tag{2}$$\n\nSwap the order of integration (assuming Fubini's theorem applies):\n\n$$I = \\int_0^1 \\left( \\int_0^1 \\frac{\\frac{1}{\\sqrt{2}}\\sqrt{1-u^2}}{(1+u^2)(1 + \\frac{y^2}{2}(1-u^2))} du \\right) dy$$\n\nLet the inner integral be\n\n$J(y)$\n\n.\n\n$$ \\bbox[5px, #F0F8FF, border: 2px solid #4682B4]{ J(y) = \\frac{1}{\\sqrt{2}} \\int_0^1 \\frac{\\sqrt{1-u^2}}{(1+u^2)(1 + \\frac{y^2}{2} - \\frac{y^2}{2}u^2)} du } \\tag{3}$$\n\nEvaluate the inner integral\n\n$J(y)$\n\n. 
Substitute\n\n$u = \\sin \\theta$\n\n,\n\n$du = \\cos \\theta d\\theta$\n\n.\n\n$$J(y) = \\frac{1}{\\sqrt{2}} \\int_0^{\\pi/2} \\frac{\\cos \\theta}{(1+\\sin^2 \\theta)(1 + \\frac{y^2}{2} - \\frac{y^2}{2}\\sin^2 \\theta)} \\cos \\theta d\\theta$$\n\n$$J(y) = \\frac{1}{\\sqrt{2}} \\int_0^{\\pi/2} \\frac{\\cos^2 \\theta}{(1+\\sin^2 \\theta)(1 + \\frac{y^2}{2}\\cos^2 \\theta)} d\\theta$$\n\nDivide numerator and denominator by\n\n$\\cos^4 \\theta$\n\n:\n\n$$J(y) = \\frac{1}{\\sqrt{2}} \\int_0^{\\pi/2} \\frac{\\sec^2 \\theta}{(\\sec^2 \\theta + \\tan^2 \\theta)(\\sec^2 \\theta + \\frac{y^2}{2})} d\\theta$$\n\nSubstitute\n\n$t = \\tan \\theta$\n\n,\n\n$dt = \\sec^2 \\theta d\\theta$\n\n.\n\n$\\sec^2 \\theta = 1+t^2$\n\n.\n\n$$J(y) = \\frac{1}{\\sqrt{2}} \\int_0^{\\infty} \\frac{dt}{(1+t^2+t^2)(1+t^2 + \\frac{y^2}{2})} = \\frac{1}{\\sqrt{2}} \\int_0^{\\infty} \\frac{dt}{(1+2t^2)(t^2 + 1 + y^2/2)}$$\n\nUsing partial fractions\n\n$\\frac{1}{(1+2t^2)(t^2+A)} = \\frac{1}{-(1+y^2)}\\left(\\frac{-2}{1+2t^2} + \\frac{1}{t^2+A}\\right)$\n\n, where\n\n$A = 1+y^2/2$\n\n.\n\n$$J(y) = \\frac{1}{\\sqrt{2}} \\frac{1}{-(1+y^2)} \\int_0^{\\infty} \\left( \\frac{-2}{1+2t^2} + \\frac{1}{t^2+1+y^2/2} \\right) dt$$\n\n$$J(y) = \\frac{1}{\\sqrt{2}(1+y^2)} \\int_0^{\\infty} \\left( \\frac{2}{1+2t^2} - \\frac{1}{t^2+1+y^2/2} \\right) dt$$\n\nWe evaluate the integrals:\n\n$\\int_0^{\\infty} \\frac{2}{1+2t^2} dt = \\left[ \\frac{2}{\\sqrt{2}} \\arctan(\\sqrt{2}t) \\right]_0^{\\infty} = \\sqrt{2} \\frac{\\pi}{2} = \\frac{\\pi}{\\sqrt{2}}$\n\n.\n\n$\\int_0^{\\infty} \\frac{1}{t^2+1+y^2/2} dt = \\left[ \\frac{1}{\\sqrt{1+y^2/2}} \\arctan\\left(\\frac{t}{\\sqrt{1+y^2/2}}\\right) \\right]_0^{\\infty} = \\frac{\\pi}{2\\sqrt{1+y^2/2}} = \\frac{\\pi \\sqrt{2}}{2\\sqrt{2+y^2}}$\n\n.\n\n$$J(y) = \\frac{1}{\\sqrt{2}(1+y^2)} \\left( \\frac{\\pi}{\\sqrt{2}} - \\frac{\\pi \\sqrt{2}}{2\\sqrt{2+y^2}} \\right) = \\frac{\\pi}{1+y^2} \\left( \\frac{1}{2} - \\frac{1}{2\\sqrt{2+y^2}} \\right)$$\n\n$$ \\bbox[5px, 
#F0F8FF, border: 2px solid #4682B4]{ J(y) = \\frac{\\pi}{2(1+y^2)} \\left( 1 - \\frac{1}{\\sqrt{2+y^2}} \\right) } \\tag{4}$$\n\n(Note: This\n\n$J(y)$\n\n is related to the\n\n$I'(a)$\n\n in the original post by\n\n$J(y) = \\frac{1}{\\sqrt{2}} I'(y/\\sqrt{2})$\n\n).\n\nEvaluate the outer integral\n\n$I = \\int_0^1 J(y) dy$\n\n.\n\n$$I = \\int_0^1 \\frac{\\pi}{2(1+y^2)} \\left( 1 - \\frac{1}{\\sqrt{2+y^2}} \\right) dy$$\n\n$$I = \\frac{\\pi}{2} \\int_0^1 \\frac{dy}{1+y^2} - \\frac{\\pi}{2} \\int_0^1 \\frac{dy}{(1+y^2)\\sqrt{2+y^2}}$$\n\nThe first integral is\n\n$\\frac{\\pi}{2} [\\arctan y]_0^1 = \\frac{\\pi}{2} \\frac{\\pi}{4} = \\frac{\\pi^2}{8}$\n\n.\n\nFor the second integral, let\n\n$K = \\int_0^1 \\frac{dy}{(1+y^2)\\sqrt{2+y^2}}$\n\n. Let\n\n$y = \\sqrt{2} \\tan \\phi$\n\n,\n\n$dy = \\sqrt{2} \\sec^2 \\phi d\\phi$\n\n. The limits become\n\n$0$\n\n and\n\n$\\arctan(1/\\sqrt{2})$\n\n.\n\n$$K = \\int_0^{\\arctan(1/\\sqrt{2})} \\frac{\\sqrt{2} \\sec^2 \\phi d\\phi}{(1+2\\tan^2 \\phi)\\sqrt{2+2\\tan^2 \\phi}} = \\int_0^{\\arctan(1/\\sqrt{2})} \\frac{\\sqrt{2} \\sec^2 \\phi d\\phi}{(1+2\\tan^2 \\phi)\\sqrt{2}\\sec \\phi}$$\n\n$$K = \\int_0^{\\arctan(1/\\sqrt{2})} \\frac{\\sec \\phi d\\phi}{1+2\\tan^2 \\phi} = \\int_0^{\\arctan(1/\\sqrt{2})} \\frac{\\cos \\phi d\\phi}{\\cos^2 \\phi + 2\\sin^2 \\phi} = \\int_0^{\\arctan(1/\\sqrt{2})} \\frac{\\cos \\phi d\\phi}{1+\\sin^2 \\phi}$$\n\nLet\n\n$v = \\sin \\phi$\n\n.\n\n$dv = \\cos \\phi d\\phi$\n\n. 
The limits become\n\n$0$\n\n and\n\n$\sin(\arctan(1/\sqrt{2})) = 1/\sqrt{3}$\n\n.\n\n$$K = \int_0^{1/\sqrt{3}} \frac{dv}{1+v^2} = [\arctan(v)]_0^{1/\sqrt{3}} = \arctan\left(\frac{1}{\sqrt{3}}\right) - \arctan(0) = \frac{\pi}{6}$$\n\nSo, the second integral is\n\n$K = \frac{\pi}{6}$\n\n.\n\nFinally,\n\n$$I = \frac{\pi^2}{8} - \frac{\pi}{2} \left( \frac{\pi}{6} \right) = \frac{\pi^2}{8} - \frac{\pi^2}{12} = \frac{3\pi^2 - 2\pi^2}{24} = \frac{\pi^2}{24}$$\n\nEdit: Funny enough, I actually found another way of solving it, here you go (albeit a looooooot more complex and redundant):\n\nLet the integral be\n\n$I = \int_0^{\frac{\pi}{4}} \arctan \sqrt{\frac{1-\tan ^2 x}{2}} \, dx$\n\n.\n\n1.\n\nFirst, substitute\n\n$t = \tan x$\n\n. As\n\n$x$\n\n goes from\n\n$0$\n\n to\n\n$\pi/4$\n\n,\n\n$t$\n\n goes from\n\n$0$\n\n to\n\n$1$\n\n. The differential is\n\n$dx = \frac{dt}{1+t^2}$\n\n.\n\n$$I = \int_0^1 \arctan \sqrt{\frac{1-t^2}{2}} \frac{dt}{1+t^2}$$\n\nNow, use the integral representation\n\n$\arctan z = \int_0^z \frac{ds}{1+s^2}$\n\n. Let\n\n$z = \sqrt{\frac{1-t^2}{2}}$\n\n.\n\n$$I = \int_0^1 \left( \int_0^{\sqrt{(1-t^2)/2}} \frac{ds}{1+s^2} \right) \frac{dt}{1+t^2}$$\n\nThis can be written as a double integral over a domain\n\n$D$\n\n in the\n\n$ts$\n\n-plane:\n\n$$I = \iint_D \frac{dt \, ds}{(1+t^2)(1+s^2)}$$\n\nThe domain\n\n$D$\n\n is defined by the limits of integration:\n\n$$D = \left\{ (t, s) \,:\, 0 \le t \le 1, \quad 0 \le s \le \sqrt{\frac{1-t^2}{2}} \right\}$$\n\nThe condition\n\n$s \le \sqrt{\frac{1-t^2}{2}}$\n\n implies\n\n$s^2 \le \frac{1-t^2}{2}$\n\n, which rearranges to\n\n$2s^2 \le 1-t^2$\n\n, or\n\n$t^2 + 2s^2 \le 1$\n\n.
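Before the change of variables is carried out, the value of $\iint_D \frac{dt\,ds}{(1+t^2)(1+s^2)}$ over the region $t,s\ge 0$, $t^2+2s^2\le 1$ can be sanity-checked against $\pi^2/24$. A rough seeded Monte Carlo sketch in plain Python (sample size, seed, and tolerance are arbitrary choices, not part of the argument):

```python
import math
import random

random.seed(12345)  # fixed seed only for reproducibility

# Bounding rectangle of the quarter-ellipse t^2 + 2 s^2 <= 1 with t, s >= 0.
T, S = 1.0, 1.0 / math.sqrt(2.0)
N = 400_000

total = 0.0
for _ in range(N):
    t = random.uniform(0.0, T)
    s = random.uniform(0.0, S)
    if t * t + 2.0 * s * s <= 1.0:  # indicator of the domain D
        total += 1.0 / ((1.0 + t * t) * (1.0 + s * s))

estimate = total * T * S / N  # rectangle area times sample mean
exact = math.pi ** 2 / 24
```

A run of this size typically lands within a few times $10^{-4}$ of $\pi^2/24\approx 0.411234$.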
Since\n\n$t \\ge 0$\n\n and\n\n$s \\ge 0$\n\n, the domain\n\n$D$\n\n is the region in the first quadrant bounded by the ellipse\n\n$t^2 + 2s^2 = 1$\n\n.\n\n2.\n\n$$t = r \\cos \\theta, \\quad s = \\frac{r \\sin \\theta}{\\sqrt{2}}$$\n\nThe condition\n\n$t^2 + 2s^2 \\le 1$\n\n becomes\n\n$(r\\cos\\theta)^2 + 2\\left(\\frac{r\\sin\\theta}{\\sqrt{2}}\\right)^2 \\le 1$\n\n, which simplifies to\n\n$r^2\\cos^2\\theta + r^2\\sin^2\\theta \\le 1$\n\n, or\n\n$r^2 \\le 1$\n\n. Since\n\n$t, s \\ge 0$\n\n, we have\n\n$0 \\le r \\le 1$\n\n and\n\n$0 \\le \\theta \\le \\pi/2$\n\n.\n\nThe Jacobian determinant of this transformation is:\n\n$$ J = \\det \\begin{pmatrix} \\frac{\\partial t}{\\partial r} & \\frac{\\partial t}{\\partial \\theta} \\\\ \\frac{\\partial s}{\\partial r} & \\frac{\\partial s}{\\partial \\theta} \\end{pmatrix} = \\det \\begin{pmatrix} \\cos \\theta & -r \\sin \\theta \\\\ \\frac{\\sin \\theta}{\\sqrt{2}} & \\frac{r \\cos \\theta}{\\sqrt{2}} \\end{pmatrix} = \\frac{r \\cos^2 \\theta}{\\sqrt{2}} - \\left( -r \\sin \\theta \\frac{\\sin \\theta}{\\sqrt{2}} \\right) = \\frac{r}{\\sqrt{2}} (\\cos^2 \\theta + \\sin^2 \\theta) = \\frac{r}{\\sqrt{2}} $$\n\nThe area element transforms as\n\n$dt \\, ds = |J| \\, dr \\, d\\theta = \\frac{r}{\\sqrt{2}} dr \\, d\\theta$\n\n.\n\nSubstituting the coordinates and the Jacobian into the integral:\n\n$$ I = \\int_0^{\\pi/2} \\int_0^1 \\frac{1}{\\left(1 + (r\\cos\\theta)^2\\right) \\left(1 + \\left(\\frac{r\\sin\\theta}{\\sqrt{2}}\\right)^2\\right)} \\frac{r}{\\sqrt{2}} dr \\, d\\theta $$\n\n$$ I = \\frac{1}{\\sqrt{2}} \\int_0^1 r \\left( \\int_0^{\\pi/2} \\frac{d\\theta}{(1 + r^2\\cos^2\\theta)(1 + \\frac{r^2}{2}\\sin^2\\theta)} \\right) dr $$\n\nLet the inner integral (with respect to\n\n$\\theta$\n\n) be denoted by\n\n$J(r)$\n\n.\n\n3.\n\n$$ J(r) = \\int_0^{\\pi/2} \\frac{d\\theta}{(1 + r^2\\cos^2\\theta)(1 + \\frac{r^2}{2}\\sin^2\\theta)} $$\n\nSubstitute\n\n$u = \\sin^2\\theta$\n\n. Then\n\n$\\cos^2\\theta = 1-u$\n\n. 
Also,\n\n$du = 2\\sin\\theta\\cos\\theta \\, d\\theta$\n\n. We need\n\n$d\\theta$\n\n in terms of\n\n$du$\n\n:\n\n$d\\theta = \\frac{du}{2\\sin\\theta\\cos\\theta} = \\frac{du}{2\\sqrt{u}\\sqrt{1-u}}$\n\n. The limits change from\n\n$0, \\pi/2$\n\n to\n\n$0, 1$\n\n.\n\n$$ J(r) = \\int_0^1 \\frac{1}{(1 + r^2(1-u))(1 + \\frac{r^2}{2}u)} \\frac{du}{2\\sqrt{u(1-u)}} $$\n\n$$ J(r) = \\frac{1}{2} \\int_0^1 \\frac{du}{\\sqrt{u(1-u)} (1+r^2 - r^2u)(1 + \\frac{r^2}{2}u)} $$\n\nUse partial fractions for the non-square-root part:\n\n$$ \\frac{1}{(1+r^2 - r^2u)(1 + \\frac{r^2}{2}u)} = \\frac{A}{1+r^2 - r^2u} + \\frac{B}{1 + \\frac{r^2}{2}u} $$\n\nSolving for\n\n$A$\n\n and\n\n$B$\n\n yields\n\n$A = \\frac{2}{3+r^2}$\n\n and\n\n$B = \\frac{1}{3+r^2}$\n\n.\n\n$$ J(r) = \\frac{1}{2} \\int_0^1 \\frac{1}{\\sqrt{u(1-u)}} \\left( \\frac{2/(3+r^2)}{1+r^2 - r^2u} + \\frac{1/(3+r^2)}{1 + \\frac{r^2}{2}u} \\right) du $$\n\n$$ J(r) = \\frac{1}{2(3+r^2)} \\left[ 2 \\int_0^1 \\frac{du}{\\sqrt{u(1-u)}(1+r^2 - r^2u)} + \\int_0^1 \\frac{du}{\\sqrt{u(1-u)}(1 + \\frac{r^2}{2}u)} \\right] $$\n\n3.1\n\nWe need to evaluate integrals of the form\n\n$K = \\int_0^1 \\frac{dx}{\\sqrt{x(1-x)}(a+bx)}$\n\n. Let\n\n$x = \\sin^2\\phi$\n\n. Then\n\n$dx = 2\\sin\\phi\\cos\\phi \\, d\\phi$\n\n. The limits\n\n$x=0, 1$\n\n correspond to\n\n$\\phi=0, \\pi/2$\n\n.\n\n$$ \\sqrt{x(1-x)} = \\sqrt{\\sin^2\\phi(1-\\sin^2\\phi)} = \\sqrt{\\sin^2\\phi\\cos^2\\phi} = \\sin\\phi\\cos\\phi \\quad (\\text{for } \\phi \\in [0, \\pi/2]) $$\n\nSubstituting into the integral\n\n$K$\n\n:\n\n$$ K = \\int_0^{\\pi/2} \\frac{2\\sin\\phi\\cos\\phi \\, d\\phi}{(\\sin\\phi\\cos\\phi)(a+b\\sin^2\\phi)} = \\int_0^{\\pi/2} \\frac{2 \\, d\\phi}{a+b\\sin^2\\phi} $$\n\nNow, use the substitution\n\n$t = \\tan\\phi$\n\n. Then\n\n$d\\phi = \\frac{dt}{1+t^2}$\n\n and\n\n$\\sin^2\\phi = \\frac{t^2}{1+t^2}$\n\n. 
The limits\n\n$\\phi=0, \\pi/2$\n\n correspond to\n\n$t=0, \\infty$\n\n.\n\n$$ K = \\int_0^{\\infty} \\frac{2}{a + b\\left(\\frac{t^2}{1+t^2}\\right)} \\frac{dt}{1+t^2} = \\int_0^{\\infty} \\frac{2(1+t^2)}{a(1+t^2) + bt^2} \\frac{dt}{1+t^2} $$\n\n$$ K = \\int_0^{\\infty} \\frac{2}{a + (a+b)t^2} dt $$\n\nAssuming\n\n$a>0$\n\n and\n\n$a+b>0$\n\n:\n\n$$ K = \\frac{2}{a+b} \\int_0^{\\infty} \\frac{dt}{\\frac{a}{a+b} + t^2} = \\frac{2}{a+b} \\left[ \\sqrt{\\frac{a+b}{a}} \\arctan\\left(t\\sqrt{\\frac{a+b}{a}}\\right) \\right]_0^{\\infty} $$\n\n$$ K = \\frac{2}{a+b} \\sqrt{\\frac{a+b}{a}} \\left( \\frac{\\pi}{2} - 0 \\right) = \\frac{\\pi \\cdot 2 \\sqrt{a+b}}{2 (a+b) \\sqrt{a}} = \\frac{\\pi}{\\sqrt{a(a+b)}} $$\n\nThus, we have established the identity:\n\n$$ \\int_0^1 \\frac{dx}{\\sqrt{x(1-x)}(a+bx)} = \\frac{\\pi}{\\sqrt{a(a+b)}} \\quad \\text{for } a, a+b > 0 $$\n\n3.2\n\nNow use the classical Beta-type integral result derived above.\n\nFor the first integral in\n\n$J(r)$\n\n:\n\n$a=1+r^2$\n\n,\n\n$b=-r^2$\n\n. Both\n\n$a=1+r^2>0$\n\n and\n\n$a+b=1+r^2-r^2=1>0$\n\n. The integral is\n\n$\\frac{\\pi}{\\sqrt{(1+r^2)(1)}} = \\frac{\\pi}{\\sqrt{1+r^2}}$\n\n.\n\nFor the second integral in\n\n$J(r)$\n\n:\n\n$a=1$\n\n,\n\n$b=r^2/2$\n\n. Both\n\n$a=1>0$\n\n and\n\n$a+b=1+r^2/2>0$\n\n. 
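The boxed identity of 3.1 is easy to spot-check numerically through its singularity-free form $\int_0^{\pi/2}\frac{2\,d\phi}{a+b\sin^2\phi}$, obtained above via $x=\sin^2\phi$. A small sketch in plain Python; the $(a,b)$ pairs are arbitrary test values with $a>0$ and $a+b>0$ (the negative-$b$ pair mimics the first application, where $b=-r^2$):

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule on [a, b] with n (even) panels.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def beta_type(a, b, n=2000):
    # Equals int_0^1 dx / (sqrt(x(1-x)) (a + b x)) after x = sin^2(phi),
    # which removes the endpoint singularities of the original integrand.
    return simpson(lambda p: 2.0 / (a + b * math.sin(p) ** 2),
                   0.0, math.pi / 2, n)

checks = [(1.0, 2.0), (2.0, -1.0), (3.0, 0.5)]
results = [(beta_type(a, b), math.pi / math.sqrt(a * (a + b)))
           for a, b in checks]
```

Each numerical value matches $\pi/\sqrt{a(a+b)}$ to better than $10^{-9}$ at this resolution.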
The integral is\n\n$\\frac{\\pi}{\\sqrt{1(1+r^2/2)}} = \\frac{\\pi}{\\sqrt{1+r^2/2}}$\n\n.\n\nSubstituting these back into\n\n$J(r)$\n\n:\n\n$$ \\bbox[5px, #F0F8FF, border: 2px solid #4682B4]{ J(r) = \\frac{1}{2(3+r^2)} \\left[ 2 \\frac{\\pi}{\\sqrt{1+r^2}} + \\frac{\\pi}{\\sqrt{1+r^2/2}} \\right] = \\frac{\\pi}{3+r^2} \\left( \\frac{1}{\\sqrt{1+r^2}} + \\frac{1}{2\\sqrt{1+r^2/2}} \\right) } $$\n\n4.\n\nNow substitute\n\n$J(r)$\n\n back into the expression for\n\n$I$\n\n:\n\n$$ I = \\frac{1}{\\sqrt{2}} \\int_0^1 r J(r) dr = \\frac{1}{\\sqrt{2}} \\int_0^1 r \\frac{\\pi}{3+r^2} \\left( \\frac{1}{\\sqrt{1+r^2}} + \\frac{1}{2\\sqrt{1+r^2/2}} \\right) dr $$\n\n$$ I = \\frac{\\pi}{\\sqrt{2}} \\left[ \\int_0^1 \\frac{r}{(3+r^2)\\sqrt{1+r^2}} dr + \\int_0^1 \\frac{r}{2(3+r^2)\\sqrt{1+r^2/2}} dr \\right] $$\n\nLet\n\n$K_1 = \\int_0^1 \\frac{r}{(3+r^2)\\sqrt{1+r^2}} dr$\n\n and\n\n$K_2 = \\int_0^1 \\frac{r}{2(3+r^2)\\sqrt{1+r^2/2}} dr$\n\n.\n\nLet\n\n$t=r^2$\n\n,\n\n$dt=2rdr$\n\n.\n\n$K_1 = \\int_0^1 \\frac{dt/2}{(3+t)\\sqrt{1+t}}$\n\n. Now let\n\n$u=\\sqrt{1+t}$\n\n, so\n\n$u^2=1+t$\n\n,\n\n$t=u^2-1$\n\n,\n\n$dt=2udu$\n\n.\n\n$K_1 = \\int_1^{\\sqrt{2}} \\frac{udu}{(3+u^2-1)u} = \\int_1^{\\sqrt{2}} \\frac{du}{u^2+2} = \\left[ \\frac{1}{\\sqrt{2}} \\arctan\\left(\\frac{u}{\\sqrt{2}}\\right) \\right]_1^{\\sqrt{2}}$\n\n$K_1 = \\frac{1}{\\sqrt{2}} \\left( \\arctan(1) - \\arctan\\left(\\frac{1}{\\sqrt{2}}\\right) \\right) = \\frac{1}{\\sqrt{2}} \\left( \\frac{\\pi}{4} - \\arctan\\left(\\frac{1}{\\sqrt{2}}\\right) \\right)$\n\n.\n\nLet\n\n$t=r^2$\n\n,\n\n$dt=2rdr$\n\n.\n\n$K_2 = \\int_0^1 \\frac{dt/2}{2(3+t)\\sqrt{1+t/2}} = \\frac{1}{4} \\int_0^1 \\frac{dt}{(3+t)\\sqrt{1+t/2}}$\n\n. 
Now let\n\n$u=\\sqrt{1+t/2}$\n\n, so\n\n$u^2=1+t/2$\n\n,\n\n$t=2u^2-2$\n\n,\n\n$dt=4udu$\n\n.\n\n$K_2 = \\frac{1}{4} \\int_1^{\\sqrt{3/2}} \\frac{4udu}{(3+2u^2-2)u} = \\int_1^{\\sqrt{3/2}} \\frac{du}{2u^2+1} = \\left[ \\frac{1}{\\sqrt{2}} \\arctan(\\sqrt{2}u) \\right]_1^{\\sqrt{3/2}}$\n\n$K_2 = \\frac{1}{\\sqrt{2}} \\left( \\arctan(\\sqrt{2}\\sqrt{3/2}) - \\arctan(\\sqrt{2}) \\right) = \\frac{1}{\\sqrt{2}} \\left( \\arctan(\\sqrt{3}) - \\arctan(\\sqrt{2}) \\right)$\n\n.\n\n5.\n\nSubstitute\n\n$K_1$\n\n and\n\n$K_2$\n\n back into the expression for\n\n$I$\n\n:\n\n$$ I = \\frac{\\pi}{\\sqrt{2}} (K_1 + K_2) $$\n\n$$ I = \\frac{\\pi}{\\sqrt{2}} \\left[ \\frac{1}{\\sqrt{2}} \\left( \\frac{\\pi}{4} - \\arctan\\left(\\frac{1}{\\sqrt{2}}\\right) \\right) + \\frac{1}{\\sqrt{2}} \\left( \\arctan(\\sqrt{3}) - \\arctan(\\sqrt{2}) \\right) \\right] $$\n\n$$ I = \\frac{\\pi}{2} \\left[ \\frac{\\pi}{4} - \\arctan\\left(\\frac{1}{\\sqrt{2}}\\right) + \\arctan(\\sqrt{3}) - \\arctan(\\sqrt{2}) \\right] $$\n\n$$ I = \\frac{\\pi}{2} \\left[ \\frac{\\pi}{4} + \\arctan(\\sqrt{3}) - \\left( \\arctan(\\sqrt{2}) + \\arctan\\left(\\frac{1}{\\sqrt{2}}\\right) \\right) \\right] $$\n\nUsing the identity\n\n$\\arctan x + \\arctan(1/x) = \\pi/2$\n\n for\n\n$x>0$\n\n, we have\n\n$\\arctan(\\sqrt{2}) + \\arctan(1/\\sqrt{2}) = \\pi/2$\n\n. 
Also,\n\n$\\arctan(\\sqrt{3}) = \\pi/3$\n\n.\n\n$$ I = \\frac{\\pi}{2} \\left[ \\frac{\\pi}{4} + \\frac{\\pi}{3} - \\frac{\\pi}{2} \\right] $$\n\n$$ I = \\frac{\\pi}{2} \\left[ \\frac{3\\pi + 4\\pi - 6\\pi}{12} \\right] = \\frac{\\pi}{2} \\left[ \\frac{\\pi}{12} \\right] = \\frac{\\pi^2}{24} $$", "answer_owner": "heather milkem", "is_accepted": true, "score": 12, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1920822, "title": "What to do with the Riemann-Stieltjes integral on an interval of a function with infinitely many discontinuity points?", "question_text": "If we have finitely many discontinuity points on an interval, say\n\n$[a,b]$\n\n, and we want to compute the integral of a function defined on this interval, usually we use the notion of improper integral and split the integral in a sum of integral and then compute each sum as an improper integral itself. But if we have infinitely many points of discontinuity, what can be done?\n\nThank you.", "question_owner": "Yassin Rany", "question_link": "https://math.stackexchange.com/questions/1920822/what-to-do-with-the-riemann-stieltjes-integral-on-an-interval-of-a-function-with", "answer": { "answer_id": 1920866, "answer_text": "Even if uncountable, the real question is \"what is the measure of the set of discontinuities\"? (Any countable set has measure 0, an uncountable set\n\nmay\n\n have measure 0.) 
Any set of measure 0 can be ignored.", "answer_owner": "user247327", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5093270, "title": "Can we always use bounds created by Taylor expansion of a function?", "question_text": "Problems in Mathematical Analysis Volume II 1.1.22 - Kaczor & Nowak\n\nAssume\n\n$\\lim_{x\\to 0} f(x)=1$\n\n,\n\n$\\lim_{x\\to 0} g(x)=\\infty$\n\n, and\n\n$\\lim_{x\\to 0} g(x)(f(x)-1)=\\gamma$\n\n.\n\n$(\\gamma \\in \\mathbb{R})$\n\nClaim.\n\nThen\n\n$\\lim_{x\\to 0} f(x)^{g(x)} = e^{\\gamma}$\n\n.\n\nProof.\n\nSince\n\n$f(x)\\to 1$\n\n as\n\n$x\\to 0$\n\n, there exists\n\n$\\delta_0>0$\n\n such that for all\n\n$0<|x|<\\delta_0$\n\n we have\n\n$|f(x)-1|<\\tfrac12$\n\n. Thus for all such\n\n$x$\n\n the inequality\n\n$(f(x)-1)-\\frac{(f(x)-1)^2}{2}\\le \\ln f(x)\\le f(x)-1 \\tag{1}$\n\napplies.\n\nMultiplying (1) by\n\n$g(x)$\n\n we obtain\n\n$g(x)(f(x)-1)-\\frac{g(x)(f(x)-1)^2}{2}\\le g(x)\\ln f(x)\\le g(x)(f(x)-1). \\tag{2}$\n\nNote that\n\n$g(x)(f(x)-1)^2 = (f(x)-1)\\cdot(g(x)(f(x)-1)).$\n\nAs\n\n$x\\to 0$\n\n, the first factor\n\n$(f(x)-1)\\to 0$\n\n and the second factor\n\n$g(x)(f(x)-1)\\to \\gamma$\n\n. Hence the product tends to\n\n$0$\n\n. Therefore\n\n$\\lim_{x\\to 0}\\frac{g(x)(f(x)-1)^2}{2}=0.$\n\nFrom (2) we have for\n\n$0<|x|<\\delta_0$\n\n$g(x)(f(x)-1)-\\frac{g(x)(f(x)-1)^2}{2}\\le g(x)\\ln f(x)\\le g(x)(f(x)-1).$\n\nTaking the limit as\n\n$x\\to 0$\n\n, both the leftmost and rightmost expressions converge to\n\n$\\gamma$\n\n. Thus by the squeeze theorem,\n\n$\\lim_{x\\to 0} g(x)\\ln f(x)=\\gamma.$\n\nThe exponential function is continuous, so\n\n$\\lim_{x\\to 0} f(x)^{g(x)} = \\lim_{x\\to 0} e^{g(x)\\ln f(x)} = e^{\\gamma}.$\n\n$\\square$\n\nNow is my proof correct? 
I'm particularly doubtful about the bounding step of\n\n$\ln f(x)$\n\n.\n\nActually, nowadays I'm quite often using bounds by Taylor series of functions, so it just struck me: is it even valid?\n\nI mean in this particular question is the inequality\n\n$(f(x)-1)-\frac{(f(x)-1)^2}{2}\le \ln f(x)\le f(x)-1 \quad $\n\ntrue for all possible\n\n$f(x)$\n\n?", "question_owner": "T﹏T", "question_link": "https://math.stackexchange.com/questions/5093270/can-we-always-use-bounds-created-by-taylor-expansion-of-a-function", "answer": { "answer_id": 5093275, "answer_text": "Your proof is basically correct in spirit, but the particular two-sided inequality you wrote down needs a fix: the right side\n\n$$\n\n\ln f(x) \le f(x)-1\n\n$$\n\nis valid for all\n\n$f(x)>0$\n\n (because\n\n$\ln t \le t-1$\n\n for every\n\n$t>0$\n\n), but the left side\n\n$$\n\n(f(x)-1)-\tfrac{1}{2}(f(x)-1)^2 \le \ln f(x)\n\n$$\n\nis not valid for every\n\n$f(x)$\n\n with\n\n$|f(x)-1|<\tfrac{1}{2}$\n\n. It is true for\n\n$f(x)\ge 1$\n\n (i.e. for\n\n$u=f-1\ge 0$\n\n) because the alternating Taylor series for\n\n$\ln(1+u)$\n\n has a positive remainder after truncation at the\n\n$u^2$\n\n-term when\n\n$0\le u<1$\n\n. But it fails for\n\n$f(x)<1$\n\n (i.e.\n\n$u<0$\n\n); for example\n\n$f=0.6$\n\n gives\n\n$$\n\n(f-1)-\tfrac{1}{2}(f-1)^2 = -0.4 - 0.08 = -0.48\n\n$$\n\nwhile\n\n$\ln 0.6 \approx -0.5108 < -0.48$\n\n, so the purported lower bound is false there.\n\nYou do not need that (false) uniform lower bound to finish the proof. A standard and clean correction is to use the fact\n\n$$\n\n\lim_{t\to 1} \frac{\ln t}{t-1} = 1,\n\n$$\n\nwhich follows from L'Hôpital's rule (or from the Taylor series). Thus for any\n\n$\varepsilon>0$\n\n, there exists\n\n$\delta>0$\n\n such that whenever\n\n$0<|f(x)-1|<\delta$\n\n$$\n\n(1-\varepsilon)(f(x)-1) \le \ln f(x) \le (1+\varepsilon)(f(x)-1).\n\n$$\n\n(For $f(x)<1$ the two outer bounds trade places, which does not affect the squeeze argument.) Multiply by\n\n$g(x)$\n\n and use the hypothesis\n\n$g(x)(f(x)-1)\to \gamma$\n\n.
Since\n\n$\\varepsilon$\n\n was arbitrary, the squeeze argument yields\n\n$$\n\ng(x)\\ln f(x) \\to \\gamma.\n\n$$\n\nFinally the exponential is continuous, so\n\n$$\n\nf(x)^{g(x)} = e^{\\,g(x)\\ln f(x)} \\to e^\\gamma,\n\n$$\n\nwhich is exactly the claimed conclusion.", "answer_owner": "Thomas Lamby", "is_accepted": true, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107402, "title": "How to interpret the graph of the derivative of $x=\\tan(x+y)$?", "question_text": "Consider the curve defined implicitly by\n\n$$x=\\tan(x+y)$$\n\nThis is how the graph of the above curve looks like:\n\nIf you apply inverse tangent to both sides of the equation and use implicit differentation, you get that\n\n$$y'=\\frac{-x^2}{1+x^2}$$\n\nThis is how the graph of the above derivative looks like:\n\nThis graph makes sense: I am able to interpret it as the derivative of the original curve (it shows the rate of change of each branch of the original function).\n\nHowever, if I plug in the original equation\n\n$x=\\tan(x+y)$\n\n to the derivative and complete a few trigonometric simplifications, I get the following:\n\n$$y'=-\\sin^2(x+y)$$\n\nThis is how the graph of the above function looks like:\n\nHow can I geometrically understand this graph?\n\nHow can I \"see\" that this is the derivative of\n\n$x=\\tan(x+y)$\n\n?\n\nFor example, if I wanted to know how fast the\n\n$y$\n\n value of the original function changes where the black dot is:\n\nwhere can I find the corresponding point on the graph of\n\n$y'=-\\sin^2(x+y)$\n\n?\n\nAny help would be appreciated.", "question_owner": "Pawel", "question_link": "https://math.stackexchange.com/questions/5107402/how-to-interpret-the-graph-of-the-derivative-of-x-tanxy", "answer": { "answer_id": 5107405, "answer_text": "You made a mistake in your second, arctan-free solution to the problem.\n\nYou graphed\n\n$y = -\\sin^2(x+y)$\n\n, an implicit function defined between two variables, 
when in fact you have\n\n$\\frac{dy}{dx} = - \\sin^2(x+y)$\n\n - an algebraic relationship between\n\nthree\n\n quantities:\n\n$x$\n\n,\n\n$y$\n\n, and\n\n$dy/dx$\n\n, which cannot be easily represented on a 2D graph.\n\nIf we note that\n\n$\\tan(x+y) = x$\n\n, then we can conclude\n\n$$\\sec^2(x+y) = \\tan^2(x+y) + 1 = x^2+1$$\n\n so\n\n$$\\frac{dy}{dx} = -\\sin^2(x+y) = \\cos^2(x+y)-1=\\frac{1}{1+x^2}-1 = \\frac{-x^2}{x^2+1}$$\n\n confirming your previous calculation.", "answer_owner": "mathperson314", "is_accepted": true, "score": 7, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107310, "title": "How to show $4^{-n} ( {2n\\choose n} x + \\sqrt{n} \\sum_1^{n} {2n\\choose n-k} \\frac{\\sin(2kx/\\sqrt{n})}{k})$ approaches the error function for large n?", "question_text": "I've been asked to show\n\n$$\\lim_{n \\to \\infty} 4^{-n} \\left( {2n\\choose n} x + \\sqrt{n} \\sum_{k=1}^{n} {2n\\choose n-k} \\frac{\\sin\\left(\\frac{2kx}{\\sqrt{n}}\\right)}{k}\\right) = \\int_0^x e^{-t^2}dt$$\n\nwith the hint that I can approximate binomial coefficients using Stirling's approximation. However, I am struggling to start this derivation.", "question_owner": "user3760593", "question_link": "https://math.stackexchange.com/questions/5107310/how-to-show-4-n-2n-choose-n-x-sqrtn-sum-1n-2n-choose-n-k-fra", "answer": { "answer_id": 5107414, "answer_text": "Fix\n\n$x\\in\\mathbb R$\n\n. Put\n\n$$\n\np_{n,k}:=4^{-n}{2n\\choose n-k},\\qquad k\\in\\mathbb Z,\n\n$$\n\nand note that\n\n$p_{n,k}=\\mathbb P(X=n-k)$\n\n, where\n\n$X\\sim \\text{Binomial}(2n, 1/2)$\n\n. We want to show\n\n$$\n\n\\lim_{n\\to\\infty}\\left(p_{n,0}x+\\sqrt n\\sum_{k=1}^n p_{n,k}\\frac{\\sin(2kx/\\sqrt n)}{k}\\right)=\\int_0^x e^{-t^2}\\,\\mathrm dt.\n\n$$\n\nFirst\n\n$p_{n,0}=4^{-n}{2n\\choose n}\\sim\\frac{1}{\\sqrt{\\pi n}}$\n\n, hence\n\n$p_{n,0}x\\to0$\n\n. 
Therefore it suffices to consider\n\n$$\n\nS_n:=\\sqrt n\\sum_{k=1}^n p_{n,k}\\frac{\\sin(2kx/\\sqrt n)}{k}.\n\n$$\n\nUsing a version of the local limit theorem (see Theorem 6 in\n\nSums of Independent Random Variables\n\n by Valentin V. Petrov, p. 197),\n\n$$\n\n\\sup_k\\left| p_{n,k}-\\frac{1}{\\sqrt{\\pi n}}e^{-k^2/n}\\right|=O\\!\\left(\\frac 1n\\right)\\!\\qquad(n\\to\\infty).\n\n$$\n\nAnd we also know that there exist constants\n\n$C,c>0$\n\n such that for every\n\n$n$\n\n and every integer\n\n$ k $\n\n (which is guaranteed by the Chernoff bound),\n\n$$\n\np_{n,k}\\le \\frac{C}{\\sqrt n}e^{-c k^2/n}.\n\n$$\n\nPut\n\n$t_k=k/\\sqrt n$\n\n. Then\n\n$k=\\sqrt nt_k$\n\n and\n\n$$\n\n\\sqrt np_{n,k}\\frac{\\sin(2kx/\\sqrt n)}{k}=p_{n,k}\\frac{\\sin(2xt_k)}{t_k}.\n\n$$\n\nFor any fixed\n\n$M>0$\n\n split the sum at\n\n$k=\\lfloor M\\sqrt n\\rfloor$\n\n and write\n\n$S_n=S_{n}^{\\le M}+S_n^{>M}$\n\n.\n\nFor\n\n$t_k\\le M$\n\n$$\n\n\\sup_k\\left|p_{n,k} \\frac{\\sin(2x t_k)}{t_k} - \\frac{1}{\\sqrt{\\pi n}} e^{-t_k^2} \\frac{\\sin(2x t_k)}{t_k}\\right|=O\\!\\left(\\frac1n\\right)\\!\\quad (n\\to\\infty).\n\n$$\n\nSumming over\n\n$1\\le k\\le M\\sqrt n$\n\n gives a Riemann sum, which converges to the integral (since the number of terms is only of order\n\n$O(\\sqrt{n})$\n\n):\n\n$$\\lim_{n\\to\\infty} S_n^{\\le M}=\\frac{1}{\\sqrt\\pi}\\int_0^{M} e^{-t^2}\\frac{\\sin(2xt)}{t}\\,\\mathrm dt.$$\n\n(\n\n$t\\mapsto e^{-t^2}\\frac{\\sin(2xt)}{t}$\n\n is continuous on\n\n$[0,M]$\n\n if we define its value at\n\n$0$\n\n by the limit\n\n$2x$\n\n.)\n\nFor the tail\n\n$t_k>M$\n\n use the Chernoff bound. 
From\n\n$p_{n,k}\\le \\dfrac{C}{\\sqrt n}e^{-c t_k^2}$\n\n and\n\n$k=\\sqrt nt_k$\n\n,\n\n$$\\bigg|\\sqrt np_{n,k}\\frac{\\sin(2xt_k)}{k}\\bigg|\\le \\frac{C}{\\sqrt n}e^{-c t_k^2}\\frac{1}{t_k}\\cdot\\sqrt n=\\frac{C}{t_k}e^{-c t_k^2}.$$\n\nHence\n\n$$|S_n^{>M}|\\le C\\sum_{t_k>M}\\frac{e^{-c t_k^2}}{t_k}\\le C\\int_{M-1/\\sqrt n}^\\infty\\frac{e^{-c t^2}}{t}\\,\\mathrm dt.$$\n\nThe integral on the right tends to\n\n$0$\n\n as\n\n$M\\to\\infty$\n\n, uniformly in large\n\n$n$\n\n. Thus first letting\n\n$n\\to\\infty$\n\n (to evaluate\n\n$S_n^{\\le M}$\n\n) and then\n\n$M\\to\\infty$\n\n (to get rid of the tail) yields\n\n$$\\lim_{n\\to\\infty}S_n=\\frac{1}{\\sqrt\\pi}\\int_0^{\\infty} e^{-t^2}\\frac{\\sin(2xt)}{t}\\,\\mathrm dt.$$\n\nDefine\n\n$F(x):=\\dfrac{1}{\\sqrt\\pi}\\displaystyle\\int_0^\\infty e^{-t^2}\\frac{\\sin(2xt)}{t}\\,\\mathrm dt$\n\n. Differentiation under the integral sign is justified because\n\n$|\\partial_x(\\sin(2xt)/t)|=2|\\cos(2xt)|\\le2$\n\n and\n\n$\\int_0^\\infty e^{-t^2}\\,\\mathrm dt<\\infty$\n\n. Thus\n\n$$F'(x)=\\frac{2}{\\sqrt\\pi}\\int_0^\\infty e^{-t^2}\\cos(2xt)\\,\\mathrm dt.$$\n\nUsing the Fourier transform of the Gaussian\n\n$\\displaystyle\\int_{-\\infty}^{\\infty} e^{-t^2}e^{2ixt}\\,\\mathrm dt=\\sqrt\\pi e^{-x^2}$\n\n and taking real parts gives\n\n$$2\\int_0^\\infty e^{-t^2}\\cos(2xt)\\,\\mathrm dt={\\sqrt\\pi}e^{-x^2}.$$\n\nTherefore\n\n$F'(x)=e^{-x^2}$\n\n and\n\n$F(0)=0$\n\n, hence\n\n$F(x)=\\int_0^x e^{-u^2}\\,\\mathrm du$\n\n.\n\nThus we have proven that\n\n$$\\lim_{n\\to\\infty}\\left(p_{n,0}x+\\sqrt n\\sum_{k=1}^n p_{n,k}\\frac{\\sin(2kx/\\sqrt n)}{k}\\right)=\\int_0^x e^{-u^2}\\,\\mathrm du.$$\n\nHere's Theorem 6 in\n\nSums of Independent Random Variables\n\n by Valentin V. 
Petrov for reference:", "answer_owner": "Steve Norkus", "is_accepted": false, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107241, "title": "Is my proof for integrals correct?", "question_text": "I'm learning integrals in school now, and our curriculum starts off by defining the indefinite integral as the antiderivative.\n\n$$\n\n\\frac{d}{dx}[\\int f(x)dx]=f(x)\n\n$$\n\nI didn't like this approach, so I set out to define integrals with a different starting point by following the Khan Academy AP Calculus BC course's approach of defining integrals as the limit of Riemann Sums, and subsequently the area under the curve.\n\n$$\n\n\\int_{a}^{b}f(x)dx = \\lim_{n \\to\\infty}\\sum_{k=1}^{n}\\frac{b-a}{n}f(a+k \\frac{b-a}{n})\n\n$$\n\nOriginally, I used the mean value theorem to prove the integral as the antiderivative:\n\n$$\n\nF(x)=\\int_{a}^{x}f(t)dt \\\\\n\nF'(x)=\\lim_{h \\to 0}\\frac{F(x+h) - F(x)}{h} = \\lim_{h \\to 0} \\frac{\\int_{a}^{x+h}f(t)dt - \\int_{a}^{x}f(t)dt}{h} = \\lim_{h \\to 0} \\frac{\\int_{x}^{x+h}f(t)dt}{h}\n\n$$\n\nThe mean value theorem states for a function continuous over\n\n$[a, b]$\n\n and differentiable over\n\n$(a, b)$\n\n, there exists a value\n\n$c$\n\n between\n\n$a$\n\n and\n\n$b$\n\n such that\n\n$f'(c) = \\frac{f(b) - f(a)}{b - a}$\n\n.\n\n$$\n\nF'(c) = \\frac{F(b) - F(a)}{b-a} \\\\\n\nf(c) = \\frac{F(b) - F(a)}{b-a} \\\\\n\nf(c) = \\frac{\\int_{a}^{b}f(t)dt}{b-a} \\\\\n\nf(c)(b-a) = \\int_{a}^{b}f(t)dt\n\n$$\n\nSubstituting the result into the earlier proof results in:\n\n$$\n\n\\lim_{h \\to 0} \\frac{\\int_{x}^{x+h}f(t)dt}{h} = \\lim_{h \\to 0} \\frac{f(c)h}{h} = f(c)\n\n$$\n\nI noticed a fatal error here, which is that I assumed the integral to be the antiderivative while trying to prove the integral as the antiderivative, resulting in a circular argument. I consulted my math teacher about this problem, and he claimed that my proof still worked. 
He wasn't claiming that the mean value theorem could be proved without assuming the integral to be the antiderivative, he was claiming that my proof works as is. Is my teacher wrong here? There is a chance I've misconstrued his argument as I wasn't able to understand it at all, but I've tried my best to convey what he told me. Is there any case where the mean value theorem can be used like this to prove the integral as the antiderivative in this manner?\n\nBy the way, I've been able to complete my proof using the intermediate value theorem after the fact, so my question is not about that.", "question_owner": "Wide", "question_link": "https://math.stackexchange.com/questions/5107241/is-my-proof-for-integrals-correct", "answer": { "answer_id": 5107289, "answer_text": "Starting from the Riemann integral, you can prove easily that\n\n$F'(x)=f(x)$\n\n at every point\n\n$x$\n\n where\n\n$f$\n\n is continuous because\n\n\begin{equation}\n\n\left|\frac{F(x+h)-F(x)}{h}-f(x)\right| \le \frac{1}{|h|}\left|\int_x^{x+h}|f(t)-f(x)|\,dt\right|\le \sup_{t\in[x, x+h]}|f(t)-f(x)|\mathop{\longrightarrow}_{h\to 0} 0\n\n\end{equation}\n\nThis does not tell you anything at points where\n\n$f$\n\n is not continuous.\n\nA slight improvement of the same reasoning is semi-differentiability\n\n\begin{align}\n\n\partial_+ F(x) = \lim_{t\to x\atop t>x} f(t)\cr\n\n\partial_- F(x) = \lim_{t\to x\atop t<x} f(t)\n\n\end{align}\n\nUsing\n\n$$\n\n\frac{1}{\sqrt{a}}=\frac{1}{\sqrt{\pi}} \int_0^{\infty} x^{-1/2} e^{-a x}\, dx \qquad (a>0).\n\n$$\n\nSo\n\n\begin{align*}\n\nS\n\n&=\frac{1}{\pi}\int_{0}^{\infty}\!\!\int_{0}^{\infty}x^{-1/2}y^{-1/2}\n\n\Bigg[\sum_{n=0}^{\infty}(-1)^n e^{-(n+1)x}e^{-(n+2m+2)y}\Bigg]\n\n\Bigg[\sum_{m=0}^{\infty}(-1)^m e^{-2my}\Bigg]\,dx\,dy \\[4pt]\n\n&=\frac{1}{\pi}\int_{0}^{\infty}\!\!\int_{0}^{\infty}x^{-1/2}y^{-1/2}\,\n\n\frac{e^{-x-2y}}{(1+e^{-(x+y)})(1+e^{-2y})}\,dx\,dy.\n\n\end{align*}\n\nI don't know if this is correct or how to proceed from this.", "question_owner": "Gabriel", "question_link":
"https://math.stackexchange.com/questions/5105882/calculating-sum-m-0-infty-sum-n-0-infty-frac-1nm-sqrtn1", "answer": { "answer_id": 5107205, "answer_text": "Your solution should be exactly:\n\n$$ -\\frac{\\sqrt{2}}{4} (\\sqrt{2}-1)\\zeta(\\frac{1}{2}) \\left[\\zeta(\\frac{1}{2},\\frac{1}{4})-\\zeta(\\frac{1}{2},\\frac{3}{4})\\right]$$\n\nWhich appears to agree with @Claude Leibovici 's approximation. I will provide a proof (hopefully the result is correct) as soon as time permits.\n\n$\\mathbf{Edit\\, and\\, proof \\,below:}$\n\n1-Let us first unpack your double sum identity.\n\n$$ S = \\sum_{m=0}^{\\infty}\\sum_{n=0}^{\\infty} \\frac{(-1)^{m}(-1)^{n}}{\\sqrt{n+1}\\sqrt{n+2m+2}} = \\sum_{m=1}^{\\infty}\\sum_{n=1}^{\\infty} \\frac{(-1)^{m}(-1)^{n}}{\\sqrt{n}\\sqrt{n+2m-1}} $$\n\nHere I have simply changed the sums from starting at\n\n$m$\n\n or\n\n$n$\n\n$=$\n\n$0$\n\n to starting at\n\n$1$\n\n. Not relevant right now but I will refer to this later.\n\nThen:\n\n$$ S = \\color{red}{\\frac{1}{\\sqrt{1}}\\sum_{m=0}^{\\infty} \\frac{(-1)^{m}}{\\sqrt{2m+2}}} -\\color{blue}{\\frac{1}{\\sqrt{2}}\\sum_{m=0}^{\\infty} \\frac{(-1)^{m}}{\\sqrt{1+2m+2}}} + \\color{orange}{\\frac{1}{\\sqrt{3}}\\sum_{m=0}^{\\infty} \\frac{(-1)^{m}}{\\sqrt{2+2m+2}}} ... 
$$\n\n2- Let us now further unpack the sums into individual term, your S will be equivalent to all of the sums below:\n\n$\\color{red}{\\frac{1}{\\sqrt{1}}(\\frac{1}{\\sqrt{2}})}-\\color{blue}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{3}})}+\\color{orange}{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{4}})} - \\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{5}}) + \\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{6}}) - \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{7}}).....$\n\n$\\color{red}{-\\frac{1}{\\sqrt{1}}(\\frac{1}{\\sqrt{4}})}+\\color{blue}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{5}})}-\\color{orange}{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{6}})} + \\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{7}}) - \\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{8}}) + \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{9}}).....$\n\n$\\color{red}{\\frac{1}{\\sqrt{1}}(\\frac{1}{\\sqrt{6}})}-\\color{blue}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{7}})}+\\color{orange}{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{8}})} - \\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{9}}) + \\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{10}}) - \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{11}}).....$\n\n$\\,\\,\\,\\,\\,\\,\\,\\vdots$\n\n$\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\vdots$\n\n$\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\vdots$\n\n$\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\vdots$\n\n$\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\vdots$\n\n$\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\vdots$\n\n3- Now, I notice the following: if we follow the sums from the bottom - up we notice that they do not end where they are supposed to. For example: why does\n\n$-\\color{blue}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{3}})}$\n\n end there and not at\n\n$ \\color{blue}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{1}})}$\n\n? 
What will happen if we add these\n\n$\\color{purple}{missing \\, \\, terms}$\n\n?\n\n$\\qquad\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\color{purple}{\\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{1}})}.....$\n\n$\\qquad\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad\\color{purple}{-\\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{1}})+{\\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{2}}) - \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{3}})}}.....$\n\n$\\qquad\\qquad\\,\\,\\,\\,\\,\\color{purple}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{1}})-{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{2}}) + \\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{3}}) - \\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{4}}) - \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{5}})}}.....$\n\n$\\color{red}{\\frac{1}{\\sqrt{1}}(\\frac{1}{\\sqrt{2}})}-\\color{blue}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{3}})}+\\color{orange}{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{4}})} - \\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{5}}) + \\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{6}}) - \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{7}}).....$\n\n$\\color{red}{-\\frac{1}{\\sqrt{1}}(\\frac{1}{\\sqrt{4}})}+\\color{blue}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{5}})}-\\color{orange}{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{6}})} + \\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{7}}) - \\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{8}}) + \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{9}}).....$\n\n$\\color{red}{\\frac{1}{\\sqrt{1}}(\\frac{1}{\\sqrt{6}})}-\\color{blue}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{7}})}+\\color{orange}{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{8}})} - \\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{9}}) + \\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{10}}) - \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{11}}).....$\n\n4- Now, we notice that what we have created is a multiplication of infinite series. 
See based on the colors.\n\n$\\qquad\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\color{purple}{\\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{1}})}.....$\n\n$\\qquad\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad\\color{purple}{-\\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{1}})+\\color{red}{\\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{2}})} - \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{3}})}.....$\n\n$\\qquad\\qquad\\,\\,\\,\\,\\,\\color{purple}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{1}})}\\color{green}{-{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{2}})}\\color{purple}{ + \\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{3}}) - \\color{red}{\\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{4}})} - \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{5}})}}.....$\n\n$\\color{fuchsia}{\\frac{1}{\\sqrt{1}}(\\frac{1}{\\sqrt{2}})}-\\color{blue}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{3}})}+\\color{green}{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{4}})} - \\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{5}}) +\\color{red}{ \\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{6}})} - \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{7}}).....$\n\n$\\color{fuchsia}{-\\frac{1}{\\sqrt{1}}(\\frac{1}{\\sqrt{4}})}+\\color{blue}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{5}})}-\\color{green}{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{6}})} + \\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{7}}) - \\color{red}{\\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{8}})} + \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{9}}).....$\n\n$\\color{fuchsia}{\\frac{1}{\\sqrt{1}}(\\frac{1}{\\sqrt{6}})}-\\color{blue}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{7}})}+\\color{green}{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{8}})} - \\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{9}}) +\\color{red}{ \\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{10}})} - \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{11}}).....$\n\nNotice the multiplication now\n\n$$\\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n}}\\right)\\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n-1}}\\right) = 
\\left(-\\frac{1}{\\sqrt{2}}+\\frac{1}{\\sqrt{4}}-\\frac{1}{\\sqrt{6}}...\\right)\\left(-\\frac{1}{\\sqrt{1}}+\\frac{1}{\\sqrt{3}}-\\frac{1}{\\sqrt{5}}...\\right)$$\n\n$$$$\n\nMultiply the entire first series by\n\n$-\\frac{1}{\\sqrt{1}}$\n\n then Multiply the entire first series by\n\n$\\frac{1}{\\sqrt{3}}$\n\n and so on to obtain:\n\n$$\\color{fuchsia}{\\frac{1}{\\sqrt{1}}(\\frac{1}{\\sqrt{2}})}-\\color{fuchsia}{\\frac{1}{\\sqrt{1}}(\\frac{1}{\\sqrt{4}})}+\\color{fuchsia}{\\frac{1}{\\sqrt{1}}(\\frac{1}{\\sqrt{6}})...}$$\n\n$$\\color{green}{-\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{2}})}+\\color{green}{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{4}})}-\\color{green}{\\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{6}})}..$$\n\n$$\\color{red}{\\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{2}})}-\\color{red}{\\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{4}})}+\\color{red}{\\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{6}})}..$$\n\nWe have just shown that the odd columns (counting from left to right : 1,3,5 ...) are given by the multiplication of the above series. 
If we were to do the same for the even columns (counting from left to right: 2,4,6) we would realize that they are exactly:\n\n$$ \\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n-1}}\\right)\\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n}}\\right)$$\n\nTherefore our entire collection of infinite sums is given by:\n\n$$ \\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n}}\\right)\\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n-1}}\\right) + \\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n-1}}\\right)\\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n}}\\right) = 2 \\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n-1}}\\right)\\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n}}\\right) $$\n\n5- Now to remove the purple terms\n\n$$ S = 2 \\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n-1}}\\right)\\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n}}\\right) - \\color{purple}{terms \\, we \\, added \\, at\\, the \\, beginning} $$\n\nIf we look closely:\n\n$$ \\color{purple}{\\frac{1}{\\sqrt{2}}(\\frac{1}{\\sqrt{1}}) - \\frac{1}{\\sqrt{3}}(\\frac{1}{\\sqrt{2}}) + \\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{3}})} = \\sum_{n=1}^{\\infty}\\frac{(-1)^{n+1}}{\\sqrt{n+1}\\sqrt{n}} $$\n\n$$ \\color{purple}{-\\frac{1}{\\sqrt{4}}(\\frac{1}{\\sqrt{1}}) + \\frac{1}{\\sqrt{5}}(\\frac{1}{\\sqrt{2}}) - \\frac{1}{\\sqrt{6}}(\\frac{1}{\\sqrt{3}})} = \\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{n+3}\\sqrt{n}} $$\n\nIf we continue turning all the purple terms into infinite series we conclude that they are:\n\n$$ \\color{purple}{\\sum_{m=1}^{\\infty}\\sum_{n=1}^{\\infty} \\frac{(-1)^{m}(-1)^{n}}{\\sqrt{n}\\sqrt{n+2m-1}}} $$\n\nWhich is relevant in our section 1.\n\n6- To finalize:\n\n$$ S = 2 \\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n-1}}\\right)\\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n}}\\right) - \\color{purple}{S} $$\n\n$$ S = 
\\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n-1}}\\right)\\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n}}\\right) $$\n\nAnd with the help of Mathematica:\n\n$$ \\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n-1}}\\right) = -\\frac{\\zeta({\\frac{1}{2},\\frac{1}{4}}) - \\zeta({\\frac{1}{2},\\frac{3}{4}})}{2} $$\n\n$$\\left(\\sum_{n=1}^{\\infty}\\frac{(-1)^{n}}{\\sqrt{2n}}\\right) = \\frac{(\\sqrt{2}-1)\\zeta(\\frac{1}{2})}{\\sqrt{2}} $$\n\nThus :\n\n$$ S = \\sum_{m=0}^{\\infty}\\sum_{n=0}^{\\infty} \\frac{(-1)^{m}(-1)^{n}}{\\sqrt{n+1}\\sqrt{n+2m+2}} = -\\frac{\\sqrt{2}}{4} (\\sqrt{2}-1)\\zeta(\\frac{1}{2}) \\left[\\zeta(\\frac{1}{2},\\frac{1}{4})-\\zeta(\\frac{1}{2},\\frac{3}{4})\\right] \\approx 0.285590286661141394872194 {\\dots} $$\n\nWe have solved a seemingly complex problem by using fairly easy techniques.\n\nEdit: Apologies for the MathJax nightmare", "answer_owner": "Apples and Pears", "is_accepted": false, "score": 6, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 123493, "title": "Given an infinite bounded set A in $R^n$,$2\\leq n$, show there are infinite boundary points", "question_text": "I started studying multivariable caclulus, and am having problems with this exercise:\n\nGiven an infinite bounded set A in $R^n$,$2\\leq n$, show there are infinite boundary points\n\nAttempt\n\n: Going from the assumption that A has finite boundary points, we can assume it has an interior point. I believe it's possible to construct a ball around that point with large enough radius (assuming A is bounded by M, 3M should suffice), and prove that there's a boundary point on every line between a boundary point of that ball and the interior point we selected. 
This ended up being very complex and I didn't manage to work out all the details nicely, so I figure there must be something easier!\n\nThanks!", "question_owner": "ro44", "question_link": "https://math.stackexchange.com/questions/123493/given-an-infinite-bounded-set-a-in-rn-2-leq-n-show-there-are-infinite-boun", "answer": { "answer_id": 123497, "answer_text": "For the $n=2$ case:\n\nLet $A$ be such a set and let $L_x$ denote the vertical line that passes through $(x,0)$. Note that any boundary point of $L_x\\cap A$ (as a subset of $L_x$) is a boundary point of $A$, as if an open ball $B$ contains a point in $L_x$ outside $A\\cap L_x$ then this point lies outside $A$ as well. Since $A$ is bounded, for each $x\\in\\mathbb R$ either $L_x\\cap A$ is empty or it has at least one boundary point (this is not hard to show). If there are infinitely many $x\\in \\mathbb R$ such that $L_x\\cap A$ is nonempty, then we have infinitely many distinct boundary points and are done. Otherwise, for each $x$ such that $L_x\\cap A$ is nonempty we have some $\\epsilon>0$ such that $L_y\\cap A$ is empty for any $y$ such that $|x-y|<\\epsilon$. Thus any point $p\\in L_x\\cap A$ is such that $B_r(p)$ contains $B_{\\min\\{r,\\epsilon\\}}(p)\\setminus L_x$ which is nonempty and does not intersect $A$, so is a boundary point of $A$. Hence all points in $A$ are boundary points, so $A$ has infinitely many boundary points.\n\nFor $n>2$, the argument is very similar. 
I will let you work out the details.", "answer_owner": "Alex Becker", "is_accepted": false, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107314, "title": "How to prove : $\{ (2+\sqrt3)^n\} = 1 -(2-\sqrt3)^n$", "question_text": "I need help proving that for every positive integer $n$\n\n$$\{ (2+\sqrt3)^n\} = 1 -(2-\sqrt3)^n$$\n\nwhere\n\n$\{ x \}$\n\n is the\n\nfractional part\n\n.\n\nWe have\n\n\begin{align*}\n\n(2+\sqrt3)^n &= \sum_{k=0}^{n} \binom{n}{k} 2^{n-k}3^{k/2}\n\n\\ &= \sum_{k=0}^{\lfloor \frac{n}{2}\rfloor} \binom{n}{2k} 2^{n-2k}3^{k} + \sum_{k=1}^{\lfloor \frac{n+1}{2}\rfloor} \binom{n}{2k-1} 2^{n-2k+1}3^{k-\frac{1}{2}} \\ &= S_1 + \sqrt{3}S_2\n\n\end{align*}\n\nwhere\n\n\begin{align*}\n\nS_1 &= \sum_{k=0}^{\lfloor \frac{n}{2}\rfloor} \binom{n}{2k} 2^{n-2k}3^{k}\n\n\\ S_2 &= \sum_{k=1}^{\lfloor \frac{n+1}{2}\rfloor} \binom{n}{2k-1} 2^{n-2k+1}3^{k-1}\n\n\end{align*}\n\nWe have\n\n$S_1 \in \mathbb{N}$\n\n therefore\n\n$$\{(2+\sqrt3)^n \}= \{ \sqrt3 S_2\}$$\n\nAny hints or detailed explanation would be appreciated!", "question_owner": "epsilon", "question_link": "https://math.stackexchange.com/questions/5107314/how-to-prove-2-sqrt3n-1-2-sqrt3n", "answer": { "answer_id": 5107320, "answer_text": "$P = (2 + \sqrt{3})^n, \ N = (2 - \sqrt{3})^n, \ 0 < N < 1. $\n\n$P = I + f,$\n\n where\n\n$I$\n\n is the integer part of\n\n$P$\n\n and\n\n$f$\n\n is the fractional part of\n\n$P$\n\n.\n\n$\n\nP + N = (2 + \sqrt{3})^n + (2 - \sqrt{3})^n \in \mathbb{Z}.\n\n$\n\n$I + f + N \in \mathbb{Z}. $\n\n$f + N \in \mathbb{Z} \cap (0,2) = \{1\}. $\n\n$f = 1 - N = 1 - (2 - \sqrt{3})^n. 
$\n\n$\n\n\\{(2 + \\sqrt{3})^n\\} = 1 - (2 - \\sqrt{3})^n.\n\n$\n\n$\\rlap \\smile {\\dot{}\\dot{}}$", "answer_owner": "T﹏T", "is_accepted": true, "score": 5, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107277, "title": "Computing a minimum or maximum of a function", "question_text": "In linear regression, we usually make use usually the MSE, which can have the following form:\n\n$$L = \\frac{1}{N}\\sum_{i=1}^{N} (y_i - \\hat{y}_i)^2,$$\n\nwhere\n\n$y_i$\n\n is the target value and\n\n$ \\hat{y}_i$\n\n is the predicted value. When we would like to find the minimum value for L with respect to the model that eventually predicts y, we compute the derivative and set it to zero, and solve for the parameters of the model. How, though we ensure that the stationary point is computing a minimum and not a maximum?", "question_owner": "Jose Ramon", "question_link": "https://math.stackexchange.com/questions/5107277/computing-a-minimum-or-maximum-of-a-function", "answer": { "answer_id": 5107291, "answer_text": "As it was mentioned in the comments, when a convex function has a stationary point, it must be a global minimum, and this is in fact a convex function. 
If you don't immediately see this, you can always just compute the Hessian matrix and realize it is positive definite.\n\nSo, to summarize, the function to be optimized is\n\n$$\n\nL(\beta_0, \beta_1) = \frac{1}{N}\sum_{i=1}^N (\beta_0 + \beta_1 x_i - y_i)^2\n\n$$\n\nthe first order derivatives are given by\n\n$$\n\n\dfrac{\partial L}{\partial \beta_0} = \frac{2}{N}\sum_{i=1}^N(\beta_0+\beta_1 x_i - y_i),\n\n$$\n\n$$\n\n\dfrac{\partial L}{\partial \beta_1} = \frac{2}{N}\sum_{i=1}^N x_i (\beta_0+\beta_1 x_i - y_i)\n\n$$\n\nand the second order derivatives are\n\n$$\n\n\dfrac{\partial^2 L}{\partial \beta_0^2} = 2, \quad \dfrac{\partial^2 L}{\partial \beta_0 \partial \beta_1} = \frac 2N \sum_{i=1}^N x_i, \quad \dfrac{\partial^2 L}{\partial \beta_1^2} = \frac 2N \sum_{i=1}^N x_i^2\n\n$$\n\nHence, the Hessian matrix is constant and is given by\n\n$$\n\nH(\beta_0, \beta_1)=\begin{pmatrix}2 & \frac 2N \sum_{i=1}^N x_i\\\n\n\frac 2N \sum_{i=1}^N x_i & \frac 2N \sum_{i=1}^N x_i^2 \end{pmatrix}=\n\n\begin{pmatrix}2 & 2\overline{x}\\\n\n2\overline{x} & 2\overline{x^2} \end{pmatrix}.\n\n$$\n\nLooking at the determinants of the principal minors,\n\n$$\n\n\Delta_1 = 2 > 0, \quad \Delta_2 = 4 (\overline{x^2} - \overline{x}^2)=4 \textrm{ Var}(x)\ge 0,\n\n$$\n\nit is easy to see that both are positive, as long as the variance of\n\n$x$\n\n is not zero.", "answer_owner": "PierreCarre", "is_accepted": true, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1149514, "title": "Is the integral of the sum really the sum of the integrals?", "question_text": "I was asked to find the Maclaurin series of\n\n$\int_0^x\frac{\arctan (t)}{t}dt$\n\nUsing the known Maclaurin for arctan:\n\n$\arctan(t)=\sum_{n=1}^{\infty} \frac{(-1)^{n+1}t^{2n-1}}{2n-1}$\n\nOk, so what I did is use the known formula:\n\n$$\int_0^x\frac{\arctan 
(t)}{t}=\\int_0^x\\frac{\\sum_{n=1}^{\\infty} \\frac{(-1)^{n+1}t^{2n-1}}{2n-1}}{t}dt=\\int_0^x\\sum_{n=1}^{\\infty}\\frac{(-1)^{n+1}t^{2n-2}}{2n-1}dt$$\n\nThis is nothing more than\n\n$\\int_0^x1-\\frac{t^2}{3}+\\frac{t^4}{5}-\\cdots dt$\n\n.\n\nIf this was a finite sum, then yes, for sure I can divide it into separate integrals. But this is an infinite sum. The integrand is a polynomial, an integrable and even continuous function so I don't see any reason why we can't separate that integral of the sum into the sum of the integrals, but it's not apparent to me why it's obvious that we can do that either.\n\nBut if we can, then the Maclaurin series we were looking for is\n\n$\\sum_{n=1}^{\\infty}\\int_0^x\\frac{(-1)^{n+1}t^{2n-2}}{2n-1}dt$\n\nIs this correct? If so - can we always separate the integral of the sum into sum of integrals? even infinite sums?", "question_owner": "Oria Gruber", "question_link": "https://math.stackexchange.com/questions/1149514/is-the-integral-of-the-sum-really-the-sum-of-the-integrals", "answer": { "answer_id": 1149519, "answer_text": "If\n\n$\\displaystyle\\int_a^b \\sum_{n=1}^\\infty |a_n(t)|\\,dt<\\infty$\n\n then\n\n$\\displaystyle\\int_a^b \\sum_{n=1}^\\infty a_n(t)\\,dt = \\sum_{n=1}^\\infty \\int_a^b a_n(t)\\,dt$\n\n.\n\nThis is actually a special case of Fubini's theorem.", "answer_owner": "Michael Hardy", "is_accepted": true, "score": 7, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5090551, "title": "Integral of lemniscate tangent function", "question_text": "$\\def\\sl{\\operatorname{sl}}\\def\\cl{\\operatorname{cl}}\\def\\tl{\\operatorname{tl}}\\def\\cscl{\\operatorname{cscl}}\\def\\secl{\\operatorname{secl}}\\def\\cotl{\\operatorname{cotl}}\\def\\d{\\,\\mathrm{d}}$\n\nThe lemniscate sine \"sl\" and lemniscate cosine \"cl\" are defined as the solutions to:\n\n$\\displaystyle\\frac{\\partial}{\\partial 
z}\\,\\sl(z)=\\left(1+\\sl^{2}(z)\\right)\\cl(z),\\quad\\frac{\\partial}{\\partial z}\\,\\cl(z)=-\\left(1+\\cl^{2}(z)\\right)\\sl(z),\\quad \\sl(0)=0,\\quad \\cl(0)=1$\n\nIn terms of Jacobi elliptic functions, they are:\n\n$\\sl(z)=\\operatorname{sn}(z\\,\\vert-1)\\quad\\cl(z)=\\operatorname{cd}(z\\,\\vert-1)\\quad\\tl(z)=\\operatorname{sn}(z\\,\\vert-1)\\cdot\\operatorname{dc}(z\\,\\vert-1)$\n\nDefine the \"lemniscate tangent\" as the ratio of the lemniscate sine to the lemniscate cosine:\n\n$\\displaystyle\\tl(z)=\\frac{\\sl(z)}{\\cl(z)}$\n\nI'm trying to find the antiderivative of this function.\n\nIt is known that:\n\n$\\displaystyle\\int\\sl(z)\\d z = -\\arctan\\left(\\cl(z)\\right)+C$\n\n$\\displaystyle\\int\\cl(z)\\d z = \\arctan\\left(\\sl(z)\\right)+C$\n\nMore identities can be found on Wikipedia's article on the lemniscate functions.\n\nHere's what I've tried using integration by parts:\n\n$\\displaystyle\\int\\tl(z)\\d z$\n\n$\\displaystyle=\\int\\underbrace{\\frac{1}{\\cl(z)}}_{u}\\,\\underbrace{\\sl(z)}_{\\d v}\\d z$\n\n$\\displaystyle= \\underbrace{\\frac{1}{\\cl(z)}}_{u}\\underbrace{\\left(-\\arctan\\left(\\cl(z)\\right)\\right)}_{v} - \\int\\underbrace{-\\arctan\\left(\\cl(z)\\right)}_{v}\\,\\underbrace{\\frac{-\\cl^\\prime(z)}{\\cl^2{z}}}_{\\d u}\\d z$\n\nSince\n\n$\\displaystyle\\int\\frac{\\arctan\\left(f(x)\\right) f^\\prime(x)}{f(x)^2}\\d x = \\ln\\left(\\frac{f(x)}{\\sqrt{1+f(x)^2}}\\right) - \\frac{\\arctan\\left(f(x)\\right)}{f(x)}+C$\n\nwe have\n\n$\\displaystyle\\ldots = -\\frac{\\arctan\\left(\\cl(z)\\right)}{\\cl(z)} - \\ln\\left(\\frac{\\cl(z)}{\\sqrt{1+\\cl^2(z)}}\\right) + \\frac{\\arctan\\left(\\cl(z)\\right)}{\\cl(z)}$\n\n$\\displaystyle=\\ln\\left(\\frac{\\sqrt{1+\\cl^2(z)}}{\\cl(z)}\\right)+C$\n\n$\\displaystyle\\cong\\frac{1}{2}\\ln\\left(\\frac{1+\\cl^2(z)}{\\cl^2(z)}\\right)+C$\n\n$\\displaystyle=\\frac{1}{2}\\ln\\left(1+\\frac{1}{\\cl^2(z)}\\right)+C$\n\nIs this correct?\n\nEDIT:\n\n If we define \"lemniscate 
cosecant/secant/cotangent\" functions in the same way as the circular functions:\n\n$\\displaystyle\\cscl(z)=\\frac{1}{\\sl(z)}\\qquad\\secl(z)=\\frac{1}{\\cl(z)}\\qquad\\cotl(z)=\\frac{1}{\\tl(z)}=\\frac{\\cl(z)}{\\sl(z)}$\n\nThen\n\n$\\cscl(z)=\\operatorname{ns}(z\\,\\vert-1)\\quad\\secl(z)=\\operatorname{dc}(z\\,\\vert-1)\\quad\\cotl(z)=\\operatorname{ns}(z\\,\\vert-1)\\cdot\\operatorname{cd}(z\\,\\vert-1)$\n\n$\\displaystyle\\int\\tl(z)\\d z=\\frac{\\ln\\left(1+\\secl^2(z)\\right)}{2}+C$", "question_owner": "user1658693", "question_link": "https://math.stackexchange.com/questions/5090551/integral-of-lemniscate-tangent-function", "answer": { "answer_id": 5090556, "answer_text": "The comment from Travis is a good idea. You can also avoid all that arctan funny business by rewriting directly according to the definition:\n\n$\\displaystyle{\\int \\frac{\\text{sl} z dz}{\\text{cl} z}}$\n\n$\\displaystyle{= \\int \\frac{- d(\\text{cl} z)}{\\text{cl} z \\cdot (1+\\text{cl}^2 z)}}$\n\n$\\displaystyle{= \\int \\frac{- du}{u \\cdot (1+u^2)}}$\n\nAnd then keep going as you would have. This produces the same answer as your strategy.", "answer_owner": "ryemoon", "is_accepted": true, "score": 4, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4010567, "title": "Two different definite integrals having same Riemann sum", "question_text": "The Riemann sum of the expression\n\n$\\frac{3}{n}\\sum_{i=1}^n (10+\\frac{3i}{n})^{\\frac{1}{2}}$\n\n is the definite integral\n\n$\\int_{10}^{13}x^{\\frac{1}{2}}dx$\n\n but the question asks me about another integral which has the same Riemann sum as the one above. 
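As a quick numerical check (a sketch; the choice of n is arbitrary) that the stated Riemann sum really tends to $\int_{10}^{13}\sqrt{x}\,dx$:

```python
import math

def riemann_sum(n):
    # (3/n) * sum_{i=1}^{n} sqrt(10 + 3i/n): right-endpoint Riemann sum on [10, 13]
    return (3.0 / n) * sum(math.sqrt(10.0 + 3.0 * i / n) for i in range(1, n + 1))

exact = (2.0 / 3.0) * (13.0 ** 1.5 - 10.0 ** 1.5)  # ∫_10^13 x^(1/2) dx
approx = riemann_sum(200_000)
```

Any integral obtained from $\int_{10}^{13}\sqrt{x}\,dx$ by a substitution that merely shifts the interval must match this same numerical value.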
The other integral is of the form\n\n$\int_6^9 f(x)\,dx$\n\n.\n\nI tried working out the other integral and it's\n\n$\int_6^9 (10+x)^{1/2}\,dx$\n\n but this is clearly wrong after evaluating the integral.\n\nAny hints on this question?", "question_owner": "Aristotle Stagiritis", "question_link": "https://math.stackexchange.com/questions/4010567/two-different-definite-integrals-having-same-riemann-sum", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 3006881, "title": "Help for this problem involving Riemann integral and partitions", "question_text": "If\n\n$f: I\to\mathbb R$\n\n is bounded, let\n\n$||f||:= \sup\,\{|f(x)|: x \in I\}$\n\n, and if\n\n$P =(x_0,...,x_n)$\n\n is a partition of\n\n$I:=[a,b]$\n\n, let\n\n$||P||:=\sup\,\{x_1-x_0,...,x_n-x_{n-1}\}$\n\n(a) If\n\n$P'$\n\n is the partition obtained from\n\n$P$\n\n as in the proof of Lemma 7.1.2, show that\n\n$L(P,f) ≤ L(P',f) ≤ L(P,f) + 2||f|| ||P||$\n\n and\n\n$U(P,f) ≥ U(P',f) ≥ U(P,f) - 2||f|| ||P||$\n\n(b) If\n\n$P_1$\n\n is a partition obtained from\n\n$P$\n\n by adding\n\n$k$\n\n points to\n\n$P$\n\n, show that\n\n$L(P_1,f) ≤ L(P,f) + 2k||f|| ||P||$\n\n and also that\n\n$U(P_1,f) ≥ U(P,f) - 2k||f|| ||P||$\n\nWhat I have, from intuition and a hint of my professor, is this:\n\nFirst, the partition of Lemma 7.1.2 is\n\n$P':=(x_0,x_1,...,x_{k-1},z,x_k,...,x_n)$\n\nSo far (part of this following the professor's hint) we have\n\n$|m_k|$\n\n,\n\n$|m'_k|$\n\n,\n\n$|m''_k|$\n\n$≤$\n\n$||f||$\n\n hence\n\n$0 ≤ L(P',f) - L(P,f) = (m'_k-m_k)(z-x_{k-1})+(m''_k-m_k)(x_k-z)$\n\n$≤$\n\n$2||f||(x_k-x_{k-1}) ≤ 2||f|| ||P||$\n\nWhere\n\n$m_k:=\inf\{f(x):x \in [x_{k-1},x_k]\}$\n\n$m'_k:=\inf\{f(x):x \in [x_{k-1},z]\}$\n\n$m''_k:=\inf\{f(x):x \in [z,x_k]\}$\n\nAnd for part (b) I think I need to use induction, but I need to solve (a) first 
to solve (b), and I'm sincerely lost ): can someone help me?", "question_owner": "Daniel ML", "question_link": "https://math.stackexchange.com/questions/3006881/help-for-this-problem-involving-riemann-integral-and-partitions", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4701363, "title": "constant coefficient in asymptotic for central binomial", "question_text": "Problem: Find\n\n$\lim _{n \to \infty}\frac{(n!)^22^{2n}}{(2n)!\sqrt n}$\n\n.\n\nI used Stirling's approximation\n\n$$\frac{(n!)^22^{2n}}{(2n)!\sqrt n}\approx \frac{((\sqrt{2\pi n})n^ne^{-n})^22^{2n}}{(\sqrt{4\pi n})(2n)^{2n}e^{-2n}\sqrt n}=\frac{n^{2n}e^{-2n}2\pi n(2^{2n})}{2^{2n}n^{2n}e^{-2n}2\pi^{0.5}n}=\frac\pi{\pi^{0.5}}=\sqrt\pi$$\n\n$$\lim _{n \to \infty}\frac{(n!)^22^{2n}}{(2n)!\sqrt n}\approx\lim _{n \to \infty}\sqrt\pi=\sqrt\pi$$\n\nIf someone can help me and tell me where my mistake is, I will appreciate it very much.\n\nI think my mistake is that I only showed the expression is approximately $\sqrt\pi$, not that the limit is exactly $\sqrt\pi$.", "question_owner": "Mingyu", "question_link": "https://math.stackexchange.com/questions/4701363/constant-coefficient-in-asymptotic-for-central-binomial", "answer": { "answer_id": 4701442, "answer_text": "The main problem I see is that in using Stirling's approximation for both factorials you haven't proven that the error in the approximation disappears in the limit. 
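As a sanity check on the conjectured value (a sketch, not the proof being asked for; log-gamma keeps the factorials from overflowing):

```python
import math

def ratio(n):
    # log of (n!)^2 * 2^(2n) / ((2n)! * sqrt(n)), via lgamma to avoid overflow
    log_r = (2.0 * math.lgamma(n + 1) + 2.0 * n * math.log(2.0)
             - math.lgamma(2 * n + 1) - 0.5 * math.log(n))
    return math.exp(log_r)

val = ratio(10**6)  # should be very close to sqrt(pi)
```

This only suggests the value numerically; the argument that the Stirling errors vanish in the limit is still needed.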
It may happen to give the right answer in this case, but it is quite easy to construct situations where blindly applying approximations doesn't work - for example,\n\nthis video\n\n where applying\n\n$\\sin x \\sim x$\n\n in the limit\n\n$\\lim_{x \\rightarrow 0} \\frac{1}{\\sin^2 x} - \\frac{1}{x^2}$\n\n gives the wrong answer.\n\nTo approach the limit properly, you would need to break it into component limits whose values you can evaluate, which can still include a more careful application of Stirling's formula. For example, you could start something like this:\n\n$$\\lim_{n \\rightarrow \\infty} \\frac{(n!)^2 2^{2n}}{(2n)!\\sqrt{n}} = \\lim_{n \\rightarrow \\infty} \\left(\\frac{n!}{\\sqrt{2 \\pi n} \\left( \\frac{n}{e} \\right)^n} \\right)^2 \\left( \\frac{\\sqrt{2 \\pi (2n)}\\left( \\frac{2n}{e} \\right)^{2n}}{(2n)!} \\right) \\frac{2^{2n}}{\\sqrt{n}} \\frac{\\left(\\sqrt{2 \\pi n} \\left(\\frac{n}{e} \\right)^n \\right)^2}{\\sqrt{2 \\pi (2n)} \\left(\\frac{2n}{e} \\right)^{2n}}$$\n\nwhere all I've done is \"multiply by 1\" (i.e. introduce the same factor into both the numerator and denominator) and broken things up into three terms. 
At\n\nthis\n\n point you can then note that the limits of the first two terms are both 1 thanks to Stirling's formula, and the third term is essentially what you got in your solution, and so you can apply the limit rules to split this into the product of three separate limits and finish it off.", "answer_owner": "ConMan", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107234, "title": "Could someone help me solve this math problem?", "question_text": "If\n\n$f:\mathbb R \to \mathbb R$\n\n;\n\n$f(0)=2013$\n\n; for any\n\n$x \in \mathbb R$\n\n,\n\n$f(x-5)-f(x) \leq 3(x+3)$\n\n,\n\n$f(x+15)-f(x) \geq 9(x+8)$\n\n; then what’s the value of\n\n$f(2025)/2026$\n\n?", "question_owner": "lily liu", "question_link": "https://math.stackexchange.com/questions/5107234/could-someone-help-me-solve-this-math-problem", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5088765, "title": "Feedback on a numerical integration formula", "question_text": "$$\sum_{k=1}^{\lceil\Delta x / \text{sub}\rceil} \int_{a_k}^{b_k} \left(\sum_{n=0}^2 \frac{f^{(n)}(m_k)}{n!}(x - m_k)^n\right) dx$$\n\nDefinition of variables:\n\n$a_k$\n\n,\n\n$b_k$\n\n,\n\n$m_k$\n\n: The left endpoint, right endpoint, and midpoint of their respective subinterval\n\n$_k$\n\n.\n\n$Δx$\n\n: the length of the entire interval you’re integrating.\n\nQuestion is located in the last paragraph. Explanation of the formula is right below.\n\nOver the past 2 days, I’ve developed this numerical integration formula which uses Taylor-Series approximated parabolas and their antiderivatives to approximate the integral of a function over a certain interval. 
Because of the way this formula converts the given function into a quadratic and integrates it, I’ve decided to call this formula the Quadratization Rule of Integration (the Q-Rule for short). My reason for posting this is partly just to get it out there, but also because there are some issues with it that I’d like to address. First I’ll explain how it works though:\n\n$$\\sum_{k=1}^{\\lceil\\Delta x / \\text{sub}\\rceil} \\int_{a_k}^{b_k} \\left(\\sum_{n=0}^2 \\frac{f^{(n)}(m_k)}{n!}(x - m_k)^n\\right) dx$$\n\nStarting with the summation in the middle. This is just the equation for the Taylor series of a function, the differences being that the upper limit of the summation is 2 (which gives a quadratic function approximation), and it is centered at the variable\n\n$m_k$\n\n. The\n\n$m$\n\n represents the midpoint, and the value of\n\n$_k$\n\n represents which subinterval\n\n$m$\n\n is the midpoint of. I’ll explain more about\n\n$_k$\n\n a little later, but that’s about it for this part of the formula.\n\nAfter you compute the summation for the Taylor approximated quadratic, you integrate the quadratic across [\n\n$a_k,b_k$\n\n] by applying the power rule to the quadratic.\n\n$a_k$\n\n and\n\n$b_k$\n\n can be thought of the same way you would think of\n\n$m_k$\n\n. They are the points\n\n$a$\n\n and\n\n$b$\n\n of their respective subintervals\n\n$_k$\n\n, just like how\n\n$m_k$\n\n is the midpoint of it’s respective subinterval\n\n$_k$\n\n.\n\n$$\\sum_{k=1}^{\\lceil\\Delta x / \\text{sub}\\rceil} \\int_{a_k}^{b_k} \\left(\\sum_{n=0}^2 \\frac{f^{(n)}(m_k)}{n!}(x - m_k)^n\\right) dx$$\n\nNext is the summation shown on top of the rest of the formula (usually will be written to the left of the rest of the formula). This summation is used to create subintervals to apply the rest of the formula to. First, you divide your total interval into a certain amount of subintervals based on the summation’s upper limit. 
The value of “sub” is just the length you want all of the subintervals to be (something less than one, like 0.5 is usually best for accuracy). Afterwards, you divide\n\n$Δx$\n\n (entire interval length) by that number to get how many intervals you should do. Then, the ceiling function (rounding up) is applied to that number to prevent a decimal upper limit. You then divide your total interval (\n\n$Δx$\n\n) into that amount of subintervals and begin applying the rest of the formula to each subinterval.\n\nYou start at\n\n$k = 1$\n\n for the first subinterval, creating a taylor series quadratic approximation centered at the midpoint\n\n$m_1$\n\n, and then you integrate that quadratic across\n\n$a_1$\n\n and\n\n$b_1$\n\n. Then continue this until you’ve applied it to every subinterval up to your upper limit. You then add up all of those integrals, and that’s approximately the definite integral of the original function.\n\n$$\\sum_{k=1}^{\\lceil\\Delta x / \\text{sub}\\rceil} \\int_{a_k}^{b_k} \\left(\\sum_{n=0}^2 \\frac{f^{(n)}(m_k)}{n!}(x - m_k)^n\\right) dx$$\n\nSo far, I haven’t tested the Q-Rule too much, but using polynomial approximations of functions typically works well for integration, especially with a small enough subinterval that allows the polynomial to still give a good approximation of the function. I have actually tested it on the function e^cos(x), (which has no elementary antiderivative), and it gave\n\n$1.114$\n\n, which is about\n\n$0.001$\n\n off from the actual integral (according to an integral calculator). Another small thing I’d like to add is that this is not a derivation of Simpson’s Rule. Simpsons Rule fits a parabola over 3 distinct points, then uses a special formula to find the area of the parabolic segment. The Q-Rule creates a quadratic centered at and tangent to one single point using a Taylor series approximation. 
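For concreteness, the recipe above can be sketched in a few lines of Python (the function names are mine, and exact derivatives are passed in directly, side-stepping the numerical-differentiation question for the moment):

```python
import math

def q_rule(f, f2, a, b, sub=0.5):
    """Composite 'Q-Rule': integrate the degree-2 Taylor polynomial of f
    centered at each subinterval midpoint.  f2 is the second derivative;
    the first-derivative term integrates to zero by symmetry about m_k."""
    n = math.ceil((b - a) / sub)   # ceil(Δx / sub) subintervals
    h = (b - a) / n                # actual width of each subinterval
    total = 0.0
    for k in range(n):
        m = a + (k + 0.5) * h      # midpoint m_k
        total += f(m) * h + f2(m) * h**3 / 24.0
    return total

approx = q_rule(math.sin, lambda x: -math.sin(x), 0.0, math.pi)  # exact value: 2
```

Each subinterval contributes $h\,f(m_k) + h^3 f''(m_k)/24$, because the odd Taylor term integrates to zero around the midpoint.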
It then integrates that parabola with the power rule across its subinterval.\n\n$$\sum_{k=1}^{\lceil\Delta x / \text{sub}\rceil} \int_{a_k}^{b_k} \left(\sum_{n=0}^2 \frac{f^{(n)}(m_k)}{n!}(x - m_k)^n\right) dx$$\n\nAnyway, here is my question. The nth derivative in the Taylor approximation only needs to be evaluated at one point per subinterval. To make evaluating the function’s derivative at a single point simple, a numerical differentiation technique can be used. What are some simple numerical differentiation techniques I can write into the formula, and how would you recommend notating it?\n\nThanks.", "question_owner": "Munchrr", "question_link": "https://math.stackexchange.com/questions/5088765/feedback-on-a-numerical-integration-formula", "answer": { "answer_id": 5107208, "answer_text": "You describe a composite rule, but I want to concentrate on a single interval.\n\n$$\int_a^{a+h} \sum _{n=0}^2 \frac{f^{(n)}\left(\frac{2 a + h}{2}\right) }{n!} \left(x-\frac{2 a + h}{2}\right)^n\, dx$$\n\nYou approximate the function by a truncated Taylor series; the error is dominated by the first term of the truncated tail, i.e. if you use $n$ terms, the $(n+1)$-th term of the Taylor series is your error term, which is called the Lagrange remainder.\n\n$$\frac{f^{(n+1)}\left(\frac{2 a + h}{2}\right) }{(n+1)!} \left(x-\frac{2 a + h}{2}\right)^{n+1}$$\n\nThe error of an integration scheme (rule, quadrature) is the integral of the error term of your approximation, i.e. the integral of the Lagrange remainder in your case. In your case this integral vanishes (= 0), which doesn't mean your scheme is exact; it simply means the next term in the tail takes over. This is a nice situation: when it happens you get an extra order of accuracy for free. 
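A minimal sketch of the composite rule described above, with the midpoint derivatives replaced by standard central differences (one concrete answer to the notation question; all names here are my own, and the choice of difference step is an assumption, not part of the original formula):

```python
import math

def q_rule(f, a, b, sub):
    """Composite 'Q-Rule' sketch: on each subinterval, integrate the
    degree-2 Taylor polynomial of f about the midpoint m.  Integrating
    f(m) + f'(m)(x-m) + f''(m)(x-m)^2/2 over [m-h/2, m+h/2] exactly
    gives h*f(m) + (h**3/24)*f''(m); the f'(m) term integrates to zero."""
    n = math.ceil((b - a) / sub)   # ceiling of (interval length)/sub, as in the formula
    h = (b - a) / n                # actual uniform subinterval width
    total = 0.0
    for k in range(n):
        m = a + (k + 0.5) * h      # midpoint m_k of the k-th subinterval
        d = h / 2                  # central-difference step (my choice)
        f2 = (f(m - d) - 2.0 * f(m) + f(m + d)) / d**2  # second-order approx of f''(m_k)
        total += h * f(m) + h**3 / 24.0 * f2
    return total

approx = q_rule(math.sin, 0.0, math.pi, 0.1)  # exact value of the integral is 2
```

With $d = h/2$ the three sampled points are exactly $f(a_k)$, $f(m_k)$, $f(b_k)$, and each per-interval value equals $\frac{h}{6}\big(f(a_k) + 4f(m_k) + f(b_k)\big)$, i.e. the scheme collapses to composite Simpson.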
So, you got\n\n$$\\int_a^{a+h}\\frac{f^{(n+2)}\\left(\\frac{2 a + h}{2}\\right) }{(n+2)!} \\left(x-\\frac{2 a + h}{2}\\right)^{n+2} dx\n\n\\underset{n=2}{=} \\frac{h^5 f^{(4)}\\left(\\frac{1}{2} (2 a+h)\\right)}{1920}$$\n\nMore precisely, there is a\n\n$c\\in(a,a+h)$\n\n such that the error is\n\n$E=\\frac{h^5 f^{(4)}\\left(c\\right)}{1920}$\n\nSimilar analysis for Simpson Rule gives\n\n$E=-\\frac{(h)^5}{90}\\,f^{(4)}(c)$\n\nSo far so good. However, the pitfall is that normally you know\n\n$f(x)$\n\n that you want to integrate, but not necessary its derivative. For example if\n\n$f$\n\n is obtained from previous computation you only have a discrete function, a table of values\n\n$(x_i,f(x_i))$\n\n, and therefore you need numerical approximation of the derivatives.\n\nA different numerical derivative scheme will give a different error for your quadrature idea. For example\n\n$$f'(a)\\approx \\frac{f(a+h) - f(a-h)}{2h}$$\n\nhave a second order accuracy, which means the error is\n\n$O(h^2)$\n\n.\n\nSimilarly\n\n\\begin{align}\n\nf''(a) &\\approx \\frac{f(a+h) - 2f(a) + f(a-h)}{h^2}, \\\\\n\n%f'''(a) &\\approx \\frac{f(a+2h) - 2f(a+h) + 2f(a-h) - f(a-2h)}{2h^3}, \\\\\n\n%f^{(4)}(a) &\\approx \\frac{f(a+2h) - 4f(a+h) + 6f(a) - 4f(a-h) + f(a-2h)}{h^4},\n\n\\end{align}\n\nis a second order approximation.\n\nNow, your method gives\n\n$$\\int_a^{a+h} f(x)\\approx h f\\left(m\\right) +\\frac{1}{24} h^3 f''\\left(m\\right), $$\n\nwhere\n\n$m=\\frac{2 a+h}{2}$\n\n - the midpoint.\n\nSubstituting the above approximations for the derivatives we get the famous Simpson:\n\n\\begin{align}\n\n\\int_a^{a+h} f(x)\\,dx\n\n&\\approx h f(m) + \\frac{h^3}{24} f''(m) \\\\\n\n&\\approx h f(m) + \\frac{h^3}{24} \\cdot \\frac{4\\big(f(a) - 2 f(m) + f(a+h)\\big)}{h^2} \\\\\n\n&= h f(m) + \\frac{h}{6} \\big(f(a) - 2 f(m) + f(a+h)\\big) \\\\\n\n&= \\frac{h}{6} \\big(f(a) + 4 f(m) + f(a+h)\\big)\n\n\\end{align}\n\nI would say, this is a nice exercise - an alternative derivation of Simpson rule, 
such things often happen. You can try different numerical schemes to see what happens.\n\nhttps://en.wikipedia.org/wiki/Finite_difference_method", "answer_owner": "Michael Medvinsky", "is_accepted": true, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 592581, "title": "Finding the limit of $ \frac{1}{n}\sum_{k=1}^{n}\sqrt[k]{k} $", "question_text": "I need some help regarding a certain section in a homework question. I need to find the limit of:\n\n$$ \lim_{n\rightarrow\infty}\left(\frac{1}{n}\sum_{k=1}^{n}\sqrt[k]{k}\right).$$\n\nNow my intuition is that it converges to 1, but the whole question is centered around the number\n\n$ e $\n\n, so I'm guessing there's some connection to\n\n$ e $\n\n and that's where I'm stuck.\n\nAnyone have any hints?\n\nThanks", "question_owner": "user475680", "question_link": "https://math.stackexchange.com/questions/592581/finding-the-limit-of-frac1n-sum-k-1n-sqrtkk", "answer": { "answer_id": 592588, "answer_text": "Hint:\n\n If $a_n\to a$ then $\frac1n\sum_{k=1}^n a_n\to a$ (Cesàro summation)\n\nEdit:\n\nIf Cesàro cannot be used immediately, here's a quick proof:\n\nAs $a_n$ converges, it is bounded, say $|a_n|<M$. Let $\epsilon>0$ be given. There exists $N$ with $|a_n-a|<\frac\epsilon2$ for all $n>N$. Then $$\left|\frac1n\sum_{k=1}^n a_n -a\right|=\left|\frac1n\sum_{k=1}^n (a_n -a)\right|\le \frac1n\sum_{k=1}^N(M+a)+\frac1n\sum_{k=N+1}^n\frac\epsilon2\le \frac Nn(M+a)+\frac\epsilon2$$\n\nfor such $n$.\n\nNow let $N'=\max\left\{N,\left\lceil\frac{2N(M+a)}{\epsilon}\right\rceil\right\}$. 
Then\n\n$$\\left|\\frac1n\\sum_{k=1}^n a_n -a\\right|<\\epsilon$$\n\nfor all $n>N'$.", "answer_owner": "Hagen von Eitzen", "is_accepted": true, "score": 13, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5045806, "title": "Is there any other method to evaluate the Interesting integral $\\int_0^1 \\cot ^{-1}(k\\sqrt{x(1-x)}) d x$?", "question_text": "To tackle the integral\n\n$\\int_0^1 \\cot ^{-1}(\\sqrt{x(1-x)}) d x$\n\n, I had tried integration by parts and Feynman’s trick and failed. Then I used the substitution\n\n$x=\\sin^2 \\theta$\n\n and obtained\n\n$$\n\n\\begin{aligned}\n\n\\int_0^1 \\cot ^{-1}(\\sqrt{x(1-x)}) d x & =\\int_0^{\\frac{\\pi}{2}} \\cot ^{-1}(\\sin \\theta \\cos \\theta) 2 \\sin \\theta \\cos \\theta d \\theta \\\\\n\n& =\\int_0^{\\frac{\\pi}{2}} \\cot ^{-1}\\left(\\frac{\\sin 2 \\theta}{2}\\right) \\sin 2 \\theta d \\theta\\\\&= \\frac{1}{2} \\int_0^\\pi \\frac{\\tan ^{-1}(2 \\csc \\theta)}{\\csc \\theta} d \\theta\\\\&= \\int_0^{\\frac{\\pi}{2}} \\int_0^2 \\frac{\\sin ^2 \\theta}{a^2+\\sin ^2 \\theta} d a d \\theta\\\\&= \\int_0^2\\int_0^{\\frac{\\pi}{2}} \\frac{\\sin ^2 \\theta}{a^2+\\sin ^2 \\theta} d \\theta d a\\\\ \\int_0^{\\frac{\\pi}{2}} \\frac{\\sin ^2 \\theta}{a^2+\\sin ^2 \\theta}d\\theta &= \\frac{\\pi}{2}-a^2 \\int_0^{\\frac{\\pi}{2}} \\frac{1}{a^2+\\sin ^2 \\theta} d \\theta \\\\&= \\frac{\\pi}{2}-a^2 \\int_0^{\\infty} \\frac{d t}{a^2+t^2\\left(1+a^2\\right)}, t=\\tan \\theta\\\\&= \\left. 
\\frac{\\pi}{2}-\\frac{a}{\\sqrt{1+a^2}}\\left[\\tan ^{-1} \\frac{t \\sqrt{1+a^2}}{a}\\right)\\right]_0^{\\infty}\\\\&= \\frac{\\pi}{2}\\left(1-\\frac{a}{\\sqrt{1+a^2}}\\right)\n\n\\end{aligned}\n\n$$\n\nIntegrating back yields\n\n$$\n\n\\int_0^1 \\cot ^{-1}(\\sqrt{x(1-x)}) d x =\\frac{\\pi}{2} \\int_0^2\\left(1-\\frac{a}{\\sqrt{1+a^2}}\\right) d a=\\frac{\\pi}{2}(3-\\sqrt{5}) \\quad \\blacksquare\n\n$$\n\nIn general, replacing the upper limit\n\n$2$\n\n by\n\n$\\frac 2k$\n\n gives\n\n$$\n\n\\begin{aligned}\n\n\\int_0^1 \\cot ^{-1}(k\\sqrt{x(1-x)}) d x & =\\int_0^{\\frac{\\pi}{2}} \\frac{\\cot ^{-1}\\left(\\frac{2 \\csc 2 \\theta}{k}\\right)}{\\csc 2 \\theta} d \\theta \\\\\n\n& =\\frac{\\pi}{2} \\int_0^{\\frac{\\pi}{2}}\\left(1-\\frac{a}{\\sqrt{1+a^2}}\\right) d a\n\n\\\\&= \\frac{\\pi}{2}\\left[a-\\sqrt{1+a^2}\\right]_0^{\\frac{2}{k}} \\end{aligned}\n\n$$\n\nNow we can conclude that\n\n$$\\boxed{\\int_0^1 \\cot ^{-1}(k\\sqrt{x(1-x)}) d x=\\frac{\\pi}{2 k}\\left(2+k-\\sqrt{k^2+4}\\right)}$$\n\nMy question\n\nIs there any other method to evaluate the interesting integral?\n\nYour comments and alternative methods are highly appreciated.", "question_owner": "Lai", "question_link": "https://math.stackexchange.com/questions/5045806/is-there-any-other-method-to-evaluate-the-interesting-integral-int-01-cot", "answer": { "answer_id": 5045820, "answer_text": "\\begin{align}\n\n&\\int_0^1 \\cot ^{-1}\\left(k\\sqrt{x(1-x)}\\right) d x\\>\\>\\>\\>\\>\\>\\>x=\\sin^2t\\\\\n\n=&\\int_0^{\\frac{\\pi}{2}} \\cot ^{-1}(k\\sin t\\cos t) \\ d( \\sin^2t) \\\\\n\n\\overset{ibp} =&\\ \\frac\\pi2-\\int_0^{\\frac{\\pi}{2}} \\frac{2k\\cos^22t}{4+k^2\\sin^2 2t}dt\n\n =\\frac\\pi2-\\frac\\pi{2k}\\left(\\sqrt{4+k^2}-2\\right)\\\\\n\n\\end{align}", "answer_owner": "Quanto", "is_accepted": false, "score": 6, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107164, "title": "Does analytic continuation preserve equality after rearranging a 
Dirichlet-type double sum when one side is made absolutely convergent by a weight?", "question_text": "Let\n\n$F(n,d)$\n\n be an arithmetic function of two variables (for example\n\n$F(n,d)$\n\n could involve\n\n$\\mu(d)$\n\n,\n\n$\\lambda(d)$\n\n, divisor functions).\n\nThen let\n\n$$\n\nA(n) := \\sum_{d \\le n} F(n,d)\n\n$$\n\nis always a\n\nfinite sum\n\n.\n\nFor even\n\n$n\\le d$\n\n But our sum ends at\n\n$d \\le n$\n\n so we don't have to care about that.\n\nNow fix a real parameter\n\n$k$\n\n and form the weighted series\n\n$$\n\nS(k) := \\sum_{n=1}^\\infty A(n)\\, n^{-k}.\n\n$$\n\nAssume that\n\n$k$\n\n is chosen so that this series\n\nconverges absolutely\n\n.\n\nBecause the finite identity holds term-by-term, the value of\n\n$S(k)$\n\n is classical and well-defined no regularization is needed.\n\nRearrangement\n\nUsing the definition of\n\n$A(n)$\n\n,\n\n$$\n\nS(k)\n\n= \\sum_{n=1}^\\infty n^{-k} \\sum_{d \\le n} F(n,d)\n\n$$\n\nand by interchanging the order of summation,\n\n$$\n\nS(k)\n\n= \\sum_{d=1}^\\infty \\sum_{n \\ge d} F(n,d)\\, n^{-k}.\n\n$$\n\nIn typical analytic-number-theory manipulations, one then rewrites part of the expression as a Dirichlet series\n\n$$\n\n\\sum_{d=1}^{\\infty} a(d)\\, d^{-s},\n\n$$\n\nor as an Euler product.\n\nHowever, that Dirichlet series may converge only for\n\n$\\Re(s) > \\sigma_0$\n\n and\n\ndiverge\n\n at the value of\n\n$s$\n\n needed in the calculation.\n\nThe usual workaround is:\n\nintroduce a regulator\n\n$s$\n\n with\n\n$\\Re(s)$\n\n large,\n\nmanipulate everything in the absolutely convergent region,\n\nevaluate the resulting expression at the desired value by\n\nanalytic continuation\n\n (or Abel summation).\n\nMy question\n\nSince\n\n$S(k)$\n\n is absolutely convergent in its original form, is it guaranteed that rearranging the sums and then evaluating the resulting Dirichlet series by analytic continuation produces the same value as the original?\n\nFormally:\n\n$$\n\n\\text{(insert regulator $s$)} 
\\;\\Rightarrow\\;\n\n\\text{(manipulate where absolutely convergent)} \\;\\Rightarrow\\;\n\n\\text{(analytically continue)}\n\n$$\n\nDoes this process always preserve the value of the initial absolutely convergent series?\n\nIf not, then when is it guaranteed equality?", "question_owner": "Treesight", "question_link": "https://math.stackexchange.com/questions/5107164/does-analytic-continuation-preserve-equality-after-rearranging-a-dirichlet-type", "answer": { "answer_id": null, "answer_text": "(Нет ответов)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4514790, "title": "Model-free ultra-local function approximation", "question_text": "I've been reading a lot about model-free control and I came across the concept of the ultra-local model. There is a really intricate approach outlined\n\nhere\n\n but I'm having an issue with one part outlined below.\n\nImage of the equations I have the question about\n\nI'm not sure how the authors went from\n\n$y+s\\frac{dy}{ds} = -\\frac{\\phi}{s^2} + \\alpha\\frac{du}{ds}$\n\n to the next equation for\n\n$\\phi$\n\n. It is mentioned in the article that it has to do with\n\n$\\int_{\\tau_{i}}^{\\tau_{f}} F(\\tau)d\\tau$\n\n and said in a conference that it is the averaging integral, but I don't see how. 
The equations are outlined in section 3.4.1 in the article.", "question_owner": "derekboase", "question_link": "https://math.stackexchange.com/questions/4514790/model-free-ultra-local-function-approximation", "answer": { "answer_id": 4527587, "answer_text": "I am considering the notation of the paper\n\nModel-Free Control of Single-Phase Boost AC/DC Converters\n\n which you mention in one the comments.\n\nFirst, note that we have the following equality\n\n$$\\int_0^T\\int_0^sz(\\theta)d\\theta ds=\\int_0^T(T-s)z(s)ds,$$\n\nwhich can be proven by integrating by parts.\n\nThe starting point is the following expression in the frequency domain\n\n$$-\\dfrac{\\phi}{s^4}=\\dfrac{1}{s^2}Y(s)+\\dfrac{1}{s}\\dfrac{d}{ds}Y(s)-\\dfrac{\\alpha}{s^2}\\dfrac{d}{ds}U(s).$$\n\nLet us consider first the terms in\n\n$Y(s)$\n\n and convert them to the time-domain using the inverse Laplace transform. We have that\n\n$$\\dfrac{1}{s^2}Y(s)\\rightarrow \\int_0^T\\int_0^\\tau y(\\theta)d\\theta d\\tau=\\int_0^T(T-\\tau)y(\\tau)d\\tau$$\n\nand\n\n$$\\dfrac{1}{s}\\dfrac{d}{ds}Y(s)\\rightarrow -\\int_0^T\\tau y(\\tau)d\\tau.$$\n\nIf we sum the two, get the term\n\n$$\\int_0^T(T-2\\tau)y(\\tau)d\\tau.$$\n\nSimilarly, the term in\n\n$U(s)$\n\n becomes\n\n$$\\dfrac{\\alpha}{s^2}\\dfrac{d}{ds}U(s)\\rightarrow -\\alpha\\int_0^T(T-\\tau)\\tau u(\\tau)d\\tau.$$\n\nTherefore, the time-domain expression of the right-hand side is given by\n\n$$\\int_0^T\\left[(T-2\\tau)y(\\tau)+\\alpha(T-\\tau)\\tau u(\\tau)\\right]d\\tau.$$\n\nNow we need to consider the term in\n\n$\\phi/s^4$\n\n and we have that\n\n$$\\dfrac{1}{s^4}\\rightarrow \\dfrac{T^3}{3!}=\\dfrac{T^3}{6}$$\n\nwhich finally yields\n\n$$-\\dfrac{T^3}{6}\\phi=\\int_0^T\\left[(T-2\\tau)y(\\tau)+\\alpha(T-\\tau)\\tau u(\\tau)\\right]d\\tau$$\n\nor, equivalently,\n\n$$\\phi=-\\dfrac{6}{T^3}\\int_0^T\\left[(T-2\\tau)y(\\tau)+\\alpha(T-\\tau)\\tau u(\\tau)\\right]d\\tau.$$", "answer_owner": "KBS", "is_accepted": true, "score": 1, "answer_link": null 
}, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5050591, "title": "A question related to continuous plane curves", "question_text": "The question that follows is related to the possible values of the index of a simple closed curve. Actually, the question that follows is equivalent to the fact that the index of a simple closed curve can only be equal to -1,0 or 1. For, if the index was n>1, then there would exist a simple closed curve\n\n$\\gamma(t)=r(t)e^{i\\theta(t)}$\n\n,\n\n$t\\in [0,1]$\n\n, with\n\n$r(0)=r(1)$\n\n and\n\n$\\theta(1)-\\theta(0)=2n\\pi$\n\n. hence the function\n\n$$f(t)=((r(t),s(t)),\\quad \\text{where}\\quad s(t)=\\frac{\\theta(t)}{2\\pi},\n\n$$\n\nwould satisfy the assumptions of the proposed question, and if the answer to the question is positive, the curve is not going to be simple.\n\nQuestion.\n\nLet\n\n$f : [0,1] \\to \\mathbb R^2,\\,$\n\n be a continuous map such that\n\n$$\n\nf(0)=(0,0) \\quad\\text{and}\\quad f(1)=(0,n),\n\n$$\n\nwhere\n\n$n\\in\\mathbb N$\n\n,\n\n$n>1$\n\n. Are there always\n\n$x_1,x_2\\in [0,1],\\,$\n\n such that\n\n$$\n\nf(x_2)-f(x_1)=(0,1)?\n\n$$\n\nThere exists a positive answer for\n\n$n=2$\n\n. See\n\nA question about continuous curves in\n\n$\\mathbb{R}^2$\n\n. However, the proof for\n\n$n=2$\n\n doesn't seem to be easily generalized for\n\n$n>2$\n\n.\n\nAlso, the answer would be negative if\n\n$n$\n\n was not an integer. (See\n\nAverage pace and horizontal chords\n\n)\n\nHints and/or references are welcome.", "question_owner": "Yiorgos S. 
Smyrlis", "question_link": "https://math.stackexchange.com/questions/5050591/a-question-related-to-continuous-plane-curves", "answer": { "answer_id": 5107101, "answer_text": "Yes, for every continuous\n\n$f:[0,1]\\to\\mathbb{R}^2$\n\n with\n\n$$\n\nf(0) = (0,0), \\qquad f(1) = (0,n), \\quad (n \\in \\mathbb{N},\\; n > 1),\n\n$$\n\nthere exist\n\n$x_1, x_2 \\in [0,1]$\n\n such that\n\n$$\n\nf(x_2) - f(x_1) = (0,1).\n\n$$\n\nSketch (topological / degree-theoretic proof)\n\nWrite\n\n$f = (u,v)$\n\n for the two coordinate functions.\n\nConsider the continuous map\n\n$$\n\nF : [0,1]^2 \\longrightarrow \\mathbb{R}^2, \\qquad\n\nF(x,y) = \\big(u(y) - u(x),\\, v(y) - v(x) - 1\\big).\n\n$$\n\nOur goal is to show that\n\n$F$\n\n has a zero with\n\n$x \\le y$\n\n; that zero gives the required pair.\n\nStep 1. Boundary behavior\n\nProject\n\n$[0,1]^2$\n\n to the\n\n$2$\n\n–torus by identifying opposite sides.\n\nOn the identified boundary we have\n\n$$\n\n\\begin{aligned}\n\nF(0,y) &= (u(y),\\, v(y) - 1),\\\\\n\nF(1,y) &= (u(y) - u(1),\\, v(y) - v(1) - 1)\n\n = (u(y),\\, v(y) - n - 1),\n\n\\end{aligned}\n\n$$\n\nand similarly for the horizontal sides.\n\nThus, after passing to the torus, the second component of\n\n$F$\n\nchanges its “vertical degree’’ by\n\n$n - 1$\n\n, while the first component\n\nhas degree\n\n$0$\n\n.\n\nStep 2. Degree argument\n\nHence the total Brouwer degree of\n\n$F$\n\n at the value\n\n$(0,0)$\n\n is\n\n$n - 1 \\neq 0$\n\n.\n\nA nonzero degree forces the preimage of\n\n$(0,0)$\n\n to be nonempty.\n\nPicking any\n\n$(x,y)$\n\n in that preimage with\n\n$x \\le y$\n\n gives\n\n$$\n\nu(y) - u(x) = 0, \\qquad v(y) - v(x) = 1,\n\n$$\n\ni.e.\n\n$f(x_2) - f(x_1) = (0,1)$\n\n.\n\nStep 3. 
Alternative formulation\n\nEquivalently, one can apply\n\nMiranda’s theorem\n\n(the multivariate intermediate value theorem)\n\nto a truncated version of\n\n$v$\n\n, ensuring the second component\n\nstays between\n\n$0$\n\n and\n\n$n$\n\n.\n\nThe opposite–sign conditions on opposite faces\n\nthen yield a zero of\n\n$F$\n\n.\n\n$$\n\n\\text{Therefore, the statement holds for every integer } n > 1.\n\n$$", "answer_owner": "Bazzas", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5107083, "title": "Which orientation of a truncated cone will drain faster?", "question_text": "A container shaped like a truncated cone (frustum) as in cylinder-like shape where one end has a larger radius than the other. Both orientations contain the same volume of water, and the outlet hole (same size) is always at the bottom. In which of these scenarios will the container empty faster: large end down, small end up or small end down, large end up? My line of reasoning is this:\n\nAccording to Torricelli's law the flow rate is\n\n$\n\nQ = a \\sqrt{2gh}\n\n$\n\nwhere (a) is the area of the outlet and (h) is the instantaneous height of the water column above the hole.\n\nThe rate of change of the water volume inside the container is\n\n$\\frac{dV}{dt} = -Q. $\n\nSince the volume depends on the water height via the container’s cross-sectional area\n\n$ A(h) $\n\n,\n\n$V(h) = \\int_0^h A(h')\\,dh'$\n\nwe can write\n\n$A(h)\\frac{dh}{dt} = -a\\sqrt{2gh}.$\n\nHence the differential equation for the water height is\n\n$\\frac{dh}{dt} = -\\,\\frac{a}{A(h)}\\sqrt{2gh}. $\n\nIntegrating from the initial height (h_0) down to (0) gives the total draining time:\n\n$t = \\int_{0}^{h_0} \\frac{A(h)}{a\\sqrt{2gh}}\\,dh.$\n\nSo according to this it would stand that the one with the larger radius in the bottom empties faster. 
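The drain-time integral above can be evaluated for one concrete frustum, filled to the brim in both orientations so the water volumes match (the dimensions, the outlet area, and the substitution $h = u^2$, which removes the $1/\sqrt{h}$ singularity, are my own choices):

```python
import math

def drain_time(r_bottom, r_top, H, a=1e-4, g=9.81, steps=100_000):
    """t = integral_0^H A(h)/(a*sqrt(2*g*h)) dh for a frustum whose radius
    varies linearly from r_bottom (at the outlet) to r_top (at the surface).
    Substituting h = u**2 turns the integrand into 2*pi*r(u**2)**2/(a*sqrt(2*g)),
    which is bounded, so a plain midpoint rule suffices."""
    U = math.sqrt(H)
    du = U / steps
    c = 2.0 / (a * math.sqrt(2.0 * g))
    t = 0.0
    for i in range(steps):
        u = (i + 0.5) * du
        h = u * u
        r = r_bottom + (r_top - r_bottom) * h / H  # linear radius profile
        t += c * math.pi * r * r * du
    return t

t_large_down = drain_time(r_bottom=2.0, r_top=1.0, H=1.0)  # wide end at the outlet
t_small_down = drain_time(r_bottom=1.0, r_top=2.0, H=1.0)  # narrow end at the outlet
```

For these dimensions the integrals work out exactly to $\frac{86\pi}{15\,a\sqrt{2g}}$ (wide end down) and $\frac{56\pi}{15\,a\sqrt{2g}}$ (narrow end down), so in this particular full-container example the small-end-down orientation drains faster: the $1/\sqrt{h}$ weight penalizes a large cross-section near the outlet.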
Is this accurate?", "question_owner": "jimbrr", "question_link": "https://math.stackexchange.com/questions/5107083/which-orientation-of-a-truncated-cone-will-drain-faster", "answer": { "answer_id": null, "answer_text": "(Нет ответов)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 3311902, "title": "True or false limit questions with explanation", "question_text": "A) If 𝑓(𝑥) = 𝑔(𝑥) when 𝑥 ≠ 𝑎 then lim\n\n𝑥→𝑎\n\n𝑓(𝑥) = lim\n\n𝑥→𝑎\n\n𝑔(𝑥), provided the limits\n\nexist\n\nB) If lim\n\n𝑥→5\n\n𝑓(𝑥) = 0 and lim\n\n𝑥→5\n\n𝑔(𝑥) = 0 then lim\n\n𝑥→5\n\n𝑓(𝑥)/𝑔(𝑥) does not exist\n\nRespectively my answers are true and then false, but I don't believe I am correct.\n\nI'm fairly new to limits and am struggling with the concept, a lil help would be nice, thank you!", "question_owner": "dingusteve", "question_link": "https://math.stackexchange.com/questions/3311902/true-or-false-limit-questions-with-explanation", "answer": { "answer_id": 3311931, "answer_text": "B is false because when both\n\n$f$\n\n and\n\n$g$\n\n have limit 0, L'Hospital's Rule can be applied to\n\n$f/g$\n\n. For example,\n\n$\\sin x$\n\n and\n\n$x$\n\n both go to 0 as\n\n$x \\rightarrow 0$\n\n, but by L'Hospital's Rule,\n\n$\\sin x/x$\n\n goes to 1 as\n\n$x \\rightarrow 0$\n\n. For A, suppose\n\n$\\lim_{x \\rightarrow a} f(x) = F$\n\n and\n\n$\\lim_{x_ \\rightarrow a} g(x) = G$\n\n. By definition, given\n\n$\\epsilon > 0$\n\n, there is\n\n$\\delta_f > 0$\n\n (likewise,\n\n$\\delta_g > 0$\n\n) such that when\n\n$0 < |x - a| < \\delta_f$\n\n (likewise,\n\n$0 < |x - a| < \\delta_g$\n\n), we have\n\n$|f(x) - F| < \\epsilon/2$\n\n (likewise,\n\n$|g(x) - G| < \\epsilon/2$\n\n). Set\n\n$\\delta = \\min \\lbrace \\delta_{f}, \\delta_{g} \\rbrace$\n\n, then both inequalities are satisfied when\n\n$0 < |x - a| < \\delta$\n\n. 
Since\n\n$f(x) = g(x)$\n\n for all\n\n$x \\not = a$\n\n,\n\n$|f(x) - F - [g(x) - G]| = |f(x) - g(x) - F + G| = |G - F|$\n\n for all such\n\n$x$\n\n. Now for\n\n$0 < |x - a| < \\delta$\n\n,\n\n$|f(x) - F - g(x) - G| \\leq |f(x) - F| + |g(x) - G| < \\epsilon/2 + \\epsilon/2 = \\epsilon$\n\n (by the triangle inequality). Combining these two results yields\n\n$|G - F| < \\epsilon$\n\n. Since the choice of\n\n$\\epsilon$\n\n is arbitrary, we must have\n\n$|G - F| = 0$\n\n, or\n\n$F = G$\n\n.", "answer_owner": "Amy Ngo", "is_accepted": false, "score": -2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106767, "title": "Closed form for $\\sum_{k=1}^{n-1} k \\cot\\left(\\frac{\\pi k}{n}\\right)$ and its generalizations", "question_text": "I’ve been looking at the sum\n\n$$\n\nS(n) = \\sum_{k=1}^{n-1} k \\cot\\left(\\frac{\\pi k}{n}\\right),\n\n$$\n\nand after some manipulations, I arrived at the following explicit (though somewhat complicated) expression:\n\n$$\n\nS(n) = \\frac{n(n-1)}{\\pi} - \\frac{2n}{\\pi} \\sum_{m=1}^{\\infty} \\frac{\\zeta(2m)}{2m+1} \\cdot \\frac{B_{2m+1}(n) - B_{2m+1}}{n^{2m}}\n\n$$\n\nWhile this is correct, it looks quite cumbersome. Looking at the first few values for small\n\n$n$\n\n, it seems there\n\nmust exist a much simpler closed form\n\n for this sum and it might not seem useful at first glance but the advantage of writing it this way allows us to plug in any value for n. So it extends the discrete sum in a meaningful way perhaps, I didn't check by plugging in values for real n...\n\nMy question:\n\nDoes anyone know a\n\nnice closed-form expression\n\n for\n\n$S(n)$\n\n?\n\nIs there a known\n\nauxiliary formula\n\n for the more general sum\n\n$$\n\nS(n) = \\sum_{k=1}^{n-1} k^{p} \\cot\\left(\\frac{\\pi k}{n}\\right),\n\n$$\n\nfor integer powers\n\n$p \\ge 1$\n\n? 
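Direct numerical evaluation is a useful sanity check on any proposed closed form for these sums; for instance $S(4) = \cot\frac{\pi}{4} + 2\cot\frac{\pi}{2} + 3\cot\frac{3\pi}{4} = 1 + 0 - 3 = -2$ can be done by hand, and the nested-radical value for $S_{16}$ quoted in the answer below checks out to machine precision. A minimal sketch (function name is mine):

```python
import math

def S(n, p=1):
    # S(n) = sum_{k=1}^{n-1} k**p * cot(pi*k/n), with cot(x) = 1/tan(x)
    return sum(k ** p / math.tan(math.pi * k / n) for k in range(1, n))

s4 = S(4)  # hand check: 1*1 + 2*0 + 3*(-1) = -2
```

For $p = 0$ the sum vanishes by the symmetry $\cot\big(\pi(n-k)/n\big) = -\cot(\pi k/n)$, which is a convenient extra check.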
Any references, tricks, or generating-function approaches would be highly appreciated.\n\nThanks in advance", "question_owner": "Treesight", "question_link": "https://math.stackexchange.com/questions/5106767/closed-form-for-sum-k-1n-1-k-cot-left-frac-pi-kn-right-and-its-ge", "answer": { "answer_id": 5106916, "answer_text": "background (the\n\n$p=0$\n\n case): we can use\n\n$\\cot(\\pi x)=\\frac1\\pi\\frac d{dx}\\log\\sin(\\pi x)$\n\n, and logarithmically differentiate\n\n$$\\prod_{k=0}^{n-1}\\sin\\left(\\pi\\left(x+\\frac kn\\right)\\right)=\\frac{\\sin(\\pi nx)}{2^{n-1}}$$\n\nwhich comes from using Euler's reflection formula on Gauss's multiplication theorem, to get\n\n$$\\sum_{k=0}^{n-1}\\cot\\left(\\pi\\left(x+\\frac kn\\right)\\right)=n\\cot(\\pi nx)$$\n\nthe reflection formula also gives the representation\n\n$\\cot(\\pi n)=\\frac{\\psi(1-n)-\\psi(n)}\\pi$\n\n, which turns the sum into\n\n$$\\frac1\\pi\\sum_{k=1}^{n-1}(n-2k)\\,\\psi\\!\\left(\\frac kn\\right)$$\n\nthe first part can be handled by\n\n$-\\sum_{k=1}^{n-1}\\psi\\!\\left(\\frac kn\\right)=n\\log n+(n-1)\\gamma$\n\n, so\n\n$$S(n)=\\frac{-1}\\pi\\left(n(n\\log n+(n-1)\\gamma)+2\\sum_{k=1}^{n-1}k\\,\\psi\\!\\left(\\frac kn\\right)\\right)$$\n\nbut the remaining sum still seems difficult to handle; via\n\n$(x-1)\\psi(x)=\\frac d{dx}\\log G(x)+x-\\frac{1+\\log\\tau}2$\n\n, we can rewrite it in terms of something looking like the multiplication theorem for the Barnes G function (the shifted hyperfactorial), but the only actual multiplication theorem for that I could find is\n\nthe double product in Wikipedia\n\n.\n\nI can't find a pattern in the minimal polynomials either, but I have found one that breaks down; the first case after\n\n$S_4=-2$\n\n in which the minimal polynomial has nonzero odd-power terms (ie. 
not given directly as the sqrt of another constant with half the degree) is\n\n$S_{16}=-8(3+4(\\sqrt2+\\sqrt{2+\\sqrt2}))$\n\n, with minpoly\n\n$x^4+96x^3-4736x^2-75776x-192512$\n\n.\n\nCuriously, it seems the degree of the minpoly equals that of\n\n$\\cot\\left(\\frac\\pi n\\right)$\n\n (OEIS entry\n\nA089929\n\n(n)).", "answer_owner": "DroneBetter", "is_accepted": false, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106791, "title": "what is $\\lim_{n \\to \\infty} \\frac{1}{n} \\sum_{k=1}^n \\frac{\\log(n - k + 2)}{\\log(k + 1)}$", "question_text": "I'm trying\n\n$$\\lim_{n \\to \\infty} \\frac{1}{n} \\sum_{k=1}^n \\frac{\\log(n - k + 2)}{\\log(k + 1)}$$\n\nLetting\n\n$x = k/n$\n\n, term is\n\n$$\\frac{\\log n + \\log(1 - x + 2/n)}{\\log n + \\log(x + 1/n)} = \\frac{1 + o(1)}{1 + o(1)} \\to 1$$\n\npointwise, suggesting the limit is\n\n$\\int_0^1 1 \\, dx = 1$\n\n.\n\nIs the limit indeed 1? If not, how to rigorously pass to the integral, given the\n\n$1/n$\n\n inside the logs? 
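The average in question can be computed directly for moderate $n$ (a quick empirical check, with my own function name): the values stay at or above $1$, by AM–GM applied to the $k \leftrightarrow n+1-k$ pairing $x + 1/x \ge 2$, and they creep down toward $1$ only logarithmically slowly.

```python
import math

def a(n):
    # a_n = (1/n) * sum_{k=1}^n log(n-k+2)/log(k+1)
    return sum(math.log(n - k + 2) / math.log(k + 1) for k in range(1, n + 1)) / n

vals = {n: a(n) for n in (10, 100, 1000, 10000)}
```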
(Any base >1, it cancels.)", "question_owner": "LokeP", "question_link": "https://math.stackexchange.com/questions/5106791/what-is-lim-n-to-infty-frac1n-sum-k-1n-frac-logn-k-2-lo", "answer": { "answer_id": 5106824, "answer_text": "Let\n\n$$a_n=\\frac{1}{n}\\sum_{k=1}^n \\frac{\\log(n-k+2)}{\\log(k+1)},$$\n\nor equivalently\n\n$$a_n=\\frac{1}{n}\\sum_{k=1}^n\\frac{\\log(k+1)}{\\log(n-k+2)}.$$\n\nSo\n\n$$a_n=\\frac1{2n}\\sum_{k=1}^n\\left[\\frac{\\log(n-k+2)}{\\log(k+1)}+\\frac{\\log(k+1)}{\\log(n-k+2)}\\right]\n\n\\geq\\frac1{2n}\\sum_{k=1}^n2=1.$$\n\nOn the other hand,\n\n\\begin{align*}\n\na_n\n\n&=\\frac{1}{n}\\sum_{1\\leq k\\leq n/2}\\frac{\\log(n-k+2)}{\\log(k+1)}\n\n+\\frac{1}{n}\\sum_{n/2 0,\\, \\text{Re}\\, \\alpha> 0.$$\n\nI am looking for the value of the integral\n\n$$I:= \\int_{0}^\\infty (x+1)^p\\, e^{-ax}\\, x^\\alpha\\, L^{\\alpha}_m(x)\\, dx =?; \\quad a> 0,\\,p\\in\\mathbb Z.$$\n\nThe Laguerre polynomial of type\n\n$\\alpha$\n\n and degree\n\n$m$\n\n is given by\n\n$$L^{\\alpha}_m(x)=\\sum_{k=0}^{m}(-1)^k \\binom{m+\\alpha}{m-k} \\frac{x^k}{k!}.$$\n\nThen, by this definition, we get\n\n$$I:= \\sum_{k=0}^{m} \\frac{(-1)^k}{k!} \\binom{m+\\alpha}{m-k} \\int_{0}^\\infty (x+1)^p\\, e^{-ax}\\, x^{\\alpha+k}\\, dx$$\n\nSo to calculate I, it is enough to calculate the following integral:\n\n$$J:=\\int_{0}^\\infty (x+1)^p\\, e^{-ax}\\, x^{\\alpha+k}\\, dx=?$$\n\nAccording to p.277 in \"Magnus, W., Oberhettinger, F., Soni, R.: Formulas and Theorems for the Special Functions of Mathematical Physics, Grundlehren der mathematischen Wissenschaften 52, Springer, Berlin et al. 
(1966)\", we have,\n\n$$U(\\gamma,c,a)=\\frac{1}{\\Gamma(\\gamma)}\\int_{0}^\\infty (1+x)^{c-\\gamma-1}\\, e^{-ax}\\, x^{\\gamma-1}\\, dx; \\quad \\text{Re}\\, \\gamma> 0,\\, \\text{Re}\\,a> 0.$$\n\nThen for\n\n$\\gamma=\\alpha+k+1$\n\n and\n\n$c=\\alpha+p+k+2$\n\n, we get\n\n$$J:=\\Gamma(\\alpha+k+1) U(\\alpha+k+1,\\alpha+p+k+2,a).$$\n\nThen,\n\n$$I:= \\sum_{k=0}^{m} \\frac{(-1)^k}{k!} \\binom{m+\\alpha}{m-k} \\Gamma(\\alpha+k+1) U(\\alpha+k+1,\\alpha+p+k+2,a).$$", "question_owner": "Z. Alfata", "question_link": "https://math.stackexchange.com/questions/5106737/integral-transform-of-l-alpha-mx", "answer": { "answer_id": 5106886, "answer_text": "The integral\n\n$J$\n\n is entry\n\n3.384.3\n\n in\n\nGradshteyn and Ryzhik, Table of Integrals, Series, and Products (8th Edition)\n\n and gives\n\n$$\\int_0^\\infty\\mathrm{d}x\\,(x+1)^px^{\\alpha+k}\\mathrm{e}^{-ax}=a^{-\\frac{\\alpha+k+p}{2}-1}\\mathrm{e}^{\\frac{a}{2}}\\Gamma(\\alpha+k+1)W_{\\frac{p-\\alpha-k}{2},\\frac{p+\\alpha+k+1}{2}}(a)$$\n\nwhere\n\n$W_{\\nu,\\mu}(x)$\n\n is the second Whittaker function. I personally don't have any hope the resulting sum has a closed form tho.\n\nEdit: apparently we are not in the valid regime for the parameters, so idk", "answer_owner": "Caesar.tcl", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106867, "title": "Show a continuous function over $\\mathbb{R}$ is zero", "question_text": "I've been working with the following exercise:\n\nLet\n\n$f(x)\\in C(\\mathbb{R})$\n\n and for all\n\n$a,b\\in\\mathbb{R}$\n\n, it holds\n\n$$\n\nf(a)+f(b)\\ge\\int_a^b f^2(x)\\mathrm{d}x.\n\n$$\n\nShow\n\n$f\\equiv 0$\n\n.\n\nMy attempts so far:\n\nBy letting\n\n$a=b=x$\n\n we have\n\n$2f(x)\\ge 0$\n\n, hence\n\n$f\\ge 0$\n\n. Then I wish to show\n\n$\\inf f=0$\n\n. Suppose by contradiction, say\n\n$\\inf f=m>0$\n\n, then\n\n$m^2(b-a)\\le\\int_a^b f^2\\le f(a)+f(b)$\n\n. 
Now if\n\n$f$\n\n is bounded, let\n\n$|f|\le M$\n\n, it holds\n\n$m^2(b-a)\le 2M$\n\n, a contradiction. However, I am not guaranteed that\n\n$f$\n\n is bounded, and even if I can show\n\n$\inf f=0$\n\n, I cannot proceed any further.\n\nCan someone offer me some suggestions? Thanks in advance.", "question_owner": "MathLearner", "question_link": "https://math.stackexchange.com/questions/5106867/show-a-continuous-function-over-mathbbr-is-zero", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 842176, "title": "How to integrate: $\int_{0}^{\infty}e^{tx}(x^2e^{-x})/2dx$", "question_text": "I'm working on a few moment generating function problems and I came across:\n\n$f(x)=(x^2e^{-x})/2$ for $x>0$, and zero otherwise.\n\nFind the mean. The mean is the same as the expected value. If we find the moment generating function, $M_x(t)$, of $f(x)$ then we can take the first derivative of $M_x(t)$ at $t=0$. 
This will give us the mean.\n\nTo find the $M_x(t)$ we take $$\\int_{-\\infty}^{\\infty}e^{tx}f(x)dx$$\n\n$$\\int_{0}^{\\infty}e^{tx}(x^2e^{-x})/2dx$$\n\nI wrote this as:\n\n$$\\frac12\\int_{0}^{\\infty}x^{2}e^{x(t-1)}dx$$\n\nI'm a bit rusty on integration and if someone could help point me in the right direction as to how to tackle this guy I would greatly appreciate it!", "question_owner": "Vincent", "question_link": "https://math.stackexchange.com/questions/842176/how-to-integrate-int-0-inftyetxx2e-x-2dx", "answer": { "answer_id": 842183, "answer_text": "Integrate by parts two times:\n\n\\begin{aligned}\n\nI & = \\frac{1}{2}\\int_0^\\infty x^2e^{x(t-1)}dx = \\\\\n\n & = \\frac{1}{2}\\left[x^2\\frac{e^{x(t-1)}}{t-1}\\right]^{x=\\infty}_{x=0} - \\frac{1}{2}\\int_0^\\infty 2x\\frac{e^{x(t-1)}}{t-1}dx = \\\\\n\n & = -\\frac{1}{2}\\left[2x\\frac{e^{x(t-1)}}{(t-1)^2}\\right]^{x=\\infty}_{x=0} + \\frac{1}{2}\\int_0^\\infty 2\\frac{e^{x(t-1)}}{(t-1)^2}dx = \\\\\n\n & = \\frac{1}{2}\\left[2\\frac{e^{x(t-1)}}{(t-1)^3}\\right]^{x=\\infty}_{x=0} = \\\\\n\n & = -\\frac{1}{(t-1)^3}\n\n\\end{aligned}\n\nNote that $t<1$ is required for convergence.", "answer_owner": "fromGiants", "is_accepted": true, "score": 5, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106793, "title": "Minimizing $f(v_1,v_2)=\\frac{1}{2} m_1 v_1^2 +\\frac{1}{2} m_2 v_2^2$ over $m_1 v_1 + m_2 v_2 = p_i$", "question_text": "How do I minimize\n\n$$f(v_1,v_2)=\\frac{1}{2} m_1 v_1^2 +\\frac{1}{2} m_2 v_2^2$$\n\nwhen\n\n$$m_1 v_1 + m_2 v_2 = p_i?$$\n\nI want a solution in terms of\n\n$p_i , m_1$\n\n and\n\n$m_2$\n\n.\n\nI am trying to show that an inelastic collision is perfectly inelastic, as in the most energy is lost, when the two objects stick. I know the solution can be achieved with some multivariable calculus but I forgot how to do it and any help would be appreciated. 
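A quick numeric check of the constrained minimum (the sampling and names are mine): eliminating $v_2 = (p_i - m_1 v_1)/m_2$ and minimizing the resulting quadratic in $v_1$ gives $v_1 = v_2 = p_i/(m_1+m_2)$ with minimum energy $p_i^2/(2(m_1+m_2))$, matching the guess that the objects move together.

```python
def energy(v1, m1, m2, p):
    # kinetic energy along the constraint m1*v1 + m2*v2 = p
    v2 = (p - m1 * v1) / m2
    return 0.5 * m1 * v1 ** 2 + 0.5 * m2 * v2 ** 2

m1, m2, p = 2.0, 3.0, 7.0
v_common = p / (m1 + m2)              # candidate minimizer: equal velocities
E_min = p ** 2 / (2 * (m1 + m2))      # claimed minimum energy

# sample the constraint line; every sampled energy should be >= E_min
samples = [energy(v_common + 0.1 * i, m1, m2, p) for i in range(-50, 51)]
```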
I assume, because of the context of the problem, that the solution is something like\n\n$v_1 = v_2$\n\n.", "question_owner": "d ds", "question_link": "https://math.stackexchange.com/questions/5106793/minimizing-fv-1-v-2-frac12-m-1-v-12-frac12-m-2-v-22-over-m-1-v", "answer": { "answer_id": 5106798, "answer_text": "The hint given by @TedShifrin is sufficient. Since this is a physics question, there is a more physics way of thinking:\n\n$$\n\n\\begin{aligned}\n\nf(v_1,v_2)&=\\frac{1}{2}m_1{v_1}^2+\\frac{1}{2}m_2{v_2}^2\\\\\n\n&=\\frac{1}{2}(m_1+m_2)\\left(\\frac{m_1v_1+m_2v_2}{m_1+m_2}\\right)^2+\\frac{1}{2}\\frac{m_1m_2}{m_1+m_2}(v_1-v_2)^2\\\\\n\n&=\\frac{{p_i}^2}{2(m_1+m_2)}+\\frac{1}{2}\\mu(v_1-v_2)^2\\\\\n\n&\\geqslant\\frac{{p_i}^2}{2(m_1+m_2)}\n\n\\end{aligned}\n\n$$\n\nYou might be reading a physics textbook, then you will soon see how this works.", "answer_owner": "JC Q", "is_accepted": false, "score": 4, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105829, "title": "Closed form for $I_n = \\int_0^{\\pi} \\frac{x \\sin(nx)}{1 - \\cos x}\\, dx$", "question_text": "I recently constructed the following integral for natural numbers\n\n$n$\n\n, which led me to an interesting discovery.\n\n$$I_n = \\int_0^{\\pi} \\frac{x \\sin(nx)}{1 - \\cos x}dx $$\n\nUsing online tools, I obtained the following sequence:\n\n$$I_1 = \\pi \\ln 4$$\n\n$$I_2 = 2\\pi \\ln 4 - 2\\pi$$\n\n$$I_3 = 3\\pi \\ln 4 - 3\\pi$$\n\n$$I_4 = 4\\pi \\ln 4 - \\tfrac{14\\pi}{3}$$\n\n$$I_5 = 5\\pi \\ln 4 - \\tfrac{35\\pi}{6}$$\n\n$$I_6 = 6\\pi \\ln 4 - \\tfrac{37\\pi}{5}$$\n\n$$I_7 = 7\\pi \\ln 4 - \\tfrac{259\\pi}{30}$$\n\n$$I_8 = 8\\pi \\ln 4 - \\tfrac{1066\\pi}{105}$$\n\n$$I_9 = 9\\pi \\ln 4 - \\tfrac{1599\\pi}{140}$$\n\n$$I_{10} = 10\\pi \\ln 4 - \\tfrac{1627\\pi}{126}$$\n\nSo empirically, I predict:\n\n$$I_n=\\pi \\left(n\\ln4-a_n\\right)$$\n\nI attempted the following,\n\n$$\\frac{1}{1-\\cos x}=\\sum_{m=0}^{\\infty} (\\cos 
x)^m$$\n\n$$I_n=\\sum_{m=0}^{\\infty} \\int_0^\\pi x\\,\\sin(nx)\\,\\cos^m xdx$$\n\nand then proceeded with complex exponentials and binomial expansion, but it quickly turned into an algebraic mess, which brought me here.\n\nAny help in deriving the closed form for\n\n$a_n$\n\n would be helpful. Thank you!\n\nEDIT:\n\nMy second attempt:\n\n$$I_n = \\int_0^\\pi x \\sin(nx) \\, d\\!\\left(-\\cot\\frac{x}{2}\\right)$$\n\nIntegrating by parts:\n\n$$I_n = \\left[-x \\sin(nx) \\cot\\frac{x}{2}\\right]_0^\\pi\n\n+ \\int_0^\\pi \\cot\\frac{x}{2} \\, d(x \\sin(nx))$$\n\n$$I_n = \\int_0^\\pi \\cot\\frac{x}{2} \\, (\\sin(nx) + n x \\cos(nx))dx$$\n\nNow,\n\n$$1 + 2\\cos x + 2\\cos 2x + \\cdots + 2\\cos nx\n\n= \\frac{\\sin\\left((n+\\frac{1}{2})x\\right)}{\\sin\\frac{x}{2}}$$\n\n$$= \\sin(nx)\\cot\\frac{x}{2} + \\cos(nx)$$\n\n$$\\therefore \\int_0^\\pi \\cot\\frac{x}{2} \\sin(nx)dx\n\n= \\int_0^\\pi (1 + 2\\cos x + 2\\cos 2x + \\cdots + \\cos(nx)) dx= \\pi$$\n\n$$I_n = π+ n\\int_0^\\pi x\\cot\\frac{x}{2} \\cos(nx)dx$$", "question_owner": "Rishit Garg", "question_link": "https://math.stackexchange.com/questions/5105829/closed-form-for-i-n-int-0-pi-fracx-sinnx1-cos-x-dx", "answer": { "answer_id": 5105837, "answer_text": "Let,\n\n$I_n = \\displaystyle\\int_0^{\\pi} \\frac{x \\sin(nx)}{1 - \\cos x}\\, dx$\n\n.\n\nThen we have\n\n$I_{n+2} + I_n = \\displaystyle\\int_0^{\\pi} \\frac{x }{1 - \\cos x}(\\sin((n+2)x) + \\sin(nx)) dx\n\n\\\\= \\displaystyle\\int_0^{\\pi} \\frac{x }{1 - \\cos x}2\\cos(x) \\sin((n+1)x) dx.$\n\nNow\n\n$I_{n+1} - \\dfrac{I_{n+2} + I_n}{2} = \\displaystyle\\int_0^{\\pi} \\bigg[\\frac{x \\sin((n+1)x)}{1 - \\cos x} - \\frac{x }{1 - \\cos x}\\cos(x) \\sin((n+1)x) \\bigg]dx\n\n\\\\= \\displaystyle\\int_0^{\\pi} x \\sin((n+1)x) dx = \\dfrac{\\pi(-1)^{n}}{n+1}.$\n\nWhich shows that,\n\n$I_{n+2} + I_n - 2I_{n+1} = \\dfrac{2\\pi(-1)^{n+1}}{n+1}$\n\n, for all\n\n$n \\in \\mathbb{N}.$\n\nAs @Martin R pointed out, let\n\n$J_n = I_{n+1} - I_n$\n\n, then\n\n$J_{n+1} - J_n 
= \\dfrac{2\\pi(-1)^{n+1}}{n+1}$\n\n, which implies:\n\n$$J_{n} = I_{n+1} - I_{n} = J_1 + 2\\pi\\displaystyle \\sum_{k=1}^{n-1}\\frac{(-1)^{k+1}}{k+1}\n\n\\\\ \\implies I_{n+1} - I_{n} = \\pi\\ln4 -2\\pi + 2\\pi\\displaystyle \\sum_{k=1}^{n-1}\\frac{(-1)^{k+1}}{k+1}\n\n\\\\ \\implies I_{n+1} - I_{n} = \\pi\\ln4 - 2\\pi\\displaystyle \\sum_{k=1}^{n}\\frac{(-1)^{k}}{k}\n\n\\\\ \\implies I_{n} = (n-1)\\pi \\ln4 + I_1 - 2\\pi\\displaystyle \\sum_{l=1}^{n-1}\\displaystyle \\sum_{k=1}^{l}\\frac{(-1)^{k}}{k}\n\n\\\\ \\implies I_{n} = n\\pi \\ln4 - 2\\pi\\bigg(n\\displaystyle \\sum_{k=1}^{n-1}\\frac{(-1)^{k-1}}{k} - \\frac{1+(-1)^{n}}{2}\\bigg)\n\n\\\\ \\implies I_{n} = \\pi\\bigg(n \\ln4 + 2n\\displaystyle \\sum_{k=1}^{n-1}\\frac{(-1)^{k-1}}{k} + 1+(-1)^n\\bigg)\n\n\\\\ \\implies I_{n} = \\pi\\bigg(n \\ln4 + 1+(-1)^n - 2n\\displaystyle \\sum_{k=1}^{n-1}\\frac{(-1)^{k-1}}{k}\\bigg).$$", "answer_owner": "Afntu", "is_accepted": true, "score": 20, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 2390557, "title": "Could a version of probability theory be made rigorous with only calculus?", "question_text": "I am wondering if one could, or if there has been built a version of probability theory that could exist rigorous on its own without real analysis and measure theory?\n\nThe motivation for this quesiton is that in almost every introductory text in mathematical statistics I've seen, they only rely on calculus as a prerequisite. Could one only rely on calculus and multivariate calculus to do this rigorous?\n\nIn introductory texts random variables are described as random functions, but their characteristics is their pmf or pdf, would this be enough?\n\nSo if we only restrict ourselves to what is called discrete or continuous random variables(or vectors) in introductory statistics text, is calculus and multivariate calculus enough to get the theory to stand on its own? 
or will we need measure theory?\n\nIf it is the case that this is impossible to do without measure theory, does this mean that most introductory texts in statistics are wrong? Since they base everything on Riemann-integrals, but the theory is wrong?\n\nUPDATE: From the discussion in the comments we see that by the definition of a probability space we need it to be a measure, and we also need the events to be a sigma-algebra. Hence we need these two concepts from measure theory. So I should have asked the question like this instead: Will the theory of continuous random variables in introductory texts work out if we only allow Riemann-integration and not Lebesgue-integration? Is the integration-theory in introductory texts in calculus which only deals with Riemann-integration enough to make sure that the theory is rigorous?", "question_owner": "user119615", "question_link": "https://math.stackexchange.com/questions/2390557/could-a-version-of-probability-theory-be-made-rigorous-with-only-calculus", "answer": { "answer_id": 2411893, "answer_text": "Measure theory is nice because it gives us a one-size fits all approach to probability theory. For example, if $D$ is a probability distribution on the real line, then the expectation of $D$ is, by definition, $$\\mathbf{E}(D) = \\int_D (x \\in \\mathbb{R} \\mapsto x),$$ where the right hand side means the Lebesgue integral of the function $\\mathbb{R} \\rightarrow \\mathbb{R}$ given by $x \\mapsto x$ with respect to measure $D$. This is normally denoted $\\int_{x \\in \\mathbb{R}}xD(dx),$ which looks a bit strange to my eyes. Anyway...\n\nWe recover the various formulae that are actually used to perform computations of the expectation as special cases of the above formula. 
For example, if $D$ is the discrete distribution corresponding to a probability mass function $p$, then the above formula simplifies to $$\mathbf{E}(D) = \sum_{x \in \mathbb{R}} xp(x).$$ On the other hand, if $D$ is the continuous distribution corresponding to a probability density function $f$, then it simplifies to: $$\mathbf{E}(D) = \int_{x \in \mathbb{R}} xf(x).$$\n\nOf course, $D$ might have a discrete and a continuous part, in which case we have: $$\mathbf{E}(D) = \sum_{x \in \mathbb{R}} xp(x)+\int_{x \in \mathbb{R}} xf(x).$$\n\nIf that's all you care about, then in principle you can just write the above formula as a definition and be done with it. However, in my opinion, it's much more satisfying to derive the above formulae. In particular, writing $H_n$ for the $n$-dimensional Hausdorff measure on the real line, it turns out that the discrete case is $D = p \cdot H_0$ and the continuous case is $D = f \cdot H_1$. The mixed case is $D = p\cdot H_0 + f \cdot H_1$. Therefore, we can derive the above formula as follows:\n\n$$\mathbf{E}(p \cdot H_0+f\cdot H_1) = \int_{p \cdot H_0+f\cdot H_1} x = \int_{p \cdot H_0} x + \int_{f \cdot H_1} x$$\n\n$$= \int_{H_0} xp(x) + \int_{H_1} xf(x) = \sum_{x \in \mathbb{R}} xp(x)+\int_{x \in \mathbb{R}} xf(x)$$\n\nOnce you've seen this kind of reasoning, it becomes obvious how to generalize; just involve summands involving $H_n$ for $0 < n < 1$. So the measure theoretic approach is really much more general, and imo much more satisfying.\n\nAnd that's just on the real line. Imagine you're working in real $3$-space. The measure $H_0$ lets you describe the probabilistic analogue of \"point charges\", and the measure $H_3$ lets you describe the probabilistic analogue of \"charge densities.\" Okay, but what if you want your outcomes to be randomly distributed along a wire coiling through space? In that context, $H_1$ comes to the rescue.
So even if you're not interested in those weird distributions arising from $H_n$ via non-integral values of $n$, nonetheless the added generality is still pretty useful.\n\nBy the way, you can't get too far in calculus without the Dirac delta function, which can be elegantly thought of as a measure, or better yet, a\n\ndistribution\n\n. Indeed, distributions were invented because at some point while doing calculus, you realize that functions aren't general enough to do what you need them to do. This eventually leads to the theory of\n\n$k$-currents\n\n, which, among other things, provides an elegant reinterpretation of the fundamental theorem of calculus. In short, you'll end up doing measure theory and distribution theory one way or another.", "answer_owner": "goblin GONE", "is_accepted": false, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 3348532, "title": "Is this double integral over D positive, negative, or zero? R is the region inside the unit circle centered at the origin, R is its right half", "question_text": "I've some confusions about this textbook question:\n\nFirstly, isn't this question describing a unit sphere (and not a unit circle)? Because I don't see how a 2D circle could have both a \"right\" and a \"bottom\" half...\n\nAssuming R is the unit sphere, then I can at least visualize the regions D and B. But the questions still make no sense to me. This chapter covered functions of two variables, yet every question here is integrating a function of a single variable...\n\nI tried graphing, for example,\n\n$y^3+y^5$\n\n using GeoGebra, but obviously nothing comes up since it doesn't make sense to graph a 2D function in 3D.\n\nQuestion 9 makes no sense in a different way. If R is a sphere, then it doesn't make sense to think about whether a function is positive or negative (i.e. 
above or below) with respect to it.\n\nI clearly have some very significant misunderstandings here and I think I'm just missing a couple key points. Even if you can't clear up all my misunderstandings, any help whatsoever is greatly appreciated!", "question_owner": "James Ronald", "question_link": "https://math.stackexchange.com/questions/3348532/is-this-double-integral-over-d-positive-negative-or-zero-r-is-the-region-insi", "answer": { "answer_id": 3349165, "answer_text": "As Joe has already mentioned in the comments, this is about the unit circle in\n\n$\\Bbb R^2$\n\n. The Right half is where\n\n$x \\ge 0$\n\n, the Bottom half is where\n\n$y \\le 0$\n\n.\n\nActually, the first one is integrating a function of no variables, but I'm betting you didn't have a problem with that. You know it just means a function\n\n$f(x, y) = 1$\n\n for all\n\n$x$\n\n and\n\n$y$\n\n. So even though no variables are seen in it, we can still consider it a function of\n\n$x$\n\n and\n\n$y$\n\n. The same thing is true about all the rest. Problems 9 and 10 are about the function\n\n$f(x,y) = 5x$\n\n for all\n\n$x, y$\n\n. Problems 11 and 12 are about the function\n\n$f(x,y) = y^3 + y^5$\n\n for all\n\n$x,y$\n\n. They may be constant with respect to one of the variables, but we can still consider that a function of two variables.\n\nYou didn't set it up right in GeoGebra. It should graph a surface which is flat in one direction and follows the 5th degree polynomial in the other.\n\n$R$\n\n is not a sphere. Of course, this partly because of the misunderstanding of (1). But\n\n$R$\n\n is not even a circle. It is a half-circle. Problem 9 says nothing about whether the\n\nfunction\n\n is positive or negative. It asks whether the\n\nintegral\n\n is positive or negative, or zero. And by the\n\nintegral\n\n, they mean the value that you get upon performing the integration. Recall that integration is by definition a method for producing a particular number from the function and the set\n\n$R$\n\n. 
They want you to tell what the sign of that number will be.", "answer_owner": "Paul Sinclair", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 314748, "title": "Integration and a new variable", "question_text": "I have come across an integral where they introduce a new variable\n\n$k' = k - k_0$\n\n where\n\n$k_0$\n\n is constant.\n\n$$\n\n\\begin{split}\n\n\\psi(x) &= \\int\\limits_{-\\infty}^{\\infty} e^{-\\frac{(k-k_0)^2}{2 \\sigma_x^2}} e^{ikx} \\,\\,\\mathrm{d}k\\\\\n\n\\boxed{\\scriptsize k' = k-k_0} &\\Longrightarrow \\boxed{\\scriptsize k=k' - k_0} \\Longrightarrow\\boxed{{\\scriptsize \\mathrm{d}k = \\ ???}}\\\\\n\n\\psi(x) &= \\int\\limits_{-\\infty}^{\\infty} e^{- k'^2 / 2 \\sigma_x^2} \\, e^{i(k' -k_0)x} \\,\\,\\mathrm{d}k'\\\\\n\n\\end{split}\n\n$$\n\nI am not sure how did an author get\n\n$\\mathrm{d}k'$\n\n out of\n\n$\\mathrm{d}k$\n\n so I need you to confirm my take on this. If I differentiate a new variable I get:\n\n$$\n\n\\begin{split}\n\n\\frac{d k'}{dk} &= \\frac{d k}{d k} - \\frac{d k_0}{dk}\\\\\n\n\\frac{d k'}{dk} &= \\frac{d k}{d k}\\\\\n\n\\end{split}\n\n$$\n\nAm I allowed to just cancel out the\n\n$dk$\n\n in the denominator to get the expression below?\n\n$$\n\ndk' = dk\n\n$$", "question_owner": "71GA", "question_link": "https://math.stackexchange.com/questions/314748/integration-and-a-new-variable", "answer": { "answer_id": 314765, "answer_text": "$\\boxed{\\scriptsize k=k' - k_0} \\Longrightarrow\\boxed{{\\scriptsize \\mathrm{d}k = \\ ???}}$\n\nWhen you have the first equation, derive each side by k, left side would be\n\n$\\mathrm{d}k$\n\n, right side would be\n\n$\\mathrm{d}k'$\n\n and\n\n$k_0$\n\n is just a constant so derivative would be zero. 
New equation would be:\n\n$\\boxed{\\mathrm{d}k=\\mathrm{d}k'}$", "answer_owner": "Tyathalae", "is_accepted": true, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1518413, "title": "Find $\\frac{N_r}{D_r}$ for small $x$.", "question_text": "Let\n\n$$N_r= (1-x)^{-\\frac{5}{2}} + (16+8x)^{\\frac{1}{2}}\\quad\\text{and}\\quad\n\nD_r = (1+x)^{-\\frac{1}{2}} + (2+x)^2.$$\n\nShow that\n\n$$\\frac{N_r}{D_r} = 1 + \\frac{23x^2}{40}+o(x^2).$$\n\nThis is my work\n\n\\begin{gather}\n\n(1-x)^{-\\frac{5}{2}} = 1+\\frac{5x}{2}+ \\frac{35x^2}{8}\\\\\n\n(16+8x)^\\frac{1}{2}= 4+2x-\\frac{x^2}{8}\\\\\n\n(1+x)^{-\\frac{1}{2}} = 1-\\frac{x}{2}+\\frac{3x^2}{8}\\\\\n\n(2+x)^2=x^2+4x+4\n\n\\end{gather}\n\nOn simplification\n\n\\begin{gather}\n\nN_r = 5 + \\frac{9x}{2} + \\frac{17x^2}{4} \\\\\n\nD_r= 5+\\frac{7x}{2}+\\frac{11x^2}{8} \\\\\n\n\\frac{N_r}{D_r} = 1 +\\frac{x}{5} + \\frac{87x^2}{200}\n\n\\end{gather}\n\nOn simplification I am not getting the RHS. 
Kindly help.", "question_owner": "Giridharan L", "question_link": "https://math.stackexchange.com/questions/1518413/find-fracn-rd-r-for-small-x", "answer": { "answer_id": 1518498, "answer_text": "This is my work\n\n\\begin{gather}\n\n(1-x)^{-\\frac{5}{2}} = 1+\\frac{5x}{2}+ \\frac{35x^2}{8}\\\\\n\n(16+8x)^\\frac{1}{2}= 4+2x-\\frac{x^2}{8}\\\\\n\n(1+x)^{-\\frac{1}{2}} = 1-\\frac{x}{2}+\\frac{3x^2}{8}\\\\\n\n(2+x)^2=x^2+4x+4\n\n\\end{gather}\n\nOn simplification\n\n\\begin{gather}\n\nN_r = 5 + \\frac{9x}{2} + \\frac{17x^2}{4} \\\\\n\nD_r= 5+\\frac{7x}{2}+\\frac{11x^2}{8} \\\\\n\n\\frac{N_r}{D_r} = 1 +\\frac{x}{5} + \\frac{87x^2}{200}\n\n\\end{gather}", "answer_owner": "Giridharan L", "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 199094, "title": "Evaluating $\\int_0^{\\large\\frac{\\pi}{4}} \\log\\left( \\cos x\\right) \\, \\mathrm{d}x $", "question_text": "It's my first post here and I was wondering if someone could help me with evaluating the\n\ndefinite integral\n\n$$ \\int_0^{\\Large\\frac{\\pi}{4}} \\log\\left( \\cos x\\right) \\, \\mathrm{d}x $$\n\nThanks in advance, any help would be appreciated.", "question_owner": "Souvik", "question_link": "https://math.stackexchange.com/questions/199094/evaluating-int-0-large-frac-pi4-log-left-cos-x-right-mathrmdx", "answer": { "answer_id": 199138, "answer_text": "Write $$\\log(\\cos(x))=\\log\\left(\\frac12 e^{ix}(1+e^{-2ix})\\right)\\\\\n\n=-\\log 2 + ix +\\sum_{k=1}^\\infty\\frac{(-1)^{k+1}}{k}e^{-2ikx}.$$\n\nThen integrate term by term to obtain\n\n$$\\int_0^{\\pi/4}\\log(\\cos(x))dx=-\\frac{\\pi}{4}\\log 2 +i\\frac{\\pi^2}{32}+\\frac{i}{2}\\sum_{k=1}^\\infty\\frac{(-1)^{k+1}}{k^2}\\left[e^{-ik\\pi/2}-1\\right].$$\n\nThe odd terms of the series with $e^{-ik\\pi/2}$ give rise to the Catalan constant, and the even terms combine with the other infinite series to cancel the $i\\pi^2/32$ term.", "answer_owner": "user12477", "is_accepted": 
false, "score": 24, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 3185307, "title": "Why can we divide by zero in limits?", "question_text": "Before I ask, I want to tell you that I am beginner in limits, so you may find some problems in my understanding.\n\nLet's assume a function\n\n$f(x) = 15-2x^2$\n\n. We want to know how the function behaves at\n\n$x=1$\n\n. Specifically, we want to know the slope of tangent line at\n\n$x=1$\n\n.\n\nSimply, we get a good formula for that by doing this:\n\n$$m=\\frac{f(1)-f(x)}{x-1}.$$\n\n Then we get the equation\n\n$$m=\\frac{2-2x^2}{x-1}.$$\n\nNow we have to take the limit to find the slope of the tangent line,\n\n$$\\lim_{x\\to 1} \\frac {2-2x^2}{x-1}.$$\n\nTo solve this we simplify it like this :\n\n\\begin{align}\n\n\\lim_{x\\to 1} \\frac {2-2x^2}{x-1}\n\n =& \\lim_{x\\to 1}\\frac {-2(x-1)(x+1)}{(x-1)} \\\\\n\n =& \\lim_{x\\to 1}-2(x+1) \\\\\n\n =& -2(1)-2 \\\\\n\n =& -4\n\n\\end{align}\n\nIn algebra class when we had a fraction and we wanted to cancel something we always say\n\n$x \\ne a$\n\n. For instance\n\n$\\frac {1}{x-1}$\n\n. Here\n\n$x \\ne 1$\n\n, because\n\n$x-1$\n\n would be zero.\n\nBut here in limits I found something unbelievable: here we are dividing by zero and that's forbidden.\n\n$$\\lim_{x\\to 1}\\frac {-2(1-1)(1+1)}{(1-1)}.$$\n\nWe are just canceling zero in this fraction. 
Can anyone explain this?", "question_owner": "Mohammad Alshareef", "question_link": "https://math.stackexchange.com/questions/3185307/why-can-we-divide-by-zero-in-limits", "answer": { "answer_id": 5106698, "answer_text": "When taking a limit, you are precisely\n\navoiding\n\n the value to where the variable tends.\n\n$$\\lim_{x\\to a}f(x)$$\n\n evaluates\n\n$f$\n\n in a neighborhood of\n\n$a$\n\n, but not at\n\n$a$\n\n itself.\n\n$f(a)$\n\n is irrelevant.\n\nSo, while\n\n$\\dfrac{x^2-1}{x-1}$\n\n may not be evaluated at\n\n$x=1$\n\n, the following identity is correct:\n\n$$x\\ne1\\implies\\frac{x^2-1}{x-1}=x+1.$$\n\nFor this reason,\n\n$$\\lim_{x\\to1}\\frac{x^2-1}{x-1}=\\lim_{x\\to1}(x+1)$$\n\n holds.\n\nAddendum:\n\nLet the function\n\n$f$\n\n defined as\n\n$$\\begin{cases}x\\ne1\\to\\dfrac{x^2-1}{x-1}\\\\x=1\\to0\\end{cases}$$\n\nWe still have\n\n$\\lim_{x\\to1}f(x)=2$\n\n because\n\n$f(1)$\n\n is not used.", "answer_owner": "user1548405", "is_accepted": false, "score": 5, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 944902, "title": "derivative of ln inside ln", "question_text": "$y=ln(ln2x^4)$\n\nTo find the derivative of this would I have to find the derivative of the inside and then do the derivative of the entire ln function on the inside? Such as the derivative of the inside is $\\frac{4}{x}$ and the derivative of the outer ln is $\\frac{1}{ln2x^4}$. 
Then I multiply the two together to get the final answer of $\\frac{4}{xln2x^4}$?", "question_owner": "stumped", "question_link": "https://math.stackexchange.com/questions/944902/derivative-of-ln-inside-ln", "answer": { "answer_id": 944912, "answer_text": "By the Chain Rule:\n\n$$\\left(\\log\\log(2x^4)\\right)'=\\frac1{\\log2x^4}\\frac1{2x^4}8x^3$$\n\nand now do some algebraic order in the above.", "answer_owner": "Timbuc", "is_accepted": false, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5020095, "title": "About the geometric interpretation of the arclength formula in polar coordinates", "question_text": "I have been thinking about how to interpret the formula for arclength in polar coordinates geometrically.\n\nSo this is the formula:\n\n$$L = \\int_{\\theta_1}^{\\theta_2} \\sqrt{ \\left( \\frac{dr}{d\\theta} \\right)^2 + r^2 } \\, d\\theta$$\n\ndealing with the differentials and rewriting, we have:\n\n$$dL = \\sqrt{(dr)^2 + (r \\, d\\theta)^2}$$\n\nAs I understand it, this is the Pythagorean relationship between the 3 quantities\n\n$dL$\n\n,\n\n$rd\\theta$\n\n, and\n\n$dr$\n\n as described by the picture:\n\nHowever, after some thinking, I noticed a problem, shouldn't\n\n$r \\, d\\theta \\to dL\n\n$\n\n as\n\n$d\\theta \\to 0$\n\n?\n\nEdit1: the reason why I think that\n\n$r \\, d\\theta \\to dL$\n\n as\n\n$d\\theta \\to 0$\n\n is just kind of an intuitive reasoning, for as\n\n$d\\theta$\n\n gets smaller and smaller,\n\n$dr$\n\n should also get smaller, until eventually, it reaches a point where we could treat\n\n$dr$\n\n as negligible and\n\n$rd\\theta$\n\n =\n\n$dL$\n\n. 
Or, in other words, for really small\n\n$d\\theta$\n\n,\n\n$dL$\n\n becomes the arc subtended by\n\n$d\\theta$\n\n with radius r.\n\nps:sorry for any problems or issues with my question, it's my first time asking here", "question_owner": "Thin Ng Hu ArmKnew2Grandmar", "question_link": "https://math.stackexchange.com/questions/5020095/about-the-geometric-interpretation-of-the-arclength-formula-in-polar-coordinates", "answer": { "answer_id": 5020241, "answer_text": "It is, in fact, true in a sense that\n\n$dL \\to rd\\theta$\n\n as\n\n$d\\theta \\to 0$\n\n, since as\n\n$d\\theta$\n\n gets really small, so do all the other infinitesimal changes as well: As\n\n$d\\theta$\n\n goes to zero, it is true that:\n\n$$ dL \\to 0 $$\n\nand that\n\n$$ dr \\to 0 $$\n\nand that\n\n$$ rd\\theta \\to 0. $$\n\nBut this is not much of an issue. Just think about simple derivatives: both\n\n$dy \\to 0$\n\n and\n\n$dx \\to 0$\n\n, yet\n\n$\\frac{dy}{dx}$\n\n is perfectly sensible.\n\nI personally find it helpful to think in terms of small but not infinitesimally small triangles that approximate the limit, that is,\n\n$dL$\n\n's,\n\n$dr$\n\n's and\n\n$rd\\theta$\n\n's that \"have not reached zero yet\".\n\nAlso, and I think this point will be the most helpful to Your question, notice that (in general)\n\n$dr$\n\n won't approach\n\n$0$\n\n much \"faster\" than\n\n$rd\\theta$\n\n or\n\n$rd\\theta$\n\n as\n\n$\\theta$\n\n goes to zero, so these two will be still sufficiently different \"before\n\n$d\\theta$\n\n (and thus all other differences) actually reache(s)\" zero.\n\nA notable exception would be a circle: In that case,\n\n$dr$\n\n is always zero (as\n\n$r$\n\n is constant), so\n\n$dL$\n\n actually equals\n\n$rd\\theta$\n\n at all times (for all\n\n$\\theta$\n\n).\n\nNow, instead of a circle, imagine something like a spiral: now\n\n$dr$\n\n would be greater (or smaller) than\n\n$0$\n\n as long as\n\n$d\\theta > 0$\n\n.\n\nI hope that answers your question. 
Feel free to ask follow-ups in case it didn't.", "answer_owner": "Vulturus", "is_accepted": true, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106667, "title": "Deriving the optimal point of a function with a variable exponent interaction term", "question_text": "I am trying to find the value of the variable\n\n$\\eta$\n\n that maximizes the function\n\n$C(\\eta, S)$\n\n, where\n\n$C$\n\n is defined as:\n\n$$C(\\eta, S) = k \\cdot \\eta^\\alpha \\cdot (1-\\eta)^\\gamma \\cdot S^{\\beta + \\delta \\eta}.$$\n\n The function is maximized with respect to\n\n$\\eta$\n\n, and all other parameters are treated as fixed constants:\n\n$\\eta \\in (0, 1)$\n\n. Constants\n\n$S > 0$\n\n,\n\n$k$\n\n,\n\n$\\alpha$\n\n,\n\n$\\gamma$\n\n,\n\n$\\beta$\n\n and\n\n$\\delta$\n\n are all positive. The variable\n\n$S$\n\n (System Scale) is also a constant for the purpose of the optimization, but it appears in the exponent as an interaction term (\n\n$\\delta\\eta$\n\n).\n\nMy Approach (Using Logarithmic Differentiation):\n\n To simplify the process of finding the critical point, I take the natural logarithm of\n\n$C(\\eta, S)$\n\n:\n\n$$\\ln C = \\ln k + \\alpha \\ln \\eta + \\gamma \\ln(1-\\eta) + (\\beta + \\delta \\eta) \\ln S$$\n\n Next, I take the derivative with respect to\n\n$\\eta$\n\n and set it to zero (\n\n$\\frac{\\mathrm d (\\ln C)}{\\mathrm d\\eta} = 0$\n\n) to find the critical point\n\n$\\eta^*$\n\n.\n\n$$\\frac{\\mathrm d (\\ln C)}{\\mathrm d\\eta} = 0 + \\frac{\\alpha}{\\eta} - \\frac{\\gamma}{1-\\eta} + \\delta \\ln S = 0.$$\n\nQuestion:\n\n Is the\n\n$\\frac{\\mathrm d (\\ln C)}{\\mathrm d\\eta}$\n\n derivative shown above correct? 
Assuming the derivative is correct, what is the closed-form expression for\n\n$\eta^*$\n\n that satisfies the equation:\n\n$$\frac{\alpha}{\eta^*} - \frac{\gamma}{1-\eta^*} + \delta \ln S = 0?$$\n\nWork I’ve Tried:\n\n I have attempted to rearrange the final equation to isolate\n\n$\eta^*$\n\n, but the quadratic form is challenging to solve cleanly, especially with the\n\n$\ln S$\n\n term. Any guidance on the algebraic steps to solve for\n\n$\eta^*$\n\n would be appreciated.", "question_owner": "B. Robinson", "question_link": "https://math.stackexchange.com/questions/5106667/deriving-the-optimal-point-of-a-function-with-a-variable-exponent-interaction-te", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106642, "title": "Find a function where the difference at two points a fixed distance apart equals another function", "question_text": "So the idea is there are two functions\n\n$f$\n\n and\n\n$g$\n\n such that\n\n$f(x+n)-f(x)=g(x)$\n\n, where\n\n$n$\n\n is fixed. If we know\n\n$g$\n\n, then how do we find\n\n$f$\n\n?\n\nI think I'm close to something, because I realized\n\n$f(x+n)-f(x)$\n\n looks like the top half of the limit for a derivative, and so as\n\n$n$\n\n gets smaller it gets closer to\n\n$nf'(x)$\n\n. And in turn, for small\n\n$n$\n\n$f(x)\approx G(x)/n$\n\n.\n\nBut if\n\n$n$\n\n is meaningfully large then it starts feeling a lot more discrete-y, and I'm only armed with highschool calc so I'm bringing the problem here where all the smart people are.\n\nThere's also the question of how to find f if you're adding the two points (or some other binary operator I guess) but it came up as the difference between the points when I found it.
Also also sorry if the formatting is bad, this's my first post on here.", "question_owner": "Wicker", "question_link": "https://math.stackexchange.com/questions/5106642/find-a-function-where-the-difference-at-two-points-a-fixed-distance-apart-equals", "answer": { "answer_id": 5106678, "answer_text": "Because the equation\n\n$f(x+n) - f(x) = g(x)$\n\n stays true if we shift\n\n$f$\n\n vertically by any amount, there is necessarily some arbitrariness in defining\n\n$f$\n\n knowing only\n\n$g$\n\n. This is analogous to the constant of integration in calculus, but in this case more complicated.\n\nFor simplicity, start with the example\n\n$n = 1$\n\n. If you notice, once we know\n\n$f(0)$\n\n we also know\n\n$f(1)$\n\n,\n\n$f(2)$\n\n, etc. We also know\n\n$f(-1)$\n\n because\n\n$f(0) - f(-1) = g(-1)$\n\n. So we can choose\n\n$f(0)$\n\n arbitrarily and that will determine the values of\n\n$f$\n\n on\n\n$\\ldots, -2, -1, 0, 1, 2, \\ldots$\n\n. In general, that sequence is called a\n\ncoset\n\n of the additive group\n\n$\\mathbb{Z}$\n\n. It is the coset that contains the number\n\n$x = 0$\n\n, and also the coset that contains the number\n\n$x = 5$\n\n.\n\nIf we move to an arbitrary\n\n$n$\n\n, every\n\n$x$\n\n is in exactly\n\none coset of the additive group\n\n$n\\mathbb{Z}$\n\n, that is, every\n\n$x$\n\n is in exactly one sequence of the form\n\n$\\ldots, x - 2n, x-n, x, x+n, x+2n, \\ldots$\n\n.\n\nSo to define\n\n$f$\n\n, we first choose one element\n\n$x_I$\n\n from each coset\n\n$I$\n\n and assign an arbitrary value to\n\n$f(x_I)$\n\n. That represents our vertical shift for that coset. We then determine\n\n$f$\n\n on the rest of the coset\n\n$I$\n\n using the rule\n\n$f(x+n) - f(x) = g(x)$\n\n. There is nothing in the equation to link the values on one coset to the values on another - no continuity or anything like that. 
So we can choose a different shift for every coset.\n\nOne choice of these\n\ncoset representatives\n\n is to use\n\n$[0,n)$\n\n as the set of representatives. Every real number is in the same coset as exactly one number in\n\n$[0,n)$\n\n. But in principle we could choose any representative for each coset.\n\nUsing the coset terminology shows the algebraic structure of the problem. We could also look at it more dynamically, like a discrete differential equation\n\n$f(x+n) = g(x) + f(x)$\n\n. In that case, we would think of the map\n\n$T(x) = x+n$\n\n that sends each point to the \"next\" one in its coset.\n\nThe cosets of\n\n$n\\mathbb{Z}$\n\n are exactly the\n\norbits\n\n of the map\n\n$T$\n\n. So in this language we choose a set of\n\norbit representatives\n\n, assign an arbitrary value of\n\n$f$\n\n at each of those points, and then use the equation to determine the value of\n\n$f$\n\n on the rest of each orbit.", "answer_owner": "Carl Mummert", "is_accepted": false, "score": 4, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106256, "title": "Why are we allowed to classify indeterminate limits with a specific form without proving it formally?", "question_text": "I have a few questions about infinite limits and their properties. Like I know that the arithmetic rules for the extended real number system are proven based on infinite limits, and those properties for infinite limits (like\n\n$\\infty+c=\\infty$\n\n) are proven using the Epsilon-Delta definition. But what about for indeterminate forms? 
Because we can't prove that there is a specific rule for limits in an indeterminate form (ex., we can't prove that if\n\n$\\lim_{x \\to \\infty}f(x)$\n\n and\n\n$\\lim_{x \\to \\infty}g(x)$\n\n equal\n\n$\\infty$\n\n, then\n\n$\\lim_{x \\to \\infty}(f(x)-g(x))=\\lim_{x \\to \\infty}f(x)-\\lim_{x \\to \\infty}g(x)$\n\n because this rule only holds for finite numbers or if we add them instead of subtracting).\n\nSo my main question is, if we can't prove what the rule is for a specific indeterminate limit, why are we allowed to define it to be that specific indeterminate form? For example, if we have the\n\n$\\lim_{x \\to 5}(x^2-x)$\n\n, we can split it up into\n\n$\\lim_{x \\to 5}(x^2)-\\lim_{x \\to 5}(x)$\n\n since we know this property holds for finite numbers, and then we get\n\n$\\lim_{x \\to 5}(x^2-x)=\\lim_{x \\to 5}(x^2)-\\lim_{x \\to 5}(x)=25-5=20$\n\n (i.e., we can just subtract each limit individually). But if we had the\n\n$\\lim_{x \\to \\infty}(x^2-x)$\n\n, then why do we call it the indeterminate form \"\n\n$\\infty-\\infty$\n\n\" (in the extended reals)? Like we haven't proven this, and what if the formula is something way different, like what if it was if\n\n$\\lim_{x \\to \\infty}f(x)$\n\n and\n\n$\\lim_{x \\to \\infty}g(x)$\n\n are\n\n$\\infty$\n\n, then\n\n$\\lim_{x \\to \\infty}(f(x)-g(x))=\\lim_{x \\to \\infty}f(x)+\\lim_{x \\to \\infty}g(x)-\\lim_{x \\to \\infty}(f(x)g(x))$\n\n or something else weird like that (like how the product/quotient rules for derivatives isn't just the product/quotient of the individual derivatives). So why do we just automatically make it equal to the indeterminate form\n\n$\\infty-\\infty$\n\n (and not some other form/operations with infinity) when that isn't proven? 
Or is this because we just assume to use/extend the already proven difference rule for limits (which is proven for finite limits only) to where both limits are infinity, and just use that to split the limit and equal it to\n\n$\\infty-\\infty$\n\n (and then we would later prove this form is indeterminate)?\n\nAlso, I understand how it's proven that those limit forms are indeterminate (because multiple limits of the same form can have different answers), but I mainly don't understand WHY we're allowed to GIVE it that form of\n\n$\\infty-\\infty$\n\n specifically if it isn't proven to be equal to that, especially since this form is also used to define the operation\n\n$\\infty-\\infty$\n\n to be undefined in the extended real number system, so shouldn't the form be well-defined/proven?\n\nAny help regarding these infinite/undefined limit properties would be greatly appreciated! Sorry if my question is confusing, please let me know if it needs any clarification.\n\nEDIT: Adding links to parts of my question to hopefully make it a bit clearer to others, since my question is confusing. 
Sorry for the inconvenience.\n\nLink 1 to Question\n\nLink 2 to Explanation", "question_owner": "Aaditya Visavadiya", "question_link": "https://math.stackexchange.com/questions/5106256/why-are-we-allowed-to-classify-indeterminate-limits-with-a-specific-form-without", "answer": { "answer_id": 5106273, "answer_text": "TL;DR: When someone says \"the indeterminate form\n\n$\infty - \infty$\n\n,\" what they mean is,\n\nYou cannot apply the subtraction rule to\n\n$\lim(f(x) - g(x))$\n\n using\n\n$\lim f(x)$\n\n and\n\n$\lim g(x)$\n\n when\n\n$\lim f(x) = \infty$\n\n and\n\n$\lim g(x) = \infty.$\n\nIn this case, we\n\ndon't\n\n apply the subtraction rule to get\n\n$\infty - \infty.$\n\nIndeed, you\n\ncannot\n\n apply the subtraction rule in this case, and that's exactly what we mean when we say\n\n$\infty - \infty$\n\n is an indeterminate form.\n\nBy calling\n\n$\infty - \infty$\n\n an indeterminate form, we are in fact saying that\n\n$\infty - \infty$\n\ndoesn't make sense\n\n and that the sequence of equations\n\n$$\text{\"}\lim(f(x) - g(x)) = \lim f(x) - \lim g(x) = \infty - \infty\text{\"}$$\n\ndoesn't make sense.\n\nAnd I see that you know these things don't make sense.\n\nThe string of symbols \"\n\n$\infty - \infty$\n\n\" is not meant to actually subtract\n\n$\infty$\n\n from\n\n$\infty$\n\n; it's just a kind of mnemonic\n\nname\n\n for a situation where one of the limit rules doesn't apply even though someone might think it did if they were not as well clued in as you are.\n\nThe classification of a formula as an \"indeterminate form\" for a limit is not a statement that the limit doesn't exist. It's just an indication that you need to work a little harder to determine whether the limit actually exists and (if it does) what the limit is.\n\nFor example, suppose\n\n$p(x)$\n\n and\n\n$q(x)$\n\n are polynomials with positive leading coefficients.
Then\n\n$$\n\n\\lim_{x\\to\\infty} p(x) = \\infty \\quad \\text{and} \\quad\n\n \\lim_{x\\to\\infty} q(x) = \\infty.\n\n$$\n\nWhat then are the following limits?\n\n$$ \\lim_{x\\to\\infty} (p(x) - q(x)), $$\n\n$$ \\lim_{x\\to\\infty} \\frac{p(x)}{q(x)}. $$\n\nAs it turns out, depending on exactly which polynomials we choose, we can make the first limit be anything we want:\n\n$\\infty,$\n\n$-\\infty,$\n\n or any real number. The second limit can't be negative but it can be zero, any positive real number, or\n\n$\\infty.$\n\nTry it with the following examples:\n\n$$ f(x) = x^2 - 1, \\quad g(x) = x + 1. $$\n\n$$ f(x) = x + 1, \\quad g(x) = x. $$\n\n$$ f(x) = x - 1, \\quad g(x) = x^2 - 1. $$\n\n\"Indeterminate form\" refers to a formula involving two functions, and it means that there is no simple rule that says what the limit of that formula is when all we know is the limits of those two functions. If there\n\nwere\n\n such a rule, then given two functions whose limits are both\n\n$\\infty$\n\n the rule would always return the same answer as for any other two functions whose limits are both\n\n$\\infty.$\n\nIf you have worked through the examples correctly then\n\nyou\n\n have just proved that there is no such rule.\n\nIt also turns out that if you are given two specific polynomials -- that is, you know exactly what their degrees and coefficients are -- these limits are easy to evaluate.\n\nSo yes, there can still be a way to evaluate a limit despite it being an \"indeterminate form.\" The phrase \"indeterminate form\" means only that you need to know more about the functions than just what the limit of each function is,\n\nas opposed to other cases where you can determine the limit of an expression like\n\n$f(x) - g(x)$\n\n or\n\n$f(x)/g(x)$\n\n just by knowing that\n\n$f(x)$\n\n has a certain limit and\n\n$g(x)$\n\n has a certain limit.\n\nFor example,\n\n$$\n\n\\lim_{x\\to\\infty} f(x) = \\infty \\ \\text{and} \\ \\lim_{x\\to\\infty} g(x) = -\\infty\n\n\\implies 
\\lim_{x\\to\\infty} (f(x) - g(x)) = \\infty.\n\n$$\n\nAs for the symbols that we use as names for these indeterminate forms such as\n\n$\\infty - \\infty,$\n\n$\\infty/\\infty,$\n\n or\n\n$0 \\times \\infty,$\n\n they are merely shorthand for the actual meaning of the forms. They are not arithmetic operations. That is, when someone writes\n\nthe intermediate form\n\n$\\infty-\\infty$\n\nwhat they really mean is\n\n$\\lim_{x\\to\\infty} (f(x) - g(x))$\n\n where\n\n$\\lim_{x\\to\\infty} f(x) = \\infty$\n\n and\n\n$\\lim_{x\\to\\infty} g(x) = \\infty,$\n\n in which case we are unable to apply the subtraction rule using the limits of\n\n$f(x)$\n\n and\n\n$g(x)$\n\n to evaluate the limit of\n\n$f(x) - g(x).$\n\nThey do\n\nnot\n\n literally mean \"subtract\n\n$\\infty$\n\n from\n\n$\\infty.$\n\n\" That's not an indeterminate form; it's just an undefined operation.\n\nBut can you see why it is desirable to be able to say \"the intermediate form\n\n$\\infty-\\infty$\n\n\" rather than have to give the full explanation every time?\n\nMoreover,\n\n$\\infty-\\infty$\n\n is an indeterminate form\n\nbecause\n\n the difference rule doesn't work when both limits are\n\n$+\\infty.$\n\nYou might call\n\n$\\infty-\\infty$\n\n an \"abuse of notation.\" You might also call it a \"formal notation,\" using \"formal\" in the sense of \"having the outward appearance of\"; it looks like a subtraction, but it isn't actually a subtraction because that operation isn't defined for those operands.\n\nOr as Dave Renfro mentioned in a comment, you can simply regard\n\n$\\infty - \\infty$\n\n as just a\n\nname\n\n for a particular situation, a string of symbols meant to be read as if they were a single word and not a composite mathematical expression that can be parsed into several meaningful pieces. 
I think this may be the best way to regard this particular notation.\n\nIn contrast, the operations involving\n\n$\\infty$\n\n that are defined on the extended real numbers correspond to cases where rules for limits actually do work despite the fact that one or both of the limits involved is\n\n$\\pm\\infty.$\n\nAt least that's my interpretation of it. For example, I would say that\n\n$$\n\n\\left(\\lim_{x\\to\\infty} x\\right) - \\left(\\lim_{x\\to\\infty} x\\right)\n\n$$\n\nis an undefined expression, so you can't say that it is equal to\n\n$\\lim_{x\\to\\infty}(x - x).$\n\nOn the other hand, if we accept the use of the extended reals,\n\n$$\n\n\\left(\\lim_{x\\to\\infty} x\\right) + \\left(\\lim_{x\\to\\infty} x\\right)\n\n= \\infty + \\infty = \\infty.\n\n$$", "answer_owner": "David K", "is_accepted": false, "score": 17, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106662, "title": "At which order does the derivative stop having changes?", "question_text": "I'm a bit new on derivatives and the use of it, when i search about it, most of derivatives stop at second order derivatives. Like the most used example is acceleration. So now, I'm wondering if there's a higher order that is having constant line. From what I know, derivative is a rate of change by using limit approaching zero. I've searched a bit about this too, about 3rd and 4th order and so on (jerk, snap, etc).\n\nDoes this mean when an object starts to move from 0 m/s, there's always a change on acceleration, and other higher order derivatives? 
And if so, at which order of derivative do the changes become so negligible that they don't seem like a change?\n\nSorry if this is confusing, I don't know how to ask my confusion about this.", "question_owner": "Deez Nuts", "question_link": "https://math.stackexchange.com/questions/5106662/at-which-order-does-the-derivative-stop-having-changes", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 162111, "title": "Population growth and the logistic model", "question_text": "I am not sure where I am going wrong, I will admit I do not fully understand all the equations but I do feel like I have a pretty good grasp on them.\n\nThis is from Stewart Calculus 7e pg 613 #3\n\n\"The pacific halibut fishery has been modeled by the differential equation\n\n $$\frac{dy}{dt} = ky \left(1 - \frac{y}{M}\right)$$where $y(t)$ is the biomass (the total mass of the members of the population) in kilograms at time t (measured in years), the carrying capacity is estimated to be $M = 8 * 10^7 kg$ and $k = .071$ per year.\n\na) if $y(0) = 2 \cdot 10^7 kg$ find the biomass a year later.\n\nb) How long will it take for the biomass to reach $4 \cdot 10^7 kg$?\"\n\nFirst I start by putting in the given information into the logistic equation.\n\n$$\frac{dy}{dt} = .071y\left(1 - \frac{y}{8*10^7}\right)$$\n\nStarting with $a$ I need to find $y$ when $t$ is $0$ so I use another equation from the book.
I find that $A = 3$\n\n$$A = \\frac{M - y_0}{y_0}$$\n\nWhere I am assuming $y_0$ is the initial value.\n\n$y(0) = 2 \\cdot 10^7$\n\n$$A = \\frac{(8 \\cdot 10^7) - (2 \\cdot 10^7)}{ 2 \\cdot 10^7}$$\n\nThis comes out to 3 so I put it in the other equation that solves for $y(t)$\n\n$$y(t) = \\frac{M}{1+ Ae^{-kt}}$$\n\n$$y(1) = \\frac{(8 \\cdot 10^7) }{1+ 3e^{-.071}}$$\n\nThis comes out to $21083781.89$ which I know is wrong, going through my work it all seems to be correct, I am not sure where I have gone wrong.", "question_owner": "user138246", "question_link": "https://math.stackexchange.com/questions/162111/population-growth-and-the-logistic-model", "answer": { "answer_id": 426752, "answer_text": "The solution is correct. If the value of $k=0.71$ is used instead of $k=0.071$, the answer comes out to\n\n$$y(1) = \\frac{(8 \\cdot 10^7) }{1+ 3e^{-0.71}} \\approx 3.23\\cdot 10^7$$\n\nin accordance with the textbook answer. (Observed by Did)", "answer_owner": "ˈjuː.zɚ79365", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 3353508, "title": "What is the derivative of a function of the form $u(x)^{v(x)}$?", "question_text": "So I have a given lets say\n\n$(x+1)^{2x}$\n\n in addition to\n\n$\\frac{\\mathrm dy}{\\mathrm dx}a^u=a^u\\log(a)u'$\n\n. I still have to multiply this by the derivative of the inside function\n\n$x+1$\n\n correct?", "question_owner": "Alkahest", "question_link": "https://math.stackexchange.com/questions/3353508/what-is-the-derivative-of-a-function-of-the-form-uxvx", "answer": { "answer_id": 3353544, "answer_text": "This is what logarithmic differentiation is for. You start with writing the function as an equation\n\n$$y = (x + 1)^{2x},$$\n\nthen take the natural log of both sides:\n\n$$\\ln y = \\ln\\left[(x + 1)^{2x}\\right] = 2x \\ln(x+1).$$\n\nWe then implicitly differentiate both sides with respect to\n\n$x$\n\n. 
By chain rule (remember,\n\n$y$\n\n is a function of\n\n$x$\n\n), the left side comes to\n\n$$\frac{1}{y} \cdot y'.$$\n\nThe right side can be differentiated as normal:\n\n$$\frac{2x}{x + 1} + 2\ln(x + 1).$$\n\nSo,\n\n\begin{align*}\n\n&\frac{1}{y} \cdot y' = \frac{2x}{x + 1} + 2\ln(x + 1) \\\n\n\implies \, &y' = y\left(\frac{2x}{x + 1} + 2\ln(x + 1)\right) \\\n\n\implies \, &y' = (x + 1)^{2x}\left(\frac{2x}{x + 1} + 2\ln(x + 1)\right).\n\n\end{align*}", "answer_owner": "Theo Bendit", "is_accepted": false, "score": 24, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105850, "title": "Counterexample to “the square of a non-monotone function is non-monotone”", "question_text": "The question: “If a function is not monotone on\n\n$(a, b)$\n\n, then its square cannot be monotone on\n\n$(a, b)$\n\n.” We are to provide a counterexample to this statement.\n\nOn initial attempts I was able to forge piecewise functions which were sufficient counterexamples to this statement. One is stated below.\n\nThe function:\n\n$(-1)^{\lfloor x\rfloor}x$\n\n. (we can consider the required domain). Here's a graph I made on desmos\n\nAfter some attempts I was able to find another function.\n\nThe function:\n\n$(-1)^xx$\n\n. (we can consider the required domain). Graphed here\n\nHere is where I have a doubt; is this function even a sufficient counter example for the statement?\n\nMy conclusion against this function being sufficient came in the following way:\n\nFor a function to be monotonic increasing in\n\n$(a, b)$\n\n we must have:\n\n$f(x_2) > f(x_1)$\n\n for\n\n$x_2 > x_1$\n\n; if\n\n$x_1 , x_2\in(a, b)$\n\n.
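The closed form obtained by logarithmic differentiation in the answer above can be sanity-checked against a central finite difference. This is a rough numerical sketch of mine (the step size and tolerance are arbitrary choices):

```python
import math

def f(x):
    # (x + 1)^(2x), written as exp(2x ln(x + 1)), valid for x > -1
    return math.exp(2 * x * math.log(x + 1))

def f_prime(x):
    # closed form from the logarithmic-differentiation answer
    return f(x) * (2 * x / (x + 1) + 2 * math.log(x + 1))

h = 1e-6
x0 = 1.0
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central difference approximation
```

At `x0 = 1` the function value is $2^2 = 4$ and the finite difference agrees with the closed form to several digits.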
(vice versa for decreasing)\n\nNow the function stated above, returns complex numbers for some values in any interval we consider; now by property of complex numbers we know we can’t compare complex numbers so this is technically breaking the rigorous definition of monotonicity…\n\nOn the other hand it is impossible to modify the domain in such a way that the output to them lies only on the real numbers, as there are infinitely many such points which would yield a complex number between two points\n\nSo my question is, is my train of thought against the latter function correct or am I missing any fine detail, in the analysis of its behavior.\n\nThank you in advance. Any insight provided will be invaluable to drive my curiosity in Mathematics.", "question_owner": "relac.ab", "question_link": "https://math.stackexchange.com/questions/5105850/counterexample-to-the-square-of-a-non-monotone-function-is-non-monotone", "answer": { "answer_id": 5105874, "answer_text": "Take any positive monotone function and build a new function by taking the square root and assigning varying signs.\n\nE.g.\n\n$e^x$\n\n vs.\n\n$\\text{sign}(\\sin(x))e^{x/2}$\n\n.", "answer_owner": "user1548405", "is_accepted": true, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 3170265, "title": "What does $L(x) = f(a) + f'(a)(x - a)$ have to do with this problem? 
I am confused.", "question_text": "Following is the word problem:\n\nGiven the formula\n\n$L(x) = f(a)+f'(a)(x-a)$\n\n, what is the linear approximation of the tangent line at\n\n$a=2$\n\n for the following function\n\n$f(x)=x^3$\n\n?\n\nAfter watching Khan Academy, I figured out that I have to find slope (derivative) of\n\n$f(x)$\n\n which is\n\n$f'(x)=3x^2$\n\n, then plug-in\n\n$2$\n\n to get\n\n$f'(2)=3(2)^2=12$\n\nthen use point slope\n\n$(y-y_1)=m(x-x_1)$\n\n$(y_1$\n\n is found by plugging in\n\n$2$\n\n in the original\n\n$f(x)=x^3)$\n\n implies\n\n$$(y-8)=12(x-2) \\implies y=12x-24+8 \\implies y=12x-16$$\n\n and this is correct answer.\n\nBut I have no idea why problem mentions formula\n\n$L(x) = f(a)+f'(a)(x-a)$\n\n?", "question_owner": "Aman Khan", "question_link": "https://math.stackexchange.com/questions/3170265/what-does-lx-fa-fax-a-have-to-do-with-this-problem-i-am-confus", "answer": { "answer_id": 3170269, "answer_text": "This L(x) is the general formula for the linear approximation that you are looking for. You just have to plug in your\n\n$f$\n\n. So, in your case\n\n$$L(x)=a^3+\\left.(x^3)'\\right|_a(x-a)=a^3+3a^2(x-a)=3a^2x-2a^3$$\n\n which is linear (in\n\n$x$\n\n). Now, plugging in\n\n$a=2$\n\n you can confirm your result\n\n$$L(x)=3(2^2)x-2(2^3)=12x-16$$", "answer_owner": "Jimmy R.", "is_accepted": true, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5027027, "title": "Evaluation of a Digamma Integral", "question_text": "Is there an alternative proof for the following identity?\n\n$\\forall \\text{ }0 < \\alpha < 1; a,b$\n\n$$\\int_{0}^{\\infty} x^{\\alpha - 1} \\Big[\\psi(b+x) - \\psi(a+x)\\Big] \\mathrm dx = \\frac{\\pi}{\\sin(\\alpha \\pi)}\\Big[\\zeta(1-\\alpha, a) - \\zeta(1-\\alpha, b)\\Big]$$\n\nI am interested if anyone has any other techniques/methods to solve this integral without the series expansion of the digamma function (the method I used). 
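The tangent-line result $L(x) = 12x - 16$ from the linear-approximation answer above is easy to check numerically; a small sketch of mine (the sample points are arbitrary):

```python
def f(x):
    return x**3

def L(x, a=2.0):
    # linear approximation L(x) = f(a) + f'(a)(x - a), with f'(x) = 3x^2
    return a**3 + 3 * a**2 * (x - a)
```

`L(3) = 20` matches the point-slope computation in the question, and near `a = 2` the approximation error against `f` is small.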
The result of this integral is included in\n\nthis\n\n post of mine requesting for more polygamma integrals (I don't want the original post to be an essay, so I am deriving this result here).\n\nI considered the\n\nseries formula\n\n of the digamma function, giving the following\n\n\\begin{align*}\n\n\\psi (b+x) - \\psi(a+x)&=-\\gamma +\\sum _{n=0}^{\\infty }\\left({\\frac {1}{n+1}}-{\\frac {1}{n+b+x}}\\right) - \\Big[-\\gamma +\\sum _{n=0}^{\\infty }\\left({\\frac {1}{n+1}}-{\\frac {1}{n+a+x}}\\right)\\Big]\\\\\\\\\n\n&= \\sum _{n=0}^{\\infty }\\left({\\frac {1}{n+1}}-{\\frac {1}{n+b+x}}\\right) - \\sum _{n=0}^{\\infty }\\left({\\frac {1}{n+1}}-{\\frac {1}{n+a+x}}\\right)\\\\\\\\\n\n&= \\sum _{n=0}^{\\infty }\\left({\\frac {1}{n+a+x}}-{\\frac {1}{n+b+x}}\\right)\n\n\\end{align*}\n\nSubstituting this into the original integral gives\n\n\\begin{align*}\n\n\\int_{0}^{\\infty} x^{\\alpha - 1} \\Big[\\psi(b+x) - \\psi(a+x)\\Big] \\mathrm dx &= \\int_{0}^{\\infty} x^{\\alpha - 1} \\Big[\\sum _{n=0}^{\\infty }\\left({\\frac {1}{n+a+x}}-{\\frac {1}{n+b+x}}\\right)\\Big] \\mathrm dx\\\\\\\\\n\n&= \\sum _{n=0}^{\\infty}\\int_{0}^{\\infty} x^{\\alpha - 1} \\Big[\\left({\\frac {1}{n+a+x}}-{\\frac {1}{n+b+x}}\\right)\\Big] \\mathrm dx \\\\\\\\\n\n&= \\frac{\\pi}{\\sin(\\alpha \\pi)} \\sum_{n=0}^{\\infty} \\Big[(n+a)^{\\alpha - 1} - (n+b)^{\\alpha - 1}\\Big]\\\\\\\\\n\n&= \\frac{\\pi}{\\sin(\\alpha \\pi)} \\Big[\\zeta(1-\\alpha, a) - \\zeta(1-\\alpha, b)\\Big]\n\n\\end{align*}\n\nI used the series formula obtained at the beginning, then swapped the sum and integral signs, integrated the two terms (\n\n$\\int_{0}^{\\infty} x^{\\alpha - 1} \\frac{1}{n+q+x} \\mathrm dx = (n+q)^{\\alpha - 1} \\frac{\\pi}{\\sin(\\alpha \\pi)}$\n\n I used the result (obtained through the substitution\n\n$u = x/(n+q)$\n\n and using\n\nthis\n\n result), and used the definition of the Hurwitz zeta function to obtain the final form.\n\nFeel free to share any techniques you may have. 
Thanks :)", "question_owner": "Maxime Jaccon", "question_link": "https://math.stackexchange.com/questions/5027027/evaluation-of-a-digamma-integral", "answer": { "answer_id": 5100135, "answer_text": "Thank you Setness Ramesory for your comment. It has been close to a year since I asked this question... and I have since gained familiarity with contour integration. I also don't want this question to contaminate the \"unanswered questions.\"\n\nWe begin by defining the complex function\n\n$f(z) = z^{\\alpha - 1} [\\psi(b+z) - \\psi(a+z)]$\n\n. Note that the term\n\n$z^{\\alpha - 1}$\n\n introduces a branch point at the origin, and we shall choose the branch cut to lie along the positive real axis. Furthermore, the\n\n$\\psi$\n\n's introduce simple poles at\n\n$-a - \\mathbb{N}$\n\n and\n\n$b - \\mathbb{N}$\n\n. (This can be easily seen with the recurrence relation\n\n$\\psi(z+1) = \\psi(z) + \\frac{1}{z}$\n\n). This motivates the construction of the following keyhole contour\n\n$\\Omega$\n\n.\n\n$\\Omega$\n\n consists of four parts.\n\n$(a)$\n\n A large circle of radius\n\n$R$\n\n centered at the origin with counter-clockwise orientation, denoted\n\n$\\Gamma$\n\n;\n\n$(b)$\n\n a small circle of radius\n\n$\\epsilon$\n\n centered at the origin with clockwise orientation, denoted\n\n$\\gamma$\n\n;\n\n$(c)$\n\n a line segment just above\n\n$\\mathbb{R}^+$\n\n from\n\n$\\epsilon$\n\n to\n\n$R$\n\n denoted\n\n$\\xi_1$\n\n;\n\n$(d)$\n\n a line segment just below\n\n$\\mathbb{R}^+$\n\n from\n\n$R$\n\n to\n\n$\\epsilon$\n\n denoted\n\n$\\xi_2$\n\n. We have\n\n$$\\oint_{\\Omega} f(z) dz = \\int_{\\Gamma} f(z) dz + \\int_{\\gamma} f(z) dz + \\int_{\\xi_1} f(z) dz + \\int_{\\xi_2} f(z) dz$$\n\nThe contour\n\n$\\Omega$\n\n is pictured below (red and blue dots are the poles from\n\n$\\psi(a+z)$\n\n and\n\n$\\psi(b+z)$\n\n).\n\nIt can be shown the integrals over the circles,\n\n$\\Gamma$\n\n and\n\n$\\gamma$\n\n vanish. 
Parametrizing\n\n$\xi_1$\n\n and\n\n$\xi_2$\n\n, we obtain\n\n\begin{align}\n\n\int_{\xi_1}f(z) dz + \int_{\xi_2} f(z) dz &= (1-e^{2\pi i (\alpha - 1)}) \int_{0}^{\infty} x^{\alpha - 1} [\psi(b+x) - \psi(a+x)] dx \\\n\n&= -2i e^{i \pi \alpha} \sin(\pi \alpha) \int_{0}^{\infty} x^{\alpha - 1} [\psi(b+x) - \psi(a+x)] dx\n\n\end{align}\n\nNow, we must calculate the sum of the residues contained in our contour. As mentioned above, these are simple poles. Thus, we may use\n\n$\text{Res}(f, z_k) = \lim_{z \rightarrow z_k}(z-z_k) f(z)$\n\n. Let\n\n$a_k$\n\n be the pole\n\n$-a-k$\n\n, for some\n\n$k \in \mathbb{N}$\n\n. Then, we may perform the residue calculation as\n\n\begin{align}\n\n\text{Res}(f, a_k) &= \lim_{z \rightarrow a_k} (z-a_k) (z^{\alpha - 1} [\psi(b+z) - \psi(a+z)]) \\\n\n& = \lim_{z \rightarrow a_k}(z-a_k) [-z^{\alpha - 1}\psi(a+z)]\\\n\n& = \lim_{z \rightarrow a_k} -z^{\alpha - 1} [(z-a_k)\psi(a+z)]\\\n\n& = \lim_{z \rightarrow a_k} -z^{\alpha - 1} (-1)\\\n\n& = (-a-k)^{\alpha - 1} = ((a+k)e^{i \pi})^{\alpha - 1}\\\n\n&= (a+k)^{\alpha - 1}e^{i \pi (\alpha - 1)}\n\n\end{align}\n\nWhere we have used the fact that\n\n$\psi(b+z)$\n\n is analytic at\n\n$a_k$\n\n (and thus disappears) and used the polar form of the negative reals.
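The prefactor identity used in the parametrization step, $1 - e^{2\pi i(\alpha-1)} = -2i\,e^{i\pi\alpha}\sin(\pi\alpha)$, can be checked numerically with Python's `cmath`; a quick sketch over a few sample values of $\alpha$:

```python
import cmath
import math

# check 1 - exp(2*pi*i*(alpha - 1)) == -2i * exp(i*pi*alpha) * sin(pi*alpha)
# over a few sample values of alpha in (0, 1)
max_err = max(
    abs((1 - cmath.exp(2j * math.pi * (a - 1)))
        - (-2j * cmath.exp(1j * math.pi * a) * math.sin(math.pi * a)))
    for a in (0.25, 0.5, 0.8)
)
```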
An almost identical calculation is done to calculate the residue at\n\n$z = -b-k = b_k$\n\n, where we obtain\n\n$\text{Res}(f, b_k) = -(b+k)^{\alpha - 1} e^{i \pi (\alpha - 1)}$\n\n.\n\nNow, a result we shall use later: the sum over all the residues inside the contour.\n\n\begin{align}\n\n\sum \text{Res} &= \sum_{k=0}^{\infty} [(a+k)^{\alpha - 1}e^{i \pi (\alpha - 1)} -(b+k)^{\alpha - 1} e^{i \pi (\alpha - 1)}]\\\n\n&= e^{i \pi (\alpha - 1)} \sum_{k=0}^{\infty} [(a+k)^{\alpha - 1}-(b+k)^{\alpha - 1}]\\\n\n& = e^{i \pi (\alpha - 1)} [\zeta(1-\alpha, a) - \zeta(1-\alpha, b)]\\\n\n& = - e^{i \pi \alpha}[\zeta(1-\alpha, a) - \zeta(1-\alpha, b)]\n\n\end{align}\n\nIndeed, by Cauchy's Residue Theorem, we have\n\n\begin{align}\n\n\oint_{\Omega} f(z) dz = \int_{\xi_1}f(z) dz + \int_{\xi_2} f(z) dz = 2\pi i \sum \text{Res} = -2 \pi i e^{i \pi \alpha} [\zeta(1-\alpha, a) - \zeta(1-\alpha, b)]\n\n\end{align}\n\nHowever, by equating this result with the parametrization of the sum\n\n$\xi_1$\n\n and\n\n$\xi_2$\n\n, we have\n\n$$-2 \pi i e^{i \pi \alpha}[\zeta(1-\alpha, a) - \zeta(1-\alpha, b)] = -2i e^{i \pi \alpha} \sin(\pi \alpha) \int_{0}^{\infty} x^{\alpha - 1} [\psi(b+x) - \psi(a+x)] dx$$\n\nDividing gives the desired result.\n\n$$\int_{0}^{\infty} x^{\alpha - 1} \Big[\psi(b+x) - \psi(a+x)\Big] \mathrm dx = \frac{\pi}{\sin(\alpha \pi)}\Big[\zeta(1-\alpha, a) - \zeta(1-\alpha, b)\Big]$$", "answer_owner": "Maxime Jaccon", "is_accepted": true, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4762066, "title": "Evaluating $\sum_{i=1}^{\infty} \sum_{j=1}^{\infty} \frac{1}{ij(i+j)^2}$", "question_text": "I was playing around with double sums and encountered this problem: Evaluate\n\n$$\sum_{i=1}^{\infty} \sum_{j=1}^{\infty} \frac{1}{ij(i+j)^2}$$\n\nIt looks so simple I thought it must have been seen before, but
after using ApproachZero I only found extremely similar variants of the problem:\n\n$\\sum_{j=1}^{\\infty} \\sum_{k=1}^{\\infty} \\frac{1}{j^2(j+k)^2}$\n\nhere\n\n$\\sum_{k=1}^{\\infty} \\sum_{n=1}^{\\infty} \\frac{1}{n^2k^2(n+k)^2}$\n\nhere\n\n$\\sum_{n,k \\in \\mathbb{N_1}} \\frac{1}{n^2k^2(n+k)}$\n\nhere\n\nSome general facts about\n\n$T(r,s,t)=\\sum_{u,v=1}^{\\infty} \\frac{1}{u^rv^s(u+v)^t}$\n\nhere\n\nWhile the last link looks promising, I don't believe that it answers my specific question(but I haven't read the paper fully, only skimmed).\n\nI have an answer to this question(shared Q&A style), but I am curious as to whether there is another way to evaluate the sum. I have been experimenting with double sums for a few weeks now, so many techniques to attack \"simple\" sums such as this would be much appreciated! In terms of level, I have officially finished a Calculus 2 course but I have self-taught myself Calculus 3 and Complex Analysis, and worked through a few \"integration problems\" books so any further integral techniques would be fine.", "question_owner": "dgeyfman", "question_link": "https://math.stackexchange.com/questions/4762066/evaluating-sum-i-1-infty-sum-j-1-infty-frac1ijij2", "answer": { "answer_id": 4762067, "answer_text": "My own answer to the above question.\n\nWe can do this by analyzing a particular\n\n$n=i+j$\n\n. Instead of summing in the row/column way(when the double sum is written out) we sum by the diagonals from the top row to the left column as shown below\n\nwhere each box corresponds to a particular choice of i,j. Now, because\n\n$n=i+j$\n\n will hit all integers from\n\n$2$\n\n to\n\n$\\infty$\n\n, we can rewrite the summation as\n\n$$S = \\sum_{n=2}^{\\infty} \\frac{1}{n^2}f(n)$$\n\nwhere\n\n$$f(n)=\\sum_{i=1}^{n-1} \\frac{1}{i(n-i)}$$\n\nwhere we set\n\n$j$\n\n to be\n\n$n-i$\n\n because\n\n$i+j=n$\n\n. This is the \"sum by diagonals\" representation. Now, we can find an explicit formula for\n\n$f(n)$\n\n. 
Note that\n\n$$\frac{1}{i(n-i)}=\frac{1}{n}(\frac{1}{n-i}+\frac{1}{i})$$\n\nso, we have that\n\n$$f(n)=\frac{1}{n} \sum_{i=1}^{n-1} \left(\frac{1}{i}+\frac{1}{n-i}\right)=$$\n\n$$\frac{1}{n} (\sum_{i=1}^{n-1} \frac{1}{i}+ \sum_{i=1}^{n-1} \frac{1}{n-i})=$$\n\n$$\frac{1}{n}(H_{n-1}+\sum_{i=1}^{n-1} \frac{1}{n-i})=$$\n\n$$\frac{1}{n}(H_{n-1}+\sum_{i=1}^{n-1} \frac{1}{i})=$$\n\n$$\frac{2 H_{n-1}}{n}$$\n\nwhere the second-to-last equality follows from the reflection formula\n\n$\sum_{i=a}^{b} f(i)=\sum_{i=a}^{b} f(a+b-i)$\n\n. From this, we have that\n\n$$S=\sum_{n=2}^{\infty} \frac{1}{n^2} \cdot \frac{2 H_{n-1}}{n}=$$\n\n$$2 \sum_{n=2}^{\infty} \frac{H_{n-1}}{n^3}=$$\n\n$$2 \sum_{n=2}^{\infty} \frac{H_n-\frac{1}{n}}{n^3}=$$\n\n$$2(\sum_{n=2}^{\infty} \frac{H_n}{n^3} - \sum_{n=2}^{\infty} \frac{1}{n^4})$$\n\nHere, by noting that\n\n$H_n=\frac{1}{n}$\n\n for\n\n$n=1$\n\n, we also have that\n\n$$S=2(\sum_{n=1}^{\infty} \frac{H_n}{n^3} - \sum_{n=1}^{\infty} \frac{1}{n^4})$$\n\nas the\n\n$n=1$\n\n terms cancel each other out. Now, the second summation is clearly\n\n$\zeta(4)=\frac{\pi^4}{90}$\n\n, so we have that\n\n$$S=2(\sum_{n=1}^{\infty} \frac{H_n}{n^3} - \frac{\pi^4}{90})=$$\n\n$$2 \sum_{n=1}^{\infty} \frac{H_n}{n^3} - \frac{\pi^4}{45}$$\n\nNow, all that is left is to evaluate\n\n$\sum_{n=1}^{\infty} \frac{H_n}{n^3}$\n\n. To do this, we use the method outlined in Problem 3.58 of \"Limits, Series, and Fractional Part Integrals\". Recall the Dilogarithm function\n\n$$\mathrm{Li_2}(z)=\sum_{n=1}^{\infty} \frac{z^n}{n^2} = - \int_{0}^{z} \frac{\ln(1-t)}{t} dt$$\n\nFor example, note that\n\n$\mathrm{Li_2}(1)=\zeta(2)=\frac{\pi^2}{6}$\n\n and\n\n$\mathrm{Li_2}(0)=0$\n\n.
It is a known result(proved in an earlier problem in the book) that\n\n$$\\sum_{n=1}^{\\infty} \\frac{H_n}{n^3} = - \\sum_{n=1}^{\\infty} \\frac{1}{n^2} \\int_{0}^{1} x^{n-1} \\ln(1-x) dx$$\n\nand we know that\n\n$$- \\sum_{n=1}^{\\infty} \\frac{1}{n^2} \\int_{0}^{1} x^{n-1} \\ln(1-x) dx=- \\int_{0}^{1} \\frac{\\ln(1-x)}{x} (\\sum_{n=1}^{\\infty} \\frac{x^n}{n^2}) dx=$$\n\n$$- \\int_{0}^{1} \\frac{\\ln(1-x)}{x} \\mathrm{Li_2}(x) dx$$\n\nNow, sub\n\n$u=\\mathrm{Li_2}(x), du = \\frac{-\\ln(1-x)}{x} dx$\n\n, and using the special values of\n\n$\\mathrm{Li_2}(x)$\n\n above for the bounds, we see that\n\n$$\\sum_{n=1}^{\\infty} \\frac{H_n}{n^3} = \\int_{0}^{\\frac{\\pi^2}{6}} u du=$$\n\n$$\\frac{u^2}{2} \\Big|_0^{\\frac{\\pi^2}{6}}=$$\n\n$$\\frac{\\pi^4}{72}$$\n\nSo, we finish with\n\n$$S = 2 \\cdot \\frac{\\pi^4}{72} - \\frac{\\pi^4}{45}=$$\n\n$$\\boxed{\\frac{\\pi^4}{180}}=$$\n\n$$\\boxed{\\frac{1}{2} \\zeta(4)}$$", "answer_owner": "dgeyfman", "is_accepted": false, "score": 14, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106002, "title": "Why do we care about invariant solutions of differential equations (under a one parameter Lie group)?", "question_text": "I am reading about symmetry methods on differential equations.\n\nIt starts by considering invariant points, then orbits of points and invariant curves. It ends in a method to determine all (w.r.t Lie symmetry group) invariant solutions of an ODE.\n\nI do understand (most of it) formally. But I do lack a little bit of insight.\n\n$Question:$\n\nWhy are we even interested in invariant solutions of ODEs?\n\nAfter all the idea of symmetry methods seems to be, that of using transformations to solve ODEs. 
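The boxed value $\pi^4/180$ from the double-sum answer above can be checked by truncating the diagonal representation $S = \sum_{n\ge 2} 2H_{n-1}/n^3$ derived there. A rough numerical sketch of mine (the truncation level is arbitrary):

```python
import math

N = 4000   # truncation level (arbitrary; tail is roughly ln(N)/N^2)
H = 0.0    # running harmonic number H_{n-1}
S = 0.0
for n in range(2, N + 1):
    H += 1.0 / (n - 1)     # update to H_{n-1}
    S += 2.0 * H / n**3    # term 2 H_{n-1} / n^3 of the diagonal sum
```

The partial sum approaches $\pi^4/180 \approx 0.54116$ from below, since every term is positive.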
So why do we care about invariant solutions to those transformations?\n\nEdit:\n\nHere is a brief summary of everything relevant if need be:\n\nDenote with\n\n$\Gamma_{\epsilon}: (x,y) \mapsto (\hat x, \hat y)$\n\n a one-parameter Lie group of symmetries.\n\nI.e. for every\n\n$\epsilon$\n\n,\n\n$\Gamma$\n\n maps the points\n\n$(x,y)$\n\n to the new point\n\n$(\hat x, \hat y)$\n\n.\n\nInvariant Point:\n\nIf (x,y) is mapped to itself by the one-parameter Lie group, then we say that this point is an invariant point.\n\nInvariant Curve:\n\nA curve\n\n$y=f(x)$\n\n with the set of points\n\n$(x,f(x))$\n\n is mapped to the set of points\n\n$(\hat x, \hat f(x))$\n\n, i.e. to the solution curve\n\n$y=\hat f(x)$\n\n.\n\nTherefore the curve\n\n$y=f(x)$\n\n is invariant under the symmetry if\n\n$\hat f=f$\n\n.\n\nWe call a symmetry trivial if every solution curve is invariant under said symmetry.\n\nOrbit through a point:\n\nLet (x,y) be a point. The set of points to which (x,y) can be mapped is called the orbit of the group through (x,y). I.e. the set\n\n$\{(\hat x(x,y;\epsilon),\hat y(x,y;\epsilon)): \epsilon\}$\n\n.\n\nAn orbit is a curve (or, in the case of an invariant point, just a single point).\n\nOrbits are invariant Curves:\n\nBy definition, points of an orbit are mapped to points of the same orbit, thus orbits are invariant under the Lie group of symmetries.\n\nConsider the orbit through a non-invariant point\n\n$(x,y)$\n\n. The tangent vector to the orbit at the point\n\n$(\hat x,\hat y)$\n\n is\n\n$(\xi(\hat x,\hat y), \eta(\hat x,\hat y))$\n\n, where\n\n$\frac{d \hat x}{d\epsilon}=\xi(\hat x,\hat y)$\n\n$\frac{d\hat y}{d\epsilon}=\eta(\hat x,\hat y)$\n\nInvariance of Curves under Lie Symmetries:\n\nIf an orbit crosses any curve C transversely at the point\n\n$(x,y)$\n\n then there are Lie symmetries that map\n\n$(x,y)$\n\n to points that are not on\n\n$C$\n\n.
Therefore a curve is invariant if and only if no orbit crosses it.\n\nIn other words,\n\n$C$\n\n is an invariant curve if and only if the tangent to\n\n$C$\n\n at each point\n\n$(x,y)$\n\n is parallel to the tangent vector\n\n$(\xi(x,y),\eta(x,y))$\n\n. This condition can be mathematically expressed by introducing the characteristic\n\n$Q(x,y,y'):=\eta(x,y)-y' \xi(x,y)$\n\nIf\n\n$C$\n\n is the curve\n\n$y=y(x)$\n\n, the tangent to\n\n$C$\n\n at\n\n$(x,y(x))$\n\n is in the direction\n\n$(1,y'(x))$\n\n. It is parallel to\n\n$(\xi(x,y),\eta(x,y))$\n\n if and only if\n\n$Q(x,y,y')=0$\n\n (on\n\n$C$\n\n).\n\nLet\n\n$\frac{dy}{dx}=\omega(x,y)$\n\n be an ordinary differential equation. Note that\n\n$y'=\omega(x,y)$\n\n, thus we can reduce the characteristic to\n\n$Q(x,y):=Q(x,y,\omega(x,y))$\n\n.\n\nThis gives a characterization of invariant solution curves.\n\nA solution curve\n\n$y=f(x)$\n\n is invariant if and only if\n\n$Q(x,y)=0$\n\n, where\n\n$y=f(x)$\n\n.\n\nThe Lie symmetries are trivial if and only if\n\n$Q(x,y)$\n\n is identically zero, that is\n\n$\eta(x,y) \equiv \omega(x,y) \xi(x,y)$\n\nIn case of\n\n$Q_y \not \equiv 0$\n\n, we can determine the curves y=f(x) that satisfy\n\n$Q(x,y)=0$\n\n.\n\nThose curves are invariant solutions of\n\n$\frac{dy}{dx}=\omega(x,y)$\n\n.\n\nThus we can find all solutions that are invariant under a given Lie group.", "question_owner": "Denis", "question_link": "https://math.stackexchange.com/questions/5106002/why-do-we-care-about-invariant-solutions-of-differential-equations-under-a-one", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5037966, "title": "Is the chain rule needed at all to solve this problem?", "question_text": "In order to find the following derivative\n\n$$\frac{\mathrm{d}}{\mathrm{d}x} [(x-a)^2-(x-b)^2],$$\n\nis the chain rule
needed at all or is it possible to just use the sum rule and the power rule?\n\nI took this from a video that explains optimization for machine learning. The professor mentions that he will use the chain rule to solve that expression, but doesn't explain how he applied it. He only shows the final result. Could it be that he meant \"using the sum rule...\"", "question_owner": "user1575411", "question_link": "https://math.stackexchange.com/questions/5037966/is-the-chain-rule-needed-at-all-to-solve-this-problem", "answer": { "answer_id": 5038012, "answer_text": "The answer to the question \"Is it\n\nnecessary\n\n to use [theorem X] to attack [problem Y]?\" is almost universally \"No\". In principle, any problem can be solved by going back to definitions.\n\nLook Ma! No theorems!\n\nOne\n\ncould\n\n compute the derivative in the question directly from the definitions:\n\n$$\n\n\\frac{\\mathrm{d}}{\\mathrm{d}x} \\left[ (x-a)^2 - (x-b)^2 \\right]\n\n= \\lim_{h\\to 0} \\frac{(x+h-a)^2 - (x+h-b)^2 - (x-a)^2 + (x-b)^2}{h}.\n\n$$\n\nFix some\n\n$\\varepsilon > 0$\n\n and choose\n\n$\\delta > 0$\n\n arbitrarily. Suppose that\n\n$|h-0| < \\delta$\n\n. Then\n\n\\begin{align}\n\n\\left| \\frac{(x+h-a)^2 - (x+h-b)^2 - (x-a)^2 + (x-b)^2}{h} - 2(b-a) \\right|\n\n&= \\left| \\frac{-2ah + 2bh}{h} - 2(b-a) \\right|\\\\\n\n&= \\left| 2(b-a) - 2(b-a) \\right| \\\\\n\n&= 0.\n\n\\end{align}\n\nThus\n\n$$\n\n\\lim_{h\\to 0} \\frac{(x+h-a)^2 - (x+h-b)^2 - (x-a)^2 + (x-b)^2}{h}\n\n= 2(b-a),\n\n$$\n\nfrom which it follows that\n\n$$\n\n\\frac{\\mathrm{d}}{\\mathrm{d}x} \\left[ (x-a)^2 - (x-b)^2 \\right]\n\n= 2(b-a).\n\n$$\n\nNo theorems have been invoked here—just the definitions of the derivative and a limit. But this is\n\ntedious\n\n.\n\nLet's use all the theorems!\n\nThe point of invoking theorems is that it usually makes the job easier. 
The above computation is kind of a pain in the butt, and I am not even convinced that I don't have some typos in there somewhere—I am only certain of the result because I can compute it in other ways, using theorems. I would rather find a shortcut. In this case, a few rather useful theorems are\n\nLinearity of the Derivative:\n\n If\n\n$f$\n\n and\n\n$g$\n\n are differentiable and\n\n$\alpha$\n\n and\n\n$\beta$\n\n are constants, then\n\n$(\alpha f+\beta g)' = \alpha f' + \beta g'$\n\n.\n\nThe Chain Rule:\n\n If\n\n$f$\n\n and\n\n$g$\n\n are differentiable, then\n\n$(f\circ g)'(x) = (f'\circ g)(x) g'(x)$\n\n; and\n\nThe Power Rule:\n\n For any natural number\n\n$n$\n\n,\n\n$\frac{\mathrm{d}}{\mathrm{d}x} x^n = nx^{n-1}$\n\n.\n\nUsing these, one might note that for any constant\n\n$c$\n\n and with\n\n$f(x) = x^2$\n\n and\n\n$g(x) = x-c$\n\n, the chain rule implies that\n\n$$ \frac{\mathrm{d}}{\mathrm{d}x} (x-c)^2\n\n= (f\circ g)'(x)\n\n= (f'\circ g)(x) g'(x). $$\n\nBy the power rule,\n\n$f'(x) = 2x$\n\n, and the derivative of\n\n$g$\n\n is given by\n\n$g'(x) = 1$\n\n. Therefore\n\n$$ (f'\circ g)(x) g'(x)\n\n= 2g(x) g'(x)\n\n= 2(x-c)\cdot 1\n\n= 2(x-c). $$\n\nThen, by the linearity of the derivative and the above result,\n\n$$ \frac{\mathrm{d}}{\mathrm{d}x} \left[ (x-a)^2 - (x-b)^2 \right]\n\n= \frac{\mathrm{d}}{\mathrm{d}x} (x-a)^2 - \frac{\mathrm{d}}{\mathrm{d}x} (x-b)^2\n\n= 2(x-a) - 2(x-b)\n\n= -2a + 2b. $$\n\nNote that this computation, involving these theorems, is typically greatly simplified—much of the work can be done mentally, and doesn't really need to be justified, so most people would just write\n\n$$ \frac{\mathrm{d}}{\mathrm{d}x} \left[ (x-a)^2 - (x-b)^2 \right]\n\n= 2(x-a) - 2(x-b)\n\n= -2a + 2b. $$\n\nBut I don't like the chain rule! 
:(\n\nOn the other hand, maybe one is perfectly happy with the fact that derivatives are linear and that the power rule is \"a thing\", but one doesn't want to invoke the chain rule (for whatever reason). That's fine, too. In that case, one\n\ncan\n\n expand out the function and compute the derivative as\n\n\begin{align} \frac{\mathrm{d}}{\mathrm{d}x} \left[ (x-a)^2 - (x-b)^2 \right]\n\n&= \frac{\mathrm{d}}{\mathrm{d}x} \left[ x^2 - 2ax + a^2 - x^2 + 2bx - b^2 \right] \\\n\n&= \frac{\mathrm{d}}{\mathrm{d}x} \left[ (-2a + 2b)x + (a^2-b^2) \right] \\\n\n&= -2a+2b, \end{align}\n\nwhere the last line follows from the linearity of the derivative.\n\nSummary\n\nAs demonstrated above, it is not\n\nnecessary\n\n to invoke the chain rule in order to evaluate the given derivative. One can simply expand out the polynomial, and then invoke the linearity of the derivative and the power rule. A real masochist might even go back to the basic definitions of a derivative and a limit. However, theorems are generally proved in order to\n\navoid\n\n this kind of tedium. In cases where a theorem might simplify a computation, it is often preferable to invoke the theorem in order to save time and reduce the possibility of error.\n\nAlternatively, if one is programming a computer to perform the computation (as seems plausible given the context of the problem), it might be advisable to invoke the theorems as an optimization in order to speed up the computation. 
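All of the routes above (bare definition, chain rule, expansion) land on the same constant derivative, which makes the result easy to sanity-check numerically. A pure-Python sketch; the sample values of $a$ and $b$ are arbitrary illustrations, not from the post:

```python
def f(x, a, b):
    # the function from the question
    return (x - a)**2 - (x - b)**2

def central_difference(g, x, h=1e-6):
    # numerical derivative straight from the limit definition
    return (g(x + h) - g(x - h)) / (2 * h)

a, b = 1.5, -0.25
exact = 2 * (b - a)  # closed form: 2(x-a) - 2(x-b) = -2a + 2b

# the derivative is the same constant at every x, as derived above
for x in (-2.0, 0.0, 3.0):
    assert abs(central_difference(lambda t: f(t, a, b), x) - exact) < 1e-6
```

Because the quadratic terms cancel, $f$ is actually linear in $x$, which is why every derivation collapses to a constant.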
I don't know whether or not the chain rule is a massive optimization in\n\nthis case\n\n, but I can certainly envision situations in which it would be.", "answer_owner": "Xander Henderson", "is_accepted": false, "score": 32, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106197, "title": "Finding Symmetries for Ordinary Differential Equations", "question_text": "Consider the ODE\n\n$\\frac{dy}{dx}=\\frac{y}{x}$\n\n.\n\nHow can we find Lie group symmetries for this ODE?\n\n(And for general ODEs too, this specific ODE is just an example.)\n\nI have been thinking about the\n\n$\\Gamma_{\\epsilon}(x,y)=(e^{\\epsilon}x,e^{\\epsilon}y)$\n\n, since inserting the new coordinates leave the equation invariant. (The exponential is just used for the group property.)\n\nAre there other tricks or methods?", "question_owner": "Denis", "question_link": "https://math.stackexchange.com/questions/5106197/finding-symmetries-for-ordinary-differential-equations", "answer": { "answer_id": 5106454, "answer_text": "Are there other tricks or methods?\n\nThe standard method is to solve the linearized pde which results from Lie symmetry condition. 
For first order it is\n\n\begin{align*}\n\n\eta_{x}+\omega \left( \eta_{y}-\xi_{x}\right ) -\omega^{2}\xi_{y}-\omega_{x}\xi -\omega_{y}\eta =0\tag {A}\n\n\end{align*}\n\nFor second order ode it is\n\n\begin{align*}\n\n-\eta \omega _{y}+\left ( -3y^{\prime }\xi _{y}-2\xi _{x}+\eta _{y}\right ) \omega -\xi \omega _{x}+\left ( -y^{\prime }\eta _{y}+\left ( y^{\prime }\right ) ^{2}\xi _{y}+y^{\prime }\xi _{x}-\eta _{x}\right ) \omega _{y^{\prime }}+\eta _{xx}-\xi _{yy}\left ( y^{\prime }\right ) ^{3}+\left ( \eta _{yy}-2\xi _{yx}\right ) \left ( y^{\prime }\right ) ^{2}+\left ( 2\eta _{yx}-\xi _{xx}\right ) y^{\prime }=0\n\n\end{align*}\n\nThese pdes are solved for\n\n$\xi,\eta$\n\n by trial and error using an ansatz.\n\nThere are standard ansatzes to use, some more complicated than others.\n\nMaple, for example, uses 16 different algorithms to try to find one that works.\n\nFor your example, an ansatz with a simple polynomial works.\n\nWriting the ode as\n\n\begin{align*}\n\ny^{\prime}&=\frac{y}{x}\\\n\ny^{\prime}&= \omega\left( x,y\right)\n\n\end{align*}\n\nThe condition of Lie symmetry is the linearized PDE given by\n\n\begin{align*}\n\n\eta_{x}+\omega \left( \eta_{y}-\xi_{x}\right ) -\omega^{2}\xi_{y}-\omega_{x}\xi -\omega_{y}\eta =0\tag {A}\n\n\end{align*}\n\nTo determine\n\n$\xi,\eta$\n\n, equation (A) is solved using an ansatz.\n\nTrying bivariate polynomials of degree one gives\n\n\begin{align*}\n\n\xi &= x a_{2}+y a_{3}+a_{1}\tag{1E} \\\n\n\eta&= x b_{2}+y b_{3}+b_{1}\tag{2E}\n\n\end{align*}\n\nThe unknown coefficients are\n\n\begin{align*}\n\n\{a_{1}, a_{2}, a_{3}, b_{1}, b_{2}, b_{3}\}\n\n\end{align*}\n\nSubstituting equations (1E,2E) and\n\n$\omega$\n\n into (A) gives\n\n\begin{align*}\n\nb_{2}+\frac{y \left(b_{3}-a_{2}\right)}{x}-\frac{y^{2} a_{3}}{x^{2}}+\frac{y \left(x a_{2}+y a_{3}+a_{1}\right)}{x^{2}}-\frac{x b_{2}+y b_{3}+b_{1}}{x} = 0 
\tag{5E}\n\n\end{align*}\n\nSimplifying the above and writing it in normal form gives\n\n\begin{align*}\n\n-\frac{x b_{1}-y a_{1}}{x^{2}} = 0\n\n\end{align*}\n\nHence\n\n\begin{align*}\n\n-x b_{1}+y a_{1} = 0\tag{6E}\n\n\end{align*}\n\nSetting each coefficient in (6E) to zero gives\n\n\begin{align*}\n\na_{1}&=0\\\n\n-b_{1}&=0\n\n\end{align*}\n\nHence the unknowns are\n\n\begin{align*}\n\na_{1}&=0\\\n\na_{2}&=\text{arbitrary}\\\n\na_{3}&=\text{arbitrary}\\\n\nb_{1}&=0\\\n\nb_{2}&=\text{arbitrary}\\\n\nb_{3}&=\text{arbitrary}\n\n\end{align*}\n\nSubstituting the above solution in the ansatz (1E,2E) (using\n\n$1$\n\n as the arbitrary value for any unknown) gives\n\n\begin{align*}\n\n\xi &= 0\\\n\n\eta &= x\n\n\end{align*}\n\nThe next step is to determine the canonical coordinates\n\n$R,S$\n\n.\n\nThe canonical coordinates map\n\n$\left( x,y\right) \to \left( R,S \right)$\n\n where\n\n$\left( R,S \right)$\n\n are\n\nthe canonical coordinates which make the original ode become\n\na quadrature and hence solved by integration.\n\nThe characteristic pde which is used to find the canonical coordinates is\n\n\begin{align*}\n\n\frac{d x}{\xi} &= \frac{d y}{\eta} = dS \tag{1}\n\n\end{align*}\n\nThe above comes from the requirement that\n\n$\left( \xi \frac{\partial}{\partial x} + \eta \frac{\partial}{\partial y}\right) S(x,y) = 1$\n\n.\n\nStarting with the first pair of odes in (1)\n\ngives an ode to solve for the independent variable\n\n$R$\n\n in the\n\ncanonical coordinates, where\n\n$S(R)$\n\n. Since\n\n$\xi=0$\n\n then in this special case\n\n\begin{align*}\n\nR &= x\n\n\end{align*}\n\n$S$\n\n is found from\n\n\begin{align*}\n\nS &= \int{ \frac{1}{\eta}} dy\\\n\n &= \int{ \frac{1}{x}} dy\n\n\end{align*}\n\nWhich results in\n\n\begin{align*}\n\nS&= \frac{y}{x}\n\n\end{align*}\n\nNow that\n\n$R,S$\n\n are found, we need to set up the ode in these coordinates. 
This\n\nis done by evaluating\n\n\begin{align*}\n\n\frac{dS}{dR} &= \frac{ S_{x} + \omega(x,y) S_{y} }{ R_{x} + \omega(x,y) R_{y} }\tag{2}\n\n\end{align*}\n\nWhere in the above\n\n$R_{x},R_{y},S_{x},S_{y}$\n\n are all partial\n\nderivatives and\n\n$\omega(x,y)$\n\n is the right hand side of the original ode given by\n\n\begin{align*}\n\n\omega(x,y) &= \frac{y}{x}\n\n\end{align*}\n\nEvaluating all the partial derivatives gives\n\n\begin{align*}\n\nR_{x} &= 1\\\n\nR_{y} &= 0\\\n\nS_{x} &= -\frac{y}{x^{2}}\\\n\nS_{y} &= \frac{1}{x}\n\n\end{align*}\n\nSubstituting all the above in (2) and simplifying gives the ode in canonical coordinates.\n\n\begin{align*}\n\n\frac{dS}{dR} &= 0\tag{2A}\n\n\end{align*}\n\nThe above is a quadrature ode. This is the whole point\n\nof the Lie symmetry method. It converts a first order ode, no matter how\n\ncomplicated it is, to one that can be solved by\n\nintegration when the ode is in the canonical coordinates\n\n$R,S$\n\n.\n\nSince the ode has the form\n\n$\frac{d}{d R}S \left(R \right)=f(R)$\n\n, then we\n\nonly need to integrate\n\n$f(R)$\n\n.\n\n\begin{align*}\n\n\int{dS} &= \int{0\, dR} + c_2 \\\n\nS \left(R \right) &= c_2\n\n\end{align*}\n\nTo complete the solution, we just need to transform the above back\n\nto\n\n$x,y$\n\n coordinates. Since\n\n$S=\frac{y}{x}$\n\n then the solution in the natural\n\ncoordinates is\n\n\begin{align*}\n\ny = c_2 x\n\n\end{align*}\n\nOf course for this ode, there is no point in using Lie symmetry to solve it since the ode is already in quadrature form. 
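The infinitesimals $(\xi,\eta)=(0,x)$ and the canonical coordinate $S=y/x$ found above can be sanity-checked numerically. A pure-Python sketch of condition (A) via finite differences; the sample points are arbitrary:

```python
def omega(x, y):           # right-hand side of y' = y/x
    return y / x

def xi(x, y):              # infinitesimals from the polynomial ansatz
    return 0.0

def eta(x, y):
    return x

def d(f, x, y, wrt, h=1e-6):
    # central finite difference in one variable
    if wrt == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def residual(x, y):
    # left-hand side of the linearized symmetry condition (A)
    w = omega(x, y)
    return (d(eta, x, y, 'x') + w * (d(eta, x, y, 'y') - d(xi, x, y, 'x'))
            - w**2 * d(xi, x, y, 'y')
            - d(omega, x, y, 'x') * xi(x, y)
            - d(omega, x, y, 'y') * eta(x, y))

# (A) holds at sample points away from x = 0
for x, y in [(1.0, 2.0), (3.0, -1.5), (-2.0, 0.5)]:
    assert abs(residual(x, y)) < 1e-6

# and the family y = c*x recovered from S = y/x = const solves y' = y/x
c = 0.7
assert all(abs(omega(x, c * x) - c) < 1e-12 for x in (1.0, 2.0, 5.0))
```

The same residual function can be pointed at any candidate $(\xi,\eta)$ for any first order $\omega$, which is a cheap way to screen ansatz guesses before doing the algebra by hand.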
But the same method above works for any first order ode, if one can solve the Lie symmetry linearized PDE.", "answer_owner": "Nasser", "is_accepted": true, "score": 4, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106265, "title": "Relation between Jordan measure and Riemann integral", "question_text": "It's well-known that if\n\n$A \\subset \\mathbb{R}^n$\n\n is a bounded set, then the characteristic function\n\n$\\chi_A$\n\n is Riemann-integrable if and only if\n\n$A$\n\n is measurable in the sense of Jordan. In general, is there any explicit relation between Darboux lower/upper integral and Jordan inner/outer measure? Here is the definitions:\n\nThe Jordan inner measure\n\n$m_{∗,(J)}(A)$\n\n of\n\n$A$\n\n is defined as\n\n$\\sup\\{m(B) \\mid B \\subset A, B \\text{ is elementary}\\}$\n\n, where an elementary set is a finite union of boxes.\n\nThe Jordan outer measure\n\n$m^{∗,(J)} (A)$\n\n of\n\n$A$\n\n is defined as\n\n$\\inf\\{m(B) \\mid A \\subset B, B \\text{ is elementary}\\}$\n\n, where an elementary set is a finite union of boxes.\n\nThe lower Darboux integral of\n\n$\\chi_A$\n\n is defined as\n\n$\\sup \\{L(\\chi_A, P): P \\ \\text{is a partition of box} \\ R \\supset A\\}$\n\n where\n\n$L(\\chi_A, P) = \\sum_{n} \\inf_{x \\in R_n} \\mathbf{1}_A (x)\\mathrm{Vol}(R_n)$\n\n.\n\nThe upper Darboux integral of\n\n$\\chi_A$\n\n is defined as\n\n$\\inf \\{U(\\chi_A, P): P \\ \\text{is a partition of box} \\ R \\supset A\\}$\n\n where\n\n$U(\\chi_A, P) = \\sum_{n} \\sup_{x \\in R_n} \\mathbf{1}_A (x)\\mathrm{Vol}(R_n)$\n\n.\n\nNote that in these definitions, a box\n\n$B$\n\n is a subset of\n\n$\\mathbb{R}^n$\n\n of the form\n\n$I_1 \\times I_2 \\times \\dots \\times I_n$\n\n, where each\n\n$I_i$\n\n is an interval. Intuitively, I expect that Darboux lower/upper integral be equal to Jordan inner/outer measure but I couldn't prove that. 
Also it's interesting to consider the general case and replace the characteristic function\n\n$\chi_A$\n\n by some other bounded function\n\n$f(x)$\n\n; maybe in this case equality doesn't hold?", "question_owner": "S.H.W", "question_link": "https://math.stackexchange.com/questions/5106265/relation-between-jordan-measure-and-riemann-integral", "answer": { "answer_id": 5106405, "answer_text": "Inner Jordan measure and\n\n$\sup L(\chi_A, P)$\n\n are the same.\n\nWe observe that if\n\n$R$\n\n is a rectangle containing\n\n$A$\n\n and\n\n$P$\n\n is a partition of\n\n$R$\n\n,\n\nthen\n\n$L(\chi_A, P)$\n\n is the sum of the areas of the sub-rectangles completely contained in\n\n$A$\n\n,\n\nand therefore\n\n$L(\chi_A, P)\le m_{\text{inner}}(A)$\n\n, by definition.\n\nTherefore\n\n$\sup L(\chi_A, P)\le m_{\text{inner}}(A)$\n\n.\n\nFor the reverse inequality, if\n\n$(R_i)$\n\n are finitely many rectangles contained in\n\n$A$\n\n, then they are contained in\n\n$R$\n\n, a rectangle containing\n\n$A$\n\n.\n\nWe can refine\n\n$(R_i)$\n\n to get a partition of\n\n$R$\n\n and as each\n\n$R_i$\n\n is contained in\n\n$A$\n\n (see the figures below. Two rectangles were inside the circle and we then gave a common refinement. 
Sub-rectangles totally inside the circle contain the two original rectangles), the sum of the areas of the\n\n$R_i$\n\n would be no greater than the sum of the areas of the sub-rectangles completely contained in\n\n$A$\n\n, i.e.\n\n$L(\chi_A, P)$\n\n.\n\nTherefore\n\n$\sum m(R_i)\le L(\chi_A, P)\le \sup L(\chi_A,\tilde P)$\n\n and the reverse inequality holds.\n\nConsidering\n\n$A= R\setminus (R\setminus A)$\n\n gives the symmetric result for outer Jordan measure.", "answer_owner": "Asigan", "is_accepted": true, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4905102, "title": "Complex numbers : finding real & imaginary numbers and the magnitude", "question_text": "Given a complex series defined as\n\n$z_{4}=\sum^{26}_{k=0}(2i)^{k}$\n\n, determine the real and imaginary parts of the resulting complex number. Additionally, calculate the magnitude (modulus) of this complex number.\n\nMy Thoughts are:\n\nUsing the geometric series\n\n$S_n= a(r^{n+1} -1)/(r-1)$\n\n for\n\n$a=1 , r=2i , n=26$\n\nNow as I know that\n\n$i^2=-1$\n\n , so\n\n$i^{27} = i^3 =-1$\n\n$\implies (2i)^{27} = 2^{27} \times i^{27} = -2^{27} i$\n\n$\implies (1-2^{27} i)/(-3) = -1/3 - 2i/3 $\n\nSo\n\n$Re(z) = -1/3$\n\n ,\n\n$Im(z)= -2^{27}/3$\n\n.\n\nBut my problem then is how I'm going to find\n\n$|z|$\n\n ? According to the def.\n\n$|z|= √(-1/3)+(-2^{27}/3)^2$\n\nWhere did I go wrong? 
Should I use a different formula?", "question_owner": "user1315456", "question_link": "https://math.stackexchange.com/questions/4905102/complex-numbers-finding-real-imaginary-numbers-and-the-magnitude", "answer": { "answer_id": 4905115, "answer_text": "First of all\n\n$i^{27}=-i$\n\n but I think that you just had a finger error.\n\nThen you have that\n\n$z=\frac{r^{n+1}-1}{r-1}$\n\n with\n\n$r=2i$\n\n, then\n\n\begin{align*}\n\nz=\frac{-2^{27}i-1}{2i-1}&= \frac{-2^{27}i-1}{2i-1}\cdot \frac{2i+1}{2i+1}\\\n\n&=\frac{2^{28}-1-(2^{27}+2)i}{-5}\n\n\end{align*}\n\nSo,\n\n$Re(z)=\frac{2^{28}-1}{-5}$\n\n,\n\n$Im(z)=\frac{2^{27}+2}{5}$\n\n and\n\n\begin{align*}\n\n|z|&=\sqrt{\left(\frac{2^{28}-1}{-5}\right)^2+\left(\frac{2^{27}+2}{5}\right)^2}\\\n\n&=\sqrt{\frac{2^{56}+2^{54}+5}{25}}\n\n\end{align*}", "answer_owner": "Carlos Reyes Valdivieso", "is_accepted": 
false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106246, "title": "Question regarding Pushforward(differential)", "question_text": "I have a question about the following:\n\nLet\n\n$\\phi:U \\rightarrow Y$\n\n be a smooth map from an open subset\n\n$U$\n\n of\n\n$\\mathbb{R}^m $\n\nto an open subset\n\n$V$\n\n of\n\n$\\mathbb{R}^n$\n\n. For any point\n\n$x$\n\n in\n\n$U$\n\n, the Jacobian of\n\n$\\phi$\n\n at\n\n$x$\n\n is the matrix representation\n\nof the total derivative of\n\n$\\phi$\n\n at\n\n$x$\n\n, which is a linear map\n\n$d\\phi_x:T_x\\mathbb{R}^m \\rightarrow\n\n T_{\\phi(x)}\\mathbb{R}^n$\n\n between their tangent spaces. Note that the\n\ntangent spaces\n\n$T_x\\mathbb{R}^m,T_{\\phi(x)}\\mathbb{R}^n $\n\n are\n\nisomorphic to\n\n$\\mathbb{R}^m$\n\n and\n\n$\\mathbb{R}^n$\n\n.\n\nHow those this exactly work?\n\nConsider\n\n$c(t):=(2t,t^2)$\n\n, then\n\n$c'(t)=(2,2t)$\n\n.\n\nSo the tangent vector in the point\n\n$t_0$\n\n is\n\n$(2,2t_0)$\n\n, so the tangent space in\n\n$t_0$\n\n has only dimension\n\n$1$\n\n.\n\nSo how could this be isomorphic to\n\n$\\mathbb{R}^2$\n\n?", "question_owner": "Maxi", "question_link": "https://math.stackexchange.com/questions/5106246/question-regarding-pushforwarddifferential", "answer": { "answer_id": null, "answer_text": "(Нет ответов)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5102713, "title": "A closed form of the integral $\\int_{0}^{\\infty}\\arcsin\\left(\\frac{\\sin\\left(x\\right)}{x}\\right)dx$", "question_text": "$\\DeclareMathOperator{\\sinc}{sinc}$\n\n My friends gave me this hard looking integral that he came up with using demos but he couldn't find a solution.\n\n$$\\int_{0}^{\\infty}\\arcsin\\left(\\sinc\\left(x\\right)\\right)dx$$\n\nwhere\n\n$$\\sinc(x)=\\ \\frac{\\sin\\left(x\\right)}{x}.$$\n\n$\\textbf{Question: Is there a 
closed form for this integral and if so, what is it?}$\n\nAlthough Desmos says it diverges, the integral's convergence can be proven. Recall that\n\n$$\\sinc\\left(x\\right)-\\sinc^{2}\\left(x\\right) \\leq \\arcsin\\left(\\sinc\\left(x\\right)\\right) \\leq \\sinc\\left(x\\right)+\\sinc^{2}\\left(x\\right)$$\n\nfor all real values of\n\n$x$\n\n, because\n\n$$x-x^2 \\leq \\arcsin(x) \\leq x+x^2$$\n\n over the image of\n\n$\\sinc(x)$\n\n.\n\nThe two bounding functions both have convergent values of\n\n$0$\n\n and\n\n$\\pi$\n\n over the positive real axis respectively. I also noticed that any integral of\n\n$$\\int_{0}^{2\\pi k}\\arcsin\\left(\\sinc\\left(x\\right)\\right)dx, k \\in \\mathbb{N}$$\n\n is a lower bound to the integral and conversely,\n\n$$\\int_{0}^{2\\pi k +\\pi}\\arcsin\\left(\\sinc\\left(x\\right)\\right)dx, k \\in \\mathbb{N}$$\n\n is a upper bound. These bounding techniques, however, do not prove the existence of, or help solve for, a closed form solution. One attempt i made at finding a solution is writing\n\n$\\arcsin(x)$\n\n as a power series.\n\n$$\\arcsin(x) = \\sum_{n=0}^{\\infty} \\frac{(2n)!}{4^n (n!)^2 (2n+1)} x^{2n+1}$$\n\n$$\\arcsin(\\sinc(x)) = \\sum_{n=0}^{\\infty} \\frac{(2n)!}{4^n (n!)^2 (2n+1)} \\frac{\\sin(x)^{2n+1}}{x^{2n+1}}$$\n\n$$\\int_{0}^{\\infty}\\arcsin\\left(\\sinc\\left(x\\right)\\right)dx= \\sum_{n=0}^{\\infty} \\frac{(2n)!}{4^n (n!)^2 (2n+1)} \\int_0^{\\infty}\\frac{\\sin(x)^{2n+1}}{x^{2n+1}}dx$$\n\nThen, using\n\n$$\n\n\\int_{0}^{\\infty} \\left( \\frac{\\sin x}{x} \\right)^{m} \\, dx\n\n= \\frac{\\pi}{2^{m} (m-1)!}\n\n\\sum_{k=0}^{\\lfloor m/2 \\rfloor}\n\n(-1)^{k} \\binom{m}{k} (m - 2k)^{m-1}\n\n$$\n\n$$\\int_{0}^{\\infty}\\arcsin\\left(\\sinc\\left(x\\right)\\right)dx= \\sum_{n=0}^{\\infty} \\frac{(2n)!}{4^n (n!)^2 (2n+1)} \\frac{\\pi}{2^{2n+1}(2n)!}\n\n\\sum_{k=0}^{n} (-1)^k \\binom{2n+1}{k}\\,(2n+1-2k)^{2n}$$\n\n$$= \\sum_{n=0}^{\\infty} \\sum_{k=0}^{n} \\binom{2n+1}{k} \\frac{\\pi (-1)^k 
\\,(2n+1-2k)^{2n}}{(n!)^2 (2n+1) 2^{4n+1}}.$$\n\nHowever, I have not been able to get any further in solving this sum or any other forms of the integral so I ask you guys for help.", "question_owner": "d ds", "question_link": "https://math.stackexchange.com/questions/5102713/a-closed-form-of-the-integral-int-0-infty-arcsin-left-frac-sin-leftx-r", "answer": { "answer_id": null, "answer_text": "(Нет ответов)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5106147, "title": "Really struggling to understand derivative of $\\sin(x)$ being $\\cos(x)$?", "question_text": "I am doing some self-study on my own and trying to understand some calculus on a rigorous level. That said, I really want to be able to prove and understand why the derivative of\n\n$\\sin(x)$\n\n is\n\n$\\cos(x)$\n\n. Yet this seems to be a very deep rabbit hole and an elementary proof seems elusive. For one thing, I need to be able to prove\n\n$\\sin(a+b) = \\sin(a)\\cos(b) + \\cos(a)\\sin(b)$\n\n. Many proofs of this fact seem to require Euler's identity, power series, or linear algebra. It is crazy that what is presented so early in calculus is actually so complex. 
I guess I have a couple questions:\n\nIs there an elementary proof of the identity\n\n$\\sin(a+b) = \\sin(a)\\cos(b) + \\cos(a)\\sin(b)$\n\n?\n\nAre there any good resources that can help me build up the required tools to derive\n\n$\\frac{d}{dx} \\sin(x) = \\cos(x)$\n\n?", "question_owner": "Joe", "question_link": "https://math.stackexchange.com/questions/5106147/really-struggling-to-understand-derivative-of-sinx-being-cosx", "answer": { "answer_id": null, "answer_text": "(Нет ответов)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4049200, "title": "Understanding Higher-order Differentiability Conceptually", "question_text": "I understand conceptually that a function\n\n$f\\colon A\\to\\mathbf R$\n\n is differentiable at a point\n\n$a\\in A$\n\n if it can be well approximated by a line there; more precisely, if we can find a constant\n\n$f'(a)$\n\n such that\n\n$$f(a+h) = f(a) + f'(a)h+o(h).$$\n\nMy goal: I want to understand (intuitively) what it means when a function is twice differentiable, or thrice, etc, and likewise what it means when it isn't.\n\nIn a very loose sense, I have the intuition that a function is somehow smooth\n\ner\n\n around a point if it has higher order differentiability there, and I suppose this somehow corresponds to the fact that it can be approximated well not only by a line but even better by a polynomial. (e.g. twice differentiability is\n\n$f(a+h) = f(a) + f'(a)h + \\tfrac12f''(a)h^2 + o(h^2)$\n\n, etc.). Does this mean polynomials canonically define the notion of \"smoothness\"?\n\nPerhaps I can best get across what I'm asking for with an example. When I look at the graphs of\n\n$$y=x|x| \\qquad \\text{and} \\qquad y=x^3$$\n\nfor instance, I see that the latter grows quicker than the former, but around zero, they both basically look \"smooth\" to me. 
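One way to quantify the difference without pictures is to compare one-sided second difference quotients at $0$: for $x|x|$ they converge to $+2$ from the right and $-2$ from the left, so the second derivative fails to exist there, while for $x^3$ both sides agree. A quick pure-Python sketch; the step size is an arbitrary choice:

```python
def f(x):
    return x * abs(x)      # x|x|

def g(x):
    return x**3

def second_diff(func, h):
    # one-sided second difference quotient at 0: (f(2h) - 2 f(h) + f(0)) / h^2
    return (func(2 * h) - 2 * func(h) + func(0)) / h**2

h = 1e-4
# x|x|: right- and left-hand values disagree (+2 vs -2), so f''(0) does not exist
assert abs(second_diff(f, h) - 2) < 1e-6
assert abs(second_diff(f, -h) + 2) < 1e-6
# x^3: both one-sided quotients tend to 0, consistent with g''(0) = 0
assert abs(second_diff(g, h)) < 1e-3
assert abs(second_diff(g, -h)) < 1e-3
```

Note that the *symmetric* second difference $(f(h)-2f(0)+f(-h))/h^2$ is identically $0$ for $x|x|$ and would hide the kink, which is why the one-sided version is used here.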
Yes I know that one is made of the absolute value function which is pointy, but if you just showed me these two pictures:\n\nI wouldn't really feel that one is somehow \"smoother\" around zero than the other.\n\nIs there a better way I can think about this?", "question_owner": "lamasabachthani", "question_link": "https://math.stackexchange.com/questions/4049200/understanding-higher-order-differentiability-conceptually", "answer": { "answer_id": 4051768, "answer_text": "Here are some thoughts:\n\nTo study, as always, I use my favourite tool of\n\nTaylor's theorem\n\n(*):\n\n$$ f(a+h) = f(a) + h f'(a) + \\frac{h^2}{2} f''(a) +O(h^3)$$\n\nNow, let's analyze\n\n$f''(a)$\n\n, if\n\n$f(a+h)$\n\n is not differentiable, it means the left hand derivative and right hand derivative are not equal i.e(Or may be the limit themself don't exist, but let us forget the case for now):\n\n$$ \\lim_{h \\to 0} \\frac{f'(a+h) - f'(a)}{h} \\neq \\frac{ f'(a) - f'(a-h ) }{h}$$\n\nLet's call the right hand 2nd derivative as\n\n$f_{R}$\n\n and left hand 2nd derivative as\n\n$f_{L}$\n\n, this leads to 'two' series of the function locally. That is, for points to the right of\n\n$f$\n\n, we have\n\n$$ f(a+h) = f(a) + hf'(a) + \\frac{h^2}{2} f_{R}''(a) + O(h^3)$$\n\nAnd another series for the left of\n\n$f$\n\n as:\n\n$$ f(a-h) = f(a) - h f'(a) + \\frac{h^2}{2} f_{L}''(a) + O(h^3)$$\n\nNow, here's the deal, the second order derivative and higher terms only become really relevant (for most nice functions) after\n\n$h>1$\n\n, this because if\n\n$h<1$\n\n then\n\n$h^2 0,$\n\n the correct integral is\n\n$$\n\n\\int \\tan x \\,\\mathrm dx = \\ln(\\sec x) + C.\n\n$$\n\nHow can we write a function that is equal to\n\n$\\ln(\\sec x)$\n\n when\n\n$\\sec x > 0$\n\n but is equal to\n\n$\\ln(-\\sec x)$\n\n when\n\n$\\sec x < 0$\n\n?\n\nThe answer is\n\n$ \\ln\\lvert\\sec x\\rvert. 
$\n\nThat is, when we write\n\n$$\n\n\\int \\tan x \\,\\mathrm dx = \\ln\\lvert\\sec x\\rvert + C\n\n$$\n\nwe get an equation that is good at any\n\n$x$\n\n where\n\n$\\sec x$\n\n is defined.\n\nThere's really nothing more to the use of the modulus than that.\n\nAs noted in a comment, the constant\n\n$C$\n\n in this integral is only a \"piecewise\" constant. Specifically, in each interval\n\n$\\left(-\\frac\\pi2,\\frac\\pi2\\right),$\n\n$\\left(\\frac\\pi2,\\frac{3\\pi}2\\right),$\n\nthat is, within each interval where\n\n$\\tan x$\n\n is defined,\n\nwe are allowed to use a different constant in the integral of\n\n$\\tan x.$\n\nThat has nothing to do with the use of the modulus per se;\n\nit's just a thing we think about when we have a function that's only integrable within disjoint intervals like these.", "answer_owner": "David K", "is_accepted": false, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105520, "title": "Big picture of Vector Calculus", "question_text": "I'm taking my first course in vector calculus and I'm trying to understand the goal of the subject. So here is my understanding:\n\nWe \"generalise\" single variable calculus by arriving at the fundamental theorem of calculus:\n\n$$ \\int_a^b f(x)\\,dx = F(b) - F(a) $$\n\nWhich basically says: In order to integrate over an interval on the real line (a bulk region) you need only to know information about the end points (a calculation on the boundary of the bulk region).\n\nWe now ask: Will the same principle apply to\n\n$ \\mathbb{R}^2 $\n\n ,\n\n$ \\mathbb{R}^3 $\n\n? In general, functions of the form\n\n$ f:\\mathbb{R}^n \\to \\mathbb{R}^m $\n\n ?\n\nAnswering this question is the why of Vector Calculus (from my understanding).\n\nThe answer to the question is: Yes the same ideas hold in higher dimensions. 
And so, we arrive at the big theorems of Vector Calculus.\n\nGreen's Theorem\n\nStokes' Theorem\n\nGauss' Divergence Theorem\n\nWhere each theorem expresses the same underlying principle: integrating a derivative over a region equals integrating over the region’s boundary.\n\nAnd then we have the big theorem, the \"master theorem\":\n\nGeneralised Stokes' Theorem, which contains all of the above theorems.\n\nIs my understanding correct? Are there any flaws or missing points? I would really appreciate some guidance.\n\nComputational questions (parametrisation):\n\nIn principle, the theorems are coordinate free. Is that the main reason why we need to parametrise the boundary curve\n\n$ \partial D $\n\n? So that we can have an orientation? What is it about parametrisation that is so important to vector calculus? Is parametrising just a computational trick or something deeper?", "question_owner": "Sebastian", "question_link": "https://math.stackexchange.com/questions/5105520/big-picture-of-vector-calculus", "answer": { "answer_id": 5105531, "answer_text": "Not an answer but something worth sharing. Here is a pretty nice visual representation of how the different theorems are connected (I got this from Joseph Breen's website - an assistant professor at the University of Alabama)\n\nEach box represents an integral. The arrows between the boxes represent a way to transition from one to the other. For example, one way to evaluate a line integral is to use a parametrization to convert it into a single variable integral. Or one can convert a surface integral to a triple integral using the divergence theorem. The boxes and arrows above summarize these relationships. 
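As a concrete instance of one arrow in that diagram (line integral to double integral via Green's theorem), here is a numeric check; the field $F=(-y,x)$ and the unit disk are my own illustration, not taken from the linked posts:

```python
import math

# Green's theorem: the circulation of F = (P, Q) = (-y, x) around the unit
# circle equals the integral of Q_x - P_y = 2 over the unit disk, i.e. 2*pi.
def circulation(n=100_000):
    # midpoint rule for the parametrized line integral over x = cos t, y = sin t
    total, dt = 0.0, 2 * math.pi / n
    for k in range(n):
        t = (k + 0.5) * dt
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t) * dt, math.cos(t) * dt
        total += (-y) * dx + x * dy
    return total

# left-hand side (boundary integral) matches 2 * area of the disk
assert abs(circulation() - 2 * math.pi) < 1e-9
```

For this particular field the integrand reduces to $\sin^2 t + \cos^2 t = 1$, so the agreement is exact up to floating-point accumulation; a less symmetric field would show genuine quadrature error instead.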
Obviously this is a simplification, because there are conditions that need to be satisfied to apply theorems like Green’s theorem and the like.\n\nIf you would like more information on these theorems I suggest you look at these posts:\n\nHow would you discover Stokes's theorem?\n\nWhen integrating how do I choose wisely between Green's, Stokes' and Divergence?\n\nExplaining Green's Theorem for Undergraduates", "answer_owner": "Bumblebee", "is_accepted": false, "score": 10, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 456029, "title": "Change of Variables in Limits (Part 1)", "question_text": "After\n\nanswering this question\n\n, I was wondering if the following generalization holds true:\n\nClaim:\n\n If $\\lim \\limits_{x\\to a}g(x)=b$, then $\\lim \\limits_{x\\to a}f(g(x))=\\lim \\limits_{y\\to b}f(y)$.\n\nI've seen some people use this change of variables before when evaluating difficult limits, but I haven't seen this presented as a theorem in a textbook. Is this claim true? If not, is the claim salvageable with additional hypotheses? For example, must $f$ be continuous for the claim to hold?", "question_owner": "Adriano", "question_link": "https://math.stackexchange.com/questions/456029/change-of-variables-in-limits-part-1", "answer": { "answer_id": 456038, "answer_text": "You certainly need some assumptions on\n\n$f,g$\n\n, as\n\nthis\n\n answer shows (and includes other cases).\n\nTHM\n\n Suppose that\n\n$f(y)\\to \\ell$\n\n as\n\n$ y\\to b$\n\n. Suppose that\n\n${\\rm im}\\, g\\subseteq {\\rm dom}\\, f$\n\n, and suppose that\n\n$g(x)\\to b$\n\n as\n\n$x\\to a$\n\n, yet\n\n$g$\n\n does\n\nnot\n\n attain the value\n\n$b$\n\n in a neighborhood\n\n$B(x,\\eta)-\\{x\\}$\n\n. Then\n\n$$f\\circ g(x)\\to \\ell \\;\\;\\text{ as } x\\to a$$\n\nP\n\n Let\n\n$\\epsilon >0$\n\n be given. 
Since\n\n$f\\to\\ell $\n\n as\n\n$y\\to b$\n\n there exists\n\n$\\delta >0$\n\n such that\n\n$0<|y-b|<\\delta$\n\n implies\n\n$|f(y)-\\ell|<\\epsilon$\n\n. Since\n\n$g\\to b$\n\n as\n\n$x\\to a$\n\n, there exists\n\n$\\eta >\\delta'>0$\n\n such that\n\n$0<|x-a|<\\delta'$\n\n implies\n\n$0<|g(x)-b|<\\delta$\n\n. But then we will have\n\n$|f(g(x))- \\ell|<\\epsilon$\n\n whenever\n\n$0<|x-a|<\\delta'$\n\n, so the claim follows.\n\n$\\blacktriangle$\n\nThis then gives the standard\n\nCOR\n\n Suppose that\n\n$f$\n\n is continuous at\n\n$g(a)$\n\n and\n\n$g$\n\n is continuous at\n\n$a$\n\n. Then\n\n$f\\circ g$\n\n is continuous at\n\n$a$\n\n.", "answer_owner": "Pedro", "is_accepted": true, "score": 18, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1722051, "title": "Derivative of $\\ln |x|$ in the distributional sense", "question_text": "Consider the function\n\n$\\ln |x|$\n\n; since it is locally integrable we can form the distribution\n\n$$(\\ln |x|,\\phi)=\\int_{-\\infty}^{\\infty}\\ln |x|\\phi(x)dx.$$\n\nNow, I want to show that in the sense of distributions we have\n\n$\\ln |x|' = \\operatorname{Pv}\\frac{1}{x}$\n\n. My obvious try was to substitute directly the definition of the derivative for distributions:\n\n$$(\\ln |x|', \\phi)=-(\\ln |x|,\\phi')=-\\int_{-\\infty}^{\\infty} \\ln |x|\\phi'(x)dx,$$\n\nthe obvious thing to do would be to split this into two integrals:\n\n$$(\\ln |x|', \\phi)=-\\int_{-\\infty}^0 \\ln(-x)\\phi'(x)dx - \\int_0^\\infty \\ln (x) \\phi'(x)dx.$$\n\nNow, I've seen the question\n\nDerivative of a distribution\n\n and it tells us what to do next: we simply rewrite all of that as\n\n$$(\\ln |x|,\\phi')=\\lim_{\\epsilon\\to 0^+}\\left(\\int_{-\\infty}^{-\\epsilon}\\ln |x|\\phi'(x)dx+\\int_{\\epsilon}^\\infty \\ln |x|\\phi'(x)dx\\right)$$\n\nBut I can't understand where this limit comes from. I mean how does one get from where I stopped all the way to this line?
Because obviously after that just integration by parts is enough to get what we want.\n\nMy doubt is how that\n\n$\\epsilon \\to 0^+$\n\n really appeared.", "question_owner": "Gold", "question_link": "https://math.stackexchange.com/questions/1722051/derivative-of-ln-x-in-the-distributional-sense", "answer": { "answer_id": 1722086, "answer_text": "The derivative of\n\n$\\ln(|x|)$\n\n is\n\n$\\operatorname{Pv} \\frac{1}{x}$\n\n:\n\n$$\\begin{align}\n\n\\langle \\ln(|x|)', \\phi \\rangle &= -\\int_{-\\infty}^\\infty \\ln(|x|) \\phi'(x) \\,dx\\\\\\\\\n\n&=- \\lim_{\\varepsilon \\to 0^+} \\int_{|x|>\\varepsilon} \\ln(|x|) \\phi'(x) \\,dx \\\\\\\\\n\n&=\\lim_{\\varepsilon \\to 0^+} \\left(\\underbrace{-\\ln(|\\varepsilon|) (\\phi(-\\varepsilon)-\\phi(\\varepsilon))}_{\\to 0} + \\int_{|x|>\\varepsilon} \\frac{ \\phi(x)}{x} \\,dx\\right)\\\\\\\\\n\n&=\\lim_{\\varepsilon \\to 0^+} \\int_{|x|>\\varepsilon} \\frac{ \\phi(x)}{x} dx\n\n\\end{align}$$", "answer_owner": "Tryss", "is_accepted": false, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4203313, "title": "Solving $\\exp(-x)=\\sin(x)$ analytically", "question_text": "This is related to finding out the time it takes for the capacitor to discharge in a full-wave rectifier (\n\nlink to ee.se\n\n), and it's of the form:\n\n$$\\mathrm{e}^{-x}=\\sin(x) \\tag{1}$$\n\nTo my knowledge it's impossible to determine\n\n$x$\n\n analytically, because whether with\n\n$\\ln()$\n\n or\n\n$\\arcsin()$\n\n, one term gets buried. Using the exponential equivalent formula for\n\n$\\sin(x)$\n\n doesn't seem to work, either. But I'm not fluent in math, so I'm asking it here: is it possible, maybe with some tricks, cheats, anything?\n\nI shouldn't have presumed people to know what a full-wave rectifier with parallel RC load is or does, so for the sake of clarity this is what interests me:\n\nThe (ideal) theory is that the sine wave charges the capacitor.
At the peak and for a short interval after it (on the downslope), the voltage across the capacitor is the same as the sine. When the two slopes are equal, the capacitor voltage is no longer a sine but an\n\n$\\mathrm{e}^{-x}$\n\n, continuing from the last voltage value. The sine has an absolute value, so the second half of the period sees the value of the sine rising again, until it meets the discharging exponential -- this is what is needed here. The cycle continues:\n\nFor the sake of simplicity, here, on math.se, the question deals with a generic formula, (1), not the absolute value of it, and no complications with finding out the time and value when the capacitor voltage stops being a sine and continues as an exponential. There are also no time constants involved, or frequencies, therefore, the simplified version looks like this:\n\nThe capacitor discharges with the blue trace until it meets the red trace. Only the first point of intersection is needed (black dot), any other subsequent points are discarded (green circle). If this is solved, then\n\n$\\mathrm{e}^{-ax}=\\sin(bx)$\n\n can also be solved, and even the moment when the waveform switches shapes, though I suspect that will be a tad more complicated (and not part of this question).", "question_owner": "a concerned citizen", "question_link": "https://math.stackexchange.com/questions/4203313/solving-exp-x-sinx-analytically", "answer": { "answer_id": 4203453, "answer_text": "You can find a REALLY COMPLICATED analytic expression for the solution by means of the\n\nLagrange inversion theorem\n\n. 
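Before the series, a quick numeric cross-check (my own sketch, not part of this answer): $f(x)=\mathrm{e}^{-x}-\sin(x)$ changes sign on $[0,1]$, so the first intersection can be bracketed and found by bisection.

```python
import math

# Bisection for the first intersection of e^{-x} and sin(x)
# (numeric sketch, not part of the analytic answer).
# f(0) = 1 > 0 and f(1) = e^{-1} - sin(1) < 0, so a root lies in [0, 1].
def first_root(f, lo=0.0, hi=1.0, iters=80):
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # sign change in the left half
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: math.exp(-x) - math.sin(x)
x0 = first_root(f)
print(x0)  # ≈ 0.5885, the first point of intersection (the black dot)
```

The same bracketing idea applies unchanged to $\mathrm{e}^{-ax}=\sin(bx)$, as long as the first sign change is bracketed.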
After some simplifications the answer is\n\n$$x = \\frac{1}{2}+\\sum_{n=2}^\\infty\\frac{1}{n!\\,2^n}\\sum_{k=1}^{n-1}(-1)^k n^{\\overline k}B_{n-1,k}(a_1,a_2,\\ldots,a_{n-k})$$\n\nwhere\n\n$n^{\\overline k}$\n\nis the\n\nrising factorial\n\n,\n\n$B_{i,j}$\n\n are the\n\nBell polynomials\n\n and\n\n$a_j$\n\n is given by\n\n$$a_j =\n\n\\begin{cases}\n\n -\\frac{1}{2 (j+1)} & j \\equiv 1 \\pmod 2 \\\\\n\n 0 & j \\equiv 2 \\pmod 4 \\\\\n\n \\frac{1}{j+1} & j \\equiv 0 \\pmod 4\n\n\\end{cases}$$", "answer_owner": "jjagmath", "is_accepted": true, "score": 15, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 3450907, "title": "One sided limit of an integral of a regulated function", "question_text": "I am working on the following exercise:\n\nConsider a non-negative regulated function\n\n$\\phi$\n\n on\n\n$\\mathbb{R}$\n\n with\n\n$$\\int_{-\\infty}^{\\infty} \\phi(x) dx = 1$$\n\nand let\n\n$(a_n)_n$\n\n be a sequence of positive numbers such that\n\n$$\\lim_{n\\rightarrow \\infty} a_n = \\infty.$$\n\nShow that the sequence\n\n$(\\delta_n(x))_n$\n\n defined by\n\n$\\delta_n(x) := a_n\\phi(a_nx)$\n\n is a Dirac-Sequence. Use this to show that\n\n$$\\lim_{h \\rightarrow 0^+} \\frac{1}{\\pi} \\int_{-1}^1 \\frac{h}{h^2+x^2}f(x) \\ dx = f(0),$$\n\nwhere\n\n$f:[-1,1] \\rightarrow \\mathbb{R}$\n\n is a regulated function that is continuous at\n\n$0$\n\n.\n\nI do not know how to do this. Could you help me?\n\nEDIT: Here is the terminology:\n\nRegulated-Function:\n\n Let\n\n$I$\n\n be an interval with starting point\n\n$a$\n\n and endpoint\n\n$b$\n\n. A function\n\n$f: I \\rightarrow \\mathbb{R}$\n\n is a regulated function on\n\n$I$\n\n if\n\nAt every point\n\n$x \\in (a,b)$\n\n the left and the right limit exist.\n\nIf\n\n$a \\in I$\n\n then\n\n$f$\n\n has a right limit at\n\n$a$\n\n.
If\n\n$b \\in I$\n\n then\n\n$f$\n\n has a left limit at\n\n$b$\n\n.\n\nDirac-Sequence: A sequence\n\n$(\\delta_k)_k$\n\n of regulated functions on\n\n$\\mathbb{R}$\n\n is called a Dirac Sequence if:\n\n$\\delta_k \\ge 0$\n\n for all\n\n$k$\n\n$\\int_\\mathbb{R} \\delta_k (t) dt = 1$\n\n for all\n\n$k$\n\nFor arbitrary\n\n$\\epsilon > 0$\n\n and\n\n$r>0$\n\n there exists an\n\n$N$\n\n such that for all\n\n$k \\ge N$\n\n holds:\n\n$$\\int_{\\mathbb{R} \\setminus [-r,r]} \\delta_k(t) dt < \\epsilon$$\n\n$$\\bigg\\lvert \\int_{[-r,r]} \\delta_k(t) dt -1 \\bigg\\rvert < \\epsilon$$", "question_owner": "3nondatur", "question_link": "https://math.stackexchange.com/questions/3450907/one-sided-limit-of-an-integral-of-a-regulated-function", "answer": { "answer_id": 3451014, "answer_text": "Step 1: checking that\n\n$\\delta_k(x)\\geq0$\n\n$\\delta_k(x)=a_k\\phi(a_k x)$\n\n is the product of the positive number\n\n$a_k$\n\n and the non-negative value\n\n$\\phi(a_k x)$\n\n for any\n\n$x$\n\n and all\n\n$k$\n\n, hence\n\n$\\delta_k(x)\\geq 0$\n\n.\n\nStep 2: showing that\n\n$\\int_{\\mathbb{R}}\\delta_k(x)\\, dx=1$\n\n$$\n\n\\int_{\\mathbb{R}}\\delta_k(x)\\, dx= \\int_{-\\infty}^{\\infty} a_k \\phi(a_k x)\\, dx \\overset{a_k x = y}{=} \\int_{-\\infty}^{\\infty} \\phi(y)\\, dy=1\n\n$$\n\nStep 3: by the same substitution, $\\int_{|x|>r} a_k\\phi(a_k x)\\, dx=\\int_{|y|>a_k r} \\phi(y)\\, dy$, and since $a_k \\to \\infty$ while $\\phi$ has integral $1$, this tail is smaller than any given $\\epsilon>0$ for $k$ large enough:\n\n$$\n\n|\\int_{-\\infty}^{-r}a_k\\phi(a_k x)\\, dx +\\int_{r}^{\\infty} a_k\\phi(a_k x) \\, dx |\\leq \\epsilon$$\n\nAdding and subtracting the middle piece,\n\n$$\n\n| \\int_{-\\infty}^{-r}a_k\\phi(a_k x)\\, dx +\\int_{r}^{\\infty} a_k\\phi(a_k x) \\, dx +\\int_{-r}^{r} a_k\\phi(a_k x) \\, dx -\\int_{-r}^{r} a_k \\phi(a_k x)\\, dx|=\n\n$$\n\n$$\n\n= |1-\\int_{-r}^{r} a_k \\phi(a_k x)\\, dx| \\leq \\epsilon\n\n$$\n\nStep 4: showing the main integral\n\nRe-writing\n\n$$\n\n\\frac{h}{h^2+x^2}=\\frac{1}{h}\\frac{1}{1+(\\frac{x}{h})^2}\n\n$$\n\nand substituting\n\n$x=yh$\n\n (the limits $\\pm\\frac{1}{h}$ tend to $\\pm\\infty$ as $h \\to 0^+$) we get\n\n$$\n\n\\frac{1}{\\pi}\\int_{-\\infty}^{\\infty} \\frac{1}{1+y^2}f(hy) \\, dy\n\n$$\n\nBy continuity of\n\n$f$\n\n at\n\n$0$\n\n (the zeroth-order Maclaurin approximation) we have that\n\n$f(hy)\\to f(0)$\n\n as\n\n$h \\to 0$\n\n hence we
get\n\n$$\n\n\\frac{f(0)}{\\pi}\\int_{-\\infty}^{\\infty}\\frac{1}{1+y^2} \\, dy=\\frac{f(0)}{\\pi}\\left(\\frac{\\pi}{2}+\\frac{\\pi}{2}\\right)=f(0)\n\n$$\n\nTada, you are done.", "answer_owner": "user726500", "is_accepted": true, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1614504, "title": "Why do regulated functions receive so little attention in elementary analysis courses?", "question_text": "The only place where regulated functions (= functions with one-sided limits everywhere) occasionally seem to come up in elementary analysis courses is in connection with integration, yet there are clearly several places, even before continuity is discussed, where they naturally fill some gaps in theory and could be used to throw light on related concepts. Here are some arguments to justify this and hence the question in the title:\n\nIt is the regulated functions (and not the continuous functions) that are the rightful owner of the result \"are bounded on compact intervals\" and the sequential proof is virtually the same as for continuous functions.\n\nRegulated functions provide a natural (if a little tautological) converse to the intermediate value theorem: continuous $\\Leftrightarrow$ regulated + has IVP.
This characterization shows exactly what two ingredients it takes to make a continuous function.\n\nThe main result on regulated functions (which says that they are precisely the uniform limits of step functions) nicely mingles with other analytic concepts, including uniform convergence, uniform continuity, open-cover compactness of $[a, b]$, and even countability in the form that the countable subsets are precisely the sets of discontinuity points of regulated functions.\n\nWhy do typical first analysis courses (at least in Europe) seem to avoid such facts?", "question_owner": "Damian Reding", "question_link": "https://math.stackexchange.com/questions/1614504/why-do-regulated-functions-receive-so-little-attention-in-elementary-analysis-co", "answer": { "answer_id": 1614534, "answer_text": "I think the answer ultimately lies within the context of real analysis in a mathematics curriculum. First and second years take proof-based real analysis classes to become acquainted with analysis as a subject and learn proof-based mathematics. Techniques that do not generalize beyond that topic are therefore far less useful than those that do. If someone wishes to study real analysis in its own right, many universities have advanced real analysis classes, but the real purpose for undergraduate real analysis classes seems to be to set a student up for their mathematics career.\n\nNow, you might ask why first years and second years get taught real analysis as the springboard for learning proofs, instead of, say, number theory, which is a very interesting question in its own right. I took real analysis my first year, but I had been previously introduced to proofs in the context of number theory.
But I think it's thought to be easier to use real analysis, as calculus intuitions often carry over, and the fact that everyone already knows a lot of the computational aspect speeds up the class.", "answer_owner": "Stella Biderman", "is_accepted": false, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 456267, "title": "Continuous functions as regulated functions: a property.", "question_text": "In\n\nDifferential and Integral\n\n by Paul Lorenzen (1971), p. 148, I read\n\n... every continuous function is trivially approximable by step functions that\n\nhave no jump\n\n at a given arbitrary point ...\n\n.\n\nAll this to say that an antiderivative of a continuous function is\n\neverywhere\n\n differentiable.\n\n(It is a fact that an antiderivative of a regulated function is not, in general, differentiable everywhere).\n\nHere I adopt the definition: if $f$ is a function defined on a compact interval $I$, one says that a function $g$ continuous on $I$ is an\n\nantiderivative\n\n of $f$ on $I$ if there exists a countable set $D \\subset I$ such that $g$ is differentiable at any $\\;x \\in I-D$ and $\\;g'(x)=f(x)$.\n\nPlease could someone explain why the italicised statements are true?", "question_owner": "Tony Piccolo", "question_link": "https://math.stackexchange.com/questions/456267/continuous-functions-as-regulated-functions-a-property", "answer": { "answer_id": 456270, "answer_text": "If you have a continuous function, then you can approximate it by step functions; this should be clear.
But if you fix a point, then you can just let this point lie in the interior of a step, and on that step you can fix the value to be the value of the function at that point.\n\nBe careful: here I'm changing the lengths of the steps.", "answer_owner": "Mebat", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4214614, "title": "Why are these integrals equivalent to the Riemann Hypothesis?", "question_text": "The Riemann Hypothesis is equivalent to the integral equation\n\n$$\\int_{-\\infty}^{\\infty} \\frac{\\log \\mid \\zeta (1/2+it)\\mid }{1+4t^2} \\ dt =0$$\n\nMany other integral equations exist that are equivalent.\n\nHow does one show that they are equivalent?\n\nThey usually include the absolute value of a function.\n\nWhy is that?\n\nI assume it came from a contour integral on the Riemann sphere.\n\nThe growth rate of the zeta function on the critical line also probably relates to all those integrals, no?\n\nAnother example is\n\nEstablishing the exact value\n\n$$\\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\int_{1/2}^{\\infty}\\log|\\zeta(\\sigma+it)|~d\\sigma ~dt=\\frac{\\pi(3-\\gamma)}{32}$$\n\nis equivalent to the Riemann Hypothesis.\n\nMore examples:\n\nRiemann's Hypothesis is true if and only if\n\n$$\\frac{1}{\\pi}\\int_0^{\\infty} \\log\\left|\\frac{\\zeta(\\frac{1}{2}+it)}{\\zeta(\\frac{1}{2})}\\right|\\ \\frac{dt}{t^2}=\\frac{\\pi}{8}+\\frac{\\gamma}{4}+\\frac{\\log 8\\pi}{4}-2$$\n\nTake\n\n$a\\in \\mathbb{R}$\n\n with\n\n$\\frac{1}{2}\\leq a<1$\n\n.
Riemann's\n\n$\\zeta$\n\n-function has no zeros in\n\n$\\Re(s)>a$\n\n if and only if\n\n$$\\frac{1}{\\pi}\\int_0^{\\infty} \\log\\left|\\frac{\\zeta(a+it)}{\\zeta(a)}\\right|\\ \\frac{dt}{t^2}=\\frac{\\zeta'(a)}{2\\zeta(a)}-\\frac{1}{1-a}$$\n\nAnd many more exist.\n\nI have no idea how to get to such conclusions or prove them.\n\nI'm not even sure how to prove integrals for \"slightly easier\" cases, meaning not famous open problems but zeros of other nontrivial functions that are not on a half-plane.", "question_owner": "mick", "question_link": "https://math.stackexchange.com/questions/4214614/why-are-these-integrals-equivalent-to-the-riemann-hypothesis", "answer": { "answer_id": 4991357, "answer_text": "A long comment on the\n\nRiemann hypothesis equivalent by V.V.Volchkov\n\n found\n\nhere\n\n:\n\n$$\\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\int_{1/2}^{\\infty}\\log|\\zeta(\\sigma+it)|~d\\sigma ~dt = \\frac{\\pi(3-\\gamma)}{32} \\label{1}\\tag{1}$$\n\nmentioned\n\nhere\n\n.\n\nI cannot give you a fully correct answer, but what I can give you is a plausibility argument in the form of a slightly incorrect calculation of the integral yielding the answers:\n\n$$\\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\int_{1/2}^{\\infty}\\log|\\zeta(\\sigma+it)|~d\\sigma ~dt=-\\frac{\\pi (1-2 \\gamma )}{32}\\color{Blue}{-\\frac{\\pi}{32} \\sum _{k=2}^{\\infty } \\frac{1}{k}} \\label{2}\\tag{2}$$\n\nand\n\n$$\\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\int_{1/2}^{\\infty}\\log|\\zeta(\\sigma+it)|~d\\sigma ~dt=\\frac{\\pi(3-3\\gamma)}{32} \\label{2.1}\\tag{2.1}$$\n\ninstead.\n\nThe starting points are \\eqref{3} to \\eqref{8} for results regarding\n\nthe von Mangoldt function\n\n which have been proven by joriki (see\n\nhere\n\n) and GH from MO (see\n\nhere\n\n): assuming that\n\n$n>1$\n\n:\n\n$$\\Lambda(n)=\\lim\\limits_{s \\rightarrow 1} \\zeta(s)\\sum\\limits_{d|n} \\frac{\\mu(d)}{d^{(s-1)}} \\label{3}\\tag{3}$$\n\n$$\\begin{align} a(n)
&=-2\\lim\\limits_{s \\rightarrow 0} \\zeta(s)\\sum\\limits_{d|n} \\frac{\\mu(d)}{d^{(s-1)}} \\label{4}\\tag{4} \\\\\n\n&= \\sum\\limits_{d|n} d \\cdot \\mu(d) \\label{5}\\tag{5} \\\\\n\n\\end{align}$$\n\n$$T = \\left( \\begin{array}{ccccccc} \\color{Red}{+1}&\\color{Blue}{+1}&\\color{Blue}{+1}&\\color{Blue}{+1}&\\color{Blue}{+1}&\\color{Blue}{+1}&\\color{Blue}{+1}&\\color{Blue}{\\cdots} \\\\ \\color{Red}{+1}&-1&+1&-1&+1&-1&+1 \\\\ \\color{Red}{+1}&+1&-2&+1&+1&-2&+1 \\\\ \\color{Red}{+1}&-1&+1&-1&+1&-1&+1 \\\\ \\color{Red}{+1}&+1&+1&+1&-4&+1&+1 \\\\ \\color{Red}{+1}&-1&-2&-1&+1&+2&+1 \\\\ \\color{Red}{+1}&+1&+1&+1&+1&+1&-6 \\\\ \\color{Red}{\\vdots}&&&&&&&\\ddots \\end{array} \\right) \\label{6}\\tag{6}$$\n\n$$\\displaystyle \\begin{pmatrix}\n\n\\color{Red}{\\frac{T(1,1)}{1 \\cdot 1}}&\\color{Blue}{+\\frac{T(1,2)}{1 \\cdot 2}}&\\color{Blue}{+\\frac{T(1,3)}{1 \\cdot 3}+}&\\color{Blue}{\\cdots}&\\color{Blue}{+\\frac{T(1,k)}{1 \\cdot k}} \\\\\n\n\\color{Red}{\\frac{T(2,1)}{2 \\cdot 1}}&+\\frac{T(2,2)}{2 \\cdot 2}&+\\frac{T(2,3)}{2 \\cdot 3}+&\\cdots&+\\frac{T(2,k)}{2 \\cdot k} \\\\\n\n\\color{Red}{\\frac{T(3,1)}{3 \\cdot 1}}&+\\frac{T(3,2)}{3 \\cdot 2}&+\\frac{T(3,3)}{3 \\cdot 3}+&\\cdots&+\\frac{T(3,k)}{3 \\cdot k} \\\\\n\n \\color{Red}{\\vdots}&\\vdots&\\vdots&\\ddots&\\vdots \\\\\n\n\\color{Red}{\\frac{T(n,1)}{n \\cdot 1}}&+\\frac{T(n,2)}{n \\cdot 2}&+\\frac{T(n,3)}{n \\cdot 3}+&\\cdots&+\\frac{T(n,k)}{n \\cdot k} \\end{pmatrix} = \\begin{pmatrix} \\color{Blue}{\\frac{\\infty}{1}} \\\\ +\\frac{\\Lambda(2)}{2} \\\\ +\\frac{\\Lambda(3)}{3} \\\\ \\vdots \\\\ +\\frac{\\Lambda(n)}{n} \\end{pmatrix} \\label{7}\\tag{7}$$\n\n$$=\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;$$\n\n$$\\displaystyle \\begin{pmatrix} \\color{Red}{\\frac{\\infty}{1}}&+\\frac{\\Lambda(2)}{2}&+\\frac{\\Lambda(3)}{3}+&\\cdots&+\\frac{\\Lambda(k)}{k} \\end{pmatrix}\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; \\label{8}\\tag{8}$$\n\nThe following limits and integrals can be done in Mathematica 
14:\n\n$$\\begin{align}\n\n\\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} \\frac{T(n,k)}{n^c \\cdot k^s} &= \\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} \\frac{a(\\gcd(n,k))}{n^c \\cdot k^s}\n\n \\label{9}\\tag{9} \\\\\n\n&=\\sum\\limits_{n=1}^{\\infty} \\frac{\\lim\\limits_{z \\rightarrow s} \\zeta(z)\\sum\\limits_{d|n} \\frac{\\mu(d)}{d^{(z-1)}}}{n^c} \\label{10}\\tag{10}\\\\\n\n&= \\frac{\\zeta(s) \\zeta(c)}{\\zeta(s + c - 1)} \\label{11}\\tag{11}\\\\\n\n\\end{align}$$\n\n$$\\begin{align}\n\n\\frac{\\zeta '(s)}{\\zeta (s)}&=\\lim_{c\\to 1} \\, \\left( \\color{Red}{\\zeta (c)}-\\frac{\\zeta (s) \\zeta (c)}{\\zeta (s+c-1)}\\right) \\label{12}\\tag{12}\\\\\n\n&=\\sum\\limits_{n=1}^{\\infty} \\left( \\color{Red}{\\frac{1}{n^{1}}}-\\frac{\\lim\\limits_{z \\rightarrow s} \\zeta(z)\\sum\\limits_{d|n} \\frac{\\mu(d)}{d^{(z-1)}}}{n^{1}}\\right) \\label{13}\\tag{13} \\\\\n\n&= \\sum\\limits_{n=1}^{\\infty}\\left(\\color{Red}{\\frac{1}{n^{1}}}-\\sum\\limits_{k=1}^{\\infty} \\frac{a(\\gcd(n,k))}{n^{1} \\cdot k^s}\\right) \\label{14}\\tag{14} \\\\\n\n&= \\sum\\limits_{n=1}^{\\infty}\\left(\\sum\\limits_{k=2}^{\\infty} \\frac{a(\\gcd(n,k))}{n^{1} \\cdot k^s}\\right) \\label{15}\\tag{15} \\\\\n\n&= \\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} \\color{Red}{[k>1]}\\frac{a(\\gcd(n,k))}{n^{1} \\cdot k^s} \\label{16}\\tag{16} \\\\\n\n\\end{align}$$\n\n$$[k>1] = \\text{Iverson bracket} = (\\text{If k > 1 then 1 else 0}) \\label{17}\\tag{17} $$\n\n$$\\begin{align}\\log | \\zeta (\\sigma+i t)| &=\\log (| \\zeta (\\sigma+i t)| ) \\label{18}\\tag{18} \\\\\n\n&=\\frac{1}{2} \\log (\\zeta (\\sigma+i t))+\\frac{1}{2} \\log (\\zeta (\\sigma-i t)) \\label{19}\\tag{19} \\\\\n\n&=\\int \\frac{1}{2} \\left(\\frac{\\zeta '(\\sigma+i t)}{\\zeta (\\sigma+i t)}+\\frac{\\zeta '(\\sigma-i t)}{\\zeta (\\sigma-i t)}\\right) \\, d\\sigma \\label{20}\\tag{20} \\\\\n\n&=\\int \\frac{1}{2} \\left(\\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} 
\\color{Red}{[k>1]}\\frac{a(\\gcd(n,k))}{n^{1} \\cdot k^{\\sigma+i t}}+\\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} \\color{Red}{[k>1]}\\frac{a(\\gcd(n,k))}{n^{1} \\cdot k^{\\sigma-i t}}\\right) \\, d\\sigma \\label{21}\\tag{21}\n\n \\\\\n\n&=\\int \\frac{1}{2} \\left(\\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} \\color{Red}{[k>1]} \\left(\\frac{a(\\gcd (n,k))}{n^{1} \\cdot k^{\\sigma+i t}}+\\frac{a(\\gcd (n,k))}{n^{1} \\cdot k^{\\sigma-i t}}\\right)\\right) \\, d\\sigma \\label{22}\\tag{22}\\\\\n\n&=\\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} \\int \\frac{1}{2} \\color{Red}{[k>1]} \\left(\\frac{a(\\gcd (n,k))}{n^{1} \\cdot k^{\\sigma+i t}}+\\frac{a(\\gcd (n,k))}{n^{1} \\cdot k^{\\sigma-i t}}\\right) \\, d\\sigma \\label{23}\\tag{23}\\\\\n\n&=\\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} \\color{Red}{[k>1]}-\\frac{\\left(1+k^{2 i t}\\right) k^{-\\sigma-i t} a(\\gcd (k,n))}{2 n \\log (k)} \\label{24}\\tag{24} \\\\\n\n\\end{align}$$\n\n$$\\begin{align} \\int_{1/2}^{\\infty}\\log|\\zeta(\\sigma+it)|~d\\sigma &=\\int_{\\frac{1}{2}}^{\\infty }\\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} \\color{Red}{[k>1]}-\\frac{\\left(1+k^{2 i t}\\right) k^{-\\sigma-i t} a(\\gcd (k,n))}{2 n \\log (k)} \\, d\\sigma \\label{25}\\tag{25} \\\\\n\n&=\\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} \\int_{\\frac{1}{2}}^{\\infty } \\color{Red}{[k>1]}-\\frac{\\left(1+k^{2 i t}\\right) k^{-\\sigma-i t} a(\\gcd (k,n))}{2 n \\log (k)} \\, d\\sigma \\label{26}\\tag{26} \\\\\n\n&=\\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} \\color{Red}{[k>1]}-\\frac{k^{-\\frac{1}{2}-i t} \\left(1+k^{2 i t}\\right) a(\\gcd (k,n))}{2 n \\log ^2(k)} \\label{27}\\tag{27} \\\\\n\n\\end{align}$$\n\nexcept for this integral which can not be done in Mathematica 14:\n\n$$\\begin{align} \\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\int_{1/2}^{\\infty}\\log|\\zeta(\\sigma+it)|~d\\sigma ~dt 
&=\\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} \\color{Red}{[k>1]}-\\frac{k^{-\\frac{1}{2}-i t} \\left(1+k^{2 i t}\\right) a(\\gcd (k,n))}{2 n \\log ^2(k)} \\, dt \\label{28}\\tag{28} \\\\\n\n&=\\sum\\limits_{n=1}^{\\infty}\\sum\\limits_{k=1}^{\\infty} \\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\color{Red}{[k>1]}-\\frac{k^{-\\frac{1}{2}-i t} \\left(1+k^{2 i t}\\right) a(\\gcd (k,n))}{2 n \\log ^2(k)} \\, dt \\label{29}\\tag{29} \\\\\n\n\\end{align}$$\n\nTherefore we resort to computing instances of the integral above for integer values between\n\n$1$\n\n and\n\n$6$\n\n for both\n\n$n$\n\n and\n\n$k$\n\n. That is:\n\n$$n=1,2,3,4,5,6 \\label{30}\\tag{30} $$\n\n and\n\n$$k=1,2,3,4,5,6 \\label{31}\\tag{31} $$\n\n, with the following Mathematica 14 program:\n\nClear[sigma, t, a, n, k, B] (*takes several minutes to run*)\n\nnn = 6;(* nn=3 for faster program but smaller matrix *)\n\na[n_] := Total[MoebiusMu[Divisors[n]]*Divisors[n]]\n\nTableForm[\n\n B = Integrate[(1 - 12*t^2)/(1 + 4*t^2)^3*\n\n Integrate[\n\n Integrate[\n\n Table[Table[\n\n If[k > 1, (a[GCD[n, k]]/n/k^(sigma + I*t) +\n\n a[GCD[n, k]]/n/k^(sigma - I*t))/2, 0], {k, 1, nn}], {n, 1,\n\n nn}], sigma], {sigma, 1/2, Infinity}], {t, 0, Infinity}]]\n\nTableForm[1/(-(Pi/32))*B]\n\nTableForm[\n\n 1/(-(Pi/32))*Table[Table[B[[n, k]]*n*k, {k, 1, nn}], {n, 1, nn}]]\n\n-(Pi/32)*\n\n Limit[Zeta[s]*Zeta[s]/Zeta[s + s - 1] - Zeta[s] - (Zeta[s] - 1),\n\n s -> 1]\n\nN[%, 40]\n\nThe first output from the program is matrix\n\n$B$\n\n that has the definition:\n\n$$B(n,k)=\\int_0^{\\infty } \\frac{\\left(1-12 t^2\\right) \\int_{\\frac{1}{2}}^{\\infty } \\left(\\int \\left[k>1\\right] \\frac{1}{2} \\left(\\frac{a(\\gcd (n,k))}{n k^{\\sigma+i t}}+\\frac{a(\\gcd (n,k))}{n k^{\\sigma-i t}}\\right) \\, d\\sigma\\right) \\, d\\sigma}{\\left(4 t^2+1\\right)^3} \\, dt \\label{32}\\tag{32} $$\n\nand starts:\n\n$$B=\\left(\n\n\\begin{array}{cccccc}\n\n \\color{Red}{0} & 
\\color{Blue}{-\\frac{\\pi }{64}} & \\color{Blue}{-\\frac{\\pi }{96}} & \\color{Blue}{-\\frac{\\pi }{128}} & \\color{Blue}{-\\frac{\\pi }{160}} & \\color{Blue}{-\\frac{\\pi }{192}} \\label{33}\\tag{33} \\\\\n\n \\color{Red}{0} & \\frac{\\pi }{128} & -\\frac{\\pi }{192} & \\frac{\\pi }{256} & -\\frac{\\pi }{320} & \\frac{\\pi }{384} \\\\\n\n \\color{Red}{0} & -\\frac{\\pi }{192} & \\frac{\\pi }{144} & -\\frac{\\pi }{384} & -\\frac{\\pi }{480} & \\frac{\\pi }{288} \\\\\n\n \\color{Red}{0} & \\frac{\\pi }{256} & -\\frac{\\pi }{384} & \\frac{\\pi }{512} & -\\frac{\\pi }{640} & \\frac{\\pi }{768} \\\\\n\n \\color{Red}{0} & -\\frac{\\pi }{320} & -\\frac{\\pi }{480} & -\\frac{\\pi }{640} & \\frac{\\pi }{200} & -\\frac{\\pi }{960} \\\\\n\n \\color{Red}{0} & \\frac{\\pi }{384} & \\frac{\\pi }{288} & \\frac{\\pi }{768} & -\\frac{\\pi }{960} & -\\frac{\\pi }{576} \\\\\n\n\\end{array}\n\n\\right) $$\n\nDividing this matrix\n\n$B$\n\n with:\n\n$$-\\frac{\\pi}{32} \\label{34}\\tag{34}$$\n\n$$\\left(\n\n\\begin{array}{cccccc}\n\n \\color{Red}{0} & \\color{Blue}{\\frac{1}{2}} & \\color{Blue}{\\frac{1}{3}} & \\color{Blue}{\\frac{1}{4}} & \\color{Blue}{\\frac{1}{5}} & \\color{Blue}{\\frac{1}{6}} \\\\\n\n \\color{Red}{0} & -\\frac{1}{4} & \\frac{1}{6} & -\\frac{1}{8} & \\frac{1}{10} & -\\frac{1}{12} \\\\\n\n \\color{Red}{0} & \\frac{1}{6} & -\\frac{2}{9} & \\frac{1}{12} & \\frac{1}{15} & -\\frac{1}{9} \\\\\n\n \\color{Red}{0} & -\\frac{1}{8} & \\frac{1}{12} & -\\frac{1}{16} & \\frac{1}{20} & -\\frac{1}{24} \\\\\n\n \\color{Red}{0} & \\frac{1}{10} & \\frac{1}{15} & \\frac{1}{20} & -\\frac{4}{25} & \\frac{1}{30} \\\\\n\n \\color{Red}{0} & -\\frac{1}{12} & -\\frac{1}{9} & -\\frac{1}{24} & \\frac{1}{30} & \\frac{1}{18} \\\\\n\n\\end{array}\\right) \\label{35}\\tag{35}$$\n\nMultiplying matrix\n\n$B$\n\n with\n\n$$-\\frac{n k B(n,k)}{\\frac{\\pi }{32}} \\label{36}\\tag{36}$$\n\n$$\\left(\n\n\\begin{array}{cccccc}\n\n \\color{Red}{0} & \\color{Blue}{1} & \\color{Blue}{1} & \\color{Blue}{1} 
& \\color{Blue}{1} & \\color{Blue}{1} \\\\\n\n \\color{Red}{0} & -1 & 1 & -1 & 1 & -1 \\\\\n\n \\color{Red}{0} & 1 & -2 & 1 & 1 & -2 \\\\\n\n \\color{Red}{0} & -1 & 1 & -1 & 1 & -1 \\\\\n\n \\color{Red}{0} & 1 & 1 & 1 & -4 & 1 \\\\\n\n \\color{Red}{0} & -1 & -2 & -1 & 1 & 2 \\\\\n\n\\end{array}\n\n\\right) \\label{37}\\tag{37}$$\n\nCompare this modified matrix\n\n$B$\n\n to matrix\n\n$T$\n\n:\n\n$$T = \\left( \\begin{array}{ccccccc} \\color{Red}{+1}&\\color{Blue}{+1}&\\color{Blue}{+1}&\\color{Blue}{+1}&\\color{Blue}{+1}&\\color{Blue}{+1}&\\color{Blue}{+1}&\\color{Blue}{\\cdots} \\\\ \\color{Red}{+1}&-1&+1&-1&+1&-1&+1 \\\\ \\color{Red}{+1}&+1&-2&+1&+1&-2&+1 \\\\ \\color{Red}{+1}&-1&+1&-1&+1&-1&+1 \\\\ \\color{Red}{+1}&+1&+1&+1&-4&+1&+1 \\\\ \\color{Red}{+1}&-1&-2&-1&+1&+2&+1 \\\\ \\color{Red}{+1}&+1&+1&+1&+1&+1&-6 \\\\ \\color{Red}{\\vdots}&&&&&&&\\ddots \\end{array} \\right) \\label{38}\\tag{38}$$\n\nThey look like they are the same.\n\nSubtracting the numbers in red and blue and forming matrix\n\n$A$\n\n:\n\n$$A=\\left(\n\n\\begin{array}{cccccc}\n\n \\color{Red}{0} & \\color{Blue}{0} & \\color{Blue}{0} & \\color{Blue}{0} & \\color{Blue}{0} & \\color{Blue}{0} \\\\\n\n \\color{Red}{0} & \\frac{\\pi }{128} & -\\frac{\\pi }{192} & \\frac{\\pi }{256} & -\\frac{\\pi }{320} & \\frac{\\pi }{384} \\\\\n\n \\color{Red}{0} & -\\frac{\\pi }{192} & \\frac{\\pi }{144} & -\\frac{\\pi }{384} & -\\frac{\\pi }{480} & \\frac{\\pi }{288} \\\\\n\n \\color{Red}{0} & \\frac{\\pi }{256} & -\\frac{\\pi }{384} & \\frac{\\pi }{512} & -\\frac{\\pi }{640} & \\frac{\\pi }{768} \\\\\n\n \\color{Red}{0} & -\\frac{\\pi }{320} & -\\frac{\\pi }{480} & -\\frac{\\pi }{640} & \\frac{\\pi }{200} & -\\frac{\\pi }{960} \\\\\n\n \\color{Red}{0} & \\frac{\\pi }{384} & \\frac{\\pi }{288} & \\frac{\\pi }{768} & -\\frac{\\pi }{960} & -\\frac{\\pi }{576} \\\\\n\n\\end{array}\n\n\\right) \\label{39}\\tag{39}$$\n\nSince the conjectured formula for matrix\n\n$B$\n\n is:\n\n$$B(n,k) = -\\frac{\\pi a(\\gcd 
(n,k))}{32 n k} \\label{40}\\tag{40}$$\n\nand given that the matrix in display \\eqref{37} is essentially matrix\n\n$T$\n\n in displays \\eqref{38} and \\eqref{6}, then matrix\n\n$A$\n\n is known to have the generating function and limit:\n\n$$\\begin{align} \\sum _{n=1}^{\\infty } \\left(\\sum _{k=1}^{\\infty } \\frac{A(n,k)}{k^s n^s}\\right)\n\n&=-\\frac{\\pi}{32} \\underset{s\\to 1}{\\text{lim}}\\left(\\frac{\\zeta (s) \\zeta (s)}{\\zeta (s+s-1)}-\\color{Red}{\\zeta (s)}-\\color{Blue}{(\\zeta (s)-1)}\\right) \\label{41}\\tag{41} \\\\\n\n&=-\\frac{\\pi (1-2 \\gamma )}{32} \\label{42}\\tag{42} \\\\\n\n\\end{align}$$\n\nSince:\n\n$$-\\frac{\\pi}{32} \\underset{s\\to 1}{\\text{lim}}\\left(\\color{Blue}{(\\zeta (s)-1)}\\right) = \\color{Blue}{-\\frac{\\pi}{32} \\sum _{k=2}^{\\infty } \\frac{1}{k}} \\label{43}\\tag{43}$$\n\nwe get the wrong answer:\n\n$$\\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\int_{1/2}^{\\infty}\\log|\\zeta(\\sigma+it)|~d\\sigma ~dt=-\\frac{\\pi (1-2 \\gamma )}{32}\\color{Blue}{-\\frac{\\pi}{32} \\sum _{k=2}^{\\infty } \\frac{1}{k}} \\label{44}\\tag{44}$$\n\ncompared to V.V. Volchkov:\n\n$$\\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\int_{1/2}^{\\infty}\\log|\\zeta(\\sigma+it)|~d\\sigma ~dt = \\frac{\\pi(3-\\gamma)}{32} \\label{45}\\tag{45}$$\n\nIt is hard for me to tell exactly what I did wrong, but I am pretty sure it is a matter of analytic continuation that I don't know how to deal with.
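The identification underlying display \eqref{37} — that the rescaled matrix $-\frac{n k\,B(n,k)}{\pi/32}$ reproduces $T(n,k)=a(\gcd(n,k))$ with the first column killed by the Iverson bracket $[k>1]$ — is easy to confirm independently. A small sketch (mine, not from the post), building $a(n)=\sum_{d|n}d\,\mu(d)$ from scratch:

```python
import math

# Sketch (not from the post): verify that the rescaled matrix in
# display (37) equals T(n,k) = a(gcd(n,k)) with the first column
# zeroed by the Iverson bracket [k>1], where a(n) = sum_{d|n} d*mu(d)
# as in displays (4)-(5).
def mobius(n):
    """Moebius function mu(n) by trial factorisation."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0           # squared prime factor => mu = 0
            result = -result
        p += 1
    if n > 1:                      # one remaining prime factor
        result = -result
    return result

def a(n):
    return sum(d * mobius(d) for d in range(1, n + 1) if n % d == 0)

table = [[a(math.gcd(n, k)) if k > 1 else 0 for k in range(1, 7)]
         for n in range(1, 7)]
for row in table:
    print(row)
```

The printed rows match display \eqref{37} entry for entry (e.g. row $6$: $0,-1,-2,-1,1,2$).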
Unless V.V. Volchkov was also slightly wrong regarding the resulting pole in blue, but I don't know about that because I have not read his paper.\n\nEdit 9.11.2024:\n\nMathematica 14 for the comment below:\n\nClear[s, sigma, t]\n\nint = Integrate[1/7^(s - 1)/(s - 1), s]\n\ns = sigma + I*t\n\nv = int\n\nIntegrate[(ExpIntegralEi[-((-1 + sigma + I t) Log[7])] +\n\n ExpIntegralEi[-((-1 + sigma - I t) Log[7])])/2, {sigma, 1/2,\n\n Infinity}]\n\nIntegrate[\n\n 1/4 ((1 - 2 I t) ExpIntegralEi[\n\n 1/2 (1 - 2 I t) Log[7]] + (1 + 2 I t) ExpIntegralEi[\n\n 1/2 (1 + 2 I t) Log[7]] - (2 7^(1/2 - I t) (1 + 7^(2 I t)))/\n\n Log[7])*(1 - 12*t^2)/(1 + 4*t^2)^3, {t, 0, Infinity}]\n\nClear[k, m]\n\nLimit[1/32 \\[Pi] (1 + Log[m]) - Pi/32*Sum[1/k, {k, 2, m}], m -> Infinity]\n\nwhere $7$ plays the role of\n\n$m$.\n\nIt is possible to get rid of the sum\n\n$$\\color{Blue}{-\\frac{\\pi}{32} \\sum _{k=2}^{\\infty } \\frac{1}{k}} \\label{46}\\tag{46}$$\n\nin blue, if one is allowed the following:\n\nContinuing from display \\eqref{43}:\n\nThe indefinite integral of the analytic continuation of the sum in blue is:\n\n$$\\color{Blue}{\\int \\zeta (s)-1 \\, ds=\\int \\underset{m\\to \\infty }{\\text{lim}}\\left(\\sum _{k=2}^m \\frac{1}{k^s}+\\frac{1}{(s-1) m^{s-1}}\\right) \\, ds} \\label{47}\\tag{47}$$\n\n$$\\color{Blue}{=-\\sum _{k=2}^m \\frac{k^{-s}}{\\log (k)}+\\text{Ei}(-((s-1) \\log (m)))} \\label{48}\\tag{48}$$\n\nSubstituting:\n\n$$s=\\sigma+i t \\label{49}\\tag{49}$$\n\nand integrating the real part of\n\n$\\color{Blue}{\\text{Ei}(-((s-1) \\log (m)))}$\n\n:\n\n$$\\color{Blue}{\\int_{\\frac{1}{2}}^{\\infty } \\frac{1}{2} (\\text{Ei}(-((\\sigma+i t-1) \\log (m)))+\\text{Ei}(-((\\sigma-i t-1) \\log (m)))) \\, d\\sigma=\\frac{1}{4} \\left((1+2 i t) \\text{Ei}\\left(\\frac{1}{2} (2 i t+1) \\log (m)\\right)+(1-2 i t) \\text{Ei}\\left(\\frac{1}{2} (1-2 i t) \\log (m)\\right)-\\frac{2 \\left(1+m^{2 i t}\\right) m^{\\frac{1}{2}-i t}}{\\log (m)}\\right)} \\label{50}\\tag{50}$$\n\nIntegrating
again:\n\n$$\\color{Blue}{\\int_0^{\\infty } \\frac{\\left(1-12 t^2\\right) \\left((1+2 i t) \\text{Ei}\\left(\\frac{1}{2} (2 i t+1) \\log (m)\\right)+(1-2 i t) \\text{Ei}\\left(\\frac{1}{2} (1-2 i t) \\log (m)\\right)-\\frac{2 \\left(1+m^{2 i t}\\right) m^{\\frac{1}{2}-i t}}{\\log (m)}\\right)}{4 \\left(4 t^2+1\\right)^3} \\, dt} \\label{51}\\tag{51}$$\n\nwe get the answer:\n\n$$\\color{Blue}{=\\frac{1}{32} \\pi (\\log (m)+1)} \\label{52}\\tag{52}$$\n\nplugging \\eqref{52} into \\eqref{46}:\n\n$$\\color{Blue}{-\\frac{\\pi}{32} \\sum _{k=2}^{\\infty } \\frac{1}{k}} \\label{53}\\tag{53}$$\n\n$$\\color{Blue}{\\underset{m\\to \\infty }{\\text{lim}}\\left(\\frac{1}{32} \\pi (\\log (m)+1)-\\frac{1}{32} \\pi \\sum _{k=2}^m \\frac{1}{k}\\right)=-\\frac{1}{32} (\\gamma -2) \\pi} \\label{54}\\tag{54}$$\n\nTaking the black part of the right hand side of \\eqref{2} and substituting the blue part of \\eqref{2} with the right hand side of \\eqref{54}:\n\n$$\\frac{1}{32} \\pi (1-2 \\gamma )\\color{Blue}{-\\frac{1}{32} (\\gamma -2) \\pi} =\\frac{\\pi(3-3\\gamma)}{32} \\label{55}\\tag{55}$$\n\nwe get the less incorrect answer:\n\n$$\\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\int_{1/2}^{\\infty}\\log|\\zeta(\\sigma+it)|~d\\sigma ~dt=\\frac{\\pi(3-3\\gamma)}{32} \\label{56}\\tag{56}$$\n\ncompared to V.V.Volchov:\n\n$$\\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\int_{1/2}^{\\infty}\\log|\\zeta(\\sigma+it)|~d\\sigma ~dt = \\frac{\\pi(3-\\gamma)}{32} \\label{57}\\tag{57}$$\n\nEdit 10.11.2024:\n\nMathematica 14:\n\nnn = 10^3;\n\nNIntegrate[(1 - 12*t^2)/(1 + 4*t^2)^3*\n\n NIntegrate[Log[Abs[Zeta[sigma + I*t]]], {sigma, 1/2, 9*nn}], {t, 0, nn}]\n\ngives:\n\n0.237853...\n\nwhile:\n\nN[Pi*(3 - EulerGamma)/32]\n\ngives:\n\n0.237856...\n\nwhich numerically supports V.V.Volchov's number\n\n$$\\frac{\\pi(3-\\gamma)}{32}=0.237856...\\text{=probably right}$$\n\n in displays \\eqref{1}, \\eqref{45} and \\eqref{57} being probably right, while my 
number\n\n$$\\frac{\\pi(3-3\\gamma)}{32}=0.12452...\\text{=probably wrong}$$\n\n in displays \\eqref{2.1} and \\eqref{56} is probably wrong.\n\nEdit 13.9.2025: Clarification of post at Mathematica stack exchange, written in Latex, computed with the previously added program at 24.06.2025\n\nhere\n\n:\n\n$$\\underbrace{\\int_0^{\\infty} \\frac{\\left(1-12 t^2\\right) }{\\left(4 t^2+1\\right)^3}\\int_{\\frac{1}{2}+\\epsilon}^{\\infty} \\log (| \\zeta (\\sigma+i t)| ) \\, d\\sigma \\, dt}_{\\text{The Volchkov integral with a small number epsilon }\\epsilon} = \\\\\n\n\\underbrace{\\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\int_{\\frac{1}{2}+\\epsilon}^{\\infty}\\Re\\left(\\left.\\int\\frac{\\zeta'(s)}{\\zeta(s)}~ds\\right|_{s=\\sigma+it}\\right)~d\\sigma ~dt}_{\\text{The Volchkov integral with a small number epsilon }\\epsilon} = \\\\\n\n \\underbrace{\\underset{s\\to \\epsilon+1}{\\text{lim}}\\left(\\underbrace{\\frac{1}{32} \\left(4+\\frac{1}{(s-2) (s-1)}\\right) \\pi}_{\\text{Conjectured leading term (?)}} \\underbrace{-\\frac{\\pi \\zeta '(s)}{32 \\zeta (s)}}_{\\text{By the Prime Number Theorem}}\\right)}_{\\text{limit with a small number epsilon }\\epsilon}$$\n\nepsilon integral limit\n\n-(1/2) 0.259866 0.259892\n\n-(1/3) 0.255639 0.255639\n\n0 0.237853 0.237856\n\n1/3 0.194392 0.194391\n\n1/2 0.147776 0.147776\n\nClear[m, s, d, sigma, t, k]\n\nk = 541 (* k = 541 is a prime number *)\n\nint1 = Integrate[1/k^s, s]\n\ns = sigma + I*t;\n\nv1 = int1\n\ns = sigma - I*t;\n\nv2 = int1\n\ninte1 = Integrate[(v1 + v2)/2, {sigma, 1/2, Infinity}]\n\nIntegrate[inte1*(1 - 12*t^2)/(1 + 4*t^2)^3, {t, 0, Infinity}]\n\n-%*32/Pi\n\nProgram outputs:\n\n1/541\n\nwhich suggests that Theorem 1 is true.\n\n$$\\underbrace{\\frac{32}{\\pi}\\left(\\underbrace{\\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\int_{\\frac{1}{2}+\\epsilon}^{\\infty}\\Re\\left(\\left.\\int\\frac{1}{n^{s}}~ds\\right|_{s=\\sigma+it}\\right)~d\\sigma ~dt}_{\\text{The simple Volchkov integral with a small 
number epsilon }\\epsilon}\\right)}_{\\text{left hand side}} = \\underbrace{\\frac{1}{n^{ (\\epsilon+1)}}}_{\\text{Theorem 1}}\\\\\n\n$$\n\n$$\\frac{32}{\\pi}\\left(\\underbrace{\\int_0^{\\infty} \\frac{\\left(1-12 t^2\\right) }{\\left(4 t^2+1\\right)^3}\\int_{\\frac{1}{2}+\\epsilon}^{\\infty} \\log (| \\zeta (\\sigma+i t)| ) \\, d\\sigma \\, dt}_{\\text{The Volchkov integral with a small number epsilon }\\epsilon} \\;+\\; \\underbrace{\\frac{\\pi \\zeta '(\\epsilon+1)}{32 \\zeta (\\epsilon+1)}}_{\\text{Theorem 1}} \\right) = \\\\\n\n\\underbrace{\\frac{32}{\\pi}\\left(\\underbrace{\\int_{0}^{\\infty}\\frac{(1-12t^2)}{(1+4t^2)^3}\\int_{\\frac{1}{2}+\\epsilon}^{\\infty}\\Re\\left(\\left.\\int\\frac{\\zeta'(s)}{\\zeta(s)}~ds\\right|_{s=\\sigma+it}\\right)~d\\sigma ~dt}_{\\text{The Volchkov integral with a small number epsilon }\\epsilon} \\;+\\; \\underbrace{\\frac{\\pi \\zeta '(\\epsilon+1)}{32 \\zeta (\\epsilon+1)}}_{\\text{Theorem 1}} \\right)}_{\\text{Left Hand Side}} = \\\\\n\n\\underbrace{4+\\frac{1}{\\epsilon(\\epsilon-1)}}_{\\text{Conjectured term}}$$\n\n$$\\begin{array}{|c|c|c|c|}\n\n\\hline \\epsilon \\text{ = epsilon} & \\underbrace{4+\\frac{1}{\\epsilon(\\epsilon-1)}}_{\\text{Conjectured term}} & \\text{Left Hand Side, numerically} & \\text{# correct (decimal) digits} \\\\\n\n\\hline \\frac{1}{2} &0.000000000 &0.000000008 &8 \\\\\n\n\\hline \\frac{1}{3} &-0.50000000 &-0.49999998 & 8 \\\\\n\n\\hline \\frac{1}{4} &-1.33333333 &-1.33333332 & 8 \\\\\n\n\\hline \\frac{1}{5} &-2.25000000 &-2.249999990 &9 \\\\\n\n\\hline \\frac{1}{6} &-3.20000000 &-3.199999992 &9 \\\\\n\n\\hline \\frac{1}{\\infty} &-\\infty &-\\infty & \\text{Riemann hypothesis} \\\\\n\n\\hline \\end{array}$$", "answer_owner": "Mats Granvik", "is_accepted": false, "score": 6, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 2879033, "title": "Derivative of inverse trigonometric functions: $f(x)=\\arcsin\\left(2x\\sqrt{1-x^2}\\right)$", 
"question_text": "While trying to find the derivative of\n\n$$f(x)=\\arcsin\\left(2x\\sqrt{1-x^2}\\right)$$\n\nwe arrive at two different answers by substituting $x=\\sin t$ or $x=\\cos t$. The aim is to simplify it as $\\arcsin(\\sin2t)$ and further simplify it as $2t$ and take its derivative. Proceeding in that manner, I arrived at two different answers for the derivative. I could understand that it was because the function $f$ is defined differently in different intervals, and as I have solved the problem without considering the intervals, I hit this answer. Also, I thought that since $\\sin t$ and $\\cos t$ will coincide only when $t=\\dfrac{\\pi}4$, $x=\\pm\\dfrac1{\\sqrt{2}}$ will be the points where the definition changes. I got some more clarity when I looked at the graph of $f(x)$ but I could not figure out a way to systematically figure out what answer suits what interval.\n\nMAJOR EDIT: I made a mistake in the function itself that I had given. My question earlier read:\n\n$$f(x)=\\arcsin\\left(2x\\sqrt{x^2-1}\\right)$$\n\nBut I wanted only for what I've now altered it as. My sincere apologies...\n\nEDIT: My question is how to figure out the interval in which the substitution applies for such problems in general. 
I don't need the expression for the derivative of $f(x)$.\n\nNote: I am new to this community, so please point out any deviation from the policy, if I have deviated.\n\nAlso, I couldn't post my working as I do not have $10$ reputation.", "question_owner": "Raghavasimhan Varadharajan", "question_link": "https://math.stackexchange.com/questions/2879033/derivative-of-inverse-trigonometric-functions-fx-arcsin-left2x-sqrt1-x2", "answer": { "answer_id": 2879048, "answer_text": "$$ f(x) = \\arcsin(2x\\sqrt{x^2-1}) $$\n\nReplace\n\n$x=\\cosh\\theta$\n\nthen\n\n$f(x) = \\arcsin(\\sinh2\\theta)$\n\nby differentiating\n\n$f(x)$\n\n,\n\n$f '(x)$\n\n =\n\n$\\dfrac{2\\cosh(2\\theta)}{\\sqrt{1-\\sinh^2(2\\theta)}\\sqrt{x^2-1}}$\n\nReplacing\n\n$ \\cosh(2\\theta) = 2x^2-1 $\n\n$f '(x) = \\dfrac{4x^2-2}{\\sqrt{1-4x^4+4x^2}\\sqrt{x^2-1}}$", "answer_owner": "Narendra", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1859210, "title": "Double Integral on $[0,1]\\times[0,1]$", "question_text": "I want to calculate the double integral given by\n\n$$\\int_{0}^{1}\\int_{0}^{1}\\frac{x^3y^3\\ln(xy)}{1-xy}dxdy .$$\n\nI set\n\n$u=xy$\n\n and\n\n$v=1-xy$\n\n, then calculate the Jacobian. However, my change of variables was not useful.\n\nHow can we choose\n\n$u$\n\n and\n\n$v$\n\n? Is there a other way?\n\nThanks", "question_owner": "math", "question_link": "https://math.stackexchange.com/questions/1859210/double-integral-on-0-1-times0-1", "answer": { "answer_id": 1859229, "answer_text": "Take $xy=u\n\n $. 
We have $$I=\\int_{0}^{1}\\int_{0}^{1}\\frac{x^{3}y^{3}\\log\\left(xy\\right)}{1-xy}dxdy=\\int_{0}^{1}\\frac{1}{y}\\int_{0}^{y}\\frac{u^{3}\\log\\left(u\\right)}{1-u}dudy\n\n $$ and $$\\int_{0}^{y}\\frac{u^{3}\\log\\left(u\\right)}{1-u}du=\\sum_{k\\geq0}\\int_{0}^{y}u^{k+3}\\log\\left(u\\right)du\n\n $$ $$=\\sum_{k\\geq0}\\frac{y^{k+4}\\log\\left(y\\right)}{k+4}-\\sum_{k\\geq0}\\frac{y^{k+4}}{\\left(k+4\\right)^{2}}\n\n $$ hence $$I=\\sum_{k\\geq0}\\frac{1}{k+4}\\int_{0}^{1}y^{k+3}\\log\\left(y\\right)dy-\\sum_{k\\geq0}\\frac{1}{\\left(k+4\\right)^{2}}\\int_{0}^{1}y^{k+3}dy\n\n $$ $$=-2\\sum_{k\\geq0}\\frac{1}{\\left(k+4\\right)^{3}}=\\color{red}{-2\\left(\\zeta\\left(3\\right)-\\frac{251}{216}\\right).}$$", "answer_owner": "Marco Cantarini", "is_accepted": true, "score": 4, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5100588, "title": "It looks like there is no solution but you can still find one in $\\frac{x^2-1}{x-1}=2$ - which is true?", "question_text": "I saw this math brain teaser problem:\n\n$$\\frac{x^2-1}{x-1} = 2$$\n\nAnd I can do this (please, someone correct me if I err here):\n\n$$\\begin{align}\n\n\\frac{x^2-1}{x-1} &= 2 \\\\\n\n\\frac{(x-1)(x+1)}{x-1} &= 2\\\\\n\nx+1 &=2\\\\\n\nx&=1\n\n\\end{align}$$\n\nLeaving me with x=1 as a (real) solution. Well and good, but when I plug the solution back into the original equation\n\n$$\n\n\\begin{align}\n\n\\frac{(1)^2-1}{(1)-1} &= 2 \\\\\n\n\\frac{0}{0} &= 2 \\rightarrow \\text{this is plainly a contradiction}\\\\\n\n\\end{align}$$\n\nSo, I feel that I am missing something elementary. I am aware that if I use limits this way\n\n$$\n\n\\begin{align}\n\n\\ \\lim_{x\\rightarrow 1}\\frac{x^2-1}{x-1} &= 2 \\\\\n\n\\end{align}$$\n\nand apply L'Hôpital's rule I get\n\n$$\n\n\\begin{align}\n\n\\ \\frac{2x}{1}=2x &= 2 \\\\\n\n\\end{align}$$\n\nAnd we have\n\n$x = 1$\n\n again. 
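Those limit computations can be summarized numerically; this small check (my own illustration) shows the quotient approaching $2$ while $x=1$ itself is outside the domain:

```python
def g(x):
    # g(x) = (x^2 - 1)/(x - 1); undefined at x = 1
    return (x * x - 1) / (x - 1)

# the quotient tends to 2 from both sides of x = 1 ...
near_values = [g(1 + d) for d in (0.1, 0.01, 0.001, -0.001, -0.01, -0.1)]

# ... but plugging in x = 1 divides by zero, so x = 1 is not a solution
try:
    g(1)
    defined_at_one = True
except ZeroDivisionError:
    defined_at_one = False
```

So the equation $(x^2-1)/(x-1)=2$ has no solution, even though the limit at $x=1$ equals $2$.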
As I recall, tho this is a rather improper use of L'Hôpital because this isn't really a proper function (I don't think? Again, someone tell me if I am missing something).\n\nSo that's two methods that get a solution, but it still seems wrong. I expect there is something from elementary precalc that I am forgetting here, and I would be gratified to know what it is.", "question_owner": "Jesse", "question_link": "https://math.stackexchange.com/questions/5100588/it-looks-like-there-is-no-solution-but-you-can-still-find-one-in-fracx2-1x", "answer": { "answer_id": 5100599, "answer_text": "When ever you divide by an unknown expression, you\n\nmust\n\n specify and note that that expression can not be zero. It\n\nmust\n\n be acknowledged.\n\nIf you had for example something\n\nwithout\n\n any issue say\n\n$x^2 - 1 = 3(x-1)$\n\n you are actually\n\nNOT\n\n allowed to say \"divide both sides by\n\n$x-1$\n\n so to get\n\n$\\frac{x^2-1}{x-1} = 3\\implies x+1 = 3\\implies x=2$\n\n\" even though that will give you\n\na\n\n correct answer (and it will miss a second correct answer).\n\nYou can't do that because, for all you know, maybe\n\n$x-1=0$\n\n and you can not divide by\n\n$0$\n\n. You\n\nmust\n\n say: \"If\n\n$x-1$\n\n is not zero, I can divide both sides by\n\n$x-1$\n\n, but if\n\n$x-1$\n\ndoes\n\n equal zero, I can not. So I will have to, one way or another, deal with the two cases, one where\n\n$x-1\\ne 0$\n\n and another where\n\n$x-1=0$\n\n\"\n\nSo what you say is \"**!!IF!!!\n\n$x-1\\ne 0$\n\n then\n\n$x^2 -1 = 3(x-1)\\implies \\frac {x^2-1}{x-1} = x+1=3$\n\n and so\n\n$x=2$\n\n and as when\n\n$x=2$\n\n we have\n\n$x-1=2-1\\ne 0$\n\n we are okay about our concerns. But I still have to consider what if\n\n$x-1=0$\n\n. In that case\n\n$x-1=0$\n\n and\n\n$x=1$\n\n but if we plug that in\n\nwithout\n\n dividing we get\n\n$x^2-1 = 1^2-1 = 0$\n\n and\n\n$3(x-1) = 3\\cdot 0 = 0$\n\n and that is consistent so that is a second solution\". 
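The two-case bookkeeping above mechanizes cleanly. A sketch (helper names are my own) contrasting the polynomial equation, where $x=1$ survives, with the fractional one, where it is excluded:

```python
def solve_fraction_eq(c):
    # Solve (x^2 - 1)/(x - 1) = c: the division already presumes x != 1,
    # and cancelling gives x + 1 = c, so the lone candidate is c - 1.
    x = c - 1
    return {x} if x != 1 else set()  # reject x = 1: the left side is 0/0 there

def solve_poly_eq(c):
    # Solve x^2 - 1 = c*(x - 1): case x != 1 gives x = c - 1 as above,
    # and case x = 1 always works since both sides are then 0.
    return {c - 1, 1}

no_solution = solve_fraction_eq(2)   # set(): matches "there is no solution"
both_roots = solve_poly_eq(3)        # {1, 2}: matches the worked example
```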
You\n\nmust\n\n consider and stipulate \"I am assuming\n\n$x-1\\ne 0$\n\n when I do division and I must take that into account\".\n\nSo let's go to your problem. You have\n\n$\\frac {x^2-1}{x-1} = 2$\n\n. The first thing you\n\nmust\n\n state is that, as we are dividing by\n\n$x-1$\n\n and we can't divide by\n\n$0$\n\n, we\n\ncan't\n\n have\n\n$x-1 = 0$\n\n. But if it is not, we can go on.\n\nWhen\n\n$x-1 \\ne 0$\n\n (and we\n\nmust\n\n specify that \"when\"); when\n\n$x-1\\ne 0$\n\n we have\n\n$\\frac {x^2-1}{x-1} = x+1$\n\n so solving\n\n$x+1 = 2$\n\n gives us\n\n$x=1$\n\n. That's not an acceptable answer, as when\n\n$x-1$\n\ndoes\n\n equal\n\n$0$\n\n then\n\n$\\frac {x^2-1}{x-1}$\n\n is undefined and impossible. So... there is no solution.", "answer_owner": "fleablood", "is_accepted": true, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105739, "title": "Spivak's Epsilon-Delta Limit Theorem", "question_text": "I am currently on chapter 5 of Spivak's Calculus and I think I am beginning to understand the material, but I'm unsure of how to approach part A of question 26.\n\nGive examples to show that the following limit definition is NOT correct.\n\n(a) For all\n\n$\\delta> 0$\n\n, there is an\n\n$\\epsilon > 0$\n\n such that if\n\n$0 <|x−a|< \\delta$\n\n, then\n\n$|f(x)−L|<\\epsilon$\n\n.\n\nMy initial thought was to attempt the problem with a discontinuous function, but I am not sure what I do is actually correct.\n\nSo if we use the function\n\n$f(x) = \\dfrac{x^2 - 1}{x - 1} = x + 1$\n\n if\n\n$x \\neq 1$\n\n.\n\nWith L =\n\n$\\displaystyle \\lim_{x \\to 0} \\dfrac{x^2 - 1}{x - 1} = 1$\n\n.\n\nUsing the above:\n\nIf\n\n$0 < |x - 0| < \\delta$\n\n (and\n\n$x \\neq 1$\n\n), then\n\n$$\n\n\\left| \\dfrac{x^2 - 1}{x - 1} - 1 \\right|\n\n= |x + 1 - 1|\n\n= |x|\n\n< \\delta = \\varepsilon.\n\n$$\n\nSo if we take that for\n\n$\\varepsilon > 0$\n\n there exists\n\n$\\delta > 0$\n\n of the function\n\nthat\n\n$0 
< |x - l| < \\delta$\n\n implies\n\n$\\left| \\dfrac{x^2 - 1}{x - 1} - 1 \\right| < \\varepsilon.$\n\n$\\Box$\n\nBut this is not correct because this is not the real limit of f(x)? And this would show that the given definition is not correct?", "question_owner": "user1703307", "question_link": "https://math.stackexchange.com/questions/5105739/spivaks-epsilon-delta-limit-theorem", "answer": { "answer_id": 5105749, "answer_text": "Let us take (a) as the definition of limit. Consider the constant function\n\n$f(x)=1$\n\n. Choose any\n\n$a$\n\n and any\n\n$L$\n\n. Given\n\n$\\delta>0$\n\n, choose\n\n$\\epsilon=2+|L|$\n\n. Then\n\n$$\n\n|f(x)-L|\\leq|f(x)|+|L|=1+|L|<\\epsilon.\n\n$$\n\nSo we would have that\n\n$$\\lim_{x \\to a}f(x)=L.$$\n\n For instance,\n\n$$\n\n\\lim_{x \\to 0}1=17.\n\n$$\n\nThis is not only incongruent with the spirit of what a limit should be but, much worse, is incoherent: for we would have\n\n$$\n\n1=\\lim_{x \\to 0}1=2.\n\n$$", "answer_owner": "Martin Argerami", "is_accepted": true, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 3287710, "title": "How to calculate length of Clothoid segment?", "question_text": "I want to calculate the length of a clothoid segment from the following available information.\n\ninitial radius of clothoid segment\n\nfinal radius of clothoid segment\n\nangle (i am not really sure which angle is this, and its not\n\ndocumented anywhere)\n\nAs a test case: I need to find length of a clothoid(left) that starts at\n\n$(1000, 0)$\n\n and ends at approximately\n\n$(3911.5, 943.3)$\n\n. The arguments are:\n\n$initialRadius=10000$\n\n,\n\n$endRadius=2500$\n\n,\n\n$angle=45(deg)$\n\n.\n\nPreviously I have worked on a similar problem where initial radius, final radius, and length are given. So I want to get the length so I can solve it the same way.\n\nI am working on a map conversion problem. 
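Pending clarification of the angle parameter, here is a hedged numeric sketch (the function name and the reading of the angle are my assumptions): if the angle means the change of tangent direction between the two points, then from the clothoid relation $R\cdot L = A^2$ and $\phi_i = A^2/(2R_i^2)$, the segment length comes out as $L = 2\,\Delta\phi\, R_1 R_2/(R_1+R_2)$:

```python
import math

def clothoid_segment_length(r1, r2, delta_phi):
    """Length between two clothoid points, assuming R*L = A^2 and that
    delta_phi is the tangent-direction change between them.

    From phi_i = A^2/(2*R_i^2) and L_i = A^2/R_i:
        delta_phi = (A^2/2) * (1/r2**2 - 1/r1**2)
        L         = A^2 * (1/r2 - 1/r1)
    """
    a_squared = 2 * delta_phi / (1 / r2**2 - 1 / r1**2)  # A^2 from the angle
    return a_squared * (1 / r2 - 1 / r1)

# the question's test case, reading angle = 45 degrees of tangent turn
length = clothoid_segment_length(10000, 2500, math.radians(45))  # ~3141.6

# plausibility: the arc must exceed the straight chord between the endpoints
chord = math.hypot(3911.5 - 1000.0, 943.3)  # ~3060.6
```

The result simplifies to $2\Delta\phi\,R_1R_2/(R_1+R_2) = 1000\pi \approx 3141.6$, a little above the chord length, which at least makes this interpretation plausible.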
The format does not specify what are the details of this angle parameter.\n\nPlease help. I have been stuck at this for 2 days now.", "question_owner": "Ali Arsalan", "question_link": "https://math.stackexchange.com/questions/3287710/how-to-calculate-length-of-clothoid-segment", "answer": { "answer_id": 3289195, "answer_text": "The basic equation for a clothoid is\n\n$R·L = A^2$\n\n where\n\nR\n\n is radious,\n\nL\n\n is the length from the point where R=infinite and\n\nA\n\n is a constant (a scale factor). You can also write\n\n$L=\\frac{A^2}{R}$\n\nThe length along the spiral (not a segment) between two points is\n\n$L= L_2 - L_1$\n\nThe local X-axis is the line tangent at R=inf, (and then\n\n$L=0$\n\n).\n\nThe angle\n\n$\\phi$\n\n from the X-axis to the tangent at a point\n\n$r=R_i$\n\n is\n\n$\\phi=\\frac{L_i}{2R_i}=\\frac{L_i^2}{2A^2}=\\frac{A^2}{2R_i^2}$\n\nSo,\n\n$L_2-L1= \\frac{A^2}{R_2} - \\frac{A^2}{R_1} = A^2(\\frac{1}{R_2}-\\frac{1}{R_1})$\n\nTo get A you can use the angle at one point. For example, for the second point\n\n$(r=R_2$\n\n and\n\n$\\phi=\\phi_2)$\n\n you use\n\n$A^2= \\phi_2·2·R_2^2$\n\nNote that if the first point is R=inf,\n\n$L=0$\n\n then the required length is\n\n$L=A^2/R_2 = \\phi_2·2·R_2$", "answer_owner": "Ripi2", "is_accepted": true, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105734, "title": "on an alternate proof of the Maclaurin series", "question_text": "I'm trying to come up with an alternative proof of Maclaurin series but I'm stuck on a part of the proof. 
consider the integral\n\n$$f(y)-f(0)=\\int_0^y{f'(y-x)}dx$$\n\n using integration by parts you can show\n\n$$f(y)=\\frac{1}{k!}\\int_0^y{(f^{(k+1)}(y-x))x^k}dx+\\sum_{n=0}^{k}\\frac{f^{(n)}(0)y^n}{n!}$$\n\n (note that I'm using\n\n$f^{(k)}(x)$\n\n to denote the $k$th derivative). Taking the\n\n$\\lim_{k \\to \\infty}$\n\n on both sides leads to\n\n$$f(y)=(\\lim_{k \\to \\infty}\\frac{1}{k!}\\int_0^y{(f^{(k+1)}(y-x))x^k}dx)+\\sum_{n=0}^{\\infty}\\frac{f^{(n)}(0)y^n}{n!}$$\n\n meaning I need to prove\n\n$$0=\\lim_{k \\to \\infty}\\frac{1}{k!}\\int_0^y{(f^{(k+1)}(y-x))x^k}dx$$\n\n for all analytic functions f(y) but I have no idea where to even start", "question_owner": "Zactastic 4k", "question_link": "https://math.stackexchange.com/questions/5105734/on-an-alternate-proof-of-the-maclaurin-series", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 2210171, "title": "Error approximation for trapezoidal rule?", "question_text": "Have I calculated the error bound incorrectly?\n\nQuestion\n\nUse the Trapezoidal Rule error, to find the smallest reasonable integer $n$ such that $E_T \\leq \\frac{1}{10}$ of $$\\int_{1}^{3} 2\\ln(t)dt$$\n\nMy work:\n\n\\begin{align}f(x) &= 2\\ln(t)\\\\\n\nf’(x) &= \\frac{2}{x}\\\\\n\nf''(x) &= - \\frac{2}{x^{2}}\\end{align}\n\nTesting the end points, should I find critical points? Not sure\n\n\\begin{align}f''(1)& = \\frac{2}{9}\\\\\\\\\n\nf''(3) &= 2 \\tag{larger value therefore max}\\\\\\\\\n\n|E_T| &< \\frac{(2)(2)^{3}}{12n^{2}} < \\frac{1}{10}\\\\\\\\\n\n\\frac{16}{12n^{2}} &< \\frac{1}{10}\\\\\\\\\n\n\\sqrt{\\frac{40}{3}} &< n\\end{align}\n\nIs this incorrect?\n\nThank you", "question_owner": "yre", "question_link": "https://math.stackexchange.com/questions/2210171/error-approximation-for-trapezoidal-rule", "answer": { "answer_id": 2213424, "answer_text": "I think the question is about exact error not an estimate. 
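For the record, the composite trapezoidal sums for $\int_1^3 2\ln t\,dt$ are easy to script (the helper name is my own):

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule with n equal subintervals
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

f = lambda t: 2 * math.log(t)
exact = 6 * math.log(3) - 4                      # ~2.5917

approx = {n: trapezoid(f, 1, 3, n) for n in (1, 2, 3)}
# n=1 ~ 2.197, n=2 ~ 2.485, n=3 ~ 2.543: n=3 is the first within 1/10
```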
The integral is\n\n$$\n\nI = \\int_1^3 f(t)\\, dt =6 \\ln 3 - 4 \\approx 2.592\n\n$$\n\nTrapezoidal rule with different $n$ gives\n\n$$\n\nI_1 = f(3) + f(1) \\approx 2.197\n\n$$\n\n$$\n\nI_2 = \\frac{f(3) + 2 f(2) + f(1)}{2} \\approx 2.485\n\n$$\n\n$$\n\nI_3 = \\frac{f(3)+2f(7/3)+2f(5/3)+f(1)}{3} \\approx 2.543\n\n$$\n\n$I_3$ is the first close enough.", "answer_owner": "slitvinov", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1058750, "title": "Consumer Surplus", "question_text": "I am working on an economics problem where my solution is not correct, and I'd like to know why. Below is the question and my subsequent work/solution:\n\nThe demand function for a particular vacation package is\n\n$$D(q) = 2000 − 49\\sqrt{q}.$$\n\nFind the consumer surplus when the sales level for the package is\n\n$800$\n\n.\n\nI got this far:\n\n$$CS = \\int_0^{800}\\left( 2000 - 49q^{1/2}- 614.07\\right)dq$$\n\n$$ = \\int_0^{800}\\left( 1386 - 49q^{1/2} \\right)dq$$\n\n$$ = 1386q - 24.5q^{-1/2}$$\n\nDid I find the antiderivative incorrectly? I'm not sure what's wrong with my work.", "question_owner": "Jaime", "question_link": "https://math.stackexchange.com/questions/1058750/consumer-surplus", "answer": { "answer_id": 1060206, "answer_text": "Note that $\\int q^{\\frac{1}{2}} dq = \\frac{2}{3} q^{\\frac{3}{2}} + C$. So you integrated your second term incorrectly.\n\nIt should be:\n\n$$\\int_{0}^{800} (1386 - 49\\sqrt{q}) dq = 1386(800) - \\frac{98}{3} \\cdot (800)^{\\frac{3}{2}}$$", "answer_owner": "ml0105", "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4564090, "title": "For which number $a$ does the limit $f(x) = \\lfloor x \\rfloor$ exist?", "question_text": "Similar question:\n\nFor which $a$ does $\\lim_{x\\to a}f(x) = \\lfloor x \\rfloor$ exist?\n\nThe question is from Spivak's Calculus Chapter 5 Question 4. 
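Before any proof, a quick numeric picture of the dichotomy (my own illustration, not a substitute for the $\epsilon$-$\delta$ argument): one-sided samples of $\lfloor x \rfloor$ disagree at an integer and agree elsewhere:

```python
import math

def one_sided_floors(a, h=1e-9):
    # floor sampled just to the left and just to the right of a
    return math.floor(a - h), math.floor(a + h)

at_integer = one_sided_floors(2.0)      # (1, 2): the one-sided values differ
at_non_integer = one_sided_floors(2.5)  # (2, 2): they agree
```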
I am trying to look for the answer using the tools I have learned in this Chapter, i.e., the delta epsilon definition of the limit.\n\nI attempted to prove the limit does not exist if\n\n$a$\n\n is an integer, which I am unsure whether it is correct. And I am also having trouble proving the limit exists if\n\n$a$\n\n is not an integer.\n\nAssuming that\n\n$a$\n\n is an integer\n\n.\n\nIf an interval\n\n$A$\n\n containing integer\n\n$a$\n\n, we can find both\n\n$x_1$\n\n, and\n\n$x_2$\n\n within the interval\n\n$A$\n\n, such that\n\n$f(x_1) = a$\n\n and\n\n$f(x_2) = a-1$\n\n, for all\n\n$0 < |x_1 - a| < \\delta_1$\n\n, and\n\n$0 < |x_2 - a| < \\delta_2$\n\n. Let\n\n$\\epsilon = \\frac{1}{2}$\n\n. We need\n\n$|a-l|<\\frac{1}{2}$\n\n, and\n\n$|a-1-l|<\\frac{1}{2}$\n\n. And we cannot have that.\n\nIs the logic correct above?\n\nAssuming that\n\n$a$\n\n is not an integer\n\nHow can I show that the limit exists using delta epsilon proof?", "question_owner": "JOHN", "question_link": "https://math.stackexchange.com/questions/4564090/for-which-number-a-does-the-limit-fx-lfloor-x-rfloor-exist", "answer": { "answer_id": 4564091, "answer_text": "Let\n\n$a\\in (n,n+1)$\n\n. 
Then there is a\n\n$\\delta>0$\n\n such that\n\n$(-\\delta+a,a+\\delta)\\subset (n,n+1)$\n\n, and consequently,\n\n$f((-\\delta+a,a+\\delta)) = \\{\\lfloor a \\rfloor\\} = \\{n\\}$\n\n.", "answer_owner": "Andrew", "is_accepted": false, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1312937, "title": "Finding maximum height with given velocity.", "question_text": "$\\renewcommand{\\vec}[1]{\\mathbf{#1}}$\n\nI was given $\\vec{v}(t) = 40 \\hat{i} + (30-10t)\\hat{j}$ m/s\n\nand i found position vector by\n\n$\\int \\vec{v}(t) = 40\\hat{i} + (30-10t)\\hat{j}$\n\nwhich is $\\vec{r}(t) = 40\\hat{i} + (30t+5t^2)\\hat{j} + 50$\n\nNow, I'm supposed to find the greatest height possible, but I don't know how to.\n\nHow do I find this?", "question_owner": "Kibbles", "question_link": "https://math.stackexchange.com/questions/1312937/finding-maximum-height-with-given-velocity", "answer": { "answer_id": 1312942, "answer_text": "First, $r(t) = (40t,30t - 5t^2)$ ( Yours is with a $+$ sign). 
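Completing that elimination numerically (a sketch; the grid check is my own): with $y(t) = 30t - 5t^2$, the peak occurs where $v_y = 30 - 10t$ vanishes, i.e. at $t = 3$ with $y_{\max} = 45$:

```python
def height(t):
    # vertical coordinate from r(t) = (40t, 30t - 5t^2)
    return 30 * t - 5 * t * t

t_peak = 30 / 10          # where v_y = 30 - 10t vanishes
y_max = height(t_peak)    # 45.0

# brute-force confirmation on a fine grid of times
grid_max = max(height(0.001 * i) for i in range(6001))
```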
This means:\n\n$x = 40t, y = 30t - 5t^2$, can you eliminate $t$ and obtain $y = f(x)$ which is a quadratic function and you can find $y_{max}$.", "answer_owner": "DeepSea", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105736, "title": "From Taylor's Theorem with Peano's Form of Remainder to the existence of derivatives", "question_text": "As the Taylor's Theorem with Peano's Form of Remainder says:\n\nIf\n\n$f$\n\n is a function such that its\n\n$n^\\text{th}$\n\nderivative at\n\n$a$\n\n (i.e.\n\n$f^{(n)}(a)$\n\n) exists, then\n\n$$f(a + h) = f(a) + hf'(a) + \\frac{h^{2}}{2!}f''(a) + \\cdots + \\frac{h^{n}}{n!}f^{(n)}(a) + o(h^{n})$$\n\nwhere\n\n$o(h^n)$\n\n represents a function\n\n$g(h)$\n\n with\n\n$g(h)/h^n→0$\n\n as\n\n$h→0$\n\n.\n\nThis conclusion is based on the premise that\n\n$f$\n\n is\n\n$n^\\text{th}$\n\n derivative at\n\n$a$\n\n.\n\nConversely, if\n\n$f$\n\n can be written in the form that\n\n$$f(a + h) = f(a) + hf'(a) + \\frac{h^{2}}{2!}f''(a) + \\cdots + \\frac{h^{n}}{n!}f^{(n)}(a) + o(h^{n})$$\n\n does the Taylor's Theorem guarantee the existence of\n\n$n^\\text{th}(n\\ge 2)$\n\n derivatives at point\n\n$a$\n\n?\n\nTaking the example as:\n\n$$f(x) = 1 + 2x^2 + o(x^2) , x \\in \\mathring {U} (0,\\delta)$$\n\nIt can be rewritten as\n\n$f(x) = 1+ o(1)$\n\n where\n\n$o(1)=2x^2+o(x^2)$\n\n so\n\n$\\lim_{x \\to 0} f(x) = 1$\n\n,\n\nby the definition of the\n\n$1^\\text{st} $\n\nderivative at zero point\n\n$$f'(0) \\overset{\\text{def}}{=} \\lim_{x \\to 0} \\frac{f(x) - f(0)}{x} = \\lim_{x \\to 0} \\frac{(1+2x^2+o(x^2)) - 1}{x} = 0$$\n\nHowever, for the\n\n$2^\\text{nd}$\n\n derivative at zero point, there is no further information about\n\n$f'(x)$\n\n$$f''(0) \\overset{\\text{def}}{=} \\lim_{x \\to 0} \\frac{f'(x) - f'(0)}{x}$$\n\nDoes this imply that Taylor's Theorem with Peano's Form of Remainder\n\nin point\n\n$a$\n\n only guarantees the existence of\n\n$f(a)$\n\n 
and\n\n$f'(a)$\n\n?", "question_owner": "illusionaryshelter", "question_link": "https://math.stackexchange.com/questions/5105736/from-taylors-theorem-with-peanos-form-of-remainder-to-the-existence-of-derivat", "answer": { "answer_id": 5105798, "answer_text": "Conversely, if\n\n$f$\n\n can be written in the form that\n\n$$f(a + h) = f(a) + hf'(a) + \\frac{h^{2}}{2!}f''(a) + \\cdots + \\frac{h^{n}}{n!}f^{(n)}(a) + o(h^{n})$$\n\n does the Taylor's Theorem guarantee the existence of\n\n$n^\\text{th}(n\\ge 2)$\n\n derivatives at point\n\n$a$\n\n?\n\nIf you can do that: Yes (simply because\n\n$f^{(n)}(a)$\n\n occurs in the formula since Taylor's theorem only applies to functions which are\n\n$n$\n\n-times differentiable at\n\n$a$\n\n ). But I think you mean something else. Perhaps you mean\n\nIf\n\n$f$\n\n is\n\n$(n-1)$\n\n-times differentiable and\n\n$f$\n\n can be written in the form\n\n$$f(a + h) = f(a) + hf'(a) + \\frac{h^{2}}{2!}f''(a) + \\cdots + \\frac{h^{n-1}}{(n-1)!}f^{(n-1)}(a) + \\frac{h^{n}}{n!}c_n + o(h^{n})$$\n\nwith some\n\n$c_n \\in \\mathbb R$\n\n, does\n\n$f^{(n)}(a)$\n\n exist?\n\nConsider your example\n\n$f(x) = 1 + 2x^2 + o(x^2)$\n\n. Certainly\n\n$f$\n\n is differentiable at\n\n$0$\n\n with\n\n$f'(0) = 0$\n\n, but there is no reason why\n\n$f''(0)$\n\n should exist. Let for example\n\n$$g(x) = \\begin{cases} x^3\\sin(1/x^2)\n\n& x \\ne 0 \\\\ 0 & x = 0 \\end{cases}$$\n\nThis is an\n\n$o(x^2)$\n\n-function (i.e.\n\n$f(x) = 1 +2x^2 +g(x)$\n\n has the desired form). A necessary condition that\n\n$f''(0)$\n\n exists is that\n\n$g''(0)$\n\n exists. 
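Numerics make the failure visible. By my own computation via the product and chain rules, $g'(x) = 3x^2\sin(1/x^2) - 2\cos(1/x^2)$ for $x \ne 0$ (and $g'(0) = 0$), so the difference quotient $g'(h)/h$ oscillates unboundedly along $h_n = 1/\sqrt{n\pi}$; a sketch with sampling points of my own choosing:

```python
import math

def g_prime(x):
    # derivative of g(x) = x^3 * sin(1/x^2) for x != 0; g'(0) = 0
    return 3 * x * x * math.sin(1 / x**2) - 2 * math.cos(1 / x**2)

# at h_n = 1/sqrt(n*pi) we have cos(1/h^2) = (-1)^n, so g'(h)/h is close
# to -2*(-1)^n / h: it alternates in sign and grows like 2*sqrt(n*pi)
quotients = [g_prime(1 / math.sqrt(n * math.pi)) * math.sqrt(n * math.pi)
             for n in range(1, 6)]
```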
But we have\n\n$$g'(x) = \\begin{cases}3x^2 \\sin (1/x^2) - 2\\cos(1/x^2) & x \\ne 0 \\\\ 0 & x = 0 \\end{cases}$$\n\nThis shows that\n\n$g''(0)$\n\n does not exist.", "answer_owner": "Paul Frost", "is_accepted": false, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4980737, "title": "How to evaluate $ \\int_0^1 \\frac{\\ln(1-x) \\ln^2 x \\ln^2(1+x)}{1-x} \\, dx $", "question_text": "Question\n\nHow to evaluate\n\n$$\n\n \\int_0^1 \\frac{\\ln(1-x) \\ln^2 x \\ln^2(1+x)}{1-x} \\, dx\n\n$$\n\nMy attempt\n\nTo evaluate the integral\n\n$$\n\nI = \\int_0^1 \\frac{\\ln(1-x) \\ln^2 x \\ln^2(1+x)}{1-x} \\, dx\n\n$$\n\nWe can use a substitution to help simplify the integral. Let’s substitute\n\n$x = 1 - u $\n\n Then,\n\n$dx = -du$\n\n, and the limits change as follows:\n\nWhen\n\n$x = 0$\n\n,\n\n$u = 1$\n\n.\n\nWhen\n\n$x = 1$\n\n,\n\n$u = 0$\n\n.\n\nThus, the integral transforms as follows:\n\n$$\n\nI = \\int_1^0 \\frac{\\ln(1 - (1-u)) \\ln^2(1-u) \\ln^2(1 + (1-u))}{1 - (1-u)} (-du).\n\n$$\n\nThis simplifies to:\n\n$$\n\nI = \\int_0^1 \\frac{\\ln(u) \\ln^2(1-u) \\ln^2(2-u)}{u} \\, du.\n\n$$\n\nI don't know the closed form of the integral; I got inspired by related integrals from the site.\n\nClosed form\n\n$$-\\frac{4}{3} \\pi ^2 \\text{Li}_4\\left(\\frac{1}{2}\\right)+\\frac{131 \\zeta (3)^2}{16}+\\frac{7}{3} \\zeta (3) \\ln ^3(2)-\\frac{77}{24} \\pi ^2 \\zeta (3) \\ln (2)+\\frac{217}{8} \\zeta (5) \\ln (2)+\\frac{41 \\pi ^6}{30240}-\\frac{1}{18} \\pi ^2 \\log ^4(2)-\\frac{1}{144} \\pi ^4 \\log ^2(2)$$\n\nThanks to Mariusz Iwaniuk for the closed form.", "question_owner": "Martin.s", "question_link": "https://math.stackexchange.com/questions/4980737/how-to-evaluate-int-01-frac-ln1-x-ln2-x-ln21x1-x-dx", "answer": { "answer_id": 5013908, "answer_text": "This is going to be strange,\n\n$$I=\\int_0^1 \\frac{\\ln(1-x) \\ln^2 (x) \\ln^2(1+x)}{1-x} \\, dx$$\n\nPerforming\n\n$\\frac{1-x}{1+x}=u$\n\n,\n\n$$I = 
\\int_{0}^{1} \\frac{\\log\\left(\\frac{2u}{1+u}\\right) \\log^2\\left(\\frac{1-u}{1+u}\\right) \\log^2\\left(\\frac{2}{1+u}\\right)}{u(1+u)} \\, du$$\n\nUsing,\n\n$$\\log\\left(\\frac{2u}{1+u}\\right) = \\log 2 - \\log u - \\log(1+u)\\tag1$$\n\n$$\\log^2\\left(\\frac{1-u}{1+u}\\right) = \\log^2(1-u) + \\log^2(1+u)-2\\log(1-u)\\log(1+u)\\tag2$$\n\n$$\\log^2\\left(\\frac{2}{1+u}\\right) = \\log^2 2 + \\log^2(1+u)-2\\log(2)\\log(1+u)\\tag3$$\n\nAlso,\n\n$$\\frac{1}{u(1+u)}=\\frac{1}{u}-\\frac{1}{1+u}$$\n\n$$I = \\int_{0}^{1} \\frac{\\log\\left(\\frac{2u}{1+u}\\right) \\log^2\\left(\\frac{1-u}{1+u}\\right) \\log^2\\left(\\frac{2}{1+u}\\right)}{u} \\, du-\\int_{0}^{1} \\frac{\\log\\left(\\frac{2u}{1+u}\\right) \\log^2\\left(\\frac{1-u}{1+u}\\right) \\log^2\\left(\\frac{2}{1+u}\\right)}{1+u} \\, du$$\n\n$$I = I_1-I_2$$\n\nSection - 1\n\nWorking with our\n\n$I_1$\n\n,\n\n$$I_1= \\int_{0}^{1} \\frac{\\log\\left(\\frac{2u}{1+u}\\right) \\log^2\\left(\\frac{1-u}{1+u}\\right) \\log^2\\left(\\frac{2}{1+u}\\right)}{u} \\, du$$\n\nNow after multiplying\n\n$(1),(2),(3)$\n\n,\n\n$$\\log^3(2)\\log^2(1-u)+\\log^3(2)\\log^2(1+u)-2\\log^3(2)\\log(1-u)\\log(1+u)-\\log^2(2)\\log(u)\\log^2(1-u)-\\log^2(2)\\log(u)\\log^2(1+u)+2\\log^2(2)\\log(u)\\log(1-u)\\log(1+u)-\\log^2(2)\\log(1+u)\\log^2(1-u)-\\log^2(2)\\log^3(1+u)-2\\log^2(2)\\log(1-u)\\log^2(1+u)+\\log(2)\\log^2(1-u)\\log^2(1+u)+\\log(2)\\log^4(1+u)-2\\log(2)\\log(1-u)\\log^3(1+u)-\\log(u)\\log^2(1-u)\\log^2(1+u)-\\log(u)\\log^4(1+u)+2\\log(u)\\log(1-u)\\log^3(1+u)-\\log^3(1+u)\\log^2(1-u)-\\log^5(1+u)-2\\log(1-u)\\log^4(1+u)-2\\log^2(2)\\log(1+u)\\log^2(1-u)-2\\log^2(2)\\log^2(1+u)+4\\log^2(2)\\log(1-u)\\log^2(1+u)+2\\log(2)\\log(u)\\log(1+u)\\log^2(1-u)+2\\log(2)\\log(u)\\log^3(1+u)-4\\log(2)\\log(u)\\log(1-u)\\log^2(1+u)+2\\log(2)\\log^2(1+u)\\log^2(1-u)+2\\log(2)\\log^4(1+u)+4\\log(2)\\log(1-u)\\log^3(1+u)$$\n\nDividing 
by\n\n$u$\n\n,\n\n$$\\frac{\\log^3(2)\\log^2(1-u)}{u}+\\frac{\\log^3(2)\\log^2(1+u)}{u}-\\frac{2\\log^3(2)\\log(1-u)\\log(1+u)}{u}-\\frac{\\log^2(2)\\log(u)\\log^2(1-u)}{u}-\\frac{\\log^2(2)\\log(u)\\log^2(1+u)}{u}+\\frac{2\\log^2(2)\\log(u)\\log(1-u)\\log(1+u)}{u}-\\frac{\\log^2(2)\\log(1+u)\\log^2(1-u)}{u}-\\frac{\\log^2(2)\\log^3(1+u)}{u}-\\frac{2\\log^2(2)\\log(1-u)\\log^2(1+u)}{u}+\\frac{\\log(2)\\log^2(1-u)\\log^2(1+u)}{u}+\\frac{\\log(2)\\log^4(1+u)}{u}-\\frac{2\\log(2)\\log(1-u)\\log^3(1+u)}{u}-\\frac{\\log(u)\\log^2(1-u)\\log^2(1+u)}{u}-\\frac{\\log(u)\\log^4(1+u)}{u}+\\frac{2\\log(u)\\log(1-u)\\log^3(1+u)}{u}-\\frac{\\log^3(1+u)\\log^2(1-u)}{u}-\\frac{\\log^5(1+u)}{u}-\\frac{2\\log(1-u)\\log^4(1+u)}{u}-\\frac{2\\log^2(2)\\log(1+u)\\log^2(1-u)}{u}-\\frac{2\\log^2(2)\\log^2(1+u)}{u}+\\frac{4\\log^2(2)\\log(1-u)\\log^2(1+u)}{u}+\\frac{2\\log(2)\\log(u)\\log(1+u)\\log^2(1-u)}{u}+\\frac{2\\log(2)\\log(u)\\log^3(1+u)}{u}-\\frac{4\\log(2)\\log(u)\\log(1-u)\\log^2(1+u)}{u}+\\frac{2\\log(2)\\log^2(1+u)\\log^2(1-u)}{u}+\\frac{2\\log(2)\\log^4(1+u)}{u}+\\frac{4\\log(2)\\log(1-u)\\log^3(1+u)}{u}$$\n\nWe have,\n\n$$I_1=\\log^3(2)A+\\log^3(2)B-2\\log^3(2)C-D-\\log^2(2)E+2\\log^2(2)F-\\log^2(2)G-H-2\\log^2(2)K+\\log(2)L+\\log(2)M-2\\log(2)N-P-Q+2R-S-T-2W-2\\log^2(2)X-2\\log^2(2)Y+4\\log^2(2)Z+2\\log(2)\\alpha+2\\log(2)\\beta-4\\log(2)\\gamma+2\\log(2)\\delta+2\\log(2)\\Omega+4\\log(2)\\tau$$\n\nWhere,\n\n$$\\boxed{A=\\int_0^1 \\frac{\\log^2(1-u)}{u} \\, du=2\\zeta(3)}$$\n\nsolution of A\n\n$$\\boxed{B=\\int_0^1 \\frac{\\log^2(1+u)}{u} \\, du=\\frac{1}{4}\\zeta(3)}$$\n\nsolution of B\n\n$$\\boxed{C=\\int_0^1 \\frac{\\log(1-u)\\log(1+u)}{u} \\, du=-\\frac{5}{8}\\zeta(3)}$$\n\nsolution of C\n\n$$\\boxed{D=\\int_0^1 \\frac{\\log(u)\\log^2(1-u)}{u} \\, du=-\\frac12 \\zeta(4)}$$\n\nsolution of D\n\n$$\\boxed{E=\\int_0^1 \\frac{\\log(u)\\log^2(1+u)}{u} \\, du=\\frac{\\pi^4}{24}-4\\text{Li}_4 \\left(\\frac{1}{2} \\right)-\\frac{7\\zeta(3)\\ln 2}{2}+\\frac{\\pi^2\\ln^2 
2}{6}-\\frac{\\ln^4 2}{6}}$$\n\nsolution of E\n\n$$\\boxed{F=\\int_0^1 \\frac{\\log(u)\\log(1-u)\\log(1+u)}{u} \\, du=-\\frac{3 \\pi^4}{160}+\\frac{7\\log(2)}{4}\\zeta(3)-\\frac{\\pi^2 \\log^2(2)}{12} +\\frac{\\log^4(2)}{12}+ 2 \\text{Li}_4 \\left(\\frac{1}{2} \\right)}$$\n\nsolution of F\n\n$$\\boxed{G=\\int_0^1 \\frac{\\log^2(1-u)\\log(1+u)}{u} \\, du=2\\operatorname{Li}_4\\left(\\frac{1}{2}\\right)-\\frac58\\zeta(4)+\\frac74\\ln(2)\\zeta(3)-\\frac12\\ln^2(2)\\zeta(2)+\\frac{1}{12}\\ln^4(2)}$$\n\nG was self-computed\n\n$$\\boxed{H=\\int_0^1 \\frac{\\log^3(1+u)}{u} \\, du=6\\zeta(4)-\\frac{21}{4}\\ln(2)\\zeta(3)+\\frac{3}{2}\\ln^{2}(2)\\zeta(2)-\\frac{1}{4}\\ln^{4}(2)-6\\operatorname{Li}_4\\left(\\frac{1}{2}\\right)}$$\n\nH was self-computed\n\n$$\\boxed{K=\\int_0^1 \\frac{\\log(1-u)\\log^2(1+u)}{u} \\, du=-\\frac {\\pi^4}{240}}$$\n\nsolution of K\n\n$$\\boxed{L=\\int_0^1 \\frac{\\log^2(1-u)\\log^2(1+u)}{u} \\, du=\\frac{21}{24}\\zeta(5)-\\frac16\\left(24\\zeta(5)-\\frac{21}{2}\\ln^2(2)\\zeta(3)+4\\ln^3(2)\\zeta(2)-\\frac45\\ln^5(2)-24\\ln(2)\\operatorname{Li}_4\\left(\\frac12\\right)-24\\operatorname{Li}_5\\left(\\frac12\\right)\\right)}$$\n\nL was self-computed\n\n$$\\boxed{M=\\int_0^1 \\frac{\\log^4(1+u)}{u}du=24\\zeta(5)-\\frac{21}{2}\\ln^{2}(2)\\zeta(3)+4\\ln^{3}(2)\\zeta(2)-\\frac{4}{5}\\ln^{5}(2)-24\\ln(2)\\operatorname{Li}_4\\left(\\frac{1}{2}\\right)-24\\operatorname{Li}_5\\left(\\frac{1}{2}\\right)}$$\n\nM was self-computed\n\n$$\\boxed{N=\\int_0^1 \\frac{\\log(1-u)\\log^3(1+u)}{u} \\, du=-6\\operatorname{Li}_5\\left(\\frac12\\right)-6\\ln2\\operatorname{Li}_4\\left(\\frac12\\right)+\\frac{3}{4}\\zeta(5)+\\frac{21}{8}\\zeta(2)\\zeta(3)\\\\\\quad-\\frac{21}8\\ln^22\\zeta(3)+\\ln^32\\zeta(2)-\\frac15\\ln^52}$$\n\nSolution of N\n\n$$\\boxed{P=\\int_0^1 \\frac{\\log(u)\\log^2(1+u)\\log^2(1-u)}{u} \\, du=-2 \\zeta(\\bar5,1)+8 \\text{Li}_6\\left(\\frac{1}{2}\\right)+4 \\text{Li}_4\\left(\\frac{1}{2}\\right) \\log ^2(2)+8 
\\text{Li}_5\\left(\\frac{1}{2}\\right) \\log (2)-\\frac{13 \\zeta (3)^2}{16}+\\frac{7}{6} \\zeta (3) \\log ^3(2)-\\frac{221 \\pi ^6}{30240}+\\frac{\\log ^6(2)}{9}-\\frac{1}{12} \\pi ^2 \\log ^4(2)}$$\n\nSolution of P\n\n$$Q=\\int_0^1 \\frac{\\log(u)\\log^4(1+u)}{u} \\, du$$\n\nfrom\n\nhere\n\n,\n\n$$\\mathcal{J}_{2}^{(4,1)} = - \\frac{[\\log(\\tfrac{1}{2})]^{6}}{6}=-\\frac16\\log^6(2)$$\n\n$$\\mathcal{J}_{1}^{(4,1)}=-24\\left[\\zeta(6) - \\operatorname{Li}_6\\left(\\frac12\\right)-\\log(2)\\operatorname{Li}_5\\left(\\frac12\\right)-\\frac12\\log^2(2)\\operatorname{Li}_4\\left(\\frac12\\right)-\\frac16\\log^3(2)\\operatorname{Li}_3\\left(\\frac12\\right)-\\frac1{24}\\log^4(2)\\operatorname{Li}_2\\left(\\frac12\\right) \\right]$$\n\n$$\\mathcal{J}_{0}^{(4,1)} =-12 \\left[ \\zeta(5,1) - \\zeta_1(2; 5, 1) - \\log(2) \\zeta_1(2; 4, 1) - \\frac12\\log^2(2) \\zeta_1(2; 3, 1) - \\frac16 \\log^3(2) \\zeta_1(2; 2, 1) \\right]$$\n\nSubstituting these we can (possibly) obtain a closed form!\n\nAlternatively,\n\nHere's a partial route to finding its closed form;\n\nSubstitute\n\n$u=\\frac{1-x}{x}$\n\n,\n\n$$Q=\\int_{\\frac12}^1\\frac{\\ln(1-x)\\ln^4(x)}{x(1-x)}\\,dx-\\int_{\\frac12}^1\\frac{\\ln^5(x)}{x(1-x)}\\,dx$$\n\nWe apply partial fractions to obtain 4 more integrals,\n\n$$Q=\\int_{\\frac12}^1\\frac{\\ln(1-x)\\ln^4(x)}{x}\\,dx+\\int_{\\frac12}^1\\frac{\\ln(1-x)\\ln^4(x)}{1-x}\\,dx-\\int_{\\frac12}^1\\frac{\\ln^5(x)}{x}\\,dx-\\int_{\\frac12}^1\\frac{\\ln^5(x)}{1-x}\\,dx$$\n\nWhile I'm not able to move on with the first two integrals, the last two are quite straightforward;\n\n$$\\int_{\\frac12}^1\\frac{\\ln^5(x)}{x}\\,dx=-\\frac{\\ln^6(2)}{6}$$\n\n$$\\int_{\\frac12}^1\\frac{\\ln^5(x)}{1-x}\\,dx=\\sum_{n\\ge 1} \\int_{\\frac12}^1 x^{n-1}\\ln^5(x)\\,dx$$\n\n$$=\\sum_{n\\ge 1} 
\\frac{\\ln^5(2)}{n2^n}+\\frac{5\\ln^4(2)}{n^22^n}+\\frac{20\\ln^3(2)}{n^32^n}+\\frac{60\\ln^2(2)}{n^42^n}+\\frac{120\\ln(2)}{n^52^n}+\\frac{120}{n^62^n}-\\frac{120}{n^6}=120\\,\\operatorname{Li}_6\\left(\\tfrac{1}{2}\\right) + 60\\,\\operatorname{Li}_4\\left(\\tfrac{1}{2}\\right)\\log^2(2) + 120\\,\\operatorname{Li}_5\\left(\\tfrac{1}{2}\\right)\\log(2) + \\tfrac{35}{2}\\zeta(3)\\log^3(2) - \\tfrac{8\\pi^6}{63} + \\tfrac{11\\log^6(2)}{6} - \\tfrac{5}{4}\\pi^2\\log^4(2)$$\n\nAll these sums are well known; I am not going to prove each one here.\n\nYou can compare the results\n\nintegral\n\n and\n\nsum\n\n$$R=\\int_0^1 \\frac{\\log(u)\\log(1-u)\\log^3(1+u)}{u} \\, du$$\n\n$$S=\\int_0^1 \\frac{\\log^3(1+u)\\log^2(1-u)}{u} \\, du$$\n\n$$\\boxed{T=\\int_0^1 \\frac{\\log^5(1+u)}{u} \\, du=\\frac{1}{6}\\ln^6(2)+\\frac{8\\pi^6}{63}-\\sum_{n=0}^5 \\binom{5}{n}\\operatorname{Li}_{n+1}\\left(\\frac12\\right)\\ln^{5-n}(2)n!}$$\n\nT was self-computed\n\n$$W=\\int_0^1 \\frac{\\log(1-u)\\log^4(1+u)}{u} \\, du$$\n\n$$\\boxed{X=\\int_0^1 \\frac{\\log(1+u)\\log^2(1-u)}{u} \\, du=2\\operatorname{Li}_4\\left(\\frac{1}{2}\\right)-\\frac58\\zeta(4)+\\frac74\\ln(2)\\zeta(3)-\\frac12\\ln^2(2)\\zeta(2)+\\frac{1}{12}\\ln^4(2)}$$\n\nComputed same as\n\n$G$\n\n$$\\boxed{Y=\\int_0^1 \\frac{\\log^2(1+u)}{u} \\, du=\\frac{1}{4}\\zeta(3)}$$\n\nAbove integral computed same as\n\n$B$\n\n$$\\boxed{Z=\\int_0^1 \\frac{\\log^2(1+u)\\log(1-u)}{u} \\, du=-\\frac {\\pi^4}{240}}$$\n\nAbove integral computed same as\n\n$K$\n\n$$\\alpha=\\int_0^1 \\frac{\\log(u)\\log(1+u)\\log^2(1-u)}{u} \\, du$$\n\n$$\\boxed{\\beta=\\int_0^1 \\frac{\\log(u)\\log^3(1+u)}{u} \\, du=\\frac{\\pi^2}3\\ln^32-\\frac25\\ln^52+\\frac{\\pi^2}2\\zeta(3)+\\frac{99}{16}\\zeta(5)-\\frac{21}4\\zeta(3)\\ln^22\\\\-12\\operatorname{Li}_4\\left(\\frac12\\right)\\ln2-12\\operatorname{Li}_5\\left(\\frac12\\right)}$$\n\nSolution of the above integral\n\n$$\\boxed{\\gamma=\\int_0^1 \\frac{\\log(u)\\log(1-u)\\log^2(1+u)}{u} \\, 
du=\\frac{7\\pi^2}{48}\\zeta(3)-\\frac{25}{16}\\zeta(5)}$$\n\nSolution of the above integral\n\n$$\\boxed{\\delta=\\int_0^1 \\frac{\\log^2(1-u)\\log^2(1+u)}{u} \\, du=\\frac{21}{24}\\zeta(5)-\\frac16\\left(24\\zeta(5)-\\frac{21}{2}\\ln^2(2)\\zeta(3)+4\\ln^3(2)\\zeta(2)-\\frac45\\ln^5(2)-24\\ln(2)\\operatorname{Li}_4\\left(\\frac12\\right)-24\\operatorname{Li}_5\\left(\\frac12\\right)\\right)}$$\n\nAbove integral computed same as\n\n$L$\n\n$$\\boxed{\\Omega=\\int_0^1 \\frac{\\log^4(1+u)}{u} \\, du=24\\zeta(5)-\\frac{21}{2}\\ln^{2}(2)\\zeta(3)+4\\ln^{3}(2)\\zeta(2)-\\frac{4}{5}\\ln^{5}(2)-24\\ln(2)\\operatorname{Li}_4\\left(\\frac{1}{2}\\right)-24\\operatorname{Li}_5\\left(\\frac{1}{2}\\right)}\n\n$$\n\n$\\Omega$\n\n was self-computed\n\n$$\\boxed{\\tau=\\int_0^1 \\frac{\\log(1-u)\\log^3(1+u)}{u} \\, du=-6\\operatorname{Li}_5\\left(\\frac12\\right)-6\\ln2\\operatorname{Li}_4\\left(\\frac12\\right)+\\frac{3}{4}\\zeta(5)+\\frac{21}{8}\\zeta(2)\\zeta(3)\\\\\\quad-\\frac{21}8\\ln^22\\zeta(3)+\\ln^32\\zeta(2)-\\frac15\\ln^52}$$\n\nAbove integral is computed same as\n\n$N$\n\nSection - 2\n\nWorking with our\n\n$I_2$\n\n,\n\n$$I_2= \\int_{0}^{1} \\frac{\\log\\left(\\frac{2u}{1+u}\\right) \\log^2\\left(\\frac{1-u}{1+u}\\right) \\log^2\\left(\\frac{2}{1+u}\\right)}{1+u} \\, du$$\n\nNow after 
multiplying\n\n$(1),(2),(3)$\n\n,\n\n$$\\log^3(2)\\log^2(1-u)+\\log^3(2)\\log^2(1+u)-2\\log^3(2)\\log(1-u)\\log(1+u)-\\log^2(2)\\log(u)\\log^2(1-u)-\\log^2(2)\\log(u)\\log^2(1+u)+2\\log^2(2)\\log(u)\\log(1-u)\\log(1+u)-\\log^2(2)\\log(1+u)\\log^2(1-u)-\\log^2(2)\\log^3(1+u)-2\\log^2(2)\\log(1-u)\\log^2(1+u)+\\log(2)\\log^2(1-u)\\log^2(1+u)+\\log(2)\\log^4(1+u)-2\\log(2)\\log(1-u)\\log^3(1+u)-\\log(u)\\log^2(1-u)\\log^2(1+u)-\\log(u)\\log^4(1+u)+2\\log(u)\\log(1-u)\\log^3(1+u)-\\log^3(1+u)\\log^2(1-u)-\\log^5(1+u)-2\\log(1-u)\\log^4(1+u)-2\\log^2(2)\\log(1+u)\\log^2(1-u)-2\\log^2(2)\\log^2(1+u)+4\\log^2(2)\\log(1-u)\\log^2(1+u)+2\\log(2)\\log(u)\\log(1+u)\\log^2(1-u)+2\\log(2)\\log(u)\\log^3(1+u)-4\\log(2)\\log(u)\\log(1-u)\\log^2(1+u)+2\\log(2)\\log^2(1+u)\\log^2(1-u)+2\\log(2)\\log^4(1+u)+4\\log(2)\\log(1-u)\\log^3(1+u)$$\n\nDividing by\n\n$1+u$\n\n,\n\n$$\\frac{\\log^3(2)\\log^2(1-u)}{1+u}+\\frac{\\log^3(2)\\log^2(1+u)}{1+u}-\\frac{2\\log^3(2)\\log(1-u)\\log(1+u)}{1+u}-\\frac{\\log^2(2)\\log(u)\\log^2(1-u)}{1+u}-\\frac{\\log^2(2)\\log(u)\\log^2(1+u)}{1+u}+\\frac{2\\log^2(2)\\log(u)\\log(1-u)\\log(1+u)}{1+u}-\\frac{\\log^2(2)\\log(1+u)\\log^2(1-u)}{1+u}-\\frac{\\log^2(2)\\log^3(1+u)}{1+u}-\\frac{2\\log^2(2)\\log(1-u)\\log^2(1+u)}{1+u}+\\frac{\\log(2)\\log^2(1-u)\\log^2(1+u)}{1+u}+\\frac{\\log(2)\\log^4(1+u)}{1+u}-\\frac{2\\log(2)\\log(1-u)\\log^3(1+u)}{1+u}-\\frac{\\log(u)\\log^2(1-u)\\log^2(1+u)}{1+u}-\\frac{\\log(u)\\log^4(1+u)}{1+u}+\\frac{2\\log(u)\\log(1-u)\\log^3(1+u)}{1+u}-\\frac{\\log^3(1+u)\\log^2(1-u)}{1+u}-\\frac{\\log^5(1+u)}{1+u}-\\frac{2\\log(1-u)\\log^4(1+u)}{1+u}-\\frac{2\\log^2(2)\\log(1+u)\\log^2(1-u)}{1+u}-\\frac{2\\log^2(2)\\log^2(1+u)}{1+u}+\\frac{4\\log^2(2)\\log(1-u)\\log^2(1+u)}{1+u}+\\frac{2\\log(2)\\log(u)\\log(1+u)\\log^2(1-u)}{1+u}+\\frac{2\\log(2)\\log(u)\\log^3(1+u)}{1+u}-\\frac{4\\log(2)\\log(u)\\log(1-u)\\log^2(1+u)}{1+u}+\\frac{2\\log(2)\\log^2(1+u)\\log^2(1-u)}{1+u}+\\frac{2\\log(2)\\log^4(1+u)}{1+u}+\\frac{4\\log(2)\\log(1-u)\\log
^3(1+u)}{1+u}$$\n\nWe have,\n\n$$I_2=\\log^3(2)A+\\log^3(2)B-2\\log^3(2)C-\\log^2(2)D-\\log^2(2)E+2\\log^2(2)F-\\log^2(2)G-\\log^2(2)H-2\\log^2(2)K+\\log(2)L+\\log(2)M-2\\log(2)N-P-Q+2R-S-T-2W-2\\log^2(2)X-2\\log^2(2)Y+4\\log^2(2)Z+2\\log(2)\\alpha+2\\log(2)\\beta-4\\log(2)\\gamma+2\\log(2)\\delta+2\\log(2)\\Omega+4\\log(2)\\tau$$\n\nWhere,\n\nPlease note I am out of notations to use so I'll be reusing the old ones,\n\n$$\\boxed{A=\\int_0^1 \\frac{\\log^2(1-u)}{1+u} \\, du=\\frac{7}{4}\\zeta(3)+\\frac13\\ln^3(2)-\\frac{\\pi^2}{6}\\ln(2)}$$\n\nA was self-computed\n\n$$\\boxed{B=\\int_0^1 \\frac{\\log^2(1+u)}{1+u} \\, du=\\frac13 \\ln^3(2)}$$\n\nB was self-computed\n\n$$\\boxed{C=\\int_0^1 \\frac{\\log(1-u)\\log(1+u)}{1+u} \\, du = \\frac{1}{3}\\log^3(2)-\\frac{\\pi^2}{12}\\log(2)+\\frac{\\zeta(3)}{8}}$$\n\nSolution to C\n\n$$\\boxed{D=\\int_0^1 \\frac{\\log(u)\\log^2(1-u)}{1+u} \\, du=\\frac{11}{4}\\zeta(4)-\\frac14\\ln^42-6\\operatorname{Li}_4\\left(\\frac12\\right)}$$\n\nSolution to D\n\n$$\\boxed{E=\\int_0^1 \\frac{\\log(u)\\log^2(1+u)}{1+u} \\, du=2 \\, \\text{Li}_4\\left(\\frac{1}{2}\\right) + \\frac{7}{4} \\, \\zeta(3) \\ln(2) - \\frac{\\pi^4}{45} + \\frac{\\ln^4(2)}{12} - \\frac{1}{12} \\pi^2 \\ln^2(2)}\n\n$$\n\nE was self-computed\n\n$$\\boxed{F=\\int_0^1 \\frac{\\log(u)\\log(1-u)\\log(1+u)}{1+u} \\, du= 2\\operatorname{Li}_4\\left(\\dfrac{1}{2}\\right)+\\dfrac{21}{8}\\zeta(3)\\ln 2 - \\dfrac{1}{45}\\pi^4 + \\dfrac{1}{12}\\ln^4 2 - \\dfrac{5}{24}\\pi^2\\ln^2 2}$$\n\nF was self-computed\n\n$$\\boxed{G=\\int_0^1 \\frac{\\log^2(1-u)\\log(1+u)}{1+u} \\, du=-\\frac{1}{4}\\zeta \\left(4\\right)+2\\ln \\left(2\\right)\\zeta \\left(3\\right)-\\ln ^2\\left(2\\right)\\zeta \\left(2\\right)+\\frac{1}{4}\\ln ^4\\left(2\\right)}$$\n\nSolution to G\n\n$$\\boxed{H=\\int_0^1 \\frac{\\log^3(1+u)}{1+u} \\, du=\\frac14 \\ln^4(2)}$$\n\nH was self-computed\n\n$$\\boxed{K=\\int_0^1 \\frac{\\log(1-u)\\log^2(1+u)}{1+u} \\, 
du=2\\operatorname{Li}_4\\left(\\frac{1}{2}\\right)+2\\zeta(3)\\log(2)+\\frac13\\log^4(2)-\\frac{\\pi^4}{45}-\\frac{\\pi^2}{6}\\log^2(2)}$$\n\nK was self-computed\n\n$$\\boxed{L=\\int_0^1 \\frac{\\log^2(1-u)\\log^2(1+u)}{1+u} \\, du=-4\\operatorname{Li}_5\\left(\\frac12\\right)+4\\zeta(3)\\log^2(2)-\\frac{2\\pi^2}9 \\log^3(2)-\\frac{\\pi^2}{3}\\zeta(3)-\\frac{\\pi^4}{20}\\log2+\\frac7{30}\\log^5(2)+ \\frac{63}8\\zeta(5) }$$\n\nSolution to above integral\n\n$$\\boxed{M=\\int_0^1 \\frac{\\log^4(1+u)}{1+u} \\, du=\\frac15 \\ln^5(2)}$$\n\nM was self-computed\n\n$$\\boxed{N=\\int_0^1 \\frac{\\log(1-u)\\log^3(1+u)}{1+u} \\, du=6\\zeta(5)-6\\operatorname{Li}_5\\left(\\frac12\\right)-\\frac{\\pi^4}{15}\\ln(2)+3\\ln^2(2)\\zeta(3)-\\frac{\\pi^2}{6}\\ln^3(2)+\\frac14 \\ln^5(2)}$$\n\nSolution to N\n\n$$P=\\int_0^1 \\frac{\\log(u)\\log^2(1+u)\\log^2(1-u)}{1+u} \\, du$$\n\n$$\\boxed{Q=\\int_0^1 \\frac{\\log(u)\\log^4(1+u)}{1+u} \\, du=24\\operatorname{Li}_6\\left(\\frac12\\right)-\\frac{8}{315}\\pi^6-\\frac{\\pi^2}{4}\\ln^4(2)+24\\ln(2)\\operatorname{Li}_5\\left(\\frac12\\right)+12\\ln^2(2)\\operatorname{Li}_4\\left(\\frac12\\right)+\\frac{7}{2}\\zeta(3)\\ln^3(2)+\\frac13\\ln^6(2)}$$\n\nQ was self-computed\n\n$$R=\\int_0^1 \\frac{\\log(u)\\log(1-u)\\log^3(1+u)}{1+u} \\, du$$\n\nHere's a partial proof :\n\n$$ab^3=\\frac{1}{8}(a+b)^4-\\frac{1}{8}(a-b)^4-a^3b$$\n\n$$R=\\int _0^1\\frac{\\log (u)\\log (1-u)\\log ^3(1+u)}{1+u}du$$\n\n$$a=\\log(1-u), b=\\log(1+u)$$\n\n$$\\implies\\log(1-u)\\log^3(1+u)=\\frac{1}{8}\\log^4(1-u^2)-\\frac{1}{8}\\log^4\\left(\\frac{1-u}{1+u}\\right)-\\log^3(1-u)\\log(1+u)$$\n\n$$R=\\frac{1}{8}\\int_0^1\\frac{\\log(u)\\log^4(1-u^2)}{1+u}\\,du-\\frac{1}{8}\\int_0^1\\frac{\\log(u)\\log^4\\left(\\frac{1-u}{1+u}\\right)}{1+u}\\,du-\\int_0^1\\frac{\\log(u)\\log^3(1-u)\\log(1+u)}{1+u}\\,du$$\n\n$$\\int_0^1\\frac{\\log(u)\\log^4(1-u^2)}{1+u}\\,du=\\int_0^1\\frac{\\log(u)\\log^4(1-u^2)}{1-u^2}(1-u)\\,du$$\n\n$$u^2=t\\implies 
\\,du=\\frac{1}{2\\sqrt{t}}\\,dt$$\n\n$$\\int_0^1\\frac{\\log(\\sqrt{t})\\log^4(1-t)}{1-t}\\left(1-\\sqrt{t}\\right)\\frac{1}{2\\sqrt{t}}\\,dt=\\frac14\\int_0^1\\frac{\\log(t)\\log^4(1-t)}{\\sqrt{t}(1-t)}\\,dt-\\frac14\\int_0^1\\frac{\\log(t)\\log^4(1-t)}{1-t}\\,dt$$\n\nThe above two integrals can be evaluated using the Beta function (routine method)\n\n$$\\int_0^1\\frac{\\log(u)\\log^4\\left(\\frac{1-u}{1+u}\\right)}{1+u}\\,du$$\n\n$$\\underset{\\frac{1-u}{1+u}=t}{=}\\int_0^1 \\frac{\\log(1 - t) \\log^4(t)}{1 + t} \\, dt-\\int_0^1 \\frac{\\log(1 + t) \\log^4(t)}{1 + t} \\, dt$$\n\nNeed to work on the above two integrals....\n\n$$\\int_0^1 \\frac{\\log(u)\\log(1+u)\\log^3(1-u)}{1+u} \\, du$$\n\nWe can follow a path similar to the one used for\n\n$\\alpha$\n\n below.\n\n$$\\boxed{S=\\int_0^1 \\frac{\\log^3(1+u)\\log^2(1-u)}{1+u} \\, du=-12 \\text{Li}_6\\left(\\frac{1}{2}\\right)-12 \\text{Li}_5\\left(\\frac{1}{2}\\right) \\ln (2)+6 \\zeta^2 (3)+8 \\zeta (3) \\ln ^3(2)-12 \\zeta(2) \\zeta (3) \\ln (2)+36 \\zeta (5) \\ln (2)-9\\zeta(6)+\\frac{1}{4}\\ln ^6(2)-2\\zeta(2) \\ln ^4(2)-\\frac{27}{2} \\zeta(4) \\ln ^2(2)+12 \\sum_{n=1}^\\infty\\frac{H_n}{n^52^n}}$$\n\nS was self-computed; also, no closed form is known for that sum\n\n$$\\boxed{T=\\int_0^1 \\frac{\\log^5(1+u)}{1+u} \\, du=\\frac16 \\ln^6(2)}$$\n\nT was self-computed\n\n$$\\boxed{W=\\int_0^1 \\frac{\\log(1-u)\\log^4(1+u)}{1+u} \\, du=24\\operatorname{Li}_6\\left(\\frac12\\right)-\\frac{8}{315}\\pi^6+24\\ln(2)\\zeta(5)-\\frac{2}{15}\\pi^4\\ln^2(2)+4\\ln^3(2)\\zeta(3)-\\frac{\\pi^2}{6}\\ln^4(2)+\\frac15\\ln^6(2)}$$\n\nSolution to W\n\n$$\\boxed{X=\\int_0^1 \\frac{\\log(1+u)\\log^2(1-u)}{1+u} \\, du= 2 \\zeta (3) \\log (2)-\\frac{\\pi ^4}{360}+\\frac{\\log ^4(2)}{4}-\\frac{1}{6} \\pi ^2 \\log ^2(2)}$$\n\nSolution to X\n\n$$\\boxed{Y=\\int_0^1 \\frac{\\log^2(1+u)}{1+u} \\, du=\\frac13 \\ln^3(2)}$$\n\nY was self-computed\n\n$$\\boxed{Z=\\int_0^1 \\frac{\\log^2(1+u)\\log(1-u)}{1+u} \\, 
du=2\\operatorname{Li}_4\\left(\\frac{1}{2}\\right)+2\\zeta(3)\\log(2)+\\frac13\\log^4(2)-\\frac{\\pi^4}{45}-\\frac{\\pi^2}{6}\\log^2(2)}$$\n\nZ was self-computed\n\n$$\\boxed{\\alpha=\\int_0^1 \\frac{\\log(u)\\log(1+u)\\log^2(1-u)}{1+u} \\, du = \\frac{21}{8}\\log^2(2)\\zeta(3)-\\frac{15}{8}\\log(2)\\zeta(4)+\\frac{121}{16}\\zeta(5)-\\frac23\\log^3(2)\\zeta(2)-\\frac{1}{15}\\log^5(2)-2\\log(2)\\operatorname{Li}_4\\left(\\frac12\\right)-2\\operatorname{Li}_5\\left(\\frac12\\right)-3\\zeta(2)\\zeta(3)}$$\n\nProof : Here, we can re-write our integral as follows,\n\n$$\\int_0^1 \\frac{\\log(u)\\log(1+u)\\log^2(1-u)}{1+u} \\, du=\\frac{1}{6}\\int_0^1\\frac{(1-u)\\log(u)\\log^3(1-u^2)}{1-u^2}\\,du-\\frac{1}{6}\\int_0^1\\frac{(1-u)\\log^3(u)\\log(1-u^2)}{1-u^2}\\,du+\\frac13\\int_0^1\\frac{\\log^3(u)\\log(1+u)}{1+u}\\,du-\\frac13\\int_0^1\\frac{\\log(u)\\log^3(1+u)}{1+u}\\,du$$\n\nApply IBP to the last two integrals and for first two make the substitution of\n\n$u^2\\to u$\n\n, we can obtain known integrals in text.\n\n$$\\boxed{\\beta=\\int_0^1 \\frac{\\log(u)\\log^3(1+u)}{1+u} \\, du=6 \\, \\text{Li}_5\\left(\\frac{1}{2}\\right) + 6\\text{Li}_4\\left(\\frac{1}{2}\\right) \\ln(2) - 6 \\, \\zeta(5) + \\frac{21}{8} \\, \\zeta(3) \\ln^2(2) + \\frac{\\ln^5(2)}{5} - \\frac{1}{6} \\pi^2 \\ln^3(2)}$$\n\n$\\beta$\n\n was self-computed\n\n$$\\gamma=\\int_0^1 \\frac{\\log(u)\\log(1-u)\\log^2(1+u)}{1+u} \\, du$$\n\nYou can try the following for\n\n$a=1,b=1,c=2$\n\n,\n\n$$\\sin \\pi (b-a) \\int_0^1 x^{b-1}(1-x)^{c-b-1}(1+x)^{-a} dx = \\sin \\pi (c-a) \\int_0^1 x^{a-c} \\left[ (1-x)^{c-a-1}(1+x)^{-b} - 1 \\right] dx - \\sin \\pi (b-a) \\frac{\\Gamma(c-b)\\Gamma(b)}{\\Gamma(a)\\Gamma(c-a)} \\int_0^1 x^{b-c} \\left[ (1-x)^{c-a-1}(1+x)^{-b} - 1 \\right] dx + \\frac{\\sin \\pi (c-a)}{a-c+1} - \\frac{\\pi \\Gamma(b)}{\\Gamma(2-c+b)\\Gamma(a)\\Gamma(c-a)}$$\n\nYou will have to perform the below,\n\n$$\\frac{\\partial}{\\partial a^m \\partial b^n \\partial c^r}$$\n\nPicking 
appropriate\n\n$m,n,r$\n\n$$\\boxed{\\delta=\\int_0^1 \\frac{\\log^2(1-u)\\log^2(1+u)}{1+u} \\, du=-4\\operatorname{Li}_5\\left(\\frac12\\right)+4\\zeta(3)\\log^2(2)-\\frac{2\\pi^2}9 \\log^3(2)-\\frac{\\pi^2}{3}\\zeta(3)-\\frac{\\pi^4}{20}\\log2+\\frac7{30}\\log^5(2)+ \\frac{63}8\\zeta(5) }$$\n\nComputed same as\n\n$L$\n\n$$\\boxed{\\Omega=\\int_0^1 \\frac{\\log^4(1+u)}{1+u} \\, du=\\frac15\\ln^5(2)}$$\n\n$\\Omega$\n\n was self-computed\n\n$$\\boxed{\\tau=\\int_0^1 \\frac{\\log(1-u)\\log^3(1+u)}{1+u} \\, du=6\\zeta(5)-6\\operatorname{Li}_5\\left(\\frac12\\right)-\\frac{\\pi^4}{15}\\ln(2)+3\\ln^2(2)\\zeta(3)-\\frac{\\pi^2}{6}\\ln^3(2)+\\frac14 \\ln^5(2)}$$\n\nSolution to above integral\n\nAll the above integrals are known and standard, derivable from the Beta\n\nfunction. Note that all the \"self-computed\" and other related/similar integrals are well documented in the literature\n\nand on/off this site as well\n\n$$\\int_0^1 t^{a-1}(1-t)^{b-1}\\,dt=\\beta(a,b)$$\n\n$$\\int_{-1}^1(1-t)^{a-1}(1+t)^{b-1}\\,dt=2^{a+b-1}\\beta(a,b)$$\n\n$$\\displaystyle \\int_0^1 \\frac{x^{a-1}+x^{b-1}}{(1+x)^{a+b}} \\textrm{d}x = \\beta(a,b)$$\n\nRecap : We are still missing the closed forms/proofs for the following integrals :\n\nFrom section-1,\n\n$R=\\int_0^1 \\frac{\\log(u)\\log(1-u)\\log^3(1+u)}{u} \\, du$\n\n$S=\\int_0^1 \\frac{\\log^3(1+u)\\log^2(1-u)}{u} \\, du$\n\n$W=\\int_0^1 \\frac{\\log(1-u)\\log^4(1+u)}{u} \\, du$\n\n$Q=\\int_0^1 \\frac{\\log(u)\\log^4(1+u)}{u} \\, du$\n\n, which also gave us two more\n\n$\\int_{\\frac12}^1\\frac{\\ln(1-x)\\ln^4(x)}{x}\\,dx$\n\n and\n\n$\\int_{\\frac12}^1\\frac{\\ln(1-x)\\ln^4(x)}{1-x}\\,dx$\n\n; the other reference does involve a generalized solution.\n\n$\\alpha=\\int_0^1 \\frac{\\log(u)\\log(1+u)\\log^2(1-u)}{u} \\, du$\n\nFrom Section -2,\n\n$P=\\int_0^1 \\frac{\\log(u)\\log^2(1+u)\\log^2(1-u)}{1+u} \\, du$\n\n$R=\\int_0^1 \\frac{\\log(u)\\log(1-u)\\log^3(1+u)}{1+u} \\, du$\n\n, which also gave us three more\n\n$\\int_0^1 \\frac{\\log(1 - t) 
\\log^4(t)}{1 + t} \\, dt$\n\n and\n\n$\\int_0^1 \\frac{\\log(1 + t) \\log^4(t)}{1 + t} \\, dt$\n\n and\n\n$\\int_0^1 \\frac{\\log(u)\\log(1+u)\\log^3(1-u)}{1+u} \\, du$\n\n$\\gamma=\\int_0^1 \\frac{\\log(u)\\log(1-u)\\log^2(1+u)}{1+u} \\, du$\n\nAnother approach may go this way,\n\n$$I=\\int_0^1 \\frac{\\ln(1-x) \\ln^2(1+x)\\ln^2 x}{1-x} \\, dx$$\n\nUsing the Identity,\n\n$$ab^2=\\frac{1}{6}(a+b)^3+\\frac{1}{6}(a-b)^3-\\frac{1}{3}a^3$$\n\nWhere,\n\n$$a=\\ln(1-x),\\qquad b=\\ln(1+x)$$\n\n$$\\ln(1-x)\\ln^2(1+x)=\\frac{1}{6}(\\ln(1-x)+\\ln(1+x))^3+\\frac{1}{6}(\\ln(1-x)-\\ln(1+x))^3-\\frac{1}{3}(\\ln(1-x))^3$$\n\n$$\\ln(1-x)\\ln^2(1+x)=\\frac{1}{6}\\ln^3(1-x^2)+\\frac{1}{6}\\ln^3\\left(\\frac{1-x}{1+x}\\right)-\\frac{1}{3}\\ln^3(1-x)$$\n\n$$\\ln(1-x)\\ln^2(1+x)\\ln^2 x=\\frac{1}{6}\\ln^3(1-x^2)\\ln^2 x+\\frac{1}{6}\\ln^3\\left(\\frac{1-x}{1+x}\\right)\\ln^2 x-\\frac{1}{3}\\ln^3(1-x)\\ln^2 x$$\n\n$$\\frac{\\ln(1-x)\\ln^2(1+x)\\ln^2 x}{1-x}=\\frac{1}{6}\\frac{\\ln^3(1-x^2)\\ln^2 x}{1-x}+\\frac{1}{6}\\frac{\\ln^3\\left(\\frac{1-x}{1+x}\\right)\\ln^2 x}{1-x}-\\frac{1}{3}\\frac{\\ln^3(1-x)\\ln^2 x}{1-x}$$\n\n$$I=\\frac{1}{6}\\int_0^1\\frac{\\ln^3(1-x^2)\\ln^2 (x)}{1-x}\\,dx+\\frac{1}{6}\\int_0^1\\frac{\\ln^3\\left(\\frac{1-x}{1+x}\\right)\\ln^2(x)}{1-x}\\,dx-\\frac{1}{3}\\int_0^1\\frac{\\ln^3(1-x)\\ln^2 (x)}{1-x}\\,dx$$\n\nYou can then refer\n\nhere\n\n and\n\nhere\n\n and\n\nhere\n\n for further details.", "answer_owner": "Amrut Ayan", "is_accepted": true, "score": 15, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105509, "title": "How does one solve the integral $\\int_{1}^{\\infty}\\frac{\\ln\\left(\\arctan\\left(x\\right)\\right)}{x^{2}+\\ln\\left(x^{2}+1\\right)}\\,\\mathrm dx$", "question_text": "When I was on Twitter, I saw the integral\n\n$$\\int_{1}^{\\infty}\\frac{\\ln\\left(\\arctan\\left(x\\right)\\right)}{x^{2}+\\ln\\left(x^{2}+1\\right)}\\,\\mathrm dx.$$\n\nThe solution was not part of the post, but I believe it 
can be solved somewhat easily with infinite sums, which I have seen be done for many functions related to radical, log, and arctan functions, but I am not good at.\n\nThe area under the curve is quickly convergent for\n\n$x>1$\n\n, but diverges at an asymptote at\n\n$e^{-x^2}=x$\n\n, a P.V. at that point does not seem to exist either (numerically). Computers say that the integral is equal to\n\n$0.0974915068$\n\n.\n\nThe only solution method I can think of is re-writing the arctan function as a complex logarithm.\n\n$$\n\nI = \\int_{1}^{\\infty}\n\n\\frac{\n\n\\ln\\!\\left(\n\n \\frac{1}{2i}\n\n \\left[\n\n \\ln(1 + i x) - \\ln(1 - i x)\n\n \\right]\n\n\\right)\n\n}{\n\nx^{2} + \\ln(x^2+1)\n\n}\\,\\mathrm dx\n\n.$$\n\n I tried to\n\n$u$\n\n-substitute out the logarithms of\n\n$x^2+1$\n\n but there seems to be no\n\n$\\mathrm du$\n\n. So my question is, is there a fast way to solve this, perhaps with infinite series?", "question_owner": "rain", "question_link": "https://math.stackexchange.com/questions/5105509/how-does-one-solve-the-integral-int-1-infty-frac-ln-left-arctan-leftx", "answer": { "answer_id": null, "answer_text": "(Нет ответов)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105717, "title": "Is it correct to write a limit this way?", "question_text": "When I have a problem of this kind:\n\n$$\n\n \\lim_{x \\to 1} \\frac{x^2-2x+1}{3(x-1)^2}\n\n$$\n\nAs it gives\n\n$\n\n \\ \\frac{0}{0}\n\n$\n\n, then:\n\n$$\n\n \\lim_{x \\to 1} \\frac{(x-1)^2}{3(x-1)^2}\n\n$$\n\nThe\n\n$(x-1)^2$\n\n on the numerator cancels with the one on the denominator so the limit is equal to\n\n$\n\n \\ \\frac{1}{3}\n\n$\n\n.\n\nSo, is it right to write it this way?\n\n$$\n\n \\lim_{x \\to 1} \\frac{(x-1)^2}{3(x-1)^2} = \\lim_{x \\to 1} \\frac{1}{3} = \\frac{1}{3}$$\n\nIt confuses me because, even though the numerical value of the limit of the function on the left is equal to 
the numerical value of the function of the right, they do have different domains.\n\nHow should I write it in a correct way?", "question_owner": "Ismael Amarillo", "question_link": "https://math.stackexchange.com/questions/5105717/is-it-correct-to-write-a-limit-this-way", "answer": { "answer_id": null, "answer_text": "(Нет ответов)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5094024, "title": "Why it is possible to treat a derivative like a fraction and swap velocity and distance in this definite integral?", "question_text": "I apologize if this is a question that should be posted in the physics stack. My question has less to do with the physics involved than the math used to describe the physics. From the definition of Work as the distance integral of force and Newton's second law, when mass is constant, for a body accelerating from a known position\n\n$x_i$\n\n to a position at\n\n$x_f$\n\n, we can calculate the work done on or by that object as:\n\n$$\n\nW=m\\int_{x_i}^{x_f} a \\space ds\n\n$$\n\nSince acceleration is the first derivative of velocity, we can write it:\n\n$$\n\nW=m\\int_{x_i}^{x_f} \\frac {dv}{dt} \\space ds\n\n$$\n\nBut then we can change the variables thusly:\n\n$$\n\nW=m\\int_{v_i}^{v_f} \\frac {ds}{dt} \\space dv\n\n$$\n\nThat last step is what is bugging me. I know that I have the reason why this is possible in my brain somewhere, but I just can't seem to find the reason now that it has been 15 years since I took my last calculus class. From my college math, I know it is an oversimplification to interpret the derivative\n\n$\\frac {dv}{dt}$\n\n as being the same as\n\n${dv}$\n\n divided by\n\n${dt}$\n\n, so I have it in my head that the rules of division cannot be the reason why this works. It also fails to explain the change in the limits of integration. 
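The variable swap can also be sanity-checked numerically: parameterizing both integrals by time, m∫(dv/dt) ds and m∫(ds/dt) dv reduce via the chain rule to the same dt-integral, and both equal the kinetic-energy change ½m(v_f² − v_i²). A minimal pure-Python sketch (the motion s(t) = t² on t ∈ [0, 1] and m = 1 are made-up example values):

```python
def trapezoid(ys, xs):
    # Composite trapezoid rule over sample points.
    return sum((y0 + y1) * (x1 - x0) / 2.0
               for y0, y1, x0, x1 in zip(ys, ys[1:], xs, xs[1:]))

m = 1.0
N = 100_000
ts = [i / N for i in range(N + 1)]   # time grid on [0, 1]
vs = [2.0 * t for t in ts]           # v = ds/dt for s(t) = t^2
accs = [2.0] * len(ts)               # a = dv/dt

# W = m * integral of a ds, parameterized by t: integrand is a * (ds/dt)
W_a_ds = m * trapezoid([a * v for a, v in zip(accs, vs)], ts)

# W = m * integral of v dv, parameterized by t: integrand is v * (dv/dt)
W_v_dv = m * trapezoid([v * a for v, a in zip(vs, accs)], ts)

delta_ke = 0.5 * m * (vs[-1] ** 2 - vs[0] ** 2)
print(W_a_ds, W_v_dv, delta_ke)
```

Both parameterizations integrate the identical function of t, which is why the substitution is legitimate and why the limits change from positions to velocities.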
I have these steps from a derivation of the kinetic energy formula memorized, but I can't remember why treating a derivative as a fraction and directly swapping the variable of integration with the limits of integration works here. If someone can remind me of the theorem or rule for why this step works, I will very much appreciate it. I need to be able to explain it to people who might have had mathematics up to calculus I, but that are probably not math or physics majors.\n\nBy the way, my bachelor's is in mathematics with a concentration in physics, so 15 years ago I had math through ODE and Analysis II and lots of upper-division physics. But my career is in civil engineering, and the civil engineers I know prefer to shortcut calculus whenever possible. Consequently, I haven't used this type of variable transformation since college. I just need a reminder about this fine point. Thank you.", "question_owner": "David Eisenbeisz", "question_link": "https://math.stackexchange.com/questions/5094024/why-it-is-possible-to-treat-a-derivative-like-a-fraction-and-swap-velocity-and-d", "answer": { "answer_id": 5094026, "answer_text": "One way to go is as follows.\n\nBy the chain rule,\n\n$$\n\n\\frac{\\mathrm dv}{\\mathrm dt}\n\n=\n\n\\frac{\\mathrm dv}{\\mathrm ds}\n\n \\frac{\\mathrm ds}{\\mathrm dt}.\n\n$$\n\nTherefore\n\n$$\n\n\\int_{s_i}^{s_f} \\frac{\\mathrm dv}{\\mathrm dt} \\,\\mathrm ds\n\n=\n\n\\int_{s_i}^{s_f} \\left( \\frac{\\mathrm dv}{\\mathrm ds}\n\n \\frac{\\mathrm ds}{\\mathrm dt}\\right) \\,\\mathrm ds.\n\n$$\n\nNext,\n\n$$\n\n\\int_{s_i}^{s_f} \\frac{\\mathrm ds}{\\mathrm dt} \\frac{\\mathrm dv}{\\mathrm ds}\n\n\\,\\mathrm ds\n\n=\n\n\\int_{v_i}^{v_f} \\frac{\\mathrm ds}{\\mathrm dt} \\,\\mathrm dv\n\n$$\n\nby the usual change-of-variables theorem for integrals.", "answer_owner": "David K", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4888199, "title": "Volume of 
intersecting oblique cylinders", "question_text": "The problem: Two oblique circular cylinders of equal height\n\n$h$\n\n have a circle of radius\n\n$a$\n\n as a common lower base and their upper bases are tangent to each other. Find the common volume.\n\nI have a solution, below, but I arrived at it through an analogy, so I'm asking how others would solve this problem. This is problem 31 of Additional Problems of Chapter 7 of Simmons Calculus, and it is appearing around where many first semester courses in calculus might end. My initial approaches led to integrals a reader is not yet ready to solve in the book, so I kept looking til I found something.\n\nThe picture shows my approach. The sketch has sections of the intersecting regions shaded, they are like discs sliding past each other. Each overlapping area is symmetric, and each symmetric half is reminiscent of a rising or setting sun, which prompted the idea that a solution can be had by treating the volume similarly, since the rise of the sun at a constant rate over the horizon is equivalent to half the overlap section sliding up and down the cylinder. As the sun rises, the top surfaces first, then the rest of the disc according to its y-position. As we slide with constant speed from top to bottom of the cylinder, a point on the disc “rises” according to its y-position,\n\n$\\sqrt{a^2-x^2}$\n\n, until a semicircle has risen at the bottom. Once a sliver of disc rises, the constant continued rise forms a right triangle in section, with length proportional to when it rose, so length ~\n\n$\\sqrt{a^2-x^2}$\n\n. So the triangle area is proportional to the square, area ~\n\n$[\\sqrt{a^2-x^2}]^2$\n\n, and the total of all these triangles is\n\n$\\int_{-a}^{a} [\\sqrt{a^2-x^2}]^2dx=\\frac{4}{3}a^3$\n\n. 
Finally, the volume must be scaled to the height of the cylinder, so\n\n$V=\\frac{4}{3}a^3 \\frac{h}{a}=\\frac{4}{3}a^2h$\n\n.\n\nI am curious what other simple (or not simple) approaches solve this problem.", "question_owner": "RobinSparrow", "question_link": "https://math.stackexchange.com/questions/4888199/volume-of-intersecting-oblique-cylinders", "answer": { "answer_id": 4888202, "answer_text": "This is not an easier solution, but it is provided as a way to confirm that your solution is correct.\n\nFor a given height\n\n$h_0 \\in [0, h]$\n\n above the base, the cross-sectional area common to the two cylinders is given by the intersection of two circles of radius\n\n$a$\n\n and distance between the centers\n\n$$d(h_0) = 2a \\cdot \\frac{h_0}{h},$$\n\n since when\n\n$h_0 = 0$\n\n, the bases coincide, and when\n\n$h_0 = h$\n\n, the bases are tangent and thus their centers are separated by twice the radius, which is\n\n$2a$\n\n.\n\nThe area of this \"lens\" shape as a function of\n\n$d$\n\n can be calculated via elementary trigonometry: the angle\n\n$\\theta$\n\n between the line joining the two centers and a ray from one center to an intersection point on the boundary satisfies\n\n$$\\cos \\theta = \\frac{d}{2a},$$\n\nthus the area is\n\n$$A(d) = 4 \\cdot \\frac{1}{2} a^2 \\theta - 2 \\cdot \\frac{d}{2} a \\sin \\theta = 2a^2 \\cos^{-1} \\frac{d}{2a} - \\frac{d}{2} \\sqrt{4a^2 - d^2}.$$\n\nWritten as a function of\n\n$h_0$\n\n, this is\n\n$$A(h_0) = 2a^2 \\left(\\cos^{-1} \\frac{h_0}{h} - \\frac{h_0}{h} \\sqrt{1 - \\left(\\frac{h_0}{h}\\right)^2}\\right).$$\n\nThe total volume is therefore\n\n$$\\begin{align}\n\nV &= \\int_{h_0 = 0}^h A(h_0) \\, dh_0 \\\\\n\n&= 2a^2 h \\int_{z=0}^1 \\cos^{-1} z - z \\sqrt{1-z^2} \\, dz \\\\\n\n&= 2a^2 h \\left[z \\cos^{-1} z - \\sqrt{1-z^2} + \\frac{1}{3} (1-z^2)^{3/2} \\right]_{z=0}^1 \\\\\n\n&= 2a^2 h \\left(0 - 0 + 0 - 0 + 1 - \\frac{1}{3}\\right) \\\\\n\n&= \\frac{4}{3} a^2 h.\n\n\\end{align}$$", "answer_owner": "heropup", 
"is_accepted": true, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5099130, "title": "How are epsilon and delta related, and how does the relation prove a limit?", "question_text": "I understand the formal epsilon-delta definition of a limit to be:\n\n$$\\lim_{x\\to a} f(x)=L \\iff \\forall \\epsilon>0 \\ \\exists \\delta>0 \\ \\forall x \\ (0<|x-a|< \\delta \\implies |f(x)-L|<\\epsilon) $$\n\nHowever, I don't understand how we can deduce a relation between epsilon and delta. For example:\n\n$$\\lim_{x\\to -3} 7x-9=-30$$\n\n$$\\forall \\epsilon>0 \\ \\exists \\delta>0 \\ \\forall x \\ (0<|x+3|< \\delta \\implies |7x-9+30|<\\epsilon) $$\n\n$$|7x+21|<\\epsilon \\implies 7|x+3|<\\epsilon \\implies |x+3|<\\dfrac\\epsilon7 $$\n\nNow, the confusion comes in that worked examples are able to say that:\n\n$$\\delta<\\dfrac\\epsilon7$$\n\nI do not understand how they derive that delta is smaller than epsilon. Since we go from:\n\n$$0<|x+3|< \\delta$$\n\n$$|x+3|< \\dfrac\\epsilon7$$\n\nTo:\n\n$$\\delta<\\dfrac\\epsilon7$$\n\nI don't see how those two inequalities lead to that relation.\n\nFurthermore, why does the existence of this relation prove the existence of the limit?", "question_owner": "Matin Gomez-Pablos", "question_link": "https://math.stackexchange.com/questions/5099130/how-are-epsilon-and-delta-related-and-how-does-the-relation-prove-a-limit", "answer": { "answer_id": 5099206, "answer_text": "Typically in an\n\n$\\epsilon$\n\n-\n\n$\\delta$\n\n proof you don't\n\ndeduce\n\n a relationship between\n\n$\\epsilon$\n\n and\n\n$\\delta.$\n\n There is no need whatsoever to do that.\n\nThere are a set of pairs of\n\n$(\\epsilon,\\delta)$\n\n that satisfy the condition\n\n$\\forall x\\ (0<|x-a|< \\delta \\implies |f(x)-L|<\\epsilon).$\n\n That set of pairs defines a relation that we could (at least in principle) deduce, but we don't need to know what that relation is and we almost never actually find it in 
an\n\n$\\epsilon$\n\n-\n\n$\\delta$\n\n proof.\n\nThe thing we're trying to prove starts with\n\n$\\forall\\epsilon>0 \\ \\exists\\delta>0,$\n\n so all we need to do is make sure we know how to find\n\none\n\n suitable value of\n\n$\\delta$\n\n no matter what the value of\n\n$\\epsilon$\n\n might be.\n\nInstead, the usual practice is to\n\ninvent\n\n a suitable relationship between\n\n$\\epsilon$\n\n and\n\n$\\delta.$\n\n Often the relationship is a function that gives you a single value of\n\n$\\delta$\n\n for each\n\n$\\epsilon.$\n\n But there are always lots of relationships that would work, so you can never deduce the uniquely correct one.\n\nIn fact, none of what you've written is an\n\n$\\epsilon$\n\n-\n\n$\\delta$\n\n proof.\n\nIt isn't even an outline of a proof. It's like the \"scratch work\" part of\n\nSome questions about epsilon-delta limit proofs\n\n, which helps you think up the proof but isn't part of the proof.\n\nAs you can see in the linked question, the statement that relates\n\n$\\delta$\n\n to\n\n$\\epsilon$\n\n comes at the end of the scratch work but is step 2 in the proof.\n\nWe don't need to deduce\n\n$\\delta < \\epsilon/7$\n\n from\n\n$0<|x+3|< \\delta$\n\n and\n\n$|x+3|< \\epsilon/7$\n\n;\n\nwe deduce\n\n$|x+3|< \\epsilon/7$\n\n from\n\n$0<|x+3|< \\delta$\n\n and\n\n$\\delta < \\epsilon/7.$\n\nThe proof would continue by showing that\n\n$|x+3|< \\epsilon/7$\n\n implies\n\n$7|x+3|<\\epsilon,$\n\n which implies\n\n$|7x+21|<\\epsilon,$\n\nwhich implies\n\n$|(7x-9)-(-30)|<\\epsilon.$\n\nNote that you have some implication arrows in your scratch work that are written in the opposite direction from what you need for the proof. Fortunately, these implications are actually logical equivalences; the arrows go in both directions. 
If they were only valid in the direction you wrote, the proof wouldn't work.\n\nBy the way, it turns out that in this exercise, because you're dealing with a straight-line function, you have one of the rare cases where it's just as easy to state the complete relationship between\n\n$\epsilon$\n\n and the values of\n\n$\delta$\n\n that satisfy\n\n$\forall x\ (0<|x-a|< \delta \implies |f(x)-L|<\epsilon)$\n\nas it is to choose\n\n$\delta$\n\n any other way.\n\nYour scratch work comes close to getting the complete relationship, but note that\n\n$0<|x+3|< \delta$\n\n and\n\n$\delta \leq \epsilon/7$\n\n also implies\n\n$|x+3|< \epsilon/7.$\n\nThat is, the complete relationship of all suitable pairs\n\n$(\delta,\epsilon)$\n\n includes pairs such that\n\n$\delta = \epsilon/7$\n\n as well as pairs such that\n\n$\delta < \epsilon/7.$\n\nAgain, however, I should emphasize that knowing the complete relationship between\n\n$\delta$\n\n and\n\n$\epsilon$\n\n is\n\ncompletely irrelevant\n\n to your ability to write the proof. You only need to come up with one value of\n\n$\delta$\n\n for each\n\n$\epsilon$\n\n in order to satisfy the \"\n\n$\exists \delta$\n\n\" part of the definition.", "answer_owner": "David K", "is_accepted": true, "score": 16, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105701, "title": "What happens with a log function when its base is equal to another function?", "question_text": "I have a problem stemming from looking into ways to approximate the Lambert W function on a graphing program like Desmos. In my process of graphing functions, I came across a question I never had answered in my math classes. What would happen if you have a logarithm function like this:\n\n$ \log_{f(x)}(g(x)) $\n\nI know that plugging in x for both of the functions makes a horizontal line at\n\n$y=1$\n\n where\n\n$ x>0 $\n\n because of the log of base rule.
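A numeric spot-check of the recipe in the $\epsilon$-$\delta$ answer above, for $\lim_{x\to-3}(7x-9)=-30$: pick $\delta=\epsilon/7$ and sample points with $0<|x+3|<\delta$. This is an illustrative sketch (the sample fractions of $\delta$ are arbitrary choices), not part of any proof.

```python
import math

# f(x) = 7x - 9 with limit L = -30 as x -> -3.
# Claim from the discussion above: delta = epsilon/7 is a suitable choice.
def f(x):
    return 7 * x - 9

a, L = -3.0, -30.0

for epsilon in (1.0, 0.1, 1e-3, 1e-6):
    delta = epsilon / 7
    for frac in (0.1, 0.5, 0.99):        # arbitrary sample fractions of delta
        for sign in (-1.0, 1.0):
            x = a + sign * frac * delta
            assert 0 < abs(x - a) < delta
            assert abs(f(x) - L) < epsilon
print("delta = epsilon/7 works at all sampled points")
```

Since $|f(x)-L|=7|x+3|$ exactly, every sampled point satisfies the bound with room to spare.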
How would this function change over x in relation to what\n\n$f(x)$\n\n and\n\n$g(x)$\n\n are equal to? What does it mean to have a variable base in a logarithm? What would plugging in, say\n\n$f(x)=e^x$\n\n and\n\n$g(x)=x$\n\n do?", "question_owner": "Mathieu Walsh", "question_link": "https://math.stackexchange.com/questions/5105701/what-happens-with-a-log-function-when-its-base-is-equal-to-another-function", "answer": { "answer_id": 5105703, "answer_text": "You may be interested in the identity:\n\n\\begin{align*}\n\n\\log_{b}(a) = \\frac{\\log_{c}(a)}{\\log_{c}(b)}\n\n\\end{align*}\n\nwhich holds whenever\n\n$a\\in\\mathbb{R}_{>0}$\n\n,\n\n$b\\in\\mathbb{R}_{>0}\\backslash\\{1\\}$\n\n and\n\n$c\\in\\mathbb{R}_{>0}\\backslash\\{1\\}$\n\n.\n\nProof\n\nNotice that\n\n$a = c^{\\log_{c}(a)}$\n\n and\n\n$a = b^{\\log_{b}(a)}$\n\n. Moreover, we do also know that\n\n$b = c^{\\log_{c}(b)}$\n\n. Consequently,\n\n\\begin{align*}\n\nc^{\\log_{c}(a)} = b^{\\log_{b}(a)} = (c^{\\log_{c}(b)})^{\\log_{b}(a)} = c^{\\log_{c}(b)\\log_{b}(a)}\n\n\\end{align*}\n\nFrom the injectivity of the exponential function, it results that:\n\n\\begin{align*}\n\n\\log_{c}(a) = \\log_{c}(b)\\log_{b}(a) & \\Rightarrow \\log_{b}(a) = \\frac{\\log_{c}(a)}{\\log_{c}(b)}\n\n\\end{align*}\n\nas we wanted to demonstrate.\n\nHopefully this helps!", "answer_owner": "Átila Correia", "is_accepted": false, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5104173, "title": "Help with simplifying this integral", "question_text": "Is it possible to convert this expression:\n\n$$\\int u^2 \\, f''(u) \\, du$$\n\nInto some integral of this form:\n\n$$\\int t^n \\, f^{(n+1)}(t) \\, dt$$\n\nUsing multiple integration techniques repeatedly like substitutions, int. 
by parts and other integration techniques?\n\n(Where\n\n$t$\n\n is whichever variable is being used, and\n\n$f^{(n+1)}$\n\n is the\n\n$(n+1)$\n\nth derivative)\n\nTo be clear, I used a\n\n$t$\n\n instead of a\n\n$u$\n\n in the general form because using substitutions would change the variable from\n\n$u$\n\n to something else.\n\nThe main problem that I run into when trying to figure this out is that when I do integration by parts, the integral part of it will either increase the power of the variable and decrease the degree of the derivative, or vice versa. This is a problem because what I need is either: the degree of the derivative increasing by\n\n$1$\n\n and the power staying the same, or the power decreasing by\n\n$1$\n\n and the derivative staying the same.\n\nAnother issue is that when I apply some kind of substitution, it usually complicates the expression with square and even cube roots. If it is possible to convert the integral to the form:\n\n$$\int t \cdot f^{(n)}(t^2) \, dt,$$\n\nthat would make substitution easily applicable, although I haven’t figured out how to do that yet. (\n\n$f^{(n)}(t)$\n\n is some\n\n$n$\n\nth derivative.)\n\nIt would be incredibly impactful if you could find some solution for this, so please give it a try if you’re interested.", "question_owner": "Munchrr", "question_link": "https://math.stackexchange.com/questions/5104173/help-with-simplifying-this-integral", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5052424, "title": "Proving $\int_{\frac\pi2-1}^{\frac\pi2}f(x)\,\mathrm dx\le\int_0^{\frac\pi2}f(x)\cos x\,\mathrm dx\le\int_0^1f(x)\,\mathrm dx$", "question_text": "SEEMOUS\n\n$2016$\n\nLet\n\n$f$\n\n be a continuous and decreasing real valued function, defined on\n\n$[0, \pi/2]$\n\n.
Prove the inequalities\n\n$${\int_{\frac{\pi}{2}-1}^{\frac{\pi}{2}}f(x)\,\mathrm dx}\le{\int_{0}^{\frac{\pi}{2}}f(x)\cos x\,\mathrm dx}\le{\int_{0}^{1}f(x)\,\mathrm dx}.$$\n\n When do equalities hold?\n\nWell, equalities hold when\n\n$f(x)=c$\n\n (some constant). The first thing which occurred to me was changing the limits of the leftmost integral from\n\n${\frac{\pi}{2}-1}$\n\n and\n\n$\frac{\pi}{2}$\n\n to\n\n$0$\n\n and\n\n$1$\n\n, but it wasn’t much help. After some digging though I was able to find a general form of the given problem.\n\nIf\n\n$g:[0,\frac{\pi}{2}]\to[0,1]$\n\n is integrable and\n\n$f:[0,\frac{\pi}{2}] \to \Bbb R$\n\n is decreasing, then\n\n$$\int_{\frac{\pi}{2}-a}^{\frac{\pi}{2}}f(x)\,\mathrm dx\le\int_{0}^{\frac{\pi}{2}}f(x)g(x)\,\mathrm dx\le\int_{0}^{a}f(x)\,\mathrm dx,$$\n\nwhere\n\n$a=\int_{0}^{\frac{\pi}{2}}g(x)\,\mathrm dx$\n\nMy work on it\n\n$$\int_{\frac{\pi}{2}-a}^{\frac{\pi}{2}}f(x)\,\mathrm dx\le\int_{0}^{\frac{\pi}{2}}f(x)g(x)\,\mathrm dx$$\n\n$$\iff \int_{0}^{\pi/2-a}f(x)g(x)\,\mathrm dx\ge\int_{\pi/2-a}^{\pi/2}f(x)\big(1-g(x)\big)\,\mathrm dx$$\n\nI don’t know how to proceed further, any hint/suggestion would be helpful.", "question_owner": "T﹏T", "question_link": "https://math.stackexchange.com/questions/5052424/proving-int-frac-pi2-1-frac-pi2fx-mathrm-dx-le-int-0-frac-pi2fx", "answer": { "answer_id": 5052483, "answer_text": "That is\n\nSteffensen's inequality\n\n:\n\nLet\n\n$f: [a, b] \to \Bbb R$\n\n be a decreasing function and\n\n$g:[a, b] \to [0, 1]$\n\n (Riemann or Lebesgue) integrable, then\n\n$$\n\n \int_{b-I}^b f(x) \, dx \le \int_a^b f(x) g(x) \, dx\n\n \le \int_a^{a+I} f(x) \, dx \, ,\n\n$$\n\nwhere\n\n$I = \int_a^b g(x) \, dx$\n\n.\n\nSee\n\nWikipedia\n\n or\n\nWolfram MathWorld\n\n.
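Steffensen's inequality as stated above is easy to sanity-check numerically. The sketch below uses a composite midpoint rule and the sample choices $f(x)=e^{-x}$ (decreasing) and $g(x)=\cos x$ on $[0,\pi/2]$, for which $I=1$; these concrete choices are mine, not from the answer.

```python
import math

def midpoint_integral(func, a, b, n=20000):
    # composite midpoint rule; plenty accurate for these smooth integrands
    h = (b - a) / n
    return h * sum(func(a + (i + 0.5) * h) for i in range(n))

a, b = 0.0, math.pi / 2

def f(x):
    return math.exp(-x)   # decreasing on [a, b] (sample choice)

g = math.cos              # satisfies 0 <= g <= 1 on [a, b]

I = midpoint_integral(g, a, b)            # equals 1 here
lower = midpoint_integral(f, b - I, b)    # int_{b-I}^{b} f
middle = midpoint_integral(lambda x: f(x) * g(x), a, b)
upper = midpoint_integral(f, a, a + I)    # int_{a}^{a+I} f

assert abs(I - 1.0) < 1e-6
assert lower <= middle <= upper           # Steffensen's inequality
print(lower, middle, upper)
```

With these choices the three integrals come out to roughly $0.357 \le 0.604 \le 0.632$, consistent with the inequality and strict because $f$ is not constant.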
Note that in these articles\n\n$f$\n\n is required to be nonnegative, but that condition is not needed, as the following proof shows.\n\nProof\n\n of the right inequality:\n\n$$\n\n\\begin{align}\n\n &\\int_a^{a+I} f(x) \\, dx - \\int_a^b f(x) g(x) \\, dx \\\\\n\n & \\quad = \\int_a^{a+I} \\underbrace{f(x)}_{\\ge f(a+I)} (1-g(x)) \\, dx - \\int_{a+I}^b \\underbrace{f(x)}_{\\le f(a+I)} g(x) \\, dx \\\\\n\n& \\quad \\ge f(a+I) \\int_a^{a+I} (1-g(x)) \\, dx - f(a+I) \\int_{a+I}^b g(x) \\, dx \\\\\n\n& \\quad = f(a+I) \\left( I - \\int_a^b g(x) \\, dx \\right) \\\\\n\n& \\quad = 0 \\, .\n\n\\end{align}\n\n$$\n\nThe left inequality is proven in the same way.\n\nIn your case is\n\n$g(x) = \\cos(x)$\n\n on\n\n$[a, b] = [0, \\pi/2]$\n\n and\n\n$I=1$\n\n. Since\n\n$0 < g(x) < 1$\n\n except at the endpoints of the interval, equality holds if and only if\n\n$f$\n\n is constant.\n\nIn the general case we can say that if\n\n$0 < g(x) < 1$\n\n almost everywhere on\n\n$[a, b]$\n\n then equality holds if and only if\n\n$f$\n\n is constant almost everywhere.", "answer_owner": "Martin R", "is_accepted": true, "score": 11, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1356507, "title": "Is there any function $f:\\mathbb R \\rightarrow \\mathbb R$ such that it has right derivative but not left derivative at every point?", "question_text": "Is there any function $$f:\\mathbb R \\rightarrow \\mathbb R$$ such that it has right derivative but not left derivative at every point?", "question_owner": "153", "question_link": "https://math.stackexchange.com/questions/1356507/is-there-any-function-f-mathbb-r-rightarrow-mathbb-r-such-that-it-has-right", "answer": { "answer_id": 1356550, "answer_text": "Let $q_1,q_2,\\ldots$ be an enumeration of the rational numbers in $[0,1]$, and\n\n$$ f_n(x) = \\left\\{\\begin{array}{rl}1 & \\text{if } x\\geq q_n,\\\\ 0 & \\text{otherwise}.\\end{array}\\right.$$\n\nIf we consider\n\n$$ f(x) = \\sum_{n\\geq 
1}\\frac{1}{2^n}\\,f_n(x) $$\n\nthen $f(x)$ has a right derivative for every $x\\in[0,1]$ but no left derivative exists if $x\\in\\mathbb{Q}\\in[0,1]$.\n\nNow we may try to condensate singularities, but due to\n\nDenjoy-Young-Saks theorem\n\n, we cannot achieve the non-existence of the left derivative over a subset of $[0,1]$ with positive measure.", "answer_owner": "Jack D'Aurizio", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105510, "title": "How to calculate $\\frac{df}{dg}$?", "question_text": "Consider functions\n\n$f(x(t),y(t))$\n\n and\n\n$g(x(t),y(t))$\n\n.\n\nI want to understand how to calculate\n\n$\\frac{df}{dg}$\n\n.\n\nSince I had no idea on how to approach this I considered two examples:\n\nExample 1:\n\nFirst I look at an easier case:\n\nLet\n\n$f(x(t),y(t))$\n\n, then\n\n$\\frac{df}{dt}=\\frac{\\partial f}{\\partial x}\\frac{\\partial x}{\\partial t}+ \\frac{\\partial f}{\\partial y}\\frac{\\partial y}{\\partial t}$\n\n.\n\nExample 2:\n\nLet\n\n$u(x), v(x)$\n\n be functions.\n\nThen\n\n$\\frac{du}{dv}=\\frac{\\partial u}{\\partial x} \\frac{\\partial x}{\\partial v}$\n\n, where\n\n$\\frac{\\partial x}{\\partial v}=\\frac{1}{v'(x)}$\n\n.\n\nAfter this consideration I still am not sure, how to do it.\n\nSo I tried to look at an more concrete example.\n\nLet\n\n$f(x(t),y(t))= ax(t)+by(t)$\n\n,\n\n$g(x(t),y(t))=bx(t)-ay(t)$\n\n and\n\n$x(t)=t$\n\n,\n\n$y(t)=y(t)$\n\n.\n\nThus we have\n\n$$f(x(t),y(t))=at+by(t)$$\n\n and\n\n$$g(x(t),y(t))=bt-ay(t)$$\n\n.\n\nThen I thought that maybe\n\n$\\frac{df}{dg}=\\frac{df}{dt} \\frac{dt}{dg}$\n\n could be the right approach.", "question_owner": "Maxi", "question_link": "https://math.stackexchange.com/questions/5105510/how-to-calculate-fracdfdg", "answer": { "answer_id": 5105609, "answer_text": "Considering the functions\n\n$g(x(t),y(t))$\n\n,\n\n$f(x(t),y(t))$\n\n with 
parameter\n\n$t$\n\n,\n\nThen\n\n$$\\frac{df}{dg}=\\frac{df}{dt} \\frac{dt}{dg}$$\n\nIf such a parametric relation does not exist, say the functions are given as\n\n$g(x,y)$\n\n and\n\n$f(x,y)$\n\n then,\n\n$$\\frac{df}{dg} = \\frac{\\partial f}{ \\partial x}\\frac{\\partial x}{ \\partial g}$$\n\n$$\n\n\\text {OR}$$\n\n$$\\frac{df}{dg} = \\frac{\\partial f}{ \\partial y}\\frac{\\partial y}{ \\partial g}$$\n\nBut for\n\n$\\frac{df}{dg}$\n\n to exist, we need in general wherever\n\n$f,g$\n\n is differentiable, the condition\n\n$$\\frac{\\partial f}{ \\partial x}\\frac{\\partial x}{ \\partial g} = \\frac{\\partial f}{ \\partial y}\\frac{\\partial y}{ \\partial g}$$\n\nFor example, define\n\n$f(x,y) = xy^3$\n\n and\n\n$g(x,y) = x^2y^6$\n\n.\n\nIf not, then\n\n$\\frac{df}{dg}$\n\n has no meaning as commented by the user\n\n@Ted Shifrin.", "answer_owner": "vishalnaakar25", "is_accepted": true, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5104225, "title": "Does a function $f:D\\subset\\Bbb{R}^n\\to\\Bbb{R}$ have arbitrary limits at isolated points of $D$ by definition?", "question_text": "Backgrounds:\n\nThe definition that the limit of a function\n\n$f:D\\subset\\Bbb{R}^n\\to\\Bbb{R}$\n\n at\n\n$\\boldsymbol{x}_0\\in D$\n\n is\n\n$A$\n\n is as such:\n\n$\\forall\\varepsilon>0$\n\n,\n\n$\\exists\\delta>0$\n\n, so that\n\n$\\forall\\boldsymbol{x}\\in U_0(\\boldsymbol{x}_0,\\delta)\\cap D$\n\n,\n\n$\\lvert f(\\boldsymbol{x})-A\\rvert<\\varepsilon$\n\n.\n\n(To avoid any misunderstanding for those not familiar with multivariable calculus, I will discuss\n\n$n=1$\n\n situation here, but this is not quite related to my main concern.)\n\nTextbooks differ in their ways of dealing with\n\n$n=1$\n\n situation. Some of them use\n\n$\\forall x\\in U_0(x_0,\\delta)$\n\n rather than\n\n$\\forall x\\in U_0(x_0,\\delta)\\cap D$\n\n. 
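The parametric rule $\frac{df}{dg}=\frac{df/dt}{dg/dt}$ from the answer above can be illustrated with finite differences; the concrete choices $x(t)=t$, $y(t)=t^2$, $f=x+y$, $g=xy$ below are arbitrary examples of mine.

```python
# Finite-difference illustration of df/dg = (df/dt) / (dg/dt) for
# f(x(t), y(t)) and g(x(t), y(t)); the concrete functions are arbitrary.
def x(t): return t
def y(t): return t * t

def f(t): return x(t) + y(t)   # f = t + t^2
def g(t): return x(t) * y(t)   # g = t^3

t0, h = 1.0, 1e-6
df_dt = (f(t0 + h) - f(t0 - h)) / (2 * h)   # central difference
dg_dt = (g(t0 + h) - g(t0 - h)) / (2 * h)
numeric = df_dt / dg_dt

# Exact: df/dt = 1 + 2t = 3 and dg/dt = 3t^2 = 3 at t = 1, so df/dg = 1.
assert abs(numeric - 1.0) < 1e-6
print(numeric)
```

The same ratio is what one gets by treating $f$ as a function of $g$ along the parametrized curve and differencing directly in $g$.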
This difference causes different results for problems like\n\n$\lim\limits_{x\to0}\dfrac{\sin\left(x\sin\dfrac{1}{x}\right)}{x\sin\dfrac{1}{x}}$\n\n(is\n\n$1$\n\n or does not exist), and we already have enough discussions about this on MSE (like\n\nhere\n\n).\n\nBut for\n\n$n>1$\n\n situations (multivariable calculus), most textbooks I have seen do the same as what I have shown at the beginning. To include \"\n\n$\cap D$\n\n\" in the definition provides much convenience since\n\n$D$\n\n might become much more complicated. As a result, we have the conclusion that \"\n\n$f$\n\n is always continuous at an isolated point of\n\n$D$\n\n\", the proof of which is as such:\n\n$\forall\varepsilon>0$\n\n,\n\n$\exists\delta>0$\n\n, so that\n\n$U_0(\boldsymbol{x}_0,\delta)\cap D=\varnothing$\n\n(by definition of isolated points), and thus\n\n$\forall\boldsymbol{x}\in U_0(\boldsymbol{x}_0,\delta)\cap D$\n\n(\n\n$\forall\boldsymbol{x}\in\varnothing$\n\n),\n\n$\lvert f(\boldsymbol{x})-f(\boldsymbol{x}_0)\rvert<\varepsilon$\n\n, which shows\n\n$\lim\limits_{\boldsymbol{x}\to\boldsymbol{x}_0}f(\boldsymbol{x})=f(\boldsymbol{x}_0)$\n\n by definition.\n\nWe also already have enough discussions about this on MSE (on both\n\n$n=1$\n\n and\n\n$n>1$\n\n situations, like\n\nhere\n\n and\n\nhere\n\n).\n\nQuestion:\n\nIf we accept this definition, the problem is: for isolated points of\n\n$D$\n\n,\n\n$\forall A\in\Bbb{R}$\n\n the statement is always true, which means\n\n$\lim\limits_{\boldsymbol{x}\to\boldsymbol{x}_0}f(\boldsymbol{x})=A$\n\n for\n\n$\forall A\in\Bbb{R}$\n\n.\n\nWhat is worse, in this way we can also show\n\n$\lim\limits_{\boldsymbol{x}\to\boldsymbol{x}_0}f(\boldsymbol{x})\ne f(\boldsymbol{x}_0)$\n\n(by the arbitrariness of\n\n$\lim\limits_{\boldsymbol{x}\to\boldsymbol{x}_0}f(\boldsymbol{x})$\n\n), and thus\n\n$f$\n\n becomes discontinuous at\n\n$\boldsymbol{x}_0$\n\n.\n\n1) Do I misunderstand anything about this
definition? Is there any mistake in my description?\n\n2) If I am right so far, how do we understand the arbitrariness of limits at isolated points, and the contradiction caused by it?", "question_owner": "JC Q", "question_link": "https://math.stackexchange.com/questions/5104225/does-a-function-fd-subset-bbbrn-to-bbbr-have-arbitrary-limits-at-isolate", "answer": { "answer_id": 5105332, "answer_text": "Principles of Mathematical Analysis\n\n by Rudin (a very commonly used real analysis textbook) gives a definition of limit according to which a limit of the function can only be defined at limit points of the domain:\n\nLet\n\n$X$\n\n and\n\n$Y$\n\n be metric spaces; suppose\n\n$E \subset X$\n\n,\n\n$f$\n\n maps\n\n$E$\n\n into\n\n$Y$\n\n, and\n\n$p$\n\n is a limit point of\n\n$E$\n\n. We write\n\n$f(x) \rightarrow q$\n\n as\n\n$x \rightarrow p$\n\n, or\n\n$\lim_{x \to p} f(x) = q$\n\n if there is a point\n\n$q \in Y$\n\n with the following property:\n\nFor every\n\n$\epsilon > 0$\n\n there exists a\n\n$\delta > 0$\n\n such that\n\n$d_{Y}(f(x), q) < \epsilon$\n\n for all points\n\n$x \in E$\n\n for which\n\n$0 < d_{X}(x, p) < \delta$\n\n.\n\nFurthermore, these\n\nlecture notes\n\n (see pg. 23) say quite explicitly:\n\nWe do not define a limit of a function at an\n\nisolated point of its domain\n\nAnd also\n\nUnderstanding Analysis\n\n by Abbott says\n\nIf\n\n$c$\n\n is an isolated point of\n\n$A$\n\n, then\n\n$\lim_{x \to c}f(x)$\n\n isn't defined\n\nSo I think the resolution to your \"paradox\" is that the limiting value of a function simply is not defined at isolated points of the domain.\n\nIt is possible to define \"non-deleted\" limits (so the limit exists at a limit point), but this does not seem to be the commonly used definition.\n\nAre \"non-deleted\" limits more common outside the USA? Maybe?
But the comment to\n\nthis answer\n\n by Russian user Fedor Petrov would seem to indicate otherwise.\n\nHowever, none of this precludes the function from being formally continuous at an isolated point, because continuity need not be defined in terms of limits (and in full generality\n\nis not\n\n defined in terms of limits). To quote again from Rudin's\n\nPrinciples of Mathematical Analysis\n\n:\n\nSuppose\n\n$X$\n\n and\n\n$Y$\n\n are metric spaces,\n\n$E \\subset X$\n\n,\n\n$p \\in E$\n\n, and\n\n$f$\n\n maps\n\n$E$\n\n into\n\n$Y$\n\n. Then\n\n$f$\n\n is said to be\n\ncontinuous at\n\n$p$\n\n if for every\n\n$\\epsilon > 0$\n\n there exists a\n\n$\\delta > 0$\n\n such that\n\n$d_{Y}(f(x), f(p)) < \\epsilon$\n\n for all points\n\n$x \\in E$\n\n for which\n\n$d_{X}(x, p) < \\delta$\n\nFootnote:\n\nFWIW, the \"deleted\" limits make a lot more sense to me personally. Consider\n\n$f(x)$\n\n equal to\n\n$1$\n\n at\n\n$x=0$\n\n and equal to\n\n$x^2$\n\n elsewhere. The \"deleted\" limit as\n\n$x \\to 0$\n\n is\n\n$0$\n\n, which makes sense: the limit is the behavior of the function as\n\n$x$\n\n \"gets close to\" a particular value. The \"non-deleted\" limit is undefined, which doesn't really make sense.", "answer_owner": "NikS", "is_accepted": true, "score": 2, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105399, "title": "How to show the even order derivatives of $f(x)=\\big(1+\\tan^{r}(x) \\big)^{-1}$ are $0$?", "question_text": "Consider this function\n\n$f(x)=\\left(1+\\tan^r{x}\\right)^{-1}$\n\n where\n\n$r$\n\n is a fixed positive real number.\n\nThen, I claim that for every even positive integer\n\n$n$\n\n,\n\n$f^{(n)}\\left(\\frac{\\pi}{4}\\right)=0$\n\n. Is this claim true?\n\nHere is a Mathematica program to calculate the derivative for even integers up to\n\n$n=10$\n\n:\n\nDo[Print[Assuming[r > 0,Simplify[D[1/(1 + Tan[x]^r), {x, 2 j}]] /. 
x -> (Pi/4)]], {j, 1, 5}]", "question_owner": "John L", "question_link": "https://math.stackexchange.com/questions/5105399/how-to-show-the-even-order-derivatives-of-fx-big1-tanrx-big-1-a", "answer": { "answer_id": 5105415, "answer_text": "First of all, observe that,\n\n$f(x) = \\dfrac{1}{1+ \\tan^rx}$\n\n, then\n\n$f(\\pi/2 - x) = \\dfrac{1}{1+ \\cot^rx} = \\dfrac{\\tan^rx}{1+ \\tan^rx}$\n\n.\n\nWhich shows,\n\n$f(x) + f(\\pi/2 - x) = 1.$\n\nNow, after differentiating the above relation, we can get\n\n$$f^{(n)}(x) + f^{(n)}(\\pi/2 - x)(-1)^n = 0.$$\n\nWhich shows that for odd\n\n$n$\n\n, it is not giving any info, but for even\n\n$n$\n\n, we get\n\n$$f^{(2n)}(x) = - f^{(2n)}(\\pi/2 - x)\n\n\\\\ \\implies f^{(2n)}(\\pi/4) = - f^{(2n)}(\\pi/4) \\implies f^{(2n)}(\\pi/4) = 0 ~ \\forall n \\in \\mathbb{N}.$$", "answer_owner": "Afntu", "is_accepted": true, "score": 10, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5104154, "title": "Why is the natural log added to this differentiable function?", "question_text": "I am refreshing my calculus knowledge using. The workbook direction for the problem is:\n\nPerform the following derivative, where:\n\n$$\\cot(4 \\theta^2 - 1) > 0.$$\n\nThe author then presents the below function\n\n$$\\frac{\\mathrm d}{\\mathrm d\\theta}\\ln[\\cot(4 \\theta^2 - 1)].$$\n\nI am unable to remember and find in my under grad texts the reason why the ln was added to the function. I am just beginning my calculus refresher and this is my first Math Stack Exchange question.\n\nEDITED Question with screen shots of the text book. I have placed below screen shots of the question and four solution pages. 
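The symmetry identity $f(x)+f(\pi/2-x)=1$ behind the accepted answer above, and the vanishing of $f''(\pi/4)$ that follows from it, can be checked numerically. In this sketch $r=3$ and the finite-difference step are arbitrary choices.

```python
import math

r = 3.0  # arbitrary fixed positive exponent

def f(x):
    return 1.0 / (1.0 + math.tan(x) ** r)

# symmetry from the answer: f(x) + f(pi/2 - x) = 1 on (0, pi/2)
for x in (0.3, 0.7, 1.1):
    assert abs(f(x) + f(math.pi / 2 - x) - 1.0) < 1e-9

# second derivative at pi/4 via a central difference; it should vanish
x0, h = math.pi / 4, 1e-4
second = (f(x0 + h) - 2.0 * f(x0) + f(x0 - h)) / (h * h)
assert abs(second) < 1e-4
print(second)
```

The same finite-difference check works for any even order, since the identity forces $f$ to be odd about the point $(\pi/4, \tfrac12)$.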
I can follow the solution with the exception of the ln reasoning.", "question_owner": "BTyler", "question_link": "https://math.stackexchange.com/questions/5104154/why-is-the-natural-log-added-to-this-differentiable-function", "answer": { "answer_id": 5105491, "answer_text": "Hint: Since\n\n$$\frac {d}{dx} (\ln x) = \frac {1}{x}$$\n\n any function inside of the \"\n\n$\ln$\n\n\" can also be differentiated by the chain rule. For example,\n\n$$\frac {d}{dx} \ \ln (x^2) = \frac {1}{x^2}\cdot (2x) = \frac {2}{x}$$\n\n However, for\n\n$\ln x$\n\n,\n\n$x$\n\n cannot be zero or negative in the real numbers, so there must be a condition that the function inside \"\n\n$\ln$\n\n\" must be greater than zero.", "answer_owner": "bjcolby15", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105552, "title": "How to determine a function or a family of functions subject to some constraints", "question_text": "Let’s consider a real function\n\n$f(x)$\n\n that is continuous, differentiable everywhere, and has an infinite number of maxima and minima, for example\n\n$f(x) = \cos(2x)+\cos(5x)-\sin(3x)$\n\n (this is just one of the infinite number of examples that could be given).\n\nLet's imagine that I can determine all the values of\n\n$x$\n\n where\n\n$f$\n\n has a minimum, i.e., I can define an infinite set\n\n$S$\n\n of values of\n\n$x$\n\n at which\n\n$f$\n\n has a minimum (if I couldn't, for example when I can't determine a periodic pattern, I would have to consider a finite set of minima that I would calculate using numerical methods, but let's imagine that this is not the case for now).\n\nJust one question\n\n: what algorithms/mathematical methods exist (tools, packages, software, and if they don't exist, what do you suggest as a simplified approach or where to read for some insight) to determine a function\n\n$f_1(x)$\n\n (or a family of functions\n\n$g$\n\n, since\n\n$f_1(x)$\n\n is
almost certainly not unique), that respects the following constraints:\n\n$f_1(x)$\n\n is continuous and differentiable everywhere, like\n\n$f(x)$\n\n,\n\n$f_1(x)$\n\n passes through all minimum points of\n\n$f(x)$\n\n,\n\nall minima of\n\n$f_1(x)$\n\n coincide with some minima of\n\n$f(x)$\n\n, i.e., the minima of\n\n$f(x)$\n\n with the smaller values,\n\nall maxima of\n\n$f_1(x)$\n\n coincide with other minima of\n\n$f(x)$\n\n, i.e., the minima of\n\n$f(x)$\n\n with the larger values,\n\nno maximum or minimum of\n\n$f_1(x)$\n\n occurs at any point other than a minimum of\n\n$f(x)$\n\n?\n\nIntuitively, I expect\n\n$f_1(x)$\n\n to be a function composed of elementary trigonometric functions like\n\n$f(x)$\n\n (see example at beginning).\n\nBut I have no idea how to find\n\n$f_1(x)$\n\n to compare it with\n\n$f(x)$\n\n, just out of curiosity (for example to see if\n\n$f_1(x)$\n\n becomes inevitably more complex than\n\n$f(x)$\n\n or whether it maintains a “complexity” comparable to\n\n$f(x)$\n\n).", "question_owner": "Matteo", "question_link": "https://math.stackexchange.com/questions/5105552/how-to-determine-a-function-or-a-family-of-functions-subject-to-some-constraints", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5105499, "title": "Does $\int_{0}^{\infty}\sin\left(\sin\left(x^{2}\right)\right)dx$ have a simple solution?", "question_text": "I was on twitter when I saw someone post the integral\n\n$$\int_{0}^{\infty}\sin\left(\sin\left(x^{2}\right)\right)dx.$$\n\nThe solution was not a part of the question so I came to MSE to ask.
I know that\n\n$\int_{0}^{\infty}\sin\left(x^{2}\right)dx$\n\n, the Fresnel integral, can be solved with complex analysis and re-writing\n\n$\sin(x)$\n\n as the sum of complex exponentials.\n\nI attempted a similar method with this integral, writing the outer sin function as the combination of complex exponentials\n\n$$\int_{R}e^{i\sin\left(z^{2}\right)}dz.$$\n\n However, I was unable to finish the integral. What is the easiest way to finish solving this integral?", "question_owner": "rain", "question_link": "https://math.stackexchange.com/questions/5105499/does-int-0-infty-sin-left-sin-leftx2-right-rightdx-have-a-simple", "answer": { "answer_id": 5105501, "answer_text": "First perform the substitution\n\n$u = x^2$\n\n on the integral (which converges by the Dirichlet test). Now recall the\n\nJacobi-Anger\n\n expansion,\n\n$$\sin(\sin(u)) = 2 \sum_{k=1}^\infty J_{2k-1}(1) \sin((2k-1)u)$$\n\nmoving the sum out, pulling constant factors out of the integral, and integrating (a familiar Fresnel integral), we obtain\n\n$$\int_{0}^{\infty}\sin\left(\sin\left(x^{2}\right)\right)dx = \sqrt{\frac{\pi}{2}} \sum_{k=1}^\infty \frac{J_{2k-1}(1)}{\sqrt{2k-1}}= \sqrt{\frac{\pi}{2}} \left( \frac{J_1(1)}{\sqrt{1}} + \frac{J_3(1)}{\sqrt{3}} + \frac{J_5(1)}{\sqrt{5}} + \dots \right)$$\n\nI don't think there is a better form for the integral.
Note that the complex analytical method would likely fail as the integral over\n\n$\\mathbb{R}$\n\n of\n\n$\\cos (\\sin (x^2))$\n\ndiverges\n\n.", "answer_owner": "Maxime Jaccon", "is_accepted": false, "score": 4, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1555575, "title": "Finding all the points (x, y) where the slope of the tangent line is -1 and 1.", "question_text": "I'm given an astroid with the parametric equations $x = 6cos^3t, y = 6sin^3t$.\n\nIn order to find the slope of the tangent line itself, I divided the derivative of y in terms of t over the derivative of x in terms of t, hence $\\frac {dy/dt}{dx/dt} = \\frac{dy}{dx}$. This gave me $-tan(t)$, which is the slope of the tangent line to the astroid in terms of t.\n\nHowever, now I'm being asked to list all of the points $(x, y)$ where the slope of the tangent line is 1 and -1. At first I believed it was just going to be as easy as setting it up like $-tan(t) = 1, -tan(t) = -1$, but theres no set values for those $x$'s, or $t$'s in this case.\n\nIs there another way to go about this problem? Any kind of guidance on this would be greatly appreciated.", "question_owner": "etree", "question_link": "https://math.stackexchange.com/questions/1555575/finding-all-the-points-x-y-where-the-slope-of-the-tangent-line-is-1-and-1", "answer": { "answer_id": 1555604, "answer_text": "You are correct that in a sense there are no\n\nset\n\n values for $t$ when $\\tan(t)=\\pm1$ because there are infinite solutions. However, your function is periodic with period of $2\\pi$ so there a finite solutions in terms of $x$ and $y$. This becomes more clear when you look at a plot.\n\nWe only need to find $t$ when it restricted to $[0,2\\pi]$ due to the periodicity of the function and then our $x$ and $y$ values from there.\n\nWe know $\\tan(t)=1$. With our restriction, this means $t=\\pi/4$ and $t=5\\pi/4$. Again, these are the only solutions we care about. 
For $\\tan(t)=-1$, $t=3\\pi/4$ and $t=7\\pi/4$. Now we can substitute these $t$ values back to our original function in order to find the $(x,y)$ points. We obtain (due to the symmetry of the problem)\n\n$$(x,y)=\\left(\\pm\\frac{3}{\\sqrt 2}, \\pm\\frac{3}{\\sqrt 2}\\right)$$", "answer_owner": "Ben Longo", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1168004, "title": "Wolfram Alpha gives an answer to a non-convergent integral", "question_text": "When I am trying to solve the integral\n\n$$\\int_{0}^{\\pi} e^{ia\\cos(\\theta)} \\tan^{3}(\\theta)d\\theta$$\n\nWolfram Alpha gives the answer (putting query without the limits) ,\n\nAnswer: $0.5[(a^2+2)Ei(ia\\cos(x))+\\sec^2(x)e^{ia\\cos(x)}(1+ia\\cos(x))]$\n\nAlpha result\n\nI am confused with this results, however when I put the query with limits it gives that the integral does not converge. What is the different between these two situations when the initial integral is symmetric around 0 and integration yields real part to zero while the imaginary part goes to infinity.", "question_owner": "MaxQuantum", "question_link": "https://math.stackexchange.com/questions/1168004/wolfram-alpha-gives-an-answer-to-a-non-convergent-integral", "answer": { "answer_id": 1168346, "answer_text": "Just FYI, the \"sec\" becomes undefined at $\\frac{\\pi}{2}$ because $\\sec(x)=\\frac{1}{\\cos(x)}$ and $\\cos(\\frac{\\pi}{2})=0$\n\nSo it's like (sort of) $\\frac{1}{|x|}$ you can integrate bits of it easily, but go over the origin and you're going to have some trouble.", "answer_owner": "Alec Teal", "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5103909, "title": "Find the limit of $\\frac{\\arctan x^2}{\\sqrt{1+x\\sin x}-\\sqrt{\\cos x}}\\left(2-\\frac x{\\mathrm e^x-1}\\right)$ as $x\\to0$", "question_text": "Find\n\n$$\\lim\\limits_{x\\to0}\\frac{\\arctan 
x^2}{\\sqrt{1+x\\sin x}-\\sqrt{\\cos x}}\\left(2-\\frac x{\\mathrm e^x-1}\\right).$$\n\nI used\n\n$\\arctan x\\sim x\\sim\\mathrm e^x-1$\n\n to get\n\n$\\lim\\limits_{x\\to0}\\frac{x\\left(2\\mathrm e^x-2-x\\right)}{\\sqrt{1+x\\sin x}-\\sqrt{\\cos x}}$\n\n.\n\nIf we use L'Hopital's rule two times, it gives a solution. But I think it is too hard to calculate this by hand without making mistakes, because the denominator becomes\n\n$$\\frac{\\sqrt{\\cos (x)}}{2}+\\frac{\\sin ^2(x)}{4 \\cos ^{\\frac{3}{2}}(x)}-\\frac{(\\sin (x)+x \\cos (x))^2}{4 (x \\sin (x)+1)^{3/2}}+\\frac{2 \\cos (x)-x \\sin (x)}{2 \\sqrt{x \\sin (x)+1}}.$$\n\n(The above expression is given by software)\n\nThere are also solutions by Taylor expansion to simplify the denominator. But I would also like to avoid this.\n\nAre there any better ways to solve the problem? (Note. We may use L'Hopital's law)", "question_owner": "youthdoo", "question_link": "https://math.stackexchange.com/questions/5103909/find-the-limit-of-frac-arctan-x2-sqrt1x-sin-x-sqrt-cos-x-left2-fr", "answer": { "answer_id": 5103930, "answer_text": "Presumably, you know how to show that the factor inside the parenthesis, on the right, tends to\n\n$1$\n\n and we ignore it.\n\nNow using the conjugate binomial, we seek the limit of\n\n$$\\frac{2\\arctan(x^2)}{1+x\\sin(x)-\\cos(x)}=\\frac{2\\arctan(x^2)}{2\\sin^2\\left(\\dfrac x2\\right)+x\\sin(x)}$$\n\n as\n\n$\\sqrt{1+x\\sin(x)}+\\sqrt{\\cos(x)}$\n\n tends to\n\n$2$\n\n.\n\nHence we get\n\n$$\\dfrac2{2\\dfrac1{2^2}+1}.$$", "answer_owner": "user1548405", "is_accepted": true, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1572327, "title": "Find $\\lim_{x \\to 0}(e^x-1)(\\frac{1}{x} - \\left \\lfloor{{\\frac{1}{x}}}\\right \\rfloor)$", "question_text": "Find $\\lim_{x \\to 0}(e^x-1)(\\frac{1}{x} - \\left \\lfloor{{\\frac{1}{x}}}\\right \\rfloor)$\n\nI was thinking about using the Sandwich Theorem and doing something like this: 
$$(e^x-1) < (e^x-1)(\\frac{1}{x} - \\left \\lfloor{{\\frac{1}{x}}}\\right \\rfloor) < (e^x-1)\\cdot \\frac{1}{x}$$\n\n(this seems true because $x \\to 0$)\n\nand then I can say that the limit of the left side is $0$ and the limit of the right side is $0$ (because I get $=0$ both when $x \\to 0^+$ and when $x \\to 0^-$)...\n\nSo I get that the limit of the original expression equals $0$. But I think this is not correct... Can someone tell me what is a correct way to solve it using the Sandwich Theorem (or something even simpler,\n\nwithout using L'hospital's rule\n\n)?", "question_owner": "Natalia", "question_link": "https://math.stackexchange.com/questions/1572327/find-lim-x-to-0ex-1-frac1x-left-lfloor-frac1x-right-rf", "answer": { "answer_id": 1572337, "answer_text": "Observe that\n\n$0\\leq z-\\lfloor z\\rfloor <1 $\n\n so that\n\n$$\\vert (e^x-1)(\\frac{1}{x} - \\left \\lfloor{{\\frac{1}{x}}}\\right \\rfloor)\\vert \\leq \\vert e^x-1\\vert\\Rightarrow \\lim_{x \\to 0}(e^x-1)(\\frac{1}{x} - \\left \\lfloor{{\\frac{1}{x}}}\\right \\rfloor)=0.$$", "answer_owner": "Matematleta", "is_accepted": true, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 3226247, "title": "If the set on which a Riemann integrable function $f$ is nonzero has empty interior then the integral of $|f|$ is $0$.", "question_text": "$f:[a,b]\\to\\mathbb{R}$\n\n is Riemann integrable and\n\n$X=\\{x\\in [a,b] : f(x)\\neq 0\\}$\n\n has empty interior.\n\nShow that\n\n$\\int_a^b|f(x)|dx=0$\n\n.\n\nI think that it is an easy exercise of measure theory.
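The squeeze bound in Matematleta's answer above is easy to probe numerically. A quick sketch (not part of the original posts; plain standard-library Python):

```python
import math

# f(x) = (e^x - 1) * (1/x - floor(1/x)); the fractional part lies in [0, 1),
# so |f(x)| <= |e^x - 1|, which forces f(x) -> 0 as x -> 0, as the answer argues.
def f(x):
    return (math.exp(x) - 1.0) * (1.0 / x - math.floor(1.0 / x))

for x in (0.007, -0.007, 0.00023, -0.00023):
    assert abs(f(x)) <= abs(math.exp(x) - 1.0) + 1e-15  # the squeeze bound
    assert abs(f(x)) < 0.01                              # already close to 0
```

The bounded fractional part times the vanishing factor $e^x-1$ gives the limit $0$ from both sides.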
But how to solve it without the knowledge of measure theory and just using real analysis?\n\nThanks in advance.", "question_owner": "Majid", "question_link": "https://math.stackexchange.com/questions/3226247/if-the-set-on-which-a-riemann-integrable-function-f-is-nonzero-has-empty-inter", "answer": { "answer_id": 3226274, "answer_text": "Let\n\n$a=x_0<x_1<\\cdots<x_n=b$\n\n be a partition of\n\n$[a,b]$\n\n. For sufficiently small\n\n$\\delta>0$\n\n we can write\n\n$$\\begin{align}\n\n\\varepsilon^{-1/2}\\int_0^x ze^{-\\tan^2z/\\varepsilon}dz&=\\varepsilon^{-1/2}\\sum_{\\ell=0}^{L-2}\\left(\\int_{\\ell \\pi+\\delta}^{(\\ell+1)\\pi-\\delta}ze^{-\\tan^2z/\\varepsilon}dz+\\int_{(\\ell+1)\\pi-\\delta}^{(\\ell+1)\\pi+\\delta}ze^{-\\tan^2z/\\varepsilon}dz\\right)\\\\\\\\\n\n&+\\varepsilon^{-1/2}\\int_{(L-1)\\pi+\\delta}^{x}ze^{-\\tan^2z/\\varepsilon}dz \\tag 1\\\\\\\\\n\n\\end{align}$$\n\nWe observe that in\n\n$(1)$\n\n the only integrals that will contribute in the limit as\n\n$\\varepsilon \\to 0$\n\n are those around integer multiples of\n\n$\\pi$\n\n. Thus, we have for\n\n$(L-1)\\pi<x<L\\pi$\n\n$$\\begin{align}\n\n\\lim_{\\varepsilon \\to 0}\\varepsilon^{-1/2}\\int_0^x ze^{-\\tan^2z/\\varepsilon}dz&=\\lim_{\\varepsilon \\to 0} \\varepsilon^{-1/2}\\sum_{\\ell=0}^{L-2}\\left(\\int_{(\\ell+1)\\pi-\\delta}^{(\\ell+1)\\pi+\\delta}ze^{-\\tan^2z/\\varepsilon}dz\\right) \\tag 2\\\\\\\\\n\n\\end{align}$$\n\nWe proceed to evaluate the integrals in\n\n$(2)$\n\n. 
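As a numerical aside (not part of the original answer): summing the per-spike contributions $(\ell+1)\pi^{3/2}$ over $\ell=0,\dots,L-2$ gives $\pi^{3/2}L(L-1)/2$, and a brute-force midpoint quadrature reproduces this for, e.g., $x=5\pi/2$ (so $L=3$, expected value $3\pi^{3/2}\approx 16.7$):

```python
import math

eps = 1e-3            # small parameter; spikes near multiples of pi have width ~sqrt(eps)
x = 2.5 * math.pi     # (L-1)*pi < x < L*pi with L = 3
h = 2e-4              # step fine enough to resolve the spikes
n = int(x / h)

total = 0.0
for k in range(n):
    z = (k + 0.5) * h
    t = math.tan(z)
    total += z * math.exp(-t * t / eps) * h   # exp underflows harmlessly to 0 away from k*pi

val = total / math.sqrt(eps)
expected = math.pi ** 1.5 * 3 * (3 - 1) / 2   # pi^(3/2) * L(L-1)/2 = 3*pi^(3/2)
assert abs(val - expected) / expected < 0.02
```

Only the Gaussian-like spikes at $\pi$ and $2\pi$ contribute, weighted by $\pi$ and $2\pi$ respectively, exactly as the splitting in $(1)$ and $(2)$ predicts.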
To that end we have\n\n$$\\begin{align}\n\n\\varepsilon^{-1/2}\\int_{(\\ell+1)\\pi-\\delta}^{(\\ell+1)\\pi+\\delta}ze^{-\\tan^2z/\\varepsilon}dz &=\\varepsilon^{-1/2}\\left(\\int_{-\\delta}^{\\delta}ze^{-\\tan^2z/\\varepsilon}dz+(\\ell +1)\\pi\\int_{-\\delta}^{\\delta}e^{-\\tan^2z/\\varepsilon}dz\\right)\\\\\\\\\n\n&=(\\ell +1)\\pi\\varepsilon^{-1/2}\\int_{-\\delta}^{\\delta}e^{-\\tan^2z/\\varepsilon}dz\\\\\\\\\n\n&\\sim (\\ell +1)\\pi\\varepsilon^{-1/2}\\int_{-\\delta}^{\\delta}e^{-z^2/\\varepsilon}dz\\\\\\\\\n\n&= (\\ell +1)\\pi\\int_{-\\delta/\\varepsilon^{1/2}}^{\\delta/\\varepsilon^{1/2}}e^{-z^2}dz\\\\\\\\\n\n&\\to (\\ell +1)\\pi^{3/2}\n\n\\end{align}$$\n\nSumming over\n\n$\\ell$\n\n we find for\n\n$(L-1)\\pi<x<L\\pi$\n\n$$\\lim_{\\varepsilon \\to 0}\\varepsilon^{-1/2}\\int_0^x ze^{-\\tan^2z/\\varepsilon}dz=\\pi^{3/2}\\,\\frac{L(L-1)}{2}$$\n\nThe formula\n\n$$\\int_0^\\infty \\sin(bx) x^{n-1} \\, dx = \\frac{\\Gamma(n) \\sin\\left(\\frac{n\\pi}{2}\\right)}{b^n}$$\n\n is true\n\nonly\n\n if\n\n$n<1$", "answer_owner": "Claude Leibovici", "is_accepted": true, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1150860, "title": "Find the arc length of the graph of the given equation from $P$ to $ Q$ or on the specified interval.", "question_text": "$y=\\dfrac{x^3}{3}+\\dfrac{1}{4x}$ from $1$ to $3$\n\nI know that the arc length equation is the $\\sqrt{1+f'(x)}$ but for some reason I am having problems with this question. I got that the derivative was $\\dfrac{1}{4} + x^2$. Any help would be appreciated", "question_owner": "Kels", "question_link": "https://math.stackexchange.com/questions/1150860/find-the-arc-length-of-the-graph-of-the-given-equation-from-p-to-q-or-on-th", "answer": { "answer_id": 1150949, "answer_text": "Given that the arc length of $f(x)$ from $a$ to $b$ is given by $$\\int_a^b\\sqrt{1+\\left(f'(x)\\right)^2}\\space dx$$ it makes sense that if you are expected to solve an arc length problem, the nasty looking $\\sqrt{1+\\left(f'(x)\\right)^2}$ quantity has to simplify to something that is integrable. 
It is often the case that $1+\\left(f'(x)\\right)^2$ can be rewritten in terms of a function $h(x)$ such that $1+\\left(f'(x)\\right)^2 = h(x)^2$, and hence $\\sqrt{1+\\left(f'(x)\\right)^2} = h(x)$ where $h(x)$ can be integrated. That is true of this problem as well. So, given $y(x) = \\frac{x^3}{3}+\\frac{x^{-1}}{4}$ then $f'(x) = x^2-\\frac{x^{-2}}{4}$, and then $$\\begin{align}1+\\left(f'(x)\\right)^2 = 1+\\left(x^2-\\frac{x^{-2}}{4}\\right)^2 \\\\ = 1+x^4-\\frac{1}{2}+\\frac{x^{-4}}{16} \\\\ = x^4+\\frac{1}{2}+\\frac{x^{-4}}{16} \\\\ = \\left(x^2+\\frac{x^{-2}}{4}\\right)^2 \\end{align}$$ Can you take it from here?", "answer_owner": "graydad", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5104070, "title": "Optimizing area between $\\cos\\left(3x-k\\right)$ and $\\sin\\left(2\\left(x+k\\right)\\right)$.", "question_text": "Consider the functions\n\n$f(x) = \\cos\\left(3x - k\\right)$\n\n and\n\n$g(x) = \\sin\\left(2x + 2k\\right)$\n\n, defined over their maximal domains, where\n\n$k \\in \\mathbb{R}$\n\n.\n\nFind the values of\n\n$k$\n\n for which the total area bounded by\n\n$f$\n\n,\n\n$g$\n\n and the lines\n\n$x = 0$\n\n and\n\n$x = 2\\pi$\n\n is maximized.\n\nI've obtained the area function:\n\n$$A(k) = \\int_{0}^{2\\pi}\\left|\\cos\\left(3x-k\\right)-\\sin\\left(2\\left(x+k\\right)\\right)\\right|dx$$\n\nHowever I have no clue how to continue maximizing this function. 
Any help would be appreciated.", "question_owner": "Elite", "question_link": "https://math.stackexchange.com/questions/5104070/optimizing-area-between-cos-left3x-k-right-and-sin-left2-leftxk-right", "answer": { "answer_id": 5104179, "answer_text": "First note that\n\n\\begin{align}\n\nA(k) &= \\int_{0}^{2\\pi}\\left|\\cos\\left(3x-k\\right)-\\sin\\left(2x+2k\\right)\\right| \\, dx \\\\\n\n&= \\int_{-k/3}^{2\\pi - k/3} |\\cos(3u) - \\sin(2u + 8k/3)| \\, du \\\\\n\n&= \\int_{-\\pi}^{\\pi} |\\cos(3u) - \\sin(2u + 8k/3)| \\, du \\\\\n\n&= \\int_{-\\pi}^{\\pi} |\\cos(3x) - \\sin(2x + \\eta)| dx\n\n\\end{align}\n\nwhere\n\n$\\eta = 8k/3$\n\n.\n\n$\\cos(3x)$\n\n is even and\n\n$\\sin(2x+\\eta) = \\sin(2x) \\cos(\\eta) + \\cos(2x)\\sin(\\eta)$\n\n can be split into an even and odd part as well. Pairing the even and odd parts, the integrand can be written as\n\n$$|[\\cos(3x) - \\sin(\\eta) \\cos(2x)] - [\\cos(\\eta) \\sin(2x)]|$$\n\n Let\n\n$E$\n\n denote the even function, and\n\n$O$\n\n denote the odd function in the expression above. The integral, upon splitting the bounds, becomes\n\n\\begin{align}\n\nA(\\eta) &= \\int_{-\\pi}^{0} |E(x, \\eta) + O(x, \\eta)| dx + \\int_{0}^{\\pi} |E(x, \\eta) + O(x, \\eta)| \\, dx \\\\\n\n&= \\int_{0}^{\\pi} \\left( |E(x, \\eta) - O(x, \\eta)| + |E(x, \\eta) + O(x, \\eta)| \\right) \\, dx\n\n\\end{align}\n\nwhere we have performed the substitution\n\n$u = -x$\n\n on the first integral and then recombined everything under one integral. Indeed, recall that\n\n$|a - b| + |a + b| = 2\\max(|a|, |b|)$\n\n; thus we must have\n\n\\begin{align}\n\nA(\\eta) &= 2 \\int_{0}^{\\pi} \\max\\left(|E(x, \\eta)|, |O(x, \\eta)|\\right) \\, dx \\\\ &= 2 \\int_{0}^{\\pi} \\max\\left( |\\cos(3x) - \\sin(\\eta)\\cos(2x)|, |-\\cos(\\eta)\\sin(2x)| \\right) \\, dx\n\n\\end{align}\n\nThe maximum will occur at the \"extreme points\" of the integrand: where it is either all even or all odd.\n\n$\\eta$\n\n will determine how to move to these extreme points. 
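Two quick numerical sanity checks on this answer (a sketch, not part of the original post): the identity $|a-b|+|a+b|=2\max(|a|,|b|)$ used above, and the value $A(3\pi/16)=\frac{10}{3}\left(\sin\frac{\pi}{5}+\sin\frac{2\pi}{5}\right)\approx 5.129$ quoted for the extreme case $\eta=\pi/2$:

```python
import math
import random

# Identity used to combine the even/odd split: |a-b| + |a+b| = 2*max(|a|, |b|)
rng = random.Random(0)
for _ in range(1000):
    a, b = rng.uniform(-5, 5), rng.uniform(-5, 5)
    assert abs(abs(a - b) + abs(a + b) - 2 * max(abs(a), abs(b))) < 1e-12

# Midpoint rule for A(k) = integral over [0, 2*pi] of |cos(3x-k) - sin(2x+2k)|
def area(k, n=40000):
    h = 2 * math.pi / n
    return sum(
        abs(math.cos(3 * ((i + 0.5) * h) - k) - math.sin(2 * ((i + 0.5) * h) + 2 * k)) * h
        for i in range(n)
    )

closed_form = (10 / 3) * (math.sin(math.pi / 5) + math.sin(2 * math.pi / 5))  # ~ 5.1295
assert abs(area(3 * math.pi / 16) - closed_form) < 1e-3
```

The kinks of the absolute value cost the midpoint rule little accuracy here, so a modest grid already matches the closed form to three decimals.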
If the function is purely odd, the even part must vanish:\n\n$\\sin(\\eta) \\cos(2x) = 0$\n\n, which means\n\n$\\eta = n \\pi$\n\n. If the function is purely even, it requires that\n\n$\\cos (\\eta) = 0$\n\n, i.e.\n\n$\\eta = \\frac{\\pi}{2} + n \\pi$\n\n.\n\nThe value\n\n$k = 3\\pi/16$\n\n corresponds to\n\n$\\eta = 8(3\\pi/16)/3 = \\pi/2$\n\n, which is one of the extreme cases for\n\n$\\eta$\n\n. The integral evaluates to\n\n$$A(k=3\\pi/16) = \\frac{10}{3}\\left(\\sin\\left(\\frac{\\pi}{5}\\right) + \\sin\\left(\\frac{2\\pi}{5}\\right)\\right) \\approx 5.129$$\n\nThis agrees with Mariusz Iwaniuk's comment on the value of\n\n$A(k)$\n\n.\n\nHere\n\n is my handy desmos numerical verification.", "answer_owner": "Maxime Jaccon", "is_accepted": false, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 763071, "title": "Stokes' Theorem and Surface Independence Failure", "question_text": "As we know, if $\\vec{F}=\\nabla\\times\\vec{A}$ then from Stokes' Theorem, $\\iint_{S_1} \\vec{F}\\cdot \\,d\\vec{S}=\\iint_{S_2}\\vec{F}\\cdot \\,d\\vec{S}$ where $S_1$ and $S_2$ have the same boundary.\n\nDoes anyone have a quick example at the top of their head wherein the above equality is not satisfied implying that the vector potential of $\\vec{F}$ does not exist?", "question_owner": "Tomas Jorovic", "question_link": "https://math.stackexchange.com/questions/763071/stokes-theorem-and-surface-independence-failure", "answer": { "answer_id": 763088, "answer_text": "Do you mean something like the inverse-square field\n\n$$\n\nF(x, y, z) = \\frac{(x, y, z)}{(x^{2} + y^{2} + z^{2})^{3/2}},\\quad\n\n(x, y, z) \\neq (0, 0, 0),\n\n$$\n\nwhose divergence vanishes identically, and $S$ the unit sphere, cut into any convenient pair of surfaces, e.g., the upper and lower hemispheres? (The integral of $F$ over $S$ with the outward unit normal is $4\\pi$, not $0$.)", "answer_owner": "Andrew D. 
Hwang", "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 1714958, "title": "Show that the line with parametric equations don't intersect", "question_text": "Show that the line with parametric equations $x = 6 + 8t$, $y = −5 + t$, $z = 2 + 3t$ does not intersect the plane with equation $2x − y − 5z − 2 = 0$.\n\nTo answer this do i just plug in the $x$, $y$, and $z$ equation into $2x − y − 5z − 2 = 0$? So $2(6 + 8t) − y − 5(−5 + t) − 2(2 + 3t) = 0 $", "question_owner": "Kimmy.J", "question_link": "https://math.stackexchange.com/questions/1714958/show-that-the-line-with-parametric-equations-dont-intersect", "answer": { "answer_id": 1714967, "answer_text": "$2(6+8t)−(−5+t)−5(2+3t)−2=0$\n\n$0t=−5$\n\nTherefore, there is no intersection.", "answer_owner": "Kimmy.J", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5104087, "title": "If a continuous function that is differentiable everywhere has only 1 extrema point does that imply that that is an absolute extrema?", "question_text": "I was wondering if a function that has only one solution to\n\n$f'(x) = 0$\n\n such that\n\n$f''(x) ≠ 0$\n\n, then would this imply that it is an absolute maximum or minimum? I have no idea how someone would even go about proving this, but I cannot think of a graph of a function that would disobey this property. I was thinking of this after attempting a practice exam that asked to show that\n\n$x > \\ln x \\quad \\forall x > 0$\n\n and I went about showing that\n\n$x - \\ln x > 0$\n\n by finding its derivative and finding the only minimum, which was at\n\n$\\ x =1$\n\n. 
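The auxiliary fact used in this question — that $g(x)=x-\ln x$ has its only critical point at $x=1$ with $g(1)=1>0$, hence $x>\ln x$ — is easy to confirm numerically (a sketch, not part of the original post):

```python
import math

# g(x) = x - ln(x) on (0, inf): g'(x) = 1 - 1/x vanishes only at x = 1,
# and g''(x) = 1/x^2 > 0, so g(1) = 1 is the absolute minimum.
def g(x):
    return x - math.log(x)

xs = [0.01 * i for i in range(1, 2001)]   # sample (0, 20]
best = min(xs, key=g)
assert abs(best - 1.0) < 0.01             # sampled minimizer sits at x = 1
assert all(g(x) >= 1.0 - 1e-12 for x in xs)   # g >= 1 > 0, i.e. x > ln x
```

The marking scheme's extra step (checking $g''>0$ everywhere) is what upgrades the local minimum to an absolute one, which is exactly the issue the question raises.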
I then moved on with the next question, as I had concluded that this would be sufficient enough, however when checking the marking scheme, they took it a step further and found the second derivative and explained that since it was always greater than 0, the function was always concave up, meaning the minimum was an absolute minimum, which is why I am now here.", "question_owner": "kbyeet", "question_link": "https://math.stackexchange.com/questions/5104087/if-a-continuous-function-that-is-differentiable-everywhere-has-only-1-extrema-po", "answer": { "answer_id": 5104142, "answer_text": "Suppose, toward a contradiction, that\n\n$f$\n\n has only one extremum, say at\n\n$a$\n\n, but this is not an absolute extremum. Without loss of generality, the extremum at\n\n$a$\n\n is a maximum (if it's a minimum, work with\n\n$-f$\n\n). Since it's not an absolute maximum, there's some\n\n$b$\n\n with\n\n$f(b)>f(a)$\n\n; without loss of generality, assume\n\n$b>a$\n\n. On the (compact) interval\n\n$[a,b]$\n\n,\n\n$f$\n\n must attain a minimum value, say at\n\n$c$\n\n. Since\n\n$a$\n\n was a local maximum, we have\n\n$f(a+\\epsilon)<f(a)$\n\n for all sufficiently small\n\n$\\epsilon>0$\n\n, so\n\n$f(c)$\n\n is strictly below both\n\n$f(a)$\n\n and\n\n$f(b)$\n\n. 
In particular,\n\n$c$\n\n can't equal\n\n$a$\n\n or\n\n$b$\n\n, and therefore\n\n$f$\n\n has an extremum (a minimum) at\n\n$c$\n\n, contradicting the hypothesis that its only extremum is at\n\n$a$\n\n.", "answer_owner": "Andreas Blass", "is_accepted": false, "score": 3, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4997991, "title": "Spivak Chapter 7, Problem 19b: Continuity", "question_text": "Here is the question from Spivak.\n\nSuppose\n\n$00$\n\n, show that\n\n$\\int_{-a}^{a} f(x) \\, \\mathrm dx = \\int_{0}^{a} [f(-x)+f(x)] \\, \\mathrm dx$\n\n and hence, evaluate\n\n$\\int_{-1}^{1} \\ln(x+\\sqrt{1+x^2}) \\, \\mathrm dx$\n\n.\n\nI just follow the definite integrateion rules and tried the following.\n\nWhy is there a negative sign (\n\n$-$\n\n) in front of\n\n$f(-x)$\n\n?", "question_owner": "Casper LI", "question_link": "https://math.stackexchange.com/questions/788746/show-that-int-aa-fx-dx-int-0a-f-xfx-dx", "answer": { "answer_id": 789293, "answer_text": "you are using $$\\int_a^bf(x)dx = - \\int_b^af(x)dx$$", "answer_owner": "userX", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5104036, "title": "Proof and applicability of an integral formula $\\int_{0}^{\\infty}\\int_{0}^{\\infty}\\sin(xy)xf(x)\\,\\mathrm dx\\,\\mathrm dy$.", "question_text": "I have a question regarding the formula\n\n$$\\int_{0}^{\\infty}\\int_{0}^{\\infty}\\sin(xy)xf(x)\\,\\mathrm dx\\,\\mathrm dy=\\int_{0}^{\\infty}f(x)\\,\\mathrm dx.$$\n\nI derived it this by moving integrals inside each other (even when the requirements for Fubini’s Theorem are not met), assuming the inverse of the forward Fourier transform of a function is equal to that function, and assuming that\n\n$$\\int_{0}^{\\infty}\\frac{\\sin\\left(ax\\right)}{x}\\,\\mathrm dx = \\frac{\\pi}{2}, \\forall a.$$\n\nThe formula seems to work for MOST functions I tested, 
like\n\n$e^x$\n\n,\n\n$e^{-x^2}$\n\n,\n\n$\\frac{1}{x^2+1}$\n\n, et cetera. However, it doesn’t work for some functions even though they have a convergent area above the positive real axis, for example,\n\n$f(x)=\\frac{\\sin x}{x}$\n\n.\n\nMy question is, what is the rigorous proof for this formula and to what functions does it apply?", "question_owner": "d ds", "question_link": "https://math.stackexchange.com/questions/5104036/proof-and-applicability-of-an-integral-formula-int-0-infty-int-0-inft", "answer": { "answer_id": 5104067, "answer_text": "1. Change the order of integration:\n\n$\\int_{0}^{\\infty}xf(x) \\int_{0}^{\\infty}\\sin(xy)\\,\\mathrm dy\\,\\mathrm dx$\n\n2. The inner integral is\n\n$\\int_{0}^{\\infty}\\sin(xy)\\,\\mathrm dy = - \\frac{1}{x}\\cos(xy)\\big|_{0}^{\\infty}=\\lim_{k\\to \\infty} \\frac{1-\\cos(kx)}{x}$\n\n3. The Cauchy principal value of the inner integral is\n\n$\\frac{1}{x}$\n\n4. We get\n\n$\\int_{0}^{\\infty}f(x)\\,\\mathrm dx$", "answer_owner": "becouse", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5103859, "title": "Bessel differential equation?", "question_text": "The Bessel functions of the first kind\n\n$J_n(x)$\n\n are defined as the solutions to the Bessel differential equation:\n\n$$ x^2y''(x)+ xy'(x)+(x^2-n^2)y(x)=0.$$\n\nThe special case of\n\n$n=0$\n\n gives\n\n$J_0(x)$\n\n as the series\n\n$$J_0(x)=\\displaystyle\\sum_{k=0}^{\\infty}(-1)^k \\dfrac{x^{2k}}{k!^2\\, 2^{2k}}.$$\n\nNow, I am looking for the solution of the following equation:\n\n$$y''(x)+ \\frac{n}{x}y'(x)+ y(x)=0, \\quad \\forall x\\neq 0, \\tag{$\\ast$}$$\n\nwhere\n\n$n\\in \\mathbb N$\n\n. Any reference on special functions where I can find the solution of (\n\n$\\ast$\n\n) would be appreciated.", "question_owner": "Z. 
Alfata", "question_link": "https://math.stackexchange.com/questions/5103859/bessel-differential-equation", "answer": { "answer_id": 5103981, "answer_text": "We have the following equation:\n\n\\begin{align*}\n\n(*) \\quad y''(x)+ \\frac{n}{x}y'(x)+ y(x)=0, \\quad \\forall x\\neq 0,\\, n\\in \\mathbb N.\n\n\\end{align*}\n\nFollowing the hint of @Claude Leibovici, let\n\n$y(x) = x^{\\frac{1-n}{2}} z(x)$\n\n. Then\n\n\\begin{align*}\n\ny'(x) &= \\frac{1-n}{2} x^{\\frac{-1-n}{2}} z(x) + x^{\\frac{1-n}{2}} z'(x),\\\\[4pt]\n\ny''(x) &= \\frac{(1-n)(-1-n)}{4} x^{\\frac{-3-n}{2}} z(x) + (1-n)x^{\\frac{-1-n}{2}} z'(x) + x^{\\frac{1-n}{2}} z''(x).\n\n\\end{align*}\n\nSubstituting into equation\n\n$(*)$\n\n, we get\n\n\\begin{align*}\n\n\\left[\\frac{(1-n)(-1-n)}{4} x^{\\frac{-3-n}{2}} z+(1-n)x^{\\frac{-1-n}{2}}z'+x^{\\frac{1-n}{2}}z''\\right] +\\frac{n}{x}\\left[\\frac{1-n}{2}x^{\\frac{-1-n}{2}}z+x^{\\frac{1-n}{2}}z'\\right]+x^{\\frac{1-n}{2}}z=0.\n\n\\end{align*}\n\nSimplifying and factoring\n\n$x^{\\frac{-3-n}{2}}$\n\n, we obtain:\n\n\\begin{align*}\n\nx^{\\frac{-3-n}{2}}\\Big[z'' x^2 +z'\\,x(1-n+n) +z\\Big(\\frac{(1-n)(-1-n)}{4}+\\frac{n(1-n)}{2}+ x^2\\Big)\\Big] = 0.\n\n\\end{align*}\n\nSince\n\n$x \\neq 0$\n\n, dividing by\n\n$x^{\\frac{-3-n}{2}}$\n\n gives:\n\n\\begin{align*}\n\nx^2 z''(x) + x z'(x) + \\Big(x^2 - \\frac{(n-1)^2}{4}\\Big)z(x) = 0.\n\n\\end{align*}\n\nThis is exactly the Bessel differential equation of order\n\n$\\nu = \\frac{n-1}{2}$\n\n.\n\nHence, a solution is\n\n\\begin{align*}\n\nz(x) = C J_{\\frac{n-1}{2}}(x),\n\n\\end{align*}\n\nwhere\n\n$J_\\nu$\n\n is the Bessel function of the first kind. But\n\n$y(x) = x^{\\frac{1-n}{2}} z(x)$\n\n, then\n\n\\begin{align*}\n\ny(x) = C x^{\\frac{1-n}{2}} J_{\\frac{n-1}{2}}(x).\n\n\\end{align*}\n\nIs this correct?", "answer_owner": "Z. 
Alfata", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 4768283, "title": "$\\dfrac{\\mathrm{d}^n}{\\mathrm{d}x^n}\\dfrac{e^{ax}}{\\ln(cx)}$ and summation with Stirling number of the first kind", "question_text": "I would like to calculate the\n\n$n$\n\n-th derivative of\n\n$\\dfrac{e^{ax}}{\\ln(cx)}$\n\nI tried to calculate it in this way:\n\n$$(fg)^{(n)}(x)=\\sum_{k=0}^n\\binom nkf^{(k)}(x)g^{(n-k)}(x)$$\n\n$$\\frac{\\mathrm d^n}{\\mathrm dx^n}\\frac1{\\ln(cx)}=\\frac1{(-x)^n\\ln(cx)^{n+1}}\\sum_{k=0}^nk!{n\\brack k}\\ln(cx)^{n-k}$$\n\nAnd\n\n$$\\frac{\\mathrm d^n}{\\mathrm dx^n}e^{ax}=a^n e^{ax}$$\n\nSo\n\n$$\\frac{\\mathrm d^n}{\\mathrm dx^n}\\frac{e^{ax}}{\\ln(cx)}=\\frac{e^{ax}}{\\ln\\left(cx\\right)}\\sum_{k=0}^n\\binom nk\\ \\frac{a^{n-k}}{(-x)^k}\\sum_{j=0}^kj! {k\\brack j}\\ln\\left(cx\\right)^{-j}$$\n\nIs there a way to eliminate one of the summations?", "question_owner": "Math Attack", "question_link": "https://math.stackexchange.com/questions/4768283/dfrac-mathrmdn-mathrmdxn-dfraceax-lncx-and-summation-with", "answer": { "answer_id": null, "answer_text": "(No answers)", "answer_owner": null, "is_accepted": false, "score": 0, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" }, { "question_id": 5103780, "title": "Alternate ways to do the last step $\\int_{0}^{1}\\alpha(1-\\alpha)\\csc(\\pi\\alpha)d\\alpha$", "question_text": "So recently I was given the integral\n\n$$\n\nI=\\int_{0}^{1}\\frac{x-\\frac{1}{2}}{\\log\\left(\\frac{x}{1-x}\\right)}dx\n\n$$\n\nNaturally, with the logarithm term in the denominator, I tried Feynman's trick which led me to defining\n\n$$\n\nI(\\alpha)=-\\frac12\\int_{0}^{1}\\frac{x}{\\log\\left(\\frac{1}{x}-1\\right)}\\left(1-\\left(\\frac{1}{x}-1\\right)^\\alpha\\right)dx\n\n$$\n\nClearly\n\n$I(1)=I$\n\n and\n\n$I(0)=0$\n\n. 
Taking the derivative of this w.r.t.\n\n$\\alpha$\n\n gave me\n\n$$\n\nI'(\\alpha)=\\frac12\\int_{0}^{1}x\\left(\\frac{1}{x}-1\\right)^\\alpha dx=\\frac12\\int_{0}^{1}x^{1-\\alpha}(1-x)^{\\alpha}dx=\\frac12\\mathrm{B}(2-\\alpha,1+\\alpha)\n\n$$\n\nUsing the relation between the Beta function and the Gamma function, along with Euler's reflection formula and simple properties of\n\n$\\Gamma$\n\n gets me the following:\n\n$$\n\n\\mathrm{B}(2-\\alpha,1+\\alpha)=\\frac{\\Gamma(2-\\alpha)\\Gamma(1+\\alpha)}{\\Gamma(3)}=\\frac12\\alpha(1-\\alpha)\\Gamma(\\alpha)\\Gamma(1-\\alpha)=\\frac\\pi 2 \\alpha(1-\\alpha)\\csc(\\pi \\alpha)\n\n$$\n\nSo all that remains is the integral\n\n$$\\int_{0}^{1}\\alpha(1-\\alpha)\\csc(\\pi\\alpha)d\\alpha$$\n\nI was able to evaluate this by IBP with\n\n$\\csc$\n\n as\n\n$u$\n\n. This was nice for me because I had developed formulas for the integral\n\n$$\n\n\\int x^n\\cot^m(x)dx\n\n$$\n\nlast year when playing around with this type. However, even so, the last integral was tedious and annoying to do. I was wondering if anyone could suggest a simpler way?", "question_owner": "aaron", "question_link": "https://math.stackexchange.com/questions/5103780/alternate-ways-to-do-the-last-step-int-01-alpha1-alpha-csc-pi-alphad", "answer": { "answer_id": 5103783, "answer_text": "Solution below, verified with Mathematica\n\n$$\\int_{0}^{1} \\alpha (1- \\alpha) \\csc (\\pi \\alpha) \\, d\\alpha = \\frac{7 \\zeta(3)}{\\pi^3}$$\n\nKinda reversing the steps... 
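The claimed value is easy to check numerically before following the derivation; the integrand extends continuously to both endpoints (value $1/\pi$), so a plain midpoint rule converges fast (a sketch, not part of the original answer):

```python
import math

# Integral of a*(1-a)*csc(pi*a) over [0, 1]; the integrand tends to 1/pi
# at both endpoints, so the midpoint rule needs no special endpoint handling.
def integrand(a):
    return a * (1.0 - a) / math.sin(math.pi * a)

n = 20000
h = 1.0 / n
val = sum(integrand((k + 0.5) * h) for k in range(n)) * h

zeta3 = sum(1.0 / m**3 for m in range(1, 2000))  # zeta(3) ~ 1.2020569...
expected = 7 * zeta3 / math.pi**3                # ~ 0.271377
assert abs(val - expected) < 1e-6
```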
Using the properties of the Gamma function (Gamma reflection and\n\n$\\Gamma(a+1) = a \\Gamma(a)$\n\n), the integral becomes\n\n\\begin{align}\n\nI &= \\frac{1}{\\pi} \\int_{0}^{1} \\Gamma(1+\\alpha) \\Gamma(2-\\alpha) d\\alpha \\\\\n\n&= \\frac{1}{\\pi} \\int_{0}^{1} B(1+\\alpha, 2-\\alpha) \\Gamma(3) d\\alpha \\\\\n\n&= \\frac{2}{\\pi} \\int_{0}^{1} B(1+\\alpha, 2-\\alpha) d\\alpha\\\\\n\n\\end{align}\n\nUsing the integral definition of the beta function, we must have\n\n\\begin{align} I &= \\frac{2}{\\pi} \\int_{0}^{1} \\left[ \\int_{0}^{1} t^{(1+\\alpha)-1} (1-t)^{(2-\\alpha)-1} dt \\right] d\\alpha \\\\\n\n&= \\frac{2}{\\pi} \\int_{0}^{1} (1-t) \\left[ \\int_{0}^{1} \\left(\\frac{t}{1-t}\\right)^{\\alpha} d\\alpha \\right] dt\n\n\\end{align}\n\nwhere I have swapped the order of integration and pulled some terms out. The inner integral can be solved, and we have\n\n$$I = \\frac{2}{\\pi} \\int_{0}^{1} (1-t) \\left[ \\frac{\\frac{2t-1}{1-t}}{\\ln(t) - \\ln(1-t)} \\right] dt = \\frac{2}{\\pi} \\int_{0}^{1} \\frac{2t-1}{\\ln(t) - \\ln(1-t)} dt$$\n\nPerforming the substitution\n\n$2t-1 = x$\n\n and using the fact that the integrand is even, we obtain\n\n$$I = \\frac{2}{\\pi} \\int_{-1}^{1} \\frac{x}{2 \\text{ artanh}(x)} \\left( \\frac{1}{2} dx \\right) = \\frac{2}{\\pi} \\frac{1}{4} \\int_{-1}^{1} \\frac{x}{\\text{artanh}(x)} dx$$\n\nPerforming the substitution\n\n$x = \\tanh (u)$\n\n and then integration by parts, we see that\n\n$$I = \\frac{2}{\\pi} \\frac{1}{4} \\int_{0}^{\\infty} \\frac{\\tanh^2(u)}{u^2} du = \\frac{2}{\\pi} \\frac{1}{4} \\int_{0}^{\\infty} \\left( \\frac{\\tanh(u)}{u} \\right)^2 du$$\n\nIndeed,\n\nthis\n\n is a well known integral that evaluates to\n\n$\\frac{14 \\zeta(3)}{\\pi^2}$\n\n, so\n\n$$I = \\frac{2}{\\pi} \\left( \\frac{7 \\zeta(3)}{2\\pi^2} \\right) = \\frac{7 \\zeta(3)}{\\pi^3}$$", "answer_owner": "Maxime Jaccon", "is_accepted": false, "score": 1, "answer_link": null }, "source_license": "CC BY-SA (Stack Exchange content)" } ]
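Combining the two posts (a numerical sketch, not part of either; the constant $\frac{7\zeta(3)}{4\pi^2}$ comes from tracking the Feynman-trick factors so that $I=\frac{\pi}{4}\int_0^1\alpha(1-\alpha)\csc(\pi\alpha)\,d\alpha$ — an assumption of this aside, not stated explicitly in the original): the original integral should evaluate to $\frac{7\zeta(3)}{4\pi^2}\approx 0.2131$, and a midpoint rule agrees:

```python
import math

# I = integral over (0, 1) of (x - 1/2)/ln(x/(1-x)); the integrand extends
# continuously with value 1/4 at x = 1/2 and 0 at both endpoints.
def integrand(x):
    return (x - 0.5) / math.log(x / (1.0 - x))

n = 20000
h = 1.0 / n
val = sum(integrand((k + 0.5) * h) for k in range(n)) * h

zeta3 = sum(1.0 / m**3 for m in range(1, 2000))
expected = 7 * zeta3 / (4 * math.pi**2)   # ~ 0.213139, assuming the pi/4 factor above
assert abs(val - expected) < 1e-4
```

With $n$ even the midpoint grid never lands exactly on $x=1/2$, so the removable singularity of the formula causes no numerical trouble.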