Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
Topological properties vs homeomorphisms
I'm studying general topology and a question has come to my mind. We have defined a topological property to be a property which a (viz. any) topological space can satisfy or not satisfy, and such that, if satisfied by a space, it is also satisfied by every space homeomorphic to it. I can see that the ambiguity of this definition lies in its failure to specify the language in which the properties are expressed. Anyway, I was wondering if, in some appropriate language, topological properties are enough to capture the notion of homeomorphism. More precisely, is it true that, if two spaces satisfy the same topological properties written in an appropriate (formal) language, then they are homeomorphic? Feel free to make assumptions on the language of the properties. Disclaimer: If we think about properties in the most general and informal sense, then the answer is yes. Indeed, given a topological space X, "being homeomorphic to X" is a topological property. As a result, given another space Y havin
is it true that, if two spaces satisfy the same topological properties written in an appropriate (formal) language, then they are homeomorphic? Since first-order theories with infinite models always have elementarily equivalent but non-isomorphic models, one has to look for something else. Then: if you're willing to allow for one infinitary operation, the theory of complete Heyting algebras is a good first approximation: every topological space provides a model, and there are ways to express many properties one can regard as 'topological', though not all models 'are' (or rather, 'are provided by') spaces, and there are some issues involving separation properties, so that some (non-Hausdorff) non-homeomorphic spaces end up being 'elementarily equivalent'. Another possible approach is via a relational theory of ultrafilter convergence, which presumably can be developed in two-sorted FOL.
|general-topology|geometry|logic|model-theory|
1
What does a parametric equation mean?
I am following the last module of Differential Calculus on Khan Academy, which deals with parametric equations. Here are the parametric equations described in the lecture. $x(t) = 5t + 10$ $y(t) = 50 - 5t^2/2$ However, I really don't understand what parametric equations really mean. How do they differ from normal equations? According to Wikipedia: "In mathematics, a parametric equation defines a group of quantities as functions of one or more independent variables called parameters." I really don't understand what this definition is trying to convey. From what I observed, if two functions share a variable, it typically gets defined as a parametric equation. But that seems to be a loose definition. Regarding my prerequisite knowledge, I have a Masters in Engineering. Therefore I understand the formulae of calculus quite well. I just never bothered to understand some of the underlying concepts. Therefore, I am revisiting it through Khan Academy.
Sometimes it may be interesting to use physics in math. You can treat $t$ as time (which we call the parameter; it is a variable). Then $x(t)$ and $y(t)$ tell you the $x$- and $y$-coordinates of a moving object at time $t$ . For example, in the no-air-resistance case, a projectile motion with inclination angle $45^\circ$ can be described as $$x(t)=t,\quad y(t)=t-\dfrac{1}{2}gt^2$$ ( $g$ is some constant). This means at time $t=1$ the object is at $\left(1,1-\dfrac{g}{2}\right)$ ; as $t$ varies, you can imagine there is a parabolic curve (which in physics is called a trajectory) which is exactly the path the object traces. Now in the mathematics case, usually when we face some function curves, it may be tough to describe their changes in $x,y$ , or area, and so on. But as in the above example, we can express the curve by introducing a 'time'-like variable, which we call a parameter, so that we can describe the $x$ , $y$ coordinates in terms of this parameter. (Of course higher dimensional can be g
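As a minimal sketch (assuming a value for the constant $g$, which the answer leaves unspecified), the single parameter $t$ drives both coordinates at once:

```python
import math

# Projectile from the example above: x(t) = t, y(t) = t - g*t^2/2.
g = 9.8  # assumed value; the answer only says "g is some constant"

def position(t):
    """Coordinates (x(t), y(t)) of the moving object at time t."""
    return (t, t - 0.5 * g * t ** 2)

# At t = 1 the object is at (1, 1 - g/2), exactly as stated in the answer.
x, y = position(1.0)
print(x, y)
```

Varying $t$ and collecting the points traces out the trajectory, even though no single equation $y = f(x)$ was ever written down.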
|calculus|
0
Rational Parameterization of Quartic Curve (Variety)
I am trying to find a rational parameterization of the curve (variety) $$x^4+a^2x^2=x^2y^2+(h^2+a^2)y^2$$ If I have done my math correctly, this curve has a singularity of multiplicity 2 at the origin, and the projective extension $$x^4+a^2x^2z^2=x^2y^2+(h^2+a^2)y^2z^2$$ has an isolated singularity at $(0:1:0)$ . So the geometric genus should be $$\frac{(4-1)(4-2)}{2}-2-1=0$$ so the curve should have a rational parameterization. I have found a (non-rational) parameterization by parameterizing the associated equation (a hyperbola) found by substituting $$u=x^2$$ $$v=y^2$$ which is: $$(x,y)=\left(\sqrt{-\frac{a^2-(a^2+h^2)t}{1-t}},\ \sqrt{-t\,\frac{a^2-(a^2+h^2)t}{1-t}}\right)$$ but I have not been able to find a substitution that will eliminate the radicals and give me a rational parameterization.
One thing we can do is reduce the number of radicals we have to solve for. If you say $z = \frac{y}{x}$ then $y = xz$ and, after dividing out $x^2$ , our equation becomes $x^2 + a^2 = z^2(x^2 + h^2 + a^2)$ . This is two squares equaling three squares $(X_1^2 + X_2^2 = Y_1^2 + Y_2^2 + Y_3^2)$ . One option for parameterizing this is found here: In ℕ⁺, can the sum of three squares equal the sum of two squares? but that uses a lot of quadratic terms, which would make it difficult to harmonize $zX_1 = Y_1$ and $zX_2 = Y_3$ . Fortunately, I found a parameterization of $x_1^2 + x_2^2 + x_3^2 - y_1^2 - y_2^2 - y_3^2 = a$ (see https://artofproblemsolving.com/community/c3046h1150063 ) which makes it easy to set $a = x_3 = 0$ , leaving us with the parameterization: $x_1^2 + x_2^2 - y_1^2 - y_2^2 - y_3^2 = 0$ is parameterized by: $u = \frac{y_3^2}{4}$ $v = \frac{b}{2}$ $(x_1,x_2,y_1,y_2,y_3) = (u + v + \frac{1}{2},u - v + \frac{1}{2},-u + v + \frac{1}{2},u + v - \frac{1}{2},y_3)$ Now we need to solve $x = x_1$
|algebraic-geometry|algebraic-curves|rational-functions|quartics|
0
What's the difference between zig-zags and helices?
I've been reading through the Polytope-Wiki entry on helices . To my understanding, an $n$ -gonal helix is a blend of a planar $n$ -gon $\{n\}$ with the regular linear apeirogon $\{\infty\}$ . The blend of the apeirogon with a line segment produces the only two dimensional "helix", the zig-zag. Finally, $\{\infty\}$ is a blend of itself with a point and so is a one-dimensional helix. All of these polygons have the same graph-structure. Every vertex joins two edges and every edge meets two vertices. There are no cycles, so their Hasse diagrams are isomorphic. This means that all the helices are isomorphic as abstract polytopes. As far as I can make out, their symmetry groups are isomorphic. Every symmetry of $\{\infty\}$ corresponds to a symmetry of the $n$ -gonal helix and vice versa. So how do these shapes differ? All the helices are isomorphic as abstract polygons and they've got isomorphic symmetry groups. The only way I can see to distinguish them is via "undoing" the blending. The
I think the short answer to the question "Can it really be that the same abstract polytope can have such qualitatively different realisations?" is "yes". Even a square (as an abstract graph) has some qualitatively quite different embeddings in the plane, and many more in space.
|solution-verification|polygons|polytopes|
0
Decaying inequality in expectation implies almost sure convergence to zero?
Is this claim true? The following is my attempt at the proof. I am unsure about the proof because I did not have to use the fact that $X_n\geq 0, \forall n\in \mathbb{N}$ . Any feedback for confirmation will be greatly appreciated. Claim: Suppose the sequence of nonnegative random variables $X_n\geq 0$ satisfies $E[X_{n+1}]\leq E[X_n]r$ , $\forall n\in \mathbb{N}$ , where $0<r<1$ . Then $X_n$ converges to zero almost surely. Proof: The given inequality implies $E[X_n]\leq E[X_0]r^n$ . For any $\epsilon>0$ , Markov's inequality yields $P(\{X_n\geq \epsilon\})\leq \cfrac{E[X_n]}{\epsilon}\leq \cfrac{E[X_0]r^n}{\epsilon}$ . Thus, $\sum_{n=0}^\infty P(\{X_n\geq \epsilon\})\leq \sum_{n=0}^\infty \cfrac{E[X_0]r^n}{\epsilon}=\cfrac{E[X_0]}{\epsilon}\cdot\cfrac{1}{1-r}<\infty$ . Therefore, by the Borel–Cantelli lemma, $X_n$ converges to zero almost surely.
Assuming finiteness of $E[X_{1}]$ , as mentioned in the comments: since $E[X_n]\leq E[X_1]r^{n-1}$ , we get $\lim_{n}E[X_{n}]=0$ . By Fatou's lemma, for $X:=\liminf_{n}X_{n}\geq 0$ we get $$E[X]\leq \liminf_{n}E[X_{n}]=0.$$ So we get $X=0$ . Your approach also works, because it shows that for every $\epsilon>0$ and almost every $\omega$ there exists $N(\omega)$ so that $$0\leq X_n(\omega)\leq \epsilon \quad\text{for all } n\geq N(\omega).$$
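A quick simulation consistent with the claim (my own construction, not from the post): take $X_n = r^n E_n$ with $E_n$ i.i.d. exponential, so that $E[X_{n+1}] = r\,E[X_n]$ holds and the claim predicts $X_n \to 0$ almost surely.

```python
import random

random.seed(0)
r = 0.5

def sample_path(length):
    # X_n = r^n * E_n with E_n ~ Exp(1), so E[X_{n+1}] = r * E[X_n]
    return [r ** n * random.expovariate(1.0) for n in range(length)]

paths = [sample_path(100) for _ in range(1000)]

# Every simulated path has a tiny tail, consistent with a.s. convergence to 0.
tail_max = max(max(path[50:]) for path in paths)
print(tail_max)  # tiny: bounded by r**50 times the largest E_n drawn
```

This only illustrates the statement for one convenient family of random variables; it is not a proof.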
|probability|sequences-and-series|probability-theory|solution-verification|stochastic-processes|
1
Proving a useful identity with the Dirac Delta function
I am studying QFT and the renormalization of QED, and a passage exploits the following identity: $$\frac{1}{ABC}=2\int_{0}^{1} dx\,dy\,dz\,\delta(x+y+z-1)\frac{1}{(Ax+By+Cz)^{3}}$$ for every $A,B,C\in\mathbb{C}$ . Does someone know how to prove it? I know it should just be a matter of computing the integral, but I'm struggling with the fact that I have a finite domain. Should I use the step function?
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{{\displaystyle #1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\sr}[2]{\,\,\,\stackrel{{#1}}{{#2}}\,\,\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ Hereafter, $\bracks{\cdots}$ is an $Iverson\ Bracket$ . \begin{align} & \color{#44f}{\left. 2\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}{\delta\pars{x + y + z - 1} \over \pars{Ax + By + Cz}^{\,3}}\dd x\,\dd y\,\dd z\,\right\vert_{\ A, B, C\ \in\ \mathbb{C}\setminus\braces{0}}} \\
|integration|dirac-delta|
1
Maxima and minima of product
Determine the maxima and minima of $(x-a)^{\frac{1}{3}}(2x-a)^{\frac{2}{3}}$ where $a>0$ . Computing the first derivative, we get the points $x=a,\frac{a}{2}$ where the first derivative is undefined, and a critical point $x=\frac{5}{6}a$ where it is defined and zero. But the calculation of the second derivative seemed to me a daunting task, so instead of differentiating again, I tried to look for the intervals where $f(x)$ is increasing or decreasing. But doing that I found that $f'(x)$ is positive in both the left and right neighbourhoods of $a$ , meaning it is neither a maximum nor a minimum. But since $a$ is a critical point, it should give a sign change. Now I am unsure of the approach. It will be very helpful if light is shed on it.
The values of $x$ which you get after solving the first derivative are $\frac{a}{2},\frac{5a}{6}$ , for which you're supposed to get a maximum and a minimum. If we take $$y=\sqrt[3]{(x-a)(2x-a)^2}$$ Regarding critical points: we notice that $y$ is $0$ at $\frac{a}{2}$ and $a$ . We can see that the factor $(2x-a)$ has an even power. It means that the sign of $y$ will not change on either side of $\frac{a}{2}$ until $x$ reaches $a$ on its right, although it's a critical point. Reason for this: let $x_0$ be any point before $a$ . Then if $$y=\sqrt[3]{(x_0-a)(2x_0-a)^2}$$ then $y$ is always negative, because $(2x_0-a)^2$ is always positive, irrespective of whether $x_0>\frac{a}{2}$ or $x_0<\frac{a}{2}$ . It will only become positive when $x_0$ becomes more than $a$ . As $\frac{a}{2}$ is a root, $y=0$ when $$x=\frac{a}{2}$$ It's a constantly increasing function after $x$ exceeds the local minimum. So there exists a local maximum at $x=\frac{a}{2}$ where $y=0$ and a local minimum at $x=\frac{5a}{6}$ where $y<0$ (because $\frac{a}{2}<\frac{5a}{6}<a$ ). The
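A numeric spot-check of this sign analysis (a hedged sketch with the arbitrary choice $a = 1$; the helper `cbrt` takes the real cube root, so negative arguments behave as in the answer):

```python
import math

a = 1.0  # arbitrary choice for illustration

def cbrt(u):
    # real cube root, valid for negative arguments too
    return math.copysign(abs(u) ** (1 / 3), u)

def f(x):
    return cbrt((x - a) * (2 * x - a) ** 2)

# f vanishes at a/2 and a, is negative in between, and 5a/6 is a local minimum:
print(f(a / 2), f(5 * a / 6), f(a))
print(f(5 * a / 6) < f(5 * a / 6 - 0.01),
      f(5 * a / 6) < f(5 * a / 6 + 0.01))  # True True
```

The value at the interior critical point is $f(5a/6)=\sqrt[3]{-2/27}\,a=-\sqrt[3]{2}\,a/3$, which the asserts below confirm for $a=1$.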
|calculus|functions|derivatives|maxima-minima|
1
Prove that the range of $f(x)$ is all of $\mathbb{R}$ if $\left | f(x) - f(y) \right | \geq \left |x - y\right |$
Let $f : \mathbb{R}\rightarrow \mathbb{R}$ be a continuous function such that $\left | f(x) - f(y) \right | \geq \left |x - y\right |$ for all $x, y \in \mathbb{R}$ . Prove that the range of $f(x)$ is all of $\mathbb{R}$ . I found a solution here which goes like: If $f(x)=f(y)$ then $0\geq |x-y|$ , so $x=y$ , thus $f$ is injective, therefore monotone (being injective and continuous). Since $|f(x) - f(0)| \geq |x|$ , it means $f$ is both lower and upper unbounded, thus its range is the whole $\mathbb{R}$ (and moreover, $f$ is a bijection). I understood that $f$ is upper unbounded, as $|f(x)|\geq x$ for positive reals, but how did they get $f$ to be lower unbounded?
Let's say $f$ is lower bounded; WLOG take $f$ to be increasing. Then $\displaystyle\lim_{x\to-\infty}f(x)$ exists, say $L$ (an increasing function bounded below has a finite limit at $-\infty$ ). Equivalently, for any $\epsilon>0$ there exists $\delta$ such that $x<\delta$ implies $L\leq f(x)<L+\epsilon$ . Take $\epsilon=\dfrac{1}{2}$ and pick $a<\delta-1$ . Then both $f(a)$ and $f(a+1)$ lie in $\left[L,L+\dfrac{1}{2}\right)$ , so $|f(a+1)-f(a)|<\dfrac{1}{2}$ , while the hypothesis gives $|f(a+1)-f(a)|\ge|a+1-a|=1$ . Clearly we get a contradiction, so $f$ is not lower bounded. The upper bound case can be done similarly.
|real-analysis|
0
In ℕ⁺, can the sum of three squares equal the sum of two squares?
Are there any examples where: $a^2 + b^2 + c^2 = p^2 + q^2\qquad {a, b, c, p, q \in \mathbb{N}^{+}}\tag{1}$ If not, can $(1)$ be disproven?
I made another parameterization which is a little easier to manipulate (it has more linear variables). I found a parameterization of $x_1^2 + x_2^2 + x_3^2 - y_1^2 - y_2^2 - y_3^2 = a$ (see https://artofproblemsolving.com/community/c3046h1150063 ), which makes it easy to set $a = x_3 = 0$ , leaving us with a parameterization of $x_1^2 + x_2^2 - y_1^2 - y_2^2 - y_3^2 = 0$ . This is parameterized by: $u = \frac{y_3^2}{4}$ $v = \frac{b}{2}$ $(x_1,x_2,y_1,y_2,y_3) = (u + v + \frac{1}{2},u - v + \frac{1}{2},-u + v + \frac{1}{2},u + v - \frac{1}{2},y_3)$
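As a quick check (my own sketch; $b$ and $y_3$ are the free parameters of the quoted parameterization): writing $x_1^2 - y_2^2 = (x_1-y_2)(x_1+y_2)$ and $x_2^2 - y_1^2 = (x_2-y_1)(x_2+y_1)$, the quadratic terms cancel and the whole left-hand side collapses to $4u - y_3^2 = 0$ for every choice of parameters.

```python
def param(b, y3):
    # Parameterization quoted above: u = y3^2/4, v = b/2
    u = y3 ** 2 / 4
    v = b / 2
    return (u + v + 0.5, u - v + 0.5, -u + v + 0.5, u + v - 0.5, y3)

def residual(b, y3):
    x1, x2, y1, y2, y3 = param(b, y3)
    return x1 ** 2 + x2 ** 2 - y1 ** 2 - y2 ** 2 - y3 ** 2

# The residual vanishes for every choice of the free parameters:
for b in (-3, 0, 2, 7):
    for y3 in (-2, 1, 5):
        assert abs(residual(b, y3)) < 1e-9
print("identity holds")
```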
|diophantine-equations|square-numbers|sums-of-squares|
0
Prove that the range of $f(x)$ is all of $\mathbb{R}$ if $\left | f(x) - f(y) \right | \geq \left |x - y\right |$
Let $f : \mathbb{R}\rightarrow \mathbb{R}$ be a continuous function such that $\left | f(x) - f(y) \right | \geq \left |x - y\right |$ for all $x, y \in \mathbb{R}$ . Prove that the range of $f(x)$ is all of $\mathbb{R}$ . I found a solution here which goes like: If $f(x)=f(y)$ then $0\geq |x-y|$ , so $x=y$ , thus $f$ is injective, therefore monotone (being injective and continuous). Since $|f(x) - f(0)| \geq |x|$ , it means $f$ is both lower and upper unbounded, thus its range is the whole $\mathbb{R}$ (and moreover, $f$ is a bijection). I understood that $f$ is upper unbounded, as $|f(x)|\geq x$ for positive reals, but how did they get $f$ to be lower unbounded?
We already know $f$ is monotone and injective, assume WLOG it is strictly increasing. As $f$ is continuous and $\mathbb{R}$ is connected, its range is connected and thus is an interval. If this interval was bounded above, using that $f$ is increasing we would get $$\lim_{x\to+\infty} f(x)= c$$ for some $c\in\mathbb{R}$ (because the limit of an increasing function is either finite or $+\infty$ ). But this would imply that for some $x_0\in\mathbb{R}$ we would have $$|f(x)-c| Thus, $|f(x)-f(y)|\leq 2$ for $x,y\geq x_0$ , so taking $x,y\geq x_0$ such that $|x-y|>2$ we reach a contradiction with the condition satisfied by $f$ . The fact that this interval is not bounded below follows by a symmetric argument. As the only interval not bounded below nor above is $\mathbb{R}$ , the result is proved.
|real-analysis|
0
Three consecutive sums of two squares
$0, 1, 2$ is an example of three consecutive non-negative integers $n, n+1, n+2$ which are each the sum of two integer squares. Using modular arithmetic you can prove that in all of these triplets $n \equiv 0 \pmod 8$ . Now I'm wondering, is there a way to find all such triplets? Below are two ways I've found which generate infinitely many triplets, but not all: Because $(a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2$, you can take any triplet (except $0, 1, 2$, which will give itself) and generate a new triplet. If $n, n+1, n+2$ is the chosen triplet then the following is also a triplet: $$n(n+2) = n^2 + 2n$$ $$(n+1)^2 + 0^2 = n^2 + 2n + 1$$ $$(n+1)^2 + 1^2 = n^2 + 2n + 2$$ Assume that there exists a triplet of the following form (letting $b^2 = 2a^2 + 1$): $$a^2 + a^2 = 2a^2$$ $$2a^2 + 1 = b^2 + 0^2$$ $$2a^2 + 2 = b^2 + 1$$ Solving $b^2 = 2a^2 + 1$ using the convergents for $\sqrt 2$ you'll find that every second convergent ($3/2$, $17/12$, $99/70$, etc.) will be in the form $b/a$. For e
Well, we know $a^2 + b^2 = n$ , so $a^2 + b^2 + 1 = c^2 + d^2$ . A parameterization for this can be found here: https://mathoverflow.net/questions/104621/seeking-an-integer-parameterization-for-a2b2-c2d21 (just scroll to the bottom), and then you just do this again after the substitutions: $c^2 + d^2 + 1 = g^2 + f^2$ . Just find a way to set the $c$ and $d$ from both equations equal to each other while remaining in the integer domain, and you should have a solid parameterization.
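A brute-force search (my own sketch, independent of the linked parameterization) reproduces the claims above for small $n$: the triplet starts are $0, 8, 16, 72, 80, \dots$ and every one is divisible by $8$.

```python
from math import isqrt

def is_sum_of_two_squares(n):
    # "integer squares" here includes 0, matching the example 0, 1, 2
    for a in range(isqrt(n) + 1):
        b2 = n - a * a
        r = isqrt(b2)
        if r * r == b2:
            return True
    return False

# Every n < 10000 starting a triplet n, n+1, n+2 of sums of two squares:
starts = [n for n in range(10000)
          if all(is_sum_of_two_squares(n + k) for k in range(3))]
print(starts[:5])  # [0, 8, 16, 72, 80]
assert all(n % 8 == 0 for n in starts)
```

For example $80 = 8^2+4^2$, $81 = 9^2+0^2$, $82 = 9^2+1^2$, a triplet not produced by either construction in the question.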
|number-theory|
0
Probability that the centroid of a triangle is inside its incircle
Question: The vertices of the triangle are uniformly distributed on the circumference of a circle. What is the probability that the centroid is inside the incircle? Simulations with $10^{10}$ trials give a value of $0.457982$ . It is interesting to note that this agrees with $\displaystyle \frac{G}{2}$ to six decimal places, where $G$ is Catalan's constant . Julia source code:

using Random

inside = 0
step = 10^7
target = step
count = 0

function rand_triangle()
    angles = sort(2π * rand(3))
    cos_angles = cos.(angles)
    sin_angles = sin.(angles)
    x_vertices = cos_angles
    y_vertices = sin_angles
    return x_vertices, y_vertices
end

function incenter(xv, yv)
    a = sqrt((xv[2] - xv[3])^2 + (yv[2] - yv[3])^2)
    b = sqrt((xv[1] - xv[3])^2 + (yv[1] - yv[3])^2)
    c = sqrt((xv[1] - xv[2])^2 + (yv[1] - yv[2])^2)
    s = (a + b + c) / 2
    incenter_x = (a * xv[1] + b * xv[2] + c * xv[3]) / (a + b + c)
    incenter_y = (a * yv[1] + b * yv[2] + c * yv[3]) / (a + b + c)
    incircle_radius = sqrt(s * (s - a) * (s - b) * (s - c))
This is not an answer, but I have a plot of what the probability space looks like. Fix the first vertex at the top. Choose two real values between $0$ and $1$ uniformly at random; call them $p_1$ and $p_2$ . The second vertex is $p_1$ of the way around the circle, and the third is $p_2$ of the way around, respectively. Plot the points on the coordinate plane, where $p_1$ is $x$ and $p_2$ is $y$ . The entire space of possibilities is the square from $(0,0)$ to $(1,1)$ . We plot the point only if the centroid is in the incircle. We get a figure like such ( $10^4$ steps sampled from each axis): Bonus: Here is the same plot but with the fixed point on the opposite side instead. I do not know where to proceed from here. I am not sure how to intuitively understand why six negative petals appear. Code for anyone interested:

import matplotlib.pyplot as plt
import numpy as np
import math

sqrt = math.sqrt
PI = math.pi
cos = math.cos
sin = math.sin

def dist(p1, p2):
    d1 = p1[0]-p2[0]
    d2 = p1[1]-p2[1]
    return sqrt(d1*d1 + d2*d2)

de
|probability|integration|geometry|triangles|geometric-probability|
0
Is "If ⊨□φ then ⊨φ" true in modal logic?
(Under propositional modal logic, system K.) There seems to be an obvious counterexample: a model with one world $w$ s.t. $\phi$ is not true at $w$ and $\lnot R(w,w)$ . But consider this argument: assuming $\vDash\square\phi$ , take an arbitrary model $M$ and construct a new model $M^*$ s.t. for any $w$ in $M$ , we add a world $x$ s.t. $R(x,w)$ . Since $M^*,x \vDash \square\phi$ , $M^*,w \vDash\phi$ . And since $\langle M,w\rangle$ and $\langle M^*,w\rangle$ are bisimilar, $M,w \vDash\phi$ . Because $M$ and $w$ were arbitrary, $\vDash\phi$ . I don't think this argument is right, because we don't know if $\langle M,w\rangle$ and $\langle M^*,w\rangle$ are bisimilar, since we don't know if adding an $x$ would change how $M^*, w$ satisfies sentential variables. Intuitively, this claim holds when $\phi$ is a modal tautology. But I'm not sure if that's always the case given that $\vDash\square\phi$ . I suspect it isn't. Update: The counterexample does not work because: since $\phi$ is any wff, we
That the rule is valid is trivial from the perspective of the sequent calculus for K obtained by extending g3cp with the distributive rule $\Gamma\Rightarrow\varphi$ / $\Gamma',\Box\Gamma\Rightarrow\Box\varphi,\Delta'$ where $\Box\Gamma$ is obtained from $\Gamma$ by replacing every formula $\psi$ in $\Gamma$ with $\Box\psi$ and $\Gamma'$ and $\Delta'$ are arbitrary formulas (for admissibility of thinning). Since this sequent calculus is cut-free, a derivation of $\Rightarrow\Box\varphi$ can only be obtained from a derivation of $\Rightarrow\varphi$ , so if the former is derivable, then the latter is derivable.
|logic|modal-logic|
0
An easier example of a non-PID where every finitely generated ideal is principal
Say that an integral domain $\mathcal{X}$ is an almost-PID if $\mathcal{X}$ is not a PID but every finitely generated ideal of $\mathcal{X}$ is principal. The question of whether almost-PIDs exist came up in my class recently (the standard proof that elements of PIDs can be factored into irreducibles shows that from a "non-factorable" element we can construct an infinitely generated nonprincipal ideal, and students were asking whether this was necessary). The answer is yes, but the simplest example I currently know (which a colleague pointed out to me) is any nontrivial ultrapower of $\mathbb{Z}$ . By Los' Theorem each finitely generated ideal is principal, but any such ring isn't even a UFD. However, this isn't something I can present to an early abstract algebra class. Question : is there an almost-PID which can be defined, and ideally verified as an almost-PID, using only elementary techniques? By "elementary techniques" I ideally mean the material presented in the first nine chapte
A domain in which every finitely generated ideal is principal is called a Bézout domain. It is equivalent to the statement that every $2$ -generated ideal is principal. A standard example of a Bézout domain is the ring of algebraic integers. It is easy to verify that there are ideals that are not principal: for example, the ideal generated by all $2^{1/n}$ is not principal (any generator would lie in a finite extension of $\mathbb{Q}$ , but no finite extension of $\mathbb{Q}$ contains all $n$th roots of $2$ ). Proving that every $2$ -generated ideal is principal was described by Dedekind as an "important but difficult" theorem, so this will not satisfy your "elementary techniques" requirement. Another standard example is the ring of entire functions, which likewise would take you too far afield. Wikipedia's page on Bézout domains has a sketch of how to construct a non-UFD Bézout domain from any PID that is not a field. Applying it to $\mathbb{Z}$ , it gives the ring $R=\mathbb{Z}+x\mathbb{Q}[x
|abstract-algebra|ring-theory|ideals|examples-counterexamples|principal-ideal-domains|
1
Is it true that a function is $c$-strongly convex if $f - c\|x\|^2_p$ is convex for ANY norm $\|x\|_p$?
It is common knowledge that a function is $c$ -strongly convex if $f - c\|x\|^2_2$ is convex. However, can we replace $\|x\|_2$ with any norm $\|x\|_p$ ? I strongly suspect this holds, but from looking at several references online (see p. 26 of this ), it might also be true that this condition ONLY holds for $p = 2$ . I find that almost all proofs on Math Stack Exchange implicitly assume that the norm is the 2-norm. Whenever people write $\|x\|^2$ in their proofs, what they actually mean is that $\|x\|^2$ is the 2-norm. Consider the example here , in which the top answer shows $$R := \alpha (1 - \alpha)\|x-y\|^2 + \|\alpha x + (1-\alpha) y\|^2 - \alpha \|x\|^2 - (1 - \alpha) \|y\|^2 = 0.$$ But this implicitly assumes $\|x\|^2$ is the 2-norm; the equality is simple to see if we write $\|x-y\|^2$ as $(x-y)^T(x-y)$ , but that only holds for the 2-norm. I would appreciate it if someone can verify/check this result.
Following the clarification in your comment, for a given norm $\|\cdot\|$ , $f(x)$ is here called $c$ -strongly convex if for all $x,y$ and $\alpha \in [0,1]$ we have $$f(\alpha x + (1-\alpha)y) \le \alpha f(x) + (1 - \alpha)f(y) - \frac{c}{2}\alpha(1-\alpha)\|x-y\|^2. \tag{1}$$ Note that for $\|\cdot\|_2$ , this becomes equivalent to the definition of c-strong convexity commonly used in the literature (i.e., a function is called $c$ -strongly convex if $f(x) - c\|x\|^2_2$ is convex). Let us define $$R(x,y) := \alpha (1 - \alpha)\|x-y\|^2 + \|\alpha x + (1-\alpha) y\|^2 - \alpha \|x\|^2 - (1 - \alpha) \|y\|^2.$$ From the proof given in this answer , we can see that if $R(x,y) \ge 0$ for $\|x\|_p$ , then inequality (1) implies convexity of $f(x) - c\|x\|^2_p$ . if $R(x,y) \le 0$ for $\|x\|_p$ , then convexity of $f(x) - c\|x\|^2_p$ implies inequality (1). For $p=2$ , we always have $R(x,y) = 0$ , so convexity of $f(x) - c\|x\|^2_2$ and (1) are equivalent: $$\color{blue}{\fbox{inequality
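To see concretely why $p=2$ is special here, a small numeric illustration (my own sketch): $R(x,y)$ is identically zero for the 2-norm but generally nonzero for other $p$-norms.

```python
def pnorm(v, p):
    return sum(abs(t) ** p for t in v) ** (1 / p)

def R(x, y, p, alpha=0.5):
    # R as defined in the answer, with the norm taken to be the p-norm
    mix = tuple(alpha * a + (1 - alpha) * b for a, b in zip(x, y))
    diff = tuple(a - b for a, b in zip(x, y))
    return (alpha * (1 - alpha) * pnorm(diff, p) ** 2
            + pnorm(mix, p) ** 2
            - alpha * pnorm(x, p) ** 2
            - (1 - alpha) * pnorm(y, p) ** 2)

x, y = (1.0, 0.0), (0.0, 1.0)
print(R(x, y, 2))  # ~0: the parallelogram-type identity holds for p = 2
print(R(x, y, 1))  # 1.0: fails for p = 1, so the equivalence breaks down
```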
|solution-verification|convex-analysis|convex-optimization|nonlinear-optimization|numerical-optimization|
0
Metric spaces are completely normal
Given a metric space $(X, k)$ with $Y, Z\subset X$ and $\operatorname{cl}(Y)\cap Z = \emptyset$, $\operatorname{cl}(Z)\cap Y = \emptyset$, prove that there are open sets $M, N$ such that $Y\subset M$, $Z\subset N$ and $M\cap N=\emptyset$. My Attempt: Since $\operatorname{cl}(Y)=Y\cup Y'$ and $\operatorname{cl}(Y)\cap Z = \emptyset$, we get: $(Y\cup Y')\cap Z = \emptyset$. Thus, $(Y\cap Z)\cup (Y'\cap Z) = \emptyset$, which means $Y\cap Z=\emptyset$ and $Y'\cap Z = \emptyset$. Similarly, $Z'\cap Y=\emptyset$. But I couldn't see how to choose $M$ and $N$ to be open sets. Can anyone please help me proceed on the remaining parts?
Let $(X,d)$ be your metric space, and suppose $E,F\subset X$ are closed and disjoint. Then for each $x \in E$ and $y \in F$ , there exist $\epsilon_x,\epsilon_y>0$ respectively such that $$B_{\epsilon_x}(x) \cap F = \emptyset = B_{\epsilon_y}(y) \cap E.$$ Then I claim the disjoint open sets containing $E,F$ to be $$U=\bigcup_{x \in E}B_{\frac{\epsilon_x}{2}}(x), \quad V=\bigcup_{y \in F}B_{\frac{\epsilon_y}{2}}(y).$$ To show they're disjoint, assume toward a contradiction that they are not, and use the triangle inequality.
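A sketch of the omitted contradiction step (in the answer's notation): if some point lay in both unions, the triangle inequality would force one center into the other's forbidden ball.

```latex
% Sketch: suppose $z \in B_{\epsilon_x/2}(x) \cap B_{\epsilon_y/2}(y)$ and,
% without loss of generality, $\epsilon_x \ge \epsilon_y$. Then
\[
  d(x,y) \le d(x,z) + d(z,y)
         < \tfrac{\epsilon_x}{2} + \tfrac{\epsilon_y}{2}
         \le \epsilon_x ,
\]
% so $y \in B_{\epsilon_x}(x) \cap F$, contradicting
% $B_{\epsilon_x}(x) \cap F = \emptyset$.
```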
|general-topology|metric-spaces|
0
Multivariable matrix calculus textbook?
I study a multivariable calculus course, and the lecturer is using matrix calculus notation, which I'm not familiar with. The notes that come with the course aren't sufficient in understanding this notation. Are there any good calculus textbooks you could recommend, which use and introduce matrix calculus notation? Thanks in advance!
This MIT short course on matrix calculus may be helpful.
|calculus|reference-request|matrix-calculus|
0
Matrix Calculus Reference Request
Looking for a good book on matrix calculus. I have seen the matrix cook book but it is mostly identities/equations without any proofs or work. I want a thorough treatment of the topic.
This MIT short course on matrix calculus may be helpful.
|matrices|reference-request|matrix-calculus|
0
Prove that the range of $f(x)$ is all of $\mathbb{R}$ if $\left | f(x) - f(y) \right | \geq \left |x - y\right |$
Let $f : \mathbb{R}\rightarrow \mathbb{R}$ be a continuous function such that $\left | f(x) - f(y) \right | \geq \left |x - y\right |$ for all $x, y \in \mathbb{R}$ . Prove that the range of $f(x)$ is all of $\mathbb{R}$ . I found a solution here which goes like: If $f(x)=f(y)$ then $0\geq |x-y|$ , so $x=y$ , thus $f$ is injective, therefore monotone (being injective and continuous). Since $|f(x) - f(0)| \geq |x|$ , it means $f$ is both lower and upper unbounded, thus its range is the whole $\mathbb{R}$ (and moreover, $f$ is a bijection). I understood that $f$ is upper unbounded, as $|f(x)|\geq x$ for positive reals, but how did they get $f$ to be lower unbounded?
You can deduce that $f$ has to be monotone directly from the hypothesis (rather than deducing it from injectivity): Let $g\colon \mathbb R^2\backslash \Delta \to \mathbb R$ be the function $g(x,y) = \frac{f(x)-f(y)}{x-y}$ , where $\Delta = \{(x,x): x \in \mathbb R\}$ . Then by assumption $|g(x,y)|\geq 1$ for all $(x,y) \in \mathbb R^2 \backslash \Delta$ , and $g(x,y) = g(y,x)$ . Now if $U_{+} = \{(x,y) \in \mathbb R^2: x>y\}$ and $(x_1,y_1),(x_2,y_2) \in U_+$ then the path $\gamma\colon [0,1]\to U_+$ given by $\gamma(t) = ((1-t)x_1+tx_2, (1-t)y_1+ty_2)$ has $\gamma(0) =(x_1,y_1)$ and $\gamma(1)=(x_2,y_2)$ . But then $g(\gamma(t))$ is a continuous function, so that $g(\gamma(0))$ and $g(\gamma(1))$ must have the same sign since $g\circ \gamma$ is continuous and $g$ does not vanish on $U_+$ . It follows that $g$ has constant sign on $U_+$ and, replacing $f$ with $-f$ if necessary, we may assume that $f(x)>f(y)$ whenever $x>y$ , that is, $f$ is increasing. But now replacing $f(x)$ by $f(x
|real-analysis|
0
Help with the indefinite integral $\int \frac{dx}{2x^4 + 3x^2 + 5}$
I start by rewriting the denominator, $2x^4+3x^2+5$ , as a squared term plus a constant. To do this, we notice that the first two terms already have a common factor of $x^2$ . We can complete the square by taking half of the coefficient of our $x^2$ term, squaring it, and adding it to both the numerator and denominator. Since the coefficient of our $x^2$ term is $3$ , half of it would be $\frac{3}{2}$ , and squaring it gives us $\frac{9}{4}$ . So we add $\frac{9}{4}$ to both the numerator and denominator. I started like that but I couldn't go further. Can you please help me to solve it?
Substitute $t=\sqrt[4]{\frac25}x$ , along with $a_{\pm}= \sqrt{ 2\pm\frac3{\sqrt{10}}}$ \begin{align} &\int\frac1{2x^4+3x^2+5}dx = \frac1{\sqrt[4]{250}}\int\frac1{t^4+\frac3{\sqrt{10}} t^2 +1}dt\\ =&\ \frac1{2\sqrt[4]{250}}\int\frac{1+t^2}{t^4+\frac3{\sqrt{10}} t^2 +1} + \frac{1-t^2}{t^4+\frac3{\sqrt{10}} t^2 +1}\ dt\\ =&\ \frac1{2\sqrt[4]{250}}\int\frac{d(t-\frac1t)}{(t-\frac1t)^2+a_+^2} -\frac{d(t+\frac1t)}{(t+\frac1t)^2-a_-^2}\\ =&\ \frac1{2\sqrt[4]{250}} \bigg( \frac1{a_+}\tan^{-1}\frac{t-\frac1t}{a_+} + \frac1{a_-}\coth^{-1}\frac{t+\frac1t}{a_-}\bigg) \end{align}
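As a numerical sanity check (my own sketch, not part of the derivation), one can differentiate the claimed antiderivative and compare with the integrand; here `acoth` is implemented via logarithms, valid since $(t+1/t)/a_- > 1$ for $x > 0$.

```python
import math

# Spot-check of the antiderivative above (valid for x > 0).
ap = math.sqrt(2 + 3 / math.sqrt(10))   # a_+
am = math.sqrt(2 - 3 / math.sqrt(10))   # a_-
C = 1 / (2 * 250 ** 0.25)

def acoth(z):
    # inverse hyperbolic cotangent, valid for |z| > 1
    return 0.5 * math.log((z + 1) / (z - 1))

def F(x):
    t = (2 / 5) ** 0.25 * x
    return C * (math.atan((t - 1 / t) / ap) / ap
                + acoth((t + 1 / t) / am) / am)

def f(x):
    return 1 / (2 * x ** 4 + 3 * x ** 2 + 5)

# A central difference of F should reproduce the integrand:
h = 1e-5
print((F(1.0 + h) - F(1.0 - h)) / (2 * h), f(1.0))  # both ≈ 0.1
```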
|integration|algebra-precalculus|quadratics|completing-the-square|
0
Proving that a sequence of function does not uniformly converge
I am trying to prove that the sequence of functions $f_n (t) = n^2 t e^{-nt}$ does not converge uniformly on $\mathbb{R}_{\geq 0}$ . I was able to show that it converges point-wise to the zero function. To prove that it doesn't converge uniformly, I need to show that $$ \exists x, \exists \epsilon, \forall N, \exists n > N, |f_n (t)| \geq \epsilon. $$ I found that $f_n (t)$ has a maximum of $\frac{n}{e}$ at $t = \frac{1}{n}$ , but I'm stumped because I can't choose an $x$ that depends on $n$ . Is it preferable to proceed by contradiction?
Your negation is not correct as written, it seems to me. Uniform convergence, once we know that the limit is $0$ , is $\def\e{\varepsilon}$ $$\tag1 \forall \e>0,\ \exists n_0,\ \forall n\geq n_0,\ \forall x,\ |f_n(x)|<\e. $$ Hence the negation is $$\tag2 \exists \e>0,\ \forall n_0,\ \exists n\geq n_0,\ \exists x,\ |f_n(x)|\geq\e. $$ So we need to find an $\e>0$ and, given an arbitrarily large $n$ , an $x_n$ such that $|f_n(x_n)|\geq\e$ . Since the function is uniformly continuous away from $0$ (being bounded, continuous, with limit zero at infinity) the problem has to occur at $0$ . If we take $x_n=\frac1n$ , then $$ f_n(x_n)=n^2\frac1n\,e^{-n\,\frac1n}=ne^{-1}. $$ So if we take $\e=e^{-1}$ , given any $n_0\in\mathbb N$ we can take $n=n_0+1$ , and $x_n=\frac1n$ , to get $f_n(x_n)\geq e^{-1}$ .
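The key computation $f_n(1/n) = n/e$ is easy to confirm numerically (a small sketch):

```python
import math

def f(n, t):
    return n ** 2 * t * math.exp(-n * t)

# The sup of f_n on [0, oo) is attained at t = 1/n and equals n/e,
# so it blows up instead of eventually staying below any epsilon:
for n in (1, 10, 100, 1000):
    print(n, f(n, 1 / n), n / math.e)
```

This makes the failure of uniform convergence visible: the pointwise limit is $0$, yet $\sup_t |f_n(t)|$ grows without bound.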
|real-analysis|sequence-of-function|
1
In the axiomatic treatment of natural numbers, can we define what a natural number is?
In his book Number Systems and the Foundations of Analysis , Elliot Mendelson first defines a Peano system (p. 53). He then goes on to prove that two arbitrary Peano systems are isomorphic — they are essentially the same (p. 80). He then makes a basic existence assumption — there exists a Peano system (p. 83). In his book The Real Numbers and Real Analysis , Ethan D. Bloch states Axiom 1.2.1 (Peano Postulates) (p. 3). Axiom 1.2.1 states the existence of a Peano system, namely $(\mathbb{N}, s, 1)$ . Then, in Exercise 1.2.8 (p. 11), he mentions that two (arbitrary) Peano systems are isomorphic. So, I think Bloch’s approach is (partly) equivalent to Mendelson’s. (Bloch never defines what Peano systems are.) Bloch then defines the set of natural numbers (Definition 1.2.2, p. 3) as "the set the existence of which is given in the Peano Postulates." It implies that the members of a Peano system are called natural numbers. (It reminds me of how we define vectors as elements of a vector space.)
In the axiomatic treatment of natural numbers, can we define what a natural number is? If you mean 'just writing down the usual Peano axioms in first order logic and working from there with just them, nothing more ', then everything is a natural number already (well, the objects at least, not the functions) If you mean 'working in a strong set theory like ZF(C), and proving some object exist with so and so properties', that's no longer an axiomatic treatment, I'd say, but a 'genetic' one
|axioms|peano-axioms|formal-systems|
0
Conditional Probability of the null hypothesis given the rejection region
let $X \sim Bin(10,p)$ We would like to test the following hypotheses: $H_{0}: p=0.5 $ vs $H_{1} : p=0.3$ The Statistic $T(x)$ is defined as $T(x)=x$ and the critical (rejection) region is $R=\{x:T(x)\le4\}$ I want to calculate the conditional probability of the null hypothesis given $T(x)\in R $ That is : $P(H_{0}\mid T(x)\in R)$ My first solution is: $P(H_{0}\mid T(x)\in R)=\frac{\alpha}{\alpha+(1-\beta)}$ My second solution (according to Bayesian hypothesis): $P(H_{0}\mid T(x)\in R) = P(H_{0})*L(H_{0}\mid T(x)\in R)$ , which is equal to : $P(H_{0}\mid T(x)\in R) = P(H_{0})*P(T(x)\in R\mid H_{0})$ $P(T(x)\in R\mid H_{0})= P_{H_{0}}(X\le4)=0.37695=\alpha$ while $P(H_{0})$ equal to : $ P(H_{0})= P(H_{0}\mid T(x)\notin R) + P(H_{0}\mid T(x)\in R)=\frac{1-\alpha}{\beta} +\frac{\alpha}{1-\beta} $ Thus: $P(H_{0}\mid T(x)\in R)=(\frac{1-\alpha}{\beta} +\frac{\alpha}{1-\beta})*\alpha$ Which of the above solutions is correct ?
None of this makes any sense. The frequentist would say $\Pr[H_0]$ is not well-defined because $\Pr[p = 0.5]$ is not a statement about a random variable; and the Bayesian would protest that you have not specified a suitable prior distribution for $p$ , and the computation cannot be performed until you do. You also do not explain how you reason $$\Pr[H_0 \mid T(x) \in R] = \frac{\alpha}{1-\beta}.$$ Again, the frequentist will tell you this is nonsensical because $p$ is a parameter : it is fixed, and does not change from experiment to experiment. That it is unknown to us does not make it a random variable. The Bayesian would tell you that $$\Pr[H_0 \mid T(x) \in R] = \Pr[p = 0.5 \mid X \le 4] = \frac{\Pr[X \le 4 \mid p = 0.5]\Pr[p = 0.5]}{\Pr[X \le 4]}$$ and could reasonably interpret your hypothesis to imply that the prior on $p$ is a location-scale transformed Bernoulli with hyperparameter $\theta$ ; e.g. $$\Pr[p = 0.5] = 1-\theta, \quad \Pr[p = 0.3] = \theta$$ hence $$\begin{align} \P
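To make the Bayesian computation in this answer concrete, here is a sketch using the prior the answer describes, with the hyperparameter fixed at $\theta = 0.5$ (that specific value is my assumption, not given in the post):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Bin(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical prior: theta = 0.5, i.e. both hypotheses equally likely a priori.
theta = 0.5
prior = {0.5: 1 - theta, 0.3: theta}

# Likelihood of the rejection event X <= 4 under each hypothesis.
lik = {p: binom_cdf(4, 10, p) for p in prior}

# Bayes' rule: P(p = 0.5 | X <= 4).
posterior_H0 = (lik[0.5] * prior[0.5]) / sum(lik[p] * prior[p] for p in prior)
```

Note that `lik[0.5]` reproduces the value $0.37695$ quoted in the question; the posterior is then a genuine probability, unlike the expressions the answer criticizes.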
|probability-theory|probability-distributions|conditional-probability|bayesian|hypothesis-testing|
0
Product of commutative symmetric positive definite matrices
Let A and B be commutative symmetric positive definite matrices. The goal of the question was to show that $\|x\|_{A,B}=\sqrt{\langle Ax, Bx\rangle}$ is a norm in $\mathbf{R}^n$ , where $\langle \cdot,\cdot\rangle$ is the canonical scalar product. For that, my idea was to show that $(x,y)_{A,B}=\langle Ax, By\rangle$ is a scalar product, and then using the Cauchy-Schwarz inequality I can show that $\|x\|_{A,B}=\sqrt{\langle Ax, Bx\rangle}$ is a norm (since the only real problem is proving the triangle inequality). When trying to show that $(x,y)_{A,B}$ is a scalar product, I managed to show that it's bilinear and symmetric, but I can't show that $\langle Ax, Bx\rangle > 0$ for all nonzero $x \in \mathbf{R}^n$ . Since $\langle Ax, Bx\rangle = \langle x, ABx\rangle$ , all that is left to show is: "Let A and B be commutative symmetric positive definite matrices, then AB is positive definite" Furthermore, if there's another easier way to show that $\|x\|_{A,B}$ is a norm, please show me.
$A$ and $B$ commute by assumption and are diagonalisable because they are symmetric positive definite, so they are simultaneously diagonalisable. This implies every eigenvalue of $AB$ is the product of an eigenvalue of $A$ and an eigenvalue of $B$ and so is positive.
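A small numeric sanity check of this argument (not part of the original answer): build $A$ and $B$ from a shared eigenbasis, so they automatically commute, and verify that $AB$ is symmetric with positive quadratic form; its eigenvalues are the products of the chosen eigenvalues.

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Shared eigenbasis: a 2x2 rotation Q. Then A = Q diag(a) Q^T, B = Q diag(b) Q^T.
c, s = math.cos(0.7), math.sin(0.7)
Q, Qt = [[c, -s], [s, c]], [[c, s], [-s, c]]

def spd(d1, d2):
    return matmul(matmul(Q, [[d1, 0.0], [0.0, d2]]), Qt)

A, B = spd(2.0, 5.0), spd(1.0, 3.0)   # eigenvalues {2, 5} and {1, 3}
AB, BA = matmul(A, B), matmul(B, A)

def quad(M, x):
    """Quadratic form x^T M x."""
    return sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))

forms = [quad(AB, x) for x in ([1.0, 0.0], [0.0, 1.0], [1.0, -2.0], [3.0, 4.0])]
```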
|linear-algebra|normed-spaces|
0
Polygon Problem
Problem: Let $A_1A_2\dots A_{100}$ be a regular $100$ -gon with a circumcircle of diameter $1$ . Let $M$ be the midpoint of the minor arc $A_1A_2$ . Find $$ \sum^{100}_{i=1} |MA_i|^2. $$ My attempt: Let the points $A_k$ be of the form $$ \frac12 e^{2\pi ki/100} = \frac12 \cos\Bigl(\frac{2\pi k}{100}\Bigr) + \frac12 i\sin\Bigl(\frac{2 \pi k}{100}\Bigr) $$ in the complex plane, where $k = 1, 2, \dots, 100$ . Then $M$ is clearly $\frac12 \cos\bigl(\frac{3\pi}{100}\bigr) + \frac12 i \sin\bigl(\frac{3\pi}{100}\bigr)$ . Hence by the Pythagorean Theorem the sum equals $$ \begin{multline} \sum^{100}_{k = 1} \Biggl( \biggl( \frac12 \cos\Bigl(\frac{2\pi k}{100}\Bigr) - \frac12 \cos\Bigl(\frac{3\pi}{100}\Bigr) \biggr)^2 + \biggl( \frac12 \sin\Bigl(\frac{2 \pi k}{100}\Bigr) - \frac12 \sin\Bigl(\frac{3\pi}{100}\Bigr) \biggr)^2 \Biggr) \\ = \frac12 \sum^{100}_{k=1} \biggl( 1 - \cos\Bigl(\frac{2\pi k}{100}\Bigr) \cos\Bigl(\frac{3\pi}{100}\Bigr) - \sin\Bigl(\frac{2\pi k}{100}\Bigr) \sin\Bigl(\frac{3\pi}{100}\Bigr) \biggr) \end{multline}$$
Draw the segments $\overline{MA_1}$ and $\overline{MA_{51}}$ . Because $A_1$ and $A_{51}$ are diametrically opposite points, the angle between these two segments is a right angle. So by the Pythagorean Theorem the squared lengths satisfy $MA_1^2+MA_{51}^2=1.$ There are $50$ such diametrically opposite pairs constituting the vertices of the $100$ -gon, so the sum of the squares of all $100$ distances is a sum of $50$ ones, that is, $50$ .
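The pairing argument can be spot-checked numerically (a sketch, not in the original answer), placing the vertices on a circle of radius $1/2$ in the complex plane as in the question:

```python
import cmath

# Vertices of the regular 100-gon on a circle of diameter 1 (radius 1/2).
A = [0.5 * cmath.exp(2j * cmath.pi * k / 100) for k in range(1, 101)]

# Midpoint of the minor arc A_1 A_2 sits at the average angle 3*pi/100.
M = 0.5 * cmath.exp(3j * cmath.pi / 100)

total = sum(abs(M - a) ** 2 for a in A)
```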
|geometry|algebra-precalculus|trigonometry|complex-numbers|polygons|
1
Validity of integral formula for $E[|X + Y|] - E[|X - Y|]$
$\newcommand{\real}{\mathbb{R}}$ I read this identity from the monograph Probability Inequalities by Zhengyan Lin and Zhidong Bai. The author proposed to use this identity to prove the inequality $E[|X - Y|] \leq E[|X + Y|]$ when $X$ and $Y$ are i.i.d. I quote: An alternative proof is to use the formula \begin{align} & E[|X + Y|] - E[|X - Y|] \\ =& \int_{-\infty}^\infty(1 - F(u) - F(-u))(1 - G(u) - G(-u))du, \tag{1} \end{align} where $F$ and $G$ are the distribution functions of $X$ and $Y$ respectively. The author didn't give a proof of $(1)$ (assume $X$ and $Y$ are independent). In the original text, the lower limit of the integral is " $0$ ", which has been corrected to " $-\infty$ " in $(1)$ . My question: does $(1)$ hold for any distribution functions $F$ and $G$ (such that $E[|X|] < \infty$ and $E[|Y|] < \infty$ )? The author seems to imply this is the case. As shown below, $(1)$ is indeed true if $F$ and $G$ are absolutely continuous with densities $f$ and $g$ , but I have difficulty generalizing it to arbitrary distributions.
Here is a more concise derivation, a modification of this answer : First verify the identity $$ |x+y|-|x-y|=2[\min(x^+,y^+)+\min(x^-,y^-)-\min(x^+,y^-)-\min(x^-,y^+)].$$ Next, use this result to argue that for nonnegative and independent $U$ and $V$ : $$E\min(U,V)=\int_0^\infty P(\min(U,V)>t)\,dt=\int_0^\infty P(U>t)P(V>t)\,dt. $$ Apply this last to find, when $X$ and $Y$ are independent, $$ \begin{aligned} E\min(X^+,Y^+)&=\int_0^\infty P(X^+>t)P(Y^+>t)\,dt=\int_0^\infty P(X>t)P(Y>t)\,dt\\ E\min(X^-,Y^-)&=\int_0^\infty P(X^->t)P(Y^->t)\,dt=\int_0^\infty P(-X>t)P(-Y>t)\,dt\\ E\min(X^+,Y^-)&=\int_0^\infty P(X^+>t)P(Y^->t)\,dt=\int_0^\infty P(X>t)P(-Y>t)\,dt\\ E\min(X^-,Y^+)&=\int_0^\infty P(X^->t)P(Y^+>t)\,dt=\int_0^\infty P(-X>t)P(Y>t)\,dt \end{aligned}$$ Put everything together: $$E|X+Y|-E|X-Y|=2\int_0^\infty[P(X>t)-P(-X>t)][P(Y>t)-P(-Y>t)]\,dt.$$ This is the same as the desired identity (1), since $P(X>t)=1-F(t)$ and $P(-X>t)=F(-t)$ for almost every $t$ (similarly for $Y$ , $G$ ), and the integrand in $(1)$ is an even function of $u$ , so the integral over $(-\infty,\infty)$ is twice the integral over $(0,\infty)$ .
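The pointwise identity in the first step can be checked by brute force over a grid (a sketch, not part of the original answer):

```python
def pos(x):
    """Positive part x^+."""
    return max(x, 0.0)

def neg(x):
    """Negative part x^-."""
    return max(-x, 0.0)

def lhs(x, y):
    return abs(x + y) - abs(x - y)

def rhs(x, y):
    return 2 * (min(pos(x), pos(y)) + min(neg(x), neg(y))
                - min(pos(x), neg(y)) - min(neg(x), pos(y)))

grid = [(x / 3.0, y / 3.0) for x in range(-9, 10) for y in range(-9, 10)]
max_gap = max(abs(lhs(x, y) - rhs(x, y)) for x, y in grid)
```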
|probability|probability-theory|expected-value|
0
The limit of $f(x) = \frac{1}{x^2 + 5x - 24}$ at $x=4$
I'm working through Advanced Calculus: Theory and Practice by John S. Petrovic and am currently working on problem 3.4.2, which is as follows: Find the limit and prove that the result is correct using the definition (epsilon-delta) of the limit: $$ \lim_{x \to 4} \frac{1}{x^2 + 5x - 24} $$ This is my attempt: $$ \lim_{x \to 4} \frac{1}{x^2 + 5x - 24} = \frac{1}{12} $$ We need $| x - 4 | < \delta$ , and an expression for $\delta$ dependent on $\epsilon$ , therefore we focus on \begin{align} \Biggl|f(x) - \frac{1}{12}\Biggr| &= \Biggl|\frac{12 - x^2 - 5x + 24}{12(x^2 + 5x - 24)}\Biggr| \\ &= \Biggl|\frac{-(x + 9)(x - 4)}{12(x + 8)(x - 3)}\Biggr| \\ &= \Biggl|\frac{(x + 9)(x - 4)}{12(x + 8)(x - 3)}\Biggr| \\ &\leq \frac{1}{12} \frac{|x+9|\,|x-4|}{|x+8|\,|x-3|}. \end{align} (It is the next step where I am pretty sure I'm using the inequalities incorrectly.) Now let $\delta_1 = 1$ , which implies that $-1 < x - 4 < 1$ , i.e. $3 < x < 5$ , and therefore: $$12 < x + 9 < 14$$ $$11 < x + 8 < 13$$ $$0 < x - 3 < 2$$ Therefore, $$ \frac{1}{12} \frac{|x+9|\,|x-4|}{|x+8|\,|x-3|} $$ As
You're doing everything correctly, including finding upper bounds for linear factors in the numerator and lower bounds for linear factors in the denominator. The problem is that the initial choice of $\delta_1 = 1$ is too big, since the point $x = 4$ is too close to $x = 3$ , where the factor $x - 3$ vanishes. Although the arithmetic is a bit messier, if you begin with $\delta_1 = \tfrac12$ and follow the same steps, everything should work out. In general, if you're writing a proof for a limit as $x \to a$ for a rational function with roots $r_1, \dots, r_n$ in the denominator, you need to choose an initial $$ \delta_1 < \min_{1 \le i \le n} |a - r_i|, $$ so that the interval $(a - \delta_1, a + \delta_1)$ stays away from every root of the denominator.
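A numeric illustration of why $\delta_1 = 1$ fails while $\delta_1 = \tfrac12$ works (a sketch, not in the original answer; the grids and names are mine): the factor multiplying $|x-4|$ in the bound is unbounded on $(3,5)$ but bounded on $(3.5, 4.5)$.

```python
def factor(x):
    # The coefficient of |x - 4| in the bound for |f(x) - 1/12|,
    # i.e. (x + 9) / (12 (x + 8) |x - 3|).
    return (x + 9) / (12 * (x + 8) * abs(x - 3))

# Window from delta_1 = 1: the open interval (3, 5); blows up near x = 3.
wide = [3 + 2 * k / 100000 for k in range(1, 100000)]
sup_wide = max(factor(x) for x in wide)

# Window from delta_1 = 1/2: the open interval (3.5, 4.5); stays bounded.
narrow = [3.5 + k / 100000 for k in range(1, 100000)]
sup_narrow = max(factor(x) for x in narrow)
```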
|calculus|limits|continuity|limits-without-lhopital|epsilon-delta|
1
Python code for Second Order ODE Initial Value problem using Finite Difference methods
I am trying to solve the following ODE $$\begin{aligned} & y^{\prime \prime}+2 y^{\prime}+y=0 \\ & y(0)=2 \quad y^{\prime}(0)=-1 \\ & 0 \leqslant x \leqslant 1 \end{aligned}$$ Using central FD, I can write $$\frac{y_{i+1}-2 y_i+y_{i-1}}{h^2}+2 \frac{y_{i+1}-y_{i-1}}{2 h}+y_i=0$$ which I have simplified to $$(1+h) y_{i+1}+\left(h^2-2\right) y_i+(1-h) y_{i-1}=0$$ I am struggling to fit the second boundary condition into the set of resulting equations. Using central FD, I think I can derive $$\begin{aligned} & \frac{y_2-y_0}{2 h}=-1 \\ & y_0=y_2+2 h\end{aligned}$$ However, I am not sure how to make use of this condition. Can you please guide me? The analytic solution to the ODE is given as $y(x)=2 e^{-x}+x e^{-x}$ Thanks Based on whpowell96's suggestion, I will create the system of equations as follows. I have removed the python code for now. Does this look correct? Thank you. Assuming N = 10 $$ \begin{aligned} & h=0.125 , h^2=0.016, 1-h=0.875 \\ & 1.125 y_2-1.984 y_1=-1.75 \\ & 1.125 y_3-
The way to do this that achieves the best accuracy is via the ghost point method . Label the nodes such that $y_0$ is at $x=0$ . The discretized boundary conditions read $$ \begin{aligned} y_0 &= 2 \\ \frac{y_{1} - y_{-1}}{2h} &= -1. \end{aligned} $$ Here we have introduced an extra variable, $y_{-1}$ , which corresponds to the function value one node to the left of the domain. Since the function is not defined there, it is called a ghost node . We won't actually evaluate it, but will merely use it to inform our algebra. Now consider the difference equation at $x=0$ : $$ (1+h)y_1 + (h^2-2)y_0 + (1-h)y_{-1} = 0. $$ Again, we are using the ghost point $y_{-1}$ . However, notice that our boundary condition can be rearranged to $y_{-1} = 2h + y_1$ . Substituting this and $y_0=2$ into the difference equation yields $$ (1+h)y_1 + 2(h^2-2) + (1-h)(2h+y_1)=0, $$ which is a linear equation for $y_1$ . Since the boundary condition was discretized using a second-order scheme, we actually retain second-order accuracy overall.
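Here is a minimal Python sketch of the ghost-point approach described above (my own implementation, not the poster's removed code). After eliminating $y_{-1}$, the equation at $x=0$ collapses to $2y_1 + 2(h^2-2) + 2h(1-h) = 0$, i.e. $y_1 = 2 - h$, and the remaining values follow from the three-term recurrence; the result is compared to the analytic solution $y(x) = (2+x)e^{-x}$.

```python
import math

def solve(N):
    """Solve y'' + 2y' + y = 0, y(0) = 2, y'(0) = -1 on [0, 1] with the
    centered scheme (1+h) y_{i+1} + (h^2-2) y_i + (1-h) y_{i-1} = 0."""
    h = 1.0 / N
    y = [0.0] * (N + 1)
    y[0] = 2.0
    # Ghost-point equation at i = 0 gives y_1 in closed form: y_1 = 2 - h.
    y[1] = 2.0 - h
    for i in range(1, N):
        y[i + 1] = ((2.0 - h * h) * y[i] - (1.0 - h) * y[i - 1]) / (1.0 + h)
    return y

def exact(x):
    return (2.0 + x) * math.exp(-x)

N = 100
y = solve(N)
err = max(abs(y[i] - exact(i / N)) for i in range(N + 1))
```

Halving $h$ should shrink the maximum error by roughly a factor of four, which is one way to confirm the retained second-order accuracy.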
|ordinary-differential-equations|boundary-value-problem|initial-value-problems|python|finite-differences|
0
Blowup of $C\times D$ has finite $(-1)$-curves
I was reading this post and in the proof of the fact If $C$ , $D$ are smooth curves and $g(C) \geq 1$ , then any blow-up of $C \times D$ contains at most finitely many $(-1)$ -curves. Here we take the projection $p:S\to C$ , where $\pi:S\to C\times D$ is a blow-up. Then the proof noted that no rational curve in $S$ can dominate $C$ which I understand. But I don't understand why the next step holds, that is, why does this mean that the $(-1)$ -curves are contained in reducible fibers of $p$ and why are there only finitely many of them? Any help will be much appreciated!
Since a map of smooth projective curves is surjective or constant, a $(-1)$ -curve must map to a point in $C$ , so every $(-1)$ -curve is contained in a fiber of $p$ . On the other hand, $\pi$ is the blowup of $C\times D$ in a finite set of points $\{x_1,\cdots,x_n\}$ , and every fiber of $p$ over a point $c\in C$ which is not the projection of an $x_i$ is the same as the fiber of $C\times D\to C$ over that point. So only finitely many fibers can contain a $(-1)$ -curve, and as each fiber has finitely many irreducible components, there can only be finitely many $(-1)$ -curves in a given fiber, giving finitely many $(-1)$ -curves in $S$ . The bit about reducible fibers comes from calculating the fibers of $\pi$ . If $\pi: X'\to X$ is the blowup of a smooth surface in a point $x$ , then the total transform of any curve $Z\subset X$ is given by $\widetilde{Z}+\mu_x(Z)E$ , where $\widetilde{-}$ is the strict transform, $\mu_x(Z)$ is the multiplicity of $Z$ at $x$ , and $E$ is the exceptional divisor.
|algebraic-geometry|
1
A curve intersected by a straight line having constant harmonic mean segments
It is known that if the product of two line segments $OA, OB$ drawn from any point $O$ to a curve is constant, then the curve is a circle (black). The product is the square of the geometric mean. However, what is the curve (green) if the harmonic mean of $OA, OB$ is constant? Thanks for all suggested insights, or the curve itself if already known.
A solution is obtained in which $O, A, B$ are not always collinear, even though it was derived on that basis; this could be due to some inherent precession in the spiral. A circle of radius $a$ is drawn through $A, B$ , centred on their perpendicular bisector. Let $$OB=r, \quad OA= r- 2 a \sin \psi, \quad \psi= \psi(s), \quad r'(s)= \cos \psi $$ Constant harmonic mean: $$H=\frac{r (r- 2 a \sin \psi)}{(r- a \sin \psi)} \tag1$$ Differentiate wrt arc $s$ using the Chain Rule, cancelling $\cos \psi$ on the RHS $$ H= \frac{2 r ~- 2 a \sin \psi - 2 a r \sin \psi }{1- a \psi'(s)}$$ Cross multiply and simplify to get the ODE of the desired curve $$ a \psi ' =1 +\frac{2 a \sin \psi}{r} \left(\frac{ a \sin \psi}{r} -1\right) \tag 2$$ From (1), at the tangent point $$ \psi=0, ~ H=r= r_{tangent}$$ At $$~ \psi = \pi/2, ~r_{max}= a+ H/2 + \sqrt{a^{2}+H^{2}/4} \tag 3$$ At $ \psi =3 \pi/2$ $$\psi =3 \pi/2,~~r_{min}=- a+ H/2 + \sqrt{a^{2}+H^{2}/4},~r_{max}- r_{min}= 2a\tag 4$$ Also from (1) $$ H= \frac{ r_{max}\cdot(r_{max}-2 a)}{r_{max}-a} = \frac{ r_{min}\cdot(r_{min}+2 a)}{r_{min}+a}$$
|geometry|locus|
0
Calculate this limit by first principles
I have to calculate this complex limit by first principles, ie, without using sophisticated tricks like L'Hospital's rule, Stirling formula, gamma function etc. $$ \lim_{n \to \infty} \frac{[(n+1)!]^2 (2e^{i \theta})^{2n}}{(2n)(2n+1)!} $$ where $\theta$ is a fixed real number.
A couple of steps to make the limit nicer: The $\frac{[(n+1)!]^2}{(2n+1)!}$ term looks very similar to $\frac{1}{\binom{2n+2}{n+1}}$ . We can multiply by $\frac{2n}{2n+2}$ without changing the limit to get it in this form. The $\left(2e^{i\theta}\right)^{2n}$ is $4^n$ with a phase term. The phase term won't converge, so the limit can exist only if the modulus tends to zero. Finally, we can multiply by four and shift $n+1\mapsto n$ to arrive at $$\frac14\lim_{n\to\infty}\frac{4^n}{\binom{2n}{n}}.$$ This doesn't converge; however, it reminds me of the Catalan/arcsin series, which look somewhat like $$\sum\binom{2n}{n}\frac{1}{2n+1}x^n.$$ There's the more complicated $$\frac{2x^2}{1-x^2} + \frac{2x\arcsin(x)}{(1-x^2)^\frac32} = \sum_{n=1}^{\infty}\frac{(2x)^{2n}}{\binom{2n}{n}}$$ (see proofwiki ) which looks so similar I believe your problem is actually supposed to be a summation rather than a limit: $$\sum_{n=0}^{\infty}\frac{[(n+1)!]^2(2e^{i\theta})^{2n}}{2n(2n+1)!}=\frac14\sum_{n=1}^{\infty}
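The claim that $4^n/\binom{2n}{n}$ diverges (it grows like $\sqrt{\pi n}$ by Stirling) can be checked numerically (a sketch, not in the original answer):

```python
import math

def r(n):
    # 4^n / C(2n, n); exact integer arithmetic, then one float division.
    return 4**n / math.comb(2 * n, n)

# Compare against the Stirling prediction sqrt(pi * n).
ratios = {n: r(n) / math.sqrt(math.pi * n) for n in (10, 100, 500)}
```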
|real-analysis|calculus|limits|limits-without-lhopital|
0
The probability of a circle in a circumscriptible polygon
I have difficulty understanding the solution below and have already summarized my difficulties as follows: why does "the area of the polygon $abcdef$ .... represent the number of ways the three points can be taken, so that the circle circumscribing the triangle will lie wholly within the given polygon"? It looks like the ratio of the two areas is the probability that a circle with a given radius lies wholly in the polygon. "An element of the polygon at $G$ is $4 x d x d \psi$ ": what does the "element" mean? a differential form? and why is it $4 x d x d \psi$ ? why is an element of the polygon at $R$ $dt$ ? why is $dt$ only a differential form of $d\theta$ ? why not include $dx$ too? why do we introduce an imaginary angle $\phi$ ? Problem : A circle is circumscribed about a triangle formed by joining three points taken at random in the surface of a circumscriptible polygon of $n$ sides; find the chance that the circle lies wholly within the polygon. Solution : Let $ABCDEF\ldots$ be the polygon
Below I am trying to make the asked points clear; computations do not stay in focus. First of all, a picture: The problem statement somehow starts in the middle, using the least number of words to expose the problem. This is not the way we think, and not the way we start solving. Instead, let us introduce the objects in their order. The circle $\mathcal O$ with center $O$ in the origin and radius one is given. (The general case is translated and rescaled to this one.) A polygon $\Pi=ABCDEF\dots$ circumscribed to $\mathcal O$ is chosen. So $AB$ , $BC$ , $CD$ , $DE$ , $EF$ , $\dots$ are all tangent to $\mathcal O$ . The parameter $r$ is the distance from $O$ to the side $AB$ , so $r=1$ ; I must have as few variables to look at as possible. The situation so far is fixed. Consider the probability space $\Omega$ of three points $(P,R,G)$ inside $\Pi$ , seen as a "solid area", the (closed) convex hull of its vertices. Denote such a triple always by $(P,R,G)$ with $P,R,G$ chosen independently and uniformly in $\Pi$ .
|probability|proof-explanation|geometric-probability|
1
Motivation for the selection of basis function of Fourier series
I have seen two motivations for decomposing an $L^2(\mathbb{T}^n)$ function into a Fourier series with basis functions $e^{2\pi inx}$ with $n$ being integers. The first one is the intuitive frequency representation of a signal; the second one is that the trigonometric functions are eigenfunctions of the Laplacian, so they can help to solve PDEs. I am satisfied by those explanations. However, in Folland's Real Analysis, Modern Techniques and Their Applications , it gives another motivation as follows. I feel a little "unmotivated". Surely, having functions in the form $f(y+x)=\phi(x)f(y)$ can be good. But why is it so natural for people to consider them as the building blocks? Is there any theorem in Fourier analysis that particularly uses this fact? Sorry for the vague question, but basically, I feel it is somehow unnatural.
Let $G$ be an abelian topological group. Just as we look at the dual space $X^*$ when we want to know about a vector space (a Banach space, for example) $X$ , it is a natural idea that the "dual group" $G^*$ might give some useful information about $G$ , and this is indeed the case. More concretely, we will consider the complex homomorphisms (usually called the characters ) on $G$ , which are continuous functions $\phi:G\to\mathbb{C}$ satisfying $$\phi(x+y)=\phi(x)\phi(y)\qquad(x,y\in G). $$ In our case, $G=\mathbb{R}^n$ with addition (in particular, it is abelian), and the characters are precisely given by $$\phi_t(x)=e^{it\cdot x} $$ for some $t\in\mathbb{R}^n$ . There is (at least) one more motivation to consider the characters. Let $G$ be a locally compact abelian group. Then $G$ admits a unique Haar measure, and there arises a natural Banach space $L^1(G)=L^1(G,m)$ . In fact, $L^1(G)$ is an abelian Banach algebra whose multiplication is given by the convolution. Then it is well known that the nonzero multiplicative linear functionals on $L^1(G)$ are exactly given by integration against the characters of $G$ .
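The character property for $G=\mathbb{R}$ can be illustrated numerically (a trivial sketch of mine, not part of the original answer): $\phi_t(x)=e^{itx}$ turns addition into multiplication and has modulus one.

```python
import cmath

def phi(t, x):
    """A character of (R, +): phi_t(x) = exp(i t x)."""
    return cmath.exp(1j * t * x)

t = 2.5
pairs = [(0.3, 1.1), (-2.0, 0.7), (5.0, -5.0)]

# |phi_t(x + y) - phi_t(x) phi_t(y)| should vanish (up to rounding).
gaps = [abs(phi(t, x + y) - phi(t, x) * phi(t, y)) for x, y in pairs]
```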
|real-analysis|fourier-analysis|
1
Question regarding proof of range of a cos(x) + b sin(x)
The proof for the range of $a \cos(x) + b \sin(x)$ given in my book is as follows $$f(x) = a \cos(x) + b \sin(x)$$ $$\sqrt{a^2 + b^2}(\frac{a}{\sqrt{a^2 + b^2}}\cdot \cos(x) + \frac{b}{\sqrt{a^2 + b^2}}\cdot \sin(x))$$ $$\sqrt{a^2 + b^2}(\sin(\alpha)\cdot\cos(x) + \cos(\alpha)\cdot\sin(x))$$ $$\sqrt{a^2 + b^2}\cdot\sin(\alpha + x)$$ Similarly the same thing is done as follows $$f(x) = a \cos(x) + b \sin(x)$$ $$\sqrt{a^2 + b^2}(\frac{a}{\sqrt{a^2 + b^2}}\cdot \cos(x) + \frac{b}{\sqrt{a^2 + b^2}}\cdot \sin(x))$$ $$\sqrt{a^2 + b^2}(\cos(\alpha)\cdot\cos(x) + \sin(\alpha)\cdot\sin(x))$$ $$\sqrt{a^2 + b^2}\cdot\cos(x - \alpha)$$ Then, $$-1 \leq \sin(x + \alpha) \leq 1$$ $$-\sqrt{a^2 + b^2} \leq \sqrt{a^2 + b^2}\cdot\sin(x + \alpha) \leq \sqrt{a^2 + b^2}$$ Therefore the range of the function is $[-\sqrt{a^2 + b^2}, \sqrt{a^2 + b^2}]$ The main issue for me in the above proof is: how can we replace $\frac{a}{\sqrt{a^2 + b^2}}$ and $\frac{b}{\sqrt{a^2 + b^2}}$ with $\sin(\alpha)$ and $\cos(\alpha)$ ?
Hint: draw a right angled triangle with side lengths $a$ and $b$ . Calculate sine and cosine for an angle of your choice.
|functions|trigonometry|
1
Expected value of the Brownian motion to the power of $n$
Let $B$ be a Brownian motion. How can $\mathbb{E}\left[B_t^n\right]$ and $\mathbb{E}\left[\lvert B_t\rvert^n\right]$ with $n\in\mathbb{N}$ and $t\geq 0$ be computed?
Let $s < t$ . From $B_t-B_s$ being independent of $\mathcal{F}_s$ and $B_t-B_s \sim \mathcal{N}\left(0, t-s\right)$ it follows that: \begin{align*} \mathbb{E}\left[\left(B_t - B_s\right)^n\vert \mathcal{F}_s\right] = \mathbb{E}\left[\left(B_t - B_s\right)^n\right] &= \left(t-s\right)^\frac{n}{2}\left(n-1\right)!! \cdot \begin{cases} 0 &\text{if}\ n\ \text{is odd}\\ 1 &\text{if}\ n\ \text{is even} \end{cases}\\ \mathbb{E}\left[\left\lvert B_t - B_s\right\rvert^n\vert \mathcal{F}_s\right] = \mathbb{E}\left[\left\lvert B_t - B_s\right\rvert^n\right] &= \left(t-s\right)^\frac{n}{2}\left(n-1\right)!! \cdot \begin{cases} \sqrt{\frac{2}{\pi}} &\text{if}\ n\ \text{is odd}\\ 1 &\text{if}\ n\ \text{is even} \end{cases} \end{align*} Because of $B_t = B_t - B_0 \sim \mathcal{N}\left(0, t\right)$ we also have: \begin{align*} \mathbb{E}\left[B_t^n\right] &= t^\frac{n}{2}\left(n-1\right)!! \cdot \begin{cases} 0 &\text{if}\ n\ \text{is odd}\\ 1 &\text{if}\ n\ \text{is even} \end{cases}\\ \mathbb{E}\left[\left\lvert B_t\right\rvert^n\right] &= t^\frac{n}{2}\left(n-1\right)!! \cdot \begin{cases} \sqrt{\frac{2}{\pi}} &\text{if}\ n\ \text{is odd}\\ 1 &\text{if}\ n\ \text{is even} \end{cases} \end{align*}
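A Monte Carlo sanity check of these closed forms (a sketch; the sample size, seed, and $t=2$ are arbitrary choices of mine, not part of the original answer):

```python
import math
import random

random.seed(0)
t = 2.0
N = 200_000
# B_t ~ N(0, t): sample directly.
samples = [random.gauss(0.0, math.sqrt(t)) for _ in range(N)]

def moment(n, absolute=False):
    if absolute:
        return sum(abs(b) ** n for b in samples) / N
    return sum(b ** n for b in samples) / N

def double_factorial(m):
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

# Closed forms: E[B_t^n] = t^(n/2) (n-1)!! for even n and 0 for odd n;
# E[|B_t|^n] picks up an extra sqrt(2/pi) factor for odd n.
pred_2 = t * double_factorial(1)          # E[B_t^2] = t = 2
pred_4 = t**2 * double_factorial(3)       # E[B_t^4] = 3 t^2 = 12
pred_abs1 = math.sqrt(t) * math.sqrt(2 / math.pi)
```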
|expected-value|brownian-motion|
1
Question regarding proof of range of a cos(x) + b sin(x)
The proof for the range of $a \cos(x) + b \sin(x)$ given in my book is as follows $$f(x) = a \cos(x) + b \sin(x)$$ $$\sqrt{a^2 + b^2}(\frac{a}{\sqrt{a^2 + b^2}}\cdot \cos(x) + \frac{b}{\sqrt{a^2 + b^2}}\cdot \sin(x))$$ $$\sqrt{a^2 + b^2}(\sin(\alpha)\cdot\cos(x) + \cos(\alpha)\cdot\sin(x))$$ $$\sqrt{a^2 + b^2}\cdot\sin(\alpha + x)$$ Similarly the same thing is done as follows $$f(x) = a \cos(x) + b \sin(x)$$ $$\sqrt{a^2 + b^2}(\frac{a}{\sqrt{a^2 + b^2}}\cdot \cos(x) + \frac{b}{\sqrt{a^2 + b^2}}\cdot \sin(x))$$ $$\sqrt{a^2 + b^2}(\cos(\alpha)\cdot\cos(x) + \sin(\alpha)\cdot\sin(x))$$ $$\sqrt{a^2 + b^2}\cdot\cos(x - \alpha)$$ Then, $$-1 \leq \sin(x + \alpha) \leq 1$$ $$-\sqrt{a^2 + b^2} \leq \sqrt{a^2 + b^2}\cdot\sin(x + \alpha) \leq \sqrt{a^2 + b^2}$$ Therefore the range of the function is $[-\sqrt{a^2 + b^2}, \sqrt{a^2 + b^2}]$ The main issue for me in the above proof is: how can we replace $\frac{a}{\sqrt{a^2 + b^2}}$ and $\frac{b}{\sqrt{a^2 + b^2}}$ with $\sin(\alpha)$ and $\cos(\alpha)$ ?
The target is to express $a\cos x+b\sin x$ as $r_1\sin(\alpha+x)$ or $r_2\cos(\beta-x)$ , so $$r_1\cos\alpha\sin x+r_1\sin\alpha\cos x=r_2\cos\beta\cos x+r_2\sin\beta\sin x=a\cos x+b\sin x$$ Comparing coefficients, $r_1\cos\alpha=r_2\sin\beta=b\\ r_1\sin\alpha=r_2\cos\beta=a$ Squaring and adding, $r_1^2=r_2^2=a^2+b^2\\ \tan\alpha=\cot\beta$ hence $\alpha$ and $\beta$ are the complementary angles of the same right-angled triangle with legs $a,b$ and hypotenuse $r_1=r_2$ .
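A quick numeric check of the resulting range $[-\sqrt{a^2+b^2}, \sqrt{a^2+b^2}]$ (a sketch, not in the original answer), using $a=3$, $b=4$, where the bound is $5$:

```python
import math

a, b = 3.0, 4.0
r = math.hypot(a, b)  # sqrt(a^2 + b^2) = 5

# Dense sample of one full period of a*cos(x) + b*sin(x).
xs = [2 * math.pi * k / 100000 for k in range(100000)]
vals = [a * math.cos(x) + b * math.sin(x) for x in xs]
lo, hi = min(vals), max(vals)
```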
|functions|trigonometry|
0
Canonical Form: Second-Order Partial Differential Equation
$u_{xx}+4u_{xy}+3u_{yy} + 3u_x-u_y+2u=0$ I found that $\xi(x,y) = y-3x$ and $\eta(x,y)=y-x$ , then $0= u - 5u_{\xi} - 2u_{\eta} - 2u_{\xi \eta}$ . I try to use a manipulation like SFFT: $$6u = \left (\frac{\partial}{\partial \xi}+1 \right )\left (2\frac{\partial}{\partial \eta} + 5\right )u.$$ Let $v=2\frac{\partial u}{\partial \eta}+5u$ , and hence $6u=\frac{\partial v}{\partial \xi}+v$ . From $v=2\frac{\partial u}{\partial \eta} + 5u$ we get $$u = \frac12 e^{-5\eta/2}\int ve^{5\eta/2}\;d\eta,$$ but I don't know how to use this form to solve for $u$ in $6u=\frac{\partial v}{\partial \xi}+v$ .
We start with the following PDE (see above): $$u-5 u_{\xi}-2 u_{\eta}- 2 u_{\xi \eta} = 0$$ We try to separate variables by multiplication $$u(\xi,\eta)=f(\xi)\cdot g(\eta)$$ This leads to the following equation $$1-2 \frac{f_{\xi}\cdot g_{\eta}}{f\cdot g}- 5 \frac{f_{\xi}}{f}- 2 \frac{g_{\eta}}{g}=0$$ We can get 2 separate ODEs by defining $$\frac{f_{\xi}}{f}=k_1 \implies f(\xi)=c_1\cdot e^{k_1 \xi}$$ $$\frac{g_{\eta}}{g}=k_2 \implies g(\eta)=c_2\cdot e^{k_2 \eta}$$ and $$1-2 k_1 k_2-5 k_1-2 k_2=0$$ This equation is satisfied if $$k_2=-\frac{5k_1-1}{2(k_1+1)}$$ The solution then is $$u(\xi,\eta)=c(k_1)\cdot \exp(k_1 \xi)\cdot \exp(k_2 \eta)=c(k_1)\cdot \exp\left(k_1\xi-\frac{5k_1-1}{2(k_1+1)} \eta\right)$$ Finally we sum over all possible $k_1$ (may be discrete $\sum_{k_1}$ or continuous $\int_{k_1}$ ) $$u(\xi,\eta)=\sum_{k_1} c(k_1)\cdot \exp\left(k_1\xi-\frac{5k_1-1}{2(k_1+1)} \eta\right)$$ General solution in the original $x, y$ coordinates, using $\xi=y-3x$ and $\eta=y-x$ : $$\hat{u}(x,y)=\sum_{k_1} c(k_1)\cdot \exp\left(k_1(y-3x)-\frac{5k_1-1}{2(k_1+1)}(y-x)\right)$$
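Each separated mode can be verified against the canonical-form PDE (a sketch, not in the original answer): for $u=e^{k_1\xi+k_2\eta}$ the residual is $(1-5k_1-2k_2-2k_1k_2)\,u$, which vanishes with the stated $k_2$.

```python
import math

def k2_of(k1):
    # Separation constant from 1 - 2 k1 k2 - 5 k1 - 2 k2 = 0.
    return -(5 * k1 - 1) / (2 * (k1 + 1))

def residual(k1, xi, eta):
    # For u = exp(k1*xi + k2*eta): u_xi = k1 u, u_eta = k2 u, u_xi_eta = k1 k2 u,
    # so u - 5 u_xi - 2 u_eta - 2 u_xi_eta = (1 - 5 k1 - 2 k2 - 2 k1 k2) * u.
    k2 = k2_of(k1)
    u = math.exp(k1 * xi + k2 * eta)
    return (1 - 5 * k1 - 2 * k2 - 2 * k1 * k2) * u

checks = [abs(residual(k1, 0.3, -0.8)) for k1 in (0.1, 1.0, 2.5)]
```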
|partial-differential-equations|
0
Question regarding domain of trignometric function
The question is as follows: Find the domain of the function $$f(x) = \frac{1}{1 + 2 \sin(x)}$$ Since $\sin(x) \neq -\frac{1}{2}$ , the general expression for this should be as follows: $$x \neq n\pi + (-1)^{n-1}\frac{\pi}{6}$$ I derived this equation myself and it seems about right; however, the answer given in the textbook is the following: $$x \neq n\pi + (-1)^n \frac{7\pi}{6}$$ Are the above two equations exactly the same, or am I wrong? Any help would be appreciated!
They are exactly the same, yes. Generate a list of values from each: $$\begin{array}{c|c|c} n & s_1(n) = n\pi + (-1)^{n-1}\frac{\pi}{6} & s_2(n) = n\pi + (-1)^n \frac{7\pi}{6} \\ \hline 0 & - \frac \pi 6 & \color{red}{\frac{7\pi}{6}} \\ 1 & \pi + \frac \pi 6 = \color{red}{\frac{7\pi}{6}} & \pi-\frac{7\pi}{6} = -\frac{\pi}{6}\\ 2 & 2\pi - \frac \pi 6 = \color{blue}{\frac{11\pi}{6}} & 2\pi+\frac{7\pi}{6} = \color{green}{\frac{19\pi}{6}} \\ 3 & 3\pi + \frac \pi 6 = \color{green}{\frac{19\pi}{6}}& 3\pi-\frac{7\pi}{6} = \color{blue}{\frac{11\pi}{6}}\\ 4 & 4 \pi - \frac \pi 6= \color{purple}{\frac{23\pi}{6}} & 4\pi+\frac{7\pi}{6} = \color{orange}{\frac{31\pi}{6}} \\ 5 & 5 \pi + \frac \pi 6 = \color{orange}{\frac{31\pi}{6}}& 5\pi-\frac{7\pi}{6} = \color{purple}{\frac{23\pi}{6}}\\ \end{array}$$ This pattern continues: each list enumerates the same numbers, but in a slightly twisted order. Namely, if $n$ is even, then $s_1(n) = s_2(n+1)$ and $s_1(n+1) = s_2(n)$ . This is the equality hinted at by the matching colors in the table.
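The claim that the two formulas enumerate the same set can be brute-forced (a sketch, not in the original answer), comparing the values over a symmetric range of $n$:

```python
import math

def s1(n):
    return n * math.pi + (-1) ** (n - 1) * math.pi / 6

def s2(n):
    return n * math.pi + (-1) ** n * 7 * math.pi / 6

ns = range(-20, 20)
# Round to absorb floating-point noise before comparing as sets.
set1 = {round(s1(n), 9) for n in ns}
set2 = {round(s2(n), 9) for n in ns}
```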
|functions|trigonometry|
1
Can we reduce $\int_0^{\pi/2}\frac{\sqrt{\sin x}}{1+\cos x}\,dx$ to complete elliptic integrals?
This definite integral has an equivalent closed form in terms of complete elliptic integrals , $$\begin{align*} I &= \int_0^\tfrac\pi2 \frac{\sqrt{\sin x}}{1+\cos x} \, dx \\ & = 2 - \sqrt{\frac2\pi}\, \Gamma^2\left(\frac34\right) \\ &= \boxed{2 - 2\sqrt2 \, E\left(\frac1{\sqrt2}\right) + \sqrt2 \, K\left(\frac1{\sqrt2}\right)} \tag{$*$} \end{align*}$$ Q : Is there any way to algebraically reduce or transform $I$ to more readily obtain this elliptic integral form, without leaning on beta/gamma functions? Having made a similar connection recently, I'm wondering if the same can be done here. Working backward from $(*)$ , we have $$\begin{align*} I &= 2 - 2\sqrt2 \int_0^\tfrac\pi2 \sqrt{1-\frac12\sin^2t} \, dt + \sqrt2 \int_0^\tfrac\pi2 \frac{dt}{\sqrt{1-\frac12\sin^2t}} \\ &= 2 \left(1 - \int_0^\tfrac\pi2 \frac{\cos^2t}{\sqrt{1+\cos^2t}} \, dt\right) \end{align*}$$ I've tried replacing $1=\int_0^{\pi/2} f(t) \, dt$ but I'm not sure if there's a clever choice of $f(t)$ that will cooperate.
For the antiderivative $$I=\int\frac{\sqrt{\sin (x)}}{1+\cos (x)}\,dx$$ $$\sqrt{\sin (x)}=t \quad\implies\quad x=\sin ^{-1}\left(t^2\right)\quad\implies \quad dx=\frac{2 t}{\sqrt{1-t^4}}\,dt$$ $$I=\int\frac{2 t^2}{1-t^4+\sqrt{1-t^4}}\,dt=\int\frac{2 \left(1-t^4-\sqrt{1-t^4}\right)}{t^2 \left(t^4-1\right)}\,dt$$ $$I=2 \left(\frac{1-\sqrt{1-t^4}}{t}+F\left(\left.\sin ^{-1}(t)\right|-1\right)-E\left(\left.\sin^{-1}(t)\right|-1\right) \right)$$ So, for the definite integral $$J=\int_0^{\frac \pi 2}\frac{\sqrt{\sin (x)}}{1+\cos (x)}\,dx=2\left( K(-1)-E(-1)+1\right)=2-\frac{2 \sqrt{2} \pi ^{3/2}}{\Gamma \left(\frac{1}{4}\right)^2}$$
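The definite integral and the $\Gamma$-function closed form quoted in the question can be compared numerically (a sketch, not part of the original answer; composite Simpson quadrature is my choice):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

integral = simpson(lambda x: math.sqrt(math.sin(x)) / (1 + math.cos(x)),
                   0.0, math.pi / 2, 20000)

# Closed form from (*): 2 - sqrt(2/pi) * Gamma(3/4)^2.
closed = 2 - math.sqrt(2 / math.pi) * math.gamma(0.75) ** 2
```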
|integration|definite-integrals|trigonometric-integrals|elliptic-integrals|
0
Prove distributive law in Hilbert system
Using the logical axioms of the Hilbert system $\phi\to\phi$ $\phi\to(\psi\to\phi)$ $\left( \phi \to \left( \psi \rightarrow \xi \right) \right) \to \left( \left( \phi \to \psi \right) \to \left( \phi \to \xi \right) \right)$ $\left ( \lnot \phi \to \lnot \psi \right) \to \left( \psi \to \phi \right)$ $\alpha\to\beta\to\alpha\land\beta$ $\alpha\wedge\beta\to\alpha$ $\alpha\wedge\beta\to\beta$ $\alpha\to\alpha\vee\beta$ $\beta\to\alpha\vee\beta$ $(\alpha\to\gamma)\to (\beta\to\gamma) \to \alpha\vee\beta \to \gamma$ along with the inference rule modus ponens MP $\dfrac{\alpha,\alpha\to\beta}{\beta}$, how can we prove the distributive law $p\wedge(q\vee r) \leftrightarrow(p\wedge q)\vee(p\wedge r)$? I'm sure I'm probably missing something quite obvious, but I can't see how any of these axioms can prove any disjunction at all.
Some Preliminaries First: before answering the question, it will be useful to rename the axioms and adopt some conventions that will also sneak something (yet to be named) in to resolve this question and others like it: $$\begin{array}{ll} I:& a ⊃ a,\\ K:& a ⊃ b ⊃ a,\\ S:& (a ⊃ b ⊃ c) ⊃ (a ⊃ b) ⊃ a ⊃ c,\\ Z:& (¬a ⊃ ¬b) ⊃ b ⊃ a,\\ A:& a ⊃ b ⊃ a∧b,\\ π_0:& a∧b ⊃ a,\\ π_1:& a∧b ⊃ b,\\ σ_0:& a ⊃ a∨b,\\ σ_1:& b ⊃ a∨b,\\ O:& (a ⊃ c) ⊃ (b ⊃ c) ⊃ a∨b ⊃ c. \end{array}$$ Second: in here, and below, use $⊃$ for the conditional operator and adopt the conventions that it associates to the right, e.g. $a ⊃ b ⊃ c = a ⊃ (b ⊃ c)$ and that it binds more loosely than the other connectives, with $¬$ binding the strongest of all the connectives. For concreteness, we will write $f: a$ to state that $f$ is a proof of $a$ . For modus ponens, if $f: a ⊃ b$ and $g: a$ , then we write $fg: b$ . Products associate to the left, e.g. $fgh = (fg)h$ . We can also define the following: $$ \left(\begin{matrix}x: a\\y:
|propositional-calculus|formal-proofs|hilbert-calculus|
0
Referencing a statement with quantifiers in two separate lines
I want to show that a statement with several quantifiers, e.g., " $f(a, b) \leq 0$ for all $a\in [0, 3]$ and all $b\in [-\infty, -1]$ ", is equivalent to another statement with several quantifiers, e.g., " $g(a, b) \leq 0$ for all $a \in[-10, 15]$ and $b \leq 0$ ", where $f$ and $g$ are two functions from $\mathbb{R}^2 \rightarrow \mathbb{R}$ , although this is irrelevant to my question. When it comes to writing that they are equivalent and, more specifically, when referencing the first statement, I'm not sure how to do it. Which option of the two below is most frequent in mathematical writing? OPTION 1 Line 1: We have shown that Line 2: $f(a,b) \leq 0$ Eq.(1) Line 3: for all $a\in[0,3]$ and all $b\in[-\infty,-1]$ . Line 4: Now we will prove Eq.(1) is equivalent to the condition $g(a,b) \leq 0$ for all $a\in[-10,15]$ and $b \leq 0$ . Eq.(1) above refers just to “ $f(a,b) \leq 0$ ” and doesn’t include the quantifiers “for all $a\in[0,3]$ and all $b\in[-\infty,-1]$ ”. Therefore, I wonder if Line 4 can be interpreted as “Now we will prove $f(a,b) \leq 0$ is equivalent to …”, without the quantifiers.
Style guides will guide writers on such issues. This is a question about math writing, and you can check out style guides, e.g. Knuth or the American Mathematical Society or other alternatives. Here is how I might have written it: We have shown Proposition P1 : $\forall a \in [0,3] , \forall b \in [-\infty,-1] : f(a,b) \leq 0 \tag{P1}$ Now we will prove P1 is equivalent to Proposition P2 : $\forall a \in [-10,15] , b \le 0 : g(a,b) \leq 0 \tag{P2}$ Highlights : Use math symbols like $\forall$ & $\exists$ rather than "for all" & "there exists" Use math symbols like $\leq$ & $\geq$ rather than "<=" & ">=" Prefer words like Corollary & Proposition & Lemma & Theorem for conclusions & use words like Condition & Criteria within those conclusions. Use "tags" to refer later. UPDATE : When we do not want to make Propositions & it is in the flow of some argument, then this tweak can work: We have shown that : $\forall a \in [0,3] , \forall b \in [-\infty,-1] : f(a,b) \leq 0 \tag{7}$ Now we will prove that (7) is e
|logic|notation|terminology|first-order-logic|article-writing|
1
On Ramanujan's fastest series.
Context With some effort we can show that Ramanujan's fastest series implies: \begin{align} \frac{8E(k_{58})K(k_{58})}{\pi^2}-\frac{aK^2(k_{58})}{\pi^2}=\frac{\sqrt{58}}{29\pi},\tag{1} \end{align} with: $a=\left(\frac{489227532 \sqrt{58}}{{29}}- \frac{691872192\sqrt{29}}{29}+90847272\sqrt{2}-128477440\right)$ and $k_{58}=(13\sqrt{58}-99)(\sqrt{2}-1)^6.$ Some related formulas Being: \begin{align} P(q)=1-\sum_{n=1}^{\infty}\frac{24nq^{2n}}{1-q^{2n}},\tag{2} \end{align} with $q=e^{-\pi\frac{K(k')}{K(k)}}$ , $K(k)=\int_{0}^{\pi/2}\frac{dt}{\sqrt{1-k^2\sin^2t}}$ and $E(k)=\int_{0}^{\pi/2}{\sqrt{1-k^2\sin^2t}}dt$ the complete elliptic integrals of the first kind and second kind respectively and $k'=\sqrt{1-k^2}$ the complementary modulus. Then we have: \begin{align} P(q)=\frac{12E(k)K(k)}{\pi^2}+\frac{(4k^2-8)K^2(k)}{\pi^2}\tag{3}. \end{align} Ramanujan dealt with $P(q)$ proving (as Paramanand has noted) that $P(q)$ would have the form: \begin{align} P(q)=\frac{K^2(k)}{\pi^2}A(n,k)+\frac{3}{
The short answer to your question is that there is no available proof for the identity $(1)$ of your question which involves only hand calculation. In the following I present reasonable details of the work by Ramanujan as well as the Borwein brothers. In his famous paper Modular equations and approximations to $\pi$ Ramanujan discussed the evaluation of the function $P(q^2) $ where $$P(q) =1-24\sum_{n\geq 1}\frac{nq^n}{1-q^n}\tag{1}$$ for certain specific values of $q$ of the form $q=\exp(-\pi\sqrt{r}) $ where $r\in\mathbb{Q},r>0$ . The function $P(q) $ is related to Dedekind's eta function $$\eta(q) =q^{1/24}\prod_{n\geq 1}(1-q^n),\tag{2}$$ via $$P(q) = 24q\frac{d}{dq}\log\eta(q)\tag{3}$$ The topic is highly interesting and is related to the theory of elliptic integrals, which we discuss in brief. Let us start with the definitions of elliptic integrals first. Let $k\in(0,1)$ be the elliptic modulus and $k'=\sqrt{1-k^2}$ be the complementary modulus and we define $$K(k)=\int_0^{\pi/2}\frac{dx}{\sqrt{1-k
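A numerical sanity check of identity (3) from the question is possible without any special libraries. The sketch below (my own, not from the answer) evaluates $K$ and $E$ with the standard arithmetic-geometric-mean iteration and compares the $q$-series for $P$ against the elliptic-integral expression; $k = 0.6$ is an arbitrary test modulus.

```python
import math

def K_and_E(k, tol=1e-15):
    """K(k), E(k) via the standard arithmetic-geometric mean iteration."""
    a, b, c = 1.0, math.sqrt(1.0 - k * k), k
    csum, p = c * c / 2.0, 1.0       # accumulates sum of 2^(n-1) * c_n^2
    while abs(c) > tol:
        a, b, c = (a + b) / 2.0, math.sqrt(a * b), (a - b) / 2.0
        p *= 2.0
        csum += p * c * c / 2.0
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - csum)       # Legendre/AGM formula for E

k = 0.6                               # arbitrary test modulus
K, E = K_and_E(k)
Kp, _ = K_and_E(math.sqrt(1 - k * k))
q = math.exp(-math.pi * Kp / K)       # q = exp(-pi K(k')/K(k))

# P(q) = 1 - sum 24 n q^(2n)/(1 - q^(2n)), as in (2) of the question
P = 1.0 - 24.0 * sum(n * q ** (2 * n) / (1 - q ** (2 * n)) for n in range(1, 200))

# right-hand side of identity (3)
rhs = 12 * E * K / math.pi ** 2 + (4 * k * k - 8) * K * K / math.pi ** 2
```

The two sides agree to roughly machine precision.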
|elliptic-integrals|elliptic-functions|elliptic-modular-form|
0
How many permutations are possible in a 10 digit number containing 2 and 5 such that no two 2's are together
How many ten-digit whole numbers satisfy the following property: every digit is 2 or 5, and there are no consecutive 2's in the number (i.e. any two 2's are separated by at least one 5). I know a pretty good way of solving this question. Let $A_n$ denote the number of strings that can be formed following the above condition with n digits. For $A_n$ there are two possibilities: Number ending in 2 _ _ _ . . . . (2/5), 5 , 2 (n digit number) Here the last 2 digits are fixed, so the number of such strings equals $A_{n-2}$ Number ending in 5 _ _ _ . . . . (2/5), 5 (n digit number) Here the last digit is fixed, so the number of such strings equals $A_{n-1}$ So we have the recursive relation $A_n=A_{n-1}+A_{n-2}$ (n>2). Now $A_1 = 2$ , $A_2 = 3$ , and we can use the recursive relation to find $A_{10}=144$ . But the thing is, before I read this solution I wasn't able to solve it without counting the same cases again, so is there any other way apart from this method to solve this question?
You can analyse and see that the number of 2's cannot exceed 5, as otherwise there will be at least one pair of adjacent 2's. If you want to explain your problem to a high schooler, you can make an event-specific formula. Say there are n 2's. Hence there are 10-n 5's. Our favourable arrangement is like this $$× 5 × 5 × 5 × 5 × 5 × ...5 × $$ where $×$ represents a possible space where a 2 can be placed. The number of empty spaces = 10-n+1. Number of 2's = n. So the total number of arrangements is $$\sum_{n=0}^{5}\frac{^{(10-n+1)}P_n}{n!}$$ (n=0 corresponds to 5555555555). We divide by $n!$ because the 2's are identical (repetition). My answer from this also comes to 144.
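Both counts can be verified against brute force, since there are only $2^{10}$ candidate strings. A quick sketch (variable names are mine); note that $\frac{^{(11-n)}P_n}{n!} = \binom{11-n}{n}$:

```python
from itertools import product
from math import comb

# Brute force: every digit is 2 or 5, no two adjacent 2's
brute = sum(1 for s in product("25", repeat=10) if "22" not in "".join(s))

# Recurrence A_n = A_{n-1} + A_{n-2} with A_1 = 2, A_2 = 3
a, b = 2, 3
for _ in range(8):          # steps from (A_1, A_2) up to (A_9, A_10)
    a, b = b, a + b
rec = b

# Gap method: place n non-adjacent 2's into the 11 - n slots around the 5's
slots = sum(comb(11 - n, n) for n in range(6))
```

All three approaches give 144.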
|combinatorics|permutations|combinations|
0
How to approximate $\sum_{n = 1}^{\infty} \frac{n}{n^3 + x}$ for $x$ approaching $\infty$?
How to find the approximate value of $f(x) = \sum_{n = 1}^{\infty} \frac{n}{n^3 + x}$ as a function of $x$ when $x$ goes to $\infty$ ? I want a closed form function $g(x)$ s.t. $f(x) \sim g(x) $ for $x \to \infty$ . I have tried to integrate $f(x)$ : $$ \int f(x) dx= \sum_{n = 1}^{\infty} n\int\frac{1}{n^3 + x} dx = \sum_{n = 1}^{\infty} n \log(n^3 + x).$$ But this isn't helpful. Any other idea?
Two approaches: The trickiness here is that the $n^3$ term goes from not mattering to mattering. We can estimate each regime separately by breaking up the sum into before and after $x^{1/3}$ Approximately, the first part is $$\sum^{x^{1/3}}_{n=1} \frac{n}{n^3+x}\approx \sum^{x^{1/3}}_{n=1} \frac{n}{x} \approx x^{-1/3}/2$$ The second part is approximately: $$\sum^{\infty}_{n= x^{1/3}} \frac{n}{n^3+x}\approx \sum^{\infty}_{n= x^{1/3}} \frac{n}{n^3} \approx \sum^{\infty}_{n= x^{1/3}} \frac{1}{n^2} \approx x^{-1/3}$$ Combining gives an overestimate of $\frac{3}{2 x^{1/3}}$ . This is off by at most a factor of $2$ given our haphazard handling of dropping the smaller term, especially near $n=x^{1/3}$ . The second approach is to compute $\int^\infty_0 y/(y^3+x)dy$ . Substitution gives that it’s proportional to $x^{1/3}$ , and substituting $x=1$ and using Wolfram Alpha gives that the constant is actually $\frac{2\sqrt{3}\pi}{9}\approx 1.21$ which matches with the above overestimate constant of
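A rough numerical check of this estimate (my own sketch; the tail dropped from the partial sum is approximated by $1/N$): the scaled sum should land between the crude overestimate $3/2$ and below it, near the integral constant $\frac{2\sqrt{3}\pi}{9}\approx 1.2092$.

```python
from math import pi, sqrt

def f(x, N=10 ** 6):
    # partial sum; the dropped tail sum_{n>N} n/(n^3+x) is about 1/N
    return sum(n / (n ** 3 + x) for n in range(1, N + 1)) + 1.0 / N

x = 1e6
scaled = f(x) * x ** (1 / 3)          # should be O(1)
exact_const = 2 * sqrt(3) * pi / 9    # ≈ 1.2092, from the integral
```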
|calculus|sequences-and-series|taylor-expansion|
0
Given that ${\log_{10} 2}\approx {0.3010}$. Find the number of digits in $5^{44}$
In a ΜAΘ contest from 1991, I found this problem in my problem book. I know how to solve problems like this, and I know how to solve it if the problem tells me to find the digits in $2^{44}$ , but $5^{44}$ makes me think about the problem like this: $${5^{44} = \frac{10^{44}}{2^{44}}}$$ Given this, I don't think I can begin to approximate it with a change of base, since I'm only given ${\log_{10} 2}\approx {0.3010}$ , not ${\log_2 10}$ which I could calculate. Maybe I could try $44$ digits from the $10^{44}$ minus the digits in $2^{44}$ ? $${44} - 2^{44\log_210}$$ I don't think that's how division works either; I'm very lost. Please do help.
You are on the right track. Note that we want to compute $$\log_{10}(5^{44}) = 44\log_{10}(5) = 44 \log_{10}(10 / 2) = 44(\log_{10}(10) - \log_{10}(2)) = 44(1 - 0.3010) \approx 30.7$$ so we get 31 digits.
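The computation can be double-checked directly, since Python integers are exact (a quick sketch):

```python
from math import floor

log10_2 = 0.3010
# digits of 5^44 = floor(log10(5^44)) + 1, using log10(5) = 1 - log10(2)
digits_estimate = floor(44 * (1 - log10_2)) + 1   # floor(30.756) + 1
exact_digits = len(str(5 ** 44))                  # exact big-integer count
```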
|logarithms|
1
Exact sequence induced by two head-to-tail arrows
Let $f: A \to B$ and $g: B \to C$ be two arrows in an abelian category $\mathsf{A}$ . Prove that they induce an exact sequence: $$0 \to \ker f \to \ker gf \to \ker g \to \operatorname{coker} f \to \operatorname{coker} gf \to \operatorname{coker} g \to 0$$ This is exercise 8.4.6 in Categories for the Working Mathematician . Here is my attempt: This looks much like a corollary of the snake lemma. I tried to assemble them into a diagram (q.uiver link) but this does not seem to work as the middle rows are far from exact. In this diagram (again q.uiver link) the 'connecting' morphism $\ker g \to \operatorname{coker} f$ is obvious. However, there seems to be no easy method to check exactness other than laboriously chasing elements.
I was right with the idea to assemble all $f,g,gf$ into a snake lemma diagram and here is its construction (q.uiver link) . This diagram is precisely the mapping cone of this simpler diagram , which should be of interest.
|homological-algebra|
1
Baby Rudin 9.9 Matrices
The inserted picture shows the statement about linear transformation of a basis. The statement says "Then, every A $\in L(X,Y)$ determines a set of numbers $a_{ij}$ such that $Ax_{j}=\sum_{i=1}^{m}a_{ij}y_{i}$ , $\left(1\leq j\leq n\right)$ " Note that $L\left(X,Y\right)$ is the set of all linear transformations of the vector space X into the vector space Y. From the definition of $A$ , $Ax=Y$ . However, it is not clear if there exists $A$ such that $Ax_{j}=\sum_{i=1}^{m}a_{ij}y_{i} , \left(1\leq j\leq n\right)$ . Satisfying this equality condition means the following. $$ Ax_{j}=\begin{pmatrix} a_{11} & a_{12} &...& a_{1n}\\ ...&...&...&...\\ a_{m1} &a_{m2} & ...& a_{mn} \end{pmatrix} \begin{pmatrix} x_{1j}\\ x_{2j}\\ ...\\ x_{nj} \end{pmatrix}= \begin{pmatrix}a_{11}x_{1j}+...+a_{1n}x_{nj}\\ a_{21}x_{1j}+...+a_{2n}x_{nj}\\ a_{m1}x_{1j}+...+a_{mn}x_{nj} \end{pmatrix} $$ Then, $$ \sum_{i=1}^{m}a_{ij}y_{i}=\sum_{i=1}^{m}a_{ij}\begin{pmatrix}y_{1i}\\ y_{2i}\\ ...\\ y_{mi} \end{pmatrix}= \b
Note that the symbol $A$ in the equation $Ax_{j}=\sum_{i=1}^{m}a_{ij}y_{i}$ does not stand for a matrix. Here $A$ is just a linear transformation. (Generally the letter $T$ is used for a linear transformation.) What the author is actually trying to say here is: for every $T\in L(X,Y)$ , there corresponds an $m \times n$ matrix $A$ (the author writes $[A]$ for this), and the entries $a_{ij}$ of $A$ are given by $Tx_{j}=\sum_{i=1}^{m}a_{ij}y_{i}$ . This correspondence is actually a bijection between $L(X,Y)$ and $\mathbb{F}^{m\times n}$ .
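A small pure-Python illustration of this correspondence (the map $T$ is my own toy example, with the standard bases of $\mathbb{R}^3$ and $\mathbb{R}^2$ playing the roles of $\{x_j\}$ and $\{y_i\}$): column $j$ of the matrix holds the coordinates of $T(x_j)$, and the matrix so built reproduces $T$.

```python
# Toy example: T in L(R^3, R^2), standard bases on both sides
def T(v):
    x1, x2, x3 = v
    return (x1 + 2 * x2, 3 * x3 - x1)

n, m = 3, 2
basis_X = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

# a_ij is read off from T(x_j) = sum_i a_ij y_i: column j is T(x_j)
A = [[T(xj)[i] for xj in basis_X] for i in range(m)]

def matvec(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(n)) for i in range(m))

v = (2, -1, 5)
```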
|linear-algebra|linear-transformations|
1
Help with the indefinite integral $\int \frac{dx}{2x^4 + 3x^2 + 5}$
I start by rewriting the denominator, $2x^4+3x^2+5$ , as a squared term plus a constant. To do this, we notice that the first two terms already have a common factor of $2x^2$ . We can complete the square by taking half of the coefficient of our $x^2$ term, squaring it, and adding and subtracting it in the denominator. Since the coefficient of our $x^2$ term is $3$ , half of it would be $\frac{3}{2}$ , and squaring it gives us $\frac{9}{4}$ . So we add and subtract $\frac{9}{4}$ in the denominator. I started like that but I couldn't go further. Can you please help me to solve it?
Another solution Write $$2 x^4+3 x^2+5=2(x^2-a)(x^2-b)$$ with $$a=-\frac{1}{4} \left(3+i \sqrt{31}\right) \qquad \text{and} \qquad b=-\frac{1}{4} \left(3-i \sqrt{31}\right) $$ Use partial fraction decomposition $$\frac 1 {2 x^4+3 x^2+5}=\frac{1}{2 (a-b)}\left(\frac{1}{x^2-a}-\frac{1}{x^2-b} \right)$$ So, two simple integrals
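The factorization and the partial fraction decomposition are easy to sanity-check numerically with complex arithmetic (sample points below are arbitrary; the right-hand side is real up to rounding):

```python
a = -(3 + 1j * 31 ** 0.5) / 4     # so that 2x^4 + 3x^2 + 5 = 2(x^2 - a)(x^2 - b)
b = a.conjugate()

def lhs(x):
    return 1 / (2 * x ** 4 + 3 * x ** 2 + 5)

def rhs(x):
    return (1 / (2 * (a - b))) * (1 / (x ** 2 - a) - 1 / (x ** 2 - b))

max_err = max(abs(lhs(x) - rhs(x)) for x in (-2.2, -0.5, 0.3, 1.7))
```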
|integration|algebra-precalculus|quadratics|completing-the-square|
0
How to approximate $\sum_{n = 1}^{\infty} \frac{n}{n^3 + x}$ for $x$ approaching $\infty$?
How to find the approximate value of $f(x) = \sum_{n = 1}^{\infty} \frac{n}{n^3 + x}$ as a function of $x$ when $x$ goes to $\infty$ ? I want a closed form function $g(x)$ s.t. $f(x) \sim g(x) $ for $x \to \infty$ . I have tried to integrate $f(x)$ : $$ \int f(x) dx= \sum_{n = 1}^{\infty} n\int\frac{1}{n^3 + x} dx = \sum_{n = 1}^{\infty} n \log(n^3 + x).$$ But this isn't helpful. Any other idea?
If we want to find several first asymptotic terms, the Euler-Maclaurin formula is probably a shortcut to the answer: $$\sum_{n=a}^{n=b}f(n)\sim\int_a^bf(x)dx+\frac12\big(f(b)+f(a)\big)+\frac1{12}\big(f'(b)-f'(a)\big)+...$$ where all other terms are odd derivatives of $f(x)$ , taken on the bounds. In our case $\displaystyle S(x)=\sum_{n=0}^\infty\frac n{n^3+x}$ , and the bounds are $\,a=0; \,b=\infty$ . As we have only odd derivatives in the sum, analysis shows that the third surviving term $\sim\frac1{x^3}$ Therefore, $$S(x)=\int_0^\infty\frac t{t^3+x}dt-\frac1{12\,x}+O\Big(\frac1{x^3}\Big)\tag{1}$$ The evaluation of the integral is straightforward: $$\int_0^\infty\frac t{t^3+x}dt\overset{t=sx^{-1/3}}{=}x^{-1/3}\int_0^\infty \frac s{1+s^3}ds\overset{s^3=t}{=}\frac1{3x^{1/3}}\int_0^\infty\frac{t^{-1/3}}{1+t}dt$$ Making the substitution $s=\frac1{1+t}$ we are getting Beta-function $$=\frac1{3x^{1/3}}\int_0^1(1-s)^{-1/3}s^{-2/3}ds$$ Using the Euler formula for Gamma-function $$=\frac1{3x^
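The $-\frac{1}{12x}$ correction in $(1)$ can be seen numerically (my own sketch; the truncated tail of the sum is approximated by $1/N$): subtracting the partial sum from the integral term and rescaling by $x$ should give roughly $1/12$.

```python
from math import pi, sqrt

x, N = 1000.0, 10 ** 5
# partial sum plus ~1/N for the dropped tail (sum_{n>N} n/(n^3+x) ~ 1/N)
S = sum(n / (n ** 3 + x) for n in range(1, N + 1)) + 1.0 / N

leading = 2 * pi / (3 * sqrt(3)) * x ** (-1 / 3)   # the integral term of (1)
second = (leading - S) * x                          # should be close to 1/12
```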
|calculus|sequences-and-series|taylor-expansion|
1
Coupling of Random Variables Where Probability of Movement is Dependent on Position
Suppose each particle starts at position 0, and at each time period particle $i$ moves one position to the right with probability $p_{i, j}$ or moves to the left with probability $1-p_{i, j}$ , where $j$ is the position of the particle before it moves. Let $X_{n, i}$ be the position of particle $i$ after $n$ moves. If $p_{2, j}\geq p_{1, j}$ for all $j$ , show that $X_{n, 2}\geq_{st} X_{n, 1}$ for all $n$ . Is this also always true under the same conditions, but allowing the second particle to initially start to the right of the first particle? The goal is to find a coupling such that $(X_{n, 1}, X_{n, 2}) =_d (\hat{X}, \hat{Y})$ such that $\hat{X} \leq_{st} \hat{Y}$ . If such a coupling exists then it would immediately follow that $X_{n, 1} \leq_{st} X_{n, 2}$ . The difficulty for me comes from the fact that the probability of moving left/right depends on the position of the particle. I have tried creating a coupling of the form $(X_{n, 1}, X_{n, 1} + \sum_{i=1}^n I_i)$ where \b
In words, I'll define the following coupling: at each time step, we find which of the two particles is more likely to move right. Then we first decide if it moves right or left. If the more likely particle moves left, the other is forced to move left. Otherwise, the less likely particle may move right, with the appropriate probability. In notation, let $\hat{X}_0 = \hat{Y}_0 = 0$ . At each $n$ , let $\pi_n = \max(p_{1, \hat{X}_n}, p_{2, \hat{Y}_n}), \sigma_n = \min(p_{1, \hat{X}_n}, p_{2, \hat{Y}_n}) $ , and let $U_n = \mathbf{1}\{p_{1,\hat{X}_n} \le p_{2, \hat{Y}_n}\}.$ Then draw two correlated bits $$ B_n \sim \mathrm{Bern}(\pi_n), C_n = \begin{cases} 0 & B_n = 0\\ \sim \mathrm{Bern}(\sigma_n/\pi_n) & B_n = 1\end{cases},$$ and set $$\hat{X}_{n+1} = \hat{X}_n + (1-U_n) (2B_n -1) + U_n(2C_n - 1) \\ \hat{Y}_{n+1} = \hat{Y}_n + U_n (2B_n -1) + (1-U_n)(2C_n - 1).$$ The key property we will need is that $$ \forall n, P(\hat{Y}_{n+1} \ge \hat{X}_{n+1}|\hat{Y}_n = \hat{X}_n) = 1.$$ To see th
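A simulation sketch of this coupling (the position-dependent probabilities `p1`, `p2` below are my own toy choices satisfying $p_{2,j}\ge p_{1,j}$ for every $j$; any such pair works). The ordering $\hat Y_n \ge \hat X_n$ should hold on every path:

```python
import random

# Toy probabilities with p2(j) >= p1(j) everywhere (illustrative choice)
def p1(j):
    return 0.3 + 0.2 * (j % 2)

def p2(j):
    return 0.6 + 0.2 * (j % 2)

random.seed(0)
ok = True
for _ in range(200):                 # 200 coupled runs of 50 steps each
    X = Y = 0
    for _ in range(50):
        hi = max(p1(X), p2(Y))       # pi_n
        lo = min(p1(X), p2(Y))       # sigma_n
        U = p1(X) <= p2(Y)           # U_n = 1{p1 <= p2}, as in the answer
        B = random.random() < hi     # Bern(pi_n)
        C = B and (random.random() < lo / hi)   # thinned Bern(sigma_n)
        if U:                        # X gets the small bit, Y the big one
            X, Y = X + 2 * C - 1, Y + 2 * B - 1
        else:
            X, Y = X + 2 * B - 1, Y + 2 * C - 1
        ok = ok and (Y >= X)
```

Since $C \le B$ by construction, the particles never cross when they are level, which is exactly the key property stated above.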
|probability|probability-theory|random-variables|coupling|
1
Properties of the permanent of a matrix
The permanent of a square matrix is known to have the following properties - The permanent of the identity matrix is one. For any permutation matrices $P$ , $Q$ , and matrix $A$ , the permanent of $A$ equals the permanent of $PAQ$ . Is the permanent the only function of a square matrix that has these two properties? If not, what are some other examples?
This is definitely not true. Let $p(x_{11}, x_{12}, \cdots, x_{nn})$ be any symmetric polynomial of the variables such that $p(I)=1$ for the identity matrix (abusing notation, $I$ is identified with its entries); then $p$ satisfies both conditions: $PAQ$ is simply the result of switching rows and columns of $A$ , so no entries have been changed. Further conditions are needed to characterize the permanent.
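A concrete instance (my own example): the entrywise mean $p(A) = \frac{1}{n}\sum_{i,j} a_{ij}$ is a symmetric polynomial with $p(I)=1$ and is invariant under $A \mapsto PAQ$, yet it differs from the permanent already for a $2\times 2$ matrix:

```python
from itertools import permutations
from math import prod
import random

def per(A):                       # permanent, straight from the definition
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def p(A):                         # mean of all entries; p(I) = n/n = 1
    return sum(map(sum, A)) / len(A)

def permute(A, rows, cols):       # PAQ as a row/column shuffle
    return [[A[r][c] for c in cols] for r in rows]

I2 = [[1, 0], [0, 1]]
A = [[1, 2], [3, 4]]

random.seed(0)
rows = random.sample(range(2), 2)
cols = random.sample(range(2), 2)
B = permute(A, rows, cols)
```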
|matrices|permanent|
1
An easier example of a non-PID where every finitely generated ideal is principal
Say that an integral domain $\mathcal{X}$ is an almost-PID if $\mathcal{X}$ is not a PID but every finitely generated ideal of $\mathcal{X}$ is principal. The question of whether almost-PIDs exist came up in my class recently (the standard proof that elements of PIDs can be factored into irreducibles shows that from a "non-factorable" element we can construct an infinitely generated nonprincipal ideal, and students were asking whether this was necessary). The answer is yes, but the simplest example I currently know (which a colleague pointed out to me) is any nontrivial ultrapower of $\mathbb{Z}$ . By Los' Theorem each finitely generated ideal is principal, but any such ring isn't even a UFD. However, this isn't something I can present to an early abstract algebra class. Question : is there an almost-PID which can be defined, and ideally verified as an almost-PID, using only elementary techniques? By "elementary techniques" I ideally mean the material presented in the first nine chapte
Discovering an example is easy: take the widely known nonprincipal ideals $(c,x)\subset R[x],\,$ nonunit $c\in R\backslash0,\,$ and force them all principal by adjoining $\:\!x/c\,$ for all $\:\!c\neq 0,\,$ so $\,c\mid x\mid x^n\,$ for all $\:\!n> 0,\,$ so we get $\,R + xF[x],\,$ for $F$ = fraction field of $R.\,$ ACCP fails if $R\neq F\:\!$ since then there is a nonzero nonunit $\,a\in R\,$ so $\,(x)\subsetneq (x/a) \subsetneq (x/a^2) \subsetneq (x/a^3)\,\ldots,\,$ so it is not a PID, but it is easily proved Bezout as below [ generally a domain is PID $\!\iff\!$ Bezout & ACCP]. Theorem $ $ If $\:\!R\:\!$ is a Bezout domain with fraction field $F$ then $\:\!S = R+xF[x]\:\!$ is Bezout. Proof $\, $ In $F[x]\!: (f_1,g_1)\! =\! (h) \!=\! h(f,g),\,$ $\color{#c00}{\rm wlog}$ scaled so $\,f,g, h\in S.\,$ Scaling $\,af\!+bg =1 \,$ shows in $S\!:\,I=(f,g)$ contains a constant $c\neq 0,\,$ so by below $I=(d),\,$ so $(f_1,g_1) = (dh)$ . Lemma $ $ In $\:\!S\!:\:$ if constant $\,0\!\neq\! c\in(f,g)
|abstract-algebra|ring-theory|ideals|examples-counterexamples|principal-ideal-domains|
0
How many permutations are possible in a 10 digit number containing 2 and 5 such that no two 2's are together
How many ten-digit whole numbers satisfy the following property: every digit is 2 or 5, and there are no consecutive 2's in the number (i.e. any two 2's are separated by at least one 5). I know a pretty good way of solving this question. Let $A_n$ denote the number of strings that can be formed following the above condition with n digits. For $A_n$ there are two possibilities: Number ending in 2 _ _ _ . . . . (2/5), 5 , 2 (n digit number) Here the last 2 digits are fixed, so the number of such strings equals $A_{n-2}$ Number ending in 5 _ _ _ . . . . (2/5), 5 (n digit number) Here the last digit is fixed, so the number of such strings equals $A_{n-1}$ So we have the recursive relation $A_n=A_{n-1}+A_{n-2}$ (n>2). Now $A_1 = 2$ , $A_2 = 3$ , and we can use the recursive relation to find $A_{10}=144$ . But the thing is, before I read this solution I wasn't able to solve it without counting the same cases again, so is there any other way apart from this method to solve this question?
I just edited the Inclusion-Exclusion section (i.e. Method 2), to try to make it easier to understand. Extending the comment of lulu, you know that a satisfying solution will have $~k~$ 2's, where $~k \in \{0,1,2,\cdots,5\}.~$ There are two alternate methods that generalize well, Stars and Bars and Inclusion-Exclusion. $\underline{\text{Method 1: Stars and Bars}}$ Hold $~k~$ as a fixed constant. First, for Stars and Bars theory, see this article and this article . Consider the following tableau, which assumes $~k = 3,~$ for illustrative purposes: ___2__2___2__ The placement of the $~3~$ 2's creates $(k+1 = 4)$ islands of 5's. Let $~x_1,x_2,x_3,x_4~$ denote the sizes of these islands. Then the number of satisfying placements of exactly $~3~$ 2's is exactly equal to the number of solutions to $x_1 + x_2 + x_3 + x_4 = (10 - k) = 7.$ $x_1, \cdots, x_4 \in \Bbb{Z_{\geq 0}}.$ $x_2, x_3 \geq 1.$ The idea is that with the end variables $~x_1,x_4~$ ignored, there will be no occurrence of consecutiv
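The per-$k$ Stars-and-Bars counts $\binom{11-k}{k}$ can be checked by enumerating the positions of the 2's directly (a quick sketch; "no two adjacent" means consecutive chosen positions differ by at least 2):

```python
from itertools import combinations
from math import comb

total = 0
for k in range(6):      # k = number of 2's; k > 5 would force adjacent 2's
    direct = sum(
        1
        for pos in combinations(range(10), k)
        if all(b - a >= 2 for a, b in zip(pos, pos[1:]))
    )
    assert direct == comb(11 - k, k)   # matches the stars-and-bars count
    total += direct
```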
|combinatorics|permutations|combinations|
0
Showing $\int_{-1}^{1} \frac{125}{12}\sqrt[10]{\frac{1 + x}{1 - x}} (x^2 - x) \, dx = \phi\pi$
While exploring possible applications for this new trick , I stumbled upon an entire family of integrals that "always" yield $a\pi^n$ , where $a$ is an algebraic number and $n$ is a natural number. The following integral captivated me greatly: $$\boxed{\int_{-1}^{1} \frac{125}{12}\sqrt[10]{\frac{1 + x}{1 - x}} (x^2 - x) \, dx = \color{red}{\phi}\color{blue}{\pi}}\tag{1}$$ The family: $$\int_{-1}^{1} \left( \frac{1 + x}{1 - x} \right)^{\frac{1}{2}} (x^2 - x) \, dx = 0$$ $$\int_{-1}^{1} \left( \frac{1 + x}{1 - x} \right)^{\frac{1}{4}} (x^2 - x) \, dx = \frac{\pi}{8\sqrt{2}}$$ $$\int_{-1}^{1} \left( \frac{1 + x}{1 - x} \right)^{\frac{1}{6}} (x^2 - x) \, dx = \frac{10\pi}{81}$$ $$\int_{-1}^{1} \left(\frac{1 + x}{1 - x}\right)^{\frac{1}{8}} (x^2 - x) \, dx = \frac{\sqrt{3}}{100} \left(\frac{22787}{479}\right)^{\frac{1}{4}} \pi^2$$ $$\int_{-1}^{1} \left( \frac{1 + x}{1 - x} \right)^{\frac{1}{10}} (x^2 - x) \, dx = \frac{12}{125}\phi\pi$$ $$\int_{-1}^{1} \left( \frac{1 + x}{1 - x} \right)^{\f
Let's consider $$I(\alpha)=\int_{-1}^1\left(\frac{1+x}{1-x}\right)^\alpha x(x-1)dx;\,\,-1<\alpha<2.$$ Making the substitution $\frac{1+x}{1-x}=t$ $$I(\alpha)=4\int_0^\infty\frac{t^\alpha}{(1+t)^4}(1-t)dt$$ Making the substitution $x=\frac1{1+t}$ $$=4\int_0^1(1-x)^\alpha x^{2-\alpha}dx-4\int_0^1(1-x)^{\alpha+1}x^{1-\alpha}dx$$ Integrating the second term by parts $$=4\frac{1-2\alpha}{2-\alpha}\int_0^1(1-x)^\alpha x^{2-\alpha}dx=4\frac{1-2\alpha}{2-\alpha}B\big(1+\alpha;3-\alpha\big)=\frac23\frac{1-2\alpha}{2-\alpha}\Gamma(1+\alpha)\Gamma(3-\alpha)$$ Using the Euler formula for the Gamma function $$I(\alpha)=\frac23\frac{1-2\alpha}{2-\alpha}\alpha(2-\alpha)(1-\alpha)\Gamma(\alpha)\Gamma(1-\alpha)=\frac{2\pi}3\frac{\alpha(1-\alpha)(1-2\alpha)}{\sin\pi\alpha}$$ You can quickly check the answer, for example, at $\alpha=\frac16$ and get $\displaystyle\frac{10\pi}{3^4}$
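The closed form can be checked against the values listed in the question with pure arithmetic, no integration needed (a quick sketch):

```python
from math import pi, sin, sqrt

def I(alpha):   # the closed form derived above
    return (2 * pi / 3) * alpha * (1 - alpha) * (1 - 2 * alpha) / sin(pi * alpha)

phi = (1 + sqrt(5)) / 2
checks = [
    (I(1 / 2), 0.0),
    (I(1 / 4), pi / (8 * sqrt(2))),
    (I(1 / 6), 10 * pi / 81),
    (I(1 / 10), 12 * phi * pi / 125),   # the golden-ratio case (1)
]
max_err = max(abs(got - want) for got, want in checks)
```

The $\alpha=\frac1{10}$ case reproduces $\frac{12}{125}\phi\pi$ because $\sin\frac{\pi}{10}=\frac{\sqrt5-1}{4}$.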
|integration|definite-integrals|
1
For $a,b,c\in\left[\frac{1}{\sqrt{6}}, 6\right]$: $\sum_{cyc}\frac{4}{a+3b}\geq \sum_{cyc}\frac{3}{a+2b}$
For $a,b,c\in\left[\frac{1}{\sqrt{6}}, 6\right]$ prove that $$\frac{4}{a+3b}+\frac{4}{b+3c}+\frac{4}{c+3a}\geq\frac{3}{a+2b}+\frac{3}{b+2c}+\frac{3}{c+2a}.$$ I can't really find a way to exploit the given condition. I noticed we can substitute $a\leftarrow \frac{\sqrt{6}}{a}$ , $b\leftarrow \frac{\sqrt{6}}{b}$ , $c\leftarrow \frac{\sqrt{6}}{c}$ preserving the conditions on $a,b,c$ which leads us to the same inequality for $\frac{1}{a}$ , $\frac{1}{b}$ , $\frac{1}{c}$ where the terms are a bunch of weighted harmonic means. Any ideas would be appreciated.
New proof by Buffalo Way (BW). Since the inequality is homogeneous, it suffices to prove the inequality for $a, b, c \in [1, 6\sqrt{6}]$ . Since $6\sqrt{6} < 15$ , it suffices to prove the inequality for $a, b, c \in [1, 15)$ . After clearing the denominators, it suffices to prove that, for all $a, b, c \in [1, 15)$ , \begin{align*} &6\,{a}^{4}b-6\,{a}^{4}c+35\,{a}^{3}{b}^{2}+25\,{a}^{3}{c}^{2}+25\,{a}^{2}{b}^{3}-60\,{a}^{2}{b}^{2}c-60\,{a}^{2}b{c}^{2}\\ &+35\,{a}^{2}{c}^{3}- 6\,a{b}^{4}-60\,a{b}^{2}{c}^{2}+6\,a{c}^{4}+6\,{b}^{4}c+35\,{b}^{3}{c}^{2}+25\,{b}^{2}{c}^{3}-6\,b{c}^{4}\\ \ge{}& 0. \tag{1} \end{align*} Since the inequality is cyclic, assume that $c = \min(a, b, c)$ . We split into two cases. Case 1. $a \ge b \ge c$ Let $b = c + s, a = c + s + t$ for $s, t \ge 0$ . (1) is written as \begin{align*} &120\,{c}^{3}{s}^{2}+120\,{c}^{3}st+120\,{c}^{3}{t}^{2}+300\,{c}^{2}{s}^{3}+501\,{c}^{2}{s}^{2}t+321\,{c}^{2}s{t}^{2}\\ &\quad +60\,{c}^{2}{t}^{3}+240 \,c{s}^{4}+548\,c{s}^{3}t+402\,c{s
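Random sampling is no proof, but it is a useful sanity check of the cleared form (1): it should equal the difference of the two sides times the product of the six denominators, and be nonnegative on the box (a sketch; sample size and seed are arbitrary):

```python
import random

def diff(a, b, c):       # LHS - RHS of the original inequality
    return (4/(a + 3*b) + 4/(b + 3*c) + 4/(c + 3*a)
            - 3/(a + 2*b) - 3/(b + 2*c) - 3/(c + 2*a))

def poly(a, b, c):       # the cleared-denominator form (1)
    return (6*a**4*b - 6*a**4*c + 35*a**3*b**2 + 25*a**3*c**2 + 25*a**2*b**3
            - 60*a**2*b**2*c - 60*a**2*b*c**2 + 35*a**2*c**3 - 6*a*b**4
            - 60*a*b**2*c**2 + 6*a*c**4 + 6*b**4*c + 35*b**3*c**2
            + 25*b**2*c**3 - 6*b*c**4)

# (1) = diff * product of the six denominators; spot-check at (a,b,c) = (2,1,1)
den = (2+3)*(1+3)*(1+6)*(2+2)*(1+2)*(1+4)
identity_err = abs(poly(2, 1, 1) - diff(2, 1, 1) * den)

random.seed(0)
samples = [tuple(random.uniform(1, 15) for _ in range(3)) for _ in range(20000)]
min_diff = min(diff(*s) for s in samples)
min_poly = min(poly(*s) for s in samples)
```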
|inequality|summation|sum-of-squares-method|rearrangement-inequality|tangent-line-method|
0
Variance of the difference of two random variables
I have the following problem: Suppose an unbiased coin is tossed 10 times. Let D be the random variable that denotes the number of heads minus the number of tails. What is the variance of D? My solution is: Let X be the number of heads and Y be the number of tails, so $X\sim \mathrm{Bin}(10,\frac{1}{2})$ and $Y\sim \mathrm{Bin}(10,\frac{1}{2})$ and $D=X-Y$ Now $Var(D)=Var(X)+Var(Y)-2COV(X,Y)$ but since X and Y are independent we have that the covariance will be 0, i.e., $COV(X,Y)=0$ so $Var(D)=Var(X)+Var(Y)=2np(1-p)=2\cdot10\cdot\frac{1}{2}(1-\frac{1}{2})=5$ Where am I wrong in this? Any help is appreciated!
Note that $X+Y=10$ . So $D=X-Y=X-(10-X)=2X-10$ . This gives $\operatorname{Var}(D)=4\operatorname{Var}(X)$ . So $\operatorname{Var}(D)=4\cdot np(1-p)=4\cdot 10\cdot \frac{1}{2}(1-\frac{1}{2})=10$ .
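An exact check by summing over the binomial distribution (my sketch); it also exhibits the nonzero covariance of $X$ and $Y = 10 - X$, which is exactly where the independence step in the question breaks:

```python
from math import comb

n, total = 10, 2 ** 10

# exact pmf of X ~ Bin(10, 1/2); D = 2X - 10
mean_D = sum(comb(n, h) * (2 * h - n) for h in range(n + 1)) / total
var_D = sum(comb(n, h) * (2 * h - n) ** 2 for h in range(n + 1)) / total - mean_D ** 2

# Cov(X, Y) with Y = 10 - X: equals -Var(X) = -2.5, not 0
EX = n / 2
cov_XY = sum(comb(n, h) * (h - EX) * ((n - h) - EX) for h in range(n + 1)) / total
```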
|random-variables|
0
Can we use Binomial Approximations when evaluating limits?
I came across this question: Compute $$L=\lim\limits_{x\to 0}{\frac{\sqrt[3]{1+\sin^2 x} \hspace{2mm}-\sqrt[4]{1-2 \tan x}}{\sin x + \tan^2 x}}$$ Can I use binomial approximations here? As $x\to 0 \implies \sin^2 x\to 0$ $\therefore \left(1+\sin^2 x \right)^\tfrac{1}{3} \approx 1+\frac{\sin^2 x}{3}$ Similarly, $\left (1-2 \tan x \right)^\tfrac{1}{4} \approx 1-\frac{\tan x}{2}$ Now computing the limit $L$ becomes very easy, and after computation $$\boxed{L=\frac{1}{2}}$$ And I cross-checked it using a graphing calculator and the above $L$ is correct. So, is it okay to compute limits this way if I have no other options left (excluding L'Hôpital's rule)?
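A numeric spot-check of $L=\tfrac12$ near $0$ (my own sketch, evaluating the expression at a small argument):

```python
from math import sin, tan

def g(x):
    return (((1 + sin(x) ** 2) ** (1 / 3) - (1 - 2 * tan(x)) ** 0.25)
            / (sin(x) + tan(x) ** 2))

near_limit = g(1e-4)   # should be close to 1/2
```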
$$\lim_{x\to0}\dfrac{(1+\sin^2x)^{1/3}-(1-2\tan x)^{1/4}}{\sin x+\tan^2x}=\lim_{x\to0}\left[\dfrac{(1+\sin^2x)^{1/3}-1}{\sin x+\tan^2x}-\dfrac{(1-2\tan x)^{1/4}-1}{\sin x+\tan^2x}\right]$$ Now rationalize the numerator: $$\lim_{x\to0}\dfrac{(1+\sin^2x)^{1/3}-1}{\sin x+\tan^2x}=\lim_{x\to0}\dfrac{(1+\sin^2x)-1}{(\sin x+\tan^2x)}\cdot\lim_{x\to0}\dfrac1{(1+\sin^2x)^{2/3}+(1+\sin^2x)^{1/3}+1}$$ Finally, as $x\to0$ we have $\sin x\to0$ with $\sin x\ne0$ , so divide numerator & denominator by $\sin x$ : $$\lim_{x\to0}\dfrac{(1+\sin^2x)-1}{(\sin x+\tan^2x)}=\lim_{x\to0}\dfrac{\sin x}{1+\tan x\sec x}=?$$ Try rationalizing the numerator: $$\lim_{x\to0}\dfrac{(1-2\tan x)^{1/4}-1}{\sin x+\tan^2x}$$
|calculus|limits|trigonometry|limits-without-lhopital|binomial-theorem|
0
Examples of two finite magmas which satisfy the same equations but not the same quasi-equations?
Does there exist two binary operations $+$ and $*$ on $\{0,1\}$ such that $+$ and $*$ satisfy the same equations, but not the same quasi-equations? If not, are there such binary operations on a finite set of higher cardinality, and if so, what is the smallest possible cardinality? For those who don't know, a quasi-equation is a conditional statement, the antecedent of which is a conjunction of finitely many equations, and the consequent of which is a single equation.
Does there exist two binary operations + and ∗ on {0,1} such that + and ∗ satisfy the same equations, but not the same quasi-equations? If not, are there such binary operations on a finite set of higher cardinality, and if so, what is the smallest possible cardinality? (Partial) answers: No, there do not exist two $2$ -element magmas that satisfy the same identities, but not the same quasi-identities. Yes, there do exist two $5$ -element magmas that satisfy the same identities, but not the same quasi-identities. I don't know whether you can find examples of magmas with these properties that are smaller than size $5$ , but I expect that you can. The problem becomes easier if you allow me to change from the language of magmas to the language of two unary operations. I will describe two $4$ -element algebras with two unary operations that satisfy the same identities, but not the same quasi-identities. $2$ -element magmas. There is an easy reason why two $2$ -element magmas that satisfy th
|universal-algebra|
0
What is the exact mathematical definition of a fraction? Can all real numbers be expressed as fractions?
As per some sources, fractions are defined as a quotient of two numbers, whereas other sources restrict the numerator to be a whole number and the denominator to be a positive integer. What is the exact definition?
A fraction (from Latin: fractus, "broken") represents a part of a whole - Wikipedia More simply, a fraction is a notation. Let's say there are 5 apples. You have eaten 3. A way of expressing this statement is that $\frac{3}{5}$ of the 5 apples have been consumed. A form of a fraction which can be used for operations is known as a rational number. By definition, a rational number is a number which can be represented by $\frac{p}{q}$ where $q≠0$ and $p,q∈\mathbb{Z}$ . The fraction $\frac{3}{5}$ can be represented as a rational number. But not all fractions can be represented as rational numbers. A good example is this: You have a square of area 4 $m^2$ . You cut out a circle of radius 1 m from it. The area of the circle will be $π\ m^2$ . So the area of your circle is $\frac{π}{4}$ times the total area of the square. But $\frac{π}{4}$ is not a rational number, as π is an irrational number and cannot be expressed as $\frac{p}{q}.$ Like π, other irrational numbers like $\sqrt{2},\sqrt{3}$ cannot be
|real-numbers|rational-numbers|
0
Ambiguity in proving the derivative of $e^x$.
To prove the derivative of $e^x$ my math book did this $$\begin{align}{d(e^x)\over dx}&=\lim_{h\to0} \frac{e^{x+h}-e^x}{h} \tag{1} \\ &= \lim_{h\to0}e^x \cdot\frac{e^h-1}{h} \tag{2} \\ &= e^x \cdot \lim_{h\to0} {\left(1+\frac{h}{1!}+\frac{h^2}{2!}+\cdots\right)-1\over h} \tag{3} \\ &=e^x\end{align}$$ But where did the formula $\boxed{e^h=1+\frac{h}{1!}+\frac{h^2}{2!}+\frac{h^3}{3!}+\cdots}$ come from? Is it the Maclaurin series expansion? Is it $$f(x)=e^x\\ e^x= f(0)+\frac{x\cdot f'(x)\Big|_{x=0}}{1!}+\frac{x^2\cdot f''(x)\Big|_{x=0}}{2!}+\cdots$$ But I am here to find the derivative of $e^x$ , so I don't know $f'(x)$ , nor $f''(x),f'''(x)$ . So I have to stop at step $(2)$ . How is this possible? Where is the ambiguity? And also, if the above method is not the correct way to derive $d(e^x)\over dx$ , then how do I derive it from basics?
There are multiple ways to define the function $f(z) = e^z$ , all of which are equivalent and the rest of which can be proved from any single definition. For instance, some authors will define $$e^z = \sum_{k=0}^\infty \frac{z^k}{k!} \tag{1}$$ and then prove that this implies $$e^z = \lim_{n \to \infty} \left(1 + \frac{z}{n}\right)^n. \tag{2}$$ Other authors might begin with $(2)$ and show that this implies $(1)$ . Still others may define $e$ to be the unique positive real number such that $$\int_{x=1}^e \frac{1}{x} \, dx = 1 \tag{3}$$ and derive the other properties from this. Consequently, there is no single definition that precedes all of the other properties. As such, whether the calculation of the derivative of $e^x$ is circular reasoning depends on how the function is defined.
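A quick numerical illustration that definitions $(1)$ and $(2)$ agree at $z=1$ (my sketch; the limit in $(2)$ is only sampled at one large $n$, so that agreement is approximate, with error about $e/(2n)$):

```python
from math import e, factorial

# Definition (1): the series, at z = 1 (30 terms exceed double precision)
series = sum(1 / factorial(k) for k in range(30))

# Definition (2): the limit, sampled at one large n
n = 10 ** 6
limit_form = (1 + 1 / n) ** n
```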
|calculus|limits|
1
Are analytic functions with highest convergence radius at a point analytic everywhere?
This seems to be an elementary point and I have read several questions about it, but I could not find a clarifying answer, so I am asking directly. I say that a real-valued function $f$ of a real variable (defined on some interval in $\mathbb{R}$ ) is analytic at $x_0$ if it is $C^\infty$ and such that its Taylor series at $x_0$ converges on some open neighborhood $I_{x_0}$ of $x_0$ , and converges to $f$ there. Now, if we take $y_0$ in such an $I_{x_0}$ , it seems well known that $f$ is also analytic at $y_0$ . Maybe there is some confusion on the terminology, but to my understanding this should mean that there exists $I_{y_0}$ , an open neighborhood of $y_0$ (presumably $I_{x_0}$ itself), on which the Taylor expansion at $y_0$ (which is a totally different power series from the Taylor expansion at $x_0$ ) converges again pointwise to $f$ . Maybe I am missing something elementary, but I do not see any obvious proof of this fact. If someone can help me to clarify, thank you very much.
The radius of convergence around any point $z_0$ in the complex plane is the distance to the nearest singular point. For polynomials there is one singular point, at $\infty$ only, so polynomials can be re-expanded around any finite point. Unlike in real analysis, $\infty$ here is a genuine point: neighbourhoods of $0$ and $\infty$ are exchanged by the inversion $z\to 1/z$ at the unit circle. For meromorphic functions (ratios of polynomials) it is the distance to the nearest pole. For algebraic functions (defined as solutions of algebraic equations, e.g. $z\to w(z): z=w^2$ ), it is the distance to the nearest branch point or pole. In general, for series with an infinite number of nonzero coefficients, it is the distance from the point of expansion to the nearest point of divergence. For any point $z_1$ inside this circle of convergence, the series in $(z-z_0)^n$ can be reorganized in $(z-z_1)^n$ , either algebraically or by Taylor's formula, and converges in an open neighborhood of some possibly small radius. It
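A concrete illustration with $f(z)=1/(1-z)$ (my own example): re-expanded at $z_0=\tfrac12$ the coefficients become $2^{n+1}$, and the new radius is $\tfrac12$, the distance from $z_0$ to the singularity at $z=1$.

```python
# f(z) = 1/(1-z): expansion at 0 has radius 1; at z0 = 1/2 we get
# f(z) = sum 2^(n+1) (z - z0)^n, valid for |z - z0| < 1/2.
z0, z = 0.5, 0.9                 # z is inside the new disc: |z - z0| = 0.4
reexpanded = sum(2 ** (n + 1) * (z - z0) ** n for n in range(200))
exact = 1 / (1 - z)
```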
|real-analysis|analysis|analytic-functions|
0
How to approximate $\sum_{n = 1}^{\infty} \frac{n}{n^3 + x}$ for $x$ approaching $\infty$?
How to find the approximate value of $f(x) = \sum_{n = 1}^{\infty} \frac{n}{n^3 + x}$ as a function of $x$ when $x$ goes to $\infty$ ? I want a closed form function $g(x)$ s.t. $f(x) \sim g(x) $ for $x \to \infty$ . I have tried to integrate $f(x)$ : $$ \int f(x) dx= \sum_{n = 1}^{\infty} n\int\frac{1}{n^3 + x} dx = \sum_{n = 1}^{\infty} n \log(n^3 + x).$$ But this isn't helpful. Any other idea?
As suggested in the comments, we can express $f(x)$ in terms of the digamma function $\psi(x)$ as follows: \begin{align*} f(x) &= \frac{1}{{3x^{1/3} }}\left( {\psi (x^{1/3} + 1) + {\rm e}^{ - 2\pi {\rm i}/3} \psi (x^{1/3} {\rm e}^{2\pi {\rm i}/3} + 1) + {\rm e}^{2\pi {\rm i}/3} \psi (x^{1/3} {\rm e}^{ - 2\pi {\rm i}/3} + 1)} \right) \\ & = \frac{1}{{3x^{1/3} }}\left( {\psi (x^{1/3} + 1) + 2\operatorname{Re} \left( {{\rm e}^{ - 2\pi {\rm i}/3} \psi (x^{1/3} {\rm e}^{2\pi {\rm i}/3} + 1)} \right)} \right). \end{align*} By applying the standard asymptotic expansion of the digamma function and simplifying the result, we obtain the complete asymptotic expansion $$ f(x) \sim \frac{{2\pi }}{{3\sqrt 3 }}\frac{1}{{x^{1/3} }} - \sum\limits_{k = 0}^\infty {\frac{{B_{6k + 2} }}{{6k + 2}}\frac{1}{{x^{2k + 1} }}} = \frac{{2\pi }}{{3\sqrt 3 }}\frac{1}{{x^{1/3} }} - \frac{1}{{12x}} + \frac{1}{{240x^3 }} - \frac{1}{{12x^5 }} + \ldots , $$ as $x\to+\infty$ . Here, $B_k$ denotes the Bernoulli numbers.
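Not part of the original answer, but the first two terms of this expansion are easy to sanity check numerically; the truncation point $N$ below is an arbitrary choice, with a neglected tail of size $O(1/N)$:

```python
import math

def f(x, N=1_000_000):
    # Partial sum of sum_{n>=1} n/(n^3 + x); the neglected tail is O(1/N).
    return sum(n / (n**3 + x) for n in range(1, N + 1))

def g(x):
    # First two terms of the asymptotic expansion above.
    return 2 * math.pi / (3 * math.sqrt(3)) / x**(1 / 3) - 1 / (12 * x)

x = 1000.0
print(f(x), g(x))  # agreement up to roughly the size of the neglected tail
```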
|calculus|sequences-and-series|taylor-expansion|
0
An attempt for approximating the logarithm function $\ln(x)$: Could be extended for big numbers?
An attempt at approximating the logarithm function $\ln(x)$ : could it be extended to big numbers? PS: Thanks everyone for your comments and interesting answers showing how the logarithm function is currently calculated numerically, but so far nobody has answered the question I am asking, which is related to the formula below: Is it correctly calculated? Could a formula for the logarithm of large numbers be found with it? Here by "big/large numbers" I mean the same sense in which Stirling's approximation formula approximates the factorial function at large values. Intro: On a previous question I found that the following approximation could be used: $$\ln\left(1+e^x\right)\approx \frac{x}{1-e^{-\frac{x}{\ln(2)}}},\ (x\neq 0) \quad \Rightarrow \quad \dfrac{\ln\left(1+x^{\ln(2)}\right)}{\ln\left(x^{\ln(2)}\right)} \approx \frac{x}{x-1}$$ And later I noted that I could do the following:
Too long for a comment. As said in the comments, you do not need to cover a wide range of $x$ . What you can do is to use $$\log(x)=-\sum_{n=1}^\infty \frac{(-1)^n}{n}\,(x-1)^n$$ and turn it into an $[n,n]$ Padé approximant $P_n$ which, after simplification, can be written $$P_n=\frac {\sum_{k=0}^{n} a_k \,x^k } {\sum_{k=0}^{n} b_k \,x^k }$$ For example $$P_7=\frac {(x-1) \left(363+10310 x+58673 x^2+101548 x^3+58673 x^4+10310 x^5+363 x^6\right)}{70(1+x)\left(1+48 x+393 x^2+832 x^3+393 x^4+48 x^5+x^6\right) }$$ whose leading error term is $$\frac{(x-1)^{15}}{176679360}$$ For the useful range already mentioned by @marty cohen, the maximum error is $3.10\times 10^{-8}$ .
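As a sanity check (not in the original answer), the approximant can be evaluated numerically; the constant term of the second denominator factor is taken as $1$ here, which the symmetry $P_7(1/x)=-P_7(x)$ of Padé approximants to $\log$ forces:

```python
import math

def p7(x):
    # [7/7] Pade approximant of log(x) about x = 1.
    num = (x - 1) * (363 + 10310*x + 58673*x**2 + 101548*x**3
                     + 58673*x**4 + 10310*x**5 + 363*x**6)
    den = 70 * (1 + x) * (1 + 48*x + 393*x**2 + 832*x**3
                          + 393*x**4 + 48*x**5 + x**6)
    return num / den

for x in (0.5, 0.9, 1.5, 2.0):
    # Actual error versus the quoted leading error term (x-1)^15 / 176679360
    print(x, p7(x) - math.log(x), (x - 1)**15 / 176679360)
```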
|real-analysis|combinatorics|convergence-divergence|solution-verification|pochhammer-symbol|
0
What's wrong with applying our intuition for the behavior of objects in low dimension to high dimension
The following text is taken from a book about linear programming that I'm reading: A graphical illustration is useful for understanding the notions and procedures of linear programming, but as a computational method it is worthless. Sometimes it may even be misleading, since objects in high dimension may behave in a way quite different from what the intuition gained in the plane or in three-dimensional space suggests. I don't know what the author is getting at by saying that the objects' behavior differs from intuition, since my intuition for objects in higher dimension was always gained from lower dimensions. So, is there some concrete example illustrating this statement?
You are missing what the Author is trying to convey. (1) A graphical illustration is useful for understanding (by humans) the notions and procedures of linear programming, but as a computational method (by computers) it is worthless. (2) Sometimes it may (not "will") even be misleading, since objects in high dimension may (not "will") behave in a way quite different from what the intuition gained in the plane or in three-dimensional space suggests. Nobody except the Author can tell what thoughts led to that claim. I will try to give examples for these from my own thoughts. EXAMPLES: (1A) There are pairs of curves which seem to intersect once, when viewed with DESMOS or Wolfram. When analysed with calculus or other tools, it is revealed that there are 3 intersections. It might even turn out that there is some very tiny sinusoidal behaviour at the central intersection, giving even more intersections. Such things will not be visible with graphing methods.
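A classic concrete example (my addition, not from the book or the answer above): in high dimension, the ball inscribed in a cube occupies almost none of the cube's volume, i.e. nearly everything sits in the "corners", which is hard to reconcile with 2D/3D intuition:

```python
import math

def inscribed_ball_fraction(d):
    # Fraction of the unit cube [0,1]^d occupied by its inscribed ball
    # (radius 1/2): V = pi^(d/2) / Gamma(d/2 + 1) * (1/2)^d.
    return math.pi**(d / 2) / math.gamma(d / 2 + 1) / 2**d

for d in (2, 3, 10, 50):
    print(d, inscribed_ball_fraction(d))
# In 2D the disc fills pi/4 ~ 78.5% of the square; by d = 10 the fraction
# is already below 0.3%, and by d = 50 it is astronomically small.
```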
|soft-question|examples-counterexamples|linear-programming|intuition|dimensional-analysis|
0
Proof of the "Maz identity" for solving integrals
The "Maz identity" states: $$ \int_0^\infty f(x)g(x)\mathrm{d}x = \int_0^\infty \mathcal{L}\{f\}(u)\mathcal{L}^{-1}\{g\}(u)\mathrm{d}u, $$ where $\mathcal{L}$ is the Laplace transform. I came across this identity when trying to find the Mellin transform of $\sin(x)$ . The theorem turns out to be very useful, but I could not find any reference for a proof of this identity. The only references on MSE are this and this , but neither provides a derivation. This identity also appears in a recent IG post by owenmmth, for those who are interested. PS: Although not important to the question, the Mellin transform is defined as $$ (\mathcal{M}f)(s) = \int_0^\infty x^{s-1}f(x)\mathrm{d}x .$$
May this paper answer your question? NB: there are typos in the last Corollary.
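While not a proof, the identity is easy to test numerically. Below is a sketch using the (arbitrary) pair $f(x)=e^{-2x}$ and $g=\mathcal{L}\{e^{-t}\}$, i.e. $g(x)=1/(1+x)$, so that $\mathcal{L}\{f\}(u)=1/(u+2)$ and $\mathcal{L}^{-1}\{g\}(u)=e^{-u}$:

```python
import math

def simpson(fn, a, b, n=20_000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = fn(a) + fn(b)
    s += 4 * sum(fn(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(fn(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# Left side:  integral of f(x) g(x) = exp(-2x) / (1 + x)
lhs = simpson(lambda x: math.exp(-2 * x) / (1 + x), 0.0, 60.0)
# Right side: integral of L{f}(u) Linv{g}(u) = exp(-u) / (u + 2)
rhs = simpson(lambda u: math.exp(-u) / (u + 2), 0.0, 60.0)
print(lhs, rhs)  # the two sides agree
```

The upper limit 60 and the step count are arbitrary truncation choices; both integrands decay exponentially, so the truncation error is negligible.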
|integration|laplace-transform|integral-transforms|mellin-transform|
0
Let $f(x)=x^2-kx$ and $g(x)=x^3-kx$, such that for a rational number $\alpha$, $f(\alpha)$ and $g(\alpha)$ both are rational numbers. Find k
Let $f(x)=x^2-kx$ and $g(x)=x^3-kx$ , $k\in\mathbb{R^+}$ , be two real-valued functions such that for a rational number $\alpha$ , $f(\alpha)$ and $g(\alpha)$ are both rational numbers. Find the range of values that $k$ can take. My Attempt: If $k=1$ then the given condition is trivially true, but can there be other values? Let $m$ and $n$ be rational numbers such that $\alpha^2-k\alpha=m$ and $\alpha^3-k\alpha=n$ . This would imply that $k$ is rational, since $\alpha,m$ and $n$ are all rational. Also, by eliminating $k$ , we have $\alpha^2(\alpha-1)=n-m$ . But after this I reach some kind of dead end.
The statement is a bit unclear. I think that you are saying that for a fixed rational $\alpha$ , we have $f(\alpha)$ and $g(\alpha)$ are both rational. Then what possible values of $k$ are there (depending possibly on $\alpha$ )? That is, we know only that \begin{align} \alpha^2 - k \alpha &= a \in \mathbb{Q} \\ \alpha^3 - k \alpha &= b \in \mathbb{Q}. \end{align} If $\alpha = 0$ , then $k$ can be anything and this is uninteresting. Suppose now that $\alpha \neq 0$ . Then we can divide by $\alpha$ and find \begin{align} \alpha - k &= a' \in \mathbb{Q} \\ \alpha^2 - k &= b' \in \mathbb{Q}. \end{align} The first equation tells us that $k = \alpha - a' \in \mathbb{Q}$ , and thus $k$ must be rational. Clearly any rational $k$ suffices. We gain no new information from $g$ , and we conclude that $k$ can be any rational when $\alpha \neq 0$ , and otherwise $k$ has no restrictions.
|algebra-precalculus|number-theory|elementary-number-theory|polynomials|diophantine-equations|
1
Proving that, for positive integers $k_i$, there exists $x_0\in[0,\pi]$, such that $\frac12+\sum_{i = 1}^m\cos(k_ix_0)<0$
$k_i$ is a positive integer, $i=1,\ldots,m$ ; please try to prove that there exists a point $x_0 \in [0,\pi]$ such that $\frac{1}{2}+\sum\limits_{i = 1}^m {\cos ({k_i}{x_0})} < 0$ . My attempt: If the $k_i$ are all equal, this question is easy; if the $k_i$ form an arithmetic sequence, using the product-to-sum formulas we can simplify the expression $\frac{1}{2}+\sum\limits_{i = 1}^m {\cos ({k_i}{x})}$ ; for example, $\frac{1}{2} + \sum\limits_{i = 1}^n {\cos (ix)} = \frac{{\sin \frac{{(2n + 1)}}{2}x}}{{2\sin \frac{x}{2}}}$ . Now, however, the $k_i$ are arbitrary.
Let $k_1, \ldots, k_m \in \mathbb{N}_{>0}$ be (not necessarily distinct) positive integers. Define $f(\theta)$ by $$ f(\theta) = \sum_{i=1}^{m} \cos(k_i \theta). $$ Then by noting that $\int_{0}^{\pi} \cos(k\theta) \, \mathrm{d}\theta = \pi \mathbf{1}_{\{k=0\}} $ for $k \in \mathbb{Z}$ , we get $$ \int_{0}^{\pi} f(\theta) \, \mathrm{d}\theta = 0. \tag{1}\label{e:1} $$ Moreover, \begin{align*} \int_{0}^{\pi} f(\theta)^2 \, \mathrm{d}\theta &= \sum_{i = 1}^{m}\sum_{j=1}^{m} \int_{0}^{\pi} \cos(k_i\theta)\cos(k_j\theta) \, \mathrm{d}\theta \\ &= \sum_{i = 1}^{m}\sum_{j=1}^{m} \int_{0}^{\pi} \frac{\cos((k_i + k_j)\theta) + \cos((k_i - k_j)\theta)}{2} \, \mathrm{d}\theta \\ &= \sum_{i = 1}^{m} \underbrace{ \sum_{j=1}^{m} \frac{\pi}{2} \mathbf{1}_{\{ k_i = k_j \}} }_{\geq \frac{\pi}{2}} \geq \frac{m \pi}{2}. \tag{2}\label{e:2} \end{align*} So by combining $\eqref{e:1}$ and $\eqref{e:2}$ , we get \begin{align*} \frac{m\pi}{2} + \frac{\pi}{4} \leq \int_{0}^{\pi} \left( f(\theta)^2 + \frac{1}{4} \right) \mathrm{d}\theta = \int_{0}^{\pi} \left( f(\theta) + \frac{1}{2} \right)^2 \mathrm{d}\theta, \end{align*} using $\eqref{e:1}$ for the last equality. Now suppose, for contradiction, that $f(\theta) \geq -\frac{1}{2}$ for all $\theta \in [0,\pi]$ . Since also $f(\theta) \leq m$ , pointwise $\left(f(\theta)+\frac12\right)^2 \leq \left(m+\frac12\right)\left(f(\theta)+\frac12\right)$ , hence $$ \int_{0}^{\pi} \left( f(\theta) + \frac{1}{2} \right)^2 \mathrm{d}\theta \leq \left(m+\frac12\right)\int_{0}^{\pi} \left( f(\theta) + \frac{1}{2} \right) \mathrm{d}\theta = \left(m+\frac12\right)\frac{\pi}{2} = \frac{m\pi}{2}+\frac{\pi}{4}, $$ again by $\eqref{e:1}$ . So equality holds throughout, which forces $f(\theta)\in\{-\frac12, m\}$ for every $\theta$ ; by continuity $f$ would then be constant, contradicting $\eqref{e:1}$ . Therefore $f(x_0) < -\frac{1}{2}$ , i.e. $\frac12 + \sum_{i=1}^m \cos(k_i x_0) < 0$ , for some $x_0\in[0,\pi]$ .
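To illustrate the statement (my addition, with an arbitrary "disorganized" choice of exponents), a brute-force scan of $[0,\pi]$ indeed finds negative values:

```python
import math

ks = [2, 3, 5, 7, 11]  # an arbitrary choice of the k_i

def h(x):
    return 0.5 + sum(math.cos(k * x) for k in ks)

# Scan a fine grid over [0, pi] for the minimum value.
m = min(h(math.pi * i / 100_000) for i in range(100_001))
print(m)  # negative, as the integral argument guarantees
```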
|real-analysis|sequences-and-series|trigonometry|
1
Automatic knot/link identification
My program spits out links as a crossing diagram. In fact, it does have some topological data (not only graphics), since the links are constructed as sums of tangles, and instead of drawing line A to line B, I could "follow" these lines and see if at the end some Dowker code results (but my brain already hurts now). Main problem: the diagram isn't remotely in minimal form, so I would have to apply some Reidemeister-move heuristic too. (I surely won't go beyond 10 crossings, and for 9 I already have lots of paper sheets sorted by all kinds of link properties, so the hassle is reducing diagrams not yet in minimal form. Think of the eight-crossing diagram for the Borromean rings: could you reduce it with one look?) Any idea how I could do this faster than "by hand" (and I am quite fast)? Random output: $4^2_{1-}$ , far from minimal. Red lines split the tangles that were involved in construction.
For link diagrams up to 11 crossings, you could use KnotFolio , so long as the knot or link is prime and non-split, and so long as you are OK with identification up to symmetry. KnotFolio makes use of the KnotInfo and LinkInfo databases. First, I filtered your diagram in an image editor to get the diagram itself. I copy/pasted this into KnotFolio and adjusted some sliders to scale it up and blur/threshold it until all the gaps from the removed red lines went away. Then I clicked "accept" to go into image editing mode. Editing the image further turns out not to be necessary: upon clicking "convert to diagram" it manages to be interpreted as a diagram. From here, we can see that it's unambiguously L4a1. The curly braces indicate which variant it is; swapping the orientation of one of the components gives the other variant. There are other options too, such as using KLO . It doesn't have image importing, but it does let you interactively manipulate knots using Reidemeister moves.
|knot-theory|
1
$\langle T,\varphi_n\rangle\rightarrow 0$ for all distributions $T$ of finite order $\implies\varphi_n\rightarrow 0$ in $\mathcal D(\mathbb{R}^{d})$.
Let $(\varphi_n)_{n \in \mathbb{N}} \subset \mathcal{D}(\mathbb{R}^{d})$ such that $\langle T, \varphi_n \rangle \rightarrow 0$ for all distributions $T$ of finite order. Prove that $\varphi_n \rightarrow 0$ in $\mathcal{D}(\mathbb{R}^{d})$ . My attempt: We have to prove that there exists a compact $K \subset \mathbb{R}^{d}$ such that $\operatorname{supp}(\varphi_n) \subset K$ for all $n \in \mathbb{N}$ and $|D^\alpha \varphi_n(x)|\rightarrow 0$ uniformly in $K$ for all $\alpha\in\mathbb{N}^{d}$ . Let $T \in \mathcal{D}'(\mathbb{R}^{d})$ be of finite order, say $k$ . Then, there exist a compact $K_k$ and $C_{K_k}>0$ such that $|\langle T, \varphi \rangle| \leq C_{K_k}\max_{|\alpha|\leq k}\sup_{x \in K_k}|D^\alpha\varphi (x)|$ for all $\varphi \in C_{0}^{\infty}(K_{k})$ . But I don't know how to proceed.
The desired result is true. Compactly supported distributions on $\mathbb R^d$ have finite order, so $\langle T,\varphi_n\rangle\to 0$ for every $T\in\mathcal E'(\mathbb R^d)$ . And as in the post linked by PhoemueX, the fact that $\mathcal E(\mathbb R^d)$ is a Montel space implies that the sequence $(\varphi_n)_{n\in\mathbb N}$ converges to $0$ in $\mathcal E(\mathbb R^d)$ . Convergence of $(\varphi_n)_{n\in\mathbb N}$ to $0$ in $\mathcal E(\mathbb R^d)$ means that for every multiindex $\alpha\in\mathbb N^d$ , the sequence $(D^\alpha\varphi_n)_{n\in\mathbb N}$ converges to $0$ uniformly in every compact subset of $\mathbb R^d$ . So all that's left to show is that there is a compact subset of $\mathbb R^d$ which contains the support of every $\varphi_n$ . Let's proceed by contradiction and suppose that no compact subset of $\mathbb R^d$ contains the support of every $\varphi_n$ . For every $m\in\mathbb N$ , since the compact set $$K_m=[-m,m]^d\cup\bigcup_{k=0}^m\operatorname{supp}\varphi_k$$ does not contain the support of every $\varphi_n$ , we can choose an index $n_m$ and a point $x_m\notin K_m$ with $\varphi_{n_m}(x_m)\neq 0$ ; note that then $|x_m|>m$ . From here one constructs, by a recursive choice of the indices $n_m$ and of suitable scalars $c_m$ , a distribution $T=\sum_{m}c_m\delta_{x_m}$ (of order $0$ , since the $x_m$ escape to infinity) for which $\langle T,\varphi_{n_m}\rangle\not\to 0$ , contradicting the hypothesis.
|functional-analysis|distribution-theory|
1
Evaluating the integral: $\int_{0}^{\infty} \frac{|2-2\cos(x)-x\sin(x)|}{x^4}~dx$
I am interested in evaluating the following integral: $$ \int_{0}^{\infty} \frac{|2-2\cos(x)-x\sin(x)|}{x^4}~dx $$ Using Matlab, Numerically it seems that the integral is convergent, but I'm not sure about it. How can we prove that the integral is convergent or not? Many Thanks in advance.
It is a good exercise to utilize the Laplace-transform identity $$\int_0^\infty f(x)g(x)\mathrm{d}x = \int_0^\infty \mathcal{L}\{f\}(u)\mathcal{L}^{-1}\{g\}(u)\mathrm{d}u.$$ Let's apply it consecutively: \begin{align} I&=\int_{0}^{\infty} \frac{|2-2\cos(x)-x\sin(x)|}{x^4}\mathrm dx\\ &=\int_{0}^{\infty}\mathcal{L^{-1}}\left[\frac{1}{x^4}\right](\xi)\mathcal{L}\Bigl[|2-2\cos(x)-x\sin(x)|\Bigr](\xi)\mathrm d\xi\\ &=\int_{0}^{\infty}\frac{\xi^3}{3!}\frac{2}{\xi(\xi^2+1)^2}\mathrm d\xi\\ &=\frac16\int_{0}^{\infty}\frac{2\xi^2}{(\xi^2+1)^2}\mathrm d\xi\\ &=\frac16\int_{0}^{\infty}\mathcal{L^{-1}}\left[\frac{2\xi}{(\xi^2+1)^2}\right](t)\mathcal{L}[\xi](t)\mathrm dt\\ &=\frac16\int_{0}^{\infty}t\sin t\frac{1}{t^2}\mathrm dt\\ &=\frac16\int_{0}^{\infty}\frac{\sin t}{t}\mathrm dt\\ &=\frac16\frac{\pi}{2}=\boxed{\frac{\pi}{12}} \end{align} No complicated formulas from the transform table are used, only the following trivial one: $$\mathcal{L}[t\sin t]=-\frac{\mathrm d}{\mathrm ds}\left(\frac{1}{s^2+1}\right)=\frac{2s}{(s^2+1)^2}$$
|integration|
0
How are these two conditions equivalent?
I'm reading an article and I quote the author here: The condition $\sum_{n=1}^{\infty} n^t L(n) \operatorname{Pr}\left(|X|>n^{1 / r}\right) < \infty$ is equivalent to the moment condition $E\left[|X|^{(t+1) r} L(X)\right] < \infty$ . Here $t \geq 0$ and $r > 0$ , $X$ is an arbitrary random variable and $L(\cdot)$ is a slowly varying function, but that shouldn't matter here. I know that generally for positive random variables we have $\mathrm{E}[X] \sim \sum_{n=0}^{\infty} \mathrm{P}(X>n)$ , but it doesn't seem to me that there's an obvious way of rearranging terms to get that equivalence of conditions. If somebody can help, that'll be cool.
Write $\Pr\left(\lvert X\rvert>n^{1/r}\right)=\sum_{k=n}^\infty \Pr\left(k^{1/r} < \lvert X\rvert \le (k+1)^{1/r}\right)$ , hence, after swapping the order of summation, \begin{align} \sum_{n=1}^{\infty} n^t L(n) \operatorname{Pr}\left(|X|>n^{1 / r}\right) &=\sum_{n=1}^{\infty} n^t L(n)\sum_{k=n}^\infty \Pr\left(k^{1/r} < \lvert X\rvert \le (k+1)^{1/r}\right)\\ &=\sum_{k=1}^{\infty} \Pr\left(k^{1/r} < \lvert X\rvert \le (k+1)^{1/r}\right)\sum_{n=1}^{k} n^t L(n). \end{align} Since $L(\cdot)$ is slowly varying, there exist constants $c_1$ and $c_2$ such that for each $\ell\geqslant 1$ , $$c_1 2^{\ell(t+1)}L(2^\ell)\leq \sum_{n=1}^{2^\ell} n^t L(n)\leqslant c_2 2^{\ell(t+1)}L(2^\ell),$$ and then we can conclude, since $k^{t+1}L(k)$ is comparable to $\left(k^{1/r}\right)^{(t+1)r}L(k)$ .
|integration|sequences-and-series|probability-theory|
1
Question regarding the completeness theorem and ZFC
In order to prove the completeness theorem we obviously need a framework such as ZFC (I'm aware that ZFC isn't the only possibility) so that we can talk about a language $\mathcal{L}$ and also about models of $\mathcal{L}$ . Now the completeness theorem makes perfect sense to me insofar as the theory we study has a model, like first order logic ( $\mathcal{L}_=$ ), ordered fields, groups, etc. But then I had a thought. By the completeness theorem, if $\mathbf{ZFC} \models \varphi$ then $\mathbf{ZFC} \vdash \varphi$ (by $\mathbf{ZFC}$ I mean the axioms of ZFC). However, $\mathbf{ZFC} \models \varphi$ means that for any model $\mathfrak{U}$ for which $\mathfrak{U} \models \mathbf{ZFC}$ it is true that $\mathfrak{U} \models \varphi$ . But how do we know that any model of ZFC exists? Since we can't construct a model of ZFC within ZFC, how can we interpret what the completeness theorem is saying? So does the completeness theorem just not say anything about ZFC itself?
Welcome to Math.SE! The completeness theorem is a theorem of ZFC Set Theory. It says that if a first-order theory $T$ is consistent, then $T$ has a model (and vice versa). This is a conditional statement: you can use the completeness theorem to get a model of the theory only if you know that said theory is consistent. Similarly, if you know that a theory has a model, you can be sure that theory is consistent. The following is a particular instance of the theorem, obtained by taking ZFC as the theory $T$ : ZFC is consistent precisely if it has a model. This theorem of ZFC leaves open two possibilities, I. ZFC is inconsistent, and it does not have a model; II. ZFC is consistent, and it has a model; while ruling out the following others: III. ZFC is inconsistent, but it has a model; IV. ZFC is consistent, but it does not have a model. If ZFC really is free of contradictions, then it can't prove its own consistency or come up with its own model, so the axioms of ZFC do not pin down which of the possibilities I and II actually holds.
|logic|set-theory|model-theory|
1
Solving for a variable under $\Gamma(z)$
My friend came to see me regarding some calculation in probability where he would like to know if it is possible to solve for a variable analytically under the gamma function. By this I mean say we are given the value of some quantity $x$, such that $x = \dfrac{\Gamma(1 + \frac{2}{k})}{\left(\Gamma(1 + \frac{1}{k})\right)^2}$ I would like to solve this for some real number $k$. I can use the factorial to manipulate this expression and get $x = \dfrac{(\frac{2}{k})!}{{\left(\left(\frac{1}{k}\right)!\right)}^2 }$ If (a big if) I can make the substitution $n = \frac{1}{k}$, this would be equal to the central binomial coefficient. However I don't think this is possible as $\frac{1}{k}$ is not an integer in general. What else can I do? Thanks.
Sorry to be late! Consider instead that you want to solve for $k$ $$\log\Bigg( \dfrac{\Gamma(1 + \frac{2}{k})}{\Big(\Gamma(1 + \frac{1}{k})\Big)^2}\Bigg)=y \qquad \text{with} \qquad y=\log(x)\qquad (x >1)$$ If $y$ is small, $k$ will be "large". Using the expansion $$\log (\Gamma (1+\epsilon))=-\gamma\, \epsilon +\frac{\pi ^2 }{12}\epsilon ^2-\frac{\zeta (3) }{3} \epsilon ^3+\frac{\pi ^4}{360} \epsilon ^4+O\left(\epsilon^5\right)$$ you should end with $$y=\frac{\pi ^2}{6 k^2}-\frac{2 \zeta (3)}{k^3}+\frac{7 \pi ^4}{180 k^4}+O\left(\frac{1}{k^5}\right)$$ Using power series reversion $$k=\frac{\pi }{\sqrt{6y}}-\frac{6 \zeta (3)}{\pi ^2}+\frac{\pi \left(\frac{7}{10}-\frac{324 \zeta (3)^2}{\pi ^6}\right)}{\sqrt{6}}\sqrt{y}+O\left(y\right)$$ Trying for $x=1.1$ , this gives $k=3.50795$ while the "exact" solution is $k=3.50284$ . If $x$ is large, $k$ being small, use Stirling's approximation to obtain $$y=\frac{2 \log (2)}{k}+\frac{1}{2} (\log (k)-\log (2 \pi )+\log (2))-\frac{k}{8}+O\left(k^3\right)$$
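A small numerical cross-check (my own, not part of the original answer): bisection on the exact equation versus the quoted three-term reversion, for $x = 1.1$:

```python
import math

def x_of_k(k):
    # x(k) = Gamma(1 + 2/k) / Gamma(1 + 1/k)^2, decreasing in k on [1, 50].
    return math.gamma(1 + 2 / k) / math.gamma(1 + 1 / k)**2

def solve_k(x, lo=1.0, hi=50.0):
    # Plain bisection; assumes x_of_k(lo) > x > x_of_k(hi).
    for _ in range(200):
        mid = (lo + hi) / 2
        if x_of_k(mid) > x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def k_series(x):
    # Three-term series reversion quoted above (valid for x close to 1).
    y = math.log(x)
    z3 = 1.2020569031595943  # zeta(3)
    return (math.pi / math.sqrt(6 * y) - 6 * z3 / math.pi**2
            + math.pi * (7 / 10 - 324 * z3**2 / math.pi**6) * math.sqrt(y) / math.sqrt(6))

print(solve_k(1.1), k_series(1.1))  # both close to 3.5, as quoted above
```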
|special-functions|gamma-function|
0
Superderivative of $G^\infty$ maps $\mathbb{R}^{1,1}_\infty\to\mathbb{R}_\infty$
I am following Rogers's Supermanifolds: Theory and Applications and I might be getting something wrong, because I reach a definition that, as I understand it, doesn't imply what the author states. Letting $\mathbb{R}_{\infty}$ be a real Grassmann algebra generated by anticommuting $\{\xi_j\}_{j\in\mathbb{N}}$ , with even and odd parts $\mathbb{R}_{\infty,0}$ and $\mathbb{R}_{\infty,1}$ respectively, we call superspace the product $$\mathbb{R}^{m,n}_{\infty}:= (\mathbb{R}_{\infty,0})^{\times m}\times(\mathbb{R}_{\infty,1})^{\times n}$$ and we specify a point in it as $({\bf x};{\bf \xi})$ for ${\bf x}\in (\mathbb{R}_{\infty,0})^{\times m}$ and ${\bf \xi}\in (\mathbb{R}_{\infty,1})^{\times n}$ . Meanwhile we denote by $G^\infty(U)$ the $\mathbb{R}_\infty$ -module of maps $U\to\mathbb{R}_\infty$ for any open $U$ in a DeWitt supermanifold. Furthermore, the author defines the superderivative $\mathcal{D}$ acting on functions from $\mathbb{R}^{1,1}_\infty$ .
After further reading and waking up this morning with an epiphany, I have arrived at the conclusion that the problem is twofold: on the one hand I was misunderstanding some notions, and on the other the author committed a typo. Here I present the solution. First and foremost, I got wrong what is meant by $\mathcal{D}^2$ . The space of derivations $D(\mathbb{R}^{1,1})$ to which $\mathcal{D}^2$ belongs is not a super algebra over $\mathbb{R}^{1,1}$ but a super Lie module over $\mathbb{R}^{1,1}$ , so by $\mathcal{D}^2$ one should understand $[\mathcal{D},\mathcal{D}]$ , for the corresponding super Lie bracket $[\cdot,\cdot]$ . The latter, for any two derivations $\delta_1, \delta_2$ and with $|\cdot|$ being the $\mathbb{Z}_2$ -graded degree, is specifically defined as $$[\delta_1, \delta_2] :=\delta_1 \delta_2 - (-1)^{|\delta_1||\delta_2|}\delta_2\delta_1.\tag{1}$$ Secondly, the author seems to have committed a typo, as confirmed by the expressions used later in the same book.
|graded-modules|supergeometry|supermanifolds|
1
Evaluation of limit $ \lim_{n\rightarrow \infty}\frac{(2n+1)(2n+3)\cdots (4n+1)}{(2n)(2n+2)\cdots (4n)}$
Evaluation of limit $\displaystyle \lim_{n\rightarrow \infty}\frac{(2n+1)(2n+3)\cdots (4n+1)}{(2n)(2n+2)\cdots (4n)}$ What I try: $\displaystyle \lim_{n\rightarrow \infty}\frac{(2n-1)!}{(2n-1)!}\frac{(2n)(2n+1)(2n+2)\cdots (4n+1)}{\bigg((2n)(2n+2)(2n+4)\cdots (4n)\bigg)^2}$ $\displaystyle \lim_{n\rightarrow \infty}\frac{(4n+1)!\cdot (2n-1)!}{((4n)!)^2}$ How do I solve it? Please have a look at this problem. Thanks
Here is a method without Stirling's formula. Just use the inequality $$0 < t-\log(1+t) < \frac{t^2}{2}, \qquad t > 0.$$ Let $$x_n=\frac{(2n+1)(2n+3)\cdots (4n+1)}{(2n)(2n+2)\cdots(4n)},$$ then $$\log x_n=\sum_{k=0}^{n}\log\left(1+\frac{1}{2n+2k}\right).$$ Since $$0 < \frac{1}{2n+2k}-\log\left(1+\frac{1}{2n+2k}\right) < \frac{1}{2(2n+2k)^2}\le\frac{1}{8n^2},$$ we have $$0 < \sum_{k=0}^{n}\frac{1}{2n+2k}-\log x_n < \frac{n+1}{8n^2}\longrightarrow 0.$$ So $$\lim_{n\to\infty}\log x_n=\lim_{n\to\infty}\sum_{k=0}^{n}\frac{1}{2n+2k} =\frac12\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n}\frac{1}{1+\frac{k}{n}} =\frac12\log2.$$ Hence $$\lim_{n\to\infty}\frac{(2n+1)(2n+3)\cdots (4n+1)}{(2n)(2n+2)\cdots(4n)} =\lim_{n\to\infty}e^{\log x_n}=\sqrt2.$$
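For what it's worth, the limit can also be checked numerically (my addition); computing in log space with `log1p` keeps the product accurate:

```python
import math

def x_n(n):
    # (2n+1)(2n+3)...(4n+1) / ((2n)(2n+2)...(4n)), via the log-sum above.
    return math.exp(sum(math.log1p(1 / (2 * n + 2 * k)) for k in range(n + 1)))

for n in (10, 100, 10_000):
    print(n, x_n(n))  # approaches sqrt(2) = 1.41421356...
```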
|limits|
1
Urn draws with replacement problem: number of drawn white balls till we draw black ball for the k-th time
In the urn we have $a$ white, $b$ red and $c$ black balls. We draw with replacement. Calculate the expected number of white balls which are drawn until we draw a black ball for the $k$-th time. My solution: The expected number of draws till we draw a black ball for the $k$-th time is: $$\left(\frac{a+b+c}{c} \right)^k$$ Is that correct? I don't really know how to continue. The below is obviously wrong due to comments. In the sequence of draws there are $$\left( \frac{a+b+c}{c} \right)^k-k$$ possibilities to draw white or red. Therefore $$\frac{b}{a+b+c}\left(\left(\frac{a+b+c}{c} \right)^k-k \right)$$ of them are white. Is my solution correct?
To summarize the discussion in the comments: The red balls are irrelevant, so let's ignore them. And let's use $W,B$ for the number of white and black balls respectively. Let $E_k$ denote the answer we seek, for a given $k$ . We first solve the case $k=1$ . Considering the result of the first draw, we have $$E_1=\frac B{W+B}\times 0+\frac W{W+B}\times (1+E_1)\implies E_1=\frac WB$$ But it is clear that $E_k=E_{k-1}+E_1$ so we have $$\boxed {E_k=\frac {kW}B}$$ Sanity checks: if $W=0$ this vanishes, as it must. If $B=0$ this is infinite, as again it must be. Also, it's easy to see that replacing $(W,B)$ by $(nW, nB)$ has no effect on the expectation, and the formula reflects that. As mentioned in the comments, the idea behind your approach is fine, though you get the initial expectation wrong. Specifically, the expectation is additive in $k$ , not multiplicative as you have written. Once you correct this, you should be able to make your computation go through.
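The formula $E_k = kW/B$ is easy to corroborate with a quick Monte Carlo simulation (my addition; the reds are omitted since, as noted, they are irrelevant):

```python
import random

def whites_before_kth_black(W, B, k, rng):
    # Draw with replacement until the k-th black; count the whites seen.
    whites = blacks = 0
    while blacks < k:
        if rng.random() < B / (W + B):
            blacks += 1
        else:
            whites += 1
    return whites

rng = random.Random(12345)
W, B, k, trials = 2, 1, 3, 200_000
est = sum(whites_before_kth_black(W, B, k, rng) for _ in range(trials)) / trials
print(est, k * W / B)  # the estimate hovers around kW/B = 6
```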
|probability|
0
Evaluation of limit $ \lim_{n\rightarrow \infty}\frac{(2n+1)(2n+3)\cdots (4n+1)}{(2n)(2n+2)\cdots (4n)}$
Evaluation of limit $\displaystyle \lim_{n\rightarrow \infty}\frac{(2n+1)(2n+3)\cdots (4n+1)}{(2n)(2n+2)\cdots (4n)}$ What I try: $\displaystyle \lim_{n\rightarrow \infty}\frac{(2n-1)!}{(2n-1)!}\frac{(2n)(2n+1)(2n+2)\cdots (4n+1)}{\bigg((2n)(2n+2)(2n+4)\cdots (4n)\bigg)^2}$ $\displaystyle \lim_{n\rightarrow \infty}\frac{(4n+1)!\cdot (2n-1)!}{((4n)!)^2}$ How do I solve it? Please have a look at this problem. Thanks
First recall that as $x\to\infty$ , $$\frac{\Gamma\left(x+\frac12\right)}{\Gamma(x)}\sim\sqrt x.$$ Therefore, as $n\to\infty$ , $$\begin{align}\frac{(2n+1)(2n+3)\cdots (4n+1)}{(2n)(2n+2)\cdots (4n)}&=\frac{\left(n+\frac12\right)\left(n+\frac32\right)\cdots\left(2n+\frac12\right)}{n(n+1)\cdots (2n)}\\&=\frac{\Gamma\left(2n+\frac32\right)/\Gamma\left(n+\frac12\right)}{\Gamma(2n+1)/\Gamma(n)}\\&=\frac{\Gamma\left(2n+\frac32\right)/\Gamma(2n+1)}{\Gamma\left(n+\frac12\right)/\Gamma(n)}\\ &\sim\sqrt{2n+1}/\sqrt n \\&\to\sqrt2. \end{align}$$
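Both the Gamma-quotient identity and the limit can be confirmed numerically with `math.lgamma` (my addition), writing the products as $\Gamma(2n+\tfrac32)/\Gamma(n+\tfrac12)$ and $\Gamma(2n+1)/\Gamma(n)$:

```python
import math

def ratio(n):
    # The ratio as a direct product.
    r = 1.0
    for j in range(n + 1):
        r *= (2 * n + 1 + 2 * j) / (2 * n + 2 * j)
    return r

def gamma_form(n):
    # Same quantity: Gamma(2n+3/2)/Gamma(n+1/2) over Gamma(2n+1)/Gamma(n).
    return math.exp(math.lgamma(2 * n + 1.5) - math.lgamma(n + 0.5)
                    - math.lgamma(2 * n + 1) + math.lgamma(n))

for n in (5, 50, 5000):
    print(n, ratio(n), gamma_form(n))  # columns agree; both tend to sqrt(2)
```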
|limits|
0
"Continuous composition" of Lie Bracket
Let $A,X\in M_n(\mathbb{R})$ . We denote by $[A,X]=AX-XA$ the commutator. It is indeed a Lie bracket for the matrix Lie algebra. Taking $A$ constant, I'm looking for "the flow" of the commutator. That is to say, a "natural" function $\Phi([A,\cdot{}],t)$ with $t\in\mathbb{R}$ such that if $t=n\in\mathbb{N}$ : $$\Phi([A,\cdot{}],n)(X) = [A,\;\dots\;[A, [A,X]]\dots\;]=[(A)^n,X]$$ If such a function exists, is there a "good" way to "extend it" for a matrix $A$ changing continuously as the composition goes on? That is, for $A:t\mapsto A(t)\in \mathcal{C}([t_0,t_f],M_n(\mathbb{R}))$ being a continuous function. To sum up my question: For $A$ constant: $$\Phi([A,\cdot{}],t)(X) = [(A)^t,X]=\;?$$ And for a continuous matrix $A$ : $$\left(\mathop{\bigcirc}\limits_{t_0}^{t}\Phi([A(s),\cdot{}],ds)\right)(X)=\left(\mathop{\bigcirc}\limits_{t_0}^{t}[(A(s))^{ds},\cdot]\right)(X)=\;?$$ Thank you in advance for your kind help.
I would argue that finding a "natural way" to define such an object is a lost cause in most cases, even when the matrix is taken to be constant: Consider $A\in M_n(\mathbb{R})$ ; $\operatorname{ad}_A:=[A,\cdot]$ is an element of $\mathfrak{gl}(M_n(\mathbb{R}))$ . As such it can be represented as an element of $M_{n^2}(\mathbb{R})$ , say $\tilde{A}$ . The morphism $\operatorname{ad}_A^n$ is then given by taking the $n^{th}$ power of $\tilde{A}$ for $n\in\mathbb{N}$ . In this context, extending the construction to the reals in a "natural way" would amount to defining $\tilde{A}^t$ for $t\in\mathbb{R}$ . This is fine when $\tilde{A}$ is diagonalizable with strictly positive eigenvalues, say $\tilde{A}=PDP^{-1}$ with $P$ invertible and $D$ diagonal. One could set $\tilde{A}^t=PD^tP^{-1}$ with $D^t$ being the diagonal matrix whose diagonal entries are those of $D$ raised to the power $t$ . In this construction strictly positive eigenvalues are needed when $t$ is negative.
|limits|lie-algebras|matrix-calculus|function-and-relation-composition|
0
Finding the Crossing number of $K_{4,4}$
I am struggling to find the crossing number of the complete bipartite graph $K_{4,4}$ . The best range I can get to is $\text{cr}(K_{4,4}) \leq 16$ (obtained by putting all vertices onto a regular octagon), which is still messy of course. Any help is appreciated!
We see that $cr(K_{4,4})\leq4$ by placing the two partite sets on two perpendicular lines (the standard drawing). Next, if the crossing number were less than $4$, removing some $3$ edges from $K_{4,4}$ would result in a planar graph. Now, by Euler's formula ( $v-e+f=2$ ), we conclude that this new graph would have $7$ faces. Since the original graph is bipartite, every face must be bounded by at least $4$ edges (bipartite graphs have only even cycles). Therefore the sum of the face degrees satisfies $\sigma \geq 28$. At the same time, each of the $13$ remaining edges borders at most two faces, so $\sigma \leq 2\cdot 13=26$, a contradiction. Thus, $cr(K_{4,4}) = 4$ .
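The counting in this argument can be spelled out in a few lines (my addition):

```python
# K_{4,4} minus 3 edges, assuming (for contradiction) that it is planar.
v = 8           # vertices of K_{4,4}
e = 16 - 3      # edges left after removing 3 of the 16
f = 2 - v + e   # faces forced by Euler's formula v - e + f = 2

# Bipartite planar: every face has length >= 4, while summing face lengths
# counts each edge exactly twice.
lower = 4 * f   # minimum possible sum of face lengths
upper = 2 * e   # actual sum of face lengths
print(f, lower, upper)  # 7, 28, 26 -- impossible, so at least 4 crossings
```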
|combinatorics|graph-theory|np-complete|
0
How to find the angle between a tangent to a curve and the Y axis
Let's say there is a curve whose curvature is given as $k = 0.01$. What I want to calculate is the angle between the tangent drawn at a point $P$, which is at a distance $X$ along the curve, and the Y axis (shown as $\theta$ in the picture). Can any relation be established among $\theta$, $X$, and $k$ (or $R$, the radius)?
The slope of the tangent line to a curve $y=f(x)$ is the derivative, so the angle between the curve and the $X$-axis at a point is $\arctan\left(\frac{df}{dx}(x)\right)$. If you know the equation of your curve, try to write it as $y=f(x)$ and calculate the derivative $\frac{df}{dx}(x)$.
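When the curvature is constant, as in the question ($k=0.01$, so $R=1/k=100$), there is a clean relation: traversing arc length $X$ along the curve turns the tangent by exactly $kX$ radians. A small sketch (my addition; the placement of the circle is an arbitrary assumption):

```python
import math

k = 0.01
R = 1 / k  # radius of curvature, 100

def tangent_angle(s):
    # Arc-length parametrization of a circle of radius R through the origin:
    # (R sin(s/R), R - R cos(s/R)); its unit tangent is (cos(s/R), sin(s/R)).
    return math.atan2(math.sin(s / R), math.cos(s / R))

X = 30.0
turned = tangent_angle(X) - tangent_angle(0.0)
print(turned, k * X)  # the tangent has turned by k*X = 0.3 rad
```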
|geometry|
0
Extending scalars to get $\mathbb{C}\bigotimes_\mathbb{R} \mathbb{R}^{2n}\cong \mathbb{C}^{2n}$ as $\mathbb{C}$-modules
I am going through some lecture notes in commutative algebra. I am struggling with one basic example which I want to understand fully before going further. The example is the following: We take the $\mathbb{C}$ -module $\mathbb{C}^n.$ We restrict scalars to $\mathbb{R}$ and obtain $\mathbb{R}^{2n}.$ Then we extend scalars to $\mathbb{C}$ and obtain $\mathbb{C}\bigotimes_\mathbb{R} \mathbb{R}^{2n}\cong \mathbb{C}^{2n}.$ I guess I understand the first (restriction) part of this example. We think of the natural embedding $f: \mathbb{R}\to\mathbb{C}.$ That allows us to view the $\mathbb{C}$ -module $\mathbb{C}^n$ as an $\mathbb{R}$ -module via the action $r(c_1,\ldots,c_n):=(f(r)c_1,\ldots,f(r)c_n)=(rc_1,\ldots,rc_n)$ for $\forall r\in\mathbb{R},~(c_1,\ldots, c_n)\in \mathbb{C}^n.$ This $\mathbb{R}$ -module is isomorphic to $\mathbb{R}^{2n}$ via the $\mathbb{R}$ -linear map $(a_1+b_1i,\ldots,a_n+b_ni)\mapsto (a_1,b_1,a_2,b_2,\ldots, a_n,b_n).$ Now I want to take this $\mathbb{R}$ -module $\mathbb{R}^{2n}$ and extend scalars to $\mathbb{C}$ , and I would like to see explicitly why $\mathbb{C}\bigotimes_\mathbb{R} \mathbb{R}^{2n}\cong \mathbb{C}^{2n}$ as $\mathbb{C}$ -modules.
When you work with tensor products, always use the universal property. First define a bilinear map $\mathbb{C}\times\mathbb{R}^{2n}\to\mathbb{C}^{2n}$ . The map is very natural, simply $(\lambda, (x_1,...,x_{2n}))\to(\lambda\cdot x_1,...,\lambda\cdot x_{2n})$ . It is clearly bilinear over $\mathbb{R}$ , and so induces a homomorphism $\varphi:\mathbb{C}\otimes_{\mathbb{R}}\mathbb{R}^{2n}\to\mathbb{C}^{2n}$ of $\mathbb{R}$ -modules. By a direct computation, we see that actually it is even a map of $\mathbb{C}$ -modules. (Note, you don't get that from the universal property; this is just something you check by hand.) In order to show $\varphi$ is an isomorphism, we define an inverse map. This is also quite natural. Given an element $(a_1+ib_1,...,a_{2n}+ib_{2n})=(a_1,...,a_{2n})+i(b_1,...,b_{2n})$ of $\mathbb{C}^{2n}$ , send it to the following element: $1\otimes (a_1,...,a_{2n})+i\otimes(b_1,...,b_{2n})$ It's very easy to check that this is an inverse of $\varphi$ .
|commutative-algebra|modules|tensor-products|
1
Find the number of positive integer solutions to $\displaystyle{\displaylines{3x + 2y + z = 2021}}$
I was wondering if there is any way to do the following problem: Find the number of positive integer solutions to $3x + 2y + z = 2021$. As $2y$ is even and $2021$ is odd, $3x + z$ must be odd, so I have tried to set $3x + z = 2m - 1$. Substituting $3x + z = 2m - 1$ into $3x + 2y + z = 2021$ gives $m + y = 1011$. There should be $1010$ pairs of $(m,y)$, excluding the iterations with $0$. I do not know how to move on from here... Please help me, thanks!
I don't know the correct answer but I got 337 pairs of solutions, so please correct me if I made any mistake in any step. First, take $2y+z = 2k$ where $k$ is any integer. I took $2y+z = 2k$ because the sum $2021$ is odd, so if $3x$ is odd we require $2y+z$ to be even. So $3x+2y+z=2021$ can be written as $3x+2k=2021$ . Now $\gcd(3,2)=1$ and $1$ divides $2021$ , so there are integer solutions to this. Now we can list the solutions: $x_0 = 1$ and $k_0 = -1$ . So the general solutions will be of the form $x = 2021 + 2t$ , $k = -2021 - 3t$ , where $t$ is any integer. Also $-673>t>-1011$ , so the value of $t$ will lie between $-673$ and $-1011$ . So there will be 337 pairs according to me. Please feel free to point out any mistake in my solution; this is my first answer to the questions on this website. I will try to provide the best and easiest solution to any questions. Thank you and have a nice day ahead.
|combinatorics|elementary-number-theory|
0
How to find angle between tangent on curve an Y axis
Let's say there is a curve whose curvature is given as $k = 0.01$ . What I want to calculate is the angle between the tangent drawn at a point $P$ , which is at a distance of $X$ along the curve, and the Y axis (shown as θ in the picture). Can any relation be established among θ, X, and K or R (the radius)?
There isn't enough information, as the slope of the tangent line dictates the angle. The slope is not dependent on the curvature nor the position. You can move the point P around and it will move the center of curvature to keep the radius $R$ as specified. If you can find the slope of the tangent line $m$ , then $$\theta = \tfrac{\pi}{2} - \tan^{-1}( m) $$ Please note that there might be an inherent assumption in the question that the center of curvature lies on the horizontal axis somewhere. There is never a mathematical reason for this to occur in general, other than for a specific problem for which this is prescribed.
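The formula in the answer is easy to check on a couple of known slopes (the helper name below is ad hoc): a tangent of slope $1$ makes a $45°$ angle with the $y$ axis, and a horizontal tangent is perpendicular to it.

```python
import math

def angle_with_y_axis(m):
    # angle θ between a tangent line of slope m and the y axis:
    # θ = π/2 − arctan(m)
    return math.pi / 2 - math.atan(m)

assert math.isclose(angle_with_y_axis(1.0), math.pi / 4)   # 45 degrees
assert math.isclose(angle_with_y_axis(0.0), math.pi / 2)   # horizontal tangent
```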
|geometry|
0
Find dimension of $\mathbb{Q}(\sqrt{2}, i)$ over $\mathbb{Q}$
One of the tasks that I have is to first (1) find the minimal polynomial of $\sqrt{2} + i$ over $\mathbb{Q}$ , and (2) find the degree of $\mathbb{Q}(\sqrt{2}, i)$ over $\mathbb{Q}$ , i.e. $[\mathbb{Q}(\sqrt{2}, i) : \mathbb{Q}]$ . For (1), I first find the minimal polynomial as follows: If $x = \sqrt{2} + i$ , then \begin{align*} x^2 &= (\sqrt{2} + i)^2 \\ x^2 &= 2 + 2\sqrt{2}i - 1 = 1 + 2\sqrt{2}i \\ x^2 - 1 &= 2\sqrt{2}i \\ (x^2 - 1)^2 &= (2\sqrt{2}i)^2 \\ x^4 - 2x^2 + 1 &= -8 \\ x^4 - 2x^2 + 9 &= 0 \end{align*} So the minimal polynomial of $\sqrt{2} + i$ is $x^4 - 2x^2 + 9$ . Now for (2), I know I can use the fact that \begin{equation*} [\mathbb{Q}(\sqrt{2}, i) : \mathbb{Q}] = [\mathbb{Q}(\sqrt{2}, i) : \mathbb{Q}(\sqrt{2} + i)][\mathbb{Q}(\sqrt{2} + i) : \mathbb{Q}] \end{equation*} in which, because $x^4 - 2x^2 + 9$ is the minimal polynomial of $\sqrt{2} + i$ over $\mathbb{Q}$ , then $[\mathbb{Q}(\sqrt{2} + i) : \mathbb{Q}] = 4$ . Now the only issue that I am run
It is trivial that $\mathbb Q\left(\sqrt2+i\right)\subset \mathbb Q\left(\sqrt2,i\right)$ . Conversely, since $$\frac 3{\sqrt 2+i}=\sqrt2-i,$$ we know that $\sqrt2-i$ belongs to $\mathbb Q\left(\sqrt2+i\right)$ and hence $\sqrt2$ and $i$ are in $\mathbb Q\left(\sqrt2+i\right)$ .
|abstract-algebra|extension-field|
0
Find the number of positive integer solutions to $3x + 2y + z = 2021$
I was wondering if there is any way to do the following problem: Find the number of positive integer solutions to $3x + 2y + z = 2021$ . As $2y$ is even, $3x + z$ must be odd, so I have tried to set $3x + z = 2m - 1$ . Substituting $3x + z = 2m - 1$ into $3x + 2y + z = 2021$ gives $m + y = 1011$ . There should be $1010$ pairs of $(m,y)$ , excluding the iterations with $0$ . I do not know how to move on from here... Please help me, thanks!
Here is one way. Since $3x+2y+z=x+(x+y)+(x+y+z)$ , the problem is transformed to finding how many positive solutions of $a+b+c=2021$ satisfy $0<a<b<c$ . Without any restriction, there are $2020\choose 2$ positive solutions. Since 2021 is not divisible by 3, it is impossible to have $a=b=c$ . Now consider the cases of one equal pair, say $a=b$ ; for instance $a=b=1$ forces $c=2019$ . There are $1010$ choices for the common value and $3\choose 2$ positions for the equal pair. Hence, we need to exclude ${3\choose 2}\cdot 1010 = 3030$ solutions. Now the remaining ${2020\choose 2}-3030$ solutions satisfy $a\ne b$ , $b\ne c$ , $a\ne c$ . Dividing this by $3!$ gives the solution that you want. Did I make any mistake above?
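To answer the closing question: a quick brute-force check (an illustrative script, not part of the post) agrees with this count, i.e. $\left(\binom{2020}{2}-3030\right)/6 = 339360$:

```python
from math import comb

# direct enumeration of positive (x, y, z) with 3x + 2y + z = 2021
count = 0
for x in range(1, 674):                 # 3x <= 2018 forces x <= 672
    for y in range(1, (2021 - 3 * x) // 2 + 1):
        z = 2021 - 3 * x - 2 * y
        if z >= 1:
            count += 1

# the combinatorial count from the argument above
assert (comb(2020, 2) - 3 * 1010) // 6 == 339360
assert count == 339360
```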
|combinatorics|elementary-number-theory|
0
Find dimension of $\mathbb{Q}(\sqrt{2}, i)$ over $\mathbb{Q}$
One of the tasks that I have is to first (1) find the minimal polynomial of $\sqrt{2} + i$ over $\mathbb{Q}$ , and (2) find the degree of $\mathbb{Q}(\sqrt{2}, i)$ over $\mathbb{Q}$ , i.e. $[\mathbb{Q}(\sqrt{2}, i) : \mathbb{Q}]$ . For (1), I first find the minimal polynomial as follows: If $x = \sqrt{2} + i$ , then \begin{align*} x^2 &= (\sqrt{2} + i)^2 \\ x^2 &= 2 + 2\sqrt{2}i - 1 = 1 + 2\sqrt{2}i \\ x^2 - 1 &= 2\sqrt{2}i \\ (x^2 - 1)^2 &= (2\sqrt{2}i)^2 \\ x^4 - 2x^2 + 1 &= -8 \\ x^4 - 2x^2 + 9 &= 0 \end{align*} So the minimal polynomial of $\sqrt{2} + i$ is $x^4 - 2x^2 + 9$ . Now for (2), I know I can use the fact that \begin{equation*} [\mathbb{Q}(\sqrt{2}, i) : \mathbb{Q}] = [\mathbb{Q}(\sqrt{2}, i) : \mathbb{Q}(\sqrt{2} + i)][\mathbb{Q}(\sqrt{2} + i) : \mathbb{Q}] \end{equation*} in which, because $x^4 - 2x^2 + 9$ is the minimal polynomial of $\sqrt{2} + i$ over $\mathbb{Q}$ , then $[\mathbb{Q}(\sqrt{2} + i) : \mathbb{Q}] = 4$ . Now the only issue that I am run
Your original question seems to be about just the dimension of $K=\mathbb{Q}(\sqrt{2},i)$ over $\mathbb{Q}$ , not about finding a primitive element for $K$ . If so, to conclude that $[K:\mathbb{Q}]=4$ it is enough to note that we have a tower $$ \mathbb{Q}\subset\mathbb{Q}(\sqrt{2})\subset\mathbb{Q}(\sqrt{2},i) $$ in which each step is quadratic, and the degree is multiplicative in towers.
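As a quick computer check (a sketch using SymPy, not part of the original answer): the minimal polynomial of $\sqrt 2 + i$ over $\mathbb{Q}$ works out to $x^4 - 2x^2 + 9$ (note the arithmetic $(2\sqrt 2\, i)^2 = -8$), and its degree $4$ is consistent with the tower argument.

```python
import sympy as sp

x = sp.symbols('x')

# minimal polynomial of sqrt(2) + i over Q
p = sp.minimal_polynomial(sp.sqrt(2) + sp.I, x)

assert sp.expand(p - (x**4 - 2 * x**2 + 9)) == 0
assert sp.degree(p, x) == 4
```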
|abstract-algebra|extension-field|
0
How to integrate a function or its derivatives with respect to itself?
How do you evaluate the following integrals? $\int f df$ $\int f' df$ $\int f'' df$ Here $f(x)$ is a function of $x$ , and $f', f''$ are the first and second derivatives with respect to $x$ . Regarding integral 1, I think that the answer is $(1/2)f^2$ , but I get stuck with the other two.
As User203940 noted, consider \begin{equation} f^{\prime}(x) = \frac{\mathrm{d}f}{\mathrm{d}x} \Leftrightarrow \mathrm{d}f = f^{\prime}(x)\mathrm{d}x \end{equation} then you can re-write your integrals as \begin{align*} \int g(x)\,\mathrm{d}f = \int g(x)f^{\prime}(x)\,\mathrm{d}x \end{align*} where $g(x)$ denotes $f(x)$ , $f^{\prime}(x)$ , $f^{\prime\prime}(x)$ or any other function of interest. Using, e.g., integration by parts the integral can be evaluated as \begin{equation} \int g(x)f^{\prime}(x)\,\mathrm{d}x = g(x)f(x) - \int g^{\prime}(x)f(x)\,\mathrm{d}x \end{equation} For the special case $g(x) = f(x)$ , then \begin{equation} \int f(x)f^{\prime}(x)\,\mathrm{d}x = f(x)f(x) - \int f^{\prime}(x)f(x)\,\mathrm{d}x = f^{2}(x) - \int f(x)\,\mathrm{d}f = f^{2}(x) - \frac{1}{2}f^{2}(x) + C = \frac{1}{2}f^{2}(x) + C \end{equation} where in the step $\int f(x)\,\mathrm{d}f$ the function $f$ itself was considered as an independent variable. For sure there are more discussions about similar topi
|calculus|
1
Ricci Equation $(\overline R(U,V)X,Y) = (R^\nabla(U,V)X,Y) - (B_U X, B_V Y) + (B_V X, B_U Y)$
Let $f:(M, g) \rightarrow (N, h)$ be a pseudo-Riemannian immersion, $\overline D$ the linear connection on $f^*(TN)$ induced from the Levi-Civita connection of $h$ , and $\nabla$ the connection induced on the normal bundle $NM$ (which is the orthogonal complement of $TM$ in $f^*(TN)$ ). Consider also the second fundamental form $$\mathrm{II}: TM\otimes TM\rightarrow NM$$ given by $$\mathrm{II}(U,V) = \mathcal N(\overline D_U V),$$ where $\mathcal N$ is the orthogonal projection onto $NM$ . Finally, let the tensor $B:TM\otimes NM \rightarrow TM$ be given by $g(B_U X, V) = -g(\mathrm{II}(U,V),X)$ . Then, how can I prove the Ricci Equation $$(\overline R(U,V)X,Y) = (R^\nabla(U,V)X,Y) - (B_U X, B_V Y) + (B_V X, B_U Y),$$ where $U,V$ are vector fields on $M$ , $X, Y$ are sections of $NM$ , and $\overline R$ and $R^\nabla$ are the curvatures of $\overline D$ and $\nabla$ respectively c.f. Arthur Besse’s “Einstein Manifolds” p. 38 Theorem 1.72 e)?
This answer serves mainly to satisfy my own curiosity, especially since at this stage in the book Besse already introduces connections in vector bundles and exterior covariant derivatives (you should probably read/familiarize yourself with the stuff in this answer first). The following setup is motivated by Ivo Terek’s notes (you can also find a similar setup in an appendix to Taylor’s PDE volume 2). But I’ll second @Ted Shifrin’s comment that you really should consult a standard differential geometry text for a more down-to-earth presentation. Tldr; for Ricci’s equation specifically, if you don’t find it in a textbook, then simply look at how Gauss’ equation is proved. The proof can be mimicked almost line by line, because these really are the same thing once you interchange the roles of the tangent and normal bundles. The second fundamental form/shape tensor in Riemannian geometry captures the extrinsic properties of a submanifold. But more strictly speaking, we’re looking at “how” t
|differential-geometry|
1
Maximizing $(x-y)^2(y-z)^2(z-x)^2$ where $x^2+y^2+z^2=1$
The problem is to maximize $(x-y)^2(y-z)^2(z-x)^2$ over all $(x,y,z)\in \mathbb{R}^3$ satisfying $x^2+y^2+z^2=1$ . Since the domain is compact and the function is continuous, it is certain that extrema must exist. Plus I figured out that the maximum is $\frac{1}{2}$ , attained e.g. at $(x,y,z) = (\frac{1}{\sqrt{2}}, 0, -\frac{1}{\sqrt{2}})$ , by Lagrange multipliers. However, the calculation with Lagrange multipliers was quite long. This problem is from past problems of a regional math contest which does not assume calculus knowledge. Is there an elementary way to get the maximum? Thanks in advance for any form of help, hint, or solution.
WLOG, suppose that $x\ge y \ge z$ and denote $$L:= \left((x-y)(y-z)(x-z)\right)^2$$ Applying the AM-GM inequality to $(x-y),(y-z)$ and $(x-z)/2$ : $$\begin{align} &(x-y)(y-z)(x-z)\le 2\cdot \left(\frac{(x-y)+(y-z)+\frac{x-z}{2}}{3} \right)^3 =\frac{(x-z)^3}{4}\\ \implies &L\le \frac{(x-z)^6}{16}\end{align}$$ We have also that $$(x-z)^2 \le 2\left(x^2+(-z)^2 \right)\le 2(x^2+y^2 +z^2) = 2$$ We deduce then $$\color{red}{L \le \frac{1}{2}}$$ The equality occurs if and only if $\cases{x = -z\\y = 0 \\x^2+y^2 +z^2 = 1} \iff (x,y,z) = \left(\frac{1}{\sqrt 2},0,-\frac{1}{\sqrt 2} \right)$ And of course you can also permute $x$ , $y$ , $z$ and obtain six solutions.
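A quick numerical sanity check of the bound (an illustrative script, not part of the answer): evaluate the objective at the claimed maximizer and confirm by random sampling on the unit sphere that nothing exceeds it.

```python
import math, random

def f(x, y, z):
    # the objective ((x-y)(y-z)(z-x))^2
    return ((x - y) * (y - z) * (z - x)) ** 2

s = 1 / math.sqrt(2)
best = f(s, 0.0, -s)
assert math.isclose(best, 0.5)          # value at the claimed maximizer

random.seed(1)
for _ in range(100_000):
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    r = math.sqrt(x * x + y * y + z * z)
    # normalized point lies on the unit sphere; should never beat 1/2
    assert f(x / r, y / r, z / r) <= best + 1e-12
```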
|inequality|contest-math|
0
Extending homomorphism from from splitting field of a polynomial to another field where that polynomial splits into linear factor
Let $k$ be a field, $f(x)\in k[x]$ and let $F$ be the splitting field for $f(x)$ over $k$ . Let $k\subset K$ be an extension such that $f(x)$ splits as a product of linear factors over $K$ . Prove that there is a homomorphism $F\to K$ extending the identity on $k$ . As $F$ is a minimal splitting field I guess there is an embedding of $F$ inside $K$ , but I have no idea what else I could write.
As $K$ contains all roots of $f$ , the roots generate a splitting field $E\subset K$ of $f$ by definition. As splitting fields are unique up to isomorphism, the composition $F\xrightarrow{\sim}E\subset K$ is a desired embedding.
|field-theory|extension-field|splitting-field|
0
What is the exact mathematical definition of a fraction? Can all real numbers be expressed as fractions?
As per some sources fractions have been defined as a quotient of two numbers whereas some sources restrict the numerator to be a whole number and denominator to be a positive integer. What is the exact definition?
A fraction is generally anything that can be written in the form $\frac {something}{otherthing}$ , where the value is the ratio between the two things; another way of thinking about it is one thing divided by another. For instance $\frac 8{16}$ is the ratio of $8$ to $16$ , or the value of $8\div 16$ , and has the value of one-half, or, if you want to use decimals (as your text obviously wants to do), the value of $0.5$ . Another example: if you take a circle's circumference $C$ and its diameter $d$ , then the ratio of $C$ to $d$ , or $\frac Cd$ , will always have the value we call $\pi$ , with $\pi =\frac Cd$ . $\frac Cd$ is a fraction. If we wanted to shrink $\pi$ to a number one-fourth its size we could write $\frac \pi 4$ . As you can see, a fraction can be formed between any two real numbers. We can even write a fraction as $\frac 70$ even though there is no such possible value. It is an invalid and wrong fraction and cannot be any possible value, but it is still a fraction. Just a wrong and usel
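In programming terms, the stricter definition (integer numerator, nonzero integer denominator) is what Python's `fractions.Fraction` implements; under that definition $\pi$ can only be approximated, and a zero denominator is rejected outright (a small illustrative sketch):

```python
from fractions import Fraction

half = Fraction(8, 16)                  # the ratio of 8 to 16, reduced to 1/2
assert half == Fraction(1, 2)
assert float(half) == 0.5

# pi is not a ratio of integers; 355/113 is only a (famous) approximation
approx_pi = Fraction(355, 113)
assert abs(float(approx_pi) - 3.141592653589793) < 1e-6

# 7/0 has no possible value: an integer-ratio fraction rejects it outright
try:
    Fraction(7, 0)
    raise AssertionError("should not get here")
except ZeroDivisionError:
    pass
```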
|real-numbers|rational-numbers|
1
What does it mean when a number 'y' is pseudoprime to base 'x'
I am a self-learner, so I don't really understand pseudoprimes to base $x$ . For example, $91$ is a pseudoprime to base $3$ ; then is $91$ also a pseudoprime to base $2$ ? Thank you, please explain. edit** I did a bit of research. For example, an odd composite integer $N$ will be called a Fermat pseudoprime to base $a$ if $\gcd(a, N) = 1$ and $a^{N−1} \equiv 1 \pmod{N}$ . My question is: what about base $2$ ? Do we use the same formula, i.e. compute $2^{90}$ and divide by $91$ ? If I don't get a remainder of $1$ , it is not a pseudoprime, right? But when I plugged it in on the calculator, the number is too big, so how can I find the remainder on a calculator?
To calculate powers modulo $n$ you can use modular arithmetic , i.e. reduce the results of multiplications or exponentiations modulo $n$ so that the numbers do not get too big. If you want to use a calculator for $2^{90} \bmod 91$ you can split $90$ into factors $10\cdot3\cdot3$ and use the formula $x^{ab} = (x^a)^b$ : calculate $2^{10} = 1024$ and subtract $1001$ , which is a multiple of $91$ ; result: $23 = 2^{10} \bmod 91$ . Calculate $23^3 = 12167$ and subtract $133\cdot91$ ; result: $64 = (2^{10})^3 \bmod 91$ . Calculate $64^3 = 262144$ and subtract $2880\cdot 91$ ; result: $64 = ((2^{10})^3)^3 \bmod 91 = 2^{90} \bmod 91$ . $\Rightarrow 91$ is not a pseudoprime to base 2. Subtraction of any multiples of 91 (or $n$ in the general case) from intermediate results does not change the final result, as long as the numbers stay small enough for the required exponentiation.
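The whole computation above is one call to Python's built-in three-argument `pow`, which performs modular exponentiation without ever forming the huge intermediate number:

```python
# Fermat pseudoprime test for N = 91 to bases 3 and 2
N = 91
assert pow(3, N - 1, N) == 1    # 91 IS a Fermat pseudoprime to base 3
assert pow(2, N - 1, N) == 64   # remainder is 64, not 1: NOT one to base 2

# the same result by repeated squaring "by hand", as in the answer
assert pow(2, 10, 91) == 23     # 1024 - 1001 = 23
assert pow(23, 3, 91) == 64     # 12167 - 133*91 = 64
assert pow(64, 3, 91) == 64     # 262144 - 2880*91 = 64
```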
|combinatorics|discrete-mathematics|pseudoprimes|
0
Calculate this limit by first principles
I have to calculate this complex limit by first principles, i.e., without using sophisticated tricks like L'Hospital's rule, Stirling's formula, the gamma function, etc.: $$ \lim_{n \to \infty} \frac{[(n+1)!]^2 (2e^{i \theta})^{2n}}{(2n)(2n+1)!} $$ where $\theta$ is a fixed real number.
The limit of the absolute value tends to infinity, so the expression is not convergent. Indeed, let $\displaystyle a_n={4^n[(n+1)!]^2\over 2n(2n+1)! }.$ For $n\ge 4$ we get $${a_n\over a_{n-1}}= {(n+1)^2(2n-2)\over n^2\left (2n+1\right)}=1+{n^2-2n-2\over n^2(2n+1)}\ge 1+{1\over 8n},$$ where the last inequality holds because $8n(n^2-2n-2)-n^2(2n+1)=6n^3-17n^2-16n\ge 0$ for $n\ge 4.$ Thus $$a_n\ge \left (1+{1\over 8n}\right ) \left (1+{1\over 8(n-1)}\right )\ldots \left (1+{1\over 8\cdot 4}\right )a_3$$ By multiplying out the product and dropping nonessential positive terms we get $$a_n\ge \left [1+{1\over 8}\left ({1\over 4}+{1\over 5}+\ldots +{1\over n}\right )\right ]a_3$$ The expression inside the brackets tends to $\infty$ hence $a_n\to \infty.$
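The divergence can be watched numerically by building $a_n$ from the ratio above in exact rational arithmetic (an illustrative script; the helper name is ad hoc):

```python
from fractions import Fraction

def ratio(n):
    # a_n / a_{n-1} = (n+1)^2 (2n-2) / (n^2 (2n+1)), computed exactly
    return Fraction((n + 1) ** 2 * (2 * n - 2), n ** 2 * (2 * n + 1))

a3 = Fraction(4 ** 3 * 24 ** 2, 6 * 5040)   # a_3 = 4^3 ((3+1)!)^2 / (2*3 * 7!)
a = a3
for n in range(4, 200):
    assert ratio(n) >= 1 + Fraction(1, 8 * n)   # the key estimate
    a *= ratio(n)

assert a > 2 * a3   # a_199 has already grown well past a_3
```

Each factor exceeds $1 + \frac{1}{8n}$, so the partial products grow like a tail of the harmonic series, exactly as the argument predicts.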
|real-analysis|calculus|limits|limits-without-lhopital|
1
Is there a linear operator $T$ such that $T(x^n) =f(n)x^{n-1}$?
When I first learnt calculus I was so surprised to learn that there is a meaningful mathematical operator $D$ such that $$ D(x^n)= n x^{n-1}. $$ It seems to be a very random thing to multiply the function by the exponent and subtract one from the exponent to get the correct result. Again, this sounded very strange to me when I first learnt about derivatives; I mean, if I had been asked before I learnt the calculus of $D$ , I would have dismissed this as a random useless property. Back then I thought about this question: is there a mathematical operator $T$ such that $$ T(x^n) =f(n)x^{n-1}\;? $$ After $6$ years now I remembered the question and I modified it as follows: can a linear operator $T$ , defined on the vector space of all polynomials and power series, be such that $$ T(x^n) =f(n)x^{n-1}\quad \ n \ne 0 $$ for any continuous $f$ on $\mathbb{R}$ , and can this operator be represented via known linear operator(s) (derivatives, integrals, etc.), and known functions and $f$ ? Af
It might depend on the function $f$ to be considered ultimately, but it is actually quite easy to construct such a linear operator $T$ when $f$ can be expanded into some power series (like Taylor series), i.e. $f(t) = \sum_k a_kt^k$ . Indeed, since $(xD)x^n = nx^n$ (meaning that $x^n$ is the eigenfunction of $xD$ associated to the eigenvalue $n$ ), one has $f(xD)x^n = f(n)x^n$ , hence formally $$ T_f = x^{-1}f(xD) = x^{-1} \sum_k a_k(xD)^k. $$ For example with $f = \exp$ : $$ T_{\exp}(x^n) = x^{-1}e^{xD}x^n = x^{-1} \sum_{k=0}^\infty \frac{(xD)^k}{k!} x^n = x^{-1} \sum_{k=0}^\infty \frac{n^k}{k!} x^n = e^nx^{n-1} $$ It is to be noted that the above power expansion may also contain negative powers (cf. Laurent series). In that case, the inverse operator $(xD)^{-1} = D^{-1}x^{-1}$ has to be understood as $$ (xD)^{-1}\phi(x) = \int \frac{\phi(x)}{x} \,\mathrm{d}x, $$ hence for instance with $f(t) = 1/t$ : $$ T_f(x^n) = x^{-1}(xD)^{-1}x^n = x^{-1} \int \frac{x^n}{x} \,\mathrm{d}x = x^{-1}
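A small symbolic check of the eigenvalue identity $f(xD)x^n = f(n)x^n$ for $f=\exp$, truncating the exponential series (function names below are ad hoc, and the truncation depth is an arbitrary choice):

```python
import math
import sympy as sp

x = sp.symbols('x')

def xD(p):
    # the Euler operator x * d/dx
    return sp.expand(x * sp.diff(p, x))

def T_exp(p, terms=40):
    # x^{-1} * sum_{k < terms} (xD)^k / k!  applied to p
    total, q = sp.Integer(0), p
    for k in range(terms):
        total += q / sp.factorial(k)
        q = xD(q)
    return sp.expand(total / x)

n = 3
result = T_exp(x**n)                   # should be (approximately) e^3 * x^2
coeff = float(result.coeff(x, n - 1))
assert math.isclose(coeff, math.exp(n), rel_tol=1e-12)
```

Since $(xD)^k x^3 = 3^k x^3$, the truncated sum reproduces $e^3 x^2$ to within the (tiny) series tail, matching $T_{\exp}(x^n)=e^n x^{n-1}$.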
|real-analysis|calculus|functional-analysis|operator-theory|
0
Find the number of positive integer solutions to $3x + 2y + z = 2021$
I was wondering if there is any way to do the following problem: Find the number of positive integer solutions to $3x + 2y + z = 2021$ . As $2y$ is even, $3x + z$ must be odd, so I have tried to set $3x + z = 2m - 1$ . Substituting $3x + z = 2m - 1$ into $3x + 2y + z = 2021$ gives $m + y = 1011$ . There should be $1010$ pairs of $(m,y)$ , excluding the iterations with $0$ . I do not know how to move on from here... Please help me, thanks!
The equation $3x+2y+z=2021$ is a linear Diophantine equation, so we can use available tools to write down all the solutions. First start with writing $3x+2y = 2021-z$ . Since $3$ and $2$ are relatively prime, this has integer solutions for any integer $z$ . To obtain all the solutions, we can write $1$ as integer combination of $3$ and $2$ from Euclid's algorithm. I'm joking a bit, that's how you do it in general, but here we clearly have $$1 = 3 - 2 = 3\cdot 1 + 2\cdot (-1)\implies 3(2021-z)+2(z-2021) = 2021 - z$$ which gives us particular solutions for $x$ and $y$ , so from theory of linear Diophantine equations , we know that all integer solutions are given by \begin{align} x &= 2021-n-2m\\ y &= n-2021+3m\\ z &= n \end{align} where $m,n$ are arbitrary integers. It's clear that different choices of $m,n$ will yield different solutions. That combined with the fact that we need only positive integer solutions tells us that the number of desired solutions is equal to the number of integ
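Counting the integers $m,n$ that make all three coordinates positive can be done by machine (an illustrative script, cross-checked against direct enumeration over $(x,y)$):

```python
# positivity constraints from the parametrization:
#   x = 2021 - n - 2m >= 1,  y = n - 2021 + 3m >= 1,  z = n >= 1
count = 0
for m in range(1, 1010):
    lo = max(2022 - 3 * m, 1)       # lower bound on n, from y >= 1 and z >= 1
    hi = 2020 - 2 * m               # upper bound on n, from x >= 1
    if hi >= lo:
        count += hi - lo + 1

# direct enumeration over (x, y), with z determined, for comparison
direct = sum(1
             for xx in range(1, 674)
             for yy in range(1, (2020 - 3 * xx) // 2 + 1)
             if 2021 - 3 * xx - 2 * yy >= 1)

assert count == direct == 339360
```

Outside $1 \le m \le 1009$ the interval $[lo, hi]$ is empty, so the loop range loses nothing.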
|combinatorics|elementary-number-theory|
1
$A_{\mathfrak{p}}$-submodules of the residue field $k = A_{\mathfrak{p}}/\mathfrak{p}A_{\mathfrak{p}}$?
I was trying to calculate the length of the $A_{\mathfrak{p}}$ -module $A_{\mathfrak{p}}/\mathfrak{p}A_{\mathfrak{p}}$ . It seems that the only proper $A_{\mathfrak{p}}$ -submodule must be the zero module, but I can't figure out why. I have a feeling that the proof is really simple but I am not seeing it. I would appreciate any enlightenment. To give context, I was trying to calculate the order of vanishing of $z$ in $\mathbb{A}_{\mathbb{C}}^1$ at the origin.
You know that there is an inclusion-preserving one-to-one correspondence between the set of all prime ideals $I$ of $A$ with $I\cap(A-P)=\emptyset$ and the set of all prime ideals of $A_P$ . Therefore $PA_P$ is the unique maximal ideal of $A_P$ . So $k=A_P/PA_P$ is a field. Since $(PA_P)k=0$ , the $A_P/PA_P$ -submodules of $k$ and the $A_P$ -submodules of $k$ are the same. So $l_{A_P}(k)=l_{k}(k)=1$ .
|algebraic-geometry|commutative-algebra|intersection-theory|
0
Closed form expressions for $T_n$ and $S_n$ of a Fibonacci-like sequence
I am a student curious about recurrence relations who has just bumped into the Intermediate 1st year (grade 11). I derived the closed form expressions for the $n^{th}$ term and the sum of $n$ terms of a Fibonacci-like sequence of the form $(a,b,a+b,a+2b,2a+3b,...)$ by substituting $a_n=x^n$ into the characteristic recurrence relation: $a_{n+2}=a_{n+1}+a_n$ . I want to know what this method ( $a_n=x^n$ ) is called, and if the below mentioned expressions are already documented somewhere. I would also like to know if there are simpler versions of these expressions. $$T_n=\sum_{r \in \{{\phi,\psi\}}}{r^n \left(\frac{T_1}{3r+1}+\frac{T_2}{r+2}\right)}$$ $$S_n=\left(\sum_{r\in\{\phi,\psi\}}{r^n\left(\frac{T_1+T_2}{3r+1}+\frac{T_1+2T_2}{r+2}\right)}\right)-T_2$$ where, $T_n = n^{th}$ term of the Fibonacci-like sequence. $S_n =$ sum of $n$ terms of the Fibonacci-like sequence. $\phi,\psi$ are the roots of the equation: $x^2-x-1=0$ Also, I saw other answers mentioning the closed form expression
I once tried to find the sum up to the $n$th term for Fibonacci, and I would suggest a much simpler way. Here was my approach: $F_{k} = F_{k+1} - F_{k-1}$ $F_{k-1} = F_{k} - F_{k-2}$ $F_{k-2} = F_{k-1} - F_{k-3}$ $F_{k-3} = F_{k-2} - F_{k-4}$ $F_{k-4} = F_{k-3} - F_{k-5}$ ... $F_{3} = F_{4} - F_{2}$ $F_{2} = F_{3} - F_{1}$ $F_{1} = F_{2} - F_{0}$ Therefore, the sum telescopes, cancelling every term except $F_{k+1} + F_{k} - F_{1} - F_{0} $ , which simplifies to $F_{k+2}-F_{2} = F_{k+2} - 1$ . Now, for a closed form expression you can simply plug this into Binet's formula and get: $$S_{k}= \frac{\left(\frac{1+\sqrt5}{2}\right)^{k+2} - \left(\frac{1-\sqrt5}{2}\right)^{k+2}}{\sqrt5} -1 $$
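Both the telescoping identity $\sum_{j=1}^{k} F_j = F_{k+2}-1$ and its Binet form can be checked quickly (an illustrative script, using the convention $F_0 = 0$, $F_1 = 1$):

```python
import math

def fib(n):
    # F_0 = 0, F_1 = 1, F_2 = 1, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi, psi = (1 + math.sqrt(5)) / 2, (1 - math.sqrt(5)) / 2

for k in range(1, 25):
    s = sum(fib(j) for j in range(1, k + 1))
    assert s == fib(k + 2) - 1                                  # telescoped sum
    binet = (phi ** (k + 2) - psi ** (k + 2)) / math.sqrt(5) - 1
    assert math.isclose(s, binet, rel_tol=1e-9)                 # closed form
```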
|sequences-and-series|recurrence-relations|terminology|fibonacci-numbers|
0
If $x^2-16\sqrt x =12$ what is the value of $x-2\sqrt x$?
I saw this problem: If $x^2-16\sqrt x =12$ , what is the value of $x-2\sqrt x$ ? To make notation simpler, let $x=t^2$ , so $t^4-16t=12$ , and let $l= t^2-2t$ . I tried to use the fact that $\frac{12} l =\frac{t^4-16t}{t^2-2t}= t^2+2t +4 - \frac{8}{t-2}= (t-2)^2 + 6t - \frac{8}{t-2}$ and to find some similarities between the cubic in $l$ and the original quartic polynomial, but I didn't find anything useful; needless to say, solving the quartic polynomial is not an option, as the general solution is too complicated. Btw the answer is $2$ , and since this is a "clean" and "nice" answer for a quartic polynomial there must be some trick
$$\begin{align*} & x^2 - 16\sqrt{x} - 12 = 0, \qquad x \ge 0 \\ \Leftrightarrow\ & x^2 - 4x - 2x - 4\sqrt{x} - 12 + 6x - 12\sqrt{x} = 0 \\ \Leftrightarrow\ & \left( x + 2\sqrt{x} \right)\left( x - 2\sqrt{x} \right) - 2\left( x + 2\sqrt{x} \right) + 6\left( x - 2\sqrt{x} - 2 \right) = 0 \\ \Leftrightarrow\ & \left( x + 2\sqrt{x} \right)\left( x - 2\sqrt{x} - 2 \right) + 6\left( x - 2\sqrt{x} - 2 \right) = 0 \\ \Leftrightarrow\ & \left( x - 2\sqrt{x} - 2 \right)\left( x + 2\sqrt{x} + 6 \right) = 0 \ \Rightarrow\ x - 2\sqrt{x} = 2 \end{align*}$$
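A numeric cross-check (an illustrative script; bisection on $t=\sqrt x$): the relevant root is $t = 1+\sqrt 3$, since $t^2-2t-2=0$ there, and $x - 2\sqrt x$ indeed comes out as $2$.

```python
import math

def g(t):
    # substituting t = sqrt(x), the equation x^2 - 16 sqrt(x) = 12
    # becomes t^4 - 16 t - 12 = 0
    return t**4 - 16 * t - 12

lo, hi = 2.0, 3.0               # g(2) = -28 < 0,  g(3) = 21 > 0
for _ in range(80):             # bisection to machine precision
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
t = (lo + hi) / 2
x = t * t

assert math.isclose(t, 1 + math.sqrt(3), abs_tol=1e-9)
assert math.isclose(x - 2 * math.sqrt(x), 2.0, abs_tol=1e-9)
```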
|algebra-precalculus|
0
Darboux's theorem when $f'(x) \neq 0 $
I recently came across Darboux's theorem and tried to prove from scratch (without using Darboux's theorem) that if $ f $ is differentiable for every $ x$ and $ f'(x) \neq 0 $ then $f'(x)$ is either always positive or always negative. This is my attempt: Assume that there exist $a,b$ such that $f'(a)<0<f'(b)$ . $f$ is continuous on $[a,b]$ so it has a max $M$ and a min $m$ in $[a,b]$ , and so $M\geq f(x)\geq m , x\in[a,b]$ . We know that the max and min cannot be attained in $(a,b)$ because $f'(x)\neq0$ , so they must be attained at either $a$ or $b$ . This is how "far" I managed to get on my own; I am not sure if this is even the right way to approach this problem. Any help would be greatly appreciated!
Assume WLOG $f'(a)<0<f'(b)$ (otherwise consider $-f$ in the following reasoning). As $f'(a)<0$ there exists some $\delta_1>0$ such that $$\dfrac{f(x)-f(a)}{x-a}<0$$ for $x\in (a,a+\delta_1)$ , so $$f(x)-f(a)<0$$ in that neighbourhood (as $x-a>0$ ) and the minimum of $f$ cannot be attained at $a$ . Similarly, as $f'(b)>0$ there exists some $\delta_2>0$ such that $$\dfrac{f(x)-f(b)}{x-b}>0$$ for $x\in (b-\delta_2,b)$ , so $$f(x)-f(b)<0$$ in that neighbourhood (as $x-b<0$ ) and the minimum of $f$ isn't attained at $b$ either. Therefore, the minimum is reached at some $c\in(a,b)$ , and thus $f'(c)=0$ , a contradiction.
|real-analysis|calculus|proof-writing|
1
Derivative of a quadratic form where the vector is exponentiated
$\newcommand{\vx}{\mathbf{x}}$ While solving a certain problem, I bumped into the following Lagrangian. $$ \mathcal{L}(\vx, \lambda) = \frac{(e^\vx) ^\top A e^\vx}{(e^\vx)^\top B e^\vx} - \lambda ((e^\vx)^\top \mathbf{1} - 1) $$ Here $\vx\in\mathbb{R}^d$ , and $e^\vx$ means that we exponentiate each of its elements. $\lambda$ is the Lagrange multiplier for the condition that the elements of $e^\vx$ sum up to one. I would like to solve the optimization problem and I am therefore interested in $\nabla_\vx \mathcal{L}(\vx, \lambda)$ . Attempt at the derivative I have never bumped into a problem like this, so I am unsure how to take derivatives properly. $$ \begin{align} \nabla_\vx \left[(e^\vx) ^\top A e^\vx\right] &= (e^\vx \mathrm{I})(2 A e^\vx) = 2(e^\vx)^\top Ae^\vx \\ \nabla_\vx\left[(e^\vx) ^\top B e^\vx\right] &= 2 (e^\vx)^\top B e^\vx \end{align} $$ which would give $$ \nabla_\vx \mathcal{L}(\vx, \lambda) = \frac{2((e^\vx)^\top A e^\vx)((e^\vx)^\top B e^\vx) - 2((e^\vx)^\top B e^\
$ \def\h{\odot} \def\o{{\tt1}} \def\a{\alpha} \def\b{\beta} \def\l{\lambda} \def\p{\partial} \def\L{{\large\cal L}} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\bR#1{\big(#1\big)} \def\BR#1{\Big(#1\Big)} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\diag#1{\op{diag}\LR{#1}} \def\Diag#1{\op{Diag}\LR{#1}} \def\trace#1{\op{Tr}\LR{#1}} \def\frob#1{\left\| #1 \right\|_F} \def\qiq{\quad\implies\quad} \def\c#1{\color{red}{#1}} \def\cFp{\c{F'}} \def\fracLR#1#2{\LR{\frac{#1}{#2}}} \def\gradLR#1#2{\LR{\grad{#1}{#2}}} $ A scalar function $\phi(z)$ and its derivative $\phi'(z)$ can be applied element-wise to a vector argument $x$ to generate vector-valued results $$\eqalign{ f = \phi(x), \qquad f' = \phi'(x) \\ }$$ The $\c{\rm differential}$ of such an element-wise function can be written using the element-wise product $(\h),\,$ which in turn can be replaced by a diagonal matrix $$\eqalign{ F' &= \Diag{f'} \qiq f' = F'\o,\quad \o^TF' = \LR{f'}^T \\ \c{df} &= f'\odot dx \;\equiv\; \c{F'\,
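For what it's worth, the chain-rule gradient $\nabla_{\mathbf x}\big[(e^{\mathbf x})^\top A e^{\mathbf x}\big] = 2\, e^{\mathbf x}\odot (A e^{\mathbf x})$ (a vector, unlike the scalar expression in the question's attempt) can be verified by finite differences. This sketch assumes $A$ is symmetric and is not taken from the answer above:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.normal(size=(d, d))
A = (A + A.T) / 2                       # symmetrize, as assumed by the formula
x0 = rng.normal(size=d)

def q(x):
    # the quadratic form (e^x)^T A (e^x)
    v = np.exp(x)
    return v @ A @ v

# claimed gradient: 2 * e^x ⊙ (A e^x)
g = 2 * np.exp(x0) * (A @ np.exp(x0))

# central finite differences, one coordinate at a time
eps = 1e-6
fd = np.array([(q(x0 + eps * e) - q(x0 - eps * e)) / (2 * eps)
               for e in np.eye(d)])

assert np.allclose(g, fd, atol=1e-4)
```

The same differencing pattern extends to the full Lagrangian once the quotient rule and the $\lambda$-term are included.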
|calculus|linear-algebra|vector-analysis|quadratic-forms|
0