Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
Which of the following collections of 3-vectors $[a,b,c]$ form a vector space? The vectors with $ab=0$.
I am struggling to see why $ab = 0$ does not give a vector space. If $V = [0,b,c]$, it is a vector space; if $V = [a,0,c]$, it is a vector space; and if $V = [0,0,c]$, it is a vector space. Shouldn't the set of vectors with $ab = 0$ be a vector space as well?
My interpretation of your question is "Why isn't the set of vectors $(a, b, c) \in \mathbb{R}^3$ for which $ab=0$ a vector space?" The answer is that it isn't closed under vector addition: $(1, 0, 0) + (0, 1, 0) = (1, 1, 0)$ and $1 \cdot 1 = 1 \neq 0$. All the examples in your justification are indeed vector spaces, and you are right that if you take the union of all those sets you obtain $\{(a, b, c) \in \mathbb{R}^3 : ab = 0\}$, but the union of two vector spaces is not necessarily a vector space, as this example demonstrates. I recommend this for more information about ways to combine two vector spaces.
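A quick numerical sketch of the failed closure (my addition, not part of the original answer):

```python
# Show that S = {(a, b, c) in R^3 : a*b = 0} is not closed under addition.
def in_S(v):
    a, b, c = v
    return a * b == 0

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

u, v = (1, 0, 0), (0, 1, 0)   # both lie in S
w = add(u, v)                 # (1, 1, 0), and 1*1 = 1 != 0, so w is not in S
```

Both summands lie in $S$ but their sum does not, so $S$ fails the subspace test.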
|linear-algebra|abstract-algebra|
0
Maximization of an expression involving exponentials
The problem goes something like this: $$ f(x) = 4^x+8^x+9^x-2^x-6^x-12^x$$ Find the maximum value of f(x). This question appeared on a test in which calculators were not allowed, so please refrain from using any graphing calculators or similar instruments. I tried substituting $2^x=\alpha$ and $3^x=\beta$ to see if I could then observe something. It was of no help to me. Any ideas or solutions would be appreciated.
You can compute the derivative: $$f'(x) = 4^x \ln 4 + 8^x \ln 8 + 9^x \ln 9 - 2^x \ln 2 - 6^x \ln 6 - 12^x \ln 12.$$ My instinct would be that a "nice" value of $x$ will probably make $f'(x) = 0$ (probably an integer). So, I would look for $x$ where this expression is $0$ , by treating the $\ln 2$ and $\ln 3$ terms separately, hoping both go to $0$ at the same time. Factoring in this way, we get \begin{align} f'(x) &= (2 \cdot 4^x + 3\cdot 8^x - 2^x - 6^x - 2 \cdot 12^x) \ln 2 + (2 \cdot 9^x - 6^x - 12^x) \ln 3 \\ &= 2^x(2 \cdot 2^x + 3 \cdot 4^x - 1 - 3^x - 2 \cdot 6^x) \ln 2 + 3^x(2\cdot 3^x - 2^x - 4^x)\ln 3. \end{align} From eyeballing the $\ln 3$ term, $x = 1$ looks plausible, and we can confirm it works for the $\ln 2$ term as well. So, $x = 1$ is a stationary point. Is $x = 1$ a global maximum? Yes. We can see this by showing the derivative is negative for $x > 1$ and positive for $x < 1$. First, recall that the map $y \mapsto y^x$ is strictly convex if $x > 1$, and strictly concave if $0 < x < 1$.
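As a sanity check (my addition, not the answerer's), one can evaluate $f$ and $f'$ numerically and confirm that $x=1$ is a stationary point with $f(1) = 21 - 20 = 1$:

```python
import math

def f(x):
    return 4**x + 8**x + 9**x - 2**x - 6**x - 12**x

def fprime(x):
    # derivative term by term: sum of sign * b^x * ln(b)
    terms = [(4, 1), (8, 1), (9, 1), (2, -1), (6, -1), (12, -1)]
    return sum(sign * base**x * math.log(base) for base, sign in terms)
```

On a coarse grid, $f$ never exceeds $f(1)=1$, consistent with the claimed global maximum.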
|exponential-function|maxima-minima|
0
Formation of differential equation, elimination
In the formation of the differential equation of a given equation, what are the things we should eliminate? I have read that if there are $n$ arbitrary constants, then the order of the differential equation so formed will also be $n$. A question in my textbook says: "Obtain the differential equation of all circles of radius $a$ and centre $(h,k)$, that is, $(x-h)^2+(y-k)^2=a^2$." Now I don't know which of $h$, $k$, or $a$ should be eliminated, or whether all should be eliminated.
Only $h$ and $k$ should be eliminated, because in this case the radius of the family of circles is given: it is $a$. It is already fixed, whereas $h$ and $k$, being the centre of the circle, can vary while $a$ stays constant. You can also think of it as finding the equation for a family of circles whose radius is known ($a$ in this case), where the only constants that vary from circle to circle are $h$ and $k$. Hence they are the arbitrary constants.
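To make the elimination concrete (this derivation is standard but not spelled out in the answer): differentiating the circle equation twice and eliminating $h$ and $k$ leaves the second-order ODE $(1+y'^2)^3 = a^2 (y'')^2$. A spot-check on an arbitrarily chosen circle ($h=1$, $k=2$, $a=3$ are hypothetical values of mine):

```python
import math

h, k, a = 1.0, 2.0, 3.0   # hypothetical circle (x-h)^2 + (y-k)^2 = a^2, upper branch

def yprime(x):
    # y = k + sqrt(a^2 - (x-h)^2)  =>  y' = -(x-h)/sqrt(a^2 - (x-h)^2)
    return -(x - h) / math.sqrt(a**2 - (x - h) ** 2)

def ydoubleprime(x):
    # differentiating again gives y'' = -a^2 / (a^2 - (x-h)^2)^(3/2)
    return -(a**2) / (a**2 - (x - h) ** 2) ** 1.5

def ode_residual(x):
    # residual of (1 + y'^2)^3 - a^2 (y'')^2; vanishes on every such circle
    return (1 + yprime(x) ** 2) ** 3 - a**2 * ydoubleprime(x) ** 2
```

The residual is zero (up to floating point) at every sampled point, and the ODE contains no $h$ or $k$, only the given radius $a$.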
|ordinary-differential-equations|
0
Let $\mathbb{K}$ be a finite field with $q$ elements and $V$ an $n$-dimensional $\mathbb{K}$-vector space. How many subsets of $V$ are a basis of $V$?
Let $\mathbb{K}$ be a finite field with $q$ elements and $V$ an $n$ -dimensional $\mathbb{K}$ -vector space. How many elements are in $V$ ? How many subsets of $V$ are a basis of $V$ ? For the first question I reasoned that I have $n$ dimensions that I can fill up with $q$ different elements. So from a combinatorics standpoint $V$ should have $q^n$ elements. For the second question I'm almost certain I'm wrong: Let $\{a_1, a_2, \cdots, a_n\}$ be a basis of $V$. I can multiply each of my basis vectors by $q-1$ different elements in $\mathbb{K}\setminus\{0\}$ (excluding $0$ because multiplying a basis vector by zero no longer leaves a basis vector); that alone yields $(q-1)^n$ different possibilities. Now for each one of these options I can take any of the $n$ vectors and add it to any of the other $(n-1)$ remaining vectors. So in total there are $n(n-1)(q-1)^n$ subsets of $V$ that form a basis of $V$. I haven't done lots of combinatorics yet, so I would appreciate it if someone could help me out.
Let's first count ordered bases. You can pick the first vector in the basis any way you want, as long as it's not the zero vector. That gives you $q^n-1$ possibilities. The second vector should not be any of the $q$ multiples of the first vector. So that gives you $q^n-q$ possibilities. The third vector should not be any of the $q^2$ linear combinations of the first two vectors ( $q$ choices of scalars for each), so that gives you $q^n-q^2$ possibilities. And so on. So that's the number of ordered bases. What about if we ignore order? How many times have we counted each un-ordered basis?
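A brute-force check of this count in the small case $q=2$, $n=2$ (my own illustration): the formula gives $(q^n-1)(q^n-q) = 3 \cdot 2 = 6$ ordered bases, and each unordered basis is counted $n! = 2$ times.

```python
from itertools import combinations, permutations, product
from math import factorial

q, n = 2, 2
vectors = list(product(range(q), repeat=n))   # all of GF(q)^n

def is_basis(vs):
    # vs is a basis of GF(q)^n iff its q^len(vs) linear combinations are all distinct
    combos = {
        tuple(sum(c * v[i] for c, v in zip(coeffs, vs)) % q for i in range(n))
        for coeffs in product(range(q), repeat=len(vs))
    }
    return len(combos) == q ** n

ordered = sum(1 for vs in permutations(vectors, n) if is_basis(vs))
unordered = sum(1 for vs in combinations(vectors, n) if is_basis(vs))
```

The exhaustive counts match the formula and the $n!$ overcounting factor.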
|linear-algebra|combinatorics|vector-spaces|vectors|finite-fields|
1
How to define the weighted average of two functions?
Let $A=\{a_1,a_2,a_3\}$ and $B=\{b_1,b_2,b_3\}$ be two sets, and let \begin{gather} \Delta(B)=\left\{\delta\in[0,1]^B\mid\sum_{b\in B}\delta(b)=1\right\} \end{gather} be the set of all lotteries over the set $B$ . Further, let $f:A\to\Delta(B)$ and $g:A\to\Delta(B)$ be two functions satisfying $f\neq g$ . Then, define the function $h:A\to\Delta(B)$ as follows: \begin{gather} h=(2/3)f+(1/3)g \end{gather} I have three questions regarding such a construction: Is the function $h$ a legitimate construction? If it is, does it really have domain $A$ ? If it is, is the function $h$ as previously defined equivalent to defining it as \begin{gather} (\forall a\in A)[h(a)=(2/3)f(a)+(1/3)g(a)] \end{gather}
Short answer. Yes. (3) is the way you define the sum of functions. Your notation is correct but much more formal than necessary. The straightforward way to describe this situation is to say that a convex combination of probability distributions is a probability distribution.
|functions|
1
Is $(\lVert x \rVert - \lVert y \rVert)^2$ convex?
I am trying to figure out if $f: \mathbb{R}_{\geq 0}^{2n} \mapsto \mathbb{R}$ with $$f(x,y)= (\lVert x \rVert - \lVert y \rVert)^2$$ is a convex function. I would be interested if it is the case for any $p$-norm, but even just one of them, like the Euclidean norm, would be helpful. So far I have tried: Using the definition of convexity, but I was not able to find estimations that led to the desired inequality. Checking if the Hessian is positive semidefinite for $n=2$ via the eigenvalues; I was not able to estimate if they are non-negative. Trying to show $f(x,y) \geq f(a,b)+ \nabla f(a,b)^T ((x,y)-(a,b))$ ; again, I was not able to find an expedient estimation. For $n=1$ the problem reduces to $(|x|-|y|)^2$ , which is convex on $\mathbb{R}_{\geq 0}^{2}$ (the Hessian has eigenvalues $0$ and $4$ ). The question is how to come to a solution for the $n$-dimensional case?
A counterexample for $n=2$ : With $a = (2, 0)$ and $b = (0, 2)$ we have $$ f(a, a) = f(b, a) = 0 $$ but $$ f\left(\frac{a+b}{2}, a\right) > 0 \, . $$ This can be generalized to all dimensions $n \ge 2$ by appending zeros to the components of $a$ and $b$ .
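The counterexample is easy to verify numerically (my addition):

```python
import math

def f(x, y):
    nx = math.sqrt(sum(t * t for t in x))
    ny = math.sqrt(sum(t * t for t in y))
    return (nx - ny) ** 2

a, b = (2.0, 0.0), (0.0, 2.0)
mid = tuple((p + q) / 2 for p, q in zip(a, b))   # (1, 1)

# Convexity in the first argument would force
# f(mid, a) <= (f(a, a) + f(b, a)) / 2 = 0, but f(mid, a) = (sqrt(2) - 2)^2 > 0.
```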
|normed-spaces|convex-analysis|
1
Error in Hatcher, Algebraic Topology?
In Hatcher's Algebraic Topology on page $409$ in the third paragraph it is written: Given a fibration $p : E \to B$ with fiber $F = p^{−1} (b_0 )$ , we know that the inclusion of $F$ into the homotopy fiber $F_p$ is a homotopy equivalence. Recall that $F_p$ consists of pairs $(e, \gamma)$ with $e \in E$ and $\gamma$ a path in $B$ from $p(e)$ to $b_0$ . The inclusion $F\hookrightarrow E$ extends to a map $i : F_p \to E$ , $i(e, \gamma) = e$ , and this map is obviously a fibration. In fact it is the pullback via $p$ of the path fibration $PB \to B$ . Why is the fibration $i$ the pullback of the path fibration $\pi: PB \to B$ via $p$ ? It is $PB = \{\gamma \in C(I,B): \gamma(0) = b_0\}$ and $\pi(\gamma) = \gamma(1)$ and therefore $$p^* PB = \{(e,\gamma) \in E \times C(I,B): p(e) = \gamma(1), \ \gamma(0) = b_0\}.$$ But it is $F_p = \{(e,\gamma) \in E \times C(I,B): p(e) = \gamma(0), \ \gamma(1) = b_0\}$ , i.e. with the reversal map $R: C(I,B) \to C(I,B), \gamma \mapsto (t \mapsto \gamma(1-t))$ we only have $$F_p = (\operatorname{id}_E \times R)(p^* PB).$$
Whether or not something's wrong here is a bit subtle: Generally speaking, pullbacks are only defined up to (unique) isomorphism, and you've already noted that $p^* PB$ and $F_p$ are naturally isomorphic (via the map "reversing the paths"), so with this view everything's fine. However, since Hatcher explicitly defines the pullback as a certain subset of the product, I tend to agree with you that the statement is not quite correct.
|algebraic-topology|homotopy-theory|fiber-bundles|fibration|
1
Equation with Absolute Value Functions in the Powers
I am a Grade 12 student and I was unable to even properly approach the problem given below so I would love some pointers! $2^{|x+1|} - 2^x = |2^x - 1| + 1$ First, I tried to assume $2^x$ as 't' and then tried to assume $2^{|x+1|}$ as t. Then, I tried squaring it up, taking logarithms on both sides but none proved fruitful. Finally, I went with graphing it which was a bit tedious and difficult without using Desmos/graphing calculators. I would like a method that doesn't use graphing as graphing felt a bit time-consuming for me. If possible after guiding me with a method, if you could propose a more complex question in the same topic that I could solve would be highly appreciated. Also, I am new to this forum so apologies for any errors.
Let $f\left(x\right)=2^{\left|x+1\right|}-2^x$ and $g\left(x\right)=\left|2^x-1\right|+1$ . The solution set of $2^{\left|x+1\right|}-2^x=\left|2^x-1\right|+1$ is $x\in\{-2\}\cup[0,\infty)$ .
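A numerical spot-check of this solution set (my addition): for $x \ge 0$ both sides reduce to $2^x$, since $2^{x+1}-2^x = 2^x$ and $|2^x-1|+1 = 2^x$, and $x=-2$ is an isolated negative solution.

```python
def lhs(x):
    return 2 ** abs(x + 1) - 2 ** x

def rhs(x):
    return abs(2 ** x - 1) + 1

def solves(x, tol=1e-9):
    return abs(lhs(x) - rhs(x)) < tol
```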
|algebra-precalculus|functions|exponential-function|absolute-value|
0
Is the real-imaginary Schanuel's conjecture equivalent to the full Schanuel's conjecture?
Schanuel's conjecture says the following about the transcendence of numbers related by the complex exponential function: Given any $n$ complex numbers $z_1, ... z_n$ that are linearly independent over the rational numbers $\mathbb{Q}$ , the field extension $\mathbb{Q}(z_1, ..., z_n, e^{z_1}, ..., e^{z_n})$ has transcendence degree at least $n$ over $\mathbb{Q}$ . I would like to control the real and imaginary parts of the numbers separately. Thus I'm interested in the restriction to real and imaginary numbers $z_1, ... z_n$ . Since for imaginary numbers $yi$ , the function values $e^{yi}$ , $\cos y$ and $\sin y$ are definable from each other by algebraic functions, the real-imaginary Schanuel's conjecture can be stated using only real functions as follows: Given any $m+n$ real numbers $x_1, ... x_m, y_1, ... y_n$ such that each of the sets $x_1, ... x_m$ and $y_1, ... y_n$ are linearly independent over the rational numbers $\mathbb{Q}$ , the field extension $\mathbb{Q}(x_1, ... x_m, y_1, ... y_n, e^{x_1}, ... e^{x_m}, \cos y_1, \sin y_1, ... \cos y_n, \sin y_n)$ has transcendence degree at least $m+n$ over $\mathbb{Q}$ .
This is not a complete solution; writing a complete solution turned out to be harder than I thought. I'm having trouble proving that linear independence of $z_1,\overline{z_1},...,z_n,\overline{z_n}$ is equivalent to linear independence of $\Re z_1, \Im z_1, ..., \Re z_n, \Im z_n$ . Assume as induction hypothesis that $z_1,\overline{z_1},...,z_n,\overline{z_n}$ and $\Re z_1, \Im z_1, ..., \Re z_n, \Im z_n$ span the same vector subspace of $\mathbb{C}$ over $\mathbb{Q}$ . Then there are four possibilities for whether the real and imaginary parts of $z_{n+1}$ are linearly independent with any maximal linearly independent subset of $z_1, ..., z_n$ : $\Re z_{n+1}$ and $i\Im z_{n+1}$ are both linear combinations of $z_1,\overline{z_1},...,z_n,\overline{z_n}$ . By the induction hypothesis it is equivalent that they are both linear combinations of $\Re z_1, i\Im z_1, ..., \Re z_n, i\Im z_n$ , meaning that $\Re z_{n+1}$ is a linear combination of $\Re z_1, ..., \Re z_n$ and $\Im z_{n+1}$ is a linear combination of $\Im z_1, ..., \Im z_n$ .
|complex-numbers|transcendental-numbers|
0
Positive Matrices and Linear Forms
Fix a vector $\vec{b}$ and a positive definite (but not necessarily symmetric) matrix $A$ , can we prove that the fraction $$ \frac{\vec{b}^TA\vec{y}}{\vec{b}^T\vec{y}} $$ always has the same sign (If $\vec{b}^T\vec{y}\neq 0$ )?
No. Consider $A = \begin{pmatrix}\frac{1}{2} & 0 \\ 0 & 2\end{pmatrix}$ , $b = (1,1)^T$ , and $y = (2,-1)^T$ . Then $b^Ty = 1>0$ , but $b^TAy = b^T(1,-2)^T = -1 < 0$ . Therefore the fraction is negative for this choice of $y$ , but we find that the fraction is positive for $y=(1,0)^T$ .
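Verifying the counterexample in code (my addition); the diagonal matrix $A$ is positive definite since its eigenvalues $\tfrac12$ and $2$ are positive:

```python
A = [[0.5, 0.0], [0.0, 2.0]]
b = [1.0, 1.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def ratio(y):
    # the fraction b^T A y / b^T y from the question
    return dot(b, matvec(A, y)) / dot(b, y)
```

The quadratic form is positive on sample nonzero vectors, yet the fraction changes sign.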
|linear-algebra|matrices|positive-definite|
0
Marginal distribution of Benford's Law
This is the last step in calculating the marginal distribution of Benford's law. $$f_X(x) = \sum_{y=0}^9 \log\left(1 + \frac{1}{10x+y}\right) = \log \prod_{y=0}^9 \left(1 + \frac{1}{10x+y}\right) = \log\left(1+ \frac{1}{x}\right)$$ I'm just stuck on the last step; can someone please explain why $$\log \prod_{y=0}^9 \left(1 + \frac{1}{10x+y}\right) = \log\left(1+ \frac{1}{x}\right)?$$
Take for example $x=3$ . Then $$\begin{align} \prod_{y=0}^9 \left(1 + \frac{1}{10x+y}\right) &= \left(1 + \frac{1}{30}\right)\left(1+\frac{1}{31}\right) \cdots\left(1 + \frac{1}{39}\right)\\ &=\frac{31}{30} \, \frac{32}{31} \cdots \frac{39}{38} \frac{40}{39}\\ &=\frac{40}{30}\\ &=\frac{4}{3}\\ &=1+\frac{1}{3} \end{align} $$ Can you generalize from here?
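The telescoping can be confirmed exactly with rational arithmetic (my addition):

```python
from fractions import Fraction
from math import prod

def benford_product(x):
    # prod_{y=0}^{9} (1 + 1/(10x + y)) telescopes to 1 + 1/x
    return prod(Fraction(1) + Fraction(1, 10 * x + y) for y in range(10))
```

Each factor $1 + \frac{1}{10x+y} = \frac{10x+y+1}{10x+y}$, so numerators cancel denominators and only $\frac{10x+10}{10x} = 1+\frac1x$ survives.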
|probability|probability-distributions|
1
Calculation regarding divisors and resolution of singularities
I'm trying to understand a calculation on the second page of https://sma.epfl.ch/~filipazz/notes/adjunction_and_inversion_of_adjunction.pdf , and I have a couple questions. Here is the setup. $X$ is the cone of a projectively normal rational curve of degree $n$ , $f:Y\to X$ is the resolution of singularities we get by blowing up the vertex, and $E$ is the exceptional divisor. In short, it is claimed that the discrepancy of $f$ is $a=-1+2/n$ . The calculation is as follows: Adjunction gives us $K_E = (K_Y+E)|_E = (f^*K_X + (a+1)E)|_E$ . Since $E$ is a rational curve, the degree of $K_E$ is $-2$ . Then we calculate $$-2 =\text{deg}((f^*K_X + (a+1)E)|_E) = (f^*K_X + (a+1)E) \cdot E.$$ Why is the degree of this divisor the same as its intersection number with $E$ ? Then $$(f^*K_X + (a+1)E) \cdot E = f^*K_X\cdot E + (a+1)E \cdot E = -n(a+1).$$ Why is $f^*K_X\cdot E=0$ ? Thanks in advance.
For any smooth curve $C$ and any Cartier divisor $D$ , we have $D.C = \deg(D|_C)$ : for any closed point $c\in C$ , we have that the coefficient of $c$ in $D.C$ is the valuation of the local equation cutting out $D$ at that point, which is the same contribution from the intersection product at that point. The pullback of any divisor has zero intersection with the exceptional divisor: write your divisor as the difference of effective divisors missing the blown-up point and then you win. Both of these topics are covered at a bit more length in Hartshorne chapter V - the first is contained in the discussion around V.1.1-V.1.3, the second is V.3.2.
|algebraic-geometry|divisors-algebraic-geometry|singularity-theory|
1
Commutant of a polynomial
Let $Q$ be a polynomial in $\mathbb{C}[X]$ (or in $\mathbb{K}[X]$ with $\mathbb{K}$ a field of characteristic zero). I wonder what the commutant of $Q$ is, i.e. the set of polynomials $P$ such that $P \circ Q = Q \circ P$ , where $\circ$ is the composition. More precisely, is there any work, any results, on this topic? All the following remarks also work in $\mathbb{K}[X]$ , where the (formal) Böttcher coordinates are still defined. This holds because the characteristic of $\mathbb{K}$ is zero. If $Q$ is constant or of degree $1$ , the answer is easy. I suppose that $Q$ is of degree $d_0$ , with $d_0$ greater than $1$ . Conjugating $Q$ by a homothety, I suppose that the leading coefficient of $Q$ is $1$ . The constant polynomials commuting with $Q$ are its fixed points. Let $d$ be a natural number. Using the Böttcher coordinates $\Phi$ at infinity, it is possible to show that for every integer $d$ there are exactly $d_0-1$ Laurent series $L$ in $1/X$ of degree $d$ (i.e. $L = \sum_{k \le d} c_k X^{k}$ with $c_d \neq 0$ ) commuting with $Q$ .
I've found a detailed characterisation of commuting polynomials in $\mathbb{C}[X]$ . See the following article by Ritt: https://www.jstor.org/stable/1989297 The theorem says that two polynomials $P$ and $Q$ of degree greater than 1 commute if and only if, up to conjugacy in $\mathbb{C}[X]$ : the polynomials are powers of $X$ multiplied by roots of unity satisfying the appropriate relation; the polynomials are Chebyshev polynomials; or the polynomials are iterates of a polynomial of the form $X R(X^r)$ multiplied by an $r$-th root of unity. The example of $Q = X^2+1$ belongs to the third case with $r=1$ ; thus the commutant of $Q$ is formed by its iterates and the constant polynomials where the constant is a fixed point of $Q$ .
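A quick numerical illustration (mine, not from the Ritt paper) of the commuting families: Chebyshev polynomials commute under composition, and any polynomial commutes with its own iterates.

```python
# Chebyshev polynomials T2, T3; they satisfy T_m o T_n = T_{mn} = T_n o T_m
T2 = lambda x: 2 * x * x - 1
T3 = lambda x: 4 * x**3 - 3 * x

Q = lambda x: x * x + 1     # the example Q = X^2 + 1 from the answer
Q2 = lambda x: Q(Q(x))      # an iterate of Q; Q o Q2 = Q2 o Q = Q^(3)
```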
|polynomials|laurent-series|
1
(Hall's Theorem) Existence of two subfamilies of sets containing the same elements
I came across the following claim in a textbook on combinatorics [1]. Claim (Lindström, Tverberg): Let $A_1, . . . , A_m \subseteq [n]$ be non-empty with $m > n$ . There are non-empty, disjoint $I, J \subseteq [m]$ , such that $$\bigcup_{i \in I} A_i = \bigcup_{j \in J} A_j.$$ The book provides a very nice proof of this claim, using characteristic vectors of the sets $A_1, . . . , A_m$ and some linear algebra basics. It also includes a remark that says this claim can be proved using Hall's theorem . This second approach is not obvious to me but I would like to understand how you can use Hall's theorem here (it's really bugging me). If anyone could provide a solution, a hint, or a reference to a proof, I would really appreciate it. Reference: [1] B. Sudakov, "Algebraic Methods in Combinatorics", 2023 Original Proof: Let $v_1, . . . , v_m$ be the characteristic vectors of $A_1, . . . , A_m$ . Since $m>n$ , they are linearly dependent over $\mathbb{R}$ . So, there is a nonzero $\alpha \in \mathbb{R}^m$ with $\sum_{i=1}^m \alpha_i v_i = 0$ .
As a partial response, here is a proof of the claim based on a possible proof of Hall's theorem. Maybe it will inspire further progress. The graph we consider is the usual graph for the system-of-representatives view of the theorem: let $X = \{x_1, x_2, \dots, x_m\}$ , $Y = \{y_1, y_2, \dots, y_n\}$ , and put an edge $x_i y_j$ whenever $j \in A_i$ . It will be important that because the $A_i$ are all nonempty, $X$ has no isolated vertices. Pick a maximum matching $M$ in this graph. Of course, since $m>n$ , this matching does not cover all of $X$ . We are going to start by letting $U_0$ be the set of all vertices in $X$ not covered by $M$ , and then repeat the following procedure: After $U_i$ has been defined, let $V_i = N(U_i) - N(U_0 \cup \dots \cup U_{i-1})$ : the set of "new" vertices in $Y$ , which are adjacent to $U_i$ , that we haven't found previously. Let $U_{i+1}$ be the set of all vertices in $X$ matched to $V_i$ by $M$ . We stop when no new vertices get added to $U_0, U_1, U_2, \ldots$
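Independent of the proof method, the claim itself can be brute-forced for small cases (my illustration, searching over all pairs of disjoint non-empty index sets):

```python
from itertools import chain, combinations, product

def nonempty_subsets(indices):
    s = list(indices)
    return chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))

def find_witness(sets):
    # search for disjoint non-empty I, J with equal unions of the A_i
    m = len(sets)
    for I in nonempty_subsets(range(m)):
        for J in nonempty_subsets(range(m)):
            if set(I).isdisjoint(J) and \
               set().union(*(sets[i] for i in I)) == set().union(*(sets[j] for j in J)):
                return I, J
    return None
```

For $m \le n$ a witness can fail to exist, while every family with $m > n$ that I sampled has one, as the claim predicts.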
|combinatorics|graph-theory|bipartite-graphs|matching-theory|
1
Equation of an 'n-Oval' (pins and loop of string method)
You can write the equations of an n -ellipse (polyellipse, egglipse, k -ellipse, etc...), where points $(x_1,y_1), (x_2, y_2)...,$ are foci: $$\sqrt{(x-x_1)^2+(y-y_1)^2}+\sqrt{(x-x_2)^2+(y-y_2)^2}+\sqrt{(x-x_3)^2+(y-y_3)^2}+\sqrt{(x-x_4)^2+(y-y_4)^2}...=d$$ For two points, the curve can be the same as an ellipse drawn with two pins and a loop of string (see fig. 1). However, with more points, the curve seems to be the same as James Clerk Maxwell's construction method (see fig. 2) You can also create various ovals by adding points to the loop and pins method shown in the first figure (see also fig. 3). What I want to know is how to construct equations for curves of the kind shown in fig. 3 given any number of points (akin to the above equation). I'd also be interested in knowing if there's anything published on this type of curve. Thanks.
An idea. I agree of course with the answer of Intelligenci Pauca, but I would like to say that there is some hope to have a single equation using absolute values like in the examples below (for different values of $p$ ); these curves are made of connected conical arcs (some of them may not be elliptical arcs); moreover, in the points where the arcs are connected, there is no smoothness. But I am almost certain that it is possible, at least in certain cases, to assemble second degree polynomials with or without absolute value(s) in order to get the curves you want. $$3x^2+2y^2+|y+2x|+|xy|=p$$ SAGE program for the generation of the figure: var('x y') ; g = Graphics() ; for k in range(0, 20): p = k/8 ; g += implicit_plot(3*x^2 + 2*y^2 + abs(y + 2*x) + abs(x*y) == p, (x, -1, 1), (y, -1, 1)) ; show(g)
|calculus|trigonometry|algebraic-geometry|conic-sections|curves|
0
Why is $(x,z)$ not principal in $k[x,y,z]/(xy-z^2)$?
Let $k$ be a field. I want to show $(x,z)$ is not principal in $R=k[x,y,z]/(xy-z^2)$ but I ran into some difficulties. I know $(xy-z^2)$ is prime and polynomials in $R$ are all of the form $c+dz$ , where $c,d\in k[x,y]$ . So I suppose $(x,z) = (a+bz)$ for some $a,b$ . Then for some $g=g_1+g_2z$ , $g(a+bz) = g_1 a +g_2bxy+(g_2a+g_2b)z = z$ and similarly for some $q=q_1+q_2z$ , $q(a+bz) = q_1 a +q_2bxy+(q_2a+q_2b)z = x$ . If $(x,z)$ is principal, this set of equations should give me a contradiction. But I struggled to see it. Any help would be appreciated. Thanks!
Well, to start, you wrote the equations down wrong. You should have the following: $(a+bz)(g_1+g_2z)=g_1a+g_2bxy+(g_1b+g_2a)z=z$ $(a+bz)(q_1+q_2z)=q_1a+q_2bxy+(q_1b+q_2a)z=x$ So $g_1a+g_2bxy=0$ , $g_1b+g_2a=1$ , $q_1a+q_2bxy=x$ , $q_1b+q_2a=0$ . The last equation says $q_1=ac$ and $q_2=-bc$ for some $c\in k[x,y]$ ; plugging in to the third equation gives $a^2c-b^2cxy=x$ , so $c|x$ . If $c$ is constant, then from looking at the monomial with the highest power of $x$ inside the highest-degree terms of $a^2c$ and $b^2cxy$ , the highest-degree term of $b^2cxy$ cannot cancel with the highest-degree term of $a^2c$ . Therefore the degree of the LHS is at least two while the degree of the right hand side is one, a contradiction. So $c=ex$ for some nonzero constant $e$ , giving $a^2ex-b^2ex^2y=x$ or $a^2e-b^2exy=1$ . Now look at the highest-degree terms of $a^2e$ and $b^2exy$ sorted by degree then exponent of $x$ : these come from the corresponding highest-degree terms of $a$ and $b$ , and the
|abstract-algebra|algebraic-geometry|ring-theory|commutative-algebra|
1
How is it possible for the L² norm of f − g to measure the area between the graphs of f and g?
Here is the definition of a norm given by my textbook; (This is from Fourier Series and Boundary Value Problems by James Ward Brown and Ruel V. Churchill, Chapter 7) I'm confused by what the authors say after (9). I was under the impression that the area between two curves on an interval $[a,b]$ is just the difference of the integrals of those functions on $[a,b]$. That is, Area between $f(x)$ and $g(x)$ on $[a,b]$ $= \int_a^b [f(x) - g(x)]\, dx =\int_a^b f(x)\, dx - \int_a^b g(x)\, dx$. How does the norm represent the same thing? Or am I simply missing that somehow, $\left(\int_a^b [f(x) - g(x)]^2dx \right)^{\frac{1}{2}} = \int_a^b [f(x) - g(x)]\, dx$ for all functions in a function space? Sorry if this is simple, thanks in advance!
Firstly, the area between the graphs of two functions $f,g: [a,b]\to\mathbb{R}$ is given by $$\displaystyle\int_{a}^{b}|f(x)-g(x)|\,dx,$$ because area cannot be negative and we don't know, in general, which of the functions attains bigger values. The author says $$\| f-g\|=\left(\displaystyle\int_{a}^{b}|f(x)-g(x)|^2\,dx \right)^{1/2}$$ is a measure of the area, not exactly what that area equals. They continue to explicitly mention the reasoning behind this measure: it is the mean square deviation between $f$ and $g$ .
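To see numerically that the $L^2$ norm is a different measurement from the $L^1$ area (my own example, with $f(x)=x$ and $g(x)=0$ on $[0,1]$):

```python
import math

def l1_area(f, g, a=0.0, b=1.0, n=10000):
    # midpoint-rule approximation of the actual area, integral of |f - g|
    h = (b - a) / n
    return sum(abs(f(a + (i + 0.5) * h) - g(a + (i + 0.5) * h)) for i in range(n)) * h

def l2_norm(f, g, a=0.0, b=1.0, n=10000):
    # midpoint-rule approximation of ( integral of (f - g)^2 )^(1/2)
    h = (b - a) / n
    return math.sqrt(sum((f(a + (i + 0.5) * h) - g(a + (i + 0.5) * h)) ** 2
                         for i in range(n)) * h)
```

Here the area is $\tfrac12$ while $\|f-g\| = 1/\sqrt{3} \approx 0.577$: the norm measures deviation, it does not equal the area.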
|partial-differential-equations|orthonormal|function-spaces|
1
On the sum of radicals of integers less than some number
Reading about the abc conjecture and the interesting definition and properties of the radical of an integer, I wondered about the properties and bounds of the sum $$S_{\text{rad}}(k)=\sum_{n=1}^k \text{rad}(n).$$ Note that, based on the property that $\text{rad}(n)=n$ if $n$ is square free and $\text{rad}(n)<n$ otherwise, we have that $$\sum_{\substack{n\leq k\\ n\ \text{square free}}} n \leq S_{\text{rad}}(k) \leq \sum_{n=1}^k n.$$ We have that $\sum_{n=1}^k n = \frac{k^2+k}{2}$ , and it is not difficult to show that $\sum_{\substack{n\leq k\\ n\ \text{square free}}} n \sim \frac{6}{\pi^2}\left(\frac{k^2+k}{2}\right)$ ; therefore, a first tentative bound could be $$\frac{6}{\pi^2}\left(\frac{k^2+k}{2}\right) \lesssim S_{\text{rad}}(k) \leq \frac{k^2+k}{2}.$$ Now comes the interesting part. I wondered if $S_{\text{rad}}(k)\big/\frac{k^2+k}{2}$ could converge to some constant $\frac{6}{\pi^2} < C < 1$ , and that is indeed what numerical results seem to yield. Concretely, $C\approx 0.705...$ I have found that those are the first digits of $$\sum_{n=1}^\infty \frac{1}{p_n\#} = \frac{1}{2}+\frac{1}{2\times3}+\frac{1}{2\times3\times5}+\dots$$ where $p_n\#$ denotes the primorial, the product of the first $n$ primes.
It turns out that the correct constant is $$ C = \prod_p \biggl( 1 - \frac1{p(p+1)} \biggr) \approx 0.70444, $$ which is already smaller than $\frac12+\frac16+\frac1{30}+\frac1{210}$ . Here's a justification (with many steps missing). When summing a multiplicative function, the main term is very often related to the residue of the rightmost pole of the corresponding Dirichlet series. For example, Perron's formula tells us that $$ \sum_{n\le x} n = \int_{c-i\infty}^{c+i\infty} \biggl( \sum_{n=1}^\infty n\cdot n^{-s} \biggr) \frac{x^s}s \, ds = \int_{c-i\infty}^{c+i\infty} \zeta(s-1) \frac{x^s}s \, ds, $$ where $c$ is any constant larger than $2$ . The integrand has its rightmost pole at $s=2$ , where its residue is $\frac{x^2}2$ —this is the asymptotic main term of the left-hand side. Similarly, $$ \sum_{\substack{n\le x \\ n\text{ squarefree}}} n = \int_{c-i\infty}^{c+i\infty} \biggl( \sum_{\substack{n\ge1 \\ n\text{ squarefree}}} n\cdot n^{-s} \biggr) \frac{x^s}s \, ds = \int_{c-i\infty}^{c+i\infty} \frac{\zeta(s-1)}{\zeta(2s-2)} \frac{x^s}s \, ds. $$
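A numerical check of both the Euler product and the empirical ratio (my addition; the sieve below computes $\mathrm{rad}(n)$ for all $n \le 10^5$):

```python
def radicals_and_primes(k):
    # rad[n] = product of the distinct primes dividing n (rad[1] = 1)
    rad = [1] * (k + 1)
    primes = []
    for p in range(2, k + 1):
        if rad[p] == 1:          # p untouched so far => p is prime
            primes.append(p)
            for m in range(p, k + 1, p):
                rad[m] *= p
    return rad, primes

K = 100_000
rad, primes = radicals_and_primes(K)

C = 1.0
for p in primes:                 # Euler product, truncated at K
    C *= 1 - 1 / (p * (p + 1))

empirical = sum(rad[1:]) / ((K * K + K) / 2)
```

The truncated product already agrees with $0.70444$ to several digits, and the empirical ratio $S_{\text{rad}}(K)\big/\frac{K^2+K}{2}$ is close to it.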
|number-theory|convergence-divergence|summation|prime-numbers|radicals|
1
$\limsup_{n\rightarrow \infty} \frac{x_n}{y_n} = 1$ implies $x_n \geq (1-\epsilon)y_n$
Suppose $x_n$ and $y_n$ are a sequence of real numbers such that $$\limsup_{n\rightarrow \infty} \frac{x_n}{y_n} = 1 \quad \mathrm{and} \quad \liminf_{n\rightarrow \infty} \frac{x_n}{y_n} = -1.$$ Intuitively this says that as $n \rightarrow \infty$ the quotient $x_n/y_n$ remains within the interval $(-1, 1)$ . Of course this is only a limiting statement, so that for any fixed $n$ the quotient may take values outside $(-1, 1)$ , but we expect this to happen less often as $n$ gets larger. However I am following a proof (found here: Asymptotics of Brownian motion ) that says the above implies that for any $\epsilon > 0$ we have $$x_n \geq (1-\epsilon)y_n \quad \mathrm{and} \quad x_n \leq -(1-\epsilon)y_n.$$ But $\limsup$ and $\liminf$ are only limiting statements so how can the above be true for a particular value $n$ or even all $n$ ?
The cited proof doesn't state that $x_n\geq (1-\varepsilon) y_n$ . Instead, it follows this reasoning: As $\limsup \dfrac{x_n}{y_n}=1$ , there exists a sequence $t_n$ such that $\dfrac{x_{t_n}}{y_{t_n}}\to 1$ . Then, there exists $n_0\in\mathbb{N}$ such that, if $n\geq n_0$ , $\dfrac{x_{t_n}}{y_{t_n}}\geq (1-\varepsilon)\iff x_{t_n}\geq (1-\varepsilon) y_{t_n}.$ As the author of the original text is only interested in the asymptotic behaviour of the Brownian motion (that is, as $n\to\infty$ ), it does not matter that this only occurs after $n_0$ .
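A toy numerical version of this subsequence argument (my example): take $x_n/y_n = (-1)^n\, n/(n+1)$, whose limsup is $1$; along the even indices the ratio eventually stays above $1-\varepsilon$.

```python
def ratio(n):
    # a sequence with limsup 1 and liminf -1
    return (-1) ** n * n / (n + 1)

eps = 0.01
good = [n for n in range(1, 1000) if ratio(n) >= 1 - eps]   # indices t_n of the subsequence
n0 = min(good)   # from this index on, the subsequence satisfies the bound
```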
|real-analysis|sequences-and-series|limsup-and-liminf|
1
Find all $f:\Bbb R\to\Bbb R$ st for any $x,y\in\mathbb R$, the multiset $\{f(xf(y)+1),f(yf(x)-1)\}$ equals the multiset $\{xf(f(y))+1,yf(f(x))-1\}$.
Find all functions $f: \mathbb R \to \mathbb R$ such that for any $x,y \in \mathbb R$ , the multiset $\{f(xf(y)+1),f(yf(x)-1)\}$ is identical to the multiset $\{xf(f(y))+1,yf(f(x))-1\}$ . Note: The multiset $\{a,b\}$ is identical to the multiset $\{c,d\}$ if and only if $a=c,b=d$ or $a=d,b=c$ . My idea: Let's consider the given functional equation $$\{f(xf(y)+1), f(yf(x)-1)\} = \{xf(f(y))+1, yf(f(x))-1\}.$$ From this, we can derive that $$f(xf(y)+1) = xf(f(y))+1$$ and $f(yf(x)-1) = yf(f(x))-1$ for all $x, y \in \mathbb{R}$ . Now we analyze each part: $f(xf(y)+1) = xf(f(y))+1$ ; $f(yf(x)-1) = yf(f(x))-1$ .
Consider $$ f(xf(y)+1)=xf(f(y))+1 $$ and let $x=0$ : we have $$ f(1)=1. $$ Now consider $y=1$ and $\tilde{x}=x-1$ : we get $$ f(x)=x-1+1=x, $$ so the only function satisfying this case is the identity. You also need to consider the other possible option, i.e. $$ f(xf(y)+1)=yf(f(x))-1 $$ and $$ f(yf(x)-1)=xf(f(y))+1. $$ By letting $x=0$ we obtain $$ f(\pm 1)=yf(f(0))\mp 1, $$ so $f(\pm 1)=\mp 1$ . By direct verification $$ f(x)=-x $$ satisfies the functional equation; I guess it is the only such function, but I wasn't able to prove it.
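Both candidate solutions can be tested against the original multiset equation numerically (my check; this verifies the candidates but does not prove $f(x)=-x$ is the only one for the swapped case):

```python
import random

def satisfies(f, trials=300, tol=1e-9):
    # compare the two multisets at random points by sorting both pairs
    rng = random.Random(0)
    for _ in range(trials):
        x, y = rng.uniform(-5, 5), rng.uniform(-5, 5)
        left = sorted([f(x * f(y) + 1), f(y * f(x) - 1)])
        right = sorted([x * f(f(y)) + 1, y * f(f(x)) - 1])
        if any(abs(l - r) > tol for l, r in zip(left, right)):
            return False
    return True
```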
|real-analysis|analysis|functions|contest-math|functional-equations|
1
Maximizing the Number of Vertices for a Regular n-gon Within a Specific Integer Coordinate Range
I'm interested in identifying the largest possible number of vertices $n$ for a regular $n$-gon that can be formed within an integer coordinate plane. The constraints of my problem are as follows: The plane is defined by the coordinate range $[0,32767]^2$ . Each vertex of the $n$-gon corresponds to a unique integer pair $(x,y)$ . The context of this problem is that I am really trying to find the greatest number of coordinates forming a convex hull (excluding collinear points) within this range.
Niven's theorem feels relevant. If all corners have rational coordinates, then the center has rational coordinates as well: sum up all corners and divide by $n$ . So the vector $v_i$ from center to corner $i$ will have rational coordinates. If all corners have equal distance from the center, then you get the angle $\alpha$ which two consecutive corners form with the center characterized as $$\cos\alpha=\frac{\langle v_i,v_{i+1}\rangle}{\langle v_i,v_i\rangle}$$ This is rational as well. Also, if your polygon is regular you have $$\alpha=\frac{2\pi}{n}$$ So you have a rational fraction of the full angle (or equivalently a rational number of degrees), and its cosine is rational too. That doesn't leave you a lot of options, since the above theorem states that $\cos\alpha\in\{0,\pm\frac12,\pm1\}$ are the only possibilities, and you can quickly exclude the cases of $n=3$ and $n=6$ . The integers you have in your question are merely a special case of rational numbers. So they can't allow for any regular $n$-gon other than the square, $n=4$ .
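The excluded case $n=3$ can be confirmed by exhaustive search on a small grid (my addition; the empty result is consistent with Niven's theorem, though a finite search alone is not a proof):

```python
from itertools import combinations

def sq_dist(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def equilateral_triangles(limit):
    # all triples of distinct lattice points in [0, limit)^2 with equal pairwise distances
    pts = [(x, y) for x in range(limit) for y in range(limit)]
    return [t for t in combinations(pts, 3)
            if sq_dist(t[0], t[1]) == sq_dist(t[1], t[2]) == sq_dist(t[0], t[2])]
```

No equilateral triangle with integer coordinates shows up, while the square $(0,0),(1,0),(1,1),(0,1)$ obviously exists.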
|geometry|number-theory|
0
Given concave decreasing function, does $\exists c\in [0,1]$ s.t. $\frac{(f(c)-cf'(c))\left(c-\frac{f(c)}{f'(c)}\right)}{2}\leq2\int_{0}^{1}f(x)dx?$
For any curve $f:\mathbb{R}\to\mathbb{R},$ the gradient of $f(x)$ at the point $x=c$ is $f'(c).$ The curve passes through the point $(c,f(c)),$ and so the equation of the tangent to the curve at $x=c$ has equation $y-f(c) = f'(c)(x-c).$ This can be rearranged to: $ g(x): (= y) = f'(c)x + (f(c) -cf'(c)). $ My question is this: Given a differentiable, concave decreasing function $f:[0,1]\to [0,1],$ with $f(0)=1$ and $f(1)=0$ , does there always exist $c\in [0,1]$ such that the area of the triangle formed by the $x-$ axis, the $y-$ axis, and the tangent to $y=f(x)$ at $x=c$ is less than $2\displaystyle\int_{0}^{1} f(x) dx\ ?$ Thinking about $g(0)$ and the value of $x$ s.t. $g(x) = 0,$ where $g$ is as in the first paragraph, this is equivalent to asking the following: Given a differentiable, concave decreasing function $f:[0,1]\to [0,1],$ with $f(0)=1$ and $f(1)=0$ , does there always exist $c\in [0,1]$ such that $$ \frac{ (f(c) - c f'(c)) \left( c - \frac{f(c)}{f'(c)} \right) }{2} \leq 2\int_{0}^{1} f(x)\, dx\ ?$$
By the mean value theorem, there is a point $c$ with $f'(c)=-1$ . Then the area of the triangle can be computed as follows: on the diagonal it contains the point $(c,f(c))$ . Hence the base of the triangle has length $c+f(c)$ , and its total area is $\frac {(c+f(c))^2}2$ . Since $c,f(c) \in [0,1]$ , an upper bound of the area of the triangle is $c+f(c)$ . The integral of $f$ is at least the area of the polygon with corners $(0,0)$ , $(0,1)$ , $(c, f(c))$ , $(1,0)$ . That area is $$ c \frac{ 1+ f(c)}2 + (1-c) \frac{ f(c)}2=\frac {c+f(c)}2, $$ which is half the upper bound above; hence $2\int_0^1 f(x)\,dx \geq c+f(c) \geq \frac{(c+f(c))^2}{2}$ , the area of the triangle, as desired.
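As a quick sanity check of this argument, here is a small Python sketch on the sample function $f(x)=1-x^2$ (my choice of test function, not part of the answer above), which is concave, decreasing, and satisfies $f(0)=1$ , $f(1)=0$ :

```python
# Sanity check of the argument on the concrete concave decreasing
# function f(x) = 1 - x^2 (f(0) = 1, f(1) = 0 on [0, 1]).
f = lambda x: 1 - x**2

c = 0.5                                   # f'(c) = -2c = -1 at c = 1/2
# The tangent at c has slope -1, so both intercepts equal c + f(c) and
# the triangle area is (c + f(c))^2 / 2.
area_triangle = (c + f(c))**2 / 2
integral = 2/3                            # closed form of ∫_0^1 (1 - x^2) dx
assert (c + f(c)) / 2 <= integral         # the polygon-area lower bound
assert area_triangle <= c + f(c)          # since c + f(c) <= 2
assert area_triangle <= 2 * integral      # the claimed inequality
```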
|real-analysis|inequality|convex-analysis|examples-counterexamples|integral-inequality|
1
How to deduce that two random variables are equivalent?
For example, \begin{equation} Y_1= \begin{cases} x & \text{with prob. } p\\ 1-x & \text{with prob. } 1-p \end{cases} \end{equation} and \begin{equation} Y_2=\begin{cases} x & \text{with prob. } 2p-1\\ 0 & \text{with prob. } 1-p\\ 1 & \text{with prob. } 1-p \end{cases} \end{equation} where $x\in\{0,1\}$ and $p\in(\frac{1}{2},1)$ .
We can look at conditional distributions $P(Y_i|x)$ . $Y_1$ is already in this form: \begin{align} P(Y_1=x)&=p\\ P(Y_1=1-x)&=1-p\\ \end{align} Thus we focus on $Y_2$ . It is easy to see that $P(Y_2=x)$ is equivalent to the cases \begin{align} P(Y_2=0 \wedge x=0)&=\underbrace{(2p-1)}_{Y_2=x}+\underbrace{(1-p)}_{Y_2=0\text{ regardless of }x}=p\\ P(Y_2=1 \wedge x=1)&=\underbrace{(2p-1)}_{Y_2=x}+\underbrace{(1-p)}_{Y_2=1\text{ regardless of }x}=p\\ \end{align} and thus $P(Y_2=x)=p$ . Similarly, for $P(Y_2=1-x)$ we have two cases: \begin{align} P(Y_2=0 \wedge x=1)&=\underbrace{(1-p)}_{Y_2=0\text{ regardless of }x}=1-p\\ P(Y_2=1 \wedge x=0)&=\underbrace{(1-p)}_{Y_2=1\text{ regardless of }x}=1-p \end{align} and so $P(Y_2=1-x)=1-p$ . Therefore, $Y_1$ and $Y_2$ follow the same distribution.
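The case analysis above can be checked with exact rational arithmetic; the value $p=7/10$ below is just a sample choice in $(1/2,1)$ :

```python
from fractions import Fraction

p = Fraction(7, 10)                       # a sample p in (1/2, 1)

# Y2's branch probabilities sum to 1:
assert (2*p - 1) + (1 - p) + (1 - p) == 1

# Conditional on x = 0, Y2 equals x via the "x" branch (prob 2p-1) or
# via the constant-0 branch (prob 1-p); symmetrically for x = 1.
prob_eq = (2*p - 1) + (1 - p)
assert prob_eq == p                       # matches P(Y1 = x) = p
assert 1 - prob_eq == 1 - p               # matches P(Y1 = 1-x) = 1-p
```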
|probability|bernoulli-distribution|
1
Area of a plane triangle as limit of a spherical triangle
We know that the area of a spherical triangle (in a unit sphere) is given by $A(\triangle) = \alpha + \beta + \gamma - \pi$, where $\alpha$, $\beta$, and $\gamma$ are the interior angles of the spherical triangle. I would like to see how plane (Euclidean) geometry works as a limit when the radius of the sphere goes to infinity. Clearly the curvature of the sphere $1/r$ becomes zero and a sphere turns into a plane. What happens to the area of the triangle? If we say that the area of the triangle is \begin{equation} A(\triangle) = r^2 [(\alpha + \beta + \gamma) - \pi] \end{equation} clearly $\alpha + \beta + \gamma - \pi$ goes to zero, but not at the rate that $r^2$ goes to infinity. It seems that this limit is infinity. It seems to me that we cannot find something like $b h/2$ (base times height over two) from spherical geometry. Right? Of course objects become amplified in area by $r^2$ or length by $r$ so we would need to have something to pull them back. Thanks. Update: One way to
We apply the Spherical Law of Cosines for Angles and render the planar area from the limiting form of this law. Begin by rendering the arc lengths on a sphere of radius $R$ * as $a,b,c$ and the respective opposite angles as $\alpha,\beta,\gamma$ . The planar limit is to be obtained as $R\to\infty$ with $\alpha,\beta,c$ fixed. (* -- We do not assume a unit radius, so "extra" factors of $R$ will appear in formulas below.) From the spherical trigonometry law described above, we then have $\cos\gamma=-\cos\alpha\cos\beta+\sin\alpha\sin\beta\cos(c/R)$ We render $\cos(c/R)$ as $1-[1-\cos(c/R)]$ and apply the formula for the cosine of a sum to obtain $\cos\gamma+\cos(\alpha+\beta)=-\sin\alpha\sin\beta[1-\cos(c/R)]$ Next comes the sum-product relation for cosines: $2\cos\left(\dfrac{\alpha+\beta+\gamma}2\right)\cos\left(\dfrac{\alpha+\beta-\gamma}2\right)=-\sin\alpha\sin\beta[1-\cos(c/R)]$ $2\sin\left(\dfrac{\Delta}{2R^2}\right)\cos\left(\dfrac{\alpha+\beta-\gamma}2\right)=\sin\alpha\sin\beta[
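A numeric illustration of the limit (note: the check below uses l'Huilier's formula for the spherical excess, an equivalent route to the law-of-cosines manipulation above): for a geodesic triangle whose side lengths match a planar 3-4-5 triangle, $R^2$ times the excess approaches the planar area $bh/2 = 6$ .

```python
import math

def spherical_excess(a, b, c):
    """l'Huilier's formula: excess E from the three angular side lengths."""
    s = (a + b + c) / 2
    t = (math.tan(s / 2) * math.tan((s - a) / 2)
         * math.tan((s - b) / 2) * math.tan((s - c) / 2))
    return 4 * math.atan(math.sqrt(t))

R = 1000.0
sides = (3.0, 4.0, 5.0)              # planar 3-4-5 triangle, area b*h/2 = 6
E = spherical_excess(*(length / R for length in sides))
assert abs(R**2 * E - 6.0) < 1e-2    # R^2 * (spherical excess) -> planar area
```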
|geometry|spherical-geometry|
0
Solve $x^x-5x+6=0$ using Lambert W function.
How do I solve $x^x - 5x + 6 = 0$ using the Lambert W function? EDIT: I solved equations $2^x - 5x + 6 = 0$ and $3^x - 4x - 15 = 0$ using Lambert W function, but not able to solve $x^x - 5x + 6 = 0$ equation. How and which method can be used to solve this equation? How to solve this equation without graphically plotting, which method can be used? I am just preparing for my examination, I got this question after solving some Lambert W function equations and I don't know much about that.
My answer concerns solutions in closed form . Because the equations mentioned here are polynomial equations in more than one algebraically independent monomial and without a univariate factor, we don't know how to solve the equations by rearranging for $x$ using only finitely many applications of elementary functions we can read from the equation. But in some cases, we can try to guess solutions ( $2$ is a solution of your first equation and $3$ is a solution of your second equation, for example). Therefore certain Special functions are needed. The Special functions mentioned here are not so widely known. $\ $ If we want to see if an equation is solvable in terms of Lambert W , we can use the equation in its $\exp$ - $\ln$ form. Let $a,b,c\neq 0$ be constants. a) Equations of the form $$a^x+bx+c=0$$ can be solved in terms of LambertW: $$e^{\ln(a)x}+bx+c=0$$ We can rearrange the sum form to the product form: $$-(bx+c)e^{-\ln(a)x}=1.$$ We see, $x$ is in a linear function in both the coefficient and
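As a sketch of case a), here is how the OP's first equation $2^x-5x+6=0$ can be solved numerically through the closed form $x=-c/b-W_k\!\big(\ln(a)/(b\,a^{c/b})\big)/\ln(a)$ , which follows from the product-form rearrangement above (the two real branches of $W$ give the two real roots):

```python
import numpy as np
from scipy.special import lambertw

# a^x + b x + c = 0  =>  x = -c/b - W_k( ln(a) / (b a^(c/b)) ) / ln(a),
# following the product-form rearrangement above.  Here: 2^x - 5x + 6 = 0.
a, b, c = 2.0, -5.0, 6.0
arg = np.log(a) / (b * a**(c / b))

roots = []
for branch in (0, -1):                       # the two real branches of W
    x = -c / b - np.real(lambertw(arg, branch)) / np.log(a)
    assert abs(a**x + b*x + c) < 1e-9        # each branch gives a root
    roots.append(x)

assert abs(roots[0] - 2.0) < 1e-9            # the "guessable" root x = 2
```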
|logarithms|lambert-w|
0
Elementary proof that $\mathbb{E}[1/X] \geq 1/\mathbb{E}[X]$ for finitely supported positive random variables.
Let $X$ be a random variable with finite support contained in $(0,\infty)$ . By Jensen's inequality we have $\mathbb{E}[1/X] \geq 1/\mathbb{E}[X]$ . I am curious if there exists an elementary proof of this fact which avoids the use of Jensen's inequality. To rephrase the setup in an elementary way: Let $S$ be a finite set of positive real numbers, and let $f : S \to [0,1]$ be a function such that $\sum_{s \in S} f(s) = 1$ . Then we want to show that $$\sum_{s \in S} f(s)/s \geq \left(\sum_{s \in S} f(s) s \right)^{-1}.$$ Equivalently, we want to show that $$\sum_{s, s' \in S} f(s)f(s') \frac{s}{s'} \geq 1.$$ What is the most elementary proof of this fact? In particular, is there a nice proof which avoids arguing by induction? I would like to present a proof of this to students who are good at algebraic manipulation but who haven't learned Jensen's inequality and are not so comfortable with writing proofs by induction. Here's something that might be useful: $$\operatorname{Cov}(X,1/X) =
It can be proved by exploiting the monotonicity of the function $x \mapsto 1/x$ on $(0, \infty)$ . Let $X_1, X_2$ be i.i.d. copies of $X$ . It is easy to verify by monotonicity that \begin{align*} (X_1 - X_2)\left(\frac{1}{X_1} - \frac{1}{X_2}\right) \leq 0. \end{align*} Taking expectations on both sides and using that $X_1, X_2$ are i.i.d. then yields \begin{align} 1 - E[X_1]E\left[\frac{1}{X_2}\right] - E[X_2]E\left[\frac{1}{X_1}\right] + 1 = 2 - 2E[X]E\left[\frac{1}{X}\right]\leq 0, \end{align} which rearranges to the desired inequality $E\left[\frac{1}{X}\right] \geq \frac{1}{E[X]}$ .
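The pointwise inequality and its consequence can be checked exactly on a small support (the distribution below is my own sample choice):

```python
from fractions import Fraction

# X takes values 1, 2, 4 with probabilities 1/2, 1/4, 1/4 (a sample
# finitely supported positive distribution).
support = [(Fraction(1), Fraction(1, 2)),
           (Fraction(2), Fraction(1, 4)),
           (Fraction(4), Fraction(1, 4))]

# The pointwise inequality (s - t)(1/s - 1/t) <= 0 used above:
assert all((s - t) * (1/s - 1/t) <= 0
           for s, _ in support for t, _ in support)

EX  = sum(s * w for s, w in support)         # E[X]   = 2
EiX = sum(w / s for s, w in support)         # E[1/X] = 11/16
assert EiX >= 1 / EX                         # E[1/X] >= 1/E[X]
```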
|probability|jensen-inequality|
1
Is every square traceless matrix unitarily similar to a zero-diagonal matrix?
This question asks for the symmetric case, but after consideration I believe that any complex square matrix with zero trace is unitarily similar to a matrix with zero diagonal. This answer to another related question has a demonstration of the not necessarily unitary affirmative. Is the unitary case known true or false already? For reference, this is what makes me think it is true: Consider the set of values from the diagonal one pair at a time, say $d_0$ and $d_1$. We have the principal submatrix $$\pmatrix{d_0 & x \\ y & d_1}$$ The general unitary transform (for any $c$ and $s$ such that $cc^* + ss^* = 1$) is \begin{align} & \pmatrix{c & s \\ -s^* & c^*}\pmatrix{d_0 & x \\ y & d_1}\pmatrix{c^* & -s \\ s^* & c} \\ = & \pmatrix{cd_0 + sy& cx+sd_1 \\ -d_0s^* + c^*y & -s^*x+c^*d_1}\pmatrix{c^* & -s \\ s^* & c} \\ = & \pmatrix{\vert c \vert^2d_0 +\vert s\vert^2d_1 + cs^*x + c^*sy & -csd_0 - s^2y + c^2x + csd_1\\ -c^*s^*d_0 +(c^*)^2y - (s^*)^2x + csd_1& \vert s \vert^2d_0 +\vert c\vert^2d_
The proof can be simplified a bit if we take the following consideration at first. Any complex matrix $M$ can be written as $M = A + iB$ where $A,B$ are both Hermitian. We just set $A = (M+M^{*})/2$ and $B = -i(M-M^{*})/2$ . Since $M$ is traceless then ${\rm Tr}(A)={\rm Tr}(B)=0$ . At first, we use a unitary that diagonalizes $B$ . Then we use the discrete Fourier transform, which maps traceless diagonal matrices to matrices with $0$ diagonal. The result will be $M' = A' + iB'$ , where $B'$ has $0$ diagonal and ${\rm Tr}(A')=0$ . We then just need a way to transform matrices $\pmatrix{d_0 & x \\ y & d_1}$ , where $d_0,d_1$ are real and $d_0\le 0 \le d_1$ to the form $\pmatrix{0 & * \\ * & d_0+d_1}$ . This is a simpler problem, I believe, because we need to solve $$ \vert c \vert^2d_0 +\vert s\vert^2d_1 = (cs^*)x + (c^*s)y $$ only for real $d_0, d_1$ . We then use those transformations recursively to $M'$ $-$ by touching only two indices at a time with a two-level unitary. The idea is t
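The first reduction (split $M=A+iB$ , diagonalize $B$ , conjugate by the unitary DFT) can be sketched numerically; since every entry of the unitary DFT matrix has modulus $1/\sqrt n$ , the diagonal of $FDF^*$ is constantly $\operatorname{tr}(D)/n=0$ :

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random traceless complex matrix M = A + iB, with A, B Hermitian.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = M - np.trace(M) / n * np.eye(n)

B = -0.5j * (M - M.conj().T)                 # Hermitian; tr(B) = 0 since tr(M) = 0
_, U = np.linalg.eigh(B)                     # B = U diag(d) U*
F = np.fft.fft(np.eye(n)) / np.sqrt(n)       # unitary DFT matrix

V = F @ U.conj().T                           # the conjugating unitary
Mp = V @ M @ V.conj().T
Bp = -0.5j * (Mp - Mp.conj().T)              # equals F diag(d) F*

assert np.allclose(np.diag(Bp), 0)           # zero diagonal, as claimed
assert abs(np.trace(Mp)) < 1e-12             # conjugation preserves the trace
```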
|linear-algebra|matrices|
0
Fourier transform of the principal value distribution
I would like to compute the Fourier transform of the principal value distribution. Let $H$ denote the Heaviside function. Begin with the fact that $$2\widehat{H} =\delta(x) - \frac{i}{\pi} p.v\left(\frac{1}{x}\right).$$ Rearranging gives that the principal value distribution is, up to a constant $$\delta(x) - 2\widehat{H}.$$ If we take the Fourier transform of this, we get $$1- 2H(-x) ,$$ which seems wrong. First, why does this method produce nonsense? Second, what is a good way to do this computation?
I thought it might be instructive to present a rigorous and direct approach to determining the Fourier Transform of $\text{PV}\frac1x$ . To that end, we now proceed. First, we define the Fourier Transform of an $L^1$ function $f$ as $$\mathscr{F}\{f\}(k)=\int_{-\infty}^\infty f(x) e^{ikx}\,dx$$ Next, we define the principal value distribution $d(x)=\text{PV}\frac1x$ by $$\langle d,\phi\rangle= \lim_{\delta\to 0^+}\int_{|x|\ge \delta}\frac{\phi(x)}{x}\,dx$$ where $\phi\in \mathbb{S}$ . Therefore, we have for any $\phi\in \mathbb{S}$ $$\begin{align} \langle \mathscr{F}\{d\},\phi\rangle &=\langle d, \mathscr{F}\{\phi\}\rangle\\\\ &=\lim_{\delta\to 0^+}\int_{|x|\ge\delta}\frac1x \int_{-\infty}^\infty \phi(k) e^{ikx}\,dk\,dx\\\\ &=\lim_{\delta\to 0^+} \lim_{L\to\infty}\int_{-\infty}^\infty \phi(k) \int_{\delta\le |x|\le L}\frac{e^{ikx}}{x}\,dx\,dk\\\\ &=\lim_{\delta\to 0^+} \lim_{L\to\infty}\int_{-\infty}^\infty \phi(k) \int_{\delta\le |x|\le L} \frac{i\sin(kx)}{x}\,dx
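The inner integral above reduces to the Dirichlet integral, $\lim_{\delta\to 0,\,L\to\infty} \int_{\delta\le|x|\le L}\sin(kx)/x\,dx=\pi\,\mathrm{sgn}(k)$ , which is where the eventual $i\pi\,\mathrm{sgn}(k)$ comes from. A numeric check via the sine integral $\mathrm{Si}$ :

```python
import math
from scipy.special import sici

def dirichlet(k, delta=1e-8, L=1e8):
    """∫_{δ≤|x|≤L} sin(kx)/x dx = 2 (Si(|k|L) - Si(|k|δ)) sgn(k)."""
    si_L, _ = sici(abs(k) * L)
    si_d, _ = sici(abs(k) * delta)
    return 2 * (si_L - si_d) * math.copysign(1.0, k)

for k in (1.0, 2.5, -3.0):
    assert abs(dirichlet(k) - math.pi * math.copysign(1.0, k)) < 1e-6
```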
|fourier-analysis|distribution-theory|
0
Solving a PDE that looks like Kinetic Fokker Planck without diffusion
I am interested in solving the following PDE. $$\partial_t f(t,x,y)=-y\partial_xf(t,x,y)+(x+y)\partial_y f(t,x,y)+\alpha f(t,x,y)$$ with $f:\mathbb{R}^+\times\mathbb{R}\times \mathbb{R}\to\mathbb{R}$ . This reminds me of the kinetic Fokker-Planck equation without the diffusion term in $y$ and under a harmonic potential in both $x$ and $y$ . I tried the method of characteristics but I think I have too many variables and I was not able to conclude. How can I solve it (if possible)? Is there a method I am not aware of?
Hint . Let $f(x,y,t)=u(\xi,\eta,t)$ , where $\xi:=x^2+y^2+xy$ and $\eta:=\frac{y}{x}$ . Show that $$ -yf_x+(x+y)f_y=\xi u_{\xi}+(\eta^2+\eta+1)u_{\eta}. \tag{1} $$ Expressed in terms of the new coordinates, the PDE becomes $$ u_t=\xi u_{\xi}+(\eta^2+\eta+1)u_{\eta}+\alpha u, \tag{2} $$ for which the Lagrange-Charpit equations are separable.
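Identity $(1)$ can be verified with SymPy; below I substitute a concrete (but non-trivial) choice of $u$ , since the chain-rule computation is the same for a general $u$ :

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
XI, ETA = sp.symbols('XI ETA')

# A concrete non-trivial choice of u (the chain rule works identically
# for general u):
u = XI**2 * sp.exp(ETA) + sp.sin(ETA)

xi, eta = x**2 + y**2 + x*y, y / x
f = u.subs({XI: xi, ETA: eta})

lhs = -y * sp.diff(f, x) + (x + y) * sp.diff(f, y)
rhs = (XI * sp.diff(u, XI)
       + (ETA**2 + ETA + 1) * sp.diff(u, ETA)).subs({XI: xi, ETA: eta})
assert sp.simplify(lhs - rhs) == 0          # identity (1) holds
```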
|analysis|partial-differential-equations|partial-derivative|characteristics|
0
Convex conjugate of sum of convex functions — infimal convolution
I have a function $f: \mathbb{R} \to \mathbb{R}_+$ defined by $$ f(x) = f_1(x) + f_2(x) + f_3(x) - f_4(x) $$ where every $f_i$ is a proper, closed convex function defined over some interval $[a,b] \subset \mathbb{R}$ . I would like to minimize $f$ $$ \arg\min\limits_{x \in [a, b]} f(x) $$ The sum of a set of convex functions is convex, so I thought of approaching this by defining $$ g(x) = f_1(x) + f_2(x) + f_3(x) $$ and $h(x) = f_4(x)$ and minimizing via the difference of convex functions algorithm (DCA) (see slide 34 ). $$ \arg\min\limits_{x \in [a, b]} g(x) - h(x) $$ This requires the evaluation of the convex conjugate $g^\star$ of $g$ : $$ g^\star(y) = \sup_{x} \{ \langle x, y \rangle - g(x) \}$$ I know the conjugate functions $f_i^\star$ of each of my constituent functions $f_i$ . However, my understanding is that in general: $$ (f_1 + f_2 + f_3)^\star (y) \neq (f_1^\star + f_2^\star + f_3^\star)(y)$$ Apparently, extrapolating from this question (and reference they give):
I'll only attempt to answer part of your question involving the function you call " $g$ ". For simplicity, let's consider $f(x):= f_0(x) + f_1(x) + f_2(x)$ when $f_0,f_1,f_2$ are convex. Then the optimal value of primal problem is \begin{align} \nu(P) &:= \min_{x} f(x) = \min_{x,x^1,x^2} \{ f_0(x) + f_1(x^1) + f_2(x^2) : x^1=x,x^2=x\}\\ &\overset{(i)}{=} \max_{\lambda^1,\lambda^2} \min_{x,x^1,x^2} f_0(x) + f_1(x^1) + f_2(x^2) + \langle \lambda^1,x-x^1\rangle + \langle \lambda^2,x-x^2\rangle \\ &= \max_{\lambda^1,\lambda^2} \min_{x} f_0(x) + \langle \lambda^1+\lambda^2,x\rangle + \min_{x^1} f_1(x^1) - \langle\lambda^1,x^1\rangle + \min_{x^2} f_2(x^2) - \langle\lambda^2,x^2\rangle\\ &= \max_{\lambda^1,\lambda^2} -f_0^*(-\lambda^1-\lambda^2) - f_1^*(\lambda^1) - f_2^*(\lambda^2) =: \nu(D) \end{align} where $(i)$ follows from strong duality for convex programs.
|convex-analysis|convex-optimization|convolution|
0
Positive integer solutions for $a+b+c = 3(ab-c^2)$
A friend of mine came up with this problem: $$ a+b+c = 3(ab-c^2) $$ Prove no solutions exist for positive integers $a, b$ . They say it's based on an IMO shortlist problem, but claims it's a bit harder. I gave it a try myself, after isolating $a$ you get this: $a = (3c^2+c+b)/(3b-1)$ . This "simplifies" to the following statement: Prove $3c^2+c \not\equiv -b \pmod{3b-1}$ for all positive integers $b$ . Of course, you could go one step further: Because $3c^2+c$ is always even and $3b-1$ is even for odd $b$ , you can prove that the equation above doesn't hold for odd $b$ . You're left with: $3c^2+c \not\equiv -2b \pmod{6b-1}$ I feel like I've reached a dead end, since a lot of the values $6b-1$ are primes. I believe my method is completely wrong. How would you approach problems like these?
Multiplying both sides by $3$ and rearranging, you get $$(3a-1)(3b-1)=9c^2+3c+1.$$ For positive $a,b$ each factor on the left is positive and $\equiv -1\pmod 3,$ and a product of primes all $\equiv 1\pmod 3$ would be $\equiv 1\pmod 3,$ so the left side must be divisible by a prime $p\equiv-1\pmod 3.$ But then $\gcd(c,p)=1$ (if $p\mid c,$ then $p\mid 9c^2+3c$ would force $p\mid 1$ ). Then we must have a root to $x^3\equiv 1\pmod p,$ namely, $x=3c,$ since $(3c-1)\left((3c)^2+3c+1\right)=(3c)^3-1.$ If $3c\not\equiv1\pmod p,$ then this root is a non-trivial root, and that is only possible if $p\equiv1\pmod3.$ So $3c\equiv 1\pmod p,$ and thus $0\equiv (3c)^2+3c+1\equiv 3\pmod p,$ and thus $p=3,$ which again is a contradiction. There are negative solutions. For example when $c=3,$ we get $9c^2+3c+1=91=(-7)\cdot (-13)$ giving solutions $a=-2, b=-4.$ But there are no solutions where even one of $a,b$ is positive.
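A brute-force check of the factorization identity and of the claim on a small search range, plus the quoted negative solution:

```python
# Brute-force check of the identity behind "multiply by 3 and rearrange",
# and of the claim itself, on a small search range.
for a in range(1, 31):
    for b in range(1, 31):
        for c in range(-100, 101):
            assert ((3*a - 1)*(3*b - 1) - (9*c*c + 3*c + 1)
                    == 3*(3*(a*b - c*c) - (a + b + c)))
            assert a + b + c != 3*(a*b - c*c)    # no solutions with a, b >= 1

# The quoted negative solution (a, b, c) = (-2, -4, 3):
a, b, c = -2, -4, 3
assert a + b + c == 3*(a*b - c*c)
assert (3*a - 1)*(3*b - 1) == 9*c*c + 3*c + 1   # (-7) * (-13) = 91
```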
|elementary-number-theory|
1
Can we prove that $\frac{1}{n}\sum_{k=1}^n a_k \rightarrow \frac{1}{2}$?
Let $a_n$ be equal to 0 or 1, where it is randomly equal to these numbers. If we take the limit $\lim_{n\rightarrow\infty} \frac{1}{n}\sum_{k=1}^n a_k$ , it SEEMS to approach 1/2, but with this information alone we cannot prove it. I thought of this while flipping coins and playing heads or tails, and thinking about the probability of one or the other coming up. Consider that tails has the value 0 and heads has the value 1, so if we add up the values over all flips and divide by the number of times we did the experiment, we will have a value that is approximately the probability of flipping heads, which I think approaches the true value.
By the law of large numbers (in the limit $n\to\infty$ ), the average you are asking about converges to the expected value of $a_k$ . Assuming the $a_k$ are independent and the probability for $a_k$ being $0$ and $1$ is uniform and equal $(p(0)=p(1)=1/2)$ , we have that $$\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^na_k=\mathrm E[a_k]=p(0)\cdot 0+p(1)\cdot 1=\frac{1}{2}\cdot 0+\frac{1}{2}\cdot 1=\frac{1}{2},$$ like you wanted to prove.
|sequences-and-series|limits|
0
Showing that a simplicial map in McCarthy's paper on additivity is a homotopy equivalence
I am reading Prof. McCarthy's paper proving additivity ( 1992 ). In this paper he proves a version of Quillen's theorem A for simplicial sets (this is the unique result of the paper labeled as a proposition, for those interested). There is a step which I am failing to make more explicit. Let $D$ be a category with a $0$ object, a specified class of morphisms called cofibrations (which are the morphisms of interest, which we denote by $\rightarrowtail$ ) such that every cofibration admits a cokernel (which we write as a quotient). We consider a bisimplicial set $X$ whose $(m,n)$ -simplices are diagrams in $D$ of the form $$0=D_0\rightarrowtail D_1\rightarrowtail \cdots\rightarrowtail D_m\rightarrowtail E_0\rightarrowtail \cdots \rightarrowtail E_n$$ and we map this to $$0=E_0/E_0\rightarrowtail E_1/E_0 \rightarrowtail\cdots \rightarrowtail E_n/E_0.$$ We view this as a map $X\to N_\bullet(D)R$ where the codomain is the bisimplicial set which is constant in the first coordinate equal to
McCarthy's argument is elaborated in a later paper of his, The cyclic homology of an exact category (1994) - see lemma 3.4.4. Briefly, to show that the bisimplicial map $\rho: S_\bullet \mathrm{id}_\mathscr{D} \mid \mathscr{D} \to S_\bullet \mathscr{D} R$ is an equivalence, it suffices by the realization theorem to show that $\rho(-, [n]): (S_\bullet \mathrm{id}_\mathscr{D} \mid \mathscr{D})(-, [n]) \to S_\bullet \mathscr{D}R(-, [n])$ is an equivalence for each $n$ . The codomain of this map is a constant simplicial set $S_n \mathscr{D}$ . Consider the map $\nu: S_n \mathscr{D} \to (S_\bullet \mathrm{id}_\mathscr{D})(-, [n])$ that sends an element $(0 = F_0 \rightarrowtail F_1 \rightarrowtail \cdots \rightarrowtail F_n)$ to $(\underbrace{0 \rightarrowtail 0 \rightarrowtail \cdots \rightarrowtail 0}_{m \text{ zeroes}} \rightarrowtail F_0 \rightarrowtail F_1 \rightarrowtail \cdots \rightarrowtail F_n)$ . We claim that $\nu$ is a homotopy inverse for $\rho(-, [n])$ . One direction is easy
|category-theory|homotopy-theory|simplicial-stuff|
1
Approximate set in a generating algebra
Suppose that $(X,\mathcal{A})$ is a measurable space and let $\mathcal{A}^0$ be an algebra generating $\mathcal{A}$ . For each $\epsilon>0$ and $A\in \mathcal{A}$ , one can find $A^0\in\mathcal{A}^0$ such that $\mu(A\Delta A^0)<\epsilon$ , where $\mu$ is a finite measure on $(X,\mathcal{A})$ . Now, if we're given another finite measure $\nu$ on $(X,\mathcal{A})$ , can one find $A^0\in\mathcal{A}^0$ such that both $\mu(A\Delta A^0)<\epsilon_1$ and $\nu(A\Delta A^0)<\epsilon_2$ for some $\epsilon_1,\epsilon_2>0$ ? ( A similar argument is used in the accepted answer here .)
Assuming the measures are non-negative: Consider the non-negative measure $\eta$ given by $\eta = \mu + \nu$ . Do you see how you can proceed from here? In full detail, given $A\in \mathcal{A}$ we can find $A^{0} \in \mathcal{A}^{0}$ such that this new measure is small on $A\Delta A^{0}$ : for $\varepsilon = \min\{\varepsilon_1, \varepsilon_2\}$ we have that: $$\varepsilon > \eta(A\Delta A^{0}) =\mu(A\Delta A^{0}) + \nu(A\Delta A^{0}).$$ Now, as both measures are non-negative the previous inequality implies what you wanted to prove.
|measure-theory|
1
Commutator of the unit object, in a symmetric monoidal category.
Let $(\mathcal{C},\otimes,\phi,\psi,U)$ be a symmetric monoidal category, where $\otimes:\mathcal{C} \times \mathcal{C} \to \mathcal{C}$ is a bifunctor. Let $X,Y,Z \in \mathcal{C}$ , then $\phi:(X \otimes Y) \otimes Z \overset{\simeq}{\to} X \otimes (Y \otimes Z)$ is the associator , and $\psi:X \otimes Y \overset{\simeq}{\to} Y \otimes X$ is the commutator , where both $\phi,\psi$ are subject to some coherence conditions (pentagon & hexagon-axioms, see e.g. chap. $1$ "Tannakian Categories" by P. Deligne & J.S. Milne). Furthermore, $U$ is the unit object. In "Catégories Tannakiennes" by Saavedra Rivano (specifically, see $2.3.1,2.3.2$ on page $38$ ) there is a claim that one has $\psi_{U,U} = \text{id}_{U \otimes U}$ . It is furthermore said, I believe, that this follows from the commutativity of the diagram $\underline{Note}$ : I have reinterpreted Saavedra Rivano's notation to be consistent with P. Deligne & J.S. Milne's notation. Here, $r_{X} := \psi_{U,X} \circ l_X$ where $l_X:X \ov
It follows from the fact that $r_U=l_U$ . Mac Lane takes this as an additional axiom (item $(8)$ as shown below) in his treatment of (symmetric, braided or vanilla) monoidal categories but it can be shown superfluous, following from other things. Then $\psi_{U,U}r_U=l_U=r_U$ and $r_U$ is an isomorphism, so... By superfluous, I mean items $(4)-(7)$ suffice to prove item $(8)$ . You realise this by solving exercise $(9)$ and then considering:
|category-theory|monoidal-categories|
1
Asymptotics of Brownian motion
Let $X$ be a Brownian motion. Then a corollary of the iterated logarithm law says that: $$\limsup_{s\rightarrow +\infty} \frac{X_s}{(2s\log\log s)^{1/2}} = 1 \quad a.s. \tag{3.6}$$ $$\liminf_{s\rightarrow +\infty} \frac{X_s}{(2s\log\log s)^{1/2}} = -1 \quad a.s.\\ \tag{3.7}$$ With help from an earlier question I posted, my interpretation of (3.6) and (3.7) is that as $t \rightarrow \infty$ a Brownian motion with the scaling $(2t\log\log t)^{-1/2}$ will almost surely take values within $(-1,1)$ . This is only a statement about the $t \rightarrow \infty$ limit so for any finite time the Brownian motion may take values outside $(-1,1)$ , but we expect that this will happen less often as $t$ increases. What is confusing me is the following remark in the book Stochastic Calculus by Baldi (3.6) and (3.7) give important information concerning the asymptotic of the Brownian motion as $t \rightarrow + \infty$ . Indeed they imply the existence of two sequences of times $(t_n)_n, (s_n)_n$ , with
These facts follow from properties of $\limsup$ and $\liminf$ : We can rewrite (3.6) as (all the following identities and inequalities hold $\mathbb{P}$ -almost surely) $$ \lim_{s \to \infty} \sup_{t \geq s} \frac{X_t}{(2t\log\log t)^{1/2}} = 1. $$ By definition of the limit, for every $\epsilon > 0$ (here we also assume $\epsilon < 1$ ) we can find $s_0 > 0$ such that for every $s_1 \geq s_0$ $$ \sup_{t \geq s_1} \frac{X_t}{(2t\log\log t)^{1/2}} \geq (1-\epsilon/2). \tag{1}$$ By a property of the supremum, there exists $t_1 \geq s_1$ such that $$ \frac{X_{t_1}}{(2t_1\log\log t_1)^{1/2}} \geq (1-\epsilon) \implies X_{t_1} \geq (1-\epsilon) (2t_1\log\log t_1)^{1/2}. $$ Since (1) remains true if we replace $s_1$ with any $s \geq s_1$ , it holds in particular for $s_2 := t_1 \geq s_1$ . Starting from this $s_2$ we can find $t_2 \geq s_2$ as above, etc... Continuing this procedure, we can construct an increasing sequence $(t_n)_{n \geq 1}$ satisfying the required inequality for every $n \geq 1$
|real-analysis|probability|probability-theory|stochastic-processes|brownian-motion|
1
Quotient of $\ell^\infty$ has infinite dimension
Let $\ell^\infty$ be the bounded sequence space over the complex numbers and let $c_0$ be the subspace of all sequences converging to $0$. I am attempting to show that $\ell^\infty/c_0$ has infinite dimension. I have looked at several different ways to show this but I think perhaps the easiest is to show that there exists an infinite linearly independent set in the quotient. It is easy to find an infinite countable set in $\ell^\infty$ (just take the sequences of the form $(0,\dots,0,1,0,\dots)$), but of course this set is identified with zero in the quotient. I was also thinking of doing a proof by contradiction, and showing that no matter what finite linearly independent set in $\ell^\infty/c_0$ you give, we can construct a sequence not in the span. But the precise way to construct such a sequence is not evident to me at this point (some sort of diagonal argument, perhaps). Will a cardinality argument be necessary?
It is possible to prove that the dimension of $\ell^{\infty}/c_0$ is $|\mathbb{R}|$ under Axiom of Choice. We find a subset of $\ell^{\infty}/c_0$ whose cardinality is $|\mathbb{R}|$ , and it is linearly independent (any finite subset is linearly independent). Then the dimension is at least $|\mathbb{R}|$ . Moreover, as $|\ell^{\infty}/c_0|=|\mathbb{R}|$ , the dimension is at most $|\mathbb{R}|$ . Hence the dimension is $|\mathbb{R}|$ . Consider a basis $B\subseteq (0,1]$ of the vector space $\mathbb{R}$ over $\mathbb{Q}$ . We may require that $1\in B$ . Then the following subset $S\subseteq \ell^{\infty}/c_0$ has cardinality $|\mathbb{R}|$ and is linearly independent. $$ S=\{(e^{2\pi i b n})_{n\in\mathbb{N}} | \ b\in B-\{1\}\}. $$ To see this, take any finite subset $\{b_1,\ldots, b_k\}$ in $B-\{1\}$ . If $c_1,\ldots, c_k\in \mathbb{C}$ satisfy $$ c_1e^{2\pi i b_1 n}+\cdots+c_ke^{2\pi i b_k n} = 0\in \ell^{\infty}/c_0, $$ then we must have $$ \lim_{n\rightarrow\infty}(c_1e^{2\pi i b_1
|linear-algebra|
0
A problem about diagonalize invariant subspaces
Let $V$ be a non-zero finite-dimensional vector space and let $A \in \operatorname{End}(V)$ . Suppose that for every $A$ -invariant subspace $M$ of $V$ there exists an $A$ -invariant subspace $N$ such that $V=M\oplus N$ . Prove: $A$ is diagonalizable.
Assuming you can use the Jordan canonical form of $A$ , we have the following proof by contradiction : Suppose not. Then there is some eigenvalue $\lambda$ and some generalized eigenvector $v$ such that $$(A-\lambda I)v \neq 0 \\ (A-\lambda I)^2v = 0.$$ In other words, $A (A-\lambda I)v = \lambda (A-\lambda I)v$ so that $\mathbb{C}\cdot (A-\lambda I)v$ is an $A$ -invariant subspace. By assumption, there is some $A$ -invariant subspace $N$ such that $V = \left(\mathbb{C}\cdot (A-\lambda I)v\right) \oplus N$ , so in particular, $v = c(A-\lambda I)v + n$ where $c \in \mathbb{C}$ and $n\in N$ . Apply $A-\lambda I$ to both sides to show $$(A-\lambda I)v = c(A-\lambda I)^2v + (A-\lambda I)n = 0 + An - \lambda n \in N;$$ however, this shows $$(A-\lambda I)v \in (\mathbb{C}\cdot (A-\lambda I)v) \cap N = \{0\},$$ so $(A-\lambda I)v = 0$ , a contradiction.
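The smallest matrix realizing the contradiction is a $2\times 2$ Jordan block: it has a one-dimensional invariant subspace with no invariant complement, and SymPy confirms it is not diagonalizable:

```python
import sympy as sp

# A 2x2 Jordan block: the invariant line spanned by (A - 2I)v (for any
# generalized eigenvector v) has no A-invariant complement, matching
# the contradiction derived above.
A = sp.Matrix([[2, 1],
               [0, 2]])

lam, alg_mult, basis = A.eigenvects()[0]
assert lam == 2 and alg_mult == 2
assert len(basis) == 1                       # geometric multiplicity is 1
assert not A.is_diagonalizable()
```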
|linear-algebra|diagonalization|direct-sum|invariant-subspace|
0
Can we prove that $\frac{1}{n}\sum_{k=1}^n a_k \rightarrow \frac{1}{2}$?
Let $a_n$ be equal to 0 or 1, where it is randomly equal to these numbers. If we take the limit $\lim_{n\rightarrow\infty} \frac{1}{n}\sum_{k=1}^n a_k$ , it SEEMS to approach 1/2, but with this information alone we cannot prove it. I thought of this while flipping coins and playing heads or tails, and thinking about the probability of one or the other coming up. Consider that tails has the value 0 and heads has the value 1, so if we add up the values over all flips and divide by the number of times we did the experiment, we will have a value that is approximately the probability of flipping heads, which I think approaches the true value.
Say $S_n = \sum_{k=1}^n a_k$ , and suppose that the $a_n$ are independent and equal to 0 or 1 with equal probability. One sense of convergence is to say that $\lim_{n\to \infty}S_n/n =1/2$ means that, for every $\epsilon > 0$ , $\lim_{n\to\infty}\mathbb{P}(|\tfrac{S_n}{n} - \tfrac{1}{2}|> \epsilon) = 0$ . Using Chebyshev's inequality you can prove this pretty easily: since $\operatorname{Var}(S_n/n) = \tfrac{1}{4n}$ , $$\mathbb{P}(|\tfrac{S_n}{n} - \tfrac{1}{2}|> \epsilon) \leq \frac{1}{4n\epsilon^2} \to 0.$$
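A small Monte Carlo check of the Chebyshev bound $\mathbb{P}(|\tfrac{S_n}{n}-\tfrac12|>\epsilon)\le \tfrac{1}{4n\epsilon^2}$ (the sample sizes below are arbitrary choices):

```python
import random

random.seed(1)

# Empirical check of the bound P(|S_n/n - 1/2| > eps) <= 1/(4 n eps^2).
n, eps, trials = 500, 0.1, 1000
hits = 0
for _ in range(trials):
    s = sum(random.getrandbits(1) for _ in range(n))
    if abs(s / n - 0.5) > eps:
        hits += 1

bound = 1 / (4 * n * eps**2)                 # = 0.05 here
assert hits / trials <= bound
```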
|sequences-and-series|limits|
1
Probability variable exceeds another
Suppose we have a biased coin with probability $\frac13$ heads and $\frac23$ tails. If we flip this coin $n$ times, what is the probability we have more heads than tails? Is this something we can solve for? My best idea is to use a tail inequality such as Markov's Inequality or Chebychev's Inequality to form an upper bound but curious if there is a simple way to get the true probability. Only other thing I can think of is to brute force small cases of $n$ and see if a pattern exists.
Let $H_n$ and $T_n$ be the number of heads and tails, respectively, in $n$ flips. Since $H_n+T_n=n$ for all positive integers $n$ , we have $$ P[H_n\geq T_n] = P[H_n\geq n/2] \quad \forall n \in \{1, 2, 3, ...\}$$ Let $X_i$ be an indicator function that is 1 if flip $i$ is heads, and 0 else, so $$H_n = \sum_{i=1}^nX_i$$ Assume $\{X_i\}_{i=1}^{\infty}$ are i.i.d. $Bern(p)$ with $p=1/3$ . You can use the Hoeffding inequality https://en.wikipedia.org/wiki/Hoeffding%27s_inequality $$P\left[\sum_{i=1}^nX_i\geq np + t\right] \leq \exp(-2t^2/n)\quad \forall t>0, n\in\{1, 2, 3, ...\}$$ We can choose $t$ so that $np+t=n/2$ : $$ t = n(1/2-p) = n/6$$ So $$\boxed{P[H_n\geq T_n] \leq \exp(-n/18) \quad \forall n \in \{1, 2, 3, ...\}}$$ For $n$ even, you can compare this to the exact $$ P[H_n\geq T_n]=\sum_{i=n/2}^{n} {n \choose i}p^i(1-p)^{n-i} \quad \forall n \in \{2, 4, 6, ...\}$$ Truncating to 5 decimal places gives https://www.wolframalpha.com/input?i=y%3Dsum+%28n+choose+i%29%281%2F3%29%5Ei%282%
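Comparing the exact probability with the Hoeffding bound for a few even $n$ :

```python
import math

# Exact P[H_n >= T_n] = P[H_n >= n/2] versus the Hoeffding bound
# exp(-n/18), for p = 1/3 and a few even n.
p = 1/3
for n in (2, 4, 10, 20, 40):
    exact = sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                for i in range(n // 2, n + 1))
    assert exact <= math.exp(-n / 18)
```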
|probability|probability-theory|statistics|
0
if $\exists$ nontrivial polynomial relation $s(X) \cdot A(X) + t(X) \cdot B(X) = 0$, then do $A$ and $B$ have a common factor?
Let $R$ be a commutative ring with unit, and fix elements $A(X)$ and $B(X)$ , not both zero , in $R[X]$ . Suppose that there exist polynomials $s(X)$ and $t(X)$ in $R[X]$ , respectively of degrees less than $\deg(B)$ and $\deg(A)$ , and not both zero, such that $s(X) \cdot A(X) + t(X) \cdot B(X) = 0$ . (The existence of such $s$ and $t$ is implied by the vanishing of the resultant $\text{res}(A, B)$ .) Question. Can we conclude that $A$ and $B$ have a nontrivial common divisor $g$ in $R[X]$ ? (i.e., of positive degree, and hopefully monic?) When $R$ is a field, this is a standard aspect of the theory of resultants, and essentially follows from unique factorization: let's assume wlog that $t \neq 0$ . The relation implies $A \mid t \cdot B$ ; if $A$ and $B$ were coprime, then $A \mid t$ would hold, contradicting $\deg(t) < \deg(A)$ .
Even with $R$ a Dedekind domain your property need not hold. For example, let $R=\mathbb{Z}[\sqrt{-5}]$ : Let $$A(X)=(1+\sqrt{-5})X+2,\qquad\qquad\qquad$$ $$B(X)=(1+\sqrt{-5})X^2+5X+1-\sqrt{-5}.\,$$ Then $$(2X+1-\sqrt{-5})A(X)=2B(X).$$ However $A(X)$ and $B(X)$ cannot have any common linear factor, as $A$ is not divisible by any non-unit in $R$ , and $A$ does not divide $B$ . Conversely, starting from your assertion that the result holds for fields, it is easy to show that it holds for $R$ a UFD: Passing to the field of fractions of $R$ , we obtain a polynomial $f(X)$ (over $R$ ) of degree at least $1$ , such that $$f(X)\,|\,\lambda A(X),\qquad\qquad f(X)\,|\,\mu B(X),$$ with $\lambda,\mu \in R$ both non-zero. For any prime factor $p$ of $\lambda$ , we have $$p\,|\,f(X)g(X)=\lambda A(X)$$ for some polynomial $g(X)$ . Thus either $p$ divides $f(X)$ , so $$f(X)/p\,|\,\lambda/p A(X),\qquad\qquad f(X)/p\,|\,\mu B(X),$$ or $p$ divides $g(X)$ , so $$f(X)\,|\,\lambda/p A(X),\qquad\qquad f(X)\
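The claimed relation and the degree bounds from the question can be verified symbolically:

```python
import sympy as sp

X = sp.symbols('X')
w = sp.sqrt(-5)                              # SymPy writes this as sqrt(5)*I

A = (1 + w)*X + 2
B = (1 + w)*X**2 + 5*X + 1 - w

s = 2*X + 1 - w                              # s(X) A(X) + t(X) B(X) = 0
t = sp.Integer(-2)
assert sp.expand(s*A + t*B) == 0

# degree bounds required in the question: deg s < deg B, deg t < deg A
assert sp.degree(s, X) < sp.degree(B, X)
assert sp.degree(t, X) < sp.degree(A, X)
```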
|abstract-algebra|polynomials|ring-theory|divisibility|resultant|
1
Given the infinite series, justify carefully for which values of $p$ the series converges/diverges.
The series is \begin{equation}\sum^{\infty}_{n=5} \frac{1}{n(\ln n)^p} \end{equation} I tried using the Cauchy Condensation Test and ended up with \begin{equation}\sum^{\infty}_{k=1} \frac{1}{(k \ln 2)^p}\end{equation} From here, I am stuck. I wanted to get $k$ in the exponent so that I could view the second series as a geometric one. Please help!
Just notice that $p$ is a constant, and so is $\ln 2$ , so you can factor it out and your second series is $$ \frac{1}{\ln^p 2} \sum_{k=1}^\infty \frac{1}{k^p}, $$ a constant multiple of the $p$ -series, which converges if and only if $p > 1$ . By the condensation test, the original series therefore converges exactly when $p > 1$ .
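As a quick numerical sanity check (this snippet is mine, not part of the original answer): with $a_n = 1/(n(\ln n)^p)$, the Cauchy-condensed terms $2^k a_{2^k}$ coincide with $1/(k\ln 2)^p$, which is exactly the factored form above.

```python
import math

def a(n, p):
    # General term of the original series: 1 / (n * (ln n)^p)
    return 1.0 / (n * math.log(n) ** p)

p = 2.0
for k in range(1, 20):
    condensed = 2**k * a(2**k, p)            # Cauchy-condensed term 2^k * a_{2^k}
    closed_form = 1.0 / (k * math.log(2)) ** p
    assert math.isclose(condensed, closed_form)

# Factoring out 1/ln(2)^p leaves the p-series sum(1/k^p),
# which converges exactly when p > 1.
```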
|real-analysis|sequences-and-series|convergence-divergence|
0
Correct method to evaluate the limit $\lim_{x\to 0}{\cfrac{\alpha e^x+\beta e^{-x}+\gamma\sin(x)}{x\sin^2x}}=2/3$?
I couldn't solve this question, so I looked for hints. One method of solving it was to use the Taylor series expansion of each of the functions, which was a bit long. Another solution used L'Hôpital's rule instead. $$\lim_{x\to 0}{\cfrac{\alpha e^x+\beta e^{-x}+\gamma\sin(x)}{x\sin^2x}} = \lim_{x\to 0}{\cfrac{\alpha e^x+\beta e^{-x}+\gamma\sin(x)}{x^3}}$$ (using $\sin x \sim x$ in the denominator). For the limit to be finite, the limit of the numerator should be zero, therefore $$\alpha +\beta =0.$$ But the limit of $\sin x$ as $x$ tends to $0$ is taken as $x$ in most cases, so the correct equation should be $$\alpha +\beta +\gamma x=0$$ and further equations are obtained similarly. Can the limit of $\sin x$ as $x$ tends to $0$ be taken as either $x$ or $0$ as per convenience? Is using L'Hôpital's rule a correct method of solving this question? Or is the Taylor series the more appropriate method?
Using Taylor series is also short! $e^x = 1 + x/1! + x^2/2! + x^3/3! + \cdots$ , $e^{-x} = 1 - x/1! + x^2/2! - x^3/3! + \cdots$ , $\sin(x)= x - x^3/3! + x^5/5! - \cdots$ , and the denominator behaves like $x^3$ . For the limit to exist, the constant term of the numerator must vanish, hence $\alpha+\beta = 0$ . The numerator then becomes $2\alpha(x+x^3/3!+\cdots) + \gamma(x-x^3/3!+\cdots)$ , so the $x$ -coefficient gives $2\alpha + \gamma = 0$ , and the $x^3$ -coefficient gives $(2\alpha - \gamma)/3! = 2/3$ , i.e. $2\alpha - \gamma = 4$ $\implies \alpha = 1, \beta = -1, \gamma = -2$ . Therefore, option (c) is wrong!
|calculus|limits|taylor-expansion|
0
What’s the difference between an ordered pair and an ordered set?
I am told relations are (sets of?) ordered pairs. How do these relate to ordered sets? Are ordered sets ordered by ordered pairs (i.e., relations)?
An ordered set is usually understood to be a set equipped with a partial order. That is, it is a pair of the form $(S, \leq)$ where $S$ is a set and $\leq$ is a partial order on $S$ : a binary relation on $S$ that is reflexive ( $x \leq x$ ), antisymmetric (if $x \leq y$ and $y \leq x$ , then $x=y$ ), and transitive (if $x \leq y$ and $y \leq z$ , then $x\leq z$ ). Since $\leq$ is a binary relation on $S$ , it can ultimately be understood as a set of ordered pairs satisfying the above conditions. However, the notion of an ordered pair does not really have much to do with that of an ordered set. It's probably better to think of an ordered set as a set with additional structure, in this case a partial order. Thus we similarly speak of ordered groups, ordered vector spaces, etc.
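To make the "a relation is a set of ordered pairs" viewpoint concrete, here is a small illustrative sketch (the encoding is my own): divisibility on $\{1,2,3,4\}$, stored literally as a set of ordered pairs, satisfies the three axioms.

```python
from itertools import product

S = {1, 2, 3, 4}
# The partial order "x divides y", stored literally as a set of ordered pairs.
leq = {(x, y) for x, y in product(S, S) if y % x == 0}

reflexive = all((x, x) in leq for x in S)
antisymmetric = all(x == y
                    for x in S for y in S
                    if (x, y) in leq and (y, x) in leq)
transitive = all((x, z) in leq
                 for x in S for y in S if (x, y) in leq
                 for z in S if (y, z) in leq)

assert reflexive and antisymmetric and transitive
# (2, 3) and (3, 2) are both absent: divisibility is only a *partial* order.
assert (2, 3) not in leq and (3, 2) not in leq
```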
|elementary-set-theory|logic|
0
Showing $\frac1{ab+4}+\frac1{ac+4}+\frac1{ad+4}+\frac1{bc+4}+\frac1{bd+4}+\frac1{cd+4}\geq\frac65$, for positive $a,b,c,d$ with $ab+bc+cd+da=4$
Let us take $a\geq b\geq c\geq d>0$ such that $ab+bc+cd+da=4$ . Show that $$\frac{1}{ab+4}+\frac{1}{ac+4}+\frac{1}{ad+4}+\frac{1}{bc+4}+\frac{1}{bd+4}+\frac{1}{cd+4}\geq\frac{6}{5}$$ Note that $a+b+c+d \ge 4$ and $abcd \le 1$ (both are easy to check). I wasn't able to solve it using any of the following: the inequality of means; the rearrangement inequality; substituting $\frac{1}{x}$ , which doesn't help since $f(x) = \frac{x}{1+4x}$ is concave.
Proof. We have $ab, ac, ad, bc, bd, cd \le 4$ . ( Note : $ac + bd \le ab + cd \le 4$ .) Let $x = ab, y = bc, z = cd, u = da, v = ac, w = bd$ . Then $x, y, z, u, v, w \le 4$ , and $x + y + z + u = 4$ . We have $$\frac{1}{x + 4} = \frac25 - \frac{1}{25}(x + 4) + \frac{(x - 1)^2}{25(x + 4)} \ge \frac25 - \frac{1}{25}(x + 4) + \frac{(x - 1)^2}{25(4 + 4)}.$$ ( Note : $\frac25 - \frac{1}{25}(x + 4)$ is the first-order Taylor approximation of $\frac{1}{x+4}$ around $x = 1$ .) Similarly, we deal with $\frac{1}{y + 4}$ etc. It suffices to prove that \begin{align*} &\frac{12}{5} - \frac{1}{25}(x + y + z + u + v + w + 24)\\[6pt] &\quad + \frac{(x - 1)^2 + (y-1)^2 + (z-1)^2 + (u-1)^2 + (v-1)^2 + (w-1)^2 }{25(4 + 4)}\\ \ge{}& \frac65, \end{align*} or \begin{align*} &\frac{27}{100} - \frac{1}{20}(x + y + z + u) + \frac{1}{200}(x^2 + z^2) + \frac{1}{200}(y^2 + u^2) \\[6pt] &\quad - \frac{1}{20}(v + w) + \frac{1}{200}(v^2 + w^2)\\[6pt] \ge{}& 0, \end{align*} or, using $xz = yu = vw = abcd$ , an equivalent inequality in $v$ , $w$ and $abcd$ .
|inequality|jensen-inequality|rearrangement-inequality|
0
Error propagation in compass and straightedge constructions
I was trying to assess the impact of non-idealities on the outcome of a classical geometric construction, performed on paper with an actual compass and straightedge. I was thinking of possible approaches, but at the same time I didn't want to start from scratch, expecting that someone must have investigated this topic before me. In fact I found this article; I seem to find only this article. The author hits the core of the problem and perfectly delineates the matter; in particular he includes images that are pretty self-explanatory. Unluckily, he develops the discussion in qualitative terms and does not offer a useful framework. Near the end he writes "The goal might be to mount a precise analysis of all the standard constructions and compare competing constructions for accuracy. There is a literature of papers doing precisely this, and I will try to post some references later", but apparently he could not recall what such references were. Can anybody help me on this subject, suggesting methods or texts?
You will find some discussion of these issues in J. Wallner, R. Krasauskas, H. Pottmann, "Error propagation in geometric constructions", Computer-Aided Design Volume 32, Issue 11 (2000), pp. 631-641.
|geometry|analytic-geometry|geometric-construction|error-propagation|
0
Convergence of a sequence $x_{n+1}=p+\frac{1}{x_{n}}$
I'm attempting to prove the convergence of the sequence $\{x_n\}$ , given by $x_{n+1}=p+\frac{1}{x_{n}}$ with $x_1=1$ and $p>0$ . My Attempt: $a = p+\frac{1}{a}$ => $a=\frac{p+\sqrt{p^2+4}}{2}$ (using $x_n > 0$ ). $|x_{n+1}-a|=|p+\frac{1}{x_n}-a|=\frac{a-p}{x_n}|x_n-a|=\frac{(a-p)^{n}}{\prod\limits_{i=1}^nx_i}|x_1-a|$ . Since $0 < a-p < 1$ => $\lim \limits_{n \to \infty}(a-p)^n = 0$ => $\lim \limits_{n \to \infty}x_n=a$ . To prove the above, I need to first prove $x_n>1$ . I know it's right, but I can't prove it. Any help for $x_n>1$ , or any other way to prove the convergence? Thanks.
You need to prove $x_n> 1, \frac{1}{x_n}> 1-p$ at the same time. $n=2$ case is trivial. Suppose this is true for $x_n$ , then $x_{n+1}=p+\frac{1}{x_n}> 1$ , and $\frac{1}{x_{n+1}}=\frac{1}{p+\frac{1}{x_n}}> \frac{1}{1+p}> 1-p$ . By induction we are done.
|real-analysis|sequences-and-series|limits|
1
Convergence of a sequence $x_{n+1}=p+\frac{1}{x_{n}}$
I'm attempting to prove the convergence of the sequence $\{x_n\}$ , given by $x_{n+1}=p+\frac{1}{x_{n}}$ with $x_1=1$ and $p>0$ . My Attempt: $a = p+\frac{1}{a}$ => $a=\frac{p+\sqrt{p^2+4}}{2}$ (using $x_n > 0$ ). $|x_{n+1}-a|=|p+\frac{1}{x_n}-a|=\frac{a-p}{x_n}|x_n-a|=\frac{(a-p)^{n}}{\prod\limits_{i=1}^nx_i}|x_1-a|$ . Since $0 < a-p < 1$ => $\lim \limits_{n \to \infty}(a-p)^n = 0$ => $\lim \limits_{n \to \infty}x_n=a$ . To prove the above, I need to first prove $x_n>1$ . I know it's right, but I can't prove it. Any help for $x_n>1$ , or any other way to prove the convergence? Thanks.
Hint : denote $x_n= a_n/a_{n-1}$ with $a_0 =a_1 =1$ ; then the recurrence becomes $$\frac{a_{n+1}}{a_n}= p+ \frac{a_{n-1}}{a_n}\iff a_{n+1}=pa_n+a_{n-1}\tag{1}$$ and we can easily find the closed-form solution of $(1)$ .
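A numerical illustration of the hint (my own sketch, with an arbitrary choice of $p$): iterating $a_{n+1}=pa_n+a_{n-1}$ and checking that the ratios $x_n=a_n/a_{n-1}$ approach $\alpha=\frac{p+\sqrt{p^2+4}}{2}$.

```python
import math

p = 0.5
alpha = (p + math.sqrt(p * p + 4)) / 2        # positive root of x = p + 1/x

a_prev, a_cur = 1.0, 1.0                      # a_0 = a_1 = 1
for _ in range(120):
    a_prev, a_cur = a_cur, p * a_cur + a_prev # a_{n+1} = p*a_n + a_{n-1}

ratio = a_cur / a_prev                        # x_n = a_n / a_{n-1}
assert abs(ratio - alpha) < 1e-9
```

The convergence rate is governed by $|\beta/\alpha|^n$, where $\beta$ is the negative root, which is why a modest number of iterations already agrees to many digits.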
|real-analysis|sequences-and-series|limits|
0
Can't figure out multivariable limit of $\frac{x^3-x^2y}{x^2+y^6}$ with polar coordinate sub.
I need to find the limit of a function $f(x,y)$ as $(x,y)\rightarrow (0,0)$ . The only method I know of is to consider all paths through $(0,0)$ and do polar coordinate substitution to make it into a limit of $r\rightarrow0$ . $$\lim_{(x,y)\rightarrow(0,0)}f(x,y)=\frac{x^3-x^2y}{x^2+y^6}$$ So with polar coordinate substitution that becomes: $$\lim_{r\rightarrow0}f(r,\theta)=\frac{r^3\cos^3\theta-r^3\cos^2\theta\sin\theta}{r^2\cos^2\theta+r^6\sin^6\theta} = \frac{r\cos^2\theta(\cos\theta - \sin\theta)}{\cos^2\theta+r^4\sin^6\theta}\stackrel{?}{=}0$$ This gives the expected answer ( $0$ , from wolfram alpha) except when $\cos\theta=0$ , in which case it's a divide by zero again. If just using an arbitrary straight path $y=ax$ it's pretty easy to prove, but in my understanding that doesn't consider all paths, and thus is not a sufficient for determining the limit. Any help is appreciated!
$$L =\lim_{(x, y)\to(0, 0)}\frac{x^3-x^2y}{x^2+y^6} = \lim_{(x, y)\to(0, 0)}\frac{x^2}{x^2+y^6}(x-y)$$ Notice $\displaystyle\lim_{(x, y)\to(0, 0)} x-y = 0$ and $\displaystyle\left|\frac{x^2}{x^2+y^6}\right|\le 1$ , therefore, $L = 0$ .
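A spot-check of the squeeze bound (my own snippet, not part of the original answer): since $\left|\frac{x^2}{x^2+y^6}\right|\le 1$ gives $|f(x,y)| \le |x-y|$, the values must shrink along every sampled path into the origin, including $x=y^3$, the direction where $\cos\theta \to 0$ in the polar substitution.

```python
def f(x, y):
    # the function whose limit at the origin is being checked
    return (x**3 - x**2 * y) / (x**2 + y**6)

# sample f along several curves into the origin, including x = y^3
paths = [lambda t: (t, t), lambda t: (t, -2 * t), lambda t: (t**3, t)]

worst = 0.0
for path in paths:
    for t in (1e-2, 1e-4, 1e-6):
        x, y = path(t)
        val = abs(f(x, y))
        assert val <= abs(x - y) + 1e-18   # squeeze bound |f| <= |x - y|
        if t == 1e-6:
            worst = max(worst, val)

assert worst < 1e-5   # f -> 0 along each sampled path
```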
|limits|polar-coordinates|limits-without-lhopital|
1
Proof $\displaystyle \sum_{k=0}^n \binom{2n}{2k} = 2^{2n-1}$
Can someone help me prove the equation: $$\sum_{k=0}^n \binom{2n}{2k} = 2^{2n-1}$$ I know that via binomial theorem for even k: $$2^{2n} = (1+1)^{2n} = \sum_{k=0}^n \binom{2n}{k}$$ and for odd k: $$0 = (1-1)^{2n} = \sum_{k=0}^n \binom{2n}{k}\cdot(-1)^k$$ That's how i solve the equation: $$2^{2n-1}=\frac{2^{2n}}{2}=\frac{\sum_{k=0}^n \binom{2n}{k}}{2}$$ Now we can remove all odd k because they are equal 0. Also only even k remain: $$\sum_{k=0}^n \binom{2n}{2k}=2^{n+1}$$ And.. I have a mistake somewhere, because it must be: $$2^{n-1}$$ I think I don't fully understand the difference between odd k and the number of subsets with odd cardinality. Or odd k is an abbreviation for the number of subsets with odd cardinality?
I think you had a confusion when you took the cases of $k$ odd and $k$ even: in both sums, $k$ ranges over both even and odd values. I did not understand your remark, however; here is my attempt: \begin{align*} 0 &= (1-1)^{2n} = \sum_{k=0}^{2n} \binom{2n}{k} (-1)^k \\ \text{So:} \\ 0 &= \sum_{k=0}^{n} \binom{2n}{2k} - \sum_{k=0}^{n-1} \binom{2n}{2k+1} \\ \text{And:} \\ 2^{2n} &= (1+1)^{2n} = \sum_{k=0}^{2n} \binom{2n}{k} \\ \text{Then:} \\ 2^{2n} &= \sum_{k=0}^{n} \binom{2n}{2k} + \sum_{k=0}^{n-1} \binom{2n}{2k+1} \\ \text{Therefore:} \\ 2^{2n} &= 2 \sum_{k=0}^{n} \binom{2n}{2k} \\ \text{Finally:} \\ 2^{2n-1} &= \sum_{k=0}^{n} \binom{2n}{2k} \end{align*}
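The identity is easy to confirm for small $n$ (my own check, using Python's `math.comb`):

```python
from math import comb

for n in range(1, 12):
    # sum over even indices 2k and odd indices 2k+1 of C(2n, .)
    even_sum = sum(comb(2 * n, 2 * k) for k in range(n + 1))
    odd_sum = sum(comb(2 * n, 2 * k + 1) for k in range(n))
    # both halves are equal and each is half of 2^(2n)
    assert even_sum == odd_sum == 2 ** (2 * n - 1)
```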
|binomial-coefficients|binomial-theorem|
0
About a historical note on a famous series
Currently, I'm doing research on the history of the Basel problem and I happen to notice a small gap. Not on the history of the aforementioned problem per se, since its history is very well documented. In fact, I mean that I wasn't able to determine who was the first person to show the convergence of the series of reciprocals of squares, before its exact value was determined by Leonhard Euler. Although, I did, quite coincidentally I may add, come across a random comment on a post about a proof of the convergence of the series in discussion, saying that Bernoulli (unfortunately, not which member of the Bernoulli family) showed the convergence by bounding the series from above by the number 2. Sadly, it wasn't mentioned whether a Bernoulli brother was the first person to achieve such a result. In case that the previous holds to be true, I kindly ask for anyone to verify such claim. Otherwise, I ask of you to tell/find me that person and with robust evidence such as sources and web links.
According to W. Gautschi (2008), Stirling, already in 1730, had actually calculated the series to nine decimal places, but Euler did not yet know this. DOI: 10.1137/070702710
|sequences-and-series|math-history|
1
Name for maximum size of common neighborhoods between all the pairs of vertices?
I'm curious if there's a specific term for the following parameter: Given a graph $G = (V,E)$ , i am looking for $$ \ell(G) := \max_{u,v\in V}|N(u)\cap N(v)|$$ I know that if we look into edges this is the book number of the graph. Thanks in advance.
The quantity $|N(u) \cap N(v)|$ is referred to as the codegree of $u$ and $v$ , and so the parameter in the question is the maximum codegree of $G$ . Here is an article with an example of this usage. The codegree is a somewhat unfortunate term, because it is also used for a related but different concept for hypergraphs. (In a $k$ -uniform hypergraph, the codegree of a set of $k-1$ vertices is the number of edges containing all $k-1$ of them; for graphs, i.e. $k=2$ , this notion of codegree would reduce to the ordinary degree.) There is not really any ambiguity, since the context of the kind of graph you're considering is enough to decide which "codegree" you mean. However, it makes searching for results related to "maximum codegree" a bit challenging.
|graph-theory|terminology|
1
Finding all integral solutions to $ \frac{1}{x} + \frac{1}{y}= \frac{1}{4} $
I'm trying to find all integral solutions to the equation I found in a competition math packet "Find the number of integer solutions (x, y) to if, for instance, (x, y) = (2, −4) and (x, y) = (−4, 2) are counted as different integer solutions." $ \frac{1}{x} + \frac{1}{y}= \frac{1}{4} $ Is there an easier method than just brute-forcing it? I attempted to find solutions by using some algebra to simplify it $ \frac{1}{\frac{1}{x}+ \frac{1}{y}} = 4$ $\frac{xy}{x+y} = 4 $ $ xy = 4x + 4y $ $xy -4x - 4y + 16 = 16 $ $(y-4)(x-4) =16 $ In this form, I still need to guess
$$\begin{aligned} \frac1y=\frac14-\frac1x=\frac{x-4}{4x}\implies y = \frac{4x}{x-4} \end{aligned}$$ This is an integer if, and only if, $(x-4)|4x$ , which is true if, and only if $$(x-4)|4x-4(x-4) = 16,$$ therefore $x-4\in\{\pm1,\pm2,\pm4,\pm8,\pm16\}$ , i.e., $x\in\{-12,-4,0,2,3,5,6,8,12,20\}$ . Of course we can't have $x=0$ , but the other values produce valid solutions. Here they are: $$\begin{alignat}{2} -\dfrac1{12}+\dfrac13&=\dfrac14\qquad\dfrac13-\dfrac1{12}&=\dfrac14\\ -\dfrac14+\dfrac12&=\dfrac14\qquad\dfrac12-\dfrac14&=\dfrac14\\ \dfrac15+\dfrac1{20}&=\dfrac14\qquad\dfrac1{20}+\dfrac15&=\dfrac14\\ \dfrac16+\dfrac1{12}&=\dfrac14\qquad\dfrac1{12}+\dfrac16&=\dfrac14\\ \dfrac18+\dfrac18&=\dfrac14\\ \end{alignat}$$
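The divisor enumeration can be automated; the following sketch (mine, not part of the original answer) lists all integer solutions via the divisors $d = x-4$ of $16$ and confirms each one exactly.

```python
from fractions import Fraction

# Divisors d of 16, positive and negative; then x = d + 4 and y = 4x/(x-4).
divisors = [d for d in range(-16, 17) if d != 0 and 16 % abs(d) == 0]

solutions = []
for d in divisors:
    x = d + 4
    if x == 0:
        continue                  # 1/x is undefined
    y = 4 * x // d                # exact: d | 16 and 4x = 4d + 16, so d | 4x
    solutions.append((x, y))
    assert Fraction(1, x) + Fraction(1, y) == Fraction(1, 4)

assert len(solutions) == 9        # ten divisors, minus the one giving x = 0
```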
|algebra-precalculus|
1
An exact definition of multiplication
I am looking into repeated operations, and it seems really hard to precisely define multiplication. Of course, for integer $b$ and real number $a$ , we use the grade-school definition we all know: $$ab = \underbrace{a + a + a + \cdots + a}_{b\text{ times}}$$ but what about for real numbers $a$ and $b$ ? For exponentiation (for integers: repeated multiplication), we have a precise formula to define it, which is easy to derive: $$a^x = \sum_{n=0}^{\infty} \frac{x^n \left(\ln(a)\right)^n}{n!}$$ which is nice because we only have integer powers in the sum, which we already know how to define: $$x^n = \underbrace{x \times x\times x \times \cdots \times x}_{n\text{ times}}$$ But this just raises the question of how we define $x \times x$ precisely. Is there an analogous formula for multiplication? How does a calculator compute multiplication of reals? Note: According to sources, just approximating multiplication for real numbers uses calculus or numerical methods. I cannot grasp why.
The way we define these fundamental operations depends on how we define (or construct) the real numbers . When we do that, we define what we mean by "adding" and "multiplying" two numbers, and it is done in such a way that we construct a field , with several other properties. See for example Construction of real numbers . However, one way you can think we define the multiplication of two arbitrary numbers $a$ and $b$ is by taking sequences $\left\lbrace a_n\right\rbrace$ and $\left\lbrace b_n\right\rbrace$ of rational numbers that approximate $a$ and $b$ , respectively (that is, whose limits are $a$ and $b$ ), and we can define the product $ab$ as $\displaystyle\lim_{n\rightarrow \infty} (a_n b_n) $ , which makes sense since the product of two rational numbers can be defined "intuitively". Of course, it is needed to prove the product is well defined (basically, that you get the same result no matter what sequences you choose, as long as their limits are $a$ and $b$ ), but that's a routine verification.
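A small illustration of this definition (my own sketch): multiplying decimal truncations of $\sqrt2$ and $\sqrt3$, held as exact fractions, gives products approaching $\sqrt6$.

```python
import math
from fractions import Fraction

a, b = math.sqrt(2), math.sqrt(3)

errors = []
for n in range(1, 12):
    # rational truncations a_n, b_n with |a - a_n| <= 10^-n (likewise for b)
    a_n = Fraction(int(a * 10**n), 10**n)
    b_n = Fraction(int(b * 10**n), 10**n)
    errors.append(abs(float(a_n * b_n) - math.sqrt(6)))

# the products of the rational approximations converge to sqrt(2)*sqrt(3)
assert errors[-1] < 1e-9
assert errors[-1] < errors[0]
```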
|real-analysis|definition|arithmetic|real-numbers|
1
Alternating shifted central binomial sum with Cauchy weights
My question is how can one show that $$\lim_{n \to \infty} \frac{1}{\binom{2n}{n}}\sum_{k=1}^n (-1)^k \binom{2n}{n+k}\frac{x^2}{x^2+\pi^2k^2} = \frac{1}{2}\Big(\frac{x}{\sinh(x)}-1\Big) $$ I find the proposed identity nice because it feels like we can look at the presence of the fraction in $x$ on the left-hand side, for large $x$ , as a perturbation to the textbook sum of alternating binomial coefficients, with a little rearrangement. I'm also generally interested in techniques to handle alternating sums of shifted central binomial coefficients with different weights, which prompted my question at MSE 2824529. Skbmoore posted a proof of a neat and useful identity at 2827591 (and Tired commented a slick proof) that $$ \,\,\,\,\,\sum_{k=1}^n (-1)^{k+1} \binom{2n}{n+k} k^s = \binom{2n}{n} \sin(\pi s/2) \int_0^\infty \frac{dx \, \,x^s}{\sinh{\pi x}} \frac{n!^2}{(n+ix)!(n-ix)!}. $$ Likely an apt modification of that formula will net us the asymptotics above, but I'm not quite clever enough with modifications of this sort.
Another approach is to compute the partial fraction decomposition of the rational function $$ \frac{1}{(z-1^2) (z-2^2) \cdots (z-n^2)}. $$ Given that the poles are all simple we may write $$ \frac{1}{(z-1^2) (z-2^2) \cdots (z-n^2)} = \sum_{k=1}^n \frac{A_k}{z-k^2} $$ and the coefficients can be computed via $$ A_k = \lim_{z \to k^2} \frac{(z-k^2)}{(z-1^2)(z-2^2) \cdots (z-n^2)} = \prod_{\substack{1\leq j \leq n \\ j\neq k}} \frac{1}{k^2-j^2} = \frac{(-1)^{n-k} 2k^2}{(n-k)!(n+k)!}. $$ Therefore $$ \frac{1}{(z-1^2) (z-2^2) \cdots (z-n^2)} = \sum_{k=1}^n \frac{(-1)^{n-k} 2k^2}{(n-k)!(n+k)!} \frac{1}{z-k^2} $$ for all complex $z$ away from the poles. Replacing $z$ with $-x^2$ and rearranging gives $$ \prod_{k=1}^n \frac{1}{1+\frac{x^2}{k^2}} = 1- \frac{2x^2}{\binom{2n}{n}} \sum_{k=1}^{n} \frac{(-1)^{k+1}}{k^2+x^2} \binom{2n}{n+k} $$ Now let $n \to \infty$ and use the product formula for hyperbolic sine $$ \prod_{k=1}^{\infty} \left(1+\frac{x^2}{k^2} \right) = \frac{\sinh \pi x}{\pi x} $$ on the left.
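The finite identity (before letting $n\to\infty$) can be verified exactly in small cases with rational arithmetic; this check is mine, not part of the original answer.

```python
from fractions import Fraction
from math import comb

def check(n, x):
    # left side: prod_{k=1}^n 1/(1 + x^2/k^2), computed exactly
    lhs = Fraction(1)
    for k in range(1, n + 1):
        lhs /= 1 + Fraction(x * x, k * k)
    # right side: 1 - (2x^2 / C(2n,n)) * sum_k (-1)^{k+1} C(2n,n+k)/(k^2+x^2)
    s = sum(Fraction((-1) ** (k + 1) * comb(2 * n, n + k), k * k + x * x)
            for k in range(1, n + 1))
    rhs = 1 - Fraction(2 * x * x, comb(2 * n, n)) * s
    return lhs == rhs

# exact equality for several (n, x) pairs with integer x
assert all(check(n, x) for n in (1, 2, 3, 5) for x in (1, 2, 7))
```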
|sequences-and-series|summation|asymptotics|binomial-coefficients|
0
Convergence of a sequence $x_{n+1}=p+\frac{1}{x_{n}}$
I'm attempting to prove the convergence of the sequence $\{x_n\}$ , given by $x_{n+1}=p+\frac{1}{x_{n}}$ with $x_1=1$ and $p>0$ . My Attempt: $a = p+\frac{1}{a}$ => $a=\frac{p+\sqrt{p^2+4}}{2}$ (using $x_n > 0$ ). $|x_{n+1}-a|=|p+\frac{1}{x_n}-a|=\frac{a-p}{x_n}|x_n-a|=\frac{(a-p)^{n}}{\prod\limits_{i=1}^nx_i}|x_1-a|$ . Since $0 < a-p < 1$ => $\lim \limits_{n \to \infty}(a-p)^n = 0$ => $\lim \limits_{n \to \infty}x_n=a$ . To prove the above, I need to first prove $x_n>1$ . I know it's right, but I can't prove it. Any help for $x_n>1$ , or any other way to prove the convergence? Thanks.
Let $\alpha>0>\beta$ be the two roots of $$x=p+\frac1x\iff x^2-px-1=0,$$ i.e. $$\alpha=\frac{p+\sqrt{p^2+4}}{2},\quad \beta=\frac{p-\sqrt{p^2+4}}{2},$$ and then $$\alpha-p=\frac{1}{\alpha},\quad \beta-p=\frac{1}{\beta},\quad \left|\frac{\beta}{\alpha}\right|<1.$$ Now $$\frac{x_n-\alpha}{x_n-\beta} =\frac{p+\frac{1}{x_{n-1}}-\alpha}{p+\frac{1}{x_{n-1}}-\beta} =\frac{\frac{1}{x_{n-1}}-\frac{1}{\alpha}}{\frac{1}{x_{n-1}}-\frac{1}{\beta}} =\frac{\beta}{\alpha}\cdot\frac{x_{n-1}-\alpha}{x_{n-1}-\beta} =\cdots=\left(\frac{\beta}{\alpha}\right)^{n-1}\cdot\frac{1-\alpha}{1-\beta}.$$ So $$\lim_{n\to\infty}\frac{x_n-\alpha}{x_n-\beta}=0\iff\lim_{n\to\infty}x_n=\alpha.$$
|real-analysis|sequences-and-series|limits|
0
The composition of Convex functions?
My professor gave us the following question: Refute (with a simple example): Let $f,g:\mathbb{R}\to\mathbb{R}$ be two convex functions. The composition $h \triangleq f\circ g$ (that is, $h(x)=f(g(x))$ ) is also a convex function. But from what I can read online, the composition of two convex functions is convex as well; what am I missing here? The composition of two convex functions is convex
Let $f$ and $g$ be $f(x)=-x$ , $g(x)=x^2$ . Then $f$ and $g$ are convex (since they are twice continuously differentiable and their second derivatives are $\geq 0$ ). However, $f(g(x))=-x^2$ is not convex. (The composition rule you may have read about requires the outer function to be convex and nondecreasing; $f(x)=-x$ is decreasing, which is why that rule does not apply here.)
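A one-line numerical confirmation of the counterexample (my own snippet): midpoint convexity fails for $h(x)=f(g(x))=-x^2$.

```python
f = lambda x: -x          # convex (linear)
g = lambda x: x * x       # convex
h = lambda x: f(g(x))     # h(x) = -x^2

# Convexity would require h((x+y)/2) <= (h(x) + h(y)) / 2; it fails:
x, y = -1.0, 1.0
assert h((x + y) / 2) > (h(x) + h(y)) / 2    # 0 > -1
```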
|functions|derivatives|discrete-mathematics|convex-analysis|
0
Variance of Concave Function
Let $X:=[X_1,\dots, X_n]$ be a random vector with $X_i \in (0,2)$ and having a joint distribution $F_X$ . Take a constant vector $a:=[a_1,\dots, a_n]$ with $a_i \in [0,1]$ and $\sum_{i=1}^n a_i = 1$ . Consider the variance $$ v(a):={\rm var}( \log a^\top X) $$ Is it possible to tell if the function $v(a)$ is convex or concave in $a$ ? My attempts: We know that $a^\top X$ is linear in $a$ and $\log ()$ is concave, hence $\log a^\top X$ is concave in $a$ . However, when taking the variance, it involves a difference between the second moments and the squared mean, which complicates the computation. I tried to go with the second-order derivative test for Hessian, but the computation again became very complicated. So, I came back and tested some simple cases such as the Bernoulli case. With various simulations, I find it seems to be a concave function; however, only for the testing cases. Any suggestion is appreciated.
As I understood the problem, it isn't possible without knowledge about the covariances of the pairs $(X_{i},X_{j})$ , since ${\rm var}( \log a^\top X)= {\rm var} (\sum^{n}_{i=1} ( \log a_{i})X_{i}) = \sum^{n}_{i=1} {\rm var} (( \log a_{i})X_{i} ) + 2 \cdot\sum_{i \neq j \leq n } {\rm cov}((\log a_{i})X_{i};(\log a_{j})X_{j})$
|real-analysis|convex-analysis|convex-optimization|non-convex-optimization|
0
Understanding proof that smooth point is regular ( Second quetion, Gortz's Algebraic Geometry )
I am reading Gortz's Algebraic Geometry, proof of Lemma 6.26, and trying to understand a statement in it. First, I pose an associated question. Q. Let $A \subseteq B$ be a subring with prime ideals $\mathfrak{p} \subseteq A$ and $\mathfrak{q} \subseteq B$ such that $\mathfrak{p} = \mathfrak{q} \cap A$ ( lying over ). Are the residue fields $\kappa(\mathfrak{p}) := A_{\mathfrak{p}}/\mathfrak{p}A_{\mathfrak{p}}$ and $\kappa(\mathfrak{q}):=B_{\mathfrak{q}}/\mathfrak{q}B_{\mathfrak{q}}$ then isomorphic? If not, when are they? I don't think this statement holds in general. This question originates from the following proof (Lemma 6.26 of Gortz's book). Lemma 6.26. Let $k$ be a field, and $X$ a $k$ -scheme, locally of finite type. Let $x\in X$ be a point such that $X$ is smooth at $x$ of relative dimension $d$ over $k$ . Then the local ring $\mathcal{O}_{X,x}$ is regular of dimension $\le d$ . If $x$ is closed, then $\dim \mathcal{O}_{X,x}=d$ . Here the smoothness is defined as in the book, and the proof of Lemma 6.26 is quoted from there.
Yes. The proof is not really accurate. For any $y' \in \mathbb{A}^{n}_{\kappa(y)}$ lying over $y\in \mathbb{A}^{n}_k$ , it is not in general true that $\kappa(y') =\kappa(y)$ , so we cannot apply the reduced case directly. But we can choose a suitable $y'$ such that $\kappa(y') =\kappa(y)$ , as follows. Again let $f: X \to \operatorname{Spec} k$ , $l : \operatorname{Spec} \kappa(y) \to \operatorname{Spec}k$ , and $i_y : \operatorname{Spec}\kappa(y) \to X$ be the canonical morphisms ( Gortz's book p.71 ). Since $ l \circ id_{\operatorname{Spec} \kappa(y)} =l = \operatorname{Spec}( k \hookrightarrow \kappa(y)) := \operatorname{Spec}(\Gamma(f \circ i_y)) = f\circ i_y $ , by the universal property there is a unique morphism $g:=\operatorname{Spec}\kappa(y) \to \overline{X}:=X \otimes_k \kappa(y)$ such that $p\circ g = i_y$ and $\overline{f} \circ g = id_{\operatorname{Spec}\kappa(y)} $ . ( $p$ and $\overline{f}$ are as in the commutative diagram in the question. ) And let $y'$ be its image point.
|algebraic-geometry|
1
Complex Power of a Complex Number Using Euler's Formula
To get the principal answer to a complex number $(a+bi)$ raised to another complex number $(c+di)$ , I understand you can first determine $r=\sqrt{a^2+b^2}$ and $\theta = \arctan(b/a)$ , and then compute $e^{(\ln(r)\cdot c-d\cdot\theta)+i(\ln(r)\cdot d+c\cdot\theta)}$ . But I find that does not provide the correct answer for an expression like $(-1+1.732i)^{3}$ . Here $r=\sqrt{(-1)^2+1.732^2} = 2$ and $\theta = \arctan{\frac{1.732}{-1}} = -1.04718$ . So plugging that in we get $e^{(\ln{2}\cdot3-0\cdot(-1.04718))+i(\ln{2}\cdot0+3\cdot(-1.04718))} = -8$ . But if I expand the expression manually (or check it with Wolfram Alpha) I find the correct answer is actually $+8$ . It seems there must be a correction factor needed sometimes to switch the sign of the result? I thought it might be related to odd number exponents (in this case $3$ ) but it doesn't seem to be needed all the time (for example $(2+0i)^3$ works just fine without a correction needed). I've spent some time researching and I can't see this addressed anywhere.
Your calculation of arc-tan took the wrong branch of the tangent. Note that in your number $(−1+1.732\,i)$ the real part is negative and the imaginary part is positive, so the number belongs to the second quadrant and the actual argument is between $\pi/2$ and $\pi$ . Actually, it is $\theta = \frac 23\pi \approx +2.0944,$ which is $\left(\arctan \frac {1.732}{-1}\right) + \pi,$ whilst your result of $−1.04718 \approx \arctan \frac{-1.732}1$ corresponds to $(1-1.732\,i)$ in the fourth quadrant.
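In code the branch issue disappears if the argument is computed with the two-argument arctangent; here is a quick check (mine, using the exact $\sqrt3$ rather than the rounded $1.732$).

```python
import math

a, b = -1.0, math.sqrt(3.0)        # the base a + bi = -1 + sqrt(3) i
c, d = 3.0, 0.0                    # the exponent c + di = 3

r = math.hypot(a, b)               # r = 2
theta = math.atan2(b, a)           # 2*pi/3, correctly in the second quadrant

# (a+bi)^(c+di) = exp[(c ln r - d theta) + i (d ln r + c theta)]
mag = math.exp(c * math.log(r) - d * theta)
ang = d * math.log(r) + c * theta
result = complex(mag * math.cos(ang), mag * math.sin(ang))

assert abs(result - 8) < 1e-12     # (-1 + sqrt(3) i)^3 = 8, not -8
```

With `atan` alone the angle lands in the fourth quadrant and the same formula yields $-8$, reproducing the sign error in the question.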
|complex-numbers|eulers-number-e|
0
Taking partial derivative of SSD, wrt the parameters $a, b$?
In the book titled Analysis of Straight-Line Data, by Forman S. Acton, it is given on page 10: The classical 'least squares' procedure is most commonly derived by forming an expression for the sum of the squared vertical deviations from a general line, then demanding that this expression be minimized wrt the parameters of the line. If our points $(x_i, y_i)$ are compared with a line: $$Y_i = a + bx_i, \tag{1}$$ then the sum of squared deviations is given by $$SSD = \sum_{i=1}^n (y_i - Y_i)^2 \equiv \sum_{i=1}^n (y_i - a - bx_i)^2. \tag{2}$$ We are to choose $a$ and $b$ so that this expression is a minimum, which we accomplish by partially differentiating SSD wrt $a$ and wrt $b$ , and equating each derivative separately to zero, thereby obtaining the so-called normal equations of the system: $$an + b\sum x = \sum y,\\ a\sum x + b\sum x^2 = \sum xy. \tag{3}$$ (Summations are all over the data points unless explicitly indicated otherwise; thus $\sum xy$ means $\sum_{i=1}^n x_iy_i$ , etc.) Equations (3) are a pair of simultaneous linear equations in $a$ and $b$ .
I shall assume familiarity with the chain rule and the power rule of derivatives ( $(x^n)^\prime=nx^{n-1}$ ). Let me know if you need help with these. We have $$\mathsf{SSD}=\sum_{i=1}^n(y_i-a-bx_i)^2$$ Let us differentiate one of the terms $(y_i-a-bx_i)^2$ w.r.t. $a$ and $b$ . Note that for any function $f(x)$ , the chain rule gives $$\frac{\mathrm d}{\mathrm dx}(f(x))^2=2f(x)f^\prime(x)$$ So, first taking $y_i-a-bx_i=f(a)$ and noting that $f^\prime(a)=-1$ , we get $$\frac{\partial}{\partial a}(y_i-a-bx_i)^2=2(y_i-a-bx_i)(-1)$$ So, the derivative of the sum will just be $$\frac{\partial }{\partial a}\textsf{SSD}=\sum_{i=1}^n-2(y_i-a-bx_i)$$ This is equal to $0$ , and we can cancel the $-2$ to get $$\sum_{i=1}^ny_i-\sum_{i=1}^na-\sum_{i=1}^nbx_i=0$$ which is precisely the first equation in $(3)$ . Similarly, if we set $y_i-a-bx_i=f(b)$ , with $f^\prime(b)=-x_i$ , we will get $$\frac{\partial }{\partial b}(y_i-a-bx_i)^2=2(y_i-a-bx_i)(-x_i)$$ So, we get $$\frac{\partial }{\partial b}\mathsf{SSD}=\sum_{i=1}^n-2x_i(y_i-a-bx_i)$$ Setting this to $0$ and cancelling the $-2$ gives $$\sum_{i=1}^nx_iy_i-\sum_{i=1}^nax_i-\sum_{i=1}^nbx_i^2=0,$$ which is the second equation in $(3)$ .
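The two normal equations can then be solved directly (e.g. by Cramer's rule); the following stdlib-only sketch (variable names are mine) recovers the line $y = 2 + 3x$ from noiseless data.

```python
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 3.0 * x for x in xs]          # exact line y = 2 + 3x

n = len(xs)
Sx = sum(xs)
Sy = sum(ys)
Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))

# Solve  a*n + b*Sx = Sy,  a*Sx + b*Sxx = Sxy  by Cramer's rule.
det = n * Sxx - Sx * Sx
a = (Sy * Sxx - Sx * Sxy) / det
b = (n * Sxy - Sx * Sy) / det

assert abs(a - 2.0) < 1e-12 and abs(b - 3.0) < 1e-12
```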
|derivatives|partial-derivative|least-squares|standard-deviation|
1
Theorem 3.11 (c): Rudin's PMA
I wanted to ask some clarification on one of the proofs in Rudin's PMA, specifically Theorem 3.11 (c). It states that In $\mathbb R^k$ , every Cauchy sequence converges. The proof for it is as follows: Let $\{\mathbf x_n\}$ be a Cauchy sequence in $\mathbb R^k$ . Define $E_N$ to be the set $\mathbf x_N, \mathbf x_{N + 1}, \ldots$ . Then for some $N$ , we have that the diameter of $E_N$ is less than 1. The range of $\{\mathbf x_n\}$ is the union of $E_N$ and the finite set $\{\mathbf x_1, \mathbf x_2, \ldots, \mathbf x_{N - 1}\}$ . Then $\{\mathbf x_n\}$ is bounded. Since every bounded subset of $\mathbb R^k$ has compact closure in $\mathbb R^k$ by Theorem 2.41, then (c) follows from (b). In the bolded part, I have no clue as to how we can just say that there exists some N. Is there some property of Cauchy sequences that I'm missing?
Since $\{x_n\}$ is Cauchy, by the definition of a Cauchy sequence there exists $N$ s.t. $m,n>N\Rightarrow {\rm d}(x_m,x_n)<\frac{1}{2}$ . We claim the set $E_{N+1}=\{x_{N+1}, x_{N+2},\cdots\}$ has diameter less than $1$ . In fact, take arbitrary $a,b\in E_{N+1}$ , say $a=x_m, b=x_n$ with $m,n>N$ . Then ${\rm d}(a,b)={\rm d}(x_m,x_n)<\frac{1}{2}$ , so $${\rm diam}\,E_{N+1}:=\sup_{a,b\in E_{N+1}}{\rm d}(a,b)\le \frac{1}{2}<1.$$
|real-analysis|metric-spaces|cauchy-sequences|
0
Solve Differential Equation : $ (4xy^2)\, dx +(2x^2y +\frac{y}{x^2})\,dy= 0 $
I am trying again and again to solve this, but I get a wrong answer, and the lecturer told me it is wrong; I don't know whether he's right or not. First of all, I have asked the same question before; someone told me to solve it with separable variables. Yes, it is easy then, and the answer is $\sqrt[3]{e^{x^4}} - \sqrt[3]{c} = y$ . But I don't know how to solve this otherwise, because when I use the exact method, I meet a complicated construction: $$(4xy^2) dx + (2x^2y +\frac {y}{x^2})dy = 0 $$ $M = 4xy^2$ , $N = 2x^2y + \frac{y}{x^2}$ , $\frac{\partial M}{\partial y} =8xy$ , $\frac{\partial N}{\partial x} = 4xy-\frac{2y}{x^3}$ , $R(x) = \frac{8xy - (4xy - \frac{2y}{x^3})} {2x^2y + \frac{y}{x^2}} = \frac{4xy + \frac{2y}{x^3}}{2x^2y + \frac{y}{x^2}}$ . Looking at $R(x)$ , an integrating factor seems a good option, but the integral looks complicated, so the next step becomes $ e^{\int\frac{4xy + \frac{2y}{x^3}}{2x^2y + \frac{y}{x^2}}\,dx} $ . I have tried several ways to solve it, but the answers don't work out except by separation; maybe there is something I missed. I would appreciate any help.
An alternative is separation of variables. You'll just have to multiply by $x^2/y^2$ on both sides: $$ \begin{aligned} 4xy^2\mathrm dx+\left(2x^2y+\frac{y}{x^2}\right)\mathrm dy&=0\\ 4x^3\mathrm dx+\left(2x^4+1\right)\frac{\mathrm dy}{y}&=0\\ \frac{1}{2}\int\frac{8x^3}{1+2x^4}\mathrm dx+\int\frac{\mathrm dy}{y}&=0\\ \frac{1}{2}\ln(1+2x^4)+\ln y&=C'\\ \implies y&=\frac{C}{\sqrt{1+2x^4}} \end{aligned}$$
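A quick residual check (my own snippet) that $y = C/\sqrt{1+2x^4}$ satisfies the original equation, using the analytic derivative $y' = -4Cx^3(1+2x^4)^{-3/2}$:

```python
import math

C = 1.7                                   # arbitrary constant of integration

def y(x):
    return C / math.sqrt(1 + 2 * x**4)

def dy(x):
    # derivative of y(x) = C * (1 + 2x^4)^(-1/2)
    return -4 * C * x**3 * (1 + 2 * x**4) ** -1.5

for x in (0.3, 1.0, 2.5):
    # residual of  4 x y^2 + (2 x^2 y + y/x^2) y' = 0
    residual = 4 * x * y(x)**2 + (2 * x**2 * y(x) + y(x) / x**2) * dy(x)
    assert abs(residual) < 1e-12
```

Symbolically the cancellation is exact: $y' = -\frac{4x^3}{1+2x^4}\,y$, and $(2x^2 + x^{-2})x^3 = x(2x^4+1)$, so the second term equals $-4xy^2$.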
|ordinary-differential-equations|
0
Is an ideal generated by irreducible elements of a ring a radical ideal?
I'm wondering whether an ideal generated by irreducible elements of a ring is a radical ideal; I can't seem to prove it or disprove it.
The answer is NO. Let $R=\mathbb{Z}[\sqrt{-6}]$ . It is easy to show that $3$ is irreducible. Now, let $I=3R.$ Clearly $\sqrt{-6}\in R\setminus I$ (it is not of the form $3a+3b\sqrt{-6}$ for some $a,b\in\mathbb{Z})$ , but $(\sqrt{-6})^2=-6\in I$ .
|abstract-algebra|ring-theory|ideals|
0
Inequality $\frac{1}{4}(e^{-4} + e^{-1}) \leq \int_1^2 e^{-x^2}dx \leq \frac{1}{2}(e^{-4}+e^{-1})$
Please help me to understand how to prove the inequality \begin{equation} \frac{1}{4}(e^{-4} + e^{-1}) \leq \int_1^2 e^{-x^2}dx \leq \frac{1}{2}(e^{-4}+e^{-1}). \end{equation} Using the mean value theorem we can easily show that \begin{equation} e^{-4} \leq \int_1^2 e^{-x^2}dx \leq e^{-1}. \end{equation} But I completely don't understand how to obtain main inequality.
I will be proving the first inequality only, as the second one has been answered in the post you mentioned in the comment. I do think this might be a crude way to prove the first inequality, but I do not see any other method. Also, we will be using the first inequality of the aforementioned post. So let us sketch the graph first and mark points $A(1,e^{-1}), B(2,e^{-4}), C(1,0), D(2,0)$ . Let us mark an additional point $E$ with $x$ -coordinate $x_E$ that satisfies $e^{-x^2} = \frac{e^{-4}+e^{-1}}{4}$ , so this means $x_E= \sqrt{\ln(4) - \ln(e^{-4} + e^{-1})}$ . Points $F$ and $G$ are the points of intersection of the horizontal line through $E$ with $AC$ and $BD$ (extended) respectively (the graph is visualised here: https://i.stack.imgur.com/IDRip.png ). Take $e^{-1}$ out from the $\ln$ , so we are left with $\ln(1+e^{-3})-1$ , and since $e^{-3} \ll 1$ we can use the approximation $\ln(1+x) \lessapprox x$ . Thus we get $x_E \gtrapprox \sqrt{1+\ln(4)-e^{-3}}$ . Again, since $e^{-3} \ll\ln(4e)$ , we may neglect the $e^{-3}$ term, so $x_E \gtrapprox \sqrt{1+\ln 4}$ .
|calculus|integration|definite-integrals|
1
A $1/2$-Lipschitz function $f$ from $[0,1)$ to $[0,1)$ that has no fixed point?
I thought that $f(x) = \frac{x}{2} + \frac{1}{2}$ might be a solution. However, I am also doubting it because of bijectivity... is it necessary? Is it too obvious?
Your example does the job. The function is injective but not surjective (onto). Any continuous surjective function from $[0,1)$ to itself must have a fixed point. Either $f(0) = 0$ , or $f(0) > 0$ and $f(c) = 0$ for some $c > 0$ . In the latter case the function $g(x) = f(x) - x$ must have a zero in $(0,c)$ and thus $f$ must have a fixed point there.
|real-analysis|analysis|
1
Finding a point on $SO(3)$ that is equivalent (up to diffeomorphism) to a point on $\mathbb{RP}^3$
We know that topologically speaking, $$\text{SO}(3) \simeq \frac{D^3}{\sim_A}\simeq \frac{\mathbb{S}^3}{\sim_A}\simeq \mathbb{RP}^3 $$ where $\sim_A$ is the equivalence relation identifying antipodal points. I have spent quite some time understanding this diffeomorphism, and the derivation now makes sense to me. Using it, however, to find the answer to a question I've been wondering about is trickier. Suppose we consider the 'north pole' of $\mathbb{S}^3$ , which is defined in Cartesian coordinates as $(0,0,0,1)$ . On $\mathbb{RP^3}$ , this is identified with the point $(0,0,0,-1)$ , i.e., the 'south pole' of the 3-sphere. Suppose I want to find the rotation corresponding to this point of $\mathbb{RP}^3$ on SO(3). That is, to find the rotation ('point') on $SO(3)$ that corresponds to the single element of $\mathbb{RP}^3$ identifying the north and south poles on the three-sphere. Intuitively, I expect the rotation to be by $\pi$ radians. Identifying the axis is where the trouble comes i
I do believe quaternions are the most direct approach to it. That is to say: to computing the rotation that corresponds to a given point on $\mathbb{S}^3/\sim_A$ . Showing that this correspondence is bijective is a different story, but you said you already understood that part. You know that elements of $SO(3)$ are rotations of $\mathbb{R}^3$ . So in the simplest form an answer to your question would be, given an element $\overline{q}$ in $\mathbb{RP}^3$ , a recipe that tells you for each $x \in \mathbb{R}^3$ to what point in $\mathbb{R}^3$ the point $x$ is mapped by the rotation. (A more complicated form of answer would be giving the axis and angle of rotation, I come back to that later.) Now in this 'where does $x$ go?'-form the answer is really simple. Step 1: pick a representative $q \in \mathbb{S}^3$ of the equivalence class $\overline{q}$ . Step 2: View $q$ as a quaternion, so if you would have ordinarily written it as $(0, 0, 0, 1)$ it now becomes $0 + 0i + 0j + 1k = k$ Step 3:
|abstract-algebra|general-topology|differential-topology|quaternions|
0
In $\sigma (12)\sigma ^{-1}=\left(\sigma (1)\sigma (2)\right)=(23)$, why $\left(\sigma (1)\sigma (2)\right)=(23)$?
This is a proof of a formula in a math class. There are some parts that I don't quite understand, so I would like to ask for some clarification. In $\sigma (12)\sigma ^{-1}=\left(\sigma (1)\sigma (2)\right)=(23)$ , can anyone prove why the result is equal to the transposition (23)? Sorry, the $\sigma = (12...n)$ .
The first statement, $\sigma (12)\sigma ^{-1}=(\sigma (1)\sigma (2))$, is always true. The last part, $=(23)$, needn't be true at all. But it will be true, by the first part, if and only if $\sigma (1)=2$ and $\sigma (2)=3,$ or $\sigma (1)=3$ and $\sigma (2)=2.$ Thus if we are working in $\mathscr S_3$, we must have $\sigma =(123)$ or $\sigma =(13).$ On the other hand, if we are working in $\mathscr S_{100}$, there are $2\times 98!$ permutations $\sigma$ out of $100!$ that will have the same effect.
|group-theory|permutations|
1
How to calculate the multiple integral $\iiint_G 1\, dxdydz$?
$F=\{(x, y, z)\in R^3\mid x^2+y^2+z^2 \le 1\} \\ G=\{ (2x+y+z,x+2y+z,x+y+4z)\mid(x, y, z)\in F\}$ $\iiint_G 1\, dxdydz$ The integrand is $1$, so the integral represents the volume of $G$. I can calculate it directly via the substitution $x'=2x+y+z \\ y'=x+2y+z \\z'=x+y+4z,$ but it's a little cumbersome. Can an easier way be found, perhaps related to the eigenvalues of the matrix \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 4 \end{bmatrix} or something similar? Thanks.
The determinant of a $3\times3$ matrix is exactly the factor by which the corresponding linear transformation changes the volume of any region of $\Bbb R^3$ (with a negative sign if it "inverts" / mirrors space). (This generalizes in the obvious way to other dimensions as well.) So what you want is to take the determinant of your matrix and multiply by the volume of $F$ . And then the absolute value of that is your answer.
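As a quick numerical sanity check of this rule (the matrix is the one from the question, and the volume of the unit ball $F$ is $4\pi/3$; the helper name `det3` is mine):

```python
import math

# the linear map from the question: (x, y, z) -> (2x+y+z, x+2y+z, x+y+4z)
M = [[2, 1, 1],
     [1, 2, 1],
     [1, 1, 4]]

def det3(m):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

vol_F = 4.0 / 3.0 * math.pi        # volume of the unit ball F
vol_G = abs(det3(M)) * vol_F       # |det| is the volume scale factor
print(det3(M), vol_G)              # det = 10, so vol(G) = 40*pi/3
```

The determinant is also the product of the eigenvalues, which connects this to the eigenvalue idea floated in the question.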
|real-analysis|calculus|linear-algebra|integration|eigenvalues-eigenvectors|
0
Primorial Primes (Euclid primes): do they become more scarce?
I remember from my first number theory class a woman asking if every number that is the product of all primes plus one is prime. I know that this is false. I also know that at first it looks like it might be true. What I wonder is, do we know anything about the distribution: are most candidates indeed prime? Are there gaps where 2 or more candidates in a row are not prime? A related question that maybe I should ask separately but I will first try here: in cases where the candidate is not prime, does there: 1. tend to be a prime nearby? 2. Can a prime always be generated by taking out one of the primes from the sequence (not the last one)?
You wrote in a comment that you’re aware that we don’t know whether infinitely many Euclid numbers are prime, and indeed your questions might be answered without knowing this. But we also don’t know whether infinitely many Euclid numbers are composite – at least we didn’t in $1996$ , see The New Book of Prime Number Records by Paulo Ribenboim, and MathWorld still gives this as the current state of the question. Together, these known unknowns imply that the answers to your questions are unknown. If we knew that most Euclid numbers are prime, we’d know that infinitely many are prime, and if we knew that it’s not the case that most are prime, we’d know that infinitely many are composite. If we knew that infinitely many pairs of successive Euclid numbers are composite, we’d know that infinitely many are composite, and if we knew that this isn’t the case, we’d know that infinitely many are prime.
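For a concrete feel for how quickly the "always prime" pattern breaks, the first few Euclid numbers can be tested directly (trial division suffices at this scale; the helper is mine):

```python
def is_prime(n):
    """Trial division; fine for numbers this small."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

product = 1
for p in [2, 3, 5, 7, 11, 13, 17]:
    product *= p
    euclid = product + 1   # product of the first primes, plus one
    print(euclid, is_prime(euclid))
# 3, 7, 31, 211, 2311 are prime; then 30031 = 59 * 509 is composite
```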
|number-theory|prime-numbers|primorial|
0
Compound interest in between the compounding period
Background: I'm an engineering student with no experience with a real investment account or anything of that sort. If we were to extend the concept of compound interest strictly mathematically, the formula would be: $$\text{Amount} = \text{Principal} \times \text{rate} ^ t$$ where $t$ is time in number of compounding periods passed, including fractional part For example, a principal of 1, with an interest rate of 1.2 (20% increment), compounded annually, after 2 years and 3 months would be: $$\text{Amount} = 1 \times (1.2)^{(2+3/12)}$$ But as per my understanding, banks and other such institutes write the change on records only at the end of the compounding period (a year). So it also makes sense for someone to not realise this mathematical extension, and instead use simple interest for the fractional part of the year, following the formula: $$\text{Amount} = 1 \times (\text{rate} ^ {\text{(completed years)}} + \text{rate} \times \text{remaining fraction of the year})$$ Is there a form
This is a comment but too long to fit in the allocated space: There is no widely recognised term, but most would refer to simple interest, as the following: \begin{align} V_{end} = V_{start} \times(1 + i \times t) \end{align} where $V$ stands for value, $i$ is the quoted annual interest rate (e.g. $0.03$ for 3%) and $t$ the time. For compound interest it is common to refer to the annual rate (e.g. 3%) and then the compounding period (unless it is already clear), e.g. "3% per annum compounded quarterly", and by convention that means \begin{align} V_{end} = V_{start} \times \left(1 + i \times p\right)^{t/p} \end{align} where $p$ is the time period over which compounding is to be applied (e.g. 0.25 for quarterly), $t$ is the time and should be a whole number of periods, and $i$ is the annual interest rate. If the time includes an incomplete period at the end, the treatment is ambiguous, but typically simple interest is applied for the incomplete period. Note the addition of $1$ to the interest rate.
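A small sketch of the two conventions being contrasted — pure exponentiation versus simple interest on the fractional period — using the question's numbers (20% growth over 2 years and 3 months; function names are mine):

```python
def pure_compound(principal, rate, t):
    # 'rate' is the growth factor (1.2 for 20%); the exponent also
    # covers the fractional part of t
    return principal * rate ** t

def mixed_convention(principal, rate, t):
    # compound over whole periods, simple interest on the leftover fraction
    whole = int(t)
    frac = t - whole
    return principal * rate ** whole * (1 + (rate - 1) * frac)

t = 2 + 3 / 12                        # 2 years and 3 months
print(pure_compound(1, 1.2, t))       # ~1.5071
print(mixed_convention(1, 1.2, t))    # 1.44 * 1.05 = 1.512
```

For a fraction of a period $1+it \ge (1+i)^t$, so the simple-interest treatment of the stub period slightly favours the depositor.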
|compound-interest|
0
Probability Theory: Generating Functions of Random Variables
Let $X, Y$ be independent random variables with the geometric distribution with parameter $p > 0$ . (a) Compute the mean of $Z = XY$ . I got that $E(Z) = 1/p^2$ (b) Compute the probability generating function of $W = X + 2Y$ This is my work so far: work I'm not sure if this right and if it is the final answer or not. Please help.
Since $X$ and $Y$ are independent, we have $$ \mathbb E[XY] = \mathbb E[X]\mathbb E[Y] = \frac1p\cdot\frac1p=\frac1{p^2}. $$ Recall that the probability generating function of $X$ is $$ G_X(s) := \mathbb E[s^X] = \sum _{k=1}^{\infty } (1-p)^{k-1}p s^k= \frac{ps}{1-(1-p)s}, $$ so the probability generating function of $2Y$ is $$ G_{2Y}(s) := \mathbb E[s^{2Y}] = \sum _{k=1}^{\infty } (1-p)^{k-1}p s^{2k}= \frac{p s^2}{1-(1-p) s^2}. $$ By independence, the probability generating function of $W$ is the product of the probability generating functions of $X$ and $2Y$ : \begin{align} G_W(s) &= G_X(s)G_{2Y}(s)\\ &= \frac{ps}{1-(1-p)s}\cdot\frac{p s^2}{1-(1-p) s^2}\\ &= \frac{p^2s^3}{(1-(1-p)s)(1-(1-p)s^2)}. \end{align}
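The closed form can be checked against a brute-force truncated double sum over the two geometric pmfs (the parameter values $p=0.3$, $s=0.6$ are arbitrary choices of mine):

```python
p, s = 0.3, 0.6
q = 1 - p

# closed form for G_W(s), W = X + 2Y
closed = p**2 * s**3 / ((1 - q * s) * (1 - q * s**2))

# brute-force E[s^(X+2Y)] from the two geometric pmfs (support {1, 2, ...})
brute = sum(q ** (j - 1) * p * q ** (k - 1) * p * s ** (j + 2 * k)
            for j in range(1, 300) for k in range(1, 300))

print(closed, brute)   # the two values agree to many decimal places
```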
|probability|probability-theory|probability-distributions|generating-functions|geometric-probability|
0
Prove $(x,y)\rightarrow f(x,y)=\dfrac {\cos(xy)}{(1+x^2)(1+y)}\in L^{1}(]0,+\infty[\times]-1,1[)$
Today I have a question about proving that the function $$(x,y)\rightarrow f(x,y)=\dfrac {\cos(xy)}{(1+x^2)(1+y)},$$ with $(x,y)\in\, ]0,+\infty[\,\times\,]-1,1[$, is integrable. **My attempt:** I used the result $$\int_{0}^{\infty} \frac{\cos xy}{x^2 + 1} dx = \frac{\pi e^{-y}}{2},$$ but when I complete the computation I get the divergent integral $$\int_{-1}^{1}\dfrac{\pi e^{-y}}{2(1+y)}dy.$$ Also, when I start by integrating with respect to $y$, I again get a divergent integral. So how can I prove $$f\in L^{1}(]0,+\infty[\times]-1,1[)?$$
Consider for $0<y\le 1$ the functions $f(y)=\int_0^{\infty}\frac{|\cos(xy)|}{1+x^2}dx\geq g(y)$ where $g(y)=\sum _{k=1}^{\infty}\int_{\frac{k\pi}{y}}^{\frac{k\pi}{y}+\frac{\pi}{4y}}\frac{|\cos(xy)|dx}{1+x^2}\geq h(y)$ and where $$h(y)=\frac{\sqrt{2}}{2}\sum _{k=1}^{\infty}\int_{\frac{k\pi}{y}}^{\frac{k\pi}{y}+\frac{\pi}{4y}}\frac{dx}{1+x^2}=\frac{\sqrt{2}}{2} \sum_{k=1}^{\infty}\frac{y}{y^2+\pi^2k^2+\frac{\pi^2yk}{4}}>Ky>0$$ where $K$ is a positive constant, since $0<y\le 1$. Then $$\int_{-1}^1\frac{1}{1+y}f(|y|)dy\geq \int_{-1}^1\frac{K|y|}{1+y}dy=+\infty.$$ Edit. If $f$ is defined on $[0,\infty)$ we say that $\int_0^{\infty} f(x)dx$ converges to $L\in \mathbb R$ if $$\lim_{T\to \infty} \int_0^{T} f(x)dx=L.$$ We say that $f$ is integrable on $[0,\infty)$ if $\int_0^{\infty} |f(x)|dx<\infty$ (which implies of course that both $\int_0^{\infty} |f(x)|dx$ and $\int_0^{\infty} f(x)dx$ are convergent). Example: $L$ exists for $f(x)=\sin x/x$ but $f$ is not integrable. The set $L^1([0,\infty))$ is the space of all integrable functions.
|real-analysis|integration|measure-theory|definite-integrals|
0
Restriction of a stable bundle on a curve
Let $X \subset \mathbb P^N$ be a smooth projective surface. Let $H$ be an ample divisor on $X$. Let $M_{X,H}(r;c_1,c_2)$ be the moduli space of rank $r$, slope stable (w.r.t. $H$) vector bundles with fixed Chern classes $c_1,c_2$. Let $C$ be a smooth curve lying on $X$ such that for every $E \in M_{X,H}(r;c_1,c_2)$, $E|_{C}$ is semistable on $C$. Then we have a natural map from $M_{X,H}(r;c_1,c_2)$ to $M_C(r, d)$ (the moduli space of semistable bundles of rank $r$ and degree $d$ on the curve $C$) given by $E \mapsto E|_C$. My questions are: $(i)$ Is this map a morphism between these two moduli schemes? $(ii)$ If so, is it an injective morphism? (I guess in the rank $1$ case it is injective; please correct me if this is wrong.) Any help is appreciated.
Yes for (1), because the restriction gives a morphism of moduli functors, hence also of their coarse moduli spaces. No for (2), even in the rank 1 case; consider for instance $X = Y \times \mathbb{P}^1$ and $C = \{y_0\} \times \mathbb{P}^1$ , where $Y$ is a smooth curve of positive genus and $y_0 \in Y$ is a point.
|algebraic-geometry|
1
find the remaining rows of a character table
Assume we're given a group of order 10 with 4 conjugacy classes with representatives $g_i$ for $1\leq i\leq 4$, so that $(|C_G(g_i)| : 1\leq i\leq 4) = (10,5,5,2)$, where $C_G(g_i) := \{h \in G : g_i h = h g_i\}$. Two rows of the character table are given in the following table: Find, with proof, the other rows of the character table. I know I should use some properties of character tables, such as the fact that the columns of the character table are orthogonal and hence the character table is invertible. This follows from the theorem that if $C,C'$ are conjugacy classes of $G$ and $g\in C, h\in C',$ then $\sum_{i=1}^s \chi_i (g)\overline{\chi_i}(h) = |G|/|C|$ if $C=C'$ and $0$ otherwise. But I'm not sure if this will help deduce the other rows of the character table.
Note that $C_G(g_2)$ has index $2$ in $G$ and is hence normal. Hence, $G$ admits the sign character $\chi_3$, i.e., the unique non-trivial character of the form $G \to G/C_G(g_2) \to \{\pm 1\}$. Since $\lvert C_G(g_3)\rvert = 5$, the only character $C_G(g_3) \to \{\pm 1\}$ is the trivial one. Thus, $\chi_3(g_3) = 1$. Clearly, $\chi_3(g_1) = \chi_3(g_2) = 1$. Since $\chi_3\neq \chi_1$, we must have $\chi_3(g_4) = -1$. From the relation $10 = \lvert G\rvert = \sum_{i=1}^4 \chi_i(1)^2 = 1^2 + 2^2 + 1^2 + \chi_4(1)^2$, we deduce $\chi_4(1) = 2$. Thus, the character table has the form $$ \begin{array}{r|cccc} & g_1 & g_2 & g_3 & g_4 \\ \hline \chi_1 & 1 & 1 & 1 & 1 \\ \chi_2 & 2 & \frac{-1+\sqrt{5}}{2} & \frac{-1-\sqrt{5}}{2} & 0 \\ \chi_3 & 1 & 1 & 1 & -1 \\ \chi_4 & 2 & a & b & c \end{array} $$ Since the columns need to be orthogonal, we deduce $$ \begin{align} a &= \frac{-1-\sqrt{5}}{2} \\ b &= \frac{-1+\sqrt{5}}{2} \\ c &= 0. \end{align} $$
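The completed table can be sanity-checked against the column orthogonality relations quoted in the question, $\sum_i \chi_i(g)\overline{\chi_i}(h) = |C_G(g)|$ for $g,h$ in the same class and $0$ otherwise (pure Python; the array layout is mine):

```python
import math
from itertools import product

r5 = math.sqrt(5)
a, b = (-1 + r5) / 2, (-1 - r5) / 2

# rows chi_1..chi_4, columns g_1..g_4, as in the table above
table = [
    [1, 1, 1,  1],
    [2, a, b,  0],
    [1, 1, 1, -1],
    [2, b, a,  0],
]
centralizer = [10, 5, 5, 2]   # |C_G(g_j)|

for i, j in product(range(4), repeat=2):
    inner = sum(table[k][i] * table[k][j] for k in range(4))
    expected = centralizer[i] if i == j else 0
    assert abs(inner - expected) < 1e-9
print("column orthogonality relations hold")
```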
|abstract-algebra|group-theory|representation-theory|characters|
0
Solving the roots of $(x^2-4)(x+1)=4-2x$
This is a very easy question I suppose. I am trying to help my niece solve some equations, i.e. find real roots. There was one equation which we solved, but I wasn't particularly satisfied with how we solved it; we guessed the end result, which was correct in the end. Let me state the equation: $$(x^2-4)(x+1)=4-2x$$ We first did the multiplication on the left and rewrote it as: $$x^3+x^2-2x-8=0$$ Next we started to figure out if there exists a factorization of the form: $$(x^2+ax+b)(x+c)=x^3+(a+c)x^2+(ac+b)x+bc$$ So in order for this to exist, we should have: \begin{align*} a+c &= 1 \\ ac+b &= -2 \\ bc &= -8 \end{align*} So we simply solved that $a=1-c$ and $b=-\frac{8}{c}$ and inserted these into the equation in the middle to get: \begin{align*} c(1-c)-\frac{8}{c} &=-2 \\ -c^3+c^2+2c-8 &=0 \\ c(-c^2+c+2) &= 8 \end{align*} at which point I noticed that we didn't get anything more useful for trying to find out this factorization. Anyhow, a solution of $c=-2$
You could have noticed that $x^2-4=(x-2)(x+2)$ and $4-2x=-2(x-2)$ , so that $$\begin{align}(x^2-4)(x+1)=4-2x&\iff(x-2)(x+2)(x+1)=-2(x-2)\\&\iff(x-2)\left((x+2)(x+1)+2\right)=0\\&\iff(x-2)(x^2+3x+4)=0. \end{align}$$
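The factorisation can be spot-checked numerically, and the discriminant of the quadratic factor confirms that $x=2$ is the only real root:

```python
# spot-check the equivalence at a few sample points
for x in [-3.5, -1.0, 0.0, 0.5, 2.0, 4.25]:
    lhs = (x**2 - 4) * (x + 1) - (4 - 2 * x)
    rhs = (x - 2) * (x**2 + 3 * x + 4)
    assert abs(lhs - rhs) < 1e-9

# the quadratic factor x^2 + 3x + 4 has negative discriminant,
# so x = 2 is the only real root
disc = 3 * 3 - 4 * 1 * 4
print(disc)   # -7
```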
|algebra-precalculus|systems-of-equations|
1
Can someone help me with partial fraction expansion, please?
$$\frac{1}{(1-x^a)(1-x^b)}=\frac{A}{(1-x)^2}+\frac{B}{(1-x)}+\sum_{r^a=1,\,r\neq 1}\frac{C_r}{(1-x/r)}+\sum_{t^b=1,\,t\neq 1}\frac{D_t}{(1-x/t)}$$ (here $A, B, C_r, D_t$ are constants). How did the author know that he needed to separate the case $r,t=1$? I would appreciate it very much if someone could guide me to resources for studying the general rules for partial fraction expansions like this.
This expansion is only valid if $a$ and $b$ are coprime. In that case, $t=r=1$ is the only multiple root of the denominator. For a root $x_0$ of the denominator with multiplicity $m$ , a partial fraction expansion contains terms proportional to $(x-x_0)^{-k}$ for all $k$ up to $m$ . If $t=r=1$ hadn’t been treated separately, both sums would have included a term proportional to $(1-x)^{-1}$ , and the term proportional to $(1-x)^{-2}$ would have been missing. If $a$ and $b$ aren’t coprime, there are further multiple roots that need to be treated analogously.
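For the smallest nontrivial case $a=2$, $b=3$ (my choice), the double root at $x=1$ can be read off from the derivatives of the expanded denominator:

```python
# denominator for a = 2, b = 3: (1 - x^2)(1 - x^3) = 1 - x^2 - x^3 + x^5
p   = lambda x: 1 - x**2 - x**3 + x**5
dp  = lambda x: -2 * x - 3 * x**2 + 5 * x**4      # p'
d2p = lambda x: -2 - 6 * x + 20 * x**3            # p''

# p(1) = p'(1) = 0 but p''(1) != 0: x = 1 is a root of multiplicity
# exactly 2, which is why both (1-x)^{-1} and (1-x)^{-2} terms appear
print(p(1), dp(1), d2p(1))   # 0 0 12
```

The other roots of $1-x^2$ and $1-x^3$ are distinct roots of unity (when $a$ and $b$ are coprime), so they each contribute a single first-order term.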
|partial-fractions|
1
Integral of conditional probability density function
As far as I understand, when we fix the condition for the conditional density, we get a probability density, and its integral over the whole space is $1$: $$\int_{\mathbb{R}}f_{X \mid Y}(x \mid y=y_0)\,dx=1.$$ However, suppose we want to take the integral: $$\int_{\mathbb{R}}\bigg(\int_{\mathbb{R}}f_{X \mid Y}(x \mid y)dx\bigg)dy $$ I thought it is equal to $1$, but approximate numerical computation through summation for a continuous conditional density $$\sum_{i=1}^N \sum_{j=1}^N f_{X\mid Y}(a+\frac{b-a}{N}i \ \ \big| \ \ a+\frac{b-a}{N}j)\cdot(\frac{b-a}{N})^2 $$ gives very big values, e.g. $3000$ or even $10^{25}$.
There is a solid proof: Given a joint distribution function $f(x,y)$ , one can write it also as $f(x,y)=f_{X|Y}(x|y)f_Y(y)$ . Hence, notice that: $f_Y(y)=\int_{x \in \mathbb{R}}^{} f(x,y) \, dx$ and expand the joint distribution function as above: $f_Y(y)= \int_{x \in \mathbb{R}}^{} f_{X|Y}(x|y)f_Y(y) \, dx \Rightarrow f_Y(y)= f_Y(y)\int_{x \in \mathbb{R}}^{} f_{X|Y}(x|y) \, dx $ given that $f_Y(y)\neq0$ : $\int_{x \in \mathbb{R}}^{} f_{X|Y}(x|y) \, dx =1$ , Q.E.D
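A numeric sketch of why the asker's double integral blows up (the conditional model $X\mid Y=y\sim N(y,1)$ and the grids are my choices): the inner integral is $\approx 1$ for every $y$, so integrating that constant over a $y$-interval returns the interval's length, which grows with the integration range rather than staying at $1$.

```python
import math

def norm_pdf(x, mu):
    # density of N(mu, 1)
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

N = 500
xs = [-10 + (i + 0.5) * 20 / N for i in range(N)]   # x-grid on [-10, 10]
ys = [-5 + (j + 0.5) * 10 / N for j in range(N)]    # y-grid on [-5, 5]
hx, hy = 20 / N, 10 / N

# the inner integral over x is ~1 for every fixed y ...
inner = sum(norm_pdf(x, 0.0) for x in xs) * hx
print(inner)    # ~1.0

# ... so the double integral returns the length of the y-interval, not 1
double = sum(sum(norm_pdf(x, y) for x in xs) * hx for y in ys) * hy
print(double)   # ~10.0
```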
|probability|probability-theory|estimation|
0
I was trying to understand QUAKE III fast inverse square root alg and i want to find best 'u' value in $\log_2(x+1)≈x+u$ approximation
I was trying to find the best $u$ value for this approximation: $\log_2(x+1)\approx x+u$. I thought I could measure the error with this function (NOTE: I need $x$ values between $0$ and $1$ because IEEE 754 floating point uses a scientific notation of the form $(1 + x) \cdot 2^n$ with $0 \le x < 1$): $$f(u)=\int_{0}^{1}|\log_2(x+1)-(x+u)|dx$$ Then I wanted to find where the slope becomes zero (meaning a local minimum in this situation): $$\frac{d}{du}f(u)=0$$ $$\frac{d}{du}\left(\int_{0}^{1}|\log_2(x+1)-(x+u)|dx\right)=0$$ Then I get stuck; what can I do after that?
I am answering my own question after 2 days because I found a very cool solution for finding the best $u$ value in this approximation: $\log_2(1+x)\approx x+u$. My solution starts with expressing the error as an integral: $f_{error}(x,u)=\log_2(1+x)-(x+u)$, $f_{totalErr}(u)=\int_{0}^{1}|f_{error}(x,u)|dx$ (the total error between 0 and 1). Then I split the integral into two parts, because taking the absolute value of the error creates a non-smooth integrand: $f_{tErrPlus}(u, s, E)=\int_{s}^{E}f_{error}(x,u)dx$ (if $f_{error}(x,u) \ge 0$), $f_{tErrMinus}(u, s, E)=-f_{tErrPlus}(u, s, E)$ (if $f_{error}(x,u) \le 0$). In this part we will need to cheat a little and use the graphs. :D We see that our error function has two roots, which means we have 3 parts, 2 negative and 1 positive: if $(x_0, x_1)$ are the roots, then $f_{totalErr}(u) = f_{tErrMinus}(u, 0, x_0) + f_{tErrPlus}(u, x_0, x_1) + f_{tErrMinus}(u, x_1, 1)$. I tried to find the roots of our function, but they can only be calculated with Lambert
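The same minimisation can be done purely numerically, sidestepping the root-finding: since $f_{totalErr}$ minimises an integral of an absolute value, its minimiser is the median of $g(x)=\log_2(1+x)-x$ over uniform $x$, and a grid search finds it directly (grid sizes are my choices; note the Quake constant effectively tunes $u$ against a different, worst-case-style error criterion):

```python
import math

# tabulate g(x) = log2(1+x) - x on a grid over [0, 1]
n = 4001
g = [math.log2(1 + i / (n - 1)) - i / (n - 1) for i in range(n)]

def total_err(u):
    # Riemann approximation of  integral_0^1 |g(x) - u| dx
    return sum(abs(v - u) for v in g) / n

# g ranges over [0, ~0.0861], so scan u on a fine grid in that window
best_u = min((k / 10000 for k in range(900)), key=total_err)
print(best_u, total_err(best_u))
```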
|integration|derivatives|algorithms|logarithms|
1
Trying to understand why a trigonometric substitution for the x value in an equation describing a circle uses Sin($\theta$).
Essentially I am trying to solve a definite integral for an equation that describes a circle. I have $\sqrt{R^2 - (x - A)^2} - B$ ($A$ and $B$ being the circle origin offsets and $R$ the radius). At a point in the solution I substitute the function $R\cos(\theta)$ for $(x - A)$, but everywhere on the internet uses $R\sin(\theta)$; I understand why sin is preferred, but I do not understand why sin is allowed. As far as I can tell, $R\cos(\theta)$ seems correct for a circle, as it describes the x value of the right triangle formed by $R,x,y$ with the angle between $R$ and $x$ being $\theta$. I do not follow why sin and cos are interchangeable and I am thoroughly confused. Any help is appreciated. Thanks.
It seems like you're trying to understand why the trigonometric functions sine and cosine are interchangeable in certain contexts. In trigonometry, sine and cosine are related through the unit circle. Since the unit circle has radius 1, the x-coordinate of a point on the unit circle corresponds to cosine and the y-coordinate corresponds to sine. When you're dealing with a circle with radius R centered at the origin, the substitutions x = Rcos( $\theta$ ) and y = Rsin( $\theta$ ) come from the parametric equations of the circle. Both sine and cosine describe the relationship between the angle $\theta$ and the coordinates of a point on the circle. So it's not that one is "allowed" while the other isn't, but rather that both are valid parametrizations: replacing $\theta$ by $\pi/2-\theta$ swaps sine and cosine, so either substitution sweeps out the same set of x values. Depending on the context or preference, one might be more convenient to use than the other. I hope this explanation helps.
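Numerically, the two substitutions give the same value; here both are applied to the quarter-disc integral $\int_0^R \sqrt{R^2-x^2}\,dx = \pi R^2/4$ (a worked case of my choosing, with a simple midpoint rule):

```python
import math

R, n = 2.0, 20000

def midpoint(f, a, b):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

exact = math.pi * R**2 / 4   # area of a quarter disc

# x = R sin(t): dx = R cos(t) dt, sqrt(R^2 - x^2) = R cos(t) on [0, pi/2]
via_sin = midpoint(lambda t: R**2 * math.cos(t) ** 2, 0, math.pi / 2)

# x = R cos(t): dx = -R sin(t) dt, the limits flip, leaving R^2 sin^2(t)
via_cos = midpoint(lambda t: R**2 * math.sin(t) ** 2, 0, math.pi / 2)

print(exact, via_sin, via_cos)   # all ~3.14159
```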
|calculus|integration|trigonometry|circles|
0
$p$-groups with nontrivial intersection of nonnormal subgroups
I am studying the following paper about finite $p$-groups: Finite groups in which the non-normal subgroups have nontrivial intersection, N. Blackburn, Journal of Algebra, 3, 30-37 (1966). In this paper the intersection of all non-normal subgroups of $G$ is denoted by $R(G)$. We say that a group $G$ is an $R$-group if $G$ is not a Dedekind group and $R(G)\neq 1$. The main result of this paper is as follows. Let $G$ be a finite $p$-group. If $G$ is an $R$-group, then $p=2$ and one of the following holds. (i) $G\cong Q_8\times C_4\times E$, where $E$ is elementary abelian. (ii) $G\cong Q_8\times Q_8\times E$, where $E$ is elementary abelian. (iii) $G$ is a $Q$-group, i.e. $G=\langle x,A\rangle$, where $A$ is an abelian subgroup which is not elementary abelian, $x^2\in A$ has order $2$, and $a^x=a^{-1}$ for all $a\in A$. In this case $R(G)=\langle x^2\rangle$. Also, if $A$ is cyclic then $G$ is a generalized quaternion group. In particular, if the centre of $G$ is cyclic, then $G
The answer is no. If $G$ is a generalized quaternion group then $c(\overline{G}) = c(G)-1$ .
|group-theory|finite-groups|p-groups|
1
Monty Hall problem - doesn't it matter that you know from the start that a goat will be revealed with 100% certainty?
I know that the Monty Hall problem is old news. However, I can't get this out of my head and I am looking forward to your thoughts about it. If the rules of the game are known from the beginning, especially that the game host will guarantee to open a door containing a goat, isn't this information extremely relevant when calculating the initial probabilities of my chances to select the door with the car behind it? This aspect seems to be insufficiently considered in explanations stating that switching doors is statistically the better option - which of course I can accept as it has been discussed by far more intelligent people than me ;) My line of thought is as follows: if I know from the start that the host will definitely open a door that contains a goat with 100% certainty, then I can ignore that door from the beginning (no matter which of the two remaining doors it ultimately is). Thus, from the start, I only have two doors to choose from and I end up with the seemingly incorrect 50:50 conclusion.
Here's a way to think of this. Suppose you and Monty both know that Monty will definitely reveal a goat. In fact, Monty (who knows where the goats are) has already secretly decided which goat he wants to reveal. Now you can't just ignore Monty's favourite goat, since you don't know where it is. You might accidentally pick that door, in which case he will have to reveal the other goat. If Monty does reveal his favourite goat, then you can just ignore it and the other two doors are equally likely to have the car. But if Monty doesn't reveal his favourite goat, then it is behind the door you originally picked, and the car is behind the other door. The problem is that you don't know which of these has happened.
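If the argument still feels slippery, the rules can simply be simulated; a short Monte Carlo run (entirely my construction) shows switching winning about two thirds of the time:

```python
import random

def play(switch, rng):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # the host opens a goat door that is not the player's pick
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
trials = 100_000
switch_rate = sum(play(True, rng) for _ in range(trials)) / trials
stay_rate = sum(play(False, rng) for _ in range(trials)) / trials
print(switch_rate, stay_rate)   # ~0.667 and ~0.333
```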
|probability|proof-explanation|monty-hall|
0
Solve $\lfloor x-1\rfloor(3^x-2^x-\lfloor x^2\rfloor) = 0$
Solve the following equation in $\mathbb R$: $$\lfloor x-1\rfloor(3^x-2^x-\lfloor x^2\rfloor) = 0$$ where $\lfloor y\rfloor=k \iff k \le y < k+1$. I will address 2 cases. $\lfloor x-1\rfloor= 0 \iff 0 \le x-1 < 1 \iff 1 \le x < 2$. So every $x \in [1,2)$ is a solution. $3^x-2^x-\lfloor x^2\rfloor= 0$: a) For $x=0$, we get $1-1-0=0$, so it's a solution. b) For $x < 0$, since $3^x < 2^x$ and $\lfloor x^2\rfloor \ge 0$, we get that $3^x-2^x-\lfloor x^2\rfloor < 0$, so there are no solutions in this interval. c) For $x=1$, we get that $3-2-1=0$, so it's also a solution. d) For $x \in (0,1) \implies\lfloor x^2\rfloor= 0$. So, $3^x-2^x=0$, but this has no solutions on this interval because $3^x>2^x, \forall x \in (0,\infty)$. e) Now for $x>1$ we get some problems. I think $3^x-2^x-\lfloor x^2\rfloor> 0$, but I don't know how to prove it. I can prove that the function $3^x-2^x$ is strictly increasing, but I don't know so much about the values of $\lfloor x^2\rfloor$. So far I think I made some decent progress, but I'm waiting to see what you think.
Your approach is on the right track; let's finish case e). For $x>1$ we want $3^x-2^x>\lfloor x^2\rfloor$. Write $g(x)=3^x-2^x$; it is strictly increasing, since $g'(x)=3^x\ln 3-2^x\ln 2>0$. On $(1,2)$ the floor $\lfloor x^2\rfloor$ takes only the values $1,2,3$: it equals $1$ on $(1,\sqrt2)$, $2$ on $[\sqrt2,\sqrt3)$ and $3$ on $[\sqrt3,2)$. By monotonicity, $g(x)>g(1)=1$ on $(1,\sqrt2)$, $g(x)\ge g(\sqrt2)\approx 2.06>2$ on $[\sqrt2,\sqrt3)$, and $g(x)\ge g(\sqrt3)\approx 3.38>3$ on $[\sqrt3,2)$, so the expression is positive there. For $x\ge 2$, let $h(x)=3^x-2^x-x^2$. Then $h(2)=1>0$, and $h'(x)=3^x\ln 3-2^x\ln 2-2x$ satisfies $h'(2)\approx 3.1>0$ and is itself increasing for $x\ge 2$ (its derivative $3^x(\ln 3)^2-2^x(\ln 2)^2-2$ is positive at $x=2$ and increasing). Hence $h(x)>0$, so $3^x-2^x>x^2\ge\lfloor x^2\rfloor$ for $x\ge 2$. Therefore case e) contributes no solutions, and the full solution set is $\{0\}\cup[1,2)$.
|algebra-precalculus|exponentiation|ceiling-and-floor-functions|
0
Trying to understand why a trigonometric substitution for the x value in an equation describing a circle uses Sin($\theta$).
Essentially I am trying to solve a definite integral for an equation that describes a circle. I have $\sqrt{R^2 - (x - A)^2} - B$ ($A$ and $B$ being the circle origin offsets and $R$ the radius). At a point in the solution I substitute the function $R\cos(\theta)$ for $(x - A)$, but everywhere on the internet uses $R\sin(\theta)$; I understand why sin is preferred, but I do not understand why sin is allowed. As far as I can tell, $R\cos(\theta)$ seems correct for a circle, as it describes the x value of the right triangle formed by $R,x,y$ with the angle between $R$ and $x$ being $\theta$. I do not follow why sin and cos are interchangeable and I am thoroughly confused. Any help is appreciated. Thanks.
I understand your confusion. Let me clarify. When substituting for x in the equation of a circle using polar coordinates, we use R cos θ because the x-coordinate of a point on the circle can be represented by R cos θ, where R is the radius and θ is the angle. However, when solving integrals or equations involving circles and polar coordinates, it's common to use the substitution x = R sin θ as well. This might seem counterintuitive at first, since R sin θ represents the y-coordinate of a point on the circle, not the x-coordinate. The reason R sin θ also works as a substitution for x lies in the symmetry of the circle. The equation of a circle is symmetric with respect to both the x-axis and the y-axis. Therefore, if x = R cos θ represents one half of the circle (e.g., the right half), then x = -R cos θ represents the other half (e.g., the left half). Similarly, y = R sin θ represents one half of the circle (e.g., the top half), and y = -R sin θ represents the other half (e.g., the bottom half). As θ runs over its range, R sin θ and R cos θ sweep out the same set of values, which is why either one can stand in for x.
|calculus|integration|trigonometry|circles|
0
Apparent contradiction on Lebesgue-Stieltjes measure?
I was reviewing Folland's Real Analysis. Consider the algebra of finite unions of disjoint half-open intervals $(a,b]$, and an increasing, right-continuous $F$. Then $\mu\left(\bigcup_1^n(a_j,b_j] \right) = \sum_1^n\left(F(b_j)-F(a_j)\right)$ is a premeasure on that algebra. The Lebesgue-Stieltjes measure comes from this premeasure through Caratheodory's theorem. Now, onto my question. Let $$F(x) = \begin{cases} x+1, & x\geq 0\\ x, & x<0 \end{cases}$$ This function is increasing and right-continuous. For $\epsilon > 0$, we have $\mu((0,1]) = F(1) - F(0) = 2 - 1 = 1$ and $\mu((-\epsilon,1]) = F(1) - F(-\epsilon) = 2 - (-\epsilon) = 2 + \epsilon$. But, by continuity from above, if we take $\epsilon_n$ decreasing with $\epsilon_n \rightarrow 0$, $$2 = \lim_{n \rightarrow \infty} \mu((-\epsilon_n,1]) = \mu((0,1]) = 1.$$ How is this possible? What am I doing wrong?
$(-\epsilon_n ,1]$ decreases to $[0,1]$ (not $(0,1]$ ) and $\mu ([0,1])=\mu ((0,1])+\mu \{0\}=1+1=2$ .
|measure-theory|borel-measures|
1
$P(X \geq t) \leq \frac{E(X^2)}{E(X^2) + t^2}$, where $E(X) = 0$ and $E(X^2)$ is finite.
I'm trying to prove the above inequality. I've tried the following; note I make use of the Cauchy-Schwarz inequality and the fact that $I$ is an indicator function: $$\left(E[(t-X)I(t-X>0)]\right)^2 \leq \\E[(t-X)^2]\,E[(I(t-X>0))^2] \\ = (t^2 + E(X^2))P(t-X >0) =\\ t^2 + E(X^2),$$ since $(t-X) \leq (t-X)I(t-X>0) \implies P(t-X>0) \geq 1 \implies P(t-X>0)=1.$ Now I'm a bit stuck. I have tried relating $t^2 + E(X^2)$ to $|t-X|^2$ but am not making progress. I'm trying to think of how we can bring $P(X\geq t)$ into the mix as well, since eventually we will obviously need it! EDIT: $t \geq 0$.
$ \begin{aligned} &(t-X)\leq (t-X)\textbf{1}\{t-X>0\}\\ \Rightarrow&0\leq t=\mathbb{E}[t-X]\leq \mathbb{E}[(t-X)\textbf{1}\{t-X>0\}]\\ \Rightarrow& (\mathbb{E}[t-X])^2\leq (\mathbb{E}[(t-X)\textbf{1}\{t-X>0\}])^2 \end{aligned} $ $ \begin{aligned} t^2&=(\mathbb{E}[t-X])^2\\ &\leq (\mathbb{E}[(t-X)\textbf{1}\{t-X>0\}])^2\\ &\leq \mathbb{E}[(t-X)^2]\mathbb{E}[\textbf{1}\{t-X>0\}^2]\\ &=\mathbb{E}[(t-X)^2]\mathbb{E}[\textbf{1}\{t-X>0\}]\\ &= \mathbb{E}[(t-X)^2]\cdot \mathbb{P}(t-X>0)\\ &=(t^2+\mathbb{E}[X^2])\cdot \mathbb{P}(X<t). \end{aligned} $ Rearranging gives $\mathbb{P}(X<t)\geq \frac{t^2}{\mathbb{E}[X^2]+t^2}$, so $$\mathbb{P}(X\geq t)\leq 1-\frac{t^2}{\mathbb{E}[X^2]+t^2}=\frac{\mathbb{E}[X^2]}{\mathbb{E}[X^2]+t^2}.$$
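A numeric sanity check of the bound $P(X\ge t)\le \frac{E[X^2]}{E[X^2]+t^2}$ on a concrete mean-zero variable (a two-point distribution of my choosing; note the bound is attained at $t=2$):

```python
# two-point, mean-zero example: P(X = -1) = 2/3, P(X = 2) = 1/3
vals, probs = [-1.0, 2.0], [2 / 3, 1 / 3]
mean = sum(v * p for v, p in zip(vals, probs))
ex2 = sum(v * v * p for v, p in zip(vals, probs))
assert abs(mean) < 1e-12          # E[X] = 0
assert abs(ex2 - 2.0) < 1e-12     # E[X^2] = 2

for t in [0.5, 1.0, 2.0, 3.0]:
    tail = sum(p for v, p in zip(vals, probs) if v >= t)
    bound = ex2 / (ex2 + t * t)
    assert tail <= bound + 1e-12
    print(t, tail, bound)   # at t = 2 the bound is attained: both are 1/3
```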
|probability|random-variables|expected-value|cauchy-schwarz-inequality|
0
Solve $\lfloor x-1\rfloor(3^x-2^x-\lfloor x^2\rfloor) = 0$
Solve the following equation in $\mathbb R$ $$\lfloor x-1\rfloor(3^x-2^x-\lfloor x^2\rfloor) = 0$$ where $\lfloor y\rfloor=k \iff k \le y I will address 2 cases. $\lfloor x-1\rfloor= 0 \iff 0 \le x-1 . So every $x \in [1,2)$ is a solution. $3^x-2^x-\lfloor x^2\rfloor= 0$ a) For $x=0$ , we get $1-1-0=0$ , so it's a solution. b) For $x , and since $\lfloor x^2\rfloor>0$ , we get that $3^x-2^x-\lfloor x^2\rfloor , so there are no solutions in this interval. c) For $x=1$ , we get that $3-2-1=0$ , so it's also a solution. d) For $x \in (0,1) \implies\lfloor x^2\rfloor= 0$ . So, $3^x-2^x=0$ but this has no solutions on this interval because $3^x>2^x, \forall x \in (0,\infty)$ e) Now for $x>1$ we get some problems. I think $3^x-2^x-\lfloor x^2\rfloor> 0$ , but I don't know how to prove it. I can prove that the function $3^x-2^x$ is strictly increasing, but I don't know so much about the values of $\lfloor x^2\rfloor$ . So far I think I made some decent progress. But I'm waiting to see what yo
Upon reflection I found an alternate way of solving this question: I will try to find the solutions using the Lambert W function, which is a special function that satisfies the equation $W(z)e^{W(z)} = z$ for any complex number $z$. First, I will rewrite the equation as follows: $$ (x-1)(3^x-2^x-x^2)=0 $$ $$ ⇒ 3^x-2^x-x^2 = x-1 $$ $$ ⇒ 3^x-2^x = x^2-x+1 $$ $$ ⇒ 3^x = 2^x+x^2-x+1 $$ $$ ⇒ x \ \ln(3) = \ln(2^x+x^2-x+1) $$ $$ ⇒ x = \frac{\ln(2^x+x^2-x+1)}{\ln(3)} $$ $$ ⇒ x \ \ln(3) = W(\ln(3)[2^x]+x^2-x+1) $$ $$ ⇒ x = \frac{W(\ln(3)[2^x]+x^2-x+1)}{\ln(3)} $$ Now, I will use the fact that the Lambert W function has two real branches, $W_0$ and $W_{-1}$, which are defined for different ranges of the input. I will also use the approximations $W_0(z) ≈ \ln(z) - \ln(\ln(z))$ and $W_{-1}(z) ≈ \ln(-z) - \ln(-\ln(-z))$ for large values of $z$. Using the $W_0$ branch, I get: $$ x = \frac{W_0(\ln(3)[2^x]+x^2-x+1)}{\ln(3)} $$ $$ ≈ \frac{(\ln(\ln(3)[2^x]+x^2-x+1) - \ln(\ln(\ln(3)[2^x]+x^2-x+1))
|algebra-precalculus|exponentiation|ceiling-and-floor-functions|
0
Verification of answer in a birthday problem
In the answer here, should the number of ways to pick the groups of triples, pairs and singlets from the 20 people be: $$\frac{20!}{\color{#C00}{3!^2}\,\color{#090}{2!^4}\,\color{#E90}{6!}}$$ since if you were to take this to the extreme and have 20 single people, there would be only 1 way to do it, which would be $$\frac{20!}{20!}$$ I would like to think it is a typo, but I want to make sure my understanding is correct.
The formula does not count ways of "picking" twenty people; it counts the number of different partitions with the given part sizes. So it checks out that there is only one way to partition a set of twenty elements into twenty singletons.
|combinatorics|algebra-precalculus|birthday|
0
Verification of answer in a birthday problem
In the answer here, should the number of ways to pick the groups of triples, pairs and singlets from the 20 people be: $$\frac{20!}{\color{#C00}{3!^2}\,\color{#090}{2!^4}\,\color{#E90}{6!}}$$ since if you were to take this to the extreme and have 20 single people, there would be only 1 way to do it, which would be $$\frac{20!}{20!}$$ I would like to think it is a typo, but I want to make sure my understanding is correct.
The intended meaning of this quotient is elucidated in a comment under the answer by the author: Consider the six people with unique birthdates. We choose the birthdates in chronological order, thus the $6!$ in the denominator. When we choose the $6$ people, we choose them in the $6!$ different orders, thus we only divide by $1!^6$ in the choice of people. In one or the other we need to divide by $6!$ , or we end up choosing $6!$ orders of dates and $6!$ orders of people. So if there were $20$ singletons, this quotient should indeed be $20!$ (and not $1$ as you suggest), since it accounts for the $20!$ different orders of the people. The denominator contains the factorial of each group size, since the order within a group doesn't matter: all people in a group are assigned the same birthday. As regards consistency with the other answer you linked to, you'd have to elaborate on where you see a contradiction.
|combinatorics|algebra-precalculus|birthday|
1
Prove that the function $f$ is a decreasing function.
For $n \geq 2, n \in \mathbb{N}$ and $t \geq 1$ , prove that the function $f$ (with $t$ as the variable and $n$ as a parameter) is a decreasing function. $$f(t)=\frac{1}{{8n{t^3}}}\left( { - 3n{t^4} + 2n{t^2} + \sqrt {32n{t^3}\left( {2nt + 3{t^3} + t} \right) + {{\left( {3{t^2} + 1} \right)}^2}{{\left( { - n{t^2} + n + {t^2} + 1} \right)}^2}} + n + 3{t^4} + 4{t^2} + 1} \right)$$ I have verified that it indeed decreases for $n=2,3$ . The proof seems quite cumbersome. I am looking for a more elegant approach. This problem arises from another task I am working on, which involves finding optimal constants. Thank you.
Some thoughts. We use Maple to simplify the expressions. Let $g := f(t)$ . It is easy to verify that $g > 0$ , and $$A(t) g^2 + B(t) g + C(t) = 0 \tag{1}$$ where $A(t) := 64\,{n}^{2}{t}^{6}$ , and $B(t) := 48\,{n}^{2}{t}^{7}-48\,n{t}^{7}-32\, {n}^{2}{t}^{5}-64\,n{t}^{5}-16\,{n}^{2}{t}^{3}-16\,n{t}^{3}$ , and $C(t) := - 96\,n{t}^{6}-64\,{n}^{2}{t}^{4}-32\,n{t}^{4} $ . Taking derivative on (1), we have $$A'(t) g^2 + A(t) \cdot 2g g' + B'(t) g + B(t) g' + C'(t) = 0$$ or $$(2g A(t) + B(t))g' = - \left( A'(t) g^2 + B'(t) g + C'(t)\right). \tag{2}$$ We have $$2g A(t) + B(t) = 16nt^3 \sqrt {32n{t^3}( {2nt + 3{t^3} + t} ) + {{( {3{t^2} + 1} )}^2}{{( { - n{t^2} + n + {t^2} + 1} )}^2}},$$ and \begin{align*} &A'(t) g^2 + B'(t) g + C'(t)\\ ={}& \frac{6}{t}\Big(A(t) g^2 + B(t) g + C(t)\Big)\\ &\quad + 16 n{t}^{2} ( 3n{t}^{4}-3{t}^{4}+2n{t}^{2}+4{t}^{2}+3n +3 ) g+64n{t}^{3} ( 2n+1 )\\[6pt] ={}& 16 n{t}^{2} ( 3n{t}^{4}-3{t}^{4}+2n{t}^{2}+4{t}^{2}+3n +3 ) g+64n{t}^{3} ( 2n+1 ). \end{align*} From (2),
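Independently of the symbolic manipulation above, one can sanity-check the claimed monotonicity numerically by transcribing $f$ directly (a grid check only, not a proof; the grid and tolerance are my own choices):

```python
import math

def f(t, n):
    # Direct transcription of f(t) from the question
    inner = (32 * n * t**3 * (2 * n * t + 3 * t**3 + t)
             + (3 * t**2 + 1)**2 * (-n * t**2 + n + t**2 + 1)**2)
    return (-3 * n * t**4 + 2 * n * t**2 + math.sqrt(inner)
            + n + 3 * t**4 + 4 * t**2 + 1) / (8 * n * t**3)

# f should be strictly decreasing in t on [1, 11] for n = 2, 3
for n in (2, 3):
    vals = [f(1 + k / 10, n) for k in range(101)]
    assert all(a > b for a, b in zip(vals, vals[1:]))

print(f(1, 2))  # the radicand is a perfect square here, giving exactly 2.0
```

At $t=1$, $n=2$ the radicand is $512 + 64 = 576$, so the square root is exact and $f(1)=2$, which makes that point a convenient anchor for the check.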
|real-analysis|inequality|
1
Solve $\lfloor x-1\rfloor(3^x-2^x-\lfloor x^2\rfloor) = 0$
Solve the following equation in $\mathbb R$ $$\lfloor x-1\rfloor(3^x-2^x-\lfloor x^2\rfloor) = 0$$ where $\lfloor y\rfloor=k \iff k \le y < k+1$. I will address 2 cases. $\lfloor x-1\rfloor= 0 \iff 0 \le x-1 < 1$. So every $x \in [1,2)$ is a solution. $3^x-2^x-\lfloor x^2\rfloor= 0$ a) For $x=0$ , we get $1-1-0=0$ , so it's a solution. b) For $x < 0$ , we have $3^x-2^x < 0$ , and since $\lfloor x^2\rfloor \ge 0$ , we get that $3^x-2^x-\lfloor x^2\rfloor < 0$ , so there are no solutions in this interval. c) For $x=1$ , we get that $3-2-1=0$ , so it's also a solution. d) For $x \in (0,1) \implies\lfloor x^2\rfloor= 0$ . So, $3^x-2^x=0$ , but this has no solutions on this interval because $3^x>2^x, \forall x \in (0,\infty)$ . e) Now for $x>1$ we get some problems. I think $3^x-2^x-\lfloor x^2\rfloor> 0$ , but I don't know how to prove it. I can prove that the function $3^x-2^x$ is strictly increasing, but I don't know so much about the values of $\lfloor x^2\rfloor$ . So far I think I made some decent progress, but I'm waiting to see what you think.
We can show that for $x \ge 4$ we have no solutions, by looking at the following chain of inequalities: $$3^x \ge 3^{\lfloor x \rfloor} > (\lfloor x \rfloor + 1)^2 + 2^{\lfloor x \rfloor} \times 2 > \lfloor x^2 \rfloor + 2^x $$ Proving this will show that $3^x- \lfloor x^2 \rfloor - 2^x > 0 , \forall x \in [4,\infty)$ , so there will be no solutions. From the definition of the floor function it is obvious that $3^x \ge 3^{\lfloor x \rfloor}$ , and also $(\lfloor x \rfloor + 1)^2 + 2^{\lfloor x \rfloor} \times 2 > \lfloor x^2 \rfloor + 2^x$ , since $\lfloor x^2 \rfloor \le x^2 < (\lfloor x \rfloor + 1)^2$ and $2^x < 2^{\lfloor x \rfloor + 1}$ . So it is enough to prove that $3^{\lfloor x \rfloor} > (\lfloor x \rfloor + 1)^2 + 2^{\lfloor x \rfloor} \times 2 , \forall x \in [4,\infty)$ , which seems easier since we have integers. So we have to show that $3^n > (n+1)^2 + 2^{n+1} , \forall n \in \mathbb N , n\ge 4$ . We will prove it by mathematical induction. For $n = 4$ we have that $81>25+32$ , which is true. Now we assume that the inequality is true for $n-1$ , and we will prove that it is true for $n$ .
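Both the integer inequality from the induction and the remaining interval $1 < x < 4$ are easy to spot-check numerically; here is a small sketch (a grid check, illustrative only — it does not replace the induction):

```python
import math

# Integer inequality used in the induction: 3^n > (n+1)^2 + 2^(n+1) for n >= 4
for n in range(4, 200):
    assert 3**n > (n + 1)**2 + 2**(n + 1)

# Grid check of g(x) = 3^x - 2^x - floor(x^2) on (1, 4),
# supporting the claim in part (e) that g(x) > 0 there
def g(x):
    return 3**x - 2**x - math.floor(x * x)

assert all(g(1 + k / 1000) > 0 for k in range(1, 3000))
```

The tightest spot on the grid is just above $x=\sqrt 2$, where $\lfloor x^2\rfloor$ jumps from $1$ to $2$ and $g$ drops to roughly $0.06$, but it stays positive.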
|algebra-precalculus|exponentiation|ceiling-and-floor-functions|
0
I was trying to understand the Quake III fast inverse square root algorithm and I want to find the best 'u' value in the $\log_2(x+1)≈x+u$ approximation
I was trying to find the best 'u' value for this approximation: $\log_2(x+1)≈x+u$ . And I thought I could measure the error with this function. NOTE: I need the $x$ values between $0$ and $1$ because IEEE 754 floating point uses a scientific notation $(1 + x) \cdot 2^n$ with $0 \le x < 1$ . $$f(u)=\int_{0}^{1}|\log_2(x+1)-(x+u)|\,dx$$ Then I wanted to find where the slope becomes zero (which means a local minimum in this situation): $\frac{d}{du}f(u)=0$ $\frac{d}{du}\left(\int_{0}^{1}|\log_2(x+1)-(x+u)|\,dx\right)=0$ Then I get stuck. What can I even do after that?
$\log_2(x+1)≈x+u$ So this is $$ f(x)=\log_2(1+x) -x\approx u \tag1 $$ for $x\in[0,1]$ . We have $f(0)=f(1)=0$ and $f(x)\geqslant0$ over the interval, which means that for the image of $f$ we have $f([0,1]) = [0,v]$ . The (local) maximum of $f$ is given by $$f'(x_0)=\frac1{(1+x_0)\ln2}-1 \stackrel{.}= 0\tag2 $$ which has the single solution $$ x_0 = \frac1{\ln2}-1\tag3 $$ so that $$v=f(x_0) \approx 0.08607 $$ Choosing $u$ so that the maximal absolute error is minimal is achieved by $u=v/2$ , since the error $f(x)-u$ then ranges exactly over $[-v/2,v/2]$ : $$ u = \frac{\ln\left(1/\ln2\right)-1}{2\ln2}+\frac12 \approx 0.04304\tag4 $$
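As a numerical cross-check of the closed form (my own sketch; the grids are arbitrary), one can compare $u = v/2$ against a brute-force minimization of the maximum absolute error over $[0,1]$:

```python
import math

# Closed-form values from the answer
x0 = 1 / math.log(2) - 1                  # location of the maximum of f
v = math.log2(1 + x0) - x0                # maximal deviation v
u = v / 2                                 # minimax choice of u

# Brute force: for each candidate u, measure the worst error over [0, 1]
xs = [k / 2000 for k in range(2001)]
def max_err(uc):
    return max(abs(math.log2(1 + x) - (x + uc)) for x in xs)

best_u = min((k / 10000 for k in range(1000)), key=max_err)
print(round(u, 5), round(best_u, 4))  # both land near 0.0430
```

The brute-force optimum agrees with $v/2$ to within the grid resolution, which is a reassuring check on the "split the one-sided error in half" argument.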
|integration|derivatives|algorithms|logarithms|
0
Can't understand Bayes theorem
A software company conducted a test on their new platform by exposing their users to two versions of the same product. Number of users that were given version A: 4000 Number of users that were given version B: 5000 Number of users that experienced a bug: 3000 Number of users with version B that experienced a bug: 1500 What is the probability that a user tested Version B, given they experienced a bug during testing? My attempt I don't get this question at all. Let X = user tested Version B Let Y = experienced a bug during testing $P(X|Y)=P(X \cap Y)/P(Y)$ $P(X \cap Y) = P(X) \cdot P(Y|X) = \dfrac{5000}{9000} \times \dfrac{1500}{3000} = \dfrac{5}{18}$ $P(Y) = P(X)\cdot P(Y|X) + P(X^c)\cdot P(Y|X^c)$ Now how do we go from here? And is the probability calculated above correct? Please help. Thanks in advance.
You are on the right track, except for one mistake. $$P(Y|X) = \frac{1500}{5000} = \frac{3}{10}$$ The denominator will be $5000$ because it's the probability of a user experiencing a bug if they had version B. How many had version B? $5000$ , right. How many of those experienced a bug? $1500$ . See why it should be $5000$ and not $3000$ ? Similarly $P(X^c) = \dfrac{4000}{9000}$ and $P(Y|X^c) = \dfrac{1500}{4000}$ . I believe you can take it from here. Just plug the values into your formula.
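Plugging the corrected numbers into Bayes' formula can be sketched in a few lines (the variable names are mine):

```python
# Counts from the problem
total = 9000
b_users = 5000          # users given version B
bug_total = 3000        # users who experienced a bug
bug_b = 1500            # version-B users who experienced a bug

# Bayes' theorem: P(B | bug) = P(B) * P(bug | B) / P(bug)
p_b = b_users / total                                    # 5/9
p_bug_given_b = bug_b / b_users                          # 3/10, the corrected value
p_bug_given_a = (bug_total - bug_b) / (total - b_users)  # 1500/4000
p_bug = p_b * p_bug_given_b + (1 - p_b) * p_bug_given_a
posterior = p_b * p_bug_given_b / p_bug

print(posterior)  # 0.5
```

The result also drops out directly as bug_b / bug_total = 1500/3000, which is a handy consistency check on the conditional-probability bookkeeping.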
|probability|conditional-probability|bayes-theorem|
1
How to set equations containing factorials?
I recently encountered a problem involving the construction of equations involving factorials or combinatorial numbers. I recall reading somewhere (although I cannot recall the reference) that this can be a tricky subject. In a particular scenario, the author of the problem suggested that the solution would be $x=1$ . To construct the simplest equation possible with this solution, the author proposed using $x-1=0$ . Subsequently, the author manipulated this equation to obtain: \begin{align} (x-1)^2 &= 0 \\ x^2 - 2x + 1 &= 0 \\ x(x-2) + 1 &= 0 \\ x(x-2) &= -1 \\ \frac{x!}{(x-1)!}\frac{(x-2)!}{(x-3)!} &= -1 \\ x!(x-2)! &= -(x-1)!(x-3)! \end{align} However, upon substituting $x=1$ into the equations, we encounter expressions such as $(-1)!$ and $(-2)!$ , which do not make sense in this context. My question is: Am I missing something, or is the author's approach incorrect? Any insights or explanations would be greatly appreciated. Thank you very much.
The factorial can be generalized to complex arguments using the gamma function : $n!=\Gamma(n+1)$ . This function has poles at all non-positive integers, so the factorial of a negative integer diverges. Thus you’re right to think that this attempt at constructing an equation with solution $x=1$ is misguided, since substituting $x=1$ leads to expressions that are divergent even when appropriately generalized.
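The divergence can be seen concretely with Python's `math.gamma`, which implements the same generalization $n! = \Gamma(n+1)$ and rejects exactly the arguments that substituting $x=1$ produces (a small illustrative check; the helper name is mine):

```python
import math

def factorial_via_gamma(n):
    # n! computed as Gamma(n+1); Gamma has poles at non-positive integers
    return math.gamma(n + 1)

# Fine for non-negative integers:
assert factorial_via_gamma(4) == 24.0

# Substituting x = 1 into x!(x-2)! = -(x-1)!(x-3)! needs (-1)! and (-2)!,
# i.e. Gamma(0) and Gamma(-1), which are poles:
for bad in (-1, -2):
    try:
        factorial_via_gamma(bad)
        raised = False
    except ValueError:
        raised = True
    assert raised  # math.gamma raises ValueError at its poles
```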
|combinatorics|algebra-precalculus|factorial|
1
Equivalent Set [0,1] and [0,1]\X
Let $X:= \left \{\frac{1}{n}: n\in\mathbb{N}\right \}$ . Prove that there is a bijection between $[0,1]$ and $[0,1]-X$ . I still can't find a bijective function between $[0,1]$ and $[0,1]-X$ . So far, I already know that $[0,1]$ and $(0,1)$ are equivalent sets. We may find a bijective function between $(0,1)$ and $[0,1]-X$ and use the composition of bijective functions.
Here's a different way to do this that works for any countable set $X$ . Enumerate $X$ as $\{a_1, a_2, \ldots\}$ (meaning that $i\mapsto a_i$ should be a bijection) and choose a sequence $b_1, b_2, \ldots$ of distinct elements of $[0, 1]\setminus X$ . You can do that because $[0, 1]\setminus X$ is still infinite. Define a function $f: [0, 1] \to [0, 1]\setminus X$ by $f(a_i) = b_{2i}, f(b_i) = b_{2i-1}$ , and $f(x)=x$ if $x \neq a_i, b_i$ . It is pretty clearly onto (the image obviously contains everything aside from the $b_i$ because of the third case of the definition, and the first two cases show that it includes every $b_i$ ) and 1-1, so $f$ is a bijection. The value of this method is that it generalises. You can replace $[0, 1]$ with any infinite set $Y$ and $X$ with any countable set such that $Y\setminus X$ is infinite. The same argument still works.
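The bookkeeping can be illustrated concretely with an assumed choice of the $b_i$, e.g. $b_i = 1/(i\pi)$, which are distinct elements of $[0,1]\setminus X$ because $\pi$ is irrational (so $1/(i\pi)$ is never of the form $1/n$). This is only a finite-sample illustration of the definition of $f$, not the proof:

```python
import math

def a(i):            # enumeration of X = {1/n : n in N}
    return 1.0 / i

def b(i):            # assumed choice of distinct points of [0,1] \ X
    return 1.0 / (i * math.pi)

def f(x, tol=1e-12):
    # f(a_i) = b_{2i}, f(b_i) = b_{2i-1}, f(x) = x otherwise
    for i in range(1, 1000):
        if abs(x - a(i)) < tol:
            return b(2 * i)
        if abs(x - b(i)) < tol:
            return b(2 * i - 1)
    return x

# On a sample: distinct inputs keep distinct images, and images avoid X
outs = ([f(a(i)) for i in range(1, 50)]
        + [f(b(i)) for i in range(1, 50)]
        + [f(0.5 ** 0.5)])
assert len(set(round(y, 12) for y in outs)) == len(outs)   # injective on sample
assert all(min(abs(y - 1.0 / n) for n in range(1, 2000)) > 1e-9 for y in outs)
```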
|elementary-set-theory|
0
The fiber of the sheaf of differentials is indeed the Zariski cotangent space
I started studying differentials on schemes (Hartshorne II.8), and read Eisenbud to gain more intuition and algebra. After vaguely understanding cotangent bundle and conormal sequence, I tried to calculate fiber of $\Omega_{S/k}$ , which I expected to be the cotangent space $m/m^2$ . Let $S=k[x_1,\dots,x_r]/I$ where $I=(f_1,\dots, f_s)$ . Using the technique in Eisenbud, $\Omega_{S/k}=\text{coker } J = \oplus Sdx_i/\text{im } J$ when $J: \oplus Sde\to \oplus Sdx_i$ is the map between free modules given by the matrix $(\partial f_j/\partial x_i)$ . I thought the natural map would be $\oplus S_m dx_i\to m=(x_1-a_1,\dots,x_r-a_r)$ sending $dx_i \mapsto x_i-a_i$ . Now, we have $\sum \frac{\partial f_j}{\partial x_i} dx_i \in\text{im }J$ , and it maps to $\sum \frac{\partial f_j}{\partial x_i} (x_i-a_i)$ , which is a hyperplane analytically tangent to variety at $m$ . But I can't show this is algebraically tangent to the variety, or $\sum \frac{\partial f_j}{\partial x_i} (x_i-a_i)\in m^2$
We can use the (purely algebraic) Taylor expansion on $f_j$ so that $f_j = \sum \frac{\partial f_j}{\partial x_i}(x_i-a_i) + \sum\frac{\partial^2 f_j}{\partial x_k \partial x_l} (x_k-a_k)(x_l-a_l) + \cdots$ . Then obviously $f_j - \sum \frac{\partial f_j}{\partial x_i}(x_i-a_i) \in m^2$ . Note that $f_j = 0$ in $S$ .
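For a concrete sanity check (an example of my own choosing, not from the post): take $f = x^2 + y^2 - 1$ and the point $(a_1,a_2)=(1,0)$, so $m=(x-1,\,y)$. Then $f - \sum \frac{\partial f}{\partial x_i}(x_i-a_i) = f - 2(x-1)$ should be $(x-1)^2 + y^2 \in m^2$:

```python
# f = x^2 + y^2 - 1; its gradient at a = (1, 0) is (2, 0)
def f(x, y):
    return x**2 + y**2 - 1

def linear_part(x, y):
    # sum of (partial f / partial x_i)(a) * (x_i - a_i) at a = (1, 0)
    return 2 * (x - 1) + 0 * y

def remainder(x, y):
    return f(x, y) - linear_part(x, y)

def in_m_squared(x, y):
    # the claimed element of m^2: (x-1)^2 + y^2
    return (x - 1)**2 + y**2

# The remainder agrees with (x-1)^2 + y^2 identically (checked on sample points)
for (x, y) in [(0.0, 0.0), (1.5, -2.0), (3.0, 0.25), (-1.0, 4.0)]:
    assert abs(remainder(x, y) - in_m_squared(x, y)) < 1e-12
```

Here the Taylor expansion terminates after the quadratic term, so the remainder lands in $m^2$ on the nose — exactly the pattern the algebraic argument above exploits.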
|algebraic-geometry|
1
Equivalence between two definitions of winding number
I've noticed that the definition of a winding number is rather different in Stewart & Tall's Complex Analysis than in Ahlfors' Complex Analysis : Let $\gamma:[a,b]\to\mathbb{C}$ be an arc and let $a\in\mathbb{C}$ lie outside its image. Definition (Stewart, Tall): the winding number $w(\gamma,a)$ is defined as $$\frac{\theta(b)-\theta(a)}{2\pi}$$ for any continuous choice of argument along $\gamma$ i.e. for any continuous map $\theta:[a,b]\to\mathbb{R}$ such that $$e^{i\theta(t)} = \frac{\gamma(t)}{|\gamma(t)|}.$$ Definition (Ahlfors): the winding number $n(\gamma,a)$ is defined as $$\frac{1}{2\pi i}\int_\gamma\frac{1}{z-a}.$$ How can one show these two definitions are equivalent?
I shall assume that $\gamma$ is a closed curve. Let $[b,c]$ be the domain of $\gamma$ and define $$\begin{array}{rccc}F\colon&[b,c]&\longrightarrow&\Bbb C\\&t&\mapsto&\displaystyle\int_b^t\frac{\gamma'(u)}{\gamma(u)-a}\,\mathrm du.\end{array}$$ I shall prove that $$(\forall t\in[b,c]):\gamma(t)-a=(\gamma(b)-a)e^{F(t)}.\label{a}\tag1$$ Assertion \eqref{a} is clearly true when $t=b$ . In order to prove that it is true in general, it is enough to prove that $(\gamma-a)e^{-F}$ is constant, which is true, since you get $0$ when you differentiate it (note that this uses the fact, that follows from the definition of $F$ , that $F'=\frac{\gamma'}{\gamma-a}$ ). It follows that \begin{align}\gamma(b)-a&=\gamma(c)-a\text{ (since $\gamma$ is a closed curve)}\\&=(\gamma(b)-a)e^{F(c)},\end{align} and therefore $e^{F(c)}=1$ . So, $F(c)\in2\pi i\Bbb Z$ . Now, take $\alpha\in\Bbb R$ such that $\alpha$ is an argument of $\gamma(b)-a$ . It follows from \eqref{a} that, for each $t\in[b,c]$ , \begin{align}
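Ahlfors's integral definition can also be illustrated numerically (my own sketch; the discretization is arbitrary): for the unit circle the integral gives winding number $1$ around the origin and $0$ around a point outside the curve.

```python
import math

def winding_number(gamma, dgamma, a, steps=20000):
    # (1 / (2*pi*i)) * integral over [0,1] of gamma'(t) / (gamma(t) - a) dt,
    # approximated by a midpoint Riemann sum
    total = 0j
    dt = 1.0 / steps
    for k in range(steps):
        t = (k + 0.5) * dt
        total += dgamma(t) / (gamma(t) - a) * dt
    return total / (2j * math.pi)

# Unit circle traversed once: gamma(t) = e^{2*pi*i*t}, t in [0, 1]
circle = lambda t: complex(math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))
dcircle = lambda t: 2j * math.pi * circle(t)

print(round(winding_number(circle, dcircle, 0).real))       # 1
print(round(winding_number(circle, dcircle, 3 + 0j).real))  # 0 (point outside)
```

For $a=0$ the integrand is the constant $2\pi i$, so the sum is exact; for points outside, the periodic midpoint rule converges very fast, matching the continuous-argument definition.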
|complex-analysis|definition|contour-integration|winding-number|
1
Holomorphic implies Cauchy-Riemann equations
I'm making some notes on minimal surfaces, and while rereading my old notes from my course on complex analysis in one variable I noticed the following in the proof that (the real and imaginary parts of) holomorphic functions satisfy the C-R equations. When doing $$\lim_{h\to 0}\frac{f(z_0+h)-f(z_0)}{h}=\lim_{l\to 0}\frac{u\left((x_0,y_0)+l(1,0)\right)-u(x_0,y_0)+iv\left((x_0,y_0)+l(1,0)\right)-iv(x_0,y_0)}{l}=u_x(x_0,y_0)+iv_x(x_0,y_0),$$ (maybe this is a stupid question) we use that the limit of the sum is the sum of the limits, but it is not true in general that if the limit of a sum exists, then the limit of each summand exists. What am I missing here?
Consider the function $\mathcal{Re} : \mathbb{C} \to \mathbb{R}$ that returns the real part of a complex number. This function is continuous. Indeed, for each $\epsilon>0$ you can choose $\delta=\epsilon$ , and if $|z-z_0| < \delta$ you will have $$ |\mathcal{Re}(z)-\mathcal{Re}(z_0)| \le |z-z_0| < \epsilon. $$ Similarly $\mathcal{Im} : \mathbb{C} \to \mathbb{R}$ is continuous. So if $\lim_{ z \to z_0} f(z)$ exists, you have that $$ \lim_{z \to z_0} \mathcal{Re}(f(z))=\mathcal{Re} \left(\lim_{z \to z_0} f(z) \right) $$ and so the limit of the real part exists. The same holds for the limit of the imaginary part. Now that you know that the two limits exist, you can use that the sum of the limits is equal to the limit of the sum.
|complex-analysis|limits|
1
Is it true that [if the premises are false, the conclusion is true]?
My question is: is my proof correct in concluding "If the premises are false, then the conclusion is true"? Please check whether my proof of [If the premises are false, the conclusion is true] is correct. Proposition 1: If the premises are false, the conclusion is false. Proposition 1 is false, because of this example: I am a cup. A cup is an animal. Therefore, I am an animal. The premises (I am a cup; a cup is an animal) are false, but the conclusion (I am an animal) is true. If there is even one counterexample, the proposition is false. Therefore, proposition 1 is false, and the negation of proposition 1 is true. The negation of $(p \to q)$ is $(p \wedge \neg q)$ , so the negation of proposition 1 is: the premises are false and the conclusion is true. Proposition 2 is true: if ($p$ and $q$) are true, then $(p \to q)$ is true; thus "If the premises are false, the conclusion is true" (statement 3) is true. For example: I am a cup. A cup is a god. Therefore, I am a god. Since the premise (I am a cup) is false, the conclusion (I am a god) is true.
Proposition 1. If the premises of an argument are false, then the conclusion is false. No. Your counterexample works. Here is an another one: "Snow is yellow. Therefore, bananas are yellow" False premises, true conclusion, so proposition 1. is false. Proposition 2. If the premises of an argument are false, then the conclusion is true. No. Here is a counterexample "Snow is yellow. Therefore, bananas are purple" False premises, false conclusion, so proposition 2. is false as well. But wait! I can hear you say. These aren't valid arguments! Well, you didn't say anything about that. But fine, let's consider two more propositions: Proposition 3. If the premises of a valid argument are false, then the conclusion is false. Again, your counterexample works. Here is an another one: "Snow is yellow. Snow and bananas have the same color. Therefore, bananas are yellow" False premises, true conclusion, so proposition 3. is false. Proposition 4. If the premises of a valid argument are false, then th
|logic|
0
A condition of Leray-Hirsch theorem
This is the Leray-Hirsch theorem in Hatcher's Algebraic Topology. Theorem 4D.1. Let $F \rightarrow E \xrightarrow{p} B$ be a fiber bundle such that, for some commutative coefficient ring $R$ : (a) $H^n(F ; R)$ is a finitely generated free $R$ -module for each $n$ . (b) There exist classes $c_j \in H^{k_j}(E ; R)$ whose restrictions $i^*\left(c_j\right)$ form a basis for $H^*(F ; R)$ in each fiber $F$ , where $i: F \rightarrow E$ is the inclusion. Then the map $\Phi: H^*(B ; R) \otimes_R H^*(F ; R) \rightarrow H^*(E ; R), \sum_{i j} b_i \otimes i^*\left(c_j\right) \mapsto \sum_{i j} p^*\left(b_i\right) \smile c_j$ , is an isomorphism of $R$ -modules. My question is why we need the $c_j$ to generate the cohomology of every fiber instead of just one fiber (for example $E_p$ ). Since the choice of $i$ can be any inclusion $i:E_p\rightarrow E$ , the proof would then not change. If not, what is the inclusion $i:F\rightarrow E$ here?
Note that if $b_0, b_1 \in B$ are in the same path component then the inclusions $F_{b_i} = p^{-1}(b_i) \to E$ are homotopic, so induce the same map on cohomology (once $F_{b_0}$ and $F_{b_1}$ have been identified by trivializing $p$ along a path connecting $b_0$ to $b_1$ ). In particular if $B$ is path connected and the images of $c_j$ generate the cohomology of one fiber then they generate the cohomology of every fiber under the respective restriction. A reason for phrasing the statement in that way is a combination of allowing $B$ to be not path connected and eliding the choice of $b \in B$ that the fiber is being taken. It would be equally fine to require that every path component of $B$ contains some point $b \in B$ such that the images of $c_j$ by restricting along $F_b \to E$ generate.
|algebraic-topology|
1