Integral $\int_0^1\frac{\arcsin^3 x}{x^2}\text{d}x=6\pi G-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)$
Show that: $$\int_0^1\frac{\arcsin^3 x}{x^2}\text{d}x=6\pi G-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)$$ I evaluated this by some Fourier series. Is there any other method? Start with the substitution $$u=\arcsin x$$ Then we have to integrate $$\int_0^{\frac{\pi}{2}}\frac{u^3\cos u}{\sin^2 u}\text{d}u=\left[-u^3\csc u\right]_0^{\frac{\pi}{2}}+3\int_0^{\frac{\pi}{2}}u^2\csc u\,\text{d}u=-\frac{\pi^3}{8}+3\int_0^{\frac{\pi}{2}}u^2\csc u\,\text{d}u$$ Since $$\int\csc u\,\text{d}u=\ln (\csc u-\cot u)=\ln \left(\frac{1-\cos u}{\sin u}\right)=\ln 2+2\ln \left(\sin \frac{u}{2}\right)-\ln \sin u$$ we have $$\int_0^{\frac{\pi}{2}}u^2\csc u\,\text{d}u=\int_0^{\frac{\pi}{2}}u^2\,\text{d}\left(2\ln \sin\frac{u}{2}-\ln \sin u\right)$$ $$=-\frac{\pi^2}{4}\ln 2-2\int_0^{\frac{\pi}{2}}u\left(2\ln \sin \frac{u}{2}-\ln \sin u\right)\text{d}u$$ $$=-\frac{\pi^2}{4}\ln 2-4\int_0^{\frac{\pi}{2}}u\ln \sin \frac{u}{2}\,\text{d}u+2\int_0^{\frac{\pi}{2}}u\ln \sin u\,\text{d}u$$ $$=-\frac{\pi^2}{4}\ln 2+4\int_0^{\frac{\pi}{2}}u\left[\ln 2+\sum_{n=1}^{\infty}\frac{\cos nu}{n}\right]\text{d}u-\int_0^{\frac{\pi}{2}}u^2\cot u\,\text{d}u$$ $$=\frac
Letting $\arcsin x \mapsto x$ yields $$ \begin{aligned} & I=\int_0^{\frac{\pi}{2}} x^3 \cot x \csc x\, d x=-\int_0^{\frac{\pi}{2}} x^3\, d(\csc x) \\ & = \underbrace{ -\left[x^3 \csc x\right]_0^{\frac{\pi}{2}}}_{-\frac{\pi^3}{8}} +3 \int_0^{\frac{\pi}{2}} x^2 \csc x\, d x \end{aligned} $$ Using the identity $e^{ix}=\cos x+i\sin x$ to expand the integrand into a geometric series, we have $$ \begin{aligned} \int_0^{\frac{\pi}{2}} x^2 \csc x\, d x &=2 i \int_0^{\frac{\pi}{2}} \frac{x^2 e^{-x i}}{1-e^{-2 x i}}\, d x \\ &=-2 \sum_{n=0}^{\infty} \Im \int_0^{\frac{\pi}{2}} x^2 e^{-(2 n+1) x i}\, d x \\ &=2 \sum_{n=0}^{\infty} \int_0^{\frac{\pi}{2}} x^2 \sin ((2 n+1) x)\, d x \end{aligned} $$ Integrating the last integral by parts twice gives $$ \begin{aligned} & =-\frac{1}{2 n+1} \int_0^{\frac{\pi}{2}} x^2\, d(\cos ((2 n+1) x)) \\ & =-\frac{1}{2 n+1}\left(-2 \int_0^{\frac{\pi}{2}} x \cos ((2 n+1) x)\, d x\right) \\ & =\frac{2}{(2 n+1)^2}\left([x \sin ((2 n+1) x)]_0^{\frac{\pi}{2}}-\int_0^{\frac{\pi}{2}}
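As a numerical sanity check of the closed form, one can compare a quadrature of the transformed integral $\int_0^{\pi/2}u^3\cot u\csc u\,du$ against $6\pi G-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)$. The sketch below is mine (not from the answer); it uses Simpson's rule with hard-coded reference values of Catalan's constant and $\zeta(3)$:

```python
import math

def f(u):
    # integrand after the substitution u = arcsin x; its limit at u = 0 is 0
    return 0.0 if u == 0.0 else u**3 * math.cos(u) / math.sin(u)**2

def simpson(func, a, b, n=2000):
    # composite Simpson's rule with an even number n of subintervals
    h = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += func(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

G = 0.915965594177219015        # Catalan's constant
ZETA3 = 1.202056903159594285    # zeta(3)

lhs = simpson(f, 0.0, math.pi / 2)
rhs = 6 * math.pi * G - math.pi**3 / 8 - 10.5 * ZETA3
print(lhs, rhs)  # both ≈ 0.7682
```

Both sides agree to quadrature accuracy, which is reassuring before wading through the Fourier manipulations.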
|real-analysis|calculus|integration|fourier-analysis|trigonometric-integrals|
0
Paradox: Roots of a polynomial require less information to express than coefficients?
A somewhat information-theoretical paradox occurred to me, and I was wondering if anyone could resolve it. Let $p(x) = x^n + c_{n-1} x^{n-1} + \cdots + c_0 = (x - r_0) \cdots (x - r_{n-1})$ be a degree $n$ polynomial with leading coefficient $1$. Clearly, the polynomial can be specified exactly by its $n$ coefficients $c=\{c_{n-1}, \ldots, c_0\}$ OR by its $n$ roots $r=\{r_{n-1}, \ldots, r_0\}$. So the roots and the coefficients contain the same information. However, it takes less information to specify the roots, because their order doesn't matter (i.e., specifying the roots requires $\lg(n!)$ fewer bits than specifying the coefficients). Isn't this a paradox? Or is my logic off somewhere? Edit: To clarify, all values belong to some algebraically closed field (such as the complex numbers). And note that the leading coefficient is specified to be 1, meaning that there is absolutely a one-to-one correspondence between the $n$ remaining coefficients $c$ and the $n$ roo
I do not think any of the current answers actually address the question, so let me add my two cents. The keyword is commutativity. It destroys the information about the order of multiplication: $$ab=ba.$$ The field of complex numbers $\mathbb{C}$ has this property under multiplication. The coefficients $\{c_{n-1}, c_{n-2}, \cdots, c_0\}$ are elementary symmetric polynomials of the roots, and over a field, elementary symmetric polynomials are not sensitive to the order of the variables. However, the situation changes drastically in a (multiplicatively) non-commutative setting, such as square matrices or quaternions. The first issue is the distributive laws. See the answers here and here for some extreme cases. To continue our discussion, let's suppose both distributive laws hold. In that situation, you can unambiguously write $$(x-a)(x-b)=x^2-ax-xb+ab$$ and clearly, this is different from $(x-b)(x-a)=x^2-xa-bx+ba$ for $a\neq b.$ Now, as you can see, both sides contain the same amount of information, i.e.,
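The collapse can be seen concretely: expanding $\prod_i(x-r_i)$ in every order of the roots produces the same coefficient list, since the coefficients are elementary symmetric polynomials. A small sketch (the helper names are mine):

```python
from itertools import permutations

def mul_linear(p, r):
    """Multiply polynomial p (coefficients low -> high) by (x - r)."""
    q = [0] * (len(p) + 1)
    for i, c in enumerate(p):
        q[i + 1] += c       # x * (c x^i)
        q[i] -= r * c       # -r * (c x^i)
    return q

def coeffs_from_roots(roots):
    p = [1]
    for r in roots:
        p = mul_linear(p, r)
    return tuple(p)

roots = [3, -1, 2]
orders = {coeffs_from_roots(perm) for perm in permutations(roots)}
print(orders)  # {(6, 1, -4, 1)}: every ordering gives x^3 - 4x^2 + x + 6
```

All $3!$ orderings collapse to a single coefficient tuple, which is precisely the $\lg(n!)$ bits of ordering information being destroyed by commutativity.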
|combinatorics|polynomials|roots|information-theory|
0
Can a triangle up to isometry shatter seven points?
From Wikipedia: The class [of sets] $C$ shatters the set $A$ if for each subset $a$ of $A$, there is some element $c \in C$ such that $a = c\cap A$. In other words, $C$ shatters $A$ if for every subset $S\subset A$, it's possible to find a set $c\in C$ which covers the points in $S$ but none of the points in $A\setminus S$. For instance, if $C$ is the set of closed half-planes, then $C$ shatters any non-collinear set of three points, but does not shatter a unit square, because it isn't possible to cover one diagonal while omitting the other. Given a triangle $T$, let $C(T)$ be the set of all isometric copies of $T$ - all translations, rotations, and reflections of $T$ in the plane, not permitting scaling. What is the maximum size of a set $A$ such that for some triangle $T$, $C(T)$ shatters $A$? It's possible to attain $|A|=6$ by taking $A$ to be the vertices of a regular hexagon of unit side length and $T$ to be an isosceles triangle with side lengths $(2.8,4.48797,4.48797)$. Bel
This is indeed possible. The points $(0.95,0.4)$, $(0.63,0.13)$, $(0.37,0.12)$, $(0.25,0.44)$, $(0.35,0.83)$, $(0.46,0.96)$ and $(0.91,0.76)$ are shattered by copies of a triangle with squared side lengths $3.637$, $3.1492$ and $0.8194$. Here are $18$ diagrams that exhibit all the subsets. The three green points lie on the perimeter, and the triangle can be infinitesimally perturbed so as to contain any of the $8$ combinations of these three points. At least $16$ such diagrams are required to cover all $128$ subsets, but I didn't find a solution with fewer than $18$ diagrams. Here's the Java code I used to find this configuration. It generates random convex heptagons and triangles, finds all isometries that make two points lie on one side of the triangle and a third point on a second side, checks whether all subsets are covered in this way, and then uses simulated annealing to find a minimal subset of the resulting diagrams that covers all subsets. Note that another option would b
|geometry|triangles|
1
How to show that $(y-x^2, z-x^3) \subset \mathbb{C}[x,y,z]$ is irreducible or radical?
In Fulton's introduction to Algebraic Geometry, there is the following exercise on page 11: I have been struggling for a bit longer than I care to admit on this problem, and have not been able to get a handle on it. I have managed to prove that such an algebraic set is equal to $V = V(y-x^2, z-x^3)$, but I haven't managed to prove that this is the ideal of the variety. I have looked elsewhere for solutions, but have not been able to find any. However, I did come across the solution to part a of this particular problem, and figured it would be a good idea to link it here for anyone who might stumble on this post in the future: Irreducible components of the variety $V(X^2+Y^2-1,X^2-Z^2-1)\subset \mathbb{C}^3.$ In this particular chapter, we have covered Hilbert's Nullstellensatz, so there is an obvious route open to me that the author seems to intend: prove the ideal $\mathcal{I} = (y-x^2, z-x^3)$ is prime, in which case it is radical and therefore the ideal of the algebraic set; bei
Here is a solution in a series of hints/steps, the details of which I leave to you. Some of these are a little tedious to demonstrate with absolute rigor, but the good news is that they do not use anything more than basic ring theory, as you preferred. 1. Let $R$ be a domain. Show that for any $a \in R$, the kernel of the unique $R$-algebra morphism $R[X] \to R$ sending $X$ to $a$ is $\langle X - a \rangle$. (Suggestion: use Euclidean division by $X-a$, which is possible because the leading coefficient is a unit - it's $1$!) 2. Using 1, prove that for any $a_{1}, \ldots, a_{n} \in R$, the kernel of the evaluation morphism $R[X_{1}, \ldots, X_{n}] \to R$ sending $X_{i}$ to $a_{i}$ is $\langle X_{1}-a_{1}, \ldots, X_{n} - a_{n} \rangle$. Perhaps you might use induction, and the fact that $R[X_{1}, \ldots, X_{n}]$ is canonically isomorphic to $(R[X_{1}, \ldots, X_{n-1}])[X_{n}]$. 3. Conclude that the ideal $\langle X_{1} - a_{1}, \ldots, X_{n}-a_{n} \rangle \subset R[X_{1}, \ldots, X_{n}]$ is prime, as
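Step 1's Euclidean division by $X-a$ is just synthetic division (Horner's scheme), which works over any commutative ring because the divisor is monic; the remainder is $f(a)$, so $f\in\langle X-a\rangle$ iff $f(a)=0$. A quick illustrative sketch (names are mine, not from the exercise):

```python
def divide_by_linear(coeffs, a):
    """Divide f(X), given by coeffs (highest degree first), by (X - a).
    Returns (quotient coeffs, remainder); the remainder equals f(a)."""
    acc = []
    rem = 0
    for c in coeffs:
        rem = rem * a + c   # Horner step
        acc.append(rem)
    rem = acc.pop()         # the final accumulator value is f(a)
    return acc, rem

# f = X^3 - 6X^2 + 11X - 6 = (X - 1)(X - 2)(X - 3), divided by (X - 2)
q, r = divide_by_linear([1, -6, 11, -6], 2)
print(q, r)  # [1, -4, 3] 0  ->  quotient X^2 - 4X + 3, remainder f(2) = 0
```

Only ring operations on the coefficients are used, which is why the argument goes through over any domain $R$.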
|algebraic-geometry|ring-theory|algebraic-curves|affine-varieties|polynomial-rings|
0
Regular conditional distribution of $Y$ given $X=x$ in Klenke's book
In his book "Probability theory", Klenke uses the following definition of transition kernel: and if in $ii)$ the measure is a probability measure for all $\omega_1$ then $K$ is called a stochastic kernel. Later on he defines regular conditional distribution of $Y$ given $\mathcal F$ as follows: After the last line in the image above, he added "(the function from the factorization lemma with an arbitrary value for $x \notin X(\Omega)$ ) is called a regular conditional distribution of $Y$ given X.", which perplexes me a lot. My attempt to understand this line: First, I think he forgot to assume that the map $ \omega \mapsto K_{Y|\sigma(X)}(\omega, B)$ should be $\mathcal F$ -measurable for any fixed $B \in \mathcal E$ . Next, I assume the previous statement holds and fix one $B \in \mathcal E$ . Since $K_{Y|\sigma(X)}(\cdot, B)$ is a version of $ P(Y \in B | \sigma(X)) $ and is $\sigma(X)$ -measurable too, by factorization theorem, there is a measurable function $\kappa(x, B)$ from $(E',
I also think his construction of $\kappa_{Y,X}(x,A)$ is a bit strange. Instead of making sense of the strange composition $\kappa_{Y,\sigma(X)}(X^{-1}(\omega),A)$, I would declare $\kappa'_{Y,X}$ to be the stochastic kernel from $(E', \mathcal{E}')$ to $(E, \mathcal{E})$ such that $$ \int_{\Omega} \mathbf{1}_B(Y(\omega)) \mathbf{1}_A(X(\omega)) \, \mathbf{P}(\mathrm{d}\omega) = \int_{E'} \kappa'_{Y,X}(x, B) \mathbf{1}_A(x) \, \mathbf{P}(X \in \mathrm{d}x) \tag{1}\label{e:def} $$ for all $A \in \mathcal{E}'$ and $B \in \mathcal{E}$, or equivalently, $$ \mathbf{P}(Y \in B, X \in A) = \int_{E'} \mathbf{P}(Y \in B \mid X = x) \mathbf{1}_A(x) \, \mathbf{P}(X \in \mathrm{d}x) $$ where the notation $\kappa'_{Y,X}(x, B) = \mathbf{P}(Y \in B \mid X = x)$ is adopted for better readability and $\mathbf{P}(X \in \cdot) = (\mathbf{P}\circ X^{-1})(\cdot)$ is the distribution of $X$. If the kernel $\kappa'_{Y,X}$ as defined above exists, then $\kappa_{Y,\sigma(X)}$ also exists and
|probability-theory|measure-theory|conditional-probability|
0
Card matching game
The game setup is as follows: there is a bag with 6 matching pairs of cards (two 1s, two 2s, two 3s, two 4s, two 5s, two 6s). We randomly draw one card at a time, but put matching cards aside as soon as they appear in our hand. The game ends and we lose if we ever hold three cards, no two of which match. What is the probability of winning? My Approach Let $State_x$ be the state where we have $x$ cards in our hand and $n$ be the number of remaining cards in the deck. After some thought I realized that in the 'winning' scenario we will always come back to the state where we have 1 card in our hand. So for every possible $n$ , when we are at $State_1$ (this will only happen when $n$ is odd), I calculated the probability of coming back to $State_1$ : Either this way: $State_1 \to State_0 \to State_1$ In this case the probability of coming back to $State_1$ is $\frac{1}{n}$ Or this way: $State_1 \to State_2 \to State_1$ In this case the probability of coming back to $State_1$ is $\frac{n-1}
Consider a generalization of this game: instead of $6$ pairs in the deck, there are $N$ pairs. As we play, every time we get a pair in our hand, we reduce to a smaller game where $N$ has decreased by $1$ ! With one small wrinkle: we may start this smaller game with a card already in our hand (e.g. our first 3 draws are $1, 2, 1$ , yielding a new game where we start with a $2$ in our hand). So, we should recursively compute the probability that we win a game with $N$ pairs and $k \in \{0,1\}$ cards already in our hand. However, we can note that having a card already in your hand does not change your probability of winning whatsoever -- if you start with no cards in your hand, the first thing you do is draw a card, and the card you draw doesn't affect your odds of winning in any way. So we really just need to recursively compute the probability that we win a game with $N$ pairs and no cards in our hand. Let's do it! If $N = 1$ , the probability that we win is $\boxed{1}$ . If $N = 2$ , t
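The recursion can be carried out mechanically with exact arithmetic. In the sketch below (my own state encoding, not from the answer), a state $(p,h)$ records $p$ pairs still fully in the deck and $h$ unmatched cards in hand whose partners are in the deck; note that $W(p,0)=W(p-1,1)$, which is exactly the observation that starting with one card in hand changes nothing. For $N=6$ pairs it yields $9/385\approx0.0234$:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def W(p, h):
    """Win probability from state (p, h); we lose if the hand would reach 3 cards."""
    n = 2 * p + h                 # cards remaining in the deck
    if n == 0:
        return Fraction(1)        # deck exhausted: we won
    win = Fraction(0)
    if h > 0:                     # draw the partner of a card we hold
        win += Fraction(h, n) * W(p, h - 1)
    if p > 0 and h < 2:           # draw a fresh card (at h = 2 this would lose)
        win += Fraction(2 * p, n) * W(p - 1, h + 1)
    return win

for N in range(1, 7):
    print(N, W(N, 0))             # N = 6 gives 9/385
```

The memoized recursion touches only $O(N)$ states, so it is instant even for large decks.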
|probability|card-games|
0
Matrix of similarity
Let $0 < k \in \mathbb{Z}$, let $F$ be a field, and let $p \in F[x]$ be monic and irreducible. Let $C(p^k)$ be the companion matrix of $p^k$. Let $X=\begin{bmatrix} C(p^k) & A \\ 0 & B \end{bmatrix}$ with minimal polynomial $p^k$. Then $X$ is similar to $C(p^k) \oplus D$ for some matrix $D$. Question: Can the similarity matrix be taken to be block upper triangular, partitioned conformally with $X$?
The answer is yes. This is a special case of a slightly more general fact: let $f$ be any monic non-constant polynomial, write $C=C(f)$, and let $X=\pmatrix{C&A\\ 0&B}$. If $f(X)=0$, then the equation $CY-YB=A$ is solvable, and hence $$ \pmatrix{I&Y\\ 0&I}\pmatrix{C&A\\ 0&B}\pmatrix{I&-Y\\ 0&I} =\pmatrix{C&0\\ 0&B}. $$ For a proof that $CY-YB=A$ is solvable, see Robert Hartwig, "Roth's removal rule revisited", Linear Algebra Appl. 49:91-115 (1983), section 4, theorem 1.
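A quick numerical illustration with toy matrices of my choosing (numpy): solve $CY-YB=A$ via the Kronecker/vec trick, then check that the block upper triangular conjugation displayed above block-diagonalizes $X$. (In the theorem the hypothesis $f(X)=0$ guarantees solvability; in this toy example solvability instead follows from the disjoint spectra of $C$ and $B$.)

```python
import numpy as np

# C is the companion matrix of f(x) = (x - 2)^2 = x^2 - 4x + 4;
# B is chosen with spectrum disjoint from {2}, so C Y - Y B = A is solvable
C = np.array([[0., -4.],
              [1.,  4.]])
B = np.diag([1., -1.])
A = np.array([[1., 2.],
              [3., 4.]])
n = 2

# vec(C Y - Y B) = (I ⊗ C - B^T ⊗ I) vec(Y), with column-major stacking
M = np.kron(np.eye(n), C) - np.kron(B.T, np.eye(n))
Y = np.linalg.solve(M, A.flatten(order="F")).reshape((n, n), order="F")

X = np.block([[C, A], [np.zeros((n, n)), B]])
T = np.block([[np.eye(n), Y], [np.zeros((n, n)), np.eye(n)]])
Tinv = np.block([[np.eye(n), -Y], [np.zeros((n, n)), np.eye(n)]])
D = T @ X @ Tinv

print(np.round(D, 10))  # block diagonal: diag(C, B)
```

The similarity matrix $T$ is itself block upper triangular and conformal with $X$, which is the point of the question.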
|linear-algebra|matrices|
1
The given answer is 1,800, whereas I found it to be 1,600. Please explain how.
In an engineering college of 10,000 students, 1,500 like neither their core branches nor other branches. The number of students who like their core branches is 1/4th of the number of students who like other branches. The number of students who like both their core and other branches is 500. The number of students who like their core branches is... The method I used for solving: Total students = 10,000. Students who like neither core nor other branches = 1,500. Students who like both core and other branches = 500. Let's denote students who like only their core branches as $x$ and students who like only other branches as $y$. Given: Total students $= x+y+500+1500 = 10,000$. We know that the number of students who like their core branches is 1/4th the number of students who like other branches, so $x = \frac{1}{4}y$. We can set up an equation using the information above and solve for $x$: $\dfrac{1}{4} y + y + 500 + 1500 = 10,000$. We get $y = 6{,}400$. Now, we can find $x$: $x = \dfrac{1}{4} y = \dfrac{1}{4} \tim
$x = \dfrac{1}{4}y$ That's not correct. The question says - The number of students who like their core branches is 1/4th of the number of students who like other branches. I don't see "only" used anywhere in the sentence. So, $x + 500 = \dfrac{1}{4}(y + 500)$ This should give you the correct answer, $1800$ .
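A brute-force check of this inclusion-exclusion (my own quick script) confirms $1800$:

```python
# total = neither + |core ∪ other|, with |core ∪ other| = core + other - both,
# both = 500, neither = 1500, total = 10000, and core = other / 4
solutions = [core for core in range(10001)
             if core + 4 * core - 500 + 1500 == 10000]
print(solutions)  # [1800]
```

The equation $5\,\text{core} = 9000$ has the single solution $\text{core} = 1800$, matching the given answer.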
|algebra-precalculus|
0
Does there exist an open dense subset of $\mathbb{R}$ with Lebesgue measure zero?
I know an example of an open dense set with finite Lebesgue measure, but not one with measure zero. Otherwise, can we prove that no such set exists?
No. Any nonempty open subset $U\subseteq \mathbb{R}$, dense or not, contains a small interval $(x_0-\varepsilon,x_0+\varepsilon)$ and therefore has positive Lebesgue measure.
|measure-theory|lebesgue-measure|
1
$0 \le A \le I \iff \langle\psi|A|\psi\rangle \in [0, 1]$ for every unit vector $|\psi\rangle \in H$
Given two operators $A$ and $B$, write $A \le B$ to mean that the operator $B - A$ is positive semidefinite. Then (i) $0 \le A \le I$ and (ii) $\langle\psi|A|\psi\rangle \in [0, 1]$ for every unit vector $|\psi\rangle \in H$ are equivalent. I am having trouble proving $(ii) \implies (i)$. If I take any nonzero vector $|\phi\rangle$, then plugging in the unit vector $|\phi\rangle/\||\phi\rangle\|$ implies $0\le \langle\phi|A|\phi\rangle\le \||\phi\rangle\|^2$. Then I wanted to use $(d)\iff(a)$ in the proposition below to conclude that $A$ is positive semidefinite, but for that I first need to prove that $A$ is Hermitian, and I can't figure out how to do that. Any idea? Proposition. For a Hermitian operator $A \in L(H)$, the following five conditions are equivalent: (a) $A$ is positive semidefinite. (b) $A = B^\dagger B$ for an operator $B \in L(H)$. (c) $A = B^\dagger B$ for an operator $B \in L(H, K)$ and some Hilbert space $K$. (d) $\langle\psi|A|\psi\rangle \ge 0$ for every $|\psi\rangle \in H$. (e) $\operatorname{Tr}[AC] \ge 0$ for every $C \in \mathrm{PSD}(H)$
Observe that \begin{align*} \langle \psi | A |\psi\rangle&\leq 1\\ &= \langle \psi | \psi \rangle \end{align*} therefore $\langle \psi | A-I | \psi \rangle \leq 0$. You also have $0\leq \langle \psi|A|\psi\rangle$, both for all unit vectors $\psi$, and you can now scale to arbitrary vectors. For the Hermiticity: since $\langle\psi|A|\psi\rangle$ is real for every $\psi$, the polarization identity on a complex Hilbert space forces $A=A^\dagger$. Therefore $0\preccurlyeq A\preccurlyeq I$.
|linear-algebra|functional-analysis|positive-definite|quantum-mechanics|self-adjoint-operators|
0
Where did I make a mistake in my derivation of the period of a pendulum formula?
Imagine a pendulum with string length $L$ and angle $\theta$, which forms arclength $S$ (will be used later in the derivation). First, $V = \sqrt{2gh}$ (height is measured straight down) - KNOWN TO BE A CORRECT FORMULA. (From here, assume that nothing else is proven to be correct.) Arclength $X = L\theta$. Time (period) $= T$. $V = \text{(distance)}/\text{(time)}$, so $V = L\theta/T$. $\sqrt{2gh} = \frac{4L\theta}{T}$ (four, as the pendulum covers that arc four times per period). Height $= L - L \cos \theta$. $\sqrt{2gL(1 - \cos\theta)} = \frac{4L\theta}{T}$, so $T = \frac{4L\theta}{\sqrt{2gL(1 - \cos\theta)}}$. This is very different from the known period of a (simple) pendulum, $T = 2\pi\sqrt{\frac{L}{g}}$, mainly in that the angle $\theta$ doesn't even appear. Is there a way to get the $\theta$ in my derived equation to cancel out? Did I just completely make a mistake?
The original derivation takes $\theta$ to be small, so $1-\cos\theta\approx{\theta^2\over 2}$ and your answer becomes $$T=4\sqrt{L\over g}$$ Now, your mistake is $v={s\over t}$: this only works when the velocity doesn't change (acceleration $=0$), but here the velocity depends on the height, and the height changes. I'll prove it here for reference; a fair background in rotational mechanics would help. Taking polar coordinates, we have $r=L$ fixed while $\theta$ (the angle with the vertical) varies. From rudimentary circular motion with fixed $r$, the torque equation about the pivot gives $$ mr^2{d^2\theta\over dt^2}=-mgr\sin\theta $$ The gravity component is taken perpendicular to the string, as the tension cancels the other component. Now nothing stops you from directly integrating for $\theta(t)$, but this comes out as elliptic integrals and other fun stuff; to avoid that we approximate $\sin\theta\approx\theta$. This is the general SHM equation $${d^2\theta\over dt^2}=-{g\over L}\theta$$ which solves to $$\theta(t)=A\sin\left(\sqrt{g\over
|algebra-precalculus|trigonometry|solution-verification|physics|
0
How to show that $(y-x^2, z-x^3) \subset \mathbb{C}[x,y,z]$ is irreducible or radical?
In Fulton's introduction to Algebraic Geometry, there is the following exercise on page 11: I have been struggling for a bit longer than I care to admit on this problem, and have not been able to get a handle on it. I have managed to prove that such an algebraic set is equal to $V = V(y-x^2, z-x^3)$, but I haven't managed to prove that this is the ideal of the variety. I have looked elsewhere for solutions, but have not been able to find any. However, I did come across the solution to part a of this particular problem, and figured it would be a good idea to link it here for anyone who might stumble on this post in the future: Irreducible components of the variety $V(X^2+Y^2-1,X^2-Z^2-1)\subset \mathbb{C}^3.$ In this particular chapter, we have covered Hilbert's Nullstellensatz, so there is an obvious route open to me that the author seems to intend: prove the ideal $\mathcal{I} = (y-x^2, z-x^3)$ is prime, in which case it is radical and therefore the ideal of the algebraic set; bei
From Cox, Little and O'Shea: Proposition 3. Let $V ⊂ k^n$ be an affine variety. Then $V$ is irreducible if and only if $I(V)$ is a prime ideal. As an example of how to use Proposition 3, let us prove that the ideal $I(V)$ of the twisted cubic is prime. Suppose that $f g ∈ I(V).$ Since the curve is parametrized by $(t, t^2, t^3),$ it follows that, for all $t ,$ $f (t, t^2, t^3)g(t, t^2, t^3) = 0.$ This implies that $f (t, t^2, t^3)$ or $g(t, t^2, t^3)$ must be the zero polynomial, so that $f$ or $g$ vanishes on $V.$ Hence, $f$ or $g$ lies in $I(V),$ proving that $I(V)$ is a prime ideal. By the proposition, the twisted cubic is an irreducible variety in ${\Bbb R}^3$ . One proves that a straight line is irreducible in the same way: first parametrize it, then apply the above argument. In fact, the above argument holds much more generally. Proposition 5. If k is an infinite field and $V ⊂ k^n$ is a variety defined parametrically $$\begin{align} x_1 &= f_1(t_1, \ldots , t_m),\\ &\vdots\\ x_n
|algebraic-geometry|ring-theory|algebraic-curves|affine-varieties|polynomial-rings|
0
Does every triangulation of a regular n-gon have the same minimum angle?
My knowledge may not be correct, but I will state it nonetheless: "Delaunay triangulation maximizes the minimum angle among all triangulations." "Every triangulation of a regular n-gon satisfies the Delaunay condition, as no point is in the interior of the circumcircle of any triangle formed; therefore they are all valid Delaunay triangulations." If I am not mistaken, does every triangulation of a regular n-gon then have the same minimum angle, because they all maximize the minimum angle and are all Delaunay?
Every angle in a triangulation is opposite an edge connecting two vertices of the $n$-gon. By the inscribed angle theorem, it equals half the central angle subtended by that edge, so it is a multiple of $\pi/n$. Each side of the polygon is an edge of some triangle, and the angle opposite a side is exactly $\pi/n$, so the smallest angle is always $\pi/n$.
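A quick numerical check (my own, stdlib only) with a fan triangulation of a regular $n$-gon: every angle comes out a whole multiple of $\pi/n$ and the minimum is $\pi/n$.

```python
import math

def angle_at(q, p, r):
    """Interior angle at vertex q of triangle p-q-r."""
    ax, ay = p[0] - q[0], p[1] - q[1]
    bx, by = r[0] - q[0], r[1] - q[1]
    c = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.acos(max(-1.0, min(1.0, c)))

n = 9
verts = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n)) for k in range(n)]

# fan triangulation from vertex 0: triangles (v0, v_i, v_{i+1})
angles = []
for i in range(1, n - 1):
    a, b, c = verts[0], verts[i], verts[i + 1]
    angles += [angle_at(a, b, c), angle_at(b, a, c), angle_at(c, a, b)]

unit = math.pi / n
print(min(angles) / unit)                            # ≈ 1.0: the minimum angle is pi/n
print(sorted({round(x / unit, 6) for x in angles}))  # all whole multiples of pi/n
```

The same check on any other triangulation of the same vertex set gives the same minimum, as the inscribed angle argument predicts.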
|computational-geometry|triangulation|
1
Derivative of $ |u|^\gamma $ with $ \gamma>1 $
I have a doubt about the proof of the Gagliardo-Nirenberg-Sobolev inequality found in Evans' book "Partial Differential Equations" (Theorem 1, pages 277-279). In this part of the proof, $\gamma>1$ is a constant and $u\in C^1(\mathbb{R}^n)$ is a function with compact support (i.e., $u\in C^1_c(\mathbb{R}^n)$). By definition, $D|u|^\gamma$ is the gradient of $|u|^\gamma$ (I'm not sure if it is the ordinary derivative or the weak derivative, but I think it's the ordinary derivative). Why is $|D|u|^\gamma|$ equal to $\gamma\cdot|u|^{\gamma-1}\cdot|Du|$? It is similar to the chain rule, but I can't understand the partial derivatives of $|u|^\gamma$, because the function $|x|$ isn't differentiable at the origin. See Evans' book, page 278. (PS: I'm not sure about my English level, but I think my question can be understood.)
For Evans, $Du$ always refers to the weak gradient. However, in this case $\vert u \vert^\gamma$ is differentiable in the classical sense, so $Du$ also denotes the classical gradient (since when a function is classically differentiable then it is weakly differentiable and the two notions of derivatives agree). Indeed, let $g : \mathbb R \to \mathbb R$ defined by $z \mapsto \vert z \vert^\gamma$ . Then one can check from first principles that $g':\mathbb R \to \mathbb R$ and is given by $$g'(z) = \begin{cases} \gamma z \vert z \vert^{\gamma - 2}, &\text{if } z\neq 0, \\ 0, &\text{if } z=0. \end{cases} $$ Note that $g'$ is continuous at $z=0$ since $\gamma>1$ , so $g\in C^1(\mathbb R)$ . Since this is the case, we can just say $g'(z)=\gamma z \vert z \vert^{\gamma-2}$ with the understanding that $g'(0)=0$ . Hence, the chain rule applies to $g(u(x))$ and $$D \vert u \vert^\gamma = g'(u(x))D u=\gamma u(x) \vert u(x) \vert^{\gamma -2} D u .\tag{$\ast$} $$ Note carefully that what you have w
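A finite-difference check of the formula $g'(z)=\gamma z|z|^{\gamma-2}$ (with $g'(0)=0$), say for $\gamma=1.5$ (my own quick sketch):

```python
gamma = 1.5

def g(z):
    return abs(z) ** gamma

def g_prime(z):
    # the classical derivative from the answer; continuous at 0 because gamma > 1
    return 0.0 if z == 0 else gamma * z * abs(z) ** (gamma - 2)

h = 1e-7
for z in [-2.0, -0.3, 0.0, 0.5, 1.7]:
    fd = (g(z + h) - g(z - h)) / (2 * h)   # central difference
    print(z, fd, g_prime(z))               # the two columns agree
```

Note that at $z=0$ the central difference is exactly $0$, since $g$ is even, in agreement with $g'(0)=0$.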
|calculus|partial-differential-equations|partial-derivative|chain-rule|
1
Where did I make a mistake in my derivation of the period of a pendulum formula?
Imagine a pendulum with string length $L$ and angle $\theta$, which forms arclength $S$ (will be used later in the derivation). First, $V = \sqrt{2gh}$ (height is measured straight down) - KNOWN TO BE A CORRECT FORMULA. (From here, assume that nothing else is proven to be correct.) Arclength $X = L\theta$. Time (period) $= T$. $V = \text{(distance)}/\text{(time)}$, so $V = L\theta/T$. $\sqrt{2gh} = \frac{4L\theta}{T}$ (four, as the pendulum covers that arc four times per period). Height $= L - L \cos \theta$. $\sqrt{2gL(1 - \cos\theta)} = \frac{4L\theta}{T}$, so $T = \frac{4L\theta}{\sqrt{2gL(1 - \cos\theta)}}$. This is very different from the known period of a (simple) pendulum, $T = 2\pi\sqrt{\frac{L}{g}}$, mainly in that the angle $\theta$ doesn't even appear. Is there a way to get the $\theta$ in my derived equation to cancel out? Did I just completely make a mistake?
1. What went wrong? The formula $$\text{speed}=\frac{\text{distance}}{\text{time}} $$ actually defines the average speed , whereas your formula $v=\sqrt{2gh}$ is the maximum instantaneous speed of the bob (when it passes through the minimum height point). This means that you cannot directly plug $v$ to the above formula. 2. What is correct? In fact, the period of a (not-approximated) pendulum is given by $$T = \frac{4}{\omega} K\left(\sin\frac{\theta_0}{2}\right),$$ where $\omega = \sqrt{\frac{g}{L}}$ is the angular frequency of corresponding simple pendulum, $K(k) = \int_{0}^{\frac{\pi}{2}} \frac {\mathrm{d}\theta}{\sqrt{1 - k^{2}\sin^{2} \theta}}$ is the complete elliptic integral of the first kind , and $\theta_0$ is the maximum angular displacement. When $\theta_0$ is small, this is approximated by the usual formula for the simple pendulum: $$ \theta_0 \ll 1 \qquad\implies\qquad T \approx \frac{2\pi}{\omega} = 2\pi \sqrt{\frac{L}{g}}. $$ Check this page for the derivation.
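The complete elliptic integral can be evaluated with nothing but the arithmetic-geometric mean, $K(k)=\frac{\pi}{2\,\mathrm{agm}(1,\sqrt{1-k^2})}$, which makes it easy to see how fast $T$ approaches $2\pi\sqrt{L/g}$ as $\theta_0\to0$ (my own sketch, stdlib only):

```python
import math

def agm(a, b, tol=1e-15):
    # arithmetic-geometric mean
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def K(k):
    # complete elliptic integral of the first kind via K(k) = pi / (2 agm(1, sqrt(1 - k^2)))
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - k * k)))

def period_ratio(theta0):
    # T / (2 pi sqrt(L/g)) = (2 / pi) K(sin(theta0 / 2))
    return (2 / math.pi) * K(math.sin(theta0 / 2))

for theta0 in [0.01, 0.5, 1.0, 3.0]:
    print(theta0, period_ratio(theta0))
# the ratio tends to 1 as theta0 -> 0 (≈ 1 + theta0^2/16 for small theta0) and grows with amplitude
```

This makes the small-angle approximation quantitative: at $\theta_0=1$ rad the true period is already about $6.6\%$ longer than $2\pi\sqrt{L/g}$.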
|algebra-precalculus|trigonometry|solution-verification|physics|
0
Why is the Daniell integral not so popular?
The Riemann integral is the most common integral in use and is the first integral I was taught to use. After doing some more advanced analysis it becomes clear that the Riemann integral has some serious flaws. The most natural way to fix all the drawbacks of the Riemann integral is to develop some measure theory and construct the Lebesgue integral. Recently, someone pointed out to me that the Daniell integral is ‘equivalent’ to the Lebesgue integral. It uses a functional analytic approach instead of a measure theoretic one. However, most courses in advanced analysis do not cover the theory of the Daniell integral and most books prefer the Lebesgue integral. But since these two constructions are equivalent, why do people prefer the Lebesgue integral over the Daniell integral?
Measure theory in the vein of Lebesgue and Caratheodory relies on an abstract axiomatic system and resembles sprawling computer code more than it does math. Worse yet, it is typically not clear whether the exposition in a particular textbook is even consistent and free of contradictions. There are other possibilities, notably the Daniell integral or Riesz's approach, e.g. Lebesgue Integration and Measure by Alan Weir. The Lebesgue theory was there first, however, and it's still what we do. Apropos, Mephisto in Goethe's Faust gives to a student, who mistakes him for a professor, the following answer: "Hear, therefore, one alone, for that is best, in sooth, and simply take your master's words for truth." That might be just as well. Unfortunately, people also tend to be quite opinionated about this topic.
|analysis|functional-analysis|measure-theory|soft-question|integration|
0
Does the function has to be necessarily monotonically decreasing to apply Cauchy's Condensation Test?
Test the convergence of $$\displaystyle \sum_{n=1}^\infty\left(\frac{\log n}{n}\right)^2$$ My textbook has used Cauchy's Condensation Test (CCT) to prove that the series is convergent. But I know that to apply CCT, the function $f(n)$ has to be monotonically decreasing. $\left(\frac{\log n}{n}\right)^2$ is not a monotonically decreasing function. Is my textbook wrong here?
In this case, you can create a similar series: $$S' = \sum_{n=1}^\infty g(n)$$ where $g(n) = \left(\frac{\log{n}}{n}\right)^2 + a_n$, choosing suitable $a_n$ that make $g(n)$ non-increasing while having $a_n = 0 \ \forall n > 3$. Applying CCT to $S'$ yields that it is a convergent series, so you can then subtract $\sum_{n=1}^3 a_n$ from it and conclude that your initial sum also converges. Note that this only works when a finite number of initial terms of the sum are increasing.
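Concretely, $(\log n/n)^2$ increases only up to $n=e$ and decreases afterwards, so corrections are needed only at $n=1,2$; e.g. $a_1=0.2$, $a_2=0.02$, $a_n=0$ for $n\ge 3$ (the values are my choice). A quick monotonicity check:

```python
import math

def f(n):
    return (math.log(n) / n) ** 2

a = {1: 0.2, 2: 0.02}     # correction terms; a_n = 0 for n >= 3

def g(n):
    return f(n) + a.get(n, 0.0)

vals = [g(n) for n in range(1, 10001)]
print(all(x >= y for x, y in zip(vals, vals[1:])))  # True: g is non-increasing
```

Any $a_1 \ge f(2)+a_2$ and $a_2 \ge f(3)-f(2)$ works equally well; only the tail behavior matters for CCT.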
|real-analysis|sequences-and-series|convergence-divergence|
0
Show that the general solution to $y^{\prime\prime}-4xy^{\prime}+\left(4x^2-2\right)y=0$ is $y\left(x\right)=C_1\mathrm{e}^{x^2}+C_2x\mathrm{e}^{x^2}$
By solving the differential equation $$y^{\prime\prime}-4xy^{\prime}+\left(4x^2-2\right)y=0$$ using the series method, demonstrate that its general solution is $$y\left(x\right)=C_1\mathrm{e}^{x^2}+C_2x\mathrm{e}^{x^2}$$ Here, $C_1$ and $C_2$ are arbitrary constants of integration. My attempt and hint : I've been given a hint that by obtaining recursive relationships for the coefficients of the series, you can calculate several initial terms, and then "guessing" the expression for the $n$ -th coefficient (since you know what you should get) you can verify whether it holds true in all cases. I am not really familiar with this method and would appreciate your help.
Assuming we don't know the answer: $y=\sum_{n=0}^\infty a_n x^n$, $y'=\sum_{n=1}^\infty na_n x^{n-1}$, $y''=\sum_{n=2}^\infty n(n-1)a_n x^{n-2}$. Substituting into the given equation $y''-4xy'+(4x^2-2)y=0$ we have $$\sum_{n=2}^\infty n(n-1)a_n x^{n-2}\color{red}{- \sum_{n=1}^\infty 4na_n x^n}+ \color{purple}{\sum_{n=0}^\infty 4a_n x^{n+2}}\color{red}{- \sum_{n=0}^\infty 2a_n x^n}=0$$ Separating out some initial terms, we have $$2a_2+6a_3x+\sum_{n=4}^\infty n(n-1)a_n x^{n-2}\color{red}{-4a_1x- \sum_{n=2}^\infty 4na_n x^n}+ \color{purple}{\sum_{n=0}^\infty 4a_n x^{n+2}}\color{red}{-2a_0-2a_1x- \sum_{n=2}^\infty 2a_n x^n}=0$$ By reindexing the black and the purple series and combining the reds, $$(2a_2-2a_0)+(6a_3-6a_1)x+\sum_{n=2}^\infty (n+2)(n+1)a_{n+2}x^n \color{red}{-\sum_{n=2}^\infty (4n+2)a_n x^n} +\color{purple}{\sum_{n=2}^\infty 4a_{n-2} x^n}=0$$ Combining all, $$2(a_2-a_0)+6(a_3-a_1)x+\sum_{n=2}^\infty I_n x^n=0\tag1$$ where $$I_n=(n+2)(n+1)a_{n+2}-(4n+2)a_n+4a_{n-2}.$$ From $(1)$, $a_2=a_
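One can let a computer do the "guessing": iterating the recurrence $a_{n+2}=\frac{(4n+2)a_n-4a_{n-2}}{(n+2)(n+1)}$ together with $a_2=a_0$ and $a_3=a_1$ reproduces, for the two choices of initial data, the Taylor coefficients of $e^{x^2}$ and $xe^{x^2}$ (namely $a_{2m}=1/m!$ and $a_{2m+1}=1/m!$). A sketch in exact arithmetic:

```python
from fractions import Fraction
from math import factorial

def coefficients(a0, a1, N=20):
    a = [Fraction(a0), Fraction(a1)]
    a.append(a[0])   # a_2 = a_0
    a.append(a[1])   # a_3 = a_1
    for n in range(2, N - 2):
        # I_n = 0  =>  a_{n+2} = ((4n + 2) a_n - 4 a_{n-2}) / ((n + 2)(n + 1))
        a.append(((4 * n + 2) * a[n] - 4 * a[n - 2]) / ((n + 2) * (n + 1)))
    return a

even = coefficients(1, 0)   # initial data a_0 = 1, a_1 = 0: should match e^{x^2}
odd = coefficients(0, 1)    # initial data a_0 = 0, a_1 = 1: should match x e^{x^2}
print([even[2 * m] for m in range(8)])   # 1, 1, 1/2, 1/6, 1/24, ...
print([odd[2 * m + 1] for m in range(8)])
```

Once the pattern $a_{2m}=a_{2m+1}=1/m!$ is conjectured, it can be verified by induction in the recurrence.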
|sequences-and-series|ordinary-differential-equations|
1
Is There a Conceptual Connection Between the 3D Winding Number and Ray Casting Algorithms?
The 3D winding number provides a numerical answer to whether a point is inside or outside a closed surface, with its definition arising from surface integration. In my recent journey through computational geometry, I stumbled upon a paper addressing the calculation of the 3D winding number for a closed triangular mesh. This method involves casting a ray in any direction and, for each triangle it intersects, computing the dot product between the ray's direction and the triangle's normal. If this value is positive, the winding number is decreased by 1; if negative, it is increased by 1. The paper posits that a point resides inside the mesh if the winding number is nonzero, and outside if it equals 0. This technique strikes me as somewhat similar to a ray-casting algorithm, another strategy for determining a point's position relative to a shape. Despite both methods aiming for the same outcome, I have been unable to find a theoretical link connecting them. Am I missing
I've resolved my query regarding the connection between the 3D Winding Number and Ray Casting methods. Traditionally, Ray Casting counts the number of times a ray crosses a polygon's edges to determine a point's inside/outside status by the parity of crossings. This approach aligns with the winding number for simply-connected polygons but diverges for self-intersecting polygons. In 2001, Dan Sunday introduced a variation that mimics the winding number calculation without trigonometric or complex integrals. Like conventional Ray Casting, Sunday's method involves casting a ray from the point in question, typically in the +x direction, and tracking the polygon's tangent direction. However, it incorporates the crossing direction: incrementing the winding number for upward crossings and decrementing for downward ones. Completing a full trace adjusts the winding number based on these interactions, determining the point's status (inside if nonzero, outside if zero) with accuracy, even for sel
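For reference, the 2D version of Sunday's crossing-direction winding number is only a few lines; the sketch below follows his upward/downward-crossing rule (variable names are mine):

```python
def is_left(p0, p1, p2):
    """> 0 if p2 is left of the directed line p0 -> p1, < 0 if right, 0 if on it."""
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])

def winding_number(pt, poly):
    """Winding number of the closed polygon `poly` (vertex list) around `pt`."""
    wn = 0
    for i in range(len(poly)):
        v0, v1 = poly[i], poly[(i + 1) % len(poly)]
        if v0[1] <= pt[1]:
            if v1[1] > pt[1] and is_left(v0, v1, pt) > 0:
                wn += 1          # upward crossing with pt strictly to the left
        else:
            if v1[1] <= pt[1] and is_left(v0, v1, pt) < 0:
                wn -= 1          # downward crossing with pt strictly to the right
    return wn

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(winding_number((1, 1), square))   # 1  -> inside
print(winding_number((3, 3), square))   # 0  -> outside
```

Dropping the sign and counting crossings modulo 2 recovers plain ray casting, which is exactly the relationship between the two methods; for self-intersecting polygons the signed count and the parity count can disagree.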
|geometry|algorithms|geometric-topology|computational-geometry|winding-number|
1
How to perform a random split on a value
I have 30 dollars; I randomly split it into 3 parts and send them to persons A, B, and C. After playing an infinite number of games, the expected payoff of each person could be: Case A: person A: 15, person B: 7.5, person C: 7.5. Or Case B: person A: 10, person B: 10, person C: 10. I think it depends on how you randomly split the value into 3 parts. Case A: person A randomly gets a number from (0-30); person B randomly gets a number from (0 - whatever is left after person A); person C gets 30 minus what persons A and B got. In the above case, it is no longer a fair game, because the first person always draws from a larger range and so tends to get a higher number. Case B: However, I have no idea how to design a fair random split.
It would be simple if you see it as cutting a circle. How do you cut a circle into 3 pieces randomly? Just choose 3 points randomly! Then the three pieces have equal expected length. Due to symmetry, we can fix one point at zero, so we just need to choose the other two points. If the problem is continuous, that should be it: we sort the points and take their differences. For the discrete problem, we need to make a little adjustment, since we should "round" the cuts to integer positions; the simplest way is to choose the cut points among the $n$ integer positions directly. E.g., for n=30, in C++, we can choose the cut points this way:

```cpp
#include <cstdlib>
#include <ctime>
#include <algorithm>
#include <iostream>
using namespace std;

int main() {
    srand(time(0));
    int n = 30;
    long long cnt = 0, A = 0, B = 0, C = 0;
    for (; cnt < 100000; ++cnt) {
        int x = rand() % n, y = rand() % n;  // two cuts; the third is fixed at 0
        if (x > y) swap(x, y);
        A += x;        // first piece
        B += y - x;    // second piece
        C += n - y;    // third piece
    }
    cout << "A=" << (double)A / cnt << ",B=" << (double)B / cnt
         << ",C=" << (double)C / cnt << endl;
}
```

possible output: A=9.9646,B=9.9869,C=10.0485

Or we could just choose $N=3$ points without fixing one at zero, since adding one more random point doesn't make much difference:

```cpp
#include <cstdlib>
#include <ctime>
#include <algorithm>
#include <iostream>
using namespace std;

const int N = 3;
int n = 30;

int main() {
    srand(time(0));
    long long cnt = 100000, sum[N] = {0};
    for (long long t = 0; t < cnt; ++t) {
        int points[N], len[N];
        for (int i = 0; i < N; ++i) points[i] = rand() % n;
        sort(points, points + N);
        for (int i = 0; i + 1 < N; ++i) len[i] = points[i + 1] - points[i];
        len[N - 1] = n - points[N - 1] + points[0];  // wrap around the circle
        for (int i = 0; i < N; ++i) sum[i] += len[i];
    }
    for (int i = 0; i < N; ++i)
        cout << (double)sum[i] / cnt << (i + 1 < N ? "," : "\n");
}
```

possible output: 10.0401,9.9872,9.97
|random|
0
How do I triangulate a point from 3 segments, rather than from 3 points?
I have a real-world acoustics problem that I'm trying to solve for some work related R&D. Hopefully you big brained folks can explain down to me. :) I'm good at visualizing math concepts, but I lack any substantial experience, save perhaps g/FooBar. So, sans elegance: Edit: Using the below descriptions, I'm attempting to model a room with people talking into mics while facing the center of the room. The segment variability models the potential distance to a mic that the respective person might be, as people in this environment will often change their position from leaning into a mic to leaning back in their chair. I realize these constraints limit beyond actual potential for where a person might be physically located relative to their mic. However, I picked the constraints because 1. they cover the vast majority of cases 2. I am actually interested in learning how to solve the problem as described 3. the output I am trying to achieve is for estimating time delays to compare to cross-co
Heuristically, if you have 4 mics, there might be just barely enough information to determine the placement of the points. With 5 mics, heuristically, I would expect a unique solution to exist. With only 3 mics, I'm pretty sure there is not enough information to resolve the location of the points. Here is the heuristic: if you have $n$ mics, then there are effectively $3n$ unknowns (2 unknowns for each mic for the $x,y$ coordinates of the mic, and 1 more unknown for the distance from the mic to its corresponding sound source). Also there are $(n-1)n$ observations (for each mic, there are $n-1$ differences in sound arrival timing). I don't know of a simple algebraic formula to recover the location of the points, but this could be formulated as an instance of an optimization problem, and then you could use any off-the-shelf optimization solver. In particular, let $\delta_{ij}$ be the difference: distance from source $i$ to mic $j$ minus distance from source $i$ to
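The counting heuristic above can be tallied directly; this little sketch (names are mine, not from the answer) finds the smallest $n$ at which the $n(n-1)$ timing observations catch up with the $3n$ unknowns:

```python
# Counting heuristic: n mics give 3n unknowns (x, y, and own source distance
# per mic) against n(n-1) pairwise arrival-time differences.
def unknowns(n):
    return 3 * n

def observations(n):
    return n * (n - 1)

# smallest n for which the system is no longer underdetermined
first_feasible = next(n for n in range(1, 10) if observations(n) >= unknowns(n))
```

With `first_feasible == 4`, the counts match the answer's claim that 4 mics are barely enough and 3 are not.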
|geometry|trigonometry|
1
Proving that $f$ is rational if it is known that $f$ is analytic in $\mathbb{D}$ and satisfies $|f(z)| \to 1$ as $|z| \to 1$
This question was already asked here. I wouldn't ask it again; however, I do not think that post gets across the point it's trying to make, and I can't find anything else related to this question online. The question is found in Donald E. Marshall's Complex Analysis 3.8: (a) Prove that $\phi$ is a one-to-one analytic map of $\mathbb{D}$ onto $\mathbb{D}$ if and only if $$ \phi(z) = c \left( \frac{z - a}{1 - \overline{a}z} \right), $$ for some constants $c$ and $a$ , with $|c| = 1$ and $|a| < 1$ . What is the inverse map? (b) Let $f$ be analytic in $\mathbb{D}$ and satisfy $|f(z)| \to 1$ as $|z| \to 1$ . Prove that $f$ is rational. And the point that I think the question is trying to get across is: can one show (b) given that (a) is proven? Otherwise why would Marshall bunch these two questions together? I was able to prove (a) using the Schwarz lemma and automorphisms of the disk, and I'm thinking it's related to (b) since $|\phi(z)|\to 1$ as $|z|\to 1$ (because $|c|=1$ ), however
I guess this is technically an answer (?). Conrad's answer here is probably what you were looking for; you simply needed to know what the word "Blaschke product" means. Given a finite collection of complex numbers $\{a_1, a_2, ..., a_n\}$ , a Blaschke product is a rational function of the form $$B(z) = K \prod _{j = 1}^n \dfrac{z - a_j}{1 - \overline{a_j}z}, \quad |K| = 1.$$ If you like you can even match the notation of (a) and write $$B(z) = \phi_1(z) \phi_2 (z) \cdots \phi_n(z), \quad \phi_j(z) = c_j\left(\dfrac{z - a_j}{1 - \overline{a_j}z}\right), \quad K = c_1c_2 \cdots c_n$$ That is, $B(z)$ is a product of functions of the form in part (a). As Conrad suggested in their answer, having proven (a) you know you can form a Blaschke product whose zeros exactly agree with the finitely many zeros of $f(z)$ . The solution doesn't require you to know anything about Blaschke products or what they are called.
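A quick numerical sanity check (my own sketch, not part of the argument) that a finite Blaschke product really does have modulus 1 on the unit circle, which is what lets it match $f$'s boundary behaviour:

```python
import cmath

def blaschke(z, zeros, K=1.0):
    # B(z) = K * prod (z - a_j) / (1 - conj(a_j) z), with each |a_j| < 1
    B = K
    for a in zeros:
        B *= (z - a) / (1 - a.conjugate() * z)
    return B
```

On $|z|=1$ each factor satisfies $|z-a|=|1-\bar a z|$, so $|B(z)|=1$ exactly, up to floating-point error.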
|complex-analysis|complex-numbers|analytic-functions|rational-functions|
0
Is this true (Trigonometry) and if so, can it be proved by induction?
I'm considering functions of the form $\cos{(n\theta)}+\sin{(n\theta)}$ and have noticed that when $n=4k+1$ for positive integers $k$ the expression seems to be divisible by $\cos{\theta}+\sin{\theta}$ (also works in the trivial case when $k=0$ ) and when $n=4k-1$ the expression is divisible by $\cos{\theta}-\sin{\theta}$ I've tested this idea for a few integers $k$ and it seems to work, so I'm wondering if it can be proven. Induction seems like the logical choice but exactly how to proceed is, for now, beyond my grasp. Does anyone have any ideas? (or a counter-example?)
Let $\varepsilon=\pm1$ . First note that $\cos t+\varepsilon\sin t=\sqrt2\cos(t-\varepsilon\pi/4)$ , so that your question amounts to: Prove that $\cos(\theta-\varepsilon\pi/4)$ divides $\cos((4k+1)\theta-\varepsilon\pi/4)$ . Moreover, letting $x:=\theta-\varepsilon\pi/4$ , we have $\cos((4k+1)x)=\cos((4k+1)\theta-\varepsilon k\pi-\varepsilon\pi/4)=(-1)^k\cos((4k+1)\theta-\varepsilon\pi/4)$ , so the task simplifies to: Prove that $\cos x$ divides $\cos((4k+1)x)$ . Actually, we even have $$\forall n\text{ odd},\quad\cos x\mid\cos(nx)=T_n(\cos x)\quad\text{in}\quad\Bbb Z[\cos x]$$ since the $n$ th Chebyshev polynomial $T_n(X)\in\Bbb Z[X]$ has the parity of $n$ (there are many ways to prove this, including induction). Edit: alternatively, notice directly (using De Moivre's formula and the binomial theorem) that $$\begin{align}\cos((2m+1)x)&=\operatorname{Re}\left((\cos x+i\sin x)^{2m+1}\right)\\&=\sum_{j=0}^m\binom{2m+1}{2j}(-1)^j\sin^{2j}x\cos^{2m+1-2j}x\\ &=\cos x\sum_{j=0}^m\binom{2m+1}{2j}(-1)^j\sin^{2j}x\cos^{2m-2j}x,\end{align}$$ which exhibits the factor $\cos x$ explicitly.
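The parity fact used above is easy to check numerically: building $T_n$ from the standard recurrence $T_0=1$, $T_1=x$, $T_{n+1}=2xT_n-T_{n-1}$, every odd-index Chebyshev polynomial has only odd powers of $x$, hence a factor of $x$ (i.e. $\cos x\mid\cos(nx)$ for odd $n$). A small sketch:

```python
def cheb_coeffs(n):
    # Coefficients of the Chebyshev polynomial T_n, ascending powers of x,
    # via T_0 = 1, T_1 = x, T_{k+1} = 2x T_k - T_{k-1}.
    a, b = [1], [0, 1]
    if n == 0:
        return a
    for _ in range(n - 1):
        nxt = [0] + [2 * c for c in b]        # 2x * T_k
        for i, c in enumerate(a):
            nxt[i] -= c                        # ... minus T_{k-1}
        a, b = b, nxt
    return b
```

For example `cheb_coeffs(3)` gives $T_3=4x^3-3x$, with every even coefficient zero.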
|trigonometry|induction|
1
Application of weak maximum principle.
Fix any open, bounded set $U \subset \mathbb{R}^n$ and suppose that $u \in C^2(U) \cap C(\bar{U})$ is a solution of $$-\Delta u = f \,\,\,\text{in}\,U$$ $$u=g \,\,\,\,\,\,\text{on}\,\,\,\partial U,$$ where $f \in C(\bar{U}), g\in C(\partial U).$ Prove that $$\sup_{x \in U}|u(x)| \leq C\left(\sup_{y \in \partial U}|g|+\sup_{x \in U}|f(x)|\right),$$ for a constant $C$ depending only on $n$ and $\sup_{x,y\in U}|x-y|.$ Proof: Based on the hint, I showed that the function $v(x)=u(x)+\dfrac{|x|^2}{2n}\sup_{y \in U}|f(y)|$ is subharmonic. Then applying the weak maximum principle, we get \begin{align*} \sup_{x \in U} |v(x)| \leq \sup_{x \in \partial U} |u(x)|+ \dfrac{1}{2n}\sup_{x \in \partial U} |x|^2\cdot\sup_{y\in U}|f(y)|. \end{align*} I don't know how to end up with the final inequality. Furthermore, I think the constant $C$ will depend on $n$ and $\sup_{x \in \partial U} |x|^2$ ; how could it depend on $\sup_{x,y\in U}|x-y|$ ? Could you please give me some ideas?
You are very close. Since $v$ is subharmonic, from the maximum principle you have that $$ u(x)+ \frac 1{2n} \vert x\vert^2\sup_{\Omega}\vert f\vert =v\leq\sup_\Omega v = \sup_{\partial \Omega} v \leq \sup_{\partial \Omega}\vert u \vert + \frac 1{2n} \big ( \sup_{x\in \Omega} \vert x\vert\big )^2\sup_{\Omega}\vert f\vert \qquad \text{for all } x\in \Omega.$$ Since $u=g$ on $\partial \Omega$ , this gives $$u(x) \leq \sup_{\partial \Omega}\vert g \vert + \frac 1{2n} \big ( \sup_{x\in \Omega} \vert x\vert\big )^2\sup_{\Omega}\vert f\vert-\frac 1{2n} \vert x\vert^2\sup_{\Omega}\vert f\vert \leq \sup_{\partial \Omega}\vert g \vert + \frac 1{n} \big ( \sup_{x\in \Omega} \vert x\vert\big )^2\sup_{\Omega}\vert f\vert. \tag{$\ast$} $$ This implies that $$ \sup_\Omega u \leq C \big ( \sup_{\partial \Omega}\vert g \vert + \sup_{\Omega}\vert f\vert\big ) $$ with $$C=\frac 1{n} \big ( \sup_{x\in \Omega} \vert x\vert\big )^2+1. $$ Applying ( $\ast$ ) to $-u$ further implies the same bound for $\sup_\Omega(-u)$ , and combining the two gives $$ \sup_\Omega \vert u \vert \leq C \big ( \sup_{\partial \Omega}\vert g \vert + \sup_{\Omega}\vert f\vert\big ). $$
|analysis|partial-differential-equations|supremum-and-infimum|maximum-principle|poissons-equation|
1
Why does $\sum_{n=1}^{\infty} \frac{1}{n}$ not converge and yet $\sum_{n=1}^{\infty} \frac{1}{n^2}$ does converge? Looking for an intuition
It was proven to us that $\sum_{n=1}^{\infty} \frac{1}{n}$ does not converge and yet $\sum_{n=1}^{\infty} \frac{1}{n^2}$ does converge. I don't doubt the proof, and yet I would like to acquire an 'intuition' for it. What I don't understand is what exactly the difference is between $\sum_{n=1}^{\infty} \frac{1}{n}$ and $\sum_{n=1}^{\infty} \frac{1}{n^2}$ . My intuition leads me to believe that the reason either of them should converge is that the terms being summed gradually decrease until the addition of the next term is meaningless, because both: $$\lim_{n \to \infty} \frac{1}{n} = 0$$ and $$\lim_{n \to \infty} \frac{1}{n^2} = 0.$$ I would appreciate it if someone could debunk this in plain terms which appeal to intuition, as I do understand the formality of the proof.
"The term of a series must have limit $0$ for the series to converge." True, but this is only a necessary condition, which is by no means sufficient. What matters (for a series with positive decreasing terms) is whether the terms decay fast enough for the sum of the remaining terms to tend to $0$ . A common way to study the convergence of a series $\sum_{n=1}^\infty u_n$ is to consider the auxiliary sequence of tails $\sum_{i=n}^{\infty} u_i$ , i.e. the sum of all remaining terms. Then the convergence of the series is equivalent to that auxiliary sequence being well defined and having limit $0$ .
|calculus|sequences-and-series|
0
Why does $\sum_{n=1}^{\infty} \frac{1}{n}$ not converge and yet $\sum_{n=1}^{\infty} \frac{1}{n^2}$ does converge? Looking for an intuition
It was proven to us that $\sum_{n=1}^{\infty} \frac{1}{n}$ does not converge and yet $\sum_{n=1}^{\infty} \frac{1}{n^2}$ does converge. I don't doubt the proof, and yet I would like to acquire an 'intuition' for it. What I don't understand is what exactly the difference is between $\sum_{n=1}^{\infty} \frac{1}{n}$ and $\sum_{n=1}^{\infty} \frac{1}{n^2}$ . My intuition leads me to believe that the reason either of them should converge is that the terms being summed gradually decrease until the addition of the next term is meaningless, because both: $$\lim_{n \to \infty} \frac{1}{n} = 0$$ and $$\lim_{n \to \infty} \frac{1}{n^2} = 0.$$ I would appreciate it if someone could debunk this in plain terms which appeal to intuition, as I do understand the formality of the proof.
I think you are looking at it from the wrong direction. Instead of looking for intuition into why a series converges or not, try to think about what in your current intuition makes the results you are looking at feel... strange. In my experience, the most common intuitive thinking can be summarized as: "Well, I am adding smaller and smaller numbers, so surely the sum can't just grow beyond all bounds, right? There must be some upper bound to the sum if the summands go to $0$ , right?" It's that second sentence that is usually the crux of the problem. The answer to it is no , and while the harmonic series is a very well-known counterexample, it is not the best for explaining things to our intuition. So, to explain things to our intuitive brain, I would look at two series. First, we can look at the series $$\frac12 + \frac12 + \frac13+\frac13+\frac13+\frac14+\frac14+\frac14+\frac14+\cdots$$ which obviously diverges. Intuition quickly accepts this. In a similar vein, the series $$\fr
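The grouping idea above shows in partial sums too: $H_{2n}-H_n$ is always at least $1/2$ ($n$ terms, each exceeding $1/(2n)$), so the harmonic series keeps adding non-vanishing chunks, while the partial sums of $1/k^2$ stay below their limit $\pi^2/6$. A small numerical sketch:

```python
def partial_sum(p, n):
    # partial sum of sum_{k=1}^{n} 1/k^p
    return sum(1.0 / k**p for k in range(1, n + 1))
```

For example, `partial_sum(1, 2000) - partial_sum(1, 1000)` is about $\ln 2 \approx 0.69$, comfortably above $1/2$, no matter how far out you start.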
|calculus|sequences-and-series|
1
How to show that $(y-x^2, z-x^3) \in \mathbb{C}[x,y,z]$ is irreducible or radical?
In Fulton's Introduction to Algebraic Geometry, there is the following exercise on page 11: I have been struggling for a bit longer than I care to admit on this problem, and have not been able to get a handle on it. I have managed to prove that such an algebraic set is equal to $V = V(y-x^2, z-x^3)$ , but I haven't managed to prove that this is the ideal of the variety. I have looked elsewhere for solutions, but have not been able to find any. However, I did come across the solution to part a of this particular problem, and figured it would be a good idea to link it here for anyone who might stumble on this post in the future: Irreducible components of the variety $V(X^2+Y^2-1,X^2-Z^2-1)\subset \mathbb{C}^3$ . In this particular chapter, we have covered Hilbert's Nullstellensatz, so there is an obvious route open to me that the author seems to intend: prove the ideal $\mathcal{I} = (y-x^2, z-x^3)$ is prime, in which case it is radical and therefore the ideal of the algebraic set; bei
One can also use the "algebra-geometry" correspondence: $(y-x^2,z-x^3)$ is a prime ideal in $\mathbb C[x,y,z]$ if and only if the set $$ V = \{ (x,y,z) \in \mathbb A^3_{\mathbb C} : y=x^2, z = x^3\} $$ is an irreducible set in $\mathbb A^3_{\mathbb C}$ . The answer of Jan-Magnis observes that $V$ can be parameterized as $$ V = \{ (t,t^2,t^3) : t \in \mathbb C \}. $$ Therefore, it is possible to find a bijection $$ \varphi : \mathbb A^1_{\mathbb C} \to V, \quad \quad t \mapsto (t,t^2,t^3), $$ and $\varphi$ turns out to be an isomorphism of affine algebraic sets. Since $\mathbb A^1_{\mathbb C}$ is irreducible, $V$ must be irreducible. This argument uses some heavy machinery, but I like it because one can circumvent trying to algorithmically determine whether $(y-x^2,z-x^3)$ is a prime ideal or not.
|algebraic-geometry|ring-theory|algebraic-curves|affine-varieties|polynomial-rings|
0
Is $\log \circ f \circ \exp$ concave when $f$ is concave, positive and increasing?
I believe that $\log \circ f \circ \exp$ should be concave if $f$ is positive, increasing and concave. My intuitive reasoning is that since it is true (in a degenerate sense) for $f(x) = x$ , a fortiori it must be true if $f$ is strictly concave. However, I'm not sure how to prove this. Any idea or counter-example?
The issue is that for $f(x)=x^a$ , $g(x)=\ln(f(e^x))=ax$ is always linear, so intuition built from power functions is biased. But just take $f(x)=x+\sqrt{x}$ : this doesn't go so well. Indeed $$f''(x)=-\dfrac 1{4x\sqrt{x}}<0,$$ so $f$ is positive, increasing and concave on $(0,\infty)$ , yet $g(x)=\ln(e^x+e^{x/2})$ satisfies $$g''(x)=\dfrac{e^{3x/2}}{4(e^x+e^{x/2})^2}>0,$$ so $g$ is strictly convex, not concave.
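The counterexample is easy to confirm numerically; here is a small sketch (function names are mine) comparing a central finite-difference estimate of $g''$ against the closed form given in the answer:

```python
import math

def g(x):
    # g(x) = log f(e^x) for the counterexample f(t) = t + sqrt(t)
    return math.log(math.exp(x) + math.exp(x / 2))

def second_difference(fn, x, h=1e-4):
    # central finite-difference approximation of fn''(x)
    return (fn(x + h) - 2 * fn(x) + fn(x - h)) / h**2
```

The estimate stays strictly positive everywhere (matching $g''(x)=e^{3x/2}/(4(e^x+e^{x/2})^2)$), while the same estimator applied to $\sqrt{x}$ itself is negative, as concavity demands.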
|derivatives|convex-analysis|
1
Retraction onto subgroup?
Let $N\subset G$ be a nontrivial subgroup of a finite group $G$ . Is there a "retraction" onto $N$ , i.e. a homomorphism $\varphi : G \rightarrow N$ s.t. $\operatorname{Im}(\varphi) = N$ , $\varphi |_N = \operatorname{id}_N$ ? It's obviously yes when $N = G$ or $N = \{e\}$ ; if no other cases are possible (I could not find them), how does one prove it? What about infinite groups? Is the statement true for abelian groups?
Isn't this equivalent to $N$ having a normal complement $K$ in $G$ , which would be the kernel of $\phi$ ? The case in which $N$ is a Sylow subgroup of a finite group $G$ is studied in transfer theory. For example, Burnside's Transfer Theorem states that if $N \in {\rm Syl}_p(G)$ and $N \le Z(N_G(N))$ , then $N$ has a normal complement. This occurs for example in dihedral groups of order $2r$ with $r$ odd and $p=2$ , or in $A_4$ with $p=3$ .
|group-theory|finite-groups|retraction|
0
Show some proofs in the set $M=\{a+x+\sqrt{a-x+x^2}|x\in N\}$
The question: Let $a \in (0, \infty)$ and the set $$M=\{a+x+\sqrt{a-x+x^2}\mid x\in \mathbb N^*\}.$$ Show that: a) if $a=1$ , then the set $M$ contains exactly $2$ rational numbers; b) if the set $M$ contains at least $2$ rational numbers, then the number $a$ is rational. The idea: a) if $a=1$ then we have to solve $1+x+\sqrt{1-x+x^2}=y$ , where $y$ is a rational number. $\sqrt{x^2-x+1}=y-x-1$ , and because $y-x-1$ is a rational number we get that $\sqrt{x^2-x+1}$ should also be a rational number $\Rightarrow x^2-x+1$ is a perfect square: $x^2-x+1=k^2$ . Maybe I can finish it by using the quadratic formula... I don't know. b) For point b we have to show that if $x^2-x+a=k^2$ has at least $2$ rational solutions, then $a$ is rational... I don't know how to move forward. I hope one of you can help me. Thank you!
For the second part (since you've already solved the first): Let the 2 values of $x$ be $m$ and $n$ . Then: $$(a+m + \sqrt{m^2-m+a}) - (a+n + \sqrt{n^2-n+a}) \in \Bbb Q \iff \sqrt{m^2-m+a} - \sqrt{n^2-n+a}\in \Bbb Q$$ Note that $\sqrt{m^2-m+a} - \sqrt{n^2-n+a}$ is non-zero since $x^2-x$ is increasing over the naturals. Then: $$\frac{n-m}{\sqrt{m^2-m+a} + \sqrt{n^2-n+a}} \in \Bbb Q \iff \sqrt{m^2-m+a} + \sqrt{n^2-n+a} \in \Bbb Q$$ Adding, $\sqrt{m^2-m+a} \in \Bbb Q$ , so this forces $a$ to be rational.
|radicals|rational-numbers|
1
Elementary central binomial coefficient estimates
How to prove that $$\frac{4^{n}}{\sqrt{4n}}<\binom{2n}{n}<\frac{4^{n}}{\sqrt{3n+1}}$$ for all $n>1$ ? Does anyone know any better elementary estimates? Attempt. We have $$\frac1{2^n}\binom{2n}{n}=\prod_{k=0}^{n-1}\frac{2n-k}{2(n-k)}=\prod_{k=0}^{n-1}\left(1+\frac{k}{2(n-k)}\right).$$ Then we have $$\left(1+\frac{k}{2(n-k)}\right)>\sqrt{1+\frac{k}{n-k}}=\frac{\sqrt{n}}{\sqrt{n-k}}.$$ So maybe, for the lower bound, we have $$\frac{n^{\frac{n}{2}}}{\sqrt{n!}}=\prod_{k=0}^{n-1}\frac{\sqrt{n}}{\sqrt{n-k}}>\frac{2^n}{\sqrt{4n}}.$$ By Stirling, $n!\approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$ , so the lhs becomes $$\frac{e^{\frac{n}{2}}}{(2\pi n)^{\frac14}},$$ but this isn't $>\frac{2^n}{\sqrt{4n}}$ for large $n$ .
You have received a lot of very nice answers, but seemingly none of them has addressed your initial question. Since $$ \frac1{4^n}\binom{2n}n=\prod_{k=1}^n\frac{2k-1}{2k}:=a_n, $$ we need to prove: $$ \frac1{\sqrt{4n}}<a_n<\frac1{\sqrt{3n+1}}\quad\text{for }n>1. $$ Observe that for $n=1$ we have $$\frac1{\sqrt{4n}}=a_n=\frac1{\sqrt{3n+1}}.$$ Since $a_{n}=a_{n-1}\cdot\frac{2n-1}{2n}$ it suffices therefore to prove for $n>1$ : $$ \sqrt{\frac{4(n-1)}{4n}}<\frac{2n-1}{2n}<\sqrt{\frac{3(n-1)+1}{3n+1}}, $$ or $$ \frac{n-1}{n}<\left(\frac{2n-1}{2n}\right)^2<\frac{3n-2}{3n+1}. $$ The left and right inequalities are equivalent to: $$ (2n)^2(n-1)<n(2n-1)^2 $$ and $$ (2n-1)^2(3n+1)<(2n)^2(3n-2) $$ respectively, both of which are trivial.
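The claimed bounds $\frac{4^n}{\sqrt{4n}}<\binom{2n}{n}<\frac{4^n}{\sqrt{3n+1}}$ for $n>1$ (with equality throughout at $n=1$) are easy to confirm numerically; this sketch compares squares so everything stays in exact integer arithmetic:

```python
from math import comb

def bounds_hold(n):
    # strict bounds, checked exactly: squaring turns 4^n / sqrt(.) comparisons
    # into integer comparisons against 16^n
    c = comb(2 * n, n)
    lower = 16**n < c * c * 4 * n          # (4^n)^2 < C(2n,n)^2 * 4n
    upper = c * c * (3 * n + 1) < 16**n    # C(2n,n)^2 * (3n+1) < (4^n)^2
    return lower and upper
```

At $n=1$ both comparisons fail because all three quantities are equal, which is why the statement is restricted to $n>1$.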
|discrete-mathematics|inequality|binomial-coefficients|approximation|
0
Prove a set is linearly independent.
$\phi : \mathbb{V} \rightarrow \mathbb{V}$ is an operator satisfying $\phi^n = 0$ for some $n$ and $\phi^{n-1} \ne 0$ . Let $v \in \mathbb{V}$ be a vector s.t. $\phi^{n-1}(v) \ne 0$ . Is the set $\{v, \phi(v), \dots, \phi^{n-1}(v)\}$ linearly independent? I intuitively understand that it is. Also I've proven that the only eigenvalue of this linear transformation is $0$ . As for proving linear independence, I'm trying to prove it by contradiction (is this the correct path?). Assume the set is not linearly independent. Then there are values $c_1, \dots, c_n$ such that: $c_1v+c_2\phi(v) +\dots +c_{n}\phi^{n-1}(v) = 0 = \phi^n(v)$ and $c_i \ne 0$ for some $i \in [n]$ . Where do I move from here?
We have $$0 = c_{1}v + c_{2}\phi(v) + \cdots + c_{n}\phi^{n-1}(v),$$ where not all the $c_{i}$ are zero. Apply $\phi^{n-1}$ to both sides. By linearity, we have \begin{align*} 0 = \phi^{n-1}(0) &= \phi^{n-1}\left(c_{1}v + c_{2}\phi(v) + \cdots + c_{n}\phi^{n-1}(v)\right)\\ &= c_{1}\phi^{n-1}(v) + c_{2}\phi^{n}(v) + \cdots + c_{n}\phi^{2n-2}(v)\\ &= c_{1}\phi^{n-1}(v), \end{align*} where the last equality follows because $\phi^{n} = 0$ (and so $\phi^{m} = 0$ for all $m \geq n$ ). But by assumption $\phi^{n-1}(v) \neq 0,$ so we must have $c_{1} = 0.$ So we actually have $$0 = c_{2}\phi(v) + \cdots + c_{n}\phi^{n-1}(v).$$ Can you use the same trick to continue from here, and arrive at a contradiction?
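A concrete instance of the argument (my own illustration): the "shift" operator on $\mathbb R^3$ satisfies $\phi^3=0$ and $\phi^2\ne 0$, and for $v=e_1$ the three vectors $v,\phi(v),\phi^2(v)$ are indeed independent, which a nonzero determinant certifies:

```python
def phi(v):
    # nilpotent shift operator on R^3: (x1, x2, x3) -> (0, x1, x2)
    x1, x2, _ = v
    return (0, x1, x2)

def det3(a, b, c):
    # determinant of the 3x3 matrix with rows a, b, c;
    # nonzero iff the three vectors are linearly independent
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))
```

With $v=(1,0,0)$ the triple is $(1,0,0),(0,1,0),(0,0,1)$, determinant $1$.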
|linear-algebra|eigenvalues-eigenvectors|linear-transformations|linear-independence|
1
How to concisely denote all the elements of a matrix as a set?
Suppose you have a matrix $A$ . Is there a "standard"/mathematically elegant way to denote all entries of the matrix as a set? For example: suppose there is a matrix $A = \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] $ ; then I would like to know how to define a function $set( \cdot)$ such that we have: $$set(A) = \{ a,b,c,d \}$$ If such notation exists, I would be interested to know what it is, and what is currently used if no such notation exists. To clarify, as we are discussing sets, the order of the elements does not matter.
As discussed in the comments, there does not exist any universal notation for such a function; however, there is machinery with which you can define your own function to do what you describe. Using your notation of a function $set(\cdot)$ and ideas discussed in the comments, we can make this notation more explicit by defining it as the function $set : \mathbb{R}^{m \times n} \rightarrow \Omega$ where: $$set(A) := \bigcup_{i,j} \{A_{ij} \} = \{x \mid x \text{ is an entry of } A \}$$ for an arbitrary matrix $A \in \mathbb{R}^{m \times n}$ , and where $\Omega$ is simply the set containing all possible sets $set(B)$ for $B \in \mathbb{R}^{m \times n}$ .
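The definition above is a one-liner in code; this sketch (the name `mat_set` is mine, to avoid shadowing the builtin `set`) takes a matrix given as a list of rows:

```python
def mat_set(A):
    # the set of all entries of A; order and multiplicity are discarded
    return {entry for row in A for entry in row}
```

Duplicates collapse automatically, matching the remark that only membership matters.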
|linear-algebra|matrices|notation|
0
Conditional expectation of random summation - How to show $E[\sum_{i=1}^{N}\xi_i|\sigma(N)]=pN$?
I am using the formal definition of conditional expectations: $E[X|\mathscr{F}]$ is any RV $Y$ such that $Y\in\mathscr{F}$ and $\int_AXdP=\int_AYdP$ for all $A\in\mathscr{F}$ . Suppose that $\xi_1,\xi_2,\dots$ are iid RV with mean $p$ , and they are independent of RV $N$ . How do I show rigorously by definition that $E[\sum_{i=1}^{N}\xi_i|\sigma(N)]=pN$ ? I understand the intuitive saying that conditioned on $\{N=n\}$ , $E[\sum_{i=1}^{N}\xi_i]=E[\sum_{i=1}^{n}\xi_i]=\sum_{i=1}^{n}E[\xi_i]=pn=pN$ . But it seems too far away from the formal definition for me.
$$\int\limits_{\{N=n\}}\sum_{i=1}^{N}\xi_i\,dP= \sum_{i=1}^{n}\int\limits_{\{N=n\}}\!\!\xi_i\,dP= \sum_{i=1}^{n}P(N=n)\int\xi_i\,dP= \,nP(N=n)\,p\,= \int\limits_{\{N=n\}}\!\!pN\,dP,$$ where the second equality uses the independence of the $\xi_i$ and $N$ .
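A Monte-Carlo illustration of the identity (not part of the proof; the Bernoulli choice for $\xi_i$, the trial count, and the seed are my own): conditioned on $N=n$, the average of $\sum_{i\le N}\xi_i$ settles near $pn$.

```python
import random

def conditional_mean(p, n, trials=20000, seed=1):
    # estimate E[sum_{i=1}^{N} xi_i | N = n] with xi_i ~ Bernoulli(p) i.i.d.,
    # drawn independently of N (here N is simply held at n)
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(n) if rng.random() < p)
    return total / trials
```

With $p=0.3$ and $n=10$ the estimate is close to $pn=3$, as the displayed computation predicts.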
|probability|measure-theory|conditional-probability|conditional-expectation|
0
Retraction onto subgroup?
Let $N\subset G$ be a nontrivial subgroup of finite group $G$ , is there a "retraction" onto $N$ , i.e. a homomorphism $\varphi : G \rightarrow N$ s.t. $ Im(\varphi) = N $ , $\varphi |_N = id_N $ . Its obviously yes when $N = G$ or $ N = \{e\}$ , if no other cases possible (I could not find them) how to proof ? What about infinite groups ? Is statement true for abelian groups ?
Your question is related to "splitting exact sequence". There are many examples: let $G = N_1 \times N_2 \times \cdots \times N_k$ , $k > 1$ and $N_i \not= \{e\}$ . Then there are natural injections $N_i \longrightarrow G$ and natural projections $p_i: G \longrightarrow N_i$ . The following proposition solves part of the problem and is easy to prove. Proposition . Let $G$ be an abelian group and $N$ is a nontrivial subgroup of $G$ . There is a retraction onto $N$ $\Leftrightarrow$ $G\cong N \times H$ for some subgroup $H$ .
|group-theory|finite-groups|retraction|
0
HoTT and isomorphisms
I have heard that Homotopy Type Theory makes it so that isomorphic objects are “equal”. I wonder how this squares with a lot of mathematical examples from Algebra and Set Theory, where the nature of the isomorphism, or a certain class of isomorphisms, and how they interact with other morphisms, is relevant. How can you do any of this math if you just say “isomorphic objects are equal” and that’s that?
In type theory identity types are allowed to have more than one element. Univalence implies that the type of proofs that two structures are equal is equivalent to the type of isomorphisms between them. This more careful phrasing is consistent with the fact that the set of isomorphisms is non trivial. E.g. if there are many isomorphisms between two structures, then the identity type will have multiple distinct elements (exactly one for each isomorphism).
|logic|type-theory|homotopy-type-theory|
0
Mixture of wine and water
I'm new here, so kindly bear with me if this question is too trivial for the forum. I tried working it out but to no avail. A solution contains wine and water in the ratio 2:1. Out of it, 36 litres of solution are replaced by 15 litres of water. Then 26 litres of solution are replaced by 26 litres of water. Now the wine-to-water ratio becomes 8:7. Find the difference between the quantity of water in the initial and the final solution. Also, why does the ratio stay the same every time we take out liquid from a mixture of liquids? (Not in this case, I mean, because water is added here.) Is there a proof of this intuitive fact?
Let Wine = $2x$ and Water = $x$ . So, total volume = $3x$ litres. Step $1$ : $36$ litres of solution are replaced by $15$ litres of water. Wine $= 2x - 24 = 2(x-12)$ , Water $= x - 12 + 15 = x + 3$ , Total Volume $= 3x - 21 = 3(x - 7)$ . Step $2$ : Now $26$ litres of solution are replaced by $26$ litres of water. Wine $= 2(x - 12) - \frac{2(x - 12)}{3(x - 7)} \cdot 26 = \frac{2(x-12)(3x-47)}{3(x-7)}$ , Water $= x + 3 - \frac{(x + 3)}{3(x - 7)} \cdot 26 + 26 = \frac{3x^2 + 40x - 687}{3(x - 7)}$ . Now we know the ratio should be $8:7$ . So we end up with $$\Rightarrow \frac{2(3x^2 -83x + 564)}{3x^2 + 40x - 687} =\frac{8}{7}$$ $$\Rightarrow \frac{3x^2 -83x + 564}{3x^2 + 40x - 687} = \frac{4}{7}$$ $$\Rightarrow 21x^2 - 581x + 3948 = 12x^2 + 160x - 2748$$ $$\Rightarrow 9x^2 - 741x + 6696 = 0$$ $$\Rightarrow 3x^2 - 247x + 2232 =0$$ $$\Rightarrow (3x - 31)(x - 72) = 0$$ Therefore, $x = \frac{31}{3}$ or $x = 72$ . $x = \frac{31}{3}$ is not possible, as we are supposed to take out $36$ litres initially. Hence, $x = 72$ . Initial water $= x = 72$ litres, and final water $= \frac{3x^2+40x-687}{3(x-7)} = \frac{17745}{195} = 91$ litres, so the required difference is $91 - 72 = 19$ litres.
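The bookkeeping for $x=72$ can be verified with exact arithmetic; this sketch replays the two replacement steps on an initially 2:1 mixture of 144 litres wine and 72 litres water:

```python
from fractions import Fraction as F

def replace(wine, water, removed, water_added):
    # draw `removed` litres of the homogeneous mixture, then pour in water
    total = wine + water
    return (wine - removed * wine / total,
            water - removed * water / total + water_added)

wine, water = F(144), F(72)                    # x = 72: wine = 2x, water = x
wine, water = replace(wine, water, F(36), F(15))
wine, water = replace(wine, water, F(26), F(26))
```

The final mixture is 104 litres wine to 91 litres water, i.e. exactly 8:7, with the water up by 19 litres from the initial 72.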
|ratio|
0
If $\alpha = \beta$, why can't the entropy-regularized Wasserstein distance equal $0$?
In optimal transportation theory, the optimal re-allocation of probability distribution $\alpha$ 's mass to another distribution $\beta$ is solved by minimizing the Wasserstein distance with respect to the transport plan. $$W (\alpha, \beta) = \min_{\pi\in \Pi(\alpha\beta)} \int c(x,y) \mathrm{d}\pi(x,y) $$ Alternatively, the relative entropy-regularized Wasserstein distance, also called Sinkhorn distance , can be used: $$W_\epsilon (\alpha, \beta) = \min_{\pi\in \Pi(\alpha\beta)} \int c(x,y) \mathrm{d}\pi(x,y) + \epsilon H(\pi \| \alpha \otimes \beta)$$ where $\epsilon$ is the regularization parameter, and relative entropy is $$H(\pi \| \alpha \otimes \beta) = \int \ln \left(\frac{\mathrm{d}\pi (x,y)}{\mathrm{d}\alpha(x) \mathrm{d}\beta(y) } \right) \mathrm{d}\pi (x,y) $$ Aude Genevay said that if you try the extreme case where both the source and target distributions are identical, $\alpha = \beta$ , then we would expect the entropy-regularized Wasserstein distance (Sinkhorn distance
Be careful: if you consider a general positive Borel cost function $c$ , the object you're calling the Wasserstein distance is really a general OT problem. It is called a Wasserstein distance only when the cost is a metric; this distinction could otherwise lead to some confusion. Indeed, the property $W(\alpha, \alpha) = 0$ is true for the Wasserstein distance because we can use the fact that $\|x-y\|=0$ means the optimal plan puts all the mass on the diagonal $\{x=y\}$ , which is saying that the marginal measures are equal (easily checked: marginalizing such a coupling gives the same measure twice). I don't see how this could hold for a general cost $c$ . Moreover, if you consider the same problem with the entropic regularizer, the same arguments hold; if you consider the cost to be some norm, you can reason the same way. Suppose there exists an optimal coupling $\gamma^*$ such that the entropic regularized Wasserstein is zero, i.e. $$ \int \|x-y\| + \epsilon\log\left(\frac{d\gamma^*}{d\alpha d\beta}\right)d\gamma^*
|statistics|probability-distributions|entropy|regularization|optimal-transport|
0
Are rational functions a vector space?
Let $\mathscr P_n[x]$ be the space of real polynomials in $x\in\mathbb R$ of degree at most $n\in\mathbb N$ , and $$\mathscr P[x] := \lim_{n\to\infty} \mathscr P_n[x]$$ the set of all real polynomials in $x$ . Then consider the set $V$ whose elements $v(x)$ are defined by $$v(x) = \frac{p(x)}{q(x)} \qquad \text{for} \quad p(x), q(x) \in \mathscr P [x],$$ with $q(x) \ne 0$ (the zero function). Is this set a vector space under addition? My naïve answer would be yes, with a basis of elements of the form $$b(x) = \frac{(x-\hat x^1)(x-\hat x^2)\cdots(x-\hat x^m)} {(x-\hat x_1)(x-\hat x_2)\cdots(x-\hat x_n)} \qquad \hat x^i, \hat x_j \in \mathbb C$$ such that the coefficients of the numerator and denominator are real, but I'm not sure whether I'm neglecting something. It certainly seems to me that both linear independence and spanning should work. Bonus: If it is a vector space, does it have a name? There is a superficially similar question to mine here , but that one is asking for fixed $q(
Putting together everything from the comments: Given an arbitrary real polynomial quotient $\frac{P(x)}{Q(x)}$ $\left(P(x), Q(x) \in \mathscr P[x]\right)$ , you can uniquely decompose it into basis functions by applying long division, to obtain $$\frac{P(x)}{Q(x)} = R(x) + \frac{\bar P(x)}{Q(x)},$$ where $R(x), \bar P(x)\in \mathscr P[x]$ with $\deg \bar P < \deg Q$ , then exploiting the unique partial-fraction decomposition of $\frac{\bar P(x)}{Q(x)}$ to write $$\frac{\bar P(x)}{Q(x)} = \sum_{i=1}^r \left[ \frac{c_{i1}}{x - b_{i}} + \dots + \frac{c_{il_i}}{(x - b_{i})^{l_i}}\right] + \sum_{j=1}^s \left[ \frac{d_{j1} x + e_{j1}}{x^2 + p_{j} x + q_{j}} + \dots + \frac{d_{jt_j} x + e_{jt_j}}{(x^2 + p_{j} x + q_{j})^{t_j}} \right],$$ for some real coefficients $c, d, e$ to be determined. This holds for any $Q(x)$ , which has been here maximally factorised in the reals to the f
|linear-algebra|vector-spaces|rational-functions|
1
General polynomial equation I've made
For quite a while I have worked, from time to time, on a way to write down a general $n$th-degree polynomial equation, and I think it'd be great to show it to others, as I believe it doesn't have any mistakes and is as simple as it can get. So here is the general equation that I ask you to review: $$v_{0}+\sum_{i=1}^d v_{i}x^{i}=0$$ where $d$ is the degree of the equation and the $v_i$ are coefficients.
You're correct: that's indeed what a general polynomial equation looks like. A couple of remarks: you can absorb the leading term and write the whole left-hand side as $\sum_{i=0}^d v_i x^i$ , since $x^0 = 1$ , which simplifies it a little further. Also, $v$ is not a single coefficient: there are actually $d+1$ of them, namely $v_0, \dots, v_d$ . Also note that $d$ could actually be larger than the degree of the equation: choose for example $d = 2$ , $v_0 = 1, v_1 = 1, v_2 = 0$ . This choice results in the equation $1 + x + 0x^2 = 0$ , which has degree $1$ .
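The left-hand side $v_0+\sum_{i=1}^d v_i x^i$ can be evaluated with Horner's rule; a small sketch, with the $d+1$ coefficients $v_0,\dots,v_d$ stored in ascending order:

```python
def eval_poly(v, x):
    # Horner's rule: v[0] + v[1] x + ... + v[d] x^d
    result = 0
    for c in reversed(v):
        result = result * x + c
    return result
```

For the degree-1 example from the answer, `eval_poly([1, 1, 0], x)` computes $1 + x + 0x^2$, and a root of the equation is any $x$ where this returns $0$.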
|algebra-precalculus|
1
Example of two non-isomorphic, non-trivial rings with the same underlying group
I'm looking for two rings that have the same additive group, but the multiplication is defined differently. For instance, if we have $(\mathbb{Z}_6,+)$ as the additive group, we naturally have the ring $(\mathbb{Z}_6,+,\cdot)$. But can we define multiplication differently so that it still satisfies the ring axioms with some other operation? This new ring, call it $(\mathbb{Z}_6,+,*)$ should not be isomorphic to $(\mathbb{Z}_6,+,\cdot)$. I found two trivial examples. The identity ring works out, but again, it's trivial. The other one I found was the opposite ring, which in the non-Abelian case is just defined as switching the order of a given ring. Except that's isomorphic to the original ring, so it's not really what I'm looking for. Are there any two rings that satisfy this property? Any finite rings? Given an arbitrary ring, can we find another non-isomorphic ring with the same group, or does the first ring need to satisfy specific properties to allow this?
Crostul's answer even gives two non-isomorphic fields with isomorphic additive groups. Edit. I thought that there could be an example with finite fields. The idea was that in $K=\mathbb Z/5\mathbb Z$, neither $2$ nor $3$ is a square. Then $K[\sqrt 2]$ and $K[\sqrt 3]$ have isomorphic additive groups (isomorphic to $K\times K$) but they are not isomorphic as rings. However, as @Crostul pointed out, two finite fields with the same number of elements are always isomorphic.
|abstract-algebra|group-theory|ring-theory|
0
Retraction onto subgroup?
Let $N\subset G$ be a nontrivial subgroup of a finite group $G$. Is there a "retraction" onto $N$, i.e. a homomorphism $\varphi : G \rightarrow N$ s.t. $\operatorname{Im}(\varphi) = N$ and $\varphi |_N = \operatorname{id}_N$? The answer is obviously yes when $N = G$ or $N = \{e\}$; if no other cases are possible (I could not find any), how does one prove that? What about infinite groups? Is the statement true for abelian groups?
Consider the statement: There is a retraction $\varphi:G\to N$. If $N$ is proper nontrivial, then $\ker(\varphi)$ is a proper, nontrivial, normal subgroup. In particular pick any simple group $G$ of nonprime order, and any of its proper nontrivial subgroups (which exist by the fact that $G$ has composite order). It will be a counterexample. This won't be true even for abelian groups. Take $G=\mathbb{Z}_{p^n}$ for prime $p$. It is not decomposable as a direct product of nontrivial groups, and as we will see soon those conditions are equivalent for abelian groups. More generally, if there is a retraction $\varphi:G\to N$ then we have a short exact sequence $$1\to \ker(\varphi)\to G\to N\to 1$$ which is right split, and so by the splitting lemma for groups our $G$ is a semidirect product of $\ker(\varphi)$ and $N$. Conversely any semidirect product will induce such a retraction. In particular if $N$ is normal (e.g. when $G$ is abelian) then there is a retraction $G\to N$ if and only if $N$ is a direct factor of $G$.
|group-theory|finite-groups|retraction|
0
Successively attaching maps vs attaching all maps
In the definition of a CW complex we attach cells. If there are finitely many $n$-cells, I wonder whether there is a difference between attaching them successively and attaching them all at once. More precisely and abstractly: let $i \colon A \to A'$, $j \colon B \to B'$ be two monomorphisms, and $f \colon A\to X, g \colon B \to X$ be morphisms. We then form the pushouts $X_A:= X \coprod_{A}A'$ and $X_{B} : = X\coprod_{B} B'$, and then form $X' = X_A\coprod_{X}X_{B}$. Is it true that \begin{equation} X' \simeq X\coprod_{A\coprod B} (A' \coprod B')? \end{equation}
This is true. To show it, pick a topological space $Y$. We will show that $\mathsf{Top}(X',Y)\cong\mathsf{Top}(X\coprod_{A\coprod B} (A' \coprod B'),Y)$ naturally in $Y$. A map $X'\to Y$ is the data of two maps $f_1\colon X_A\to Y$ and $f_2\colon X_B\to Y$ that agree on their restrictions to $X$. These maps in turn are the data of maps $f_{11}\colon A'\to Y$, $f_{12}\colon X\to Y$ that agree on their restriction to $A$, and of maps $f_{21}\colon B'\to Y$ and $f_{22}\colon X\to Y$ that agree on their restriction to $B$, and additionally we must require $f_{12}=f_{22}$ (this comes from the previous requirement that $f_1$ and $f_2$ agree on their restrictions to $X$). We can combine these maps into maps $g_1\colon A'\coprod B'\to Y$ and $g_2\colon X\to Y$, with $g_1=f_{11}\coprod f_{21}$ and $g_2=f_{12}=f_{22}$. These two maps satisfy precisely that their restrictions to $A\coprod B$ agree; there are no further relations. But this is precisely the data of a map $X\coprod_{A\coprod B}(A'\coprod B')\to Y$.
|algebraic-topology|category-theory|
1
Approximating a random variable by a sequence of random variables
Consider the triangular hat function: \begin{equation} \varphi(x) = \begin{cases} 1 - |x|, & \text{if } x \in [-1, 1], \\ 0, & \text{otherwise.} \end{cases} \end{equation} It is well known that $\varphi$ is the pdf of $$Y = X_1 + X_2 - 1,$$ where each $X_i$ is a uniformly distributed random variable on $[0,1]$. I am interested in the following question: is it possible to find a sequence of random variables $X_n$, with smooth pdfs supported on $[-1,1]$, such that the pdfs of $X_n$ converge to the pdf of $Y$? Edit: The motivation is that this function has shown up in a different non-probabilistic problem. In that context, I am looking for a preferably smooth function approximating $\varphi$. The motivation for finding a sequence is that I can use concentration inequalities to sharply bound error estimates.
Instead of the sequence given in @geetha290krm's answer, it may be easier to work with $$U_n=\frac{n-1}{n}Y+\frac Z n$$ where $Z$ can have any arbitrary smooth distribution on the interval $(-1,1)$, and is independent of $Y$. In this case, random sampling from $U_n$ is easier. Also, the pdf of $U_n$ is given by $$f_{U_n}(x)=\int_{-1}^{1}f_{\frac Z n}(x-t) \,\text{d}F_{\frac{n-1}{n}Y}(t), \quad x \in [-1,1].$$
|probability|approximation-theory|
0
Finding the conditional probability of a pdf $f(x,y)=1/y$
I am wondering if I did this right: find the conditional pdf and $E(Y|X=x)$ for the following: $f(x,y)=1/y$ for $0<x<y<1$. I think the domain confuses me. I know that $f(y|x)=f(x,y)/f_X(x)$, so I know that $f_X(x)=\int (1/y)\, dy$ for $y=x$ to $y=1$, so I get $f_X(x)=\ln(1)-\ln(x)=\ln(x)$, and I got $f(y|x)=(1/y)/\ln(x)$. Then I got $E[Y|X=x]= \int y\,(1/y)/\ln(x)\,dy= \int 1/\ln(x)\,dy$, so $E[Y|X=x]=y/\ln(x)$, where we then plug in $y=x$ to $y=1$, but I am not sure if this is right.
Let us write all details to avoid mistakes. Let $x\in (0,1)$ be given, then for $\color{blue}{y\in (x,1) }$ , we have $$\color{blue}{f_{Y|X=x}(y)}=\frac{f(x,y)}{f_X(x)}=\frac{\frac{1}{y}}{\int_{x}^{1}\frac{1}{y}\text{d}y}=\frac{-1}{y\ln x}=\color{blue}{\frac{1}{y\ln(\frac{1}{ x})}}.$$ Next, $$\color{blue}{\mathbb E (Y|X=x)}=\int_{-\infty}^{+\infty}yf_{Y|X=x}(y)\text{d}y=\int_{x}^{1}y\frac{1}{y\ln(\frac{1}{ x})}\text{d}y=\color{blue}{\frac{1-x}{\ln(\frac{1}{ x})}}.$$
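As a numerical sanity check of the boxed results (the sample point $x=0.3$ and the midpoint rule below are arbitrary choices, not part of the answer):

```python
import math

def cond_exp(x, steps=100_000):
    """Midpoint-rule approximation of E(Y | X = x) = ∫_x^1 y · 1/(y ln(1/x)) dy."""
    h = (1 - x) / steps
    total = 0.0
    for k in range(steps):
        y = x + (k + 0.5) * h
        total += y * (1 / (y * math.log(1 / x))) * h
    return total

x = 0.3
numeric = cond_exp(x)
closed_form = (1 - x) / math.log(1 / x)   # the answer's (1-x)/ln(1/x)
```

The integrand is constant in $y$, so the quadrature agrees with the closed form essentially to machine precision.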
|probability|
1
$\mathcal M = \{z \in \mathbb C | |z|=r \}$ , where $r \in \mathbb R , r >0$
The statement of the problem: Consider the set $\mathcal M = \{ z \in \mathbb C \mid |z|=r\}$, where $r \in \mathbb R , r >0$. a) Prove that there exist $a,b \in \mathcal M, a \neq b$, such that $a+b \in \mathcal M$. b) Find the values of the positive integer $n \geq 2$ for which there exists a subset $\mathcal S \subseteq \mathcal M$, with $n$ elements, so that $u+v \in \mathcal M , \forall u,v \in \mathcal S, u \neq v$. My approach: First of all, the set $\mathcal M$ consists of all the affixes of the points located on the circle centred at the origin with radius $r$. To prove point a), I chose a point on the axis $Oy$ with coordinates $(0,\frac{r}{2})$, then I took the parallel to the $Ox$ axis through it. It is obvious that this will intersect the circle in two points, symmetric about the $Oy$ axis. According to the parallelogram rule, the sum of these affixes will be $(0,r)$, which is on the circle, so I have proved point a); below I put a picture that suggests my reasoning.
The key idea is to use the position vectors of the points on the circle. Two vectors with magnitude $r$ will add up to a vector with magnitude $r$ iff the angle between the vectors is 120°. So the angle between the position vectors of any pair of points in $\mathcal{S}$ should be 120°. Also, as the sum of the angles about any point (the origin of the argand plane, in our case) is 360°, we can have at most three points in $\mathcal{S}$. As a particular example, for $n = 3$, $\mathcal{S} = \{r, r\omega, r{\omega}^2\}$ works, where $\omega$ is a primitive complex cube root of unity.
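A quick numeric check of the 120° claim (the radius $r=2$ below is an arbitrary test value):

```python
import cmath
import itertools

r = 2.0
w = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity
S = [r, r * w, r * w**2]

# every pairwise sum of distinct elements should again have modulus r,
# since 1 + w + w^2 = 0 makes each sum a rotation of -r by a cube root of unity
moduli = [abs(u + v) for u, v in itertools.combinations(S, 2)]
```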
|complex-numbers|
0
$\mathcal M = \{z \in \mathbb C | |z|=r \}$ , where $r \in \mathbb R , r >0$
The statement of the problem: Consider the set $\mathcal M = \{ z \in \mathbb C \mid |z|=r\}$, where $r \in \mathbb R , r >0$. a) Prove that there exist $a,b \in \mathcal M, a \neq b$, such that $a+b \in \mathcal M$. b) Find the values of the positive integer $n \geq 2$ for which there exists a subset $\mathcal S \subseteq \mathcal M$, with $n$ elements, so that $u+v \in \mathcal M , \forall u,v \in \mathcal S, u \neq v$. My approach: First of all, the set $\mathcal M$ consists of all the affixes of the points located on the circle centred at the origin with radius $r$. To prove point a), I chose a point on the axis $Oy$ with coordinates $(0,\frac{r}{2})$, then I took the parallel to the $Ox$ axis through it. It is obvious that this will intersect the circle in two points, symmetric about the $Oy$ axis. According to the parallelogram rule, the sum of these affixes will be $(0,r)$, which is on the circle, so I have proved point a); below I put a picture that suggests my reasoning.
I take $r=1$, which doesn't change the generality of the problem. Let $p,q \in \mathbb R$ be such that $e^{i p}=u\neq v=e^{iq}$. Then $$u+v\color{red}{\textbf=}2\cos(\frac{p-q}2)e^{i\frac{p+q}2}\in \mathcal {M}\iff p-q=\pm\frac{2\pi}3$$ $\color{red}{\text{based}}$ on trigonometry formulas. Then $v=ju$ or $v=j^2u$, with $ j:=e^{\frac{2i\pi}{3} }$. Let us then have a third point $w$ that meets our requirements: we must have $w=j^2u$ or $w=ju$. Our reasoning shows that the sets we are looking for belong to $$\{\color{red}{\{}\{u,ju\},\{u,j^2u\},\{u,ju,j^2u\}\color{red}{\}}, \text{with } u\in\mathcal M\}$$ The set of possible values of $n$ is $\{2,3\}$.
|complex-numbers|
1
If $M$ is a finitely generated module over a Noetherian ring $A,$ then $\widehat{\mathfrak{a}M}=\hat{\mathfrak{a}} \hat{M}$
Let $M$ be a finitely generated module over a Noetherian ring $A$ . Let $\mathfrak{a}\subset A$ be an ideal and denote $\hat{M}$ to the $\mathfrak{a}$ -adic completion of $M$ . Then how can I show that $\widehat{\mathfrak{a}M}=\hat{\mathfrak{a}} \hat{M}$ ? I know that $\hat{A}$ is a flat $A$ -module. I need some help. Thanks.
$\def\a{\mathfrak{a}}$ There's no need for Noetherianity assumptions. The result is still true for an arbitrary ring $A$ and an arbitrary $A$ -module $M$ , as long as $\mathfrak{a}$ is a finitely generated ideal . Indeed, one has inclusions of $\hat{A}$ -submodules of $\hat{M}$ $$ \a\hat{M}\subset\hat{\a}\hat{M}\subset\widehat{\a M}, $$ so it suffices to see that the left module equals the right one. Let $f_1,\dots,f_r\in A$ be generators of $\a$ . We have a surjection $A^{\oplus r}\xrightarrow{(f_1,\dots,f_r)}\a M$ . By 0315 (2), we have a surjection $\smash{\hat{A}}^{\oplus r}\xrightarrow{(f_1,\dots,f_r)}\widehat{\a M}$ . But the image of $\smash{\hat{A}}^{\oplus r}\xrightarrow{(f_1,\dots,f_r)}\hat{M}$ is $\a \hat{M}$ .
|abstract-algebra|commutative-algebra|modules|tensor-products|
0
I want to know the behavior of the solution without solving the ordinary differential equation for enzyme-substrate reactions.
I want to know the behavior of the solution without solving the ordinary differential equations for the following enzyme-substrate reactions. This chemical reaction represents the process by which substrate S is transformed into product P by the action of enzyme E. In detail, the reaction takes place as follows: first, the enzyme E and S become the enzyme-substrate complex ES; next, the enzyme-substrate complex ES splits into the enzyme E and the product P. We also assume that the initial concentration of E is >0. Here, $k_1,k_2,k_3>0$, and [S] is the concentration of S, and the others similarly represent the concentrations of the substances sandwiched between brackets. ES represents a complex of E and S. My question: Perhaps from the first equation, we expect to be able to say that "the concentration of substrate S becomes zero after a sufficient time interval", but can we prove this without solving the differential equations?
I'll use $T$ instead of $[ES]$ and drop all brackets. I'll also rescale $t$ so that $k_1=1$. The steady state equations read $$\tag{1} 0=-SE+k_2T \\ 0=-SE +k_2 T +k_3T \\ 0=SE-k_2T-k_3T\\ 0=k_3T $$ from which we learn that $T\to 0$ and hence the product $SE\to 0$ as $t\to \infty$. For all $t$, a conserved quantity is $$\tag{2} E(t)+T(t)=H $$ With an initial condition $E(0)\gt0$ it follows that $H>0$, as all concentrations are non-negative. Evaluating (2) for large time demonstrates that $$ E\to H >0 \qquad , \qquad t\to\infty $$ So the only way to satisfy the product $SE\to0$ is $S\to0$ as $t\to\infty$.
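A small numerical experiment supports the conclusion. The sketch below integrates the system with a hand-rolled RK4 step (the rate constants, initial concentrations, step size and time horizon are arbitrary test choices, with $k_1$ rescaled to $1$ as in the answer):

```python
def rhs(y, k2=1.0, k3=1.0):
    # S' = -SE + k2 T,  E' = -SE + (k2+k3) T,  T' = SE - (k2+k3) T,  P' = k3 T
    S, E, T, P = y
    dS = -S * E + k2 * T
    dE = -S * E + (k2 + k3) * T
    dT = S * E - (k2 + k3) * T
    dP = k3 * T
    return (dS, dE, dT, dP)

def rk4(y, h):
    k1 = rhs(y)
    k2 = rhs(tuple(yi + h / 2 * ki for yi, ki in zip(y, k1)))
    k3 = rhs(tuple(yi + h / 2 * ki for yi, ki in zip(y, k2)))
    k4 = rhs(tuple(yi + h * ki for yi, ki in zip(y, k3)))
    return tuple(yi + h / 6 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

y = (1.0, 0.5, 0.0, 0.0)   # S(0)=1, E(0)=0.5, T(0)=P(0)=0
H = y[1] + y[2]            # conserved: E + T
h = 0.01
for _ in range(20_000):    # integrate to t = 200
    y = rk4(y, h)
S_final, E_final, T_final, P_final = y
```

Along the trajectory the linear invariants $E+T$ and $S+T+P$ are preserved, and $S$ is driven to (numerically) zero, as the steady-state argument predicts.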
|ordinary-differential-equations|chemistry|
1
Proof of Doob's Maximal Inequality for Positive Supermartingales
I am working through a course on Stochastic Processes and am looking for a proof verification for an alternative to the one that I have been presented. My proof, as noted below, currently contains a gap. However, it is unclear to me whether or not this is an easily resolvable gap or if I have to refer to an alternative proof. Let $(X_n)_{n \ge 0}$ be a positive supermartingale. Then Doob's Maximal Inequality states that for any $\lambda \ge 0$, we have the following: $$ \lambda \mathbb{P} \big{(} \max _{k \ge 0} X_k \ge \lambda \big{)} \leq \mathbb E(X_0)$$ My proof attempt uses the fact that by Doob's Optional Stopping Theorem, if we can find a stopping time $T$, then for any martingale, $\mathbb{E}(X_T) \leq \mathbb E (X_0)$ under the assumption that one of the following holds: (1) $T$ is bounded; (2) $X$ is bounded and $T$ is finite; (3) $\mathbb E (T)<\infty$ and for some $K > 0$, we have that $|X_n - X_{n-1}| \leq K$ for all $n$. The gap in my proof is that I can't see how any of these conditions apply in this case.
To show that $\mathbb{E}[X_T] \le \mathbb{E}[X_0]$ , note that we have $\mathbb{E}[X_{T \wedge n}] \le \mathbb{E}[X_0]$ for all $n$ because $T \wedge n$ is a bounded stopping time. Then, since $X_{T \wedge n} \ge 0$ , Fatou's lemma gives \begin{align*} \mathbb{E}[X_T] &= \mathbb{E}[\lim X_{T \wedge n}] \\ &\le \liminf \mathbb{E}[X_{T \wedge n}] \\ &\le \mathbb{E}[X_0]. \end{align*}
|probability-theory|inequality|solution-verification|stochastic-processes|martingales|
1
The order of functions within the integral when finding the area between two curves?
I am trying to find the area between $x^2 = 8y$ and $x-2y+8=0$ . I know I am supposed to use $\int_{a}^{b} (f(x) - g(x)) \, dx$ to find the area. However how do I determine which one to choose as $f(x)$ and which one as $g(x)$ ?
You would want the function with the bigger value to go first. If there is not one function which always has the bigger value, you'd split the integral up into sub-intervals, in each of which one function is bigger. You can do that by, e.g., looking for the zeros of $f(x)-g(x)$, since on the intervals between those zeros the functions cannot switch order (if they are continuous).
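Applied to the question's curves: between the intersection points $x=-4$ and $x=8$, the line $y=(x+8)/2$ lies above the parabola $y=x^2/8$, so it goes first. A short exact computation (the antiderivative is worked out by hand; `Fraction` just keeps the arithmetic exact):

```python
from fractions import Fraction as Frac

def antiderivative(x):
    # ∫ ((x+8)/2 - x^2/8) dx = x^2/4 + 4x - x^3/24
    x = Frac(x)
    return x**2 / 4 + 4 * x - x**3 / 24

# intersections of x^2 = 8y and x - 2y + 8 = 0:
# x^2/8 = (x+8)/2  =>  x^2 - 4x - 32 = 0  =>  x = -4, 8
area = antiderivative(8) - antiderivative(-4)
```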
|calculus|integration|area|
0
N-th Derivative of $\sin(x)/x$
How would one go about obtaining a closed form for the $n$-th derivative of $\frac{\sin(x)}{x}$ I took a few of the derivatives but didn't see any immediate pattern. There may be some obvious thing I'm missing, or overthinking it.
The purpose of this post is to detail the derivation of the $n$-th derivative of the following function evaluated at any $x \in \mathbb R$: $ f(x)=\begin{cases} \frac{\sin(x)}{x} \quad&\text{if $x \neq$ 0}\\1 \quad &\text{if $x=0$} \end{cases}$ If you just want the answer without all of the underlying work (for which there is a considerable amount): For $\displaystyle x \neq 0: f^{(n)}(x)=\sum_{i=0}^n(-1)^{i-n}\frac{n!}{i!}x^{i-n-1}\sin^{(i)}(x)$ For $\displaystyle x =0: f^{(n)}(0)=\begin{cases} \frac{(-1)^{\frac{n}{2}}}{n+1} \quad&\text{ if $n$ is even} \\0 \quad &\text{ if $n$ is odd} \end{cases}$ Having stated the above results, we will now march forward with the derivation. The following post will be broken up into several sections. First we will show how we arrive at a 'good guess' for the $n$th derivative at $x \neq 0$. Second we will formally prove (via induction) that this guess is correct. Third we will prove (once again with induction) the formulas associated with the case $x=0$.
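For the $x=0$ case, the claimed values can be cross-checked exactly against the Maclaurin series $\frac{\sin x}{x}=\sum_{k\ge 0}\frac{(-1)^k x^{2k}}{(2k+1)!}$, since $f^{(n)}(0)$ is $n!$ times the coefficient of $x^n$. A small sketch (not part of the original derivation):

```python
from fractions import Fraction as F
from math import factorial

def f_deriv_at_0(n):
    """n-th derivative of sin(x)/x at 0, read off the Maclaurin series:
    the coefficient of x^n is (-1)^(n/2)/(n+1)! for even n, and 0 for odd n."""
    if n % 2 == 1:
        return F(0)
    return F((-1) ** (n // 2) * factorial(n), factorial(n + 1))

vals = [f_deriv_at_0(n) for n in range(6)]   # 1, 0, -1/3, 0, 1/5, 0
```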
|calculus|
0
The number of relations over a set
I need to calculate the number of relations over $A$, when the size of $A$ is $n$, and I want to understand why my approach is not correct. I denoted by $A_i$ a subset of $A$, and I said a general relation is something of the type $A_i \times A_j$, where $i$ can be equal to $j$. The number of subsets of $A$ is $2^n$. So $A_i \times A_j$ has $2^n \times 2^n$ different combinations. Therefore the number of relations should be $2^{2n}$, but it's wrong. Why? Here's the correct answer: $2^{n^2}$.
Your question has two aspects: why is the right answer right, and why is yours wrong. The number of possible subsets of a set $S$ with $n$ elements is $2^{n}$; this collection of subsets is often referred to as the power set of $S$. The fact that $P(S)$ has $2^n$ elements is readily seen when you list the possible subsets as binary codes. A relation on $S$ is a set of ordered pairs drawn from $S \times S$. You can think of it as a matrix of dimensions $n \times n$ with binary entries denoting which pairs of elements are in the relation. The number of possible relations is the number of ways that matrix can be filled. If you consider the matrix cells as a set itself, it has $n \times n$ cells, and its power set has $2^{n \times n}=2^{n^2}$ elements. Your approach has (greatly) undercounted the possible relations. This is because you assume relations include all possible pairs in your Cartesian products $A_i \times A_j$. Again, to use the array visualization, if you examine for example s
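A brute-force enumeration for small $n$ confirms the count $2^{n^2}$, and shows $2^{2n}$ undercounting already at $n=3$. The bitmask encoding below mirrors the binary-matrix picture in the answer:

```python
from itertools import product

def count_relations(n):
    pairs = list(product(range(n), repeat=2))   # the n·n possible ordered pairs of A × A
    relations = set()
    for mask in range(2 ** len(pairs)):         # each bitmask picks one subset of pairs
        rel = frozenset(p for i, p in enumerate(pairs) if mask >> i & 1)
        relations.add(rel)
    return len(relations)

n2, n3 = count_relations(2), count_relations(3)
```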
|combinatorics|relations|combinatorial-proofs|
0
Is every unitary ring finitely generated?
I'm puzzled by the following: if $R$ is a unitary ring then $R$ is generated by $1_R$ , denoted as $$R = \langle 1_R \rangle. $$ can we conclude that every unitary ring is finitely generated? I know the answer is no, as $\mathbb{Q}$ is not finitely generated, although we can express $\mathbb{Q}$ as $\langle 1 \rangle$ . Can somebody help explain what's causing my confusion?
Your problem lies in the fact that you are confusing being finitely generated as an ideal and being finitely generated as a module. Taking any ring $R$ , it is true that $R=(1)$ as an ideal, which is equivalent to say that $R=\langle 1\rangle _R$ as an $R$ -module. So $\mathbb{Q}=\langle 1\rangle _\mathbb{Q}$ as a $\mathbb{Q}$ -module (i.e. as an ideal and thus, according to your definition, as a ring). However $\mathbb{Q}$ is also a $\mathbb{Z}$ -module. If you consider $\mathbb{Q}$ with this structure, as you correctly pointed out, it is not finitely generated. But this means that it is not finitely generated as a $\mathbb{Z}$ -module, not as an ideal ( $=\mathbb{Q}$ -module).
|abstract-algebra|ring-theory|modules|finitely-generated|
1
Integrating $\int\frac{\cos(\omega t)\gamma e^{-\gamma t}}{\omega}dt$
How to integrate $$\int\frac{\cos(\omega t)\gamma e^{-\gamma t}}{\omega}dt,$$ where $\gamma, \omega \neq 0$. I tried using the substitution $u=\omega t$, $du=\omega dt$ and got $\frac{1}{\omega^2} \int \cos(u) \gamma e^{-\frac{\gamma u}{\omega}} du$, and then integrating by parts, but so far it seems like an endless cycle of substitutions and integrations by parts, and I can't get to something meaningful. Is there any easy way to compute this? Edit: Using one of the hints, I arrived at $\frac\gamma\omega\mathrm{Re}\left[\frac{\mathrm{e}^{(i\omega-\gamma)t}}{i\omega-\gamma}+C\right]$, but not sure how to go on from here. Have I made any errors here (I am not sure about the denominator)?
In general, for $a,b$ with $a^2+b^2\neq0$, $$ \begin{align} I=\int\cos(ax)e^{bx}\,dx&={\sin(ax)\over a}e^{bx}-{b\over a}\int\sin(ax)e^{bx}\,dx\\ &={\sin(ax)\over a}e^{bx}+{b\cos(ax)\over a^2}e^{bx}-{b^2\over a^2}\int \cos(ax)e^{bx}\,dx\\&={e^{bx}(a\sin(ax)+b\cos(ax))\over a^2}-{b^2\over a^2}I \end{align} $$ which on rearrangement gives $$ \int\cos(ax)e^{bx}\,dx={e^{bx}\over a^2+b^2}(b\cos(ax)+a\sin(ax))+C $$ Edit: with $a=\omega$ and $b=-\gamma$, $$\int{\cos(\omega t)\gamma e^{-\gamma t}\over \omega}\,dt ={\gamma\over\omega}\int\cos(\omega t)e^{-\gamma t}\,dt ={\gamma\over\omega}{e^{-\gamma t}\over \omega^2+\gamma^2}(\omega\sin(\omega t)-\gamma\cos(\omega t))+C$$
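One can confirm the closed form by differentiating it back numerically: the derivative of the claimed antiderivative should match $\cos(ax)e^{bx}$ pointwise (the parameter values, sample points and finite-difference step below are arbitrary test choices):

```python
import math

a, b = 2.0, -0.7

def antideriv(x):
    # claimed antiderivative of cos(a x) e^{b x}
    return math.exp(b * x) / (a**2 + b**2) * (b * math.cos(a * x) + a * math.sin(a * x))

def integrand(x):
    return math.cos(a * x) * math.exp(b * x)

h = 1e-5
# central-difference derivative of the antiderivative vs. the integrand
errs = [abs((antideriv(x + h) - antideriv(x - h)) / (2 * h) - integrand(x))
        for x in (-1.0, 0.0, 0.5, 2.0)]
```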
|calculus|integration|indefinite-integrals|
1
Intersection of Interiors contains Interior of Intersection
I'm teaching myself topology using a book I found. This is the third part of a 4-part question. Links to other parts: one , two . I'm trying to prove the following problem from the book: Let $X$ be a topological space and let $\mathscr{A}$ be a collection of subsets of $X$. Prove $int( \bigcap \limits_{A \in \mathscr{A}} A)\subseteq \bigcap \limits_{A \in \mathscr{A}} int(A)$. I want to know if my proof is valid, and before I prove it I will give an example where the inclusion is strict, i.e. $int( \bigcap \limits_{A \in \mathscr{A}} A)\subset \bigcap \limits_{A \in \mathscr{A}} int(A)$. Let $\mathscr A = \{[-1, \frac{1}{n}] : n \in \mathbb N\}$. Then $int( \bigcap \limits_{A \in \mathscr{A}} A) = (-1,0)$ and $\bigcap \limits_{A \in \mathscr{A}} int(A)= (-1,0]$. Proof of $int( \bigcap \limits_{A \in \mathscr{A}} A)\subseteq \bigcap \limits_{A \in \mathscr{A}} int(A)$: To prove this, I will use 3
Your proof is valid. The following one is an alternative. Let $B:=\bigcap\limits_{A\in\mathscr A}A$ . If a point $x$ belongs to $\operatorname{int}(B)$ , i.e. if $B$ is a neighborhood of $x$ , then every superset of $B$ also is, hence for every $A\in\mathscr A$ , $A$ is a neighborhood of $x$ , i.e. $x\in\operatorname{int}(A)$ . Therefore, $x\in\bigcap\limits_{A\in\mathscr A}\operatorname{int}(A)$ .
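Both the original proof and the alternative can be checked exhaustively on a small finite example (the three-point topology below is an arbitrary choice; for every pair of subsets, the interior of the intersection sits inside the intersection of the interiors):

```python
from itertools import chain, combinations

X = {1, 2, 3}
# a topology on X: contains ∅ and X, closed under unions and intersections
topology = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset(X)]

def subsets(s):
    s = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def interior(A):
    # union of all open sets contained in A (the empty set always qualifies)
    return frozenset().union(*(U for U in topology if U <= A))

ok = all(interior(A & B) <= interior(A) & interior(B)
         for A in subsets(X) for B in subsets(X))
```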
|general-topology|proof-verification|
0
If $L$ is regular, is $L' = \{xz \mid \exists y, y \in \Sigma^* \text{ such that } |x|=|y|=|z|\text{ and }xyz \in L\}$ regular?
I know that if the condition $|x|=|y|=|z|$ is relaxed, then we get another regular set, as is shown by the construction in this question or this one. But I am not able to solve the case when the condition on the lengths of $x,y,z$ is imposed.
The answer is no, and was first given in [1]. Here is a slightly modified version of their argument. Let $L = a^*ba^*ba^*$ and let $L'=\{xz \mid \exists y \in \{a,b\}^* \text{ such that } |x| = |y| = |z| \text{ and } xyz \in L\}$. Suppose that $L'$ is regular. Then so is its intersection with the regular language $a^*bba^*$. Now, if you want to remove the middle part of $a^iba^jba^k$ and get two consecutive $b$ 's, you have to start with a word of the form $a^nba^{n+1}ba^n$. It follows that $L' \cap a^*bba^* = \{a^nbba^n \mid n \geqslant 0\}$ and I let you verify that this language is not regular. [1] R. E. Stearns and J. Hartmanis, Regularity preserving modifications of regular expressions, Information and Control 6 (1963), 55–69.
|automata|regular-language|
1
Topology: Basic Queston re: Moore-Smith Convergence
I am currently studying three classic topology books: (1) "Introduction To Topology" (Gamelin & Greene, 1975), (2) (same title by Bert Mendelson, 1958), and (3) "General Topology" (John Kelley, 1955). Of these, only the latter by Kelley even mentions the term Moore-Smith Convergence, also covering related topics such as directed sets and nets. An entire chapter is dedicated to these concepts. Since the other books don't even mention them at all, I'm wondering if perhaps the teaching approach has shifted over time and now the same basic ideas are adequately addressed through a combination of Cauchy sequences, sequential and uniform convergence, and function spaces. Am I on track with this thinking or not? If not, why do Moore-Smith Convergence and related concepts seem to be rather neglected as of late?
In terms of a general understanding of Moore-Smith convergence as a whole, this motto may help: A net is what we need instead of a sequence, when we are dealing with a topological space instead of a metric space. (Not meant literally, just a gloss, so to speak.) Gemignani, "Elementary Topology", 1967, introduces this subject with the proposition that if A is a subset of a metric space X, then a point in X is in the closure of A, if and only if there is a sequence in A converging to the point. He then gives an example of a subset of a topological space where a point in the closure is not the limit of any sequence in the set. Eventually he shows that if A is a subset of a topological space X, then a point is in the closure of A if and only if there is a net in A converging to the point.
|general-topology|
1
How does one perform induction on integers in both directions?
On a recent assignment, I had a question where I had to prove a certain statement to be true for all $n\in\mathbb{Z}$. The format of my proof looked like this: Statement is true when $n=0$; "Assume statement is true for some $k\in\mathbb{Z}$"; Statement must be true for $k+1$; Statement must be true for $k-1$. My professor said the logic is flawed because of my second bullet point above. She says that since mathematical induction relies on the well-ordering principle and since $\mathbb{Z}$ has no least or greatest element, using induction is invalid. Instead, she says my argument should be structured like this: Statement is true when $n=0$; "Assume statement is true for some integer $k\geq0$"; Statement must be true for $k+1$; "Assume statement is true for some integer $k\leq0$"; Statement must be true for $k-1$. I am failing to understand where my logic fails and why I need to split the assumptions like she is suggesting. Could someone explain why relying on the well-ordering principle makes my version invalid?
Let's look at induction on the natural numbers. You can prove $P(k)$ for any $k \in \mathbb{N}$ if you can prove: (1) $P(0)$ is true; (2) for all $n \ge 0$, $P(n) \rightarrow P(n+1)$. This works, intuitively, because a proof of $P(k)$ either follows directly from (1) if $k = 0$, or indirectly from (2) if $k > 0$. (That is, the proof of $P(k)$ follows from the proof of $P(k-1)$, which follows from the proof of $P(k-2)$, ..., which follows from the proof of $P(1)$, which follows from the proof of $P(0)$, which is already established.) For the integers, though, you have stated that you can prove $P(k)$ for any $k \in \mathbb{Z}$ if you can prove: (1) $P(0)$ is true; (2) there exists $n \in \mathbb{Z}$ for which $P(n)$ is true; (3) for all $k \in \mathbb{Z}$, $P(k) \rightarrow P(k+1)$; (4) for all $k \in \mathbb{Z}$, $P(k) \rightarrow P(k-1)$. Now, how do we prove $P(57)$? Do we prove $P(56)$ is true, or do we prove $P(58)$ is true? Either one seems sufficient, but one either leads to a circular argument or
|logic|proof-writing|induction|
0
PDE : $2x(y + z) dx + (2yz - x^2 + y^2 - z^2) dy + (2yz - x^2 - y^2 + z^2) dz = 0$
I want to solve the partial differential equation $$2x(y + z) dx + (2yz - x^2 + y^2 - z^2) dy + (2yz - x^2 - y^2 + z^2) dz = 0$$ Here are the steps that I followed. By putting $u=y+z, v=y-z$, we have: $$4xudx+(u^2-v^2-2x^2)du+2uvdv=0 $$ For $X=(4xu,\, u^2-v^2-2x^2,\, 2uv)$ we have $X\cdot \operatorname{rot}(X)=0$, hence the condition of integrability is satisfied. I take $x=\text{const}$, so $dx=0$. Hence $$(u^2-2x^2)du+(2uvdv-v^2du)=0 $$ $$ (u^2-2x^2)du-v^4d(\frac{u}{v^2})=0$$ From integration: $$ \frac{1}{3}u^3-2x^2u-uv^2=C(x)$$ Differentiating the above equation: $$u^2du-2x^2du-4xudx-v^2du-2uvdv-C'(x)dx=0 $$ Comparing this with the initial equation $4xudx+(u^2-v^2-2x^2)du+2uvdv=0$: $$\frac{4xu}{-4xu-C'(x)}=\frac{u^2-v^2-2x^2}{u^2-2x^2-v^2}=\frac{2uv}{-2uv}$$ $$\frac{4xu}{-4xu-C'(x)}=1=-1 $$ Isn't this contradictory?
$$4xudx+(u^2-v^2-2x^2)du+2uvdv=0 \quad\text{is OK.} \tag 1$$ The mistake is in the integration of $\int v^4\,d(\frac{u}{v^2})$, because $v^4$ isn't constant. Divide Eq. (1) by $u^2$: $$4\frac{x}{u}dx+\left(1-\frac{v^2}{u^2}-2\frac{x^2}{u^2}\right)du+2\frac{v}{u}dv=0 $$ $$2\left(2\frac{x}{u}dx-\frac{x^2}{u^2}du\right)+du+\left(-\frac{v^2}{u^2}du+2\frac{v}{u}dv\right)=0 $$ $$2\:d\left(\frac{x^2}{u}\right)+du+d\left(\frac{v^2}{u} \right)=0$$ The integration is now possible because the coefficients of the differentials are constant. $$2\frac{x^2}{u}+u+\frac{v^2}{u}=C$$
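The implicit solution is easy to verify by re-differentiating: the partial derivatives of $F(x,u,v)=2x^2/u+u+v^2/u$, scaled by $u^2$, must reproduce the coefficients of Eq. (1). A quick finite-difference check at an arbitrarily chosen point:

```python
def F(x, u, v):
    # candidate first integral: 2 x^2/u + u + v^2/u = C
    return 2 * x**2 / u + u + v**2 / u

def partial(f, args, i, h=1e-6):
    # central finite difference in the i-th argument
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

x, u, v = 0.7, 1.3, -0.4
coeffs = (4 * x * u, u**2 - v**2 - 2 * x**2, 2 * u * v)   # coefficients of Eq. (1)
grad = tuple(partial(F, (x, u, v), i) * u**2 for i in range(3))
errs = [abs(g - c) for g, c in zip(grad, coeffs)]
```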
|ordinary-differential-equations|partial-differential-equations|
1
Solving a limit without using L'Hôpital's rule
First time posting here. I would like to get some help with this limit. I'm expected to solve it without using L'Hopital's rule, as I haven't been taught said rule, but I'm not sure how to go about it. $$\lim_{x\to3} \frac{(\sin(x-3))}{(x^2-9)}$$ Could someone nudge me in the right direction? Thanks!
You can also use substitution $x-3=a \ ,\ a\to0 $ $$\lim_{x\to3} \frac{(\sin(x-3))}{(x^2-9)}\\x=a+3\\\lim_{a\to 0 }\frac {\sin a}{(a+3)^2-9}=\\\lim_{a\to 0 }\frac {\sin a}{a^2+6a+9-9}=\\\lim_{a\to 0 }\frac {\sin a}{a(a+6)}=?\\$$
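Finishing the hint, $\frac{\sin a}{a(a+6)}=\frac{\sin a}{a}\cdot\frac{1}{a+6}\to 1\cdot\frac16$. A numeric probe near $x=3$ agrees (the sample offsets are arbitrary):

```python
import math

def g(x):
    return math.sin(x - 3) / (x**2 - 9)

# evaluate just to either side of x = 3; values should approach 1/6
samples = [g(3 + h) for h in (1e-3, -1e-3, 1e-5, -1e-5)]
```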
|calculus|limits|limits-without-lhopital|
0
Birthday problem with shared birthdays among males and female students
There are $m$ male and $f$ female students in a class (where $m$ and $f$ are each less than 365). What is the probability that a male student shares a birthday with a female student? I have attempted the method suggested by Alex in his comment on the linked question. The number of ways to allocate dates for male and female students is $$365^m \times(365-m)^f$$ The total number of ways to allocate birthdays without restriction to $m+f$ students is $365^{m+f}$. Hence, the probability of getting a shared birthday, using complementary probability, is: $$1 - \frac{365^m \times(365-m)^f}{365^{m+f}}=1 - \frac{(365-m)^f}{365^{f}}$$ Is the approach/calculation correct? (Comment) Start with finding all ways of putting $k$ identical white balls into $n=365$ bins (each bin may contain up to $k$ balls). Then find the number of ways of putting $m$ identical black balls into the remaining $n−j$ bins, $1\le j\le k$. Then find $P(S^c)$, the probability of these events; $1−P(S^c)$ is what you want.
The comment you refer to is not understandable and it does not work. That your approach is wrong can be seen already from the fact that it can give a negative complementary probability for $m>365$, which is nonsense, as the general expression should of course describe this case as well. The reason for this error is the hidden assumption that all male students have different birthdays. This can be demonstrated already for the case $m=2$. Indeed the correct complementary probability is: $$ \frac1{365}\left(1-\frac1{365}\right)^f+\frac{364}{365}\left(1-\frac2{365}\right)^f, $$ where the first and the second terms refer to the cases when the male students have the same and different birthdays, respectively. This probability is higher than the expression based on your calculation: $$ \left(1-\frac2{365}\right)^f. $$ For the general case of arbitrary $m$ we need to solve the following basic problem: find the number of ways to put $m$ distinguishable balls into $n$ distinguishable bins so that no bin is empty.
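The $m=2$ correction can be validated by brute force on a miniature "year" (the toy parameters $n=5$ days and $f=3$ female students below are chosen only so that full enumeration is feasible):

```python
from itertools import product
from fractions import Fraction as F

n, m, f = 5, 2, 3   # toy year length, male and female counts

# enumerate all n^(m+f) birthday assignments; count those with no male/female overlap
no_share = sum(
    1
    for males in product(range(n), repeat=m)
    for females in product(range(n), repeat=f)
    if not set(males) & set(females)
)
brute = F(no_share, n ** (m + f))

# the answer's m = 2 formula (same day / different days for the two males)
formula = F(1, n) * F(n - 1, n) ** f + F(n - 1, n) * F(n - 2, n) ** f
# the question's (incorrect) expression
wrong = F(n - m, n) ** f
```

The exact enumeration matches the two-case formula, and both exceed the question's expression, exactly as the answer claims.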
|probability|combinatorics|discrete-mathematics|contest-math|birthday|
0
Sobolev embedding of $H^1(\mathbb R^2)$ into $L^q(\mathbb R^2) $ : explicit dependence on $q$
Let $u\in H^1(\mathbb R^2)$, where $H^1(\mathbb R^2)\equiv W^{1,2}(\mathbb R^2)$ is the usual Sobolev space with standard norm and inner product. I want to show that for any $1\le q<\infty$ (actually I believe it should be $2\le q<\infty$), we have the inequality $$\|u\|_{L^q(\mathbb R^2)}\le C\sqrt{q} \|u\|_{H^1(\mathbb R^2)} \tag1$$ for some constant $C>0$ which does not depend on $u$. By the Sobolev embedding theorem, we know that for $s$ satisfying $1/q = 1/2 - s/2$, we have the continuous embedding $W^{s,2}(\mathbb R^2)=H^s(\mathbb R^2) \hookrightarrow L^q(\mathbb R^2)$, hence there exists $C_1>0$ such that $$\|u\|_{L^q(\mathbb R^2)}\le C_1 \|u\|_{H^{s}(\mathbb R^2)} \tag{1a} $$ Next, we need to find a constant $C_2>0$ such that $$\|u\|_{H^{s}(\mathbb R^2)}\le C_2 \|u\|_{H^1(\mathbb R^2)} \tag{1b} $$ This answer claims that $(1\text{b})$ is also a consequence of Sobolev embedding. By recursively applying the usual Sobolev inequality to $D^{s-1}u,D^{s-2}u,\ldots$ and so on, we can indeed show that t
cs89 answers your first question well, but the paper attached there seems to indicate that $C(q)^2\sim8\pi\mathrm{e}q$ , which fits your desired result (2) well. This answer tries to verify and generalize the inequality (1) in a relatively elementary way. I saw this inequality first in a note written by Steve Shkoller (see Theorem 2.30 in Notes on $L^p$ and Sobolev Spaces ), and later a generalization to the $n$ -dimensional case in the well-known textbook, A First Course on Sobolev Spaces , written by Giovanni Leoni (see Theorem 12.33 in the second edition). Both proofs in the above two references seem sort of technical at first sight. Nevertheless, a slight modification of the argument used by Lawrence Evans in his famous textbook Partial Differential Equations to prove Morrey's inequality can show that there exists a constant $C$ depending only on $n$ such that \begin{equation} ||u||_{L^q(\mathbb{R}^n)}\le Cq^{1-1/n}||u||_{W^{1,n}(\mathbb{R}^n)} \tag{*} \end{equation} for any $u\in
|functional-analysis|sobolev-spaces|
1
Solutions of equations of form $\tau(n+a_i)=\tau(x+a_i)$ where $\tau$ is the number of divisors function and $a_i$ is a diverging series
Consider the function $$\tau(n)=\sum_{d|n}1$$ which gives the number of divisors of a number. The question is: How much information does $\tau(n)$ contain about $n$ ? The answer is obviously: not very much . There are infinitely many numbers which have the same value for $\tau$ . However, if there were an infinite number of equations involving $\tau$ then we would be getting somewhere . For example let us say we are given the following equations $\tau(n)=\tau(x), \tau(n+1)=\tau(x+1),...,\tau(n+k)=\tau(x+k),...$ and we are asked to find an $x$ satisfying all the above infinite equations for a given $n$ . Well $x$ has to be $n$ , for $x\neq n$ would imply $\tau$ is periodic. However, let us say that our equations were $\tau(n)=\tau(x), \tau(n+1)=\tau(x+1), \tau(n+4)=\tau(x+4),...,\tau(n+k^2)=\tau(x+k^2)...$ Intuitively, it seems $x=n$ , but is there a rigorous way to prove it? What if our equations were $$\tau(n)=\tau(x),\tau(n+a_1)=\tau(x+a_1),...,\tau(n+a_k)=\tau(x+a_k),...$$ wher
In short: $n$ is uniquely determined by the infinite sequence $$ \tau(n+1^2),\tau(n+1^2+1),\tau(n+2^2),\tau(n+2^2+1),\tau(n+3^2),\tau(n+3^2+1),\ldots $$ In fact, we don't need to know the exact values of $\tau$ here, only whether they're even or odd. Note that for any $n\ge 1$ , $\tau(n)$ is odd if and only if $n$ is a perfect square. Now suppose we are given a 'mystery integer' $n$ and a known integer $k$ . If we know that $\tau(n+k^2)$ is odd, then there exists an integer $m$ such that $n=m^2-k^2=(m-k)(m+k)$ . In other words, we know that our mystery integer $n$ has a divisor $d:=m-k\mid n$ for which $n/d-d=2k$ , so the fact that $\tau(n+k^2)$ is odd gives us some information about $n$ . This is what we will exploit. Let $n\ge 1$ be an integer and define $$ N:=\max\{k\ge 0:k=0\ \vee\ \tau(n+k^2)\equiv 1\pmod 2\}. $$ For any $d\mid n$ with $d\le n/d$ and $n/d\equiv d\pmod 2$ , we have $N\ge \frac12(n/d-d)$ . Therefore, $N=0$ if and only if such a $d$ does not exist, which is the case if
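The parity fact the argument rests on, that $\tau(n)$ is odd exactly for perfect squares, is easy to check numerically; a small sketch (helper names are mine):

```python
from math import isqrt

def tau(n: int) -> int:
    """Number of divisors of n, by trial division."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def is_square(n: int) -> bool:
    return isqrt(n) ** 2 == n

# Divisors pair up as (d, n/d); the pairing has a fixed point d = n/d
# exactly when n is a square, which makes tau(n) odd.
assert all((tau(n) % 2 == 1) == is_square(n) for n in range(1, 500))
print(tau(36))  # 9, odd, and 36 = 6^2
```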
|number-theory|elementary-number-theory|divisibility|divisor-sum|divisor-counting-function|
1
Polynomial roots with sum of coefficients prime
There exists a quadratic polynomial $P(x)$ with integer coefficients such that: (a) both of its roots are positive integers, (b) the sum of its coefficients is prime, and (c) for some integer k, $P(k) = −55$ . Show that one of its roots is $2$ , and find the other root. This is a problem from Canada 2001 . I found this on a note by Adeel Khan . My process is, Let the roots be $\alpha$ and $\beta$ and $P(x)=ax^2+bx+c$ . By the second condition, we have $$a(1-\alpha)(1-\beta)=\pm p\text{ for some prime }p$$ Clearly, $\alpha,\beta$ are distinct, otherwise it violates the third condition. Therefore, one of them is $2$ (WLOG $\alpha=2$ ) and $a=\pm1$ . If $a=1$ , $$(1-\beta)=p\implies\beta=1-p$$ $$b=-(\alpha+\beta)=p-3$$ $$c=\alpha\beta=2-2p$$ hence, $$P(2)=4a+2b+c=4a+2p-6+2-2p=4a+4(p-1)=0\implies p-1=a=1\implies p=2\implies\beta=-1$$ which is a contradiction. Therefore $a=-1$ . Tell me if I have done any mistake and what to do next.
Assume the quadratic is $a(x-p)(x-q)$ with positive integer roots $p$ and $q$ . The sum of coefficients is $a(1-p-q+pq) = a(p-1)(q-1)$ . For this to be prime, two of the three factors $a$ , $(p-1)$ , $(q-1)$ must equal $1$ and the remaining one must be prime; in particular $p-1=1$ or $q-1=1$ , so at least one of the roots is $2$ . This proves the first part. WLOG take $q=2$ , so the sum of coefficients is $a(p-1)$ . The roots cannot both be $2$ : then $P(k)=a(k-2)^2$ with $a$ prime would be non-negative and could never equal $-55$ . Hence $a=1$ and $p-1$ must be prime. Now solve $(x-p)(x-2)=-55$ . The factor pair $(x-p,\,x-2)$ must be one of $(\pm1,\mp55)$ , $(\pm5,\mp11)$ , $(\pm11,\mp5)$ , $(\pm55,\mp1)$ . The pairs $(-11,5)$ and $(-5,11)$ give $p=18$ (with $x=7$ and $x=13$ respectively), the pairs $(11,-5)$ and $(5,-11)$ give $p=-14$ , and the $\pm1,\pm55$ pairs give $p=58$ or $p=-54$ , rejected since $57=3\cdot19$ is not prime or $p$ is not positive. Since $p$ must be a positive root with $p-1$ prime, $p=18$ (note $17$ is prime) and the value of $k$ is $7$ or $13$ .
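The conclusion is easy to verify numerically; a quick check of the candidate $P(x)=(x-2)(x-18)$ (a sketch, with a naive primality test of my own):

```python
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# P(x) = (x - 2)(x - 18) = x^2 - 20x + 36
P = lambda x: (x - 2) * (x - 18)

assert is_prime(1 - 20 + 36)         # sum of coefficients = 17, prime
assert P(7) == -55 and P(13) == -55  # k = 7 or k = 13 works
print("P(x) = x^2 - 20x + 36 satisfies all three conditions")
```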
|polynomials|
1
Prove that $\sqrt{8}$ is irrational in different method
I tried to prove that $\sqrt{8}$ is irrational. I said: let $\sqrt{8}$ be rational, then $\sqrt{8}=a/b$ where $a$ and $b$ are relatively prime. Then $2\sqrt{2}=a/b$ , and $\sqrt{2} =a/(2b)$ . It is obvious that the RHS is rational and the LHS is irrational (assuming the irrationality of $\sqrt{2}$ has already been proved). So there is a contradiction and the proof is done. My question is: are there other ways to prove that $\sqrt{8}$ is irrational?
Another Proof. Let if possible $\sqrt 8$ be a rational number. Then it can be written in the form $\dfrac{p}{q}$ where $p,q$ are integers, $q\neq0$ and $(p,q)=1$ , so that $\sqrt 8=\dfrac{p}{q}$ . But we know $\sqrt4=2$ and $\sqrt9=3$ , $\therefore 2<\sqrt8<3$ , i.e. $2<\dfrac pq<3$ , so $0<p-2q<q$ . Thus $p-2q$ is a positive integer smaller than $q$ , so that $\sqrt8(p-2q)$ , i.e. $\dfrac{p}{q}(p-2q)$ , is not an integer (since $(p,q)=1$ and $0<p-2q<q$ , the denominator $q$ cannot cancel). But $\sqrt8(p-2q)=\dfrac{p}{q}(p-2q) = \dfrac{p^2}{q}-2p =\dfrac{p^2}{q^2}\cdot q-2p=8q-2p$ (which is an integer). $\implies \sqrt8(p-2q)$ is an integer. Here we arrive at a contradiction. Hence, $\sqrt8$ is not a rational number. (Proved).
|discrete-mathematics|proof-writing|solution-verification|alternative-proof|
0
An exercise question in the probability theory class
I was asked a "simple" question by one of my students in the tutorial class, but I found myself struggling with this for about 2 hours already. Here is the question: Assume there are $K$ constants: $0\leq C_{1}<C_{2}<\cdots<C_{K}\leq 1$ ; Let $X$ be a random variable with $Pr(X=C_{k})=\frac{1}{K}$ , $k=1,...,K$ . Find when the maximum of $Var(X)$ is attained, determine the value of $K$ and the value of $(C_{1},...,C_{K})$ when this happens. Here are some of my ideas: $E[X]=\sum_{k=1}^{K}C_{k}\frac{1}{K}=\frac{1}{K}\sum_{k=1}^{K}C_{k}$ $E[X^2]=\sum_{k=1}^{K}(C_{k})^2\frac{1}{K}=\frac{1}{K}\sum_{k=1}^{K}(C_{k})^2$ $Var(X)=E[X^2]-(E[X])^2=\frac{1}{K^2}\big(K\sum_{k=1}^{K}(C_{k})^2-(\sum_{k=1}^{K}C_{k})^2\big)\\~~~~~~~~~~~~=\frac{1}{K^2}\big((K-1)\sum_{k=1}^{K}(C_{k})^2-2\sum_{i<j}C_{i}C_{j}\big)$ . I am really wondering if it is possible to apply the Lagrange multiplier method to this. Any help and advice will be sincerely appreciated. Thank you very much in advance!
This gives some formality to Gareth's answer (reaching the same conclusion). Claim: If $X$ is any random variable that satisfies $0\leq X\leq 1$ surely, then $$Var(X)\leq 1/4$$ and this is achieved by $X \sim Bernoulli(1/2)$ . Proof: Suppose $0\leq X \leq 1$ surely. Note that $$ \left(0\leq X \leq 1\right) \implies 0\leq E[X] \leq 1$$ $$\left(X^2\leq X\right) \implies E[X^2]\leq E[X]$$ Define $m=E[X]$ , so $m \in [0,1]$ . Thus \begin{align} Var(X) &= E[X^2] - m^2\\ &\leq E[X] - m^2 \\ &=m-m^2\\ &\leq \sup_{v\in [0,1]}\{v-v^2\}\\ &=1/4 \end{align} $\Box$ Under the structure of your question, we can use $K=2$ and $C_1=0, C_2=1$ to achieve the max variance of $1/4$ . I'm not sure if your question also asks what to do if $K$ is fixed and cannot be chosen (so we cannot choose $K=2$ ). For any even integer $K$ , we can approach the max variance $1/4$ arbitrarily closely by using $C_1=0$ , $C_K=1$ , and placing half the remaining $C_i$ values near $0$ and the other half near 1 (spacing them o
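Both the attained bound and the "approach $1/4$ for larger $K$" remark can be illustrated numerically (a sketch; `var` is my helper for the variance of a uniform pick):

```python
def var(values):
    """Variance of X when X is uniform on the list `values`."""
    k = len(values)
    m = sum(values) / k
    return sum(v * v for v in values) / k - m * m

# K = 2, C = (0, 1): the Bernoulli(1/2) case attains Var = 1/4 exactly.
assert var([0, 1]) == 0.25

# For even K > 2, cluster half the points near 0 and half near 1
# to get arbitrarily close to 1/4 (the C_k must stay distinct).
eps = 1e-3
print(var([0.0, eps, 1 - eps, 1.0]))  # just below 0.25
```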
|probability|random-variables|lagrange-multiplier|variance|uniform-distribution|
0
Difference between different definitions of diagram in a category
I'm currently reading the book "Topoi: The Categorial Analysis of Logic" by Robert Goldblatt, and in chapter 3.11, in order to define limits and co-limits he defines a diagram in a category as follows: Definition 1: By a diagram $D$ in a category $\cal C$ we simply mean a collection of $\cal C$ -objects $(d_i)_{i\in I}$ together with some $\cal C$ -arrows $g:d_i\to d_j$ between certain objects in the diagram. However, when I looked at the same topic in other books, they all defined a diagram in a category in the following way: Definition 2: Let $\cal J$ be a (small) category. A diagram in $\cal C$ is a functor $F:\mathcal{J}\to \mathcal{C}$ . Definition 1 just seems like a "dumbed down" version of definition 2, and maybe the author chose this definition because at this point in the book he didn't even introduce functors. My question is whether these definitions are equivalent. For example, $F$ being a functor implies that for any $a\in \cal J$ we have $F(id_a)=id_{F(a)}$ , so
Definition 1 is more about the way we would draw a diagram, as a literal picture. For example, consider the "diagram" below $\require{AMScd}$ \begin{CD} B @>{g}>> D\\ @A{f}AA @AA{k}A \\ A @>>{h}> C \end{CD} We left out the arrows $gf$ and $kh$ in the drawing, which should be there according to definition 2. But fine, maybe that is just a convention of drawing pictures: don't include superfluous information (like identity arrows). There is another important difference though: consider the simpler triangle diagram consisting of $A \xrightarrow{f} B \xrightarrow{g} C$ and $A \xrightarrow{h} C$ (you'll have to draw the picture yourself now, I do not know how to draw a triangle in MathJax). In definition $2$ there might be the implied condition that $h = gf$ , if the category $\mathcal{J}$ is the triangle (due to functoriality of $F$ ). Note that this may not always be the case, for example we could get the same picture if $\mathcal{J}$ is the category with 3 separate arrows: $X \t
|category-theory|definition|limits-colimits|
0
Is every subset of a set also a set?
Using the axioms of $ZF$ , you can ensure that from a set or multiple sets, you can also create a set. However, the question that arose in my mind is whether all subsets of the set that was created are also sets. In other words, is there a proof that every class $A$ , where $A\subset C$ and $C$ is a set, is also a set? If $A$ is defined by a formula $[ A = \{ x \in C : P(x) \} ]$ , then by the axiom schema of separation, it becomes easy to infer that $A$ is a set. But we do not know whether every subset of $C$ can be defined by a formula.
Reading between the lines somewhat, it seems that the question you are really interested in, would be rather something like this: Suppose $(M,\varepsilon)$ is a model of set theory, $a\in M$ is a set and $C\subseteq M$ is some subset of the model with the property that each (actual) element of $C$ is an ( $\varepsilon$ -)element of $a$ . Is $C$ then necessarily "realized" in $M$ as a subset of $a$ , in that there is some $c\in M$ such that the $\in$ -elements of $C$ are exactly the $\varepsilon$ -elements of $c$ ? The answer to this (interpretation of your) question is actually no, not necessarily: If ZFC is consistent, it has a countable model $M$ . Hence, there are only countably many actual subsets of (what the model thinks is) $\mathbb{N}$ that are realized as sets in $M$ .
|set-theory|
0
Difference between different definitions of diagram in a category
I'm currently reading the book "Topoi: The Categorial Analysis of Logic" by Robert Goldblatt, and in chapter 3.11, in order to define limits and co-limits he defines a diagram in a category as follows: Definition 1: By a diagram $D$ in a category $\cal C$ we simply mean a collection of $\cal C$ -objects $(d_i)_{i\in I}$ together with some $\cal C$ -arrows $g:d_i\to d_j$ between certain objects in the diagram. However, when I looked at the same topic in other books, they all defined a diagram in a category in the following way: Definition 2: Let $\cal J$ be a (small) category. A diagram in $\cal C$ is a functor $F:\mathcal{J}\to \mathcal{C}$ . Definition 1 just seems like a "dumbed down" version of definition 2, and maybe the author chose this definition because at this point in the book he didn't even introduce functors. My question is whether these definitions are equivalent. For example, $F$ being a functor implies that for any $a\in \cal J$ we have $F(id_a)=id_{F(a)}$ , so
I agree with your "dumbed-down" assessment, and I have to say that I don't rate the scholarship of Goldblatt's book too highly. (Not just my opinion, either: see what Colin McLarty says here .) If you want to learn topos theory from the beginning, I think a better choice would be the book by Moerdijk and Mac Lane, supplemented by whatever "first course in category theory" textbooks you might need to fill in gaps. (I grew up on Mac Lane's book, but there are a lot of newer generation texts that are very good, e.g., the ones by Awodey, Leinster, Riehl, and sorry to leave others out.) One problem with the dumbed-down definition is that it suppresses some information that you might really want; for example, in addition to "some arrows", maybe it's important whether the composites of some of those arrows are equal to other arrows. The idea of a reflexive coequalizer, for example, is technically important. The way it ought to be expressed is that it's the universal cocone for a functor $D: S
|category-theory|definition|limits-colimits|
0
Proving the general formula for conditional expectation of a Poisson Process
I am studying a course on Stochastic Processes and encountered the following proof exercise on Poisson Processes: If $N$ is a Poisson Process with intensity $\lambda$ , then for $0<s<t$ , where $k \leq n$ are non-negative integers, prove the following: $$ P(N(s) = k | N(t) = n) = \binom{n}{k} \Big{(} \frac{s}{t} \Big{)}^k \Big{(} 1 - \frac{s}{t} \Big{)}^{n-k}$$ I know that $N(t)$ (and analogously, $N(s)$ ) will follow the Poisson distribution with parameter $\lambda t $ . This allows me to use Bayes' Formula for the conditional probability to see that: $$ P(N(s) = k | N(t) = n) = \frac{P(N(s) = k \text{ and } N(t) = n)}{ P(N(t) = n)}$$ $$ = \frac{n!}{e^{-\lambda t}(\lambda t)^n} P(N(s) = k \space \text{and} \space N(t) = n) $$ $$ = \frac{n!}{e^{-\lambda t}(\lambda t)^n} \Big{(}P(N(t) = n | N(s) = k)\frac{e^{-\lambda s}(\lambda s)^k}{k!} \Big{)}$$ Then simplifying the above expression yields the following: $$ = \frac{n! \space e^{-\lambda (s - t)}s^k}{k! \lambda ^{n-k}t^n} P(N(t) = n | N(s) =
Continuing from the hint given in the comments, I left off at the following point: $$ P(N(s)=k|N(t)=n) = \frac{(n!) \space e^{-\lambda (s - t)}s^k}{k! \lambda ^{n-k}t^n} P(N(t) = n | N(s) = k)$$ Note that the following result holds: $$P(N(t)=n|N(s)=k)=P(N(t−s)=n−k)$$ due to the fact that Poisson Processes have stationary increments and satisfy the independence property that $N_t - N_s$ is independent of $\mathscr{F}_s$ for any $0 \le s \le t$ . Therefore the above simplifies as follows: $$P(N(s)=k|N(t)=n) = \frac{n! \space e^{-\lambda (s - t)}s^k}{k! \lambda ^{n-k}t^n}P(N(t−s)=n−k)$$ $$ = \frac{n! \space e^{-\lambda (s - t)}s^k}{k! \lambda ^{n-k}t^n} \times \frac{e^{- \lambda (t-s)}(\lambda (t-s))^{n-k}}{(n-k)!}$$ $$ = \frac{n! }{k!(n-k)! } \times \frac{s^k((t-s))^{n-k}}{t^n}$$ $$ = \frac{n! }{k!(n-k)! } \times \Big{(}\frac{s}{t}\Big{)}^k \times \Big{(} \frac{t-s}{t}\Big{)}^{n-k}$$ $$ = \binom{n}{k} \times \Big{(}\frac{s}{t}\Big{)}^k \times \Big{(} 1 - \frac{s}{t}\Big{)}^{n-k} \quad \quad \square$$
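The identity can be double-checked numerically: the conditional law built from the Poisson pmfs matches $\text{Binomial}(n, s/t)$ and does not depend on $\lambda$. A sketch (function names are mine):

```python
from math import comb, exp, factorial

def pois(lam: float, k: int) -> float:
    return exp(-lam) * lam ** k / factorial(k)

def cond(lam: float, s: float, t: float, k: int, n: int) -> float:
    """P(N(s)=k | N(t)=n), using independent increments:
    joint = P(N(s)=k) * P(N(t)-N(s)=n-k)."""
    return pois(lam * s, k) * pois(lam * (t - s), n - k) / pois(lam * t, n)

s, t, k, n = 1.5, 4.0, 2, 7
target = comb(n, k) * (s / t) ** k * (1 - s / t) ** (n - k)
for lam in (0.3, 1.0, 5.0):          # lambda drops out, as it should
    assert abs(cond(lam, s, t, k, n) - target) < 1e-9
print(target)
```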
|probability-theory|solution-verification|stochastic-processes|proof-writing|poisson-distribution|
1
Card matching game
The game setup is as follows: there is a bag with 6 matching pairs of cards (two 1s, two 2s, two 3s, two 4s, two 5s, two 6s). We randomly draw one card at a time, but put matching cards aside as soon as they appear in our hand. The game ends and we lose if we ever hold three cards, no two of which match. What is the probability of winning? My Approach Let $State_x$ be the state where we have $x$ cards in our hand and $n$ be the number of remaining cards in the deck. After some thought I realized that in the 'winning' scenario we will always come back to the state where we have 1 card in our hand. So for every possible $n$ , when we are at $State_1$ (this will only happen when $n$ is odd), I calculated the probability of coming back to $State_1$ : Either this way: $State_1 \to State_0 \to State_1$ In this case the probability of coming back to $State_1$ is $\frac{1}{n}$ Or this way: $State_1 \to State_2 \to State_1$ In this case the probability of coming back to $State_1$ is $\frac{n-1}
We have a simple exercise with a concrete small number of pairs, so I will solve it without generalization. Let us write down all states that may occur. A state is written as $(n,h)$ , where $n$ is the number of cards still in the bag and $h$ is the number of (unmatched) cards in hand; the same value of $h$ appears at many levels, but these are different states. The state " $*$ " means losing the game, and it also occurs at several levels. The transitions, with their probabilities, are:

(12,0) --1--> (11,1)
(11,1) --1/11--> (10,0), --10/11--> (10,2)
(10,0) --1--> (9,1); (10,2) --2/10--> (9,1), --8/10--> (*) GAME OVER
(9,1) --1/9--> (8,0), --8/9--> (8,2)
(8,0) --1--> (7,1); (8,2) --2/8--> (7,1), --6/8--> (*) GAME OVER
(7,1) --1/7--> (6,0), --6/7--> (6,2)
(6,0) --1--> (5,1); (6,2) --2/6--> (5,1), --4/6--> (*) GAME OVER
(5,1) --1/5--> (4,0), --4/5--> (4,2)
(4,0) --1--> (3,1); (4,2) --2/4--> (3,1), --2/4--> (*) GAME OVER
(3,1) --1/3--> (2,0), --2/3--> (2,2) (and from here we can only WIN)
(2,0) --1--> (1,1); (2,2) --0/2--> (*), --2/2--> (1,1)
(1,1) --1/1--> (0,0) = WIN

Because of the simple structure of the graph with the states, it is eas
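The transition rules in the diagram can be turned into an exact recursion: with $n$ cards in the bag and $h$ unmatched cards in hand, each hand card's partner is still in the bag, so a draw matches with probability $h/n$. A sketch of that computation (assuming I read the diagram's rules correctly):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def win(n: int, h: int) -> Fraction:
    """P(win) with n cards left in the bag and h unmatched cards in hand."""
    if h == 3:
        return Fraction(0)   # three mutually unmatched cards: lose
    if n == 0:
        return Fraction(1)   # bag exhausted: win
    p_match = Fraction(h, n)
    return p_match * win(n - 1, h - 1) + (1 - p_match) * win(n - 1, h + 1)

print(win(12, 0))  # 9/385, about 2.34%
```

With 6 pairs this evaluates to $9/385\approx 0.0234$; with only 2 pairs the game is always won (only two distinct values exist), a useful sanity check.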
|probability|card-games|
0
Calculate the range of the function $f(x)=-x^2+6x$ with the domain $[2,5]$
Consider the function $f(x)=-x^2+6x$ with the domain $[2,5]$ . Now calculate the range of the function according to its domain. Here is my solution: The domain is $2\leq x\leq5$ so I wrote: $2\leq x\leq5\implies4\leq x^2\leq25\implies-4\geq-x^2\geq-25\implies\boxed{-25\leq-x^2\leq-4}\ (\text{1})$ $2\leq x\leq5\implies\boxed{12\leq6x\leq30}\ (\text{2})$ Then I calculated the sum of the two inequalities $(\text{1})$ and $(\text{2})$ . So the range is $-13\leq-x^2+6x\leq26$ . But since this was a question from a calculus book, I know that the correct answer is $[5,9]$ and not $[-13,26]$ . So what is the problem with my solution?
The derivative $f'(x)=-2x+6$ has one zero at $x=3$ . The maximum and minimum values will occur either at the endpoints of the domain or at $x=3$ . Simply plug these $x$ values into $f$ to check. $f(2)=-(2)^2+6(2)=8$ $f(3)=-(3)^2+6(3)=9$ $f(5)=-(5)^2+6(5)=5$ This gives us a minimum value of $5$ and a maximum value of $9$ . So the range is $[5,9]$ as you said.
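The endpoint/critical-point reasoning is easy to confirm with a dense sample of the domain (a quick numerical sketch):

```python
f = lambda x: -x * x + 6 * x

# Candidates for extrema on [2, 5]: the endpoints and the critical point x = 3.
candidates = [2, 3, 5]
print([f(x) for x in candidates])  # [8, 9, 5] -> range [5, 9]

# Dense sample: no value escapes [5, 9].
xs = [2 + 3 * i / 100_000 for i in range(100_001)]
assert all(5 <= f(x) <= 9 for x in xs)
```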
|calculus|algebra-precalculus|
1
Number of ways to distribute gifts among kids subject to the stated conditions
How many ways can we distribute gifts among kids with the conditions below? Given the conditions: There are 5 kids, denoted as A, B, C, D, and E. There are 6 gifts, denoted as 1, 2, 3, 4, 5, and 6. Only 3 kids will be given gifts. Each kid can only have 1 gift. Each gift can only be given to 1 kid. Kid A doesn't like gifts 2 and 3. Kid B doesn't like gift 2. Kid D doesn't like gifts 1, 4, and 5. We can select the 3 kids who receive gifts in $\begin{pmatrix}5 \\ 3\end{pmatrix}=10$ ways. However, considering that each child may have preferences or dislikes for certain gifts, every combination must be treated separately, which becomes tedious. Is there a faster approach?
One approach is to use a variation of the theory of Rook Polynomials . In the board below, the columns correspond to kids and the rows to gifts; a shaded square indicates a forbidden combination. $\qquad\qquad$ We will find the rook polynomial of the forbidden sub-board (the shaded squares): the polynomial $$R(x) = \sum_{k\ge 0} r_k x^k$$ where $r_k$ is the number of ways to place $k$ non-attacking rooks on the shaded squares. By inspection, the shaded squares in the first two columns have the rook polynomial $1+3x+x^2$ and the shaded squares in column four have the rook polynomial $1+3x$ . Since these two groups have no row or column in common, the rook polynomial of the forbidden sub-board is $$R(x) = (1+3x+x^2)(1+3x) = 1+6x+10x^2+3x^3 \tag{*} $$ We are now ready to apply inclusion/exclusion to solve the problem. Let's say an assignment of gifts to kids has "property $ij$ " if kid $i$ is given gift $j$ where that combination is forbidden. We want to count all the assignments with none
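The coefficients in $(*)$ can be verified by brute force over the shaded squares (a sketch; the square list encodes the forbidden kid-gift pairs stated in the problem):

```python
from itertools import combinations

# Forbidden (kid, gift) squares: A dislikes 2,3; B dislikes 2; D dislikes 1,4,5.
shaded = [("A", 2), ("A", 3), ("B", 2), ("D", 1), ("D", 4), ("D", 5)]

def rook_coeffs(squares, max_k=3):
    """r_k = number of ways to place k non-attacking rooks on `squares`
    (no two rooks share a kid/column or a gift/row)."""
    r = [0] * (max_k + 1)
    for k in range(max_k + 1):
        for placement in combinations(squares, k):
            kids = [sq[0] for sq in placement]
            gifts = [sq[1] for sq in placement]
            if len(set(kids)) == k and len(set(gifts)) == k:
                r[k] += 1
    return r

print(rook_coeffs(shaded))  # [1, 6, 10, 3], matching (*)
```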
|combinatorics|combinations|
0
How does one perform induction on integers in both directions?
On a recent assignment, I had a question where I had to prove a certain statement to be true for all $n\in\mathbb{Z}$ . The format of my proof looked like this: Statement is true when $n=0$ "Assume statement is true for some $k\in\mathbb{Z}$ " Statement must be true for $k+1$ Statement must be true for $k-1$ My professor said the logic is flawed because of my second bullet point above. She says that since mathematical induction relies on the well-ordering principle and since $\mathbb{Z}$ has no least or greatest element, that using induction is invalid. Instead, she says my argument should be structured like this: Statement is true when $n=0$ "Assume statement is true for some integer $k\geq0$ " Statement must be true for $k+1$ "Assume statement is true for some integer $k\leq0$ " Statement must be true for $k-1$ I am failing to understand where my logic fails and why I need to split the assumptions like she is suggesting. Could someone explain why relying on the well-ordering principl
Here is a (quotient-)inductive definition of the integers whose natural induction principle is the exact induction principle you used. Inlining:

    data ℤ : Type where
      zero : ℤ
      suc pre : ℤ → ℤ
      s-p : ∀ i → suc (pre i) ≡ i
      p-s : ∀ i → pre (suc i) ≡ i
      squash : isSet ℤ

It is the type simultaneously generated by the first three 'constructors' ( $\mathsf{zero}$ , $\mathsf{suc}$ and $\mathsf{pre}$ ) and quotiented by the other three rules. So, $ℤ$ is given by the rules: $ℤ$ contains a value $\mathsf{zero}$ , representing $0$ If $i$ is a value of $ℤ$ , then so is $\mathsf{suc}\ i$ , representing $i+1$ If $i$ is a value of $ℤ$ , then so is $\mathsf{pre}\ i$ , representing $i-1$ $\mathsf{suc}\ (\mathsf{pre}\ i) = i$ $\mathsf{pre}\ (\mathsf{suc}\ i) = i$ Don't worry about $\mathsf{squash}$ Then, the natural induction principle says that for a predicate $P : ℤ → \mathsf{Prop}$ , to prove $∀i. P(i)$ we need to show: $P(0)$ $∀ i. P(i) → P(\mathsf{suc}\ i)$ $∀ i. P(i) → P(\mathsf{pre}\ i)$ Conditions i
|logic|proof-writing|induction|
0
Calculate the range of the function $f(x)=-x^2+6x$ with the domain $[2,5]$
Consider the function $f(x)=-x^2+6x$ with the domain $[2,5]$ . Now calculate the range of the function according to its domain. Here is my solution: The domain is $2\leq x\leq5$ so I can wrote: $2\leq x\leq5\implies4\leq x^2\leq25\implies-4\geq-x^2\geq-25\implies\boxed{-25\leq-x^2\leq-4}\ (\text{1})$ $2\leq x\leq5\implies\boxed{12\leq6x\leq30}\ (\text{2})$ Then I calculated the sum of the two inequalities $(\text{1})$ and $(\text{2})$ . So the range is $-13\leq-x^2+6x\leq26$ . But since this was a question from a calculus book, I know that the correct answer is $[5,9]$ and not $[-13,26]$ . So what is the problem of my solution?
A bit of geometry: $y=-x^2 +6x$ with $x\in [2,5];$ $y=-(x^2-6x)=-(x-3)^2+9$ ; This is a parabola opening downward with maximum at $x=3$ : $y_{max} =9;$ $y_{min} =-(5-3)^2+9=5$ , and we are done. (Why is $x=5$ chosen for the minimum?)
|calculus|algebra-precalculus|
0
How to calculate the distance from $X^2$ to $\operatorname{span}(1,X)$?
Question: Let $E = \Bbb{R}_2[X]$ be a Euclidean space with the inner product $\left\langle P,Q\right\rangle \ = \int^1_0 {P(t)Q(t)dt}$ . Calculate the distance from $X^2$ to $\operatorname{span}(1,X)$ . Answer: Let $(a,b) \in \Bbb{R}^2$ . $p$ is the orthogonal projector. We have: \begin{equation} a + bX = p^\perp_{\operatorname{span}(1,X)}(X^2) \iff \left\{\begin{array}{@{}l@{}} \langle X^2 - (a+bX), 1\rangle = 0\\ \langle X^2 - (a+bX), X\rangle = 0\\ \end{array}\right.\, \iff \left\{\begin{array}{@{}l@{}} a = -1/6\\ b = 1\\ \end{array}\right.\,. \end{equation} So the wanted distance is $\left\|X^2-(X-1/6)\right\| = \frac{1}{6\sqrt 5}$ . I am a little bit lost since I don't understand how $X^2$ can be projected on $\operatorname{span}(1, X)$ , since $X^2$ is not a linear combination of $1$ and $X$ . Why is $a + bX$ not equal to zero? How do the inner products $\langle X^2 - (a+bX), 1\rangle \ = 0$ and $\langle X^2 - (a+bX), X\rangle \ = 0$ help to get the projection? It means that
Think about $\mathbb R^3$ for a moment. Suppose you have a plane $P$ spanned by two vectors $u, v$ and you have a third vector $w$ which is not in the plane but is also not orthogonal to the plane. Then it makes sense to project $w$ onto the plane, right? But $w$ is not a linear combination of $u$ and $v$ because $w$ is not contained in the plane. It's the same idea here: as long as $X^2$ is not orthogonal to $\mathrm{span}\{1, X\}$ then $X^2$ has a non-zero projection onto that plane. Now what does "orthogonal projection" mean? It means precisely the point $s = p^\perp_P(w)$ contained in $P = \mathrm{span}\{u,v\}$ such that the line from $s$ to $w$ is orthogonal to $P$ . "The line from $s$ to $w$ " is parallel to the vector $w - s$ , so that means $$ \langle w-s, u\rangle = 0\quad \langle w-s, v\rangle = 0 \tag{$*$} $$ because $u,v \in P$ . Now apply this when $u = 1$ , $v = X$ , and $w = X^2$ . What the answer does is write $s = a + bX$ for some unknown scalars $a, b$ ; we can do thi
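The computation in the quoted answer can be reproduced exactly with rational arithmetic: set up the two normal equations $(*)$ for $s=a+bX$ and solve. A sketch (using $\langle x^i,x^j\rangle=1/(i+j+1)$ on $[0,1]$):

```python
from fractions import Fraction as F

# Inner products of monomials on [0,1]: <x^i, x^j> = 1/(i+j+1).
ip = lambda i, j: F(1, i + j + 1)

# Normal equations for the projection s = a + b*x of x^2 onto span{1, x}:
#   a*<1,1> + b*<x,1> = <x^2,1>
#   a*<1,x> + b*<x,x> = <x^2,x>
# Augmented rows [coef_a, coef_b, rhs]; solve by elimination.
r1 = [ip(0, 0), ip(1, 0), ip(2, 0)]
r2 = [ip(0, 1), ip(1, 1), ip(2, 1)]
m = r2[0] / r1[0]
r2 = [r2[j] - m * r1[j] for j in range(3)]
b = r2[2] / r2[1]
a = (r1[2] - r1[1] * b) / r1[0]

# Since x^2 - s is orthogonal to the span, dist^2 = <x^2 - s, x^2>.
dist2 = ip(2, 2) - a * ip(2, 0) - b * ip(2, 1)
print(a, b, dist2)  # -1/6 1 1/180, and sqrt(1/180) = 1/(6*sqrt(5))
```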
|linear-algebra|polynomials|metric-spaces|inner-products|bilinear-form|
1
Probability of rain after an amount of time in Minecraft game
Minecraft time is measured in ticks. When a world is loaded, the game waits anywhere from 12,000 to 180,000 ticks to start raining. After it starts raining, the rain lasts anywhere from 12,000 to 24,000 ticks. When the rain ends, clear weather lasts for another 12,000 to 180,000 ticks, etc. This cycle repeats forever. I want to know if there is a function that can determine the chance that it's raining on tick t . From ticks 0 to 12,000 the chance is 0%. From ticks 12,000 to 24,000 the chance is (t - 12,000) / 168,000 (I believe) After this, I got confused. What would the probabilities be past this point? I created a simulation to brute force it and this is what the graph looked like (x is ticks in thousands, y is the percentage chance of rain). But I'm not actually sure what the function for this would be. Thank you.
Let $C_i\sim C$ and $R_i \sim R$ denote the durations (in ticks) of the $i$ th coupled periods of clear and rainy weathers, respectively. We want to find $$p(t)=\mathbb P \left (A_t:=\text{at time $t$ the weather is $\bf{rainy}$} \right ).$$ Let us define $$q(t)=1- p(t)=\mathbb P \left (A'_t:=\text{at time $t$ the weather is $\bf{clear}$} \right ).$$ By conditioning on $C_1$ , we have $$q(t)=\mathbb P (A'_t \, |C_1>t)\mathbb P (C_1>t)+\mathbb P (A'_t, C_1 \le t).$$ By observing $$\mathbb P (A'_t \, |C_1>t)=1$$ $$\mathbb P (A'_t, C_1 \le t)=\int_{0}^{\infty} \mathbb P (A'_t, C_1 \le t|R_1+C_1=x) \text{d}F_{R_1+C_1}(x)= \int_{0}^{t} q(t-x) \text{d}F_{R_1+C_1}(x),$$ we obtain $$\color{blue}{p(t)=1- q(t) \\ q(t)=1-F_{C}(t)+\int_{0}^{t} q(t-x) \text{d}F_{R+C}(x).}$$ When $F_{R+C}$ has a pdf, denoted by $f_{R+C}$ , it is a linear Volterra integral equation , which can be solved by Laplace transformation. The final solution is $$\color{blue}{p(t)= 1-\mathcal L^{-1} \left [\frac{\mathcal L[1-F
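Lacking a closed form, the renewal equation can be checked against a direct Monte Carlo simulation of the tick process (a sketch using the stated uniform durations; all names are mine):

```python
import random

CLEAR = (12_000, 180_000)   # clear-weather duration range, in ticks
RAIN = (12_000, 24_000)     # rain duration range, in ticks

def raining_at(t: int, rng: random.Random) -> bool:
    """Simulate one world: alternate clear/rain periods until tick t."""
    now, rain = 0.0, False
    while True:
        lo, hi = RAIN if rain else CLEAR
        now += rng.uniform(lo, hi)
        if now > t:
            return rain     # tick t lies inside the period just added
        rain = not rain

def p_rain(t: int, trials: int = 20_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    return sum(raining_at(t, rng) for _ in range(trials)) / trials

print(p_rain(6_000))    # 0.0, rain cannot start before tick 12,000
print(p_rain(20_000))   # roughly (20000-12000)/168000, about 0.048
print(p_rain(500_000))  # long-run level near E[R]/(E[R]+E[C]), about 0.158
```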
|probability|
0
Solve $a+b=\gcd(a^3,b^3),\;b+c=\gcd(b^3,c^3),\;c+a=\gcd(c^3,a^3)$ for positive integers $a,b,c$
Solve $$\begin{cases}a+b=\gcd(a^3,b^3)\\ b+c=\gcd(b^3,c^3)\\ c+a=\gcd(c^3,a^3)\end{cases}$$ for positive integers $a,b,c$ . We can rewrite as: $$\begin{cases}a+b=\gcd(a,b)^3\\ b+c=\gcd(b,c)^3\\ c+a=\gcd(c,a)^3\end{cases}$$ I ran a program that checked $1\le a\le b\le c \le 500$ , and it didn’t find any solutions. @RandomGuy checked all the numbers up to $200000$ and didn’t find any solutions, too (see in the comments). Probably there aren’t any. Progress I can show that $\color{red}{\gcd(a,b,c)=1}$ . Indeed, let $d=\gcd(a,b,c)$ , $a=da_1$ , $b=db_1$ , $c=dc_1$ . Note that $\gcd(a,b)=\gcd(da_1,db_1)=d\cdot\gcd(a_1,b_1)$ . Our system then becomes: $$\begin{cases}a_1+b_1=d^2\cdot\gcd(a_1,b_1)^3\\ b_1+c_1=d^2\cdot\gcd(b_1,c_1)^3\\ c_1+a_1=d^2\cdot\gcd(c_1,a_1)^3\end{cases}\tag1$$ Note that $a_1+b_1+ c_1$ and $d$ are coprime. If $p\mid a_1+b_1+ c_1$ and $p\mid d$ then $p\mid d^2\cdot\gcd(a_1,b_1)^3 = a_1+b_1$ . But then also $p\mid (a_1+b_1+ c_1)-(a_1+b_1)=c_1$ . As well as $p\mid a_1$ and
Adding all three equations we get $$\sum_{cyc}\gcd(a,b)^3=2(a+b+c)$$ Since the left side is a sum of three cubes equal to an even number, either all three gcds are even or exactly one of them is. In either case at least one $\gcd$ is even, which guarantees that (at least) two of the numbers are even. Case 1: Assume all the numbers are even and write $a=2a_1$ , $b=2b_1$ , $c=2c_1$ ; then the system reduces to $$ a_1+b_1=4\gcd(a_1,b_1)^3\\ a_1+c_1=4\gcd(a_1,c_1)^3\\ b_1+c_1=4\gcd(b_1,c_1)^3 $$ Solving for $a_1$ we get $$a_1=2(\gcd(a_1,b_1)^3+\gcd(a_1,c_1)^3-\gcd(b_1,c_1)^3)$$ and similar expressions for $b_1,c_1$ show that they are even too. Now we can create $a_2,b_2,c_2$ in the same fashion and keep going forever, which would mean the power of $2$ in $a,b,c$ is infinite. This is absurd, so no numbers satisfy the system in this case. Case 2: assume $a,b$ are even and $c$ is odd. Edit: No solutions till 200,000 by brute force. This case I'm not entirely sure how to prove, so I'll leave it with just the proof for Case 1.
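For what it's worth, the brute-force search mentioned in the question is easy to reproduce on a small range (a sketch; by symmetry of the system we may assume $a\le b\le c$):

```python
from math import gcd

def ok(a: int, b: int, c: int) -> bool:
    return (a + b == gcd(a, b) ** 3
            and b + c == gcd(b, c) ** 3
            and c + a == gcd(c, a) ** 3)

LIMIT = 150   # tiny compared to the 200,000 reported, but fast
sols = [(a, b, c)
        for a in range(1, LIMIT + 1)
        for b in range(a, LIMIT + 1)
        for c in range(b, LIMIT + 1)
        if ok(a, b, c)]
print(sols)  # []
```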
|elementary-number-theory|systems-of-equations|gcd-and-lcm|
0
Calculate two limits related to a recursively defined sequence
I am currently studying mathematical analysis, specifically the theory of sequences, and I have come across a challenging problem in my homework that I am unable to solve. I would greatly appreciate any assistance or insight you can provide. The problem is as follows: Consider the sequence defined recursively by $x_{n+2}=x_{n+1}+\frac{x_n}{\ln x_n}$ for $n\geq 0$ , where $x_0,x_1>1$ . I am tasked with finding the limits of two expressions. Firstly, I need to determine $$c:=\lim_{n\rightarrow +\infty}\frac{\ln^2 x_n}{n}.$$ Secondly, I need to calculate $$\lim_{n\rightarrow +\infty}\frac{\ln^2x_n-cn}{\sqrt{n}}.$$
You have by an easy induction that $x_n>1$ for any $n\in\mathbb N$ ; then $(x_n)$ is increasing and it tends to infinity, for otherwise its limit $l$ would satisfy $l=l+\frac{l}{\ln(l)}$ , hence $l=0$ : contradiction. Now you can use that to get $x_{n+2}\sim x_{n+1}$ and from $\displaystyle \frac{x_{n+2}}{x_{n+1}}=1+\frac{x_n}{x_{n+1}\ln(x_n)}$ you also have $\ln(x_{n+2})-\ln(x_{n+1})\sim \frac{x_n}{x_{n+1}\ln(x_n)}\sim\frac{1}{\ln(x_n)}$ and $\ln(x_{n+2})+\ln(x_{n+1})\sim 2\ln(x_{n+1})$ hence $\ln^2(x_{n+2})-\ln^2(x_{n+1})\sim 2$ . Then $\ln^2(x_n)\sim 2n$ and hence $c=2$ . For the other limit, try to use first-order expansions instead of equivalences as I did.
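The first limit $c=2$ can also be observed numerically by iterating the recursion (a sketch; the starting values $x_0=x_1=2$ are my choice, any values greater than $1$ should do):

```python
import math

x_prev, x_cur = 2.0, 2.0   # x_0, x_1 > 1
N = 20_000
for _ in range(N - 1):     # advance to x_N
    x_prev, x_cur = x_cur, x_cur + x_prev / math.log(x_prev)

val = math.log(x_cur) ** 2 / N
print(val)  # close to 2; the gap decays roughly like 1/sqrt(N)
```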
|real-analysis|calculus|sequences-and-series|limits|recursion|
0
Subset of integers called by a somewhat ill-defined property
I know there were some issues with set theory that involved self-reference, a famous example being Russell's paradox. Here, I have a set that I construct using the somewhat vague term "use", but eventually the answer leads to a contradiction with the property itself. What I want to ask is: can someone explain what goes wrong here formally (e.g. which axiom I am using falsely)? $$A= \{n\in \mathbb{Z} :n \text{ was used in real life by someone in any setting}\}$$ Where by any setting I mean, $n$ could be a number of days, it could be the highest number a child counted to when they were $10$ (if they did) and also when they were $12$ (if they did); but other numbers that were never acknowledged by anyone, such as my heart rate yesterday at 2:04 PM (which I never really counted), are excluded. (*) My problem is that I think we can somehow agree that all numbers that will ever be used or acknowledged (acknowledgement being a use case) are bounded somehow, because people are finite objects and there will be finitely ma
I am no expert in these matters, just an enthusiast, so I would be happy to be corrected if I am wrong. However, as I understand it, the problem here is the precise meaning of the words used in real life . The moment you pin down the meaning of this, it becomes a matter of logic to deduce what comes after. Let's assume you can somehow formalise all you said in a theory. That would mean that the existence of this set leads to a contradiction (which would also be formalised, of course). You can live with that, but in most logical systems contradictions are avoided at all costs, as they lead to the provability of every single statement, which is not very helpful. Rather, you can somehow "ban" this set from existing in your theory, meaning you choose your axioms such that this construction is no longer possible. That is reminiscent of Russell's paradox and the later development of Zermelo-Fraenkel set theory.
|logic|set-theory|natural-numbers|
0
Can second derivative indicate convergence of infinite series?
I was thinking about this question: Why does the series $\sum_{n=1}^\infty\frac1n$ not converge? and investigating the rate of decrease of the terms (of any series with decreasing terms, not just the harmonic one) at which the sum remains bounded and doesn't diverge. I was looking at different series and trying to figure out a criterion for determining whether a sum of subsequent terms can reach the first term (for the harmonic series, the sum of $2^{n-1}$ subsequent terms will always reach $\frac{1}{2}$ ). While looking at that, I came to the following intuitive criterion for determining the convergence of a series: For any infinite series $\sum_{n=1}^{+\infty}\frac{1}{f(n)}$ where $f(n)$ is a monotonically increasing function, if $f(n)$ grows faster than $const\cdot n\cdot \ln n$ then the series converges; otherwise the series diverges. In other words, if $f''(n)>\frac{const}{n}$ the series converges; otherwise, it diverges. Notice: initially my criterion was $f''(n)>0$ , but based on
It can be seen that a very simple convergence criterion follows from the above: For any infinite series $\sum_{n=1}^{+\infty}\frac{1}{f(n)}$ where $f(n)$ is a monotonically increasing function, if $\lim\limits_{n\to\infty}\frac{f(n)}{n\ln n} = \infty$ then the series converges, otherwise the series diverges. However, this criterion doesn't work, because $\sum\limits_{n=2}^\infty \frac {1} {n\ln n\ln(\ln n)}$ diverges. Therefore, I cannot find a "border" function - the fastest-growing $f(n)$ for which the series still diverges - so that it could serve as a reference function: all functions that grow faster than it would give convergent series. I wish there could be such a function.
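The divergence of $\sum \frac{1}{n\ln n\ln(\ln n)}$ follows from the integral test, since $\frac{d}{dx}\ln(\ln(\ln x))=\frac{1}{x\ln x\,\ln(\ln x)}$ and $\ln(\ln(\ln x))\to\infty$. A quick symbolic check of that antiderivative (a sketch using sympy):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# d/dx ln(ln(ln x)) equals the general term 1/(x*ln(x)*ln(ln(x))),
# so by the integral test the series diverges (ln ln ln x -> infinity).
deriv = sp.diff(sp.log(sp.log(sp.log(x))), x)
term = 1 / (x * sp.log(x) * sp.log(sp.log(x)))
print(sp.simplify(deriv - term))  # 0
```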
|sequences-and-series|convergence-divergence|
0
How to prove that a manifold with Self-Dual Riemann tensor is Ricci-flat
By self-dual I mean that \begin{equation} *\mathcal{R}_{ab}=\mathcal{R}_{ab}\,, \end{equation} where $\mathcal{R}$ is the curvature 2-form, related to the Riemann tensor $R$ by $\mathcal{R}_{ab}= (1/2) R_{abcd} \,\,e^c\wedge e^d$. By Ricci flat I mean that the Ricci tensor, which can be obtained as the 1-3 contraction of the Riemann tensor, vanishes. I know how to prove the statement in the title by explicit computation, using the symmetry properties of the Riemann tensor $R_{abcd}= - R_{bacd}$, $R_{abcd} = -R_{ab dc}$, $R_{abcd}=R_{cdab}$ , and the algebraic Bianchi identity $R_{a[bcd]}=0$. I would like to see a "computation free" proof of the statement in the title.
I found a 'computation-free' way by reading this article by ST68. The method uses Theorem 1.3 of that article, i.e., we need to show \begin{equation} *R\in\mathcal{R}_3\,, \end{equation} where $R$ denotes the curvature form. One can show that the self-duality condition for the curvature form is equivalent to the same equation for the curvature operator, i.e. the composition of the Hodge star with the curvature operator equals the curvature operator. Then by symmetry we deduce \begin{equation} *R=R=R*\,, \end{equation} where $R$ now stands for the curvature operator, and therefore \begin{equation} *R=R*\,,\qquad s=\operatorname{tr}(R)=0\,. \end{equation} The first condition is equivalent to saying that the 4-manifold is Einstein. To obtain the second, decompose the space of two-forms into self-dual and anti-self-dual parts; the trace of the lower-right block, which equals half of the scalar curvature, is then zero.
|differential-geometry|riemannian-geometry|
0
shorter proof of generalized mediant inequality?
Show $\frac{a_{1}+\dots+a_{n}}{b_{1}+\dots+b_{n}}$ is between the smallest and largest fraction $\frac{a_{i}}{b_{i}}$, where $b_{i}>0$. Attempt: Assume the largest is $\frac{a_{n}}{b_{n}}$. Then $$\frac{a_{n}}{b_{n}}-\frac{a_{1}+\dots+a_{n}}{b_{1}+\dots+b_{n}} = \frac{b_{1}+\dots+b_{n-1}}{b_{1}+\dots+b_{n}}\left[\frac{a_{n}}{b_{n}}-\frac{a_{1}}{b_{1}+\dots+b_{n-1}}-\dots-\frac{a_{n-1}}{b_{1}+\dots+b_{n-1}}\right],$$ but I could not finish showing that the bracket is nonnegative. any hints or solutions?
A geometric proof, almost without words: but with words, it's better. Let $V_k=\pmatrix{a_k\\b_k}$ and let $\theta_k \in I$ be its polar angle, where $I=(0,\tfrac{\pi}{2})$ . The idea is that it suffices to "transfer" the order between the fractions $a_k/b_k$ and the order of the polar angles $\theta_k$ . In more detail: as $\frac{b_k}{a_k}=\tan\theta_k$ , i.e. $\theta_k=\arctan(b_k/a_k)$ , the condition $$\frac{a_1}{b_1}\le\frac{a_2}{b_2}\le\dots\le\frac{a_n}{b_n}$$ plainly means that the indices of the vectors are ordered monotonically by their polar angles (because the function $\arctan$ is increasing on $I=(0,\tfrac{\pi}{2}).$ ) Consider the convex cone $(C)$ (rendered with cyan color on the above figure) (see here ) generated by the $V_k$ s. $(C)$ has generic element $W=\sum c_k V_k$ (all $c_k>0$ ) which is in-between the two extremal rays $\mathbb{R_+}V_1$ and $\mathbb{R_+}V_n$ . "Being situated between" means that the polar angle of $W$ lies between $\theta_1$ and $\theta_n.$ In our case, we
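As a quick numerical illustration of the statement itself (not of the geometric argument), one can check with numpy that the mediant always lands between the extreme ratios:

```python
import numpy as np

# Numerical illustration (not a proof): for random numerators and
# positive denominators, the "mediant" (sum a_i)/(sum b_i) always lies
# between the smallest and largest ratio a_i/b_i.
rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.uniform(-10, 10, size=8)
    b = rng.uniform(0.1, 10, size=8)   # b_i > 0
    ratios = a / b
    mediant = a.sum() / b.sum()
    assert ratios.min() <= mediant <= ratios.max()
print("mediant stayed between min and max ratios in all trials")
```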
|calculus|inequality|recreational-mathematics|
0
How to show that an non-linear ODE has at least two solutions in $C^2(0,1)$
I'm given a differential equation that reads as follows $$-\frac{d}{dx}(u(x)u'(x))=1$$ with the auxiliary condition $u(0)=u(1)=0$ and the set $C^2(0,1)=\left\{g\in C^2(0,1)\cap C[0,1] \mid g(0)=g(1)=0\right\}$ What I tried: I used that $uu'=\frac{(u')^2}{2}$ Integrating both sides, I get $$u'(x)^2=-2x+C_1\implies u'(x)= \pm \sqrt{-2(x+C_1)}$$ does it follow, since there is a $\pm$ , that the equation has at least (or exactly) two solutions?
You have the right idea, except for some little typos. In this case, after integrating twice you find that $$ u(x)^2=-x^2+C_1x+C_2. $$ Using the first boundary condition $u(0)=0$ you get $C_2=0$ . Similarly, using that $u(1)=0$ one can infer that $C_1=1$ . Then the solution satisfies $$ u(x)^2=-x^2+x=x(1-x). $$ Observe that for $x\in(0,1)$ we have $x(1-x)>0$ , and hence on $(0,1)$ there are at least two solutions $$ u_1(x)=\sqrt{x(1-x)} \qquad \hbox{ and } \qquad u_2(x)=-\sqrt{x(1-x)}. $$
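Both candidates can be checked against the ODE symbolically; a quick sketch with sympy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Verify that u(x) = +-sqrt(x(1-x)) satisfies -(u u')' = 1 on (0, 1)
# and the boundary conditions u(0) = u(1) = 0.
for sign in (1, -1):
    u = sign * sp.sqrt(x * (1 - x))
    lhs = -sp.diff(u * sp.diff(u, x), x)
    assert sp.simplify(lhs - 1) == 0
    assert u.subs(x, 0) == 0 and u.subs(x, 1) == 0
print("both u1 and u2 solve the boundary-value problem")
```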
|ordinary-differential-equations|partial-differential-equations|
0
Knock out tournament 3
Thirty-two players ranked 1 to 32 are playing in a knockout tournament. Assume that in every match between any two players, the better-ranked player wins. What is the probability that the players ranked 1 and 2 are the winner and runner-up, respectively? My approach: But the given answer is 16/31. Could someone please explain where I went wrong?
There are a couple of mistakes in your approach. One, you are excluding player 1 (top-ranked) from all five matches - the whole tournament basically. You should be considering only the first four matches. Two, your approach assumes that every player is equally likely to face the top ranked (or second ranked) player in the four matches building up to the finals. That's not correct. The lower a player's rank, the less likely he/she is to play against either of the two players. Correct Solution Here's one approach we could take to solve this problem. There will be $4$ matches before the final. For players with rank $1$ and $2$ to be the winner and the runner-up, they must both reach the finals. And for that to happen, the only condition necessary is that they don't meet in earlier matches. They are guaranteed to win against other opponents. Looking at things from the perspective of any of the two players (I'm going with rank 1), the total number of ways in which their opponents to the fir
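The 16/31 answer can also be checked by brute force. The sketch below fixes player 1 in slot 0 of a 32-slot bracket, tries every slot for player 2, and simulates the knockout (only the relative positions of ranks 1 and 2 matter, since the better rank always wins):

```python
# Brute-force check of the 16/31 answer: player 1 in slot 0, player 2
# in each of the 31 remaining slots, other players filled in arbitrarily.
def final_two(bracket):
    players = list(bracket)
    while len(players) > 2:
        # In each round, the better (smaller) rank wins every pairing.
        players = [min(players[i], players[i + 1])
                   for i in range(0, len(players), 2)]
    return players  # the two finalists

favorable = 0
for k in range(1, 32):
    rest = iter(range(3, 33))
    bracket = [1] + [2 if i == k else next(rest) for i in range(1, 32)]
    finalists = final_two(bracket)
    if min(finalists) == 1 and max(finalists) == 2:
        favorable += 1

print(favorable, "/ 31")  # 16 / 31
```

Player 2 reaches the final exactly when placed in the half not containing player 1, i.e. in 16 of the 31 equally likely relative positions.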
|probability|combinatorics|
0
Does $f(x,y)= x^2 + y^3 -3y - 2$ have a minimum at the point $(0,1)$?
Let's consider the function: $$f(x,y)= x^2 + y^3 -3y - 2.$$ I find two critical points: $(0,1)$ and $(0,-1) $ . Using the definition method with $h_1=x$ and $h_2=y-1$ to test the point $ (0,1) $ , I find that: $$f(h_1,1+h_2)-f(0,1)= h_1^2 + h_2^2(h_2+3).$$ We can use $h_1=0$ to find that the sign is not constant so there is no extremum, is that correct?
For $|h_2|<3$ we have $h_2+3>0$ , so $f(h_1,1+h_2)-f(0,1)= h_1^2 + h_2^2(h_2+3)\ge 0$ near $(0,0)$ . Thus $(0,1)$ is a local minimum for $f$ , but it is not a global minimum.
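As a quick symbolic check of the expansion used in the question (a sketch with sympy):

```python
import sympy as sp

x, y, h1, h2 = sp.symbols('x y h1 h2')
f = x**2 + y**3 - 3*y - 2

# Check the expansion f(h1, 1+h2) - f(0, 1) = h1^2 + h2^2*(h2 + 3).
diff_expr = f.subs({x: h1, y: 1 + h2}) - f.subs({x: 0, y: 1})
assert sp.expand(diff_expr - (h1**2 + h2**2 * (h2 + 3))) == 0
print("expansion confirmed")
```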
|multivariable-calculus|optimization|
0
Sum of norms induced by PSD matrices
Suppose you have two positive definite matrices $M$ and $N \in \mathcal{S}_{++}^n$ . They induce scalar products and norms on $\mathbb{R}^n$ : \begin{align} \lVert x \rVert_M &= \sqrt{x^\intercal M^{-1} x}, & \lVert x \rVert_N &= \sqrt{x^\intercal N^{-1} x}. \end{align} Introduce the function $f:x\mapsto \lVert x \rVert_M + \lVert x \rVert_N$ . It is a norm as it is the sum of two norms. However, it is not induced by a scalar product. Can we upper-bound $f$ by a norm induced by another positive definite matrix?
Since $2\sqrt{ab}\leq a+b$ , we have (writing $\| x \|_M=\sqrt{x^TMx}$ ; the same argument applies verbatim with $M^{-1}$ , $N^{-1}$ in place of $M$ , $N$ ) $$\| x \|_M + \| x \|_N=\sqrt{x^TMx}+\sqrt{x^TNx}\leq \\\sqrt{2(x^TMx+x^TNx)}= \sqrt{2x^T(M+N)x}=\| x \|_{2(M+N)}.$$
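The bound can be sanity-checked numerically for random positive definite matrices (an illustration, not a proof):

```python
import numpy as np

# Numerical check of  sqrt(x'Mx) + sqrt(x'Nx) <= sqrt(2 x'(M+N)x)
# for random positive definite M, N and random x.
rng = np.random.default_rng(42)
n = 5
for _ in range(200):
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    M = A @ A.T + n * np.eye(n)   # positive definite by construction
    N = B @ B.T + n * np.eye(n)
    x = rng.standard_normal(n)
    lhs = np.sqrt(x @ M @ x) + np.sqrt(x @ N @ x)
    rhs = np.sqrt(2 * x @ (M + N) @ x)
    assert lhs <= rhs + 1e-12
print("bound held in all trials")
```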
|inner-products|positive-definite|matrix-norms|
1
Different ways to describe Hilbert's Hotel
In math, Hilbert's Hotel (e.g. https://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Grand_Hotel ) is a famous paradox that describes a situation where a hotel with infinitely many rooms and full capacity somehow has the ability to still accommodate more guests. I have always found this concept difficult to understand and often resort to the following explanation: How many numbers are there between 3 and 1? An infinite number of numbers (e.g. 1.1, 1.111, 1.1111....2.1, 2.1111, etc.) How many numbers are there between 2 and 1? Also an infinite number of numbers Yet we all know that the space between (3-1) > (2-1) This is the paradox: from one angle, a number appears to be significantly smaller than another number - yet it is still somehow possible to construct equally sized intervals around these numbers and then pack an infinite quantity of numbers within these intervals Is my interpretation of Hilbert's Hotel correct? Note: I don't actually understand how this paradox was resolved.
A key to understanding Hilbert's Hotel is the idea of "countable infinity". In intuitive terms, a set is countably infinite if it is the "same size" as the set of natural numbers $\mathbb{N} = \{1, 2, 3, \ldots\}$ . How do we determine if two sets are the same size, especially infinite ones? Let's start with finite sets and see if we can work our way up to infinite sets. For example, let's examine the sets A = {1,2,3,4}, B = {5,6,7,8}, and C = {9,10,11}. We're using numbers here because it's easier, but remember sets can contain anything. Now, intuitively it feels like A and B are the same size (4, to be exact), but C is a different size. But how do we show this? Let's "pair up" the elements of A and B. 1 and 5 pair up, 2 and 6 pair up, 3 and 7 pair up, and 4 and 8 pair up. Since we can build this pairing (in mathematics, we'd call it a "mapping" or a "map") in a way that every element of A is paired with exactly one element of B, and every element of B is paired with exactly one eleme
|set-theory|
1
Why is the inner product of two simple vectors simple? (Geometric algebra)
I’m trying to understand the reason for the assertion on page 20 of Hestenes and Sobczyk’s “Clifford Algebra to Geometric Calculus” that If $B$ is a simple $s$ -vector, then $B\cdot A$ [ where $A$ is a simple $n$ -vector ] is simple. According to page 4, a multivector $A_r$ is called a simple $r$ -vector iff it can be factored into a product of $r$ anticommuting vectors $a_1, a_2,…, a_r$ , that is $$ A_r = a_1a_2…a_r,$$ where $a_ja_k = -a_ka_j$ for $j, k = 1, 2, …, r$ , and $j\neq k$ . However, I can’t see how that is true even if $B$ were a simple $1$ -vector $b=b_1+b_2+b_3$ and $A=a_1a_2a_3$ a simple $3$ -vector with $b_i$ parallel $a_i$ . In this case, $$\begin{aligned}b\cdot A &= (b_1\cdot a_1)a_2a_3 - a_1(b_2\cdot a_2)a_3 + a_1a_2(b_3\cdot a_3) \\ &= (b_1\cdot a_1)a_2a_3 + a_1\left[a_2(b_3\cdot a_3) - (b_2\cdot a_2)a_3\right] \end{aligned}$$ and I can’t factor this further into a simple $2$ -vector.
$ \newcommand\form[1]{\langle#1\rangle} \newcommand\lcontr{\mathbin\rfloor} $ This can be proved by induction on grade, but instead I am going to give a "vector free" approach. Assume we have an $n$ -dimensional vector space $V$ equipped with a nondegenerate metric which generates a geometric algebra. (When the metric is degenerate we can still make the arguments to follow work, but we have to do some shenanigans with the dual space $V^*$ .) First, notation: your inner product $\cdot$ can be defined on $s$ - and $t$ -vectors $A_s, B_t$ by $$ A_s\cdot B_t = \form{A_sB_t}_{|s-t|}. $$ However, the left contraction $$ A_s\lcontr B_t = \form{A_sB_t}_{t-s} $$ is more well behaved (where the grade projection is defined to be $0$ when $t-s$ is negative). For instance, for arbitrary multivectors $A, B, C$ we have $$ (A\wedge B)\lcontr C = A\lcontr(B\lcontr C),\quad (A\wedge B)*C = A*(B\lcontr C) $$ with $A*B = \form{AB}_0$ the scalar product. The second adjoint identity can be taken as a defini
|geometric-algebras|
1
Does $f(x,y)= x^2 + y^3 -3y - 2$ have a minimum at the point $(0,1)$?
Let's consider the function: $$f(x,y)= x^2 + y^3 -3y - 2.$$ I find two critical points: $(0,1)$ and $(0,-1) $ . Using the definition method with $h_1=x$ and $h_2=y-1$ to test the point $ (0,1) $ , I find that: $$f(h_1,1+h_2)-f(0,1)= h_1^2 + h_2^2(h_2+3).$$ We can use $h_1=0$ to find that the sign is not constant so there is no extremum, is that correct?
The second derivative test can be used here as well. The Hessian matrix of second derivatives is $$ Hf(x,y) = \begin{pmatrix} 2 & 0 \\ 0 & 6y \end{pmatrix} $$ We see that at the critical point $(0,1)$ , the matrix is diagonal with two positive entries. This tells us $(0,1)$ is a local minimum. At the other critical point $(0,-1)$ , the matrix has negative determinant, so the critical point is a saddle point. There is neither a global maximum nor a global minimum value, since $\lim_{y\to \infty} f(0,y) = +\infty$ and $\lim_{y\to-\infty} f(0,y) = - \infty$ .
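The classification above can be reproduced symbolically; a short sketch with sympy (an illustration of the second derivative test, not part of the original answer):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**3 - 3*y - 2

# Find the critical points and classify each via the Hessian.
crit = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
H = sp.hessian(f, (x, y))

classification = {}
for pt in crit:
    Hp = H.subs(pt)
    label = ("saddle" if Hp.det() < 0
             else "local min" if Hp[0, 0] > 0
             else "local max")
    classification[pt[y]] = label
    print(pt, label)
```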
|multivariable-calculus|optimization|
0
Why does a compact-open topology define a subbase of continuous functions?
Why do the sets defining the compact-open topology form a subbase for the space of continuous functions? It is clear that, for a family of such sets to be a subbase, it must cover the space of all continuous functions from $X$ to $Y$ , but it does not seem obvious to me that in an arbitrary topological space there is a union of such compact sets on which the function we need is defined. As I understand it, the answer to this question should be obvious, but I still don't understand why this is so. I will be grateful for any reasoning.
The phrase "prebase" is a bit unusual, most frequently one uses the word subbase . Given a set $X$ and a set $\Sigma$ of subsets of $X$ , there is a coarsest topology $\mathcal T(\Sigma)$ containing $\Sigma$ . It is called the topology generated by $\Sigma$ . Explicitly it can be described as follows: Let $\mathcal B(\Sigma)$ be the set of all finite intersections of elements of $\Sigma$ . That is, $$\mathcal B(\Sigma) = \{ \bigcap_{S \in F} S \mid F \subset \Sigma \text{ is finite }\}.$$ $F = \emptyset$ is allowed and one understands $\bigcap_{S \in \emptyset} S = X$ . This is because for $x \in X$ we have $x \in \bigcap_{S \in F} S$ iff $x \in S$ for all $S \in F$ . If $F = \emptyset$ , then each $x \in X$ satisfies $x \in \bigcap_{S \in \emptyset} S$ for all $S \in \emptyset$ , simply because there are no such $S$ . Clearly $\Sigma \subset \mathcal B(\Sigma)$ . Take $\mathcal T(\Sigma)$ to be set of unions of elements of $\mathcal B(\Sigma)$ . That is, $$\mathcal T(\Sigma) = \{ \big
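The two-step construction described above (finite intersections, then arbitrary unions) can be carried out literally on a finite example; a small sketch:

```python
from itertools import combinations

def topology_from_subbase(X, subbase):
    """Generate the topology on a finite set X from a subbase, following
    the recipe: finite intersections first, then arbitrary unions."""
    sets = [frozenset(s) for s in subbase]
    # All finite intersections; the empty intersection gives X itself.
    base = {frozenset(X)}
    for r in range(1, len(sets) + 1):
        for combo in combinations(sets, r):
            base.add(combo[0].intersection(*combo[1:]))
    # All unions of base elements; the empty union gives the empty set.
    topology = {frozenset()}
    members = list(base)
    for r in range(1, len(members) + 1):
        for combo in combinations(members, r):
            topology.add(combo[0].union(*combo[1:]))
    return topology

X = {1, 2, 3}
T = topology_from_subbase(X, [{1, 2}, {2, 3}])
print(sorted(sorted(s) for s in T))
# [[], [1, 2], [1, 2, 3], [2], [2, 3]]
```

Note that the subbase $\{\{1,2\},\{2,3\}\}$ already covers $X$, and the generated topology picks up $\{2\}$ as a finite intersection.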
|general-topology|functional-analysis|
0
Does $f(x,y)= x^2 + y^3 -3y - 2$ have a minimum at the point $(0,1)$?
Let's consider the function: $$f(x,y)= x^2 + y^3 -3y - 2.$$ I find two critical points: $(0,1)$ and $(0,-1) $ . Using the definition method with $h_1=x$ and $h_2=y-1$ to test the point $ (0,1) $ , I find that: $$f(h_1,1+h_2)-f(0,1)= h_1^2 + h_2^2(h_2+3).$$ We can use $h_1=0$ to find that the sign is not constant so there is no extremum, is that correct?
Your $$f(h_1,1+h_2)-f(0,1)=h_1^2+h_2^2(h_2+3)$$ is $≥0$ for every $(h_1,h_2)$ in $\Bbb R\times[-3,\infty)$ , which is a neighborhood of $(0,0)$ . So, you proved that $f$ has a local minimum at $(0,1)$ . Your last sentence shows that this minimum is not global. An analogous study at $(0,−1)$ results in $h_1^2+h_2^2(h_2-3)$ , which takes both positive and negative values on any neighborhood of $(0,0)$ , hence $f$ has a saddle point at $(0,−1)$ .
|multivariable-calculus|optimization|
0
What is the range of the function $\frac{\sqrt{x^2 + 4x + 3}}{x-5}$
I want to find the domain and range of the function $y = \frac{\sqrt{x^2 + 4x + 3}}{x-5}$ . I found the domain to be $(-\infty,-3] \cup [-1, 5) \cup (5, \infty)$ . I also found the inverse of the function: $x = \frac{10y^2+4 \pm \sqrt{192y^2+4}}{2(y^2-1)}$ , which is defined when $y \not= 1$ and $y \not= -1$ . For this inverse to make sense $x$ needs to be a value in the domain, so I figure the range would be all values of $y$ such that: $\frac{10y^2+4 \pm \sqrt{192y^2+4}}{2(y^2-1)} \leq -3$ or $-1 \leq \frac{10y^2+4 \pm \sqrt{192y^2+4}}{2(y^2-1)} < 5$ or $5 < \frac{10y^2+4 \pm \sqrt{192y^2+4}}{2(y^2-1)}$ . Problem is, it's hard to solve for an explicit value of $y$ in the above inequalities, or at least I don't know how to do it. Is there a way to do it? Or maybe a completely different approach that will give me the range of the function? Symbolab says the range should be $y \leq 0$ and $y > 1$ , but I want to know how to get there.
The function is continuous. As $x$ approaches $5$ from the right, the numerator approaches $\sqrt{48}$ and the denominator approaches $0$ and stays positive, so the values are arbitrarily large and positive. As $x$ approaches $5$ from the left, the denominator approaches $0$ and is negative, so the values are arbitrarily large and negative. So we know the range will include intervals of the form $(a,\infty)$ and $(-\infty,b)$ , by the Intermediate Value Theorem. Because $f(-1)=0$ and the values grow arbitrarily large and negative as you move along the interval $[-1,5)$ , the range includes all negative values and $0$ , so the range includes $(-\infty,0]$ . As $x\to\infty$ , the function approaches $1$ . Because we have $$\frac{\sqrt{(x+3)(x+1)}}{(x-5)},$$ for $x\gt 5$ , each factor in the square root is greater than $x-5$ , so this fraction is greater than $1$ . Since $\lim_{x\to 5^+}f(x) = \infty$ and $\lim_{x\to\infty}f(x) = 1$ , but $f(x)\gt 1$ for all $x\gt 5$ , the range includes
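A quick numerical sample (an illustration, not a proof) that the values avoid the gap $(0,1]$:

```python
import math

def f(x):
    return math.sqrt(x * x + 4 * x + 3) / (x - 5)

# Sample the three pieces of the domain and confirm no value lands
# in the gap (0, 1]: values are <= 0 on (-inf,-3] and [-1,5),
# and > 1 on (5, inf).
samples = ([-3 - 0.5 * k for k in range(200)]          # x <= -3
           + [-1 + 6 * t / 1000 for t in range(1000)]  # -1 <= x < 5
           + [5.1 + 0.5 * k for k in range(200)])      # x > 5
values = [f(x) for x in samples]
assert all(v <= 0 or v > 1 for v in values)
print("no sampled value fell in (0, 1]")
```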
|calculus|algebra-precalculus|functions|
0
HessianMatrix (norm[2](x)) in Wolfram
I'm very new to WolframAlpha. I'm trying to compute the Hessian matrix of $\|x\|$ , where $x$ is a vector. I tried the command HessianMatrix (norm[2][x]) in Mathematica 13, where I use a WolframAlpha input cell, but nothing happens. Any idea? I'm probably lacking some basic knowledge.
In WolframAlpha, you can get the Hessian determinant for a multivariable function $f(x_1,x_2,\ldots)$ with the syntax Hessian matrix (or determinant) f(x1, x2, ...) This usually also works if you have additional parameters, in which case one should specify which variables to use in the component derivatives, e.g. Hessian matrix f(x1, x2, y) w.r.t. {x1, x2} Both the matrix and the determinant are included in either flavor of input. For your use case, one would resort to Hessian matrix norm({x, y, z}) As you can see, the output doesn't know how to handle derivatives of the built-in Abs function (absolute value). Sadly, there doesn't appear to be any means of elaborating on the WA query and introducing extra conditions on the variables, such as fixing $(x,y,z)\in\Bbb R^3$ . However, we can reduce Abs'[x] and Abs''[x] based on the sign of x , so cleaning up the output isn't a difficult task. Since you mentioned you were working in a notebook, you can carry out this cleanup with the following code: D[No
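For comparison, the Hessian of the Euclidean norm can also be computed symbolically in Python (a sketch with sympy, as an alternative to the WolframAlpha route); away from the origin the known closed form is $\frac{1}{\|x\|}\left(I-\frac{xx^\intercal}{\|x\|^2}\right)$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)          # ||x|| for a 3-vector

H = sp.hessian(r, (x, y, z))

# Compare with the closed form (I - vv^T/r^2)/r, valid away from 0.
v = sp.Matrix([x, y, z])
closed = (sp.eye(3) - v * v.T / r**2) / r
assert sp.simplify(H - closed) == sp.zeros(3, 3)
print("Hessian of the norm matches (I - xx^T/r^2)/r")
```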
|wolfram-alpha|
0
Are these two different definitions of simply connected?
In Donald Marshall's notes on complex analysis he defines a region $\Omega$ as simply connected if $S^2 \setminus \pi(\Omega)$ is connected, where $\pi$ is the stereographic projection. This seems to include regions like $\Omega = \{z : \|z \| > 1\}$ , since $S^2 \setminus \pi(\Omega)$ is just the lower hemisphere, which is connected. In algebraic topology, a region is simply connected if any two closed curves in $\Omega$ are homotopic. But, thinking of $\mathbb{C}$ as $\mathbb{R}^2$ , this would exclude the $\Omega$ above, since a loop around the missing hole is not homotopic to a loop that does not go around the missing hole. Are these definitions meant to be different? In complex analysis, do we allow a homotopy to "go through infinity"? Am I just missing something?
For a connected open set $\Omega\subset\Bbb C$ , asserting that $\Omega$ is simply connected in the sense that any two closed curves in $\Omega$ are homotopic is equivalent to the assertion that the complement of $\Omega$ on the Riemann sphere is connected. You will find a proof in John B. Conway's Functions of One Complex Variable I , for instance. Take a look at Theorem 2.2 from Chapter VIII, where a list of nine (!) equivalent conditions is given.
|complex-analysis|algebraic-topology|
0
Show that $ \Vert Tx_\varepsilon -x_\varepsilon \Vert \leq \varepsilon $
Exercise: Let $X$ be a Banach space and $\Vert \cdot \Vert$ a norm on $X$ . Let $B_1= \{ x \in X:\Vert x \Vert=1 \} $ . Suppose the operator $T: B_1\rightarrow B_1$ satisfies $\Vert Tx-Ty\Vert \leq \Vert x-y\Vert$ for all $x, y\in B_1$ . Then for all $\varepsilon \in (0,1)$ , there exists $x_\varepsilon\in B_1$ such that $\Vert Tx_\varepsilon-x_\varepsilon \Vert\leq \varepsilon$ . My attempt: This exercise looks quite like the Fixed Point Theorem, so I wanted to prove it in a similar way. But I failed: without a contraction constant $0<k<1$ as in the Fixed Point Theorem, I can't construct a Cauchy sequence, so I can't find an $x$ (even one depending on $\varepsilon$ ) that makes $Tx$ and $x$ close enough. I'm not sure if my attempt is on the right track. Update: According to @Brian-Moehring, there is a mistake in this exercise: $B_1$ should probably be the unit ball rather than the unit sphere.
I think $B_1$ stands for the unit ball instead of the unit sphere. In that case, consider $Ux = \frac12 Tx + \frac12 (1- \epsilon)x$ . You can easily prove that this map is a contraction (with constant $1-\frac\epsilon2$ ) mapping $B_1$ into itself, so it has a fixed point $x_\epsilon$ . $$Ux_\epsilon = x_\epsilon \implies Tx_\epsilon = (1+\epsilon)x_\epsilon \implies Tx_\epsilon - x_\epsilon = \epsilon x_\epsilon \implies \left\|Tx_\epsilon - x_\epsilon \right\|\le \epsilon.$$
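As a finite-dimensional illustration of the trick (a hypothetical example: $T$ chosen as a nonexpansive affine self-map of the unit ball in $\mathbb{R}^2$, not the exercise's abstract setting), iterating $Ux=\frac12Tx+\frac12(1-\epsilon)x$ converges to a point with $Tx_\epsilon-x_\epsilon=\epsilon x_\epsilon$:

```python
import numpy as np

# Hypothetical nonexpansive self-map of the closed unit ball in R^2:
# a rotation scaled by 1/2 plus a shift (Lipschitz constant 1/2 <= 1,
# and ||Tx|| <= 1/2*1 + 1/2 = 1).
theta = 1.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def T(x):
    return 0.5 * R @ x + np.array([0.5, 0.0])

eps = 0.1
U = lambda x: 0.5 * T(x) + 0.5 * (1 - eps) * x

x = np.zeros(2)
for _ in range(200):        # U is a contraction, so iteration converges
    x = U(x)

# The fixed point of U satisfies Tx - x = eps * x, hence ||Tx - x|| <= eps.
assert np.allclose(T(x) - x, eps * x)
assert np.linalg.norm(T(x) - x) <= eps
print("approximate fixed point found with ||Tx - x|| <=", eps)
```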
|functional-analysis|banach-spaces|
0
Show $1+\sqrt{2}+\sqrt{3}+\sqrt{6}$ is a unit of $\mathbb{Q}(\sqrt{2},\sqrt{3})$.
I'm trying to find a general unit/inverse formula for $\mathbb{Q}(\sqrt{2},\sqrt{3}) = \lbrace a+b\sqrt{2}+c\sqrt{3}+d\sqrt{6} \ \vert \ a,b,c,d\in\mathbb{Q}\rbrace$ and then plug in my specific unit $1+\sqrt{2}+\sqrt{3}+\sqrt{6}$ to prove is it a unit/inverse: $$a+b\sqrt{2}+c\sqrt{3}+d\sqrt{6} = (a+b\sqrt{2})+(c+d\sqrt{2})\sqrt{3}$$ let $x=a+b\sqrt{2}$ and let $y=c+d\sqrt{2}$ , $$(x+y\sqrt{3})(x-y\sqrt{3}) = x^2-3y^2$$ $$\Rightarrow (x+y\sqrt{3})\left(\frac{x-y\sqrt{3}}{x^2-3y^2}\right) = 1$$ this is in the form $a\cdot a^{-1} = 1$ which means, $$a^{-1} = \frac{x-y\sqrt{3}}{x^2-3y^2} = \frac{(a+b\sqrt{2})-(c+d\sqrt{2})\sqrt{3}}{(a+b\sqrt{2})^2-3(c+d\sqrt{2})^2} = \frac{a+b\sqrt{2}-c\sqrt{3}-d\sqrt{6}}{a^2+2\sqrt{2}ab+2b^2-3c^2-6\sqrt{2}cd-6d^2}$$ expanding unit/inverse expression into the form $a+b\sqrt{2}+c\sqrt{3}+d\sqrt{6}$ , $$\frac{a}{a^2+2\sqrt{2}ab+2b^2-3c^2-6\sqrt{2}cd-6d^2} + \left(\frac{b}{a^2+2\sqrt{2}ab+2b^2-3c^2-6\sqrt{2}cd-6d^2}\right)\sqrt{2} + \left(\frac{-c}{a^2+2\sqr
I got $$ ( 1 + \sqrt 6)^2 - ( \sqrt 2 + \sqrt 3)^2 = 2, $$ so that $$\left(1+\sqrt 2+\sqrt 3+\sqrt 6\right)\cdot\frac{1-\sqrt 2-\sqrt 3+\sqrt 6}{2} = \frac{(1+\sqrt 6)^2-(\sqrt 2+\sqrt 3)^2}{2}=1,$$ i.e. the inverse of $1+\sqrt 2+\sqrt 3+\sqrt 6$ is $\frac{1-\sqrt 2-\sqrt 3+\sqrt 6}{2}$ , which again lies in $\mathbb{Q}(\sqrt 2,\sqrt 3)$ .
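The difference-of-squares identity (and the inverse it produces) can be verified symbolically; a quick sympy sketch:

```python
import sympy as sp

# Verify the factorization (1+sqrt6)^2 - (sqrt2+sqrt3)^2 = 2 and the
# resulting inverse of 1 + sqrt2 + sqrt3 + sqrt6 in Q(sqrt2, sqrt3).
s2, s3, s6 = sp.sqrt(2), sp.sqrt(3), sp.sqrt(6)

assert sp.simplify((1 + s6)**2 - (s2 + s3)**2 - 2) == 0

u = 1 + s2 + s3 + s6
u_inv = (1 - s2 - s3 + s6) / 2
assert sp.simplify(u * u_inv - 1) == 0
print("inverse confirmed:", u_inv)
```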
|abstract-algebra|group-theory|ring-theory|field-theory|
0
Objects in an $\infty$-category
I tried reading about the definition of a pre-additive $\infty$ -category, there it is said that a pointed $\infty$ -category is pre-additive if all finite products and co-products exist, and the canonical map $X\sqcup Y\longrightarrow X\times Y$ is an equivalence for any pair of objects of the $\infty$ -category. My question is: what are the objects of an $\infty$ -category? The definition I learned says that an $\infty$ -category is a simplicial set satisfying a certain property. So I guess my question is not about $\infty$ -categories but rather about simplicial sets. Now, a simplicial set is a functor from the simplex category to sets. Are its objects basically its 1-simplices? If so, is the isomorphism of products and co-products also a property of a pre-additive's $n$ -simplices? Or is it particular specifically for $1$ -simplices?
You're probably talking about quasicategories as a model for $\infty$ -category theory. Given a quasicategory $C$ , the $0$ -simplices of $C$ are the objects, and the $1$ -simplices are the $1$ -morphisms. (Depending on your way of looking at higher morphisms, the $2$ -simplices might or might not be $2$ -morphisms. Usually we follow the globular picture of a higher morphism, and hence the $2$ -simplices are not exactly the $2$ -morphisms. The same applies to $n$ -simplices for $n\geq 2$ .) Therefore $X$ and $Y$ in the definition of a preadditive $\infty$ -category are just $0$ -simplices of the quasicategory (and the morphism in question is a $1$ -simplex).
|homotopy-theory|simplicial-stuff|higher-category-theory|
0
Rencontres Numbers
I'm having trouble understanding the rencontres numbers $D_{n,k}$ . The numerical values are shown on the wiki page: https://en.wikipedia.org/wiki/Rencontres_numbers Looking at $n = 3$: $D_{3,3} = 1$: I think I understand this because there is only one way to give all three items to the correct person. $D_{3,2} = 0$, because if 2 people had the correct item the third person must also have the right item. $D_{3,1} = 3$, as there are three different ways to give one person the right item. $D_{3,0} = 2$: this is the part I don't understand; I thought the answer should be 1, as there is only one way to give nobody the right item. Can someone please explain why for $n =3$ and $k=0$ the answer is $2$?
You could give item $1$ to person $2$ , item $2$ to person $3$ and item $3$ to person $1$ , or item $1$ to person $3$ , item $2$ to person $1$ and item $3$ to person $2$ . That's two ways.
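The whole row for $n=3$ can be recovered by enumerating all $3!=6$ permutations and counting fixed points; a quick sketch:

```python
from itertools import permutations
from collections import Counter

# Count permutations of {0,1,2} by number of fixed points: this gives
# the rencontres numbers D_{3,k} for k = 0..3.
counts = Counter(
    sum(p[i] == i for i in range(3))
    for p in permutations(range(3))
)
print([counts.get(k, 0) for k in range(4)])  # [2, 3, 0, 1]
```

The two permutations with no fixed point are exactly the two 3-cycles described above.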
|combinatorics|permutations|
0