title
string | question_body
string | answer_body
string | tags
string | accepted
int64 |
|---|---|---|---|---|
Finding all groups $H$ (up to isomorphism) such that there is a surjective homomorphism from $D_8$ to $H$
|
Here, $D_8$ is the dihedral group of order $8$ . From the First Isomorphism Theorem, I'm trying to find a solution for this by considering the quotient groups of $D_8$ . If $\phi$ is a surjective homomorphism from $D_8$ to $H$ then it's image is isomorphic to the quotient group of $D_8$ with the kernel of $\phi$ . I'm considering what order $H$ could have: 1, 2, 4 and 8. So far I have: If $H$ has order 4, then the normal subgroup (kernel of $\phi$ ) must have order 2. This cannot involve any reflections, since if it were a subgroup then the inverse reflection would have to be there, and a reflection along with its inverse does not form a normal subgroup. So we are left with it being a subgroup containing rotations only and being order 2. This is $\{ R_{2 \pi}, R_{\pi} \}$ (full and half rotations respectively), and we can see this is a normal subgroup. I'm then told: This gives a quotient isomorphic to $C_2$ x $C_2$ . My questions: Why is this isomorphic to $C_ 2$ x $C_2$ ? How can we
|
If $D_8/H$ were cyclic of order $4$ , then there would exist a coset $gH$ of order $4$ , namely an element $g\in D_8$ such that $g,g^2,g^3\notin H$ and $g^4\in H$ . So, there are two cases only to parse: $g^4=R_{2\pi}(=1)$ : if $g$ has order $4$ , then $g=R_{\pi/2}$ , but then $g^2\in H$ : contradiction; if $g$ has order $2$ , then again $g^2\in H$ : contradiction; $g^4=R_\pi$ , then $g$ has order $2$ , hence $g^2\in H$ : contradiction. Therefore, $D_8/H\cong C_2^2$ .
|
|group-theory|finite-groups|group-isomorphism|group-homomorphism|
| 0
|
Restriction of a differential form along an inclusion (example)?
|
I am trying to figure out if the restriction of a differential form on a manifold $M$ onto an open subspace of $M$ is the same as the original form? Take, for example, the $1$ -form $\alpha=-ydx+xdy$ on $S^1$ and the open $U = S^1 \setminus \left\{(0,1)\right\}$ of $S^1$ . We know that $\alpha$ is not exact on $S^1$ , but its restriction to $U$ is exact according to Poincare lemma. Therefore, we can write $$\alpha = df$$ for some function $f$ on $U$ . I cannot understand this as it seems contradictory to me. I mean why we cannot do this on $S^1$ as well? In general, is the restriction of a differential form $\alpha$ on $M$ to open $U$ of $M$ have the same formula?
|
It is true that the restriction to an open subset is given by the same formula, but this does not lead to any contradiction. For the example you use, the function $f$ is just the angular coordinate in polar coordinates, for example you can take $f(x,y)$ for $(x,y)\in S^1\setminus\{(-1,0)\}$ to be the unique $t\in (-\frac{3\pi}2,\frac{\pi}2)$ such that $x=\cos t$ and $y=\sin t$ . This is a smooth function on $S^1\setminus\{(-1,0)\}$ such that $df=\alpha$ on this subset. (Indeed, this is the unique solution of $df=\alpha$ up to an additive constant.) But obviously $f$ does not extend to $S^1$ (not even continuously), so this does not say anything about $\alpha$ on $S^1$ .
|
|calculus|differential-geometry|differential-forms|
| 1
|
Question on a proof involving the Cramér model
|
This is a question on one part from Tao's note: https://terrytao.wordpress.com/2015/01/04/254a-supplement-4-probabilistic-models-and-heuristics-for-the-primes-optional/ . Refer to Prediction $3$ (Riemann hypothesis). Construct a random model ${{\mathcal P}'}$ of the primes ${{\mathcal P}}$ to be a random subset of the natural numbers, such that each natural number ${n > 2}$ has an independent probability of ${\frac{1}{\log n}}$ of lying in ${{\mathcal P}'}$ , and let us require that ${1 \not \in {\mathcal P}'}$ and ${2 \in {\mathcal P}'}$ . We consider the random variables $\displaystyle X_n := 1_{{\mathcal P}'}(n) \log n - 1$ . Then the ${X_n}$ have mean zero and are independent, and have size ${O( \log x ) = O( x^{o(1)} )}$ . It is then stated that Thus $\displaystyle \mathop{\bf E} (\sum_{2 for any fixed natural number ${k}$ (the only terms in ${(\sum_{2 that give a non-zero contribution are those in which each ${X_n}$ appears at least twice, so there are at most ${k/2}$ distinct in
|
(1) Even the maximum of each term of the form $\prod_{j=1}^k X_{n_j}$ is $O(x^{o(1)})$ , so the expectation certainly is as well. (2) Are you able to bound the variance of $\sum_{2 with the given bound? What happens when you input that bound into Chebyshev's inequality?
|
|probability-theory|number-theory|
| 0
|
set of values of $a$ for which the Inequality $ax^2-(3+2a)x+6>0,a\neq 0$ holds for exactly three negative integer values of $x,$ is
|
The complete set of values of $a$ for which the Inequality $ax^2-(3+2a)x+6>0,a\neq 0$ holds for exactly three negative integer values of $x,$ is What I try: $\displaystyle ax^2-3x-2ax+6>0$ $x(ax-3)-2(ax-3)=0$ $\displaystyle (x-2)(ax-3)=0\Longrightarrow x=2\quad\text{or}\quad x=\frac{3}{a}$ From above , For exactly one integer $x,$ We have $a=3$ But I did not understand How can I find range of $a$ for exactly $3$ negative integer values of $x$ Please have a look , Thanks
|
If we are restricting to $f(x) = ax^2−3x−2ax+6 > 0$ for only $3$ integer $x , then its clear only the integers $x = -1, -2, -3$ meet the condition. Firstly, the parabola must be concave downward such that $f(x) > 0$ is bounded, where $f(x_{max}) > 0$ , where $x_{max} = \dfrac{3}{2a} + 1$ . Because $x = 2$ is always a zero, we must have $3/a . Furthermore, $3/a , since $f(\exists x 0$ . Secondly, note that $f(0) = 6$ and the $x_{max} = \dfrac{3}{2a} + 1$ . If $\dfrac{3}{2a} + 1 > 0$ , $a . Let $a = -3/2$ . Then $x_{max} = 0$ and $3/a = -2$ . However, this does not satisfy the condition. For $a , $x_{max} > 0$ , which indicates the number of $\{x: x 0\}$ decrease. So it must be that $\dfrac{3}{2a} + 1 and the continuity of $f(x)$ means $x = -1, -2, -3$ . Hence, $-4 \le 3/a
|
|inequality|
| 0
|
Finding a Point with Null Real Derivative on a Cubic Path
|
Let's examine a cubic complex function $F(z) = z^3 + e_1z^2 + e_2z + e_3$ with $z$ in the complex numbers. Suppose this function zeros out at two points, $m$ and $n$ , which lie inside the boundary of the unit circle in the complex plane. We need to show that along the straight line between $m$ and $n$ , there's a point, let's call it $u$ , where the real part of $F$ 's derivative at point $u$ becomes zero. To address this, we utilize the derivative $F'(z)$ , focusing on its real part across the interval connecting $m$ and $n$ . The fundamental theorem of algebra supports the existence of roots for $F(z)$ within the complex plane. Given $F(z)$ zeroes at $m$ and $n$ inside the unit circle, we consider the continuity of $F'(z)$ across a linear path in the complex plane. By parameterizing the segment from $m$ to $n$ with $z(t) = m + t(n - m)$ , where $0 \leq t \leq 1$ , we observe $F'(z)$ along this parameterized line. The Intermediate Value Theorem states that if a continuous function tr
|
$f(t)=Re (P(a+t(b-a))$ is real valued function defined on $[0,1]$ . It is continuously differentiable and $f(0)=Re P(a)=0, f(1)=Re P(b)=0$ . By Rolle's Theroen ther exist $t \in (0,1)$ such that $f'(t)=0$ . But $f'(t)=Re \frac d {dt} P(a+t(b-a))=Re P'(a+t(b-a)) (b-a)$ by Chain Rule. Hence, $(b-a) Re P'(w)=0$ , where $w=a+t(b-a)$ . Cancel $b-a$ to finish. ( $w$ belongs to the line segment from $a$ to $b$ ). Note: This works for any differentiable function $P$ and any two complex numbers $a$ and $b$ with $P(a)=P(b)=0$ .
|
|complex-analysis|derivatives|complex-numbers|discrete-geometry|
| 1
|
Prove that $r+x$ is irrational.
|
The question is: Let $r$ be rational and $x$ be irrational. Prove that $r+x$ is irrational. I realize that this question has been asked before, but I have a doubt about an approach different from assuming $r = \frac{p}{q}$ . Assume $r+x$ is rational. Since $r \in \mathbb{Q}, \,\,\exists (-r) \in \mathbb{Q}$ such that $r + (-r) = 0$ . By the closure property of the field of rational numbers, if $ (-r), (r+x) \in \mathbb{Q}$ , then $$ x = 0 + x = (-r + r) + x = (-r) + (r+x) \in \mathbb{Q}$$ which is a contradiction, therefore $r+x$ must be irrational. However, I have a doubt about the first equality. Since $x$ is irrational, would we be able to conclude that $0+x=x$ as this axiom of additive identity only holds for the rational numbers, and we haven't been introduced to the field of real numbers yet.
|
Besides Anne Bauval 's pertinent remark in the comments, in the first place you need to have already assumed or constructed addition on ℝ , otherwise you are simply forbidden from writing " $r+x$ " in the first place given reals $r,x$ . And certainly you would have the basic arithmetic fact $0+x = x$ , either assumed or proven, otherwise that addition would be meaningless and you would not be using it.
|
|real-analysis|irrational-numbers|
| 1
|
Uniform distribution problem(Problem 6.13 First course in probability)
|
A man and a woman agree to meet at a certain location about 12:30 P.M. If the man arrives at a time uniformly distributed between 12:15 and 12:45, and if the woman independently arrives at a time uniformly distributed between 12:00 and 1 P.M., find the probability that the first to arrive waits no longer than 5 minutes. Let us call $X$ ~ $U(15, 45)$ is the minute offset from 12:00 from the time the man arrives and $Y$ ~ $U(0, 60)$ is the minute offset from 12:00 from the time the woman arrives. Then what we have to calculate is: $$P(X-5\leq Y \leq X \lor Y - 5 \leq X \leq Y) = P(X-5\leq Y \leq X) + P(Y - 5 \leq X \leq Y)$$ , since $P(Y - 5 \leq X \leq Y, X-5\leq Y \leq X) = 0$ $$f_{X, Y}(x, y) = f_{X}(x)f_{Y}(y) = \frac{1}{30} \frac{1}{60}= \frac{1}{1800}$$ So: Firstly: $$P(X-5\leq Y \leq X) = \int_{15}^{45} \int_{x-5}^{x} \frac{1}{1800} \,dy \,dx = \frac{5(30)}{1800} = \frac{1}{12}$$ Secondly: $$P(Y - 5 \leq X \leq Y) = \left(\int_{15}^{20} \int_{15}^{y} + \int_{20}^{45} \int_{y-5}^{y
|
The last integral limit you should have from $y - 5$ to $45$ . $$ \int_{15}^{20} \int_{15}^{y} \frac {1} {1800} dx dy = \int_{15}^{20} \frac {y - 15} {1800} dy = \left. \frac {y(y - 30)} {3600} \right|_{15}^{20} = \frac {-200 + 225} {3600} = \frac {25} {3600} = \frac {1} {144} $$ $$ \int_{20}^{45} \int_{y-5}^{y} \frac {1} {1800} dx dy = \int_{20}^{45} \frac {5} {1800} dy = (45 - 20) \times \frac {1} {360} = \frac {25} {360} = \frac {10} {144} $$ In you calculation you treated the last integral as $0$ so you have $$ \frac {1} {144} + \frac {10} {144} = \frac {11} {144} $$ In fact the last integral is similar to the first one by symmetry $$ \int_{45}^{50} \int_{y-5}^{45} \frac {1} {1800} dx dy = \int_{45}^{50} \frac {50 - y} {1800} dy = \left. \frac {y(100 - y)} {3600} \right|_{45}^{50} = \frac {2500 - (50^2 - 5^2)} {3600} = \frac {25} {3600} = \frac {1} {144} $$ So we have the additional $1/144$ and thus the resulting probability for $X \in (Y - 5, Y)$ is $$ \frac {1} {144} + \frac {10}
|
|probability|probability-distributions|
| 0
|
How to rigorously evaluate $\lim\limits_{n \to \infty } n \int_0^\frac{\pi}{2}1- \sqrt[n]{\sin(t)}dt$?
|
How to rigorously evaluate $L:=\lim\limits_{n \to \infty } n \int_0^\frac{\pi}{2}1- \sqrt[n]{\sin(t)}dt$ ? I tried to use the beat function since $$B(x,y)= 2\int_0^\frac{\pi}{2}\sin^{2x-1}(t)\cos^{2y-1}(t)dt$$ $$\int_0^\frac{\pi}{2}\sqrt[n]{\sin(t)}dt= 0.5 B\left(\frac{1}{2n} + \frac{1}{2},\frac{1}{2}\right)= \frac{\Gamma\left(\frac{1}{2n} + \frac{1}{2}\right)\Gamma\left(\frac{1}{2}\right)}{2\Gamma\left(\frac{1}{2n} + 1\right)}=\frac{n\sqrt{\pi}\ \Gamma\left(\frac{1}{2n} + \frac{1}{2}\right)}{ \Gamma\left(\frac{1}{2n} \right)}$$ $$L =\lim\limits_{n \to \infty } n \left( \frac{\pi}{2} - \frac{n\sqrt{\pi}\ \Gamma\left(\frac{1}{2n} + \frac{1}{2}\right)}{ \Gamma\left(\frac{1}{2n} \right)} \right)$$ This limit is very hard to deal with, I tried to use Gauss product of the gamma function to try to simplify this mess but it didn't work. I noticed that if we define $f(x)=\int_0^\frac{\pi}{2}{\sin^x (t)}dt$ assuming $f'(0) $ exist at $0$ $$-f'(0)= \lim_{h \to 0 } \frac{\int_0^\frac{\pi}{2}{1-\s
|
Let $\epsilon = \frac{1}{n}$ and \begin{equation} J_\epsilon = \int_0^{\pi/2} e^{\epsilon\ln\sin t}d t \end{equation} Use Taylor-Lagrange formula $e^u = 1 + u + \frac{u^2}{2}e^{\theta u}$ with $\theta\in[0, 1]$ , then \begin{equation} |J_\epsilon - \frac{\pi}{2} - \epsilon\int_0^{\pi/2}\ln\sin(t) d t|\le \frac{\epsilon^2}{2}\int_0^{\pi/2}\ln^2(\sin(t)) d t \end{equation} this shows that the limit is $-\int_0^{\pi/2}\ln\sin(t) d t$
|
|real-analysis|integration|limits|definite-integrals|
| 0
|
How to rigorously evaluate $\lim\limits_{n \to \infty } n \int_0^\frac{\pi}{2}1- \sqrt[n]{\sin(t)}dt$?
|
How to rigorously evaluate $L:=\lim\limits_{n \to \infty } n \int_0^\frac{\pi}{2}1- \sqrt[n]{\sin(t)}dt$ ? I tried to use the beat function since $$B(x,y)= 2\int_0^\frac{\pi}{2}\sin^{2x-1}(t)\cos^{2y-1}(t)dt$$ $$\int_0^\frac{\pi}{2}\sqrt[n]{\sin(t)}dt= 0.5 B\left(\frac{1}{2n} + \frac{1}{2},\frac{1}{2}\right)= \frac{\Gamma\left(\frac{1}{2n} + \frac{1}{2}\right)\Gamma\left(\frac{1}{2}\right)}{2\Gamma\left(\frac{1}{2n} + 1\right)}=\frac{n\sqrt{\pi}\ \Gamma\left(\frac{1}{2n} + \frac{1}{2}\right)}{ \Gamma\left(\frac{1}{2n} \right)}$$ $$L =\lim\limits_{n \to \infty } n \left( \frac{\pi}{2} - \frac{n\sqrt{\pi}\ \Gamma\left(\frac{1}{2n} + \frac{1}{2}\right)}{ \Gamma\left(\frac{1}{2n} \right)} \right)$$ This limit is very hard to deal with, I tried to use Gauss product of the gamma function to try to simplify this mess but it didn't work. I noticed that if we define $f(x)=\int_0^\frac{\pi}{2}{\sin^x (t)}dt$ assuming $f'(0) $ exist at $0$ $$-f'(0)= \lim_{h \to 0 } \frac{\int_0^\frac{\pi}{2}{1-\s
|
Note that \begin{align*} n \int_{0}^{\frac{\pi}{2}} (1 - \sqrt[n]{\sin t}) \, \mathrm{d}t &= \int_{0}^{\frac{\pi}{2}} \frac{1 - (\sin t)^{1/n}}{1/n} \, \mathrm{d}t \\ &= \int_{0}^{\frac{\pi}{2}} \int_{0}^{1} (\sin t)^{s/n}(-\log \sin t) \, \mathrm{d}s\mathrm{d}t. \end{align*} Since the integrand is non-negative and increasing in $n$ , we can invoke the monotone convergence theorem to conclude that \begin{align*} \lim_{n\to\infty} n \int_{0}^{\frac{\pi}{2}} (1 - \sqrt[n]{\sin t}) \, \mathrm{d}t &= \int_{0}^{\frac{\pi}{2}} \int_{0}^{1} \lim_{n\to\infty} (\sin t)^{s/n}(-\log \sin t) \, \mathrm{d}s\mathrm{d}t \\ &= \int_{0}^{\frac{\pi}{2}} \int_{0}^{1} (-\log \sin t) \, \mathrm{d}s\mathrm{d}t \\ &= \int_{0}^{\frac{\pi}{2}} (-\log \sin t) \, \mathrm{d}t \\ &= \frac{\pi}{2} \log 2. \end{align*}
|
|real-analysis|integration|limits|definite-integrals|
| 1
|
2 Brownian Motions with Non Zero Correlation and NOT jointly normal?
|
Is it possible for 2 Brownian Motions to have non-zero correlation without being jointly normal? I'm a bit confused by the question. I just assumed we always talk of multiple Brownian Motions as being jointly normally distributed with a certain covariance matrix etc. I know in general two normal random variables have no reason a priori to be jointly normal. But does this also apply to Brownian motions which are a continuous stochastic process in which each B_i(t) is normally distributed with mean 0 and variance t for a given time t?
|
Consider the following stochastic differential equation (SDE): $dB_t = \rho(B_t, W_t)dW_t + \sqrt{1 - \rho^2(B_t, W_t)}dZ_t$ , where $\{W_t\}_t$ and $\{Z_t\}_t$ are two independant Brownian motions, and $\rho$ is a function taking values in $[-1, 1]$ . Then the solution $\{B_t\}_t$ of this SDE is a Brownian motion, as $d\langle B, B\rangle_t = \rho^2(B_t, W_t)dt + (1 - \rho^2(B_t, W_t))dt = dt$ and the joint law of $\{(B_t, W_t)\}_t$ has no reasons to be a Gaussian distribution. You can have a look at the third chapter of Damien Bosc, Three essays on modeling the dependence between financial assets , PhD thesis, 2012.
|
|normal-distribution|brownian-motion|correlation|
| 0
|
Does this theorem about two lines and two transversals have a name?
|
I've got this theorem right here. It involves two arbitrary lines $AB$ and $CD$ and two transversals $AC$ and $BD$ , intersecting in between these lines . The theorem (or lemma, I don't know) in question states that $\alpha+\beta = \varphi + \theta$ . The simplest proof of which I know is based on the Sum of Angles of Triangle equals Two Right Angles : two transversals intersecting in between two lines form two triangles with these lines. Then sums of angles of these triangles are equal to one another. One pair of angles is equal as vertical angles, so the sum of two remaining angles in one triangle should be equal to the sum of two remaining angles, which was to be proven. I've got three questions in regards to this fact: Does this fact (is it a lemma or a theorem?) has a name ? I thought of something related to bowtie, butterfly, or hourglass, but there are numerous facts names after these objects. What is the correct way to formulate this fact by using parallel lines and transversal
|
Constant vertex rotation angle theorem , if you will: $$ \gamma= \alpha-\theta= \varphi - \beta \to \ \alpha+\beta= \varphi+\theta $$ Arbitrary lines AB,CD with fixed vertex V rotating arbitrarily, variable transversals AC,BD.
|
|geometry|euclidean-geometry|terminology|alternative-proof|angle|
| 1
|
How to rigorously evaluate $\lim\limits_{n \to \infty } n \int_0^\frac{\pi}{2}1- \sqrt[n]{\sin(t)}dt$?
|
How to rigorously evaluate $L:=\lim\limits_{n \to \infty } n \int_0^\frac{\pi}{2}1- \sqrt[n]{\sin(t)}dt$ ? I tried to use the beat function since $$B(x,y)= 2\int_0^\frac{\pi}{2}\sin^{2x-1}(t)\cos^{2y-1}(t)dt$$ $$\int_0^\frac{\pi}{2}\sqrt[n]{\sin(t)}dt= 0.5 B\left(\frac{1}{2n} + \frac{1}{2},\frac{1}{2}\right)= \frac{\Gamma\left(\frac{1}{2n} + \frac{1}{2}\right)\Gamma\left(\frac{1}{2}\right)}{2\Gamma\left(\frac{1}{2n} + 1\right)}=\frac{n\sqrt{\pi}\ \Gamma\left(\frac{1}{2n} + \frac{1}{2}\right)}{ \Gamma\left(\frac{1}{2n} \right)}$$ $$L =\lim\limits_{n \to \infty } n \left( \frac{\pi}{2} - \frac{n\sqrt{\pi}\ \Gamma\left(\frac{1}{2n} + \frac{1}{2}\right)}{ \Gamma\left(\frac{1}{2n} \right)} \right)$$ This limit is very hard to deal with, I tried to use Gauss product of the gamma function to try to simplify this mess but it didn't work. I noticed that if we define $f(x)=\int_0^\frac{\pi}{2}{\sin^x (t)}dt$ assuming $f'(0) $ exist at $0$ $$-f'(0)= \lim_{h \to 0 } \frac{\int_0^\frac{\pi}{2}{1-\s
|
To obtain more than the limit itself. As you properly wrote $$A=\int_0^\frac{\pi}{2}\sqrt[n]{\sin(t)}\,dt=n \frac{\sqrt{\pi } \,\, \Gamma \left(\frac{n+1}{2 n}\right)}{\Gamma \left(\frac{1}{2 n}\right)}$$ Take logarithms and Taylor expand $$\log(A)=\log \left(\frac{\pi }{2}\right)-\frac{\log (2)}{n}+\frac{\pi ^2}{24 n^2}+O\left(\frac{1}{n^3}\right)$$ Exponentiate and Taylor again $$A=e^{\log(A)}=\frac{\pi }{2}-\frac{\pi \log (2)}{2 n}+\frac{\pi \left(\frac{\pi ^2}{12}+\log ^2(2)\right)}{4 n^2}+O\left(\frac{1}{n^3}\right)$$ So, $$n \int_0^\frac{\pi}{2}\Big(1- \sqrt[n]{\sin(t)}\Big)\,dt=\frac{1}{2} \pi \log (2)-\frac{\pi \left(\frac{\pi ^2}{12}+\log^2(2)\right)}{4 n}+O\left(\frac{1}{n^2}\right)$$
|
|real-analysis|integration|limits|definite-integrals|
| 0
|
Differences between figures, numbers, and digits?
|
I was wondering if figure, number, digit and other similar words can be used interchangeably in mathematics? What are the differences in their usage? How about if we extend these notions to other contexts, such as in computer science, and in life more generally speaking? Thanks and regards!
|
In addition to the answer above - I would suggest that when discussing numbers we need to be careful to think first whether we are using cardinal numbers (to measure the size of something) or ordinal numbers (to measure the position of something). With finite numbers, there is not too much of an issue, but there may be an issue if the word number is used to refer to a transfinite number. Figures are often a lazy term for descriptive statistics .
|
|terminology|
| 0
|
Maximal subgroups of $SL(n, 2)$
|
According to Aschbacher's theorem, if $H$ is a maximal subgroup of $SL(n,q)$ , then either it belongs to one of the classes $C_1 - C_8$ , or $H$ is absolutely irreducible and $H/(H\cap Z(SL(n,q)))$ is almost simple. I know that representatives of the classes $C_1 - C_8$ are described for $SL(n,q)$ in the general case, for example, in the book by Kleidman and Liebeck "The subgroup structure of the Finite Classical Groups". However, are there any results concerning subgroups of the latter type for the group $SL(n, 2)$ , and if so, where can I read about it? I will be really greatful for this information.
|
You cannot hope for a general classification of maximal subgroups of this type. For that you would essentially need to find all absolutely irreducible representations over ${\mathbb F}_q$ (with $q=2$ in your case) of all nearly simple finite groups. There is an upper bound on their order, which is often useful, and is used extensively in maximality proofs: either $H=A_m$ or $S_m$ with $m= n+1$ or $n+2$ , or $|H| \le q^{3n}$ . This is due to Liebeck, and is Thm 5.2.4 of the book by Kleidman and Liebeck. There is also a complete classification of maximal subgroups of classical groups for dimensions up to $17$ . The dimensions up to 12 are listed in the tables in my book with Bray and Roney-Dougal, and the dimensions from $13$ to $17$ are covered in two PhD theses.
|
|group-theory|finite-groups|maximal-subgroup|
| 1
|
What was the gap in Ariane Papke's proof that the minimum number of sudoku clues is 17?
|
I was reading McGuire's paper on why the minimum number of clues in a Sudoku puzzle is 17 when I came across a curious comment: In 2008, a 17-year-old girl submitted a proof of the nonexistence of a 16-clue sudoku puzzle as an entry to Jugend forscht (the German national science competition for high-school students). She later published her work in the journal Junge Wissenschaft (No. 84, pp. 24–31). However, when Sascha Kurz, a mathematician at the University of Bayreuth, Germany, studied the proof closely, he found a gap that is probably very difficult, if not impossible, to fix. I was able to find the work he was referring to (I think) here . Not knowing German or anyone who speaks German, I had to settle and read a machine translation of the paper, which wasn't very good. By my understanding, the proof went along the lines of this: We start out by expanding the grid into 3D space with a $9$ by $9$ by $9$ cube, and for each square of the puzzle, we place a 1 in the n'th cube from the
|
The argument The argument goes essentially this way (reformulated with standard notation): We model the sudoku by an integer linear program. For all $1\leq i, j, k \leq 9$ , we define: $$x_{i, j, k} = \begin{cases} 1 &\text{ if there is a value } k \text{ in the cell } (i, j)\\ 0 &\text{ else } \end{cases}$$ This gives $9^3=729$ variables. We define the following families of constraints: $$\sum_{i=1}^9 x_{i, j, k} = 1 ~~~~\forall \;1\leq j, k \leq 9$$ $$\sum_{j=1}^9 x_{i, j, k} = 1 ~~~~\forall \;1\leq i, k \leq 9$$ $$\sum_{k=1}^9 x_{i, j, k} = 1 ~~~~\forall \;1\leq i, j \leq 9$$ $$\sum_{(i, j)\in Q_{p,q}} x_{i, j, k} = 1 ~~~~\forall \;1\leq k \leq 9,\; 1\leq p, q \leq 3$$ where the $Q_{p, q}$ are the nine $3x3$ squares. This gives $4\times 9^2 = 324$ equations. Fixing a variable $x_{i, j, k}$ to $1$ (meaning giving a clue) fixes 29 variables corresponding to $9$ variables $x_{i, j, k'}$ , $8$ variables $x_{i', j, k}$ , $8$ variables $x_{i, j', k}$ , $j\neq j'$ , $4$ variables $x_{i', j
|
|combinatorics|integer-programming|sudoku|
| 1
|
How to prove this integral inequality $\int_{0}^{1}[f''(x)]^2dx\ge 12$?
|
Let $f\in C^{2}[0,1]$ such that $f(0)=0,f(1/2)=f(1)=1$ . Show that $$\int_{0}^{1}[f''(x)]^2dx\ge 12.$$ My idea: try to find a function $f(x)$ such that the equality holds and use Cauchy-Schwarz. So I tried some "good" functions to achieve the minimum value. Assume $f(x)=ax^3+bx^2+cx$ is a polynomial. Then I find that when $f(x)=-2x^2+3x$ , the integral $\int_{0}^{1}[f''(x)]^2dx$ has minimum value $16$ . It seems it doesn't help to solve the problem. Since $f''(x)=(f(x)-ax-b)''$ and replace $f(x)$ by $f(x)-x$ , we can change the condition to $f(0)=f(1)=0, f(1/2)=1/2$ . I find that $f(x)=\frac{1}{2}\sin(\pi x)$ satisfies the condition and $\int_{0}^{1}[f''(x)]^2dx=\frac{\pi^2}{8}=12.1761...$ which is very closed to $12$ . But it still doesn't help. It seems that it isn't easy to find such $f(x)$ such that the equality holds. So how to prove this inequality? And what is the minumum value of this integral? May be not 12? Thank you!
|
You can split the integral in to two separate ones $$ \int_0^{\frac12} (f''(x))^2 \, \mathrm{d}x+\int_{\frac12}^{1} (f''(x))^2 \, \mathrm{d}x $$ and find the smallest function for both using calculus of variations and combine them piecewise. The reason that you have to do this is because calculus of variations only makes statements about the boundary. The solution is $$ f(x)= \begin{cases} -2x^3+\frac52 x& x \in [0,\frac12]\\ 2x^3-6x^2+\frac{11}2x -\frac12& x \in (\frac12,1]. \end{cases} $$ This achieves the lower bound of 12. From there on it is indeed Cauchy-Schwarz, similar to this How prove this $\int_{a}^{b}[f''(x)]^2dx\ge\dfrac{4}{b-a}$ .
|
|real-analysis|integration|inequality|integral-inequality|
| 1
|
integral inequality in energy inequality
|
I encountered these two inequalities in reading Sogge's lectures on nonlinear wave equations (page 17). It seems natural and straightforward such that the author didn't give any hint but I cannot work out a prrof. Appreciate any hints. (1) Let $$u(0,x)=\partial_t u(0,x)=0$$ Then $$\int_0^\phi |u(t,x)|^2 dt \leq t_0^2 \int_0^\phi |\partial_t u(t,x)|^2 dt$$ where $t_0$ is an upper bound and $\phi (2) Let the source of wave equation $\square u =F(u, u', u'')$ satisfy $$F(0,0,u'')=0$$ then $$ 2|\partial_t u F| \leq C (|u|^2 + |u'|^2)$$ Thanks in advance!
|
If amplitude at t=0 is zero, and velocity at t=0 is zero for all x, the time mean of the amplitude squared at any point $x$ is smaller than a constant times the mean of the velocity squared $$\left(\frac{ u(x,t)}{t_0}\right)^2 \ dt For a linear equation, with $u=0$ the equilibrium configuration, both sides remain at 0. If u=0 is not the equilibrium configuration, the amplitude is growing with $t^2$ independent of $x$ as for a single mass like $\partial_t u^2$ . For a nonlinear equation dominated by the linear approximation, Lipshitz smoothness conditions yield the same boundedness between time means of kinetic energy and the quadratic approximation of the potential energy. The topic in point mechanics and its application in statistical mechanics is called the virial theorem, that connects time and ensemble means of kinetic and potential energies by their power laws.
|
|hyperbolic-equations|
| 1
|
How to determine the value of $\displaystyle f(x) = \sum_{n=1}^\infty\frac{\sqrt n}{n!}x^n$?
|
How to determine the value of $\displaystyle f(x) = \sum_{n=1}^\infty\frac{\sqrt n}{n!}x^n$ ? No context, this is just a curiosity o'mine. Yes, I am aware there is no reason to believe a random power series will have a closed form in terms of well established functions, but also I have no way to know if that is the case here, so that is why I'm asking. Do you know this power series or any method I could use to determine its value? In my research I've found out about the polylogarithm , which is defined as $$\mathrm{Li}_s(x) = \sum_{n=1}^\infty\frac{x^n}{n^s} = \frac1{\Gamma(s)}\int_0^\infty\frac{t^{s-1}}{e^t/x-1}dt$$ This called my attention because $$\begin{aligned} f(x) &= \sum_{n=1}^\infty\frac{\sqrt n}{n!}x^n\\ &= x\sum_{n=1}^\infty\frac1{\sqrt n}\frac{x^{n-1}}{(n-1)!}\\ &= x\sum_{n=1}^\infty\frac1{\sqrt n}\mathcal L^{-1}\left\{\frac1{x^n}\right\}\\ &= x\mathcal L^{-1}\left\{\sum_{n=1}^\infty\frac1{\sqrt n}\frac1{x^n}\right\}\\ &= x\mathcal L^{-1}\left\{\mathrm{Li}_{1/2}\left(\frac
|
This is not a complete solution but it gives only the result: a complete formula for a sum generalizing that of the OP. The development was initiated by the results of @Svyatoslav and completed to this point in a discssion with him. $$s(a,x) := \sum_{n=0}^{\infty} \frac{n^a}{n!}x^n \\ = \frac{x}{\Gamma(1-\{a\})} \frac{\partial ^{\lfloor a\rfloor}}{\partial w^{\lfloor a\rfloor}}\left(\int_0^1 \frac{\exp \left(-u x\; e^w +e^w x+w\right)}{\log ^{\{a\}}\left(\frac{1}{1-u}\right)} \, du\right)|_ {w\to 0}$$ Here $\lfloor a\rfloor$ is the integer part of $a$ and $\{a\} = a-\lfloor a\rfloor$ is the fractional part of $a$ .
|
|power-series|generating-functions|laplace-transform|closed-form|polylogarithm|
| 0
|
Domain of Laplace transform
|
According to Wikipedia , if the Laplace transform $$\mathcal{L}\{f\}(s)=\lim_{b\to\infty}\int_0^b e^{-st}f(t)\,dt$$ converges for some $s_0\in\mathbb{R}$ , then it automatically converges for all $s\in\mathbb{R}$ with $s>s_0$ . (I'm considering the Laplace transform on the real line.) How can I prove this? If $$\lim_{b\to\infty}\int_0^be^{-st}|f(t)|\,dt$$ converges, then I can prove it using comparison test, but what if it converges conditionally?
|
This answer is essentially @RRL's answer given here , re-written for more novice readers: I worked within real numbers, and also I did not use Riemann-Stieltjes integral. Let $\alpha:[0,\infty)\to\mathbb{R}$ be a function defined by $$\alpha(t)\mathrel{\mathop:}=\int_0^te^{-s_0t}f(t)\,dt.$$ Let $s>s_0$ . Since $\alpha$ is continuous, integration by parts gives \begin{align}\int_0^be^{-st}f(t)\,dt&=\int_0^be^{-(s-s_0)t}e^{-s_0t}f(t)\,dt\\ &=[e^{-(s-s_0)t}\alpha(t)]_0^b+(s-s_0)\int_0^be^{-(s-s_0)t}\alpha(t)\,dt.\tag{$\ast$}\label{lap} \end{align} By assumption, $\lim_{t\to\infty}\alpha(t)$ exists. In particular, $|\alpha(t)|$ is bounded, say $|\alpha|\leq M$ . Then $$|e^{-(s-s_0)t}\alpha(t)|\leq e^{-(s-s_0)t}M\quad\xrightarrow{t\to\infty}\quad0,$$ so the first term in \eqref{lap} goes to zero as $b\to\infty$ . On the other hand, $$\int_0^be^{-(s-s_0)t}|\alpha(t)|\,dt\leq M\int_0^be^{-(s-s_0)t}\,dt\leq M\int_0^\infty e^{-(s-s_0)t}\,dt,$$ so $\int_0^\infty e^{-(s-s_0)t}|\alpha(t)|\,dt$ con
|
|laplace-transform|
| 1
|
Operator norm and orthogonal projection
|
Let $A$ , $B$ be two positive semidefinite symmetric matrices on $\mathbb{R}^d$ with the same eigenvectors. Let $P$ be an orthogonal projection. Does it hold that $$ \|APB\| \leq \|AB\|, $$ where $\|\cdot\|$ denotes the operator norm. If $P$ was on the left or right we would directly have $\|ABP\| \leq \|AB\|\|P\| \leq \|AB\|$ . However here, trying to show it from the definition of the operator norm does not help me to conclude as for all $x$ , we have $$ \|APBx\|^2 = x^{\top}BPA^2PBx, $$ and I don't know how to drop $P$ from this expression. Does it still hold if we drop the assumption that $A$ , $B$ share the same eigenvectors?
|
No. Counterexample: $$ A=\pmatrix{1\\ &0},\quad P=\pmatrix{\frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2}},\quad B=\pmatrix{0\\ &1}. $$ In this case $\|APB\|=\frac{1}{2}>0=\|AB\|$ .
|
|linear-algebra|matrices|
| 0
|
From algebra of smooth functions to a smooth manifold
|
Suppose that the $\mathbb{R}$ -algebras $\mathcal{F}_1$ and $\mathcal{F}_2$ , as vector spaces, are isomorphic to the plane $\mathbb{R}^2$ . Let the multiplication in $\mathcal{F}_1$ and $\mathcal{F}_2$ be respectively given by the relations $(x_1, y_1) \cdot (x_2, y_2) = (x_1x_2, y_1y_2) $ $(x_1, y_1) \cdot (x_2, y_2) = (x_1x_2 + y_1y_2, x_1y_2 + x_2y_1) $ Find the manifold $M_i$ for which the algebra $\mathcal{F}_i$ , $i =1, 2$ , is the algebra of smooth functions, explicitly indicating what function on $M_i$ corresponds to the element $(x, y) \in \mathcal{F}_i$ . Are the algebras $\mathcal{F}_1$ and $\mathcal{F}_2$ isomorphic? This question has boggled me for a while. We are looking for manifolds $M_i$ for which $\mathcal{F}_i \cong C^\infty(M_i)$ . The problem I see here is that $C^\infty(M_i)$ is infinite-dimensional while $\mathcal{F}_i$ 's are both $2$ -dimensional so how can there exists such manifolds? The second thing is that the latter one looks like its almost the product w
|
I cannot give a satisfying answer, but I had a look in Nestruev's book which convinced me that the author's exposition is not really clear. Quote from Chapter 3: Here we give a detailed answer to the following fundamental question: Given an abstract $\mathbb R$ -algebra $\mathcal F$ , find a set (smooth manifold) $M$ whose $\mathbb R$ -algebra of (smooth) functions can be identified with $\mathcal F$ . I guess he considers two questions: Find a set $M$ such that $\mathcal F(M) =$ algebra of functions $f : M \to \mathbb R$ can be identified with $\mathcal F$ . Find a smooth manifold $M$ such that $C^\infty(M)$ can be identified with $\mathcal F$ . In the first case one can regard $M$ as a discrete topological space so that $\mathcal F(M) = C^0(M) =$ algebra of continuous functions $f : M \to \mathbb R$ . We may also regard $M$ as a $0$ -dimensional smooth manifold, at least if $M$ is countable. Nestruev's approach to smooth manifolds is a bit unusual. He introduces them by an algebraic def
|
|differential-geometry|smooth-manifolds|
| 0
|
Is the angle on the Cartesian coordinate system between dots of all complex roots of a polynomial with real coefficients the same?
|
So far I realized that any polynomial with complex roots has an even number of complex roots, because for every $(x-(a-b*i))$ there is $(x-(a+b*i))$ in order for the coefficients to be real. That means that odd degree polynomials have at least one real root. Because of the symmetry of complex conjugate roots, e.g. $a-b*i$ and $a+b*i$ , the angle between each of them and the $x$ axis is the same. But is it the same for the other roots? Here is an example polynomial: $x^4 + 3x + 21$ . Here is the graph, but I am not sure whether all the angles are the same; at least I don't see a geometric reason for that. complex roots on coordinate system for x^4 + 3x +21 And why are all complex roots of $x^{10} - 1$ on a circle of radius $1$ , at the same distance from each other? complex roots on coordinate system
|
Let $Z$ be a complex number. I'll start from the basics. $Z$ can be expressed in the form $x+iy$ : $$Z=x+iy$$ This can be plotted in the 2-dimensional Argand plane, in which the y axis is imaginary while the x axis is real. Write $x=a,y=b$ . Notice that we can also say $$a=\sqrt{a^2+b^2}\cosθ$$ and $$b=\sqrt{a^2+b^2}\sinθ$$ . Hence $$Z=\sqrt{a^2+b^2}(\cosθ+i\sinθ)$$ This is the polar form of $Z$ . $\sqrt{a^2+b^2}$ is known as the modulus of $Z$ and also written in the form $|Z|$ . $θ$ = amplitude (argument), the counterclockwise angle from the positive x axis to $a+bi$ . Using Taylor series, it can be proven that $$(\cosθ+i\sinθ)=e^{iθ}$$ This is Euler's formula, named after Leonhard Euler. I will link the proof below. So $$Z=\sqrt{a^2+b^2}e^{iθ}=|Z|e^{iθ}$$ Also if you take $θ=π$ , you get $$e^{iπ}=\cos(π)+i\sin(π)=-1$$ This is the famous Euler's identity. For your first doubt, regarding $$x^4+3x+21=0$$ You might notice that the coefficient of $x^3$ is $0$ . The formula for the sum of the roots of a polynomial is $\frac{-b}{a}$ . Here $a$ =coeffi
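The sum-of-roots observation (no $x^3$ term, so the roots of $x^4+3x+21$ sum to $-b/a = 0$) and the unit-circle claim for $x^{10}-1$ can both be checked numerically; a sketch using `numpy.roots`:

```python
import numpy as np

# Roots of x^4 + 3x + 21: the x^3 coefficient is 0, so the roots sum to 0.
r = np.roots([1, 0, 0, 3, 21])
print(abs(r.sum()))  # ~0 up to rounding

# Roots of x^10 - 1: all ten lie on the unit circle (|z| = 1),
# equally spaced at angles 2*pi*k/10.
r10 = np.roots([1] + [0] * 9 + [-1])
print(np.allclose(np.abs(r10), 1.0))  # True
```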
|
|linear-algebra|geometry|polynomials|roots|cyclotomic-polynomials|
| 0
|
How to prove this integral inequality $\int_{0}^{1}[f''(x)]^2dx\ge 12$?
|
Let $f\in C^{2}[0,1]$ such that $f(0)=0,f(1/2)=f(1)=1$ . Show that $$\int_{0}^{1}[f''(x)]^2dx\ge 12.$$ My idea: try to find a function $f(x)$ such that the equality holds and use Cauchy-Schwarz. So I tried some "good" functions to achieve the minimum value. Assume $f(x)=ax^3+bx^2+cx$ is a polynomial. Then I find that when $f(x)=-2x^2+3x$ , the integral $\int_{0}^{1}[f''(x)]^2dx$ has minimum value $16$ . It seems it doesn't help to solve the problem. Since $f''(x)=(f(x)-ax-b)''$ , replacing $f(x)$ by $f(x)-x$ we can change the condition to $f(0)=f(1)=0, f(1/2)=1/2$ . I find that $f(x)=\frac{1}{2}\sin(\pi x)$ satisfies the condition and $\int_{0}^{1}[f''(x)]^2dx=\frac{\pi^4}{8}=12.1761...$ which is very close to $12$ . But it still doesn't help. It seems that it isn't easy to find such $f(x)$ such that the equality holds. So how to prove this inequality? And what is the minimum value of this integral? Maybe not 12? Thank you!
|
Thanks for the answer. Now I can write down the remaining steps. Let $p(x)$ be the piecewise polynomial in the following answer. Integrating by parts, we can get $$\int_0^{\frac12} (f''^2- p''^2)dx=\int_0^{\frac12}(f''-p'')^2dx-2p''(1/2)f'(1/2).$$ Similarly, we have $$\int_{\frac12}^{1} (f''^2- p''^2)dx=\int_{\frac12}^{1}(f''-p'')^2dx+2p''(1/2)f'(1/2).$$ Then we have $\int_0^{1} f''^2dx\ge\int_0^{1} p''^2dx=12$ .
|
|real-analysis|integration|inequality|integral-inequality|
| 0
|
$ \lim_{s\to +\infty}\left(\int_{0}^{\frac{\pi}{2}} \frac{\sin^2(sx)}{\sin^2x}f(x)\,dx- \frac{\pi}{2}f(0)s-\frac{f'(0)}{2}\ln{s}\right)$
|
Suppose: $f:[0,\frac{\pi}{2}]\to \mathbb{R}$ , $f(x)\in C^2[0,\frac{\pi}{2}]$ , $\gamma=\lim_{n\to \infty}(1+1/2+\dotsb+1/n-\ln{n})$ . Prove: $$ \lim_{s\to +\infty}\left(\int_{0}^{\frac{\pi}{2}} \frac{\sin^2(sx)}{\sin^2x}f(x)\,dx- \frac{\pi}{2}f(0)s-\frac{f'(0)}{2}\ln{s}\right)= \int_{0}^{\frac{\pi}{2}} \frac{f(x)-f(0)-f'(0)x}{2\sin^2x}\,dx+\frac{f'(0)}{2}(1+\ln2+\gamma) . $$ I can only prove that $$\lim_{s\to +\infty}\frac{1}{s}\int_{0}^{\frac{\pi}{2}} \frac{\sin^2(sx)}{\sin^2x}f(x)\,dx=\frac{\pi}{2}f(0).$$ I think we can calculate the asymptotic expansion of $\int_{0}^{\frac{\pi}{2}} \frac{\sin^2(sx)}{\sin^2x}f(x)\,dx$ (as $s \to +\infty$ ), but it is hard for me.
|
$\newcommand{\ga}{\gamma}$ Let \begin{equation*} L(s):=\int_0^{\pi/2} \frac{\sin^2(sx)}{\sin^2x}\,f(x)\,dx- \frac\pi2f(0)s -\frac{f'(0)}2\ln s, \end{equation*} \begin{equation*} R:=\int_{0}^{\pi/2} \frac{f(x)-f(0)-f'(0)x}{2\sin^2x}\,dx+\frac{f'(0)}2\,(1+\ln2+\ga). \end{equation*} We have to show that \begin{equation*} L(s)\overset{\text{(?)}}\to R \tag{10}\label{10} \end{equation*} (as $s\to\infty$ ). Let \begin{equation*} g(x):=f(x)-f(0)-f'(0)x, \end{equation*} so that \begin{equation*} L(s)=I_1(s)+f(0)(I_2(s)-\pi s/2)+f'(0)(I_3(s)-\tfrac12\,\ln s), \end{equation*} \begin{equation*} R=\tfrac12\,J_1+f'(0)J_3, \end{equation*} where \begin{equation*} I_1(s):=\int_0^{\pi/2} \frac{\sin^2(sx)}{\sin^2x}\,g(x)\,dx, \end{equation*} \begin{equation*} I_2(s):=\int_0^{\pi/2} \frac{\sin^2(sx)}{\sin^2x}\,dx, \quad I_3(s):=\int_0^{\pi/2} \frac{\sin^2(sx)}{\sin^2x}\,x\,dx, \end{equation*} \begin{equation*} J_1:=\int_0^{\pi/2} \frac{g(x)\,dx}{\sin^2x},\quad J_3:=\frac{1+\ln2+\ga}2. \end{equation*} So,
|
|analysis|integration|
| 1
|
How to prove a sigma algebra is equal to the union of two given subsets?
|
EDIT: I edited it after user469053's correction. I'm doing a homework exercise about measure theory and can't figure out how to prove the following: Let $(X,A)$ be a measurable space, $g \subset A$ a generator of $A$ and $S \in A$ . Prove the following: \begin{equation} A = \{B \cup C : B \in \sigma(g \cap S), C \in \sigma(g \cap S^c) \} \end{equation} For one direction I have what I believe to be a correct proof, but I don't see how I can prove that $A \subset \{B \cup C : B \in \sigma(g \cap S), C \in \sigma(g \cap S^c) \}$ . An arbitrary element $X \in A$ I can write as: $X = (X \cap S) \cup (X \cap S^c)$ , but I don't know how to prove they are in their respective sigma algebras. (For the other direction I have that $(g \cap S), (g \cap S^c) \subset g$ , so we know that the following holds: $\sigma(g \cap S), \sigma(g \cap S^c) \subset \sigma(g) = A$ . Because a sigma algebra is closed under union, this also means: $\{B \cup C : B \in \sigma(g \cap S), C \in \sigma(g \cap S^c) \} \subset \
|
A note on notation: $g\cap S$ denotes $\{B\cap S:B\in g\}$ . Let $L$ denote the set of $B\in A$ such that $B\cap S\in \sigma(g\cap S)$ . $X\cap S=S\in \sigma(g\cap S)$ . For $(B_i)_{i=1}^\infty\subset L$ , $\Bigl(\bigcup_{i=1}^\infty B_i\Bigr)\cap S=\bigcup_{i=1}^\infty (B_i\cap S)\in \sigma(g\cap S)$ . $(X\setminus B)\cap S=S\setminus B=S\setminus (S\cap B)\in \sigma(g\cap S)$ . So $L$ is a $\sigma$ -algebra on $X$ . It clearly contains $g$ , since for any $B\in g$ , $B\cap S\in g\cap S\subset \sigma(g\cap S)$ . So $L$ is a $\sigma$ -algebra on $X$ containing $g$ , so it contains $\sigma(g)=A$ . Therefore $B\cap S\in \sigma(g\cap S)$ for any $B\in A$ . The same argument yields that $B\cap S^C\in \sigma(g\cap S^C)$ for any $B\in A$ . So for any $B\in A$ , we have $$B=(B\cap S)\cup (B\cap S^C)\in \{C\cup C':C\in \sigma(g\cap S),C'\in \sigma(g\cap S^C)\}.$$
|
|measure-theory|
| 1
|
On the curve $y=\frac{\sin (\pi x)}{x^p},x>0$, for what values of $p$ does the product of all the arc lengths between neighboring roots exist?
|
Consider the curve $y=\frac{\sin (\pi x)}{x^p}, x>0$ , shown here with $p=0.75$ . It occurred to me that if $p$ is large enough, then the curve flattens quickly, so the arc lengths between neighboring roots approach $1$ quickly, so the product of all those arc lengths should converge. So my question is: For what values of $p$ does the product of all the arc lengths between neighboring roots exist? That is, given $f(x)=\dfrac{\sin (\pi x)}{x^p}$ , for what real values of $p$ does the following limit exist: $$L=\lim_{n\to\infty}\prod_{k=1}^n \int_{k}^{k+1}\sqrt{1+(f'(x))^2}dx$$ Numerical investigation suggests that when $p=1$ , $L\approx 4.93$ ; and when $p=0.5$ , $L$ does not exist. If that's true, then there must be some critical value of $p$ between $0.5$ and $1$ , and I wonder what it is. Possibly related : Convergence $I=\int_0^\infty \frac{\sin x}{x^s}dx$ Context: I am interested in limits of geometrical products. Here is another question about a limit of a product of arc lengths.
|
Starting from @Varun Vejalla's answer $$I_k=\int_k^{k+1}f'(x)^2dx=\int_k^{k+1}\frac{4\pi^2x^2\cos^2(\pi x)+\sin^2(\pi x)-2\pi x\sin(2\pi x)}{4x^3}\,dx$$ the antiderivative is rather simple to compute and $$I_k=\frac{ \pi ^2}{4} (\text{Ci}(2 k \pi )-\text{Ci}(2 (k+1) \pi ))+\frac{1}{2} \pi ^2 \log \left(\frac{k+1}{k}\right)$$ Expanded for large values of $k$ $$I_k=\frac{ \pi ^2}{4} \left(\frac{2}{k}-\frac{1}{k^2}+\frac{8 \pi ^2-6}{12 \pi^2 k^3}+\frac{9-6 \pi ^2}{12 \pi ^2 k^4}+O\left(\frac{1}{k^5}\right)\right)$$ Numerically, this gives $I_{10}=\color{red}{0.47022}15$
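The value of $I_{10}$ can be double-checked by direct numerical integration of the integrand above; a sketch using a hand-rolled composite Simpson rule (assuming only NumPy):

```python
import numpy as np

def integrand(x):
    # f'(x)^2 for f(x) = sin(pi x)/sqrt(x), as displayed above
    return (4 * np.pi**2 * x**2 * np.cos(np.pi * x)**2
            + np.sin(np.pi * x)**2
            - 2 * np.pi * x * np.sin(2 * np.pi * x)) / (4 * x**3)

def simpson(f, a, b, n=2000):
    # composite Simpson rule, n even
    x = np.linspace(a, b, n + 1)
    w = np.ones(n + 1)
    w[1:-1:2], w[2:-1:2] = 4, 2
    return (b - a) / (3 * n) * np.dot(w, f(x))

I10 = simpson(integrand, 10, 11)
print(I10)  # approximately 0.4702
```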
|
|calculus|integration|limits|products|arc-length|
| 0
|
Subfunctor from a category to Set
|
I'm looking for the definition of a subfunctor $R$ of a functor $U:\cal K \to \mathbb {Set}$. Are there any other conditions beyond that $R(A)\subseteq U(A)$ for all objects $A\in \cal K$ ?
|
On top of the condition that $R(A)\subseteq U(A)$ for every object $A$ in the category $\mathcal{K}$ , you need the morphisms of $\mathcal{K}$ to behave accordingly: If $f:A\to B$ is a morphism in $\mathcal{K}$ , then $R(f)$ should send $R(A)$ to $R(B)$ . Since $R$ is a subfunctor of $U$ , $R(f)$ is the restriction of $U(f)$ to the set $R(A)$ , we may write this condition as $$Uf[R(A)]\subseteq R(B).$$ Now remember that this should hold simultaneously for all the objects and morphisms in $\mathcal{K}$ !
|
|category-theory|functors|forgetful-functors|
| 0
|
Numerical Inverse Z-Transform - Abate and Whitt
|
I'm trying to implement an inverse Z-Transform using the Fourier series based technique by Abate and Whitt: The Fourier-series method for inverting transforms of probability distributions. Numerical inversion of probability generating functions. We approximate the integral numerically by $$f(n) \approx \frac{1}{2n\rho^n}\bigg[ \tilde{f}(\rho) + 2 \sum^{n-1}_{j = 1} (-1)^j\text{Re}\tilde{f}(\rho e^{\frac{j\pi i}{n}}) + (-1)^n\tilde{f}(-\rho) \bigg]$$ and setting $\rho = 10^{-\lambda/2n}$ gives an accuracy of $10^{-\lambda}$ .

import cmath
import numpy as np

def abate_whitt(ftilde, n):
    lmbda = 1
    rho = 10 ** (-lmbda / (2 * n))
    e = cmath.e
    pi = cmath.pi
    summation = 0
    for k in range(1, n):
        summation += ((-1)**k) * (np.real(ftilde(rho * e**((1j * pi * k) / n))))
    summation *= 2
    part1 = ftilde(rho)
    part2 = ((-1)**n) * ftilde(-rho)
    scale_factor = 1 / (2 * n * (rho**n))
    result = scale_factor * (part1 + summation + part2)
    return result

I then test it for pairs of functions and started off with $\
|
Can you please tell me where you found the above formula? If you have poles on or outside the unit circle you may use this formula: $$f(\tilde{f},T,\rho,n)\approx \frac{\rho ^T}{n}\cdot \sum _{k=0}^{n-1} \Re\left(\tilde{f}\left(\rho\cdot \exp \left(\frac{i ((2 \pi ) k)}{n}\right)\right)\cdot \exp \left(\frac{i ((2 \pi ) k T)}{n}\right) \right)$$ IMPORTANT : all poles of $\tilde{f}(z)$ must be located inside the circle with radius $\rho$ , and $\tilde{f}$ must have no poles at $\infty$ .

import matplotlib.pyplot as plt
import numpy as np

# Numerical approximation of inverse Z-Transform.
def abate_whitt(ftilde, T, rho=1, n=50):
    sum = 0.0
    for k in range(0, n):
        exp1 = np.exp(1j * 2*np.pi * k / n)
        exp2 = exp1**T
        sum += ftilde(rho*exp1) * exp2
    return (rho**T / n) * sum.real

# Z-functions for test.
def ftilde1(z):
    return z / (z - 1)

def ftilde2(z):
    return z / ((z + 1/2) * (z + 1/3))

# Exact solution
def f1(T):
    return 1

def f2(T):
    return 6*(-(-1/2)**T + (-1/3)**T)

T = 1
val1 = abate_whitt(ftilde1, T, 2)
val2 = f1(T)
pri
|
|fourier-analysis|fourier-transform|python|inverse-laplace|z-transform|
| 0
|
How to prove this integral inequality $\int_{0}^{1}[f''(x)]^2dx\ge 12$?
|
Let $f\in C^{2}[0,1]$ such that $f(0)=0,f(1/2)=f(1)=1$ . Show that $$\int_{0}^{1}[f''(x)]^2dx\ge 12.$$ My idea: try to find a function $f(x)$ such that the equality holds and use Cauchy-Schwarz. So I tried some "good" functions to achieve the minimum value. Assume $f(x)=ax^3+bx^2+cx$ is a polynomial. Then I find that when $f(x)=-2x^2+3x$ , the integral $\int_{0}^{1}[f''(x)]^2dx$ has minimum value $16$ . It seems it doesn't help to solve the problem. Since $f''(x)=(f(x)-ax-b)''$ and replace $f(x)$ by $f(x)-x$ , we can change the condition to $f(0)=f(1)=0, f(1/2)=1/2$ . I find that $f(x)=\frac{1}{2}\sin(\pi x)$ satisfies the condition and $\int_{0}^{1}[f''(x)]^2dx=\frac{\pi^2}{8}=12.1761...$ which is very closed to $12$ . But it still doesn't help. It seems that it isn't easy to find such $f(x)$ such that the equality holds. So how to prove this inequality? And what is the minumum value of this integral? May be not 12? Thank you!
|
I'll work with the condition $f(0)=f(1)=0, f(1/2)=1/2$ . The idea of the following calculation is to minimize $\int_0^1 f''(x)^2 \, dx $ on the intervals $[0, 1/2]$ and $[1/2, 1]$ separately. Without further restrictions that would give a minimum of zero, attained by a piecewise linear function, which is not (twice) differentiable on $[0, 1]$ . Therefore we introduce an additional parameter $a \in \Bbb R$ and minimize the integral over the left interval subject to the conditions $f(0) = 0, f(1/2) = 1/2, f'(1/2) = a$ , and over the right interval subject to the conditions $f(1/2) = 1/2, f'(1/2) = a, f(1) = 0$ . Doing integration by parts, $$ \frac 12 = f\left( \frac 12\right) - f(0) = \int_0^{1/2} f'(x) \, dx = \left .x f'(x)\right]_{x=0}^{x=1/2} - \int_0^{1/2} x f''(x) \, dx \\ = \frac a2 - \int_0^{1/2} x f''(x) \, dx \, , $$ so that $$ \left( \frac{a-1}{2}\right)^2 = \left( \int_0^{1/2} x f''(x) \, dx\right)^2 \le \int_0^{1/2} x^2 \, dx \cdot \int_0^{1/2} f''(x)^2 \, dx \\ = \frac{1}{
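The equality case of this argument (with $a=0$ , Cauchy-Schwarz forces $f''(x)\propto x$ on $[0,1/2]$ and symmetrically on $[1/2,1]$ ) gives the piecewise cubic $f(x)=-2x^3+\tfrac32 x$ on $[0,1/2]$ , mirrored about $x=\tfrac12$ . A numerical sketch checking that it meets the constraints and attains $\int_0^1 f''^2\,dx=12$ :

```python
import numpy as np

# Extremal function for the shifted problem f(0) = f(1) = 0, f(1/2) = 1/2:
# f(x) = -2x^3 + (3/2)x on [0, 1/2], mirrored on [1/2, 1].
def f(x):
    x = np.minimum(x, 1 - x)   # exploit the symmetry about x = 1/2
    return -2 * x**3 + 1.5 * x

def fpp(x):                    # second derivative
    return -12 * np.minimum(x, 1 - x)

print(f(0.0), f(0.5), f(1.0))  # 0.0 0.5 0.0

# Simpson's rule for the integral of f''(x)^2 over [0, 1]
n = 2000
x = np.linspace(0, 1, n + 1)
w = np.ones(n + 1)
w[1:-1:2], w[2:-1:2] = 4, 2
integral = np.dot(w, fpp(x)**2) / (3 * n)
print(integral)  # 12.0 (up to rounding)
```

Note that $f''$ is even continuous at $x=\tfrac12$ (both pieces give $-6$ there), so the extremal function is genuinely in $C^2[0,1]$ .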
|
|real-analysis|integration|inequality|integral-inequality|
| 0
|
Examining why the proof for the intersection of (finite) collections of open sets being open in a metric space does not extend to infinite collections
|
In a metric space, we have that the intersection of countably many open sets is open: Let $A_1, A_2, \ldots, A_n$ be open sets and $A = \bigcap_{i=1}^n A_i$ . For $x \in A$ , $x \in A_i$ for all $i$ , and since each $A_i$ is open, $\exists r_i > 0$ with $B_{r_i}(x) \subseteq A_i$ . Set $r = \min\{r_1, r_2, \ldots, r_n\}$ , then $B_r(x) \subseteq A$ . Since $x$ was arbitrary, $A$ is open. I am trying to understand why this proof, does not hold for uncountably infinite sets and I believe the problem lies in this step: $r = \min\{r_1, r_2, \ldots, r_n\}$ I think this because: For any collection of elements of any space that I am considering the intersection of, there is a associated $r_i$ with each element. Then since we are considering a metric space $\{r_1, r_2, \ldots, r_i, \ldots\} \subset \mathbb{R}$ . Then it is not necessarily true that there exists a minimum $r_i$ . As say $\{r_1, r_2, \ldots, r_i, \ldots\}$ was the set $(0,1)$ . I am aware that I can consider the many counterexam
|
The issue is not "countable vs uncountable" but rather "finite vs infinite". It is not true in general that a countable intersection of open sets is open, and the problem is exactly what you mention that an infinite set of positive radii may have infimum zero. For example in $\mathbb R$ , $$ \bigcap_n\Big(-\frac1n,\frac1n\Big)=\{0\}, $$ which is not open.
|
|real-analysis|general-topology|analysis|
| 1
|
Continuity and Product Spaces
|
I have the following basic question regarding continuous functions. Is the following statement correct? Fix nonempty toplogical spaces $A, B, C, D$ with $A \times C$ and $B \times D$ endowed with the product topology. Also, fix functions \begin{align*} h & : A \times C \to B \times D ,\\ f &: A \to B ,\\ g &: C \to D , \end{align*} with $h := (f, g)$ . Thus, if $h$ and $f$ are continuous, then $g$ is continuous. I know that if $f$ and $g$ are continuous, then $h$ is continuous, while if $h$ is continuous it is not necessarily the case that both $f$ and $g$ are continuous. What about this 'intermediate' case? Thanks a lot in advance for any feedback.
|
The function $h$ is continuous if and only if $f,g$ are. For one direction, assume $f,g$ are continuous and pick a net $(x_\alpha,y_\alpha)_\alpha$ in $A\times C$ converging to $(x,y)$ . Then $\lim_\alpha x_\alpha=x$ in $A$ and $\lim_\alpha f(x_\alpha)=f(x)$ by continuity of $f$ . Similarly, $\lim_\alpha g(y_\alpha)=g(y)$ by continuity of $g$ . So $\lim_\alpha h((x_\alpha, y_\alpha))=(f(x),g(y))=h((x,y))$ , and $h$ is continuous. For the other direction, assume $h$ is continuous. Fix any $y\in C$ (here is where the non-emptiness of $C$ becomes necessary). Define $a:A\to A\times C$ by $a(x)=(x,y)$ . Define $b:B\times D\to B$ by $b(u,v)=u$ . Then $a,b$ are continuous and $f=b\circ h\circ a$ . Thus $f$ is also continuous, since it is the composition of continuous functions. By symmetry, so is $g$ .
|
|general-topology|functions|continuity|products|
| 1
|
Currently, what is the largest publicly known prime number such that all prime numbers less than it are known?
|
So recently, an absurdly large prime number was found, but a lot of prime numbers less than it are still not known. I am wondering up to where we know all the primes. I put "currently publicly known" because there is a chance that some government agency has a longer list for crypto reasons or something like that.
|
Yoni Rozenshein ( https://math.stackexchange.com/users/36650/yoni-rozenshein ), Currently, what is the largest publicly known prime number such that all prime numbers less than it are known?, URL (version: 2013-03-14): https://math.stackexchange.com/q/330221 : "Given current computing abilities, I'd guess your prime is somewhere between 2^50 and 2^60." Since autumn 2020 that "time-dependent prime" is well above 2^78. See the b-file of OEIS https://oeis.org/A033844 ("a(n) = prime(2^n)"). Line 78 in this text file reads: 78 17254990129969542495182251 It may take some more years to update https://oeis.org/A095124 ("a(n) = prime(2^(2^n))") beyond a(6) = 870566678511500413493.
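For small $n$ the sequence prime(2^n) is easy to reproduce with a sieve; a sketch checking a(4) = prime(16) = 53 and a(10) = prime(1024) = 8161 against A033844:

```python
def primes_up_to(limit):
    # simple Sieve of Eratosthenes
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_up_to(10_000)
print(primes[2**4 - 1], primes[2**10 - 1])  # 53 8161  (prime(16), prime(1024))
```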
|
|soft-question|prime-numbers|
| 0
|
Algebraic manipulation of universal quantifier
|
I apologize in advance if my title is not super descriptive, I'm not exactly sure if "algebraic" is the correct term. Let's say I have two sequences, $A$ and $B$ , where $A_i$ is the $i$ th element, 1 indexed. $n$ is the size of $A$ , and $m$ is the size of $B$ . If I have the following statement about these sequences: $$\forall_{i,j} ((1 \leq i \leq n \wedge 1 \leq j \leq n \wedge i < j) \rightarrow A_i \leq A_j) \wedge \forall_{i,j} ((1 \leq i \leq m \wedge 1 \leq j \leq m \wedge i < j) \rightarrow B_i \leq B_j) \wedge \forall_{i,j} ((1 \leq i \leq n \wedge 1 \leq j \leq m) \rightarrow A_i \leq B_j)$$ Essentially saying that "A and B are sorted nondecreasing, and all elements of A are $\leq$ all elements of B", I know that the following statement must be true of $C = AB$ ( $C$ is the concatenation of these two sequences): $$\forall_{i,j} ((1 \leq i \leq n+m \wedge 1 \leq j \leq n+m \wedge i < j) \rightarrow C_i \leq C_j)$$ I know this to be true of course based on intuition, but how do I manipulate the first statement in a way to reflect the second? The farthest I got was extracting $A_n \leq B_1$ and trying to figure out how to use that to my advantage. I'm sure there is a way to do this somewhat how we can with De Morgan's laws to predicate logic,
|
I don't know why you write the universal quantifier with several bound variables in your initial statement (instead of just one quantifier at the beginning). But anyway, your initial statement says: Let $A=(A_i)_{1\leq i\leq n}$ , $B=(B_i)_{1\leq i\leq m}$ be finite (not strictly) increasing sequences, such that for any natural $1\leq i\leq n$ and $1\leq j\leq m$ we have $A_i \leq B_j$ . You want to prove essentially that for any $1\leq k,t\leq m+n$ with $k\leq t$ , the sequence $C=(A_1,\dots, A_n,B_1,\dots,B_m)$ satisfies $C_k \leq C_t$ , i.e. $C$ is increasing. To prove that, take any $k,t$ with $k\le t$ as above, and consider the scenarios. $k,t\leq n$ : then $C_k,C_t=A_k,A_t$ so $C_k\leq C_t$ (same with $k,t>n$ ). $k\leq n < t$ : then $C_k=A_k,C_t=B_{t-n}$ so $C_k\le C_t$ (as $A_i\le B_j$ for any appropriate indices $i,j$ ). These are all the possible scenarios, and hence we proved our theorem.
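The case analysis above is easy to spot-check on concrete (hypothetical) sequences:

```python
def nondecreasing(seq):
    return all(seq[i] <= seq[i + 1] for i in range(len(seq) - 1))

# Hypothetical example data satisfying the hypotheses:
# A, B nondecreasing and every element of A <= every element of B.
A = [1, 3, 3, 5]
B = [5, 6, 9]
assert nondecreasing(A) and nondecreasing(B)
assert all(a <= b for a in A for b in B)

C = A + B                # concatenation
print(nondecreasing(C))  # True
```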
|
|sequences-and-series|first-order-logic|
| 0
|
If the trace of a matrix equals its rank, is it idempotent?
|
It is well-known and can easily be proven that if a matrix $A$ is idempotent, then its trace equals its rank: $$ A^2 = A \Rightarrow \mathrm{tr}(A) = \mathrm{rk}(A) $$ Does the converse also hold? If so, how can this be proven?
|
The idea is that $A$ is idempotent if and only if it is diagonalizable with all eigenvalues in $\{0,1\}$ , and the condition $\operatorname{tr}(A)=\operatorname{rk}(A)$ cannot ensure this. So we can construct a counterexample whose eigenvalues are not all zero or one. For example, consider $$A=\begin{pmatrix}3 & 0 \\ 0 & -1\end{pmatrix},$$ which has rank $2$ and $\operatorname{tr}(A)=2$ , but $A^2\ne A$ .
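A quick check of this counterexample with NumPy:

```python
import numpy as np

A = np.diag([3.0, -1.0])

print(np.linalg.matrix_rank(A))  # 2
print(np.trace(A))               # 2.0
print(np.array_equal(A @ A, A))  # False: A^2 = diag(9, 1) != A
```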
|
|linear-algebra|matrices|matrix-rank|trace|idempotents|
| 1
|
Is the angle on the Cartesian coordinate system between dots of all complex roots of a polynomial with real coefficients the same?
|
So far I realized that any polynomial with complex roots has an even number of complex roots, because for every $(x-(a-b*i))$ there is $(x-(a+b*i))$ in order for the coefficients to be real. That means that odd degree polynomials have at least one real root. Because of the symmetry of complex conjugate roots, e.g. $a-b*i$ and $a+b*i$ , the angle between each of them and the $x$ axis is the same. But is it the same for the other roots? Here is an example polynomial: $x^4 + 3x + 21$ . Here is the graph, but I am not sure whether all the angles are the same; at least I don't see a geometric reason for that. complex roots on coordinate system for x^4 + 3x +21 And why are all complex roots of $x^{10} - 1$ on a circle of radius $1$ , at the same distance from each other? complex roots on coordinate system
|
I see different angles in your figure. Here is a case where the difference is more obvious: the roots of $x^4 - 2 x^3 + 3 x^2 - 2 x + 2$ plotted on the complex plane. If you only meant is the angle the same within each conjugate pair of roots, but can be a different angle for another pair of roots, then of course the answer is yes for the reason you have already given. If you want to know how I came by this example, I decided what I wanted my four roots to be (remembering that I needed the ones in the lower plane to be mirror images of the ones in the upper plane), then I wrote out the polynomial in the form $(x-r_1)(x-r_2)(x-r_3)(x-r_4)$ and multiplied it out. (Technically, I took the lazy way out and had Wolfram Alpha multiply it for me, as well as generate the graph of the roots.)
|
|linear-algebra|geometry|polynomials|roots|cyclotomic-polynomials|
| 0
|
Markov Chain Transition Matrix eigenvalues - Textbook Exercise
|
I am trying to solve the following problem, taken from the book Markov Chains and Mixing Times, 2nd edition (Exercise 12.2): Let $P$ be an irreducible transition matrix, and suppose that $A$ is a matrix with $0 \le A(i,j) \le P(i,j)$ and $A \ne P$ . Show that any eigenvalue $\lambda$ of $A$ satisfies $|\lambda| < 1$ . Now I know that $P$ 's biggest eigenvalue is 1. I have tried several approaches but I am not sure how to use irreducibility in this context. Maybe if the chain has some period it is possible to look at the chain with the transition matrix raised to the power of the period, to get an aperiodic chain? Help will be appreciated. Link to the book: https://pages.uoregon.edu/dlevin/MARKOV/markovmixing.pdf
|
I think I figured out the solution: Assume $P(i,k) > A(i,k)$ for some $i,k \in \mathcal{X}$ where $\mathcal{X}$ is the state space, so $P$ and $A$ are of size $|\mathcal{X}| \times |\mathcal{X}|$ . Also assume by contradiction that $Av = \lambda v$ for some $|\lambda| \ge 1$ . Then we have: $$|Av(i)| = \Bigl|\sum_{j \in \mathcal{X}} A(i,j)v(j)\Bigr| \le \sum_{j \in \mathcal{X}} A(i,j)|v(j)| \le \sum_{j \in \mathcal{X}} A(i,j)\max\limits_{l \in \mathcal{X}} |v(l)| < \max\limits_{l \in \mathcal{X}} |v(l)|,$$ where the last inequality is strict because $\sum_{j} A(i,j) < \sum_{j} P(i,j) = 1$ for this row $i$ . In addition: $|Av(i)| = |\lambda| |v(i)| \ge |v(i)|$ . Hence: $|v(i)| < \max\limits_{l \in \mathcal{X}} |v(l)|$ , meaning not all the vector elements are the same (in absolute value). Now assume $P$ is aperiodic (else use @user8675309's proposition); then there exists a time $t$ such that all elements of $P^t$ are strictly positive and the original element-wise inequality still holds. Denote $A' = A^t$ , $P' = P^t$ and let $s \in \mathcal{X}$ . Then $$|A'v(s)| = \Bigl|\sum_{j \in \mathcal{X}} A'(s,j)v(j)\Bigr| \le \sum_{j \in \mathcal{X}} A'(s,j)|v(j)| \le \sum_{j \in \mathcal{X}} P'(s,j) |v(
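A small numerical illustration of the statement being proved (a sketch with a hypothetical 2-state chain; $P$ is irreducible with period 2, and $A \le P$ entrywise with $A \ne P$ ):

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # irreducible, periodic; eigenvalues +1 and -1
A = np.array([[0.0, 1.0],
              [0.5, 0.0]])   # 0 <= A <= P entrywise, A != P

spec_P = np.max(np.abs(np.linalg.eigvals(P)))
spec_A = np.max(np.abs(np.linalg.eigvals(A)))
print(spec_P)  # 1.0
print(spec_A)  # ~0.7071, strictly below 1
```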
|
|linear-algebra|probability|eigenvalues-eigenvectors|markov-chains|mixing|
| 0
|
Seeking Counterexamples: Bilinear Maps Continuous in Components but Not Globally in Non-Complete Normed Spaces
|
It can be shown that when X, Y, and Z are all Banach spaces (or at least when X or Y are Banach spaces) over the number fields R or C, and when B : X×Y →Z is a bilinear function, the continuity of B for each component implies its overall continuity. However, I'm curious about the scenario when X, Y, and Z are merely normed spaces, not necessarily complete. Is it possible for a bilinear mapping to be continuous per component, but not continuous as a whole? The abovementioned proposition can be proven using the Uniform Boundedness Principle. Therefore, it seems natural to search for an example that fails to satisfy the UBP as a typical counterexample to this statement. Despite this, I'm finding it challenging to conceive a meaningful counterexample. Could anyone provide an instance that illustrates this, or give some guidance on how to think about this problem?
|
Take $X=Y=c_{00}$ to be the space of sequences with finitely many non-zero entries supplied with the $l^2$ -norm. Define $$ B(x,y):=\sum_{k=1}^\infty k^3 x_k y_k. $$ For fixed $x$ the map $y\mapsto B(x,y)$ is continuous. Same for $x\mapsto B(x,y)$ for $y$ fixed. However, $B$ is not continuous, as $B(k^{-1}e_k,k^{-1}e_k)=k\not\to0=B(0,0)$ , where $e_k=(0,\dots,0,1,0,\dots)$ is the standard unit vector.
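A numerical sketch of this counterexample (truncating the sequences to finitely many entries, which is all $c_{00}$ needs):

```python
import numpy as np

def B(x, y):
    # B(x, y) = sum_k k^3 x_k y_k on finitely supported sequences
    k = np.arange(1, len(x) + 1)
    return float(np.sum(k**3 * np.asarray(x) * np.asarray(y)))

def e(k):
    v = np.zeros(k)
    v[k - 1] = 1.0   # k-th standard unit vector
    return v

# x_k = e_k / k has l2-norm 1/k -> 0, yet B(x_k, x_k) = k^3 / k^2 = k -> oo
for k in (1, 10, 100):
    x = e(k) / k
    print(np.linalg.norm(x), B(x, x))
```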
|
|functional-analysis|operator-theory|banach-spaces|examples-counterexamples|
| 1
|
How to derive explicit matrix representations for the non-diagonal generators of a simple Lie algebra?
|
Given a semi simple Lie algebra's Dynkin diagram/Cartan matrix one can easily find out the weights of any particular representation. The weights of the defining representation are enough to give the mutually commuting i.e. Cartan generators in Cartan-Weyl basis directly. But how do we derive explicit matrix representations of the non-Cartan generators (laddering operators) i.e. the ones which don't commute? Ofcourse, if I know the matrix representation from the beginning, then we can just take linear combinations to move to the Cartan-Weyl basis: but that's not what I am doing here. Is it possible to derive for example the non-diagonal Pauli matrices for $\mathfrak{su}(2)$ from the root/weight info of $A_1$ ?(I think "yes", because the Cartan matrix encodes full info about the Lie algebra). How do we do this? If only I could find the action of these generators $E_\alpha$ on weight vectors, then I could get the matrix elements easily. I tried using the commutation relations between the
|
All you are looking for, in effect, is a Cartan-Weyl basis for the Lie algebra but this is quite easy to work out directly, at least in the defining representations (what physicists call the fundamental representation) of the classical Lie algebras, and from there you can often extend to other possible representations as well. Note one important thing though. You are using "the" a lot when everything here is a choice. We choose what basis we want for the representation and then we choose a scale for each $E_\alpha$ . Probably we ensure $[E_\alpha,E_{-\alpha}] = H_\alpha$ and maybe that $E_{\alpha+\beta} = [E_\alpha,E_{\beta}]$ where possible (for all simple roots perhaps). Turning a weight system directly into a matrix representation is more of a pain though. Using $E_\alpha (V_\lambda) \subset V_{\lambda + \alpha}$ we get which entries in the matrix must be $0$ , and using $[E_\alpha,E_{-\alpha}] = H_\alpha$ we can work out what the remaining entries have to be (again this is all depe
|
|abstract-algebra|matrices|representation-theory|lie-algebras|semisimple-lie-algebras|
| 0
|
Properties of sum of two exponential functions
|
I have some data that can be fit reasonably well with an exponential function. However, a colleague mentioned that it would be better to use the sum of two exponentials: $$ f(x; a, b_1, b_2, \lambda_1, \lambda_2) = a + b_1 e^{\lambda_1 x} + b_2 e^{\lambda_2 x} $$ There was no mathematical justification given, and I'm curious: assuming that this is used to model what is roughly exponential (so $\lambda_1$ and $\lambda_2$ have the same sign, for example), what properties does this kind of function have? Does this function have a name? I assume there must be a fully general case, which is an infinite sum of exponentials? I'm having trouble searching online for this, because Google thinks I'm referring to the sum of exponential random variables.
|
Such functions are the solutions to second order differential equations of the form $y''+ay'+by=0$ -- though for many $a$ and $b$ , the values of $\lambda$ will be complex: $$ \lambda=\frac{-a \pm \sqrt{a^2-4b}}2 $$ Compared to a single exponential, having this mix lets you do a couple of things. First, it lets you have a mix of time scales. Like, if you have heat transfer between two objects and the rest of the world -- you might have a short time scale on which heat moves between the two objects, and a long time scale on which heat moves between them and their surroundings. It lets you fit systems that can't start changing immediately, but have some inertia -- though for many such systems, you end up with complex $\lambda$ . I'm not sure about the name for this family of functions. But the underlying dynamics would be called a second order system. Specifically, for $\lambda$ being negative real, an "overdamped" second order system.
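The connection can be verified directly: with $\lambda_{1,2}=\frac{-a\pm\sqrt{a^2-4b}}{2}$ we have $\lambda_1+\lambda_2=-a$ and $\lambda_1\lambda_2=b$ , so any $y=b_1e^{\lambda_1 x}+b_2e^{\lambda_2 x}$ satisfies $y''+ay'+by=0$ . A sketch with hypothetical parameter values:

```python
import numpy as np

# Hypothetical coefficients for illustration (both rates negative, i.e.
# an overdamped mix of two time scales):
b1, b2, l1, l2 = 2.0, -0.5, -1.0, -3.0
a, b = -(l1 + l2), l1 * l2            # so l1, l2 solve r^2 + a r + b = 0

x = np.linspace(0.0, 2.0, 50)
y   = b1 * np.exp(l1 * x) + b2 * np.exp(l2 * x)
yp  = b1 * l1 * np.exp(l1 * x) + b2 * l2 * np.exp(l2 * x)
ypp = b1 * l1**2 * np.exp(l1 * x) + b2 * l2**2 * np.exp(l2 * x)

residual = np.max(np.abs(ypp + a * yp + b * y))
print(residual)  # ~0 at machine precision
```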
|
|exponential-function|
| 0
|
Probability that the centroid of a triangle is inside its incircle
|
Question : The vertices of triangles are uniformly distributed on the circumference of a circle. What is the probability that the centroid is inside the incircle? Simulations with $10^{10}$ trials give a value of $0.457982$ . It is interesting to note that this agrees with $\displaystyle \frac{G}{2}$ to six decimal places, where $G$ is Catalan's constant . Julia source code:

using Random

inside = 0
step = 10^7
target = step
count = 0

function rand_triangle()
    angles = sort(2π * rand(3))
    cos_angles = cos.(angles)
    sin_angles = sin.(angles)
    x_vertices = cos_angles
    y_vertices = sin_angles
    return x_vertices, y_vertices
end

function incenter(xv, yv)
    a = sqrt((xv[2] - xv[3])^2 + (yv[2] - yv[3])^2)
    b = sqrt((xv[1] - xv[3])^2 + (yv[1] - yv[3])^2)
    c = sqrt((xv[1] - xv[2])^2 + (yv[1] - yv[2])^2)
    s = (a + b + c) / 2
    incenter_x = (a * xv[1] + b * xv[2] + c * xv[3]) / (a + b + c)
    incenter_y = (a * yv[1] + b * yv[2] + c * yv[3]) / (a + b + c)
    incircle_radius = sqrt(s * (s - a) * (s - b) * (s - c))
|
Long comment . Here is evidence that the probability is close to $0.458$ . Assume that the circle is a unit circle centred at the origin, and the vertices of the triangle are: $A\space(\cos(-2Y),\sin(-2Y))$ where $0\le Y\le\pi$ $B\space(\cos(2X),\sin(2X))$ where $0\le X\le\pi$ $C\space(1,0)$ So: $a=BC=2\sin X$ $b=AC=2\sin Y$ $c=AB=|2\sin(X+Y)|$ The incircle has centre $\left(\frac{a\cos (-2Y)+b\cos (2X)+c}{a+b+c},\frac{a\sin (-2Y)+b\sin (2X)}{a+b+c}\right)$ and radius $\sqrt{\frac{(s-a)(s-b)(s-c)}{s}}$ where $s=\frac{a+b+c}{2}$ . The coordinates of the centroid are $\left(\frac{\cos (-2Y)+\cos (2X)+1}{3},\frac{\sin (-2Y)+\sin (2X)}{3}\right)$ . So the probability in the OP is the probability that $$\left(\frac{\cos (-2Y)+\cos (2X)+1}{3}-\frac{a\cos (-2Y)+b\cos (2X)+c}{a+b+c}\right)^2+\left(\frac{\sin (-2Y)+\sin (2X)}{3}-\frac{a\sin (-2Y)+b\sin (2X)}{a+b+c}\right)^2<\frac{(s-a)(s-b)(s-c)}{s}$$ This probability is the ratio of the area of the shaded region to the area of the square in the graph below. By symmetry, we only need to consider the
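A much smaller Monte Carlo cross-check of the quoted simulation, as a Python sketch (it assumes the same model: three independent vertex angles, uniform on $[0,2\pi)$):

```python
import math, random

def centroid_in_incircle(rng):
    ts = [rng.uniform(0, 2 * math.pi) for _ in range(3)]
    A, B, C = (complex(math.cos(t), math.sin(t)) for t in ts)
    a, b, c = abs(B - C), abs(A - C), abs(A - B)    # side lengths
    s = (a + b + c) / 2
    incenter = (a * A + b * B + c * C) / (a + b + c)
    r_sq = (s - a) * (s - b) * (s - c) / s          # inradius squared
    centroid = (A + B + C) / 3
    return abs(centroid - incenter) ** 2 < r_sq

rng = random.Random(1)
n = 40_000
p = sum(centroid_in_incircle(rng) for _ in range(n)) / n
print(p)   # should land near 0.458 ≈ G/2
```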
|
|probability|integration|geometry|triangles|geometric-probability|
| 0
|
What is the geometrical interpretation of $1 - i \cdot \text{Im}\left(\frac{1}{z}\right) = z$ in the Gaussian number plane?
|
What is the geometrical interpretation of $1 - i \cdot \text{Im}\left(\frac{1}{z}\right) = z$ in the Gaussian number plane? I get that the first term is the real number $1$ with no imaginary part, so it lies on the real axis. Does it basically mean I have a point at $1$ on the real axis and, depending on the value of $z$ , I transform it so that it is equal to the point $z$ ? The last part is what I do not get.
|
Let $z=x+iy$ . Then $\dfrac{1}{z}=\dfrac{x-iy}{x^2+y^2}$ . So $\operatorname{Im}\left(\dfrac{1}{z}\right)=\dfrac{-y}{x^2+y^2}$ . The given condition implies $$\begin{cases}x&=1\\y&=\dfrac{y}{x^2+y^2}\end{cases}$$ This means $y(y^2+1)-y=0$ , so $y=0$ . The geometric representation of this is thus a single point $(1,0)$ .
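A quick numerical sanity check of this conclusion, as a sketch ( `f` is just a helper name for the left-hand side):

```python
def f(z):
    """Left-hand side 1 - i * Im(1/z), as a complex number."""
    return 1 - 1j * (1 / z).imag

print(abs(f(1 + 0j) - (1 + 0j)))   # 0.0: z = 1 satisfies the equation
for z in (1j, 2 + 0j, 1 + 1j, -1 + 0.5j):
    print(z, abs(f(z) - z))        # all strictly positive: these are not solutions
```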
|
|complex-numbers|
| 0
|
How to calculate Gal$(F(\mu_{p^\infty})/F(\mu_p))$ for a number field $F$?
|
Let $F$ be a number field. Recall that we define $$F(\mu_{p^\infty})=\bigcup_{n=1}^{\infty}F(\mu_{p^n}).$$ I want to calculate the group Gal $(F(\mu_{p^\infty})/F(\mu_p))$ . I know that this is supposed to be equal to $\mathbb{Z}_p$ , but I am unable to prove it. Every reference I have seen just states this directly, which means that this is probably really simple to do, but I am unable to see it.
|
I wouldn't say this isomorphism is "really simple" when learning these things for the first time. It involves an isomorphism in infinite Galois theory and an isomorphism between a certain subgroup of $\mathbf Z_p^\times$ and $\mathbf Z_p$ . In particular, the group ${\rm Gal}(F(\mu_{p^\infty})/F(\mu_{p}))$ is not really "directly" isomorphic to $\mathbf Z_p$ in any natural way, but rather this comes out as the result of composing some isomorphisms. To start off, the statement you made is not quite true since it has counterexamples when $p = 2$ : $$ {\rm Gal}(\mathbf Q(\mu_{2^\infty})/\mathbf Q(\mu_2)) = {\rm Gal}(\mathbf Q(\mu_{2^\infty})/\mathbf Q) \cong \mathbf Z_2^\times, $$ which is not isomorphic to $\mathbf Z_2$ since it has nontrivial torsion, namely the element $-1$ . That $-1$ corresponds to complex conjugation acting on $\mathbf Q(\mu_{2^\infty})$ . More generally, when $F$ is a number field with a real embedding, then viewing $F$ in $\mathbf R$ shows complex conjugation is a
|
|galois-theory|algebraic-number-theory|galois-extensions|cyclotomic-fields|
| 1
|
Asymptotic solution of non-linear ODE
|
Consider the ODE $$y''+y'+f(y)=0,\quad (*)$$ with $f(y)=y+y^3$ . I'm interested in the asymptotic behavior of solutions $y(x)$ as $x\to\infty$ . From Wolfram Alpha's example plots (and from what I've managed), I suspect that $\lim y=\lim y'=0.$ Since a priori, no term seems to be small compared to the others, my approach so far was to distinguish cases of different balances between the summands, but some cases are missing. The ones I've done are: Firstly, if $y\to 0$ is indeed true, then $y^3\ll y$ , hence the “limit equation” $$y''+y'+y=0 $$ is linear and has explicit solutions, so it's easy to check that this case is consistent with the claim. The next cases are the ones where we can neglect one of the derivatives. $$y'+f(y)=0$$ can be solved by integrating. Similarly straightforward, $$y''+f(y)=0$$ has an integrating factor, namely $y'$ . In both cases, I can check the consistency of the solution with the assumptions. Question: How can I treat the cases, $y and the case where no term beco
|
Let $u = y$ and $v = y'$ . We then have a first order system of ODEs $$ \begin{aligned} u' &= v \\ v' &= - u - v - u^3. \end{aligned} $$ Now consider the function $E(u,v) = \frac{1}{2}v^2 + \frac{1}{2}u^2 + \frac{1}{4} u^4$ . From the chain rule, we have $$ \frac{d}{dt}E = E_uu' + E_vv' = -v^2 \leq 0, $$ so this function is nonincreasing along trajectories of the system. Also notice that $E = 0 \iff u,v=0$ and $E>0 \iff u,v \neq 0$ . This function $E$ satisfies the properties of a Lyapunov function , so Lyapunov's theorem shows the origin $(u,v)=(0,0)$ is stable. Since $\frac{d}{dt}E=-v^2$ is only negative semidefinite, convergence needs one more step, LaSalle's invariance principle: on the set $\{v=0\}$ the dynamics give $v'=-u-u^3$ , which vanishes only at $u=0$ , so the largest invariant set contained in $\{v=0\}$ is the origin. Hence all solutions tend to $y=y'=0$ as $t\to\infty$ . More detailed asymptotics can be obtained via linearization or dominant balance as you have stated.
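A numerical sketch of this conclusion (my own code; the fixed-step RK4 integrator, step size, and initial condition are arbitrary choices): the energy $E$ decays along the computed trajectory and $(u,v)$ tends to the origin.

```python
def rhs(u, v):
    # u' = v,  v' = -u - v - u^3
    return v, -u - v - u ** 3

def rk4_step(u, v, h):
    k1 = rhs(u, v)
    k2 = rhs(u + h / 2 * k1[0], v + h / 2 * k1[1])
    k3 = rhs(u + h / 2 * k2[0], v + h / 2 * k2[1])
    k4 = rhs(u + h * k3[0], v + h * k3[1])
    return (u + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            v + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def E(u, v):
    return v * v / 2 + u * u / 2 + u ** 4 / 4

u, v = 2.0, 0.0              # y(0) = 2, y'(0) = 0
E0 = E(u, v)                 # = 6
for _ in range(20_000):      # integrate to t = 200 with h = 0.01
    u, v = rk4_step(u, v, 0.01)

print(u, v, E(u, v))         # all essentially zero: the origin attracts
```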
|
|ordinary-differential-equations|asymptotics|
| 1
|
On finding the values of angles using sine and cosine rules
|
I had a triangle $\triangle ABC$ , given that, $BC=\sqrt{3}+1$ , $AC = \sqrt{3}-1$ , and $\angle BCA=60^\circ$ . I am asked to find the value of $\angle BAC$ . So I used the cosine rule to get $AB = \sqrt{6}$ , and then tried using the sine rule to get to the value of $\angle BAC$ . I got $$\sin(A)=\frac{\sqrt3+1}{2\sqrt2}$$ For simplicity, I am writing $\angle BAC$ as $\angle A$ Using a calculator, that evaluates to be $$\angle A = \arcsin(0.965925826)$$ According to my knowledge and calculator, that could be either $75^\circ$ or $105^\circ$ . What should be my answer if both options are given in the question?
|
My synthetic solution is attached; I hope it is still accepted, although it is not trigonometric.
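For what it's worth, the ambiguity disappears if one stays with the cosine rule: $\cos$ is injective on $(0^\circ,180^\circ)$, unlike $\sin$, and $\sin 75^\circ=\sin 105^\circ$ is exactly why the sine rule cannot decide. A short numerical check (my own sketch, using the side labels from the question):

```python
import math

a = math.sqrt(3) + 1   # BC, opposite angle A
b = math.sqrt(3) - 1   # AC, opposite angle B
c = math.sqrt(6)       # AB, from the first cosine-rule step

cosA = (b * b + c * c - a * a) / (2 * b * c)
A = math.degrees(math.acos(cosA))
print(A)               # 105.0 (up to rounding): cos A < 0 rules out 75°

cosC = (a * a + b * b - c * c) / (2 * a * b)
print(math.degrees(math.acos(cosC)))   # 60.0, consistent with the given angle
```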
|
|geometry|triangles|
| 0
|
Does this theorem about two lines and two transversals have a name?
|
I've got this theorem right here. It involves two arbitrary lines $AB$ and $CD$ and two transversals $AC$ and $BD$ , intersecting in between these lines . The theorem (or lemma, I don't know) in question states that $\alpha+\beta = \varphi + \theta$ . The simplest proof of which I know is based on the Sum of Angles of Triangle equals Two Right Angles : two transversals intersecting in between two lines form two triangles with these lines. Then the sums of the angles of these triangles are equal to one another. One pair of angles is equal as vertical angles, so the sum of the two remaining angles in one triangle must equal the sum of the two remaining angles in the other, which was to be proven. I've got three questions in regard to this fact: Does this fact (is it a lemma or a theorem?) have a name ? I thought of something related to a bowtie, butterfly, or hourglass, but there are numerous facts named after these objects. What is the correct way to formulate this fact by using parallel lines and transversal
|
Just have a look at this drawing, where the left green angle equals the right green angle (quite obvious): The sums of the angles of the triangles $AEC$ and $DEB$ are each equal to $180°$ . Hence, the sum of the angles $A+C$ equals $B+D$ . Why would something so obvious need to be a special theorem?
|
|geometry|euclidean-geometry|terminology|alternative-proof|angle|
| 0
|
An (a.s.) continuous process $(X_t)_{t\geq 0}$ is a Brownian motion if $(e^{i\lambda X_t + \frac{1}{2}\lambda^2 t})_{t\geq 0}$ is a local martingale
|
Problem Let $X=(X_t)_{t\geq0}$ be an (a.s.) continuous $\mathbb{R}$ -valued process with $X_0=0$ such that $(e^{i\lambda X_t + \frac{1}{2}\lambda^2 t})_{t\geq 0}$ is a $\mathbb{C}$ -valued local martingale for all $\lambda\in\mathbb{R}$ . Show that $X$ is a standard $\mathbb{R}$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -Brownian motion. Opening remark The standard definition of a Brownian motion goes as follows: An $\mathbb{R}$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -adapted process $B=(B_t)_{t\geq 0}$ is called a standard $\mathbb{R}$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -Brownian motion if: $B_0=0$ (a.s.) $B$ is (a.s.) continuous. $\forall s<t$ : $B_t-B_s\sim\mathcal{N}(0,\,t-s)$ . $B$ has independent increments. This and this solution for related problems make use of the so-called Lévy characterisation : An $\mathbb{R}$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -adapted process $B=(B_t)_{t\geq 0}$ is called a standard $\mathbb{R}$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -Brownian motion if: $B_0=0$ (a.s.) $B_t$ is an (a.s.) continuous martin
|
The main idea is to use the following fact: For a random variable $Y$ and a sub- $\sigma$ -algebra $\mathcal{G}$ , the conditional characteristic function $\mathbb{E}\left[e^{i\lambda Y}\vert\mathcal{G}\right]$ is (a.s.) equal to the deterministic value $e^{i\lambda\mu - \frac{1}{2}\lambda^2\sigma^2}$ for all $\lambda\in\mathbb{R}$ (= the characteristic function of a $\mathcal{N}\left(\mu,\sigma^2\right)$ -random variable) if and only if $Y$ is independent of $\mathcal{G}$ and $\mathcal{N}\left(\mu,\sigma^2\right)$ -distributed. In this case: The conditional characteristic function $\mathbb{E}\left[e^{i\lambda \left(X_t - X_s\right)}\vert\mathcal{F}_s\right]$ of $X_t-X_s$ is equal to $e^{- \frac{1}{2}\lambda^2\left(t-s\right)}$ (= the characteristic function of a $\mathcal{N}\left(0,t-s\right)$ -random variable) if and only if $X_t - X_s$ is independent of $\mathcal{F}_s$ and $\mathcal{N}\left(0,t-s\right)$ -distributed. First, we show that $Z:=(e^{i\lambda X_t + \frac{1}{2}\lambda^2 t})_{t\geq 0}$ is not only a local martingale but a martingale as well. Note that it is an adapted process because it
|
|normal-distribution|expected-value|brownian-motion|independence|local-martingales|
| 1
|
How to find the limit of $\frac{n}{(n!!)^\frac{2}{n}}$
|
The limit in question $$\lim_{n\to \infty} \frac{n}{n!!^\frac{2}{n}}.$$ What I have done: $$ \lim_{n\to \infty}\frac{(n^\frac{2}{n})^\frac{n}{2}}{n!!^\frac{2}{n}}. $$ Then taking the logarithm and converting it into a summation series $$\lim_{n\to \infty}\frac{2}{n}\sum_{r=0}^{n-1}\ln\frac{1}{1-\frac{2r}{n}}.$$ Substituting $\frac{r}{n} =t$ and converting it into an integral yields $$-2\int_{0}^1 \ln(1-2t)dt.$$ Integrating gives this function $$-2[(t+1)\ln(1-2t) -t]_{0}^1.$$ Can someone help me find the limit?
|
If $n=2k$ then $n!!=(2k)!!=2^kk!$ . Thus, $$\dfrac{n}{(n!!)^{2/n}}=\dfrac{2k}{2(k!)^{1/k}}=\dfrac{k}{(k!)^{1/k}}$$ But one can show (for example, see this ) that $$\lim_{k\to\infty} \dfrac{k}{(k!)^{1/k}}=e $$ If $n=2k-1$ then $n!!=(2k-1)!!=\dfrac{(2k-1)!}{2^{k-1}(k-1)!}$ and so $$\dfrac{n}{(n!!)^{2/n}}=\dfrac{(2k-1)2^{\frac{2(k-1)}{(2k-1)}}[(k-1)!]^{2/(2k-1)}}{[(2k-1)!]^{2/(2k-1)}}$$ But the limit of this expression must be the same as that of $$2\cdot\dfrac{(2k-1)}{[(2k-1)!]^{1/(2k-1)}}\cdot \dfrac{[(k-1)!]^{2/(2k-1)}}{[(2k-1)!]^{1/(2k-1)}}$$ The first factor goes to $e$ by the above limit. The second has the same limit as $$\dfrac{[(k-1)!]^{1/(k-1)}}{[(2k-1)!]^{1/(2k-1)}}$$ And again, by the above this has the same limit as $$\dfrac{\dfrac{k-1}{e}}{\dfrac{2k-1}{e}}\longrightarrow \dfrac{1}{2} $$ So for $n=2k-1$ the sequence converges to $e$ too. As both the odd and even subsequences converge to $e$ , we conclude $$\dfrac{n}{(n!!)^{2/n}}\to e$$
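A numerical check of this conclusion (my own sketch; the double factorial is handled through its logarithm to avoid overflow):

```python
import math

def ratio(n):
    """n / (n!!)^(2/n), computed via log(n!!) = sum of logs."""
    log_dfact = sum(math.log(k) for k in range(n, 0, -2))
    return n * math.exp(-2 * log_dfact / n)

for n in (100, 1000, 10_000, 10_001):
    print(n, ratio(n))   # both parities creep up towards e ≈ 2.71828
```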
|
|integration|limits|
| 1
|
What is the different between these two forms of the derivative of arcsine?
|
It is known that the derivative of an inverse function is given as $$ g'(y)=\frac{1}{f'(x)} \implies \frac{dx}{dy} = \frac{1}{\frac{dy}{dx}} $$ So if $\arcsin(y)$ is differentiated: $$ \arcsin(y)' = \frac{1}{\cos(x)} = \frac{1}{\sqrt{1-y^2}} $$ but then when I graph them, I get different results, and I was also wondering why, if we take the integral of $\sec(x)$ , it would not be $\arcsin(y)$ . Perhaps it is related to some interval issue I don't understand? Any explanations are appreciated.
|
Notice that $x$ and $y$ are not independent from each other in these calculations. They are related via $y = \sin x$ . We then have $$ \sqrt{1-y^2} = \sqrt{1 - \sin^2x} = \sqrt{\cos^2x} = \cos x, $$ where $x$ is appropriately restricted so that $\sin x$ is invertible. When you integrate $\sec x$ , you are doing so with respect to $x$ . There are no $y$ 's involved. If you wish to change it to a $y$ integral, you must perform change of variables in the integral with $y = \sin x$ , $dy = \cos x\,dx$ .
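A finite-difference spot check of the derivative being discussed (my own sketch; `d_arcsin_fd` is just a helper name):

```python
import math

def d_arcsin_fd(y, h=1e-6):
    """Central-difference approximation of (d/dy) arcsin(y)."""
    return (math.asin(y + h) - math.asin(y - h)) / (2 * h)

for y in (-0.9, -0.5, 0.0, 0.3, 0.8):
    print(y, d_arcsin_fd(y), 1 / math.sqrt(1 - y * y))   # the two columns agree
```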
|
|calculus|derivatives|trigonometry|inverse|inverse-trigonometric-functions|
| 0
|
Poisson distribution inequality for sum of rates
|
Given $s,t \in \mathbb{R}^+$ and $i,j \in \mathbb{N}$ , Let $X,Y,Z$ be random variables with Poisson distribution with rate $s,t$ and $s+t$ , respectively. Is it true that $$ P(Z = i+j) \geq P(X=i) \cdot P(Y=j).$$ I would need to have $$ \frac{s^i}{i!}\cdot \frac{t^j}{j!} \leq \frac{(s+t)^{i+j}}{(i+j)!}.$$ By the Young's inequality with $p = (i+j)/i$ and $q = (i+j)/j$ , for every $a,b \in \mathbb{R}^+$ , $$ ab \leq \frac{a^{\frac{i+j}{i}}}{\frac{i+j}{i}} + \frac{b^{\frac{i+j}{j}}}{\frac{i+j}{j}}. $$ I've tried different values to $a,b$ but still I cannot do it. For example, for $a = (s(i+j)/i)^{i/(i+j)}$ and $b$ analogous, we have $$ \frac{s^i }{(\frac{i}{i+j})^i} \cdot \frac{t^j }{(\frac{j}{i+j})^j} \leq (s+t)^{i+j}$$ Unfortunately, from there I couldn't go further. Any suggestion?
|
I think I have an idea using coupling. I define $X \sim \text{Pois}(s)$ , $Y \sim \text{Pois}(t)$ independently under $P$ . Then $Z = X+Y \sim \text{Pois}(t+s)$ . Then, since the event $\{X=i,\,Y=j\}$ is contained in $\{X+Y=i+j\}$ , $$ P(X = i)\cdot P(Y=j) = P(X=i,Y=j) \leq P(X+Y = i+j) = P(Z = i+j).$$ Do you think this is right?
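The inequality is also easy to confirm numerically; here is a sketch (note it follows deterministically from the binomial theorem, since $(s+t)^{i+j}\ge\binom{i+j}{i}s^it^j$):

```python
import math

def pois_pmf(rate, k):
    return math.exp(-rate) * rate ** k / math.factorial(k)

for s, t in ((0.5, 2.0), (1.0, 1.0), (3.7, 0.2)):
    for i in range(8):
        for j in range(8):
            lhs = pois_pmf(s, i) * pois_pmf(t, j)
            rhs = pois_pmf(s + t, i + j)
            assert lhs <= rhs * (1 + 1e-12)
print("P(X=i)*P(Y=j) <= P(Z=i+j) on all sampled cases")
```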
|
|poisson-distribution|young-inequality|
| 0
|
Showing that a given map is a regular surface patch of the unit sphere.
|
Show that the map $\sigma:(0,2\pi) \times \mathbb{R} \rightarrow \mathbb{R}^3$ given by $\sigma(u,v) = (\text{sech}(v)\cos({u}), \text{sech}(v)\sin({u}), \text{tanh}(v))$ , is a regular surface patch for the unit sphere. So, I have managed to show that $(0,2\pi) \times \mathbb{R}$ is open and that $\sigma$ is continuous and injective. And, I know how to do my next steps, i.e., calculating $|| \sigma(u,v) ||^2 = 1$ to show that all points lie on the sphere and calculating $\sigma_u \times \sigma_v \neq 0$ to show that it is regular. What I am struggling with is showing the remaining condition for $\sigma$ to be a surface patch. I.e., finding an open set $V \subset \mathbb{R}^3$ and a map $\Phi: V \rightarrow (0,2\pi) \times \mathbb{R}$ such that $\sigma((0,2\pi) \times \mathbb{R}) \subseteq V$ , $\Phi\restriction_{\sigma((0,2\pi) \times \mathbb{R})} = \sigma^{-1}$ , and $\Phi$ is continuous at all points in $\sigma((0,2\pi) \times \mathbb{R})$ . I'm not sure where to even start with thi
|
I assume your textbook must have done an example with standard spherical coordinates; the identical issue arises there. Here's one way to do this, using basic multivariable calculus. Define a function $F\colon\Bbb R^2-\{(x,0): x\ge 0\} \to\Bbb R$ by $$F(X,Y) = \frac\pi 2+\int_C \frac{-y\,dx+x\,dy}{x^2+y^2}, \quad\text{where $C$ is any curve in the domain from $(0,1)$ to $(X,Y)$}.$$ This line integral occurs in every multivariable calculus class, and you can show using Green's Theorem that $F$ is well-defined and then smooth. By taking the circular arc from $(0,1)$ to $(\cos\theta,\sin\theta)$ (again, staying inside $\Bbb R^2-\{(x,0): x\ge 0\}$ ) you can evaluate the integral explicitly to see that $F(\cos\theta,\sin\theta) = \theta$ .
|
|differential-geometry|
| 0
|
Cosine similarity magnitude vector
|
There is one thing I'm not sure about regarding cosine similarity: does the magnitude of the vector matter? I think the answer is yes, especially if you look at the picture below, where the word count is an important factor. However, when we take the angle from the beginning, the magnitude of the vector is not relevant (because the direction doesn't change), but when we take the angle at a later point/position (as the maker of this video did), then the magnitude is a relevant factor. However, it looks to me like he took the angle (17) at a pretty arbitrary point...? I hope my question is clear. source: https://www.youtube.com/watch?v=m_CooIRM3UI Cosine similarity of word counts
|
No, it doesn't matter. Only the directions of the vectors matter. The angle between two vectors is the angle between the two lines these vectors determine. It doesn't matter where on the lines you measure it, the angle is the same. When you compute the cosine similarity, you really normalize the vectors anyway. Recall the formula used was $$\textrm{Cosine similarity} = \frac{A\cdot B}{\lVert A\rVert \lVert B\rVert}$$ But that's the same as $$\frac{A}{\lVert A\rVert}\cdot \frac{B}{\lVert B\rVert}$$ which is the dot product of the vectors reduced to unit length (normalized) keeping their original directions (positive scalars don't affect the directions of vectors). Addendum: It is also mentioned that $A\cdot B = \lVert A \rVert \lVert B\rVert \cos \theta$ , where $\theta$ is the (acute) angle between $A$ and $B$ . Then you get that $$\require{cancel}\textrm{Cosine similarity} = \frac{A\cdot B}{\lVert A\rVert \lVert B\rVert} = \frac{\overbrace{\cancel{\lVert A \rVert \lVert B\rVert} \cos
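A tiny sketch of that scale invariance (the word-count vectors are hypothetical):

```python
import math

def cosine_similarity(A, B):
    dot = sum(x * y for x, y in zip(A, B))
    return dot / (math.hypot(*A) * math.hypot(*B))

A = (3.0, 4.0)    # word counts in a short document
B = (5.0, 1.0)
print(cosine_similarity(A, B))                      # ≈ 0.745
# A ten-times-longer document with the same proportions, and a
# rescaled B, give the same similarity:
print(cosine_similarity((30.0, 40.0), (0.5, 0.1)))  # ≈ 0.745, unchanged
```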
|
|statistics|vector-spaces|vectors|information-theory|
| 1
|
Determining whether a housing allocation is in the Core
|
I have recently been thinking about the housing allocation problem where we have a set of players and a set of houses where players have strict preferences over the houses. I am aware of the Top Trading Cycle algorithm which can be used to assign houses to players in a Pareto Optimal, Strategy Proof, and Individual Rational way. Furthermore, this resulting allocation is group rational, i.e. in the (weak) core meaning that no subset of players can deviate and switch their assigned houses among themselves so that all players in this subset strictly improve. This also implies that the core is always non-empty. However, what I was wondering is whether there exists an approach that allows us to efficiently determine whether a given allocation is in the (weak) core. I.e. given an allocation f, can we determine whether there exists a subset of players that can switch so that all of them strictly improve? I could not find much existing content on this question. Any input would be highly apprec
|
I believe this might work: Given the allocation f, build a directed graph G' on the players: draw an edge from player p to player q whenever p strictly prefers q's assigned house f(q) to his own house f(p). (Equivalently, start from the graph of all preferences and, for each player, delete every edge pointing to a house that is no better than the currently assigned one, including the assigned edge itself.) A directed cycle in G' is exactly a coalition that can trade along the cycle, each player taking the next player's house, so that everyone strictly improves; conversely, any strictly improving trade within a coalition is a permutation of its houses and hence decomposes into such cycles. So f is in the weak core if and only if G' is acyclic, which can be checked in time linear in the size of the graph.
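A rough Python sketch of this check (names and the tiny example are my own; it builds the improvement graph described above and looks for a directed cycle):

```python
def in_weak_core(prefs, alloc):
    """prefs[p]: houses ordered best-first; alloc[p]: p's assigned house.
    Edge p -> q iff p strictly prefers q's house to his own; a directed
    cycle is exactly a coalition that can trade and all strictly improve."""
    n = len(prefs)
    rank = [{h: r for r, h in enumerate(prefs[p])} for p in range(n)]
    adj = [[q for q in range(n) if q != p
            and rank[p][alloc[q]] < rank[p][alloc[p]]] for p in range(n)]

    state = [0] * n                      # 0 unvisited, 1 on stack, 2 done
    def has_cycle(start):                # iterative DFS with a grey set
        stack = [(start, iter(adj[start]))]
        state[start] = 1
        while stack:
            p, it = stack[-1]
            for q in it:
                if state[q] == 1:        # back edge: cycle found
                    return True
                if state[q] == 0:
                    state[q] = 1
                    stack.append((q, iter(adj[q])))
                    break
            else:
                state[p] = 2
                stack.pop()
        return False

    return not any(state[p] == 0 and has_cycle(p) for p in range(n))

# Players 0 and 1 each hold the house the other wants most:
prefs = [[1, 0, 2], [0, 1, 2], [2, 0, 1]]
print(in_weak_core(prefs, [0, 1, 2]))   # False: {0,1} can swap and both improve
print(in_weak_core(prefs, [1, 0, 2]))   # True: everyone holds a top choice
```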
|
|game-theory|economics|matching-theory|
| 0
|
Questions about the Proof that $\inf A = - \sup\left(-A\right)$ (Rudin)
|
This proof has appeared numerous times on this website, but I have just two questions on it that I don't believe have been addressed in the past. The full theorem from Rudin is: Theorem. Let $A$ be a nonempty set of real numbers which is bounded below. Let $-A $ be the set of all numbers $-x$, where $x \in A$. Prove that $\inf A = - \sup\left(-A\right)$. My questions are: (a) Are we guaranteed that $\inf A$ or $\sup\left(-A\right)$ exists? We are only given that $A$ is bounded below, so $\exists \beta, \forall x \in A, x \geq \beta$. But this doesn't directly imply that there exists a greatest $\beta$, nor does it directly imply anything about the nature or boundedness of the set $-A$. In other words, must we prove, or assume, existence to write this proof? Or do we posit axiomatically that such an infimum exists, and establish this proof based on its properties? (b) My second question hinges a bit more on the method by which I sought to prove this theorem, but I think it can be easil
|
Slightly more details pertaining specifically to Rudin's text: Th. 1.19 (pg. 8) states The real numbers have the least upper-bound property. Th. 1.11 (pg. 5) states that, since the reals have the least upper-bound property, every nonempty set bounded below has a greatest lower bound. Thus we DO know that $\inf A$ does exist. We don't know anything about the set $-A$ and it is our job to prove everything we state (that it is bounded above, that it has a least upper bound, and that $\sup (-A) = -\inf A$ ).
|
|real-analysis|proof-explanation|supremum-and-infimum|
| 0
|
Is there an algorithm for sorting points in a field based on sweeping a line through them at an arbitrary angle?
|
Given a set of coordinates on a Euclidean plane, is it possible to sort the points by sweeping a line through them where the line might not be strictly horizontal or vertical? The inputs to the problem would be the set of unsorted 2d points, and a unit vector. I know that for a simple vertical or horizontal sweep line, you obviously only need to account for the x value (for a vertical sweep) or the y value (for a horizontal sweep). But if the sweep were to follow some arbitrary vector, I'd imagine the sort has to take into account some component of the angle of that vector. Example drawing: a field of points to be sorted The red line is the sweep line that I'm envisioning, and the grey dashed vector denotes the direction I want the sweep to occur such that the resultant order in this example would be [A, B, C, D, E]. I thought one possible solution might be to first reduce the points to all be on the path of the vector, but I'm not sure how to do that. Then it should become a
|
In the answer below, I use "points" and "vectors" interchangeably. So, for example, we might think of $(a,b)$ as the point in the plane or as the vector in the plane that goes from the origin to $(a,b)$ . Also, I'm assuming that either there won't be any ties (points that the line hits at the same time) or that if there are ties, you are indifferent to how the tied points are ordered. Suppose we are given the direction vector $$u=\begin{pmatrix} x_0 \\ y_0\end{pmatrix}.$$ Suppose also that we have an ant riding a line that's perpendicular to $u$ and traveling along $u$ with speed $1$ (the orange line in the photo is a snapshot of the line at one point in time, and the point of intersection of the line and the vector $u$ is where the ant is sitting). The ant is very tired, so she does not walk up/down the plane. So she also moves in the direction of $u$ . We can assume the ant has been traveling for all of history and will travel forever, and is at the origin at time $0$ . We could chang
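Concretely, the whole construction collapses to sorting by the dot product of each point with the unit direction vector; a sketch (the coordinates are hypothetical, chosen to mimic the drawing in the question):

```python
def sweep_order(points, direction):
    """Sort labeled points in the order a line perpendicular to
    `direction` (a unit vector) sweeps across them: the sort key is
    the scalar projection of each point onto the direction vector."""
    ux, uy = direction
    return sorted(points, key=lambda p: p[1][0] * ux + p[1][1] * uy)

pts = [("A", (0.0, 3.0)), ("B", (1.0, 2.5)), ("C", (2.0, 2.0)),
       ("D", (2.5, 0.5)), ("E", (4.0, 1.0))]
# Sweep down-and-to-the-right, roughly like the grey dashed vector:
order = [name for name, _ in sweep_order(pts, (0.8, -0.6))]
print(order)   # ['A', 'B', 'C', 'D', 'E']
```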
|
|vectors|algorithms|euclidean-geometry|sorting|
| 1
|
The Facebook Birthday Problem(Birthday Problem Variation)
|
The Facebook Birthday Problem: This problem stems from the classic Birthday Paradox. It says: How many friends do you need for the probability of having at least one friend with a birthday each day to be greater than 50%? Answer: If there are 23 friends, the probability of two or more people sharing a birthday exceeds 50%. If there are 60 people, the probability exceeds 99%. If there are birthdays every day among my friends on Facebook, and I want to send birthday wishes to them every day, how many friends do I need to achieve this daily occurrence? How is the probability calculated? Q1-1: If I have 1000 friends on Facebook, what is the probability that there will be a friend's birthday every day? Q1-2: If I have 5000 friends on Facebook, what is the probability that there will be a friend's birthday every day? Q2-1: How many friends do I need at least to ensure that there is a friend's birthday every day with a probability of more than 50%? Q2-2: How many friends do I need at least to
|
If you have $n$ friends, the probability that you have a friend with each birthday is $$ \frac{{n\brace 365}\cdot 365!}{365^n}. $$ The notation ${n\brace k}$ refers to the Stirling numbers of second kind, which can be calculated using this formula: $$ {n\brace k}=\frac1{k!}\sum_{j=0}^k(-1)^{j}\binom kj(k-j)^n. $$ Plugging in $1000$ and $5000$ for $n$ into that formula answers your first two questions. For $1000$ friends, the probability is nearly zero (one in a trillion), while for $5000$ friends, the probability is $\approx 99.96\%$ . You can then use a computer to find the smallest value of $n$ for which that formula is greater than $0.5$ and $0.99$ . Doing so, I found that you need $2287$ friends to succeed more than $50\%$ of the time, and you need $3828$ friends to succeed more than $99\%$ of the time.
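Evaluating this formula naively in floating point breaks down for small $n$ (the alternating sum cancels catastrophically), so here is a sketch that evaluates it exactly with Python's arbitrary-precision integers (the inclusion-exclusion below is the same count of surjections as ${n\brace 365}\cdot 365!$):

```python
from fractions import Fraction
from math import comb

def p_all_days(n, d=365):
    """Exact P(n uniform birthdays cover all d days):
    sum_j (-1)^j C(d,j) (d-j)^n / d^n, in exact integer arithmetic."""
    num = sum((-1) ** j * comb(d, j) * (d - j) ** n for j in range(d + 1))
    return float(Fraction(num, d ** n))

print(p_all_days(1000))   # vanishingly small
print(p_all_days(5000))   # ≈ 0.9996
print(p_all_days(2287))   # right at the quoted 50% threshold
```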
|
|probability|expected-value|random|coupon-collector|birthday|
| 1
|
Is Gödel's second incompleteness theorem provable within Peano arithmetic?
|
All following notation and assumptions follow Gödel's Theorems and Zermelo's Axioms by Halbeisen and Krapf. Exercise 11.4 c) states "Conclude that the Second Incompleteness Theorem is provable within PA." As the Second Incompleteness Theorem is stated as $\text{PA} \not\vdash \neg \text{prv}(\lceil 0=1 \rceil)$ , I interpret the stated conclusion as the statment $\text{PA} \vdash \neg \text{prv}(\lceil{\neg \text{prv} (\lceil{0=1}\rceil)\rceil}).$ I could not find a proof of this fact. However, I think I managed to prove the opposite. But since this directly contradicts the authors, I am not sure if I am correct. My solution: Amongst others, recall the following from the book: The axiom scheme $(L1)$ : $\varphi\rightarrow(\psi\rightarrow \varphi)$ The tautology (G): $\vdash(\varphi\rightarrow \psi)\leftrightarrow(\neg\psi\rightarrow \neg\varphi)$ . Löb's Theorem states that if $\varphi$ is an $L_{\text{PA}}$ -sentence, then $\text{PA}\vdash \text{prv}(\lceil \varphi \rceil) \rightarrow
|
This is a common confusion, and in particular Halbeisen/Krapf messed it up here (either Exercise 11.4(c) or Theorem 11.1 needs to be rephrased). The same confusion happens around the first incompleteness theorem G1IT; since it's a little easier to state, I'll treat it first. The general-and-conditional statement of G1IT is as follows: $(\star)\quad$ If $T$ is a consistent r.e. theory interpreting $\mathsf{Q}$ then $T$ is incomplete. This is almost always what logicians mean when we refer to "the first incompleteness theorem." This is provable in $\mathsf{PA}$ . On the other hand, the statement $(\dagger)\quad$ $\mathsf{PA}$ is incomplete is not provable in $\mathsf{PA}$ (unless $\mathsf{PA}$ is inconsistent of course). The point is that $(\dagger)$ is not actually an instance of $(\star)$ ; rather, it is an instance of $(\star)$ together with a further hypothesis, namely $\mathit{Con}(\mathsf{PA})$ . The same issue happens with G2IT. While the "general" second incompleteness theorem ta
|
|logic|solution-verification|peano-axioms|incompleteness|provability|
| 1
|
Derivative (Jacobian) of a matrix equation
|
I have this equation: $y = e^{t(A + W)} x_0 $ where A is a diagonal matrix and W is a symmetric matrix. I need to find $\frac{\partial y}{\partial W}$ . If A and W commute then I could use the fact that $e^{t(A + W)} = e^{tA} \cdot e^{tW} $ and then use the Kronecker product: $\frac{\partial y}{\partial W} = x_0^T \otimes e^{tA} \, \, vec(t e^{tW}) $ But I can't derive the case where they don't commute. Any thoughts? Thanks.
|
$ \def\k{\otimes} \def\h{\odot} \def\BR#1{\Big[#1\Big]} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\Diag#1{\op{Diag}\LR{#1}} \def\trace#1{\op{Tr}\LR{#1}} \def\vc#1{\op{vec}\LR{#1}} \def\qiq{\quad\implies\quad} \def\l{\lambda} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\R{{\large\cal R}} \def\x{x_0} \def\c#1{\color{red}{#1}} \def\J{{\cal J}} $ Construct a new symmetric matrix variable and calculate its eigendecomposition $$\eqalign{ S &= \LR{A+W} &\qiq dS=dW \quad \{ \c{\sf\,differential\,} \} \\ S &= QLQ^T&\qiq I=Q^TQ,\;\;L=\Diag{\l_k} \\ }$$ Given a differentiable function $f,\,$ the $\sf Daleckii$ - $\sf Krein\ Theorem$ says $$\eqalign{ F &= f(S) \\ dF &= Q\,\BR{R\h\LR{Q^TdS\,Q}}\,Q^T \\ {R_{jk}} &= \begin{cases} {\large\frac{f(\l_j)-f(\l_k)}{\l_j-\l_k}} \qquad {\rm if}\;\l_j\ne\l_k \\ \\ \quad f'(\l_k) \qquad\quad\; {\rm otherwise} \\ \end{cases} }$$ where $\h$ denotes the Hadamard product. In the current problem, the function is pretty simple $$ f(\l)=
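A numerical sanity check of the Daleckii–Krein step for $f=\exp$, as a sketch (it assumes NumPy is available; all function names are mine). It compares the formula against a central finite difference of the matrix exponential:

```python
import numpy as np

def expm_sym(S):
    """exp(S) for symmetric S via the eigendecomposition S = Q diag(lam) Q^T."""
    lam, Q = np.linalg.eigh(S)
    return (Q * np.exp(lam)) @ Q.T

def dk_differential(S, dS):
    """Daleckii-Krein: dF = Q [ R * (Q^T dS Q) ] Q^T with f = exp."""
    lam, Q = np.linalg.eigh(S)
    f = np.exp(lam)
    diff = lam[:, None] - lam[None, :]
    near = np.abs(diff) < 1e-12           # lam_j ≈ lam_k: use f'(lam) = e^lam
    R = np.where(near, f[:, None],
                 (f[:, None] - f[None, :]) / np.where(near, 1.0, diff))
    return Q @ (R * (Q.T @ dS @ Q)) @ Q.T

rng = np.random.default_rng(0)
M, dM = rng.standard_normal((2, 4, 4))
S, dS = (M + M.T) / 2, (dM + dM.T) / 2    # symmetric S = A + W and a perturbation dW
eps = 1e-5
fd = (expm_sym(S + eps * dS) - expm_sym(S - eps * dS)) / (2 * eps)
err = np.max(np.abs(fd - dk_differential(S, dS)))
print(err)                                # tiny: the two computations agree
```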
|
|linear-algebra|algebra-precalculus|matrix-calculus|kronecker-product|
| 1
|
AM-GM inequality problems on 3 variables
|
Let $a,b,c\ge0$ . Prove that: $$\frac ab+\frac bc+\frac ca+\frac{3\sqrt[3]{abc}}{a+b+c}\ge4$$ This is a problem from Samin Riasat's Olympiad Inequalities worksheet. The worksheet gives a hint as to prove and use $$\frac ab+\frac bc+\frac ca\geq\frac{a+b+c}{\sqrt[3]{abc}}$$ I've done it, substituted it in the original inequality and now I have to prove: $$\frac{9\mathrm{AM}^2+3\mathrm{GM}^2}{3\mathrm{AM}\cdot\mathrm{GM}}\geq4$$ where $\mathrm{AM}$ and $\mathrm{GM}$ are the arithmetic and geometric means of $a$ , $b$ and $c$ . How can I finish the problem?
|
Note first that $$\frac ab+\frac bc+\frac ca=\frac{a^2c+ab^2+bc^2}{abc}.$$ To prove the hint, apply the AM-GM inequality to the triple $\frac ab,\frac ab,\frac bc$ : $$\frac{\frac ab+\frac ab+\frac bc}{3}≥\sqrt[3]{\frac{a^2}{b^2}\cdot\frac bc}=\frac{a}{\sqrt[3]{abc}}.$$ Summing this and its two cyclic versions gives $$\frac ab+\frac bc+\frac ca≥\frac{a+b+c}{\sqrt[3]{abc}}...(i)$$ Also, by AM-GM, $$\frac{a+b+c}{3}≥\sqrt[3]{abc},$$ so if we set $$u=\frac{3\sqrt[3]{abc}}{a+b+c},$$ then $$0<u≤1...(ii)$$ By $(i)$ , the left-hand side of the problem is at least $\frac{a+b+c}{\sqrt[3]{abc}}+u=\frac 3u+u$ , and $$\frac 3u+u-4=\frac{(1-u)(3-u)}{u}≥0$$ because $0<u≤1$ by $(ii)$ . Hence $$\frac ab+\frac bc+\frac ca+\frac{3\sqrt[3]{abc}}{a+b+c}≥4,$$ with equality exactly when $a=b=c$ . (Note that the weaker bound $\frac ab+\frac bc+\frac ca≥3$ combined with $(ii)$ only gives a lower bound of the form $3+u$ with $u≤1$ , which is not enough; the stronger bound $(i)$ from the hint is what closes the gap.)
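A brute-force numerical spot check of the full inequality (my own sketch; `F` is just a helper name for the left-hand side):

```python
import random

def F(a, b, c):
    return a / b + b / c + c / a + 3 * (a * b * c) ** (1 / 3) / (a + b + c)

rng = random.Random(0)
worst = min(F(rng.uniform(0.01, 10), rng.uniform(0.01, 10), rng.uniform(0.01, 10))
            for _ in range(100_000))
print(worst, F(1.0, 1.0, 1.0))   # every sample stays >= 4; equality at a = b = c
```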
|
|inequality|
| 0
|
Doesn't the independence phenomenon make a case for non-classical logic?
|
alright, this question is philosophical and somewhat fuzzy. i also admit to knowing little about logic. all in all, this question can possibly be easily resolved by either pointing to (perhaps even well-known) literature i haven’t found or by pointing out a fault in my reasoning. joel david hamkins is a proponent of multiverse interpretation of set theory, where we should see ZFC and other formalisations of the concept of sets as theories of not one universe of sets, but of a multiverse of sets, in which there are many “universes” of sets, .. or let’s call them aliverses . the reason he gives is that since we can perfectly model – within ZFC – different theories ZFC, so one in which CH is true and one in which it is false, we know how such “places” look like, so to say that .. we are in only “one true universe” in which CH is either true or it is false and we just don’t know .. would be to disregard either of these “places” as unreal – even though we can readily visit them. instead, he
|
As noted in the comments, you are - or at least are dangerously close to - mixing up syntax and semantics. That said, I think there is a way to make your feelings precise, and I at least sometimes share them; since I can't find an exact duplicate of this question (although similar things have been asked before), I'll jot this approach down here. I'll call this position ZFC-finalism . The idea is that, while we informally work in a universe of sets, our real stance is that the ZFC axioms and only the ZFC axioms are a priori justifiable. A weaker version of this stance is that the mathematical community (or at least the set theoretic community) will never "canonize" any further axioms, independence phenomena and various proposals notwithstanding; an even weaker version adds "unless we reject ZFC-style foundations as a whole," and this much weaker version I think is likely true. I personally think that ZFC-finalism is perfectly coherent, even if I don't share it usually. I suspe
|
|logic|set-theory|philosophy|
| 1
|
Is $\mathbb F_{p^n}((x,y))$ a local field?
|
Let $\mathbb{F}_{p^n}$ be a finite field of characteristic $p$ ; then we can define a discrete valuation $v_p$ on $\mathbb{F}_{p^n}((x))$ by $$ v_p(f):=\min \{n~|~a_n \neq 0\}, $$ where $f(x)=\sum_{n \geq i} a_n x^n \in \mathbb{F}_{p^n}((x))$ . This may be called a one-dimensional local field, known as the field of formal Laurent power series. I am interested in whether there are suitable generalisations of $\mathbb F_{p^n}((x))$ to more than one variable. For example, here I see $\mathbb F_{p^n}((x))((y))$ is a two-dimensional local field. Is $\mathbb F_{p^n}((x))((y)) \cong \mathbb F_{p^n}((x, y))$ ? Consider the field $\mathbb F_{p^n}((x,y))$ of formal Laurent power series in $x$ and $y$ . To show $\mathbb F_{p^n}((x,y))$ is a local field, at first, we need to define a discrete valuation. If $f(x,y)=\sum_{m \geq i,\, n \geq j} a_{mn} x^m y^n \in \mathbb F_{p^n}((x,y))$ , then I guess such a valuation is $$v_p(f)=\min\{m,n~|~a_{mn} \neq 0\}.$$ We have to show it satisfies the properties of valu
|
As pointed out in Pete L. Clark's answer to the MathOverflow thread "Examples of Common False Beliefs in Mathematics" , and comments to it, for any field $k$ , there is a proper inclusion of fields $$k((x,y)) \subsetneq k((x))((y))$$ where the left hand side is, by definition, the quotient field of $k[[x,y]]$ , while the right hand side, properly written $\color{red}(k((x))\color{red})((y))$ , is the field of formal Laurent series in $y$ over the ("coefficient") field of formal Laurent series in $x$ . In fact, there is a bit of discussion there showing that while e.g. the element $$\sum_{i\ge 0} x^{-i}y^{i}$$ somewhat surprisingly does lie in the left hand side (it is $=\frac{x}{x-y}$ ), but the upgraded example $$\sum_{i\ge 0} x^{-i^2}y^{i}$$ does not. (More abstractly, one can check that actually, $k((x))((y))$ is "not symmetric in $x,y$ ", while the smaller field obviously is.) By the way, the first example $\sum_{i\ge 0} x^{-i}y^{i}=\frac{x}{x-y}$ shows that your attempt of a valua
|
|number-theory|algebraic-geometry|local-field|
| 1
|
Markov Chain Transition Matrix eigenvalues - Textbook Exercise
|
I am trying to solve the following problem, taken from the book Markov Chains and Mixing Times, 2nd edition (Exercise 12.2): Let $P$ be an irreducible transition matrix, and suppose that $A$ is a matrix with $0 \le A(i,j) \le P(i,j)$ and $A \ne P$ . Show that any eigenvalue $\lambda$ of $A$ satisfies $|\lambda| < 1$ . Now I know that $P$ 's biggest eigenvalue is 1. I have tried several approaches but I am not sure how to use irreducibility in this context. Maybe if the chain has some period it is possible to look at the chain with the transition matrix raised to the power of the period to get an aperiodic chain? Help will be appreciated. Link to the book: https://pages.uoregon.edu/dlevin/MARKOV/markovmixing.pdf
|
use the average instead $S_A =\frac{1}{n}\sum_{k=0}^{n-1} A^k \leq \frac{1}{n}\sum_{k=0}^{n-1} P^k = S_P$ where the inequality is strict in at least one component. Note that $S_P\mathbf 1 = \mathbf 1$ and $S_A$ has a (Perron) eigenvalue $\geq 1$ iff $A$ does. Since $P$ is $n\times n$ and irreducible, $S_P$ is a positive matrix. proof 0 (w/ analysis) Construct the monotone sequence of matrices $S_k := \frac{1}{k}\cdot S_p + \frac{k-1}{k}\cdot S_A$ for $k \in \mathbb N$ $\implies S_p = S_1 \geq S_2 \geq S_3\geq \dots$ where the inequality holds point-wise and is strict in at least one component for each $k$ and all $S_k$ are positive matrices. Perron Theory then tells us that the Perron roots form a strictly monotone decreasing sequence $1=\lambda_1^{(S_1)} \gt \lambda_1^{(S_2)}\gt \lambda_1^{(S_3)} \gt \dots$ $\implies 1 \gt \lambda_1^{(S_A)}$ e.g. by continuity of coefficients of the characteristic polynomial and Hurwitz from complex analysis. Alternatively there is a real analysis arg
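As a small numerical illustration of the claim (my own addition, not part of the argument above): take a toy $2\times2$ irreducible stochastic $P$ , decrease one entry to get $A \le P$ with $A \ne P$ , and estimate the Perron root of each by power iteration. The matrices here are arbitrary choices of mine.

```python
# Toy sanity check: P is a 2x2 irreducible stochastic matrix, A <= P entrywise
# with A != P; the Perron root of A, found by power iteration, drops below 1.
P = [[0.5, 0.5], [0.5, 0.5]]
A = [[0.5, 0.4], [0.5, 0.5]]

def perron_root(M, iters=500):
    """Power iteration for the largest eigenvalue of a positive matrix."""
    x = [1.0] * len(M)
    rho = 1.0
    for _ in range(iters):
        y = [sum(row[j] * x[j] for j in range(len(x))) for row in M]
        rho = max(y)              # sup-norm of y; converges to the Perron root
        x = [v / rho for v in y]
    return rho

print(perron_root(P))   # 1.0 (row sums are 1)
print(perron_root(A))   # strictly below 1
```

For this particular $A$ the eigenvalues are $\frac{1\pm\sqrt{0.8}}{2}$ , so the Perron root is about $0.947 < 1$ , consistent with the exercise.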
|
|linear-algebra|probability|eigenvalues-eigenvectors|markov-chains|mixing|
| 1
|
Let $X$ be a normed vector space. If $D \subseteq X$ is a balanced set, then $D^0 \cup \{0\}$ is a balanced set.
|
I tried to prove that: "Let $X$ be a normed vector space. If $D \subseteq X$ is a balanced set then $D^0 \cup \{0\}$ is a balanced set." $D^0$ is the interior of $D$ . I tried to prove it like this: Let $\lambda\in \mathbb{C}$ and $ |\lambda| \le 1 $ . If $x= 0$ or $ \lambda=0 $ the result is obvious. Let $ x\in D^0$ and let $ \lambda \neq 0 $ . Since $D$ is balanced, $ \lambda x \in D$ . And, since $ x\in D^0 \Rightarrow \exists \epsilon >0 : B(x,\epsilon) \subseteq D$ . But, I don't know how I can use them for this proof.
|
Since $B(x, \epsilon) \subseteq D$ , $B(\lambda x, |\lambda|\epsilon) = \lambda B(x, \epsilon) \subseteq D$ . As $\lambda \neq 0$ , this implies $\lambda x \in D^0$ .
|
|functional-analysis|vector-spaces|normed-spaces|
| 1
|
The Facebook Birthday Problem(Birthday Problem Variation)
|
The Facebook Birthday Problem: This problem stems from the classic Birthday Paradox. It says: How many friends do you need for the probability of having at least one friend with a birthday each day to be greater than 50%? Answer: If there are 23 friends, the probability of two or more people sharing a birthday exceeds 50%. If there are 60 people, the probability exceeds 99%. If there are birthdays every day among my friends on Facebook, and I want to send birthday wishes to them every day, how many friends do I need to achieve this daily occurrence? How is the probability calculated? Q1-1: If I have 1000 friends on Facebook, what is the probability that there will be a friend's birthday every day? Q1-2: If I have 5000 friends on Facebook, what is the probability that there will be a friend's birthday every day? Q2-1: How many friends do I need at least to ensure that there is a friend's birthday every day with a probability of more than 50%? Q2-2: How many friends do I need at least to
|
When you have $n$ friends on Facebook, the probability that there is a friend's birthday every day is given by $$P_n=\mathbb P \left (\cap_{i=1}^{365} D'_i \right )=\\1-\mathbb P \left (\cup_{i=1}^{365} D_i \right )=\\1- \sum_{i=1}^{365} (-1)^{i-1} \binom{365}{i} \left (1-\frac{i}{365} \right )^n=$$ $$\color{blue}{\sum_{i=0}^{365} (-1)^{i} \binom{365}{i} \left (1-\frac{i}{365} \right )^n} \tag{1}$$ where $D_i$ is the event that there is no birthday in day $i$ (the inclusion-exclusion principle is applied to find the probability of the union $\cup_{i=1}^{365} D_i$ ). Using (1), the answers to both parts of $Q1$ can be obtained by setting $n=1000$ and $n=5000$ . To solve Q2, you need to find the smallest $n$ that satisfies: $$ P_n \ge \alpha$$ for $\alpha=0.5$ and $\alpha= 0.99$ . Working with the sum appearing in (1) seems difficult. Fortunately, using $$ e^{-\frac{i}{365}} \approx 1-\frac{i}{365},$$ the formula (1) can be well approximated by the following simple formula: $$\color{blue
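The sum in (1) can be evaluated exactly with rational arithmetic; a floating-point evaluation would suffer severe cancellation, since for $n=1000$ the alternating terms reach roughly $10^9$ while $P_{1000}$ itself is around $10^{-11}$ . A sketch (my own code; the bisection bracket $[2000, 2600]$ is chosen from the approximation above):

```python
from fractions import Fraction
from math import comb

def p_all_days(n, days=365):
    # Exact inclusion-exclusion sum from (1); integer arithmetic avoids the
    # catastrophic cancellation a floating-point evaluation would suffer.
    num = sum((-1) ** i * comb(days, i) * (days - i) ** n for i in range(days + 1))
    return Fraction(num, days ** n)

# Q1: n = 1000 gives a negligible probability; n = 5000 is very close to 1.
print(float(p_all_days(1000)))   # tiny, on the order of 1e-11
print(float(p_all_days(5000)))   # close to 0.9996

# Q2: smallest n with P_n >= 1/2, by bisection (P_n is increasing in n).
lo, hi = 2000, 2600
while lo + 1 < hi:
    mid = (lo + hi) // 2
    if p_all_days(mid) >= Fraction(1, 2):
        hi = mid
    else:
        lo = mid
print(hi)   # smallest n with P_n >= 1/2
```

The same bisection with threshold $0.99$ answers the second part of Q2.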
|
|probability|expected-value|random|coupon-collector|birthday|
| 0
|
How does the Fourier transform show the frequency extent of $f(x)$?
|
The Fourier transform of a function is given by $$\hat f(\xi) = \int_{-\infty}^{\infty} f(x) e^{-2\pi ix\xi}\,dx.$$ The paper I was reading from says that the test function $e^{2\pi ix\xi}$ is periodic with period $2\pi/\xi$ , and it continues by saying ("so integrating $f$ against this test function gives information about the extent to which this frequency occurs in $f$ "). My question is: how does integrating our original function against this "test function" give us information about the extent to which this frequency occurs in $f$ ?
|
The Fourier transform of a periodic function $f(x)$ is related to its frequency content since $$\mathcal{F}_x\left[a_n \cos\left(2 \pi \frac{n}{P} x\right)\right](\omega )=\int_{-\infty}^{\infty} a_n \cos\left(2 \pi \frac{n}{P} x\right)\, e^{-2 i \pi \omega x} \, dx\\=\frac{a_n}{2}\, \delta\left(\omega-\frac{n}{P}\right)+\frac{a_n}{2}\, \delta\left(\frac{n}{P}+\omega\right)\tag{1}$$ and $$\mathcal{F}_x\left[b_n \sin\left(2 \pi \frac{n}{P} x\right)\right](\omega)=\int\limits_{-\infty}^{\infty} b_n \sin\left(2 \pi \frac{n}{P} x\right)\, e^{-2 i \pi \omega x} \, dx\\=\frac{b_n}{2} i\, \delta\left(\frac{n}{P}+\omega\right)-\frac{b_n}{2} i\, \delta\left(\omega-\frac{n}{P}\right)\tag{2}$$ where $\delta(\omega)$ is the Dirac delta function , but the Fourier transform of a periodic function doesn't converge in the usual sense, rather it converges only in a distributional sense. Its seems to me the notion that the Fourier transform of a non-periodic function $f(x)$ somehow gives the frequency c
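A discrete analogue may make the "test function" idea concrete (my own illustration, not from the text above): correlating a sampled pure tone against $e^{-2\pi i kn/N}$ produces a coefficient that is large only at the tone's own frequency.

```python
import cmath
import math

# A pure tone completing f0 cycles over the sampling window (f0 is my choice).
N = 256
f0 = 5
x = [math.sin(2 * math.pi * f0 * n / N) for n in range(N)]

def dft_coeff(x, k):
    # Discrete version of "integrate x against e^{-2*pi*i*k*n/N}".
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

mags = [abs(dft_coeff(x, k)) for k in range(N // 2)]
peak = max(range(N // 2), key=mags.__getitem__)
print(peak)   # 5: the correlation singles out the one frequency present in x
```

All other coefficients are essentially zero, because $e^{-2\pi i kn/N}$ with $k \ne f_0$ oscillates against the tone and the contributions cancel over the window.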
|
|fourier-analysis|fourier-transform|
| 0
|
Drawing tangent vector
|
This is the solution provided in my class lecture notes. I have a question related to the graph drawing. Based on the calculation, we have two vectors, vector r(2) = and vector r'(2) = . According to the calculation of the vectors, shouldn't those two vectors have the same $y$-component on the graph? Many thanks.
|
To understand this, it is better to take one more example to appreciate the relevance of @Soham Saha's comment. Let $M:\mathbb R\to \mathbb R^2, t\mapsto(\cos t, \sin t), v=M':t\to (-\sin t , \cos t)$ $$M(\frac{\pi}{4})=(\frac{\sqrt2}{2},\color{red}{\frac{\sqrt2}{2}}), v(\frac{\pi}{4})=(-\frac{\sqrt2}{2},\color{red}{\frac{\sqrt2}{2}})$$ But because $v(\frac{\pi}{4})$ is the velocity vector at $M(\frac{\pi}{4})$ , we prefer to represent it as the vector $$\overrightarrow {M(\frac{\pi}{4})N(\frac{\pi}{4})}$$ with $N(\frac{\pi}{4})=(\frac{\sqrt2}{2}-\frac{\sqrt2}{2},\frac{\sqrt2}{2}+\frac{\sqrt2}{2})=(0,\sqrt2)$
|
|vector-spaces|vectors|
| 0
|
Closed form expressions for $T_n$ and $S_n$ of a Fibonacci-like sequence
|
I am a student curious about recurrence relations who has just entered Intermediate 1st year (grade 11). I derived the closed form expressions for the $n^{th}$ term and the sum of $n$ terms of a Fibonacci-like sequence of the form $(a,b,a+b,a+2b,2a+3b,...)$ by substituting $a_n=x^n$ into the characteristic recurrence relation: $a_{n+2}=a_{n+1}+a_n$ . I want to know what this method ( $a_n=x^n$ ) is called, and whether the expressions below are already documented somewhere. I would also like to know if there are simpler versions of these expressions. $$T_n=\sum_{r \in \{\phi,\psi\}}{r^n \left(\frac{T_1}{3r+1}+\frac{T_2}{r+2}\right)}$$ $$S_n=\left(\sum_{r\in\{\phi,\psi\}}{r^n\left(\frac{T_1+T_2}{3r+1}+\frac{T_1+2T_2}{r+2}\right)}\right)-T_2$$ where, $T_n = n^{th}$ term of the Fibonacci-like sequence. $S_n =$ sum of $n$ terms of the Fibonacci-like sequence. $\phi,\psi$ are the roots of the equation: $x^2-x-1=0$ Also, I saw other answers mentioning the closed form expression
|
I don't know if there's an official name for the $a_n = x^n$ method - maybe the "ansatz method", since one name for a substitution like $a_n = x^n$ is an "ansatz". The simplest closed form I can think of for a sequence that satisfies $T_n = T_{n-1} + T_{n-2}$ in terms of $\phi$ and $\psi$ is $$T_n = \frac{(T_1 - T_0\psi)\cdot \phi^n - (T_1 - T_0\phi)\cdot \psi^n}{\phi - \psi}.$$ This can be obtained by writing $T_n = A \phi^n + B \psi^n$ , then setting $n=0$ and $n=1$ to solve for $A$ and $B$ . (When $T_0 = 0$ and $T_1 = 1$ , this reduces to Binet's formula.) In some cases, our base case for the recurrence is $T_1$ and $T_2$ , rather than $T_0$ and $T_1$ . In that case, $T_0$ "should have been" $T_2 - T_1$ to satisfy the Fibonacci recurrence, and we can replace $T_0$ by $T_2 - T_1$ in the formula above. It's a tiny bit messier, that way. Compared to the formula in the question, we are trading off a constant factor like $\frac{T_1}{3\phi + 1} + \frac{T_2}{\phi+2}$ for $\frac{T_1 - T_0\p
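A quick numerical check of this closed form (my own sketch; the starting values $a=3$, $b=7$ are arbitrary):

```python
import math

phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2

def closed_form(T0, T1, n):
    # T_n = ((T1 - T0*psi)*phi^n - (T1 - T0*phi)*psi^n) / (phi - psi)
    return ((T1 - T0 * psi) * phi ** n - (T1 - T0 * phi) * psi ** n) / (phi - psi)

# A sequence starting T1 = a, T2 = b; per the remark above, take T0 = b - a.
a, b = 3.0, 7.0
T = [b - a, a, b]
for _ in range(20):
    T.append(T[-1] + T[-2])

err = max(abs(closed_form(T[0], T[1], n) - T[n]) for n in range(len(T)))
print(err < 1e-6)   # True: the formula reproduces the recurrence
```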
|
|sequences-and-series|recurrence-relations|terminology|fibonacci-numbers|
| 1
|
Question About The Remark after Proposition 1.4.11 from Measure Theory by Donold Cohn
|
My Question Define subsets $G$ , $G_0$ , and $G_1$ of $\mathbb{R}$ by \begin{align*} G &= \{x:x=r+n\sqrt{2}\ \text{for some $r$ in $\mathbb{Q}$ and $n$ in $\mathbb{Z}$}\},\\ G_0 &= \{x:x=r+2n\sqrt{2}\ \text{for some $r$ in $\mathbb{Q}$ and $n$ in $\mathbb{Z}$}\},\ \text{and}\\ G_1 &= \{x:x=r+(2n+1)\sqrt{2}\ \text{for some $r$ in $\mathbb{Q}$ and $n$ in $\mathbb{Z}$}\}. \end{align*} Define a relation $\sim$ on $\mathbb{R}$ by letting $x \sim y$ hold when $x-y\in G$ ; the relation $\sim$ is then an equivalence relation on $\mathbb{R}$ . Use the axiom of choice to form a subset $E$ of $\mathbb{R}$ that contains exactly one representative of each equivalence class of $\sim$ . Let $A = E + G_0$ (that is, let $A$ consist of the points that have the form $e+g_0$ for some $e$ in $E$ and some $g_0$ in $G_0$ ). I got confused by the book's remark that "the set $A$ defined above is not Lebesgue measurable: if it were, then both $A$ and $A^c$ would include (in fact, would be) Lebesgue measurable s
|
If $A$ were measurable then so would $A+\sqrt2=A^c$ be, and both would have the same measure (since the Lebesgue measure $\lambda$ is invariant by translation). By Proposition 1.4.11, we would then have $\lambda(A)=\lambda(A^c)=0$ . But this is impossible, since $\infty=\lambda(\Bbb R)=\lambda(A)+\lambda(A^c)$ . Equivalently, but closer to Cohn's formulation: we would then have $\infty=\lambda(\Bbb R)=\lambda(A)+\lambda(A^c)=2\lambda(A)=2\lambda(A^c)$ hence $\lambda(A)=\lambda(A^c)=\infty>0$ , but this is impossible by Proposition 1.4.11.
|
|real-analysis|analysis|measure-theory|lebesgue-measure|outer-measure|
| 1
|
Difficult Vectors Problem (Calculus & Vectors)
|
Find parametric equations of a line that intersects line 1 and line 2 at right angles. Line 1: $[x,y,z] = [4,8,-1] + t[2,3,-4]$ and Line 2: $[x,y,z] = [7,2,-1] + k[-6,1,2]$ . I've tried solving this problem by crossing the direction vectors to get the direction of the normal, but I don't know how to find a point on the normal to get the overall equation of it. I originally thought I could just find the point of intersection of the 2 lines to get the intersection point, but the 2 lines turned out to be skew. Please help.
|
This question relates to a particular case of a general case that is dealt with here for example. Let $\color {grey}{\mathcal P}=\mathbb R \color {green}{(2,3,-4)}\oplus \mathbb R \color{red}{(-6,1,2)}, X=(4,8,-1), Y=(7,2,-1)$ $$(2,3,-4)\times (-6,1,2)=(10,20,20)=10(1,2,2)\text{(cross product)}$$ The numbers offered are well chosen to avoid excessively complicated calculations. We obtain $$X-Y=(4,8,-1)-(7,2,-1)=(-3,6,0)=\color{green}{(+1)}(2,3,-4)+(-6,1,2)+(1,2,2)$$ So, In accordance with here , we have with the notations in the figure $$x_1=X\color{green}{-1}(2,3,-4)=(2,5,3)$$ And parametric equation of the line $l $ that intersects line 1 and line 2 at right angles is $$\boxed{l:(2,5,3)+t(1,2,2), t\in \mathbb R}$$ In addition, we have $$y_1=Y+(-6,1,2)=(1,3,1)$$ And we check that $$x_1-y_1=1.(1,2,2)$$
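The feet of the common perpendicular can also be found by solving the two orthogonality conditions directly; here is a small cross-check (my own code, not part of the answer above), which recovers the same points $(2,5,3)$ and $(1,3,1)$ :

```python
# Find the feet of the common perpendicular of P(t) = X + t*d1 and
# Q(k) = Y + k*d2 by requiring (P - Q) . d1 = 0 and (P - Q) . d2 = 0.
X, d1 = (4, 8, -1), (2, 3, -4)
Y, d2 = (7, 2, -1), (-6, 1, 2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

w = tuple(a - b for a, b in zip(X, Y))   # X - Y
# Linear system:  t*(d1.d1) - k*(d2.d1) = -(w.d1)
#                 t*(d1.d2) - k*(d2.d2) = -(w.d2)
a11, a12 = dot(d1, d1), -dot(d2, d1)
a21, a22 = dot(d1, d2), -dot(d2, d2)
b1, b2 = -dot(w, d1), -dot(w, d2)
det = a11 * a22 - a12 * a21
t = (b1 * a22 - a12 * b2) / det          # Cramer's rule
k = (a11 * b2 - b1 * a21) / det

foot1 = tuple(x + t * d for x, d in zip(X, d1))
foot2 = tuple(y + k * d for y, d in zip(Y, d2))
print(foot1, foot2)   # (2.0, 5.0, 3.0) (1.0, 3.0, 1.0)
```

The segment `foot1 - foot2` is $(1,2,2)$ , perpendicular to both direction vectors, matching the boxed line.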
|
|calculus|vector-spaces|vectors|vector-analysis|3d|
| 0
|
$\int_0^1\frac{1-\cos x}{x} \, dx-\int_1^{+\infty}\frac{\cos x}{x} \, dx=\gamma, \\ \gamma=\lim_{n\to\infty}(1+1/2+\cdots+1/n-\ln n)$
|
Prove: $$\int_0^1\frac{1-\cos x}{x} \, dx-\int_1^{+\infty}\frac{\cos x}{x} \, dx=\gamma, \\ \gamma=\lim_{n\to\infty}(1+1/2+\cdots+1/n-\ln n)$$ I try to use the Taylor expansion of $1-\cos x$ and exchange the order of 'integral sign' and 'sum sign', but I don't know how to deal with the latter integral.
|
Too long for a comment (an absolutely trivial problem) Use the rearrangement $$\int_0^{\infty}\left(\int_0^{\infty}(1-\cos(x))e^{-s x}\textrm{d}x -\int_1^{\infty} e^{-s x} \textrm{d}x\right)\textrm{d}s,$$ which leads to $$\int_0^{\infty}\left(\frac{1}{s(1+s^2)}-\frac{e^{-s}}{s}\right)\textrm{d}s=\int_0^{\infty}\left(\frac{1}{s(1+s)}-\frac{e^{-s}}{s}\right)\textrm{d}s=\gamma,$$ since we have the (absolutely trivial) result $$\int_0^{\infty} \left(e^{-x}-\frac{1}{(1+x)^s}\right)\frac{1}{x}\textrm{d}x=\psi(s),$$ which follows immediately by exploiting the simple fact that $\displaystyle \int_0^{\infty} \frac{e^{-ax}-e^{-bx}}{x}\textrm{d}x=\log\left(\frac{b}{a}\right)$ , and done. And you want to remember that $\psi(1)=-\gamma$ . (An absolute) End of story Remember that was meant to be only a comment.
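A numerical sanity check of the middle identity $\int_0^\infty\left(\frac{1}{s(1+s)}-\frac{e^{-s}}{s}\right)\mathrm{d}s=\gamma$ (my own verification, not part of the argument above), using Simpson's rule on $[0,50]$ plus the exact tail $\int_{50}^\infty \frac{\mathrm{d}s}{s(1+s)} = \ln\frac{51}{50}$ :

```python
import math

def integrand(s):
    # 1/(s(1+s)) - e^{-s}/s = (1/(1+s) - e^{-s}) / s, with limit 0 as s -> 0
    if s == 0.0:
        return 0.0
    return (1.0 / (1.0 + s) - math.exp(-s)) / s

def simpson(f, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# The e^{-s}/s tail beyond 50 is below e^{-50} and is ignored.
gamma_approx = simpson(integrand, 0.0, 50.0, 100000) + math.log(51 / 50)
print(gamma_approx)   # ≈ 0.5772156649, the Euler–Mascheroni constant
```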
|
|integration|
| 1
|
Criteria for the maximum of $f(x) = axe^{bx}$ at $(2,1)$
|
I have this question and have been trying for a long time to fully solve it, hopefully you guys can help me. Here is the question: Consider the function $f(x) = axe^{bx}$ , where $a$ and $b$ are constants and real numbers. Find the possible value(s) of $a$ and $b$ such that $f(x)$ has an absolute maximum of $(2, 1)$ on the interval $[0, 50]$ . Give your answer in exact form. I got one answer, but can't seem to prove or find out other possible answers. Here is my solution for one possible answer: $f(x) = axe^{bx}$ $1=f(2) =2ae^{2b}$ $ae^{2b}=\frac12$ $f'(x) = ae^{bx}(1+xb)$ $0=f'(2) = ae^{2b}(1+2b)=\frac12(1+2b)$ $b = -\frac12$ $ae^{2(-1/2)}=\frac12$ $a =\frac e2$ . So one of my possible answers is $a=\frac e2$ and $b=-\frac12$ . Now my only problem is the question asked for the possible value(s), so I don't know if there is more than one possible answer. Then question did specify on the interval from $[0,50]$ . So I don't know if there is a function that is greater in the negative, but
|
Your own calculation $$f'(x)=ae^{bx}(1+xb)$$ shows that $f$ has a unique critical point (except of course if $a=0$ or $b=0$ , in which case $f$ is constant or linear). So, the answer to your question about the possibility of more complicated shapes for the graph of $f$ is: no. Moreover, this critical point is an absolute extremum on $\Bbb R$ and is at $$\left(-\frac1b,-\frac abe^{-1}\right)=(2,1)$$ iff $b=-\frac12$ and $a=-eb=\frac e2$ , as you found. (It is then de facto also an absolute extremum on any interval containing $2$ .) The only thing you forgot to check, to make sure that this unique possible solution is indeed a solution (i.e. that there is one solution and not zero), is that this extremum is a maximum. You can check more generally that if $ab<0$ then the extremum of $f$ at $x=-1/b$ is a maximum, by looking at the sign of $f'(x)=abe^{bx}(x+1/b)$ when $x<-1/b$ and when $x>-1/b$ .
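For what it's worth, a brute-force numerical check of $a=\frac e2$ , $b=-\frac12$ over the interval $[0,50]$ (my own sketch):

```python
import math

a, b = math.e / 2, -0.5
f = lambda x: a * x * math.exp(b * x)

# Sample [0, 50] on a fine grid and locate the maximum.
xs = [50 * i / 100000 for i in range(100001)]
best = max(xs, key=f)
print(best, f(best))   # maximum at x = 2 with value 1
```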
|
|calculus|derivatives|maxima-minima|
| 0
|
Calculating $\int \frac{\tan(x) + \tan^3(x)} {e^{\sec^2(x)} + e^{-\sec^3(x)}} \, \mathrm{d}x$
|
$$\int \; \frac{\tan(x) + \tan^3(x)} {e^{\sec^2(x)} + e^{-\sec^3(x)}} \, \mathrm{d}x$$ Analysis: This integral is complex due to the combination of: Rational function: $\tan(x)$ and $\tan^3(x)$ form a rational function where the degree of the numerator (3) isn't smaller than the denominator (1). Composite exponential terms: The denominator includes $e^{\sec^2(x)}$ and $e^{-\sec^3(x)}$ , making it a composite function. Challenges for an analytical solution: These factors make it difficult to find an exact analytical solution using standard integration techniques like integration by parts, u-substitution, or partial fractions. What can I do? Please help me.
|
Evaluate: $$\int{\tan(x)+\tan^3{x}\over e^{\sec^2{x}}+e^{-\sec^3{x}}}\,\,dx$$ Substitute $\phi = \sec^2x\implies dx = {d\phi\over2\tan(x)\sec^2x}$ So we would have: $$ \frac{1}{2}\int {1+\tan^2x\over\phi(e^\phi+e^{-\phi^{3\over2}})}d\phi = \frac{1}{2}\int{d\phi\over{e^\phi+e^{-\sqrt{\phi^3}}}} $$ Which most likely is not elementary....
|
|calculus|
| 0
|
How to use variational calculus on a ratio of two functionals, with inequality constraint associated with the denominator?
|
I wish to optimize $J=\frac{I_1}{I_2}$ where $I_2=\int_0^T f(t)\theta(f(t))dt$ . Where $\theta(.)$ is the Heaviside step function. To avoid using the step function (since it is not differentiable) , I wrote an inequality constraint $f(t)\geq0$ only for the bottom functional and hence got: $I_2=\int_0^T f(t)dt$ . Furthermore, $I_1=\int_0^T f(t)dt$ without the constraint. This entire functional J is subjected to an equality constraint $h(t)=0$ . Any help, including being pointed to a good resource, is extremely appreciated. Edit: Fixed the wording, and defined $\theta(.)$ after seeing an answer.
|
Optimizing a ratio of two functionals, especially with constraints, is a problem often approached using the calculus of variations alongside methods from constrained optimization such as Lagrange multipliers. Given the functionals $$ I_2 = \int_{0}^{T} f(t)\theta(f(t))\,dt, \qquad I_1 = \int_{0}^{T} f(t)\,dt $$ and the constraint $h(t) = 0$ , if we wish to optimize the functional $$ J = \frac{I_1}{I_2} $$ subject to $f(t) \geq 0$ , we can reformulate the problem using a Lagrange multiplier $\lambda(t)$ to enforce the inequality constraint and a multiplier $\mu$ for the equality constraint. We define a Lagrangian $\mathcal{L}$ as: $$ \mathcal{L}(f(t), \lambda(t), \mu) = I_1 - \mu I_2 - \int_{0}^{T} \lambda(t) f(t) \,dt $$
|
|nonlinear-optimization|variational-analysis|
| 0
|
Probability that the centroid of a triangle is inside its incircle
|
Question: The vertices of the triangle are uniformly distributed on the circumference of a circle. What is the probability that the centroid is inside the incircle? Simulations with $10^{10}$ trials give a value of $0.457982$ . It is interesting to note that this agrees with $\displaystyle \frac{G}{2}$ to six decimal places, where $G$ is Catalan's constant . Julia source code:

using Random

inside = 0
step = 10^7
target = step
count = 0

function rand_triangle()
    angles = sort(2π * rand(3))
    cos_angles = cos.(angles)
    sin_angles = sin.(angles)
    x_vertices = cos_angles
    y_vertices = sin_angles
    return x_vertices, y_vertices
end

function incenter(xv, yv)
    a = sqrt((xv[2] - xv[3])^2 + (yv[2] - yv[3])^2)
    b = sqrt((xv[1] - xv[3])^2 + (yv[1] - yv[3])^2)
    c = sqrt((xv[1] - xv[2])^2 + (yv[1] - yv[2])^2)
    s = (a + b + c) / 2
    incenter_x = (a * xv[1] + b * xv[2] + c * xv[3]) / (a + b + c)
    incenter_y = (a * yv[1] + b * yv[2] + c * yv[3]) / (a + b + c)
    incircle_radius = sqrt(s * (s - a) * (s - b) * (s - c))
|
A follow-up to my comment on Dan's post: In the $x,y$ plane, the boundary of the region $R$ is defined by the implicit function $$15+6\cos(2y)-5\cos(4y)=(6+10\cos(2y))\cos(2x)-12(\cos(3y)-\cos y) \cos x$$ so that solving for $x$ and denoting $c=\cos y$ , we arrive at $$\cos x = \frac{3c^3 - 3c \pm 4c^2 \sqrt{2-c^2}}{5c^2-1} \\ \implies x = x_{\color{red}{\pm}}(y) = \pm \arccos \frac{3c^3 - 3c \color{red}{\pm} 4c^2 \sqrt{2-c^2}}{5c^2-1}$$ where the $\pm$ subscript corresponds to the same sign on the root. In the plot, $x_-(y)$ and $x_+(y)$ are resp. shown in blue and red. It can be shown that $\min x_-(y)=0$ at $y=\arccos\dfrac15$ (orange), which can be used as a cut-off to split up the integral for the area w.r.t. $y$ : $$\iint_R dA = 2 \iint_{R \land x\ge0} dA = 2 \left(\int_0^{\arccos\tfrac15} x_+(y) \, dy + \int_{\arccos\tfrac15}^\tfrac\pi2 \left(x_-(y) - x_+(y)\right) \, dy\right)$$ Double that and divide by $\pi^2$ to get the probability. Numerically, Mathematica with NIntegrate c
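An independent Monte Carlo cross-check of $\approx 0.458$ (my own Python re-implementation of the simulation in the question; note the inradius is Heron's area divided by the semiperimeter $s$ ):

```python
import math
import random

def centroid_in_incircle():
    th = [random.uniform(0, 2 * math.pi) for _ in range(3)]
    pts = [(math.cos(t), math.sin(t)) for t in th]
    (x1, y1), (x2, y2), (x3, y3) = pts
    a = math.dist(pts[1], pts[2])   # side lengths
    b = math.dist(pts[0], pts[2])
    c = math.dist(pts[0], pts[1])
    s = (a + b + c) / 2
    ix = (a * x1 + b * x2 + c * x3) / (a + b + c)    # incenter
    iy = (a * y1 + b * y2 + c * y3) / (a + b + c)
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))  # Heron
    r = area / s                                     # inradius = area / s
    gx, gy = (x1 + x2 + x3) / 3, (y1 + y2 + y3) / 3  # centroid
    return math.hypot(gx - ix, gy - iy) < r

random.seed(1)
n = 200000
p = sum(centroid_in_incircle() for _ in range(n)) / n
print(p)   # close to 0.458
```

With $2\times10^5$ trials the standard error is about $0.001$ , so this only confirms the first two or three digits, but it agrees with both the question's estimate and the integral above.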
|
|probability|integration|geometry|triangles|geometric-probability|
| 1
|
Determining mapping cone of free resolution
|
I am reading the concept of mapping cone of a resolution . I need help with the following. Let $I_1=\langle x_1^2-x_2 x_4, x_1 x_2-x_3 x_4, x_1 x_3-x_4^2,x_2^2-x_1 x_3, x_2 x_3-x_1 x_4,x_3^2-x_2 x_4 \rangle \subseteq R_1=\mathbb{K}[x_1,x_2,x_3,x_4]$ and $I_2= \langle x_5^2-x_6 x_8, x_5 x_6 -x_7 x_8, x_5 x_7-x_8^2,x_6^2-x_5 x_7,x_6 x_7-x_5 x_8,x_7^2-x_6 x_8 \rangle \subseteq R_2=\mathbb{K}[x_5,x_6,x_7,x_8]$ are two ideals with $R_i$ -minimal free resolution of $I_i$ given by $\mathbf{F}_i$ , for $i=1,2$ respectively. Let $J=I_1 +I_2+\langle x_4-x_8 \rangle \subseteq R=\mathbb{K}[x_1,\dots,x_4,x_5,\dots,x_8]$ be an ideal. What will be the $R$ -minimal free resolution of $J$ ? I think using tensor product of free resolution $\mathbf{F}_1 \otimes \mathbf{F}_2$ and then from mapping cone we can obtained minimal free resolution of $J$ . But, I don't able to prove this. Any help would be appreciated. Thank you.
|
This is the perfect kind of question for a computer algebra package such as Macaulay2. https://macaulay2.com/ I highly recommend learning how to use this if you need to do computations like this. The main issue here is the emphasis on "minimal". What you propose sounds fine for producing a free resolution. But it may be subtle to prove that it is in fact minimal.
|
|abstract-algebra|commutative-algebra|ideals|homological-algebra|tensor-products|
| 1
|
How do I show that all continuous periodic functions are bounded and uniformly continuous?
|
A function $f:\mathbb{R}\to \mathbb{R}$ is periodic if there exists $P>0$ such that $f(x+P)=f(x)$ for all $x\in \mathbb{R}$. Show that every continuous periodic function is bounded and uniformly continuous. For boundedness, I first tried to show that since a periodic function is continuous, it is continuous on the closed interval $[x_0,x_0+P]$. I know that there is a theorem saying that if it is continuous on a closed interval, then it is bounded. However, I'm not allowed to state that theorem directly. Should I just aim for a contradiction by supposing $f$ is not bounded on the interval stated above?
|
Assume that $g$ is continuous and periodic with period $T$ . Then, for all $x \in \mathbb{R}$ , $g(x + T) = g(x)$ . Now, consider the restriction $g \colon [-T, 2T] \rightarrow \mathbb{R}$ . Here, $g$ is continuous and also uniformly continuous (by compactness of the domain). Now, for every $\varepsilon > 0$ , there exists $\delta(\varepsilon) > 0$ such that for all $x', y' \in [-T, 2T]$ , if $|x' - y'| \leq \delta$ then $|g(x') - g(y')| < \varepsilon$ . Let $x, y \in \mathbb{R}$ such that $|x - y| \leq \delta$ , and consider $|g(x) - g(y)|$ . Since $g$ is periodic, there exists $k \in \mathbb{Z}$ such that $x' = x + kT \in [0, T]$ and $y' = y + kT \in [-T, 2T]$ . Then, since $|x' - y'| \leq \delta$ : \begin{align*} |g(x) - g(y)| &= |g(x + kT) - g(y + kT)| \\ &= |g(x') - g(y')| \\ &< \varepsilon. \end{align*}
|
|real-analysis|continuity|uniform-continuity|periodic-functions|
| 0
|
Stochastic integration and use of ito's lemma
|
Note to the moderators: This question has been solved, and is indeed a valid question. Please post a comment explaning what needs to be elaborated on. Thank you very much! - random0620 Proposition 6.7 Let $X, Y \in \mathcal{P}, M \in \mathcal{M}_{\mathrm{c}}^2$ and $S \leq T$ be stopping times. Then $$ \begin{aligned} & \mathbb{E}\left[\left((X \cdot M)_{T \wedge t}-(X \cdot M)_{S \wedge t}\right)\left((Y \cdot M)_{T \wedge t}-(Y \cdot M)_{S \wedge t}\right) \mid \mathcal{F}_S\right]= \mathbb{E}\left[\int_{S \wedge t}^{T \wedge t} X_u Y_u \mathrm{~d}\langle M\rangle_u \mid \mathcal{F}_S\right] \end{aligned} $$ I am wondering how to solve this stochastic calculus question. In the question, $\mathcal{P}$ is the set of pre-visible processes. $\mathcal{M}_c^2$ is the set of square integrable martingales. My attempt: I used the following formula : $$(X \cdot M)_{min(T,t)} - (X \cdot M)_{min(S,t)} = \int_{min(S,t)}^{min(T,t)} X_s \, dM_s$$ $$ \mathbb{E}\left(\left[(X \cdot M)_{min(T,t)} - (X
|
By stochastic integration by parts, which is a simple corollary of Ito's formula, $$ X_t Y_t-X_0 Y_0=\int_0^t X_s \,d Y_s+\int_0^t Y_s \,d X_s+\int_0^t d X \,d Y $$ we have $$\left((X \cdot M)_{T \wedge t}-(X \cdot M)_{S \wedge t}\right)\left((Y \cdot M)_{T \wedge t}-(Y \cdot M)_{S \wedge t}\right) \\=(X\cdot M)_{T\land t}(Y\cdot M)_{T\land t} -(X\cdot M)_{S\land t}(Y\cdot M)_{S\land t}\\ = \int_{S\land t}^{T\land t}X_s\,d(Y\cdot M)_{s} + \int_{S\land t}^{T\land t}Y_s\,d(X\cdot M)_{s} + \int_{S\land t}^{T\land t}d(X\cdot M)\,d(Y\cdot M) \\ $$ Now we have that $$d(X\cdot M)\,d(Y\cdot M)= (X\,dM)(Y\,dM) = XY\,dM\,dM = XY\,d\langle M\rangle.$$ Thus, $$ = \int_{S\land t}^{T\land t}X_s\,d(Y\cdot M)_{s} + \int_{S\land t}^{T\land t}Y_s\,d(X\cdot M)_{s} + \int_{S\land t}^{T\land t}X_sY_s\,d\langle M\rangle_s. \\$$ But after taking expectation, and applying optional stopping, which leaves only the variation term, we are left with $$\mathbb{E}\left[(X\cdot M)_{T\land t}(Y\cdot M)_{T\land t} -(X\cdot M)_{S\land t}(Y\cdot M)
|
|probability|probability-theory|stochastic-calculus|stochastic-integrals|
| 0
|
Criteria for the maximum of $f(x) = axe^{bx}$ at $(2,1)$
|
I have this question and have been trying for a long time to fully solve it, hopefully you guys can help me. Here is the question: Consider the function $f(x) = axe^{bx}$ , where $a$ and $b$ are constants and real numbers. Find the possible value(s) of $a$ and $b$ such that $f(x)$ has an absolute maximum of $(2, 1)$ on the interval $[0, 50]$ . Give your answer in exact form. I got one answer, but can't seem to prove or find out other possible answers. Here is my solution for one possible answer: $f(x) = axe^{bx}$ $1=f(2) =2ae^{2b}$ $ae^{2b}=\frac12$ $f'(x) = ae^{bx}(1+xb)$ $0=f'(2) = ae^{2b}(1+2b)=\frac12(1+2b)$ $b = -\frac12$ $ae^{2(-1/2)}=\frac12$ $a =\frac e2$ . So one of my possible answers is $a=\frac e2$ and $b=-\frac12$ . Now my only problem is the question asked for the possible value(s), so I don't know if there is more than one possible answer. Then question did specify on the interval from $[0,50]$ . So I don't know if there is a function that is greater in the negative, but
|
By differentiation (product rule, then simplification) it is found that $$(x_m,y_m)=(-1/b,-a/(be)).$$ When this is equated to $(2,1)$ we obtain $$ (a,b)= (e/2, -\tfrac12).$$ The second derivative sign test shows a maximum at this choice. Your work is correct and it appears to me that there need not be any doubt. If a second value were admissible, it would surely come out here as a double or multiple root. This is confirmed by the graph of the function as well:
|
|calculus|derivatives|maxima-minima|
| 0
|
Support of the pushforward of structure sheaf of a smooth scheme along proper birational morphism
|
Let $k$ be a field of characteristic $0$ and $R$ be a finite type $k$ -algebra. Let $X$ be a smooth $k$ -scheme and $f: X \to \text{Spec}(R)$ be a proper birational morphism. Then, is the $R$ -module $f_* \mathcal O_X$ supported at every maximal ideal of $R$ , i.e., is the localization of $f_* \mathcal O_X$ non-zero at every maximal ideal of $R$ ? If needed, I am willing to assume that $f_* \mathcal O_X$ is a projective $R$ -module.
|
Proper + birational tells you that $f$ is surjective, so yes, $f_* \mathcal{O}_X$ will be supported on all of ${\rm Spec}(R)$ . The characteristic 0 assumption is not needed.
|
|algebraic-geometry|commutative-algebra|schemes|coherent-sheaves|
| 0
|
Range of function $ f(x)=|\sin(\frac{x}{2})|+|\cos(x)|$
|
Find the range of function $$f(x)=|\sin\bigg(\frac{x}{2}\bigg)|+|\cos(x)|.$$ Using $0\leq \bigg|\sin\bigg(\frac{x}{2}\bigg)\bigg|\leq 1$ and $ 0\leq |\cos(x)|\leq 1, 0\leq f(x)\leq 1+1=2.$ But minimum value is $\frac{1}{\sqrt{2}}$ . Other way what I tried: $ f(x)=\bigg|\sin\bigg(\dfrac{x}{2}\bigg)\bigg|+\bigg|1-2\sin^2\bigg(\dfrac{x}{2}\bigg)\bigg|$ . $f(t)=|t|+|1-2t^2|,t=\sin\bigg(\dfrac{x}{2}\bigg).$ How can I get the minimum value?
|
Let's continue your idea. Define $f(x)=|x|+|1-2x^2|$ , where $x\in[-1,1]$ . Observe that $f(x)=f(-x)$ , so that $f$ is even, hence it suffices for us to investigate the behaviour of $f$ in $[0,1]$ . In this sense, $f(x)$ can be rewritten as $$f(x)=x+|1-2x^2|$$ Now consider $1-2x^2>0\iff x\in\left(\dfrac{-1}{\sqrt{2}},\dfrac{1}{\sqrt{2}}\right)$ , this means we can split $f$ into case-defined function, where $$f(x)=\begin{cases}-2x^2+x+1&,x\in\left[0,\dfrac{1}{\sqrt{2}}\right]\\ 2x^2+x-1,&x\in\left[\dfrac{1}{\sqrt{2}},1\right]\end{cases}$$ You can now investigate the behaviour of $f$ . Continue the proof by yourself.
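A quick numerical confirmation that the range comes out to $\left[\frac{1}{\sqrt2},\, 2\right]$ (my own check, not part of the proof above; since $f(x+2\pi)=f(x)$ , sampling one period suffices):

```python
import math

f = lambda x: abs(math.sin(x / 2)) + abs(math.cos(x))

# f has period 2*pi, so the grid over [0, 2*pi] captures the whole range.
N = 100000
vals = [f(2 * math.pi * i / N) for i in range(N + 1)]
print(min(vals))   # ≈ 1/sqrt(2) ≈ 0.7071, attained at x = pi/2
print(max(vals))   # 2.0, attained at x = pi
```

This matches the case analysis: the minimum $\frac{1}{\sqrt2}$ occurs where $\sin\frac x2 = \frac{1}{\sqrt2}$ (so $\cos x = 0$ ), and the maximum $2$ at $x=\pi$ .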
|
|trigonometry|
| 0
|
Prove that the following function is one-to-one
|
Define a function $g$ from the set of real numbers to $S$ by the following formula: $$ g(x) = \frac12\biggl( \frac x{1+|x|} \biggr) + \frac12,\quad x\in\mathbb{R}. $$ Prove that $g$ is a one-to-one correspondence. (It is possible to prove this statement either with calculus or without it.) What conclusion can you draw from this fact? My question is that what is the conclusion we can draw after we decide that it is a one-to-one correspondence? I would prove its one-to-one correspondence through its graph, which is one-to-one in that no two $x$ 's are mapped to the same $y$ .
|
This is what we should do: $$ g(\alpha)=g(\beta) \rightarrow \alpha=\beta $$ $$ g(\alpha)=g(\beta) \rightarrow \frac{1}{2}\left(\frac{\alpha}{1+|\alpha|}\right)+\frac{1}{2}=\frac{1}{2}\left(\frac{\beta}{1+|\beta|}\right)+\frac{1}{2}\rightarrow \frac{\alpha}{1+|\alpha|}=\frac{\beta}{1+|\beta|} $$ $$ \text{if } \alpha,\beta \geq 0 \rightarrow \frac{\alpha}{1+\alpha}=\frac{\beta}{1+\beta} \rightarrow \alpha=\beta $$ Or $$\text{if } \alpha,\beta < 0 \rightarrow \frac{\alpha}{1-\alpha}=\frac{\beta}{1-\beta} \rightarrow \alpha=\beta $$ If $\alpha$ and $\beta$ have opposite signs, then $g(\alpha)\neq g(\beta)$ , since $g(x)>\frac12$ exactly when $x>0$ and $g(x)<\frac12$ exactly when $x<0$ . And we are done: we proved that the function is one-to-one!
|
|algebra-precalculus|functions|discrete-mathematics|
| 0
|
Interpolating two spherical coordinates (theta, phi)
|
I have two points on a unit sphere, given as tuples $(\theta, \phi)$. I need an efficient way to interpolate them, for example given $t$ between zero and one, to generate a point that lies (on the unit sphere) between the two points, where $t = 0$ would give the first point, $t = 1$ would give the second point, and $t = 0.5$ would give a point that is exactly in the middle. I read about Slerp , but I don't know how to apply the quaternions math to the spherical coordinates. (If the two points are antipodal, let the result be random) It seems to be very similar to this question/answer , only that I use spherical coordinates instead of latitude/longitude.
|
I simply did sin(1-t)*P1 + sin(t)*P2 for t in the range [0, 1], where P1 and P2 are in Cartesian coordinates. I have no idea why it works, but it works.
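For reference, standard slerp additionally divides each weight by $\sin\Omega$ , where $\Omega$ is the central angle between the points (which is why the formula above only lands on the sphere after normalization). A minimal sketch, converting from $(\theta,\phi)$ ; the function name and the convention (polar angle from $+z$ , then azimuth) are my assumptions, not from the answer:

```python
import math

def slerp_spherical(p1, p2, t):
    """Interpolate on the unit sphere between p1 and p2, each given as
    (theta, phi): theta = polar angle from +z, phi = azimuth (assumed convention).
    Returns a Cartesian unit vector."""
    def to_cart(theta, phi):
        return (math.sin(theta) * math.cos(phi),
                math.sin(theta) * math.sin(phi),
                math.cos(theta))
    a, b = to_cart(*p1), to_cart(*p2)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    omega = math.acos(dot)              # central angle between the points
    if omega < 1e-9:                    # (nearly) coincident points
        return a
    s = math.sin(omega)                 # antipodal points (omega ~ pi) are degenerate
    w1, w2 = math.sin((1 - t) * omega) / s, math.sin(t * omega) / s
    return tuple(w1 * x + w2 * y for x, y in zip(a, b))

# halfway between (theta=pi/2, phi=0) and (theta=pi/2, phi=pi/2):
print(slerp_spherical((math.pi / 2, 0.0), (math.pi / 2, math.pi / 2), 0.5))
```

Convert the result back with theta = acos(z), phi = atan2(y, x) if spherical output is needed; the antipodal case must be handled separately (any great circle through the two points works, per the question's "let the result be random").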
|
|spherical-coordinates|spherical-geometry|
| 0
|
Definition of Left-Closable Martingale
|
I am currently studying martingales with Resnick's book A Probability Path . He defines a martingale as closed on the right if there is an $X \in L_1$ such that $X_n = \mathbb{E}[X \mid \mathcal{B}_n]$ for all $n$ . One can also find that definition here , defining it as "right-closable." What I am curious about is the "on the right." I did a little bit (perhaps not enough) searching around on the internet and I can't seem to find any definition of what it might mean to be closed "on the left." Does anyone happen to know of such a definition or where I might find one?
|
I haven't run into this terminology before, but here is my guess. The term closed probably refers to the set of indices $n$ where the martingale is defined. That is, consider the domain $$ \{ n : X_{n} \ \mathrm{is \ defined} \} = \mathbb{Z}_{\geq 0}, $$ which is not (topologically) closed in the extended reals because it has a hole on the right (at $n=+\infty$ ). Therefore, we define $X_{\infty}$ such that $$ X_{n} = \mathbb{E}[X_{\infty} | \mathcal{B}_{n} ], $$ which is just what you call $X$ . The domain then is closed. From this, it should be clear that "closed on the left" is not a particularly useful concept, since the domain is always closed on the left (at $n=0$ ). I assume that for this reason the term "closed" is used instead, excluding the "on the right". Notice also that essentially $ X_{\infty} = \lim_{n \to \infty} X_{n}, $ although the actual nature of this convergence depends on what conditions we have on the $X_{n}$ . See Doob's Martingale Convergence Theorem for this.
|
|probability|probability-theory|measure-theory|conditional-expectation|martingales|
| 1
|
Why is $\sum_{m=0}^{\lfloor xs\rfloor} 2 \binom{s}{m} p^m (1-p)^{s-m} \leq 2\exp{\left(-\frac{2(\lfloor xs\rfloor - sp)^2}{s}\right)}$
|
I am trying to understand few of the mathematical steps I have encountered in a paper, there are two of them (a) $\sum_{m=0}^{\lfloor xs\rfloor} 2 \binom{s}{m} p^m (1-p)^{s-m} \leq 2\exp{\left(-\frac{2(\lfloor xs\rfloor - sp)^2}{s}\right)}$ (b) $\sum_{m= \lfloor xs \rfloor + 1}^s \left(\frac{p}{x}\right)^m \left(\frac{1-p}{1-x}\right)^{s-m} \leq p\exp{\left(-sd(x,p)\right)}$ , where $d(a,b) = a\log{\frac{a}{b}} + (1-a)\log{\frac{1-a}{1-b}}$ . I have searched and tried a lot but could not figure out it. If some one can help or point out some relevant reference it would be helpful.
|
Note that the sum in (a) is $2\cdot \mathbb{P}\{X\leqslant x\}$ for $X\sim \mathrm{Bin}(s,p)$ . In view of this, (a) is a consequence of Hoeffding's inequality. (Recall that a binomial random variable is a sum of iid Bernoulli random variables.)
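Spelling out that application (a sketch, assuming $\lfloor xs\rfloor < sp$ so that the deviation $t = sp - \lfloor xs\rfloor$ is positive): since $X$ is a sum of $s$ independent random variables each taking values in $[0,1]$ , Hoeffding's inequality gives $$\mathbb{P}\{X \le \lfloor xs\rfloor\} = \mathbb{P}\{X - sp \le -t\} \le \exp\!\left(-\frac{2t^2}{s}\right) = \exp\!\left(-\frac{2(\lfloor xs\rfloor - sp)^2}{s}\right),$$ and multiplying both sides by $2$ yields (a).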
|
|algebra-precalculus|inequality|summation|exponential-function|binomial-coefficients|
| 0
|
Rearranging propositional equations
|
so I wanted to ask if there is anything that allows for this case: I have $3$ statements $A, B$ , and $C$ and I wish to change them from the form: $(A \operatorname{xor} B)$ , $(A \operatorname{xor} C)$ Into $A \operatorname{xor} B \operatorname{xor} C$ Is this possible and how does one do this? Preferably with references as I wish to increase my knowledge. -- Edit for clarity -- A little context: my friend is making a board game; he wants to check that the rules won't be misinterpreted as being able to do any actions $A,B$ or $C$ simultaneously. So far a player may have access to two different cards written something like, "You may roll $3$ dice instead of $1$ this turn" and "You may double all dice rolled until the end of the turn instead of rolling $1$ dice." Both of these rules change how someone takes their turn. A player will roll a die that determines how far they can move; in this case actions $B$ and $C$ replace the action $A$ . $A$ $=$ "Roll $1$ die" $B$ $=$ "Roll $3$ dice this turn"
|
I have $3$ statements $A, B$ , and $C$ and I wish to change them from the form: $(A \operatorname{xor} B)$ $(A \operatorname{xor} C)$ Into $A \operatorname{xor} B \operatorname{xor} C$ You may wish to do so, but please note that the conjunction of the first two statements is not equivalent to $A \operatorname{xor} B \operatorname{xor} C$ . In fact, $(A \operatorname{xor} B)$ and $(A \operatorname{xor} C)$ don't even imply $A \operatorname{xor} B \operatorname{xor} C$ . Consider: $A$ is False ( $F$ ), and $B$ and $C$ are True ( $T$ ). Then $(A \operatorname{xor} B)$ and $(A \operatorname{xor} C)$ are clearly both true. However, $A \operatorname{xor} B \operatorname{xor} C$ is False, since: $$F \operatorname{xor} T \operatorname{xor} T = F \operatorname{xor} (T \operatorname{xor} T) = F \operatorname{xor} F = F$$ You actually have to be very careful in using expressions where you apply what is normally a binary operator to more than two operands. This may work well for operators like $\land$ and $\lor$ , but a chained $\operatorname{xor}$ is true exactly when an odd number of its operands are true; for three operands that includes the case where all three are true, not just the case where exactly one is.
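The non-implication can be verified mechanically by enumerating all eight truth assignments (a quick sketch):

```python
from itertools import product

def xor(*vals):
    # chained xor: True exactly when an odd number of operands are True
    out = False
    for v in vals:
        out = out != v
    return out

# assignments where (A xor B) and (A xor C) hold but A xor B xor C fails
counterexamples = [(a, b, c)
                   for a, b, c in product([False, True], repeat=3)
                   if xor(a, b) and xor(a, c) and not xor(a, b, c)]
print(counterexamples)  # [(False, True, True)]
```

The single counterexample found is exactly the one described in the answer.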
|
|logic|propositional-calculus|
| 1
|
I need help with this question about a graph that has a log in it and the points where it crosses the axes
|
Prompt: Find the $x$ - and $y$ -intercepts of the graph of $x = (y+2)\ln(3y+4)$ . Given solution: For $x = 0$ , solve $(y+2)\ln(3y+4) = 0$ for $y$ : \begin{align} y + 2 &= 0 & \ln(3y + 4) &= 0 \\ y &= -2 & 3y + 4 &= e^0 \\ \rlap{\text{not a solution}}\qquad && 3y + 4 &= 1 \\ && 3y &= -3 \\ && y &= -1 \end{align} So, only $(0, -1)$ . For $y = 0$ , evaluate $x = (0 + 2) \ln(3(0) + 4) = 2\ln 4 = 4\ln 2$ , so only $(4\ln2, 0)$ . My question: Why does $y \neq -2$ ? Can someone help me in simple terms.
|
The domain of the natural logarithm function consists only of positive real numbers, but with $y = -2$ , the input to the log is $3y + 4 = -2 < 0$ . You can see that the graph only contains points with $y$ -coordinate satisfying $3y + 4 > 0$ , i.e., $y > -\tfrac43$ .
|
|algebra-precalculus|logarithms|
| 0
|
Range of function $ f(x)=|\sin(\frac{x}{2})|+|\cos(x)|$
|
Find the range of function $$f(x)=|\sin\bigg(\frac{x}{2}\bigg)|+|\cos(x)|.$$ Using $0\leq \bigg|\sin\bigg(\frac{x}{2}\bigg)\bigg|\leq 1$ and $ 0\leq |\cos(x)|\leq 1, 0\leq f(x)\leq 1+1=2.$ But minimum value is $\frac{1}{\sqrt{2}}$ . Other way what I tried: $ f(x)=\bigg|\sin\bigg(\dfrac{x}{2}\bigg)\bigg|+\bigg|1-2\sin^2\bigg(\dfrac{x}{2}\bigg)\bigg|$ . $f(t)=|t|+|1-2t^2|,t=\sin\bigg(\dfrac{x}{2}\bigg).$ How can I get the minimum value?
|
Let us consider $g(x)=|\sin(x)|+|1-2\sin^2(x)|$ . Observe that $g$ is $\pi$ -periodic, i.e. $g(x+\pi)=g(x)$ . Also since $\sin (x)=\sin(\pi-x)$ , it follows that we can assume $x\in [0, \frac{\pi}2]$ since the graph just repeats laterally inverted in $[\frac{\pi}2,\pi]$ . $g(x)=\begin {cases}2\sin^2x+\sin x -1 && \frac{\pi}2\geq x\geq\frac{\pi}4 \\-2\sin^2x + \sin x +1&& \frac{\pi}4>x\geq 0\end{cases}$ For the first case, consider the parabola $y(x):=2x^2+x-1$ . We are interested in $\frac{1}{\sqrt 2}\le x \le 1$ . Note that $y$ has its minimum value $y(v)$ at its vertex where $v=-0.25$ . Since $|v-\sin(\pi/4)| < |v-\sin(\pi/2)|$ , it follows in this case that $$\max_{x=\sin(\pi/2)} y(x) = 2$$ and $$\min_{x=\sin(\pi/4)} y(x)=\frac{1}{\sqrt 2}$$ For the second case note the parabola $w(x):=-2x^2+x+1$ has a maximum value at its vertex $w(v)$ where $v=0.25$ . Now note we are interested in $0\le x < \frac{1}{\sqrt 2}$ . Noting that $v$ is within the permissible range of $x$ , $w(v)$ is the maximum value attained in this case. Since $w(0)=1$ and $w(x)\to\frac{1}{\sqrt 2}$ as $x\to\frac{1}{\sqrt 2}$ , the values in this case stay above $\frac{1}{\sqrt 2}$ . Combining both cases, the minimum of $g$ is $\frac{1}{\sqrt 2}$ (attained where $\sin x=\frac{1}{\sqrt2}$ ) and the maximum is $2$ , so the range of the original function is $\left[\frac{1}{\sqrt 2}, 2\right]$ .
|
|trigonometry|
| 1
|
Prove that $B$ is normal
|
Let $B$ be a complex invertible matrix such that $B^*B^2B^*=B+B^*$ . Prove that $B$ is normal. I was only able to prove that $BB^*$ , $B^*B$ commute, so $BB^*$ , $B^*B$ , $B+B^*$ are simultaneously diagonalizable (and also $B+B^*$ is positive definite). I know that the problem works also for $B$ ordinary, but that problem becomes easy if we solve it for $B$ invertible. Anyway, I tried solving this for a long time, and nothing works. Any tips would be highly appreciated. Thanks.
|
using OP's observation about $BB^*$ and $B^*B$ commuting, left multiply $B^*B^2B^*=B+B^*$ by $(B^*B)$ to get $(B^*B)B + (B^*B)B^* = (B^*B)B^*B^2B^* = B^*B^2B^*(B^*B)=B(B^*B) + B^*(B^*B)$ then left multiply by $B^*$ and take the trace $\text{trace}\Big(B^*(B^*B)B\Big) + \text{trace}\Big(B^*(B^*B)B^*\Big)= \text{trace}\Big(B^*B(B^*B)\Big) + \text{trace}\Big(B^*B^*(B^*B)\Big)$ $\implies \text{trace}\Big((BB^*)(B^*B)\Big) = \text{trace}\Big(B^*(B^*B)B\Big) = \text{trace}\Big((B^*B)^2\Big) $ [cyclic property of trace] $\implies B^*B=BB^*$ by Cauchy-Schwarz; equivalently the trace equality shows $\Big\Vert BB^* -B^*B\Big\Vert_F^2 = 0$ addendum: simultaneous diagonalization proof write $B= H + S$ , where $H$ is Hermitian and $S$ is skew-Hermitian OP has observed that $BB^*$ , $B^*B$ and $B+B^* = 2H$ are simultaneously diagonalizable by some unitary $U$ . $B^*B -BB^* = \big(H^2 - S^2 -SH +HS\big)-\big(H^2 - S^2 +SH -HS\big)= 2\big(HS -SH\big)$ conjugating by $U^{*}$ gives $U^*\big(B^*B -BB^*\big)U = 2\big(DS' - S'D\big)$ , where $D=U^*HU$ is diagonal and $S'=U^*SU$ . The left side is diagonal, being the difference of the two diagonal matrices $U^*(B^*B)U$ and $U^*(BB^*)U$ , while the commutator $DS'-S'D$ has zero diagonal (its $i$ -th diagonal entry is $d_i s'_{ii} - s'_{ii} d_i = 0$ ). A diagonal matrix whose diagonal entries all vanish is zero, so $B^*B=BB^*$ .
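The final step uses the identity $\Vert BB^*-B^*B\Vert_F^2 = \text{trace}\big((BB^*)^2\big)+\text{trace}\big((B^*B)^2\big)-2\,\text{trace}\big((BB^*)(B^*B)\big)$ ; a quick numerical sanity check of this identity with a random complex matrix (a sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
P, Q = B @ B.conj().T, B.conj().T @ B            # BB* and B*B
lhs = np.linalg.norm(P - Q, 'fro')**2
rhs = (np.trace(P @ P) + np.trace(Q @ Q) - 2 * np.trace(P @ Q)).real
print(abs(lhs - rhs))  # ~ 0 up to floating-point noise
```

This is just $\text{trace}\big((P-Q)^2\big)$ expanded, valid because $P-Q$ is Hermitian.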
|
|linear-algebra|
| 1
|
Clarification on a limit
|
I am looking for a formal proof that If $$\lim_{x\to a} h(x)=\pm\infty$$ then $$\lim_{x\to a}{\huge(}1+\frac{\alpha}{h(x)}{\huge)}^{h(x)}=e^\alpha$$ I understand that $h(x)$ tends to infinity but I don't know how I can formally prove this. I know the Heine and Cauchy definitions of limits.
|
Let $$f(x)= \bigg(1+\frac{α}{h(x)}\bigg)^{h(x)}$$ and assume $α\neq 0$ (if $α=0$ then $f(x)=1=e^0$ and there is nothing to prove). Then $$ \ln f(x) = h(x) \ln \bigg(1+\frac{α}{h(x)}\bigg) = α\,\frac{\ln \bigg(1+\frac{α}{h(x)}\bigg)}{\bigg(\frac{α}{h(x)}\bigg)}$$ As $x\to a$ we have $h(x) \to \pm\infty$ , hence $\frac{α}{h(x)}\to 0$ (in particular $1+\frac{α}{h(x)}>0$ for $x$ close enough to $a$ , so the logarithm is defined). We also know $$\lim_{k\to 0}\frac{\ln(1+k)}{k} = 1$$ Hence $$\lim_{x\to a} \ln f(x) = α \cdot 1 = α$$ and, since the exponential function is continuous, $$\lim_{x\to a} f(x) = e^α$$
|
|real-analysis|calculus|limits|
| 0
|
Interpolating points on a sphere between two points
|
I managed to solve it using the following function: given a Cartesian point A and a point B, the geodesic path on a sphere is parameterized as: r(t) = sin(1-t)*A + sin(t)*B, for t=[0, 1]; then normalize r(t)/||r(t)|| to get the points on the sphere. Can someone explain to me why it works? Edit: changed the "1" to pi/2 to get uniform sampling.
|
What you have is a monotonic locus of $r(t)$ in both cases; however, it is unrelated to the surface of the sphere. By normalising $r(t)$ afterwards to $\hat{r}(t)=\frac{r(t)}{\|r(t)\|}$ , you're radially projecting your locus onto the unit sphere, which gives you a non-linear parametrisation (non-linear interpolation). This non-linearity of parametrisation is visible in your plot. If you remove the sines, there will be higher densities of points at the beginning and end of your locus than in the middle.
|
|spherical-coordinates|geodesic|riemann-sphere|
| 0
|
Prove $\int_{1}^{x} \frac{dt}{t} \leq \sum_{n \leq x} \frac{1}{n} \leq 1 + \int_{1}^{x} \frac{dt}{t}$
|
Let $x \geq 1$ . I wish to show, by interpreting the sum as a Riemann sum, that $$\log(x) = \int_{1}^{x} \frac{dt}{t} \leq \sum_{n \leq x} \frac{1}{n} \leq 1 + \int_{1}^{x} \frac{dt}{t}.$$ Certainly, $f(t) = 1/t$ is continuous on $[1, x]$ , so is integrable on $[1, x]$ and may be computed as the limit of a sequence of Riemann sums. I have only the idea of interpreting $\sum_{n \leq x} \frac{1}{n}$ as an "upper sum" of $f$ with respect to a partition $\mathcal{P} = \{1, 2, ..., \lfloor x \rfloor, x\}$ , which would necessarily be no less than $\int_{1}^{x} \frac{dt}{t}$ . However, I am not certain how to formalise this idea, because, in using $\mathcal{P}$ , I should be left with an "excess" area from $\lfloor x \rfloor$ to $x$ ?
|
It would be convenient, since the sum is divided up at the integers, to interpret your Riemann sum as having equidistant points $1$ unit of distance apart each. So: $$ \int_1^x \frac{dt}{t} = \sum_{n=1}^{\lfloor x \rfloor -1} \int_n^{n+1} \frac{dt}{t} + \int_{\lfloor x \rfloor}^x \frac{dt}{t} $$ Since $\frac 1 t$ is decreasing, then $$ \frac{1}{n+1} \le \frac 1 t \le \frac{1}{n} \text{ on } [n,n+1] $$ Then $$ \frac{1}{n+1} \le \int_n^{n+1} \frac{dt}{t} \le \frac 1 n $$ These are most easily seen in, say, this Desmos demo. (Modified from one I use to show students the left-endpoint rule.) From here, the remainder is ultimately just careful manipulation and algebra.
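One way to finish the estimate (a sketch): summing the per-interval bounds gives $$\int_1^x \frac{dt}{t} \le \sum_{n=1}^{\lfloor x\rfloor-1}\frac{1}{n} + \int_{\lfloor x\rfloor}^x \frac{dt}{t} \le \sum_{n=1}^{\lfloor x\rfloor-1}\frac{1}{n} + \frac{x-\lfloor x\rfloor}{\lfloor x\rfloor} \le \sum_{n\le x}\frac{1}{n},$$ since $x-\lfloor x\rfloor < 1$ ; and in the other direction $$\sum_{n\le x}\frac{1}{n} = 1 + \sum_{n=2}^{\lfloor x\rfloor}\frac{1}{n} \le 1 + \sum_{n=2}^{\lfloor x\rfloor}\int_{n-1}^{n}\frac{dt}{t} = 1 + \int_1^{\lfloor x\rfloor}\frac{dt}{t} \le 1 + \int_1^x\frac{dt}{t}.$$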
|
|calculus|riemann-integration|
| 0
|
Value of the coordinates in parametric form of hyperbola
|
We know a hyperbola can be expressed in the form of $$ \frac{(x-h)^2}{a^2}-\frac{(y-k)^2}{b^2}=1$$ where $(h,k)$ is it's center. I've learnt that in the parametric form, we take $$x= h + a\sec t$$ and $$ y = k + b\tan t $$ These values satisfy the given equation. But so does $$x= h + a\csc t$$ and $$y=k+b\cot t$$ Then why aren't these second values of $x$ and $y$ taken as parameters? Does this cause a difference in the graphs?
|
The answer is largely convention and convenience. Your alternative parameterization will work, but since we often parameterize curves using $x=\cos{\theta}$ and $y=\sin{\theta}$ , for internal consistency with the geometric definitions of $\sin$ and $\cos$ it is a more natural choice to use $\sec$ for the $x$ value if $\cos$ won't work. From there, because we need $x^{2}-y^{2}=1$ , the identity of choice is $\sec^{2}{\theta}-\tan^{2}{\theta}=1$ , and the parameterization then follows.
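Both identities can be checked numerically; a small sketch for the unit hyperbola ( $h=k=0$ , $a=b=1$ ), sampling $t$ away from the singular values:

```python
import math

# sec^2 t - tan^2 t = 1 and csc^2 t - cot^2 t = 1, away from singularities
for t in [0.3, 1.0, 2.5, 4.0]:
    sec2_minus_tan2 = (1 / math.cos(t))**2 - math.tan(t)**2
    csc2_minus_cot2 = (1 / math.sin(t))**2 - (1 / math.tan(t))**2
    print(round(sec2_minus_tan2, 9), round(csc2_minus_cot2, 9))  # 1.0 1.0
```

The two parameterizations trace the same curve; they differ only in which ranges of $t$ cover each branch (for example, $t\in(-\pi/2,\pi/2)$ gives the right branch with $\sec$ , while $t\in(0,\pi)$ gives it with $\csc$ ).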
|
|analytic-geometry|conic-sections|parameter-estimation|
| 0
|
Calculating the Distance from a Point on the Tangent to an Ellipse to the Center
|
In the following figure, the line that touches the ellipse at only one point, called A, is the tangent line to the ellipse at that point. C is the center of the ellipse. Point $L'$ is the point where the perpendicular passing through C to the tangent line intersects the tangent line. Point L, instead, is the intersection between this perpendicular and the ellipse. I know that the ratio $\frac{AL'}{L'C}$ is given. I would like to calculate the length of $AC$ . I thought I could calculate $L'C$ (and then $AC$ ) using question Distance point on ellipse to centre ; however, by doing so, I can only calculate $LC$ . Any suggestions?
|
Let's WLOG take the centre of the ellipse to be the origin. Then the ellipse can be described as $$\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1$$ We state without proof a lemma: The equation of the tangent to the ellipse $\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1$ at $(x_1,y_1)$ is $\dfrac{x_1x}{a^2}+\dfrac{y_1y}{b^2}=1$ . This is a well-known result from conic sections; you can ask for a proof if you need one. Now given this fact, if we know the coordinates of $L'=(l_1,l_2)$ , then we have $$b^2(x_1l_1)+a^2(y_1l_2)=a^2b^2\iff x_1=\dfrac{a^2(b^2-y_1l_2)}{b^2l_1}$$ However, we know that $(x_1,y_1)$ is a point on the ellipse, so we get a quadratic equation $$\dfrac{a^2(b^2-y_1l_2)^2}{b^4l_1^2}+\dfrac{y_1^2}{b^2}=1$$ So you can solve for $A=(x_1,y_1)$ . And you get $AL$ , and thus $LC$ by the ratio. Now if you are given $A$ , then you get the slope of $CA$ , and the given information can be described as $\tan\angle ACL$ , so you can find $\angle ACL$ . Combined with the slope of $CA$ , you can get the slope of $LC$ , and hence determine the line $CL$ itself.
|
|geometry|conic-sections|
| 0
|
Prove $ \int_{-x}^{x} \frac{t^2 e^t}{e^t + 1} \, dt = \frac{x^3}{3}$ for every real $x$
|
Just like in the title, one of the problems given in our homework is to prove that for every real $x$ the following equality is true: $$ \int_{-x}^{x} \frac{t^2 e^t}{e^t + 1} \, dt = \frac{x^3}{3}$$ I've been facing this for 2 days straight now, and am absolutely dumbstruck on how to do this. We aren't allowed to calculate the antiderivative of this problem. The only hint we were given is to think of an operation that will cancel out the integral on the left side of the equality. But even after rereading our learning material again and again, I simply couldn't find anything helpful. However, I would much rather get a hint rather than the solution if possible. Thanks in advance! EDIT : Even after reading the solution written below, I still feel like something's not clicking in my head. If possible, a detailed explanation would be ideal (and thanks in advance for going through the trouble for me).
|
Hint 1: Fundamental theorem of calculus Hint 2: Let $F(x)=\displaystyle\int^x_{-x}\dfrac{t^2e^t}{e^t+1}~dt$ . What is $F(0)$ ? Edit. Let me provide the exact solution below in case you need it. Hint 1: We apply the fundamental theorem of calculus to $F$ . That is, $$F'(x)=\dfrac{x^2e^x}{e^x+1}\dfrac{dx}{dx}-\dfrac{(-x)^2e^{-x}}{e^{-x}+1}\dfrac{d(-x)}{dx}=x^2\left(\dfrac{e^x}{e^x+1}+\dfrac{e^{-x}}{e^{-x}+1}\times\dfrac{e^x}{e^x}\right)=x^2\left(\dfrac{e^x}{e^x+1}+\dfrac{1}{e^x+1}\right)=x^2$$ Hence, $F(x)=\dfrac{x^3}{3}+\text{constant}$ . By Hint 2, $F(0)=0$ , so you have shown that $F(x)=\dfrac{x^3}{3}$ .
|
|real-analysis|calculus|integration|
| 1
|
Wrong simplification of $2^{\sqrt{\log_2n}}$
|
I am trying to do the exercise 01, chapter 02 of the book: Algorithm Design [Kleinberg _ Tardos] - publication version 03 of the book I need to manipulate $2^{\sqrt{\log_2n}}$ , what I did was: $2^{\sqrt{\log_2n}}$ $2^{\frac{1}{2}\log_2n}\impliedby$ Log property where $\log_b(n^a)=a\log_bn$ $n^{\frac{1}{2}\log_22}\impliedby$ Log property where $a^{\log_bc}=c^{\log_ba}$ $n^{\frac{1}{2}}\impliedby\log_22=1$ I don't know the correct answer. Moreover I know that the correct answer is smaller than $n^\frac{1}{3}$ , and $n^\frac{1}{2}$ is not smaller than $n^\frac{1}{3}$ , so I messed up somewhere but I cannot find.
|
An expression like $2^{\sqrt{\log_2 n}}$ does not simplify to anything nicer. First, the problem with the attempt in the question is that there is no rule to simplify $(\log x)^k$ . We can simplify $\log(x^k)$ to $k \log x$ , but there's nothing to be done when we first take the logarithm and then raise the result to a power. In particular, $\sqrt{\log_2 n} \ne \frac12 \log_2 n$ . (To make this more obvious, let $y = \log_2 n$ ; then you are trying to simplify $\sqrt{y}$ to $\frac12 y$ .) We can make some comparisons between $2^{\sqrt{\log_2 n}}$ and other functions in terms of growth rates. More precisely: For all $k>0$ , no matter how large, $2^{\sqrt{\log_2 n}}$ grows more slowly than $n^{1/k}$ : for large enough $n$ (depending on $k$ ), we have $2^{\sqrt{\log_2 n}} < n^{1/k}$ . To see this, take the logarithm of both sides: we get $\sqrt{\log_2 n}$ on one side and $\frac1k \log_2 n$ on the other. In terms of $y = \log_2 n$ again, $\sqrt y$ will be less than $y/k$ provided $y > k^2$ ; therefore $2^{\sqrt{\log_2 n}} < n^{1/k}$ whenever $n > 2^{k^2}$ .
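Comparing in log space makes the growth claim easy to check numerically (a sketch, writing $y=\log_2 n$ ):

```python
import math

def log2_of_f(y):          # log2 of 2^sqrt(log2 n), with y = log2 n
    return math.sqrt(y)

def log2_of_root(y, k):    # log2 of n^(1/k), with y = log2 n
    return y / k

# crossover at y = k^2: for n = 2^9, 2^sqrt(log2 n) equals n^(1/3)
print(log2_of_f(9), log2_of_root(9, 3))      # 3.0 3.0
# for n = 2^100, 2^sqrt(log2 n) = 2^10 is far below n^(1/3) = 2^(100/3)
print(log2_of_f(100), log2_of_root(100, 3))
```

So $2^{\sqrt{\log_2 n}}$ is eventually smaller than $n^{1/3}$ even though it exceeds it for small $n$ .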
|
|algebra-precalculus|logarithms|
| 1
|
Show that $1 + \sqrt{2}$ is a unit in $\Bbb Z[\sqrt2]$
|
I'm a beginner to number theory, and in a text book, right after proving the fundamental theorem of arithmetic, the following problem is stated: Let $H_m$ be the subset of real numbers, which can be written in the form of $x + y\sqrt{m}$ , where $x$ and $y$ are integers and $m$ isn't a square number. Show that besides $\pm1$ , $1 + \sqrt{2}$ and $3 + 2\sqrt{2}$ are also units in $H_2$ . The book recommends defining divisibility, units and undecomposablity in the $H_m$ set first. How should I begin solving the problem? Any kind of help is welcome. (I'm not that familiar with the technicalities in English, I'm sorry.)
|
So the solution turned out to be: $$1 = (1 + \sqrt{2})(n + m\sqrt{2})$$ We have to show that $n,m\in\mathbb{Z}$ . If they are, then $1+\sqrt{2}$ is a unit, otherwise it isn't. $$1 = n + m\sqrt{2} + n\sqrt{2} + 2m$$ $$1 = (n + 2m) + (m + n)\sqrt{2}$$ $1$ can be written in the form $1 + 0\sqrt{2}$ , so, since $\sqrt{2}$ is irrational, the representation $x + y\sqrt{2}$ with integers $x,y$ is unique, and therefore: $$n + 2m = 1 \quad\wedge\quad m + n = 0$$ By solving this equation system, we get that $m=1$ and $n = -1$ , which are integers, therefore we can conclude that $1+\sqrt{2}$ is a unit in $H_2$ . The other unit, $3 + 2\sqrt{2}$ can be shown similarly.
|
|elementary-number-theory|
| 0
|
Find count of this functions roots:$\sqrt{x+1}-x^2+1=0$
|
There is an equation here: $$\sqrt{x+1}-x^2+1=0$$ We want to rewrite the equation $f(x)=0$ in the form $h(x)=g(x)$ , where $h$ and $g$ are functions whose graphs we know how to draw. Then we draw the graphs of $h$ and $g$ and find their common points; the number of common points is the number of roots of the equation above. My problem is with drawing the diagram for this equation. I would like to see how to draw graphs like $\sqrt{x-1}$ step by step. Please help me with it!
|
To draw the figure, the idea is turning the equation into $\sqrt{x+1}=x^2-1$ . I guess $y=x^2-1$ is easy to draw: you have an upward-opening parabola with $y$ -intercept $-1$ and $x$ -intercepts $\pm1$ . For the square root function, you know $g(x):=\sqrt{x}$ is increasing and concave, so $g$ keeps going up while getting flatter as $x$ rises. Moreover, it stays strictly above the $x$ -axis for $x>0$ , and has $x$ -intercept $0$ . Now, $\sqrt{x+1}$ is exactly the whole square root curve shifted 1 unit left. Counting the intersections now gives you the answer. (You should see that both curves already meet on the $x$ -axis at $x=-1$ , and that there is one more solution with $x>1$ .)
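The count can be confirmed numerically by scanning for sign changes of $f(x)=\sqrt{x+1}-x^2+1$ on its domain $[-1,\infty)$ and refining each by bisection (a sketch; the scan window $[-1,3]$ is an assumption that clearly contains all roots, since $x^2-1>\sqrt{x+1}$ for $x\ge 3$ ):

```python
import math

def f(x):
    return math.sqrt(x + 1) - x**2 + 1

def bisect(lo, hi, iters=80):
    # refine a sign-change bracket [lo, hi] down to machine precision
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

xs = [-1 + 4 * i / 4000 for i in range(4001)]   # grid on [-1, 3]
roots = [xs[0]] if f(xs[0]) == 0 else []        # x = -1 is an exact root
for x0, x1 in zip(xs, xs[1:]):
    if f(x0) != 0 and f(x0) * f(x1) < 0:
        roots.append(bisect(x0, x1))
print(len(roots), roots)
```

Algebraically, squaring gives $x+1=(x^2-1)^2$ , whose solutions are $x=-1$ , $x=0$ and $x=\frac{1\pm\sqrt5}{2}$ ; checking them in the original equation rules out $x=0$ and $\frac{1-\sqrt5}{2}$ , leaving $x=-1$ and $x=\frac{1+\sqrt5}{2}$ , matching the count of $2$ .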
|
|algebra-precalculus|functions|roots|quadratics|radicals|
| 0
|
explanation required for the logic of a proof step regarding set membership, conjunction, and implication
|
This question is asking for an explanation of a step in the following segment of someone else's proof of a textbook exercise regarding set membership, conjunction and implication. Consider the following: $$ (x \in A \land y \in B) \implies (x \in C\land y \in D) $$ Let me check I understand the meaning. It says that if both $x \in A$ and $y \in B$ , then we can conclude that both $x \in C$ and $y \in D$ . Both clauses of the antecedent must be true in order for both clauses of the consequent to be true. The online solution guide then had the following as the next step: $$ (x\in A \implies x \in C) \land (y \in B \implies y \in D) $$ Question: I don't understand how the step was made to this statement. Can anyone explain (for a self-teaching newcomer to maths)? My Thoughts The first statement had pairs of clauses, connected by a conjunction. Both clauses had to be true in the antecedent for the consequent to be true. It so happens the consequent also has paired clauses, connected by a c
|
I agree with you. The second statement clearly implies the first one, but the converse is false. A counterexample is $B=\varnothing,A\not\subseteq C$ . Edit after your update: this was however essentially the only counterexample, i.e. if $A$ and $B$ are non-empty then $$\forall x\forall y\quad[(x \in A \land y \in B) \implies (x \in C\land y \in D)]$$ implies $$\forall x\forall y\quad[(x\in A \implies x \in C) \land (y \in B \implies y \in D)]$$ or equivalently, $$\exists x\exists y\quad[(x\in A \land x \notin C) \lor (y \in B \land y \notin D)]$$ implies $$\exists x\exists y\quad[x \in A \land y \in B\land(x \notin C\lor y \notin D)].$$ Indeed, if there exist some $x,y$ such that, for instance, $x\in A \land x \notin C$ , then, keeping this $x$ but taking as a new $y$ some element in $B$ , you get that $x \in A \land y \in B\land(x \notin C\lor y \notin D)$ holds.
|
|elementary-set-theory|logic|
| 0
|
Curve Fitting and continuous identification of parameterized matrix eigenvector from unconnected data points
|
I need some help potentially applying machine learning or curve fitting techniques to a numerical linear algebra problem I am working on. I have a parameterized symmetric matrix $M(s)$ whose eigenspectrum I want to study numerically as a function of $s$ . The matrix is specifically the Hamiltonian of some physical system - the details aren't too important. Numerically, I'll pick some finite step size for $s$ (e.g. in NumPy via np.linspace(0, 1, 101) ) and then diagonalize $M(s)$ at each value using np.linalg.eigh() . I thus get an ordered list of real eigenvalues $\lambda_k(s)$ indexed by $k$ and their corresponding eigenvectors $V_k(s)$ . Now, I have a given vector $V_\star = V_0(0)$ and I want to study overlaps of the parameterized eigenvectors with this target vector. I do this by plotting the overlaps $O_k(s) = |V_\star^T V_k(s)|$ of each vector with the target $V_\star$ , i.e. for all $k$ . I generically get something that looks like this: [figure: plot of $O_k(s)$ vs. $s$ ] The colors on this
|
One can use the following definition of an avoided crossing (anticrossing) between neighboring, ordered, eigenvalues at $s=s_c$ : $$\lambda'_{k+1}(s_c)-\lambda'_{k}(s_c)=0, ~~~\lambda''_{k+1}(s_c)-\lambda''_{k}(s_c)>0 $$ which is basically the definition of a local minimum for the vertical distance between two neighboring eigenvalues. This definition does not cover crossings with zero minimum distance between them, but numerically it's hard to distinguish between the two anyway. If one is not limited by computational resources and can go to very small resolutions, then it could make sense to numerically approximate the first derivative and test it for a change of sign; this is guaranteed to detect any crossing that is wider than the resolution, and can detect smaller crossings whose asymptotic behavior falls within the resolution limit. An algorithm that can detect all minima of a function can therefore detect every crossing between the eigenvalues of a matrix, by applying it to every consecutive pair of eigenvalue branches, i.e. to each gap function $\lambda_{k+1}(s)-\lambda_{k}(s)$ .
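A minimal sketch of the grid-based detection described above (the function name and the toy gap function are illustrative assumptions, not from the original):

```python
import numpy as np

def find_anticrossings(s_vals, gap):
    """Locate local minima of gap(s) = lambda_{k+1}(s) - lambda_k(s),
    sampled on a grid, via a sign change of the finite-difference derivative."""
    d = np.diff(gap)                          # forward differences ~ gap'(s)
    # derivative goes from negative to positive -> local minimum of the gap
    idx = np.where((d[:-1] < 0) & (d[1:] > 0))[0] + 1
    return s_vals[idx]

# toy example: a hyperbolic gap with an avoided crossing at s = 0.5
s = np.linspace(0, 1, 101)
gap = np.sqrt((s - 0.5)**2 + 0.01)            # minimum gap 0.1 at s = 0.5
print(find_anticrossings(s, gap))  # [0.5]
```

In practice this would be applied to each gap $\lambda_{k+1}(s)-\lambda_k(s)$ computed from np.linalg.eigh() on the $s$ grid.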
|
|linear-algebra|statistical-inference|numerical-linear-algebra|machine-learning|computer-vision|
| 0
|
In how many ways can we distribute 6 white and 6 black balls in 10 different boxes such that each box has at least 1 ball?
|
The question states that 6 white and 6 black balls of the same size are distributed among 10 different urns. Balls are alike except for the colour; each urn can hold any number of balls. Find the number of different distributions of the balls so that there is at least 1 ball in each urn. Now the difficulty I am finding is that I can't simply apply any method for a direct answer, as there will be many cases that could be repeated and some that I would have to add back, so I thought it wouldn't be a good method. So I am seeking a quick answer to the question with some explanation, as this question is something I can't understand much.
|
Here is a solution using inclusion/exclusion. First, if we want to place $k$ balls in $r$ urns, without any restrictions, this can be done in $$\binom{k+r-1}{k}$$ ways. This fact can be shown by the method of Stars and Bars, or see the Wikipedia article on Multisets . Now consider placing $6$ white balls and $6$ black balls in $10$ urns. Let's say an arrangement has "Property $i$ " if urn $i$ is empty, for $1 \le i \le 10$ , and define $S_j$ to be the total of the number of arrangements with $j$ of the properties, for $0 \le j \le 9$ . Then $$S_j = \binom{10}{j} \binom{10-j+6-1}{6}^2$$ By the Principle of Inclusion/Exclusion, the number of arrangements with none of the properties, i.e. with no empty urn, is $$\sum_{j=0}^9 (-1)^j S_j = 26250$$
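As a sanity check (a sketch, not part of the quoted solution), the inclusion/exclusion sum can be evaluated directly and cross-checked against an independent generating-function count:

```python
from math import comb

# Inclusion/exclusion: 6 white and 6 black identical balls into 10 distinct
# urns with no urn empty.
total = sum((-1)**j * comb(10, j) * comb(15 - j, 6)**2 for j in range(10))
print(total)  # 26250

# Independent check: one urn contributes x^w * y^b over all (w, b) != (0, 0);
# multiply 10 such factors (truncated to degree 6) and read off x^6 y^6.
urn = {(w, b): 1 for w in range(7) for b in range(7) if (w, b) != (0, 0)}
poly = {(0, 0): 1}
for _ in range(10):
    new = {}
    for (a, b), c in poly.items():
        for (w, v), d in urn.items():
            if a + w <= 6 and b + v <= 6:
                new[(a + w, b + v)] = new.get((a + w, b + v), 0) + c * d
    poly = new
print(poly[(6, 6)])  # 26250
```

Both counts agree with the closed-form value $26250$ .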
|
|combinatorics|
| 0
|
Show that $1 + \sqrt{2}$ is a unit in $\Bbb Z[\sqrt2]$
|
I'm a beginner to number theory, and in a text book, right after proving the fundamental theorem of arithmetic, the following problem is stated: Let $H_m$ be the subset of real numbers, which can be written in the form of $x + y\sqrt{m}$ , where $x$ and $y$ are integers and $m$ isn't a square number. Show that besides $\pm1$ , $1 + \sqrt{2}$ and $3 + 2\sqrt{2}$ are also units in $H_2$ . The book recommends defining divisibility, units and undecomposablity in the $H_m$ set first. How should I begin solving the problem? Any kind of help is welcome. (I'm not that familiar with the technicalities in English, I'm sorry.)
|
$$(1+\sqrt 2)(-1+\sqrt2) =-1+\sqrt2-\sqrt2 +2=-1+2=1.$$ $\therefore 1+\sqrt2 $ is a unit in $\Bbb Z[\sqrt2].$ The way I got it was: $$(1+\sqrt2) (1-\sqrt2) =-1.$$ So just multiply $$1-\sqrt2 $$ by $$-1.$$ Of course, $$(a+b)(a-b)=a^2-b^2,$$ and the other one's inverse is simply the "conjugate", $3-2\sqrt2. $ It's possible to define a "norm" on $\Bbb Z[\sqrt m] $ by $N(a+b\sqrt m) =a^2-mb^2$ and verify that it's multiplicative . As a result, $a+b\sqrt m$ is a unit exactly when $N(a+b\sqrt m) =\pm1.$
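Verifying multiplicativity of the norm is a direct expansion (a sketch): for $(a+b\sqrt m)(c+d\sqrt m) = (ac+mbd)+(ad+bc)\sqrt m$ , $$N\big((a+b\sqrt m)(c+d\sqrt m)\big) = (ac+mbd)^2 - m(ad+bc)^2 = a^2c^2 + m^2b^2d^2 - ma^2d^2 - mb^2c^2 = (a^2-mb^2)(c^2-md^2) = N(a+b\sqrt m)\,N(c+d\sqrt m),$$ where the cross terms $2mabcd$ cancel.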
|
|elementary-number-theory|
| 1
|