| title | question_body | answer_body | tags | accepted |
| string | string | string | string | int64 |
|---|---|---|---|---|
(r-1)-partition of a balanced complete r-partite graph
|
Let $G=K_{m,\ldots,m}$ be a complete $r$ -partite graph. How to show that for any $(r-1)$ -partition $V_1, \ldots, V_{r-1}$ of $V(G)$ , the number of edges within a part is at least $m^2$ (i.e., $\sum_{1\leq i\leq r-1}e_G(V_i)\geq m^2$ )? If we consider a $k$ -partition for certain $k\leq r-1$ , what can we get?
|
For reference, I will put the answer: Let $V_1, \ldots, V_{r-1}$ be a partition of the vertices of $K_{m, \ldots, m}$. Each $K_r$ has at least one edge inside some $V_i$. Because each edge belongs to exactly $m^{r-2}$ copies of $K_r$ and there are $m^r$ copies of $K_r$ in total, there are at least $\frac{m^r}{m^{r-2}} = m^2$ edges contained in the $V_i$. The bound is tight as $m$ gets big: let $I_1,\dots, I_r$ be the $r$ maximal independent sets of $G$, and take $V_i = I_i$ for $i=1,\dots, r-2$ and $V_{r-1} = I_{r-1}\cup I_r$. The edges contained in the $V_i$ are all in $V_{r-1}$, and there are $m^2$ of them in total. When $k\leq r-1$, given a $K_r$, the $k$-partition that minimizes the number of edges inside the $V_i$ spreads its vertices as equally as possible among the $V_i$. If we write $r = pk+q$ for the Euclidean division of $r$ by $k$, we get at least $(k-q)\binom{p}{2} + q\binom{p+1}{2}$ edges inside the $V_i$ per copy of $K_r$. By considering all the $K_r$, we get at least $m^2\left((k-q)\binom{p}{2} + q\binom{p+1}{2}\right)$ edges in total.
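The counting argument can be sanity-checked by brute force for tiny cases (a sketch; `min_internal_edges` is just an illustrative helper, not from the answer):

```python
from itertools import product

def min_internal_edges(m, r):
    """Minimum of sum_i e_G(V_i) over all (r-1)-part vertex partitions
    of the complete r-partite graph K_{m,...,m} (brute force)."""
    n = m * r
    cls = [v // m for v in range(n)]              # colour class of a vertex
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if cls[u] != cls[v]]                 # edges join distinct classes
    return min(sum(assign[u] == assign[v] for u, v in edges)
               for assign in product(range(r - 1), repeat=n))

print(min_internal_edges(1, 3))  # 1 = m^2
print(min_internal_edges(2, 3))  # 4 = m^2
```

Both values match the $m^2$ bound, which the construction above shows is attained.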
|
|combinatorics|
| 0
|
Compute $\int_{0}^{\infty}\frac{x\sin 2x}{9+x^{2}} \, dx$
|
$$\int_{0}^{\infty}\frac{x\sin 2x}{9+x^{2}} \, dx$$ Some rearranging eventually gives $$\int_{0}^{\infty}\frac{x\sin 2x}{9+x^{2}} \, dx = \frac{-i}{2}\int_{-\infty}^{\infty} \frac{xe^{2ix}}{9+x^{2}} \, dx$$ Consider $f(z) = \frac{ze^{2iz}}{9+z^{2}}$ and the semicircular contour $\gamma$ lying in the upper half of the plane: let $\gamma_{R}$ denote the circular part with radius $R$ and $\gamma_{L}$ denote the part lying on the real axis with length $2R$. Computing the residue at the only enclosed pole, $z = 3i$, we have that $$\oint_{\gamma}f(z) \, dz = 2\pi i\cdot\frac{3i\,e^{-6}}{6i} = \frac{i\pi}{e^{6}} $$ On the other hand, \begin{align*} \oint_{\gamma}f(z) \, dz &= \int_{\gamma_{R}}f(z) \, dz + \int_{\gamma_{L}} f(z) \, dz \\ &= \int_{\gamma_{R}} f(z) \, dz + \int_{-R}^{R} \frac{xe^{2ix}}{9+x^{2}} \, dx \end{align*} We may evaluate the first integral in the usual way by parameterizing the contour and taking $z = Re^{i\theta}$: \begin{align*} \int_{\gamma_{R}} f(z) \, dz = \int_{0}^{\pi} \frac{iR^{2}e^{2i\theta}\,e^{2iR\cos\theta}e^{-2R\sin\theta}}{9+R^{2}e^{2i\theta}} \, d\theta \end{align*} which tends to $0$ as $R\to\infty$, since $e^{-2R\sin\theta}$ decays for $0<\theta<\pi$ (Jordan's lemma). Letting $R\to\infty$ gives $\int_{-\infty}^{\infty} \frac{xe^{2ix}}{9+x^{2}} \, dx = \frac{i\pi}{e^{6}}$, so the original integral equals $\frac{-i}{2}\cdot\frac{i\pi}{e^{6}} = \frac{\pi}{2e^{6}}$.
|
It can also be fairly nice to use the Laplace transform this way. \begin{align} \int_{0}^{\infty}\frac{x\sin 2x}{9+x^{2}}\mathrm{d}x&=\int_0^{\infty}\mathcal{L}\left[\sin\left(2t\right)\right](x)\mathcal{L}^{-1}\left[\frac{t}{t^{2}+9}\right](x)\mathrm{d}x\\ &=\int_0^{\infty}\frac{2\cos(3x)}{x^{2}+4}\mathrm{d}x\\ &\overset{*}=\frac{\pi}{2e^6}. \end{align} where (*) can be evaluated as done by Marco Cantarini, or by other approaches such as in Evaluating $\int_0^\infty\frac{\cos(ax)}{x^2+1}dx$ without complex analysis or Fourier Transform?.
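As a numerical cross-check of $\frac{\pi}{2e^6}$, the oscillatory integral can be handed to QUADPACK's `QAWF` routine through `scipy.integrate.quad` (a sketch, not part of either answer):

```python
import numpy as np
from scipy.integrate import quad

# weight='sin', wvar=2 computes the Fourier integral
# ∫_0^∞ g(x) sin(2x) dx with g(x) = x/(9+x^2) via QUADPACK's QAWF.
val, _ = quad(lambda x: x / (9 + x**2), 0, np.inf, weight='sin', wvar=2)
exact = np.pi / (2 * np.e**6)
print(val, exact)   # both ≈ 0.003894
```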
|
|complex-analysis|contour-integration|
| 0
|
Prove $\int_0^1 (\sqrt[n]{1-x^n}-x)^2 dx=\frac{1}{3}$ for $n \gt 0$
|
I have found that the value of the integral below is always $1/3$ for all positive $n$ . $$\int_0^1 (\sqrt[n]{1-x^n}-x)^2 dx$$ Can anyone prove this for me? Thank you.
|
Put $x^n=y$; then $dx=\frac{1}{n} y^{\frac{1}{n}-1} dy$. So $$ I=\int_0^1 \left((1-x^n)^{\frac{1}{n}}-x\right)^2dx=\int_0^1 \left((1-y)^{\frac{1}{n}}-y^{\frac{1}{n}}\right)^2\frac{1}{n} y^{\frac{1}{n}-1} dy $$ So $$ I=\frac{1}{n}\int_0^1 \left((1-y)^{\frac{2}{n}}y^{\frac{1}{n}-1} -2(1-y)^{\frac{1}{n}}y^{\frac{2}{n}-1}+y^{\frac{3}{n}-1} \right) dy$$ then $$ I=\frac{1}{n} \left(\beta\left(\frac{2}{n}+1,\frac{1}{n}\right)-2\beta\left(\frac{1}{n}+1,\frac{2}{n}\right)+\frac{n}{3} \right) $$ now we have from the beta function $$ \beta\left(\frac{2}{n}+1,\frac{1}{n}\right)-2\beta\left(\frac{1}{n}+1,\frac{2}{n}\right)=0$$ So $$I=\frac{1}{3}$$ We can also get this result without using the beta function: we have $$ \int_0^1 (1-y)^{\frac{2}{n}}y^{\frac{1}{n}-1} dy =\int_0^1 y^{\frac{2}{n}}(1-y)^{\frac{1}{n}-1} dy $$ and now using integration by parts $$\int_0^1 (1-y)^{\frac{2}{n}}y^{\frac{1}{n}-1} dy=2\int_0^1 (1-y)^{\frac{1}{n}}y^{\frac{2}{n}-1}dy$$ therefore $$I=\frac{1}{n} \int_0^1 y^{\frac{3}{n}-1} dy=\frac{1}{n}\cdot\frac{n}{3}=\frac{1}{3}$$
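The constant value is easy to check numerically for a few arbitrary exponents (a quick `scipy` sketch):

```python
from scipy.integrate import quad

def I(n):
    # Numerical value of the integral from the question for a given n > 0.
    return quad(lambda x: ((1 - x**n)**(1/n) - x)**2, 0, 1)[0]

for n in (0.5, 1, 2, 3, 7):
    print(n, I(n))   # always ≈ 1/3
```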
|
|integration|
| 0
|
Determining condition of coplanarity
|
Determine the value of $\lambda$ such that the vectors $$5\vec{a}+6\vec{b}+7\vec{c},7\vec{a}+\lambda\vec{b}+9\vec{c},3\vec{a}+20\vec{b}+5\vec{c}$$ are coplanar, given that $\vec{a},\vec{b},\vec{c}$ are non-coplanar. For the vectors to be coplanar, there must exist scalars $x_1,x_2,x_3$, not all $0$, such that their linear combination is $0$. Then after rearranging, the condition we get is $$(5x_1+7x_2+3x_3)\vec{a}+(6x_1+\lambda x_2+20x_3)\vec{b}+(7x_1+9x_2+5x_3)\vec{c}=0$$ Since $\vec{a},\vec{b},\vec{c}$ are non-coplanar, each of the coefficients must be $0$. But here, we are getting $3$ equations for $4$ variables (the fourth one being $\lambda$). So, how can $\lambda$ be determined? Can I assign any one of $x_1,x_2,x_3$ any value I like? For example, can I make $x_1$ or $x_3$ equal to $0$ or $1$ or any other number? I am really confused.
|
You can indeed view this problem as presenting 3 equations with 4 variables. However, the assumption that you need as many equations as variables is false for several reasons. Example 1: Find $x, y \in \mathbb{R}$ such that $x^2 + y^2 = 0$. Here we have one equation with two variables, but it still has a unique solution: $x = y = 0$. The setting where the "as many equations as variables" rule of thumb shines is when the equations are linear. Example 2: Find $x, y \in \mathbb{R}$, not both $0$, such that $x+y = 0$ and $x-y = 0$. Here we have two equations and two variables, but solving the system leads to $x=y=0$, which is not a valid solution. The question is not to solve the system: having a unique solution to the equation system can prevent having a solution to the question. Example 3: Find $\lambda \in \mathbb{R}$ and $x, y \in \mathbb{R}$, not both $0$, such that $x + y = 0$ and $x + \lambda y = 0$. Here we have two equations with three variables, but the variables don't all play the same role: $\lambda$ is the unknown to determine, while $x,y$ merely need to exist. Subtracting the equations gives $(\lambda-1)y=0$, so a nontrivial solution exists exactly when $\lambda=1$ (take $x=-y\neq0$). In your problem you likewise eliminate $x_1,x_2,x_3$ and obtain a condition on $\lambda$ alone.
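Concretely, since $\vec a,\vec b,\vec c$ are linearly independent, coplanarity of the three given vectors is equivalent to the vanishing of the determinant of their coefficient matrix; a small `sympy` sketch of that criterion:

```python
import sympy as sp

lam = sp.symbols('lambda')
# Columns hold the coefficients of the three vectors in the basis (a, b, c).
M = sp.Matrix([[5, 7, 3],
               [6, lam, 20],
               [7, 9, 5]])
det = sp.expand(M.det())
print(det)                 # 4*lambda + 32
print(sp.solve(det, lam))  # [-8]
```

So a nontrivial $(x_1,x_2,x_3)$ exists exactly for the $\lambda$ that kills the determinant.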
|
|linear-algebra|geometry|vector-analysis|plane-geometry|
| 0
|
How to solve this using variation of parameters
|
I have managed to solve this ODE $$y''+4y=\cos(2x)$$ using comparison of coefficients, and I get: $$y(x) = C_1\cos(2x)+C_2\sin(2x)+\frac{1}{4}x\sin(2x)$$ Using variation of parameters I get: $$y(x) = C_1\cos(2x)+C_2\sin(2x)+\frac{1}{4}\cos(2x)\cos(4x)+\frac{1}{2}x\sin(2x)+\frac{1}{8}\sin(4x)\sin(2x)$$ However, trying to get my first solution from the second, I get stuck. I tried using trigonometric identities and the fact that $y_h$ can "swallow" solutions of the form $A\cos(2x)$ and $B\sin(2x)$ where $A,B$ are constants. What am I missing? Thanks
|
Using Cramer's rule I got: $$y(x) = C_1\cos(2x)+C_2\sin(2x)+\frac{1}{16}\cos(2x)\cos(4x)+\frac{1}{4}x\sin(2x)+\frac{1}{16}\sin(4x)\sin(2x)$$ which, since $\cos(4x)\cos(2x)+\sin(4x)\sin(2x)=\cos(2x)$ can be absorbed into $C_1\cos(2x)$, then gives: $$y(x) = C_1\cos(2x)+C_2\sin(2x)+\frac{1}{4}x\sin(2x)$$ Thanks!
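One can let `sympy` confirm that the simplified solution really solves the ODE (a sketch; `C1`, `C2` are arbitrary constants):

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
y = C1*sp.cos(2*x) + C2*sp.sin(2*x) + sp.Rational(1, 4)*x*sp.sin(2*x)

# Residual of y'' + 4y = cos(2x); it should vanish identically.
residual = sp.simplify(sp.diff(y, x, 2) + 4*y - sp.cos(2*x))
print(residual)  # 0
```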
|
|ordinary-differential-equations|
| 0
|
Continuity proof epsilon delta $\frac{x\sin(xy)}{\sqrt{x^2+y^2}}$
|
I'm trying to prove the continuity of $\frac{x\sin(xy)}{\sqrt{x^2+y^2}}$ at $(0,0)$. Is the following working correct, and is the way of writing my proof correct? Let $\delta = \sqrt{\epsilon}$. Then, $\left|\frac{x\sin(xy)}{\sqrt{x^2+y^2}}\right| \leq \frac{x^2|y|}{\sqrt{x^2+y^2}} \leq \frac{(x^2+y^2)\sqrt{x^2+y^2}}{\sqrt{x^2+y^2}} = (\sqrt{x^2+y^2})^2 < \delta^2 = \epsilon$. Thank you!
|
Let $f(x,y)=\frac{x\,\sin(xy)}{\sqrt{x^2+y^2}},\,\,(x,y)\in\mathbb{R}^2\setminus \left\{(0,0)\right\}.$ Let $\epsilon>0.$ As you already wrote, $$|f(x,y)-0|\le x^2+y^2. \quad (I)$$ So, if $\delta=\sqrt{\epsilon}$, then for all $(x,y)$ such that $||(x,y)-(0,0)||_{2}<\delta$, i.e. $x^2+y^2<\delta^2=\epsilon$, according to $(I)$ you get $$|f(x,y)-0|<\epsilon.$$ The above procedure shows that $$\lim_{(x,y)\to (0,0)}f(x,y)=0,$$ so the continuous extension of $f$ to $\mathbb{R}^2$ is $$g(x,y)=\begin{cases} f(x,y),\,\,(x,y)\neq (0,0)\\ 0,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(x,y)=(0,0)\end{cases}$$
|
|continuity|epsilon-delta|
| 0
|
Can I analytically integrate a high dimensional Normal distribution?
|
Let's say $X \sim \mathcal{N}(\mu, \Sigma)$ , where $\mu \in \mathbb{R}^{3}$ . If I have two half spaces: $H_{1} = \{x: a^{T}x \geq 0\}, H_{2} = \{x: b^{T}x \geq 0\}$ , is there a way I can compute the total probability mass in the area $H_{1} \cap H_{2}$ ?
|
Denote $a = \pmatrix{a_1\\a_2\\a_3}$ and $b = \pmatrix{b_1\\b_2\\b_3}$ and $$M:= \pmatrix{a&b}=\pmatrix{a_1&b_1\\a_2&b_2\\a_3&b_3}$$ Denote $Y = \pmatrix{y_1\\y_2} = M^T\cdot X \in \mathbb{R}^{2}$; then $Y\sim\mathcal{N}_2\left(M^T\mu,M^T\cdot\Sigma \cdot M \right)$, a bivariate normal distribution. We have $$X \in H_1 \cap H_2 \iff y_1 \ge 0 \text{ and } y_2 \ge 0$$ Then $$\color{red}{P=\Phi_2\left(\pmatrix{0\\0},\pmatrix{+\infty\\+\infty}; M^T\mu,M^T\cdot\Sigma \cdot M \right)}$$ where $\Phi_2\left(\mathbf{l},\mathbf{u}; \text{mean},\text{covariance}\right)$ is the cumulative distribution function of a bivariate normal distribution over the rectangle $[\mathbf{l},\mathbf{u}]$.
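A numerical sketch of this recipe (the concrete `mu`, `Sigma`, `a`, `b` below are made-up examples, not from the question), with a Monte Carlo cross-check:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Made-up example inputs: mean, covariance, and half-space normals.
mu = np.array([0.3, -0.2, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.5]])
a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 1.0, -1.0])

M = np.column_stack([a, b])        # the 3x2 matrix (a b)
mean_Y = M.T @ mu                  # Y = M^T X is bivariate normal
cov_Y = M.T @ Sigma @ M

# P(Y1 >= 0, Y2 >= 0) = P(-Y <= 0), where -Y ~ N(-mean_Y, cov_Y).
p = multivariate_normal(mean=-mean_Y, cov=cov_Y).cdf(np.zeros(2))

# Monte Carlo cross-check directly on X.
X = rng.multivariate_normal(mu, Sigma, size=200_000)
p_mc = np.mean((X @ a >= 0) & (X @ b >= 0))
print(p, p_mc)   # should agree to about two or three decimals
```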
|
|calculus|integration|analysis|statistics|
| 1
|
Let $k,j \in \mathbb N_{\geqslant 3}$ such that $k \neq j$. Is it true that $|2^{-k} -2^{-j}| \geqslant 8^{-k} + 8^{-j}$?
|
Question. Consider arbitrary distinct elements $k,j \in \mathbb N_{\geqslant 3}$, i.e., $k,j$ are two distinct integers greater than or equal to $3$. I am wondering if the inequality $$ |2^{-k} - 2^{-j} |\geqslant 8^{-k} + 8^{-j} $$ is satisfied. My attempt. For simplicity, I started by considering the case $j > k$. With this in mind, it is clear that $$ |2^{-k} - 2^{-j}| = 2^{-k} - 2^{-j}. $$ But now I don't know how to proceed. I have tried using the decomposition $8 = 2^3$, which yields the inequality $$ 2^{-k} - 2^{-j} \geqslant 2^{-3k} + 2^{-3j} $$ but I also don't know how to check if this is true. Does anyone have an idea on how to prove this? Thanks for any help in advance.
|
If $k<j$ (by symmetry this is no loss of generality), then $$2^{-k}-2^{-j}=2^{-k}(1-2^{k-j})\ge 2^{-k}(1-2^{-1})=2^{-k-1}$$ On the other hand, $$8^{-k}+8^{-j}\le 8^{-k}+8^{-k-1}=9\cdot8^{-k-1}$$ So it suffices to show that $2^{-k-1}> 9\cdot 8^{-k-1}$, which is equivalent to $4^{k+1}> 9.$ The latter holds if $k\ge 1.$
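The inequality can also be confirmed exactly on a range of exponents with rational arithmetic (a brute-force sketch):

```python
from fractions import Fraction

def holds(k, j):
    # Exact check of |2^-k - 2^-j| >= 8^-k + 8^-j with rationals.
    lhs = abs(Fraction(1, 2**k) - Fraction(1, 2**j))
    rhs = Fraction(1, 8**k) + Fraction(1, 8**j)
    return lhs >= rhs

assert all(holds(k, j) for k in range(3, 25)
           for j in range(3, 25) if k != j)
print("inequality holds exactly for all 3 <= k != j < 25")
```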
|
|real-analysis|calculus|inequality|solution-verification|
| 1
|
Inner product of signatures of piecewise linear paths
|
It is a well-known observation that, given two points $x_1,x_2 \in \mathbb{R}^d$ , the path signature associated to their linear interpolation is given by the tensor exponential. Precisely, if $\Delta x$ denotes the linear interpolation $x_1 \rightarrow x_2$ , then $$S(\Delta x) = \left(1 , \Delta x, \frac{\Delta x^{\otimes 2}}{2!}, \frac{\Delta x^{\otimes 3}}{3!}, ... \right) =: \exp_\otimes(\Delta x) \in T((\mathbb{R}^d)),$$ where $S(\cdot)$ denotes the signature map and $T((\mathbb{R}^d))$ is the extended tensor algebra. Actually, the signature takes values in a subset $T^1$ of $T((\mathbb{R}^d))$ which can be endowed with an inner product, and thus becomes a Hilbert space if one takes the appropriate completions. I refer to this work for a proper definition, but I assume most readers interested in this post are familiar with this. Now, instead of considering just a line segment, let us consider a piecewise linear path. Specifically, given a finite number of points $x_1,...,x_n$ in $\mathbb{R}^d$.
|
Firstly, you need to define an inner product on $T^1(V)$. As far as I am aware, this is defined by summing all of the component-wise inner products for each tensor power; but this is only guaranteed to converge on the range of the signature thanks to the factorial decay. Ilya's work that you link doesn't define the inner product at all (ctrl+F for "inner, dot, scalar" yielded nothing). I will assume the definition I have just mentioned, since we are working with signatures. To get a positive answer to your question, something that would help is if the algebra product $\otimes$ interacted well with the scalar product, in the sense $$\langle a\otimes b,\, c\otimes d\rangle = \langle a, c\rangle\,\langle b, d\rangle.$$ Unfortunately, this doesn't even work at the first level: $$\langle a\otimes b,\, c\otimes d\rangle = 1+\sum_{i=0}^d (a_i^1+b_i^1)(c_i^1+d_i^1)+\ldots$$ whereas $$\langle a, c\rangle\,\langle b, d\rangle = \left(1+\sum_{i=0}^d a_i^1 c_i^1 +\ldots\right)\left(1+\sum_{i=0}^d b_i^1 d_i^1 +\ldots\right),$$ so there's no way of "retrieving" the term $\sum_{i=0}^d a_i^1 d_i^1$ appearing on the LHS from the RHS. This is just illustrative. Now, to answer your question suc
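Under the level-wise inner product assumed above, two one-segment signatures pair as $\langle \exp_\otimes(\Delta x), \exp_\otimes(\Delta y)\rangle = \sum_k \langle\Delta x,\Delta y\rangle^k/(k!)^2$, which equals the modified Bessel function $I_0\!\left(2\sqrt{\langle\Delta x,\Delta y\rangle}\right)$ when $\langle\Delta x,\Delta y\rangle\ge 0$. A numerical sketch (the increments are arbitrary examples):

```python
import numpy as np
from math import factorial
from scipy.special import iv

dx = np.array([0.5, 1.0, -0.3])   # arbitrary example increments
dy = np.array([0.2, 0.7, 0.4])
s = float(dx @ dy)                # <dx, dy> = 0.68 > 0 here

# Level k of the pairing contributes <dx,dy>^k / (k!)^2; truncate the sum.
series = sum(s**k / factorial(k)**2 for k in range(30))
closed = iv(0, 2.0 * np.sqrt(s))  # I_0(2*sqrt(s))
print(series, closed)             # the two agree
```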
|
|abstract-algebra|tensor-products|rough-path-theory|
| 0
|
Proving function is differentiable at $(0,0)$ using total derivative definition
|
I am looking at the function: $$\frac{x\sin(xy)}{\sqrt{x^2+y^2}}$$ at $(0,0)$ , where the function at $(0,0)$ is defined to be $0$ . I understand that I need to check that: $$\lim_{{h \to 0}} \frac{{f(\mathbf{a} + \mathbf{h}) - f(\mathbf{a}) - A\mathbf{h}}}{{\|\mathbf{h}\|}} = \lim_{{h \to 0}} \frac{{f(\mathbf{h})}}{{\|\mathbf{h}\|}} = 0 \ .$$ Since $D_1f$ and $D_2f$ are equal to zero, I believe that A is the matrix $(0, 0)$ . I've tried going through different paths to see if I get a different limit when doing: $$\lim_{{h \to 0}} \frac{{f(\mathbf{h})}}{{\|\mathbf{h}\|}} \ ,$$ and all of them give me $0$ . Therefore, I believe that function is differentiable at $(0,0)$ . But how can I prove it more soundly? Thank you!
|
You can try using polar coordinates. The limit $h\to 0$ is equivalent to $r\to 0$. Then let $A=(a\ b)$, $x=r\cos\theta$, $y=r\sin\theta$: $$\lim_{r\to 0}\left(\frac{\cos\theta\sin(r^2\cos\theta\sin\theta)}{r}-a\cos\theta-b\sin\theta\right)=0$$ For the first term, use $$\lim_{r\to 0}\frac{\sin(r^2\cos\theta\sin\theta)}{r^2\cos\theta\sin\theta}=1,$$ so it behaves like $r\cos^2\theta\sin\theta\to 0$. If you want your limit to be $0$ independent of $\theta$, then $a=b=0$. And you get your intuitive solution.
|
|multivariable-calculus|derivatives|
| 0
|
Are there further estimation techniques for this convergent series?
|
I was solving the following problem: Let $\{a_n\}$ be a sequence where $a_1 = 2$, and $a_{n+1}$ is defined by $(n+1)a_{n+1}^2= na_n^2+a_n$. I aim to show that $\sum_{i=2}^{\infty} \frac{a_i^2}{i^2} . Initially, I found a relatively straightforward solution by observing that $a_n > a_{n+1} > 1$ and $\frac{1}{n^2} hold. However, I attempted to find a better estimation for the series but could only derive the inequality: $$ \frac{a_n^2}{n^2} which holds for all $n>2$, and since $\frac{a_n^2}{n}\to 0$ as $n\to\infty$, I obtained: $$ \sum_{i=2}^{\infty} \frac{a_i^2}{i^2}=\frac{3}{4} + \sum_{i=3}^{\infty}\frac{a_i^2}{i^2} I'm seeking further estimation techniques or a precise evaluation of the series. Any assistance would be greatly appreciated. P.S. My code shows that it approaches around $1.5791$ ($n=1000000$)
|
As your sequence is decreasing, you could try the following to get an arbitrarily good estimate: $$\sum\limits_{i = 2}^{\infty} \frac{a_i^2}{i^2} = \sum\limits_{i = 2}^n \frac{a_i^2}{i^2} + \sum\limits_{i = n+1}^{\infty} \frac{a_i^2}{i^2} \le \sum\limits_{i = 2}^n \frac{a_i^2}{i^2} + a_{n+1}^2 \sum\limits_{i = n+1}^{\infty} \frac{1}{i^2}.$$ Of course, you could do this by hand to get a precise algebraical expression for each $n$, but it probably makes more sense to compute it numerically. As you already ran some code anyway, this should be easy to implement.
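A numerical sketch of the partial sums (reproducing the $\approx 1.5791$ from the question's P.S.):

```python
import math

def partial_sum(N):
    """Sum a_i^2/i^2 for i = 2..N, with a_1 = 2 and
    (n+1) a_{n+1}^2 = n a_n^2 + a_n."""
    a, s = 2.0, 0.0
    for n in range(1, N):
        a = math.sqrt((n * a * a + a) / (n + 1))   # a is now a_{n+1}
        s += a * a / (n + 1) ** 2
    return s

print(partial_sum(1_000_000))   # ≈ 1.5791, matching the P.S.
```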
|
|sequences-and-series|limits|
| 0
|
Prove $\int_0^1 (\sqrt[n]{1-x^n}-x)^2 dx=\frac{1}{3}$ for $n \gt 0$
|
I have found that the value of the integral below is always $1/3$ for all positive $n$ . $$\int_0^1 (\sqrt[n]{1-x^n}-x)^2 dx$$ Can anyone prove this for me? Thank you.
|
Explore the symmetry to integrate the general case \begin{align} &\int_0^1 (\sqrt[n]{1-x^n}-x)^{2k} \overset{1-x^n \to x^n}{dx}\\ = &\int_0^1 (x-\sqrt[n]{1-x^n})^{2k} d(-\sqrt[n]{1-x^n})\\ =&\ \frac12 \int_0^1 (x-\sqrt[n]{1-x^n})^{2k} d(x-\sqrt[n]{1-x^n}) =\frac1{2k+1}\\ \end{align}
|
|integration|
| 1
|
Obtaining the three torus via Dehn surgery
|
It is a well-known theorem from the '60s (Lickorish–Wallace) that any closed orientable three-dimensional smooth manifold can be obtained by performing a sequence of integral Dehn surgeries along knots in $\mathbb{S}^3$. The most common examples found in any book are $\mathbb{S}^3, \mathbb{S}^2\times \mathbb{S}^1$ and the lens spaces $L(p,q)$. Curiously, I can't find how to get the three-torus $\mathbb{T}^3 = \mathbb{S}^1\times\mathbb{S}^1 \times \mathbb{S}^1$, which is a quite ubiquitous $3$-manifold in geometry. How can I obtain $\mathbb{T}^3$ via (rational and integral) Dehn surgery on $\mathbb{S}^3$?
|
Maybe the following explanation, following Ethan Dlugie, gives a visual picture. To show that $0$-surgery on the three components of the Borromean link yields $S^1\times S^1\times S^1$, what needs to be shown is that $0$-surgery on the two-component link in $S^1\times D^2$ yields $(T^2-int(D^2))\times S^1$, such that the boundary of each slice $(T^2-int(D^2))\times\{ t\}$ is a meridian on the boundary of the solid torus. Construct the sections as follows: select a meridional disk that intersects the link at two points. Consider first the case when the meridional disk doesn't intersect the other component. Assume a tubular neighborhood $N(L)$ of the link is fixed. At the beginning, the section is a surgery on the meridional disk, disjoint from $N(L)$. As the meridional disk moves to the right, the radius of the tube becomes smaller; when it touches $N(L)$, they intersect in an annulus. When the section moves further to the right, it intersects $N(L)$ in an annulus, till the tube
|
|differential-topology|geometric-topology|low-dimensional-topology|surgery-theory|
| 1
|
Property of $a_{n+1} = a_n - \frac{1}{a_n}$
|
For a sequence $a_n$ defined recursively by $a_{n+1} = a_n - \frac{1}{a_n}$, $a_0 = k >0$: prove that if $n$ is the first index such that $a_n \leq 0$, then $n \in O(k^2)$. I ran a computer simulation, and it seems true. Moreover, this question appears similar to: Closed form for the sequence defined by $a_0=1$ and $a_{n+1} = a_n + a_n^{-1}$. However, I have no idea how to prove it. Do you have any ideas?
|
Show the following for $a_0 = k$. If you're stuck, explain what you've tried. $a_{k-1} > k-1$, $a_{k-1 + k-2} > k-2$, $a_{k-1 + k-2 + k-3 } > k-3$, $\ldots$, $a_{k-1 + k-2 + \ldots + 1 } > 0$. Similarly, $a_k < k-1$, $a_{k + k-1} < k-2$, $a_{k + k-1 + k-2 } < k-3$, $\ldots$, $a_{k + k-1 + k-2 + \ldots + 1 } < 0$. Hence $\frac{k^2 - k}{2} < n \le \frac{k^2+k}{2}$. Hence, the result follows.
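The two chains of bounds can be checked numerically (a sketch; floating point is accurate enough here for small $k$):

```python
def first_nonpositive(k):
    """Iterate a_{n+1} = a_n - 1/a_n from a_0 = k until a_n <= 0."""
    a, n = float(k), 0
    while a > 0:
        a -= 1.0 / a
        n += 1
    return n

for k in range(2, 40):
    n = first_nonpositive(k)
    assert (k*k - k) // 2 < n <= (k*k + k) // 2, (k, n)
print("(k^2-k)/2 < n <= (k^2+k)/2 holds for 2 <= k < 40")
```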
|
|sequences-and-series|recurrence-relations|closed-form|
| 1
|
Expectation of a sum of (a random number of) random variables.
|
I'm wondering if the following bound is true: $$E[X_1 + X_2 + \cdots + X_N]\leq E[N]\cdot E[\max\{X_1,\ldots,X_N\}]$$ The variables $N,X_1,\ldots,X_N$ are not independent. It seems right, but I'm not sure how to approach it. Edit: Kurt G. pointed out how this is not generally true. An additional assumption I did not mention is the fact that $N$ does not affect the total sum. More specifically, for any two possible values $\alpha, \beta $ of $N$ , it is true that $E[X_1+\cdots+X_N | N=\alpha] = E[X_1+\cdots+X_N|N=\beta]$ .
|
The inequality is generally false: Let $X_i$ be $X$ for all $i$ where $X=1+P$ and $P$ is Poisson with parameter $\lambda=1\,.$ Then $X$ takes values in $\mathbb N$ with mean two and variance one and $$ X_1+X_2+\dots+X_N=N\,X=N\,\max(X_1,X_2,\dots,X_N)\,. $$ Setting $N=X$ gives $$ \mathbb E[N\,X]=\operatorname{Cov}[N,X]+\mathbb E[N]\mathbb E[X]=\operatorname{Var}[X]+4=5\,. $$ The LHS of your inequality is therefore five. The RHS however is $$ \mathbb E[N]\mathbb E[X]=4\,. $$
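A Monte Carlo sketch of this counterexample:

```python
import numpy as np

rng = np.random.default_rng(1)
X = 1.0 + rng.poisson(1.0, size=1_000_000)   # X = 1 + Poisson(1); set N = X

lhs = np.mean(X**2)      # E[X_1 + ... + X_N] = E[N X] = E[X^2] = 1 + 4 = 5
rhs = np.mean(X)**2      # E[N] E[max X_i] = E[X]^2 = 4
print(lhs, rhs)          # ≈ 5 vs ≈ 4: the proposed inequality fails
```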
|
|probability-theory|
| 0
|
If $a$ and $b$ are coprime in a Euclidean domain, then $a^n$ and $b$ are coprime.
|
Let $a,b\in\mathbb{E}$ where $\mathbb{E}$ is a Euclidean domain. Prove that if $a$ and $b$ are coprime, then $a^n$ and $b$ are coprime for all $n\in\mathbb{N}$. I think the proof is by induction on $n$, and that's how I started it, but any suggestions are appreciated.
|
Denote $Ra$, the principal ideal generated by $a$, by $(a)$. Let us recall the following statement (some textbooks state it as part of the CRT (Chinese Remainder Theorem)): Let $R$ be a commutative ring with $1$. If ideals $I,J$ are coprime ($A,B$ are coprime ideals provided $A+B=R$), then so are $I^n,J^m$ for all $m,n\in \mathbb Z_+$. (Hint for the proof: If we have proved ${\color {skyblue} I}, {\color{violet}{J^m}}$ are coprime, then using this statement again implies so are ${\color {skyblue} I}^n, {\color{violet}{J^m}}$.) By Bezout's identity, two elements in a PID are coprime iff their corresponding principal ideals are coprime (and every Euclidean domain is a PID). $a,b$ are coprime $\implies$ the ideals $(a),(b)$ are coprime $\implies$ $(a)^n=(a^n),(b)$ are coprime ideals $\implies$ $a^n,b$ are coprime.
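In $\mathbb Z$, the most familiar Euclidean domain, the statement is easy to spot-check (a brute-force sketch):

```python
from math import gcd

# In the Euclidean domain Z: gcd(a, b) = 1 implies gcd(a^n, b) = 1.
for a in range(1, 40):
    for b in range(1, 40):
        if gcd(a, b) == 1:
            assert all(gcd(a**n, b) == 1 for n in range(1, 6))
print("verified for 1 <= a, b < 40 and n <= 5")
```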
|
|abstract-algebra|ring-theory|gcd-and-lcm|euclidean-domain|
| 0
|
Why is volume of sphere by disc method and its surface area by ring method depend differently on arc?
|
The question arose while trying to derive the moment of inertia of a solid vs. a hollow sphere. In calculating the surface area of a sphere, the integral used is $\int{2\pi r\times Rd\theta}$. In calculating the volume of a sphere, the integral is $\int{\pi r^2 dx}$. After reading some related questions (volume calculation of sphere not working) on this site, I still have a few questions. In short: why does the area of a cylinder with negligible bulge, $2\pi r\times \text{arc length of bulge}$, ignore the radius variation along the bulge, while the volume of the cylinder is unaffected? We are stacking rings along the height of the cylinder, not along the arc of the bulge. To stack along the latter would mean the radius of the rings changes. Since the volume of a disc element can be treated as the area of many concentric ring elements, and the area of each ring element depends on $Rd\theta$, the volume should also depend on $Rd\theta$. In long: why is the error in the volume of a cylinder (disk) with bulging edges negligible, while the error in the area is significant? If the ring
|
I think the easiest visualization is in 2D. Consider approximating the area of a triangle: the red (staircase) approximation is better than the black one. But the approximation of the length of the upper line does not get better if we take smaller steps; the error is always the same for the length, while it gets better and better for the area.
|
|calculus|integration|definite-integrals|
| 0
|
Absolute value of complex numbers $|a+bi|$
|
I didn't understand why the absolute value of $(a+bi)$ is equal to $\sqrt{a^2+b^2}$ and not $\sqrt{(a+bi)^2}$, like $|x|=\sqrt{x^2}$. If $|x|=\sqrt{x^2}$ is right and we set $x = a+bi$, it should be $\sqrt{(a+bi)^2}$. And why is this proof wrong: $z=a+bi$, $|z|^2=z^2$, $|z|^2=(a+bi)^2$, $|z|=\sqrt{(a+bi)^2}$? Note: I don't want the "Cartesian representation of a complex number"; I don't see it as a proof. So I wrongly generalized the equation for the absolute value of a real number. My generalization: $|x|=\sqrt{x^2}$ (only right for real numbers). Right definition (by my understanding so far): $|a_1x_1+a_2x_2+\ldots+a_nx_n| = \sqrt{\sum_k{(a_kx_k)^2}}$
|
For a complex number $z$, we don't speak about the "absolute value of $z$" but about the modulus of $z$, denoted by $|z|$. If the algebraic form of $z$ is $a + ib$, then by definition you set $|z| = \sqrt{a^2 + b^2}$. Note that this definition extends the absolute value on $\mathbb R$. Also, if $z = i$, how do you define $\sqrt{i^2}$? More generally, how do you define the square root of a complex number? How can you distinguish the two roots?
|
|complex-numbers|absolute-value|
| 1
|
Is subset relation preserved under limit for Hausdorff metric?
|
Let $X$ be a metric space. I consider elements in $Y=2^X\setminus \emptyset$ and use the Hausdorff metric for $Y$ . Suppose that $A_n \subseteq B_n$ for $A_n,B_n \in Y$ and $A_n \rightarrow A$ and $B_n \rightarrow B$ , where $A,B \in Y$ as well. Is it true that $A \subseteq B$ ?
|
The answer is pretty close to yes. As Joriki points out, there are some issues involving closure, but what we can say is this: If $A_n \to A$ and $B_n \to B$ under the Hausdorff (pseudo)metric on $2^X \setminus \{\emptyset\}$ and $A_n \subseteq B_n$ for all $n$, then $\overline{A} \subseteq \overline{B}$. Suppose that $\overline{A} \not\subseteq \overline{B}$. Then there exists some $x \in \overline{A} \setminus \overline{B}$. Since $x \in X \setminus \overline{B}$, which is open, there exists an open ball around $x$ that is disjoint from $\overline{B}$ and hence $B$. Moreover, in this ball, there must exist some point in $A$. Without loss of generality, by replacing $x$ with this new point, possibly reducing the radius, let us suppose that $x \in A$, but $r > 0$ is such that $B(x; r) \cap B = \emptyset$. Now, since $A_n \to A$, we know that $$\sup_{a \in A} d(a, A_n) \to 0$$ as $n \to \infty$. So, some $M$ exists such that $$n \ge M \implies \sup_{a \in A} d(a, A_n) < \frac{r}{2}.$$ We can thus pick $a_n \in A_n$ with $d(x, a_n) < \frac{r}{2}$. Since $A_n \subseteq B_n$ and $B_n \to B$, for $n$ large enough every point of $B_n$ is within $\frac{r}{2}$ of $B$; in particular $d(a_n, B) < \frac{r}{2}$, so some point of $B$ lies in $B(x; r)$, a contradiction.
|
|real-analysis|general-topology|metric-spaces|order-theory|hausdorff-distance|
| 1
|
Is the function $(x^2+y^2)\sin{(\frac{1}{x^2+y^2})}$ differentiable at the point $(0,0)$?
|
EDIT: The question specifies that $f(0,0)=0$ in a piecewise definition. For the first few parts of the question, I have used the definition of partial derivatives to show that $\frac{\partial{f}}{\partial{x}}$ and $\frac{\partial{f}}{\partial{y}}$ both exist at the point $(0,0)$ and are equal to $0$. I have also found the partial derivatives for $f$ at points away from $(0,0)$ using normal differentiation rules like the product, quotient and chain rules, and have shown that $\frac{\partial{f}}{\partial{x}}$ is not continuous at $(0,0)$ by showing its limit diverges as we approach $(0,0)$. Have I made a mistake so far in my reasoning and working? If not, I would like to take on the last part of this question, which asks us to determine if $f$ is differentiable at $(0,0)$. For this last part, I know two theorems: 1: "If $f$ is differentiable at some point, then all of its partial derivatives exist at that point and its derivative is the Jacobian matrix of the function". 2: "If all of the partial derivatives exist and are continuous at a point, then $f$ is differentiable at that point."
|
Since you already showed that $\frac{\partial{f}}{\partial{x}}$ and $\frac{\partial{f}}{\partial{y}}$ both exist at the point $(0,0)$ and are equal to $0$ , your theorem 1 indicates that if $f$ is differentiable at this point, its differential is necessarily the zero map. Therefore, by definition of the differential, $f$ is differentiable at $(0,0)$ if and only if $$\lim_{(x,y)\to(0,0)}\frac{|f(x,y)-f(0,0)-0x-0y|}{\|(x,y)\|}=0,$$ i.e. $$\lim_{r\to0}\frac{r^2\sin(1/r^2)}r=0,$$ which is indeed the case since $|\sin|\le1$ .
|
|multivariable-calculus|derivatives|continuity|partial-derivative|
| 0
|
Absolute value of complex numbers $|a+bi|$
|
I didn't understand why the absolute value of $(a+bi)$ is equal to $\sqrt{a^2+b^2}$ and not $\sqrt{(a+bi)^2}$, like $|x|=\sqrt{x^2}$. If $|x|=\sqrt{x^2}$ is right and we set $x = a+bi$, it should be $\sqrt{(a+bi)^2}$. And why is this proof wrong: $z=a+bi$, $|z|^2=z^2$, $|z|^2=(a+bi)^2$, $|z|=\sqrt{(a+bi)^2}$? Note: I don't want the "Cartesian representation of a complex number"; I don't see it as a proof. So I wrongly generalized the equation for the absolute value of a real number. My generalization: $|x|=\sqrt{x^2}$ (only right for real numbers). Right definition (by my understanding so far): $|a_1x_1+a_2x_2+\ldots+a_nx_n| = \sqrt{\sum_k{(a_kx_k)^2}}$
|
Start from the definition of the absolute square $|z|^{2}$ of a complex number $z = a + i\,b$, keeping the definition of the complex conjugate in mind, i.e., $z^{\ast} = a - i\,b$. Then \begin{equation} |z|^{2} = z z^{\ast} = \left(a + i\,b\right)\left(a + i\,b\right)^{\ast} = \left(a + i\,b\right)\left(a - i\,b\right) = a^{2} + b^{2}\,. \end{equation} Obviously, the absolute value is then given by \begin{equation} |z| = \sqrt{a^{2} + b^{2}}\,. \end{equation}
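In Python's built-in complex arithmetic this is a one-liner to see (a tiny sketch): $z\bar z$ is real and non-negative, while $z^2$ is genuinely complex, so $\sqrt{z^2}$ cannot play the role of $|z|$.

```python
z = 3 + 4j
# |z|^2 = z * conj(z) = a^2 + b^2 is real and non-negative,
# whereas z^2 is a genuinely complex number here.
print(z * z.conjugate())   # (25+0j)
print(abs(z))              # 5.0
print(z * z)               # (-7+24j), which is not |z|^2
```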
|
|complex-numbers|absolute-value|
| 0
|
Real analysis - applying Darboux's theorem
|
I've been given this exercise: Given $g:\mathbb{R}\longrightarrow\mathbb{R}$ is differentiable, and $0\leq g'(x)\leq 1$ for all $x\in \mathbb{R}$ , prove that there exists some point $0\leq c\leq 1$ such that $g'(c)=3c^2$ Now, here's what I tried: I want to apply Darboux's theorem, so I want to find a suitable $c$ such that $g'(0)\leq 3c^2\leq g'(1)$ (assuming without loss of generality that $g'(0)\leq g'(1)$ ). Solving for $c$ I get $\sqrt{\frac{g'(0)}{3}}\leq c\leq \sqrt{\frac{g'(1)}{3}}$ . Now applying Darboux's theorem, since $g'(0)\leq 3c^2\leq g'(1)$ , there exists some $d\in(0,1)$ such that $g'(d) = 3c^2$ . And this is where I got stuck. How can I guarantee that $d = c$ ? Much appreciated.
|
Define $h(x)=g'(x)-3x^2$. As $h(x)=(g(x)-x^3)'$, it satisfies Darboux's theorem (the intermediate value property of derivatives). But $h(0)=g'(0)\geq0$ and $h(1)=g'(1)-3\leq 1-3<0$, so there exists some $c\in[0,1)$ with $h(c)=0$ (either $h(0)=0$, so $c=0$ works, or Darboux's theorem gives $c\in(0,1)$), that is, $g'(c)=3c^2$.
|
|real-analysis|
| 1
|
Absolute value of complex numbers $|a+bi|$
|
I didn't understand why the absolute value of $(a+bi)$ is equal to $\sqrt{a^2+b^2}$ and not $\sqrt{(a+bi)^2}$, like $|x|=\sqrt{x^2}$. If $|x|=\sqrt{x^2}$ is right and we set $x = a+bi$, it should be $\sqrt{(a+bi)^2}$. And why is this proof wrong: $z=a+bi$, $|z|^2=z^2$, $|z|^2=(a+bi)^2$, $|z|=\sqrt{(a+bi)^2}$? Note: I don't want the "Cartesian representation of a complex number"; I don't see it as a proof. So I wrongly generalized the equation for the absolute value of a real number. My generalization: $|x|=\sqrt{x^2}$ (only right for real numbers). Right definition (by my understanding so far): $|a_1x_1+a_2x_2+\ldots+a_nx_n| = \sqrt{\sum_k{(a_kx_k)^2}}$
|
Why should $|x|=\sqrt{x^2}$? That's a consequence of the definition, not the definition itself. If $x\in \mathbb R$ then $x = x + 0i$, with $\operatorname{Re}(x) = x$ and $\operatorname{Im}(x) = 0$, and $|x| = \sqrt{\operatorname{Re}(x)^2 + \operatorname{Im}(x)^2} = \sqrt{x^2 + 0^2} = \sqrt{x^2}$. The idea is that $|z|$ is a norm, which can be thought of as "the distance from the origin". It can have a more formal and precise definition, but that's the "organic" definition. Some of the fundamental aspects of a norm/distance are: it is a real, non-negative value, and only the origin has zero norm. For real numbers, a positive number is its own distance from $0$: $7$ is $7$ units from $0$ and $\pi$ is $\pi$ units from $0$, so if $x > 0$ then $|x| = x$. Negative numbers are at the distance given by their magnitude, but in the opposite direction: $-19$ is $19$ units away from $0$, in the negative direction. And $0$ is $0$ distance from $0$. So we can describe $|x|$ for $x\in \mathbb R$ as $|x| =\begin{cases}x&\text{if } x\ge 0\\ -x&\text{if } x<0.\end{cases}$
|
|complex-numbers|absolute-value|
| 0
|
Is subset relation preserved under limit for Hausdorff metric?
|
Let $X$ be a metric space. I consider elements in $Y=2^X\setminus \emptyset$ and use the Hausdorff metric for $Y$ . Suppose that $A_n \subseteq B_n$ for $A_n,B_n \in Y$ and $A_n \rightarrow A$ and $B_n \rightarrow B$ , where $A,B \in Y$ as well. Is it true that $A \subseteq B$ ?
|
As stated, the answer to your question is negative. The reason is that the Hausdorff pseudo-distance on $Y=2^X\setminus \{\emptyset\}$ defines a non-Hausdorff topology on $Y$. A simple example is given by $X=[0,1]$ with the standard metric. Consider the constant sequences $B_n=B={\mathbb Q}\cap [0,1]$ and $A_n:= B_n$. Then $B_n\to B$, while also $A_n\to X$ as $n\to\infty$ (limits are not unique here). Such pathological examples explain why one usually restricts the Hausdorff distance to closed (and bounded) subsets.
|
|real-analysis|general-topology|metric-spaces|order-theory|hausdorff-distance|
| 0
|
Prove: $\frac{1}{\sqrt{a+b^2}}+\frac{1}{\sqrt{b+c^2}}+\frac{1}{\sqrt{c+a^2}} \geq \frac{2\sqrt{3}+9}{6}$ for $ab+bc+ca=2$
|
Problem: Let $a,b,c$ be non-negative reals such that $ab+bc+ca=2$. Prove: $$\frac{1}{\sqrt{a+b^2}}+\frac{1}{\sqrt{b+c^2}}+\frac{1}{\sqrt{c+a^2}} \geq \frac{2\sqrt{3}+9}{6}$$ Equality holds iff $(a,b,c) =(2,1,0)$. Currently, I haven't had any ideas for this problem. I'd really appreciate it if you could share some ideas for this problem, even if they aren't nice. Thank you very much!
|
Some thoughts. WLOG, assume that $c = \min(a, b, c)$ . By AM-GM, we have $$\frac{1}{\sqrt{a+b^2}} = \frac{2\sqrt{3}}{2\sqrt{(a + b^2) \cdot 3}} \ge \frac{2\sqrt{3}}{a + b^2 + 3},$$ and \begin{align*} \frac{1}{\sqrt{b+c^2}}+\frac{1}{\sqrt{c+a^2}} &= \sqrt{\frac{1}{b+c^2}+\frac{1}{c+a^2} + \frac{8}{2\sqrt{(b+c^2)(c+a^2)\cdot 4}}}\\ &\ge \sqrt{\frac{1}{b+c^2}+\frac{1}{c+a^2} + \frac{8}{(b+c^2)(c+a^2) + 4}}. \end{align*} It suffices to prove that $$\frac{2\sqrt{3}}{a + b^2 + 3} + \sqrt{\frac{1}{b+c^2}+\frac{1}{c+a^2} + \frac{8}{(b+c^2)(c+a^2) + 4}} \ge \frac{2\sqrt{3}+9}{6}. \tag{1}$$ Let $$Q := \frac{1}{b+c^2}+\frac{1}{c+a^2} + \frac{8}{(b+c^2)(c+a^2) + 4}.$$ By AM-GM, we have $$\sqrt{Q} = \frac32 \sqrt{\frac49 Q} = \frac32 \cdot \frac{2\cdot \frac49 Q}{2\sqrt{\frac49 Q \cdot 1}} \ge \frac32 \cdot \frac{2 \cdot \frac49 Q}{\frac49 Q + 1}. \tag{2}$$ From (1) and (2), it suffices to prove that $$\frac{2\sqrt{3}}{a + b^2 + 3} + \frac32 \cdot \frac{2 \cdot \frac49 Q}{\frac49 Q + 1} \ge \frac{2\sqrt{3}+9}{6}. \tag{3}$$
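Numerically, the claimed bound and equality case can be spot-checked by sampling the constraint surface (a sketch; the sampling scheme below, drawing $a,b$ and solving for $c$, is just one convenient parametrization):

```python
import numpy as np

rng = np.random.default_rng(2)
bound = (2*np.sqrt(3) + 9) / 6

def F(a, b, c):
    return 1/np.sqrt(a + b**2) + 1/np.sqrt(b + c**2) + 1/np.sqrt(c + a**2)

print(F(2.0, 1.0, 0.0), bound)   # equal: the claimed equality case

# Sample the constraint ab + bc + ca = 2 by drawing a, b and solving for c.
a, b = rng.uniform(0.01, 3, size=(2, 100_000))
mask = a * b < 2
c = (2 - a[mask]*b[mask]) / (a[mask] + b[mask])   # c >= 0 on the mask
worst = F(a[mask], b[mask], c).min()
print(worst)                     # stays above the bound
```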
|
|inequality|
| 1
|
Understanding how matrix transformations operate across subspaces (e.g. rowspace, columnspace)
|
My question originates from a short proof in Strang's LA book that shows when $A\pmb{x}=\pmb{b}$ every vector $\pmb{x_r}$ in the rowspace of $A$ projects uniquely to a single vector $\pmb{b_c}$ in the column space of $A$ . The proof is discussed in this thread . On that thread, many of the answers contradict each other (for example, some say $A$ must be full rank, others not), and the case where $\pmb{x}$ is in neither the rowspace nor the nullspace isn't discussed. As a quick aside, I believe that proof only shows that the transformation $T_A|_{C(A^T)} : \pmb{x_r} \mapsto A\pmb{x_r}$ is an injective function, but not that $\forall \pmb{b} \in C(A) \ \exists \ \pmb{x_r} \in C(A^T) : T_A(\pmb{x_r}) = \pmb{b}$ , i.e. it may or may not be bijective (any further detail on this would be of great interest; I would love to know how to prove it). I would like to try to fully characterise all possibilities here, and to get advice on whether this thought process is correct and clears up the confusion in the other thread
|
Let $T|_{R(A)}$ denote the function $T_A$ restricted to $R(A)$ . We show that $T|_{R(A)}$ is injective. Since the orthogonal complement of $R(A)$ is $N(A)$ , we see that $\ker T|_{R(A)} = \ker T_A \cap R(A)= \{0\}$ . If the kernel of a linear transformation $L$ is trivial, then the function is injective. Try to show this yourself. If we restrict the codomain of $T|_{R(A)}$ to its range, i.e. $C(A)$ , then $T|_{R(A)}$ is also surjective (this is just a general fact: if you restrict the codomain of a function to its range, then you end up with a surjective function). Hence, if you restrict both the domain and codomain of $T_A$ , we see that $T|_{R(A)}$ is an isomorphism between $R(A)$ and $C(A)$ . There is no need for the matrix $A$ to be full rank in order for the proof in the linked post to hold. Edit: the original post was edited. 1. The function is injective as before. 2. $T_A$ can be invertible, so it is not necessarily many-to-one.
|
|linear-algebra|vector-spaces|projection-matrices|
| 1
|
Is the determinant of the Jacobian matrix of $g$ at $f(2,1)$ correct?
|
Let $f(x,y)=(x^2-y^2,2xy),$ where $x>0,y>0.$ Let $g$ be the inverse of $f$ in a neighbourhood of $f(2,1)$ . Then the determinant of the Jacobian matrix of $g$ at $f(2,1)$ is equal to...............? Solution: Let $f(x,y)=(f_1(x,y),f_2(x,y))$ , where $f_1(x,y)=x^2-y^2$ and $f_2(x,y)=2xy$ ; then the Jacobian matrix is $Jf=\begin{bmatrix} \frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y} \\ \frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y} \end{bmatrix}=\begin{bmatrix} 2x & -2y \\ 2y & 2x \end{bmatrix}$ , so $Jf(2,1)=\begin{bmatrix} 4 & -2 \\ 2 & 4 \end{bmatrix}$ , $\det(Jf(2,1))=\det \begin{bmatrix} 4 & -2 \\ 2 & 4 \end{bmatrix}=20\neq 0$ , then by the Inverse Function Theorem $\det (Jg(2,1))=\det (Jf^{-1}(2,1))=\frac{1}{\det(Jf(2,1))}=1/20$ . Is my solution correct? Please suggest any other method by which it can be tackled, and also check whether I applied the Inverse Function Theorem correctly.
|
Mostly. $f(2,1)=(3,4)$ , so you want $\det(\mathrm J_g(3,4)) = \det(\mathrm J_{f^{-1}}(3,4))$ . But otherwise, yes; that is, $\det(\mathrm J_g(3,4)) = \det(\mathrm J_f(2,1))^{-1}=1/20$ .
|
|multivariable-calculus|solution-verification|determinant|jacobian|inverse-function-theorem|
| 0
|
What is the result of 3-digit chopping for 0.000234?
|
I am trying to understand whether the $0.000$ part counts as digits or not. If the $0.000$ is not counted as digits, then the result should be $0.000234$ . If it is, then the result should be $0.00$ . Which one is correct? Here's the definition of chopping from my textbook:
|
It has been a while, but hopefully students in the future will benefit from my answer. My book has a very similar definition but adds a couple of details. It says that $d_1$ can be from $1$ to $9$ and $d_i$ can be from $0$ to $9$ . So $0.000234$ would be expressed as $0.234\times 10^{-3}$ . As a result, the three-digit chopping for $0.000234$ would be itself, $0.000234$ . The book I am using is Numerical Analysis, by Burden, Faires, Burden, 10th edition. Yours might be a slightly older version for it to have such a similar definition.
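For concreteness, here is a small Python sketch of $k$-digit chopping in this normalized sense: write $x = 0.d_1d_2\ldots \times 10^n$ with $d_1 \neq 0$ and truncate the mantissa after $k$ digits. The function name and implementation are mine, not from the book:

```python
from decimal import Context, ROUND_DOWN

def chop(x, k):
    """k-digit chopping: write x = 0.d1 d2 ... * 10**n with d1 != 0,
    keep the first k significant digits, and drop the rest."""
    # ROUND_DOWN truncates toward zero, which is exactly chopping.
    ctx = Context(prec=k, rounding=ROUND_DOWN)
    return float(ctx.create_decimal(repr(x)))

print(chop(0.000234, 3))   # 0.000234 -- the leading zeros are not digits
print(chop(0.0002349, 3))  # 0.000234
print(chop(3.14159, 3))    # 3.14
```

Note the use of `decimal` rather than float arithmetic, so the truncation happens on decimal digits, matching the textbook definition.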
|
|numerical-methods|floating-point|
| 0
|
Expected Value Question 30 sided/20 sided die solution Issue
|
I recently saw a problem wherein player $A$ is given a $30$ sided die, player $B$ is given a $20$ sided die, and they both simultaneously roll and whoever gets the higher outcome wins that dollar amount. If they draw then player $B$ wins. Calculate the expected value for player $A$ . I have seen the solution claimed to be $8.15$ , but the original solution for the problem claims $10.2$ . I got $10.2$ and I am uncertain as to where the mistake in my solution is, assuming there is one. Let $Z$ be the random variable that represents the amount that is won or lost by player $A$ , $X$ be the random variable that represents the dice roll of player $A$ , and $Y$ the random variable that represents the dice roll of player $B$ . By definition, we want to calculate $$\mathbb{E}[Z] = \sum_{z \in Range(Z)}z\mathbb{P}(Z = z)$$ Assume we are player $A$ . Here note that by the rules of the game we can lose what our opponent rolls, or we can win what we roll. Thus we see that we can lose anywhere fro
|
The mistake is where you let $i$ run all the way up to $j-1$ even for $j\gt21$ ; $B$ can’t roll that high. You could have found this error by checking whether the probabilities sum to $1$ .
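A brute-force check over all $30 \times 20 = 600$ equally likely pairs (exact arithmetic; my own verification, not part of the original solution) confirms the $8.15$ value:

```python
from fractions import Fraction

# A rolls X in 1..30, B rolls Y in 1..20; A wins X dollars when X > Y,
# and loses Y dollars when Y >= X (B wins ties).
total = Fraction(0)
for x in range(1, 31):
    for y in range(1, 21):
        total += x if x > y else -y

ev = total / 600
print(ev, float(ev))  # 163/20 8.15
```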
|
|probability|probability-theory|solution-verification|problem-solving|dice|
| 1
|
If the mean of four of six given numbers is known, what is the mean of the other two?
|
Four of the six numbers $$1867,\quad 1993,\quad 2019,\quad 2025,\quad 2109,\quad \text{and}\quad 2121 $$ have a mean of 2008. What is the mean of the other two numbers? I would like to get help with this problem because I want to find a way to answer it without using guess and check. Thank you :)
|
First, we find the total of all six numbers, which is $12134$ . Then we subtract the total of the four averaged numbers, which is the mean times four: $2008 \times 4 = 8032$ . The result, $12134 - 8032 = 4102$ , is the sum of the remaining two numbers, so their mean is $4102/2 = 2051$ .
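The same steps in a few lines of Python, as a quick sanity check:

```python
nums = [1867, 1993, 2019, 2025, 2109, 2121]

total = sum(nums)        # 12134, total of all six numbers
four_total = 2008 * 4    # 8032, total of the four numbers with mean 2008
pair_total = total - four_total
print(pair_total, pair_total / 2)  # 4102 2051.0
```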
|
|arithmetic|
| 0
|
Proof that the Function $f(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!} $ Satisfies $ f(x+y) = f(x)f(y) $
|
I am attempting to prove that the function $ f(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!} $ satisfies the functional equation $f(x+y) = f(x)f(y) $ for all real $ x $ and $y$ , without relying on the knowledge that $ f(x) $ is the exponential function or using Taylor series expansions. I have been unable to establish the relationship rigorously. What I tried may not be correct, and I may have an error in the equations, but I have tried using the binomial theorem and limit theorems. We know that $ f(x+y) = \sum_{n=0}^{\infty} \frac{(x+y)^n}{n!} $ , so by the binomial theorem, it becomes: $ \sum_{i=0}^{n} \left( \sum_{k=0}^{i} \frac{x^{k-i} y^i}{i!(k-i)!} \right) $ Similarly, I represented $ f(x)f(y) $ as: $ \sum_{i=0}^{n} \left( \sum_{k=0}^{n} \frac{y^k x^i}{k! i!} \right) $ Then, I considered their difference in modulus and tried to find ways to show that they can become arbitrarily close. Another approach I tried was induction. I wanted to prove that $ \sum_{i=0}^{n} \left( \sum_{k=0}^{n}
|
Perhaps the cleanest approach is to use a combination of the binomial theorem and Cauchy products. It also seems easier to manipulate $f(x)f(y)$ into $f(x+y)$ than to do the reverse direction. Here's one approach, done in three steps. Step 1. Cauchy Products We can write $f(x)f(y)$ in the following way $$ f(x)f(y) = \left ( \sum_{n=0}^\infty \frac{x^n}{n!} \right )\left ( \sum_{m=0}^\infty \frac{y^m}{m!} \right ) = \sum_{k=0}^\infty \sum_{n=0}^k \frac{x^ny^{k-n}}{n!(k-n)!} $$ This last formula is sometimes called a Cauchy product formula. Please look at the comments on your question for more information about it. Step 2. Binomial Theorem Focus on a single fixed $k$ term and recall that $${k \choose n} = \frac{k!}{n!(k-n)!} \iff \frac{1}{k!}{k \choose n} = \frac{1}{n!(k-n)!}.$$ From this it follows $$\sum_{n=0}^k \frac{x^ny^{k-n}}{n!(k-n)!} = \sum_{n=0}^k \frac{1}{k!}{k \choose n}x^ny^{k-n} = \frac{1}{k!} \sum_{n=0}^k {k \choose n}x^ny^{k-n} = \frac{1}{k!}(x+y)^k$$ where the last equality is exactly the binomial theorem.
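Though of course not a proof, a quick numerical check of the identity on truncated partial sums is reassuring (the test points are arbitrary choices of mine):

```python
import math

def f(x, terms=60):
    # Partial sum of sum_{n>=0} x^n / n!
    return sum(x**n / math.factorial(n) for n in range(terms))

for x, y in [(1.3, -0.7), (0.5, 2.0), (-1.1, -0.4)]:
    # Differences are at rounding level, far below the truncation scale.
    print(abs(f(x + y) - f(x) * f(y)))
```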
|
|real-analysis|calculus|sequences-and-series|
| 0
|
Probability that two singular random variables are equal to each other
|
Let $X$ and $Y$ be independent random variables on $\mathbb{R}$ . If they have an absolutely continuous distribution (w.r.t. the Lebesgue measure), we know that $P(X=Y)=0$ , due to the fact that the event $X=Y$ is supported on a set with zero (Lebesgue) measure on $\mathbb{R}^2$ . Now assume the variables have a discrete distribution, let's say $P(X=a_k)=P(Y=a_k)=p_k$ , where $a_k\in\mathbb{R}$ , $k$ runs through a countable set and $p_k\geqslant 0$ satisfy $\sum_k p_k=1$ ; in such a case $P(X=Y)=\sum_k p_k^2>0$ . What happens if the two variables have a singular continuous distribution w.r.t. the Lebesgue measure, i.e. continuous but not absolutely continuous? Can we still say $P(X=Y)=0$ ?
|
$$\Pr(X=Y)=\lim_{n\to \infty} \Pr(|X-Y|\leq 1/n)=\lim_{n\to \infty} \int_{-\infty}^{\infty}\mu_X(dx)\,\mu_Y\left[x-\frac{1}{n},x+\frac{1}{n}\right]=0.$$ Indeed, since $\mu_Y$ is continuous (atomless), $\mu_Y\left[x-\frac{1}{n},x+\frac{1}{n}\right]\to \mu_Y(\{x\})=0$ pointwise in $x$ , and the integrand is bounded by $1$ , so the last limit is $0$ by dominated convergence.
|
|probability-theory|singular-measures|
| 0
|
Gradient of 2-norm squared
|
Could someone please provide a proof for why the gradient of the squared $2$ -norm of $x$ is equal to $2x$ ? $$\nabla\|x\|_2^2 = 2x$$
|
Here is another simple proof using directly the definition of differentiability at a point. 1- But first let's remember that $f(\vec{x})$ is said to be differentiable at a point $\vec{x}$ if for all $\vec{h}$ you can write $ f(\vec{x}+ \vec{h})= f(\vec{x}) + L(\vec{h}) + o(\vec{h})$ with $L(\vec{h})$ a linear mapping in $\vec{h}$ and $\lim_{\vec{h} \to \vec{0}} \frac{\| o(\vec{h})\|}{\|\vec{h}\|}=0$ . 2- Here $f(\vec{x})= \|\vec{x}\|^2$ : $$\|\vec{x}+\vec{h}\|^2 = \|\vec{x}\|^2 + \|\vec{h}\|^2 + \langle \vec{x},\vec{h}\rangle + \langle \vec{h},\vec{x}\rangle = f(\vec{x}) + \langle \vec{x},\vec{h}\rangle + \langle \vec{h},\vec{x}\rangle + \|\vec{h}\|^2 $$ We note $o(\vec{h}) = \|\vec{h}\|^2 \Rightarrow \lim_{\vec{h} \to \vec{0}} \frac{\|\vec{h}\|^2}{\|\vec{h}\|} = \lim_{\vec{h} \to \vec{0}} \|\vec{h}\| = 0$ . Obviously $L(\vec{h}) = \langle \vec{x},\vec{h}\rangle + \langle \vec{h},\vec{x}\rangle$ is a linear mapping in $\vec{h}$ ; because we work with real numbers we get that $\langle \vec{x},\vec{h}\rangle = \langle \vec{h},\vec{x}\rangle \Rightarrow L(\vec{h}) = 2\langle \vec{x},\vec{h}\rangle$ . 3- Now again by definition the unique vector $\vec{\nabla} f(\vec{x})$ satisfying $L(\vec{h}) = \langle \vec{\nabla} f(\vec{x}), \vec{h}\rangle$ is the gradient. Thus it comes trivially that $\vec{\nabla} f(\vec{x}) = 2\vec{x}$ .
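A finite-difference check of $\nabla\|x\|_2^2 = 2x$ at an arbitrary point (central differences, plain Python; the test vector is my own choice):

```python
def f(x):
    return sum(t * t for t in x)  # ||x||_2^2

def numeric_grad(func, x, eps=1e-6):
    # Central finite differences, one coordinate at a time.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((func(xp) - func(xm)) / (2 * eps))
    return g

x = [1.5, -2.0, 0.5]
print(numeric_grad(f, x))  # close to [3.0, -4.0, 1.0], i.e. 2x
```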
|
|multivariable-calculus|vector-analysis|normed-spaces|scalar-fields|
| 0
|
Is it true that $i \in \mathbb{Q}(\sqrt{2},\xi)$, where $\xi = 1_{\frac{2\pi}{3}} = -\frac{1}{2}+i\frac{\sqrt{3}}{2}$?
|
I am trying to do the following exercise: Give the Galois group of $\mathbb{Q}(\sqrt{3},\xi)$ over $\mathbb{Q}$ , where $\xi = 1_{\frac{2\pi}{3}}$ . Prove that $i \in \mathbb{Q}(\sqrt{2},\xi)$ and group together the elements of $\text{Gal}_{\mathbb{Q}}(\mathbb{Q}(\sqrt{2},\xi))$ that have the same values in $\mathbb{Q}(i)$ . It is very easy to prove that $i \in \mathbb{Q}(\sqrt{3},\xi)$ , but I'm having more difficulties trying to prove that $i \in \mathbb{Q}(\sqrt{2},\xi)$ . I've already found some errors in the headings of the other exercises on the same list, so I'm starting to suspect that in this case the exercise intended to say $\mathbb{Q}(\sqrt{3},\xi)$ instead of $\mathbb{Q}(\sqrt{2},\xi)$ , as in the second part it suddenly changes from $\mathbb{Q}(\sqrt{3},\xi)$ to $\mathbb{Q}(\sqrt{2},\xi)$ .
|
Let $w=\exp(\tfrac{2\pi i}3)$ . If $i\in\Bbb Q(\sqrt2,w)$ , then for some rational numbers $a,b,c,d,e,f,$ $$a+bw+cw^2+d\sqrt2+e\sqrt2 w+f\sqrt2 w^2=i$$ Taking imaginary parts we have $$A\sqrt3+B\sqrt6=1$$ for some $A,B\in\Bbb Q.$ If $A,B\neq0$ , then by squaring, this implies that $\sqrt{18}=3\sqrt2$ is rational, which it is not. If $A$ or $B$ is zero, then $\sqrt3$ or $\sqrt6$ is rational. Contradiction again.
|
|abstract-algebra|field-theory|galois-theory|galois-extensions|
| 1
|
Solve $3 = -x^2+4x$ by factoring
|
I have $3 = -x^2 + 4x$ and I need to solve it by factoring. According to wolframalpha the solution is $x_1 = 1, x_2 = 3$. \begin{align*} 3 & = -x^2 + 4x\\ x^2-4x+3 & = 0 \end{align*} According to wolframalpha $(x-3) (x-1) = 0$ is the equation factored, which allows me to solve it, but how do I get to this step?
|
Another way: $$\begin{align}x^2-4x+3&=x^2-4x+4-1\\ &=(x-2)^2-1\\ &=(x-2-1)(x-2+1)\\ &=(x-3)(x-1) \end{align}$$
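You can sanity-check the completing-the-square identity and the resulting factorization by evaluating both sides at a few points (points chosen arbitrarily):

```python
# x^2 - 4x + 3 should equal (x - 2)^2 - 1 and (x - 3)(x - 1) for every x.
for x in [-2, 0, 1, 2, 3, 7]:
    lhs = x**2 - 4 * x + 3
    assert lhs == (x - 2) ** 2 - 1 == (x - 3) * (x - 1)

print([x for x in range(-5, 10) if x**2 - 4 * x + 3 == 0])  # [1, 3]
```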
|
|quadratics|factoring|
| 0
|
placing men and women around a table question
|
In how many ways can you place $5$ women and $5$ men around a table in a way that $2$ people from the same gender would not sit next to each other? I'm not sure if it's $5!$ or $5! \cdot 4$. Thanks in advance!
|
So first you can make either all the men or all the women sit around the table; let's say the women sit first. The number of ways of arranging the $5$ women around the table is $5!/5 = 4!$ (we divide by $5$ because rotations of the same circular arrangement are identical). Next, since we do not want people of the same gender to sit together, we need to place the men in the gaps between the women already seated around the table: the number of gaps created between $5$ women around a table is $5$ , so we can arrange the $5$ men in these $5$ places in $5!$ ways (the seats are now distinguishable relative to the seated women, so we do not divide by $5$ again). So your final answer would be $4! \cdot 5!$
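A brute-force count agrees with $4!\cdot 5! = 2880$. Pinning one particular woman to a reference seat identifies rotations, so each circular arrangement is generated exactly once (the labels are mine):

```python
from itertools import permutations

men = ['M0', 'M1', 'M2', 'M3', 'M4']
women = ['W0', 'W1', 'W2', 'W3', 'W4']

def alternating(seating):
    n = len(seating)
    # No two neighbours (including around the wrap) share a gender letter.
    return all(seating[i][0] != seating[(i + 1) % n][0] for i in range(n))

rest = men + women[1:]  # W0 is pinned to seat 0
count = sum(alternating(('W0',) + p) for p in permutations(rest))
print(count)  # 2880 == 4! * 5!
```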
|
|combinatorics|discrete-mathematics|
| 0
|
Bisector Intersection Proof
|
$AGF$ is a triangle, $B$ is the center of its inscribed circle, and $D$ and $E$ are the intersections of that circle with $[AG]$ & $[AF]$ respectively. I couldn’t figure out how to prove that the perpendicular bisector of $[AD]$ , the perpendicular bisector of $[AE]$ , and the angular bisector of $\widehat{GAF}$ intersect. ( $H$ on the figure). Do you know how to prove this intersection? Thank you in advance! https://www.geogebra.org/m/fhge9wjr
|
One proof is that the perpendicular bisectors of $AD$ and $AE$ meet at the circumcenter of $\triangle{ADE}$ . Therefore $HD=HE$ (and $AD=AE$ )*, and all points equidistant from two lines that meet at a point fall on the angle bisector of those lines. For DS: since $DB=BE$ and $AB$ is shared and $\angle{ADB}, \angle{AEB}$ are right angles, by the HL theorem $\triangle{ADB}\cong\triangle{AEB}$ , so $AD=AE$ , i.e., $ADBE$ is a kite.
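Not a proof, but a quick coordinate check (with an arbitrarily chosen triangle) that the meeting point of the two perpendicular bisectors, the circumcenter of $ADE$, really does land on the bisector of $\widehat{GAF}$:

```python
import math

A, G, F = (0.0, 0.0), (6.0, 0.0), (1.0, 5.0)  # arbitrary test triangle, A at origin

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def unit(p, q):
    d = dist(p, q)
    return ((q[0] - p[0]) / d, (q[1] - p[1]) / d)

a, g, f = dist(G, F), dist(A, F), dist(A, G)  # sides opposite A, G, F
t = (a + g + f) / 2 - a                       # tangent length from A
uG, uF = unit(A, G), unit(A, F)
D = (t * uG[0], t * uG[1])                    # incircle touch point on [AG]
E = (t * uF[0], t * uF[1])                    # incircle touch point on [AF]

# Circumcenter H of A, D, E: solve |H-A|^2 = |H-D|^2 = |H-E|^2 (A is origin).
m11, m12, b1 = 2 * D[0], 2 * D[1], D[0] ** 2 + D[1] ** 2
m21, m22, b2 = 2 * E[0], 2 * E[1], E[0] ** 2 + E[1] ** 2
det = m11 * m22 - m12 * m21
H = ((b1 * m22 - b2 * m12) / det, (m11 * b2 - m21 * b1) / det)

# H lies on the internal bisector of angle GAF (direction uG + uF)
# exactly when this cross product vanishes.
cross = H[0] * (uG[1] + uF[1]) - H[1] * (uG[0] + uF[0])
print(abs(cross) < 1e-9)  # True
```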
|
|geometry|angle|
| 0
|
Issue about the Cauchy problem of separable variables differential equation
|
Consider the Cauchy problem $$\begin{cases}x'(t)=f(x) \\ x(0)=x_0\end{cases}$$ with $t\in \mathbb{R}$ and $f:I\subseteq\mathbb{R} \longrightarrow\mathbb{R}$ . If $f(x_0)=0$ , then $x(t)=x_0$ is the solution of the equation. If $f(x_0)\neq 0$ , then $f(x(t))\neq 0 \, \forall t$ in the maximal interval of definition of the solution, because the solution is unique. I don't understand the bold part: supposing that $\exists t_1$ such that $f(x(t_1))=0$ , what is the contradiction? I was thinking that, defining $x(t_1):=x_1$ , if $x_1$ is a zero of $f$ , then $x(t)=x_1$ can be another solution of the problem, which contradicts the unicity, but this is not true since $x(0)=x_1\neq x_0$ , so $x(t)=x_1$ is not a solution of the problem. What am I missing?
|
If $x(t_1)=x_1$ and $f(x_1)=0$ , then, as you found, $\bar x(t)\equiv x_1$ is also a solution. Now you have the situation that both $x$ and $\bar x$ solve the Cauchy problem with initial condition $x(t_1)=x_1$ , so by uniqueness $x\equiv\bar x$ on their common interval of existence. But then $x_0=x(0)=\bar x(0)=x_1$ , so $f(x_0)=f(x_1)=0$ , contradicting the assumption $f(x_0)\neq 0$ . So you have to backtrack to the last assumption: $x(t_1)=x_1$ with $f(x_1)=0$ , that is, $\dot x(t_1)=f(x(t_1))=0$ , is not possible. And if you wanted to force it anyway, you would have to give up $x(0)=x_0$ , but that was the problem to solve, so we cannot throw out this condition.
|
|real-analysis|ordinary-differential-equations|cauchy-problem|
| 1
|
If you lived in a 4-torus, what would the doughnut hole look like from the inside?
|
I'm not just curious; it refers to general relativity. Specifically, would the hole in the torus' center look to us like a sphere, one you cannot enter because you always slip across the side and go around it instead of through?
|
There is not necessarily any "middle" of a four-torus, nor is there necessarily any "hole" in any place you might call the "middle". When we try to visualize space in cases where the space is not Euclidean, there's a strong inclination to try to do this by embedding the space in a Euclidean space of higher dimensions. But this is often a distortion of the actual structure of the space. The old Asteroids game from the early days of arcade video games is a good example. The asteroids in this game traveled at constant velocity through a two-dimensional toroidal space. That is, when an asteroid went off the bottom of the screen it reappeared at the top, and when it went off the right edge it reappeared on the left edge. In each case it reappeared with the exact same speed and direction. We call the space in which the Asteroids game lives "toroidal" because one way to visualize its topology is to imagine that we put a photograph of the game screen on a rubber sheet and glue the left edge to the right edge, and then glue the top edge of the resulting tube to the bottom edge.
|
|general-topology|
| 0
|
Allegedly: the existence of a natural number and successors does not imply, without the Axiom of Infinity, the existence of an infinite set.
|
The Claim: From a conversation on Twitter, from someone whom I shall keep anonymous (pronouns he/him though), it was claimed: [T]he existence of natural numbers and the fact that given a natural number $n$ , there is always a successor $(n+1)$ , do not imply the existence of an infinite set. You need an extra axiom for that. It was clarified that he meant the Axiom of Infinity. The Question: Is the claim true? Why or why not? Context: I like how, if true, it goes against the idea that, if you just keep adding one to something, you'll get something infinite. This is beyond me. Searching for an answer online led to some interesting finds, like this . To add context, then, I'm studying for a PhD in Group Theory. I have no experience with this sort of foundational question. I'm looking for an explanation/refutation. To get some idea of my experience with playing around with axioms, see: What is the minimum number of axioms necessary to define a (not necessarily commutative) ring (that doe
|
As noted in the comments, the structure ( $V_\omega$ , $\in$ ) satisfies all the axioms of ZFC except for the axiom of existence of an inductive set, and moreover every object of it is finite under all (usual?) definitions (since choice holds), so it follows that some form of an axiom of infinity is in fact necessary to prove the existence of an infinite set. Edit, for further clarification: while naïvely one may think "but of course there are infinitely many different objects available (inside such a model / provably in such a theory, etc.)", the point is that 'finite' and 'infinite' really are formal phrases; compare and contrast Skolem's paradox.
|
|set-theory|infinity|axioms|foundations|peano-axioms|
| 0
|
irreducibility in p-adic field
|
Let $u\in \mathbb{Z}_p^*$ be a unit in the ring of $p$ -adic integers. Assume that $u^{1/p}\not\in \mathbb{Q}_p$ , in other words $u$ is not a $p$ th power. I am wondering how the polynomial $f=x^p-u$ factors over $\mathbb{Q}_p$ . I managed to prove (without using that $u$ is a unit) that $f$ is irreducible if and only if it has no roots. However, if $f$ has a root $\alpha$ , then we have $$ f(x)=(x-\alpha)g(x) $$ I am wondering, do we know if $g$ is irreducible over $\mathbb{Q}_p$ or how it factors?
|
I claim that $f$ is irreducible, independent of the concrete choice of $u$ . Note that we know all the roots of $f$ : They are of the form $\zeta_p^k u^{1 / p}$ for $\zeta_p$ a primitive $p$ th root of unity, $u^{1 / p}$ an initial choice of root and $k = 1, \ldots, p$ . Let now $K = \mathbb{Q}_p(\zeta_p)$ . Clearly $L = K(u^{1 / p})$ is a splitting field for $f$ , and $\operatorname{Gal}(L / K) \cong \mathbb{Z} / p \mathbb{Z}$ generated by $u^{1 / p} \mapsto \zeta_p u^{1 / p}$ (to make this completely rigorous you could appeal to Kummer theory), which implies that $f$ is irreducible over $K$ and hence also over the smaller field $\mathbb{Q}_p$ (since $f$ clearly has no roots in $K$ and $\operatorname{Gal}(L / K)$ acts transitively on its roots). Note also that this does not use that $u$ is a $p$ -adic unit, or in fact anything about the ground field except that it is not of characteristic $p$ (this assumption is hidden away in my appeal to Kummer theory), so the proof translates freely to other ground fields.
|
|algebraic-number-theory|irreducible-polynomials|p-adic-number-theory|class-field-theory|
| 1
|
Hessian of sigmoids
|
I have $q = f^{T} A f$ , where $A$ is an $n \times n$ symmetric matrix and $f$ a $n \times 1$ vector output of a sigmoid function $$f = \frac{1}{1 + e^{-w^T X}}$$ I want to take the second-order derivative (Hessian) of $q$ w.r.t. the vector $w$ , which is a $p$ -dimensional vector. Hence $X$ is an $n \times p$ matrix. So far, I've only succeeded in computing the first-order derivative using the chain rule, which I found to be as follows: $$\frac{dq}{dw} = 2(Af)^{T}DX$$ where $D$ is a diagonal matrix with its diagonal entries equal to $f_{i}(1 - f_{i})$ . Checking the result of the above gradient against automatic differentiation tools, it is correct. However, I'm stuck at this step to get the second-order derivative of $q$ . How can I move from the gradient to the Hessian of the quadratic form? Thanks in advance.
|
$ \def\o{{\tt1}} \def\BR#1{\Big[#1\Big]} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\diag#1{\op{diag}\LR{#1}} \def\Diag#1{\op{Diag}\LR{#1}} \def\trace#1{\op{Tr}\LR{#1}} \def\frob#1{\left\| #1 \right\|_F} \def\qiq{\quad\implies\quad} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\c#1{\color{red}{#1}} \def\fracLR#1#2{\LR{\frac{#1}{#2}}} $ Define a few useful variables $$\eqalign{ p &= \exp\LR{Xw},\;&P=\Diag p &\qiq dp = PX\:dw \\ f &= \fracLR{p}{\o+p},\;&F=\Diag f &\qiq df = \LR{F-F^2}X\:dw \\ Y &= \Diag{Af} \\ }$$ Use these to calculate the differential and gradient $$\eqalign{ q &= f^TAf \\ &= A:ff^T \\ dq &= A:\LR{f\,df^T+df\,f^T} \\ &= \LR{A+A^T}f:\c{df} \\ &= 2Af:\c{\LR{F-F^2}X\:dw} \\ &= 2X^T\LR{F-F^2}Af:dw \\ \grad qw &= 2X^T\LR{F-F^2}Af \;\equiv\; g \\ }$$ The Hessian is the gradient of $g$ $$\small\eqalign{ dg &= 2X^T\LR{\c{dF}-2F\,\c{dF}}Af + 2X^T\LR{F-F^2}A\:\c{df} \\ &= 2X^T\LR{I-2F}\,\Diag{df}\:Af + 2X^T\LR{F-F^2}A\,df \\ &= 2X^T\LR{I-2F}\,\Diag{Af}\:df + 2X^T\LR{F-F^2}A\,df \\ &= 2X^T\BR{\LR{I-2F}Y + \LR{F-F^2}A}\LR{F-F^2}X\:dw \\ \grad gw &= 2X^T\BR{\LR{I-2F}Y + \LR{F-F^2}A}\LR{F-F^2}X \\ }$$ and this last matrix is the Hessian of $q$ with respect to $w$ .
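It is easy to verify the first-order formula $\nabla_w q = 2X^T(F-F^2)Af$ numerically before trusting the Hessian derivation; here is a plain-Python finite-difference check on small made-up data (all the concrete numbers are arbitrary):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

X = [[0.5, -1.0], [1.5, 0.3], [-0.7, 0.8]]                 # n = 3, p = 2
A = [[2.0, 0.5, 0.1], [0.5, 1.0, -0.3], [0.1, -0.3, 1.5]]  # symmetric

def q(w):
    f = [sigmoid(sum(Xi[j] * w[j] for j in range(2))) for Xi in X]
    return sum(f[i] * A[i][j] * f[j] for i in range(3) for j in range(3))

def grad(w):
    # g = 2 X^T D (A f) with D = diag(f_i (1 - f_i))
    f = [sigmoid(sum(Xi[j] * w[j] for j in range(2))) for Xi in X]
    Af = [sum(A[i][j] * f[j] for j in range(3)) for i in range(3)]
    return [2 * sum(X[i][k] * f[i] * (1 - f[i]) * Af[i] for i in range(3))
            for k in range(2)]

w, eps = [0.4, -0.2], 1e-6
for j in range(2):
    wp = list(w); wp[j] += eps
    wm = list(w); wm[j] -= eps
    print(grad(w)[j], (q(wp) - q(wm)) / (2 * eps))  # the two columns agree
```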
|
|calculus|multivariable-calculus|derivatives|hessian-matrix|scalar-fields|
| 0
|
Bisector Intersection Proof
|
$AGF$ is a triangle, $B$ is the center of its inscribed circle, and $D$ and $E$ are the intersections of that circle with $[AG]$ & $[AF]$ respectively. I couldn’t figure out how to prove that the perpendicular bisector of $[AD]$ , the perpendicular bisector of $[AE]$ , and the angular bisector of $\widehat{GAF}$ intersect. ( $H$ on the figure). Do you know how to prove this intersection? Thank you in advance! https://www.geogebra.org/m/fhge9wjr
|
Triangles $\color{brown} {ADB}$ and $\color{red}{AEB}$ are congruent (two equal angles and a common side). Therefore $$AE=AD$$ Consider the symmetry $s$ with respect to $AB$ . $$s(D)=E$$ Let $d$ be the perpendicular bisector of $[AD]$ and $e$ the perpendicular bisector of $[AE]$ . $$s(d)=e$$ So $d$ , $e$ and the angular bisector of $\widehat{GAF}$ intersect.
|
|geometry|angle|
| 1
|
What is the number of ways of dividing $2n$ distinct balls into $n$ identical bins, such that each bin contains two balls?
|
What is the number of ways of dividing $2n$ distinct balls into $n$ identical bins, such that each bin contains two balls? My working was as follows: First select $n$ balls in $\binom{2n}{n}$ ways, then put one ball into each bin, and arrange it in a total of $n!$ possible ways, then, put the rest of the $n$ balls into the $n$ bins, in one way. This can be done in a total of $\binom{2n}{n} n!$ ways. But there seems to be double counting for each bin, so we need to divide by $2^{n}$ to get the correct answer, i.e, $\binom{2n}{n} \frac{n!}{2^n} $ . My question is how is this double counting happening? Also, more generally, is there a way to determine how to divide $kn$ distinct balls into $n$ identical bins such that each bin has $k$ balls?
|
The question boils down to the number of ways pairs can be formed. For pairing $2n$ items, the formula $\large{\frac{(2n)!}{n!2^n}}$ can be understood by noting that in the $(2n)!$ permutations of the items which we can form, neither the order of the pairs nor the order within the pairs matters, thus the division by $n!2^n$ . An easy alternative formula is to think of pairs being formed sequentially, i.e. $(2n-1)(2n-3)(2n-5)\cdots 1$ . This skipping by two is called a double factorial, and so the formula simplifies to just $(2n-1)!!$ . For the generalised case of $kn$ balls with $k$ in each of $n$ bins, it is simpler to revert to the original formula type, thus $\dfrac{(kn)!}{n! (k!)^n}$
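A small computational check that the two formulas agree, and that both match a brute-force enumeration (which chops each permutation into consecutive pairs and de-duplicates):

```python
import math
from itertools import permutations

def pairings_formula(n):
    return math.factorial(2 * n) // (math.factorial(n) * 2 ** n)

def double_factorial(n):
    # (2n-1)!! = (2n-1)(2n-3)...1
    out = 1
    for k in range(2 * n - 1, 0, -2):
        out *= k
    return out

def brute_force(n):
    # Count distinct ways to split {0,...,2n-1} into unordered pairs.
    seen = set()
    for p in permutations(range(2 * n)):
        pairing = frozenset(frozenset(p[2 * i:2 * i + 2]) for i in range(n))
        seen.add(pairing)
    return len(seen)

for n in range(1, 5):
    assert pairings_formula(n) == double_factorial(n) == brute_force(n)
print([pairings_formula(n) for n in range(1, 5)])  # [1, 3, 15, 105]
```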
|
|combinatorics|
| 0
|
How to evaluate $\int_{-\infty}^\infty \sin^3(x)/x^3 dx$ without finding an analytic continuation, still using complex analysis.
|
There is a similar post regarding the integral $\int_{-\infty}^\infty \sin^3(x)/x^3 dx$ on Stack. The reason I have some trouble with this post is because it gives an analytic continuation of the function $\sin^3(x)/x^3=\left(\frac{e^{iz}-e^{-iz}}{2iz}\right)^3$ , namely $h(z)=-\frac{e^{3iz}-3e^{iz}}{4z^3}-\frac{1}{2z^3}$ . Firstly, I'd like to know how such an analytic continuation is obtained. Regarding the sinc function, I know that sine has a zero of order 1, and the denominator also does, so at $z=0$ , there is a removable singularity and thus an analytic continuation. I also know that $\lim_{z\to 0}\text{sinc}(z)=1$ (meaning 1 is an analytic continuation to $\mathbb{C}$ ?) Regarding the actual integral itself, I have considered the usual indented semicircle, of radius $R$ , with an $\epsilon$ semicircle about the singularity at $0$ . Without using that analytic continuation $h(z)$ , this gives, letting $f(z)=\frac{e^{3iz}-3e^{iz}+3e^{-iz}-e^{-3iz}}{z^3}$ $$0=\frac{i}{8}\left[
|
Here is another method which just uses IBP, not using analytic continuation (or contours). In fact, \begin{align} \int_{-\infty}^{\infty}\sin^3\left(x\right)\frac{1}{x^3}\;\mathrm{d}x&=2\int_{0}^{\infty}\sin^3\left(x\right)\frac{1}{x^3}\;\mathrm{d}x\\ &=-\int_0^{\infty}\sin^3(x)\;\mathrm{d}(x^{-2})\\ &\overset{IBP}=-3\int_0^{\infty}\sin^2(x)\cos(x)\;\mathrm{d}(x^{-1})\\ &\overset{IBP}=\frac34\int_0^{\infty}\frac{-\sin(x)+3\sin(3x)}{x}\;\mathrm{d}x\\ &=\frac34(-\frac\pi2+\frac{3\pi}{2})\\ &=\frac34\pi. \end{align}
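As a numerical cross-check of the value $\tfrac34\pi$ (my own verification, not part of the answer), composite Simpson's rule on $[0,400]$ works; the discarded tail is bounded in absolute value by $\int_{400}^\infty x^{-3}\,dx$, which is far below the tolerance used here:

```python
import math

def integrand(x):
    # sin^3(x)/x^3 with its removable singularity at 0 filled in by the limit 1.
    return 1.0 if x == 0.0 else (math.sin(x) / x) ** 3

def simpson(func, a, b, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = func(a) + func(b)
    s += 4 * sum(func(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(func(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

approx = 2 * simpson(integrand, 0.0, 400.0, 200000)
print(approx, 3 * math.pi / 4)  # both about 2.35619
```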
|
|real-analysis|integration|complex-analysis|contour-integration|analytic-continuation|
| 0
|
How to prove the two answers to an integral are equivalent
|
I'm trying to do the integral: $$\int{\frac{1}{\sqrt{e^{-2x}-1}}}dx$$ So I try two ways to do it. The first method I used is to multiply the numerator and denominator by $e^x$ first. $$\int{\frac{1}{\sqrt{e^{-2x}-1}}}dx$$ $$=\int{\frac{e^x}{\sqrt{1-e^{2x}}}}dx$$ $$=\int{\frac{\sin{\theta}}{\sqrt{1-\sin^2{\theta}}}\times\frac{\cos{\theta}}{\sin{\theta}}}d\theta$$ $$=\theta+C$$ $$=\arcsin{(e^x)}+C$$ The second method is to do a substitution directly. $$\int{\frac{1}{\sqrt{e^{-2x}-1}}}dx$$ $$=-\int{\frac{1}{u}\times\frac{u}{u^2+1}}du$$ $$=-\arctan{u}+C$$ $$=-\arctan{\sqrt{e^{-2x}-1}}+C$$ After inputting them into Desmos, I found that the distance between the graphs is the constant $\frac{\pi}{2}$ . So I want to ask: can you prove that $$\arcsin{(e^x)}-(-\arctan{\sqrt{e^{-2x}-1}})$$ is a constant?
|
You have already proved it, by doing those integrals! If you compute the same indefinite integral in multiple ways, then (provided your computations were correct) whatever answers you get must be equal up to a constant. But in case you're not quite confident of your original calculations, you can check them by differentiating the solutions and checking they give the same answer, which should be the body of your original integral. That is, if you can show $f'(x) = g'(x)$ , that means $\frac{d}{dx}(f(x)-g(x)) = 0$ , so the difference $f(x)-g(x)$ is constant. Paul Frost's answer goes through the details of this for your functions. (Here I'm assuming that the solutions $f(x)$ , $g(x)$ are defined on a connected domain. If there are holes in their domain, typically coming from singularities in the integrand, like e.g. $\int \frac{1}{x}\mathrm{d}x$ , then the difference between the solutions is only locally constant — the solutions must still agree up to some constant within each connected component of the domain.)
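A direct numerical check of the claimed constant difference $\pi/2$ at a few points $x<0$, where both antiderivatives are defined (the sample points are arbitrary):

```python
import math

def f(x):
    return math.asin(math.exp(x))

def g(x):
    return -math.atan(math.sqrt(math.exp(-2 * x) - 1))

diffs = [f(x) - g(x) for x in (-5.0, -2.0, -0.5, -0.01)]
print(diffs)  # every entry is pi/2 ~ 1.5707963...
```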
|
|calculus|integration|trigonometry|indefinite-integrals|inverse-trigonometric-functions|
| 0
|
Exercise 1.1.17 from West
|
Here is exercise 1.1.17 from Introduction to Graph Theory by Douglas B. West 1.1.17. Prove that $K_n$ has three pairwise-isomorphic subgraphs such that each edge of $K_n$ appears in exactly one of them if and only if n is congruent to $0$ or $1$ modulo $3$ . I am trying to formalize a proof for this exercise, but struggle simplifying my arguments. I would appreciate if you could pinpoint where I should be clearer or where I am wrong. Here is what I've sketched so far. Sketch The number of edges for any given n-vertex clique $K_n$ is $n(n-1)/2$ . If $3 | n+1$ then $3\nmid n(n-1)$ . Therefore, $e(K_n)$ is divisible by three if and only if $n+1$ is not divisible by three. To us this means that the edge set of a n-vertex clique can be partitioned into three equally sized subsets of edges given that $n(n-1)$ is divisible by three. First, consider the case where $n$ is divisible by three . Now, imagine we are reconstructing a clique with three pairwise isomorphic subgraphs. And observe how t
|
There is a simpler way to do this. You can partition the set $E(K_n)$ , iff $n \equiv_3 0$ or $n \equiv_3 1$ , into $3$ subgraphs where each of the $3$ subgraphs consists of a clique on $\lceil \frac{n}{3} \rceil$ vertices, and a complete bipartite graph with $\lfloor \frac{n}{3} \rfloor$ vertices on each side, on the remaining $2 \lfloor \frac{n}{3} \rfloor$ vertices. For $n \equiv_3 0$ : Write the vertices of $K_n$ as $v_1,\ldots, v_n$ . Let $H_i; i =0,1,2$ be the graph where $v_r$ and $v_s$ form an edge iff $r+s \equiv_3 i$ . Then check that every edge of $K_n$ is in exactly one of the $3$ graphs $H_i$ ; $i=0,1,2$ , and that each of the $H_i$ s consists of a clique on $n/3$ vertices plus a complete bipartite graph, with $n/3$ vertices on each side, on the remaining $2n/3$ vertices. So $H_0,H_1,H_2$ are indeed isomorphic to each other and their edge-sets partition $E(K_n)$ . For $n \equiv_3 1$ : Then $n-1 \equiv_3 0$ . Partition $E(K_{n-1})$ then, into $H_0,H_1,H_2$ as above in this answer
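The $n \equiv_3 0$ construction is easy to check by machine; for $n=6$, each residue class has $n(n-1)/6 = 5$ edges, and together the classes partition $E(K_6)$:

```python
from itertools import combinations

n = 6  # n = 0 (mod 3)
edges = list(combinations(range(1, n + 1), 2))
H = {i: [e for e in edges if sum(e) % 3 == i] for i in range(3)}

# The three classes are disjoint, cover every edge, and have equal size.
print([len(H[i]) for i in range(3)])                   # [5, 5, 5]
print(sum(len(H[i]) for i in range(3)) == len(edges))  # True
```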
|
|graph-theory|graph-isomorphism|
| 1
|
$-\Delta u =1$ in a unbounded strip
|
Consider the problem $$ \left\lbrace \begin{gathered} -\Delta u = 1 \quad\textrm{in}\quad S\\ u=0\quad\textrm{on}\quad \partial S \end{gathered} \right. $$ where $S$ is the infinite strip $\mathbb{R}\times (0,1)$ or the semi-infinite strip $\mathbb{R}_+\times (0,1)$ . What extra conditions should be imposed on this problem in order to obtain a bounded solution with respect to the $H^1_0(S)$ norm? Is it possible? Moreover, would this solution be unique? If it is not possible, what type of bounded solutions could be obtained? Only in $L^2$ or another function space? It is a rather wide question, but I am looking forward to possible answers.
|
This is my try: Let $u$ be a bounded solution to the problem on $S=\mathbb{R}_+\times (0,1)$ . Define $v(x,y)=u(x,y)-\frac{y(1-y)}{2}$ . $v$ is harmonic on $S$ with boundary conditions $v(x,0)=v(x,1)=0$ and $v(0,y)=\frac{y(y-1)}{2}$ . Define a function in polar coordinates $w(r,\theta)=v(-\ln r,\theta)$ on the sector $0<r<1$ , $0<\theta<1$ . Then $w(r,0)=w(r,1)=0$ and $w(1,\theta)=\frac{\theta(\theta-1)}{2}$ . Since $v$ is bounded so is $w$ , and thus it is the unique solution to the problem in the bounded sector: $$w(r,\theta)=\sum_{n=1}^\infty A_n r^{n\pi}\sin(n\pi\theta)$$ where $A_n \approx \frac{1}{n^3}$ are the Fourier coefficients of $w(1,\theta)=\frac{\theta(\theta-1)}{2}$ . Therefore $v(x,y)=\sum_{n=1}^\infty A_n e^{-n\pi x}\sin(n\pi y)\in H^1_0(S)$ . But $u=v+\frac{y(1-y)}{2}$ and $\frac{y(1-y)}{2}\notin H^1_0(S)$ , thus $u\notin H^1_0(S)$ .
|
|complex-analysis|partial-differential-equations|elliptic-equations|
| 1
|
Basic lemma on normality of map between von Neumann algebras
|
Let $f: M\to N$ be a bounded linear map between von Neumann algebras. Assume that for every net $0 \le x_i\nearrow x$ , we have that $f(x_i)\to f(x)$ $\sigma$ -strongly. Is it true that $f$ is normal, i.e. $\sigma$ -weakly continuous? Let $\omega \in N_*$ . Then we need to check that $\omega \circ f\in M_*$ . Now, if $f$ would be $*$ -homomorphism, I think I can prove this: we can use that $\eta \in M_*$ iff $$\eta(\sum_{i\in I} e_i) = \sum_{i\in I} \eta(e_i)$$ for all families of orthogonal projections $\{e_i\}_i\subseteq M$ . So, let $\{e_i\}_i$ be such a family. Then $$(\omega \circ f)(\sum_{i\in I}e_i)= \omega(\sum_{i\in I} f(e_i))= \sum_{i\in I} \omega(f(e_i))$$ where the first equality uses the assumption and the second equality uses that $\{f(e_i)\}_i$ is again a family of orthogonal projections. My questions: (1) Do I need the assumption that $f$ is a $*$ -homomorphism, or is there another argument that avoids this? (2) Is the converse of the above true? I.e. if $f$ is normal,
|
As requested, here are my comments promoted to an answer: You should go through your argument again - you never need to use that $f$ is a $\ast$ -homomorphism in the first place if, for the second equality, instead of using that $\{f(e_i)\}_i$ is a family of orthogonal projections, you just use that $\omega$ is $\sigma$ -weakly and therefore also $\sigma$ -strongly continuous. As for the converse, it is definitely true when $f$ is positive, since an increasing net of positive elements converges $\sigma$ -weakly to $x$ iff it converges $\sigma$ -strongly to $x$ . Not sure whether this is generally true for bounded linear maps though.
|
|functional-analysis|operator-theory|operator-algebras|c-star-algebras|von-neumann-algebras|
| 1
|
An attempt for approximating the logarithm function $\ln(x)$: Could be extended for big numbers?
|
An attempt at approximating the logarithm function $\ln(x)$ : could it be extended to big numbers? PS: Thanks everyone for your comments and interesting answers showing how the logarithm function is currently calculated numerically, but so far nobody is answering the question I am actually asking, which is related to the first formula below: Is it correctly calculated? Could a formula for the logarithm of large numbers be found with it? Here by "big/large numbers" I mean the same sense in which Stirling's approximation formula approximates the factorial function at large values. Intro: On a previous question I found that the following approximation could be used: $$\ln\left(1+e^x\right)\approx \frac{x}{1-e^{-\frac{x}{\ln(2)}}},\ (x\neq 0) \quad \Rightarrow \quad \dfrac{\ln\left(1+x^{\ln(2)}\right)}{\ln\left(x^{\ln(2)}\right)} \approx \frac{x}{x-1}$$ And later I noted that I could do the following: $$\dfrac{\ln\left(1+x^{\ln(2)}\right)}{\ln(2)} \approx \frac{x\ln\lef
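A quick numerical check of the first approximation quoted above (just comparing the two sides at a few points; no claim about its behavior elsewhere):

```python
import math

# Compare ln(1 + e^x) with the approximation x / (1 - e^(-x / ln 2)), x != 0.
def approx(x):
    return x / (1 - math.exp(-x / math.log(2)))

for x in (0.5, 1.0, 2.0, 5.0, 10.0):
    exact = math.log(1 + math.exp(x))
    print(f"x={x:5.1f}  exact={exact:.6f}  approx={approx(x):.6f}")
```

The relative error stays well under a percent on this range.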
|
First, it should be noted that computers store floating-point numbers in what's essentially scientific notation, with a sign bit ( $+$ or $-$ ), a fixed-width "significand" or "mantissa", and an exponent indicating a power of two: $$x = (-1)^sm2^p, \quad s \in \{0, 1\},\ m \in [1, 2),\ p \in \mathbb{Z}$$ It's a bit more complicated than that in reality because zero, denormals, infinity, and NaN are special cases. But given that you're taking a logarithm, I'm assuming that you only care about positive finite numbers anyway. For these, we have: $$\log x = \log m + p\log 2$$ The upshot of this is that if you have a log function that's accurate on the interval $[1, 2]$ , you can easily construct one that's accurate $\forall x > 0$ . So it's not a problem if your approximation can't handle large $x$ 's "directly". (The snippet here is truncated in the source; the range reduction it describes can be sketched as follows, where `log_on_1_2` stands in for whatever approximation is accurate on $[1, 2]$ :)

```python
import math

LOG2 = 0.6931471805599453

def approx_log(x, log_on_1_2=math.log):
    # Split x = m * 2**p with m in [0.5, 1), then rescale so the mantissa is in [1, 2).
    m, p = math.frexp(x)
    return log_on_1_2(2 * m) + (p - 1) * LOG2
```

So let's implement your $f_1$ function.

```python
from math import log

def f1(x):
    y = x ** log(2)
    return (x - 1) * (x + y) / (x * (1 + y))
```
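As a quick sanity check that $f_1$ does track $\ln x$ (the function is redefined here so the snippet is self-contained):

```python
from math import log, e

def f1(x):
    y = x ** log(2)
    return (x - 1) * (x + y) / (x * (1 + y))

# At x = e and x = e^2 the exact values are 1 and 2.
for x in (e, e ** 2, 10.0):
    print(f"f1({x:.4f}) = {f1(x):.4f}, ln(x) = {log(x):.4f}")
```

The error grows slowly with $x$, which is exactly why the mantissa/exponent range reduction above is useful.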
|
|real-analysis|combinatorics|convergence-divergence|solution-verification|pochhammer-symbol|
| 0
|
Long exact sequence of the exponential short exact sequence and elements of $H^1(X,\mathcal{O}^*_X)$
|
I am trying to understand how to think about the elements of $H^1(X,\mathcal{O}^*_X)$ . I know one way is to think of it as the Picard group via the isomorphism. But when I try to "see" the elements of $H^1(X,\mathcal{O}^*_X)$ I get confused. We have $$ 0\to \mathbb{Z}\to \mathcal{O}_X\to \mathcal{O}^*_X\to 0$$ From this we get the long exact sequence $$\ldots \to H^1(X,\mathbb{Z})\to H^1(X,\mathcal{O}_X)\to H^1(X,\mathcal{O}^*_X)\to \ldots$$ So $H^1(X,\mathcal{O}^*_X):=\dfrac{\text{Ker}(\phi^1:(\mathcal{O}^*_X(X))^1\to (\mathcal{O}^*_X(X))^2)}{\text{Im}(\phi^0: \mathcal{O}^*_X(X)\to (\mathcal{O}^*_X(X))^1)}$ and the maps $\phi$ are from the resolution of $\mathcal{O}^*_X$ , i.e. $$0\to \mathcal{O}^*_X(X)\to (\mathcal{O}^*_X(X))^1\to (\mathcal{O}^*_X(X))^2\to \ldots$$ But I don't know what the sheaf $(\mathcal{O}^*_X(X))^1$ (and so on) might be. So how can I make sense of it? Should I just think of elements of $H^1(X,\mathcal{O}^*_X)$ as non-vanishing global sections?
|
Should I think about elements of $H^1(X, \mathcal{O}_X^*)$ as non-vanishing global sections? No, because that would be absolutely incorrect. Global sections of a sheaf $\mathcal{E}$ are given by $H^0(X,\mathcal{E})$ . Elements of $H^1(X,\mathcal{E})$ are not, inherently, geometric objects. At best they can be represented by geometric objects, in certain cases (like de Rham cohomology). But for a general sheaf that is not the case. So the most concrete way to think about $H^1(X,\mathcal{O}_X^*)$ is to represent its elements by the combinatorial objects which are used to compute it. In this case, that means we take an open cover with desirable properties $\{U_\alpha\}$ . Then an element $f\in H^1(X,\mathcal{O}_X^*)$ can be represented as follows. For each $U_\alpha\cap U_\beta$ , we take an element $f_{\alpha\beta}\in \mathcal{O}_X^*(U_\alpha\cap U_\beta)$ such that $f_{\alpha\beta}=1/f_{\beta\alpha}$ . The reason being that this is the condition which guarantees that $\delta f =
|
|complex-geometry|group-cohomology|sheaf-cohomology|
| 0
|
Hermitian inner product and basis change
|
The question here has the following set up: Let $\{e_i\}$ be a basis of a $\mathbb{C}$ -linear space $E$ and $h$ a Hermitian metric on $E$ . Denote by $H=\{h_{ij}\}$ the matrix of $h$ for $\{e_i\}$ , that is $$h_{ij}=h(e_i,e_j).$$ Let now $g\in GL(E)$ be a complex automorphism and consider the new basis $\{e'_i\}=\{ge_i\}$ . Denote by $H'=\{h'_{ij}\}$ the corresponding matrix. What is the relation between $H$ and $H'$ ? The answers give two different results for this, and I've been trying to figure out whether they are the same or whether something else is going on. The accepted answer states that $$ H' = g^\ast H g $$ and the latter one that $$ H'=g^TH\bar{g}. $$ If we consider $u,v \in E$ with $v=v^1e_1+\dots+v^ne_n$ and $u=u^1e_1+\dots+u^ne_n$ , then $$h(u,v)=u^ih(e_i,e_j)\bar{v}^j = \begin{bmatrix}u\end{bmatrix}^TH\begin{bmatrix}\bar{v}\end{bmatrix}$$ which would seem more like the latter one?
|
For the sake of legibility, $G$ is written instead of $g$ . For a scalar $a$ , $\bar{a}$ denotes the complex conjugate of $a$ . Also, $\overline{G}$ is defined by $\overline{G}_{i,j} = \overline{G_{i,j}}$ (conjugation only), whereas $G^\ast$ is defined by $G^{\ast}_{i,j} = \overline{G_{j,i}}$ (conjugation and transposition). If $h(., .)$ is linear in the first argument (scalar product following the convention in mathematics): $$\begin{align} H'_{i,j} &= h(G e_i, G e_j) \\ &= h(\sum_k G_{k,i} \cdot e_k, \sum_l G_{l,j} \cdot e_l) \\ &= \sum_{k,l} G_{k,i} \cdot H_{k,l} \cdot \overline{G_{l,j}} \\ &= (G ^T H \overline{G})_{i,j} \end{align}$$ If $h(., .)$ is linear in the second argument (scalar product following the convention in physics): $$\begin{align} H'_{i,j} &= h(G e_i, G e_j) \\ &= h(\sum_k G_{k,i} \cdot e_k, \sum_l G_{l,j} \cdot e_l) \\ &= \sum_{k,l} \overline{G_{k,i}} \cdot H_{k,l} \cdot G_{l,j} \\ &= (G^\ast H G)_{i,j} \end{align}$$
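Both conventions are easy to verify numerically. The sketch below checks the first-argument-linear case in pure Python (the Hermitian $H$ is manufactured as $A^{\ast}A + I$ from a random $A$, an arbitrary choice just for the test):

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A): return [list(c) for c in zip(*A)]
def conj(A): return [[z.conjugate() for z in row] for row in A]

random.seed(0)
n = 3
rnd = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))
G = [[rnd() for _ in range(n)] for _ in range(n)]
A = [[rnd() for _ in range(n)] for _ in range(n)]
H = matmul(conj(transpose(A)), A)          # A* A is Hermitian PSD ...
for i in range(n): H[i][i] += 1            # ... + I makes it positive definite

# Convention: h linear in the FIRST argument, h(u, v) = sum u_i H_ij conj(v_j).
def h(u, v):
    return sum(u[i] * H[i][j] * v[j].conjugate() for i in range(n) for j in range(n))

# e'_i = G e_i has coordinate vector equal to the i-th column of G.
col = lambda i: [G[k][i] for k in range(n)]
Hprime = [[h(col(i), col(j)) for j in range(n)] for i in range(n)]
expected = matmul(matmul(transpose(G), H), conj(G))   # G^T H G-bar
err = max(abs(Hprime[i][j] - expected[i][j]) for i in range(n) for j in range(n))
print(f"max |H' - G^T H conj(G)| = {err:.2e}")
```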
|
|linear-algebra|
| 1
|
How to evaluate $I=\int_0^{\frac{\pi}{2}}\sin^2x\ln(\sin^2(\tan x))dx$
|
$$I=\int_0^{\frac{\pi}{2}}\sin^2x\ln(\sin^2(\tan x))dx \hspace{15mm}(1)$$ Now, using the definite integral property $\int_a^bf(x)dx=\int_a^bf(a+b-x)dx$ : $$I=\int_0^{\frac{\pi}{2}}\cos^2x\ln(\sin^2(\cot x))dx\hspace{15mm}(2)$$ After the substitution $\tan x=t$ in $(1)$ and $\cot x=m$ in $(2)$ , we get: $$I=\int_0^{\infty}\frac{t^2}{(1+t^2)^2}\ln(\sin^2 t)dt=\int_0^{\infty}\frac{m^2}{(1+m^2)^2}\ln(\sin^2 m)dm$$ After a variable change and addition, I get: $$2I=\int_0^{\infty}\frac{x^2}{(1+x^2)^2}\ln(\sin^4 x)dx\implies \frac{I}{2}=\int_0^{\infty}\frac{x^2}{(1+x^2)^2}\ln(\sin x)dx$$ How could I proceed? Any other solutions which happen to be more efficient/simple?
|
After the substitution $\tan{x}\mapsto x$ , we get \begin{align} \mathscr{R} =&\int^\infty_0\frac{x^2}{(1+x^2)^2}\ln(\sin^2{x})\ {\rm d}x\\ =&\Re\int^\infty_{-\infty}\frac{x^2\ln(1-e^{i2x})}{(1+x^2)^2}{\rm d}x-\ln{2}\underbrace{\int^\infty_{-\infty}\frac{x^2}{(1+x^2)^2}{\rm d}x}_{\frac{\pi}{2}} \end{align} Even though the function $\displaystyle f(z)=\frac{z^2\ln(1-e^{i2z})}{(1+z^2)^2}$ has infinitely many branch points at $z=n\pi$ , once we close the contour along the upper semicircle $|z|=R$ and make small semicircular bumps around the branch points, one may check (by letting $z=n\pi+ \epsilon e^{i\theta}$ ) that the contribution along the bumps vanishes. The integral along the big arc also tends to $0$ . Hence \begin{align} \mathscr{R} =&2\pi i\,{\rm Res}(f,i)-\frac{\pi}{2}\ln{2} \end{align} Using WolframAlpha to compute the residue and simplify terms, $$\mathscr{R}=\frac{\pi}{e^2-1}-\pi-\frac{\pi}{2}\ln{2}+\frac{\pi}{2}\ln(e^2-1)$$
|
|calculus|integration|definite-integrals|
| 0
|
Book recommendation: A Calculus book with good balance of intuition and rigor
|
Can someone recommend a Calculus book that emphasizes and clarifies the intuition behind the theorems as much as possible, while at the same time being fairly rigorous? During my bachelor's in computer science, we used Stewart. I didn't like it, as it was far from rigorous. Besides, I don't just want a lot of numerical examples, but rather clarification of the concepts, with a minimum number of examples. I have read about 1/3 of Spivak, and to me it's an introduction to analysis, not Calculus. Although I understood almost all of what I read in Spivak, I felt that my intuition wasn't strengthened and that he could have given more insight into the ideas and concepts, not just mechanical proofs; I felt I didn't get the big picture. (For example, he gave no intuition behind the chain rule, where in fact there is a good one; also, he barely mentioned Riemann sums, which are important for intuition in my humble opinion.) So does such a book exist?
|
Short Calculus, Serge Lang. There are no epsilon-deltas, but this does not imply that the book is not rigorous. Lang learned this attitude from Emil Artin, around 1950.
|
|calculus|reference-request|book-recommendation|
| 0
|
Unsure of rigorous use of Axiom of Union - example exercise is Tao's Analysis I 3.5.1
|
I am concerned that I don't fully understand the Axiom of Union and how to use it rigorously. Exercise 3.5.1(iii) from Terence Tao's Analysis I provides an opportunity to use it: (iii) Show that regardless of the definition of ordered pair, the Cartesian product $X \times Y$ of any two sets $X,Y$ is again a set. (Hint: first use the axiom of replacement to show that for any $x \in X$ , $\{(x, y) : y \in Y\}$ is a set, and then apply the axiom of union.) Note that we can only use the axioms already introduced up to this point in the book. My Solution: Fix an $x' \in X$ and apply the Axiom of Replacement to $Y$ as follows. Consider the statement $P$ : $$ P(\,y, (x',y)\,) : (x',y) \text{ is an ordered pair with } y \in Y $$ This statement $P$ is true if $(x', y)$ is an ordered pair with $y \in Y$ . For every $y$ it is true for at most one ordered pair $(x', y)$ , therefore it is suitable for use with the Axiom of Replacement. This gives us a new set $S$ of ordered pairs: $$ S_{x'} = \{ (x
|
If you need to form the set union $\bigcup_{Z \in S}{Z}$ , or in common set theory notation, $\bigcup{S} = \bigcup{ \{\, Z \mid Z \in S \,\} }$ , you first need to have the set $S$ at hand. In standard $\mathsf{ZF}$ set theory, the axiom of union is $$\forall S \exists U \forall z ( z \in U \Longleftrightarrow \exists Z ( z \in Z \wedge Z \in S ) ).$$ This says that, given any set $S$ , you can form a union $U$ . You may think of $U$ being “constructed” in the following manner: take any set $Z \in S$ , pour all elements from $Z$ into $U$ , repeat steps 1 to 2 for all other elements of $S$ . You may prove that for any set $S$ , any unions $U_1$ and $U_2$ which satisfy equivalence in the union axiom must be equal via the axiom of extensionality (that sets are equal precisely when they contain the same elements). In broad strokes, the proof of the exercise proceeds as follows. Assume that for any objects* $x$ and $y$ , you have defined another object $(x, y)$ which exists uniquely. More a
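The replacement-then-union strategy can be mirrored informally in Python sets (purely as an illustration of the shape of the proof, not of formal $\mathsf{ZF}$):

```python
X, Y = {1, 2, 3}, {'a', 'b'}

# Replacement: for each x in X, form the set S_x = {(x, y) : y in Y} ...
family = {frozenset({(x, y) for y in Y}) for x in X}   # this plays the role of S

# ... then Union pours the elements of all the S_x into one set:
product = set().union(*family)

print(product == {(x, y) for x in X for y in Y})  # True
```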
|
|set-theory|
| 1
|
What is the number of ways of dividing $2n$ distinct balls into $n$ identical bins, such that each bin contains two balls?
|
What is the number of ways of dividing $2n$ distinct balls into $n$ identical bins, such that each bin contains two balls? My working was as follows: First select $n$ balls in $\binom{2n}{n}$ ways, then put one ball into each bin, and arrange it in a total of $n!$ possible ways, then, put the rest of the $n$ balls into the $n$ bins, in one way. This can be done in a total of $\binom{2n}{n} n!$ ways. But there seems to be double counting for each bin, so we need to divide by $2^{n}$ to get the correct answer, i.e, $\binom{2n}{n} \frac{n!}{2^n} $ . My question is how is this double counting happening? Also, more generally, is there a way to determine how to divide $kn$ distinct balls into $n$ identical bins such that each bin has $k$ balls?
|
Alternative approach: Index the balls $~B_1, B_2, \cdots, B_{2n}.~$ Use the following algorithm to partner the balls. Choose the ball of lowest index, whose partner-ball has not yet been decided, and select (at random) one of the remaining (as yet unpartnered) balls to partner with. So, for the first partnering, which involves ball $~B_1,~$ there are $~(2n-1)~$ choices, because there are $~(2n-1)~$ other balls that have not yet been partnered . Regardless of which ball is partnered with $~B_1~$ there will then be $~2(n-1)~$ balls that are still un-partnered. Whenever there are $~2r~$ balls un-partnered, there will be $~2r-1~$ choices for which other ball to partner with the ball of lowest index that has not yet been partnered. Therefore, the computation must be $$\prod_{r=1}^n (2r-1).$$ is there a way to determine how to divide $kn$ distinct balls into $n$ identical bins such that each bin has $k$ balls? Use the exact same approach as in the first problem. $~r~$ will (again) run from $
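The partnering algorithm, the product $\prod_{r=1}^n(2r-1)$, and the $\binom{2n}{n}\,n!/2^n$ count from the question can all be cross-checked by brute force for small $n$:

```python
from math import comb, factorial

def double_factorial_odd(n):
    # (2n-1)!! = 1 * 3 * 5 * ... * (2n-1)
    out = 1
    for r in range(1, n + 1):
        out *= 2 * r - 1
    return out

def count_pairings(balls):
    # Brute force: always partner the lowest-indexed unpartnered ball.
    if not balls:
        return 1
    rest = balls[1:]
    return sum(count_pairings([b for b in rest if b != partner]) for partner in rest)

for n in (1, 2, 3, 4):
    brute = count_pairings(list(range(2 * n)))
    closed = comb(2 * n, n) * factorial(n) // 2 ** n
    print(n, brute, double_factorial_odd(n), closed)  # all three columns agree
```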
|
|combinatorics|
| 1
|
Formulating a linear programming/optimization problem with a "soft" constraint
|
I have an optimization problem which I hope I can formulate as a linear program. The problem involves a vector $x$ of binary decision variables (so each entry of the $x$ -vector is either $0$ or $1$ ). The goal is to $$\text{maximize}\ c^Tx$$ where $c$ contains positive constants that are given by the problem. Furthermore I have a single constraint $$ d^Tx \le_{soft} K $$ where $d$ is a vector also containing positive constants and $K$ is an upper bound. Both are given by the problem. Now here is where my problem lies. The inequality in the constraint is not a normal inequality, meaning that the limit $K$ can be exceeded, hence I called it "soft" constraint. Let's say we choose $n$ of the $x$ 's as $1$ and the sum of all the corresponding $d$ 's does not exceed $K$ , then we are allowed to choose one more entry of the $x$ vector. So $$ \sum_{i=1}^{n} d_i\cdot x_i \le K \quad \text{but}\quad \sum_{i=1}^{n+1} d_i\cdot x_i \substack{\ge \\ \gt} K \\ $$ $$ \text{(this is illustrative as th
|
This can be represented as integer linear programming. But that's not what you asked. You asked if it can be formulated as a linear program. It is unlikely it can be represented as linear programming. Without the "soft" constraint, this is equivalent to the knapsack problem, which is NP-hard. I expect it will remain NP-hard even with the "soft" constraint. Linear programming can be solved in P. It is conjectured that P != NP. Therefore, you should not expect to be able to represent a NP-hard problem as a linear program. Integer linear program, yes. Linear program, no.
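For small instances one can still brute-force the "soft" variant. One possible formalization (an assumption on my part, matching the description that one more entry may be chosen once the budget is not yet exceeded): a set of chosen items is feasible if $d^Tx \le K$, or if removing some single chosen item brings the total back under $K$:

```python
from itertools import combinations

def solve_soft_knapsack(c, d, K):
    # Feasible if d.x <= K, or if dropping one chosen item gets d.x <= K
    # (i.e., the "last" item chosen was allowed to overshoot the budget).
    n = len(c)
    best, best_set = 0, ()
    for r in range(n + 1):
        for S in combinations(range(n), r):
            total_d = sum(d[i] for i in S)
            feasible = total_d <= K or any(total_d - d[i] <= K for i in S)
            if feasible:
                value = sum(c[i] for i in S)
                if value > best:
                    best, best_set = value, S
    return best, best_set

c = [10, 7, 5, 3]   # values (made-up example data)
d = [6, 4, 3, 2]    # weights
print(solve_soft_knapsack(c, d, K=8))  # (22, (0, 1, 2))
```

Of course this enumeration is exponential, which is consistent with the NP-hardness point above.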
|
|optimization|linear-programming|mixed-integer-programming|
| 0
|
Adjoint of the identity is the fractional Laplacian?
|
I want to show that the adjoint of the identity $i: H_0^s(\Omega)\rightarrow L^2(\Omega), x\mapsto x$ , where $s>0$ is arbitrary, is the fractional Laplacian, i.e. $(-\Delta)^{s}$ , where $(-\Delta)^s$ is defined by measurable functional calculus. We assume $\Omega$ to be open and bounded. Any ideas to help me? What I know is that if $\Omega=\mathbb{R}^d$ then we could use the definition of $H_0^s(\Omega)$ via the Fourier transform. But such a definition is not available if $\Omega$ is assumed to be open and bounded. However, I think one might identify elements in $H_0^s(\Omega)$ via zero extension with elements of $H^s(\mathbb{R}^d)$ .
|
It depends what you mean by 'adjoint'. The first possibility is the adjoint in the sense of dual spaces. Then $i^*:(L^2)^* \to (H^s)^*$ . If one identifies $L^2=(L^2)^*$ , then $i^*(f)$ applied to $u\in H^s$ is $i^*(f)(u) = \int_\Omega f\cdot u$ . The second possibility is the Hilbert space adjoint. The definition of $i^*$ is $\langle i(u),v\rangle_{L^2} = \langle u,i^*(v) \rangle_{H^s}$ for all $u\in H^s$ , $v\in L^2$ . In order to compute $i^*(v)$ , one has to solve the variational problem $$ \langle u,i^*(v) \rangle_{H^s} = \langle i(u),v\rangle_{L^2} = \int_\Omega uv\ dx \quad \forall u\in H^s. $$ The left-hand side is the inner product in $H^s$ , induced by the fractional Laplacian. In this sense, $i^*(v) = (-\Delta^s)^{-1}v$ .
|
|functional-analysis|sobolev-spaces|fourier-transform|laplacian|fractional-sobolev-spaces|
| 0
|
Constructive proof of a statement about a property of a Lebesgue-null set
|
I was studying some measure theory and upon searching for additional exercises on the internet I came upon one that said: Given $H$ a Borel set subset of the real numbers, prove that if $\lambda(H)=0$ (its Lebesgue measure) then there exists a number $\alpha$ such that $\alpha + H$ , the set of all the reals of the form $\alpha + h$ where $h \in H$ , is a subset of the irrational numbers. I had no problems in proving this statement, but the proof that I came up with is by contradiction and thus does not find the wanted number (which is not even unique, since it's provable that the set of all such real numbers is a Borel set with measure $+\infty$ ). My question is, is it possible to find a constructive proof of this statement? Or maybe is it possible to find such a number for a special non trivial (like one made only of irrational or rational numbers) null set, for example the Cantor set?
|
(Of course it depends a bit on what exactly 'constructive' should mean, but) take for example $H :=$ the set of all computable (or arithmetical, or analytical, ...) real numbers, which is countable, hence Borel and of measure $0$ . Then, since the rationals are computable and $H$ is a field, $\alpha$ cannot belong to $H$ , so the existence of $\alpha$ is in particular the existence of a non-computable number, which can be considered non-constructive.
|
|analysis|measure-theory|
| 0
|
Allegedly: the existence of a natural number and successors does not imply, without the Axiom of Infinity, the existence of an infinite set.
|
The Claim: From a conversation on Twitter, from someone whom I shall keep anonymous (pronouns he/him though), it was claimed: [T]he existence of natural numbers and the fact that given a natural number $n$ , there is always a successor $(n+1)$ , do not imply the existence of an infinite set. You need an extra axiom for that. It was clarified that he meant the Axiom of Infinity. The Question: Is the claim true? Why or why not? Context: I like how, if true, it goes against the idea that, if you just keep adding one to something, you'll get something infinite. This is beyond me. Searching for an answer online lead to some interesting finds, like this . To add context, then, I'm studying for a PhD in Group Theory. I have no experience with this sort of foundational question. I'm looking for an explanation/refutation. To get some idea of my experience with playing around with axioms, see: What is the minimum number of axioms necessary to define a (not necessarily commutative) ring (that doe
|
The answer of ac15 is correct and to the point. It helps to clarify the underlying philosophical terminology. Namely, there are two types of infinity, in a sense. If one says Take $n$ , form its successor $n + 1$ , and its successor $(n + 1) + 1$ in turn, and proceed , the described non-ending process may be viewed as a potential infinity . This is something that never ends without being a concrete whole. Other examples: when defining a formal theory, you may think of the set of syntactic variables as potentially infinite (there are always more variables than you need); or you may take the tape of a Turing machine to be potentially infinite (the tape never runs out but need not be actually infinite in size); and so on. When one adds This does not imply the existence of an infinite set , they are talking about completed or actual infinities . The set of natural numbers taken as a whole is (has the property of being) a completed infinity, as are $\mathbb{R}$ , $\aleph_
|
|set-theory|infinity|axioms|foundations|peano-axioms|
| 0
|
Matrices and differentiation commute
|
Suppose for simplicity we have a plane curve $\gamma(t)=(f(t),g(t))$ . I'm just curious exactly what is the property responsible for the fact that, if $R_{90}$ is the two dimensional rotation matrix by $90$ degrees counterclockwise, then $$\frac{d}{dt}\left(R_{90}\left(\gamma(t)\right)\right)=R_{90}\left(\frac{d}{dt}(\gamma(t))\right).$$ I'm thinking this only applies to matrices $R$ that are linear isomorphisms (and for when the multiplication makes sense). What is the general property at work here?
|
Note that $$R_{90}=\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},$$ so $$R_{90}(\gamma(t))=(-g(t),f(t)).$$ Then as we take derivatives: $$\frac{d}{dt}R_{90}(\gamma(t))=(-g'(t),f'(t))=R_{90}(f'(t),g'(t))=R_{90}\left(\frac{d}{dt}\gamma(t)\right).$$ Of course, Andrew's answer illustrates a more general case.
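The commutation is also easy to see numerically with central differences. The curve $\gamma(t)=(t^2,\sin t)$ below is just an arbitrary smooth example; any curve works, since the matrix is constant:

```python
import math

R90 = [[0.0, -1.0], [1.0, 0.0]]

def apply(R, v):
    return [R[0][0] * v[0] + R[0][1] * v[1],
            R[1][0] * v[0] + R[1][1] * v[1]]

def gamma(t):
    return [t ** 2, math.sin(t)]

def deriv(f, t, h=1e-6):
    # central finite difference, componentwise
    a, b = f(t - h), f(t + h)
    return [(b[0] - a[0]) / (2 * h), (b[1] - a[1]) / (2 * h)]

t = 0.7
lhs = deriv(lambda s: apply(R90, gamma(s)), t)   # d/dt (R90 . gamma)
rhs = apply(R90, deriv(gamma, t))                # R90 . (d/dt gamma)
print(lhs, rhs)  # agree up to floating-point error
```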
|
|calculus|linear-algebra|matrix-calculus|
| 0
|
Operator norm and Hilbert Schmidt norm
|
I'm looking for a proof of \begin{equation} \|T\|\leq \|T\|_{HS}, \end{equation} for which it is sufficient to show \begin{equation} \|Tx\| \leq \|x\| \cdot \|T\|_{HS} \quad \forall x\in H,\ x\neq 0. \end{equation} Can someone help?
|
Or you can use the property of adjoint operators together with Cauchy-Schwarz: $$\left \|Tx \right \|^{2}=\sum_i \left|\left \langle Tx,e_{i} \right \rangle\right|^{2}=\sum_i \left|\left \langle x,T^{\ast }e_{i} \right \rangle\right|^{2}\leq \left\|x\right\|^2\sum_i\left \|T^{\ast }e_{i} \right \|^{2}=\left\|x\right\|^2\left \|T^{\ast } \right \|_{HS}^{2}=\left\|x\right\|^2\left \|T\right \|_{HS}^{2}$$
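For intuition, the inequality can be checked numerically for a random real matrix: power iteration on $T^{T}T$ estimates the operator norm, which is then compared with the Hilbert-Schmidt norm (a dependency-free sketch; note that each iterate already gives a lower bound $\le \|T\|$, so the comparison is safe even before full convergence):

```python
import math, random

random.seed(1)
n = 4
T = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

# Hilbert-Schmidt norm: sqrt of the sum of squared entries.
hs = math.sqrt(sum(T[i][j] ** 2 for i in range(n) for j in range(n)))

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

Tt = [[T[j][i] for j in range(n)] for i in range(n)]

# Power iteration on T^T T: its largest eigenvalue is ||T||^2.
v = [1.0] * n
lam = 0.0
for _ in range(1000):
    w = matvec(Tt, matvec(T, v))
    lam = math.sqrt(sum(x * x for x in w))
    v = [x / lam for x in w]
op = math.sqrt(lam)
print(f"operator norm ~ {op:.4f} <= HS norm = {hs:.4f}")
```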
|
|functional-analysis|operator-theory|hilbert-spaces|normed-spaces|
| 0
|
How to prove consistency with choice for large cardinal extensions?
|
How can we know if an extension of $\sf ZF$ by some large cardinal property that results in a consistency strength beyond $0^{\#}$ is compatible with choice or not? I mean the easiest way to know if an extension of $\sf ZF$ is compatible with choice is to construct a model of it in $L$ . But if any extension goes beyond $0^{\#}$ then no model of it can be constructed in $L$ , and so there is no uniform way to prove its compatibility with choice. Is there something that takes the place of $L$ for those high kinds of extending $\sf ZF$ ? Is it $\sf HOD$ ? Or perhaps other known models in which choice hold. Also, are there known models that can interpret partial forms of choice, like $\sf CC$ and $\sf DC$ principles at such altitudes?
|
This is the territory of inner model theory . The issue with $\mathsf{HOD}$ is that it's far too flexible: there's generally no easy way to argue (and indeed it's usually false) that passing to $\mathsf{HOD}$ preserves large cardinals. So the mere fact that $\mathsf{HOD}^\mathcal{M}\models\mathsf{ZFC}$ whenever $\mathcal{M}\models\mathsf{ZF}$ doesn't help us much. Instead, we need a more explicit construction. This is where mice and their ilk enter the picture ( Schimmerling's The ABCs of mice is the closest thing I know to a readable intro to mouse theory). However, it's also worth noting that the definitions of large cardinal notions often rely on choice in subtle ways. For example, it's (we hope!) consistent with $\mathsf{ZF}$ that $\omega_1$ is measurable via the club filter. Similarly, we currently don't know whether "If a Reinhardt is consistent with $\mathsf{ZF}$ then it's consistent with $\mathsf{ZFC}$ " is correct (we know the conclusion is false but the hypothesis could be tr
|
|set-theory|first-order-logic|axiom-of-choice|large-cardinals|
| 1
|
All actions of $SO(3)$ on $S^2$
|
There are two obvious (smooth left) actions of $\mathrm{SO}(3)$ on $S^2$ . There is the standard action by which $\mathrm{SO}(3)$ acts by 3D rotations on the standard embedding of $S^2$ in $\mathbb{R}^3$ , and there is the trivial action in which $\mathrm{SO}(3)$ leaves the points of $S^2$ unchanged. It feels like these are the only two options. Is that intuition correct?
|
No, there are other smooth actions: Fix $g\in\mathrm{SO}(3)$ and denote by $(h,x)\mapsto h.x$ the standard action. Then $$ \mathrm{SO}(3) \times\mathbb S^2\to\mathbb S^2,\;\;(h,x)\mapsto ghg^{-1}.x $$ is also a smooth left action, which is not the trivial action and in general different from the standard one.
|
|geometry|differential-geometry|group-actions|principal-bundles|
| 1
|
Clarification on Sign in Mean Curvatures of Parallel Surfaces
|
For my Differential Geometry class, I've encountered an issue with a problem from do Carmo. Although I'm aware that similar questions have been previously addressed, my issue differs. I understand how to solve the problem but am encountering a specific challenge. Firstly, let me outline the problem: Let $\mathbf{x} = \mathbf{x}(u,v)$ be a regular parametrized surface. A parallel surface to $\mathbf{x}$ is a parametrized surface $$\overline{\mathbf{y}}(u,v) = \mathbf{x}(u,v) + a \mathbf{N}(u,v),$$ where $a$ is a real constant and $\mathbf{N}$ denotes the unit normal to $\mathbf{x}$ . (b) Show that the Gaussian and mean curvatures $\overline{K}$ and $\overline{H}$ of $\mathbf{y}$ are respectively given by $$\overline{K} = \frac{K}{1 - 2Ha + Ka^2},$$ and $$\overline{H} = \frac{H - Ka}{1 - 2Ha + Ka^2}.$$ From question (a), we derived: $$\overline{\mathbf{y}}_u \times \overline{\mathbf{y}}_v = (1 - 2 H a + K a^2) (\mathbf{x}_u \times \mathbf{x}_v).$$ This implies that the unit normal $\over
|
I would say it is largely a convention that $1-2Ha+Ka^2$ is positive. But it makes sense from an evolution point of view as $a$ changes. When $a=0$ , the parallel surface is the original surface. When $a$ is small, then $1-2Ha+Ka^2>0$ . As you said, $$ \bar {\bf y}_u \times \bar {\bf y}_v = (1-2Ha+Ka^2) \, ({\bf x}_u \times {\bf x}_v). $$ For the surface $\bar {\bf y}$ to be regular, we need the above to be nonzero. So from an evolution point of view, the coefficient $1-2Ha+Ka^2$ starts at $1$ and must stay positive before the parallel surface becomes singular. (See also the comment of Shifrin above.)
|
|differential-geometry|surfaces|curvature|
| 1
|
Summation notation in an expectation formula
|
Formula: $$ \begin{aligned} \mathbb{E}(N) & =\sum_{n \geq 1} \overbrace{n}^{\sum_{m=1}^n 1} \mathbb{P}(N=n) \\ & =\sum_{1 \leq m \leq n} \mathbb{P}(N=n)=\sum_{1 \leq m} \overbrace{\sum_{m \leq n} \mathbb{P}(N=n)}^{\mathbb{P}(N \geq m)}=\sum_{1 \leq m} \mathbb{P}(N \geq m) \end{aligned} $$ Question: I don't understand how $\sum_{n \geq 1} \overbrace{n}^{\sum_{m=1}^n 1} \mathbb{P}(N=n)$ is equal to $\sum_{1 \leq m \leq n} \mathbb{P}(N=n)$ . Also, the notation below the sum in $\sum_{1 \leq m \leq n}$ doesn't make sense to me. How exactly are we summing, and how does it follow more directly from $\sum_{n \geq 1} \overbrace{n}^{\sum_{m=1}^n 1} \mathbb{P}(N=n)$ ? And what does the little sum above $n$ serve to explain? Because for me, it is just the expectation formula (given the context, $N$ takes integer values), so why do we also need to see that $n$ as a sum? The second one, $\sum_{1 \leq m} \sum_{m \leq n}$ , makes more sense with what the sum actually is, like when we are at the iteration $m=1$ a
|
In this context, I interpret $\sum_{1\le m\le n}$ as being a double sum over two indices $m$ and $n$ . I don't love that notation though. I would write this calculation as $$ \mathbb{E}(N) = \sum_{n=1}^\infty \mathbb{P}(N=n) \cdot n = \sum_{n=1}^\infty \mathbb{P}(N=n) \sum_{m=1}^n 1 = \sum_{m=1}^\infty \sum_{n=m}^\infty \mathbb{P}(N=n) =\sum_{m=1}^\infty \mathbb{P}(N \geq m). $$
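The interchange of sums is easy to verify numerically for a concrete distribution; here is a sketch using the (illustrative, not from the question) pmf $\mathbb{P}(N=n)=2^{-n}$, truncated at a large cutoff:

```python
# Check E(N) = sum_{m>=1} P(N >= m) for P(N = n) = (1/2)^n, n >= 1,
# where E(N) = 2.  Truncate at M; the tails are negligible by then.
M = 60
pmf = {n: 0.5 ** n for n in range(1, M + 1)}

lhs = sum(n * p for n, p in pmf.items())                                   # E(N)
rhs = sum(sum(pmf[n] for n in range(m, M + 1)) for m in range(1, M + 1))   # sum of tails
print(lhs, rhs)  # both ~ 2, and equal to each other up to rounding
```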
|
|probability|summation|expected-value|random-walk|random|
| 1
|
Here in this exercise I'm trying to compute P(lim sup $A_n$) and P(lim inf $A_n$) where $A_n=\{n, n+1, n+2,\ldots\}$, which is a sequence of sets
|
Let $\Omega=\mathbb{N}$ and let $B$ be a sigma algebra of subsets of $\mathbb{N}$ so that $(\mathbb{N},B)$ is a measurable space. Let $P$ be a function defined on $B$ , i.e. $P: B\to[0,1]$ , a probability measure such that for any singleton $A=\{n\}$ , $P(\{n\})=\frac{1}{n(n+1)}$ . Let $A_n=\{n,n+1,n+2,\ldots\}$ . Compute $P(\liminf A_n)$ and $P(\limsup A_n)$ . This is my attempt
|
If $k$ belongs to $A_n$ for infinitely many $n$ then $k \ge n$ for infinitely many $n$ , which is impossible. Hence, no point belongs to $A_n$ for infinitely many $n$ . This makes $\liminf A_n \subseteq \limsup A_n =\emptyset$ , and the empty set has probability $0$ .
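As a complement: even though each individual $P(A_n)$ is positive, it tends to $0$, consistent with $P(\limsup A_n)=0$. Since $\frac{1}{k(k+1)}=\frac{1}{k}-\frac{1}{k+1}$ telescopes, $P(A_n)=\sum_{k\ge n}\frac{1}{k(k+1)}=\frac1n$, which can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

# Partial sums telescope: sum_{k=n}^{M-1} 1/(k(k+1)) = 1/n - 1/M,
# so letting M -> infinity gives P(A_n) = 1/n  (and total mass 1 for n = 1).
M = 1000
for n in (1, 2, 5, 10):
    s = sum(Fraction(1, k * (k + 1)) for k in range(n, M))
    assert s == Fraction(1, n) - Fraction(1, M)
print("partial sums telescope exactly; P(A_n) = 1/n")
```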
|
|probability|measure-theory|
| 0
|
Cofinal subset equivalent to unbounded subset?
|
As stated in the title, given a poset $(S,\leq)$ I think it's trivial that an unbounded subset $A \subseteq S$ is cofinal, but does the opposite implication hold? Definition 1. A subset $X$ of a poset $(S,\leq)$ is said to be cofinal if: $\forall s \in S \; \exists x \in X \; s \leq x$ . Definition 2. A subset $X$ of a poset $(S,\leq)$ is said to be bounded if: $\exists s \in S \; \forall x \in X \; x \leq s$ .
|
Neither implication holds. In $(\mathbb{N}_+, \mid)$ the subset $\{ 2^n : n \in \mathbb{N} \}$ is unbounded, but not cofinal. In $(\mathcal{P}(\mathbb{N}), \subseteq)$ the subset $\{ \mathbb{N} \}$ is cofinal, but bounded.
|
|set-theory|order-theory|
| 1
|
Is there a problem if I don't use $0$ in Peano arithmetic?
|
Peano arithmetic is the following list of axioms (along with the usual axioms of equality) plus the induction schema. $\forall x \ (0 \neq S ( x ))$ $\forall x, y \ (S( x ) = S( y ) \Rightarrow x = y)$ $\forall x \ (x + 0 = x )$ $\forall x, y \ (x + S( y ) = S( x + y ))$ $\forall x \ (x \cdot 0 = 0)$ $\forall x, y \ (x \cdot S ( y ) = x \cdot y + x )$ In my country's general education, the natural numbers do not include $0$ . Is there a problem if I write this as follows? $\forall x \ (1 \neq S ( x ))$ $\forall x, y \ (S( x ) = S( y ) \Rightarrow x = y)$ $\forall x \ (x + 1 = S(x) )$ $\forall x, y \ (x + S( y ) = S( x + y ))$ $\forall x \ (x \cdot 1 = 1)$ $\forall x, y \ (x \cdot S ( y ) = x \cdot y + x )$
|
If you want a Peano-like axiomatization of the positive integers, then that's possible, but what you've written is not it. Axiom 5 should say $\forall x\,(x \cdot 1 = x)$ , instead. Otherwise, you get an operation $\cdot$ that acts a bit like multiplication, but is not commutative (we can prove that $S(1) \cdot 1 = 1$ but $1 \cdot S(1) = S(1)$ , for example), instead of having the axioms describe what we think of as "normal" arithmetic. After you have corrected axiom 5, there is no problem with those axioms. (Essentially, all you've done is provided a different base case for how multiplication works.) ...but there is a problem calling them "the Peano axioms", because they are not. For the purposes of speaking with the greater mathematical community, the Peano axioms are a specific thing, and you will get confused. If there is some nationwide standardized exam on which you have to repeat that $0$ is not a natural number, then of course you do that - but you don't have to contort yoursel
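The non-commutativity the answer points out can be checked directly with a toy model of the problematic axioms over the positive integers (this is an illustration, not a formal derivation in the theory):

```python
# Model the axioms  x.1 = 1  and  x.S(y) = x.y + x  over the positive integers.
def S(x):
    return x + 1

def mul(x, y):
    if y == 1:
        return 1              # the problematic base case, mirroring x.0 = 0
    return mul(x, y - 1) + x  # x.S(y') = x.y' + x

print(mul(S(1), 1))  # 1   (S(1).1 = 1 by the base case)
print(mul(1, S(1)))  # 2   (1.S(1) = 1.1 + 1 = S(1))
```

So this "multiplication" already fails to be commutative on $\{1, S(1)\}$.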
|
|arithmetic|first-order-logic|axioms|peano-axioms|
| 1
|
All Integer solutions to $2006=(x+y)^{2}+3x+y$
|
How do I find all the solutions to $2006=(x+y)^{2}+3x+y$ Where $x,y \in \mathbb{Z}$ One such solution I found was $(x,y)=(1000,-998)$ . I did it by the following, $2006+3y=(x+y)^{2}+3(x+y)+y$ $2(1003+y)=(x+y)(x+y+3)$ . Which upon solving the system, we will get $x=1000$ and $y=-998$ . How can I continue to find other solutions? 'Cause I noticed that $(13,31)$ also works but I'm not sure where that's derived from, and I have a feeling that there's more.
|
There are in fact infinitely many solutions. We re-write the equation as a quadratic in $y$ : $$y^2 + (2x+1)y + x^2+3x-2006 = 0.$$ Since we seek integer solutions, the discriminant must be a perfect square: $$(2x+1)^2 - 4(x^2+3x-2006) = -8x+8025=n^2.$$ We need $n$ to be odd, so we write $n=2k+1$ and rearrange to get $$x = \frac{1}{8}(8024-4k-4k^2) = \frac{1}{2}(2006-k-k^2)$$ and thus $$y = \frac{-(2x+1)\pm n}{2} = k-x\text{ or }-x-k-1.$$ Substituting in our expression for $x$ , we have $$(x,y)=\left(\frac{1}{2}(2006-k-k^2),\frac{1}{2}(-2006+3k+k^2)\right)\text{ or }\left(\frac{1}{2}(2006-k-k^2),\frac{1}{2}(-2008-k+k^2)\right),\qquad k \in \mathbb{N}.$$
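A quick numeric check of the two parametric families (a sketch; the helper name `solutions` is mine, and the range of $k$ is an arbitrary window):

```python
def solutions(k_range):
    # the two parametric families derived above, k an integer
    sols = set()
    for k in k_range:
        x = (2006 - k - k*k) // 2            # always an integer: k + k^2 is even
        sols.add((x, (k*k + 3*k - 2006) // 2))
        sols.add((x, (k*k - k - 2008) // 2))
    return sols

sols = solutions(range(-50, 51))
# every pair really solves 2006 = (x + y)^2 + 3x + y
assert all((x + y)**2 + 3*x + y == 2006 for (x, y) in sols)
# the two solutions mentioned in the question show up (k = 2 and k = 44)
assert (1000, -998) in sols and (13, 31) in sols
```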
|
|elementary-number-theory|
| 1
|
What's the probability that water will reach the destination?
|
I'm doing practice problems in a book for studying purposes and I'm just stumped on how to solve this one. I've tried everything I can think of. Water from source to destination can go through 8 different pumping stations (see Figure). Each of the 16 pipes lets the water through with probability p. What's the probability that water will reach the destination?
|
There seems to be some dispute as to the problem statement, so I'll present my interpretation and solve that one. Of course, my interpretation might be wrong. My Interpretation: There is a big source of water, capable of delivering water through all eight stations simultaneously, if all the stations happen to be working. Each station has an incoming and an outgoing pipe. Each of those pipes is open with probability $p$ , independently of all the other pipes. We are asking for the probability that water makes it from Source to Destination, hence we are asking for the probability that there is at least one station for which both incoming and outgoing pipes are open. Solution: Let's compute the probability that all stations fail. A given station is working with probability $p^2$ (you need both pipes open). Thus a given station fails with probability $1-p^2$ . Thus all eight stations fail simultaneously with probability $(1-p^2)^8$ . It follows that the desired result is $$\boxed{1-(1-p^2)^8}$$
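Under this interpretation the closed form can be checked by exact enumeration over all $2^{16}$ pipe configurations (a sketch; the function name is mine):

```python
from itertools import product

def exact_prob(p):
    """Sum the probability of every pipe configuration in which at
    least one station has both its incoming and outgoing pipe open."""
    total = 0.0
    for pipes in product((0, 1), repeat=16):
        weight = 1.0
        for is_open in pipes:
            weight *= p if is_open else 1 - p
        # pipes[2*i] and pipes[2*i + 1] belong to station i
        if any(pipes[2*i] and pipes[2*i + 1] for i in range(8)):
            total += weight
    return total

for p in (0.3, 0.7, 0.9):
    assert abs(exact_prob(p) - (1 - (1 - p**2)**8)) < 1e-12
```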
|
|probability|
| 0
|
Are there any unanswered questions regarding math behind Rubik's cubes?
|
I'm a student who has to do either a capstone(as in release a product of some sort) or research project as part of graduating. Right now I'm planning on doing something related to computer science and Rubik's cubes, or at least math and Rubik's cubes. I'm wondering if there are any questions that might be unknown, or not have been looked at very much regarding them.
|
Given the open-endedness of this question, I hope you don't mind if I answer with an anecdote. When I was an undergraduate at MIT, I took abstract algebra. The professor was the legendary Michael Artin. At the time I was reading Adventures in Group Theory by David Joyner, and in the book, it stated that if you take any two random elements from the Rubik's Cube group, there is about a 50% probability that those two elements will generate the whole group. But the book didn't really offer a proof of that. So, I thought to myself, surely Professor Artin would have some insight here! I went to one of his 1-1 office hours. I had never gone to one before, so this was the first time talking to him directly. When it was my turn, before I could introduce myself, he said "Wait! Let me see if I can recall your name. You're timidpueo!" I said yes, but how did you know? Then he pulled out a ring of flashcards with every undergraduate student's name and picture in it. "I try to remember all my students."
|
|computer-science|rubiks-cube|
| 0
|
How to choose initial (x0,y0) point to approximate a solution of a system of non-linear equations using newton method?
|
I'm studying Newton's method for solving a system of non-linear equations. A system of non-linear equations, one of whose solutions is $(x^*, y^*)$ : \begin{equation*} \left\{ \begin{aligned} f(x,y) = 0 \\ g(x,y) = 0 \end{aligned} \right. \end{equation*} can be approximated by a system of linear equations using Newton's expansion around a chosen starting point $(x_0, y_0)$ , whose solution is a point $(x_1, y_1)$ closer to $(x^*, y^*)$ than $(x_0, y_0)$ : \begin{equation*} \left\{ \begin{aligned} f_x(x_0, y_0)(x-x_0) + f_y(x_0,y_0)(y-y_0) = -f(x_0,y_0) \\ g_x(x_0, y_0)(x-x_0) + g_y(x_0,y_0)(y-y_0) = -g(x_0,y_0) \end{aligned} \right. \end{equation*} The process can be repeated as many times as wished, obtaining points $(x_{k+1}, y_{k+1})$ that converge to $(x^*, y^*)$ with each iteration. I got the theory, but I'm struggling to apply it to a real example. Take: \begin{equation*} \left\{ \begin{aligned} x^2 + y^2 - 2 = 0 \\ x^2 - y^2 - 1 = 0 \end{aligned} \right. \end{equation*}
|
I will use my answer to Newton method to solve nonlinear system as a guide and you can see another example using this method. The regular Newton-Raphson method is initialized with a starting point $x_0$ and then you iterate $\tag 1x_{n+1}=x_n-\dfrac{f(x_n)}{f'(x_n)}$ In higher dimensions, there is an exact analog. We define: $$F\left(\begin{bmatrix}x_1\\x_2\end{bmatrix}\right) = \begin{bmatrix}f_1(x_1,x_2) \\ f_2(x_1,x_2) \end{bmatrix} = \begin{bmatrix}x_1^2 + x_2^2-2 \\ x_1^2-x_2^2 -1 \end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}$$ The derivative of this system is the $2\times 2$ Jacobian given by: $$J(x_1,x_2) = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} \\ \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} \end{bmatrix} = \begin{bmatrix} 2x_1 & 2 x_2\\ 2 x_1 & -2x_2\end{bmatrix}$$ The function $G$ is defined as: $$G(x) = x - J(x)^{-1}F(x)$$ and the functional Newton-Raphson method for nonlinear systems is given by the iteration $x_{n+1} = G(x_n)$ .
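For this particular system the $2\times 2$ linear solve can be done by hand, which gives a compact iteration (a sketch; the function name and the starting point $(1,1)$ are my choices):

```python
def newton2(x, y, tol=1e-12, max_iter=50):
    """Newton-Raphson for f1 = x^2 + y^2 - 2 = 0, f2 = x^2 - y^2 - 1 = 0."""
    for _ in range(max_iter):
        f1 = x*x + y*y - 2
        f2 = x*x - y*y - 1
        # solve J * (dx, dy) = (f1, f2) with J = [[2x, 2y], [2x, -2y]]:
        # adding and subtracting the two rows gives the closed forms below
        dx = (f1 + f2) / (4 * x)
        dy = (f1 - f2) / (4 * y)
        x, y = x - dx, y - dy
        if abs(dx) < tol and abs(dy) < tol:
            break
    return x, y

x, y = newton2(1.0, 1.0)
# the positive-quadrant root satisfies x^2 = 3/2, y^2 = 1/2
assert abs(x*x - 1.5) < 1e-10 and abs(y*y - 0.5) < 1e-10
```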
|
|numerical-methods|newton-raphson|numerical-calculus|
| 1
|
Resource to implement Block Lanczos in python;.
|
I am trying to implement GNFS in python and I wanted to make it as fast as possible just because. I am trying to find a resource that will help me implement Block Lanczos but so far haven't seen many papers on the subject. Can someone please direct me to a good one that will help me somewhat understand and implement the algorithm? Thanks.
|
There's an old implementation in C with extensive comments at https://web.archive.org/web/20200114212239/http://www.friedspace.com/QS/QS-2.0/lanczos.c as part of SIMPQS. It is adapted from Jason Papadopoulos's implementation, presumably an earlier instance of https://github.com/radii/msieve/tree/master/common/lanczos .
|
|group-theory|number-theory|prime-factorization|python|
| 0
|
Find the radius of the circle tangent to $x^2, \sqrt{x}, x=2$
|
I'm taking up a post that was closed for lack of context because I'm very interested in it : Let $(a,b)$ be the center of this circle. It seems intuitive that $b=a$ , but I have not been able to prove it formally, although I know that two reciprocal functions are symmetric with respect to the first bisector $y=x$ . Then let $(X,X^2)$ the point of tangency with $y=x^2$ . I think we're going to use the formula for the distance from $(a,a)$ to the line $y-X^2=2X(x-X)$ . We have obviously the relation $r=2-a$ . The normal to $(X,X^2) $ passes through $(a,a)$ I'm not sure if my notations are the best to elegantly solve the exercise. I hope you will share my enthusiasm for this lovely exercise that I have just discovered thanks to MSE.
|
I let $(t,t^2)$ be one tangent point, $(s^2,s)$ another, and $(2,y_2)$ the third. The first three equations express that these points are on the circle; the next two compare, with scaling factor $l_2$ , the equations of the tangent to the circle at $(t,t^2)$ and the tangent to the parabola $y=x^2$ at the same point. The scaling factor $l_3$ does the same for $(s^2,s)$ on $x=y^2.$ The last equation says that $y_2=k$ , i.e. that the $y$ -coordinate of the last tangent point equals that of the center, which is essentially the condition that this is a tangent point for the circle and $x=2.$ Then taking the Gröbner basis in M2 with an elimination order, keeping only elements involving $h,k,r$ , and feeding them to the Maxima CAS solver, I get the eight real solutions pictured, of which five are tangent to the positive branch of $\sqrt{x}$ . I also get these eight repeated with negative $r$ , and lots of non-real solutions (40). Details added R=QQ[l2,l3,s,t,y2,h,k,r,MonomialOrder=>Eliminate 5] S=R[x,y] (x+2-h)^2+y^2-r^2 toString oo (x+s^2-h)^2+(y+s-
|
|geometry|functions|analytic-geometry|
| 0
|
One cannot place the numbers 1, 2, 3, 4, 5, 6, and 7 to make the sum of the numbers on the hexagon equal to 7 times the number in the center.
|
Question Prove by contradiction the following statement: One cannot place the numbers 1, 2, 3, 4, 5, 6, and 7 to make the sum of the numbers on the hexagon equal to 7 times the number in the center. Let a, b, c, d, e, f be the numbers on the hexagon, and g the number in the center. Write down an equation that corresponds to the negation of the statement, then try to reach a contradiction. Note: This statement can be easily proved by explicitly trying all possible values for g. Such proof is not going to get credit. What I did I am confused on the equation that this is asking for. I think it would have been possible with brute force, though it is not allowed in the question. Does it mean x=7p where x is the sum of the outside corners and p the number in the middle?
|
Note $1+2+\cdots+7=28$ . Let $x$ be the number in the middle; then the six numbers on the hexagon sum to $28-x$ , so we would need $$28-x=7x,$$ i.e. $x=7/2$ , which has no integer solution.
|
|linear-algebra|geometry|discrete-mathematics|
| 1
|
One cannot place the numbers 1, 2, 3, 4, 5, 6, and 7 to make the sum of the numbers on the hexagon equal to 7 times the number in the center.
|
Question Prove by contradiction the following statement: One cannot place the numbers 1, 2, 3, 4, 5, 6, and 7 to make the sum of the numbers on the hexagon equal to 7 times the number in the center. Let a, b, c, d, e, f be the numbers on the hexagon, and g the number in the center. Write down an equation that corresponds to the negation of the statement, then try to reach a contradiction. Note: This statement can be easily proved by explicitly trying all possible values for g. Such proof is not going to get credit. What I did I am confused on the equation that this is asking for. I think it would have been possible with brute force, though it is not allowed in the question. Does it mean x=7p where x is the sum of the outside corners and p the number in the middle?
|
Yes, $x=7p$ is the equation they want, provided you substitute $x$ with $a+b+c+d+e+f$ and $p$ with $g$ . As for proving the claim, note that for it to be true we need $a+b+c+d+e+f\equiv 0 \pmod 7$ . Consider the sum of the numbers from $1$ to $7$ , which is $28$ . Subtracting from this any number from $1$ to $7$ corresponds to a particular choice of $a,b,c,d,e,f,g$ . Since $28$ is itself a multiple of $7$ , subtracting any number from $1$ to $6$ results in a number not divisible by $7$ , so the only possible choice for the centre number is $7$ itself. But then the sum from $1$ to $6$ is $21$ , which is certainly not equal to $7\times 7$ , proving the claim.
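Although the exercise forbids brute force as a proof, a short enumeration is a handy sanity check of the claim (a sketch):

```python
from itertools import permutations

# p[0:6] are the hexagon corners, p[6] is the centre; no placement works
assert not any(sum(p[:6]) == 7 * p[6] for p in permutations(range(1, 8)))

# equivalently, since the corners always sum to 28 - g for centre g
assert all(28 - g != 7 * g for g in range(1, 8))
```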
|
|linear-algebra|geometry|discrete-mathematics|
| 0
|
If $\varphi|_W$ and $\bar{\phi}$ are nonsingular prove that $\varphi$ is nonsingular.
|
Here is the question I am trying to answer (the second and the third part of it): If $W$ is a subspace of the vector space $V$ stable under the linear transformation $\varphi$ (i.e., $\varphi(W) \subseteq W$ ), show that $\varphi$ induces linear transformations $\varphi |_{W}$ on $W$ and $\bar{\varphi}$ on the quotient vector space $V/W.$ If $\varphi|_W$ and $\bar{\varphi}$ are nonsingular prove that $\varphi$ is nonsingular. Prove the converse holds if $V$ has finite dimension and give a counterexample with $V$ infinite dimensional. I know that a linear transformation $f$ is nonsingular iff $\ker f = 0,$ but still I do not know how to prove the second and the third part of the above question. Any hint will be greatly appreciated. Edit: Here is my trial depending on the given comments below: Since we know that a linear transformation $f$ is nonsingular iff $\ker f = 0,$ and since we are given that $\ker \bar{\varphi} = 0$ and $\ker \varphi|_W =0,$ and since my definition of $\bar{\varphi}$ is $\bar{\varphi}(x+W)=\varphi(x)+W$
|
An addition for part 3 above: Suppose $V=\mathbb R^\infty$ and $\varphi((x_1,x_2,\cdots))=(0,x_1,x_2,\cdots)$ . Then $\ker(\varphi)=0$ , that is, $\varphi$ is nonsingular. Take $W=\{(0,x_2,x_3,x_4,\cdots)\mid x_i\in\mathbb R,i\ge2\}$ ; then $V/W=\{(x_1,0,\cdots)+W\mid x_1\in\mathbb R\}$ . Since $\varphi(V)\subseteq W$ , we get $\bar\varphi(V/W)=0$ and $\ker(\bar\varphi)=V/W$ , that is, $\bar\varphi$ is not nonsingular.
|
|linear-algebra|abstract-algebra|ring-theory|vector-spaces|modules|
| 0
|
Do We Need the Axiom of Choice for Finite Sets?
|
So I am relatively familiar with the Axiom of Choice and a few of its equivalent forms (Zorn's Lemma, surjective implies right invertible, etc.) but I have never actually taken a set theory course. I know there are times when an explicit way of choosing some elements may not be provided, and instead we call upon the Axiom of Choice to do it for us. But one thing has always bothered me. I have heard that for a finite set we do not need the Axiom of Choice to choose an element. I find this a little confusing. How is choosing an element from a finite set any different than choosing from an infinite set? I have heard before "since the set is finite, there are finitely many orders, so order the set and choose the first element in the order." But how does one choose what order to use? This involves picking an element from a finite set, i.e. it presumes the conclusion. I'm guessing there is a good reason why everyone keeps insisting that the Axiom of Choice is not needed to choose from finite sets.
|
This exactly shows the difference between classical mathematics and constructive mathematics. In constructive mathematics, when you say $A$ is non-empty, it means "assuming $A$ is empty leads to a contradiction". But when you say $A$ is inhabited, it means "you can construct an element of it". For a good example, consider $$A=\{x \in \{0,1,2,\ldots,9\}: x \text{ repeats infinitely often in the expansion of } \pi\}$$ $A$ is obviously non-empty. But it is not obviously inhabited, because to say it is inhabited you should exhibit exactly which $x$ in $\{0,1,2,\ldots,9\}$ repeats infinitely often in the expansion of $\pi$ , which is not so easy. But in classical mathematics, these two notions are the same. The question you asked shows that you probably think constructively rather than classically, so I recommend you look into the area of constructivism. You may enjoy that world.
|
|set-theory|axiom-of-choice|
| 0
|
Discriminant of the Kummer extension of p-adics
|
Let $u\in \mathbb{Z}_p^*$ be a unit in the ring of $p$ -adic integers. Assume that $u^{1/p}\not\in \mathbb{Q}_p$ , in other words $u$ is not a $p$ -power. Define by $K:=\mathbb{Q}_p(u^{1/p})$ . What is the discriminant of $K$ ? The discriminant of the defining polynomial $f(x):=x^p-u$ is $\Delta(f)=(-1)^{p(p-1)/2}p^pu^{p-1}$ , and I know that $\Delta(K/\mathbb{Q}_p)|\Delta(f)$ , however can one explicitly compute $\Delta(K/\mathbb{Q}_p)$ ? I think the question reduces to how do I find an integral basis for $\mathcal{O}_K$ , or how do I find a uniformizer for $\mathcal{O}_K$ ?
|
You'll need to consider whether or not $u^{p-1} \equiv 1 \bmod p^2$ . This "Wieferich prime" criterion comes up in the number field setting in order to determine whether $\mathbf Q(\sqrt[p]{a})$ has ring of integers $\mathbf Z[\sqrt[p]{a}]$ when $p$ is prime and $a$ is a squarefree integer other than $\pm 1$ : see the last theorem here . It is also relevant in the local case. You tell us that $u$ is not a $p$ th power in $\mathbf Q_p$ . That implies $x^p - u$ is irreducible over $\mathbf Q_p$ (over all fields, even those of characteristic $p$ , $x^p - a$ is irreducible when $a$ is not a $p$ th power in the field). Let $K = \mathbf Q_p(\sqrt[p]{u})$ , so $[K:\mathbf Q_p] = p$ . Since " $n=ef$ " in local fields, when $n = p$ either $e = p$ or $f = p$ . So $K$ is either totally ramified or unramified. Theorem . If $u^{p-1} \not\equiv 1 \bmod p^2$ then $K/\mathbf Q_p$ is totally ramified with prime element $\sqrt[p]{u}-u$ and ring of integers $\mathbf Z_p[\sqrt[p]{u}]$ . Proof . The polyno
|
|algebraic-number-theory|p-adic-number-theory|discriminant|kummer-theory|
| 1
|
Exercise 1.1.17 from West
|
Here is exercise 1.1.17 from Introduction to Graph Theory by Douglas B. West 1.1.17. Prove that $K_n$ has three pairwise-isomorphic subgraphs such that each edge of $K_n$ appears in exactly one of them if and only if n is congruent to $0$ or $1$ modulo $3$ . I am trying to formalize a proof for this exercise, but struggle simplifying my arguments. I would appreciate if you could pinpoint where I should be clearer or where I am wrong. Here is what I've sketched so far. Sketch The number of edges for any given n-vertex clique $K_n$ is $n(n-1)/2$ . If $3 | n+1$ then $3\nmid n(n-1)$ . Therefore, $e(K_n)$ is divisible by three if and only if $n+1$ is not divisible by three. To us this means that the edge set of a n-vertex clique can be partitioned into three equally sized subsets of edges given that $n(n-1)$ is divisible by three. First, consider the case where $n$ is divisible by three . Now, imagine we are reconstructing a clique with three pairwise isomorphic subgraphs. And observe how t
|
Consider a complete graph $K_n=(V,E)$ where $n\gt1$ . If $n\not\equiv2\pmod3$ there is a permutation $\sigma$ of $V$ of order $3$ with at most one fixed point; the cycle decomposition of $\sigma$ consists of $k$ $3$ -cycles if $n=3k$ , or $k$ $3$ -cycles and a $1$ -cycle if $n=3k+1$ . The permutation $\sigma$ of $V$ induces a permutation $\tau$ of $E$ which maps the edge $vw$ to the edge $\sigma(v)\sigma(w)$ . Note that $\tau$ is a permutation of order $3$ with no fixed points, its cycle decomposition consisting entirely of $3$ -cycles. Choose one edge in each $3$ -cycle of $\tau$ and color it red. If an edge $e$ is colored red, then color $\tau(e)$ white and $\tau^2(e)$ blue. Thus each edge of $K_n$ has a unique color, red, white, or blue; so the graph $K_n$ is decomposed into a red graph, a white graph, and a blue graph. Finally, the permutation $\sigma$ maps the red graph isomorphically to the white graph, the white graph isomorphically to the blue graph, and the blue graph isomorphically to the red graph; in particular, the three graphs are pairwise isomorphic.
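The construction above can be carried out explicitly for small $n$ (a sketch; the function name is mine, and $n=6$ is an arbitrary test case):

```python
from itertools import combinations

def three_color_Kn(n):
    """Decompose the edges of K_n (n = 0 or 1 mod 3) into three classes
    using an order-3 vertex permutation sigma as described above."""
    assert n % 3 != 2
    sigma = list(range(n))
    for s in range(0, n - n % 3, 3):            # 3-cycles (s, s+1, s+2)
        sigma[s], sigma[s + 1], sigma[s + 2] = s + 1, s + 2, s
    tau = lambda e: frozenset(sigma[v] for v in e)   # induced edge map
    color = {}
    for e in map(frozenset, combinations(range(n), 2)):
        if e not in color:
            # tau has no fixed edges, so every orbit has size exactly 3
            color[e], color[tau(e)], color[tau(tau(e))] = 0, 1, 2
    return sigma, color

sigma, color = three_color_Kn(6)
classes = [{e for e, c in color.items() if c == i} for i in range(3)]
assert all(len(cl) == 15 // 3 for cl in classes)     # equal-sized classes
# sigma maps the class-0 graph isomorphically onto the class-1 graph
assert {frozenset(sigma[v] for v in e) for e in classes[0]} == classes[1]
```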
|
|graph-theory|graph-isomorphism|
| 0
|
Proof that the Function $f(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!} $ Satisfies $ f(x+y) = f(x)f(y) $
|
I am attempting to prove that the function $ f(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!} $ satisfies the functional equation $f(x+y) = f(x)f(y) $ for all real $ x $ and $y$ , without relying on the knowledge that $ f(x) $ is the exponential function or using Taylor series expansions. I have been unable to establish the relationship rigorously.It may not be correct what i tried and i may have an error in all equations, but i have tried using the binomial theorem and limit theorems. We know that $ f(x+y) = \sum_{n=0}^{\infty} \frac{(x+y)^n}{n!} $ , so by the binomial theorem, it becomes: $ \sum_{i=0}^{n} \left( \sum_{k=0}^{i} \frac{x^{k-i} y^i}{i!(k-i)!} \right) $ Similarly, I represented $ f(x)f(y) $ as: $ \sum_{i=0}^{n} \left( \sum_{k=0}^{n} \frac{y^k x^i}{k! i!} \right) $ Then, I considered their difference in module and tried to find ways to show that they can become arbitrarily close. Another approach I tried was induction. I wanted to prove that $ \sum_{i=0}^{n} \left( \sum_{k=0}^{n}
|
Math Werenski already gave an answer using the Cauchy product, so I'll try proving it using differentiation. First of all, it's possible to prove that $f'(x) = f(x)$ using the standard definition of the derivative: $$ \begin{align} f'(x) &=\lim_{\epsilon\rightarrow 0}\frac{f(x+\epsilon)-f(x)}{\epsilon} = \lim_{\epsilon \rightarrow 0} \sum_{n=1}^{\infty}\frac{(x+\epsilon)^n-x^n}{n!\,\epsilon}\\ &= \lim_{\epsilon \rightarrow 0} \sum_{n=1}^{\infty}\frac{\sum_{k=0}^{n-1}x^{k}(x+\epsilon)^{n-1-k}}{n!} = \lim_{\epsilon \rightarrow 0} \sum_{n=0}^{\infty}\frac{\sum_{k=0}^{n}x^{k}(x+\epsilon)^{n-k}}{(n+1)!}\\ \Rightarrow f'(x)-f(x) &= \lim_{\epsilon \rightarrow 0} \sum_{n=0}^{\infty}\frac{\sum_{k=0}^{n-1}\left(x^{k}(x+\epsilon)^{n-k}-x^n\right)}{(n+1)!}\\ &= \lim_{\epsilon \rightarrow 0} \;\epsilon \,\sum_{n=0}^{\infty}\frac{\sum_{k=0}^{n-1}\sum_{l=0}^{n-k-1}x^{k+l}(x+\epsilon)^{n-k-l-1}}{(n+1)!} = 0 \end{align} $$ Then we have $\frac{d}{dx}\big(f(x)f(-x)\big) = f'(x)f(-x)-f(x)f'(-x) = f(x)f(-x) - f(x)f(-x) = 0$ , hence $f(x)f(-x)$ is constant, equal to $f(0)^2=1$ . Similarly, for any fixed $y$ , $\frac{d}{dx}\big(f(x+y)f(-x)\big)=f(x+y)f(-x)-f(x+y)f(-x)=0$ , so $f(x+y)f(-x)$ is constant in $x$ , equal to its value at $x=0$ , namely $f(y)$ . Multiplying by $f(x)$ and using $f(x)f(-x)=1$ gives $f(x+y)=f(x)f(y)$ .
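Independently of the proof, the identity can be sanity-checked numerically with partial sums of the series (a sketch; 60 terms is an arbitrary truncation, more than enough for these arguments):

```python
from math import factorial

def f(x, terms=60):
    # partial sum of the series defining f
    return sum(x**n / factorial(n) for n in range(terms))

for x, y in [(0.5, 1.3), (-2.0, 3.0), (1.0, 1.0)]:
    assert abs(f(x + y) - f(x) * f(y)) < 1e-9
```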
|
|real-analysis|calculus|sequences-and-series|
| 0
|
Why is $\sup f_- (n) \inf f_+ (m) = \frac{5}{4} $?
|
Let $f_- (n) = \Pi_{i=0}^n ( \sin(i) - \frac{5}{4}) $ And let $ f_+(m) = \Pi_{i=0}^m ( \sin(i) + \frac{5}{4} ) $ It appears that $$\sup f_- (n) \inf f_+ (m) = \frac{5}{4} $$ Why is that so ? Notice $$\int_0^{2 \pi} \ln(\sin(x) + \frac{5}{4}) dx = Re \int_0^{2 \pi} \ln (\sin(x) - \frac{5}{4}) dx = \int_0^{2 \pi} \ln (\cos(x) + \frac{5}{4}) dx = Re \int_0^{2 \pi} \ln(\cos(x) - \frac{5}{4}) dx = 0 $$ $$ \int_0^{2 \pi} \ln (\sin(x) - \frac{5}{4}) dx = \int_0^{2 \pi} \ln (\cos(x) - \frac{5}{4}) dx = 2 \pi^2 i $$ That explains the finite values of $\sup $ and $ \inf $ .. well almost. It can be proven that both are finite. But that does not explain the value of their product. Update This is probably not helpful at all , but it can be shown ( not easy ) that there exist a unique pair of functions $g_-(x) , g_+(x) $ , both entire and with period $2 \pi $ such that $$ g_-(n) = f_-(n) , g_+(m) = f_+(m) $$ However i have no closed form for any of those ... As for the numerical test i got about $ln
|
Taking logarithms lets us write $$\ln(\sup f_-(n)\inf f_+(m))=\sup\ln f_-(n)+\inf\ln f_+(m).$$ (We will assume that $n$ is odd so that $f_-(n)>0$ ). Let's begin by computing $\ln f_+(m)$ . The Fourier expansion $$\ln(\sin(x)+5/4)=\sum_{k=1}^\infty\frac{-2\cos(k(x+\pi/2))}{2^kk}$$ let us write $\ln f_+(m)$ as \begin{align*} \ln f_+(m)&=\ln\prod_{j=0}^m(\sin(j)+5/4)\\ &=\sum_{j=0}^m\ln(\sin(j)+5/4)\\ &=\sum_{j=0}^m\sum_{k=1}^\infty\frac{-2\cos(k(j+\pi/2))}{2^kk}\\ &=-\sum_{k=1}^\infty\frac{1}{2^kk}\sum_{j=0}^m2\cos(k(j+\pi/2))\\ &=-\sum_{k=1}^\infty\frac{1}{2^kk}\sum_{j=0}^me^{ik(j+\pi/2)}+e^{-ik(j+\pi/2)}\\ &=-\sum_{k=1}^\infty\frac{1}{2^kk}\left[e^{ik\pi/2}\frac{1-e^{ik(m+1)}}{1-e^{ik}}+e^{-ik\pi/2}\frac{1-e^{-ik(m+1)}}{1-e^{-ik}}\right]\\ &=-\sum_{k=1}^\infty\frac{1}{2^kk}\left[e^{ik\pi/2}\frac{e^{-ik/2}-e^{ik(m+\frac{1}{2})}}{e^{-ik/2}-e^{ik/2}}+e^{-ik\pi/2}\frac{e^{ik/2}-e^{-ik(m+\frac{1}{2})}}{e^{ik/2}-e^{-ik/2}}\right]\\ &=\sum_{k=1}^\infty\frac{e^{ik\pi/2}(e^{-ik/2}-e^{ik(m+\frac
|
|calculus|geometry|fractions|limsup-and-liminf|products|
| 1
|
Question About Proving The Countable Subadditivity of $\mu^*:\mathcal{P}(\mathbb{R})\to[0,+\infty]$
|
Let $F:\mathbb{R}\to\mathbb{R}$ be a bounded, nondecreasing, and right-continuous function that satisfies $\lim_{x\to-\infty}F(x)=0$ . Define a function $\mu^*:\mathcal{P}(\mathbb{R})\to[0,+\infty]$ by letting $\mu^*(A)$ be the infimum of the set of sums $\sum_{n=1}^{\infty}(F(b_n)-F(a_n))$ , where $\{(a_n,b_n]\}$ ranges over the set of sequences of half-open intervals that cover $A$ , in the sense that $A \subseteq \bigcup_{n=1}^{\infty}(a_n,b_n]$ . I need to prove the following: If $\{A_n\}$ is an infinite sequence of subsets of $\mathbb{R}$ , then $\mu^*(\bigcup_nA_n)\leq\sum_n\mu^*(A_n)$ . I would like to proceed in the following way: Let $\{A_n\}_{n=1}^{\infty}$ be an arbitrary sequence of subsets of $\mathbb{R}$ . If $\sum_n\mu^*(A_n)=+\infty$ , then $\mu^*(\bigcup_nA_n) \leq \sum_n\mu^*(A_n)$ certainly holds. So suppose that $\sum_n\mu^*(A_n)<+\infty$ , and let $\epsilon$ be an arbitrary positive number. For each $n$ choose a sequence $\{(a_{n,i},b_{n,i}]\}_{i=1}^{\infty}$ that covers $A_n$
|
$\mu^{*}(A_n)$ is by definition the infimum of that set of sums, so for each $n$ you can take a cover from that set whose sum is smaller than $\mu^{*}(A_n)+\epsilon/2^{n}$ . The union of these covers is a cover of $\bigcup_n A_n$ , so summing over $n$ gives $\mu^*(\bigcup_n A_n)\leq\sum_n\mu^*(A_n)+\epsilon$ , and letting $\epsilon\to0$ finishes the proof.
|
|real-analysis|sequences-and-series|measure-theory|solution-verification|proof-writing|
| 1
|
Does a metric space need to be a Hausdorff space?
|
I need to prove that: Let $X$ , $Y$ be compact metric spaces, $f:X \to Y$ be continuous and bijective. Then $f$ is a homeomorphism. But I have been investigating, and there's no such theorem, because it says that $Y$ needs to be a Hausdorff space. Can someone explain to me why $Y$ needs to be a Hausdorff space? Thanks.
|
In a metric space, basic open sets are open balls. Now pick two points $a\neq b$ from $Y$ ; in a metric space you can measure the distance between them, say it is $\delta (>0)$ . Construct the open balls $B(a;\frac{\delta}{4})$ (centered at $a$ ) and $B(b;\frac{\delta}{4})$ . Is there anything in their intersection? If some point $c$ were in both, the triangle inequality would give the absurd conclusion $\delta \leq d(a,c)+d(c,b) < \frac{\delta}{2}$ !!! So the two balls are disjoint, and every metric space is Hausdorff.
|
|general-topology|metric-spaces|separation-axioms|
| 0
|
Prove that a multivariable function has an absolute minimum yet no absolute maximum on a open set
|
Define $f(x,y)=xy+\frac{3}{x}+\frac{4}{y}$ on the set $\{(x,y):x,y>0\}$ . How could it be proven that $f$ has an absolute minimum but no absolute maximum on this set. One idea which I had was to use a theorem which states roughly: "Let $f$ be a continuous function on an unbounded closed set $S \subset \mathbb R^n.$ If $f(x)\to+\infty$ as $|x|\to\infty$ , then $f$ has an absolute minimum but no absolute maximum on S." The function seems to be continuous on this set. Regardless it seems to be inapplicable to the above case since the set $\{(x,y):x,y>0\}$ is open.
|
To show there is no absolute maximum, you can show that for any $M\in\mathbb{R}$ there exists a pair $(x_1,y_1)\in A:=\{(x,y):x,y>0\}$ such that $f(x_1,y_1)>M$ . In fact you do not need both $x,y$ to be free. For example, $f(1,y)=y+3+\dfrac{4}{y}$ already has no absolute maximum (try! Use single-variable methods to show there is no absolute maximum). To show there is an absolute minimum, an intuitive guess is that it should also be a local minimum, as $A$ contains none of its 'boundary'. Then applying standard multivariable optimization tricks, you should solve $$\textbf{x}=(\hat{x},\hat{y})=\left(\sqrt[3]{\dfrac{9}{4}},\sqrt[3]{\dfrac{16}{3}}\right)\tag{*}$$ which is also the unique critical point in $A$ . To show this is a local minimum, using the Hessian or the definition works. Then since $f$ is continuous over $A$ , $f$ must also attain its absolute minimum at that point in this region. Edit 2: Thanks for clarifying again. Let's use another approach. Note that as $(x_1,y_1)\to(x,y)\in\partial A$ , $f(x_1,y_1)\to+\infty$
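The critical point in $(*)$ can be verified numerically (a sketch; the $0.01$ perturbation size is an arbitrary choice):

```python
x0 = (9 / 4) ** (1 / 3)
y0 = (16 / 3) ** (1 / 3)

def f(x, y):
    return x * y + 3 / x + 4 / y

# the gradient (y - 3/x^2, x - 4/y^2) vanishes at (x0, y0)
assert abs(y0 - 3 / x0**2) < 1e-9
assert abs(x0 - 4 / y0**2) < 1e-9
# nearby points give strictly larger values, consistent with a local minimum
for dx in (-0.01, 0.0, 0.01):
    for dy in (-0.01, 0.0, 0.01):
        if (dx, dy) != (0.0, 0.0):
            assert f(x0 + dx, y0 + dy) > f(x0, y0)
```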
|
|real-analysis|multivariable-calculus|
| 0
|
Universal Quantification over Disjunction
|
I know that $\forall x [R(x) \rightarrow S(x)]$ is NOT equivalent to $\forall x R(x) \rightarrow \forall x S(x)$ , because universal quantification does not distribute over disjunction. However, it seems to me that $$(*) \forall x [R(x) \rightarrow S(x)] \rightarrow [\forall x R(x) \rightarrow \forall x S(x)]$$ just intuitively. (For context, I am studying downward-entailments in linguistic semantics.). How can I prove (*)?
|
Hint: One way in which you can proceed is by using a semantic argument (i.e., by using models) and showing that $(*)$ is valid in any model that you take: in any model where $\forall x [R(x) \rightarrow S(x)]$ and $\forall x R(x)$ both hold, every element $a$ satisfies $R(a)$ and $R(a) \rightarrow S(a)$ , hence $S(a)$ , so $\forall x S(x)$ holds. Then, apply Gödel's completeness theorem. However, if you want a syntactic proof, you must give us your axioms first, because $(*)$ is taken as an axiom in many books.
|
|first-order-logic|quantifiers|natural-deduction|
| 0
|
small side question (field theory, galois groups)
|
why does $\mathbb{Q}(3^{1/8},\zeta)=\mathbb{Q}(3^{1/8},2^{1/2}, i)$ ? I know that $\mathbb{Q}(3^{1/8},\zeta)=\mathbb{Q}(3^{1/8},3^{1/8}e^{2\pi i/8},3^{1/8}e^{4\pi i/8},3^{1/8}e^{6\pi i/8},3^{1/8}e^{8\pi i/8},3^{1/8}e^{10\pi i/8},3^{1/8}e^{12\pi i/8},3^{1/8}e^{14\pi i/8})$ $=\mathbb{Q}(3^{1/8},3^{1/8}e^{i\pi/4},3^{1/8}e^{i\pi/2},3^{1/8}e^{3i\pi/4},-3^{1/8},3^{1/8}e^{5i\pi/4},3^{1/8}e^{3i\pi/2},3^{1/8}e^{7i\pi/4})$ , but that's all.
|
$=\mathbb{Q}(3^{1/8},3^{1/8}e^{i\pi/4},3^{1/8}e^{i\pi/2},3^{1/8}e^{3i\pi/4},-3^{1/8},3^{1/8}e^{5i\pi/4},3^{1/8}e^{3i\pi/2},3^{1/8}e^{7i\pi/4})$ $=\mathbb{Q}(3^{1/8},3^{1/8}\left[\frac{\sqrt2}{2}+i\frac{\sqrt2}{2}\right],3^{1/8}i,3^{1/8}\left[-\frac{\sqrt2}{2}+i\frac{\sqrt2}{2}\right],-3^{1/8},3^{1/8}\left[-\frac{\sqrt2}{2}-i\frac{\sqrt2}{2}\right],-3^{1/8}i,3^{1/8}\left[\frac{\sqrt2}{2}-i\frac{\sqrt2}{2}\right])$ $=\mathbb{Q}(3^{1/8},2^{1/2},i)$
|
|field-theory|galois-theory|
| 0
|
small side question (field theory, galois groups)
|
why does $\mathbb{Q}(3^{1/8},\zeta)=\mathbb{Q}(3^{1/8},2^{1/2}, i)$ ? I know that $\mathbb{Q}(3^{1/8},\zeta)=\mathbb{Q}(3^{1/8},3^{1/8}e^{2\pi i/8},3^{1/8}e^{4\pi i/8},3^{1/8}e^{6\pi i/8},3^{1/8}e^{8\pi i/8},3^{1/8}e^{10\pi i/8},3^{1/8}e^{12\pi i/8},3^{1/8}e^{14\pi i/8})$ $=\mathbb{Q}(3^{1/8},3^{1/8}e^{i\pi/4},3^{1/8}e^{i\pi/2},3^{1/8}e^{3i\pi/4},-3^{1/8},3^{1/8}e^{5i\pi/4},3^{1/8}e^{3i\pi/2},3^{1/8}e^{7i\pi/4})$ , but that's all.
|
It's enough to show that $\Bbb Q(\zeta) = \Bbb Q(\sqrt 2, i)$ and since $\displaystyle \zeta = \frac{\sqrt 2}{2}+ \frac {\sqrt 2}{2} i$ , it's obvious that $\zeta \in \Bbb Q(\sqrt 2, i)$ , so $\Bbb Q(\zeta) \subseteq \Bbb Q( \sqrt 2, i)$ . To prove the other containment, just note that $\displaystyle \zeta^{-1}=\frac{\sqrt 2}{2}- \frac {\sqrt 2}{2} i$ , so $\zeta+ \zeta^{-1}= \sqrt 2 \in \Bbb Q( \zeta)$ , and $\displaystyle \frac{1}{\sqrt 2} \left ( \zeta - \zeta^{-1} \right ) = i \in \Bbb Q( \zeta)$ .
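The two identities at the end can be checked numerically with $\zeta = e^{i\pi/4}$ (a sketch):

```python
import cmath

zeta = cmath.exp(1j * cmath.pi / 4)    # primitive 8th root of unity
assert abs(zeta + zeta**-1 - 2**0.5) < 1e-12           # zeta + zeta^-1 = sqrt(2)
assert abs((zeta - zeta**-1) / 2**0.5 - 1j) < 1e-12    # (zeta - zeta^-1)/sqrt(2) = i
```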
|
|field-theory|galois-theory|
| 0
|
Find the radius of the circle tangent to $x^2, \sqrt{x}, x=2$
|
I'm taking up a post that was closed for lack of context because I'm very interested in it : Let $(a,b)$ be the center of this circle. It seems intuitive that $b=a$ , but I have not been able to prove it formally, although I know that two reciprocal functions are symmetric with respect to the first bisector $y=x$ . Then let $(X,X^2)$ the point of tangency with $y=x^2$ . I think we're going to use the formula for the distance from $(a,a)$ to the line $y-X^2=2X(x-X)$ . We have obviously the relation $r=2-a$ . The normal to $(X,X^2) $ passes through $(a,a)$ I'm not sure if my notations are the best to elegantly solve the exercise. I hope you will share my enthusiasm for this lovely exercise that I have just discovered thanks to MSE.
|
Some solutions have their center on the line $y = x$ . The reverse is not true, as shown by Jan-Magnus Økland. The reason is that $y = \sqrt{x}$ is the inverse function of $y = x^2$ for $x\geq 0$ , hence its plot is the reflection of the plot of $y = x^2$ over the line $y = x$ . Hence any circle centered on the line $y = x$ that touches $y = x^2$ also touches $y = \sqrt{x}$ . The line $x = 2$ can then be made tangent by moving the center of the circle along the line $y = x$ . Let $T(x_t,x^2_t)$ be the tangent point between the circle and the parabola. The tangent vector at $T$ is $(1,2x_t)$ , and the radius through $T$ is orthogonal to it, consequently: $$(a-x_t) + (a-x^2_t)2x_t=0 \Rightarrow a = x_t\frac{1+2x^2_t}{1+2x_t}$$ The radius of the circle is $$r=\sqrt{(a-x_t)^2+(a-x^2_t)^2} = \left|2 - a\right|$$ A bit of transformation and we get: $$(2x^3_t-2x^2_t)^2+(x_t-x^2_t)^2=(2x^3_t-3x_t-2)^2$$ Unfortunately this equation doesn't have a nice closed-form solution, so you'll have to solve it numerically. Here's what I found on Wolfram Alpha: $x_t\approx -0.441229$ ;
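The tangency equation can be solved numerically with a simple bisection (a sketch; the bracket $[-0.5,-0.4]$ was chosen by inspecting the sign of the polynomial at its endpoints):

```python
def g(x):
    # (2x^3 - 2x^2)^2 + (x - x^2)^2 - (2x^3 - 3x - 2)^2 vanishes at a tangency
    return (2*x**3 - 2*x**2)**2 + (x - x**2)**2 - (2*x**3 - 3*x - 2)**2

def bisect(f, a, b, tol=1e-12):
    # assumes f(a) and f(b) have opposite signs
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return (a + b) / 2

root = bisect(g, -0.5, -0.4)
assert abs(root - (-0.441229)) < 1e-4    # matches the value reported above
```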
|
|geometry|functions|analytic-geometry|
| 0
|
how do you find the initial amount of a decay problem when you dont have one in the problem?
|
Suppose a sample of a certain substance decayed to $65.2\%$ of its original amount after $300$ days. What is the half-life (in days) of this substance? (Round your answers to two decimal places.) I just can't see how to get all the values for the decay-rate formula: $M(t) = M(0)e^{kt}$
|
You cannot recover the initial amount. Luckily, you do not need it. SOLUTION 1 : $M(t) = M(0)e^{kt} \tag{1}$ $M(t)/M(0) = e^{kt} \tag{2}$ We are given that $M(300)$ is $65.2 \%$ of $M(0)$ . Hence $M(300)/M(0)=65.2/100$ . Plug that into (2) to get $k$ via $\log()$ (the natural logarithm): $0.652 = e^{k \times 300} \tag{3}$ $\log (0.652) = k \times 300 \tag{4}$ You should get $k \approx -0.0014257$ . Now calculate $t$ with $M(t)/M(0)=50\%$ , using $50\%=0.5$ and the other known values: $0.50 = e^{-0.0014257 \times t} \tag{5}$ $\log (0.50) = -0.0014257 \times t \tag{6}$ With that, you will get the half-life $t \approx 486.18$ days. SOLUTION 2 : Alternatively, you can take the initial amount to be $100$ ; then at $t=300$ we have $65.2$ , and we want the $t$ where $M(t)=50$ . It will give the same answer.
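The two steps of Solution 1 can be reproduced in a few lines; a quick Python sketch (the values match the answer above):

```python
import math

# Decay to 65.2% of the original amount in 300 days.
decayed_fraction = 0.652
t_elapsed = 300.0

# M(t)/M(0) = e^(k t)  =>  k = ln(0.652) / 300
k = math.log(decayed_fraction) / t_elapsed   # approx -0.0014257

# Half-life: 0.5 = e^(k t)  =>  t = ln(0.5) / k
half_life = math.log(0.5) / k                # approx 486.18 days
print(round(k, 7), round(half_life, 2))
```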
|
|calculus|algebra-precalculus|exponential-function|
| 0
|
Show that $(X_n, Y_n) \stackrel d \to (X,Y).$
|
Let $\{X_n \}_{n \geq 1}$ and $\{Y_n \}_{n \geq 1}$ be independent random variables such that $X_n \stackrel d \to X$ and $Y_n \stackrel d \to Y.$ Suppose that $X$ and $Y$ are also independent. Then show that $(X_n, Y_n) \stackrel d \to (X,Y).$ In order to show that I have to use the following fact $:$ Let $\mathscr A$ be a $\pi$ -system of subsets of $\mathbb R$ such that each open set in $\mathbb R$ can be written as a countable union of sets from $\mathscr A.$ For a collection of real random variables $\{X_n\}_{n \geq 1}$ and $X,$ if $\mathbb P (X_n \in A) \to \mathbb P(X \in A)$ for every $A \in \mathscr A,$ then $X_n \stackrel d \to X.$ My Attempt $:$ Let $\mathcal C_X$ and $\mathcal C_Y$ be the collections of $X$ -continuity sets and $Y$ -continuity sets respectively. Then using the independence, we can see that $$\mathbb P ((X_n, Y_n) \in A \times B) \to \mathbb P ((X,Y) \in A \times B),$$ for all $A \in \mathcal C_X$ and $B \in \mathcal C_Y.$ So it is natural to consider $\mathscr A = \{A \times B : A \in \mathcal C_X,\ B \in \mathcal C_Y\}.$
|
Let $I = (a, b)$ be an interval in $\mathbb R.$ Let $C(F_X)$ be the set of all $X$ continuity points i.e. the points where $F_X$ (cdf of $X$ ) is continuous. Since $C (F_X)$ is dense in $\mathbb R,$ there exist sequences $\{a_n \}_{n \geq 1}$ and $\{b_n\}_{n \geq 1}$ in $C(F_X)$ such that $a_n \searrow a$ and $b_n \nearrow b.$ Then $I = \bigcup\limits_{n = 1}^{\infty} (a_n, b_n).$ So any open interval in $\mathbb R$ can be written as a countable union of sets from $\mathcal C_X.$ Similar argument holds if $X$ is replaced by $Y.$ Also note that, any open set in $\mathbb R^2$ is the countable union of open rectangles having rational vertices. Combining these two considerations, it follows that any open subset of $\mathbb R^2$ can be written as a countable union of sets from $\mathscr A,$ which is what you wanted to show.
|
|probability-theory|weak-convergence|
| 0
|
Universal Quantification over Disjunction
|
I know that $\forall x [R(x) \rightarrow S(x)]$ is NOT equivalent to $\forall x R(x) \rightarrow \forall x S(x)$ , because universal quantification does not distribute over disjunction. However, the implication $$(*)\qquad \forall x [R(x) \rightarrow S(x)] \rightarrow [\forall x R(x) \rightarrow \forall x S(x)]$$ seems intuitively true to me. (For context, I am studying downward-entailments in linguistic semantics.) How can I prove (*)?
|
If you are using natural deduction axioms/rules (e.g. as per Van Dalen), I think the following is a proof (A stands for assumption; is there a good way of presenting tables in LaTeX?): 1. $\forall x (R(x) \rightarrow S(x))$ , rule A, cancelled by 8; 2. $R(x) \rightarrow S(x)$ , rule $\forall$ E, from 1; 3. $\forall x R(x)$ , rule A, cancelled by 7; 4. $R(x)$ , rule $\forall$ E, from 3; 5. $S(x)$ , rule $\rightarrow$ E, from 2 and 4; 6. $\forall x S(x)$ , rule $\forall$ I, from 5 (legitimate, since $x$ is not free in the open assumptions 1 and 3); 7. $\forall x R(x) \rightarrow \forall x S(x)$ , rule $\rightarrow$ I, discharging assumption 3; 8. $\forall x (R(x) \rightarrow S(x)) \rightarrow (\forall x R(x) \rightarrow \forall x S(x))$ , rule $\rightarrow$ I, discharging assumption 1.
|
|first-order-logic|quantifiers|natural-deduction|
| 0
|
Are quasicomponents connected in a compact non-Hausdorff space?
|
Are quasicomponents connected in a compact space? Background: Quasicomponents are connected in a compact Hausdorff space . A non-compact locally compact Hausdorff space may not have connected quasicomponents . It can be shown that a subset is an open connected component if and only if it is an open quasicomponent. (*) In a finitely generated space (aka Alexandrov discrete space), every intersection of open subsets is open. Hence every quasicomponent is open as an intersection of clopen subsets. By (*), the quasicomponents are connected. More generally, a space with open components (a sumconnected space) has connected quasicomponents by (*). This includes locally connected spaces, which include finitely generated spaces. So this question is about whether the first result can be generalized to non-Hausdorff spaces.
|
The following counter-example is taken from A. Tarizadeh, Idempotents and connected components, Example 3.15 (but I'm pretty sure that it appeared elsewhere long before): Define $Y = \{\frac{1}{n}: n \in \mathbb N\} \cup \{0\}$ as a subspace of $\mathbb R$ and define $X = Y \cup \{0^*\}$ by doubling $0$ , i.e. $X$ is a subspace of the line with two origins . Obviously, $X$ is compact and $T_1$ , but not Hausdorff, since $0$ and $0^*$ cannot be separated. Moreover, every clopen set which contains $0$ also contains $0^*$ . As each $\{\frac{1}{n}: n \ge m\} \cup \{0, 0^* \}$ is clopen, the quasi-component of $0$ is $\{0, 0^* \}$ . However, $\{0, 0^* \}$ is discrete, hence not connected.
|
|general-topology|
| 1
|
Analyzing stability of equilibrium points in a system
|
Let us consider the dynamical system \begin{aligned} \dot{x}&=\frac{(a-x)}{b} -z-u, \\ \dot{y}&=x-c y, \\ \dot{z}&=x-y, \\ \dot{u}&=x-1 \end{aligned} where $a,b,c$ are parameters. We can observe that the equilibrium points of the system are of the form $(x, y,z, u) = (1,1,\frac{a-1}{b}-t,t)$ , where $t \in \Bbb{R}$ . So here we have an infinite number of equilibrium points depending on $t$ . The equilibrium points exist only when $c=1$ . For the stability of the equilibrium points, I got the characteristic equation to be $P(\lambda) = \lambda^{4} + \lambda^{3}(c + \frac{1}{b}) + \lambda^{2}(2-\frac{c}{b})+\lambda(2c-1) =0$ , implying one eigenvalue is zero. For stability we need the real parts of the remaining eigenvalues to be less than zero. I am thinking how to proceed from here? I am thinking if there exists any stable equilibrium point or they are all unstable?
|
The equation $\dot y = x - cy$ , together with the equilibrium values $x=y=1$ , implies that $c = 1$ ; the characteristic polynomial is then $$\lambda (\lambda ^3+\left(1+\frac{1}{b}\right)\,\lambda ^2+\left(2+\frac{1}{b}\right)\,\lambda +1)$$ Since one root always equals $0$ , the system can only be at most marginally stable. Using the Routh table for the cubic factor: $$ \begin{array}{|c|c|c|c|} \hline 1 & 2+1/b & 0\\ \hline 1+1/b& 1& 0\\ \hline (1/b^2+3/b + 1) b/(b+1)& 0& 0\\ \hline 1 & 0 & 0\\ \hline \end{array} $$ The condition for the system to be marginally stable is $1+1/b \geq 0$ and $(1/b^2+3/b + 1)\, b/(b+1) \geq 0$ . Solving both inequalities simultaneously gives $b>0$ or $b\leq \frac{-3-\sqrt{5}}{2}$ (for example, $b=1$ and $b=-3$ pass, while $b=-2$ makes the third Routh entry negative). Then you need to check $b$ at the boundary $b=\frac{-3-\sqrt{5}}{2}$ to make sure that there all imaginary roots are distinct.
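The Routh conditions for the cubic factor are easy to check numerically for sample values of $b$; a quick pure-Python sanity check (a numeric sketch, not a proof):

```python
# Routh first-column entries for the cubic factor
#   l^3 + (1 + 1/b) l^2 + (2 + 1/b) l + 1.

def routh_column(b):
    """First entries of the Routh rows for the cubic above."""
    a2 = 1 + 1/b          # l^2 coefficient
    a1 = 2 + 1/b          # l^1 coefficient
    a0 = 1.0              # constant term
    r3 = (a2 * a1 - a0) / a2
    return (1.0, a2, r3, a0)

def hurwitz_stable(b):
    """All first-column Routh entries strictly positive?"""
    return all(v > 0 for v in routh_column(b))

print(hurwitz_stable(1.0))    # b = 1
print(hurwitz_stable(-3.0))   # b = -3, beyond (-3 - sqrt(5))/2 = -2.618...
print(hurwitz_stable(-2.0))   # b = -2
```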
|
|ordinary-differential-equations|eigenvalues-eigenvectors|dynamical-systems|
| 1
|
Rotman's Algebraic topology Lemma 10.26
|
Lemma 10.26. Let $G$ be a group acting transitively on a set $Y$ , and let $y_0 \in Y$ . Then $$\mathrm{Aut}(Y) \cong N_G(G_0)/G_0,$$ where $G_0$ is the stabilizer of $y_0$ . $N_G(G_0)$ is the normalizer of $G_0$ and $\mathrm{Aut}(Y)$ is set of all $G-$ isomorphisms, $Y\rightarrow Y$ (bijective $G-$ maps $f(g\cdot y)=g\cdot f(y))$ . I was trying to understand this lemma in the book of Rotman, but I am confused while understanding the well-defined part, mainly how they defined the map. Can anyone please explain how the map is well-defined? Proof. Let $\varphi \in \mathrm{Aut}(Y)$ . Since $G$ acts transitively on $Y$ , there is $g \in G$ with $\varphi(y_0) = gy_0$ . First, we show that $g \in N_G(G_0)$ . If $h \in G_0$ , then $hy_0 = y_0$ and $$gy_0 = \varphi(y_0) = \varphi(hy_0) = h\varphi(y_0) = hgy_0;$$ hence $y_0 = g^{-1}hgy_0$ and $g^{-1}hg \in G_0$ , as desired. Second, if $\varphi(y_0) = gy_0 = g_1y_0$ , then $g^{-1}g_1$ fixes $y_0$ and $gG_0 = g_1G_0$ . Therefore the function $$\varphi \mapsto g^{-1}G_0$$ is well defined.
|
Fix $y_0$ . Given $\varphi\in\mathrm{Aut}_G(Y)$ , $\varphi(y_0)\in Y$ . Since the action is transitive, there exists a $g\in G$ such that $gy_0=\varphi(y_0)$ . Rotman wants to map it to a coset of $G_0$ in $N_G(G_0)$ , namely to the coset $g^{-1}G_0$ . But $g$ need not be unique. So he needs to verify two things for this function to make sense: That $g$ (and hence $g^{-1}$ ) actually lies in $N_G(G_0)$ ; and That if $g'$ is any other element of $G$ such that $g'y_0$ is also equal to $\varphi(y_0)$ , then the cosets $g^{-1}G_0$ and $(g')^{-1}G_0$ are the same coset. So, first, he shows that $g\in N_G(G_0)$ . To do that, we need to show that if $h\in G_0$ , then $g^{-1}hg\in G_0$ . Indeed, we have that because $h\in G_0$ then $hy_0=y_0$ ; and because $gy_0 = \varphi(y_0)$ , we have that $$\begin{align*} (g^{-1}hg)\cdot y_0 &= (g^{-1}h)\cdot(g\cdot y_0)\\ &=( g^{-1}h)\cdot\varphi(y_0)\\ &= g^{-1}\cdot (h\cdot\varphi(y_0))\\ &= g^{-1}\cdot\varphi(h\cdot y_0)\\ &= g^{-1}\cdot\varphi(y_0)\\ &= g^{-1}\cdot(g\cdot y_0) = y_0, \end{align*}$$ so $g^{-1}hg\in G_0$ , as desired. Second, if $g'y_0 = gy_0 = \varphi(y_0)$ , then $g^{-1}g'$ fixes $y_0$ , so $g^{-1}g'\in G_0$ ; and since $g\in N_G(G_0)$ , also $g'g^{-1} = g(g^{-1}g')g^{-1}\in gG_0g^{-1} = G_0$ , which says exactly that $g^{-1}G_0 = (g')^{-1}G_0$ . Hence the map $\varphi\mapsto g^{-1}G_0$ is well defined.
|
|group-theory|algebraic-topology|group-actions|
| 0
|
Matrices and differentiation commute
|
Suppose for simplicity we have a plane curve $\gamma(t)=(f(t),g(t))$ . I'm just curious exactly what is the property responsible for the fact that, if $R_{90}$ is the two dimensional rotation matrix by $90$ degrees counterclockwise, then $$\frac{d}{dt}\left(R_{90}\left(\gamma(t)\right)\right)=R_{90}\left(\frac{d}{dt}(\gamma(t))\right).$$ I'm thinking this only applies to matrices $R$ that are linear isomorphisms (and for when the multiplication makes sense). What is the general property at work here?
|
The main reason is that multiplication by a matrix is a linear operation. By definition of the differentiability, $$ \gamma(t) \underset{t\to t_0}{=} \gamma(t_0) + (t-t_0)\gamma'(t_0) + o(t-t_0). $$ If $M$ is any matrix, the application $u \mapsto M(u)$ is linear and continuous, and thus $$ M(\gamma(t)) \underset{t\to t_0}{=} M(\gamma(t_0)) + (t-t_0)M(\gamma'(t_0)) + o(t-t_0). $$ (The continuity is needed to ensure that $M(o(t-t_0)) = o(t-t_0)$ .) Hence, $M\circ\gamma$ is differentiable with $(M\circ\gamma)'=M\circ\gamma'$ .
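A quick numerical illustration of this linearity argument: differentiate the rotated curve by finite differences and compare with the rotated derivative. The particular curve $\gamma(t)=(\cos t,\, t^2)$ is an arbitrary choice for the sketch:

```python
import math

# Rotating a plane curve by R90 and then differentiating gives the same
# result as differentiating first and then rotating, because u -> M u
# is linear (and continuous).

def gamma(t):
    # an arbitrary smooth plane curve gamma(t) = (f(t), g(t))
    return (math.cos(t), t**2)

def gamma_prime(t):
    return (-math.sin(t), 2*t)

def rot90(v):
    # counterclockwise rotation by 90 degrees: (x, y) -> (-y, x)
    return (-v[1], v[0])

def num_deriv(curve, t, h=1e-6):
    # central finite difference, componentwise
    p, q = curve(t - h), curve(t + h)
    return ((q[0] - p[0]) / (2*h), (q[1] - p[1]) / (2*h))

t0 = 0.7
lhs = num_deriv(lambda t: rot90(gamma(t)), t0)   # d/dt (R90 o gamma)
rhs = rot90(gamma_prime(t0))                     # R90 (d gamma / dt)
print(lhs, rhs)
```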
|
|calculus|linear-algebra|matrix-calculus|
| 0
|
Why no one uses the product formula for sine function to calculate $\pi$?
|
$$\sin(\pi x)=\pi x \prod_{n \ge 1}\left(1-\frac{x^2}{n^2}\right)$$ $$\pi = \frac{\sin(\pi x)}{x\prod_{n \ge 1}\left(1-\frac{x^2}{n^2}\right)}$$ Let $x=\frac{1}{2}$ $$\pi = \frac{2}{\prod_{n \ge 1}\left(1-\frac{1}{4 n^2}\right)}$$ The question is: Why does no one use this formula to calculate $\pi$ ? Maybe I am wrong, but I have never seen anyone use this. The answer must be that this formula converges very slowly to $\pi$ and is not efficient at all. But how slowly does it converge to $\pi$ ? How many terms does it need to reach the correct first 10, 100, 1000 decimals?
|
But how slowly does it converge to $\pi$ ? Are you not able to run numerical calculations yourself? Let $x_N = 2/\prod_{n=1}^N (1 - 1/(4n^2))$ . We have $$ x_{100} = 3.133\ldots, \ x_{10^3} = 3.1408\ldots, \ x_{10^4} = 3.1415141\ldots, \ $$ $$ x_{10^5} = 3.1415847\ldots, \ x_{10^6} = 3.1415918\ldots, \ x_{10^7} = 3.14159257\ldots. $$ So it looks like multiplying the number of terms in a partial product by a factor of $10$ typically leads to just one additional correct digit in the partial product. The millionth partial product only gives us $5$ correct digits past the decimal point, which is only halfway to the $10$ correct digits after the decimal point that you asked about. In short, the numerical data point to this product expression having terrible convergence.
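For anyone who does want to run the calculation, a minimal Python sketch reproducing the partial products quoted above:

```python
import math

# Partial products x_N = 2 / prod_{n=1}^N (1 - 1/(4 n^2)), which
# converge to pi (pure Python; fine up to N ~ 10^6 or so).

def wallis_partial(N):
    prod = 1.0
    for n in range(1, N + 1):
        prod *= 1.0 - 1.0 / (4.0 * n * n)
    return 2.0 / prod

for N in (100, 1000, 10**4):
    x = wallis_partial(N)
    print(N, x, abs(math.pi - x))
```

Since the neglected tail of the product contributes a relative error of roughly $1/(4N)$, the error of $x_N$ is about $\pi/(4N)$: multiplying $N$ by $10$ gains about one digit, exactly as the data show.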
|
|calculus|numerical-methods|pi|infinite-product|
| 0
|
Why no one uses the product formula for sine function to calculate $\pi$?
|
$$\sin(\pi x)=\pi x \prod_{n \ge 1}\left(1-\frac{x^2}{n^2}\right)$$ $$\pi = \frac{\sin(\pi x)}{x\prod_{n \ge 1}\left(1-\frac{x^2}{n^2}\right)}$$ Let $x=\frac{1}{2}$ $$\pi = \frac{2}{\prod_{n \ge 1}\left(1-\frac{1}{4 n^2}\right)}$$ The question is: Why does no one use this formula to calculate $\pi$ ? Maybe I am wrong, but I have never seen anyone use this. The answer must be that this formula converges very slowly to $\pi$ and is not efficient at all. But how slowly does it converge to $\pi$ ? How many terms does it need to reach the correct first 10, 100, 1000 decimals?
|
It's obviously slow. Let the sequence be $x_n$ ; then $\dfrac{x_n}{x_{n-1}}=\frac{1}{1-\frac{1}{4n^2}}\approx1+\frac{1}{4n^2}$ . Let's say you want $100$ significant digits; then already for each individual factor to change the product by less than $10^{-100}$ you need $\frac{1}{4n^2} < 10^{-100}$ , hence $n > \frac{10^{50}}{2} = 5 \cdot10^{49}$ . Another reason is that long multiplication is a lot slower than long addition.
|
|calculus|numerical-methods|pi|infinite-product|
| 0
|
Number of points where $f'(x) = 0$ for $f( x ) = 2^x − x^2$.
|
I am approaching this question via graphs but not able to conclude whether or not graphs intersect or at how many points they do. Please explain this question without seeing the actual graph.
|
$$\forall x\in \mathbb R, \quad 2^x:=\exp(x\log2),$$ where $\log x$ is, of course, the 'Napierian' logarithm of $x$ , to base $\mathrm e$ . So, $$\color{red}{f'(x)}=\log 2\exp(x\log 2)-2x$$ $$\color{blue}{f''(x)}=(\log2)^2\exp(x\log 2)-2$$ You then have to make a table of variations (see here for example) to prove that there are exactly two values $a$ and $b$ of $x$ for which $f'(x)=0$ : the key point is that $f''$ vanishes only at $c=1-2\frac{\log(\log 2)}{\log 2}\approx2.06$ , so $f'$ decreases on $(-\infty,c)$ and increases on $(c,+\infty)$ , with $f'(c)<0$ and $f'(x)\to+\infty$ at both ends. This is a nice opportunity to review or learn the relevant course results ( exponential function , monotonic function , intermediate value theorem ,...). Hint : use $\color{green}{f(-1)=-\frac12,f(0)=1,f(2)=f(4)=0}$ and Rolle's theorem too. I think it doesn't make sense not to rely on the real graph, since these are essentially graphical results: https://www.geogebra.org/classic/uwsryu6g
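The two zeros of $f'$ can also be bracketed and located numerically; a small Python sketch (bisection, with the natural logarithm as in the answer):

```python
import math

# Locate the two zeros of f'(x) = ln(2) * 2^x - 2x by bisection.
# f'' vanishes only at c = 1 - 2 ln(ln 2)/ln 2 = 2.06..., so f' is
# strictly decreasing before c and strictly increasing after it:
# at most two zeros.

LN2 = math.log(2)

def fp(x):
    return LN2 * 2**x - 2*x

def bisect(f, lo, hi, tol=1e-12):
    # assumes f(lo) and f(hi) have opposite signs
    flo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) * flo <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return (lo + hi) / 2

root1 = bisect(fp, 0.0, 1.0)   # f'(0) = ln 2 > 0, f'(1) = 2 ln 2 - 2 < 0
root2 = bisect(fp, 3.0, 4.0)   # f'(3) < 0, f'(4) > 0
print(root1, root2)
```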
|
|functions|
| 0
|