title
string | question_body
string | answer_body
string | tags
string | accepted
int64 |
|---|---|---|---|---|
$f:A \to B$ is a function. Show that if the right inverse is unique, then the left inverse is also unique.
|
I tried a proof by contradiction. Assume there are $h_1$ , $h_2 :B \to A$ such that for all $x \in A$ , I have $h_1(f(x)) = x = h_2(f(x))$ , and try to show the right inverse $g:B \to A$ is not unique. I also know that $f(g(y)) = y$ for all $y \in B$ . The only thing coming to mind is applying $h_1,h_2$ to $f(g(y)) = y$ . So $h_1(f(g(y)))=h_2(f(g(y))) \implies h_1(y) = h_2(y)$ . But this gets me nowhere.
|
$h_1(y) = h_2(y)$ for all $y \in B$ in your attempt is equivalent to $h_1 = h_2$ by the definition of equality of mappings. Since the left inverses $h_1$ and $h_2$ were taken arbitrarily, it follows that the left inverse is unique, though we did not derive a contradiction in this proof. To argue by contradiction you must suppose $h_1 \neq h_2$ , which is equivalent to $h_1(y) \neq h_2(y)$ for some $y \in B$ (the negation of your result).
|
|functions|
| 1
|
Rotating a 3D frame of reference to match the X-axis with the direction of a unit vector.
|
I am trying to solve a problem related to computational fluid dynamics. However, I got stuck on a mathematical operation and am unsure how to tackle it. Here it is. Let $\boldsymbol{n}$ be a unit vector in 3D space with known components. Given the following rotation matrix: $$ \begin{equation}\begin{gathered} \mathbf{T}=\mathbf{T}(\theta^{(y)},\theta^{(z)})=\mathbf{T}^{(y)}\mathbf{T}^{(z)},\\ \mathbf{T}^{(y)}\equiv\mathbf{T}^{(y)}(\theta^{(y)}) =\begin{bmatrix}\cos\theta^{(y)}&0&\sin\theta^{(y)}\\0&1&0\\-\sin\theta^{(y)}&0&\cos\theta^{(y)}\end{bmatrix},\\ \mathbf{T}^{(z)}\equiv\mathbf{T}^{(z)}(\theta^{(z)}) =\begin{bmatrix}\cos\theta^{(z)}&\sin\theta^{(z)}&0\\-\sin\theta^{(z)}&\cos\theta^{(z)}&0\\0&0&1\end{bmatrix}. \end{gathered}\end{equation} $$ The goal is to find the angles of rotation such that the x-axis of the rotated frame of reference matches the direction of the normal vector. My question is: how do we define the angles of rotation given only the components of the normal vector?
|
Let $T_y = \begin{bmatrix} c_1 && 0 && s_1 \\ 0 && 1 && 0 \\ -s_1 && 0 && c_1 \end{bmatrix} $ where $c_1 = \cos(\theta_y) , s_1 = \sin(\theta_y) $ And let $T_z = \begin{bmatrix} c_2 && -s_2 && 0 \\ s_2 && c_2 && 0 \\ 0 && 0 && 1 \end{bmatrix} $ where $c_2 = \cos(\theta_z) , s_2 = \sin(\theta_z) $ Then $T = T_y T_z = \begin{bmatrix} c_1 c_2 && - c_1 s_2&& s_1 \\s_2 && c_2 && 0 \\ -s_1 c_2 && s_1 s_2 && c_1 \end{bmatrix} $ And now given the unit vector $n = [n_x, n_y, n_z]^T $ , we want $ \begin{bmatrix} c_1 c_2 \\ s_2 \\ -s_1 c_2 \end{bmatrix} = \begin{bmatrix} n_x \\ n_y \\ n_z \end{bmatrix} $ For the $y$ -component of both vectors, we have $ \sin(\theta_z) = n_y $ Hence $ \theta_z = \sin^{-1}(n_y) $ or $\theta_z = \pi - \sin^{-1}(n_y) $ And from the first and third components, we have $ \theta_y = \text{atan2} \left( \dfrac{n_x}{\cos(\theta_z) } \ , - \ \dfrac{n_z}{\cos(\theta_z) } \right)$
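A quick numerical sketch of the recipe above (Python; the function names are hypothetical, and `math.atan2` takes its arguments in the order (sine, cosine), so the components from the answer are passed accordingly):

```python
import math

def frame_angles(n):
    # Recover (theta_y, theta_z) from the component equations
    #   cos(ty)*cos(tz) = nx,  sin(tz) = ny,  -sin(ty)*cos(tz) = nz,
    # using the principal branch of asin.
    nx, ny, nz = n
    tz = math.asin(ny)
    c2 = math.cos(tz)
    # atan2(sin, cos) with s1 = -nz/c2 and c1 = nx/c2
    ty = math.atan2(-nz / c2, nx / c2)
    return ty, tz

def rotated_x_axis(ty, tz):
    # First column of T = T_y T_z, i.e. the image of the x-axis
    c1, s1 = math.cos(ty), math.sin(ty)
    c2, s2 = math.cos(tz), math.sin(tz)
    return (c1 * c2, s2, -s1 * c2)

n = (1/3, 2/3, 2/3)  # a sample unit vector
ty, tz = frame_angles(n)
axis = rotated_x_axis(ty, tz)
print(axis)  # should reproduce n
```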
|
|matrices|rotations|
| 0
|
Show that $\int_{0}^{\infty}\sin\left(\frac{1}{x^{2}}\right)\ln x\,dx=\sqrt{\frac{\pi}{2}}\left(\frac{\gamma}{2}+\frac{\pi}{4}+\ln2-1\right)$
|
I came up with this problem while messing around in Desmos and was very surprised by the solution I got from Wolfram Alpha. The Integral Calculator is unable to solve it, and WolframAlpha does not provide an indefinite integral that could hint at a possible solution. My intuition tells me to try integration by parts with $u=\ln(x)$ , but I don't know how to take the integral of $\sin\left(\frac{1}{x^{2}}\right)$ .
|
Letting $x\mapsto \frac{1}{\sqrt x}$ yields $$ I=-\frac{1}{4} \int_0^{\infty} \frac{\sin x \ln x}{x^{\frac{3}{2}}} dx =-\frac{1}{4} I^{\prime}\left(-\frac{1}{2}\right) $$ where $$ \begin{aligned} I(a) & =\int_0^{\infty} x^{a-1} \sin x d x \\ & =-\Im \int_0^{\infty} x^{a-1} e^{-i x} d x \\ & =-\Im\left(\frac{\Gamma(a)}{i^a}\right) \\ & =-\Gamma(a) \Im\left(e^{-\frac{\pi}{2} i a}\right) \\ & =\Gamma(a) \sin \left(\frac{\pi}{2} a\right) \end{aligned} $$ By logarithmic differentiation, we have $$ \begin{aligned} I(a) & =\Gamma(a) \sin \left(\frac{\pi}{2} a\right) \\ \ln I(a) & =\ln \Gamma(a)+\ln \sin \left(\frac{\pi}{2} a\right) \\ I^{\prime}(a) & =I(a)\left[\psi(a)+\frac{\pi}{2} \cot \left(\frac{\pi}{2} a\right)\right] \\ I^{\prime}\left(-\frac{1}{2}\right) & =-\Gamma\left(-\frac{1}{2}\right) \sin \frac{\pi}{4}\left[\psi\left(-\frac{1}{2}\right)-\frac{\pi}{2}\right] \\ & =2 \sqrt{\pi} \cdot \frac{1}{\sqrt{2}}\left(2-\gamma-\ln 4-\frac{\pi}{2}\right) \end{aligned} $$ now we can conclude that $$ I=-\frac{1}{4} I^{\prime}\left(-\frac{1}{2}\right)=\sqrt{\frac{\pi}{2}}\left(\frac{\gamma}{2}+\frac{\pi}{4}+\ln 2-1\right). $$
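As a numerical sanity check of the key steps (a sketch in Python's standard library; the Euler-Mascheroni constant is hard-coded since `math` does not provide it), one can differentiate $I(a)=\Gamma(a)\sin(\pi a/2)$ numerically at $a=-1/2$ and compare with the closed forms above:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def I(a):
    # I(a) = Gamma(a) * sin(pi a / 2); math.gamma accepts negative non-integers
    return math.gamma(a) * math.sin(math.pi * a / 2)

# central-difference approximation of I'(-1/2)
h = 1e-6
numeric = (I(-0.5 + h) - I(-0.5 - h)) / (2 * h)

# closed form from the derivation: 2*sqrt(pi)/sqrt(2) * (2 - gamma - ln 4 - pi/2)
closed = 2 * math.sqrt(math.pi) / math.sqrt(2) * (2 - EULER_GAMMA - math.log(4) - math.pi / 2)

# the original integral equals -I'(-1/2)/4; compare with the stated answer
value = -closed / 4
target = math.sqrt(math.pi / 2) * (EULER_GAMMA / 2 + math.pi / 4 + math.log(2) - 1)
print(numeric, closed, value, target)
```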
|
|integration|definite-integrals|improper-integrals|trigonometric-integrals|
| 1
|
if $\tan(\cot(x))=\cot(\tan(x))$ find $\sin(2x)$
|
If $\tan(\cot(x))=\cot(\tan(x))$ , find $\sin(2x)$ . So to start, $\cot\left(\frac{\pi}{2}-\cot \left(x\right)\right)=\cot\left(\tan\left(x\right)\right)$ , or $\frac{\pi}{2}=\tan\left(x\right)+\frac{1}{\tan\left(x\right)}$ , which is $\frac{\sec^2\left(x\right)}{\tan\left(x\right)}$ . Now using the $\sin(2x)$ formula, that should give $\sin(2x)=\frac{4}{\pi}$ . However, this is wrong. Why is it wrong?
|
Here is the mistake. $\cot A = \cot B \nRightarrow A = B$ . We have : $\cot A = \cot B \implies \tan A = \tan B \implies \sin ({A - B}) = 0 \implies A - B = n\pi$ . The second implication in the above line is proved by observing that $\tan A - \tan B = 0$ and simplifying further using sin and cos.
|
|algebra-precalculus|trigonometry|
| 0
|
How to prove point is NOT a limit point? $E=\{\frac{4n-3}{2n-1}:n\in\mathbb{N}\}$
|
I've seen a bunch of questions showing how to prove a point is a limit point, but almost nothing showing that a point is NOT a limit point. My definition of a limit point $x$ is that $\forall \epsilon >0, (B(x,\epsilon)\backslash\{x\})\cap E \neq \emptyset$ where $B(x,\epsilon)$ is the ball centered in $x$ with radius $\epsilon$ . Since we are in $\mathbb{R}$ , $B(x,\epsilon)=]x-\epsilon, x+\epsilon[$ . Let $E=\{\frac{4n-3}{2n-1}:n\in\mathbb{N}\}$ be a set. How do I show that 5/3 is not a limit point? Intuitively I can see that points will not "accumulate" at 5/3, but how do I show this rigorously?
|
Hints: Since $\frac {4n-3} {2n-1} \to 2$ we expect that $\frac {4n-3} {2n-1} >1.9$ for $n$ sufficiently large. Verify that this inequality is indeed true if $n>5$ . Next note that $1.9 >\frac 5 3$ . Can you now find an interval around $\frac 5 3$ which contains at most $5$ points of the given sequence?
|
|real-analysis|general-topology|real-numbers|
| 0
|
Exponential Distribution - Memoryless Property - Intuition
|
I understand the derivation leading to the memoryless property, but I can't seem to understand its application in a real scenario. Say we have a period $P$ in which no event occurs. If the memoryless property is true, it seems to me that the probability that an event occurs at any instant $i$ in $P$ is a constant. The memoryless property is defined as: $$\mathbb{P}(T> s+t \vert T > s) = \mathbb{P}(T> t)$$ In the case that $t$ approaches $0$ (i.e. the very next instant): $$\mathbb{P}(T> s+0 \vert T > s) = \mathbb{P}(T> 0)$$ Meaning, the probability that an event occurs in the very next instant equals the probability that an event occurs in the very next moment after the initial time. And you can do the same thing for every instant $i$ during $P$ . Meaning, during a period $P$ in which no event occurs, the probability of an event occurring at any instant $i \in P$ is equal to $\mathbb{P}(0)$ . If that is true, why even have a CDF or PDF, as the value is equal to the PDF at time $0$ anyway?
|
For all $t > 0$ , $$\Pr[T > s + t \mid T > s] = \Pr[T > t] \tag{1}$$ implies that as $t \to 0^+$ , we simply have $$\Pr[T > s \mid T > s] = \Pr[T > 0] = 1,$$ which of course is trivially true. But your interpretation of what this means is not correct. $(1)$ says that, having waited at least $s$ units of time without seeing the event, the probability of having to wait $t$ more units of time to see it is equal to the unconditional probability of having to wait at least $t$ units of time. It doesn't say that you actually see the event in the next $t$ units of time; rather, it's saying you have to wait at least $t$ more units. The concept you are referring to is known as the hazard rate or hazard function, which is the instantaneous likelihood of seeing the event at a given moment in time, given that you have not already seen it. In other words, it is $$h(t) = \lim_{\Delta t \to 0^+} \frac{\Pr[T \le t + \Delta t \mid T > t]}{\Delta t}. \tag{2}$$ The interpretation of $(2)$ for the exponential distribution is that the hazard rate is constant, $h(t)=\lambda$ for all $t$ : memorylessness says the instantaneous rate never changes, even though the CDF and PDF themselves are not constant.
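A small numerical sketch of $(2)$ for the exponential distribution (Python; the rate value and evaluation points are arbitrary choices), approximating the hazard rate from the survival function $\Pr[T>t]=e^{-\lambda t}$ :

```python
import math

def survival(t, lam):
    # P[T > t] for an Exponential(lam) waiting time
    return math.exp(-lam * t)

def hazard(t, lam, dt=1e-7):
    # finite-difference version of (2): Pr[T <= t+dt | T > t] / dt
    p_cond = (survival(t, lam) - survival(t + dt, lam)) / survival(t, lam)
    return p_cond / dt

lam = 2.5
rates = [hazard(t, lam) for t in (0.1, 1.0, 5.0)]
print(rates)  # all approximately lam: constant hazard = memorylessness
```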
|
|statistics|exponential-function|
| 1
|
Solved - Finding the function of $(1+x)(1+x^4)(1+x^{16})(1+x^{64})....$
|
The question states: For $0 < x < 1$ , let $f(x) = (1+x)(1+x^4)(1+x^{16})(1+x^{64})(1+x^{256}).... $ Find $f(x).$ $f(x)= \prod_{n=0}^{\infty}(1+x^{(4^n)})$ I've tried noticing that it looks similar to the telescoping series $(1+x)(1+x^2)(1+x^4)...$ but applying the same trick of multiplying by $(1-x)$ doesn't work here. Can someone please guide me in the right direction to solving this problem? Thanks very much. Edit: The original problem was to find $f^{-1} \left ( \frac{8}{5f(\tfrac{3}{8})} \right)$ , as can be seen in this thread. The solution can be found by noticing that $f(x)f(x^2)=(1+x)(1+x^2)(1+x^4)(1+x^8)... = \frac{1}{1-x}$ and by plugging in $x = 3/8$ (credit to John Omelian) we get $f(\frac{3}{8})f(\frac{9}{64}) = \frac{1}{1-\frac{3}{8}} = \frac{8}{5}$ , in which then we can divide by $f(\frac{3}{8})$ and take the inverse of both sides to get $f^{-1} \left ( \frac{8}{5f(\tfrac{3}{8})} \right) = \boxed{\frac{9}{64}\:}$ . I apologize for any confusion that I've caused, but thanks to e
|
I am not sure if $f(z)$ has an elementary closed form, due to its lacunary behavior on the boundary $|z|=1$ . For example, the figure below demonstrates the graph of $|f(\frac{299}{300}e^{i\theta})|$ for $-\pi\leq \theta \leq\pi$ , hinting at an extremely wild behavior of $f(z)$ near the boundary $|z|=1$ that cannot be replicated by any elementary functions: Let us analyze this behavior more closely. Let $\alpha$ be a dyadic rational number, and let $m \in \mathbb{Z}_{\geq 0}$ be the smallest non-negative integer such that $2^{2m+1} \alpha \in \mathbb{Z}$ . If we let $p = 2^{2m+1} \alpha$ , then $p$ is either an odd integer, or $2$ times an odd integer. Now, for $0 < r < 1$ we have \begin{align*} |f(re^{2\pi i \alpha})| &= \prod_{k=0}^{\infty} \left| 1 + r^{4^k} e^{(2^{2k+1}\alpha) \pi i} \right| \\ &= \left( \prod_{k=0}^{m-1} \left| 1 + r^{4^k} e^{p \pi i / 4^{m-k}} \right| \right) (1 + r^{4^m}e^{p\pi i}) \left( \prod_{k=m+1}^{\infty} ( 1 + r^{4^k} ) \right). \end{align*} To estimate the behavior of th
|
|sequences-and-series|telescopic-series|
| 0
|
To find unknown rows in a unitary matrix
|
The problem is to find a unitary matrix A whose first row is a multiple of a) $(1,1,-i)$ and b) $\left(\frac{1}{2},\frac{i}{2},\frac{(1-i)}{2}\right)$ Now the first part of a is easy because the rows have to form an Orthonormal set. Thus one gets the usual orthonormalization. However I am stuck as to what to do next, as I am getting just a bunch of equations applying the fact that the dot product has to be zero, especially in part b of the problem.
|
To complete a unit row $u_1=(u_{11},u_{12},u_{13})$ with $(u_{11},u_{12})\ne 0\ne u_{13}$ to a unitary matrix, we can choose a row $u_2$ proportional to $(\overline{u_{12}},-\overline{u_{11}},0)$ and a row $u_3$ proportional to $\left(u_{11},u_{12},-\frac{|u_{11}|^2+|u_{12}|^2}{\overline{u_{13}}}\right)$ . This approach yields matrices $$ \begin{pmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{-i}{\sqrt{3}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\ \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{2i}{\sqrt{6}} \end{pmatrix}\mbox{ and } \begin{pmatrix} \frac{1}{2} & \frac{i}{2} & \frac{1-i}{2} \\ \frac{-i}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\ \frac{1}{2} & \frac{i}{2} & \frac{-1+i}{2} \end{pmatrix}, $$ for a) and b), respectively.
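The recipe is easy to verify numerically (a Python sketch with hypothetical helper names, using built-in complex arithmetic): build the three rows, normalize them, and check that the Gram matrix of the rows is the identity.

```python
def complete_unitary(u1):
    # Rows per the recipe: u2 ~ (conj(u12), -conj(u11), 0),
    # u3 ~ (u11, u12, -(|u11|^2 + |u12|^2)/conj(u13)); then normalize each row.
    a, b, c = u1
    u2 = (b.conjugate(), -a.conjugate(), 0)
    u3 = (a, b, -(abs(a) ** 2 + abs(b) ** 2) / c.conjugate())

    def normalize(v):
        norm = sum(abs(x) ** 2 for x in v) ** 0.5
        return tuple(x / norm for x in v)

    return [normalize(u1), normalize(u2), normalize(u3)]

def gram(rows):
    # Hermitian inner products of all row pairs; identity iff rows are orthonormal
    return [[sum(x * y.conjugate() for x, y in zip(r, s)) for s in rows] for r in rows]

s3 = 3 ** 0.5
G = gram(complete_unitary((1 / s3, 1 / s3, -1j / s3)))      # case a)
H = gram(complete_unitary((0.5, 0.5j, 0.5 - 0.5j)))          # case b)
print(G, H)
```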
|
|matrices|unitary-matrices|
| 0
|
How to prove point is NOT a limit point? $E=\{\frac{4n-3}{2n-1}:n\in\mathbb{N}\}$
|
I've seen a bunch of questions showing how to prove a point is a limit point, but almost nothing showing that a point is NOT a limit point. My definition of a limit point $x$ is that $\forall \epsilon >0, (B(x,\epsilon)\backslash\{x\})\cap E \neq \emptyset$ where $B(x,\epsilon)$ is the ball centered in $x$ with radius $\epsilon$ . Since we are in $\mathbb{R}$ , $B(x,\epsilon)=]x-\epsilon, x+\epsilon[$ . Let $E=\{\frac{4n-3}{2n-1}:n\in\mathbb{N}\}$ be a set. How do I show that 5/3 is not a limit point? Intuitively I can see that points will not "accumulate" at 5/3, but how do I show this rigorously?
|
Prove $$\exists\epsilon >0, (B(5/3,\epsilon)\setminus \{5/3\})\cap E = \varnothing.$$ Since $E$ consists of the numbers $$1>\frac53>\frac95>\dots,$$ this condition on $\epsilon$ is equivalent to $$\epsilon\le\min\left(\frac53-1,\frac95-\frac53\right)=\min\left(\frac23,\frac2{15}\right)=\frac2{15}.$$
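The bound $\epsilon=\frac{2}{15}$ can be confirmed with exact rational arithmetic (a Python sketch; the cutoff of $10\,000$ terms is an arbitrary choice, justified since the sequence increases toward $2$ and moves away from $\frac53$ after $n=3$ ):

```python
from fractions import Fraction

# the first terms of E = {(4n-3)/(2n-1)}, computed exactly
E = [Fraction(4 * n - 3, 2 * n - 1) for n in range(1, 10_000)]
x = Fraction(5, 3)   # the point in question (it is the n = 2 term)
eps = Fraction(2, 15)

# distances from 5/3 to every OTHER element of E
gaps = sorted(abs(e - x) for e in E if e != x)
print(gaps[0])  # the closest other point (n = 3, i.e. 9/5) sits at distance 2/15
```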
|
|real-analysis|general-topology|real-numbers|
| 0
|
Prove the solvability of $(x^2 - ab)(x^2 - bc)(x^2 - ac) \equiv 0 \pmod{p}$
|
Prove that the congruence is solvable (has solutions) for any prime $p$ and any $a, b, c \in \mathbb{Z}$ : $$(x^2 - ab)(x^2 - bc)(x^2 - ac) \equiv 0 \pmod{p}$$ Not sure where to begin here. I will be grateful for any hints and solutions to this task. UPD: There was already an idea for solving this problem (see below in the comments). But it seems to me that this idea already assumes that the congruence has a solution and that $p$ divides one of $(x^2 - ab), (x^2 - bc), (x^2 - ac)$ , hence the existence of such $a, b, c$ . But we need to prove something else: that for any numbers $a, b, c$ and any prime $p$ , the congruence $(x^2 - ab)(x^2 - bc)(x^2 - ac)\equiv 0 \pmod{p}$ has solutions (is solvable) for these numbers (something like a proof in the other direction with respect to the proposed idea). The authors of this problem wanted people to use some properties of quadratic residues in solving it. If my reasoning is wrong, then please explain why it is not correct.
|
As $p$ is prime, it must divide at least one of the factors on the LHS of the congruence, because if $p\mid xyz$ where $x,y,z \in \mathbb{Z}$ and $p$ is prime, then $p\mid x$ or $p\mid y$ or $p\mid z$ . Now take $$x^2 - ab \equiv 0 \pmod p$$ $$x^2 - ab = kp \quad k \in \mathbb{Z}$$ $$x = \pm \sqrt{ab + kp}$$ Similarly $$x = \pm \sqrt{bc + kp}$$ $$x = \pm \sqrt{ca + kp}$$ are also solutions to the congruence.
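The claimed solvability itself can be checked exhaustively for small primes (a brute-force Python sketch; the prime list is an arbitrary sample, and residues $0,\dots,p-1$ suffice since $a,b,c$ only matter mod $p$ ):

```python
def solvable(a, b, c, p):
    # does some x in {0, ..., p-1} satisfy the congruence mod p?
    return any(((x * x - a * b) * (x * x - b * c) * (x * x - c * a)) % p == 0
               for x in range(p))

primes = [2, 3, 5, 7, 11, 13]
checked = all(solvable(a, b, c, p)
              for p in primes
              for a in range(p) for b in range(p) for c in range(p))
print(checked)  # True: consistent with the quadratic-residue argument
```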
|
|elementary-number-theory|prime-numbers|
| 0
|
$(A+B)^2\|B^{-1}\|-A$ is positive definite?
|
If $A$ and $B$ are positive definite matrices, is $(A+B)^2\|B^{-1}\|-A$ also positive definite? When $A$ and $B$ are positive real numbers, $(A+B)^2\|B^{-1}\|-A$ is obviously positive. When $A$ and $B$ are matrices, I can use an orthogonal matrix $O$ to transform $B$ into a diagonal matrix $\operatorname{diag}\{\lambda_1,\cdots,\lambda_n\}$ . Then I can transform the original problem to: is $(A+B)^2-\lambda_n A$ also positive definite?
|
If $A\ge 0$ , $B\ge 0$ then $$A^{\frac12} B A^{\frac12} \ge 0$$ PS: this property does not require $A$ and $B$ to commute. Since $B \ge \lambda_n I$ then $A+B \ge \lambda_n I$ then, \begin{align} \left(A+B\right)^2 &= \left(A+B\right)^{\frac12}\left(A+B\right)\left(A+B\right)^{\frac12}\\ &\ge \lambda_n \left(A+B\right)^{\frac12}\left(A+B\right)^{\frac12}\\ &= \lambda_n (A+B)\ge \lambda_n A \end{align} By the way, using the same idea you can prove the stronger result: $$(A+B)^2\left\|\left(A+B\right)^{-1}\right\| - (A+B) \ge 0$$
|
|linear-algebra|matrices|analysis|matrix-decomposition|
| 0
|
Is this an adequate definition of the natural numbers in terms of elementary set theory?
|
Is this an adequate definition of the natural numbers? The term number is synonymous with natural number in this context. The terms front and back are used in order to avoid using the numbers first and second to define numbers. $1$ is a number. The set $\mathbb{N}$ of numbers is exactly the set of elements appearing as front elements in the pairs of the succession of numbers defined as follows: The succession of numbers is the set of ordered pairs of numbers including every number exactly once as a front element, and every number other than 1 exactly once as a back element. If $n$ is a front number then the back number, called the successor of $n,$ is denoted $n^\prime$ [Added to address a flaw pointed out in the answer posted by Greg Martin.] Every number $n$ uniquely terminates a subset of $\mathbb{N}$ (called the segment $\mathbb{S}_n$ ), such that $1$ is in $\mathbb{S}_n$ and $\mathbb{S}_n$ includes the successor of each of its members other than $n$ . For every number $n$ we defi
|
As written this is circular, although I suspect it could be rewritten to not be circular. However, note that if any set $S$ satisfies the second part of the definition, then so (for example) does $S \cup \{ (x,y), (y,x) \}$ where $x,y$ are any objects that are not elements of $S$ . There are many such extensions of $S$ and so this definition doesn't define the natural numbers uniquely. The missing piece is some sort of induction/well-ordering statement that guarantees that every natural number can be found in a sequence of ordered pairs that contains $1$ .
|
|elementary-number-theory|elementary-set-theory|definition|
| 1
|
Let $f(x)$ be a continuous function on $[0,1]$ such that $f(1)=0$. $\int_0^1 (f'(x))^2.dx=7$ and $\int_0^1 x^2f(x).dx=\frac13$. Find $\int_0^1f(x).dx$
|
I have a solution for the above question, but I wanted to check whether what I am doing is correct and can be done or not. The function I am getting also satisfies both conditions, but I am still not sure whether my method is correct. I have given my solution below in the answers, along with an image of the original question.
|
According to the first crucial step of the solution proposed by the asker we get $$\int\limits_0^1x^3f'(x)\,dx =-1\quad (*)$$ By the Cauchy-Schwarz inequality we obtain $$1\le \int\limits_0^1x^6\,dx\, \int\limits_0^1[f'(x)]^2\,dx={1\over 7}\cdot 7=1$$ As the equality occurs in the C-S inequality, we get $f'(x)=cx^3.$ Next $(*)$ gives $c=-7.$ As $f(1)=0$ we get $$f(x)=-{7\over 4}(x^4-1)={7\over 4}(1-x^4),$$ so that $$\int\limits_0^1 f(x)\,dx={7\over 4}\left(1-{1\over 5}\right)={7\over 5}.$$ Remark The solution can be made more elementary, by making a guess that $f'(x)=cx^3.$ Then $(*)$ gives $c=-7.$ Then we observe that $$\int\limits_0^1\left [f'(x)+7x^3\right ]^2\,dx=0$$ so $f'(x)=-7x^3.$
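The candidate $f(x)=\frac74(1-x^4)$ can be verified against all the constraints with exact arithmetic (a Python sketch; the helper name is hypothetical, and each integral is reduced to monomial integrals $\int_0^1 x^k\,dx=\frac{1}{k+1}$ ):

```python
from fractions import Fraction

# candidate from the Cauchy-Schwarz argument: f(x) = (7/4)(1 - x^4), f'(x) = -7x^3
c = Fraction(7, 4)

def mono(k):
    # exact value of the monomial integral on [0, 1]
    return Fraction(1, k + 1)

int_fprime_sq = 49 * mono(6)          # integral of (f')^2 = 49 x^6
int_x2_f = c * (mono(2) - mono(6))    # integral of x^2 f = (7/4)(x^2 - x^6)
int_f = c * (mono(0) - mono(4))       # integral of f = (7/4)(1 - x^4)
print(int_fprime_sq, int_x2_f, int_f)  # 7, 1/3, 7/5
```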
|
|calculus|functions|definite-integrals|functional-calculus|
| 1
|
$M$ is a continuous local martingale. Prove the consequences
|
$M$ is a continuous local martingale. Show that: a) If $M_0=0$ and $\mathbb{E}\langle M\rangle_t<\infty$ for $t\ge 0$ , then $M \in \mathcal M^{2,c}$ b) There is a sequence of stopping times $\tau_n \to T$ such that $M^{\tau_n} \mathbb 1_{\{\tau_n>0\}}$ is a bounded martingale c) For any stopping time $\tau$ , the process $M^{\tau}$ is a local martingale $\mathcal M^{2,c}$ is the family of continuous martingales in $L^2$ My thoughts: a) Assume that $\mathbb{E}\langle M\rangle_t\le A$ ; then: $$E[M_t^2] = E\sum (M_{t_{i+1}}-M_{t_i})^2 \leq A E[\sup_i|M_{t_{i+1}}-M_{t_i}|].$$ b) Probably the optional stopping theorem is helpful. Unfortunately, I don't know what to do next... Can anyone help me? I spent several hours on this task, but I don't understand how to get to the solution.
|
$M$ is a continuous local martingale. Show that: a) If $M_0=0$ and $\mathbb{E}\langle M\rangle_t<\infty$ for $t\ge 0$ , then $M \in \mathcal M^{2,c}$ We will show that finite quadratic variation implies that $M$ is an $L^2$-bounded martingale. We follow Revuz-Yor, Proposition (1.23). $L^2$-bounded: Since $M^2-[M]$ is a local martingale, we get a sequence of stopping times $T_n, n\ge1$ , diverging $T_n\to\infty$ , such that $\mathbb{E}[M^2_{t\wedge T_n}1_{T_{n}>0}-[M]_{t\wedge T_n}]$ is bounded. Using that $E[\langle M\rangle_{\infty}]<\infty$ we bound: $$\mathbb{E}[M^2_{t\wedge T_n}1_{T_{n}>0}]\leq E[\langle M\rangle_{\infty}]+E[M_{0}^{2}]=:K$$ and by Fatou's lemma $$\mathbb{E}[M^2_{t}]\leq\lim_{n}\mathbb{E}[M^2_{t\wedge T_n}]\leq K.$$ So $M$ is $L^2$-bounded. Martingale: This also gives that $M_{t\wedge T_n}1_{T_{n}>0}$ is uniformly integrable, and so we can apply the Vitali convergence theorem to the conditional relation $$E[M_{t\wedge T_n}1_{T_{n}>0}|\mathcal{F}_{s}]=M_{s\wedge T_n}1_{T_{n}>0}$$ to get the martingale property $$E[M_{t}|\mathcal{F}_{s}]=M_{s}.$$
|
|stochastic-processes|stochastic-calculus|martingales|stochastic-analysis|
| 1
|
Show that $\frac{hr}{R^2 \sin B}(\sin^2\phi+\cos^2\phi\sin^2B) - \sin^2B - \frac{1}{R}\sqrt{\frac{hr}{\sin B}}(2\cos\phi\cos B\sin B) = 0$
|
I'm having a hard time showing that the left-hand side of the following equation does indeed equal zero. $$\frac{hr}{R^2 \sin B}(\sin^2\phi+\cos^2\phi\sin^2B) - \sin^2B - \frac{1}{R}\sqrt{\frac{hr}{\sin B}}(2\cos\phi\cos B\sin B) = 0$$
|
Perhaps because it is not true. Take $\phi = 0$ , $B=\pi /2$ , $h=r=2$ and $R=1$ ; then the LHS $=3\neq 0$ .
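The counterexample is a one-line arithmetic check (a Python sketch; the function name is hypothetical):

```python
import math

def lhs(phi, B, h, r, R):
    # left-hand side of the identity in the question, term by term
    t1 = h * r / (R ** 2 * math.sin(B)) * (math.sin(phi) ** 2 + math.cos(phi) ** 2 * math.sin(B) ** 2)
    t2 = math.sin(B) ** 2
    t3 = (1 / R) * math.sqrt(h * r / math.sin(B)) * (2 * math.cos(phi) * math.cos(B) * math.sin(B))
    return t1 - t2 - t3

val = lhs(0.0, math.pi / 2, 2.0, 2.0, 1.0)
print(val)  # 3.0 (up to rounding), not 0
```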
|
|trigonometry|
| 0
|
Average number of random integers needed to satisfy any modular subset sum?
|
A multiset $S$ of numbers is considered satisfactory if $\left|\left\{\sum U \bmod n \mid U\subseteq S\right\}\right| = n$ , that is, all remainders mod $n$ can be constructed by summing subsets of $S$ . If we start a random process with an empty multiset, and keep adding uniformly random integers $[0, n)$ (with replacement) until it is satisfactory, how many numbers will we have added on average? Or conversely, if that question is easier, if we have a collection of $k$ random numbers, what is the probability that it is satisfactory?
|
Heuristically, I suspect something like $\ell \cdot H_\ell + O(\ell)$ numbers should suffice, where $\ell = \lg |S|$ and $H_\ell$ is the $\ell$ th harmonic number. In particular, there are $2^{|S|}$ subsets of $S$ , and if we model each as having a random sum mod $n$ , then the coupon collector problem gives us this estimate for the number of draws until we have obtained all possible remainders. Of course this heuristic makes an independence assumption that is totally bogus, so this is not by any means a rigorous result. It's possible this estimate might be terribly wrong if there is some interesting structure that throws things off. But I would speculate it is probably in the right ballpark.
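The random process itself is easy to simulate, which gives a way to test the heuristic numerically (a Python sketch; the modulus, seed, and trial count are arbitrary choices, and the helper names are hypothetical):

```python
import random

def reachable_residues(S, n):
    # all residues mod n obtainable as subset sums of the multiset S
    reach = {0}
    for a in S:
        reach |= {(r + a) % n for r in reach}
    return reach

def draws_until_satisfactory(n, rng):
    # keep drawing uniform elements of [0, n) until every residue is reachable
    S, count = [], 0
    while len(reachable_residues(S, n)) < n:
        S.append(rng.randrange(n))
        count += 1
    return count

rng = random.Random(0)
n = 16
avg = sum(draws_until_satisfactory(n, rng) for _ in range(200)) / 200
print(avg)  # empirical mean number of draws for n = 16
```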
|
|modular-arithmetic|random|
| 0
|
Let $f(x)$ be a continuous function on $[0,1]$ such that $f(1)=0$. $\int_0^1 (f'(x))^2.dx=7$ and $\int_0^1 x^2f(x).dx=\frac13$. Find $\int_0^1f(x).dx$
|
I have a solution for the above question, but I wanted to check whether what I am doing is correct and can be done or not. The function I am getting also satisfies both conditions, but I am still not sure whether my method is correct. I have given my solution below in the answers, along with an image of the original question.
|
$f(x)$ is a continuous function on $[0,1]$ , $f(1) = 0$ . $\int_0^1 (f'(x))^2 \, dx = 7$ $\int_0^1 x^2 f(x).dx = \frac13$ . We want to find $\int_0^1 f(x).dx$ . First, let's use the given information to find an expression for $f'(x)$ : We know that $\int_0^1 (f'(x))^2.dx = 7$ . Applying the fundamental theorem of calculus, we have: $\int_0^1 (f'(x))^2.dx = f(1) - f(0) = 0 - f(0) = -f(0)$ . Therefore, $-f(0) = 7$ , which implies $f(0) = -7$ Next, let's find an expression for $\int_0^1 x^2 f(x).dx$ : $\int_0^1 x^2 f(x).dx=\frac13$ Now, let's use integration by parts to relate $\int_0^1 f(x).dx$ to the given integral: $\int_0^1 x^2 f(x).dx = \frac13$ . Let $u = x^2$ and $dv = f(x).dx$ . Then, we have: $du = 2x.dx$ and $v = \int f(x).dx$ Applying integration by parts: $uv - \int v.du = \frac13$ $x^2 f(x) - \int 2x f(x).dx = \frac13$ $x^2 f(x) - 2 \int x f(x).dx = \frac13$ $x^2 f(x) - 2 \left(\int_0^1 x f(x).dx\right) = \frac13$ Rearranging: $\int_0^1 x f(x).dx = \frac{x^2 f(x) - \frac13}{2}$ No
|
|calculus|functions|definite-integrals|functional-calculus|
| 0
|
Show that quad $ABDE$ is cyclic
|
We're given a random red line and a point $A$ outside of it. From the red line, we take a random point $B$ such that the circle $\omega = \odot (A,AB)$ cuts the red line again at $C \neq B$ . Let $D = \odot(B,BA) \cap \odot (A,AB)$ be on the same half-plane of the red line as $A$ , and let $E$ be the intersection of the perpendicular bisector of $DC$ with the red line. Show that $AEBD$ is cyclic. I believe we need a congruence of triangles here. It is clear that $\triangle DAE \cong \triangle CAE$ by SSS. But I feel like I still need a little detail I'm not quite seeing; can you guys help me?
|
This is trivial by phantom points. Let $E' = \odot (BDA) \cap BC$ . You already know $AD=AC$ and $60^\circ = \measuredangle ABD = \measuredangle AED = \measuredangle CEA $ . Moreover $\measuredangle E'DA = \measuredangle E'BA = \measuredangle CBA = \measuredangle ACB = \measuredangle ACE'$ . Then by AAS congruence, $AE'$ is the perpendicular bisector of $CD$ , i.e., $E = E'$ , as desired. $\blacksquare$
|
|geometry|euclidean-geometry|circles|
| 1
|
Line integral off by a factor of 2
|
Perform the closed line integral $\int \vec{F}\cdot d\vec{s}$ for the given field along the specified path. Let $\vec F =r\hat{\theta}$ starting and finishing at $(2,0)$ along the counter-clockwise circle $r=2$ . $$\int_C \vec F \cdot d\vec{s}=\int_0^{2\pi}\vec F(\vec{r}(t))\cdot \vec r\,'(t)\, dt = \int_0^{2\pi}2 \hat{\theta}\cdot \hat{\theta}\, dt = 4\pi.$$ Let $\vec{r}(t)=2\hat{r} + t\hat{\theta}$ , $t\in [0,2\pi]$ . Then $\vec{r}'(t) = (0,1)$ . However in my book, the answer is $8\pi$ . What have I done wrong? EDIT: This was very instructive! I didn't do the change of coordinates correctly. Funnily enough, I just learned how to do change of tangent basis coordinates in my geometry class but didn't see the application here.
|
Starting from scratch. I can only confirm what Ninad Munshi was saying: you are asking for problems when you do vector integrals with non-Cartesian unit vectors. To disentangle the notation I write $$ \boldsymbol{\gamma}(t)=\gamma_x(t)\,\hat{\boldsymbol{x}}+\gamma_y(t)\,\hat{\boldsymbol{y}} $$ for the curve over which we integrate and start with Cartesian coordinates. The tangent vector at this curve is $$ \boldsymbol{\gamma}'(t)=\gamma_x'(t)\,\hat{\boldsymbol{x}}+\gamma_y'(t)\,\hat{\boldsymbol{y}}=\underbrace{\gamma_x'(t)\,\partial_x+\gamma_y'(t)\,\partial_y}\,. $$ The notation in the underbraced term has the huge advantage that the chain rule leads directly to the expressions in polar coordinates: $$ \boldsymbol{\gamma}'(t)=\gamma_r'(t)\,\partial_r+\gamma_\theta'(t)\,\partial_\theta\,, $$ where \begin{align}\tag{1} \partial_x&=\cos\theta\,\partial_r-\color{red}{\tfrac1r}\sin\theta\,\partial_\theta\,,& \partial_y&=\sin\theta\,\partial_r+\color{red}{\tfrac1r}\cos\theta\,\partial_\theta\,. \end{align}
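As a numerical cross-check in Cartesian coordinates (a Python sketch; the discretization size is an arbitrary choice): on the circle the field is $\vec F=2\hat\theta$ with $\hat\theta=(-\sin t,\cos t)$ , and the velocity of the path is $\vec r\,'(t)=2\hat\theta$ , so the integrand is the constant $4$ and the integral is $8\pi$ , matching the book.

```python
import math

N = 100_000
total = 0.0
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N          # midpoint rule on [0, 2*pi]
    F = (-2 * math.sin(t), 2 * math.cos(t))  # field r*theta_hat at r = 2
    dr = (-2 * math.sin(t), 2 * math.cos(t)) # r'(t) for r(t) = (2 cos t, 2 sin t)
    total += (F[0] * dr[0] + F[1] * dr[1]) * (2 * math.pi / N)
print(total)  # approximately 8*pi: the tangent vector is 2*theta_hat, not theta_hat
```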
|
|calculus|linear-algebra|multivariable-calculus|vector-analysis|
| 1
|
What are the formulas for the circumradius, surface area and volume of each Kepler-Poinsot polyhderon based on the length of the entire edge?
|
Every formula I've found online is based on only a part of the total edge. If anyone knows the formulas based on each red edge below I would greatly appreciate it. A derivation of those formulas would be even better.
|
As I understood from the discussion in the comments, a great stellated dodecahedron with the pentagram chord $c$ has circumsphere radius $$r_c=c\cdot \frac{\sqrt{3}\cdot (3+\sqrt{5})}{4\cdot (2+\sqrt{5})}= c\cdot \frac{\sqrt{3}\cdot (\sqrt{5}-1)}4,$$ surface area $$A=c^2\cdot \frac{15\sqrt{5+2\sqrt{5}}}{(2+\sqrt{5})^2}=c^2\cdot \frac{15\sqrt{5+2\sqrt{5}}}{9+4\sqrt{5}},$$ and volume $$V =c^3\cdot \frac{5\cdot (3+\sqrt{5})}{4\cdot (2+\sqrt{5})^3}=c^3\cdot \frac{5\cdot (13\sqrt{5}-29)}{4}.$$ To avoid the ambiguity with the multiplication and the division in the formulae at the linked page, I used the provided calculator to numerically check the final formulae above, plugging into them $c=1$ .
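The algebraic simplifications above can be double-checked numerically as well (a Python sketch comparing each raw form against its simplified form):

```python
import math

s5 = math.sqrt(5)
s3 = math.sqrt(3)

# (raw form, simplified form) for circumradius factor, surface area factor, volume factor
pairs = [
    (s3 * (3 + s5) / (4 * (2 + s5)), s3 * (s5 - 1) / 4),
    (15 * math.sqrt(5 + 2 * s5) / (2 + s5) ** 2, 15 * math.sqrt(5 + 2 * s5) / (9 + 4 * s5)),
    (5 * (3 + s5) / (4 * (2 + s5) ** 3), 5 * (13 * s5 - 29) / 4),
]
print(pairs)
```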
|
|geometry|polyhedra|derivation-of-formulae|
| 0
|
Finding Smooth Numbers
|
This is a German Olympiad exercise from 2015. Call a number $n\in\mathbb{N}$ smooth if there are $a_1,...,a_n\in\mathbb{Z}$ such that $$ a_1+a_2+...+a_n=a_1\cdot a_2\cdot ... \cdot a_n=n. $$ Find all smooth numbers. So I think that it is evident that every composite number can be written as $$ n=a_1+...+a_m=a_1\cdot ...\cdot a_m $$ for some $m\leq n$ , because we just take one prime factor of $n$ and then add only $1$ s. But the number is smooth only if $m=n$ . I have trouble finding solutions; I found the obvious solution $1=1$ . Has anyone got an idea of how to find more or all smooth numbers?
|
First of all, let's assume $n$ is an odd smooth number. Let $m_1$ be the number of $a_i$ 's which are $\equiv 1\pmod{4}$ and $m_2$ be the number of $a_i$ 's which are $\equiv -1\pmod{4}$ . Then, we will have: $$m_1+m_2=n \ (*),\\ m_1-m_2 \equiv n\pmod{4} \ \ (**), \\ (-1)^{m_2} \equiv n\pmod{4} \ \ \ (***).$$ From $(*)$ and $(**)$ , we conclude that $2m_2 \equiv 0 \pmod{4}$ . Hence $m_2$ must be even. Now, by $(***)$ , it is clear that we must have $n=4k+1$ for some $k \in \mathbb N.$ In this case, $$4k+1=\underbrace{1+1+\cdots+1}_{2k}+\underbrace{(-1)+(-1)+\cdots+(-1)}_{2k}+(4k+1).$$ Now, let's assume $n$ is an even smooth number. If $n$ is of the form $4k+2$ for some $k \in \mathbb N,$ then exactly one of the $a_i$ 's, say $a_n$ , must be even. Therefore, $a_1+ ... + a_{4k+1}=n-a_n=(4k+2)-a_n$ , which is impossible because the left hand of the identity is odd while the right hand is even. Hence, an even smooth number must be a multiple of $4$ . If $n=8k$ where $k \in \mathbb N$ , the
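A small brute-force check of the construction for $n\equiv 1\pmod 4$ (a Python sketch; the helper names are hypothetical, and the test values are an arbitrary sample):

```python
def multiset_for(n):
    # the answer's construction for odd smooth n = 4k + 1:
    # 2k ones, 2k minus-ones, and n itself (n terms total)
    assert n % 4 == 1 and n > 1
    k = (n - 1) // 4
    return [1] * (2 * k) + [-1] * (2 * k) + [n]

def is_witness(ms, n):
    # the multiset witnesses smoothness iff it has n terms, sum n, and product n
    s = sum(ms)
    p = 1
    for a in ms:
        p *= a
    return len(ms) == n and s == n and p == n

results = [is_witness(multiset_for(n), n) for n in (5, 9, 13, 101)]
print(results)
```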
|
|number-theory|elementary-number-theory|contest-math|
| 1
|
Finding integer solutions for a logarithmic equation with constraints
|
I am struggling with a logarithmic equation where I am required to find integer solutions for a such that x is also an integer and greater than 4. The equation is given as: $a = \log_2\left(\frac{9 \cdot 2^x - 112}{2^x - 13}\right) - 1$ To clarify, I am looking for the values of x that make a an integer where x > 4 . I understand that for a to be an integer, the argument of the log base 2 (i.e., $\frac{9 \cdot 2^x - 112}{2^x - 13}$ ) must be a power of 2. The constants in the numerator and denominator, along with the subtraction of 1, make it non-trivial to find such x . Could anyone provide insight or a method to determine the possible integer values of x that satisfy the given conditions? Any assistance or suggestions on how to approach this would be greatly appreciated. Thank you for your time and help!
|
For $a$ to be an integer $f(x)=\log_2\left(\frac{9\cdot2^x-112}{2^x-13}\right)$ must be an integer. You can graph this and look for integer solutions: $f$ has a vertical asymptote where $9\cdot2^{x}-112=0$ , i.e. at $x=\log_2(112/9)\approx 3.64$ (and the argument of the logarithm is negative between this point and $x=\log_2 13\approx 3.70$ , where the denominator vanishes). The limit of $f(x)$ as $x\to +\infty$ is $\log_2 9\approx 3.17$ . The limit of $f(x)$ as $x\to -\infty$ is $\log_2(112/13)\approx 3.107$ . Furthermore, by taking the derivative, one can show the function is strictly decreasing everywhere it is continuous. Evaluating at some integer points: $f(4)=3.415037\ldots$ is not an integer. As $f(x)$ is strictly decreasing to $3.17\ldots$ , there will be no integer solutions for $x\geq4$ . $f(2)=3.078002\ldots$ , and as $\lim_{x\to -\infty}f(x)=3.107\ldots$ , there are no integer solutions for $x\leq 2$ . The last point to check is $x=3$ , which evaluates to $f(3)=\log_2\left(\frac{72-112}{8-13}\right)=\log_2 8=3$ , an integer; but $x=3$ violates the constraint $x>4$ , so there are no solutions with $x>4$ .
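The integer-point evaluations are quick to reproduce (a Python sketch; the sampled points are an arbitrary selection from the analysis above):

```python
import math

def f(x):
    # the inner function whose value must be an integer for a to be an integer
    return math.log2((9 * 2 ** x - 112) / (2 ** x - 13))

# x = 3 gives (72 - 112) / (8 - 13) = 8 exactly, hence f(3) = 3
vals = {x: f(x) for x in (2, 3, 4, 5, 6)}
print(vals)
```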
|
|logarithms|diophantine-equations|
| 1
|
Fundamental group of cyclic group $Z_n$
|
How to calculate the fundamental group of a discrete group like the cyclic group $Z_n$ ? Physically, it seems that $\pi_1(Z_2)$ will be trivial, as there can be no vortex structure for $Z_2$ symmetry. But there can be some "vortex" structure in $Z_n, n\geq3$ , so $\pi_1(Z_n), n\geq3$ may be non-trivial.
|
A discrete space is not path-connected, so you need to choose a basepoint in your discrete group. Then, any loop lands in the path component of the chosen basepoint which is contractible, hence $\pi_1(G)=0$ for any discrete group $G$ .
|
|algebraic-topology|homotopy-theory|fundamental-groups|
| 1
|
solution-verification | Prove that the triangles $AMD'$ and $ACB'$ have the same center of gravity.
|
the question Let the cube $ABCDA'B'C'D'$ and $M$ be a point on the ray $(AB$ so that $BM=AB$ . Prove that the triangles $AMD'$ and $ACB'$ have the same center of gravity. the drawing the idea In triangle $AMD'$ , $B$ is the midpoint of $MA$ , which means that $D'B$ is a median, and the center of gravity of that triangle (let it be $G_1$ ) verifies the relation $G_1B=\frac{D'B}{3}$ . Let $DB$ intersect $AC$ in point $O$ $\Rightarrow$ $O$ is the midpoint of $BD$ and $AC$ . In triangle $ACB'$ , $O$ is the midpoint of $AC$ , which means that $B'O$ is a median, and the center of gravity of that triangle (let it be $G_2$ ) verifies the relation $G_2O=\frac{B'O}{3}$ . Let the intersection of lines $D'B, B'O$ be $G$ . $D'B' \parallel OB \Rightarrow$ triangles $D'B'G$ and $BOG$ are similar with ratio of similarity $2$ , which means $GB=\frac{D'B}{3}$ and $GO=\frac{OB'}{3}$ . This means that $G_1, G_2$ and $G$ are the same points and we arrive at the desired conclusion. I'm not sure if showing these ratios means that the points coincide
|
Your solution is correct and perfect, bravo! The strategy to invoke a third intermediate point $G$ which is shown to coincide with both the concerned centroids works out well. The reason your conclusion of points in same ratio implying coincidence holds is because you first show that the points $G_1$ and $G$ lie on the same line ( $D'B$ ), then show that their distance from a fixed point ( $B$ ) is the same (since length of $D'B$ is fixed) and on the same side of B (obviously), meaning the points must be coincident. You can think of this like a circle of fixed radius can cut a ray passing through the centre at exactly one point. This proves $G_1 = G$ . By identical logic, $G_2 = G$ and hence by transitivity, $G_1 = G_2$ and you're done. Having said this, I feel it's important to suggest a couple of tangible improvements, the first being vocabulary. Mathematics already is an intricate subject and there's simply no need to add to that by using non-standard terms such as centre of gravity
|
|geometry|solution-verification|
| 1
|
Area of a triangle in Lockhart's Lament
|
In the essay Lockhart's Lament (page 4), the author describes a proof for the standard area of triangle $(bh)/2$ by enclosing the triangle in a rectangle and chopping the rectangle into two (perhaps unequal parts) using its altitude (which is the perpendicular from the top vertex to the base). Of the four triangles so formed, we see that they are partitioned into pairs of congruent triangles of equal area; hence the area of the triangle is precisely half of the rectangle. We assume that the altitude is lying inside the triangle here. Now suppose the altitude lies outside the triangle. Can a similar argument still be made? I am not looking for any rigorous proof but only a hint.
|
Yes - the rectangle is split into two large right triangles, and each large right triangle is split into an obtuse triangle and a small right triangle.
|
|euclidean-geometry|triangles|area|
| 1
|
Why do we believe the equation $ax+by+c=0$ represents a line?
|
I'm going for quite a weird question here. As we know, the equation in Cartesian coordinates for a line in 2-dimensional Euclidean geometry is of the form $ax+by+c=0$. I'm wondering why do we "believe" the plotted graph is the same "line" as in our intuition. It might sound crazy, but think of the time when there were still no coordinates, no axes, no analytic geometry. When Descartes started to grasp the concept that equations represented geometric figures (or more accurately, loci) he would have tried plotting easy forms first, and what else could be easier than $y=x$ or $y=2x+3$ etc. Plotting those revealed something evidently a line to his (and our) naked eyes, but it wouldn't be appropriate for a mathematician to conclude from that alone that the figure is actually a "line", right? So jumping back to our own time, if we forget for once that $ax+by+c=0$ "is" a line, looking at it with fresh eyes, by what criteria are we using to say it is so. I've tried some regular characterizatio
|
Let $A=\left(x_{A},y_{A}\right)$ and $B=\left(x_{B},y_{B}\right)$ be distinct points. We define the line $l$ through points $A$ and $B$ to be the set of points $P=\left(x,y\right)$ such that $\overrightarrow{BP}\parallel\overrightarrow{AB}$ (i.e. $\left(x-x_{B},y-y_{B}\right)=k\left(x_{B}-x_{A},y_{B}-y_{A}\right)$ for some $k\in \mathbb R$ ). With some algebra, we can show $l=\left\{ \left(x,y\right):ax+by+c=0\right\}$ (for some $a,b,c\in\mathbb{R}$ with at least one of $a$ or $b$ non-zero).
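One way to spell out the "some algebra" step: eliminating the parameter $k$ between the two component equations gives

```latex
\left(x - x_B,\; y - y_B\right) = k\left(x_B - x_A,\; y_B - y_A\right)
\;\Longrightarrow\;
(x - x_B)(y_B - y_A) - (y - y_B)(x_B - x_A) = 0,
```

which is $ax+by+c=0$ with $a=y_B-y_A$, $b=x_A-x_B$, and $c=y_B(x_B-x_A)-x_B(y_B-y_A)$; since $A\neq B$, at least one of $a,b$ is non-zero.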
|
|soft-question|euclidean-geometry|analytic-geometry|
| 0
|
Recurrence Relation with Alternating Variable
|
I am trying to turn the recurrence relation into a linear equation in order to sum some probabilities. I've done this with some recursive stuff similar to the Fibonacci sequence, but I was told by a friend that this is much harder because I have an alternating variable. $x_n=(1-x_{n-1})*(\frac{1}{36+12(-1)^{n}}),$ $x_1=\frac{1}{24}, x_2=\frac{23}{1152}, x_3$ should equal $\frac{1129}{27648}$ The recursive sequence works as intended, and I could definitely program it to approach the eventual number I want, but I'd rather get the proper linear one. I tried to apply the method some people on here explained to me for fibonacci-esque sequences, but I couldn't get it to work. Honestly I don't think I completely get what exactly is happening when I translate from one to the other, so that might be even more important than this specific equation. If I did not explain anything right please ask me, thank you.
|
Alternate terms of your sequence satisfy linear recurrences with constant coefficients: \begin{align} x_{2n+1}&=\frac{1}{24}\big(1-x_{2n}\big)\\ &=\frac{1}{24}-\frac{1}{24\cdot48}\big(1-x_{2n-1}\big)\\ &=\frac{47}{1152}+\frac{x_{2(n-1)+1}}{1152}\ . \end{align} The solution of this recurrence is \begin{align} x_{2n+1}&=\frac{x_1}{1152^n}+47\left(\frac{1152^n-1}{1152^{n+1}-1152^n}\right)\\ &=\frac{1}{24\cdot1152^n}+\frac{47\big(1152^n-1\big)}{1151\cdot1152^n}\\ &=\frac{1128\cdot1152^n+23}{27624\cdot1152^n}\ , \end{align} and the evenly indexed terms of the sequence are then given by \begin{align} x_{2n}&=1-24x_{2n+1}\\ &=\frac{23\big(1152^n-1\big)}{1151\cdot1152^n}\ . \end{align}
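These closed forms can be verified against the original recurrence in exact rational arithmetic; a quick sketch:

```python
from fractions import Fraction

def x_recursive(n):
    # Original recurrence: x_n = (1 - x_{n-1}) / (36 + 12*(-1)^n), x_1 = 1/24.
    x = Fraction(1, 24)
    for k in range(2, n + 1):
        x = (1 - x) / (36 + 12 * (-1) ** k)
    return x

def x_odd(n):   # claimed closed form for x_{2n+1}
    return Fraction(1128 * 1152**n + 23, 27624 * 1152**n)

def x_even(n):  # claimed closed form for x_{2n}, n >= 1
    return Fraction(23 * (1152**n - 1), 1151 * 1152**n)

assert x_recursive(3) == Fraction(1129, 27648)  # matches the OP's value
assert all(x_recursive(2 * n + 1) == x_odd(n) for n in range(6))
assert all(x_recursive(2 * n) == x_even(n) for n in range(1, 6))
print("closed forms match the recurrence")
```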
|
|linear-algebra|recurrence-relations|
| 0
|
if $\tan(\cot(x))=\cot(\tan(x))$ find $\sin(2x)$
|
If $\tan(\cot(x))=\cot(\tan(x))$ , find $\sin(2x)$ . so to start $\cot\left(\frac{\pi}{2}-\cot \left(x\right)\right)=\cot\left(\tan\left(x\right)\right)$ or $\frac{\pi}{2}=\tan\left(x\right)+\frac{1}{\tan\left(x\right)}$ which is $\frac{\sec^2\left(x\right)}{1+\tan\left(x\right)}$ now using the sin(2x) formula that should give $\frac{4}{\pi}$ however, this is wrong why is it wrong?
|
You got the first line right, but since $\tan$ is a periodic function with period $π$ , $\tan(a)=\tan(b)$ implies that $a=b+nπ$ where $n\in \mathbb{Z}$ . So $$\tan(\cot(x))=\cot(\tan(x))=\tan(π/2-\tan(x))$$ $$\cot(x)=π/2-\tan(x)+nπ$$ $$π/2+nπ=\tan(x)+\cot(x)$$ $$\frac{(2n+1)π}{2}=\frac{\sin^2(x)}{\sin(x)\cos(x)}+\frac{\cos^2(x)}{\cos(x)\sin(x)}=\frac{1}{\sin(x)\cos(x)}$$ $$\frac{(2n+1)π}{4}=\frac{1}{2\sin(x)\cos(x)}=\frac{1}{\sin(2x)}$$ $$\sin(2x)=\frac{4}{π(2n+1)}$$
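A quick numerical sanity check of this result (taking $n=1$, so $\sin(2x)=\frac{4}{3\pi}$); incidentally, $n=0$ would force $\sin(2x)=\frac{4}{\pi}>1$, which no real $x$ attains, so the value $\frac{4}{\pi}$ can never actually occur:

```python
import math

# For n = 1 the derivation predicts sin(2x) = 4/(3*pi) when
# tan(x) + cot(x) = 3*pi/2.  Pick the corresponding x and verify the
# original equation tan(cot(x)) = cot(tan(x)) numerically.
n = 1
x = 0.5 * math.asin(4 / (math.pi * (2 * n + 1)))
lhs = math.tan(1 / math.tan(x))   # tan(cot(x))
rhs = 1 / math.tan(math.tan(x))   # cot(tan(x))
assert math.isclose(lhs, rhs, rel_tol=1e-9)
print(lhs, rhs)
```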
|
|algebra-precalculus|trigonometry|
| 0
|
Why do we believe the equation $ax+by+c=0$ represents a line?
|
I'm going for quite a weird question here. As we know, the equation in Cartesian coordinates for a line in 2-dimensional Euclidean geometry is of the form $ax+by+c=0$. I'm wondering why do we "believe" the plotted graph is the same "line" as in our intuition. It might sound crazy, but think of the time when there were still no coordinates, no axes, no analytic geometry. When Descartes started to grasp the concept that equations represented geometric figures (or more accurately, loci) he would have tried plotting easy forms first, and what else could be easier than $y=x$ or $y=2x+3$ etc. Plotting those revealed something evidently a line to his (and our) naked eyes, but it wouldn't be appropriate for a mathematician to conclude from that alone that the figure is actually a "line", right? So jumping back to our own time, if we forget for once that $ax+by+c=0$ "is" a line, looking at it with fresh eyes, by what criteria are we using to say it is so. I've tried some regular characterizatio
|
Another approach. The length of a curve $y(x)$ between $A(x_1,y_1)$ and $B(x_2,y_2)$ is given by the functional $F(y)=\int_A ^B f(x,y,y')\,dx$ where $f(x,y,y')=\sqrt{1+(y')^2},$ and the distance between $A$ and $B$ is its minimum. Then the Euler–Lagrange equation $f_y-(f_{y'})'=0$ that I remember from the calculus of variations course gives $\frac{y'}{\sqrt{1+(y')^2}}=c$ . Hence $y'=\frac{c}{\sqrt{1-c^2}}=:m$ and $y=mx+n.$ Vertical lines are not functions (the $c=\pm1$ case).
|
|soft-question|euclidean-geometry|analytic-geometry|
| 0
|
Show that $ \Vert Tx_\varepsilon -x_\varepsilon \Vert \leq \varepsilon $
|
Exercise: Let $X$ be a Banach space and $\Vert \cdot \Vert$ a norm on $X$ . Let $B_1= \{ x \in X:\Vert x \Vert=1 \} $ . Suppose the operator $T: B_1\rightarrow B_1$ satisfies $\Vert Tx-Ty\Vert \leq \Vert x-y\Vert$ for all $x, y\in B_1$ . Then for all $\varepsilon \in (0,1)$ there exists $x_\varepsilon\in B_1$ such that $\Vert Tx_\varepsilon-x_\varepsilon \Vert\leq \varepsilon$ . My Attempt: This exercise looks quite like the Fixed Point Theorem, so I wanted to prove it in a similar way. But I failed because, without a contraction constant $0<k<1$ as in the Fixed Point Theorem, I can't build a Cauchy sequence. So I can't find an $x$ , even one depending on $\varepsilon$ , that makes $Tx$ and $x$ close enough. I'm not sure if my attempt is on the right track. Update: According to @Brian-Moehring, there is a mistake in this exercise. Maybe $B_1$ should be the unit ball rather than the unit sphere.
|
In fact, if $B_1$ is the unit ball rather than the unit sphere, I can give a proof. For all $r$ with $0<r<1$ , it's clear that $rT$ is a contraction mapping from $B_1$ to $B_1$ . So there is a fixed point $y$ for $rT$ , i.e. $rT(y)=y$ . Then we have $$\Vert T(y)-y\Vert =\Vert T(y)-rT(y) \Vert=(1-r)\Vert T(y) \Vert \le 1-r.$$ So given $\varepsilon$ , taking $r$ close enough to $1$ we have $\Vert T(y_\varepsilon)-y_\varepsilon\Vert \le 1-r\le \varepsilon$ , where $y_\varepsilon$ is the fixed point of $rT$ .
|
|functional-analysis|banach-spaces|
| 0
|
Number of ways to form a sequence of 10 letters from four 'a's, four 'b's, four 'c's, and four 'd's if each letter must appear at least twice
|
Question: How many ways are there to form a sequence of 10 letters from four 'a's, four 'b's, four 'c's, and four 'd's if each letter must appear at least twice? My Approach: I started out with the exponential generating function where: XA represents the number of 'A's, XB represents the number of 'B's, XC represents the number of 'C's, XD represents the number of 'D's, and $XA + XB + XC + XD = 10$ . Here, $A(XA) = \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!}$ , and $A(X) = (A(XA))^4$ . My Problem: I am unable to solve or reduce it to a standard formula of $e^x$ from which I can easily calculate the coefficient of $ x^{10} $ . Can I do it directly using permutation with limited repetition?
|
You can do it with basic combinatorial methods: what you have are the letters aabbccdd?? with ?? representing wild-card letters (the eight mandatory letters alone could be arranged in $\frac{8!}{2!2!2!2!}$ ways, but we count the full $10$-letter strings directly). There are $4$ possibilities where the two ?? are the same letter, and $\binom{4}{2}=6$ where they are different. There are $\frac{10!}{2!2!2!4!}$ arrangements in the first case and $\frac{10!}{2!2!3!3!}$ in the second. Combining the possibilities we have $4\times \frac{10!}{2!2!2!4!}+6\times \frac{10!}{2!2!3!3!}$ , which evaluates to $226800$.
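This count can be cross-checked by summing multinomial coefficients over all admissible letter-count vectors (a small Python check, not part of the original answer):

```python
from math import factorial
from itertools import product

# Sum the multinomial coefficient 10!/(na! nb! nc! nd!) over all count
# vectors with each letter used between 2 and 4 times and total 10.
total = 0
for counts in product(range(2, 5), repeat=4):
    if sum(counts) == 10:
        coef = factorial(10)
        for c in counts:
            coef //= factorial(c)
        total += coef

print(total)  # 226800
```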
|
|combinatorics|
| 0
|
Number of ways to form a sequence of 10 letters from four 'a's, four 'b's, four 'c's, and four 'd's if each letter must appear at least twice
|
Question: How many ways are there to form a sequence of 10 letters from four 'a's, four 'b's, four 'c's, and four 'd's if each letter must appear at least twice? My Approach: I started out with the exponential generating function where: XA represents the number of 'A's, XB represents the number of 'B's, XC represents the number of 'C's, XD represents the number of 'D's, and $XA + XB + XC + XD = 10$ . Here, $A(XA) = \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!}$ , and $A(X) = (A(XA))^4$ . My Problem: I am unable to solve or reduce it to a standard formula of $e^x$ from which I can easily calculate the coefficient of $ x^{10} $ . Can I do it directly using permutation with limited repetition?
|
Can I do it directly using permutation with limited repetition? Sure. Every letter has to appear twice. That means $8$ out of the $10$ positions are already occupied and we have only $2$ positions left to fill. These $2$ positions can be filled using the same letter or two different letters. Let's consider these cases separately. Case 1: Same letter If we fill the two remaining places using the same letter, we can choose any of the four letters to fill with. And for every letter we choose, the number of permutations would be $$\frac{10!}{2! \cdot 2! \cdot 2! \cdot 4!}$$ [There will be three letters repeated two times each and one letter repeated four times.] So the total number of permutations for this case equals $$\frac{4 \cdot 10!}{2! \cdot 2! \cdot 2! \cdot 4!}$$ Case 2: Different letters We can pick two letters out of the four in ${4 \choose 2} = 6$ ways. And for every combination we choose, the number of permutations of the $10$ letters would be $$\frac{10!}{2! \cdot 2! \cdot 3! \cdot 3!}$$ [Two letters repeated two times each and two letters repeated three times each.] So the total for this case equals $$\frac{6 \cdot 10!}{2! \cdot 2! \cdot 3! \cdot 3!}$$ Adding the two cases gives $$\frac{4 \cdot 10!}{2! \cdot 2! \cdot 2! \cdot 4!} + \frac{6 \cdot 10!}{2! \cdot 2! \cdot 3! \cdot 3!} = 75600 + 151200 = 226800.$$
|
|combinatorics|
| 1
|
Finding integer solutions for a logarithmic equation with constraints
|
I am struggling with a logarithmic equation where I am required to find integer solutions for a such that x is also an integer and greater than 4. The equation is given as: $a = \log_2\left(\frac{9 \cdot 2^x - 112}{2^x - 13}\right) - 1$ To clarify, I am looking for the values of x that make a an integer where x > 4 . I understand that for a to be an integer, the argument of the log base 2 (i.e., $\frac{9 \cdot 2^x - 112}{2^x - 13}$ ) must be a power of 2. The constants in the numerator and denominator, along with the subtraction of 1, make it non-trivial to find such x . Could anyone provide insight or a method to determine the possible integer values of x that satisfy the given conditions? Any assistance or suggestions on how to approach this would be greatly appreciated. Thank you for your time and help!
|
We have equivalently $2^{a+1}=\dfrac{9 \cdot 2^x - 112}{2^x - 13}$ and, performing the division, $$2^{a+1}=9+\dfrac{5}{2^x-13}.$$ For $x\gt4$ one has $2^x-13\ge19$ , so $9\lt 2^{a+1}\le 9+\tfrac{5}{19}\lt 16$ , and no power of $2$ lies in that range; hence there are no integer solutions. (We can note that $x=3$ would give $2^{a+1}=9+\tfrac{5}{-5}=8$ , i.e. $a=2$ , but we have the restriction $x\gt4$ .)
|
|logarithms|diophantine-equations|
| 0
|
Prove that $a+b+c+d \geq \frac{2}{a+1}+\frac{2}{b+1}+\frac{2}{c+1}+\frac{2} {d+1}$
|
the question Let $a,b,c,d>0$ with $a+b+c+d \geq \frac{1}{a}+ \frac{1}{b}+\frac{1}{c}+ \frac{1}{d}$ . Prove that $a+b+c+d \geq \frac{2}{a+1}+\frac{2}{b+1}+\frac{2}{c+1}+\frac{2}{d+1}$ the idea Maybe $\frac{1}{a}+a \geq 2$ would be useful... The inequality looks like it could be obtained from the inequality of means, more explicitly from the arithmetic and harmonic means: $1+\frac{a+b+c+d}{4} \geq \frac{4}{\frac{1}{a+1}+\frac{1}{b+1}+\frac{1}{c+1}+\frac{1}{d+1}}$ I tried processing it and using $a+b+c+d \geq \frac{1}{a}+ \frac{1}{b}+\frac{1}{c}+ \frac{1}{d}$ , but honestly I got nothing useful. Hope one of you can help me! Thank you!
|
From the inequality between harmonic and arithmetic mean: $$ \frac{2}{x+1} \le \frac 12 \left( \frac 1x + 1\right) \, . $$ Now use $\frac{1}{x}+x \geq 2$ to conclude that $$ \frac{2}{x+1} \le \frac 12 \left( \frac 1x + \frac 12 \left( \frac 1x + x\right) \right) = \frac 34 \frac 1x + \frac 14 x \, . $$ Then add these inequalities for $x = a, b, c, d$ and use the given $a+b+c+d \geq \frac{1}{a}+ \frac{1}{b}+\frac{1}{c}+ \frac{1}{d}$ .
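Written out, the concluding summation is:

```latex
\sum_{x \in \{a,b,c,d\}} \frac{2}{x+1}
  \;\le\; \frac{3}{4}\left(\frac1a+\frac1b+\frac1c+\frac1d\right)
          + \frac{1}{4}\,(a+b+c+d)
  \;\le\; \frac{3}{4}\,(a+b+c+d) + \frac{1}{4}\,(a+b+c+d)
  \;=\; a+b+c+d .
```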
|
|inequality|
| 1
|
What structure do smooth maps and diffeomorphisms preserve?
|
Context: Continuous maps between topological spaces are structure preserving in the following sense: Given two topological spaces $(X,\tau_X),(Y,\tau_Y)$ (where $(X,\tau_X)$ is the topological space of set $X$ endowed with a topology $\tau_X$ ; similarly $(Y,\tau_Y)$ ), a map $f:(X,\tau_X)\to (Y,\tau_Y)$ is continuous iff $f^{-1}(V)\subset X$ is open whenever $V\subset Y$ open. A smooth manifold is a Hausdorff second countable locally Euclidean space with a smooth structure. The smooth structure is one where you take a smooth atlas and consider the unique maximal smooth atlas generated by it. Problem: Question 1: What is the structure on a smooth manifold? Is it this maximal smooth atlas itself or merely the requirement that the co-ordinate charts (which the underlying topological manifold already has) need to be smoothly compatible? Question 2: I want to show or think of smooth maps as the maps that preserve this structure of smooth manifolds. How can I do that? Here is my (WRONG) gue
|
To answer question (1), it is the atlas itself which defines the smooth structure. Given a smooth manifold $M$ with its atlas that I will denote $\mathcal A$ , it is quite possible that $M$ has another smooth structure with atlas denoted $\mathcal A'$ , but that $\mathcal A$ and $\mathcal A'$ are not compatible with each other in the sense that the overlap map between some chart in $\mathcal A$ and some chart in $\mathcal A'$ is not smooth; if that happened, we would say that $\mathcal A$ and $\mathcal A'$ define distinct smooth structures on $M$ . Even on the real number line $M=\mathbb R$ this is possible: take $\mathcal A$ to have a unique chart $(U,\phi)$ where $U=\mathbb R$ and $\phi : \mathbb R \to \mathbb R$ is the identity map $\phi(x)=x$ ; and take $\mathcal A'$ to have also a unique chart $(V,\psi)$ where $V=\mathbb R$ and $\psi : \mathbb R \to \mathbb R$ is the cubing map $\psi(x)=x^3$ . The overlap map $\phi(\psi^{-1}(y)) = \sqrt[3]{y}$ is not smooth. The correct answer to
|
|differential-geometry|smooth-manifolds|smooth-functions|
| 1
|
Proof: All functions in $L^2[0,1]$ are in $L^1[0,1]$
|
I would like to know if my demonstration that all functions in $L^2[0,1]$ are in $L^1[0,1]$ is correct. For every $f\in L^2[0,1]$ we can split $[0,1]$ into two sets: $A=\left \{0\leq x\leq 1:|f(x)|>1 \right \}$ and $\textrm{no}A=\left \{0\leq x\leq 1:0\leq |f(x)|\leq 1 \right \}$ (of course $A \cup \textrm{no}A=[0,1]$ ). We know too that $y^2>|y|$ iff $|y|>1$ . So: (1) $\int_{\textrm{no}A}|f(x)|\,\mathrm{d}x \leq \int_{0}^{1}1\,\mathrm{d}x=1$ ; (2) $\infty>\int_{0}^{1}f^2(x)\,\mathrm{d}x\ge\int_{A}f^2(x)\,\mathrm{d}x> \int_{A}|f(x)|\,\mathrm{d}x$ . So by (1)+(2): $\int_{0}^{1}|f(x)|\,\mathrm{d}x = \int_{\textrm{no}A}|f(x)|\,\mathrm{d}x+\int_{A}|f(x)|\,\mathrm{d}x<\infty$ , so by definition $f$ is in $L^1[0,1]$ too. Is it correct? Thank you.
|
Your proof is nicer than the one below (in my view), but here is another solution. Using Hölder's inequality, we first identify $\frac{1}{p} = \frac{1}{1} = \frac{1}{2} + \frac{1}{2} = \frac{1}{q}+ \frac{1}{r}$ , i.e. $p=1$ , $q=r=2$ . Hölder's inequality states that $\| f g \|_p \leq \| f \|_q \| g\|_r$ ; taking $g(x) = 1$ we can write $$\| f \|_1 = \int_0 ^1 |f(x) \cdot 1|\, dx \leq \left(\int_0 ^1 |f(x)|^2\, dx\right)^{1/2} \left(\int_0 ^1 1^2\, dx\right)^{1/2} = \|f\|_2 < \infty.$$ In other words $f \in L^1 [0,1]$ . More generally, if $p \leq q$ we can generalize your proof to show that $f \in L^q (X, \mathcal A, \mu) \Rightarrow f \in L^p (X, \mathcal A, \mu)$ with $(X, \mathcal A, \mu)$ a finite measure space, by proceeding similarly. 1- If $p<q$ , there exists $r > 0$ s.t. $\frac{1}{p} = \frac{1}{q} + \frac{1}{r}$ . 2- We apply Hölder's inequality to $f$ and $g(x)=1$ , so we get: $$ \|f \| _p= \|f \cdot 1 \|_p \leq \| f \|_q \cdot \| 1 \|_r = \left(\int |f|^q \,d \mu\right)^{1/q} \mu(X)^{1/r} < \infty. $$
|
|solution-verification|normed-spaces|hilbert-spaces|lp-spaces|
| 0
|
What structure do smooth maps and diffeomorphisms preserve?
|
Context: Continuous maps between topological spaces are structure preserving in the following sense: Given two topological spaces $(X,\tau_X),(Y,\tau_Y)$ (where $(X,\tau_X)$ is the topological space of set $X$ endowed with a topology $\tau_X$ ; similarly $(Y,\tau_Y)$ ), a map $f:(X,\tau_X)\to (Y,\tau_Y)$ is continuous iff $f^{-1}(V)\subset X$ is open whenever $V\subset Y$ open. A smooth manifold is a Hausdorff second countable locally Euclidean space with a smooth structure. The smooth structure is one where you take a smooth atlas and consider the unique maximal smooth atlas generated by it. Problem: Question 1: What is the structure on a smooth manifold? Is it this maximal smooth atlas itself or merely the requirement that the co-ordinate charts (which the underlying topological manifold already has) need to be smoothly compatible? Question 2: I want to show or think of smooth maps as the maps that preserve this structure of smooth manifolds. How can I do that? Here is my (WRONG) gue
|
So first things first, every diffeomorphism is also a homeomorphism by definition, so diffeomorphisms automatically preserve every topological property about manifolds. Since Lee has already given you an excellent answer, I will provide an alternative but equivalent view. Apologies if you know nothing about sheaves. Now a smooth manifold also comes equipped with its sheaf of smooth functions $C^\infty_M$ , that is, for each open set $U\subset M$ we take continuous functions $f:U\rightarrow \mathbb R$ such that in every coordinate chart $(V\subset U,\phi)$ , $f\circ \phi^{-1}$ is a smooth function on an open subset of $\mathbb R^n$ . A continuous map $F:M\rightarrow N$ is then smooth if and only if for every open set $U\subset N$ , we have that: \begin{align} F^\sharp:C_N^\infty(U)&\longrightarrow C^\infty_M(F^{-1}(U))\\ f&\longmapsto f\circ F \end{align} is a morphism of rings which commutes with restriction maps. In other words, we have that $F$ is smooth if and only if this prescripti
|
|differential-geometry|smooth-manifolds|smooth-functions|
| 0
|
Number of ways to form a sequence of 10 letters from four 'a's, four 'b's, four 'c's, and four 'd's if each letter must appear at least twice
|
Question: How many ways are there to form a sequence of 10 letters from four 'a's, four 'b's, four 'c's, and four 'd's if each letter must appear at least twice? My Approach: I started out with the exponential generating function where: XA represents the number of 'A's, XB represents the number of 'B's, XC represents the number of 'C's, XD represents the number of 'D's, and $XA + XB + XC + XD = 10$ . Here, $A(XA) = \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!}$ , and $A(X) = (A(XA))^4$ . My Problem: I am unable to solve or reduce it to a standard formula of $e^x$ from which I can easily calculate the coefficient of $ x^{10} $ . Can I do it directly using permutation with limited repetition?
|
Since two different answers have been posted, here is a mechanical way of computing the count as a sum of $\mathtt{(Lay\;Down\;Pattern)\times(Permute\;Pattern)}$ products of multinomial coefficients: $$\dbinom{10}{4,2,2,2}\dbinom{4}{1,3} + \dbinom{10}{3,3,2,2}\dbinom{4}{2,2} = 226800.$$
|
|combinatorics|
| 0
|
Zero operator on a dense domain
|
Let $T$ be a, possibly unbounded, self-adjoint operator with a dense domain $D(T)$ in a Hilbert space $H$ . If $T=0$ on $D(T)$ , can $T$ be different from the $0$ operator, i.e. $Tv=0$ for all $v\in H$ ?
|
The domain of the adjoint is the whole Hilbert space: since $Tx=0$ for all $x\in D(T)$ , the map $x\mapsto\langle Tx,y\rangle$ is bounded for every $y\in H$ . Then $$0=\langle Tx,y\rangle=\langle x,T^*y\rangle \quad \forall x\in D(T),\,y\in H.$$ Since $D(T)$ is dense, we get $\langle x,T^*y\rangle=0$ for all $x\in H$ , so $T^*y=0$ for all $y\in H$ . As $T$ is self-adjoint, $T=T^*$ is the zero operator on all of $H$ .
|
|functional-analysis|operator-theory|
| 0
|
Understanding Peirce's law
|
I am trying to understand the meaning of Peirce's Law, namely: [(P→Q)→P]→P What does this mean? I am trying to find a practical example of P and Q where the truth of P would follow from the truth of P→Q. Any suggestion?
|
You can think of it like this: If $P$ is false, then $P\to Q$ is true but $P$ is false, so $(P\to Q)\to P$ is false. Therefore, if $(P\to Q)\to P$ is true, it must be that $P$ is true. You asked for a "real world" example. One issue with this is that we need to treat $\to$ as the material conditional, and uses of "if...then..." in natural language, especially in complicated nested statements, often have a different connotation. Let's say $P$ is "you get an A in the class" and $Q$ is "you take the final exam". Imagine the syllabus for a class contains the statement "In order to get an A in the class, you must take the final exam". This can be interpreted as $P\to Q$ . Now suppose your professor says to you, at the end of the semester, "If I follow the final exam policy in the syllabus, I will give you an A in the class". This can be interpreted as $(P\to Q)\to P$ . What can you conclude? Well, if you weren't going to get an A in the class, then the final exam policy would be trivially s
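Since $P$ and $Q$ range over only four truth-value combinations, the tautology can be checked mechanically; a small Python truth-table sketch (using material implication):

```python
from itertools import product

# Material implication: (a -> b) is (not a) or b.
def implies(a, b):
    return (not a) or b

# Peirce's law [(P -> Q) -> P] -> P holds for every assignment of P, Q.
assert all(
    implies(implies(implies(p, q), p), p)
    for p, q in product([False, True], repeat=2)
)
print("Peirce's law is a tautology")
```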
|
|logic|
| 0
|
HoTT and isomorphisms
|
I have heard that Homotopy Type Theory makes it so that isomorphic objects are “equal”. I wonder how this squares with a lot of mathematical examples from Algebra and Set Theory, where the nature of the isomorphism, or a certain class of isomorphisms, and how they interact with other morphisms, is relevant. How can you do any of this math if you just say “isomorphic objects are equal” and that’s that?
|
An equivalence relation in a theory can be called 'equality' when it satisfies the classical principle of "Indiscernibility of identicals", that is to say, for our purpose, the "indiscernibility of equivalent objects". With the way type equivalence is defined in Homotopy Type Theory, "the nature of isomorphisms" is unchanged. But the theory is built so that we have an indiscernibility of equivalent types, and therefore type equivalence is, in this theory, a type equality. More accurately, the HoTT book first axiomatizes identity types, which define a type equality. Type equivalence is then defined based on identity types, and, without the Univalence axiom, it almost satisfies the property of "indiscernibility of equivalent types", with the exception of the reflexivity constructor of type identity, which is "too strict". The Univalence axiom introduces a sort of "alternative constructor" for type identity, making it less strict, richer, so that type equivalence now satisfies the criter
|
|logic|type-theory|homotopy-type-theory|
| 0
|
The difference between Riemann integrable function and Lebesgue integrable function
|
My professor asked me how to intuitively understand Lebesgue's dominated convergence theorem and what's the effect of the integrable dominating function. More specifically, when we are given a Lebesgue integrable function, why does it suffice to consider other things on a subset with finite measure? I think the professor means the following theorem: If $f$ is Lebesgue integrable in $X$, for every $\epsilon>0$, there exists a measurable $E$ with $\mu(E)<\infty$ such that $\int_{X\setminus E}|f|\,d\mu<\epsilon$. Hence I proved the statement using monotone convergence of $|f|\chi_{B(x_0,n)}$, but it seems my professor was not satisfied. She wanted me to explain what's the essential difference between a Lebesgue integrable function and a Riemann integrable function, and what makes a Lebesgue integrable function not Riemann integrable. She said answering the second question will help me answer the original question. I think from the definition of the Riemann integral and the theorem, a Lebesgue integrable function is Riemann integrable if and only if it's continuous almost everywhere.
|
Edit 1: I originally wrote that Riemann integrable functions are those with at most countable sets of discontinuity. This is wrong, as @Ted Shifrin pointed out. Take the indicator function of the Cantor set: it has uncountably many discontinuity points, but they form a set of measure zero, the Cantor set itself. So, we have two questions from your professor: What's the essential difference between a Lebesgue integrable function and a Riemann integrable function? What makes a Lebesgue integrable function not Riemann integrable? Question 1 was essentially answered by you: Riemann integrability requires the function value to not have a large oscillation when $x$ changes slightly. This is true. We ask, for Riemann integrable functions, that they're discontinuous only on sets with Lebesgue measure zero in $\mathbb{R}^{n}$ , and thus "negligible". We could also add the fact that all Riemann integrable functions are bounded, whereas Lebesgue integrable functions need not be so. But that's essent
|
|real-analysis|lebesgue-integral|
| 0
|
Understanding Peirce's law
|
I am trying to understand the meaning of Peirce's Law, namely: [(P→Q)→P]→P What does this mean? I am trying to find a practical example of P and Q where the truth of P would follow from the truth of P→Q. Any suggestion?
|
Just use a very simple way of thinking: True → True is true, True → False is false, False → anything is true. Apply it and you get: If $P$ is false, then $P\to Q$ is true but $P$ is false, so $(P\to Q)\to P$ is a false statement. Therefore, if $(P\to Q)\to P$ is true, it must be that $P$ is true. If $P$ is true and so is $Q$ , then $(P\to Q)\to P$ is true. If $P$ is true and $Q$ is false, then $(P\to Q)\to P$ is also true (false → anything is true).
|
|logic|
| 0
|
Determine Functions of a Reference Triangle for Finite Element
|
I am given a problem to define the functions $\phi_1(x, y)$ , $\phi_2(x, y)$ and $\phi_3(x, y)$ for a single triangular element as a reference to do element assembly. However I'm not sure why the answer is: φ1(x, y) := 1 − (1/2 x) − (1/2 y) φ2(x, y) := (1/2 x) − (1/2 y) φ3(x, y) := y I understand that the φ1(x, y) is a representation of the vertex at point 1, which can be represented by φ1(x, y) := ax + by + c . However how do you obtain the coefficients of the functions φ1(x, y) := 1 − (1/2 x) − (1/2 y) ? How can you create a function out of a vertex? The problem is as below: A two-dimensional triangle, cf. Figure 1 on the left, is given as reference element. Piecewise linear functions are defined and shall be used as test functions; each function takes the value 1 at exactly one corner of the triangle and 0 at all other corners. Define the three functions $\phi_1(x, y)$ , $\phi_2(x, y)$ and $\phi_3(x, y)$ for a single triangular element and compute their gradients. Figure 1: Left: re
|
The three basis functions for linear finite elements are given by functions \begin{equation} \phi_i = a_i x + b_i y + c_i \quad i \in \{ 1, 2, 3 \} \end{equation} and by the conditions of each $\phi_i$ being 1 at one of the triangle vertices and 0 on the other two vertices. So for $\phi_1$ you plug in these conditions and get a system of linear equations for coefficients $a_1, b_1, c_1$ that reads \begin{align} 1 =& \phi_1(0, 0) = c_1, \\ 0 =& \phi_1(1, 1) = a_1 + b_1 + c_1, \\ 0 =& \phi_1(2, 0) = 2a_1 + c_1. \end{align} You solve this system and get the coefficients and then do the same for $\phi_2$ and $\phi_3$ . This is the main idea although there are some tricks to skip the computation and directly guess the solution.
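As a sanity check, here is a small Python snippet (assuming the vertex numbering $v_1=(0,0)$, $v_2=(2,0)$, $v_3=(1,1)$ implied by the system above) verifying that the stated $\phi_i$ satisfy the Kronecker-delta property $\phi_i(v_j)=\delta_{ij}$:

```python
from fractions import Fraction as F

# Reference-triangle vertices, in the order matching phi_1, phi_2, phi_3.
vertices = [(F(0), F(0)), (F(2), F(0)), (F(1), F(1))]

phi = [
    lambda x, y: 1 - x / 2 - y / 2,  # phi_1: 1 at (0,0)
    lambda x, y: x / 2 - y / 2,      # phi_2: 1 at (2,0)
    lambda x, y: y,                  # phi_3: 1 at (1,1)
]

for i, p in enumerate(phi):
    for j, (x, y) in enumerate(vertices):
        assert p(x, y) == (1 if i == j else 0)
print("Kronecker-delta property verified")
```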
|
|finite-element-method|
| 0
|
Let $a_n$ be the sequence defined inductively by $a_1 = 2$ and $a_{n+1} =(1/2)(a_n+2/a_n)$. Prove that $(a_n)^2 ⩾ 2$ for $n ∈ N$.
|
I have already shown that $a_{n} \in \left[1, 2\right]\ \forall\, n \in \mathbb{N}$ . I've also been told to start the proof by assuming $a_{n}^2 < 2$ and finding a contradiction but I have no clue how to do this.
|
Proof via induction. Basis case: $a_1 = 2 > \sqrt{2}.$ Induction case: if $a_n > \sqrt{2}$ , then using AM-GM (strictly, since $a_n \ne \frac{2}{a_n}$ ), $a_{n+1} = \frac{a_n + \frac{2}{a_n}}{2} > \sqrt{(a_n)\left( \frac{2}{a_n} \right)} = \sqrt{2}. $
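For what it's worth, the claim is easy to check in exact arithmetic; the recursion is Newton's method for $\sqrt2$, so the terms never drop below $\sqrt2$:

```python
from fractions import Fraction

# a_1 = 2, a_{n+1} = (a_n + 2/a_n)/2; check a_n^2 >= 2 for the first terms.
a = Fraction(2)
for n in range(1, 10):
    assert a * a >= 2
    a = (a + 2 / a) / 2
print(float(a))  # converges rapidly to sqrt(2) = 1.41421356...
```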
|
|real-analysis|sequences-and-series|algebra-precalculus|limits|upper-lower-bounds|
| 0
|
Let $a_n$ be the sequence defined inductively by $a_1 = 2$ and $a_{n+1} =(1/2)(a_n+2/a_n)$. Prove that $(a_n)^2 ⩾ 2$ for $n ∈ N$.
|
I have already shown that $a_{n} \in \left[1, 2\right]\ \forall\, n \in \mathbb{N}$ . I've also been told to start the proof by assuming $a_{n}^2 < 2$ and finding a contradiction but I have no clue how to do this.
|
We'll proceed by induction in order to prove that for the sequence defined by $a_{n+1} = \left(a_n + \frac{2}{a_n}\right) \cdot \frac{1}{2}, a_1 = 2$ , we have $(a_n)^2 \geq 2$ for all $n$ . We'll first check the base case, and then assume the property holds for $n = k$ to prove it for $n = k+1$ . 1- Base Case ($n=1$): For $a_1 = 2$ , we have $ (a_1)^2 = 2^2 = 4 \geq 2 $ , so the base case holds. 2- Induction Hypothesis: Assume for some $k \geq 1$ that $(a_k)^2 \geq 2$ . 3- Inductive Step: Given $ a_{k+1} = \left(a_k + \frac{2}{a_k}\right) \cdot \frac{1}{2} $ , we need to show that $(a_{k+1})^2 \geq 2$ . Let's expand $(a_{k+1})^2$ : $(a_{k+1})^2 = \left(\left(a_k + \frac{2}{a_k}\right) \cdot \frac{1}{2}\right)^2 = \frac{1}{4}\left(a_k^2 + 4 + \frac{4}{a_k^2}\right) = \frac{(a_k^2 + 2)^2}{4a_k^2}$ It remains to show that $\frac{(a_k^2 + 2)^2}{4a_k^2} \geq 2$ . This is equivalent to $(a_k^2+2)^2 \geq 8a_k^2$ , i.e. $(a_k^2-2)^2 \geq 0$ , which always holds. Hence $(a_{k+1})^2 \geq 2$ , completing the induction.
|
|real-analysis|sequences-and-series|algebra-precalculus|limits|upper-lower-bounds|
| 0
|
Let $a_n$ be the sequence defined inductively by $a_1 = 2$ and $a_{n+1} =(1/2)(a_n+2/a_n)$. Prove that $(a_n)^2 ⩾ 2$ for $n ∈ N$.
|
I have already shown that $a_{n} \in \left[1, 2\right]\ \forall\, n \in \mathbb{N}$ . I've also been told to start the proof by assuming $a_n^2 < 2$ and finding a contradiction, but I have no clue how to do this.
|
You have $a_{n+1}=\dfrac{a_n^2+2}{2a_n}$ and $a_2=\dfrac32$ , so $a_2^2\gt2$ . Consider the function $f(x)=\dfrac{x^2+2}{2x}$ . It is easy to verify that $f'(x)=\dfrac{x^2-2}{2x^2}\gt0$ when $x\gt\sqrt2$ , so $f$ is increasing for $x\gt\sqrt2$ . Since $f(\sqrt2)=\sqrt2$ and $a_1=2\gt\sqrt2$ , induction gives $a_{n+1}=f(a_n)\ge f(\sqrt2)=\sqrt2$ for all $n$ , hence $a_n^2\ge2$ .
|
|real-analysis|sequences-and-series|algebra-precalculus|limits|upper-lower-bounds|
| 0
|
What is the relationship among $a, b$ and $c$ for $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-\left(ax^2+2bxy+cy^2\right)}dx dy=1$
|
What relationship must hold between the constants $a, b$ and $c$ to make: $$ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}{\rm e}^{-\left(\, ax^{2}\ +\ 2bxy\ +\ cy^2\,\right)}\phantom{A\,}{\rm d}x\,{\rm d}y = 1 $$ I am absolutely clueless on how to proceed with this question. I found a solution to this question on this website but the initial steps where it uses the transformation with the constraints on $\alpha, \beta, \gamma$ and $\delta$ is not clear to me. Like what was the intuition behind this transformation $?$ . I understand that the end result is somewhat similar to the initial assumptions, but how am I supposed to think of this particular transformation in an exam? I would appreciate any alternate answers/techniques to this question. Explanation(s) to the external linked solution are also welcome. Edit #1: As mentioned by fellow users in the comments, the link is hidden behind a paywall. Here's the crux of what was given as the solution: They used the transformation $$s=\alp
|
Integrate in polar coordinates as follows \begin{align} &\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-\left(ax^2+2bxy+cy^2\right)}dx dy\\ =& \int_0^{2\pi}\int_0^\infty e^{-r^2\left(a\cos^2t+2b \cos t\sin t+c\sin^2t\right)}rdrdt \\ =& \ \frac12 \int_0^{2\pi} \frac1{a\cos^2t+2b \cos t\sin t+c\sin^2t}dt\\ =& \ \frac12 \int_0^{2\pi} \frac{\sec^2 t}{a+2b \tan t+c\tan^2t}dt\\ =&\ \frac1{2c} \int_0^{2\pi} \frac{d(\tan t)}{(\tan t+\frac bc)^2 + \frac{ac-b^2}{c^2}} = \frac1{c} \int_{-\pi/2}^{\pi/2} \frac{d(\tan t)}{\tan^2 t+ \frac{ac-b^2}{c^2}}\\ =&\ \frac1{\sqrt{ac-b^2}}\tan^{-1}\frac{c\tan t}{\sqrt{ac-b^2}}\bigg|_{-\pi/2}^{\pi/2}=\frac\pi{\sqrt{ac-b^2}}=1 \end{align} which leads to $ac-b^2=\pi^2$ .
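The closed form is easy to test numerically (for a positive-definite quadratic form, i.e. $a>0$ and $ac>b^2$). Below is a rough midpoint-rule check in pure Python; the truncation window and grid size are arbitrary choices. It confirms that the integral equals $\pi/\sqrt{ac-b^2}$, and hence equals 1 exactly when $ac-b^2=\pi^2$:

```python
from math import exp, pi, isclose, sqrt

def gaussian_integral(a, b, c, half_width=8.0, n=600):
    """Midpoint-rule approximation of the double integral of
    exp(-(a x^2 + 2 b x y + c y^2)) over the plane (truncated)."""
    h = 2 * half_width / n
    total = 0.0
    for i in range(n):
        x = -half_width + (i + 0.5) * h
        for j in range(n):
            y = -half_width + (j + 0.5) * h
            total += exp(-(a * x * x + 2 * b * x * y + c * y * y))
    return total * h * h

# ac - b^2 = pi^2 makes the integral equal to 1, e.g. a = c = pi, b = 0
assert isclose(gaussian_integral(pi, 0.0, pi), 1.0, rel_tol=1e-6)
# In general the integral equals pi / sqrt(ac - b^2)
assert isclose(gaussian_integral(2.0, 1.0, 3.0), pi / sqrt(2 * 3 - 1), rel_tol=1e-6)
```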
|
|calculus|integration|multivariable-calculus|multiple-integral|jacobian|
| 0
|
Explicit proof of the fact that a domain which is not a UFD is not a PID
|
In the same spirit as this question , I would like to prove explicitly that if $R$ is a domain which is not a UFD, then it is not a PID. I am interested in the case where there is an element $a\in R$ which has at least two non-equivalent decompositions. Without any loss of generality, we may assume that $a=u \pi_1\cdots\pi_r=u' \pi'_1\cdots\pi'_s,$ where $r,s\geq 1$ , and for all $i,j$ , $\pi_i$ and $\pi'_j$ are not associate (the case $r=0$ or $s=0$ is not possible since we want two distinct decompositions). What I can prove. I can prove that there exists a $j$ such that $(\pi_1,\pi'_j)$ is not a principal ideal . Sketch. Assume to the contrary that $(\pi_1,\pi_j')$ is generated by some $\alpha_j$ , for all $j$ . Then $\alpha_j$ is a common divisor of $\pi_1$ and $\pi_j'$ , so it is invertible since these irreducible elements are non-associate. Thus $(\pi_1,\pi'_j)=R$ for all $j$ . Hence there is a Bézout relation between $\pi_1$ and $\pi'_j$ , and multiplying all these relations and
|
I finally found an explicit example, proving that the principality of $(\pi_1,\pi'_j)$ depends on $j$ . Let $R=\mathbb{Z}[\sqrt{-21}]$ . Then $2,3, 7,11,1\pm \sqrt{-21}$ are irreducible, pairwise non associate, and $2\cdot 3\cdot 7\cdot 11=-(\sqrt{-21})^2(1-\sqrt{-21})(1+\sqrt{-21})$ . Then, there is no Bézout relation $2z+z'(1-\sqrt{-21})=1$ (otherwise, multiply by $1+\sqrt{-21}$ to get a contradiction) , so $(2,1-\sqrt{-21})$ is not principal. However, we have $2\cdot 11+(\sqrt{-21})^2=1$ , so $(2,\sqrt{-21})$ is a principal ideal.
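The identities used in this example can be checked mechanically. The sketch below models $\mathbb{Z}[\sqrt{-21}]$ as pairs $(x, y)$ meaning $x + y\sqrt{-21}$ and verifies both the factorization identity and the Bézout relation:

```python
# Elements of Z[sqrt(-21)] as pairs (x, y) meaning x + y*sqrt(-21)
def mul(u, v):
    (a, b), (c, d) = u, v
    return (a * c - 21 * b * d, a * d + b * c)

def norm(u):
    x, y = u
    return x * x + 21 * y * y

two, three, seven, eleven = (2, 0), (3, 0), (7, 0), (11, 0)
rt = (0, 1)                      # sqrt(-21)
p, q = (1, -1), (1, 1)           # 1 - sqrt(-21) and 1 + sqrt(-21)

lhs = mul(mul(two, three), mul(seven, eleven))   # 2 * 3 * 7 * 11
rhs = mul(mul(rt, rt), mul(p, q))                # (sqrt(-21))^2 (1-rt)(1+rt)
assert lhs == (462, 0) and rhs == (-462, 0)      # equal up to the unit -1

# The relation behind the principal ideal (2, sqrt(-21)):
# 2*11 + (sqrt(-21))^2 = 22 - 21 = 1
assert mul(two, eleven)[0] + mul(rt, rt)[0] == 1
```

The norm $N(x+y\sqrt{-21})=x^2+21y^2$ is also what shows the listed elements are irreducible and pairwise non-associate, e.g. `norm((1, 1)) == 22`.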
|
|abstract-algebra|principal-ideal-domains|unique-factorization-domains|
| 1
|
Proper Way to Calculate Value of Riemann Zeta function?
|
I understand that an Analytic Continuation of a function will extend its domain into areas that it previously wasn't defined in. I've been looking at one of the Analytic Continuations of the Zeta function, the Riemann Zeta function: $$\zeta(s) = 2^s \pi^{s-1} \sin \left(\frac{\pi s}{2}\right) \Gamma(1-s) \zeta(1-s)$$ I understand that there are several possible definitions for the Riemann Zeta function; however, what I'm confused about is the exact meaning of the $\Gamma(1-s)$ and $\zeta(1-s)$ elements inside the definition above. As far as I understand, $\Gamma$ is another function that has several possible definitions. If I wanted to calculate the value of the $\Gamma$ function in this case, can I pick any definition of the $\Gamma$ function that's using the same definition planes as the Riemann Zeta function I'm using? Or is there a specific variant that I should use? What's also confusing to me is the $\zeta(1-s)$ part. What I initially assumed that would be is an evaluation of the
|
The functional equation is just a property of the Riemann zeta function. It does not provide the definition of the Riemann zeta function for all complex numbers $s\neq 1$ . There are several proofs that the original zeta function, defined by the series $\zeta(s) = \sum_{n=1}^{\infty}n^{-s}$ for $\Re s>1$ , has the analytic continuation to complex numbers $s\neq 1$ . For instance, for $\Re s>1$ one has the formula \begin{align}\zeta(s)\Gamma(s/2)\pi^{-s/2} = -\frac{1}{s(1-s)} + \int_1^\infty (u^{s/2}+u^{(1-s)/2})\psi(u)\frac{du}{u}\end{align} where $\psi(u) = \sum_{n=1}^{\infty}e^{-\pi n^2u} = O(e^{-\pi u})$ as $u\to\infty$ . What is important is that, the last improper integral converges locally uniformly on $\mathbb{C}$ , hence defines an entire function. Therefore, we may give new definition of the Riemann zeta function $\zeta(s)$ for all $s\neq 1$ , by means of the above formula. The Gamma function is treated in the same way. For $\Re s>0$ , one has the formula $\Gamma(s+1) = s\Gamm
|
|complex-analysis|special-functions|analytic-number-theory|riemann-zeta|zeta-functions|
| 0
|
Use of intermediate value theorem to show that $p = (\cos 2\pi x, \sin 2 \pi x)$ is a covering map for $S^1$.
|
I am not completely following the argument given in Munkres' topology to show that $p = (\cos 2\pi x, \sin 2 \pi x)$ is a covering map for $S^1$ is a covering map for $S^1$ . Here is the outline of the proof: Consider the set $U \subseteq S^1$ consisting of points of $S^1$ having positive first coordinate. The set $p^{-1}(U)$ is the union of $$ V_n = (n - \frac{1}{4}, n + \frac{1}{4}) $$ The map $p$ is injective when restricted to $\overline V_n$ , because $\sin 2\pi x$ is strictly monotonic on such an interval. Furthermore, p carries $\overline V_n$ surjectively onto $\overline U$ , and $V_n$ to $U$ , by the intermediate value theorem. I am not sure how the intermediate value theorem (IVT) applies. Do I need to identify each point on the circle with its angle and then use the IVT?
|
As Munkres writes, the fact that $p$ is a covering map comes from elementary properties of the sine and cosine functions. Which properties do we need? $\sin$ is strictly monotonically increasing on each interval $[2 \pi n - \frac \pi 2 , 2 \pi n + \frac \pi 2]$ . This implies that $\sin 2 \pi t$ is strictly monotonically increasing, in particular injective, on each interval $\overline V_n = [n - \frac 1 4, n + \frac 1 4]$ . Hence $p$ is injective on $\overline V_n$ . $\cos$ is nonnegative on each interval $[2 \pi n - \frac \pi 2 , 2 \pi n + \frac \pi 2]$ . This implies that $\cos 2 \pi t$ is nonnegative on each interval $\overline V_n$ . I think it is also well-known that $\sin$ maps each interval $[2 \pi n - \frac \pi 2 , 2 \pi n + \frac \pi 2]$ bijectively onto $[-1,1]$ . Anyway, this can be proved by the IVT. Let $y \in [-1,1]$ . Since $\sin (2 \pi n - \frac \pi 2) = \sin (-\frac \pi 2) = -1$ and $\sin (2 \pi n + \frac \pi 2) = \sin (\frac \pi 2) = 1$ , we find a (unique) number $t
|
|real-analysis|general-topology|algebraic-topology|proof-explanation|covering-spaces|
| 1
|
Proving linear transformation is one to one if vectors in vector space are linearly independent
|
I have a linear transformation $F: U \to V$ . Given that the vectors $\vec{v_1}...\vec{v_n} \in V$ are linearly independent I want to show that $F$ is one-to-one. Also $F(\vec{u_i}) = \vec{v_i}$ and $\vec{u_i} \text{ form the basis of } U$ $\forall i = 1,2,...,n$ . My approach was $$s_1\vec{v_1}+...+s_n\vec{v_n} = \vec{0} \iff s_1 = .... =s_n = 0 $$ $$\implies s_1F(\vec{u_1})+...+s_nF(\vec{u_n}) = \vec{0}$$ $$ \implies F(s_1\vec{u_1}+...+s_n\vec{u_n}) = \vec{0}$$ Thus $s_1\vec{u_1}+...+s_n\vec{u_n} \in \text{Ker } F$ and since $s_1 = .... =s_n =0$ we have $\vec{0} \in \text{Ker} F.$ In my course, I have proved if $\text{Ker} F = \vec{0} $ then F is one-to-one. My question is whether my proof is correct or if I need to show that $F$ cannot contain any other vector other than the zero vector. If yes how shall I do it ?
|
This is not a valid proof. If you look closely, you have only proven that $$ 0 \in \ker F$$ which is always the case for any linear map. What you need to prove is that the kernel of $F$ contains only $0$ . That is, you must prove that $\ker F = 0$ . One way to do this is to pick any vector $ u \in \ker F$ and prove that it must be equal to $0$ . Let $u \in \ker F \subset U$ . Since $\{u_i\}_{i=1}^n$ forms a basis for $U$ , there exist $\lambda_1,...,\lambda_n$ such that $$ u = \sum_i \lambda_i u_i.$$ Since we know that $u \in \ker F$ , we therefore have $$ F(u) = 0 \iff \sum_i \lambda_i F(u_i) = 0 \iff \sum_i \lambda_i v_i = 0.$$ Using the linear independence of the $v_i$ you should be able to conclude that $u = 0$ .
|
|linear-algebra|vector-spaces|linear-transformations|
| 1
|
Hilbert space valued random variables as a Hilbert space tensor product
|
Let $\mathcal{H}$ be a Hilbert space and $(\Omega, \mathcal{F}, P)$ a probability space with $L^2$ -space $L^2(\Omega)$ . I am looking at the space of square integrable $\mathcal{H}$ -valued random variables defined as $$L^2(\Omega,\mathcal{H})=\{X:\Omega \to \mathcal{H} \text{ measurable} : \mathbb{E}[\|X\|^2] < \infty\}$$ which is again a Hilbert space. Now, I was wondering if this space can be identified with the Hilbert space tensor product $L^2(\Omega)\otimes \mathcal{H}$ in some way? I know Hilbert space tensor products are related to the space of Hilbert-Schmidt operators from $L^2(\Omega)\to \mathcal{H}$ , which seems useful here, but I'm not familiar enough with these operators to figure it out myself.
|
Assuming $H$ is separable, the two spaces are naturally isomorphic. The isomorphism $L^2(\Omega) \otimes H \to L^2(\Omega, H)$ sends $f \otimes h$ to the function $\Omega \ni \omega \mapsto f(\omega)h \in H$ .
|
|probability-theory|measure-theory|hilbert-spaces|tensor-products|
| 1
|
How to put this in graphing or grid paper and solve
|
On a clean sheet of grid paper, paste some small pictures of objects so that they are positioned at different coordinates. Then, draw circles that contain the pictures. Using the pictures and the circles drawn on the grid, formulate and solve problems involving the equation of the circle.
|
The equation of a circle centered at $(a, b)$ is $$(x-a)^2+(y-b)^2=r^2$$ where $r$ denotes the radius of the circle. I recommend using DESMOS because you can add images and graphs very easily. You can add images by going to the + button and clicking "Image."
|
|circles|
| 0
|
SVD and least square solution
|
Let $K \in \mathbb{R}^{m,n}$ , $u \in \mathbb{R}^n$ , and $f \in \mathbb{R}^m$ . Assume that $m < n$ and that $K$ has full rank, so a solution exists but is not unique. I want to understand why the smallest (minimal-norm) solution is spanned by the first $m$ right singular vectors, denoted by $V_m$ = $(v_1, v_2, \dots, v_m)$ : $$ K V_m z = f $$ .
|
If $K=U\Sigma V^T$ , then the minimization task $$ \text{minimize}~~\|Kz-f\|_2 $$ is equivalent to $$ \text{minimize}~~\|\Sigma y-b\|_2,~~y=(V^Tz),~~b=(U^Tf) $$ This last system is completely decoupled, the influence of each row on the residual is independent of all the other rows and can be considered in isolation. One does not need the full SVD for such a solution, a QR decomposition allows a similar simplification.
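To connect this to the "smallest solution" in the question, here is a short derivation in the same notation (with $\Sigma_m$ the invertible $m\times m$ block of singular values, and $V$ split as $V=(V_m, V_\perp)$ into the first $m$ and last $n-m$ right singular vectors):

```latex
% General solution of Kz = f when K = U \Sigma V^T has full row rank m < n:
\begin{aligned}
Kz = f
&\iff \Sigma V^T z = U^T f \\
&\iff z = V_m \Sigma_m^{-1} U^T f + V_\perp w, \qquad w \in \mathbb{R}^{n-m}.
\end{aligned}
% Since the columns of V are orthonormal,
%   \|z\|^2 = \|\Sigma_m^{-1} U^T f\|^2 + \|w\|^2,
% which is minimized by w = 0; the minimal-norm solution therefore lies in
% the span of the first m right singular vectors v_1, ..., v_m.
```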
|
|linear-algebra|numerical-methods|least-squares|svd|
| 0
|
Let $a_{n}$ be the sequence defined inductively by $a_{1} = 2$ and $a_{n+1} = 1/2 ( a_n + 2/a_n)$. Prove this sequence is decreasing
|
I have already inductively proven that $a_n\in[1,2]$ and found that $(a_{n})^2\geqslant2$ for all $n \in \mathbb{N}$ . I think I need to prove that the sequence converges to the greatest lower bound $\sqrt2$ , but I haven't a clue how to do this.
|
Since you already have $$(a_n)^2\ge2,$$ dividing by $2a_n$ (which we know is positive) gives $${a_n\over 2}\ge{1\over a_n}.$$ Adding $a_n/2$ to both sides, $$a_n={a_n\over 2}+{a_n\over 2}\ge{a_n\over 2}+{1\over a_n}=a_{n+1},$$ which shows the sequence is decreasing.
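A minimal numerical sketch confirms the monotonicity claim (up to floating-point rounding) and the convergence to $\sqrt 2$:

```python
a = 2.0
prev = a
for _ in range(30):
    a = (a + 2 / a) / 2        # a_{n+1} = (a_n + 2/a_n) / 2
    assert a <= prev + 1e-15   # each term is no larger than the one before
    prev = a
assert abs(a * a - 2) < 1e-12  # the terms settle at sqrt(2)
```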
|
|real-analysis|sequences-and-series|discrete-mathematics|convergence-divergence|
| 0
|
How to find the distribution of $X+Y$ from joint pgf
|
Given that $X$ and $Y$ have joint probability generating function (pgf) $$G_{X,Y}(s,t)=e^{\alpha(s-1)+\beta(t-1)+\gamma(st-1)}$$ the question asks me to find the marginal distributions of $X$ and $Y$ and the distribution of $X+Y$ . For the marginal distribution of $X$ , setting $t=1$ gives $$G_{X,Y}(s,1)=G_X(s)=e^{(\alpha+\gamma)(s-1)}$$ which is the pgf of a Poisson distribution, and similarly for $s=1$ . So $X$ and $Y$ follow Poisson distributions with parameters $\alpha+\gamma$ and $\beta+\gamma$ respectively. My question is: how can I find the distribution of $X+Y$ ?
|
Here is the trick $$G_{X+Y}(s) = \mathbb{E}\bigl(s^{X+Y}\bigr) = \mathbb{E}(s^Xs^Y) = G_{X,Y}(s,s) = e^{\alpha(s-1)+\beta(s-1)+\gamma(s^2-1)}$$ $$\mathbb{P}(X+Y=k) = G^{(k)}_{X+Y}(0) = \left.\frac{d^k}{ds^k}\,e^{\alpha(s-1)+\beta(s-1)+\gamma(s^2-1)}\right|_{s=0}$$ All you need to do is to find a closed form for the k-th derivative evaluated in zero, which may be difficult though.
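One way to sanity-check the identity $G_{X+Y}(s)=G_{X,Y}(s,s)$ is with a concrete model that has this joint pgf: take $X=U+W$, $Y=V+W$ with $U,V,W$ independent Poisson with means $\alpha,\beta,\gamma$ (this bivariate-Poisson representation is used here only for the check; it reproduces the given pgf since $E[s^{U+W}t^{V+W}]=G_U(s)G_V(t)G_W(st)$). Truncated sums then approximate the expectation:

```python
from math import exp, factorial, isclose

def pois_pmf(lam, k):
    return exp(-lam) * lam ** k / factorial(k)

alpha, beta, gamma = 1.2, 0.8, 0.5
s = 0.6

# Closed form from the answer: G_{X+Y}(s) = G_{X,Y}(s, s)
closed = exp(alpha * (s - 1) + beta * (s - 1) + gamma * (s * s - 1))

# Direct (truncated) expectation E[s^(X+Y)] with X = U + W, Y = V + W
direct = sum(
    pois_pmf(alpha, u) * pois_pmf(beta, v) * pois_pmf(gamma, w) * s ** (u + v + 2 * w)
    for u in range(30) for v in range(30) for w in range(30)
)
assert isclose(closed, direct, rel_tol=1e-9)
```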
|
|probability|probability-distributions|
| 1
|
Show that for every integer $n$, if $n^3 + n$ is divisible by $3$, then $2n^3 + 1$ is not divisible by $3$
|
I've been trying to prove the following statement but have not succeeded with any proof method: "Show that for every integer $n$ , if $n^3 + n$ is divisible by $3$ , then $2n^3 + 1$ is not divisible by $3$ ." I've attempted various approaches, including direct proof and contraposition, but to no avail. I also tried proof by cases, trying to show that $3$ has to divide $n$ if $n^3+n$ is divisible by $3$ , but I wasn't able to. Could anyone provide a solution?
|
Consider $$n^3+n=n\cdot (n^2+1)$$ Squares are either $0$ or $1$ modulo $3$ . So $n^2+1$ is either $1$ or $2$ . It means that $n^2+1$ is never divisible by $3$ . So $n$ must be. Then $2n^3$ is also divisible by $3$ . If we add $1$ , it is no longer so. ———— Another nice reasoning came to my mind. Consider the difference of the two numbers in question: $$(2n^3+1)-(n^3+n)=n^3-n+1=(n-1)\cdot n\cdot (n+1)+1.$$ One of the three consecutive numbers is always divisible by $3$ , so $(n-1)\cdot n\cdot (n+1)+1$ is not. Now $2n^3+1$ is the sum of $n^3+n$ , divisible by 3, and $(n-1)\cdot n\cdot (n+1)+1$ , not divisible by $3$ . Hence it is not divisible by $3$ .
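Both arguments are easy to confirm by brute force; a minimal Python check (residues mod 3 repeat, so a small window of integers is more than enough):

```python
for n in range(-300, 301):
    # The key observation: n^2 + 1 is never divisible by 3
    assert (n * n + 1) % 3 != 0
    # The claim itself: whenever 3 | n^3 + n, 3 does not divide 2n^3 + 1
    if (n ** 3 + n) % 3 == 0:
        assert (2 * n ** 3 + 1) % 3 != 0
```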
|
|elementary-number-theory|modular-arithmetic|divisibility|
| 1
|
Can we choose a set to make sure the action of a permutation group transitive?
|
Let a finite group $G$ of order $n$ be given, so $G$ is isomorphic to a permutation group embedded in $S_n$ . Can we always find a set $\Omega$ such that $G$ acts transitively on $\Omega$ ? (For example, does $G$ always act transitively on a set $\Omega$ of size $n$ )? If so, is it a difficult problem to find the smallest such set? I am learning about Cayley's theorem and wondering about this question. I know that in general, if we're given a finite group $G$ it's hard to find the symmetric group of minimal order into which $G$ embeds. This question seems analogous in some ways, and I'm curious if it is easier.
|
I like to add that it is a much more interesting question to ask for a minimal set $\Omega$ such that $G$ acts faithfully on $\Omega$ . Answers: Which finite groups have their minimal permutation degree equal to their order? Minimal Permutation Representation Degree of a group: GAP implementation Finding the minimal $n$ such that a given finite group $G$ is a subgroup of $S_n$ Minimal degree faithful permutation representations of some finite simple groups
|
|abstract-algebra|permutations|finite-groups|group-actions|symmetric-groups|
| 0
|
Find the maximum value of $a$ under the condition $2e\ln x\leq ax+b\leq \frac{1}{2}x^2+e$,where $a,b\in\mathbb{R}$,$x>0$
|
Assume there exists $a,b\in \mathbb{R}$ such that $$2e\ln x\leq ax+b\leq \frac{1}{2}x^2+e$$ holds for all $x>0$ . Find the maximum value of $a$ . I guess the critical situation is that $y=ax+b$ is the common tangent line of $f(x)=2e\ln x$ and $g(x)=\frac{1}{2}x^2+e$ , where $e$ is Euler's number. Assume the tangency point of $y$ and $f(x)$ is $(x_1,2e\ln x_1)$ , and the tangency point of $y$ and $g(x)$ is $(x_2,\frac{1}{2}x_2^2+e)$ ; then $$\begin{cases}x_2=a \\ \frac{2e}{x_1}=a \\ \frac{\frac{1}{2}x_2^2+e-2e\ln x_1}{x_2-x_1}=a\end{cases}$$ In order to find the maximum value of $a$ , I want to get a function or equation of $a$ , so I eliminate $x_1,x_2$ in the above equations; then I get $$\ln\frac{2e}{a}=\frac{3}{2}-\frac{a^2}{4e}>\frac{3}{2}$$ then $$\frac{2e}{a}>e^{\frac{3}{2}}\Rightarrow a<2e^{-\frac{1}{2}}$$ I can't go any further. I think the reason is I can't separate $a$ in $\ln\frac{2e}{a}=\frac{3}{2}-\frac{a^2}{4e}$ . I only get an upper bound of $a$ , not the maximum of $a$ . and the critical situation
|
When $ax+b$ is tangent to both curves and $b$ is as small as possible, the value of $a$ is the largest. Let the tangent point on $g(x)=\frac{1}{2}x^2+e$ be $(x_1,y_1)$ and the tangent point on $f(x)=2e\ln x$ be $(x_2,y_2)$ . The derivatives at the tangent points are both equal to $a$ : $$x_1=a=\frac{2e}{x_2},$$ which means $x_1x_2=2e$ . The line through the two tangent points also has slope $a$ : $$a=\frac{\frac{1}{2}x_1^2+e-2e\ln x_2}{x_1-x_2}.$$ You can solve this equation (using $a=x_1$ and $x_1x_2=2e$ ) and get $$x_2=e^{\frac{3}{2}-\frac{x_1^2}{4e}}.$$ Recall that $x_1x_2=2e$ ; now we have $$x_1\,e^{\frac{3}{2}-\frac{x_1^2}{4e}}=2e.$$ This is a transcendental equation, and we can guess a solution by making $\frac{x_1^2}{4e}=1$ , which gives $x_1=2\sqrt{e}$ (indeed $2\sqrt e\cdot e^{1/2}=2e$ ). So the maximum value of $a$ is $a=2\sqrt{e}$ .
|
|calculus|analysis|inequality|analytic-geometry|
| 0
|
Show that for every integer $n$, if $n^3 + n$ is divisible by $3$, then $2n^3 + 1$ is not divisible by $3$
|
I've been trying to prove the following statement but have not succeeded with any proof method: "Show that for every integer $n$ , if $n^3 + n$ is divisible by $3$ , then $2n^3 + 1$ is not divisible by $3$ ." I've attempted various approaches, including direct proof and contraposition, but to no avail. I also tried proof by cases, trying to show that $3$ has to divide $n$ if $n^3+n$ is divisible by $3$ , but I wasn't able to. Could anyone provide a solution?
|
By Fermat's little theorem , it follows that $n^3 \equiv n \pmod{3}$ . Hence, $$2n^3 + 1 = n^3+n^3 + 1 \equiv n^3 +n + 1 \equiv 1 \pmod{3}.$$
|
|elementary-number-theory|modular-arithmetic|divisibility|
| 0
|
Zero operator on a dense domain
|
Let $T$ be a, possibly unbounded, self-adjoint operator with a dense domain $D(T)$ in a Hilbert space $H$ . If $T=0$ on $D(T)$ , can $T$ be different from the $0$ operator, i.e. $Tv=0$ for all $v\in H$ ?
|
Any self-adjoint operator is closed. If $A$ is bounded and closed, then $D(A)$ is closed, and since $D(A)$ is dense this forces $D(A)=H$ . In your situation $T$ vanishes on $D(T)$ , so it is bounded, and being self-adjoint it is closed; hence $D(T)=H$ and $T$ is the zero operator on all of $H$ .
|
|functional-analysis|operator-theory|
| 0
|
How to frame the dual statement in a lattice ordered set or an algebraic lattice in general
|
I am learning the theory of posets and lattices which will eventually lead to Boolean Algebra. I am stuck with the proper understanding of the concept of duality. Followings are what I have gathered so far: If $ (P,\preccurlyeq ) $ is a poset, then $ (P,\succcurlyeq) $ is also a poset where $ \succcurlyeq $ is defined in terms of $ \preccurlyeq $ as $$ y\succcurlyeq x\iff x\preccurlyeq y. $$ Then $ (P,\succcurlyeq) $ is called the dual of the poset $ (P,\preccurlyeq ) $ . Every statement in $ P $ involving the relation $ \preccurlyeq $ has a corresponding dual statement in $ P^{\partial} $ obtained by replacing the relation $ \preccurlyeq $ with $ \succcurlyeq $ . A statement in $ (P,\preccurlyeq ) $ is true if and only if the dual statement in $ (P,\succcurlyeq ) $ is true. A poset $ (L,\preccurlyeq ) $ becomes a lattice ordered set (LOS) if for any two elements $ x,y\in L $ , both $ \sup\{x,y\} $ and $ \inf\{x,y\} $ exist in $ L $ . If $ L $ is a LOS and $ L^{\partial} $ is the dual
|
Your first guess (in the paragraph just after the seventh enumeration point) is correct, and what you present as a counter-example to it is not one. The dual statement of $$y \leq z \implies x \vee y \leq x \vee z$$ is $$y \geq z \implies x \wedge y \geq x \wedge z,$$ as your guess would dictate. However, we can re-write this statement as $$z\leq y \implies x \wedge z \leq x \wedge y,$$ which, after swapping the roles of $y$ and $z$ , becomes $$y \leq z \implies x \wedge y \leq x \wedge z.$$ It is perhaps not immediate that this last statement is the dual of the first, but it is equivalent to the second displayed one, which has exactly the right shape, and so it is indeed the dual of the first.
|
|order-theory|lattice-orders|
| 1
|
Computing partial trace of a given kronecker product matrix with respect to the first component
|
Suppose I have two matrices, given as: $A= \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ , $B= \begin{pmatrix} e & f \\ g & h \end{pmatrix}$ Then,the Kronecker product of $A$ and $B$ is $A \otimes B=\begin{pmatrix} ae & af & be & bf\\ ag & ah & bg & bh\\ ce & cf & de & df\\ cg & ch & dg & dh \end{pmatrix} $ So that the partial trace w.r.t $B$ is given as $Tr_B\left(A \otimes B\right)= \begin{pmatrix} ae+ah & be+bh \\ ce+ch & de+dh \end{pmatrix}$ Which can be obtained from $A \otimes B$ by just taking trace of each $2 \times 2$ block. However,the partial trace w.r.t $A$ is given as $Tr_A\left(A \otimes B\right)= \begin{pmatrix} ae+de & af+df\\ ag+dg & ah+dh \end{pmatrix}$ I did not understand how to obtain this expression from that of $A \otimes B$ . Can anyone please help?
|
Let $M$ denote the block matrix $$ M = \pmatrix{M_{11} & M_{12}\\ M_{21} & M_{22}}, $$ with each $M_{ij}$ of size $n \times n$ . Then the partial trace with respect to the second component is given by $$ \operatorname{tr}_2(M) = \pmatrix{\operatorname{tr}(M_{11}) & \operatorname{tr}(M_{12})\\ \operatorname{tr}(M_{21}) & \operatorname{tr}(M_{22})}, $$ as you have said. The partial trace with respect to first component is given by $$ \operatorname{tr}_1(M) = M_{11} + M_{22}. $$ In a sense, you are taking the "trace" of the matrix by adding up the blocks that appear on the diagonal.
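Both partial traces are easy to check on the $2\times2$ example in pure Python (matrices as nested lists), using the identities $\operatorname{tr}_2(A\otimes B)=\operatorname{tr}(B)\,A$ and $\operatorname{tr}_1(A\otimes B)=\operatorname{tr}(A)\,B$:

```python
def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def tr2(M, m):
    """Partial trace over the second factor: trace of each m x m block."""
    n = len(M) // m
    return [[sum(M[i * m + k][j * m + k] for k in range(m)) for j in range(n)]
            for i in range(n)]

def tr1(M, m):
    """Partial trace over the first factor: sum of the diagonal m x m blocks."""
    n = len(M) // m
    return [[sum(M[k * m + i][k * m + j] for k in range(n)) for j in range(m)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
M = kron(A, B)
assert tr2(M, 2) == [[(5 + 8) * a for a in row] for row in A]  # tr(B) * A
assert tr1(M, 2) == [[(1 + 4) * b for b in row] for row in B]  # tr(A) * B
```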
|
|linear-algebra|matrices|trace|kronecker-product|
| 1
|
Regarding solving for the coordinates of reflection points in 3D space
|
In space, there exists a smooth plane, and additionally, there are two points, denoted as $A (x_a, y_a, z_a)$ and $B (x_b, y_b, z_b)$ . A beam of light is emitted from point A, reflects off the mirror surface of the smooth plane at a reflection point P, and finally arrives at point B. Now, we can obtain the mirror image point of B with respect to the smooth plane, denoted as $B' = (x_b', y_b', z_b')$ . How can I calculate the expression for point P in the most elegant and simple way?
|
Consider this figure: $DP$ is perpendicular to plane $FG$ , and $B'$ is the mirror image of $B$ about the plane. The law of reflection says that for reflection we must have $\angle APD=\angle BPD$ . To find the coordinates of $P$ we connect $B'$ to $A$ ; $P$ is the intersection of line $AB'$ and plane $FG$ . I show the calculations in this example. Let the equation of the plane be $Ax+By+Cz+D=0$ . The mirror image of $B=(x_0, y_0, z_0)$ is $$B'=(x_0-2A\lambda,\ y_0-2B\lambda,\ z_0-2C\lambda),\qquad \lambda=\frac{Ax_0+By_0+Cz_0+D}{A^2+B^2+C^2}.$$ Example: Plane: $2x+y+z+4=0$ , $A:(1, -1, 2)$ , $B: (0, 1, 1)$ . Using the formula you get $$\lambda=\frac{2\times 0+1\times 1+1\times 1 +4}{2^2+1^2+1^2}=1,\qquad B': (-4, -1, -1).$$ The line $AB'$ has direction $A-B'=(5,0,3)$ , so its equation is $$\frac{x+4}{5}=\frac{z+1}{3}=t,\qquad y=-1.\space\space\space\space\space\space\space\space(1)$$ Finding $x, y, z$ in terms of $t$ and putting them in the equation of the plane we get $13t-6=0$ , i.e. $t=\frac 6{13}$ . Putting $t$ in (1) gives: $$P: \left( -\frac {22}{13},\ -1,\ \frac 5{13}\right).$$ Update: If you can obtain t
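A small script (a sketch; the helper names are mine) implements the mirror-and-intersect construction for the example plane $2x+y+z+4=0$ with $A=(1,-1,2)$, $B=(0,1,1)$, and verifies that the resulting point lies on the plane:

```python
from math import isclose

def reflect_point(plane, p):
    """Mirror image of point p across the plane (A, B, C, D): Ax+By+Cz+D=0."""
    A, B, C, D = plane
    lam = (A * p[0] + B * p[1] + C * p[2] + D) / (A * A + B * B + C * C)
    return (p[0] - 2 * A * lam, p[1] - 2 * B * lam, p[2] - 2 * C * lam)

def reflection_point(plane, a, b):
    """Point P on the plane where a ray from a reflects toward b."""
    A, B, C, D = plane
    bp = reflect_point(plane, b)                       # mirror image B'
    d = tuple(bp[i] - a[i] for i in range(3))          # direction of line AB'
    t = -(A * a[0] + B * a[1] + C * a[2] + D) / (A * d[0] + B * d[1] + C * d[2])
    return tuple(a[i] + t * d[i] for i in range(3))

plane = (2, 1, 1, 4)
P = reflection_point(plane, (1, -1, 2), (0, 1, 1))
assert all(isclose(P[i], v) for i, v in enumerate((-22 / 13, -1, 5 / 13)))
# P lies on the plane
assert isclose(2 * P[0] + P[1] + P[2] + 4, 0, abs_tol=1e-12)
```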
|
|linear-algebra|geometry|3d|
| 0
|
Order of conjugacy class divides order of group
|
I was reading this proof: https://proofwiki.org/wiki/Number_of_Conjugates_is_Number_of_Cosets_of_Centralizer and I don't understand the very last part, why the order of the conjugacy class divides the order of the group. It says that it follows by Lagrange's but the conjugacy class is not necessarily a subgroup? I'm thinking that perhaps the index of the centralizer is exactly equal to the order of some subgroup of $G$, but I'm not sure. Can someone explain?
|
For those who are lost among words (like I was), there is a very natural one-to-one correspondence between the cosets of the centralizer and the conjugacy class. $\DeclareMathOperator{\cl}{cl}$ Let $G$ be a finite group, $g\in G$ , and $H$ the centralizer of $g$ in $G$ , i.e., $H=C_G(g)$ . We want to show that the size of the conjugacy class of $g$ , i.e., $|\cl(g)|$ , is given by the number of left cosets in the partition $G/H$ . (Mind that I have not called it a quotient group.) We can exhibit a mapping $f:G/H\to \cl(g)$ defined by $f(kH)=kgk^{-1}$ , i.e., a left coset $kH$ maps to the conjugate $kgk^{-1}$ of $g$ . This mapping is obviously onto because every conjugate of $g$ is of the form $kgk^{-1}$ , whence $kH$ is a pre-image. We show that $f$ is well-defined and one-to-one, hence bijective: $\begin{align}&f(aH)=f(bH)\\ \iff & aga^{-1}=bgb^{-1}\\ \iff & (b^{-1}a)g=g(b^{-1}a)\\ \iff &b^{-1}a\in H \\ \iff & b^{-1}aH=H\\ \iff & aH=bH \end{align}\tag*{}$ Check the steps: top to bottom (conclude
|
|group-theory|
| 0
|
Is $f'(c)=0$ necessary condition for an inflection point at $c$?
|
I know that an inflection point occurs when there is a change of sign of the second derivative. I have tried to prove that if there is a change of sign in $f''$ at $c$ then $f'(c) = 0$ but I have not succeeded. I'm also aware that there can be an inflection point when the first derivative is undefined. Like: $$ f(x)= \begin{cases} \sqrt{x},& x\geq 0\\ -\sqrt{-x}, & x \lt 0 \end{cases} $$ So in principle I would say it's not a necessary condition. But could we say that $f'(c) = 0$ or either $f'$ is undefined is a necessary condition? How can I prove that I cannot have $f'(c) = 5$ and an inflection point at $c$ .
|
The statement is wrong, you don't necessarily need $f'(c)=0$ at an inflection point, we can see this by looking at $f(x)=x^3+x$ which has positive derivative everywhere but an inflection point at $x=0$ . However, by the intermediate value theorem, if we assume $f''$ exists, at least around the inflection point, it is true that an inflection point must have $f''(c)=0$ .
|
|real-analysis|
| 1
|
A weird probability question
|
This is the problem in question: You have two identical bowls: the first one contains 3 white balls and 4 black balls, and the second one contains 4 white balls and 5 black balls. If you choose randomly a ball from the two bowls, what is the probability it is white? Let's define our events as such: A1 = choosing a ball from the first bowl A2 = choosing a ball from the second bowl B = choosing a white ball One approach would be using the theorem of total probability: $$\text{We know that }P(B|A_1) = \frac37\text{ and }P(B|A_2) = \frac49\text{, and that:}$$ $$P(A_1) = P(A_2) = \frac12\text{, because the bowls are identical}$$ $$P(B) = P(B |A_1)\times P(A_1) + P(B|A_2)\times P(A_2) = \frac37 \times \frac12 + \frac49 \times \frac12 = 55/126$$ The second approach would be simplifying the problem: Because the two bowls are identical, we could just say we don't even choose between two bowls, but just between the set of all balls. Then we could calculate the probability directly: $$P(B) = \frac{3+4}{7+9}=\frac{7}{16}$$
|
The main problem is just that, if we restrict to reality, and thus perhaps intuition, you cannot truly pick balls at random when they are in two bowls. You have to pick the bowls first, which is the first method. The second "method" is equivalent to assuming the probability of a bowl being picked is proportional to the number of balls in them. This additional (equivalent) assumption is what makes the result different. And that cannot make sense in reality unless you destroy the bowls first.
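The two models give genuinely different numbers, which is easy to confirm exactly with the standard-library `fractions` module:

```python
from fractions import Fraction as F

# Model 1: pick a bowl uniformly, then a ball from that bowl
p_bowl_first = F(1, 2) * F(3, 7) + F(1, 2) * F(4, 9)
assert p_bowl_first == F(55, 126)

# Model 2: pool all balls and pick one uniformly
p_pooled = F(3 + 4, 7 + 9)
assert p_pooled == F(7, 16)

# The pooled model is equivalent to choosing each bowl with probability
# proportional to its number of balls -- a different assumption
p_weighted = F(7, 16) * F(3, 7) + F(9, 16) * F(4, 9)
assert p_weighted == p_pooled
```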
|
|probability|combinatorics|solution-verification|
| 0
|
Query regarding approach to solve a given differential equation.
|
There is an equation $$N(t) = N(t)\frac{P(t,z)}{B}-C\frac{d(P(t,z))}{dz}$$ $$N(t) = A\frac{dP(t,z)}{dt}$$ Constants: $B$ , $C=3.9878\times10^{-7}$ , $A=0.11941$ . Variables: $N(t)$ is a function of $t$ and is defined at a point along $z$ ; it's a vector quantity. $P(t,z)$ is a function of $t$ as well as $z$ . Background: I encountered this equation when I was analysing a mass transfer problem. $N$ is the flux of molecules across the surface of a liquid in contact with an air film of a certain thickness. $P$ is the partial pressure at point $z$ at time $t$ . The situation is as follows: a volatile chemical spills on the floor of a poorly ventilated laboratory. We have to find out the time required for the complete evaporation of the chemical. Since the space is poorly ventilated, the vapour will accumulate in the air, reducing the ease with which the liquid can evaporate. This will also reduce $N$ over time. My Question: How should I solve the differential equation mentioned at the start of the question? I am looking for a method
|
$$A\frac{\partial P}{\partial t} = A\frac{\partial P}{\partial t}\frac{P}{B}-C\frac{\partial (P)}{\partial z}$$ $$A\left(\frac{P}{B}-1\right)\frac{\partial P}{\partial t}-C\frac{\partial P}{\partial z}=0 $$ This is a first order PDE. The Charpit-Lagrange characteristic ODEs are : $$\frac{dt}{A\left(\frac{P}{B}-1\right)}=\frac{dz}{-C}=\frac{dP}{0}$$ An obvious first characteristic equation comes from $dP=0$ : $$P=c_1$$ A second characteristic equation comes from solving $\frac{dt}{A\left(\frac{c_1}{B}-1\right)}=\frac{dz}{-C}$ : $$A\left(\frac{c_1}{B}-1\right)z+Ct=c_2$$ The general solution of the PDE on implicit form $c_2=F(c_1)$ is : $$\boxed{A\left(\frac{P}{B}-1\right)z+Ct=F(P)}$$ $F$ is an arbitrary function. They are infinity many solutions of the PDE. If some valid initial condition is specified one can determine the function $F$ . In order to go further in the solving please add to the question the initial condition clearly expressed on mathematical form.
|
|ordinary-differential-equations|derivatives|partial-differential-equations|education|applications|
| 0
|
polynomial solution of $a_n + ca_{n-1} + a_{n-2} = 0$
|
Let $c$ be a constant, and consider the following recurrence relation: $a_n + ca_{n-1} + a_{n-2} = 0$ . I am asked to find the value of $c$ for which the recurrence has a non-constant polynomial solution. I declared $f(x) = \sum_{n=0}^\infty a_n \cdot x^n$ , multiplied the recurrence by $x^n$ and took the sum from $n=2$ , and got that $$\sum_{n=2}^\infty a_n \cdot x^n = -c \cdot \sum_{n=2}^\infty a_{n-1} \cdot x^n - \sum_{n=2}^\infty a_{n-2}x^n$$ $$\sum_{n=2}^\infty a_n \cdot x^n = -cx \cdot \sum_{n=2}^\infty a_{n-1} \cdot x^{n-1} - x^2 \cdot \sum_{n=2}^\infty a_{n-2}x^{n-2}$$ $$ f(x) - a_0 - a_1 x = -cx(f(x)-a_0) - x^2f(x)$$ $$ (1+cx+x^2)f(x) = a_0 cx + a_1 x + a_0 $$ $$f(x) = \frac {a_0 cx + a_1 x + a_0}{x^2 +cx + 1} $$ and here I can't see any restriction on $c$ : any $c$ looks fine to me, and no specific $c$ would cause problems. I would love to get some help moving forward, or corrections if my way of solving this question is wrong. Thank you!
|
Hypothesis: $a_n=\sum_{i=0}^M b_i n^i$ , $b_M \ne 0$ . It's simpler to write the recurrence as $a_{n+1} + ca_{n} + a_{n-1} = 0$ which leads to $\sum_{i=0}^M b_i (n+1)^i + c\sum_{i=0}^M b_i n^i + \sum_{i=0}^M b_i (n-1)^i=0$ $\sum_{i=0}^M b_i ((n+1)^i + (n-1)^i + c n^i) = 0$ The left side is an $M$ 'th degree polynomial. We want $b_i$ and $c$ such that it is zero for all $n$ . Just focusing on the $n^M$ term of this expression: $b_M (2n^M + c n^M) = 0$ which means $c = -2$ . At this point, $c=-2$ is shown to be a necessary condition. For this specific problem, it is easiest to come up with a working solution -- like $a_n=n$ -- to show that it is sufficient.
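The necessary condition and the sufficiency witness $a_n = n$ can be spot-checked directly; a minimal sketch:

```python
# Residual of the recurrence a_{n+1} + c*a_n + a_{n-1} for a candidate
# solution a (as a function of n).
def residual(c, a, n):
    return a(n + 1) + c * a(n) + a(n - 1)

a = lambda n: n  # the non-constant polynomial witness
ok = all(residual(-2, a, n) == 0 for n in range(1, 200))     # c = -2 works
bad = any(residual(-1.5, a, n) != 0 for n in range(1, 200))  # other c fail
print(ok, bad)  # True True
```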
|
|combinatorics|recurrence-relations|
| 1
|
Probability question about the sum of two uniform random variables modulo N
|
Assume two independent uniform random variables $X, Y \sim \mathrm{Unif}([N])$ . Prove that the modulo of the sum, $(X + Y) \bmod N$ , follows a uniform distribution on the set $\{0,1,\dots,N-1\}$ . I need a general proof for any $N$ that belongs to $\mathbb{Z}$ . I tried: defining another variable $Z = (X+Y) \bmod N$ and then saying $P(Z=z) = P((X+Y) \bmod N = z)$ , and then I thought of using the fact that they are independent, maybe something like: for every $z$ , $P(Z=z) = \sum_i P(X=i) \cdot P(Y = (z-i) \bmod N)$ .
|
Consider an arbitrary $k \in \left\{0, 1, ..., N-1\right\}$ . For any $X$ value in $\left\{0, 1, ..., N-1\right\}$ , there is exactly one $Y$ value in $\left\{0, 1, ..., N-1\right\}$ such that $X+Y \equiv k \pmod{N}$ . Specifically, this value is : $\,\,k - X \pmod{N}$ . Therefore the following cardinality is constant and is equal to $N$ : $$\left|\left\{\left(X,Y\right) \in \left\{0, 1, ..., N-1\right\}^2 : X+Y \equiv k \pmod{N}\right\}\right| \, = \, N$$ Because our choice of $k$ was arbitrary, this implies that the distribution you describe is uniform.
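The counting argument can be verified exhaustively for small $N$:

```python
from collections import Counter

# For each N, each residue k is hit by exactly N of the N^2 equally
# likely pairs (X, Y), so (X + Y) mod N is uniform.
for N in range(1, 13):
    counts = Counter((x + y) % N for x in range(N) for y in range(N))
    assert all(counts[k] == N for k in range(N))
print("Z = (X + Y) mod N is uniform for N = 1..12")
```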
|
|probability|group-theory|modular-arithmetic|modules|
| 1
|
Name of integration techniques
|
Consider the following integrals: $$\int \sec^3 x\ dx,\ \int \cos\ x\ e^x\ dx $$ These can be calculated by integration by parts. But here, for instance to calculate the latter example, we meet $$\ast\ \int e^x \cos \ x\ dx = e^x \sin\ x + e^x \cos\ x - \int e^x \cos \ x dx $$ Note that the integral we want to calculate appears again. It is like the reduction formula for $\int \cos^n x\ dx$. Is there a name for the integration technique in $\ast$ ? Thank you in advance.
|
No, this is still integration by parts. This is actually one of its other forms, using this to solve an equation for your integral above. It's just strange since many times integration by parts is not like that, but I use it sometimes too and it's rather effective...
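Solving the equation $\ast$ for the unknown integral gives $\int e^x \cos x\, dx = \frac{1}{2}e^x(\sin x + \cos x) + C$; a quick numeric differentiation check of that antiderivative:

```python
import math

# F is the antiderivative obtained by solving * for the integral.
def F(x):
    return math.exp(x) * (math.sin(x) + math.cos(x)) / 2.0

def f(x):
    return math.exp(x) * math.cos(x)

h = 1e-6
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in [-2.0, -0.5, 0.0, 1.0, 2.5])
print(max_err)  # numerical-differentiation noise only
```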
|
|calculus|
| 0
|
The method of moments estimator of median
|
In some textbook, it is written that the method of moments estimator can be applied even to estimating the median. However, I can't come up with the idea. In many cases the median is estimated by $x_{((n+1)/2)}$ or $\frac{1}{2}\left(x_{(n/2)} + x_{(n/2+1)}\right)$ depending on whether $n$ is odd or even. Unfortunately, I think this is not a method of moments estimator. Does someone know the way to get the estimator? To simplify the discussion, it is OK to assume the cumulative distribution function is continuous and strictly increasing.
|
You can use the median $\tilde{x}$ in place of the mean $\bar{x}$ to estimate any moment of order $k>1$: $$ m_{(k)} = \frac 1 n \sum_{i=1}^n (x_{(i)} - \tilde{x})^k$$ The advantage is the increased robustness when outliers are present. As with any estimation method, use common sense and visualizations to confirm the median is the better centroid value than the mean. Any value between the median and mean can be used as $\tilde{x}$ with bounded error. Approximations of the non-linear value of $\tilde{x}$ can be refined via iterations or with differential equations, if the distribution is known. The following paper describes the error and asymptotically bounded correctness of the replacement of the mean with the median: John T. Chu and Harold Hotelling, "The Moments of the Sample Median." Ann. Math. Statist. 26 (4) 593-606, December 1955. https://doi.org/10.1214/aoms/1177728419
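A minimal sketch of the formula using Python's standard library. One caveat worth noting: for $k=2$ the moment about the median is never smaller than the one about the mean (the mean minimizes squared deviations), so the robustness refers to the centroid $\tilde{x}$ itself, which the outlier barely moves:

```python
from statistics import mean, median

# k-th sample moment about an arbitrary centroid (here: the median).
def moment(data, center, k):
    return sum((x - center) ** k for x in data) / len(data)

data = [1.0, 2.0, 3.0, 4.0, 100.0]  # one gross outlier
print(median(data), mean(data))      # 3.0 vs 22.0: the outlier drags the mean
print(moment(data, median(data), 2), moment(data, mean(data), 2))
```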
|
|statistics|median|parameter-estimation|
| 0
|
How to prove that CC($\mathbb R$) is true in permutation models
|
I've familiarised myself with models of set theory and am beginning to understand the basics, but am still very far away from being a proper model theorist. I currently live under the impression that I understand the idea of the construction of the first and second Fraenkel models. Hence, unfortunately, I must ask how one proves that the axiom of countable choice for subsets of $\mathbb R$ holds in permutation models. As far as I'm aware, it is not necessary to assume the axiom of choice when constructing permutation models. There is a particular issue that caused me some trouble. When we start constructing some permutation model, we assume that we already have some set theory as a metatheory. In this set theory, the atoms are mere sets, and thus, if we start building up the von Neumann hierarchy with atoms, all atoms will appear at some point in the kernel, which is a problem, because atoms should not be in the kernel. What am I missing here? EDIT: Or are the atoms greater than some s
|
Trivially. Since permutation models preserve choice in the pure sets (or, kernel), and since $\Bbb R$ is in bijection with $\cal P(\omega)$ , it must be well-orderable.
|
|set-theory|axiom-of-choice|
| 0
|
$a+b+c+d+e=79$ with constraints
|
How many non-negative integer solutions are there to $a+b+c+d+e=79$ with the constraints $a\ge7$, $b\le34$ and $3\le c\le41$? I get that for $a\ge7$ you do $79-7=72$, $\binom{72+5-1}{5-1}=\binom{76}4$. For $b\ge35$ I think it's $\binom{47}4$ and I'm not too sure what it is for $3\le c\le41$ and I also have no clue as to how to do them all at the same time.
|
$a+b+c+d+e=79$ over non-negative integers, with $a\ge 7,\;\; b\le 34,\;\; 3\le c\le 41$. Put $a'=a-7,\quad c'=c-3,\;$ then we have $a'+b+c'+d+e = 69,\;\; b\le 34,\;\; c'\le 38$, giving the answer of $\dbinom{73}4 - \dbinom{38}4 - \dbinom{34}4$ (the two violations $b\ge 35$ and $c'\ge 39$ cannot occur together, so no further inclusion-exclusion correction is needed).
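A brute-force enumeration confirms the count; for fixed $a, b, c$ with non-negative remainder $r = 79-a-b-c$, there are $r+1$ ways to split $r$ as $d+e$:

```python
from math import comb

# Enumerate a, b, c under the constraints; each non-negative remainder
# r = 79 - a - b - c leaves r + 1 ways to split it as d + e.
count = 0
for a in range(7, 80):
    for b in range(0, 35):
        for c in range(3, 42):
            r = 79 - a - b - c
            if r >= 0:
                count += r + 1

print(count == comb(73, 4) - comb(38, 4) - comb(34, 4))  # True
```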
|
|combinatorics|discrete-mathematics|integer-partitions|
| 0
|
self-adjoint operator $L = *D + D*$ on 3-manifold
|
In Witten's Quantum Field Theory and the Jones Polynomial , he mentioned: Let $D$ be the exterior derivative on $M$ , twisted by the flat connection $A$ ; let $*$ be the Hodge operator that maps $k$ -forms to $3-k$ -forms. On a three-manifold one has a natural self-adjoint operator $$L = *D + D*$$ which maps differential forms of even order to forms of even order and forms of odd order to forms of odd order. Let $L_-$ denote its restriction to forms of odd order. In $d$ dimensions (say $d=3$ ), isn't it that $*D$ maps a $k$ -form to a $d-k-1$ -form? Isn't it that $D*$ maps a $k$ -form to a $d-k+1$ -form? So $L = *D + D*$ on a $k$ -form $V$ produces $LV$ with both a $d-k-1$ -form and a $d-k+1$ -form? How can this $L$ operator be natural? Say $d=3, k=0$ : we get a $d-k-1=2$ -form and a $d-k+1=4$ -form? Say $d=3, k=1$ : we get a $d-k-1=1$ -form and a $d-k+1=3$ -form? Say $d=3, k=2$ : we get a $d-k-1=0$ -form and a $d-k+1=2$ -form? Say $d=3, k=3$ : we get a $d-k-1=-1$ -form and a $d-k+1=1$ -form?
|
Letting \begin{align*} \Omega^{\bullet}(M) &= \bigoplus_{k=0}^{\dim M}\Omega^k(M)\\ \Omega^{\text{even}}(M) &= \bigoplus_{k=0}^{\lfloor\dim M/2\rfloor}\Omega^{2k}(M)\\ \Omega^{\text{odd}}(M) &= \bigoplus_{k=0}^{\lfloor\dim M/2\rfloor}\Omega^{2k+1}(M), \end{align*} we have $\Omega^{\bullet}(M) = \Omega^{\text{even}}(M)\oplus\Omega^{\text{odd}}(M)$ . The operator $L : \Omega^{\bullet}(M) \to \Omega^{\bullet}(M)$ satisfies $L|_{\Omega^{\text{even}}(M)} : \Omega^{\text{even}}(M) \to \Omega^{\text{even}}(M)$ and $L|_{\Omega^{\text{odd}}(M)} : \Omega^{\text{odd}}(M) \to \Omega^{\text{odd}}(M)$ . In particular, if $\alpha \in \Omega^p(M)$ , then $L(\alpha)$ need not be a differential form of pure degree, but it is the sum of such forms whose degrees have the same parity as $p$ does.
|
|differential-geometry|differential-forms|
| 0
|
Help to write the domain in a formally correct way
|
The question is simple: the domain of $f(x, y) = \ln(\sqrt{xy} + 1)$ . Now this is just $xy \geq 0$ , which means either $x \geq 0$ and $y \geq 0$ or $x \leq 0$ and $y \leq 0$ . This is easy to say and write in this way, but I want to write it in a formal mathematical way. This is what I did: $$D = \{(x, y) \in \mathbb{R}^2 | (x \geq 0, y \geq 0) \cup (x \leq 0, y \leq 0)\}$$ Yet I have some doubts about it, since $\cup$ means being in at least one of the sets. Maybe it's still correct and I am confusing the symbol, but $A \cup B$ means that an element $k$ is in $A$ or $B$ or both. This would make me think that I could have $x \geq 0$ and $y \leq 0$ , which wouldn't be in $D$ ... Can you please help me?
|
This is fine using the comma as a logical "and"; more formally, since the two clauses are predicates rather than sets, we should join them with $\lor$ instead of $\cup$ and state that $$D = \{(x, y) \in \mathbb{R}^2 \mid (x \geq 0 \land y \geq 0) \lor (x \leq 0 \land y \leq 0)\}$$ or also simply $$D = \{(x, y) \in \mathbb{R}^2 \mid xy\ge 0\}$$
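A quick grid check that the two descriptions agree:

```python
# xy >= 0 holds exactly when x and y are both >= 0 or both <= 0.
pts = [k / 2.0 for k in range(-6, 7)]
mismatches = [(x, y) for x in pts for y in pts
              if (x * y >= 0) != ((x >= 0 and y >= 0) or (x <= 0 and y <= 0))]
print(mismatches)  # []
```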
|
|calculus|analysis|multivariable-calculus|functions|
| 1
|
Exponential generating function of the sequence $1,0,1,0,\dots$
|
I'm currently learning exponential generating functions and am working on a problem that needs me to find the corresponding closed form expression for a given sequence. I've been given the sequence $1,0,1,0,...$ and decided to use the EGF $e^x$ to find my solution by substituting $0$ for every odd term in the expression: $e^x = \sum_{r=0}^\infty \frac{x^r}{r!} = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + ...$ Doing so, I've arrived at this expression: $1 + \frac{x^2}{2!} + \frac{x^4}{4!} + ...$ But I don't know where to go from here. Am I on the right track or is there a better way to find the closed form expression?
|
HINT: Have you considered $\frac{e^x+e^{-x}}{2}$ ? ETA: And that is basically all there is. Let $(a_i)_{i=0}^{\infty}$ be a sequence. Then the exponential generating function $f$ for this sequence is by definition the function whose Taylor series is prescribed by $f(x) = a_0 + \sum_{i=1}^{\infty} \frac{a_ix^i}{i!}$ . If the sequence is $1,0,1,0,\ldots$ where every other term is $1$ and the remaining terms are $0$ , or equivalently, if the generating sequence is $(a_i)_{i=0}^{\infty}$ , where $a_i$ is $1$ iff $i$ is even and is $0$ otherwise, then the exponential generating function is $f(x)=\frac{e^x+e^{-x}}{2}$ , as this function has the Taylor series expansion $f(x)=1+\sum_{i=1}^{\infty} \frac{a_ix^i}{i!}$ , where $a_i$ is $1$ iff $i$ is even, and $a_i =0$ otherwise.
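Since $\frac{e^x+e^{-x}}{2} = \cosh x$, the claim is easy to check numerically by comparing the truncated series with coefficients $1,0,1,0,\ldots$ against the built-in $\cosh$:

```python
import math

# Truncated series with coefficients 1,0,1,0,... versus cosh.
def egf_partial(x, terms=40):
    return sum(x ** i / math.factorial(i) for i in range(0, terms, 2))

max_err = max(abs(egf_partial(x) - math.cosh(x))
              for x in [-3.0, -1.0, 0.0, 0.5, 2.0])
print(max_err)  # essentially machine precision
```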
|
|discrete-mathematics|generating-functions|
| 0
|
Doubt in a proof showing that in topoi, equivalence relations are effective
|
This is proposition 1.4 (pg. 30) in van Oosten's lecture notes on Topos Theory. Prop : In a topos, every equivalence relation is effective, i.e. a kernel pair. Following is the first half of the proof. Let $\phi : X \times X \to \Omega$ classify the subobject $\langle r_0, r_1 \rangle : R \to X \times X$ , and let $\overline{\phi} : X \to Ω^X$ be its exponential transpose. We claim that the square, is a pullback, so that $R$ is the kernel pair of $\overline{\phi}$ . To see that it commutes, we look at the transposes of the compositions $\overline{φ}r_i$ , which are arrows, $$R \times X \xrightarrow{r_i \times id} X \times X \xrightarrow{\phi} \Omega$$ Both these maps classify the object $R′$ of $R$ -related triples, seen as subobject of $R \times X$ , so they are equal. I was able to understand this proof until the last line. How are we able to consider $R'$ as a subobject of $R \times X$ ? And how is the arrow $\phi (r_i \times id)$ the classifying arrow of $R'$ . $R'$ is defined as p
|
Consider first the diagram $$ \require{AMScd} \begin{CD} R'@>{t}>> R @>>> *\\ @V{\langle s, r_1t\rangle}VV @VV{\langle r_0,r_1\rangle}V @VV{\top}V\\ R\times X @>{r_1\times\mathrm{id}_X}>> X\times X@>{\varphi}>>\Omega\\ @V{\mathrm{pr}_0}VV @VV{\mathrm{pr}_0}V\\ R @>{r_1}>>X \end{CD} $$ where $*$ is the terminal object of your topos and the top right square is a pullback square. The bottom left square is also a pullback, as is the large left vertical rectangle, both by direct inspection. By pullback pasting, this means that the upper left square is a pullback. In particular, the map $R'\to R\times X$ is the pullback of a mono, and hence a mono as well. A similar argument shows that the map $X\times R\xrightarrow{\mathrm{id}_X\times r_0} X\times X\xrightarrow{\varphi}\Omega$ classifies $R'$ , but note that we have swapped the terms in the categorical products. To show that the map $R\times X\xrightarrow{r_0\times\mathrm{id}_X}X\times X\xrightarrow{\varphi}\Omega$ classifies $R'$ , it suff
|
|category-theory|topos-theory|
| 1
|
Tamely ramified extension of the maximal unramified extension of a local field
|
This construction comes from Chapter VII of "The Local Langlands Conjecture for $GL(2)$ ": Let $F$ be a local field. We know that for any $n \in \mathbb{N}$ there exists a unique unramified extension of degree $n$ . We can denote by $F_{\infty}$ the composite of all these fields, which is clearly the maximal unramified extension of $F$ . Now for each integer $n\geq 1$ such that $p \not\mid n$ (where $p$ is the characteristic of the residue field of $F$ ), the field $F_{\infty}$ has a unique extension $E_n/F_{\infty}$ of degree $n$ . My question is: why does $F_{\infty}$ have a unique extension of degree $n$ ? This is a totally tamely ramified extension and so $E_n$ is generated, over $F_\infty$ , by an $n$ -th root of a uniformizer. Moreover I know that there exists a finite number of extensions of fixed degree for a local field. But now I don't know how to proceed.
|
By your last paragraph, it appears that you know how to show each tamely totally ramified extension of a local field $K$ with degree $n$ is $K(\sqrt[n]{\pi})$ where $\pi$ is some choice of uniformizer in $K$ . That is an important background step. Because $F_\infty$ is the maximal unramified extension of $F$ , its finite extensions are all totally ramified, so a finite extension of $F_\infty$ with degree $n$ that is not divisible by $p$ is a tamely totally ramified extension of $F_\infty$ with degree $n$ . By the same type of reasoning used to get the result in the previous paragraph, the extension is $F_\infty(\sqrt[n]{\pi})$ where $\pi$ is some uniformizer in $F_\infty$ . We want to show this extension is independent of the choice of $n$ th root of $\pi$ and of the choice of $\pi$ . The field $F_\infty(\sqrt[n]{\pi})$ is independent of the choice of $n$ th root of $\pi$ because any two $n$ th roots of $\pi$ have a ratio that is an $n$ th root of unity and all $n$ th roots of unity ar
|
|number-theory|field-theory|algebraic-number-theory|local-field|unramified-extension|
| 1
|
Calculating the time taken for a bouncing ball to come to rest.
|
A ball is dropped from rest at a height $h_0$ and bounces from a surface such that the height of the $n$ th bounce, $h_n$ is given by $h_n=αh_{n-1}$ , where $h_{n-1}$ is the height of the previous, $(n-1)$ th bounce. The factor $α$ has value between 0 and 1. How long does the ball take to cover the distance it travels before coming to rest? I calculated the total distance travelled by the ball before coming to rest to be $\frac{2h_0}{1-α}-h_0$ , using the infinite sum of $x^n$ , where $|x| < 1$ , which is equal to $\frac{1}{1-x}$ . Note that after the first bounce, the distance of every subsequent "bounce" is doubled, as the ball falls to the ground then bounces up to the same height again. In order to calculate how long the ball takes to cover this distance, I must not assume an average velocity. Therefore, I thought to use the SUVAT equation $s=ut+\frac{1}{2}at^2$ , where $a=g=9.81ms^{-2}$ , to find the time taken for each bounce. However, the question hints towards the solution involving
|
The question can’t be answered in its current form because the answer depends on when and where the energy is lost. The ball loses energy both from the inelastic bounce on the ground and from air friction. It would be quite complicated to solve this taking both effects into account simultaneously, so I suspect that the intended interpretation is that one of the effects is to be neglected. Since the equation you want to use assumes that there’s no air friction, I’ll make that assumption (so we’re talking e.g. about a basketball and not a table tennis ball). It’s convenient to make the motion symmetric about $t=0$ by placing the origin at the time when the ball is at its peak (and thus at rest). Then the downward motion is described by $s=\frac12gt^2$ , and setting $s=h_n$ yields $\frac12gt_n^2=h_n$ and thus $t_n=\sqrt{\frac{2h_n}g}$ . The time the ball takes to rise and fall is twice this time. The recurrence relation for the times is therefore $t_n=\sqrt{\alpha}t_{n-1}$ . I’ll leave th
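Completing the recurrence left as an exercise: with $t_n = \sqrt{\alpha}\, t_{n-1}$, the total time is one initial fall plus an up-and-down per bounce, $T = t_0 + 2\sum_{n\ge 1} t_n = t_0\,\frac{1+\sqrt{\alpha}}{1-\sqrt{\alpha}}$. A numeric sketch (the values of $h_0$ and $\alpha$ are illustrative assumptions):

```python
import math

# Total airtime, neglecting air resistance: one initial fall of duration
# t0 = sqrt(2*h0/g), then an up-and-down of duration 2*t_n per bounce with
# t_n = sqrt(alpha)**n * t0. Geometric series vs. closed form:
g, h0, alpha = 9.81, 2.0, 0.6  # illustrative values
t0 = math.sqrt(2 * h0 / g)
r = math.sqrt(alpha)

series = t0 + 2 * sum(t0 * r ** n for n in range(1, 200))
closed = t0 * (1 + r) / (1 - r)
print(series, closed)  # both ~5.03 seconds
```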
|
|sequences-and-series|classical-mechanics|kinematics|
| 1
|
Probability of strings making a complete loop
|
We got 6 pieces of string. The top ends of the strings are randomly paired up and tied together. The same procedure is done with the bottom ends of the strings. What is the probability that, as a result of the process, the 6 pieces of string will be connected in a single closed loop? My initial attempt is to just multiply the probabilities in each step: the probability of tying the top ends in the initial procedure is 1, because whichever ends we tie will not affect the result we are getting. Second step: if we choose a random string and a random bottom end of it, out of our new 3 full strings, only one of the outcomes will not be favorable out of all 5 bottom ends. So the favorable probability is $\frac{4}{5}$ . Third, if we consider one of our newly tied strings' bottom ends, we have 2 favorable ends out of 3 ends, so $\frac{2}{3}$ , and finally 1 way to tie the last ends. Therefore in total: $1 \times \left(\frac{4}{5}\right) \times \left(\frac{2}{3}\right) \times 1 = \frac{8}{15} $ This was m
|
Here is another approach that yields the same answer... Note that each configuration of resulting strings is equally likely to occur. So, I proceed by counting $(1)$ all possible ways the six strings can be tied together and $(2)$ all such ways that result in a loop. As we begin tying the top ends together, there are ${6 \choose 2} = 15$ ways of selecting two ends of the available six to bind together, each of which leaves four top ends untied. Then, there are ${4 \choose 2} = 6$ ways of selecting two ends of the remaining four to bind together, each of which leaves two top ends untied. Of course, there is only ${2 \choose 2} = 1$ way to tie the remaining two ends together, so there are $ {6 \choose 2} \cdot {4 \choose 2} \cdot {2 \choose 2} = 15 \cdot 6 \cdot 1 = 90$ ways of tying the top ends together. Now, for each such way, we can repeat this logic to conclude there are $90$ ways of tying the bottom ends together. Hence, there are $90 \cdot 90 = 8100$ distinct ways of tying the
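The count is also easy to verify by exhaustive enumeration over the $15 \times 15$ unordered pairings; since every string ends up with degree 2 (one top tie, one bottom tie), the configuration is a single loop exactly when the resulting graph on the six strings is connected:

```python
from fractions import Fraction

# All perfect matchings of a tuple of distinct ends.
def matchings(items):
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + m

# Strings 0..5; a top matching and a bottom matching each contribute three
# edges, so every string has degree 2. Single loop <=> the graph is connected.
def is_single_loop(top, bottom):
    adj = {s: [] for s in range(6)}
    for a, b in top + bottom:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == 6

ms = list(matchings(tuple(range(6))))
favorable = sum(is_single_loop(t, b) for t in ms for b in ms)
print(favorable, len(ms) ** 2, Fraction(favorable, len(ms) ** 2))  # 120 225 8/15
```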
|
|probability|probability-theory|puzzle|
| 0
|
What's the intuition for reflexive pairs?
|
A reflexive pair is a pair of morphisms $f, g: A \to B$ such that there exists a common section $s: B \to A$ , i.e. $fs = gs = 1_B$ . What's the intuition behind this concept? I tried looking at this in the category of sets to see if I recognised this as a well-known notion, but I don't see it. I see that $f$ and $g$ are split epis. I also constructed this basic example: $$ \begin{aligned}f : \{1,2,3\} &\to \{a, b\} \\ 1 &\mapsto a \\ 2 &\mapsto b \\ 3 &\mapsto b\end{aligned} $$ $$ \begin{aligned}g : \{1,2,3\} &\to \{a, b\} \\ 1 &\mapsto a \\ 2 &\mapsto a \\ 3 &\mapsto b\end{aligned} $$ This is a reflexive pair with section given by $a \mapsto 1$ , $b \mapsto 3$ . It seems any pair of parallel epis in $\mathrm{Set}$ is a reflexive pair, with common section given by selecting for every element in the codomain an element in the intersection of the preimage by one and the other morphism. However this didn't help in elucidating the concept.
|
One important source of reflexive pairs is group actions (and more generally, actions of some algebraic object with a unit). Suppose $G$ is a group and $X$ is a set with left $G$ -action $\alpha\colon G\times X\to X$ . The maps $\alpha,\mathrm{pr}_2\colon G\times X\rightrightarrows X$ have a common section $s\colon X\to G\times X, x\mapsto(e,x)$ , where $e\in G$ is the neutral element. The reason we consider such reflexive pairs in the context of group actions is to take their colimit. The colimit of a reflexive pair is called a reflexive coequalizer , and in case of $\alpha,\mathrm{pr}_2\colon G\times X\rightrightarrows X$ with section $s$ as defined above, the reflexive coequalizer is the quotient set $X/G$ , where we collapse each $G$ -orbit to a single point (in other words, we quotient out the equivalence relation generated by $g\cdot x\sim x$ for $g\in G$ and $x\in X$ ). More generally (but entirely analogously, or rather, by definition), given any category $\mathcal{C}$ with fin
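A concrete toy instance of this reflexive coequalizer in $\mathrm{Set}$, assuming the action of $G = \mathbb{Z}/2$ on $X = \{0,\dots,5\}$ whose nontrivial element swaps $2i$ and $2i+1$; coequalizing $\alpha$ and $\mathrm{pr}_2$ glues $g\cdot x$ to $x$ and produces the orbit set $X/G$:

```python
# Toy reflexive coequalizer in Set: G = Z/2 acting on X = {0,...,5} by
# swapping 2i <-> 2i+1. Coequalizing alpha(g, x) = g.x and pr_2(g, x) = x
# glues each g.x to x; the quotient is the orbit set X/G.
def act(g, x):
    return x if g == 0 else x ^ 1  # x ^ 1 swaps 2i and 2i+1

X, G = range(6), range(2)

parent = {x: x for x in X}  # union-find without rank, fine for 6 elements
def find(x):
    while parent[x] != x:
        x = parent[x]
    return x

for g in G:
    for x in X:
        parent[find(act(g, x))] = find(x)

orbits = {frozenset(y for y in X if find(y) == find(x)) for x in X}
print(sorted(sorted(o) for o in orbits))  # [[0, 1], [2, 3], [4, 5]]
```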
|
|category-theory|intuition|
| 0
|
Find $n\in \mathbb N$ for which $2\cdot\left[\frac{1^2}{2}\right]+2^2\cdot\left[\frac{2^2}{3}\right]+\dots+2^n\cdot\left[\frac{n^2}{n+1}\right] = 2^{2025}\cdot 2022+4$
|
Question Find $n\in N$ for which $$2 \times \left[\frac{1^2}{2}\right] + 2^2 \times \left[\frac{2^2}{3}\right] + ... + 2^n \times \left[\frac{n^2}{n+1}\right] = 2^{2025} \times 2022 + 4$$ where $[a]$ is integer part function applied on $a$ (the whole part of $a$ ) Idea I tried getting $2^n \times \left[\frac{n^2}{n+1}\right]$ to a more friendly form, but honestly got to nothing useful. By brute force, I got to the conclusion that $\left[\frac{n^2}{n+1}\right]$ is always increasing by one... again, I tried demonstrating it but I don't know how. I hope one of you can help me! Thank you!
|
As already pointed out in comments, $\left\lfloor\dfrac{n^2}{n+1}\right\rfloor$ is just $n-1$ in disguise. And you are asked to calculate $\displaystyle S=\sum\limits_{k=1}^n(k-1)2^k$ Notice that if we change $2$ by $x$ we get $\displaystyle\begin{align} S(x)&=\sum\limits_{k=1}^n(k-1)x^k=x^2\sum\limits_{k=1}^n(k-1)x^{k-2}=x^2\sum\limits_{k=1}^n\Big(x^{k-1}\Big)'=x^2\Bigg(\sum\limits_{k=1}^n x^{k-1}\Bigg)'\\\\ &=x^2\Bigg(\sum\limits_{k=0}^{n-1} x^k\Bigg)'=x^2\bigg(\frac{x^n-1}{x-1}\bigg)'=\frac{(nx^2-nx-x^2)\,x^n+x^2}{(x-1)^2}\end{align}$ Note: sums are finite so exchanging sum and derivation is not an issue, just regular stuff expressed in condensed form. Now of course we substitute back $x=2$ to get $S(2)=(2n-4)\,2^n+4=(n-2)\,2^{n+1}+4$ And $n=2024$ is clearly a solution. Note for the future: when you see sums later with terms in $nx^n$ or $n^2x^n$ or $\frac {x^n}n$ and so on, think about differentiating once, twice or integrating the geometric sum $g(x)=\sum\limits_{k=0}^{n-1}x^k=\dfrac{x^n-1}{x-1}$ first.
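All three ingredients (the floor identity, the closed form, and $n=2024$) can be verified with exact integer arithmetic:

```python
# floor(n^2/(n+1)) = n - 1, since n^2 = (n+1)(n-1) + 1.
assert all(n * n // (n + 1) == n - 1 for n in range(1, 2000))

def S(n):
    return sum((k - 1) * 2 ** k for k in range(1, n + 1))

n = 2024
print(S(n) == (n - 2) * 2 ** (n + 1) + 4 == 2 ** 2025 * 2022 + 4)  # True
```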
|
|square-numbers|
| 0
|
Example of a function on $([0,1]; \sigma ([0,1]); \lambda )$ for which it has no meaning to write $ \int f d \lambda$
|
I am trying to understand Lebesgue integration, and in order to understand this concept well I would like to have an example of a function on $(0,1]$ (or $[0,1)$ , $[0,1]$ , $(0,1)$ ) for which it has no meaning to write $ \int f d \lambda$ with $([0,1]; \sigma ([0,1]); \lambda )$ , where $\lambda = $ the Lebesgue measure. For example, understanding why the Dirichlet function is not Riemann integrable but is Lebesgue integrable helped me a lot to understand the concept of Lebesgue integrability. But I am a little bit stuck in my understanding, because now it seems to me that all functions can be Lebesgue integrable, as Lebesgue integration allows the integral to be equal to $ \infty $ , whereas this is not the case for the Riemann definition of integration, which imposes the convergence of the sum (thus it has to be finite). I thought about the function $\frac{-1}{x}$ as not being Lebesgue integrable on $([0,1]; \sigma ([0,1]); \lambda )$ but I am not sure that this is correct and more precisely I
|
If you allow the Lebesgue integral to have infinite value, I deduce your convention for Lebesgue integrability is that $\int f_+dx,\int f_- dx$ are not both infinite. There are a few nice examples to help your intuition about Riemann and Lebesgue integrability: Riemann and Lebesgue integrals have the same value for continuous functions on compact intervals. Some functions are (improperly) Riemann integrable but not Lebesgue integrable; this is the case for, say, $f(x)=\sin(x)/x$ . The limit definition of the Riemann integral will converge, while the integrals of $f_+,f_-$ are both infinite. Another nice example is the Conway base 13 function, as the support has measure zero. Most Lebesgue measurable functions are not Riemann integrable. This is the case for some classical examples such as $f(x)=\mathbb{1}_{\mathbb{Q}\;\cap\; [0,1]}$ . Many functions are not Lebesgue integrable. One can easily construct a function such that both $f_-,f_+$ have infinite integrals. Regarding $f(x)=1/x$ on say $(0,1]$ , it
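The $\sin(x)/x$ example can be illustrated numerically: the signed improper integral settles near $\pi/2$, while the integral of $|\sin(x)/x|$ keeps growing like a harmonic series:

```python
import math

# Midpoint-rule integral of sin(x)/x over [k*pi, (k+1)*pi]; these terms
# alternate in sign and decay like 2/(k*pi).
def segment(k, m=2000):
    a = k * math.pi
    h = math.pi / m
    return sum(math.sin(a + (i + 0.5) * h) / (a + (i + 0.5) * h)
               for i in range(m)) * h

terms = [segment(k) for k in range(400)]
signed = sum(terms)                       # improper Riemann integral -> pi/2
abs_40 = sum(abs(t) for t in terms[1:40])
abs_400 = sum(abs(t) for t in terms[1:400])
print(signed, abs_400 - abs_40)  # ~pi/2, and ~1.47 more absolute mass added
```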
|
|integration|measure-theory|lebesgue-integral|
| 1
|
"Closed" form for $\sum \frac{1}{n^n}$
|
Earlier today, I was talking with my friend about some "cool" infinite series and the value they converge to like the Basel problem, Madhava-Leibniz formula for $\pi/4, \log 2$ and similar alternating series etc. One series that popped into our discussion was $\sum\limits_{n=1}^{\infty} \frac{1}{n^n}$. Proving the convergence of this series is trivial but finding the value to which converges has defied me so far. Mathematica says this series converges to $\approx 1.29129$. I tried Googling about this series and found very little information about this series (which is actually surprising since the series looks cool enough to arise in some context). We were joking that it should have something to do with $\pi,e,\phi,\gamma$ or at the least it must be a transcendental number :-). My questions are: What does this series converge to? Does this series arise in any context and are there interesting trivia to be known about this series? I am actually slightly puzzled that I have not been able
|
Since the question asks for interesting trivia, I believe the following is appropriate as an answer. Digits The digit sequence is A073009 on OEIS. This simple Python script may be used to compute the digits of the sum:

from decimal import Decimal, getcontext

getcontext().prec = 1003
total = Decimal(0)
for n in range(1, 10000):
    term = Decimal(1) / Decimal(n) ** n
    total += term
print(total)

With this I've obtained the following: 1.2912859970626635404072825905956005414986193682745223173100024451369445387652344555588170411294297089849950709248154305484104874192848641975791635559479136964969741568780207997291779482730090256492305507209666381284670120536857459787030012778941292882535517702223833753193457492599677796483008495491110669649755010519757429116210970215616695328976892427890058093908147880940367993055895352006337161104650946386068088649986065310218534124791597373052710686824652246770336860469870234201965831431339687388172956893553685179852142066626416543806122456994096635604388
|
|sequences-and-series|
| 0
|
Killing magnetic field corresponding to Killing vector field
|
I am working on the paper "Magnetic curves corresponding to Killing magnetic fields in $E^3$" and I need some help. As I understand, a vector field $V$ on a manifold $M$ is a Killing vector field if it satisfies $$\langle \nabla_Y V, Z\rangle + \langle \nabla_ZV, Y\rangle =0$$ for all vector fields $Y, Z$ on $M$. Some examples of Killing vector fields are also given, such as $\partial _x$ , which I can verify directly. It is also said that for any Killing vector field $V$ the corresponding Killing magnetic field can be computed by the formula $$F_V=\iota_Vdv_g,$$ where $\iota$ is the inner product and $$\langle X\times Y, Z\rangle =dv_g(X,Y,Z).$$ I do not really understand the meaning of that formula and how to interpret and apply it to a Killing vector field. For example, given the Killing vector field $V=-y \partial_ x +x\partial_y$ , the Killing magnetic field corresponding to it is found to be $$F_V=-(xdx+ydy) \wedge dz.$$ Can somebody please explain to me clearly how it is found, since I have just start
|
$F_V=\iota_Vdv_g$ , this equation is valid in 3-dimensional spaces. It basically says that 2-forms are in bijective correspondence with vector fields. By the way, $\iota$ is used for interior product not for "inner product". See this question to see the relation between them. Consider a vector field $V \in \mathfrak{X}(M^3)$ for a 3-dimensional manifold $(M^3,g)$ . Take its metric equivalent 1-form denoted by $V^\flat$ and then apply Hodge star to this 1-form $\star V^\flat$ which is obviously a 2-form. Now, this 2-form can also be expressed by using the interior product $\iota_V$ , as $\star V^\flat=\iota_V dv_g$ , where $dv_g$ is the volume form of $(M^3,g)$ . Let's label volume form $dv_g$ as $\Omega$ and take a general vector field $ V=P \partial_x + Q \partial_y + N\partial_z $ $ \in \mathfrak{X}(\mathbb{R}^3)$ , where $P$ , $Q$ and $N$ are smooth functions in $\mathbb{R}^3$ . Since we have the Euclidean metric, corresponding 1-form becomes $V^\flat=P dx+ Q dy + N dz $ . Applying
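A small bookkeeping sketch of the coordinate computation, using the identity $\iota_V(dx\wedge dy\wedge dz) = P\, dy\wedge dz + Q\, dz\wedge dx + N\, dx\wedge dy$ for $V = P\partial_x + Q\partial_y + N\partial_z$ (2-forms stored as coefficient triples in the basis $(dy\wedge dz,\, dz\wedge dx,\, dx\wedge dy)$):

```python
# For V = P d/dx + Q d/dy + N d/dz, contracting the Euclidean volume form
# dx^dy^dz gives the 2-form with basis coefficients (P, Q, N).
def interior_product(P, Q, N):
    return (P, Q, N)

# V = -y d/dx + x d/dy evaluated at the point (x, y, z):
def F_V(x, y, z):
    return interior_product(-y, x, 0)

# The paper's answer -(x dx + y dy)^dz expands to -x dx^dz - y dy^dz
# = x dz^dx - y dy^dz, i.e. coefficients (-y, x, 0) in the same basis.
def claimed(x, y, z):
    return (-y, x, 0)

agree = all(F_V(x, y, z) == claimed(x, y, z)
            for x in range(-2, 3) for y in range(-2, 3) for z in range(-2, 3))
print(agree)  # True
```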
|
|differential-geometry|mathematical-physics|
| 0
|
Show that there is a real number $a$ such that $\Vert y-ax \Vert=\inf \{ \Vert y-bx \Vert: b\in\Bbb R\}$
|
Exercise: $X$ is a normed linear space and $\Vert \quad \Vert$ is a norm on $X$ . $x$ is a non-zero element in $X$ . Then for all $y\in X$ , there is a real number $a$ such that $\Vert y-ax \Vert=\inf \{ \Vert y-bx \Vert: b\in\Bbb R\}$ . My Attempt I tried to work it out but failed. I haven't a clue about it. Added: Sorry for my hollow description. It's clear that there is a lower bound for $\{ \Vert y-bx \Vert: b\in\Bbb R\}$ because $\Vert \quad \Vert$ is a norm and 0 is always a lower bound for the set. So the set of real numbers $\{ \Vert y-bx \Vert: b\in\Bbb R\}$ does have an infimum. Denote the infimum of this set by $l$ . Then for all $\varepsilon>0$ , there is a real number $b$ such that $\Vert y-bx \Vert < l + \varepsilon$ . Namely, we can get a sequence $\{b_n\}_n$ such that $\lim_{n \to \infty} \Vert y-b_n x \Vert=l$ . Here, if $\{b_n\}_n$ converges to $a$ , then we have $\Vert y-ax \Vert=\inf \{ \Vert y-bx \Vert: b\in\Bbb R\}$ . But how can I find a convergent $b_n$ ? And in fact I'm not sure if
|
Note that the sequence $\{\|y-b_nx\|\}$ , because it's convergent, is bounded. Let's say $L\in \Bbb R$ is such that $\|y-b_nx\| < L$ for all $n$ . The triangle inequality yields $$ \big|\,\|bx\| - \|y\|\,\big| \leq \|y-bx\| $$ Now set $$ M = \frac{\|y\| + L}{\|x\|} $$ The sequence $\{b_n\}$ is bounded (above and below) by this $M$ . Indeed, $M < |b|$ yields $$ \frac{\|y\| + L}{\|x\|} < |b| \implies \|y\| + L < |b|\,\|x\| = \|bx\| \implies L < \|bx\| - \|y\| \leq \|y-bx\|, $$ so this $b$ cannot be an element of the sequence $\{b_n\}$ . Thus $\{b_n\}$ has at least one convergent subsequence by compactness of the interval $[-M, M]$ . Setting $a$ to be the limit of one of these subsequences lets you use the continuity of the function $b\mapsto\|y-bx\|$ to conclude.
|
|functional-analysis|
| 1
|
Calculating $\int_0^{\infty } \left(\text{Li}_2\left(-\frac{1}{x^2}\right)\right)^2 \, dx$
|
Do you see any fast way of calculating this one? $$\int_0^{\infty } \left(\text{Li}_2\left(-\frac{1}{x^2}\right)\right)^2 \, dx$$ Numerically, it's about $$\approx 111.024457130115028409990464833072173251135063166330638343951498907293$$ or in a predicted closed form $$\frac{4 }{3}\pi ^3+32 \pi \log (2).$$ Ideas, suggestions, opinions are welcome, and the solutions are optionals . Supplementary question for the integrals lovers: calculate in closed form $$\int_0^{\infty } \left(\text{Li}_2\left(-\frac{1}{x^2}\right)\right)^3 \, dx.$$ As a note, it would be remarkable to be able to find a solution for the generalization below $$\int_0^{\infty } \left(\text{Li}_2\left(-\frac{1}{x^2}\right)\right)^n \, dx.$$
|
The following is another way to show that $$ \begin{align} \int_{0}^{\infty} \operatorname{Li}_{2}^{2} \left(-x^{-1/a} \right) \, \mathrm dx &= a\int_{0}^{\infty} \frac{\operatorname{Li}_{2}^{2}(-u)}{u^{a+1}} \, \mathrm du \\ &=\frac{2 \pi}{a} \csc (\pi a) \left(\psi_{1}(a)-\frac{\pi ^2}6-\frac{2}{a}\psi(a)-\frac{2 \gamma }{a}\right) \end{align}$$ for $0 < a < 2$ , where $\psi(a)$ is the digamma function and $\psi_{1}(a)$ is the trigamma function. This generalization appears in Po1ynomial 's answer. I just replaced $a$ with $1/a$ for convenience. Using the principal branch of the dilogarithm and the branch of $z^{-a-1}$ where $0 \le \arg(z) < 2\pi$ , let's integrate the function $$f(z) = \frac{\operatorname{Li}_{2}^{2}(-z)}{z^{a+1}} $$ around the following double keyhole contour: [contour figure omitted] From the dilogarithm "inversion" formula it follows that on the upper side of the branch cut on $[-\infty, -1]$ , $$\operatorname{Li}_{2}(-z) = - \operatorname{Li}_{2} \left(-\frac{1}{z} \right) + \frac{\pi^{2}}{3} - \frac{1}{
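As a quick numerical sanity check (not a proof) of the predicted closed form $\frac{4}{3}\pi^3 + 32\pi\log 2$ for the original integral, one can evaluate it with mpmath; the split point at $1$ is just a convenience for the quadrature:

```python
import mpmath as mp

mp.mp.dps = 30

# Numerically evaluate I = int_0^oo Li_2(-1/x^2)^2 dx
f = lambda x: mp.polylog(2, -1 / x**2) ** 2
I = mp.quad(f, [0, 1, mp.inf])

# Predicted closed form from the question
closed = 4 * mp.pi**3 / 3 + 32 * mp.pi * mp.log(2)
assert abs(I - closed) < mp.mpf('1e-10')   # ~111.0244571301...
```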
|
|calculus|integration|definite-integrals|special-functions|polylogarithm|
| 0
|
How to evaluate $\int_{0}^{1} \int_{0}^{1} \frac{{(1 + x) \cdot \log(x) - (1 + y) \cdot \log(y)}}{{x - y}} \cdot (1 + \log(xy)) \,dy \,dx$
|
Question: How to evaluate this integral $$\int_{0}^{1} \int_{0}^{1} \frac{{(1 + x) \cdot \log(x) - (1 + y) \cdot \log(y)}}{{x - y}} \cdot (1 + \log(xy)) \,dy \,dx$$ My messy try $$\int_{0}^{1} \int_{0}^{1} \frac{{(1 + x) \cdot \log(x) - (1 + y) \cdot \log(y)}}{{x - y}} \cdot (1 + \log(xy)) \,dy \,dx$$ $$ \begin{array}{r} I=\int_0^1 \int_0^1 \frac{(1+x) \ln (x)-(1+y) \ln (y)}{x-y}(1+\ln (x y)) d y d x \\ I=\int_0^1 \int_0^{\frac{1}{x}} \frac{(1+x) \ln (x)-(1+x t)(\ln (x)+\ln (t))}{1-t}(1+2 \ln (x)+\ln (t)) d t d x \\ =\int_0^1 \int_0^{\frac{1}{x}} x \ln (x)(1+2 \ln (x)+\ln (t)) d t d x- \\ -\int_0^1 \int_0^{\frac{1}{x}} \frac{(1+x t) \ln (t)}{(1-t)}(1+2 \ln (x)+\ln (t)) d t d x \\ =\int_0^1 x \ln (x)\left(\frac{1+2 \ln (x)-\ln (x)-1}{x}\right) d x-\int_0^1 \int_0^1 \frac{(1+x t) \ln (t)}{(1-t)}(1+2 \ln (x) \ln (t)) d x d t + \end{array} $$ $$- \int_{0}^{\infty} \int_{0}^{\frac{1}{t}} \frac{{(1 + xt) \cdot \ln(t)}}{{1 - t}} \cdot (1 + 2 \cdot \ln(x) + \ln(t)) \,dx \,dt$$ $$I_1 = \int_{0}
|
Continuation $$\begin{aligned} & -\int_0^{+\infty} \int_0^{\frac{1}{t}} \frac{(1+x t) \ln (t)}{(1-t)}(1+2 \ln (x)+\ln (t)) d x d t \\ & I_1=\int_0^1 x \ln (x)\left(\frac{1+2 \ln (x)-\ln (x)-1}{x}\right) d x- \\ & -\int_0^1 \int_0^1 \frac{(1+x t) \ln (t)}{(1-t)}(1+2 \ln (x)+\ln (t)) d x d t+ \\ & I_2=-\int_0^{+\infty} \int_0^{\frac{1}{t}} \frac{(1+x t) \ln (t)}{(1-t)}(1+2 \ln (x)+\ln (t)) d x d t \\ & I=I_1+I_2 \text { (1) } \end{aligned}$$ where the domain of integration is $D=\left\{(x, t) : 0<x<1,\ 0<t<\tfrac{1}{x}\right\}$ . $$\begin{gathered}-\int_1^{+\infty} \frac{t \ln (t)}{1-t} \int_0^{\frac{1}{t}}(2 x \ln (x)+x(1+\ln (t))) d x d t \\ \left.I_2=-\int_0^{+\infty} \frac{\ln (t)}{(1-t)}(x+2(x \ln (x)-x)+x \ln (t))\right]_0^{\frac{1}{t}} d t- \\ -\int_0^{+\infty} \frac{t \ln (t)}{1-t}\left[x^2 \ln (x)-\frac{1}{2} x^2+\frac{1}{2} x^2(1+\ln (t))\right]_0^{\frac{1}{t}} d t \\ =-\int_0^{+\infty}\left(-\frac{\ln (t)}{t(1-t)}-\frac{\ln ^2(t)}{t(1-t)}\right) d t+\frac{1}{2} \int_0^{+\infty} \frac{\ln ^2(t)}{
|
|calculus|integration|multivariable-calculus|definite-integrals|closed-form|
| 0
|
What's the intuition for reflexive pairs?
|
A reflexive pair is a pair of morphisms $f, g: A \to B$ such that there exists a common section $s: B \to A$ , i.e. $fs = gs = 1_B$ . What's the intuition behind this concept? I tried looking at this in the category of sets to see if I recognised this as a well-known notion, but I don't see it. I see that $f$ and $g$ are split epis. I also constructed this basic example: $$ \begin{aligned}f : \{1,2,3\} &\to \{a, b\} \\ 1 &\mapsto a \\ 2 &\mapsto b \\ 3 &\mapsto b\end{aligned} $$ $$ \begin{aligned}g : \{1,2,3\} &\to \{a, b\} \\ 1 &\mapsto a \\ 2 &\mapsto a \\ 3 &\mapsto b\end{aligned} $$ This is a reflexive pair with section given by $a \mapsto 1$ , $b \mapsto 3$ . It seems any pair of parallel epis in $\mathrm{Set}$ is a reflexive pair, with common section given by selecting for every element in the codomain an element in the intersection of the preimage by one and the other morphism. However this didn't help in elucidating the concept.
|
To get something out of the way: not all pairs of parallel epis in $\mathsf{Set}$ have a common section. A simple example is where $E = \{a, b, c, d\}$ maps epimorphically to $\{0, 1\}$ in two ways $f, g$ , where $f^{-1}(0) = \{a, b\}$ and $g^{-1}(0) = \{c, d\}$ . The intuition I have for reflexive forks as such is focused less on what "they are like" for certain concrete categories, and more on why we should care or what makes them so wonderful. Of course one big source for them is equivalence relations $E \hookrightarrow X \times X$ , where by composing with the two product projections we get a pair of maps $E \underset{q}{\overset{p}{\rightrightarrows}} X$ with a common section $i: X \to E$ , purely and simply because of the reflexivity property (that the diagonal map $\delta: X \to X \times X$ factors through the inclusion $E \hookrightarrow X \times X$ ), and this connection with the reflexive property is surely the reason for the name "reflexive pair". So you can and should think
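The counterexample in the first paragraph can be verified by brute force: both maps are (split) epis, yet exhaustively checking all candidate sections shows none is common to $f$ and $g$.

```python
from itertools import product

# The answer's counterexample: E = {a,b,c,d} -> {0,1} via f and g,
# with f^{-1}(0) = {a,b} and g^{-1}(0) = {c,d}.
E = ['a', 'b', 'c', 'd']
f = {'a': 0, 'b': 0, 'c': 1, 'd': 1}
g = {'a': 1, 'b': 1, 'c': 0, 'd': 0}

# Search all sections s: {0,1} -> E for one with f(s(i)) = g(s(i)) = i.
common_sections = [
    s for s in product(E, repeat=2)               # s = (s(0), s(1))
    if all(f[s[i]] == i and g[s[i]] == i for i in (0, 1))
]
assert common_sections == []   # both epis split individually, but no common section
```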
|
|category-theory|intuition|
| 1
|
What is the relationship among $a, b$ and $c$ for $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-\left(ax^2+2bxy+cy^2\right)}dx dy=1$
|
What relationship must hold between the constants $a, b$ and $c$ to make: $$ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}{\rm e}^{-\left(\, ax^{2}\ +\ 2bxy\ +\ cy^2\,\right)}\phantom{A\,}{\rm d}x\,{\rm d}y = 1 $$ I am absolutely clueless on how to proceed with this question. I found a solution to this question on this website, but the initial steps, where it uses the transformation with the constraints on $\alpha, \beta, \gamma$ and $\delta$ , are not clear to me. What was the intuition behind this transformation? I understand that the end result is somewhat similar to the initial assumptions, but how am I supposed to think of this particular transformation in an exam? I would appreciate any alternate answers/techniques to this question. Explanation(s) of the external linked solution are also welcome. Edit #1: As mentioned by fellow users in the comments, the link is hidden behind a paywall. Here's the crux of what was given as the solution: They used the transformation $$s=\alp
|
An alternative method is to complete the square for the x-variable as $(x+\frac{b}{a}y)^{2}$ and integrate over $x$ , yielding a Gaussian-type integral: \begin{equation} \int_{-\infty}^{\infty}e^{-a((x+\frac{b}{a}y)^2-(\frac{b}{a}y)^2)}dx= \sqrt\frac{\pi}{a}e^{\frac{b^2}{a}y^2} \end{equation} Inserting the RHS into the $y$-variable integral and performing a second Gaussian integration, which converges only if $c-\frac{b^2}{a}>0$ , yields the result: \begin{equation} \frac{\pi}{\sqrt{ac-b^2}}= 1 \end{equation} Hence $ac=\pi^{2}+b^2$ , and since neither $a$ nor $c$ can be negative for convergence reasons, both $a$ and $c$ must be simultaneously positive.
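A quick numerical check (with a sample choice $a=2$, $b=1$, which is my own illustration) that the condition $ac=\pi^2+b^2$ indeed normalizes the double integral to $1$:

```python
import numpy as np
from scipy.integrate import dblquad

# Pick a, b and solve ac = pi^2 + b^2 for c (sample values for illustration)
a, b = 2.0, 1.0
c = (np.pi**2 + b**2) / a          # then ac - b^2 = pi^2, so pi/sqrt(ac-b^2) = 1

val, err = dblquad(
    lambda y, x: np.exp(-(a * x**2 + 2 * b * x * y + c * y**2)),
    -np.inf, np.inf,               # x limits
    -np.inf, np.inf,               # y limits
)
assert abs(val - 1.0) < 1e-6
```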
|
|calculus|integration|multivariable-calculus|multiple-integral|jacobian|
| 0
|
Linear operator - Exercise
|
Consider a linear operator as defined below $$S_N[u(x)] : = \int_\Bbb R \chi_{[-N,N]}(\xi)\hat u(\xi)e^{ix\xi}d\xi, \ u \in L^1(\Bbb R)$$ Prove that $S_N:L^1(\Bbb R) \to C^0_b(\Bbb R)$ is well defined and bounded - $C^0_b(\Bbb R)$ is the space of continuous bounded functions - I'm having some trouble understanding what the exercise is asking. First, for well-definedness: is the question to prove that this is effectively a linear operator? In this case, using linearity of the integral we get $$ S_N[\alpha u(x) + \beta v(x)] = \alpha\int_\Bbb R \chi_{[-N,N]}(\xi)\hat u(\xi)e^{ix\xi}d\xi + \beta \int_\Bbb R \chi_{[-N,N]}(\xi)\hat v(\xi)e^{ix\xi}d\xi \\= \alpha S_N[u(x)] + \beta S_N[ v(x)]$$ Or do we instead have to verify that the image of $u(x)$ via $S_N$ is unique? In that case, since we have a Fourier transform, is everything all right? For boundedness, is it enough to show that the norm of the operator is bounded? For example I think that in some sense $$||S_N[u(x)]|| \leq \int_\Bbb R |\hat u(\xi)|d\xi$$ since $u\in L^1(
|
I assume that with $\hat u$ you mean the Fourier transform of $u$ and $\chi_{[-N,N]}$ is the characteristic function of the interval $[-N,N]$ . The first point is to show that the operator $S_N$ is well defined, i.e. that it can be applied to elements of its domain $L^1(\mathbb{R})$ . Now since $$ \hat u (k) = \int_\mathbb{R} \frac{dx}{2\pi} e^{-ikx} u(x) $$ we have \begin{align} \vert \hat u (k)\vert &\le \frac{1}{2\pi} \int_\mathbb{R} \vert u(x) \vert \, dx \\ & = \frac{1}{2\pi} \Vert u \Vert_{L^1} \end{align} Then \begin{align} \left \vert S_N[u](x) \right \vert &\le \int_{-N}^N \vert \hat u (k) \vert dk \\ &\le \frac{2N}{2\pi} \Vert u \Vert_{L^1} \end{align} which shows that $S_N$ is well defined. Since $|\hat u (k) |$ is bounded, it is integrable on any finite interval, and we can use dominated convergence to show that $S_N[u](x)$ is continuous, so it does belong to $C_b^0(\mathbb{R})$ . But also, taking the supremum over $x$ \begin{align} \Vert S_N[u] \Vert_{C^0_b} &= \sup_{x\in\mathbb{R}} \v
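The bound $|S_N[u](x)| \le \frac{2N}{2\pi}\Vert u\Vert_{L^1}$ can be illustrated numerically. With the $\frac{1}{2\pi}$ Fourier convention used above and the sample function $u = \chi_{[-1,1]}$ (my own choice), one gets $\hat u(k) = \frac{\sin k}{\pi k}$, so $S_N[u](0) = \frac{2}{\pi}\operatorname{Si}(N)$:

```python
import numpy as np
from scipy.special import sici

# Sample function u = characteristic function of [-1,1] (illustration only):
#   u_hat(k) = (1/2pi) * int_{-1}^{1} e^{-ikx} dx = sin(k) / (pi k)
# so S_N[u](0) = int_{-N}^{N} u_hat(k) dk = (2/pi) * Si(N).
N = 5.0
u_L1 = 2.0                                  # ||u||_{L^1} = length of [-1,1]
SN_u_at_0 = (2 / np.pi) * sici(N)[0]        # sici returns (Si(N), Ci(N))

# The derived operator bound: |S_N[u](x)| <= (2N / 2pi) * ||u||_{L^1}
bound = (2 * N / (2 * np.pi)) * u_L1
assert SN_u_at_0 <= bound
```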
|
|linear-transformations|fourier-transform|upper-lower-bounds|
| 1
|
When is a sequence that is bounded below by an unbounded, monotone increasing sequence also monotone increasing?
|
Let $(a_n)$ be an unbounded and monotone increasing sequence of positive integers and let $(b_n)$ be another sequence of positive integers such that for all $n \in \mathbb{N}$ , $a_n \leq b_n$ . Evidently, $(b_n)$ is unbounded. Is it the case that $(b_n)$ is also monotone increasing? If not, is there a " $(b_n)$ independent" condition that I can put on $(a_n)$ that will imply $(b_n)$ is monotone increasing?
|
$(b_n)$ may not be monotonically increasing, and there isn't a condition you can force on $(a_n)$ to make $(b_n)$ increasing. For every $(a_n)$ you could set, for example, $$ b_n= \begin{cases} a_n, & \text{if } n \text { is odd,}\\ a_{n+1}+1, & \text{if } n \text { is even.}\\ \end{cases} $$ Then $(b_n)$ decreases from each even index $n$ , where $b_n = a_{n+1}+1$ , to the next odd index $n+1$ , where $b_{n+1} = a_{n+1}$ .
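A concrete instance of this counterexample, with $a_n = n$ (a 1-indexed sketch of the construction above):

```python
# Counterexample with a_n = n (1-indexed).
def a(n):
    return n

def b(n):
    return a(n) if n % 2 == 1 else a(n + 1) + 1

terms = [b(n) for n in range(1, 9)]   # -> [1, 4, 3, 6, 5, 8, 7, 10]

# b dominates a everywhere ...
assert all(b(n) >= a(n) for n in range(1, 100))
# ... yet b is not monotone increasing: it drops at every even index.
assert any(terms[i] > terms[i + 1] for i in range(len(terms) - 1))
```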
|
|real-analysis|sequences-and-series|monotone-functions|
| 0
|
Solve the cubic polynomial $a^3 - 10a + 5 = 0$
|
I am not able to understand how to solve a cubic polynomial. I have tried hit and trial for finding the first root but in vain. How do I solve for $a$ in $$ a^3 - 10a + 5 = 0\ ? $$
|
We can easily check that $a^3-10a+5=0$ has no rational solution. For polynomials with leading coefficient 1 and constant coefficient $5$ , only $-5, -1, 1,$ and $5$ are possible rational solutions, due to the Rational Root Theorem. We apply the cubic formula. As there is no quadratic term, the formula is simplified. https://math.stackexchange.com/a/4868629/928654 Let $i^2=-1$ , and let $(w_1,w_2)$ be exactly one of the three pairs $(1,1)$ , $\left(\frac{-1+i\sqrt{3}}{2},\frac{-1-i\sqrt{3}}{2}\right)$ , or $\left(\frac{-1-i\sqrt{3}}{2},\frac{-1+i\sqrt{3}}{2}\right)$ ; then $$w_1\sqrt[3]{-2.5+\sqrt{6.25-\frac{1000}{27}}}+w_2\sqrt[3]{-2.5-\sqrt{6.25-\frac{1000}{27}}}=w_1\sqrt[3]{-2.5+\frac{i\sqrt{3325}}{6\sqrt{3}}}+w_2\sqrt[3]{-2.5-\frac{i\sqrt{3325}}{6\sqrt{3}}}$$ We have the cube root of a complex number, whose real and imaginary parts are not expressible by radicals. Complex numbers have an absolute value. In this case, we have $\sqrt{2.5^2+\left(\frac{\sqrt{3325}}{6\sqrt{3}}\right)^2}=\sqrt{6.25+\frac{3325}{108}}$ . This is
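Since the discriminant $-4p^3-27q^2 = 4000-675 = 3325 > 0$, this is the casus irreducibilis: three real roots. They can be cross-checked numerically against the standard trigonometric form of Cardano's solution for a depressed cubic $a^3+pa+q=0$ (a sketch, not part of the answer above):

```python
import numpy as np

# Roots of a^3 - 10a + 5 = 0, computed numerically.
roots = np.roots([1, 0, -10, 5])
assert np.allclose(np.imag(roots), 0)      # casus irreducibilis: three real roots
roots = np.sort(np.real(roots))

# Trigonometric form for t^3 + p t + q = 0 with 4p^3 + 27q^2 < 0:
#   t_k = 2 sqrt(-p/3) cos( (1/3) arccos( 3q/(2p) sqrt(-3/p) ) - 2 pi k / 3 )
p, q = -10.0, 5.0
trig = sorted(
    2 * np.sqrt(-p / 3) * np.cos(np.arccos(3 * q / (2 * p) * np.sqrt(-3 / p)) / 3
                                 - 2 * np.pi * k / 3)
    for k in range(3)
)
assert np.allclose(roots, trig)
```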
|
|polynomials|cubics|
| 0
|
I'm wondering if we can prove the following inequality with those ideas.
|
We aim to prove that $\tan\left(\frac{|x+y|}{1+|x+y|}\right)$ $\leq \tan\left(\frac{|x|}{1+|x|}\right) + \tan\left(\frac{|y|}{1+|y|}\right)$ for all $x, y \in \mathbb{R}$ . Let's assume that $\tan\left(\frac{|x+y|}{1+|x+y|}\right)$ $\geq \tan\left(\frac{|x|}{1+|x|}\right) + \tan\left(\frac{|y|}{1+|y|}\right)$ for all $x, y \in \mathbb{R}$ . Now consider $\lim_{{y \to \pm\infty}} \tan\left(\frac{|x+y|}{1+|x+y|}\right) \geq \lim_{{y \to \pm\infty}} \left(\tan\left(\frac{|x|}{1+|x|}\right) + \tan\left(\frac{|y|}{1+|y|}\right)\right)$ $\iff \tan(1) \geq \tan\left(\frac{|x|}{1+|x|}\right) + \tan(1)$ $\iff 0 \geq \tan\left(\frac{|x|}{1+|x|}\right)$ which leads to a contradiction since this is not true for all $x \in \mathbb{R}$ . Therefore, the reverse inequality is true for all $x \in \mathbb{R}$ , and since it holds the limit inequality, meaning $\lim_{{y \to \pm\infty}} \tan\left(\frac{|x+y|}{1+|x+y|}\right) \leq \lim_{{y \to \pm\infty}} \left(\tan\left(\frac{|x|}{1+|x|}\right) + \tan\lef
|
There is a flaw in your reasoning. You first showed that $$\tan\left(\frac{|x+y|}{1+|x+y|}\right) \geq \tan\left(\frac{|x|}{1+|x|}\right) + \tan\left(\frac{|y|}{1+|y|}\right)\quad \forall x, y \in \mathbb{R}$$ leads to a contradiction by using a limits-based argument. If the inequality above does not hold for all $x,y$ , then the correct deduction to make is that there are specific values of $x,y$ such that the inequality does not hold. Hence you deduce that $$ \exists x,y \in \mathbb R : \quad\tan\left(\frac{|x+y|}{1+|x+y|}\right) < \tan\left(\frac{|x|}{1+|x|}\right) + \tan\left(\frac{|y|}{1+|y|}\right).$$ In general it's incorrect to say "(inequality holds for all $x$ ) leads to contradiction" $\implies $ "reverse inequality is true for all $x$ " Instead you should have "(inequality holds for all $x$ ) leads to contradiction" $\implies$ "There exists a value of $x$ such that the reverse inequality is true." In fact this flaw is also present at the start of the answer. Indeed you are trying to prove that $$\tan\left(\frac{|x+y|}{1+|x+y|}\right) \leq \tan\left(\f
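Although the proof strategy is flawed, the target inequality itself survives a random numerical spot-check (not a proof, and a legitimate proof would go through subadditivity of $s\mapsto\tan\frac{s}{1+s}$ on $[0,\infty)$ rather than the limit argument):

```python
import numpy as np

# Spot-check of  tan(|x+y|/(1+|x+y|)) <= tan(|x|/(1+|x|)) + tan(|y|/(1+|y|)).
# Every argument of tan lies in [0, 1), a subset of [0, pi/2), so tan is defined.
g = lambda t: np.tan(np.abs(t) / (1 + np.abs(t)))

rng = np.random.default_rng(42)
x = rng.uniform(-100, 100, 10_000)
y = rng.uniform(-100, 100, 10_000)
assert np.all(g(x + y) <= g(x) + g(y) + 1e-12)   # no counterexample found
```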
|
|real-analysis|general-topology|multivariable-calculus|inequality|trigonometry|
| 1
|
Type of solution (as the critical point of the energy functional) after doing scaling to the PDE.
|
Now I have a nonlinear elliptic PDE, if I do scaling such as $u=\lambda v$ and then get a new PDE, then I turn to study the new equation since it has a simpler form. My question is easy to understand: if I prove that for the new PDE, all the solutions are not the local minimum of the energy functional (of the new PDE), can we say that all the solution of the PDE before scaling are also not the local minimum of the energy functional (of the old PDE)? I have checked their second derivations, they are not the same since we have the scaling. But they are basically the same question, I think it's natural to ask whether the type of critical points will be the same as well?
|
Ok, after seeing some of the comments, I think the answer to your question is yes. Let $$ I[v] = \int_\Omega L(\nabla v, v,x)\, dx $$ be some functional which we’d like to minimise over a set of admissible functions $\mathcal A$ . My interpretation of your question is: there are no stable local minima of $I$ if and only if there are no stable local minima of $I_\lambda$ , where $I_\lambda [v]:=I[\lambda v]$ and $\lambda >0$ . (Correct me if this is wrong) For simplicity, assume that $\mathcal A$ is a vector space (often this is the case). Then $u$ is a local minimum of $I_\lambda$ if $$ \frac{d}{d\varepsilon}\bigg \vert_{\varepsilon=0} I_\lambda [u+\varepsilon \eta]=0 \text{ and } \frac{d^2}{d\varepsilon^2}\bigg \vert_{\varepsilon=0} I_\lambda [u+\varepsilon \eta]\geq 0 $$ for all $\eta \in \mathcal A$ . But clearly, $$\frac{d}{d\varepsilon}\bigg \vert_{\varepsilon=0} I_\lambda [u+\varepsilon \eta]= \lambda\frac{d}{d\varepsilon}\bigg \vert_{\varepsilon=0} I[\lambda u+\varepsilon \e
|
|functional-analysis|partial-differential-equations|calculus-of-variations|elliptic-equations|
| 1
|
Understanding the axioms for neighbourhoods and their independence/consistency
|
After reading answers to this question I can't help being confused about axioms 2 and 4 being truly independent, or about their true meaning for that matter. Recalling them, for a given set $X$ : If $N$ is a neighbourhood of $x \in X$ (i.e., $N \in \mathcal N(x)$ ), then $x \in N$ . If $N \in \mathcal N(x)$ and $N \subseteq N'$ then $N' \in \mathcal N(x)$ . If $N, N' \in \mathcal N(x)$ then $N \cap N' \in \mathcal N(x)$ . For any $x \in X$ and $N \in \mathcal N(x)$ there exists $N' \in \mathcal N(x)$ such that for every $y \in N'$ it holds that $N \in \mathcal N(y)$ . Now my questions: By (1) and (4), does it follow that $N' \subseteq N$ (referring to $N'$ and $N$ as in 4)? Are $N'$ and $N$ required to be different (i.e. 'proper inclusion') in 4? Can you give an example of a neighbourhood system that would satisfy the first three axioms but not the fourth (assuming the later false for the purpose)?
|
I find that it helps with understanding these types of statements to firstly re-state them in more plain English. So "if $N$ is a neighbourhood of $x$ , then if $N$ is enlarged it remains a neighbourhood of $x$ " and "if $N$ is a neighbourhood of $x$ , then it contains a smaller neighbourhood $M$ of $x$ such that $N$ is a neighbourhood of each point in $M$ ". It also helps to have some (perhaps slightly non-rigorous) heuristic, or slogan, or picture in mind. For example " $N$ is a neighbourhood of $x$ means that if you move $x$ a tiny little bit then it stays in $N$ " (then axiom 4 says "if you move $x$ a tiny bit, $N$ will still be a neighbourhood of $x$ "). Or you might find it helps you to ground these examples in a concrete setting like $X = \Bbb R^2$ with the usual topology, by checking why these axioms are actually true in that case. As for your questions: Yes, indeed the $N'$ in axiom 4 will be a subset of $N$ (in fact that's even how it's stated at the question you link). No, t
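For the third question, here is a hedged, brute-force-checked finite example (my own construction, not from the original answer): on $X=\{0,1,2\}$ take $\mathcal N(x)$ to be all supersets of a "core" set, with $\mathrm{core}(0)=\{0,1\}$, $\mathrm{core}(1)=\{1,2\}$, $\mathrm{core}(2)=\{2\}$. Axioms 1–3 hold (these are principal filters), but axiom 4 fails at $x=0$ for the neighbourhood $\{0,1\}$:

```python
from itertools import chain, combinations

X = {0, 1, 2}
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

# N(x) = all supersets of core[x]  (a hypothetical example, for illustration)
core = {0: frozenset({0, 1}), 1: frozenset({1, 2}), 2: frozenset({2})}
N = {x: [S for S in subsets if core[x] <= S] for x in X}

for x in X:
    assert all(x in S for S in N[x])                                # axiom 1
    assert all(T in N[x] for S in N[x] for T in subsets if S <= T)  # axiom 2
    assert all(S & T in N[x] for S in N[x] for T in N[x])           # axiom 3

# Axiom 4 fails at x = 0 with N = {0,1}: no N' in N(0) consists
# entirely of points y with {0,1} in N(y).
bad = frozenset({0, 1})
assert bad in N[0]
assert not any(all(bad in N[y] for y in Np) for Np in N[0])
```

Intuitively, $1$ is "infinitesimally close" to $2$ but not vice versa, so the neighbourhood $\{0,1\}$ of $0$ fails to be a neighbourhood of the points near $0$.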
|
|general-topology|axioms|neighbourhood|
| 1
|
polynomial solution of $a_n + ca_{n-1} + a_{n-2} = 0$
|
Let $c$ be a constant, and consider the following recurrence relation: $a_n + ca_{n-1} + a_{n-2} = 0$ . I am asked to find the value of $c$ for which the recurrence has a non-constant polynomial solution. I declared $f(x) = \sum_{n=0}^\infty a_n \cdot x^n$ , multiplied the recurrence by $x^n$ and took the sum from $n=2$ , and got that $$\sum_{n=2}^\infty a_n \cdot x^n = -c \cdot \sum_{n=2}^\infty a_{n-1} \cdot x^n - \sum_{n=2}^\infty a_{n-2}x^n$$ $$\sum_{n=2}^\infty a_n \cdot x^n = -cx \cdot \sum_{n=2}^\infty a_{n-1} \cdot x^{n-1} - x^2 \cdot \sum_{n=2}^\infty a_{n-2}x^{n-2}$$ $$ f(x) - a_0 - a_1 x = -cx(f(x)-a_0) - x^2f(x)$$ $$ (1+cx+x^2)f(x) = a_0 cx + a_1 x + a_0 $$ $$f(x) = \frac {a_0 cx + a_1 x + a_0}{x^2 +cx + 1} $$ and here I can't see any restriction on $c$ . Any $c$ looks fine to me, and no specific $c$ would cause problems. Would love to get some help moving forward, or corrections if my way of solving this question is wrong. Thank you!
|
Polynomial sequences ${\mathbf a} := (a_n)$ of degree $<k$ are characterized by the property that $(\Delta^k {\bf a})$ is the zero sequence, where $\Delta$ is the finite difference operator , $(\Delta{\textbf a})_n = a_{n + 1} - a_n$ . So, if ${\mathbf a}$ is a polynomial sequence of degree $<2$ satisfying the recurrence for some $c$ , we have $$0 = (\Delta^2 {\bf a})_n = (\Delta {\bf a})_{n+1} - (\Delta {\bf a})_n = (a_{n + 2} - a_{n + 1}) - (a_{n + 1} - a_n) = a_{n + 2} - 2 a_{n + 1} + a_n $$ But by hypothesis $a_{n + 2} = -c a_{n + 1} - a_n$ , so the condition is $$0 = (-c a_{n + 1} - a_n) - 2 a_{n + 1} + a_n = (2 + c) a_{n + 1} .$$ So, either $a_n = 0$ for all $n$ (which is a constant sequence), or $c = -2$ . Substituting $c = -2$ gives the recurrence $$a_n - 2 a_{n - 1} + a_{n - 2} = 0,$$ and it's routine to verify that any polynomial of degree $1$ satisfies the recurrence. Alternatively, the general solution to a constant-coefficient linear recurrence $$a_n + b_{k - 1} a_{n - 1} + \cdots + b_0 a_{n - k} = 0$$ i
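The conclusion is easy to check directly: a degree-$1$ polynomial sequence satisfies the recurrence exactly when $c=-2$ (a small sketch with an arbitrary degree-1 polynomial of my choosing):

```python
# Residual of the recurrence a_n + c*a_{n-1} + a_{n-2} = 0 for a sequence a.
def residual(c, a, n):
    return a(n) + c * a(n - 1) + a(n - 2)

a = lambda n: 3 * n + 7          # an arbitrary non-constant degree-1 polynomial

# c = -2 works for every n ...
assert all(residual(-2, a, n) == 0 for n in range(2, 50))
# ... while any other c fails somewhere.
assert any(residual(-1.9, a, n) != 0 for n in range(2, 50))
```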
|
|combinatorics|recurrence-relations|
| 0
|
Prove that $f[f^{-1} [f[X]]] = f[X]$
|
I'm trying to prove that $f[f^{-1} [f[X]]] = f[X]$ , where $f: A\to B $ and $ X \subset A$ . I have already proved that $X \subset f^{-1}[f[X]]$ . My thoughts: First, I know that $ f[X] = \{f(x):x \in X\} =\{f(x) : x \in f^{-1}[f[X]]\} $ (because $X \subset f^{-1}[f[X]]$ ). The latter set is exactly $f[f^{-1} [f[X]]]$ , so we see that $f[f^{-1}[ f[X]]] = f[X]$ . Or can I just say that $ f[X] = \{f(x):x \in X\}$ implies that $f(x) \in f[X]$ and, as $X \subset f^{-1}[f[X]]$ , we conclude that $f[X] =f[f^{-1} [f[X]]]$ ? Is this right? If so, is it a sufficient proof? Thank you for your help.
|
Another way to prove this statement is to first show that if $f:A\to B$ is a function and $Y\subseteq B$ , then $f(f^{-1}(Y))=Y\cap\DeclareMathOperator{\im}{im}\im(f)$ , where $\im(f)$ denotes the image of $f$ . Thus, if $X\subseteq A$ , then $$ f(f^{-1}(f(X)))=f(X)\cap\im(f)=f(X) \, . $$
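As a sanity check (not a proof), the identity can be verified by brute force over all functions between small finite sets:

```python
from itertools import product

# Brute-force check of f[f^{-1}[f[X]]] = f[X] for all functions
# f: A -> B and all X subset A, with A = {0,1,2} and B = {0,1}.
A, B = [0, 1, 2], [0, 1]

def image(f, S):
    return {f[a] for a in S}

def preimage(f, T):
    return {a for a in A if f[a] in T}

subsets_of_A = [{a for a in A if bits[a]} for bits in product([0, 1], repeat=len(A))]

for f in product(B, repeat=len(A)):     # f encoded as a tuple: f[a] is f(a)
    for X in subsets_of_A:
        assert image(f, preimage(f, image(f, X))) == image(f, X)
```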
|
|functions|
| 0
|