title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64)
|---|---|---|---|---|
Integral of $\int_{b-a}^{a} \frac{1}{x} \sqrt{a^2-(b-x)^2}dx$
|
I am solving the integral $$\int_{b-a}^{a} \frac{1}{x} \sqrt{a^2-(b-x)^2}dx$$ which is related to the magnetic flux of a toroid with circular cross section. I arrived at this integral but have no idea how to compute it...
|
Split the integral into three \begin{align} &\int_{b-a}^{a} \frac{1}{x} \sqrt{a^2-(b-x)^2}dx\\ =& \int_{b-a}^{a}\frac b{\sqrt{a^2-(b-x)^2}}+ \frac{b-x} {\sqrt{a^2-(b-x)^2} }+\frac{a^2-b^2} {x\sqrt{a^2-(b-x)^2}}\ dx\\ =& \ \bigg[b\cos^{-1}\frac{b-x}a + \sqrt{a^2-(b-x)^2}\\ &\>\>\>\>\>\>\>\>\>\>\>\>\>-{\sqrt{b^2-a^2}}\tan^{-1}\frac{a^2-b^2+b x}{\sqrt{b^2-a^2} \sqrt{a^2-(b-x)^2}}\bigg]_{b-a}^a\\ =&\ b\cos^{-1}\frac{b-a}a+ \sqrt{b(2a-b)}\\ &\>\>\>\>\>-{\sqrt{b^2-a^2}}\bigg( \frac\pi2+\tan^{-1}\frac{a^2-b^2+ab}{\sqrt{b^2-a^2} \sqrt{b(2a-b)}}\bigg) \end{align} The first two integrals are straightforward and the third is evaluated via the Euler substitution.
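A quick numerical sanity check of the closed form (not part of the original answer; the sample values $a=1$, $b=3/2$ are arbitrary, chosen so that $a<b<2a$ keeps every square root real):

```python
import math

def closed_form(a, b):
    # Closed form derived above; needs a < b < 2a so that both
    # sqrt(b^2 - a^2) and sqrt(b*(2a - b)) are real.
    c = math.sqrt(b * b - a * a)
    return (b * math.acos((b - a) / a)
            + math.sqrt(b * (2 * a - b))
            - c * (math.pi / 2
                   + math.atan((a * a - b * b + a * b)
                               / (c * math.sqrt(b * (2 * a - b))))))

def quadrature(a, b, n=20_000):
    # Composite Simpson's rule for the original integrand on [b-a, a].
    f = lambda x: math.sqrt(max(a * a - (b - x) ** 2, 0.0)) / x
    lo, hi = b - a, a
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

print(closed_form(1.0, 1.5), quadrature(1.0, 1.5))
```

The two values agree to several decimal places, which corroborates the antiderivative obtained from the Euler substitution.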
|
|calculus|integration|
| 0
|
Stochastic average of a differential equation is not the same as average of its solutions
|
Assume a static (time-independent) random variable $r$ for which we know its probability distribution $P(r)$ . Consider this to be a Gaussian distribution, such that $\langle r \rangle=0$ and $\langle r^2 \rangle=r_0$ , etc. Consider now the following differential equation: $\frac{d}{dt}X(t)=r A X(t)$ , where $X=(x_1,x_2)^T$ and $A$ is a time-independent $2\times 2$ matrix. I am interested in taking the average of the differential equation over the random variable. If I naively just take the expected value of the previous differential equation, the problem is that I would just get $\frac{d}{dt}X=0$ because $\langle r \rangle=0$ . This is not true, as $X$ is a functional of $r$ , $X(r,t)$ , so $\langle r X(r,t) \rangle\neq0$ . One possible solution would be to formally solve the differential equation as $X(t)=e^{rAt}X(0)$ and then take the average. I would like to avoid this method, as it is not always applicable. What would be the correct way to proceed if I would like to get an averaged differential equation?
|
You should be careful about how you consider this ODE. A reasonable interpretation is a path-by-path interpretation. That is, for each $\omega \in \Omega$ we can define the differential equation: $$\frac{dX_t}{dt}(\omega ) = r(\omega) A X_t$$ whose solution is the trajectory $$X_t(\omega) = \exp \left( r(\omega) A t \right) X_0$$ One can then compute the ensemble average as: $$\mu_t = E(X_t) = E( \exp \left( r A t \right) X_0)$$ where the expectation is taken relative to the distribution of $r$ . Novikov's criterion gives you conditions under which a certain process is a martingale, but that process has nothing to do with your random ODE.
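A small Monte Carlo sketch of this point, under the illustrative assumption (not from the original question) that $A$ is the rotation generator $\begin{pmatrix}0&1\\-1&0\end{pmatrix}$ and $r\sim N(0,s^2)$:

```python
import math, random

# For the rotation generator A = [[0, 1], [-1, 0]], exp(r*A*t) is a rotation
# by the angle r*t, so with X_0 = (1, 0) the first component of X_t is cos(r*t).
# For r ~ N(0, s^2), the Gaussian characteristic function gives
# E[cos(r*t)] = exp(-s^2 t^2 / 2): the averaged solution decays, while the
# naive "average the ODE first" argument would predict E[X_t] = X_0.
random.seed(0)
s, t, trials = 1.0, 1.0, 200_000
mc = sum(math.cos(random.gauss(0.0, s) * t) for _ in range(trials)) / trials
exact = math.exp(-s * s * t * t / 2)
print(mc, exact)
```

The sample average matches $e^{-s^2t^2/2}$, confirming that averaging the solutions differs from solving the averaged equation.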
|
|ordinary-differential-equations|stochastic-processes|average|stochastic-differential-equations|
| 0
|
Showing that ~A cannot be derived from A
|
If from the assumption of some $A$ it follows that $A$ is a theorem, then, on pain of inconsistency of the system, $\neg A$ is not derivable from $A$ . Is this statement true?
|
If $A$ itself is inconsistent, then you can derive $A$ as well as $\neg A$ from $A$ , and that is using a complete and consistent (sound) proof system. For example, from $P \land \neg P$ you can derive $P \land \neg P$ itself, but you can also derive $\neg (P \land \neg P)$ . Indeed, you can derive anything from a contradiction.
|
|logic|
| 1
|
Assume that $\frac{d}{d\theta}\sin\theta = \cos\theta$. Use implicit differentiation to prove that $\frac{d}{d\theta}\cos\theta = - \sin\theta$
|
Assume that $\frac{d}{d\theta}\sin\theta = \cos\theta$ . Use implicit differentiation to prove that $\frac{d}{d\theta}\cos\theta = - \sin\theta$ Here's my attempt: $(\sin\theta)^2 + (\cos\theta)^2 = 1$ Differentiate both sides and we have: $2\sin\theta(\frac{d}{d\theta}\sin\theta) + 2\cos\theta(\frac{d}{d\theta}\cos\theta) = 0$ Since $\frac{d}{d\theta}\sin\theta = \cos\theta$ , it follows that: $\cos\theta(\sin\theta + \frac{d}{d\theta}\cos\theta) = 0$ Now I'm stuck. We can't just conclude that $(\sin\theta + \frac{d}{d\theta}\cos\theta) = 0$ since it's possible that $\cos\theta = 0$ while the other expression isn't. Please advise!
|
I was inspired by Vasili's answer, and came up with an even simpler approach. Use $\cos(x)=\sin(x+\pi/2)$ . Then: $$\cos'(x) = \sin'(x+\pi/2) = \cos(x+\pi/2) = \sin(x+\pi) = -\sin(x)$$
|
|calculus|implicit-differentiation|
| 0
|
Closed form answer of an Integral containing exponential and cosine function
|
I am stuck with the following integral while studying some cases of anharmonic oscillations: $$\int_0^t\frac{\exp(-\mu t')}{\cos^2(\omega t'-\theta_0)}\, dt'$$ where $\mu$ , $\theta_0$ and $\omega$ are real constants. I have put this integral into Mathematica, which shows a very complicated result involving hypergeometric functions, but I could not understand how the derivation can be done. Could you please suggest any relevant sources or methods with which I can do this integral?
|
Let $ u = \omega t'-\theta_0 \Rightarrow dt' = \frac{du}{\omega} $ , $ t' = \frac{u+\theta_0}{\omega}$ , and set $a = \omega\cdot 0-\theta_0 = -\theta_0$ and $b=\omega t-\theta_0$ . Also set $\psi = \frac\mu{\omega}$ . $$ \omega^{-1}\int_a^b e^{-\psi(u+\theta_0)}\sec^2(u)\,du=\omega^{-1}e^{-\psi\theta_0}\int_a^b e^{-\psi u}\sec^2(u)\,du=\omega^{-1}e^{-\psi\theta_0}\left(\left.e^{-\psi u}\tan{(u)}\right.\rvert_a^b+\psi\int_a^b e^{-\psi u}\tan{(u)}\, du\right)$$ Now to evaluate $$ \int_a^b e^{-\psi u}\tan{(u)}\, du $$ set $ \xi = e^{-\psi u} \Rightarrow du = -\frac{d\xi}{\psi\xi} $ and $u = \frac{-\ln(\xi)}{\psi}$ , so with $c=e^{-\psi a}$ , $ d =e^{-\psi b}$ , $$ \int_a^b e^{-\psi u}\tan{(u)}\, du=\psi^{-1}\int_c^d \tan{\left(\frac{\ln(\xi)}{\psi}\right)} d\xi=\psi^{-1} i^{-1}\int_c^d \frac{e^{\frac{i\ln(\xi)}{\psi}}-e^{-\frac{i\ln(\xi)}{\psi}}}{e^{-\frac{i\ln(\xi)}{\psi}}+e^{\frac{i\ln(\xi)}{\psi}}} d\xi=\psi^{-1} i^{-1}\left(\xi\rvert_c^d-2\int_c^d \frac{e^{-\frac{i\ln(\xi)}{\psi}}}{e^{-\frac{i\ln(\xi)}{\psi}}+e^{\frac{i\ln(\xi)}{\psi}}} d\xi\right)$$
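A numerical check of the integration-by-parts step above, with arbitrary sample parameters chosen so that the interval stays inside $(-\pi/2,\pi/2)$ and avoids the poles of $\tan$:

```python
import math

def simpson(f, lo, hi, n=4000):
    # Composite Simpson's rule (n must be even).
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

# Arbitrary sample parameters; [a, b] = [-0.2, 0.8] avoids the poles.
omega, theta0, t, mu = 1.0, 0.2, 1.0, 0.5
psi = mu / omega
a, b = -theta0, omega * t - theta0

# Check: int e^{-psi u} sec^2 u du  =  [e^{-psi u} tan u] + psi * int e^{-psi u} tan u du
lhs = simpson(lambda u: math.exp(-psi * u) / math.cos(u) ** 2, a, b)
rhs = (math.exp(-psi * b) * math.tan(b) - math.exp(-psi * a) * math.tan(a)
       + psi * simpson(lambda u: math.exp(-psi * u) * math.tan(u), a, b))
print(lhs, rhs)
```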
|
|integration|definite-integrals|improper-integrals|special-functions|
| 0
|
Convergence of $\displaystyle\sum_{k=1}^{\infty}\frac{1}{k^k}$ through Cauchy criterion
|
So I need to show that the series $$ \sum_{k=1}^{\infty}\frac{1}{k^k}$$ converges, which is very quick using different tests, but the caveat is that I need to do this using the Cauchy criterion for series. This boils down to showing the sequence of partial sums $(s_n)$ is Cauchy, but the problem lies in actually finding a closed form for $(s_n)$ . I tried checking by hand the cases $n=1, 2, 3, 4, 5$ to find a pattern, make an educated guess and prove it correct by induction, but no clear pattern arises.
|
Consider $s(n)=\sum_{k=1}^nk^{-k}$ . Cauchy's criterion requires that we show that, for all $\epsilon>0$ , there exists an $N\in\Bbb N$ such that $$|s(n+m)-s(n)|\leq \epsilon \quad \text{for all } n> N,\ m\in\mathbb N.$$ In our case, $$|s(n+m)-s(n)|=\sum_{k=n+1}^{n+m}k^{-k}=\sum_{k=1}^m (n+k)^{-(n+k)}$$ We know, for certain, that $$|s(n+m)-s(n)|\leq \sum_{k=1}^m (n+k)^{-2}$$ by simply comparing terms. We can bound this sum with an integral (draw a graph!) $$\sum_{k=1}^m (n+k)^{-2}\leq \int_0^m \frac{1}{(n+x)^2}\mathrm dx=\frac{1}{n}-\frac{1}{n+m}\leq \frac{1}{n}$$ Therefore $$|s(n+m)-s(n)|\leq \frac{1}{n}$$ Therefore, for any given $\epsilon$ , we can simply choose an $N$ such that $\frac{1}{N}<\epsilon$ . $\blacksquare$
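The final bound $|s(n+m)-s(n)|\le\frac1n$ can be spot-checked numerically (illustrative only, for a few sample $n$ and $m$):

```python
# Spot-check |s(n+m) - s(n)| <= 1/n for s(n) = sum_{k<=n} k^(-k).
def s(n):
    return sum(k ** -float(k) for k in range(1, n + 1))

checks = [(n, m) for n in (2, 5, 10, 25) for m in (1, 3, 20)]
results = [abs(s(n + m) - s(n)) <= 1.0 / n for n, m in checks]
print(all(results))
```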
|
|real-analysis|sequences-and-series|
| 0
|
Extracting the vector field from equations
|
Given the following equations: $$ \dot{\Theta} = \operatorname{sgn}(z) \sqrt{| z |} $$ $$ z = \dot{\Theta}^2 \operatorname{sgn}(\dot{\Theta}) $$ $$ \dot{z} = 2 \sqrt{| z |} \ddot{\Theta} $$ how can I get the vector field $\boldsymbol{g}$ ? $$ \dot{\boldsymbol{q}} = \boldsymbol{g} (\boldsymbol{q}) $$ where $\boldsymbol{q} = (\Theta, z)^T$ . This expression was taken from the scientific paper "Dynamics and stability of a rimless spoked wheel: a simple 2D system with impacts" by Michael J. Coleman. EDIT: another equation which is in the paper is: $$ \ddot{\Theta} - \sin(\Theta + \alpha) = 0 $$
|
Equation $(7)$ specifies the time evolution of each of the generalized coordinates by providing $\dot{\theta}(\theta,z)$ (which actually has no explicit $\theta$ dependence in this case) and $\dot{z}(\theta,z)$ . The idea of equation $(9)$ is to repackage these two scalar equations as one vector equation for the time evolution of the state vector $\boldsymbol{q}=\begin{bmatrix} \theta & z\end{bmatrix}^{\intercal}$ . This is done by defining the vector field $\boldsymbol{g}: \boldsymbol{q}=\begin{bmatrix} \theta & z\end{bmatrix}^{\intercal} \mapsto \boldsymbol{\dot{q}} = \begin{bmatrix} \dot{\theta}(\theta,z) & \dot{z}(\theta,z) \end{bmatrix}^{\intercal}$ . You can see the role of $\boldsymbol{g}$ is to map the state vector $\boldsymbol{q}$ to its temporal rate of change $\boldsymbol{\dot{q}}$ such that $\boldsymbol{\dot{q}}=\boldsymbol{g}(\boldsymbol{q})\:(9)$ and that this is done component-wise by viewing $\boldsymbol{\dot{q}} = \begin{bmatrix} \dot{\theta}(\boldsymbol{q}) & \dot{z}(
|
|education|vector-fields|
| 1
|
How to calculate $\int _0^1 \int _0^1\left(\frac{1}{1-xy} \ln (1-x)\ln (1-y)\right) \,dxdy$
|
Let us calculate the sum $$ \displaystyle{\sum_{n=1}^{+\infty}\left(\frac{H_{n}}{n}\right)^2}, $$ where $\displaystyle{H_{n}=1+\frac{1}{2}+\cdots+\frac{1}{n}}$ is the $n$ -th harmonic number. My try: since the intermediate steps are many, we go directly to the sum without the individual decompositions. $$ \begin{split} {\left(\frac{H_{n}}{n}\right)^2} =\left(\frac{H_{n}}{n}\right) \left(\frac{H_{n}}{n}\right) & = \left(-\int _0^1 x^{n-1}\ln (1-x) dx\right) \left(-\int _0^1 y^{n-1} \ln (1-y)dy\right)\\ & =\int _0^1 \int _0^1 x^{n-1}y^{n-1} \ln (1-x)\ln (1-y) dxdy \end{split}$$ So the sum sought, with transfer within the integral, is $$ \begin{split} \int _0^1 \int _0^1 &\left( \sum_{n=1}^{\infty}x^{n-1}y^{n-1} \ln (1-x)\ln (1-y)\right) \,dxdy \\ &= \int _0^1 \int _0^1 \left(\frac{1}{1-xy} \ln (1-x)\ln (1-y)\right) \,dxdy \end{split} $$ My main question is how to evaluate the final integral, not the original sum.
|
\begin{align}J&=\int _0^1 \int _0^1 \frac{\ln (1-x)\ln (1-y) }{1-xy} \,dxdy\\ &\overset{u=1-xy,v=\frac{1-x}{1-xy}}=\int_0^1\int_0^1\frac{\ln\left(\frac{u(1-v)}{1-uv}\right)\ln(uv)}{1-uv}dudv\\ &=\underbrace{\int_0^1\int_0^1\frac{\ln u\ln(uv)}{1-uv}dudv}_{z(v)=uv}+\underbrace{\int_0^1\int_0^1\frac{\ln\left(1-v\right)\ln(uv)}{1-uv}dudv}_{z(v)=uv}-\\&\underbrace{\int_0^1\int_0^1\frac{\ln\left(1-uv\right)\ln(uv)}{1-uv}dudv}_{z(u)=uv}\\ &=\int_0^1 \frac{\ln u}{u}\left(\int_0^u\frac{\ln z}{1-z}dz\right)du+\int_0^1\frac{\ln(1-v)}{v}\left(\int_0^v \frac{\ln z}{1-z}\right)dv-\\&\int_0^1 \frac{1}{v}\left(\int_0^v\frac{\ln(1-z)\ln z}{1-z}\right)dv\\ &\overset{\text{IBP}}=-\frac{1}{2}\int_0^1 \frac{\ln^3 u}{1-u}du+\int_0^1 \frac{\ln(1-v)}{v}\left(-\ln(1-v)\ln v+\int_0^v\frac{\ln(1-z)}{z}dz\right)+\\&\int_0^1\frac{\ln(1-v)\ln^2 v}{1-v}dv\\ &=3\zeta(4)-\underbrace{\int_0^1\frac{\ln^2(1-v)\ln v}{v}dv}_{z=1-v}+\frac{1}{2}\zeta(2)^2+\int_0^1\frac{\ln(1-v)\ln^2 v}{1-v}dv\\ &\boxed{=3\zeta(4)+\frac{1}{2}
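The boxed value is cut off above; numerically, the partial sums of $\sum (H_n/n)^2$ match the classical evaluation $\frac{17\pi^4}{360}$, which is a useful sanity check on the computation:

```python
import math

# Partial sums of sum_{n>=1} (H_n/n)^2 versus the classical value 17*pi^4/360.
N = 200_000
H, total = 0.0, 0.0
for n in range(1, N + 1):
    H += 1.0 / n
    total += (H / n) ** 2
exact = 17 * math.pi ** 4 / 360
print(total, exact)  # the tail of the series is O((log N)^2 / N)
```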
|
|calculus|integration|multivariable-calculus|definite-integrals|summation|
| 0
|
Inequality involving square root of $x$
|
I am reading an article in which they state the following fact: If $a,b,x>0$ , then $x\leq a\sqrt{x}+b$ implies that $x\leq a^2+b$ . I think the proof of this should be pretty trivial, but I cannot figure it out. I have tried squaring both sides of the first inequality and then using the quadratic formula, but it does not turn out as the second inequality. Could someone please help? EDIT: As people have pointed out in the comments, this statement is not true. What I think is true (could someone confirm?) is that for $a,b,x>0$ , $x\leq a\sqrt{x}+b$ implies $x\leq 2a^2+2b$ . This is seen by noting that \begin{align*} x&\leq \frac{a^2}{2}+\frac{a\sqrt{a^2+4b}}{2}+b\\ &\leq \frac{a^2}{2}+\frac{a\sqrt{(a+2\sqrt{b})^2}}{2}+b\\ &= a^2+a\sqrt{b}+b\\ &\leq 2a^2+2b \end{align*} In the article I read, the inequality is used with an unfixed constant before $a$ and $b$ , so I think the result they obtain by using the inequality is true, even though the statement itself is not.
|
Note that $x = a \sqrt{x} + b$ when $t = \sqrt{x}$ satisfies the quadratic $t^2 = a t + b$ , so $t = (a \pm \sqrt{a^2 + 4 b})/2$ , but since $b > 0$ , $a - \sqrt{a^2 + 4 b} < 0$ , so only $t = \sqrt{x} = (a + \sqrt{a^2 + 4 b})/2$ works, and then $$ x = \frac{\left(a + \sqrt{a^2 + 4 b}\right)^2}{4} = \frac{a^2 + a \sqrt{a^2 + 4 b}}{2} + b $$ Since $b > 0$ , this is greater than $a^2 + b$ . We conclude that there are no $a, b > 0$ for which $0 < x \le a\sqrt{x} + b$ implies $x \le a^2 + b$ .
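A quick numerical confirmation of this counterexample for a few sample $(a,b)$ pairs:

```python
import math

# For sample a, b > 0, the borderline root x of x = a*sqrt(x) + b
# satisfies the hypothesis (with equality) yet exceeds a^2 + b.
for a, b in [(1.0, 1.0), (2.0, 0.5), (0.3, 4.0)]:
    t = (a + math.sqrt(a * a + 4 * b)) / 2  # t = sqrt(x)
    x = t * t
    assert abs(x - (a * math.sqrt(x) + b)) < 1e-9  # x = a*sqrt(x) + b
    assert x > a * a + b                           # ...yet x > a^2 + b
print("claimed implication fails for all three samples")
```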
|
|inequality|
| 0
|
Let $V$ be a vector space and $T$ be a linear map on $V$. Suppose that $\text{dim null}(T^2) = 5$. Prove that $\text{dim null}(T) \ge 3$.
|
Let $V$ be a vector space and $T$ be a linear map on $V$ . Suppose that $\text{dim null}(T^2) = 5$ . Prove that $\text{dim null}(T) \ge 3$ . I can come up with a sort of heuristic argument: if $\text{dim null}(T) \le 2$ , then when applying $T$ to $\text{range}(T) (\subset V)$ , the dimension of the null space can grow by at most 2. Hence $\text{dim null}(T^2) \le 4 < 5$ . However, I cannot figure out how to formulate this idea rigorously.
|
You can try to write the kernel of $T^2$ as a sum of vector spaces $$\ker T + \ker T_{|T(V)}.$$ The sum is not direct, but assuming the dimension of $\ker T$ to be at most $2$ like you did, it can add up to at most dimension $4$ . If you want, I can help you clarify why the sum above is equal to $\ker T^2$ .
|
|linear-algebra|vector-spaces|
| 0
|
Optimizing a probability for marbles in two buckets
|
There are 25 red marbles and 25 blue marbles divided between two buckets. You select a bucket uniformly at random, then select a marble from that bucket uniformly at random. Find initial arrangements of marbles into the buckets in order to maximize $\mathbb P(\text{marble picked is red})$ . Source (this problem also appears as Question 4.22 in Heard on the Street , however the author's solution is mere handwaving). Label the buckets and let $R_1$ , $B_1$ denote respectively the number of red and blue marbles in Bucket 1. The probability under consideration is $$\frac 12 \Big(\frac{R_1}{R_1 + B_1} + \frac{25-R_1}{50-R_1 - B_1} \Big).$$ The subsequent optimization problem is to maximize this quantity under the constraints that $R_1\in \{0,\ldots, 25\}$ , $B_1\in \{0,\ldots, 25\}$ . My question is: What's the most efficient way of solving this maximization problem using only pen and paper (e.g. during an interview)? The Mathematica command Maximize[{1/2*(r/(r + b) + (25 - r)/(50 - r - b)), Elem
|
In the extreme case where bucket 1 contains 25 red and 25 blue, the probability of picking a red is $\frac 12$ . In the other extreme case where bucket 1 is empty, this probability is also $\frac 12$ . Let $f(r,b)= \frac{r}{r + b} + \frac{25-r}{50-r - b}$ . Assuming $(r,b)\notin \{(0,0),(25,25)\}$ , $f(r,b)$ is twice the probability under scrutiny. The sum $S:=r+b$ occurs twice in the expression of $f$ ; we are interested in the case where $0 < S < 50$ . When $S=25$ , note that $f(r,b) = 1$ and the probability is $\frac 12$ . If $b>0$ , note that $f(r+1, b-1) - f(r, b) = \frac{50-2S}{S(50-S)}$ . Hence swapping a blue ball for a red ball in bucket $1$ increases strictly the probability as long as $S < 25$ . Assume now that $0 < S < 25$ . Optimality dictates $b=0$ , hence $r>0$ and $f(r,b) = 1+\frac{25-r}{50-r} = 2-\frac{25}{50-r}$ , which decreases strictly with $r$ , and is thus maximal for $r=1$ . Note that $f(1,0) = \frac{73}{49}$ , so the probability is $\frac{73}{98}\approx 0.745 > \frac 12$ . Assume finally that $25 < S < 50$ . Exchanging the roles of bucke
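For an interview-style cross-check, the whole feasible grid is small enough to brute-force with exact rational arithmetic (the two empty-bucket corner cases are excluded, as above):

```python
from fractions import Fraction

# Brute-force all arrangements with both buckets nonempty; the probability
# is (1/2) * (r/(r+b) + (25-r)/(50-r-b)) in exact rationals.
best, best_rb = Fraction(0), None
for r in range(26):
    for b in range(26):
        if (r, b) in ((0, 0), (25, 25)):
            continue  # one bucket would be empty
        p = Fraction(r, 2 * (r + b)) + Fraction(25 - r, 2 * (50 - r - b))
        if p > best:
            best, best_rb = p, (r, b)
print(best, best_rb)  # 73/98 at (r, b) = (1, 0)
```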
|
|probability|optimization|nonlinear-optimization|
| 1
|
Number of compositions of $n$ into an odd number of parts, each of which is at least $3$
|
I'm having trouble with the following question: Let $a_n$ be the number of compositions of $n$ into an odd number of parts, each of which is at least $3$ . Prove that $a_n = [x^n]\frac{x^3-x^4}{1-2x+x^2-x^6}.$ This is what I've done so far: Let $S=N_{\text{odd}\geq 3}^k$ where $N_{\text{odd}\geq 3}= \{3,5,7,...\}.$ Then we have that \begin{align} [x^n]\Phi_S(x) &= [x^n]N_{\text{odd}\geq 3}^k(x) \\ &= [x^n](N_{\text{odd}\geq 3}(x))^k \:\: \text{by product lemma} \\ &= [x^n](x^3+x^5+x^7+...)^k \\ &= [x^n](x^3(1+x^2+x^4+...))^k \\ &= [x^n](\frac{x^3}{1-x^2})^k \:\: \text{by geometric series} \end{align} Since we want an odd number of parts where each part is at least $3,$ we obtain $(\frac{x^3}{1-x^2})^3 + (\frac{x^3}{1-x^2})^5 + (\frac{x^3}{1-x^2})^7 + ...$ I have no clue on how to continue from here to obtain the expression I want to prove. Maybe again some geometric series? Did I even do the previous part right? Hints and/or solutions are very welcome.
|
Since the individual parts are not required to be odd, just at least 3, you should have $[x^n](x^3 + x^4 + x^5 + \cdots)^k$ in your third line. This is of course $[x^n](x^3/(1-x))^k$ . Summing over odd $k \ge 1$ gives $$ \left( x^3 \over 1-x \right)^1 + \left( x^3 \over 1-x \right)^3 + \left( x^3 \over 1-x \right)^5 + \cdots$$ and this is itself a geometric series with common ratio $(x^3/(1-x))^2$ ; you can get the result you want by doing some algebra.
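A quick coefficient check of the geometric-series sum $\frac{x^3/(1-x)}{1-\left(x^3/(1-x)\right)^2}=\frac{x^3(1-x)}{(1-x)^2-x^6}$ against a direct count (expanding the denominator gives $1-2x+x^2-x^6$):

```python
# Direct count: ways[n][p] = number of compositions of n into parts >= 3
# whose number of parts has parity p (0 = even, 1 = odd).
N = 40
ways = [[0, 0] for _ in range(N + 1)]
ways[0][0] = 1  # the empty composition
for n in range(3, N + 1):
    for part in range(3, n + 1):
        ways[n][0] += ways[n - part][1]
        ways[n][1] += ways[n - part][0]
direct = [ways[n][1] for n in range(N + 1)]

# Series coefficients of (x^3 - x^4)/(1 - 2x + x^2 - x^6), via the
# recurrence a_n = 2a_{n-1} - a_{n-2} + a_{n-6} + [n=3] - [n=4].
a = [0] * (N + 1)
for n in range(1, N + 1):
    a[n] = (2 * a[n - 1] - (a[n - 2] if n >= 2 else 0)
            + (a[n - 6] if n >= 6 else 0)
            + (1 if n == 3 else 0) - (1 if n == 4 else 0))
print(direct == a, a[9])  # a_9 = 2: the compositions (9) and (3,3,3)
```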
|
|combinatorics|
| 1
|
Computing the normalization of Jackson Kernel?
|
The Jackson Kernel is $$ K_n(x) = C_n\frac{\sin^4(nx/2)}{\sin^4(x/2)} $$ Where $C_n = \frac{3}{2\pi n(2n^2+1)}$ is chosen to make sure that $\int_{-\pi}^\pi K_n(x) dx =1.$ How could I evaluate this integral? I have not found a reference for this.
|
Following the logic of this answer for the integral $$\int_{-\pi}^{\pi}\frac{\sin^2(nx/2)}{\sin^2(x/2)}dx$$ we can write $$\frac{\sin^4(nx/2)}{\sin^4(x/2)}=e^{-2(n-1)ix}\sum_{j,k,l,m=0}^{n-1}e^{i(j+k+l+m)x}$$ and so the integral is equal to $$2\pi\cdot \#\{j,k,l,m\in[n-1]:j+k+l+m=2(n-1)\}$$ From there it's as simple as counting the number of integer non-negative solutions of the equation $x_1+x_2+x_3+x_4=2n-2$ under the constraints $x_i \leq n-1$ . This number can be easily determined by the inclusion-exclusion principle: There are $\binom{2n-2+4-1}{4-1}=\binom{2n+1}{3}$ non-negative integer solutions (without any constraints) From that, subtract the number of solutions with $x_1 > n-1$ . This is equal to the number of non-negative integer solutions of $y_1+x_2+x_3+x_4=2n-2-n=n-2$ . There are $\binom{n+1}{3}$ such solutions. Do the same for the number of solutions with $x_2>n-1$ , $x_3>n-1$ , and finally, $x_4>n-1$ . There are no more solutions to take into account (i.e. no solutions such th
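A numerical check of both the counting identity and the resulting normalization $1/C_n = \frac{2\pi n(2n^2+1)}{3}$ (midpoint rule; an even number of steps keeps every node away from the removable singularity at $x=0$):

```python
import math

def kernel_integral(n, steps=20_000):
    # Midpoint rule on [-pi, pi] for sin^4(nx/2)/sin^4(x/2); the rule is
    # exact here since the integrand is a trig polynomial of low degree.
    h = 2 * math.pi / steps
    return h * math.fsum(
        (math.sin(n * x / 2) / math.sin(x / 2)) ** 4
        for x in (-math.pi + (i + 0.5) * h for i in range(steps)))

for n in (2, 3, 5):
    count = math.comb(2 * n + 1, 3) - 4 * math.comb(n + 1, 3)
    assert 3 * count == n * (2 * n ** 2 + 1)       # counting identity
    predicted = 2 * math.pi * n * (2 * n ** 2 + 1) / 3
    assert abs(kernel_integral(n) - predicted) < 1e-6
print("1/C_n = 2*pi*n*(2n^2 + 1)/3 confirmed")
```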
|
|calculus|integration|definite-integrals|fourier-analysis|
| 1
|
Use induction: postage $\ge 64$ cents can be obtained using $5$ and $17$ cent stamps.
|
I have come up with: Assume for some $n\ge 64$ that there exist nonnegative integers $x$ and $y$ such that $n = 17x + 5y$ , so that $n+1 = 17x + 5y + 1$ , but I am fairly new to the concept of induction and not sure where to go after this.
|
You can exhibit combinations of stamps for $64,65,66,67,..., 99$ cents as your base case, for instance, and then use the inductive step. More explicitly: Base case: $n=64$ : $2\times$ $17$ c stamps and $6\times$ $5$ c stamps: $n=65$ : you can fill these in (with the help of a computer) $n=66$ : $n=67$ : $\vdots$ $n=99$ : Inductive step: For all $64\leq k\leq n$ (where $n\geq 99$ ), assume that $$k=17a_k+5b_k,$$ for some nonnegative integers $a_k$ and $b_k$ . Hence, $$n+1=17a_n+5b_n+1=17a_n+5b_n+(35-34)=17(a_n-2)+5(b_n+7).$$ If $a_n\geq 2$ the proof is finished. Otherwise, note that $64\leq k=n+1-17\leq n$ and $k=17a_k+5b_k$ . It follows that $$n+1=17(a_k+1)+5b_k,$$ for instance.
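An exhaustive check (in the spirit of the "with the help of a computer" remark above) that every amount from $64$ up is representable, while $63$ is not:

```python
# Check representability n = 17a + 5b with a, b >= 0.
def representable(n):
    return any((n - 17 * a) >= 0 and (n - 17 * a) % 5 == 0
               for a in range(n // 17 + 1))

ok = all(representable(n) for n in range(64, 200))
print(ok, representable(63))
```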
|
|induction|
| 0
|
Summation with indexing variable in term
|
I am trying to solve $$\sum_{j=0}^{n-1}j2^j$$ but I don't know how to proceed with the $j$ in front of $2^j$ . What is making it difficult for me is it is an indexing variable, so I don't think I can factor it out. How would I proceed? Thank you
|
Hint: The triangular array ( $n = 6$ shown) $$ \begin{array}{crcrcrcrcr} &1\cdot2^1 &+ &2\cdot2^2 &+ &3\cdot2^3 &+ &4\cdot2^4 &+ &5\cdot 2^5 \\[8pt] = &2^1 &+ &2^2 &+ &2^3 &+ &2^4 &+ &2^5 \\ & &+ &2^2 &+ &2^3 &+ &2^4 &+ &2^5 \\ & & & &+ &2^3 &+ &2^4 &+ &2^5 \\ & & & & & &+ &2^4 &+ &2^5 \\ & & & & & & & &+ &2^5 \end{array} $$ can be summed by columns first or by rows first.
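A quick check of the row summation: row $i$ of the array (for $i=1,\dots,n-1$) is $2^i+\cdots+2^{n-1}=2^n-2^i$, and adding the rows gives $(n-1)2^n-(2^n-2)=(n-2)2^n+2$:

```python
# Verify sum_{j=0}^{n-1} j*2^j = (n-2)*2^n + 2 both directly and row-by-row.
for n in range(1, 15):
    direct = sum(j * 2 ** j for j in range(n))
    by_rows = sum(2 ** n - 2 ** i for i in range(1, n))
    assert direct == by_rows == (n - 2) * 2 ** n + 2
print("sum_{j=0}^{n-1} j*2^j = (n-2)*2^n + 2")
```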
|
|discrete-mathematics|summation|
| 0
|
Obtaining a tighter lower bound on an American call with two discrete dividends
|
Question Suppose a stock pays 2 discrete dividends $d_1, d_2$ at times $t_1, t_2$ respectively, where $t < t_1 < t_2 < T$ . Assume the risk-free rate, $r$ , is a positive constant. Given that the lower and upper bounds for an American call, $C_t$ , are trivially determined as $$\max{ \left( S_t - D_t - Ke^{-r(T-t)}, S_t - K, 0 \right)} \leq C_t \leq S_t.$$ Consider a strategy that exercises the option at the instant before time $t_1$ ; we have $$S_t - Ke^{-r(t_1 -t)} \leq C_t.$$ Consider a strategy that exercises the option at the instant before time $t_2$ ; we have $$S_t - d_1e^{-r(t_1 -t)} - Ke^{-r(t_2 -t)} \leq C_t.$$ Combining the three inequalities above, what is the improved lower bound for an American call in this situation? My attempt I understand that an American call with discrete dividends should at most be exercised at the instant before the ex-dividend dates for optimal payoff, so I obtained the lower bound as $$\max{\left(S_t - Ke^{-r (t_1 - t)}, S_t - d_1e^{-r (t_1 - t)} - Ke^{-r (t_2 - t)},
|
You can avoid including it simply because it can never be the maximum value, since $$ S_t - K e^{-r(t_1-t)}>S_t - K $$ holds for every $t < t_1$ , which is always the case by construction. Hence including it gives no information, since it can never be attained.
|
|finance|upper-lower-bounds|
| 1
|
What is the maximal ratio of the largest and smallest numbers when the sum and sum of squares are fixed?
|
Given $n$ positive real numbers $a_1\geq a_2\geq\cdots\geq a_n>0$ . Assume $\sum_{i=1}^n a_i=b_1, \sum_{i=1}^n a_i^2=b_2$ . I want to find an upper bound on $\frac{a_1}{a_n}$ , and the condition when this upper bound is achieved. My intuition is that the upper bound is achieved when either $a_1=a_2=\cdots=a_{n-1}>a_n$ or $a_1>a_2=a_3=\cdots=a_n$ , but I don't know how to prove it. Some ideas: if for any fixed small $a_n$ , the equations $a_1+\cdots+a_{n-1}=b_1-a_n, a_1^2+\cdots+a_{n-1}^2=b_2-a_n^2$ have solutions, then one can let $a_n\to 0$ , and then $\frac{a_1}{a_n}\to\infty$ . To prevent this from happening, we need some conditions on $b_1, b_2$ so that $a_n$ has a positive lower bound. Specifically, one can use the following inequality \begin{align} \frac{a_1+\cdots+a_{n-1}}{n-1} \leq \sqrt{\frac{a_1^2+\cdots+a_{n-1}^2}{n-1}} \end{align} We plug in $a_1+\cdots+a_{n-1}=b_1-a_n$ and $a_1^2+\cdots+a_{n-1}^2=b_2-a_n^2$ , and after some calculations, we get $na_n^2-2b_1a_n+b_1^2-(n-1)b_2\leq 0$ . T
|
This can be solved using Lagrange multipliers. The objective function is $$ f(a_1,\ldots,a_n)=\log\frac{a_1}{a_n}-\lambda\sum_ia_i-\mu\sum_ia_i^2\;, $$ where I took the logarithm of the ratio to be maximized in order to simplify the calculations. Setting the derivative with respect to $a_k$ for $k$ other than $1$ or $n$ to zero yields $\lambda+2\mu a_k=0$ and thus $a_k=-\frac\lambda{2\mu}$ . The corresponding equations for $k=1$ and $k=n$ , $\lambda+2\mu a_1=\frac1{a_1}$ and $\lambda+2\mu a_n=-\frac1{a_n}$ , are linear in $\lambda$ and $\mu$ and can thus be solved to obtain $$ -\frac\lambda{2\mu}=\frac{a_1^2+a_n^2}{a_1+a_n}\;, $$ which is indeed between $a_1$ and $a_n$ and thus an admissible value of the other $a_k$ . Substituting the $a_k$ into the constraints yields \begin{eqnarray*} (n-2)\frac{a_1^2+a_n^2}{a_1+a_n}+a_1+a_n&=&b_1\;,\\ (n-2)\left(\frac{a_1^2+a_n^2}{a_1+a_n}\right)^2+a_1^2+a_n^2&=&b_2\;. \end{eqnarray*} Dividing the square of the first equation by the second yields $$
|
|combinatorics|analysis|inequality|
| 0
|
A probability question about a test with multiple questions
|
I was wondering about this problem: say I have to take a test made of $31$ questions chosen among a database of $140$ questions total. Those questions are open questions (that is, not multiple choice questions). Say that I know all the database of questions, because it's public, but I have studied poorly and I only know $100$ of the $140$ questions. Is there a way to find a sort of "average" probability for which at least $25$ of the $31$ questions are among the $100$ I know? I said $25$ but we could do $n$ to generalise. I was thinking that something I should (?) take into account is the number of ways the professor can choose $31$ questions among the $140$ , that is $$\binom{140}{31} = 11338699879051838313792998934400$$ Now from here I don't know how to proceed. I maybe think that I should take into account the fact that I know $100/140$ of the questions, but I don't know how to put the $40$ questions I don't know into account, in the total ways to choose the questions. Also my "at lea
|
This is a hypergeometric probability. In the question pool, there are $100$ questions you have prepared for, and $40$ questions you have not. If $31$ questions are selected without replacement and $X$ is the random number of prepared questions on the test, then the desired probability is $$\Pr[X \ge 25] = \sum_{x=25}^{31} \frac{\binom{100}{x} \binom{40}{31-x}}{\binom{140}{31}}.$$
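A direct computation with exact integer binomials (Vandermonde's identity guarantees the full pmf sums to $1$):

```python
from math import comb

# P(X >= n) for the hypergeometric count X of "known" questions on the test:
# 31 questions drawn without replacement from 140, of which 100 are known.
def p_at_least(n):
    return sum(comb(100, x) * comb(40, 31 - x)
               for x in range(n, 32)) / comb(140, 31)

print(p_at_least(0), p_at_least(25))  # p_at_least(0) = 1 by Vandermonde
```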
|
|probability|combinatorics|probability-distributions|binomial-coefficients|
| 0
|
This question is about the normal to an ellipse
|
If the normal at $(4 \cos \alpha, 3 \sin \alpha)$ on the ellipse $9 x^2+16 y^2=144$ again intersects the ellipse at $(4 \cos \beta, 3 \sin \beta)$ , then $\cos \beta=\cos \alpha\left(\frac{a-b \cos ^2 \alpha}{c-d \cos ^2 \alpha}\right)$ where $\mathrm{a}, \mathrm{b}, \mathrm{c}, \mathrm{d} \in \mathrm{N}$ and $\operatorname{GCD}(\mathrm{a}, \mathrm{b}, \mathrm{c}, \mathrm{d})=1 .(\alpha \neq \mathrm{n} \pi, \mathrm{n} \in \mathrm{I})$ We have to find $a, b, c, d$ . I have written the equation of the normal to the ellipse at the first point, then substituted the second point and got a relation, but I am stuck after that.
|
Let $P = (x_1, y_1) = ( 4 \cos \alpha, 3 \sin \alpha ) $ The normal vector at $P$ is $ g = ( 3 \cos \alpha, 4 \sin \alpha)$ Therefore, the normal line at $P$ has the parametric equation $ \ell(s) = P + s g = (4 \cos \alpha , 3 \sin \alpha) + s ( 3 \cos \alpha, 4 \sin \alpha) = ( (4 + 3 s) \cos \alpha , (3 + 4 s ) \sin \alpha )$ Substituting this into the equation of the ellipse, we get $ 9 (4 + 3 s)^2 \cos^2 \alpha + 16 (3 + 4 s)^2 \sin^2 \alpha = 144 $ And this reduces to $ s( 216 \cos^2 \alpha + 384 \sin^2 \alpha ) + s^2 ( 81 \cos^2 \alpha + 256 \sin^2 \alpha) = 0 $ Since $s\ne 0$ then $s = - \dfrac{ 216 \cos^2 \alpha + 384 \sin^2 \alpha }{ 81 \cos^2 \alpha + 256 \sin^2 \alpha } $ Substituting this into the expression for $\ell(s)$ , you get $ Q = P + s g = ( (4 + 3 s ) \cos \alpha, (3 + 4 s ) \sin \alpha ) \\ = \dfrac{( (-324 \cos^2 \alpha - 128 \sin^2 \alpha) \cos \alpha , (-621 \cos^2 \alpha - 768 \sin^2 \alpha ) \sin \alpha ) }{81 \cos^2 \alpha + 256 \sin^2 \alpha }$ But $Q = ( 4 \cos \beta, 3 \sin \beta )$ .
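A numerical spot-check of this construction at an arbitrary $\alpha$ (note the linear coefficient $16\cdot 24 = 384$ from expanding $16(3+4s)^2$):

```python
import math

alpha = 0.7                              # arbitrary test angle
c, s2 = math.cos(alpha), math.sin(alpha)
P = (4 * c, 3 * s2)
g = (3 * c, 4 * s2)                      # normal direction at P
# Nonzero root of the quadratic obtained by substituting P + s*g
# into 9x^2 + 16y^2 = 144:
s = -(216 * c * c + 384 * s2 * s2) / (81 * c * c + 256 * s2 * s2)
Q = (P[0] + s * g[0], P[1] + s * g[1])
print(9 * Q[0] ** 2 + 16 * Q[1] ** 2)    # Q lies on the ellipse: 144
```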
|
|geometry|analytic-geometry|conic-sections|
| 0
|
When does the closure of a free subgroup of $\mathsf{SL}(2;\mathbb{C})$ equal $\mathsf{SL}(2;\mathbb{C})$
|
Let $A \subset \mathsf{SL}(2; \mathbb{C})$ be a finite set of matrices. Consider the set $$S = \overline{\langle A \rangle},$$ where we take the closure in $\mathsf{SL}(2; \mathbb{C})$ , and where $\langle A \rangle$ denotes the subgroup generated by $A$ . Clearly $S$ is a group, and by construction $S$ is closed in $\mathsf{SL}(2; \mathbb{C})$ . Therefore, by Cartan's closed subgroup theorem, it must be the case that $S$ is a Lie subgroup of $\mathsf{SL}(2; \mathbb{C})$ . I am interested in understanding what are the necessary and sufficient conditions on $A$ for it to be the case that, in fact, $S = \mathsf{SL}(2; \mathbb{C})$ ? It is easy to come up with a few concrete conditions for insufficiency: for instance, if all the elements of $A$ are mutually commuting, or if $A \subset \mathsf{SL}(2; \mathbb{R})$ , or if $A \subset \mathsf{SU}(2)$ . But I am interested in more systematically characterizing when $S$ equals $\mathsf{SL}(2; \mathbb{C})$ . Any thoughts or references are welcome. Thank
|
First of all, the list of closed connected subgroups of $SL(2, {\mathbb C})$ is not very long, see my answer here . From that, you conclude that a subgroup $\Gamma\le SL(2, {\mathbb C})$ is either virtually solvable (i.e. contains a solvable subgroup of finite index), or is discrete, or is dense in $SL(2, {\mathbb C})$ , or its closure equals a conjugate of $SU(2)$ or $SL(2, {\mathbb R})$ or contains $SL(2, {\mathbb R})$ as an index 2 subgroup. Thus, effectively, you are asking for necessary and sufficient conditions for discreteness of a finitely generated subgroup of $SL(2, {\mathbb C})$ . There is one standard necessary condition for discreteness of 2-generated subgroups $\Gamma$ which are not virtually solvable, called the Jorgensen Inequality : $$ {\displaystyle \left|\operatorname {Tr} (A)^{2}-4\right|+\left|\operatorname {Tr} \left(ABA^{-1}B^{-1}\right)-2\right|\geq 1.\,} $$ Here $A, B$ are generators of $\Gamma$ . From this, one deduces the following: Suppose that a subgroup $\Gamma\le SL(2, {\mathbb C})$ is not virtuall
|
|group-theory|lie-groups|lie-algebras|free-groups|
| 1
|
Showing that an estimator is unbiased
|
TASK: Suppose $X_i (i=1,2,3,…,n)$ are i.i.d. random variables with PDF: $f(x, \theta ) = \exp(\theta-x), x>\theta$ . Is the estimator $\mu = \frac{1}{n} + \min (X_i)$ unbiased? Answer: First, we find the CDF for $\mu$ : $F(x) = P(\mu < x) = P(\min (X_i) < x - \frac{1}{n}) = 1 - P(\min (X_i)\geq x - \frac{1}{n}) = 1 - \prod_{i=1}^{n} P(X_i \geq x - \frac{1}{n}) = 1 - \prod_{i=1}^{n} (1 - P(X_i < x - \frac{1}{n})) =: [1]$ . Next: $P(X_i < x) = 1 - \exp(\theta - x)$ , so $[1] = 1 - \prod_{i=1}^{n} (1+ \exp(\theta - x + \frac{1}{n}) - 1) = 1 -\exp(\theta n - x n + 1)$ . Now we find the PDF for $\mu$ : $f(x) = F'(x) = n \exp(\theta n - x n +1)$ . EDITED : We can find the mean: $E(\mu) = \int_{\theta}^{+\infty}x \cdot n \cdot \exp(\theta n - x n +1) dx = e \cdot (\theta + \frac{1}{n})$ So the mean under $f(x, \theta)$ is: $E(X) = \int_{\theta}^{+\infty}x \cdot \exp(\theta - x)dx = \theta + 1$ . Finally, $E(\mu) \neq \theta$ , so the estimator isn't unbiased. Is that it? Thank you for any help.
|
To show an estimator is unbiased, we need to show that $\mathbb{E}(\mu)=\theta$ , where $\mu$ is the estimator and $\theta$ is the parameter. Let's call $Y:= \min(X_1, \dots, X_n)$ . $$ F_Y(y) = 1-P(X_1>y, \dots, X_n>y)=1-[1-F_X(y)]^n=1-e^{n(\theta - y)} \\ \Rightarrow f_Y(y) = ne^{n(\theta-y)}, \quad y>\theta $$ The expectation then becomes: $$ \mathbb{E}(\mu) = \frac{1}{n}+\int_{\theta}^{+\infty}y f_Y(y)dy = \frac{1}{n}+\theta +\frac{1}{n}=\frac{2}{n}+\theta \ne \theta $$ hence $\mu$ is biased.
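A Monte Carlo sanity check (the sample values $\theta=2$, $n=5$ are arbitrary): since $X_i - \theta$ is a standard exponential under $f(x;\theta)=e^{\theta-x}$, the simulated mean of $\mu$ should sit near $\theta + \frac{2}{n}$:

```python
import random

# Under f(x; theta) = exp(theta - x), x > theta, each X_i is theta plus a
# standard exponential, so E[1/n + min X_i] = theta + 1/n + 1/n.
random.seed(1)
theta, n, trials = 2.0, 5, 100_000
acc = 0.0
for _ in range(trials):
    acc += 1.0 / n + min(theta + random.expovariate(1.0) for _ in range(n))
estimate_mean = acc / trials
print(estimate_mean, theta + 2.0 / n)
```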
|
|statistics|statistical-inference|estimation|
| 1
|
Is it possible to decrease the channel capacity by adding a row to the coding channel matrix?
|
I have the following channel with $\mathcal X = \mathcal Y$ : $ p(y|x) = \begin{bmatrix} 1/2 & 1/2 & 0 \\ 0 & 1/2 & 1/2 \\ 1/2 & 0 & 1/2\end{bmatrix} $ Is it possible to decrease the capacity by adding a row to the channel matrix? I think not; this is what I have so far. I added the following last row (it can be any row for that matter): $ p(y|x) = \begin{bmatrix} 1/2 & 1/2 & 0 \\ 0 & 1/2 & 1/2 \\ 1/2 & 0 & 1/2 \\ 1/3 & 1/3 & 1/3 \\ \end{bmatrix} $ I showed that if I pick $P_x = \left\{ \frac{1}{3},\frac{1}{3}, \frac{1}{3}, 0\right\}$ , the capacity will not decrease for any row, meaning that by ignoring the added symbol in $\mathcal{X}$ the capacity will not be reduced. $$ C = \max_{P_X}I(X;Y) = H(Y) - H(Y|X) \approx 0.585 $$ , as it was before adding the row. Is there a way to formulate a more theoretical / general solution, using any of the channel coding theorems? It might have to do with the channel's rate, which can only be greater than or equal to the rate after the row is added
|
As I mentioned in comments, adding a row increases the input symbol set by one. You can always choose to never use the new symbol. So adding the new symbol cannot hurt channel capacity. Formally it means that the new set of joint distributions on $(X,Y)$ includes all old joint distributions on $(X,Y)$ (so what we can do with the new symbol is at least as good or better). Generally, if $\mathcal{X}=\{x_1, ..., x_n\}$ is a finite and nonempty input symbol set, we can define $S(\mathcal{X})$ as the probability simplex on $\mathcal{X}$ : $$S(\mathcal{X}) = \left\{(p(x))_{x\in \mathcal{X}}: \sum_{x\in\mathcal{X}} p(x) = 1, p(x)\geq 0 \quad \forall x \in \mathcal{X}\right\}$$ Given: Let $\mathcal{X}$ , $\mathcal{Y}$ , and $\tilde{\mathcal{X}}$ be finite and nonempty sets that satisfy $\mathcal{X}\subseteq\tilde{\mathcal{X}}$ . Suppose we have given channel probabilities $$ p(y|x)=P[Y=y|X=x] \quad \forall (x,y) \in \tilde{\mathcal{X}}\times \mathcal{Y}$$ Then: $$\sup_{p \in S(\mathcal{X})} I(
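This can also be checked numerically with a small Blahut-Arimoto sketch (not from the original answer; a standard fixed-point iteration for channel capacity):

```python
import math

def capacity(P, iters=10_000):
    # Blahut-Arimoto iteration; P[x][y] = p(y|x). Returns capacity in bits.
    nx, ny = len(P), len(P[0])
    p = [1.0 / nx] * nx
    for _ in range(iters):
        q = [sum(p[x] * P[x][y] for x in range(nx)) for y in range(ny)]
        w = [p[x] * math.exp(sum(P[x][y] * math.log(P[x][y] / q[y])
                                 for y in range(ny) if P[x][y] > 0))
             for x in range(nx)]
        z = sum(w)
        p = [v / z for v in w]
    q = [sum(p[x] * P[x][y] for x in range(nx)) for y in range(ny)]
    return sum(p[x] * P[x][y] * math.log2(P[x][y] / q[y])
               for x in range(nx) for y in range(ny) if P[x][y] > 0)

P3 = [[.5, .5, 0], [0, .5, .5], [.5, 0, .5]]
P4 = P3 + [[1 / 3, 1 / 3, 1 / 3]]
c3, c4 = capacity(P3), capacity(P4)
print(c3, c4)  # both approach log2(3) - 1, about 0.585
```

The iteration drives the weight of the added (dominated) row toward zero, so the two capacities coincide, consistent with the inclusion-of-simplices argument above.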
|
|information-theory|entropy|
| 0
|
Summation with indexing variable in term
|
I am trying to solve $$\sum_{j=0}^{n-1}j2^j$$ but I don't know how to proceed with the $j$ in front of $2^j$ . What is making it difficult for me is it is an indexing variable, so I don't think I can factor it out. How would I proceed? Thank you
|
$S=\sum_{j=0}^{n-1} 2^{xj}=\frac{2^{nx}-1}{2^x-1}$ $dS/dx= \sum_{j=0}^{n-1}j\cdot \ln 2 \cdot 2^{xj}=\frac{(2^x-1)(\ln 2 \cdot n\cdot 2^{nx})-\ln2 \cdot 2^x\cdot(2^{nx}-1)}{(2^x-1)^2}$ $\frac{1}{\ln 2}\frac{dS}{dx}\Big|_{x=1}= \sum_{j=0}^{n-1} j \cdot 2^j= n\cdot 2^n-2\cdot(2^n-1)=(n-2)2^{n}+2$ This matches the first few terms $0\cdot 1+ 1 \cdot 2 +2 \cdot 4 + 3 \cdot 8+\cdots$ Writing $f(n)=(n-2)2^{n}+2$ , we can prove that $f(n)$ works via induction. Note $f(1)=0, f(2)=2, f(3)=10$ , ..., i.e. this satisfies the sum for the first few values. $f(n+1)=(n-1)\cdot 2^{n+1}+2$ $f(n+1)-f(n)= n2^n$ $Q(n)=\sum_{k=0}^{n-1}k\cdot 2^k$ $Q(n+1)=\sum_{k=0}^{n} k \cdot 2^k$ $Q(n+1)-Q(n)=n\cdot 2^n$ So $f(1)=Q(1)=0$ $f(n+1)-f(n)=Q(n+1)-Q(n)=n\cdot 2^n$ So if $f(n)=Q(n)$ then $f(n+1)=Q(n+1)$ and the sum is $f(n)$ by induction.
|
|discrete-mathematics|summation|
| 0
|
Is it possible to create triangle ∆ABC Depending on the points H,I,G?
|
There is a geometry question that occurred to me several years ago, but I could not solve it. Assuming $H$ is the point of intersection of the altitudes, $I$ is the point of intersection of the bisectors, and $G$ is the point of intersection of the medians in triangle $∆ABC$ , and the points $H$ , $I$ , and $G$ are the only data, is it possible that by relying on them we can construct the vertices $A$ , $B$ , and $C$ using ruler and compass? Please do not ask me to give my attempt to solve this question because it really does not show any insight or thread to reach the answer. I will be happy if someone can answer this question in the comments. Edit: I chose these three centers because they do not lie on a single line; I know that there would be no hope in the general case of encoding a triangle uniquely with three collinear centers. Yes, Georg Cantor's theorem states that there is a correspondence between a line and a plane, but this correspondence is not conn
|
We have $G$ and $H$ . The circumcenter $O$ lies on the Euler line through $G$ and $H$ and satisfies $\vec{HO}=\frac32\vec{HG}$ , so $O$ is constructible. The center $N$ of the nine-point circle is the midpoint of $OH$ , and its radius $r_n$ satisfies: $r_n= \frac {R}2\space\space\space\space\space\space\space\space\space(1)$ Also we have $I$ ; let us denote $OI=d$ . Due to Euler's theorem we have: $d^2=R(R-2r)\space\space\space\space\space\space\space\space\space(2)$ where $R$ and $r$ are the radii of the circumcircle and incircle respectively. Now we use the fact (Feuerbach's theorem) that the nine-point circle and the incircle are internally tangent at a point $M$ , and $I$ , $N$ and $M$ are collinear, hence we have: $IN=r_n-r\space\space\space\space\space\space\space\space\space(3)$ In equations (1), (2) and (3), $d$ and $IN$ are known; substituting (1) and (3) into (2) gives $d^2=4r_n(r_n-r)=4r_n\, IN$ , so we can find: $r_n=\frac 14 \frac{d^2}{IN}$ In this way $R=2r_n$ and $r=r_n-IN$ can also be found. Now we draw the three circles and use their properties to draw the triangle. It must not be hard. Update, how to construct the triangle: We use the fact that the circles passing through two vertices of the triangle and the orthocenter have radii equal to that of the circum
|
|geometry|euclidean-geometry|
| 1
|
Is it true that $F(H)\otimes F(H)\cong F(H)$? where $F(H)$ is the full Fock space of a Hilbert space $H$
|
Let $H$ be a Hilbert space. Define the full Fock space $F(H)=\bigoplus\limits_{n=0}^\infty H^{\otimes n}$ where $H^{\otimes 0}=\Bbb{C}\Omega$ , $\Omega$ is an element of $H$ of unit norm. Let us define the map $\phi:F(H)\otimes F(H)\to F(H)$ by $$\phi((a\Omega\oplus x_1\oplus x_2\oplus\cdots)\otimes(b\Omega\oplus y_1\oplus y_2\oplus\cdots))=ab\Omega\oplus (a y_1+b x_1)\oplus (a y_2+x_1\otimes y_1+b x_2)\oplus\cdots$$ I think this $\phi$ defines a unitary operator between $F(H)\otimes F(H)$ and $F(H)$ . I can verify that $\phi$ preserves the inner product and it is onto since $\phi((a\Omega\oplus x_1\oplus x_2\oplus\cdots)\otimes(1\Omega\oplus 0\oplus 0\oplus\cdots))=a\Omega\oplus x_1\oplus x_2\oplus\cdots$ . Therefore, it is unitary. But I want to compute the adjoint of $\phi$ . Initially, I thought it would be $\phi^*(\xi)=\xi\otimes (1\Omega\oplus 0\oplus 0\cdots)$ for $\xi\in F(H)$ . But with this formula $\langle \phi^*(\xi),\eta\otimes\zeta\rangle=0$ if the vacuum of $\zeta$ is $0$ i
|
(Disclaimer: I want to warn against a common mistake when dealing with tensor products, which is to forget that the tensor product is the (maybe closed) linear span of the elementary tensors. For some things one can comfortably work with elementary tensors, but for other things, like calculating norms, sums of elementary tensors cannot be avoided. Mentioning this as someone who made this mistake several times) Your $\phi$ is well-defined, as it is a linear extension of $$ (a\Omega\oplus x_1\oplus x_2\oplus\cdots)\otimes (0\oplus\cdots\oplus 0\oplus y_n\oplus0\oplus\cdots)\longmapsto (\overbrace{0\oplus\cdots\oplus 0}^n\oplus ay_n\oplus x_1\otimes y_n\oplus\cdots) $$ There is an issue, though, which is how you see $F(H)\otimes F(H)$ as a Hilbert space. What I mean is that in general the inner product on the tensor product is defined as $$ \langle x\otimes y,z\otimes w\rangle=\langle x,z\rangle\langle y,w\rangle.$$ With this definition, your $\phi$ does not preserve the inner product. In
|
|functional-analysis|hilbert-spaces|operator-algebras|quantum-mechanics|
| 1
|
Bijection between an uncountable number of real lines to $\mathbb{R}^2$
|
A friend came up with what looks like a very simple and beautiful construction for mapping an uncountable number of dimensions into the plane ( $\mathbb{R}^2$ ). Specifically, each dimension is indexed by a number $\theta \in [0,2\pi)$ . And each location $x$ along dimension $\mathbb{R}_\theta$ gets mapped to the polar coordinate $(\theta, \exp(x)) \in \mathbb{R}^2$ . This construction seems sound and would imply that an uncountable number of uncountable reals can live comfortably inside $\mathbb{R}^2$ . Is this correct? P.S. Okay, after writing this out, I now see that this should definitely be the case, since $\mathbb{R}^2$ is, by construction, an uncountable number of real lines, one for each value of $y$ . So we've mainly switched to polar coordinates here.
|
If I have understood correctly, this is not right. The function you are describing is $$f\colon [0,2π) \times ℝ → ℝ^2\\ (θ, x) ↦ (θ, e^x) $$ where $(θ, e^x)$ is written in polar coordinates. First, the domain is not $ℝ^ℝ$ . Pedagogical note on $ℝ^ℝ$ : It might be hard to wrap your head around $ℝ^ℝ$ . A point in this space has continuum many coordinates $(x_λ)_{λ∈ℝ}$ . Hence, a point can be thought of as a real-valued function on $ℝ$ that spits out the value at each coordinate. Compare this with, say $ℝ^3$ , where each point $x= (x_1, x_2, x_3)= (x_i)_{i ∈ \{1,2,3\}}$ can be thought of as a function $x\colon \{1,2,3\} → ℝ$ with $x(i) = x_i$ . Second, $f$ is not a surjection because there is no point that is mapped to $(0,0)$ since $e^x > 0$ for all inputs $x$ . Third, as it was mentioned in the comments by Rob Arthan, $$ |ℝ^ℝ| = 2^\mathfrak{c} > |ℝ| = |ℝ^2|. $$ Check out this MSE question for details. To sum up, I think this might be a nice lesson for you to write down
|
|real-analysis|general-topology|
| 1
|
Polynomial Rings and Finitely Generated Modules
|
I've been trying to solve the following question: Show that the polynomial ring $\mathbb{Z}[x,y,z]$ is a finitely generated module over its subring generated by the following three elements: $$x+y+z,\ xy+xz+yz,\ xyz.$$ Could you give me a hint? Say $R$ is the ring generated by the above three elements. My idea was to show that $x$ is integral over $R$ , i.e. that the subring $R[x]$ of $\mathbb{Z}[x,y,z]$ is finitely generated as an R-submodule of $\mathbb{Z}[x,y,z]$ . Alternatively, I tried finding the monic polynomial in $R[w]$ that $x$ satisfies, but neither method worked (or maybe I just missed something). I am quite confused and feel like I'm doing something completely wrong.
|
Set $a = x + y + z$ , $b = xy + xz + yz$ , and $c = xyz$ . Then $$ (t-x)(t-y)(t-z) = t^3 - at^2 + bt - c \in R[t] $$ so $x$ , $y$ , and $z$ are all integral over $R$ . There is a standard property relating integrality to finitely generated modules: when $B/A$ is a ring extension and $b \in B$ , the following properties are equivalent: (i) $b$ is integral over $A$ , (ii) $A[b]$ is a finitely generated $A$ -module, (iii) $b$ is contained in a subring $S$ of $B$ such that $A \subset S \subset B$ and $S$ is a finitely generated $A$ -module. In the proof of this equivalence, it is the direction from (iii) to (i) that is the most technical. Anyway, this equivalence implies that the sum and product of elements in $B$ integral over $A$ is integral over $A$ , so the set of elements in $B$ integral over $A$ is a subring of $B$ . The proof that (i) implies (ii) shows that when $b$ and $b'$ in $B$ are integral over $A$ , $A[b,b']$ is a finitely generated $A$ -module, and the same is true for the s
|
|abstract-algebra|commutative-algebra|modules|
| 1
|
Summation with indexing variable in term
|
I am trying to solve $$\sum_{j=0}^{n-1}j2^j$$ but I don't know how to proceed with the $j$ in front of $2^j$ . What is making it difficult for me is it is an indexing variable, so I don't think I can factor it out. How would I proceed? Thank you
|
Sum of a geometric progression: $$1+x+x^2+\cdots+x^n=\frac{x^{n+1}-1}{x-1}\quad(x\ne1)$$ Differentiate with respect to $x$ : $$1+2x+3x^2+\cdots+nx^{n-1}=\frac{(n+1)x^n(x-1)-(x^{n+1}-1)}{(x-1)^2}=\frac{nx^{n+1}-(n+1)x^n+1}{(x-1)^2}$$ Multiply by $x$ : $$x+2x^2+3x^3+\cdots+nx^n=\frac{nx^{n+2}-(n+1)x^{n+1}+x}{(x-1)^2}$$ Set $x=2$ : $$\sum_{j=0}^nj2^j=(n-1)2^{n+1}+2$$ Oops, I misread the question. OK, now substitute $n-1$ for $n$ : $$\boxed{\sum_{j=0}^{n-1}j2^j=(n-2)2^{n}+2}$$ To verify the result by mathematical induction, note that $$[(n-1)2^{n+1}+2]-[(n-2)2^{n}+2]=n2^n$$
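A brute-force check of the boxed formula (the script and names are mine):

```python
def brute(n):
    # direct evaluation of sum_{j=0}^{n-1} j * 2^j
    return sum(j * 2**j for j in range(n))

def closed(n):
    # boxed closed form: (n-2) * 2^n + 2
    return (n - 2) * 2**n + 2

ok = all(brute(n) == closed(n) for n in range(1, 50))
```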
|
|discrete-mathematics|summation|
| 0
|
Suppose $H_1$ and $H_2$ are subgroups of $G$. Suppose $a_1$ and $a_2$ are two elements of $G$ such that $a_1H_1 = a_2H_2$. Prove that $H_1 = H_2$.
|
I believe that the problem stated in the title is false. I have attempted to prove it by manipulating the assumption that there is some $h_1 \in H_1$ and some $h_2 \in H_2$ such that $a_1h_1 = a_2h_2$ . The idea was to come to the conclusion that $H_1 \subset H_2$ and $H_2 \subset H_1$ , showing they were equal. After trying every type of manipulation I could think of, I could not come to this conclusion. Despite this, I cannot seem to find a counterexample. I've put some different GPTs on ChatGPT through the wringer trying to find one or some proof, but they have also fallen short. Anyone have any ideas?
|
The statement is correct. We have $a_2^{-1}a_1H_1=H_2$ . This implies that the coset $a_2^{-1}a_1H_1$ contains the identity of $G$ , and so it has to be equal to the coset $H_1$ . Hence $H_1=a_2^{-1}a_1H_1=H_2$ .
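A brute-force illustration in $\mathbb{Z}_{12}$ (the setup is mine): over all subgroups and all translates, whenever $a_1+H_1$ and $a_2+H_2$ coincide as sets, the subgroups coincide.

```python
n = 12
elements = range(n)
# subgroups of Z_n are exactly dZ_n for divisors d of n
subgroups = [frozenset(range(0, n, d)) for d in (1, 2, 3, 4, 6, 12)]

def coset(a, H):
    # the (additive) coset a + H in Z_n
    return frozenset((a + h) % n for h in H)

ok = all(H1 == H2
         for H1 in subgroups for H2 in subgroups
         for a1 in elements for a2 in elements
         if coset(a1, H1) == coset(a2, H2))
```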
|
|abstract-algebra|
| 0
|
Equivalence between two integrals
|
I would like to prove the equality of the two integrals $$ I_1 = \int_0^1 \frac{\sqrt{1-x^2}}{(1+x)(1+2x)} dx$$ and $$ I_2 = -\int_0^1 \frac{(-x+2)^2-1 -\sqrt{1-x^2}\sqrt{1-{(-x+1)}^2}}{\left(\sqrt{1-x^2}+\sqrt{1-{(-x+1)}^2}\right )(1-x)(3-2x)} dx.$$ Mathematica can solve this and the solution is for both integrals $$I_{1/2} = \frac{1}{4} [-\pi - 2 \sqrt{3} \log(2 - \sqrt{3})].$$ What I would like is a simple proof of the equality $I_1 = I_2$ , without having to calculate the solution directly. Any help would be appreciated. EDIT: We can multiply the integrand of $I_2$ by $\frac{\sqrt{1-x^2}-\sqrt{1-{(-x+1)}^2}}{\sqrt{1-x^2}-\sqrt{1-{(-x+1)}^2}}$ . After the substitution $u = 1-x$ we get $$I_2 = \int_0^1 \frac{\sqrt{1-x^2}}{(1+x)(1+2x)} \frac{(2x-5)(1+x)}{(1-2x)(1-x)}\, dx.$$ Almost there but I still can't conclude.
|
Recognize that the difference of the two integrands is \begin{align} & \frac{1 +\sqrt{1-x^2}\sqrt{1-{(1-x)}^2}- (2-x)^2}{\left(\sqrt{1-x^2}+\sqrt{1-{(1-x)}^2}\right )(1-x)(3-2x)}\\ &-\frac{\sqrt{1-x^2}}{(1+x)(1+2x)} =\frac1{\sqrt{1-x^2}} - \frac1{\sqrt{1-(1-x)^2}}\\ &\>\>\>\>-3\bigg(\frac1{(1-4x^2)\sqrt{1-x^2}} + \frac1{[1-4(1-x)^2]\sqrt{1-(1-x)^2} }\bigg) \end{align} which leads to $$I_2-I_1 = K_1 -3K_2$$ where $K_1$ and $K_2$ vanish as seen below \begin{align} K_1 &=\int_0^1 \frac1{\sqrt{1-x^2}} - \frac1{\sqrt{1-(1-x)^2}}\overset{x\to 1-x}{dx}=-K_1=0\\ K_2 &=\int_0^1 \frac1{(1-4x^2)\sqrt{1-x^2}} + \frac1{[1-4(1-x)^2]\sqrt{1-(1-x)^2} }\ dx \\ &= \frac2{\sqrt3}\bigg(\tanh^{-1}\frac{\sqrt3x}{\sqrt{1-x^2}} - \tanh^{-1}\frac{\sqrt{1-(1-x)^2}} {\sqrt3(1-x)}\bigg)\bigg|_0^{\frac12}=0 \end{align}
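As a numerical spot-check independent of the proof (the midpoint-rule script and tolerance are mine), $I_1$ matches the closed form quoted in the question:

```python
import math

def integrand(x):
    # integrand of I_1 from the question
    return math.sqrt(1 - x*x) / ((1 + x) * (1 + 2*x))

# composite midpoint rule on [0, 1]
N = 200_000
h = 1.0 / N
I1 = h * sum(integrand((k + 0.5) * h) for k in range(N))

# closed form (1/4) * (-pi - 2*sqrt(3)*log(2 - sqrt(3))) ~ 0.3551
exact = 0.25 * (-math.pi - 2 * math.sqrt(3) * math.log(2 - math.sqrt(3)))
```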
|
|integration|definite-integrals|
| 1
|
Map between pairs $(X,A) \to (Y,B)$ induces homomorphism taking $C_n(A)$ to $C_n(B)$
|
I'm going through relative homology in Hatcher's Algebraic Topology . Just as there are induced homomorphisms for nonrelative homology, a map $f: (X,A) \to (Y,B)$ — i.e., such that $f(A) \subset B$ — induces homomorphisms $f_\sharp: C_n(X,A) \to C_n(Y,B)$ . Of course, for this to work, $f_\sharp: C_n(X) \to C_n(Y)$ must "take $C_n(A)$ to $C_n(B)$ " (Hatcher, pg. 118). However, this is not immediately obvious to me. Since $f$ restricted to $A \to B$ can fail to be surjective, why should every $n$ -chain in $B$ be mapped to by an $n$ -chain in $A$ ? In other words, isn't it possible that there exists a chain in $B$ which is not mapped to by a chain in $A$ via $f_\sharp$ ?
|
Nevermind, it appears that it was not meant that $f_\sharp$ takes $C_n(A)$ (on)to $C_n(B)$ , and its restriction to $C_n(A) \to C_n(B)$ is, indeed, not surjective in general. I think a proof that maps between pairs $(X,A) \to (Y,B)$ induces a well-defined homomorphism $f_\sharp: C_n(X,A) \to C_n(Y,B)$ goes like this: Suppose $[c] \in C_n(X,A)$ . Given $f_\sharp: C_n(X) \to C_n(Y)$ , define $f_\sharp: C_n(X,A) \to C_n(Y,B)$ by $f_\sharp([c]) = [f_\sharp(c)] \in C_n(Y,B)$ . To see that this is well-defined, let $a \in C_n(A)$ and $b \in C_n(B)$ . Then \begin{align} f_\sharp([c+a]) &= [f_\sharp(c+a)] \\ &= [f_\sharp(c)+f_\sharp(a)] \\ &= [f_\sharp(c)+b] \\ &= [f_\sharp(c)] \\ &= f_\sharp([c]). \end{align} Thus, the surjectivity of $f_\sharp: C_n(A) \to C_n(B)$ was never needed.
|
|algebraic-topology|homology-cohomology|
| 0
|
How to prove every shell is non-overlapping in volume of revolution by "shells"? Does the Riemann sum imply the volume is over-counted?
|
Why is the shell method not $$\lim_{n \rightarrow \infty} \sum_{k=1}^n 2\pi\left(\frac{(b-a)}{n}\right)\cdot f\left((k-1)\cdot\frac{(b-a)}{n}\right)\cdot \frac{1}{n} + \pi f\left((k-1)\cdot\frac{(b-a)}{n}\right) \cdot \left(\frac{1}{n}\right)^2.$$ ? This means the shells are concentric circles going outward, the subsequent shells after the first are hollow in the center, and the hole increases further towards the end points in increasing order. I have trouble with the implication that in the "shell method" to calculate volumes of revolution, the left end point of the first piece is $0$ , the left end point of the second piece is $\dfrac{(b-a)}{n}$ , the left end point of the third piece is $2\dfrac{b-a}{n}.$ Why do you use the left end point of the pieces in the integral rather than $\int$ (constant "radius" of cylinder) $\cdot 2\pi\cdot$ height of shell $\cdot dx$ ? If your shells' radii are not constant, and the shell radii are the increasing sequence $a_k=(k-1)\dfrac{b-a}{n}$
|
For the time being, ignore the solid of revolution and go back to the basic Riemann sum of $f(x)$ over the interval $[a,b]$ using an equally spaced partition. We divide the interval $[a,b]$ into $n$ equal subintervals. Since the width of the interval is $b-a$ , the width of each subinterval is $\frac{b-a}{n}$ . On this matter, surely there can be no disagreement. The first (leftmost) subinterval has its left endpoint at $a$ . So to determine the right endpoint of the first subinterval, you must add to $a$ the width of the subinterval; i.e., the right endpoint must be $$a + \frac{b-a}{n}.$$ This is also the left endpoint of the second subinterval. So the right endpoint of the second subinterval is $$a + \frac{b-a}{n} + \frac{b-a}{n} = a + 2 \frac{b-a}{n},$$ and so on, creating an arithmetic sequence of endpoints $$a,\; a + \frac{b-a}{n},\; a + 2 \frac{b-a}{n},\; a + 3 \frac{b-a}{n}, \;\ldots,\; a + k \frac{b-a}{n},\; \ldots,\; a + n \frac{b-a}{n}$$ whose initial term is $a$ , with commo
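To see concretely that the left-endpoint choice is harmless, here is a small numeric sketch (the example $f(x)=x^2$ on $[0,1]$ revolved about the $y$-axis, and all names, are mine): the left-endpoint shell sums converge to $2\pi\int_0^1 x\,f(x)\,dx = \pi/2$, while the extra $\pi f(x_k)\,\Delta x^2$ terms from the question vanish in the limit.

```python
import math

def shell_sum(n):
    # left-endpoint shell sum and the second-order "extra" terms, for f(x) = x^2
    dx = 1.0 / n
    left = sum(2 * math.pi * (k * dx) * (k * dx)**2 * dx for k in range(n))
    extra = sum(math.pi * (k * dx)**2 * dx * dx for k in range(n))
    return left, extra

left, extra = shell_sum(100_000)
exact = math.pi / 2   # 2*pi * integral of x * x^2 over [0, 1]
```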
|
|calculus|integration|volume|riemann-sum|solid-of-revolution|
| 1
|
Prove that $|Y| = |\mathbb{N} \to \mathbb{N}|$, where $Y$ is the set of all non-injective functions
|
Let $Y$ be the set of all non-injective functions from $\mathbb{N} \to \mathbb{N}$ , i.e., for each $n \in \mathbb{N}$ , the cardinality $\bigl| f^{-1}(\{n\}) \bigr|$ is different from $1$ . I need to prove this cardinality equivalence. So to do that I need to find an injective function from $Y$ to $\mathbb{N} \to \mathbb{N}$ and another one from $\mathbb{N} \to \mathbb{N}$ to $Y$ . The first one is obvious because $Y \subseteq (\mathbb{N} \to \mathbb{N})$ . I'm having trouble finding the second one. I tried with mod, but it makes it non-injective. Another example?
|
Hint: To each function $f:\mathbb{N}\to\mathbb{N}$ corresponds a new function $F:\mathbb{N}\to\mathbb{N}$ defined by $F(2n)=f(n)$ and $F(2n+1)=0$ for all $n\in\mathbb{N}$ .
|
|combinatorics|functions|discrete-mathematics|elementary-set-theory|
| 0
|
Help with chapter 1 exerice 6 from "Differential Forms" by Carmo
|
I am currently reading Carmo's book about differential forms. I tried exercise 6 from chapter 1, but I am a bit confused. Here is the exercise: Let $f: U \subset \mathbb{R}^m \to \mathbb{R}^n$ be a diff. map. Assume that $m<n$ and let $\omega$ be a $k$ -form in $\mathbb{R}^n$ , with $k>m$ . Show that $f^*\omega=0$ . My idea was that we have a limited number of different $df_{i_j}$ (we should have $n$ many, right?), so if $k>n$ there is some $i_j$ in the definition of $f^*\omega$ where $df_{i_j} \wedge df_{i_j}$ appears, and this is $0$ . But in the exercise they said $k>m$ , not $k>n$ , so my argument would be false. In the book he always uses the notation $f: U \subset \mathbb{R}^n \to \mathbb{R}^m$ . Is this a mistake in the exercise or is there a different way that I do not see? I'm glad if someone helps me. Thanks a lot :)
|
To close this post officially, here is the summary. In general, the space of $k$ -forms in $\mathbb{R}^m$ has $\binom{m}{k}$ basis elements. This implies there are no non-trivial $k$ -forms in $\mathbb{R}^m$ if $k > m$ , since then $\binom{m}{k}=0$ . For this exercise: since $f^*\omega$ is a $k$ -form in $\mathbb{R}^m$ but $k>m$ , we can only have $f^*\omega = 0$
|
|differential-geometry|solution-verification|differential-forms|
| 1
|
Fact checking: are these inclusion relations regarding algebraic varieties of polynomial ideals correct?
|
I'm studying inclusion identities within polynomial ideals theory. More precisely, I'm interested in the correspondence of an ideal $I\subseteq \mathbb{F}[\vec{x}]$ and its associated affine variety $\mathcal{V}(I)$ for an arbitrary commutative field $\mathbb{F}$ . I've encountered some useful facts: mainly, that if $\{\mathfrak{a,b}\}\subseteq\mathbb{F}[\vec{x}]$ we have that: $$\mathfrak{a\cdot b\subseteq a\cap b\subseteq a+b\subseteq a\cup b}\quad (1)$$ These inclusion identities can be found by following some algebraic properties of ideals, while noting that every element $r\in\mathfrak{a\cup b}$ can be written in the form: $$r=f_1a+f_2b$$ Where $f_i\in\mathbb{F}[\vec{x}]$ , $a\in\mathfrak{a}$ and $b\in\mathfrak{b}$ . In a similar fashion, we can show: $$\mathcal{V}(\mathfrak{a\cup b})\subseteq\mathcal{V}(\mathfrak{a+b})\subseteq\mathcal{V}(\mathfrak{a\cap b})\subseteq\mathcal{V}(\mathfrak{a\cdot b})$$ So there seems to be a complete inclusion-reversing correspondence between ideals
|
The inclusions $\mathfrak{a} \cdot \mathfrak{b} \subseteq \mathfrak{a} \cap \mathfrak{b} \subseteq \mathfrak{a} + \mathfrak{b}$ are very well-known results; you can find them in any textbook on Commutative Algebra. As an example where all inclusions are strict, consider, say, $\mathfrak{a} = \langle x^2y \rangle$ and $\mathfrak{b} = \langle xy^2 \rangle$ . Then $\mathfrak{a} \cdot \mathfrak{b} = \langle x^3y^3 \rangle$ , $\mathfrak{a} \cap \mathfrak{b} = \langle x^2y^2 \rangle$ and $\mathfrak{a} + \mathfrak{b} = \langle xy^2, x^2y \rangle$ . As for $\mathfrak{a} \cup \mathfrak{b}$ , what do you mean by that? The basic set union will fail to be an ideal; if you mean the smallest ideal that contains $\mathfrak{a}$ and $\mathfrak{b}$ , sometimes denoted as $\langle \mathfrak{a} \cup \mathfrak{b}\rangle$ , we have, in fact, $\langle \mathfrak{a} \cup \mathfrak{b}\rangle = \mathfrak{a} + \mathfrak{b}$ (why?).
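For monomial ideals, membership is a divisibility check, so the strictness of the inclusions in this example can be verified mechanically (helper names are mine):

```python
# A monomial x^i y^j lies in the monomial ideal <x^a y^b> iff i >= a and j >= b.
def in_ideal(mono, gen):
    return mono[0] >= gen[0] and mono[1] >= gen[1]

a_gen = (2, 1)       # a = <x^2 y>
b_gen = (1, 2)       # b = <x y^2>
prod_gen = (3, 3)    # a * b = <x^3 y^3>
inter_gen = (2, 2)   # a ∩ b = <x^2 y^2> (lcm of the generators)

x2y2 = (2, 2)
x2y = (2, 1)
# x^2 y^2 is in a ∩ b but not in a * b, so a*b is strictly smaller
strict_prod = in_ideal(x2y2, inter_gen) and not in_ideal(x2y2, prod_gen)
# x^2 y is in a (hence in a + b) but not in b, hence not in a ∩ b
strict_sum = in_ideal(x2y, a_gen) and not in_ideal(x2y, b_gen)
```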
|
|commutative-algebra|ideals|affine-varieties|
| 1
|
Approach to an improper integral involving sinh
|
How do I calculate an integral $\int_{-\infty}^{\infty} \frac{z-a}{\sinh(z-a)}\frac{z+a}{\sinh(z+a)} dz$ for $a>0$ ? Expanding the integrand and integrating term-by-term (a-la polylog) gives rise to an ugly looking double series. Edit: it surely can be done this way, but what is the most straightforward way to calculate this?
|
Another contour integration approach: Let $N$ be a positive integer greater than $a$ , and integrate the function $$f(z) = \frac{z-a}{\sinh(z-a)} \frac{z+a}{\sinh(z+a)} \, e^{ipz}, \quad p>0,$$ around a rectangular contour with vertices at $-N$ , $N$ , $N+ i \pi (2N+1)/2$ , and $-N+i\pi (2N+1)/2$ . For any positive integer $N$ , the top of the contour passes halfway between adjacent poles of $f(z)$ . As $N \to \infty$ , the integral vanishes on the left and right sides of the contour because $|\sinh(z)|$ grows exponentially as $\Re(z) \to \pm \infty$ . And the integral vanishes on the top the contour because $|e^{i p z}|$ decays exponentially to zero as $\Im(z) \to \infty$ . We therefore have $$ \begin{align} \int_{-\infty}^{\infty} f(x) \, \mathrm dx &= 2 \pi i \left(\sum_{n=1}^{\infty} \operatorname*{Res}_{z=a+ i \pi n} f(z) + \sum_{n=1}^{\infty} \operatorname*{Res}_{z = -a + i \pi n } f(z)\right) \\ &= 2 \pi i \sum_{n=1}^{\infty} \lim_{z \to a+ i \pi n} \frac{(z-a)(z+a)}{\cosh(z-a)\
|
|improper-integrals|
| 0
|
Prove that $|Y| = |\mathbb{N} \to \mathbb{N}|$, where $Y$ is the set of all non-injective functions
|
Let $Y$ be the set of all non injective functions from $\mathbb{N} \to \mathbb{N}$ , i.e., for each $n \in \mathbb{N}$ , the cardinality $\bigl| f^{-1}({n}) \bigr|$ is different from $1$ . I need to prove this cardinality equivalence. So to do that I need to find an injective function from $Y$ to $\mathbb{N} \to \mathbb{N}$ and another one from $\mathbb{N} \to \mathbb{N}$ to $Y$ . The first one is obvious because $Y \subseteq (\mathbb{N} \to \mathbb{N})$ . I'm having trouble finding the second one. I tried with mod, but it makes it non-injective. Another example?
|
Take any non-injective surjection $h:\Bbb N\to\Bbb N$ (e.g., with the convention $\min(\Bbb N)=0$ : $h(0)=0$ and $\forall n\ge1\quad h(n)=n-1$ ). Then, $$f\mapsto f\circ h$$ is an injection from $\Bbb N^{\Bbb N}$ into $Y$ .
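A finite-window sketch of this map (the truncation to 20 values and the sample $f$ are mine): the composite $g = f\circ h$ is non-injective since $g(0)=g(1)=f(0)$, and $f$ is recovered from $g$ via $f(n) = g(n+1)$, which is the injectivity of $f\mapsto f\circ h$.

```python
def h(n):
    # the answer's non-injective surjection: h(0) = 0, h(n) = n - 1
    return 0 if n == 0 else n - 1

def compose_with_h(f, window):
    # the first `window` values of f o h
    return [f(h(n)) for n in range(window)]

f = lambda n: n * n + 1          # a sample function N -> N
g = compose_with_h(f, 20)
not_injective = (g[0] == g[1])   # both equal f(0)
recovered = [g[n + 1] for n in range(19)]   # f(n) = g(n + 1)
expected = [f(n) for n in range(19)]
```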
|
|combinatorics|functions|discrete-mathematics|elementary-set-theory|
| 0
|
The polynomial ideal associated to a prime ideal is prime
|
Let $A$ be a ring, $I \subset A$ a prime ideal and $\Phi : A \rightarrow A[x]$ the canonical inclusion homomorphism. Is $I[x]= \{ \sum_{k=0}^n a_k x^k : n\in \Bbb{N}, (a_k)_{k=0}^n \in I^{n+1} \}$ also prime in $A[x]$ ? I am wondering this in order to apply it in another proof. I have searched my algebra books and notes and there does not appear to be theory about it. I have tried some ways to prove it, thinking about properties like $A/I$ being an integral domain since $I$ is prime, the relation between prime ideals and nilpotent elements, among other things. Any possible help or reference to already existing theory would be appreciated.
|
Yes, it is. In fact $A[X]/I[X] \cong (A/I)[X]$ , which is an integral domain (because so is $A/I$ ) if $I$ is a prime ideal in $A$ .
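A small computational illustration (my choice of example: $A=\mathbb{Z}$, $I=5\mathbb{Z}$): in $(A/I)[X]=\mathbb{F}_5[X]$ a product of nonzero polynomials is nonzero, which is the domain property that makes $I[X]$ prime.

```python
import random

P = 5  # a prime, so Z/5Z is a field and F_5[X] is a domain

def polymul_mod(f, g, p):
    # f, g: coefficient lists (low degree first); returns (f*g) mod p,
    # with trailing zeros trimmed so [] represents the zero polynomial
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    while out and out[-1] == 0:
        out.pop()
    return out

def rand_nonzero_poly(p):
    while True:
        f = [random.randrange(p) for _ in range(random.randint(1, 6))]
        while f and f[-1] == 0:
            f.pop()
        if f:
            return f

random.seed(1)
ok = all(polymul_mod(rand_nonzero_poly(P), rand_nonzero_poly(P), P) != []
         for _ in range(200))
```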
|
|abstract-algebra|ideals|principal-ideal-domains|
| 0
|
Different definitions of sequential homotopy colimits
|
There are two notions of homotopy colimits, one for triangulated categories and one for model categories, and I wonder whether these two coincide. More concretely, let $\mathcal{T}$ be a triangulated category having countable direct sums and $X_0 \to X_1 \to \cdots $ a sequence of morphisms in $\mathcal{T}$ ; we define the homotopy colimit of this sequence to be $$\operatorname{HoLim}(X_i) = \operatorname{Cone}\left(\bigoplus_{i=1}^{\infty} X_i \overset{\operatorname{id} - \operatorname{shift}}{\longrightarrow} \bigoplus_{i=1}^{\infty}X_i \right).$$ Now if $\mathfrak{M}$ is a model category and we take $\mathbb{N}$ to be the poset $\left\{0 < 1 < 2 < \cdots\right\}$ , we have a colimit functor $\operatorname{colim} \colon \mathfrak{M}^{\mathbb{N}} \to \mathfrak{M}$ and we define the homotopy colimit as its left derived functor $$\operatorname{HoLim} = \mathbf{L}\operatorname{colim} \colon \mathbf{Ho}(\mathfrak{M}^{\mathbb{N}}) \longrightarrow \mathbf{Ho}(\mathfrak{M}).$$ My question is, if $\mathcal{T} = \mathbf{Ho}(\mat
|
Yes. First observe that in the triangulated category of a stable model category, the mapping cone of the difference of two morphisms is their homotopy coequalizer and the direct sum of a family of objects is their homotopy coproduct. Thus, the diagram in the first displayed formula is the homotopy coequalizer of two homotopy coproducts. The resulting double homotopy colimit can be rewritten as a homotopy colimit over a single indexing category $I$ , given by the Grothendieck construction applied to the indexing categories. In our case, the category $I$ is a poset that has pairs $(i,j)$ , $i≥0$ , $j∈\{0,1\}$ as objects, and $(i,j)<(k,l)$ if $j=0$ , $l=1$ , and $i≤k≤i+1$ . The projection $\def\N{{\bf N}} I→\N$ induces a functor, which can be easily checked to be ∞-final by computing the comma categories and observing they are weakly contractible. Thus, the homotopy colimit over $I$ can be computed as the homotopy colimit over $\N$ , which completes the proof.
|
|model-categories|triangulated-categories|
| 1
|
Let $V$ be a vector space and $T$ be a linear map on $V$. Suppose that $\text{dim null}(T^2) = 5$. Prove that $\text{dim null}(T) \ge 3$.
|
Let $V$ be a vector space and $T$ be a linear map on $V$ . Suppose that $\text{dim null}(T^2) = 5$ . Prove that $\text{dim null}(T) \ge 3$ . I can come up with a sort of heuristic argument: if $\text{dim null}(T) < 3$ , then when applying $T$ to $\text{range}(T) (\subset V)$ , the dimension of the null space can grow by at most 2. Hence $\text{dim null}(T^2) < 5$ . However, I cannot figure out how to formulate this idea rigorously.
|
Let $W=\ker(T^2)$ . $W$ is invariant under $T$ . Therefore, we can consider $T_w = T\rvert_W$ the operator $T$ restricted to $W$ . Since $\ker (T) \subset \ker(T^2) = W$ , it follows that $\ker(T_w) = \ker(T)$ . On the other hand, $(T_w)^2\equiv 0$ , so the minimal polynomial of $T_w$ is either $x$ or $x^2$ . If $T_w\equiv 0$ , then $\ker(T_w) = W$ , and we are done. Otherwise, its minimal polynomial is $x^2$ , so the maximum order of the Jordan blocks corresponding to the eigenvalue $0$ is $2$ . Since $\dim W = 5$ , there are at least $3$ Jordan blocks for the eigenvalue $0$ , which implies $\dim \ker (T)\geq 3$ , as desired.
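A concrete instance of the bound (the matrix is my example): take $N = J_2\oplus J_2\oplus J_1$, a $5\times 5$ nilpotent matrix with $N^2=0$; each of the three Jordan blocks contributes exactly one kernel vector, so $\dim\ker N = 3$ while $\dim\ker N^2 = 5$, matching the minimum allowed by the answer.

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M):
    # Gaussian elimination over the rationals (entries here are small ints)
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Jordan blocks J2, J2, J1 for the eigenvalue 0
N = [[0, 1, 0, 0, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0]]
N2 = matmul(N, N)
nullity_N = 5 - rank(N)
nullity_N2 = 5 - rank(N2)
```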
|
|linear-algebra|vector-spaces|
| 0
|
Sum $\sum\limits_{n=1}^{\infty}\frac{(-1)^n}{\sqrt{n}}$
|
I have the following infinite sum: $$ \sum\limits_{n=1}^{\infty}\frac{(-1)^n}{\sqrt{n}} $$ Because there is a $(-1)^n$ I deduce that it is an alternating series. Therefore I use the alternating series test: $$ \lim_{n\to\infty} \frac{1}{\sqrt{n}} $$ Because the terms $\frac{1}{\sqrt{n}}$ are decreasing and approaching $0$ I thought it should therefore be convergent. However in the answer key it uses a different method to get a different answer. It instead takes the absolute value of the series: $\left|\frac{(-1)^n}{\sqrt{n}}\right| = \frac{1}{\sqrt{n}}$ and says because this is a divergent p series that the series is divergent. Why is the alternating series test not applied here?
|
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{{\displaystyle #1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\sr}[2]{\,\,\,\stackrel{{#1}}{{#2}}\,\,\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} & \color{#44f}{\sum_{n = 1}^{\infty}{\pars{-1}^{n} \over \root{n}}} = \lim_{N \to \infty}\,\sum_{n = 1}^{N}{\pars{-1}^{n} \over \root{n}} \\[5mm] = & \ \lim_{N \to \infty}\bracks{% \sum_{n = 1}^{\left\lfloor\,{N/2}\,\right\rfloor}{1 \over \root{2n}} - \pars{\
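Numerically (the script and names are mine), the partial sums exhibit exactly the bracketing that the alternating series test guarantees, so the series converges conditionally; it is only the series of absolute values that diverges:

```python
import math

def partial_sums(N):
    # partial sums S_1, ..., S_N of sum (-1)^n / sqrt(n)
    s, out = 0.0, []
    for n in range(1, N + 1):
        s += (-1)**n / math.sqrt(n)
        out.append(s)
    return out

S = partial_sums(10_000)
even = [S[n - 1] for n in range(2, 10_001, 2)]   # S_2 >= S_4 >= ...
odd = [S[n - 1] for n in range(1, 10_001, 2)]    # S_1 <= S_3 <= ...
even_decreasing = all(a >= b for a, b in zip(even, even[1:]))
odd_increasing = all(a <= b for a, b in zip(odd, odd[1:]))
gap = even[-1] - odd[-1]   # equals 1/sqrt(N), shrinking to 0
```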
|
|calculus|sequences-and-series|convergence-divergence|summation|
| 0
|
Simple way to show that standard basis on $\ell^p$ is weakly pre-compact?
|
Consider the sequence $e_n = (0,0,\ldots,0,1,0,\ldots)$ in $\ell^1$ , which is weakly convergent to zero in $\ell^p$ for all $1\leq p \leq \infty$ . It is then obvious, from the theorem that sequences completely characterise weak convergence, that $\{e_n\}\cup\{ 0\}$ is weakly compact. However, I remember somehow that this non-trivial theorem is not needed, and there is a more obvious proof that $\{e_n\}\cup\{ 0\}$ is weakly compact. Does anyone have a solution that uses the weak topology directly, without using sequential characterisations?
|
What you have said for $p=1$ is wrong.
|
|functional-analysis|alternative-proof|weak-topology|
| 0
|
Proving that each ear decomposition of a $4$-regular $n$-vertex $2$-connected graph has $n+1$ ears
|
Prove that in each ear decomposition of a $4$ -regular $n$ -vertex $2$ -connected graph $G$ , the number of ears (counting the initial cycle) is $n+1$ . Definition of ear decomposition: $(P_0, P_1, \dots ,P_k)$ of the edge set of G such that (a) $P_0$ is a cycle of length at least 3, and (b) for $i=1,\dots,k$ , $P_i$ is an ear of $P_0 \cup P_1 \cup \dots \cup P_i$ . I have started with a proof by induction, and this is what I have so far. Consider $n=3$ . We have a cycle $P_0$ of length 3. From the picture, which is linked below, we can decompose G into $P_0 \cup P_1 \cup P_2 \cup P_3$ . Counting the initial cycle, we have 4 ears, which holds for our base case. Base Case ( $n=3$ ) Graph: From here, I am not exactly sure how to articulate my induction step. Would somebody be able to point me in the right direction from here?
|
I will answer the question in the case of simple graphs and proceed so without induction. (I don't even know if the claim is true otherwise, but from the fact that you consider $n = 3$ as base case, it seems you are considering multigraphs.) Let $G$ be a (simple) $4$ -regular, $2$ -connected graph with $n$ vertices. Furthermore, let $P_0, \dots, P_l$ be an ear decomposition of $G$ . The claim is equivalent to showing that $l = n$ . For that, we use a double counting argument. Namely, let $I = \{(v, k) \in V(G) \times [l] \colon v \text{ is an endpoint of }P_k\}$ . On the one hand, $|I| = 2 l$ as each ear has exactly two (distinct!) endpoints. On the other hand, fix an arbitrary vertex $v \in V(G)$ . Let $k$ be the smallest index such that $v \in V(P_0\cup \dots \cup P_k)$ . We must have $\deg_{P_0\cup \dots \cup P_k}(v) = 2$ : If $k = 0$ , then $v$ was initially part of the cycle $P_0$ . Otherwise, it must be an internal vertex of $P_k$ . Indeed, as $k$ is minimal, $v$ is not a vertex
|
|graph-theory|proof-explanation|
| 0
|
If we remove the diagonal from $X\times X$, is it necessarily disconnected?
|
If $X$ is a compact, connected Hausdorff space, we know that the diagonal $\Delta_X=\{(x,x)\in X\times X\}$ is closed in $X\times X$ by Hausdorffness. But is $X\times X\setminus\Delta_X$ disconnected in general? I know that, if $X$ has a total order $<$ such that the induced order topology is contained in the original topology (i.e. it is coarser), then we can write $$X\times X\setminus\Delta_X=\{(x,y)\in X\times X:x<y\}\cup\{(x,y)\in X\times X:x>y\}$$ as a union of two disjoint open sets. Also, if $\Delta_X$ is a zero-set , then we also have the disconnectedness of $X\times X\setminus\Delta_X$ since, if $\Delta_X=f^{-1}[\{0\}]$ , then we have the disjoint open sets $f^{-1}[(-\infty,0)],f^{-1}[(0,+\infty)]$ . But we don't have that $\Delta_X$ is a zero-set in general, as $X$ may not be first countable, and $\Delta_X$ being $G_\delta$ implies $X$ first countable for compact Hausdorff spaces .
|
Let $X$ be $\Bbb{R}$ with the cofinite topology (any infinite set would do: I'm just using $\Bbb{R}$ so I can use geometrical language). The closed subsets of $X$ are either empty, finite or equal to the whole real line $\Bbb{R}$ . For each point $(x, y)$ in a closed subset $C$ of $X \times X$ , either $(x', y) \in C$ for all $x'$ or for only finitely many $x'$ , and either $(x, y') \in C$ for all $y'$ or for only finitely many $y'$ . It follows that, if the projection of $C$ onto the $x$ -axis is infinite, then $C$ contains an entire horizontal line, and if the projection onto the $y$ -axis is infinite, then $C$ contains an entire vertical line. If $X \times X \setminus \Delta_X = C \cup D$ , where $C$ and $D$ are closed in the subspace topology on $X \times X \setminus \Delta_X$ , then the projections of both $C$ and $D$ onto the coordinate axes are surjective, and this implies that $C$ and $D$ cannot be disjoint.
|
|general-topology|compactness|examples-counterexamples|connectedness|
| 0
|
Does every finitely generated dense subgroup of $ SU(n) $ contain a free subgroup?
|
I read in On the spectral gap for finitely-generated subgroups of SU(2) that every finitely generated dense subgroup of $ SU(2) $ contains a free subgroup. Is it true in general that every finitely generated dense subgroup of $ SU(n) $ contains a free subgroup?
|
This follows easily from the Tits alternative , as pointed out by Moishe Kohan. Let $ G $ be a connected Lie group that is not solvable, and let $ \Gamma $ be a finitely generated dense subgroup of $ G $ . Since $ G $ is not solvable and $ \Gamma $ is dense in $ G $ , $ \Gamma $ cannot be solvable or virtually solvable (a finite index subgroup of a dense group is still dense). So by the Tits alternative, $ \Gamma $ must contain a nonabelian free group.
|
|group-theory|representation-theory|lie-groups|free-groups|
| 0
|
How to pick a measurable set $U$ s.t. $\mathcal N^c \subseteq U\subseteq I $ and $m_*(U)<1-\varepsilon, \forall\epsilon\gt 0$
|
In this answer , there's the following line: If $m_*(\mathcal N^c)<1$ , then $\forall\ \varepsilon>0$ , $\exists\ U$ measurable s.t. $\mathcal N^c \subset U \subset I$ and $m_*(U)<1-\varepsilon$ . (Here $m_*$ denotes outer measure.) I don't see immediately why such a measurable set $U$ always exists. Can someone help by giving one such construction of $U$ for arbitrary $\epsilon$ ?
|
It should be $\exists\epsilon\gt 0$ instead of $\forall\epsilon\gt 0$ . This is because if such a set $U$ existed for arbitrary $\epsilon\gt 0$ then: $$ (1-\epsilon)=(1-\epsilon) + 0\gt m_*(U)+m_*(U^c)\ge m_*(U\cup U^c)=1, \quad\forall\epsilon, $$ which is false. Also $\exists\epsilon\gt 0$ is reasonable since, from the assumption that $m_*(\mathcal N^c)\lt 1$ , we can show that such a measurable set $U$ exists. From the definition of $m_*$ we have: $$ m_*(\mathcal N^c) =\inf\left\{ \sum_{n=1}^{\infty}\ell(I_n): I_n\text{ are open intervals and }\mathcal N^c\subseteq\bigcup_{n=1}^{\infty} I_n\right\}\lt 1. $$ So it must be the case that there exists some $\{I_n\}$ such that $\mathcal N^c\subseteq\bigcup_{n=1}^{\infty} I_n$ and $\sum_{n=1}^{\infty}\ell(I_n)\lt 1$ , otherwise $m_*(\mathcal N^c)\ge1$ , which contradicts the assumption. Choosing $U=\bigcup_{n=1}^{\infty} I_n\cap[0, 1]$ and $0\lt\epsilon\lt 1-\sum_{n=1}^{\infty}\ell(I_n)$ satisfies the desired conditions.
|
|measure-theory|lebesgue-measure|
| 0
|
Evaluate: $\int_{-1}^{1}\ln\left(\frac{1+t}{1-t}\right)\frac{1}{1-at}dt$
|
The value of $$\int_{-1}^{1}\ln\left(\frac{1+t}{1-t}\right)\frac{1}{1-at}dt$$ (where $0<a<1$ ) is equal to (A) $\frac{1}{2a}\left(\ln(\frac{1-a}{1+a})\right)^2$ (B) $\frac{1}{2a}\ln(\frac{1+a}{1-a})$ (C) $\frac{1}{a}\ln(\frac{1+a}{1-a})$ (D) $\frac{1}{2a}\left(\ln(\frac{1+a}{1-a})\right)^2$ (E) $\frac{1}{a}\left(\ln(\frac{1-a}{1+a})\right)^2$ My Attempt $$I=\int_{-1}^{1}\frac{\ln(1+t)}{1-at}dt-\int_{-1}^{1}\frac{\ln(1-t)}{1-at}dt=\int_{-1}^{1}\frac{\ln(1+t)}{1-at}dt-\int_{-1}^{1}\frac{\ln(1+t)}{1+at}dt$$ $$I=\int_{-1}^{1}\ln(1+t)\left(\frac{1}{1-at}-\frac{1}{1+at}\right)dt=\int_{-1}^{1}\ln(1+t)\left(\frac{2at}{1-a^2t^2}\right)dt$$ After this I am not able to proceed. Is this approach correct, I wonder.
|
$$ \begin{aligned} \int_{-1}^1\ln\left(\frac{1+t}{1-t}\right)\frac{1}{1-at}\mathrm dt&=\int_{-1}^1\ln(1+t)\frac{2at}{1-a^2t^2}\mathrm dt\\ &=-2a\int_{-1}^1 t\sum_{k\geq 1}\frac{(-1)^kt^{k}}{k}\sum_{n\geq 0}a^{2n}t^{2n}\mathrm dt\\ &=-2\sum_{k\geq 1}\sum_{n\geq 0}\frac{(-1)^ka^{2n+1}}{k}\int_{-1}^1t^{2n+k+1}\mathrm dt\\ &=2\sum_{k\geq 0}\sum_{n\geq 0}\frac{(-1)^ka^{2n+1}}{k+1}\int_{-1}^1t^{2n+k+2}\mathrm dt \end{aligned} $$ In view of $$\int_{-1}^1t^{2n+k+2}\,\mathrm dt=\left\{ \begin{aligned} &0,\ 2n+k+2\ \text{odd}\\ &\frac{2}{2n+k+3},\ 2n+k+2\ \text{even} \end{aligned} \right. ,$$ only the terms with even $k$ will "survive", because $2n+k+2=$ even $+\,k$ is even iff $k$ is even. So we'll transform $k\to 2k$ : $$ \begin{aligned} 2\sum_{k\geq 0}\sum_{n\geq 0}\frac{a^{2n+1}}{2k+1}\int_{-1}^1t^{2(n+k+1)}\mathrm dt&=4\sum_{k\geq 0}\sum_{n\geq 0}\frac{a^{2n+1}}{(2k+1)(2k+2n+3)}\\ &= (...)\\ &=2\sum_{i\geq 0}\sum_{j\geq 0}\frac{a^{2i+2j+1}}{(2i+1)(2j+1)}\\ &=\frac{1}{2a}\left(\ln\left(\frac{1+a}{1-a}\right)\right)^2 \end{aligned} $$
|
|real-analysis|calculus|integration|definite-integrals|improper-integrals|
| 0
|
Evaluate the improper integral $\int_0^\infty \ln(1-e^{-x})e^{-ax}x^bdx$
|
Evaluate the integral $$\int_0^\infty \ln(1-e^{-x})e^{-ax}x^bdx, \quad a,b>0.$$ I'm trying to use substitution $t=1-e^{-x}$ and integration by parts. Also I've tried with Gamma function in parts. But I don't get anywhere. Any help will be appreciated.
|
Here's a closed-form solution for nonnegative integers $b$ , in terms of the gamma and polygamma functions, that uses differentiating under the integral sign twice. First consider the case $b = 0$ . Substituting $u := e^{-x}, du = -e^{-x} \,dx$ , transforms the integral to $$\int_0^\infty e^{-a x} \log(1 - e^{-x}) \,dx = \int_0^1 u^{a - 1} \log(1 - u) \,du .$$ To evaluate this integral, we differentiate under the integral sign. Define $$I(t) := \int_0^1 u^{a - 1} (1 - u)^t \,du ,$$ so that $$I'(t) = \int_0^1 u^{a - 1} (1 - u)^t \log(1 - u) \,du,$$ and in particular so that the desired integral is $I'(0)$ . The usual definition of the beta function and the formula for expressing the beta function , $\mathrm{B}$ , in terms of the gamma function , $\Gamma$ , gives $$I(t) = \mathrm{B}(a, t + 1) = \frac{\Gamma(a) \Gamma(t + 1)}{\Gamma(a + t + 1)} .$$ Differentiating with respect to $t$ gives $$(\psi(t + 1) - \psi(a + t + 1)) \frac{\Gamma(a) \Gamma(t + 1)}{\Gamma(a + t + 1)} ,$$ where $\psi$ denotes the digamma function.
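As a quick numerical sanity check of the $b=0$ case (not part of the original answer), one can compare the integral against $I'(0)=(\psi(1)-\psi(a+1))/a$ with SciPy; the value $a=2$ is an arbitrary choice, for which the closed form reduces to $-3/4$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import psi  # digamma

a = 2.0  # arbitrary test value
# Left-hand side: the b = 0 integral, computed numerically.
numeric, _ = quad(lambda x: np.exp(-a * x) * np.log(1.0 - np.exp(-x)), 0, np.inf)
# Right-hand side: I'(0) with I(t) = B(a, t + 1).
closed_form = (psi(1.0) - psi(a + 1.0)) / a  # equals -3/4 for a = 2
```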
|
|calculus|integration|definite-integrals|improper-integrals|
| 0
|
How to construct a complex function series that converges uniformly on a closed region but its derivative does not converge uniformly on the boundary?
|
I'm looking for an example of a series of complex functions $\{f_n(z)\}$ with the following properties: The series $\sum_{n=1}^{\infty} f_n(z)$ converges uniformly on a closed region $D$ in the complex plane, including its boundary. The derivative series $\sum_{n=1}^{\infty} f_n'(z)$ converges uniformly on any closed subregion within $D$ , but not on the boundary of $D$ . To clarify, $D$ could be, for instance, the closed unit disk. I am interested in an explicit construction of such a series where the uniform convergence of the derivative series breaks down precisely at the boundary of $D$ , while maintaining uniform convergence on any closed subregion inside $D$ . I have been pondering for hours how to construct such a series and whether it might involve some delicate balancing of the growth rates of the terms in the series or some other non-trivial feature of complex analysis. Any insights or specific examples that satisfy these conditions would be greatly appreciated.
|
On the unit disk, take a continuous function whose Fourier series does not converge uniformly, say $\sum a_n e^{in\theta}$ . Then integrate it, i.e. look at $\sum \frac{a_n}{n} e^{in\theta}$ . By the Cauchy–Schwarz inequality the new Fourier coefficients are in $\ell_1$ , so this series converges uniformly on the whole closed disk.
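The key step, Cauchy–Schwarz turning a square-summable coefficient sequence $(a_n)$ into the absolutely summable sequence $(a_n/n)$, can be illustrated numerically; the coefficient sequence below is an arbitrary choice of mine:

```python
import numpy as np

# An arbitrary square-summable coefficient sequence (truncated at N terms).
rng = np.random.default_rng(0)
n = np.arange(1, 10_001)
a = rng.standard_normal(n.size) / n ** 0.6

lhs = np.sum(np.abs(a) / n)                                    # sum |a_n| / n
rhs = np.sqrt(np.sum(a ** 2)) * np.sqrt(np.sum(1.0 / n ** 2))  # Cauchy-Schwarz bound
```

Since $\sum 1/n^2$ converges, the bound stays finite no matter how slowly $(a_n)$ decays within $\ell_2$.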
|
|sequences-and-series|complex-analysis|convergence-divergence|uniform-convergence|divergent-series|
| 1
|
How should I derive the relationship $\lambda_{k}=\int_{1}^{2}x^2y_{k}'^2dx, k=1, 2, ...$?
|
a) Show that the equation and boundary conditions $\frac{d}{dx}(x^2\frac{dy}{dx})+\lambda xy=0, y(1)=0, y'(2)=0$ form a regular Sturm-Liouville system. Show further that the system can be written as a constrained variational problem with functional $S[y]=\int_{1}^{2}x^2y'^2dx, y(1)=0$ , and constraint $C[y]=\int_{1}^{2}xy^2dx=1$ . b) Assume that the eigenvalues $\lambda_{k}$ and eigenfunctions $y_{k}, k=1, 2, ...,$ exist. Working from equation ( $\frac{d}{dx}(x^2\frac{dy}{dx})+\lambda xy=0, y(1)=0, y'(2)=0)$ , derive the relationship $\lambda_{k}=\int_{1}^{2}x^2y_{k}'^2dx, k=1, 2, ...$ . c) Using the trial function $z=Asin(\pi(x-1)/2)$ , show that the smallest eigenvalue, $\lambda_{1}$ , satisfies the inequality $\lambda_{1}\leq\frac{(7\pi^2-18)\pi^2}{6(4+3\pi^2)}$ . Justify your answer briefly. Here's my work: a) Consider the equation and boundary conditions $\frac{d}{dx}(x^2\frac{dy}{dx})+\lambda xy=0, y(1)=0, y'(2)=0$ . By definition, a regular Sturm-Liouville system is defined to b
|
For part (b), multiplying $(x^2y')' + \lambda xy = 0$ by $y$ and integrating by parts, we find \begin{align*} \lambda\int_1^2 xy^2\, dx & = -\int_1^2 (x^2y')'y\, dx = -x^2y'y\bigg|_1^2 + \int_1^2 x^2(y')^2\, dx = \int_1^2 x^2(y')^2\, dx, \end{align*} where the boundary term vanishes due to the boundary conditions. For part (c), note that the trigonometric identity for $\sin(a - b) = \sin(a)\cos(b) - \cos(a)\sin(b)$ permits us to rewrite $z = -A\cos(\pi x/2)$ . You can compute the integral $\int_1^2 x\cos^2(\pi x/2)\, dx$ using the identity $\cos^2\theta = \dfrac{1 + \cos(2\theta)}{2}$ and integration by parts. Added: The integral becomes $\int_1^2 \frac{x}{2}\left(1 + \cos(\pi x)\right) dx = \int_1^2 \frac{x}{2}\, dx + \int_1^2 \frac{x}{2}\cos(\pi x)\, dx$ . The first integral is now straightforward. For the second integral, integrating by parts with $u = x/2$ and $dv = \cos(\pi x)\, dx$ , we find \begin{align*} \int_1^2 \frac{x}{2}\cos(\pi x)\, dx = \frac{x}{2}\cdot \frac{\sin(\pi x)}{\pi}\bigg|_1^2 - \frac{1}{2\pi}\int_1^2 \sin(\pi x)\, dx = 0 + \frac{1}{\pi^2} = \frac{1}{\pi^2}. \end{align*}
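As a numerical sketch of part (c) (my own check, with $A=1$ since the Rayleigh quotient is independent of $A$), one can evaluate $S[z]/C[z]$ for the trial function directly and compare it with the stated bound:

```python
import numpy as np
from scipy.integrate import quad

z = lambda x: np.sin(np.pi * (x - 1) / 2)             # trial function (A = 1)
dz = lambda x: (np.pi / 2) * np.cos(np.pi * (x - 1) / 2)

S, _ = quad(lambda x: x ** 2 * dz(x) ** 2, 1, 2)      # S[z] = int x^2 z'^2 dx
C, _ = quad(lambda x: x * z(x) ** 2, 1, 2)            # C[z] = int x z^2 dx

rayleigh = S / C                                      # upper bound for lambda_1
bound = (7 * np.pi ** 2 - 18) * np.pi ** 2 / (6 * (4 + 3 * np.pi ** 2))
```

Both numbers come out to roughly $2.50$, consistent with $\lambda_1 \le \frac{(7\pi^2-18)\pi^2}{6(4+3\pi^2)}$.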
|
|calculus|calculus-of-variations|
| 1
|
About $x$ own potenciation (the tetration of $^\infty x$) with two real solutions
|
I was looking at a problem that deals with tetration. Here's the problem: $$\begin{split}x^{x^{x^{\ddots}}}=3 \\ x^3=3 \Rightarrow x= \sqrt[3]{3} \end{split}$$ However, I saw that the solution is $-\frac{3}{\ln(3)}W_{-1}\left(-\frac{3}{\ln(3)}\right)\approx 2.478$ , and I watched some videos which explain something like the convergence "test" with the bounds $e^{-e}$ and $e^{1/e}$ , and some involving the Lambert $W_{-1}$ function, but I don't understand this one. I tried to solve it with this logic: $$x^{x^{x^{\ddots}}}=n$$ So $$x^n=n\text{, which implies, $\forall n \in \mathbb{R}$,}\, x=\sqrt[n]{n}\text{.}$$ What's wrong with this logic?
|
The answer is simpler and no knowledge about the Lambert function is necessary. The infinite exponential tower $$x^{x^{x^\cdots}}$$ is defined as the limit of the sequence $$x,x^x,x^{x^x},x^{x^{x^x}},\cdots$$ when this sequence converges. This happens precisely for $x \in [e^{-e},e^{1/e}]$ . So those are the only real values of $x$ for which the expression $$x^{x^{x^\cdots}}$$ makes sense. That means that $$f(x)=x^{x^{x^\cdots}}$$ defines a function $f:[e^{-e},e^{1/e}] \to \Bbb R$ . The image of that function turns out to be $[1/e,e]$ . Since $3\not\in [1/e,e]$ , that simply means that $$x^{x^{x^\cdots}} = 3$$ has no solution. But your reasoning works if you choose a number in the image of the function. For any $y\in [1/e,e]$ , the solution of $$x^{x^{x^\cdots}} = y$$ is in fact $x=\sqrt[y]y$ .
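A short numerical sketch of why $x=\sqrt[3]{3}$ fails: it happens to lie just inside $[e^{-e},e^{1/e}]$ (since $\sqrt[3]{3}\approx 1.4422 < e^{1/e}\approx 1.4447$), so the tower converges, but to roughly $2.478$, not $3$:

```python
x = 3 ** (1 / 3)          # the naive "solution" of the tower equation = 3
y = x
for _ in range(10_000):   # iterate x, x^x, x^(x^x), ...
    y = x ** y
# y is now (numerically) the value of the infinite tower at x = 3^(1/3)
```

The limit is the solution of $x^y=y$ that lies in $[1/e,e]$; the other solution of that fixed-point equation is $y=3$, which the iteration never reaches.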
|
|real-analysis|lambert-w|tetration|
| 0
|
Composition of convex function and affine function
|
Let $g: E^{m} \rightarrow E^{1}$ be a convex function, and let $h: E^{n} \rightarrow E^{m} $ be an affine function of the form $h(x)=Ax+b$, where $A$ is an $m \times n$ matrix and $b$ is an $m \times 1 $ vector. Then, show that the composite function $f : E^n \rightarrow E^{1} $ defined as $f(x)=g(h(x))$ is a convex function. Also, assuming twice differentiability of $g$, derive the expression for the hessian of $f$
|
Affine composition Suppose $g$ is convex; then $f(x) = g(Ax + b)$ is convex. Proof: \begin{align*} epi f &= \{(x, t ): f(x ) \le t\}\\ & = \{(x, t ): g(Ax + b ) \le t\}\\ & = \{(x,t ) : (Ax + b,t ) \in epi(g)\}\\ & = \{(x,t ) : h(x,t) \in epi(g)\}\\ & = h^{-1} epi(g) \end{align*} Define $h$ as the function (affine, actually) that maps $(x,t )$ to $(Ax + b, t)$ . It's affine because \begin{align*} h(\theta(x_1,t_1) + (1-\theta)(x_2,t_2)) &= \left( A(\theta x_1 + (1-\theta)x_2) + b, \theta t_1 + (1-\theta)t_2 \right) \\ &= \theta(Ax_1 + b, t_1) + (1-\theta)(Ax_2 + b, t_2)\\ &= \theta h(x_1,t_1) + (1-\theta)h(x_2,t_2) \end{align*} Therefore, $epi(f)$ is the inverse image of $epi(g)$ under $h$ . Since $epi(g)$ is convex and $h$ is affine, we obtain that $epi(f)$ is a convex set and thus $f$ is a convex function (refer to 2.3.2 Affine functions in Boyd, Convex Optimization).
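A quick numerical illustration of the statement (with $g$ the Euclidean norm and arbitrary $A$, $b$, $x_1$, $x_2$ of my choosing), checking the convexity inequality for $f=g\circ h$ at a sample pair of points:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))   # arbitrary affine map R^2 -> R^3
b = rng.standard_normal(3)

g = np.linalg.norm                # a convex function on R^3
f = lambda x: g(A @ x + b)        # the composition f = g o h

x1 = rng.standard_normal(2)
x2 = rng.standard_normal(2)
theta = 0.3
lhs = f(theta * x1 + (1 - theta) * x2)
rhs = theta * f(x1) + (1 - theta) * f(x2)   # convexity: lhs <= rhs
```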
|
|convex-analysis|function-and-relation-composition|
| 0
|
Concave function and slope property
|
Let $f:[a,b]\rightarrow \mathbb{R}$ be a function and define $f'_{+}(t)= \lim_{t'\rightarrow t^{+}}\frac {f(t')-f(t)}{t'-t}$ . Let $f:[0,1]\rightarrow \mathbb{R}$ be a concave function. Then this means $\frac{f(t)-f(0)}{t}\geq \frac{f(t')-f(0)}{t'}$ for $0<t<t'$ . Then, as the slope here is a decreasing function, $f'_{+}(t)$ is equal to the supremum of $\frac{f(t')-f(t)}{t'-t}$ over all $t'>t$ . So why is the following true: $f'_{+}(t)\geq \frac{f(s)-f(t)}{s-t}$ ?
|
$f$ concave means $f(x+t(y-x))\geq f(x)+t(f(y)-f(x))$ So $f(y)-f(x)\leq$ $\lim_{t\rightarrow 0+} \frac{f(x+t(y-x))-f(x)}{t}$ Write $h=t(y-x)$ . Then, $t\rightarrow 0$ iff $h\rightarrow 0$ . So, $\lim_{t\rightarrow 0^{+}}\frac{f(x+t(y-x))-f(x)}{t}=\lim_{h\rightarrow 0^{+}}\frac{(f(x+h)-f(x))(y-x)}{h}=f'_{+}(x)(y-x)$ So, $f'_{+}(x)(y-x)\geq f(y)-f(x)$
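The resulting gradient inequality $f'_{+}(x)(y-x)\geq f(y)-f(x)$ can be spot-checked numerically, e.g. with the concave function $f=\ln$ on $(0,\infty)$ (an example of my choosing):

```python
import math

f = math.log                   # concave on (0, infinity)
fprime = lambda x: 1.0 / x     # its derivative

# f'(x) (y - x) >= f(y) - f(x) should hold for every pair x, y > 0.
pairs = [(1.0, 3.0), (2.0, 0.5), (5.0, 5.5)]
checks = [fprime(x) * (y - x) >= f(y) - f(x) for x, y in pairs]
```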
|
|real-analysis|solution-verification|convex-analysis|convex-geometry|
| 0
|
Intersection of Submodules inside a Tensor Product
|
Let $R \subset S$ be a flat, injective extension of commutative rings, and $M \subset N $ an inclusion of $R$ -modules. We identify $M \otimes_R 1_S \subset N \otimes_R 1_S \subset N \otimes_R S$ as $R$ -submodules. Question: Is there an immediate argument to show that $M \otimes_R 1_S = (N \otimes_R 1_S) \cap M \otimes_R S $ ? Note that if $R$ were a field, then $N \otimes_R S$ would have a basis consisting of $\{s_i \otimes n_j \}$ , where $\{s_i\}_{i \in I}$ form a basis of $S$ over $R$ and $\{n_j\}_{j \in J}$ is an $R$ -basis of $N$ , and we could make a coefficient-comparison argument for an element from $(N \otimes_R 1_S) \cap M \otimes_R S $ . But for general $R$ and modules $M,N$ such an argument using basis decomposition does not work, due to the absence of well-defined bases. How else to prove the claim? Is it at all true?
|
The claim is false in general. Here's a counterexample: $R = \mathbb{Z}$ , $M = \mathbb{Z}$ , $N = \mathbb{Q}$ , $S = \mathbb{Q}$ . However, what you mentioned is true if $R \subset S$ is faithfully flat . Indeed, the below commutative diagram has the top row being exact. The flatness of $S$ tells you that the bottom row is also exact. Moreover, the vertical tensor maps are also injective because of the stronger assumption of faithful flatness. Now, what you want to show is that if something in $N \otimes_R S$ comes from both $N$ and $M \otimes_R S$ , then it in fact comes from $M$ . Indeed, this is clear from exactness and diagram chasing.
|
|abstract-algebra|commutative-algebra|tensor-products|
| 1
|
Evaluate: $\int_{-1}^{1}\ln\left(\frac{1+t}{1-t}\right)\frac{1}{1-at}dt$
|
The value of $$\int_{-1}^{1}\ln\left(\frac{1+t}{1-t}\right)\frac{1}{1-at}dt$$ (where $0<a<1$ ) is equal to (A) $\frac{1}{2a}\left(\ln(\frac{1-a}{1+a})\right)^2$ (B) $\frac{1}{2a}\ln(\frac{1+a}{1-a})$ (C) $\frac{1}{a}\ln(\frac{1+a}{1-a})$ (D) $\frac{1}{2a}\left(\ln(\frac{1+a}{1-a})\right)^2$ (E) $\frac{1}{a}\left(\ln(\frac{1-a}{1+a})\right)^2$ My Attempt $$I=\int_{-1}^{1}\frac{\ln(1+t)}{1-at}dt-\int_{-1}^{1}\frac{\ln(1-t)}{1-at}dt=\int_{-1}^{1}\frac{\ln(1+t)}{1-at}dt-\int_{-1}^{1}\frac{\ln(1+t)}{1+at}dt$$ $$I=\int_{-1}^{1}\ln(1+t)\left(\frac{1}{1-at}-\frac{1}{1+at}\right)dt=\int_{-1}^{1}\ln(1+t)\left(\frac{2at}{1-a^2t^2}\right)dt$$ After this I am not able to proceed. Is this approach correct, I wonder.
|
\begin{align} &\int_{-1}^1\ln\left(\frac{1+t}{1-t}\right)\frac{1}{1-at}\ \overset{t\to \frac{1-t}{1+t}}{dt}\\ =& -\frac2{1+a}\int_0^\infty \frac{\ln t}{(t+1)(t+ \frac{1-a}{1+a})} \overset{t\to \frac{1-a}{1+a}\frac1t}{dt}\\ =& -\frac2{1+a}\int_0^\infty \frac{\ln \frac{1-a}{1+a}-\ln t}{(t+1)(t+ \frac{1-a}{1+a})} {dt}\\ =& -\frac{\ln \frac{1-a}{1+a}}{1+a}\int_0^\infty \frac{1}{(t+1)(t+ \frac{1-a}{1+a})} {dt}\\ =& -\frac{\ln \frac{1-a}{1+a}}{1+a} \bigg( -\frac{1+a}{2a}{\ln\frac{1-a}{1+a}}\bigg)= \frac{1}{{2a}} \ln^2\frac{1-a}{1+a} \end{align}
|
|real-analysis|calculus|integration|definite-integrals|improper-integrals|
| 1
|
For an affine open subscheme $\operatorname{Spec}A$, $\{ \operatorname{ker}(A_q \to (A_0)_q )\} = \{ \operatorname{minimal prime ideals of} A_q \}$?
|
Let $V:=\operatorname{Spec}A$ be an affine open subscheme of a scheme $Y$ with $x=\mathfrak{q}_x\in V$ . Consider the set $$\{ Y_0 \cap V : Y_0 \text{ is an irreducible component of } Y \text{ containing } x=\mathfrak{q} \} .$$ Then it is the set of irreducible components of $V:=\operatorname{Spec}A$ containing $x$ . Furthermore, we can endow the reduced subscheme structure on each closed subset $Y_0 \cap V = \operatorname{Spec}(A_0 := A/J_0)$ of $V$ . We know that the set of irreducible components of $V$ containing $x$ is in bijective correspondence with the set of minimal prime ideals of $\mathcal{O}_{V,x} = A_{\mathfrak{q}}$ ( $\because$ Theorem 3.33 in https://virtualmath1.stanford.edu/~conrad/216APage/handouts/irreddim.pdf ) My question is: is it then furthermore true that $$\{ \ker(A_{\mathfrak{q}} \to (A_0)_{\mathfrak{q}}) : Y_0 \text{ is an irreducible component of } Y \text{ containing } x=\mathfrak{q}_x\} = \{ \text{minimal prime ideals of } A_{\mathfrak{q}} \}?$$
|
O.K. I think that I solved the problem. I leave an answer as a record. For simplicity, let $X:=\operatorname{Spec}A$ be an affine scheme with $x:=\mathfrak{p} \in X$ . By Hungerford, Algebra, p. 147, Theorem 4.11-(i), there exists a bijection: $$\psi : \{ \text{prime ideals } Q\subseteq A \text{ which are contained in } \mathfrak{p}\} \to \{\text{prime ideals of } A_{\mathfrak{p}} \}$$ $$ Q \mapsto Q_{\mathfrak{p}}$$ And via this bijection, minimal prime ideals correspond to minimal prime ideals, so we also obtain a bijection: $$\psi : \{ \text{minimal prime ideals } Q\subseteq A \text{ which are contained in } \mathfrak{p}\} \to \{\text{minimal prime ideals of } A_{\mathfrak{p}} \}$$ $$ Q \mapsto Q_{\mathfrak{p}} $$ (True?) So, $$\{\text{minimal prime ideals of } A_{\mathfrak{p}} \} = \{ Q_{\mathfrak{p}} : Q \text{ is a minimal prime ideal of } A \text{ such that } Q \subseteq \mathfrak{p}\} \tag{1}.$$ Consider an i
|
|algebraic-geometry|
| 1
|
General formula for $\int_0^{\pi/2} \tan^{\alpha}(x) dx$?
|
There are already questions about how to find $\int \tan^{1/2}(x) dx$ . But how does one derive a general formula for $\int_0^{\pi/2} \tan^{\alpha}(x) dx$ (which converges if $|\alpha|<1$ )? More details: I have to evaluate this integral because I want to derive Euler's reflection formula for gamma functions. The integral above is the result of using the substitution $v=\tan^2(x)$ in $\int_0^\infty \frac{v^\beta}{1+v} dv$
|
$$ \begin{aligned} \int_0^{\frac{\pi}{2}} \tan ^\alpha x d x & =\int_0^{\frac{\pi}{2}} \sin ^{ 2\left(\frac{\alpha+1}{2}\right)-1 }x \cos ^{2\left(\frac{1-\alpha}{2}\right)-1} x d x \\ & =\frac{1}{2} B\left(\frac{\alpha+1}{2}, \frac{1-\alpha}{2}\right) \\ & =\frac{\pi}{2} \csc \left(\pi\left(\frac{1-\alpha}{2}\right)\right)\quad (\textrm{Using reflection property of Beta function}) \\ & =\frac{\pi}{2} \sec \left(\frac{\pi \alpha}{2}\right) \end{aligned} $$
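A numerical sanity check of the closed form, written after the substitution $v=\tan x$ so that the integral becomes $\int_0^\infty \frac{v^\alpha}{1+v^2}\,dv$ (the value $\alpha=1/2$ is an arbitrary choice of mine):

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.5  # arbitrary exponent with |alpha| < 1
# After v = tan(x), the integral becomes int_0^inf v^alpha / (1 + v^2) dv.
numeric, _ = quad(lambda v: v ** alpha / (1 + v * v), 0, np.inf)
closed_form = (np.pi / 2) / np.cos(np.pi * alpha / 2)  # (pi/2) sec(pi alpha / 2)
```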
|
|calculus|integration|improper-integrals|gamma-function|
| 0
|
Property of $p$-variations in probability
|
Let $X$ be a continuous process. Suppose that for some $p, t > 0$ , the $p$ -th variation over $[0, t]$ exists and is finite: along any sequence of partitions $\pi_n = \{0 = t_0 < t_1 < \dots < t_{m_n} = t\}$ of $[0, t]$ with mesh size $\|\pi_n\| \to 0$ , $$\sum_{t_k \in \pi_n} |X_{t_k} - X_{t_{k-1}}|^p \to L_t, \quad n \to \infty,$$ where $L_t$ is almost surely finite and the convergence holds in probability. Prove that for $p' > p$ , the $p'$ -th variation of $X$ over $[0, t]$ equals $0$ almost surely. Moreover, prove that for $p' < p$ , the $p'$ -th variation equals $+\infty$ on the set $\{L_t > 0\}$ . I have an attempted solution for the first part, and it is as follows. Given the $p$ -th variation over $[0, t]$ converges to $L_t$ for some $p, t > 0$ , we consider a partition $\pi_n = \{0 = t_0 < t_1 < \dots < t_{m_n} = t\}$ of $[0, t]$ with mesh size $\|\pi_n\| \to 0$ . For $p' > p$ , we examine the $p'$ -th variation: $$V^{p'}_n = \sum_{t_k \in \pi_n} |X_{t_k} - X_{t_{k-1}}|^{p'}.$$ Given $p' > p$ and $|X_{t_k} - X_{t_{k-1}}| \geq 0$ , propert
|
We prove the first and second parts separately. First write $$s_t(X; \delta) := \sup\{|X_r - X_s| : 0 \leq r, s \leq t \text{ with } |r-s| \leq \delta\}.$$ Then since $X$ is uniformly continuous on $[0, t]$ and $\|\pi_n\| \to 0$ , we must have $s_t(X; \|\pi_n\|) \to 0$ almost surely. In particular if $p' > p$ , $$\sum_{t_k \in \pi_n} |X_{t_k} - X_{t_{k-1}}|^{p'} \leq \sum_{t_k \in \pi_n} |X_{t_k} - X_{t_{k-1}}|^{p} \cdot s_t^{p'-p}(X; \|\pi_n\|).$$ As a result, $\sum_{t_k \in \pi_n} |X_{t_k} - X_{t_{k-1}}|^{p'}\to 0$ in probability since $L_t$ is almost surely finite. This gives us that the $p'$ -th variation of $X$ over $[0, t]$ almost surely equals $0$ . For the second part, take $p' < p$ and $\Pr[L_t > 0] > 0$ . For the sake of contradiction, suppose that the $p'$ -th variation is not almost surely infinite on the set $\{L_t > 0\}$ , that is, there exists a sequence of partitions $\pi_n$ with mesh size $\|\pi_n\| \to 0$ such that $$\Pr\left[\sum_{t_k \in \pi_n} |X_{t_k} - X_{t_{k-1}}|^{p'}
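The dichotomy can be illustrated with simulated Brownian increments, for which $p=2$ and $L_t=t$ (an illustration of mine, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(42)
n, t = 200_000, 1.0
dX = rng.normal(0.0, np.sqrt(t / n), size=n)  # Brownian increments on [0, t]

var_2 = np.sum(np.abs(dX) ** 2)  # p = 2: close to L_t = t = 1
var_3 = np.sum(np.abs(dX) ** 3)  # p' > p: tends to 0 as the mesh shrinks
var_1 = np.sum(np.abs(dX))       # p' < p: blows up as the mesh shrinks
```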
|
|probability-theory|
| 1
|
tensor product of a graded vector space and an object in k-linear category
|
In the book "Fourier-Mukai Transforms in Algebraic Geometry", the author has been using the following terminology quite a few times in the first two chapters (Proof of Lemma 1.58, Definition 2.72, etc.), and I can't make sense of it. After digging further, I think here is the correct way to phrase the question: Fix a field $k$ and let $\mathscr{A}$ be a $k$ -linear abelian (or triangulated) category. Let $V^\bullet$ be a ( $\mathbb{Z}$ -)graded vector space and $E \in \mathscr{A}$ . Consider $E$ as a complex concentrated at degree 0 and define $$V^\bullet \otimes_k E = \bigoplus_i V^i \otimes_k E[-i]$$ What doesn't make sense to me is the tensor product on the right (and hence the left). Shifted or not, you are taking the tensor product of a vector space with an abstract object in a category. How can that even make sense?
|
One way to think of this is $$ V \otimes_k E := \bigoplus_{i=1}^{\dim(V)} E. $$ Another (functorial) point of view is that $$ - \otimes_k E \colon \mathrm{Vect} \to \mathcal{A} $$ is the left adjoint functor to $$ \operatorname{Hom}(E, -) \colon \mathcal{A} \to \mathrm{Vect}. $$
|
|algebraic-geometry|category-theory|tensor-products|triangulated-categories|
| 0
|
Generalization of the fact that pre-images of a function preserve more set operations than images
|
Given a function $f: A \rightarrow B$ , images of subsets of $A$ preserve only inclusion and union of sets, whereas pre-images of subsets of $B$ are better behaved, so to speak, and preserve (in addition to the above) intersections and differences of sets as well. I have both the intuition and the proofs for this statement. I am trying to understand what's the deeper truth behind this. I think that given a set $C$ , a collection of sets $\mathcal{D}$ , and a function $g:C\rightarrow\mathcal{D}$ with the property that for every $c_1,c_2 \in C$ we have that $c_1\neq c_2 \implies g(c_1) \cap g(c_2) = \emptyset$ , then $g$ preserves all set operations on subsets of $C$ . The pre-image is clearly such a function from individual elements of (or singleton sets in) $B$ to the powerset of $A$ , so it follows that the pre-image applied to arbitrary subsets of $B$ (not necessarily singletons) preserves set operations. Is this correct? Is this the deepest / most general intuition one can have as to why the
|
Your intuition is correct and I will just make it more formal by first giving the appropriate definitions and then by summarizing, in a second part, the behaviour of functions and relations with respect to set operations. The proofs are omitted, but please ask me if you need them. Definitions on relations . Let $E$ and $F$ be two sets. A relation on $E$ and $F$ is a subset of $E \times F$ . A relation $\tau$ can also be viewed as a function from $E$ to ${\cal P}(F)$ by setting, for each $x \in E$ , $$ \tau(x) = \{ y \in F \mid (x, y) \in \tau \} $$ By abuse of language, we say that $\tau$ is a relation from $E$ to $F$ . The inverse of a relation $\tau \subseteq E \times F$ is the relation $\tau^{-1} \subseteq F \times E$ defined by $$ \tau^{-1} = \{ (y, x) \in F \times E \mid (x, y) \in \tau \} $$ It can be also viewed as a function from $F$ to ${\cal P}(E)$ defined by $$ \tau^{-1}(y) = \{x \in E \mid y \in \tau(x) \} $$ A relation from $E$ to $F$ can be extended to a function from ${\
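The asymmetry is easy to see on a toy example: a non-injective $f$ breaks preservation of intersections for images, while preimages always commute with $\cap$. A minimal sketch (the function and sets are my own choices):

```python
# f collapses 1 and 2 to the same point, so it is not injective.
f = {1: 'a', 2: 'a', 3: 'b'}

def image(S):
    return {f[x] for x in S}

def preimage(T):
    return {x for x in f if f[x] in T}

# Images: f(A1 & A2) can be strictly smaller than f(A1) & f(A2).
A1, A2 = {1}, {2}
img_of_cap = image(A1 & A2)         # image of the empty intersection: empty
cap_of_img = image(A1) & image(A2)  # {'a'}: strictly bigger

# Preimages commute with intersections.
B1, B2 = {'a'}, {'b'}
pre_of_cap = preimage(B1 & B2)
cap_of_pre = preimage(B1) & preimage(B2)
```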
|
|functions|elementary-set-theory|relations|
| 0
|
what symbol allows you to start at a number, repeat an expression a certain amount of times, but just sets a variable instead of summing the answer?
|
This is very difficult to explain, but what I'm looking for is kind of like sigma notation, except I want the middle to do something like "x=n+1" instead of doing (n+1)+(n+1)+(n+... etc. etc. So, as an explanation, I am in short trying to recreate the exponent symbol's function following these requirements: All defined blocks should start with variable "value" [0] and end with an updated value in the main value variable. All instructions must never contain "if" or "if/else" statements when referring to the actual math problem. All instructions that require prior defined blocks must never use the main value until the end value is calculated, to which it must be stored in the variable "value" as its last action. All instructions may not use math-related operation blocks from the editor, and should instead solve the operation/expression using only previously made defined blocks. (Aside from addition being the only exception in creating the first defined block.) All instructions must be compos
|
I think you mean iterating a function (continuing the comments). Say we have $f(x) = x^2 - 1$ , and let's say we start at a number, $x=\pi$ . Now, to repeat that operation, we iterate the function, which basically means plugging $f$ into $f$ like this: $$f(f(\pi)) = f(\pi^2 - 1)$$ Usually to repeat something $n$ times we have some handy notation: $$\underbrace{f(f(\cdots f(x)\cdots))}_{n \text{ applications of } f} = f^n (x)$$ However Desmos doesn't support $f^n (x)$ notation, so we have to just write $f(f(f(\cdots$ . Here's how you could do it in Desmos (this is the app, not the website, but I think they are the same):
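Outside Desmos, the same idea is a single loop; a minimal sketch of "repeat an expression $n$ times, updating a variable" (the function and names are my own):

```python
def iterate(f, n, x):
    """Apply f to x exactly n times: returns f^n(x)."""
    for _ in range(n):
        x = f(x)
    return x

f = lambda x: x ** 2 - 1
two_steps = iterate(f, 2, 3.0)   # f(f(3)) = f(8) = 63
```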
|
|measure-theory|discrete-mathematics|recreational-mathematics|
| 1
|
Find the laurent expansion
|
Find the Laurent expansion of $\dfrac{e^\frac{1}{z}}{z+1}$ in the domain $|z|>0$ . My attempt: Since we have to work on the domain $|z|>0$ , let $z=\dfrac{1}{t}$ , so I have to find the Laurent expansion of $\dfrac{te^t}{t+1}$ at $t=0$ . In particular, I have to find the Laurent expansion of $\dfrac{e^t}{t+1}$ at $t=0$ . I already knew that $$e^t=\sum\limits_{0}^{\infty}\dfrac{t^n}{n!},\quad\forall t\in\mathbb{C}$$ $$\dfrac{1}{t+1}=\sum\limits_{n=0}^{\infty}(-1)^nt^n,\quad\forall |t|<1$$ But I don't know how to find the product of these two series. Could someone help me, or suggest another way to deal with this problem? Thanks in advance!
|
The product of those two series – and they are really Taylor rather than Laurent swifties in $t$ – is the convolution: $$\frac{e^t}{1+t}=\sum_{n=0}^\infty t^n\sum_{k=0}^n\frac{(-1)^{n-k}}{k!}$$ Thus the Laurent series of the original function is $$\frac{e^{1/z}}{1+z}=\sum_{n=0}^\infty(1/z)^{n+1}\sum_{k=0}^n\frac{(-1)^{n-k}}{k!}$$
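The convolution coefficients $c_n=\sum_{k=0}^n\frac{(-1)^{n-k}}{k!}$ can be checked numerically against $e^t/(1+t)$ for small $t$ (a quick sketch, with $t=0.1$ an arbitrary choice):

```python
import math

def coeff(n):
    # n-th Taylor coefficient of e^t / (1 + t): sum_{k=0}^n (-1)^(n-k) / k!
    return sum((-1) ** (n - k) / math.factorial(k) for k in range(n + 1))

t = 0.1  # any |t| < 1
partial = sum(coeff(n) * t ** n for n in range(30))
exact = math.exp(t) / (1 + t)
```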
|
|complex-analysis|laurent-series|
| 0
|
Evaluate: $\int_{-1}^{1}\ln\left(\frac{1+t}{1-t}\right)\frac{1}{1-at}dt$
|
The value of $$\int_{-1}^{1}\ln\left(\frac{1+t}{1-t}\right)\frac{1}{1-at}dt$$ (where $0<a<1$ ) is equal to (A) $\frac{1}{2a}\left(\ln(\frac{1-a}{1+a})\right)^2$ (B) $\frac{1}{2a}\ln(\frac{1+a}{1-a})$ (C) $\frac{1}{a}\ln(\frac{1+a}{1-a})$ (D) $\frac{1}{2a}\left(\ln(\frac{1+a}{1-a})\right)^2$ (E) $\frac{1}{a}\left(\ln(\frac{1-a}{1+a})\right)^2$ My Attempt $$I=\int_{-1}^{1}\frac{\ln(1+t)}{1-at}dt-\int_{-1}^{1}\frac{\ln(1-t)}{1-at}dt=\int_{-1}^{1}\frac{\ln(1+t)}{1-at}dt-\int_{-1}^{1}\frac{\ln(1+t)}{1+at}dt$$ $$I=\int_{-1}^{1}\ln(1+t)\left(\frac{1}{1-at}-\frac{1}{1+at}\right)dt=\int_{-1}^{1}\ln(1+t)\left(\frac{2at}{1-a^2t^2}\right)dt$$ After this I am not able to proceed. Is this approach correct, I wonder.
|
Letting $x=\frac{1-t}{1+t}$ transforms the integral into $$ \begin{aligned} I & =-2 \int_0^{\infty} \frac{\ln x}{[(1-a)+(1+a) x](1+x)} d x \\ & =-\frac{2}{1+a} \underbrace{ \int_0^{\infty} \frac{\ln x}{\left(x+k\right)(1+x)} d x}_{J} \end{aligned} $$ where $k= \frac{1-a}{1+a}$ . Putting $x\mapsto\frac{k}{x} $ changes $$ \begin{aligned} J & =\int_0^{\infty} \frac{\ln k-\ln x}{\left(1+x\right)(x+k)} d x \\ & =\ln k \int_0^{\infty} \frac{d x}{\left(1+x\right)(x+k)}-J\\&= \frac{ \ln k}{2} \int_0^{\infty} \frac{d x}{\left(1+x\right)(x+k)}\\&=\frac{\ln k}{2} \cdot \frac{1}{k-1} \int_0^{\infty} \left( \frac{1}{1+x}-\frac{1}{x+k}\right) d x\\&= \frac{\ln ^2 k}{2(k-1)}\\&= \frac{\ln ^2\left(\frac{1-a}{1+a}\right)}{\frac{-4a}{1+a}} \end{aligned} $$ Hence $$ \boxed{I=\frac{1}{2 a} \ln ^2\left(\frac{1-a}{1+a}\right)} $$
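The boxed closed form is easy to confirm numerically with SciPy (the value $a=1/2$ is an arbitrary test choice of mine):

```python
import numpy as np
from scipy.integrate import quad

a = 0.5  # arbitrary value in (0, 1)
numeric, _ = quad(lambda t: np.log((1 + t) / (1 - t)) / (1 - a * t), -1, 1)
closed_form = np.log((1 - a) / (1 + a)) ** 2 / (2 * a)
```

For $a=1/2$ both sides equal $\ln^2 3 \approx 1.2069$, despite the logarithmic singularities of the integrand at $t=\pm 1$.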
|
|real-analysis|calculus|integration|definite-integrals|improper-integrals|
| 0
|
Algebraic equation of a specific six-degree polynomial
|
During my research, I was analyzing a linear system with a characteristic polynomial of sixth order, $p(x)=x^6+3ax^5+3a^2x^4+a^3x^3 - a^3b_1b_2b_3$ . I used Mathematica and it found all roots as the following: I was quite impressed, since it is not obvious to me how to factor this equation. I am curious how Mathematica finds these solutions. What is the algorithm? Thank you for reading!
|
It factors very nicely: $$p(x)=(x(x+a))^3-a^3b_1b_2b_3$$ So when $p(x)=0$ , $x(x+a)$ equals one of the cube roots of $a^3b_1b_2b_3$ , and then we can use the quadratic formula.
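A numerical spot-check of this factorisation (all parameter values below are arbitrary sample numbers): take the three cube roots $w$ of $a^3b_1b_2b_3$, solve $x^2+ax-w=0$ for each, and verify the six resulting numbers are roots of $p$:

```python
import cmath

a, b1, b2, b3 = 2.0, 1.5, -0.7, 3.0   # arbitrary sample parameters
p = lambda x: x**6 + 3*a*x**5 + 3*a**2*x**4 + a**3*x**3 - a**3*b1*b2*b3

c = a**3 * b1 * b2 * b3               # p(x) = (x(x + a))^3 - c
roots = []
for k in range(3):                    # the three cube roots w of c
    w = abs(c) ** (1 / 3) * cmath.exp(1j * (cmath.phase(c) + 2 * cmath.pi * k) / 3)
    d = cmath.sqrt(a * a + 4 * w)     # quadratic formula for x^2 + a x - w = 0
    roots += [(-a + d) / 2, (-a - d) / 2]

max_residual = max(abs(p(x)) for x in roots)
```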
|
|abstract-algebra|algebra-precalculus|
| 0
|
Any maximal linearly independent subset of free module $M$ has cardinality equal to the cardinality of a basis of $M$.
|
Let $R$ be integral domain and $M$ be an $R$ module. I want to prove if $M$ is a free module, then any maximal linearly independent subset of $M$ has cardinality equal to the cardinality of a basis of $M$ . Any maximal linearly independent subset of $M$ must have the same cardinality. So it only remains to prove the cardinality is equal to the cardinality of a basis of $M$ . So if $\{m_i\}_{i\in I}$ is a maximal linearly independent subset of $M$ , then we need to prove $\{m_i\}_{i\in I}$ is a basis of $M$ ? But how? Any helps would be appreciated greatly.
|
You could refer to the book: page 307, Lemma 1.2: Let $M$ be an $R$-module, and let $S \subseteq M$ be a linearly independent subset. Then there exists a maximal linearly independent subset of $M$ containing $S$. Page 311, Proposition 1.15: Let $R$ be an integral domain, and let $M$ be a free $R$-module; assume that $M$ is generated by $S$: $M = \langle S \rangle$. Then $S$ contains a maximal linearly independent subset of $M$. These two theorems imply that a basis of a module over an integral domain is a maximal linearly independent subset. But the reverse is not true. Integral domains satisfy the 'IBN (Invariant Basis Number) property'. In particular, any two maximal linearly independent subsets of a free module over an integral domain have the same cardinality. Therefore, the basis has the same cardinality as a maximal linearly independent subset.
|
|abstract-algebra|modules|
| 0
|
Average is higher than 75th percentile
|
I have a statistical question about the mean, median and percentiles. I have run my data with over a thousand samples. From this run, I can see that the average value is higher than the median and the 75th percentile value... This suggests my data is right-skewed. What else can I deduce about my results? Details on the results: average = 2.5, 75th percentile = 2, median = 1, max = 11. A valid result could be anything from 0 upward.
|
Well, the values seem to be integers. (Assuming they are), zeros cannot be more than half of the data set, and more than 3/4 of it is either 0, 1 or 2. The simplest data producing these results would be something like $\left[1, 1, 1, 1, 1, 2, 2, 11\right]$ . Not much more to see there, methinks.
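The candidate data set can be checked against the quoted summary statistics with the standard library (the quartile convention is `statistics.quantiles`' default exclusive method, an assumption of this check):

```python
import statistics

# The answer's candidate data set
data = [1, 1, 1, 1, 1, 2, 2, 11]

mean = statistics.mean(data)              # should be 2.5
median = statistics.median(data)          # should be 1
q75 = statistics.quantiles(data, n=4)[2]  # third quartile, should be 2
maximum = max(data)                       # should be 11
```

All four values match the figures in the question.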
|
|statistics|standard-deviation|order-statistics|
| 0
|
In $\Delta ABC$, such that $\cos^2{A} + \cos^2{B} + \cos^2{C} = \sin^2{B}$, then $\tan{A} \cdot \tan{C} =$?
|
In $\Delta ABC$ , find the value of $\tan{A} \cdot \tan{C}$ , given that $\cos^2{A} + \cos^2{B} + \cos^2{C} = \sin^2{B}$ . My idea is to substitute $\cos{B} = \cos{[\pi - (A+C)]} = -\cos{(A+C)}$ and $\sin{B} = \sin{[\pi - (A+C)]} = \sin{(A+C)}$ into the constraint to eliminate the variable $B$, then transform the constraint into an equation in the single variable $\tan{A}\tan{C}$ . Then all I need to do is solve the equation and get the value of $\tan{A} \cdot \tan{C}$ . But I'm stuck on the second step. Does anyone know how to simplify $$\cos^2{A} + \cos^2{(A+C)} + \cos^2{C} = \sin^2{(A+C)}$$ into an equation in terms of the variable $\tan{A}\cdot\tan{C}$ ?
|
In a triangle $A + B + C = \pi$ . Thus we can write the equation as $\cos^2A + \cos^2C + \cos^2(A+C) = \sin^2(A+C)$ , which can be further expanded as $\cos^2A + \cos^2C + \cos^2A\cos^2C + \sin^2A\sin^2C - 2\sin A\sin C\cos A\cos C = \sin^2A\cos^2C + \sin^2C\cos^2A + 2\sin A\sin C\cos A\cos C.$ Rearranging, we get $\cos^2A + \cos^2C + (\cos^2A - \sin^2A)(\cos^2C - \sin^2C) = 4\sin A\sin C\cos A\cos C$ , i.e. $\cos^2A + \cos^2C + \cos 2A\cos 2C = \sin 2A\sin 2C.$ Now using $\sin 2x = \frac{2\tan x}{\sec^2x}$ and $\cos 2x = \frac{1-\tan^2x}{\sec^2x}$ we get $\cos^2A + \cos^2C = \frac{4\tan A\tan C + \tan^2A + \tan^2C - 1 - \tan^2A\tan^2C}{\sec^2A\sec^2C}$ . Multiplying both sides by $\sec^2A\sec^2C$ and noting $(\cos^2A + \cos^2C)\sec^2A\sec^2C = \sec^2C + \sec^2A = 2 + \tan^2A + \tan^2C$ , we get $2 + \tan^2A + \tan^2C = 4\tan A\tan C + \tan^2A + \tan^2C - 1 - \tan^2A\tan^2C$ , i.e. $\tan^2A\tan^2C - 4\tan A\tan C + 3 = 0$ , which implies $\tan A\tan C = 1, 3$ .
|
|trigonometry|
| 0
|
Can I shorthand chain divisibility in the same manner as equality?
|
In the topic of divisibility. Suppose I start with $a \mid n$ , then I manipulate it into $a \mid m$ , then $a \mid s$ , and finally $a \mid t$ . Should I write my train of thought as $$ \begin{align} a \mid n &\implies a \mid m \\ &\implies a \mid s \\ &\implies a \mid t. \end{align} $$ Or can I just shorthand it (in the same manner as equality) into $$ \begin{align} a &\mid n \\ &\mid m \\ &\mid s \\ &\mid t. \end{align} $$ Can I do that? Should I do that? Has anyone done that before?
|
If it is the case that $a$ divides $n$ and $n$ divides $m$ and $m$ divides $s$ and $s$ divides $t$ , then you can write $a\mid n\mid m\mid s\mid t$ and that's perfectly understandable. (Chaining like this really works for transitive binary relations.) But I wouldn't write this vertically as you've done in the second equation. On the other hand, if you're proving that all of $n$ , $m$ , $s$ , and $t$ are multiples of $a$ without saying that they divide one another, then the first way (with implication signs) seems reasonable.
|
|elementary-number-theory|notation|divisibility|
| 0
|
In $\Delta ABC$, such that $\cos^2{A} + \cos^2{B} + \cos^2{C} = \sin^2{B}$, then $\tan{A} \cdot \tan{C} =$?
|
In $\Delta ABC$ , find the value of $\tan{A} \cdot \tan{C}$ , given that $\cos^2{A} + \cos^2{B} + \cos^2{C} = \sin^2{B}$ . My idea is to substitute $\cos{B} = \cos{[\pi - (A+C)]} = -\cos{(A+C)}$ and $\sin{B} = \sin{[\pi - (A+C)]} = \sin{(A+C)}$ into the constraint to eliminate the variable $B$, then transform the constraint into an equation in the single variable $\tan{A}\tan{C}$ . Then all I need to do is solve the equation and get the value of $\tan{A} \cdot \tan{C}$ . But I'm stuck on the second step. Does anyone know how to simplify $$\cos^2{A} + \cos^2{(A+C)} + \cos^2{C} = \sin^2{(A+C)}$$ into an equation in terms of the variable $\tan{A}\cdot\tan{C}$ ?
|
Let $\cos(A-C)=k\cos(A+C)=-k\cos B$ $\implies k=\dfrac{\cos(A-C)}{\cos(A+C)}\iff\dfrac{k-1}{k+1}=\cdots=\tan A\tan C$ From the given condition, $$\cos^2A-\sin^2C+2\cos^2B=0$$ $$\iff\cos(A-C)\cos(A+C)+2\cos^2B=0$$ $$\iff-\cos B(\cos(A-C)-2\cos B)=0$$ If $\cos B\ne0$ , then $\cos(A-C)=2\cos B$ , so $k=-\dfrac{\cos(A-C)}{\cos B}=-2$ and $\tan A\tan C=\dfrac{k-1}{k+1}=\dfrac{-3}{-1}=3$ . Else $\cos B=0\implies B=\dfrac\pi2\iff A+C=\dfrac\pi2$ , in which case $\tan A\tan C=\tan A\cot A=1$ .
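Both product values can be confirmed on concrete triangles (the specific angle choices below are illustrative assumptions, not from the answer):

```python
import math

def constraint(A, B, C):
    # cos^2 A + cos^2 B + cos^2 C - sin^2 B should vanish
    return math.cos(A)**2 + math.cos(B)**2 + math.cos(C)**2 - math.sin(B)**2

# tan A * tan C = 1: the right-angled case A = C = pi/4, B = pi/2
A1, C1 = math.pi / 4, math.pi / 4
B1 = math.pi - A1 - C1
r1, p1 = constraint(A1, B1, C1), math.tan(A1) * math.tan(C1)

# tan A * tan C = 3: e.g. tan A = 3, tan C = 1
A2, C2 = math.atan(3), math.pi / 4
B2 = math.pi - A2 - C2
r2, p2 = constraint(A2, B2, C2), math.tan(A2) * math.tan(C2)
```

Both triangles satisfy the constraint to machine precision, with products 1 and 3 respectively.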
|
|trigonometry|
| 1
|
Using convexity to prove integral is nonnegative
|
Suppose $f$ is a convex function and consider the following integral: $$I = \int_{t_1}^{t_2} f(g(h(t)-c, t)) - f(g(h(t),t)) -f'\big(g(h(t) - c, t) - g(h(t), t)\big) dt$$ where I've abused notation to write $$f' = \frac{\partial{f}}{\partial g}(u(h(t), t)).$$ and $c > 0$ is a constant. I would like to show that $I \geq 0$ and this somehow follows from $f$ being convex but I cannot figure it out.
|
If $I\subset\Bbb{R}$ is an open interval and $f:I\to\Bbb{R}$ is convex and differentiable, then for all $a,b\in I$ , we have the inequality \begin{align} 0\leq f(a)-f(b)-f'(b)[a-b].\tag{$*$} \end{align} Geometrically, $a\mapsto f(b) +f'(b)[a-b]$ is the equation of the tangent line to the graph of $f$ at the point $b$ . The inequality is saying that $f(a)$ is always greater than or equal to this value, i.e. the graph of $f$ always lies above the tangent line at any point. Apply this with $a=g(h(t)-c,t)$ and $b=g(h(t),t)$ . Why is the inequality $(*)$ above true? Well, for a differentiable convex function, I hope you're happy with the fact that $f'$ is an increasing function (one can actually refine these statements much more using Dini derivatives, but let's not get too derailed). So, for any $\alpha,\beta\in I$ with $\alpha<\beta$ , we have \begin{align} f'(\alpha)\leq \frac{f(\beta)-f(\alpha)}{\beta-\alpha}\leq f'(\beta).\tag{$**$} \end{align} Why? Because by the mean-value theorem, there exists s
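The tangent-line inequality $(*)$ is easy to sanity-check numerically; the choice $f=\exp$ below is an arbitrary smooth convex test function:

```python
import math
import random

random.seed(1)

# exp is convex and equals its own derivative
f = math.exp
fp = math.exp

# f(a) - f(b) - f'(b)(a - b) >= 0 for all a, b when f is convex
min_gap = min(f(a) - f(b) - fp(b) * (a - b)
              for a, b in ((random.uniform(-3, 3), random.uniform(-3, 3))
                           for _ in range(10000)))
```

The minimum gap over many random pairs stays nonnegative (up to floating-point rounding), as the tangent-line inequality predicts.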
|
|real-analysis|integration|inequality|convex-analysis|
| 1
|
A proof of the fact that the Fourier transform is not surjective from $\mathcal{L}^1(\mathbb{R})$ to $C_0( \mathbb{R})$
|
Let $f_n = \mathbb 1_{[-n,n]}$ for all $n \in \mathbb{N}$ Compute explicitly $f_n \star f_1$ for all $n \in \mathbb{N}$ . Show that $f_n \star f_1$ is the Fourier transform of $g_n = \frac{ \sin{(2\pi x)} \sin{(2 \pi nx)}}{\pi ^2 x^2}$ Show that $\| g_n \|_1 \to \infty$ Deduce that the Fourier transform is not surjective from $\mathcal{L}^1(\mathbb{R})$ to $C_0( \mathbb{R})$ Here, $C_0(\mathbb{R})$ is the space of continuous functions $f : \mathbb{R} \to \mathbb{R}$ that tends to $0$ at $-\infty$ and $+\infty$ . I managed to compute 1) and proved 2). But I don't know how to prove 3). Moreover, I don't see how can 1), 2) and 3) prove 4). Can anyone help?
|
Not related to the body of the post, but rather the title. One can provide an explicit example to show the Fourier transform is not surjective: https://mathoverflow.net/a/319225/112504 proves that $$g(t) := 1_{[0\leq t\leq e]}\cdot \frac te + 1_{[t>e]}\cdot \frac{1}{\ln t},$$ extended to be an odd continuous function on $\mathbb R$ can not be the Fourier transform of any $L^1(\mathbb R)$ function. That answer cites: "this example is taken from the book Classical Fourier Transforms, Springer–Verlag, 1989 by K. Chandrasekharan." For the convenience of the reader, I will copy and paste the proof from the answer's source ( M. Thamban Nair's Fourier Analysis notes ): Proposition 3.2.1 If $f \in L^1(\mathbb{R})$ is an odd function, then $\hat{f}$ is an odd function and there exists $M>0$ such that $$ \left|\int_r^R \frac{\hat{f}(\xi)}{\xi} d \xi\right| \leq M $$ for all $r, R$ with $0 . Proof. Let $f \in L^1(\mathbb{R})$ is an odd function. It can be easily seen that ${ }^4$ , $$ \hat{f}(\xi
|
|real-analysis|analysis|fourier-analysis|convolution|fourier-transform|
| 0
|
Rational functions where they are undefined
|
There's something I can't quite wrap my head around, and I am not satisfied with my professor's "that's just defined that way" answer. So suppose we have: $$f_1(x) = x - 5$$ $$f_2(x) = \frac{1}{x - 5}$$ $$f_3(x) = 1$$ By simple algebra, we can see that $f_4(x) = f_1(x) \cdot f_2(x) = f_3(x)$ What I mean is the rational function we get from dividing $x - 5$ by $x - 5$ is the same as the constant function $y = 1$ Now I like to look at functions graphically to understand them. So we can clearly see that $f_1(5) = 0$ and $f_2(5)$ is not defined. So how come, when we multiply these two functions, we end up with a function (or a curve) that is defined at the point $x = 5$ ? By logic, at $x = 5$ , we should have $0/0$ , which is undefined. But here it just takes the value $1$ . How come? Visuals to better understand what's bugging me: I would appreciate it if your answer addresses this logically (or rather conceptually), not as a plain mathematical proof using algebra. I too, know, that algebraically
|
Another way of looking at it other than Julio Puerta's excellent answer is to realize that the domain of function is an essential part of the definition of a function and can not be ignored. So the definitions of your functions have to be: $f_1:\mathbb R \to \mathbb R$ and $f_1(x)= x-5$ . $f_2:\mathbb R\setminus\{5\}\to \mathbb R$ and $f_2(x)=\frac 1{x-5}$ $f_3:\mathbb R \to \mathbb R$ and $f_3(x) = 1$ Now the definition of multiplying functions is not just figuring that $f\cdot g(x)=f(x)\cdot g(x)$ but it is also a case of figuring out the domain. The proper definition should be If $f:A\to B$ and $g:C\to B$ then $f\cdot g$ will be defined as: $f\cdot g: A\cap C \to B$ and $f\cdot g(x) = f(x)\cdot g(x)$ . With this definition we get: That the domain of $f_1\cdot f_2$ is $\mathbb R\cap (\mathbb R\setminus \{5\}) = \mathbb R\setminus \{5\}$ . Meanwhile, of course, $f_1(x)\cdot f_2(x)=\frac {x-5}{x-5}=1$ so......(That was your "simple algebra" and it was perfectly correct.) If $f_1\cdot f
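The domain bookkeeping in this answer can be made concrete: if each factor is an ordinary Python function, the product is only evaluable where *both* factors are, so the intersection-of-domains rule shows up as a runtime error at $x=5$ (this tiny sketch just transcribes the question's functions):

```python
def f1(x):
    return x - 5

def f2(x):
    # only defined on R \ {5}; Python surfaces this as ZeroDivisionError
    return 1 / (x - 5)

def product(f, g):
    # pointwise product, defined exactly on the intersection of the domains
    return lambda x: f(x) * g(x)

f4 = product(f1, f2)

ok_at_4 = f4(4)  # both factors defined at x = 4, product is 1.0
try:
    f4(5)
    defined_at_5 = True
except ZeroDivisionError:
    defined_at_5 = False
```

Away from $x=5$ the product is identically 1, but the point $x=5$ never enters its domain.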
|
|calculus|functions|
| 0
|
How can you prove an inductive case of $n^2 < 2^n$?
|
I'm trying to prove that $n^2 < 2^n$ for $n \geq 5$ . I see this proof question from example 2.5.3 here . I understand that when you prove an inductive case where $P(k+1)$ is true, you have first to assume the base case where $k^2 < 2^k$ is true. Based on the base case assumption, you have to consider the case where $(k+1)^2 < 2^{k+1}$ . In the linked article, it says, "To prove such an inequality, start with the left-hand side and work towards the right-hand side:" And I can't understand the calculation flow. I understand $(k+1)^2$ is expanded, and you get $k^2 + 2k + 1$ . And the article says by the inductive hypothesis, you get " ${} < 2^k + 2k + 1$ ", which confuses me: I can't understand what is happening here. I can't understand why the $=$ sign changes into $<$ . And the following flow makes me confused further: ${} < 2^k + 2^k$ , which is derived from "since $2k + 1 < 2^k$ for $k \geq 5$ ," to which I wonder where $2k + 1$ came from. And the calculation flow in the article reaches " ${} = 2^{k+1}$ ". I am sure that my descriptio
|
The claim they are using is $2k+1 < 2^k$ for $k\geq 5$ . And they assume it is clear. You can prove it with induction too. The induction step in that goes like (induction hypothesis: $2k+1 < 2^k$ ) $$\begin{align} 2(k+1)+1 &= 2k+1 + 2 \\ &< 2^k + 2 \\ &\leq 2^k + 2^k = 2^{k+1}. \end{align}$$ Here in the last inequality we used $2 \leq 2^k$ , which is clear when $k$ is positive.
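Both the helper bound $2k+1 < 2^k$ and the target $n^2 < 2^n$ can be checked exhaustively on a finite range (the upper limit 200 is an arbitrary choice for this check):

```python
# Helper bound used inside the induction step
helper_holds = all(2 * k + 1 < 2 ** k for k in range(5, 200))

# The statement being proved by induction
target_holds = all(n * n < 2 ** n for n in range(5, 200))

# Base case of the main induction: 25 < 32
base_case = 5 ** 2 < 2 ** 5
```

Of course this is no substitute for the induction, but it confirms there is no small counterexample.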
|
|discrete-mathematics|inequality|proof-writing|induction|
| 0
|
Expected Number of Times to Roll a Die before Observing a Sequence
|
Suppose there is a 6 sided die where each side is equally likely. I want to to answer the following question: On average, how many times do I have to roll the dice before observing the following sequence (in this order): 1,2,3,4,5,6,5,4,3,2,1 I thought I could model this problem as a Discrete Markov Chain using the following logic: State 1 (start): there is a 1/6 probability of going to State 2 and a 5/6 probability of staying in State 1 State 2 (rolling a 1): there is a 1/6 probability of going to State 3 and a 5/6 probability of going back to State 1 State 3 (rolling a 2): there is a 1/6 probability of going to State 4 and a 5/6 probability of going back to State 1 etc. Using this logic, I tried to represent this problem as a Markov Chain: $$P = \begin{pmatrix} \begin{array}{cccccccccc|c} 0 & 1/6 & 5/6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/6 & 5/6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/6 & 5/6 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1/6 & 5/6 & 0 & 0 & 0 & 0 & 0 \\ 0
|
You're on the right track, but that matrix $(I-Q)^{-1}R$ gives the probabilities of being absorbed to a particular state. The expected number of visits to each state before being absorbed (to any absorbing state) are just the entries of the fundamental matrix $(I-Q)^{-1}$ . From row $i$ you read the expected number of visits to each state $j$ when you start from the state $i$ . So if you want the expected total number of steps before absorption, sum the row of the starting state (i.e. the corresponding index). In this case that would be the first row.
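The expected absorption time for the chain *as the asker set it up* (every failed roll returns to the start; this ignores that a failed roll can itself begin a new partial match, so it overstates the true waiting time) can be computed without inverting $(I-Q)$: write $E_i = a_i + b_i E_1$ and back-substitute from the absorbing state. This is a sketch of that calculation, not of the full fundamental-matrix approach:

```python
p = 1 / 6
n = 11          # length of the target sequence 1,2,...,6,...,2,1

# E_{i} = 1 + p * E_{i+1} + (1 - p) * E_1, with E_{n+1} = 0.
# Track E_{i} = a + b * E_1 by back-substitution from the absorbing state.
a, b = 0.0, 0.0
for _ in range(n):
    a, b = 1 + p * a, p * b + (1 - p)
expected_steps = a / (1 - b)          # solve E_1 = a + b * E_1

# For this full-reset model the closed form is sum_{k=1}^{11} 6^k
closed_form = sum(6 ** k for k in range(1, n + 1))
```

Both agree (435356466 for this model); the row sum of $(I-Q)^{-1}$ described in the answer gives the same number for this transition matrix.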
|
|probability|
| 1
|
Is $\pi(x)$ well-approximated by an expression of the form $\frac{x}{\log(x)}(1+1/\log(x)+c/\log(x)^2)$ for some $c\in[0,2.51]$?
|
It is known that for all $x\ge 355991$ , $$\frac{x}{\log(x)}\left(1+\frac{1}{\log(x)}\right) < \pi(x) < \frac{x}{\log(x)}\left(1+\frac{1}{\log(x)}+\frac{2.51}{\log(x)^2}\right).$$ I am curious whether there are more precise results of this form, i.e. that $\pi(x)$ is greater than or less than $\frac{x}{\log(x)}(1 + \frac{1}{\log(x)} + \frac{c}{\log(x)^2})$ for tighter values of $c$ (perhaps dependent on the Riemann Hypothesis or other conjectures). Is it known whether there is a real constant $c$ such that $$\lim_{x\to\infty}\left(\frac{\pi(x)\log(x)^3}{x}-\log(x)^2-\log(x)\right)=c$$ or is it possible (or even conjectured/known) that this limit does not exist? If so, what are the extant results on its liminf and limsup? If such a constant $c$ does exist, can this sort of asymptotic behavior be extended further, to $1/\log(x)^3$ terms and so on? Are there conjectured values for these coefficients? Empirically, at $x=10^{28}$ it seems like the right constant $c$ is around $2.099$ , but the ideal constant seem to decrease as $x$ grows larger. From some crude curve-fitting, I
|
It can be verified using integration by parts that $$ \int_2^x {\frac{{{\rm d}t}}{{\log t}}} \sim \frac{x}{{\log x}}\sum\limits_{k = 0}^\infty {\frac{{k!}}{{\log ^k x}}} $$ as $x\to+\infty$ . In 1899, de la Vallée Poussin proved that $$ \pi (x) = \int_2^x {\frac{{{\rm d}t}}{{\log t}}} + \mathcal{O}\!\left( {x{\rm e}^{ - c\sqrt {\log x} } } \right) = \int_2^x {\frac{{{\rm d}t}}{{\log t}}} + \mathcal{O}\!\left( {\frac{1}{{\log ^r x}}} \right) $$ as $x\to+\infty$ , where $c$ is a suitable positive number and $r>0$ is arbitrary. Hence, $$ \pi (x) \sim \frac{x}{{\log x}}\sum\limits_{k = 0}^\infty {\frac{{k!}}{{\log ^k x}}} $$ as $x\to+\infty$ . Therefore, the constants you are after are the factorials.
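The factorial coefficients are easy to see at work numerically: comparing $\pi(x)$ against the truncated series $\frac{x}{\log x}(1+\frac{1}{\log x}+\frac{2}{\log^2 x}+\frac{6}{\log^3 x})$ at the arbitrary test point $x=10^6$ (sieve and cutoff are assumptions of this check):

```python
import math

# Exact pi(10^6) via a sieve of Eratosthenes
N = 10 ** 6
is_prime = bytearray([1]) * (N + 1)
is_prime[0] = is_prime[1] = 0
for i in range(2, int(N ** 0.5) + 1):
    if is_prime[i]:
        is_prime[i * i::i] = b"\x00" * len(range(i * i, N + 1, i))
pi_x = sum(is_prime)

L = math.log(N)
# Truncated asymptotic series with factorial coefficients 0!, 1!, 2!, 3!
series = (N / L) * sum(math.factorial(k) / L ** k for k in range(4))
plain = N / L  # the bare prime number theorem approximation

err_series = abs(series - pi_x) / pi_x
err_plain = abs(plain - pi_x) / pi_x
```

At $x=10^6$ the series approximation lands within a fraction of a percent of $\pi(x)=78498$, while $x/\log x$ alone is off by about 8%.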
|
|number-theory|prime-numbers|
| 1
|
For $k$-cycle $(1, 2, ..., k)$, do the elements in it need to be taken in order?
|
For a $k$ -cycle $(1, 2, ..., k)$ , do the elements in it need to be taken in order? Is $(3,1,5,2,4)$ a $5$ -cycle? Or must the elements in the cycle satisfy some ordering rule, i.e., are the only $5$ -cycles $(1,2,3,4,5)$ and $(5,4,3,2,1)$ ? For a $k$ -cycle, do the elements in it need to be in order? When we say " $\sigma=(i_1,i_2,...,i_k)$ , then 1) $\sigma(i_1)=i_2$ , $\sigma(i_2)=i_3$ , ..., $\sigma(i_k)=i_1$ ; 2) $\sigma(n)=n$ , $n\notin\left \{i_1,i_2,...,i_k\right \}$ ", can $\left \{i_1,i_2,...,i_k\right \}$ be $\left \{1, 2, 3, 4, 5\right \}$ or $\left \{1,3,2,5,4\right \}$ ? Is it not necessary that $i_1=1$ , $i_2=2$ , $\cdots$ , $i_k=k$ ?
|
For the same reason that a bijection does not send a particular element in its domain to its range prior to others, the specific order in cycle notation does not matter. For example, the function (bijection) $$f: \left\{1,2,3\right\}\to \left\{1,2,3\right\}$$ given by $$1 \mapsto 3 $$ $$2 \mapsto 1$$ $$3\mapsto 2$$ is the same function $$f: \left\{1,2,3\right\}\to \left\{1,2,3\right\}$$ given by $$2 \mapsto 1$$ $$1 \mapsto 3 $$ $$3\mapsto 2$$ In other words the function (bijection) $f\in S_3$ can be represented equivalently by $$(1 3 2)$$ or $$(2 1 3).$$
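This can be checked mechanically: building the bijection from a cycle written with different starting elements yields the same function, while a genuinely different cycle does not (a minimal sketch; the helper is hypothetical, not standard notation):

```python
def cycle_to_map(cycle, n=3):
    # Build the bijection on {1,...,n} described by a single cycle:
    # each element maps to the next one, wrapping around at the end.
    perm = {i: i for i in range(1, n + 1)}
    for i, x in enumerate(cycle):
        perm[x] = cycle[(i + 1) % len(cycle)]
    return perm

# (1 3 2), (3 2 1) and (2 1 3) start at different elements
# but all describe the map 1->3, 3->2, 2->1
same = (cycle_to_map((1, 3, 2)) == cycle_to_map((3, 2, 1))
        == cycle_to_map((2, 1, 3)))

# (1 2 3) is a genuinely different permutation: 1->2, 2->3, 3->1
different = cycle_to_map((1, 2, 3)) != cycle_to_map((1, 3, 2))
```

So the cycle may begin at any of its elements, but the cyclic order of the entries does matter.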
|
|group-theory|permutations|permutation-cycles|
| 1
|
Given a random walk with shifted exponential increment, how to calculate the expected sum distance to the origin?
|
Suppose $\{X_i, i=1,2,\ldots\}$ are i.i.d. random variables with exponential distribution and mean $\mu$ . Consider a random walk as follows. $$S_1=0$$ $$S_{i+1}= \begin{cases} S_i+X_i-k, & \text{if $S_i \ge 0$,} \\ X_i-k, & \text{if $S_i < 0$,} \end{cases}$$ where $k$ is a given constant greater than 0. How to calculate the value of $\mu$ such that the expected sum distance from $S_i$ to the point 0 for the first $R$ steps is minimized, i.e., $\arg \min_\mu E[\sum_{i=1}^{R}|S_i|]$ ? Thank you. So far, I only have an approximate solution for two extreme cases, i.e., when $\mu\gg k$ and when $\mu\ll k$ . However, I still have no idea about how to calculate the expected sum distance when the values of $\mu$ and $k$ are comparable, e.g., when $\mu=60$ and $k=100$ . The hitting time of a similar random walk has been asked before here .
|
We have solved this problem when $R \rightarrow \infty$ , i.e., $\arg \min_\mu \lim_{R\rightarrow\infty}E[\sum_{i=1}^{R}|S_i|]$ . I post it here in case someone will need it in the future. Since $\lim_{R\rightarrow\infty}E[\sum_{i=1}^{R}|S_i|]=\lim_{R\rightarrow\infty}\sum_{i=1}^{R}E[|S_i|]$ and, for large $R$ , each summand approaches $\lim_{i\rightarrow\infty} E[|S_i|]$ , we can get the following equation. $$\arg \min_\mu \lim_{R\rightarrow\infty}E[\sum_{i=1}^{R}|S_i|]=\arg \min_\mu \lim_{i\rightarrow\infty}E[|S_i|]$$ Thus, we try to find the value of $\mu$ that can minimize $\lim_{i\rightarrow\infty}E[|S_i|]$ . When $\mu>k$ , it is obvious that $\lim_{i\rightarrow\infty}E[|S_i|] \rightarrow \infty$ . Thus, it is impossible for $\mu>k$ to minimize the expected sum distance, and in the rest of this answer, we only consider $\mu \le k$ . When $\mu \le k$ , in order to calculate $\lim_{i\rightarrow\infty}E[|S_i|]$ , we try to calculate the limiting distribution of $S_i$
|
|stochastic-processes|random-walk|
| 0
|
Convergence result for the union of type classes with bounded entropy
|
I need to prove the following in coding theory: For a set $\mathcal{X}$ , the type of a sequence $x_1^n = (x_1,\ldots, x_n) \in \mathcal{X}^n$ is its empirical distribution $\hat{P}=\hat{P}_{x_1^n}$ , i.e. the distribution defined by $\hat{P}(a) = \frac{|\{i\colon x_i = a\}|}{n}$ . A distribution $P$ on $\mathcal{X}$ is called an $n$ -type if it is the type of some $x_1^n\in \mathcal{X}^n$ . The set of all sequences $x_1^n \in \mathcal{X}^n$ of type $P$ is called the type class of the $n$ -type $P$ , and is denoted by $\mathcal{T}_P^n$ . Let $A_n \subset \mathcal{X}^n$ be the union of all type classes $\mathcal{T}_P^n$ with $H(P) \leq R$ , where $H$ is the entropy. Then the following hold: $\frac{1}{n} \log |A_n| \to R$ as $n\to\infty$ ( $\log$ here is the base 2 logarithm). $Q^n(A_n) \to 1$ as $n\to\infty$ for every distribution $Q$ on $\mathcal{X}$ with $H(Q) < R$ . For clarity, for any distribution $P$ on $\mathcal{X}$ , $P^n$ denotes the distribution of $n$ independent drawings from $P$ ,
|
The first question relies on the following observations: a) The total number of $n$ -types is 'small', i.e., bounded as $(n+1)^{|\mathcal{X}|}$ . Indeed, the number of $n$ -types is the number of nonnegative solutions to $$ n_1 + n_2 + \cdots + n_{|\mathcal{X}|} = n,$$ which works out, by stars and bars, to $\binom{n + |\mathcal{X}|-1}{|\mathcal{X}|-1}$ (this is also the origin of the inverse factor in the inequalities you have written). The key point is that this quantity grows polynomially in $n.$ b) For any distribution $P$ , there exists an $n$ -type $P_n$ such that $\|P - P_n\|_1 \le |\mathcal{X}|/n$ . I'll leave it to you to prove this statement. c) Entropy is continuous in $P$ (w.r.t. the $\ell_1$ -topology, say), and so for $\varepsilon > 0,$ there exists $\delta$ such that for any $Q$ with $\|P-Q\|_1 \le \delta$ , $|H(Q) - H(P)| < \varepsilon$ . Now, to work out 1), first note that \begin{align*} |A_n| &= \sum_{n\textrm{-types } P : H(P) \le R } |\mathcal{T}_P^n|\\ &\le \sum_{n\textrm{-types } P : H(
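The two counting facts in a) can be sanity-checked by brute force for small $n$ and $|\mathcal{X}|$ (the ranges below are arbitrary choices for this check):

```python
import math
from itertools import product

def count_types(n, m):
    # Brute-force count of m-tuples of nonnegative counts summing to n
    # (exponential in m, so only for tiny parameters)
    return sum(1 for counts in product(range(n + 1), repeat=m)
               if sum(counts) == n)

# Stars and bars: number of n-types equals C(n + m - 1, m - 1)
stars_and_bars = all(
    count_types(n, m) == math.comb(n + m - 1, m - 1)
    for n in range(1, 8) for m in range(1, 5)
)

# The polynomial bound (n+1)^m used in the argument
bound = all(math.comb(n + m - 1, m - 1) <= (n + 1) ** m
            for n in range(1, 60) for m in range(1, 6))
```

Both the exact count and the polynomial bound check out on these ranges.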
|
|probability-theory|statistics|information-theory|entropy|
| 1
|
Integral $\int_0^{\infty}\frac{\ln x}{e^x+1}\,dx$
|
I know such integral: $\int_0^{\infty}\frac{\ln x}{e^x}\,dx=-\gamma$ . What about the integral $\int_0^{\infty}\frac{\ln x}{e^x+1}\,dx$ ? The answer seems very nice: $-\frac{1}{2}{\ln}^22$ , but how can it be calculated? I tried integration by parts, but the limit $\displaystyle{\lim_{x\to 0}\ln x\ln(1+e^{-x})}$ doesn't exist. Or I can also write the following equality $$\int_0^{\infty}\frac{\ln x}{e^x+1}\,dx=\lim\limits_{t\to 0}\frac{d}{dt}\left(\int_0^{\infty}\frac{x^t}{e^x+1}\, dx\right)$$ but I don't know what to do next.
|
Noticing that $$ \int_0^{\infty} \frac{\ln x}{e^x+1} d x=\left.\frac{\partial}{\partial a} I(a)\right|_{a=1} $$ where $$ \begin{aligned} I(a)&=\int_0^{\infty} \frac{x^{a-1}}{e^x+1} d x \\& =\int_0^{\infty} \frac{e^{-x} x^{a-1}}{1+e^{-x}} d x \\ & =\sum_{n=0}^{\infty}(-1)^n \int_0^{\infty} e^{-(n+1) x} x^{a-1} d x \\ & =\sum_{n=0}^{\infty} \frac{(-1)^n \Gamma(a)}{(n+1)^a} \\ & =\Gamma(a) \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^a} \\ \\ \end{aligned} $$ $$ \frac{\partial}{\partial a}I(a)=-\Gamma(a) \sum_{n=1}^{\infty} \frac{(-1)^n \ln n}{n^a}+\Gamma^{\prime}(a) \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^a} $$ Putting $a=1$ yields $$ \begin{aligned} \int_0^{\infty} \frac{\ln x}{e^x+1} d x & =-\Gamma(1) \sum_{n=1}^{\infty} \frac{(-1)^n \ln n}{n}+\Gamma^{\prime}(1) \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} \\ & =-\frac{1}{2} \ln 2(\ln 2-2 \gamma)-\gamma \ln 2 \cdots (*)\\ & =-\frac{1}{2} \ln ^2 2 \end{aligned} $$ For details on $(*)$, please refer to the post.
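The final value can be confirmed with a stdlib-only quadrature; the substitution $x=e^u$ (an assumption of this sketch, not used in the answer) removes the log singularity at the origin:

```python
import math

# After x = e^u:  I = ∫_{-∞}^{∞} u e^u / (e^{e^u} + 1) du,
# which decays fast at both ends; evaluate by composite Simpson.
def integrand(u):
    x = math.exp(u)
    return u * x / (math.exp(x) + 1)

lo, hi, n = -40.0, 6.0, 8000   # hi = 6 keeps exp(exp(u)) within float range
h = (hi - lo) / n
total = sum((1 if i in (0, n) else 4 if i % 2 else 2) * integrand(lo + i * h)
            for i in range(n + 1)) * h / 3

closed_form = -0.5 * math.log(2) ** 2   # ≈ -0.2402265
```

The quadrature reproduces $-\frac12\ln^2 2$ to well under $10^{-6}$.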
|
|calculus|
| 0
|
Definition of smooth surface meaning
|
I am studying differential geometry. The definition of a surface: a subset $S$ of $\mathbb{R}^3$ is a surface if for every $p \in S$ , there exist an open set $U$ in $\mathbb{R}^2$ , an open neighborhood $V$ of $p$ in $S$ , and a differentiable map \begin{equation} X:U \rightarrow \mathbb{R}^3 \end{equation} such that: (1) $X(U) = V$ ; (2) $X$ is a homeomorphism; (3) for every $q$ in $U$ , $(dX)_q:\mathbb{R}^2 \rightarrow \mathbb{R}^3$ is injective. I can understand the first and second conditions on $X$ , but the third one is confusing. As far as I know, the single cone $S=\{(u,v,\sqrt{u^2+v^2})\}$ is not a surface because of the third condition on $X$ . But as \begin{equation} (dX)_q = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \frac{u}{\sqrt{u^2+v^2}} & \frac{v}{\sqrt{u^2+v^2}} \end{pmatrix} \end{equation} consists of two linearly independent columns, it seems to satisfy the third condition. My understanding of the third condition is that it guarantees the existence of a tangent plane, yet the cone seems to satisfy it while having no tangent plane at $O$ . What went wrong?
|
The problem is that your $X$ $$ X (u, v) = (u, v, \sqrt{u^2+ v^2})$$ is not differentiable at $(u, v) = (0,0)$ , so it cannot be a parametrization. The single cone $$\{ (x, y, z) : x^2 + y^2 = z^2, z \ge 0\}$$ is not a regular surface (as defined in Do-Carmo). The reason is that locally a regular surface can always be written as a graph of one of the form: $$ z = h(x, y), \ \ y = g(z, x), \ \ x = \ell (y, z).$$ (See Proposition 3 in section 2.2 of the book) However, in the case of the single cone, it is clear that the second and the third case are not possible, while if it is the first case, then we must have $$ h(x,y) = \sqrt{x^2+y^2},$$ but this is not differentiable at $(0,0)$ . Hence the single cone cannot be a regular surface.
|
|differential-geometry|
| 1
|
Isomorphism $(A\times B)^\vee\to A^\vee\times B^\vee$ for abelian varieties
|
Let $A$ and $B$ be abelian varieties, and consider the natural map $$f:(A\times B)^\vee\to A^\vee\times B^\vee$$ sending a line bundle on $A\times B$ to the restrictions to $A\times\{0\}$ and $\{0\}\times B$ . I would like to check that this is an isomorphism. I had some old notes from a course on abelian varieties giving a hint which I'm trying to decypher. They say, first check that $f$ induces an isomorphism between the tangent spaces, hence deduce it finite etale. Then show that the only geometric points in the kernel is the identity, and then the claim follows since finite etale isogenies satisfying this property are isomorphisms. Would anyone be able to enlighten me with a more detailed proof, or give a reference where I could read a proof?
|
So, this was fun to track down! (If you don’t know a result that I use and don’t give a reference for, you can search [keywords]+stacks project). We will show that for any finite type $k$ -scheme $S$ , $(A\times B)^{\vee}(S) \rightarrow A^{\vee}(S) \times B^{\vee}(S)$ is injective. In particular, this means that $f: (A \times B)^{\vee} \rightarrow A^{\vee} \times B^{\vee}$ is proper with finite fibres hence finite. Moreover, since $(A \times B)^{\vee}$ and $A^{\vee} \times B^{\vee}$ are abelian varieties of the same dimension, they are regular (integral) schemes of the same dimension, so the miracle flatness theorem applies and $f$ is flat. Thus $f$ is finite flat to an integral base, so it is locally free of constant rank $d \geq 1$ , where $d$ is the degree of the kernel of $f$ as a finite $k$ -scheme. But since $f$ is injective on $S$ -points for any $S$ -scheme of finite type, one must have $d=1$ and we are done. First, we show that $f$ is injective on geometric points: we can thus assum
|
|number-theory|algebraic-geometry|arithmetic-geometry|abelian-varieties|isogeny|
| 1
|
Is a Set of Infinitely Differentiable Functions Linearly Independent?
|
Let $V$ be the following vector space: $V = \{ f : \mathbb{R} \longrightarrow \mathbb{R} : f \text{ infinitely differentiable at all } x,\; f^{(k)}(0) = 0 \;\;\forall \;k \in \{0\} \cup \mathbb{N} \}$ How would one show that this vector space is infinite dimensional? To show that a vector space is infinite dimensional, one needs to show that a basis of $V$ contains an infinite number of linearly independent vectors. I understand that if $f \in V$ , then: $f$ is infinitely differentiable, and $f^{(k)}(0) = 0, \;\;\; (k = 0, 1, 2, \ldots, n, \ldots)$ This would make $\{f^{(1)}, f^{(2)}, f^{(3)}, \ldots, f^{(n)}, \ldots \}$ an infinite subset of $V$ , as each of the elements would also be infinitely differentiable and would pass through $(0,0)$ . But how would one show that $\{f, f^{(1)}, f^{(2)}, f^{(3)}, \ldots, f^{(n)}, \ldots \}$ is linearly independent?
|
Via derivatives: Your approach works out. Let $f$ not be identically zero. Assume that some of $\{f^{(k)} \ : \ k\in \mathbb{N}\}$ are linearly dependent. Then we get that $f$ solves a linear ODE with constant coefficients. In fact, it solves an initial value problem of the form $$\begin{cases}\sum_{j=0}^N a_j f^{(j)}(x)&=0, &&x\in \mathbb{R},\\ f^{(k)}(0)&=0, &&k\in \{0, \dots, N\}. \end{cases}$$ Now, by Picard-Lindelöf, this admits a unique classical solution. However, $f=0$ is a solution, so we must have $f=0$ , which yields the desired contradiction. We are left to prove that there exists $f\in V$ which is not identically zero. Indeed, the function $$ f(x) =\begin{cases} e^{-1/x^2},& x\neq 0,\\ 0,& x=0 \end{cases} $$ is in $V$ (see How do you show that $e^{-1/x^2}$ is differentiable at $0$? ). Via polynomials: Alternatively one can use geetha290krm's hint. Namely, if we pick again $f\in V$ not identically zero, then we can consider $B=\{ x^k f \ : \ k\in \mathbb{N}\}$ . One readily
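A quick numerical illustration of why $e^{-1/x^2}$ is "flat" at $0$: the ratio $f(h)/h^k$ collapses for every power $k$, which is the mechanism behind all derivatives at $0$ vanishing (the sample point $h=0.05$ and range of $k$ are arbitrary choices for this sketch):

```python
import math

def f(x):
    # The standard flat function: smooth everywhere, all derivatives 0 at 0
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# f(h)/h^k -> 0 as h -> 0 for every k: the exponential beats any power
ratios = [f(0.05) / 0.05 ** k for k in range(11)]
all_tiny = all(r < 1e-100 for r in ratios)
```

Even divided by $h^{10}$, the value at $h=0.05$ is astronomically small, in line with $f \in V$ being nonzero yet having every Taylor coefficient at $0$ equal to zero.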
|
|linear-algebra|
| 1
|
Example of Complex Pythagorean Triples
|
I am looking for an example of a Pythagorean triple with Gaussian integers. I followed the links and looked at the following: "Relation to Gaussian integers" in https://en.m.wikipedia.org/wiki/Pythagorean_triple , the links and Google searches mentioned in Generating all the Pythagorean triples by factorizing using complex numbers , and References for generating Pythagorean triple using complex number . I am just looking for one example of a Pythagorean triple of Gaussian integers; I must be missing something obvious in all the above and their mentioned links, since I could not come up with even one example.
|
Examples : $$(-4+1i)^2+(4+8i)^2=(4+7i)^2$$ $$(-4-1i)^2+(4-8i)^2=(4-7i)^2$$ Multiples : $$(-8+2i)^2+(8+16i)^2=(8+14i)^2$$ $$(-12-3i)^2+(12-24i)^2=(12-21i)^2$$ By the way , we can see that all "Ordinary" Pythagorean triples are also "Complex" Pythagorean triples : (1) by setting Imaginary Part to $0$ or (2) by multiplying by $i$ or (3) by multiplying by arbitrary gaussian integer $z$ : $$(3+0i)^2+(4+0i)^2=(5+0i)^2$$ $$(0+3i)^2+(0+4i)^2=(0+5i)^2$$ $$(3z)^2+(4z)^2=(5z)^2$$ ADDENDUM : Some more Examples with Positive Integers : $$( 1 + 5 i)^2+( 9 + 3 i)^2=( 8 + 4 i)^2$$ $$( 2 + 3 i)^2+( 6 + 2 i)^2=( 6 + 3 i)^2$$ $$( 2 + 9 i)^2+( 7 + 6 i)^2=( 6 + 10 i)^2$$ $$( 2 + 9 i)^2+( 9 + 2 i)^2=( 6 + 6 i)^2$$ $$( 3 + 6 i)^2+( 6 + 3 i)^2=( 6 + 6 i)^2$$ $$( 4 + 1 i)^2+( 7 + 4 i)^2=( 8 + 4 i)^2$$ $$( 5 + 5 i)^2+( 7 + 1 i)^2=( 8 + 4 i)^2$$ $$( 6 + 2 i)^2+( 2 + 3 i)^2=( 6 + 3 i)^2$$ $$( 6 + 3 i)^2+( 3 + 6 i)^2=( 6 + 6 i)^2$$ $$( 6 + 3 i)^2+( 8 + 4 i)^2=( 10 + 5 i)^2$$ $$( 7 + 1 i)^2+( 5 + 5 i)^2=( 8 + 4 i)^2$$
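The displayed identities can be verified directly with Python's complex numbers (all values here are small enough that float complex arithmetic is exact):

```python
# A selection of the triples listed above, as (x, y, z) with x^2 + y^2 = z^2
triples = [
    (-4 + 1j, 4 + 8j, 4 + 7j),
    (-4 - 1j, 4 - 8j, 4 - 7j),
    (-8 + 2j, 8 + 16j, 8 + 14j),
    (-12 - 3j, 12 - 24j, 12 - 21j),
    (1 + 5j, 9 + 3j, 8 + 4j),
    (2 + 3j, 6 + 2j, 6 + 3j),
    (3 + 0j, 4 + 0j, 5 + 0j),   # an ordinary triple embedded in Z[i]
]
all_hold = all(x * x + y * y == z * z for x, y, z in triples)
```

Every listed identity checks out exactly.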
|
|reference-request|complex-numbers|soft-question|examples-counterexamples|pythagorean-triples|
| 1
|
Show a collinearity in a tetrahedron
|
my question Let $ABCD$ be a tetrahedron and let $M, N$ be the midpoints of the edges $AC$ and $BD$ , respectively. Show that for any point $P$ on the segment $MN$ , with $P$ different from $M$ and $N$ , there exist unique points $X$ and $Y$ on the segments $AB$ and $CD$ respectively, so that $X, P$ and $Y$ are collinear. my idea Because $P$ is different from $M$ and $N$ , if $X, Y, P$ are collinear, then $X$ does not belong to $\{A, B\}$ and likewise $Y$ does not belong to $\{C, D\}$ . Indeed, if $X = A$ , then $XY ⊂ (ACD)$ , so $P ∈ (ACD)$ , false. The other situations are similar. That's all I could think of. Hope one of you can help me! Thank you!
|
Let $S$ be the midpoint of $AD$ and $T$ the midpoint of $BC$. From the triangle $ADC$ we have $SM\parallel DC$. From the triangle $BDC$ we have $NT\parallel DC$. It means that $SM\parallel NT$, and so there is a plane $SMTN$. Now the problem becomes the following. We have a plane ($SMTN$), a line in it ($MN$) with a point on it ($P$), and two lines parallel to the plane ($AB$, $DC$), such that no two of the three lines are parallel to each other. We want to prove that there are unique points $X$ in $AB$ and $Y$ in $DC$ such that $X$-$P$-$Y$. Let $\alpha$ be the plane containing $DC$ and parallel to $SMTN$. When $X$ runs through $AB$, the line $XP$ intersects $\alpha$. (If it doesn't intersect, then it is parallel to $\alpha$; then, since it contains $P$, it lies in $SMTN$; then there is a point of $AB$ in $SMTN$, but they are parallel.) The set of intersection points is some line in $\alpha$, parallel to $AB$. So it is not parallel to $CD$ and has a unique intersection with it. That would be the point $Y$. Notice that the intersection of $AP$ with $\alpha$ and the intersection of $BP$ with $\alpha$ are in differen
|
|geometry|
| 0
|
Show a collinearity in a tetrahedron
|
my question Let $ABCD$ be a tetrahedron and let $M, N$ be the midpoints of the edges $AC$ and $BD$ , respectively. Show that for any point $P$ on the segment $MN$ , with $P$ different from $M$ and $N$ , there exist unique points $X$ and $Y$ on the segments $AB$ and $CD$ , respectively, so that $X, P$ and $Y$ are collinear. my idea Because $P$ is different from $M$ and $N$ , if $X, P, Y$ are collinear then $X$ does not belong to $\{A, B\}$ , and likewise $Y$ does not belong to $\{C, D\}$ . Indeed, if $X = A$ , then $XY ⊂ (ACD)$ , so $P ∈ (ACD)$ , false. The other situations are similar. That's all I could think of. Hope one of you can help me! Thank you!
|
Let $ \vec{b} = \vec{AB} , \vec{c} = \vec{AC}, \vec{d} = \vec{AD} $ And let $A$ be the origin. Then $M = t \vec{c} , N = \vec{b} + s (\vec{d} - \vec{b} ) $ Now, since $P$ is on the line segment connecting $M$ and $N$ , then $ P = (1 - w) M + w N = (1 - w) t \vec{c} + w \bigg( (1 - s) \vec{b} + s \vec{d} \bigg) \tag{1}$ Let $X$ be on the line segment $AB$ , then $ X = \alpha \vec{b} $ And let $Y$ be on the line segment $CD$ , then $ Y = (1 - \beta) \vec{c} + \beta \vec{d} $ Since we want $P$ to lie on the line segment connecting $X$ and $Y$ , we must be able to express $P$ as follows $ P = (1 - \gamma) X + \gamma Y = (1 - \gamma) \alpha \vec{b} + \gamma \bigg( (1 - \beta) \vec{c} + \beta \vec{d} \bigg) \tag{2}$ Comparing $(1)$ and $(2)$ , and since $\vec{b}, \vec{c}, \vec{d}$ are linearly independent, equality of the two expressions implies: $ w(1 - s) = (1 - \gamma) \alpha $ $ (1 - w) t = \gamma (1 - \beta) $ $ w s = \gamma \beta $ Adding the last two equations $ (1 - w) t + w s
|
|geometry|
| 0
|
How to integrate $\int \frac{3x^{4}+5x^{3}+7x^{2}+2x+3}{(x-6)^{5}}dx$?
|
Q) How to Integrate $\int \frac{3x^{4}+5x^{3}+7x^{2}+2x+3}{(x-6)^{5}}dx$ ? First of all let me tell what I think about this question. In my Coaching Institute, the chapter 'Integration' is over. This question came to my mind while I was solving the questions of 'Integration By Partial Fraction Decomposition' . Let me give two examples: Example 1) Let's integrate $\int\frac{x-5}{(x-7)^{2}}dx$ Now let me tell the solution of $\int\frac{x-5}{(x-7)^{2}}dx$ Let $I=\int\frac{x-5}{(x-7)^{2}}dx$ $\implies \frac{(x-5)}{(x-7)^{2}}=\frac{A}{(x-7)}+\frac{B}{(x-7)^{2}}$ $\implies (x-5)=Ax+(B-7A)$ Upon solving we get : $A=1, B=2$ $\implies I=\int\frac{1}{(x-7)}dx+\int\frac{2}{(x-7)^{2}}dx$ Finally, after this step, it is easy to solve. Now let me give the $2^{nd}$ example: Evaluate $ I_1=\int\frac{3x^{2}+2x+4}{(x-7)^{3}}dx$ Similarly we can integrate this expression by using Partial Fraction Decomposition. $\implies \frac{3x^{2}+2x+4}{(x-7)^{3}}=\frac{A}{(x-7)}+\frac{B}{(x-7)^{2}}+\frac{C}{(x-7)^{3}}$
|
Synthetic division Performing synthetic division five times to find the remainders which are the coefficients of $(x-6)^k$ as below: we have $$ \begin{aligned} 3 x^4+5 x^3+7 x^2+2 x+3 = 3(x-6)^4+77(x-6)^3+745(x-6)^2+3218(x-6)+5235 \end{aligned} $$ Hence $$ \begin{aligned} I= & 3 \int \frac{d x}{x-6}+77 \int \frac{d x}{(x-6)^2}+745 \int \frac{d x}{(x-6)^3} +3218 \int \frac{d x}{(x-6)^4}+5235 \int \frac{d x}{(x-6)^5} \\ = & 3 \ln |x-6|-\frac{77}{x-6}-\frac{745}{2(x-6)^2}-\frac{3218}{3(x-6)^3} -\frac{5235}{4(x-6)^4}+C \end{aligned} $$
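The repeated synthetic division the answer describes is exactly a Taylor shift of the polynomial. A minimal sketch (the function name `taylor_shift` is my own):

```python
def taylor_shift(coeffs, c):
    """Repeated synthetic division by (x - c).
    coeffs: highest-degree coefficient first.
    Returns the coefficients of p(x) in powers of (x - c), constant term first."""
    coeffs = list(coeffs)
    remainders = []
    for _ in range(len(coeffs)):
        acc = 0
        for i in range(len(coeffs)):        # one Horner pass = one synthetic division
            acc = acc * c + coeffs[i]
            coeffs[i] = acc
        remainders.append(coeffs.pop())      # last entry of the pass is the remainder
    return remainders

# 3x^4 + 5x^3 + 7x^2 + 2x + 3 expanded around x = 6
print(taylor_shift([3, 5, 7, 2, 3], 6))  # [5235, 3218, 745, 77, 3]
```

The output reproduces the coefficients in the answer: $5235 + 3218(x-6) + 745(x-6)^2 + 77(x-6)^3 + 3(x-6)^4$.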
|
|calculus|integration|indefinite-integrals|
| 0
|
Compute the probability density of a function of a random variable
|
First I made this question for myself just out of curiosity. But got stuck in the middle of solving it. The question: Say we have a probability density function of a continuous stochastic variable X as $$f_X(x)=\begin{cases} \frac{2}{\pi^2}x & 0\leq x\leq\pi \\ 0 & \text{otherwise} \end{cases}$$ If we make another continuous stochastic variable Y by $$Y=\sin X \space(i.e. \space y=\sin x)$$ what is the probability density function of Y gonna be? $$\\$$ My solution: $$\int_0^1 f_Y(y) dy = 1 , \int_0^\pi f_X(x)dx=1\\ \rightarrow \int_0^1 f_Y(y) dy\space =\int_0^\pi f_X(x)dx\space \\=\int_0^\frac{\pi}{2}f_X(x)dx+\int_\frac{\pi}{2}^\pi f_X(x)dx\\=\int_0^1\frac{2}{\pi^2}x\space \frac{dx}{dy}\space dy+\int_1^0 \frac{2}{\pi^2}x\frac{dx}{dy}\space dy$$ Now, for $x\in [-\frac{\pi}{2},\frac{\pi}{2}],\space x$ can be written as $arcsin(y)$ and the $x$ and $\frac{dx}{dy}$ in the integral on the left side can be substituted with functions of $y$ . So now we have $$\int_0^1\frac{2}{\pi^2}x\space \fr
|
Let's say that $Y=g(X)$ . Formally speaking (here $g$ has to be strictly increasing) the CDF of $Y$ is defined by $F_Y(y)=P(Y\leq y)=P(g(X)\leq y)=P(X\leq g^{-1}(y))=F_X(g^{-1}(y))$ , hence the density of $Y$ is its derivative: $$ f_Y(y)=\frac{1}{g'(g^{-1}(y))}f_X(g^{-1}(y)). $$ In your case, as $\sin$ is not strictly increasing on $[0,\pi]$ you have $$ \begin{align*} P(\sin X\leq y)&=P(X\leq \arcsin y)+P(X\geq \pi-\arcsin y ),\\ &=F_X(\arcsin y)+1-F_X(\pi-\arcsin y). \end{align*} $$ All computations done, you end up with $$ f_Y(y)=\frac{2}{\pi\sqrt{1-y^2}}\mathbf{1}_{[0,1[}(y). $$
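A Monte Carlo sanity check of this result (my addition, stdlib only). Since $F_X(x)=x^2/\pi^2$ on $[0,\pi]$, inverse-transform sampling gives $X=\pi\sqrt{U}$; integrating the density above gives $F_Y(y)=\frac{2}{\pi}\arcsin y$, which we compare against the empirical CDF of $Y=\sin X$:

```python
import math, random

random.seed(0)
n = 100_000
# Sample X with CDF x^2/pi^2 via X = pi*sqrt(U), then take Y = sin(X)
ys = [math.sin(math.pi * math.sqrt(random.random())) for _ in range(n)]

for y in (0.2, 0.5, 0.8):
    emp = sum(1 for v in ys if v <= y) / n          # empirical P(Y <= y)
    exact = (2 / math.pi) * math.asin(y)            # CDF implied by f_Y above
    assert abs(emp - exact) < 0.01, (y, emp, exact)
print("empirical CDF matches (2/pi)*asin(y)")
```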
|
|probability-distributions|solution-verification|
| 1
|
Is this a correct proof for the following summation problem?
|
There is the following problem in my math book, but the author did not include a solution. Let $a_i$ and $b_{ij}$ be expressions dependent on i and j. Prove that: $$\sum_{i=1}^n a_i \sum_{j=1}^m b_{ij} = \sum_{i=1}^n \sum_{j=1}^m a_ib_{ij}$$ I tried doing this the following way: $$\sum_{i=1}^n a_i \sum_{j=1}^m b_{ij}=a_1\sum_{j=1}^m b_{1j}+a_2\sum_{j=1}^m b_{2j}+...+a_n\sum_{j=1}^m b_{nj}=\sum_{j=1}^m a_1b_{1j}+\sum_{j=1}^m a_2b_{2j}+...+\sum_{j=1}^m a_nb_{nj}=\sum_{i=1}^n\sum_{j=1}^m a_ib_{ij}$$ Would this be a correct proof? If not, what have I done wrong?
|
A potential problem of your proof: the meaning of " $\cdots$ " is intuitively obvious but formally unclear. How rigorous is your book? How is $\sum$ defined in your book? With " $\cdots$ "? Maybe using induction? My suggestion: start by proving $$a\sum_{j=1}^m b_{ij} = \sum_{j=1}^m a b_{ij}.$$
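Not a proof, of course, but the identity is easy to sanity-check numerically on random data:

```python
import random

random.seed(1)
n, m = 4, 5
a = [random.random() for _ in range(n)]
b = [[random.random() for _ in range(m)] for _ in range(n)]

# Left side: sum_i a_i * (sum_j b_ij); right side: double sum of a_i * b_ij
lhs = sum(a[i] * sum(b[i][j] for j in range(m)) for i in range(n))
rhs = sum(a[i] * b[i][j] for i in range(n) for j in range(m))
assert abs(lhs - rhs) < 1e-12
print("identity holds numerically")
```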
|
|solution-verification|summation|
| 0
|
Proving that $\left|\int_{a}^{b}\frac{\sin(x)}{x}dx\right|\leq 3$, given $1\leq a<b$
|
If $1\leq a<b$, prove that $\left|\int_{a}^{b}\frac{\sin(x)}{x}dx\right|\leq 3$. Proceeding by integration by parts; let $u(x)=\sin(x)$ and $dv=\frac{1}{x}dx$, then $u'(x)=\cos(x)$ & $v(x)=\log(x)$. We get, $$\bigg|\sin(x)\log(x)\Big|_a^b-\int_a^b \cos(x)\log(x)dx\bigg|$$ Now, I think it might be convenient to use the fact that sine and cosine are bounded in absolute value by $1$ and $\log(x)\leq x-1$. Thanks
|
By the second mean value theorem for definite integrals, $$\big|\int_a^b\frac{\sin x}{x}dx\big|=\big|\frac{1}{a}\int_a^\xi\sin xdx\big|\le\frac{2}{a}$$
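A numeric check of this bound (my addition, not from the answer), using a simple composite Simpson rule for the integral:

```python
import math

def simpson(f, a, b, n=10_000):
    """Composite Simpson rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

f = lambda x: math.sin(x) / x
# Check |int_a^b sin(x)/x dx| <= 2/a on a few intervals with 1 <= a < b
for a, b in [(1, 5), (1, 50), (2, 37.5), (3, 1000)]:
    val = abs(simpson(f, a, b))
    assert val <= 2 / a + 1e-6, (a, b, val)
print("bound |integral| <= 2/a confirmed on samples")
```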
|
|real-analysis|inequality|improper-integrals|lebesgue-integral|
| 0
|
Internal hom takes coends to ends
|
I know that this is a very general fact about limits and colimits, but I would like to prove it directly for ends and coends. If $\mathcal V$ is a closed braided monoidal category, $V$ an object in $\mathcal V$ and $P\colon\mathcal M^\mathsf{op}\otimes\mathcal M\to \mathcal V$ is a $\mathcal V$ -functor, I can define its coend, whether it exists, as the initial $\mathcal V$ -extranatural transformation $i\colon P(-,-)\overset{\cdot\cdot}{\Rightarrow}\int^MP(M,M)$ . This, as far as I understand, is a necessary step in enriched category theory in order to define the coends "representably" for a functor valued in any other $\mathcal V$ -category. Dually, I can define the end $\int_MP(M,M)\overset{\cdot\cdot}{\Rightarrow}P(-,-)$ , and one should have $$\int_M\mathcal V(P(M,M),V)\cong\mathcal V(\int^MP(M,M),V).$$ The representable definition will be, for a $\mathcal V$ -functor $P\colon\mathcal M^\mathsf{op}\otimes\mathcal M\to \mathcal D$ , the object $\int^MP(M,M)$ of $\mathcal D$ , toget
|
$\require{AMScd}\newcommand{\op}{{^\mathsf{op}}}\newcommand{\M}{\mathcal{M}}\newcommand{\V}{\mathscr{V}}\newcommand{\D}{\mathscr{D}}\newcommand{\C}{\mathscr{C}}$ Why can you only define (co)ends in the ordinary way for $\V$ -valued functors? Let me say $\V$ is a closed symmetric monoidal category. I stipulate that only because I've developed and read theory of enriched stuff only in the symmetric case and I can't be sure what does and doesn't rely on $\gamma^2=1$ without doing a very thorough review. Say $F:\underline{\C\op\boxtimes\C}\to\underline{\D}$ is a $\V$ -functor. I can define a $\V$ -wedge to be an object $\partial\in\D$ with a family of arrows $\omega_\varsigma:\partial\to F(\varsigma,\varsigma)$ for $\varsigma\in\C$ - "arrows" being in the underlying category $(\underline{\D})_0$ - that satisfy: $$\begin{CD}\underline{\C}(\varsigma,\varsigma')@>F^\varsigma_{\varsigma,\varsigma'}>>\underline{\D}(F(\varsigma,\varsigma),F(\varsigma,\varsigma'))\\@VF\op^{\varsigma'}_{\varsigma'
|
|category-theory|limits-colimits|enriched-category-theory|
| 0
|
Searching for useful ternary codes
|
I'm trying to construct a $[108;60;16]_3$ code through concatenation, but I can't find a ternary code that'd help me with it. Hamming codes are binary, Golay's codes have distances of 5 and 6 which are not divisors of 16, the only Reed-Solomon code that is useful here is $[3;2;2]_3$ , but it needs to use $[36;30;8]_9$ as an outer code which is even harder to construct. What ternary codes could be helpful here? Thanks in advance.
|
Concatenation works like a charm here. Let us first describe a ternary code $\mathcal{C}_1$ with parameters $[4;3;2]_3$ . That is easy, we simply extend any sequence $(t_1,t_2,t_3)$ of ternary symbols by adding a checksum $$ t_4=t_1+t_2+t_3. $$ The arithmetic is done modulo three (or in the prime field $\Bbb{F}_3$ ). The minimum Hamming distance is obviously $2$ . For if $(t_1,t_2,t_3)$ and $(t_1',t_2',t_3')$ differ at only one position, then the corresponding checksums $t_4=t_1+t_2+t_3$ and $t_4'=t_1'+t_2'+t_3'$ are also different. So the sequences $(t_1,t_2,t_3,t_4)$ and $(t_1',t_2',t_3',t_4')$ differ in at least two positions. Let's denote this encoding process by $$\psi:\Bbb{F}_3^3\to\Bbb{F}_3^4.$$ The other code we need is a Reed-Solomon code $\mathcal{C}_2$ with alphabet $\Bbb{F}_{27}$ , length $n_2=27$ , and rank $k_2=20$ . Reed-Solomon codes are maximum distance separable (=they reach the Singleton bound), so this code has minimum distance $d_2=n_2-k_2+1=8$ . The way the concaten
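A brute-force check (my sketch) that the checksum code $\mathcal{C}_1$ really has parameters $[4;3;2]_3$, i.e. $27$ codewords with minimum Hamming distance $2$:

```python
from itertools import product

# All codewords of C1: (t1, t2, t3, t1+t2+t3 mod 3)
codewords = [(t1, t2, t3, (t1 + t2 + t3) % 3)
             for t1, t2, t3 in product(range(3), repeat=3)]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

dmin = min(hamming(u, v) for u in codewords for v in codewords if u != v)
assert len(codewords) == 27 and dmin == 2
print("C1 is a [4;3;2]_3 code")
```

With the $[27;20;8]_{27}$ Reed-Solomon outer code, the concatenated parameters are $4\cdot 27 = 108$, $3\cdot 20 = 60$, and distance at least $2\cdot 8 = 16$, as required.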
|
|coding-theory|
| 0
|
Internal hom takes coends to ends
|
I know that this is a very general fact about limits and colimits, but I would like to prove it directly for ends and coends. If $\mathcal V$ is a closed braided monoidal category, $V$ an object in $\mathcal V$ and $P\colon\mathcal M^\mathsf{op}\otimes\mathcal M\to \mathcal V$ is a $\mathcal V$ -functor, I can define its coend, whether it exists, as the initial $\mathcal V$ -extranatural transformation $i\colon P(-,-)\overset{\cdot\cdot}{\Rightarrow}\int^MP(M,M)$ . This, as far as I understand, is a necessary step in enriched category theory in order to define the coends "representably" for a functor valued in any other $\mathcal V$ -category. Dually, I can define the end $\int_MP(M,M)\overset{\cdot\cdot}{\Rightarrow}P(-,-)$ , and one should have $$\int_M\mathcal V(P(M,M),V)\cong\mathcal V(\int^MP(M,M),V).$$ The representable definition will be, for a $\mathcal V$ -functor $P\colon\mathcal M^\mathsf{op}\otimes\mathcal M\to \mathcal D$ , the object $\int^MP(M,M)$ of $\mathcal D$ , toget
|
$\require{AMScd}$ Fix an object $X\in\cal V$ . Applying ${\cal V}(-,X)$ to the initial cowedge $\alpha : P(-,-)\overset{..}\Rightarrow \int^MPMM$ you get a wedge ${\cal V}(\alpha,X):{\cal V}(\int^MPMM,X) \overset{..}\Rightarrow {\cal V}(P(-,-),X)$ , where it's clear how the functor ${\cal V}(P(-,-),X)$ is defined: $$ \begin{CD} {\cal M}^\text{op}\times{\cal M}@= ({\cal M}^\text{op}\times{\cal M})^\text{op} @>P^\text{op}>> {\cal D}^\text{op} @>{\cal V}(-,X)>> \cal V \end{CD}$$ Now, in order to prove that ${\cal V}(\alpha,X)$ is a terminal wedge, take another object $Z\in \cal V$ , and a wedge $Z \overset{..}\Rightarrow {\cal V}(P(-,-),X)$ ; this corresponds to a cowedge $P(-,-) \overset{..}\Rightarrow {\cal V}(Z,X)$ , under the correspondence (true for every $M$ ) $$ \tag{$\star$}{\cal V}(Z,{\cal V}(PMM,X))\cong {\cal V}(PMM,{\cal V}(Z,X))$$ hence to a unique morphism $$ \begin{CD} \int^M PMM @>>>{\cal V}(Z,X) \end{CD}$$ hence (for the same isomorphism $(\star)$ ) to a unique morphism $
|
|category-theory|limits-colimits|enriched-category-theory|
| 1
|
Prove that if $1 \leq p \leq q \leq \infty$, then $||x||_q \leq ||x||_p$, for every $x \in\mathbb{R}^{n}$.
|
(Hint) Consider the case $||x||_{p} = 1$. So far I have that $1= |x_{1}|^p+|x_2|^{p}+..+|x_{n}|^{p}$, because I used the fact that $||x||_{p}=1$. Why do I have to use the positive homogeneity property for norms? So now all of the $x_{i}$ elements are all less than $1$ for it to equal $1$. So that means $P$ has to be greater than $1$ for this to make sense right? Then, from here, I don't know where to go...
|
More generally, given that $ 1 \leq p < q$ , you can prove that $||a||_q \leq ||a||_p$ for a sequence of real numbers $a = (a_n)$ verifying $a \in l_p $ . 1- Indeed $ \forall n \in \mathbb{N}$ we have, since all terms of the sum are nonnegative: $$||a||_p^{q-p}=(|a_1|^p+|a_2|^p+...+|a_n|^p+...)^{\frac{q-p}{p}} \geq (|a_n|^p)^{\frac{q-p}{p}}=|a_n|^{q-p} \Rightarrow \forall n , |a_n|^q \leq ||a||^{q-p}_p |a_n|^p$$ 2- According to "1-" we can write: $$||a||_q = (|a_1|^q+|a_2|^q+...+|a_n|^q+...)^{\frac{1}{q}} \leq (||a||^{q-p}_p |a_1|^p+ ||a||^{q-p}_p |a_2|^p + ... + ||a||^{q-p}_p |a_n|^p +...)^{\frac{1}{q}} = ||a||_p^{\frac{q-p}{q}}||a||_p^{\frac{p}{q}} = ||a||_p^{\frac{q-p}{q} + \frac{p}{q}} = ||a||_p$$ And this finishes the proof.
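A quick numeric check of $||x||_q \leq ||x||_p$ for $p \leq q$ on random vectors (a sanity check, not a proof):

```python
import random

random.seed(2)

def pnorm(x, p):
    return sum(abs(t) ** p for t in x) ** (1 / p)

for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(8)]
    for p, q in [(1, 2), (1.5, 3), (2, 7)]:
        # monotonicity of p-norms: larger exponent gives smaller norm
        assert pnorm(x, q) <= pnorm(x, p) + 1e-9
print("||x||_q <= ||x||_p verified on random samples")
```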
|
|real-analysis|lp-spaces|
| 0
|
Number of irreducible polynomials with degree $6$ in $\mathbb{F}_2[X]$
|
I'm looking for the number of irreducible polynomials with degree $6$ in $\mathbb{F}_2[X]$ with leading coefficient $1$ . First question: Is the leading coefficient $1$ redundant, because every polynomial of degree $6$ in $\mathbb{F}_2[X]$ has leading coefficient $1$ ? My solution: The polynomials must be of the form $$ x^6 + \dots + 1$$ to ensure the degree of $6$ and to ensure that $0$ is not a root. Now we have $$ \dots + \alpha_5x^5+ \dots +\alpha_1x+ \dots$$ with $\alpha_i \in \mathbb{F}_2$ . To avoid that $1$ is a root, an odd number of $\alpha_i$ must be $1$ . The number of combinations for that would be $\frac{2^{5-1}}{2}= 8.$ Is that correct?
|
Let $f(x) = x^6 + a_5 x^5 + a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0 \in \mathbb{F}_2[x]$ . Assume $f(x)$ is irreducible over $\mathbb{F}_2$ . First, $0$ cannot be its root, so we have $a_0 = 1$ . And this means we still need to consider $2^5 = 32$ cases. Second, $1$ cannot be its root, so only an odd number of the $a_i$ s $(1 \leq i \leq 5)$ can be $1$ , and this means either 1 of the $a_i$ s, or 3 of the $a_i$ s, or 5 of the $a_i$ s, can be 1. So the number of cases we need to consider is $$\begin{pmatrix} 5 \\ 1 \end{pmatrix} + \begin{pmatrix} 5 \\ 3 \end{pmatrix} + \begin{pmatrix} 5 \\ 5 \end{pmatrix} = 5 + 10 + 1 = 16$$ Third, we need to exclude those polynomials s.t. 1 and 0 are not their roots but they can still be written as a product. And since we are assuming that 1 and 0 are not their roots, we only need to consider the factors of degree 2, 3, and 4 with 0 and 1 not their roots. For factors of degree 2, we have $x^2 + x + 1$ For factors of degree 3, we have $x^3+ x + 1$ and $x^3 + x^2 + 1$
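A brute-force confirmation of the final count (my addition): counting the monic irreducible degree-6 polynomials over $\mathbb{F}_2$ by trial division with all polynomials of degree $1$ to $3$, encoding polynomials as bit masks (bit $k$ = coefficient of $x^k$):

```python
def gf2_mod(a, b):
    """Remainder of polynomial a modulo b over F_2 (bit-mask encoding)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible(p, deg):
    # A reducible degree-6 polynomial has an irreducible factor of degree <= 3,
    # so it suffices to test divisors among all masks of degree 1..deg//2.
    return all(gf2_mod(p, d) != 0 for d in range(2, 1 << (deg // 2 + 1)))

count = sum(is_irreducible((1 << 6) | rest, 6) for rest in range(1 << 6))
assert count == 9  # matches (1/6) * sum_{d|6} mu(d) * 2^(6/d) = 9
print(count, "irreducible degree-6 polynomials over F_2")
```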
|
|abstract-algebra|finite-fields|irreducible-polynomials|
| 0
|
Showing Variant of Exponential Martingale is a super-martingale
|
Let $B_t$ be a standard Brownian motion, and let $a_t$ be progressively measurable and so that $a_t \in [0,1]$ almost surely. It's well known that $e^{\theta B_t - \frac{1}{2} \theta^2 t}$ is a martingale for any constant $\theta$ . Let $M_t = a_t B_t$ . I want to show now that $e^{\theta M_t - \frac{1}{2} \theta^2 t}$ is a super-martingale. Intuitively, this should be true because $a_t \in [0,1]$ , and the function $e^x$ is convex, so the large positive fluctuations of the Brownian motion are down-weighted. I tried using Ito's lemma to write $$e^{\theta M_t} - 1 = \int_0^t e^{\theta M_s} \theta dM_s + \frac{\theta^2}{2} \int_0^t e^{\theta M_s} a_s^2 ds,$$ though I feel like this is kind of circular, since I don't quite know how to show that this last term is less than $e^{\theta^2t/2}$ (in expectation, which would show that this process is a super-martingale). Any advice would be great!
|
First Round To have an easier life I will modify slightly your term in the exponential to be more of a standard form when $\alpha$ is not constant. From the integration-by-parts formula we get $$ \theta M_t=\theta\alpha_tB_t=\theta\int_0^t\alpha_s\,dB_s+\theta\int_0^tB_s\,d\alpha_s=:X_t+A_t\,. $$ From the Ito formula we get \begin{align} e^{\theta M_t-\frac12\langle X\rangle_t}&=1+\int_0^te^{\theta M_s-\frac12\langle X\rangle_s}\,dX_s+\int_0^te^{\alpha_sM_s-\frac12\langle X\rangle_s}\,dA_s \end{align} where $$ \langle X\rangle_t=\theta^2\int_0^t\alpha_s^2\,ds\,. $$ Since $\int_0^te^{\theta M_s-\frac12\langle X\rangle_s}\,dX_s$ is a martingale it follows from the uniqueness of the Doob-Meyer decomposition that $e^{\theta M_t-\frac12\langle X\rangle_t}$ is a supermartingale if and only if $$ \int_0^te^{\theta M_s-\frac12\langle X\rangle_s}\,dA_s $$ is decreasing. But this is not the case since the integrand is $$ e^{\theta M_s-\frac12\langle X\rangle_s}\,\theta\,\color{red}{B_s}\,\alpha'
|
|stochastic-calculus|stochastic-analysis|
| 0
|
Summation with $n+k-1 \choose k$
|
I've derived a couple of probability distributions that involve the summations: $$ \sum_{k=0}^{n-1}\binom{n+k-1}{k}p^nq^k $$ $$ \sum_{k=0}^{n-1}(n+k)\binom{n+k-1}{k}p^nq^k $$ where $q=1-p$ . I am struggling to derive closed-form expressions for these summations, which I feel ought to be possible given their similarity to the negative binomial distribution. I have tried a variety of approaches, the latest being to write the sum as $S_n$ and consider $qS_n-S_n$ , but no joy so far. Any help is much appreciated!
|
Hint If you replace the binomial coefficient by the gamma function $$S_n=\sum_{k=0}^{n-1}\binom{n+k-1}{k}\,p^n\,q^k=\frac{p^n}{\Gamma (n)}\sum_{k=0}^{n-1} \frac{ \Gamma (k+n)}{\Gamma (k+1)}\,q^k$$ and, as usual, the result of the summation is a hypergeometric function. If $q=(1-p)$ , then $$S_n=1-\binom{2 n-1}{n} \Big((1-p)\, p\Big)^n \, _2F_1(1,2 n;n+1;1-p)$$ where $_2F_1$ denotes the Gaussian hypergeometric function.
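As a cross-check (my addition, not part of the hint): the first sum is the CDF of a negative binomial distribution, which also equals a binomial tail probability, $\sum_{k=0}^{n-1}\binom{n+k-1}{k}p^nq^k = P(\mathrm{Bin}(2n-1,p)\geq n)$, since the $n$-th success occurs within the first $2n-1$ trials iff at least $n$ of those trials are successes:

```python
from math import comb

def S(n, p):
    q = 1 - p
    return sum(comb(n + k - 1, k) * p**n * q**k for k in range(n))

def binom_tail(n, p):
    # P(Bin(2n-1, p) >= n)
    q = 1 - p
    return sum(comb(2*n - 1, j) * p**j * q**(2*n - 1 - j) for j in range(n, 2*n))

for n in (1, 2, 5, 10):
    for p in (0.2, 0.5, 0.9):
        assert abs(S(n, p) - binom_tail(n, p)) < 1e-12
print("S_n equals the binomial tail P(Bin(2n-1, p) >= n)")
```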
|
|probability|binomial-coefficients|
| 0
|
A limit question of 3-variable-functions. $\lim\limits_{ (x,y,z) \to (0,0,0)} \frac {xyz^2}{x^2+y^4+z^6}$
|
$$\lim\limits_{ (x,y,z) \to (0,0,0)} \frac {xyz^2}{x^2+y^4+z^6}$$ I checked that the limit does not exist but I cannot prove that. I tried $y=mx$, $z=nx$ and also $y=x^m$, $z=x^n$, but they all gave a limit equal to zero. Thanks a lot
|
We have $|x| \le \sqrt{x^2 + y^4 + z^6}$ and $|y| \le \sqrt[4]{x^2 + y^4 + z^6}$ and $|z| \le \sqrt[6]{x^2 + y^4 + z^6}$ , so we get: $$\left| \frac{x y z^2}{x^2 + y^4 + z^6} \right| \le (x^2 + y^4 + z^6)^{\frac{1}{2} + \frac{1}{4} + \frac{2}{6} - 1} = (x^2 + y^4 + z^6)^{\frac{1}{12}}$$ Since the right-hand side tends to $0$ , the limit is $0$ .
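A numeric sanity check of the key inequality (my addition), on random points near the origin:

```python
import random

random.seed(3)
for _ in range(1000):
    x, y, z = (random.uniform(-0.5, 0.5) for _ in range(3))
    denom = x * x + y**4 + z**6
    if denom == 0:
        continue
    lhs = abs(x * y * z * z / denom)
    rhs = denom ** (1 / 12)
    assert lhs <= rhs + 1e-12, (x, y, z)
print("bound |xyz^2|/(x^2+y^4+z^6) <= (x^2+y^4+z^6)^(1/12) holds on samples")
```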
|
|calculus|real-analysis|analysis|limits|multivariable-calculus|
| 0
|
A stronger version of Cauchy integral theorem
|
I have learnt the "normal" Cauchy integral theorem that if a function $f$ is holomorphic in an open set that contains a simple closed piece-wise smooth curve $\Gamma$ and its interior, then $\displaystyle\int_{\Gamma}f(z)\mathrm{d}z=0$ . But our teacher asks us to prove a stronger version: Suppose that $\Gamma\subset\mathbb{C}$ is a rectifiable simple closed curve and $\Omega$ is its interior. $f$ is continuous in $\overline{\Omega}$ and holomorphic in $\Omega$ , then $\displaystyle\int_{\Gamma}f(z)\mathrm{d}z=0$ . I managed to prove it when $\Omega$ is a star domain, but failed to generalize this conclusion. Also, I know this question is trivial if I am allowed to use Mergelyan's theorem directly. But Mergelyan's theorem is too advanced for us because we have only been studying complex analysis for a month! I know this question has been posted many times on Math Stack Exchange, but I wonder if there is any more elementary proof. Any help would be appreciated.
|
This boils down to comparing the contour integrals over the boundaries of small rectangles partitioning the region with the enclosed area, via Stokes' theorem: $$\int_{\partial V} f(z)\, dz = \int_V \partial_{\overline z} f(z)\, d\overline z \wedge dz = 0$$ by the Cauchy-Riemann equations. Then check convergence for a sequence of contours inside $\Omega$ converging to the rectifiable limiting contour, controlling the normal distance, the length of the contour, and the mean of $f$.
|
|complex-analysis|
| 0
|
The existence of "inverse" in monoidal category
|
In a monoidal category $\mathcal{C}$ , can every $f\in \operatorname{Hom}_{\mathcal{C}}(X\otimes \mathbf{1},Y\otimes \mathbf{1})$ be expressed as $f=g\otimes \operatorname{Id}_{\mathbf{1}}$ , where $g\in\operatorname{Hom}_{\mathcal{C}}(X,Y)$ ? Short remark: I am reading a lecture note about tensor categories by P. Etingof, S. Gelaki, D. Nikshych, and V. Ostrik. The link is attached here https://ocw.mit.edu/courses/18-769-topics-in-lie-theory-tensor-categories-spring-2009/pages/lecture-notes/ . In their convention, a monoidal category is defined as a quintuple $(\mathcal{C},\otimes, a, \mathbf{1}, \iota)$ where $\mathcal{C}$ is a category, $\otimes : \mathcal{C}\times \mathcal{C}\to \mathcal{C}$ is a bifunctor. $a: (\bullet\otimes \bullet)\otimes \bullet \xrightarrow[]{\sim} \bullet\otimes(\bullet\otimes\bullet)$ is a natural isomorphism between two tri-functors $\mathcal{C}\times\mathcal{C}\times\mathcal{C}\to \mathcal{C}$ . $$ a_{X,Y,Z}: (X\otimes Y)\otimes Z \xrightarrow[]{\sim} X\otim
|
The answer is yes. We can prove that $$f=(\eta_Y\circ R_1^{-1}(f)\circ\eta_X^{-1})\otimes\operatorname{Id}_{\mathbf{1}}\quad \forall f\in\operatorname{Hom}_{\mathcal{C}}(X,Y)$$ Let $g=\eta_Y\circ R_1^{-1}(f)\circ\eta_X^{-1} $ in the commutative diagram present in the question, we will get $R_1^{-1}(g\otimes \operatorname{Id}_{\mathbf{1}})=R_1^{-1}(f)$ . As mentioned in the lecture note attached in the question, there exists a natural isomorphism $r: R_1\xrightarrow{\sim}\operatorname{Id}_{\mathcal{C}}$ , combined with the natural isomorphism (category equivalence) $\eta^\prime: (R_1\circ R_1^{-1})\xrightarrow{\sim}\operatorname{Id}_{\mathcal{C}}$ , we can establish a natural isomorphism $\lambda: R_1^{-1}\xrightarrow{\sim}\operatorname{Id}_{\mathcal{C}}$ . This natural isomorphism tells that if $R_1^{-1}(g\otimes \operatorname{Id}_{\mathbf{1}})=R_1^{-1}(f)$ then $g\otimes\operatorname{Id}_{\mathbf{1}}=f$ .
|
|monoidal-categories|
| 0
|
Trying to figure out a pattern's formula for a game.
|
Some basic information is that the game starts with 6 inventory slots and the first additional slot (7th slot) costs 400 coins, the 8th slot costs 857, the 9th 1339, and so on. The game is called Sol's RNG ( Link Here ). I am trying to figure out if there is an equation that would fit this kind of pattern. I think it's exponential but I am unsure. I have graphed these out onto Desmos up to the 42nd term ( Link Here ), but I am unsure how to proceed with the calculations. Hoping someone can help! Cheers!
|
The formula $$ y=\left\lfloor 400(x-6)^{1.1} \right\rfloor $$ fits the data exactly. Here's the Desmos link . You can do the fitting in Desmos. It came close enough to allow guessing the "correct" values (when you also put the floor in there).
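Checking the formula against the prices quoted in the question (slot 7 → 400, slot 8 → 857, slot 9 → 1339):

```python
import math

def cost(x):
    # Fitted formula from the answer: floor(400 * (x - 6)^1.1)
    return math.floor(400 * (x - 6) ** 1.1)

assert cost(7) == 400
assert cost(8) == 857
assert cost(9) == 1339
print([cost(x) for x in range(7, 12)])
```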
|
|pattern-recognition|
| 1
|