| title | question_body | answer_body | tags | accepted |
| string | string | string | string | int64 |
|---|---|---|---|---|
A question related to Jordan Curve Theorem
|
Problem. Let $C$ be a smooth Jordan curve, and $ \gamma: [-1,1]\to\mathbb R^2$ a segment crossing $C$ perpendicularly at $x_0\in C$, with $\gamma(0)=x_0$. Show that there exists an $\varepsilon>0$ such that the segments $$ \ell_-=\big\{\gamma(t): t\in (-\varepsilon,0)\big\}\quad\text{and}\quad \ell_+=\big\{\gamma(t): t\in (0,\varepsilon)\big\} $$ each lie wholly in a different connected component of $\mathbb R^2\setminus C$ . This question looks obvious once we draw a curve and a perpendicular. In proofs, it is often also treated as obvious. Nevertheless, I have not managed to produce a rigorous proof or explanation. Any ideas? Note. By "smooth curve" I mean that $\gamma$ is continuously differentiable with non-vanishing derivative.
|
I work in $\mathbb C$ here rather than $\mathbb R^2$, and use different notation than the OP. Let $\gamma: [0,1]\longrightarrow \mathbb C$ be a Jordan curve with a continuous non-vanishing derivative everywhere, and let $\sigma: [-1,1]\longrightarrow \mathbb C$ be a line segment crossing $\big[\gamma\big]$ (the image of $\gamma$) perpendicularly at a point $z_0 \in \big[\gamma\big]$. We need only consider the problem in a particularly simple setting. First, to make things easy, we may parameterize $\gamma$ so that $\gamma\big(\frac{1}{2}\big)=z_0$. Second, the properties of interest are preserved by any invertible affine map, so using a translation we can take $z_0 =0$, and using a rotation we can take $\big[\sigma\big]$ to be a segment on the real line, which implies $\gamma'\big(\frac{1}{2}\big) \not\in \mathbb R$. Third, we can parameterize $\sigma$ so that $\sigma(t)$ is negative when $t\lt 0$ and positive when $t\gt 0$. Suppose for contradiction that $\ell_{+}$ does not exist. This means ther
|
|algebraic-topology|curves|
| 0
|
Entire function bounded in every horizontal line, and has limit along the positive real line
|
Let $f(z)$ be an entire function (holomorphic function on $\mathbb{C}$ ) satisfying the following conditions: $$|f(z)|\leq \max (e^{\operatorname{Im}(z)},1 ),\ \forall z\in\mathbb{C}$$ $$\lim_{\mathbb{R}\ni t\rightarrow+\infty} f(t)=0$$ My question is: can we prove that the following limit exists? $$\lim_{\mathbb{R}\ni t\rightarrow-\infty} f(t)$$ Maybe we can even prove $f(z)=0$ . But I do not know how to prove it. My attempt: if $f(z)$ has finitely many zeros, then by the Hadamard factorization theorem, $f(z)=e^{az+b}P(z)$ where $P(z)$ is a polynomial. Such a function cannot be bounded by $\max(e^{\operatorname{Im}(z)},1)$ unless $P(z)\in\mathbb{C}$ and $ia\in\mathbb{R}$ . Then we know $f(z)$ has to be $0$ . But if $f(z)=0$ has infinitely many solutions, then I do not know how to proceed.
|
I think the answer is no. (Sorry, Conrad's answer didn't display for me for some reason, but I came up with this independently, and I think my construction is different from theirs.) First consider the following function $$\phi(z) = 10^{-10}\frac{e^{iz} - 1}{z}.$$ We check that it has the following properties: $\phi(0) = 10^{-10}i$ ; $\phi$ is holomorphic; and $$|\phi(z)| \leq \frac{\max(e^{\operatorname{Im}(z)}, 1)}{100(|z| + 1)}.$$ This can be seen by considering the regions $|z| \leq 1$ and $|z| \geq 1$ separately. Now we form the function $$f(z) = \sum_{k = 1}^\infty \phi(z + 2^k).$$ I think this has the desired properties. Check that: due to the bound on $\phi(z)$ , the sum defining $f(z)$ converges uniformly on every unit ball (exponential decay), so $f(z)$ is holomorphic. We have $$|f(z)| \leq \max(e^{\operatorname{Im}(z)}, 1) \cdot \sum_{k = 1}^\infty \frac{1}{100(|z + 2^k| + 1)}.$$ It is not hard to check that the second factor is at most $1$ . Indeed, let $m$ be the largest positive integer such that $|z| \geq 2^m$
|
|complex-analysis|harmonic-functions|entire-functions|
| 0
|
Can a smooth curve contain a straight line segment?
|
Setting: we are given a smooth curve $\gamma: \mathbb{R} \rightarrow \mathbb{R}^n$ . Informal Question: Is it possible that $\gamma$ is a straight line on $[a,b]$ , but not a straight line on $[a,b]^c$ ? Formal Question: Is it possible that $\gamma''(t)=0$ for all $t\in [a,b]$ , while $\gamma''(t)\neq 0$ for some $t\not\in [a,b]$ ? The motivation for this question is that the textbook we use in our geometry class discusses only smooth curves on bounded open intervals. While I know that the curvature $\gamma''$ can be zero at a point (for instance, $\gamma(t)=(t,\sin t)$ has zero curvature on $\{n\pi:n\in\mathbb{Z}\}$ ), I cannot come up with an example of a smooth curve $\gamma$ such that $\gamma''$ is $0$ on some interval $(a,b)$ . I think such an example, if it exists, would be interesting to see in GGB, but I failed to come up with one due to my inexperience. Thanks for any help.
|
You should be able to come up with examples that are circular arcs and line segments (tangent to the arcs where they meet). For instance: the part of the circle centered at $(-1/2,1/10)$ with radius $1$ between angles $\pi$ and $3\pi/2$ , the horizontal line segment from $(-1/2,1/10)$ to $(1/2,1/10)$ , then the part of the circle centered at $(1/2,1/5)$ of radius $1/10$ between the angles $3\pi/2$ and $11\pi/6$ . Note: there is nothing special about the line segment being horizontal -- that's just the easiest example off the top of my head. As long as the line segment(s) are tangent to the circles at the points where they meet the arcs, the result is continuous and has continuous first derivatives, i.e., is $C^1$ . The second (and higher) derivatives are not continuous at the points where these arcs and segments meet. Also, a detail of the example is that the graphed function is continuous on $[-3/2,1/2+\sqrt{3}/20]$ , but the derivative is only co
|
|geometry|differential-geometry|curves|smooth-functions|
| 0
|
Implicit differentiation in an application problem
|
I want to solve an application problem that says the following: A country's savings $S$ is defined implicitly in terms of its national income $I$ by the equation $$S^2+\frac{1}{4}I^2=SI+I$$ where both $S$ and $I$ are in billions of dollars. Find the marginal propensity to consume when $I=22$ and $S=18$ . To solve it I only have to differentiate implicitly and substitute; my doubt is whether I differentiated implicitly with respect to $I$ correctly. Note that this expression can be simplified to the following: $$I^2 - \left(4S+4\right)I + 4S^2 = 0$$ Considering this simplified expression, we can implicitly differentiate with respect to $I$ as follows: $$\frac{d}{dI}\left(I^2\right) - \frac{d}{dI}\left(\left(4S+4\right)I\right) + \frac{d}{dI}\left(4S^2\right) = 0$$ In other words, $$2I - 4I\frac{dS}{dI} - 4S - 4 + 8S\frac{dS}{dI} = 0$$ If we take the values from the statement, that is, $I=22$ and $S=18$ , we obtain: $$2\left(22\right) - 4\left(22\right)\frac{dS}{dI} - 4\left(18\right) - 4 + 8\
|
The simplest way is to rewrite the equation as $\left(S-\tfrac{1}{2}I\right)^2=I$ . The purpose of the question is to derive the marginal propensity to consume, which is $1-dS/dI$ . What makes sense to me in the application of the implicit function theorem in this setting is: given the function $f(S,I)=\left(S-\tfrac{1}{2}I\right)^2-I$ and a pair $(S^*,I^*)$ such that $f(S^*,I^*)=0$ , we can solve for $S$ as a function of $I$ in some neighborhood of $S^*$ ; moreover, the "implicit" function is continuously differentiable in that neighborhood as well (and one can calculate the required derivative $dS/dI$ ). However, the point $S=18$ , $I=22$ does not satisfy the savings-income identity, so the question makes no sense to me, either in terms of the math or the economics. We can verify that with $I^*=22$ , $S^*=\sqrt{22}+11$ satisfies the identity, and if my calculations are correct the marginal propensity to consume in this case is $0.3933$ : $$\frac{dC}{dI}=\frac{1}{2}\left(1-\frac{1}{S-\tfrac{1}{2}I}\right)$$
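A quick numeric sanity check of the computation above (an editorial addition, not part of the original answer):

```python
import math

# Savings-income identity from the answer: (S - I/2)^2 = I.
I_star = 22.0
S_star = math.sqrt(I_star) + I_star / 2          # S* = sqrt(22) + 11

# The pair (S*, I*) should satisfy the original equation S^2 + I^2/4 = S I + I.
lhs = S_star**2 + 0.25 * I_star**2
rhs = S_star * I_star + I_star

# Implicit differentiation of (S - I/2)^2 = I gives
#   2 (S - I/2) (dS/dI - 1/2) = 1  =>  dS/dI = 1/2 + 1 / (2 (S - I/2)),
# and the marginal propensity to consume is 1 - dS/dI.
dS_dI = 0.5 + 1.0 / (2.0 * (S_star - I_star / 2))
mpc = 1.0 - dS_dI
```

Running this reproduces the stated value $0.3933$ to four decimal places.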
|
|calculus|derivatives|
| 1
|
Value of $C$ changing in a differential equation
|
I have the differential equation $\frac{dy}{dx} + 5y = 20$ with $y(0)=2$ . This is my attempt: $$\ln|20-5y|=-5x+C$$ $$20-5y=Ce^{-5x}$$ $$y=\frac{Ce^{-5x}}{-5}+4$$ Plugging in the initial condition gives $C=10$ . Here, the value of $C$ changes between steps: you could look at the first line and say $C$ is $\ln 10$ , look at the last line and say $C$ is $10$ , or treat $C/(-5)$ as the constant and say it is $-2$ . What is the correct value of $C$ ? Why does it change like this?
|
The $C$ in the first line, call it $C_1$ , is $\ln 10$ as you've mentioned. The $C$ in the second line (call it $C_2$ ) is not the same as $C_1$ ; it is in fact $e^{C_1}$ , and $e^{\ln 10} = 10$ by the properties of natural logarithms. The goal of solving differential equations is usually to eliminate the derivative ( $dy/dx$ ) and find an explicit equation. You can do so in the first line itself by using $y(0) = 2$ , which turns the equation into $$\ln(20 - 5y) = -5x + \ln 10 \tag{1}$$ Or you can go to the second line and, using $y(0) = 2$ , get $C = 10$ (what we defined above as $C_2$ ), so the equation becomes $$20 - 5y = 10e^{-5x} \tag{2}$$ Both equations $(1)$ and $(2)$ represent the same pairs of values, i.e. the same curve, provided their domains agree (due to the restriction of the log function in equation $(1)$ , $y$ must be less than $4$ ).
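As a sanity check (an editorial addition, not part of the original answer), the solution with $C_2=10$ is $y = 4 - 2e^{-5x}$ ; we can verify it satisfies both the initial condition and the ODE:

```python
import math

def y(x):
    # Solution with C2 = 10:  20 - 5y = 10 e^{-5x}  =>  y = 4 - 2 e^{-5x}
    return 4.0 - 2.0 * math.exp(-5.0 * x)

# Initial condition y(0) = 2.
y0 = y(0.0)

# Residual of dy/dx + 5y - 20 using a central finite difference.
def residual(x, h=1e-6):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx + 5.0 * y(x) - 20.0

max_res = max(abs(residual(x)) for x in [0.0, 0.3, 1.0, 2.0])
```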
|
|calculus|ordinary-differential-equations|
| 1
|
How Do I describe the Lie Algebra of this Symplectic Group?
|
Question Write $(2n\times 2n)$ -matrices in block form $A=\begin{bmatrix}a&b\\c&d\end{bmatrix} \in Mat_{\mathbb{R}}(2n)$ where each of $a,b,c,d$ is an $n\times n$ block. Define $J=\begin{bmatrix}0&1\\-1&0\end{bmatrix}$ where $1$ is shorthand for the $n\times n$ identity matrix. Let $Sp(2n)= \{ A\in Mat_\mathbb{R}(2n)\mid AJA^T=J \}$ , so that $Sp(2n)=F^{-1}(J)$ , where $F: Mat_{\mathbb{R}}(2n)\rightarrow Skew_{\mathbb{R}}(2n)$ is the map $F(A)=AJA^T$ . Here, the codomain is the set of real $2n\times 2n$ matrices satisfying $C=-C^T$ . (I have already shown that the differential $D_AF: Mat_{\mathbb{R}}(2n)\rightarrow Skew_{\mathbb{R}}(2n)$ is $D_AF(H)=AJH^T+HJA^T$ .) Show that $Sp(2n)$ is a matrix Lie group and describe its Lie algebra $\mathfrak{sp} (2n)$ . Note I showed that $Sp(2n)$ is a matrix Lie group by showing group closure, identity, associativity, inverses and smoothness of the group operations. However, I can’t seem to describe the Lie algebra $\mathfrak{sp} (2n)$ .
|
Let $A(t)$ be a curve in $Sp(2n)$ with $A(0)=Id$ and $A'(0)=X$ . Differentiating the defining equation $$ \frac{d}{dt}\Big|_{t=0} (A(t)JA(t)^T) = \frac{d}{dt}\Big|_{t=0} J, $$ we get $$ X J + JX^T = 0. $$ That is the defining equation for the Lie algebra ${\mathfrak s}{\mathfrak p}(2n)$ , in $Mat_{\mathbb R}(2n)$ .
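A numeric illustration for $n=1$ (an editorial addition with an example matrix of my choosing): the condition $XJ+JX^T=0$ forces $\operatorname{tr}X=0$ for $2\times 2$ matrices, and $\exp(X)$ then lands in $Sp(2)$ :

```python
# n = 1: J is the 2x2 matrix [[0, 1], [-1, 0]].
J = [[0.0, 1.0], [-1.0, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def expm(X, terms=40):
    # Matrix exponential via its power series (fine for small matrices).
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = matmul(term, X)
        term = [[term[i][j] / k for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

# A trace-zero X satisfies the Lie algebra equation X J + J X^T = 0.
X = [[0.7, -1.2], [0.4, -0.7]]
lhs = matmul(X, J)
rhs = matmul(J, transpose(X))
defect = max(abs(lhs[i][j] + rhs[i][j]) for i in range(2) for j in range(2))

# A = exp(X) should then satisfy the group equation A J A^T = J.
A = expm(X)
AJAT = matmul(matmul(A, J), transpose(A))
err = max(abs(AJAT[i][j] - J[i][j]) for i in range(2) for j in range(2))
```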
|
|differential-geometry|manifolds|smooth-manifolds|
| 1
|
How Do I describe the Lie Algebra of this Symplectic Group?
|
Question Write $(2n\times 2n)$ -matrices in block form $A=\begin{bmatrix}a&b\\c&d\end{bmatrix} \in Mat_{\mathbb{R}}(2n)$ where each of $a,b,c,d$ is an $n\times n$ block. Define $J=\begin{bmatrix}0&1\\-1&0\end{bmatrix}$ where $1$ is shorthand for the $n\times n$ identity matrix. Let $Sp(2n)= \{ A\in Mat_\mathbb{R}(2n)\mid AJA^T=J \}$ , so that $Sp(2n)=F^{-1}(J)$ , where $F: Mat_{\mathbb{R}}(2n)\rightarrow Skew_{\mathbb{R}}(2n)$ is the map $F(A)=AJA^T$ . Here, the codomain is the set of real $2n\times 2n$ matrices satisfying $C=-C^T$ . (I have already shown that the differential $D_AF: Mat_{\mathbb{R}}(2n)\rightarrow Skew_{\mathbb{R}}(2n)$ is $D_AF(H)=AJH^T+HJA^T$ .) Show that $Sp(2n)$ is a matrix Lie group and describe its Lie algebra $\mathfrak{sp} (2n)$ . Note I showed that $Sp(2n)$ is a matrix Lie group by showing group closure, identity, associativity, inverses and smoothness of the group operations. However, I can’t seem to describe the Lie algebra $\mathfrak{sp} (2n)$ .
|
I don't have enough "reputation" to add a comment to Three Aggies' answer, but I wanted to note additionally that $-JX^T= (XJ)^T$ , so the characterization $XJ + JX^T = 0$ can be rewritten as $XJ = (XJ)^T$ , i.e. $XJ$ is symmetric.
|
|differential-geometry|manifolds|smooth-manifolds|
| 0
|
A presentation of a group of order 12
|
Show that the presentation $G=\langle a,b,c\mid a^2 = b^2 = c^3 = 1, ab = ba, cac^{-1} = b, cbc^{-1} =ab\rangle$ defines a group of order $12$. I tried to let $d=ab$ , which gives $G=\langle d,c\mid d^2 =c^3 = 1, c^2d=dcdc\rangle$. But I don't know how to find the order of the group from the new presentation; I am not sure what the elements of the new $G$ look like. (Certainly not all of the form $c^id^j$ or $d^kc^l$ , otherwise $|G|\leq 5$ .) Is reducing the number of generators a good step, or is it not necessary?
|
Hint: We have $$ (ac)^3=acacac=abc^2ac=abcbc^2=baab=1 $$ and $\langle a,c\mid a^2=c^3=(ac)^3=1\rangle$ is the standard presentation for $A_4$ .
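We can confirm the hint concretely (an editorial addition; the particular permutation realization is my choice): take $a=(1\,2)(3\,4)$ , $b=(1\,3)(2\,4)$ , $c=(1\,3\,2)$ in $A_4$ , check the defining relations, and close the generating set under multiplication to see the group has order $12$ :

```python
def mult(p, q):
    # Compose permutations in one-line notation on {0,1,2,3}: apply q, then p.
    return tuple(p[q[i]] for i in range(4))

def inv(p):
    r = [0] * 4
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

e = (0, 1, 2, 3)
a = (1, 0, 3, 2)   # (1 2)(3 4)
b = (2, 3, 0, 1)   # (1 3)(2 4)
c = (2, 0, 1, 3)   # (1 3 2)

# The defining relations of the presentation:
rels_ok = (
    mult(a, a) == e and mult(b, b) == e and mult(c, mult(c, c)) == e
    and mult(a, b) == mult(b, a)
    and mult(mult(c, a), inv(c)) == b
    and mult(mult(c, b), inv(c)) == mult(a, b)
)

# The hint's relation (ac)^3 = 1:
ac = mult(a, c)
hint_ok = mult(ac, mult(ac, ac)) == e

# Close {a, b, c} under multiplication to get the generated group.
group = {e}
frontier = [a, b, c]
while frontier:
    g = frontier.pop()
    if g not in group:
        group.add(g)
        frontier.extend(mult(g, h) for h in (a, b, c))
order = len(group)
```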
|
|abstract-algebra|group-theory|group-presentation|
| 0
|
Completion of some $C[0,1]$ functions with some inner product is a reproducing kernel Hilbert space
|
The problem Let $X$ be the space of $C^1[0,1]$ functions with the special property $f(0) = 0$ . Consider the inner product defined in the following way: $$\langle f, g \rangle = \int_0^1 f'(x) \overline{g'(x)} dx$$ I want to show that the space $H$ corresponding to the completion of $X$ under the induced norm metric is a reproducing kernel Hilbert space, i.e. $\forall x \in [0,1]$ that we have $$|L_x(f)| = |f(x)| \leq C_x||f||_H, \forall f \in H$$ for some $C_x > 0$ . My progress It is not particularly difficult to show that $L_x$ is bounded on the dense subspace $X$ , as we can use the fundamental theorem of calculus effectively, i.e. for $f \in X$ we have \begin{aligned}|L_x(f)| = |f(x)| = \Big | \int_0^x f'(y) dy \Big | \leq \int_0^x |f'(y)| dy \\ \leq \sqrt{\int_0^x 1^2dy}\sqrt{\int_0^x|f'(y)|^2dy} \leq \sqrt{x}||f'||_2 = \sqrt{x}||f||_H, \end{aligned} and so $C_x = \sqrt{x}$ suffices. My difficulty is extending this to the completion $H$ . In my mind, there seem to be two potenti
|
This is a standard density argument, which is used for instance in Sobolev space theory. The functional $L_x$ is continuous on the dense subspace. Hence, there is a unique extension to its closure, and this extension $L_x: H \to \mathbb R$ is linear and continuous. (In fact, this is true for every uniformly continuous function.) The space $H$ is a closed subspace of the usual Sobolev space $H^1(0,1)$ . Any function in $H \subset H^1(0,1)$ can be changed on a set of measure zero to obtain a continuous function (this is a Sobolev embedding theorem). For such functions, the extended functional $L_x$ indeed returns the evaluation at $x$ .
|
|real-analysis|functional-analysis|hilbert-spaces|lp-spaces|reproducing-kernel-hilbert-spaces|
| 0
|
Remarkable logarithmic integral $\int_0^1 \frac{x \log ^2(x) \log (1-x)}{1+x^2} dx$
|
Question: how to evaluate this logarithmic integral? $$ I=\int_0^1 \frac{x \log ^2(x) \log (1-x)}{1+x^2} d x $$ My attempt: $$ \begin{aligned} I=&\int_0^1 \frac{x \log ^2(x) \log (1-x)}{1+x^2} d x\\ =&\frac{1}{2} \int_0^1 \log ^2(x) \log (1-x)\, d\left(\log \left(x^2+1\right)\right) \\ \stackrel{\text{IBP}}{=}&\underbrace{\left.\frac{1}{2} \log \left(x^2+1\right) \log ^2(x) \log (1-x)\right|_0 ^1}_{=0} \\ &-\int_0^1 \frac{\log (x) \log (1-x) \log \left(x^2+1\right)}{x} d x+\frac{1}{2} \int_0^1 \frac{\log ^2(x) \log \left(x^2+1\right)}{1-x} d x \\ \end{aligned} $$ Let $$J = \int_0^1 \frac{\log (x) \log (1-x) \log \left(x^2+1\right)}{x} dx$$ $$K = \int_0^1 \frac{\log ^2(x) \log \left(x^2+1\right)}{1-x} dx$$ How to evaluate $J$ and $K$ ?
|
To continue the evaluation of the integral by using your work so far, you might first notice that $$\int_0^1 \frac{\log ^2(x) \log \left(1+x^2\right)}{1-x} \textrm{d}x=\int_0^1 \frac{\log ^2(x) \log((1-x^4)/(1-x^2))}{1-x} \textrm{d}x$$ $$=\int_0^1 \frac{\log^2(x) \log(1-x^4)}{1-x^4} (1+x)(1+x^2) \textrm{d}x-\int_0^1 \frac{\log^2(x) \log(1-x^2)}{1-x^2} (1+x) \textrm{d}x,$$ and here we observe we can exploit the Beta function, or we can use the generalization $$ \int_0^1\frac{\displaystyle \log^m(x)\log\left(\frac{1+x^2}{2}\right)}{1-x}\textrm{d}x$$ $$=(-1)^{m-1}m! \biggr((m+1) \zeta(m+2)-\frac{1}{2^{m+2}}\sum_{k=0}^m \eta(k+1) \eta(m-k+1)$$ $$-\sum_{k=0}^m \beta(k+1) \beta(m-k+1)\biggr),$$ whose derivation is shown in this answer (and also found in the sequel mentioned below, p. $38$ ). As regards the other integral, you can exploit $\displaystyle \int_{0}^{1}x^{n-1}\log(x)\log(1-x)\textrm{d}x=\frac{H_n}{n^2}+\frac{H_n^{(2)}}{n}-\frac{\pi^2}{6}\frac{1}{n}$ , which is given in a generali
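The quoted formula $\int_{0}^{1}x^{n-1}\log(x)\log(1-x)\,\textrm{d}x=\frac{H_n}{n^2}+\frac{H_n^{(2)}}{n}-\frac{\pi^2}{6}\frac{1}{n}$ can be cross-checked numerically (an editorial addition): expanding $\log(1-x)=-\sum_{k\ge1}x^k/k$ turns the integral into the series $\sum_{k\ge1}\frac{1}{k(n+k)^2}$ .

```python
import math

# Series form of the integral: sum_{k>=1} 1 / (k (n+k)^2).
def lhs_series(n, K=200000):
    return sum(1.0 / (k * (n + k) ** 2) for k in range(1, K + 1))

# Quoted closed form: H_n/n^2 + H_n^{(2)}/n - (pi^2/6)/n.
def rhs_closed_form(n):
    H = sum(1.0 / j for j in range(1, n + 1))        # harmonic number H_n
    H2 = sum(1.0 / j**2 for j in range(1, n + 1))    # H_n^{(2)}
    return H / n**2 + H2 / n - (math.pi**2 / 6.0) / n

gap = max(abs(lhs_series(n) - rhs_closed_form(n)) for n in (1, 2, 3))
```

The truncation error of the partial sum is below $\tfrac{1}{2K^2}$ , so the two sides agree to many digits.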
|
|calculus|integration|definite-integrals|closed-form|
| 0
|
Choosing exactly 3 boys and 2 girls
|
There are 5 seats vacant in a college, so more candidates are called. Suppose the next 5 on the waiting list each have an equal probability of being a boy or a girl. What is the probability that exactly 3 girls and 2 boys are admitted? The answer is given as $5/16$ , computed as $${5 \choose 3} \times \frac{1}{32}$$ But I think there is no need to consider the seats in particular. Could anyone point out the mistake I am making? Why is $5 \choose 3$ multiplied in?
|
$^5C_3$ is not referencing the seats at all. You have a fixed number of seats and an equal number of people waiting. For a better understanding, ignore the part about vacant seats completely (since an equal number of candidates are available). Your problem is now equivalent to having $5$ students and asking for the probability that $3$ of them are girls and $2$ are boys. How do you find this? Break your desired situation into parts that must happen together: you choose which $3$ of the $5$ students are girls, and the rest are necessarily boys. This happens with probability $^5C_3\times P(\text{those }3\text{ are girls})\times P(\text{the other }2\text{ are boys})$ . Why do we multiply? Because the parts must happen together (you add in an OR situation and multiply in an AND situation).
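An exact enumeration makes the count explicit (an editorial addition): of the $2^5=32$ equally likely boy/girl sequences, exactly $\binom{5}{3}=10$ contain three girls.

```python
from math import comb
from itertools import product

# Each of the 5 candidates is a boy (B) or girl (G) with probability 1/2.
outcomes = list(product("BG", repeat=5))          # 2^5 = 32 equally likely
favorable = sum(1 for w in outcomes if w.count("G") == 3)
prob = favorable / len(outcomes)                  # should equal C(5,3) / 2^5
formula = comb(5, 3) / 2**5
```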
|
|probability|combinatorics|
| 1
|
Understanding Why a Product of Cyclic Groups with Non-Coprime Orders is not Cyclic
|
Let $G_1, G_2, \ldots, G_t$ be finite cyclic groups, and define $G = G_1 \times G_2 \times \ldots \times G_t$ . Let $n_j = |G_j|$ , such that $|G| = \prod_{j=1}^{t} n_j$ . For part (a) of the problem, we are asked to suppose that $\gcd(n_i, n_j) > 1$ for some $i \neq j$ and show that $G$ is not cyclic. I understand that a group $G$ is cyclic if there exists an element $g \in G$ such that every element of $G$ can be written as $g^k$ for some integer $k$ . My attempt to solve this part is based on the hint given: Hint: Show that the order of any element is less than $n$ , where $n = \prod_{j=1}^{t} n_j$ . Here is my incomplete attempt: I started by considering an arbitrary element $(g_1, g_2, \ldots, g_t) \in G$ , where each $g_j$ is a generator of $G_j$ . Then, the order of this element would be the least common multiple of the orders of $g_j$ , which is $\operatorname{lcm}(|g_1|, |g_2|, \ldots, |g_t|)$ . Since $G_j$ are all cyclic, $|g_j| = n_j$ . However, I'm stuck on how to proceed f
|
Note that for all $(g_m)_{m=1}^{t} \in G$ , $$ \operatorname{lcm}(|g_m|) \le \operatorname{lcm}(|g_{m \ne i, j}|)\times\operatorname{lcm}(|g_i|, |g_j|) = \operatorname{lcm}(|g_{m \ne i, j}|)\frac{n_{i}n_{j}}{\gcd(n_i, n_j)} \lt n_jn_i\prod_{m \ne i,j} n_m = |G|$$ since $\gcd(n_i, n_j) > 1$ . Thus the order of every element is less than the order of the group. Alternatively, you could prove that the product $G_i \times G_j$ is non-cyclic, hence the entire product is non-cyclic.
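The smallest case illustrates the argument (an editorial addition): in $\mathbb{Z}_2\times\mathbb{Z}_4$ , with $\gcd(2,4)=2>1$ , every element has order at most $\operatorname{lcm}(2,4)=4<8=|G|$ , so no element generates the group.

```python
from math import gcd
from itertools import product

n1, n2 = 2, 4   # gcd(2, 4) = 2 > 1

def order(g):
    # Order of (g1, g2) in Z_{n1} x Z_{n2} is the lcm of the component orders.
    o1 = n1 // gcd(g[0], n1)
    o2 = n2 // gcd(g[1], n2)
    return o1 * o2 // gcd(o1, o2)

max_order = max(order(g) for g in product(range(n1), range(n2)))
group_size = n1 * n2
```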
|
|abstract-algebra|group-theory|cyclic-groups|products|
| 1
|
Is it true or false that if $P(A \cap B) = P(A)P(B)$, then $A$ and $B$ are independent events.
|
I'm helping my brother with probability theory. He just covered the definition of two events being independent, but when I saw the definition in his textbook, I was a bit confused. I'm familiar with the following notion of independence: if $A$ and $B$ are independent events, then $P(A \cap B) = P(A)P(B)$ . What I had a hard time finding out is whether the converse is true, i.e. if $P(A \cap B) = P(A)P(B)$ , then $A$ and $B$ are independent events. I swear I remember my professor in undergrad telling me it is not in general true. But my brother's textbook states it as a definition.
|
The statement is true; in fact, the product rule is usually taken as the definition of independence. For any two events, the definition of conditional probability gives $$P(A \cap B) = P(A|B) \cdot P(B)$$ Given that $P(A \cap B) = P(A) \cdot P(B)$ (and assuming $P(B) \ne 0$ ), we get $$P(A|B) = P(A)$$ This says that the probability of $A$ given that $B$ has already occurred is the same as the probability of $A$ without knowing whether $B$ occurred. In other words, $A$ and $B$ are independent events.
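A concrete example (an editorial addition, with events of my choosing): for one fair die, $A=\{\text{even}\}$ and $B=\{\le 4\}$ satisfy the product rule, and conditioning on $B$ leaves $P(A)$ unchanged.

```python
from fractions import Fraction

omega = range(1, 7)                           # one fair die
A = {w for w in omega if w % 2 == 0}          # {2, 4, 6}
B = {w for w in omega if w <= 4}              # {1, 2, 3, 4}

def P(event):
    return Fraction(len(event), 6)

product_rule = P(A & B) == P(A) * P(B)        # 1/3 == (1/2)(2/3)
conditional = P(A & B) / P(B)                 # P(A | B)
```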
|
|probability|probability-theory|independence|
| 1
|
A question about partial derivatives
|
In calculus, we are introduced to partial derivatives, for example \begin{align} \frac{\partial(x^2-y)}{\partial x}(x,y),\ \ \ \frac{\partial(x+e^y-yz)}{\partial y}(x,y,z),\ \ \ \frac{\partial(x+y)}{\partial(x+y)}(x,y)\ \ ... \end{align} But I wonder whether derivatives of the sort \begin{align} \frac{\partial(ax+by)}{\partial(cx+dy)}(x,y),\ \ \ \frac{\partial(ax+by+cz)}{\partial(dx+ey+fz)}(x,y,z) \ \ ... \end{align} exist in math. And if they are valid, what are their explicit values? Or even more generally, does something like $\displaystyle\frac{\partial f(x_1,\cdots,x_n)}{\partial g(x_1,\cdots,x_n) }$ exist in math? My guess: if they are valid, then they can be identified as $$ \frac{\partial f(x_1,\cdots,x_n)}{\partial g(x_1,\cdots,x_n) } = \frac{\displaystyle \frac{\partial f}{\partial x_i }(x_1,\cdots,x_n) }{\displaystyle \frac{\partial g}{\partial x_i }(x_1,\cdots,x_n) }$$ but I am not sure. I hope someone can help me clarify this. Thanks.
|
Sounds like you may be looking for the directional derivative $D_{\boldsymbol{v}}f(\boldsymbol{p})$ for $\boldsymbol{v} \in \mathbb{R^n}$ . This is morally a measure of how much $f$ grows or decays as the inputs of $f$ move away from $\boldsymbol{p}$ in the direction of $\boldsymbol{v}$ . Formally, given a function $f: \mathbb{R}^n \to \mathbb{R}$ we define the directional derivative of $f$ at $\boldsymbol{p}\in \mathbb{R}^n$ in the direction of $\boldsymbol{v} \in \mathbb{R}^n$ to be $$D_{\boldsymbol{v}}f(\boldsymbol{p}) := \lim_{t\to 0} \frac{f(\boldsymbol{p}+t\boldsymbol{v}) - f(\boldsymbol{p})}{t}.$$ One can show that the directional derivative is conveniently $$D_{\boldsymbol{v}}f(\boldsymbol{p}) = \nabla f(\boldsymbol{p}) \cdot \boldsymbol{v}$$
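The two formulas can be compared numerically (an editorial addition; the function $f(x,y)=x^2+3y$ , the point, and the direction are my example):

```python
# Compare the limit definition of the directional derivative with the
# gradient formula D_v f(p) = grad f(p) . v, for f(x, y) = x^2 + 3y.
def f(x, y):
    return x * x + 3.0 * y

p = (1.0, 2.0)
v = (0.6, 0.8)

# grad f = (2x, 3), so at p = (1, 2): grad f(p) . v = 2*0.6 + 3*0.8 = 3.6.
grad_dot_v = 2.0 * p[0] * v[0] + 3.0 * v[1]

# Difference quotient (f(p + t v) - f(p)) / t for small t.
t = 1e-6
quotient = (f(p[0] + t * v[0], p[1] + t * v[1]) - f(p[0], p[1])) / t
```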
|
|real-analysis|calculus|derivatives|
| 1
|
Parametrization of the rational $x$ where both $\sqrt{1+x^2}$ and $\sqrt{9+x^2}$ are rational
|
I could parametrize the rational $x$ where $\sqrt{1+x^2}$ is rational as: $y^2-x^2=1,\ y=\frac{a^2+b^2}{a^2-b^2},\ x=\frac{2ab}{a^2-b^2}$ . $\sqrt{9+x^2}$ is similar. However, I cannot find $x$ where "both" are rational. Is there any method except brute force? Thanks.
|
Here are my thoughts. This is not a complete answer; in the end one has to find rational points on a cubic curve (an elliptic curve), which I don't know much about. Setting $t = x-y$ , you obtain a rational parametrization of $y^2 - x^2 = 1$ as $$x(t) = \frac{t^2-1}{2t}, \quad y(t) = -\frac{t^2+1}{2t}, \quad t \neq 0.$$ Similarly, for $s = x-y$ you obtain a parametrization of $y^2 - x^2 = 9$ as $$x(s) = \frac{s^2-9}{2s}, \quad y(s) = - \frac{s^2+9}{2s}, \quad s \neq 0.$$ If you now set $x(t) = x(s)$ , you obtain the cubic equation $$\tag{*}\label{eq}st^2 - s = s^2 t - 9t.$$ So any rational point $(s,t) \in \mathbb Q^2$ which satisfies \eqref{eq} gives you a solution. There is an obvious solution $(s,t) = (0,0)$ , but we cannot plug this into $x(s)$ or $x(t)$ .
|
|algebraic-geometry|parametrization|
| 1
|
Show that if $x=-1$ is a solution of $x^{3}-2bx^{2}-a^{2}x+b^{2}=0$, then $1-\sqrt{2}\le b\le1+\sqrt{2}$
|
$$x^{3}-2bx^{2}-a^{2}x+b^{2}=0$$ Show that if $x=-1$ is a solution, then $1-\sqrt{2}\le b\le1+\sqrt{2}$ I subbed in the solution $x=-1$ , completed the square, and now I'm left with the equation $\left(a-1\right)^{2}+b^{2}=2$
|
You seem to have made a mistake during simplification and/or completing the square. Substituting $x = -1$ into the given equation, we get $$\begin{align*} -1 - 2b + a^2 + b^2 &= 0 \\[0.3cm] \Rightarrow \;\;\;\;\;\;a^2 + (b-1)^2 &= 2 \end{align*}$$ Since $a^2 \ge 0$ , $$\begin{align*} (b-1)^2 &\le 2 \end{align*}$$ Can you take it from here?
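As a sanity check (an editorial addition): sampling points on the circle $a^2+(b-1)^2=2$ confirms that $x=-1$ is a root of the cubic for each of them, and the extreme $b$ -values on that circle are $1\pm\sqrt{2}$ .

```python
import math

def poly_at_minus_one(a, b):
    # p(x) = x^3 - 2 b x^2 - a^2 x + b^2, evaluated at x = -1.
    x = -1.0
    return x**3 - 2 * b * x**2 - a**2 * x + b**2

# Sample (a, b) on the circle a^2 + (b - 1)^2 = 2.
worst = 0.0
for k in range(12):
    theta = k * math.pi / 6
    a = math.sqrt(2.0) * math.cos(theta)
    b = 1.0 + math.sqrt(2.0) * math.sin(theta)
    worst = max(worst, abs(poly_at_minus_one(a, b)))

b_min, b_max = 1.0 - math.sqrt(2.0), 1.0 + math.sqrt(2.0)
```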
|
|polynomials|roots|cubics|
| 1
|
How to express rotation direction mathematically?
|
In this question I tried to solve the following problem: Show that as $z$ traverses a small circle in the complex plane in the positive (counterclockwise) direction, the corresponding point $P$ on the sphere traverses a small circle in the negative (clockwise) direction with respect to someone standing at the center of the circle with body outside the sphere. I realised I have no idea how to express a rotation or a rotation direction in $\mathbb{R} ^n$ mathematically. It is easy to do in the complex plane: $a+bi + re^{i \theta}$ determines a rotation of a point moving in the complex plane with center $a+bi$ and radius $r$ ; if $\theta$ increases from $0$ to $2\pi - \varepsilon$ , the rotation is positive, otherwise it is negative. It is obvious that a rotation of a point around another point is determined by the two points, the plane that contains the rotation in $\mathbb{R} ^n$ , and the direction of rotation. This is equivalent to finding an intersection of an $n$ dimension sphere and
|
If you want something that resembles your complex number formula, you can use the fact that for a skew-symmetric matrix $A = -A^T$ , $A \in \mathbb{R}^{n\times n}$ , matrices of the form $R = \exp(A)$ are generalizations of rotations to higher dimensions called special orthogonal matrices. If you want to rotate in the $e_j, e_{j+1}$ plane, you can take $i =\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} $ and use as your rotation matrix: $$R(\theta) = \exp( 0_{j-1} \oplus i \theta \oplus 0_{n-j-1} ) = I_{j-1} \oplus \exp(i \theta) \oplus I_{n-j-1}$$ Here $0_k$ is the $k\times k$ zero matrix and $I_k$ is the $k\times k$ identity matrix. Note that we can define the complex numbers by $1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ and $i = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$ , so what's happening here is really the same algebra you already know and love. In higher dimensions we can't use just one number $\theta$ to describe all rotations, since the rotation can be happening in any
|
|algebra-precalculus|reference-request|soft-question|analytic-geometry|rotations|
| 0
|
Existence of an inequality between probabilities of two events
|
Let $A,B$ be any two events such that $P(A\cap B)>0$ and $A$ is an event which depends on some variable $x$ . Hence $P(A)$ is a function of $x$ . I am trying to check if there exists a constant $C>0$ (which is independent of $x$ ) such that $P(A\cap B)\geq CP(A)$ . My attempt: It is a weird inequality because we know $P(A\cap B)\leq P(A)$ . Also I think $P(A\cap B)>0$ implies that there exists a $C(x)$ such that $P(A\cap B)\geq C(x)P(A)$ . But I don't think there exists an absolute constant $C$ . On the other hand, both $P(A\cap B)$ and $P(A)$ depend on $x$ which gives me a feeling that $C$ need not depend on $x$ . Any suggestions?
|
I cannot see how this is possible. The problem asks for a constant $C$ (with $0 < C \le 1$ ) that is independent of $x$ . We can write: $$ C\le\frac{P(A\cap{B})}{P(A)} $$ For this to hold independently of $x$ , the numerator and denominator would have to stay proportional for every $x$ , and I cannot see how that can be the case. Consider as an example an event $A$ such that $P(A\cup{B})=1$ for every $x$ (so trying to maximise the probability of $A$ ). Since we are trying to prove the claim for every $A$ , it should apply to this example as well. But: $$ C\le\frac{P(A\cap{B})}{P(A)}\implies{C\le\frac{P(A\cap{B})}{P(A\cap{\lnot{B}})+P(A\cap{B})}}\implies{C\le\frac{P(A\cap{B})}{1-P(B)+P(A\cap{B})}} $$ Since $P(B)$ doesn't depend on $x$ , but $P(A\cap{B})$ does, the above fraction cannot have a constant value for every $x$ . Also, since $P(A\cap{B})$ can be arbitrarily small, the above fraction can be arbitrarily close to zero, so no positive $C$ works.
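A concrete family of events making the ratio arbitrarily small (an editorial addition; the construction is mine, not from the original post): on $[0,1]$ with the uniform measure, take $B=[1/2,1]$ and $A=A(x)=[0,x]$ for $x>1/2$ . Then $P(A\cap B)/P(A)=(x-1/2)/x\to 0$ as $x\to 1/2^+$ , so no constant $C>0$ can work for every $x$ .

```python
# Uniform measure on [0, 1]; B = [1/2, 1], A = [0, x] with x > 1/2.
def ratio(x):
    p_A = x             # P([0, x])
    p_AB = x - 0.5      # P([1/2, x])
    return p_AB / p_A

# The ratio shrinks toward 0 as x approaches 1/2 from above.
ratios = [ratio(0.5 + 10.0**-k) for k in range(1, 7)]
```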
|
|real-analysis|probability-theory|
| 1
|
$\dfrac{1}{a-b \cos x}$ format integral
|
Looking at solving: $$\int_{0}^{2\pi}\frac{1}{\sqrt{2}-\cos x}\,dx \\$$ According to integral tables (such as this one ) the antiderivative is: $$\frac{2}{\sqrt{a^2 - b^2}}\arctan\frac{\sqrt{a^2 - b^2}\tan(x/2) }{a+b}$$ With $a=\sqrt{2}$ , $b=-1$ , so $\sqrt{a^2-b^2}=1$ : $$\left.2\arctan\frac{\tan(x/2) }{\sqrt{2}-1}\right\rvert_0^{2\pi}$$ Evaluating this expression at $x={2\pi}$ gives zero, and at $x=0$ gives zero, so the result is zero. Except I'm missing something here, because the original function to be integrated is never less than zero; it oscillates between 0.414 and 2.414, so there must be net positive area under the curve. Another way to solve this integral is to transform it to a complex contour integral, using residues and getting a result of ${2\pi}$ : $$\int_{0}^{2\pi}\frac{1}{\sqrt{2}-\cos x}\,dx = {2\pi}\\$$ Why doesn't the table of integrals give the same result?
|
Let’s start with $$ J=\int_0^\pi \frac{dx}{\sqrt{2}-\cos x} $$ Using the substitution $x\mapsto \pi - x$ , we have $$ \begin{aligned} J & =\frac{1}{2} \int_0^\pi\left(\frac{1}{\sqrt{2}-\cos x}+\frac{1}{\sqrt{2}+\cos x}\right) d x \\ & =\sqrt{2} \int_0^\pi \frac{1}{2-\cos ^2 x} d x \\ & =2 \sqrt{2} \int_0^{\frac{\pi}{2}} \frac{1}{2-\cos ^2 x} d x \\ & =2 \sqrt{2} \int_0^{\frac{\pi}{2}} \frac{\sec ^2 x}{2 \sec ^2 x-1} d x \\ & =2 \sqrt{2} \int_0^{\infty} \frac{d t}{2 t^2+1} \text {, where } t=\tan x \\ & =2\left[\tan ^{-1}(\sqrt{2} t)\right]_0^{\infty}\\&=\pi \end{aligned} $$ Since the integrand is symmetric about $x=\pi$ , $$ \boxed{\int_0^{2 \pi} \frac{1}{\sqrt{2}-\cos x} d x=2J=2 \pi} $$
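A quick numeric cross-check of the final value (an editorial addition): the trapezoidal rule converges extremely fast for a smooth periodic integrand, and it reproduces $2\pi$ .

```python
import math

# Trapezoidal rule over one full period of 1 / (sqrt(2) - cos x).
# For a periodic integrand this reduces to an equally weighted sum.
N = 2000
h = 2 * math.pi / N
total = h * sum(1.0 / (math.sqrt(2.0) - math.cos(k * h)) for k in range(N))
```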
|
|integration|complex-analysis|
| 0
|
Limiting distribution of harmonic sample means of i.i.d. $U(0,1)$s after appropriate scaling.
|
Suppose $\{X_n\}$ is a sequence of i.i.d. random variables which follow the uniform distribution on $(0,1)$ . Denote $$M_n=\frac{X_1+\cdots+X_n}{n},\;H_n=\frac{n}{\frac{1}{X_1}+\cdots+\frac{1}{X_n}}.$$ By Strong Law of Large Numbers we know $M_n\to\frac{1}{2},H_n\to0$ almost surely. Note for $p\geq1$ , we have $$\mathbb{E}M_n^p\leq\mathbb{E}\frac{X_1^p+\cdots+X_n^p}{n}=\mathbb{E}X_1^p,$$ thus $M_n$ is bounded in $L^p$ for any $p\geq1$ . Note $H_n\leq M_n$ by Cauchy-Schwarz inequality, we know $\{H_n,M_n|n\geq1\}$ is bounded in $L^p$ . Hence we know for any $r\geq1$ , we have $$\mathbb{E}M_n^r\to2^{-r},\mathbb{E}H_n^r\to0.$$ Now I want to find a sequence $\{a_n\}$ so that $a_nH_n$ converges (in distribution) to some nonzero distribution, but I don't know how to deal with it. Any help will be appreciated.
|
According to this paper , theorem 2.5, we have: $$H_n \cdot \ln^2(n)-\ln(n) \xrightarrow[n\to +\infty]{\mathcal D} X$$ where $X$ has the density function $f_X(x) = g(-x)$ with $g$ defined by the formula $(12)$ .
|
|probability-theory|probability-distributions|probability-limit-theorems|
| 1
|
Log summation with geometric progression
|
Problem statement: Show that $\ln\left(x_{n}\right)<\frac{1}{2}$ , where $x_{n} = \left(1+\frac{1}{3}\right)\left(1+\frac{1}{3^{2}}\right)\cdots\left(1+\frac{1}{3^{n}}\right)$ . I simplified the geometric sum $\sum_{k=1}^{n}\frac{1}{3^{k}}$ to $\frac{1-\left(\frac{1}{3}\right)^{n}}{2}$ , then multiplied by 2, and I'm not sure what to do now, after various attempts.
|
Since $\ln\bigl((1+\frac{1}{3})(1+\frac{1}{3^2})\cdots(1+\frac{1}{3^n})\bigr)=\sum_{k=1}^{n}\ln(1+\frac{1}{3^k})$ , try to prove that for every $k\in\{1,2,\dots,n\}$ the following inequality holds: \begin{equation} \ln\bigl(1+\frac{1}{3^k}\bigr) < \frac{1}{3^k}, \end{equation} and then sum the geometric series: $\sum_{k=1}^{n}\frac{1}{3^k}=\frac{1-(1/3)^n}{2}<\frac{1}{2}$ .
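Both inequalities check out numerically (an editorial addition): $\ln(x_n)$ stays below the geometric bound $\frac{1-3^{-n}}{2}$ , which itself stays below $\frac12$ , for each tested $n$ .

```python
import math

def log_xn(n):
    # ln(x_n) = sum_{k=1}^{n} ln(1 + 3^{-k}); log1p is accurate for small terms.
    return sum(math.log1p(3.0**-k) for k in range(1, n + 1))

def geometric_bound(n):
    # sum_{k=1}^{n} 3^{-k} = (1 - 3^{-n}) / 2
    return (1.0 - 3.0**-n) / 2.0

checks = [(log_xn(n), geometric_bound(n)) for n in (1, 2, 5, 10, 30)]
```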
|
|inequality|proof-writing|
| 0
|
The relation between the eigenvalues of a Hermitian matrix and the block matrix composed of its real and imaginary parts
|
Recently I have been reading a paper . In their "Proof of Lemma 1" on page 24, they have: $$\lambda_+(\mathbf{Q})=2\lambda_+(\tilde{\mathbf{Q}})$$ where $\mathbf{Q}$ is a Hermitian matrix, $\tilde{\mathbf{Q}}=\frac{1}{2}\left[\begin{array}{cc} \operatorname{Re}\{\mathbf{Q}\} & -\operatorname{Im}\{\mathbf{Q}\} \\ \operatorname{Im}\{\mathbf{Q}\} & \operatorname{Re}\{\mathbf{Q}\} \end{array}\right]$ is a real symmetric matrix, $\lambda_+(\mathbf{Q}) = \max\{\lambda_{\max}(-\mathbf{Q}), 0\}$ , and $\lambda_{\max}$ denotes the maximum eigenvalue. I haven't figured out why that is.
|
First of all, it's well known (from the min-max theorem for example) that for a Hermitian matrix $H$ the maximum eigenvalue is given by $\sup\{\mathbf{z}^* H \mathbf{z}: \mathbf{z}^*\mathbf{z}=1\}$ and, similarly, for a real symmetric matrix $A$ , the maximum eigenvalue is given by $\sup\{\mathbf{x}^T A \mathbf{x}: \|\mathbf{x}\|_2=1\}$ . Now for any $\mathbf{z}=\mathbf{u}+i\mathbf{v}$ you can see by writing $Q = \Re Q + i \Im Q$ and noticing that $\Re Q = \Re Q^T$ and $\Im Q = -\Im Q^T$ that $$\mathbf{z}^* Q \mathbf{z} = 2\cdot [\mathbf{u}^T, \mathbf{v}^T] \;\tilde{Q} \begin{bmatrix}\mathbf{u}\\\mathbf{v}\end{bmatrix}$$ Finally, notice that $\mathbf{z}^*\mathbf{z}=1$ if and only if $\left\|\begin{bmatrix}\mathbf{u}\\\mathbf{v}\end{bmatrix}\right\|_2 = 1$ and so by taking supremum in the above equality we get that the maximum eigenvalues of $Q$ and $2\tilde{Q}$ are the same. I'll leave it to you to fill out the details.
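The key identity $\mathbf{z}^* \mathbf{Q} \mathbf{z} = 2\,[\mathbf{u}^T,\mathbf{v}^T]\,\tilde{\mathbf{Q}}\,[\mathbf{u};\mathbf{v}]$ can be checked numerically; below is a plain-Python sketch with one arbitrary $2\times2$ Hermitian matrix (my own example, not from the paper):

```python
import random

random.seed(1)

Q = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]           # an arbitrary Hermitian example
n = 2

# Qtilde = (1/2) [[Re Q, -Im Q], [Im Q, Re Q]], a real 2n x 2n matrix
Qt = [[0.0] * (2 * n) for _ in range(2 * n)]
for i in range(n):
    for j in range(n):
        a, b = Q[i][j].real, Q[i][j].imag
        Qt[i][j], Qt[i][j + n] = a / 2, -b / 2
        Qt[i + n][j], Qt[i + n][j + n] = b / 2, a / 2

max_im = max_err = 0.0
for _ in range(100):
    u = [random.gauss(0, 1) for _ in range(n)]
    v = [random.gauss(0, 1) for _ in range(n)]
    z = [complex(u[i], v[i]) for i in range(n)]
    lhs = sum(z[i].conjugate() * Q[i][j] * z[j]
              for i in range(n) for j in range(n))
    w = u + v                    # the stacked real vector [u; v]
    rhs = sum(w[i] * Qt[i][j] * w[j]
              for i in range(2 * n) for j in range(2 * n))
    max_im = max(max_im, abs(lhs.imag))      # z* Q z is real for Hermitian Q
    max_err = max(max_err, abs(lhs.real - 2 * rhs))
print(max_im, max_err)
```

Taking suprema over unit vectors on both sides of the verified identity then gives $\lambda_{\max}(\mathbf{Q})=2\lambda_{\max}(\tilde{\mathbf{Q}})$.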
|
|matrices|matrix-decomposition|hermitian-matrices|skew-symmetric-matrices|
| 0
|
A set of lines, each of which intersect the others, are either coplanar or share a single common point.
|
Prove that a set of lines, each of which intersect the others, are either coplanar or share a single common point. Source: Hadamard This is a surprisingly difficult problem, because it requires ping-ponging back and forth between two dissimilar concepts: lines being coplanar and lines sharing a common point . I could not find a way to unify these two concepts, so instead developed the proof below, to which I request verification and feedback. Is the proof correct? Is it well written (I tried to provide sufficient detail while being succinct)? Is there an alternate or simpler solution? Proof: Let $S$ be a set of lines all of which share a common point $A$ . If line $\ell$ does not include point $A$ , but still intersects every line in $S$ , then $S \cup \{\ell\}$ must be coplanar. Indeed, there is a unique plane $P$ containing $\ell$ and $A$ , and since every line in $S$ includes two points on $P$ (the line's point of intersection with $\ell$ and the point $A$ ), every line in $S$ lies
|
First notice that "either or" is false, it should be an inclusive or: a set $S$ of lines which share a single common point can be coplanar (and even must be, if $|S|=2$ ). Your proof looks fine now, but here is a variant without induction: Let $S$ be your set of (at least two) lines, each of which intersects the others. Let $\ell_1,\ell_2$ be two of them, sharing a point $A$ and lying in a plane $P$ . Assume that not all lines of $S$ contain $A$ , and let us prove that all lines of $S$ lie in $P$ . Let $\ell_3$ be a line in $S$ which does not contain $A$ . Then, the three lines $\ell_1,\ell_2,\ell_3$ don't share a common point. In particular, $\ell_3$ lies in $P$ since it contains two distinct points of $P$ : one in $\ell_1$ and one in $\ell_2$ . Therefore, for every $\ell\in S$ , $\ell$ meets their union $\ell_1\cup\ell_2\cup\ell_3$ in at least two distinct points. Again, since these points lie in $P$ , so does $\ell$ .
|
|geometry|solution-verification|induction|euclidean-geometry|solid-geometry|
| 1
|
Weak convergence of Hilbert-space valued stochastic process
|
Suppose I have a sequence of stochastic processes $(X_n(t))_{t\in [0,1]}$ such that $X_n(t) \in H$ , where $H$ is some separable Hilbert space and $X_n \in C([0,1], H)$ . Goal. I want to show that $X_n$ converges weakly to a stochastic process $X$ , i.e. $\mathrm{Law}(X_n) \rightharpoonup \mathrm{Law}(X)$ as probability measures on $C([0,1],H)$ . Question. Suppose I know that for any dual elements $\phi_1,\ldots,\phi_k \in H'$ , where $k$ is finite but arbitrary, the sequence of processes $(\langle{\phi_1,X_n(t)}\rangle,\ldots,\langle{\phi_k,X_n(t)}\rangle)_{t\in [0,1]} \in C([0,1], \mathbb{R}^k)$ converges weakly to $(\langle{\phi_1,X(t)}\rangle,\ldots,\langle{\phi_k,X(t)}\rangle)_{t\in [0,1]}$ . Does this imply that $X_n$ weakly converges to $X$ ? If it makes a difference, the desired limit $X$ is a Gaussian process. I suspect the answer is yes by some sort of approximation argument, but I couldn't find a reference in the usual suspects as far as texts on infinite-dimensional stochastic analysis go.
|
Here is a (very degenerate) counterexample. Let $(e_n)_{n\in\mathbb N}$ be a Hilbert basis of $H$ . As is well-known, the sequence $(e_n)_{n\in\mathbb N}$ converges to the zero vector of $H$ for the weak topology $\sigma(H,H')$ but not for the strong topology $\beta(H,H')$ defined by the norm $\lVert\cdot\rVert_H$ of $H$ , because $\lVert e_n\rVert_H=1$ for every $n\in\mathbb N$ . Such a sequence makes it possible to obtain a counterexample using only constant processes. For every $n\in\mathbb N$ and every $t\in[0,1]$ , let $X_n(t)$ be everywhere equal to $e_n$ and let $X(t)$ be everywhere equal to the zero vector of $H$ . Every path of every $X_n$ and every path of $X$ is continuous (because constant), and $X$ is Gaussian if you accept that Dirac measures are Gaussian measures with variance zero. From the weak convergence of $(e_n)_{n\in\mathbb N}$ to the zero vector of $H$ , it is easy to see that for every $\phi_1,\dots,\phi_k\in H$ , the law of $(\langle\phi_1,X_n\rangle,\dots,\langle\phi_k,X_n\rangle)$ converges to the law of $(\langle\phi_1,X\rangle,\dots,\langle\phi_k,X\rangle)$ , while $X_n$ does not converge in law to $X$ as $C([0,1],H)$ -valued random variables, since $\lVert X_n(t)-X(t)\rVert_H=1$ for all $n$ and $t$ .
|
|functional-analysis|probability-theory|stochastic-processes|stochastic-analysis|probability-limit-theorems|
| 0
|
Solving a second-order differential equation where one solution is known.
|
Problem: Given that $y = e^{2x}$ is a solution of $$ (2x+1)y'' - 4(x+1)y' + 4y = 0 $$ find a linearly independent solution by reducing the order. Write the general solution. Answer: Let $y = e^{2x}v$ . We have: \begin{align*} \dfrac{dy}{dx} &= e^{2x} \dfrac{dv}{dx} + 2 e^{2x} v \\ \dfrac{d^2y}{dx^2} &= e^{2x} \dfrac{d^2v}{dx^2} + 2 e^{2x} \dfrac{dv}{dx} + 2 e^{2x} \dfrac{dv}{dx} + 4 e^{2x} v = e^{2x} \dfrac{d^2v}{dx^2} + 4 e^{2x} \dfrac{dv}{dx} + 4 e^{2x} v \end{align*} Now we substitute into the original equation: \begin{align*} (2x+1) \left( e^{2x} \dfrac{d^2v}{dx^2} + 4 e^{2x} \dfrac{dv}{dx} + 4 e^{2x} v\right) -4(x+1)\left( e^{2x} \dfrac{dv}{dx} + 2 e^{2x} v \right) + 4 e^{2x}v &= 0 \\ (2x+1) \left( \dfrac{d^2v}{dx^2} + 4 \dfrac{dv}{dx} + 4 v\right) -4(x+1)\left( \dfrac{dv}{dx} + 2 v \right) + 4 v &= 0 \\ (2x+1) \left( \dfrac{d^2v}{dx^2} + 4 \dfrac{dv}{dx} \right) -4(x+1) \dfrac{dv}{dx} + (2x+1)(4v) - 4(x+1)(2v) + 4 v &= 0 \\ (2x+1) \dfrac{d^2v}{dx^2} + 4x \dfrac{dv}{dx} &= 0 \end{align*}
|
As an extended comment: the reduction of order method follows this procedure. Given a linear differential operator $\mathcal{D}_1[\cdot]$ and $y(x)$ such that $\mathcal{D}_1[y(x)]=0$ , we form $$ \mathcal{D}_1[v(x) y(x)]-v(x)\mathcal{D}_1[y(x)]=\mathcal{D}_2[v(x)] $$ noting that $\mathcal{D}_1[y(x)]=0$ . Here $\mathcal{D}_2[v(x)]$ is an ODE for $v(x)$ that involves only $v'$ and $v''$ , so its order can be reduced relative to $\mathcal{D}_1$ . In our case $$ \mathcal{D}_1[\cdot]=\left((2x+1)\frac{d^2}{dx^2}-4(x+1)\frac{d}{dx}+4\right)[\cdot] $$ and forming $$ \mathcal{D}_1[v(x) y(x)]-v(x)\mathcal{D}_1[y(x)] $$ we have $$ (2x+1)(y'' v+2v' y' + y v'')-4(x+1)(y' v+v' y)+4 y v = (2x+1)(2v' y'+y v'')-4(x+1)(v' y)=y(2x+1)v''+(2y'(2x+1)-4y(x+1))v'=0 $$ now making $y = e^{2x}$ we have $$ (2x+1)v''+4x v'=0 $$
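Solving the reduced equation $(2x+1)v''+4xv'=0$ gives $v'=C(2x+1)e^{-2x}$ and, for one choice of constants, $v=-(x+1)e^{-2x}$, hence the second solution $y_2 = x+1$ (a derivation sketch, not stated in the answer above). A quick numerical residual check in Python:

```python
import math

def residual(y, dy, d2y, x):
    """Left-hand side of (2x+1)y'' - 4(x+1)y' + 4y at x."""
    return (2 * x + 1) * d2y - 4 * (x + 1) * dy + 4 * y

max_res = 0.0
for x in [-2.0, -0.3, 0.0, 0.7, 1.5, 3.0]:
    e = math.exp(2 * x)
    r1 = residual(e, 2 * e, 4 * e, x)          # y1 = e^{2x}
    r2 = residual(x + 1, 1.0, 0.0, x)          # y2 = x + 1
    max_res = max(max_res, abs(r1), abs(r2))
print(max_res)
```

So the general solution is $y = C_1 e^{2x} + C_2 (x+1)$.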
|
|ordinary-differential-equations|
| 0
|
The relation between the eigenvalues of a Hermitian matrix and the block matrix composed of its real and imaginary parts
|
Recently I have been reading a paper . In their "Proof of Lemma 1" on page 24, they have: $$\lambda_+(\mathbf{Q})=2\lambda_+(\tilde{\mathbf{Q}})$$ where $\mathbf{Q}$ is a Hermitian matrix, $\tilde{\mathbf{Q}}=\frac{1}{2}\left[\begin{array}{cc} \operatorname{Re}\{\mathbf{Q}\} & -\operatorname{Im}\{\mathbf{Q}\} \\ \operatorname{Im}\{\mathbf{Q}\} & \operatorname{Re}\{\mathbf{Q}\} \end{array}\right]$ is a real symmetric matrix, $\lambda_+(\mathbf{Q}) = \max\{\lambda_{\max}(-\mathbf{Q}),0\}$ , and $\lambda_{\max}$ denotes the maximum eigenvalue. I haven't figured out why that is.
|
You can represent the matrix entries of a complex matrix as $2\times 2$ blocks with real components, where $a+b i$ then becomes the block: $$ \begin{pmatrix} a & -b \\ b & a \end{pmatrix} $$ By doing so you get a real $2n \times 2n$ matrix. And then you change the ordering of the basis vectors $e_m$ by: $$ e'_m = e_{2m-1} \ \ \ \mbox{for} \ \ m=1,\dots,n $$ $$ e'_{n+m} = e_{2m} \ \ \ \mbox{for} \ \ m=1,\dots,n $$ to get the block form of the matrix $\tilde{Q}$ . Before this change of basis it should be obvious that the representation of $Q$ in its eigenvector basis has a block-diagonal matrix equivalent for the $2n \times 2n$ matrix and, subsequently, the $2\times 2$ blocks can be diagonalized to the same eigenvalues that $Q$ has plus their conjugates. After the change of basis the eigenvalues are still the same. (And the factor $1/2$ is just a choice of the authors, of course.)
|
|matrices|matrix-decomposition|hermitian-matrices|skew-symmetric-matrices|
| 0
|
proving that a polynomial has at least two roots
|
I have the following question in a real analysis course: let $p(x)=x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}$ . Prove that if $x_{0}$ is a root of $p$ , meaning $p(x_{0})=0$ , and $p'(x_{0})\neq0$ , then $p$ has at least two roots. Approach: I tried several things, like first looking at $p(-x_{0})$ and denoting $p(-x_{0})=\frac{m}{2}$ , so $\displaystyle\frac{m}{2}=x_{0}^{4}-a_{3}x_{0}^{3}+a_{2}x_{0}^{2}-a_{1}x_{0}+a_{0}$ , and then we can use the fact that $p(x_{0})=0$ and therefore $p(x_{0})-p(-x_{0})=-\frac{m}{2}$ . It doesn't get me much further, though. I assume it has something to do with the derivative because they mentioned it. Any clues? Thank you all in advance.
|
Let $p(x)$ be a polynomial of exact even degree $n \ge 2$ . Assume that the highest power has a coefficient of $1$ ; otherwise we divide by the leading coefficient. Now if we calculate the limit of $p(x)$ for $x \to +\infty$ and $x \to -\infty$ , both are $+\infty$ , so we have two different points with $p(x) > 0$ . Your condition gives you $t$ with $p(t) = 0$ and $p'(t) \neq 0$ , so $p$ changes sign at $t$ : there is $t'$ near $t$ with $p(t') < 0$ , on one side or the other of $t$ . Either way the polynomial has both positive and negative values in an interval that doesn't include $t$ , and because polynomials are continuous there must be a zero at some point other than $t$ . You actually don't need polynomials and convergence towards infinity; all you need is that $p(x)$ is continuous everywhere, and that there are arbitrarily large and arbitrarily small $x$ both with $p(x) > 0$ , together with some point where $p(x) < 0$ . For odd degrees it doesn't work; for example for $p(x) = x$ , or $p(x) = x^3 + x$ , or $p(x) = x^{999} + x$ .
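A concrete illustration (the quartic below is my own example, not from the question): $p(x)=x^4-5x^2+4$ has the simple root $x_0=1$, and the sign change near it forces a second root, which bisection locates:

```python
def p(x):
    return x**4 - 5 * x**2 + 4      # roots at ±1, ±2 (example quartic)

def dp(x):
    return 4 * x**3 - 10 * x

# x0 = 1 is a simple root: p changes sign there, while p -> +inf at ±inf,
# so the IVT forces another sign change, hence another root.
assert p(1) == 0 and dp(1) != 0
a, b = 1.5, 3.0                      # p(a) < 0 < p(b), interval avoiding x0
for _ in range(60):                  # bisection
    m = (a + b) / 2
    if p(m) <= 0:
        a = m
    else:
        b = m
root = (a + b) / 2
print(root)
```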
|
|real-analysis|polynomials|infinitesimals|
| 0
|
How to find the time period of the sum of two waves with different frequency
|
I'm dealing with a few sine waves that represent music. An example is the following: $f(x)=\sin\left(\frac{1.47x}{440}\right)+\sin(x)$ . How would I represent the time period of this superposed wave? Also, if the frequency is possible to find (to my knowledge it isn't), can it be represented with a formula?
|
$f(t) = \sin( a t ) + \sin (b t) $ We want to find the period $T$ (if it exists) such that $ f(t + T) = f(t) $ This implies, after some algebraic manipulations, that $ \sin(a t) \cos(aT) + \cos(at) \sin(aT) + \sin(bt) \cos(bT) + \cos(bt) \sin(bT) = \sin(at) + \sin(bt) $ hence, $ \sin(at ) ( \cos(a T) - 1) + \sin(bt) ( \cos(b T) - 1) + \cos(at) \sin(aT) + \cos(bt) \sin(b T) = 0 $ From the coefficients of $\cos(at)$ and $\cos(bt)$ , this implies that $ a T = k \pi $ and $ b T = m \pi $ for some integers $k$ and $ m$ . Since we also want $\cos(aT) = 1$ and $\cos(bT) = 1 $ , $k $ and $m$ must be even integers. Hence, we want to find even $k$ and $m$ such that $ T = \dfrac{k \pi }{a} = \dfrac{m \pi}{b} $ The minimum such $T$ comes from the smallest such pair $(k,m)$ . So for your example function $ f(x) = \sin(\dfrac{1.47}{440} x) + \sin(x) $ we have $a = \dfrac{147}{44000} , b = 1 $ , and we want the least even $k$ and $m$ with $\dfrac{44000\,k}{147} = m$ . Since $\gcd(147, 44000) = 1$ , the smallest choice is $k = 294$ , $m = 88000$ , giving the period $T = 88000\pi$ .
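The search for the smallest admissible period can be done with exact rational arithmetic; here is a sketch for the example $a=147/44000$, $b=1$, following the criterion above:

```python
from fractions import Fraction
from math import lcm

a = Fraction(147, 44000)          # 1.47/440 written exactly
b = Fraction(1)

# sin(a t) + sin(b t) has period T = 2*pi*j, where j is the smallest
# positive integer making a*j and b*j both integers:
j = lcm(a.denominator, b.denominator)
T_over_pi = 2 * j                 # the period is T_over_pi * pi
print(T_over_pi)
```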
|
|functions|wave-equation|
| 0
|
$As + At = A$ $\implies$ $as^k+bt^k=1$
|
So I am studying this proof of Quillen's theorem (page 27). There they say that for a commutative ring $A$ we are given $As+At=A$ , so we get $as+bt=1$ for some $a,b\in A$ . But in the proof they use, for $k\in \mathbb{N}$ , that $as^k+bt^k=1$ for some $a,b\in A$ . Can anyone point out what I am missing?
|
Andrew's comment is a nice proof. Here is another one. More generally, if $R$ is a commutative ring with $1$ , and $A, B$ are ideals s.t. $A+B=R$ , then $A^m+B^n=R$ for all $m,n\in\mathbb{Z}_+$ . (Then the original question follows by applying this conclusion to $As$ , $At$ , $m=n=k$ .) To see this, we first prove $A+B^n=R$ for all $n\in\mathbb{Z}_+$ . This is because \begin{equation*} R=R^n=(A+B)^n\subseteqq A+B^n. \end{equation*} (To see why $\subseteqq$ holds, note that an element in $(A+B)^n$ must be a finite sum of terms like $(a_1+b_1)\cdots(a_n+b_n)\in b_1b_2\cdots b_n+A\subseteq B^n+A$ .) Since $A+B^n=R$ provided $A+B=R$ , the substitution $A\leftarrow B^n$ , $B\leftarrow A$ , $n\leftarrow m$ implies $A^m+B^n=R$ .
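The binomial-expansion proof (presumably what Andrew's comment suggests) can be made completely explicit: expanding $1=(as+bt)^{2k-1}$, every term contains $s^k$ or $t^k$, so grouping gives the new coefficients. A Python sketch with a toy Bézout pair:

```python
from math import comb

def lift(a, s, b, t, k):
    """Given a*s + b*t == 1, return (A, B) with A*s**k + B*t**k == 1,
    by expanding 1 == (a*s + b*t)**(2*k - 1) and grouping:
    terms with i >= k contain s**k, the rest contain t**k."""
    A = sum(comb(2*k - 1, i) * a**i * s**(i - k) * b**(2*k-1-i) * t**(2*k-1-i)
            for i in range(k, 2 * k))
    B = sum(comb(2*k - 1, i) * a**i * s**i * b**(2*k-1-i) * t**(k-1-i)
            for i in range(0, k))
    return A, B

a, s, b, t = 2, 2, -1, 3          # 2*2 + (-1)*3 == 1
for k in range(1, 6):
    A, B = lift(a, s, b, t, k)
    assert A * s**k + B * t**k == 1
print(A, B)
```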
|
|abstract-algebra|ring-theory|modules|projective-module|
| 1
|
Convergence in $L_p$ norm of minimum of uniform random variables, $Y_1=\min\left\{X_1,\ldots,X_n\right\}$ with $X_i\sim U(0,1)$
|
${X_i}$ are i.i.d with $X_i\sim U\left(0,1\right)$ . Prove that $Y_1=\min\left\{X_1,X_2,\ldots,X_n\right\}$ converges to $Y=0$ in expectation of order $p\ge1$ . My try and where I got stuck: $$\lim _{n\to \infty }\left(E\left[\left|Y_1-Y\right|^p\right]\right)=\lim _{n\to \infty }\left(E\left[\left|Y_1\right|^p\right]\right)=\lim _{n\to \infty }\left(E\left[\left|\min\left\{X_1,X_2,\ldots,X_n\right\}^p\right|\right]\right)\\ \le \lim _{n\to \infty }\left(E\left[\left|X_i\right|^p\right]\right)\:\forall i\in \left\{1,2,\ldots,n\right\}=\lim _{n\to \infty }\left(\int _0^1\left|X_i\right|^p\:dx\right)\ne 0$$ That is my problem, I don't know how to receive $0$ here
|
Your first inequality is too crude to give the result. You will need a better estimate of $E\left[\min\left\{X_1,X_2,\ldots,X_n\right\}^p\right]$ .
|
|probability|expected-value|uniform-convergence|uniform-distribution|
| 1
|
A question about partial derivatives
|
In calculus, we are introduced to partial derivatives, for example \begin{align} \frac{\partial(x^2-y)}{\partial x}(x,y),\ \ \ \frac{\partial(x+e^y-yz)}{\partial y}(x,y,z),\ \ \ \frac{\partial(x+y)}{\partial(x+y)}(x,y)\ \ ... \end{align} But I wonder whether derivatives of the sort \begin{align} \frac{\partial(ax+by)}{\partial(cx+dy)}(x,y),\ \ \ \frac{\partial(ax+by+cz)}{\partial(dx+ey+fz)}(x,y,z) \ \ ... \end{align} exist in mathematics. And if they are valid, what are their explicit values? Or even more generally, does something like $\displaystyle\frac{\partial f(x_1,\cdots,x_n)}{\partial g(x_1,\cdots,x_n) }$ exist in mathematics? My guess is that, if they are valid, they can be identified as $$ \frac{\partial f(x_1,\cdots,x_n)}{\partial g(x_1,\cdots,x_n) } = \frac{\displaystyle \frac{\partial f}{\partial x_i }(x_1,\cdots,x_n) }{\displaystyle \frac{\partial g}{\partial x_i }(x_1,\cdots,x_n) }$$ but I am not sure. I hope someone will help me to clarify this. Thanks.
|
When we write $\frac{\partial f(x,y)}{\partial x}$ there is a hidden convention that this means the derivative keeping $y$ constant. But if we wrote $\frac{\partial f(x,y)}{\partial (ax+by)}$ we would have a problem: there is no natural choice of what to keep constant. To make things clear we have to explicitly change the pair of variables: for example $u:=ax+by, v:=23y^2+\cos xy$ : then if we write $\frac{\partial f}{\partial u}$ the implication is that $v$ is being kept constant.
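A small numerical illustration of the convention (my own example): take $f(x,y)=x^2-y$ with $u=x+y$, $v=x-y$; then $\partial f/\partial u$ holding $v$ constant works out to $x-\tfrac12$, which a finite difference in $u$ (with $v$ fixed) reproduces:

```python
# f(x, y) = x**2 - y with new variables u = x + y, v = x - y,
# so x = (u + v)/2, y = (u - v)/2; "df/du holding v" is then well defined.
def f(x, y):
    return x**2 - y

def df_du(x0, y0, h=1e-6):
    """Central finite difference in u with v held constant."""
    u0, v0 = x0 + y0, x0 - y0
    def via_uv(u, v):
        return f((u + v) / 2, (u - v) / 2)
    return (via_uv(u0 + h, v0) - via_uv(u0 - h, v0)) / (2 * h)

x0, y0 = 1.3, -0.4
approx = df_du(x0, y0)
exact = x0 - 0.5        # analytic: d/du [((u+v)/2)^2 - (u-v)/2] = x - 1/2
print(approx, exact)
```

Holding a different second variable constant would give a different answer, which is exactly the point of the answer above.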
|
|real-analysis|calculus|derivatives|
| 0
|
In a ring $1+\epsilon+\cdots+\epsilon^{p-1}=0$ implies $1+\epsilon^c+\epsilon^{2c}+\cdots+\epsilon^{(p-1)c}=0$ for $0<c<p$.
|
I am a bit unsure about my following "proof", in particular the rationality of "taking projection". Problem : $R$ is a ring with identity, not necessarily commutative . If $\epsilon\in R$ satisfies the equation $1+\epsilon+\cdots+\epsilon^{p-1}=0$ , where $p$ is a prime number, show that for all integers $0<c<p$ , we have $1+\epsilon^c+\epsilon^{2c}+\cdots+\epsilon^{(p-1)c}=0$ . Proof : By considering roots in $\mathbb{C}[X]$ , there exists $\phi(X)\in\mathbb{Q}[X]$ s.t. \begin{equation*} 1+X^c+X^{2c}+\cdots+X^{(p-1)c}= (1+X+\cdots+X^{p-1})\phi(X). \end{equation*} Moreover this $\phi(X)$ is actually in $\mathbb{Z}[X]$ since $1+X+\cdots+X^{p-1}$ is monic. Now we take the projection $\pi: \mathbb{Z}\to R$ , $1\mapsto 1_R$ in the above equation and obtain an identity in $R[X]$ . Plugging in $X=\epsilon$ implies $1+\epsilon^c+\epsilon^{2c}+\cdots+\epsilon^{(p-1)c}=0$ . $\square$ My confusion is that taking the projection $\pi: \mathbb{Z}\to R$ seems a bit casual. Is this proof correct?
|
The key is that the projection induces a homomorphism of the polynomial rings. In general, if $\psi:A\to B$ is a ring homomorphism, then it induces a homomorphism $\bar\psi:A[X]\to B[X]$ given by $$ \bar\psi \left(\sum_{i=0}^n a_i x^i\right) := \sum_{i=0}^n \psi(a_i) x^i. $$ Thus, any polynomial identity in $A[X]$ can be carried to a polynomial identity in $B[X]$ . Maybe a down-to-earth example would make it more clear. In $\Bbb Z[X]$ , we have $(3X + 2)^2 = 9X^2 +12X+4$ . Using the "mod 4" homomorphism $\Bbb Z \to \Bbb Z/4\Bbb Z$ , we get $$(\bar3X + \bar2)^2 = \bar9X^2 +\bar{12}X+\bar4 = X^2$$ in $(\Bbb Z/4\Bbb Z)[X]$ . So in your question, $\sum X^{kc}$ and $\sum X^{k}$ in $\Bbb Z[X]$ maps to $\sum 1_RX^{kc}$ and $\sum 1_RX^{k}$ in $R[X]$ . $\phi$ maps to some polynomial $\phi'$ in $R[X]$ (we don't care what it is). The equation $\sum X^{kc} = \left(\sum X^{k}\right)\phi'(X)$ still holds in $R[X]$ because the operation is homomorphic.
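The down-to-earth example can be replayed mechanically: multiply in $\Bbb Z[X]$, then reduce each coefficient mod 4 (plain Python, coefficient lists with lowest degree first):

```python
def polymul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

p = [2, 3]                       # 3X + 2
sq = polymul(p, p)               # (3X + 2)^2 = 4 + 12X + 9X^2
sq_mod4 = [c % 4 for c in sq]    # its image in (Z/4Z)[X]
print(sq, sq_mod4)
```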
|
|abstract-algebra|ring-theory|noncommutative-algebra|
| 1
|
How to calculate: $ \int_{-7 \pi}^{2 \pi} \frac{1}{2 \sin(x) - \cos(x) + 5} dx$
|
How to calculate the value of this Riemann integral? $$ \int_{-7 \pi}^{2 \pi} \frac{1}{2 \sin(x) - \cos(x) + 5} dx $$ I used the universal trigonometric substitution and found the following antiderivative of the integrand function: $$ \int\frac{1}{2 \sin(x) - \cos(x) + 5} dx = \frac{\arctan \left( \frac{3 \tan \left( \frac{x}{2} \right) }{\sqrt{5}} + 1 \right) }{\sqrt{5}} + const $$ We cannot simply use the Newton-Leibniz formula by substituting $- 7\pi$ and $2 \pi$ , because $\tan \left(-\frac{7\pi}{2} \right)$ is not defined. I don't quite understand what needs to be done in this case. However, I can add that the universal trigonometric substitution works for $x \in (-\pi + 2\pi k, \pi + 2\pi k), k \in \mathbb{Z}$ , and the interval of integration of the original integral is not contained in any single one of these intervals, which leaves me at a dead end. I would be grateful for any hints and solutions to this task!
|
Actually, it should be $$\frac1{\sqrt5}\arctan\left(\frac{3\tan\left(\frac x2\right)+1}{\sqrt5}\right).$$ Anyway, since the function that you want to integrate is periodic with period $2\pi$ , your integral is equal to $$4\int_{-\pi}^\pi\frac1{2\sin(x)-\cos(x)+5}\,\mathrm dx+\int_\pi^{2\pi}\frac1{2\sin(x)-\cos(x)+5}\,\mathrm dx.\label{a}\tag1$$ Now, since \begin{align}\int_{-\pi}^\pi\frac1{2\sin(x)-\cos(x)+5}\,\mathrm dx&=\left[\frac1{\sqrt5}\arctan\left(\frac{3\tan\left(\frac x2\right)+1}{\sqrt5}\right)\right]_{x=-\pi}^{x=\pi}\\&=\frac1{\sqrt5}\left(\frac\pi2-\left(-\frac\pi2\right)\right)\\&=\frac\pi{\sqrt5}\end{align} and \begin{align}\int_\pi^{2\pi}\frac1{2\sin(x)-\cos(x)+5}\,\mathrm dx&=\left[\frac1{\sqrt5}\arctan\left(\frac{3\tan\left(\frac x2\right)+1}{\sqrt5}\right)\right]_{x=\pi}^{x=2\pi}\\&=\frac1{\sqrt5}\left(\arctan\left(\frac1{\sqrt5}\right)-\left(-\frac\pi2\right)\right)\\&=\frac{2\arctan\left(\frac1{\sqrt5}\right)+\pi}{2\sqrt5},\end{align} the integral \eqref{a} is equal to $$\frac{9\pi+2\arctan\left(\frac1{\sqrt5}\right)}{2\sqrt5}.$$
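A numerical cross-check of the value $(9\pi+2\arctan(1/\sqrt5))/(2\sqrt5)$ with a composite Simpson rule (Python, illustrative):

```python
import math

def f(x):
    return 1.0 / (2 * math.sin(x) - math.cos(x) + 5)

# Composite Simpson's rule on [-7*pi, 2*pi]
a, b, N = -7 * math.pi, 2 * math.pi, 90_000      # N must be even
h = (b - a) / N
s = f(a) + f(b)
for i in range(1, N):
    s += (4 if i % 2 else 2) * f(a + i * h)
numeric = s * h / 3

closed = (9 * math.pi + 2 * math.atan(1 / math.sqrt(5))) / (2 * math.sqrt(5))
print(numeric, closed)
```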
|
|calculus|integration|riemann-integration|trigonometric-integrals|
| 1
|
can the period of an automorphism of an abelian group be unbounded?
|
Suppose we have a countable abelian group $A$ . Let $\phi$ be an automorphism of $A$ . Let $P = \{ a \in A \mid \exists n \in \mathbb{N} \text{ such that } \phi^{n} (a) =a \}$ be the set of periodic points of $\phi$ in $A$ . For $a \in P$ , let $m_a$ be the smallest positive integer such that $\phi^{m_a} (a) =a$ . Question: Is it possible that $\{m_a \mid a \in P\}$ is unbounded? Thoughts: If $A$ is finitely generated abelian, then by the fundamental theorem of finitely generated abelian groups the period must be bounded. I was wondering if this also holds for countable abelian groups.
|
Take the direct sum of $\mathbb{Z}/p\mathbb{Z}$ over all primes $p$ , and then take the automorphism that acts coordinate-wise as an automorphism of order $p-1$ on each summand (i.e. multiplication by a generator of $(\mathbb{Z}/p\mathbb{Z})^\times$ ). Then we get elements of arbitrarily high period: the element which is $0$ in every coordinate except the $p$ -coordinate, where it is $1$ , is periodic with period $p-1$ .
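A small computational illustration: for each prime $p$, the group $(\mathbb{Z}/p\mathbb{Z})^\times$ contains an element of order exactly $p-1$ (a primitive root), so the coordinate-wise automorphism has elements of unboundedly large period (Python sketch):

```python
def order(g, p):
    """Multiplicative order of g modulo p."""
    k, x = 1, g % p
    while x != 1:
        x = x * g % p
        k += 1
    return k

# For each prime p, multiplication by a generator of (Z/pZ)* is an
# automorphism of Z/pZ of order p - 1; on the direct sum these
# periods are unbounded.
max_orders = []
for p in [3, 5, 7, 11, 13, 101]:
    m = max(order(g, p) for g in range(1, p))
    max_orders.append(m)
print(max_orders)
```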
|
|abstract-algebra|group-theory|abelian-groups|
| 0
|
Can a (co)monad composed with itself be a (co)monad again?
|
I have very little categorical knowledge, my question comes mostly from programming, but I'm interested in the categorical solution of this problem. Let's assume we have a monad $(T, \eta, \mu)$ Is $T^2$ automatically a monad? Will the following (for example) $$\eta' = \eta T ∘ \eta $$ and $$\mu' = {\mu}T ∘ \mu T^2 $$ suffice in $(T^2, \eta', \mu')$ ? My intuition tells me, that according to monad laws $$\mu' \circ \eta' T^2 = {\mu}T ∘ \mu T^2 \circ \eta T^3 \circ \eta T^2 = \mu T \circ (\mu \circ \eta T)T^2 \circ \eta T^2 = \mu T \circ \eta T^ 2 = (\mu \circ \eta T)T = I$$ but this way $$\mu' \circ T^2\eta' = \mu T ∘ \mu T^2 \circ T^2 \eta T \circ T^2 \eta = \mu T ∘ (\mu T \circ T^2 \eta) T \circ T^2 \eta$$ and we don't have the $\mu T \circ T^2 \eta = I$ rule, so the right identity rule fails and therefore $(T^2, \eta', \mu')$ is not a monad. I also have a feeling neither of the 5 other definitions of $\mu'$ will satisfy all the rules even if we change the definition of $\eta'$ . Is
|
Generally speaking, composing a monad $S$ with a monad $T$ does not produce a monad structure on their composite endofunctor $ST$ : you require extra structure. This extra structure is known as a distributive law . A distributive law is a natural transformation $TS \Rightarrow ST$ satisfying some properties. In the special case $S = T$ , one might be tempted to take the identity natural transformation $TT \Rightarrow TT$ . However, this is a distributive law if and only if $T$ is an idempotent monad , which is a strong condition on $T$ .
|
|category-theory|monads|
| 1
|
can the period of an automorphism of an abelian group be unbounded?
|
Suppose we have a countable abelian group $A$ . Let $\phi$ be an automorphism of $A$ . Let $P = \{ a \in A \mid \exists n \in \mathbb{N} \text{ such that } \phi^{n} (a) =a \}$ be the set of periodic points of $\phi$ in $A$ . For $a \in P$ , let $m_a$ be the smallest positive integer such that $\phi^{m_a} (a) =a$ . Question: Is it possible that $\{m_a \mid a \in P\}$ is unbounded? Thoughts: If $A$ is finitely generated abelian, then by the fundamental theorem of finitely generated abelian groups the period must be bounded. I was wondering if this also holds for countable abelian groups.
|
In fact, every infinite abelian torsion group $A$ has such automorphisms. As a rough sketch, if each $p$ -component is finite, you may find an automorphism of order at least $p-1$ on each of these (nontrivial) components and paste them together to form the desired map. Otherwise, $A$ has some infinite $p$ -component, so we may as well assume that $A$ itself is $p$ -primary. If the exponent of $A$ is infinite, multiplication by (e.g.) $p+1$ will do the job. On the other hand, if the exponent is finite, $A$ is a sum of cyclic groups, of which at least one type must appear infinitely often. On this direct summand $C_{p^n}^{\oplus I}$ we may then take any permutation of the summands that has unbounded finite cycles and obtain such an automorphism this way. (Note that the same is not true for all torsion-free groups, as there are arbitrarily large rigid examples, i.e. groups whose only automorphisms are $\pm \mathrm{id}$ .)
|
|abstract-algebra|group-theory|abelian-groups|
| 1
|
p-variation of x
|
Let us consider data $(k \in \mathbb{Z}_{\ge 1}, (s_1, \ldots, s_k) \in \mathbb{R}^k_{\ge 0})$ such that $s_1 + \ldots + s_k = 1$ . The mesh of a datum like that is $\max (s_1, \ldots, s_k)$ . Given $p > 1$ , what is a nice way to see that, as the mesh tends to zero, $s_1^p + \ldots + s_k^p$ tends to zero?
|
Just do $$ \sum s_i^p = \sum s_i s_i^{p-1} \leq (\max \{ s_i \} )^{p-1} \sum s_i = {\text{mesh}}^{p-1} $$
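The estimate can be tested numerically on a random partition (Python; the choices $k=1000$ and $p=2.5$ are arbitrary):

```python
import random

random.seed(2)

def check(s, p):
    """Return (sum of s_i^p, mesh^(p-1)) for a partition s of 1."""
    mesh = max(s)
    return sum(x**p for x in s), mesh ** (p - 1)

# Random partition of [0, 1] into k parts
k = 1000
cuts = sorted(random.random() for _ in range(k - 1))
s = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
lhs, rhs = check(s, 2.5)
print(lhs, rhs)
```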
|
|calculus|inequality|normed-spaces|
| 1
|
Convergence in $L_p$ norm of minimum of uniform random variables, $Y_1=\min\left\{X_1,\ldots,X_n\right\}$ with $X_i\sim U(0,1)$
|
${X_i}$ are i.i.d with $X_i\sim U\left(0,1\right)$ . Prove that $Y_1=\min\left\{X_1,X_2,\ldots,X_n\right\}$ converges to $Y=0$ in expectation of order $p\ge1$ . My try and where I got stuck: $$\lim _{n\to \infty }\left(E\left[\left|Y_1-Y\right|^p\right]\right)=\lim _{n\to \infty }\left(E\left[\left|Y_1\right|^p\right]\right)=\lim _{n\to \infty }\left(E\left[\left|\min\left\{X_1,X_2,\ldots,X_n\right\}^p\right|\right]\right)\\ \le \lim _{n\to \infty }\left(E\left[\left|X_i\right|^p\right]\right)\:\forall i\in \left\{1,2,\ldots,n\right\}=\lim _{n\to \infty }\left(\int _0^1\left|X_i\right|^p\:dx\right)\ne 0$$ That is my problem, I don't know how to receive $0$ here
|
If $X_i\sim U\left(0,1\right), i=1, \dots, n$ and are independent, we have the following property : $$Y_1=\min\left\{X_1,X_2,\ldots,X_n\right\} \sim \text{Beta}(a=1,b=n).$$ Hence, as the expectation of any power of a beta distribution is known (see here ), we have $$ \mathbb E\left[\left|Y_1-0\right|^p\right]=\mathbb E\left[Y_1^p\right]=\\ \frac{\Gamma(a+b)\Gamma(a+p)}{\Gamma(a)\Gamma(a+p+b)}= \frac{\Gamma(1+n)\Gamma(1+p)}{\Gamma(1)\Gamma(1+n+p)}= \\ \frac{n!\Gamma(1+p)}{(n+p)((n-1)+p)...(1+p)\Gamma(1+p)}=\color{blue}{\frac{n!}{(n+p)((n-1)+p)...(1+p)}}. \tag {1} $$ For $p=1$ , it becomes $\frac{1}{n+1}$ , which tends to $0$ as $n \to \infty$ . For $p>1$ , from $$\frac{n!}{(n+p)((n-1)+p)...(1+p)} \le \frac{n!}{n!+ p^n},$$ we see that the last term in (1) tends to zero as $n \to \infty$ . This yields the desired result: $$\mathbb \lim_{n \to \infty } E\left[\left|Y_1-0\right|^p\right]=0$$ for any $p \ge 1$ .
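For integer $p$, the Gamma-function computation can be cross-checked exactly with rational arithmetic against a direct term-by-term integration of $x^p \cdot n(1-x)^{n-1}$ (Python sketch):

```python
from fractions import Fraction
from math import comb, factorial

def E_min_pow(n, p):
    """Exact E[min(X_1..X_n)^p]: integrate x^p * n(1-x)^(n-1) on [0,1]
    by expanding (1-x)^(n-1) with the binomial theorem."""
    return sum(Fraction(n * comb(n - 1, j) * (-1) ** j, p + j + 1)
               for j in range(n))

def product_formula(n, p):
    """n! / ((n+p)(n-1+p)...(1+p)) from the Gamma-function computation."""
    denom = 1
    for k in range(1, n + 1):
        denom *= k + p
    return Fraction(factorial(n), denom)

vals = [(E_min_pow(n, 2), product_formula(n, 2)) for n in (1, 2, 5, 10)]
print(vals)
```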
|
|probability|expected-value|uniform-convergence|uniform-distribution|
| 0
|
Distributions over a unit square
|
Problem definition Consider the following bivariate random variable \begin{equation*} \overline{z}\triangleq \frac{1}{4} \sum_{j=1}^4 z_j \end{equation*} where $z_1, \dots, z_4$ are uniformly distributed over the edges of a square centered in the origin and with edge of unitary length. More precisely, \begin{equation*} \begin{aligned} z_1 &= \left[\begin{array}{c} x_1 \\ -0.5 \end{array}\right] \qquad x_1 \sim \mathcal{U}([-0.5,0.5])\\ z_2 &= \left[\begin{array}{c} +0.5 \\ y_2 \end{array}\right] \qquad y_2 \sim \mathcal{U}([-0.5,0.5])\\ z_3 &= \left[\begin{array}{c} x_3 \\ +0.5 \end{array}\right] \qquad x_3 \sim \mathcal{U}([-0.5,0.5])\\ z_4 &= \left[\begin{array}{c} -0.5 \\ y_4 \end{array}\right] \qquad y_4 \sim \mathcal{U}([-0.5,0.5]) \end{aligned} \end{equation*} my question is the following: what is the distribution of $\overline{z}$ ? My attempt Clearly, $\overline{z}$ is a point that falls inside the square. I'm wondering if considering $\overline{z}$ as uniformly distributed inside the square is correct.
|
Your random variable comes out to be $\left(\frac{x_1+x_3}{4},\frac{y_2+y_4}{4}\right)$ , since the fixed $\pm 0.5$ coordinates cancel in the sum. So the probability distribution of the X-coordinate of the random variable is the convolution of the probability distributions of $x_1$ and $x_3$ (scaled by $\frac{1}{4}$ at the end). This is the autoconvolution of a uniform probability distribution from $-0.5$ to $0.5$ , which gives a triangular probability density function. We can say the same for the Y-coordinate with $y_2$ and $y_4$ . (The original post shows a simulated result and a plot of the final probability distribution.)
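A Monte Carlo check of the X-marginal (illustrative): the triangular density on $[-\tfrac14,\tfrac14]$ has variance $\frac{1}{16}\left(\frac1{12}+\frac1{12}\right)=\frac1{96}$, which simulation reproduces:

```python
import random

random.seed(3)
N = 200_000
sq = 0.0
for _ in range(N):
    # X-coordinate of z-bar: (x1 + x3)/4 with x1, x3 ~ U(-0.5, 0.5)
    xbar = (random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)) / 4
    sq += xbar * xbar
var = sq / N          # the mean is 0 by symmetry

print(var, 1 / 96)
```

A uniform distribution on the square would instead have marginal variance $\frac1{12}\cdot 1 \approx 0.083$ per coordinate, so $\overline z$ is clearly not uniform.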
|
|probability|uniform-distribution|density-function|covariance|
| 0
|
Is this proof that the collection F(N) of all finite subsets of N is countable correct?
|
I am going through Bartle and Sherbert's Introduction to Real Analysis and doing all the exercises. I have just tried the last exercise of section 1.3, which asks you to prove the aforesaid statement. I think I've come up with something satisfactory, but it looks nothing like the hint the book offers at the end. I start by building a grid that includes all the finite subsets of N as follows. Each finite subset has a natural number n of elements (except the empty set, which has none), so each row includes all the subsets with the same number of elements (except the first row, which also includes the empty set). First row would start like this: ∅, {1}, {2}, {3} ... Second row would start like this: {1,2}, {1,3}, {1,4} ... Position within the row is determined by two aspects: we add up the elements of the sets and start with those whose sum is the smallest possible natural number m that results from the sum of distinct elements of the sets in the row. For the first row, this is the empty set; for the second row, it is {1,2}.
|
Your argument looks fine to me. You can try to formalize it, but at this point the idea is more important than writing every rigorous detail down. One suggestion if you want to formalize your argument a bit more: prove that the order you define on the rows is a well-order, meaning that any subset (of the row) has a minimum element. In other words, this ensures that there is always a well-defined "next" element in the sequence. You can also prove that within a single row, you reach any set belonging to it in a finite number of steps. This is to ensure your map from $\mathbb{N} \times \mathbb{N}$ is surjective.
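One way to make the countability fully concrete (a different, standard bijection, not the grid ordering of the post): encode a finite subset of $\mathbb{N}$ by the integer whose binary digits mark its elements:

```python
def to_int(subset):
    """Finite subset of N = {1, 2, ...} -> natural number (binary encoding)."""
    return sum(1 << (k - 1) for k in subset)

def to_set(m):
    """Inverse: read off the positions of the 1-bits."""
    s, k = set(), 1
    while m:
        if m & 1:
            s.add(k)
        m >>= 1
        k += 1
    return s

# Round-trip check on the first few hundred codes: the map is a bijection
ok = all(to_int(to_set(m)) == m for m in range(500))
print(ok)
```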
|
|elementary-set-theory|solution-verification|
| 1
|
Number of integral solutions for $x_1 + x_2 - x_3 = n$ where $n \geq x_1 , x_2 , x_3 \geq 0$
|
I have been asked to find the number of integral solutions of $x_1 + x_2 - x_3 = n$ where $n \geq x_1 , x_2 , x_3 \geq 0$ . My approach: We have $0 \leq x_3\leq n$ $\Rightarrow n \leq x_1 + x_2 \leq 2n$ . Further, by assuming some integer $t$ for which $x_1+x_2+t = n$ , we can find the number of integers for $x_1$ and $x_2$ , which should take care of the corresponding values of $x_3$ . However, when I solve the case $n = 20$ , my answer does not match the given one, 651. My book's solution: Number of solutions $= \binom{n+1}{1} + \binom{n+2}{1}+\binom{n+3}{1} + \ldots + \binom{2n+1}{1} = \dfrac{(n+1)(3n+2)}{2}$ Any help would be appreciated.
|
Your approach is correct (though: 1) you should have described it more explicitly, and 2) I am reluctant to name $t$ your negative integer: better $-i$ ), and your book is wrong, because it forgot the conditions $x_2,x_3\le n$ . The number of solutions is $$\sum_{i=0}^n\left|\{(x_1,x_2)\in\Bbb Z^2\mid 0\le x_1,x_2\le n\text{ and }x_1+x_2=n+i\}\right|$$ $$=\sum_{i=0}^n\left|\{x_1\in\Bbb Z\mid 0\le x_1,n+i-x_1\le n\}\right|$$ $$=\sum_{i=0}^n\left|\{x_1\in\Bbb Z\mid i\le x_1\le n\}\right|$$ $$=\sum_{i=0}^n(n-i+1)$$ $$=\sum_{k=1}^{n+1}k$$ $$=\frac{(n+1)(n+2)}2.$$
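A brute-force count for $n=20$ confirms the corrected formula (and that the book's $651$ overcounts):

```python
n = 20
# enumerate all triples with 0 <= x1, x2, x3 <= n
count = sum(1
            for x1 in range(n + 1)
            for x2 in range(n + 1)
            for x3 in range(n + 1)
            if x1 + x2 - x3 == n)
formula = (n + 1) * (n + 2) // 2
print(count, formula)
```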
|
|combinatorics|permutations|combinations|multinomial-theorem|
| 1
|
Asymmetric relation is antisymmetric and irreflexive as well
|
A relation is asymmetric if and only if it is both antisymmetric and irreflexive. I read this in Wikipedia's article about binary relations. My problem with the statement is that though I can relate asymmetry with antisymmetry, I cannot see the reasoning behind a binary relation being asymmetric also being irreflexive. Here's what I understand about the statement. Asymmetry: $ \forall a,b\in X \quad aRb \implies \lnot (bRa) $ Antisymmetry: $ \forall a,b\in X \quad (aRb \land bRa) \implies a=b $ In other words, $ \forall a,b\in X \quad (aRb \land a≠b) \implies \lnot (bRa) $ Now, if $ a=b $ were true in an asymmetric relation then by definition of asymmetry it would state that (putting $ a $ in place of $b$ ) $ \forall a\in X \quad aRa \implies \lnot (aRa) $ But this statement seems contradictory. So we conclude that a binary relation between any two elements of a set will be asymmetric if equality does not hold between them. If this is the case then we have three conditions: $ aRb $ , $ \lnot(bRa) $
|
Your argument towards anti-symmetry is not correct. In your argument towards anti-symmetry, you state: $ \forall a,a\in X \quad aRa \implies \lnot (aRa) $ But this statement seems contradictory. No, it is not contradictory. It simply means that you cannot have $aRa$ for any $a$ , for if you did, then you would get a contradiction. So, this statement implies: $ \forall a\in X \quad \lnot (aRa) $ and that is irreflexivity! So your argument, which was supposed to demonstrate antisymmetry, actually demonstrates irreflexivity. Moreover, your argument does not demonstrate anti-symmetry! First, your statement: So we conclude that a binary relation between any two elements of a set will be asymmetric if equality does not hold between them. is a really strange statement: you shouldn't say that 'a binary relation between any two elements is asymmetric'. Rather, the whole relation is asymmetric, and that will constrain what relations may or may not hold between any two elements. But given asymmet
|
|logic|set-theory|
| 0
|
$g \in L^p_{\operatorname{loc}}(\Omega) \Longleftrightarrow |g|^p \, \chi_\Omega \in L^1_{\operatorname{loc}}(\mathbb R^n) ? $
|
Let $\Omega \subset \mathbb R^n$ be an arbitrary open set and consider the usual Lebesgue spaces of locally $p$ -integrable functions, for $1 \leqslant p < \infty$ . My goal is to prove the following claim: Claim. Given $1 \leqslant p < \infty$ , it follows that $$ g \in L^p_{\operatorname{loc}}(\Omega) \Longleftrightarrow |g|^p \chi_\Omega \in L^1_{\operatorname{loc}}(\mathbb R^n). $$ My attempt. First suppose that $g \in L^p_{\operatorname{loc}}(\Omega).$ Writing it out, this means that $$ \int_K |g(x)|^p \, dx < \infty $$ for every compact set $K \subset \Omega$ . In order to prove that $|g|^p \, \chi_\Omega \in L^1_{\operatorname{loc}}(\mathbb R^n),$ one must show that $$ \int_C |g(x)|^p \, \chi_\Omega(x) \, dx < \infty $$ for every compact set $C \subset \mathbb R^n$ . Clearly, if $C \cap \Omega$ is a compact subset of $\Omega$ , we obtain the desired result. It's obvious that $C \cap \Omega$ is a subset of $\Omega$ and so everything we need to guarantee is that $C \cap \Omega$ is compact. Since $C$ is compact, we d
|
You cannot prove this. Let $\Omega =(0,1),p=1$ and $g(x)=\frac 1 x$ . Then $g \in L_{\text {loc}} ^{1}(\Omega)$ according to your definition. Take $C=[0,1]$ . Then $\int_C|g(x)|\chi_{\Omega}(x)dx=\infty$ .
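The counterexample can also be seen numerically (a small stdlib-only Python sketch; the helper name is mine): $\int_a^b dx/x = \log b - \log a$ stays finite on any compact $[a,b] \subset (0,1)$, but grows without bound as the lower limit approaches $0$:

```python
import math

# g(x) = 1/x on Omega = (0, 1): finite integral on compacta inside (0, 1),
# but the integral over C = [0, 1] diverges as the lower endpoint -> 0.
def integral_one_over_x(a, b):
    return math.log(b) - math.log(a)  # exact value of the integral of 1/x on [a, b]

finite_on_compactum = integral_one_over_x(0.1, 0.9)  # = log 9, finite
growing = [integral_one_over_x(10.0 ** -k, 1.0) for k in range(1, 8)]  # = k * log 10
```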
|
|real-analysis|functional-analysis|solution-verification|lebesgue-integral|lebesgue-measure|
| 1
|
Proof of excission Axiom for singular Homology
|
Im working through the proof of the excission Axiom for singular Homology in the Bredon Homology book. I have two questions regarding this. Bredon first defines the chain map $\gamma$ and the Chain Homotopy $T$ between $\gamma$ and the identity on the subcomplex of affine simplicies. then right after Lemma 17.1 on page 225 he extends this definitions of $T:\Delta_p(X) \to \Delta_{p+1}(X),\gamma:\Delta_p(X) \to \Delta_p(X)$ on the hole $\Delta(X)$ . He writes down four conditions he want to have on this extensions. I don't understand the fours condition. It says that $\gamma(\sigma)$ and $T(\sigma)$ should be chains in $im(\sigma)$ . What does this mean? $\gamma(\sigma)$ is a map $\Delta_p \to X$ how can a map lie in the image of another map? How is this meant. Then he says after it that the (4) condition just follows by naturality why is this the case? The other thing I wanted to ask is probably related. Let $U$ be an open cover of $X$ . In The proof of 17.7 Bredon claims that $T_k(c)$
|
In general, to say a chain $c\in\Delta_p(X)$ lies in a subspace $A\subseteq X$ just means it is contained in the subcomplex $\Delta_p(A)\subseteq\Delta_p(X)$ , equivalently its support (the union of the images of its simplices) is contained in $A$ . In this specific case, this follows by applying naturality to the map $\sigma\colon\Delta^p\rightarrow X$ , which implies that $\gamma(\sigma)=\gamma\sigma_{\ast}(\iota_p)=\sigma_{\ast}(\gamma\iota_p)$ . An analogous argument holds for $T$ . For your second question, the point you've omitted is that $c$ was a $U$ -small chain to begin with. That is, $c$ is a linear combination of singular simplices each lying in some element of $U$ (they don't all need to lie in the same element, to be clear), but $T_k$ is linear and (4) says that $T_k$ applied to each such simplex is itself a linear combination of simplices lying in the respective element of $U$ , so overall $T_k(c)$ is a linear combination of simplices each lying in some element of $U$ .
|
|algebraic-topology|homology-cohomology|
| 0
|
Convergence in $L_p$ norm of minimum of uniform random variables, $Y_1=\min\left\{X_1,\ldots,X_n\right\}$ with $X_i\sim U(0,1)$
|
${X_i}$ are i.i.d with $X_i\sim U\left(0,1\right)$ . Prove that $Y_1=\min\left\{X_1,X_2,\ldots,X_n\right\}$ converges to $Y=0$ in expectation of order $p\ge1$ . My try and where I got stuck: $$\lim _{n\to \infty }\left(E\left[\left|Y_1-Y\right|^p\right]\right)=\lim _{n\to \infty }\left(E\left[\left|Y_1\right|^p\right]\right)=\lim _{n\to \infty }\left(E\left[\left|\min\left\{X_1,X_2,\ldots,X_n\right\}^p\right|\right]\right)\\ \le \lim _{n\to \infty }\left(E\left[\left|X_i\right|^p\right]\right)\:\forall i\in \left\{1,2,\ldots,n\right\}=\lim _{n\to \infty }\left(\int _0^1\left|X_i\right|^p\:dx\right)\ne 0$$ That is my problem, I don't know how to receive $0$ here
|
Well here's a short way to do this provided you know about Dominated Convergence Theorem or more generally Uniform Integrability. So firstly, you show convergence in probability. So $P(|Y_{n}|\geq\epsilon)=P(\{|X_{1}|\geq\epsilon\},...,\{|X_{n}|\geq\epsilon\})\stackrel{iid}{=}P(|X_{1}|\geq\epsilon)^{n}=(1-\epsilon)^{n}$ which goes to $0$ as $n\to\infty$ ( $Y_{n}$ denotes the minimum order statistic). And $|Y_{n}|\leq 1$ for all $n$ . Hence using Dominated Convergence Theorem , you have that $Y_{n}\xrightarrow{L^{p}}0$ . The uniform integrability theorem basically says that if $X_{n}\xrightarrow{P} X$ and $X_{n}$ is uniformly integrable then $X_{n}\xrightarrow{L^{1}}X$ and vice-versa. You can see my short proof here . So basically, you can even repeat my proof from there and conclude convergence in expectation for any $p$ . In fact, you can generalize this even further. Let $X_{1}$ be non-negative and have $p$ -th moment such that for each $\epsilon>0$ , there exists $c(\epsilon)>0$ suc
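For a concrete check, here is a Python sketch using the exact moment (assumption: $Y_n$ has density $n(1-x)^{n-1}$ on $(0,1)$, so $E|Y_n|^p = n\,B(p+1,n)$); the $p$-th moments visibly tend to $0$:

```python
import math

# Exact p-th moment of the minimum of n iid U(0,1) variables:
# E|Y_n|^p = n * B(p + 1, n), computed via log-gamma to avoid overflow.
def expected_min_pow(n, p):
    return math.exp(math.log(n) + math.lgamma(p + 1)
                    + math.lgamma(n) - math.lgamma(n + p + 1))

# For p = 1 this is 1 / (n + 1); for every p >= 1 it decreases to 0.
vals = {p: [expected_min_pow(n, p) for n in (1, 10, 100, 1000)] for p in (1, 2, 3)}
```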
|
|probability|expected-value|uniform-convergence|uniform-distribution|
| 0
|
How to classify $2^n$ groups up to isomorphism?
|
Groups of order $2^n$ have the most non-isomorphic types and are notoriously hard to classify. For example, order $32$ has $51$ non-isomorphic groups, and order $64$ has $267$ . In general group theory texts, only groups of order up to $15$ are (completely) classified. Even order $16$ is not given a full classification. On this website , all groups up to order $500$ are classified with given names. My question is: are groups of order $64$ or $128$ or higher classified manually through presentations, or by programming (computational group theory)?
|
It is known that the number of isomorphism classes of groups of order $p^n$ ( $p$ prime) grows as $p^{{\frac{2}{27}n^3} +O(n^{8/3})}$ (Charles Sims, Enumerating p-groups , Proc. London Math. Soc., Series 3, 15 (1965), 151-166). Because of this exponential growth, it is believed that almost all finite groups are $2$ -groups: the fraction of isomorphism classes of $2$ -groups among isomorphism classes of groups of order at most $n$ is thought to tend to $1$ as $n$ tends to infinity. If you look at the table in this paper of Conway, Dietrich and O'Brien, you can see that of the 49 910 529 484 different groups of order at most $2000$ , 49 487 367 289, or just over 99%, are the $2$ -groups of order 1024. The first to create some order in the plethora of groups of prime-power order was Philip Hall ( The classification of prime-power groups , J. Reine Angew. Math. 182 (1940), 130-141). He observed that the notion of isomorphism of groups is really too strong to give rise to a satisfactory clas
|
|abstract-algebra|group-theory|
| 1
|
How many consecutive 0 digits are at the end of $\mathrm{K}$
|
Let K be the largest value such that $(2023!)^{2023!}$ is divisible by $42^{\mathrm{K}}$ . How many consecutive 0 digits are at the end of $\mathrm{K}$ ? I tried to find the highest power of $42$ that divides $2023!$ Since $42 = 2 \times 3 \times 7$ , we need to count the occurrences of 2, 3, and 7 in the prime factorization of $2023!$ . Counting the number of factors of 2 in $2023!$ : $\left\lfloor\frac{2023}{2}\right\rfloor +\left\lfloor\frac{2023}{2^2}\right\rfloor + \left\lfloor\frac{2023}{2^3}\right\rfloor + \ldots = 1011 + 505 + 252 + \ldots$ . Counting the number of factors of 3: $\text{Number of factors of } 3 = \left\lfloor\frac{2023}{3}\right\rfloor + \left\lfloor\frac{2023}{3^2}\right\rfloor + \left\lfloor\frac{2023}{3^3}\right\rfloor + \ldots.$ Counting the number of factors of 7: $\text{Number of factors of } 7 = \left\lfloor\frac{2023}{7}\right\rfloor + \left\lfloor\frac{2023}{7^2}\right\rfloor + \ldots$ Then take the minimum of these counts, pair up to get the factors of 42: $K
|
You have found out that $335$ is the largest value of $N$ with $42^N\mid2023!$ (the exponent of $7$ is the smallest: $\lfloor2023/7\rfloor+\lfloor2023/49\rfloor+\lfloor2023/343\rfloor=289+41+5=335$). But the number in question is $2023!$ to the power of itself , and so $K=335\cdot2023!=67\cdot5\cdot2023!$ . We now count how many $5$ factors are in $K$ : $$1+\sum_{i=1}^\infty\lfloor2023/5^i\rfloor=1+404+80+16+3=504$$ Since this is smaller than the number of $2$ factors in $K$ – $2014$ – the answer is $504$ .
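The whole computation can be verified with Legendre's formula $v_p(n!)=\sum_{k\ge1}\lfloor n/p^k\rfloor$ (a stdlib-only Python sketch; note that $\lfloor2023/7\rfloor+\lfloor2023/49\rfloor+\lfloor2023/343\rfloor=289+41+5=335$):

```python
# Legendre's formula: v_p(n!) = sum over k >= 1 of floor(n / p^k).
def legendre(n, p):
    total, power = 0, p
    while power <= n:
        total += n // power
        power *= p
    return total

def v_p(m, p):
    # Exponent of the prime p in the positive integer m.
    count = 0
    while m % p == 0:
        m //= p
        count += 1
    return count

v2, v3, v7 = (legendre(2023, p) for p in (2, 3, 7))
largest_N = min(v2, v3, v7)                 # largest N with 42^N | 2023!
# K = largest_N * 2023!, so count its factors of 5 and of 2 separately.
v5_K = v_p(largest_N, 5) + legendre(2023, 5)
v2_K = v_p(largest_N, 2) + v2
trailing_zeros = min(v2_K, v5_K)
```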
|
|elementary-number-theory|divisibility|
| 1
|
In a ring $1+\epsilon+\cdots+\epsilon^{p-1}=0$ implies $1+\epsilon^c+\epsilon^{2c}+\cdots+\epsilon^{(p-1)c}=0$ for $0<c<p$.
|
I am a bit unsure about my following "proof", in particular the rationality of "taking projection". Problem : $R$ is a ring with identity, not necessarily commutative . If $\epsilon\in R$ satisfies the equation $1+\epsilon+\cdots+\epsilon^{p-1}=0$ , where $p$ is a prime number, show that for all integers $0<c<p$ , we have $1+\epsilon^c+\epsilon^{2c}+\cdots+\epsilon^{(p-1)c}=0$ . Proof : By considering roots in $\mathbb{C}[X]$ , there exists $\phi(X)\in\mathbb{Q}[X]$ s.t. \begin{equation*} 1+X^c+X^{2c}+\cdots+X^{(p-1)c}= (1+X+\cdots+X^{p-1})\phi(X). \end{equation*} Moreover this $\phi(X)$ is actually in $\mathbb{Z}[X]$ since $1+X+\cdots+X^{p-1}$ is monic. Now we take the projection $\pi: \mathbb{Z}\to R$ , $1\mapsto 1_R$ in the above equation and yield an identity on $R[X]$ . Plugging in $X=\epsilon$ implies $1+\epsilon^c+\epsilon^{2c}+\cdots+\epsilon^{(p-1)c}=0$ . $\square$ My confusion is that taking the projection $\pi: \mathbb{Z}\to R$ seems a bit casual. Is this proof correct?
|
Here is a different take on the problem: $\epsilon^{p}=1$ because $1-\epsilon^{p}=(1-\epsilon)(1+\epsilon+\cdots+\epsilon^{p-1})=0$ $1+\epsilon^c+\epsilon^{2c}+\cdots+\epsilon^{(p-1)c}=1+\epsilon+\cdots+\epsilon^{p-1}=0$ because $k \mapsto kc$ is a permutation of the integers mod $p$ and $\epsilon^{p}=1$
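A quick numerical sanity check in $\mathbb{C}$ (a Python sketch; taking $\epsilon$ to be a primitive $p$-th root of unity, which satisfies the hypothesis): the twisted sums vanish for every $0<c<p$:

```python
import cmath

# eps = primitive p-th root of unity satisfies 1 + eps + ... + eps^(p-1) = 0;
# check that 1 + eps^c + eps^(2c) + ... + eps^((p-1)c) = 0 for 0 < c < p.
p = 7
eps = cmath.exp(2j * cmath.pi / p)
sums = [abs(sum(eps ** (k * c) for k in range(p))) for c in range(1, p)]
```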
|
|abstract-algebra|ring-theory|noncommutative-algebra|
| 0
|
Parking lot ticket probability.
|
I am struggling to find a quick and intuitive solution to this problem: You parked your car on Monday and didn't pay for parking. You leave your car there until the end of Tuesday. You know that you have the same probability of getting a ticket on Monday and Tuesday. You also know that if you get a ticket on Monday, then you won't get one on Tuesday. At the same time, you know that the probability of getting a ticket on either of the two days is 0.84. What is the probability that you get a fine on Monday? I have solved it this way: Let's condition everything with time. We know that P(getting ticket on Monday | it is Monday today) = P(ticket on Tuesday | it is Tuesday today). At this point, we can think that the probability of getting a ticket on either of the two days is observed on Monday, so that P(M or T| it is Monday) = 0.84. Now I have made an assumption, but I still struggle to properly motivate it. We have that P(M or T| it is Monday)=P(M| it's M) + P(T and not M| it's T). Now we
|
Let the probability you get a ticket on any day be $x$ Now, there is a chance you get a ticket on Monday and if not there is again an equal chance you get a ticket on Tuesday. The total probability you even get a ticket is given as $0.84$ . $$P(getting\ a\ ticket) = P(Ticket\ on\ Monday) + P(No\ ticket\ on\ Monday) \cdot P(Ticket\ on\ Tuesday)$$ $$\Rightarrow 0.84 = x + (1-x) \cdot x$$ Solving the quadratic gives us $0.6$ . So, the probability of a fine on Monday is $0.6$
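The quadratic step can be checked directly (a small Python sketch):

```python
import math

# 0.84 = x + (1 - x) * x  <=>  x^2 - 2x + 0.84 = 0; keep the root in [0, 1].
disc = 2 ** 2 - 4 * 0.84
roots = [(2 - math.sqrt(disc)) / 2, (2 + math.sqrt(disc)) / 2]
monday = next(r for r in roots if 0 <= r <= 1)  # probability of a Monday ticket
```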
|
|probability|
| 1
|
If $A$ is absolutely flat then every primary ideal is maximal
|
Given the ring $A$ is commutative with an identity element. If $A$ is absolutely flat (i.e. each $A$-module is flat) then every primary ideal is maximal. This exercise 4.3 comes from the classical text of Atiyah-Macdonald : introduction to the commutative algebra. Attempt: Suppose $\mathfrak{q}$ is a primary ideal in $A$ and fix an $x\in A-\mathfrak{q}.$ Then $\bar{x}\in A/\mathfrak{q}$ is non-zero since $x\not\in\mathfrak{q}$ Recall that $A$ is absolutely flat $\Longleftrightarrow$ every principal ideal is idempotent. Then as $x\in\langle x\rangle=\langle x^{2}\rangle$, we have $x=ax^{2}$ for some $a\in A$ and hence $x(ax-1)=0\in\mathfrak{q}$ and we thus have $(ax-1)^{n}\in\mathfrak{q}$ for some integer $n>0$ since $x\not\in\mathfrak{q}.$ Therefore, we get that $(ax-1)^{n}=\bar{0}$ in $A/\mathfrak{q}$. It follows that $\bar{a}\bar{x}-\bar{1}\in A/\mathfrak{q}$ is nilpotent and thus $\bar{a}\bar{x}=(\bar{a}\bar{x}-\bar{1})+\bar{1}\in A/\mathfrak{q}$ is unit which implies $\bar{x}\in A/
|
A short proof that a primary ideal of an absolutely flat ring is maximal: Let $A$ be an absolutely flat ring and $x\in A$ . From Exercise 2.27 of Atiyah-Macdonald, $\langle x\rangle = \langle x^2\rangle = \cdots = \langle x^n\rangle$ for every positive integer $n$ . Suppose $q$ is a primary ideal of $A$ and $m$ a maximal ideal containing it. Take any $x\in m$ . Since $x=bx^2$ for some $b\in A$ , $x(1-bx)= 0$ and $1-bx\notin m$ (otherwise $1$ is in $m$ ). Thus $x^n\in q$ for some positive integer $n$ . But $\langle x\rangle = \langle x^n\rangle$ implies $x= ax^n$ for some $a\in A$ . Thus $x\in q$ and we conclude.
|
|abstract-algebra|proof-verification|commutative-algebra|
| 0
|
Calculating Launch Angle Given Distance with Added Height (which itself depends on launch angle)
|
In my physics class we have learned to calculate a desired launch angle to allow a projectile to hit a target given the target’s distance away and the initial velocity. Now in this case the initial velocity of the projectile occurs at the axis of rotation of the so called “cannon” that is launching the projectile. But what if the initial velocity occurs at the tip of the cannon? When the launch angle changes, so does the launch height and the distance to the target. With this added information, I was NOT able to solve for the desired launch angle using the kinematic equations of motion. Is this problem possible to solve? As requested by a comment, here is the equation I couldn’t solve for $\theta$ : $$d - r \cos(\theta) = \frac{ v \cos(\theta) }{-g} \cdot \left( -v \sin(\theta)\pm \sqrt{\bigl(v \sin(\theta)\bigr)^2 + 2g \bigl( h + r \sin(\theta) \bigr) \vphantom{\Big|}}\right)$$ This question has been asked before here but not answered.
|
So in your case, the initial velocity occurs at the tip of the barrel, which implies that the starting height and distance to the target changes based on the cannon's angle $(\theta)$ . If we take $l$ to be the length of the cannon and $h$ to be the cannon's starting height, then we can formulate two equations: Starting height of projectile = $l\cdot \sin(\theta)+h$ Distance lost between projectile and target = $l\cdot \cos(\theta)$ The next step is to take the projectile's initial launch velocity $(v_{initial})$ , and split this up into the vertical and horizontal vectors. We need to keep in mind that gravity causes the vertical velocity to decrease by $\approx9.81\,m/s$ every second, while the horizontal velocity stays unaffected if we neglect air resistance: Vertical velocity = $v_{initial}\cdot \sin(\theta)-9.81t$ Horizontal velocity = $v_{initial}\cdot \cos(\theta)$ Now we can use these velocity equations to create formulas for the projectile's height and distance traveled. The ho
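Even though the resulting equation has no closed-form solution for $\theta$, it yields easily to numerical root-finding. Here is a Python sketch (all parameter values are made-up examples, and I assume the target sits at ground level): define the horizontal miss distance as a function of $\theta$ and bisect:

```python
import math

# Hypothetical example values: muzzle speed v, gravity g, pivot height h,
# barrel length r, horizontal distance d to the target.
v, g, h, r, d = 20.0, 9.81, 1.0, 1.5, 30.0

def miss(theta):
    # Launch point is the barrel tip: height h + r sin(theta), and the
    # projectile only needs to cover d - r cos(theta) horizontally.
    vx, vy = v * math.cos(theta), v * math.sin(theta)
    y0 = h + r * math.sin(theta)
    t = (vy + math.sqrt(vy * vy + 2 * g * y0)) / g  # flight time to the ground
    return vx * t - (d - r * math.cos(theta))       # + overshoot, - undershoot

# Bisection: for these values miss(0) < 0 and miss(pi/4) > 0, so a root exists.
lo, hi = 0.0, math.pi / 4
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if miss(lo) * miss(mid) <= 0:
        hi = mid
    else:
        lo = mid
theta_star = 0.5 * (lo + hi)
```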
|
|physics|kinematics|projectile-motion|
| 0
|
Proving $\text{Li}_3\left(-\frac{1}{3}\right)-2 \text{Li}_3\left(\frac{1}{3}\right)= -\frac{\log^33}{6}+\frac{\pi^2}{6}\log 3-\frac{13\zeta(3)}{6}$?
|
Ramanujan gave the following identities for the Dilogarithm function : $$ \begin{align*} \operatorname{Li}_2\left(\frac{1}{3}\right)-\frac{1}{6}\operatorname{Li}_2\left(\frac{1}{9}\right) &=\frac{{\pi}^2}{18}-\frac{\log^23}{6} \\ \operatorname{Li}_2\left(-\frac{1}{3}\right)-\frac{1}{3}\operatorname{Li}_2\left(\frac{1}{9}\right) &=-\frac{{\pi}^2}{18}+\frac{1}{6}\log^23 \end{align*} $$ Now, I was wondering if there are similar identities for the trilogarithm ? I found numerically that $$\text{Li}_3\left(-\frac{1}{3}\right)-2 \text{Li}_3\left(\frac{1}{3}\right)\stackrel?= -\frac{\log^3 3}{6}+\frac{\pi^2}{6}\log 3-\frac{13\zeta(3)}{6} \tag{1}$$ I was not able to find equation $(1)$ anywhere in literature. Is it a new result? How can we prove $(1)$? I believe that it must be true since it agrees to a lot of decimal places.
|
From [Wolfram Functions](http://functions.wolfram.com/ZetaFunctionsandPolylogarithms/PolyLog3/06/01/) (in functionals with six trilogarithms) we have the identity: \begin{align*} & \operatorname{Li}_3\left( \frac{1 - z}{1 + z} \right) - \operatorname{Li}_3\left( -\frac{1 - z}{1 + z} \right) \\ &= -2\operatorname{Li}_3\left( \frac{z}{z - 1} \right) - 2\operatorname{Li}_3\left( \frac{z}{z + 1} \right) + \frac{1}{2}\operatorname{Li}_3\left( \frac{z^2}{z^2 - 1} \right) + \\ &\quad \frac{7}{4}\operatorname{Li}_3(1) + \frac{1}{4}\log\left( \frac{1 - z^2}{z^2} \right)\log^2\left( \frac{1 + z}{1 - z} \right) + \frac{1}{4}\pi^2\log\left( \frac{1 - z}{1 + z} \right) \end{align*} And for $z = \frac{1}{2}$ arises: \begin{align*} & \operatorname{Li}_3\left( \frac{1}{3} \right) - \operatorname{Li}_3\left( -\frac{1}{3} \right) \\ &= -2\operatorname{Li}_3(-1) - 2\operatorname{Li}_3\left( \frac{1}{3} \right) + \frac{1}{2}\operatorname{Li}_3\left( -\frac{1}{3} \right) + \frac{7}{4}\operatorname{Li}_3(1) + \frac{1}{4}\log^3(3) - \frac{1}{4}\pi^2\log(3) \end{align*} Because $\operatorname{Li}_3(1) = \zeta(3)$ and $\operatorname{Li}_3(-1) = -\frac{3}{4}\zeta(3)$ , finally, we hav
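Identity $(1)$ can also be confirmed numerically from the defining series (a stdlib-only Python sketch; $\operatorname{Li}_3$ from its power series, which converges quickly for $|x|=1/3$, and $\zeta(3)$ from a partial sum):

```python
import math

# Li_3(x) = sum_{n>=1} x^n / n^3; for |x| = 1/3 two hundred terms are
# far more than enough for double precision.
def li3(x, terms=200):
    return sum(x ** n / n ** 3 for n in range(1, terms))

zeta3 = sum(1.0 / n ** 3 for n in range(1, 200000))  # tail error ~ 1e-11
lhs = li3(-1 / 3) - 2 * li3(1 / 3)
log3 = math.log(3)
rhs = -log3 ** 3 / 6 + math.pi ** 2 * log3 / 6 - 13 * zeta3 / 6
```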
|
|real-analysis|sequences-and-series|analysis|special-functions|polylogarithm|
| 0
|
Finding the general form for the hyperbola formed by the intersection of a cone and a plane, in the plane of intersection
|
I have a cone, centred at the origin, aligned with the $z$ axis, and so having equation $x^2+y^2=a^2z^2$ . I also have a general plane, $px+qy+rz=s$ . Suppose that I know a priori that this plane and cone intersect to form a hyperbola (rather than a parabola or ellipse) in the plane. If we then define an orthogonal coordinate system $(X,Y)$ in the plane, how would I go about finding the equation of the hyperbola in the $(X,Y)$ coordinate system, in terms of the parameters $a,p,q,r,s$ ? This seems like it should be a relatively easy problem to solve (especially with computer algebra packages like Mathematica), but I can't seem to work out how to do it. It's easy enough to get the shape in the $(x,y)$ plane by eliminating $z$ from the two equations. But how can I project this curve into the $(X,Y)$ plane, and get the resulting shape? My reason for doing this is that, I have a CCD image (which is my $(X,Y)$ plane) with curves that I know are formed by such a projection, to which I'm tryin
|
Suppose you eliminated $z$ from the two equations of the cone and the plane and obtained an equation of the projection of the hyperbola in the form $ (u - u_0)^T H ( u - u_0) = 1$ where $u = [x, y]^T $ Given $u$ on this projected hyperbola, one can easily find the corresponding $z$ from $ z = \dfrac{ s - p x - q y }{r} $ Therefore, we define $r_1 = [x, y, z]^T$ to be the corresponding point on the hyperbola on the plane then $ r_1 = r_0 + A u $ where $r_0 = \begin{bmatrix} 0 \\ 0 \\ \dfrac{s}{r} \end{bmatrix} $ $A = \begin{bmatrix} 1 && 0 \\ 0 && 1 \\ - \dfrac{p}{r} && - \dfrac{q}{r} \end{bmatrix} $ From this, it follows that $ u = (A^T A)^{-1} A^T (r_1 - r_0 ) = G (r_1 - r_0) $ where $G=(A^T A)^{-1} A^T$ is $2 \times 3$ and has rank $2$ . At this point, it is useful to introduce the vector $ u_1 = r_0 + A u_0 $ From which it follows that $u_0 = G (u_1 - r_0) $ Substituting this into the found hyperbola equation: $ (G (r_1 - u_1))^T H (G (r_1 - u_1)) = 1 $ So that $ (r_1 - u_1)^T G^T H G
|
|geometry|analytic-geometry|
| 1
|
Confused by this proof in Jech's set theory
|
In Jech's Set Theory, Chapter 11, the universal set $U$ is defined as: For each $\alpha \geq 1$ , there exists a set $U \subset \mathcal{N}^2$ such that $U$ is $\Sigma_{\alpha}^0$ (in $\mathcal{N}^2$ ) and that for every $\Sigma_{\alpha}^0$ set $A$ in $\mathcal{N}$ , there exists some $a \in \mathcal{N}$ such that $A = \{x:(x,a) \in U\}$ . Here, $\mathcal{N}$ refers to the set $\omega^{\omega}$ (i.e. the Baire space), and $\Sigma_{\alpha}^0$ , $\Pi_{\alpha}^0$ for some ordinal $\alpha$ are collections of sets that form the Borel hierarchy. The proof that $U$ exists is done by induction on $\alpha$ . Namely, let $U$ be a universal $\Sigma_{\alpha}^0$ set; we can construct a universal set $V$ that is $\Sigma_{\alpha+1}^0$ , and the idea is: $$ (x,y) \in V \text{ if and only if for some $n$}, (x,y_{(n)}) \not \in U $$ Where $y_{(n)}$ is $y_{(n)}(k)=y(\Gamma(n,k))$ and $\Gamma : N \times N \rightarrow N$ is some one-to-one and onto pairing function. It follows that if $(x,y_{(n)}) \not \in
|
Turning some comments into an answer. Since $U\subseteq\mathcal N^2$ is $\mathbf{\Sigma}^0_\alpha$ -universal and $A_n\subseteq\mathcal N$ is $\mathbf{\Pi}^0_\alpha$ , so that $\mathcal N-A_n$ is $\mathbf{\Sigma}^0_\alpha$ , we must have $$\mathcal N-A_n=\{x\in\mathcal N\mid (x,a_n)\in U\},$$ for some $a_n\in\mathcal N$ . But this is equivalent to saying that $$A_n=\{x\in\mathcal N\mid (x,a_n)\not\in U\},$$ for the same $a_n\in\mathcal N$ as above. That being said let me try to explain the intuition behind this argument. The fact that $U\subseteq\mathcal N^2$ is $\mathbf{\Sigma}^0_\alpha$ -universal means that $U$ is coding $\mathbf{\Sigma}^0_\alpha$ subsets of $\mathcal N$ by elements of $\mathcal N$ , by thinking about $a\in\mathcal N$ as the code for the set $\{x\in\mathcal N\mid (x,a)\in U\}$ . Now note that if we can find a $\mathbf{\Sigma}^0_\alpha$ -universal set $U\subseteq\mathcal N^2$ , then we can also find a $\mathbf{\Pi}^0_\alpha$ -universal set $U'\subseteq\mathcal N^2$ ,
|
|real-analysis|proof-explanation|set-theory|descriptive-set-theory|borel-sets|
| 1
|
Probability that no partial sum of a permutation of $\{1,\dots,n\}$ are divisible by $3$
|
Assume that $a_1,a_2,\cdots,a_n$ is a completely random permutation of the numbers $1$ to $n$ . What is the probability that none of the $n$ partial sums $A_1 = a_1$ , $A_2 = a_1 + a_2, \dots$ , and $A_n = a_1 + a_2 + \cdots + a_n$ are divisible by three? Here are my attempts: Since $A_n = 1+\dots+n = n(n+1)/2$ must itself not be divisible by three, we need $n \equiv 1 \pmod 3$ . Let $n = 3k + 1$ and let $N(i)$ be the number of numbers in $\{1,\dots,n\}$ that are $i \bmod 3$ , then $$N(1) = N(2) + 1 = N(0) + 1 = k+1.$$ Here is one way I found to avoid divisibility by three: Permutations of $1,1,2,1,2, \cdots$ ( $0$ s can be calculated separately) I would appreciate it if someone could help me to move forward and finish my solution. Thank you!
|
Indeed you are correct that the probability is $0$ unless $n \equiv 1 \mod 3$ . So let $n=3k+1$ . Consider modulo $3$ . The permutation cannot start with a $0$ , so the admissible fraction is $\frac{2k+1}{3k+1}$ . Given that there is one more $1$ than $2$ s and $0$ s, you found the only way (ignoring $0$ s for now) of success: $$1,1,2,1,2,\dots, 1, 2.$$ Let $X$ be a multiset of $k+1$ copies of $1$ and $k$ copies of $2$ . Then, there are $\binom{2k+1}{k}$ ways to arrange set $X$ , but only one way is successful, so the probability is $$\frac{2k+1}{3k+1}\binom{2k+1}{k}^{-1}.$$
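A brute-force check of this closed form for the first two admissible cases $n=4$ and $n=7$ (a small Python sketch):

```python
from itertools import permutations
from math import comb

# Count permutations of 1..n with no partial sum divisible by 3 and compare
# with the closed form (2k+1) / ((3k+1) * C(2k+1, k)) for n = 3k + 1.
def no_partial_sum_div3(perm):
    s = 0
    for a in perm:
        s += a
        if s % 3 == 0:
            return False
    return True

def brute(n):
    perms = list(permutations(range(1, n + 1)))
    good = sum(1 for p in perms if no_partial_sum_div3(p))
    return good, len(perms)

checks = []
for k in (1, 2):
    n = 3 * k + 1
    good, total = brute(n)
    # good / total == (2k+1) / ((3k+1) * C(2k+1, k)), cross-multiplied:
    checks.append(good * (3 * k + 1) * comb(2 * k + 1, k) == total * (2 * k + 1))
```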
|
|probability|number-theory|permutations|combinations|divisibility|
| 1
|
Properties of the 2D Fourier Transform for Real Functions
|
In the context of the 1D Fourier transform, for a real function $f(x)$ , the Fourier transform has a property: $F(\omega) = F^*(-\omega)$ . This implies that the real part of the Fourier transform is an even function, while the imaginary part is an odd function. I am attempting to generalize this property to 2D functions but am encountering difficulties. To illustrate, consider the following matrix as an example: $$ \begin{bmatrix} 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 \\ \end{bmatrix} $$ Using NumPy (in python) for the Fast Fourier Transform, I printed out the imaginary part of the result: $$ \begin{bmatrix} 0.0 & 2.0 & 0.0 & -2.0 \\ 0.0 & -2.0 & 2.0 & 0.0 \\ 0.0 & 0.0 & 0.0 & 0.0 \\ 0.0 & 0.0 & -2.0 & 2.0 \\ \end{bmatrix} $$ Contrary to my expectations, the output was not symmetrical. I am seeking guidance to understand the properties of the Fourier transform of a real function in 2D space.
|
I found it. The property is as follows: $F(i, j) = F^*(-i, -j)$ , with negative indices taken modulo the array size. In my example, the entry at $(1, 2)$ is paired with the conjugate entry at $(3, 2)$ , since $(-1, -2) \equiv (3, 2) \pmod 4$ .
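Here is a self-contained check of the property (a Python sketch writing out the 2D DFT directly with the standard library, so it does not depend on NumPy):

```python
import cmath

# Real 4x4 input from the question; verify F(i, j) = conj(F(-i, -j)),
# with negative indices taken modulo N.
A = [[0, 1, 0, 1],
     [1, 1, 1, 1],
     [0, 1, 0, 0],
     [1, 0, 0, 1]]
N = len(A)

def F(i, j):
    return sum(A[a][b] * cmath.exp(-2j * cmath.pi * (i * a + j * b) / N)
               for a in range(N) for b in range(N))

symmetric = all(abs(F(i, j) - F((-i) % N, (-j) % N).conjugate()) < 1e-9
                for i in range(N) for j in range(N))
```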
|
|fourier-analysis|fourier-transform|
| 0
|
Trouble understanding the proof that if $X^n$ is a direct sum of $X^p$ and $X^q$, then $n = p + q$.
|
I'm following the proof from a textbook (finite-dimensional vector spaces), but I don't understand the reasoning behind two inequalities ( $n>p+q$ and $n<p+q$ ). They seemingly pop out of nowhere. It's probably really obvious, but I just don't know. It goes something like this: Assume that $X^n = X^p \bigoplus X^q$ . If $e_1,...,e_p$ is a basis in $X^p$ , and $e_1',...,e_q'$ is a basis in $X^q$ , let's prove that the vectors $e_1,...,e_p,e_1',...,e_q'$ are linearly independent. From $\sum_{k=1}^p \alpha_k e_k + \sum_{k=1}^q \alpha_k' e_k' = 0$ we get that $a+b=0$ , where $a$ is the first sum, and $b$ is the second one. Since $a \in X^p$ , and $b \in X^q$ , and the sum is direct, we know that $a=0$ , and $b=0$ . But for $a=0$ we have $\sum \alpha_k e_k = 0$ , so $\alpha_1=...=\alpha_p=0$ . The same goes for $b$ , so $\alpha_1' = ... = \alpha_q' = 0$ . Which means that $e_1,...,e_p,e_1',...,e_q'$ are linearly independent. But that means that $n > p + q$ . The other side: Every $x \in X^n$ can be writ
|
I suppose you must have proven the following theorem in your course/textbook, or something equivalent, when discussing bases, as the results you describe for direct sums rely upon it. Theorem: Let $V$ be a finite dimensional vector space, with $\dim(V)=n$ , and $S =\{v_1, \dots, v_m\}$ be some finite collection of vectors on $V$ . If $S$ is linearly independent, then $m \le n$ If $S$ spans $V$ , then $m \ge n$ . In both cases, we have equality if and only if $S$ is a basis for $V$ .
|
|linear-algebra|vector-spaces|
| 0
|
Trouble understanding the proof that if $X^n$ is a direct sum of $X^p$ and $X^q$, then $n = p + q$.
|
I'm following the proof from a textbook (finite-dimensional vector spaces), but I don't understand the reasoning behind two inequalities ( $n>p+q$ and $n<p+q$ ). They seemingly pop out of nowhere. It's probably really obvious, but I just don't know. It goes something like this: Assume that $X^n = X^p \bigoplus X^q$ . If $e_1,...,e_p$ is a basis in $X^p$ , and $e_1',...,e_q'$ is a basis in $X^q$ , let's prove that the vectors $e_1,...,e_p,e_1',...,e_q'$ are linearly independent. From $\sum_{k=1}^p \alpha_k e_k + \sum_{k=1}^q \alpha_k' e_k' = 0$ we get that $a+b=0$ , where $a$ is the first sum, and $b$ is the second one. Since $a \in X^p$ , and $b \in X^q$ , and the sum is direct, we know that $a=0$ , and $b=0$ . But for $a=0$ we have $\sum \alpha_k e_k = 0$ , so $\alpha_1=...=\alpha_p=0$ . The same goes for $b$ , so $\alpha_1' = ... = \alpha_q' = 0$ . Which means that $e_1,...,e_p,e_1',...,e_q'$ are linearly independent. But that means that $n > p + q$ . The other side: Every $x \in X^n$ can be writ
|
First off: The inequalities are not sharp, i.e. you only get $n\geq p+q$ and $n\leq p+q$ which then together yield $n = p+q$ . Now, for your actual question, think about how the dimension of a vector space $X$ is defined in the first place: The dimension of $X$ is the largest possible $n\in\mathbb{N}_0$ so that there exist $n$ linearly independent vectors in $X$ (of course, one has to first invest some efforts into showing that this quantity is well-defined). In your first step, you show that there exist vectors $e_1,...,e_p,e_1',...,e_q'$ in $X$ which are linearly independent, so the dimension of $X^n$ (again, the largest possible number of linearly independent vectors) is at least $p+q$ . This gives $n\geq p+q$ . In the second step, it is shown that any vector in $X^n$ can be written as a sum of vectors from $X^p$ and $X^q$ , or in other words there can only be at most $p+q$ linearly independent vectors. This gives $n\leq p+q$ .
|
|linear-algebra|vector-spaces|
| 1
|
A geometric argument for geodesics in manifolds also being geodesics in hypersurfaces?
|
I've read that when a hypersurface within a manifold contains a curve, if the curve is a geodesic in the manifold, it is also a geodesic in the hypersurface. This is quite abstract for me, I've only recently started learning GR, could someone provide a geometric (or if necessary, algebraic) argument for why this is true? Is the converse ever/always/never true? (Based on my limited intuition, I'd guess sometimes but not always, but I can't think of anything a professor wouldn't call 'handwavey'). Cheers
|
Easy to prove when you find the right theorem. In the GR background, let's consider $g_{ab}$ in orthonormal basis being (-+++) signature. Theorem 1. Consider a hypersurface $Σ$ in manifold $(M,g_{ab})$ , with unit normal vector field on it denoted as $n^a$ , then it's induced with a natural metric $h_{ab}$ on $Σ$ : $$ h_{ab}=g_{ab}+n_an_b $$ Theorem 2. Relation between the covariant derivative $∇_c$ of $M$ and that of the hypersurface $\mathrm D_c$ : Consider acting on any arbitrary tensor field $T^{a_1\cdots a_k}{}_{b_1\cdots b_l }$ on $Σ$ . $$ \mathrm D_cT^{a_1\cdots a_k}{}_{b_1\cdots b_l }=h^{a_1}{}_{d_1}\cdots h^{a_k}{}_{d_k} h_{b_1}{}^{e_1}\cdots h_{b_l}{}^{e_l} h_c{}^f ∇_f T^{d_1\cdots d_k}{}_{e_1\cdots e_l }. $$ Now let's prove the geodesic w.r.t $M$ must be also geodesic w.r.t $Σ$ . Set tangent vector as $w^a$ . Then we have $w^b∇_bw^a=0$ . Let's check it $\Rightarrow$ $w^bD_bw^a=0$ : $$ w^b\mathrm D_bw^a= {h^a}_α\underline{{h^β}_bw^b}∇_βw^α={h^a}_α\underline{w^β∇_βw^α}=0 $$ in
|
|general-relativity|
| 0
|
Count the number of unique values that $f(n) = \lfloor \frac{n^2}{2023} \rfloor$ can take if $n \in [1, 2023]$
|
Consider the following step function: $f: \mathbb{N} \rightarrow \mathbb{N}, f(n) = \lfloor \frac{n^{2}}{2023} \rfloor$ Where $n \in [1, 2023]$ . How many distinct values does the step function take? Thanks!
|
$45^2=2025$ , so all values before $45$ return $0$ . $2023$ is the $1011$ th odd number, so after $1011$ each interval between consecutive squares is equal to or greater than $2023$ and returns a new value. Between $45$ and $1011$ , you move from $45^2/2023=1.x$ to $1011^2/2023=505.x$ so you must have: $0-44: 1$ value $45-1011: 505$ values $1012-2023: 1011$ values For $1517$ values. Edit : I can't do math: @Lozenges points out that $n=1012-2023$ yields $1012$ values, for $1518$ total.
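A brute-force count (a one-liner Python check) confirms the corrected total of $1518$, with the three ranges contributing $1$, $505$, and $1012$ distinct values respectively:

```python
# Count distinct values of floor(n^2 / 2023) for n = 1 .. 2023.
distinct = {n * n // 2023 for n in range(1, 2024)}

# The three ranges from the answer, counted separately.
pieces = (len({n * n // 2023 for n in range(1, 45)}),      # n = 1..44: just {0}
          len({n * n // 2023 for n in range(45, 1012)}),   # n = 45..1011: 1..505
          len({n * n // 2023 for n in range(1012, 2024)})) # n = 1012..2023: all new
```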
|
|functions|
| 0
|
Artinian rings that are not Artin algebras
|
An Artin algebra $A$ is an algebra over a commutative Artinian ring $R$ which is finitely generated as an $R$-module, e.g. a finite dimensional algebra over a field. Clearly, any Artin algebra is left and right Artinian. What are examples of left and right Artinian rings that are not Artin algebras?
|
An example of a different flavour than the Weyl field comes from the fact that even if $R$ is both left and right Artinian, the center of $R$ need not be. One such example is given in Jensen, Christian U.; Jøndrup, Søren. Centres and Fixed-Point Rings of Artinian Rings, Math. Z. 130 (1973), 189--197. MR0318222 I'll rewrite it here for convenience: Let $F$ be a field and $I$ be an infinite set. Let $K$ be the field of fractions of $F[t_i^{1/2^j} \mid i\in I, j \in \mathbb N]$ . Define an $F$ -algebra automorphism of $K$ by squaring the variables: $\sigma(t_i^{1/2^j}) = t_i^{1/2^{j-1}}$ . Define $A$ to be the ring $K + Kx + Ky$ , where $xk = \sigma(k)x$ and $yk=ky$ for $k \in K$ , and $x^2 = y^2 = xy = yx = 0$ . Then the center of $A$ is $F+Ky$ , which is not Artinian.
|
|abstract-algebra|reference-request|
| 0
|
Does $\sum^\infty_{n=1} \frac{\textrm{sin}^2(n) + n^n}{1 + n^2e^nn!}$ converge?
|
I need to prove that the infinite series $$ \sum^\infty_{n=1} \frac{\sin^2(n) + n^n}{1 + n^2e^nn!} $$ converges or diverges. I already showed that $n^n \leq n!e^n$ and $(1 + \frac{1}{n})^n \leq e$ $\forall n \in \mathbb{N}$ , which must surely be useful to prove the divergence/convergence of this series, but I don't know how. I am sure I must use the direct comparison test, i.e., compare the sequence to some other sequence of which I know that its series diverges/converges. I tried $$ \frac{\sin^2(n) + n^n}{1 + n^2e^nn!} \leq \frac{1+n^n}{1+n^2e^nn!} \leq \frac{1+n!e^n}{1+n^2e^nn!} \leq \frac{1+n!e^n}{n^2e^nn!} = \frac{1}{n^2}\left(\frac{1}{e^nn!} + 1\right) ... $$ and $$ \frac{\sin^2(n) + n^n}{1 + n^2e^nn!} \geq \frac{n^n}{1 + n^2e^nn!} ... $$ but I don't know how I can simplify those expressions further to get a useful result.
|
Okay, so I thought of answering my own question here in case anyone stumbles upon the same problem, to summarize what was discussed in the comments above. We want to know if the series $\sum^\infty_{n=1} \frac{\sin^2(n)+n^n}{1+n^2e^nn!}$ converges or diverges. One way to do this is by using the direct comparison test. We can estimate the terms of this series from above as follows: $$ \frac{\sin^2(n)+n^n}{1+n^2e^nn!} \leq \frac{1+n^n}{1+n^2e^nn!} \overset{*}{\leq} \frac{1+e^nn!}{1+n^2e^nn!} \leq \frac{1+e^nn!}{n^2e^nn!} = \frac{1}{n^2}\left(\frac{1}{e^nn!}+1\right) \leq \frac{2}{n^2}. $$ Since we know the value of the series $$ \sum^\infty_{n=1} \frac{2}{n^2} = 2 \sum^\infty_{n=1} \frac{1}{n^2} = 2 \cdot \frac{\pi^2}{6} = \frac{\pi^2}{3}, $$ we can conclude by the direct comparison test that $\sum^\infty_{n=1} \frac{\sin^2(n)+n^n}{1+n^2e^nn!}$ must also converge. * For this inequality we used that $n^n \leq n!e^n \ \forall n \in \mathbb{N}$ . You can find the proof here.
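A quick numerical sanity check of the bound (not a proof). Python's exact integers handle $n^n$ and $n!$, but mixing them with floats overflows past $n \approx 140$, so we stop at $n = 100$:

```python
import math

def term(n):
    # n**n and factorial(n) are exact Python ints; the mix with the float
    # exp(n) is safe for n up to ~140 before float overflow
    num = math.sin(n) ** 2 + n ** n
    den = 1 + n * n * math.exp(n) * math.factorial(n)
    return num / den

# every term is below the comparison bound 2/n^2, and the partial sum
# stays below the comparison series' value pi^2/3
partial = sum(term(n) for n in range(1, 101))
bound = math.pi ** 2 / 3
```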
|
|real-analysis|calculus|sequences-and-series|analysis|convergence-divergence|
| 1
|
Which is the correct definition of covectors?
|
Some say covectors are linear maps $V \to \mathbb R$ (which means a covector is just a row vector, if vectors are $n \times 1$ matrices and the mapping is matrix multiplication), while some say a covector is a quantity whose components follow $v'_i = \frac{\partial x^j}{\partial x'^i} v_j$ under a coordinate transformation from $x$ coordinates to $x'$ coordinates. Let's take the gradient for example. $\nabla$ itself is not a map $\nabla : V \to \mathbb R$ , but the divergence, $\nabla \cdot$ , is. So $\nabla$ is not a covector by the first definition. However, since $\nabla$ 's components are $\frac{\partial}{\partial x^i}$ , they do follow the transformation rule $\frac{\partial x^i}{\partial x'^j} \frac{\partial}{\partial x^i} = \frac{\partial}{\partial x'^j}$ . So it is a covector by the second definition. Which one is right? Are they actually the same definition and I'm misunderstanding something, or are they just different things?
|
REPLY TO COMMENT: A vector field with respect to coordinates $(x^1, \dots, x^n)$ is of the form $$ V = v^i\frac{\partial}{\partial x^i}. $$ Therefore, if we do a change of coordinates, where we view the old coordinates $(x^1, \dots, x^n)$ as functions of the new coordinates $(\bar{x}^1, \dots, \bar{x}^n)$ , then coordinate vector fields are related by $$ \frac{\partial}{\partial \bar{x}^j} = \frac{\partial x^i}{\partial\bar{x}^j}\frac{\partial}{\partial x^i}.$$ Therefore, $$ V = v^i\frac{\partial}{\partial x^i} = \bar{v}^j\frac{\partial}{\partial\bar{x}^j}, $$ where $$ v^i = \bar{v}^j\frac{\partial x^i}{\partial\bar{x}^j}. $$ On the other hand, with respect to the coordinates $(x^1, \dots, x^n)$ , the gradient of $f$ is the vector field $$\nabla f = \frac{\partial f}{\partial x^i}\frac{\partial}{\partial x^i} $$ and the gradient of $f$ with respect to $(\bar{x}^1,\dots,\bar{x}^n)$ is $$\nabla f = \frac{\partial f}{\partial\bar{x}^j}\frac{\partial}{\partial \bar{x}^j} = \frac{\partial f
|
|differential-geometry|vectors|definition|tensors|covariance|
| 0
|
How to calculate: $ \int_{-7 \pi}^{2 \pi} \frac{1}{2 \sin(x) - \cos(x) + 5} dx$
|
How to calculate the value of this Riemann integral? $$ \int_{-7 \pi}^{2 \pi} \frac{1}{2 \sin(x) - \cos(x) + 5} dx $$ I used the universal trigonometric (Weierstrass) substitution and found the following antiderivative of the integrand: $$ \int\frac{1}{2 \sin(x) - \cos(x) + 5} dx = \frac{1}{\sqrt{5}}\arctan \left( \frac{3 \tan \left( \frac{x}{2} \right)+1}{\sqrt{5}} \right) + const $$ We cannot simply use the Newton-Leibniz formula by substituting $- 7\pi$ and $2 \pi$ , because $\tan \left(-\frac{7\pi}{2} \right)$ is not defined. I don't quite understand what needs to be done in this case. I can add that the universal trigonometric substitution works on each interval $(-\pi + 2\pi k, \pi + 2\pi k), k \in \mathbb{Z}$ , and the boundaries of integration do not lie inside a single such interval, which leaves me at a dead end. I would be grateful for any hints and solutions to this task!
|
The Newton-Leibniz formula can still be applied if a continuous primitive is utilized as below \begin{align} &\int_{-7 \pi}^{2 \pi} \frac{1}{2 \sin x - \cos x + 5} \ dx\\ =& \ \frac1{2\sqrt5}\left( x+2\cot^{-1}\frac{5+2\sqrt5 -\cos x+2\sin x} {\sin x+2\cos x}\right)\bigg|_{-7\pi}^{2\pi}\\ =&\ \frac1{2\sqrt5} \left(9\pi +2\cot^{-1}{\sqrt5}\right) \end{align}
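A quick numerical cross-check of this value with composite Simpson's rule (the integrand is smooth and bounded away from zero, so a fine uniform grid is reliable; note $\cot^{-1}\sqrt5=\arctan(1/\sqrt5)$):

```python
import math

def f(x):
    return 1.0 / (2 * math.sin(x) - math.cos(x) + 5)

def simpson(g, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

numeric = simpson(f, -7 * math.pi, 2 * math.pi, 20000)
closed = (9 * math.pi + 2 * math.atan(1 / math.sqrt(5))) / (2 * math.sqrt(5))
print(numeric, closed)  # both ≈ 6.5104
```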
|
|calculus|integration|riemann-integration|trigonometric-integrals|
| 0
|
Does $\sum^\infty_{n=1} \frac{\textrm{sin}^2(n) + n^n}{1 + n^2e^nn!}$ converge?
|
I need to prove that the infinite series $$ \sum^\infty_{n=1} \frac{\sin^2(n) + n^n}{1 + n^2e^nn!} $$ converges or diverges. I already showed that $n^n \leq n!e^n$ and $(1 + \frac{1}{n})^n \leq e$ $\forall n \in \mathbb{N}$ , which must surely be useful to prove the divergence/convergence of this series, but I don't know how. I am sure I must use the direct comparison test, i.e., compare the sequence to some other sequence of which I know that its series diverges/converges. I tried $$ \frac{\sin^2(n) + n^n}{1 + n^2e^nn!} \leq \frac{1+n^n}{1+n^2e^nn!} \leq \frac{1+n!e^n}{1+n^2e^nn!} \leq \frac{1+n!e^n}{n^2e^nn!} = \frac{1}{n^2}\left(\frac{1}{e^nn!} + 1\right) ... $$ and $$ \frac{\sin^2(n) + n^n}{1 + n^2e^nn!} \geq \frac{n^n}{1 + n^2e^nn!} ... $$ but I don't know how I can simplify those expressions further to get a useful result.
|
Of course, if you know the Stirling equivalent $$n!\sim \big(\frac ne\big)^n\sqrt{2\pi n},$$ you immediately get $u_n\sim \frac{n^{-5/2}}{\sqrt{2\pi}}$ , and the series converges by comparison with the $p$-series with $p=\frac52>1$ .
|
|real-analysis|calculus|sequences-and-series|analysis|convergence-divergence|
| 0
|
I'm very confused about this in Lax's book. Why does infinite decimal representation not produce infinite 9?
|
1.2 Numbers and the Least Upper Bound Theorem. 1.2a Numbers as Infinite Decimals. There are two familiar ways of looking at numbers: as infinite decimals and as points on a number line. The integers divide the number line into infinitely many intervals of unit length. If we include the left endpoint of each interval but not the right, we can cover the number line with nonoverlapping intervals such that each number $a$ belongs to exactly one of them, $n \leq a < n+1$ . Each interval can be subdivided into ten subintervals of length $\frac{1}{10}$ . As before, if we agree to count the left endpoint but not the right as part of each interval, the intervals do not overlap. Our number $a$ belongs to exactly one of these ten subintervals, say to $n + \frac{\alpha_{1}}{10} \leq a < n + \frac{\alpha_{1}+1}{10}$ . Thus, using the representation of $a$ as a point on the number line and the procedure just described, we can find $\alpha_{k}$ in $a = n.\alpha_1\alpha_2\ldots\alpha_k\ldots$ by determining the subinterval to which $a$ belongs at each stage.
|
Imagine what has to happen in order for the decimal expansion of a number $x \in \mathbb{R}$ to end with infinitely many nines. This means that starting with some interval $I = [a, b)$ , we have the following: $x$ belongs to $I$ . When $I$ is divided into $10$ subintervals of equal lengths, $x$ belongs to the rightmost of them; name this smaller interval $I_1$ . When $I_1$ is further divided into $10$ subintervals of equal lengths, $x$ again belongs to the rightmost of them; name it $I_2$ . The pattern continues. Effectively, $x$ belongs to the intersection of all $I_n$ 's. If we write $a = b - d$ for some $d > 0$ , we can see that $$\begin{align*} I & = [b - d, b) \\[1ex] I_1 & = \left[ b - \frac{d}{10}, b \right) \\[1ex] I_2 & = \left[ b - \frac{d}{100}, b \right) \\[1ex] & \vdots \end{align*}$$ Thus $x \in \bigcap_{n=0}^{\infty} \left[ b - \frac{d}{10^n}, b \right)$ . But this intersection is empty, so $x$ can not be there - hence it was impossible for the decimal expansion of $x$ to end with infinitely many nines.
|
|real-analysis|
| 1
|
If the function $f$ is a surjection, then the image of the preimage of a set under $f$ is that same set
|
Let $f$ be a function from $A$ to $B$ . Let $f$ be a surjection onto $B$ . I strongly think that for any $Y \subset B$ , we have $f(f^{-1}(Y))=Y$ . I also think the surjection condition is necessary and no other condition is necessary (e.g. $f$ might not be an injection). I have both the proof (hopefully correct) and the intuition. Am I right?
|
Yes, you are correct. For a surjective function $f:A\to B$ and a set $Y \subseteq B$ , we have $f(f^{-1}(Y)) = Y$ . Basically, the proof is about carefully writing what each concept means: Surjectivity of $f$ : By definition, being surjective means for every element $b \in B$ , there exists at least one element $a \in A$ such that $f(a) = b$ . That is, every element in the codomain $B$ has a preimage in the domain $A$ . Preimage: The preimage of $Y$ under $f$ , denoted by $f^{-1}(Y)$ , is the set of all elements in the domain that map to elements in $Y$ . That is, $f^{-1}(Y) = \{a \in A:f(a) \in Y\}$ . Image of the Preimage: Now, we need to consider the image of $f^{-1}(Y)$ under $f$ . This is the set of all elements in the codomain $B$ that are obtained by applying $f$ to elements in the preimage of $Y$ . That is, $f(f^{-1}(Y)) = \{b \in B:\exists\,a \in f^{-1}(Y) \text{ such that } b = f(a)\}$ . Showing Equality $f(f^{-1}(Y)) = Y$ : (a) $f(f^{-1}(Y)) \subseteq Y$ : Consider any element $b \in f(f^{-1}(Y))$ . Then $b = f(a)$ for some $a \in f^{-1}(Y)$ , and by definition of the preimage $f(a) \in Y$ , so $b \in Y$ . (This direction does not use surjectivity.) (b) $Y \subseteq f(f^{-1}(Y))$ : Consider any element $b \in Y$ . By surjectivity there is some $a \in A$ with $f(a) = b$ ; since $f(a) = b \in Y$ , we have $a \in f^{-1}(Y)$ , and therefore $b = f(a) \in f(f^{-1}(Y))$ .
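A finite brute-force check of the equivalence (surjectivity holds exactly when $f(f^{-1}(Y))=Y$ for every $Y$), over all functions from a 3-element set to a 2-element set:

```python
from itertools import product

A = range(3)          # domain {0, 1, 2}
B = {0, 1}            # codomain

def preimage(f, Y):
    # f is a tuple: f[a] is the image of a
    return {a for a in A if f[a] in Y}

def image(f, S):
    return {f[a] for a in S}

def subsets(X):
    X = list(X)
    for bits in product([0, 1], repeat=len(X)):
        yield {x for x, k in zip(X, bits) if k}

for f in product(sorted(B), repeat=len(A)):   # all 8 functions A -> B
    surjective = set(f) == B
    identity_holds = all(image(f, preimage(f, Y)) == Y for Y in subsets(B))
    assert identity_holds == surjective
```

For a non-surjective $f$ the identity fails precisely on sets $Y$ meeting the missed part of $B$, since $f(f^{-1}(Y)) = Y \cap f(A)$.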
|
|analysis|elementary-set-theory|
| 1
|
The correct way of looking at Fourier transform
|
I know that the Fourier transform states that any (suitably integrable) non-periodic function can be described as a superposition of sines and cosines, via $$F(\omega)=\int_{-\infty }^{\infty }f(t)e^{-i\omega t}\,dt.$$ This was derived from the Fourier series by regarding a non-periodic function as a periodic one whose period goes to infinity, so that any function can be described as a bunch of sines and cosines; that's why we transform our function to the frequency domain. But I learned recently that the Fourier transform is also a projection of our function onto the orthogonal basis functions $e^{-i\omega t}$ , and by projecting onto another basis we exhibit the frequencies contained in our function. So we can either say the transform reveals the frequency content by projection, or say it is a Fourier series with an infinite period. My question is: which of these is the correct way to think about the Fourier transform?
|
These explanations are not necessarily meant to be rigorous, but may help intuition. This said, the Fourier transform is indeed a projection onto an orthogonal basis, in the distributional sense that $\displaystyle\frac{1}{2\pi}\int_{-\infty}^\infty e^{i\omega t}\,\overline{e^{i\sigma t}}\,dt=\delta(\omega-\sigma)$ , while the idea of an infinite period involves a delicate conversion from a discrete to a continuous spectrum.
|
|fourier-analysis|fourier-series|fourier-transform|transformation|
| 0
|
Finding period of pendulum through interpolation
|
I'm looking for an efficient answer to this problem, which is to find the period of a pendulum using interpolation. The pendulum behavior is given by $\phi''+\frac g L \sin(\phi)=0$ , $\phi(0)=\frac{6\pi}{7}$ , $\phi'(0)=0.8$ , $0\leq t \leq T$ . I've rewritten this as a system of first-order differential equations and solved it using Runge-Kutta 4. You can find an image of the plotted solution below, where the blue graph shows the pendulum movement: graph. I want to interpolate over a subset of the two periods I've plotted to find the period, but I'm a little stuck deciding what to do, particularly which points I'm supposed to interpolate over and also what degree of polynomial I should pick. I've heard that you could get away with a low (1st, 2nd) degree polynomial, but I really cannot figure out how.
|
This is not an answer; the figure below is shown as an addition to my previous comments. The calculation was done with time increment $dt= 0.00001$ . If more accuracy is needed one has to use a smaller $dt$ . For example, with $dt= 0.00001$ the result found above for the period was $T= 7.59686$ . With $dt= 0.000001$ the result becomes $T= 7.596946$ , which is closer to the value computed with WolframAlpha, $T= 7.59696...$ Note : The initial angle $6\pi/7$ is large. The usual formulas for approximating the period are not convenient in this case because the deviation from small angles is too large. THEORY : The analytical solving leads to $$t(\phi)=\sqrt{\frac{2L}{g\,(B+1)}}\left(EllipticF\left(\frac{\phi}{2}\:\Bigg|\:\frac{2}{B+1} \right) - EllipticF\left(\frac{\phi_0}{2}\:\Bigg|\:\frac{2}{B+1} \right)\right)$$ $$B=\frac{L}{2g}(\phi'_0)^2-\cos(\phi_0)$$ $EllipticF$ is the function "Elliptic Integral of the first kind". The maximum of $\phi$ (i.e., the amplitude) is : $$\phi_{max}=\cos^{-1}(-B)$$ The period is : $$T=\cdots$$
|
|numerical-methods|interpolation|
| 0
|
Every interpolation pair is uniform for Banach category
|
I want to prove the following: if we restrict ourselves to the category $\mathcal{B}$ (Banach spaces), then every interpolation pair $(E,F)$ with respect to the compatible couples $(E_0,E_1),(F_0,F_1)$ is a uniform interpolation pair. I'm following the proof of J. Bergh, which states: We consider the set of morphisms between $(E_0,E_1)$ and $(F_0,F_1)$ such that $T$ is a morphism between $E$ and $F$ . We denote this space as $\mathcal{T}_1$ with the norm $$ \operatorname{max}\left(||T||_{\mathcal{L}(E,F)},||T||_{\mathcal{L}(E_0,F_0)},||T||_{\mathcal{L}(E_1,F_1)}\right),$$ and as $\mathcal{T}_2$ with the norm $$ \operatorname{max}\left(||T||_{\mathcal{L}(E_0,F_0)},||T||_{\mathcal{L}(E_1,F_1)}\right). $$ It is immediate to see that $\mathcal{T}_1$ and $\mathcal{T}_2$ are Banach spaces. Furthermore, the identity map $I:\mathcal{T}_1\to \mathcal{T}_2$ is a linear, bounded, and bijective operator. By the Banach isomorphism theorem, the inverse identity operator $I^{-1}:\mathcal{T}_2\to\mathcal{T}_1$ is bounded as well.
|
Saying that $I^{-1}$ is bounded is saying there exists some $C>0$ such that $$ \|T\|_{\mathcal{T}_1}=\|I^{-1}(T)\|_{\mathcal{T}_1} \le C\|T\|_{\mathcal{T}_2}. $$ But $$ \begin{align} \|T\|_{\mathcal{T}_1}&=\operatorname{max}\left(||T||_{\mathcal{L}(E,F)},||T||_{\mathcal{L}(E_0,F_0)},||T||_{\mathcal{L}(E_1,F_1)}\right)\\ \|T\|_{\mathcal{T}_2}&=\operatorname{max}\left(||T||_{\mathcal{L}(E_0,F_0)},||T||_{\mathcal{L}(E_1,F_1)}\right) \end{align} $$ so in particular $\|T\|_{\mathcal{L}(E,F)} \le C\operatorname{max}\left(||T||_{\mathcal{L}(E_0,F_0)},||T||_{\mathcal{L}(E_1,F_1)}\right)$ , which is exactly the uniformity estimate for the interpolation pair.
|
|functional-analysis|normed-spaces|banach-spaces|interpolation-theory|
| 1
|
Prove that $K=\cup_{n=1}^\infty K_n$ is compact
|
Considering $\mathbb{R}^2$ with the standard metric. Suppose that, for each integer $n\geq 1$ , there is a compact set $K_n \subseteq \mathbb{R}^2$ and a point $x_n \in\mathbb{R}^2$ such that: $K_n \subseteq B(x_n, 2^{-n})$ ; the sequence $(x_n)_{n=1}^\infty$ converges to a point $x \in K$ . Prove that $K=\cup_{n=1}^\infty K_n$ is compact. I know that the infinite union of compact sets is not necessarily compact, so I would like to know where that fails and why the convergence of the centres of the balls bounding each $K_n$ fixes that. I want to use the Heine-Borel theorem in $\mathbb{R}^n$ and therefore to prove that $K$ is closed and bounded, and have thought to say $K \subset B(x_1, \|x_1-x\|+1/2)$ , but do not see why I can say that the set is closed.
|
Let $\left\{y_m\right\}$ be a sequence in $K$ . If every $K_n$ contains only finitely many of the $y_m$ 's, then choose a $y_{m_1}$ belonging to some $K_{n_1}$ . Suppose that we have determined $y_{m_1},y_{m_2},\cdots,y_{m_k}$ with $m_1<m_2<\cdots<m_k$ and $n_1<n_2<\cdots<n_k$ . Now, since $\left\{y_m\right\}$ meets infinitely many $K_n$ 's, there must be infinitely many $K_n$ 's that do not contain $y_1,y_2,\cdots,y_{m_k}$ but do contain some later terms of the sequence. Therefore, we are able to choose a $y_{m_{k+1}}$ from the $y_m$ 's, with $m_{k+1}>m_k$ , belonging to a $K_{n_{k+1}}$ with $n_{k+1}>n_k$ . Since $y_{m_k}\in K_{n_k}\subseteq B(x_{n_k},2^{-n_k})$ and $x_{n_k}\to x$ , we have constructed a subsequence $\left\{y_{m_k}\right\}$ of $\left\{y_m\right\}$ converging to $x\in K$ . The other case is quite simple: if a subsequence of $\left\{y_m\right\}$ is contained in a single $K_n$ , then by compactness of $K_n$ it has a convergent subsequence with limit in $K_n\subseteq K$ . Now since every sequence in $K$ has a subsequence converging in $K$ , $K$ is compact.
|
|general-topology|compactness|
| 0
|
Prove that $K=\cup_{n=1}^\infty K_n$ is compact
|
Considering $\mathbb{R}^2$ with the standard metric. Suppose that, for each integer $n\geq 1$ , there is a compact set $K_n \subseteq \mathbb{R}^2$ and a point $x_n \in\mathbb{R}^2$ such that: $K_n \subseteq B(x_n, 2^{-n})$ ; the sequence $(x_n)_{n=1}^\infty$ converges to a point $x \in K$ . Prove that $K=\cup_{n=1}^\infty K_n$ is compact. I know that the infinite union of compact sets is not necessarily compact, so I would like to know where that fails and why the convergence of the centres of the balls bounding each $K_n$ fixes that. I want to use the Heine-Borel theorem in $\mathbb{R}^n$ and therefore to prove that $K$ is closed and bounded, and have thought to say $K \subset B(x_1, \|x_1-x\|+1/2)$ , but do not see why I can say that the set is closed.
|
I know that the infinite union of compact sets is not necessarily compact, so I would like to know where that fails and why the fact that the sequence of the centres of the sets that bound each $K_n$ fixes that. Let's see. So why would an infinite union of compact sets not be compact? By the Heine-Borel theorem: That union could be unbounded, even though each set is bounded. That's very clear, take say $\mathbb R = \cup_{n \geq 1} [-n,n]$ . What you see is that $\mathbb R$ is unbounded basically because, even though each set is bounded, for any particular fixed bound you give me there is a point in some set of the union that lies outside it. For example, you give me $74$ but then $[-75,75]$ is in the union. The best way of phrasing the above is that $\cup_{i=1}^{\infty} K_i$ won't be compact if there exists a sequence $y_i \in \cup_{i=1}^{\infty} K_i$ such that $\|y_i\| \to \infty$ , because that's what you get by contradicting boundedness. That union need not be closed, even though each set is closed: for example $(0,1] = \cup_{n\geq 1} [1/n, 1]$ is not closed in $\mathbb R$ , since $0$ is a limit point of the union that belongs to no $[1/n,1]$ .
|
|general-topology|compactness|
| 1
|
Prove $\mu$ is an outer measure, find the collection of all $\mu$-measurable sets and the necessary and sufficient conditions s.t. $\mu$ is a measure
|
I am trying to solve the exercise below. I have managed to prove that $\mu$ is an outer measure following the definition of outer measures ( $\mu(\emptyset)=0$ , monotonicity, $\sigma$ -subadditivity). However, I am struggling to find $\mathcal{M}^*$ , the collection of all $\mu$ -measurable sets. It is clear that $\emptyset, X \in \mathcal{M}^*$ but I can't find more sets $A$ which satisfy $ \forall E \subseteq X \; \; \mu(E) = \mu(E \cap A) +\mu(E \cap A^c)$ . I also managed to prove that if an outer measure is finitely additive, it is a measure in general, but I can't find the sufficient and necessary conditions to prove that $\mu$ is a measure. Exercise: If $X \ne \emptyset$ and $\mu$ is the set function defined on $\mathcal{P}(X)$ by \begin{equation} \mu(E) = \begin{cases} \text{Card}(E), & \text{Card}(E) < \infty, \\ +\infty, & \text{otherwise,} \end{cases} \end{equation} a) Prove that $\mu$ is an outer measure and find $\mathcal{M}^*$ . b) Prove that if an outer measure is finitely additive, it is also a measure. Find the necessary and sufficient conditions for $\mu$ to be a measure.
|
All subsets $A\subseteq X$ are $\mu$ -measurable. If $E\subseteq X$ then $E=(E\cap A)\sqcup(E\setminus A)$ . The cardinality of a disjoint union is always the sum of cardinalities, whether they're finite or not $$ \operatorname{Card}(E)=\operatorname{Card}(E\cap A)+\operatorname{Card}(E\setminus A)\tag{1} $$ If $E$ is finite, then all numbers in $(1)$ are finite and you get the conclusion $\mu(E)=\mu(E\cap A)+\mu(E\setminus A)$ . If $E$ is infinite then $\mu(E)=+\infty$ . Since an infinite set cannot be a union of two disjoint finite sets, either $E\cap A$ or $E\setminus A$ is an infinite set. Therefore either $\mu(E\cap A)=+\infty$ or $\mu(E\setminus A)=+\infty$ (or both). Either way $\mu(E)=\mu(E\cap A)+\mu(E\setminus A)$ .
|
|measure-theory|outer-measure|
| 1
|
Proving that the multiplication operator is closed
|
I was looking at this exercise: a) Let $f_n$ be a Cauchy sequence in $L^p(X,\mu)$ . Prove that there exists a subsequence that converges pointwise $\mu$ -almost everywhere. b) Let $p \in [1,\infty]$ , $d\ge1$ , and $m:\mathbb R^d\rightarrow \mathbb K$ be measurable. $$A:\operatorname{dom}(A)\rightarrow L^p(\mathbb R^d), \quad Af(x):=m(x)f(x)$$ is the multiplication operator with $\operatorname{dom}(A):=\{f \in L^p(\mathbb R^d) \mid mf \in L^p(\mathbb R^d)\}$ . Show that $A$ is closed. I could already show a), and the general idea of b) in combination with a) is also clear to me, but I can't quite write it out in full. I'd appreciate it if someone could help me out.
|
Take $(f_n,Af_n)=(f_n,mf_n) \to (f,g)$ where convergence is in $L^p$ . Then there exists a subsequence $\{f_{n_k}\}$ such that $f_{n_k}\to f$ a.e. To prove $f \in \operatorname{dom}(A)$ , we use Fatou's Lemma (note $|f_{n_k}m|^p \to |fm|^p$ a.e.): $$\int |fm|^{p}\,d\mu \leq \liminf_{k \to \infty}\int |f_{n_k}m|^{p}\,d\mu= \liminf_{k \to \infty}\|Af_{n_k}\|_p^{p},$$ which is finite since $\{Af_{n_k}\}_k$ is convergent in $L^p$ (hence bounded). Thus $fm \in L^p$ and $f \in \operatorname{dom}(A)$ . To prove $g=fm$ a.e., we use the same trick. Since $mf_{n}\to g$ in $L^p$ , every subsequence does, and hence $mf_{n_k}\to g$ in $L^p$ . Thus $$\|{g-mf}\|_p^{p}=\int \lim_{k \to \infty}|g-mf_{n_k}|^{p}\,d\mu\leq \liminf_{k \to \infty}\int |g-mf_{n_k}|^{p}\,d\mu=\liminf_{k \to \infty}\|g-mf_{n_k}\|_p^{p}=0,$$ thus $g=mf$ a.e. and hence $A$ is closed.
|
|functional-analysis|measure-theory|
| 0
|
Finding period of pendulum through interpolation
|
I'm looking for an efficient answer to this problem, which is to find the period of a pendulum using interpolation. The pendulum behavior is given by $\phi''+\frac g L \sin(\phi)=0$ , $\phi(0)=\frac{6\pi}{7}$ , $\phi'(0)=0.8$ , $0\leq t \leq T$ . I've rewritten this as a system of first-order differential equations and solved it using Runge-Kutta 4. You can find an image of the plotted solution below, where the blue graph shows the pendulum movement: graph. I want to interpolate over a subset of the two periods I've plotted to find the period, but I'm a little stuck deciding what to do, particularly which points I'm supposed to interpolate over and also what degree of polynomial I should pick. I've heard that you could get away with a low (1st, 2nd) degree polynomial, but I really cannot figure out how.
|
If I understand correctly, you want to compute the zero crossings from known data points on the curve. An easy solution is to identify the two points on both sides of a change of sign and perform linear interpolation between them. For better precision, as the derivatives are also available at these points, you can resort to inverse Hermite interpolation (interpolate $t$ as a cubic function of $\phi$ ). Example with a much exaggerated step.
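A sketch of the whole pipeline in Python: classical RK4 for the first-order system, then linear interpolation of the sign changes of $\phi'$ to locate turning points one period apart. The ratio $g/L = 1$ is an assumption here (the original problem does not fix it), so the numerical period is illustrative only.

```python
import math

# RK4 for the first-order system (phi, omega)' = (omega, -(g/L) sin phi).
# glr = g/L is set to 1.0 as an illustrative assumption.
def rk4_pendulum(phi0, omega0, glr=1.0, dt=1e-3, t_max=40.0):
    def deriv(y):
        return (y[1], -glr * math.sin(y[0]))
    t, y = 0.0, (phi0, omega0)
    traj = [(t, y[0], y[1])]
    while t < t_max:
        k1 = deriv(y)
        k2 = deriv((y[0] + 0.5 * dt * k1[0], y[1] + 0.5 * dt * k1[1]))
        k3 = deriv((y[0] + 0.5 * dt * k2[0], y[1] + 0.5 * dt * k2[1]))
        k4 = deriv((y[0] + dt * k3[0], y[1] + dt * k3[1]))
        y = (y[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
             y[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)
        t += dt
        traj.append((t, y[0], y[1]))
    return traj

# phi' changes sign at each turning point; consecutive downward crossings
# are one full period apart.  Linear interpolation between the bracketing
# samples gives the crossing time to O(dt^2).
def period_from_crossings(traj):
    times = []
    for (t0, _, w0), (t1, _, w1) in zip(traj, traj[1:]):
        if w0 > 0 >= w1:
            times.append(t0 + w0 / (w0 - w1) * (t1 - t0))
    return times[1] - times[0] if len(times) > 1 else None
```

For the problem's data one would call `period_from_crossings(rk4_pendulum(6 * math.pi / 7, 0.8))` with the actual $g/L$ substituted.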
|
|numerical-methods|interpolation|
| 1
|
Show that Foata map $\Phi$ implies there is a bijective map $co\Phi$ on $S_n$ such that $comaj(\sigma) = inv(co\phi(\sigma))$.
|
Problem Statement: If $\sigma$ is a permutation in $S_n$ define the co-major index of $\sigma$ as follows: $$\operatorname{comaj}(\sigma) = \sum_{\sigma_i > \sigma_{i+1}}(n-i).$$ Show that Foata map $\Phi$ implies there is a bijective map $co\Phi$ on $S_n$ such that $comaj(\sigma) = \operatorname{inv}(co\Phi(\sigma))$ . Refer to $\Phi$ and its properties. What I know $\operatorname{maj}(\sigma) = \sum_{\sigma_i > \sigma_{i+1}} i = \operatorname{inv}(\Phi(\sigma) $ My thought was to rewrite the comaj in terms of the maj as follows: $$\operatorname{comaj}(\sigma) = \sum_{\sigma_i > \sigma_{i+1}}(n-i) = n -\sum_{\sigma_i > \sigma_{i+1}}i$$ Then I realized that $n$ is probably not a constant and got stuck. I am not sure what the right approach to this problem is.
|
Consider the reversal of complement of $\sigma$ , $\sigma^{rc}$ , obtained by reading $\sigma$ right-to-left upside down (i.e. rotating its permutation diagram $180^\circ$ ). In other words, $\sigma^{rc}(i)=n+1-\sigma(n+1-i)$ . Then $i\in\operatorname{Des}(\sigma)$ , i.e. $\sigma(i)>\sigma(i+1)$ , is equivalent to $$ \sigma^{rc}(n-i)=n+1-\sigma(i+1)>n+1-\sigma(i)=\sigma^{rc}(n-i+1), $$ i.e. $n-i\in\operatorname{Des}(\sigma^{rc})$ . Therefore, $\operatorname{comaj}(\sigma^{rc})=\operatorname{maj}(\sigma)$ , and vice versa, $\operatorname{maj}(\sigma^{rc})=\operatorname{comaj}(\sigma)$ . Thus, $\operatorname{comaj}(\sigma)=\operatorname{maj}(\sigma^{rc})=\operatorname{inv}(\Phi(\sigma^{rc}))$ . So, we could define $\operatorname{co}\Phi=\Phi\circ(r\circ c)$ , where $r$ and $c$ are the reversal and complement maps, $r(\pi)(i)=\pi(n+1-i)$ , $c(\pi)(i)=n+1-\pi(i)$ .
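The descent-set identity $\operatorname{comaj}(\sigma)=\operatorname{maj}(\sigma^{rc})$ is easy to verify exhaustively for small $n$:

```python
from itertools import permutations

def maj(p):
    # major index: sum of descent positions (1-based)
    n = len(p)
    return sum(i + 1 for i in range(n - 1) if p[i] > p[i + 1])

def comaj(p):
    # co-major index: sum of n - i over descent positions i
    n = len(p)
    return sum(n - (i + 1) for i in range(n - 1) if p[i] > p[i + 1])

def rc(p):
    # reversal-of-complement: sigma^rc(i) = n + 1 - sigma(n + 1 - i)
    n = len(p)
    return tuple(n + 1 - p[n - 1 - i] for i in range(n))

assert all(comaj(p) == maj(rc(p)) for p in permutations(range(1, 6)))
```

Composing with the Foata map $\Phi$ then turns $\operatorname{maj}$ into $\operatorname{inv}$, as in the answer.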
|
|combinatorics|
| 0
|
Show that $A=\{x \in \mathbb{R}^2: d_a(x,0) \leq 1\}$ is not compact.
|
Considering the metric in $\mathbb{R}^2$ : \begin{equation} d_{a}(x,y)=\begin{cases} \lVert x-y \rVert & \text{if $x$ and $y$ lie on the same line through the origin}, \\ \lVert x \rVert + \lVert y \rVert & \text{otherwise,} \end{cases} \end{equation} show that $A=$ { $x \in \mathbb{R}^2: d_a(x,0) \leq 1$ } is not compact. I want to show that either it is not bounded or that it is not closed. However, it is clearly bounded because $A \subset $ { $x \in \mathbb{R}^2: d_a(x,0) \leq 1+\epsilon$ }. But I am having trouble showing it is not closed, because the only convergent sequences are those that run along a single line through the origin, and along such a line we are working with the usual metric, so the limits are all in the closed unit ball. I also tried to prove that $A^c$ is not open, but without luck: balls of radius smaller than $1$ centred outside $A$ are simply segments of lines through the origin, and one can find an $\epsilon$ such that $B(x,\epsilon) \subset A^c$ , so I am getting that $A$ is closed.
|
In a general metric space, "closed and bounded" is not equivalent to "compact". Instead, try finding an open cover that does not have a finite subcover. Hint: two points are only "close to each other" if they are on the same line through the origin. This means that two distinct lines through the origin are far away from each other, no matter how close they may seem when thinking of $\Bbb{R}^2$ in the usual way. So, when thinking about this space, pretend any two lines are orthogonal. (You get a sort of weird infinite-dimensional star.)
|
|general-topology|compactness|
| 0
|
Methods for Determining Divisibility by 4 for the Formula $2^n - 46$
|
I'm working on a problem where I need to determine the conditions under which $2^n - 46$ is divisible by 4, where $n$ is a non-negative integer. I understand that for any power of 2 greater than $2^2$ , the result is always divisible by 4. However, when subtracting 46 from $2^n$ , I'm unsure how to systematically approach or solve for $n$ to ensure the result remains divisible by 4. I've considered direct computation for small values of $n$ and observed patterns, but I'm looking for a more general method or a mathematical insight that could help solve this more efficiently or elegantly. Specifically, I'm interested in any theorems, properties, or techniques that could be applied to this problem. Could anyone provide guidance on how to approach this problem or point me towards relevant mathematical concepts or methods that could simplify determining the divisibility of $2^n - 46$ by 4? Thank you in advance for any assistance!
|
You may also write $2^n-46=2(2^{n-1}+1)-48$ . Since $4\mid 48$ , we get $4\mid 2^n-46 \iff 2\mid 2^{n-1}+1$ . But $2^{n-1}+1$ is odd for every $n\ge 2$ ; it is even only when $n=1$ , where $2^{0}+1=2$ . And indeed $2^1-46=-44$ is divisible by $4$ , so $n=1$ is the only solution (for $n=0$ , $2^0-46=-45$ is odd).
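A quick exhaustive check over small exponents confirms this (for $n\ge 2$ one has $2^n\equiv 0$ and $-46\equiv 2 \pmod 4$, so only $n=1$ can work):

```python
# Which n in 0..63 make 2^n - 46 divisible by 4?
hits = [n for n in range(64) if (2 ** n - 46) % 4 == 0]
print(hits)  # [1]
```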
|
|logarithms|diophantine-equations|
| 0
|
$f = X^{3} + 2$ irreducible in $\mathbb{F}_{49}[X]$.
|
Question : Prove that $f = X^{3} + 2$ is irreducible in $\mathbb{F}_{49}[X]$ . Is $f$ irreducible in $\mathbb{F}_{7^{n}}$ for all even $n$ ? My attempt : Notice that $\deg(f) = 3$ , therefore $f$ is irreducible in $\mathbb{F}_{49}[X]$ if and only if $f$ has no roots. We will first look at $f$ in $\mathbb{F}_{7}[X]$ , since if $f$ would have roots in $\mathbb{F}_{7}$ then it would also have roots in $\mathbb{F}_{49}$ because it is a subfield. With checking we find that $f$ has no roots in $\mathbb{F}_{7}$ . Now let $\alpha \in \overline{\mathbb{F}_{7}}$ be a root of $f$ . Then, it follows that $\mathbb{F}_{7}(\alpha)$ is a finite field extension of $\mathbb{F}_{7}$ of degree 3. Then it follows that $\# \mathbb{F}_{7}(\alpha) = 7^{3}$ and by uniqueness of this field we have $\mathbb{F}_{7}(\alpha) \cong \mathbb{F}_{7^{3}}$ . Now, we clearly see that a field containing a root of $f$ must contain $\mathbb{F}_{7^3}$ , it follows directly that this root is not contained in $\mathbb{F}_{49}$
|
It is correct, since the minimal polynomial $m_\alpha \in \mathbb{F}_7[X]$ of $\alpha$ divides $X^3+2$ , because $\alpha^3+2 = 0$ . It has to be $\deg m_\alpha = 3$ , since otherwise $X^3 +2 = m_\alpha g$ for some $g \in \mathbb{F}_7[X]$ with $\deg g = 1$ or $\deg m_\alpha = 1$ , a contradiction to the non-existence of roots of $X^3+2$ in $\mathbb{F}_7$ . Therefore $\mathbb{F}_7(\alpha) \cong \mathbb{F}_{7^3}$ .
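A brute-force confirmation that $x^3+2$ has no root in $\mathbb{F}_{49}$, modeling $\mathbb{F}_{49}$ as $\mathbb{F}_7[i]$ with $i^2=-1$ (this works because $-1$ is not a square mod $7$: the squares mod $7$ are $1,2,4$):

```python
# Elements of F_49 are pairs (a, b) representing a + b*i with i^2 = -1.
def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % 7, (a * d + b * c) % 7)

def cube(x):
    return mul(x, mul(x, x))

# x^3 + 2 = 0  <=>  x^3 = -2 = 5 in F_7
roots = [(a, b) for a in range(7) for b in range(7) if cube((a, b)) == (5, 0)]
assert roots == []   # no roots, so the degree-3 polynomial is irreducible over F_49
```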
|
|abstract-algebra|field-theory|galois-theory|finite-fields|
| 1
|
Is it possible to put an equilateral triangle onto a square grid so that all the vertices are in corners?
|
In the following collection of problems - arXiv:1110.1556v2 [math.HO] - the following question is posed: Is it possible to put an equilateral triangle onto a square grid so that all the vertices are in corners? The first approach that springs to mind is to use Pick's Theorem (e.g. http://www.geometer.org/mathcircles/pick.pdf ) assuming that all vertices are on lattice points. It turns out that it is not possible (by Pythagoras, the area of an equilateral triangle with two vertices on lattice points is a rational multiple of $\sqrt3$). My question then is - how can one establish the impossibility of such a placement without resorting to Pick's Theorem?
|
WLOG, let one of the vertices lie at the origin and call the others $(a,b)$ and $(c,d)$ . The squares of the three side lengths are then $$\begin{align}a^2+b^2=c^2+d^2&=(a-c)^2+(d-b)^2\\&=a^2+b^2+c^2+d^2-2(ac+bd)\end{align}$$ We can assume that not all of $a,b,c,d$ are even, otherwise we could divide by $2$ and pass to a smaller equilateral triangle. Note $a^2+b^2 \bmod 4 \in \{0,1,2\}$ , and $a^2+b^2\equiv_4 0$ forces $a,b$ (and likewise $c,d$ ) all even, contradicting our assumption. Now let's consider the remaining cases: $a^2+b^2\equiv_4 1\iff c^2+d^2\equiv_4 1$ . This yields $$a^2+b^2+c^2+d^2-2(ac+bd)\equiv_4 2(1-ac-bd),$$ which is even, so it cannot be $\equiv_4 1$ . This is impossible. $a^2+b^2\equiv_4 2\iff c^2+d^2\equiv_4 2$ . Then $a,b,c,d$ are all odd, so $ac+bd$ is even and $$a^2+b^2+c^2+d^2-2(ac+bd)\equiv_4 2(ac+bd)\equiv_4 0,$$ not $2$ . This is also impossible.
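A finite check agrees with the parity argument: a small exhaustive search (one vertex pinned at the origin, and the box size $N=8$ is an arbitrary choice) finds no equilateral lattice triangle.

```python
# Exhaustive search: with one vertex at the origin (WLOG), look for lattice
# points (a, b), (c, d) in a box forming an equilateral triangle.
N = 8
found = [((a, b), (c, d))
         for a in range(-N, N + 1) for b in range(-N, N + 1)
         for c in range(-N, N + 1) for d in range(-N, N + 1)
         if (a, b) != (0, 0) and (c, d) != (0, 0) and (a, b) != (c, d)
         and a * a + b * b == c * c + d * d == (a - c) ** 2 + (b - d) ** 2]
assert found == []   # no equilateral lattice triangle in this box
```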
|
|geometry|euclidean-geometry|analytic-geometry|
| 0
|
Seeking the distribution of a ratio of (correlated) sums of squares of iid standard normal variables
|
Suppose given $n$ random variables $X_1,\ldots,X_n$ , $n > 1$ , $X_i \text{ iid } \sim N(0,1)$ , and a number $1 \le m < n$ . I am looking for the distribution of the r.v. $$ R = \frac{X_1^2 + \ldots + X_m^2}{X_1^2 + \ldots + X_n^2} $$ So $R$ is a ratio between correlated $\chi^2$ variables, hence not $F$ -distributed. In fact I actually want the distribution on $[0,\pi/2]$ of $$ \Theta = \arccos\left(\sqrt R\right) $$ (positive square root, so $\sqrt R \in [0,1]$ ), where $\arccos$ is the inverse cosine function; $\arccos(x) \in [0,\pi/2]$ for $x \in [0,1]$ . ( $\Theta$ is the angle between a uniform random unit vector in $\mathbb{R}^n$ and the $m$ -dimensional subspace of $\mathbb{R}^n$ spanned by the first $m$ coordinate axes.)
|
It's a $\text{Beta}(m/2,(n-m)/2)$ , which is not surprising, since the numerator is a fraction of the denominator. Let $$A=X_1^2+\ldots + X_m^2$$ and $$B=X_{m+1}^2+\ldots + X_n^2.$$ So $A,B$ are independent $\chi^2$ . $A$ has $m$ degrees of freedom and $B$ has $n-m$ degrees of freedom. You want the distribution of $$R=\frac{A}{A+B}.$$ Let $S=A+B$ . Note that $A=RS$ and $B=(1-R)S$ . Taking the Jacobian, we find $$\det\begin{pmatrix} \frac{\partial A}{\partial R} & \frac{\partial B}{\partial R} \\ \frac{\partial A}{\partial S} & \frac{\partial B}{\partial S}\end{pmatrix} = \det\begin{pmatrix} S & -S \\ R & 1-R\end{pmatrix}=S.$$ By change of variables, we have \begin{align*}f_{R,S}(r,s) & = f_A(a)f_B(b)s \propto a^{\frac{m}{2}-1}\exp\Bigl({-\frac{a}{2}}\Bigr) b^{\frac{n-m}{2}-1}\exp\Bigl({-\frac{b}{2}}\Bigr) s \\ & = (rs)^{\frac{m}{2}-1}[(1-r)s]^{\frac{n-m}{2}-1} \exp\Bigl(-\Bigl[\frac{rs+(1-r)s}{2}\Bigr]\Bigr)s \\ & = g(s)r^{\frac{m}{2}-1}(1-r)^{\frac{n-m}{2}-1}.\end{align*} Here, $g(s)$ is a function of $s$ alone, so the joint density factors: $R$ and $S$ are independent, and the marginal density of $R$ is proportional to $r^{\frac{m}{2}-1}(1-r)^{\frac{n-m}{2}-1}$ , i.e. $R\sim\text{Beta}\bigl(\frac{m}{2},\frac{n-m}{2}\bigr)$ .
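As a sanity check on the $\text{Beta}(m/2,(n-m)/2)$ claim, one can compare a Monte Carlo estimate of $E[R]$ with the Beta mean $\frac{m/2}{n/2}=m/n$ (a sketch; the parameter choices and function name are mine):

```python
import random

def mean_R(n, m, trials, seed=42):
    # Empirical mean of R = (X_1^2+...+X_m^2) / (X_1^2+...+X_n^2)
    # for iid standard normals X_i.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sq = [rng.gauss(0.0, 1.0) ** 2 for _ in range(n)]
        total += sum(sq[:m]) / sum(sq)
    return total / trials
```

With $n=5$, $m=2$ the estimate should sit near $m/n = 0.4$.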
|
|random-variables|normal-distribution|
| 1
|
Show that $x^5+x^3+x+1$ is irreducible over $\mathbb Q$.
|
Show that $p(x)=x^5+x^3+x+1$ is irreducible. My attempt: First, I tried shifting for Eisenstein up to $10$ , i.e., $p(x+a)$ for $a=1,\dots, 10$ , using a calculator. I did not continue because, if I am being honest, you get bored after several failures. Also, please do not recommend the rational root test for irreducibility. Actually my original question was $x^5+Ax^3+Ax+1$ where $A \in \mathbb Z$ . I set myself this example to study my own special version, but I can't even show the easier version. My question could also be: is $x^7+Ax^5+Bx^3+Ax+1$ , with $A,B\in\mathbb Z$ , irreducible? (Yes! But how can we show it?) I couldn't solve this example either. I know that one way to show it is the following: if a polynomial is irreducible modulo some prime $p$ , then it is irreducible over $\mathbb Z$ and, by Gauss's Lemma, irreducible over $\mathbb Q$ . Any help is appreciated.
|
Since the degree of $p(x)$ is low, we can combine the Rational Root Theorem with a coefficient comparison. Assume that $p(x)$ is reducible in $\mathbb{Q}[x]$ , so there exist $g(x),h(x)$ with $p(x)=g(x)h(x)$ . Moreover $\deg g(x)=3$ and $\deg h(x)=2$ : if either factor had degree $1$ , then $p(x)$ would have a rational root, but it has none (the only candidates are $\pm1$ , and $p(1)=4$ , $p(-1)=-2$ ). Thus we can let $g(x)=x^3+a_1x^2+a_2x+1$ and $h(x)=x^2+a_3x+1$ (the other case, $g(x)=x^3+a_1x^2+a_2x-1$ and $h(x)=x^2+a_3x-1$ , is handled the same way). Writing out $p(x)=g(x)h(x)$ and comparing coefficients, we get $ \begin{cases} a_1+a_3=0\\ a_2+a_1a_3+1=1\\ a_1+a_2a_3+1=0\\ a_2+a_3=1 \end{cases} $ But this system has no solution with $a_3\in\mathbb{Q}$ : the first two equations give $a_1=-a_3$ and $a_2=a_3^2$ , and then the last equation forces $a_3^2+a_3=1$ , which has no rational root. I think this method cannot handle the situation where $\deg p(x)$ is too high.
|
|abstract-algebra|irreducible-polynomials|
| 0
|
Show that $A=\{x \in \mathbb{R}^2: d_a(x,0) \leq 1\}$ is not compact.
|
Considering the metric in $\mathbb{R}^2$ : \begin{equation} d_{a}(x,y)=\begin{cases} \lVert x-y \rVert & \text{if $x$ and $y$ lie on the same line through the origin}, \\ \lVert x \rVert + \lVert y \rVert & \text{otherwise,} \end{cases} \end{equation} show that $A=\{x \in \mathbb{R}^2: d_a(x,0) \leq 1\}$ is not compact. I want to show that either it is not bounded or that it is not closed. However, it is clearly bounded because $A \subset \{x \in \mathbb{R}^2: d_a(x,0) \leq 1+\epsilon\}$ . But I am having trouble showing that it is not closed, because the only convergent sequences are those that move along a single line through the origin, where we are working with the usual metric, so all the limits lie in the closed unit ball. I also tried to prove that $A^C$ is not open, but without luck: balls around points outside $A$ with radius smaller than $1$ are simply segments of lines pointing towards the origin, and one can find an $\epsilon$ such that $B(x,\epsilon) \subset A^C$ , so I am getting that $A$ is closed.
|
An alternative way to see it is by using the fact that in a metric space, compactness and sequential compactness are equivalent. To do so, consider the sequence $a_n = \left(\cos \frac{1}{n}, \sin \frac{1}{n}\right)$ . First show that $a_n \in A$ and then notice that $d_a(a_n, a_m) = 2$ for all $m\neq n$ which implies that $a_n$ cannot have a convergent subsequence (since any convergent subsequence would be Cauchy).
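The two facts used here, $a_n\in A$ and $d_a(a_n,a_m)=2$ for $m\neq n$, are easy to confirm numerically (a small sketch; the implementation of $d_a$ and the collinearity tolerance are my own choices):

```python
import math

def d_a(x, y, tol=1e-12):
    # "French railway" metric: usual distance if x and y are collinear with
    # the origin, otherwise the trip passes through the origin.
    cross = x[0] * y[1] - x[1] * y[0]
    if abs(cross) < tol:
        return math.hypot(x[0] - y[0], x[1] - y[1])
    return math.hypot(x[0], x[1]) + math.hypot(y[0], y[1])

def a(n):
    # The sequence from the answer: points on the unit circle at angle 1/n.
    return (math.cos(1.0 / n), math.sin(1.0 / n))
```

Each `a(n)` is at $d_a$-distance $1$ from the origin (so it lies in $A$), while any two distinct terms are at distance exactly $2$.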
|
|general-topology|compactness|
| 1
|
Show that $x^5+x^3+x+1$ is irreducible over $\mathbb Q$.
|
Show that $p(x)=x^5+x^3+x+1$ is irreducible. My attempt: First, I tried shifting for Eisenstein up to $10$ , i.e., $p(x+a)$ for $a=1,\dots, 10$ , using a calculator. I did not continue because, if I am being honest, you get bored after several failures. Also, please do not recommend the rational root test for irreducibility. Actually my original question was $x^5+Ax^3+Ax+1$ where $A \in \mathbb Z$ . I set myself this example to study my own special version, but I can't even show the easier version. My question could also be: is $x^7+Ax^5+Bx^3+Ax+1$ , with $A,B\in\mathbb Z$ , irreducible? (Yes! But how can we show it?) I couldn't solve this example either. I know that one way to show it is the following: if a polynomial is irreducible modulo some prime $p$ , then it is irreducible over $\mathbb Z$ and, by Gauss's Lemma, irreducible over $\mathbb Q$ . Any help is appreciated.
|
Working over $\Bbb{F}_2$ it is clear that $1\in\Bbb{F}_2$ is a root of $x^5+x^3+x+1\in\Bbb{F}_2[x]$ , and we find that $$x^5+x^3+x+1\equiv(x+1)(x^4+x^3+1)\pmod{2}.$$ The quartic factor clearly has no roots in $\Bbb{F}_2$ , and because the only irreducible quadratic over $\Bbb{F}_2$ is $x^2+x+1$ , it suffices to verify that $$x^4+x^3+1\neq(x^2+x+1)^2,$$ to conclude that $x^4+x^3+1$ is irreducible over $\Bbb{F}_2$ . Over $\Bbb{Q}$ , this means that $x^5+x^3+x+1$ has an irreducible factor of degree at least $4$ . That is to say, it is either irreducible or it has a root in $\Bbb{Q}$ . The rational root test tells you that if it has a root in $\Bbb{Q}$ , then that root must be $1$ or $-1$ . But a quick check shows that these are not roots, and hence the original polynomial is irreducible over $\Bbb{Q}$ . The proof of the more general case of $x^5+Ax^3+Ax+1$ can be handled in a similar way, albeit with a bit more work. Of course the argument above holds identically for any odd $A\neq1$ . But for even $A$ the reduction modulo $2$ becomes $x^5+1\equiv(x+1)(x^4+x^3+x^2+x+1)\pmod 2$ , so the factorization to analyse is different.
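The mod-$2$ factorization and the rational-root check are quick to verify by machine (a sketch; helper names are mine):

```python
def polymul_mod2(p, q):
    # Multiply polynomials over F_2; coefficient lists, lowest degree first.
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= a & b  # addition mod 2 of the bit product
    return r

# x^5 + x^3 + x + 1, lowest degree first
target = [1, 1, 0, 1, 0, 1]
# (x + 1) * (x^4 + x^3 + 1) over F_2
factored = polymul_mod2([1, 1], [1, 0, 0, 1, 1])

def p(x):
    # The original integer polynomial, for the rational-root check.
    return x ** 5 + x ** 3 + x + 1
```

The product reproduces the target coefficients, and neither $1$ nor $-1$ is an integer root.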
|
|abstract-algebra|irreducible-polynomials|
| 1
|
In proof that smooth point is regular ( First quetion, Gortz's Algebraic Geometry )
|
I am reading Gortz's Algebraic Geometry, proof of Lemma 6.26, and trying to prove a statement in it. First I propose a question. Let $R:=k[T_1, \dots,T_n]$ be a polynomial ring with closed point $y\in \operatorname{Spec}R$ so that $\mathfrak{p}_y$ is a maximal ideal in $R$ . Let $\mathfrak{m}_y$ be the maximal ideal of the local ring $R_{\mathfrak{p}_y}$ ; i.e., $\mathfrak{m}_y := \mathfrak{p}_y R_{\mathfrak{p}_y}$ . Let $f_1, \dots f_r$ be elements of $\mathfrak{p}_y \subseteq R$ and $a_1, \dots a_r $ be elements of $k$ . Consider their images $f_1/1 , \dots , f_r/1 \in \mathfrak{m}_y $ . Q. Then, my question is: if $a_1 \cdot \frac{f_1}{1} + \cdots + a_r \cdot \frac{f_r}{1} = \frac{a_1 f_1 + \cdots + a_r f_r}{1} \in \mathfrak{m}_y^2 $ , does it follow that $\Sigma_{i} a_i \frac{\partial f_i}{\partial T_j} \in \mathfrak{p}_y$ for all $j$ ? I am trying to prove this by brute force and haven't managed yet. This question originates from the following proof ( Gortz's Algebraic Geometry book, Lemma 6.26 )
|
For notation, $\mathcal{O}_{X,x} \simeq \mathcal{O}_{\mathbb{A}_k^n,y}/(f_1,...,f_{n-d})$ is an abbreviation, so yes, if you write things in coordinates like $k[x_1,...,x_n]_{\mathfrak{p}}$ then we should write $f_1/1,...,f_{n-d}/1$ instead. For the bold statement, note that in Example 6.5 the authors assume that the point $x$ is a $k$-point, that is, $\kappa(x)=k$ . This simply makes it possible to obtain a concrete description of the tangent space. The proof of Lemma 6.26 continues by trying to obtain information from this case through base change. As for your first concern, the answer is yes as well: the derivative $D=\frac{\partial}{\partial x}$ sends $\mathfrak{m}^2$ into $\mathfrak{m}$ , which is clear from the Leibniz rule $D(fg) = fD(g) + gD(f)$ .
|
|abstract-algebra|algebraic-geometry|commutative-algebra|
| 1
|
Existence of a kind of balanced tournament schedule
|
Recently I was confronted with another tournament design problem: Suppose we have a tournament with $2n$ teams ( $n\in \mathbb{N}$ ). We have $n$ different types of games (say at $n$ distinct locations), to be played by a pair of teams at a time. The tournament will be played in $2n$ rounds. In the first $n$ rounds as well as in the final $n$ rounds, each team should have played at every location. Each team should have confronted each of the other $2n-1$ teams in contest. For which $n$ can this problem be solved and with what method? Relevant references / terminology? I tried to solve it for $n=6$ (for practical application) and nearly succeeded. I got stuck with an arrangement where the final constraint is not completely satisfied.
|
Suppose $n\geq 7$ and $n$ odd; then such a tournament schedule exists. For the first $n$ rounds, we arrange for game $k\in\{1,...,n\}$ to be played at round $j\in\{1,...,n\}$ by the following pair of teams: team $(k-j) \text{ mod } n$ plays against team $((k+j) \text{ mod }n)+n$ . (I use here a mod function that maps to $\{1,...,n\}$ .) In this way, each of the teams $1,...,n$ will have played against each of the teams $n+1,...,2n$ in the first $n$ rounds. For the final $n$ rounds, the schedule will rely on an auxiliary standardized skew Room square $S_n$ of size $n \times n$ using the standard alphabet $\{\infty,1,...,n\}$ . Such a Room square has the pair $(\infty,x)$ in its $(x,x)$-entry (i.e. the $x$ 'th entry along the diagonal), and if the $(i,j)$-entry is non-empty, then the $(j,i)$-entry is empty. Remove the $\infty$ symbols from the diagonal entries in $S_n$ (the resulting object $S_n'$ has unpaired numbers in the diagonal). Now let $Q_n$ be the $n\times n$ square obtained b
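The first-half schedule can be verified programmatically for small odd $n$: every cross pair meets exactly once, and every team visits every location exactly once (a sketch; function names are mine):

```python
def mod1(x, n):
    # Reduce to the representative set {1, ..., n}, as in the answer.
    return ((x - 1) % n) + 1

def first_half(n):
    # games[(round j, location k)] = (team from 1..n, team from n+1..2n)
    return {(j, k): (mod1(k - j, n), mod1(k + j, n) + n)
            for j in range(1, n + 1) for k in range(1, n + 1)}

def check_first_half(n):
    g = first_half(n)
    # Every cross pair {1..n} x {n+1..2n} occurs exactly once.
    pairs_ok = sorted(g.values()) == sorted(
        (a, b) for a in range(1, n + 1) for b in range(n + 1, 2 * n + 1))
    # Every team plays at every location exactly once over the n rounds.
    locs_ok = all(
        len({k for (j, k), teams in g.items() if t in teams}) == n
        for t in range(1, 2 * n + 1))
    return pairs_ok and locs_ok
```

For even $n$ the map $(j,k)\mapsto(k-j,\,k+j)$ is no longer injective mod $n$, which is why oddness matters here.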
|
|combinatorics|reference-request|
| 0
|
What Things can be Linearly Independent?
|
Is linear independence confined to strictly lists of vectors, or does it extend to vector spaces, subspaces, etc? If it isn't just lists of vectors what are the most common things that can be linearly independent?
|
This is more a collection of remarks than an answer, but is both long for a comment and posted in the spirit of not answering in the comments. If (or since) we're splitting hairs: In my experience with introductory linear algebra books, the property of linear independence generally applies to sets of vectors (i.e., to subsets of a vector space), not lists (i.e., mappings from an index set taking values in a vector space). One can meaningfully (and perhaps usefully) define linear independence for lists, however. In a similar vein, elementary books assume sets of vectors are ordered even when not explicitly stated. That is, "sets" of vectors are in some respects "listy." For example, one speaks of "the standard matrix of a linear transformation" which implicitly comes with an ordering of rows and columns; one has "the orientation determined by a basis" which again implicitly assumes an ordering. My book explicitly refers throughout to ordered bases, though in my experience doing so is not universal.
|
|linear-algebra|linear-independence|
| 0
|
why this equality involving matrix holds true?
|
I am studying Lemma 11 of the paper. I am having difficulty understanding the last step; in particular, I have two questions. The first question: (1) In the first equality of the last step on page 15, the authors use $$u^TSu=u^T(nI_n-zz^T+\sigma [\text{ddiag}(\Delta zz^T-\Delta)])u$$ where $S=\text{ddiag}(Azz^T)-A$ , and $\text{ddiag}$ is an operator that sets all off-diagonal entries of a matrix to zero. But I don't understand why this is correct. My confusion is: I understand that $$S=\text{ddiag}(Azz^T)-A=\text{ddiag}((zz^T+\sigma\Delta)zz^T)-zz^T-\sigma\Delta=\text{ddiag}((zz^T)(zz^T))+\sigma\text{ddiag}(\Delta zz^T)-\sigma\Delta-zz^T$$ As we can see, the first term $\text{ddiag}((zz^T)(zz^T))=nI_n$ . Thus I would need $\text{ddiag}(\Delta zz^T)-\Delta=\text{ddiag}(\Delta zz^T-\Delta)$ . But I don't see why this is true. According to the paper, the matrix $\Delta$ is not a diagonal-free matrix, thus the non-diagonal entries of the left-hand side $\text{ddiag}(\Delta zz^T)-\Delta$ are $-\Delta_{ij}$ , which need not vanish.
|
In the paper you have $A=zz^T+\sigma\Delta$ and $z\in\{\pm1\}^n$ . Therefore $z^Tz=n$ and the matrix $zz^T$ has a diagonal of ones. It follows that $$ \operatorname{ddiag}(zz^Tzz^T) =\operatorname{ddiag}\big(z(z^Tz)z^T\big) =\operatorname{ddiag}(nzz^T) =nI_n $$ and in turn, \begin{align*} S &=\operatorname{ddiag}(Azz^T)-A\\ &=\operatorname{ddiag}\big( (zz^T+\sigma\Delta) zz^T\big) - (zz^T+\sigma\Delta)\\ &=nI_n+\sigma\operatorname{ddiag}(\Delta zz^T) - zz^T - \sigma\Delta.\\ \end{align*} For your second question, note that $\operatorname{ddiag}(uv^T)=\operatorname{diag}(u\odot v)=\operatorname{diag}(v\odot u)$ in general, where $\odot$ denotes the Hadamard (i.e., entrywise) product. Therefore $$ u^T\operatorname{ddiag}(\Delta zz^T)u \geq -\|u\|_2^2\,\|\operatorname{diag}(z\odot\Delta z)\|_2 = -\|u\|_2^2\|z\odot\Delta z\|_\infty = -\|u\|_2^2\|\Delta z\|_\infty. $$ For your last question, the constant $\sigma$ is clearly missing. The expression $-\|u\|_2^2\|\Delta\|$ should read $-\sigma\|u\|_2^2\|\Delta z\|_\infty$ .
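The identity $\operatorname{ddiag}(uv^T)=\operatorname{diag}(u\odot v)$ is easy to spot-check on a small example (a sketch; the helper functions are mine):

```python
def ddiag(M):
    # Zero out all off-diagonal entries of a square matrix.
    n = len(M)
    return [[M[i][j] if i == j else 0 for j in range(n)] for i in range(n)]

def outer(u, v):
    # Rank-one matrix u v^T as nested lists.
    return [[ui * vj for vj in v] for ui in u]

u = [1, -2, 3]
v = [4, 0, -5]
lhs = ddiag(outer(u, v))
# diag(u ⊙ v): the Hadamard product placed on the diagonal.
rhs = [[u[i] * v[i] if i == j else 0 for j in range(3)] for i in range(3)]
```

Both sides carry exactly the entries $u_iv_i$ on the diagonal and zeros elsewhere.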
|
|matrices|inequality|matrix-calculus|positive-semidefinite|matrix-norms|
| 0
|
Outer product of row vectors. Does $\mathbf{x}^T \otimes \mathbf{y}^T$ = $\mathbf{x} \otimes \mathbf{y}$?
|
Does $\mathbf{x}^T \otimes \mathbf{y}^T$ = $\mathbf{x} \otimes \mathbf{y}$ ?
|
The outer product is equivalent to the standard matrix product if we interpret (column) vectors as $n \times 1$ matrices: $$ \mathbf{x} \otimes \mathbf{y} = \mathbf{x} \, \mathbf{y}^\top. $$ You can see by looking at dimensions that this makes sense. In order for matrix multiplication to be defined, the column count on the left must match the row count on the right, and here we have $$ (n \times 1)(1 \times n) = (n \times n). $$ On the other hand, $\mathbf{x}^\top \otimes \mathbf{y}^\top$ doesn't make sense as an outer product, as $\mathbf{x}^\top$ and $\mathbf{y}^\top$ are row vectors (often interpreted as covectors in many contexts). If we just go ahead and blindly compute $$ \mathbf{x}^\top \otimes \mathbf{y}^\top = \mathbf{x}^\top (\mathbf{y}^\top)^\top = \mathbf{x}^\top \mathbf{y}, $$ what we have is a row vector multiplied by a column vector, which yields a scalar since $$ (1 \times n)(n \times 1) = (1 \times 1). $$ This is called the inner product of $\mathbf{x}$ and $\mathbf{y}$ .
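The dimension bookkeeping can be made concrete with a tiny matrix-product sketch (the vector values are arbitrary examples):

```python
def matmul(A, B):
    # Plain matrix product of nested lists; A is p×q, B is q×r.
    p, q, r = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(q)) for j in range(r)]
            for i in range(p)]

x = [[1], [2], [3]]       # column vector, 3×1
y = [[4], [5], [6]]       # column vector, 3×1
xT = [[1, 2, 3]]          # row vector, 1×3
yT = [[4, 5, 6]]          # row vector, 1×3

outer_xy = matmul(x, yT)  # (3×1)(1×3) -> 3×3 outer product
inner_xy = matmul(xT, y)  # (1×3)(3×1) -> 1×1 inner product
```

The first product is a full $3\times3$ matrix with entries $x_iy_j$; the second collapses to the single scalar $1\cdot4+2\cdot5+3\cdot6$.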
|
|outer-product|
| 0
|
For the events A, B, C ⊂ Ω
|
I am new to probability and I have been trying to find a formula or an intuition to solve the following problem. For the events $A, B, C \subset \Omega$ the following probabilities are known: $P(A) = 0.05$ , $P(B) = 0.1$ , $P(A∩B) = 0.03$ , $P(A∪C) = 0.42$ , $P(A∩C) = 0.03$ , $P(C \setminus(A∪B)) = 0.34$ , $P(A∩B∩C) = 0.02$ . Calculate the probabilities: $ P((A \cap C) \setminus B) $ $P(A^c ∩ B ∩ C^c)$ My solution to $1.$ is to calculate the probability as follows: $P((A ∩ C) \setminus B) = P(A∩C) - P(A∩B∩C)$ , and for $2.$ I don't know how to proceed. How could one solve this?
|
Building on what JMoravitz suggested in the comments, here's the Venn diagram. The variables denote the probabilities of the respective regions. The question tells us: $$\begin{align*} a+b+c+d &= 0.05 \\[0.3cm] b+c+e+f &= 0.1 \\[0.3cm] b+c &= 0.03 \\[0.3cm] a+b+c+d+e+g &= 0.42 \\[0.3cm] c+d &= 0.03 \\[0.3cm] g &= 0.34 \\[0.3cm] c &= 0.02 \\[0.3cm] \end{align*}$$ For part $(2)$ , you need to find $f$ . It should be easy from here.
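Solving the system is mechanical once the regions are labelled; a sketch (taking $P(A\cap B)=0.03$ from the question, and assuming the region labels $a = A$ only, $b = (A\cap B)\setminus C$, $c = A\cap B\cap C$, $d = (A\cap C)\setminus B$, $e = (B\cap C)\setminus A$, $f = B$ only, $g = C\setminus(A\cup B)$):

```python
# Known probabilities from the question.
PA, PB, PAB, PAuC, PAC, PCnotAB, PABC = 0.05, 0.1, 0.03, 0.42, 0.03, 0.34, 0.02

c = PABC                         # A ∩ B ∩ C
b = PAB - c                      # (A ∩ B) \ C
d = PAC - c                      # (A ∩ C) \ B  -> part (1)
a = PA - (b + c + d)             # A only
g = PCnotAB                      # C \ (A ∪ B)
e = PAuC - (a + b + c + d + g)   # (B ∩ C) \ A, from A ∪ C = a+b+c+d+e+g
f = PB - (b + c + e)             # B only = P(A^c ∩ B ∩ C^c) -> part (2)
part1 = d
```

This yields part (1) $= 0.01$ and part (2) $= 0.04$.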
|
|probability|
| 0
|
Is this set $A$ open?
|
I have the set $A = \{z \in \mathbb C: z = x + 0i = x, x \in \mathbb R\}$ . Is this set open in the complex plane? The set $A$ contains all the points on the real axis in the complex plane. This set is open if every point of $A$ is an interior point, i.e. if around every point of $A$ there is an open ball contained in $A$ . But no open ball around a point of $A$ stays inside $A$ , so $A$ has no interior points, and hence $A$ is not open?
|
I don't know your level of knowledge, but here is a more sophisticated way to reason; maybe it's a futile complication, but you'll be the judge... Consider the function $f:\mathbb{C}\to \mathbb{R}: a+ib\mapsto b$ . This is clearly a continuous function, and $f^{-1}(\{0\})=A$ . $\{0\}$ is closed in $\mathbb{R}$ and so must be $A$ (the preimage of a closed set under a continuous function is closed). By itself this doesn't rule out that $A$ is open, but $\mathbb{C}$ is connected, which means that the only subsets that are both open and closed are $\mathbb{C}$ and $\emptyset$ ; clearly $A\neq \mathbb{C},\emptyset$ , so $A$ can't be open.
|
|metric-spaces|
| 0
|
can the period of an automorphism of a finitely generated module be unbounded?
|
Let $M$ be a finitely generated noetherian module over a ring $R$ . Let $\phi$ be an automorphism of $M$ . Let $P = \{ a \in M \mid \exists n \in \mathbb{N} \text{ such that } \phi^{n} (a) =a \}$ be the set of periodic points of $\phi$ in $M$ . For $a \in P$ , let $m_a$ be the smallest positive integer such that $\phi^{m_a} (a) =a$ . Question: Is it possible that $\{m_a \mid a \in P\}$ is unbounded? Thoughts: If $M$ is a finitely generated abelian group, then by the fundamental theorem of finitely generated abelian groups the period must be bounded. However, a finitely generated module need not be finitely generated as an abelian group. I was wondering whether boundedness also holds for finitely generated modules.
|
This holds if $M$ is a noetherian module over an arbitrary ring $R$ . For each $n$ , let $M_n=\{a\in M:\phi^n(a)=a\}$ ; note that $M_n$ is a submodule of $M$ , since $\phi^n$ is an automorphism of $M$ . Moreover, we have $M_{mn}\supseteq M_m+M_n$ for all $m,n\in\mathbb{N}$ ; indeed if $a\in M_m$ , then $\phi^{mn}(a)=(\phi^m)^n(a)=a$ , so that $M_m\subseteq M_{mn}$ , and similarly $M_n\subseteq M_{mn}$ . Since $M$ is noetherian, the family of submodules $\{M_n:n\in\mathbb{N}\}$ has a maximal element; say $M_{n}$ . By the remark above, it follows that $M_m\subseteq M_n$ for all $m\in\mathbb{N}$ ; otherwise $M_{mn}\supseteq M_m+M_n$ would properly extend $M_n$ , contradicting maximality. So $M_m\subseteq M_n$ for all $m\in\mathbb{N}$ , and the result follows.
|
|abstract-algebra|ring-theory|commutative-algebra|modules|
| 0
|
Roots of $x^2-x+2=0 \in \mathbb{Z}_3[i]$
|
I've been challenged by a professor to find the roots of $x^2-x+2=0$ in the field $\mathbb{Z}_3[i] = \{a+bi \; \vert \; a,b \in \mathbb{Z}_3\}$ . I used the "normal" quadratic formula and got roots of $2+2i$ and $2+i$ , and was told there's a way without using the formula. Is there a method for this besides guessing and checking the nine elements of $\mathbb{Z}_3[i]$ ?
|
Completing the square (which leads to the quadratic formula): $x^2-x+2=0$ $\iff x^2+2x+2=0$ $\iff (x+1)^2=-1$ $\iff x+1=\pm i$ $\iff x=2\pm i$ $\iff x=2+i\text{ or }x=2+2i,$ since $-1=2 $ in $\mathbb Z_3$ . Alternatively, note that $0, 1, $ and $2$ are not roots, so the roots are complex conjugates $a\pm bi$ . The sum of the roots of $x^2-tx+d$ is $t$ , so we have $(a+bi)+(a-bi)=2a=1;$ i.e., $a=2$ . The product of the roots of $x^2-tx+d$ is $d$ , so we have $(a+bi)(a-bi)=a^2+b^2=2; $ i.e., $b^2=1$ . Again we get the roots $a\pm bi=2\pm i$ . Really, though, checking the nine elements $\{0,\pm1,\pm i, \pm 1\pm i\}$ of $\mathbb Z_3[i]$ is not hard. It is rather easy to see that $0$ and $\pm 1$ are not roots. Nor is $\pm i$ since $i^2\mp i+2$ has an imaginary part. Nor is $1+i$ , since $(1+i)^2-(1+i)+2=2i-1-i+2$ also has an imaginary part, so the roots must be the other pair of conjugates $-1\pm i$ .
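Since the "guess and check" route only involves nine candidates, it is also trivial to automate (a sketch; in $\mathbb Z_3[i]$, for $x=a+bi$ we have $x^2 = (a^2-b^2) + 2ab\,i$, so $x^2-x+2$ has real part $a^2-b^2-a+2$ and imaginary part $2ab-b$, both taken mod $3$):

```python
# Brute-force all nine elements a + b*i of Z_3[i] against x^2 - x + 2 = 0.
roots = sorted(
    (a, b)
    for a in range(3)
    for b in range(3)
    if (a * a - b * b - a + 2) % 3 == 0 and (2 * a * b - b) % 3 == 0
)
```

The search recovers exactly the two roots $2+i$ and $2+2i$ found above.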
|
|abstract-algebra|ring-theory|complex-numbers|field-theory|integers|
| 0
|
Which statements count as first-order?
|
I have been reading a bit about formal languages to understand exactly what we mean by "first-order" when talking about things such as the transfer principle. If we use the language of set theory $\displaystyle S=\left\langle \in \right\rangle$ , then all first-order statements we can create right now consist of logical symbols and our relation $\in$ . But suppose we define a new relation: $\displaystyle a\subseteq b \iff\forall x[( x\in a \implies x\in b)]$ Now we have used our language $S$ to define a new relation. So we could make a statement such as: $\displaystyle \forall x\forall y[(x\subseteq y)\vee\neg (x\subseteq y)]$ Would this also be considered a first-order statement for some type of number (such as the naturals)? Or would we call this something different entirely? If so, suppose we now go a step further and talk about real numbers, where we have our standard order relation $<$ . Suppose you make a statement such as: $\forall x\forall y[(x<y)\vee\neg (x<y)]$ Would this be considered "first-order" as well?
|
Recall that the ordinary definitions of formal languages used in mathematical logic contain functions that yield terms from terms in the form $f(t_{1}, t_{2},\ldots t_{n})$ and relations that yield formulas from terms in the form $R(t_{1}, t_{2},\ldots t_{n})$ , where $t_{1}, t_{2},\ldots t_{n}$ are terms of the language (i.e., individual constants, individual variables and those in functional form). If the language allows quantification not only over a domain of discourse, but also over functions and relations, then we say that it is a second- or higher-order language. Therefore, for example, Kripke–Platek set theory with urelements is expressed in a first-order language, although it posits urelements that build sets. Returning to the question, then: if the statement included quantification over the ordering relation, such as $\exists R(\ldots)$ , where $R$ is a relational variable (that could be instantiated to $<$ ), it would not be first-order; in the given form, it is.
|
|logic|
| 0
|
Find the angle x in the regular octagon below
|
In the plane figure below, we have a regular octagon $ABCDEFGH$ , and $I$ belongs to the diagonal $CG$ , so that $\angle GIH = 30°$ . Knowing this, determine in degrees the value of the angle indicated by $x$ . (Solution: $75^o$ ) My try: $a_i=\frac{180\cdot(8-2)}{8} = 135^o$ $\angle IGH = \frac{135}{2}=67.5^o \implies\angle IHG = 82.5^o$ $\angle AHB = \frac{180-135}{2}=22.5^o \implies \angle BHI =135 - 82.5 - 22.5 = 30^o \cong \angle GIH$ I'm missing a relationship to finish, which I haven't found.
|
I will present an alternative approach using trigonometry. Let $L$ denote the length of the equal-length sides of the regular octagon $ABCDEFGH$ . Applying the Law of Cosines to $\triangle BAH$ , we find that $$(HB)^2 = (BA)^2 +(AH)^2 -2(BA)(AH) \cos(\angle BAH)=2L^2(1-\cos(135^{\circ}))$$ $$\implies HB = \left(\sqrt{2+\sqrt{2}}\right) L. \tag{1}$$ Now by the Law of Sines applied to $\triangle GIH$ , $$\frac{\sin(\angle GIH)}{HG} = \frac{\sin(\angle HGI)}{IH} \implies \frac{\sin(30^{\circ})}{L} = \frac{\sin(\frac{135^{\circ}}{2})}{IH}$$ $$\implies IH = \left(\sqrt{2+\sqrt{2}}\right) L, \tag{2}$$ where we have made use of the half-angle identity $\sin \left(\frac{\theta}{2}\right)= \pm \sqrt{\frac{1-\cos(\theta)}{2}}$ to compute an exact value for $\sin(67.5^{\circ})$ . At this point, we have shown that $HB = \left(\sqrt{2+\sqrt{2}}\right) L = IH$ . This is sufficient to conclude that $\angle IBH = \angle HIB$ (formally, we are applying the Law of Sines to $\triangle IBH$ ). Since the isosceles triangle $\triangle IBH$ has apex angle $\angle BHI = 30^{\circ}$ (as computed in the question), its base angles are $x = \angle HIB = \angle IBH = \frac{180^{\circ}-30^{\circ}}{2} = 75^{\circ}$ .
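Both length computations, $(1)$ and $(2)$, can be confirmed numerically (a sketch with $L=1$; variable names are mine):

```python
import math

L = 1.0
# Law of cosines in triangle BAH, with the 135-degree interior angle at A:
HB = math.sqrt(2 * L * L * (1 - math.cos(math.radians(135))))
# Law of sines in triangle GIH: sin(30°)/HG = sin(67.5°)/IH, with HG = L:
IH = L * math.sin(math.radians(67.5)) / math.sin(math.radians(30))
# The closed form claimed in (1) and (2):
closed_form = math.sqrt(2 + math.sqrt(2)) * L
```

All three agree to floating-point precision, confirming $HB = IH$.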
|
|geometry|euclidean-geometry|plane-geometry|
| 0
|
"Space" and "time" part of a $ 2 $-form on a product manifold $ \mathbb R\times S $
|
I'm working through exercises 50 and 51 in Baez, Muniain, Gauge Fields, Knots and Gravity . Let $ M $ be a manifold and suppose we can write $ M $ as a product $ M = \mathbb R\times S $ of a "time" $ 1 $ -dimensional manifold $ \mathbb R $ with some "space" manifold $ S $ . Take a $ 2 $ -form $ F $ on $ M $ . Exercise 50 asks to show that $ F $ can be written in a unique way as $$ F = B + E\wedge\mathrm dt $$ where $ B $ is some $ 2 $ -form, $ E $ is some $ 1 $ -form, and $ \mathrm dt $ is the $ 1 $ -form (globally) defined as $$ \mathrm dt_{(t,p)} = \Bigl(\frac{\partial}{\partial t}\Big\rvert_{(t,p)}\Bigr)^* $$ for any $ (t,p)\in \mathbb R\times S $ , where $ (\partial/\partial t)\rvert_{(t,p)} $ is the tangent vector in $ \mathrm T_{(t,p)}(\mathbb R\times S) $ that corresponds to the pair of tangent vectors $ \bigl((\partial/\partial t)\rvert_t,0\bigr) $ in $ \mathrm T_t\mathbb R\oplus \mathrm T_pS $ under the canonical identification $ \mathrm T_{(t,p)}(\mathbb R\times S)\cong \mathrm T_t\mathbb R\oplus \mathrm T_pS $ .
|
No, I would not say you're on the right track. It is not possible to prove a decomposition $\Omega^k(M)\cong (\mathbb{R}dt\wedge\Omega^{k-1}(S))\oplus \Omega^k(S)$ by proving an isomorphism of vector spaces. You need an isomorphism of vector bundles, or spaces of sections (as $C^\infty(M)$ -modules). So observe that the vector field $\partial_t$ gives rise to a linear map $\iota_{\partial_t}:\Omega^\bullet(M)\to\Omega^{\bullet-1}(M)$ , i.e. an endomorphism of $\Omega^\bullet(M)$ which lowers degree by $1$ . Now, it is a fact from linear algebra that $\Omega^\bullet(M)\cong(dt\wedge\iota_{\partial_t}\Omega^\bullet(M))\oplus\ker\iota_{\partial_t}$ , and therefore in particular $\Omega^k(M)\cong \iota_{\partial_t}\Omega^{k+1}(M)\oplus (\mathbb{R}dt\wedge\Omega^{k-1}(S))$ . The reason being, of course, that for contraction with a nonvanishing vector field the image and the kernel of $\iota_{\partial_t}$ coincide: $\omega\in\iota_{\partial_t}\Omega^{k+1}(M)$ is a necessary and sufficient condition for $\omega\in\ker\iota_{\partial_t}$ . This is the part you can prove in local coordinates. It is easy to see that a form lies in $\ker\iota_{\partial_t}$ exactly when none of its coordinate terms contains $dt$ .
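A short sketch of the linear-algebra fact invoked here (in the notation above, and using that $\iota_{\partial_t}$ is an antiderivation with $\iota_{\partial_t}^2=0$ and $\iota_{\partial_t}dt=1$): every $\omega\in\Omega^k(M)$ splits as

```latex
\omega \;=\; dt\wedge\iota_{\partial_t}\omega
\;+\;\bigl(\omega - dt\wedge\iota_{\partial_t}\omega\bigr),
\qquad
\iota_{\partial_t}\bigl(\omega - dt\wedge\iota_{\partial_t}\omega\bigr)
\;=\;\iota_{\partial_t}\omega
-\iota_{\partial_t}\omega
+ dt\wedge\iota_{\partial_t}\iota_{\partial_t}\omega
\;=\;0,
```

where the middle computation uses $\iota_{\partial_t}(dt\wedge\alpha) = (\iota_{\partial_t}dt)\,\alpha - dt\wedge\iota_{\partial_t}\alpha$ with $\alpha=\iota_{\partial_t}\omega$. So the first summand lies in $dt\wedge\Omega^{k-1}$ and the second in $\ker\iota_{\partial_t}$, and the splitting is $C^\infty(M)$-linear in $\omega$.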
|
|differential-geometry|
| 0
|
Outer product of row vectors. Does $\mathbf{x}^T \otimes \mathbf{y}^T$ = $\mathbf{x} \otimes \mathbf{y}$?
|
Does $\mathbf{x}^T \otimes \mathbf{y}^T$ = $\mathbf{x} \otimes \mathbf{y}$ ?
|
Without making matrix identifications, if $x,y\in V$ , then $x^\top,y^\top\in V^*$ . Thus, $x\otimes y\in V\otimes V$ and $x^\top\otimes y^\top\in V^*\otimes V^*$ . Although both of these can be identified with $V\otimes V^*\cong \text{Hom}(V,V)$ once you choose a basis and dual basis, these are very different entities.
|
|outer-product|
| 0
|
Showing that for every acute angle $x$ in a right triangle $\frac{1}{\sin x}+\frac{1}{\cos x}\ge 2\sqrt 2$ is always true
|
The problem is to show that in a right-angled triangle with hypotenuse $K$ and legs $M$ and $N$ , the inequality $\frac{K}{M}+\frac{K}{N}\ge 2\sqrt 2$ always holds. My approach: I tried to simplify the expression $\frac{K}{M}+\frac{K}{N} = \frac{K(M+N)}{MN}$ , and since the sides of a triangle satisfy the triangle inequality, $M+N\ge K$ , which lets us conclude that $$\frac{K(M+N)}{MN}\ge\frac{K^2}{MN},$$ and since $K$ is the hypotenuse, $K^2=M^2+N^2$ , so we get: $\frac{K(M+N)}{MN}\ge \frac{M^2+N^2}{MN}\ge 2$ by AM-GM. So this means that I got: $$\frac{K}{M}+\frac{K}{N}\ge 2$$ but couldn't reach the desired result, which is $2\sqrt 2$ , so any help or another approach is much appreciated.
|
Using that $K^2 = M^2 + N^2$ by the Pythagorean theorem, that for all positive $x$ we have $x + \frac{1}{x} \ge 2$ by the AM–GM inequality (or, alternatively, from $(x-1)^2\ge 0 \;\to\; x^2-2x+1\ge 0\;\to\;x^2+1\ge 2x\;\to\; x+\frac{1}{x}\ge 2$ ), and your LHS expression simplification, we get $$\begin{equation}\begin{aligned} \frac{K^2(M+N)^2}{M^{2}N^{2}} & = \frac{(M^2+N^2)(M^2+2MN+N^2)}{M^{2}N^{2}} \\ & = \frac{M^2+2MN+N^2}{N^{2}} + \frac{M^2+2MN+N^2}{M^{2}} \\ & = \frac{M^2}{N^2} + 2\left(\frac{M}{N}\right) + 1 + 1 + 2\left(\frac{N}{M}\right) + \frac{N^2}{M^2} \\ & = 2 + 2\left(\frac{M}{N} + \frac{1}{\frac{M}{N}}\right) + \left(\frac{M^2}{N^2} + \frac{1}{\frac{M^2}{N^2}}\right) \\ & \ge 2 + 2(2) + 2 \\ & = 8 \end{aligned}\end{equation}$$ Since $M$ , $N$ and $K$ are all positive, taking square roots of both sides gives the requested inequality, i.e., $$\frac{K}{M}+\frac{K}{N}\ge 2\sqrt{2}$$
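A numerical scan of $f(x)=\frac1{\sin x}+\frac1{\cos x}$ on $(0,\pi/2)$, which is $\frac KM+\frac KN$ written in terms of an acute angle, corroborates the bound, with equality at $x=\pi/4$ (a sketch; the grid size is arbitrary):

```python
import math

def f(x):
    # 1/sin(x) + 1/cos(x) = K/M + K/N for the acute angle x of the triangle.
    return 1.0 / math.sin(x) + 1.0 / math.cos(x)

# Sample the open interval (0, pi/2) on a uniform grid.
samples = [f(k * (math.pi / 2) / 10000) for k in range(1, 10000)]
bound = 2 * math.sqrt(2)
```

The minimum over the grid sits at $x=\pi/4$, where $f$ equals $2\sqrt2$.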
|
|inequality|trigonometry|
| 0
|
Tautologies in classical logic
|
According to its truth table, $P \lor \neg P$ is a tautology, i.e. it is true for all truth values of its constituent propositions. But how come that is true in classical logic? $\lor$ is the inclusive 'or', so $P \lor \neg P$ means 'either $P$ , or $\neg P$ , or both of them'. But in classical logic it is never true that both $P$ and $\neg P$ , i.e. $P \land \neg P$ is a contradiction (false for all truth values of its constituent propositions). So why is $P \lor \neg P$ always tautologically true?
|
As mentioned or hinted at in some of the comments and answers, $P \lor \neg P$ , the law of the excluded middle, is axiomatically true in classical logic. So it is true because we assume it to be true, without proof. One alternative axiom is the double negation rule: $\neg \neg P \Rightarrow P$ . From double negation, you can prove the law of the excluded middle as a tautology. An example of such a proof can be seen here . The basic idea of the proof is to assume $\neg (P \lor \neg P)$ and show that it leads to a contradiction, so that $\neg \neg (P \lor \neg P)$ is tautologically true; then apply the double negation rule.
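In two-valued semantics, the truth-table claims from the question are mechanically checkable by enumerating valuations (a trivial sketch):

```python
# Exhaustive truth-table check over the two classical truth values:
# P ∨ ¬P holds under every valuation, P ∧ ¬P under none.
excluded_middle = all(P or (not P) for P in (True, False))
contradiction = any(P and (not P) for P in (True, False))
```

This is exactly the sense in which $P\lor\neg P$ is a tautology and $P\land\neg P$ a contradiction; the axiomatic discussion above concerns proof systems, not these tables.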
|
|logic|propositional-calculus|intuition|
| 0
|
Showing that for every acute angle $x$ in a right triangle $\frac{1}{\sin x}+\frac{1}{\cos x}\ge 2\sqrt 2$ is always true
|
The problem is to show that in a right-angled triangle with hypotenuse $K$ and legs $M$ and $N$ , the inequality $\frac{K}{M}+\frac{K}{N}\ge 2\sqrt 2$ always holds. My approach: I tried to simplify the expression $\frac{K}{M}+\frac{K}{N} = \frac{K(M+N)}{MN}$ , and since the sides of a triangle satisfy the triangle inequality, $M+N\ge K$ , which lets us conclude that $$\frac{K(M+N)}{MN}\ge\frac{K^2}{MN},$$ and since $K$ is the hypotenuse, $K^2=M^2+N^2$ , so we get: $\frac{K(M+N)}{MN}\ge \frac{M^2+N^2}{MN}\ge 2$ by AM-GM. So this means that I got: $$\frac{K}{M}+\frac{K}{N}\ge 2$$ but couldn't reach the desired result, which is $2\sqrt 2$ , so any help or another approach is much appreciated.
|
Another way: $$f(x)=\csc x+\sec x\implies f'(x)=-\csc x \cot x+\sec x\tan x=0$$ $$\implies \tan^3x=1\implies x=\frac{\pi}{4}$$ Can you finish?
|
|inequality|trigonometry|
| 0
|