| title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64) |
|---|---|---|---|---|
An integral question which I have never encountered before
|
I = $\int _{0}^{3}\left( 1+x^{2}\right) d[ x]$, where $[x]$ is the greatest integer less than or equal to $x$; the options are $a)\,12$, $b)\,17$, $c)\,15$, $d)\,19$. Shouldn't this integral be $0$, since $d[x]$ is $0$ for all $x$ in the domain of the function? What is the correct explanation?
|
This is the Riemann-Stieltjes integral of $1+x^2$ from $0$ to $3$ with respect to $g(x)=[x]$. It can be evaluated like a Riemann sum, partitioning $[0,3]$ into tagged intervals and increasing the number of intervals. The kind of sum we're trying to evaluate is: $$ \lim_{n\to \infty}\sum_{i=1}^{n}f(x_i)\,\bigl(g(x_i)-g(x_{i-1})\bigr) $$ Here, we'll use the rightmost point of each subinterval as the tag for definiteness, but this doesn't change the answer. Notice that the function $g$ is constant on each of the open intervals $(0,1)$, $(1,2)$, and $(2,3)$, so the increments $g(x_i)-g(x_{i-1})$ are nonzero only when the partition crosses a jump of $g$: at $x_i=1,2,3$. The sum then reduces to: $$ f(1) + f(2) + f(3) = 2 + 5 + 10 = 17$$ Another way to think about this is, as the comments suggest, using the definition $\int f(x)\,dg(x)=\int f(x) g'(x)\,dx$. Here, we need $g$ to be differentiable, but if we allow some hand-wavy use of the delta function then $g'(x) = \delta(x-1) + \delta(x-2) + \delta(x-3)$, which integrates against $f$ to give the same value $17$.
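A quick numerical sanity check (a small Python sketch, not part of the original answer): approximate the Riemann-Stieltjes sum directly with a fine right-tagged uniform partition.

```python
import math

def rs_sum(f, g, a, b, n):
    """Right-tagged Riemann-Stieltjes sum of f dg over a uniform partition of [a, b]."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(f(xs[i]) * (g(xs[i]) - g(xs[i - 1])) for i in range(1, n + 1))

f = lambda x: 1 + x * x
print(rs_sum(f, math.floor, 0, 3, 3000))  # 17.0: only the jumps at x = 1, 2, 3 contribute
```

With $3000$ subintervals the jump points $1$, $2$, $3$ are hit exactly, and the sum collapses to $f(1)+f(2)+f(3)$.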
|
|calculus|integration|
| 0
|
Convex combination of Dirichlet random variables
|
For positive integer $k$, let $(X_1,\ldots,X_k)\sim\mathrm{Dir}(\alpha_1,\ldots,\alpha_k)$ be a probability distribution over $k$ items drawn from a $k$-component Dirichlet distribution, and let $p=(p_1,\ldots,p_k)$ be another fixed distribution. What is the pdf of the random variable $Q=\sum_{i=1}^k p_i X_i$? If each $X_i$ were an independent gamma random variable with parameter $\alpha_i$, i.e., $X_i\sim\mathrm{Gamma}(\alpha_i,\beta)$, then this would be easy: by linearity, $Q\sim\mathrm{Gamma}(\sum_{i=1}^k p_i\alpha_i, \beta)$, following the notation in this lecture note. The Dirichlet random variables can be obtained by normalizing the gamma random variables, and the marginal of each component is a beta random variable. One way to show this is by observing that if $Y_i\sim\mathrm{Gamma}(\alpha_i,1)$ for $i\in\{1,2\}$, then $\frac{Y_1}{Y_1+Y_2}\sim\mathrm{Beta}(\alpha_1,\alpha_2)$; this requires that $Y_1$ and $Y_2$ be independent, as shown in this post. I suspect that $
|
There is no well-known distribution for the weighted sum of a random vector with a Dirichlet distribution. However, as partially checked in this old answer, the beta distribution can be a good approximation for it (you do not need to normalize the vector $p$, as it is already normalized). This 2023 paper derives a novel integral representation for the density of a weighted sum of Dirichlet-distributed random variables (Appendix A.1, page 15); you can use it if you want the exact distribution. This paper also presents various non-asymptotic Gaussian-based bounds for probabilities of linear transformations of a Dirichlet random vector. Regarding your results: note that for $c>0$, and $X$ and $Y$ independent with $$X \sim \text{Gamma} (\alpha_1, \lambda), \quad Y \sim \text{Gamma} (\alpha_2, \lambda),$$ we have $$ cX \sim \text{Gamma} \left ( \alpha_1, \frac{\lambda}{c} \right ) $$ $$X +Y \sim \text{Gamma} \left (\alpha_1+\alpha_2, \lambda \right).$$ Hence, generally there is no $\a
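A Monte Carlo sketch of the beta approximation mentioned above (the parameters $\alpha$ and weights $p$ here are made up for illustration; this is not from either paper):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 3.0, 5.0])      # assumed Dirichlet parameters
p = np.array([0.2, 0.3, 0.5])          # fixed weight vector (already normalized)

X = rng.dirichlet(alpha, size=200_000)
Q = X @ p                              # samples of Q = sum_i p_i X_i

# moment-match a Beta(a, b) on [0, 1] to the sample mean and variance of Q
m, v = Q.mean(), Q.var()
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common
print(round(m, 3), round(a, 1), round(b, 1))
```

For these parameters the exact mean is $\sum_i p_i \alpha_i / \alpha_0 = 0.38$; comparing a histogram of `Q` against the fitted beta density shows how close the approximation is.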
|
|probability|probability-theory|probability-distributions|gamma-distribution|
| 0
|
Do surjective morphisms $\Bbb C^\times \to \Bbb C$ and vice versa between Riemann surfaces exist, and can they be extended to $\Bbb P^1\to\Bbb P^1$?
|
How can I prove the existence or non-existence of a surjective morphism of Riemann surfaces $\Bbb C^\times \to \Bbb C$? And vice versa? In case it exists, how can I prove whether there is one that can be extended to a morphism $\Bbb P^1\to \Bbb P^1$? Here $\Bbb C$ is identified with a chart $U_1 \subseteq \Bbb P^1$ and $\Bbb C^\times$ with $U_0 \cap U_1 \subseteq \Bbb P^1$. I guess it will suffice to find an example in case it exists, but I have no clue how to proceed. A morphism of Riemann surfaces is a holomorphic function $X \to Y$ for $X$, $Y$ complex manifolds of dimensions $n$ and $m$. Just to match domain and codomain, I think I could use $\exp(1/z):\Bbb C^\times \to \Bbb C$ and $\exp(z):\Bbb C \to \Bbb C^\times$, but how do I prove whether these are morphisms of Riemann surfaces, and whether they can be extended?
|
$$p(z)=z(z+1)$$ is an example of such a morphism. Every non-constant polynomial is surjective. In our case we omit $0$ from the domain, but $z=-1$ also maps to $0$, so $p$ is still surjective. We can easily extend it to the projective line by setting $p(0)=0$ and $p(\infty)=\infty$. From here on you can easily check that the extended polynomial is surjective and holomorphic at $0$ and $\infty$. Edit for clarification: let $f(z)=z(z+1)$ be a polynomial from $\mathbb{C}$ to itself. Then $f|_{\mathbb{C}^\times}=p$. For the extension, define $$p'\colon\mathbb{P}^1 \rightarrow \mathbb{P}^1$$ as $$p'(z)=p'([z,1])=[f(z),1]$$ and $$p'([1,0]) = [1,0].$$ Consider charts $$\phi_i \colon U_i \rightarrow \mathbb{C}$$ where $U_i=\{[x_0,x_1];x_i\neq 0\} \subset \mathbb{P}^1$, $$\phi_0\colon [1,y] \mapsto y$$ $$\phi_0^{-1} \colon y \mapsto [1,y]$$ and $$\phi_1\colon [x,1] \mapsto x$$ $$\phi_1^{-1} \colon x \mapsto [x,1].$$ Then $p'$ is holomorphic if $\phi_i \circ p' \circ \phi_i^{-1}$ is holomorphic. First let us ch
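The surjectivity claim can be backed up numerically (a quick sketch, not part of the argument): for every $w$, the quadratic $z^2+z-w=0$ has a root different from $0$, because the product of its roots is $-w$, so $p$ stays surjective on the punctured plane.

```python
import cmath, random

random.seed(3)
for _ in range(500):
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    d = cmath.sqrt(1 + 4 * w)
    roots = [(-1 + d) / 2, (-1 - d) / 2]   # solutions of z(z + 1) = w
    # at least one root is nonzero (their product is -w), so w has a preimage in C^x
    assert any(abs(z) > 1e-12 and abs(z * (z + 1) - w) < 1e-9 for z in roots)
print("every sampled w has a nonzero preimage under p")
```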
|
|differential-geometry|manifolds|complex-geometry|riemann-surfaces|
| 1
|
A generalization of Baumslag-Solitar groups
|
I am wondering about the following generalization of the group $B_{1,2}=\langle a,b\, |\, bab^{-1}=a^2\rangle$ : $$ G_k=\langle a_1,a_2,\ldots,a_{k+1}\, |\, a_{i+1}a_ia_{i+1}^{-1}=a_i^2,\ i=1,2,\ldots,k\rangle. $$ These groups have probably been studied before; is there a name for them? One specific question: are the groups $G_k$ linear?
|
The groups $G_k$ are not linear for $k>1$. This follows from the remark of Moishe Kohan. The relations $a_2a_1a_2^{-1}=a_1^2$ and $a_3a_2a_3^{-1}=a_2^2$ imply the relation $a^{2^{2^n}}_1=a^{n}_3a_2a^{-n}_3\,a_1\,a^{n}_3a^{-1}_2a^{-n}_3$. If we assume that $a_1,a_2,a_3$ are matrices, then the entries of $a^{2^{2^n}}_1$ grow at least like $2^{2^n}$, which is not possible: the entries of $a^{n}_3$, and hence of the right-hand side, grow at most exponentially in $n$.
|
|group-theory|group-presentation|combinatorial-group-theory|linear-groups|
| 1
|
Confusion regarding orthonormal basis of $L^2[0, 1]$, in requiring $f(0) = f(1)$?
|
Let us consider here the continuous elements of $L^2$. It is often stated that the family $e_k(x) = e^{-2 \pi i k x}$ is an orthonormal basis of $L^2[0, 1]$, in that a function can be written as $$ f(x) = \sum_{k \in \mathbb{Z}} c_k e_k(x), $$ for an appropriate complex sequence $c_k$. However, my confusion is that, since $e_k(0) = e_k(1) =1$ for all $k$, it would seem that this representation forces $$ f(0) = f(1). $$ Is there a calculation mistake here? Or is it an issue with point evaluation in $L^2[0, 1]$? If it is the latter, is there a way to change the basis $e_k$ such that a valid point-wise representation holds everywhere (say for continuous representatives/elements of $L^2$)? For instance, what about the function $f(x) = x$? A more rigorous version of my second question goes as follows. Is there an orthonormal basis $\{e_k\}$ for $L^2[0, 1]$ such that for continuous elements of $C[0, 1]$ there exists a sequence $c_k$ where the partial sums $f_N = \sum_{|k|\leq N} c
|
Your confusion is entirely reasonable! Yes, it should come as a shock when one first learns that "convergence" of Fourier series can have different, inequivalent, senses. From a reasonable beginner's viewpoint, of course "convergence" means "pointwise". Duh. Oop, but Fourier series don't reliably do that. As some sort of consolation-prize, we do have $L^2$ convergence... which is simply not-at-all equivalent to pointwise convergence. Truly, this stunned me when I first understood the fact. Wow. Oof. (Ok, yes, there is Carleson's theorem, that an $L^2$ function's Fourier series does converge almost everywhere (to that function...)) Further, yes, there is the mysterious connection between $f(0)$ and $f(1)$, and similarly for derivatives, in whatever sense. Yes, of course, the relevant exponentials have equal values at both ends. The point is that Fourier series most naturally are expansions of periodic functions (of whatever sort), that is, of functions on the circle (pulled-back to mak
|
|functional-analysis|fourier-series|orthonormal|hamel-basis|
| 0
|
How to construct a Wiener-like process on Tori?
|
That means the process may be periodic in distribution. And under what conditions can the process be represented as a series of Fourier basis functions multiplied by random variables?
|
Brownian motion on a manifold starts with constructing the heat equation problem on it. One main reference is Stochastic Analysis On Manifolds. As mentioned in Example 4.34 (Heat Equation and Brownian Motion on Riemannian Manifolds), the transition density on the torus $\mathbb{T}^{n}:=\mathbb{R}^{n}/\mathbb{Z}^{n}$ is $$\large p_{\mathbb{T}^{n}}(x,y,t):=\frac{1}{(4\pi t)^{n/2}}\sum_{m\in \mathbb{Z}^{n}}e^{-\|y'+m-x'\|^{2}/4t},$$ where for $\pi:\mathbb R^{n}\to\mathbb{T}^{n}$, we let $x',y'$ be two representatives such that $\pi(x')=x,\pi(y')=y$, and $\|\cdot\|$ is the Euclidean norm.
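As a quick sanity check (a sketch for the one-dimensional case $n=1$), the wrapped-Gaussian density above integrates to $1$ over one period, as a transition density should:

```python
import math

def p_torus(x, y, t, terms=10):
    """Wrapped-Gaussian transition density on the circle R/Z (the n = 1 case)."""
    return sum(math.exp(-((y + m - x) ** 2) / (4 * t))
               for m in range(-terms, terms + 1)) / math.sqrt(4 * math.pi * t)

# Riemann sum of the density over one period y in [0, 1)
N = 2000
total = sum(p_torus(0.3, k / N, t=0.1) for k in range(N)) / N
print(round(total, 6))  # 1.0
```

The sum over the lattice shifts $m$ turns the integral over $[0,1)$ back into the full-line Gaussian integral, which is $1$.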
|
|probability|stochastic-processes|random-variables|
| 0
|
Dimension and Surjectivity proof
|
I want to know how to prove the statement: if $T: V \rightarrow W$ is surjective, then $\dim(W) \le \dim (V)$. I am thinking about proof by contradiction... but I don't really have a rigorous proof, only intuition.
|
Let $(e_1,\ldots,e_n)$ be a basis of $V$ . Let $w\in W$ , there exists $v\in V$ such that $T(v)=w$ by surjectivity of $T$ , now $v$ can be written as $$ v=v_1e_1+\cdots+v_ne_n $$ so $$ w=T(v)=v_1T(e_1)+\cdots+v_nT(e_n). $$ This means that the family $(T(e_1),\ldots,T(e_n))$ generates $W$ , so $n\geqslant\dim W$ .
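The spanning argument can be illustrated numerically (a hypothetical example, not part of the proof): whatever matrix represents $T$, its image is spanned by the $n$ columns $T(e_1),\dots,T(e_n)$, so its rank is at most $n=\dim V$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 5                       # dim V = 3, dim W = 5
T = rng.standard_normal((m, n))   # a (hypothetical) linear map T : V -> W

# The image of T is spanned by T(e_1), ..., T(e_n), i.e. the n columns of T,
# so rank(T) <= n = dim V.  Hence T cannot be surjective when dim W > dim V.
print(np.linalg.matrix_rank(T))   # at most 3
```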
|
|linear-algebra|linear-transformations|
| 1
|
Why does T = S1 work for conics?
|
If we have a conic with equation $ax^2 + 2hxy + by^2 + 2gx + 2fy + c = 0$, or $S = 0$, and a point $P(x_1, y_1)$, then the equation of the chord with $P$ as its midpoint is given by $T = S_1$. $S_1$ is obtained by plugging the point $P$ into $S$: $S_1 = ax_1^2 + 2hx_1y_1 + by_1^2 + 2gx_1 + 2fy_1 + c$. The 'T' form of an equation can be obtained by replacing: $$x^2 \rightarrow xx_1$$ $$y^2 \rightarrow yy_1$$ $$x \rightarrow \frac{x + x_1}{2}$$ $$y \rightarrow \frac{y + y_1}{2}$$ $$xy \rightarrow \frac{xy_1 + x_1y}{2}$$ I don't understand how this works! Could someone help me understand why it does? Edit: The slope of the chord somehow equals the derivative of the conic at the point $(x_1, y_1)$. Can someone explain how that happens?
|
Here is a graphical representation in the case of an ellipse (Fig. 1). First version: analytic geometry calculations. Being given the equation of the conic $$ax^2 + 2hxy + by^2 + 2gx + 2fy + c=0\tag{*}$$ let $U=\pmatrix{u\\v}$ be a directing vector of the chord, allowing its parameterization in the form: $$\begin{cases}x&=&x_1+pu\\y&=&y_1+pv\end{cases}\tag{**}$$ The coordinates $(x,y)$ of the intersection points $P_1$ and $P_2$ of the chord with the conic curve verify the expression obtained by "plugging" the expressions of $x$ and $y$ given by (**) into (*), giving rise to a quadratic polynomial in the variable $p$: $$Ap^2+2Bp+C=0 \ \text{with} $$ $$\begin{cases}A&=&au^2 + 2huv + bv^2\\B&=& u(ax_1+hy_1+g)+v(hx_1+by_1+f)\\C&=&ax_1^2 + 2hx_1y_1 + by_1^2 + 2gx_1 + 2fy_1 + c\end{cases}\tag{1}$$ Observe now that $(x_1,y_1)$ is the midpoint of the chord iff the intersection points $P_1$ and $P_2$ correspond to two opposite parameter values $p$ and $-p$. It amounts to saying that the sum of the roots of
|
|geometry|conic-sections|
| 1
|
Strict Maximum Principle (Complex Version)
|
Strict Maximum Principle - Complex Version (Gamelin; p88) Let $h$ be a bounded complex-valued harmonic function on a domain $D$ . If $|h(z)| \le M$ for all $z \in D$ , and $|h(z_0)| = M$ for some $z_0 \in D$ , then $h(z)$ is constant on $D$ . First I will copy the proof (then explain where I am confused). Proof: We replace $h(z)$ by $\lambda h(z)$ for an appropriate unimodular constant $\lambda$ , and we can assume $h(z_0) = M$ . Let $u(z) = \Re h(z)$ . Then $u(z)$ is a harmonic function on $D$ that attains its maximum at $z_0$ . By the strict maximum principle for real-valued harmonic functions, $u(z) = M$ for all $z \in D$ . Since $|h(z)| \le M$ and $\Re h(z) = M$ we must have $\Im h(z) = 0$ for all $z \in D$ hence $h(z)$ is constant. Questions: If I understand correctly, we ``redefine'' $h$ as $\lambda h$, so essentially we have $h^* = \lambda h$ for an appropriate $\lambda \in \mathbb{C}$. If that's the case, then shouldn't $u(z) = \Re h^*(z)$, and consequently the maximum scaled
|
Here $\lambda=|h(z_0)|/h(z_0)$; since $h(z_0)\neq0$, this is well-defined. Moreover, $|\lambda|=1$. Set $h^*=\lambda h$, so $h^*(z_0)=|h(z_0)|=M$. Although the range of $h$ need not contain the real value $M$, the range of $h^*$ does.
|
|complex-analysis|
| 1
|
Voight Quaternion Algebras proof of Lemma 42.2.7
|
Let $E$ be a supersingular elliptic curve, $\mathsf{O}$ be the endomorphism ring of $E$ , and $I$ be a left $\mathsf{O}$ -ideal. In Voight's Quaternion Algebras , he defines the map $\phi_I$ as the isogeny from $E$ to $E / E[I]$ , where $E[I]$ is the scheme-theoretic intersection $$E[I] := \bigcap_{\alpha \in I} E[\alpha],$$ and $E[\alpha]$ is $\ker \alpha$ as a group scheme. The author then goes on to prove Lemma 42.2.7: The pullback map $$\phi_I^* : \operatorname{Hom}(E_I, E) \to I$$ defined by $\psi \mapsto \psi \phi_I$ is an isomorphism of left $\mathsf{O}$ -modules. In the proof, the first sentence is "The image of $\operatorname{Hom}(E_I, E)$ under precomposition by $\phi_I$ lands in $\operatorname{End}(E) = \mathsf{O}$ and factors through $\phi_I$ so lands in $I$ by definition." I'm not sure what definition the author is referring to at the end of the sentence. It seems that the statement we want is that $I$ is exactly the set of endomorphisms whose kernel contains $E[I]$ , but
|
Re-found my own question, and I'll add an answer just to get this off of the unanswered queue: As far as I can tell, the quoted statement is equivalent to showing that the set of $\alpha \in O$ such that $E[I] \subset \ker \alpha$ is exactly $I$ . Voight defines $I(H) := \{\alpha \in O : \alpha(P) = 0 \text{ for all } P \in H\}$ , and then proves in a later Proposition (42.2.16) that $I(E[I]) = I$ , which is the statement we want here. That proof is more complicated than saying that this is true by definition, but doesn't seem to rely on Lemma 42.2.7, so I think Lemma 42.2.7 should really come after Proposition 42.2.16 logically. That would fix the problem presented here. If anyone has another explanation though, I would still love to hear it!
|
|elliptic-curves|schemes|quaternions|arithmetic-geometry|
| 1
|
exponential of operators - Trotter product formula
|
Let $H$ be a self-adjoint operator, bounded from below, with spectrum consisting of isolated eigenvalues $E_n$ ($E_0 < E_1 < \cdots$) with finite multiplicities $d(n)$, and $\mathrm{Tr}[\exp(- \beta H)] < \infty$ for all positive $\beta$. Furthermore, let $\lambda$ be a small positive constant, and $V$ a relatively bounded self-adjoint operator with respect to $H$, i.e., there exist constants $a, b > 0$ such that $$ \|V \psi \| \leq a \|H \psi \| + b \|\psi\|$$ for all $\psi \in \mathcal{D}(H) \subseteq \mathcal{D}(V)$. Does the Lie product formula (also known as the Trotter product formula) $$ e^{-it(H+\lambda V)} = \lim_{n \to \infty}\left(e^{\frac{-itH}{n}} e^{\frac{-it \lambda V}{n}}\right)^n$$ hold true in this case? ($t$ is just the parameter denoting time.)
|
You only need self-adjointness for $H$, but you keep the assumptions on $V$. Then, for $\lambda$ such that $\lambda a < 1$, by the Kato-Rellich theorem $H + \lambda V$ is self-adjoint on $D(H)$, which is equal to $D(H) \cap D(\lambda V)$; hence you can apply the Trotter product formula given, for example, by Theorem VIII.30 in Reed & Simon Volume I with $A = H$ and $B = \lambda V$ (this one is proven inside the book, while its generalisation, Theorem VIII.31, to the case where $A + B$ is essentially self-adjoint on the intersection of the domains, isn't). Statement of the aforementioned theorem, to keep the post self-contained: Theorem VIII.30: Let $A$ and $B$ be self-adjoint operators on $\mathscr{H}$ and suppose that $A+B$ is self-adjoint on $D(A) \cap D(B)$. Then: $${\text{s-}\lim}_{n \to \infty} \left(e^{itA/n}e^{itB/n}\right)^n = e^{it(A+B)}$$ ($\text{s-}\lim$, the "strong limit", means that the result deals with pointwise convergence).
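A finite-dimensional illustration of the theorem's conclusion (hypothetical $4\times4$ Hermitian matrices, only a sketch of the operator statement, where all domain issues disappear):

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_hermitian(k):
    M = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    return (M + M.conj().T) / 2

def expi(H, s):
    """e^{i s H} for Hermitian H via the spectral theorem."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * s * w)) @ V.conj().T

A, B, t = rand_hermitian(4), rand_hermitian(4), 1.0
exact = expi(A + B, t)

errs = []
for n in (10, 100, 1000):
    step = expi(A, t / n) @ expi(B, t / n)
    errs.append(np.linalg.norm(np.linalg.matrix_power(step, n) - exact))
print([round(e, 4) for e in errs])   # the errors shrink roughly like 1/n
```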
|
|functional-analysis|operator-theory|hilbert-spaces|spectral-theory|self-adjoint-operators|
| 1
|
Probability of 200 people playing
|
In a certain game, a player rolls a coin onto a board with coloured squares, which are either red, blue, green or yellow. If the coin lands entirely within one of these coloured squares the player wins a prize; otherwise, the player loses. The probabilities of the coin landing on each of the coloured squares are respectively $\mathbb{P}(\textrm{Red})=0.15$ , $\mathbb{P}(\textrm{Blue})=0.09$ , $\mathbb{P}(\textrm{Green})=0.05$ , and $\mathbb{P}(\textrm{Yellow})=0.06$ . The probability the player loses is $\mathbb{P}(\textrm{Lose})=0.65$ . 1. One day $200$ people play this game. Approximately how many would you expect to win a prize? Ok, I know this seems so easy but this was my solution which is probably totally wrong. I guessed that the probabilities above were for one person. First, I added $\mathbb{P}(\textrm{Red})$ , $\mathbb{P}(\textrm{Blue})$ , $\mathbb{P}(\textrm{Green})$ and $\mathbb{P}(\textrm{Yellow})$ together as that's the chance of winning, i.e. $0.09 + 0.15 + 0.05 + 0.06 = 0.2
|
$0.09 + 0.15 + 0.05 + 0.06 = 0.35$ not $0.2$ . Note that $0.35+0.65=1$ as you would hope, since the total probability should be $1$ . So you should have done the calculation $200 \times 0.35=70$ as the expected number of winners. This answers the question "Approximately how many would you expect to win a prize?" . You would subtract this from $200$ to get the expected number of losers, so $200-70=130$ . As an alternative approach, $200 \times 0.65 =130$ gives the same expected number of losers.
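The arithmetic, spelled out as a trivial check:

```python
p_win = 0.15 + 0.09 + 0.05 + 0.06      # P(win) = sum over the four colours
winners = round(200 * p_win)           # expected winners out of 200 players
print(round(p_win, 2), winners, 200 - winners)  # 0.35 70 130
```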
|
|probability|
| 0
|
Lower bounding $|f'(0)|$ for holomorphic $f: \mathbb{D} \to \mathbb{D}$ when $f(0)=0$ and $f(\frac{1}{2})=c>\frac{1}{4}$
|
Let $\mathcal{F}_c = \{ f: \mathbb{D} \to \mathbb{D} \text{ holomorphic } | f(0)=0, f(\frac{1}{2})=c \}$ for $c\in [0, \frac{1}{2})$ . Denote $m_c = \inf_{f \in \mathcal{F}_c } |f'(0)|$ . Prove $$ \begin{align} m_c=0& \text{ if } c \leq \frac{1}{4} \\ m_c>0& \text{ if } c > \frac{1}{4} \\ \end{align} $$ My try: I managed to show the first one by considering the function $f(z)=4cz^2$ . But I'm stuck with the second one. I've tried to calculate $$ c = f(\frac{1}{2}) - f(0) = \int_{[0,\frac{1}{2}]} f'(z) dz = \int_{[0,\frac{1}{2}]} \left( \int_{[0,z]}f''(w)dw + f'(0) \right) dz = \int_{[0,\frac{1}{2}]} \int_{[0,z]}f''(w)dwdz + \frac{1}{2}f'(0) $$ I've then tried to bound $f''(w)$ (for $w\in [0, z] \subset [0, \frac{1}{2}]$ ) with the Cauchy formula (using a contour $\gamma_r$ which is a circle of radius $r$ centered at $w$ ) $$ |f''(w)| = \left| \frac{2!}{2\pi i} \int_{\gamma_r} \frac{f(s)}{(s-w)^3} ds \right| \leq \frac{2}{2\pi} \int_{\gamma_r} \frac{|f(s)|}{|s-w|^3} |ds| \leq \frac{2}{2\pi r^
|
Set $a = f'(0)$ . The function $g(z) = f(z)/z$ has a removable singularity at the origin. We have $|g(z)| \le 1$ for all $z \in \Bbb D$ because of the Schwarz lemma, also $g(0) = f'(0) = a$ and $g(1/2) = 2 c$ . The Schwarz-Pick theorem applied to the function $g$ at $z_1 = 1/2$ and $z_2 = 0$ gives $$ \left| \frac{2c - a}{1-2ca}\right| \le \frac 12 $$ and it follows that $$ 2c - |a| \le |2c-a| \le \frac 12 |1-2ca| \le \frac 12 + c |a| \\ \implies 2c - \frac 12 \le (c+1)|a| \\ \implies |a| \ge \frac{4c-1}{2(c+1)} \, . $$ This proves that $$ m_c = \inf_{f \in \mathcal{F}_c } |f'(0)| \ge \frac{4c-1}{2(c+1)} > 0 $$ for $c > 1/4$ .
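The bound can be sanity-checked numerically on a hypothetical test family (not from the proof): take $f(z) = z\,\dfrac{z+b}{1+bz}$ with $0 < b < 1$. This maps $\mathbb{D}$ to itself, has $f(0)=0$, $f'(0)=b$, and $c=f(1/2)$ is real and positive.

```python
# Check |f'(0)| >= (4c - 1)/(2(c + 1)) across the family f(z) = z*(z + b)/(1 + b*z)
for i in range(1, 100):
    b = i / 100
    c = 0.5 * (0.5 + b) / (1 + 0.5 * b)     # the value f(1/2)
    bound = (4 * c - 1) / (2 * (c + 1))     # claimed lower bound for |f'(0)|
    assert b >= bound - 1e-12               # here |f'(0)| = b
print("lower bound holds across the whole test family")
```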
|
|complex-analysis|
| 1
|
Potential density of Brownian motion killed by local time
|
Suppose $B_t$ is a standard Brownian motion on $\mathbb{R}$ and let $L_t$ be its local time at zero. Let $p_t(x,y)$ be the transition density of $B_t$ , i.e. $p_t(x,y) = \frac{1}{\sqrt{2\pi}}\exp\left(- \frac{(x-y)^2}{2t} \right)$ and let $u^\alpha(x,y)$ be the potential density of $B_t$ , i.e. $u^\alpha(x,y) = \int_0^\infty e^{-\alpha t} p_t(x,y)dt$ . It is well known that $u^0(x,y) \equiv \infty$ ; in other words, there is no Green's function for the 1-d heat equation. Now, let $B^k_t$ be the 1-d Brownian motion killed when $L_t>e$ where $e$ is an independent exponential random variable. It is known that the transition semi-group of $B_t^k$ is given by $$ P_t^kf(x) = \mathbb{E}_x\left[e^{-L_t}f(B_t) \right],\quad f \in L^2(\mathbb{R}),\, t \ge 0. $$ My question is, 1) does the killed process $B^k_t$ have a Green's function? That is, if $u_k^\alpha(x,y)$ is the $\alpha$-potential density of $B^k_t$ , which exists by Fukushima, does $u^0_k(x,y)$ exist? 2) if it exists, can it be expressed in
|
Yes. In fact $$ u_k^0(x,y) = u^0_0(x,y)+c^{-1},\qquad \qquad (1) $$ where $u^0_0$ is the Green's function for the Brownian motion killed upon first hitting $0$ , and the constant $c$ is given by $$ c=\int_{\Bbb R} \Bbb E_x[L_1]\,dx. $$ This comes from a bit of the theory of Brownian excursions from $0$ . In general, if $X$ admits potential densities and the CAF $A$ associated with $\nu$ satisfies $\Bbb P_x[A_\infty>0]=1$ , then the killed process $X^k$ is transient and will admit potential densities. Something like (1) will hold. The first term on the right of (1) should be replaced by the potential density of $X$ killed on hitting the fine support, $F$ , of $A$ . The second term won't be constant, but will involve the theory of the excursions of $X$ from $F$ .
|
|brownian-motion|stochastic-analysis|greens-function|potential-theory|local-time|
| 0
|
Lee Introduction to Smooth Manifolds Proposition 7.17
|
Apologies if this is silly, but I don't quite follow the proof of Proposition 7.17 in John M. Lee's Introduction to Smooth Manifolds. I get that we showed $F : G \rightarrow F(G)$ is a diffeomorphism onto $F(G)$ using Proposition 5.18, but I'm not sure how to prove $F : G \rightarrow F(G)$ is a group isomorphism. Many thanks in advance!
|
This is a consequence of the following bit of elementary group theory: If $G$ is any group and $\varphi\colon G \to H$ is a group homomorphism, then $\varphi(G)$ is a subgroup of $H$ , so if $\varphi$ is injective, then $\varphi\colon G \to \varphi(G)$ is a bijection, hence a group isomorphism.
|
|proof-explanation|lie-groups|differential-topology|
| 1
|
Why does T = S1 work for conics?
|
If we have a conic with equation $ax^2 + 2hxy + by^2 + 2gx + 2fy + c = 0$, or $S = 0$, and a point $P(x_1, y_1)$, then the equation of the chord with $P$ as its midpoint is given by $T = S_1$. $S_1$ is obtained by plugging the point $P$ into $S$: $S_1 = ax_1^2 + 2hx_1y_1 + by_1^2 + 2gx_1 + 2fy_1 + c$. The 'T' form of an equation can be obtained by replacing: $$x^2 \rightarrow xx_1$$ $$y^2 \rightarrow yy_1$$ $$x \rightarrow \frac{x + x_1}{2}$$ $$y \rightarrow \frac{y + y_1}{2}$$ $$xy \rightarrow \frac{xy_1 + x_1y}{2}$$ I don't understand how this works! Could someone help me understand why it does? Edit: The slope of the chord somehow equals the derivative of the conic at the point $(x_1, y_1)$. Can someone explain how that happens?
|
The given equation of the conic is $a x^2 + 2 h x y + b y^2 + 2 g x + 2 f y + c = 0 $. If we define $p = \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} $, then the above conic equation can be written in matrix-vector form as $ p^T Q p = 0 $ where $ Q = \begin{bmatrix} a & h & g \\ h & b & f \\ g & f & c \end{bmatrix} $. Suppose you have two points $p_1$ and $p_2$ on this conic, and define their midpoint as $ p' = \dfrac{1}{2} (p_1 + p_2) $, so that $p_2 = 2 p' - p_1 $. Since $p_2$ is on the conic, $ p_2^T Q p_2 = 0 $. But $p_2 = 2 p' - p_1$, so $ (2 p' - p_1)^T Q (2 p' - p_1) = 0 $. Multiplying this out, you get $ 4 p'^T Q p' - 4 p'^T Q p_1 + p_1^T Q p_1 = 0 $. Since $p_1$ is on the conic as well, $p_1^T Q p_1 = 0 $; therefore the above equation reduces to $ p'^T Q p' = p'^T Q p_1 $. Similarly, we can show that $ p'^T Q p' = p'^T Q p_2 $. Now take an arbitrary point $q$ on the chord connecting $p_1$ and $p_2$; then $ q = (1 - \lambda) p_1 + \lambda p_2
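The two algebraic steps above can be verified symbolically (a sympy sketch, not from the answer): the expansion of $(2p'-p_1)^TQ(2p'-p_1)$, which uses the symmetry of $Q$, and the fact that $p^TQp_1$ reproduces exactly the 'T' replacement rules from the question.

```python
import sympy as sp

a, h, b, g, f, c = sp.symbols('a h b g f c')
x, y, x1, y1 = sp.symbols('x y x1 y1')
Q = sp.Matrix([[a, h, g], [h, b, f], [g, f, c]])
pp = sp.Matrix([x, y, 1])    # stands for the midpoint p'
p1 = sp.Matrix([x1, y1, 1])

# key expansion used in the answer (relies on Q being symmetric)
lhs = ((2 * pp - p1).T * Q * (2 * pp - p1))[0]
rhs = (4 * (pp.T * Q * pp) - 4 * (pp.T * Q * p1) + (p1.T * Q * p1))[0]
print(sp.expand(lhs - rhs))   # 0

# p^T Q p_1 is the 'T' form obtained by the replacement rules
T = a*x*x1 + h*(x*y1 + x1*y) + b*y*y1 + g*(x + x1) + f*(y + y1) + c
print(sp.expand((pp.T * Q * p1)[0] - T))  # 0
```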
|
|geometry|conic-sections|
| 0
|
Finding a matrix for the functor $\Lambda^{p}f$ of functions from antisymmetric tensors onto itself
|
Let $\mathcal{B}$ be an ordered basis for a vector space $V$ of dimension $n$. Denote the basis for the set of antisymmetric tensors ($\Lambda^{p}{V}$) by $\mathcal{B}^{p}$. For $n=3$, let $A=\left( a^{i}_{j} \right )$ be the matrix of the linear map $f: V \to V$ with respect to the basis $\mathcal{B}$. Describe the matrices of $\Lambda^{p}{f}$ with respect to $\mathcal{B}^{p}$ for $p=0,1,2,3$. So far I have gotten $\Lambda^{0}f = \mathbb{1}_{\mathbb{F}}$. And for $p=3$: $\mathcal{B}=\{ e^{123} \} = \{ e^{1} \wedge e^{2} \wedge e^{3} \}$ \begin{align*} (\Lambda^{3}f(e^{123})) (e_{1}, e_{2}, e_{3}) := e^{123} (f(e_{1}), f(e_{2}), f(e_{3})) = e^{123} (a^{\alpha}_{1}e_{\alpha} , a^{\beta}_{2}e_{\beta}, a^{\gamma}_{3}e_{\gamma}) \\ = a^{\alpha}_{1} a^{\beta}_{2} a^{\gamma}_{3} e^{123} (e_{\alpha}, e_{\beta}, e_{\gamma}) \\ = a^{\alpha}_{1} a^{\beta}_{2} a^{\gamma}_{3} \delta_{\alpha}^{1} \delta_{\beta}^{2} \delta_{\gamma}^{3} = a^{1}_{1} a^{2}_{2} a^{3}_{3} \end{align*} I'm not s
|
It seems you are actually referring to $\Lambda^pV^*$ and $\Lambda^pf^*$; these constructions are perfectly valid over $V$ rather than $V^*$, so I was confused at first. Once we fix the basis $\mathcal B$, we can treat the elements of $V$ as column vectors and those of $V^*$ as row vectors with $e^1 = (1,0,0,\dotsc)$, $e^2 = (0,1,0,\dotsc)$, etc. Then $f(v) = Av$ and the dual $f^* : V^* \to V^*$ can be written $f^*(\omega) = \omega A$. Now, $\Lambda^pf^*$ is defined such that it is a homomorphism over $\wedge$, so $$\begin{aligned} {[}\Lambda^2f^*(e^i\wedge e^j)](e_k, e_l) & = [f^*(e^i)\wedge f^*(e^j)](e_k, e_l) = (e^iAe_k)(e^jAe_l) - (e^iAe_l)(e^jAe_k) \\& = a^i_ka^j_l - a^i_la^j_k. \end{aligned}$$ To get a matrix out of this, we have to decide on an order for $\mathcal B^p$ and assign a linear index accordingly. Writing this out in full generality is painful, but it will look something like the following: ordering $\mathcal B^2$ via $$ e^1\wedge e^2,\quad e^1\wedge e^3,\quad e^2\wedge e^3.$$
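A numeric sketch (with a hypothetical $3\times3$ matrix, not from the answer): in that ordered basis, the matrix of $\Lambda^2$ consists of the $2\times2$ minors $a^i_ka^j_l - a^i_la^j_k$, and one can check the compound-matrix multiplicativity $\Lambda^2(AB)=\Lambda^2(A)\,\Lambda^2(B)$ (Cauchy-Binet), while $\Lambda^3$ is just multiplication by $\det A$.

```python
import numpy as np
from itertools import combinations

def wedge2(A):
    """Matrix of Lambda^2 in the basis e1^e2, e1^e3, e2^e3: the 2x2 minors."""
    pairs = list(combinations(range(A.shape[0]), 2))
    return np.array([[A[i, k] * A[j, l] - A[i, l] * A[j, k]
                      for (k, l) in pairs]
                     for (i, j) in pairs])

A = np.array([[1., 2., 0.],
              [3., 1., 4.],
              [0., 5., 2.]])
B = np.array([[2., 0., 1.],
              [1., 1., 0.],
              [0., 3., 1.]])

print(np.allclose(wedge2(A @ B), wedge2(A) @ wedge2(B)))  # True (Cauchy-Binet)
print(np.isclose(np.linalg.det(A), -30.0))                # Lambda^3 is multiplication by det A
```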
|
|vector-spaces|tensors|functors|
| 1
|
A Criterion for Faithful Flatness via an Intersection Property
|
Let $R \subset S$ be an injective map of commutative rings and $M \subset N$ an inclusion of $R$-modules. We identify $M, N$ respectively as $M \otimes_R 1_S \subset N \otimes_R 1_S \subset N \otimes S$. In this answer Haran elaborated that if $R \subset S$ is a faithfully flat extension, then the intersection of $N \otimes_R 1_S$ and $M \otimes_R S$ behaves as nicely as one would naively expect, i.e., $$M \otimes_R 1_S = (N \otimes_R 1_S) \cap M \otimes_R S. $$ Remark: As Aphelli pointed out, the intersection inside $N \otimes_R S$ with $M \otimes_R S$ should be read as the intersection with the image of the induced map $M \otimes S \to N \otimes S$, which (if we do not assume flatness) need not be injective. Question: What about a converse statement? Can this be used as a sufficient criterion for faithful flatness (at least if we already assume $R \subset S$ to be flat)? That means: if for all inclusions $M \subset N$ of $R$-modules we have $M \otimes_R 1_S = (N \otimes_R
|
Assuming that $R\to S$ is injective and flat, the answer is yes: Let $M\subset R$ be an ideal and let $N=R$ . Then from the assumption we get that $M\otimes_R1_S=R\otimes_R1_S\cap M\otimes_RS\subset R\otimes_RS$ . Via the obvious identifications, this becomes $M=R\cap MS\subset S$ . Thus, if $MS=S$ , then $M=R$ . Since $M$ was arbitrary (and under the assumption that $S/R$ is flat to begin with), it follows that $S/R$ is faithfully flat.
|
|abstract-algebra|commutative-algebra|flatness|
| 1
|
Why is this continuous function not differentiable?
|
Let $g : \mathbb{R} \to \mathbb{R}$, such that \begin{equation} g(x)=\begin{cases} 2x + 1 & \text{ if } x < 0 \\ e^x & \text{ if } x \geq 0 \end{cases} \end{equation} I am trying to understand the properties of this function. I already know that it is continuous, as its components are continuous functions (because $2x+1$ is linear and $e^x$ is exponential). But I don't understand why it is not differentiable, because I think that the derivatives of both of the component functions exist at any given point within the specified ranges ($x < 0$ for $2x + 1$ and $x \geq 0$ for $e^x$). Can you please explain the differentiability of this function?
|
The derivative of the left-half of the function $(2x+1)$ is $2$ . The derivative of the right-half of the function for all other $x$ -values is $e^x$ . Using this information, we can formulate another piecewise function that shows the derivative of $g(x)$ . $g'(x)= \begin{cases} 2 \,, & \text{for} \;\;\; x \lt 0\\ e^x \,, & \text{for} \;\;\; x \ge 0 \end{cases} $ This function seems to support the original claim you made. It clearly shows that $g'(x)$ exists for all real numbers, including $x=0$ . However, this is a trap! We have to look at the derivative in the context of the original function. At $x=0$ , $g(x)$ has a corner: the one-sided slopes disagree ($2$ from the left, $1$ from the right), so no single tangent slope exists there. And that ruins the whole point of the derivative. If I showed you the function $g(x)$ with the right-half erased, and asked you to draw a line continuing the slope of the left-half, you would probably draw a line continuing with a slope of $2$ . It would be impossible for you
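A quick numeric check (a sketch) of the two one-sided slopes at $0$:

```python
import math

def g(x):
    return 2 * x + 1 if x < 0 else math.exp(x)

h = 1e-6
left  = (g(0) - g(-h)) / h   # slope from the left:  2, from 2x + 1
right = (g(h) - g(0)) / h    # slope from the right: 1, from e^x at 0
print(round(left, 3), round(right, 3))  # 2.0 1.0
```

Since the two one-sided difference quotients converge to different values, $g'(0)$ does not exist even though $g$ is continuous at $0$.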
|
|functions|exponential-function|
| 0
|
Find a continuous function from the Moore plane to $\mathbb{R}$
|
Find a continuous function $f$ from the Moore plane to $\mathbb{R}$ such that $f(o) = 1$ and $f(X) = 0$, where $o$ denotes the origin and $X$ denotes the $x$-axis without the origin. My attempt: based on the definition of completely regular spaces, this question is related to that. I defined $f$ so that $f(p) = 1/2$ where $p$ is a point with positive $y$-coordinate; unfortunately this function is not continuous. Thanks for your time.
|
It seems that you already know that the space is completely regular. So all you need to prove is that $X$ is closed in it. Because then it is a simple matter of applying the definition of being completely regular to $(0,0)$ and $X$ . So why is $X$ closed? The simplest way is to show that its complement $X^c$ is open. If $(x,y)$ is such that $y>0$ then consider $B((x,y),y/2)$ which is an open (euclidean) ball around $(x,y)$ of radius $y/2$ . Then every element in $B((x,y),y/2)$ also has second coordinate greater than $0$ . Which means that $B((x,y),y/2)\subseteq X^c$ . And so every element with positive second coordinate is contained in $X^c$ together with a small enough neighbourhood. So the only point left is $(0,0)\in X^c$ . But by the definition of the Moore plane, it has an open neighbourhood in the form of say $B((0,1),1)\cup\{(0,0)\}$ . And this is a subset of $X^c$ as well. All in all, $X^c$ is open and thus $X$ is closed. And to see why the Moore plane is regular, you can have a look here: Pr
|
|general-topology|continuity|
| 0
|
Why are these primitives containing $\arcsin x$ equal up to a constant?
|
While trying to solve $\displaystyle\int\sqrt{14x-x^2}\;dx$ , I obtained three different primitives in three different ways: Method 1: completing the square $c(x)=\dfrac{1}{2}\left[49\arcsin\left(\dfrac{x}{7}-1\right)+(x-7)\sqrt{14x-x^2}\right]$ Method 2: using the substitution $x=u^2$ $s(x)=49\arcsin\left(\sqrt{\dfrac{x}{14}}\right)+\dfrac{1}{2}(x-7)\sqrt{14x-x^2}$ Method 3: Wolfram $w(x)=\dfrac{1}{2}(x-7)\sqrt{14x-x^2}-49\arcsin\left(\sqrt{1-\dfrac{x}{14}}\right)$ At first, I thought some mistakes were made, as I was not able to reconcile the different appearances of the $\arcsin(\cdot)$ terms. After some inspection using Desmos, it turns out all 3 functions are equal up to a constant. In fact, they are separated by an amount of $\dfrac{49\pi}{4}$ : All three functions (without $+C$ ): All three functions (with the appropriate $+C$ ): So at this point, I'm pretty much convinced all three methods agree and are valid. However, I would like to know how exactly one can prove the $49\pi/4
|
Short answer: The relation between $\sin t$ and $\cos t$ leads to a relation between $\arcsin $ and $\arccos$ of corresponding arguments. Then the formula connecting $\cos (2t)$ and either $\cos t$ or $\sin t$ leads to a relation that lets us rewrite $\arccos$ for an argument in terms of $\arccos$ for half of argument (or double of it, seen reciprocally). Details: Let us denote by $X\in[0,1]$ the value $x/14$ (for $x\in[0,14]$ ), then by $t$ the value of the arcsine computed in $\sqrt{X}=\sqrt{x/14}$ . We want to show in the world of functions taken modulo constants : $$ \frac 12\arcsin\left(\frac x7-1\right) \equiv -\arcsin\sqrt{1-\frac x{14}} \equiv \underbrace{\arcsin\sqrt{\frac x{14}}}_{=t} \qquad \text{ modulo constants ,} $$ or in other words $$ \frac 12\arcsin\left(2X-1\right) \equiv -\arcsin\sqrt{1-X} \equiv \underbrace{\arcsin\sqrt X}_{=t} \qquad \text{ modulo constants ,} $$ Then $\sin t=\sqrt X$ , $\sin^2 t=X$ , $\cos^2 t=1-X$ , $\cos t=\sqrt {1-X}$ , $t=\arccos\sqrt{1-X}$ .
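A quick numerical check of these constant gaps (a Python sketch; `c`, `s`, `w` are my own labels for the three primitives from the question):

```python
import math

# The three antiderivatives from the question (Method 1, 2, and Wolfram).
def c(x):
    return 0.5 * (49 * math.asin(x / 7 - 1) + (x - 7) * math.sqrt(14 * x - x * x))

def s(x):
    return 49 * math.asin(math.sqrt(x / 14)) + 0.5 * (x - 7) * math.sqrt(14 * x - x * x)

def w(x):
    return 0.5 * (x - 7) * math.sqrt(14 * x - x * x) - 49 * math.asin(math.sqrt(1 - x / 14))

# The differences s - c and c - w should both equal the constant 49*pi/4
# (hence s - w = 49*pi/2) at every x in (0, 14).
for x in (1.0, 3.5, 7.0, 10.0, 13.0):
    assert abs((s(x) - c(x)) - 49 * math.pi / 4) < 1e-9
    assert abs((c(x) - w(x)) - 49 * math.pi / 4) < 1e-9
```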
|
|calculus|integration|inverse-trigonometric-functions|
| 0
|
About Group of nth roots of unity
|
Let $z \in \mathbb{C}$ and $n$ an integer $>3$ . Define $U$ as the set of all $\lambda \in \mathbb{C}$ such that $\lambda^n = 1$ . I aim to prove the following implication: $ (\forall \lambda \in U; (\lambda -1)z + 1 \in U \cup \{0\}) \implies z = 0 \quad \text{or} \quad z = 1 $ I am uncertain if this statement is true.
|
$z=1/2$ works when $n=2.$ Look at $1/2(1-1)+1=1\in U.$ And $1/2((-1-1))+1=0\in U\cup\{0\}.$
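A quick check of this counterexample's arithmetic (Python):

```python
# Check the n = 2 counterexample: z = 1/2, U = {1, -1}.
z = 0.5
U = [1, -1]
images = [(lam - 1) * z + 1 for lam in U]
# lambda = 1 gives 1 (in U), lambda = -1 gives 0, so every image lies in
# U ∪ {0}, yet z is neither 0 nor 1.
for v in images:
    assert v in U or v == 0
assert z not in (0, 1)
```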
|
|complex-analysis|complex-numbers|roots-of-unity|
| 0
|
What is the relationship between the Laplace equation and the Wave equation?
|
What is the relationship between the Laplace equation: $$ (\delta^2_x + \delta^2_y)\phi = 0 $$ and the Wave equation: $$ (\delta^2_x - \delta^2_y)\phi = 0 $$ What is the relationship between the Laplace equation: $$ (\delta^2_{x0} + \delta^2_{x1} + \delta^2_{x2} + \delta^2_{x3})\phi = 0 $$ and the Wave equation: $$ (\delta^2_{x0} - \delta^2_{x1} - \delta^2_{x2} - \delta^2_{x3})\phi = 0 $$
|
Making the substitutions: $ x:= x $ ; $ y:= iy $ transforms the 2-dimensional Laplace equation into the 2-dimensional Wave equation. Making the substitutions: $ x_0:= x_0 $ ; $ x_1:= ix_1 $ ; $ x_2:= jx_2 $ ; $ x_3:= kx_3 $ transforms the 4-dimensional Laplace equation into the 4-dimensional Wave equation. These transformations move the Laplace equation from Euclidean space into reciprocal space. The Wave equation is thus just Laplace's equation in reciprocal space!
|
|partial-differential-equations|complex-numbers|mathematical-physics|quaternions|
| 0
|
Finding the ideal phase shift when manually fitting a trigonometric curve
|
So essentially for a school project I've been given a scatterplot with an array of points shaped in a rough sinusoidal wave, and I need to plot a line of best fit manually. So far I have all the values, except for c (horizontal/phase shift). I want to find the best way to get the most accurate value. So far my equation is this: y = 3.685 cos 29.1(x+c) + 5.455. Desmos approximates the value of c to 1.22, as can be seen here: But obviously I have to calculate the phase shift myself so this isn't really helpful. Here is my graph so far: https://imgur.com/a/YCr9mjj The curve in blue is the Desmos model, fitted to the points on the scatterplot. The one in red is what I've got, using the c value of 1.22 that Desmos automatically calculates and plugs into my function. How can I calculate the c value myself? Desmos file if anyone wants to play with this or see the values for themselves: https://www.desmos.com/calculator/us3bchvbwq Attempts: I tried substituting x and y coordinates of a point on my
|
You need to fit the following curve $$ y = A \sin( k x) + B \cos( k x) + C \tag{1}$$ for the parameters $(A,B,C)$ . Then the amplitude of the oscillation is $R = \sqrt{A^2+B^2}$ and the phase is $\phi = \arctan \frac{B}{A}$ since the above is equivalent to $$ y =R \sin ( k x+\phi) +C \tag{2}$$
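A numerical check that forms $(1)$ and $(2)$ agree (a Python sketch; the sample values of $A$, $B$, $C$, $k$ are arbitrary):

```python
import math

# Check that A*sin(kx) + B*cos(kx) + C equals R*sin(kx + phi) + C with
# R = sqrt(A^2 + B^2) and phi = atan2(B, A). atan2 handles A <= 0 safely,
# unlike a bare arctan(B/A).
A, B, C, k = 3.0, -1.5, 5.455, 29.1
R = math.hypot(A, B)
phi = math.atan2(B, A)
for x in [i * 0.01 for i in range(200)]:
    lhs = A * math.sin(k * x) + B * math.cos(k * x) + C
    rhs = R * math.sin(k * x + phi) + C
    assert abs(lhs - rhs) < 1e-9
```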
|
|trigonometry|regression|
| 0
|
Bijection between permutations of even length cycles and pairs of factors
|
I was trying to solve Exercise 3.13.15 in Cameron's Combinatorics: Topics, Techniques, Algorithms. It goes like this: (a) Let $n = 2k$ be even, and $X$ a set of $n$ elements. Define a factor to be a partition of $X$ into $k$ sets of size $2$ . Show that the number of factors is equal to $(2k-1)!!=1\cdot 3\cdot 5\cdots(2k-1)$ . (b) Show that a permutation of $X$ interchanges some $k$ -subset with its complement if and only if all its cycles have even length. Prove that the number of such permutations is $((2k - 1)!!)^2$ . [HINT: any pair of factors defines a partition of $X$ into a disjoint union of cycles, and conversely. The correspondence is not one-to-one, but the non-bijectiveness exactly balances.] I was able to solve the first part, but I am stuck in the second part. Specifically, I don't understand the hint: how can two factors define a partition of $X$ into a disjoint union of cycles? Can anyone shed some light on it? Somehow one should be able to show that this defines a bijection, which i
|
Let $E$ be the set of permutations of $X$ where all cycles have even length. Let $F$ be the set of ordered pairs $(f_1,f_2)$ , where $f_1,f_2$ are both factors of $X$ . I will define a bijection $\phi:E\to F$ , with inverse $\psi:F\to E$ , which proves that $|E|=|F|$ . It should be clear that $|F|=((2k-1)!!)^2$ , completing the proof. Given $\pi \in E$ , let $(\sigma_1,\dots,\sigma_{2r})$ be a particular cycle of $\pi$ . To be unambiguous, assume that $\sigma_1$ is the smallest element of $\{\sigma_1,\dots,\sigma_{2r}\}$ . There are two cases: If $r=1$ , then $\{\sigma_1,\sigma_2\}$ will be a pair in both $f_1$ and $f_2$ . If $r\ge 2$ , then the pairs $\{\{\sigma_1,\sigma_2\},\dots,\{\sigma_{2i-1},\sigma_{2i}\},\dots,\{\sigma_{2r-1},\sigma_{2r}\}\}$ will all occur in $f_1$ , while the pairs $\{\{\sigma_2,\sigma_3\},\dots,\{\sigma_{2i},\sigma_{2i+1}\},\dots,\{\sigma_{2r},\sigma_1\}\}$ will occur in $f_2$ . Repeating this for all cycles in $\pi$ , both $f_1$ and $f_2$ will be factors. The pai
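For what it's worth, the count $((2k-1)!!)^2$ from the exercise can be brute-force verified for small $k$ (a Python sketch; the helper names are mine):

```python
from itertools import permutations

def all_cycles_even(perm):
    """Check whether every cycle of perm (a tuple mapping i -> perm[i]) has even length."""
    n = len(perm)
    seen = [False] * n
    for start in range(n):
        if seen[start]:
            continue
        length, i = 0, start
        while not seen[i]:
            seen[i] = True
            i = perm[i]
            length += 1
        if length % 2:
            return False
    return True

def double_factorial(m):
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

# Brute-force check of |E| = ((2k-1)!!)^2 for small k.
for k in (1, 2, 3):
    n = 2 * k
    count = sum(1 for p in permutations(range(n)) if all_cycles_even(p))
    assert count == double_factorial(2 * k - 1) ** 2
```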
|
|combinatorics|permutations|combinatorial-proofs|
| 1
|
How to prove a matrix $M$ has incoherence property?
|
By incoherence, I am referring to equation 1.18 in the paper The Power of Convex Relaxation: Near-Optimal Matrix Completion Given a rank-1 $n\times n$ matrix $M = x \mathbf{1}^T$ , $x \in \mathbb{R}^n$ where each $x_i \in [1, \sqrt{\mu_0}] $ . The paper states that one can easily show the matrix obeys the incoherence property with parameter $\mu_0$ . However, I am not exactly sure how to prove it. The incoherence property with parameter $\mu_0$ is satisfied if \begin{equation} \|P_U e_a\|^2 \le \frac{\mu_0 r}{n}, \quad \|P_V e_b\|^2 \le \frac{\mu_0 r}{n}, \end{equation} Let $u_i$ and $v_i$ be singular vectors, and let $P_U$ and $P_V$ be defined as follows: $P_U = \sum_{i \in [r]} u_i u_i^*$ $P_V = \sum_{i \in [r]} v_i v_i^*$
|
Hint : The singular value decomposition of the rank $r = 1$ matrix $M = x\vec{1}^T$ is $\sigma_1u_1v_1^T$ where $u_1 = \dfrac{x}{\|x\|}$ , $v_1 = \dfrac{\vec{1}}{\|\vec{1}\|} = \dfrac{1}{\sqrt{n}}\vec{1}$ , and $\sigma_1 = \|x\| \cdot \|\vec{1}\| = \sqrt{n}\|x\|$ . So, the rank- $1$ projection matrices are $P_U = u_1u_1^T = \dfrac{xx^T}{\|x\|^2}$ and $P_V = v_1v_1^T = \dfrac{1}{n}\vec{1}\vec{1}^T$ . Can you show that these satisfy the bounds $\|P_Ue_a\|^2 \le \dfrac{\mu_0 }{n}$ and $\|P_Ve_b\|^2 \le \dfrac{\mu_0 }{n}$ ? It may help to note that $\|P_Ue_a\|^2$ is the sum of the squares of the entries in the $a$ -th column of $P_U$ .
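A numerical sketch of the hint's bound (Python; the values of $\mu_0$ and $n$ here are arbitrary choices):

```python
import math, random

# Sketch of the bound for M = x 1^T with entries x_i in [1, sqrt(mu0)]:
# P_U = x x^T / ||x||^2, so ||P_U e_a||^2 = x_a^2 / ||x||^2 <= mu0 / n
# (since x_a^2 <= mu0 and ||x||^2 >= n), and P_V = (1/n) 1 1^T gives
# ||P_V e_b||^2 = 1/n <= mu0 / n (since mu0 >= 1).
random.seed(0)
mu0, n = 4.0, 50
x = [random.uniform(1.0, math.sqrt(mu0)) for _ in range(n)]
norm_sq = sum(v * v for v in x)
for a in range(n):
    # a-th column of P_U is (x_a / ||x||^2) * x; its squared norm is x_a^2 / ||x||^2
    col_norm_sq = sum((x[a] * x[i] / norm_sq) ** 2 for i in range(n))
    assert abs(col_norm_sq - x[a] ** 2 / norm_sq) < 1e-12
    assert col_norm_sq <= mu0 / n + 1e-12
# Each column of P_V is (1/n) * 1, so ||P_V e_b||^2 = n * (1/n)^2 = 1/n
assert abs(n * (1.0 / n) ** 2 - 1.0 / n) < 1e-15
assert 1.0 / n <= mu0 / n
```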
|
|convex-optimization|svd|matrix-completion|
| 1
|
Understanding the Inclusion/Exclusion Principle
|
Problem : A renovation of an arena proposes to give the seats colors from a color scheme with $5$ different colors. In each row, all $5$ colors must be used at least once. In how many different ways can this be done for a row of $20$ seats? Answer : $5^{20} - \binom{5}{1} (4^{20}) + \binom{5}{2} (3^{20}) - \binom{5}{3} (2^{20}) + \binom{5}{4} (1^{20})$ (Where " $\binom{5}{1}$ " would represent the value " $5$ choose $1$ ") I have a decent understanding of the inclusion/exclusion principle, yet can't seem to wrap my head around the logic behind each part of the answer. Any advice is appreciated to clear things up. Thank you.
|
It should be clear that $5^{20}$ is the total number of ways to assign a color to each of the $20$ seats. We can instead answer the question How many colorings do not use at least one color? and subtract the answer to that question from $5^{20}$ to get the desired answer. Assume for the sake of simplicity that the colors are red, orange, yellow, green, and blue. Let's first ask the question How many colorings do not use red? This is simple to calculate: for each seat, we just have four options (orange, yellow, green, blue). There are $4^{20}$ ways to do this. By symmetry, there are also $4^{20}$ colorings that avoid orange, $4^{20}$ colorings that avoid yellow, and so on. This gives a total of $5 (4^{20})$ colorings. But there's an issue if we stop here. Suppose a coloring did not use either red or orange . How many times did we count that so far? Well, it was counted once for the colorings that excluded red, and another time for the colorings that excluded orange - in other words, we
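The formula can be brute-force checked on a smaller instance (a Python sketch; $3$ colors and $5$ seats are my stand-in sizes, since $5^{20}$ cases are too many to enumerate):

```python
from itertools import product
from math import comb

# Inclusion-exclusion count of colorings of `seats` seats with `colors` colors
# in which every color appears at least once, checked against brute force.
colors, seats = 3, 5
formula = sum((-1) ** i * comb(colors, i) * (colors - i) ** seats
              for i in range(colors))
brute = sum(1 for row in product(range(colors), repeat=seats)
            if len(set(row)) == colors)
# 3^5 - C(3,1)*2^5 + C(3,2)*1^5 = 243 - 96 + 3 = 150
assert formula == brute == 150
```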
|
|discrete-mathematics|inclusion-exclusion|
| 0
|
How to construct a concrete sequence $\{(a_i,b_i]\}$ such that the inequality $\sum_i(F(b_i)-F(a_i)) < \epsilon$ holds for all $\epsilon>0$?
|
I got stuck on the following problem: Let $F:\mathbb{R}\to\mathbb{R}$ be a bounded, nondecreasing, and right-continuous function that satisfies $\lim_{x\to-\infty}F(x)=0$ . Define a function $\mu^*:\mathcal{P}(\mathbb{R})\to[0,+\infty]$ by letting $\mu^*(A)$ be the infimum of the set of sums $\sum_{n=1}^{\infty}(F(b_n)-F(a_n))$ , where $\{(a_n,b_n]\}$ ranges over the set of sequences of half-open intervals that cover $A$ , in the sense that $A \subseteq \bigcup_{n=1}^{\infty}(a_n,b_n]$ . Prove that $\mu^*(\emptyset) = 0$ . I would like to show it in this way: For each positive number $\epsilon$ there is a sequence $\{(a_i,b_i]\}$ of half-open intervals (whose union necessarily includes $\emptyset$ ) such that $\sum_i(F(b_i)-F(a_i))<\epsilon$ . However, I failed to give a concrete argument showing that the above claim is true; for example, I couldn't construct such a concrete sequence $\{(a_i,b_i]\}$ such that the inequality $\sum_i(F(b_i)-F(a_i))<\epsilon$ holds for all $\epsilon>0$ . Could someone pleas
|
By right continuity there exists $a_n \in (0,1)$ such that $F(n+a_n)-F(n) < \epsilon/2^n$ . Hence $\sum_n [F(n+a_n)-F(n)] < \epsilon$ .
|
|real-analysis|sequences-and-series|analysis|measure-theory|proof-writing|
| 1
|
Integrate : $\int e^{x+e^{x+e^x}}\, dx$
|
Question : $$\int e^{x+e^{x+e^x}} dx$$ source : Integration Techniques and Tricks University of Miami Mathematics Union Trevor Birenbaum My attempt: can be rewritten as $$\int e^x (e^{e^x})^{(e^{e^x})} dx$$ let $z=e^{e^x} \implies dz = e^{e^x} e^x dx$ so the integral transforms to $$\int \frac{z^z}{z}dz$$ now is it even possible to solve this
|
NOTE: The answer has been improved using Тyma's suggestion in the comment section. First, a couple of observations. The function $z^z/z$ seems to be continuous. I checked it using the Desmos graphing calculator. The picture is given below: Second, the integral $\int z^z$ dz is called the Bernoulli integral and seems similar to your problem. I recommend watching this YT video that solves it because I used a similar approach. With that in mind, here is how I solved the problem. Let us first reformulate your integral: $$I=\int{z^z\over z}dz=\int{z^{-1}z^z}dz \tag 1$$ Keeping in mind that $z^{z}=e^{\ln(z^{z})}=e^{{z}\ln(z)}$ , we write: $$I = \int z^{-1}e^{{z}\ln(z)}dz \tag 2$$ The function $e^x$ has a nice Taylor expansion: $$e^x = \sum_{n=0}^\infty { x^n\over n!} \tag 3$$ where $n \in \mathbb{N}_0$ . This means we can reformulate the integral as: $$I = \int z^{-1}\sum_{n=0}^\infty { \big({{z}\ln(z)}\big)^n\over n!} dz \tag 4$$ The sum and the factorial do not have to be inside the integr
|
|integration|indefinite-integrals|
| 1
|
Changing the parameter in the formula defining a function (First order logic)
|
Let $\mathcal{A}$ be an L-structure with base set $A$ and $f$ a definable function. $$y=f(x)\Leftrightarrow \mathcal{A}\models \psi(x,y,\overline{a})$$ If we put another parameter $\overline{b}$ in place of $\overline{a}$ and we set $$B:=\{(x,y)\in A^2, \mathcal{A}\models\psi(x,y,\overline{b})\}$$ I think that the set $B$ could be empty. And in the case where it is not empty it does not need to define a function. What I mean is, there may exist two different elements $y_1$ and $y_2$ such that $(x,y_1)\in B$ and $(x,y_2)\in B$ . My question: Is my understanding correct? And can we find a concrete example?
|
Yes. There are lots of examples. Take for instance $\mathcal{A}=(\mathbb{Z};+)$ and consider $$\psi(x,y,a)\equiv (a+a=a\wedge y=x+x)\vee (a+a\not=a\wedge \theta)$$ for any $\theta$ whatsoever. When we set the parameter $a$ to $0$ we'll get the function $x\mapsto 2x$ , while any other choice of parameter will behave as horribly as you want depending on what $\theta$ is (e.g. take $\theta\equiv\top$ to get the total relation and take $\theta=\perp$ to get the empty relation, neither of which is a function defined on all of $\mathcal{A}$ ).
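A finite-window illustration of this example (a Python sketch; restricting $\mathbb{Z}$ to a window is my simplification, and I take $\theta\equiv\top$):

```python
# Illustrate psi(x, y, a) = (a+a = a and y = x+x) or (a+a != a and theta)
# over the structure (Z, +), restricted to a finite window of integers.
window = range(-10, 11)

def psi(x, y, a, theta=True):
    return (a + a == a and y == x + x) or (a + a != a and theta)

for a in (0, 1):
    B = {(x, y) for x in window for y in window if psi(x, y, a)}
    ys = {x: {y for (xx, y) in B if xx == x} for x in window}
    if a == 0:
        # parameter 0 carves out (part of) the graph of x -> 2x: a function
        assert all(ys[x] <= {2 * x} for x in window)
    else:
        # any other parameter (with theta = true) gives the total relation,
        # which is not a function
        assert all(len(ys[x]) > 1 for x in window)
```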
|
|first-order-logic|model-theory|
| 1
|
Is it true that the probability of n hyperplanes with a maximum of K-2 dimensions intersecting in K-dimensional ambient space is 0?
|
For back-ground, I'm not well-schooled in higher-dimensional geometry, but I'm currently learning statistical data-science in which many methods rely on the properties of higher-dimensional space. In Support-Vector Machine (SVM) clustering, the goal is to find a hyper-plane which will cleanly divide the data-points (vectors of n-dimensions) into labelled groups, and to use that dividing line to predict future data. SVM is especially good for binary classification. However, it might be the case that no such hyper-plane exists in n-dimensions (especially common if n is low, like 2 or 3). One interesting technique used to address this problem is to randomly assign each point a value in an additional dimension (e.g. in 2-d space, assign each data-point a random z-value, transforming the 2-d space into a new 3-d space in which it is more likely that a hyper-plane exists which will cleanly divide the data in two). I was wondering how effective this process is, and after thinking for awhile,
|
It's nice to think of codimension for this sort of thing. The codimension of a subspace of $\mathbb R^n$ is defined to be $n$ minus the dimension of the subspace. So a line in 3-space has co-dimension 2, and a plane in 4-space has dimension 2 and codimension 2. The great theorem is that if you have subspaces $U$ and $V$ , the codimension of $U \cap V$ is the sum of the codimensions of $U$ and $V$ (in the "generic" case, i.e., avoiding things like the case in 3-space where $U$ is a plane and $V$ is a line that just happens to be completely contained in $U$ -- that's the non-generic case). [Special case: if the codimension is larger than the dimension of the ambient space, the intersection can still be the 0-dimensional subspace consisting of just the origin.] The big idea of non-generic cases is that they have probability zero under a uniform distribution on the (compact) set of $k$ -planes in $n$ space. (The more general theorem is that in all cases, the codimension of $U \cap V$ is at
|
|linear-algebra|geometry|vector-spaces|euclidean-geometry|analytic-geometry|
| 0
|
About Group of nth roots of unity
|
Let $z \in \mathbb{C}$ and $n$ an integer $>3$ . Define $U$ as the set of all $\lambda \in \mathbb{C}$ such that $\lambda^n = 1$ . I aim to prove the following implication: $ (\forall \lambda \in U; (\lambda -1)z + 1 \in U \cup \{0\}) \implies z = 0 \quad \text{or} \quad z = 1 $ I am uncertain if this statement is true.
|
The result is true for $n \ge 3$ . We assume $z \ne 0$ since that works as noted and we need to prove that $z=1$ . Note that for a fixed $z \ne 0$ all $(\lambda -1)z + 1$ are distinct. Note that for any root of unity $\lambda \ne 1$ of order $n$ with argument $\theta \in (-\pi,\pi]$ we have that $\arg (\lambda-1)=\frac{\theta+\pi}{2}$ . Since $n \ge 3$ there is at least one such $\lambda \ne 1$ such that $(\lambda-1)z=\delta-1$ for some $\delta \in U$ (since we can take $0$ only once and there are at least two $\lambda \ne 1$ in $U$ ). In particular modulo $2\pi$ we have that $\arg z=\arg (\delta-1)-\arg(\lambda-1)=\frac{\pi k}{n}$ for some integral $k$ . Now if $(\lambda -1)z=-1$ we get that (again modulo $2\pi$ ) $\arg z = \pi-\pi/2-\pi k_1/n$ and that cannot be $\pi k/n$ unless $n$ is even, so if $n$ is odd we cannot take $0$ and $(\lambda-1)z+1 \in U$ . But then when $\lambda=\lambda_2, ...\lambda_n$ all the roots distinct from $1$ we have $(\lambda_k-1)z$ runs through all $\lambda_k-1$ also so mu
|
|complex-analysis|complex-numbers|roots-of-unity|
| 1
|
Dudley's Inequality can be Loose (Vershynin 8.1.12)
|
Let $e_1,...,e_n$ denote the canonical basis vectors in $\mathbb{R}^n$ . Consider the set $$T = \left \{ \frac{e_k}{\sqrt{1 + \log k}}, k =1,...,n\right \}$$ Show that $$\int_{0}^\infty \sqrt{\log \mathcal{N}(T,d,\epsilon)} d\epsilon \rightarrow\infty$$ as $n \rightarrow \infty$ . Here $\mathcal{N}(T,d,\epsilon)$ is the size of the smallest $\epsilon$ -net of $T$ with respect to the metric $d$ . An $\epsilon$ -net in this context is a set $N \subset T$ such that for all $t \in T$ there is an $s \in N$ such that $d(s,t) \leq \epsilon$ . I assume that $d(x,y) = ||x-y||_2$ , although this isn't explicitly stated. Also note that $\epsilon$ -nets must be subsets of $T$ . I also write $\log$ when really I mean $\ln$ . I've spent quite a bit of time working on this exercise and from what I can tell, unless I've missed the trick completely, is that this example actually does not work. Here is my argument. For convenience, let $v_k = \frac{e_k}{\sqrt{1+\log k}}$ . Let $f_n(k):\{1,...,n-1\} \right
|
Here's an alternative solution $$\int_{0}^{\infty} \sqrt{\log{N(T,\epsilon)}} = \int_{0}^{diam(T)} \sqrt{\log{N(T,\epsilon)}}$$ Make the change of variables $\epsilon = 2/\sqrt{\log{t}}$ and define $diam(T)=d(T)$ . Then $$\int_{0}^{diam(T)} \sqrt{\log{N(T,\epsilon)}} d\epsilon = \int_{e^{4/d(T)^{2}}}^{\infty} \sqrt{\log{N(\frac{2}{\sqrt{\log{t}}})}} \frac{dt}{t (\log{t})^{3/2}} \geq \int_{e^{4/d(T)^{2}}}^{\infty} \sqrt{\log{P(\frac{1}{\sqrt{\log{t}}})}} \frac{dt}{t (\log{t})^{3/2}}$$ Where $P$ is the packing number. Now, note that it is a decreasing function of it's argument so $$\int_{0}^{diam(T)} \sqrt{\log{N(T,\epsilon)}} d\epsilon \geq \sum_{m = \lceil e^{4/d(T)^{2}}\rceil}^{\infty} \int_{m}^{m + 1}\frac{\sqrt{\log{m}}}{t (\log{t})^{3/2}}dt \geq \sum_{m = \lceil e^{4/d(T)^{2}}\rceil}^{\infty} \int_{m}^{m + 1}\frac{(\log{m})^{1/2}}{(m+1)(\log{(m+1)})^{3/2}}dt$$ Moreover $$\int_{0}^{diam(T)} \sqrt{\log{N(T,\epsilon)}} d\epsilon \geq \sum_{m = \lceil e^{4/d(T)^{2}}\rceil}^{\infty} \fr
|
|probability|probability-theory|inequality|concentration-of-measure|
| 0
|
Cannot find the dual function
|
The picture shows an example of solving the integer problem with a decomposition method. However, what I am trying to ask is about the dual function part instead of the integer part. As you can see, the integer variables are relaxed. However, I cannot find the same dual function as the author shows. The dual function should be as: $$d(\lambda)=-b^T\lambda+min_{x\in X}\sum_{i\in I}(c_i^T+\lambda^TH_i)x_i \label{1},$$ where $b$ is the shared resource ( $11.1$ in the problem), $I$ is the set of agents. In this case, it should be: $$d(\lambda)=-11.1\lambda+min_{x\in X}\sum_{i=1}^4(c_i^T+\lambda^TH_i)x_i.$$ Let us focus on the minimization part, which is: $$\min_{x\in X}\sum_{i=1}^4(c_i^T+\lambda^TH_i)x_i.$$ Then for the first agent $i=1$ , denote by $x_{11}, x_{21}$ its two variables, we have, $$\min_{x_1\in X_1}(c_1^T+\lambda^TH_1)x_1=(x_{11}+x_{21}+\lambda^T(x_{11}+x_{21}))=(\lambda^T+1)x_{11}+(\lambda^T+1)x_{21}.$$ Similarly, we have for $i=2$ $$\min_{x_2\in X_2}(c_2^T+\lambda^TH_2)x_2=(
|
The intent in the given solution appears to be that we don't drop the integrality constraint on $x_1, x_2, x_3, x_4$ . With this modification, though your overall approach is correct, the optimal choice of primal variables for $\lambda \in [0,2/5]$ is $$x_1 = (0,0), \quad x_2 = (2,0), \quad x_3 = (0,1), \quad x_4 = (1,0).$$ This changes $2.1(5\lambda-2)$ to $2(5\lambda-2)$ , $1.1(\lambda-1)$ to $1(\lambda-1)$ , and $1.2(\lambda-3)$ to $1(\lambda-3)$ , and altogether we get $$-11.1\lambda + 2(5\lambda-2) + (\lambda-1) + (\lambda-3) = -8 + 0.9\lambda,$$ just as in the given solution. There is another discrepancy that I believe is simply a calculation error. For higher values of $\lambda$ , we will drop the $2(5\lambda-2)$ term, then the $(\lambda-1)$ term, then the $(\lambda-3)$ term as each one becomes positive, because we can always switch the corresponding $x_i$ to $(0,0)$ . This gives us: $$d(\lambda) = \begin{cases}-8 + 0.9\lambda & 0 \le \lambda \le 2/5 \\ -4 - 9.1\lambda & 2/5 \le \lambda \le 1 \\ -3 - 10.1\lambda & 1 \le \lambda \le 3 \\ -11.1\lambda & \lambda \ge 3\end{cases}$$ (It is very easy to subtract $10$ fr
|
|optimization|convex-optimization|linear-programming|lagrange-multiplier|duality-theorems|
| 0
|
Prove there exists such $k$
|
Let $f$ be holomorphic on $D(0,1)$ , continuous on $\overline{D}(0,1)$ s.t $f(z)=0\quad\forall |z|=1$ and $0\leq\arg z\leq\dfrac{2\pi}{n}$ for some $n\in\mathbb{N}^*.$ a) Let $g(z)=\prod\limits_{k=0}^{n-1}f\left(e^{\frac{2k\pi i}{n}}z\right)$ . Prove $g(z)=0\quad\forall |z|=1$ . b) Prove that $g=0$ on $D(0,1)$ . Then show that $f=0$ on $D(0,1)$ . Actually, I can do part b) by using Maximum modulus principle. I'm stuck at part a). Here are my thoughts for a) Notice that $\left\vert e^{\frac{2k\pi i}{n}}z\right\vert=1$ when $|z|=1$ . I just need to show that there exists $k\in\{0,1,...,n-1\}$ s.t $\arg (e^{\frac{2k\pi i}{n}}z)\leq\dfrac{2\pi}{n}$ since then I can make use of the hypothesis $$f(z)=0\quad\forall |z|=1, 0\leq\arg z\leq\dfrac{2\pi}{n}.$$ Could someone help me how to prove there exists such $k$ ? Thanks in advance!
|
Notice that multiplication by $e^\frac{2k\pi i}{n}$ is just a rotation by the angle $\frac{2k\pi}{n}$ , as $e^\frac{2k\pi i}{n}=\cos(\frac{2k\pi}{n})+i\sin(\frac{2k\pi}{n})$ . So as $k$ runs from $0$ to $n-1$ , the rotation angle grows in steps of exactly $\frac{2\pi}{n}$ . By the pigeonhole principle, there will be a good $k$ , since consecutive rotation angles differ by exactly the needed bound $\frac{2\pi}{n}$ . Basically we use the pigeonhole principle to show that if we repeatedly rotate $z$ by $\frac{2\pi}{n}$ , then after at most $n-1$ rotations, we get an image of $z$ in every sector of size $\frac{2\pi}{n}$ around the origin. (But actually we do not even need it: since the images are spaced exactly $\frac{2\pi}{n}$ apart, one of them must fall into the desired sector.)
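A quick numerical check of the pigeonhole claim (a Python sketch; the tested values of $n$ are arbitrary):

```python
import cmath, math, random

# For any z on the unit circle, some rotation by a multiple of 2*pi/n lands
# in the sector [0, 2*pi/n]: the n rotated arguments are equally spaced by
# 2*pi/n, so one of them must fall into that sector.
random.seed(1)
for n in (3, 4, 7, 12):
    for _ in range(100):
        theta = random.uniform(-math.pi, math.pi)
        z = cmath.exp(1j * theta)
        hits = [k for k in range(n)
                if cmath.phase(cmath.exp(2j * math.pi * k / n) * z) % (2 * math.pi)
                   <= 2 * math.pi / n]
        assert hits  # at least one k works
```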
|
|complex-analysis|
| 1
|
If $P$ is projective, then $Ext(P,\_)=0$
|
My attempt: $P$ is projective. Take $0 \to B \to I_0 \to I_1 \to 0$ is an injective resolution of $B$ . Apply $Hom(P,\_)$ , left exactness of covariant functor implies exactness of the sequence: $0 \to Hom(P,B) \to Hom(P,I_0) \to Hom(P,I_1)$ . Note $Hom(P,I_0) \to Hom(P,I_1)$ is onto. Then $Ext^1(P,B) = \frac{ker(Hom(P,I_1) \to Hom(P,I_2))}{im(Hom(P,I_0) \to Hom(P,I_1))}=\frac{Hom(P,I_1)}{Hom(P,I_1)}=0$ because $I_2$ doesn't exist so $ker(Hom(P,I_1) \to 0)=Hom(P,I_1)$ . Is this right in any way? I did see solutions that use $P$ as a direct summand but I just wanted to use the definition of $Ext(\_,\_)$ . Bredon just straight up says that because $Hom(P,I_0) \to Hom(P,I_1)$ is onto, then $Ext(P,B)=0$ without splitting argument (or is it implied?) and I didn't understand how.
|
This does not quite work. First, note that $B$ may not have an injective resolution of the form $0\rightarrow B\rightarrow I_0\rightarrow I_1\rightarrow 0$ , as this would imply that certain Ext modules are trivial, since Ext will be the homology of the sequence below: $$0\rightarrow \text{Hom}(A,I_0)\rightarrow\text{Hom}(A,I_1)\rightarrow 0\rightarrow \dots$$ In particular, $\text{Ext}^n (A,B)$ will be trivial for all $n \geq 2$ for any module $A$ . But there are modules with nontrivial higher Ext modules, so the injective resolution you wrote above does not always work. EDIT (Explanation written out more explicitly) Note that if $B$ has a resolution of the desired form, the r.h.s. zero extends indefinitely, i.e., the resolution looks like $0\rightarrow B\rightarrow I_0\rightarrow I_1\rightarrow 0\rightarrow0\rightarrow\dots$ , but oftentimes we truncate it after the first zero. Since $0$ is a zero object, $\text{Hom}(A,0)=0$ for any module $A$ , so the Ext modules are the homology of
|
|solution-verification|homological-algebra|projective-module|
| 1
|
How to prove continuity of the inverse function $\alpha^{-1}$ in general?
|
I am reading "Analysis on Manifolds" by James R. Munkres. (1) Let $\alpha:\mathbb{R}\to\mathbb{R}^2$ be the mapping such that $\alpha(t) = (t^3,t^2)$ for $t\in\mathbb{R}$ . Then, $\alpha$ is one-to-one and $\alpha^{-1}:\alpha(\mathbb{R})\to\mathbb{R}$ is continuous. My proof: $t\mapsto t^3$ is one-to-one. So, $\alpha$ is one-to-one. Let $(u_0,v_0)\in\alpha(\mathbb{R})$ . Let $t_0\in\mathbb{R}$ be the unique real number such that $t_0^3=u_0$ and $t_0^2=v_0$ . Let $\varepsilon_0$ be an arbitrary positive real number. We want to find a positive real number $\delta$ such that $$||(u,v)-(u_0,v_0)||<\delta\implies|\alpha^{-1}((u,v))-t_0|<\varepsilon_0.$$ Since $\alpha^{-1}((u,v))=u^{\frac{1}{3}}$ and $u\mapsto u^{\frac{1}{3}}$ is continuous, there is a positive real number $\delta_0$ such that $$|u-u_0|<\delta_0\implies|u^{\frac{1}{3}}-u_0^{\frac{1}{3}}|<\varepsilon_0.$$ And since $|u-u_0|\leq ||(u,v)-(u_0,v_0)||$ , $$||(u,v)-(u_0,v_0)||<\delta_0\implies|\alpha^{-1}((u,v))-t_0|<\varepsilon_0$$ holds. (2) Let $\alpha:\mathbb{R}^2\to\mathbb{R}^3$ be the mapping such that $\alpha(x,y) = (x(x^2+y^2),y(x^2+y^2),x^2+y^2)$ for $(x,y)\in\mathbb{R}^2$ . Then, $\alpha$ is one-to-
|
Since $\alpha(0,0)=(0,0,0)$ , $\alpha^{-1}(0,0,0)=(0,0)$ . Let $(X,Y,Z)\in\alpha(\mathbb{R}^2)\setminus\{(0,0,0)\}$ . Then $X^2+Y^2=Z^3$ . So, $Z=(X^2+Y^2)^{\frac{1}{3}}$ . Then, $\alpha^{-1}(X,Y,Z)=\left(\frac{X}{(X^2+Y^2)^{\frac{1}{3}}},\frac{Y}{(X^2+Y^2)^{\frac{1}{3}}}\right)$ holds. We check this: $\alpha\left(\frac{X}{(X^2+Y^2)^{\frac{1}{3}}},\frac{Y}{(X^2+Y^2)^{\frac{1}{3}}}\right)=\left(\frac{X}{(X^2+Y^2)^{\frac{1}{3}}}\cdot(X^2+Y^2)^{\frac{1}{3}},\frac{Y}{(X^2+Y^2)^{\frac{1}{3}}}\cdot(X^2+Y^2)^{\frac{1}{3}},(X^2+Y^2)^{\frac{1}{3}}\right)=(X,Y,Z)$ . Let $\beta:\mathbb{R}^2\to\mathbb{R}^2$ be the mapping such that $\beta(0,0)=(0,0)$ and $\beta(X,Y)=\left(\frac{X}{(X^2+Y^2)^{\frac{1}{3}}},\frac{Y}{(X^2+Y^2)^{\frac{1}{3}}}\right)$ for $(X,Y)\neq (0,0)$ . Then, $\beta$ is continuous at $(X,Y)\neq (0,0)$ . We check $\beta$ is continuous at $(0,0)$ . It is sufficient to check the following: For an arbitrary positive real number $\varepsilon$ , there is a positive real number $\delta$
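A round-trip check that this formula for $\alpha^{-1}$ (here called `beta`) is correct (a Python sketch; the sample points are random):

```python
import math, random

# alpha(x, y) = (x*r^2, y*r^2, r^2) with r^2 = x^2 + y^2, so on its image
# X^2 + Y^2 = r^6 = Z^3 and beta recovers (x, y) by dividing by (X^2+Y^2)^(1/3).
def alpha(x, y):
    r2 = x * x + y * y
    return (x * r2, y * r2, r2)

def beta(X, Y):
    if X == 0 and Y == 0:
        return (0.0, 0.0)
    d = (X * X + Y * Y) ** (1.0 / 3.0)
    return (X / d, Y / d)

random.seed(2)
for _ in range(1000):
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    X, Y, Z = alpha(x, y)
    bx, by = beta(X, Y)
    assert math.isclose(bx, x, rel_tol=1e-9, abs_tol=1e-9)
    assert math.isclose(by, y, rel_tol=1e-9, abs_tol=1e-9)
```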
|
|continuity|inverse-function|
| 0
|
Dudley's Inequality can be Loose (Vershynin 8.1.12)
|
Let $e_1,...,e_n$ denote the canonical basis vectors in $\mathbb{R}^n$ . Consider the set $$T = \left \{ \frac{e_k}{\sqrt{1 + \log k}}, k =1,...,n\right \}$$ Show that $$\int_{0}^\infty \sqrt{\log \mathcal{N}(T,d,\epsilon)} d\epsilon \rightarrow\infty$$ as $n \rightarrow \infty$ . Here $\mathcal{N}(T,d,\epsilon)$ is the size of the smallest $\epsilon$ -net of $T$ with respect to the metric $d$ . An $\epsilon$ -net in this context is a set $N \subset T$ such that for all $t \in T$ there is an $s \in N$ such that $d(s,t) \leq \epsilon$ . I assume that $d(x,y) = ||x-y||_2$ , although this isn't explicitly stated. Also note that $\epsilon$ -nets must be subsets of $T$ . I also write $\log$ when really I mean $\ln$ . I've spent quite a bit of time working on this exercise and from what I can tell, unless I've missed the trick completely, is that this example actually does not work. Here is my argument. For convenience, let $v_k = \frac{e_k}{\sqrt{1+\log k}}$ . Let $f_n(k):\{1,...,n-1\} \right
|
After a long think I've also come to what I feel is the cleanest solution to this problem so far. It starts with Lemma 4.2.8, which states for any set $T$ that $$\mathcal{P}(T,2\varepsilon) \leq \mathcal{N}(T,\varepsilon) \leq \mathcal{P}(T,\varepsilon).$$ The lower bound is the one of interest. Plugging this into Dudley's integral gives \begin{align} \int_0^\infty \sqrt{\log(\mathcal{N}(T,\varepsilon))} d\varepsilon &\geq \int_0^\infty \sqrt{\log(\mathcal{P}(T,2\varepsilon))} d\varepsilon \\ &= \frac{1}{2}\int_0^\infty \sqrt{\log(\mathcal{P}(T,\varepsilon))} d\varepsilon \end{align} Now we can lower bound $\mathcal{P}(T,\varepsilon)$ in the following way. Recall the definition $$v_k = \frac{e_k}{\sqrt{1+\log(k)}}$$ and therefore for $k > j \geq 1$ we have \begin{align} \|v_j - v_k\|_2 &= \sqrt{\frac{1}{1+\log(j)} + \frac{1}{1+\log(k)}} \\ &> \sqrt{\frac{1}{1+\log(k)} + \frac{1}{1+\log(k)}} &(k > j)\\ &= \sqrt{\frac{2}{1+\log(k)}} \end{align} Now set this equal to $\varepsilon$ and solv
|
|probability|probability-theory|inequality|concentration-of-measure|
| 1
|
Suppose $A$ is a $7\times 7$ matrix with all entries less than $1$ in magnitude. Prove that $\det (100I+A) > 0$
|
Suppose $A$ is a $7\times 7$ matrix with all entries less than $1$ in magnitude. Prove that $\det (100I+A) > 0$ . This is my first post on the mathematics stack exchange, so forgive me if it’s not in the right format or if it’s been asked before (I couldn’t find anything like it, though). My elementary linear algebra professor posted this question in the beginning of class this morning and left it up to us to discuss independently. I can gather that the determinant of $100 I_7$ is $100^7$ and every element in $A$ is in between $-1$ and $1$ (given) but other than that, I’ve got no clue where to start. Thank you!
|
The product of elements in the main diagonal is $100^7$ . Estimate the sum of all other summands in the determinant: notice that every other permutation contains $100$ at most five times. How many permutations are there altogether? It is $7!=5040$ . But $100^7>100^5\cdot 5040$ , which means that all the other permutations cannot be as negative as how big that one permutation is alone. Edit: Thank you user1551 for pointing out the small mistake: we only know that the product of elements in the main diagonal is at least $99^7$ , while when bounding the negative part, we should use $101$ , not $100$ . But the proof still works as $99^7>101^5\cdot 5040$ .
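The two arithmetic comparisons used in the argument are easy to verify directly (Python):

```python
# 7! = 5040 permutations in the Leibniz expansion of a 7x7 determinant.
assert 7 * 6 * 5 * 4 * 3 * 2 * 1 == 5040
# The original estimate and the corrected one from the edit.
assert 100 ** 7 > 100 ** 5 * 5040
assert 99 ** 7 > 101 ** 5 * 5040
```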
|
|linear-algebra|proof-writing|determinant|
| 1
|
Condition of Convergence for the following Integral, evaluated using Gamma Function.
|
Question: Evaluate in terms of the Gamma function the integral $$\int_{0}^{\infty}\frac{x^k}{k^x} dx$$ And state the condition on $k$ such that the integral converges. Attempt Let $$I = \int_{0}^{\infty}\frac{x^k}{k^x}dx = \int_{0}^{\infty}e^{-x\ln(k)}\cdot x^k dx$$ So $k>0$ and $k$ can not be negative. Let $y = x\cdot\ln(k)$ . After simplification and using the gamma function: $$I = \frac{1}{(\ln k)^{(k+1)}}\cdot\int_{0}^{\infty} e^{-y}\cdot y^k dy$$ $$ I = \frac{\Gamma(k+1)}{(\ln k)^{(k+1)}}$$ Now how to find the condition for the integral to converge, without graphing the function? I thought it is $k>0$ , but graphing the function shows that it diverges except when $k>1$
|
$I=\int_0^\infty \frac{x^k}{k^x}dx=\int_0^\infty e^{-x\ln k}\cdot x^k dx$ $\pi(k) = \int_0^\infty x^ke^{-x}dx$ $I=\frac{1}{(\ln k)^{k+1}}\int_0^\infty (x \cdot \ln k)^k\cdot e^{-x\ln k}\cdot \ln k \cdot dx $ $u=x \ln k,\ du =\ln k\, dx$ $I=\frac{1}{(\ln k)^{k+1}}\int_0^\infty u^ke^{-u} du=\frac{\pi(k)}{(\ln k )^{k+1}}$ $\Gamma{(k+1)}=\pi(k)$ To prove divergence, you can use a comparison test. $I=\int_0^\infty e^{-x\ln k}\cdot x^k dx$ $k$ must be greater than zero because of the domain of natural log. If $k\ge 1$ , we have the case addressed above. Consider $0 < k < 1$ . $I=\int_0^1 e^{-x\ln k}\cdot x^k dx+\int_1^\infty e^{-x\ln k}\cdot x^k dx>\int_0^1 e^{-x\ln k}\cdot x^k dx+\int_1^\infty e^{-x\ln k}dx=\int_0^1 e^{-x\ln k}\cdot x^k dx+\infty$ The first part of the integral is bounded; the second part, on the interval from $1$ to $\infty$ , diverges. Since $I$ is greater than the unbounded part, it must diverge.
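A numerical sanity check of the closed form (a Python sketch using simple trapezoidal quadrature; the grid size and cutoff are my own choices):

```python
import math

# For k = 2: the integral of x^2 / 2^x over [0, oo) should equal
# Gamma(3) / (ln 2)^3 = 2 / (ln 2)^3. Trapezoid rule on [0, 60];
# the tail beyond 60 is negligible.
k = 2.0
f = lambda x: x ** k * math.exp(-x * math.log(k))
N, b = 60000, 60.0
h = b / N
total = h * (0.5 * (f(0) + f(b)) + sum(f(i * h) for i in range(1, N)))
closed_form = math.gamma(k + 1) / math.log(k) ** (k + 1)
assert abs(total - closed_form) < 1e-4
```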
|
|integration|convergence-divergence|
| 1
|
In geometric algebra, what is the dot product of a vector and a scalar? what is the wedge product of a vector and a scalar?
|
I am watching a series on geometric calculus by Alan Mcdonald, and in the first episodes he states that for any vector $u$ and multivector $M$: $uM = u \cdot M + u \land M$. This doesn't really spell out what happens with the grade-0, i.e. scalar, part of $M$; let's call it $s$. For the identity to be true, either $u \cdot s$ or $u \wedge s$ has to be zero, and from what I read previously, in some treatments $u \land s = us$, which implies $u \cdot s = 0$, which kind of makes intuitive sense, I guess?
|
The general dot product formula for two k-vectors $ a_r, b_s $ , of grades r and s respectively, is typically defined as a grade selection of the following sort: $$a_r \cdot b_s={\left\langle{{ a_r b_s }}\right\rangle}_{{\left\lvert {r - s} \right\rvert}}.$$ Similarly, the wedge product of the same k-vectors is defined as $$a_r \wedge b_s={\left\langle{{ a_r b_s }}\right\rangle}_{r + s}.$$ Both of these can be extended to multivectors by decomposing that multivector into component k-vectors. By these definitions, if $u$ is a scalar, and $v$ is a k-vector, we have $u \cdot v = u v = u \wedge v,$ which also holds if $v$ is a multivector (and $u$ still a scalar.)
|
|multivariable-calculus|differential-geometry|bilinear-form|clifford-algebras|geometric-algebras|
| 0
|
Given $ye^{-{xe^{y-2x}}} = 2xe^{-x},$ where $x>\frac{1}{\sqrt{2}}, y<\sqrt{2}$. Show that $y=y(x)$ decreases.
|
Suppose $y$ is defined by the following implicit equation: $ye^{-{xe^{y-2x}}} = 2xe^{-x},$ where $x,y\geq 0.$ I want to show that $y$ decreases as $x$ increases, when $x>\frac{1}{\sqrt{2}}$ and $y<\sqrt{2}$. Here is my work: Suppose for contradiction that there exist some $x_1, x_2> \frac{1}{\sqrt{2}}$ such that $x_1 < x_2$ but $y_1 \leq y_2$. From the above relation, we have $$2x_1 = y_1e^{x_1(1-e^{y_1-2x_1})} \tag{1}$$ and $$2x_2 = y_2e^{x_2(1-e^{y_2-2x_2})} .\tag{2}$$ Subtracting these results in the following: $$2(x_2-x_1) = y_2e^{x_2(1-e^{y_2-2x_2})}- y_1e^{x_1(1-e^{y_1-2x_1})}.$$ From here, I am not sure how to proceed. The goal is to arrive at the contradiction $y_1>y_2$ with the assumption, but getting this inequality seems somewhat challenging. Following @jean's comment: Take the natural logarithm on both sides of $(1)$ and $(2)$, and subtract: \begin{align*}\ln{x_2}-\ln{x_1} &= \ln{y_2}-\ln{y_1} + x_2(1-e^{y_2-2x_2})- x_1(1-e^{y_1-2x_1})\\ & = \ln{y_2}-\ln{y_1} + (x_2-x_1) + (x_1e^{y_1-2x_1}-x_2e^{y_2-2x_2}). \end{align*}
|
Some thoughts. Fact 1. $1 - xy\mathrm{e}^{y - 2x} > 0$ on $(\frac{1}{\sqrt{2}}, \infty)\times (0, \sqrt{2})$ . Fact 2. For each fixed $x > \frac{1}{\sqrt{2}}$ , the equation $ \ln y - x \mathrm{e}^{y-2x} = \ln 2 + \ln x - x$ has exactly one real solution $y \in (0, \sqrt{2})$ . Now, let $$F(x, y) := \ln y - x \mathrm{e}^{y-2x} - (\ln 2 + \ln x - x).$$ Clearly, $F(x, y)$ is continuously differentiable on $(\frac{1}{\sqrt{2}}, \infty)\times (0, \sqrt{2})$ . By Fact 1, we have $$\frac{\partial F}{\partial y} = \frac{1}{y}\left(1 - xy\mathrm{e}^{y - 2x}\right) > 0.$$ By the Implicit Function Theorem, using Fact 2, the equation $F(x, y) = 0$ implicitly determines $y$ as a differentiable function of $x$ , given that $x\in (\frac{1}{\sqrt{2}}, \infty)$ . Taking the derivative with respect to $x$ of $\ln y - x \mathrm{e}^{y-2x} - (\ln 2 + \ln x - x) = 0$ , we have $$\frac{1}{y} \cdot y'(x) - \mathrm{e}^{y-2x} - x \mathrm{e}^{y-2x}(y'(x) - 2) - \frac{1}{x} + 1 = 0$$ or $$\frac{1}{y}\left(1 - xy\,\mathrm{e}^{y-2x}\right)y'(x) = (1-2x)\,\mathrm{e}^{y-2x} + \frac{1-x}{x},$$ so by Fact 1 the sign of $y'(x)$ is the sign of the right-hand side.
|
|calculus|derivatives|inequality|implicit-function|
| 1
|
Divergence of a dynamic system after element sign change
|
Suppose there is a continuous dynamic system of order $n+1$ given by $$\begin{align} \dot{x}_1 &= Ax_1 + F(t)x_1 + G_1(t)x_2 \\ \dot{x}_2 &= kx_2 + G_2(t)x_1 \end{align}$$ where $x_1\in\mathbb{R}^n$ is a vector and $x_2\in\mathbb{R}$ is a scalar. Moreover, we know that the matrices $A, F(t), G_1(t), G_2(t)$ are bounded and such that for $k<0$ the system is asymptotically stable, which can be shown by Lyapunov analysis. Specifically, $A\in\mathbb{R}^{n\times n}$ is Hurwitz. Question: Can we say anything specific about the behavior of this system for $k>0$ ? Specifically, I would like to show that the system is unstable and moreover its evolution is such that $x_2$ diverges and (at least after some initial transient state) does not oscillate (i.e. does not change its sign). Side question: If nothing can be said in the general case, what tools could be useful in the analysis of the specific system (assuming I know $A, F(t), G_1(t), G_2(t)$ )? I am familiar with multiple approaches to show stability.
|
Without further assumptions, one cannot say anything. Consider the case $G_2 = 0$ . In that case the system clearly becomes unstable if $k > 0$ . On the other hand, if $n = 1$ , $A = \alpha < 0$ , $F = 0$ , $G_1 = \gamma$ , $G_2 = -\gamma$ , then the system matrix is $\begin{pmatrix} \alpha & \gamma \\ -\gamma & k \end{pmatrix}$ , which is Hurwitz (so the system is stable) if $k + \alpha < 0$ and $\alpha k + \gamma^2 > 0$ . These conditions may be satisfied for arbitrarily large positive $k$ , if also $\gamma$ and $|\alpha|$ are sufficiently large.
|
|dynamical-systems|stability-theory|lyapunov-functions|
| 0
|
"Nice" values of x such that sin(x) is transcendental
|
First time posting here, so please forgive any lack of adherence to best practices. sin(x) is a transcendental function; however, most common values for the angle x will yield an algebraic result. In particular, sin(q) (q rational, in degrees), i.e. sin(qπ) (q rational, in radians), will always be algebraic. I can reverse-engineer a transcendental result by making y=sin(x), assigning a transcendental number to y in [-1, 1], and then solving for x. For example, sin(arcsin(1/e)) must be transcendental. But x=arcsin(1/e) is not what I would call a "nice value of x". After a couple of hours researching the internet, I didn't find much. I have the intuition that sin(q) (q rational, not zero, and in radians), as well as sin(k) (k irrational and in degrees, or in radians but not of the form qπ), would most of the time (or always?) be transcendental. And these would be "nice". But I know this can be extremely difficult to prove. So the question is: are there "nice" values of x such that sin(x) is confirmed to be transcendental?
|
As a consequence of the Lindemann–Weierstrass theorem, $\sin(x)$ is transcendental for any nonzero algebraic number $x$. The same happens with all the other trigonometric functions. Here's the proof: Let $x$ be a nonzero algebraic complex number. Assume $a = \sin(x)$ is algebraic. Then $$a = \frac{e^{ix}-e^{-ix}}{2i}$$ so $$(e^{ix})^2-2ai e^{ix}-1 = 0.$$ Since the polynomial $z^2-2aiz-1$ has algebraic coefficients, its roots are algebraic; that is, $e^{ix}$ is algebraic. But this contradicts the following special case of the Lindemann–Weierstrass theorem: If $z$ is a nonzero algebraic number, then $e^z$ is transcendental.
|
|trigonometry|transcendental-numbers|transcendental-functions|
| 1
|
How do you calculate the range of trigonometric function?
|
What is the range of following function $f(x) = 3|\sin x|$ ? I tried multiplying by $3$ with the range of $\sin x$ function which is $[-1, 1]$ which gave me the outcome $[-3, 3]$ , but I feel like it's not the correct range of it. How to get the range of it?
|
You're on the right track, but it's $|\sin x|$ that is scaled by a factor of $3$. The absolute value guarantees that the values are non-negative: $$ \begin{array}{rcrcl} -1 & \leq & \sin x \phantom{|} & \leq & 1 \\ \implies \qquad 0 & \leq & |\sin x| & \leq & 1 \\ \implies \qquad 0 & \leq & 3\,|\sin x| & \leq & 3 \end{array} $$ If we're being really careful, we should note that on each line, these numbers are bounds on possible values for the function in the middle. So, these inequalities show that the range of $f(x)$ is a subset of the interval $[0, 3]$. But why is it all of that interval, with no gaps? In other words, why is it the case that for any real number $y$ such that $0 \leq y \leq 3$, we can find an $x$ such that $f(x) = y$? Such an $x$ would have to satisfy $3|\sin x| = y$, so $|\sin x| = \tfrac13 y$ or $$ \sin x = \tfrac13 y, $$ where $\sin x \geq 0$. Remember that $0 \leq y \leq 3$, so $0 \leq \tfrac13 y \leq 1$. And we can always find an $x$ (representing an angle in the first quadrant) with any prescribed sine value in $[0, 1]$: take $x = \arcsin\left(\tfrac13 y\right)$. Hence the range of $f$ is exactly $[0, 3]$.
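A quick sampling check (my addition, just to illustrate the conclusion): the values of $f(x)=3|\sin x|$ stay in $[0,3]$, attain $0$, and come arbitrarily close to $3$.

```python
import math

# Sample f(x) = 3|sin x| on a fine grid over roughly one period [0, 2*pi].
vals = [3 * abs(math.sin(x / 1000)) for x in range(6284)]

assert all(0.0 <= v <= 3.0 for v in vals)  # range is a subset of [0, 3]
assert min(vals) == 0.0                    # attained exactly at x = 0
assert max(vals) > 2.999                   # near x/1000 = pi/2
```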
|
|algebra-precalculus|functions|trigonometry|
| 0
|
Is open map needed to be continuous?
|
Let $X,Y$ be topological spaces and $f:\ X\to Y$ . Then $f$ is continuous on $X$ if and only if the preimage of any open set in $Y$ is an open set in $X$ . An open map $f:\ X\to Y$ is a map that takes any open set in $X$ to an open set in $Y$ . In general topology, we are introduced to some famous results about open maps. However, I haven't found any result stating that an open map must be continuous. I wonder if there are counterexamples: an open map that is not continuous? Thanks.
|
Consider the spaces $X = \mathbb{R}$ with the standard topology, and $Y = \mathbb{R}$ with the discrete topology. Define the map $f: X \rightarrow Y$ by $f(x) = x$ , which is the identity map. $f$ is an open map: Since $Y$ has the discrete topology, every subset of $Y$ is considered open, including the images of open sets in $X$ under $f$ . Therefore, $f$ maps open sets to open sets. $f$ is not continuous: In the discrete topology on $Y$ , single points $\{y\}$ are open, but their preimages under $f$ , which are also single points $\{x\}$ in $X$ , are not open in the standard topology of $X$ . Hence, $f$ fails the definition of continuity because there exists an open set in $Y$ whose preimage is not open in $X$ . This counterexample demonstrates that an open map is not necessarily continuous, highlighting the independence of these two topological properties.
|
|general-topology|continuity|
| 0
|
Prove or disprove the following: For all rational numbers $x$ and $y$, the number $x^y$ is also rational.
|
Prove or disprove: For all rational numbers $x$ and $y$ , the number $x^y$ is also rational. I think that the statement is true since I can not come up with a counterexample but I am unsure of where to go from here... If anyone can finish the proof for me that would be greatly appreciated along with any explanations! Proof: Suppose $x$ and $y$ are rational numbers $x = p/q$ and $y = r/s$ . Is $(p/q)^{(r/s)}$ rational since it can not be expressed as the ratio of two integers?... Thank you in advance! Edit: Sorry for my confusion! I now understand how the square root of two makes the statement false.
|
Assume $2^{\frac{1}{TREE(3)}}$ is rational. Then there are coprime integers $z_1$ and $z_2$ such that $$\frac{z_1}{z_2}=2^{\frac{1}{TREE(3)}}. $$ Raise both sides to the $TREE(3)$ th power: $$\frac{{z_1}^{TREE(3)}}{{z_2}^{TREE(3)}}=2.$$ Multiply by ${z_2}^{TREE(3)}$: $${z_1}^{TREE(3)}=2{z_2}^{TREE(3)}={z_2}^{TREE(3)}+{z_2}^{TREE(3)}.$$ So we have a perfect $TREE(3)$ th power that can be expressed as a sum of two perfect $TREE(3)$ th powers. But this contradicts Fermat's Last Theorem, as proven by Andrew Wiles in 1995: if this were true, there would be a non-modular semistable elliptic curve (by Ken Ribet's 1986 result), contradicting Wiles's proof that all semistable elliptic curves are modular. So there is an irrational $x^y$ for rational $x$ and rational $y$.
|
|discrete-mathematics|proof-writing|solution-verification|
| 0
|
M/M/c queue, but customers might leave the queue due to impatience
|
Given a M/M/c queue (Poisson arrival, exponentially distributed service time, c servers). The queue is unlimited in length and operates by first-in-first-out. Each customer who arrives and needs to queue has a patience time T, with a uniform distribution over an interval of [a,b]. That means if the customer has been waiting in queue for T minutes, they will abandon the queue. I would like to find the average waiting time and the rate of queue abandonment for this queue. I suppose the average waiting time would be similar to a typical M/M/c queue. However I don't know how to approach finding the queue abandonment rate. Is it as simple as figuring out how many customer has a patience time below the average waiting time? Thanks in advance.
|
with a uniform distribution over an interval of [a,b] For the uniform-distribution case, I don't know. For the exponential-distribution case, the result is as follows: Mandelbaum and Zeltyn (2004) introduce the Erlang-A model in the paper Service Engineering in Action: The Palm/Erlang-A Queue, with Applications to Call Centers. In the Erlang-A model $M/M/s/k+M$, each caller possesses an exponentially distributed patience time with mean $\theta^{-1}$. Stationary probability distribution: $$ P_n=\begin{cases} \dfrac{1}{n!}\left(\dfrac{\lambda}{\mu}\right)^n P_0, & 0 \leq n \leq s, \\[2ex] \dfrac{\lambda^n}{s!\,\mu^s \prod_{j=1}^{n-s}(s\mu+j\theta)}\, P_0, & s+1 \leq n \leq k, \end{cases} $$ $$ P_0=\left(\sum_{n=0}^s \frac{1}{n!}\left(\frac{\lambda}{\mu}\right)^n+\sum_{n=s+1}^k \frac{\lambda^n}{s!\,\mu^s \prod_{j=1}^{n-s}(s\mu+j\theta)}\right)^{-1}. $$ Average queue size: $$ L=\sum_{n=0}^{k} n P_{n}. $$
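A small sketch (my addition, not from the original answer) of the quoted Erlang-A stationary distribution; the parameter values below are illustrative choices, not from the question. The code also checks the distribution against the underlying birth-death recursion $P_{n+1}/P_n = \lambda/\mu_{n+1}$ with $\mu_{n+1}=\min(n+1,s)\mu+\max(n+1-s,0)\theta$.

```python
import math

def erlang_a(lam, mu, s, theta, k):
    """Stationary distribution (P_0, ..., P_k) of the M/M/s/k+M queue."""
    w = []  # unnormalised weights P_n / P_0
    for n in range(k + 1):
        if n <= s:
            w.append((lam / mu) ** n / math.factorial(n))
        else:
            denom = math.factorial(s) * mu ** s
            for j in range(1, n - s + 1):
                denom *= s * mu + j * theta
            w.append(lam ** n / denom)
    total = sum(w)
    return [x / total for x in w]

lam, mu, s, theta, k = 4.0, 1.0, 3, 0.5, 60
p = erlang_a(lam, mu, s, theta, k)
assert abs(sum(p) - 1.0) < 1e-12

# Consistency with the birth-death recursion.
for n in range(k):
    death = min(n + 1, s) * mu + max(n + 1 - s, 0) * theta
    assert abs(p[n + 1] / p[n] - lam / death) < 1e-9
```

From `p` one can then read off, e.g., the expected queue length $L=\sum_n nP_n$ and (with the patience rate $\theta$) an abandonment rate estimate.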
|
|stochastic-processes|queueing-theory|
| 0
|
sigma algebra - topological space - all closed (and open) sets
|
Is the set of all open and closed sets in $\mathbb{R}^2$ a $\sigma$ -algebra for $\mathbb{R}^2$ ? Is the set of all closed sets in $\mathbb{R}^2$ a topology for $\mathbb{R}^2$ ? I would say no in both cases, since the union of countably many closed sets is not necessarily closed or open again.
|
You are correct. Let's consider a concrete example. Let $\mathcal{B}_1$ denote the set of all open and closed sets of $\mathbb{R}^2$ , and let $\mathcal{B}_2$ denote the set of all closed sets of $\mathbb{R}^2$ , where the notions of "open" and "closed" are taken with respect to the usual (metric) topology on $\mathbb{R}^2$ . We use the fact that $A = [0, 1) \times \mathbb{R}$ is neither open nor closed in $\mathbb{R}^2$ . $A$ is not open because, for example, there is no neighborhood of $0 \times 0 \in A$ that is contained in $A$ (any neighborhood of $0 \times 0$ also contains, for some $\varepsilon > 0$ , the point $-\varepsilon \times 0 \notin A$ ). $A$ is not closed because its complement $\mathbb{R}^2 - A = ((-\infty, 0) \cup [1, \infty)) \times \mathbb{R}$ is not open (consider the point $1 \times 0$ for example). Now, we see that $A$ can be written as a countable union of closed sets: $$A = \bigcup_{n \in \mathbb{Z}^+} \left[0, 1 - \frac{1}{n + 1} \right] \times \mathbb{R}.$$ Each set in this union belongs to both $\mathcal{B}_1$ and $\mathcal{B}_2$, yet $A$ belongs to neither. Hence $\mathcal{B}_1$ is not closed under countable unions, so it is not a $\sigma$-algebra; and $\mathcal{B}_2$ is not closed under (even countable) unions, so it is not a topology.
|
|real-analysis|general-topology|measure-theory|measurable-sets|
| 0
|
Proof of identity on sum of powers of primitive root.
|
Let $q = p^e$ for some prime $p$ . Consider the trace function $\mathrm{Tr}_{\mathbb{F}_q/\mathbb{F}_p}:\mathbb{F}_q\to \mathbb{F}_p$ defined by $\mathrm{Tr}_{\mathbb{F}_q/\mathbb{F}_p}(x) = \sum_{i=0}^{e-1}x^{p^i}$ . Let $C \subseteq \mathbb{F}_q^n$ be a linear subspace and let $\zeta = e^{2\pi i/p}$ . What I would like to show is $$\sum_{c \in C}\zeta^{\mathrm{Tr}_{\mathbb{F}_q/\mathbb{F}_p}(\langle c, z\rangle)} = \begin{cases}|C|&\text{if }z\in C^\perp\\0 & \text{otherwise}\end{cases}.$$ What I know is that the trace function is surjective and each element of $\mathbb{F}_p$ is mapped to by $p^{e-1}$ elements of $\mathbb{F}_q$ . Additionally, I see why the first case is true because if $z \in C^\perp$ we get $\langle c, z\rangle = 0$ for all $c$ and the result follows. I also know that $\sum_{i=0}^{p-1}\zeta^i = \frac{\zeta^p - 1}{\zeta - 1} = 0$ so my idea was to try and show that each power of $\zeta$ is obtained the same number of times $t$ . In other words, for each $i = 0, 1, \ldots, p-1$, I want to show that $\mathrm{Tr}_{\mathbb{F}_q/\mathbb{F}_p}(\langle c, z\rangle) = i$ holds for the same number $t$ of codewords $c \in C$.
|
We can generalize the idea you started with. Let $S$ be the following multiset: $$ \{\langle c,z\rangle : c \in C\} $$ Since $z\notin C^\perp$ , there exists $c_0\in C$ with $\langle c_0,z\rangle\ne 0$ ; scaling $c_0$ (which stays in $C$ , since $C$ is a vector space over $\mathbb{F}_q$ ), we may assume $\langle c_0,z\rangle=1$ . Now, consider an arbitrary element $u\in \mathbb{F}_q$ . The map $c\mapsto c+uc_0$ is a bijection on $C$ , and thus we have $$ S=\{\langle c+uc_0,z\rangle : c\in C\}=\{\langle c,z\rangle+u : c\in C\}=\{s+u : s \in S\} $$ In other words, for any $u\in\mathbb{F}_q$ , if we add $u$ to every element in $S$ , we get the same multiset that we started with. Now, we can show that any two elements of $\mathbb{F}_q$ appear in the multiset $S$ with equal frequencies: Consider two arbitrary $u_1,u_2\in\mathbb{F}_q$ and let $c_1$ be the frequency of $u_1$ in $S$ . Since $x\mapsto x+(u_2-u_1)$ is an injective function on $\mathbb{F}_q$ , the element $u_1+(u_2-u_1)=u_2$ has frequency $c_1$ in the multiset $\{s+(u_2-u_1) : s\in S\}$ . However, by the above finding, $\{s+(u_2-u_1) : s\in S\}=S$ , so $u_2$ also has frequency $c_1$ in $S$ . Hence every element of $\mathbb{F}_q$ occurs in $S$ exactly $|C|/q$ times. Since each element of $\mathbb{F}_p$ has exactly $p^{e-1}$ trace-preimages in $\mathbb{F}_q$ , each power $\zeta^i$ occurs equally often, and the sum equals $\frac{|C|}{p}\sum_{i=0}^{p-1}\zeta^i = 0$ .
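A numerical check of the identity (my addition) in the simplest setting $e=1$, where $q=p$ and the trace is the identity map. Here $p=5$, $n=3$, and $C$ is the span of two arbitrarily chosen independent vectors:

```python
import cmath
import itertools

p = 5
zeta = cmath.exp(2j * cmath.pi / p)
g1, g2 = (1, 0, 2), (0, 1, 3)   # arbitrary independent generators

# C = all F_5-linear combinations of g1 and g2.
C = {tuple((a * u + b * v) % p for u, v in zip(g1, g2))
     for a in range(p) for b in range(p)}
assert len(C) == p * p

def inner(c, z):
    return sum(ci * zi for ci, zi in zip(c, z)) % p

# The character sum is |C| on C-perp and 0 everywhere else.
for z in itertools.product(range(p), repeat=3):
    total = sum(zeta ** inner(c, z) for c in C)
    if all(inner(c, z) == 0 for c in C):   # z in C-perp
        assert abs(total - len(C)) < 1e-9
    else:
        assert abs(total) < 1e-9
```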
|
|elementary-number-theory|prime-numbers|coding-theory|primitive-roots|field-trace|
| 1
|
Let $f:(a,b)\to\mathbb{R}$ be a differentiable function. Show that if $f(c)=\sup\{f(x)\mid x\in(a,b)\}$ for some $c\in(a,b)$, then $f'(c)=0$.
|
Here is what I have so far: suppose f'(c) > 0. Then lim (f(x) - f(c)) / (x - c) = f'(c) > 0, so there exists some delta > 0 s.t. if abs(x-c) < delta then (f(x) - f(c)) / (x - c) > 0. Take x = c + (delta/2). Not sure how to complete the proof; hopefully this explains where I am headed.
|
This part is not precise enough: there exists some delta > 0 s.t. if abs(x-c) < delta then (f(x) - f(c)) / (x - c) > 0. This is how you would phrase this more precisely: For all $\epsilon > 0$, there exists $\delta > 0$ such that for all $x$ with $0 < |x-c| < \delta$, the following inequality holds: $$ \left|\frac{f(x)-f(c)}{x-c}-f'(c)\right| < \epsilon. $$ After this, my hint would be, if you are doing a proof by contradiction where you assume $f'(c) > 0$ and then show a contradiction: to show a contradiction here, you can take $\epsilon = f'(c)/2$. Then, you have: $$ \left|\frac{f(x)-f(c)}{x-c}-f'(c)\right| < \frac{f'(c)}{2}, $$ which can be rephrased as: $$ f'(c)-\frac{f'(c)}{2} < \frac{f(x)-f(c)}{x-c} < f'(c)+\frac{f'(c)}{2}. $$ If $f'(c)$ is positive, then $f'(c)-f'(c)/2=f'(c)/2$ is positive and thus the above inequality says $(f(x)-f(c))/(x-c)$ is positive. Then, how can you choose a value of $x$ which will give you a contradiction?
|
|real-analysis|calculus|
| 0
|
Series and convergence
|
So I have a question about a sequence. Let's assume $a_n = {\sin(\frac 1n)\over 3n}$. Does the sequence $(-1)^na_n$ converge to zero? I'm pretty sure $a_n$ is decreasing, because the derivative test gives me a negative derivative, and I believe it should be bounded below by zero. But if it oscillates between negative and positive as $n$ goes to infinity, does it still converge to zero? Also, can anyone tell me if my previous steps were wrong? Thanks.
|
Since $|a_n|\le \frac{1}{3n}$ , which converges to zero, $a_n$ and $(-1)^n a_n$ also converge to zero, i.e. their limit is zero. Note that the definition of convergence to zero for the sequence $a_n$ is that for any $\epsilon>0$ , we can find $N$ such that $|a_n-0|<\epsilon$ for every $n>N$ , so the changing sign has no effect.
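A quick numerical illustration of the squeeze argument (my addition): the terms alternate in sign but are trapped by the bound $1/(3n)$, which tends to $0$.

```python
import math

# |(-1)^n * sin(1/n) / (3n)| <= 1/(3n), at several sample values of n.
for n in (10, 1_000, 1_000_000):
    a_n = (-1) ** n * math.sin(1 / n) / (3 * n)
    assert abs(a_n) <= 1 / (3 * n)

# The magnitudes shrink toward 0 even though the signs alternate.
assert abs(math.sin(1 / 7) / 21) < 0.01
```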
|
|sequences-and-series|
| 0
|
Inverse propagation of information from the PDF of $Y=f(X)$ to the PDF of $X$
|
Assume a non-linear relation between the random variables $\mathbf{Y} = f(\mathbf{X})$ , where $\mathbf{Y}\sim p_Y$ takes values $\mathbf{y} \in \mathbb{R}^M$ and $\mathbf{X}\sim p_X$ takes values $\mathbf{x} \in \mathbb{R}^N$ , with $M\leq N$ . My question is about the "inverse" problem described below. Direct problem - If we know the PDF $p_X$ , then the PDF $p_Y$ is formally given by $$p_Y(\mathbf{y}) = \int \delta^M(f(\mathbf{x})-\mathbf{y} ) \, p_X(\mathbf{x}) \, d^Nx$$ In general, this expression cannot be handled with analytical techniques. However, we can sample some values $\mathbf{x}_i\sim p_X$ : the scatter of the $f(\mathbf{x}_i)$ values already allows us to probe $p_Y$ . Inverse problem - The PDF $p_Y$ and the map $f$ are given. We want to estimate $p_X$ . Unfortunately, the formal expression $$ p_X(\mathbf{x}) = \int \delta^N(f^{-1}(\mathbf{y})-\mathbf{x} ) \, p_Y(\mathbf{y}) \, d^My $$ is useless: contrary to the previous case, it does not allow us to come up with a practical procedure.
|
In Measure Theory (advanced probability theory), your direct problem is known as the push forward, and the inverse could probably be considered the pullback. As highlighted here ( https://mathoverflow.net/questions/122704/pullback-measures ), "To define pullbacks of measures we need some additional data, because otherwise one would be able to obtain a canonical measure on an arbitrary measurable space M by pulling back the canonical measure on the point along the unique map M→pt." In other words, you need some kind of way of highlighting the relative importance of the various points within $f^{-1}(\mathbf{y})$ for any $\mathbf{y}$ . In probability-theory terms, if we have some prior distribution $p_X$ and some known final distribution $p_Y$ , then we want a distribution $q_X$ that pushes forward to $p_Y$ and that, for any given value $\mathbf{y}=f(\mathbf{x})$ , has the same relative density within $f^{-1}(\mathbf{y})$ as $p_X$ . The simplest way to do this is by scaling up $p_X$ : define a weight function $w(\mathbf{y})$ as the ratio of $p_Y(\mathbf{y})$ to the density of the push forward of $p_X$ at $\mathbf{y}$ , and set $q_X(\mathbf{x}) = w(f(\mathbf{x}))\, p_X(\mathbf{x})$ .
|
|probability-distributions|reference-request|random-variables|inverse-problems|empirical-bayes|
| 0
|
Find the basis of a vector subspace
|
Find a basis of the linear subspace of $\mathbb{R}^n$ which consists of all $n$ -dimensional vectors with coordinates $(a,b,a,\ldots,b)$ . I do not know how to solve these kinds of problems when the coordinates are variables. My instinct tells me its dimension is 2 and the basis is something like this: $(1,0,1,\ldots,0)$ and $(0,1,0,\ldots,1)$ , but I just can't prove it.
|
We claim that $B_1 = \left\{ v_1, v_2 \right\}$, where $v_1 = \begin{pmatrix} 1 \\ 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}$ and $v_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 1 \end{pmatrix}$, is a basis. To show this, we first verify that the vectors in $B_1$ are linearly independent. This is easy to see: if you row-reduce $\begin{pmatrix} v_1 & v_2 \end{pmatrix}$ , you get $$\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \vdots & \vdots \\ 0 & 0 \end{pmatrix}$$ which has a pivot in each column. Then, we verify that $\text{Span}\, B_1$ is equal to the subspace. That $\text{Span}\, B_1$ is contained in the subspace is easy to see: both elements of $B_1$ are contained in the subspace. To check that the subspace is contained in $\text{Span}\, B_1$ , take any vector $w = (a, b, a, \ldots, b)^T$ of the subspace and row-reduce the system $\begin{pmatrix} v_1 & v_2 & w \end{pmatrix}$ : $$\begin{pmatrix} 1 & 0 & a \\ 0 & 1 & b \\ 1 & 0 & a \\ \vdots & \vdots & \vdots \\ 0 & 1 & b \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 0 & a \\ 0 & 1 & b \\ 0 & 0 & 0 \\ \vdots & \vdots & \vdots \\ 0 & 0 & 0 \end{pmatrix}.$$ The system is consistent, with solution $w = a v_1 + b v_2$, so every vector of the subspace lies in $\text{Span}\, B_1$ .
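A concrete check (my addition) for the hypothetical case $n=6$: every vector $(a,b,a,b,a,b)$ equals $a v_1 + b v_2$, and only the trivial combination of $v_1, v_2$ vanishes.

```python
n = 6
v1 = [1 if i % 2 == 0 else 0 for i in range(n)]   # (1, 0, 1, 0, 1, 0)
v2 = [0 if i % 2 == 0 else 1 for i in range(n)]   # (0, 1, 0, 1, 0, 1)

# An arbitrary vector of the subspace is a*v1 + b*v2.
a, b = 3.5, -2.0
w = [a if i % 2 == 0 else b for i in range(n)]
assert w == [a * x + b * y for x, y in zip(v1, v2)]

# Independence: a*v1 + b*v2 = 0 reads a = 0 in coordinate 0 and b = 0 in
# coordinate 1, so only a = b = 0 gives the zero vector.
assert [0 * x + 0 * y for x, y in zip(v1, v2)] == [0] * n
```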
|
|linear-algebra|
| 0
|
How to prove $\tilde{A}=O$ if $\mathrm{rank}(A) \le n - 2$
|
In the final exam of the linear algebra course I took, the following problem was asked and I wasn't able to solve it: Let $n \ge 2$ and let $A$ be an $n \times n$ matrix. If $\mathrm{rank}(A) \le n - 2$ , prove that $\tilde{A} =O$ . In this course $\tilde{A}$ denotes the adjugate matrix of $A$ . I suspect that using $\dim\ker{A} \ge 2$ , which follows from the rank-nullity theorem, works well, but I don't know how to complete the argument. I'd be glad if anyone could share hints or solutions.
|
The proof depends on your definition of adjugate. If adjugate is defined using minors, note that each element of $\tilde A$ is an $(n-1)$ -rowed minor of $A$ . If adjugate is defined via characteristic polynomial, let the characteristic polynomial of $A$ be $x^mq(x)$ , where $m\ge2$ and $q$ is a polynomial of degree $n-m$ such that $q(0)\ne0$ . Since $\dim\ker A\ge2$ , the ascending chain $\ker A\subseteq\ker A^2\subseteq\cdots$ must have been stabilised before the exponent of $A$ reaches $m$ . Therefore $\ker A^{m-1}=\ker A^m$ and in turn, $A^{m-1}q(A)=0$ . It follows that $\tilde{A}=(-1)^{n-1}A^{m-1}q(A)=0$ . If adjugate is defined using exterior product, pick any two linearly independent vectors $x_1,x_2\in\ker(A)$ and let $\{x_1,x_2,\ldots,x_n\}$ be a basis of the ambient vector space $V$ . For any $v\in V$ , note that $$ \tilde{A}v\wedge\bigwedge_{i\in[n]\setminus\{j\}} x_i =v\wedge\bigwedge_{i\in[n]\setminus\{j\}} Ax_i=0 $$ for $j=1,2,\ldots,n$ .
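An exact-arithmetic illustration (my addition) of the minor-based definition: a $4\times 4$ matrix of rank $2 = n-2$, built as a sum of two rank-one outer products (an arbitrary choice), has zero adjugate because every $3\times 3$ minor vanishes.

```python
from fractions import Fraction

def minor(M, i, j):
    # M with row i and column j removed.
    return [[M[r][c] for c in range(len(M)) if c != j]
            for r in range(len(M)) if r != i]

def det(M):
    # Cofactor expansion along the first row (fine for small matrices).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j))
               for j in range(len(M)))

def adjugate(M):
    n = len(M)
    # adj(M)[i][j] is the (j, i) cofactor of M.
    return [[(-1) ** (i + j) * det(minor(M, j, i)) for j in range(n)]
            for i in range(n)]

u1, u2 = [1, 2, 3, 4], [0, 1, 1, 2]
w1, w2 = [2, 0, 1, 5], [1, 1, 0, 3]
A = [[Fraction(u1[i] * w1[j] + u2[i] * w2[j]) for j in range(4)]
     for i in range(4)]

assert det(A) == 0                                    # rank <= 2 < 4
assert all(x == 0 for row in adjugate(A) for x in row)
```

Using `Fraction` keeps the computation exact, so the zero adjugate is not a floating-point artifact.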
|
|linear-algebra|matrix-rank|
| 1
|
Prove that $\lim_ {x\to \infty}\min(|x-\sqrt{m^2+2n^2}|)=0$
|
For $x\gt0$ , let $f(x)$ be the minimum value of $|x-\sqrt{m^2+2n^2}|$ over all integers $m, n$ . Prove that $$\lim_ {x\to \infty} f(x)= 0.$$ I am completely stuck on this question. That the minimum value of a modulus is at least zero is fine, but why should $f(x)$ tend to zero as $x$ approaches $\infty$ ? If I take $\sqrt{m^2+2n^2}$ to be some value nearest to $x$ , then the expression takes the form $|\infty -\infty|$ , which can take finite values other than zero too. How to solve this question?
|
Let $x>1$ . Let $m$ be the largest integer less than $x$ . Let $a=x-m$ . Let $n$ be the largest integer smaller than $\sqrt{ax}$ . Let $b=\sqrt{ax}-n$ . Note that $0\leq a,b\leq 1$ . $$m^2+2n^2=(x-a)^2+2(\sqrt{ax}-b)^2 =x^2+a^2+2b^2-4\sqrt{ax}b.$$ Subtracting $x^2$ and bounding it gives: $$-4\sqrt{x}\leq a^2+2b^2-4\sqrt{ax}b\leq 1^2+2\cdot 1^2 -0=3\leq 4\sqrt{x}.$$ This simplifies to $|m^2+2n^2-x^2|\leq 4\sqrt{x}$ . Dividing by $|\sqrt{m^2+2n^2}+x|>x$ gives that: $$f(x)\leq|\sqrt{m^2+2n^2}-x|\leq 4/\sqrt{x}.$$ Thus, as $x$ approaches $\infty$ , $f(x)$ must approach $0$ .
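A brute-force check (my addition) of the resulting bound $f(x)\le 4/\sqrt{x}$ at a few sample points, searching all $m,n$ in the range the construction uses:

```python
import math

def f(x):
    # min over integers m, n of |x - sqrt(m^2 + 2 n^2)|; it suffices to
    # search 0 <= m <= x and 0 <= n <= sqrt(x^2 / 2), plus a small margin.
    best = float("inf")
    for m in range(int(x) + 2):
        for n in range(int(math.sqrt(x * x / 2)) + 2):
            best = min(best, abs(x - math.sqrt(m * m + 2 * n * n)))
    return best

for x in (10.0, 100.0, 500.0):
    assert f(x) <= 4 / math.sqrt(x)
```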
|
|calculus|algebra-precalculus|limits|
| 1
|
Euclidean Geometry problem regarding the line passing through incenter
|
This is from the exercise problems (not the live contest) for a regional math competition. Let triangle $ABC$ satisfy $\angle C > 90^\circ$ and let its incenter be $I$ . Let $\ell$ be the line parallel to $BC$ passing through $I$ , and call $\ell \cap AB = D$ , $\ell \cap AC = E$ . We know that $AI = 3$ and the radius of the inscribed circle of triangle $ABC$ is $1$ . Finally, $(\text{area of }ABC) = 5\sqrt{2}$ is given. The problem is to find $DE$ . Below is my attempt, but I am stuck. How can I proceed to get $DE$ ? Thank you in advance for any form of help, hint, or solution.
|
In fact, your existing work almost solves the problem already. The next step is to observe that the altitude $h_A$ from $A$ to $BC = x+y$ must satisfy $$5 \sqrt{2} = |\triangle ABC| = \frac{1}{2} h_A (x+y) = \frac{3\sqrt{2}}{2} h_A,$$ hence $h_A = \frac{10}{3}$ . Then because $DE \parallel BC$ and the distance between them is $r = 1$ , it follows from the similarity of triangles that $$\frac{DE}{BC} = \frac{h_A - 1}{h_A} = \frac{7}{10},$$ therefore $$DE = \frac{7}{10} \cdot 3 \sqrt{2} = \frac{21}{5 \sqrt{2}}.$$
|
|geometry|contest-math|euclidean-geometry|
| 0
|
Possible to have only zero eigenvalues of the Hessian of a harmonic function that is neither of the form $ax+by+cz+d$ nor a constant?
|
(Following an earlier post here ) I intuit that if we restrict to functions $f(x,y,z)$ that are harmonic (i.e. satisfying $\nabla^2f=0)$ but neither of the form $ax+by+cz+d$ ( $a,b,c,d\in\mathbb{R}$ ) nor a constant, all the three eigenvalues of the $3\times 3$ Hessian matrix $$ H(x,y,z)= \begin{bmatrix} f_{xx} & f_{xy} & f_{xz}\\ f_{yx} & f_{yy} & f_{yz}\\ f_{zx} & f_{zy} & f_{zz}\\ \end{bmatrix}, $$ cannot all be zero. Of course, I don't have a proof. But does anyone have any counterexample?
|
The answer depends on the precise formulation of the question. The Hessian vanishes everywhere: In this case the function must be (locally) affine. Indeed, if the Hessian vanishes, then we have for every $j\in \{1,2,3\}$ that $$\nabla (\partial_j f)(x,y,z)=((\partial_1 \partial_j f)(x,y, z), (\partial_2 \partial_j f)(x,y,z), (\partial_3 \partial_j f)(x,y,z))=(0,0,0).$$ This means that $\partial_j f$ is (locally) constant. This implies that $f$ is (locally) affine. Hessian is only vanishing at certain points: The function $f(x,y,z) = xyz$ is a counterexample to the claim "if $f$ is harmonic and the Hessian vanishes at least in one point, then $f$ must be an affine function". It is harmonic, we even have $\partial_j^2 f \equiv 0$ for all $j\in \{1,2,3\}$ . However, we have $$ Hf(x,y,z) = \begin{pmatrix} 0 & z & y \\ z & 0 & x \\ y &x & 0 \end{pmatrix}. $$ Thus, we get that $Hf(0,0,0)$ is the zero matrix. Note that we also have $\nabla f(0,0,0)=(0,0,0)$ (it is also a critical point). Thus $f(x,y,z)=xyz$ is harmonic and not affine, yet at the origin all three eigenvalues of its Hessian are zero, giving a counterexample at the pointwise level.
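A finite-difference check (my addition) of the counterexample: $f=xyz$ has vanishing Laplacian, and its Hessian is zero at the origin but not at $(1,1,1)$.

```python
def f(x, y, z):
    return x * y * z

def hessian(p, h=1e-4):
    # Central-difference Hessian of f at the point p.
    def shift(pt, k, d):
        q = list(pt)
        q[k] += d
        return tuple(q)
    def second(i, j):
        return (f(*shift(shift(p, i, h), j, h))
                - f(*shift(shift(p, i, h), j, -h))
                - f(*shift(shift(p, i, -h), j, h))
                + f(*shift(shift(p, i, -h), j, -h))) / (4 * h * h)
    return [[second(i, j) for j in range(3)] for i in range(3)]

H0 = hessian((0.0, 0.0, 0.0))
H1 = hessian((1.0, 1.0, 1.0))

assert all(abs(H0[i][j]) < 1e-8 for i in range(3) for j in range(3))
assert abs(H1[0][1] - 1.0) < 1e-6                 # f_xy = z = 1 at (1,1,1)
assert abs(H1[0][0]) < 1e-6                       # f_xx = 0 everywhere
assert abs(H1[0][0] + H1[1][1] + H1[2][2]) < 1e-6  # Laplacian = 0: harmonic
```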
|
|eigenvalues-eigenvectors|harmonic-functions|hessian-matrix|
| 1
|
let $A$ and $B$ be rings. Any $A\times B$ module is of the form $M\oplus N$ for some $A$-module $M$ and $B$-module $N$ with $(a,b)\cdot (m,n)=(am,bn)$
|
Let $A$ and $B$ be two rings, and let $L$ be an $A\times B$ -module. Prove that $L$ is of the form $M\oplus N$ for some $A$ -module $M$ and $B$ -module $N$ , with $(a,b)\cdot (m,n) = (am,bn)$ . Let $M := \{ (a_1,0)l_1 + \cdots + (a_n,0)l_n \mid n \in \mathbb N, a_i \in A, \text{ and } l_i \in L \text{ for all } 1 \leq i \leq n \}$ and similarly let $N := \{ (0, b_1)r_1 + \cdots + (0, b_m)r_m \mid m \in \mathbb N, b_j \in B, \text{ and } r_j \in L \text{ for all } 1 \leq j \leq m \}$ . Then we may check that $M$ is an $A$ -module and $N$ is a $B$ -module when we define $a \cdot m := (a,0)m$ and $b \cdot n := (0,b)n$ . I claim that $L = M \oplus N$ . Indeed, for any $l \in L$ we have $l = (1,1)l = (1,0)l + (0,1)l$ , so $L = M + N$ . However, I am not sure whether $M$ and $N$ share only $0$ in their intersection, i.e. whether the sum $L = M + N$ is direct. Could anyone help?
|
For $m\in M$ we have $(1,0)m=m$ . For $n \in N$ we have $(1,0)n=0$ . Thus if $x\in M\cap N$ , we have $$x=(1,0)x=0.$$
|
|modules|direct-product|
| 1
|
Why do second order linear ODE have solutions $y_1$ and $y_2=v y_1$ where $v$ is a function of x?
|
$$y^{\prime \prime}+P(x) y^{\prime}+Q(x) y=0$$ Provided we know a solution $y_1$ we assume the second linearly independent solution to be $y_2=v y_1$ substitute this back into the ODE and end up with this relation $y_2=y_1 \int \frac{1}{y_1^2} e^{-\int P(x) d x} d x$ My issue with this is that why is it necessary that the solution be $y_2=v y_1$ , is it never possible that the second solution cannot be expressed as a product of the first solution and a function of x? I am struggling to understand why the first and second solutions have such a relation between them.
|
Well, I may not be the best equipped to answer this fully, but my intuition is this: whatever $y_1$ is, both $y_1$ and $y_2$ are functions of $x$, so on any interval where $y_1(x)\neq 0$ we can simply define $v$ to be their quotient, $v := y_2/y_1$ (as the comment said). Then $v$ is automatically a function of $x$, because both $y_1$ and $y_2$ are, and by construction $y_2 = v y_1$. So the ansatz $y_2 = v y_1$ is not an extra assumption; it is just a change of unknown from $y_2$ to $v$, and nothing is lost. Conversely, if $v$ and $y_1$ are functions of $x$ such that $y_2 = v y_1$, then $y_2$ is itself a function of $x$, because it is a product of functions depending solely on $x$. Hope this helps intuitively.
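A concrete illustration (my addition, with an example ODE of my choosing): for $y''-2y'+y=0$ with known solution $y_1=e^x$ (so $P(x)=-2$), the reduction-of-order formula gives $y_2=y_1\int e^{2x}/e^{2x}\,dx = xe^x$, and the quotient $v=y_2/y_1=x$ really is just a function of $x$. The check below verifies numerically that $y_2$ solves the ODE.

```python
import math

def y2(x):
    # Second solution produced by reduction of order: v(x) = x times y1 = e^x.
    return x * math.exp(x)

def residual(x, h=1e-4):
    # Finite-difference residual of y'' - 2y' + y at x; should be ~0.
    d1 = (y2(x + h) - y2(x - h)) / (2 * h)
    d2 = (y2(x + h) - 2 * y2(x) + y2(x - h)) / (h * h)
    return d2 - 2 * d1 + y2(x)

for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(residual(x)) < 1e-5
```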
|
|ordinary-differential-equations|
| 0
|
Analytical solution of the ODE
|
Consider the following boundary value problem. \begin{equation*} \begin{split} u=x^2u'+\frac{x^3}{3}u'', &\quad u(0)=A, \quad u(1)=B \end{split} \end{equation*} Is there any analytical method to solve the problem? According to WolframAlpha one can write the equation as a Sturm-Liouville equation as: \begin{equation*} \begin{split} (x^3u')'&=3u \end{split} \end{equation*} or as an Emden-Fowler one as: \begin{equation*} \begin{split} xu''+3u'=\frac{3u}{x^2} \end{split} \end{equation*} but I can find no methods to solve the problem analytically. I would be very grateful if someone explains some method to solve the problem.
|
I'm not sure if it works for this specific case, but think of this like a Bessel-function solution, in that we want to "guess" the solution to be some polynomial. So we try $u = \sum_{j=0}^{n} c_j x^j$, a sum from a constant term up to some finite power $n$ of $x$ with coefficients $c_0$ through $c_n$ (or, to find a harder class of solutions which this might be a part of, an infinite power series; going even further, you can extend the sum backwards to negative powers of $x$). Then substitute into the ODE and solve for what the coefficients need to be. The pattern may not be nice, so the series may not obviously converge, in which case you can approximate the solution numerically by cutting off the power series at a specified $n$ giving the particular accuracy you desire. Sorry if this seems convoluted, but if you need clarification just send a comment and I'll try to respond with anything that might help make this clearer.
|
|ordinary-differential-equations|sturm-liouville|
| 0
|
Does $\sum\limits_{k=5}^n\binom{k-3}2\binom{k-1}3$ have a closed form?
|
This summation came up in a problem I was working on. My first attempt at coming up with a formula yielded the result $\binom{n-1}3\binom{n-2}3-\sum\limits_{k=3}^{n-3}\binom{k+1}2\binom k3$ at which point I was unable to simplify any further. I then devised a second method to achieve a formula and came up with the result $\sum\limits_{k=5}^n\binom{k-3}2\binom{k-1}3$ . Both appear to be equivalent, which I was unable to prove directly, but could with induction if needed. I tried using this to come up with a closed formula for one summation or the other but was unable to figure out a way to do so. If worse came to worst, I could treat the summand as a quadratic multiplied by a cubic, in which case the result of the sum would be some degree 6 polynomial, but is there a nicer closed form for one of these summations?
|
Note $$\binom{k-3}{2}\binom{k-1}{3}=\frac{1}{12}(k-1)(k-2)(k-3)^2(k-4)$$ Consider the expression $$\binom k5(5k-13)-\binom{k-1}{5}(5k-18)$$ Factorise out $(k-1)(k-2)(k-3)(k-4)/120$ , $$\frac{(k-1)(k-2)(k-3)(k-4)}{120}\bigg(k(5k-13)-(k-5)(5k-18)\bigg)$$ Notice that the expression in brackets simply reduces to $30k-90=30(k-3)$ , and so $$3\binom{k-3}{2}\binom{k-1}{3}=\binom k5(5k-13)-\binom{k-1}{5}(5k-18)$$ Sum this up from $k=5$ to $k=n$ (RHS is telescoping series) so that the required sum $S(n)$ is $$\boxed{S(n)=\frac13\binom{n}{5}(5n-13)}$$ This is a pretty nice closed form. How did I guess that form? WA ;) Hope this helps. :)
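The boxed closed form can be spot-checked against a brute-force evaluation of the sum; here is a small script (my own check, not part of the derivation):

```python
from math import comb

def brute_sum(n):
    # direct evaluation of sum_{k=5}^{n} C(k-3, 2) * C(k-1, 3)
    return sum(comb(k - 3, 2) * comb(k - 1, 3) for k in range(5, n + 1))

def closed_form(n):
    # S(n) = (1/3) * C(n, 5) * (5n - 13); the product is always divisible by 3
    return comb(n, 5) * (5 * n - 13) // 3

assert all(brute_sum(n) == closed_form(n) for n in range(5, 60))
```

For example, $S(5)=4$ and $S(7)=154$, and both functions agree on those values.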
|
|summation|binomial-coefficients|
| 0
|
Why do we want these properties to be in the extended factorial function?
|
$$\Gamma(x)= \int_0^ \infty e^{-t}t^{x-1}dt \ \ \ \ \ \ \ x>0. $$ Bohr and Mollerup showed that the gamma function is the only positive function $f$ defined on $(0,\infty)$ that has these properties: $f(x+1)=xf(x)$ $f(1)=1$ $f(x)$ is continuous $f(x)$ is log convex My question is: why do these properties matter (I mean 3 and 4; the first two are essential)? I get that it is probably useful to define $\Gamma$ to be continuous, but why should it be log convex? It seems a random and absurd condition with no motivation, but I am sure there is a reason why it matters and why Euler chose this property when he gave this integral.
|
The first condition implies that $f$ has the form $f(x)=f(\{x\})\prod_{n=0}^{\lfloor x\rfloor-1}(n+\{x\})$ (where $\{x\}$ is the fractional part of $x$ ), and thus, since also $\Gamma(x)=\Gamma(\{x\})\prod_{n=0}^{\lfloor x\rfloor-1}(n+\{x\})$ , $$f(x)=\frac{f(\{x\})}{\Gamma(\{x\})}\Gamma(x).$$ The first factor is periodic; if it isn’t constant, its logarithm must alternate between being convex and concave. Since $(\log\Gamma(x))''\to0$ for $x\to\infty$ , that means that $\log f(x)$ eventually alternates between being convex and concave on every interval $[n,n+1]$ . That’s not a nice property for an interpolation of $n!$ to have.
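The key fact used above, that $(\log\Gamma(x))''\to 0$ as $x\to\infty$, can be illustrated numerically with `math.lgamma` (a quick sketch of mine; the central second difference approximates the trigamma function $\psi'(x)\sim 1/x$):

```python
import math

def d2_log_gamma(x, h=1e-2):
    # central second difference of log Gamma; approximates the trigamma psi'(x)
    return (math.lgamma(x + h) - 2 * math.lgamma(x) + math.lgamma(x - h)) / h**2

# psi'(x) ~ 1/x, so (log Gamma)'' decreases to 0 as x grows
vals = [d2_log_gamma(x) for x in (10.0, 100.0, 1000.0)]
assert vals[0] > vals[1] > vals[2] > 0
assert vals[2] < 2e-3
```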
|
|real-analysis|special-functions|gamma-function|
| 1
|
Why do second order linear ODE have solutions $y_1$ and $y_2=v y_1$ where $v$ is a function of x?
|
$$y^{\prime \prime}+P(x) y^{\prime}+Q(x) y=0$$ Provided we know a solution $y_1$ , we assume the second linearly independent solution to be $y_2=v y_1$ , substitute this back into the ODE, and end up with this relation: $y_2=y_1 \int \frac{1}{y_1^2} e^{-\int P(x) d x} d x$ My issue with this is: why is it necessary that the solution be $y_2=v y_1$ ? Is it ever possible that the second solution cannot be expressed as a product of the first solution and a function of $x$ ? I am struggling to understand why the first and second solutions have such a relation between them.
|
Note that whatever $y_2(x)$ is, it is a function of $x$ , and $y_1(x)$ is a function of $x$ too (nonzero everywhere). The assumption $y_1\neq 0$ is required for the mentioned formula to work, as you divide by $y_1$ . Thus, the ratio $y_2(x)/y_1(x)$ is also a function of $x$. Call that $v(x)$ . Then $y_2(x)=v(x)y_1(x)$ is also a solution. So, you can substitute $vy_1$ for $y$ and try to find $v$ (which yields just the formula you mentioned). Hope this helps. :)
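As an illustration (my own example, not from the question), take the Euler equation $x^2y''-3xy'+4y=0$, i.e. $P(x)=-3/x$ in standard form, with the known solution $y_1=x^2$. The displayed formula gives $v(x)=\int x^{-4}e^{3\ln x}\,dx=\ln x$, so $y_2=x^2\ln x$. A numerical sketch checking both steps:

```python
import math

# Example: x^2 y'' - 3x y' + 4y = 0, so P(x) = -3/x, with known solution y1(x) = x^2.
# The reduction-of-order formula gives y2 = y1 * v with
#   v(x) = integral_1^x (1/y1(t)^2) * exp(-int P dt) dt = integral_1^x dt/t = ln x.

def v(x, n=20000):
    # trapezoidal quadrature of the formula's integrand (1/t^4) * e^{3 ln t} = 1/t
    f = lambda t: (1.0 / t**4) * math.exp(3.0 * math.log(t))
    h = (x - 1.0) / n
    return h * (0.5 * f(1.0) + sum(f(1.0 + i * h) for i in range(1, n)) + 0.5 * f(x))

def ode_residual(x, h=1e-4):
    # central differences check that y2 = x^2 ln x satisfies x^2 y'' - 3x y' + 4y = 0
    y2 = lambda t: t**2 * math.log(t)
    d1 = (y2(x + h) - y2(x - h)) / (2 * h)
    d2 = (y2(x + h) - 2 * y2(x) + y2(x - h)) / h**2
    return x**2 * d2 - 3 * x * d1 + 4 * y2(x)

assert abs(v(2.0) - math.log(2.0)) < 1e-6   # v really is ln x
assert abs(ode_residual(2.0)) < 1e-3        # y2 = v*y1 really solves the ODE
```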
|
|ordinary-differential-equations|
| 0
|
finding work required to pump water out of a tank
|
The tank has the shape of a horizontal cylinder with radius $r$ and length $l$ . The water exits through a small opening on the top right side of the cylinder. It seems that there are two ways to solve this problem: integrate horizontally, or integrate vertically. My first attempt was done by integrating vertically. Slicing the cylinder horizontally produces a rectangle of length $l$ and width $2x$ . Using the Pythagorean theorem we obtain $x=\sqrt{r^2-y^2}$ . The volume can be found by adding up small rectangular prisms, i.e., $V = 2xl\,\Delta y = \int_{-r}^{r}2l\sqrt{r^2-y^2}\,dy$ . To make sure this is correct let $l=5$ and $r=2$ . The volume of this cylinder is $V=\pi r^2l = 62.83$ . The volume obtained using the integral is also $62.83$ . I encounter problems when I try to find the work required to pump the water. The mass of a slice of water is its volume times the density $$m = 2000l\sqrt{r^2-y^2}\,\Delta y$$ The force due to gravity is $$F = (9.8)\left(2000l\sqrt{r^2-y^2}\,\Delta y\right)=19600l\sqrt{r^2-y^2}\,\Delta y$$
|
In short, all you've computed in the last integral is the weight of the water, not the work applied to pump it out of the tank. The missing factor is the vertical displacement to the opening. Honestly, I'm not sure if there's a general way to integrate horizontally and include the vertical displacement without adding an additional step where we integrate each "slice" vertically. That said, in this case, it's simple enough to see that the center of gravity of the water in a filled tank lies $r$ below the level of the opening. This vertical displacement from the center of gravity to the opening is where you find the missing $r$ .
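Using the numbers from the question ($r=2$, $l=5$, $\rho g = 9800$ in SI units — my choices for a concrete check), one can confirm numerically that integrating each slice's weight times its lift $(r-y)$ reproduces total weight times $r$, exactly as the center-of-gravity argument predicts:

```python
import math

rho_g = 9800.0      # water density (1000 kg/m^3) times g (9.8 m/s^2)
r, l = 2.0, 5.0     # example tank from the question

# Work to pump a full tank out of the top opening: each horizontal slice at
# height y (measured from the axis) is lifted a distance (r - y).
n = 50000
h = 2 * r / n
work = sum(rho_g * 2 * l * math.sqrt(max(r * r - y * y, 0.0)) * (r - y) * h
           for y in (-r + (i + 0.5) * h for i in range(n)))

# The term with the factor y integrates to zero by symmetry, so the total work
# equals (weight of water) * r: the center of gravity sits r below the opening.
expected = rho_g * math.pi * r * r * l * r
assert abs(work - expected) / expected < 1e-3
```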
|
|calculus|
| 0
|
Intersection of an infinite sequence of open dense sets
|
Let $\{G_n\}$ be a sequence of infinite open dense subsets of a complete metric space $X$ . I have already proved that the intersection of the $G_n$ is non-empty and now want to prove that this intersection is dense in $X$ (as is claimed in a textbook), but I have no idea how to do so.
|
If $U$ is a non-empty open set then $U$ is itself a complete metric space under an equivalent metric. Consider $U\cap G_n$ in this space to see that $\cap G_n$ intersects $U$ . Hence, $\cap G_n$ is dense. Ref.: Open subsets of a complete metric space.
|
|general-topology|
| 0
|
Integrate $\int \frac {\sin (2x)}{(\sin x+\cos x)^2}\,dx$
|
Integrate $$\int \frac {\sin (2x)}{(\sin x+\cos x)^2} \,dx$$ My Attempt: $$=\int \frac {\sin (2x)}{(\sin x + \cos x)^2} \,dx$$ $$=\int \frac {2\sin x \cos x}{(\sin x+ \cos x)^2} \,dx$$ Dividing the numerator and denominator by $\cos^2 x$ $$=\int \frac {2\tan x}{(1+\tan x)^2} \,dx$$
|
$$ \begin{aligned}\int \frac{\sin (2 x)}{(\sin x+\cos x)^2} d x = & \int \frac{\sin (2 x)}{1+\sin (2 x)} d x \\ = & \int\left(1-\frac{1}{1+\sin (2 x)}\right) d x \\ = & x-\int \frac{1}{1+\sin (2 x)} \cdot \frac{1-\sin (2 x)}{1-\sin (2 x)} d x \\ = & x-\int \frac{1-\sin (2 x)}{\cos ^2(2 x)} d x \\ = & x-\int\left[\sec ^2(2 x)-\tan (2 x) \sec (2 x)\right] d x \\ = & x-\frac{1}{2}[\tan (2 x)-\sec (2 x)]+C \end{aligned} $$
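A quick numerical sanity check that the final expression differentiates back to the integrand (my own script; the sample points avoid the singularities of $\tan 2x$, $\sec 2x$ and of the integrand):

```python
import math

def F(x):
    # proposed antiderivative: x - (tan 2x - sec 2x) / 2
    return x - 0.5 * (math.tan(2 * x) - 1.0 / math.cos(2 * x))

def integrand(x):
    return math.sin(2 * x) / (math.sin(x) + math.cos(x)) ** 2

h = 1e-6
for x in (0.1, 0.5, 1.0, 2.5):
    dF = (F(x + h) - F(x - h)) / (2 * h)   # central-difference derivative of F
    assert abs(dF - integrand(x)) < 1e-6
```

This matches the algebra: $F'(x)=1-\sec^2 2x+\sec 2x\tan 2x=1-\frac{1}{1+\sin 2x}=\frac{\sin 2x}{1+\sin 2x}$.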
|
|calculus|integration|indefinite-integrals|trigonometric-integrals|
| 0
|
An exotic operation on repeating sequences of natural numbers such as $\overline{6,9} = 6,9,6,9,6,9, \dots$ Having to do with common subsums.
|
Definitions. This is about repeating sequences of natural numbers such as $\overline{6} = 6,6,6, \dots$ or $\overline{2,1,2} = 2,1,2,2,1,2, \dots$ . I am aware that these can be faithfully represented as the sequence of coefficients of a power series $f(x)(1 + x^k + x^{2k} + \dots)$ where $k = \deg f + 1$ . And conversely, so there is a bijection. However, I have (consider this my attempt) verified that neither power series $+$ nor $\cdot$ represent the below described $\star$ operation. Given $a = \overline{a_0, a_1, \dots, a_n}$ and $b = \overline{b_0, b_1, \dots, b_m}$ define $a \star b$ by the following algorithm. We must start at $c_0 = (a\star b)_0$ the first entry of the result $c$ . The Algorithm . $a \star b$ : Input: $a, b$ (finite sequences such that if you index them the index you pass in gets modulo'd by their length). Output: $c$ another repeating sequence in smallest length finite form. Let $i = 0, j=0, k=0, c = []; s = 0, t = 0$ If $\sum c = \text{LCM}(\sum a, \sum b)$
|
Let $s_a$ be the sequence of partial sums of $a$ , i.e. $s_a = a_0,\ \ a_0 + a_1,\ \ a_0 + a_1 + a_2, \dots$ , and similarly for $s_b$ . Then $c_0$ is defined to be the first sum appearing in both sequences, taken from left to right as you list the sequence elements. $c_1$ is defined to be the second sum that is common to both $s_a, s_b$ minus $c_0$ , and $c_2$ is defined to be the third common sum minus $c_0 + c_1$ , and so on... Clearly this is a "least-common-subsum" operation. Moreover $\star$ is associative: the entries common to both $s_a, s_b$ are intersected with the entries of a third $s_d$ to get the entries common to all three, and it doesn't matter whether you first take those common to $s_b, s_d$ and then intersect with the entries of $s_a$ . And clearly the order doesn't matter. Since $\overline{1}$ is an identity for $\star$ , we have proved it to be the law of a commutative monoid on repeating sequences.
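The description above can be sketched directly in code (my own implementation; `star` returns one full period of $c$, up to $\operatorname{lcm}(\sum a,\sum b)$, rather than the minimal-length form):

```python
from math import lcm

def star(a, b):
    # Intersect the partial-sum sequences of the repeating sequences a and b,
    # up to one full common period lcm(sum(a), sum(b)), then take differences.
    target = lcm(sum(a), sum(b))

    def partial_sums(seq):
        sums, s, i = [], 0, 0
        while s < target:
            s += seq[i % len(seq)]
            sums.append(s)
            i += 1
        return sums

    common = sorted(set(partial_sums(a)) & set(partial_sums(b)))
    # first differences of the common subsums give the entries of c
    return [y - x for x, y in zip([0] + common, common)]

assert star([2], [3]) == [6]              # sums 2,4,6,... and 3,6,9,... meet at 6,12,...
assert star([1], [2, 1, 2]) == [2, 1, 2]  # the partial sums of [1] hit every natural,
                                          # so repeating-1 acts as the identity
```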
|
|abstract-algebra|sequences-and-series|algorithms|modular-arithmetic|periodic-functions|
| 0
|
Minimum losses to eliminate you from top 2 in round robin
|
In a round robin tournament with $n\geq 10$ teams, each match ends in either a win or a loss. What is the minimum number of losses that ensures a team is out of contention for the top 2 spots? Clarification: if two teams score the same, it's good enough because neither is eliminated. My attempt: assign them ranks 1 to $n$ . There are a total of $\frac{n(n-1)}{2}$ matches and hence as many wins. So, rank 1 needs to have at least $\left\lfloor \frac{n-1}{2}\right\rfloor$ wins. Keeping them out of the picture, rank 2 will have at least $\left\lfloor \frac{n-2}{2}\right\rfloor$ wins. So, if you lose at least $\left\lfloor \frac{n}{2}+1\right\rfloor$ matches then you're out. Is the above attempt correct? It would be very helpful if someone could confirm.
|
If ties in the ranking count the same as an outright top spot, you are wrong: with four players, the first can win 3 games while all the others win just one, and you still make the top 2 even though you lost 3 games ( $\frac{4}{2}$ + 1). If ties are not good enough, 2 losses will eliminate you (again, I'm talking about 4 players). *The "draws" here refer to the ranking and not to a single game. Your mistake is that you assumed first place wins the least number of games possible, whereas you should prefer that they win as many as possible, because your wins are limited and the opponents' wins are not. For more than 9 players/teams the solution may or may not be correct, but either way, the way you got there is wrong. You should assume first place gets $n-1$ wins, and that leaves $\frac{n(n-2)}{2}$ wins. If ties are good enough, there are $\frac{n(n-2)}{2}$ wins left and you need $\left\lceil\frac{n(n-2)}{2(n-1)}\right\rceil$ wins, and therefore $\left\lfloor n - \frac{n^{2}-2n}{2n-2}\right\rfloor$ losses will eliminate you. If not, you need $\left\lfloor\frac{n^{2}-2n}{2n-2}\right\rfloor + 1$ to win and therefore you ne
|
|combinatorics|
| 0
|
Upper bounds for $\| I - \boldsymbol{1}\boldsymbol{p}^T \|_2$
|
I want to know if we can have some bound on $\| I - \boldsymbol{1}\boldsymbol{p}^T \|_2$ , where $I$ denotes the identity matrix of size $K$ , and $\boldsymbol{p} \in \mathbb{R}^K$ is some known vector with non-negative entries. An obvious bound can be obtained by using the triangle inequality together with $\|\boldsymbol{1}\boldsymbol{p}^T\|_2=\sqrt{K}\,\|\boldsymbol{p}\|_2$ , which yields $$\| I - \boldsymbol{1}\boldsymbol{p}^T \|_2 \leq 1 + \sqrt{K} \| \boldsymbol{p} \|_2,$$ but such a bound is not quite tight when $\boldsymbol{p} = \frac{1}{K} \boldsymbol{1}$ , for which the left-hand side gives $1$ but the right-hand side gives $1+1=2$ . So I wonder if we can leverage some special structure of the induced 2-norm to obtain a tighter bound. Thanks!
|
You can compute the sup norm of your operator exactly. Let $$A = I - |1\rangle\langle p|$$ The sup norm of $A$ is the square root of the maximal eigenvalue of $A^\dagger A$ : $$ A^\dagger A = I -|1\rangle\langle p| - |p\rangle\langle 1| + K |p\rangle\langle p|$$ Now the above operator acts as the identity outside the span of $|1\rangle, |p\rangle$ . So you can compute the matrix of the above on the vectors $|1\rangle, |p\rangle$ . You get (if I didn't make any mistake) $$[A^\dagger A] = \left(\begin{array}{cc} 1-\left\Vert p\right\Vert _{1} & -\left\Vert p\right\Vert _{2}^{2}\\ K\left(\left\Vert p\right\Vert _{1}-1\right) & 1-\left\Vert p\right\Vert _{1}+K\left\Vert p\right\Vert _{2}^{2} \end{array}\right) \, , $$ where $\left\Vert p\right\Vert_1 = \sum_j p_j $ . The eigenvalues of the above matrix are $$ \lambda_\pm = \frac{1}{2} \left ( 2(1-\Vert p\Vert_1) +K \Vert p\Vert_2^2 \pm \Vert p\Vert_2 \sqrt{K (4-4 \Vert p\Vert_1 +K \Vert p\Vert_2^2)} \right) $$ Remember also that $A^\dagger A$ is the identity outside
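For one concrete $p$ the eigenvalue formula can be checked against a plain power iteration on $A^\dagger A$ (my own sanity check; the example has $\Vert p\Vert_1=1$, for which $\lambda_+$ reduces to $K\Vert p\Vert_2^2$):

```python
import math

K = 3
p = [0.2, 0.3, 0.5]     # example vector; note ||p||_1 = 1 here

# A = I - 1 p^T  and  B = A^T A
A = [[(1.0 if i == j else 0.0) - p[j] for j in range(K)] for i in range(K)]
B = [[sum(A[k][i] * A[k][j] for k in range(K)) for j in range(K)] for i in range(K)]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(K)) for i in range(K)]

# power iteration for the largest eigenvalue of B
v = [1.0, 2.0, 3.0]
for _ in range(1000):
    w = matvec(B, v)
    nrm = math.sqrt(sum(x * x for x in w))
    v = [x / nrm for x in w]
lam_max = sum(vi * wi for vi, wi in zip(v, matvec(B, v)))   # Rayleigh quotient

# closed-form lambda_plus from the answer
p1 = sum(p)
p2sq = sum(x * x for x in p)
lam_plus = 0.5 * (2 * (1 - p1) + K * p2sq
                  + math.sqrt(p2sq) * math.sqrt(K * (4 - 4 * p1 + K * p2sq)))

# ||A||_2^2 is the larger of lambda_plus and the eigenvalue 1 outside span{1, p}
assert abs(lam_max - max(lam_plus, 1.0)) < 1e-8
```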
|
|upper-lower-bounds|matrix-norms|
| 1
|
If $m\mid n$ can we find a group $G$ of order $n$ with a subgroup $H$ of order $m$, i.e. can every divisibility be proved group theoretically?
|
I came across the following problem: Let $p$ be a prime, prove that $$(p-1)^np^{(n^2-n)/2}\mid \prod_{k=1}^n(p^n-p^{k-1})$$ for any $n\in \mathbb{N}$ . It is quite hard to prove it using elementary number theory. However, the problem becomes trivial if we notice that $|\operatorname{GL}_n(\mathbb{F}_p)|=\prod_{k=1}^n(p^n-p^{k-1})$ and $|\operatorname{B}_n(\mathbb{F}_p)|=(p-1)^np^{(n^2-n)/2}$ , where $\operatorname{B}_n(\mathbb{F}_p)$ is the set of $A\in \operatorname{GL}_n(\mathbb{F}_p)$ that are upper triangular. And the proof follows immediately from Lagrange's theorem. So that got me thinking: can every divisibility problem $m\mid n$ be solved using group theory and Lagrange? More precisely, if $m\mid n$ , can we always find a group $G$ of order $n$ and a subgroup $H$ of $G$ of order $m$ ? I know that the above is true for $m=p$ a prime by Cauchy's theorem, or more generally for $m=p^k, k=\operatorname{ord}_p(|G|)$ by Sylow. But I also know that the converse of Lagrange's theorem is false
|
More precisely, if $m\mid n$ , can we always find a group $G$ of order $n$ and a subgroup $H$ of $G$ of order $m$ ? Yes. For each $n$ , there exists a cyclic group of order $n$ . In the cyclic group of order $n$ , for each $d\mid n$ , there exists a subgroup of order $d$ . Note that this assumes $m\mid n$ , so it doesn't establish that $m\mid n$ . To establish that $m\mid n$ , it suffices to find a subgroup of order $m$ of a group of order $n$ ; then Lagrange's Theorem does indeed kick in.
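Concretely, in the cyclic group $\mathbb{Z}/n\mathbb{Z}$ the subgroup of order $d\mid n$ is generated by $n/d$; a tiny sketch:

```python
def subgroup_of_order(n, d):
    # in the cyclic group Z/nZ, the element n // d generates a subgroup of order d
    assert n % d == 0
    g = n // d
    return sorted({(g * k) % n for k in range(d)})

H = subgroup_of_order(12, 4)   # subgroup of order 4 in Z/12Z
assert H == [0, 3, 6, 9]
assert len(H) == 4
# closure under the group operation (addition mod 12)
assert all((a + b) % 12 in H for a in H for b in H)
```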
|
|abstract-algebra|group-theory|finite-groups|
| 1
|
Construction of Standard Normal Distribution
|
Given $10$ random variables $W_1,W_2,\cdots,W_{10}\stackrel{i.i.d}{\sim}N(\mu,\sigma)$ , where $\mu,\sigma$ are unknown . How can we construct a known function $f$ such that $Y=f(W_1,W_2,\cdots,W_{10})\sim N(0,1)$ ? Note, here a known function means there is no $\mu,\sigma$ in $f$ ; e.g. $f_1=\frac{X_1-\mu}{\sigma}$ is unknown and $f_2=3X_1-e^{X_2}+X_3$ is known. Firstly I successfully made $\mu$ become $0$ : I let $Y_1=W_1-W_2,Y_2=W_3-W_4,\cdots,Y_5=W_9-W_{10}$ so that $Y_1,\cdots,Y_5\stackrel{i.i.d}{\sim} N(0,2\sigma^2)$ . However I then cannot eliminate $\sigma$ . I tried to use some $f$ to approximate $\frac{X-\mu}{\sigma}$ and I've tried many functions like $Z=\frac{Y_3}{\sqrt{Y_1^2+Y_2^2}}$ and $Z=\frac{Y_1^2+Y_2^2}{\ln(\frac{1}{\pi}\arctan\frac{Y_1}{Y_2}+\frac{1}{2})}$ but nothing worked. Can someone give me some hint?
|
I have some ideas! We bypass $\sigma$ and use $Y_1,\cdots,Y_4$ to generate two i.i.d angles $U_1,U_2\sim U(0,2\pi)$ then use them to get $U(0,1)$ and generate normal distribution! specific proof: Let $$ Y_1=W_1-W_2,~Y_2=W_3-W_4,\cdots,Y_4=W_7-W_8 $$ Then $Y_1,\cdots,Y_4\stackrel{i.i.d}{\sim}N(0,2\sigma^2)$ . Let $$ U_1=\dfrac{1}{\pi}\arctan\dfrac{Y_1}{Y_2}+\dfrac{1}{2},~U_2=\dfrac{1}{\pi}\arctan\dfrac{Y_3}{Y_4}+\dfrac{1}{2} $$ Then $U_1,U_2\stackrel{i.i.d}{\sim}U(0,1)$ ,Then $$ Z=\sqrt{-2\ln U_1}\cos(2\pi U_2)\sim N(0,1) $$ and f is $$ f=\sqrt{-2\ln(\dfrac{1}{\pi}\arctan\dfrac{W_1-W_2}{W_3-W_4}+\dfrac{1}{2})}\cdot \cos(2\arctan\dfrac{W_5-W_6}{W_7-W_8}+\pi) $$
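A Monte Carlo sanity check of this construction (my own script; the arbitrary $\mu,\sigma$ are used only to sample, never inside $f$, and the loose tolerances only indicate consistency with $N(0,1)$, not a proof):

```python
import math
import random

random.seed(12345)
mu, sigma = 3.7, 2.2            # arbitrary "unknown" parameters, used only to sample

def f(W):
    # the known function from the answer: no mu or sigma appears in it
    U1 = math.atan((W[0] - W[1]) / (W[2] - W[3])) / math.pi + 0.5
    U2 = math.atan((W[4] - W[5]) / (W[6] - W[7])) / math.pi + 0.5
    return math.sqrt(-2.0 * math.log(U1)) * math.cos(2.0 * math.pi * U2)

samples = [f([random.gauss(mu, sigma) for _ in range(10)]) for _ in range(50000)]
m = sum(samples) / len(samples)
var = sum((z - m) ** 2 for z in samples) / (len(samples) - 1)
assert abs(m) < 0.05 and abs(var - 1.0) < 0.1
```

The first step works because $Y_1/Y_2$ is standard Cauchy, so $\frac1\pi\arctan(Y_1/Y_2)+\frac12$ is uniform on $(0,1)$; the last line is just the Box–Muller transform.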
|
|statistics|probability-distributions|normal-distribution|
| 0
|
Solve the equation $\sqrt{x+5}=5-x^2$
|
Solve the equation $\sqrt{x+5}=5-x^2$. I have tried to make the substitution $x=\sqrt{5}\tan^2 \theta$ and wanted to make use of the identity $\tan^2\theta+1=\sec^2\theta$ , but it didn't work out. I also tried the substitution $y=x+5$ but it led nowhere. Since this was a contest problem, I believe there is a short, elegant and elementary solution; please help.
|
If $x\in \mathbb R$ then we need $5\geq x^2$ . Upon squaring the equation we get that $$x+5 = 5^2+x^4-5(2x^2)$$ which becomes $$5^2-5(2x^2+1)+x^4-x=0$$ and by the quadratic formula $$5 = \frac{2x^2+1\pm \sqrt{(2x^2+1)^2-4(x^4-x)}}2$$ ie $$10=2x^2+1\pm \sqrt{1+4x^2+4x}$$ ie $$10 =2x^2+1\pm (2x+1)$$ This can be resolved into two cases: $$0 = x^2-x-5$$ or $$0=x^2+x-4$$ Then, checking the four roots, we note that two do not satisfy the initial constraint. Thus, $$x \in \left\{\frac{1-\sqrt {21}}{2}, \frac{\sqrt{17}-1}{2}\right\}$$
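The case analysis can be confirmed numerically: of the four candidate roots, exactly the two boxed ones satisfy the original (unsquared) equation (a quick check of mine):

```python
import math

# the four candidate roots from the two quadratics x^2 - x - 5 = 0 and x^2 + x - 4 = 0
candidates = [(1 + math.sqrt(21)) / 2, (1 - math.sqrt(21)) / 2,
              (-1 + math.sqrt(17)) / 2, (-1 - math.sqrt(17)) / 2]

def residual(x):
    return math.sqrt(x + 5) - (5 - x ** 2)

roots = [x for x in candidates if abs(residual(x)) < 1e-9]
assert len(roots) == 2
assert any(abs(x - (1 - math.sqrt(21)) / 2) < 1e-12 for x in roots)
assert any(abs(x - (math.sqrt(17) - 1) / 2) < 1e-12 for x in roots)
```

The other two candidates fail because squaring introduced them: for each, $5-x^2$ comes out negative while the square root is non-negative.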
|
|algebra-precalculus|contest-math|
| 0
|
How can I find numerically nice-to-compute upper limits of nCr?
|
How can I find nicely behaved functions which are easy to compute and which fit well as upper limits of the 2-logarithm of the nCr function? What I am interested in in practice is to be able to model "how many binary digits (bits) will a digital representation of nCr(a,b) have?" A guess of 1 or 2 too many bits is acceptable, but a guess of 1 or 2 too few bits is not. So the problem is a bit asymmetric. Errors in one direction are catastrophically problematic (unable to represent the number) but errors in the other direction "only" cause an unnecessary extra percentage of bits stored. Actually calculating factorials will not be considered easy enough. On the other hand, the simplest of approximations like $nCr(a,b) \leq 2^a$ will be considered too sloppy. Own work: what I have tried on my own is to calculate the number of bits of all numbers, then fit low order polynomials to the bit count, then round the estimate to the closest integer. In the case of too few bits I have upda
|
Edit: I realized my answer is the same as LeonBloy's. By using Stirling's approximation one can obtain a very close value: $$\left(\begin{matrix}a\\b\end{matrix}\right)\approx\frac{a^a}{b^b(a-b)^{a-b}}\sqrt{\frac{a}{2\pi b(a-b)}}$$ Therefore $$\log_2\left(\begin{matrix}a\\b\end{matrix}\right)\approx a\log_2 a-b\log_2 b-(a-b)\log_2(a-b)-\frac{1}{2}\log_2\frac{2\pi b(a-b)}{a}$$ Rearrange the terms: $$\log_2\left(\begin{matrix}a\\b\end{matrix}\right)\approx a\,H\!\left(\frac{b}{a}\right)-\frac{1}{2}\log_2\frac{2\pi b(a-b)}{a},$$ where $H(x)=-x\log_2 x-(1-x)\log_2(1-x)$ is the binary entropy function.
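For the asker's bit-count requirement a strict variant is handy: the classical entropy bound $\binom{a}{b}\le 2^{aH(b/a)}$ never underestimates, so rounding it up can only overshoot (this bound is my own addition, not part of the approximation above). A sketch:

```python
from math import comb, log2, ceil

def H(x):
    # binary entropy in bits
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

def bits_upper_bound(a, b):
    # classical strict bound C(a, b) <= 2^{a H(b/a)}; a value v needs
    # floor(log2 v) + 1 bits, so ceil(a H(b/a)) + 1 always suffices
    return ceil(a * H(b / a)) + 1

for a in range(1, 80):
    for b in range(0, a + 1):
        actual_bits = comb(a, b).bit_length()
        assert bits_upper_bound(a, b) >= actual_bits
```

The overshoot is roughly the $\frac12\log_2\frac{2\pi b(a-b)}{a}$ correction term above, i.e. a logarithmic number of extra bits, which matches the asker's tolerance for errors in that direction.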
|
|linear-algebra|combinatorics|computer-science|information-theory|
| 0
|
Whether $\mathbb{T}\times [0,1]$ is diffeomorphic to $\mathbb{D}^2$?
|
Here $\mathbb{T}$ denotes the torus $\mathbb{R}\backslash \mathbb{Z}$ , and $\mathbb{D}^2$ the closed unit ball in $\mathbb{R}^2$ . Since $\mathbb{T}\times [0,1]$ and $\mathbb{D}^2$ are both smooth compact manifolds with boundary (I'm not sure about this statement, if I was wrong, please correct me), can we establish a diffeomorphism between them? More precisely, can we find a smooth map $\varphi:\mathbb{T}\times [0,1]\to \mathbb{D}^2$ which admits a smooth inverse $\varphi^{-1}:\mathbb{D}^2\to \mathbb{T}\times [0,1]$ ?
|
This isn't possible. The 1-torus $\mathbb{T}$ is diffeomorphic to $S^1$ , hence $\mathbb{T}\times[0,1]$ is homotopy equivalent to $S^1$ (it's a cylinder). So, $\mathbb{T}\times[0,1]$ can't be diffeomorphic to $\mathbb{D}^2$ ... they aren't even homotopy equivalent (for instance, they have different fundamental groups).
|
|differential-geometry|smooth-manifolds|geometric-topology|manifolds-with-boundary|diffeomorphism|
| 1
|
A mathematical statement about Laplace's equation from famous physicist Landau's book
|
In the book titled Classical Theory of Fields , by famous Physicist Landau, writes, From $\nabla^2\phi(x,y,z)=0$ , ...it follows, in particular, that the potential $\phi$ of the electric field can nowhere have a maximum or a minimum. For in order that $\phi$ have an extreme value, it would be necessary that the first derivatives of $\phi$ w.r.t the coordinates be zero, and that the second derivatives, $\partial^2\phi/\partial x^2,\partial^2\phi/\partial y^2,\partial^2\phi/\partial z^2$ all have the same sign. The last is impossible since the Laplace's equation cannot be satisfied. Is it true that the second derivatives $\partial^2\phi/\partial x^2,\partial^2\phi/\partial y^2,\partial^2\phi/\partial z^2$ must necessarily have the same sign at a local extremum i.e., all negative for local maximum and all positive for a local minimum? Thinking geometrically, probably the answer is yes. But I am not sure.
|
The argument is not correct. Even at a strict local extremum, all second partial derivatives can be zero; consider for example $\phi(x,y,z) = x^4+y^4+z^4$ at the point $(0,0,0)$ . (Correct statements and proofs of the weak/strong maximum principles can be found in many PDE textbooks.)
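A small numerical illustration of the counterexample (my own check; of course $\partial^2\phi/\partial x^2=12x^2$ vanishes at the origin exactly, and the finite differences only approximate that):

```python
import itertools

# phi(x, y, z) = x^4 + y^4 + z^4: strict local minimum at the origin, yet
# every pure second partial derivative vanishes there.
phi = lambda x, y, z: x**4 + y**4 + z**4

h = 1e-3
# central second differences along each axis at (0, 0, 0); each equals 2*h^2
second_partials = [
    (phi(h, 0, 0) - 2 * phi(0, 0, 0) + phi(-h, 0, 0)) / h**2,
    (phi(0, h, 0) - 2 * phi(0, 0, 0) + phi(0, -h, 0)) / h**2,
    (phi(0, 0, h) - 2 * phi(0, 0, 0) + phi(0, 0, -h)) / h**2,
]
assert all(abs(d) < 1e-5 for d in second_partials)

# strict minimum: phi > 0 at every nonzero sample point near the origin
pts = [p for p in itertools.product((-0.01, 0.0, 0.01), repeat=3) if p != (0, 0, 0)]
assert all(phi(*p) > 0 for p in pts)
```

Note that this $\phi$ is not harmonic; it only refutes the claim that an extremum forces the three second partials to share a sign.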
|
|mathematical-physics|harmonic-functions|
| 1
|
area and volume in imaginary quaternion coordinates
|
If 4D space is represented by quaternions, with the real component being time and the imaginary components describing 3D space, will the area in this 3D space be oriented? Will volume be negative? And will this 4D space have the metric signature $[+,-,-,-]$, i.e. $ds^2=dt^2-dx^2-dy^2-dz^2$?
|
Just having quaternions does not imply a unique metric. However, there is a canonical metric given by the norm of the quaternions. Given the quaternion $$q= a + bi + cj + dk$$ its norm is given by $$|q| = \sqrt{q^* q} = \sqrt{a^2 + b^2+ c^2 +d^2}\,.$$ You can see that the norm is that of the Euclidean 4-space. The norm is natural as it has the property $$ |q_1 q_2| = |q_1| |q_2|$$ where on the left side $q_1 q_2$ refers to the multiplication as quaternions.
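The multiplicativity $|q_1 q_2| = |q_1|\,|q_2|$ is easy to confirm with a direct Hamilton-product implementation (my own sketch):

```python
import math

def qmul(q, r):
    # Hamilton product of quaternions represented as (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = q
    a2, b2, c2, d2 = r
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

norm = lambda q: math.sqrt(sum(x * x for x in q))   # the Euclidean 4-space norm

q1, q2 = (1.0, 2.0, -1.0, 0.5), (0.3, -0.7, 2.0, 1.1)
assert abs(norm(qmul(q1, q2)) - norm(q1) * norm(q2)) < 1e-9
```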
|
|geometry|
| 0
|
Why are these primitives containing $\arcsin x$ equal up to a constant?
|
While trying to solve $\displaystyle\int\sqrt{14x-x^2}\;dx$ , I obtained three different primitives in three different ways: Method 1: completing the square $c(x)=\dfrac{1}{2}\left[49\arcsin\left(\dfrac{x}{7}-1\right)+(x-7)\sqrt{14x-x^2}\right]$ Method 2: using the substitution $x=u^2$ $s(x)=49\arcsin\left(\sqrt{\dfrac{x}{14}}\right)+\dfrac{1}{2}(x-7)\sqrt{14x-x^2}$ Method 3: Wolfram $w(x)=\dfrac{1}{2}(x-7)\sqrt{14x-x^2}-49\arcsin\left(\sqrt{1-\dfrac{x}{14}}\right)$ At first, I thought some mistakes were made, as I was not able to reconcile the different appearances of the $\arcsin(\cdot)$ terms. After some inspection using Desmos, it turns out all 3 functions are equal up to a constant. In fact, they are separated by an amount of $\dfrac{49\pi}{4}$ : All three functions (without $+C$ ): All three functions (with the appropriate $+C$ ): So at this point, I'm pretty much convinced all three methods agree and are valid. However, I would like to know how exactly one can prove the $49\pi/4
|
Bonus 2 $$ \begin{aligned} \int \sqrt{14 x-x^2} d x = & \int \sqrt{14 x-x^2} d(x-7) \\ = & (x-7) \sqrt{14 x-x^2}-\int(x-7) \frac{14-2 x}{2 \sqrt{14 x-x^2}} d x \\ = & (x-7) \sqrt{14 x-x^2}-\int(x-7) \frac{7-x}{\sqrt{14 x-x^2}} d x \\ = & (x-7) \sqrt{14 x-x^2}-\int \frac{14 x-x^2-49}{\sqrt{14 x-x^2}} d x \\ = & (x-7) \sqrt{14 x-x^2}-I+49 \int \frac{d x}{\sqrt{14 x-x^2}} \\ \end{aligned} $$ Rearranging yields $$\int \sqrt{14 x-x^2} d x = \frac{1}{2}(x-7) \sqrt{14 x-x^2}+\frac{49}{2} \underbrace{\int \frac{d x}{\sqrt{49-(x-7)^2}}}_J$$ For the integral $J$ , $$ \begin{aligned} \int \frac{1}{\sqrt{14 x-x^2}} d x = & \int \frac{1}{\sqrt{x} \sqrt{14-x}} dx\\ = & -2 \int \frac{1}{\sqrt{14-(\sqrt{14-x})^2}} d(\sqrt{14-x}) \\ = & -2 \sin ^{-1}\left(\frac{\sqrt{14-x}}{\sqrt{14}}\right) +C\\ = & -2 \sin ^{-1}\left(\sqrt{1-\frac{x}{14}}\right)+C \end{aligned} $$ Plugging back gives WA’s answer $$\int \sqrt{14 x-x^2} d x = \frac{1}{2}(x-7) \sqrt{14 x-x^2} -49 \sin ^{-1} \sqrt{1-\frac{x}{14}}+C $$
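The $49\pi/4$ offset the asker observed between the Method 1 primitive $c(x)$ and WA's $w(x)$ follows from the identities $\arcsin(2t-1)=2\arcsin\sqrt{t}-\frac{\pi}{2}$ and $\arcsin\sqrt{1-t}=\frac{\pi}{2}-\arcsin\sqrt{t}$ with $t=x/14$; a numerical confirmation of the constant difference (my own check):

```python
import math

def c(x):   # Method 1 primitive from the question
    return 0.5 * (49 * math.asin(x / 7 - 1) + (x - 7) * math.sqrt(14 * x - x * x))

def w(x):   # WolframAlpha primitive from the question
    return 0.5 * (x - 7) * math.sqrt(14 * x - x * x) - 49 * math.asin(math.sqrt(1 - x / 14))

for x in (0.5, 3.0, 7.0, 11.0, 13.5):
    assert abs((c(x) - w(x)) - 49 * math.pi / 4) < 1e-9   # constant on (0, 14)
```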
|
|calculus|integration|inverse-trigonometric-functions|
| 0
|
Whether $\mathbb{T}\times [0,1]$ is diffeomorphic to $\mathbb{D}^2$?
|
Here $\mathbb{T}$ denotes the torus $\mathbb{R}\backslash \mathbb{Z}$ , and $\mathbb{D}^2$ the closed unit ball in $\mathbb{R}^2$ . Since $\mathbb{T}\times [0,1]$ and $\mathbb{D}^2$ are both smooth compact manifolds with boundary (I'm not sure about this statement, if I was wrong, please correct me), can we establish a diffeomorphism between them? More precisely, can we find a smooth map $\varphi:\mathbb{T}\times [0,1]\to \mathbb{D}^2$ which admits a smooth inverse $\varphi^{-1}:\mathbb{D}^2\to \mathbb{T}\times [0,1]$ ?
|
One can use algebraic topology to prove that the spaces are not even homotopy equivalent (for example, they have non-isomorphic fundamental groups). Here is an alternative approach using only the smooth structures. We work with manifolds with boundary . Each diffeomorphism $f : M \to N$ between such manifolds restricts to a diffeomorphism $\partial f: \partial M \to \partial N$ between their boundaries. For our purposes it even suffices to know that $ f(\partial M) \subset \partial N$ . Assume that there exists a diffeomorphism $f : \mathbb{D}^2 \to \mathbb{T} \times [0,1]$ . We have $\partial \mathbb{D}^2 = \mathbb{T}$ which is connected and $\partial (\mathbb{T} \times [0,1]) = \mathbb{T} \times \{0,1\}$ which has two connected components. Since $f(\mathbb{T}) \subset \mathbb{T} \times \{0,1\}$ , we must have $f(\mathbb{T}) \subset \mathbb{T} \times \{i\}$ either for $i = 0$ or $i=1$ . Hence $\mathbb{T} \times \{0,1\} = f(f^{-1}(\mathbb{T} \times \{0,1\})) \subset f(\mathbb{T}) \subset \mathbb{T}
|
|differential-geometry|smooth-manifolds|geometric-topology|manifolds-with-boundary|diffeomorphism|
| 0
|
Construction of Standard Normal Distribution
|
Given $10$ random variables $W_1,W_2,\cdots,W_{10}\stackrel{i.i.d}{\sim}N(\mu,\sigma)$ , where $\mu,\sigma$ are unknown . How can we construct a known function $f$ such that $Y=f(W_1,W_2,\cdots,W_{10})\sim N(0,1)$ ? Note, here a known function means there is no $\mu,\sigma$ in $f$ ; e.g. $f_1=\frac{X_1-\mu}{\sigma}$ is unknown and $f_2=3X_1-e^{X_2}+X_3$ is known. Firstly I successfully made $\mu$ become $0$ : I let $Y_1=W_1-W_2,Y_2=W_3-W_4,\cdots,Y_5=W_9-W_{10}$ so that $Y_1,\cdots,Y_5\stackrel{i.i.d}{\sim} N(0,2\sigma^2)$ . However I then cannot eliminate $\sigma$ . I tried to use some $f$ to approximate $\frac{X-\mu}{\sigma}$ and I've tried many functions like $Z=\frac{Y_3}{\sqrt{Y_1^2+Y_2^2}}$ and $Z=\frac{Y_1^2+Y_2^2}{\ln(\frac{1}{\pi}\arctan\frac{Y_1}{Y_2}+\frac{1}{2})}$ but nothing worked. Can someone give me some hint?
|
$\def\ed{\stackrel{\text{def}}{=}}$ You're already part of the way there. Here's another way of reaching your goal that's somewhat less elegant than that given in QiFeng233's answer . As you say, your random variables $\ Y_1,Y_2,\dots,Y_5\ $ are independent, normally distributed random variables with zero mean. It follows from this that the ratio $ \ \frac{\sqrt{5}\,\overline{Y}}{\overline{\sigma}_Y}\ $ of $\ \sqrt{5}\ $ times their sample mean $$ \overline{Y}\ed\frac{1}{5}\sum_{i=1}^5Y_i $$ to the square root $\ \overline{\sigma}_Y\ $ of their unbiased sample variance $$ \overline{\sigma}_Y^2\ed\frac{1}{4}\sum_{i=1}^5\big(Y_i-\overline{Y}\big)^2 $$ follows a Student's t-distribution with $4$ degrees of freedom. Let $\ g:(0,1)\rightarrow(-\infty,\infty)\ $ be the inverse of the standard normal cumulative distribution function—that is, the (unique) function such that $$ g\big(\mathcal{N}(0,1,x)\big)=x $$ for all $\ x\in(0,1)\ $ — $\ \mathcal{T}_4\ $ the cumulative Student's t-distributi
|
|statistics|probability-distributions|normal-distribution|
| 0
|
Using Jacobi symbol: Is $(\frac{54}{77})=1$ solvable?
|
I can't understand what to do in the following example of congruence. I need to decide if this congruence is solvable, and if so, to find all the solutions: $x^2 \equiv 54 \pmod{77}$. I need to decide whether $(\frac{54}{77})=1$ or $-1$. First of all I want to ask: what is the idea behind the Jacobi symbol? Do I use it when I need to decide $(\frac{a}{b})$ and $b$ is not a prime? Since both of them are not primes, I want to use the Jacobi symbol rules: $(\frac{54}{77})=(\frac{54}{11})(\frac{54}{7})=(\frac{-1}{77})(\frac{5}{77})=(-1)^{-5}(\frac{-2}{7})=1$. But I know that if the result is $1$ then $54$ may or may not be a quadratic residue $\pmod{77}$ . How do I determine it? Thanks!
|
The Jacobi symbol is inconclusive here: $$\left(\frac {54}{77}\right)=\left(\frac {2\cdot 3^3}{7\cdot 11}\right) =\left(\frac 2{77}\right) \left(\frac 3{77}\right)^3=\left(\frac 2{77}\right)\left(\frac 3{77}\right) =(-1)^{\frac {77^2-1}8}\left(\frac {77}3\right)=(-1)\left(\frac 23\right)=(-1)(-1)=1,$$ using quadratic reciprocity, which applies directly since $77\equiv 1\pmod 4$. So do $7,11$ separately with the Legendre symbol and apply CRT. We get $\left(\frac {54}7\right) =\left(\frac {-2}7\right)\equiv(-2)^{\frac{ 7-1}2}\equiv-1\pmod 7. $ So there are no solutions mod $7,$ hence no solutions mod $77$.
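A brute-force confirmation of the conclusion (my own check; here $54$ happens to be a non-residue modulo both primes, so the two Legendre symbols $-1$ multiply to a Jacobi symbol of $+1$):

```python
# squares modulo each prime factor of 77
qr7 = {x * x % 7 for x in range(7)}
qr11 = {x * x % 11 for x in range(11)}

assert 54 % 7 not in qr7       # Legendre (54/7)  = -1
assert 54 % 11 not in qr11     # Legendre (54/11) = -1, so Jacobi (54/77) = +1
# hence x^2 = 54 (mod 77) has no solution, despite the Jacobi symbol being +1
assert all(x * x % 77 != 54 for x in range(77))
```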
|
|elementary-number-theory|prime-numbers|
| 0
|
What is the smallest untraceable graph satisfying some necessary conditions for traceability?
|
I am trying to find a graph (ideally the smallest) that demonstrates why the following 3 necessary conditions are not sufficient for a graph to be traceable. In other words, what is the smallest untraceable graph $G$ satisfying the following 3 conditions? $G$ is connected. $G$ has at most two vertices of degree 1. For all $S \subseteq V(G)$ , $G - S$ has at most $\vert S \vert + 1$ components. Wolfram gives drawings of all connected untraceable graphs of order 6 and below but none of these satisfy condition 3.
|
This has 7 vertices. You've found 6 is not enough. In general, having lots of small cuts (in this case, 3 cut vertices) can help with generating these sorts of counterexamples.
|
|combinatorics|graph-theory|hamiltonian-path|
| 1
|
Given a submodule, can we find another submodule such that their direct sum forms a finite index subgroup?
|
Suppose $S$ is a submodule of the $R$ -module $M$ ; then it is not necessarily possible to decompose $M$ as $$ M = S \oplus S^c . $$ However, if we suppose that $M$ is finitely generated and Noetherian, and $S$ is a submodule, can we always find a submodule $S'$ that has trivial intersection with $S$ , such that $S \oplus S'$ is a finite index subgroup of $M$ (when we view them as abelian groups)? Because $M$ is Noetherian, I know we should be able to find a maximal submodule $S'$ such that $S$ and $S'$ have trivial intersection. I am struggling to see if this implies that the direct sum has finite index in $M$ . Any hint would be really appreciated.
|
There are algebras over a field having infinite dimensional simple modules. For example, the first Weyl algebra $k\langle x,y\rangle/(yx-xy-1)$ in characteristic zero, having the simple module $k[x]$ where $y$ acts as differentiation. Taking a non-split extension $M$ of two such simples, it has length two and is indecomposable, so is noetherian with simple socle, which has infinite codimension.
|
|abstract-algebra|ring-theory|commutative-algebra|modules|
| 0
|
Linear Algebra, Subspaces and Basis Properties
|
Let $w = (1, 1 ,2, -1), W = \{ u \in\Bbb R^4\mid u \cdot w = 2 \}$ , and $V = \{ u - (1, 0, 1, 1)\mid u \in W \}$ . (a) Is $W$ a subspace? Explain your answer (b) Show that $V$ is a subspace and that $T = \{ (-1, 1, 0, 0) , (-2, 0, 1, 0) , (1, 0, 0, 1) \}$ is a basis for $V$ . (c) Let $T$ be the basis of $V$ defined in (b) Find $[(4, -1, 1, 5)]_T$ . Show your workings, however, you do not need to show your elementary row operations. So I have shown part (a) but I'm somewhat confused about part (b) and (c), I have attached a link for better referencing and my workings there, for the question https://ibb.co/wp0RcnC my working is https://ibb.co/qgzBDg3 Thanks in advance to whoever that could help me.
|
For (b), notice that $V$ is 3-dimensional: since $(1,0,1,1)\cdot w=2$ , the set $V$ is exactly the orthogonal complement of the 1-dimensional subspace $\langle(1,1,2,-1)\rangle$ in $\mathbb{R}^4$ . Therefore it suffices to check that the three vectors are linearly independent and elements of $V$ to prove that they form a basis. I'll check that $(−1,1,0,0)\in V$ ; you can check the other two analogously. For $(−1,1,0,0)\in V$ to be true we need to have $\left((−1,1,0,0)+(1,0,1,1)\right)\cdot w =2$ , i.e. \begin{align} (0,1,1,1)\cdot (1,1,2,-1) = 0 + 1 + 2 -1=2. \end{align} Therefore $(−1,1,0,0)\in V$ is true. Checking that these are linearly independent is straightforward using any method for solving linear systems of equations you like. Unfortunately I am not familiar with the notation you used for (c); if you could explain that I can try to answer that too.
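For (c), assuming $[v]_T$ denotes the coordinate vector of $v$ with respect to the basis $T$ (the standard meaning of that notation), the coefficients can be read off componentwise; a small sketch:

```python
# Solve a*(-1,1,0,0) + b*(-2,0,1,0) + c*(1,0,0,1) = (4,-1,1,5).
# Coordinates 2, 3, 4 give a, b, c directly; coordinate 1 is then a consistency check.
T = [(-1, 1, 0, 0), (-2, 0, 1, 0), (1, 0, 0, 1)]
v = (4, -1, 1, 5)

a, b, c = v[1], v[2], v[3]           # from the 2nd, 3rd and 4th coordinates
combo = tuple(a * t1 + b * t2 + c * t3 for t1, t2, t3 in zip(*T))
assert combo == v                     # 1st coordinate: -a - 2b + c = 1 - 2 + 5 = 4
print("[(4,-1,1,5)]_T =", (a, b, c))

# sanity check that v really lies in V: (v + (1,0,1,1)) . w == 2
w = (1, 1, 2, -1)
shift = (1, 0, 1, 1)
assert sum((vi + si) * wi for vi, si, wi in zip(v, shift, w)) == 2
```

This gives $[(4,-1,1,5)]_T=(-1,1,5)$.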
|
|linear-algebra|algebra-precalculus|
| 0
|
Cover definition: When is a set a subset of its cover's union instead of being equal?
|
In the current Wikipedia entry for covers, it says that if ${\displaystyle C=\lbrace U_{\alpha }:\alpha \in A\rbrace }$ is an indexed family of subsets ${\displaystyle U_{\alpha }\subset X}$ (indexed by the set ${\displaystyle A}$ ), then ${\displaystyle C}$ is a cover of ${\displaystyle X}$ if ${\displaystyle \bigcup _{\alpha \in A}U_{\alpha }\supseteq X}$ . Since a set cannot contain duplicates and all $U_\alpha$ are strict subsets of $X$ , why does it say ${\displaystyle \bigcup _{\alpha \in A}U_{\alpha }\supseteq X}$ instead of ${\displaystyle \bigcup _{\alpha \in A}U_{\alpha } = X}$ ? What would be an example where $X$ is only a subset of its cover's union instead of being equal to it?
|
You are right, one could simply require that $\bigcup_{\alpha \in A} U_\alpha= X$ . However, if you continue reading the Wikipedia article, you will find Also, if $Y$ is a (topological) subspace of $X$ , then a cover of $Y$ is a collection of subsets $C = \{U_\alpha \}_{\alpha \in A}$ of $X$ whose union contains $Y$ , i.e., $C$ is a cover of $Y$ if $$ Y ⊆ ⋃_{α ∈ A} U_α .$$ That is, we may cover $Y$ with either sets in $Y$ itself or sets in the parent space $X$ . I think this is the reason for requiring $\bigcup_{\alpha \in A} U_\alpha \supseteq X$ - it keeps notation consistent in both situations.
|
|general-topology|
| 1
|
Extension of Discrete Valuation Rings: Part II
|
Let $(R, m_R, \kappa_R)$ be a discrete valuation ring (DVR) with quotient field $K$ , and $K \subset L$ a finite field extension. Let $B$ be the integral closure of $R$ in $L$ (which in general is not a DVR), and denote by $\mathfrak{P}_1, \mathfrak{P}_2, \ldots, \mathfrak{P}_m$ its maximal ideals lying over $m_R$ . Then we obtain the ramification decomposition $m_R \cdot B= \mathfrak{P}_1^{e_1} \cdot \mathfrak{P}_2^{e_2} \cdots \mathfrak{P}_m^{e_m}$ , and let $f_i:= [B/\mathfrak{P}_i : \kappa_R]$ . I'm looking firstly for a constellation $(R,B;K,L)$ satisfying the following properties: $L$ is generated by a single element, $L=K(\alpha)$ , and the proper inequality $[L:K] >\sum e_if_i$ holds. Note that in order to satisfy the second condition, $L/K$ must necessarily be inseparable. A (pathological) variant: does there exist an extension $R \subset B$ produced as above such that moreover $B=(B, m_B, \kappa_B)$ is also a DVR, $\kappa_R=\kappa_B$ and $e_{B/R}=1$ , but $R \neq B$ ?
|
I am summarizing what I’ve written in the comments. By the discussion in I.4 of Serre’s Local Fields , the formula holds whenever $B$ is finite over $R$ , which happens when the extension is separable or $R$ is complete. In fact, by Stacks Lemma 03GH, the formula will hold when $R$ is Nagata, i.e. (Stacks Lemma 0BJ0) when its geometric fibres are reduced. This is automatic when $R$ is a $G$ -ring (e.g. excellent), which occurs for most reasonable rings. To give an explicit counterexample, we follow Stacks Example 09E1. Let $L$ be a field of characteristic $p$ and $K$ be a subfield containing $L^p$ such that $L/K$ is infinite. Let $A$ be the union of the $M[[x]]$ for all finite $K \subset M \subset L$ . Then $A$ is a DVR with residue field $L$ and uniformizer $x$ , properly contained in $L[[x]]$ . Let $f \in L[[x]] \setminus A$ (so $f^p \in A$ ). By Stacks Example 09E1, the integral closure $B$ of $A[f]$ is not finite over $A[f]$ , hence not finite over $A$ . However, it is still integral
|
|abstract-algebra|ring-theory|commutative-algebra|
| 1
|
Solve the equation $\sqrt{x+5}=5-x^2$
|
Solve the equation $\sqrt{x+5}=5-x^2$. I have tried the substitution $x=\sqrt{5}\tan^2 \theta$ , hoping to use the identity $\tan^2\theta+1=\sec^2\theta$ , but it didn't work out. I also tried the substitution $y=x+5$ , but it led nowhere. Since this was a contest problem, I believe there is a short, elegant and elementary solution; please help.
|
Several answers identify the quadratic factor equations $x^2-x-5=0$ and $x^2+x-4=0$ , and from these we get four candidate roots $(1\pm\sqrt{21})/2,(-1\pm\sqrt{17})/2$ . This answer deals with the problem of identifying which roots give nonnegative values for $5-x^2$ , to match the required nonnegative square root in $\sqrt{x+5}$ . Rather than resort to numerical calculation, we can read the correct roots directly from the quadratic factor equations. We look first at the factor equation $x^2-x-5=0$ , with $x^2\le5$ . We can eliminate the $x^2$ term to obtain the linear inequality $x\le0$ , which is satisfied specifically by the negative root. Thus $x=(1-\sqrt{21})/2$ . Similarly $x^2+x-4=0$ , with $x^2\le5$ , leads to the inequality $x\ge-1$ . This might allow the negative root of the quadratic equation $x^2+x-4=0$ if that were $\ge-1$ . But then the positive root would not be consistent with the fact that the sum of the roots must be $-1$ , and so we find that the negative root of the quadratic equation is out of range. Thus $x=(-1+\sqrt{17})/2$ .
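As a quick numerical sanity check (my own addition, not part of the original answer), one can test the four candidate roots against the original equation directly:

```python
import math

# candidate roots from x^2 - x - 5 = 0 and x^2 + x - 4 = 0
candidates = [(1 + math.sqrt(21)) / 2, (1 - math.sqrt(21)) / 2,
              (-1 + math.sqrt(17)) / 2, (-1 - math.sqrt(17)) / 2]
# keep only those satisfying sqrt(x + 5) = 5 - x^2
solutions = [x for x in candidates
             if abs(math.sqrt(x + 5) - (5 - x * x)) < 1e-9]
print([round(x, 4) for x in solutions])  # -> [-1.7913, 1.5616]
```

This confirms that exactly the two roots singled out above survive the sign condition.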
|
|algebra-precalculus|contest-math|
| 0
|
Remark 2, section 4.1 in Algebra From The Viewpoint of Galois Theory
|
The remark mentioned in the title states that if $L/K$ is a Galois extension, and $E$ is an intermediate field such that $E/K$ is a Galois extension, then any $K$ -automorphism of $L$ restricts to a $K$ -automorphism of $E$ . This should be a consequence of Theorem 4(i), section 3.5, which says that an algebraic extension $G/F$ is normal iff any $F$ -homomorphism of $G$ into an algebraic closure $\bar G$ of $G$ restricts to an $F$ -automorphism of $G$ . In particular, since $E/K$ is Galois, hence normal, Theorem $4$ implies that any $K$ -automorphism of an algebraic closure $\bar E$ restricts to a $K$ -automorphism of $E$ . However, how do I use the fact that $L/K$ (and so $L/E$ ) is Galois to prove that this implication holds also with $L$ in place of $\bar E$ ? I don't think $L$ is necessarily an algebraic closure of $E$ .
|
Embed $L$ into an algebraic closure $\overline{L}$ . Note that since $L$ is algebraic over $E$ , $\overline{L}$ is also an algebraic closure of $E$ . Now let $\sigma$ be a $K$ -automorphism of $L$ . Since $L\subseteq \overline{L}$ , we can view $\sigma$ as a $K$ -homomorphism $L\to \overline{L}$ . Then $\sigma|_E$ is a $K$ -homomorphism $E\to\overline{L}$ . Since $E$ is normal over $K$ , the image of $\sigma|_E$ is $E$ , so $\sigma|_E$ is a $K$ -automorphism of $E$ , as desired.
|
|abstract-algebra|commutative-algebra|field-theory|galois-theory|
| 1
|
Can I avoid differentiation to find the remainder when $x^{73}-2x^{15}+3x-1$ is divided by $(x-1)^2$
|
The problem states: Find the remainder when $x^{73}-2x^{15}+3x-1$ is divided by $(x-1)^2$ . So, what I did is: assume the remainder to be linear, i.e. $r(x) = ax+b$ . By Euclid's division, $x^{73}-2x^{15}+3x-1 = q(x)(x-1)^2+r(x)$ . Putting $x = 1$ , we get $a+b = 1$ . Then I differentiated, and the rest is trivial. How can I skip differentiation and get the answer by some other method? I was thinking perhaps something involving number theory. I want to know: is it possible to skip calculus in such questions?
|
This answer is to explain my comment more thoroughly. You are looking for a polynomial $r(x) = ax + b$ such that $$x^{73}-2x^{15}+3x-1 = q(x)(x-1)^2+r(x)$$ for some polynomial $q(x)$ . If we replace $y = x - 1$ , i.e. $x = y + 1$ , we get an equivalent problem: $$(y+1)^{73}-2(y+1)^{15}+3(y+1)-1 = q(y+1)y^2+r(y+1).$$ The term $q(y+1)y^2$ contains no constant or $y$ terms, and $r(y + 1) = ay + a + b$ contains no $y^2$ terms or higher. So, we simply need to find the constant and $y$ terms in order to figure out $r$ . We can do this with the binomial theorem. In particular, we know $$(y + 1)^n = \sum_{k=0}^n \binom{n}{k} y^k = \sum_{k=0}^n \frac{n!}{k!(n-k)!} y^k.$$ So, the constant terms and $y$ terms can be extracted simply by taking $k = 0$ and $k = 1$ respectively. We have $\binom{n}{0} = 1$ and $\binom{n}{1} = n$ . Ignoring quadratic and higher order terms, the left hand side becomes: $$(1 + 73y + \ldots) - 2(1 + 15y + \ldots) + 3(1 + y) - 1 = 1 + 46y + \ldots$$ Therefore, $r(y + 1) = ay + a + b = 46y + 1$ , so $a = 46$ , $b = -45$ , and $r(x) = 46x - 45$ .
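For a mechanical cross-check (my own sketch, not from the answer), the remainder can be computed by polynomial long division with exact integer coefficients:

```python
def poly_mod_monic(num, den):
    """Remainder of num / den; coefficient lists, highest degree first, den monic."""
    num = num[:]
    while len(num) >= len(den):
        lead = num[0]                 # den is monic, so this is the quotient coefficient
        for i, d in enumerate(den):
            num[i] -= lead * d
        num.pop(0)                    # leading coefficient is now exactly zero
    return num

f = [0] * 74                          # f(x) = x^73 - 2x^15 + 3x - 1
f[0] = 1                              # x^73
f[73 - 15] = -2                       # -2 x^15
f[72] = 3                             # 3x
f[73] = -1                            # constant term
print(poly_mod_monic(f, [1, -2, 1]))  # divide by (x-1)^2 = x^2 - 2x + 1 -> [46, -45]
```

The remainder comes out as $46x-45$, matching the binomial-coefficient computation.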
|
|calculus|algebra-precalculus|derivatives|polynomials|
| 0
|
Counting arrangements of seven people standing in a row of $3$ and a row of $4$, with $A$ and $B$ together, and with $A$ and $C$ separated
|
Seven people are standing in two rows, with three people in the front row and four in the back row. Among them, A and B must stand next to each other, while A and C must stand separately. How many different arrangements are there? My thought process is like this, but the result is incorrect. First, we consider the condition that A and B must stand next to each other. We can treat A and B as a single unit since they have to stand together. Now we have 6 "entities" to arrange (5 people plus the A-B unit), with 3 spots in the front row and 4 in the back row. Step 1: Choose 3 entities from the 6 to stand in the front row. There are $6 \choose 3$ ways to do this. Step 2: The 3 entities in the front row can be arranged in $3!$ ways. Step 3: The remaining 3 entities (including the A-B unit) in the back row can also be arranged in $3!$ ways. However, since A and B form a unit, there are actually only 2 arrangements (A on the left or A on the right). Step 4: We need to subtract the arrangements
|
One error that pops out in your solution is as follows. You consider $AB$ as one unit and then you consider the possibility of placing $3$ units in the first row. That means you are allowing the possibility of four people being in the first row ( $AB$ as one unit and two others). Here's how I would solve this. We start by ignoring the condition associated with $C$ . If $AB$ are in the first row, there can only be one other person in that row. So the number of ways for the first row would be ${5 \choose 1} \times 2 \times 2$ . Multiplying this by $4!$ (for the second row) we get the total number of ways in which $AB$ are in the first row ( $=480$ ). If $AB$ are in the second row, there can only be two other people in that row. So the number of ways for the second row would be ${5 \choose 2} \times 3! \times 2$ . Multiplying this by $3!$ we get the total number of ways in which $AB$ are in the second row ( $=720$ ). So the total number of ways without the restriction for $C$ would be $480 + 720 = 1200$ .
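These two counts are easy to confirm by brute force. The following sketch (my own check; labeling front-row slots 0–2 and back-row slots 3–6 is an assumption of the encoding) enumerates all $7!$ arrangements:

```python
from itertools import permutations

people = "ABCDEFG"                    # A, B, C plus four others
front = {0, 1, 2}                     # slots 0-2: front row; slots 3-6: back row
adjacent = {(0, 1), (1, 2), (3, 4), (4, 5), (5, 6)}   # within-row neighbours

front_count = back_count = 0
for perm in permutations(people):
    pos = {p: i for i, p in enumerate(perm)}
    a, b = pos["A"], pos["B"]
    if (min(a, b), max(a, b)) in adjacent:       # A and B stand next to each other
        if a in front:
            front_count += 1
        else:
            back_count += 1
print(front_count, back_count, front_count + back_count)  # -> 480 720 1200
```

The brute force agrees with the $480$ and $720$ computed above.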
|
|combinatorics|permutations|combinations|
| 1
|
Showing divergence of $\sum\limits_{k=1}^{\infty} \log\left(1+\frac{(-1)^{k+1}}{k^\alpha}\right)$ where $0<\alpha<\frac{1}{2}$
|
I am trying to prove that $$\sum\limits_{k=1}^{\infty} \log\left(1+\frac{(-1)^{k+1}}{k^\alpha}\right)$$ diverges, for any $0<\alpha<\frac{1}{2}$ . I tried showing this by taking the Taylor expansion of $\log(1+\epsilon)$ around $0$ up to order $N$ , where $N$ is the minimal integer such that $\alpha N >1$ , then substituting $\epsilon = \frac{(-1)^{k+1}}{k^\alpha}$ . This resulted in the sum of the following Leibniz series + some convergent series (which converges because $\alpha N >1$ ) + the remainder, and they all converge: \begin{align*} &\sum\limits_{k=1}^{\infty} \log\left(1+\frac{(-1)^{k+1}}{k^\alpha}\right)\\ =&\sum\frac{(-1)^{k+1}}{k^\alpha}-\sum\frac{1}{2k^{2\alpha}}+\dots+\sum(-1)^{N+1}\frac{(-1)^{k+1}}{k^{\alpha N}\cdot N}+\sum R_N\left(\frac{(-1)^{k+1}}{k^\alpha}\right) \end{align*} which is clearly in contradiction with the sum of the logs diverging.
|
Use General Leibniz's Test: if $\lim\limits_{n\to\infty}a_n=0$ , then $$\sum_{n=1}^\infty a_n\ \mbox{converges}\iff \sum_{n=1}^\infty(a_{2n-1}+a_{2n})\ \mbox{converges},$$ and they have the same sum. Proof: Let $$S_n=\sum_{k=1}^{n}a_k,\quad T_n=\sum_{k=1}^{n}(a_{2k-1}+a_{2k}),$$ then $$S_{2n}=T_n=S_{2n+1}-a_{2n+1}.$$ This implies that the two series converge or diverge together and have the same sum. The series $$\sum_{n=1}^{\infty}\ln\left(1+\frac{(-1)^{n+1}}{n^\alpha}\right)$$ is convergent if and only if $\alpha>\frac12$ . Proof: By the General Leibniz's Test , the series $\displaystyle\sum_{n=1}^{\infty}\ln\left(1+\frac{(-1)^{n+1}}{n^\alpha}\right)$ and $$\sum_{n=1}^{\infty}\left[\ln\left(1+\frac{1}{(2n-1)^\alpha}\right) +\ln\left(1+\frac{-1}{(2n)^\alpha}\right)\right]$$ have the same convergence behaviour. Note that $$\begin{align*} 0\geq\ln\left(1+\frac{1}{(2n-1)^\alpha}\right) +\ln\left(1+\frac{-1}{(2n)^\alpha}\right) &=\ln\frac{((2n-1)^\alpha+1)((2n)^\alpha-1)}{(2n-1)^\alpha(2n)^\alpha}\\ &\sim\frac{(2n)^
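A numerical illustration (my own addition, not part of the proof): the partial sums drift without bound for $\alpha<\frac12$ but settle down for $\alpha>\frac12$.

```python
import math

def partial_sum(alpha, n_terms):
    # partial sum of sum_k ln(1 + (-1)^(k+1) / k^alpha); use even n_terms to pair terms
    return sum(math.log(1 + (-1) ** (k + 1) / k ** alpha)
               for k in range(1, n_terms + 1))

for alpha in (0.4, 0.6):
    print(alpha, [round(partial_sum(alpha, n), 3) for n in (10**3, 10**4, 10**5)])
```

For $\alpha=0.4$ the printed partial sums keep decreasing as more terms are added, while for $\alpha=0.6$ consecutive values stay close together, consistent with convergence exactly when $\alpha>\frac12$.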
|
|calculus|convergence-divergence|taylor-expansion|divergent-series|
| 0
|
Can I avoid differentiation to find the remainder when $x^{73}-2x^{15}+3x-1$ is divided by $(x-1)^2$
|
The problem states: Find the remainder when $x^{73}-2x^{15}+3x-1$ is divided by $(x-1)^2$ . So, what I did is: assume the remainder to be linear, i.e. $r(x) = ax+b$ . By Euclid's division, $x^{73}-2x^{15}+3x-1 = q(x)(x-1)^2+r(x)$ . Putting $x = 1$ , we get $a+b = 1$ . Then I differentiated, and the rest is trivial. How can I skip differentiation and get the answer by some other method? I was thinking perhaps something involving number theory. I want to know: is it possible to skip calculus in such questions?
|
I think that your method is completely algebraic and very nice. We just have $a=f'(1)=46$ , and hence by $a+b=1$ then $r(x)=46x-45$ . And taking $f'(1)$ is not really "calculus". Differentiation of polynomials in abstract algebra is a linear map $D$ with $D(x^n)=nx^{n-1}$ on monomials. So $f'(1)=D(f)(1)=46$ . This is how it is done in abstract algebra. See for example the following posts: Multiple root of a polynomial and formal derivative. How does one determine whether a polynomial has a double root? Repeated roots of a polynomial
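To illustrate the purely algebraic view, here is a small sketch (my own, not from the answer) computing the formal derivative term by term, with no limits involved:

```python
# f(x) = x^73 - 2x^15 + 3x - 1 as (coefficient, exponent) pairs
terms = [(1, 73), (-2, 15), (3, 1), (-1, 0)]

f1  = sum(c for c, e in terms)        # f(1) = sum of coefficients = a + b
df1 = sum(c * e for c, e in terms)    # formal derivative D(x^n) = n x^(n-1), at x = 1
a, b = df1, f1 - df1
print(f"r(x) = {a}x + ({b})")         # -> r(x) = 46x + (-45)
```

Both values fall out of finite sums over the four monomials, exactly as the formal-derivative viewpoint suggests.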
|
|calculus|algebra-precalculus|derivatives|polynomials|
| 0
|
Linear Algebra, Subspaces and Basis Properties
|
Let $w = (1, 1 ,2, -1), W = \{ u \in\Bbb R^4\mid u \cdot w = 2 \}$ , and $V = \{ u - (1, 0, 1, 1)\mid u \in W \}$ . (a) Is $W$ a subspace? Explain your answer (b) Show that $V$ is a subspace and that $T = \{ (-1, 1, 0, 0) , (-2, 0, 1, 0) , (1, 0, 0, 1) \}$ is a basis for $V$ . (c) Let $T$ be the basis of $V$ defined in (b) Find $[(4, -1, 1, 5)]_T$ . Show your workings, however, you do not need to show your elementary row operations. So I have shown part (a) but I'm somewhat confused about part (b) and (c), I have attached a link for better referencing and my workings there, for the question https://ibb.co/wp0RcnC my working is https://ibb.co/qgzBDg3 Thanks in advance to whoever that could help me.
|
On $c),$ you get the system: $$\begin {pmatrix} -1&-2&1\\1&0&0\\0&1&0\\0&0&1\end {pmatrix}\begin {pmatrix}a\\b\\c\end {pmatrix}=\begin {pmatrix}4\\-1\\1\\5\end {pmatrix}.$$ Then $(a,b,c)=[(4,-1,1,5)]_T.$ There is a solution and it's $(-1,1,5).$
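A quick numerical confirmation (my own addition) that these coordinates reproduce the vector:

```python
# basis T of V and the claimed coordinate vector [(4, -1, 1, 5)]_T = (-1, 1, 5)
v1, v2, v3 = (-1, 1, 0, 0), (-2, 0, 1, 0), (1, 0, 0, 1)
a, b, c = -1, 1, 5
combo = tuple(a * x + b * y + c * z for x, y, z in zip(v1, v2, v3))
print(combo)  # -> (4, -1, 1, 5)
```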
|
|linear-algebra|algebra-precalculus|
| 0
|
Using triple integration for the area of a triangle
|
For practice I am trying to use the standard definition of volume for a set $X \subset \mathbb{R}^n$ given by: $$vol(X) := \int_X dx_1 \ \ldots \ dx_n$$ to compute the area of the triangle spanned by the points $(0,0,1), (0,2,0)$ and $(1,1,1)$ in $\mathbb{R}^3$ , i.e. the standard area of this triangle, NOT the area of its projection to the $xy$ -plane. (The result should be $1.22$ .) The triangle lies in the plane $x-y-2z = -2$ . Since it furthermore holds $0 \le z \le 1$ , $y=x-2z+2$ and for $x = 0$ holds $y = 2z$ I tried $$\int_0^1 \int_0^{2z} \int_0^{x - 2 z + 2} dy dx dz \approx 1.3333$$ , which is false. I suspect I am making some error with the boundaries of the integrals, but I am not sure where. Could you please give me a hint?
|
I do not know where you got your formula, but I doubt it on dimensional grounds: area is a second power of length, while a triple integral gives a third power. Now back to work. One way to calculate the area of a triangle in space is to view it as a surface and use the well-known surface-area formula for a graph $$\iint\limits_{G}\sqrt{1+(z'_x)^2+(z'_y)^2}\,dx\,dy$$ where $G$ is the projection onto the $Oxy$ plane. In your case we obtain $$\int\limits_{0}^{1}\int\limits_{x}^{2-x}\sqrt{1+\frac{1}{4}+\frac{1}{4}}\,dy\,dx=\sqrt{6}-\sqrt{\frac{3}{2}}\approx 1.224744$$ A second way I see is to map your triangle onto the $Oxy$ plane and calculate its area using the usual double integral, taking into account the Jacobian of the transformation, which I will leave as an exercise for you.
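Both routes can be cross-checked against the cross-product formula for the area of a triangle in space. A short sketch (my own check):

```python
import math

p0, p1, p2 = (0, 0, 1), (0, 2, 0), (1, 1, 1)        # the triangle's vertices
u = tuple(a - b for a, b in zip(p1, p0))             # edge p0 -> p1
v = tuple(a - b for a, b in zip(p2, p0))             # edge p0 -> p2
cross = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
area = 0.5 * math.sqrt(sum(c * c for c in cross))    # |u x v| / 2
print(round(area, 6), round(math.sqrt(3 / 2), 6))    # -> 1.224745 1.224745
```

The cross product gives $\sqrt6/2=\sqrt{3/2}$, the same value as the surface integral.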
|
|calculus|integration|
| 1
|
Construction of Standard Normal Distribution
|
Given $10$ random variables $W_1,W_2,\cdots,W_{10}\stackrel{i.i.d}{\sim}N(\mu,\sigma)$ , where $\mu,\sigma$ are unknown . How can we construct a known function $f$ such that $Y=f(W_1,W_2,\cdots,W_{10})\sim N(0,1)$ ? Note, here a known function means there is no $\mu,\sigma$ in $f$ ; e.g. $f_1=\frac{W_1-\mu}{\sigma}$ is unknown and $f_2=3W_1-e^{W_2}+W_3$ is known. Firstly I successfully made $\mu$ become $0$ : I let $Y_1=W_1-W_2,Y_2=W_3-W_4,\cdots,Y_5=W_9-W_{10}$ , so that $Y_1,\cdots,Y_5\stackrel{i.i.d}{\sim} N(0,2\sigma^2)$ . However, I then cannot eliminate $\sigma$ . I tried using some $f$ to approach $\frac{X-\mu}{\sigma}$ and I've tried many functions like $Z=\frac{Y_3}{\sqrt{Y_1^2+Y_2^2}}$ and $Z=\frac{Y_1^2+Y_2^2}{\ln(\frac{1}{\pi}\arctan\frac{Y_1}{Y_2}+\frac{1}{2})}$ but nothing worked. Can someone give me some hint?
|
$C=\frac{W_1-W_2}{W_3-W_4}$ is a ratio of two independent $N(0,2\sigma^2)$ variables, hence standard Cauchy, and $\Phi^{-1}\left(\frac{1}{2}+\frac{1}{\pi}\arctan C \right)$ is $N(0,1).$
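This construction can be checked by simulation. A sketch (my own illustration; the values of $\mu,\sigma$ below are arbitrary, since the construction never uses them):

```python
import math, random, statistics
from statistics import NormalDist

random.seed(0)
mu, sigma = 3.7, 2.5                       # "unknown" parameters; any values work
nd = NormalDist()
samples = []
for _ in range(20000):
    w = [random.gauss(mu, sigma) for _ in range(4)]
    c = (w[0] - w[1]) / (w[2] - w[3])      # ratio of iid centered normals: standard Cauchy
    u = 0.5 + math.atan(c) / math.pi       # Cauchy CDF maps C to Uniform(0, 1)
    samples.append(nd.inv_cdf(u))          # inverse normal CDF then gives N(0, 1)
print(round(statistics.mean(samples), 2), round(statistics.stdev(samples), 2))
```

The printed sample mean and standard deviation come out close to $0$ and $1$ respectively, independent of the chosen $\mu,\sigma$.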
|
|statistics|probability-distributions|normal-distribution|
| 0
|
Riemann integrability
|
Suppose for $f:[a,b]\to \mathbb{R}$ , $|f(x)|$ is Riemann integrable on the interval $[a,b]$ . Suppose also $f(x)$ is the derivative of $F(x)$ , that is, $F'(x)=f(x), \forall x \in [a,b]$ . Prove: $f(x)$ is Riemann integrable on the interval $[a,b]$ too. In this situation I know that the image of $f(x)$ , $f([a,b])$ , has the intermediate value property, so $f([a,b])$ is connected, and $|f|$ is continuous on $[a,b]$ almost everywhere, but I don't know what to do next. Hope someone can help me! Thanks!
|
We consider all points where $\vert f\vert$ is continuous. For any such point $x$ with $f(x)=0$ , we note that for every $\epsilon>0$ , near $x$ we have $\vert \vert f\vert -0\vert <\epsilon$ , hence $\vert f-0\vert <\epsilon$ , which indicates $f$ is continuous at $x$ . For other points, without loss of generality, we suppose $f(x_0)>0$ . Thus $\exists \epsilon>0\,$ s.t. $$x\in U(x_0,\epsilon) \Rightarrow \vert\vert f(x)\vert-\vert f(x_0)\vert\vert<\frac{f(x_0)}{2},$$ so $\vert f(x)\vert>\frac{f(x_0)}{2}>0$ on $U(x_0,\epsilon)$ , and then the intermediate value property of the derivative $f$ indicates $f=\vert f\vert$ holds in $U(x_0,\epsilon)$ . We then also have continuity of $f$ at $x_0$ .
|
|integration|
| 1
|
Constructing a Countable Base from a Union of Finite Open Covers
|
My question comes from the second section of the first chapter of $\textit{Handbook of Set Theoretic Topology}$ . To paraphrase, the text essentially says that you could easily verify that the following lemma implies the theorem. They state: (Miscenko's Lemma) For any infinite cardinal $\kappa$ and any set $E$ with a collection $\mathscr{A} \subset \mathcal{P}(E)$ such that $\text{ord}(p, \mathscr{A}) = |\{ A \in \mathscr{A} : p \in A \} | \leq \kappa$ for every $p \in E$ , the number of minimal finite covers of $E$ by members of $\mathscr{A}$ is bounded by $\kappa$ . (Miscenko's Theorem) Every compact space with a point-countable base has a countable base. My attempt is the following. Let $X$ be a compact space with point-countable basis $\mathscr{B}$ , and set $\mathscr{B}' = \bigcup\{ B \subset \mathscr{B} : B \text{ is a minimal finite cover of } X\}$ . I'd like to show that $\mathscr{B}'$ is the desired countable basis of $X$ . It certainly covers $X$ , is nonempty, and is countable.
|
The result holds for $T_1$ compact spaces. Let $\mathcal{B}$ be a point-countable base of $X$ , where $X$ is your $T_1$ compact space, and let $\mathcal{B}'$ be the union of $\mathcal{C}$ , where $\mathcal{C}$ is the family of all finite minimal covers of the space by basic open sets from $\mathcal{B}$ . By Miscenko's Lemma, $\mathcal{C}$ is countable - thus, $\mathcal{B}'$ is countable as well, being a union of a countable family of finite sets. Also, clearly $\mathcal{B}' \subseteq \mathcal{B}$ . The main point is: you do not need to worry about "directly checking that $\mathcal{B}'$ is a base", because we have a neat argument to show that, indeed, $\mathcal{B}' = \mathcal{B}$ . Indeed: let $B \in \mathcal{B}$ and fix $x \in B$ . The closed subspace $X \setminus B$ is compact, and for all $y \in X \setminus B$ we may fix a basic open set $B_y \in \mathcal{B}$ with $y \in B_y$ such that $x \notin B_y$ (and here is the point where we use the separation axiom hypothesis!). Thus, there is a finite set $F \subseteq X \setminus B$ with $X \setminus B \subseteq \bigcup_{y \in F} B_y$ ; then $\{B\} \cup \{B_y : y \in F\}$ is a finite cover of $X$ , and any minimal subcover of it must contain $B$ , since $B$ is the only member containing $x$ . Hence $B \in \mathcal{B}'$ .
|
|general-topology|set-theory|
| 0
|
Lower bound on Frobenius norm of product
|
Suppose that $A, B \in \mathbf R^{m \times n}$ with $m \leq n$ . Let $\|\,\cdot\,\|$ denote the Frobenius norm and let $\langle \,\cdot\,, \,\cdot\,\rangle$ denote the Frobenius inner product. Note that $$ \|A^T A\|^2 = \sum_{j=1}^m \sigma_j(A)^4 \geq \frac{1}{m} \left( \sum_{j=1}^m \sigma_j(A)^2 \right)^2 = \frac{1}{m} \|A\|^4, $$ where $\sigma_j(A)$ are the singular values of $A$ . Is it true that, similarly, $$ \|A^T B\|^2 \geq \frac{1}{m} \langle A, B\rangle^2 \, ? $$
|
By Cauchy-Schwarz inequality, we obtain \begin{align*} \langle A,B\rangle^2 =\left(\operatorname{tr}(A^TB)\right)^2 =\left(\operatorname{tr}(AB^T)\right)^2 =\langle I_m,AB^T\rangle^2 \le\|I_m\|_F^2\|AB^T\|_F^2 =m\|\color{red}{AB^T}\|_F^2, \end{align*} which is not exactly your inequality. However, we may modify the proof above to prove your inequality. Since the rank of $A^TB$ is at most $m$ , it admits a singular value decomposition $U\Sigma V^T$ where $\Sigma=S\oplus0$ for some nonnegative $m\times m$ diagonal matrix $S$ and $U,V$ are some orthogonal $n\times n$ matrices. It follows that \begin{align*} \langle A,B\rangle^2 &=\left(\operatorname{tr}(A^TB)\right)^2\\ &=\left(\operatorname{tr}(U\Sigma V^T)\right)^2\\ &=\left(\operatorname{tr}(V^TU\Sigma)\right)^2\\ &\le\left(\operatorname{tr}(\Sigma)\right)^2\\ &=\left(\operatorname{tr}(S)\right)^2\\ &=\langle I_m,S\rangle^2\\ &\le\|I_m\|_F^2\|S\|_F^2\\ &=m\|\Sigma\|_F^2\\ &=m\|U\Sigma V^T\|_F^2\\ &=m\|\color{red}{A^TB}\|_F^2. \end{align*}
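One can spot-check the resulting inequality $\langle A,B\rangle^2 \le m\|A^TB\|_F^2$ numerically. A minimal sketch in pure Python (my own addition) with random $3\times5$ matrices:

```python
import random

random.seed(1)
m, n = 3, 5
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
B = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]

inner = sum(A[i][j] * B[i][j] for i in range(m) for j in range(n))  # <A, B>
frob2 = sum(sum(A[i][j] * B[i][k] for i in range(m)) ** 2           # ||A^T B||_F^2
            for j in range(n) for k in range(n))
print(inner ** 2 <= m * frob2)  # -> True
```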
|
|matrices|
| 1
|
Wasserstein-2 distance between $\mu_1$ and $\mu_2$ equals the $L^2$ distance between the Fourier transforms $\hat{\mu}_1$ and $\hat{\mu}_2$
|
Plancherel's theorem states that the Fourier transform preserves the $L^2$ norm, meaning that if $f$ and $g$ are functions whose Fourier transforms are denoted by $\hat{f}$ and $\hat{g}$ respectively, then: $$ \| f \|_{L^2} = \| \hat{f} \|_{L^2} $$ Similarly, is it true that: if $\mu_1$ and $\mu_2$ are probability measures and $\hat{\mu}_1$ and $\hat{\mu}_2$ are their respective Fourier transforms, then the $L^2$ distance between $\hat{\mu}_1$ and $\hat{\mu}_2$ is equal to the Wasserstein-2 distance between $\mu_1$ and $\mu_2$ ? Mathematically, this can be expressed as: $$ W_2(\mu_1, \mu_2) = \left( \int_{X \times X} |x - y|^2 d\pi(x,y) \right)^{1/2} = \left( \int_{\mathbb{R}} |\hat{\mu}_1(\xi) - \hat{\mu}_2(\xi)|^2 d\xi \right)^{1/2} $$
|
No, this is not true; why would it be? In particular, your second distance is nothing but the $L^2$ norm of $\mu_1-\mu_2$ , which is not even always defined when $\mu_1$ and $\mu_2$ are not absolutely continuous with respect to the Lebesgue measure. If you want a counterexample, take $\mu_1 = \delta_0$ and $\mu_2 = \delta_a$ ; then their Wasserstein distance is $|a|$ , but $$ |\hat{\mu}_1(\xi)-\hat{\mu}_2(\xi)| = |1-e^{2i\pi\,a\,\xi}| = 2\left|\sin(\pi\,a\,\xi)\right| $$ which is clearly not square integrable, so the integral is infinite. Actually, the Wasserstein distance should rather be compared with negative Sobolev distances, such as the $H^{-1}$ norm.
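The modulus computation in the counterexample is the standard identity $|1-e^{i\theta}|=2|\sin(\theta/2)|$, which a quick sketch confirms on sample points (my own check):

```python
import cmath, math

a = 1.7                                    # arbitrary nonzero shift
for xi in (0.3, 1.0, 2.5, -4.2):
    lhs = abs(1 - cmath.exp(2j * math.pi * a * xi))   # |mu1_hat - mu2_hat|
    rhs = 2 * abs(math.sin(math.pi * a * xi))
    assert math.isclose(lhs, rhs, abs_tol=1e-12)
print("identity holds on the sample points")
```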
|
|complex-analysis|probability-theory|measure-theory|statistics|fourier-analysis|
| 1
|
Relations Symmetry and Transitivity
|
Given the following relations over the set $M := \{α, β, γ\}$ : $R1 := \{(α, α), (α, β), (β, α), (β, β), (γ, γ)\}$ . How is $R1$ transitive? The condition for transitivity is $(a,y)\in R1 \text{ and }(y,x) \in R1 \implies (a,x) \in R1$ . And how is $R2$ not a partial order? A relation is a partial order if it is reflexive, antisymmetric and transitive.
|
$R1$ is transitive because for every pair $(x,y), (y,z) \in R1$ , the pair $(x,z)$ is also in $R1$ . Checking all composable pairs: $(\alpha,\beta) \land (\beta,\alpha) \implies (\alpha,\alpha) \in R1 $ $(\alpha,\beta) \land (\beta,\beta) \implies (\alpha,\beta) \in R1 $ $(\beta,\alpha) \land (\alpha,\beta) \implies (\beta,\beta) \in R1 $ $(\beta,\alpha) \land (\alpha,\alpha) \implies (\beta,\alpha) \in R1$ $(\alpha,\alpha) \land (\alpha,\beta) \implies (\alpha,\beta) \in R1 $ $(\beta,\beta) \land (\beta,\alpha) \implies (\beta, \alpha) \in R1$ $(\gamma,\gamma) \land (\gamma,\gamma) \implies (\gamma,\gamma) \in R1$ That proves that $R1$ is transitive. $R2$ is not a partial order because, although it is reflexive, it is not antisymmetric: $(\beta,\gamma) \in R2$ and $(\gamma,\beta) \in R2$ even though $\beta \neq \gamma$ . (Note that $R2$ is not symmetric either, since $(\alpha,\gamma) \in R2$ but $(\gamma, \alpha) \notin R2$ ; symmetry, however, is not part of the definition of a partial order.)
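The same case check can be automated; here is a small sketch (my own addition) verifying transitivity of $R1$ exhaustively:

```python
from itertools import product

R1 = {("α", "α"), ("α", "β"), ("β", "α"), ("β", "β"), ("γ", "γ")}

def is_transitive(R):
    # (x, y) in R and (y, z) in R must imply (x, z) in R
    return all((x, z) in R for (x, y), (w, z) in product(R, R) if y == w)

print(is_transitive(R1))  # -> True
```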
|
|relations|order-theory|symmetry|
| 0
|