Where should we compute the determinant and trace of a matrix?
In the matrix ring $M_3(\mathbb{Z}_6)$ , what is the determinant of the matrix diag( $3,3,1$ )? Is it $9$ with respect to the integers or is it $3$ modulo $6$ ? How about its trace?
You can compute the determinant and trace of the matrix in the integers and then reduce modulo $6$ at the end, or you can reduce the entries as you go. This is because determinant and trace are polynomials in the entries of a matrix and because the reduction modulo $6$ map $\mathbf{Z} \to \mathbf{Z}_6$ is a ring homomorphism. Don't worry if you don't understand this last part. For example, the determinant and trace of $\text{diag}(3, 3, 1)$ are $9$ and $7$ in the integers, but as an element of $\mathbf{Z}_6$ , we get $9 \equiv 3 \, (\text{mod} \, 6)$ and $7 \equiv 1 \, (\text{mod} \, 6)$ . I hope this helps!
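If you want to check both routes mechanically, a few lines of Python (SymPy here, purely as an illustration) do it:

```python
from sympy import diag

M = diag(3, 3, 1)                  # the matrix, with integer entries
det_Z, tr_Z = M.det(), M.trace()   # 9 and 7 over the integers
print(det_Z % 6, tr_Z % 6)         # reduce at the end: 3 and 1 in Z_6
```

Reducing the entries first gives the same answers, since $\operatorname{diag}(3,3,1)$ is already reduced modulo $6$.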
|matrices|
0
Beta and gamma function problem
Show that $$\int_0^1\frac{x^2}{\sqrt{1-x^4}}\,dx \times \int_0^1\frac{1}{\sqrt{1+x^4}}\,dx=\frac{\pi}{4\sqrt{2}}$$ I'm trying to solve this problem but I can't prove it. In the end I got $\Gamma(3/8)$ and $\Gamma(5/8)$, but I don't know how to evaluate them. First I put $x^2=\sin(\theta)$ to solve the first integral, and $x^2=\tan(\theta)$ for the second. I don't know whether to combine both integrals or to evaluate them separately. I tried both but was not able to finish. Please help, guys.
For the first one, let $x^2=\sin \theta$ , then $$ \begin{aligned} I & =\frac{1}{2} \int_0^{\frac{\pi}{2}} \sin ^{2\left(\frac{3}{4}\right)-1} \theta \cos ^{2 \left( \frac{1}{2}\right)-1} \theta d \theta \\ & =\frac{1}{4} B\left(\frac{3}{4}, \frac{1}{2}\right)=\frac{\sqrt{\pi}}{4} \frac{\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{5}{4}\right)} \end{aligned} $$ For the second, let $x^2=\tan \theta$ , then $$ \begin{aligned} J & =\int_0^{\frac{\pi}{4}} \frac{1}{\sec \theta} \cdot \frac{1}{2 \sqrt{\tan \theta}} \sec ^2 \theta d \theta \\ & =\frac{1}{2} \int_0^{\frac{\pi}{4}} \frac{1}{\sqrt{\sin \theta \cos \theta}} d \theta \\ & =\frac{1}{\sqrt{2}} \int_0^{\frac{\pi}{4}} \frac{1}{\sqrt{\sin 2 \theta}} d \theta \\ & =\frac{1}{2 \sqrt{2}} \int_0^{\frac{\pi}{2}} \sin ^{2\left(\frac{1}{4}\right)-1} \theta \cos ^{2\left(\frac{1}{2}\right)-1} \theta d \theta\\&=\frac{1}{4 \sqrt{2}} B\left(\frac{1}{4} ,\frac{1}{2}\right) \end{aligned} $$ Multiplying them yields $$ \begin{aligned} I \times J &=\frac{1}{16\sqrt{2}}\, B\left(\frac{3}{4}, \frac{1}{2}\right) B\left(\frac{1}{4}, \frac{1}{2}\right) =\frac{1}{16\sqrt{2}} \cdot \frac{\Gamma\left(\frac{3}{4}\right) \Gamma\left(\frac{1}{2}\right)}{\Gamma\left(\frac{5}{4}\right)} \cdot \frac{\Gamma\left(\frac{1}{4}\right) \Gamma\left(\frac{1}{2}\right)}{\Gamma\left(\frac{3}{4}\right)} \\ &=\frac{\pi}{16\sqrt{2}} \cdot \frac{\Gamma\left(\frac{1}{4}\right)}{\frac{1}{4}\Gamma\left(\frac{1}{4}\right)} =\frac{\pi}{4\sqrt{2}}, \end{aligned} $$ using $\Gamma\left(\frac{1}{2}\right)^2=\pi$ and $\Gamma\left(\frac{5}{4}\right)=\frac{1}{4}\Gamma\left(\frac{1}{4}\right)$.
|definite-integrals|gamma-function|beta-function|
0
About an integral from MIT Integration Bee 2024
Good evening, I was interested in the third integral from the finals of the MIT Integration Bee 2024 : $$I = \int_{-\infty}^{\infty} \frac{1}{x^4+x^3+x^2+x+1} \hspace{0.1cm} \mathrm{d}x$$ One way to solve this is to identify that the denominator is the fifth cyclotomic polynomial, so we can write : $$I = \displaystyle\int_{-\infty}^{\infty} \frac{1-x}{(1-x)(x^4+x^3+x^2+x+1)} \hspace{0.1cm} \mathrm{d}x = \displaystyle\int_{-\infty}^{\infty} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x$$ Then, by the Chasles relation : $$I = \displaystyle\int_{-\infty}^{0} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x + \displaystyle\int_{0}^{\infty} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x$$ Let $x\to -x$ in the first integral, we obtain : $$I = \displaystyle\int_{0}^{\infty} \frac{1+x}{1+x^5} \hspace{0.1cm} \mathrm{d}x + \displaystyle\int_{0}^{\infty} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x$$ And we use these two identities : $$ \displaystyle\int_{0}^{\infty} \frac{x^{s-1}}{1+x^k} \hspace{0.1cm} \mathrm{d}x = \frac{\pi}{k\sin\left(\frac{\pi s}{k}\right)}, \qquad \operatorname{p.v.}\displaystyle\int_{0}^{\infty} \frac{x^{s-1}}{1-x^k} \hspace{0.1cm} \mathrm{d}x = \frac{\pi}{k}\cot\left(\frac{\pi s}{k}\right),$$ applied with $s\in\{1,2\}$ and $k=5$.
Another possible path for the integral: substitute $x=\dfrac{1-y}{1+y}$ (which maps $(0,\infty)$ onto $(-1,1)$), then decompose into partial fractions. $$\begin{align*} \int_{0}^\infty \frac{1-x}{1-x^5} \, dx &= 2 \int_{-1}^1 \frac{y^2+2y+1}{y^4+10y^2+5} \, dy \\ &= 4 \int_0^1 \frac{y^2+1}{y^4+10y^2+5} \, dy \\ &= \frac2{\sqrt5} \int_0^1 \left(\frac{2+\sqrt5}{5+2\sqrt5+y^2} - \frac{2-\sqrt5}{5-2\sqrt5+y^2}\right) \, dy \end{align*}$$ The $\int_{-\infty}^0$ piece can be handled the same way.
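Numerically, the rationalised form matches the $x\ge 0$ half of the integral; note that the substitution $x=\frac{1-y}{1+y}$ maps $(0,\infty)$ onto $(-1,1)$. A quick check with mpmath (the library choice is incidental):

```python
import mpmath as mp

half = mp.quad(lambda x: 1 / (x**4 + x**3 + x**2 + x + 1), [0, mp.inf])
sub = 4 * mp.quad(lambda y: (y**2 + 1) / (y**4 + 10*y**2 + 5), [0, 1])
print(half, sub)   # the two agree
```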
|integration|
0
Condition to be a Singular Vector
Let $M$ be a rectangular matrix. Suppose that $x$ and $y$ are vectors so that $$x^T M = (x^T My) y^T.$$ Does this imply that $x$ and $y$ are singular vectors of $M$ ? If $M$ is square and $x=y$ then this does imply that $x$ must be an eigenvector of $M^T$: transposing gives $M^T x = (x^T M x)\, x$, where $x^TMx$ is a scalar. Not sure how to reason about this case for singular vectors though.
No, it doesn't imply this, I'm afraid. The equation is equivalent to the following condition: there exists some $\lambda$ such that $M^\top x = \lambda y$, and if $\lambda \neq 0$, then $\|y\| = 1$. I hope I don't have to come up with a specific counterexample to prove this; all you'd need to do is start with any matrix $M^\top$ and any vector $x$ not in its nullspace that also is not a singular vector, and let $y$ be the normalisation of $M^\top x$. Let's prove the equivalence. If the condition holds, then either $M^\top x = 0$, in which case both sides are $0$, or $y^\top y = 1$ and some $\lambda$ exists such that $x^\top M = \lambda y^\top$. Subbing this into the equation yields $$\lambda y^\top = \lambda y^\top y y^\top,$$ and the two sides agree, as $y^\top y = 1$. Now let's assume the equation holds true. If $x^\top M = 0$, then the condition is satisfied with $\lambda = 0$. Otherwise $x^\top M \neq 0$, which implies that $(x^\top M y)y^\top \neq 0$, implying that $x^\top M y \neq 0$. From here, set $\lambda = x^\top M y \neq 0$: the equation reads $x^\top M = \lambda y^\top$, i.e. $M^\top x = \lambda y$. Multiplying the equation on the right by $y$ gives $\lambda = \lambda\, y^\top y$, hence $y^\top y = 1$.
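A concrete counterexample along these lines, in NumPy (the diagonal $M$ and the vector $x$ are arbitrary choices; $y$ is defined as the normalisation of $M^\top x$):

```python
import numpy as np

M = np.diag([1.0, 2.0])                  # any matrix works here
x = np.array([1.0, 1.0])                 # x not in the nullspace of M^T
y = M.T @ x / np.linalg.norm(M.T @ x)    # y = normalisation of M^T x

lam = x @ M @ y
assert np.allclose(x @ M, lam * y)       # the hypothesised identity holds

# ...yet y = [1, 2]/sqrt(5) is not a right singular vector of M
# (for this diagonal M those are the standard basis vectors):
U, s, Vt = np.linalg.svd(M)
print(np.abs(Vt @ y))                    # no entry equals 1
```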
|linear-algebra|
0
Linear transformation matrix given two vectors
I'm having trouble with this problem. Given two vectors $ \overrightarrow{v} = \left( \begin{matrix} v_1 \\ v_2 \end{matrix} \right) $ $ \overrightarrow{v'} = \left( \begin{matrix} v'_1 \\ v'_2 \end{matrix} \right) $ Given that $T$ is a linear transformation $ T:\mathbb{R}^2 \rightarrow \mathbb{R}^2 $ and that $ \overrightarrow{v}' = T(\overrightarrow{v}) $, find the matrix $T$ which represents the linear transformation. I tried to express this in matrix form as $ \left( \begin{matrix} v'_1 \\ v'_2 \end{matrix} \right) = \left( \begin{matrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{matrix} \right) \left( \begin{matrix} v_1 \\ v_2 \end{matrix} \right) $ But I don't know how to continue; should I expand the matrix multiplication on the RHS and isolate the $T$'s? I have an example of this: suppose $ \overrightarrow{v} = \left( \begin{matrix} 2 \\ 3 \end{matrix} \right) $ $ \overrightarrow{v'} = \left( \begin{matrix} 6 \\ -1 \end{matrix} \right) $ How would this be done? I don't know how to continue.
The problem does not have a unique solution: one vector equation imposes only two linear conditions on the four unknown entries $T_{11},T_{12},T_{21},T_{22}$. For example, $$ \left(\begin{array}{cc} 3 & 0 \\ 1 & -1 \\ \end{array}\right) \left(\begin{array}{c} 2 \\ 3 \\ \end{array}\right) = \left(\begin{array}{c} 6 \\ -1 \\ \end{array}\right) \ \ \ \left(\begin{array}{cc} 0 & 2 \\ 4 & -3 \\ \end{array}\right) \left(\begin{array}{c} 2 \\ 3 \\ \end{array}\right) =\left(\begin{array}{c} 6 \\ -1 \\ \end{array}\right) $$
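The full two-parameter family can be read off with SymPy (shown only as a sketch):

```python
import sympy as sp

T11, T12, T21, T22 = sp.symbols('T11 T12 T21 T22')
# T @ (2, 3) == (6, -1) gives two equations in four unknowns
sol = sp.solve([2*T11 + 3*T12 - 6, 2*T21 + 3*T22 + 1], [T11, T21])
print(sol)   # T11 = 3 - 3*T12/2,  T21 = -1/2 - 3*T22/2
```

Both example matrices above are members of this family ($T_{12}=0, T_{22}=-1$ and $T_{12}=2, T_{22}=-3$ respectively).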
|linear-algebra|abstract-algebra|linear-transformations|
0
How many distinct bracelets can be made?
A bracelet contains at least 1 and at most 4 beads of identical size on a loop of string. Anna is making a bracelet and has 3 green, 3 blue and 3 red beads. How many distinct bracelets can she possibly make? The way I attempted this question was by counting up the number of distinct bracelets case by case, and summed them up, but it seems like a rather inefficient way. E.g., 1 bead --> 3 distinct bracelets 2 beads --> 3*3=9 distinct bracelets 3 beads where all beads are the same colour --> 3 distinct bracelets. 3 beads where all beads are different colours --> 3!=6 distinct bracelets. etc.. Is there any other way to do the problem intuitively?
Your case analysis should look more like this: Case 1: one bead. With $3$ colours to choose from, there are $3$ distinct one-bead bracelets. Case 2: two beads. You can have $2$ beads of the same colour or $2$ beads of different colours. There are $3$ bracelets of the first type and $3$ bracelets of the second type, for a total of $6$ bracelets. Case 3: three beads. You can have a bracelet with one colour, two colours or three colours. There is only $1$ bracelet of three colours and $3$ bracelets of one colour. For the two-colour case, note that there were $3$ two-bead bracelets with $2$ colours; for each of these, you have $2$ possible beads you can append without changing the number of colours, for a total of $2\times 3 = 6$. So in total, you have $1+3+6=10$ three-bead bracelets. Case 4: four beads. There are no bracelets of one colour, as that would require four beads of a single colour and only three are available. So the colour multiset is of type $3{+}1$, $2{+}2$, or $2{+}1{+}1$. Type $3{+}1$: $3\times 2=6$ bracelets. Type $2{+}2$: $\binom{3}{2}=3$ colour pairs, each with $2$ distinct arrangements ($AABB$ and $ABAB$), so $6$ bracelets. Type $2{+}1{+}1$: $3$ choices of the repeated colour, each with $2$ distinct arrangements ($AABC$ and $ABAC$), so $6$ bracelets. That gives $6+6+6=18$ four-bead bracelets, and $3+6+10+18=37$ distinct bracelets in total.
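A count like this is easy to double-check by brute force over canonical forms (rotations and reflections); the following is just a sketch, with the colours abbreviated G/B/R:

```python
from itertools import product

def canon(b):
    # canonical representative: lexicographic minimum over the dihedral orbit
    reps = []
    for s in (b, b[::-1]):             # both reflections
        for i in range(len(b)):        # all rotations
            reps.append(s[i:] + s[:i])
    return min(reps)

colours = "GBR"
seen = set()
for n in range(1, 5):                  # 1 to 4 beads
    for b in product(colours, repeat=n):
        if all(b.count(c) <= 3 for c in colours):  # at most 3 of each colour
            seen.add(canon(b))
print(len(seen))   # 37
```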
|probability|combinatorics|permutations|combinations|
0
non vanishing of a bivariable polynomial
Let $P(X,Y)$ be a nonzero polynomial in $\mathbb C[X,Y]$ . Let $\{(x_i,y_i)\in\mathbb C^2\mid i\in\mathbb N\}$ be an infinite subset of $\mathbb C^2$ with the property that $x_i\ne x_j$ if $i\ne j$ . Does there exist an index $i_0\in \mathbb N$ such that there exists $n_0\in\mathbb N$ with the property that $P(x^n_{i_0},y^n_{i_0})\ne0$ for every integer $n\ge n_0$ ? Thanks in advance for any answer or hint.
The answer appears to be "no". As a counterexample, we can pick a polynomial and a sequence for which no such $n_0$ is possible. Let $P(x,y)=xy$ and let $(x_i,y_i)=(i,0)$ . Then $x_i\neq x_j$ for all $i\neq j$ . However, for any $i_0$ , I will never be able to find an $n_0$ from which the sequence is non-zero, since the entire sequence is identically $0$ . Namely, $$P(x_{i_0}^n,y_{i_0}^n)=x_{i_0}^n y_{i_0}^n=i_0^n\cdot 0^n = 0.$$
|algebraic-geometry|polynomials|
1
Is there a name for this property of sequences?
Suppose we have a sequence $(x_n)_{n=1}^\infty\subseteq X$ with $(X,d)$ a metric space, and we have the following property: $$\exists x\in X\forall\epsilon>0[|B_\epsilon(x)\cap (x_n)_{n=1}^\infty|=\aleph_0]$$ (That is to say, there exists a point such that for any $\epsilon>0$ there are a (countably) infinite number of terms of the sequence within $\epsilon$ of that point) What is the name of this property? I would rather not invent a name if one exists. I need a name as it is something memorable to remember (and thus link to). My idea: I quite like " $(x_n)_{n=1}^\infty$ hovers around $x$ " and such an $x$ a " hovering point ". A sequence with this property is a " hovering sequence" Or " lingers " so "let $(x_n)_{n=1}^\infty$ be a lingering sequence" Equivalent to: This property is equivalent to "having a convergent subsequence", but I don't like "let $(x_n)_{n=1}^\infty$ be a sequence with possibly many convergent subsequences converging to distinct points" or something. In the termi
As discussed in the comments, your description of a "hovering point" precisely coincides with the definition of a limit point . Therefore, such sequences (which you have named "hovering sequences" ) are best described simply as "sequences with limit points" . Your name is shorter; however, as noted by Paul Sinclair, it seems unlikely to catch on, given the relatively simple existing description for such sequences.
|sequences-and-series|metric-spaces|terminology|
0
How to prove this (geometrically correct) inequality?
Let $x_1, x_1^*\in \left[0,a\right]$ and $x_2, x_2^*\in \left[0,b\right]$ where $a, b \in \left(0,1\right)$ and $a<b$. Suppose \begin{equation*} \frac{x_1}{x_2}>\frac{a}{b} \end{equation*} and \begin{equation*}\frac{x_1^*}{x_2^*}<\frac{a}{b}.\end{equation*} I want to prove \begin{equation*} \frac{a-x_1^*}{b-x_2^*}>\frac{a-x_1}{b-x_2}. \end{equation*} I believe this inequality is correct based on the following graph. The first inequality implies the red point is below the dashed line and the second inequality implies the blue point is above the dashed line. Then the third inequality is implied by the fact that the blue line is flatter than the red line. But I had difficulty in proving it: \begin{equation*} \left(a-x_1^*\right)\left(b-x_2\right)-\left(a-x_1\right)\left(b-x_2^*\right)=\left(a x_2^*-bx_1^*\right)+\left(bx_1-ax_2\right)+\left(x_1^* x_2-x_1 x_2^*\right). \end{equation*} On the right hand side, the first and second terms are positive, but the third term is negative, so I failed to prove the whole thing is positive.
It is very easy to prove independently that (as suggested by your picture) $$\frac{a-x_1}{b-x_2}<\frac ab$$ and $$\frac{a-x_1^*}{b-x_2^*}>\frac ab.$$ For instance, for the first one: $$(a-x_1)b<(b-x_2)a$$ because $$x_1b>x_2a.$$
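A randomised numerical spot-check of the claim (plain Python, illustration only):

```python
import random

random.seed(0)
checked = 0
while checked < 20_000:
    a, b = sorted(random.random() for _ in range(2))
    x1, x2 = a * random.random(), b * random.random()
    x1s, x2s = a * random.random(), b * random.random()
    if min(a, x2, x2s) == 0 or a == b:
        continue
    # keep only samples satisfying the two hypotheses
    if x1 / x2 > a / b and x1s / x2s < a / b:
        assert (a - x1s) / (b - x2s) > (a - x1) / (b - x2) - 1e-12
        checked += 1
```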
|inequality|
0
How to prove this (geometrically correct) inequality?
Let $x_1, x_1^*\in \left[0,a\right]$ and $x_2, x_2^*\in \left[0,b\right]$ where $a, b \in \left(0,1\right)$ and $a<b$. Suppose \begin{equation*} \frac{x_1}{x_2}>\frac{a}{b} \end{equation*} and \begin{equation*}\frac{x_1^*}{x_2^*}<\frac{a}{b}.\end{equation*} I want to prove \begin{equation*} \frac{a-x_1^*}{b-x_2^*}>\frac{a-x_1}{b-x_2}. \end{equation*} I believe this inequality is correct based on the following graph. The first inequality implies the red point is below the dashed line and the second inequality implies the blue point is above the dashed line. Then the third inequality is implied by the fact that the blue line is flatter than the red line. But I had difficulty in proving it: \begin{equation*} \left(a-x_1^*\right)\left(b-x_2\right)-\left(a-x_1\right)\left(b-x_2^*\right)=\left(a x_2^*-bx_1^*\right)+\left(bx_1-ax_2\right)+\left(x_1^* x_2-x_1 x_2^*\right). \end{equation*} On the right hand side, the first and second terms are positive, but the third term is negative, so I failed to prove the whole thing is positive.
By the mediant inequality , $\dfrac ab = \dfrac{(a-x_1)+x_1}{(b-x_2)+x_2}$ is between $\dfrac{a-x_1}{b-x_2}$ and $\dfrac{x_1}{x_2}$ . Given $\dfrac ab < \dfrac{x_1}{x_2}$ , so $$ \frac{a-x_1}{b-x_2} < \frac ab. $$ Similarly, $\dfrac ab = \dfrac{(a-x_1^*)+x_1^*}{(b-x_2^*)+x_2^*}$ is between $\dfrac{a-x_1^*}{b-x_2^*}$ and $\dfrac{x_1^*}{x_2^*}$ . Given $\dfrac{x_1^*}{x_2^*} < \dfrac ab$ , so $$\frac{x_1^*}{x_2^*} < \frac ab < \frac{a-x_1^*}{b-x_2^*}.$$ Combining the two inequalities, $$\frac{a-x_1}{b-x_2} < \frac ab < \frac{a-x_1^*}{b-x_2^*}.$$ (This skips the cases where $x_1^* = 0$ or $x_1 = a$ .)
|inequality|
0
Compute the smallest possible integer $x$ such that $\lfloor \sqrt[8]{x}\rfloor < \lfloor \sqrt[7]{x}\rfloor < \cdots < \lfloor \sqrt{x}\rfloor < x$
Compute the smallest positive integer $x$ such that $\lfloor \sqrt[8]{x}\rfloor < \lfloor \sqrt[7]{x}\rfloor < \lfloor \sqrt[6]{x}\rfloor < \lfloor \sqrt[5]{x}\rfloor < \lfloor \sqrt[4]{x}\rfloor < \lfloor \sqrt[3]{x}\rfloor < \lfloor \sqrt{x}\rfloor < x$. I do not know how to solve this problem. I tried to brute force it, but it did not work.
Is this a programming question? I am unsure of a mathematical trick that would yield the result, but here is a quick python script:

```python
from sympy import integer_nthroot

for x in range(100000):
    # integer_nthroot(x, k)[0] == floor(x**(1/k)); note integer_nthroot(x, 1)[0] == x
    if all(integer_nthroot(x, k)[0] < integer_nthroot(x, k - 1)[0] for k in range(2, 9)):
        print(x)
        break
```

In about a second it spits out 4096 as the first integer with this property.
|radicals|
0
How many $k$-full numbers are there?
Recall that $n\in\mathbb{N}$ is $k$ - full if, for a prime number $p$ : $p\mid n$ implies $p^{k}\mid n$ . In the paper I am currently reading it is said that there are $O(B^{1/k})$ $k$ -full positive integers in the dyadic interval $(B/2,B]$ for any $B\geq 1$ . How to obtain this bound? Obviously, any $k^{\text{th}}$ power is a $k$ -full number, so there are at least $\approx B^{1/k}$ $k$ -full numbers in $(0,B]$ , and there are $O(B^{1/k})$ powers in $(B/2,B]$ . Note that if a number $n$ is $k$ -full, then $n = x^{k}y,\;y\mid x$ , so $B/2 < x^{k}y \leq B$ and $y\leq x/2$ , so $B < x^{k+1}$ , so $x>B^{\frac{1}{k+1}}$ and clearly $x\leq B^{\frac{1}{k}}$ . Thus, we are left with roughly $B^{1/k} - B^{1/(k+1)}$ choices for $x$ , which is still bad, because it is hard to find a bound for the choices of $y$ . A trivial bound would be something like $O(B^{1/k+1})$ , where the $+1$ comes from the bound for the choices of $y$ . Maybe on average there is a constant number of choices for $y$ , in which case we get the desired $O(B^{1/k})$ , but I do not know how to prove this.
Claim: There are at most $O(B^{1/2})$ square-full integers in $[1,B]$ . Proof (due to answers to this post ): Every square-full number $x$ can be written in the form $x=y^2z^3$ , for some integers $y$ and $z$ . For a fixed $z$ , and $y$ varying, the number of square-full numbers in $[1,B]$ of the form $y^2z^3$ is at most $(B/z^3)^{1/2}$ . Summing over all $z$ , we get $$ \begin{align} \text{# square-full integers in $[1,B]$} &\le \sum_{z^{3}\le B}\left(\frac{B}{z^3}\right)^{1/2} \\&\le B^{1/2}\sum_{z=1}^\infty \frac1{z^{3/2}} \\&=\zeta(3/2)B^{1/2}\in O( B^{1/2} ). \end{align} $$ Claim: There are at most $O(B^{1/3})$ 3-full integers in $[1,B]$ . Proof: Every $3$ -full number $x$ can be written in the form $$ x = y^3\cdot z^4\cdot w^5. $$ for some integers $y,z$ and $w$ (prove this). For a fixed $z$ and $w$ , with $y$ varying, the number of 3-full numbers in $[1,B]$ of the form $y^3z^4w^5$ is at most $(B/(z^4w^5))^{1/3}$ . Therefore, the number of 3-full integers in $[1,B]$ is at most $$ \sum_{z,w\ge 1}\left(\frac{B}{z^4w^5}\right)^{1/3} \le B^{1/3}\sum_{z=1}^\infty \frac1{z^{4/3}}\sum_{w=1}^\infty \frac1{w^{5/3}} = \zeta(4/3)\,\zeta(5/3)\,B^{1/3}\in O(B^{1/3}). $$ The same argument works for general $k$ : every $k$ -full number can be written as $y_k^{k}\,y_{k+1}^{k+1}\cdots y_{2k-1}^{2k-1}$ , and summing over the higher-power variables gives a convergent product of zeta values times $B^{1/k}$ , i.e. $O(B^{1/k})$ .
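For small $B$ the growth can be checked empirically with SymPy's factorint (purely illustrative):

```python
from sympy import factorint

def is_kfull(n, k):
    # every prime exponent in n must be at least k
    return all(e >= k for e in factorint(n).values())

B = 10_000
count = sum(is_kfull(n, 2) for n in range(2, B + 1))
print(count, count / B**0.5)   # the ratio approaches zeta(3/2)/zeta(3) ~ 2.17, slowly
```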
|combinatorics|number-theory|
1
Final value of a recursion
Problem Given $p_1, \sigma > 0$ , consider the following recursion \begin{equation*} p_{i}=(1-L_i)p_{i-1} \qquad i=2,\dots,k \end{equation*} where \begin{equation*} L_i \triangleq \frac{p_{i-1}}{p_{i-1}+\sigma} \end{equation*} I would like to compute the final value $p_k$ as $k\to \infty$ . My attempt To be honest, due to its non-linearity, I don't know how to analyze this recursion. The only thing that comes to my mind is the following: regardless of the values of $p_{i-1}$ and $\sigma$ , we have $L_i \in[0,1]$ as long as $p_{i-1}$ stays non-negative. Under this conjecture, we have the two limit cases $L_i\equiv 0$ and $L_i \equiv 1$ , and the evolution of $p_i$ is confined between them. $L_i \equiv 1$ leads to $p_i = 0$ , so in this case the limit value is zero; $L_i \equiv 0$ leads to $p_i = p_{i-1}$ , so in this case the limit value is $p_1$ . Based on this, I would say that $p_i$ is confined between $0$ and $p_1$ for all $i$ . However, my argument is not rigorous, and it does not give me the actual limit.
Hint: $$\frac1{p_i}=\frac1{p_{i-1}}+\frac1\sigma.$$
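The hint telescopes to $\frac{1}{p_k}=\frac{1}{p_1}+\frac{k-1}{\sigma}$, i.e. $p_k=\frac{\sigma p_1}{\sigma+(k-1)p_1}\to 0$ as $k\to\infty$. A quick numerical confirmation (the values of $p_1$ and $\sigma$ are arbitrary):

```python
p1, sigma = 2.0, 0.5               # arbitrary positive starting values
p = p1
k = 50
for _ in range(k - 1):             # apply the recursion k-1 times
    p = (1 - p / (p + sigma)) * p

closed = sigma * p1 / (sigma + (k - 1) * p1)   # from the telescoped hint
print(p, closed)                   # agree; both tend to 0 as k grows
```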
|limits|finite-differences|nonlinear-dynamics|discrete-time|
0
Bayes' rule with 3 variables
I have been using Sebastian Thrun's course on AI and I have encountered a slightly difficult problem with probability theory. He poses the following statement: $$ P(R \mid H,S) = \frac{P(H \mid R,S) \; P(R \mid S)}{P(H \mid S)} $$ I understand he used Bayes' rule to get the RHS, but I fail to see how he did this. If somebody could provide a breakdown of the application of the rule in this problem, that would be great.
Based on Graham Kemp's answer. $$Pr(R|(H,S)) = \frac{Pr(R,H,S)}{Pr(H,S)} \tag{1}$$ And for the numerator: $$Pr(H|(R,S)) = \frac{Pr(R,H,S)}{Pr(R,S)}\tag{2.1}$$ $$Pr(R,H,S) = Pr(H|(R,S))Pr(R,S) \tag{2.2}$$ Substituting (2.2) into (1), we get $$Pr(R|(H,S)) = \frac{Pr(H|(R,S))Pr(R,S)}{Pr(H,S)} \tag{3}$$ We do the same for $Pr(R,S)$ and $Pr(H,S)$ : $$Pr(R,S) = Pr(R|S)Pr(S) \tag{4}$$ $$Pr(H,S) = Pr(H|S)Pr(S) \tag{5}$$ Substituting (4) and (5) into (3), we get $$Pr(R|(H,S)) = \frac{Pr(H|(R,S))Pr(R|S)Pr(S)}{Pr(H|S)Pr(S)} \tag{6.1}$$ The factor $Pr(S)$ cancels out, so we get $$Pr(R|(H,S)) = \frac{Pr(H|(R,S))Pr(R|S)}{Pr(H|S)} \tag{6.2}$$
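The identity is easy to verify numerically on a random joint distribution (plain-Python sketch; $R,H,S$ are taken binary only for convenience):

```python
import itertools
import random

random.seed(1)
# a random joint distribution over (R, H, S) in {0,1}^3
p = {k: random.random() for k in itertools.product((0, 1), repeat=3)}
total = sum(p.values())
p = {k: v / total for k, v in p.items()}

def P(pred):
    # probability of the event described by pred(r, h, s)
    return sum(v for k, v in p.items() if pred(*k))

joint = P(lambda r, h, s: r and h and s)      # P(R, H, S)
lhs = joint / P(lambda r, h, s: h and s)      # P(R | H, S)
rhs = (joint / P(lambda r, h, s: r and s)) \
      * (P(lambda r, h, s: r and s) / P(lambda r, h, s: s)) \
      / (P(lambda r, h, s: h and s) / P(lambda r, h, s: s))
assert abs(lhs - rhs) < 1e-12
```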
|probability|bayes-theorem|
0
Generalized Likelihood Ratio Tests and Composite Hypotheses
I'm not quite sure that I understand how the generalized likelihood ratio test works for composite hypotheses; observe the example below: Let $X_1,...,X_n$ be a random sample from an exponential distribution, $X_i\sim EXP(\theta) \implies E(X_i)=\theta$. Derive the generalized likelihood ratio test of $H_0:\theta=\theta_0$ vs. $H_a: \theta>\theta_0$. I've been able to do a good portion of the work; we know that, in this case, $\bar{X}$ is the maximum likelihood estimator of $\theta$. But here's where I'm confused. Suppose instead we had $H_0: \theta=\theta_0$ vs. $H_a:\theta\ne\theta_0$. If this were the case, we would have that the likelihood ratio is given by: $$\lambda(\vec{X})=\frac{\bar{x}^n e^{-n\bar{x}/\theta_0+n}}{\theta_0^n}$$ And then we would reject the null hypothesis if this value were less than some constant $c$. However, considering that a composite (one-sided) alternative is given instead, I believe that the decision rule needs to change somehow; the problem is that I don't know how.
We are assessing the following hypotheses: \begin{align*} H_0: \theta \le \theta_0 && H_1: \theta > \theta_0 \end{align*} In this case, the domains of the null and alternative parameters are: \begin{align*} \Theta_0 = (-\infty, \theta_0] && \Theta_1 = (\theta_0, \infty) \end{align*} Let us first take the case where $\theta_0 > \hat{\theta}_{MLE}$ . Taking a generic likelihood function, the domain $\Theta_0$ over which we are performing the restricted maximisation then includes $\hat{\theta}_{MLE}$ . [Image: a generic likelihood function, its MLE over the whole domain, and a choice of $\theta_0$ greater than $\hat{\theta}_{MLE}$; the blue dotted line is $\theta_0$, the black dotted line is $\hat{\theta}_{MLE}$, and the domain $\Theta_0$ is shaded in green.] This example graph makes it clear that the domain $\Theta_0$ includes the MLE when $\theta_0>\hat{\theta}_{MLE}$ . Therefore, the restricted MLE equals $\hat{\theta}_{MLE}$ when $\theta_0 > \hat{\theta}_{MLE}$ , and the likelihood ratio equals $1$ . Next, we consider the case when $\theta_0 < \hat{\theta}_{MLE}$ . Now $\hat{\theta}_{MLE}\notin\Theta_0$ , and since the likelihood increases up to $\hat{\theta}_{MLE}$ , the restricted maximum over $\Theta_0$ is attained at the boundary point $\theta_0$ , so the restricted MLE is $\theta_0$ itself. The likelihood ratio is then $L(\theta_0)/L(\hat{\theta}_{MLE})<1$ , and it decreases as $\hat{\theta}_{MLE}$ moves further above $\theta_0$ ; the test therefore rejects when this ratio is small, equivalently when $\hat{\theta}_{MLE}=\bar{X}$ is large.
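For the exponential example in the question, the resulting statistic can be written down directly (a sketch; the formula for $\lambda$ is the one derived in the question, with the restricted maximum handled as described above):

```python
import math

def glrt_stat(xbar, n, theta0):
    """Generalized LR for H0: theta = theta0 vs Ha: theta > theta0,
    where x-bar is the MLE of the exponential mean."""
    if xbar <= theta0:
        return 1.0  # the maximum over [theta0, inf) then sits at theta0
    return (xbar / theta0) ** n * math.exp(n - n * xbar / theta0)

# lambda shrinks as x-bar moves above theta0, so we reject for small
# lambda, equivalently for large x-bar:
print(glrt_stat(1.0, 10, 1.0), glrt_stat(1.5, 10, 1.0), glrt_stat(2.0, 10, 1.0))
```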
|probability|probability-theory|statistics|statistical-inference|hypothesis-testing|
0
X,Y,Z are independent identically distributed continuous then $X > Y \cap X > Z$ and $Y > Z$ are independent
I'm trying to prove that If $X,Y,Z$ are independent identically distributed continuous random variables then the events $X > Y \cap X > Z$ and $Y > Z$ are independent. I start with proving $P(X > Y \cap X > Z \cap Y > Z) = P(X > Y \cap X > Z) P(Y > Z)$ , that means $$\int\limits_{\mathbb{R}^3} 1_{x > y} 1_{y > z} \, d(P_X(x) \otimes P_{Y}(y) \otimes P_{Z}(z)) = \int\limits_{\mathbb{R}^3} 1_{x > y} 1_{x > z} \, d(P_X \otimes P_{Y} \otimes P_{Z}) \int\limits_{\mathbb{R}^3} 1_{y > z} \, d(P_X \otimes P_{Y} \otimes P_{Z})$$ then try to use the Fubini-Tonelli theorem to prove that without success. Edit: I've forgotten to add the "identically distributed" condition for $X,Y,Z$ , sorry.
Since $P(Y > Z) = P(Y < Z)$ (by symmetry, as $Y$ and $Z$ are i.i.d.) and $P(Y > Z) + P(Y < Z) = 1$ (the variables are continuous, so ties have probability $0$), then $$P(Y > Z) = \frac{1}{2}$$ Similarly $P(X > Y > Z) = P(X > Z > Y)$ , and these two events partition $\{X > Y\} \cap \{X > Z\}$ up to a null set, so $$P(X > Y > Z) = \frac{1}{2} P(X > Y \cap X > Z)$$ So $$P(Y > Z) \times P(X > Y \cap X > Z) = P(X > Y > Z) = P\left(Y > Z \cap \left(\left(X > Y\right) \cap X > Z\right)\right)$$ that means $X > Y \cap X > Z$ and $Y > Z$ are independent.
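A Monte Carlo spot-check with uniform variables (any common continuous distribution would do):

```python
import random

random.seed(0)
N = 200_000
nA = nB = nAB = 0
for _ in range(N):
    x, y, z = random.random(), random.random(), random.random()
    A = x > y and x > z
    B = y > z
    nA += A
    nB += B
    nAB += A and B
# P(A) ~ 1/3, P(B) ~ 1/2, P(A and B) ~ 1/6 = P(A)P(B)
assert abs(nAB / N - (nA / N) * (nB / N)) < 0.01
```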
|probability|measure-theory|random-variables|
0
is $\sin^2 (x)$ the same as $\sin(\sin(x))$?
From my understanding, if something like $f^2(x)$ is given, then it is equal to $f(f(x))$ . Now if that is the case, then shouldn't $\sin^2 (x)$ be equal to $\sin(\sin x)$ ? However, I have seen people use it as $(\sin x)^2$ . Could someone please clarify my confusion?
This is a very annoying notational convention, but: If $n$ is a positive integer, then $\sin^n(x)$ means $\big(\sin(x)\big)^n$ . For instance, $\sin^2(x)$ means $\big(\sin(x)\big)^2$ . If $n$ is $-1$ , then $\sin^{-1}(x)$ means the inverse function $\arcsin(x)$ , instead of the reciprocal. I'm not fully certain what happens for other values. I don't know if there's a consensus on what $“\sin^{-2}(x)”$ means. The same stuff is true for the other trigonometric functions. For what it's worth, for an arbitrary function like $f$ , I like to write $f^{\circ n}(x)$ for $f(\dotsb(f(x)\dotsb)$ , as described here , to avoid ambiguity like this. (In general, in mathematical writing, one should always explain such notations to the reader.)
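The two readings really do differ numerically; in most programming languages only the power reading has a built-in spelling (Python used here purely as an illustration):

```python
import math

x = 0.7                            # any test point
power = math.sin(x) ** 2           # the standard meaning of sin^2(x)
iterated = math.sin(math.sin(x))   # the composition reading sin(sin(x))
print(power, iterated)             # two different numbers
```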
|notation|exponentiation|
0
What smooth convex functions $f \colon (0, \infty) \to [0, \infty)$ satisfy $f(1) = 0$ and $f(x) = x f\left(\frac{1}{x}\right)$?
I want to find all smooth (that is, infinitely differentiable) convex functions $f \colon (0, \infty) \to [0, \infty)$ with $f(1) = 0$ satisfying the functional equation $$ f(x) = x f\left(\frac{1}{x}\right), \qquad x > 0, $$ that is, $f$ should be equal to its own perspective function. So far, I have only found one example: $f(x) = (x - 1) \ln(x)$ (a non-smooth but convex example is $f(x) = | x - 1 |$ ). If one drops the smoothness assumption, one can just define a convex function $\phi$ on $(0, 1)$ (e.g. $x \mapsto x^{-\alpha} - 1$ for $\alpha > 0$ ) and define $$f(x) = \begin{cases} \phi(x), & \text{if } 0 < x \le 1, \\ x\,\phi\!\left(\frac{1}{x}\right), & \text{if } x > 1. \end{cases}$$ I noticed that the functional equation is invariant under the scaling $f \mapsto a f$ for $a > 0$ (and the inversion $x \mapsto \frac{1}{x}$ ), but not under shifts $f \mapsto f + a$ or translations and dilations ( $f \mapsto f(\cdot + a)$ , $f \mapsto f(a \cdot)$ ). Motivation. The entropy function $f$ generates the $f$ -divergence between measures $$ D_f(\mu \,\|\, \nu) = \int f\!\left(\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\right)\mathrm{d}\nu, $$ and the condition $f(x)=xf(1/x)$ is exactly what makes the divergence symmetric, $D_f(\mu \,\|\, \nu) = D_f(\nu \,\|\, \mu)$.
Theorem: A function $f$ on $(0, \infty)$ satisfies $f(x) = xf(1/x)$ if and only if there is some $h$ with $h(x) = h(1/x)$ so that $f(x) = \left(\frac{x}{1+x}\right)h(x)$ . Proof: If $f$ has this form, you can directly compute that $f(x) = xf(1/x)$ . For the converse, let $h(x) = \left(\frac{1+x}{x}\right) f(x)$ and show that $h$ satisfies $h(x) = h(1/x)$ . To find such $h$ , you may take any even function $g_e(x)$ and set $h(x) = g_e(\log x)$ . You can find even functions by choosing any polynomial or power series with even powers, or by taking $g_e(x) = \frac{g(x) + g(-x)}{2}$ to be the even part of any function $g$ on $\mathbb{R}$ . We also have $f(1) = 0$ if and only if $g_e(0) = 0$ . We could also substitute $x/(1+x)$ with any $r(x)$ so that $r(x) \neq 0$ for all $x > 0$ and $r(x) = x r(1/x)$ ; for example, $r(x) = \sqrt{x}$ as mentioned in the OP. Regarding convexity - I'll leave this for you to work out, but presumably you can come up with an appropriate condition on $h$ for $f$ to be convex.
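Both the theorem and the $g_e(\log x)$ recipe are easy to check symbolically (SymPy, as a quick illustration):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# the known example satisfies the functional equation
f = (x - 1) * sp.log(x)
assert sp.simplify(f - x * f.subs(x, 1 / x)) == 0

# build a new solution from an even function of log x: h(x) = (log x)^2
h = sp.log(x) ** 2                  # h(x) == h(1/x), and h(1) == 0
f2 = x / (1 + x) * h
assert sp.simplify(f2 - x * f2.subs(x, 1 / x)) == 0
```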
|real-analysis|convex-analysis|functional-equations|information-theory|smooth-functions|
0
Compute the smallest possible integer $x$ such that $\lfloor \sqrt[8]{x}\rfloor < \lfloor \sqrt[7]{x}\rfloor < \cdots < \lfloor \sqrt{x}\rfloor < x$
Compute the smallest positive integer $x$ such that $\lfloor \sqrt[8]{x}\rfloor < \lfloor \sqrt[7]{x}\rfloor < \lfloor \sqrt[6]{x}\rfloor < \lfloor \sqrt[5]{x}\rfloor < \lfloor \sqrt[4]{x}\rfloor < \lfloor \sqrt[3]{x}\rfloor < \lfloor \sqrt{x}\rfloor < x$. I do not know how to solve this problem. I tried to brute force it, but it did not work.
So we need $a_n = \lfloor x^{1/(8 - n)} \rfloor$ (for $n \in \{0, \dots, 7\}$ ) to be a strictly increasing sequence of non-negative integers. The smallest possible such sequence is $a = [0, 1, 2, 3, 4, 5, 6, 7]$ , that is $$a_n = \lfloor x^{1/(8 - n)} \rfloor \ge n.$$ It at least has to hold that $$\begin{array} {rcl} \lfloor \sqrt[8]{x} \rfloor \ge 0 & \implies & x \ge 0 \\ \lfloor \sqrt[7]{x} \rfloor \ge 1 & \implies & x \ge 1 \\ \lfloor \sqrt[6]{x} \rfloor \ge 2 & \implies & x \ge 2^6 = 64 \\ \lfloor \sqrt[5]{x} \rfloor \ge 3 & \implies & x \ge 3^5 = 243 \\ \lfloor \sqrt[4]{x} \rfloor \ge 4 & \implies & x \ge 4^4 = 256 \\ \lfloor \sqrt[3]{x} \rfloor \ge 5 & \implies & x \ge 5^3 = 125 \\ \lfloor \sqrt[2]{x} \rfloor \ge 6 & \implies & x \ge 6^2 = 36 \\ \lfloor \sqrt[1]{x} \rfloor \ge 7 & \implies & x \ge 7 \\ \end{array}$$ So $x \ge 4^4 = 256$ . But $\lfloor 256^{1/8} \rfloor = \lfloor 256^{1/7} \rfloor = 2$ , and since $x \ge 256$ forces $\lfloor x^{1/8} \rfloor \ge 2$ , we actually need $\lfloor x^{1/7} \rfloor \ge 3$ , i.e. $x \ge 3^7 = 2187$ . Then $\lfloor 2187^{1/6} \rfloor = 3$ , so we need $\lfloor x^{1/6} \rfloor \ge 4$ , i.e. $x \ge 4^6 = 4096$ . And $x = 4096 = 2^{12}$ works: $$\lfloor 4096^{1/(8-n)} \rfloor = 2, 3, 4, 5, 8, 16, 64, 4096 \quad (n = 0, \dots, 7),$$ which is strictly increasing. So the smallest such integer is $4096$ .
|radicals|
0
I want to know the behavior of the solution without solving the ordinary differential equation for enzyme-substrate reactions.
I want to know the behavior of the solution without solving the ordinary differential equations for the following enzyme-substrate reactions: $$\mathrm{E} + \mathrm{S} \underset{k_2}{\overset{k_1}{\rightleftharpoons}} \mathrm{ES} \overset{k_3}{\longrightarrow} \mathrm{E} + \mathrm{P}$$ This chemical reaction represents the process by which substrate S is transformed into product P by the action of enzyme E. In detail, the reaction takes place as follows: first, the enzyme E and the substrate S form the enzyme-substrate complex ES; next, the complex ES splits into the enzyme E and the product P. The mass-action rate equations are $$\begin{aligned} &\text{①}\quad \frac{d[S]}{dt} = -k_1[E][S] + k_2[ES] \\ &\text{②}\quad \frac{d[ES]}{dt} = k_1[E][S] - (k_2+k_3)[ES] \\ &\text{③}\quad \frac{d[E]}{dt} = -k_1[E][S] + (k_2+k_3)[ES] \\ &\text{④}\quad \frac{d[P]}{dt} = k_3[ES] \end{aligned}$$ We also assume that the initial concentration of E is $>0$. Here $k_1,k_2,k_3>0$, and $[S]$ is the concentration of S; the other bracketed quantities similarly denote the concentrations of the substances inside the brackets. My question: from the first equation, we expect to be able to say that the concentration of the substrate S becomes zero after a sufficient time interval, but can we prove this without solving the differential equations?
Although it arises from a chemical reaction, this is a problem of qualitative analysis of solutions to differential equations, and is more of a mathematical problem than a chemistry problem. At one point, I obtained the following answer on my own. I think it is probably correct, but if it is wrong, I would appreciate comments. From equations ② and ③, the material balance for the enzyme E can be obtained: adding ② and ③, $d([E]+[ES])/dt = 0$ regardless of time, and therefore $[E]+[ES] = \text{constant} = [E]_0 > 0$ is guaranteed by this system of differential equations alone, without making any a priori assumptions. Taking into account the common knowledge of chemistry that at $t=\infty$ all velocities become zero (in other words, chemical equilibrium is reached), we get: in equation ①, $d[S]/dt = 0$ at $t=\infty$, so $-k_1[S][E] + k_2[ES] = 0$ at $t=\infty$; in equation ④, $d[P]/dt = 0$ at $t=\infty$, so $[ES]=0$ at $t=\infty$. Therefore, from equations ① and ④, we get that $[S][E]=0$ at $t=\infty$. Furthermore, from the conservation law $[E]+[ES]=[E]_0>0$ and $[ES]=0$ at $t=\infty$, we get $[E]=[E]_0>0$ at $t=\infty$, and hence $[S]=0$ at $t=\infty$.
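A direct simulation supports this conclusion (explicit Euler in plain Python; the rate constants and initial concentrations are arbitrary illustrative values):

```python
k1, k2, k3 = 1.0, 0.5, 0.3          # arbitrary positive rate constants
S, E, ES, P = 1.0, 0.2, 0.0, 0.0    # arbitrary initial concentrations, E > 0
dt, steps = 0.01, 200_000           # integrate up to t = 2000

for _ in range(steps):
    v1 = k1 * E * S   # binding     E + S -> ES
    v2 = k2 * ES      # unbinding   ES -> E + S
    v3 = k3 * ES      # catalysis   ES -> E + P
    S += dt * (-v1 + v2)
    E += dt * (-v1 + v2 + v3)
    ES += dt * (v1 - v2 - v3)
    P += dt * v3

print(S, ES, E, P)   # S and ES decay to ~0, E returns to 0.2, P -> 1.0
```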
|ordinary-differential-equations|chemistry|
0
Existence of $1/(e-1)$ value of a continous and differentiable function $f:[0,1]\to \mathbb R$ given $f(1)=1, f(0)=0$
Let $f:[0,1]\to \mathbb R$ be a continuous function on $[0,1]$ and differentiable on $(0,1)$ , $f(0)=0, f(1)=1$ . Prove that there exists $c\in (0,1)$ so that $f(c)+\frac{1}{e-1}=f'(c)$ where $e$ is the Euler number. I tried to substitute $f(x)=e^{x}g(x)$ so that $f'(x)-f(x)=e^{x}g'(x)$ and define $e^{x}g'(x)=w(x)$ . Then I tried to use the mean value theorem (to ensure the existence of $r \in (0,1)$ s.t. $f'(r)-f(r)=e^{r-1}$ ), however then I have no idea how to continue. My guess is we use the intermediate value theorem on $w(x)$ , however I don't have any idea on how to do it. Is there any construction to solve this?
Let $$F(x)=e^{-x}\left(f(x)+\frac{1}{e-1}\right),\quad x\in[0,1],$$ then $F$ is continuous on $[0,1]$ and differentiable on $(0,1)$ , $$F(0)=F(1)=\frac{1}{e-1},\qquad F'(x)=e^{-x}\left(f'(x)-f(x)-\frac{1}{e-1}\right).$$ Since $F(0)=F(1)$ , Rolle's theorem (a special case of the MVT) yields $c\in(0,1)$ with $F'(c)=0$ , i.e. $f'(c)=f(c)+\frac{1}{e-1}$ .
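The formula for $F'$ can be confirmed symbolically (SymPy, purely as a check):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

F = sp.exp(-x) * (f(x) + 1 / (sp.E - 1))
expected = sp.exp(-x) * (f(x).diff(x) - f(x) - 1 / (sp.E - 1))
assert sp.simplify(F.diff(x) - expected) == 0
```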
|real-analysis|calculus|analysis|functions|mean-value-theorem|
1
If $f$ is measurable and $f=g$ almost everywhere is $g$ measurable
We were told in lecture that if two functions $f,g$ are equal almost everywhere (meaning that, for a measure space $(X,\mathcal A, \mu)$, the set $\{x \in X \mid g(x) \neq f(x)\}$ is a measurable null set), then if $f$ is measurable, so is $g$. I understand that this holds for complete measures, but I am stuck showing it for arbitrary measures. Any help would be appreciated.
It's not true in general. Suppose $(X, \mathcal A, \mu)$ isn't complete, so that there is some $B \subset C \subset X$ with $C \in \mathcal A$, $\mu (C) = 0$ and $B \notin \mathcal A$ . Then the indicator function $1_B$ is non-measurable, while $1_C$ is measurable. Now consider the function $g = 1_B - 1_C$ . This is non-measurable because $1_B$ is non-measurable. But $$\{x: 1_C(x) \neq g(x)\} = C,$$ which is a null set: so $g$ equals the measurable function $1_C$ almost everywhere, yet $g$ is not measurable.
|measure-theory|lebesgue-measure|
1
Product of distances from a point on the circle to equidistant points on the circle
Question (IIT 2023) : Let $A_i\ (1 \le i \le 8)$ be the vertices of a regular octagon that lie on a circle of radius $2$ . Let $P$ be a point on the circle and let $|PA_i|$ denote the distance between the point $P$ and $A_i$ for $1\le i\le 8$ . If $P$ varies over the circle, then the maximum value of the product $\prod_{i=1}^{8}|PA_i|$ is: My Solution : As the points are vertices of a regular octagon, they are equally spaced at $\dfrac{\pi}{4}$ from each other, and taking the point $P$ to be at an angle of $\theta$ with the $x$ -axis we can rewrite the product as : $$\prod_{i=1}^{8} 4\left|\sin\left(\frac{i\pi}{8}+\frac{\theta}{2}\right)\right|$$ which can be further simplified into $2^9|\sin(4\theta)|$ , which gives us our answer $2^9$ at $\theta = \dfrac{\pi}{8}$ . Now there's another solution floating around online which relies on complex numbers and is quite a bit shorter, and it goes as follows. What I am not getting here is how maximizing the product over the roots of $z^8-2^8=0$ produces the same result as my trigonometric approach.
Let $\displaystyle P=2e^{i\alpha}$ , $\alpha\in[0,2\pi)$ , and let $A_{1}(z_{1})\,,\ A_{2}(z_{2})\, ,\cdots ,A_{8}(z_{8})$ be the vertices of a regular octagon, i.e. the roots of $z^8=2^8$ . Then $\displaystyle z^{8}-2^{8}=(z-z_1)(z-z_2)\cdots (z-z_8)$ $\displaystyle |(z-z_1)(z-z_2)\cdots (z-z_8)\bigg|=|z^8-2^8|$ so, at $z=P$ , $\displaystyle \prod^{8}_{i=1}\bigg|z-z_i\bigg|=\bigg|2^8 e^{8i\alpha}-2^8\bigg|=2^8\bigg|e^{8i\alpha}-1\bigg|=2^8\bigg|e^{4i\alpha}\bigg(e^{4i\alpha}-e^{-4i\alpha}\bigg)\bigg|$ $\displaystyle \prod^{8}_{i=1}\bigg|PA_{i}\bigg| =2^{8}\cdot 1\cdot 2\cdot \bigg|\sin(4\alpha)\bigg|\leq 2^9,$ with equality when $\sin(4\alpha)=\pm 1$ , e.g. at $\alpha=\dfrac{\pi}{8}$ .
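A brute-force check over a fine grid of positions on the circle confirms the bound (Python's cmath; the vertices are placed at the 8th roots of $2^8$):

```python
import cmath
import math

R = 2.0
verts = [R * cmath.exp(2j * math.pi * i / 8) for i in range(8)]

def prod_dist(theta):
    # product of distances from P = 2 e^{i theta} to all eight vertices
    P = R * cmath.exp(1j * theta)
    p = 1.0
    for v in verts:
        p *= abs(P - v)
    return p

best = max(prod_dist(k * math.pi / 1000) for k in range(2000))
print(best)   # 512 = 2^9, attained e.g. at theta = pi/8
```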
|algebra-precalculus|trigonometry|complex-numbers|trigonometric-series|
0
Prove that $x/y$ < $(x+3)/(y-3)$ obeys the condition $-x<y$ only for y<0
I have this original inequality $x/y < (x+3)/(y-3)$ . For this inequality to be true, by cross multiplying the denominators, I arrive at the simpler condition $-x < y$ . But on trial and error, I put negative values of $x$ , say $x=-5$ , and I get the result as $-5/y < -2/(y-3)$ , which reduces to $y>15/7$ . So for $y=4$ and $x=-5$ , the original inequality holds but the arrived condition $-x < y$ doesn't hold. So, how can I theoretically prove the original inequality implies $-x < y$ not for all combinations of $x$ and $y$ but only when $y < 0$ ?
Let's look at the sign table: \begin{array}{|c|ccccc|} \require{enclose} \hline &&0&&3\\ \hline y&-&|&-&\enclose{verticalstrike}{0}&+ \\ \hline y-3&-&\enclose{verticalstrike}{0}&+&|&+ \\ \hline y(y-3)&+&\enclose{verticalstrike}{0}&-&\enclose{verticalstrike}{0}&+ \\ \hline \end{array} We have to consider 3 distinct cases. (i) $y = 0$ or $y = 3$ : then one term of the inequality is undefined. (ii) $y < 0$ or $y > 3$ : then $y(y-3)>0$ and $$\frac x y < \frac{x+3}{y-3}\iff x(y-3) < (x+3)y\iff -x < y$$ (iii) $0 < y < 3$ : then $y(y-3)<0$ and the cross-multiplication flips the inequality, $$\frac x y < \frac{x+3}{y-3}\iff x(y-3) > (x+3)y\iff -x>y$$
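The case analysis can be checked numerically on sample points. This is a sketch; the sample values are arbitrary, avoiding $y\in\{0,3\}$ where the inequality is undefined.

```python
def equiv_check(x, y):
    # Compare x/y < (x+3)/(y-3) against the sign-table prediction:
    # same direction as -x < y when y(y-3) > 0, flipped when y(y-3) < 0.
    lhs = x / y < (x + 3) / (y - 3)
    if y * (y - 3) > 0:          # y < 0 or y > 3
        return lhs == (-x < y)
    else:                        # 0 < y < 3
        return lhs == (-x > y)

samples = [(x, y) for x in (-7, -2, 0, 1, 5)
                  for y in (-4, -1, 0.5, 1, 2.5, 4, 9)]
all_ok = all(equiv_check(x, y) for x, y in samples)
```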
|linear-algebra|
1
Compute $\int_{-1}^{1}x\sqrt{x}\tan^{-1}(\sqrt{x})dx$
I was trying to make an IBP problem for an integration bee. I came up with the simple arctan integral with the square roots just for an easy round. However, WolframAlpha suggested a twist, which was adding the bounds $-1$ to $1$ . I was confused because I thought the result was undefined, divergent, or complex, but it turns out the answer is real and defined. My attempt was performing integration by parts, from which I got the indefinite answer: $$ \left [ \frac{2}{5}x^{\frac{5}{2}}\tan^{-1}(\sqrt{x})-\frac{x^2}{10}+\frac{1}{5}x-\frac{1}{5}\ln|x+1| \right ]_{-1}^{1}$$ The hardest part was substituting the bounds, as I ran into some weird values: $$\frac{\pi}{10}+\frac{1}{10}-\frac{1}{5}\ln(2)-\left(\frac{2}{5}i\tan^{-1}(i)-\frac{3}{2}-\ln(0)\right)$$ Yet the answer is $\frac{2}{5}+\frac{\pi}{10}+\frac{\ln(2)}{5}$ . Did I do anything wrong? How do I come to this exact answer?
I do not claim to know how WolframAlpha works behind the scenes. This is merely an attempt to walk through its reasoning for the integral over the negative part of $[-1,1]$ . WA chooses branches of the square root and logarithm such that $$\sqrt x = \begin{cases} \sqrt x & x\ge0 \\ i \sqrt{-x} & x<0 \end{cases}$$ so that when $x\in(-1,0)$ , it works on the expression $$\begin{align*} x\sqrt x \arctan \sqrt x &= i x \sqrt{-x} \left[\frac1{2i} \log \left\lvert\frac{1-\sqrt{-x}}{1+\sqrt{-x}}\right\rvert + \frac12 \arg\left(\frac{1-\sqrt{-x}}{1+\sqrt{-x}}\right)\right] \\ &= \frac12 x \sqrt{-x} \log \frac{1-\sqrt{-x}}{1+\sqrt{-x}} \end{align*}$$ Substitute $x\to-x$ to get a more tractable integrand, $$\int_{-1}^0 x \sqrt x \arctan \sqrt x \, dx = -\frac12 \int_0^1 x \sqrt x \log \frac{1-\sqrt x}{1+\sqrt x} = \int_0^1 x \sqrt x \operatorname{artanh} \sqrt x \, dx$$ WA confirms these have the same value. LHS / RHS How WA actually evaluates either definite integral is hard to pinpoint. Working on the RHS, on
|integration|definite-integrals|
0
$f$ be holomorphic on $\mathbb{D}$, continuous on $\overline{\mathbb{D}}$ s.t $|f(z)|=1$ whenever $|z|=1$.
Let $f$ be holomorphic on $\mathbb{D}$ , continuous on $\overline{\mathbb{D}}$ s.t $|f(z)|=1$ whenever $|z|=1$ . Prove that if $f(z)\neq 0$ for all $z\in\mathbb{D}$ then $f$ is constant on $\mathbb{D}$ . I already proved the following result: Let $f$ be non-constant, holomorphic on $\mathbb{D}$ , continuous on $\overline{\mathbb{D}}$ s.t $|f(z)|=1$ whenever $|z|=1$ . Then there exists $z_0\in\mathbb{D}$ s.t $f(z_0)=0.$ I don't know whether this could help me solve the actual problem? Could someone give me any idea?
Let $$F(z)=\frac{1}{f(z)},\quad z\in\overline{\mathbb D},$$ then $F$ is holomorphic on $\mathbb{D}$ , continuous on $\overline{\mathbb{D}}$ , and $|F(z)|=1$ whenever $|z|=1$ . Since $|F|$ is continuous on the compact set $\overline{\mathbb D}$ , it attains its minimum at some point $z_0\in\overline{\mathbb D}$ . If $|z_0|=1$ , then $|F(z)|\ge 1$ on $\overline{\mathbb D}$ ; since the maximum modulus principle also gives $|F(z)|\le 1$ , we get $|F(z)|\equiv1$ on $\overline{\mathbb D}$ and $F\equiv C$ ; If $|z_0| < 1$ , then $$|f(z_0)|=\max_{z\in\overline{\mathbb D}}|f(z)|,$$ and the maximum modulus principle implies $f$ is constant!
|complex-analysis|cauchy-integral-formula|
0
Laplace transform of a function defined by integral
I have a function of the form $f(u) = ke^{-au}\int_{0}^{\infty}x^{z-1}(1+x)^{-z-1}e^{-bxu}dx$ where $a,b,u,z>0$ are all real positive numbers and $k$ is a positive normalization factor. I need to calculate the Laplace transform of $f(u)$ . We have $$F(s) = \int_{0}^{\infty} f(u)e^{-su}du = k\int_{0}^{\infty} e^{-au} \left(\int_{0}^{\infty}x^{z-1}(1+x)^{-z-1}e^{-bxu}dx\right) e^{-su} du$$ Since Fubini's theorem holds (does it really hold?) we can change the order of integration $$F(s) = k\int_{0}^{\infty} x^{z-1}(1+x)^{-z-1}\left(\int_{0}^{\infty} e^{-(a+s+bx)u}du\right) dx= k\int_{0}^{\infty} \frac{x^{z-1}(1+x)^{-z-1}}{a+s+bx} dx $$ I can't go any further! What should I do next? Also I'm contemplating the change of integration order: is it valid and correct? What about the region of convergence for the Laplace transform, how do I find it? Thanks in advance! ==============================Corrections============================= The values that $z$ can assume are positive real numbers $z>0$
I don't see Kummer functions. From the Svatovslav presentation we get, at least formally, $$F(s)=\frac{k}{a+s}\, B(z,2)\ _2F_1\!\left(1,z;z+2;\ 1-\frac{b}{a+s}\right).$$ Discussion when $z$ is not a positive number seems complicated.
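A closed form of this shape can be sanity-checked against direct quadrature. This is a sketch in pure Python; the truncated Gauss series and the midpoint rule are my own helpers, and the Beta/hypergeometric parameters used, $B(z,2)$ and $_2F_1(1,z;z+2;\cdot)$, are the ones the numerics below support.

```python
import math

def hyp2f1(a, b, c, w, terms=400):
    # Truncated Gauss hypergeometric series; adequate for |w| < 1 here.
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * w
    return s

def F_closed(s, a, b, z, k=1.0):
    # Candidate closed form: k/(a+s) * B(z,2) * 2F1(1, z; z+2; 1 - b/(a+s)).
    c = a + s
    beta = math.gamma(z) * math.gamma(2) / math.gamma(z + 2)
    return k * beta * hyp2f1(1.0, z, z + 2.0, 1.0 - b / c) / c

def F_numeric(s, a, b, z, k=1.0, n=20000):
    # Midpoint quadrature of k * Int_0^inf x^{z-1}(1+x)^{-z-1}/(a+s+bx) dx
    # after the substitution x = t/(1-t), dx = dt/(1-t)^2.
    c, total, h = a + s, 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * h
        x = t / (1.0 - t)
        total += x ** (z - 1) * (1.0 + x) ** (-z - 1) / (c + b * x) / (1.0 - t) ** 2
    return k * total * h
```

For $z=1$, $a=s=b=1$ the integral evaluates in closed form to $1-\ln 2$, which both routines reproduce.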
|real-analysis|integration|laplace-transform|
0
about logical equivalence
so the question is like this: Define a new logical connective □ as p □ q ≡ p → ∼q. Use the Laws of Logical Equivalence, the equivalence of → to a disjunction, and the definition of □ to show that (p □ q) □ p ≡ p → q. My question is this: doesn't □ stand for "→ ∼", meaning whatever follows the □ (say x) would turn into → ∼x? So when applying the definition of □ to (p □ q) □ p, we just need to replace each □ with → ∼, and it turns into (p → ∼q) → ∼p. I am very confused because after applying the equivalence of → to a disjunction and De Morgan's law it eventually becomes (p ∧ q) ∨ ∼p, which is obviously not logically equivalent to p → q. I have my steps as follows; can anyone help me find out from which step I went wrong? Thank you so much.
I think you forgot some parentheses in your handwritten work. It is certainly the case that $(p \wedge q) \lor \sim p$ is equivalent to $p\to q$ . We have, $$(p \wedge q) \lor \sim p \equiv (p \lor \sim p) \wedge (q \lor \sim p) \equiv \sim p \lor q \equiv p\to q $$
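The claimed equivalences can be verified exhaustively over all truth assignments, a minimal sketch:

```python
from itertools import product

def implies(a, b):
    # Material implication a -> b.
    return (not a) or b

def box(a, b):
    # The connective a box b, defined as a -> ~b.
    return implies(a, not b)

# (p box q) box p versus p -> q, and versus the intermediate
# form (p AND q) OR ~p from the hand computation.
agree = all(box(box(p, q), p) == implies(p, q)
            for p, q in product((False, True), repeat=2))
matches_intermediate = all(box(box(p, q), p) == ((p and q) or (not p))
                           for p, q in product((False, True), repeat=2))
```

Both checks pass, confirming that $(p \wedge q) \lor \sim p$ really is equivalent to $p\to q$.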
|discrete-mathematics|
0
Likelihood computation for hidden markov models.
If we have a $2$ -state model (i.e. the simplest non-trivial example) in a hidden Markov model, and some generated observation-data $\mathcal{O}$ from the algorithm for generating observations. Is it then always the case that the argument $\theta_i$ of a given family $\Theta = \{\theta_i\}_{i \in I}$ of parameters (where $I$ is some index-set), which maximizes the loglikelihood, is the argument $\theta_{\ell}$ which was used to generate $\mathcal{O}$ , given that $\theta_{\ell}$ is in $\Theta$ . I.e., is it always the case that $\text{argmax}_{ \theta_i\ \in \ \Theta}\Big(\text{Log}(\mathcal{O}\;|\;\theta_i)\Big) = \theta_{\ell}?$ This seems to not always be the case, I've tried this with hmmtrain in MATLAB, using the Baum-Welch algorithm, with Maxiterations = 1 and investigating the likelihood-output for different initial values given to the algorithm, and found examples where this does not hold. Edit: I believe one partial answer is that this depends on the observation-sequence $\mathcal{O}$ .
No, it is not true. Consider an analogous case. Suppose you have a coin, with unknown heads probability $p$ . You flip the coin 10 times, and count how many times it came up heads, say $h$ . You compute the maximum likelihood estimate $\hat{p}$ for the heads probability -- i.e., your best guess at $p$ , given the results of those 10 coin flips. Is it always the case that $\hat{p}=p$ ? No, certainly not. We have $\hat{p}=h/10$ . There is no guarantee that $p$ has the form $m/10$ for some integer $m$ , and even if it does, randomness and variability means that often the number of heads will be a bit larger than $10p$ or a bit smaller than $10p$ , so $\hat{p}$ will often be a bit larger than or a bit smaller than $p$ . If we're lucky, it's sometimes true that in the limit, as the number of observations goes to infinity, the maximum-likelihood estimate converges to the true parameter. This is not always the case, but sometimes it is. But that's the most we can hope. Given a finite number of
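The coin analogy is easy to simulate. This is a sketch; the true value $p=1/3$ and the seed are arbitrary choices, picked so that $p$ is not a multiple of $1/10$ and hence $\hat p$ can never hit it exactly.

```python
import random

def mle_heads_prob(p_true, flips=10, seed=1):
    # Maximum-likelihood estimate h/n for the heads probability
    # of a Bernoulli(p_true) coin after `flips` tosses.
    rng = random.Random(seed)
    heads = sum(rng.random() < p_true for _ in range(flips))
    return heads / flips

p_hat = mle_heads_prob(1 / 3)
# p_hat is a multiple of 1/10, so it cannot equal 1/3 exactly.
```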
|log-likelihood|hidden-markov-models|
0
Existence of $1/(e-1)$ value of a continuous and differentiable function $f:[0,1]\to \mathbb R$ given $f(1)=1, f(0)=0$
Let $f:[0,1]\to \mathbb R$ be a continuous function on $[0,1]$ and differentiable on $(0,1)$ , $f(0)=0, f(1)=1$ . Prove that there exists $c\in (0,1)$ so that $f(c)+\frac{1}{e-1}=f'(c)$ where $e$ is the Euler number. I tried the substitution $f(x)=e^{x}g(x)$ so that $f'(x)-f(x)=e^{x}g'(x)$ and defined $e^{x}g'(x)=w(x)$ . Then I tried to use the mean value theorem (to ensure the existence of $r \in (0,1)$ s.t. $f'(r)-f(r)=e^{r-1}$ ), however then I have no idea how to continue. My guess is we use the intermediate value theorem on $w(x)$ , however I don't have any idea on how to do it. Is there any construction to solve this?
While the problem was solved above, here is a method that uses a stronger result (Darboux's theorem) but shows how to arrive systematically at the function used in the MVT. Suppose that there doesn't exist $c \in (0,1)$ such that $f'(c) = f(c) + \frac 1{e-1}$ . Then, note that $f'(x) - f(x)$ is a function that satisfies the intermediate value property on $(0,1)$ [where we use Darboux's theorem], therefore $f'(x) - f(x)$ is either entirely above or entirely below $\frac 1{e-1}$ on $(0,1)$ . Without loss of generality assume that $f'(x) - f(x) < \frac{1}{e-1}$ (the opposite case is addressed by changing $f$ to $-f$ ) for all $x \in (0,1)$ . We execute the following series of steps, aiming to create a functional inequality: for any $0<a<b<1$ , \begin{align} &f'(x) - f(x) < \frac{1}{e-1} \\ \implies\ & e^{-x}f'(x) - e^{-x}f(x) < \frac{e^{-x}}{e-1} \\ \implies\ & e^{-b}f(b) - e^{-a}f(a) < \frac{e^{-a}-e^{-b}}{e-1} \end{align} where we obtained the last inequality by integrating the previous one from $a$ to $b$ . By rearrangement, we conclude that $$ e^{-x}f(x) + \frac{e^{-x}}{e-1} \text{ is strictly decreasing on } (0,1). $$ Now, because $f(0)=0$ and $f(1)=1$ , the endpoint values of this function are $\frac{1}{e-1}$ at $x=0$ and $e^{-1}\left(1+\frac{1}{e-1}\right)=\frac{1}{e-1}$ at $x=1$ : they are equal, which contradicts strict decrease together with continuity on $[0,1]$ .
|real-analysis|calculus|analysis|functions|mean-value-theorem|
0
Minimize $\frac{1}{2}\sum_{k=1}^m (x_{k+1}-x_{k})^2$
Given sequence: $$ \begin{cases} x_{n+1}(2\cos(\frac{\pi}{m})-x_n)=1,\forall n\geq 1\\ x_1=x\in\mathbb R,m\in\mathbb N,m\geq 2 \end{cases} $$ Minimize $$A=\frac{1}{2}\sum_{k=1}^m (x_{k+1}-x_{k})^2$$ Edit I've proved that $\frac{dA}{dx}|_{x=-1}=0$ (see my answer). The problem now is to show that $A_{\min}=A|_{x=-1}$ is the global minimum. The part below belongs to the original post $A$ is actually the area bounded by the zig-zag loop between $y=\frac{1}{2\cos(\frac{\pi}{m})-x}$ and $y=x$ . I tried it on a computer and observed that $A$ minimizes at $x=-1,\forall m$ , and other $m-1$ values of $x$ , resulting in the same minimal value of $A$ (the only minimum of $A$ ). I did try to prove the necessary condition $\frac{dA}{dx} |_{x=-1}=0$ , after some algebra, the equivalent equation is: $$\sum_{k=1}^m \left(x_k-2\cos\left(\frac{\pi}{m}\right)x_{k+1}^2+x_{k+1}^3\right)\prod_{i=1}^k x_i^2|_{x=-1}=0$$ I recognized a sort of symmetry inside the sum (sum of two terms (of the sum above) with i
Not yet the solution, but I found: the closed form expression of $(x_k)_k$ , and the minimum and the $m$ values of $x$ such that $A$ reaches its minimum. It is not necessary to find the minimum for $\color{red}{x \in \mathbb{R}} $ . It suffices to find the global minimum $A_{\min}$ for $\color{red}{x\in (-\infty,0)}$ . 1 For the sake of simplicity, let us denote $\alpha = \pi/m$ and $t_1,t_2$ be the two solutions of $$t= \frac{1}{2\cos\left(\frac{\pi}{m} \right) -t } $$ then $$t_{1,2} = e^{\pm \mathbf{i} \frac{\pi}{m}} = e^{\pm \mathbf{i} \alpha} = \cos(\alpha) \pm\mathbf{i}\sin(\alpha)$$ We have: $$\begin{align} x_{k+1} -t_1&= \frac{1}{2\cos\left(\frac{\pi}{m} \right) -x_k }-\frac{1}{2\cos\left(\frac{\pi}{m} \right) -t_1 }\\ & = \frac{x_k-t_1}{2\cos\left(\frac{\pi}{m} \right) -x_k }\cdot\frac{1}{2\cos\left(\frac{\pi}{m} \right) -t_1 }\\ x_{k+1} -t_1& = \frac{x_k-t_1}{2\cos\left(\frac{\pi}{m} \right) -x_k }\cdot t_1 \tag{1} \end{align}$$ Same for $t_2$ , we have: $$x_{k+1} -t_2= \frac{x_k-t_2}{2\cos\left(\frac{\pi}{m} \right) -x_k }\cdot t_2 \tag{2}$$
|sequences-and-series|optimization|convex-optimization|
0
Should a non-empty subset $B$ of $A$ be formally denoted $B\subseteq A\backslash\emptyset$ or $B\subseteq A\backslash\{\emptyset\}$
Let $A$ be an arbitrary non-empty set. Then, by definition of the empty set, $\emptyset\subseteq A$ . Consider now a non-empty subset $B\subseteq A$ . In order to formally specify that the subset $B$ is non-empty, should I write $B\subseteq A\backslash \emptyset$ or $B\subseteq A\backslash\{\emptyset\}$ ? To me, writing $B\subseteq A\backslash \emptyset$ seems more natural, for the empty set is itself a set and thus the curly brackets seem redundant and incorrect. However, $B\subseteq A\backslash\{\emptyset\}$ also seems to have some intuitive appeal. Can you please clarify which is the correct way of writing it and why?
Neither is correct: let $A:=\{1,2,3\}$ ; you can write $A-\{1,2\}$ but not " $A-\{\emptyset\}$ " since $\emptyset \notin A$ . A good notation is $$(B\subset A \text{ and } B\neq \emptyset )\iff B\in \mathcal{P}(A)- \{\emptyset\}$$ where $\mathcal{P}(A)$ is the power set of $A$ .
|elementary-set-theory|notation|
1
What rule is used in this derivation of the interarrival time for the Poisson process?
I'm working on calculating the probability distribution of the interarrival time of the Poisson process. The method used in my textbook is very strange and I don't understand how the probabilities are calculated. I am familiar with the method used e.g. here but not the working below. $ N(t) $ is the Poisson process: $$ P\{N(s) = 1, N(t) = 2\} = P\{\xi_1 \leq s < \xi_2 \leq t < \xi_3\} $$ $$ = P\{\eta_1 \leq s < \eta_1+\eta_2 \leq t < \eta_1+\eta_2+\eta_3\} $$ At this point we have converted the Poisson process into 3 independent, exponentially distributed random variables. But now we have the first conversion of the probability to integrals and I don't understand how it was done: $$ = \int_{0}^{s} P\{s < u+\eta_2 \leq t < u+\eta_2+\eta_3\}\,\lambda e^{-\lambda u}\,du $$ What rule of probability was used? The first inequality has now become the integral and the random variable $ \eta_1 $ has become $u$ in the probability. Is this a well known rule of probability (and what is it called if so?) or is this somehow a specific property of this problem? And likewise for the next two steps: $$ = \int_{0}^{s} \left( \int_{s-u}^{t-u} P\{t < u+v+\eta_3\}\,\lambda e^{-\lambda v}\,dv \right)\lambda e^{-\lambda u}\,du $$
After reading more I've found that the rule used in this example is the law of total probability: $$ P\{A\}=\int_{-\infty}^{\infty} P\{A\mid X=x\}f_X(x)\, dx $$ First, we notice that the event can be written as $$ \{\eta_1 \leq s < \eta_1+\eta_2 \leq t < \eta_1+\eta_2+\eta_3\} $$ Since $\eta_1$ is exponentially distributed it is non-negative, and the event has probability $0$ for $\eta_1 > s$ , which we can use to adjust the bounds of the integral. Therefore we find that: $$ P\{\eta_1 \leq s < \eta_1+\eta_2 \leq t < \eta_1+\eta_2+\eta_3\} $$ $$=\int_{-\infty}^{\infty} P(\{\eta_1 \leq s < \eta_1+\eta_2 \leq t < \eta_1+\eta_2+\eta_3\}\mid \eta_1 = u)\,f_{\eta_1}(u)\,du $$ $$=\int_{-\infty}^{\infty} P(\{u \leq s < u+\eta_2 \leq t < u+\eta_2+\eta_3\})\,f_{\eta_1}(u)\,du $$ $$=\int_{0}^{s} P(\{s < u+\eta_2 \leq t < u+\eta_2+\eta_3\})\,\lambda e^{-\lambda u}\,du $$ The rule is applied again to the next two equalities in this way as well.
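The conditioning step can be spot-checked by Monte Carlo. This is a sketch; $\lambda=1$, $s=1$, $t=2$ and the seed are arbitrary choices, and for these values independent increments give $P\{N(1)=1,\,N(2)=2\}=\big(1\cdot e^{-1}\big)\big(1\cdot e^{-1}\big)=e^{-2}$.

```python
import math
import random

def estimate(lam=1.0, s=1.0, t=2.0, trials=200_000, seed=7):
    # Empirical probability of {eta1 <= s < eta1+eta2 <= t < eta1+eta2+eta3}
    # for i.i.d. Exponential(lam) interarrival times eta1, eta2, eta3.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        e1 = rng.expovariate(lam)
        e2 = rng.expovariate(lam)
        e3 = rng.expovariate(lam)
        if e1 <= s < e1 + e2 <= t < e1 + e2 + e3:
            hits += 1
    return hits / trials

est = estimate()
exact = math.exp(-2.0)  # P(1 arrival in [0,1]) * P(1 arrival in (1,2])
```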
|stochastic-processes|poisson-distribution|poisson-process|
1
Difference of two greatest integer functions
If $[x]$ is the greatest integer less than or equal to x, then from the graph of the greatest integer function, and splitting into a case where x is an integer, and another where it is not, it is "obvious" that $$[x]-[x-1]=1$$ However, is there a more formal/rigorous proof possible?
Sure. Let $[x] = n$ . That means $n \le x < n+1$ . Subtract $1$ from each of the terms and that tells us $n-1 \le x-1 < n$ . And that means that $[x-1]=n-1$ . So $[x]-[x-1]=n - (n-1) = 1$ . If you don't want to introduce the variable $n$ just do $[x] \le x < [x]+1$ , so $[x]-1 \le x - 1 < [x]$ , so $[x-1] = [x]-1$ , so $[x]-[x-1] = 1$ . ======== Actually, in hindsight if you want to be really slick: $[x]-[x-1] = 1 \iff $ $[x-1] = [x] - 1 \underbrace{\iff}_{\text{by definition of greatest integer function}} $ $[x]-1 \le x-1 < ([x]-1) + 1= [x]\iff$ $[x] \le x < [x]+1$ which is true as it is the very definition of the greatest integer function.
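The identity is easy to sanity-check over a mix of integer, fractional, and negative inputs, a minimal sketch:

```python
import math

def floor_difference(x):
    # [x] - [x-1] with [.] the greatest-integer (floor) function.
    return math.floor(x) - math.floor(x - 1)

samples = [-3.0, -2.5, -1e-9, 0.0, 0.5, 1.0, 2.7, 100.0]
all_one = all(floor_difference(x) == 1 for x in samples)
```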
|real-analysis|algebra-precalculus|functions|alternative-proof|
1
How to calculate the area of a projected path onto a plane?
How do you "orthogonally project" the shape? After I drew multiple graphs, I still had no idea. The correct answer is $12$ . I guess it is $3 \sqrt 2 \cdot 2 \sqrt 2$ according to the gist of the graph that I drew (a plane with two parallel sides through four mid points each, and two shorter sides a little out of shape) The problem: "A bee travels in a series of steps of length $1$ : north, west, north, west, up, south, east, south, east, down. (The bee can move in three dimensions, so north is distinct from up.) There exists a plane $P$ that passes through the midpoints of each step. Suppose we orthogonally project the bee’s path onto the plane $P$ , and let $A$ be the area of the resulting figure. What is $A^2$ ?"
Here I'll develop an analytical approach that hopefully will address the theoretical aspects of the original question. As the bee moves in 3D space, and east/west, north/south, and up/down are opposites, without loss of generality I'll define movements as changes in directions $(x,y,z)$ as follows: $$\mathbf{e}=-\mathbf{w}=\begin{pmatrix}1\\0\\0\end{pmatrix}\,,\quad \mathbf{n}=-\mathbf{s}=\begin{pmatrix}0\\1\\0\end{pmatrix}\,,\, \mathbf{u}=-\mathbf{d}=\begin{pmatrix}0\\0\\1\end{pmatrix}\,.$$ We assume that the bee starts at a point $\mathbf{p_1}$ . Thus, we have a set of 10 paths (line segments) defining a closed curve through the following points $\mathbf{p}_i$ as follows: \begin{align} \mathbf{p}_1&\xrightarrow{N} \mathbf{p}_2\xrightarrow{E} \mathbf{p}_3\xrightarrow{N} \mathbf{p}_4\xrightarrow{E} \mathbf{p}_5\xrightarrow{U} \mathbf{p}_6\xrightarrow{S} \mathbf{p}_7\xrightarrow{W} \mathbf{p}_8\xrightarrow{S} \mathbf{p}_9\xrightarrow{W} \mathbf{p}_{10}\xrightarrow{D} \mathbf{p}_1 \end{align}
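The whole computation can be carried out numerically. This is a sketch; the step order follows the problem statement (N, W, N, W, U, S, E, S, E, D) in my own coordinate convention ($x$ = east, $y$ = north, $z$ = up), and the plane $x+y+z=\tfrac12$ is the one through the step midpoints, verified below rather than assumed.

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Unit steps N, W, N, W, U, S, E, S, E, D.
steps = [(0, 1, 0), (-1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, 0, 1),
         (0, -1, 0), (1, 0, 0), (0, -1, 0), (1, 0, 0), (0, 0, -1)]
pts = [(0.0, 0.0, 0.0)]
for dx, dy, dz in steps[:-1]:
    x, y, z = pts[-1]
    pts.append((x + dx, y + dy, z + dz))

# All step midpoints should lie on the plane x + y + z = 1/2.
mids = [(x + dx / 2, y + dy / 2, z + dz / 2)
        for (x, y, z), (dx, dy, dz) in zip(pts, steps)]
on_plane = all(abs(sum(m) - 0.5) < 1e-12 for m in mids)

# Orthogonally project the path's vertices onto that plane (normal (1,1,1)).
proj = []
for x, y, z in pts:
    t = (x + y + z - 0.5) / 3.0
    proj.append((x - t, y - t, z - t))

# Vector area of the projected planar polygon: 0.5 * sum of q_i x q_{i+1};
# its magnitude is the enclosed area A, so A^2 is its squared norm.
S = (0.0, 0.0, 0.0)
for i in range(len(proj)):
    c = cross(proj[i], proj[(i + 1) % len(proj)])
    S = (S[0] + c[0] / 2, S[1] + c[1] / 2, S[2] + c[2] / 2)
A_squared = S[0] ** 2 + S[1] ** 2 + S[2] ** 2
```

The result matches the stated answer $A^2 = 12$.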
|geometry|3d|area|
0
Difference of two greatest integer functions
If $[x]$ is the greatest integer less than or equal to x, then from the graph of the greatest integer function, and splitting into a case where x is an integer, and another where it is not, it is "obvious" that $$[x]-[x-1]=1$$ However, is there a more formal/rigorous proof possible?
This can be seen to follow from the more general property that $[x+n]=[x]+n$ for all real $x$ and integer $n$ . Namely one sees that $[x]-[x-1]=[x]-([x]-1)=1$ . This more general property can be derived by noting that for $A$ the set of all integers no greater than $x$ and $B$ the set of all integers no greater than $x+n$ that \begin{equation}a\in A\Leftrightarrow a\in\mathbb{Z}\text{ and }a\leq x\Leftrightarrow a+n\in\mathbb{Z}\text{ and }a+n\leq x+n\Leftrightarrow a+n\in B. (1) \end{equation} Now since $[x]\in A$ it follows by $(1)$ that $[x]+n\in B$ . If $b\in B$ , then by $(1)$ one has that $b-n\in A$ and therefore, as $[x]$ is by definition the greatest element of $A$ , that $b-n\leq [x]$ and that $b\leq [x]+n$ . Consequently $[x]+n$ is the greatest element of $B$ and hence is $[x+n]$ . In more technical language the map $A\to B$ given by $a\mapsto a+n$ is an isomorphism of ordered sets and hence carries the greatest element $[x]$ of $A$ onto the greatest element $[x+n]$ of $B$ .
|real-analysis|algebra-precalculus|functions|alternative-proof|
0
My question is whether given function is periodic or not
F(x) = 1, if x is rational 0, if x is irrational So what i am really confused about is because all professors are telling me its periodic whose period is not defined, i mean it doesn't make any sense to me. So they countered me by telling in constant function periodic i said yes but again they asked what is its period. I mean the problem is what is exact definition of periodic function they are not drawing the line. So please help me with understanding it.
The period of a function is the smallest positive $r$ so that $f(x+r) = f(x)$ for all $x$ . But some functions have infinitely many values of $r$ with $f(x+r) =f(x)$ for all $x$ , yet no smallest positive one among them. These functions are periodic because there do exist $r$ so that $f(x+r) = f(x)$ for all $x$ , but they do not have a well-defined period because there is no smallest such positive number. In this case if $r\in \mathbb Q$ then $x+r\in \mathbb Q\iff x\in \mathbb Q$ and so $f(x+r) = f(x)$ . But there is no smallest positive $r$ that does this. So $f$ is periodic without a defined specific period. ==== Note the restriction is that $r \ne 0$ . Obviously all functions have $f(x+0) = f(x)$ and obviously that doesn't count.
|functions|irrational-numbers|rational-numbers|periodic-functions|
0
My question is whether given function is periodic or not
F(x) = 1, if x is rational 0, if x is irrational So what i am really confused about is because all professors are telling me its periodic whose period is not defined, i mean it doesn't make any sense to me. So they countered me by telling in constant function periodic i said yes but again they asked what is its period. I mean the problem is what is exact definition of periodic function they are not drawing the line. So please help me with understanding it.
The function is indeed periodic because $\mathbb{Q}$ is closed under addition, so for example $\forall x \in \mathbb{R}, x \in \mathbb{Q} \iff x+1\in \mathbb{Q}$ . This means that $\forall x \in \mathbb{R}, F(x+1) = F(x)$ . We can conclude that $F$ is periodic and $1$ is one period. Moreover, $\forall p\in\mathbb Q,\forall x \in \mathbb{R}, x \in \mathbb{Q} \iff x+p\in \mathbb{Q}$ , which is enough to prove that: $$\forall p\in\mathbb Q,\forall x \in \mathbb{R}, F(x+p) = F(x)$$ Said differently, any rational number is a period for $F$ . But we cannot give a minimum period because $\inf(\mathbb{Q_+}) = 0$ . That is the reason why $F$ is periodic (we can exhibit its periods) but we cannot define the period (commonly the minimal positive period): the infimum of the positive periods is $0$ .
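A toy model makes the rational-period argument concrete. This is a sketch of my own devising, since irrationality cannot be tested on floats: numbers are represented exactly as $a+b\sqrt2$ with rational $a,b$, and such a number is rational precisely when $b=0$; adding a rational period $p$ never changes $b$, so $F$ is unchanged.

```python
from fractions import Fraction

def F(a, b):
    # F evaluated at the number a + b*sqrt(2): 1 if rational (b == 0), else 0.
    return 1 if b == 0 else 0

def shift(a, b, p):
    # Translate the number a + b*sqrt(2) by a rational period p.
    return (a + p, b)

pairs = [(Fraction(1, 3), Fraction(0)), (Fraction(0), Fraction(1)),
         (Fraction(-2), Fraction(5, 7))]
periods = [Fraction(1), Fraction(1, 2), Fraction(-3, 5)]
periodic = all(F(*shift(a, b, p)) == F(a, b) for a, b in pairs for p in periods)
```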
|functions|irrational-numbers|rational-numbers|periodic-functions|
0
Solving $\cos(x)\sin(7x)=\cos(3x)\sin(5x)$
Recently, I was trying to solve a trigonometric equation involving the use of sine and cosine: $$\cos(x)\sin(7x)=\cos(3x)\sin(5x)$$ I attempted to remove the coefficients of $x$ within the trigonometric functions, but I did not understand how to. Does anyone understand how I'm supposed to solve this equation? Thank you!
Using the product-to-sum identity $\sin{A}\cos{B}=\frac{\sin{(A+B)}+\sin{(A-B)}}{2}$ in turn on each side: LHS: $\sin{(7x)}\cos{(x)}=\frac{\sin{(8x)}+\sin{(6x)}}{2}$ RHS: $\sin{(5x)}\cos{(3x)}=\frac{\sin{(8x)}+\sin{(2x)}}{2}$ Equating the two, the $\frac{1}{2}$ and the $\sin{(8x)}$ both cancel and we are left with $\sin{(6x)}=\sin{(2x)}$ . From here, the geometry/symmetry of the unit circle gives either $6x = 2x + 2n\pi$ , i.e. $x=\frac{n\pi}{2}$ , or $6x = \pi - 2x + 2n\pi$ , i.e. $x=\frac{\pi}{8}+\frac{n\pi}{4}$ , where $n$ is any integer. (The solutions $x=0,\pi,2\pi,\dots$ and $x=\frac{-3\pi}{8}+n\pi$ are all contained in these two families.)
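Both the reduction and the solution families can be verified numerically, a minimal sketch:

```python
import math

def lhs(x):
    return math.cos(x) * math.sin(7 * x)

def rhs(x):
    return math.cos(3 * x) * math.sin(5 * x)

# Product-to-sum reduction: lhs - rhs == (sin 6x - sin 2x) / 2.
xs = [0.1 * k for k in range(-30, 31)]
identity_ok = all(abs((lhs(x) - rhs(x)) -
                      (math.sin(6 * x) - math.sin(2 * x)) / 2) < 1e-12
                  for x in xs)

# The solution families x = n*pi/2 and x = pi/8 + n*pi/4.
sols = ([n * math.pi / 2 for n in range(-4, 5)] +
        [math.pi / 8 + n * math.pi / 4 for n in range(-4, 5)])
solves = all(abs(lhs(x) - rhs(x)) < 1e-9 for x in sols)
```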
|trigonometry|
0
Compute $\int_{-1}^{1}x\sqrt{x}\tan^{-1}(\sqrt{x})dx$
I was trying to make an IBP problem for an integration bee. I came up with the simple arctan integral with the square roots just for an easy round. However, WolframAlpha suggested a twist, which was adding the bounds $-1$ to $1$ . I was confused because I thought the result was undefined, divergent, or complex, but it turns out the answer is real and defined. My attempt was performing integration by parts, from which I got the indefinite answer: $$ \left [ \frac{2}{5}x^{\frac{5}{2}}\tan^{-1}(\sqrt{x})-\frac{x^2}{10}+\frac{1}{5}x-\frac{1}{5}\ln|x+1| \right ]_{-1}^{1}$$ The hardest part was substituting the bounds, as I ran into some weird values: $$\frac{\pi}{10}+\frac{1}{10}-\frac{1}{5}\ln(2)-\left(\frac{2}{5}i\tan^{-1}(i)-\frac{3}{2}-\ln(0)\right)$$ Yet the answer is $\frac{2}{5}+\frac{\pi}{10}+\frac{\ln(2)}{5}$ . Did I do anything wrong? How do I come to this exact answer?
We first split the integration interval into two. $$ \int_{-1}^1 x \sqrt{x} \tan ^{-1} \sqrt{x} d x = \underbrace{\int_0^1 x \sqrt{x} \tan ^{-1} \sqrt{x} d x}_{\frac{\pi}{10}+\frac{1}{10}-\frac{1}{5} \ln (2)} + \underbrace{ \int_{-1}^0 x \sqrt{x} \tan ^{-1} \sqrt{x} d x}_{J} $$ For the integral $J$ , letting $x\mapsto -x$ yields $$ \begin{aligned} J & =-\int_0^1 x(\sqrt{x} i) \tan ^{-1}(i \sqrt{x}) d x \\ & =\int_0^1 x \sqrt{x} \operatorname{arctanh}(\sqrt{x}) d x \\ & =2 \int_0^1 x^4 \operatorname{arctanh} x d x \end{aligned} $$ By the series expansion of $\operatorname{arctanh} x$ , we have $$ \begin{aligned} J& =2 \int_0^1 x^4 \sum_{n=0}^{\infty} \frac{x^{2 n+1}}{2 n+1} d x \\ & =2 \sum_{n=0}^{\infty} \frac{1}{2 n+1} \int_0^1 x^{2 n+5} d x \\ & =2 \sum_{n=0}^{\infty} \frac{1}{(2 n+1)(2 n+6)} \\ & =\frac{1}{5} \sum_{n=0}^{\infty}\left(\frac{1}{n+\frac{1}{2}}-\frac{1}{n+3}\right) \\ & =\frac{1}{5}\left(\psi(3)-\psi\left(\frac{1}{2}\right)\right)\\&= \frac{1}{5}\left(\left(\frac32-\gamma\right)-(-\gamma-2\ln 2) \right)\\&=\frac{3}{10}+\frac{2\ln 2}{5} \end{aligned} $$ Combining the two pieces, $$\int_{-1}^1 x\sqrt{x}\tan^{-1}(\sqrt{x})\,dx=\frac{\pi}{10}+\frac{1}{10}-\frac{\ln 2}{5}+\frac{3}{10}+\frac{2\ln 2}{5}=\frac{2}{5}+\frac{\pi}{10}+\frac{\ln 2}{5}.$$
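The value of $J$ can be confirmed numerically. This is a sketch; the closed form $\frac{3}{10}+\frac{2\ln 2}{5}$ uses $\psi(3)=\frac32-\gamma$ and $\psi(\frac12)=-\gamma-2\ln2$.

```python
import math

def J_series(terms=200_000):
    # J = 2 * sum_{n>=0} 1 / ((2n+1)(2n+6)); the tail is O(1/terms).
    return sum(2.0 / ((2 * n + 1) * (2 * n + 6)) for n in range(terms))

def J_quadrature(n=200_000):
    # Midpoint rule for J = Int_0^1 2 x^4 artanh(x) dx
    # (the logarithmic endpoint singularity at x = 1 is integrable).
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += 2.0 * x ** 4 * math.atanh(x)
    return total * h

closed = 3 / 10 + 2 * math.log(2) / 5
```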
|integration|definite-integrals|
1
Exercise 1.1.9 from West
Currently I am reading the first edition of "Introduction to Graph Theory" by Douglas B. West. Exercise 1.1.9 states the following: 1.1.9. Suppose that $G$ is a simple graph having no vertex of degree $0$ and no induced subgraph with three edges. Prove that $G$ has at most four vertices. Counter-example: Let $V(G) = \{1,2,3,4,5\}$ and $E(G) = \{12,23,45\}$ . Is this counter-example correct? EDIT: Came up with this approach but I'm stuck. Proof. Because every vertex in $G$ has degree at least 1, $$ \sum_{v \in V}^{} \text{deg}(v) \geq |V| $$ and recall that half the sum of all degrees in $G$ is the size of the edge set, thus $$ |E|\geq \frac{1}{2} |V| $$ Now, if $G$ does not contain any induced subgraph $H$ with three edges, that implies there is no 4-vertex or 5-vertex subset of $V(G)$ for which the induced subgraph under that subset contains three edges. Because $|E| \geq 3$ for $|V| \geq 5$ then $G$ must contain a three edge subgraph (and I'm stuck
Your counter-example almost contains the proof! When we have a graph $G$ with $5$ or more vertices, a subgraph (which might be $G$ itself) will contain $5$ vertices, which will look like your example. Let all the vertices have degree $0$ , unconnected to other vertices. We want the case where there are no degree-$0$ vertices, while we have $5$ such degree-$0$ vertices. Hence we can connect (arbitrarily) vertices $1$ & $2$ to get $(1,2)$ . We still have $3$ vertices with degree $0$ , with $1$ edge. Adding a new edge will either reduce that count by $1$ (e.g. $(1,3)$ ) or reduce that count by $2$ (e.g. $(3,4)$ ), leaving us with at least $1$ vertex of degree $0$ . Adding a third edge will give a subgraph with 3 edges, while not adding a third edge will give a graph with at least $1$ unwanted degree-$0$ vertex. Either way, the conclusion holds! At the point where we have 3 edges, adding more edges might eliminate all the degree-$0$ vertices, though it can not eliminate
|graph-theory|examples-counterexamples|
0
ODE's: Continuity Equation
The context of this question is Machine Learning (more specifically, my question results from this paper , yet I have a math question, so I'm posting it here). First of all, some definitions (Sec. 2 of the mentioned paper): Let $\mathbb{R}^{d}$ denote the data space with data points $x = (x^{1}$ , $\dots$ , $x^{d})\in\mathbb R^{d}$ . Two important objects we use in this paper are: the probability density path $p: [0, 1]\times\mathbb R^{d} \rightarrow \mathbb R_{> 0}$ , which is a time dependent probability density function, i.e. $\int p_{t}(x)dx = 1$ , and a time-dependent vector field , $v: [0, 1]\times \mathbb R^{d}\rightarrow\mathbb R^{d}$ . A vector field $v_{t}$ can be used to construct a time-dependent diffeomorphic map, called a flow , $\phi: [0, 1]\times \mathbb R^{d} \rightarrow \mathbb R^{d}$ , defined via the ordinary differential equation (ODE): $$\frac{d}{dt}\phi_{t}(x) = v_{t}(\phi_{t}(x)) \tag{1}\label{eq:cnf}$$ $$\phi_{0}(x) = x\tag{2}$$ [...] A CNF [Continuous Normaliz
I think my answer will be better if we know what exactly the Jacobian determinant of the flow looks like :). First of all, about the flow $\phi_t(x)$ and its Jacobian matrix $M_t(x)$ : we can prove the equation $\frac{d}{dt}M_t(x)=Dv_t(\phi_t(x))M_t(x)$ (by using the chain rule to find the $(i,j)$ -th element of $\frac{d}{dt}M_t(x)$ ). Then we also know the initial value of $M_t(x)$ : $M_0(x)=D\phi_0(x)=DI(x)=I$ . So we get an IVP in matrix form, and it is easy to solve! We obtain the expression of $M_t(x)$ : \begin{align} M_t(x)=\exp\left(\int_{0}^{t}Dv_s(x_s)ds\right) \end{align} where $x_s=\phi_s(x)$ (strictly speaking this is a time-ordered exponential when the matrices $Dv_s$ at different times do not commute, but the determinant below is unaffected; this is Liouville's formula). Then the determinant comes out as \begin{align} J_t(x)=\det(M_t(x))=\det\left(\exp\left(\int_{0}^{t}Dv_s(x_s)ds\right)\right)\\ =\exp\left(\text{tr}\left(\int_{0}^{t}Dv_s(x_s)ds\right)\right)=\exp\left(\int_{0}^{t}\text{div}(v_s(x_s))ds\right), \end{align} where we use the fact that $\det(\exp(X))=\exp(\text{tr}(X))$ . Now the two densities should satisfy: \begin{align} p_t(x_t)J_t(x_0)=p_0(x_0). \end{align} (I use $x_0$ to avoid the confusion that may be caused.) Then the e
|ordinary-differential-equations|stochastic-processes|machine-learning|transport-equation|
0
Characterization of Lipschitz continuity
A function $f$ satisfies the Lipschitz condition on $[a,b]$ iff for all $\epsilon >0$ there exists $\delta>0$ for which the following is true: For all families $\{[a_k,b_k]\}_{k=1}^n$ of closed intervals in $[a,b]$ with $\sum_{k=1}^n (b_k-a_k) < \delta$ , $$ \sum_{k=1}^n |f(b_k)-f(a_k)| < \epsilon. $$ Please, can someone give me some hints to prove this statement? In particular I need help to prove that the condition I wrote implies Lipschitz continuity. Actually, in this case, I also should prove it taking the intervals open and not closed.
Let $f \colon [a,b] \to \mathbb{R}$ be a function. Recall that $f$ is Lipschitz continuous if and only if there is some constant $C > 0$ such that $$\lvert f(x_2) - f(x_1) \rvert \le C\,\lvert x_2 - x_1 \rvert \quad\text{for all } x_1,x_2\in[a,b].$$ In what follows, we say that $f$ has the closed subintervals property if and only if, for every $\varepsilon > 0$ , there exists $\delta > 0$ such that $$\sum_{k=1}^n \lvert f(b_k) - f(a_k) \rvert < \varepsilon \quad\text{whenever}\quad \sum_{k=1}^n (b_k - a_k) < \delta,$$ for any collection of $n$ closed subintervals of $[a,b]$ $(n \in \mathbb{N})$ . So, we want to prove that $f$ is Lipschitz continuous if and only if $f$ has the closed subintervals property. $(\Longrightarrow)$ Suppose that $f$ is Lipschitz continuous. By definition, there is a constant $C > 0$ such that $$\lvert f(x_2) - f(x_1) \rvert \le C\,\lvert x_2 - x_1 \rvert$$ for any $x_1, x_2 \in [a,b]$ . Let $\varepsilon$ be an arbitrary positive real number. Hint. If we take any collection $\{[a_k,b_k]\}_{k=1}^n$ of $n$ closed subintervals of $[a,b]$ $(n \in \mathbb{N})$ with $\sum_{k=1}^n (b_k-a_k) < \delta$ , what can be said about the value of $$\sum_{k=1}^n \lvert f(b_k) - f(a_k) \rvert$$ given the Lipschitz estimate above?
|continuity|lipschitz-functions|absolute-continuity|
1
Image of a continuous function over a convex set is convex in $\mathbb{R}\to\mathbb{R}$
Let $f:\Omega\to\mathbb{R}$ be a continuous function over convex domain $\Omega$ . Is the set $$ f\left(\Omega\right)=\left\{ y:y=f\left(x\right):x\in\Omega\right\} $$ Is the set $f(\Omega)$ convex ? Assume $$ \begin{cases} y_{1}=f\left(x_{1}\right)\\ y_{2}=f\left(x_{2}\right) \end{cases} $$ Looking at points between $y_1,y_2$ we get $ y_{t}=y_{1}+\left(y_{2}-y_{1}\right)t $ where $0\le t\le 1 $ $$ y_{t}=f\left(x_{1}\right)+\left[f\left(x_{2}\right)-f\left(x_{1}\right)\right]t=\left(1-t\right)f\left(x_{1}\right)+tf\left(x_{2}\right)$$ The continuity property says that $\forall \varepsilon>0$ there exists $\delta>0$ such that if $$ \left|z-x_{1}\right|\le\delta $$ then $$\left|f\left(z\right)-f\left(x_{1}\right)\right|\le\varepsilon $$ I tried setting $ z=x_{1}+\alpha\left(x_{2}-x_{1}\right) $ , but got nowhere. If $f$ is invertible then we can easily find an $x $ such that $f(x)=y_t$ Is there a way to prove such an $x$ exists using the continuity property ?
Your set is written incorrectly, with two $:$ symbols. I would write it as $$f(\Omega) = \{ y \in \mathbb{R} : \exists x \in \Omega \text{ satisfying } f(x) = y \}.$$ Now for the proof: Without loss of generality, we may assume $$y_1 \le y_2.$$ Define the function $$g(c) = f((1-c)x_1 + cx_2)$$ for $c \in [0,1]$ and note that $g(0) =y_1$ and $g(1) = y_2$ . Note also that $g$ is well defined, since $\Omega$ is convex, and that $g$ is continuous. By the intermediate value theorem, there exists some $c^* \in [0,1]$ such that $g(c^*) = y_t$ . Hence, $y_t \in f(\Omega)$ .
|calculus|optimization|convex-optimization|
1
Munkres-Analysis on Manifolds: Theorem 20.1
I am studying Analysis on Manifolds by Munkres. I have a problem with a proof in section 20. It states that: Let $A$ be an $n$ by $n$ matrix. Let $h:\mathbb{R}^n\to \mathbb{R}^n$ be the linear transformation $h(x)=A x$. Let $S$ be a rectifiable set (the boundary of $S$, $Bd\,S$, has measure $0$) in $\mathbb{R}^n$. Then $v(h(S))=|\det A|v(S)$ ($v=$volume). The author starts his proof by considering the case of $A$ being a non-singular (invertible) matrix. I think I understand his steps in that case (I basically had to prove that $h(\operatorname{int} S)=\operatorname{int} h(S)$ and that $h(S)$ is rectifiable; if anybody knows a way these statements are proven automatically please tell me). He proceeds by considering the case where $A$ is singular, so $\det A=0$. He tries to show now that $v(T)=0$, where $T=h(S)$. He states that since $S$ is bounded so is $h(S)$ (I think that's true because $|h(x)-h(a)|\leq n|A||x-a|$ for each $x$ in $S$ and fixed $a$ in $S$; if there is again a better explanation please tell me). Then he says that $h(\mathbb{R}
I am reading Theorem 20.1. I think we need to use the result of Theorem 18.2 to understand the proof of Theorem 20.1. Note that the sentence " These results also hold when $D$ is not compact, provided $\operatorname{Bd}D\subset A$ and $\operatorname{Bd}E\subset B$ . " is not included in the following pdf: https://archive.org/details/MunkresJ.R.AnalysisOnManifoldsTotal/page/n165/mode/1up But in my printed book, this sentence is added. And we need to use Theorem 18.2 in the case in which $D:=S$ is not necessarily compact but $\operatorname{Bd} D\subset A:=\mathbb{R}^n$ and $E:=T$ is not necessarily compact but $\operatorname{Bd} E\subset B:=\mathbb{R}^n$ . Theorem 18.2 on p.154. Let $g:A\to B$ be a diffeomorphism of class $C^r$ , where $A$ and $B$ are open sets in $\mathbb{R}^n$ . Let $D$ be a compact subset of $A$ , and let $E=g(D)$ . (a) We have $$g(\operatorname{Int}D)=\operatorname{Int}E\,\,\,\,\,\text{and}\,\,\,\,\,g(\operatorname{Bd}D)=\operatorname{Bd}E.$$ (b) If $D$ is rectifiabl
|linear-algebra|multivariable-calculus|linear-transformations|determinant|lebesgue-measure|
1
Prove that a Lagrangian that Induces a Linear Elliptic Euler-Lagrange PDE has Unique Form
I am asking if existence, taken as an assumption for a solution $L$ to the linear operator equation $$\mathcal{E}L = F$$ with conditions on $F$ , implies further conditions on $F$ and uniqueness of a generic form for the terms in $L$ not annihilated by $\mathcal{E}$ . The variational problem giving rise to the operator equation is to find a stationary solution $u$ that minimizes the action functional $$I[u]=\int_{\Omega}{L(Du,u,x)dx}$$ This induces the corresponding Euler-Lagrange equation $$\frac{\partial L}{\partial u}-\sum_{i=1}^n{\frac{\partial}{\partial x_i}\left(\frac{\partial L}{\partial(\partial_i u)} \right)}=0$$ and the corresponding operator $$\mathcal{E}=\partial_u-\nabla\cdot \partial_{\partial_i u}$$ This can be adjusted to accommodate any number of higher order derivatives ( see here ). A nice lemma would justify only allowing dependence up to $Du$ , not $D^2u$ and higher. Suppose $L$ induces Euler-Lagrange equations that are linear-elliptic. In the operator formulation
Premises to the answer The question you are asking is not an easy one as it is a version of the inverse problem of the calculus of variations (see [1] and [3] for some bibliographical references). In the form of your operator equation, the problem is hardly solvable, but if you leave the classic formulation of the problem and adopt a generalised one, you can get a necessary and sufficient condition on $F$ in order for your problem to have a solution. The generalised inverse problem of the calculus of variations asks for a solution of the following first variation equation $$\DeclareMathOperator{\dm}{d\!} \frac{\delta\mathfrak{L}}{\delta u }[u](\varphi) = \mathfrak{F}[u](\varphi) \label{2}\tag{FV} $$ where $\mathfrak{L}$ is a functional on a given Banach (or more general) space $E$ : precisely, in our starting problem $$ \mathfrak{L}(u)=I(u)=\int_{\Omega}{L(Du,u,x)\dm x} $$ $\mathfrak{F}$ is an operator mapping the domain of $\mathfrak{L}$ to the dual $E^\ast$ of $E$ . In our starting pr
|partial-differential-equations|calculus-of-variations|euler-lagrange-equation|elliptic-equations|linear-pde|
0
How is the Poincare disc model or the upper half plane model a representation of hyperbolic space?
I am currently working on these two models and I don't understand the connection between them and hyperbolic space. In case of spherical geometry one can imagine everything well as a 2 dimensional sphere in 3 dimensional space and all the major facts on parallel lines make sense. But in the case of hyperbolic geometry I can't really get an intuition for that. One problem is also that I have no idea what curvature really is. Could anyone please explain where this seemingly randomly defined metric on the upper half plane comes from, and why we want half circles and vertical lines to represent the geodesics? Thanks in advance.
In case of spherical geometry one can imagine everything well as a 2 dimensional sphere in 3 dimensional space and all the major facts on parallel lines make sense. I would argue that the better analogy is elliptic geometry. That's the same as spherical geometry, except that you treat antipodal points as a unit. Let's use this example to illustrate some aspects of how a model works. You start with a setup you understand well (or for which you have a mathematical theory). In this case it's a sphere in space. You use this to define terms of the axiomatic system you want to model. In this case we define the term “point” to mean a pair of antipodal points on the sphere. And we define the term “line” to mean a great circle on the sphere. The important part here is that the terms used in the axioms of planar geometry are just placeholders. While the people formulating these axioms had some concrete ideas of what a point or a line should be, the axioms don't depend on that meaning. So we can
|hyperbolic-geometry|
1
Find the smallest positive whole number that is equal to seven times the product of its digits
I am helping a kid with the preparation for a mathematical competition. One of the training questions is: Find the smallest positive whole number that is equal to seven times the product of its digits They do not provide the answer, but using this little python script I found out it is 735:

for i in range(1, 1000):
    digits = [int(x) for x in str(i)]
    prod = 1
    for digit in digits:
        prod = prod * digit
    if prod * 7 == i:
        print(f"The number is: {i}")
        break

Now I want to find a way that could be resolved just with paper and pencil, as they must do in this mathematical competition. I tried to write: $$ 100a + 10b + c = 7abc $$ and then I tried many more things, including the divisibility rule for 7, etc. But I couldn't find a way of solving this if not by brute-force substituting digits for a, b and c and finding the values that satisfy the equation. Thanks! EDIT I know it is a 3 digits number, so it must be less than 999, and since 0 is not a digit it must be greater than 111. Since the number is 7 times the p
One-digit numbers don’t work. Let us try two-digit numbers: $$10a+b=7ab,$$ where $a>0$ . We get $b=a(7b-10)$ . So $b\le 2$ . We check $b=0,1,2$ manually. Consider three-digit numbers now. Let us write $$10(10a+b)=(7ab-1)c,$$ where $a>0$ . Consider three cases. Suppose $c=5$ . Then $20a+2b=7ab-1.$ We see that $b$ is an odd digit such that $7b\ge 20$ . So we manually check $b=3, 5, 7, 9$ . Suppose $c=2$ . Then $50a+5b=7ab-1$ . Now $7b$ is greater than $50$ . So $b=8,9$ . Check those manually. If $c$ is any other digit then $10\mid 7ab-1$ . So $7ab=21,91,161,231,301,371,441, 511$ . Or $ab=3, 13, 23, 33, 43, 53, 63, 73$ . $13, 23, 33=3\cdot 11, 43, 53, 73$ are impossible. If $ab=3$ then $a=1, b=3$ or vice versa. If $ab=63$ then $a=7,b=9$ or vice versa. We check those four cases manually. The answer is $735$ .
|puzzle|
0
A closed form for $\int_0^\infty\ln x\cdot\ln\left(1+\frac1{2\cosh x}\right)dx$
Is it possible to evaluate this integral in a closed form? $$\int_0^\infty\ln x\cdot\ln\left(1+\frac1{2\cosh x}\right)dx=\int_0^\infty\ln x\cdot\ln\left(1+\frac1{e^{-x}+e^x}\right)dx$$ I tried to evaluate it with a CAS, and looked up in integral tables, but was not successful.
I played around a little and noticed quickly that $$\displaystyle 1+\frac{1}{2}\operatorname{sech}(x)=\frac{e^{2x}+e^{x}+1}{e^{2x}+1}$$ By noting the factorization $$\displaystyle \frac{1-x^{3}}{1-x}\cdot \frac{1-x^{2}}{1-x^{4}}=\frac{x^{2}+x+1}{x^{2}+1}$$ and letting $\displaystyle x\to e^{-x}$ , we can break these up somewhat and write $$\displaystyle \frac{e^{2x}+e^{x}+1}{e^{2x}+1}=\frac{1-e^{-3x}}{1-e^{-x}}\cdot \frac{1-e^{-2x}}{1-e^{-4x}}$$ So, we can write $$\displaystyle \log(x)\left[\log\left(\frac{1-e^{-3x}}{1-e^{-x}}\right)+\log\left(\frac{1-e^{-2x}}{1-e^{-4x}}\right)\right]$$ $$\displaystyle \log(x)\left[\log(1-e^{-3x})-\log(1-e^{-x})+\log(1-e^{-2x})-\log(1-e^{-4x})\right]dx$$ Now, is there some sort of general form for something like $$\displaystyle \int_{0}^{\infty}\log(x)\log(1-e^{-nx})dx$$ ? If we use the series for $\log(1-u)$ , we get $$\displaystyle -\int_{0}^{\infty}\log(x)\sum_{k=1}^{\infty}\frac{e^{-nkx}}{k}dx$$ Now, it is beginning to look like the Gamma derivative $$\displaystyle \int_{
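A quick numerical sanity check of the algebraic identities above (just a sketch in Python, not part of the derivation; the sample points are arbitrary):

```python
import math

# Verify 1 + 1/(2*cosh(x)) == (e^{2x} + e^x + 1)/(e^{2x} + 1)
# and the factorization used above, at a few sample points.
def lhs(x):
    return 1.0 + 1.0 / (2.0 * math.cosh(x))

def rhs(x):
    return (math.exp(2 * x) + math.exp(x) + 1.0) / (math.exp(2 * x) + 1.0)

def factored(x):
    # (1 - e^{-3x})/(1 - e^{-x}) * (1 - e^{-2x})/(1 - e^{-4x})
    return ((1 - math.exp(-3 * x)) / (1 - math.exp(-x))
            * (1 - math.exp(-2 * x)) / (1 - math.exp(-4 * x)))

errors = [abs(lhs(x) - rhs(x)) + abs(rhs(x) - factored(x))
          for x in (0.1, 0.5, 1.0, 2.0, 5.0)]
max_err = max(errors)
```

Both identities agree to machine precision, which supports the factorization step.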
|calculus|integration|logarithms|improper-integrals|closed-form|
0
Finding the ideal B-spline through data points using Euler-Lagrange: is it just too hard to do?
I am not even sure I have a question anymore (I will just give up)... in the past month or so I have been researching cubic Bézier curves. The idea was to find a fit through data points, using piecewise Bézier curves and the Euler-Lagrange equation to minimize an "energy" proportional with the square of the length of the (total) curve (elastic energy used to stretch) plus the integral of the square of the curvature $\kappa$ of the curve (energy required to bend the curve). Edit: it took me three whole days to type this "question". Please don't get mad for wasting your time if you choose to read it. Here is what I have done so far with respect to the length of the curve. Let $$\begin{equation}\begin{aligned}\vec{P}(t) &= (1-t)^3 \vec{P_0} + 3(1-t)^2t \vec{C_0} + 3(1-t)t^2 \vec{C_1} + t^3 \vec{P_1} \\ &= \vec{P_0} + 3(\vec{C_0} - \vec{P_0})t + 3(\vec{C_1} - 2\vec{C_0} + \vec{P_0})t^2 + (\vec{P_1} - 3\vec{C_1} + 3\vec{C_0} - \vec{P_0})t^3\end{aligned} \end{equation}\tag{1}$$ be a cubic Bézi
Not really an answer, but far too long for a comment. A few remarks … Minimizing arclength seems like a strange thing to do. It will cause your curve to become nearly piecewise linear, which is probably not what you want. Estimating the integral by using a Taylor expansion is not a very good approach. There are many better methods of numerical integration . If you describe your objective (i.e. what you mean by “ideal”), I might be able to suggest other geometric characteristics of the curve that you could optimize. Lots of people have tried minimizing various forms of bending energy. You might be better off using a b-spline representation of the curve. This will enforce certain continuity constraints that you probably care about. If you just use independent Bézier curves, you will have to add a lot of continuity constraints into your optimization problem. I assume that you’ll end up passing the problem to some standard optimization package. To do this, you don’t need a mathematical for
|multivariable-calculus|euler-lagrange-equation|arc-length|bezier-curve|
0
Showing that the $2\times2$ complex matrices with trace zero are a subspace over the reals, with another subspace inside it.
So I have the following question: Let $V$ be the set of all $2\times2$ matrices $A=(a_{ij})$ with complex entries and trace zero. Show that $V$ is a vector space over the field of reals. Let $W$ be the set of all matrices $A=(a_{ij})$ in $V$ such that $a_{21}=-{\bar{a_{12}}}$ . Prove that $W$ is a subspace of $V$ and find a basis of $W$ . So firstly I need to prove that $V$ is a vector space, but how do I prove it over the field of reals? If the entries are complex, then maybe I can write $A\in M_{2\times2}(\mathbb{C})$ , and hence I can easily prove $V$ to be a vector space by showing closure under addition and scalar multiplication and the existence of the zero vector. Is it that if I choose $A\in M_{2\times2}(\mathbb{R})$ , I will have to write the matrix in terms of the basis ${(1,i)}$ ? Please help, as I am pretty new to linear algebra so I lack rigorous proof methods.
This is just an expansion of my comment . Let's consider the following statement: Suppose we have a vector space $X$ . A subset $Y$ of $X$ is a subspace if it contains $0$ (or, is just non-empty), and is closed under addition and scalar multiplication. And, every subspace is a vector space in its own right. Everyone learning linear algebra learns some variation of this statement, either as a definition of subspaces, or a theorem. And, applying this with $X = A$ and $Y = V$ , one could certainly see why $V$ would be considered a subspace of $A$ . The problem is, the above statement is missing things: small details that are usually not spelled out explicitly. It's the kind of thing that more experienced people can usually remember without a problem, but will trip up students from time to time. What's missing above is the field and operations attached to the aforementioned vector spaces. Remember, a vector space is more than a set. A vector space is technically an ordered quadruple: $(
|linear-algebra|matrices|
0
Is exact cylinder a cylinder object
This is a question about Cisinski's model structure ex nihilo, chapter 2.4 in his book Higher categories and homotopical algebra. It defines what is an exact cylinder $I$ in the presheaf category. With a class of $I$ -anodyne extension this gives a model structure. I just wonder if $IX \to X$ is a weak equivalence. It seems so for Kan-Quillen model structure and Joyal model structure for indirect reasons. I'd like to know whether this hold in general. Initially one may hope $IX \to X$ is in fact an $I$ -homotopy equivalence. But this requires to construct a natural transformation $I\circ I \to I$ that gives a homotopy between $\partial_0\circ \sigma$ and $\mathrm{id}$ , which shows no clue for me. Edit: I see that $X \to IX$ is $I$ -anodyne and thus weak equivalence.
The morphism $I\otimes X\to X$ is indeed a weak equivalence. This is because $\partial I\otimes X\to I\otimes X\to X$ is a cylinder object (with Cisinski's definition of that term), so the map $\{\varepsilon\}\otimes X\to I\otimes X\to X$ is the identity on $X$ for $\varepsilon=0,1$ . The map $\{\varepsilon\}\otimes X\to I\otimes X$ is indeed an $I$ -anodyne map because $I$ is exact (so preserves initial objects, and then we use $\mathrm{An}1$ in Definition 2.4.11). But $\mathrm{id}_X$ is a weak equivalence as well, so by the $2$ -out-of- $3$ property of weak equivalences, it follows that $I\otimes X\to X$ is also a weak equivalence.
|category-theory|homotopy-theory|model-categories|
1
Gradient of the product of two functions, each defined on a manifold
My question is very much related to this other question: Gradient on product manifold Let $(M_1, g_1) \times (M_2, g_2)$ be a product manifold and let $f : M_1 \times M_2 \rightarrow \mathbb{R}$ be a function defined as $f(x,y) = h_1(x) h_2(y)$ where $h_i : M_i \rightarrow \mathbb{R}$ . Is it true that $df_{(x,y)}(V, W) = d(h_1)_x (V) h_2(y) + h_1(x) d(h_2)_y (W)$ for any two vectors $(V, W) \in T_x M_1 \times T_y M_2$ ? How does this relate to the case of $M_1 = M_2 = \mathbb{R}$ with the standard metric, where $\nabla f = (h_1'(x)h_2(y), h_1(x) h'_2(y))$ ? Thank you very much for your help!
So I think I figured out the answer. From the definition of $\nabla f$ as the unique vector field such that $g(\nabla f(x,y), W) = df_{(x,y)}(W)$ for any $W \in T_{(x,y)} M_1 \times M_2$ we know that, writing $W = (U, V)$ for some $U \in T_x M_1$ and $V \in T_y M_2$ then $g(\nabla f(x,y), W) = df_{(x,y)}(W) = d(h_1)_x(U)h_2(y) + h_1(x)d(h_2)_y(V) = g_1(\nabla h_1(x), U)h_2(y) + h_1(x)g_2(\nabla h_2(y), V) = g((h_2(y)\nabla h_1(x), h_1(x)\nabla h_2(y)), (U, V))$ and so $\nabla f(x,y) = (h_2(y)\nabla h_1(x), h_1(x)\nabla h_2(y))$
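To see that this matches the Euclidean case $M_1 = M_2 = \mathbb{R}$, here is a small finite-difference check in Python (the choices $h_1 = \sin$, $h_2 = \exp$ and the evaluation point are just illustrative):

```python
import math

# Sanity check in the Euclidean case: for f(x, y) = h1(x) * h2(y),
# the gradient should be (h1'(x) h2(y), h1(x) h2'(y)).
h1, d_h1 = math.sin, math.cos
h2, d_h2 = math.exp, math.exp

def f(x, y):
    return h1(x) * h2(y)

def num_grad(x, y, eps=1e-6):
    # central finite differences in each coordinate
    fx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    fy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    return fx, fy

x0, y0 = 0.7, -0.3
gx, gy = num_grad(x0, y0)
exact = (d_h1(x0) * h2(y0), h1(x0) * d_h2(y0))
err = max(abs(gx - exact[0]), abs(gy - exact[1]))
```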
|differential-geometry|riemannian-geometry|
1
How to put 6 rooks so that they won't attack each other?
We're presented with the following chessboard, and we aim to determine the number of arrangements in which 6 rooks can be placed without threatening each other. It's crucial to note that rooks cannot be placed on the hashed squares. How many such arrangements are possible? I'm trying to determine the coefficient of $x^6$ in the rook polynomial. However, solving it directly seems overly complex. I attempted to apply the inclusion-exclusion principle, but unfortunately, my efforts were unsuccessful.
There are $6!$ ways to arrange rooks so that none beat each other. But we must subtract the number of arrangements where rooks take prohibited spots. Using inclusion-exclusion I got the following expression: $$6!-8*5!+(6+5+4+3+3+1)*4!-(11+7+4+1+1)*3!+(3+2+1+2+1)*2!-1.$$ The number of arrangements where $k$ rooks take the prohibited squares can be calculated manually, looking at the picture. So the answer is $$161.$$
|combinatorics|polynomials|generating-functions|
0
Cardinal of the family of all finite subsets of rational numbers contained in [0,1]
Is the set $\mathbb{H}=\{ H: H \subseteq [0,1] \cap\mathbb{Q},\ |H| < \aleph_0 \}$ countable?
Yes it is. We know that $\mathbf{card}(\mathbb Q) = \aleph_0$ . As $[0,1] \cap \mathbb Q \subset \mathbb Q$ and $[0,1] \cap \mathbb Q$ is not finite, we can conclude that $\mathbf{card}([0,1] \cap \mathbb Q) = \aleph_0$ . Said differently, $[0,1] \cap \mathbb Q$ is countable. For any $n\in \mathbb N^*$ the number of subsets of size $n$ of a countable set is countable, which is often written ${\aleph_0}^n = \aleph_0$ . When you consider the set of all finite subsets, you are building a countable union of countable sets, which is countable because it is no more than $\mathbf{card}(\mathbb N \times \mathbb N) = {\aleph_0}^2 = \aleph_0$ .
|elementary-set-theory|
0
Suppose that $X \sim U(-\pi/2, \pi/2)$. Find the pdf of $Y = \tan(X)$.
Suppose that $X \sim U(-\pi/2,\pi/2)$ . Find the pdf of $Y = \tan(X)$. Make sure to define the support of the density function. My work so far: $F_Y(y) = P(Y \le y) = P(\tan(X) \le y) = P(X \le \arctan(y)) = \frac{\arctan(y) + \pi/2}{\pi}$ ; taking the derivative gives $\frac{1}{\pi + \pi y^{2}}$ . This is the Cauchy distribution. Thanks for the help in the comments!
Here is another proof using the change-of-variables theorem in probability. We can use it because $\phi(x) =\tan(x)$ is a diffeomorphism of class $C^1$ on $( -\frac{\pi}{2} , \frac{\pi}{2} )$ , as $ \phi^{-1}(x)=\arctan(x)$ exists and is necessarily continuous because $(\phi^{-1}(x) )' = \frac{1}{1+x^2}$ and this derivative is continuous. 1- By definition we have that $E[g(X)] = \int g(x)f(x)\, dx$ with $f(x)$ the probability density of the r.v. $X$ . $E[ g(\tan(X)) ] = \int_{-\frac{\pi}{2}} ^{\frac{\pi}{2}} g(\tan(x))f(\tan(x)) \,d\tan(x)$ with $g$ a measurable function. 2- Change of variable: $ t = \tan(x) \Rightarrow dt = (1+\tan^2(x))\, dx = (1+t^2)\,dx \Rightarrow dx = \frac{dt}{1+t^2} $ ; moreover $- \infty < t < \infty $ . Hence: $E[ g(\tan(X)) ] = \int_{-\frac{\pi}{2}} ^{\frac{\pi}{2}} g(\tan(x))f(\tan(x)) \,d\tan(x) = E[ g(t) ] = \int_{-\infty} ^{\infty} g(t) \frac{1}{1+t^2}\, dt $ 3- From "2-" the density function is proportional to $\frac{ \alpha}{1+t^2}$ . $\int _{-\infty}^{\infty} \frac{ \alpha}{1+t^2}\, dt = \pi \alpha$ a
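A Monte Carlo sketch in Python (sample size, seed, and evaluation points are illustrative) comparing the empirical CDF of $\tan(X)$ with the Cauchy CDF $F(y)=\arctan(y)/\pi + 1/2$:

```python
import math
import random

# Sample X ~ U(-pi/2, pi/2), set Y = tan(X), and compare the empirical
# CDF of Y with the Cauchy CDF F(y) = arctan(y)/pi + 1/2.
random.seed(0)
N = 100_000
samples = [math.tan(random.uniform(-math.pi / 2, math.pi / 2))
           for _ in range(N)]

def cauchy_cdf(y):
    return math.atan(y) / math.pi + 0.5

def empirical_cdf(y):
    return sum(1 for s in samples if s <= y) / N

max_gap = max(abs(empirical_cdf(y) - cauchy_cdf(y))
              for y in (-2.0, -1.0, 0.0, 1.0, 2.0))
```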
|probability|statistics|probability-distributions|
0
Demonstrate that: $\frac{1-a^2+c^2}{5c(3a+2b\sqrt{2})}+\frac{1-b^2+a^2}{5a(3b+2c\sqrt{2})}+\frac{1-c^2+b^2}{5b(3c+2a\sqrt{2})} \geq \frac{6}{5}$
The question Let $a,b,c \in (0,\infty)$ with $a^2+b^2+c^2=\frac{1}{4}$ . Demonstrate that: $$\frac{1-a^2+c^2}{5c(3a+2b\sqrt{2})}+\frac{1-b^2+a^2}{5a(3b+2c\sqrt{2})}+\frac{1-c^2+b^2}{5b(3c+2a\sqrt{2})} \geq \frac{6}{5}$$ The idea I just thought that if we get the first relation to equal 1 it would be very useful: $4(a^2+b^2+c^2)=1=>\frac{1-a^2+c^2}{5c(3a+2b\sqrt{2})}+\frac{1-b^2+a^2}{5a(3b+2c\sqrt{2})}+\frac{1-c^2+b^2}{5b(3c+2a\sqrt{2})}=\sum \frac{3a^2+4b^2+5c^2}{5c(3a+2b\sqrt{2})}$ $\sum \frac{3a^2+4b^2+5c^2}{5c(3a+2b\sqrt{2})} \geq \sum \frac{3(a^2+b^2+c^2)}{5c(3a+2b\sqrt{2})}=\sum \frac{3}{20c(3a+2b\sqrt{2})}$ This is everything I could do... I thought of using Bergstrom or CBS to show the inequality...also I checked the case of equality and it doesn't happen when $a=b=c$ so the inequality of means isn't helpful. I hope one of you can help me! Thank you!
By AM-GM and Cauchy-Schwarz: $$5c(3a+2b\sqrt{2})\le \left(\frac{5c+(3a+2b\sqrt{2})}{2}\right)^2\le (3a^2+4b^2+5c^2)\left(\frac{3+2+5}{4}\right)$$ Following your idea: $$\sum \frac{3a^2+4b^2+5c^2}{5c(3a+2b\sqrt{2})} \geq \sum \frac{3a^2+4b^2+5c^2}{(3a^2+4b^2+5c^2)(\frac{3+2+5}{4})}=\frac{6}{5}$$
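A numerical spot-check of both the intermediate bound and the final inequality on random admissible triples (a sketch, not a proof; the sampling scheme and seed are arbitrary):

```python
import math
import random

# For random a, b, c > 0 rescaled so that a^2 + b^2 + c^2 = 1/4, check
# the AM-GM/Cauchy-Schwarz bound and the final inequality.
random.seed(1)
r2 = math.sqrt(2)

def lhs_sum(a, b, c):
    # with the constraint, each numerator 1 - a^2 + c^2 equals 3a^2 + 4b^2 + 5c^2
    return ((1 - a * a + c * c) / (5 * c * (3 * a + 2 * b * r2))
            + (1 - b * b + a * a) / (5 * a * (3 * b + 2 * c * r2))
            + (1 - c * c + b * b) / (5 * b * (3 * c + 2 * a * r2)))

ok_bound = True
ok_final = True
for _ in range(1000):
    a, b, c = (random.uniform(0.01, 1.0) for _ in range(3))
    s = 2 * math.sqrt(a * a + b * b + c * c)   # rescale to a^2+b^2+c^2 = 1/4
    a, b, c = a / s, b / s, c / s
    # bound: 5c(3a + 2b*sqrt(2)) <= (3a^2 + 4b^2 + 5c^2) * (3+2+5)/4
    if 5 * c * (3 * a + 2 * b * r2) > (3*a*a + 4*b*b + 5*c*c) * 2.5 + 1e-12:
        ok_bound = False
    if lhs_sum(a, b, c) < 6 / 5 - 1e-9:
        ok_final = False
```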
|inequality|cauchy-schwarz-inequality|
1
Inequality with logarithms and radicals of order 4
The statement of the problem: If $x,y \in (1, \infty)$ , prove that $\sqrt[4]{x} \cdot y^{\log_x y} \ge y$ . My approach: If $x = y$ , we have that $y^{\log_x y} = y$ , and with the fact that $x > 1$ $\implies$ $\sqrt[4]{x} > 1$ , the inequality is obvious from here. If $x < y$ , then $\log_x y >1$ , and proceeding as in the previous case, we prove the inequality. The problem came when I reached the case $x > y$ . I tried to take logarithms in base $x$ or $y$ but it doesn't seem to lead anywhere. Any and all proofs will be helpful. Thanks a lot!
In case of $x \gt y$ , we have : $y \lt x \implies \log_x y \lt 1 \implies \frac{1}{\log_y x} \gt 1 \implies y^{\frac{1}{\log_y x}} \gt y$ . Note here the inequality gets reversed in the second implication because $\log_x y$ is positive when $x,y \gt 1$ . Now the solution follows from similar arguments as in $x \le y$ .
|inequality|logarithms|radicals|
1
Number of ways to distribute gifts among kids subject to the stated conditions
How many ways can we distribute gifts among kids with the conditions below? Given the conditions: There are 5 kids, denoted as A, B, C, D, and E. There are 6 gifts, denoted as 1, 2, 3, 4, 5, and 6. Only 3 kids will be given gifts. Each kid can only have 1 gift. Each gift can only be given to 1 kid. Kid A doesn't like gifts 2 and 3. Kid B doesn't like gift 2. Kid D doesn't like gifts 1, 4, and 5. We can select the 3 kids who receive gifts in $\begin{pmatrix}5 \\ 3\end{pmatrix}=10$ ways. However, considering that each child may have preferences or dislikes for certain gifts, every combination becomes distinct, which makes this difficult. Is there a faster approach?
There are $\binom53$ ways to choose kids who will get gifts. There are $\binom63$ ways to choose gifts that will be presented. There are $3!$ ways to distribute them among $3$ chosen kids ignoring the prohibitions. In total, $\binom53\cdot\binom63\cdot3!$ ways. Now we begin correcting this answer using the inclusion-exclusion principle. The gift $2$ was chosen and presented to the kid A. How many ways to do that? We choose $2$ more gifts, give them to $2$ kids. $\binom42\cdot\binom52\cdot2!$ ways to do it. There are $6$ prohibited pairs kid-gift. So the answer became $\binom53\cdot\binom63\cdot3! - 6\cdot \binom42\cdot\binom52\cdot2!$ at the moment. Now we add the arrangements where two kids got undesired gifts. If those are A and B then they got gifts $3$ and $2$ respectively. Any of the $3$ other kids grabbed any of the $4$ other gifts. $3\cdot4$ ways to do that. If A and D got bad gifts, then there are $2\cdot 3$ ways to do that. And the third kid gets any other gift, $3\cdot4$ ways
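A brute-force check in Python; the last inclusion-exclusion terms (the pair counts $12+72+36$ and the triple count $3$) are my reading of the case analysis above, and the two computations agree:

```python
from itertools import combinations, permutations

# Brute force: choose 3 kids, assign them 3 distinct gifts, skip disliked pairs.
kids = "ABCDE"
gifts = range(1, 7)
dislikes = {"A": {2, 3}, "B": {2}, "D": {1, 4, 5}}

count = 0
for trio in combinations(kids, 3):
    for assignment in permutations(gifts, 3):
        if all(g not in dislikes.get(k, set())
               for k, g in zip(trio, assignment)):
            count += 1

# Inclusion-exclusion: C(5,3)*C(6,3)*3! - 6*C(4,2)*C(5,2)*2! + (12+72+36) - 3
incl_excl = 10 * 20 * 6 - 6 * 6 * 10 * 2 + (12 + 72 + 36) - 3
```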
|combinatorics|combinations|
1
Looking for an existing proof for a property of triangles
In my paper, I need the following lemma. I can prove it, but it is a little lengthy to be put inside the paper. I am wondering is there any existing proof that I can quote. Lemma 1 : Let the nodes of a triangle $K$ be $A$ , $B$ and $C$ , and the longest edge length of $K$ be $h$ (see attached figure). Then, the following inequality holds $$\min_{P \in K}\max(|PA|, |PB|, |PC|) \le \frac{\sqrt{3}}{3} h\:.$$ Moreover, the equality holds only for a regular triangle with $P$ being the circumcenter of $K$ . The 3D version of the above Lemma is as follows. Lemma 2 For any tetrahedron $K$ , the vertices of which are denoted by $A,B,C,D$ , let $h$ be the longest edge length. Then, $$\min_{P \in K} \max (|PA|, |PB|,|PC|, |PD|) \le \frac{\sqrt{6}}{4} h $$ Moreover, the equality holds only for a regular tetrahedron $K$ with $P$ selected as the circumcenter of $K$ ;
This is well discussed in Jung's theorem: https://en.wikipedia.org/wiki/Jung's_theorem
|reference-request|inequality|triangles|
0
Is $\bar{\mathcal{E}}=\mathcal{E}\cup F$, where $\mathcal{E}$ is the set of all elementary subsets of $\mathbb{R}$ and $F$ is finite or countable?
Let $m^*$ be an outer measure and $\Delta$ denote the symmetric difference. Define a semi-metric $d: \mathscr{P}(\mathbb{R}) \times \mathscr{P}(\mathbb{R})\to [0, \infty]$ by $d(A, B)=m^* (A \Delta B)$ . Denote by $\mathcal{E}$ the set of all elementary subsets of $\mathbb{R}$ , where an elementary subset of $\mathbb{R}$ is a finite union of disjoint intervals (with any endpoint configuration) of finite length. Denote by $\bar{\mathcal{E}}$ the collection of subsets $A\subseteq\mathbb{R}$ for which there exists a sequence $(A_n)$ in $\mathcal{E}$ with $d(A, A_n)\to 0$ as $n\to\infty$ . Given the above information, I'm trying to interpret the set $\bar{\mathcal{E}}$ from its definition. First, I can see that any set of the form $\mathcal{E}\cup F$ satisfies the definition, where $F$ is finite or countable. This comes from the fact that $F$ finite or countable implies $m^*(F)=0$ and the subadditivity property of $m^*$ . But for the converse, I'm still unable to prove that any set $\ba
It's not true that all sets in $\bar{\mathcal{E}}$ are of the form $E\cup F$ with $E\in\mathcal{E}$ and $F$ countable. Take for example $A=\mathcal{C}\cup E$ with $E\in\mathcal{E}$ , where $\mathcal{C}$ is the Cantor set.
|real-analysis|measure-theory|
0
How does one perform induction on integers in both directions?
On a recent assignment, I had a question where I had to prove a certain statement to be true for all $n\in\mathbb{Z}$ . The format of my proof looked like this: Statement is true when $n=0$ "Assume statement is true for some $k\in\mathbb{Z}$ " Statement must be true for $k+1$ Statement must be true for $k-1$ My professor said the logic is flawed because of my second bullet point above. She says that since mathematical induction relies on the well-ordering principle and since $\mathbb{Z}$ has no least or greatest element, that using induction is invalid. Instead, she says my argument should be structured like this: Statement is true when $n=0$ "Assume statement is true for some integer $k\geq0$ " Statement must be true for $k+1$ "Assume statement is true for some integer $k\leq0$ " Statement must be true for $k-1$ I am failing to understand where my logic fails and why I need to split the assumptions like she is suggesting. Could someone explain why relying on the well-ordering principl
If a poset $P$ is order-isomorphic to $\Bbb N$ , then one can perform induction on $P$ . Let $\phi: \Bbb N \to P$ be such an isomorphism, then we use $\phi(0)$ as the base case and $\phi(n)\implies\phi(n+1)$ as the inductive step. It's easy to construct the isomorphisms for both $\Bbb Z_{\geq k}$ and $\Bbb Z_{\leq k}$ so that your version of induction is perfectly valid; just take $n\mapsto n-k$ and $n\mapsto k-n$ . Since $\Bbb Z_{\geq k}\cup\Bbb Z_{\leq k} = \Bbb Z$ , the result holds for all integers.
|logic|proof-writing|induction|
0
Show that a state $\rho=\sum_i p_i|e_i\rangle\!\langle e_i|$ has purifications of the form $\sum_i s_i |e_i\rangle\otimes|f_i\rangle$
Let $ρ_A = \sum_{i=1}^r p_i|e_i⟩⟨e_i|$ , where $p_i$ are the nonzero eigenvalues of $ρ_A$ and $|e_i⟩$ corresponding orthonormal eigenvectors. If some eigenvalue appears more than once then this decomposition is not unique. Show that, nevertheless, any purification $|\psi_{AB}⟩$ of $ρ_A$ has a Schmidt decomposition of the form $|\psi_{AB}⟩ = \sum_{i=1}^r s_i|e_i⟩ ⊗ |f_i⟩$ ,...(*) with the same $|e_i⟩$ as above. Hint: Start with an arbitrary Schmidt decomposition and rewrite it in the desired form. My try: Following the hint,let $|\Psi_{AB}\rangle\in H_A\otimes H_B$ be a purification of $\rho_A$ with $H_B$ of dimension dim $H_B \ge $ rank $\rho_A$ . By theorem 2.20 , $|\psi_{AB}⟩ = \sum_{i=1}^r \tilde s_i|\tilde e_i⟩ ⊗ |f_i⟩$ , $\tilde s_i >0$ with the $|\tilde e_i\rangle\in H_A$ orthonormal and $|f_i\rangle \in H_B$ so that $\rho_A=Tr_{B}(|\psi_{AB}\rangle\langle \psi_{AB}|)=\sum_{i=1}^r\tilde s_i^2|\tilde e_i\rangle \langle\tilde e_i|$ I can relate the bases $B=\{|e_i\rangle\}_{i=1}^n$
You correctly deduced that singular vectors $\left|\tilde{e}_i\right>$ of $\Psi_{AB}$ will be eigenvectors of $\rho$ . What is also important is that the eigenvalue of $\left|\tilde{e}_i\right>$ is equal to the square of the singular value $\tilde{s}_i$ . This implies that the change of basis from $B$ to $\tilde{B}$ is not arbitrary: $\left|\tilde{e}_i\right>$ is not just a linear combination of all eigenvectors $\left|e_i\right>$ , it is a linear combination of only eigenvectors $\left|e_i\right>$ corresponding to the singular value $\tilde{s}_i^2$ . There are different ways to express this, it seems that you prefer to work with explicit sums, so I will also write in this form. Assume $$\tilde{s}_1 = \tilde{s}_2 = \dots = \tilde{s}_{q_1} > \tilde{s}_{q_1 + 1} = \dots = \tilde{s}_{q_2} > ... > \tilde{s}_{q_{t - 1} + 1} = \dots = \tilde{s}_{q_t}\quad\text{with $q_t = n$}$$ That is, there is only $t$ different singular values, and the $k$ -th singular value appears with multiplicity $q_k
|linear-algebra|quantum-mechanics|quantum-computation|quantum-information|
0
Probability of getting exactly one pair on rolling five dice
I have a game with five dice with six faces (normal dice). The player rolls all the dice and what I am trying to work out is the probability of rolling exactly two dice with the same number. For example, XXYYZ would not be allowed as it has two pairs. I originally had the following equation: $$\frac{6 \times 1 \times 5 \times 5 \times 5 \times 4}{6^5} = \frac{3000}{7776}$$ But after doing some simulations of the game I've found the value is somewhere around $67\%$ . Does anyone have any ideas to help me? I've tried thinking of it as a slot machine with 5 reels where each reel has 6 numbers, but still no success. I plan to work out the probability of a triple later, as well as a quadruple.
Assuming the condition is to have exactly one pair of dice with the same number and the other three to have distinct numbers - so all permutations of $aabcd$ . The repeated numbers can appear on any two of the five rolls and could have any of the six numbers ( $1-6$ ), so there are a total of $\binom{5}{2} \times 6$ ways. For the other three rolls, we have $5 \times 4 \times 3$ ways. So the required probability would be $$\frac{\binom{5}{2} \times 6 \times 5 \times 4 \times 3}{6^5} \approx 46.3 \,\%$$ If you could post the code you are using for simulations, perhaps we could see where the difference comes from.
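An exhaustive check in Python over all $6^5 = 7776$ equally likely rolls (not a simulation, so there is no sampling noise):

```python
from itertools import product
from collections import Counter

# Count rolls whose multiset of face counts is exactly {2, 1, 1, 1}:
# one pair plus three distinct singletons.
total = 0
hits = 0
for roll in product(range(1, 7), repeat=5):
    total += 1
    if sorted(Counter(roll).values()) == [1, 1, 1, 2]:
        hits += 1

probability = hits / total
```

The count matches $\binom{5}{2}\times 6\times 5\times 4\times 3 = 3600$, i.e. about $46.3\%$.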
|probability|combinatorics|
1
Can I use the BFGS algorithm in combination with a L1-Norm penalized LLH to get exact zeros?
I am working along the paper Predicting the Long Term Stock Market Volatility: A GARCH-MIDAS Model with Variable Selection . It uses a penalized likelihood PLLH = LLH - L1-Norm (equation 7) and always speaks of "non-zero" and "shrunken to zero" estimates: We choose the optimal tuning parameter using Generalized Information Criteria (GIC), and select the variables with non zero parameter estimates I coded it in R for a simulation study, so I know the true variables. I also tried a proximal gradient descent variant, but that struggles to converge and takes a lot longer. So now I am using optim(method="BFGS") and it works quite well, but it never returns exact zero estimates, only values in the ballpark of 1e-7 to 1e-13 for the non-active variables. The gradient function is a numerical approximation of the PLLH. Can I even achieve exact zero estimates without a threshold, or did I do something wrong?
The BFGS method is a sophisticated form of gradient descent, but a descent method nonetheless. The $1$ -norm partial derivatives (away from the axes) are equal to $\operatorname{sign}(\text{component})$ . If a component of the current solution is doomed to be non-active, it will oscillate around $0$ , as it is pushed to either side of $0$ without ever hitting it exactly. To sum up, this is perfectly normal and, as you hinted, a threshold should be applied afterward if one wants to set non-active components exactly to $0$ .
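To make the contrast concrete, here is a minimal one-dimensional sketch (my own illustration, not code from the question) of a proximal step, which does return exact zeros because the soft-threshold handles the $|x|$ term in closed form instead of descending along its gradient:

```python
def soft_threshold(v, t):
    # proximal operator of t*|x|; this is what produces exact zeros
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def prox_grad_1d(a, lam, step=0.25, iters=200):
    # minimize 0.5*(x - a)**2 + lam*|x| by proximal gradient descent
    x = 0.5
    for _ in range(iters):
        x = soft_threshold(x - step * (x - a), step * lam)
    return x
```

For $|a| \le \lambda$ the minimizer is exactly $0$ and the iterate reaches it; a plain descent method on the same objective would only hover near $0$, as described above.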
|optimization|estimation|
0
Reference request: Why is an integrable system called an integrable system and why is the dynamical billiard on a disk completely integrable?
I am seeking detailed reference or references to help me understand the following: Relevant history and motivation behind the term "integrable system" with appropriate primers The meaning of Remark 3.1 from this article : Note that the entire phase space of the circular billiard map -- which topologically resembles a cylinder -- is fully foliated by homotopically non-trivial invariant curves $C_\omega = \mathbb{R}/2\pi R\mathbb{Z}\times \{\pi\omega\}$ . When observed in the context of the billiard table, this implies that the billiard table is thoroughly foliated by caustics (the centre of the disc corresponds to a degenerate caustic for orbits with $\varphi=\pi/2$ , i.e. the diameter). From this perspective, circular billiards are an example of integrable billiards. I am interested in this particular example because I have heard the phrase "XYZ is not chaotic because it is an integrable system" or that "Well, XYZ is analogous to a billiard in a disk, which is an integrable system!" in
I will try to give you an answer, though with a slightly different approach. There are several things to be agreed upon before entering into this discussion. First, what is "chaotic dynamics"? Although, to my knowledge, there is no universally accepted mathematical definition of chaos, we do have ways to "classify" what a chaotic system would be. We could, among other things, require the system to be topologically transitive : let $f:X\rightarrow X$ be a continuous map; we say that $f$ is topologically transitive if, for every pair of non-empty open sets $A,B\subset X$ , there exists an integer $n$ such that $$f^n(A)\cap B\neq\emptyset.$$ Second, what is "integrability"? This is one mathematical concept that has been approached in several different ways. Dynamicists and mathematicians who work on billiards would most likely prefer the definition that goes along the lines: a billiard modelled on a table with boundary $\partial \Omega$ is said to be integrable if $\Omega\subset \mathbb{
|reference-request|dynamical-systems|chaos-theory|hamilton-equations|billiards|
0
Is the Nash Equilibrium example in a "Beautiful Mind" accurate?
I was wondering whether the Nash Equilibrium example shown in the movie A Beautiful Mind is accurate? And if not, what's wrong with it? Thanks
Payoffs: mine on the left, my friend's on the right. 1,1 - we both get (not pretty) brunettes. 1,5 - I get the not pretty one, my friend gets the pretty one. 5,1 - I get the pretty one, my friend gets the not pretty one. 0,0 - we both get nothing. 0,0 is the Nash equilibrium state (where they would end up if all decided to go for the blonde): "no agent can unilaterally change his/her strategy and be better off", as pointed out in the other answer. 1,1 is Pareto optimal. (Pareto optimality is the state at which resources in a given system are allocated in a way that one dimension cannot improve without another worsening.) Nash incentivized his friend/friends to choose 5,1 (Nash gets the pretty one, friend gets the not pretty one) - the name of this state is ... I don't know
|game-theory|nash-equilibrium|popular-math|
0
Is the positive fragment of second-order logic with full semantics compact?
There are two slightly different versions of compactness: If $\Delta$ is finitely satisfiable, then $\Delta$ is satisfiable. If $\Gamma \models \varphi$ then there exists a finite subset $\Gamma_0 \subset \Gamma$ such that $\Gamma_0 \models \varphi$ . I'm interested in the second one here because positive second-order logic does not have actual contradictions. For second-order logic though, these two notions line up. Consider second-order logic with a single non-empty domain. We have function and relation symbols as vocabulary items. A constant is a nullary function. Additionally, quantifiers can introduce functions and relations of arbitrary arity (e.g. $\exists f / 2$ or $\forall R / 3$ ). Suppose our connectives are $\land$ , $\lor$ and $\lnot$ . Consider the fragment without $\lnot$ . Let's call this positive second-order logic. Positive second-order logic with Henkin semantics is definitely compact. I'm wondering whether positive second-order logic with full semantics is compact.
Compactness does not hold for positive second-order logic with full semantics. Take the language with a single binary relation $\#$ as well as equality $=.$ Take $\Gamma$ to consist of: the sentence $\forall x\forall y (x=y\vee x\# y)$ and, for each $n\geq 2,$ the sentence $\exists x_1 \dots \exists x_n \bigwedge_{1\leq i<j\leq n} x_i\# x_j$ - call this $G_n$ Take $\varphi=\varphi_1\vee\varphi_2$ where: $\varphi_1=\exists x\exists y (x=y\wedge x\#y)$ $\varphi_2=\exists f(f\text{ is injective}\,\wedge\, \exists x\forall y. f(y)\# x).$ Any model $M$ of $\Gamma\cup\{\neg \varphi_1\}$ satisfies ${\#}^M=(\neq)^M$ by which I mean that $\#$ is interpreted as the inequality relation $\{(x,y):x\neq y\}.$ So $M$ is infinite, and it satisfies $\varphi_2$ by taking $f$ to be any injective non-surjective function. Therefore $\Gamma\models\varphi.$ Consider a finite subset $\Gamma_0$ of $\Gamma,$ and set $n=\max\{k:G_k\in\Gamma_0\}.$ Define $M_n$ to be the structure with domain of order $n,$ with ${\#}^{M_n}=({\neq})^{
|logic|solution-verification|model-theory|second-order-logic|
1
Convergence radius for Cauchy multiplication series
Let $\sum_{n=1}^{\infty} a_n x^n$ with convergence radius $R_1$ , and $\sum_{n=1}^{\infty} b_n x^n$ with convergence radius $R_2$ . Let $R$ be the convergence radius of $\sum_{n=1}^{\infty} c_n x^n$ with $c_n := \sum_{i+j=n} a_i b_j$ . Find an example where $\max(R_1,R_2) \lt R \lt \infty$ So I've been on this for hours. I figure $\sum a_n$ needs to get cancelled at $R_2$ and $\sum b_n$ needs to get cancelled at $R_1$ , so for example if I choose $R_1,R_2 = 2,3$ then I need the sums to be something like: $$\sum a_n = {x-2 \over x-3}, \sum b_n = {x-3 \over x-2}$$ Still I have two main problems. First, I have no idea how to recreate the original series from the sums I wish them to have. Secondly, it seems like unless I add something else their Cauchy multiplication series will just be an infinite sum of 1's which has radius 0. I apologize in advance, I may have a lot of wrong assumptions in here and I would really like help that actually goes down to details and small passes from one phrase
You were on the right track! Here's a way to successfully apply your idea . . . If $f,g$ are given by \begin{align*} f&= \frac{2-x}{(1-x)(3-x)} = \frac{1}{2(1-x)} + \frac{1}{2(3-x)} \\[4pt] g&= \frac{1-x}{(2-x)(4-x)} = \frac{3}{2(4-x)} - \frac{1}{2(2-x)} \\[4pt] \end{align*} then we have $$ fg = \frac{1}{(3-x)(4-x)} = \frac{1}{3-x} - \frac{1}{4-x} $$ and it follows that $f$ can be expressed as a convergent power series about $x=0$ with radius $R_1=1$ . $\\[4pt]$ $g$ can be expressed as a convergent power series about $x=0$ with radius $R_2=2$ . $\\[4pt]$ $fg$ can be expressed as a convergent power series about $x=0$ with radius $R=3$ . Note that each of the parts of the partial fraction decompositions of $f,g,fg$ can be expressed as a geometric power series. Then for $f$ we have \begin{align*} f&= \frac{1}{2(1-x)} + \frac{1}{2(3-x)} \\[4pt] &= {\large{\sum_{n=0}^\infty}} \Bigl( {\small{\frac{1}{2}}} \Bigr) x^n + {\large{\sum_{n=0}^\infty}} \Bigl( {\small{\frac{1}{6}}} {\cdot} {\small{\
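As a sanity check (my own addition, using the coefficient formulas read off from the geometric expansions in this answer), the Cauchy product of the two series can be verified numerically, and the coefficient ratio confirms the radius $R = 3$:

```python
# coefficients from the partial fraction decompositions:
#   f  = 1/(2(1-x)) + 1/(2(3-x))
#   g  = 3/(2(4-x)) - 1/(2(2-x))
#   fg = 1/(3-x) - 1/(4-x)
# using the geometric expansion 1/(k-x) = sum_n x^n / k^(n+1)
def a(n): return 0.5 + 0.5 / 3 ** (n + 1)
def b(n): return 1.5 / 4 ** (n + 1) - 0.5 / 2 ** (n + 1)
def c(n): return 1.0 / 3 ** (n + 1) - 1.0 / 4 ** (n + 1)

# the Cauchy product of the a- and b-series reproduces the c-series
for n in range(15):
    conv = sum(a(i) * b(n - i) for i in range(n + 1))
    assert abs(conv - c(n)) < 1e-12

# the coefficient ratio c(n)/c(n+1) tends to 3, the radius of fg
ratio = c(20) / c(21)
```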
|calculus|sequences-and-series|convergence-divergence|
1
Showing $\sin^{12}\theta+3\sin^{10}\theta+3\sin^8\theta+\sin^6\theta+2\sin^4\theta+2\sin^2\theta-2=1$, given $\cos\theta=\sin^2\theta$
Let $\theta \in \mathbb R$ such that $\cos\theta=\sin^2\theta$ . Then $$\cos{\theta} + \cos^{2}{\theta} = 1 \tag a$$ We need to prove that: $$\sin^{12}{\theta} + 3\sin^{10}{\theta} + 3\sin^{8}{\theta}+\sin^{6}{\theta}+2\sin^{4}{\theta}+2\sin^{2}{\theta}-2=1 \tag b$$ I can't even get started; Wolfram Alpha and GPT do brute force, but I feel there is an elegant way (it is a class 10 problem, though a HOTS one). I tried to raise $(a)$ to the power $6$ and derive $(b)$ , but had no success.
$\cos \theta + \cos^2 \theta = 1$ , hence $\cos \theta = 1 - \cos^2 \theta = \sin^2 \theta$ , that's a good start. Then you have: $\sin^{12} \theta + 3 \sin^{10} \theta + 3 \sin^8 \theta + \sin^6 \theta = \sin^6 \theta \cdot (\sin^2 \theta +1)^3 = \cos^3 \theta \cdot (\cos \theta + 1)^3$ From here on, keep simplifying in the same way, replacing $\sin^2\theta$ with $\cos\theta$ , until you reach the result.
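A quick numerical check (my own addition): the constraint $\cos\theta + \cos^2\theta = 1$ pins down $\cos\theta = (\sqrt5-1)/2$, and substituting $\sin^2\theta = \cos\theta$ shows that the left side of $(b)$ indeed equals $1$:

```python
# cos(theta) is the positive root of c + c^2 = 1
c = (5 ** 0.5 - 1) / 2
s2 = c                       # sin^2(theta) = cos(theta)

# left-hand side of (b), written in powers of sin^2(theta)
expr = s2**6 + 3 * s2**5 + 3 * s2**4 + s2**3 + 2 * s2**2 + 2 * s2 - 2
```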
|trigonometry|
0
Prove that $Char(F) \neq 0$ given that for all $x \in E$ there exists $n$ such that $x^n \in F$
The following is a problem from our exam which I couldn't get very far with. Let $F$ be a field and $E$ a non-trivial extension of $F$ such that for all $x \in E$ there exists $n_x \in \mathbb{N}$ such that $x^{n_x} \in F$ ; prove that $Char(F) \neq 0$ My progress: Let $x \in E \setminus F$ with $x^p \in F$ ; there exists $n \in \mathbb{N}$ such that $(x+1)^n,(x+2)^n,\dots,(x+p)^n \in F$ , which means the $p$ -fold iteration of the discrete derivative on $f(x)=x^n$ gives $0$ . This was as far as I got. Hints and solutions are appreciated.
I'm pretty sure that this argument isn't optimal, but it gives a complete solution. I think you can also extend it to show more generally that if $F$ is infinite, then $E/F$ is purely inseparable. Assume that $F$ has characteristic zero and $E \neq F$ . Note that $E/F$ is algebraic. Let $x \in E \backslash F$ . Then, there is an $F$ -embedding $\sigma: E \rightarrow \Omega$ (where $\Omega$ is algebraically closed) such that $\sigma(x) \neq x$ . By the hypothesis, we see that $\sigma(x)=\omega x$ for some root of unity $\omega \neq 1$ . For the same reason, we know that $\omega x+1=\sigma(x+1)=\omega' (x+1)$ for some root of unity $\omega' \neq 1$ . It follows that $\omega \neq \omega'$ and thus $x=\frac{\omega'-1}{\omega-\omega'}$ . In particular, since $E \neq F$ , $E$ is contained in the field generated by the roots of unity over $\mathbb{Q}$ . Hence $E/F$ is automatically abelian. Next, pick some $y \in E \backslash F$ (which is algebraic over $\mathbb{Q}$ ) and let $E'=\
|field-theory|galois-theory|
1
Find the number of rectangles not containing the shaded square
I wish to find the number of rectangles that don't contain the shaded square: Image of the grid: My approach was to find the total number of rectangles and subtract the number of rectangles containing the square, but I can't calculate the latter. I'd be grateful if someone helped me.
Label the vertical lines in the grid from left to right, starting from $1$ up to $9$ . Similarly label the horizontal lines from top to bottom, from $1$ to $6$ . Now, the black square is formed by intersection of the $5^{th}$ and $6^{th}$ vertical lines with the $3^{rd}$ and $4^{th}$ horizontal lines. If the rectangle chosen should not contain the black square, the 2 vertical lines for making a rectangle should be chosen either from the set of vertical lines labelled { $1, 2, 3, 4, 5$ } OR { $6, 7, 8, 9$ }. Hence the total number of options for choosing the vertical lines is $\binom{5}{2} + \binom{4}{2}$ . Similarly, for horizontal lines, we have $\binom{3}{2} + \binom{3}{2}$ . Hence, the total number of options is : $[\binom{5}{2} + \binom{4}{2}][\binom{3}{2} + \binom{3}{2}]$
|rectangles|multigrid|
1
How does one perform induction on integers in both directions?
On a recent assignment, I had a question where I had to prove a certain statement to be true for all $n\in\mathbb{Z}$ . The format of my proof looked like this: Statement is true when $n=0$ "Assume statement is true for some $k\in\mathbb{Z}$ " Statement must be true for $k+1$ Statement must be true for $k-1$ My professor said the logic is flawed because of my second bullet point above. She says that since mathematical induction relies on the well-ordering principle and since $\mathbb{Z}$ has no least or greatest element, using induction is invalid. Instead, she says my argument should be structured like this: Statement is true when $n=0$ "Assume statement is true for some integer $k\geq0$ " Statement must be true for $k+1$ "Assume statement is true for some integer $k\leq0$ " Statement must be true for $k-1$ I am failing to understand where my logic fails and why I need to split the assumptions like she is suggesting. Could someone explain why relying on the well-ordering principl
I'd say carry on thinking for yourself and ignore the professor. Your construction is a perfectly valid corollary of the principle she wants you to use twice: once as normally stated, and a second time in the mirrored (downward) form, which is itself a trivial and perfectly valid corollary of the first. The remarks concerning the well-ordering principle are more misleading than useful.
|logic|proof-writing|induction|
0
Kaehler form in coordinates
Consider $M^n$ an $n$ -dimensional Kaehler manifold and $\omega(X,Y) := g(JX,Y)$ its Kaehler 2-form. Let $\{e_0,\dots,e_{2n-1}\}$ be an orthonormal hermitian frame, i.e., $e_{2i+1} = Je_{2i}$ , and $\{\theta_i\}$ its dual. We know that $ g = \sum_{i = 0}^{2n-1}\theta_i \otimes \theta_i $ Then, I want to obtain $\omega$ in terms of $\theta_i$ . So $$ \omega(X,Y) = \sum \theta_i\otimes\theta_i(JX,Y) = \sum \theta_{2i}(JX)\theta_{2i}(Y) = - \sum \theta_{2i+1}(X)\theta_{2i}(Y) $$ but I know that I should obtain $$ \omega = \sum_i \theta_{2i}\wedge \theta_{2i+1} $$ how can I obtain the exterior product and not the tensor product?
First notice that the complex structure acts on 1-forms with its transpose map $J^* : \theta \mapsto \theta \circ J$ . Then you can easily check that, for $k = 1 \ldots n$ $$ J^* \theta_{2k-1} = -\theta_{2k}\\ J^* \theta_{2k} = \theta_{2k-1}. $$ Then, by definition $$ \begin{align*} \omega &= \sum_{k=1}^n J^* \theta_{2k-1} \otimes \theta_{2k-1} + J^* \theta_{2k} \otimes \theta_{2k}\\ &= \sum_{k=1}^n -\theta_{2k} \otimes \theta_{2k-1} + \theta_{2k-1} \otimes \theta_{2k}\\ &=\sum_{k = 1}^n \theta_{2k-1}\wedge \theta_{2k} \end{align*} $$ Note that in this computation I used indices in $\{1, \dots, 2n\}$ . If you use yours, this formula changes a bit.
|geometry|differential-geometry|riemannian-geometry|
1
Substitution for Weierstrass Substitution
I recently learnt about the Weierstrass substitution for integrals of the form: $$\int\frac{dx}{a \sin x+b \cos x}$$ You substitute $t = \tan(\frac{x}{2})$ and proceed to algebraically manipulate the integrand to get partial fractions. Why does this not work with $\tan x$ instead? Thanks.
The relations of the trigonometric functions to $t=\tan(x)$ are $$ \cos^2(x) = \frac{1}{1+t^2},\quad \sin^2(x) = \frac{t^2}{1+t^2},\quad dx=\frac{1}{1+t^2} dt. $$ Hence, for simple linear $\sin(x)$ and $\cos(x)$ , you are going to introduce annoying square roots, without even considering the choice of root signs. One way to avoid this is to rewrite the squared sine and cosine as another linear trigonometric function, thanks to $$\cos(2x) = 1 - 2\sin^2(x) = 2\cos^2(x) -1.$$ This is the reason why it is natural to use the half-angle substitution: an understandable sacrifice in order to obtain rational functions without square roots.
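For comparison, the standard half-angle identities with $t = \tan(x/2)$ (not derived in the answer above, but standard) are already rational, with no squares to unwind:

```latex
\sin x = \frac{2t}{1+t^2}, \qquad
\cos x = \frac{1-t^2}{1+t^2}, \qquad
dx = \frac{2\,dt}{1+t^2},
\qquad \text{where } t = \tan\frac{x}{2},
```

so $\int \frac{dx}{a\sin x + b\cos x}$ turns directly into the integral of a rational function of $t$ without square roots anywhere.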
|calculus|
0
Showing that an Algebra is not a $\sigma$-Algebra
In a script I found the following example of an algebra which is not a $\sigma$ -algebra: Let $$\mathcal{I} = \{ \emptyset, \mathbb{R} \} \cup \{ (a, b] : -\infty \le a < b < \infty \} \cup \{ (a, \infty) : -\infty \le a < \infty \}$$ and let $\mathcal{A}$ be the set of all finite pairwise disjoint unions from $\mathcal{I}$ . Then $\mathcal{A}$ is an algebra over $\mathbb{R}$ , but not a $\sigma$ -algebra. Showing that it is an algebra is straightforward. But I cannot find a way to show that it is not a $\sigma$ -algebra.
The condition that makes the algebra $\mathcal{A}$ a $\sigma$ -algebra is that for every sequence $\{A_j\}$ of sets in $\mathcal{A}$ , we have $\cup_{j=1}^\infty A_j \in \mathcal{A}$ . Now consider the sets $$A_j = (a-\tfrac{1}{j},b]^c = (-\infty,a-\tfrac{1}{j}] \cup (b,\infty) \in \mathcal{A}.$$ Then $$ \cup_{j=1}^\infty A_j = \cup_{j=1}^\infty (a-\tfrac{1}{j},b]^c = \left(\cap_{j=1}^\infty (a-\tfrac{1}{j},b]\right)^c =[a,b]^c \not \in \mathcal{A},$$ since $[a,b]^c = (-\infty,a) \cup (b,\infty)$ and the interval $(-\infty,a)$ is open on the right, so it cannot be written as a finite disjoint union of members of $\mathcal{I}$ . So we have exhibited a countable family of sets in $\mathcal{A}$ whose union is not in $\mathcal{A}$ ; hence $\mathcal{A}$ is not a $\sigma$ -algebra.
|measure-theory|
1
Restaurant confusion
I usually meet with my friend Thomas to discuss engineering mathematics on Saturdays. We agreed to be taking turns when buying lunch (I buy this Saturday he buys the next saturday). On two Saturdays when he was supposed to buy he did not have the money so I bought instead. He was now owing me two meals. But something I couldn't figure out occurred on the next Saturday when he was supposed to buy lunch. This time again he didn't have the money so he suggested that I pay for the meal. He was now going to owe me 3 meals. But I suggested that I give him the money for the meal so that he buys to which he replied "it's the same thing". But it didn't seem to be the same. If a meal is 1 dollar and I bought the third time he was going to owe me 3 meals (3 dollars). If I gave him 1 dollar he was going to owe me 2 meals and 1 dollar but if he used that 1 dollar to buy the meal he was now going to owe me 1 meal and 1 dollar, a total of 2 dollars. How and why is this happening?
"If I gave him 1 dollar he was going to owe me 2 meals and 1 dollar" assumes, by your previous analysis, that he uses the dollar to pay for the meal. So, "but if he used that 1 dollar to buy the meal he was now going to owe me 1 meal and 1 dollar" is clearly false.
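One way to see it is to keep a single ledger in dollars (a sketch of my own, with a meal costing 1 dollar): in both scenarios the friend ends up owing 3 dollars, and the narrator's "2 dollars" comes from crediting the meal purchase against an old debt when it in fact only discharges this week's turn.

```python
MEAL = 1  # dollars

def debt_if_i_pay_for_his_meal():
    debt = 2 * MEAL      # he already owes two meals
    debt += MEAL         # I cover his turn again: one more meal owed
    return debt

def debt_if_i_hand_him_a_dollar():
    debt = 2 * MEAL      # two meals owed from before
    debt += 1            # the handed-over dollar is also owed back
    # he spends that dollar on the meal: this discharges his turn
    # to buy, so the purchase creates no new debt and, crucially,
    # cancels none of the old one
    return debt
```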
|arithmetic|
0
Defining a linear function on $\mathbb{R}^\mathbb{N}$
I was curious about how to define a linear function on a vector space whose basis is not countable. To have a concrete example, I thought about the space of real sequences, that is, $$\mathbb{R}^\mathbb{N}=\prod^{\infty}\mathbb{R}$$ Imagine that we wanted to define a linear function $A_{i,j}$ that acts on a sequence $x$ as a swap-elements operator: $$x=(x_1, x_2, ..., x_i, ..., x_j,...)$$ $$A_{i,j}(x) = (x_1, x_2, ..., x_j, ..., x_i,...)$$ If the basis is not countable, how can we express $x$ in terms of a linear combination of the elements of the basis $\{e_1, e_2, ...\}$ , i.e. as a vector of coordinates? And, without this notation, how can we express our function $A_{i,j}$ ? In the case of $\mathbb{R}^n$ , it is trivial to construct a swap matrix that represents such a function. Is it even possible to find an (infinite) matrix form in this other case? Edit: thank you very much for the comments. One of them made me realise that maybe my ch
Note that your $B_i$ is not linear, because $B_i(x+y)\not = B_i(x) + B_i(y)$ . Differentiation on the space of all differentiable functions is one common example of a linear map on an infinite-dimensional vector space.
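For what it's worth, the swap operator $A_{i,j}$ from the question can be defined pointwise, with no basis expansion at all. Here is a small sketch of my own, representing a sequence as a function from indices to reals:

```python
def swap(i, j):
    # A_{i,j}: exchange coordinates i and j of a sequence; defined
    # pointwise, so no (uncountable Hamel) basis is ever needed
    def A(x):
        return lambda k: x(j) if k == i else (x(i) if k == j else x(k))
    return A

A = swap(2, 5)
x = lambda k: float(k)        # the sequence 0, 1, 2, 3, ...
y = lambda k: float(k * k)    # the sequence 0, 1, 4, 9, ...
xy = lambda k: x(k) + y(k)

# linearity holds coordinate by coordinate
for k in range(10):
    assert A(xy)(k) == A(x)(k) + A(y)(k)
```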
|linear-algebra|sequences-and-series|
0
A Surprising Sum
My friend gave me the following series about trigonometric functions: $$ s(x) := \sin \Biggl( \sum_{n=0}^\infty \frac{(-1)^n \, \zeta(4n+2)}{2n+1} \, x^{4n+2} \Biggr) $$ and $$ c(x) := \cos \Biggl( \sum_{n=0}^\infty \frac{(-1)^n \, \zeta(4n+2)}{2n+1} \, x^{4n+2} \Biggr) $$ where $\zeta(s)$ is the Riemann zeta function. Then, $s(x)$ and $c(x)$ seem to converge to the following values $$ s(x) = \frac{\sin\bigl( \frac{\pi x}{\sqrt2} \bigr) \cosh\bigl( \frac{\pi x}{\sqrt2} \bigr) - \cos\bigl( \frac{\pi x}{\sqrt2} \bigr) \sinh\bigl( \frac{\pi x}{\sqrt2} \bigr)}{\sqrt{\cosh(\sqrt2\pi x) - \cos(\sqrt2\pi x)}} $$ and $$ c(x) = \frac{\sin\bigl( \frac{\pi x}{\sqrt2} \bigr) \cosh\bigl( \frac{\pi x}{\sqrt2} \bigr) + \cos\bigl( \frac{\pi x}{\sqrt2} \bigr) \sinh\bigl( \frac{\pi x}{\sqrt2} \bigr)}{\sqrt{\cosh(\sqrt2\pi x) - \cos(\sqrt2\pi x)}} $$ respectively. This series is too transcendental for me to prove. I would like to know the proof of this series, but I can't seem to prove it on my own. Can
The following calculations can be made rigorous for $|x| < 1$ , which is the radius of convergence of your original sums. Instead of computing them separately, it's best to join them: $$ \begin{align} f(x) &= c(x)+is(x) \\ &= \exp\left(i\sum_{n=0}^\infty\frac{(-1)^n\zeta(4n+2)}{2n+1}x^{4n+2}\right) \\ &= \exp\left(\sum_{n=0}^\infty\frac{\zeta(4n+2)}{2n+1}(ix^2)^{2n+1}\right) \\ &= \exp\left(\sum_{m=1}^\infty\sum_{n=0}^\infty\frac1{2n+1}\left(\frac{ix^2}{m^2}\right)^{2n+1}\right) \\ \end{align} $$ At this stage, it's natural to study the entire series: $$ A(z)=\sum_{n=0}^\infty\frac{z^{2n+1}}{2n+1} $$ It's just keeping the odd terms of $-\ln(1-z)$ so: $$ \begin{align} A(z) &= -\ln(1-z)+\frac12\ln(1-z^2) \\ &= \frac12\ln\frac{1+z}{1-z} \end{align} $$ so your sum is: \begin{align} f(x) &= \exp\left[\sum_{m=1}^\infty\frac12\ln\left(\frac{1+\frac{ix^2}{m^2}}{1-\frac{ix^2}{m^2}}\right)\right] \\ &= \sqrt{\frac{\prod_{m=1}^\infty \left(1+\frac{ix^2}{m^2}\right)}{\prod_{m=1}^\infty\left(1-\frac{ix^2}{
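The key series identity $A(z) = \frac12\ln\frac{1+z}{1-z}$ used above is easy to confirm numerically (a check of my own, at a real sample point inside the unit disc):

```python
import math

def A(z, terms=60):
    # partial sum of the odd-power series: sum of z^(2n+1)/(2n+1)
    return sum(z ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

z = 0.3
closed_form = 0.5 * math.log((1 + z) / (1 - z))
```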
|sequences-and-series|
0
How to obtain $(a \cdot b)^2 - (a \land b)^2 = a^2b^2$ and why $(a \land b)=|a||b||\sin(\phi)|$ if $(a \land b)^2=-|a|^2|b|^2\sin^2(\phi)$? (GA)
I read in Wiki ( https://en.wikipedia.org/wiki/Bivector ) that the antisymmetric part of the geometric product can be represented as $(a \land b)$ and that $(a \cdot b)^2 - (a \land b)^2 = a^2b^2$ I have 2 questions: How to derive $(a \cdot b)^2 - (a \land b)^2 = a^2b^2$ ? solved: $(ab)(ba) = (a \cdot b)^2 - (a \land b)^2$ Why is $(a \land b)=|a||b||\sin(\phi)|$ if $(a \land b)^2=-|a|^2|b|^2\sin^2(\phi)$ ? solved: $B=(a \land b)=|a||b||\sin(\phi)|B^*$ where $B^*$ is a unit bivector. Thanks.
This is really two questions. First, the square. Given $\mathbf{a} \cdot \mathbf{b} = \frac{1}{{2}} \left( { \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} } \right),$ $\mathbf{a} \wedge \mathbf{b} = \frac{1}{{2}} \left( { \mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a} } \right),$ we have $\begin{aligned}\left( { \mathbf{a} \cdot \mathbf{b} } \right)^2 - \left( { \mathbf{a} \wedge \mathbf{b} } \right)^2 &=\frac{1}{{4}} \left( { \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} } \right)^2 - \frac{1}{{4}} \left( { \mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a} } \right)^2 \\ &=\frac{1}{{4}} \left( { \left( {\mathbf{a} \mathbf{b}} \right)^2 +\left( {\mathbf{b} \mathbf{a}} \right)^2+ \mathbf{a} \mathbf{b} \mathbf{b} \mathbf{a}+ \mathbf{b} \mathbf{a} \mathbf{a} \mathbf{b} } \right)-\frac{1}{{4}} \left( { \left( {\mathbf{a} \mathbf{b}} \right)^2+\left( {\mathbf{b} \mathbf{a}} \right)^2- \mathbf{a} \mathbf{b} \mathbf{b} \mathbf{a}- \mathbf{b} \mathbf{a} \mathbf{a} \mathbf{b} } \right) \\ &=\frac{1}{4}
|geometric-algebras|
1
Linear programming constraint involving maximum
I am trying to formulate a much bigger problem as an integer linear program, but I am stuck with one particular constraint and am not sure how to formulate it. To put it shortly, the problem deals with finding a feasible packing of packages (of different length but the same height) into a truck. The truck has three layers (bottom, middle and upper). Due to many more constraints I have formulated the problem as follows: I have binary variables $x_{ijp}$ where $i$ represents package ID (for each package its length is known to us), $j$ represents layer and $p$ represents position within a layer (for example starting from the front of the truck positions go as 1, 2, and so on). Then, $x_{ijp} = 1$ if package $i$ is placed in layer $j$ at position $p$ . The constraint with which I am stuck is the following: We are given the total loading length of the truck $L$ and a fixed length $l < L$ . If we calculate: $p_1$ = largest index so that the length of the first $p_1$ packages in the bottommost layer is $\le l$ (let us cal
Let $w_i$ be the length of object $i$ . For each object $i$ , each layer $j$ ( $j = 1$ (bottom), $2$ (top) or $0$ (middle)), add a binary variable $d^j_i$ which is $1$ if object $i$ is on layer $j$ and its position is before $p_j$ . We add the constraints: If the objects $i$ and $i'$ are on the same row and that $i'$ is before $p_j$ and after $i$ , then $i$ is also before $p_j$ . $$x_{i,j,p} + x_{i',j,p+1} + d^j_{i'} - 2 \leq d^j_i\;\;\forall i, i', j, p$$ The relations between the different lengths: $$l \leq l_2 = \sum_i w_id^2_i \leq \sum_i w_id^0_i \leq l_1 = \sum_i w_id^1_i \leq L-l$$
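The first constraint is an implication encoding; a quick exhaustive check (my own, over all 0/1 assignments of the four variables involved) confirms that the linear inequality matches the intended logic:

```python
from itertools import product

def linear(x_ijp, x_next, d_other, d_i):
    # the proposed linear constraint
    return x_ijp + x_next + d_other - 2 <= d_i

def intended(x_ijp, x_next, d_other, d_i):
    # the implication it encodes: (x_ijp and x_next and d_other) => d_i
    return bool((not (x_ijp and x_next and d_other)) or d_i)

# the two agree on every binary assignment
for bits in product((0, 1), repeat=4):
    assert linear(*bits) == intended(*bits)
```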
|optimization|convex-optimization|linear-programming|mathematical-modeling|integer-programming|
0
If an analytic function is identically zero in a domain, can it be always divided by another analytic function in that domain?
Let's say I have two functions $f$ and $g$ , both analytic on the same domain (open and connected set) $D$ , and suppose also I was able to prove that $f = 0$ on the entire domain. Question . Is it legitimate to then claim that $$ \frac{f(z)}{g(z)}\equiv 0 $$ identically on that domain? In particular, $g(z)$ may assume a zero value in that domain, but since $f(z)$ will be zero as well the fraction is well defined in any case? I suppose this question is not strictly about complex analytic functions, nevertheless it arose in a problem where analytic functions are involved.
Strictly speaking, it is not legitimate because $f/g$ would be undetermined, regardless of the value of $f$ , at the zeroes of $g$ . In particular, if you cannot prove that $g$ is not identically zero in the domain, then you cannot define $f/g$ anywhere. However, if $g\not\equiv0$ and $g$ is analytic/holomorphic, then the zeros of $g$ are isolated singularities of $f/g$ which can be removed. Theorem 3.1 from Lang states: If $f(z)$ is bounded in some neighborhood of $z_0$ , then one can define $f(z_0)$ in a unique way such that the function is also analytic at $z_0$ . Continuity requires $f/g$ to be defined as zero at those points as well, so in a sense your claim is legitimate. Also related, as mentioned in the comments: if $g$ is not identically zero, then $f/g$ is meromorphic. From Wikipedia , "Every meromorphic function on D can be expressed as the ratio between two holomorphic functions (with the denominator not constant 0) ..." Meromorphic functions have isolated singu
|calculus|complex-analysis|analytic-functions|
1
How to type logarithmic functions into Desmos graphing calculator?
Does anyone know how to type logarithmic functions into the Desmos graphing calculator ( https://www.desmos.com/calculator )? I need to type a function in which y equals the logarithm of x with base 5. I tried: y = log (5) x and y = log 5 (x), but neither one works.
Use an underscore to enter the base: type l, o, g, then _, then 5, then ( for the argument, which produces log_5(x).
|logarithms|graphing-functions|
0
how to calculate $\sum\limits_{k=1}^{+\infty }{\arctan \frac{1}{1+k^{2}}}$
Question: how to calculate $$\sum\limits_{k=1}^{+\infty }{\arctan \frac{1}{1+k^{2}}}$$ My attempt Let $\arctan \theta =\frac{i}{2}\ln \left( \frac{i+\theta }{i-\theta } \right)$ $$S=\sum\limits_{k=1}^{+\infty }{\arctan \frac{1}{1+k^{2}}}=\underset{n\to +\infty }{\mathop{\lim }}\,\sum\limits_{k=1}^{n}{\arctan \frac{1}{1+k^{2}}}=\underset{n\to +\infty }{\mathop{\lim }}\,\sum\limits_{k=1}^{n}{\frac{i}{2}\ln \left( \frac{i+\frac{1}{1+k^{2}}}{i-\frac{1}{1+k^{2}}} \right)}$$ $$=\frac{i}{2}\underset{n\to +\infty }{\mathop{\lim }}\,\ln \prod\limits_{k=1}^{n}{\left( \frac{i+\frac{1}{1+k^{2}}}{i-\frac{1}{1+k^{2}}} \right)}=\frac{i}{2}\ln \left( \underset{n\to +\infty }{\mathop{\lim }}\,\prod\limits_{k=1}^{n}{\left( \frac{i+\frac{1}{1+k^{2}}}{i-\frac{1}{1+k^{2}}} \right)} \right)$$ $$=\frac{i}{2}\ln \left( \underset{n\to +\infty }{\mathop{\lim }}\,\prod\limits_{k=1}^{n}{\left( \frac{i\left( 1+k^{2} \right)+1}{i\left( 1+k^{2} \right)-1} \right)} \right)=\frac{i}{2}\ln \left( \underset{n\to +\infty
Note that, by the Weierstrass factorisation of $\sin(z)$ (applied at an imaginary argument), $$\sinh(\pi z) = \pi z\prod_{k=0}^{\infty} \left(1+\frac{z^2}{(k+1)^2}\right)$$ Thus the expression you were left with becomes $$\frac{i}2 \ln\left(\frac{\sqrt{1+i}}{\sqrt{1-i}}\cdot\frac{\sinh\left(\pi\sqrt{1-i}\right)}{\sinh\left(\pi\sqrt{1+i}\right)}\right)= \frac{-\pi}{8}+\frac{i}2\ln(\sinh(\pi\sqrt{1-i})\text{csch}(\pi\sqrt{1+i}))$$ I'm not going to try to simplify the part with the $\ln$ , but you could surely give it a try to see if it simplifies further. It seems unlikely though, given a pesky $\sqrt[4]{2}$ factor lingering with $e^{i\pi/8}$ because of $\sqrt{1\pm i}$ . Numerically the sum evaluates to $\approx 1.037$ .
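The numerical value quoted at the end can be reproduced with a direct partial sum (my own check; since $\arctan\frac{1}{1+k^2} < \frac{1}{k^2}$, the neglected tail past $N$ is below $1/(N-1)$):

```python
import math

N = 100_000
s = sum(math.atan(1.0 / (1.0 + k * k)) for k in range(1, N + 1))
# the tail bound guarantees the partial sum is within 1e-5 of the limit
```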
|calculus|sequences-and-series|limits|summation|products|
0
Let f be an alternating n-linear form on E, then there exists $\lambda\in\Bbb K$ such that f = $\lambda$ det$_e$.
Let $\Bbb K$ be a field ( $\Bbb R$ or $\Bbb C$ ), $E$ be a non-empty $\Bbb K$ -vector space of finite dimension $n$ , and $f$ an alternating $n$ -linear form on $E$ . Prove that there exists $\lambda\in\Bbb K$ such that $f = \lambda \cdot det_e$ . Prove that the map det $_e$ is the unique alternating $n$ -linear form on $E$ which equals $1_\Bbb K$ at $e=(e_1,e_2, ... , e_n)$ . The following are definitions or theorems that have already been proven, though I'm not sure whether we'll need them: $S_n$ is the symmetric group of degree n. $sgn(\sigma)$ is the sign of $\sigma$ . Note that det $_e$ is the determinant with respect to the basis $e$ , which maps from $E^n$ into $\Bbb K$ and is defined in the following way: $det_e(x_1,x_2, ... , x_n)$ = $\sum_{\sigma \in S_n} sgn(\sigma)\Pi_{i=1}^n x_{\sigma(i),i}$ where $\forall j \in [1,n], j\in \Bbb N$ , $x_j = \sum_{i=1}^n x_{i,j}e_i$ . $\forall \sigma \in S_n, f(x_{\sigma(1)},x_{\sigma(2)}, ... , x_{\sigma(n)}) = \text{sgn}(\sigma)f(x_1,x_2, ... ,x_n)$
You can also use the multilinearity of $f$ . Can you see that $$f(x_1,\ldots,x_n) = \sum_{j_1,\ldots,j_n=1}^n \left(\prod_{k=1}^n x_{k, j_k}\right)f(e_{j_1}, \ldots, e_{j_n})?$$ Now what happens if two of $j_k$ are the same? What happens if they are all distinct? Can you finish the proof?
|linear-algebra|abstract-algebra|
0
A question on a presumably erroneous proof
I'm working through an exercise in an algebra book that asks one to prove that, given a field $F$ and the ring of polynomials $F[x]$ over it, if $f$ is a degree 1 polynomial, then $(f)$ , the principal ideal generated by $f$ , is maximal. I understand this is a special case of the fact that ideals generated by irreducible elements are maximal. I've attempted to prove this and came to something that looks like a solution, but it appears I am either somewhere implicitly using that $\text{deg} f = 1$ , or otherwise unable to see my error. My proof: Assume for the sake of contradiction that $(f)$ is not maximal, i.e. there is some ideal $I$ such that: $$(f) \subsetneq I \subsetneq R$$ Obviously then, $I \setminus (f)$ is nonempty. By the well-ordering principle, we can consider a polynomial $g$ of minimal degree in $I \setminus (f)$ . By definition, $f \nmid g$ , as this would put $g \in (f)$ . Appealing to the division algorithm for polynomials then
You are essentially inlining the ubiquitous proof that ideals in a Euclidean domain are principal, generated by any minimal element. This implies $\,I = (g)\,$, so $\,(g)\supsetneq (f)\,\Rightarrow\, g\mid f\,$ properly; hence $f$ irreducible $\Rightarrow \,g\,$ is a unit, so $I=(g)=(1)$. Note $ $ Viewed constructively, the descent step yields Gauss's algorithm for computing inverses mod primes: if a prime $\,p\nmid a\,$ then we can invert $\,a\,$ modulo $\,p\,$ by using the descent $\,a \to p\bmod a\,$ to get a decreasing chain of multiples of $a$, eventually terminating with $1$, so $(p,a)=(1).\,$ In the OP's special case where $f$ has degree $1$ we can instead use the general fact that in a Euclidean domain, if $p\neq 0$ is a nonunit of minimal Euclidean size (here degree), then remainders mod $p\,$ are units $\,u\,$ (or $\,0)$, so if $\,p\nmid a\,$ then $\,a = q\,p + u\,$, so the unit $\,u = a-q\,p\in (a,p)\Rightarrow (a,p)=(1),\,$ thus $(p)$ is maximal (said structurally: $(p)$ is maximal because every nonzero residue in $R/(p)$ is a unit, i.e. $R/(p)$ is a field).
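The descent $\,a \to p\bmod a\,$ described above can be made concrete. A minimal sketch in Python (the helper name `inv_mod` is mine): to invert $a$ modulo a prime $p$, reduce the problem to inverting the smaller remainder $p \bmod a$.

```python
def inv_mod(a, p):
    """Invert a modulo a prime p via Gauss's descent a -> p mod a.

    Write p = q*a + r, so r = -q*a (mod p), hence
    a^(-1) = -q * r^(-1) (mod p).  The chain a > r > ... must reach 1,
    because p is prime so every remainder stays nonzero and coprime to p.
    """
    a %= p
    if a == 1:
        return 1
    q, r = divmod(p, a)
    return (-q * inv_mod(r, p)) % p

# e.g. the inverse of 3 mod 7 is 5, since 3*5 = 15 = 1 (mod 7)
print(inv_mod(3, 7))
```

The recursion terminates because each remainder is strictly smaller than the previous one, which is exactly the "decreasing chain" in the answer.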
|abstract-algebra|solution-verification|
0
Example of an uncountable subset of $\mathbb R$ which cannot be proved to have the same cardinality as $\mathbb R$
I am new to mathematical logic so forgive me if this is a bad question. I understand that the Continuum Hypothesis (CH) is independent of ZFC and therefore there exist models of ZFC in which the CH is false. In such models, by the very definition of CH being false, there must exist a set whose cardinality is strictly between that of the natural numbers ( $\aleph_0$ ) and the real numbers ( $2^{\aleph_0}$ ). Does it follow from this that there must exist an uncountable subset of $\mathbb R$ which it is impossible to prove has the same cardinality as $\mathbb R$ (using the axioms of ZFC)? If so, has an example of such a subset been documented? If not, is it known to be impossible to define one?
On one hand, sure, there are such sets. For example, you can construct one by following the proof of Hartogs's theorem. Hartogs's theorem gives that the set of all order types of well-orderings of subsets of $\mathbb{N}$ has cardinality exactly $\aleph_1$ (and is therefore uncountable). However, there are only continuum many relations on subsets of $\mathbb{N}$, so we can code a representative of each of these $\aleph_1$-many order types as a real number. Here, a representative just means some relation with that order type. This gets you a subset $\mathcal{W}$ of $\mathbb{R}$ with cardinality exactly $\aleph_1$. The coding can be done fairly explicitly (although I will leave that as an exercise in this answer). Since the Continuum Hypothesis is independent of ZFC, it's clear that ZFC cannot prove that $\mathcal{W}$ (a set with cardinality $\aleph_1$) has the same cardinality as $\mathbb{R}$ (which has cardinality $2^{\aleph_0}$). On the other hand, your explicit question was whether it is impossible to prove the sets have the same cardinality, and note the symmetric point: ZFC also cannot prove that $\mathcal{W}$ and $\mathbb{R}$ have different cardinalities, since CH holds in some models of ZFC.
|logic|set-theory|
0
What is the correct MSD function for a biased random walk on a triangular grid?
I am trying to determine the mean squared displacement $\langle r^2\rangle$ as a function of time for a discrete random walk process on a triangular grid, where each step is of size $\ell$ over a time $\tau$. For each step, there is one of six possible points to move to, with a probability of $p_n$ to move at an angle of $(n-1)\cdot \pi/3$, $n\in\{1,2,3,4,5,6\}$. The probabilities are not equal in general, but of course, they must add up to $1$. I am getting a familiar form of $\langle r^2\rangle$ as $^{*}$ $$\langle r^2\rangle=\left(4D-v^2\tau \right)t+v^2t^2$$ where $D=\ell^2/4\tau$, and the vector $\mathbf v$ is $$ \mathbf v=\frac{\ell}{\tau}\left[\begin{matrix} (p_1-p_4)+\frac12(p_2-p_5)+\frac12(p_6-p_3) \\ \frac{\sqrt{3}}{2}(p_2-p_5)+\frac{\sqrt{3}}{2}(p_3-p_6) \\ \end{matrix}\right] $$ The fact that I end up with a $\langle r^2\rangle$ equation similar to what I have seen in the simpler case of a random walk on a square grid gives me hope that I am on the right track here.
The presentation link is broken, but I am still able to independently verify quite a bit. Sanity checks: dimensional analysis checks out; extreme/particular cases check out (taking $p_1=1$, or all $p_n$ equal, gives the expected results); and experimental data from my own hand-written code shows that for $\ell=1$ and $\tau=1$ the results are most likely correct and reasonable. This leads me to believe there is probably a bug in your code rather than in your math, unless it is something silly involving $\ell$ and $\tau$. Can you confirm that everything works when $\ell=1$ and $\tau=1$? Can you share any specific runs that break, providing the corresponding $\mathbf{v}$ and $p_n$ values from your simulation? We can check if your data matches mine.
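For reference, here is a minimal Monte Carlo sketch of the kind of check described above (my own code, not the asker's; the names `p`, `msd_theory` and the specific probabilities are illustrative assumptions), comparing the empirical $\langle r^2\rangle$ with the proposed formula for $\ell=\tau=1$:

```python
import math
import random

random.seed(0)
p = [0.3, 0.2, 0.1, 0.1, 0.1, 0.2]            # step probabilities p_1..p_6
angles = [(n - 1) * math.pi / 3 for n in range(1, 7)]
steps = [(math.cos(a), math.sin(a)) for a in angles]   # unit steps, l = tau = 1

# drift velocity v = sum_n p_n * (step direction), as in the question's vector
vx = sum(pn * s[0] for pn, s in zip(p, steps))
vy = sum(pn * s[1] for pn, s in zip(p, steps))
v2 = vx * vx + vy * vy
D = 0.25                                        # D = l^2 / (4 tau)

def msd_theory(t):
    # proposed formula: <r^2> = (4D - v^2 tau) t + v^2 t^2
    return (4 * D - v2) * t + v2 * t * t

T, runs = 50, 20000
acc = [0.0] * (T + 1)                           # accumulated r^2 at each time
for _ in range(runs):
    x = y = 0.0
    for t in range(1, T + 1):
        dx, dy = random.choices(steps, weights=p)[0]
        x += dx
        y += dy
        acc[t] += x * x + y * y

for t in (10, 25, 50):
    print(t, acc[t] / runs, msd_theory(t))      # empirical vs. theoretical MSD
```

The empirical and theoretical values agree to within Monte Carlo error, which is consistent with the formula being correct for $\ell=\tau=1$.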
|probability|partial-differential-equations|random-walk|integer-lattices|simulation|
0
Can addition of noise to dynamical system reduce estimation errors
I am using a Kalman filter to estimate the states of a stochastic dynamical system which has very, very small noise (effectively zero). The filter is not aware that the noise is zero. Applying the KF to this dynamical system gives an estimate, say $\hat{\mathbf{X}}_1\left(t\right)$, at any given time $t$. Let us also assume that we know the true values of the state variables by some method (the explanation of the method is not important). Next we manually add some noise to the system and once again apply the KF to estimate the states. Let's call the new estimates $\hat{\mathbf{X}}_2\left(t\right)$, and the true values $\mathbf{X}\left(t\right)$. Under what conditions does the following hold ($\|.\|$ denotes the 2-norm)? $\|\mathbf{X}\left(t\right) - \hat{\mathbf{X}}_1\left(t\right)\| > \|\mathbf{X}\left(t\right) - \hat{\mathbf{X}}_2\left(t\right)\|$
Short answer is yes, addition of noise can improve the filter's performance, especially in the case of a nonlinear filter. Linearization adds some error, which needs to be accounted for somewhere. Your filter might be overconfident, in which case it would not respond to new measurements and could have errors in the state estimates.
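A toy illustration of the overconfidence point (my own sketch, not tied to the asker's system): a scalar filter whose internal model is slightly wrong. With zero assumed process noise the covariance and gain collapse and the estimate freezes; inflating the process noise keeps the filter responsive and reduces the error.

```python
import random

random.seed(1)

def run_filter(q, steps=400, a_true=1.01, r=0.01):
    """Scalar Kalman filter assuming x_{k+1} = x_k + w, w ~ N(0, q),
    while the TRUE dynamics are x_{k+1} = a_true * x_k (model mismatch).
    Returns the mean squared estimation error over the run."""
    x_true, x_hat, P = 1.0, 1.0, 1.0
    sse = 0.0
    for _ in range(steps):
        x_true *= a_true                            # true (noise-free) dynamics
        z = x_true + random.gauss(0.0, r ** 0.5)    # noisy measurement
        P = P + q                                   # predict (model: x stays put)
        K = P / (P + r)                             # Kalman gain
        x_hat = x_hat + K * (z - x_hat)             # measurement update
        P = (1 - K) * P
        sse += (x_true - x_hat) ** 2
    return sse / steps

err_no_noise = run_filter(q=0.0)    # filter trusts its (wrong) model exactly
err_inflated = run_filter(q=1e-3)   # artificially inflated process noise
print(err_no_noise, err_inflated)   # the inflated-q filter tracks far better
```

With $q=0$ the gain decays toward zero, so the estimate stops following the drifting truth; the added process noise keeps the gain bounded away from zero, which is exactly the "overconfidence" failure mode described above.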
|statistics|stochastic-processes|dynamical-systems|kalman-filter|statistical-mechanics|
0
Finding solutions for Real and Imaginary parts of a Complex Number.
This is my starting equation for the complex number $ \hat{k} $: $$ \hat{k}^2 = \mu \epsilon \omega - \mu \sigma \omega^2 i $$ I assumed $ \hat{k} = \alpha + \beta i $ where $ \alpha , \beta \in \mathbb{R} $. I then derived the following: $$ \alpha^4 - \mu \epsilon \omega \alpha^2 - \frac{1}{4} \mu^2 \sigma^2 \omega^2 = 0 $$ which gives us four solutions for $ \alpha $. If we set $\alpha ^2 = x $ and solve for $ x $, we get two solutions. Griffiths' Intro to Electrodynamics (3rd ed.), pg. 394, states that there are only two unique solutions for $ \alpha $ and $ \beta $, and it seems the solutions used only the degree-4 equation I am solving above. But that doesn't make sense: how can we get the solution for $ \beta $ from the equation solving for $ \alpha $? Shouldn't we also consider that there are a total of eight solutions, four for $ \alpha $ and four for $ \beta $? How can we then find unique solutions for $ \alpha $ and $ \beta $? Does it have to do with the fact that …
The initial equation, namely $$ k^2 = (\alpha + i\beta)^2 = \alpha^2 - \beta^2 + 2i\alpha\beta = \mu\epsilon\omega - i\mu\sigma\omega^2 $$ is equivalent to the system $$ \left\{ \begin{align} \alpha^2 - \beta^2 &= \mu\epsilon\omega & (1) \\ 2\alpha\beta &= -\mu\sigma\omega^2 & (2) \end{align} \right. $$ when separating the real and imaginary parts. The second equation permits the substitution $$ \beta = -\frac{\mu\sigma\omega^2}{2\alpha} \quad\quad (3) $$ into the first one, hence the biquadratic equation you wrote, i.e. $$ \alpha^4 - \mu\epsilon\omega\alpha^2 - \frac{1}{4}\mu^2\sigma^2\omega^\color{red}{4} = 0. $$ So the answer to your first question is simply given by Eq. (3): $\beta$ is determined by $\alpha$, not solved for independently. Now, the above equation is solved (after the algebraic clean-up) by $$ \alpha_\pm^2 = \frac{1}{2}\mu\epsilon\omega \left(1 \pm \sqrt{1 + \left(\frac{\sigma}{\epsilon\omega}\right)^2}\right), $$ hence $\alpha_-^2 < 0$, which is impossible, since it should be a (real) square. In the same spirit, you can exclude the remaining sign ambiguity: $\alpha = \pm\sqrt{\alpha_+^2}$, and Eq. (3) then fixes $\beta$ for each choice, so the only solutions are $(\alpha,\beta)$ and $(-\alpha,-\beta)$, i.e. $\pm\hat k$.
|complex-numbers|
1
Defining a linear function on $\mathbb{R}^\mathbb{N}$
I was curious about how to define a linear function on a vector space whose basis is not countable. To have a concrete example, I thought of the space of real sequences, that is, $$\mathbb{R}^\mathbb{N}=\prod^{\infty}\mathbb{R}$$ Imagine that we wanted to define a linear function $A_{i,j}$ that acts on a real sequence $x$ as a swap-elements operator: $$x=(x_1, x_2, ..., x_i, ..., x_j,...)$$ $$A_{i,j}(x) = (x_1, x_2, ..., x_j, ..., x_i,...)$$ If the basis is not countable, how can we express $x$ as a linear combination of the elements of a basis $\{e_1, e_2, ...\}$, i.e. as a vector of coordinates? And, without this notation, how can we express our function $A_{i,j}$? In the case of $\mathbb{R}^n$, it is trivial to construct a swap matrix that represents such a function. Is it even possible to find an (infinite) matrix form in this other case? Edit: thank you very much for the comments. One of them made me realise that maybe my ch…
It is not (usually) required that a linear function from a vector space $V$ to $W$ be representable as a matrix. It just needs to have these two properties: $$f(u+v) = f(u) + f(v)$$ $$f(\lambda v) = \lambda f(v)$$ These can be compressed into one rule, as in Vincent Batens' comment. For a finite-dimensional space, and in some countably-infinite-dimensional cases, it is possible to represent the linear function as a matrix, but this is not required. An arbitrary vector space of uncountable dimension is only guaranteed to have a basis if we accept the Axiom of Choice. An interesting example of a linear function which cannot easily be represented as a matrix is the Fourier transform.
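To make the matrix-free point concrete, here is a small sketch (the names `swap`, `u`, `v` are mine): the swap operator $A_{i,j}$ on $\mathbb{R}^{\mathbb{N}}$ defined directly by its action on sequences, with a spot check of linearity. No matrix, and no basis expansion, is involved.

```python
def swap(i, j, x):
    """A_{i,j}: exchange entries i and j of the sequence x.

    Sequences are modelled as functions from indices (1, 2, ...) to reals,
    so the operator is defined pointwise rather than via coordinates."""
    def y(n):
        if n == i:
            return x(j)
        if n == j:
            return x(i)
        return x(n)
    return y

# two example sequences and a linearity spot check for A_{2,5}
u = lambda n: 1.0 / n          # (1, 1/2, 1/3, ...)
v = lambda n: float(n * n)     # (1, 4, 9, ...)
lam = 3.0
lhs = swap(2, 5, lambda n: u(n) + lam * v(n))           # A(u + lam*v)
rhs = lambda n: swap(2, 5, u)(n) + lam * swap(2, 5, v)(n)  # A(u) + lam*A(v)
assert all(abs(lhs(n) - rhs(n)) < 1e-12 for n in range(1, 20))
```

The same pattern works for any operator on $\mathbb{R}^{\mathbb{N}}$ that is specified by what it does to each entry: linearity is checked against the two axioms directly, which is all the definition requires.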
|linear-algebra|sequences-and-series|
0
Application of Manifolds to Psychology?
The first paragraph of this page asserts that the theory of mathematical manifolds finds application in psychology: they appear "[i]n psychology as spaces of sensations (for example, colours)". An inconclusive search led me to believe that the author confused manifolds with the dictionary definition of the word: something having many different parts or features. Can you inform me of any use of mathematical manifolds in psychology? One specific example per post. Manifold. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Manifold&oldid=47752
Just a tourist here. I'm reading "The Long Journey Home" by A.H. Almaas and he refers to Riemannian Manifolds in the context of psychology as a core component. Fascinating that there is crossover here describing reality, ego structures, and how they relate to soul identity. Check it out. I’m still puzzled but at least finding others with the same question. This will probably get deleted by moderators (hi folks!) because I’m not answering but pointing to a resource for your question that goes beyond it to some truly fascinating research.
|manifolds|
0
Every bounded function on $[a,b]$ is Riemann integrable on $[a,b]$
When we say "Every bounded function on $[a,b]$ is Riemann integrable on $[a,b]$", does it mean that $[a,b]$ is the domain of the function, or any closed interval? I thought the statement was true, taking $[a,b]$ to be the domain of $f$. But upon discussion, it turns out to be false when we consider the Dirichlet function: $$ f(x) = \begin{cases} 1,\quad x \in \mathbb{Q} \\ 0,\quad x \in \mathbb{Q}^{c} \end{cases} $$ This function is not Riemann integrable on any closed interval. Thus, this function is a counterexample to the statement. Any help will be appreciated.
To reiterate multiple comments, the statement "Every bounded function on $[a, b]$ is Riemann integrable on $[a, b]$" is false. In response to the clarification, "I wanted to ask if the interval $[a, b]$ in the above question is the domain of $f$ or just any closed interval different from the domain of $f$?": If we say "$f$ is integrable on $[a, b]$" we're implicitly assuming the domain of $f$ contains $[a, b]$ (in practice), or that the domain of $f$ is $[a, b]$ (being pedantically literal about the definition). In practice, being loose about this issue doesn't matter. If $f$ is defined on some closed, bounded real interval $I$, and if $[a, b] \subseteq I$, then in saying "$f$ is integrable on $[a, b]$" we can replace $f$ by either: the restriction $f|_{[a,b]}$ of $f$ to $[a, b]$, i.e., the function whose domain is $[a, b]$ and whose values agree with the values of $f$ on $[a, b]$ (in symbols, $f|_{[a,b]}(x) = f(x)$ for all $x$ in $[a, b]$); or the product of $f$ and the indicator function of $[a, b]$.
|real-analysis|
1
how to calculate $\sum\limits_{k=1}^{+\infty }{\arctan \frac{1}{1+k^{2}}}$
Question: how to calculate $$\sum_{k=1}^{+\infty }{\arctan \frac{1}{1+k^{2}}}$$ My attempt: using $\arctan \theta =\frac{i}{2}\ln \left( \frac{i+\theta }{i-\theta } \right)$, $$S=\sum_{k=1}^{+\infty }{\arctan \frac{1}{1+k^{2}}}=\lim_{n\to +\infty }\sum_{k=1}^{n}{\arctan \frac{1}{1+k^{2}}}=\lim_{n\to +\infty }\sum_{k=1}^{n}{\frac{i}{2}\ln \left( \frac{i+\frac{1}{1+k^{2}}}{i-\frac{1}{1+k^{2}}} \right)}$$ $$=\frac{i}{2}\lim_{n\to +\infty }\ln \prod_{k=1}^{n}{\left( \frac{i+\frac{1}{1+k^{2}}}{i-\frac{1}{1+k^{2}}} \right)}=\frac{i}{2}\ln \left( \lim_{n\to +\infty }\prod_{k=1}^{n}{\left( \frac{i+\frac{1}{1+k^{2}}}{i-\frac{1}{1+k^{2}}} \right)} \right)$$ $$=\frac{i}{2}\ln \left( \lim_{n\to +\infty }\prod_{k=1}^{n}{\left( \frac{i\left( 1+k^{2} \right)+1}{i\left( 1+k^{2} \right)-1} \right)} \right)$$
It's good you did all that groundwork. Let's rewrite $$\frac{i}{2}\ln \left( \lim_{n\to +\infty }\prod_{k=0}^{n}{\left( \frac{\left( k+1 \right)^{2}+1-i}{\left( k+1 \right)^{2}+1+i} \right)} \right) = \frac{i}{2}\ln \left(\frac{\prod_{k=1}^\infty \left(1 - \frac{i-1}{k^2}\right)}{\prod_{k=1}^\infty \left(1 - \frac{-i-1}{k^2}\right)}\right) \qquad (1)$$ where each of the products on the right is convergent. To deal with this, use the product formula for the sine function: $$\frac{\sin \pi x}{\pi x} = \prod_{k=1}^\infty \left(1 - \frac{x^2}{k^2}\right).$$ We'll apply this in a moment, but first let's observe that the right side of (1) is of the form $\frac{i}{2} \ln \frac{z}{\overline{z}} = -\theta$, where $z = re^{i\theta}$ (using throughout the convention $-\pi < \theta \le \pi$). So it is only a matter of identifying the $\theta$-coordinate of the numerator product appearing in (1). Applying the product formula to that numerator, with $x^{2} = i - 1 = \sqrt{2}\exp\left(\frac{3\pi i}{4}\right)$, i.e. $x = 2^{1/4}\exp\left(\frac{3\pi i}{8}\right)$, the numerator equals $\frac{\sin \pi x}{\pi x}$, and the sum is $-\arg\left(\frac{\sin \pi x}{\pi x}\right)$.
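If it helps to double-check the sine-product route numerically, here is a small sketch (my own; the closed form $-\arg(\sin\pi x/(\pi x))$ with $x^2=i-1$ is what the argument above leads to):

```python
import cmath
import math

# partial sum of arctan(1/(1+k^2)); the tail is O(1/N)
N = 200_000
partial = sum(math.atan(1.0 / (1 + k * k)) for k in range(1, N + 1))

# closed form from the sine product: S = -arg(sin(pi x)/(pi x)),
# with x a square root of i - 1 (the principal branch works here,
# since the accumulated argument stays inside (-pi, pi])
x = cmath.sqrt(1j - 1)
closed = -cmath.phase(cmath.sin(cmath.pi * x) / (cmath.pi * x))

print(partial, closed)
```

The two numbers agree to roughly $1/N$, which supports both the rearrangement in (1) and the branch-of-$\arg$ bookkeeping.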
|calculus|sequences-and-series|limits|summation|products|
0
how to show that a ODE is negative
So I have the ODE given by $-u''(x)=f(x;\lambda)$ for $x\in(0,1)$ with the boundary conditions $u(0)=u(1)=0$, where $f(x;\lambda) = -\lambda e^{-\lambda x}$ for $\lambda>0$. This has a unique solution in $C^2_0[0,1]=\{g\in C^2[0,1] \mid g(0)=g(1)=0\}$. How can I show (using Green's function) that $u(x)\leq 0$ for all $x\in (0,1)$? Away from $x=y$ the Green's function satisfies $G''=0$, so $G(x,y)=Ax+B$ there. Considering the two cases $x\in [0,y)$ and $x\in (y,1]$, I got $$G(x,y) = \begin{cases}Ax+B \qquad x\in [0,y) \\ Cx+D\qquad x\in (y,1]\end{cases}$$ Applying the boundary conditions I got $$G(x,y) = \begin{cases}Ax \qquad\qquad x\in [0,y) \\ C(x-1)\qquad x\in (y,1]\end{cases}$$ I know that I also have to consider $x=y$, where continuity gives $Ay=C(y-1)$, and I imposed the jump condition $$\lim_{\epsilon \to 0^+}\big(G'(y+\epsilon,y)-G'(y-\epsilon,y)\big)=1.$$ This ended up giving me $x^2-x$ for both cases. Am I missing something? Edit: Sorry, I noticed the error.
Rolle's theorem says that $\exists c\in (0,1)$ such that $u'(c)=0$ (since $u(0)=u(1)=0$). Since $u''(x) = \lambda e^{-\lambda x} > 0$ for all $x$, the derivative $u'$ is strictly increasing, so $u'$ is negative to the left of $c$ and positive to the right of $c$; hence $u$ is decreasing on $(0,c)$ and therefore negative there. We now just need to show that $u$ is never positive on $(c,1)$. Suppose there were another root $d\in(c,1)$ of $u$. Then again there is some $e\in(d,1)$ such that $u'(e) = 0$; but if $u'(c) = u'(e) = 0$, then Rolle's theorem implies the existence of a point in $(c,e)$ where $u''$ is zero, contradicting $u'' >0$. We have shown that $u$ is decreasing on $(0,c)$, so it is not positive there since $u(0) = 0$, and that $u$ has roots only at $x=0,1$, so it can never become positive in $(0,1)$.
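As a sanity check on the sign claim (my own sketch; for this particular $f$ the BVP can be solved in closed form by integrating $u''=\lambda e^{-\lambda x}$ twice and fitting the boundary conditions):

```python
import math

def u(x, lam):
    """Exact solution of -u'' = -lam*exp(-lam*x), u(0) = u(1) = 0."""
    A = (1.0 - math.exp(-lam)) / lam     # from u(1) = 0
    return math.exp(-lam * x) / lam + A * x - 1.0 / lam   # -1/lam from u(0) = 0

for lam in (0.5, 1.0, 5.0):
    # residual check of the ODE by a central finite difference
    h, x0 = 1e-5, 0.3
    upp = (u(x0 - h, lam) - 2 * u(x0, lam) + u(x0 + h, lam)) / h**2
    assert abs(-upp - (-lam * math.exp(-lam * x0))) < 1e-4
    # the sign claim: u <= 0 on [0, 1]
    assert all(u(k / 100.0, lam) <= 1e-12 for k in range(101))
```

Since $u''>0$, the exact solution is convex with zeros at the endpoints, which is precisely why it dips below zero in between, matching the Rolle's-theorem argument above.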
|ordinary-differential-equations|partial-differential-equations|
0
Eisenstein integers with norm value of 5
I am trying to find out the Eisenstein integers that have norm values of 3,5,7.... I want to see if there is some pattern. For the norm value of 3, I was able to find six Eisenstein integers. For the norm value of 7, I was able to find twelve Eisenstein integers. I am unable to find even a single Eisenstein integer with a norm value of 5. Is this correct? Do Eisenstein integers with a norm value of 5 not exist or am I missing something? Does it have to do anything with the fact that 5 is a prime Eisenstein integer?
You are not missing anything. There are no Eisenstein integers ($a+b\omega$ with $a,b\in\mathbb Z$ and $\omega$ a primitive cube root of $1$) with a norm of $5.$ One way to see this is to note that $N(a+b\omega)=(a+b\omega)(a+b\overline\omega)$ $=a^2-ab+b^2$ $\equiv a^2+2ab+b^2$ $=(a+b)^2\not\equiv2\pmod3,$ as shown here. Another way to see this, as suggested in comments by Anne Bauval, is to note that $N(a+b\omega)=a^2-ab+b^2=\left(a-\frac12b\right)^2+\frac34b^2=5\iff(2a-b)^2+3b^2=20,$ which has no solutions in integers, because $20,\ 20-3\cdot1^2,$ and $20-3\cdot2^2$ are not squares and $20-3\cdot b^2<0$ for integers $b>2$. Finally, if we had $N(a+b\omega)=(a+b\omega)(a+b\overline\omega)=5$, then $5$ would not be a prime Eisenstein integer.
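A brute-force enumeration over the coordinates $a,b$ (my own sketch; `norm` is just $N(a+b\omega)=a^2-ab+b^2$) confirms the counts you found, $6$, $0$, $12$ for norms $3$, $5$, $7$:

```python
def norm(a, b):
    # N(a + b*omega) = a^2 - a*b + b^2 for Eisenstein integers
    return a * a - a * b + b * b

def count_with_norm(n, bound=20):
    # count all (a, b) with |a|, |b| <= bound and norm n; the bound is
    # generous, since norm(a, b) grows quadratically in a and b
    return sum(1 for a in range(-bound, bound + 1)
                 for b in range(-bound, bound + 1)
                 if norm(a, b) == n)

for n in (3, 5, 7):
    print(n, count_with_norm(n))   # -> 3: 6, 5: 0, 7: 12
```

The counts $6$ and $12$ are $6\cdot 1$ and $6\cdot 2$: the six units of the Eisenstein integers act on each solution, and for $7 \equiv 1 \pmod 3$ there are two non-associate prime factors, while $5 \equiv 2 \pmod 3$ is inert, giving no elements of norm $5$.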
|number-theory|algebraic-number-theory|eisenstein-integers|
1
How to denote an $n$-dimensional point
An $n$-dimensional point: $\textbf{x} = (x_1, x_2, \ldots , x_n)$, for some $n \in \mathbb{Z}$, $n > 1$. In the book these are denoted by bold letters, as above. It's hard for me to write bold letters on paper. What are some quick and recognizable notational conventions for $n$-dimensional points that translate well to handwritten work? Any tips appreciated.
Minor, pedantic nitpick: a point is zero-dimensional. The objects you are describing are points in an $n$-dimensional space; the points themselves are not $n$-dimensional. With respect to notation, it sounds like you are writing notes or completing homework assignments. In that kind of context the notation can be a lot looser, as the intended audience is yourself and whoever is grading your work or advising you. As long as you have some mutually agreed-upon convention, it doesn't matter what notation you use. So pick something you like, and stick with it. In more formal settings (e.g. in a paper which needs to get through peer review), you should expect to adopt whatever notation the editor and/or reviewers recommend. For this particular notational issue: in many contexts, the distinction between a point in some $n$-dimensional space and a more general notion of "point" isn't very important, so it is common to see notation like $$ x = (x_1, x_2, \dotsc, x_n), $$ where no special emphasis (boldface, arrows, underlining) is placed on the symbol at all.
|real-analysis|notation|soft-question|
0
Is $\int_0^{\pi}\log \sin \theta d\theta$ not well-defined?
Is $\int_0^{\pi}\log \sin \theta \, d\theta$ not well-defined? On the third edition of Ahlfors' Complex Analysis, page 160, it states: As a final example we compute the special integral \begin{equation*} \int_0^{\pi}\log \sin \theta \, d\theta. \end{equation*} ... The same proof applies near the vertex $\pi$, and we obtain \begin{equation*} \int_0^{\pi}\log (-2ie^{ix}\sin x)\,dx=0. \end{equation*} Here are my questions. (i) I think $\int_0^{\pi}\log \sin \theta \, d\theta$ is not well-defined, since $\log \sin\theta$ is undefined when $\theta=0$ and $\theta=\pi$. (ii) Actually, I think we only obtain something like \begin{equation*} \lim_{\delta\rightarrow 0^{+}}\int_{0+\delta}^{\pi-\delta}\log (-2ie^{ix}\sin x)\,dx=0. \end{equation*} Why is $\lim_{\delta\rightarrow 0^{+}}\int_{0+\delta}^{\pi-\delta}\log (-2ie^{ix}\sin x)\,dx=\int_0^{\pi}\log (-2ie^{ix}\sin x)\,dx=0$? Notice $\log (-2ie^{ix}\sin x)$ is undefined when $x=0$ and $x=\pi$. I show the whole context of the question below. I have trouble with these points.
As a Lebesgue integral it is well-defined, because $\log\sin\theta$ is Lebesgue integrable on $(0,\pi)$: near the endpoints it behaves like $\log x$, which is integrable near $0$. What is not globally defined is the $\log$ function on the complex plane. More precisely, the imaginary part of $\log$ changes by $2\pi$ along a loop around the origin. What the book is doing, in some sense, is avoiding this issue when taking the contour integral.
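For what it's worth, a quick numerical check (my own sketch) confirms the improper integral converges, to the standard value $-\pi\log 2$; that value isn't stated in the excerpt, but it is what Ahlfors' computation leads to:

```python
import math

# composite midpoint rule; it tolerates the integrable log singularities
# at 0 and pi because no node ever lands on an endpoint
N = 200_000
h = math.pi / N
integral = h * sum(math.log(math.sin((k + 0.5) * h)) for k in range(N))

print(integral, -math.pi * math.log(2))   # the two values are very close
```

The endpoint cells contribute only $O(h)$ error here, which is why a plain midpoint rule suffices despite the singularities.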
|integration|complex-analysis|analysis|complex-integration|residue-calculus|
0
Identity for complex inner product spaces
Suppose $V$ is a complex inner product space. I want to prove that for any $x,y \in V$, we have the following identities: $\langle x, y\rangle = \frac{1}{2\pi} \int_{0}^{2\pi} \|x + e^{it}y\|^2 e^{it}dt$ and $\langle x, y\rangle = \frac{1}{N} \sum_{k=1}^{N} \|x + e^\frac{2\pi ik}{N}y\|^2 e^\frac{2\pi ik}{N}$, where $N \geq 3.$ I tried to use the polarization identity (and I still think it is the way to approach the problem), but I couldn't figure out the trick, so help would be highly appreciated.
\begin{align} &\frac{1}{N} \sum_{k=1}^{N}\|x + e^\frac{2\pi ik}{N}y\|^2 e^\frac{2\pi ik}{N}\\ &=\frac{1}{N} \sum_{k=1}^{N}\left(\|x\|^2+\|e^\frac{2\pi ik}{N}y\|^2+\langle x, e^\frac{2\pi ik}{N} y\rangle+\langle e^\frac{2\pi ik}{N} y,x\rangle\right)e^\frac{2\pi ik}{N}\\ &=\frac{1}{N} \sum_{k=1}^{N}(\|x\|^2+\|y\|^2)e^\frac{2\pi ik}{N}+\frac1N\sum_{k=1}^{N}\langle x, y\rangle e^\frac{-2\pi ik}{N}e^\frac{2\pi ik}{N}+\frac1N\sum_{k=1}^{N}\langle y, x\rangle e^\frac{2\pi ik}{N}e^\frac{2\pi ik}{N}\\ &=0+\frac1N\sum_{k=1}^{N}\langle x, y\rangle+0\\ &=\langle x, y\rangle \end{align} Some observations: The sum identity is a generalization of the polarization identity in a complex Hilbert space ($N=4$): $$\langle x,y\rangle ={\frac {1}{4}}\left(\|x+y\|^{2}-\|x-y\|^{2}+i\|x+iy\|^{2}-i\|x-iy\|^{2}\right)$$ The points $\{\frac{2\pi k}{N}:k=1,\dots,N\}$ partition $[0,2\pi]$, so the sum is a Riemann sum of the integral: $$\frac{1}{N} \sum_{k=1}^{N} \|x + e^\frac{2\pi ik}{N}y\|^2 e^\frac{2\pi ik}{N}\longrightarrow \frac{1}{2\pi}\int_0^{2\pi} \|x + e^{it}y\|^2 e^{it}\,dt \quad (N\to\infty),$$ which yields the integral identity as well.
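The finite identity is easy to spot check numerically. A sketch (my own; the convention here is $\langle x,y\rangle=\sum_j x_j\overline{y_j}$, conjugate-linear in the second slot, matching the computation above):

```python
import cmath

def inner(x, y):
    # conjugate-linear in the SECOND argument: <x, y> = sum x_j * conj(y_j)
    return sum(a * b.conjugate() for a, b in zip(x, y))

x = [1 + 2j, -0.5j, 3.0]
y = [2 - 1j, 1 + 1j, -2j]

for N in (3, 4, 7, 50):          # identity requires N >= 3
    s = 0
    for k in range(1, N + 1):
        w = cmath.exp(2j * cmath.pi * k / N)
        z = [a + w * b for a, b in zip(x, y)]
        s += inner(z, z).real * w      # ||x + w y||^2 * w
    s /= N
    assert abs(s - inner(x, y)) < 1e-9
```

Running the same loop with $N=2$ fails, since the term $\langle y,x\rangle\sum_k e^{4\pi ik/N}$ only vanishes when $N\nmid 2$, which is exactly where the hypothesis $N\geq 3$ enters.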
|functional-analysis|hilbert-spaces|
0
Parametric equation of inward pointing half sin waves
I can create a circle (in red), and I can create a sine wave that goes around a circle (in green), with the parametric equations x = 7cos(t), y = 7sin(t) + sin(2pi*5t). But how can I create a half sine wave pointing inward, going around in a circle, using parametric equations? See the image below.
First, to help work out how we might create something like this, we will think in terms of polar coordinates. If you don't know what polar coordinates are, all you need to understand for this answer is that we will represent points with an angle and a radius, rather than an $x$ and $y$ coordinate. We do this because it will allow us to generate graphs that repeat around the origin more easily. If we call the angle $t$, and we let the angle take any value around the origin (but each angle only once), then we shall say $0 \leq t < 2\pi$. Now we know that a circle can be parametrised by $(r\cos(t), r\sin(t))$ for some fixed $r>0$, so we could try to make this $r$ vary as $t$ varies, to change the distance of the graph to the origin. As per your image, we want this $r$ to vary from some value $a>0$ to some value $b>a$. Then if we select $$r=\frac{(b-a)\sin(kt)+a+b}{2}$$ for some $k \in \mathbb{N}$, we see that $a \leq r \leq b$ as desired. The reason I inserted this $k$ here is to control how many times the wave repeats as $t$ goes once around the circle.
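The same idea is easy to experiment with outside Desmos. A short sketch (my own; using $|\sin(kt/2)|$ instead of $\sin(kt)$ to get half-wave arches is my assumption about the intended picture, and $r = b-(b-a)|\sin(kt/2)|$ flips the arches the other way):

```python
import math

a, b, k = 5.0, 7.0, 10     # inner radius, outer radius, number of arches

def point(t):
    # |sin| makes every arch point the same way, giving half-sine bumps
    r = a + (b - a) * abs(math.sin(k * t / 2.0))
    return r * math.cos(t), r * math.sin(t)

# sample the curve once around the circle and confirm a <= r <= b
pts = [point(2 * math.pi * i / 1000) for i in range(1001)]
rs = [math.hypot(x, y) for x, y in pts]
assert a - 1e-9 <= min(rs) and max(rs) <= b + 1e-9
```

Since $|\sin(kt/2)|$ has period $2\pi/k$, the curve traces exactly $k$ arches as $t$ runs from $0$ to $2\pi$, touching the inner circle at the cusps and the outer circle at the peaks.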
|geometry|parametric|desmos|
1