title | question_body | answer_body | tags | accepted
string | string | string | string | int64
|---|---|---|---|---|
Analytical solution to marble toss experiment
|
I am interested in the following problem: Assume we have an infinitely large, homogeneous grid, where each grid point is connected to 8 neighboring grid points (top/bottom, left/right, diagonal). Now we perform a random experiment (marble toss) where we randomly fill $\rho$ percent of the grid's points with $\rho_A$ percent of marbles of type A and $\rho_B$ percent of marbles of type B. Now, if I pick a random marble from the grid, I would like to know the probability for this marble to have a marble of the opposite type within its neighborhood (the 8 connected grid points). I have already performed Monte Carlo simulations to retrieve probabilities for given $\rho$ , $\rho_A$ and $\rho_B$ , but I would be interested if there is an analytical solution for such problems. Also, I would be interested in how you would search for such a problem in literature. I simply called this a marble-toss experiment, because this is the first analogy that came to my mind. However, I am an engineer, not
|
Given a node with a certain type, each neighboring node can be either empty, type A, or type B, and we do know the corresponding probabilities. So, for each of the eight neighbors, we have a random trial with three possible outcomes and we want to compute the probability that at least one neighbor has the opposite type. We can even approach the problem with Bernoulli trials, since for each neighbor we are only interested in knowing whether it has a certain type or not. For instance, if we are looking at the neighbors of a node with type $A$ and $X_i$ are Bernoulli random variables $X_i \sim Ber(\rho \rho_B)$ , we want to get $P(X \ge 1)$ , where $X = \sum_{i=1}^8 X_i$ has a binomial distribution, i.e. $$ P(X \ge 1) = 1 - P(X=0) = 1 - (1-\rho \rho_B)^8 $$ Similarly, if you start with a node with type $B$ , you end up with a probability of $1- (1-\rho \rho_A)^8$ of having at least one neighbor with the opposite type. Unless I'm missing something, this seems quite standard, but it would still be
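A quick Monte Carlo sketch (my own code, not from the post) that checks the closed form $1-(1-\rho\rho_B)^8$ on a finite torus grid, assuming the marble fractions satisfy $\rho_A+\rho_B=1$ and using the example densities $\rho=0.4$, $\rho_A=0.7$, $\rho_B=0.3$:

```python
import random

# Monte Carlo check of 1 - (1 - rho*rho_B)**8: the probability that a type-A
# marble has at least one type-B marble among its 8 neighbours. The grid is a
# torus to mimic the infinite grid; rho, rho_B are the densities from the post.
def simulate(rho=0.4, rho_B=0.3, size=300, seed=0):
    rng = random.Random(seed)
    # each cell: 'B' with prob rho*rho_B, 'A' with prob rho*(1 - rho_B), else empty
    grid = [['B' if (r := rng.random()) < rho * rho_B
             else 'A' if r < rho else '.'
             for _ in range(size)] for _ in range(size)]
    hits = total = 0
    for i in range(size):
        for j in range(size):
            if grid[i][j] != 'A':
                continue
            total += 1
            nbrs = [grid[(i + di) % size][(j + dj) % size]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
            hits += 'B' in nbrs
    return hits / total

print(simulate(), 1 - (1 - 0.4 * 0.3) ** 8)   # both ≈ 0.64
```

With a 300 x 300 torus the estimate agrees with the analytic value to about two decimal places.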
|
|combinatorics|random|monte-carlo|
| 1
|
Find the derivative of a piecewise constant function
|
I want to differentiate the given function: \begin{equation} f(x)= \begin{cases}1 \ \ \ \ \ x\in[0,1] \\ 0 \ \ \ \ \ \ x\not\in[0,1] \end{cases} \end{equation} However, the simple $f'(x)=0$ seems a little bit too simple, since this is similar to the Dirac delta function, just stretched over the interval $[0,1]$ instead of concentrated at $x=0$ . So I tried to represent the function as a generalized function, $$f(x)=\int_0^1\delta(x)\phi(x)\text{d}x,$$ where $\phi(x)$ is a locally summable, infinitely differentiable function, $\in C^\infty$ , and to use the differentiation rule $$(D^\alpha f,\phi)=(-1)^{|\alpha|}(f,D^{\alpha}\phi),$$ with $\alpha=1$ : $$(D^1 f,\phi)=(-1)^{|1|}(\delta(x),D^{1}\phi)$$ $$(D^1 f,\phi)=-1(\delta(x),\phi'(x))$$ However, I am not sure how to continue. Any hints? Thanks
|
The exact answer to this question is actually as follows. I was corrected by my Professor, and this is apparently the right procedure. \begin{equation} f(x)= \begin{cases}1 \ \ \ \ \ x\in[0,1] \\ 0 \ \ \ \ \ \ x\not\in[0,1] \end{cases} \end{equation} We use the formula $$(f',\phi) = -(f,\phi') = -\int_\mathbb{R} f\phi' dx = -\int_{-\infty}^{x_1}f\phi' dx - \int_{x_1}^{x_2}f\phi' dx - \cdots -\int_{x_m}^{\infty}f\phi' dx $$ where $x_1=0$ and $x_2=1$ , hence we obtain $$(f',\phi) = -(f,\phi') = -\int_\mathbb{R} f\phi' dx = -\int_{-\infty}^{0}f\phi' dx - \int_{0}^{1}f\phi' dx - \int_{1}^{\infty}f\phi' dx $$ Now using integration by parts for each term, we get: $$(f',\phi)=\int_{-\infty}^{\infty} \frac{df}{dx}\phi(x) dx + \sum_{j=1}^m [f(x_j^+)-f(x_j^-)]\phi(x_j)$$ We have that $m=2$ so we obtain $$\int_{-\infty}^{\infty}\frac{df}{dx}\phi(x) dx + [f(x_1^+)-f(x_1^-)]\phi(x_1)+ [f(x_2^+)-f(x_2^-)]\phi(x_2)$$ which is $$[f(0^+)-f(0^-)]\phi(0)+ [f(1^+)-f(1^-)]\phi(1) = \phi(0)-\phi(1),$$ since the classical derivative $df/dx$ vanishes almost everywhere. In other words, $f' = \delta(x) - \delta(x-1)$ in the sense of distributions.
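A small numerical sanity check (my own, using an arbitrary smooth test function, not one from the answer): the pairing $(f',\phi)=-(f,\phi')=-\int_0^1\phi'(x)\,dx$ should equal $\phi(0)-\phi(1)$.

```python
import math

# Check that the distributional derivative of the indicator of [0,1] acts as
# phi(0) - phi(1), i.e. (f', phi) = -(f, phi') = -∫_0^1 phi'(x) dx.
phi  = lambda x: math.exp(-x * x) * (1 + x)             # arbitrary smooth test function
dphi = lambda x: math.exp(-x * x) * (1 - 2 * x * (1 + x))  # its derivative

N = 200_000                                             # midpoint rule on [0, 1]
pairing = -sum(dphi((k + 0.5) / N) for k in range(N)) / N
print(pairing, phi(0) - phi(1))    # both ≈ 1 - 2/e ≈ 0.26424
```

The two numbers agree to high precision, consistent with $f'=\delta(x)-\delta(x-1)$.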
|
|calculus|derivatives|
| 1
|
How to solve $\ln(x) = 3\left(1-\frac{1}{x}\right)$?
|
I have been working on this problem for a while: $$\ln(x) = 3\left(1-\frac{1}{x}\right)$$ and by graphing, plugging and chugging values, and rigorously doing the math, I can clearly see that one of the values that satisfies this condition is $x = 1$ . However, when I put this into Wolfram Alpha and Desmos, I see two answers: one that is $x = 1$ and another that is approximately $16.801$ , expressed via the Lambert W function. I solved for $x = 1$ by using the Lambert W function. I cannot find any way to solve for the latter solution, $16.801$ . Could anyone elaborate for me how to solve for it? Thank you. My method to get $x=1$ : $$\begin{align}\ln(x) = 3\left(1-\frac{1}{x}\right)\:&\Longrightarrow\:x=e^3e^{-\frac{3}{x}}\\&\Longrightarrow\:e^{-3}=\frac{1}{x}e^{-\frac{3}{x}}\\&\Longrightarrow\:-3e^{-3}=-\frac{3}{x}e^{-\frac{3}{x}}\end{align}$$ Applying the Lambert W function, which does the following: $W(ae^a) = a $ : $$-3 = -3/x \Longleftrightarrow x=1$$
|
You correctly found: $$-3e^{-3} = (-3/x) e^{-3/x}$$ Of course an obvious first solution is $x=1$ . Using the Lambert W function: $$x\,e^x=a \quad\implies\quad x=W(a)$$ $$W(a)e^{W(a)}=a\quad\text{or equivalently}\quad x=W(x\,e^x)$$ Thus $$\begin{cases} a=-3e^{-3}\\ W(a)=-\frac{3}{x} \end{cases}$$ The solution is: $\quad x=-\frac{3}{W(a)}$ $$x=-\frac{3}{W(-3e^{-3})}\simeq 16.801016$$ Note: The Lambert W function is a multivalued function. The two real branches are denoted $W_0(X)$ and $W_{-1}(X)$ . The above result corresponds to the branch $W_0$ , which is commonly denoted simply $W(X)$ . The second branch gives: $$x=-\frac{3}{W_{-1}(-3e^{-3})}=1.$$
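The two branches can be checked numerically with `scipy.special.lambertw` (a sketch of my own, not part of the answer):

```python
import numpy as np
from scipy.special import lambertw

# Both real solutions of ln(x) = 3(1 - 1/x) via x = -3 / W(-3 e^{-3}),
# one per real branch of the Lambert W function.
a = -3 * np.exp(-3.0)
x0 = float(np.real(-3 / lambertw(a, 0)))    # principal branch W_0
x1 = float(np.real(-3 / lambertw(a, -1)))   # branch W_{-1}
print(x0, x1)                               # ≈ 16.801016, 1.0

for x in (x0, x1):                          # both satisfy the original equation
    assert abs(np.log(x) - 3 * (1 - 1 / x)) < 1e-9
```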
|
|calculus|logarithms|lambert-w|
| 0
|
Writing expressions using permutations versus combinations
|
This question states a PMF using combination notation, and the numerator of the expression is actually equal to a permutation: $$\frac{\binom{365}{k}k!}{365^k}$$ I have been trying to write permutations using LaTeX, and I find that it is quite awkward. Question: Is the math written in combinations because that is the preferred option to express the logic, or because permutations are difficult to create in LaTeX/Stack Exchange? Note: You might think this trivial, but I am trying to understand. Should I think in "combinations" or "permutations"?
|
It depends on the nature of the problem. In the supplied expression, combination notation, as you’ve written it, is the more appropriate choice. Combinations are used when the order of selection doesn’t matter; permutations are used when the order does matter. So it isn’t a question of ease of typesetting in TeX, or of mere preference. When you are focusing on lists of elements where order matters, use permutations; when your focus is on groups of elements where order does not matter, use combinations.
|
|probability|combinatorics|probability-distributions|soft-question|
| 0
|
Writing expressions using permutations versus combinations
|
This question states a PMF using combination notation, and the numerator of the expression is actually equal to a permutation: $$\frac{\binom{365}{k}k!}{365^k}$$ I have been trying to write permutations using LaTeX, and I find that it is quite awkward. Question: Is the math written in combinations because that is the preferred option to express the logic, or because permutations are difficult to create in LaTeX/Stack Exchange? Note: You might think this trivial, but I am trying to understand. Should I think in "combinations" or "permutations"?
|
The count for distinct permutations of $k$ elements selected from a set of $n$ is equal to the count for distinct combinations of $k$ elements selected from $n$ times the count for distinct arrangements of those $k$ elements. $${^n\mathrm P_k}={^n\mathrm C_k}~k! = \dbinom n k~k! = \dfrac{n!}{(n-k)!}$$ (The MathJax source for the line above is `{^n\mathrm P_k}={^n\mathrm C_k}~k! = \dbinom n k~k! = \dfrac{n!}{(n-k)!}`.) Use whichever format you prefer. There isn't that much difference in MathJax complexity. If needed many times in your text, you could always use `\def\perm#1#2{{^{#1}\mathrm P_{#2}}}` to define an inline macro replacement, so `\perm n k` produces the required symbols as often as needed. $$\def\perm#1#2{{^{#1}\mathrm P_{#2}}}\perm n k$$
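The identity ${^n\mathrm P_k}={^n\mathrm C_k}\,k!$ is built into Python's standard library, which makes for a quick check (my own aside, not part of the answer):

```python
from math import comb, factorial, perm

# nPk = nCk * k!: arrangements = choices * orderings, checked for small n, k.
for n in range(10):
    for k in range(n + 1):
        assert perm(n, k) == comb(n, k) * factorial(k) == factorial(n) // factorial(n - k)
print("nPk == nCk * k! verified for n < 10")
```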
|
|probability|combinatorics|probability-distributions|soft-question|
| 1
|
Show $\root{-e}\of{e}<\ln2$ without a calculator
|
I tried manipulating the expression to come up with inequalities such as $1 . One idea I have is to show that $\lim_{x\to\infty}\left(\frac{1}{x}+\ln2\right)\left(\frac{1}{x}+1\right)^{\frac{x}{e}}>1$ . $$\lim_{x\to\infty}\frac{\left(1+x\ln2\right)\left(1+x\right)^{\frac{x}{e}}}{x^{1+\frac{x}{e}}}>1$$ I don't know how I could progress.
|
Euler showed how to approximate $\log 2$ as a continued fraction, see Eneström #606, §23. There we find $\displaystyle \log(2) \approx \frac{262}{378} \approx 0.693121$ (without a calculator as requested, lookup only). $\displaystyle\root{-e}\of{e}$ may be written as $\displaystyle\frac{1}{e^{1/e}}$ . The decimal expansion of $1/e$ is known (lookup without a calculator), and how to approximate $e^x$ by a continued fraction is known; carrying this out by hand to the needed precision is left to those who enjoy such exercises. Result: $$\displaystyle\root{-e}\of{e}\approx 0.6922\lt\log(2)\approx \frac{9}{13}\approx 0.6923$$ confirmed.
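The comparison chain in the answer can be confirmed numerically (my own check, obviously using a calculator, which the puzzle forbids):

```python
import math

# e^(1/(-e)) = e^(-1/e) versus ln 2, with the rational bound 9/13 in between.
lhs, rhs = math.exp(-1 / math.e), math.log(2)
print(round(lhs, 4), round(rhs, 4))   # 0.6922 0.6931
assert lhs < 9 / 13 < rhs             # the answer's comparison chain
assert 262 / 378 < rhs                # Euler's continued-fraction approximant is a lower bound here
```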
|
|algebra-precalculus|limits|inequality|number-comparison|
| 0
|
How to prove these rules produce a Gray code
|
There is a neat N-ary puzzle that has a solution that follows a 5-ary Gray code. https://www.schwandtner.info/publications/Kugellager.pdf It's not the typical reflected Gray code but still seems to go through all the numbers 00..0 to 44..4, one transition at a time. The rules are as follows (taken from the above pdf but starting at 0 index rather than 1 index):

Transition | Lower numbers must be | Higher numbers must be
0 -- 1 | 4 | 0, 1, or 4
1 -- 2 | 0 | 0, 1, or 2
2 -- 3 | 0, 1, or 4 | any
3 -- 4 | 0, 1, or 2 | any

I would like to prove that these rules do in fact produce a Gray code for any length n, but am having trouble since a transition depends on the lower and upper numbers. In the above paper, they confirm this is the case for n = 1,2,3,4. One thing I've tried is to show that every number besides 00..0 and 44..4 has exactly two neighbors, but that's not enough to prove they all lie on the same line (there could be a line and multiple cycles, and they'd still have the property of only 2 neighbo
|
I'll attempt to prove that | $0^n$ to $4^n$ | = $5^n$ . Notation: |A to B| means the count of strings from A to B inclusive. I'll write "A to B inclusive" as [A,B]. $a^n$ means an n-length string of aa..a. The '*' symbol can mean either concatenation or multiplication (sorry!) but context should be clear. I break up the sequence into parts: [ $2^j*0*0^k$ , $2^{j+1}*0^k$ ) = [ $2^j*0*0^k$ , $4^j*0*4^k$ ] + [ $4^j*1*4^k$ , $2^j*1*0^k$ ], e.g. [0000, 0444], [1444, 1000], [2000, 4044], [4144, 2100], [2200, 4404], [4414, 2210], [2220, 4440], [4441, 2221], [2222, 4444]. Since each transition is reversible, |A to B| = |B to A|, so we can look at the following: [ $2^j*0*0^k$ to $4^j*0*4^k$ ] and [ $2^j*1*0^k$ to $4^j*1*4^k$ ]. Since 0 and 1 are allowed as higher digits in all rules, and as lower digits in transitions from 2-3 and 3-4, the above two ranges will have the same size. Lemma: | $2^j*x*0^k$ to $4^j*x*4^k$ | = $5\cdot{}$ | $2^j*x*0^{k-1}$ to $4^j*x*4^{k-1}$ |, where x = 0 or 1, $j\ge 1$ , $k\ge 1$ (basically this
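As a brute-force sanity check (my own code, under my reading of the rules: a digit may step between $v$ and $v+1$ iff every lower-order digit lies in the rule's "lower" set and every higher-order digit lies in its "higher" set), one can verify that the rules yield a single Hamiltonian path from $0^n$ to $4^n$ for small $n$:

```python
# Transition rules: a digit may step between v and v+1 iff every lower-order
# digit is in LOWER[v] and every higher-order digit is in HIGHER[v].
LOWER  = {0: {4}, 1: {0}, 2: {0, 1, 4}, 3: {0, 1, 2}}
HIGHER = {0: {0, 1, 4}, 1: {0, 1, 2}, 2: {0, 1, 2, 3, 4}, 3: {0, 1, 2, 3, 4}}

def neighbors(state):
    """States reachable from `state` by one legal move; state[0] is the lowest digit."""
    out = []
    for i, d in enumerate(state):
        lower, higher = state[:i], state[i + 1:]
        for v in ({d - 1, d} & set(LOWER)):          # v -- v+1 moves touching digit d
            new = d - 1 if v == d - 1 else d + 1
            if all(x in LOWER[v] for x in lower) and all(x in HIGHER[v] for x in higher):
                out.append(state[:i] + (new,) + state[i + 1:])
    return out

def is_gray_path(n):
    """Greedy walk from 00..0; True iff it visits all 5**n states and ends at 44..4."""
    cur, seen = (0,) * n, {(0,) * n}
    while True:
        nxt = [s for s in neighbors(cur) if s not in seen]
        if not nxt:
            break
        if len(nxt) > 1:
            return False                              # branching: not a single path
        cur = nxt[0]
        seen.add(cur)
    return len(seen) == 5 ** n and cur == (4,) * n

print([is_gray_path(n) for n in (1, 2, 3)])
```

This only re-confirms the small cases from the paper, of course; it is not a proof for general $n$.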
|
|puzzle|gray-code|
| 0
|
Examples of mathematical results discovered "late"
|
What are examples of mathematical results that were discovered surprisingly late in history? Maybe the result is a straightforward corollary of an established theorem, or maybe it's just so simple that it's surprising no one thought of it sooner. The example that makes me ask is the 2011 paper John Baez mentioned called "Two semicircles fill half a circle", which proves a fairly simple geometrical fact similar to those that have been pondered for thousands of years.
|
Answer Mirsky's theorem (1971) in order theory was discovered late. Background In 1950, Robert Dilworth published what is today known as Dilworth's theorem : In any finite poset $(X,\leq)$ , the size of the largest antichain equals the minimum number of blocks of a partition of $X$ into chains. By now, several proofs are known, but for the "hard" implication " $\Rightarrow$ ", all of them are somewhat involved inductive proofs. There is a pretty obvious "dual" of the statement, today known as Mirsky's theorem : In any finite poset $(X,\leq)$ , the size of the largest chain equals the minimum number of blocks of a partition of $X$ into antichains. Surprisingly, no one seems to have thought about it for two decades, until it was published in 1971 by Leon Mirsky. All the more surprising as the proof turned out to be much simpler than everything known for Dilworth's theorem. For the hard implication " $\Rightarrow$ ", there is a direct construction of the partition as the preimages of the map sending each $x \in X$ to the length of the longest chain with top element $x$ .
|
|soft-question|examples-counterexamples|math-history|big-list|
| 0
|
Prove $\ker(A+T(A))\subseteq \ker(A)$ for $A\geq 0$ and $T$ positive linear map
|
Let $A\geq 0$ be a positive semi-definite complex matrix in $M_d(\mathbb{C})$ . Let $T:M_d(\mathbb{C})\to M_d(\mathbb{C})$ be a positive linear map between $d\times d$ complex matrices, i.e., $A\geq 0\Rightarrow T(A)\geq 0$ . I want to prove that $\ker(A+T(A))\subseteq \ker(A)$ . Intuitively this should be easy in the following sense: on one extreme, consider $T(A) \propto I_d$ (identity matrix), in which case the sum will give trivial kernel (essentially because in the spectral decomposition of $A$ , all the zero eigenvalues are shifted to positive value), hence $A+T(A)$ can remove all vectors in the kernel of $A$ . On the opposite extreme, $T(A)=A$ will make their kernels equal. There are easy intermediate cases, namely $[T(A),A]=0$ , in which case we can diagonalize both simultaneously, and the inclusion follows by analysing whether $T(A)$ shrinks or expands the support of the nonzero block diagonal. However, when they do not commute, I am not sure if there is an easy way to do this
|
$A=B^*B$ and $T(A)=C^*C$ for some $B,C\in M_d(\Bbb C)$ . $\forall x\in\Bbb C^d$ , if $(A+T(A))x=0$ then $$0=x^*(A+T(A))x=\|Bx\|^2+\|Cx\|^2,$$ so $Bx=0$ , hence $Ax=B^*Bx=B^*0=0$ .
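A numerical illustration in numpy (my own construction, not from the answer): build $A=B^*B$ with a prescribed kernel vector $e_0$, a completely positive map $T(X)=\sum_i K_i^*XK_i$ whose hypothetical Kraus operators fix $e_0$ so that $\ker(A+T(A))$ is nontrivial, and check that every null vector of $A+T(A)$ is killed by $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# PSD matrix A = B^* B whose kernel contains e_0 (first column of B is zero).
B = rng.standard_normal((3, d)) + 1j * rng.standard_normal((3, d))
B[:, 0] = 0
A = B.conj().T @ B

# Positive (even completely positive) map T(X) = sum_i K_i^* X K_i,
# with Kraus operators chosen to fix e_0, so ker(A + T(A)) is nontrivial.
Ks = []
for _ in range(2):
    K = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    K[:, 0] = 0; K[0, :] = 0; K[0, 0] = 1      # K e_0 = e_0
    Ks.append(K)
TA = sum(K.conj().T @ A @ K for K in Ks)

# Null space of A + T(A) via SVD; every null vector should also be killed by A.
_, s, Vh = np.linalg.svd(A + TA)
null = Vh[s < 1e-10 * s.max()].conj().T        # columns span ker(A + T(A))
assert null.shape[1] >= 1                      # the kernel really is nontrivial
print(np.linalg.norm(A @ null))                # ≈ 0: ker(A + T(A)) ⊆ ker(A)
```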
|
|linear-algebra|matrices|linear-transformations|matrix-rank|positive-semidefinite|
| 1
|
Risk probabilities
|
In Risk, if a player attacks another player they can roll up to 3 dice. The defending player can choose to roll either 1 or 2 dice in defence. Let's say that the attacker's three dice are labelled $B_1$ , $B_2$ , and $B_3$ , and the defender's two dice are labelled $C_1$ and $C_2$ . They are listed such that $B_{(1)} \le B_{(2)} \le B_{(3)}$ , where $B_{(3)}$ denotes the highest number of eyes of the attacker's three dice, and $C_{(1)} \le C_{(2)}$ , where $C_{(2)}$ denotes the highest number of eyes of the defender's two dice. I want to find the probability and distribution function for $C_{(2)}$ . My TA wrote on the blackboard that we could use the fact that the event $\{C_{(2)}\leq k\}$ for $k \in \{1,2,3,4,5,6\}$ fulfills: $ \{C_{(2)}\le k\} = \{C_1\le k\} \cap \{C_2\le k\}$ But I just can't see what he means. I know that the probability for a single die roll is: $$P(j=k)=\frac{1}{6}\quad \quad \forall k\in \{1,2,3,4,5,6\} $$
|
Since $C_{(2)}$ is the higher value of the two dice, $C_{(2)}=k$ implies $C_{(1)}\le{k}$ . So both dice rolled numbers that are $k$ or less. The number of outcomes where both values are $k$ or less is $k^2$ . From these outcomes, we need to remove the ones where neither $C_{(1)}=k$ nor $C_{(2)}=k$ . These are the ones where both dice have a value less than $k$ (i.e. $k-1$ or below). The number of such outcomes is $(k-1)^2$ . So the number of outcomes we are left with is $k^2 - (k-1)^2$ . Divide this by the total number of outcomes to get the requested probability: $$ P=\frac{k^2 - (k-1)^2}{36} $$
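The counting argument can be confirmed by exact enumeration of all 36 equally likely rolls (my own check, not part of the answer):

```python
from itertools import product
from fractions import Fraction

# Exact distribution of the defender's highest die C_(2), compared with the
# closed form (k^2 - (k-1)^2)/36 derived above.
dist = {k: Fraction(sum(1 for c1, c2 in product(range(1, 7), repeat=2)
                        if max(c1, c2) == k), 36) for k in range(1, 7)}
formula = {k: Fraction(k**2 - (k - 1)**2, 36) for k in range(1, 7)}
print(dist == formula, sum(dist.values()))   # True 1
```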
|
|probability|probability-distributions|
| 1
|
Prove $\ker(A+T(A))\subseteq \ker(A)$ for $A\geq 0$ and $T$ positive linear map
|
Let $A\geq 0$ be a positive semi-definite complex matrix in $M_d(\mathbb{C})$ . Let $T:M_d(\mathbb{C})\to M_d(\mathbb{C})$ be a positive linear map between $d\times d$ complex matrices, i.e., $A\geq 0\Rightarrow T(A)\geq 0$ . I want to prove that $\ker(A+T(A))\subseteq \ker(A)$ . Intuitively this should be easy in the following sense: on one extreme, consider $T(A) \propto I_d$ (identity matrix), in which case the sum will give trivial kernel (essentially because in the spectral decomposition of $A$ , all the zero eigenvalues are shifted to positive value), hence $A+T(A)$ can remove all vectors in the kernel of $A$ . On the opposite extreme, $T(A)=A$ will make their kernels equal. There are easy intermediate cases, namely $[T(A),A]=0$ , in which case we can diagonalize both simultaneously, and the inclusion follows by analysing whether $T(A)$ shrinks or expands the support of the nonzero block diagonal. However, when they do not commute, I am not sure if there is an easy way to do this
|
The $T$ here is just a distraction. In general, given any two positive semi-definite matrices $A$ and $B$ , we have $\ker(A+B)=\ker(A)\cap\ker(B)$ . Note that $\langle Ax,x\rangle=0$ only if $Ax=0$ (and similarly for $Bx$ ), because $$ 0\le\langle A(Ax-tx),Ax-tx\rangle =\langle A(Ax),Ax\rangle-2t\langle Ax,Ax\rangle $$ even if the real number $t$ approaches positive or negative infinity. It follows that \begin{align*} x\in\ker(A+B) &\iff (A+B)x=0\\ &\iff \langle (A+B)x,x\rangle=0\\ &\iff \langle Ax,x\rangle+\langle Bx,x\rangle=0\\ &\iff \langle Ax,x\rangle=\langle Bx,x\rangle=0\\ &\iff Ax=Bx=0\\ &\iff x\in\ker(A)\cap\ker(B).\\ \end{align*}
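The general fact $\ker(A+B)=\ker(A)\cap\ker(B)$ for PSD matrices is easy to test numerically (my own sketch, writing $A=P^TP$ and $B=Q^TQ$ so that both kernels are nontrivial):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6
P = rng.standard_normal((2, d))   # ker(A) has dimension 4 (generically)
Q = rng.standard_normal((3, d))   # ker(Bm) has dimension 3 (generically)
A, Bm = P.T @ P, Q.T @ Q          # two PSD matrices

def null_dim(M, tol=1e-10):
    s = np.linalg.svd(M, compute_uv=False)
    return int(sum(s < tol * s.max()))

# ker(A) ∩ ker(Bm) = ker(P) ∩ ker(Q) is the null space of the stacked matrix.
inter = d - np.linalg.matrix_rank(np.vstack([P, Q]))
print(null_dim(A + Bm), inter)    # equal: ker(A+B) = ker(A) ∩ ker(B)
```

Here $A+B_m=\begin{pmatrix}P\\Q\end{pmatrix}^T\begin{pmatrix}P\\Q\end{pmatrix}$, which is exactly why the two dimensions agree.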
|
|linear-algebra|matrices|linear-transformations|matrix-rank|positive-semidefinite|
| 0
|
Can an infinite sum of non-computable numbers be computable, such that all finite sums of subsets of terms are non-computable?
|
Background In the following question , User1 asks whether an infinite sum of irrational numbers can be rational. Multiple answers 1 indicate the answer to this question is 'yes'. For instance, Rasmus Erlemann notes that $$\tan \frac{\pi}4=\sum_{n=0}^\infty \frac{(-1)^n 2^{2n+2}(2^{2n+2}-1)B_{2n+2}}{(2n+2)!}\left(\frac{\pi}4\right)^{2n+1}=1, $$ by the Maclaurin series of $\tan(\cdot)$ . The Question This question is asked in a similar spirit. I wonder whether there are any infinite series consisting solely of non-computable terms that amount to a computable number. Examples of such non-computable numbers can be found over here . They include Chaitin's constants . To make the question more precise, define $$ C = \sum_{k=1}^{\infty} u_{k} . $$ Here, all $u_{k}$ are non-computable numbers. I am looking for explicit examples of infinite series such that $C$ is a computable number, and all $u_{k}$ are both positive and linearly independent over $\mathbb{Q}$ . Added Although user TonyK answers
|
Let $\alpha\in(0,1)$ be an uncomputable number, and let $\sum u_k$ be any convergent series of positive computable terms linearly independent over $\Bbb Q$ . Then the series $$\alpha u_1+(1-\alpha)u_1+\alpha u_2+(1-\alpha)u_2+\cdots$$ satisfies your conditions (because any linear dependency over $\Bbb Q$ would give us a way to compute the uncomputable $\alpha$ ). You asked for an explicit example, so take for instance $u_k=p_k^{-3/2}$ , where $p_k$ is the $k$ th prime.
|
|sequences-and-series|examples-counterexamples|computability|
| 0
|
$e^{xL}(x+x^3)<\frac{3\pi^2x}{L^2}$ for $x>0$ small enough and $L>0$ small enough
|
I'm trying to prove that $e^{xL}(x+x^3)<\frac{3\pi^2x}{L^2}$ for $x>0$ small enough and $L>0$ small enough and fixed, that is, there exists $\delta>0$ such that $e^{xL}(x+x^3)<\frac{3\pi^2x}{L^2}$ for $x \in (0,\delta)$ . However I'm stuck on this problem. I tried using continuity of functions but without success. So I plotted a graph to see if the result made sense and it seems that it does.
|
It isn't true for every $L>0$ because $$e^{xL}(x+x^3)=x+Lx^2+O(x^3)$$ as $x\to 0$ . Comparing leading terms with the right-hand side $\frac{3\pi^2}{L^2}x$ , the inequality holds for all sufficiently small $x>0$ exactly when $1<\frac{3\pi^2}{L^2}$ , i.e. when $L<\pi\sqrt3\approx 5.44$ .
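A quick numerical sketch (my own, with arbitrarily chosen sample values $L=1$ and $L=6$ on either side of the threshold $\pi\sqrt3$):

```python
import math

# Compare e^{xL}(x + x^3) with 3*pi^2*x/L^2 for a small x and two values of L.
def lhs(x, L): return math.exp(x * L) * (x + x**3)
def rhs(x, L): return 3 * math.pi**2 * x / L**2

x = 1e-3
print(lhs(x, 1.0) < rhs(x, 1.0))   # True:  L = 1 < pi*sqrt(3), inequality holds near 0
print(lhs(x, 6.0) < rhs(x, 6.0))   # False: L = 6 > pi*sqrt(3) ≈ 5.441, it fails
```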
|
|real-analysis|inequality|continuity|
| 0
|
how to show all spin groups are double covers?
|
If that's the definition, then how do we know double covers of $SO(n)$ exist?
|
This is a consequence of covering theory. Its central theorem says that for a reasonable topological space (like the manifold $\operatorname{SO}(n)$ ) path-connected coverings correspond in a certain way to subgroups of the fundamental group, and the degree of such a covering corresponds to the index of the subgroup. The fundamental group of $\operatorname{SO}(n)$ for $n\ge 3$ is $\mathbb Z/2\mathbb Z$ , hence the trivial group is an index 2 subgroup. The mentioned theorem now implies the existence of an (essentially unique) path-connected double covering.
|
|spin-geometry|
| 1
|
Ramanujan-Type Double Sum Infinite Series for $1/\pi$
|
$$\sum_{n,m=0}^{\infty}\binom{2n}{n}\binom{2m}{m}\binom{2n+2m}{n+m}^3\left(\frac{1+6n+6m}{2^{10n+10m}}\right)=\frac{4}{\pi}$$ These are Ramanujan-Type Series for $1/\pi$ but based on a Double Sum Infinite Series. Basically it's based on the following expansion of Complete Elliptic Integral of the First Kind $K$ with parameter $k$ . $$K=\frac{\pi}{2}\sum_{n,m=0}^{\infty}\binom{2n}{n}\binom{2m}{m}\binom{2n+2m}{n+m}^2\frac{k^{2n+2m}}{2^{6n+6m}}$$ From here on we can follow the similar process as deriving the normal $1/\pi$ series. I haven't particularly seen these double sum ones, so was wondering whether anyone has worked upon them and looking for some reference on these.
|
We can rewrite it by substituting $k=m+n$ , $$ \sum_{n\ge0}\frac{\left ( 6n+1 \right ) \left ( \frac12 \right )_n^3 }{2^{4n}(1)_n^3} \sum_{k=0}^{n} \binom{2k}{k} \binom{2n-2k}{n-k} =\frac{4}{\pi}. $$ The inner binomial sum is elementary and equals $2^{2n}$ , giving the common type of Ramanujan's $\pi$ formula. Z.W. Sun once conjectured lots of series identities in the paper . Some of them involve similar factors but seem much harder to handle.
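The claimed double sum can be checked numerically by truncation (my own verification; the terms decay roughly like $4^{-(n+m)}$, so a modest cutoff suffices):

```python
import math

# Partial double sum of the claimed Ramanujan-type series, truncated at n+m <= N.
def partial(N):
    s = 0.0
    for n in range(N + 1):
        for m in range(N + 1 - n):
            k = n + m
            s += (math.comb(2*n, n) * math.comb(2*m, m) * math.comb(2*k, k)**3
                  * (1 + 6*k)) / 2.0**(10*k)
    return s

print(partial(40), 4 / math.pi)   # both ≈ 1.2732395
```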
|
|calculus|sequences-and-series|reference-request|elliptic-integrals|
| 1
|
Basis functions that are one at a point and zero at others
|
1D basis functions In 1D space, I'm looking for a set of basis functions like below. They would be equal to one at certain points and zero at others. 2D schematics In 2D space, the basis function could be something like below, taken from this reference : Sigmoid? I don't want a conditional function with if and else . I'm just looking for a straightforward function that is easy to handle, like what the sigmoid function provides for step behavior: https://en.wikipedia.org/wiki/Sigmoid_function Derivative of sigmoid I was thinking about using the derivative of the sigmoid function. Question Can anyone shed some light on the subject I'm exploring? Am I on the right path? Can the sigmoid derivative be used as my basis functions? Update I realized I may be looking for a bump function: https://en.wikipedia.org/wiki/Bump_function but without the if else condition.
|
The function $ \phi:\Bbb R\to [0,1] $ suggested by @SassatelliGiulio works: $$ \phi(x) = \max\left(0,\min\left(\frac{x-x_0+\delta}{\delta},\frac{x_0+\delta-x}{\delta}\right)\right) $$ If you want to avoid $\max$ and $\min$ as well, write $a=\frac{x-x_0+\delta}{\delta}$ and $b=\frac{x_0+\delta-x}{\delta}$ and use $\min(a,b)=\frac{a+b-|a-b|}{2}$ and $\max(0,c)=\frac{c+|c|}{2}$ with $|t|=\sqrt{t^2}$ , which leads to the single closed form: $$ \phi(x) = \frac{m+\sqrt{m^2}}{2},\qquad m=\frac{a+b-\sqrt{(a-b)^2}}{2}. $$ For $x_0 = 0.25$ and $\delta = 0.125$ , the plot is a unit-height hat supported on $[0.125,\,0.375]$ . The integral of the above plot from 0 to 1 is 0.125 .
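A short numpy sketch of this hat function (my own code; `np.maximum`/`np.minimum` play the role of the branch-free $\max$/$\min$):

```python
import numpy as np

# Piecewise-linear "hat" basis function without explicit if/else,
# centred at x0 with half-width delta.
def hat(x, x0=0.25, delta=0.125):
    return np.maximum(0.0, np.minimum((x - x0 + delta) / delta,
                                      (x0 + delta - x) / delta))

x = np.linspace(0.0, 1.0, 100001)
y = hat(x)
integral = np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2    # trapezoid rule
print(hat(np.array([0.25, 0.125, 0.375])), integral)    # [1. 0. 0.] and ≈ 0.125
```

It is one at its own node and zero at the neighbouring nodes, exactly the Lagrange-type property asked for.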
|
|calculus|functions|derivatives|exponential-function|finite-element-method|
| 0
|
Probabilities in the board game risk
|
In Risk, if a player attacks another player they can roll up to 3 dice. The defending player can choose to roll either 1 or 2 dice in defence. Let's say that the attackers three dice are labelled $B_1$ , $B_2$ , and $B_3$ , and the defenders two dice are labelled $C_1$ and $C_2$ . They are listed such that $B_{(1)}$ ≤ $B_{(2)}$ ≤ $B_{(3)}$ where $B_{(3)}$ denotes the highest number of eyes of the three dices of the attacker, and that $C_{(1)}$ ≤ $C_{(2)}$ where $C_{(2)}$ denotes the highest number of eyes of the two dices of the defender. I have found the probability function for the defenders highest dice roll, $C_{(2)}$ which is $$P(J=k)=\frac{k^2 - (k-1)^2}{36} for all k = 1,2,3,4,5,6$$ And I have found the probability function for the attacker's highest dice, $B_{(3)}$ which is $$P(J=k)=\frac{k^3 - (k-1)^3}{216} for all k = 1,2,3,4,5,6$$ I want to find the probability $P(B_{(3)}$ > $C_{(2)}$ ) and $P(B_{(2)}$ > $C_{(1)})$ , the probability of the attackers highest dice being larger
|
About the highest rolls. $$\sum_{k=1}^{6} \frac{k^2 - (k-1)^2}{36} \cdot \frac{k^3 - (k-1)^3}{216}$$ looks like a wrong formula. You want $A>D$ , so the formula should look like $$\sum_{k=2}^{6} \left(\sum_{n=1}^{k-1}\frac{n^2 - (n-1)^2}{36}\right) \cdot \frac{k^3 - (k-1)^3}{216}.$$ I’ll explain the formula. Let us assume that the Attacker rolled maximum number $k$ (note that $k\ge 2$ for a chance for the Defender to roll strictly less). Then the Defender can roll any number from $1$ to $k-1$ . So we multiply the probability of $A=k$ by the sum of probabilities $D=1$ , …, $D=k-1$ . It is equal to $$\sum_{k=2}^{6} \left(\sum_{n=1}^{k-1}(2n-1)\right) \cdot \frac{k^3 - (k-1)^3}{6^5}=$$ $$=\sum_{k=2}^{6} (k-1)^2 \cdot \frac{k^3 - (k-1)^3}{6^5}=$$ $$=\frac{3667}{7776}\approx 0.4716.$$ Now about the second highest rolls. The probability function of $k$ being the second highest roll for the Attacker can be calculated as $$\frac{6(k-1)(6-k)+1+3(k-1)+3(6-k)}{216}.$$ The four summands in the numerator count, respectively: one die below $k$ and one above ( $3!=6$ orderings), all three dice equal to $k$ , two dice equal to $k$ with one below, and two dice equal to $k$ with one above.
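The value $\frac{3667}{7776}$ can be confirmed by exact enumeration over all $6^5$ equally likely rolls (my own check, not part of the answer):

```python
from itertools import product
from fractions import Fraction

# Enumerate all 6^5 outcomes: attacker rolls 3 dice, defender rolls 2.
wins_high = 0
for roll in product(range(1, 7), repeat=5):
    b, c = roll[:3], roll[3:]        # attacker's and defender's dice
    if max(b) > max(c):
        wins_high += 1
print(Fraction(wins_high, 6**5))     # 3667/7776 ≈ 0.4716
```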
|
|probability|probability-theory|probability-distributions|
| 1
|
Assumption of a partial derivative Lars Hörmander does in "The Analysis of Linear Partial Operators I"
|
In the proof of Theorem 5.2.1 (equality (5.2.5)), Hörmander claims that for any smooth function $\psi \in C^{\infty} (\mathbb{R}^n)$ the following equality is true: $ \frac{\partial}{\partial \epsilon}(\epsilon^{-n} \psi (\frac{x}{\epsilon})) = \sum_{j=1}^n \frac{\partial}{\partial x_j}(\epsilon^{-n}\psi_j(\frac{x}{\epsilon})) $ where $\psi_j(x) = -x_j\psi(x)$ . But with product and chain rule I get that for the left side $\frac{\partial}{\partial \epsilon}(\epsilon^{-n} \psi (\frac{x}{\epsilon})) = -n\epsilon^{-n-1}\psi(\frac{x}{\epsilon})-\epsilon^{-2}\sum_{j=1}^n x_j\frac{\partial}{\partial x_j}(\epsilon^{-n}\psi(\frac{x}{\epsilon}))$ and for the right side $\sum_{j=1}^n \frac{\partial}{\partial x_j}(\epsilon^{-n}\psi_j(\frac{x}{\epsilon})) = -n\epsilon^{-n-1}\psi(\frac{x}{\epsilon})-\epsilon^{-1}\sum_{j=1}^n x_j\frac{\partial}{\partial x_j}(\epsilon^{-n}\psi(\frac{x}{\epsilon})) $ so in the 2nd equality the factor $1/\epsilon$ is missing. I have gone through these calculations multiple times
|
But with product and chain rule I get that for the left side $\frac{\partial}{\partial \epsilon}(\epsilon^{-n} \psi (\frac{x}{\epsilon})) = -n\epsilon^{-n-1}\psi(\frac{x}{\epsilon})-\epsilon^{-2}\sum_{j=1}^n x_j\frac{\partial}{\partial x_j}(\epsilon^{-n}\psi(\frac{x}{\epsilon}))$ is not correct, it should be $\frac{\partial}{\partial \varepsilon}(\varepsilon^{-n} \psi (\frac{x}{\varepsilon})) = -n\varepsilon^{-n-1}\psi(\frac{x}{\varepsilon})-\varepsilon^{-1}\sum_{j=1}^n x_j\frac{\partial}{\partial x_j}(\varepsilon^{-n}\psi(\frac{x}{\varepsilon}))$ , since $\frac{\partial}{\partial x_j}(\varepsilon^{-n}\psi(\frac{x}{\varepsilon})) = \varepsilon^{-n}\cdot \frac{1}{\varepsilon}\frac{\partial \psi}{\partial x_j}(x/\varepsilon)$ and not $\frac{\partial}{\partial x_j}(\varepsilon^{-n}\psi(\frac{x}{\varepsilon})) = \varepsilon^{-n}\frac{\partial \psi}{\partial x_j}(x/\varepsilon)$ . You can calculate like this: First, it is $$\varepsilon \frac{\partial}{\partial \varepsilon}\left(\varepsilon^{-n} \psi\left(\frac{x}{\varepsilon}\right)\right) = -\varepsilon^{-n}\left(n\,\psi\left(\frac{x}{\varepsilon}\right)+\sum_{j=1}^n \frac{x_j}{\varepsilon}\,\frac{\partial \psi}{\partial x_j}\left(\frac{x}{\varepsilon}\right)\right)$$
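Hörmander's identity can also be verified symbolically (a sketch of my own, for $n=2$ and a concrete but generic-looking test function in place of the arbitrary $\psi$):

```python
import sympy as sp

# Check d/de [e^{-n} psi(x/e)] = sum_j d/dx_j [e^{-n} psi_j(x/e)],
# where psi_j(y) = -y_j psi(y), for n = 2 and a sample smooth psi.
x1, x2, e = sp.symbols('x1 x2 epsilon', positive=True)
n = 2
u1, u2 = x1 / e, x2 / e                                  # the arguments x/epsilon
psi = sp.exp(-u1**2 - u2**2) * (1 + u1 * u2**2)          # psi evaluated at x/epsilon

lhs = sp.diff(e**-n * psi, e)
rhs = sp.diff(e**-n * (-u1) * psi, x1) + sp.diff(e**-n * (-u2) * psi, x2)
print(sp.simplify(lhs - rhs))   # 0
```

One concrete $\psi$ is of course no proof, but it catches exactly the kind of dropped $1/\varepsilon$ discussed above.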
|
|functional-analysis|analysis|partial-derivative|
| 1
|
How does $W \subset W^{00}$ ? Hoffman and Kunze theorem 3.18
|
Hoffman and Kunze, Theorem 3.18: If $S$ is any subset of a finite-dimensional vector space $V$ , then $(S^0)^0$ is the subspace spanned by $S$ . Here $S^0$ is the annihilator of $S$ and $S^{00}$ is the annihilator of $S^0$ . If $W$ is the subspace spanned by $S$ , then the book states that $W \subset W^{00}$ and so $W=W^{00}$ . But how is this true? If $V^{*}$ is a dual for $V$ and $V^{* *}$ is a dual for $V^{*}$ , then $W^{00}$ should be in $V^{* *}$ , not $V$ . I know that there is an isomorphism from $V$ to $V^{* *}$ , but what does that have to do with this theorem?
|
As mentioned by @Sangchul Lee, the finite dimensional vector space $V$ over the field $\mathbb{F}$ is naturally isomorphic with its double dual $V^{\ast \ast}.$ The isomorphism is the map that takes $v\in V$ to the evaluation of a functional at $v$ , i.e. $v \mapsto v^{\ast\ast}$ with $v^{\ast\ast}(f) = f(v)$ for all $f \in V^\ast.$ Let $S$ be a subset of $V$ and $W = \text{span}(S)$ the subspace of $V$ generated by $S.$ Note that $S^\circ \subseteq V^\ast$ is the set of functionals $f\colon V \to \mathbb{F}$ that vanish on $S$ , i.e. $f(s) = 0$ for all $s \in S.$ By linearity of $f$ , this also means that $f(w) = 0$ for all $w \in W.$ The double annihilator $S^{\circ\circ} \subseteq V^{\ast\ast}$ is the set of all functionals $\phi \colon V^\ast \to \mathbb{F}$ that vanish on $S^\circ,$ that is $\phi(f) = 0$ for all $f \in S^\circ.$ By the natural isomorphism, we can see $S^{\circ \circ} \subseteq V^{\ast\ast} \cong V$ as the set of elements $v \in V$ at which every functional $f \in S^\circ$ vanishes. Since every $f \in S^\circ$ vanishes on all of $W$ , this set contains $W$ , which is exactly the inclusion $W \subseteq W^{\circ\circ}$ under the identification $V \cong V^{\ast\ast}.$
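In coordinates the statement becomes very concrete, and can be checked numerically (my own sketch: functionals on $\mathbb R^d$ are row vectors, so $S^0$ is the null space of the matrix whose rows are the vectors of $S$, and $S^{00}$, pulled back through $V\cong V^{**}$, is the null space of that null space):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 5
S = rng.integers(-3, 4, size=(3, d)).astype(float)   # rows: spanning vectors of W in R^5

def null_basis(M, tol=1e-10):
    """Orthonormal basis (as rows) of {x : Mx = 0}."""
    _, s, Vh = np.linalg.svd(M)
    mask = np.concatenate([s, np.zeros(Vh.shape[0] - len(s))]) < tol * max(s.max(), 1.0)
    return Vh[mask]

ann  = null_basis(S)      # S^0: row vectors vanishing on every element of S
ann2 = null_basis(ann)    # S^00, identified with a subspace of R^5

assert ann2.shape[0] == np.linalg.matrix_rank(S)     # same dimension as W
assert np.allclose(S, S @ ann2.T @ ann2)             # every vector of S lies in S^00
print("W = W^00 verified in coordinates")
```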
|
|linear-algebra|vector-spaces|proof-explanation|dual-spaces|vector-space-isomorphism|
| 1
|
Prove that every prime larger than $3$ gives a remainder of $1$ or $5$ if divided by $6$
|
Can we prove that every prime larger than $3$ gives a remainder of $1$ or $5$ if divided by $6$ and if so, which formulas can be used while proving?
|
A proof without words: $\begin{bmatrix} 0 & \textbf1 & 2 & 3 & 4 & \textbf5 \\ 6 & 7 & 8 & 9 & 10 & 11 \\ 12 & 13 & 14 & 15 & 16 & 17 \\ 18& 19 & 20 & 21 & 22 & 23\\ 24 & \color{red}{25} & 26 & 27 & 28 & 29\\ 30 & 31 & 32 & 33 & 34 & \color{red}{35} \\ 36& 37 & 38 & 39 & 40& 41 \\ 42& 43 & 44& 45 & 46 & 47 \\ 48& \color{red}{49} & 50 & 51& 52 & 53 \\ 54& \color{red}{55}&56 &57 & 58 & 59\\..&..&..&..&..&.. \end{bmatrix}$
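The picture says that only the columns of $1$ and $5$ can contain primes beyond the first row; every other column consists of multiples of $2$ or $3$. A quick empirical confirmation (my own, with a simple sieve):

```python
# Every prime p > 3 satisfies p % 6 in {1, 5}: the other residue classes
# mod 6 are divisible by 2 or 3. Checked up to 10^4 with a sieve.
def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

print(all(p % 6 in (1, 5) for p in primes_upto(10_000) if p > 3))   # True
```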
|
|elementary-number-theory|discrete-mathematics|modular-arithmetic|
| 0
|
Number of paths on a 6X4 grid
|
How many paths are there from start to end on a 6 × 4 grid as shown in the picture? Note that any such path comprises 6 horizontal unit lengths and 4 vertical unit lengths. My work: 5^6. My logic: there are 5 horizontal rows to which 6 natural numbers need to be allotted. The answer is something else; why am I wrong?
|
Well, it is just the number of unique permutations of $\{R,R,R,R,R,R,U,U,U,U\}$ . There are 10 terms, so it would be $10!$ , but 6 of them are the same, so we have to divide by the permutations of the R terms ( $6!$ ), and the other 4 are the same, so we have to divide by the permutations of the U terms too ( $4!$ ). $$N = \frac{10!}{6!\cdot4!} = \frac{10\cdot9\cdot8\cdot7}{4\cdot3\cdot2\cdot1} = 10\cdot3\cdot7 = \boxed{210}$$
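The count can be cross-checked three ways in a few lines (my own aside, not part of the answer):

```python
from itertools import combinations
from math import comb, factorial

# A path is a word with 6 R's and 4 U's: multiset permutations 10!/(6! 4!).
by_formula = factorial(10) // (factorial(6) * factorial(4))
by_choice  = comb(10, 4)                        # choose the positions of the U-steps
by_brute   = sum(1 for _ in combinations(range(10), 4))
print(by_formula, by_choice, by_brute)          # 210 210 210
```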
|
|linear-algebra|
| 0
|
Proof that if A and B are connected sets, $A \cup B$ is connected iff $(\bar A \cap B) \cup (A \cap \bar B) \neq \emptyset$
|
The problem says: Let $(X,\tau)$ be a topological space so that $A,B \subset X$ are connected sets in said space. For $M\subset X, \bar M$ refers to the closure of $M$ . Prove that $A \cup B$ is connected if and only if $(\bar A \cap B) \cup (A \cap \bar B) \neq \emptyset$ . I have no idea where to start. Can someone help me?
|
(For the statement to be correct, we need to assume that $A$ and $B$ are both nonempty.) By definition, $A \cup B$ is disconnected iff there are 2 disjoint nonempty open subsets (!) $X,Y$ of $A \cup B$ such that $X \cup Y = A \cup B$ . The first step is to see that if such $X$ and $Y$ exist, they must coincide with $A$ and $B$ (in either of the two ways), i.e. either $X=A, Y=B$ or $X=B, Y=A$ . This follows from the non-emptiness of $A$ and the observation that if $X \cap A \neq \emptyset$ and $Y \cap A \neq \emptyset$ , then, since both these intersections are open in $A$ , $A$ would be disconnected. Similarly, $B$ has to be fully contained in one of $X$ or $Y$ . Also, neither $X$ nor $Y$ can be the full $A \cup B$ , since they are disjoint nonempty subsets. From this it follows that $A \cup B$ is disconnected iff $A \cap B = \emptyset$ and both $A$ and $B$ are open in $A \cup B$ . From here it might be clearer how to go about showing that this is equivalent to the condition $(\bar A \cap B) \cup (A \cap \bar B) = \emptyset$ .
|
|general-topology|proof-writing|connectedness|
| 0
|
Inverse Mellin transform of the Mellin transform of the binomial PMF
|
The probability mass function of the Binomial distribution is given by $$\begin{equation} f(x)=\binom{n}{x} p^x (1-p)^{n-x}, \end{equation}$$ where $p \in [0,1]$ and $x=\{0,1,\dots,n\}$ (finite support). The Mellin transform of $f$ is therefore $$\begin{align} \mathcal{M}\{f\}(s)&=\int_0^{\infty}x^{s-1} \binom{n}{x} p^x (1-p)^{n-x}\mathrm{d}x\\ &=\sum_{k=0}^{n}k^{s-1} \binom{n}{k} p^k (1-p)^{n-k}. \end{align}$$ Since $f$ is a probability mass function, the integral becomes a sum. Now I'd like to apply the inverse Mellin transform to get back $f$ : $$\begin{align} \mathcal{M}^{-1}\{\mathcal{M}\{f\}\}(x) &= \frac{1}{2 \pi i} \int_{c-i\infty}^{c+i\infty} x^{-s} \left( \sum_{k=0}^{n}k^{s-1} \binom{n}{k} p^k (1-p)^{n-k} \right) \mathrm{d}s \\ &=\frac{1}{2 \pi i} \sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k} \int_{c-i\infty}^{c+i\infty} x^{-s} k^{s-1} \mathrm{d}s. \end{align}$$ The problem is that the integral $\int_{c-i\infty}^{c+i\infty} x^{-s} k^{s-1} \mathrm{d}s$ doesn't seem to converge.
|
It's not the Mellin transform of $$f(x)=\binom{n}{x}\, p^x\, (1-p)^{n-x}\tag{1},$$ but rather the Mellin transform of $$g(x)=\sum\limits_{k=0}^n f(x)\, \delta (x-k)\tag{2}$$ which, assuming the convention $\int\limits_0^\infty \delta(x)\, dx=1$ (versus $\int\limits_0^\infty \delta(x)\, dx=\frac{1}{2}$ for example), is $$\mathcal{M}_x[g(x)](s)=\sum\limits_{k=0}^n \left(\int_0^{\infty} f(x)\, \delta (x-k)\, x^{s-1} \, dx\right)\\=\sum\limits_{k=0}^n f(k)\, k^{s-1}=\sum\limits_{k=0}^n \binom{n}{k}\, p^k\, (1-p)^{n-k}\, k^{s-1}\tag{3}$$ and this explains the presence of the Dirac delta terms in the inverse Mellin transform. But you still need to normalize it such that $$\sum\limits_{k=0}^n \left(\int\limits_{-\infty}^{\infty} f(x)\, \delta (x-k) \, dx\right)=\sum\limits_{k=0}^n f(k)=\sum\limits_{k=0}^n \binom{n}{k}\, p^k\, (1-p)^{n-k}=1\tag{4}.$$ The remainder of this answer illustrates an alternate perspective on the problem. Assuming the definition $$f(k)=\left\{\begin{array}{cc} \binom{n
|
|integration|complex-analysis|binomial-distribution|dirac-delta|mellin-transform|
| 0
|
Approximating norm of a Hilbert space point with the norm of a vector
|
I have the Hilbert space of square integrable functions on $[a, b]$ , and what I would like to have is to discretize this space, i.e., find a sequence of finite-dimensional Hilbert spaces $H_K$ of dimension $K$ such that $|\langle u, v \rangle - \langle u^{(K)}, v^{(K)} \rangle| \rightarrow 0$ as $K \rightarrow \infty$ , where $v^{(K)}$ is a 'discretization' of $v$ , defined appropriately. My idea was to define discretized vectors in such a way that $\langle u^{(K)}, v^{(K)} \rangle$ is the $K$ -th Riemann sum for the integral that I would get for $\langle u, v \rangle$ (say, by equally partitioning $[a, b]$ and taking the points of this partition to represent a basis for the finite spaces). But I know that I need a Lebesgue limit rather than a Riemann limit. Is such discretization possible?
|
I think this is always possible when you have a countable orthonormal basis for the Hilbert space. Consider for such a basis $\{ e_k \}_{k=1}^\infty$ , the spaces $H_K:= \text{span}\big( \{e_k\}_{k=1}^K \big)$ . Then $u^{(K)}=\sum_{k=1}^K \langle u , e_k \rangle e_k$ and $v^{(K)}=\sum_{k=1}^K \langle v , e_k \rangle e_k$ . Use the triangle inequality and Cauchy-Schwarz to get that $$ |\langle u, v \rangle - \langle u^{(K)}, v^{(K)} \rangle| \leq | \langle u-u^{(K)}, v \rangle|+ | \langle u^{(K)}, v -v^{(K)} \rangle| \leq \Vert v\Vert \cdot \| u -u^{(K)} \| + \Vert u^{(K)}\Vert \cdot \| v -v^{(K)} \|. $$ You can now use Bessel's inequality and/or Parseval's identity to get that $\| u^{(K)} \| \leq \Vert u\|$ , while $\| u-u^{(K)} \|^2 \leq \sum_{k=K+1}^\infty | \langle u,e_k\rangle |^2 \overset{K\to \infty}{\to}0$ and likewise for $v-v^{(K)}$ . Now choose $\{ e_k \}$ to be your preferred orthonormal basis of $L^2[a,b]$ .
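The construction can be illustrated numerically. In the sketch below everything is an assumption for illustration: the space $L^2[0,2\pi]$ with the trigonometric orthonormal system, inner products approximated by a midpoint rule, and the arbitrary choices $u(x)=x$, $v(x)=x^2$. The truncation error $|\langle u,v\rangle - \langle u^{(K)},v^{(K)}\rangle|$ shrinks as $K$ grows.

```python
import math

def inner(f, g, a=0.0, b=2 * math.pi, n=2000):
    # Midpoint-rule approximation of the L^2(a, b) inner product.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n))

def basis(K):
    # Orthonormal trigonometric system on [0, 2*pi], truncated at frequency K.
    es = [lambda x: 1.0 / math.sqrt(2 * math.pi)]
    for k in range(1, K + 1):
        es.append(lambda x, k=k: math.cos(k * x) / math.sqrt(math.pi))
        es.append(lambda x, k=k: math.sin(k * x) / math.sqrt(math.pi))
    return es

u = lambda x: x
v = lambda x: x * x
exact = inner(u, v)  # approximates the integral of x^3 over [0, 2*pi]

errors = []
for K in (1, 2, 5, 10, 20):
    es = basis(K)
    approx = sum(inner(u, e) * inner(v, e) for e in es)
    errors.append(abs(exact - approx))
print([round(e, 2) for e in errors])  # shrinking as K grows
```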
|
|hilbert-spaces|lebesgue-integral|
| 1
|
Eliminating $\theta$ from $x^2+y^2=\frac{x\cos3\theta+y\sin3\theta}{\cos^3\theta}$ and $x^2+y^2=\frac{y\cos3\theta-x\sin3\theta}{\cos^3\theta}$
|
An interesting problem from a 1913 university entrance examination (Melbourne, Australia): Eliminate $\theta$ from the expressions $$x^{2}+y^{2}=\frac{x \cos{3\theta}+y \sin{3\theta}}{\cos^{3}\theta} \tag1$$ $$x^{2}+y^{2}=\frac{y \cos{3\theta}-x \sin{3\theta}}{\cos^{3}\theta} \tag2$$ Find expressions for $x$ and $y$ in terms of $\theta$ As with many of these historic problems, I'm sure it has been discussed somewhere at length... Solutions involving complex numbers would have been acceptable at the time. (EDIT 20/Feb Australian time) I thought I had a solution but upon review I don't think it works (I found an expression for y, then substituted. It was very messy and I don't think the algebra was totally correct.) so editing the post to say: any solutions or suggestions appreciated. The wording of similar questions on the paper suggests that a "simplified" expression is possible. (Edit 21/Feb) Thanks everyone for the excellent suggestions. Really appreciated. I am wondering if the subs
|
Equating the numerators of the given right-hand sides, letting $ t= \theta, $ $$ x \cos(3t) +y \sin( 3t) = y \cos(3t) -x \sin(3t) $$ $$ \text{ Let} ~x=r \cos \alpha ,~ y= r \sin \alpha ~, r=\sqrt{x^2+y^2}, \alpha = \tan ^{-1}(y/x) $$ Plug in and simplify the trig: $$ \cos(\alpha -3t) =\sin (\alpha-3t),~ \tan(\alpha-3t)=1$$ $$ 3\theta= \alpha -(2k+1) \pi/4 $$ Let $$\gamma = \tan ^{-1}(y/x)-(2k+1) \pi/4, 3 \theta= \gamma, \theta=\gamma/3 ~~ \cos 3\theta= \cos \gamma,~\sin 3\theta= \sin \gamma; $$ Plug into the RHS of the first given equation $$ x^2+y^2=\frac{x \cos \gamma+ y \sin \gamma}{\cos^{3} (\gamma/3)}$$ which contains $ x,y$ and no $\theta,$ but is amenable to further simplification. The plot (not shown here), if I made no mistakes, confirms this.
|
|trigonometry|math-history|
| 0
|
How to evaluate double integral: $\iint \frac{y}{x} \, dx \, dy$ if it is in the first quadrant and is bounded by: $y=0$, $y=x$, and $x^2 + 4y^2 = 4$
|
I want to evaluate this double integral: $$ \iint \frac{y}{x} \, dx \, dy \quad $$ which is bounded by functions: $$ y = 0 \quad $$ $$ y = x \quad $$ $$ x^2 + 4y^2 = 4 \quad $$ And is in the first quadrant. The thing is that I don't quite get how to determine if the domain is x-simple, y-simple, or both exactly. I can determine it in easy graphs, but most of the time I'm stuck in there. After determining it, which variable should I take as constant and how should I place upper and lower bounds in the integral? These are the main things that stuck me on this problem. I would love any kind of help here and thank you in advance. We say that the domain D in the xy -plane is y-simple if it is bounded by two vertical lines x = a and x = b and two continuous graphs y = c(x) and y = d(x) between these lines. Lines parallel to the y-axis intersect a y-simple domain in an interval (possibly a single point) if at all. Similarly, D is x-simple if it is bounded by horizontal lines y = c and y = d a
|
Let's take a look at the domain $$\Omega=\{(x,y)\in\mathbb{R}^2\colon 0\leq y\leq x,\, x^2+4y^2\leq4\}$$ and notice that $$x^2+4y^2\leq4\iff \frac{x^2}{2^2}+y^2\leq 1$$ In red we have $\Omega$ , in green the ellipse described by $x^2+4y^2\leq4$ and in blue we have $y=x$ . In order to compute $$\iint_{\Omega} \frac{y}{x} \, dx \, dy$$ we use the "elliptic change of variables", which is a variation of polar coordinates: $$\begin{cases} x=x_0+a\rho\cos(\theta)\\ y=y_0+b\rho\sin(\theta) \end{cases} $$ where $\rho\geq 0, \theta\in [0,2\pi]$ and $(x_0,y_0)$ is the center of the ellipse and $a, b$ are the two parameters that describe the ellipse. In our case, $$(x_0,y_0)=(0,0)$$ $$a=2, b=1$$ so we get $$\begin{cases} x=2\rho\cos(\theta)\\ y=\rho\sin(\theta) \end{cases} $$ so we have that \begin{align} 0\leq y\leq x &\iff 0\leq \rho\sin(\theta)\leq 2\rho\cos(\theta)\\ &\iff 0\leq \tan(\theta)\leq 2 \\ &\iff 0\leq \theta\leq\arctan(2) \end{align} and, substituting into the ellipse inequality,
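Completing the computation (my own arithmetic, not part of the truncated answer): the Jacobian of this change of variables is $2\rho$, the integrand becomes $\tan\theta/2$, and the ellipse inequality gives $0\le\rho\le1$, so the integral equals $\int_0^{\arctan 2}\tan\theta\,d\theta\int_0^1\rho\,d\rho=\frac14\ln5$. A crude midpoint rule over the original region agrees:

```python
import math

def integral_numeric(n=800):
    # Midpoint rule on the box [0,2] x [0,1]; keep cells whose center lies
    # in Omega = {(x, y): 0 <= y <= x, x^2 + 4y^2 <= 4}.
    hx, hy = 2.0 / n, 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx
        for j in range(n):
            y = (j + 0.5) * hy
            if y <= x and x * x + 4.0 * y * y <= 4.0:
                total += (y / x) * hx * hy
    return total

approx = integral_numeric()
exact = 0.25 * math.log(5)  # (1/4) ln 5, from the elliptic substitution
print(round(approx, 3), round(exact, 3))
```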
|
|integration|analysis|functions|
| 1
|
fundamental group of a sphere with 2 handles
|
Prove that the fundamental group of a sphere with 2 handles contains a free group of 2 generators. Intuitively, it is clear that we can cut a figure homotopically equivalent to a bouquet of two spheres with a plane, but I do not know how to formalize this and whether it follows that the free group of 2 generators is contained in the fundamental group of the pretzel.
|
I assume you meant "bouquet of two circles". Even so, the space is not homotopy equivalent to it. The way we are going to do this is by the following: Lemma. Let $X$ be a space and $A\subseteq X$ its retract. Then $\pi_1(A)$ embeds into $\pi_1(X)$ . Proof. Let $r:X\to A$ be the retraction and $i:A\to X$ the inclusion. Then by functoriality of $\pi_1$ we have $$id=\pi_1(id)=\pi_1(r\circ i)=\pi_1(r)\circ\pi_1(i)$$ and therefore $\pi_1(r)$ is surjective while $\pi_1(i)$ is injective. $\Box$ So it is enough to find a retract of our space with $\mathbb{F}_2$ as fundamental group. The obvious choice is the bouquet of two circles, a.k.a. the figure eight. But we have to be careful. Not every figure eight on this surface will be its retract. Have a look at this (horrible yet useful) drawing, which is a 2d section of our space. The first step of constructing our retraction is to project the entire upper half onto the lower. The blue arrows indicate that. So we end up with a half sphere with two half handles. The
|
|algebraic-topology|fundamental-groups|
| 1
|
Examples of stochastic processes that don't exist
|
$B_t$ is said to follow a Brownian motion if i) $B_0=0$ , ii) $B_t$ is continuous a.s. iii) $B_t$ has independent increments iv) $B_t-B_s\sim N(0,t-s)$ . Then Wiener would go on to show that such a process does indeed exist. What are some examples of simple and interesting stochastic processes, that can be proven to not exist?
|
A simple attempt to model "white noise" would be a process $W(t)$ on $[0, \infty)$ , such that $\int_0^s W(t)\; dt = B(s)$ is a Brownian motion. But that does not exist because Brownian motion is almost surely nowhere differentiable. A more sophisticated formulation of white noise must be used, using generalized functions.
|
|stochastic-processes|soft-question|
| 0
|
Is it possible to adjust the sine function to pass through specific points?
|
I have a set of points $\left\{ \left(−\frac{6}{4}, 0\right), \left(−\frac{4}{4}, −\frac{\sqrt{6}}{3}\right), \left(−\frac{3}{4}, −1\right), \left(−\frac{2}{4}, −\frac{\sqrt{6}}{3}\right), \left(\frac{0}{4}, 0\right), \left(\frac{2}{4}, \frac{\sqrt{6}}{3}\right), \left(\frac{3}{4}, 1\right), \left(\frac{4}{4}, \frac{\sqrt{6}}{3}\right), \left(\frac{6}{4}, 0\right) \right\}$ . I want to find a sinusoidal-esque wave that fits these points. The function $f(x) = \sin\left(\frac{2 π}{3} x\right)$ has the correct period/wavelength and amplitude, but is too wide at the wave peaks/troughs. Is there a simple way to modify the sine function to be a little less steep at the midpoints between extrema and a little narrower at the extrema? As an additional constraint, I'd like the derivative of $f(x)$ to not have extra twists/peaks/troughs in it. I believe a more mathematical way of stating that goal is that the second derivative should have no more zeros than the function itself.
|
You could input the points into a calculator and find a sinusoidal regression similar to a linear regression. Then check to see if the graph actually includes your desired points. This may be a good starting point to see if such a curve exists.
|
|trigonometry|
| 0
|
Write $\mathbb{P}^X(A)$ as $\mathbb{E}[1_A X]$
|
$(\Omega, \mathcal{F}, \mathbb{P})$ probability space $(E, \mathcal{B}(E))$ a measurable space where $\mathcal{B}(E)$ is the Borel sigma algebra and $E$ is a Banach space $X:\Omega\to E$ random variable with distribution $\mathbb{P}^X$ on $(E, \mathcal{B}(E))$ I know that one can write the probability of an event as the expectation of an indicator function $$ \mathbb{E}[1_A] = \int_{\Omega} 1_A(\omega) \mathbb{P}(d\omega) = \int_A d\mathbb{P} = \mathbb{P}(A). $$ Can I also write $\mathbb{P}^X(A) = \mathbb{E}[1_A X]$ for $A\in\mathcal{B}(E)$ ? I have tried directly $$ \begin{align} \mathbb{P}^X(A) &= \int_A \mathbb{P}^X(de) \\ &= \int_E 1_A(e) \mathbb{P}^X(de) \end{align} $$ but I can't use the change of variables formula here, even if it looks very suspicious. The random variable takes inputs in $\Omega$ not in $E$ , I would be tempted to write $$ \int_\Omega 1_A(X(\omega)) \mathbb{P}(d\omega) $$
|
$$\mathbb P^X \left[A\right] = \mathbb P\left[X\in A\right] = \mathbb E\left[\mathbf 1_{X\in A}\right]$$ Here $A$ is Borel and $\left\{X\in A\right\} = X^{-1}(A) \in \mathcal F$
|
|real-analysis|probability|measure-theory|expected-value|
| 1
|
Determine the pairs of integers $(x,y)$ that verify the relation: $x^2y^2+2xy+36=3y^2+8x^2$
|
the question Determine the pairs of integers $(x,y)$ that verify the relation: $$x^2y^2+2xy+36=3y^2+8x^2$$ the idea Fist of all I tried getting everything on the LHS and write it as a product of numbers or a sum of perfect squares that equal 0, but got to nothing useful. Then I tried writing everything as $(xy+1)^2=3y^2+8x^2-35$ or as $(x^2-2)(y^2-7)+22=(x-y)^2$ , but again you can see we can do nothing with these. Hope one of you can help me! Thank you!
|
Grouping the terms of $y$ gives $$(x^2-3)y^2 + (2x)y + (36-8x^2).$$ The quadratic formula implies (just as @Dan suggested) that $$y = \frac{-x\pm \sqrt{8x^4-59x^2+108}}{x^2-3}.$$ Since we require $y$ to be an integer, we need $8x^4-59x^2+108$ to be a square. Substituting $u=x^2$ we can look for $u$ such that $8u^2-59u+108$ is a square. The Diophantine equation $8u^2-59u+108=w^2$ for some integer $w$ admits the following recurrence relation for $u$ : $u_0 = 4$ , $u_1=9$ , $u_2=184$ , $u_n=u_{n-3}-35u_{n-2}+35u_{n-1}$ , where $u_0, u_1, \dots$ represent all possible solutions for $u$ to the Diophantine equation. Using that recurrence we can easily see that any $x$ satisfying the original equation must satisfy $x=\pm 2$ , $x=\pm 3$ , or $|x| > 10^3$ (note that you must check all solutions of $u$ up to $10^6$ to verify this; luckily there are only $5$ of these). If $|x| > 10^3$ then $2 < |y| < 3$ (due to the comment of @sbares, we know that this bound holds as $x$ gets large). We thus have $x = \pm 2$ ,
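As an independent sanity check on the small solutions (a throwaway script, not part of the argument above): scan $x$ directly, test whether the discriminant $8x^4-59x^2+108$ is a perfect square, and keep the cases where the quadratic-formula value of $y$ is an integer.

```python
import math

def solutions(xmax=100):
    # For integer x, y satisfies (x^2 - 3) y^2 + 2x y + (36 - 8x^2) = 0,
    # so y = (-x +/- sqrt(8x^4 - 59x^2 + 108)) / (x^2 - 3).
    found = set()
    for x in range(-xmax, xmax + 1):
        d = 8 * x**4 - 59 * x**2 + 108
        if d < 0:
            continue
        r = math.isqrt(d)
        if r * r != d:
            continue  # discriminant is not a perfect square
        for s in (r, -r):
            num, den = -x + s, x * x - 3
            if num % den == 0:
                found.add((x, num // den))
    return sorted(found)

print(solutions())
```

For this search range the output lists exactly the pairs coming from $x=\pm2$ and $x=\pm3$.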
|
|relations|integers|
| 0
|
sum of uniform and binomial distributions
|
Random variable $\xi$ has a uniform distribution on $[0, 1]$ , and $\eta$ is an independent binomial random variable $Bin(n, p)$ . Find the density of the random variable $\xi + \eta$ . My steps so far: $\eta$ is a discrete random variable, $\xi$ is an absolutely continuous random variable. Let's do the following: $$F_{\xi + \eta}(t) = P(\xi + \eta \leq t) = P(\bigsqcup\limits_{k=0}^{n}(\eta = k \cap \xi \leq t - k)) = \sum_{k =0}^{n}P((\eta = k) \cap(\xi\leq t-k)) = \sum_{k = 0}^{n}P(\eta = k)P(\xi \leq t- k) = \sum_{k = 0}^{n}\binom{n}{k}p^k(1-p)^{n-k}(t-k)$$ P.S. where did $(t - k)$ come from? $$\int_{0}^{t-k}\frac{1}{1 - 0}dt = t - k$$ so after this I'm stuck and can't come up with how to get the final formula; please help me out!
|
[EDITED] Hint: if $\xi + \eta = X$ , then the integer part of $X$ is $\lfloor X \rfloor = \eta$ and the fractional part is $\{X \} = \xi$ . For any real number $y$ , $X \le y$ iff either $\eta < \lfloor y \rfloor$ , or $\eta = \lfloor y \rfloor$ and $\xi \le \{y\}$ . So the CDF is $$ F_X(y) = \sum_{0 \le k < \lfloor y \rfloor} {n \choose k} p^k (1-p)^{n-k} + {n \choose {\lfloor y \rfloor}} p^{\lfloor y \rfloor} (1-p)^{n - \lfloor y \rfloor}\, \{y\} $$ and the derivative of this is the PDF $$ f_X(y) = {n \choose {\lfloor y \rfloor}} p^{\lfloor y \rfloor} (1-p)^{n - \lfloor y \rfloor}$$
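A quick simulation (illustrative, with arbitrary parameters $n=5$, $p=0.3$) confirms the hint: $\lfloor X\rfloor$ recovers $\eta$, so the empirical density of $X$ on $[k,k+1)$ matches the binomial pmf at $k$.

```python
import random
from math import comb

random.seed(0)
n, p, trials = 5, 0.3, 200_000

counts = [0] * (n + 1)
for _ in range(trials):
    eta = sum(random.random() < p for _ in range(n))  # Bin(n, p) sample
    x = random.random() + eta                         # xi + eta
    counts[int(x)] += 1                               # floor(x) equals eta

for k in range(n + 1):
    pmf = comb(n, k) * p**k * (1 - p) ** (n - k)
    print(k, round(counts[k] / trials, 4), round(pmf, 4))
```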
|
|probability-theory|summation|random-variables|
| 0
|
$\frac{1}{2a^2+3}+\frac{1}{2b^2+3}+\frac{1}{2c^2+3}\geq\frac{3}{5}.$
|
Let $a, b, c \in\mathbb{R}, a, b,c\geq \frac{1}{4}$ and $a+b+c=3$ . Show that $\frac{1}{2a^2+3}+\frac{1}{2b^2+3}+\frac{1}{2c^2+3}\geq\frac{3}{5}.$ My idea: $(4a-1)(a-1)^2\geq 0$ . Then I tried to divide $4a^3-9a^2+6a-1$ by $2a^2+3$ but I am stuck.
|
Set $f(x) = \frac1{2x^2+3}$ . Note that $f''(x) = \frac{12(2x^2-1)}{(2x^2+3)^3}$ . Further $f''(x) >0$ for $x \gt \frac1{\sqrt 2}$ . So the tangent to $f(x)$ at $x=1$ , i.e. $g(x) = \frac{9-4x}{25}$ , always lies below $f(x)$ when $x > \frac1{\sqrt 2}$ , with intersection at $x=1$ . I leave it to you to show that $f(x) \geq g(x)$ for $x \in [\frac14, \frac1{\sqrt 2}]$ . Simple factoring or an argument via derivatives should suffice. With that it follows $$f(a)+f(b)+f(c) \geq g(a)+g(b)+g(c) = \frac{27-12}{25} = \frac{3}{5}$$ Note equality is attained at $(1,1,1)$ .
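A quick numerical check of the tangent-line bound (a sketch; the sampling range follows from the constraints $a,b,c\ge\frac14$, $a+b+c=3$, which force each variable into $[\frac14,\frac52]$):

```python
def f(x):
    return 1.0 / (2.0 * x * x + 3.0)

def g(x):
    # Tangent line to f at x = 1.
    return (9.0 - 4.0 * x) / 25.0

# Sample the admissible range [1/4, 5/2] and record the smallest gap f - g.
lo, hi, steps = 0.25, 2.5, 10_000
gap = min(f(lo + i * (hi - lo) / steps) - g(lo + i * (hi - lo) / steps)
          for i in range(steps + 1))
print(gap)  # essentially zero: the gap vanishes at x = 1/4 and x = 1
```

In fact $f(x)-g(x)=\frac{2(4x-1)(x-1)^2}{25(2x^2+3)}$, which is nonnegative on $[\frac14,\infty)$.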
|
|inequality|
| 0
|
Is it possible to adjust the sine function to pass through specific points?
|
I have a set of points $\left\{ \left(−\frac{6}{4}, 0\right), \left(−\frac{4}{4}, −\frac{\sqrt{6}}{3}\right), \left(−\frac{3}{4}, −1\right), \left(−\frac{2}{4}, −\frac{\sqrt{6}}{3}\right), \left(\frac{0}{4}, 0\right), \left(\frac{2}{4}, \frac{\sqrt{6}}{3}\right), \left(\frac{3}{4}, 1\right), \left(\frac{4}{4}, \frac{\sqrt{6}}{3}\right), \left(\frac{6}{4}, 0\right) \right\}$ . I want to find a sinusoidal-esque wave that fits these points. The function $f(x) = \sin\left(\frac{2 π}{3} x\right)$ has the correct period/wavelength and amplitude, but is too wide at the wave peaks/troughs. Is there a simple way to modify the sine function to be a little less steep at the midpoints between extrema and a little narrower at the extrema? As an additional constraint, I'd like the derivative of $f(x)$ to not have extra twists/peaks/troughs in it. I believe a more mathematical way of stating that goal is that the second derivative should have no more zeros than the function itself.
|
With so few points, there will be many ways to do this. However you will have to decide if the results "look sinusoidal enough" for you. One way is to include higher odd harmonics one by one. Here's just the third harmonic: $$f(x) = (1 + a) \sin\left(\frac{2 \pi}{3} x\right) - a \sin\left(3\frac{2 \pi}{3} x\right)$$ For that you just solve for $a$ for one of your "shoulder points". Other ways might include an infinite series of odd powers of $f$ , $$g(x) = \sum_{i=0, 1, 2...} a_i f^{2i+1}(x) $$ But for this one I think unless you are very lucky, you'll need many terms to get close enough. Python script for the plot:

import numpy as np
import matplotlib.pyplot as plt

points = ((-6/4, 0), (-4/4, -6**0.5/3), (-3/4, -1), (-2/4, -6**0.5/3),
          (0/4, 0), (2/4, 6**0.5/3), (3/4, 1), (4/4, 6**0.5/3), (6/4, 0))
xp, yp = np.array(points).T
a = -0.055  # approximately
x = np.linspace(-2, 2, 1001)
f1 = np.sin((2 * np.pi / 3) * x)
f2 = (1 + a) * np.sin((2 * np.pi / 3) * x) + a * np.sin(3 * (2 * np.pi / 3) * x)
|
|trigonometry|
| 1
|
$\frac{1}{2a^2+3}+\frac{1}{2b^2+3}+\frac{1}{2c^2+3}\geq\frac{3}{5}.$
|
Let $a, b, c \in\mathbb{R}, a, b,c\geq \frac{1}{4}$ and $a+b+c=3$ . Show that $\frac{1}{2a^2+3}+\frac{1}{2b^2+3}+\frac{1}{2c^2+3}\geq\frac{3}{5}.$ My idea: $(4a-1)(a-1)^2\geq 0$ . Then I tried to divide $4a^3-9a^2+6a-1$ by $2a^2+3$ but I am stuck.
|
OP said that they tried to consider $ \frac{ (4a-1)(a-1)^2}{2a^2 + 3 } \geq 0$ , but got stuck. To continue from there, observe that via partial fractions: $$ 0 \leq \frac{ (4a-1)(a-1)^2}{2a^2 + 3 } = \frac{25}{2(2a^2 + 3) } + 2a - \frac{9}{2}.$$ So, summing across all variables, we get that $$ \sum \frac{ 25}{2(2a^2 + 3 ) } \geq \frac{ 27}{2} - 2 \sum a = \frac{15}{2} . $$ Hence, the desired result follows. Note: While in this case the result is essentially that of the tangent-line approach (e.g. see Sahaj's solution), note that in other cases we could have gotten a quadratic quotient.
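The partial-fraction identity above is easy to verify numerically (a throwaway check):

```python
def lhs(a):
    return (4 * a - 1) * (a - 1) ** 2 / (2 * a * a + 3)

def rhs(a):
    return 25 / (2 * (2 * a * a + 3)) + 2 * a - 9 / 2

# The two sides agree for arbitrary sample values of a.
diffs = [abs(lhs(a) - rhs(a)) for a in (0.25, 0.5, 1.0, 1.7, 2.5)]
print(max(diffs))  # ~ 0 up to floating-point error
```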
|
|inequality|
| 1
|
Prove that the triangles $VAC, EAV$ are similar if and only if $\angle EVO=30°$.
|
The question The regular quadrilateral pyramid $VABCD$ has the vertex $V$ . Let $M$ be the midpoint of the edge $AD$ and $E$ be the point of intersection of the lines $AC$ and $BM$ . Prove that the triangles $VAC, EAV$ are similar if and only if the angle formed by the line $VE$ with the plane $(VBD)$ measures $30°$ . The drawing The angle formed by the line $VE$ with the plane $(VBD)$ is actually the angle $EVO=30°$ . Let $AC=6x => AE=2x, EC=4x, EO=x, AO=3x$ Case 1 : We know that $\angle EVO=30°$ . We show that $VAC$ and $EAV$ are similar. $\angle EVO=30°, \angle VOE=90°=> VE=2x => VO=x\sqrt{3} => VA=2x\sqrt{3}$ From here we can verify that $\frac{VA}{AE}=\frac{AC}{AV}=\frac{VC}{VE}=\sqrt{3}$ , which makes $VAC, EAV$ similar. Case 2 : We show that $\angle EVO=30°$ . We know that $VAC$ and $EAV$ are similar. From the similarity we get that $\frac{VA}{AE}=\frac{AC}{AV}=\frac{VC}{VE}=> VA^2=AC\times AE=12x^2=> VA=2x\sqrt{3},AO=3x=> VO=x\sqrt{3}=> \angle VAO=30°=> \angle AVO=60$ we can
|
Case 1 : Well done; you could have just specified that $VC=VA(=2x\sqrt3)$ Case 2 : Well done; you could have quoted the angle bisector theorem and perhaps write more classically $\frac{V\color{red}A}{V\color{red}O}=\frac{\color{red}AE}{\color{red}OE}$ :)
|
|geometry|solution-verification|triangles|angle|
| 1
|
Why are piecewise linear functions semismooth?
|
I came across the following exercise, namely proving that for the minimum function $f(a,b) = \min(a,b)$ for $ x = (a,b)^T \in \mathbb{R}^2$ , it holds at every point that sup $|f(x+s)-f(x) - Ms| = O(\|s\|^2)$ for $\|s\| \to 0$ , where $M$ is an element of Clarke's generalized differential at $x+s$ . It is clear that at points where $x_1 \neq x_2$ this holds (as there $f$ is locally linear, the only element in Clarke's differential is its Jacobian, and everything cancels). However, I seem to miss the idea for $x_1 = x_2$ . This generalizes to arbitrary piecewise linear functions, hence the title. Any help is appreciated!
|
I've found a good source online to answer this question: It holds in general for $PC^2$ (piecewise $C^2$ -functions) that they are first order semismooth. A proof is given in "Semismooth Newton Methods for Variational Inequalities and Constrained Optimization Problems in Function Spaces", (M. Ulbrich) Proposition 2.2.6. The basic idea (that was missing for me) was to write $M$ as a convex combination of the elements in the B-differential and then to use Taylor to get an upper bound.
|
|optimization|convex-optimization|non-smooth-optimization|
| 0
|
Row/column swap 0-1 matrix to get a lower-triangular matrix
|
Suppose there is an $n\times n$ matrix $M$ whose entries are 0 and 1, and exactly $2n-1$ many entries are 1's. I am wondering if we can get a lower-triangular matrix from $M$ by only allowing to swap rows and columns?
|
No, for example when $M=\begin{bmatrix}1&0&1\\0&1&0\\1&0&1\end{bmatrix}$ .
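The counterexample can be verified exhaustively: there are only $3!\times 3! = 36$ row/column permutation pairs to try (a quick script, for illustration).

```python
from itertools import permutations

# The proposed counterexample: 3x3, exactly 2n - 1 = 5 ones.
M = [[1, 0, 1],
     [0, 1, 0],
     [1, 0, 1]]

def becomes_lower_triangular(M):
    n = len(M)
    for rows in permutations(range(n)):
        for cols in permutations(range(n)):
            # Lower triangular: every entry strictly above the diagonal is 0.
            if all(M[rows[i]][cols[j]] == 0
                   for i in range(n) for j in range(i + 1, n)):
                return True
    return False

print(becomes_lower_triangular(M))  # False
```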
|
|linear-algebra|combinatorics|
| 1
|
Theorem 11, Section 4.5 of Hungerford's Algebra
|
Let $R$ be a ring with identity. If $A$ is a unitary right $R$ -module and $F$ is a free left $R$ -module with basis $Y$ , then every element $u$ of $A\otimes_R F$ may be written uniquely in the form $u =\sum_{i=1}^n a_i\otimes y_i$ , where $a_i\in A$ and the $y_i$ are distinct elements of $Y$ . Proof: For each $y\in Y$ , let $A_y$ be a copy of $A$ and consider the direct sum $\sum_{y\in Y} A_y$ . We first construct an isomorphism $\theta : A\otimes_R F \cong \sum_{y\in Y}A_y$ as follows. Since $Y$ is a basis, $\{y\}$ is a linearly independent set for each $y\in Y$ . Consequently, the $R$ -module epimorphism $\varphi : R\to Ry$ given by $r\mapsto ry$ (Theorem 1.5) is actually an isomorphism. Therefore, by Theorem 5.7 there is for each $y\in Y$ an isomorphism $$A\otimes_R Ry \xrightarrow{1_A\otimes \varphi^{-1}} A\otimes_R R\cong A_y.$$ Thus by Theorems 5.9 and I.8.10 there is an isomorphism $\theta$ : $$A\otimes_R F\cong A\otimes_R ( \sum_{y\in Y}Ry) \cong \sum_{y\in Y} A\otimes_R Ry\
|
Let $u=\sum_{i=1}^na_i\otimes f_i \in A\otimes_R F$ . Since $Y$ is a basis of $F$ , we have $f_i=\sum_{j=1}^{n_i}r_{ij}y_{ij}$ where $r_{ij}\in R$ , $y_{ij}\in Y$ . So $$\begin{align} u &= \sum_{i=1}^na_i\otimes f_i \\ &= \sum_{i=1}^na_i\otimes \left( \sum_{j=1}^{n_i}r_{ij}y_{ij} \right) \\ &= \sum_{i=1}^n\sum_{j=1}^{n_i}a_i\otimes r_{ij} y_{ij} \\ &= \sum_{(i,j)} a_i\otimes r_{ij} y_{ij} \\ &= \sum_{(i,j)}a_ir_{ij}\otimes y_{ij} \\ &= \sum_{(i,j)}a_{ij} \otimes y_{ij}. \end{align}$$ Thus $A\otimes_R F$ is generated by $\{a\otimes y\mid a\in A \text{ and } y\in Y\}$ . Define $\psi :A\times F\to \sum_{y\in Y}A_y$ given by $\psi (a,f)=\psi (a,\sum_{i=1}^nr_iy_i) = \{ar_y\}_{y\in Y}$ , where $r_{y_i}=r_i$ and $r_y=0$ for all $y\in Y-\{y_1,...,y_n\}$ . Note $\psi$ is a well-defined function, since $Y$ is a basis of $F$ . We show $\psi$ is a middle linear map. Let $a_1,a_2,a\in A$ , $f,g\in F$ and $r\in R$ . Then $$\psi (a_1+a_2,f)=\{(a_1+a_2)r_y\}=\{a_1r_y+a_2r_y\} =\{a_1r_y\}+\{a_2r_y\} = \psi(a_
|
|abstract-algebra|modules|tensor-products|alternative-proof|free-modules|
| 0
|
Determine the pairs of integers $(x,y)$ that verify the relation: $x^2y^2+2xy+36=3y^2+8x^2$
|
the question Determine the pairs of integers $(x,y)$ that verify the relation: $$x^2y^2+2xy+36=3y^2+8x^2$$ the idea Fist of all I tried getting everything on the LHS and write it as a product of numbers or a sum of perfect squares that equal 0, but got to nothing useful. Then I tried writing everything as $(xy+1)^2=3y^2+8x^2-35$ or as $(x^2-2)(y^2-7)+22=(x-y)^2$ , but again you can see we can do nothing with these. Hope one of you can help me! Thank you!
|
A solution based on different ideas from the comments. Solving this equation for $y$ , we get $$y=\frac{-x+ \sqrt{8x^4-59x^2+108}}{x^2-3},\tag1$$ $$y=\frac{-x- \sqrt{8x^4-59x^2+108}}{x^2-3}.\tag2$$ Let us prove that if $|x|>3$ then $y$ in the form $(1)$ is in $(2,3)$ : $$2<\frac{-x+ \sqrt{8x^4-59x^2+108}}{x^2-3}<3.$$ Multiply by $x^2-3>0$ : $$2x^2-6+x<\sqrt{8x^4-59x^2+108}<3x^2-9+x.$$ Square all sides (they are positive while $|x|>3$ ): $$4x^4+36+x^2-24x^2-12x+4x^3<8x^4-59x^2+108<9x^4+81+x^2-54x^2-18x+6x^3.$$ Left inequality is: $$4x^4-4x^3-36x^2+12x+72>0$$ $$\iff x^4-x^3-9x^2+3x+18>0$$ $$\iff (x+2)(x-3)(x^2-3)>0,$$ which is true for $|x|>3$ . The right inequality is: $$x^4+6x^3+6x^2-18x-27>0$$ $$\iff (x+3)^2(x^2-3)>0,$$ which is again true for $|x|>3$ . Now prove the similar statement for $y$ in the form $(2)$ : $$|x|>3\implies -2>\frac{-x- \sqrt{8x^4-59x^2+108}}{x^2-3}>-3.$$ Multiply by $x^2-3>0$ : $$-2x^2+6+x>-\sqrt{8x^4-59x^2+108}>-3x^2+9+x$$ $$\iff 2x^2-6-x<\sqrt{8x^4-59x^2+108}<3x^2-9-x.$$ Square all sides (they are positive while $|x|>3$ ): $$4x^4+36+x^2-24x^2+12x-4x^3<8x^4-59x^2+108<9x^4+81+x^2-54x^2+18x-6x^3.$$ Left inequality is: $$4x^4+4x^3-36x^2-12x+72>0$$ $$\if
|
|relations|integers|
| 1
|
Visualizing the Commutative Property Beyond Three Numbers
|
I've been pondering how to intuitively visualize the commutative property of multiplication when dealing with more than three numbers. For two numbers, we can easily conceptualize this with the area of a rectangle, and for three, the volume of a cube serves as a great illustration. However, when we move to four or more numbers, the challenge becomes apparent due to our spatial limitations in representing higher dimensions. Using tree diagrams, we can abstractly represent the multiplication of several numbers, but juxtaposing different trees doesn't quite capture the intuitive essence of the commutative property in the same way that geometric shapes do in lower dimensions. Does anyone have insights or creative methods for visualizing this property for four or more numbers?
|
The commutative property is defined on two operands. For an expression involving more than two operands, we usually commute the operands two at a time to derive a new order. An analogy would be tracking two dimensions at a time of a higher-dimensional orthogonal volume, knowing that at every step they are commutative and no product is changed.
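A small empirical illustration of that reduction (illustrative values): every ordering of four factors, each reachable from any other by swapping two adjacent factors at a time, yields the same product.

```python
from itertools import permutations

vals = (2, 3, 5, 7)
products = set()
for perm in permutations(vals):   # all 4! = 24 orderings
    prod = 1
    for v in perm:
        prod *= v
    products.add(prod)
print(products)  # a single value: {210}
```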
|
|visualization|
| 0
|
Show that every nonnegative integer $n$ is of the form $n = \alpha\cdot 2^{b+1} + \gamma$
|
I want to show that for every nonnegative integers $n$ and $b$ there exist $\alpha \geq 0$ and $0 \leq \gamma < 2^{b+1}$ such that $n = \alpha\cdot 2^{b+1} + \gamma$ . So let $n\geq 0$ be an integer. If $n$ is a multiple of $2^{b+1}$ , then $\alpha = \frac{n}{2^{b+1}}$ and $\gamma = 0$ . But if it is not, we have $\alpha\cdot 2^{b+1} < n$ . I would now say we choose $\alpha = \lfloor \frac{n}{2^{b+1}}\rfloor. $ But how can I now prove the range where to choose $\gamma$ from?
|
This is simply arithmetic: Let $\alpha$ be the largest nonnegative integer such that $\alpha 2^{b+1} \le n$ . [Indeed, there exists such an $\alpha$ as $0 \times 2^{b+1} \le n$ .] Then $\alpha$ is uniquely specified, and the quantity $n-\alpha 2^{b+1}$ is nonnegative as the inequality $\alpha 2^{b+1} \le n$ holds. Furthermore, the inequality $n-\alpha 2^{b+1} < 2^{b+1}$ holds, lest the inequality $(\alpha+1)2^{b+1} \le n$ holds as well, which would contradict the definition of $\alpha$ . So set $\gamma = n-\alpha 2^{b+1}$ ; then $\gamma$ is uniquely specified as well. Furthermore, as observed above, $n=\alpha 2^{b+1} +\gamma$ , with $\alpha$ and $\gamma$ uniquely specified, with $\alpha$ a nonnegative integer and $\gamma$ a nonnegative integer less than $2^{b+1}$ .
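In code, $\alpha$ and $\gamma$ are just the quotient and remainder of $n$ by $2^{b+1}$ (a small illustration with arbitrary values):

```python
def decompose(n, b):
    # alpha = floor(n / 2^(b+1)), gamma = n mod 2^(b+1)
    alpha, gamma = divmod(n, 2 ** (b + 1))
    return alpha, gamma

n, b = 45, 2            # hypothetical inputs; 2^(b+1) = 8
alpha, gamma = decompose(n, b)
print(alpha, gamma)     # 5 5, since 45 = 5*8 + 5
assert n == alpha * 2 ** (b + 1) + gamma and 0 <= gamma < 2 ** (b + 1)
```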
|
|linear-algebra|
| 1
|
About root distribution of complex polynomial
|
We have some Laurent polynomial $f(z)$ with powers of $z$ from $a$ to $-b$ , $a,b$ positive integers. Consider the equation $f(z)-E=0$ . Let $P$ be the set defined by: $z'\in P$ $\iff$ $\exists E\in \mathbf{C}$ such that $z'$ is one of the $b$ smallest roots, ordered by magnitude, of $f(z)-E=0$ . Prove or disprove: $P$ has a finite diameter. Notes for clarification: 1. For some fixed $E'$ , let the roots of $f(z)-E'=0$ be $|z_1|\leq |z_2|\leq \dots$ ; then $z_1,z_2,\dots,z_b$ are in the set $P$ . (Counting or not counting terms with the same magnitude as $z_b$ does not affect the result.) The full set $P$ is the union of all these $z$ over all $E'$ . 2. E.g. if $f(z)=z+\frac{1}{z}$ , $P$ would be the unit circle and its interior. My numerical checks show that it is always finite, but I haven't figured out a proof.
|
Let $f(z)=R(z)+Q(z)/z^b, \deg Q \le b-1, Q(0) \ne 0, \deg R =a$ . For $E \ne 0$ the equation $f(z)=E$ is equivalent to $z^b-\frac{z^bR(z)+Q(z)}{E}=0$ since $Q(0)\ne 0$ so $z=0$ is not a root of the latter. Let $M=\max_{|z|=1}|z^bR(z)+Q(z)|$ ; then by Rouché, for $|E|>M$ we have that the equation has precisely as many roots as $z^b=0$ inside the unit circle, so $b$ roots have modulus bounded by $1$ . Since for $|E| \le M$ we can write the equation as $z^bR(z)+Q(z)-Ez^b=\sum a_mz^m=0, m=a+b, a_m \ne 0$ fixed, we see that the highest root is bounded by the positive root of the associated Cauchy polynomial $|a_m|z^m-\sum_{k \le m-1}|a_k|z^k$ , which has uniformly bounded roots since the coefficients are uniformly bounded and $a_m$ is fixed. So we are done, and the set $P$ is indeed bounded.
|
|complex-analysis|polynomials|
| 1
|
prove jointly gaussians of non linear function of gaussian
|
Let $x_1, x_2, x_3 \sim N(0,1)$ be iid and $$Y = \frac{x_1 + x_2 x_3}{\sqrt{1+ x_3^2}}.$$ Are $Y$ and $X_3$ jointly Gaussian? I already showed that $E[Y\mid X_3]=0$ and $\operatorname{Var}[Y\mid X_3]=1$ , and $Y\mid X_3$ is Gaussian... But how can I proceed? Maybe there is a mistake in the question that would make it better?
|
$Z=aX_1+bX_2\sim N(0,a^2+b^2)$ . Therefore $Y\mid X_3\sim N(0,1)$ is independent of $X_3$ , and $Y, X_3$ are Gaussian and independent. Let $U,V$ be two rv's with joint distribution $\pi(du)K(u,dv)$ where $U\sim \pi$ and $V\mid U\sim K(u,dv).$ Then $U$ and $V$ are independent if and only if $U$ and $V\mid U$ are independent. Proof. $\Rightarrow$ Let $V\sim K(dv)$ . We get $\pi(du)K(u,dv)=\pi(du)K(dv)$ and $K(u,dv)=K(dv),$ at least $\pi(du)$ almost everywhere. $\Leftarrow$ $u\mapsto K(u,dv)$ is a constant. NN2: True, up to the subtleties of conditioning; I do not quite understand your question.
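A simulation sketch supporting the claim (Monte Carlo with an arbitrary sample size): $Y$ should be standard normal and uncorrelated with $X_3$.

```python
import random
import math

random.seed(0)
N = 100_000
ys, x3s = [], []
for _ in range(N):
    x1, x2, x3 = (random.gauss(0, 1) for _ in range(3))
    ys.append((x1 + x2 * x3) / math.sqrt(1 + x3 * x3))
    x3s.append(x3)

mean_y = sum(ys) / N
var_y = sum(y * y for y in ys) / N - mean_y ** 2
cov = sum(y * x for y, x in zip(ys, x3s)) / N - mean_y * (sum(x3s) / N)
print(round(mean_y, 3), round(var_y, 3), round(cov, 3))  # near 0, 1, 0
```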
|
|probability-theory|stochastic-processes|normal-distribution|
| 0
|
What are the values of the parameters p and q?
|
The remainder of the division of the polynomial $$W(x)=2x^{4}+px^{3}+9x^{2}+qx+5$$ by the polynomial $$P(x)=2x-2$$ is equal to $10$ . The remainder of the division of $W(x)$ by the polynomial $$Q(x)=x+1$$ is $22$ . What are the values of the parameters $p$ and $q$ ? After deriving the equation $$-p-q=6$$ by using $$W(-1)=22$$ , I am unable to solve the problem
|
You can rewrite as below and solve the resulting system for $p,q$. Recall that if the remainder of the division of the polynomial $W(x)$ by the polynomial $P(x)$ is $10$, it can be written as $$W(x)=P(x)S(x)+10$$ for some quotient polynomial $S(x)$. So put $2x-2=0 \to x=1$, and you will have $$W(1)=(2x-2)S(x)+10\big|_{x=1} \\ \to W(1)=0\cdot S(1)+10.$$ Also, for $Q(x)$ you have $$W(x)=Q(x)S'(x)+22\big|_{x=-1}\\ \to W(-1)=Q(-1)S'(-1)+22,$$ and finally: $$2x^{4}+px^{3}+9x^{2}+qx+5\big|_{x=1}=10\\2x^{4}+px^{3}+9x^{2}+qx+5\big|_{x=-1}=22$$ Can you take it from here?
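The two evaluation conditions can be checked in a few lines of Python (the pair $(p,q)=(-6,0)$ below is just one illustrative choice):

```python
def W(x, p, q):
    return 2 * x ** 4 + p * x ** 3 + 9 * x ** 2 + q * x + 5

# W(1) = 16 + p + q and W(-1) = 16 - (p + q), so the two remainder
# conditions W(1) = 10 and W(-1) = 22 both reduce to p + q = -6;
# (p, q) = (-6, 0) is one illustrative pair satisfying that equation
p, q = -6, 0
print(W(1, p, q), W(-1, p, q))   # 10 22
```

Note that both conditions collapse to the single linear equation $p+q=-6$, so any pair satisfying it passes both checks.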
|
|polynomials|
| 1
|
Show $\sqrt[-e]{e}<\ln2$ without a calculator
|
I tried manipulating the expression to come up with helpful inequalities, without success. One idea I have is to show that $\lim_{x\to\infty}\left(\frac{1}{x}+\ln2\right)\left(\frac{1}{x}+1\right)^{\frac{x}{e}}>1$ . $$\lim_{x\to\infty}\frac{\left(1+x\ln2\right)\left(1+x\right)^{\frac{x}{e}}}{x^{1+\frac{x}{e}}}>1$$ I don't know how I could progress.
|
Beforehand, we will need the following "basic" facts : $(A)\;$ $2<e<4$ ; $(B)\;$ $e^x$ is a strictly monotonically increasing function; $(C)\;$ $x^{-1}$ is a strictly monotonically decreasing function over $(0,\infty)$ ; $(D)\,$ $\sqrt{x}$ is a strictly monotonically increasing function over $(1,\infty)$ . First of all, it is to be noted that your inequality is equivalent to the statement $e^{e^{-e/2}}<2$ thanks to $(B)$ . From these basic facts, we can deduce that $e^{-e/2}<e^{-1}<\frac12$ , where the first inequality is due to $(A)$ and $(B)$ , while the second one comes from $(A)$ and $(C)$ . In consequence, one has $e^{e^{-e/2}}<e^{1/2}=\sqrt e$ thanks to $(B)$ again. Now, given that $\sqrt e<\sqrt4=2$ by $(A)$ and $(D)$ , we can conclude that $e^{e^{-e/2}}<2$ . QED
|
|algebra-precalculus|limits|inequality|number-comparison|
| 1
|
Subrings generated by lattice groups in algebras of continous functions
|
This is probably false, but I can't find a good counterexample. Let $X$ be a compact space and consider the algebra $C(X)$ of all scalar-valued continuous functions on $X$ endowed with the supremum norm. Let $G$ be an additive subgroup of $C(X)$ that is discrete. (This implies that there is $\delta > 0$ so that $\|f-g\| \geqslant \delta$ for all distinct $f,g \in G$ ). Let $R$ be the smallest subring of $C(X)$ containing $G$ . Must $R$ be discrete too?
|
No. Let $X$ be the one-point space, so that $C(X) \cong \mathbb{R}$ (or $\mathbb{C}$ works too if you want to use complex scalars). Now let $G$ be the subgroup $\frac{1}{2}\mathbb{Z} = \{n/2 : n \in \mathbb{Z}\}$ . This is discrete. However, the subring of $\mathbb{R}$ generated by $\frac{1}{2}\mathbb{Z}$ is $$\mathbb{Z}[2^{-1}] = \{n/2^k : n \in \mathbb{Z}, k \in \mathbb{N}\},$$ which is not discrete.
|
|general-topology|functional-analysis|ring-theory|discrete-geometry|
| 1
|
What are the possible applications in maths and physics of vector fields along smooth maps?
|
I am currently working on a problem related to singularities of mappings between manifolds with metrics and the interplay of metric singularities with mapping singularities. Given a smooth map $F:M\longrightarrow N$ , a vector field along $F$ is just a smooth section in the set $\Gamma(M,TN)$ . I wanted references/books/papers (anything) related to applications of these objects in any context in maths and physics, where they are relevant, why they are studied, anything will be great. Thanks in advance!
|
I'm not sure the notation $\Gamma(M,TN)$ is exactly right here. The bundle $TN$ has base space $N$ , so what I think you really want to write is $\Gamma(M,F^*TN)$ where $F^*TN$ is the pullback bundle on $M$ induced by $F$ . With this in mind, note that in physics, gauge fields are sections of principal bundles, i.e. functions $s:M \rightarrow G$ for some Lie group $G$ . These sections obey differential equations formulated in terms of connections. Connections are a kind of directional derivative that take a direction $v$ at some point $p \in M$ and return the directional derivative of $s$ as a member of $T_{s(p)}G$ . This means that locally connections are of the form $t \otimes \omega$ where $t \in \Gamma(M, s^*TG)$ and $\omega \in \Lambda^1(M)$ . Hence if you take $M$ as spacetime and $F$ as some gauge field on $M$ , then we can think of a connection locally as being the tensor of a vector field along $F$ with a one-form.
|
|reference-request|differential-geometry|mathematical-physics|singularity-theory|applications|
| 1
|
Proof that if A and B are connected sets, $A \cup B$ is connected iff $(\bar A \cap B) \cup (A \cap \bar B) \neq \emptyset$
|
The problem says: Let $(X,\tau)$ be a topological space so that $A,B \subset X$ are connected sets in said space. For $M\subset X, \bar M$ refers to the closure of $M$ . Prove that $A \cup B$ is connected if and only if $(\bar A \cap B) \cup (A \cap \bar B) \neq \emptyset$ . I have no idea where to start. Can someone help me?
|
Thanks, everybody! I've glanced over your proof in order to not give myself any revealing details and just get the gist of it. As far as I can tell, this is the proof: Firstly, to prove the $\Leftarrow$ implication by contraposition, we assume that $A \cup B$ is not connected. This means that there exist $V,W \in \tau$ which fulfil the following conditions: 1) $A \cup B \subset V \cup W$ ; 2) $(A \cup B) \cap V \neq \emptyset \neq (A \cup B) \cap W$ ; 3) $(A \cup B) \cap V \cap W = \emptyset$ . Note that 1) implies both $A \subset V \cup W$ and $B \subset V \cup W$ . Similarly, 3) implies that both $A$ and $B$ are disjoint from $V \cap W$ . Since $A$ and $B$ are connected, the only way for them to remain so is if they each fail condition 2). Likewise, the only way for that to happen (since $A \cup B$ is a subset of $V \cup W$ ) is if each is disjoint from one of the two sets, as in $A$ is disjoint from $V$ and $B$ is disjoint from $W$ (let's take that to be the relation, although it could very well be the other way around).
|
|general-topology|proof-writing|connectedness|
| 0
|
How to integrate $\int_{0}^{1} \int_{0}^{1} \int_{0}^{1}\frac{x^{4a - 1} \ln(x)}{\sqrt{yz} \cdot (1 + x^{2a}z + yzx^{4a} + yx^{2a})} \,dx \,dy\,dz$
|
how to integrate $$\int_{0}^{1} \int_{0}^{1}\int_0^1 \frac{x^{4a - 1} \ln(x)}{\sqrt{yz} \cdot (1 + x^{2a}z + yzx^{4a} + yx^{2a})} \,dx \,dy\,dz$$ My attempt: substituting $x^{2a} \mapsto x$ , $$=\frac{1}{4a^2} \int_0^1 \int_0^1 \int_0^1 \frac{x \ln(x)}{\sqrt{yz} \cdot (1 + yx)(1 + zx)} \,dx \,dy \,dz$$ $$=\frac{1}{4a^2} \int_0^1 \int_0^1 \int_0^1 \sum_{n, m \geq 0} (-1)^{n+m} y^{n-\frac{1}{2}} z^{m-\frac{1}{2}} x^{n+m+1} \ln(x) \,dx\,dy\,dz$$ $$= \frac{1}{a^2}\int_0^1 \sum_{n \geq 0} \frac{(-1)^n x^{n + \frac{1}{2}}}{2n + 1} \cdot \sum_{m \geq 0} \frac{(-1)^m x^{m + \frac{1}{2}} \cdot \ln(x)}{2m + 1} \, dx$$ $$= \frac{1}{a^2} \int_{0}^{1} [\arctan(\sqrt{x})]^2 \ln(x) \,dx$$ and with $\sqrt{x} \mapsto x$ : $$=\frac{4}{a^2}\int_{0}^{1} x \ln(x) \arctan^2(x) \,dx$$
|
$$I=\dfrac1{4a^2}\int\limits_0^1\int\limits_0^1\int\limits_0^1\dfrac{x\ln(x)}{\sqrt{yz}(1+yx)(1+zx)}\text dx\text dy\text dz$$ $$=\dfrac1{4a^2}\int\limits_0^1\int\limits_0^1\int\limits_0^1\dfrac{x\ln(x)}{\sqrt{yz}(1+yx)(1+zx)}\text dy\text dz\text dx=\bigg|y=v^2, z=w^2\bigg|$$ $$=\dfrac1{a^2}\int\limits_0^1\int\limits_0^1\int\limits_0^1\dfrac{x\ln(x)}{(1+xv^2)(1+xw^2)}\text dv\text dw\text dx$$ (note $dy\,dz/\sqrt{yz}=4\,dv\,dw$, which is where the prefactor $\frac1{a^2}$ rather than $\frac1{2a^2}$ comes from) $$=\dfrac1{a^2}\int\limits_0^1\left(\dfrac{\arctan \sqrt x}{\sqrt x}\right)^2 \,x\ln(x)\text dx =\dfrac1{a^2}\int\limits_0^1 \arctan^2 \left(\sqrt x\right)\ln(x)\text dx$$ $$=\dfrac4{a^2}\int\limits_0^1 \arctan^2 \left(y\right)\ln(y)\, y\text dy$$ $$=\dfrac1{24a^2}\left((36 - 24\text C - 5\pi)\pi -72\ln2 + 42 \zeta(3)\right)\approx -0.197211346\,\dfrac1{a^2}$$ (see also the WolframAlpha calculations), where $\text C \approx0.915965594\dots $ is Catalan's constant.
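As a numerical sanity check on the closed form (a sketch: pure-Python composite Simpson rule, with Catalan's constant and $\zeta(3)$ hardcoded to known double-precision values), one can verify that $\int_0^1 x\ln(x)\arctan^2(x)\,dx$ agrees with $\frac1{96}\big((36-24\text C-5\pi)\pi-72\ln2+42\zeta(3)\big)$:

```python
import math

def g(x):
    # integrand x*ln(x)*arctan(x)^2, extended by its limit 0 at x = 0
    return 0.0 if x == 0.0 else x * math.log(x) * math.atan(x) ** 2

def simpson(f, a, b, n=20_000):          # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

C = 0.915965594177219                    # Catalan's constant
Z3 = 1.2020569031595943                  # zeta(3)
closed = ((36 - 24 * C - 5 * math.pi) * math.pi
          - 72 * math.log(2) + 42 * Z3) / 96

numeric = simpson(g, 0.0, 1.0)
print(abs(numeric - closed) < 1e-8)      # True
```

The integrand is smooth away from $0$ and vanishes like $x^3\ln x$ near $0$, so Simpson's rule converges quickly here.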
|
|integration|multivariable-calculus|definite-integrals|closed-form|
| 0
|
Fast way to find period-n points of a tent map?
|
I need to find a period-3 orbit for a tent map defined by $$T(x)=\begin{cases}2x & 0\le x\le0.5 \\[2ex] 2-2x & 0.5<x\le1\end{cases}$$ So, what is a fast and efficient way of finding it? I only know how to do it by guessing-and-checking. For example, I picked a random point, $x = \frac{2}{7}$ , and found that $$T\left(\frac{2}{7}\right)=\frac{4}{7}\\[4ex]T\left(\frac{4}{7}\right)=\frac{6}{7}\\[4ex]T\left(\frac{6}{7}\right)=\frac{2}{7}$$ which is a period-3 orbit. Is there any better way of finding a period-3 orbit, more specifically, a period-3 point?
|
Split the domain into L $=[0,0.5]$ and R $=(0.5,1]$ . The section of the given periodic orbit $2/7\to4/7\to6/7$ can be represented by the sequence LRR (as $2/7$ is in L etc.). Sequences RLR and RRL are equivalent to LRR by just shifting along the periodic orbit. For a sequence LRR, we start with an $x$ in L, so $T(x)=2x$ , which we have specified is in R. Then $T^2(x)=T(2x)=2-2(2x)=2-4x$ , which is also in R. So $T^3(x)=T(2-4x)=2-2(2-4x)=8x-2$ . Now, for a period-3 orbit, $T^3(x)=x$ , i.e. $8x-2=x$ , which gives uniquely $x=2/7$ . Let's try LLR. For the first two iterations, $x$ just doubles, so $T^2(x)=4x$ . Next, $4x$ is in R, so $T^3(x)=T(4x)=2-2(4x)=2-8x$ . For a period-3 orbit, $T^3(x)=2-8x=x$ implies uniquely $x=2/9$ , giving the periodic orbit $2/9\to 4/9\to 8/9\to 2/9\to\dots$ Applying the same procedure for RRR, we find uniquely $x=2/3$ , but $T(2/3)=2/3$ , so we have found a fixed point of $T$ , rather than a period-3 orbit. Finally, the last possible sequence, LLL, gives $T^3(x)=8x$ ; then $T^3(x)=x$ forces $x=0$ , which is again a fixed point of $T$ rather than a period-3 orbit.
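The branch-sequence method mechanizes nicely: track $T^k(x)=ax+b$ through each sequence of branches, solve $T^3(x)=x$, and keep only the solutions whose orbit actually follows the claimed branches. A sketch with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

HALF = Fraction(1, 2)

def T(x):
    return 2 * x if x <= HALF else 2 - 2 * x

period3 = set()
for seq in product("LR", repeat=3):
    # track T^k(x) = a*x + b through the chosen branches
    a, b = Fraction(1), Fraction(0)
    for s in seq:
        a, b = (2 * a, 2 * b) if s == "L" else (-2 * a, 2 - 2 * b)
    x = b / (1 - a)                      # solve T^3(x) = x (a = ±8, never 1)
    # keep x only if its orbit really follows the claimed branch sequence
    pt, valid = x, 0 <= x <= 1
    for s in seq:
        valid = valid and ((pt <= HALF) == (s == "L"))
        pt = T(pt)
    if valid and T(x) != x:              # discard the fixed points 0 and 2/3
        period3.add(x)

print(sorted(period3))
```

This recovers the two period-3 orbits $\{2/7,4/7,6/7\}$ and $\{2/9,4/9,8/9\}$.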
|
|dynamical-systems|ergodic-theory|fixed-points|topological-dynamics|
| 0
|
Question on counting the number of triangles formed by 1999 points in a square
|
I was reading an explanation for a solution to an Olympiad problem as follows: Let $S$ be a square with the side length 20 and let $M$ be the set of points formed with the vertices of $S$ and another 1999 points lying inside $S$ . Prove that there exists a triangle with vertices in $M$ and with area at most equal with $\frac 1{10}$ . In the solution it states that if you want to cover the square S with triangles formed by the points in M and the vertices of S, you will need 4000 triangles for which this calculation is provided: 2 * (1999 + 1) = 4000. How was this figure obtained/what is the explanation behind this calculation?
|
Euler's formula tells us that $v-e+f=2$ , where $v$ is the number of vertices, $e$ is the number of edges, $f$ is the number of faces (including the outer infinite face). We know that $$v=4+1999=2003.$$ Let us calculate $e$ . Each triangle face has $3$ edges surrounding it. The outer face has $4$ edges. This would give $3(f-1)+4$ edges. But every edge is counted twice here, since every edge borders exactly two faces. So, $$e= \frac{3(f-1)+4}2.$$ Put this into Euler's formula: $$2003-\frac{3(f-1)+4}2+f=2$$ $$4006-3f+3-4+2f=4$$ $$f=4001.$$ So the number of triangles is $f-1=4000.$
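The arithmetic above can be replayed mechanically; this sketch also shows why the count matches the book's $2\cdot(1999+1)$:

```python
v = 4 + 1999                 # square corners plus the 1999 interior points

# inner faces are triangles (3 edges each), the outer face is the square
# (4 edges), and every edge borders exactly two faces: 2e = 3(f - 1) + 4.
# Substituting e into Euler's formula v - e + f = 2 gives f = 2v - 5.
f = 2 * v - 5
e = (3 * (f - 1) + 4) // 2
assert v - e + f == 2        # Euler's formula holds

triangles = f - 1            # = 2v - 6 = 2*(v - 3) = 2*(1999 + 1)
print(triangles)             # 4000
```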
|
|geometry|contest-math|pigeonhole-principle|tessellations|
| 0
|
Every bounded function on $[a,b]$ is Riemann integrable on $[a,b]$
|
When we say "Every bounded function on $[a,b]$ is Riemann Integrable on $[a,b]$ ", then does it mean that $[a,b]$ is the domain of the function or any closed interval? I think that it is true considering the fact that $[a,b]$ is the domain of $f$ . But upon discussion, it turns out to be false when we consider Dirichlet function: $$ \begin{cases} 1,\quad x \in \mathbb{Q} \\ 0,\quad x \in \mathbb{Q}^{c} \end{cases} $$ This function is not Riemann integrable on any closed interval. Thus, this function would be a counter-example for the above statement. Any help will be appreciated.
|
A bounded function is Riemann integrable on $[a,b]$ if and only if the set of points at which it is not continuous has measure zero. This is Lebesgue's criterion for Riemann integrability. The Dirichlet function is nowhere continuous, so it is not Riemann integrable on any interval $[a,b]$; thus the quoted statement is false under either reading of the domain.
|
|real-analysis|
| 0
|
Value of expression $\frac{1}{1+z^{8k}+z^{16k}}$
|
If $z$ is a complex number such that $z^{23}=1$ and $z\neq 1,$ then find $\displaystyle \sum^{22}_{k=1}\frac{1}{1+z^{8k}+z^{16k}}.$ What I tried: Using $z^{23}=1\Longrightarrow z^{24}=z\Longrightarrow z^{24k}=z^k$ , so $\displaystyle \sum^{22}_{k=1}\frac{z^{8k}-1}{(1+z^{8k}+z^{16k})\cdot (z^{8k}-1)}$ $\displaystyle =\sum^{22}_{k=1}\frac{z^{8k}-1}{z^{24k}-1}$ $\displaystyle =\sum^{22}_{k=1}\frac{z^{8k}-1}{z^k-1}$ $\displaystyle =\sum^{22}_{k=1}\bigg(1+z^{k}+z^{2k}+\cdots +z^{7k}\bigg)$ I do not see how to finish, because directly calculating each term of the summation is very complex. Please have a look at this problem. Thanks!
|
Going on from where you stopped, and using that $z^n\neq 1$ for $1\le n\le 7$ (so each inner geometric sum collapses), one has: $$ \begin{align} \sum^{22}_{k=1} \left(1+z^{k}+z^{2k}+\cdots +z^{7k}\right) &= 22 + \sum_{n=1}^7 \sum^{22}_{k=1} z^{nk} \\ &= 22 + \sum_{n=1}^7 \left(\frac{z^{23n}-1}{z^n-1}-1\right) \\ &= 22 + \sum_{n=1}^7 (-1) \\ &= 15 \end{align} $$
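A quick numerical check, taking $z=e^{2\pi i/23}$ (a primitive 23rd root of unity), confirms the value $15$:

```python
import cmath

z = cmath.exp(2j * cmath.pi / 23)   # a primitive 23rd root of unity
total = sum(1 / (1 + z ** (8 * k) + z ** (16 * k)) for k in range(1, 23))
print(round(total.real, 6))         # 15.0
```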
|
|complex-numbers|
| 0
|
Number of possible combinations with a descending-value constraint
|
I'm trying to solve an engineering problem regarding the optimization of electrical steel widths composing a transformer core. In the end, I want to know how many combinations can be made, based on two parameters: the number of steps in the core ( $N$ ) and the number of available widths ( $W$ ). The constraints are: $W\ge N$, and the widths need to be in descending order. As an example, the following table contains the solution for $W=4$ :

| N | W | Combinations | Number of combinations $C$ |
|---|---|---|---|
| 1 | 4 | 4 / 3 / 2 / 1 | 4 |
| 2 | 4 | 4-3 / 4-2 / 4-1 / 3-2 / 3-1 / 2-1 | 6 |
| 3 | 4 | 4-3-2 / 4-3-1 / 4-2-1 / 3-2-1 | 4 |
| 4 | 4 | 4-3-2-1 | 1 |

What I want is to find a function (with $W$ and $N$ as inputs) that gives me the number of combinations $C$ . I couldn't find it so far.
|
Based on the comment made by @Daniel Mathis, it's clear that the problem is equivalent to choosing $N$-element subsets, i.e. the binomial coefficient. So it can be computed by the following formula: $$C=\binom{W}{N}=\frac{W!}{N!\,(W-N)!}$$ using the $N$ and $W$ as presented in the question.
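A quick check against the table for $W=4$, using Python's `math.comb`:

```python
from math import comb

W = 4
# a descending-order selection of N distinct widths out of W is just an
# N-element subset, so the count is the binomial coefficient C(W, N)
counts = {N: comb(W, N) for N in range(1, W + 1)}
print(counts)  # {1: 4, 2: 6, 3: 4, 4: 1}
```

These match the question's table row by row.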
|
|combinations|
| 0
|
A question about tournaments containing Hamilton paths
|
I need to show that there exists a tournament on $n$ vertices, $n\ge3$ , that has more than $\frac{n!}{2^{n-1}}$ Hamiltonian paths. Can I try to show that with induction? I'm not so sure how to start with $n=3$ . Can anyone give a hint? Thanks. I have also tried something else, but I am not sure it's right: I said that I need to show that the probability of that event is greater than zero, meaning the probability of its complement should be less than one, which holds because not all tournaments contain at most $\frac{n!}{2^{n-1}}$ Hamiltonian paths, since we have shown that there exists a tournament with at least $\frac{n!}{2^{n-1}}$ of them. Is that even right? Which way is recommended, the induction or the second approach?
|
HINT: Assign each edge $e$ in $K_n$ a direction, where each edge is assigned a direction mutually independently of the directions assigned to the other edges in $K_n$ , and either direction is assigned to $e$ with probability $\frac{1}{2}$ . Let $x_1,x_2, \ldots, x_n$ be an ordering of $V(G)$ . Then the probability that $x_1x_2,\ldots x_n$ form a Hamiltonian path [in that order] in the tournament as prescribed in 1. above is $2^{-(n-1)}$ . [Indeed, each of the $n-1$ edges $\{x_i,x_{i+1}\}$ has to be oriented from $x_i$ to $x_{i+1}$ .] There are $n!$ such orderings as in 2. above. Conclude that the expected number of Hamiltonian paths in the tournament, as constructed in 1. above, is then $n! \times 2^{-(n-1)}$ . Thus, there has to be at least one tournament with that many Hamiltonian paths.
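For $n=4$ the argument can be verified exhaustively: averaging the Hamiltonian-path count over all $2^6$ orientations of $K_4$ gives exactly $4!/2^{3}=3$, and since the transitive tournament attains only $1$, some tournament must exceed the average. A sketch:

```python
from itertools import combinations, permutations, product

n = 4
edges = list(combinations(range(n), 2))              # the 6 edges of K_4

def ham_paths(orientation):
    # orientation[i] == 1 means edges[i] = (a, b) is directed a -> b
    arc = {(a, b) if d else (b, a) for (a, b), d in zip(edges, orientation)}
    return sum(all((p[i], p[i + 1]) in arc for i in range(n - 1))
               for p in permutations(range(n)))

counts = [ham_paths(o) for o in product((0, 1), repeat=len(edges))]
average = sum(counts) / len(counts)
print(average, max(counts), min(counts))
```

The average equals $3$ exactly, the minimum is $1$ (the transitive tournament, consistent with Rédei's theorem that every tournament has at least one Hamiltonian path), so the maximum must exceed $3$.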
|
|probability|combinatorics|hamiltonian-path|
| 0
|
Fundamental calculus theorem and derivative
|
May I affirm that $$\frac{d}{dx}\left(\int_{x_0}^{x} f(t)\cdot g(t)\, dt\right)=f(x)\cdot g(x)$$ if $f$ and $g$ are well defined?
|
Yes, almost everywhere (by Lebesgue's Fundamental Theorem of Calculus, thanks @peek-a-boo) when $f(t) \cdot g(t)$ is integrable, and in general for continuous functions. Define $h(t) = f(t) \cdot g(t)$ , and let $H(t)$ be an antiderivative for $h(t)$ (this is where the integrability assumption comes into play). Then the Fundamental Theorem of Calculus informs you that $$\frac{\mathrm{d}}{\mathrm{d} x} {\bigg( \int_{x_0}^{x} h(t)\ \mathrm{d} t \bigg)} = \frac{\mathrm{d}}{\mathrm{d} x} {\big( H(x) - H(x_0) \big)}.$$ Now $x_0$ is a constant, so $H(x_0)$ is as well, meaning it vanishes when we take the derivative, so our expression simply becomes $\frac{\mathrm{d}}{\mathrm{d} x} H(x)$ , which is (by definition of $H$ as an antiderivative of $h$ ) the same as $h(x)$ . This is by definition of $h$ the same as $f(x) \cdot g(x)$ , as desired. Luckily in general integrability is very easy to satisfy, including for instance all continuous functions. See the other answer for an example where this
|
|calculus|integration|
| 0
|
How is it possible to demonstrate the necessary system to find particular solutions in LDE of order 2
|
When we want to find particular solutions to an LDE of order 2 like $$ y'' + ay' + by = f(x), $$ I see in a lot of sources that, after finding two solutions $y_1, y_2$ of the homogeneous equation, we want to find two functions $\lambda(x), \mu(x)$ that satisfy the system: $$ \Biggl \{ \begin{gather*}\lambda'(x) y_1 + \mu'(x) y_2 = 0 \\ \lambda'(x) y_1' + \mu'(x) y_2' = f(x) \end{gather*} \qquad (1) $$ Afterwards we can apply Cramer's rule to find their derivatives, and then integrate them. But I can't find where the system $(1)$ comes from. Might it come from the method of variation of constants?
|
The idea behind variation of parameters is to replace the general solution $c_1 y_1 + c_2 y_2$ to the homogeneous system with $c_1(x) y_1 + c_2(x) y_2$ . When you plug that into an inhomogeneous equation $y''+a(x) y' + b(x) y = f$ , you get $$c_1'' y_1 + 2 c_1' y_1' + c_1 y_1'' + c_2'' y_2 + 2 c_2' y_2' + c_2 y_2'' + a c_1' y_1 + a c_1 y_1' + a c_2' y_2 + a c_2 y_2' + b c_1 y_1 + b c_2 y_2=f.$$ That's a big mess, but a lot of it can be canceled out by using the fact that both of the $y_i$ satisfy the homogeneous equation. That means $c_i y_i'' + ac_i y_i' + b c_i y_i = 0$ so you have $$c_1'' y_1 + 2 c_1' y_1' + c_2'' y_2 + 2 c_2' y_2' + a c_1' y_1 + a c_2' y_2 = f.$$ The main reason why this was a good idea in the first place is that the $c_i$ now only appear through their derivatives, so we can do reduction of order. Explicitly, let $d_i=c_i'$ and then you have $$d_1' y_1 + 2 d_1 y_1' + d_2' y_2 + 2 d_2 y_2' + a d_1 y_1 + a d_2 y_2 = f.$$ Note that given the $d_i$ , we can pick the $c
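For completeness, here is how the standard presentation reaches system $(1)$ directly: one imposes the first equation of $(1)$ as an extra constraint (legitimate, since we have two unknown functions and only one equation to satisfy), and the second equation then falls out. A sketch of the computation:

```latex
% Seek y_p = \lambda(x) y_1 + \mu(x) y_2 and impose \lambda' y_1 + \mu' y_2 = 0.
\begin{aligned}
y_p'  &= \lambda y_1' + \mu y_2' + \underbrace{\lambda' y_1 + \mu' y_2}_{=\,0}
       = \lambda y_1' + \mu y_2', \\
y_p'' &= \lambda y_1'' + \mu y_2'' + \lambda' y_1' + \mu' y_2', \\
y_p'' + a y_p' + b y_p
      &= \lambda\,(y_1'' + a y_1' + b y_1) + \mu\,(y_2'' + a y_2' + b y_2)
         + \lambda' y_1' + \mu' y_2' \\
      &= \lambda' y_1' + \mu' y_2' \overset{!}{=} f(x).
\end{aligned}
```

The imposed constraint is the first line of $(1)$, and the requirement that $y_p$ solve the inhomogeneous equation is the second.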
|
|integration|ordinary-differential-equations|derivatives|fundamental-solution|
| 1
|
Jacobian of ODE in polar coordinates
|
The system of ODEs $$\dot{u} = bu - v + au(u^2 + v^2)$$ $$\dot{v} = u + bv + av(u^2 + v^2)$$ can be written in polar coordinates as $$\dot{r} = br + ar^3$$ $$\dot{\phi} = 1$$ I know that in Euclidean coordinates I can calculate the Jacobian and evaluate it at the equilibrium (0,0) to get $$ \begin{pmatrix} b & -1 \\ 1 & b \end{pmatrix}$$ How can I get the correct Jacobian directly from polar coordinates? Does this even make sense? I should at least be able to get a nonsingular Jacobian at the origin, right?
|
Ordinarily, you can get the correct Jacobian directly by taking the partial derivatives as you had done for $u,v$ , and evaluating at $r=0$. This will yield a polar Jacobian matrix of $$ J=\begin{pmatrix} b & 0 \\ 0 & 0 \end{pmatrix} $$ This leads to your question about invertibility. Ordinarily, the rank of the Jacobian should remain the same under any change of coordinates, since we are in theory multiplying our Jacobian matrix on the left and right by invertible matrices (these invertible matrices correspond to the Jacobian matrix of the coordinate transformation), but in this case we lose a dimension. So what's going on? This is one of the rare instances where polar coordinates fail us, namely at the origin. If we were examining the Jacobian at any other point in $(u,v)$ space then there would be no issue. The problem comes from the lack of invertibility of the coordinate transformation at the origin. The transformation $$ u=r\cos\phi,\;\;\;\;v=r\sin\phi $$ does not identify $(u,v)=(0,0)$ with a unique $(r,\phi)$: every pair $(0,\phi)$ maps to the origin.
|
|ordinary-differential-equations|jacobian|
| 1
|
Surface integrals in spherical coordinates
|
If I am given a surface in spherical coordinates $(r,\theta,\varphi)$ , such that it is parametrised as: $$ \begin{align} r&=r(\theta,\varphi)\\ \theta&=\theta\\ \varphi&=\varphi \end{align} $$ What is the area $S$ of such surface? Or more specifically, can you show how to get the result: $$ S=\int_{0}^{2\pi}\int_{0}^{\pi}\sqrt{r^2+\left(\frac{\partial r}{\partial \theta}\right)^2 + \frac{1}{\sin^2\theta}\left(\frac{\partial r}{\partial \varphi}\right)^2}\;r\sin\theta\;{\rm d}\theta\,{\rm d}\varphi $$ Some definitions that I am using: $k$ -surface : Let $k,N\in\mathbb{N}$ , $k<N$ , $M\subset \mathbb{R}^N$ is called a $k$ -surface , if there exists a non-empty open set $E\subset \mathbb{R}^k$ and a map $\varphi:\mathbb{R}^k\to \mathbb{R}^N$ , such that: (i) $\varphi(E)=M$ , (ii) $\varphi\in C^1(E;\mathbb{R}^N)$ , and (iii) the rank of the Jacobi matrix of $\varphi$ is equal to $k$ everywhere on $E$ . The surface is called simple if $\varphi$ is also injective on $E$ and $\varphi^{-1}$ is continuous.
|
The result can be derived from the theorem below on surface integrals, by setting the integrand function $f$ to unity. I have used the following conventions below : The physics convention $(r, \theta, \varphi)$ for spherical coordinates, with $\theta$ the polar angle and $\varphi$ the azimuthal angle [SPH]. The local spherical coordinate system basis vectors $\underline{\mathbf{\hat{r}}}, \, \underline{\mathbf{\hat{\theta}}}, \, \underline{\mathbf{\hat{\varphi}}}$ given by $\underline{\mathbf{\hat{r}}} = \sin \theta \cos \varphi \: \underline{\mathbf{i}} + \sin \theta \sin \varphi \: \underline{\mathbf{j}} + \cos \theta \: \underline{\mathbf{k}}$ , $\underline{\mathbf{\hat{\theta}}} = \cos \theta \cos \varphi \: \underline{\mathbf{i}} + \cos \theta \sin \varphi \: \underline{\mathbf{j}} - \sin \theta \: \underline{\mathbf{k}}$ , and $\underline{\mathbf{\hat{\varphi}}} = -\sin \varphi \: \underline{\mathbf{i}} + \cos \varphi \: \underline{\mathbf{j}}$ . The definition of the Riemann int
|
|calculus|integration|surfaces|surface-integrals|
| 0
|
Sufficient condition for convergence of integral
|
Given a function $f\colon \mathbb R \to \mathbb R$ and that $$ \lim_{t\to \infty}\int_{-t}^tf(x) \ dx=1, $$ does the integral $$ \int_{-\infty}^{\infty}f(x) \ dx $$ necessarily converge? I've been trying to come up with a counterexample rather than a proof, but to no avail yet.
|
Take any integrable function $g$ with compact support such that $\int_{-\infty}^\infty g(x) dx = 1$ . Then take some function integrable on compact intervals with $h(-x) = -h(x)$ for all $x \in \mathbb{R}$ but which is not integrable. Define $f := h + g$ . For large $t > 0$ it holds $\int_{-t}^t f(x) dx = 1$ , so $\lim_{t\to \infty}\int_{-t}^t f(x) dx = 1$ , but since $h$ is not integrable and $g$ is, $f$ is not integrable. So, for example $g(x) = 1$ for $|x|\leq 1/2$ and $g(x) = 0$ for $|x|>1/2$ and $h(x) = x$ .
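A concrete numerical illustration of this counterexample (a sketch: midpoint-rule quadrature, with $g$ the indicator of $[-1/2,1/2]$ and $h(x)=x$):

```python
def f(x):
    # h(x) = x (odd, not integrable over R) plus g(x) = indicator of [-1/2, 1/2]
    return x + (1.0 if abs(x) <= 0.5 else 0.0)

def sym_integral(t, n=200_000):
    # midpoint-rule approximation of the integral of f over [-t, t]
    h = 2 * t / n
    return sum(f(-t + (i + 0.5) * h) for i in range(n)) * h

# the symmetric integrals all equal 1, even though f is not integrable
for t in (1.0, 5.0, 25.0):
    print(round(sym_integral(t), 6))   # 1.0
```

The odd part cancels over every symmetric interval, so the symmetric limit exists while the two-sided improper integral does not.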
|
|real-analysis|integration|indefinite-integrals|
| 1
|
Proving a function isn't bounded
|
Let $f(x) = \frac{1}{x}$ be a function to and from the nonzero reals. Is $f$ bounded? I want to prove it using the definition. Let $M > 0$ be given. Then there exists a $\delta > 0$ such that $0<|x|<\delta$ implies $|f(x)|=|\frac{1}{x}|=\frac{1}{|x|}>\frac{1}{\delta} = M$ whenever $\delta = \frac{1}{M}$ . So is it correct to pick $\delta = \frac{1}{M}$ , and have I thereby proved that $f(x)$ tends to positive infinity as $x$ tends to $0$ , so that $f$ is unbounded?
|
If $f: \mathbb{R}\setminus \{0\} \to \mathbb{R}$ were bounded, there would exist a constant $M >0$ such that $$ |f(x)| \le M \quad \text{for all } x \neq 0. $$ However, this generates a contradiction, since $\left|f\left(\tfrac{1}{2M}\right)\right| = 2M > M$ .
|
|real-analysis|limits|analysis|epsilon-delta|
| 1
|
How to evaluate $\int_0^1\frac{\ln x\ln^2(1-x)}{1+x}dx$ in an elegant way?
|
How to prove, in an elegant way that $$I=\int_0^1\frac{\ln x\ln^2(1-x)}{1+x}dx=\frac{11}{4}\zeta(4)-\frac14\ln^42-6\operatorname{Li}_4\left(\frac12\right)\ ?$$ First, let me show you how I did it \begin{align} I&=\int_0^1\frac{\ln x\ln^2(1-x)}{1+x}\ dx\overset{1-x\ \mapsto x}{=}\int_0^1\frac{\ln(1-x)\ln^2x}{2-x}\ dx\\ &=\sum_{n=1}^\infty\frac1{2^n}\int_0^1x^{n-1}\ln^2x\ln(1-x)\ dx\\ &=\sum_{n=1}^\infty\frac1{2^n}\frac{\partial^2}{\partial n^2}\int_0^1x^{n-1}\ln(1-x)\ dx\\ &=\sum_{n=1}^\infty\frac1{2^n}\frac{\partial^2}{\partial n^2}\left(-\frac{H_n}{n}\right)\\ &=\sum_{n=1}^\infty\frac1{2^n}\left(\frac{2\zeta(2)}{n^2}+\frac{2\zeta(3)}{n}-\frac{2H_n}{n^32^n}-\frac{2H_n^{(2)}}{n^22^n}-\frac{2H_n^{(3)}}{n2^n}\right)\\ &=2\zeta(2)\operatorname{Li}_2\left(\frac12\right)+2\ln2\zeta(3)-2\sum_{n=1}^\infty\frac{H_n}{n^32^n}-2\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^22^n}-2\sum_{n=1}^\infty\frac{H_n^{(3)}}{n2^n} \end{align} By substituting $$S_1=\sum_{n=1}^\infty \frac{H_n}{n^32^n}=\operatorname{Li}_
|
We start with the following result: $$\sum\limits_{n = 1}^\infty {\frac{1}{{{n^3}}}} \left( {\sum\limits_{k = 1}^n {\frac{{{{\left( { - 1} \right)}^{k - 1}}}}{k}} } \right) = \frac{7}{4}\zeta \left( 3 \right)\ln 2 - \frac{{{\pi ^4}}}{{288}},$$ which can be found in the paper "Euler Sums and Contour Integral Representations" by Philippe Flajolet and Bruno Salvy. Then we can get the value of this integral after expanding it into a series: $$\int\limits_0^1 {\frac{{\ln \left( {1 + x} \right){{\ln }^2}x}}{{1 - x}}dx} = 2\sum\limits_{n = 1}^\infty {\frac{{{{\left( { - 1} \right)}^{n - 1}}}}{n}\left( {\sum\limits_{k = n + 1}^\infty {\frac{1}{{{k^3}}}} } \right)}$$ By Bailey's transform, this is $$= 2\sum\limits_{n = 1}^\infty {\frac{1}{{{n^3}}}} \sum\limits_{k = 1}^n {\frac{{{{\left( { - 1} \right)}^{k - 1}}}}{k}} - 2\sum\limits_{n = 1}^\infty {\frac{{{{\left( { - 1} \right)}^{n - 1}}}}{{{n^3}}}} = - \frac{{19{\pi ^4}}}{{720}} + \frac{7}{2}\zeta \left( 3 \right)\ln 2$$
|
|integration|sequences-and-series|definite-integrals|riemann-zeta|polylogarithm|
| 0
|
The triangle $ABC$ is right-angled in $A$. Prove that the inequality $(1-\sin B)(1-\sin C)\leq \frac{{(\sqrt{2}-1)}^2}{2}$ holds.
|
The question: The triangle $ABC$ is right-angled at $A$ . Prove that the inequality $(1-\sin B)(1-\sin C)\leq \frac{{(\sqrt{2}-1)}^2}{2}$ holds. The idea: Let's denote the sides of the triangle by $a,b,c$ . By the Pythagorean theorem, $a^2=b^2+c^2$ . I also tried replacing $\sin B =\frac{AC}{BC}$ and $\sin C= \frac{AB}{BC}$ , but got nothing useful. I thought of using the sine rule... but again, I got nowhere... I hope one of you can help me! Thank you!
|
In a triangle $\triangle ABC$ one has the known identity (source) $$\cos A+\cos B+\cos C=1+4\sin \tfrac A2\sin\tfrac B2\sin\tfrac C2.$$ When $A=90^\circ,$ this gives $$\sin\tfrac B2\sin\tfrac C2=\frac{\cos B+\cos C-1}{2\sqrt2}$$ $$4\sin^2\tfrac B2\sin^2\tfrac C2=\frac{(\cos B+\cos C-1)^2}{2}$$ Since $B+C=90^\circ$ implies $\sin B=\cos C$ and $\sin C=\cos B$, we have $(1-\sin B)(1-\sin C)=(1-\cos C)(1-\cos B)=4\sin^2\tfrac B2\sin^2\tfrac C2$, and $\cos B+\cos C=\sin B+\cos B=\sqrt2\sin(B+45^\circ)$, hence $$(1-\sin B)(1-\sin C)=\frac{(\sqrt2\sin(B+45^\circ)-1)^2}{2}\\ \leq\frac{(\sqrt2-1)^2}2$$
|
|geometry|inequality|trigonometry|
| 0
|
Definition of a one point subscheme and fibre at a point
|
The fibre of an $S$ -scheme $X$ over $s\in S$ is defined as the fibre product $X\times_S \{s\}$ where the one point subscheme $\{s\}$ is given a local ring $k(s)$ equal to the residue field at $s$ . This makes $\{s\}$ into an $S$ -scheme $Spec(k(s))$ . My question is: is there any special reason for choosing this particular structure $(s,k(s))$ ? Are there any other local rings we could attach to $\{s\}$ to make it into a scheme and so that the fibre product makes sense? I get that there is a morphism of schemes $Spec(k(s))\rightarrow S$ which is induced by a homomorphism of rings which is just reduction at the maximal ideal of the local ring $\mathcal{O}_{S,s}\rightarrow \mathcal{O}_{S,s}/\mathcal{M}_{S,s}$ , at least in the affine case. In this way I suppose one could think of the resulting $S$ -scheme $Spec(k(s))$ as the most natural one.
|
You absolutely can also consider the local ring $\mathcal{O}_{S,s}$ , and produce a different scheme $X \times_S \operatorname{Spec}(\mathcal{O}_{S,s})$ . One nice thing about $X \times_S \operatorname{Spec} k(s)$ is that this is a $k(s)$ -scheme -- schemes over fields are nice! A more important nice thing about $X \times_S \operatorname{Spec} k(s)$ is that the points of this scheme (as a topological space!) are in bijection with the points of $X$ lying over $s$ , which is not true of $X \times_S \operatorname{Spec}(\mathcal{O}_{S,s})$ in general. For example, let $S = \operatorname{Spec} \mathbb{Z}$ and let $X = \operatorname{Spec} \mathbb{Z}[\frac{1}{2}]$ . In other words, $X$ is the complement of the point $(2)$ in $S$ . Let's pick $s = (2)$ . Note that $X \times_S \operatorname{Spec}(k(s)) = \operatorname{Spec}(\mathbb{Z}[\frac{1}{2}] \otimes \mathbb{Z}/2) = \operatorname{Spec}(0) = \varnothing$ , as desired because $X$ is disjoint from $(2)$ . On the other hand, $X \times_S \opera
|
|algebraic-geometry|schemes|
| 1
|
How to evaluate $\int_0^1\frac{\ln x\ln^2(1-x)}{1+x}dx$ in an elegant way?
|
How to prove, in an elegant way that $$I=\int_0^1\frac{\ln x\ln^2(1-x)}{1+x}dx=\frac{11}{4}\zeta(4)-\frac14\ln^42-6\operatorname{Li}_4\left(\frac12\right)\ ?$$ First, let me show you how I did it \begin{align} I&=\int_0^1\frac{\ln x\ln^2(1-x)}{1+x}\ dx\overset{1-x\ \mapsto x}{=}\int_0^1\frac{\ln(1-x)\ln^2x}{2-x}\ dx\\ &=\sum_{n=1}^\infty\frac1{2^n}\int_0^1x^{n-1}\ln^2x\ln(1-x)\ dx\\ &=\sum_{n=1}^\infty\frac1{2^n}\frac{\partial^2}{\partial n^2}\int_0^1x^{n-1}\ln(1-x)\ dx\\ &=\sum_{n=1}^\infty\frac1{2^n}\frac{\partial^2}{\partial n^2}\left(-\frac{H_n}{n}\right)\\ &=\sum_{n=1}^\infty\frac1{2^n}\left(\frac{2\zeta(2)}{n^2}+\frac{2\zeta(3)}{n}-\frac{2H_n}{n^32^n}-\frac{2H_n^{(2)}}{n^22^n}-\frac{2H_n^{(3)}}{n2^n}\right)\\ &=2\zeta(2)\operatorname{Li}_2\left(\frac12\right)+2\ln2\zeta(3)-2\sum_{n=1}^\infty\frac{H_n}{n^32^n}-2\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^22^n}-2\sum_{n=1}^\infty\frac{H_n^{(3)}}{n2^n} \end{align} By substituting $$S_1=\sum_{n=1}^\infty \frac{H_n}{n^32^n}=\operatorname{Li}_
|
Let $I$ denote the integral in the question. Then we can write \begin{align*} I &= \int_0^1\frac{\log^2 (x) \log(1-x^2)-\log^2(x)\log(1+x)}{1+x}dx \\ &= \int_0^1\frac{\log^2 (x) \log(1-x^2)}{1+x}dx-\int_0^1 \frac{\log^2(x)\log(1+x)}{1+x}dx \quad \cdots (1) \end{align*} The first integral can be manipulated as follows: \begin{align*} \int_0^1\frac{\log^2 (x) \log(1-x^2)}{1+x}dx &= \frac{1}{8}\int_0^1 \frac{\log^2(x) \log(1-x)}{(1+\sqrt{x})\sqrt{x}}dx \\ &= \frac{1}{8} \left( \int_0^1 \frac{\log^2(x)\log(1-x)}{(1-x)\sqrt{x}}dx-\int_0^1 \frac{\log^2(x)\log(1-x)}{1-x}dx\right) \end{align*} Calculation of these two integrals is done by differentiating the beta function. The end result is $$\displaystyle \int_0^1\frac{\log^2 (x) \log(1-x^2)}{1+x}dx = \frac{7}{2} \zeta (3) \log (2)-\frac{11 \pi ^4}{360} \quad ... (2)$$ Now we turn our attention to the second integral of equation (1): \begin{align*} \int_0^1 \frac{\log^2(x)\log(1+x)}{1+x}dx &= \sum_{n=1}^\infty (-1)^{n+1} H_n \int_0^1 x^n \log
|
|integration|sequences-and-series|definite-integrals|riemann-zeta|polylogarithm|
| 0
|
Absolute convergence, conditional convergence or divergence of a series based on a variable
|
So I have this question here: Determine the values of the constant $q$ for which the series $$\sum_{n=3}^{\infty}(-1)^{n-1}\frac{1}{n\sqrt{\ln^q(n)}}$$ is absolutely convergent, conditionally convergent, or divergent. I assume I can just approach this directly. For absolute convergence, we have: $$\sum_{n=3}^{\infty}\left|\frac{1}{n\sqrt{\ln^q(n)}}\right|$$ Could I just use the integral test on this? This would mean that we have $\displaystyle \int_{3}^{\infty}\left|\frac{1}{x\sqrt{\ln^q(x)}}\right|dx$ . Since the upper limit of the integral is infinite, we really have: $$\lim_{t \to \infty} \int_{3}^{t}\frac{1}{x\sqrt{\ln^q(x)}}dx.$$ So if $u=\ln(x)$ , this gives $du=\frac1{x}dx{}$ , so we get $\displaystyle \int \frac{1}{\sqrt{u^q}}du$ or $\displaystyle \int \frac{1}{u^{q/2}}du$ . So this means that we need $\frac{q}{2}>1$ , or $q>2$ . For conditional convergence, we have by the alternating series test, we have that for $\displaystyle \sum_{n=3}^{\infty}(-1)^{n-1}\frac{1}{n\sqrt{\ln^q(n)}}$ , $b_n = \frac{1
|
Let $r=-p^{-1},$ where $p> 0.$ The function ${x\over \ln x}$ is increasing for $x>e.$ Thus the function ${x^p\over \ln(x^{p})}={1\over p}{x^p\over \ln x}$ is increasing for $x>e^{1/p}.$ So is the function ${x\over (\ln x)^{1/p}}=x(\ln x)^{r}.$ Therefore for $r=q/2$ the series is convergent by the Leibniz theorem.
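A plain-Python illustration for the sample value $q=1$ (which is $\le 2$, so there is no absolute convergence): the alternating partial sums settle down, while the partial sums of absolute values keep growing like $2\sqrt{\ln N}$.

```python
import math

# b_n = 1/(n sqrt(ln n)) is the term of the series for q = 1.
b = lambda n: 1.0 / (n * math.sqrt(math.log(n)))

S = A = 0.0          # alternating and absolute partial sums
snapshots = {}
for n in range(3, 200_001):
    S += (-1) ** (n - 1) * b(n)
    A += b(n)
    if n in (100_000, 200_000):
        snapshots[n] = (S, A)

gap_alt = abs(snapshots[200_000][0] - snapshots[100_000][0])
gap_abs = snapshots[200_000][1] - snapshots[100_000][1]
print(gap_alt, gap_abs)   # tiny vs. still visibly growing
```

The bound `gap_alt <= b(100001)` is exactly the alternating-series error estimate.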
|
|calculus|sequences-and-series|solution-verification|
| 1
|
Calculate all possible integer powers of $z^{n}, n \in I$
|
I saw a very interesting task to solve and got stuck. Does anyone have some hints on how to solve this? Thank you very much in advance; below I wrote what I did. There is a complex number in its algebraic form: $$z=\frac{1}{2} + i\frac{\sqrt{3}}{2}$$ I rewrote it in exponential form in order to raise it to the power ( $ Arg(z) = \arctan(\sqrt{3}) = \frac{\pi}{3}$ ): $$\frac{1}{2} + i\frac{\sqrt{3}}{2} = \sqrt{\frac{1}{4} + \frac{3}{4}}e^{i\frac{\pi}{3}} = e^{i\frac{\pi}{3}}$$ $$z^n = e^{in\frac{\pi}{3}}$$ $Arg(z)$ is the arctangent of the ratio of the imaginary and real parts of the complex number, and the power could be rewritten as: $$e^{i(n\frac{\pi}{3}+2n\pi k)}$$ How can all integer powers be determined here?
|
It is clear that IF there is a power, $n>1$ where $z^n=1$ , then any power after $n$ is guaranteed to be equal to one of the powers less than $n$ (i.e., $z^{n+m}=1\cdot z^m=z^m$ ). As such, it is reasonable to just perform powers until you reach $1$ (if that ever happens). I believe $z^6=1$ hence the only possible answers are $$z,z^2,z^3,z^4,z^5,z^6$$ and after that will always repeat.
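The claim that the powers repeat with period $6$ is easy to confirm numerically (plain Python, standard library only):

```python
import cmath

# z = 1/2 + i*sqrt(3)/2 = e^{i*pi/3}; check z^6 = 1, so the powers cycle
# with period 6 and only z, z^2, ..., z^6 are distinct.
z = cmath.exp(1j * cmath.pi / 3)

powers = [z ** n for n in range(1, 13)]
distinct = []
for w in powers:
    if all(abs(w - d) > 1e-9 for d in distinct):
        distinct.append(w)

print(abs(z ** 6 - 1))   # ≈ 0
print(len(distinct))     # 6
```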
|
|complex-numbers|
| 1
|
Theorem 11, Section 4.5 of Hungerford's Algebra
|
Let $R$ be a ring with identity. If $A$ is a unitary right $R$ -module and $F$ is a free left $R$ -module with basis $Y$ , then every element $u$ of $A\otimes_R F$ may be written uniquely in the form $u =\sum_{i=1}^n a_i\otimes y_i$ , where $a_i\in A$ and the $y_i$ are distinct elements of $Y$ . Proof: For each $y\in Y$ , let $A_y$ be a copy of $A$ and consider the direct sum $\sum_{y\in Y} A_y$ . We first construct an isomorphism $\theta : A\otimes_R F \cong \sum_{y\in Y}A_y$ as follows. Since $Y$ is a basis, $\{y\}$ is a linearly independent set for each $y\in Y$ . Consequently, the $R$ -module epimorphism $\varphi : R\to Ry$ given by $r\mapsto ry$ (Theorem 1.5) is actually an isomorphism. Therefore, by Theorem 5.7 there is for each $y\in Y$ an isomorphism $$A\otimes_R Ry \xrightarrow{1_A\otimes \varphi^{-1}} A\otimes_R R\cong A_y.$$ Thus by Theorems 5.9 and I.8.10 there is an isomorphism $\theta$ : $$A\otimes_R F\cong A\otimes_R ( \sum_{y\in Y}Ry) \cong \sum_{y\in Y} A\otimes_R Ry\
|
We can simplify the proof by showing that $Y^{(A)}=\sum_{y\in Y}A_y$ satisfies the defining property of the tensor product. We have the obvious $R$ -balanced bilinear map $$ i:A\times F\to Y^{(A)}, \quad (a,f) \mapsto (ar_y) \quad \textrm{for } f=\sum_yr_yy. $$ Given any $R$ -balanced bilinear map $\theta:A\times F\to M$ , we have a linear map $$ \hat\theta:Y^{(A)}\to M, \quad (a_y)\mapsto\sum_y\theta(a_y,y), $$ satisfying $\hat\theta i=\theta$ . Moreover, $\hat\theta$ is unique. Thus, by the uniqueness of the tensor product, there exists a unique isomorphism $\phi:A\otimes_RF\to Y^{(A)}$ such that $a\otimes f\mapsto(ar_y)$ for $f=\sum_yr_yy$ .
|
|abstract-algebra|modules|tensor-products|alternative-proof|free-modules|
| 1
|
Number of triangles in Erdös-Renyi graph
|
For each ${n}$ , let ${(V_n,E_n)}$ be an Erdös-Renyi graph on ${n}$ vertices with parameter ${1/2}$ (we do not require the graphs to be independent of each other). If ${|T_n|}$ is the number of triangles in ${(V_n,E_n)}$ , (i.e. the set of unordered triples ${\{i,j,k\}}$ in ${V_n}$ such that ${\{i,j\}, \{i,k\}, \{j,k\} \in E_n}$ ), show in fact that ${|T_n|/\binom{n}{3}}$ converges almost surely to ${1/8}$ . (Note: in contrast with the situation with the strong law of large numbers, the fourth moment does not need to be computed here.) Attempt: Using the second moment method, one can obtain the bound $$\displaystyle {\bf P}\left(\left|\frac{|T_n|}{\binom{n}{3}} - 1/8\right| \geq \varepsilon\right) \leq O\left(\frac{1}{n^2}\right)/{\varepsilon}^2$$ for any natural number $n$ and any $\varepsilon > 0$ , so $|\frac{|T_n|}{\binom{n}{3}} \to 1/8$ in probability. As the RHS is also summable in $n$ , we may apply the Borel-Cantelli lemma to establish the claim. Is there any other way to do th
|
Let $\{\tau_{i}\}$ for $i=1,2,...\binom{n}{3}$ be the enumeration of triangles. Then $T_{n}=\sum_{i}\mathbf{1}_{\tau_{i}}$ where $\mathbf{1}_{\tau_{i}}$ is the indicator of the event that the triangle $\tau_{i}$ is in the Erdos Renyi Graph. Then you have $$E(T_{n}^{2})=\sum_{i}E(\mathbf{1}_{\tau_{i}})+\sum_{i\neq j}E(\mathbf{1}_{\tau_{i}}\mathbf{1}_{\tau_{j}})$$ Now, suppose $\tau_{i}$ and $\tau_{j}$ where $i\neq j$ are both in the graph. Then either they share no vertices, or they share one vertex, or they share two vertices. In the first case, you have $\binom{n}{3}\binom{n-3}{3}$ many such possible pairs of triangles, each of which is present with probability $\dfrac{1}{2^{6}}$ . In case they share one vertex, then the possible number of pairs is $n\cdot\binom{n-1}{2}\binom{n-3}{2}$ , i.e. you choose the vertex which is common in $n$ ways and then choose each of the two remaining pairs of vertices in $\binom{n-1}{2}\binom{n-3}{2}$ ways. Each such pair will be there with proba
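The concentration being proved here is easy to observe empirically; a seeded Monte Carlo sketch in plain Python (the graph size $n=150$ is an arbitrary choice):

```python
import random
from itertools import combinations

# Estimate |T_n| / C(n,3) for one sample of G(n, 1/2); it should be near 1/8.
random.seed(0)

def triangle_density(n):
    verts = range(n)
    edge = {frozenset(e): random.random() < 0.5 for e in combinations(verts, 2)}
    triangles = sum(
        1
        for i, j, k in combinations(verts, 3)
        if edge[frozenset((i, j))] and edge[frozenset((j, k))] and edge[frozenset((i, k))]
    )
    return triangles / (n * (n - 1) * (n - 2) / 6)

density = triangle_density(150)
print(density)   # close to 0.125
```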
|
|probability-theory|random-graphs|
| 0
|
Representations of $SU(1,1)$ and its isomorphism with $SL(2,\mathbb{R})$
|
I've been reading up on the Lorentz group in $(2+1)-$ dimensions, i.e. $SO(2,1)$ . As I understand it, $SO(2,1)$ is isomorphic to $SL(2,\mathbb{R})$ , $SU(1,1)$ and $Sp(2,\mathbb{R})$ (modulo $\mathbb{Z}_2$ ). I am most interested in the $SL(2,\mathbb{R})$ , $SU(1,1)$ isomorphisms. In particular, there are a couple of confusions I have about the matrix representations of these groups, which I shall now elaborate on. 1. For $SL(2,\mathbb{R})$ , the anti-fundamental representation is trivially equivalent (in fact, identical) to the fundamental representation $N\in SL(2,\mathbb{R})$ , since by definition, the group elements are real. I assume this means that, unlike in the case of $SL(2,\mathbb{C})$ , we don't need dotted indices, and $SL(2,\mathbb{R})$ naturally acts on real two-dimensional spinors $\psi_a$ , which transform under $SL(2,\mathbb{R})$ as $$\psi_a \quad\rightarrow\quad \psi'_a = N_a^{~~b}\psi_b\;,$$ i.e. we do not need to consider " barred " spinors $\bar{\psi}_{\
|
It is important to be clear on the distinction between conjugate representations, and dual representations, as they are only necessarily the same when the relevant group is compact, which is not the case here. Self-conjugacy implies the representation is either quaternionic or real, in this instance the rep is manifestly real. Self duality on the other hand gives rise to invariant forms, either orthogonal or symplectic, and this is what we use to raise and lower indices, as the invariant form induces an isomorphism between the vector space being acted on, and its dual space. Furthermore in the case that a representation has both properties, or is isomorphic to its conjugate-dual a.k.a. anti-dual (but not necessarily either on their own), this gives rise to a sesqui-linear form. It is known that $\operatorname{SL}(2, \mathbb{F}) \cong \operatorname{Sp}(2, \mathbb{F})$ where $\mathbb{F}$ is the underlying field (This is only true for $n=2$ , otherwise $Sp \subset SL$ ). This is important
|
|group-theory|representation-theory|index-notation|
| 0
|
proof of $\pi_1(S^n)=0$ if $n\geq2$
|
I am trying to understand Hatcher's proof of $\pi_1(S^n)=0$ if $n\geq2$ , but it is challenging for me to understand each step. We can express $S^n$ as the union of two open sets $A_1$ and $A_2$ each homeomorphic to $\mathbb{R}^n$ such that $A_1 ∩ A_2$ is homeomorphic to $S^{n−1} \times\mathbb{R},$ for example by taking $A_1$ and $A_2$ to be the complements of two antipodal points in $S^n$ . How do we know the "union of two open sets $A_1$ and $A_2$ each homeomorphic to $\mathbb{R}^n$ " and that $A_1 ∩ A_2$ is homeomorphic to $S^{n−1} \times\mathbb{R}?$ Can anyone explain more about this part?
|
To be concrete, let $S^n$ be the sphere $\sum_{i=1}^{n+1}x_i^2=1$ , and take $A_1=S^n\backslash\{(0,\ldots,0,1)\}$ , $A_2=S^n\backslash\{(0,\ldots,0,-1)\}$ (i.e. remove the north and south poles respectively). You can give rigorous arguments that the $A_i$ are homeomorphic to $\mathbb{R}^n$ (e.g. stereographic projection) and that $$ A_1\cap A_2=A_1\backslash\{\text{pt.}\}\cong\mathbb{R}^n\backslash\{0\}\cong S^{n-1}\times \mathbb{R} $$ (e.g. for the second homeomorphism, consider $x\mapsto x/\|x\|$ ). While Hatcher's statement is meant to be intuitively "obvious" (as are many arguments in topology, abusing geometric intuition for the sake of efficiency), you should take the time to give more rigorous arguments a few times until you develop the intuition.
|
|algebraic-topology|proof-explanation|
| 1
|
Proving discontinuity of a function with $\varepsilon - \delta$ definition
|
Let $g(x)$ be: \begin{cases} 0 \space\space \text{if}\space\space x \in \mathbb Q \\ x\space\space \text{if} \space\space x \in \mathbb R - \mathbb Q \\ \end{cases} I know this function is continuous at $x = 0$ only, so I want to prove, using the negation of $\varepsilon - \delta$ definition, that $g$ is not continuous anywhere else. $\lim\limits_{x\to a} g(x) \ne g(a)$ means that $\exists \varepsilon \gt 0, \space\space \forall \delta \gt 0 \space\space$ such that $\space\space$ $0 \lt |x - a| \lt \delta \land |g(x) - g(a)| \ge \varepsilon$ Set $\varepsilon = \frac {|a|} 2$ Let $a \in \mathbb Q$ and select a $x \in (a-\delta, a + \delta) \cap \mathbb R - \mathbb Q$ . Such $x$ exists due to irrational numbers being dense in $\mathbb R$ . $|g(x) - g(a)| = |x| \ge \frac {|a|}2$ . Since, from the definition, we just need to show that there exists at least one $x$ inside $(a - \delta, a + \delta)$ , to satisfy the inequality, choose a $x \gt \frac {|a|}2$ . Thus, $|x| \ge \frac {|a|}2$ hol
|
You are missing a qualifier that might clear things up. A function $f$ is continuous at a point $a$ if $$\forall \epsilon>0,\exists\delta>0,\forall x\in\text{Domain}(f),\; 0<|x-a|<\delta \implies |f(x)-f(a)|<\epsilon.$$ The negation of this is $$\exists \epsilon>0,\forall \delta>0,\exists x\in\text{Domain}(f),\; 0<|x-a|<\delta \land |f(x)-f(a)|\ge\epsilon.$$ For a proof in the negative, correct, we choose $\epsilon = \frac{|a|}{2}>0$ , we then let $\delta>0$ be arbitrary. We then demonstrate that in either case $a\in\mathbb{Q}-\{0\}$ or $a\in\mathbb{R}-\mathbb{Q}$ we can find an $x$ where $0<|x-a|<\delta$ and $|g(x)-g(a)|\ge\epsilon$ . Your proof appears to be correct.
|
|calculus|continuity|epsilon-delta|
| 0
|
Number of triangles in Erdös-Renyi graph
|
For each ${n}$ , let ${(V_n,E_n)}$ be an Erdös-Renyi graph on ${n}$ vertices with parameter ${1/2}$ (we do not require the graphs to be independent of each other). If ${|T_n|}$ is the number of triangles in ${(V_n,E_n)}$ , (i.e. the set of unordered triples ${\{i,j,k\}}$ in ${V_n}$ such that ${\{i,j\}, \{i,k\}, \{j,k\} \in E_n}$ ), show in fact that ${|T_n|/\binom{n}{3}}$ converges almost surely to ${1/8}$ . (Note: in contrast with the situation with the strong law of large numbers, the fourth moment does not need to be computed here.) Attempt: Using the second moment method, one can obtain the bound $$\displaystyle {\bf P}\left(\left|\frac{|T_n|}{\binom{n}{3}} - 1/8\right| \geq \varepsilon\right) \leq O\left(\frac{1}{n^2}\right)/{\varepsilon}^2$$ for any natural number $n$ and any $\varepsilon > 0$ , so $|\frac{|T_n|}{\binom{n}{3}} \to 1/8$ in probability. As the RHS is also summable in $n$ , we may apply the Borel-Cantelli lemma to establish the claim. Is there any other way to do th
|
Define the random variables $X_{ijk} := 1_{\{i, j\}, \{j, k\}, \{i, k\}\in E_n}$ for all unordered triples ${\{i,j,k\}}$ in ${V_n}$ , and normalise the mean $\displaystyle {\bf E}X_{ijk} = 0$ for all such triples, by replacing each $X_{ijk}$ with $X_{ijk} - 1/8$ , so that $|T_n|$ gets replaced by $|T_n| - \frac{1}{8}\binom{n}{3}$ (and $|T_n|/\binom{n}{3}$ by $|T_n|/\binom{n}{3} - 1/8$ ). To prove the given claim, it then suffices to do so in this mean-zero setting. The first moment calculation shows that $|T_n|$ has mean zero. Hence $\displaystyle {\bf Var}(|T_n|) = {\bf E}|T_n|^2 = {\bf E} |\sum_{\substack{\{i,j,k\} \in V_n \\ \text{unordered}}} X_{ijk}|^2 = \sum_{1 \leq s,t \leq \binom{n}{3}}{\bf E} X_sX_t$ , where we relabel the $X_{ijk} = X_s$ to range over the set $\{1, \dots, \binom{n}{3}\}$ . Note that for $X_s$ and $X_t$ to be correlated, their corresponding triples need to share at least one edge, so the RHS of the above identity equals $\displaystyle \binom{n}{2}\sum_{1 \leq
|
|probability-theory|random-graphs|
| 0
|
Expectation of a function of the CDF of a Normal variable
|
Let $X \sim \mathcal{N}(\mu, \sigma^2)$ . Find the expectation of $\left(-\log_{e} \left(\Phi\left(\frac{X - \mu}{\sigma}\right)\right)\right)^3$ , where $\Phi \left(. \right)$ denotes the cumulative distribution function of a $\mathcal{N}\left(0, 1\right)$ random variable. Please provide some help and ideas on how to get this expectation. The answer is given as 6. I have tried to find $P \left(\left(-\log_{e}\left(\Phi(Z)\right)\right)^3 \le z\right)$ , where $Z = \frac{X - \mu}{\sigma}$ , and then take the derivative to get the density function; finally I calculated the expectation, but I didn't get the answer as 6. I appreciate any help you can provide.
|
Note that, using properties of the Normal distribution, $\dfrac{X-\mu}{\sigma}\sim N(0,1)$ . Then, as the variable is continuous, a standard result gives us that $$\Phi\left(\dfrac{X-\mu}{\sigma}\right)\sim U(0,1)$$ Then the variable whose expectation we want to find, call it $Y$ , satisfies $$Y\sim (-\log T)^3,$$ where $T \sim U(0,1)$ . But $-\log(T)\sim \mathrm{Exp}(1)$ as $T \sim U(0,1)$ . Thus, we just need to find $E[Z^3]$ , where $Z\sim \mathrm{Exp}(1)$ , that is, $$\int_0^{\infty} x^3 e^{-x} dx$$ You may do this integral by parts, or just remember that, for an $\mathrm{Exp}(1)$ variable, $$E[Z^k]=k!$$ In either case, the result is 6, as desired.
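Both the reduced integral and the original expression can be checked numerically (a sketch assuming SciPy is available; the parameters $\mu=2$, $\sigma=3$ in the Monte Carlo run are arbitrary, since the answer does not depend on them):

```python
import math, random
from statistics import NormalDist
from scipy.integrate import quad

# Check 1: for Z ~ Exp(1), E[Z^3] = ∫_0^∞ x^3 e^{-x} dx = 3! = 6.
third_moment, _ = quad(lambda x: x**3 * math.exp(-x), 0, math.inf)
print(third_moment)            # ≈ 6.0

# Check 2: seeded Monte Carlo on the original expression.
random.seed(1)
mu, sigma = 2.0, 3.0           # arbitrary choice; result should still be 6
Phi = NormalDist().cdf
n = 200_000
total = 0.0
for _ in range(n):
    x = random.gauss(mu, sigma)
    total += (-math.log(Phi((x - mu) / sigma))) ** 3
print(total / n)               # ≈ 6, up to Monte Carlo error
```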
|
|statistics|normal-distribution|expected-value|cumulative-distribution-functions|
| 0
|
challenging inequality with complex numbers
|
The statement of the problem: Let $n \in \mathbb N \setminus \{0\}$ and $z_1,z_2, \dots, z_n \in \mathbb C $ . Prove that $$\sum_{i=1}^n |z_i||z-z_i| \ge \sum_{i=1}^n |z_i|^2$$ holds for any $z \in \mathbb C $ $\iff$ $z_1+z_2+\dots+z_n=0$ My approach: I tried to prove the inequality using Cauchy's inequality and the modulus inequality, but I reached some ambiguous results, from which it does not appear that their sum would be 0. I think that the solution can also be a geometric one: given the condition $z_1+z_2+\dots+z_n=0$ , it may be that the vertices of the polygon with these affixes form a regular polygon, but I'm not sure. Any and all proofs will be helpful. Thanks a lot!
|
Here's a proof using Triangle Inequality and Cauchy-Schwarz (i.) when $\sum_{k=1}^n z_k = 0$ $\sum_{k=1}^n \vert z_k\vert^2= \Big \vert \sum_{k=1}^n \overline z_k \cdot z_k - \overline z_k\cdot z\Big\vert = \Big \vert \sum_{k=1}^n \overline z_k (z_k-z)\Big\vert$ $\leq \sum_{k=1}^n \Big \vert \overline z_k (z_k-z)\Big\vert=\sum_{k=1}^n \Big \vert z_k\Big \vert \cdot \Big \vert z-z_k\Big\vert$ by triangle inequality (ii.) now suppose the inequality remains true when $\sum_{k=1}^n z_k =\lambda\neq 0$ , then Cauchy-Schwarz tells us $\sum_{k=1}^n \vert z_k\vert^2\leq \sum_{k=1}^n \Big \vert z_k\Big \vert \cdot \Big \vert z-z_k\Big\vert\leq \Big(\sum_{k=1}^n \vert z_k\vert^2\Big)^\frac{1}{2} \cdot \Big(\sum_{k=1}^n \Big \vert z-z_k\Big\vert^2\Big)^\frac{1}{2}$ $\implies\sum_{k=1}^n \vert z_k\vert^2\leq \sum_{k=1}^n \Big(z -\overline z_k\big)\overline{\Big(z - \overline z_k\Big)}= n \vert z\vert^2 + \Big(\sum_{k=1}^n \vert z_k\vert^2\Big)- 2\cdot \text{Re}\Big(z \sum_{k=1}^n z_k\Big)$ $\impli
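Direction (i) can also be spot-checked numerically on random instances (plain Python sketch with a fixed seed; the sizes and scales are arbitrary):

```python
import random

# If z_1 + ... + z_n = 0, then sum |z_k||z - z_k| >= sum |z_k|^2 for every z.
random.seed(2)

def rand_c(scale=1.0):
    return complex(random.uniform(-scale, scale), random.uniform(-scale, scale))

for _ in range(1000):
    zs = [rand_c() for _ in range(5)]
    zs.append(-sum(zs))          # force z_1 + ... + z_6 = 0
    z = rand_c(3.0)
    lhs = sum(abs(w) * abs(z - w) for w in zs)
    rhs = sum(abs(w) ** 2 for w in zs)
    assert lhs >= rhs - 1e-12
print("inequality held in all trials")
```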
|
|inequality|complex-numbers|cauchy-schwarz-inequality|
| 1
|
find the strictly positive real number r in this equation with complex numbers
|
The statement of the problem: Find the numbers $r \in (0,\infty)$ for which there exists a $z \in \mathbb C$ such that $|z| = r$ and $|1+z^2|=2r$ . My approach: I tried a geometric solution in which I looked for complex numbers such that circles of radius $2r$ contain such numbers, but I got stuck. Any and all proofs will be helpful. Thanks a lot!
|
In polar form, the two conditions mean that $$ (r^2\cos2\theta+1)^2+(r^2\sin2\theta)^2=4r^2. $$ That is, $$ r^4-4r^2+1=-2r^2\cos2\theta. $$ Therefore, a solution $z=r(\cos\theta+i\sin\theta)$ exists if and only if $-2r^2\le r^4-4r^2+1\le2r^2$ . The first inequality can be rewritten as $(r^2-1)^2\ge0$ , which is always true. The second one is equivalent to $r^4-6r^2+1\le0$ which holds iff $3-\sqrt{8}\le r^2\le3+\sqrt{8}$ , i.e., iff $\sqrt{3-\sqrt{8}}\le r\le\sqrt{3+\sqrt{8}}$ .
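The solvability criterion above reduces to a one-line check in code: a solution exists iff $\cos 2\theta = -(r^4-4r^2+1)/(2r^2)$ lies in $[-1,1]$. A plain-Python sketch (note that $\sqrt{3\mp\sqrt 8}=\sqrt2\mp1$):

```python
import math

def solvable(r):
    c = -(r**4 - 4 * r**2 + 1) / (2 * r**2)
    return -1 <= c <= 1

r_lo = math.sqrt(3 - math.sqrt(8))   # = sqrt(2) - 1
r_hi = math.sqrt(3 + math.sqrt(8))   # = sqrt(2) + 1

print(solvable(r_lo + 1e-9), solvable(r_hi - 1e-9))   # True True
print(solvable(r_lo - 1e-3), solvable(r_hi + 1e-3))   # False False
```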
|
|complex-analysis|analytic-geometry|
| 0
|
Show that $\int_{-\infty}^\infty \frac{e^x}{e^{2x}+e^{2a}}\frac{1}{x^2+\pi^2}dx = \frac{2\pi e^{-a}}{4a^2+\pi^2}-\frac{1}{1+e^{2a}}$
|
Show that \begin{align*} \int_{-\infty}^\infty \frac{e^x}{e^{2x}+e^{2a}}\frac{1}{x^2+\pi^2}dx = \frac{2\pi e^{-a}}{4a^2+\pi^2}-\frac{1}{1+e^{2a}} \end{align*} where $a\in \mathbb{R}$ . My small try Let $\displaystyle f(z) = \frac{e^z}{(e^{2z}+e^{2a})z}$ . Let $\Gamma$ be the positively oriented rectangle in the complex plane with vertices $R-i\pi$ , $R+i\pi$ , $-R+i\pi$ and $-R-i\pi$ where $R>|a|$ . There are three first order poles of $f(z)$ lying inside $\Gamma$ at $z= 0, \frac{i\pi}{2}+a$ and $-\frac{i\pi}{2}+a$ . Then, by the residue theorem, we have \begin{align*} \int_\Gamma f(z)\; dz &= 2\pi i \left( \mathop{\text{Res}}_{z=0} \; f(z) + \mathop{\text{Res}}_{z=\frac{i\pi}{2}+a} \; f(z) + \mathop{\text{Res}}_{z=-\frac{i\pi}{2}+a} \; f(z)\right) \\ &= 2\pi i \left( \frac{1}{1+e^{2a}} - \frac{i e^{-a}}{2a+i\pi} + \frac{i e^{-a}}{2a-i\pi}\right) \\ &= 2\pi i \left( \frac{1}{1+e^{2a}} - \frac{2\pi e^{-a}}{4a^2 +\pi^2}\right) \tag{1} \end{align*}
|
Suppose $a\ge 0$ . Let $\displaystyle f(z) = \frac{e^z}{(e^{2z}+e^{2a})(z^2+\pi^2)}$ . Then $f(z)$ will have poles $$ z=\pi i, z_k=(k+\frac12)\pi i+a,k=0,1,2,\cdots.$$ in the upper half plane. Set $R_n=\sqrt{(n+1)^2\pi^2+a^2}$ . Let $\Gamma_n$ be the union of the upper semi-circle $\gamma_n$ with radius $R_n$ centered at $0$ and the segment from $-R_n$ to $R_n$ . Clearly $f(z)$ has poles $$ z=\pi i, z_k=(k+\frac12)\pi i+a,k=0,1,2,\cdots, n $$ inside $\Gamma_n$ and is analytic on $\gamma_n$ . Then \begin{align*} \int_{-R_n}^{R_n}f(x)dx+\int_{\gamma_n} f(z)\; dz &= 2\pi i \left( \mathop{\text{Res}}_{z=\pi i} \; f(z) + \sum_{k=0}^n\mathop{\text{Res}}_{z=z_k} \; f(z)\right) \\ &= 2\pi i \left( \frac{i}{2\pi(1+e^{2a})}+\sum_{k=0}^n\frac{i (-1)^{k+1}e^{-a}}{2} \frac{1}{z_k^2+\pi^2}\right) \\ &=-\frac{1}{1+e^{2a}}+\pi e^{-a}\sum_{k=0}^n \frac{(-1)^{k}}{z_k^2+\pi^2}. \end{align*} Now letting $n\to\infty$ gives \begin{align*} \int_{-\infty}^\infty f(x)\; dx &=-\frac{1}{1+e^{2a}}+\pi e^{-a}\sum_{k=0}^\infty \frac{(-1)^
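The stated identity can be verified numerically for a sample value of $a$ (a sketch assuming SciPy is available; the integrand decays like $e^{-|x|}$, so truncating the range to $[-50,50]$ is harmless at double precision):

```python
import math
from scipy.integrate import quad

# Check the identity for a = 1.
a = 1.0
f = lambda x: math.exp(x) / ((math.exp(2 * x) + math.exp(2 * a)) * (x**2 + math.pi**2))
numeric, _ = quad(f, -50, 50)
closed = 2 * math.pi * math.exp(-a) / (4 * a**2 + math.pi**2) - 1 / (1 + math.exp(2 * a))
print(numeric, closed)   # both ≈ 0.0475
```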
|
|calculus|integration|definite-integrals|improper-integrals|closed-form|
| 0
|
prove jointly gaussians of non linear function of gaussian
|
Let $x_1, x_2, x_3 \sim N(0,1)$ iid, and $$Y = \frac{x_1 + x_2 x_3}{\sqrt{1+ x_3^2}}.$$ Are $Y$ and $X_3$ jointly Gaussian? I already showed that $E[Y|X_3]=0$ and $\operatorname{Var}[Y|X_3]=1$ , and that $Y|X_3$ is Gaussian... But how can I proceed? Maybe there is a mistake in the question; fixing it might make it better?
|
It seems obvious to me. What about the following formal proof: $$E(e^{sY+tX_3})=E[E(e^{sY+tX_3}|X_3)]=E[E(e^{sY}|X_3)\times e^{tX_3}]=e^{s^2/2}\times e^{t^2/2}?$$
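The factorised joint MGF can be sanity-checked by a seeded Monte Carlo run (plain Python sketch): $Y$ should be standard normal and $E[Y X_3]=0$.

```python
import math, random

# With X1, X2, X3 iid N(0,1) and Y = (X1 + X2*X3)/sqrt(1 + X3^2),
# estimate E[Y], Var[Y], and E[Y*X3].
random.seed(3)
n = 200_000
sum_y = sum_y2 = sum_yx3 = 0.0
for _ in range(n):
    x1, x2, x3 = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
    y = (x1 + x2 * x3) / math.sqrt(1 + x3 * x3)
    sum_y += y
    sum_y2 += y * y
    sum_yx3 += y * x3

mean_y = sum_y / n
var_y = sum_y2 / n - mean_y**2
cross = sum_yx3 / n
print(mean_y, var_y, cross)   # ≈ 0, 1, 0
```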
|
|probability-theory|stochastic-processes|normal-distribution|
| 0
|
Inclusion map in Mayer-Vietoris sequence for $\mathbb {RP}^2$ with 3 points removed
|
I was trying to compute the homology groups of the projective plane with 3 points removed and I was wondering how the map $H_1(j_1,-j_2)$ acts on the first homology group of the intersection. As open sets $U$ and $V$ I took $U=(\mathbb{RP}^2\setminus \{P,Q,R\})\setminus\partial\mathbb{RP^2}\sim D^2\setminus\{P,Q,R\}{\xrightarrow{r}}\mathbb S^1\vee \mathbb S^1\vee\mathbb S^1$ , $V$ = tubular neighbourhood of the boundary of the projective plane such that $P,Q,R\notin V\sim\mathbb S^1$ , so $U\cap V\sim\mathbb S^1$ . In order to compute the first homology group of $X:=\mathbb {RP^2}\setminus\{P,Q,R\}$ I can use the fact that for path connected topological space $Ab(\pi_1(X,x_0))\cong H_1(X)$ or, since $X$ retracts on the bouquet of 3 circles, we could write $H_1(X)\cong H_1(\mathbb S^1\vee\mathbb S^1\vee\mathbb S^1)\cong \mathbb Z^3$ . But if I try to compute the group with Mayer-Vietoris long sequence I can't figure out the action of the map $H_1(j_1,-j_2):H_1(U\cap V)\to H_1(U)\oplus H
|
To get a deformation retraction of $X$ onto the bouquet of circles, you should arrange things as in this answer. To use your decomposition to compute homology (without knowing $X\simeq S^1\vee S^1\vee S^1$ ), think about the induced maps in the M-V sequence: $U$ much more obviously retracts onto something homotopy equivalent to $S^1\vee S^1\vee S^1$ ; this is basically the known disk-minus-points construction. It even deformation retracts onto this thing, so we know $H_\ast(U)$ . $V$ deformation retracts to a circle in the obvious way. $U\cap V$ is also a circle if you make the tubular neighbourhood thin enough but $U\cap V\hookrightarrow V$ is not inducing the identity map as you would expect; because of the antipodal equivalences, our central loop in $U\cap V$ gets mapped under, say, $z\mapsto z^2$ to a doubly wound loop in $V$ ! $U\cap V\hookrightarrow U$ gives the map $n\mapsto(n,n,n)$ (up to isomorphism); you see this by letting the outermost loop come down onto the triad and n
|
|general-topology|geometry|algebraic-topology|homology-cohomology|
| 1
|
Determine Intersection between a Plane and a Line Rotated about an Axis of Rotation
|
Given: A line segment in 3D space which can be represented as: $\begin{pmatrix}x_0 \\y_0 \\z_0 \end{pmatrix} + t\begin{pmatrix}d_{x} \\d_{y} \\d_{z} \end{pmatrix}$ , where $0\le t\le1$ A point in 3D space about which the line segment above is to be rotated (center of rotation): $\begin{pmatrix}x_c \\y_c \\z_c \end{pmatrix}$ A unit vector representing the axis of rotation: $\begin{pmatrix}k_{x} \\k_{y} \\k_{z} \end{pmatrix}$ A finite plane in 3D space: $\begin{pmatrix}x_1 \\y_1 \\z_0 \end{pmatrix} + u\begin{pmatrix}d_{x1} \\d_{y1} \\d_{z1}\end{pmatrix}+v\begin{pmatrix}d_{x2} \\d_{y2} \\d_{z2} \end{pmatrix}$ , where $0\le u\le1$ and $0\le v\le1$ [This image shows the general setup][1] Problem: When the line segment is rotated about the center of rotation and rotation axis, I would like to determine the minimum amount of rotation required in order for the line segment to intersect the finite plane. Is it possible to solve this problem analytically? What I know: The Rodrigues rotational fo
|
Too long for a comment. An approach involving optimization procedures could be used here, but a simpler way to handle the problem is a graphical check. Calling $R(\theta)\cdot\left(\begin{pmatrix}x_0-x_c \\y_0-y_c \\z_0-z_c \end{pmatrix} + t\begin{pmatrix}d_{x} \\d_{y} \\d_{z} \end{pmatrix}\right)+\begin{pmatrix}x_c \\y_c \\z_c \end{pmatrix} - \begin{pmatrix}x_1 \\y_1 \\z_0 \end{pmatrix} - u\begin{pmatrix}d_{x1} \\d_{y1} \\d_{z1}\end{pmatrix}-v\begin{pmatrix}d_{x2} \\d_{y2} \\d_{z2} \end{pmatrix}=0$ and solving for $(t,u,v)$ we will obtain $$ \cases{ t = g_1(\theta)\\ u = g_2(\theta)\\ v = g_3(\theta) } $$ after that, plotting $(g_1(\theta),g_2(\theta),g_3(\theta))$ simultaneously and comparing with the needed range $(0,1)$ , we can easily choose the most convenient value for $\theta$ , if it exists. NOTE To obtain an explicit formula for $(t,u,v) = (g_1(\theta),g_2(\theta),g_3(\theta))$ we can proceed as follows: Subjecting the segment $s(t) = p_0 + t \vec d$ to a rotation around
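This recipe can be sketched directly in code (assuming NumPy is available): rotate the segment with Rodrigues' formula, solve the $3\times3$ linear system for $(t,u,v)$ at each $\theta$ on a grid, and take the first $\theta$ where all three land in $[0,1]$. The geometry below is hypothetical toy data chosen so the answer is $\theta=\arctan(1/2)$.

```python
import math
import numpy as np

# Toy data: segment t*(2,0,0) rotated about the z-axis through the origin;
# plane patch x = 1, y in [0.5, 2.5], z in [-1, 1].
p0, d = np.array([0., 0, 0]), np.array([2., 0, 0])
c, k = np.array([0., 0, 0]), np.array([0., 0, 1])
p1, d1, d2 = np.array([1., 0.5, -1]), np.array([0., 2, 0]), np.array([0., 0, 2])

def rodrigues(v, k, theta):
    return (v * math.cos(theta) + np.cross(k, v) * math.sin(theta)
            + k * np.dot(k, v) * (1 - math.cos(theta)))

def first_hit(step=1e-3):
    for i in range(int(math.pi / step)):
        theta = i * step
        rd = rodrigues(d, k, theta)
        rp0 = rodrigues(p0 - c, k, theta) + c
        A = np.column_stack([rd, -d1, -d2])   # t*rd - u*d1 - v*d2 = p1 - rp0
        try:
            t, u, v = np.linalg.solve(A, p1 - rp0)
        except np.linalg.LinAlgError:
            continue                           # rotated line parallel to plane
        if 0 <= t <= 1 and 0 <= u <= 1 and 0 <= v <= 1:
            return theta
    return None

theta_min = first_hit()
print(theta_min, math.atan(0.5))   # first hit near atan(1/2)
```

A root-finder on the boundary conditions would refine the grid answer; the scan is just the "graphical check" made mechanical.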
|
|geometry|plane-geometry|
| 0
|
Is there anything currently that generates rejection from the mathematical community?
|
Is there anything currently that generates rejection from the mathematical community, as happened with the complex roots of algebraic equations?
|
I don't think the assumptions of the question are correct. "Complex roots" were never rejected, they were not fully understood. I understand it is incredibly hard to place yourself in a world with less knowledge than what you currently possess; but when you "don't know something" you also typically "don't know that you don't know something". The tools of attack we have today, such as "graphing", "algebra", etc. were not always known; and it is typically foolish to think that mathematicians declare things unfit for study without reason. A brief history of polynomials. Polynomials were typically understood mostly as geometric objects. They represented objects in geometry and their roots were typically specific solutions to pure geometric problems. The word "quadratic" stems from the concept of a "square" which has 4 sides. The notion of "completing the square" has an actual meaning in pure geometric terms. Of course the geometry makes no sense when the sides have negative length. Such ge
|
|complex-numbers|self-learning|education|math-history|learning|
| 0
|
Check the convergence : $ 1-\frac{1}{2}+\frac{1 \cdot 3}{2 \cdot 4}-\frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6}+\ldots $
|
Have a look at this question: Test whether the following series is conditionally convergent or absolutely convergent: $$ 1-\frac{1}{2}+\frac{1 \cdot 3}{2 \cdot 4}-\frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6}+\ldots $$ My attempt: Since the given series is $$1+\sum_{n=1}^\infty (-1)^n\frac{1 \cdot 3\cdots (2n-1)}{2\cdot 4 \cdots 2n}=\sum_{n=0}^\infty (-1)^na_n,~\text{where,}~a_0=1,~a_n=\frac{1 \cdot 3\cdots (2n-1)}{2\cdot 4 \cdots 2n},~n\geq1.$$ See that $\displaystyle \frac{a_{n+1}}{a_n}=\frac{2n+1}{2n+2}<1$ for all $n$ . Now, I have to verify $$\lim_{n \to \infty}a_n=\lim_{n \to \infty}\frac{1 \cdot 3\cdots (2n-1)}{2\cdot 4 \cdots 2n}=0,$$ to apply the alternating series test. Observe that $$a_n=a_n \cdot \frac{2\cdot 4 \cdots 2n}{2\cdot 4 \cdots 2n}=\frac{1 \cdot 2\cdot 3\cdots 2n}{(2\cdot 4 \cdots 2n)^2}=\frac{(2n)!}{4^n(n!)^2}$$ Thereafter, I tried to make a sandwich $$\frac{(2n)!}{4^n(2n)!^2}\leq \frac{(2n)!}{4^n(n!)^2}\leq \frac{(2n)!}{4^nn!}$$ $$\Rightarrow \frac{1}{4^n(2n)!}\leq \frac{(2n)!}
|
We have for $k\ge1$ : $$\begin{aligned} &(k+1)(2k-1)^2=4k^3-3k+1<4k^3,\\ &k(2k-1)^2=4k^3-4k^2+k>4k^3-4k^2=4k^2(k-1). \end{aligned} $$ This means: $$ \frac{k-1}k<\left(\frac{2k-1}{2k}\right)^2<\frac{k}{k+1}. $$ From $$ \left[\frac12\prod_{k=2}^n\frac{2k-1}{2k}\right]^2<\frac14\prod_{k=2}^n\frac{k}{k+1}=\frac1{2(n+1)} $$ and $$ \left[\frac12\prod_{k=2}^n\frac{2k-1}{2k}\right]^2>\frac14\prod_{k=2}^n\frac{k-1}{k}=\frac1{4n}, $$ valid for $n>1$ , one concludes: $$ \frac1{\sqrt{4n}}<a_n<\frac1{\sqrt{2(n+1)}}. $$ This is much weaker than the Stirling estimate but suffices to solve your problem. The estimate can be sharpened if instead of $\frac12$ the factor $\prod_{k=1}^m\frac{2k-1}{2k}$ is split off. Then the corresponding inequality is valid for $n>m$ .
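A quick numerical check (plain Python) of the resulting bounds on $a_n=\prod_{k=1}^n\frac{2k-1}{2k}$: the telescoping products give $\frac1{4n} < a_n^2 < \frac1{2(n+1)}$, so in particular $a_n\to0$ and the Leibniz theorem applies.

```python
import math

def a(n):
    p = 1.0
    for k in range(1, n + 1):
        p *= (2 * k - 1) / (2 * k)
    return p

for n in (2, 10, 100, 10_000):
    an = a(n)
    assert 1 / math.sqrt(4 * n) < an < 1 / math.sqrt(2 * (n + 1))

print(a(10_000))   # ≈ 0.00564, consistent with the sharper 1/sqrt(pi*n)
```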
|
|real-analysis|sequences-and-series|
| 1
|
Let $V$ be an inner product space and let $u$, $v$ $\in$ $V$ . Prove that $||u+v||=||u||+||v|| \iff$ either $u=av$ or $v=au$ for some $a\in\Bbb C$.
|
Let $V$ be an inner product space and let $u, v \in V$ . Prove that $\|u+v\|=\|u\|+\|v\|$ iff either $u=\alpha v$ or $v=\alpha u$ for some $\alpha\in\Bbb C$. I got stuck with $\|u+v\|=\|u\|+\|v\|$ if either $u=\alpha v$ or $v=\alpha u$ for some $\alpha\in\Bbb C$. I expanded $\|u+v\|=\|u\|+\|v\|$ and got: $$\langle u,v\rangle +\overline{\langle u,v\rangle}=\langle u,v\rangle +\langle v,u\rangle =2\sqrt{\langle u,u\rangle\langle v,v\rangle}=2\|u\|\|v\|$$ But I don't see how to continue.
|
The statement is not quite correct: if $\|u+v\| = \|u\|+\|v\|$ then $\dim(\text{span}_{\mathbb C}\{u,v\}) = 1$ , but this is not sufficient, one needs $v\in\mathbb R_{\geq 0}.u$ or $u \in \mathbb R_{\geq 0}.v$ . The most trivial case where $u \in \mathbb C.v$ but $\|u+v\| \neq \|u\|+ \|v\|$ is where $v =-u$ . If either of $u$ or $v$ is zero, the statement is trivially true. Thus we assume that both are nonzero, in which case the condition that $u\in \mathbb C.v$ is equivalent to the condition $v \in \mathbb C.u$ . Now suppose that $\|u+v\| = \|u\|+\|v\|$ . Squaring and expanding out, using the fact that $\|u\|^2= \langle u,u\rangle$ , we find: $$ 2\|u\|\|v\| = 2 \operatorname{Re}\langle u,v \rangle \le 2|\langle u,v \rangle|. $$ Now by the case of equality in the Cauchy-Schwarz inequality it follows that $u \in\mathbb C.v$ . If $v =\lambda.u$ where $\lambda\in \mathbb C^\times$ , then $\|\lambda u+ u\| =\|(\lambda+1)u\| = |\lambda+1|\|u\|$ and $\|u\|+\|v\| = (|\lambda|+1)\|u\|$ . Thus we must have $|\lambda+1| = |\lambda|+1$ , which (by
|
|linear-algebra|
| 0
|
About an integral from MIT Integration Bee 2024
|
Good evening, I was interested in the third integral from the finals of the MIT Integration Bee 2024 : $$I = \int_{-\infty}^{\infty} \frac{1}{x^4+x^3+x^2+x+1} \hspace{0.1cm} \mathrm{d}x$$ One way to solve this is to identify that the denominator is the fifth cyclotomic polynomial, so we can write : $$I = \displaystyle\int_{-\infty}^{\infty} \frac{1-x}{(1-x)(x^4+x^3+x^2+x+1)} \hspace{0.1cm} \mathrm{d}x = \displaystyle\int_{-\infty}^{\infty} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x$$ Then, by Chasles : $$I = \displaystyle\int_{-\infty}^{0} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x + \displaystyle\int_{0}^{\infty} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x$$ Let $x\to -x$ in the first integral, we obtain : $$I = \displaystyle\int_{0}^{\infty} \frac{1+x}{1+x^5} \hspace{0.1cm} \mathrm{d}x + \displaystyle\int_{0}^{\infty} \frac{1-x}{1-x^5} \hspace{0.1cm} \mathrm{d}x$$ And we use these two identities : $$ \displaystyle\int_{0}^{\infty} \frac{x^{s-1}}{1+x^k} \hspace{0.1cm} \mathrm{d}x
|
For the question (2), we can use power series after splitting the interval into two. $$ \begin{aligned} (PV)\int_0^{\infty} \frac{x^{s-1}}{1-x^k} d x&=\int_0^1 \frac{x^{s-1}}{1-x^k} d x+\int_1^{\infty} \frac{x^{s-1}}{1-x^k} d x \\ = & \int_0^1 \frac{x^{s-1}}{1-x^k} d x+\int_0^1 \frac{\frac{1}{x^{s-1}}}{1-\frac{1}{x^k}} \frac{d x}{x^2} \quad \textrm{ via } x\mapsto \frac{1}{x} \\ = & \int_0^1 \frac{x^{s-1}-x^{k-s-1}}{1-x^k} d x \\ = & \sum_{n=0}^{\infty} \int_0^1\left(x^{s-1}-x^{k-s-1}\right) x^{n k} d x \\ = & \sum_{n=0}^{\infty}\left(\frac{1}{n k+s}-\frac{1}{(n+1) k-s}\right) \\ = & \frac{1}{k} \sum_{n=0}^{\infty}\left(\frac{1}{n+\frac{s}{k}}+\frac{1}{-n-1+\frac{s}{k}}\right)\\=& \frac{1}{k} \sum_{n \in \mathbb{Z}} \frac{1}{n+\frac{s}{k}}\\=& \frac{\pi}{k} \cot \left(\frac{\pi s}{k}\right) \end{aligned} $$ where the last answer uses the identity: $\pi \cot (\pi z)=\sum_{n\in \mathbb{Z}} \frac{1}{z+n}$ , where $n \in \mathbb{Z}.$
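The closed form can be double-checked by integrating the folded form from the derivation numerically (a sketch assuming SciPy is available; the choice $s=1$, $k=5$ matches the original problem):

```python
import math
from scipy.integrate import quad

# PV ∫_0^∞ x^{s-1}/(1-x^k) dx = ∫_0^1 (x^{s-1} - x^{k-s-1})/(1-x^k) dx
#                             = (π/k) cot(π s/k), checked for s = 1, k = 5.
s, k = 1, 5
folded = lambda x: (x**(s - 1) - x**(k - s - 1)) / (1 - x**k)
numeric, _ = quad(folded, 0, 1)   # integrand has a removable singularity at 1
closed = (math.pi / k) / math.tan(math.pi * s / k)
print(numeric, closed)   # both ≈ 0.8648
```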
|
|integration|
| 0
|
Proof that cardinality of inverse image will always be the same for a particular group homomorphism
|
Let $(G, \star)$ and $(H, \blacksquare)$ be groups, and let $\varphi:G\rightarrow H$ be a group homomorphism. Let's also assume $\varphi$ is surjective. For any $h\in H$ , we define $\varphi ^{-1}(h) = \{g\in G|\varphi (g)=h\}$ . How can I prove that for $h_1,h_2\in H$ , $|\varphi ^{-1}(h_1)|=|\varphi ^{-1}(h_2)|$ ? That is, the number of elements of $G$ that $\varphi$ maps to $h_1$ is the same as to $h_2$ .
|
Step 1) Define $Ker_{\phi} =\{g\in G | \phi(g)= e_H \}$ ; this is the "kernel" of the map $\phi$ . Step 2) For $h \in H$ , since $\phi$ is surjective there exists $g_h \in G$ such that $\phi(g_h)=h$ ; now let $w \in G$ be "another" element in $G$ such that $\phi(w)=h$ . We then have: $$\phi((g_h)^{-1}w)=\phi(g_h^{-1})\phi(w)=\phi(g_h)^{-1}h=h^{-1} h = e_H$$ hence $(g_h)^{-1}w \in Ker_{\phi}$ . Step 3) If $u,v \in G$ are such that $u^{-1} v \in Ker_{\phi}$ then $\phi(u)=\phi(u)e_H=\phi(u)\phi(u^{-1}v)=\phi(u(u^{-1}v))=\phi(v)$ . Step 4) Let $h_1, h_2 \in H$ be given; there exist $g_1, g_2$ such that $\phi(g_1)=h_1, \phi(g_2)=h_2$ . Now define the function $F: \phi^{-1}(h_1) \rightarrow \phi^{-1}(h_2)$ as follows: $$F(g)=(g_2)(g_1^{-1})g$$ With what was said above you can clearly see that $F$ is injective and surjective as well, hence we must have: $$|\phi^{-1}(h_1)|=|\phi^{-1}(h_2)|$$
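A toy finite example makes the fiber-size statement concrete (a sketch with hypothetical groups: the additive groups $\mathbb Z_{12}$ and $\mathbb Z_4$ and the reduction map, which is a surjective homomorphism):

```python
# phi : Z_12 -> Z_4, phi(g) = g mod 4 (a homomorphism since 4 divides 12).
# Every fiber should have exactly |ker(phi)| = 3 elements.
G = list(range(12))
phi = lambda g: g % 4

kernel = [g for g in G if phi(g) == 0]
fiber_sizes = [len([g for g in G if phi(g) == h]) for h in range(4)]
print(kernel, fiber_sizes)   # [0, 4, 8] [3, 3, 3, 3]
```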
|
|group-theory|group-homomorphism|
| 1
|
Prove that Wallis' product and Euler's formula directly imply that $(-1/2)!=\sqrt{\pi}$
|
(This occurred to me recently, and I was pretty sure that it was true, so I was pleased that it really was. This has almost certainly been published many times before, but I didn't see it in either of the Wikipedia articles on the Wallis product and Euler's limit formula for factorial so I thought that I would propose it here.) Euler's formula for general factorial is $$z! =\lim_{n \to \infty} \frac{n!n^z}{\displaystyle\prod_{k=1}^n (z+k)} . $$ Wallis' product is $$\frac{\pi}{2} =\prod_{k=1}^{\infty} \frac{4k^2}{4k^2-1} $$ Prove that Wallis' product and Euler's formula directly imply that $(-1/2)! =\sqrt{\pi} $ . I'll post my answer in a few days if no one else does.
|
Here is my answer. There is my usual excruciating detail since, as always, I wrote nothing down on paper but directly entered this into MacDown, my MathJax editor of choice. Euler's formula for general factorial is $z! =\lim_{n \to \infty} \dfrac{n!n^z}{\prod_{k=1}^n (z+k)} $ . Wallis' product is $\dfrac{\pi}{2} =\prod_{k=1}^{\infty} \dfrac{4k^2}{4k^2-1} $ I first tried to put Wallis' formula into a form that I hoped could match with Euler's formula for $(-1/2)!$ . $\begin{array}\\ \dfrac{\pi}{2} &=\prod_{k=1}^{\infty} \dfrac{4k^2}{4k^2-1}\\ &=\prod_{k=1}^{\infty} \dfrac{2k}{2k-1}\dfrac{2k}{2k+1}\\ &=\lim_{n \to \infty}\prod_{k=1}^{n} \dfrac{2k}{2k-1}\dfrac{2k}{2k+1}\\ &=\lim_{n \to \infty}\prod_{k=1}^{n} \dfrac{2k}{2k-1}\prod_{k=1}^{n}\dfrac{2k}{2k+1}\\ \end{array} $ I then plugged $-1/2$ into Euler's formula, manipulated it a bit, squared that, split and regrouped terms, did some index fiddling, and, shazam, it matched! $\begin{array}\\ -\dfrac12! &=\lim_{n \to \infty} \dfrac{n!n^{-1
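As a numerical sanity check (not part of the proof), the partial Euler product at $z=-1/2$ can be evaluated directly; the cutoff $n$ below is an arbitrary choice:

```python
import math

def euler_factorial_half(n):
    """Partial Euler product at z = -1/2:
    z! ~ n! n^z / prod_{k=1}^n (z+k) = n^(-1/2) * prod_{k=1}^n k/(k - 1/2)."""
    p = 1.0
    for k in range(1, n + 1):
        p *= k / (k - 0.5)
    return p / math.sqrt(n)

approx = euler_factorial_half(100_000)   # should approach sqrt(pi) ~ 1.77245
```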
|
|limits|factorial|pi|infinite-product|
| 0
|
Calculating asymptotics of integral $B(r) = \int_0^\infty \frac{2}{\pi} \frac{k^2 \sin(k r )}{k^3 + a} dk$
|
I want to calculate the asymptotics of this integral $B(r) = \int_0^\infty \frac{2}{\pi} \frac{k ^2 \sin(k r )}{k^3 + a} dk$ for small and large values of the parameter $r$ , assuming it is positive. $a$ is also a positive constant. I was able to calculate the large- $r$ asymptotic by applying the substitution $z = k r$ and then keeping only the leading term in $r$ in the denominator. The result is $B(r) \sim -\frac{1}{r^3}$ . However, I can't find the small- $r$ expansion. Numerical calculation suggests it is linear, starting from 1: $B(r) = 1 - \beta r$ ,
|
$$B(r)=\frac{2}{\pi}\int_0^\infty \frac{k^2 \sin(k r )}{k^3 + a} dk\overset{k=a^{\frac13}t}{=}\frac{2}{\pi}\int_0^\infty \frac{t^2 \sin(a^{\frac13}rt)}{t^3 + 1} dt$$ Denoting for a while $\,a^{\frac13}r=b\ll1$ $$B(r)=\frac2\pi\int_0^\infty t^2\sin(bt)\left(\frac1{t^3+1}-\frac1{t^3}+\frac1{t^3}\right)dt=1-\frac{2b}\pi\int_0^\infty\frac{\sin(bt)}{bt}\frac{dt}{1+t^3}$$ The function $\bigg|\frac{\sin(bt)}{bt}\bigg|\frac1{1+t^3}$ is dominated by $\frac1{1+t^3}$ . Taking the limit $b\to0$ under the integral sign $$B(r)\sim1-\frac{2b}\pi\int_0^\infty\frac{dt}{1+t^3}\overset{t^3=x}{=}1-\frac{2b}{3\pi}\int_0^\infty\frac{x^{-\frac23}}{1+x}dx\overset{t=\frac1{1+x}}{=}1-\frac{2b}{3\pi}\int_0^1(1-t)^{-\frac23}t^{-\frac13}dt$$ $$=1-\frac{2b}{3\pi}\Gamma\Big(\frac13\Big)\Gamma\Big(\frac23\Big)=1-\frac{2b}{3\pi}\frac\pi{\sin\frac\pi3}=1-\frac{4b}{3\sqrt3}$$ Coming back to $a$ and $r$ (recall $b=a^{\frac13}r$ ) $$\boxed{\,\,B(r)=1-\frac{4a^{\frac13}r}{3\sqrt3}+o\big(r\big)\,\,}$$ To get the asymptotics for $b\gg1$ you can use i
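The value $\int_0^\infty \frac{dt}{1+t^3}=\frac{2\pi}{3\sqrt3}$ used in the last step can be checked numerically by folding $[1,\infty)$ onto $(0,1]$ via $t\mapsto 1/t$; the step count is an arbitrary choice:

```python
import math

# fold: int_0^inf dt/(1+t^3) = int_0^1 (1 + t)/(1 + t^3) dt
n = 100_000
h = 1.0 / n
total = 0.0
for i in range(n):
    t = (i + 0.5) * h            # midpoint rule on (0, 1)
    total += (1 + t) / (1 + t ** 3)
total *= h
closed_form = 2 * math.pi / (3 * math.sqrt(3))   # ~ 1.2092
```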
|
|real-analysis|integration|definite-integrals|asymptotics|
| 1
|
Calculating asymptotics of integral $B(r) = \int_0^\infty \frac{2}{\pi} \frac{k^2 \sin(k r )}{k^3 + a} dk$
|
I want to calculate the asymptotics of this integral $B(r) = \int_0^\infty \frac{2}{\pi} \frac{k ^2 \sin(k r )}{k^3 + a} dk$ for small and large values of the parameter $r$ , assuming it is positive. $a$ is also a positive constant. I was able to calculate the large- $r$ asymptotic by applying the substitution $z = k r$ and then keeping only the leading term in $r$ in the denominator. The result is $B(r) \sim -\frac{1}{r^3}$ . However, I can't find the small- $r$ expansion. Numerical calculation suggests it is linear, starting from 1: $B(r) = 1 - \beta r$ ,
|
Let $$ h(t) = \sin t,\quad f(t) = \frac{2}{\pi }\frac{{t^2 }}{{t^3 + a}}. $$ Then $$ \mathscr{M}B(z) = \mathscr{M}h(z)\mathscr{M}f(1 - z), $$ where $\mathscr{M}$ denotes the Mellin transform. Now $$ \mathscr{M}h(z) = \sin \left( {\frac{\pi }{2}z} \right)\Gamma (z) $$ and $$ \mathscr{M}f(1 - z) = \frac{2}{3}a^{ - z/3} \csc \left( {\frac{\pi }{3}z} \right). $$ Thus, by Mellin inversion, $$ B(r) = \frac{1}{{2\pi {\rm i}}}\frac{2}{3}\int_{c - {\rm i}\infty }^{c + {\rm i}\infty } {(a^{1/3} r)^{ - z} \Gamma (z)\sin \left( {\frac{\pi }{2}z} \right)\csc \left( {\frac{\pi }{3}z} \right){\rm d}z} , \quad 0<c<1. $$ To obtain the asymptotic behaviour of $B(r)$ as $r\to 0^+$ , we push the contour to the left through the poles at $z=0,-1,-3,-5,\ldots$ . This yields the asymptotic expansion $$ B(r) \sim 1 - \frac{4}{{3^{3/2} }}(a^{1/3} r) - \frac{{6\log (a^{1/3} r) + 6\gamma - 11}}{{18\pi }}(a^{1/3} r)^3 + \frac{1}{{10 \cdot 3^{5/2} }}(a^{1/3} r)^5 + \ldots , $$ as $r\to 0^+$ . If we push the contour to the
|
|real-analysis|integration|definite-integrals|asymptotics|
| 0
|
Determine Intersection between a Plane and a Line Rotated about an Axis of Rotation
|
Given: A line segment in 3D space which can be represented as: $\begin{pmatrix}x_0 \\y_0 \\z_0 \end{pmatrix} + t\begin{pmatrix}d_{x} \\d_{y} \\d_{z} \end{pmatrix}$ , where $0\le t\le1$ A point in 3D space about which the line segment above is to be rotated (center of rotation): $\begin{pmatrix}x_c \\y_c \\z_c \end{pmatrix}$ A unit vector representing the axis of rotation: $\begin{pmatrix}k_{x} \\k_{y} \\k_{z} \end{pmatrix}$ A finite plane in 3D space: $\begin{pmatrix}x_1 \\y_1 \\z_1 \end{pmatrix} + u\begin{pmatrix}d_{x1} \\d_{y1} \\d_{z1}\end{pmatrix}+v\begin{pmatrix}d_{x2} \\d_{y2} \\d_{z2} \end{pmatrix}$ , where $0\le u\le1$ and $0\le v\le1$ (Image: the general setup.) Problem: When the line segment is rotated about the center of rotation and rotation axis, I would like to determine the minimum amount of rotation required in order for the line segment to intersect the finite plane. Is it possible to solve this problem analytically? What I know: The Rodrigues rotational fo
|
The line segment is given as $ \ell(t) = P_0 + t \ d , 0 \le t \le 1 $ Then you have an axis of rotation specified by the unit vector $k$ , which passes through the point $C$ . And finally you have the plane which is given in the question in parametric vector form as $ p(u, v) = Q_0 + u \ d_1 + v \ d_2 $ . Using the Rodrigues' rotation matrix formula, and the given $C$ and $k$ , the image of a point $p$ under this rotation by angle $\phi$ is given by $p' = C + R(\phi) (p - C) $ where (this is the Rodrigues' rotation matrix formula) $ R(\phi) = {k k}^T + (I - {k k}^T ) \cos \phi + S_k \sin \phi $ where $S_k$ is the following skew-symmetric matrix, $S_k = \begin{bmatrix} 0 && - k_z && k_y \\ k_z && 0 && - k_x \\ -k_y && k_x && 0 \end{bmatrix} $ For convenience, we can write this as $ R = R_1 + R_2 \cos \phi + R_3 \sin \phi $ where $R_1 = {k k}^T , R_2 = (I - {k k}^T) , R_3 = S_k $ Therefore, the image of a point on the line $\ell(t)$ is the point $ \ell'(t) = C + R(\phi) (P_0 + t d - C)
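A small numeric sketch of the decomposition $R(\phi)=R_1+R_2\cos\phi+R_3\sin\phi$ above, in pure-Python 3×3 arithmetic (it checks the rotation step only, not the minimal-angle solve; the axis, center, and test point are arbitrary choices):

```python
import math

def rodrigues_terms(k):
    """R(phi) = R1 + R2*cos(phi) + R3*sin(phi), with R1 = k k^T,
    R2 = I - k k^T, R3 = S_k (the skew-symmetric cross-product matrix)."""
    kx, ky, kz = k
    R1 = [[kx*kx, kx*ky, kx*kz], [ky*kx, ky*ky, ky*kz], [kz*kx, kz*ky, kz*kz]]
    I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    R2 = [[I[i][j] - R1[i][j] for j in range(3)] for i in range(3)]
    R3 = [[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]]
    return R1, R2, R3

def rotate(p, c, k, phi):
    """p' = C + R(phi)(p - C): rotate p by angle phi about axis k through C."""
    R1, R2, R3 = rodrigues_terms(k)
    v = [p[i] - c[i] for i in range(3)]
    return [c[i] + sum((R1[i][j] + R2[i][j] * math.cos(phi)
                        + R3[i][j] * math.sin(phi)) * v[j] for j in range(3))
            for i in range(3)]

# quarter turn about the z-axis through the origin: (1, 0, 0.5) -> (0, 1, 0.5)
c, k = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
q = rotate((1.0, 0.0, 0.5), c, k, math.pi / 2)
```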
|
|geometry|plane-geometry|
| 0
|
Finding $MSE$ of optimal estimator
|
Background of the question: We know that $X$ is a continuous random variable with $P\left(-1\le X\le 1\right)=1$ and a bounded density $f_X\left(x\right)$ . We define $X(n)=\cos(\pi n X)$ , $E[X]=\mu$ and $Var[X]=\sigma ^2$ . We found in previous sections: $E\left[X\left(n\right)\right]=\int _{-1}^1f_X\left(x\right)\cos\left(\pi nx\right)dx$ , and $X(n)$ is not W.S.S. We had another random variable $Z$ , and $Y=\begin{cases}a\left|X\right|&Z\ge 0\\ b\left|X\right|&Z<0\end{cases}$ where $Z \sim N(0,1)$ and $Z$ is statistically independent of $X$ . We found: $\hat{Y}_{opt}\left(X\right)=\left|X\right|\left(\frac{a+b}{2}\right)$ $MSE\left\{\hat{Y\:}_{opt}\left(X\right)\right\}=E\left[\left(\hat{Y\:}_{opt}\left(X\right)-Y\right)^2\right]=\left(\sigma ^2+\mu ^2\right)\left(\frac{a-b}{2}\right)^2$ Showing how I reached the MSE: $E\left[\left(\hat{Y}_{opt}\left(X\right)-Y\right)^2\right]=E\left[\left(\left|X\right|\left(\frac{a+b}{2}\right)-Y\right)^2\right]=E\left[\left|X\right|^2\left(\frac{a+b}{2}\right)^2-2Y\left|X\right|\left(\frac{a+b}{2}\
|
Correct me if I am wrong, but the following claim seems to hold true. For any square-integrable even function $g(x)>0$ defined on $[-1,1]$ , its Fourier series converges in $L_2$ by the Riesz-Fischer theorem: $$\lim_{N\to\infty} \int_{-1}^1\Big(g(x) - \sum_{n=0}^Na_n \cos(n\pi x) \Big)^2dx= 0. $$ Then, let $X$ denote a random variable with a continuous density function $f_X(x)\le K$ defined on $[-1,1]$ . The boundedness of $f_X(x)$ directly leads to the following result, which hopefully should solve your problem: $$ E\left[\Big(g(X) - \sum_{n=0}^\infty a_n \cos(n\pi X) \Big)^2\right] =\lim_{N\to\infty} E\left[\Big(g(X) - \sum_{n=0}^Na_n \cos(n\pi X) \Big)^2\right]\\ =\lim_{N\to\infty} \int_{-1}^1\Big(g(x) - \sum_{n=0}^N a_n \cos(n\pi x) \Big)^2f(x)dx\leq \lim_{N\to\infty} K\int_{-1}^1\Big(g(x) - \sum_{n=0}^N a_n \cos(n\pi x) \Big)^2dx=0. $$
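The $L_2$ convergence can be spot-checked numerically. This is a sketch under assumptions of my own: $g(x)=|x|$ as the even test function, midpoint quadrature for the coefficients $a_n=\int_{-1}^1 g(x)\cos(n\pi x)\,dx$ (with $a_0$ halved, since $\int_{-1}^1\cos^2(n\pi x)\,dx=1$ for $n\ge1$ but $2$ for $n=0$), and the $L_2$ error of partial sums:

```python
import math

def quad(fn, a, b, n=20_000):
    # simple midpoint rule
    h = (b - a) / n
    return sum(fn(a + (i + 0.5) * h) for i in range(n)) * h

g = abs  # an even, square-integrable test function on [-1, 1]

def coeff(n):
    if n == 0:
        return 0.5 * quad(g, -1, 1)
    return quad(lambda x: g(x) * math.cos(n * math.pi * x), -1, 1)

def l2_error(N):
    a = [coeff(n) for n in range(N + 1)]
    s = lambda x: sum(a[n] * math.cos(n * math.pi * x) for n in range(N + 1))
    return quad(lambda x: (g(x) - s(x)) ** 2, -1, 1)

err_small, err_large = l2_error(2), l2_error(16)  # error shrinks with N
```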
|
|probability-distributions|expected-value|fourier-series|estimation|mean-square-error|
| 1
|
Suggestions for a CAS to solve $\int^{\pi}_0\frac{4\sin^2\beta\cos^3\kappa}{(\sin^2\kappa + \sin^2\beta\cos^2\kappa)^2}d\kappa$
|
I'm looking for a CAS that can solve the following integral: $$\int^{\pi}_0\frac{4\sin^2\beta\cos^3\kappa}{(\sin^2\kappa + \sin^2\beta\cos^2\kappa)^2}d\kappa$$ I've tried Wolfram Alpha and it returns a very complicated result for the indefinite integral but is unable to produce a result for the definite integral. I've also tried Sympy and it is not yielding anything (not sure if it is still running since it doesn't seem to provide any messages). Does $\sin^2\beta$ cause problems for CAS even though the differential in the integration is stated as $d\kappa$ ? Or maybe I just have to take the WA answer and work through inputting $\pi$ and $0$ . I thought I would ask here first since I have little to no experience with CAS.
|
Let $\kappa\mapsto \pi-\kappa ,$ then $$ \begin{aligned} I&=\int_\pi^0 \frac{4 \sin ^2 \beta \cos ^3(\pi-\kappa)}{\left(\sin ^2 \kappa +\sin ^2 \beta \cos ^2(\pi-\kappa)\right)^2}(-d \kappa) \\ & =-\int_0^\pi \frac{4 \sin ^2 \beta \cos ^3 \kappa}{\left(\sin ^2 \kappa +\sin ^2 \beta \cos ^2 \kappa\right)^2} d \kappa \\ & =-I \\ \therefore I&=0 \end{aligned} $$
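A quick midpoint-rule check of the symmetry argument ($\beta=0.7$ is an arbitrary test value; the denominator is bounded away from zero for such $\beta$, so the integrand is smooth):

```python
import math

beta = 0.7        # arbitrary test value (not a multiple of pi)
n = 20_000        # even, so sample points pair up symmetrically about pi/2
h = math.pi / n
total = 0.0
for i in range(n):
    k = (i + 0.5) * h
    total += 4 * math.sin(beta) ** 2 * math.cos(k) ** 3 / (
        math.sin(k) ** 2 + math.sin(beta) ** 2 * math.cos(k) ** 2) ** 2
total *= h        # should vanish by the kappa -> pi - kappa antisymmetry
```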
|
|integration|
| 0
|
Element-to-Element Expression of a Matrix Exponential?
|
Given a matrix $B = \exp(A)$ , how do we express $B_{ml}$ using $A_{ij}$ ? If $A$ is a diagonal matrix, this is very straightforward: $B_{ii} = \exp(A_{ii})$ . Can we generalize it to a real square matrix $A \in \mathbb{R}^{N \times N}$ ? Motivation: I am struggling to find an expression of $\frac{\partial B}{\partial A_{ij}} = \sum_m \sum_l \frac{\partial B_{ml}}{\partial A_{ij}}$ . Knowing how to express $B_{ml}$ using $A_{ij}$ will give me a lead. If there is any other way to achieve $\sum_m \sum_l \frac{\partial B_{ml}}{\partial A_{ij}}$ without explicitly knowing $B_{ml}$ , that would also be awesome. I've looked into some old posts like this , but I am not sure if these are applicable in my case. My lack of Lie Algebra knowledge seems to be a problem here.
|
If $A$ is not defective , then it can be decomposed as $$\eqalign{ \def\Mi{M^{-1}} \def\Mt{M^T} \def\Mit{M^{-T}} \def\k{\otimes} \def\h{\odot} \def\c{\cdot} \def\LR#1{\left(#1\right)} \def\l{\lambda} \def\Diag{\operatorname{Diag}} \def\vc{\operatorname{vec}} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} A &= M\,L\,\Mi\qquad\quad L = \Diag(\l_k) \\ }$$ and the Daleckii-Krein Theorem can be invoked to obtain the gradient $$\eqalign{ B &= f(A) \\ dB &= M\Big(R\h\big(\Mi\,dA\,M\big)\Big)\Mi \\ \grad B{A_{ij}} &= M\Big(R\h\big(\Mi\,E_{ij}\,M\big)\Big)\Mi \\ }$$ where $(\h)$ is the Hadamard product and the components of the $E_{ij}$ matrix are all zero except for the $(i,j)$ component which is equal to one. The components of the $R$ matrix are $$\eqalign{ R_{k\ell} \;=\; \begin{cases} {\large\frac{f(\l_k)\,-\,f(\l_\ell)}{\l_k\,-\,\l_\ell}}\qquad{\rm if}\;\l_k\ne\l_\ell \\ \\ \quad{\small f'(\l_k)}\qquad\qquad{\rm otherwise} \\ \end{cases} \\ }$$ In the current problem, the function of
|
|linear-algebra|lie-algebras|matrix-calculus|matrix-exponential|
| 1
|
About ODE $2p'q-3pq'=\lambda$
|
I need to give any possible characterization (could be geometric, algebraic or of other type) of a pair of functions $p$ and $q$ , holomorphic over $\mathbb{C}$ , and a complex number $\lambda$ satisfying the ODE $$2p'q-3pq'=\lambda$$ From an algebraic point of view, since every holomorphic function is analytic we can write $p(z) = \sum_{i \geq 0}a_i z^i$ and $q(z) = \sum_{j \geq 0}b_j z^j$ , so substituting $p$ , $q$ and their derivatives into the expression we get: $$2p'q-3pq'= \sum_{k\geq 1}\Big[\sum_{i+j = k}(2i-3j)a_ib_j\Big] z^{k-1} = \lambda$$ which clearly implies \begin{cases} 2a_1b_0 - 3a_0b_1 = \lambda\\ \sum_{i+j=k}(2i-3j)a_ib_j &= 0 \quad (\forall k \geq 2) \end{cases} This question is not just looking for an explicit solution (since it could be more difficult than expected) but rather for some information or knowledge about the set of solutions. Also, if you have some recommendations of papers or books in which similar equations are treated (maybe in fields like ODEs or Dynamic systems
|
Too long for a comment. If $p$ and $q$ are holomorphic over $\mathbb{C}$ , then $f:=pq$ also is holomorphic over $\mathbb{C}$ . Using $q=f/p$ , you can eliminate it from the equation $2p'q-3pq'=\lambda$ , thus obtaining $$ 5\frac{p'f}{p}-3f'=\lambda, \tag{1} $$ whose solution is $$ p(z)=[f(z)]^{3/5}\exp\left(\frac{\lambda}{5}\int\frac{dz}{f(z)}\right). \tag{2} $$ Now try to characterize $f$ so that both $p$ and $q$ are holomorphic over $\mathbb{C}$ . For instance, $f(z)=a\,(a\neq 0)$ is OK, as it implies $$ p(z)=c\exp\left(\frac{\lambda z}{5a}\right)\quad\text{and}\quad q(z)=\frac{a}{c}\exp\left(-\frac{\lambda z}{5a}\right)\quad(c\neq 0). \tag{3} $$ On the other hand, $f(z)=az\,(a\neq 0)$ is not OK in general, as it implies $$ p(z)=cz^{\left(\frac{3}{5}+\frac{\lambda}{5a}\right)}\quad\text{and}\quad q(z)=\frac{a}{c}z^{\left(\frac{2}{5}-\frac{\lambda}{5a}\right)}\quad(c\neq 0). \tag{4} $$ There are only two values of $a$ for which both $p$ and $q$ are holomorphic over $\mathbb{C}$ : (1)
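Solution (3) can be spot-checked numerically with finite differences over $\mathbb{C}$; the parameter values below are arbitrary (including a complex $a$, since (3) only needs $a\neq0$):

```python
import cmath

a, c, lam = 2.0 + 1.0j, 1.5, 0.7 - 0.3j     # arbitrary test parameters
p = lambda z: c * cmath.exp(lam * z / (5 * a))
q = lambda z: (a / c) * cmath.exp(-lam * z / (5 * a))

def d(fn, z, h=1e-6):
    # central difference along the real direction (valid for holomorphic fn)
    return (fn(z + h) - fn(z - h)) / (2 * h)

z0 = 0.4 + 0.9j
lhs = 2 * d(p, z0) * q(z0) - 3 * p(z0) * d(q, z0)   # should equal lambda
```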
|
|complex-analysis|ordinary-differential-equations|
| 0
|
find the strictly positive real number r in this equation with complex numbers
|
The statement of the problem: Find the numbers $r \in (0,\infty)$ for which there exists a $z \in \mathbb C$ such that $|z| = r$ and $|1+z^2|=2r$ . My approach: I tried a geometric solution, looking at circles of radius $2r$ containing such numbers, but I got stuck. Any and all proofs will be helpful. Thanks a lot!
|
A solution based on analytic geometry: Render the equations as $|z^2+1|=2|z|$ Square both sides and express the squared norms in terms of real and imaginary parts: $(x^2-y^2+1)^2+4x^2y^2=4(x^2+y^2)$ $(x^4+2x^2y^2+y^4)-(2x^2\color{blue}{+6y^2})+1=0$ Compare this with $(x^4+2x^2y^2+y^4)-(2x^2\color{blue}{+2y^2})+1=(x^2+y^2-1)^2$ and observe: $(x^2+y^2-1)^2=4y^2$ We now take square roots of both sides and discover the relationship is the union of two circles! $x^2+y^2-1=\pm2y$ The circle with the $\pm$ sign on the right side chosen as $+$ may be recast as $x^2+(y^2-2y)=1,$ and upon completing the square in the $y$ terms we find the center of the circle is at $(0,1)$ and the radius is $\sqrt2$ : $x^2+(y-1)^2=2.$ If the $\pm$ sign is chosen as $-$ we similarly get a center at $(0,-1)$ and radius also $\sqrt2$ $x^2+(y+1)^2=2.$ Taking the difference between the radius and the center-to-origin distance, both of which terms are the same for both circles, we get $\sqrt2-1$ as the minimal distanc
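A numeric spot-check of the two-circle description: sampling one of the circles, every point should satisfy $|z^2+1|=2|z|$, and $|z|$ on it should range over $[\sqrt2-1,\sqrt2+1]$ (the upper end is my inference from the same distance argument; the sample count is arbitrary):

```python
import math

# points on x^2 + (y - 1)^2 = 2, i.e. the "+" circle from the answer
samples = []
for i in range(1000):
    t = 2 * math.pi * i / 1000
    samples.append(complex(math.sqrt(2) * math.cos(t),
                           1 + math.sqrt(2) * math.sin(t)))

max_defect = max(abs(abs(z * z + 1) - 2 * abs(z)) for z in samples)
r_min = min(abs(z) for z in samples)   # expected: sqrt(2) - 1
r_max = max(abs(z) for z in samples)   # expected: sqrt(2) + 1 (inference)
```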
|
|complex-analysis|analytic-geometry|
| 1
|
Question on counting the number of triangles formed by 1999 points in a square
|
I was reading an explanation for a solution to an Olympiad problem as follows: Let $S$ be a square with side length 20 and let $M$ be the set of points formed by the vertices of $S$ and another 1999 points lying inside $S$ . Prove that there exists a triangle with vertices in $M$ and with area at most equal to $\frac 1{10}$ . In the solution it states that if you want to cover the square $S$ with triangles formed by the points in $M$ and the vertices of $S$ , you will need 4000 triangles, for which this calculation is provided: $2 \cdot (1999 + 1) = 4000$ . How was this figure obtained/what is the explanation behind this calculation?
|
For a much simpler verification, double count the angle sum. Consider all of the vertex angles of the triangles. What is their sum? Each triangle has an angle sum of $180^\circ$ . The 1999 interior points each contribute $360^\circ$ (Why?). Add in the 4 corners, which contribute a total of $360^\circ$ . Hence, there are a total of $ \frac { (1999+1) \times 360^\circ } { 180^\circ} = 4000$ triangles.
|
|geometry|contest-math|pigeonhole-principle|tessellations|
| 1
|
When finding the asymptotes of function b, is minus infinity obtained?
|
Hello. I ran into a problem while analyzing the graph of a function, namely when finding the oblique asymptote. The graph is $y=x-\ln(x+1)$ . When calculating the limit for the slope, I get the coefficient $k = 1$ . But computing the limit from the formula for the intercept $b$ , it turns out that $b$ is minus infinity. I've recomputed it many times, and I don't understand what's going on. I checked the graph using graphing calculators, and it is clearly shifted downward. Please help me!
|
Well, it can happen that the limit $\lim_{x\to \infty}{\frac{f(x)}{x}}$ for the slope is finite while the intercept's limit $\lim_{x\to \infty}{(f(x)-kx)}$ is not finite, or doesn't even exist (think about $x+\cos x$ ). To have a so-called oblique asymptote, both limits must exist and be finite. In your case, the function's derivative/slope tends to a constant as $x$ approaches infinity, but the intercept doesn't converge, thus there is no oblique asymptote. In fact, if you graph a line $y = x + q$ , you will see that, however low the intercept $q$ is, it will always intersect the function.
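The two limits can be tabulated numerically for $f(x)=x-\ln(x+1)$ (the sample points are arbitrary choices): the slope estimate settles at 1 while the intercept estimate keeps decreasing.

```python
import math

f = lambda x: x - math.log(x + 1)
xs = (1e2, 1e4, 1e6)
slopes = [f(x) / x for x in xs]        # -> 1
intercepts = [f(x) - 1 * x for x in xs]  # = -ln(x+1), unbounded below
```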
|
|asymptotics|graphing-functions|
| 0
|
What is the formula for converting an improper fraction to a mixed number
|
There are methods for converting improper fractions to mixed numbers, but I am interested in finding a formula to which I can input the numerator and denominator of an improper fraction and get an expression of the form $$x+\frac{y}{z},$$ which is pretty much what a mixed number looks like.
|
There is no formula, because expressing the number by Euclidean division isn't "pretty much how a mixed number looks like"; it is exactly what a mixed number looks like. When you write " $1 \frac{1}{2}$ ", it is simply a shorter way to write " $1 + \frac12$ ". So to answer your question, if you have an improper fraction $\frac{x}{y}$ , the closest thing you'll get is first doing the Euclidean division of $x$ by $y$ , which will get you something of the form $x=q\cdot y+r$ , where $q$ is the quotient and $r$ the remainder. Divide both sides by $y$ and you get $\frac{x}{y} = q + \frac{r}{y}$ . Notice that you have expressed $\frac{x}{y}$ as a whole number plus a fraction, so to express $\frac{x}{y}$ as a mixed number, you simply write $q \frac{r}{y}$ .
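In code, the "formula" is exactly Euclidean division; for instance Python's `divmod` produces $q$ and $r$ in one step (the example fraction 17/5 is arbitrary):

```python
from fractions import Fraction

def to_mixed(x, y):
    """Improper fraction x/y (positive integers) -> (q, r, y) with x/y = q + r/y."""
    q, r = divmod(x, y)   # Euclidean division: x = q*y + r, 0 <= r < y
    return q, r, y

q, r, y = to_mixed(17, 5)   # 17/5 = 3 + 2/5, i.e. the mixed number 3 2/5
```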
|
|arithmetic|fractions|derivation-of-formulae|
| 0
|
Proving two local coordinate formulae of the Laplace-Beltrami operator are equivalent
|
I am trying to prove two formulations of the Laplace-Beltrami operator $\Delta$ are equivalent: $$\Delta f = \frac{1}{\sqrt{|g|}}\frac{\partial}{\partial x^i}\left(\sqrt{|g|} g^{ij} \frac{\partial f}{\partial x^j}\right) = g^{ij} \left(\frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^k_{ij}\frac{\partial f}{\partial x^k}\right) $$ Where I use the Einstein summation convention and $|g|$ refers to the determinant of the metric. I've attempted to follow steps given in the top answer to this post . However, I can't figure out how to get rid of the $\frac{\partial g_{pq}}{\partial x^i} g^{pq}$ given in the last step. Any help in proving the equivalence, or how to get rid of the $\frac{\partial g_{pq}}{\partial x^i} g^{pq}$ would be much appreciated! Thanks
|
Let me finish the calculations in the linked answer (but the last line there doesn't make sense to me). To save some typing, I will write $\partial_k = \frac{\partial}{\partial x^k}$ . We continue with the calculation from the second last line \begin{align} &\quad \frac{1}{\sqrt{\det g}}\partial_i \big(g^{ij} \sqrt{\det g}\partial_j f\big)\\ &= g^{ij}\partial_{ij} f + (\partial_i g^{ij})\partial_j f + \frac{1}{2}g^{ij}(\partial_i g_{pq})g^{pq}\partial_j f\\ &=g^{ij}\partial_{ij} f + (\partial_i g^{ik})\partial_k f + \frac{1}{2}g^{ik}(\partial_i g_{pq})g^{pq}\partial_k f\quad \text{relabel by }k\\ &=g^{ij}\partial_{ij} f - g^{ip}(\partial_i g_{pq})g^{qk}\partial_k f + \frac{1}{2}g^{ik}(\partial_i g_{pq})g^{pq}\partial_k f \quad\text{differentiate the inverse matrix}\\ &=g^{ij}\partial_{ij} f - g^{qp}(\partial_q g_{pi})g^{ik}\partial_k f + \frac{1}{2}g^{ik}(\partial_i g_{pq})g^{pq}\partial_k f \quad\text{in the second term, }i\leftrightarrow q\\ &=g^{ij}\partial_{ij} f - g^{pq} \frac{1}{
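Independently of the index computation, the two formulas can be cross-checked numerically in a concrete chart. This is a sketch under assumptions of my own: polar coordinates with metric $g=\mathrm{diag}(1,r^2)$, whose only nonzero Christoffel symbols are $\Gamma^r_{\theta\theta}=-r$ and $\Gamma^\theta_{r\theta}=\Gamma^\theta_{\theta r}=1/r$; the test function and evaluation point are arbitrary.

```python
import math

f = lambda r, t: r ** 2 * math.cos(t)   # test function; Laplacian = 3 cos(t)
h = 1e-4
r0, t0 = 1.3, 0.7

def dr(F, r, t):  return (F(r + h, t) - F(r - h, t)) / (2 * h)
def dt(F, r, t):  return (F(r, t + h) - F(r, t - h)) / (2 * h)
def drr(F, r, t): return (F(r + h, t) - 2 * F(r, t) + F(r - h, t)) / h ** 2
def dtt(F, r, t): return (F(r, t + h) - 2 * F(r, t) + F(r, t - h)) / h ** 2

# divergence form: (1/sqrt|g|) d_i (sqrt|g| g^{ij} d_j f), sqrt|g| = r
div_form = (dr(lambda r, t: r * dr(f, r, t), r0, t0) / r0
            + dtt(f, r0, t0) / r0 ** 2)
# Christoffel form: g^{ij}(d_i d_j f - Gamma^k_ij d_k f); only Gamma^r_tt = -r
chr_form = (drr(f, r0, t0)
            + (dtt(f, r0, t0) - (-r0) * dr(f, r0, t0)) / r0 ** 2)
```

Both reduce to the familiar $f_{rr}+\frac1r f_r+\frac1{r^2}f_{\theta\theta}$.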
|
|differential-geometry|manifolds|riemannian-geometry|laplacian|
| 1
|
Show that $\int_{-\infty}^\infty \frac{e^x}{e^{2x}+e^{2a}}\frac{1}{x^2+\pi^2}dx = \frac{2\pi e^{-a}}{4a^2+\pi^2}-\frac{1}{1+e^{2a}}$
|
Show that \begin{align*} \int_{-\infty}^\infty \frac{e^x}{e^{2x}+e^{2a}}\frac{1}{x^2+\pi^2}dx = \frac{2\pi e^{-a}}{4a^2+\pi^2}-\frac{1}{1+e^{2a}} \end{align*} where $a\in \mathbb{R}$ . My small try Let $\displaystyle f(z) = \frac{e^z}{(e^{2z}+e^{2a})z}$ . Let $\Gamma$ be the positively oriented rectangle in the complex plane with vertices $R-i\pi$ , $R+i\pi$ , $-R+i\pi$ and $-R-i\pi$ where $R>|a|$ . There are three first order poles of $f(z)$ lying inside $\Gamma$ at $z= 0, \frac{i\pi}{2}+a$ and $-\frac{i\pi}{2}+a$ . Then, by the residue theorem, we have \begin{align*} \int_\Gamma f(z)\; dz &= 2\pi i \left( \mathop{\text{Res}}_{z=0} \; f(z) + \mathop{\text{Res}}_{z=\frac{i\pi}{2}+a} \; f(z) + \mathop{\text{Res}}_{z=-\frac{i\pi}{2}+a} \; f(z)\right) \\ &= 2\pi i \left( \frac{1}{1+e^{2a}} - \frac{i e^{-a}}{2a+i\pi} + \frac{i e^{-a}}{2a-i\pi}\right) \\ &= 2\pi i \left( \frac{1}{1+e^{2a}} - \frac{2\pi e^{-a}}{4a^2 +\pi^2}\right) \tag{1} \end{align*}
|
Substitute $t = e^x$ , along with the shorthand $b= e^a$ \begin{align*} I= &\int_{-\infty}^\infty \frac{e^x}{e^{2x}+e^{2a}}\frac{1}{x^2+\pi^2}dx =\int_0^\infty \frac1{(t^2+b^2)(\ln^2t+\pi^2)}dt\\ =& \int_0^\infty \frac{(1+t)-\frac{2b^2}{1+b^2}(t-\frac1t) }{(t^2+b^2)(\ln^2t+\pi^2) } - \frac{\frac1{1+b^2}}{t( \ln^2t+\pi^2) }\ dt \end{align*} Then, utilize the integral expressions $$\frac{1+t}{\ln^2t+\pi^2}= \frac1\pi\int_0^1 \sin(\pi y)t^{1-y}dy$$ $$\frac{t-\frac1t}{\ln^2t+\pi^2}=\frac2\pi\int_0^1 \sin(2\pi y)t^{1-2y}dy$$ and $ \int_0^\infty \frac1{t(\ln^2t+\pi^2)}dt=1 $ to evaluate \begin{align*} I= & \int_0^\infty \frac1{t^2+b^2} \left(\frac1\pi\int_0^1 \sin\pi y \ {t^{1-y}} -\frac{2b^2\sin2\pi y\ t^{1-2y}}{1+b^2}\ dy\right)dt\\ &\>\>\>\>\> -\frac1{1+b^2}\int_0^\infty \frac1{t(\ln^2t+\pi^2)}dt\\ = & \int_0^1 \int_0^\infty \frac{\sin\pi y \ {t^{1-y}}}{\pi(t^2+b^2)} -\frac{2b^2\sin2\pi y\ t^{1-2y}}{\pi(1+b^2)(t^2+b^2)}\ dt\ dy-\frac1{1+b^2}\\ =& \int_0^1 \cos\frac{\pi y}2 b^{-y}dy - \fra
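A brute-force numeric check of the stated identity, with the arbitrary test value $a=1$ and a truncated midpoint rule (the integrand decays like $e^{-|x|}/x^2$, so the tail beyond $|x|=40$ is negligible):

```python
import math

a = 1.0   # arbitrary test value

def integrand(x):
    return math.exp(x) / (math.exp(2 * x) + math.exp(2 * a)) / (x ** 2 + math.pi ** 2)

L, n = 40.0, 400_000
h = 2 * L / n
numeric = sum(integrand(-L + (i + 0.5) * h) for i in range(n)) * h
closed = (2 * math.pi * math.exp(-a) / (4 * a ** 2 + math.pi ** 2)
          - 1 / (1 + math.exp(2 * a)))
```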
|
|calculus|integration|definite-integrals|improper-integrals|closed-form|
| 0
|