Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
$\epsilon$-$\delta$ proof, finding $\delta$
I'm trying to prove that $\dfrac{\sin^2(x)\log(x^2)}{x} \to 0$ as $x \to 0^+$. I have got to the following: $\left| \dfrac{\sin^2(x)\log(x^2)}{x} - 0 \right| = \dfrac{\sin^2(x)\,|2\log(x)|}{|x|} \leq \dfrac{|x|^2|2\log(x)|}{|x|}=|x|\,|2\log(x)|.$ I don't know how to proceed from here, or if this is the right/best way to start. I hope someone can tell me what I'm missing.
I think you can easily, changing $x$ by $\lfloor x\rfloor$ , reduce the asked case (write, if not) to a fraction $\frac{\log _{2}n}{n}, n\in \mathbb{N}$ , and let me show some technique for its estimation. Let's first consider $\frac{n}{b^n}$ , where $b>1$ , and show that for any $\varepsilon>0$ we can find such $N$ , that for any $n>N$ holds $\frac{n}{b^n}<\varepsilon$ (de facto this is the limit definition). This easily comes from the inequality $$\frac{n}{b^n} = \frac{n}{(1+(b-1))^n}=\frac{n}{1+n(b-1)+\frac{n(n+1)}{2}(b-1)^2+\cdots +(b-1)^n} < \frac{2}{(n+1)(b-1)^2}<\varepsilon.$$ Solving the extreme right inequality we obtain $n>\frac{2}{(b-1)^2\varepsilon}-1$ , so for $N=\left\lfloor \frac{2}{(b-1)^2\varepsilon}-1\right\rfloor +1$ we get the desired statement. Come back to the logarithm. Using what has just been proven and taking $\varepsilon =1$ , we can get such $N$ , that for any $n>N$ holds $\frac{n}{b^n}<1$ . Let's consider any $\varepsilon_1>0$ . We can take $b=2^{\varepsilon_1}$ and have $\frac{n}{2^{n\varepsilon_1}}<1$ , i.e. $\frac{\log_2 n}{n}<\varepsilon_1$ . It is same wi
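For completeness, here is a sketch of one way to finish the OP's direct approach (the only extra ingredient is the elementary inequality $\ln\frac1x\le \frac{2}{\sqrt x}$ for $0<x<1$; I take $\log$ to be the natural logarithm):
$$|x|\,|2\log(x)| \;=\; 2x\ln\tfrac1x \;\le\; 2x\cdot\frac{2}{\sqrt x} \;=\; 4\sqrt x \qquad (0<x<1),$$
so given $\varepsilon>0$ one may take $\delta=\min\{1,\varepsilon^2/16\}$: for $0<x<\delta$ this gives $4\sqrt x<\varepsilon$.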
|real-analysis|epsilon-delta|
0
Topological definition of Spin$(p,q)$?
In short, How can we define Spin(p, q) without referencing Clifford algebras? The answer should be something like "Spin $(p, q)$ is the unique double cover of SO $^+(p, q)$ such that ...". Wikipedia seems to think we can omit the "such that ...", claiming : Up to group isomorphism, SO $(V, Q)$ has a unique connected double cover, the spin group Spin $(V, Q)$ . [Here $Q$ is a nondegenerate quadratic form over a real or complex vector space $V$ , so we can equivalently replace SO $(V, Q)$ with SO $^+(p,q)$ . Wikipedia also omits the $^+$ but says that they are referring to the identity component.] But the above claim is false . One alternative approach is to define the spin groups by explicitly constructing them with Clifford algebras, but I'd like to know of a purely topological definition, as described above. I originally asked this question as a footnote to this one , then decided to move it here. Along with the definition requested above, I'd also appreciate some discussion of why th
Almost nobody does this. There is a discussion in the Wikipedia article on spinor groups but it has too many mistakes and too few references. The only solid math reference I know is in Chapter 5 (freely available from author's webpage here ) in Varadarajan, V. S. , Supersymmetry for mathematicians: an introduction., Courant Lecture Notes in Mathematics 11. Providence, RI: American Mathematical Society (AMS); New York, NY: Courant Institute of Mathematical Sciences (ISBN 0-8218-3574-2/pbk). vi, 300 p. (2004). ZBL1142.58009 . It is useful if you read this answer in conjunction with my answer here . To describe $Spin(p,q)$ as a 2-fold cover of $SO^+(p,q)$ ( $p>1, q>1$ ) one has to look at the maximal compact subgroups since they carry all the homotopy information. The group $SO^+(p,q)$ has maximal compact subgroup $SO(p)\times SO(q)$ and $$ \pi_1(SO^+(p,q))\cong \pi_1(SO(p)\times SO(q))\cong H_1\times H_2, $$ where $H_1, H_2$ are cyclic groups (either infinite cyclic or ${\mathbb Z}_2$ ).
|algebraic-topology|lie-groups|topological-groups|covering-spaces|spin-geometry|
1
What is the difference in the wording of these questions?
Question 1: An account with an initial amount B earns compound interest at an annual effective interest rate $i$ . The interest in the third year is $426$ and the discount in the seventh year is $812$ . Find $i$ . The work for question 1 is simple: $$\dfrac{B(1+i)^7}{B(1+i)^3} = \dfrac{812}{426}$$ This leads you to $i = 17.5 \%$ Question 2: The amount of interest on X for two years is $320$ . The amount of discount on X for one year is $148$ . Find the annual effective interest rate $i$ and the value of X. The work for question 2 is much more complex: it involves setting up $X(1+i)^2 - X = 320$ and $X - \dfrac{X}{1+i} = 148$ , then solving for X and i, leading to $i = .053$ and $X = 2934.48$ . What is the difference in wording between these two questions? How would I know which to do on an exam?
I don't see a difference in the questions either. But I start in a different way on the first question: "The interest in the third year is 426" gives me the equation $B(1+i)^3-B(1+i)^2=426$ , and "... the discount in the seventh year is 812" mathematically is $B(1+i)^7-B(1+i)^6=812$ . Factoring out, \begin{align} & B(1+i)^2\cdot ((1+i)-1)=426 \\ & B(1+i)^6\cdot ((1+i)-1)=812\\ & \\ &B(1+i)^2\cdot i=426 \\& B(1+i)^6\cdot i=812\\ & \\ & \textrm{Dividing one equation by the other: } \\ & (1+i)^4=\frac{812}{426} \end{align} So basically the set-up for the first question is not different from the set-up for the second question, apart from the term "discount".
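For what it's worth, both set-ups can be checked numerically (a sketch; the numbers come from the questions above, and the quadratic in the second part comes from dividing the two equations to eliminate $X$):

```python
from math import sqrt

# Question 1: B(1+i)^2 * i = 426 and B(1+i)^6 * i = 812 give (1+i)^4 = 812/426.
i1 = (812 / 426) ** 0.25 - 1
print(round(i1, 4))  # ~0.175, i.e. 17.5%

# Question 2: X(1+i)^2 - X = 320 and X - X/(1+i) = 148.
# Dividing the equations eliminates X: (2+i)(1+i) = 320/148,
# i.e. i^2 + 3i + (2 - 320/148) = 0.
c = 2 - 320 / 148
i2 = (-3 + sqrt(9 - 4 * c)) / 2
X = 148 * (1 + i2) / i2
print(round(i2, 4), round(X, 2))  # close to 0.053 and 2934.5
```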
|actuarial-science|compound-interest|
0
Prove that an $s$ element subset of $1,2,...,n$ must have two distinct subsets with the same sum.
A number of problems on math.stackexchange have taken the form Prove that an $s$ element subset of $1,2,...,n$ must have two distinct subsets with the same sum. (For example discrete math about Pigeonhole Principle ) Suppose that the elements of the subset are $a_1<a_2<\cdots<a_s$ . Then the straightforward observations that $\,\,$ there are $2^s-1$ non-empty subsets of the $s$ element subset $\,\,$ the possible sums range from $a_1$ to at most $a_1+\sum_{i=n-s+2}^n i$ prove such a result provided $$2^s-1> \frac{(2n-s+2)(s-1)}{2}+1$$ or, equivalently, $$n<\frac{2^s-2}{s-1}+\frac{s-2}{2}.$$ This is a general result, albeit a rather weak one which can be greatly improved. I am interested in what general results can be proved for this type of problem. EXAMPLE $s=9$ . The above result gives $n<67.25$ , i.e. $n\le67$ . The result of @CalvinLin (with $a=2,b=7$ ) improves this to $73$ . However, this bound can be greatly improved (one general method for doing this is given as an answer). Are there other methods which are even more effective for such a p
The optimal $n$ form the sequence https://oeis.org/A276661 . The links there provide upper bounds on the maximum such $n$ ; you are looking for the minimum.
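The pigeonhole bound can be sanity-checked by brute force for small parameters (a sketch; `has_sum_collision` is just a helper name of mine):

```python
from itertools import combinations

def has_sum_collision(A):
    """True if two distinct non-empty subsets of A share a sum."""
    seen = set()
    for k in range(1, len(A) + 1):
        for sub in combinations(A, k):
            t = sum(sub)
            if t in seen:
                return True
            seen.add(t)
    return False

s = 4
bound = (2**s - 2) / (s - 1) + (s - 2) / 2   # = 5.67, so n <= 5 is guaranteed
n = int(bound)
# Every 4-element subset of {1,...,5} has two distinct subsets with equal sum:
print(all(has_sum_collision(A) for A in combinations(range(1, n + 1), s)))
```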
|discrete-mathematics|pigeonhole-principle|
0
Finding the radius of the circle intersection between two spheres
Suppose I have two spheres $S_1$ and $S_2$ , and suppose they have a non-trivial intersection, meaning it's not a point. Then their intersection forms a circle. Assume I have the centers of each sphere, $P_1$ and $P_2$ , and that their radii are $r_1$ and $r_2$ respectively. Consider this diagram: This is a cross-section view of the spheres along their axis of intersection. Importantly, I wish to find the radius. A similar question is asked here: What is the easiest way to find the radius and center of the circle of intersection between two spheres? The author suggests that we can get to the solution for the radius via $d_1^2 + d_2^2 = d^2$ , but I don't understand how this is possible because, in this diagram, when you look at $d_{1}, d_{2}, d$ they are collinear, so I don't see how the Pythagorean theorem would apply, since that only applies to right triangles. Can someone explain why the formula is correct? EDIT: it seems like this thread is relevant https://mathworld.wolfram.com/Sphere-
HINT: The area of the sectioned triangle shown can be expressed in two ways. The three sides of the top half triangle are $r_1, r_2, d$ , and its height on the side $d$ is $r$ , so with $s=\frac{r_1+r_2+d}{2}$ , $$ \frac 12 d\, r = (s(s-d)(s-r_1)(s-r_2))^{\frac12}.$$
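For reference, here is the hint carried out numerically, side by side with the foot-of-perpendicular route from the linked question (a sketch; the function names are mine):

```python
from math import sqrt

def circle_radius_pythagoras(r1, r2, d):
    # x = distance from P1 to the plane of the intersection circle,
    # then Pythagoras inside sphere 1 gives the circle radius.
    x = (d * d + r1 * r1 - r2 * r2) / (2 * d)
    return sqrt(r1 * r1 - x * x)

def circle_radius_heron(r1, r2, d):
    # Area of the triangle with sides r1, r2, d equals (1/2) * d * r.
    s = (r1 + r2 + d) / 2
    area = sqrt(s * (s - r1) * (s - r2) * (s - d))
    return 2 * area / d

print(circle_radius_pythagoras(5, 5, 6))  # 4.0
print(circle_radius_heron(5, 5, 6))       # 4.0
```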
|geometry|
0
Bound on expected norm of the difference between the sample mean $\bar{X_n}$ and population mean $\mu$ as a function of the sample size $n$ for LLN?
My question is motivated by this question: Does law of large numbers converge in $L^1$? which asks about the convergence in $L^1$ -norm of the sample mean $\bar{X_n}$ to the population mean $\mu.$ I looked at the answers and comments and I understood that the answer was in the affirmative, and the proof uses the fact that the sample means $\bar{X_n}$ are uniformly integrable (UI). The two steps a) and b) outlined in the previous link follow immediately using: the triangle inequality for norms, exchanging finite sums and integrals, then using that $X_i\sim_{iid}X,$ and finally $\mathbb{E}\|X-\mu\|<\infty.$ While the use of UI does indirectly show that the Law of Large Numbers (LLN) is true in $L^1,$ I still wonder if there's a bound like the one below, where $\{X_i\}\sim_{iid} X: \Omega \to \mathbb{R}^d, \bar{X_n}:=\frac{1}{n}\sum_{i=1}^{n}X_i, \mu:=\mathbb{E}[X]:$ $$\mathbb{E}\|\bar{X_n}-\mu\|\le C_X \frac{1}{\phi(n)}, \quad \phi(n)\to \infty, \ n \to \infty?$$ Here $C_X$ is a constant depending only upon
Assuming that $X \in L^2$ then by Jensen's inequality and the fact that $\sqrt{x}$ is concave we have $$ E(|\bar X_n - \mu|) = E\left(\sqrt{(\bar X_n - \mu)^2}\right) \le \sqrt{\text{Var}(\bar X_n)} = \frac{\sigma}{\sqrt{n}}. $$ The $\sqrt n$ rate is sharp, as it is obtained when the $X_i$ 's are iid from a normal distribution. As mentioned here , my belief is that the rate can be arbitrarily poor for $X \in L^1$ , and in particular the density $f(x) \propto 1 / |x|^{2 + \epsilon} I(|x| > 1)$ seems to have a rate $n^{-\psi(\epsilon)}$ for some $\psi(\epsilon) \to 0$ as $\epsilon \to 0$ (I did not attempt to determine what $\psi(\epsilon)$ is).
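A quick Monte Carlo illustration of the $\sigma/\sqrt n$ bound (a sketch; standard normal $X$, so $\sigma=1$ and the theoretical value is $\mathbb E|\bar X_n| = \sqrt{2/\pi}/\sqrt n \approx 0.0798$ for $n=100$):

```python
import random

random.seed(0)
n, reps = 100, 2000
total = 0.0
for _ in range(reps):
    # sample mean of n iid standard normals (mu = 0)
    xbar = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
    total += abs(xbar)
l1_error = total / reps
# Theory: E|Xbar_n - mu| = sqrt(2/pi)/sqrt(n) ~ 0.0798 <= sigma/sqrt(n) = 0.1
print(l1_error)
```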
|probability|probability-theory|probability-limit-theorems|law-of-large-numbers|rate-of-convergence|
1
In $\mathbb Z_3[x] / \langle x^2+1\rangle$, why is $2x^2 + 2x + 1 + \langle x^2+1\rangle = 2x + 2 + \langle x^2+1\rangle$?
In $\mathbb Z_3[x]/\langle x^2+1\rangle$ , why is $(2x^2 + 2x + 1 + \langle x^2+1\rangle) = (2x + 2 + \langle x^2+1\rangle)$ ? I know $\langle x^2+1\rangle$ contains $2x^2+2$ , but I'm confused about how the equality follows.
You're working modulo two things at once: mod 3 and mod $x^2+1$ . Replacing a $2$ with a $-1$ which is allowed because we're working mod 3, we have $$2x^2+2x+1=-x^2+2x+1.$$ Adding $x^2+1$ which is allowed because we're working mod $x^2+1$ , we have $$-x^2+2x+1=2x+2$$ and we're done.
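The two reductions can be mechanized (a sketch; `reduce_poly` is a made-up helper that reduces coefficients mod $3$ and replaces $x^2$ by $-1$ until the degree is below $2$):

```python
def reduce_poly(coeffs, p=3):
    """coeffs[i] is the coefficient of x^i; reduce mod p and mod x^2+1."""
    c = list(coeffs)
    for i in range(len(c) - 1, 1, -1):
        c[i - 2] -= c[i]   # x^i = x^(i-2) * x^2, and x^2 is congruent to -1
        c[i] = 0
    return [c[0] % p, c[1] % p]

# 2x^2 + 2x + 1  ->  coefficient list [1, 2, 2]
print(reduce_poly([1, 2, 2]))  # [2, 2], i.e. 2x + 2
```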
|abstract-algebra|polynomials|ring-theory|field-theory|ideals|
1
Is $AA_1$ the bisector of $\angle A_2AA_4$?
Let $ABC$ be a triangle inscribed in the circle $\Gamma$ . The bisector of $\angle BAC$ intersects $BC$ in $A'$ and $\Gamma$ in $A_1$ . Let $AA_2\perp BC, A_2\in BC$ and $A_1A_2\cap \Gamma=\{A_1,A_3\}$ , $A_3A'\cap\Gamma=\{A_3,A_4\}$ . Is it possible for $AA_1$ to be the bisector of $\angle A_2AA_4$ ?
This answer is intended to be a pure geometrical one. The first observation is simple: it is equivalent to show that $AA_4$ is the diameter (you may need to work with some angles to see this!). This is further equivalent to the fact that $A,A',A_2,A_3$ are on the same circle, so it suffices to show the product $\overline{A_1A_2}\cdot\overline{A_1A_3}=\overline{A_1A'}\cdot\overline{A_1A}$ holds. This suggests to find similar triangles. By again working with angles, we notice that the triangles $\Delta A_1BA'\sim\Delta A_1AB$ therefore $(\overline{A_1B})^2=\overline{A_1A'}\cdot\overline{A_1A}$ . Similarly, $\Delta A_1BA_2\sim\Delta A_1A_3B$ therefore $(\overline{A_1B})^2=\overline{A_1A_2}\cdot\overline{A_1A_3}$ , and the desired result follows.
|geometry|
1
There are 12 students who form six groups to do a project each week, forming the groups as they wish. Prove the following.
The full question is, There are 12 students in Mr. Fat's combinatorics class. At the beginning of each week, Mr. Fat assigns a project to his students. The students form six groups. Each group works on the project independently and submits the work at the end of the week. Each week, the students can form the groups as they wish. Prove that, regardless of the way the students choose their partners, there are always two students such that there are at least five other students who have all worked with both of them or have worked with neither of them. (A student cannot work by himself.) The question is from the book "A Path to Combinatorics." The solution is as follows: This problem investigates the working partnership between a student and a pair of students. Thus, we let $A = \{s_1, s_2, \ldots , s_{12}\}$ denote the set of all students, and let $B = \{(s_i, s_j) \mid 1 \le i < j \le 12\}$ denote the set of all pairs of students. Then $|B| = 66$ . We say that $s_i$ and $(s_j, s_k)$ are connected if $(s_i, s_j)$ , and
Prove that, regardless of the way the students choose their partners, there are always two students such that there are at least five other students who have all worked with both of them or have worked with neither of them. First, it's important to understand that saying a student has worked with both students of a pair, or with neither of them, is the same as saying that the student is not connected to the pair. Now, there are a total of $12$ students. If we remove the $2$ students $(s_j, s_k)$ being considered, $10$ remain. Of these ten, we want at least five who are not connected to those two students. That leaves at most $10-5 = 5$ others, meaning the pair $(s_j, s_k)$ can be connected to a maximum of $5$ students. Again, the question states that there must always be two students, i.e. one pair in $B$ , such that the pair is not connected to at least five students. In other words, there must always be one pair that is connected to
|combinatorics|fubini-tonelli-theorems|
0
$\left( 1 - |\alpha| \right)^{2} \leq |1 - 2\alpha\cos(x) + \alpha^{2}| \leq \left( 1 + |\alpha| \right)^{2}$ with $\alpha \in \mathbb{D}_{1}$
I'm trying to prove the following inequality: \begin{equation} \left( 1 - |\alpha| \right)^{2} \leq |1 - 2\alpha\cos(x) + \alpha^{2}| \leq \left( 1 + |\alpha| \right)^{2} \;\; \forall x \in (0, \pi), \end{equation} where $ \alpha \in \mathbb{C}$ , with $|\alpha|<1$ . When $\alpha$ is real, the inequality is easy to establish, but for complex $\alpha$ I can't prove it. I graphed the function $f(x) = |1 - 2\alpha\cos(x) + \alpha^{2}|$ , and the inequality seems to be true.
For $\alpha\neq0$ , express $\alpha$ in polar coordinates using Euler's formula, i.e., $\alpha=|\alpha|e^{i\theta}=|\alpha|(\cos\theta+i\sin\theta)$ . The triangle inequality yields $$\big|1-|\alpha|\big|\leq|1-\alpha|\leq1+|\alpha|$$ Observe that $|1-\alpha|^2=1-2|\alpha|\cos\theta+|\alpha|^2$
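A way to finish from here (a sketch): the quadratic factors over $\mathbb C$,
$$1-2\alpha\cos(x)+\alpha^2=(1-\alpha e^{ix})(1-\alpha e^{-ix}),$$
and the triangle inequality gives $1-|\alpha|\le|1-\alpha e^{\pm ix}|\le 1+|\alpha|$; multiplying the two factors yields
$$(1-|\alpha|)^2\le|1-2\alpha\cos(x)+\alpha^2|\le(1+|\alpha|)^2.$$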
|complex-analysis|trigonometry|
0
Trace of product of three matrices
I have an expression of the form \begin{equation} i\operatorname{tr}(ABC - B^\dagger AC), \end{equation} where $A$ and $C$ are Hermitian matrices, but $B$ is not. $B-B^\dagger := i D$ , where $D$ is also a Hermitian matrix. It can be seen that the above is real by cyclic permutation of the second term inside the trace, $B^\dagger AC \to CB^\dagger A$ : $$ i\operatorname{tr}(ABC - CB^\dagger A) = i \underbrace{\operatorname{tr}(ABC - (ABC)^\dagger)}_{2 i \operatorname{Im} (\operatorname{tr}(ABC))}=-2\operatorname{Im}(\operatorname{tr}(ABC))\in\mathbb{R}. $$ However, for reasons concerning the specific form of the matrices, I would like to rewrite the first expression like so: \begin{align} i\operatorname{tr}(ABC - B^\dagger AC) &= i\operatorname{tr}(ABC - B^\dagger CA + B^\dagger \left[C,A\right])\\ &= i \operatorname{tr}(ABC - A B^\dagger C) - i\operatorname{tr}(B^\dagger \left[A,C\right])\\ &= - \operatorname{tr}(ADC) - i \operatorname{tr}(B^\dagger \left[A,C\right]), \end{align} wher
The Hermitian conjugate is the composition of the complex conjugate and the transpose $$\eqalign{ \def\d{\dagger} &Z^\d \;\equiv\; (Z^*)^T \;\equiv\; (Z^T)^* \\ }$$ Hermitian matrices satisfy $$\eqalign{ &A^\d = A \quad\implies\quad A^T=A^* \\ &C^\d = C \quad\implies\quad C^T=C^* \\ }$$ The double-dot product $(:)$ is a convenient notation for the trace and has these properties $$\eqalign{ \def\a{\alpha} \def\b{\beta} \def\l{\lambda} \def\tr{\operatorname{tr}} X:Y \;&\equiv\; \sum_{i=1}^m\sum_{k=1}^n X_{ik}Y_{ik} \;\equiv\; \tr(X^TY) \\ X:Y &= Y:X \;=\; Y^T:X^T \\ PQ:Y &= P:(YQ^T) \;=\; Q:(P^TY) \\ }$$ Use the above notations to analyze the first expression $$\eqalign{ &\tr(ABC) = AB:C^T = B:A^TC^T = B:(AC)^* &\equiv \l \\ &\tr(B^\d AC) = B^*:AC = \Big(B:(AC)^*\Big)^* &\equiv \l^* \\ &\l \;\equiv\; \a+i\b \quad\iff\quad \l^* = \a-i\b \\ &\therefore \;i\tr(ABC-B^\d AC) \;=\; i(\l-\l^*) \;\equiv\; -2\b \\ }$$ and (the negative of) the second $$\eqalign{ &\tr(ADC) = D^T:CA = D:(AC)^* = iB
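A quick numerical sanity check of the rewriting (a sketch; numpy assumed, matrices random, $D=(B-B^\dagger)/i$ as in the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def herm(M):
    """Hermitian part of M."""
    return (M + M.conj().T) / 2

A = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
C = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D = (B - B.conj().T) / 1j        # Hermitian by construction
Bd = B.conj().T

lhs = 1j * np.trace(A @ B @ C - Bd @ A @ C)
rhs = -np.trace(A @ D @ C) - 1j * np.trace(Bd @ (A @ C - C @ A))
print(np.allclose(lhs, rhs))  # True; also, lhs is real
```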
|matrices|trace|hermitian-matrices|
1
Why are FFT results different from theory and how to eliminate the difference?
I am working with time series data, and applying FFT to calculate the power spectrum. The data is something like this: $y = \sum_{k=1}^{5} \cos(10 k x)$ All the peaks in the FFT output should have identical magnitudes, similar to the pattern shown in the graph below: However, the FFT output I am getting is different, as shown in the graph below: The magnitudes of the peaks in my data slightly differ. Increasing the sampling rate or using a Hamming window reduces but doesn't fully eliminate these differences. How can I completely remove these differences?

import numpy as np
import scipy.signal
import matplotlib.pyplot as plt

N = 1000
t = np.linspace(0, 2*np.pi, N)
y = np.zeros(N)
for freq in [10, 20, 30, 40, 50]:
    y = y + np.cos(freq * t)
h = scipy.signal.windows.hamming(N)
yfft = np.abs(np.fft.fft(y*h) / N)**2
xfft = np.hstack([np.arange(0, N//2), np.arange(-(N//2), 0)])
d = np.array(sorted(zip(xfft, yfft))).reshape(-1, 2)
xfft = d[:, 0]
yfft = d[:, 1]
plt.plot(xfft, yfft, color="red")
plt.show()
for freq in [10, 20, 30, 40, 50]:
    print(f"freq={freq} magnitude={yfft[xfft==freq][0]}")
The ideal Fourier transform here would be the sum of 10 Dirac delta functions that has spikes with infinite peaks. It’s not surprising you didn’t get infinite peaks in practice (because that’s impossible), and the width (and so height) of your peaks depends on the specific implementation details and precision of your FFT. If you integrate your FFT’s individual peaks, you’ll probably get totals which are close to each other. If you really want results that match theory closely, use finite functions. I.e. the (unnormalized) Fourier transform of a limited height pulse is $\sin(x)/x$ , so you can use something like $\sum_{k=1}^5 (\sin(x)/x)\cos(10kx)$ which should give spikes with a mostly consistent width/height, but you’ll still get some small differences. You can bound these differences using various theorems related to sampling error in signal analysis.
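One concrete detail worth checking in the posted snippet (an observation of mine, not part of the answer above): `np.linspace(0, 2*np.pi, N)` includes both endpoints, so the sampled cosines are not exactly periodic over the window. Sampling with `endpoint=False` puts every component on an exact DFT bin, and the peak magnitudes then agree to floating-point precision:

```python
import numpy as np

N = 1000
t = np.linspace(0, 2 * np.pi, N, endpoint=False)  # periodic sampling grid
y = sum(np.cos(k * t) for k in [10, 20, 30, 40, 50])
power = np.abs(np.fft.fft(y) / N) ** 2
peaks = [power[k] for k in [10, 20, 30, 40, 50]]
print(peaks)  # each peak is (1/2)^2 = 0.25 up to floating point
```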
|fast-fourier-transform|
1
If $\pi $ is normal, can it be used as a random number generator?
If one day we finally prove the normality of $\pi $, would we be able to say that we have ourselves a sure-fire truly random number generator?
A wonderful paper by Jacques Dutka that I chanced to see in the 1970s put a million-digit expansion of $\pi$ through a rigorous set of tests for uniformity and unpredictability, and showed it was significantly better than the best RNGs at the time. Its utility was not limited by the "fact" that, since it was entirely known to a million digits and is an ontic fact (the fixed relationship between a radius and a circumference), it could not itself be random.
|pi|
0
Differentiation, high school math
I know this platform is for higher-level math. This is just a high-school-level question, but I don't know a more suitable platform to ask. Apologies in advance. The question is to differentiate the expression, multiply by $x$ , do this twice, and set $x=1$ : $$\dfrac{x^{n+1}-1}{x-1}$$ After differentiating twice, I get: $$\dfrac{x^{n+1}\cdot\left(n^2x^2+\left(-2n^2-2n+1\right)x+n^2+2n+1\right)-x-1}{\left(x-1\right)^3}$$ I get $\frac{0}{0}$ after setting $x=1$ but the expected answer is $\frac{n(n+1)(2n+1)}{6}$ . Using L'Hospital's rule, I get $\frac{x^n\cdot\left(\left(n^3+3n^2\right)x^2+\left(-2n^3-6n^2-3n+2\right)x+n^3+3n^2+3n+1\right)-2x-1}{3\left(x-1\right)^2}$ which is still $\frac{0}{0}$ . Also, this expression is used to calculate a power series, not the other way around. I'm wholeheartedly grateful for all your conscientious replies, especially Blue and Calvin. I made some calculation errors but eventually solved it.
We can use the power series of $\frac{1}{1-x}$ . $$\dfrac{x^{n+1}-1}{x-1}=-(x^{n+1}-1)(1+x+x^2+...)=-(x^{n+1}+x^{n+2}+x^{n+3}+...-1-x-x^2-...-x^n-x^{n+1}-x^{n+2}-x^{n+3}-...)=1+x+x^2+...+x^n=\sum_{k=0}^nx^k$$ The first derivative is $$\sum_{k=1}^nkx^{k-1}$$ Multiplying by $x$ , we get $$\sum_{k=1}^nkx^k$$ The second derivative is $$\sum_{k=1}^nk^2x^{k-1}$$ Multiplying by $x$ , we get $$\sum_{k=1}^nk^2x^k$$ So, on putting $x=1$ we get $\sum_{k=1}^nk^2$ , as expected.
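The manipulation can be checked directly on coefficient lists (a sketch; `theta` is a made-up name for the operator $x\frac{d}{dx}$):

```python
def theta(coeffs):
    """Apply x * d/dx to a polynomial given by its coefficient list."""
    return [k * c for k, c in enumerate(coeffs)]

n = 5
geom = [1] * (n + 1)          # 1 + x + x^2 + ... + x^n
twice = theta(theta(geom))    # coefficient of x^k is now k^2
value = sum(twice)            # evaluating at x = 1 gives sum of k^2
print(value)                  # 55 = 5*6*11/6
```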
|calculus|derivatives|
0
Minorisation and Coupling in probability theory.
In probability theory, when studying the convergence of a stochastic process to equilibrium, minorisation conditions can be exploited (see for instance Assumption 2.1 of https://www.sciencedirect.com/science/article/pii/S0304414902001503?ref=pdf_download&fr=RR-2&rr=7e1ea215b82bb2e7 ). What is the intuitive reason that such a condition can provide convergence to equilibrium? I have heard in passing that it is related to coupling techniques but cannot see how.
Not sure if this helps for the above specific paper, but for the discrete case the reasoning is much easier to explain. For any kernel $K$ acting from $\mathcal{X} \to \mathcal{Y}$ , we can decompose it as a convex combination of two kernels as $$ K = \alpha 1 \pi^T + (1 - \alpha)M $$ where $\alpha = \sum_{y \in \mathcal{Y}} \min_{x \in \mathcal{X}} K(y|x) $ is the Doeblin coefficient of the kernel $K$ and $\pi(y) = \min_{x \in \mathcal{X}} K(y|x)/\alpha $ . Now, whenever a measure $\mu$ is acted upon by a kernel $K$ , there is essentially an 'erasure of the $\alpha$ -th fraction of information' in $\mu$ due to the kernel $1 \pi^T $ in the above decomposition. This idea is basically at the heart of many convergence results.
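A tiny numerical illustration of the discrete case (a sketch; the kernel values are made up) of the resulting total-variation contraction $\|\mu K-\nu K\|_{TV}\le(1-\alpha)\|\mu-\nu\|_{TV}$, which is the coupling-flavoured consequence of the decomposition:

```python
# Row-stochastic kernel: K[i][j] = probability of moving from state i to j.
K = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

# Doeblin coefficient: sum over target states of the minimum over sources.
alpha = sum(min(row[j] for row in K) for j in range(3))  # 0.2 + 0.3 + 0.2 = 0.7

def apply(mu, K):
    return [sum(mu[i] * K[i][j] for i in range(3)) for j in range(3)]

def tv(mu, nu):
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

mu, nu = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]
print(tv(apply(mu, K), apply(nu, K)), "<=", (1 - alpha) * tv(mu, nu))
```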
|probability-theory|probability-distributions|convergence-divergence|stationary-processes|coupling|
0
relative commutant of type III1 factor
Let $M$ be a type III $_1$ factor and let $N$ be a non-trivial ( $N\neq \Bbb C 1$ ) semi-finite von Neumann subalgebra of $M$ . What is the type of the relative commutant $N'\cap M$ ? Is it semi-finite?
There are examples when the relative commutant is not semi-finite. If $B$ is a type $\mathrm{III}_1$ factor, then $M=B\overline\otimes\mathbb{B}(H)\cong B$ for any separable Hilbert space $H$ . Now if you take $N=\mathbb C\otimes B(H)$ , then $N$ is semi-finite with relative commutant $N^\prime\cap M=B$ , which is type $\mathrm{III}_1$ and hence not semi-finite.
|operator-algebras|von-neumann-algebras|
1
How to represent difficulty and success?
This is a revision of another question found here : Consider a criminal pondering the commission of a crime, C. It could be launching a cyber-attack or embezzling funds. How likely is it that the attempt will succeed? Call it P(S). How likely is it that a malefactor will attempt C in the next year? Call it P(A) Assume that experts provide credible estimates for P(S) and P(A). If we assume the community of malicious actors is uniformly competent and that every attempt is equally likely to succeed, how likely is it that a successful attempt will occur? It does not seem credible to assume that S and A are independent. Although the hardness of the problem does not depend on the number of attempts, it does not seem reasonable to assume the converse. If the crime is difficult to commit (requiring intricate timing and deep knowledge, say) fewer attempts are likely to be made. What is the best way to model what might be called the “threat potency”?
The question is not well-formed. If we assume that there is either 0 or 1 attempt in the next year, and you know the probability $p$ that there will be 1 attempt in the next year (perhaps this is what you meant by "P(A)", or perhaps not), and you know the probability $q$ that the attempt will succeed if it is attempted (perhaps what you had in mind with "P(S)"), then the probability of a successful attempt is exactly $pq$ . I think it's easier to formulate this as a conditional probability. Let $\mathcal{A}$ denote the event that there is an attempt, and $\mathcal{S}$ the event that there is a successful attempt; then experts are presumably giving us the probability $\Pr[\mathcal{A}]$ and $\Pr[\mathcal{S} \mid \mathcal{A}]$ . With that formulation, it absolutely is correct that $$\Pr[\mathcal{S} \land \mathcal{A}] = \Pr[\mathcal{S} \mid \mathcal{A}] \times \Pr[\mathcal{A}],$$ and by the structure of "attempt" and "success", we have $$\Pr[\mathcal{S}] = \Pr[\mathcal{S} \land \mathcal{A}
|probability|probability-distributions|conditional-probability|
0
Can we bound the lower and upper asymptotic density of these subsets of $\mathbb N$?
Suppose we partition the powers of $2$ into two sets in the following manner: $$A = \{1\}, \quad B = \{2,4,8,16,\dots\}$$ Now suppose that, for each odd positive integer $t$ , we select either $tA$ or $tB$ , and form the union $S$ of all sets selected in this way. For example, if we always choose $tA$ , then the resulting set is $$S_1 = 1A\cup 3A\cup 5A\cup\cdots = \{1,3,5,7,9,\dots\},$$ while if we always choose $tB$ , the resulting set is $$S_2 = 1B\cup 3B\cup 5B\cup\cdots = \{2,4,6,8,10,\dots\}.$$ These sets consist of all odd and all even integers, respectively; they each have natural density $\frac 12$ . On the other hand, we might form a "mixed" union. A tame example is \begin{align*} S_3 = \left(\bigcup\limits_{t\equiv 1\text{ mod }4} tA\right) \cup \left(\bigcup\limits_{t\equiv 3\text{ mod }4} tB\right) &= 1A\cup 3B\cup 5A\cup 7B\cup\cdots \\ &= \{1,5,6,9,12,13,14,\dots\}, \end{align*} which again can be shown to have natural density $\frac 12$ . A more wild example is \begin{a
We can prove the conjecture true by first considering the logarithmic density of a set $T$ of positive integers, which is $$ \lim_{x\to\infty} \frac1{\log x} \sum_{m\in T\cap[1,x]} \frac1m. $$ Proposition : Any set $S$ formed as described in the OP has logarithmic density $\frac12$ . Proof : Let $x\ge1$ , and consider any odd integer $t\le x$ . Note that $\displaystyle \sum_{m\in tA\cap[1,x]} \frac1m = \frac1t, $ while $$ \sum_{m\in tB\cap[1,x]} \frac1m = \frac1t -\frac1{2^kt} $$ where $k$ is the largest nonnegative integer with $2^kt\le x$ . Consequently, whether $C_t=A$ or $C_t=B$ , we have $$ \sum_{m\in tC_t\cap[1,x]} \frac1m = \frac1t +O\biggl( \frac1x \biggr). $$ It follows that $$ \sum_{m\in S\cap[1,x]} \frac1m = \sum_{\substack{t\le x \\ t\text{ odd}}} \sum_{m\in tC_t\cap[1,x]} \frac1m = \sum_{\substack{t\le x \\ t\text{ odd}}} \biggl( \frac1t +O\biggl( \frac1x \biggr) \biggr) = \frac12\log x + O(1), $$ which implies the proposition. $\quad\square$ While the existence of a set's
|number-theory|asymptotics|additive-combinatorics|
1
Exploring Variations of the Partition Problem: Seeking the Minimum Product Difference
I'm delving into variations of the classic partition problem, specifically focusing on a version that may be termed the "Minimum Product Difference Partition Problem" (MPDPP). The traditional partition problem aims to divide a set of integers into two subsets such that the difference in their sums is minimized. In contrast, the MPDPP seeks to partition a set into two subsets where the difference between the products of the numbers in each subset is minimized. Here's a more formal statement of the problem: Given a finite set of positive integers $S = \{s_1, s_2, \ldots, s_n\}$ , the goal is to find a partition of $S$ into two subsets $S_1$ and $S_2$ such that the absolute difference between the products of the numbers in $S_1$ and $S_2$ is minimized, i.e., minimize $| \prod_{s_i \in S_1} s_i - \prod_{s_j \in S_2} s_j |$ . This variation introduces a multiplicative aspect that seems to complicate the problem significantly. My questions are as follows: Has this problem or a similar one be
The problem is NP-hard. In particular, testing whether there is a partition that leads to difference zero is known as the "Product Partition" problem, which has been proved to be (strongly) NP-complete in the following paper: “Product Partition” and related problems of scheduling and systems reliability: Computational complexity and approximation . C.T. Ng, M.S. Barketau, T.C.E. Cheng, Mikhail Y. Kovalyov. European Journal of Operational Research, 2010, 207:601-604. The paper also shows the existence of a FPTAS for a related problem, which I suspect might solve your problem as well. I suspect that reviewing subsequent papers that cite it might provide a helpful starting point for checking what work has been done since then in the research literature. Related: Subset product is (weakly) NP complete. See https://cs.stackexchange.com/q/7907/755 , https://cstheory.stackexchange.com/q/16902/5038 .
|combinatorics|discrete-optimization|
0
Regularization for quadratic programming
I want to minimize $x$ in this equation with quadratic programming. $$Z = \Phi x_0 + \Gamma U $$ Let's introduce a vector $R$ : I want to select $U$ as small as possible so that $J$ will be minimized, where $$J = \frac{1}{2}||Z - R||^2$$ Here $\Phi$ , $x_0$ , $\Gamma$ and $R$ are known. So I expand this problem: $$J = \frac{1}{2}||Z - R||^2 = \frac{1}{2}||\Gamma U + \Phi x_0 - R||^2 = \frac{1}{2}(\Gamma U + \Phi x_0 - R)^TI(\Gamma U + \Phi x_0 - R)$$ where $I$ is the identity matrix. We continue... $$J = \frac{1}{2}(\Gamma U + \Phi x_0 - R)^TI(\Gamma U + \Phi x_0 - R) = \frac{1}{2}U^T\Gamma^TI\Gamma U - (\Gamma^TI(R-\Phi x_0))^TU + \frac{1}{2}(R-\Phi x_0)^TI(R-\Phi x_0)$$ And then we write it in QP form: $$J = \frac{1}{2}x^TQx + c^Tx$$ where $Q = \Gamma^TI\Gamma$ and $c = -\Gamma^TI(R-\Phi x_0)$ . Forget about the constant $\frac{1}{2}(R-\Phi x_0)^TI(R-\Phi x_0)$ . The constraints are: $$\Gamma U \leq R - \Phi x_0$$ $$ U \geq 0$$ Question: Where should I put the regularization?
I believe you can put the regularization in your initial objective function. Say your objective function is now $$ J = \frac{1}{2}\lVert Z - R \rVert^2 + \lVert \lambda x \rVert^2 $$ Note that $\lambda$ is a parameter you choose carefully. For the rest, you can extend this to the standard QP form as before. I hope that answers your question.
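A sketch of the mechanics (variable names from the question; unconstrained case only, ignoring $\Gamma U\le R-\Phi x_0$ and $U\ge0$): adding $\lVert\lambda U\rVert^2$ shifts $Q$ by $2\lambda^2 I$ and leaves $c$ unchanged, which can be cross-checked against the stacked least-squares formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
Gamma = rng.standard_normal((6, 3))
r = rng.standard_normal(6)        # r plays the role of R - Phi @ x0
lam = 0.5

# QP data with the regularization folded in:
Q = Gamma.T @ Gamma + 2 * lam**2 * np.eye(3)   # was Gamma^T Gamma
c = -Gamma.T @ r                               # unchanged
U = np.linalg.solve(Q, -c)                     # unconstrained minimiser

# Same minimiser from min ||Gamma U - r||^2 + 2 lam^2 ||U||^2:
A_aug = np.vstack([Gamma, np.sqrt(2) * lam * np.eye(3)])
b_aug = np.concatenate([r, np.zeros(3)])
U_ls = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]
print(np.allclose(U, U_ls))  # True
```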
|optimization|quadratic-programming|
1
Laplace transform of a function defined by integral
I have a function of the form $f(u) = ke^{-au}\int_{0}^{\infty}x^{z-1}(1+x)^{-z-1}e^{-bxu}dx$ where $a,b,u,z>0$ are all real positive numbers and $k$ is a positive normalization factor. I need to calculate the Laplace transform of $f(u)$ . We have $$F(s) = \int_{0}^{\infty} f(u)e^{-su}du = k\int_{0}^{\infty} e^{-au} \left(\int_{0}^{\infty}x^{z-1}(1+x)^{-z-1}e^{-bxu}dx\right) e^{-su} du$$ Since Fubini's theorem holds (does it really hold?) we can change the order of integration $$F(s) = k\int_{0}^{\infty} x^{z-1}(1+x)^{-z-1}\left(\int_{0}^{\infty} e^{-(a+s+bx)u}du\right) dx= k\int_{0}^{\infty} \frac{x^{z-1}(1+x)^{-z-1}}{a+s+bx} dx $$ I can't go any further! What should I do next? Also I'm contemplating the change of integration order: is it valid? What about the region of convergence for the Laplace transform, how to find it? Thanks in advance! ==============================Corrections============================= The values that $z$ can assume are positive real numbers $z>0$
You probably have to use $$ _2F_1(a,b;c; z)=\frac{1}{B(b,c-b)}\int_0^1\frac{x^{b-1}(1-x)^{c-b-1}}{(1-xz)^a}dx$$ by a suitable change of variable. Your $a$ can be taken as $0$ without loss of generality.
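One possible continuation (a sketch; worth double-checking the algebra): substituting $t=\frac{x}{1+x}$, so that $x=\frac{t}{1-t}$ and $dx=\frac{dt}{(1-t)^2}$, turns the last integral into the Euler form above with hypergeometric parameters $(1,\,z;\,z+2)$:
$$\int_{0}^{\infty} \frac{x^{z-1}(1+x)^{-z-1}}{a+s+bx}\, dx = \frac{1}{a+s}\int_0^1 \frac{t^{z-1}(1-t)}{1-\frac{a+s-b}{a+s}\,t}\, dt = \frac{B(z,2)}{a+s}\, {}_2F_1\!\left(1,\,z;\,z+2;\,\frac{a+s-b}{a+s}\right),$$
with $B(z,2)=\frac{1}{z(z+1)}$. For $s>-a$ the argument of ${}_2F_1$ stays below $1$, so the integral representation applies.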
|real-analysis|integration|laplace-transform|
1
Closed form solution to matrix equation
I'm trying to solve the following matrix equation for $L$ : $$A\cdot L + L^T = 0$$ where A is non-singular and both $A,L \in \mathbb{R}^{n\times n}$ . I'm wondering if this could have a closed-form solution. This seems close to Sylvester's equation but not quite the same.
I begin with three examples showing that the solutions of $$AL+L^T=0\tag{1}$$ can depend on parameters. a) If we take $$A=\pmatrix{1&0&0\\0&1&1\\0&0&-1},$$ then the general solution of (1) is: $$L=\pmatrix{0&a&0\\-a&b&-2b\\0&-2b&4b},$$ with $\det(L)=4a^2b$ . b) If $$A=\pmatrix{1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1}$$ (in fact it is a commutation matrix that will be used later on), the general solution depends on 4 parameters: $$L=\pmatrix{0&a&a&c\\-a&b&-b&d\\-a&-b&b&d\\-c&-d&-d&0}$$ with $\det(L)=0.$ c) For the $6 \times 6$ matrix: $$A=\pmatrix{1&0&0&0&0&0\\0&0&0&0&0&1\\0&0&0&0&1&0\\0&0&0&1&0&0\\0&0&1&0&0&0\\0&1&0&0&0&0}$$ the general solution depends on $9$ (!) parameters: $$L=\pmatrix{0&e&b&a&b&e\\ -e&i&-f&-g&-h&-i\\ -b&h&d&-c&-d&f\\ -a&g&c&0&c&g\\ -b&f&-d&-c&d&h\\ -e&-i&-h&-g&-f&i}$$ with the interesting factorization $$\det(L)=(2ce - 2bg + af + ah)^2(2fh + 4di - f^2 - h^2)$$ Please note that in the three examples, the solutions are the sum of a symmetric matrix and a skew-symmetric one.
|matrices|matrix-equations|matrix-calculus|
0
Quadratic form with positive semi-definite matrix
Suppose $A$ is a positive semi-definite matrix, and $x$ and $y$ are real vectors. Under what conditions does the following result hold? $$ x'y > 0 \implies x'Ay > 0 $$ Does there exist $\varepsilon>0$ such that the following result always holds? $$ x'y > \varepsilon \implies x'Ay > 0 $$
Your first problem is equivalent to asking when $\langle x,y\rangle>0$ implies that $\lambda_1x_1y_1+\cdots+\lambda_nx_ny_n>0$ where $\lambda_i\geq 0$ for all $i.$ To see this, just diagonalize $A$ in an orthonormal basis. You are now facing a problem in $R^{2n}$ with the quadratic forms $(x,y)\mapsto \langle x,y\rangle>0$ or $\lambda_1x_1y_1+\cdots+\lambda_nx_ny_n>0.$ Diagonalizing again, you wonder when $\|x\|^2-\|y\|^2>0$ implies that $$\lambda_1(x_1^2-y_1^2)+\cdots+\lambda_n(x_n^2-y_n^2)>0$$ which seems easier. In particular, without loss of generality you can assume $\|x\|^2=1.$ The problem for $n=2$ now seems quite tractable and gives the ideas for $n>2.$ Edit. Something simpler, using the formula $$\langle x,y\rangle=\frac{1}{4}(\|x+y\|^2-\|x-y\|^2)$$ and $X=\frac{x+y}{2}$ , $Y=\frac{x-y}{2}.$ Therefore $\langle x,y\rangle\geq 0\Leftrightarrow \|X\|^2\geq \|Y\|^2$ and $x^TAy\geq 0\Leftrightarrow X^TAX\geq Y^TAY.$ Without loss of generality we may assume that $$A=\mathrm{diag}(\lambda_1,\dots,\lambda_n).$$
|linear-algebra|matrices|quadratic-forms|positive-definite|positive-semidefinite|
0
$\left( 1 - |\alpha| \right)^{2} \leq |1 - 2\alpha\cos(x) + \alpha^{2}| \leq \left( 1 + |\alpha| \right)^{2}$ with $\alpha \in \mathbb{D}_{1}$
I'm trying to prove the following inequality: \begin{equation} \left( 1 - |\alpha| \right)^{2} \leq |1 - 2\alpha\cos(x) + \alpha^{2}| \leq \left( 1 + |\alpha| \right)^{2} \;\; \forall x \in (0, \pi), \end{equation} where $ \alpha \in \mathbb{C}$ , with $|\alpha| < 1$ . When I work in $\mathbb{R}$ , the inequality is true. But in $\mathbb{C}$ , I can't prove it. I graphed the function $f(x) = |1 - 2\alpha\cos(x) + \alpha^{2}|$ , and the inequality seems to hold.
Gary's comment resolves the upper bound. For the lower bound, the idea is similar to Mittens'; you can use Euler's formula for $e^{ix} = \cos x + i \sin x$ to write $$ \left|1 - 2\alpha \cos x + \alpha^2 \right| = \left| (e^{ix} - \alpha)(e^{-ix} - \alpha)\right| $$ and now apply the reverse triangle inequality (lower bound in Mittens' answer) to each factor to get $$ \left| (e^{ix} - \alpha)\right| \left|(e^{-ix} - \alpha)\right| \geq |1 - |\alpha||^2 = (1 - |\alpha|)^2. $$
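A quick numerical sanity check of both bounds (purely illustrative; the factorization argument above is the proof):

```python
import cmath, random

random.seed(1)
for _ in range(1000):
    # random alpha in the open unit disk, random x in (0, pi)
    r, phi = random.random() * 0.999, random.uniform(0, 2 * cmath.pi)
    alpha = r * cmath.exp(1j * phi)
    x = random.uniform(1e-6, cmath.pi - 1e-6)
    # |1 - 2*alpha*cos(x) + alpha^2| = |e^{ix} - alpha| * |e^{-ix} - alpha|
    val = abs(1 - 2 * alpha * cmath.cos(x) + alpha**2)
    assert (1 - abs(alpha))**2 - 1e-12 <= val <= (1 + abs(alpha))**2 + 1e-12
print("bounds verified on 1000 random samples")
```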
|complex-analysis|trigonometry|
0
How do I approach this proof to show that P(T, K) is a non-decreasing function of T without making any model assumption
Let P (T, K) be the price of a Put option with maturity T and strike K, and assume that the interest rate is zero, i.e., r = 0. By the no-arbitrage pricing rule, show that P (T, K) is a non-decreasing function of T without making any model assumption, i.e., show that for any K > 0, P (T1, K) ≤ P (T2, K), ∀ 0 ≤ T1 ≤ T2.
One possible way to go about this question is to prove the following lemma. Lemma: If $X_t$ is a martingale, $K$ a real constant, and $0 \le T_1 \le T_2$ , then $$ \mathbb{E}[(K - X_{T_2})^{\text{+}}|\mathcal{F}_0] \ge \mathbb{E}[(K - X_{T_1})^{\text{+}}|\mathcal{F}_0] $$ We can argue the proof like this: $$ \begin{equation} \begin{split} P(T_2, K) &= \mathbb{E}[(K - X_{T_2})^{\text{+}}|\mathcal{F}_0] \\ &= \mathbb{E}[\mathbb{E}[(K - X_{T_2})^{\text{+}}|\mathcal{F}_1] \;| \mathcal{F}_0] \\ &\ge \mathbb{E}[(\mathbb{E}[K - X_{T_2}|\mathcal{F}_1])^{\text{+}} \;| \mathcal{F}_0] \\ &= \mathbb{E}[(K - X_{T_1})^{\text{+}}|\mathcal{F}_0] = P(T_1, K) \end{split} \end{equation} $$ where we used the tower property of the conditional expectation, Jensen's inequality on the positive part function (which is convex) and the martingale property of $X_t$ . There is also another possible solution, which is to employ the usual strategy "buy low, sell high" and look at the PnL at each date, but I thought this derivation is a bit more elegant.
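As an illustration of the lemma (not part of the model-free argument above), here is a sketch using the simplest martingale, $X_t = x_0 + $ Brownian motion, for which $\mathbb{E}[(K-X_T)^+]$ has the closed "Bachelier" form $(K-x_0)\Phi(d)+\sqrt{T}\,\varphi(d)$ with $d=(K-x_0)/\sqrt{T}$:

```python
from math import erf, exp, pi, sqrt

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def norm_pdf(z):
    return exp(-z * z / 2) / sqrt(2 * pi)

def put_price(T, K, x0=100.0):
    """E[(K - X_T)^+] for X_T ~ N(x0, T), i.e. X_t = x0 + Brownian motion.

    This Bachelier model is an illustrative choice; the question itself
    is model-free."""
    d = (K - x0) / sqrt(T)
    return (K - x0) * norm_cdf(d) + sqrt(T) * norm_pdf(d)

prices = [put_price(T, K=100.0) for T in (0.5, 1.0, 2.0, 5.0)]
print(prices)
assert all(p1 <= p2 for p1, p2 in zip(prices, prices[1:]))  # non-decreasing in T
```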
|finance|
0
Frobenius norm of the product of two complex Gaussian distributed matrices
There are two complex Gaussian distributed matrices, $\mathbf{A}\in \mathbb{C}^{L\times M}$ and $\mathbf{B}\in \mathbb{C}^{N\times M}$ . The elements of $\mathbf{A}$ and $\mathbf{B}$ follow i.i.d. complex Gaussian distributions with mean 0 and variances $\sigma_A^2$ and $\sigma_B^2$ , respectively, i.e., $a_{ij}\sim \mathcal{CN}(0,\sigma_A^2)$ and $b_{ij}\sim \mathcal{CN}(0,\sigma_B^2)$ for all $i,j.$ I want to find the expectation of $||\mathbf{A}\mathbf{B}^H||_F^2$ . So, how can I express the closed form of $\mathbb{E}\big[||\mathbf{A}\mathbf{B}^H||_F^2\big]$ ?
What is $\textbf{I}$ ? A matrix with all coefficients equal to $1$ ? What is $B^H$ ? The Hermitian conjugate? So we are working with complex Gaussian variables? What is $|AB^H|$ ? A determinant? So we are in the case $L=N$ ?
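Assuming the natural reading of the question ($B^H$ the Hermitian conjugate, $\|\cdot\|_F$ the Frobenius norm), independence of the entries suggests the closed form $\mathbb{E}\big[\|AB^H\|_F^2\big]=LNM\sigma_A^2\sigma_B^2$: each of the $LN$ entries of $AB^H$ is a sum of $M$ independent products with mean square $\sigma_A^2\sigma_B^2$. A Monte Carlo sketch to sanity-check this conjectured value:

```python
import numpy as np

rng = np.random.default_rng(42)
L, M, N = 3, 4, 5
sA2, sB2 = 2.0, 0.5   # variances sigma_A^2, sigma_B^2

def crandn(rows, cols, var):
    # CN(0, var): real and imaginary parts each N(0, var/2)
    z = rng.standard_normal((rows, cols)) + 1j * rng.standard_normal((rows, cols))
    return z * np.sqrt(var / 2)

trials = 20000
acc = 0.0
for _ in range(trials):
    A = crandn(L, M, sA2)
    B = crandn(N, M, sB2)
    acc += np.linalg.norm(A @ B.conj().T, 'fro')**2
estimate = acc / trials
conjectured = L * N * M * sA2 * sB2   # each of the L*N entries has mean square M*sA2*sB2
print(estimate, conjectured)          # the two should agree to within Monte Carlo error
```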
|linear-algebra|matrices|normal-distribution|matrix-norms|chi-squared|
0
Prove that the Riemann curvature tensor is a tensor
How would I prove that the Riemann curvature tensor $R: \scr X(M)^3 → \scr X(M)$ , $R(X, Y )Z := \nabla_X\nabla_Y Z − \nabla_Y \nabla_XZ − \nabla_{[X,Y ]}Z$ , is indeed a tensor? I thought I could use this (page 41 in https://radbouduniversitypress.nl/site/books/m/10.54195/EFVF4478/ ) and simply prove that $R(fX_1+gX_2, Y )Z=fR(X_1, Y )Z+gR(X_2, Y )Z$ ...(1) $R(X, fY_1+gY_2 )Z=fR(X,Y_1)Z+gR(X, Y_2 )Z$ ...(2) and $R(X, Y )(fZ_1+gZ_2)=fR(X,Y)Z_1+gR(X, Y )Z_2$ ...(3) Am I on the right track? Isn't doing this indeed using proposition 2.7? My T.A. said I cannot use 2.7 as it is stated, but I don't see why. And then how do I use proposition 2.7?
In proposition 2.7 the map $\tau$ has values in $C^\infty (M)$ . But the curvature tensor is a map $\mathfrak{X}(M)^3 \to \mathfrak{X}(M)$ as you have defined it. Therefore you can not apply proposition 2.7 in the way you want to. To apply proposition 2.7 we first need to transform the Riemann "Tensor" (note that what is given in your definition is not at all a tensor field!!!) to a $C^\infty$ -multilinear map $\mathfrak{X}(M)^k \times \Omega(M)^l \to C^\infty (M)$ in some "natural" way and then transform it to an actual tensor field using proposition 2.7. What are the right $k$ and $l$ and how can we transform it? When speaking of the Riemann "tensor" (or when saying that it is a tensor(-field)) as defined in the post it is implicitly assumed that the multilinear map is transformed to a tensor field using this "natural way" together with proposition 2.7. Hint: Given a $C^\infty$ -multilinear map $R :\mathfrak{X}(M)^3 \to \mathfrak{X}(M)$ we can define a $C^\infty$ -multilinear map $\tilde{R} : \mathfrak{X}(M)^3 \times \Omega(M) \to C^\infty (M)$ by $\tilde{R}(X,Y,Z,\omega) := \omega(R(X,Y)Z)$ .
|differential-geometry|riemannian-geometry|tensors|general-relativity|
0
Definition of an unbiased estimator
I have some confusion with the definition of unbiased estimators. Suppose you are given a random sample $X_1,...,X_n$ of real-valued random variables defined on a common probability space. A statistic is defined as $g(X_1,...,X_n)$ , where $g:\mathbb{R}^n\to\mathbb{R}$ is a measurable function. If there is some parameter $\theta$ of the common distribution of $X_1,...,X_n$ that we want to approximate, we might emphasize this with the notation $g(X_1,...,X_n;\theta)$ , or $\hat{\Theta}(X_1,...,X_n)$ , in which case, we call this random variable a point-estimator. Definition: Let $\varphi:\text{parameters}\to\text{probability measures on} \ \mathbb{R}^n$ , be a parameterization $\varphi(\theta)=\mu_{\theta}$ . If $\hat{\Theta}=g(X_1,...,X_n)$ is an estimator of $\theta$ , define $E_{\theta}(\hat{\Theta})=\int_{\mathbb{R}^n}g(x_1,...,x_n)d\mu_{\theta}(x_1,...,x_n)$ . $\hat{\Theta}$ is said to be an unbiased estimator if $E_{\theta}(\hat{\Theta})=\theta$ , for each $\theta$ . Question and example:
Given a statistical model $(\Omega, P_{\theta})$ parameterized by $\theta\in \Theta\subset R^d$ , an estimator of $\theta$ is $any$ function $g$ from $\Omega$ to $R^d$ , $w\mapsto g(w).$ It is said to be unbiased when $$E(g(w))=\int_{\Omega}g(w)P_{\theta}(dw)$$ exists and is equal to $\theta$ for all $\theta\in \Theta.$ The function $g$ does not depend on $\theta$ ; this is crucial.
|statistics|
1
Prove that under ZF, $\forall x (\bigcap \{z:z\notin x\} = \emptyset)$
I was reading the chapter on set theory in Hinman and came across this statement as Exercise 6.1.27. May I please ask for a proof of it? The claim is a bit interesting: it says "for every set $x$ , the intersection of the collection of sets that are not in it is empty". What does it reveal? Is there any interesting application of this result?
There are no classes in ZF, but you can still write expressions with classes that serve as abbreviations. So $y \in \bigcap \{z:z \notin x\}$ means $\forall r (r \in \{z:z \notin x\} \rightarrow y \in r)$ , or equivalently $\forall r (r \notin x \rightarrow y \in r )$ . So if we can find a set not in $x$ that doesn't contain $y$ , we have shown $\forall r (r \notin x \rightarrow y \in r )$ is false, hence $y\in \bigcap \{z:z \notin x\}$ is false, and so $\bigcap \{z:z \notin x\}=\emptyset$ . But we know $x$ does not contain every singleton set (an easy proof). So there is a singleton set, say $\{ t\}$ , which is not in $x$ . But then $x \cup \{\{t\}\}$ does not contain every singleton set either. So there are at least two singleton sets not in $x$ , and $y$ cannot belong to both of them (they are distinct singletons). Hence $\forall r (r \notin x \rightarrow y \in r )$ is false, and so $\bigcap \{z:z \notin x\}=\emptyset$ . As this holds for arbitrary $x$ , $\forall x(\bigcap \{z:z \notin x\}=\emptyset)$ .
|elementary-set-theory|set-theory|
1
What is the flaw in the following analysis of the Sleeping Beauty Problem
The sleeping beauty problem is a famous problem where a coin is flipped and then a subject is put to sleep. If the coin was heads they will be awoken on Monday and asked what their belief is that the probability was heads. If the coin was tails they will also be awoken on Monday and asked the same question. But, they will then be put back to sleep, have their memory erased and be reawoken on Tuesday and asked the same question again. It seems to me with the Sleeping Beauty problem we have the following with h = heads, t = tails, m = Monday, and tu = Tuesday. $$ \begin{align} P(h) &= P(t) = 1/2 \\ P(m|h) &= 1 \\ P(m|t) &= 1/2 \\ P(tu|t) &= 1/2 \end{align} $$ Since we don't know what day it is, we want to answer the question of $P(h| m \cup tu)$ . But from Bayes rule we have, $$ \begin{align} P(h| m \cup tu) &= P(m \cup tu | h) P(h) / P(m \cup tu) \\ &= \left(P(m | h) + P(tu | h)\right) P(h) / P(m \cup tu) \\ &= \frac{(1 + 0)\times 1/2}{ 1} \\ &= 1/2 \end{align} $$ Can someone point out the flaw in this reasoning?
The puzzle raised by the Sleeping Beauty problem is that there are two conflicting intuitions about it. The halfer intuition is that the subject starts off with a subjective probability that the coin will land heads of 1/2. It continues that on waking up, the subject has not undergone any informative learning experience, so the subject should continue to have the same subjective probability. The thirder intuition is that you can convince yourself that if the experiment is repeated, in the long run, the coin will have landed heads about 1/3 of the times that the subject is woken up, so that the subject should, on waking, change their subjective probability to 1/3. You can try to formalize the halfer intuition by using Bayes rule to argue $$P(\mathrm{heads} \,| \,\mathrm{mon} \lor \mathrm{tue})= 1/2$$ But that after all is just a conditional probability. Someone who accepts the thirder intuition will say that in the circumstances of Sleeping Beauty, that conditional probability does not answer the question the subject is actually being asked.
|probability|paradoxes|
1
Prove that the Riemann curvature tensor is a tensor
How would I prove that the Riemann curvature tensor $R: \scr X(M)^3 → \scr X(M)$ , $R(X, Y )Z := \nabla_X\nabla_Y Z − \nabla_Y \nabla_XZ − \nabla_{[X,Y ]}Z$ , is indeed a tensor? I thought I could use this (page 41 in https://radbouduniversitypress.nl/site/books/m/10.54195/EFVF4478/ ) and simply prove that $R(fX_1+gX_2, Y )Z=fR(X_1, Y )Z+gR(X_2, Y )Z$ ...(1) $R(X, fY_1+gY_2 )Z=fR(X,Y_1)Z+gR(X, Y_2 )Z$ ...(2) and $R(X, Y )(fZ_1+gZ_2)=fR(X,Y)Z_1+gR(X, Y )Z_2$ ...(3) Am I on the right track? Isn't doing this indeed using proposition 2.7? My T.A. said I cannot use 2.7 as it is stated, but I don't see why. And then how do I use proposition 2.7?
In the Riemannian context you always have a metric to identify vector fields with 1-forms. Proposition 2.7 says that a tensor of type $(k,l)$ is nothing but a $\mathcal C^\infty(M)$ -multilinear map $$ \mathfrak X^k(M) \times \Omega(M)^l \to \mathcal C^\infty(M) $$ By the universal property of tensor product, any map with this property factors to a $\mathcal C^\infty(M)$ -linear map $$ \mathfrak X^{\otimes k}(M) \otimes_{\mathcal C^\infty} \Omega^1(M)^{\otimes l} \to \mathcal C^\infty(M) $$ Now, if $g : \mathfrak X^{\otimes 2}(M) \to \mathcal C^\infty(M)$ is a Riemannian metric, for any vector field $X \in \mathfrak X(M)$ you can obtain a 1-form by considering the linear functional $X^\flat: Y \to g(X,Y)$ . This transformation (and its inverse) is called the musical isomorphisms. To get back to your case, you have the curvature $$ R : \mathfrak X(M)^3 \to \mathfrak X(M) $$ defined as you stated using the Levi-Civita connection. You may check manually that this is $\mathcal C^\infty(M)$ -multilinear.
|differential-geometry|riemannian-geometry|tensors|general-relativity|
0
Topology in the context of Pontryagin dual
Let $A$ be an abelian group. The definition of the Pontryagin dual of $A$ is $\text{Hom}_{\text{conti}}(A, \mathbb{Q}/\mathbb{Z})$ . In this context, what are the topologies on $A$ and $\mathbb{Q}/\mathbb{Z}$ ? Is it the cofinite topology on $A$ and the discrete topology on $\mathbb{Q}/\mathbb{Z}$ , or does that depend on the situation? In the case where $A=\widehat{E(\mathbb{Q})}$ ( $\widehat{E(\Bbb{Q})}$ is the profinite completion of $E(\Bbb{Q})$ , cf. Profinite completion of local Mordell-Weil group) and $E$ is an elliptic curve defined over $\mathbb{Q}$ , $A$ is an infinite group. However, we usually consider its Pontryagin dual. I'm especially interested in this example.
First, a clarification following the comments about Pontryagin duality. It indeed concerns a duality theory on locally compact abelian groups, and the dual of such a group $A$ is $\mathrm{Hom}_{C^0}(A,\mathbb{R}/\mathbb{Z})$ endowed with the compact open topology. In particular, this duality preserves finite groups and exchanges compact groups and discrete groups. Note that any continuous homomorphism from a pro-finite group $A$ into $\mathbb{R}/\mathbb{Z}$ must have open kernel (that is because $A$ has arbitrarily small open subgroups, while there is a neighborhood of $0$ in $\mathbb{R}/\mathbb{Z}$ containing no non-trivial subgroup). In particular, its image is finite, and the homomorphism factors through $\mathbb{Q}/\mathbb{Z}$ endowed with its discrete topology . This is why we can say that the Pontryagin dual of a profinite group $A$ is exactly $\mathrm{Hom}_{C^0}(A,\mathbb{Q}/\mathbb{Z})$ , where $\mathbb{Q}/\mathbb{Z}$ is discrete. And indeed, it seems that the topology on $\widehat{E(\mathbb{Q})}$ in your example is its natural profinite topology, under which it is a compact group.
|abstract-algebra|number-theory|definition|topological-groups|duality-theorems|
1
Number of trees with 10 nodes
How many trees with 10 nodes are there, given that each vertex must have a degree of either 1, 2, or 5? I initially approached the problem by attempting to construct each tree individually, focusing first on maximizing the number of edges with vertices of degree 5, then proceeding similarly for vertices of degree 2. However, this method proved to be time-consuming and lacked certainty regarding the completeness of the solution. Is there a more efficient strategy for solving such a problem?
This is a way to make trial and error easier. Using the Euler characteristic for planar graphs (a tree bounds a single face), the number of edges must always equal $9$, which is evident from your tries. Since the number of edges is known, the sum of degrees over all vertices must be $18$. Let $x_n$ be the number of vertices with degree $n$. We have $$x_1+x_2+x_5=10\\x_1+2x_2+5x_5=18$$ This leads to 3 integral solutions $$(2,8,0)\\(5,4,1)\\(8,0,2)$$ The first and third are easy and yield only one graph each, since no degree 1 node can be anywhere but the end of a chain. As far as I am aware you need trial and error for the second. Edit: OK, so my awareness just increased. For the second one, consider the graph with one long chain: if you take a node off one end and attach it to a free degree one node, the numbers of degree one and degree two nodes don't change. So my idea here is: first we fix a degree one node on one edge of the degree 5 vertex (since this is necessary given the numbers); now, since we can make a side chain of every length $\ge1$, we can enumerate the possibilities by side-chain length.
|graph-theory|combinations|trees|
0
Given that the volume of the parallelepiped with sides $\vec{AB}=(5,5,8)$, $\vec{AC}=(4,-1,4)$ and $\vec{AD}=(-9,-1,a)$ is $89 \text{ u}^3$, find $a$.
Q. Given that the volume of the parallelepiped with sides $\vec{AB}=(5,5,8)$, $\vec{AC}=(4,-1,4)$ and $\vec{AD}=(-9,-1,a)$ is $89 \text{ u}^3$, find $a$. I attempted to solve this question by finding the area of one side then multiplying this by the height, but this resulted in a negative $a^2$, which is clearly incorrect. Any assistance would be greatly appreciated. I have attached my working out below.
To get the area you could have used the cross product of $\vec{AB}$ and $\vec{AC}$ (its length gives you the area). Likewise, the volume can be expressed as the determinant of a $3\times 3$ matrix constructed from the three vectors for the sides involved. But it seems that in your approach you want to find the height (or better, distance) of the point D to the ABC plane, which is arguably the most straightforward approach. You could also use a determinant trick for that purpose, but more straightforward would be to start with the vector $\vec{AD}$ and project out any components parallel to $\vec{AB}$ and $\vec{AC}$ , in order to end up with a vector from D to the plane which is perpendicular to the plane. So it would seem that the "height" you want is obtained as the $\vec{H}$ by these steps: $$ \vec{X} = \vec{AB} - \left(\frac{\vec{AB}\cdot\vec{AC}}{|\vec{AC}|^2}\right) \vec{AC} $$ and $$ \vec{H} = \vec{AD} - \left(\frac{\vec{AD}\cdot\vec{AC}}{|\vec{AC}|^2}\right) \vec{AC} - \left(\frac{\vec{AD}\cdot\vec{X}}{|\vec{X}|^2}\right) \vec{X} $$ (here $\vec{AC}$ and $\vec{X}$ form an orthogonal basis of the plane, so projecting both out of $\vec{AD}$ leaves a vector perpendicular to the plane).
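The determinant route mentioned above can be checked directly: the volume is the absolute value of the determinant of the matrix with rows $\vec{AB}$, $\vec{AC}$, $\vec{AD}$, which is linear in $a$, so $|\det|=89$ reduces to two linear equations (a sketch using exact arithmetic via `Fraction`):

```python
from fractions import Fraction

def det3(m):
    # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def det_for(a):
    return det3([(5, 5, 8), (4, -1, 4), (-9, -1, a)])

# det is linear in a: det(a) = c1*a + c0
c0 = det_for(Fraction(0))
c1 = det_for(Fraction(1)) - c0

# |det| = 89 means det = 89 or det = -89
solutions = [(s - c0) / c1 for s in (89, -89)]
print(solutions)   # only one of the two candidates is an integer
```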
|geometry|
0
Number of trees with 10 nodes
How many trees with 10 nodes are there, given that each vertex must have a degree of either 1, 2, or 5? I initially approached the problem by attempting to construct each tree individually, focusing first on maximizing the number of edges with vertices of degree 5, then proceeding similarly for vertices of degree 2. However, this method proved to be time-consuming and lacked certainty regarding the completeness of the solution. Is there a more efficient strategy for solving such a problem?
The number of vertices is $v=10$ . Hence the number of edges is $e=v-1=9$ . The sum of degrees of all vertices is $2e=18$ then. Now let us see how we can split $18$ into a sum of $10$ summands equal to $1$ , $2$ or $5$ . Consider the number of $5$ s. If it is $3$ or more then there are at most $6$ remaining summands, which is too few. If we have two $5$ s then we have to get $8$ as a sum of $8$ numbers; they are all ones then. Since $18=2\cdot 9$ , if there are no $5$ s then $18=1+1+2\cdot8$ is the only way in this case. If there is exactly one $5$ then we have to get $13$ as a sum of $9$ summands. The only way is $13=7\cdot 1+3\cdot 2$ ... wait, that gives $10$ summands; correctly, $13=5\cdot 1+4\cdot 2$ . So we have three possible sets of vertex degrees: $$5,5,1,1,1,1,1,1,1,1;$$ $$2,2,2,2,2,2,2,2,1,1;$$ $$5,2,2,2,2,1,1,1,1,1.$$ There is only one tree of the first type since the two $5$ s must be adjacent. Also, there is only one tree of the second type since it is easy to see that all $2$ s go in a chain. Now the third type. The $2$ s all hang in chains from the $5$ -vertex (a maximal chain of $2$ s with $1$ s at both ends would be disconnected from the $5$ -vertex), so trees of this type correspond to the partitions of the four $2$ s among the five branches: $(4)$ , $(3,1)$ , $(2,2)$ , $(2,1,1)$ , $(1,1,1,1)$ , giving five trees, and $1+1+5=7$ trees in total.
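The three degree sequences can be recovered mechanically by enumerating nonnegative integer solutions of the two counting equations (vertex count $10$ and degree sum $2e=18$):

```python
# Enumerate nonnegative integer solutions of
#   x1 + x2 + x5 = 10       (vertex count)
#   x1 + 2*x2 + 5*x5 = 18   (degree sum = 2 * edges = 2 * 9)
solutions = [(x1, x2, x5)
             for x5 in range(11)
             for x2 in range(11 - x5)
             for x1 in [10 - x2 - x5]
             if x1 + 2 * x2 + 5 * x5 == 18]
print(solutions)   # [(2, 8, 0), (5, 4, 1), (8, 0, 2)]
```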
|graph-theory|combinations|trees|
0
The Argument Principle used to prove the Fundamental Theorem of Algebra
Greene and Krantz pose the following problem in Function Theory of One Complex Variable , Ch. 5 problem 3: Give another proof of the fundamental theorem of algebra as follows: Let $P(z)$ be a non-constant polynomial. Fix $Q\in \mathbb{C}$. Consider \begin{equation} \frac{1}{2\pi i} \oint_{\partial D(Q,R)} \frac{P'(z)}{P(z)}\,dz. \end{equation} Argue that as $R\to +\infty$, this expression tends to a nonzero constant. I was thinking along these lines: Since we do not know $P(z)$ factors completely, let us write $$ P(z) = \prod_j (z - \alpha_j) \, g(z),$$ where $g(z)$ is an irreducible polynomial. Now $$ \frac{P'(z)}{P(z)} = \sum_k \frac{1}{z-\alpha_k} + \frac{g'(z)}{g(z)}.$$ Each of the terms $1/(z-\alpha_k)$ adds $1$ to the integral expression. As $R \to \infty$, all the $\alpha_k$ are eventually inside $D(Q,R)$, whereas the term $g'(z)/g(z)$ approaches zero, since the denominator has a higher degree. Is the reasoning correct ? Can someone offer a simpler argument ?
How about this. Suppose $f(z)=z^n+a_{n-1}z^{n-1}+\cdots+a_1z+a_0$ . For $0\leq t \leq 1$ denote $ F_t=z^n+t(a_{n-1}z^{n-1}+\cdots+a_1z+a_0) $ Then for any circle $C_r$ of radius $r>1$ we have \begin{align*} |F_t(z)|\geq |z|^n-|t||a_{n-1}z^{n-1}+\cdots+a_1z+a_0| \\ \geq 1 - (|a_{n-1}|+\cdots+|a_1|+|a_0|) \end{align*} Notice that we may assume that $|a_{n-1}|+\cdots+|a_1|+|a_0| < 1$ , as otherwise we may consider $z=cw$ in $z^n+a_{n-1}z^{n-1}+\cdots+a_1z+a_0=0$ , which gives us $ w^n+\frac{a_{n-1}}{c}w^{n-1}+\cdots+\frac{a_1}{c^{n-1}}w+\frac{a_0}{c^n}=0. $ Choose $c$ with $|c|$ large enough so that $|\frac{a_{n-1}}{c}|+\cdots+|\frac{a_1}{c^{n-1}}|+|\frac{a_0}{c^n}| < 1$ . So we have $|F_t(z)|>0$ on $C_r$ . Now consider $ n_t =\frac{1}{2\pi\sqrt{-1}} \int_{C_r} \frac{F'_t}{F_t}dz $ As $F_t$ is nowhere zero on $C_r$ and holomorphic, we have by the argument principle that $n_t \in \mathbb{Z}$ (in particular $n_t$ is the number of zeros of $F_t$ in the interior of $C_r$ ). Now as $n_t$ is continuous in $t$ , it must be constant; in particular $n_0=n_1$ . Since $F_0=z^n$ has exactly $n$ zeros inside $C_r$ , so does $F_1=f$ , and for $n\geq 1$ this means $f$ has a root.
|complex-analysis|
0
Beurling's Theorem and Multipliers of $H^2(\mathbb{D})$
I am currently studying a paper by Harold S. Shapiro on a generalized Beurling theorem on reproducing kernel Hilbert spaces and I cannot seem to understand a certain detail. (Link: https://www.ams.org/journals/tran/1964-110-03/S0002-9947-1964-0159006-5/S0002-9947-1964-0159006-5.pdf ) In the introduction Shapiro states the condition of a subspace $S$ of $H^2(\mathbb{D})$ being invariant as " $f\in S$ implies $\phi f \in S$ for all bounded analytic $\phi$ ". This differs from the usual definition I find in literature, which is " $f\in S$ implies $z f\in S$ ". He argues that the two formulations are equivalent since polynomials span the bounded analytic functions in the topology of bounded pointwise convergence. Neither do I understand why the formulations are equivalent nor his reasoning. I know that the multiplier class for $H^2$ is exactly $$H^\infty=\{\phi:\mathbb{D}\rightarrow \mathbb{C}\mid \phi \text{ analytic and bounded}\}$$ and that polynomials are not dense in $H^\infty$ in the supremum norm, since their uniform closure is only the disc algebra.
When Shapiro says that the polynomials "span" a set $B$ "in" a topology, of course he means that the closure of the set of polynomials in that topology is $B$ . I make this somewhat pedantic point only to emphasize that there is nothing counterintuitive about potentially getting different things when one computes the closure of the same set in different topologies, and that is part of what is going on here. Also going on here is that Shapiro is thinking of analytic polynomials as operators in a context where the topology on $\mathcal{B}(H^2)$ most relevant to his discussion is weaker than e.g. the norm topology (which happens to be the topology you get if you consider polynomials only as complex-valued functions on $\mathbb{D}$ or $\mathbb{T}$ and use the supremum norm). Recall as background context that a closed subspace of a Hilbert space is invariant under a set of operators if and only if it is invariant under the weak operator closure of the algebra generated by that set of operators.
|functional-analysis|complex-analysis|
1
What's wrong with this derivation of the volume of a hemisphere?
My idea to calculate the volume of the hemisphere is to sum up the area of circles of all radii up to the radius of the hemisphere we are interested in: $$\int_0^r \pi x^2 dx$$ This gives $\frac{1}{3}\pi r^3 \neq \frac{2}{3}\pi r^3$ , so deviating by a factor of $2$ from the known formula for the volume of a hemisphere. Can you explain where my reasoning is wrong and why it deviates by a factor of $2$ ?
You're computing the volume of a cone, not a hemisphere! More precisely, your integral gives the volume obtained by rotating the triangle $0 \le y \le x \le r$ about the $x$ -axis. The hemisphere, on the other hand, is obtained by rotating the region $0 \le y \le \sqrt{r^2-x^2}$ , $0 \le x \le r$ about the $x$ -axis, so to get the volume you need to write $$ \int_0^r \pi \Bigl( \sqrt{r^2-x^2} \Bigr)^2 \, dx $$ instead.
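A numerical comparison of the two integrals makes the factor-of-$2$ discrepancy visible (a simple Riemann-sum sketch with $r=1$):

```python
from math import pi

# Riemann-sum comparison of the two integrands for r = 1
r, n = 1.0, 100_000
dx = r / n
# disks of radius x: the cone from the question's integral
cone = sum(pi * (i * dx)**2 * dx for i in range(n))
# disks of radius sqrt(r^2 - x^2): the actual hemisphere
hemisphere = sum(pi * (r**2 - (i * dx)**2) * dx for i in range(n))
print(cone, pi / 3)            # ~1.0472: the cone volume pi*r^3/3
print(hemisphere, 2 * pi / 3)  # ~2.0944: the hemisphere volume 2*pi*r^3/3
```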
|integration|geometry|solution-verification|solid-geometry|
0
Derivative of sigmoid function $\sigma (x) = \frac{1}{1+e^{-x}}$
In my AI textbook there is this paragraph, without any explanation. The sigmoid function is defined as follows $$\sigma (x) = \frac{1}{1+e^{-x}}.$$ This function is easy to differentiate because $$\frac{d\sigma (x)}{d(x)} = \sigma (x)\cdot (1-\sigma(x)).$$ It has been a long time since I've taken differential equations, so could anyone tell me how they got from the first equation to the second?
Just as a side note: we know that $$ 1 - \sigma(x) = \sigma(-x) $$ (see https://en.wikipedia.org/wiki/Sigmoid_function ). Therefore $$ \dfrac{d}{dx} \sigma(x) = \sigma(x) \cdot (1 - \sigma(x)) $$ can be rewritten as $$ \dfrac{d}{dx} \sigma(x) = \sigma(x) \cdot \sigma(-x) $$
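A finite-difference check of the identity (illustrative only):

```python
from math import exp

def sigmoid(x):
    return 1 / (1 + exp(-x))

# central-difference check of d/dx sigmoid = sigmoid(x)*(1 - sigmoid(x)) = sigmoid(x)*sigmoid(-x)
h = 1e-6
for x in (-3.0, -0.5, 0.0, 1.0, 4.0):
    numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
    exact = sigmoid(x) * sigmoid(-x)
    assert abs(numeric - exact) < 1e-8
print("derivative identity verified")
```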
|calculus|derivatives|
0
Eliminating $\theta$ from $x^2+y^2=\frac{x\cos3\theta+y\sin3\theta}{\cos^3\theta}$ and $x^2+y^2=\frac{y\cos3\theta-x\sin3\theta}{\cos^3\theta}$
An interesting problem from a 1913 university entrance examination (Melbourne, Australia): Eliminate $\theta$ from the expressions $$x^{2}+y^{2}=\frac{x \cos{3\theta}+y \sin{3\theta}}{\cos^{3}\theta} \tag1$$ $$x^{2}+y^{2}=\frac{y \cos{3\theta}-x \sin{3\theta}}{\cos^{3}\theta} \tag2$$ Find expressions for $x$ and $y$ in terms of $\theta$ As with many of these historic problems, I'm sure it has been discussed somewhere at length... Solutions involving complex numbers would have been acceptable at the time. (EDIT 20/Feb Australian time) I thought I had a solution but upon review I don't think it works (I found an expression for y, then substituted. It was very messy and I don't think the algebra was totally correct.) so editing the post to say: any solutions or suggestions appreciated. The wording of similar questions on the paper suggests that a "simplified" expression is possible. (Edit 21/Feb) Thanks everyone for the excellent suggestions. Really appreciated. I am wondering if the subs
Squaring and adding gives us $$2(x^2+y^2)^2=\frac{x^2+y^2}{\cos^6\theta}\Rightarrow x^2+y^2=\frac{1}{2}\sec^6\theta$$ Substituting back into the original equations we obtain $x\cos3\theta+y\sin3\theta-\frac{1}{2}\sec^3\theta=0$ and $-x\sin3\theta+y\cos3\theta-\frac{1}{2}\sec^3\theta=0$ . Solving for $x$ and $y$ we have $$x=\frac{\cos3\theta-\sin3\theta}{2\cos^3\theta}$$ and $$y=\frac{\cos3\theta+\sin3\theta}{2\cos^3\theta}$$
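A quick numerical check that the derived parametrization satisfies both original equations:

```python
from math import cos, sin, isclose

def xy(theta):
    c3, s3, c = cos(3 * theta), sin(3 * theta), cos(theta)
    x = (c3 - s3) / (2 * c**3)
    y = (c3 + s3) / (2 * c**3)
    return x, y

for theta in (0.1, 0.4, 1.0, -0.7):
    x, y = xy(theta)
    c3, s3, c = cos(3 * theta), sin(3 * theta), cos(theta)
    # original equations (1) and (2)
    assert isclose(x * x + y * y, (x * c3 + y * s3) / c**3, rel_tol=1e-12)
    assert isclose(x * x + y * y, (y * c3 - x * s3) / c**3, rel_tol=1e-12)
print("parametrization satisfies both equations")
```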
|trigonometry|math-history|
0
Why is this particular substitution made? ($y=tx$)
A problem in Complex Numbers (Andreescu, Andrica) involving a Complex Equation & a "Substitution" technique : Let us solve the equation $z^3 = 18 + 26i$ , where $z = x + yi$ and $x$ , $y$ are integers. Solution given : We can write $(x + yi)^3 = (x + yi)^2 \times (x + yi)$ = $(x^2 − y^2 + 2xyi)(x + yi) = (x^3 − 3xy^2) + (3x^2y − y^3)i = 18 + 26i$ Using the definition of equality of complex numbers, we obtain $x^3 − 3xy^2 = 18$ , $3x^2y − y^3 = 26$ Setting $\color{maroon}{y = tx}$ in the equality $18(3x^2y − y^3) = 26(x^3 − 3xy^2)$ , let us observe that $x \ne 0$ and $y\ne 0$ implies $18(3t − t^3) = 26(1 − 3t^2)$ , which is equivalent to $(3t − 1)(3t^2 − 12t − 13) = 0$ . The only rational solution of this equation is $t=\frac{1}{3}$ ... I'm not able to understand the rationale behind substituting $y=tx$ . I tried substituting $y=mx+c$ after differentiating both relations, but that leads nowhere ( $1=1$ , haha). On thinking about it a bit more, I think it's because the relations are just homogeneous — every term has total degree $3$ in $x$ and $y$ , so only the ratio $t=y/x$ matters.
Every complex number $z=x+iy$ can be placed on the complex plane with Cartesian coordinates $z=(x,y)$ and can be written in polar coordinates as $z=(r,\theta)$ . Here $\tan(\theta)=y/x$ , which directly suggests the substitution in general scenarios. We can check that the two equations do not allow $x=0$ (it would give $0=18$ ) or $y=0$ (it would give $0=26$ ), hence $\tan(\theta)=y/x$ is neither indeterminate nor $0$ nor $\infty$ here. Moreover, it is a real number in general, though in the given context it turns out to be rational. Hence we take the single rational solution of the cubic equation, $t=1/3$ , which means $z=3y+yi$ where $y$ is a non-zero integer. We have to use this to get the overall solution. When we use $z=3y+yi$ with $z^3=18+26i$ , we will get 3 solutions, including the wanted solution $z=3+1i$ , while the other 2 solutions involve unwanted complicated values.
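A small check of the final solution and of the cubic in $t$ (illustrative):

```python
# Verify the claimed solution and the substitution y = t*x with t = 1/3.
z = 3 + 1j
assert abs(z**3 - (18 + 26j)) < 1e-9   # (3 + i)^3 = 18 + 26i

# The cubic in t obtained after setting y = t*x:
# 18*(3t - t^3) = 26*(1 - 3t^2)  <=>  (3t - 1)*(3t^2 - 12t - 13) = 0
t = 1 / 3
assert abs(18 * (3 * t - t**3) - 26 * (1 - 3 * t**2)) < 1e-12
print("t = 1/3 satisfies the cubic, and z = 3 + i solves z^3 = 18 + 26i")
```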
|polynomials|solution-verification|complex-numbers|systems-of-equations|
0
How to find out the general formula of a sequence that includes the sum of the sequence
I am an electrical engineer, currently focused on designing a feedback network. I've attempted to model the engineering problem as a mathematical sequence and sought to derive a general expression for it. Despite numerous attempts, I have been unsuccessful thus far. The sequence is shown below $$ A_n=A_{n-1}+k\sum_{j=0}^{n-1}(D-A_j) $$ Both $k$ and $D$ are constants. A nice person suggested defining $$B_n:=D−A_n$$ which satisfies $$B_n=B_{n−1}−k\sum_{j=0}^{n-1}B_j$$ however I still could not make it work. Can someone help to guide me on how to find the general expression?
If you follow the suggestion of the previous answer you will end with the hypothesis: $$ \frac{B_n}{B_0}=\sum_{r=0}^n \binom{n+r}{2r}(-k)^r.\tag1 $$ This expression can be proved as follows. It is obviously true for $n=0$ . Assume that it is true for all $j\le n$ ; then in view of $$\begin{align} \frac{B_{n+1}}{B_0}&=\frac{B_n}{B_0}-k\sum_{j=0}^n \frac{B_j}{B_0}\\ &=\sum_{r=0}^n \binom{n+r}{2r}(-k)^r-k\sum_{j=0}^{n}\sum_{r=0}^n\binom{j+r}{2r}(-k)^r\tag2\\ &=\sum_{r=0}^n \binom{n+r}{2r}(-k)^r+\sum_{r=0}^n(-k)^{r+1}\sum_{j=0}^{n}\binom{j+r}{2r}\tag3\\ &=\sum_{r=0}^n \binom{n+r}{2r}(-k)^r+\sum_{r=0}^n(-k)^{r+1}\binom{n+r+1}{2r+1}\tag4\\ &=\sum_{r=0}^n \binom{n+r}{2r}(-k)^r+\sum_{r=1}^{n+1}(-k)^{r}\binom{n+r}{2r-1}\tag5\\ &=\sum_{r=0}^{n+1}\left[\binom{n+r}{2r}+\binom{n+r}{2r-1}\right](-k)^r\tag6\\ &=\sum_{r=0}^{n+1}\binom{n+r+1}{2r}(-k)^r\tag7 \end{align}$$ the expression is valid also for $n+1$ . Thus by induction the expression $(1)$ is valid for all $n\ge0$ . Explanations: $(2)$ : induction hypothesis (the inner sum may run to $n$ since $\binom{j+r}{2r}=0$ for $r>j$ ); $(3)$ : interchange of the two sums; $(4)$ : hockey-stick identity $\sum_{j=0}^{n}\binom{j+r}{2r}=\binom{n+r+1}{2r+1}$ ; $(5)$ : index shift $r\to r-1$ ; $(6)$ : merge the sums, the boundary terms being zero ( $\binom{n}{-1}=0$ at $r=0$ and $\binom{2n+1}{2n+2}=0$ at $r=n+1$ ); $(7)$ : Pascal's rule.
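The closed form $(1)$ can be checked against the recurrence with exact rational arithmetic (a sketch; the value of $k$ is an arbitrary choice):

```python
from fractions import Fraction
from math import comb

k = Fraction(1, 10)   # any rational k works for an exact check

# recurrence: B_n = B_{n-1} - k * sum_{j=0}^{n-1} B_j, with B_0 = 1
B = [Fraction(1)]
for n in range(1, 12):
    B.append(B[-1] - k * sum(B))

# closed form: B_n / B_0 = sum_{r=0}^{n} C(n+r, 2r) * (-k)^r
for n in range(12):
    closed = sum(comb(n + r, 2 * r) * (-k)**r for r in range(n + 1))
    assert B[n] == closed
print("closed form matches recurrence for n = 0..11")
```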
|sequences-and-series|
0
What are the conditions for the planes $(ABC)$ and $(DEF)$ to be perpendicular?
Let $A(a_1, a_2, a_3), B(b_1, b_2, b_3), C(c_1, c_2, c_3), D(d_1, d_2, d_3), E(e_1, e_2, e_3), F(f_1, f_2, f_3)$ be points in the 3D coordinate system. What are the conditions for the planes $(ABC)$ and $(DEF)$ to be perpendicular?
I'll assume that $A$ , $B$ and $C$ are pairwise different and that the same holds for $D$ , $E$ and $F$ (otherwise the question is meaningless). We have that $$(ABC)\perp (DEF)\iff \textbf{u}\perp \textbf{v}\iff \textbf{u}\cdot\textbf{v}=0$$ where $\textbf{u}$ and $\textbf{v}$ are non-zero vectors normal to $(ABC)$ and $(DEF)$ respectively. Moreover you should convince yourself that $$\textbf{u}\perp (ABC)\iff \textbf{u}\perp (AB)\text{ and }\textbf{u}\perp (AC)$$ $$\textbf{v}\perp (DEF)\iff \textbf{v}\perp (DE)\text{ and }\textbf{v}\perp (DF)$$ What's a way to construct a non-zero vector that's orthogonal to both $(AB)$ and $(AC)$ ? The answer is that you can take $$\textbf{u}=\overrightarrow{AB}\times\overrightarrow{AC}$$ where $\times$ denotes the cross product of vectors and similarly $$\textbf{v}=\overrightarrow{DE}\times\overrightarrow{DF}.$$ So a characterization of the sort that you ask for is that $$(ABC)\perp (DEF)\iff \left(\overrightarrow{AB}\times\overrightarrow{AC}\right)\cdot\left(\overrightarrow{DE}\times\overrightarrow{DF}\right)=0.$$
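The cross-product criterion is easy to test numerically; a small sketch in pure Python (the sample points below are assumptions for illustration):

```python
# Planes (ABC) and (DEF) are perpendicular iff
# (AB x AC) . (DE x DF) = 0.

def sub(p, q):
    return [p[i] - q[i] for i in range(3)]

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def dot(u, v):
    return sum(u[i] * v[i] for i in range(3))

def planes_perpendicular(A, B, C, D, E, F):
    u = cross(sub(B, A), sub(C, A))   # normal to (ABC)
    v = cross(sub(E, D), sub(F, D))   # normal to (DEF)
    return dot(u, v) == 0

# example: the xy-plane and the xz-plane are perpendicular
xy = ([0, 0, 0], [1, 0, 0], [0, 1, 0])
xz = ([0, 0, 0], [1, 0, 0], [0, 0, 1])
```

With these points `planes_perpendicular(*xy, *xz)` holds, while a plane tested against itself fails (its normal dotted with itself is non-zero).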
|geometry|
1
Does $(\bigcap \{z:z\notin x\} = \emptyset)$ hold for every set $x$ in ZF$-$AF?
I asked about this and the answer is true if we assume AF. As a more interesting question, does it still hold without AF? According to the suggestion in the comment, I should post it as another question. Original question: Prove that under ZF, $\forall x (\bigcap \{z:z\notin x\} = \emptyset)$
Given a set $x$ , let $z=\{y\in x\cup(\bigcup x):y\notin y\}$ . Then $z\notin z$ and $z\notin x\cup(\bigcup x)$ , so we have $z,\{z\}\notin x$ and $z\cap\{z\}=\varnothing$ . Alternatively, filling in the details from Asaf Karagila's answer, let $a=\{z\in\bigcup x:z\notin z\}$ and $b=\{z\in\{a\}\cup(\bigcup x):z\notin z\}$ ; then $\{a\},\{b\}\notin x$ and $\{a\}\cap\{b\}=\varnothing$ .
|elementary-set-theory|set-theory|
0
Convergence of a sequence w.r.t French Railway Metric
So I'm given the following version of the French Railway Metric: $$ d^{'}_{f}(z_1, z_2) = \begin{cases} |z_1 - z_2| & \text{if } \exists \lambda \in \mathbb{R} \text{ such that } z_1 = \lambda z_2,\\ |z_1| + |z_2| & \text{otherwise. } \end{cases} $$ As a part of this question I have verified that this is a metric and I have described the closed balls $B_1(z)$ of radius $1$ with respect to $d^{'}_{f}$ depending on $z \in \mathbb{C}$ as follows: $B_1(z) = \{u \in \mathbb{C} : |z-u| \leq 1\} \cup \{u \in \mathbb{C} : |u| \leq 1 - |z|\}$ If $|z| > 1$ $ \implies \{u \in \mathbb{C} : |u| \leq 1 - |z|\} = \emptyset$ . $\implies B_1(z) = \{u \in \mathbb{C} : |z-u| \leq 1\}$ I.e., $B_1(z)$ is the disc centred at $z$ of radius $1$ in the complex plane. If $|z| \leq 1$ $\implies B_1(z) = \{u \in \mathbb{C} : |z-u| \leq 1\} \cup \{u \in \mathbb{C} : |u| \leq 1 - |z|\}$ . I.e., $B_1(z)$ is the union of all discs centred at the origin of radius $r \leq 1 - |z|$ with the disc centred at $z$ of radius
I may be wrong, but I think your analysis is not correct. In fact for $\vert z\vert\geq \delta>0$ , I think $B_\epsilon(z)=\{\lambda z: \lambda\in \mathbb{R}, \vert \lambda-1\vert<\epsilon/\vert z\vert\}$ for any $\epsilon<\delta$ . In the sequence you look at, the $z_n:=1+\frac{i}{n}$ are pairwise linearly independent over $\mathbb{R}$ . This means that $d_f'(z_n,z_m)= \vert 1+\frac{i}{n}\vert +\vert 1+\frac{i}{m}\vert \geq 2$ for $n\neq m$ , and the sequence is not even Cauchy. Later edit: Maybe I should clarify some things. I think the balls in this case are the union of two disjoint sets, $$ B_r^{(1)}(z):= \{\lambda z: \lambda\in \mathbb{R}, \vert z-\lambda z\vert<r\}, \qquad B_r^{(2)}(z):= \{ z'\text{ not collinear with } z: \vert z\vert+\vert z'\vert<r\}, $$ and $B_r^{(2)}(z)$ is simply empty when $r\leq \vert z\vert$ . When $z=0$ , $B_r^{(1)}(z)=\{0\}$ and $B_r^{(2)}(z)= \{ z': \vert z'\vert<r\}$ .
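The failure of the Cauchy property is easy to confirm numerically; a sketch (the collinearity test via $\operatorname{Im}(z_1\bar z_2)=0$ is an assumed implementation detail):

```python
# French railway metric: |z1 - z2| if z1 = lambda * z2 for some real lambda,
# and |z1| + |z2| otherwise.  z1 and z2 are R-collinear exactly when
# Im(z1 * conj(z2)) = 0 (and 0 is collinear with everything).

def d_f(z1, z2):
    if z1 == 0 or z2 == 0 or (z1 * z2.conjugate()).imag == 0:
        return abs(z1 - z2)
    return abs(z1) + abs(z2)

# distinct terms of z_n = 1 + i/n lie on different rays through the origin,
# so every pairwise distance is |z_n| + |z_m| >= 2
zs = [complex(1, 1 / n) for n in range(1, 6)]
gaps = [d_f(zs[n], zs[m]) for n in range(5) for m in range(5) if n != m]
```

Every pairwise gap comes out at least $2$, while collinear points such as $1$ and $2$ get the ordinary distance $1$.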
|sequences-and-series|general-topology|solution-verification|metric-spaces|
0
Multiplication of semi-orthogonal and diagonal matrices
I have a matrix $M \in \mathbb{R}^{n \times m}$ that satisfies $M^{\top}M=D$ where $D$ is a diagonal matrix. This means that the columns of $M$ are mutually orthogonal to each other. Consider now another diagonal matrix $A \in \mathbb{R}^{n \times n}$ . Is there anything I can say about the product $B=M^{\top}AM$ ? Under what additional hypotheses, if any, is $B$ also diagonal?
The matrices are related via the Hadamard product $(\odot)$ and the all-ones vector $(\tt1)$ $$\eqalign{ \def\qiq{\quad\implies\quad} \def\op{\operatorname} \def\d{\op{diag}} \def\D{\op{Diag}} D&=M^TIM &\qiq d = \d(M^TIM) \,&\equiv\, (M\odot M)^T{\tt1} \\ B&=M^TAM &\qiq b = \d(M^TAM) \,&\equiv\, (M\odot M)^Ta \\ }$$ The problem statement asserts that $\,A = \D(a)\;{\rm and}\;D = \D(d),\,$ however it is not necessarily true that $\,B = \D(b)$ The simplest solution that makes $B$ diagonal is to make $M$ a diagonal matrix.
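A quick numeric illustration (the particular $M$ and $A$ below are assumed examples): with mutually orthogonal columns, $M^\top M$ is diagonal, yet $M^\top A M$ generally is not unless $A$ is a scalar multiple of the identity:

```python
# M has mutually orthogonal columns, so M^T M is diagonal,
# but M^T A M need not be diagonal for a general diagonal A.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def is_diagonal(X):
    return all(X[i][j] == 0 for i in range(len(X))
               for j in range(len(X[0])) if i != j)

M = [[1, 1], [1, -1], [0, 0]]          # columns are orthogonal
A = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]  # generic diagonal A
cI = [[2, 0, 0], [0, 2, 0], [0, 0, 2]] # scalar multiple of the identity

D = matmul(transpose(M), M)            # diagonal, as expected
B_general = matmul(transpose(M), matmul(A, M))   # not diagonal
B_scalar = matmul(transpose(M), matmul(cI, M))   # diagonal again
```

Here `B_general` works out to `[[3, -1], [-1, 3]]`, confirming that a generic diagonal $A$ breaks diagonality.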
|linear-algebra|matrices|
0
Does $(\bigcap \{z:z\notin x\} = \emptyset)$ hold for every set $x$ in ZF$-$AF?
I asked about this and the answer is true if we assume AF. As a more interesting question, does it still hold without AF? According to the suggestion in the comment, I should post it as another question. Original question: Prove that under ZF, $\forall x (\bigcap \{z:z\notin x\} = \emptyset)$
Trivially, since $\{\{a\}\mid a=a\}$ is a proper class, pick $a\neq b$ such that $\{a\}$ and $\{b\}$ are both not in $x$ . Then $\bigcap\{y\mid y\notin x\}\subseteq\{a\}\cap\{b\}=\varnothing$ . The choice of $a$ and $b$ can be made "more canonical" either by constructing these directly from $x$ as some subsets whose singletons are not in $x$ , or, perhaps more simply, the least two ordinals that can be chosen to work for $x$ , as the class of ordinals is a proper class.
|elementary-set-theory|set-theory|
1
Equality in Hardy's inequality via Hölder's
I'm working on Exercise 3.14 in Rudin's Real and Complex Analysis. I was able to answer part (a): that for real $p$ satisfying $1<p<\infty$ (I saw a related question here which uses the Fubini theorem, but Rudin doesn't cover Fubini until Chapter 8.)
I would like to suggest this approach. Let $C_C$ be the set of continuous compactly supported functions and $C_C^+$ the subset of non-negative functions of $C_C$ . For all $f\in C_C^+$ , it's easy (integrating by parts) to find that $$\label{a}\tag{1}\int_0^{+\infty}F^p(x)dx=\dfrac{p}{p-1}\int_0^{+\infty}f(x)F^{p-1}(x)dx$$ This is where we can use the Hölder inequality, but for $f\in C_C^+$ only... However, this inequality also holds for any non-negative function in $L^p((0,+\infty))$ . Indeed, $C_C$ is dense in $L^p$ (Rudin's Theorem 3.14), so for any non-negative function $f$ in $L^p$ there exists a sequence $(g_n)$ in $C_C$ that converges towards $f$ in the $p$ -norm. As we can extract a subsequence which converges pointwise almost everywhere, we can assume that $(g_n)$ converges almost everywhere. Also, we can assume that $(g_n)$ is non-negative (if not, consider $\sup(g_n(x),0)$ , which is also continuous and compactly supported). The idea is to use the monotone convergence theorem : can
|real-analysis|inequality|
0
If $f:\mathbb R\to\mathbb R$ is such that $f>0$ a.e., do $\inf_{\mathbb R} f >0$ and $\sup_{\mathbb R} f>0$?
Let $f:\mathbb R\to\mathbb R$ be a continuous function such that $f>0$ for almost every $x\in\mathbb R$ . Can one deduce from this that both $\inf_{x\in\mathbb R} f(x) >0$ and $\sup_{x\in\mathbb R} f(x)>0$ ? If yes, how can it be shown? It seems reasonable to me but I do not know how to approach the proof. Any hint?
Let $\,f(x)=\mathrm{e}^{-x^2},\,$ then $f(x)>0$ , for all $x\in\mathbb R,\,$ and $\inf_{x\in\mathbb R}f(x)=0$ .
|real-analysis|calculus|supremum-and-infimum|
0
Finding out the induced map via factoring through canonical map $A \to S^{-1}A$ of localization
I was trying to prove that $\varinjlim_{U \ni x} A(U) \cong A_{\mathfrak{p}}$ where $U$ are basic open sets containing $x = \mathfrak{p} \in \operatorname{Spec} A$ , and $A(U) = A_f$ where $U = X_f$ . $A$ is a commutative ring. I know that the restriction maps from $A(X_f)$ to $A(X_g)$ are given by $a/f^k \mapsto au^k/g^{nk}$ if $g^n = uf$ for some $u \in A$ , some $n >0$ . But in this case I need to know explicitly how the maps from $A_f$ to $A_{\mathfrak{p}}$ are given. I have constructed maps $\alpha_f: A_f \to A_{\mathfrak{p}}$ by factoring the canonical map $A \to A_{\mathfrak{p}}$ through the canonical map $A \to A_f$ , and showed that they commute with the direct system, thus we have some unique commuting homomorphism $\alpha : \varinjlim_{U\ni x}A(U) \to A_{\mathfrak{p}}$ . To show surjectivity of this homomorphism, I need to know explicitly the map $\alpha_f$ , which I presumed to be $a/f^k \mapsto a/f^k$ , but I am seriously doubting this. This map is well-defined and is a ho
That's the correct definition of $\alpha_f$ , yes. Really $A_f\to A_{\mathfrak{p}}$ is just localisation of $A_f$ at $\mathfrak{p}A_f$ , though it is worth noting the homomorphisms in question need not be injective (this requires $A\setminus\mathfrak{p}$ to have no zero divisors, which might not be true) and thus they don't deserve to be called 'inclusions'. On one hand you can use the universal property of localisation to conclude that some factoring $A_f\to A_{\mathfrak{p}}$ must exist; on the other hand, you can use the proof that the universal property holds ! or directly inspect that $\alpha_f$ so defined works, to see that it works. Like, you don't need to conclude $\alpha_f$ is what it is, per se, you can just give it and see it works. We must send $1/f$ in $A_f$ to the inverse of $f$ in $A_{\mathfrak{p}}$ . Oh, but we just call that $1/f$ as well, so the map really is just (induced by) $a/f\to a/f$ . To see $A_{\mathfrak{p}}\cong\varinjlim_{f\notin\mathfrak{p}}A_f\cong\varinjli
|commutative-algebra|sheaf-theory|affine-schemes|
1
Prove that $a^2+b^2+c^2 \geq ab(a+b+\sqrt{ab})+cb(c+b+\sqrt{cb})+ ac(a+c+\sqrt{ac} )$
the question Let $a,b,c$ be positive numbers such that $a+b+c=1$ . Prove that $a^2+b^2+c^2 \geq ab(a+b+\sqrt{ab})+cb(c+b+\sqrt{cb})+ ac(a+c+\sqrt{ac} )$ . the idea After I put some values in for $a$ , $b$ , and $c$ I came to the conclusion that equality only happens if $a=b=c=\frac{1}{3}$ , which gave me the idea that I should use the inequality of means: $$\min \leq \frac{2ab}{a+b} \leq \sqrt{ab} \leq \frac{a+b}{2} \leq \sqrt{\frac{a^2+b^2}{2}} \leq \max$$ I came to the conclusion that we could try to show that $\frac{a^2+b^2}{2} \geq ab(a+b+\sqrt{ab})$ . We already know by the inequality of means that $\frac{a^2+b^2}{2} \geq ab\cdot 1= ab\cdot(a+b+c)$ . The problem would be solved if we showed that $a+b+c \geq a+b+\sqrt{ab}$ , which I don't think I can demonstrate. Maybe we can show an inequality using the inequality of means again... I don't know. Hope one of you can help me! Thank you!
Multiply LHS by $a+b+c=1$ . You get after expansion: $$\sum a^3 + \sum(a^2b+ab^2)\ge \sum (a^2b+ab^2) +\sum a^{3/2}b^{3/2},$$ which is true due to Muirhead or just $x^2+y^2+z^2\ge xy+yz+zx.$
|inequality|a.m.-g.m.-inequality|
1
Prove that $a^2+b^2+c^2 \geq ab(a+b+\sqrt{ab})+cb(c+b+\sqrt{cb})+ ac(a+c+\sqrt{ac} )$
the question Let $a,b,c$ be positive numbers such that $a+b+c=1$ . Prove that $a^2+b^2+c^2 \geq ab(a+b+\sqrt{ab})+cb(c+b+\sqrt{cb})+ ac(a+c+\sqrt{ac} )$ . the idea After I put some values in for $a$ , $b$ , and $c$ I came to the conclusion that equality only happens if $a=b=c=\frac{1}{3}$ , which gave me the idea that I should use the inequality of means: $$\min \leq \frac{2ab}{a+b} \leq \sqrt{ab} \leq \frac{a+b}{2} \leq \sqrt{\frac{a^2+b^2}{2}} \leq \max$$ I came to the conclusion that we could try to show that $\frac{a^2+b^2}{2} \geq ab(a+b+\sqrt{ab})$ . We already know by the inequality of means that $\frac{a^2+b^2}{2} \geq ab\cdot 1= ab\cdot(a+b+c)$ . The problem would be solved if we showed that $a+b+c \geq a+b+\sqrt{ab}$ , which I don't think I can demonstrate. Maybe we can show an inequality using the inequality of means again... I don't know. Hope one of you can help me! Thank you!
Using cyclic (not symmetric) summations: $$\begin{align} \sum a^2 &= (\sum a)(\sum a^2) \\ &= \sum a^3 + \sum a^2b+\sum ab^2 \\ &\ge \sum (ab)^{3/2} + \sum a^2b+\sum ab^2 \\ & = \sum ab(a+b+\sqrt{ab}) \end{align}$$ where the inequality is true by AM-GM on $(a^3+b^3)/2$
|inequality|a.m.-g.m.-inequality|
0
Random walk with positive drift
I'm working on something where the following claim, if true, would be quite helpful: Let $S$ be a random variable distributed over $\{-1,0\}\cup\mathbb N$ with $\mathbb E[S]>0$ . Consider a one dimensional random walk with step size $S$ , starting at $1$ . Is it true that, with nonzero probability, the random walk never reaches 0? Intuitively this claim seems true (as the random walk has positive drift) but I'm not sure how to show it. If the claim does not hold in general, are there other conditions we can impose on the distribution of $S$ that would make the claim true?
To verify the general case, you can use a simple argument based on the Markov property of the random walk: Let $(S_n)_{n\in\mathbb{N}}$ be an i.i.d. sequence with marginal distribution $\mathcal{L}(S)$ . By the law of large numbers (and $\mathbb{E}(S)>0$ ) there exists $N\in\mathbb{N}$ such that $\mathbb{P}(\sum_{k=1}^n S_k \geq 1\ \forall n\geq N)>0$ , since almost surely $\lim_{n\to\infty}\frac1n \sum_{k=1}^n S_k=\mathbb{E}(S)>0$ Since $N$ is finite, there must be some $0\leq n_0\leq N$ such that \begin{equation}\mathbb{P}\Bigg(\sum_{k=1}^{n_0} S_k =\min\Big\{\sum_{k=1}^n S_k, n\in\mathbb{N}_0\Big\}\Bigg)>0.\end{equation} But that forces the event $\{\sum_{k=n_0+1}^n S_k\geq 0$ for all $n>n_0\}$ to have strictly positive probability. Due to the fact that $(S_n)_{n>n_0}$ has the same distribution as $(S_n)_{n\in\mathbb{N}}$ , this proves your claim. In fact you can show that given your step-size distribution (bounded from below by -1) the probability you ask for is the unique root of $x^2=g(
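A simulation sketch supports the claim (the specific step law, $P(S=-1)=0.4$, $P(S=2)=0.6$, so $\mathbb E[S]=0.8>0$, is an assumed example, and the finite horizon is a truncation of "never"):

```python
# Estimate the probability that a walk started at 1 with i.i.d. steps
# S in {-1, 2}, P(S = -1) = 0.4, P(S = 2) = 0.6 (so E[S] = 0.8 > 0),
# never reaches 0 within a long horizon.
import random

random.seed(0)

def survives(horizon=400):
    pos = 1
    for _ in range(horizon):
        pos += 2 if random.random() < 0.6 else -1
        if pos <= 0:
            return False
    return True

trials = 2000
frac = sum(survives() for _ in range(trials)) / trials
```

For this particular law a standard first-step analysis gives a survival probability of roughly $0.54$, and the empirical fraction lands close to that, i.e. strictly between $0$ and $1$.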
|probability|random-walk|
0
A confusion about Lie group of two dim non abelian real Lie algebra
In the book by Fulton & Harris (chapter 10, section 1) : Finally, in the real case things are simpler: when we exponentiate the adjoint representation as above, the Lie group we arrive at is already simply connected, and so is the unique connected real Lie group with this Lie algebra. Just to give background : It has been shown that there exists a unique non-abelian two-dimensional Lie algebra with basis $\{X,Y \}$ such that $[X,Y]= X.$ They calculate the adjoint form by exponentiating the Lie algebra to find $G_0 = \{ \begin{pmatrix} a & b\\ 0 & 1\\ \end{pmatrix} : a \ne 0 \} $ . Is it not true that over the real numbers $G_0$ would be homeomorphic to $\mathbb R \times S^1?$ I am not able to make sense of what exactly they are saying in the above quoted paragraph.
Your formula for $G_0$ has some issues. First, the quantity $b$ has not been specified. You seem to have assumed $b$ takes values in $S^1$ , but it should instead take values in $\mathbb R$ . Making that correction we get $$G_0 = \left\{ \begin{pmatrix} a & b\\ 0 & 1\\ \end{pmatrix} : a \in \mathbb R - \{0\}, \, b \in \mathbb R \right\} $$ This leads to the second issue. With that correction it follows that $G_0$ is homeomorphic to $(\mathbb R - \{0\}) \times \mathbb R$ , which is not even connected, and so certainly not simply connected. But this can also be corrected, by instead specifying $a \in \mathbb R_+$ : $$G_0 = \left\{ \begin{pmatrix} a & b\\ 0 & 1\\ \end{pmatrix} : a \in \mathbb R_+, \, b \in \mathbb R \right\} $$ And now $G_0$ is homeomorphic to $\mathbb R_+ \times \mathbb R$ which is indeed simply connected. One last comment: it's true that $\mathbb R - \{0\}$ and $\mathbb R_+$ have isomorphic Lie algebras, but only the second one is simply connected. It's also true that $
|lie-groups|lie-algebras|
1
Alternative method for solving $|3x-9|+|7x-8|-||20x-13|-3x|-3=0$
Recently I came across the following equation: $$|3x-9|+|7x-8|-||20x-13|-3x|-3=0.$$ An obvious approach would be to divide it into cases and solve for each interval for each mod involved. Another approach that came to my mind was to solve it graphically, however that was equally as laborious if not more. It would be great help if anyone can suggest any other innovative method to help me solve this equation.
I would suggest this method: To avoid the branching-cases hell, you can go algebraic, i.e. replace $|7x-8|=s\ (7x-8)$ with $s\in\{-1,+1\}$ . You do that for every absolute value encountered (that should give $4$ of them), leading to $f(s_1,s_2,\cdots)\ x+g(s_1,s_2,\cdots)=0$ which is straightforward to solve. This gives you $2^4$ candidate solutions (by plugging $\pm 1$ for each $s_i$ ), but this method introduces many spurious solutions (in fact the $s_i$ are not independent), so you have to test every candidate solution against the original equation. Though since everything leads to rather simple calculations, this can be done quickly (preferably automated with a computer if you can) and you are sure not to miss any solution. Here is the detailed solve, please try to do it for yourself first $\begin{align}|3x-9|&+|7x-8|-||20x-13|-3x|-3\\&=(3x-9)s_1+(7x-8)s_2-((20x-13)s_3-3x)s_4-3\\&=(3s_1+7s_2-20s_3s_4+3s_4)x+(-9s_1-8s_2+13s_3s_4-3)\end{align}$ $$x=\frac{9s_1+8s_2-13s_3s_4+3}{3s_1+7s_2-20s_3s_4+3s_4}$$
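The whole procedure is mechanical enough to automate; a sketch:

```python
# Enumerate s_i in {-1, +1}^4, solve the resulting linear equation, and
# keep only the candidates that satisfy the original absolute-value equation.
from itertools import product

def f(x):
    return abs(3 * x - 9) + abs(7 * x - 8) - abs(abs(20 * x - 13) - 3 * x) - 3

solutions = set()
for s1, s2, s3, s4 in product((-1, 1), repeat=4):
    a = 3 * s1 + 7 * s2 - 20 * s3 * s4 + 3 * s4   # coefficient of x
    b = -9 * s1 - 8 * s2 + 13 * s3 * s4 - 3       # constant term
    if a != 0:
        x = -b / a
        if abs(f(x)) < 1e-9:                      # discard spurious candidates
            solutions.add(round(x, 9))
```

For instance $x=1$ survives the filter (all four absolute values evaluate to $6$, $1$, $7$ and $4$, and $6+1-4-3=0$), while the spurious sign combinations are rejected.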
|algebra-precalculus|
1
How to use Laurent expansion of a function into this form?
How can I compute the Laurent expansion of the function $$f(z)=\frac{1}{(2z+1)(z-2)}$$ when $\frac{1}{2}<|z|<2$ , with this $\frac{1}{z^2}$ term?
So this is very simple; first you compute the partial fraction decomposition: $$\frac{1}{\left(2z+1\right)\left(z-2\right)} = \frac{A}{2z+1}+\frac{B}{z-2}$$ which gives you $A = -2/5$ and $B=1/5$ , so: $$\frac{1}{\left(2z+1\right)\left(z-2\right)}=-\frac{2}{5}\cdot\frac{1}{2z+1}+\frac{1}{5}\cdot\frac{1}{z-2}$$ and then you expand each term as a geometric series valid on the annulus $\frac12<|z|<2$ . Since $|2z|>1$ there, $$-\frac{2}{5}\cdot\frac{1}{2z+1}=-\frac{2}{5}\cdot\frac{1}{2z}\cdot\frac{1}{1+\frac{1}{2z}}=-\frac{1}{5}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{2^{n}}\,z^{-n-1},$$ which supplies the $\frac1z,\frac1{z^2},\dots$ terms, and since $|z/2|<1$ , $$\frac{1}{5}\cdot\frac{1}{z-2}=-\frac{1}{10}\cdot\frac{1}{1-\frac{z}{2}}=-\frac{1}{10}\sum_{n=0}^{\infty}\left(\frac{z}{2}\right)^{n}.$$ Altogether $$f(z)=-\sum_{n=0}^{\infty}\frac{(-1)^{n}}{5\cdot 2^{n}}\,z^{-n-1}-\sum_{n=0}^{\infty}\frac{z^{n}}{5\cdot 2^{n+1}}.$$ (Expanding both terms in powers of $z$ would only be valid for $|z|<\frac12$ , not on the given annulus.)
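On the annulus $\frac12<|z|<2$ the $\frac{1}{2z+1}$ term must be expanded in powers of $\frac1z$ (since $|2z|>1$ there), which yields the Laurent series $f(z)=-\sum_{n\ge0}\frac{(-1)^n}{5\cdot2^n}z^{-n-1}-\sum_{n\ge0}\frac{z^n}{5\cdot2^{n+1}}$; a quick numeric sanity check at a sample point (the point and truncation order are arbitrary choices):

```python
# Compare f(z) = 1/((2z+1)(z-2)) with the truncated Laurent series
# on the annulus 1/2 < |z| < 2.
z = 0.9 + 0.4j                      # sample point with 1/2 < |z| < 2
f = 1 / ((2 * z + 1) * (z - 2))

N = 60                              # truncation order
principal = sum(-((-1) ** n) / (5 * 2 ** n) * z ** (-(n + 1)) for n in range(N))
regular = sum(-(z ** n) / (5 * 2 ** (n + 1)) for n in range(N))
approx = principal + regular
```

Both tails decay geometrically on the annulus, so the truncated sum matches $f(z)$ to high precision.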
|complex-analysis|laurent-series|
0
Complex extension of the Stone - von Neumann Theorem?
I have two questions related to an extension of the Stone - von Neumann Theorem : (1) Are there unitary groups with uncountably many elements indexed over the complex plane? (2) Can the Stone - von Neumann Theorem be formulated over $\mathbb {C}$ instead of over $\mathbb {R}$ , to establish a one-to-one correspondence between self-adjoint operators on a Hilbert space $\mathcal {H}$ and one-parameter families of continuous unitary operators $\{U_{z}\}_{z\in \mathbb {C}}$ : (1) $\forall z_{o}\in \mathbb {C} ,\ \psi \in {\mathcal {H}}:\ \lim _{z\to z_o}U_{z}(\psi )=U_{z_o}(\psi), $ (2) $\forall z_1,z_2\in \mathbb {C} :\ U_{z_1+z_2}=U_{z_1}U_{z_2}.$ I'm interested whether its possible to derive a one-to-one correspondence between a (complex) parameter strongly continuous unitary groups of operators $U :\mathcal{H}\to\mathcal{H}$ and linear operators (not necessarily self-adjoint/Hermitian!) $A_U :\mathcal{H}\to\mathcal{H}$ by $$A_U\ \psi = \lim_{|z|\to 0}{\frac{U(z)\psi - \psi}{iz}}\\ U_A(
(I do not understand well where the Stone-Von Neumann Theorem enters the issue, maybe the right theorem is the Stone Theorem , not the SvN one?) Suppose that a strongly continuous one-parameter group of unitaries $\mathbb{C} \ni z \mapsto U_z$ exists in the complex Hilbert space $\cal H$ with the properties you described. In particular $A_U : D(A_U) \to {\cal H}$ exists with domain $D(A_U)\subset {\cal H}$ defined as you indicated. Take $\phi \in \mathbb{R}$ . Then $\mathbb{R} \ni t \mapsto U_{e^{i\phi}t}$ is a strongly continuous one-parameter group of unitaries in standard sense and every $\psi\in D(A_U)$ , per definition, belongs to the domain of its selfadjoint generator $A_\phi$ according to the Stone theorem. As a consequence, computing the $t$ -derivative of both sides of $$U_{e^{i\phi}t} \psi = e^{itA_\phi}\psi$$ we have $$\tag{1} e^{i\phi}A_U\psi = A_\phi \psi\:, \quad \forall \psi \in D(A_U)\:.$$ In particular $$e^{-i\phi'} \langle \psi, A_{\phi'}\psi\rangle = \langle \psi, A
|quantum-mechanics|
0
Is there a spelling mistake or am I missing something
Here, $[ \cdot]$ is the floor function $\lfloor \cdot \rfloor$ , and $N \in \mathbb{N}$ . Where did $\frac{[Nx]}{N} + \frac{1}{2N}$ come from, and how does $x$ differ from it by $\frac{1}{2N}$ ? Shouldn't it be $\frac{1}{N}$ if $x$ is an integer?
The double inequality $$\frac{\lfloor Nx\rfloor}N \leqslant x \leqslant \frac{\lfloor Nx\rfloor}N + \frac 1N$$ describes an interval containing the $x$ value: $$x\in\left [ \frac{\lfloor Nx\rfloor}N , \frac{\lfloor Nx\rfloor}N + \frac 1N \right]$$ The interval starts at $\frac{\lfloor Nx\rfloor}N$ and its length is $\frac 1N$ , so the value $$\frac{\lfloor Nx\rfloor}N + \frac 1{2N}$$ is just a midpoint of the interval. And any $x$ in the interval differs from the interval's midpoint by not more than a half of the interval's length: $$\left|x - \left(\frac{\lfloor Nx\rfloor}N + \frac 1{2N}\right)\right| \leqslant \frac 1{2N}$$
|algebra-precalculus|inequality|ceiling-and-floor-functions|elementary-functions|rounding-unit|
0
Vizing's Theorem Proof, Kempe Chain
I'm trying to follow the proof of Vizing's Theorem given here https://math.uchicago.edu/~may/REU2015/REUPapers/Green.pdf Unfortunately, I soon ran into trouble with this line: If $c_k$ is absent at $v$ , then we can reassign colours $c_i$ to $\{v, w_i\}$ for $i ∈ [k]$ and we are done. The problem is that ever frustrating when not understood phrase: "we are done". Suppose we take the very simplest case, where $c_0$ is already absent at $v$ . Then we can change $\{v,w_0\}$ to color $c_0$ . But then... So what? All we know about $c_0$ is that it was chosen so that it is absent at $w_0$ . So how does changing this edge to this color mean 'we are done'? It would help if I knew what this proof was trying to establish to be 'done'. They're assuming the existence of a counterexample, so of course this has to be a proof by contradiction. They take a minimal counterexample $G$ , and then consider an edge $\{v,w_0\}$ that if removed would make the new graph no longer a counterexample; so the natu
Although sometimes there's a good reason to write proofs by minimal counterexample, in this proof, it's used gratuitously. Instead, read the proof as an induction on the number of edges. Initially, we take any edge $e = \{v, w_0\}$ whatsoever. By the induction hypothesis, Vizing's theorem applies to $G-e$ , so we take the edge coloring of $G-e$ that this implies. The rest of the proof is explaining how to extend this edge coloring to an edge coloring of $G$ with $\Delta(G)+1$ colors. If we achieve this, then we are done - in the sense that we have proven that Vizing's theorem also applies to $G$ , which completes the induction step. By induction, the theorem applies to graphs with any number of edges. The actual proof is not phrased as a proof by induction, but it's also trying to edge-color $G$ by starting from an edge-coloring of $G-e$ , which is why I say that it might as well have been a proof by induction. We win by edge-coloring $G$ with $\Delta(G)+1$ colors, since in that case w
|graph-theory|coloring|
1
Improper integral with parameter. For what value of a does the integral converge?
I just can't figure it out... Please explain: $ \int_{0}^{\pi}\cos(x)\,\sin^{a}(x)\;\mathrm{d}x $
Let $u = \sin(x)$ , then $du=\cos(x)\, dx$ . $$\int_c^d \cos(x) \sin^a(x)\, dx = \int u^a\, du=\frac{u^{a+1}}{a+1}= \frac{(\sin(x))^{a+1}}{a+1}\Bigg|_c^d.$$ So the integral does not converge when $a=-1$ (the antiderivative becomes $\log|\sin(x)|$ , which blows up at the endpoints), and it converges for all $a>-1$ , since then $\sin^{a+1}(x)\to 0$ as $x\to 0^+$ and $x\to\pi^-$ . For $a<-1$ , the antiderivative blows up at the boundaries, namely $x=0$ and $x=\pi$ , and the integral diverges. So the integral converges exactly for $a \in (-1,\infty)$ .
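A crude midpoint-rule check is possible (the midpoint rule is chosen deliberately, since it never evaluates the integrand at the singular endpoints; the grid size is an arbitrary choice):

```python
# For a > -1 the improper integral converges; over (0, pi) it is 0 by the
# antiderivative sin^{a+1}(x)/(a+1), and over (0, pi/2) it equals 1/(a+1).
from math import sin, cos, pi

def integrate(a, lo, hi, m=200000):
    h = (hi - lo) / m
    # midpoint rule: sample at cell centers, avoiding x = 0 and x = pi
    return h * sum(cos(lo + (i + 0.5) * h) * sin(lo + (i + 0.5) * h) ** a
                   for i in range(m))
```

With $a=-0.5$ the integrand is singular at both endpoints yet integrable: the full integral comes out near $0$ by the symmetry about $\pi/2$, and the half-interval integral approaches $1/(a+1)=2$.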
|convergence-divergence|improper-integrals|
1
If a sequence is bounded, does its weak limit happen to be bounded?
Let $(H, \|\cdot\|)$ be a Hilbert space and let $(f_n)_n$ be a bounded sequence in $H$ , i.e. there exists $C>0$ such that $$\|f_n\|\le C \quad\forall n.$$ Let $f$ denote the weak limit of $f_n$ . Can one conclude that $f$ is bounded in $H$ ? If it is true, is it bounded with the same constant $C$ ? I am trying to find a proof (or a counterexample) on this matter in my textbook, but I cannot find one. Does anyone have a reference? Or better, could anyone please help me in proving the result?
If you’re asking whether $\|f\|<\infty$ , the answer is trivially a yes, because norms are by definition finite. For the second question, yes. Notice that by Cauchy-Schwarz, \begin{align} |\langle f_n,f\rangle|&\leq \|f_n\|\cdot\|f\|\leq C\|f\|. \end{align} Since $f_n\to f$ weakly, we can take the limit $n\to\infty$ on the LHS to get $\|f\|^2=\langle f,f\rangle\leq C\|f\|$ . Hence, $\|f\|\leq C$ (really this is a two case argument: if $\|f\|=0$ , the inequality is trivial, and otherwise we divide both sides of the previous inequality by $\|f\|$ ). Edit: @Bruno B’s comment is a good one, so I think I might as well mention the generalization to Banach spaces. Theorem. Let $X$ be any Banach space. If $\{x_n\}_{n=1}^{\infty}$ is a sequence in $X$ which converges weakly to some $x\in X$ , then $\{x_n\}_{n=1}^{\infty}$ is norm-bounded and $\|x\|\leq \liminf\limits_{n\to\infty}\|x_n\|$ . If $\{\xi_n\}_{n=1}^{\infty}$ is a sequence in $X^*$ which converges in the weak* topology to some $\xi\in X^*$ ,
|real-analysis|calculus|sequences-and-series|functional-analysis|
1
The Jacobian of $g(\vec{x}) = f(A\vec{x} + \vec{b})\vec{x}$.
Let $A \in \mathbb{R}^{n \times n}$ and $f: \mathbb{R}^{n} \to \mathbb{R}$ . I can compute Jacobians of simple functions, but this question obliterated me, and I have spent days trying to understand it. Within the solution they derive that $[D(\vec{g}(\vec{x}))]_{jk} = f(\mathbf{A}\vec{\mathbf{x}} + \mathbf{b})\frac{\partial \vec{x}_j}{\partial x_k} + \vec{x}_j \frac{\partial f(\mathbf{A}\vec{\mathbf{x}} + \mathbf{b})}{\partial x_k}$ This is fine as it is just the chain rule, but where they lose me is when they change to a summation: $f(\mathbf{A}\vec{\mathbf{x}} + \mathbf{b})\frac{\partial \vec{x}_j}{\partial x_k} + \vec{x}_j \sum_{\ell=1}^{n} \frac{\partial f(\mathbf{A}\vec{\mathbf{x}} + \mathbf{b})}{\partial (\mathbf{A}\vec{\mathbf{x}} + \mathbf{b})_{\ell}} \cdot \frac{\partial (\mathbf{A}\vec{\mathbf{x}} + \mathbf{b})_{\ell}}{\partial x_k}$ I've tried coming up with a simple example using the 1-norm of an $A \in \mathbb{R}^{2 \times 2}$ , and the accompanying $x$ and $b$ vectors, but it doesn't
Let the vector $\mathbf{y} = A \mathbf{x} + \mathbf{b}$ , then $ g(\mathbf{x} ) = f(\mathbf{y}) \mathbf{x} $ . The Jacobian $J$ is defined element by element as having its $ij$ -th entry as follows: $ J_{ij} = \dfrac{\partial g_i}{\partial x_j} $ . Now $g_i = f(\mathbf{y} ) x_i $ . Therefore, $ J_{ij} = \dfrac{\partial [f(\mathbf{y}) x_i]} {\partial x_j} = x_i \dfrac{ \partial f(\mathbf{y})}{\partial x_j} + f(\mathbf{y}) \delta_{ij} $ . Now using the chain rule, $\dfrac{\partial f(\mathbf{y})} {\partial x_j} = \displaystyle \sum_{k=1}^n \dfrac{\partial f(\mathbf{y})} {\partial y_k} \left( \dfrac{\partial y_k}{\partial x_j} \right)$ . Since $\mathbf{y} = A \mathbf{x} + b $ , then $\dfrac{\partial y_k}{\partial x_j} = A_{kj} $ , and $\dfrac{\partial f(\mathbf{y})}{\partial y_k} $ is the $k$ -th element of the gradient of $f$ . Therefore, $ J_{ij} = x_i \left( (\nabla f)^T A_{j} \right) + f(\mathbf{y} ) \delta_{ij} $ , where $\delta_{ij} =1$ if $i = j$ and $0$ otherwise, and $A_{j}$ is the $j$ -th column of $A$ .
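The formula $J_{ij} = x_i\,((\nabla f)^\top A)_j + f(\mathbf y)\,\delta_{ij}$ can be validated against finite differences; a sketch with an assumed concrete $f$ (here $f(\mathbf y)=\sum_k y_k^2$, so $\nabla f(\mathbf y)=2\mathbf y$) and arbitrary choices of $A$, $\mathbf b$, $\mathbf x$:

```python
# Numerical check of  J_ij = x_i * (grad_f(y)^T A)_j + f(y) * delta_ij
# for g(x) = f(Ax + b) x, using f(y) = sum(y_k^2) as a concrete example.

def f(y):
    return sum(v * v for v in y)

def grad_f(y):                      # grad f(y) = 2y for this f
    return [2 * v for v in y]

A = [[1.0, 2.0], [3.0, -1.0]]
b = [0.5, -0.25]
x = [0.7, -1.3]

def y_of(x):
    return [sum(A[k][j] * x[j] for j in range(2)) + b[k] for k in range(2)]

def g(x):
    fy = f(y_of(x))
    return [fy * xi for xi in x]

# analytic Jacobian from the formula
y = y_of(x)
gf = grad_f(y)
J = [[x[i] * sum(gf[k] * A[k][j] for k in range(2)) + (f(y) if i == j else 0.0)
      for j in range(2)] for i in range(2)]

# central finite-difference Jacobian
h = 1e-6
J_fd = [[0.0] * 2 for _ in range(2)]
for j in range(2):
    xp = list(x); xp[j] += h
    xm = list(x); xm[j] -= h
    gp, gm = g(xp), g(xm)
    for i in range(2):
        J_fd[i][j] = (gp[i] - gm[i]) / (2 * h)
```

The two Jacobians agree entrywise to numerical precision, which is a useful way to debug this kind of derivation.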
|linear-algebra|multivariable-calculus|optimization|jacobian|
0
Determine all the integers $x$ that have the property that $\sqrt{x^2+7x+21}$ is a rational number.
the question Determine all the integers $x$ that have the property that $\sqrt{x^2+7x+21}$ is a rational number. the idea The number would be rational only if $x^2+7x+21$ were a square number, which means that $x^2+7x+21=k^2 \Rightarrow k^2-x^2=(x+k)(k-x)=7(x+3)$ . From here I thought of using divisibility, but got nothing useful. I don't know how to move forward. Hope one of you can help me! Thank you!
While there is a solution by completing a square, I want to present another way. We have that $x^2+7x+21$ is a square of an integer. Let $x>0$ . Then $$(x+3)^2<x^2+7x+21<(x+5)^2.$$ It means that $$x^2+7x+21=(x+4)^2.$$ This gives $x= 5$ . Now let $x\le-9$ . Then $$(x+4)^2<x^2+7x+21<(x+2)^2.$$ So we have that $$x^2+7x+21=(x+3)^2.$$ This gives $x= -12$ . Now we just try all $x$ from $\{-8,-7,-6,-5,-4,-3,-2,-1,0\}$ and find that $x=-3$ and $x=-4$ work, too. So the answer is $x\in\{-12,-4,-3,5\}$ .
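A brute-force pass over a window of integers confirms the list (the bounding argument above shows nothing outside such a window can work):

```python
# Find all integers x in [-100, 100] with x^2 + 7x + 21 a perfect square.
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

solutions = [x for x in range(-100, 101) if is_square(x * x + 7 * x + 21)]
```

The search returns exactly $\{-12,-4,-3,5\}$, matching the case analysis.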
|rational-numbers|square-numbers|
1
What is the limit (as a distribution) of this Fourier series, similar to that of a Dirac Delta?
I know that a representation of the Dirac delta function is $\sum_{n=-\infty}^\infty e^{inx}=2 \pi \delta(x)$ . I am trying to figure out if the series with positive $n$ only $\sum_{n=0}^\infty e^{inx}$ has a simple limit as a distribution, but I cannot find any sources. The real part should be $\pi \delta(x)+\frac{1}{2}$ , but what about the imaginary one?
I'll assume you are thinking in $\mathbb R/2\pi\mathbb Z$ so that your equations are correct, and there is no need for a Dirac comb. First off, your sum converges in the sense of distributions. Indeed, taking a (periodic) test function $\phi$ , the bracket amounts to summing its positive index Fourier coefficients $\phi_n = \int_0^{2\pi}\phi(x)e^{inx}dx$ . By smoothness, they decay fast, i.e. $\phi_n = O(n^{-k})$ for every positive integer $k$ (since $\phi\in\mathcal C^k$ for all $k$ ), so the sum converges. One way of doing it is to look at the limit when $r\to1^-$ (Abel summation) of: $$ f_r(x) = \sum_{n=0}^\infty r^ne^{inx} $$ You recognise the geometric series: $$ f_r(x) = \frac1{1-re^{ix}} $$ Naively setting $r=1$ you get: $$ f(x) = \frac1{1-e^{ix}} $$ The issue is that it is not a well defined distribution due to the simple pole at $x=0$ . You view it as the distributional limit, which is your original sum by Abel's theorem. You can explicitly calculate the limit bracket for a (periodic) te
|fourier-series|distribution-theory|dirac-delta|
0
Example of finding the lower semi-continuous convex function whose subdifferential contains the support of an optimal transport
Related to this post, but slightly altered. Let $\mu$ uniform on $\{0\}\times [0,1]$ , $\nu$ uniform on $\{-1,1\}\times [0,1]$ . Consider the Kantorovich optimal transport problem of $\mu$ to $\nu$ with the squared distance cost function. Intuitively, the optimal transport should split the mass at each point of $\mu$ and transport it along a straight line, giving half the mass to the $-1$ part and the other half to the $1$ part of $\nu$ (see drawing). According to the Knott-Smith Optimality Criterion, $\pi$ being optimal is equivalent to the support of $\pi$ being contained in the graph of the subdifferential of a convex lower semi-continuous function $\varphi$ . Now the support of $\pi$ is given by: $\{((0,a),(\pm 1, a))|a\in[0,1]\}$ . So now I want to find a $\varphi$ convex l.s.c. such that this support is in its subdifferential. My lecture notes claim that $\varphi(x_1,x_2)=|x_1|$ should work, however I get that the graph of its subdifferential is: $\{((0,x_2),(a,0))|x_2\in\mathbb{
I think that your function $\varphi$ is wrong, $$ \varphi(x_1, x_2) = |x_1| + \frac12 x_2^2 $$ should work.
|measure-theory|optimal-transport|
1
Why are there two different ways to solve the same problem but two different answers?
I was talking to a professor, and we both were stumped by the problem: $\sqrt{\sqrt{x^{2}}-1}=2$ My professor's thought process was that you would immediately square both sides to get $\sqrt{x^{2}}-1=4$ , then solve to get $x= \pm 5$ . My thought process was that you could simplify the $\sqrt{x^{2}}$ first, which would leave the problem as $\sqrt{x-1}=2$ , and then you would solve and be left with $x=5$ . This way of solving suggests that $x$ cannot be negative $5$ , because the $x^{2}$ becomes $x$ . When we plugged the two equations $\sqrt{\sqrt{x^{2}}-1}=y$ and $y=\sqrt{x-1}$ into the calculator, it showed us that the two problems are not the same: one had two solutions, the other had only one. Essentially, how is it that the same problem can have $\pm$ as an answer while the other can only be $+$ ? Technically, aren't both problems the same, just one more simplified than the other? I feel like this has something to do with a deeper level of PEMDAS
The solution of your professor is correct; the answer is $+5$ and $-5$ . If one translates this problem into precise mathematics, it amounts to asking “For which real numbers $x\in\mathbb R$ is $\sqrt{\sqrt{x^{2}}-1}=2$ ”. One can directly see that this is fulfilled by $\pm 5$ . You are losing the second solution when you assume that $\sqrt{x^2}=x$ . This is not the case; instead $\sqrt{x^2}=|x|$ . So doing your approach correctly leads to $$ \sqrt{\sqrt{x^{2}}-1}=2~~\Leftrightarrow~~\sqrt{|x|-1}=2~~\Leftrightarrow~~|x|-1=4~~\Leftrightarrow~~|x|=5 $$
|algebra-precalculus|
0
Why are there two different ways to solve the same problem but two different answers?
I was talking to a professor, and we were both stumped by the problem: $\sqrt{\sqrt{x^{2}}-1}=2$ My professor's thought process was that you would immediately square both sides to get $\sqrt{x^{2}}-1=4$ , then solve to get $x= \pm 5$ . My thought process was that you could simplify the $\sqrt{x^{2}}$ first, which would then leave the problem as $\sqrt{x-1}=2$ , and then you would solve and be left with $x=5$ . This way of solving suggests that $x$ cannot be negative $5$ , because the $\sqrt{x^{2}}$ becomes $x$ . When we plugged these two equations into the calculator, $\sqrt{\sqrt{x^{2}}-1}=y$ and $y=\sqrt{x-1}$ , the calculator showed us that the two problems were not the same: one had two solutions, the other had only one. Essentially, how is it that the same problem can have $\pm$ as an answer while the other can be only $+$ ? Technically, aren't both problems the same? Just one is more simplified than the other. I feel like this has something to do with a deeper level of PEMDAS.
The idea is that $\sqrt{x^2}$ does not simplify to $x$ but actually to $|x|$ . Then when $-5$ is plugged into the expression the result is $$\sqrt{\sqrt{(-5)^2} - 1} = \sqrt{\sqrt{25} - 1} = \sqrt{5 - 1} = 2$$ A final consideration is that usually each squaring or absolute-value operation will add one extra solution to the problem. In this case, since you have one squaring operation (the square roots don't decrease the number of solutions), it is expected that two solutions exist for the problem. The one you found is one of them, but your teacher's solution shows both.
|algebra-precalculus|
0
Convergence of a Sequence by definition. Approach issue
Definition : So according to the definition, a sequence $(x_n)$ is convergent to some $x$ if: For any $\varepsilon > 0$ there exists some $K(\epsilon) \in \mathbb{N}$ such that for all $n \in \mathbb{N}$ , if $n \ge K(\epsilon)$ , we have $|x_n - x| < \varepsilon$ . a) Now my approach of showing if the given sequence is convergent (works for fairly obvious ones) is to show that for an arbitrary given $\varepsilon > 0$ there is a $K(\epsilon)$ . Then, based on the choice of $n \ge K(\epsilon)$ , derive the conclusion. b) In books I have seen the other approach: for an arbitrary given $\varepsilon > 0$ , starting from the conclusion $|x_n - x| < \varepsilon$ , make simplifications to get a $f(n) < \varepsilon$ , then claim that this will be the case only if $K(\epsilon)$ is taken such that $g(K) < \varepsilon$ , and then for all $n \ge K(\epsilon)$ the statement will hold true. I am wondering, is my approach totally wrong? I can't see it right now because in my head I think that I am following the definition. I am asking this because I have got some resu
You said in a comment that "for certain values of $\varepsilon$ I am in no position to continue my argument since I never considered the sign of the right hand side expression in the first place". I think writing a concrete example of $K(\varepsilon)$ should help. In your approach, you can take $K(\varepsilon)=\lceil\frac{1}{\varepsilon}\rceil$ as follows. Proof: For $\varepsilon > 0$ , take $K(\varepsilon)=\lceil\frac{1}{\varepsilon}\rceil$ . Then for any $n \ge K(\epsilon)$ we would have $n > \frac{1}{\epsilon} - 1$ , i.e. $n + 1 > \frac{1}{\varepsilon}$ , so $\frac{1}{n+1} < \varepsilon$ . Since $\varepsilon > 0$ was arbitrary, the sequence, by the definition, converges to $0$ . $\ \square$
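The sequence itself is cut off in the question, so as an assumption take $x_n = \frac{1}{n+1} \to 0$ (the sequence the bound above fits). A short script of my own can then confirm that this choice of $K(\varepsilon)$ works:

```python
import math

def K(eps):
    # the threshold proposed in the answer: K(eps) = ceil(1/eps)
    return math.ceil(1 / eps)

def check(eps, horizon=10_000):
    """Verify |x_n - 0| < eps for every n >= K(eps) up to a horizon,
    for the (assumed) sequence x_n = 1/(n+1)."""
    k = K(eps)
    return all(1 / (n + 1) < eps for n in range(k, horizon))

print(all(check(eps) for eps in (0.5, 0.1, 0.01, 0.003)))   # True
```

The check only runs to a finite horizon, but the algebra in the proof ($n \ge \lceil 1/\varepsilon\rceil \ge 1/\varepsilon$ gives $\frac{1}{n+1} < \varepsilon$) covers all larger $n$.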
|real-analysis|sequences-and-series|convergence-divergence|
0
Determine all the integers $x$ that have the property that $\sqrt{x^2+7x+21}$ is a rational number.
the question Determine all the integers $x$ that have the property that $\sqrt{x^2+7x+21}$ is a rational number. the idea The number would be rational only if $x^2+7x+21$ were a square number, which means that $x^2+7x+21=k^2 \implies k^2-x^2=(x+k)(k-x)=7(x+3)$ . From here I thought of going with divisibility, but got nothing useful. I don't know how to go forward. I hope one of you can help me! Thank you!
Just use that there cannot be a perfect square strictly between two consecutive perfect squares (otherwise they would not be consecutive...) Then, for your polynomial $p(x)=x^2+7x+21$ it is easy to see that for any integer $x>5$ you will have: $$ (x+3)^2 < p(x) < (x+4)^2 $$ so in that case $p(x)$ itself cannot be a perfect square. With some minus-signs or other trickery you'll find a similar rule for negative $x$ -values below a certain negative number (I leave the details to you!) That leaves just a small region of possible candidates around $0$ to check by hand. NB: by consecutive we mean next neighbors in the sequence of perfect squares, not that they have to be true neighbors of course (that happens only for $0$ and $1$ !) PS: I see @Aig beat me on this!
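Since the squeeze confines any solutions to a bounded window, a brute-force scan (my own sketch; the window $[-100, 100]$ is generous) recovers the full solution set:

```python
import math

def is_perfect_square(m):
    # for integer m, the radicand is a rational square iff it is a
    # perfect square
    if m < 0:
        return False
    r = math.isqrt(m)
    return r * r == m

# The squeeze (x+3)^2 < p(x) < (x+4)^2 rules out large |x|,
# so a scan over a modest window finds every solution.
solutions = [x for x in range(-100, 101)
             if is_perfect_square(x * x + 7 * x + 21)]
print(solutions)   # [-12, -4, -3, 5]
```

These agree with the values found by factoring $35$ in the other answer: for example $(-12)^2 + 7\cdot(-12) + 21 = 81 = 9^2$.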
|rational-numbers|square-numbers|
0
Using Matrix Calculus in Backpropagation derivation. Rules of order of matmul and transposition when taking derivatives in different layouts.
I define a neural network with $L$ layers ( $L-1$ hidden layers). The forward pass is as follows: $$ \mathbf{a}^{(l)} = f(\mathbf{W}^{(l)}\mathbf{a}^{(l-1)}+\mathbf{b}^{(l)}) $$ Where $l \in [0,L]$ and $\mathbf{a}^{(0)}=\mathbf{x}$ . We have a dataset $D = \{(\mathbf{X}_i, \mathbf{Y}_i) \mid \mathbf{X}_i = \mathbf{x} \in \mathbf{X}, \mathbf{Y}_i = \mathbf{y} \in \mathbf{Y}, i = 1, \ldots, N\}$ , where $N$ is the size of the dataset and $\mathbf{x}$ and $\mathbf{y}$ are the $i$ th input and label of the dataset. We have a cost/loss function that for example is defined as half the mean square error. $N_L$ is the size of the output layer. For a single data point (SGD) we have: $$ C(\mathbf{a}^{(L)},\mathbf{y}) = \frac{1}{N_{L}}\sum_{i=1}^{N_{L}}{\frac{1}{2}(a^{(L)}_{i}-y_i)^2} $$ We want to minimise the cost function to train the network. $$ \min_{\mathbf{W^{(l)}}, \mathbf{b^{(l)}} \space \forall{l \in [0,L]}} C(\mathbf{a}^{(L)},\mathbf{y}) $$ We derive backpropagation algorithm without m
I don't think it matters, because in backpropagation you need to update the weights, so the dimensions of each derivative must match the dimensions of the corresponding weight matrix. So, practically, the column/row layout of the derivatives is fixed: once you write out exactly what the derivative values are, you are constrained by the order of operations of the forward propagation.
|calculus|matrices|matrix-calculus|machine-learning|neural-networks|
0
Why doesn't substituting work here?
I know that: $$\int \cos x \, dx = \sin x +C$$ Substitute $ax+b$ for $x$ : $$\int \cos(ax+b) \, dx = \sin(ax+b) +C$$ but according to my book: $$\int \cos(ax+b) \, dx = \frac{1}{a}\sin(ax+b) +C$$ Why doesn't substituting work here?
You need to substitute $ax + b$ for $x$ not only in the argument of $\cos$ , but also in the differential: $\int \cos(ax + b)\, d\color{red}{(ax + b)} = \sin(ax + b) + C$ . This is equivalent to your book's formula, as $\int \cos(ax + b)\, d(ax + b) = a \cdot \int \cos(ax + b)\, dx$ .
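As a numerical sanity check (my own sketch, with arbitrary sample values $a = 3$, $b = 0.7$), differentiate both candidate antiderivatives with a central finite difference: the book's $\frac{1}{a}\sin(ax+b)$ differentiates back to $\cos(ax+b)$, while the naive $\sin(ax+b)$ gives $a\cos(ax+b)$.

```python
import math

def num_deriv(f, x, h=1e-6):
    # central finite difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

a, b = 3.0, 0.7

F_book = lambda x: math.sin(a * x + b) / a   # book's antiderivative
F_naive = lambda x: math.sin(a * x + b)      # "just substitute" guess

x = 0.4
ok_book = abs(num_deriv(F_book, x) - math.cos(a * x + b)) < 1e-6
ok_naive = abs(num_deriv(F_naive, x) - a * math.cos(a * x + b)) < 1e-6
print(ok_book, ok_naive)   # True True: the naive guess differentiates
                           # back to a*cos(ax+b), not cos(ax+b)
```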
|substitution|
1
Attempt at creating a formula relating debt, payments and interest
I tried writing down a formula relating a given debt and interest to the periodic payments and number of payments. So let's say someone starts off with a debt of $D$ . The periodic interest is $r$ (for example, $0.6\% =0.006$ each month). Let's call the periodic payment $p$ , and the number of payments $N$ . I based my equation off of the fact that the total payments must equal the original debt plus the added interest. The total payments are clearly $Np$ . The added interest each month is $r$ times the remaining debt, so after one period we would add $$rD$$ after another, we would add $$(D+rD-p)r=rD+r^2D-pr$$ next we would add $$(D+rD+r^2D-pr-p)r=rD+r^2D+r^3D-pr^2-pr$$ then $$(D+rD+r^2D+r^3D-pr^2-pr-p)r=rD+r^2D+r^3D+r^4D-pr^3-pr^2-pr$$ and so on, for each of the $N$ periods. Adding all of these up, along with the initial debt $D$ will give us the total that needs to be paid. Noticing the similar terms in each expression, we get $$\text{Total} = D(Nr+(N-1)r^2+(N-2)r^3+\dots +r^N)-p
I think you will find it easier to work backwards: let $D_n$ be the amount outstanding with $n$ months remaining, starting with $D_0=0$ . Since you make a payment at the end of the month, $D_{n-1}=D_{n}(1+r)-p$ , i.e. $D_{n}=\frac{D_{n-1}+p}{(1+r)}$ . For given $n$ and $r$ , clearly $D_n$ and $p$ are proportional to each other, so using $\frac{D_n}{p}=\frac{\frac{D_{n-1}}{p}+1}{(1+r)}$ just gives a tangle of $r$ 's to deal with: $\frac{D_1}{p}=\frac{1}{1+r}$ $\frac{D_2}{p}=\frac{1+(1+r)}{(1+r)^2}$ $\frac{D_3}{p}=\frac{1+(1+r)+(1+r)^2}{(1+r)^3}$ and you can see a geometric series appearing in the numerator. So you can then show, and prove by induction, that $\frac{D_n}{p} =\frac{1}{r}\left(1-\frac1{(1+r)^n}\right)$ , i.e. $D_n =\frac{p}{r}\left(1-\frac1{(1+r)^n}\right)$ and $p= r\,D_n \left(1+\frac{1}{(1+r)^n-1}\right)$ and, in answer to your question, $$n=\dfrac{\log(p)-\log(p-rD_n)}{\log(1+r)}.$$
|algebra-precalculus|summation|economics|compound-interest|
1
Minimizing symmetric convex functions of eigenvalues
I am stuck with the following problem. Prove that the optimal value to the SDP \begin{align} \text{minimize} \quad &\operatorname{tr}(V) \end{align} \begin{align} \text{subject to} \quad &\begin{bmatrix} X & U \\ U & I \end{bmatrix} \succeq 0, \quad \begin{bmatrix} U & X \\ X & V \end{bmatrix} \succeq 0 \end{align} is $f(X) = \sum_{i} \lambda_i(X)^{3/2}$ , where $X \in \mathbf{S}^{m}_{+}$ and $U, V \in \mathbf{S}^{m}$ I have tried decomposing $X = Q \Lambda Q^T$ and using Schur complement to re-write the constraints but to no avail. My initial idea was to show that the problem is equivalent to the problem \begin{align} \text{minimize} \ \ \operatorname{tr}(V) \end{align} \begin{align} \text{subject to } \quad f(x) \le \operatorname{tr}(V) \end{align} I am writing this question after trying to solve the problem for 6 hours. I have no idea on how to approach this problem. Any help is greatly appreciated. I really appreciate any help you can provide.
If $X$ is positive definite, by Schur complement, the optimization problem is equivalent to \begin{align*} &\min_{U, V \in S^{m}} \quad \mathrm{tr}(V)\\ &\mathrm{subject \ to}\quad X \succeq U^2, \quad U \succ 0, \quad V \succeq XU^{-1}X. \end{align*} From $X \succeq U^2$ and $U \succ 0$ , we have $ U^{-1} \succeq X^{-1/2}$ . We have $$\mathrm{tr}(V) \ge \mathrm{tr}(XU^{-1}X) = \mathrm{tr}(U^{-1}X^2) \ge \mathrm{tr}(X^{-1/2}X^2) = \mathrm{tr}(X^{3/2}).$$ On the other hand, $(U, V) = (X^{1/2}, X^{3/2})$ is feasible with $\mathrm{tr}(V) = \mathrm{tr}(X^{3/2})$ . Thus, the minimum is $\mathrm{tr}(X^{3/2})$ . Some thoughts if $X$ is not positive definite: consider the following optimization problem (for fixed $\delta > 0$ ) \begin{align*} &\min_{U, V \in S^{m}} \quad \mathrm{tr}(V)\\ &\mathrm{subject \ to}\quad \begin{bmatrix} X + \delta I & U \\ U & I \end{bmatrix} \succeq 0, \quad \begin{bmatrix} U & X + \delta I \\ X + \delta I & V \end{bmatrix} \succeq 0. \end{align*} Similarly, the min
|optimization|convex-optimization|nonlinear-optimization|semidefinite-programming|schur-complement|
0
Attempt at creating a formula relating debt, payments and interest
I tried writing down a formula relating a given debt and interest to the periodic payments and number of payments. So let's say someone starts off with a debt of $D$ . The periodic interest is $r$ (for example, $0.6\% =0.006$ each month). Let's call the periodic payment $p$ , and the number of payments $N$ . I based my equation off of the fact that the total payments must equal the original debt plus the added interest. The total payments are clearly $Np$ . The added interest each month is $r$ times the remaining debt, so after one period we would add $$rD$$ after another, we would add $$(D+rD-p)r=rD+r^2D-pr$$ next we would add $$(D+rD+r^2D-pr-p)r=rD+r^2D+r^3D-pr^2-pr$$ then $$(D+rD+r^2D+r^3D-pr^2-pr-p)r=rD+r^2D+r^3D+r^4D-pr^3-pr^2-pr$$ and so on, for each of the $N$ periods. Adding all of these up, along with the initial debt $D$ will give us the total that needs to be paid. Noticing the similar terms in each expression, we get $$\text{Total} = D(Nr+(N-1)r^2+(N-2)r^3+\dots +r^N)-p
The fundamental thing is that, whichever point in time you value the money at (the start or the end), the debts and payments must balance out. You can actually simplify the computations a lot by using a multiplication factor $m=1+r$ . Suppose we take the final payment as the point where the time value of money is to be equated; then $Dm^n = P(m^{n-1} + m^{n-2} +... + 1)$ and using the G.P. formula $Dm^n = P\cdot\dfrac{m^n-1}{m-1}$ . Actually your total payment valued at the final payment time is simply $Dm^n$ , or $D$ at the time the debt was incurred, although the total of the payments spread over the amortisation period was $Pn$ , because of the time value of money . If you want to find how many months it will take to liquidate the debt, you will have to use logarithms, or use a spreadsheet to see how many months it takes to liquidate the debt.
|algebra-precalculus|summation|economics|compound-interest|
0
Proving solutions of $y''+p(x)y'+q(x)y=0$ to be linearly independent
When studying Elementary Differential Equations by William, I found trouble understanding Theorem 5.1.5. It says the two solutions are linearly independent iff their Wronskian is never zero, but I think they can still be linearly independent even if the Wronskian is zero for some $x$ . In the proof, when $W(x_0)=0$ , Theorem 5.1.4 is used to show $W\equiv 0$ . This theorem is Abel's identity . It seems flawless, until I saw this answer . So $p(x)$ must be continuous on $(a,b)$ , but it is not as long as $W(x_0)=0$ , so we shouldn't use Abel's identity. This is because $$ y_1''+p(x)y_1'+q(x)y_1=0,\quad y_2''+p(x)y_2'+q(x)y_2=0 $$ $$ y_1''y_2+p(x)y_1'y_2+q(x)y_1y_2=0,\quad y_2''y_1+p(x)y_2'y_1+q(x)y_2y_1=0 $$ so $$ p(x) = \frac{y_1''y_2-y_2''y_1}{y_2'y_1-y_1'y_2} = \frac{y_1''y_2-y_2''y_1}{W[y_1,y_2](x)}, $$ and $p(x)$ is undefined at $x_0\in(a,b)$ . Also, I noticed all this by considering an example: for two solutions $y_1=1+x$ and $y_2=1+x^2$ , $W(x) = x^2+2x-1$ , on interval $(0,\infty)$ . So alt
Two functions $f(x)$ and $g(x)$ are linearly independent if and only if neither is a constant multiple of the other. That's directly from the definition of linear independence: there do not exist constants $a_1$ and $a_2$ , not both $0$ , such that $a_1 f(x) + a_2 g(x) = 0$ for all $x$ in the domain of the functions. So, in your example, $1 + x$ and $1 + x^2$ are obviously linearly independent because their ratio is not constant. What about Abel's identity? That gives a formula for the Wronskian of two solutions of the differential equation $y'' + p(x) y' + q(x) y = 0$ on an interval where $p(x)$ and $q(x)$ are defined and continuous, and in particular shows that in that interval the Wronskian is either always $0$ or never $0$ . So if you find two linearly independent functions (such as your $1+x$ and $1+x^2$ ) whose Wronskian is $0$ at some point but not elsewhere, it tells you that something goes wrong with the coefficients of a second-order homogeneous linear differential equation t
|ordinary-differential-equations|wronskian|
1
Is there a name for this vector space?
I've seen in some books of Analysis the notation $\mathcal{L}(\mathcal{L}(V), V) \cong \mathcal{L}_2(V)$ and I can't find anything about this vector space. What is it? Is there a name for it? Also, if I wrote this wrong, I ask for someone to correct me.
I do think it's the other way round. $\mathcal{L}(V, \mathcal{L}(V, W)) \cong \mathcal{L}_2(V, W)$ for two vector spaces $V$ and $W$ is meant to say that the LHS, the space of linear maps mapping elements of $V$ to linear maps $V \to W$ , can be identified with the RHS, the space of bilinear maps $V \times V \to W$ . This is not that unexpected: consider a bilinear map $F : (u, v) \in V \times V \mapsto F(u,v) \in W$ . Then we can choose to either view it as is, i.e. as a simple bilinear map, or we can look at the set of maps $$F(u, \cdot) : v \in V \mapsto F(u, v) \in W$$ for $u \in V$ , and notice that $F$ being bilinear implies that the assignment $u \mapsto F(u, \cdot)$ is itself linear, hence: $$(u \in V \mapsto (v \in V \mapsto F(u,v) \in W)) \in \mathcal{L}(V, \mathcal{L}(V, W))$$ Moreover, it shouldn't be too hard to see that the other direction can be dealt with in the same type of way, taking a linear map $G : u \mapsto G(u) : V \to W$ and "making" it a bilinear map. When $V
|linear-algebra|linear-transformations|terminology|
1
Determine all the integers $x$ that have the property that $\sqrt{x^2+7x+21}$ is a rational number.
the question Determine all the integers $x$ that have the property that $\sqrt{x^2+7x+21}$ is a rational number. the idea The number would be rational only if $x^2+7x+21$ were a square number, which means that $x^2+7x+21=k^2 \implies k^2-x^2=(x+k)(k-x)=7(x+3)$ . From here I thought of going with divisibility, but got nothing useful. I don't know how to go forward. I hope one of you can help me! Thank you!
For some integer $y$ , we have $(2y)^2=4(x^2+7x+21)=(2x+7)^2+35$ . Factorizing the difference of squares then gives four possible values for $\{2y+2x+7,2y-2x-7\}$ : namely $\{\pm35,\pm1\}$ and $\{\pm7,\pm5\}$ . Subtraction to eliminate $y$ correspondingly yields four values for $4x+14$ , namely $\pm34$ and $\pm2$ . Hence the possible values for $x$ are $-12,-4,-3,$ and $5$ .
|rational-numbers|square-numbers|
0
Number of paths with vertices of the same color in a binary tree with random coloring of vertices
Question from an old exam: In a full binary tree of height $n$ , each vertex is randomly colored either white or black. A path is considered good if it goes from any vertex to a leaf and all vertices on this path are of the same color. Let $X$ denote the number of good paths. For example, for a tree of height $n = 3$ in the figure, we have $X=8$ ( $4$ good paths consisting of $1$ vertex, $3$ good paths consisting of $2$ vertices, and $1$ good path consisting of $3$ vertices). a) calculate the expected value of $X$ . b) Calculate the variance of $X$ . It is enough to provide a formula that is the sum of a polynomial number of terms in compact form (you do not have to calculate this sum). My solution to a) Let $f(n)$ denote the number of good paths for a tree of height $n$ . Then $f(n) = f(n-1) + f(n-1) + \frac{1}{4}(2+1+1+0) = 2f(n-1) + 1$ , because the number of paths in a tree of height n is the number of paths in both subtrees of height $n-1$ plus either zero, one, or two new paths c
Let $X_n$ be the number of good paths and $Y_n$ the number of good paths starting from the tree's root. $L_n$ and $R_n$ are the events where the root is the same color as the left and right child. \begin{align} X_n &= X^{(\ell)}_{n-1}+X^{(r)}_{n-1}+Y_n\\ Y_n &= \mathbf 1_{L_n}Y^{(\ell)}_{n-1}+ \mathbf 1_{R_n}Y^{(r)}_{n-1} \end{align} Computing $\mathbb E\left[Y_n\right]$ : using the second equation, you can see that, $$\mathbb E\left[Y_n\right] = \mathbb E\left[Y_{n-1}\right] = \mathbb E\left[Y_1\right] = 1$$ Computing $\mathbb E\left[X_n\right]$ : using the first equation, $$\mathbb E\left[X_n\right] = 2\mathbb E\left[X_{n-1}\right] + \mathbb E\left[Y_n\right] = 2\mathbb E\left[X_{n-1}\right] + 1$$ a simple induction proves that $$\mathbb E\left[X_n\right] = 2^n - 1$$ Computing $\mathbb E\left[Y_n^2\right]$ : using the second equation, $$Y_n^2 = \mathbf 1_{L_n}\left(Y^{(\ell)}_{n-1}\right)^2 + \mathbf 1_{R_n}\left(Y^{(r)}_{n-1}\right)^2 + 2\times\mathbf 1_{L_n}\mathbf 1_{R_n}\left(Y^{
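The closed form $\mathbb{E}[X_n] = 2^n - 1$ can be checked exactly by brute force for small heights (my own verification sketch; vertices are stored in heap order, and the helper name is illustrative):

```python
from itertools import product

def expected_good_paths(h):
    """Exact E[X] for a full binary tree of height h, by enumerating
    all 2-colorings; vertices are in heap order (root = 1, children
    of v are 2v and 2v+1)."""
    n = 2 ** h - 1                       # number of vertices
    leaves = range(2 ** (h - 1), 2 ** h)
    total = 0
    for colors in product((0, 1), repeat=n):
        for leaf in leaves:
            c = colors[leaf - 1]
            v = leaf
            # walk upward while the color still matches; each matching
            # ancestor contributes one good path ending at this leaf
            while v >= 1 and colors[v - 1] == c:
                total += 1
                v //= 2
    return total / 2 ** n

print([expected_good_paths(h) for h in (1, 2, 3)])   # [1.0, 3.0, 7.0]
```

The enumeration agrees with the induction: $1, 3, 7 = 2^n - 1$ for $n = 1, 2, 3$.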
|probability|expected-value|variance|
1
If $X_n=Y_n$ in distribution and $X_n \to 0$ almost surely, then does $Y_n \to 0$ almost surely ? (For particular $X_n$ and $Y_n$)
Of course in general the answer to my question is NO, however I am interested in two sequences of random variables that have very specific structure. Let $a_n,b_n$ be deterministic (real-valued) sequences and let $\xi_n$ be an i.i.d sequence of random variables. Then let $$ X_n=a_n+b_n\xi_n, \quad Y_n=a_n+b_n\xi,$$ where $\xi=\xi_n$ in distribution. If we suppose $X_n \to 0$ almost surely, can we conclude that $Y_n \to 0$ almost surely? An idea of a possible method of proof (or a counter example) would be much appreciated. Note we also assume that the sequence $b_n$ is non-degenerate i.e not the zero sequence (otherwise the question would not be very interesting).
Let $X_n\to0$ a.s. Then, for every $\varepsilon >0$ we have $P(|X_n|>\varepsilon\textrm{ i.o.})=0$ . Let $\varepsilon>0$ . Suppose $P(|Y_n|>\varepsilon\textrm{ i.o.})>0$ . Then $$\begin{aligned}P(|Y_n|>\varepsilon\textrm{ i.o.})>0&\stackrel{\textrm{B-C I}}{\implies} \sum_nP(|Y_n|>\varepsilon)=\infty\\ &\stackrel{\xi_n\sim\xi,\forall n}\implies \sum_nP(|X_n|>\varepsilon)=\infty\\ &\stackrel{\textrm{B-C II}}{\implies}P(|X_n|>\varepsilon\textrm{ i.o.})=1 \end{aligned}$$ But this is a contradiction. Then $P(|Y_n|>\varepsilon\textrm{ i.o.})=0$ . We conclude, because $\varepsilon$ was arbitrary.
|probability|sequences-and-series|probability-theory|limits|
1
Adding the same real to itself finitely many times
Take any positive real number $a$ . Is there always some natural number $n$ such that $a\cdot n \geq 1$ ? And, I guess more generally, is there always some natural number $n$ such that $a\cdot n\geq b$ for any $b\in\mathbb{R}$ ?
Yes for the first question, using any natural number $n \ge \frac{1}{a}$ . Similarly for the second question, using $n \ge \frac{b}{a}$ . Such $n$ always exist because $a$ is positive and $\mathbb{N}$ is not bounded from above; this is the Archimedean property of the reals.
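A one-line witness makes this concrete (my own sketch; `witness` is a hypothetical helper, and the sample values are dyadic so the floating-point arithmetic is exact):

```python
import math

def witness(a, b):
    """A natural number n with a*n >= b, for a > 0; its existence
    is exactly the Archimedean property."""
    return max(1, math.ceil(b / a))   # clamp so n is a positive natural

# dyadic (a, b) pairs avoid float rounding surprises
cases = [(2 ** -10, 1.0), (3.5, 1.0), (0.25, 10.0), (0.5, -3.0)]
print(all(a * witness(a, b) >= b for a, b in cases))   # True
```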
|real-numbers|
0
What is the limit (as a distribution) of this Fourier series, similar to that of a Dirac Delta?
I know that a representation of the Dirac delta function is $\sum_{n=-\infty}^\infty e^{inx}=2 \pi \delta(x)$ . I am trying to figure out if the series with positive $n$ only $\sum_{n=0}^\infty e^{inx}$ has a simple limit as a distribution, but I cannot find any sources. The real part should be $\pi \delta(x)+\frac{1}{2}$ , but what about the imaginary one?
I will just add a few details to the proof linked in the comment, with no attempt at rigor. As you correctly said, the real part equals $$ \Re{ \sum_{n=0}^\infty e^{inx} } = 1+ \sum_{n=1}^\infty \cos(nx) = \pi \sum_{n=-\infty}^\infty \delta(x-2\pi n) +\frac{1}{2}. $$ This can be proved with the formula for the Dirac comb. For the imaginary part we need $$ \sum_{n=1}^\infty \sin(nx) $$ Using the Lagrange trigonometric identities we get $$ \sum_{n=1}^\infty \sin(nx) = \frac{1}{2} \mathrm{PV} \cot(x/2) $$ where the principal value has to be inserted to renormalize the behavior at the pole. All in all $$ \sum_{n=0}^\infty e^{inx} = \pi \sum_{n=-\infty}^\infty \delta(x-2\pi n) +\frac{1}{2} +i \frac{1}{2} \mathrm{PV} \cot(x/2) $$
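As a numerical illustration of my own (the point $x = 1.3$ and the tolerances are arbitrary), one can check the Lagrange identity for the partial sums of $\sum \sin(nx)$, and that their Cesàro averages settle at $\frac12\cot(x/2)$ away from the poles, which is the sense in which the divergent series takes that distributional value:

```python
import math

x = 1.3   # any point away from the poles at multiples of 2*pi

def lagrange(N, x):
    # closed form for sum_{n=1}^N sin(nx) (Lagrange's identity)
    return (math.cos(x / 2) - math.cos((N + 0.5) * x)) / (2 * math.sin(x / 2))

# 1) the identity matches the raw partial sums
ok_identity = all(
    abs(sum(math.sin(n * x) for n in range(1, N + 1)) - lagrange(N, x)) < 1e-9
    for N in range(1, 200)
)

# 2) the Cesaro average of the partial sums approaches (1/2) cot(x/2)
M = 20000
running, total = 0.0, 0.0
for n in range(1, M + 1):
    running += math.sin(n * x)   # running = n-th partial sum
    total += running
cesaro = total / M
ok_cesaro = abs(cesaro - 0.5 / math.tan(x / 2)) < 1e-3

print(ok_identity, ok_cesaro)   # True True
```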
|fourier-series|distribution-theory|dirac-delta|
0
How to show that $\forall k\in(0,1), \exists L>0, \frac{1+kt}{1-kt} \leq e^{Lt}$ on $[0,1]$?
I want to show the following statement, $$\forall k\in(0,1), \exists L>0, \forall t\in[0, 1], \frac{1+kt}{1-kt} \leq e^{Lt}$$ This may be a very simple question but I can't see the trick. I tried differentiating but it led nowhere, or maybe I am doing it wrong.
Fix $k \in (0, 1)$ and let $g(t) = \ln \left(\frac{1 + kt}{1 - kt}\right)$ ; then $$g'(t) = \frac{k}{1+kt} + \frac{k}{1-kt} = \frac{2k}{1-k^2t^2} \le \frac{2}{1-k^2} = L$$ You can then write, for all $t\in [0, 1]$ , \begin{align} g(t) = \int_0^t g'(u)\mathrm d u \le Lt. \end{align} Take the exponential and you have what you are looking for.
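A grid check (my own sketch; the grid resolution and tolerance are arbitrary) confirms the bound with the constant $L = \frac{2}{1-k^2}$ from the answer:

```python
import math

def holds(k, samples=1000):
    """Check (1+kt)/(1-kt) <= exp(L*t) on [0, 1] with L = 2/(1-k^2),
    on an evenly spaced grid of t values."""
    L = 2 / (1 - k * k)
    for i in range(samples + 1):
        t = i / samples
        if (1 + k * t) / (1 - k * t) > math.exp(L * t) + 1e-12:
            return False
    return True

print(all(holds(k) for k in (0.1, 0.5, 0.9, 0.99)))   # True
```

The grid only samples $[0,1]$, but the derivative bound $g'(t) \le L$ in the answer makes the inequality exact on the whole interval.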
|real-analysis|
1
Does the survival function decrease faster than linear?
I was studying the mean residual life, and there's a point where, in evaluating an integral for the expression of the MRL, you get $(t-u)S(t)$ as $t$ tends to infinity. How do we know for sure that it goes to $0$ ?
The mean residual life doesn't necessarily go to zero: $m(t) = \frac{\int_t^\infty (x-t)dF(x)}{\int_t^\infty dF(x)}$ . "Survival" may be defined differently, however.
|statistics|
0
Attempt at creating a formula relating debt, payments and interest
I tried writing down a formula relating a given debt and interest to the periodic payments and number of payments. So let's say someone starts off with a debt of $D$ . The periodic interest is $r$ (for example, $0.6\% =0.006$ each month). Let's call the periodic payment $p$ , and the number of payments $N$ . I based my equation off of the fact that the total payments must equal the original debt plus the added interest. The total payments are clearly $Np$ . The added interest each month is $r$ times the remaining debt, so after one period we would add $$rD$$ after another, we would add $$(D+rD-p)r=rD+r^2D-pr$$ next we would add $$(D+rD+r^2D-pr-p)r=rD+r^2D+r^3D-pr^2-pr$$ then $$(D+rD+r^2D+r^3D-pr^2-pr-p)r=rD+r^2D+r^3D+r^4D-pr^3-pr^2-pr$$ and so on, for each of the $N$ periods. Adding all of these up, along with the initial debt $D$ will give us the total that needs to be paid. Noticing the similar terms in each expression, we get $$\text{Total} = D(Nr+(N-1)r^2+(N-2)r^3+\dots +r^N)-p
Let’s assume the payments are made at the end of each monthly period. At the end of month $1$ you owe a total of $D(1+r)-p$ At the end of month $2$ you owe a total of $\left(D(1+r)-p\right)(1+r)-p$ At the end of month $3$ you owe a total of $\left(\left(D(1+r)-p\right)(1+r)-p\right)(1+r)-p$ $$=D(1+r)^3-p\left(1+(1+r)+(1+r)^2\right)$$ Continuing to the end of the $N$ th month, when you owe nothing, $$0=D(1+r)^N-p\left(1+(1+r)+…+(1+r)^{N-1}\right)$$ $$\implies D(1+r)^N=p\frac{\left((1+r)^N-1\right)}{(1+r)-1}$$ So your regular payment is $$p=\frac{Dr(1+r)^N}{(1+r)^N-1}$$ You can then rearrange this to get $$N=\frac{\log\frac{p}{p-Dr}}{\log(1+r)}$$
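A straightforward simulation (my own sketch; the sample numbers $D = 10000$, $r = 0.006$, $N = 60$ are arbitrary) confirms both closed forms: the payment $p$ clears the debt in exactly $N$ months, and the $N$ recovered from the log formula agrees.

```python
import math

def months_to_clear(D, r, p):
    """Simulate paying p at the end of each month on balance D at
    monthly rate r; return how many payments clear the debt."""
    months = 0
    while D > 1e-6:          # tolerance absorbs float round-off
        D = D * (1 + r) - p
        months += 1
    return months

D, r, N = 10_000.0, 0.006, 60
p = D * r * (1 + r) ** N / ((1 + r) ** N - 1)    # regular payment
n = math.log(p / (p - D * r)) / math.log(1 + r)  # recover N from p

print(months_to_clear(D, r, p), round(n))   # 60 60
```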
|algebra-precalculus|summation|economics|compound-interest|
0
How can I calculate position of a point after it moved towards another point?
I'm working on a text-based game involving a spaceship travelling between planets. Space is represented as a 2D plane. The ship has a current position, a destination and a speed. Every minute, the game engine updates the current position of the ship, according to its current position, current destination and speed. Knowing the ship speed (in unit per minute), I know the distance traveled by the ship in one minute. But I'm struggling to calculate the new position of the ship after it traveled that distance. Let's say the starting position of the ship is (1,1), its destination position is (3,4) and the ship moving towards it at the speed of 1 unit per minute. What would be the new position at the ship after 1 minute, having traveled 1 unit? Furthermore, what is a formula to calculate the new position (xP, yP) of the ship knowing its starting position (xA, yA), its current destination (xB, yB), and the distance traveled in 1 minute ( D )? PS: I should say that I've never learned mathemati
It's just the Pythagorean Theorem. The distance from $A=(x_A,y_A)$ to $B=(x_B,y_B)$ is $d=\sqrt{(x_B-x_A)^2+(y_B-y_A)^2}$ , and after travelling a distance $D$ from $A$ toward $B$ the new position is $$\left(x_P,\ y_P\right)=\left(x_A+\frac{D}{d}(x_B-x_A),\ \ y_A+\frac{D}{d}(y_B-y_A)\right).$$
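In code the per-minute update is only a few lines (a sketch; `step_towards` is a hypothetical helper name): compute the remaining distance with the Pythagorean theorem, then move $D/d$ of the way along the vector from $A$ to $B$, snapping to $B$ once it is within reach.

```python
import math

def step_towards(ax, ay, bx, by, dist):
    """Move from A=(ax, ay) toward B=(bx, by) by `dist` units,
    without overshooting the destination."""
    dx, dy = bx - ax, by - ay
    d = math.hypot(dx, dy)          # Pythagorean distance A -> B
    if d <= dist:                   # we can reach B this tick
        return bx, by
    return ax + dist * dx / d, ay + dist * dy / d

# The ship at (1, 1) heading to (3, 4) at 1 unit/minute:
x, y = step_towards(1, 1, 3, 4, 1)
print(round(x, 4), round(y, 4))    # roughly (1.5547, 1.8321)
```

Calling this once per game minute with the ship's speed as `dist` gives the update rule described in the question.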
|geometry|
1
Convergence of $\sum_{n=1}^\infty \frac{\sin\frac{n^2+1}{n+1}}{\sqrt n}$
I think the idea is to prove that the sequence of partial sums of $\sum_{n=1}^\infty \sin\frac{n^2+1}{n+1}$ is bounded. If that were the case, we could use Dirichlet's test, since $\frac{1}{\sqrt n}$ is obviously monotone and approaches $0$ . I tried writing $\sin\frac{n^2+1}{n+1}$ as $\sin(n-1+\frac{2}{n+1})$ and $\sin(n+\frac{n-1}{n+1})$ but it didn't lead anywhere, except that $\sin(n+\frac{n-1}{n+1})$ could be seen as $\sin(n+1)$ intuitively for big $n$ , and we know that the partial sums of $\sum_{n=1}^\infty\sin(an+b)$ are bounded.
Your intuition is correct. Use the angle sum formula: $$ \sin\left(n+\frac{n-1}{n+1}\right) = \sin(n)\cos\left(\frac{n-1}{n+1}\right) + \sin\left(\frac{n-1}{n+1}\right)\cos(n) $$ that way you can split the sum into two. Next, show that the sequences $\frac1{\sqrt n}\cos\left(\frac{n-1}{n+1}\right)$ and $\frac1{\sqrt n}\sin\left(\frac{n-1}{n+1}\right)$ are monotonic for sufficiently large $n$ , and both go to $0$ . Finally, to use Dirichlet's test, you need that $\sum_{n=1}^N \sin(n)$ and $\sum_{n=1}^N \cos(n)$ are bounded. There's a number of ways to show that. One is to use Euler's formulae to turn them into sums of complex exponentials, which are actually geometric series and therefore the partial sums have a (bounded) closed form.
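For the last step, here is a numerical check of my own of the closed form $\sum_{n=1}^N \sin n = \frac{\sin(N/2)\sin((N+1)/2)}{\sin(1/2)}$, which immediately gives the uniform bound $1/\sin(1/2)$ on the partial sums needed by Dirichlet's test:

```python
import math

def S(N):
    # raw partial sum sin(1) + sin(2) + ... + sin(N)
    return sum(math.sin(n) for n in range(1, N + 1))

def closed_form(N):
    # product form obtained by telescoping (or geometric series of e^{in})
    return math.sin(N / 2) * math.sin((N + 1) / 2) / math.sin(0.5)

bound = 1 / math.sin(0.5)          # uniform bound on |S(N)|
ok = all(abs(S(N) - closed_form(N)) < 1e-9 and abs(S(N)) <= bound + 1e-12
         for N in range(1, 300))
print(ok)   # True: the partial sums are uniformly bounded
```

The analogous identity with cosines handles the $\sum \cos(n)$ partial sums in the same way.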
|calculus|analysis|
1
Showing that $\boldsymbol{\nabla}r^n=nr^{n-2}\mathbf{x}$
In three dimensions, use suffix notation and the summation convention to show that $$\boldsymbol{\nabla}r^n=nr^{n-2}\mathbf{x},$$ where $\mathbf{a}$ is any constant vector and $r=|\mathbf{x}|.$ My try: $\nabla f=\frac{\partial f}{\partial x_i}\mathbf{e}_i$ and $r=\sqrt{x_i x_i}=\sqrt{x_1^2+x_2^2+x_3^2}$ . Computing the $i$ -th component of $\nabla r^n$ we get $\left(\nabla r^n\right)_i$ . Using chain rule, $\frac{\partial}{\partial x_{i}}(r^{n})=nr^{n-1}\frac{\partial r}{\partial x_{i}}$ . Since $r=|\mathbf{x}|, nr^{n-1}\frac{\partial}{\partial x_{i}}|\mathbf{x}|$ . Then I get $\frac{\partial|\mathbf{x}|}{\partial x_{i}}=\frac{\partial r}{\partial x_{i}}=\frac{x_{i}}{r}=\frac{x_{i}}{|\mathbf{x}|}$ . I can't really show $ \frac{\partial|\mathbf{x}|}{\partial x_i}=\mathbf{x}$ . Maybe I missed some minor yet crucial detail? I am assuming that what I used, $\mathbf{x}=(x_1,x_2,x_3)$ with components $x_i$ , is correct.
Observe that it is $nr^{n-2}\vec{x}$ , not $nr^{n-1}\vec{x}$ : indeed, $nr^{n-1}\frac{\partial r}{\partial x_i}\,\vec{e}_i=nr^{n-1}\frac{x_i}{r}\,\vec{e}_i=nr^{n-2}x_i\,\vec{e}_i$ . You have solved it correctly... congrats!
|vectors|partial-derivative|vector-analysis|
0
Why does the semisimple (s) and nilpotent (n) component of an element x=n+s of a cartan subalgebra commute with y as soon as y commutes with x.
I do not understand one line in the proof of the following theorem. Within the proof of d) it's stated that, "if $y \in \mathfrak{h}$ , then y commutes with x and hence also with s and n." I do not get why the fact that y commutes with x=s+n already implies that y commutes with both s and n. Does anybody know how the two things are related? I feel like the reason must be that n is the nilpotent and s the semisimple component, but what is the concrete connection? Theorem: Let $ \mathfrak{h} $ be a Cartan subalgebra of a semisimple Lie algebra $ \mathfrak{g} $ . Then: a) The restriction of the Killing form of $ \mathfrak{g} $ to $\mathfrak{h} $ is nondegenerate. b) $ \mathfrak{h} $ is abelian. c) The centralizer of $ \mathfrak{h} $ is $ \mathfrak{h} $ . d) Every element of $ \mathfrak{h} $ is semisimple. Proof for d: Let $x \in \mathfrak{h}$ and let s ( resp. n) be its semisimple (resp. nilpotent) component. If $y \in \mathfrak{h}$ , then y commutes with x and hence also with s and n. We
To turn Callum's and my comments into an answer, "standard" facts about Jordan decomposition in Lie algebras are that a. for an endomorphism $e$ of a finite-dim. vector space $V$ , there exist polynomials without constant term $p(T), q(T)$ such that $e_s := p(e)$ and $e_n := q(e)$ (making use of the associative structure of $End(V)$ ) are the semisimple and nilpotent components of $e$ , respectively; b. if $\mathfrak g$ is any semisimple Lie algebra, then there is a well-defined notion of decomposition of each element $x$ into a semisimple and nilpotent part $x_s+x_n$ , commuting with each other, and such that when taking the adjoint representation, $ad(x_s)$ and $ad(x_n)$ are the semisimple and nilpotent parts of $ad(x)$ viewed as element of $End(\mathfrak g)$ , respectively. See e.g. Humphreys Introduction to Lie Algebras and Representation Theory 4.2 or Bourbaki's volume on Lie theory, vol. I §6 no. 3. Now to apply that to the given situation, one assumes that $ad(x) (y) =0$ and wan
|lie-algebras|semisimple-lie-algebras|
0
Showing that $\boldsymbol{\nabla}r^n=nr^{n-2}\mathbf{x}$
In three dimensions, use suffix notation and the summation convention to show that $$\boldsymbol{\nabla}r^n=nr^{n-2}\mathbf{x},$$ where $\mathbf{a}$ is any constant vector and $r=|\mathbf{x}|.$ My try: $\nabla f=\frac{\partial f}{\partial x_i}\mathbf{e}_i$ and $r=\sqrt{x_i x_i}=\sqrt{x_1^2+x_2^2+x_3^2}$ . Computing $i$ -th component of $\nabla r^n$ we get $\left(\nabla r^n\right)_i$ . Using chain rule, $\frac{\partial}{\partial x_{i}}(r^{n})=nr^{n-1}\frac{\partial r}{\partial x_{i}}$ . Since $r=|\mathbf{x}|, nr^{n-1}\frac{\partial}{\partial x_{i}}|\mathbf{x}|$ . Then I get $\frac{\partial|\mathbf{x}|}{\partial x_{i}}=\frac{\partial r}{\partial x_{i}}=\frac{x_{i}}{r}=\frac{x_{i}}{|\mathbf{x}|}$ . I can't really show $ \frac{\partial|\mathbf{x}|}{\partial x_i}=\mathbf{x}$ . Maybe I missed some minor yet crucial details? I am assuming this what I used $x_i=\mathbf{x}=(x_1,x_2,x_3)$ is correct.
You got, $$ \frac{\partial (r^n)}{\partial x_i}=nr^{n-1}\frac{\partial r}{\partial x_i}. $$ Now, $$ \frac{\partial r^2}{\partial x_i} = \frac{\partial (\vec x\cdot \vec x)}{\partial x_i} \\ \implies r\frac{\partial r}{\partial x_i}=\vec x\cdot\frac{\partial \vec x}{\partial x_i}=\vec x\cdot\vec e_i=x_i. $$ Combining above results, $$ \frac{\partial (r^n)}{\partial x_i}=nr^{n-2}\times r\frac{\partial r}{\partial x_i} \\ \implies \frac{\partial (r^n)}{\partial x_i}=nr^{n-2}x_i. $$ So, we can conclude that $\nabla (r^n)=nr^{n-2}\vec x$ .
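As a quick sanity check (not part of the derivation), the identity can be verified symbolically with SymPy; the variable names are just for illustration:

```python
import sympy as sp

# Coordinates and a generic exponent n
x1, x2, x3, n = sp.symbols('x1 x2 x3 n', positive=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)  # r = |x|

# Gradient of r**n, component by component
grad = [sp.diff(r**n, xi) for xi in (x1, x2, x3)]

# Expected result: the i-th component is n * r**(n-2) * x_i
expected = [n * r**(n - 2) * xi for xi in (x1, x2, x3)]

checks = [sp.simplify(g - e) == 0 for g, e in zip(grad, expected)]
print(checks)  # [True, True, True]
```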
|vectors|partial-derivative|vector-analysis|
1
Understanding the concepts of division and fractions
$\require{cancel}$ I'm having some issues regarding division so I will start by asking how this concept was developed throughout the ages: What was the first civilization to introduce the idea of division? What possibly motivated them to define division that way and how related was to our present notion/definition of division? There are several rules I take for granted and when doing calculations I don't know what's really going on and why it actually works. For instance, when dividing fractions, why do we invert and multiply? $\frac{\frac{1}{3}}{\frac{2}{3}} = \frac{1}3 \cdot \frac{3}2$ Some answers I found are like this one: "You must use the multiplicative inverse to cancel the operation and obtain the final result". Which is a shortcut for: $\frac{\frac{1}{3}}{\frac{2}{3}} = \frac{\frac{1}{3}\cdot \frac{3}{2}}{\frac{2}{3}\cdot \frac{3}{2}} = \frac{\frac{1}{3}\cdot \frac{3}{2}}{\cancel{\frac{2}{3}}\cdot \cancel{\frac{3}{2}}} = \frac{1}{3}\cdot \frac{3}{2} $ But this is still non int
If you are interested in how these rules are derived algebraically, that's how you can do it using a few division laws: Proof Why do these rules always work? Because all numbers (with a few very specific exceptions) behave the same way and obey the same laws. Why? I don't know. I think that's just how it is, and we simply discovered it. Even laws of arithmetic were not just invented, they were designed to represent how numbers truly behave. If for example $7+1$ would not be the same as $1+7$ , there wouldn't be a law which states $a+b=b+a$ for every two numbers. By the way, that's why algebra was created, to make arithmetic and its solutions more general. How through intuition or visualization I can be sure that these results are indeed true? I am sure that there is a visualization of dividing fractions, maybe by cutting rectangles into parts. Although, intuition is a bit tricky. Math is not always intuitive. I can't imagine many things either. But I believe that a clear definition of
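The invert-and-multiply rule from the question can also be checked mechanically with exact rational arithmetic; this is a numerical illustration of the rule, not a proof:

```python
from fractions import Fraction

a = Fraction(1, 3)
b = Fraction(2, 3)

# Dividing by b is the same as multiplying by its reciprocal 1/b
division = a / b
invert_and_multiply = a * Fraction(b.denominator, b.numerator)

print(division, invert_and_multiply)  # both are 1/2

# The rule holds for arbitrary nonzero fractions, e.g.:
for p, q in [(Fraction(5, 7), Fraction(3, 4)), (Fraction(-2, 9), Fraction(7, 5))]:
    assert p / q == p * Fraction(q.denominator, q.numerator)
```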
|math-history|fractions|fair-division|
0
Why is $\langle 3 \rangle$ a generator of $(\mathbb{Z}_7^*, \cdot)$?
This link here shows that $\langle 3 \rangle$ is a generator for the given group by brute force method, that is trial and error. I was curious as to how to justify this using the theorem. According to the Corollary, Generators of $\mathbb{Z}_n$ : An integer $k$ in $\mathbb{Z}_n$ is a generator of $\mathbb{Z}_n$ if and only if $\gcd(n, k) = 1$ . Using this we know that $5$ is a generator of the group since gcd( $|\mathbb{Z}_7^*|, 5) = 1$ . (here $|\mathbb{Z}_7^*| = 6$ , i.e, order of the group) Although gcd( $|\mathbb{Z}_7^*|, 3) = 3$ . Where am I going wrong? Regards,
A precise formulation of the cited statement is: Let $k,n\in\mathbb{Z}$ with $n > 0$ . The residue class of $k$ is a generator of the cyclic group $(\mathbb{Z}/n\mathbb{Z},+)$ if and only if $\gcd(k,n) = 1$ . Clearly, your multiplicative group $(\mathbb{Z}/7\mathbb{Z})^{\times} = (\mathbb{Z}/7\mathbb{Z} \setminus\{0\},\;\cdot)$ does not match the structure required in the statement. The group is cyclic of order $6$ though. So there must exist an isomorphism $\phi : (\mathbb{Z}/6\mathbb{Z},+) \to (\mathbb{Z}/7\mathbb{Z})^\times$ . Hence you could apply the statement to $(\mathbb{Z}/6\mathbb{Z},+)$ and then use the isomorphism $\phi$ to translate the result to the group $(\mathbb{Z}/7\mathbb{Z})^\times$ . However, the problem is that you need an explicit isomorphism $\phi$ for the second step. Writing down such an isomorphism involves the knowledge of a generator of the cyclic group $(\mathbb{Z}/7\mathbb{Z})^\times$ , and you are back at your original problem...
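A quick brute-force sketch (illustrative variable names) confirms both the generator claim and where the gcd corollary actually applies:

```python
from math import gcd

# The multiplicative group mod 7 has elements {1,...,6}.
powers_of_3 = {pow(3, k, 7) for k in range(1, 7)}
print(sorted(powers_of_3))  # [1, 2, 3, 4, 5, 6] -> 3 generates the group

# The corollary about gcd applies to the *additive* group Z/6Z:
additive_generators = [k for k in range(6) if gcd(k, 6) == 1]
print(additive_generators)  # [1, 5]

# Via the isomorphism phi(k) = 3^k mod 7, those map to the generators
# of the multiplicative group (Z/7Z)^x:
multiplicative_generators = sorted(pow(3, k, 7) for k in additive_generators)
print(multiplicative_generators)  # [3, 5]
```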
|abstract-algebra|group-theory|ring-theory|finite-fields|cyclic-groups|
0
Compactness and finite (sub)covers
This is a question about possible alternative formulations of a compactness. Suppose $R$ is an open set, and $S$ is a compact subset of $R$ . Suppose we have an arbitrary open cover $R = \bigcup_{i\in I} T_i$ . Then, as the cover of $R$ also covers $S$ can we, for each such cover construct a finite open cover of $R$ as follows, noting that as $R$ is open and $S$ is closed so $R \setminus S$ is open? $$R = S \cup (R\setminus S) = \bigcup_{\text{finitely-many } i} T_i \cup (R \setminus S)$$ Ordinarily for compactness one specifies that this is a sub cover, so I'm inclined to say that this is not valid for showing that $R$ is in fact compact. Shifting to the context of modules over a noetherian ring $A$ equipped with the weak topology, if more generally I have that $R$ and $S$ are both clopen $A$ -modules with $S \subseteq R$ , with $S$ compact, and that for some ideal $I$ of $A$ we have equality of quotients $R/IR = S/IS$ . Then is it possible to show that $R$ is compact? Thanks in advan
To answer the first part of your question, you are correct in noting that this doesn't imply that $R$ is compact since proving compactness requires you to produce a finite subcover of a given cover. In fact, your proof would immediately show that all (non-empty) topological spaces are compact which is of course false :)
|abstract-algebra|category-theory|modules|compactness|
0
Dedekind's example of a cubic field $K$ for which $O_K$ does not have the form $\mathbb Z[\alpha]$
Let $\alpha$ be a root of $f(X)=X^3+X^2-2X+8$ and $\beta = \frac{4}{\alpha}$. It can be shown that $O_K=\mathbb{Z}[\alpha, \beta]$. How does one then establish the following ring isomorphism $$\frac{O_K}{2O_K} \cong \mathbb{F}_2 \times \mathbb{F}_2 \times\mathbb{F}_2\ ?$$ We can't use the Dedekind Criterion here, since $2=|O_K : \mathbb{Z}[\alpha]|$ so the index and prime $2$ are not coprime. This result is important, as it shows that $(2)$ is totally split and the result $O_K \neq \mathbb{Z}[\nu]$ for any $\nu$ would then follow by Dedekind. I would be able to prove the existence of this isomorphism indirectly (after some calculation/manipulation of norms of ideals) if I know that $2$ is unramified, but I don't see why this should hold.
Let $\alpha$ be a root of $f(X)=X^3+X^2-2X+8$ and $\beta = \frac{4}{\alpha}$ . It can be shown that $O_K=\mathbb{Z}[\alpha, \beta]$ . How does one then establish the following ring isomorphism $$\frac{O_K}{2O_K} \cong \mathbb{F}_2 \times \mathbb{F}_2 \times\mathbb{F}_2\ ?$$ You know that $\alpha$ is a root of $f$ and $K=\mathbb{Q}(\alpha)$ ; then $e_1, e_2, e_3$ is an integral basis for $O_K$ , where $e_1=1$ , $e_2=\alpha$ and $e_3=\frac{1}{2} \alpha(\alpha+1)$ . We will show that the linear maps $\psi_v: O_K \to \mathbb{F}_2$ defined by $\psi_v(e_i)=v_i$ are ring homomorphisms, for the following values of $v$ : $v=(1,0,0)$ ; $v=(1,1,0)$ ; $v=(1,0,1)$ . Then we define $\psi: O_K \to \mathbb{F}_2^3$ by $\psi=(\psi_{(1,0,0)},\psi_{(1,1,0)},\psi_{(1,0,1)})$ and we conclude that $O_K/(2) \cong \mathbb{F}_2^3$ . Since $\mathbb{F}_2^3$ is a product of fields, it has no nilpotent elements, so $(2)$ does not ramify. To prove $\psi$ is a ring homomorphism: $\begin{array}{c|ccc} &e_1&e_2&e_3\\\hline e_1&e_1&e_2&e_3\\ e_2&e_2&2e_3-e_2&2e_3+4e_1\\ e_3&e_3&2e_3+4e_1&3e_3-2(e_2+3e_1) \end{array}$ this table mod 2
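One can at least see why the Dedekind criterion fails at $2$ by factoring $f$ over $\mathbb{F}_2$; the repeated factor below is the obstruction (a quick check, not part of the argument above):

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 + x**2 - 2*x + 8

# Over F_2 the polynomial reduces to x^2 * (x + 1): not squarefree,
# so the usual Kummer-Dedekind factorization of (2) via Z[alpha] fails
# (consistent with 2 dividing the index |O_K : Z[alpha]|).
factors = sp.factor_list(sp.Poly(f, x, modulus=2))
print(factors)
```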
|abstract-algebra|algebraic-number-theory|
0
Finding the entries in a matrix given that the rank has to be $2$
Find all $y \neq 0$ such that the matrix $$\begin{pmatrix}-3&2&y\\ 0&1&-\frac{1}{y}\\ y&0&y\end{pmatrix}$$ has rank $2$ . It seems we just need a zero row, which could be the last row, for example, if $y = 0$ . However, this would violate the condition $y \neq 0$ required by the second row. I'm not sure what to do, any suggestions? Could the determinant help in this case?
Edit : the post has been edited, so the answer is no longer correct, but the method is, and it does not require any knowledge of eigenvalues, so I will leave it up as it is generally applicable to all of these types of problems. See the other answer for the correct solution to the now-edited question. Original : What others wrote in the comments will work, but this is the easiest way in my opinion. We can solve this problem using Gaussian elimination to bring the matrix into upper triangular form, as we know that this preserves rank (here it is also important that $y \neq 0$ , which we already know): $$\begin{pmatrix}-3&2&y\\ 0&1&\frac{1}{y}\\ y&0&y\end{pmatrix} \rightarrow \begin{pmatrix}-y&\frac{2}{3}y&\frac{1}{3}y^2\\ 0&1&\frac{1}{y}\\ 0&\frac{2}{3}y&y+\frac{1}{3}y^2\end{pmatrix}\rightarrow \begin{pmatrix}-y&\frac{2}{3}y&\frac{1}{3}y^2\\ 0&1&\frac{1}{y}\\ 0&0&y+\frac{1}{3}y^2-\frac{2}{3}\end{pmatrix}$$ From this we can see that the rank drops below $3$ exactly when $y+\frac{1}{3}y^2-\frac{2}{3}=0$ , i.e. $y^2+3y-2=0$ , so the rank is $2$ when $y=\frac{-3\pm\sqrt{17}}{2}$ .
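Assuming the version of the matrix with $+\frac{1}{y}$ that this answer works with, a SymPy check of the resulting root (just a sanity check on the elimination):

```python
import sympy as sp

y0 = sp.Rational(-3, 2) + sp.sqrt(17) / 2  # one root of y^2 + 3y - 2 = 0

M = sp.Matrix([
    [-3, 2, y0],
    [0, 1, 1 / y0],
    [y0, 0, y0],
])

# The determinant vanishes at y0, while the top-left 2x2 minor is nonzero,
# so the rank is exactly 2 there.
det = sp.expand(M.det())
minor = M[:2, :2].det()
print(det, minor)  # 0 -3
```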
|linear-algebra|determinant|matrix-rank|
1
What's wrong with this derivation of the volume of a hemisphere?
My idea to calculate the volume of the hemisphere is to sum up the area of circles of all radii up to the radius of the hemisphere we are interested in: $$\int_0^r \pi x^2 dx$$ This gives $\frac{1}{3}\pi r^3 \neq \frac{2}{3}\pi r^3$ , so deviating by a factor of $2$ from the known formula for the volume of a hemisphere. Can you explain where my reasoning is wrong and why it deviates by a factor of $2$ ?
Your problem seems to be that you don't have a clear concept of how to integrate the volume of an object by stacking disks on each other. In order to compute a volume by stacking disks, you need the distance between disks to be proportional to how much the variable of integration (in your case $x$ ) has increased. A way to look at this is that if you take a stack of disks for which $x$ increases by $\Delta x$ for each new disk, the "thickness" of that disk needs to be $\Delta x.$ A classic puzzle that is constructed with disks of uniform thickness that increase in radius by a constant amount for each disk is the Towers of Hanoi. Usually the disks are stacked from the largest to the smallest rather than smallest to largest, but you would just have to turn the stack upside down to fix that. And the stack of disks looks like a cone, never like a hemisphere: The correct way to do what you are trying to do is called the disk method . It works when you are computing the volume of an object t
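For the hemisphere, the disk at height $x$ has radius $\sqrt{r^2-x^2}$, not $x$; integrating the corrected cross-sectional area recovers the missing factor. A symbolic check of both integrals (illustration only):

```python
import sympy as sp

x, r = sp.symbols('x r', positive=True)

# Question's integrand: disks of radius x (this stacks up a cone, not a hemisphere)
cone_like = sp.integrate(sp.pi * x**2, (x, 0, r))

# Disk method: at height x the hemisphere's cross-section has radius sqrt(r^2 - x^2)
hemisphere = sp.integrate(sp.pi * (r**2 - x**2), (x, 0, r))

print(cone_like)    # pi*r**3/3
print(hemisphere)   # 2*pi*r**3/3
```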
|integration|geometry|solution-verification|solid-geometry|
0
Any simplification of $\sum_{k=0}^{n}\binom{n}{k} \frac{(-1)^{k}}{(N-k)^2}$
I am trying to find a compact expression for $\sum_{k=0}^{n}\binom{n}{k} \frac{(-1)^{k}}{(N-k)^2}$ , where $N > n \geq 0$ . Maple simplifies it using hypergeometric functions, which is not very useful. Any idea whether this can be simplified ? and if yes, how ? Thanks a lot!
In order to evaluate where $N\gt n$ $$\sum_{k=0}^n \frac{(-1)^k}{(N-k)^2} {n\choose k}$$ we introduce the function $$f(z) = \frac{n! (-1)^n}{(N-z)^2} \prod_{q=0}^n \frac{1}{z-q}.$$ This has the property that with $0\le k\le n$ $$\;\underset{z=k}{\mathrm{res}}\; f(z) = \frac{n! (-1)^n}{(N-k)^2} \prod_{q=0}^{k-1} \frac{1}{k-q} \prod_{q=k+1}^n \frac{1}{k-q} \\ = \frac{n! (-1)^n}{(N-k)^2} \frac{1}{k!} \frac{(-1)^{n-k}}{(n-k)!} \\ = \frac{(-1)^k}{(N-k)^2} {n\choose k}.$$ Now residues sum to zero and the residue at infinity is zero so we may evaluate using minus the residue at $z=N$ . We find $$- n! (-1)^n \left.\left[ \prod_{q=0}^n \frac{1}{z-q} \right]'\right|_{z=N} \\ = n! (-1)^n \left. \prod_{q=0}^n \frac{1}{z-q} \sum_{q=0}^n \frac{1}{z-q} \right|_{z=N} \\ = n! (-1)^n \prod_{q=0}^n \frac{1}{N-q} \sum_{q=0}^n \frac{1}{N-q} \\ = n! (-1)^n \frac{(N-n-1)!}{N!} (H_N - H_{N-n-1}).$$ This is $$\bbox[5px,border:2px solid #00A000]{ \frac{(-1)^n}{n+1} {N\choose n+1}^{-1} (H_N - H_{N-n-1}).}$$ Here
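The boxed closed form can be spot-checked with exact rational arithmetic for small $n<N$ (a numerical sanity check, not part of the residue argument):

```python
from fractions import Fraction
from math import comb

def lhs(n, N):
    # The original alternating sum
    return sum(Fraction((-1)**k * comb(n, k), (N - k)**2) for k in range(n + 1))

def harmonic(m):
    # H_m = 1 + 1/2 + ... + 1/m, with H_0 = 0
    return sum(Fraction(1, j) for j in range(1, m + 1))

def rhs(n, N):
    # The boxed closed form: (-1)^n / ((n+1) C(N, n+1)) * (H_N - H_{N-n-1})
    return Fraction((-1)**n, (n + 1) * comb(N, n + 1)) * (harmonic(N) - harmonic(N - n - 1))

for n in range(5):
    for N in range(n + 1, n + 6):
        assert lhs(n, N) == rhs(n, N)
print("closed form verified for small cases")
```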
|summation|binomial-coefficients|hypergeometric-function|
0
Does $xyz = 1$ form a hyperboloid?
It can be shown that taking $y = \frac{1}{x}$ and rotating about the origin can produce (part of) a hyperbola (or already is a hyperbola, technically). Is there such a relation in $3$ D? Does the equation $xyz = 1$ form a hyperboloid? I tried figuring this out with a similar approach as one would in $2$ D: converting to another coordinate system, subtracting an angular offset ( $\phi\rightarrow\phi-\phi_0$ ), and converting back to Cartesian. This results in $xy\cos{(2\phi_0)} + \frac{1}{2}(y^2-x^2)\sin{(2\phi_0)}=1$ , which gives $y=1/x$ for $\phi_0=0$ and $y^2-x^2=2$ for $\phi_0=\pi/4$ . Trying this with spherical coordinates, it became difficult to convert back to Cartesian without leftover terms. I have the equation $r^3\sin{(\phi-\phi_0)}\cos{(\phi-\phi_0)}\sin^2{(\theta-\theta_0)}\cos{(\theta-\theta_0)}=1$ . (In both cases, I use the physics convention for coordinates where $\phi$ is used for rotations in the $XY$ plane.). I tried plotting this and offsetting the angle. $\phi$ wo
The normal hyperboloids are surfaces of degree $2$ in $\mathbb R^3$ . But $xyz=1$ is a surface of degree $3$ . So it is not a "hyperboloid" in that sense. I seem to recall that Newton worked out the possible curves of degree $3$ in $\mathbb R^2$ (he classified the plane cubics); the analogous classification of degree- $3$ surfaces in $\mathbb R^3$ came later.
|conic-sections|
0
Why are FFT results different from theory and how to eliminate the difference?
I am working with time series data, and applying FFT to calculate the power spectrum. The data is something like this: $y = \sum_{k=1}^{5} \cos(10 k x)$ All the peaks in the FFT output should have identical magnitudes, similar to the pattern shown in the graph below: However, the FFT output I am getting is different, as shown in the graph below: The magnitudes of the peaks in my data slightly differ. Increasing the sampling rate or using a Hamming window reduces but doesn't fully eliminate these differences. How can I completely remove these differences?

import numpy as np
import scipy.signal
import matplotlib.pyplot as plt

N = 1000
t = np.linspace(0, 2*np.pi, N)
y = np.zeros(N)
for freq in [10, 20, 30, 40, 50]:
    y = y + np.cos(freq * t)
h = scipy.signal.windows.hamming(N)
yfft = np.abs(np.fft.fft(y*h) / N)**2
xfft = np.hstack([np.arange(0, N//2), np.arange(-(N//2), 0)])
d = np.array(sorted(zip(xfft, yfft))).reshape(-1, 2)
xfft = d[:, 0]
yfft = d[:, 1]
plt.plot(xfft, yfft, color="red")
plt.show()
for freq in [10, 20, 30, 40, 50]:
    print(f"freq={freq} magnitude={yfft[xfft==fr
In this particular ideal case, you need to make sure that your cosines are "commensurate" with your domain. I will explain what this means. By using np.linspace with endpoint=True (which is its default value) you are sampling the boundary value twice. This is because the FFT models functions as being periodic. For example if your domain has length $T$ , then the FFT acts as if $y(t)=y(t+T)$ for all $t$ . Your domain should be $[0,T)$ , i.e. excluding one endpoint. If you include both endpoints, like in $[0,T]$ , you will sample the point $t=0$ twice. This is because $y(0)=y(T)$ , again because of periodicity. If you do this, the function is not 'commensurate' with your domain, which just means that it does not fit nicely. You can solve this by having endpoint=False . Here is some code that draws the spectrum of a single sine wave. First I plot it using endpoint=True and then using endpoint=False , the latter of which gives the correct result.

import numpy as np
import matplotlib.pyplot as
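A minimal self-contained version of the fix for the question's signal (no window needed once the tones are commensurate with the domain); the variable names are illustrative:

```python
import numpy as np

N = 1000
freqs = [10, 20, 30, 40, 50]

# endpoint=False samples [0, 2*pi), so each cosine completes whole periods
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
y = sum(np.cos(f * t) for f in freqs)

spectrum = np.abs(np.fft.fft(y)) / N
peaks = spectrum[freqs]
print(peaks)  # each peak is 0.5 up to floating-point error
```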
|fast-fourier-transform|
0
Is a finite set of probability distributions a subset of the set of all probability distributions over itself?
I am currently trying to write something but I got stuck. Essentially, I need to know whether a finite set of probability distributions is a subset of the set of all probability distributions over itself. Let me put my problem formally. Let $X$ be an arbitrary finite set, and let $D(X)$ be the set of all probability distributions over the set $X$ : namely, \begin{gather} D(X)=\left\{d\in[0,1]^X\mid\sum_{x\in X}d(x)=1\right\} \end{gather} Further, let $\Theta\subsetneq D(X)$ be any finite subset of $D(X)$ . Then, let $\Delta(\Theta)$ be the set of all probability distributions over the set $\Theta$ : namely, \begin{gather} \Delta(\Theta)=\left\{\delta\in[0,1]^\Theta\mid\sum_{\theta\in\Theta}\delta(\theta)=1\right\} \end{gather} QUESTION: is it correct to say that $\Theta\subseteq \Delta(\Theta)$ ? A part of me believes it is correct, for $\Theta$ and $\Delta(\Theta)$ are the same type of set; however, another part of me suspects it is not correct, but I can't pinpoint why. Any help will
Not really. An element of $\Theta$ is a function on $X$ , while an element of $\Delta(\Theta)$ is a function on $\Theta$ ; the domains differ, so neither set is contained in the other. It would be akin to saying $X \subseteq D(X)$ , or that the set {head, tail} is a subset of all possible coin-toss distributions (whether fair or not fair). What does exist is a canonical embedding $\theta \mapsto \delta_\theta$ sending each $\theta \in \Theta$ to the point mass concentrated on it.
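A concrete way to see the type mismatch, with distributions represented as dictionaries (purely illustrative names):

```python
# X is a finite set; a distribution over X is a dict mapping elements of X to weights
X = {"a", "b"}
theta1 = {"a": 0.5, "b": 0.5}   # an element of D(X): keys come from X
theta2 = {"a": 1.0, "b": 0.0}
Theta = [theta1, theta2]        # a finite subset of D(X)

# An element of Delta(Theta) is keyed by the distributions themselves
# (here, by their positions in Theta), not by elements of X.
delta = {0: 0.25, 1: 0.75}

# theta1's keys live in X; delta's keys index Theta. So theta1 is not an
# element of Delta(Theta), even though both are "probability vectors".
print(set(theta1) == X, set(delta) == set(range(len(Theta))))

# The canonical embedding sends each theta to the point mass on theta:
def dirac(i):
    return {j: (1.0 if j == i else 0.0) for j in range(len(Theta))}

print(dirac(0))  # {0: 1.0, 1: 0.0}
```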
|probability|
1
Testing for convergence - $\int_{0}^{\infty} \frac{x^4}{e^{\sqrt{x}}} dx$
How do I go about testing for the convergence of the following improper integral: $$\int_{0}^{+\infty} \frac{x^4}{e^{\sqrt{x}}} dx$$ I don't suppose it's possible (or practical) to evaluate its antiderivative. I tried testing around for an asymptotically equivalent function in the neighbourhood of $+\infty$ , I've also tried to look for a fitting function to make the comparison test; both to no avail.
Hint The substitution $x = u^2, dx = 2 u \,du$ , transforms the integral to $$2 \int_0^\infty u^9 e^{-u} \,du.$$ So, in fact, the integral has value $2 \cdot 9!$ .
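The substitution can be confirmed symbolically; SymPy evaluates the Gamma-type integral in closed form, and a numerical evaluation of the original integral agrees (just a check):

```python
import sympy as sp

u, x = sp.symbols('u x', positive=True)

# After x = u^2, dx = 2u du, the integral becomes 2 * Gamma(10) = 2 * 9!
gamma_form = 2 * sp.integrate(u**9 * sp.exp(-u), (u, 0, sp.oo))
print(gamma_form)  # 725760

# Numerical confirmation of the original improper integral
numeric = sp.Integral(x**4 * sp.exp(-sp.sqrt(x)), (x, 0, sp.oo)).evalf()
print(numeric)
```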
|calculus|integration|convergence-divergence|improper-integrals|
0