| title | question_body | answer_body | tags | accepted |
| string | string | string | string | int64 |
|---|---|---|---|---|
Solving exercise 2.3.5 in Vershynin's HDP book with the best choice of c
|
I am starting to work my way through Vershynin's High-Dimensional Probability and have become stuck on Exercise 2.3.5. The problem is as follows: Let $X_i \sim \text{Bern}(p_i)$ ( $i\in\{1,\dots,N\}$ ) be independent. Denote $S_N := \sum_i X_i$ and $\mu := ES_N$ . Then show that there is some absolute constant $c>0$ such that for all $\delta\in (0,1]$ , we have $$P(|S_N-\mu|\geq \delta\mu)\leq 2\exp(-c\mu\delta^2).$$ I have applied Chernoff's inequality to transform the problem into showing the following inequality: $$\left(\frac{e}{1+\delta}\right)^{1+\delta} + \left(\frac{e}{1-\delta}\right)^{1-\delta} \leq 2\exp(1-c\delta^2).$$ Following a process similar to the hint given by the answer to this Math.SE question, I've shown the inequality for some $c$. However, the approach given there requires one to bound each term on the left-hand side by half the right-hand side; I think that this loses some efficiency, since one needs a single choice of $c$ that bounds each term separately even
|
That $c=1/2$ is the largest $c$ to make the inequality true can be seen from the Taylor expansions of both sides. I will do that at $0$ up to order $4$. To save some typing, I will write $x$ for $\delta$ . Note that both sides are even functions of $x$ , so only even powers of $x$ appear. We want to show that \begin{align} \big(\frac{e}{1+x}\big)^{1+x} + \big(\frac{e}{1-x}\big)^{1-x} &= 2e\big(1-\frac{x^2}{2}+\frac{x^4}{24}\big)+O(x^6),\\ 2e^{1-cx^2}&=2e\big(1-cx^2 + \frac{c^2}{2}x^4\big)+ O(x^6). \end{align} The second one is very easy, and it follows from $e^{1-cx^2}=e\cdot e^{-cx^2}$ and the standard Maclaurin series for $e^x$ . So if we establish the first one, then comparing the $x^2$ coefficients shows that $c=1/2$ is the largest $c$ for which the inequality can hold, and at $c=1/2$ it does indeed hold near $0$, since the coefficient of $x^4$ on the right satisfies $\frac{c^2}2=\frac18>\frac1{24}$ . To establish the first expansion, we use the standard series $$ \ln(1+x)=x-\frac{x^2}2+\frac{x^3}3-\frac{x^4}4+O(x^5), $$ to compute everything up to order $4$ . But
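The remaining order-$4$ computation can be sketched as follows (my own filling-in of the step the answer leaves off at):

```latex
% Expand the exponent of the first term:
(1+x)\bigl(1-\ln(1+x)\bigr)
  = (1+x)\Bigl(1-x+\tfrac{x^2}{2}-\tfrac{x^3}{3}+\tfrac{x^4}{4}\Bigr)+O(x^5)
  = 1-\tfrac{x^2}{2}+\tfrac{x^3}{6}-\tfrac{x^4}{12}+O(x^5).
% Exponentiating (the u^2/2 term contributes x^4/8, and -1/12 + 1/8 = 1/24):
\Bigl(\tfrac{e}{1+x}\Bigr)^{1+x}
  = e\exp\Bigl(-\tfrac{x^2}{2}+\tfrac{x^3}{6}-\tfrac{x^4}{12}+O(x^5)\Bigr)
  = e\Bigl(1-\tfrac{x^2}{2}+\tfrac{x^3}{6}+\tfrac{x^4}{24}\Bigr)+O(x^5).
% Adding the same expansion with x replaced by -x cancels the odd powers:
\Bigl(\tfrac{e}{1+x}\Bigr)^{1+x}+\Bigl(\tfrac{e}{1-x}\Bigr)^{1-x}
  = 2e\Bigl(1-\tfrac{x^2}{2}+\tfrac{x^4}{24}\Bigr)+O(x^6).
```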
|
|probability|inequality|
| 0
|
Russell's paradox: intuition behind schema of separation versus comprehension
|
I am trying to understand some ideas behind the ZFC axioms and I am following a book by Thomas Jech (Set Theory, 3rd edition, 2000). Unfortunately he is rather "to the point" and does not dwell too much on expanding on any intuitive ideas behind the axioms. In particular I am eager to understand some of the intuition behind Russell's paradox and why this should lead to the schema of comprehension being dropped in favour of the schema of separation; Axiom Schema of Separation : If $P$ is a property with parameter $p$ then for any $X$ and $p$ there exists a set $Y=\{u\in X:P(u,p)\}$ that contains all those $u\in X$ that have property $P$ Schema of comprehension : If $P$ is a property then there exists a set $Y=\{x:P(x)\}$ After reading page 4 from Jech he writes about Russell's paradox showing that the set $S=\{X:X\not\in X\}$ cannot exist (or maybe should not exist in an axiomatic framework). This can be seen to use the property $P(X)=1\Longleftrightarrow X\not\in X$ . I must admit I vi
|
Great question! Many paradoxes are formulated by exploiting self-reference. A classic example is the liar's paradox: how do we assign a truth value to the sentence "This sentence is not true"? By referring to itself, we get a circular problem in which the sentence must be false if it is true, but true if it is false. Russell's paradox deals with a particular type of self-reference called being "impredicative." A definition for a new set $X$ is called impredicative if the definition ranges over a class of sets that already contains $X$ . The Russell set $S=\{X|X\not\in X\}$ that you mention relies on this feature: the condition $X\not\in X$ ranges over all possible sets, which includes the set $S$ that we're trying to define. This is how we invoke the paradox, by asking if $S\in S$ : if so, the condition for entry $X\not\in X$ shows that it is not; if not, the condition for entry shows that it is. Thus, full comprehension gives rise to self-reference in the form of sets that can have impre
|
|elementary-set-theory|
| 1
|
Determine the ellipse tangent to the $x$ and $y$ axes at known points and also tangent to a given line at an unknown point
|
I'd like to determine the ellipse that is tangent to the $x$ axis at $\mathbf{r_1} = (a, 0), a \gt 0$ and the $y$ axis at $\mathbf{r_2} = (0, b), b \gt 0$ , and also tangent to the line $\mathbf{n}\cdot (\mathbf{r} - \mathbf{r_0}) = 0 $ at an unknown point $\mathbf{r_3}$ , where $\mathbf{r} = [x, y]^T$ and $\mathbf{n}$ and $\mathbf{r_0}$ are given 2-vectors. This is the task. My attempt: The equation of the ellipse is $ (\mathbf{r - C})^T Q (\mathbf{r - C}) = 1 $ where $\mathbf{C}=[C_x, C_y]^T$ is the unknown center, and $Q$ is $2 \times 2$ symmetric and positive definite, also unknown. That makes a total of $5$ unknowns. The gradient of the above function of $\mathbf{r}$ is $ g = 2 Q (\mathbf{r - C}) $ At $\mathbf{r_1}$ the gradient is pointing along the negative $\mathbf{j}$ direction, so that $ Q ( \mathbf{r_1 - C}) = - k \ \mathbf{j} , k \gt 0$ It follows that $ \mathbf{r_1 - C} = - k Q^{-1} \mathbf{j} \tag{1}$ Substituting this into the ellipse equation yields $ k = \dfrac{1}{\sqr
|
Using coordinate geometry: an ellipse that touches the axes at $(\lambda a,0)$ and $(0,\mu b)$ and the line $\dfrac{x}{a}+\dfrac{y}{b}=1$ is given by $$ \left( \frac{x}{\lambda a}+\frac{y}{\mu b}-1 \right)^2=\frac{4xy}{ab} \left( \frac{1}{\lambda}-1 \right) \left( \frac{1}{\mu}-1 \right)$$ Third point of contact: $$\frac{1}{\lambda+\mu-2\lambda \mu} \begin{pmatrix} \lambda (1-\mu) a \\ \mu(1-\lambda) b \end{pmatrix}$$ Brianchon point: $$\frac{1}{1-\lambda \mu} \begin{pmatrix} \lambda (1-\mu) a \\ \mu(1-\lambda) b \end{pmatrix}$$ Please also refer to older posts of mine here and here .
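A numeric spot check of the displayed conic (the sample values of $\lambda,\mu,a,b$ are my own): the two axis points and the claimed third contact point satisfy the equation, and the contact point lies on the line $\frac{x}{a}+\frac{y}{b}=1$.

```python
# Sanity check of the conic above; lam, mu, a, b are arbitrary sample values.
lam, mu, a, b = 0.3, 0.6, 2.0, 3.0

def conic(x, y):
    # (x/(lam a) + y/(mu b) - 1)^2 - (4xy/(ab)) (1/lam - 1)(1/mu - 1)
    lhs = (x / (lam * a) + y / (mu * b) - 1.0) ** 2
    rhs = 4.0 * x * y / (a * b) * (1.0 / lam - 1.0) * (1.0 / mu - 1.0)
    return lhs - rhs

# Claimed third point of contact.
D = lam + mu - 2.0 * lam * mu
x3, y3 = lam * (1.0 - mu) * a / D, mu * (1.0 - lam) * b / D

assert abs(conic(lam * a, 0.0)) < 1e-9   # tangency point on the x-axis
assert abs(conic(0.0, mu * b)) < 1e-9    # tangency point on the y-axis
assert abs(conic(x3, y3)) < 1e-9         # third contact point is on the conic
assert abs(x3 / a + y3 / b - 1.0) < 1e-9  # ... and on the line x/a + y/b = 1
```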
|
|solution-verification|conic-sections|
| 0
|
Understanding The Math Behind Elchanan Mossel’s Dice Paradox
|
So earlier today I came across Elchanan Mossel's Dice Paradox , and I am having some trouble understanding the solution. The question is as follows: You throw a fair six-sided die until you get 6. What is the expected number of throws (including the throw giving 6) conditioned on the event that all throws gave even numbers? Quoted from Jimmy Jin in "Elchanan Mossel’s dice problem" In the paper it goes on to state why a common wrong answer is $3$. Then afterwards it explains that this problem has the same answer as, "What is the expected number of times you can roll only $2$’s or $4$’s until you roll any other number?" I don't understand why this is the case. If the original problem is asking for specifically a $6$, shouldn't that limit many of the possible sequences? I also attempted to solve the problem using another method, but got an answer different from both $3$ and the correct answer of $1.5$. I saw that possible sequences could have been something like: $$\{6\}$$ $$\{2,6\}, \{4,6\}
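A quick rejection-sampling simulation (my own sketch; the trial count and seed are arbitrary choices) agrees with the stated correct answer of $1.5$:

```python
import random

# Throw a fair die until a 6 appears; keep only the runs in which every
# throw was even, and average their lengths.
random.seed(0)
lengths = []
for _ in range(200_000):
    rolls = []
    while True:
        r = random.randint(1, 6)
        rolls.append(r)
        if r == 6:
            break
    if all(r % 2 == 0 for r in rolls):  # condition: all throws gave even numbers
        lengths.append(len(rolls))

avg = sum(lengths) / len(lengths)  # close to 1.5, not 3
```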
|
The correct answer is 3. Most people seem to forget to apply the condition as stated: "all rolls gave even numbers", i.e., "each and every roll gave an even number". Apply this to each roll, and you very quickly and easily get the answer 3. People seem to have ideas about how to construct the experiment: "discard sequences with odd numbers at the end", or "if you roll an odd number then stop and start again", or "if you roll an odd number then discard this roll and roll again", or some other method. However, note that the original question never says so, and never even considers these situations. Instead, it is giving us a condition, and this condition is a FACT, a GIVEN. Even numbers already occurred. So we cannot just introduce odd numbers into the experiment, just because our general encyclopedic knowledge is that a die also has odd numbers. Note also that a fair die has equal probabilities only when unconditioned. After applying the condition, obviously the probability of
|
|probability|conditional-expectation|means|
| 0
|
Need hints (advice) to prove $(\forall a,b,c,d \in \mathbb{R}) (a < b \wedge c<d) \Rightarrow ad+bc < ac +bd$
|
I'm trying to prove this ( source : my uni's textbook says that it's trivial). $$(\forall a,b,c,d \in \mathbb{R})\ (a<b \wedge c<d) \Rightarrow ad+bc < ac+bd$$ So far, I've managed to get to the point where $$ab $$ ac 3 (using 2.) $$ ad And I sort of don't know how to use this information to get to the desired result. I feel like I'm going around in circles.
|
Hint : $$ 0 < (b-a)(d-c) $$
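One standard reading of such a hint (an assumption on my part, since part of the hint was lost in formatting): start from a product of differences, which is positive by the hypotheses $a<b$ and $c<d$, and expand:

```latex
0 < (b-a)(d-c) = bd - bc - ad + ac
\quad\Longleftrightarrow\quad
ad + bc < ac + bd .
```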
|
|algebra-precalculus|proof-writing|arithmetic|axioms|
| 0
|
General method for solving problems like $17 \mid 2x + 3y \Longleftrightarrow 17 \mid 9x + 5y$
|
Note: This is not a duplicate of Understanding a proof that $17\mid (2x+3y)$ iff $17\mid(9x +5y)$ or Understanding a proof that $2x + 3y$ is divisible by $17$ iff $9x + 5y$ is divisible by $17$ . I was reading through Naoki Sato's notes on number theory. I am somewhat unsatisfied with the given solution to this problem: Example. Let $x$ and $y$ be integers. Prove that $2x + 3y$ is divisible by $17$ iff $9x + 5y$ is divisible by $17$ . Solution. $17 \mid 2x + 3y \Rightarrow 17 \mid 13(2x + 3y)$ , or $17 \mid 26x + 39y \Rightarrow 17 \mid 9x + 5y$ , and conversely, $17 \mid 9x + 5y \Rightarrow 17 \mid 4(9x + 5y)$ , or $17 \mid 36x + 20y \Rightarrow 17 \mid 2x + 3y$ . I do understand the solution but it seems like an unmotivated approach. How would one get the numbers $13$ and $4$ except for clever guessing? Is there a general method for solving such problems? That is, some theorem that trivializes such problems. I don't know much about modular arithmetic but I've noticed that $13$ and $4
|
I think a variation of Euclid's GCD algorithm can help out. Line 1: $9x+5y=4(2x+3y)+x-7y$ Line 2: $2x+3y=2(x-7y)+17y$ Stop here since one of the coefficients of x or y is 0. Suppose 17 divides $2x+3y$ , then it must divide $(x-7y)$ by line 2. If it divides both, 17 must also divide $(9x+5y)$ by line 1. Combine the lines to get $(9x+5y)=4[2(x-7y)+17y]+x-7y=9(x-7y)+4 \cdot 17y$ So if $17$ divides $9x+5y$ , it also divides $(x-7y)$ . But by line 2, If 17 divides $(x-7y)$ then it divides $(2x+3y)$ .
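The multipliers $13$ and $4$ can also be produced mechanically with a modular inverse rather than guessed; a short sketch of that alternative route (Python 3.8+ for `pow(a, -1, m)`):

```python
# We want k with k*(2x + 3y) == 9x + 5y (mod 17) for all x, y,
# i.e. 2k == 9 (mod 17) and 3k == 5 (mod 17).
m = 17
k = 9 * pow(2, -1, m) % m      # force the x-coefficients to match
assert k == 13
assert (k * 3) % m == 5        # the y-coefficients then match automatically

# The reverse direction uses the inverse multiplier:
k_inv = pow(k, -1, m)
assert k_inv == 4
assert (k_inv * 9) % m == 2 and (k_inv * 5) % m == 3
```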
|
|elementary-number-theory|modular-arithmetic|divisibility|
| 0
|
Convergence of an improper integral involving the difference of roots
|
I have to study the convergence of this integral: $\int_{0}^{1} \frac{1}{\sqrt x - \sqrt[5]{x}} dx$ I rewrite it as: $\int_{0}^{a} f (x) dx + \int_{a}^{1} f (x) dx$ (with $0<a<1$ ) in order to deal with the problems in 0 and 1 separately. For the first one, since $ \sqrt x = o(\sqrt [5] x )$ for $x \rightarrow 0$ , it should be right to say that $f(x) \sim \frac{1}{- \sqrt[5]{x}}$ , so it converges. As for the other (here's my doubt), I apply a change of variable ($t=1-x$) in order to obtain: $\int_{a}^{1} f (x) dx = \int_{0}^{1-a} \frac{1}{\sqrt{1-t} - \sqrt[5]{1-t}} dt$ Then, using the relation $(1+x)^{b} \sim 1 + b x $ (for $ x \rightarrow 0$ ), I rewrite $f(t)$ as: $\frac{1}{(1-\frac {1}{2}t) - (1-\frac {1}{5}t)} $ so that $ \int_{0}^{1-a} \frac{1}{\sqrt{1-t} - \sqrt[5]{1-t}} dt \sim \int_{0}^{1-a} -\frac{10}{3t} dt$ which diverges (negatively) My question: Am I overcomplicating the whole thing, especially the second integral? I feel there should be a faster way for it but at the moment
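A quick numeric confirmation of the endpoint behaviour used above: with $u = 1-x$, the integrand behaves like $-\frac{10}{3u}$ near $x=1$ (the sample value of $u$ is arbitrary):

```python
import math

# Near x = 1 we have sqrt(x) - x**(1/5) ~ -(3/10)(1 - x),
# so (1 - x) * f(x) should approach -10/3.
u = 1e-3
x = 1.0 - u
val = u / (math.sqrt(x) - x ** 0.2)
assert abs(val + 10.0 / 3.0) < 0.01
```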
|
We can use that $$A^5-B^5=(A-B)(A^4+A^3B+A^2B^2+AB^3+B^4)$$ to show that $$\int_{0}^{1} \frac {1}{\sqrt[5]x - \sqrt x} \;d x \ge \int_{0}^{1} \frac{\sqrt[5]{x^4}}{x- x^2\sqrt x} \;d x$$ which is problematic at $x=1$ ; indeed $$ \frac{\sqrt[5]{x^4}}{x- x^2\sqrt x} = \frac{\sqrt[5]{x^4}}{x}\frac{1}{1- x\sqrt x}$$ and by $x=1-u$ the second factor becomes $$\frac{1}{1- x\sqrt x}=\frac{1}{1- (1-u)\sqrt {1-u}}=\frac{1}{1-\sqrt {1-u}+u\sqrt {1-u}}\sim \frac {2}{3u}$$ which is problematic at $u=0$ .
|
|real-analysis|calculus|improper-integrals|
| 0
|
How big do hyper-reals get?
|
Let's assume there is some non-standard model of the reals containing a number $N$ that is larger than any real number. Suppose $\exists N\in {^*}\mathbb{R} \,( \forall r\in\mathbb{R}: r < N)$ . Now I know I can find even larger unlimited hyper-reals $2N,3N...$ which are smaller than $N^2,N^3...$ which are still hyper-reals because of closure under multiplication. However, for ordinal numbers I have heard that matters become tricky around the $\varepsilon = \omega^{\omega^{\omega^{...^{...}}}}$ numbers. So given my unlimited $N$ I think I should be able to reach $N^N, N\uparrow\uparrow N$ and so on. As far as I can work out, Knuth's arrow notation is defined recursively in $\mathcal{L}_\mathbb{R}$ , so I think $N\uparrow^n N$ is well defined for finite $n$ . I am less certain if it will transfer and give me $N\uparrow^N N$ . Something like the process used to compute Rayo's number on the other hand, seems outside of the bounds of $\mathcal{L}_\mathbb{R}$ , so I would be very surprised if it coul
|
I think the most convincing way of answering this question is in an axiomatic approach to nonstandard analysis, such as Nelson's IST. Here infinitesimals (as well as unlimited numbers) are found within the "ordinary" real number line $\mathbb R$ . They are detected by a new one-place predicate "standard". Thus, a positive infinitesimal is a real number smaller than every positive standard real number. An $N$ such as you are describing is a nonstandard natural number (bigger than every standard natural number). IST is a conservative extension of ZFC. From this point of view, it is evident that all the usual constructions you mentioned carry over, and you can construct as big a number compared to $N$ as you wish, by all the usual techniques. Provided such constructions do not use the axiom of choice, they can also be carried out in the framework SPOT which is conservative over ZF.
|
|nonstandard-analysis|big-numbers|
| 1
|
How to evaluate $\int\left(\frac{\sin(x)}{2\sin(x)- x(1+\cos(x))}\right)^2dx$?
|
I saw this problem: $$\int\left(\frac{\sin(x)}{2\sin(x)- x(1+\cos(x))}\right)^2dx$$ I tried to solve this problem and I found a strange and unsatisfactory solution using differential equations. Let $$I:=\int\left(\frac{\sin(x)}{2\sin(x)- x(1+\cos(x))}\right)^2dx$$ and let $x=2t$ . Then $$I=\frac{1}{2}\int\left(\frac{\sin(t)\cos(t)}{\sin(t)\cos(t)- t\cos^2(t)}\right)^2dt=\frac{1}{2}\int\left(\frac{\sin(t)}{\sin(t)- t\cos(t)}\right)^2dt$$ Since $\frac{d}{dt}\frac{v}{u} = \frac{uv'-vu'}{u^2}$ and the integral is in the form $\frac{uv'-vu'}{u^2}$ , such that $u =\sin(t)- t\cos(t)$ and $u' =t\sin(t)$ , we need to find a function $v(t)$ such that $v'(t)(\sin(t)- t\cos(t)) -v(t)(t \sin(t)) =\sin^2(t)$ or $$-t( \sin(t)v(t) +\cos(t) v'(t) ) + v'(t) \sin(t) =\sin^2(t)$$ Since $\sin^2(t)$ is not a multiple of $t$ , we can assume that $\sin(t)v(t) +\cos(t) v'(t)=0$ , i.e., $v(t) =- \cos(t)$ , and then $v'(t) \sin(t)=\sin^2(t)$ will solve this differential equation. So $$I=\frac{-\cos(t)}{2(\sin(t)
|
Converting the integrand to trigonometric functions of the half-angle yields $$ \begin{aligned} I & =\int\left(\frac{\sin x}{2 \sin x-x(1+\cos x)}\right)^2 d x \\ & =\int\left(\frac{2 \sin \frac{x}{2} \cos \frac{x}{2}}{4 \sin \frac{x}{2} \cos \frac{x}{2}-2 x \cos ^2 \frac{x}{2}}\right)^2 d x \\ & =\int \frac{\tan ^2 \frac{x}{2}}{\left(2 \tan \frac{x}{2}-x\right)^2} d x \end{aligned} $$ Letting $y=2\tan \frac{x}{2} -x$ , then $$ d y=\left(\sec ^2 \frac{x}{2}-1\right) d x=\tan ^2 \frac{x}{2} d x $$ and $$ \begin{aligned} I & =\int \frac{d y}{y^2}=-\frac{1}{y}+C =\frac{1}{x-2 \tan \frac{x}{2}}+C\\ & =\frac{\cos \frac{x}{2}}{x \cos \frac{x}{2}-2 \sin \frac{x}{2}}+C \end{aligned} $$
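A quick numeric check of the antiderivative (the sample point and step size are arbitrary): the central difference of $\frac{1}{x-2\tan(x/2)}$ should match the original integrand.

```python
import math

def F(x):
    # Claimed antiderivative.
    return 1.0 / (x - 2.0 * math.tan(x / 2.0))

def integrand(x):
    return (math.sin(x) / (2.0 * math.sin(x) - x * (1.0 + math.cos(x)))) ** 2

x0, h = 2.0, 1e-5
deriv = (F(x0 + h) - F(x0 - h)) / (2.0 * h)  # central difference F'(x0)
assert abs(deriv - integrand(x0)) < 1e-5
```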
|
|calculus|integration|indefinite-integrals|alternative-proof|
| 0
|
Understanding Metatheory and the Broader Picture of Foundational Set Theory
|
So I'm trying to put together a clearer picture of what is going on when we study set theory. I'll describe my current picture which I'd appreciate some feedback on, and I'll ask some specific questions as well. So from the start: if we take a Platonist perspective (which I was taught as the most pedagogically effective philosophy to have when learning set theory) then we assume sets in some way or another exist along with the intuitive properties like membership. Then when we list the ZFC axioms (which can be done via some bootstrapping process without need for sets) we are just saying that sets satisfy these axioms. Using our intuitive mathematical reasoning and the axioms we can develop all everyday mathematics, including mathematical logic. Is it fair to say that this intuitive notion of a set and mathematical reasoning is the `most' meta metatheory? However, now having developed mathematical logic, using this metatheory we can consider ZFC formally as a mathematical object al
|
I'll take a crack at this. Any time you make a logical argument, you maybe presume some things that are true (your axioms ) and always presume some rules about how you can work with true premises to reach true conclusions (your rules ). Together, they form a theory . Maybe I'm doing some set theory, and I'm working in the theory $\mathsf{ZFC}$ . Or maybe I'm doing some arithmetic and I'm working in Peano Arithmetic, $\mathsf{PA}$ . But then I get curious, or maybe a little manic, or maybe my name is Hilbert, and I start thinking things like "I wonder whether I can be sure the assumptions of my theory don't contradict each other?" or "I wonder if I can prove everything that is true with this theory?" or "My name is Hilbert and I can prove everything with this theory." Then a smart kid by the name of Gödel comes along and shows that actually, you can't. You can construct a model of $\mathsf{PA}$ inside $\mathsf{PA}$ and then show $\mathsf{PA}\not\vdash\ulcorner\mathsf{PA\ is\ consistent}\urc
|
|logic|set-theory|foundations|meta-math|
| 0
|
Feasibility problem with polynomial inequalities
|
I have several nonconvex quadratic multivariate polynomials $(f_i)_{i\in I}$ for which I need to find a point $\overline{x}$ such that $$(\forall i\in I)\quad f_i(\overline{x})\leq 0.$$ I feel like there should be an (at least heuristically) reasonable algorithm for carrying this out using the gradients of the $(f_i)_{i\in I}$ . However, I am struggling to find one. Work so far: I am familiar with the case where the functions $f_i$ are such that $\nabla f_i$ is Lipschitz continuous (in which case you can take steps in the direction of negative gradients for violated inequalities, and you rescale the gradient by a parameter bounded by the inverse of the Lipschitz constant). However, for the non-Lipschitz case (i.e. these multivariate polynomials), I am not sure what is really "standard" in this field. It looks like there are several approaches in the literature, e.g., here , but I don't know if this is actually implementable (e.g., in their algorithm they just say "solve (3.3)" which t
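A minimal heuristic sketch of the scheme described above: for each violated inequality, step along its negative gradient, with a diminishing step size replacing the Lipschitz-based rescaling. The example polynomials, starting point, and step-size schedule are my own assumptions, not from any particular reference:

```python
import math

def find_feasible(fs, grads, x0, t0=0.1, iters=5000):
    """Diminishing-step gradient heuristic for f_i(x) <= 0 for all i."""
    x = list(x0)
    for k in range(iters):
        violated = [i for i, f in enumerate(fs) if f(x) > 0]
        if not violated:
            return x, k                 # every f_i(x) <= 0: feasible point found
        t = t0 / math.sqrt(k + 1)       # diminishing steps, no Lipschitz constant
        for i in violated:
            g = grads[i](x)
            x = [xi - t * gi for xi, gi in zip(x, g)]
    return x, iters

# Example with a nonconvex feasible set: inside a circle, on a hyperbola branch.
fs = [lambda v: v[0] ** 2 + v[1] ** 2 - 4.0,   # x^2 + y^2 <= 4
      lambda v: 1.0 - v[0] * v[1]]             # x*y >= 1
grads = [lambda v: (2.0 * v[0], 2.0 * v[1]),
         lambda v: (-v[1], -v[0])]

x, steps = find_feasible(fs, grads, [0.5, 3.0])
assert all(f(x) <= 1e-9 for f in fs)
```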
|
Since your functions are multivariate polynomials, how about computing a Gröbner basis of the system defining the feasible set, so that it is triangularized? This is convenient because you will always end up with one univariate equation. Other methods, such as the S-procedure or LMIs, should also be considered.
|
|polynomials|optimization|reference-request|numerical-optimization|non-convex-optimization|
| 0
|
Why the Löwenheim–Skolem theorem asserts the non-existence of such predicates in 1st order logic
|
Suppose there was a predicate, in the language of 1st order $ \mathsf {PA} $ , such that it is only true for standard natural numbers, i.e., it accepts ALL and ONLY standard natural numbers, and it rejects any and every non-standard natural number. Since PA is $ \boldsymbol \omega $ -consistent, any such predicate should contradict the Löwenheim-Skolem theorem. But I couldn't find a proof which states such predicates should definitely not exist. So is there any proof of their non-existence? Or are there any such predicates available, and is there any paper or material that mentions anything about such predicates? EDIT - It seems that for some reason @spaceisdarkgreen isn't able to respond to my comments. The answer he gave gets 60% of the question, but there's one part of the question that answer doesn't account for - Is it true to say that - " In the interpretation of any non-standard model No such predicates exist ( in 1st order PA ) which are true for ALL and ONLY standard
|
This is called the overspill principle. If there were such a predicate $\varphi(x)$ for some model $M$ , then $M\models\varphi(0)$ and $M \models \forall x( \varphi(x)\to \varphi(x+1))$ (since the successor of a standard element is standard), so by the induction schema, $M\models \forall x\varphi(x).$ So the only model with such a predicate is the standard model.
|
|model-theory|foundations|peano-axioms|formal-systems|
| 1
|
Find this limit $\lim_{x \to 0}\frac{2x^7-\int_{x^2}^{x^3} \text{sin}(t^2)dt}{\text{tg}(x^6)}$
|
Please tell me how to calculate this limit: $$ \lim_{x \to 0}\frac{2x^7-\displaystyle{\int_{x^2}^{x^3} \text{sin}(t^2)dt}}{\text{tg}(x^6)} $$ I want to apply L'Hopital's rule, but the problem is that the numerator contains an integral with variable limits... Should I try expanding the integral in a series? Or apply the mean value theorem for the integral? In this case, does the one-sided nature of the limit matter?
|
Using the fact that $\lim_{x\to 0}(\tan x) /x=1$ we can write the desired limit as $$\lim_{x\to 0}\frac{2x^7-\int_{x^2}^{x^3}\sin t^2\,dt}{x^6}$$ We can split this limit into two terms (based on numerator) and the first term tends to $0$ so that desired limit equals the limit of expression $$\frac{1}{x^6}\int_0^{x^2}\sin t^2\,dt-\frac{1}{x^6}\int_0^{x^3}\sin t^2\,dt$$ The first term is even function of $x$ and hence it is sufficient to consider $x\to 0^+$ and then using substitution $u=t^3$ the first term equals $$\frac{1}{3x^6}\int_0^{x^6}\frac{\sin u^{2/3}}{u^{2/3}}\,du$$ Since the integrand tends to $1$ as $u\to 0$ by fundamental theorem of calculus the above expression tends to $1/3$ . Considering $x\to 0^-$ and $x\to 0^+$ separately and using substitution $u=t^2$ we can prove that $$\frac{1}{x^6}\int_0^{x^3}\sin t^2\,dt\to 0$$ and hence the desired limit is $1/3$ .
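A numeric sanity check of the value $1/3$ (the sample point $x = 0.005$ and the Simpson-rule resolution are arbitrary choices; at nonzero $x$ the expression still carries an error term of size roughly $2x$):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson rule (n even); gives the signed integral if b < a."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

x = 0.005
integral = simpson(lambda t: math.sin(t * t), x ** 2, x ** 3)
val = (2 * x ** 7 - integral) / math.tan(x ** 6)
assert abs(val - 1.0 / 3.0) < 0.02
```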
|
|limits|
| 0
|
How many nilpotent matrices are there in $M_n(\mathbb R)$ up to similarity?
|
I am trying to count all nilpotent matrices in $M_n(\mathbb R)$ up to similarity. I did the same exercise for idempotent matrices and it was quite simple. I realised that the rank of an idempotent matrix is a non-negative integer and that two idempotent matrices are similar iff they have the same rank. So I got the answer $n+1$ . However, it doesn't seem to be simple for nilpotent matrices. Things which are clear to me: If two nilpotent matrices are similar, they must have the same order of nilpotence. There is exactly one class of similarity for nilpotent matrices of order $n$ . The question I would like to ask: How many nilpotent matrices of order $k$ are there up to similarity? The answer is $1$ if $k=n$ . The answer is $1$ again if $k=1$ . (The null matrix is the only matrix which has nilpotence of order $1$ and it is its own similarity class.) What about other values of $k$ ? By looking at Jordan normal form, here's what I found about $n=2$ and $n=3$ . How to generalize? For $n=2$ , order
|
This is just a guess, but it seems to fit the general idea of what you have found. Consider the number of partitions $p(n)$ . This is the number of distinct possible Jordan decompositions of an $n\times n$ matrix. Given any specific partition, say $n_1 + \cdots + n_k$ , $n_1 \geq \cdots \geq n_k$ , the nilpotency index of a matrix with this Jordan decomposition is $n_1$ , since matrix multiplication can be done block-wise in the diagonal. So the number of $n \times n$ matrices with nilpotency index $n_1$ , up to equivalency, but allowing for permuted blocks, is the number of partitions whose biggest term is $n_1$ (call this $p(n, n_1)$ , say) times the distinct (i.e., factoring multiplicities) permutations of the blocks. If you just want it up to equivalency, the answer is $p(n, n_1)$ . I think the former is very hard to know a general formula for, though, because it depends heavily on the specific partition. However, using a combinatorial argument outlined by @zwim, the total number o
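The count $p(n, n_1)$ described above (partitions of $n$ whose largest part is exactly $n_1$) can be sketched as follows; function names are mine:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def parts_le(n, m):
    """Number of partitions of n with all parts <= m."""
    if n == 0:
        return 1
    if n < 0 or m == 0:
        return 0
    return parts_le(n, m - 1) + parts_le(n - m, m)

def nilpotent_classes(n, k):
    """Similarity classes of n x n nilpotent matrices with nilpotency index k:
    partitions of n whose largest part is exactly k."""
    return parts_le(n - k, k)

assert nilpotent_classes(4, 2) == 2   # Jordan types {2,2} and {2,1,1}
assert nilpotent_classes(3, 3) == 1   # single full Jordan block
# Summing over all indices recovers p(n); p(5) = 7.
assert sum(nilpotent_classes(5, k) for k in range(1, 6)) == 7
```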
|
|linear-algebra|combinatorics|matrices|
| 0
|
Prove $1 - \frac{1}{2} x^2 \leq \cos x$ using Maclaurin series
|
I want to prove the following inequality using Maclaurin series (for all $\mathbb{R}$ ). $$1 - \frac{1}{2} x^2 \leq \cos x$$ I have tried: $$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{\sin c}{5!} x^5 \geq 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^5}{5!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} \left( 1 - \frac{x}{5} \right)$$ $c \in (0, x)$ But the last term is not greater than $0$ for $x > 5$ . Do I have to take into account the whole Maclaurin series?
|
If $\left| x \right| > 2$ , then $1 - \frac {1}{2} {x}^{2} < -1$ . Since $\cos \left( x \right) \ge - 1$ for all $x \in \mathbb {R}$ , $\left| x \right| > 2$ implies $\cos \left( x \right) > 1 - \frac {1}{2} {x}^{2}$ . Below we assume that $\left| x \right| \le 2$ . Recall $$ \cos \left( x \right) = \sum_{k \ge 0} \frac {{\left( - 1 \right)}^{k}}{\left( 2 k \right)!} {x}^{2 k}. $$ So $$ \cos \left( x \right) - \Big( 1 - \frac {1}{2} {x}^{2} \Big) = \sum_{k \ge 2} \frac {{\left( - 1 \right)}^{k}}{\left( 2 k \right)!} {x}^{2 k}. $$ Now, we separate the series into two, one with $k \mapsto 2 k$ and the other with $k \mapsto 2 k + 1$ . Consequently, $$ \begin{align} \sum_{k \ge 2} \frac {{\left( - 1 \right)}^{k}}{\left( 2 k \right)!} {x}^{2 k} & = \sum_{k \ge 1} \bigg( \frac {{x}^{4 k}}{\left( 4 k \right)!} - \frac {{x}^{4 k + 2}}{\left( 4 k + 2 \right)!} \bigg) \\ & = \sum_{k \ge 1} \frac {{x}^{4 k}}{\left( 4 k \right)!} \, \bigg( 1 - \frac {\left( 4 k \right)!}{\left( 4 k + 2 \right)!} {x}^{2}
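A brute-force numeric check of the inequality on a grid (the range and spacing are arbitrary choices):

```python
import math

# cos(x) - (1 - x^2/2) should be >= 0 everywhere; equality holds at x = 0.
worst = min(math.cos(x) - (1.0 - 0.5 * x * x)
            for x in [i / 100.0 for i in range(-1000, 1001)])
assert worst >= -1e-12
```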
|
|real-analysis|taylor-expansion|
| 0
|
Does the derivative with respect to a matrix have a Kronecker product matrix representation?
|
I'm confused why I end up with two matrices that are transposes of each other when I take a tensor inner product of a third order tensor with a vector, when I use two different Kronecker product matrix tensor representations that I believe are equal. Let $A$ be a matrix and $x$ and $y$ be vectors. Using index notation, $$ y_i = A_{ip}x_p $$ $$ \begin{aligned} \frac{\partial{y_i}}{\partial{A_{jk}}} &= \frac{\partial{(A_{ip}}x_p)}{\partial{A_{jk}}} \\ &= \delta_{ij}\delta_{pk}x_p \\ &= \delta_{ij}x_k \end{aligned} $$ So, the derivative is a third order tensor, and with $\otimes$ the tensor product $$ \frac{\partial{y}}{\partial{A}} = I \otimes x $$ But since $$ \delta_{ij}x_k = x_k\delta_{ij} $$ $$ \frac{\partial{y}}{\partial{A}} = x \otimes I $$ But the tensor product is not commutative. Also, the Kronecker product representations are different. For example, assume $x$ is $2 \times1$ and $I$ is $2 \times 2$ . $$ x_k\delta_{ij} = \begin{bmatrix}x_1 \\ x_2\end{bmatrix} \otimes \begin{bmat
|
Let's try to use index notation: $$ y_i = {A^p}_i x_p $$ So, for example, $$ \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} {A^1}_1 & {A^2}_1 \\ {A^1}_2 & {A^2}_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} $$ Now we write the differential, $$ dy_i = d{A^p}_i x_p = \delta^p_{p'}\delta^{i'}_i x_p d{A^{p'}}_{i'} = \delta^{i'}_i x_{p'} d{A^{p'}}_{i'} $$ and, as expected, $\partial y /\partial A$ is a third order tensor $\delta^{i'}_i x_{p'}$ . What is the structure of this tensor? Using the set up in the example, $$ {\left(\frac{\partial y}{\partial A}\right)^{i'}}_{ip'}= \begin{bmatrix} \begin{bmatrix} x_1 & 0 \\ x_2 & 0 \end{bmatrix} \\ \\ \begin{bmatrix} 0 & x_1 \\ 0 & x_2 \end{bmatrix} \end{bmatrix} $$ So for $i=1$ we have a matrix whose first column is $\pmb{x}$ and the second column is zero. For $i=2$ we have the reverse. Is there a way to convert this to a Kronecker product? First $\pmb{x}\otimes\pmb{I}$ : $$ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \otimes \b
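To see concretely that the two Kronecker orderings differ, here is a small pure-Python sketch (the `kron` helper is my own; it uses the standard block definition of the Kronecker product):

```python
def kron(A, B):
    """Kronecker product of matrices given as lists of rows."""
    ra, ca = len(A), len(A[0])
    rb, cb = len(B), len(B[0])
    return [[A[i // rb][j // cb] * B[i % rb][j % cb]
             for j in range(ca * cb)] for i in range(ra * rb)]

x = [[1], [2]]            # 2 x 1 vector
I = [[1, 0], [0, 1]]      # 2 x 2 identity

# The two orderings give different 4 x 2 matrices: the blocks are arranged
# differently, even though they contain the same entries.
assert kron(x, I) == [[1, 0], [0, 1], [2, 0], [0, 2]]
assert kron(I, x) == [[1, 0], [2, 0], [0, 1], [0, 2]]
assert kron(x, I) != kron(I, x)
```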
|
|matrices|tensor-products|matrix-calculus|tensors|
| 0
|
Lower bound on sample size to determine coin fairness
|
This is a conclusion I came to through very basic knowledge of statistics and error analysis. Suppose we want to determine the fairness of a coin. To do this, we need to determine the probability of the coin landing on either Heads or Tails and we do this through repeated flipping. We have an amount of uncertainty we are willing to allow in our measurement of the probability. We define the variable $f$ as follows: $$f = \begin{cases} 1, & \text{if the coin lands on Heads} \\ 0, & \text{if the coin lands on Tails} \end{cases}$$ Then our measurement of the probability $P(H)$ of the coin landing on heads is equal to the sample average of $f$ , $\bar f$ . The formula for the standard error of the measurement of $f$ through $n$ samples is: $$\Delta f = \frac{\sigma_f}{\sqrt{n}}$$ Now, since $f$ can only have values $0$ and $1$ , its standard deviation cannot be greater than $\frac{1}{2}$ . So we have: $$\Delta f \le \frac{1}{2\sqrt{n}}$$ So if we flip the coin $n$ times, we can be pretty confident our error in estim
|
Let me first formally express the way commonly used to describe the accuracy of the point estimator $\bar{F_n}$ for the parameter $f$ . Let the following inequality hold: $$\mathbb P \left (|\text{error}_n|:=|\bar{F_n}-f| < \epsilon \right) \ge \delta.$$ Then, we say that the probability that the error of estimating $f$ using $\bar{F_n}$ is less than $\epsilon>0$ is at least $\delta$ (confidence level). Now let us find out how $n$ , $\epsilon>0$ , and $\delta$ are related. One way is to use the Chebyshev inequality. Using this inequality, and noting that $f(1-f) \le \frac{1}{4}$ for $f \in [0,1]$ , we obtain $$\mathbb P \left (|\text{error}_n| < \epsilon \right) \ge 1-\frac{f(1-f)}{n\epsilon^2} \ge 1-\frac{1}{4n\epsilon^2}.$$ From this, we can make the following key observations: When $\epsilon>0$ and $\delta$ are given, if $n\ge \frac{1}{4(1-\delta)\epsilon^2}$ then $\mathbb P \left (|\text{error}_n| < \epsilon \right) \ge \delta$ . When $n$ and $\epsilon>0$ are given, then $\delta$ is at least $1-\frac{1}{4n\epsilon^2}$ , i.e, $\mathbb P \left (|\text{error}_n| < \epsilon \right) \ge \delta$ holds for some $\delta \ge 1-\frac{1}{4n\epsilon^2}.$ When $n$ and $\de
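A small simulation comparing the empirical error rate with the $\frac{1}{4n\epsilon^2}$ Chebyshev-style bound; the parameters ($n=100$, $\epsilon=0.1$, a fair coin, the trial count, and the seed) are my own arbitrary choices:

```python
import random

random.seed(0)
n, eps, trials = 100, 0.1, 2000
f_true = 0.5                       # a fair coin
bound = 1.0 / (4 * n * eps ** 2)   # P(|error| >= eps) <= 1/(4 n eps^2)

bad = 0
for _ in range(trials):
    heads = sum(random.random() < f_true for _ in range(n))
    if abs(heads / n - f_true) >= eps:
        bad += 1
empirical = bad / trials
# Chebyshev is loose here: the empirical rate sits well below the bound.
assert empirical <= bound
```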
|
|probability|statistics|standard-deviation|standard-error|
| 0
|
How to find the coefficient of $x^k$ in the expression $\prod_{p=1}^n (x^p+1)^p$?
|
This question was asked on MathOverflow. I tried to find the indefinite integral $$ f_n(x)=\int \prod_{k=1}^n \cos^k(kx)dx$$ by using Euler's formula, and putting $x=\frac{\ln y}{2i}$ I got $$ f_n(x)=-i2^{-\frac{n(n+1)}{2}-1}\int y^{-\frac{n(n+1)(2n+1)}{12}-1} \prod_{k=1}^n (y^k+1)^k dy$$ now let's define $a(n,k)$ as the coefficient of $x^k$ in the expression $\prod_{p=1}^n (x^p+1)^p$ then $$ \prod_{k=1}^n (y^k+1)^k =\sum_{k=0}^{\frac{n(n+1)(2n+1)}{6}} a(n,k) y^k$$ So $$ f_n(x)=2^{-\frac{n(n+1)}{2}-1}\sum_{k=0}^{\frac{n(n+1)(2n+1)}{6}} \frac{a(n,k)}{k-\frac{n(n+1)(2n+1)}{12}} (-i)\exp\left(-2x\left(k-\frac{n(n+1)(2n+1)}{12}\right) i\right)+c $$ and where $f_n(x)$ is real we will take the real part of the result and get $$ f_n(x)=2^{-\frac{n(n+1)}{2}-1}\sum_{k=0}^{\frac{n(n+1)(2n+1)}{6}} \frac{a(n,k)}{k-\frac{n(n+1)(2n+1)}{12}}\sin\left(2x\left(k-\frac{n(n+1)(2n+1)}{12}\right)\right)+c $$ and if $k=\frac{n(n+1)(2n+1)}{12}$ then take the limit, using $\frac{\sin(2ax)}{a}\to 2x$ as $a\to0$ ; finally if we
|
I got it... First, the degree of $(x^p+1)^p$ is $p^2$, so the degree of $\prod_{p=1}^n (x^p+1)^p$ is $$N=1^2+2^2+3^2+...+n^2=\frac{n(n+1)(2n+1)}{6}$$ now we have $$\prod_{p=1}^n (x^p+1)^p=\sum_{p=0}^N a(n,p)x^p $$ by taking the $k$th derivative and letting $x\to0$ we get $$\lim_{x\to0} \frac{d^k}{dx^k} \prod_{p=1}^n (x^p+1)^p=\lim_{x\to0} \frac{d^k}{dx^k}\sum_{p=0}^N a(n,p)x^p $$ But for natural $k,p$ $$\lim_{x\to0} \frac{d^k}{dx^k} x^p=0 ,\quad p\ne k$$ So $$\lim_{x\to0} \frac{d^k}{dx^k} a(n,k)x^k=\lim_{x\to0} \frac{d^k}{dx^k} \prod_{p=1}^n (x^p+1)^p $$ then $$ a(n,k)=\frac{1}{k!}\lim_{x\to0} \frac{d^k}{dx^k} \prod_{p=1}^n (x^p+1)^p $$ now to find the $k$th derivative we need to use the General Leibniz rule and get $$ \frac{1}{k!}\lim_{x\to0} \frac{d^k}{dx^k} \prod_{p=1}^n (x^p+1)^p=\frac{1}{k!}\sum_{k_1+k_2+...+k_n=k} \binom{k}{k_1,k_2,...,k_n} \prod_{j=1}^n \lim_{x\to0} \frac{d^{k_j}}{dx^{k_j}} (x^j+1)^j$$ where $$ \lim_{x\to0} \frac{d^{k_j}}{dx^{k_j}} (x^j+1)^j=\sum_{p=0}^j \binom{j}{p} \lim_{x\to0} \frac
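For cross-checking, $a(n,k)$ can also be computed directly by multiplying out the product coefficient-by-coefficient (a brute-force sketch; helper names are my own):

```python
def poly_mul(a, b):
    """Multiply two polynomials given as ascending coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def coeffs(n):
    """Coefficients a(n, k) of prod_{p=1}^n (x^p + 1)^p, index = power of x."""
    result = [1]
    for p in range(1, n + 1):
        base = [1] + [0] * (p - 1) + [1]  # 1 + x^p
        for _ in range(p):                # raised to the p-th power
            result = poly_mul(result, base)
    return result

# n = 2: (x+1)(x^2+1)^2 = 1 + x + 2x^2 + 2x^3 + x^4 + x^5.
assert coeffs(2) == [1, 1, 2, 2, 1, 1]
# Degree check: N = n(n+1)(2n+1)/6, and evaluating at x = 1 gives 2^(n(n+1)/2).
assert len(coeffs(3)) == 3 * 4 * 7 // 6 + 1
assert sum(coeffs(3)) == 2 ** 6
```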
|
|calculus|integration|recurrence-relations|indefinite-integrals|products|
| 0
|
L'Hopital's rule with dual numbers
|
Background: For the dual numbers , we extend the reals with an additional unit vector $\epsilon$ subject to the constraint that $\epsilon^2 = 0$ . We can write dual numbers as $x_0 + x_1 \epsilon$ for $x_0,x_1 \in \mathbb{R}$ . We have a rule for multiplication, $$ (x_0 + x_1 \epsilon)(y_0 + y_1 \epsilon) = x_0 y_0 + x_1 y_0 \epsilon + x_0 y_1 \epsilon + x_1 y_1 \epsilon^2 = x_0 y_0 + (x_1 y_0 + x_0 y_1)\epsilon, $$ as well as division, $$ \frac{x_0 + x_1 \epsilon}{y_0 + y_1 \epsilon} = \frac{x_0 + x_1 \epsilon}{y_0 + y_1 \epsilon} \left(\frac{y_0 - y_1 \epsilon}{y_0 - y_1 \epsilon}\right) = \frac{x_0 y_0 + (x_1 y_0 - x_0 y_1)\epsilon}{y_0^2}. $$ It's clear that if $y_0=0$ , division is undefined for all values of $y_1$ . That is, numbers of the form $x_1 \epsilon$ are zero divisors. Furthermore, we have the exact Taylor series $$ f(x_0 + x_1\epsilon) = f(x_0) + x_1 f'(x_0) \epsilon. $$ My question is: Is L'Hopital's rule $$\lim_{x\rightarrow x_0} \frac{f(x_0)}{g(x_0)} = \lim_{x\righta
|
Non-standard analysis is, at least in calculus, about replacing limits with algebraic operations. The dual numbers, and in generalization, truncated Taylor series, are one step towards this, doing "infinitesimal" calculus without using infinitesimals, and represent about what Leibniz understood as or how he used infinitesimals. (Eliminating infinitesimals from common use was a thing of the second half of the 19th century, introducing modern infinitesimals happened in the second half of the 20th century.) So like in algebra where one algebraic number stands for all its conjugates, here one infinitesimal stands for all, provided the functions that they are applied to are "standard", not constructed with infinitesimals (like polynomials with some coefficients infinitesimal). The $ϵ$ stands for this idealized infinitesimal, usually $ϵ^2$ is just a much smaller infinitesimal, but for basic calculus situations indeed practically $ϵ^2=0$ . You could also think about $ϵ$ being a very very smal
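The "exact Taylor series" from the question can be demonstrated mechanically. Below is a minimal dual-number class (a sketch; all names are mine) implementing the multiplication and division rules above, which is exactly forward-mode automatic differentiation:

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b eps)(c + d eps) = ac + (bc + ad) eps, since eps^2 = 0
        return Dual(self.a * o.a, self.b * o.a + self.a * o.b)
    __rmul__ = __mul__

    def __truediv__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        if o.a == 0:
            # pure infinitesimals are zero divisors, as noted in the question
            raise ZeroDivisionError("division by a pure infinitesimal")
        return Dual(self.a / o.a, (self.b * o.a - self.a * o.b) / o.a ** 2)

# Evaluating p(x) = x^3 + 2x at x = 2 + eps recovers p(2) = 12 and p'(2) = 14.
x = Dual(2.0, 1.0)
p = x * x * x + 2 * x
# The division rule likewise differentiates: 1/(2 + eps) = 1/2 - (1/4) eps.
q = Dual(1.0) / Dual(2.0, 1.0)
```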
|
|abstract-algebra|limits|analysis|exterior-algebra|dual-numbers|
| 0
|
Regarding intersecting surfaces of two surfaces
|
When I plot the two quadratic surfaces $y^2/2 + z^2/2 - 16.7 z = -30$ and $-(x^2/2) + 16.7 z = 200$ , their intersection is found to be a single orbit (see attached image). But when I change the RHS value of both expressions, the shape and the number of intersection orbits change, i.e. after a certain RHS value in both equations the number of intersection orbits becomes 2; before that, we get only a single intersection orbit. Example: the intersection orbit of $y^2/2 + z^2/2 - 16.7 z = -90$ and $-(x^2/2) + 16.7 z = 70$ is shown below. I know these RHS values are only numbers or constants. For these values the nature of the surfaces remains the same while the shape and number of intersection orbits change, so it is clear that something is happening (some kind of change in the geometry of the two surfaces is taking place). I have attached the code below that clearly shows how the orbits change w.r.t. R
|
Since the question is asked on an explicit example, I am not sure on how much topological background I may assume. Hence, I try to keep the answer explicit and accessible — which implies that it is rather vague at times. I can elaborate further / point to literature, if this is wished. We are given two (quadratic polynomial) functions $f,g:\mathbb R^3\to\mathbb R$ and two values $a,b\in\mathbb R$ such that $f^{-1}(a)$ and $g^{-1}(b)$ are (quadratic) surfaces. We would like to described how the number of components of $$ f^{-1}(a)\cap g^{-1}(b)=\left\{P\in\mathbb R^3~\middle|~f(P)-a=g(P)-b=0\right\}$$ depends on $a$ and $b$ . By assuming that $b$ was already part of $g$ , we only need to consider the number of components of $$ I_t:=f^{-1}(t)\cap g^{-1}(0)=\left\{P\in\mathbb R^3~\middle|~f(P)-t=g(P)=0\right\} $$ for $t\in\mathbb R$ . For most values “regular” of $t$ nothing interesting happens with $I_t$ . But there are some “critical” value where something happens. In your example, this
|
|general-topology|geometry|geometric-topology|
| 0
|
Prove $\lim_{x\rightarrow +\infty}f(x)=0$ under the conditions $|f'(x)|\leq 1/x$ and $\lim_{R\rightarrow +\infty}\frac{1}{R}\int_0^R|f(x)|dx=0$
|
I encountered a problem while working on a mathematical analysis exercise. I wish to share it because of its interest: while the statement is simple, I feel it is not straightforward to answer. Given $f\in C^1([0,+\infty))$ such that $|f'(x)|\leq 1/x$ and $\lim_{R\rightarrow +\infty}\frac{1}{R}\int_0^R |f(x)|dx=0$ , how can we prove that $\lim_{x\rightarrow +\infty}f(x)=0$ ? Any assistance would be greatly appreciated.
|
Put $\displaystyle F(x)=\int_0^x f(t)dt$ . For any $\lambda>1$ and $n\in\mathbb N$ there exists $c_{\lambda,n}\in ]\lambda^n;\lambda^{n+1}[$ such that $\displaystyle \frac{F(\lambda^{n+1})-F(\lambda^n)}{\lambda^{n+1}-\lambda^n}=f(c_{\lambda,n})$ and from the second assumption, as $\lambda^{n+1}-\lambda^n=\lambda^{n+1}(1-1/\lambda)$ , we get $\lim_{n\rightarrow +\infty}f(c_{\lambda,n})=0$ . From the first assumption we have for any $y>x>0$ , $|f(y)-f(x)|\leq \ln(y)-\ln(x); (1)$ . Let $(x_n)$ any sequence going to infinity : for any $k$ great enough there exists an integer $n_k$ such that $x_k\in [\lambda^{n_k};\lambda^{n_k+1}]$ hence from (1) we get $|f(x_k)-f(c_{\lambda,n_k})|\leq |\ln(x_k)-\ln(c_{\lambda,n_k})|\leq \ln(\lambda^{n_k+1})-\ln(\lambda^{n_k})$ which gives $|f(x_k)-f(c_{\lambda,n_k})|\leq \ln(\lambda)$ and thus $|f(x_k)|\leq |f(c_{\lambda,n_k})|+\ln(\lambda)$ . Letting $k\rightarrow +\infty$ you have then $\limsup_{k\rightarrow +\infty}|f(x_k)|\leq \ln(\lambda)$ for any $\l
|
|integration|limits|analysis|derivatives|
| 0
|
Does Bi-Intuitionistic Logic turn Classical in sufficiently strong first-order theories?
|
Bi-Intuitionistic Logic adds to Intuitionistic Logic a binary connective $←$ known as co-implication or subtraction. A weak negation $\sim A$ is defined for Bi-Intuitionistic Logic as $\top ← A$ ; Bi-Intuitionistic Logic proves $\sim A \lor A$ for any wff $A$ . It is known that a variety of basic results in set theory and arithmetic imply the full Classical Law of the Excluded Middle, including the well-ordering of the naturals. Supposing someone were to use Bi-Intuitionistic Logic as a basis for a theory like $PA$ or $ZF(C)$ , would results like $\sim A \lor A$ lead to a collapse of the two negations down to just Classical negation? I know that in pretty much any theory of arithmetic, the Classical LEM is equivalent to the well-ordering of the naturals. Is a weaker LEM strong enough to prove a result that would turn the whole thing Classical?
|
Modulo the caveat that you already have to make some concessions from $\mathsf{ZFC}$ to adapt it to intuitionistic logic, no. Bi-intuitionistic logic is conservative over intuitionistic logic, so while it may have use in elucidating the symmetry that classical logic collapses, it will not induce that collapse in and of itself.
|
|logic|nonclassical-logic|
| 0
|
Putting Four distinct Stickers on Three Different Balls Such that stickers are infinite supply and one sticker each ball, and Exactly Two Same Sticker
|
We have three distinct balls (representing the people here) and we have an unlimited supply of four different types of stickers - S, M, T, W (representing the different days / birthdays). In how many ways can each ball be labeled with a sticker such that exactly two balls have the same type of sticker. Will the answer be different if the balls were identical? If the balls are distinct , the answer would be $${4 \choose 1} \cdot {3 \choose 2} \cdot {3 \choose 1} = 36$$ Any of the four sticker types can be the repeating one ${4 \choose 1}$ , any two of the three balls can share the same type ${3 \choose 2}$ , and finally, the last ball can get any of the remaining three types ${3 \choose 1}$ . You can only have a total of two colors on the three balls. If you did Three choose two then Three choose 1 wouldn't you count a total of three colors on the three balls? Why not $$ {4 \choose 1} \cdot { 3 \choose 1}?$$ Doesn't this expression mean: You’re putting the two balls into the bin with on
|
Your answers are both correct, but your explanation for the indistinguishable balls case does not make sense. You should have four bins, one for each color. You can correct your attempt by selecting which of the four bins receives two balls and which of the remaining three bins receives one ball. We must select one of the four colors to place on two of the indistinguishable balls. Since the balls are indistinguishable, it does not matter on which two balls we place that color. That leaves us with a choice of which of the three remaining colors to place on the remaining ball. Hence, there are indeed $$\binom{4}{1}\binom{3}{1}$$ ways to place three stickers on three balls so that each ball receives one sticker, exactly two of the balls receive a sticker of the same color, and there are four colors available, each of which can be selected as many times as needed.
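Both counts can be confirmed by brute force. A sketch, using the sticker types S, M, T, W from the question:

```python
from itertools import product
from collections import Counter

colors = "SMTW"

# All labelings of 3 distinguishable balls where exactly two share a sticker type.
labelings = [t for t in product(colors, repeat=3)
             if sorted(Counter(t).values()) == [1, 2]]

distinct_count = len(labelings)                 # balls distinguishable: 36

# Indistinguishable balls: only the multiset of sticker types matters.
indistinct_count = len({frozenset(Counter(t).items()) for t in labelings})  # 12
```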
|
|combinatorics|combinations|
| 1
|
Remainders of factorials modulo primes
|
I am currently working on this problem: Prove that if p is prime, then among $1!,2!,\dots,(p-1)!$ one can find at least $\frac{p-1}{2}$ different remainders modulo $p$ . I don't really have any clue how to prove it. Can anyone help me?
|
The conjecture is that the number $N(p)$ of different remainders modulo $p$ among these factorials $1!,\ldots (p-1)!$ , satisfies $$ \lim_{p\to \infty} \frac{N(p)}{p}=1-\frac{1}{e}\sim 0.6321205588285576784 $$ So $N(p)\ge \frac{p-1}{2}$ for $p$ large enough. For $p\le 10^7$ this bound is true by a computer verification - see the duplicate here: among $ 1!,2!,...,p!$ there are at least $ \sqrt{p}$ different residues in modulo $ p$
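The bound for small primes is easy to confirm by computation. A sketch (helper names are my own) that counts the distinct residues of $1!,2!,\dots,(p-1)!$ modulo $p$:

```python
def residue_count(p):
    """Number of distinct residues of 1!, 2!, ..., (p-1)! mod p."""
    seen, f = set(), 1
    for k in range(1, p):
        f = f * k % p
        seen.add(f)
    return len(seen)

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, b in enumerate(sieve) if b]

# Check the claimed bound N(p) >= (p-1)/2 for all primes up to 2000.
bound_holds = all(residue_count(p) >= (p - 1) / 2 for p in primes_up_to(2000))
```

For example, for $p=7$ the factorials are $1,2,6,3,1,6 \pmod 7$, giving $N(7)=4\geq 3$.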
|
|factorial|
| 0
|
How duality works with polynomials?
|
Let $E = R_2[X]$ . Let $a \in \mathbb{R}$ . Let $ev_a : E \rightarrow\mathbb{R}, P \mapsto P(a)$ . Let $P = a + bX + cX^2$ . Show that $B = (ev_1, ev_2, ev_3)$ is a base of $E^*$ . Proof : We have: $ev_1(P) = P(1) = a + b + c$ $ev_2(P) = P(2) = a + 2b + 4c$ $ev_3(P) = P(3) = a + 3b + 9c$ So let $A = \begin{pmatrix} 1 & 1 & 1\\ 1 & 2 & 4\\ 1 & 3 & 9 \end{pmatrix}$ . It is a Vandermonde matrix, its determinant is not zero, so $B$ is a base of $E^*$ . I know that the result is okay but I cannot understand why it works and link it with the definitions, especially with polynomials and coordinates (I understand how duality works in $\mathbb{R}^n$ ). Here are some random facts that I cannot put together: Let $C$ = $(1, X, X²) = (c_1, c_2, c_3)$ . We have $c_i^*(c_j)=\delta_{i,j}$ so $c_1^*(P)=c_1^*(a+bX+cX^2)=a$ . What can be the dual base of $C$ ? $(ev_1, ev_2, ev_3)$ is a base of $E^* \iff \forall \phi \in E^*, \exists! \alpha, \beta, \gamma \in \mathbb{R} : \phi = \alpha ev_1 + \beta ev_2 + \
|
Regarding your second question What can be the dual base of $C$ ? You used the correct definition of dual basis We have $c_i^*(c_j)=\delta_{i,j}$ Hence the dual basis for $C$ would be $(c_1^*,c_2^*,c_3^*)$ where $\forall i=1,2,3$ , $c_i^*$ of a polynomial gives the coefficient of the degree $i-1$ term (e.g. $c_1^*$ extracts the constant coefficient). Regarding your third question $(ev_1, ev_2, ev_3)$ is a base of $E^* \iff \forall P \in E, \exists! \alpha, \beta, \gamma \in \mathbb{R} : P = \alpha ev_1 + \beta ev_2 + \gamma ev_3$ . I suspect you have some sort of typo, since you are expressing a polynomial $P\in E$ as a linear combination of elements in $E^*$ , but $E\neq E^*$ .
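The determinant computation, and the expansion of a concrete functional in the basis $(ev_1, ev_2, ev_3)$, can be checked numerically. A numpy sketch (the choice of functional, the coordinate-extracting $c_1^*$, is my own example):

```python
import numpy as np

# Row i holds the values of ev_{i+1} on the basis (1, X, X^2): ev_a -> (1, a, a^2).
A = np.array([[1, 1, 1],
              [1, 2, 4],
              [1, 3, 9]], dtype=float)

det = np.linalg.det(A)   # Vandermonde determinant (2-1)(3-1)(3-2) = 2 != 0

# Coordinates (alpha, beta, gamma) of the functional c_1^* (P -> constant
# coefficient) in the basis (ev_1, ev_2, ev_3): require
# sum_i w_i ev_i(c_j) = delta_{1j} on basis polynomials, i.e. solve A^T w = e_1.
w = np.linalg.solve(A.T, np.array([1.0, 0.0, 0.0]))

# Sanity check on P = 2 + 3X + 5X^2: P(1) = 10, P(2) = 28, P(3) = 56,
# and the combination w . (P(1), P(2), P(3)) should return the constant 2.
constant_coeff = w @ np.array([10.0, 28.0, 56.0])
```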
|
|linear-algebra|polynomials|vector-spaces|coordinate-systems|dual-spaces|
| 0
|
I want to prove $\lim_{x \to0}\frac{\sqrt{x+1}- \sqrt{1-x}}{x}=1$
|
I want to prove that $$ \lim_{x \to0}\frac{\sqrt{x+1}- \sqrt{1-x}}{x}=1 $$ using the definition of the limit. I thought that if I say that $|x|<\delta$ ($\delta>0$) with $x\in[-1,1]$ then $$\left|\frac{\sqrt{x+1}- \sqrt{1-x}}{x}-1\right|= \left|\frac{2}{\sqrt{x+1}+ \sqrt{1-x}}-1\right|$$ then because $1+\sqrt{1-x^2}\ge 1$ I can say that $$\frac{2x^2}{\sqrt{x+1}+ \sqrt{1-x}}$$ I know that $\sqrt{1-x}\ge 0.$ Therefore $$x^2(\sqrt{1+x}-\sqrt{1-x})$$ So, having $|x|<\delta$ , $$δ^2\sqrt{δ+1}=ε$$
|
The simplest way is to rationalise the given function with $$\frac{\sqrt{x+1}+\sqrt{1-x}}{\sqrt{x+1}+\sqrt{1-x}}$$
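Numerically, the rationalized form makes the limit visible at once: the quotient equals $2/(\sqrt{x+1}+\sqrt{1-x})$, which tends to $2/2=1$. A quick sketch:

```python
from math import sqrt

def quotient(x):
    # equals 2/(sqrt(x+1) + sqrt(1-x)) after rationalization
    return (sqrt(x + 1) - sqrt(1 - x)) / x

# Sample the quotient at x = 10^-1, ..., 10^-6 and watch it approach 1.
samples = [quotient(10.0 ** -k) for k in range(1, 7)]
```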
|
|limits|analysis|solution-verification|
| 0
|
Sub-dimensional linear subspaces of $\mathbb{R}^{n}$ have measure zero.
|
I would appreciate it if someone could refer me to a proof (or simply give one here) for the statement in the title. That is: if $k<n$ , then every $k$ -dimensional linear subspace of $\mathbb{R}^n$ has (Lebesgue) measure zero. I've seen some proofs that use Sard's lemma but I'm not really familiar with that subject and I've never seen a proof of said lemma so I'd appreciate a proof that doesn't use it if possible. Thanks in advance!
|
Let $V$ be a $k$ -dimensional subspace of $\mathbb{R}^n$ , where $k<n$ . $V$ is contained in an $(n-1)$ -dimensional subspace of $\mathbb{R}^n$ . So it is sufficient for us to show any $(n-1)$ -dimensional subspace of $\mathbb{R}^n$ has measure zero. Let $V$ be an $(n-1)$ -dimensional subspace of $\mathbb{R}^n$ . Let $a_1,\dots, a_{n-1}$ be a basis of $V$ . Let $a_i=(a_{1,i},\dots, a_{n,i})^T$ . Let $A:=(a_1,\dots, a_{n-1})$ be an $n$ by $n-1$ matrix. $\operatorname{rank} A=n-1$ . Without loss of generality, we can assume that the first $n-1$ rows of $A$ are linearly independent. $V=\{x\in\mathbb{R}^n:x=At, t\in\mathbb{R}^{n-1}\}$ . Let $A':=\begin{pmatrix}a_{1,1} & \cdots &a_{1,n-1}\\ \vdots & \cdots & \vdots\\ a_{n-1,1} &\cdots &a_{n-1,n-1}\end{pmatrix}$ . Then, $A'$ is non-singular. Let $(x_1,\dots,x_n)^T\in V$ . Then, there is $t\in\mathbb{R}^{n-1}$ such that $(x_1,\dots,x_n)^T=At$ . Then, $t=(A')^{-1}(x_1,\dots,x_{n-1})^T$ . Then, $x_n=(a_{n,1},\dots,a_{n,n-1})(A')^{-1}(x_1,\dots,x_{
|
|real-analysis|measure-theory|vector-spaces|lebesgue-measure|geometric-measure-theory|
| 0
|
Why we consider the dual space when defining tensors
|
My question is very simple: A type $(m,n)$ tensor is an element of $V^{\otimes m}\otimes (V^*)^{\otimes n}$ . Is there a reason/motivation, beyond more general definitions, to consider the dual space of $V$ in this definition? Or is it just convention? I don't expect some mindblowing answer, so to speak, but maybe a clarification of why the tensor product of $V$ with its dual would be so interesting.
|
Let $V$ be a finite dimensional vector space and let $e:=(e_1, \cdots, e_n)$ be one of its bases. Let $\tilde{e}$ be another basis for $V$ . The two bases are related to each other by a linear transformation, i.e. there is an $n \times n$ matrix $A$ such that $$ \tilde{e}=Ae $$ or $\tilde{e_i}=\sum_j A^j_i e_j$ Consider now the dual basis of $V^*$ , i.e. $e^*:=({e^*}^1, \cdots, {e^*}^n)$ such that $$ {e^*}^i(e_j)= \delta_j^i $$ Let $B$ be the matrix of the change of basis of the dual basis; we have $$ \operatorname{Id}_n= \tilde{e^*}(\tilde{e})= B e^*(A e)=BA e^*(e)=BA \operatorname{Id}=BA $$ so $B=A^{-1}$ A basis for the space of $(1,1)$ tensors is given by $$ e_i \otimes {e^*}^j $$ for $i,j \in \{1, \cdots, n\}$ . So it changes basis as $$ \tilde{e_i} \otimes \tilde{{e^*}^j}=\sum_k \sum_h A^k_i (A^{-1})^j_h e_k \otimes {e^*}^h $$ while e.g. a $(2,0)$ tensor changes basis as $$ \tilde{e_i} \otimes \tilde{e_j}=\sum_k \sum_h A^k_i A^h_j e_k \otimes e_h $$ This is quite used in physics
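The two transformation laws can be contrasted numerically. A numpy sketch, using the common convention that components of a $(1,1)$ tensor transform as $T'=A^{-1}TA$ while those of a $(2,0)$ tensor transform as $S'=A^{-1}S(A^{-1})^T$ (the matrix $A$ below is just a random invertible example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 3 * np.eye(3)   # a generic invertible matrix
Ainv = np.linalg.inv(A)
I = np.eye(3)

# The identity (1,1) tensor has components delta^i_j in EVERY basis...
T_new = Ainv @ I @ A

# ...but the same array of components, read as a (2,0) tensor, does change.
S_new = Ainv @ I @ Ainv.T
```

This is the concrete payoff of keeping track of $V$ versus $V^*$: "the identity matrix" is basis-independent only as a $(1,1)$ tensor.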
|
|definition|tensors|motivation|
| 0
|
Can we derive a norm and an inner product from a metric?
|
Given an inner product on a vector space, I can always define a norm and a metric (and a topology using that metric). Is the converse true? That is, given a metric on a vector space, can I define an inner product with it? And what about a metric space that is not a vector space?
|
Let $k$ denote the field of real numbers, or the field of complex numbers. Given a vector space $V$ over $k$ , it is always possible define a metric on $V$ (say the discrete metric ), or a norm on $V$ ( by choosing a Hamel basis ). However, if the norm you define has no relation to the metric you define, then there is no point in studying them simultaneously. Thus, we should restrict our attention to metrics $d$ that are induced by a norm $\lVert\cdot\rVert$ , in the sense that $d(x,y)=\lVert x-y\rVert$ for all $x,y\in V$ . This way, the metric and norm “interact” with each other. A metric induced by a norm has two special properties: Translation invariance: $d(x+c,y+c)=d(x,y)$ for all $c,x,y\in V$ . Absolute homogeneity: $d(\lambda x,\lambda y)=|\lambda|d(x,y)$ for all $x,y\in V$ and $\lambda\in k$ . Your question becomes: given a vector space $V$ over $k$ and a metric $d$ on $V$ , must there be a norm on $V$ which induces $d$ ? The answer is negative. Consider, for instance, the vect
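For instance, the discrete metric on $\mathbb{R}$ is a metric on a vector space but violates absolute homogeneity, so no norm induces it. A one-line check:

```python
def d_discrete(x, y):
    """Discrete metric: 0 if the points coincide, 1 otherwise."""
    return 0.0 if x == y else 1.0

x, y, lam = 1.0, 2.0, 5.0
lhs = d_discrete(lam * x, lam * y)     # = 1
rhs = abs(lam) * d_discrete(x, y)      # = 5, so homogeneity fails
```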
|
|real-analysis|vector-spaces|metric-spaces|normed-spaces|inner-products|
| 0
|
Is the linear combination of Gaussian processes also a Gaussian process?
|
Let $X_k = (X_{k, t})_{t\geq 0}$ be a Gaussian process for each $k\in [m]$ , and $a\in\mathbf{R}^m$ . Is $Y = (Y_t)_{t\geq 0} = (\sum_{k = 1}^m a_k X_{k,t})_{t\geq 0}$ still a Gaussian process? Each $X_k$ is defined on the same probability space, and I supposedly need to use the fact that a linear transformation of a Gaussian vector is still Gaussian. Let $0\leq t_1 < \dots < t_n$ belong to $\mathbf{R}$ . I know $(X_{k,t_l})_{l\leq n}$ is Gaussian for each $k\in [m]$ , and I need to show $(Y_{t_l})_{l\leq n}$ is Gaussian. I tried to write it as a linear transformation of a Gaussian vector, but I'm not sure how, since I have $m$ Gaussian vectors and not just one. I also tried to prove it directly from the definition, taking $c\in \mathbf{R}^n$ , and proving the linear combination $\sum_{l = 1}^n c_lY_{t_l}$ is Gaussian, but I'm not sure all of the $X_{k, t_l}$ 's are jointly Gaussian, just those with the same $k$ . This is lemma 2.23 of Arguin's A First Course in Stochastic Processes.
|
It is easier to show first for two Gaussian processes, namely $X_1$ and $X_2$ . So the resulting process is $Y = X_1 + X_2$ (there is no need for a factor $a$ , since if we multiply a Gaussian process by a factor it is still Gaussian). The PDF of this process is: $$ f_Y(y) = \int \limits_{-\infty}^\infty f_{X_1,X_2}(y-x_2,x_2) \, \mathrm{d}x_2$$ So assuming that the joint PDF is jointly normal, one can calculate the resulting PDF, which is Gaussian (it is a somewhat long calculation, so it is not shown here; there might be an easier way with a characteristic function calculation). Next, one can proceed by induction for more dimensions.
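When the $X_k$ are *jointly* Gaussian, there is a cleaner route: stack all the $X_{k,t_l}$ into one Gaussian vector and note that $Y$ is its image under a linear map $L$, hence Gaussian with covariance $L\Sigma L^T$. A numpy sketch (the joint covariance $\Sigma$ and weights $a$ are arbitrary examples of mine):

```python
import numpy as np

m, n = 2, 3                                # 2 processes, 3 observation times
rng = np.random.default_rng(1)
B = rng.normal(size=(m * n, m * n))
Sigma = B @ B.T                            # a generic joint covariance of (X_{k,t_l})
a = np.array([2.0, -1.0])

# L maps the stacked vector (X_{1,t_1..t_3}, X_{2,t_1..t_3}) to (Y_{t_1..t_3}):
# L = [a_1 I | a_2 I].
L = np.kron(a.reshape(1, m), np.eye(n))
Sigma_Y = L @ Sigma @ L.T                  # covariance of the Gaussian vector Y

# A valid covariance must be symmetric positive semidefinite.
eigvals = np.linalg.eigvalsh(Sigma_Y)
```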
|
|stochastic-processes|normal-distribution|stochastic-calculus|gaussian-measure|
| 0
|
Why we consider the dual space when defining tensors
|
My question is very simple: A type $(m,n)$ tensor is an element of $V^{\otimes m}\otimes (V^*)^{\otimes n}$ . Is there a reason/motivation, beyond more general definitions, to consider the dual space of $V$ in this definition? Or is it just convention? I don't expect some mindblowing answer, so to speak, but maybe a clarification of the why the tensor product of $V$ with it's dual would be so interesting.
|
As you probably know, for finite-dimensional spaces, $V$ is isomorphic to $V^\ast$ but such isomorphisms are not canonical. It is often important to work with canonical isomorphisms, for instance when doing vector bundles over manifolds. On the other hand, there are certain canonical isomorphisms such as that between $Hom(V,V)$ and $V^\ast\otimes V$ , a fact important for example in Riemannian geometry. That's why one needs to work with the more general tensor products that you mentioned.
|
|definition|tensors|motivation|
| 1
|
Consequence following from the hypothesis $ : 7$ divides $a^2 + b^2$ with $ a, b \in \mathbb Z$
|
Not HM, simply self teaching. Source : Alain Troesch (Louis-le-Grand High School, Paris) , Exercices $2022-2023$ , Polycopié des exercices , page $99$ , Ex. $21.10$ http://alain.troesch.free.fr/ Given ( by hypothesis) that $7 | a^2 + b^2$ with $a, b \in \mathbb Z$ . Prove that $ 7$ divides both $a$ and $b$ . I only manage to prove the implication $ 7|a \implies 7|b^2$ but not $ 7|a \implies 7|b$ , even less the desired conjunction. Proof of the implication : Since $7 | a^2 + b^2$ , it follows that: $ a^2 + b^2 = 7k , k \in \mathbb Z$ . Now suppose that $7|a$ meaning equivalently that : $ a = 7k'$ . In that case we have : $(7k')^2 + b^2 = 7k$ $\iff 7^2(k')^2 + b^2 = 7k $ $\iff b^2 = 7[ k -7(k')^2]$ $ \iff 7 | b^2 $
|
One of the properties of primes is that Suppose a prime $p\mid ab$ then $p\mid a$ or $p\mid b$ This follows from the fundamental theorem of arithmetic. Now since $7\mid b\cdot b$ then either $7\mid b$ or $7\mid b$ . So $7\mid b$ . Notice that you have only proven that if $7\mid a$ then $7\mid b$ . The case still remains where $7\not\mid a,b$ . You can now write: $a=7m+x$ , $1\leq x\leq 6$ , $b=7n+y$ , $1\leq y\leq 6$ . Then after some algebra $$\begin{align}a^2+b^2&=(7m+x)^2+(7n+y)^2\\&=x^2+y^2+7(7m^2+7n^2+2mx+2ny)\end{align}$$ Can you take it from here?
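The remaining case can also be settled by exhausting the finitely many residues: the nonzero squares mod $7$ are $\{1,2,4\}$, and no two of them sum to $0 \pmod 7$. A brute-force sketch:

```python
# Nonzero quadratic residues modulo 7.
squares = sorted({x * x % 7 for x in range(1, 7)})

# Pairs of nonzero residues with x^2 + y^2 = 0 (mod 7) -- there are none,
# so 7 | a^2 + b^2 forces 7 | a and 7 | b.
bad = [(x, y) for x in range(1, 7) for y in range(1, 7)
       if (x * x + y * y) % 7 == 0]
```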
|
|elementary-number-theory|sums-of-squares|
| 1
|
How can different representations of the same integer be equivalent?
|
I recently read about a way to define the set of integers as the set of all equivalence classes for some equivalence relation $\simeq$ satisfying $(a,b)\simeq(c,d)$ for $(a, b),\;(c,d)\in\mathbb{N}\times\mathbb{N}$ iff $a+d=b+c$ : $\mathbb{Z}=\mathbb{N}\times\mathbb{N}/ \simeq$ I think I get the general idea behind this definition. The reason we define every integer as an equivalence class instead of a specific pair of tuples is that infinitely many pairs of tuples satisfy the relation. I am trying to get some intuition for this representation of integers. Particularly, I am curious as to how this representation of the positive integers corresponds to their definition as von Neumann ordinals. Consider the number $1$ . As a von Neumann ordinal, it's defined as: $1=\{\emptyset\}$ If we use our new definition of the integers, another representation of $1$ would be: $[1, 0]=\{ (\{ \emptyset \},\emptyset ),(\{ \emptyset,\{ \emptyset \} \},\{ \emptyset\})... \}$ I have a hard time understand
|
If you define $\mathbb{Z}$ like that, then strictly speaking $\mathbb{N}$ is not a subset of $\mathbb{Z}$ . But $\mathbb{Z}$ contains a subset $M$ which looks exactly like $\mathbb{N}$ , including how addition and multiplication works, namely the set of equivalence classes $[(a,b)]$ where $a \ge b$ (or $a>b$ , depending on whether you count zero as a natural number or not). So it's customary to identify $\mathbb{N}$ with $M$ , so that $\mathbb{N}$ can be considered as a subset of $\mathbb{Z}$ (as we are used to thinking about it).
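A small executable model of this identification (all helper names are mine): pairs of naturals up to the relation $(a,b)\simeq(c,d) \iff a+d=b+c$, with the copy $M$ of $\mathbb{N}$ given by the representatives $(n,0)$:

```python
def equivalent(p, q):
    """(a, b) ~ (c, d) iff a + d = b + c."""
    return p[0] + q[1] == p[1] + q[0]

def normalize(p):
    """Canonical representative of the class of p: (a-b, 0) or (0, b-a)."""
    a, b = p
    return (a - b, 0) if a >= b else (0, b - a)

def add(p, q):
    """Addition of classes is componentwise on representatives."""
    return normalize((p[0] + q[0], p[1] + q[1]))

def embed(n):
    """The copy of N inside Z: n -> [(n, 0)]."""
    return (n, 0)

# (5, 2) and (3, 0) both represent the integer 3; (1, 4) represents -3,
# and adding it to 3 gives the class of 0.
```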
|
|elementary-set-theory|set-theory|integers|ordinals|
| 1
|
Why $\lim_{h→0+}\frac{ f (x + 2h) − f (x + h)}{h}$ is not a derivative?
|
Let $f :\mathbb{R}→\mathbb{R}$ be a continuous function with the property that $\lim_{h→0+}\frac{ f (x + 2h) − f (x + h)} {h} = 0$ , for all $x ∈ \mathbb R$ . Prove that $f$ is constant I’m not asking for a solution to this problem, I’m interested in why what is written cannot be considered a definition of a derivative. We can take $t=x+h$ and then $\lim_{h→0+}\frac{ f (t+h) − f (t)} {h} = 0$ and this is the definition of derivative. Or is the problem in $\lim_{h→0+}$ ,but $f$ is a continuous function, which means $\lim_{h→0+}f(x+h)=\lim_{h→0-}f(x+h)$ I will be glad if you can help me understand this problem.
|
Avika Weinberger made a comment with a nice example what could go wrong with this alternative definition for non-continuous $f$ . If $f$ is continuous and obeys the limit the OP imposed, then fix an arbitrary $x\in \mathbb{R}$ , arbitrary $\varepsilon>0$ and suppose $\delta>0$ is so small that $\forall h \in (0,\delta)$ $$|f(x+2h)-f(x+h)|\leq \varepsilon h/2.$$ Because of the continuity of $f$ we can conclude that $\forall h \in (0,+\infty)$ there exists $N_h\in \mathbb{N}_0$ s.t. $$|f(x+2^{-N_h}h)-f(x)|\leq \varepsilon h/2.$$ Then for all $h\in (0,\delta)$ we have $$\left|f(x+h)-f(x)\right|\leq \left(\sum_{j=0}^{N_h-1} |f(x+2^{-j}h)-f(x+2^{-j-1}h)|\right)+|f(x+2^{-N_h}h)-f(x)| \\ \leq \left(\sum_{j=0}^{N_h-1} \varepsilon 2^{-j-2}h\right)+\varepsilon h/2 \leq \varepsilon h.$$ So $f$ is differentiable at $x$ with $f'(x)=0$ . Now you can conclude through Lagrange's theorem that $f$ must be a constant function.
|
|real-analysis|derivatives|
| 0
|
Plotting regions in complex plane
|
Plot the region represented by $\frac{π}{3}\leqslant \arg\left(\frac{z+1}{z-1}\right)\leqslant\frac{2π}{3}$ in the Argand plane. I know that such inequalities usually represent the minor or major arcs of the circle and the area bounded by them,but I can't figure out this one. Any help is appreciated.
|
Hint : Think of all arcs like $$\arg\left(\frac{z+1}{z-1}\right)=\theta$$ but keeping $\theta$ in the given range. Please ask if you need more details. Edit: Adding Desmos graph
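A quick pointwise check with cmath: the point $z=-i$ lies on the arc $\arg\frac{z+1}{z-1}=\frac{\pi}{2}$, which sits inside the given range, so $-i$ belongs to the region:

```python
import cmath
from math import pi

def angle(z):
    return cmath.phase((z + 1) / (z - 1))

w = angle(-1j)                     # (1 - i)/(-1 - i) = i, whose argument is pi/2
inside = pi / 3 <= w <= 2 * pi / 3
```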
|
|geometry|complex-numbers|mobius-transformation|
| 0
|
How do people perform mental arithmetic for complicated expressions?
|
This is the famous picture " Mental Arithmetic. In the Public School of S. Rachinsky. " by the Russian artist Nikolay Bogdanov-Belsky . The problem on the blackboard is: $$ \dfrac{10^{2} + 11^{2} + 12^{2} + 13^{2} + 14^{2}}{365} $$ The answer is easy using paper and pencil: $2$ . However, as the name of the picture implies, the expression ought to be simplified only mentally. My questions: Are there general mental calculation techniques useful for performing basic arithmetic and exponents? Or is there some trick which works in this case? If so, what is the class of problems this trick can be applied to?
|
For a more comprehensive answer read: https://www.meer.com/pt/68619-calculo-mental-por-nikolai-petrovich-bogdanov-belsky
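One standard trick for this particular problem is the identity $10^2+11^2+12^2=365=13^2+14^2$, which makes the numerator $2\cdot 365$ and the answer $2$. Easily verified:

```python
# The classic identity behind the blackboard problem.
left = 10**2 + 11**2 + 12**2      # 100 + 121 + 144
right = 13**2 + 14**2             # 169 + 196

numerator = sum(k**2 for k in range(10, 15))
answer = numerator // 365
```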
|
|algebra-precalculus|recreational-mathematics|mental-arithmetic|
| 0
|
Half-planes in $\mathbb{R}^2$
|
I proved a proposition in $\mathbb{R}^2$ using a cartesian coordinate system. Now I want to provide another proof of the same proposition without the cartesian coordinate system. In any case I need to express some half-planes through the scalar product. My skills in geometry are very basic and I'm having some difficulties. Let me explain the matter: if $x_0,x_1\in\mathbb{R}^2$ (with $x_0\neq x_1$ ), the set $\Delta=\{x\in\mathbb{R}^2\ \ :\ \ \langle x-x_0,x_1-x_0\rangle\ge 0\}$ is the half-plane containing $x_1$ whose boundary is orthogonal to the line $r$ passing through $x_0$ and $x_1$ . These are the questions: For a fixed angle $\alpha$ , is there a way to write the vector equations of the two lines $r_1$ and $r_2$ , passing through $x_0$ , inclined at an angle $\alpha$ with respect to the line $r=\{x_0+\lambda (x_1-x_0)\ \ :\ \ \lambda\in\mathbb{R}\}$ ? Is there a way to define the two half-planes (having $r_1$ and $r_2$ as boundaries) using an expression similar to that of the set $\Delta$ , e
|
First, you have to answer the question : what is the mathematical definition of a half-plane? Or even what is a half-space? Generalizing, it won't be much harder to answer. Let $E$ be a real vector space (for example, $\mathbb R^2, \mathbb R^3, \mathbb R^4,...)$ . Then let $f\neq0$ be a linear form on $E$ . And let $a\in \mathbb R$ . $$H:=\{x\in E: f(x)=a\}$$ is a hyperplane of $E$ . In $\mathbb R^2$ , the hyperplanes are the lines. For example let $E:=\mathbb R^2, x_0\in E$ with its Euclidean structure, where the dot product is denoted $\langle.,.\rangle$ , $x_1\in E$ , $a=\langle x_0,x_1-x_0\rangle$ and $f:E\to \mathbb R, x\mapsto \langle x,x_1-x_0\rangle$ . $H$ is the line passing through $x_0$ and perpendicular to the line $r$ that you defined. Now let's come to the mathematical definition of a half-space : $$\{x\in E: f(x)\geq a\}\text{ and }\color{orange}{\{x\in E: f(x)\leq a\}}$$ are called closed half-spaces containing $H$ . In a second step, you can use your point 1 to convince yo
|
|geometry|plane-geometry|
| 0
|
Why is $2^n +1=(2+1)(2^{n-1} - 2^{n-2} +2^{n-3}-\ldots+1)$ for odd $n$
|
This transformation is only a part of the solution of my problem but the most significant one. I need to show that $2^n+1$ is divisible by $3$ for odd $n$ ’s. I have practiced polynomial transformations for two weeks and geometric progressions for a week, but I still can’t easily come up with this transformation.
|
We want to show that $2^{2n-1} +1$ is divisible by $3$ for all $n$ . Obviously this holds for $n = 1$ . Now $$2^{2(n+1)-1}+1=4\cdot2^{2n-1} + 1 =3\cdot2^{2n-1}+2^{2n-1}+1.$$ The first term on the right is divisible by $3$ , and so is $2^{2n-1}+1$ by the induction hypothesis, which completes the induction.
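The statement is cheap to verify directly, and the check also shows why oddness matters: $2\equiv -1\pmod 3$, so $2^n+1\equiv(-1)^n+1\pmod 3$, which is $0$ for odd $n$ and $2$ for even $n$.

```python
# 2^n + 1 mod 3: 0 for odd n, 2 for even n, since 2 = -1 (mod 3).
odd_ok = all((2 ** n + 1) % 3 == 0 for n in range(1, 100, 2))
even_fails = all((2 ** n + 1) % 3 == 2 for n in range(2, 100, 2))
```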
|
|algebra-precalculus|
| 0
|
orienting a point in polar coordinates along a particular unit vector
|
I have the center of a circle $\vec{c}$ in 3 space and the radius $r$ . I also have a unit vector $\hat{v}$ defining the orientation of the plane of the circle. I wish to parameterize this circle and place a single point on the circle. My logic is, imagine I have a point in polar coordinates. I could describe it as $$\vec{p} = r \cos(\lambda) \hat{i}+ r \sin(\lambda)\hat{j}+0\hat{k}$$ Now the way I'm imagining this is that this point can sweep out the entire circle just by modifying $\lambda$ . Intuitively, I can move this point so that the center of the circle it sweeps out lays at the point of the circle I want. Thus, I can do $$\vec{p} = \vec{c} + r \cos(\lambda) \hat{i}+ r \sin(\lambda)\hat{j}+0\hat{k}$$ However this still doesn't give me the right orientation. I wish the plane of the circle to be perpendicular to $\hat{v}$ . I am stuck with my initial orientation of my circle and now I wish to reorient the circle along the unit vector $\hat{v}$ . How do I do this?
|
Method 1 : 2 parameters, and you'd get a disc instead of a circle Take all vectors on the unit sphere with center at origin. Take their cross product with $\hat{v}$ (this will leave behind only that part of each vector which is perpendicular to $\hat{v}$ ). Finally multiply by $r$ and add $\vec{c}$ $$r(\cos(\theta)\cos(\phi)\hat{i}+\cos(\theta)\sin(\phi)\hat{j}+\sin(\theta)\hat{k})\times\hat{v}+\vec{c}$$ Method 2 : 1 parameter, and you'd get a circle. But need to choose something first... Choose any plane (passing through origin) of which $\hat{v}$ is not a part of. Since $\hat{v}$ can't simultaneously be on the XY, YZ and ZX planes, so we assume (WLOG) that XY is our plane of choice. Take the unit circle centered at the origin on that plane. Cross with $\hat{v}$ . Normalize to unit magnitude. Multiply by $r$ and add $\vec{c}$ . $$r\frac{(\cos(\theta)\hat{i}+\sin(\theta)\hat{j})\times \hat{v}}{|(\cos(\theta)\hat{i}+\sin(\theta)\hat{j})\times \hat{v}|}+\vec{c}$$ . [Please comment if any er
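Method 2 can be written down concretely. A numpy sketch (the function name is mine) that builds an orthonormal pair spanning the plane perpendicular to $\hat v$ and sweeps the circle; every output point is at distance $r$ from $\vec c$ and orthogonal to $\hat v$:

```python
import numpy as np

def circle_point(c, r, v_hat, theta):
    """Point at angle theta on the circle of radius r around c, in the plane
    perpendicular to the unit vector v_hat."""
    # Pick any fixed axis not (nearly) parallel to v_hat, then build an
    # orthonormal pair (u, w) spanning the plane perpendicular to v_hat.
    axis = np.array([1.0, 0.0, 0.0])
    if abs(v_hat @ axis) > 0.9:
        axis = np.array([0.0, 1.0, 0.0])
    u = np.cross(v_hat, axis)
    u /= np.linalg.norm(u)
    w = np.cross(v_hat, u)
    return c + r * (np.cos(theta) * u + np.sin(theta) * w)

c = np.array([1.0, 2.0, 3.0])
v_hat = np.array([0.0, 0.0, 1.0])
p = circle_point(c, 2.0, v_hat, 0.7)
```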
|
|vectors|circles|polar-coordinates|parametric|
| 0
|
I want to prove $\lim_{x \to0}\frac{\sqrt{x+1}- \sqrt{1-x}}{x}=1$
|
I want to prove that $$ \lim_{x \to0}\frac{\sqrt{x+1}- \sqrt{1-x}}{x}=1 $$ using the definition of the limit. I thought that if I say that $|x|<\delta$ ($\delta>0$) with $x\in[-1,1]$ then $$\left|\frac{\sqrt{x+1}- \sqrt{1-x}}{x}-1\right|= \left|\frac{2}{\sqrt{x+1}+ \sqrt{1-x}}-1\right|$$ then because $1+\sqrt{1-x^2}\ge 1$ I can say that $$\frac{2x^2}{\sqrt{x+1}+ \sqrt{1-x}}$$ I know that $\sqrt{1-x}\ge 0.$ Therefore $$x^2(\sqrt{1+x}-\sqrt{1-x})$$ So, having $|x|<\delta$ , $$δ^2\sqrt{δ+1}=ε$$
|
To restate the problem to make it easier to reference: $$ \text{I want to prove }\lim_{x \to 0}\frac{\sqrt{x+1}- \sqrt{1-x}}{x}=1 \tag{Eq. 1}$$ The precise definition of the Calculus Limit is given at the California SFU College website as : To Quote From the University of California Site on Hopital's Rule : DETERMINING LIMITS USING L'HOPITAL'S RULES THEOREM 1 (L'Hopital's Rule for zero over zero): Suppose that $$\lim_{x→a}f(x)=0\text{ , }\lim_{x→a}g(x)=0 \text{ , }$$ and that functions $f$ and $g$ are differentiable on an open interval I containing $a$ . Assume also that $g′(x)≠0$ in $I$ . If $x≠a$ Then $$\lim_{x→a}\frac{f(x)}{g(x)}=\lim_{x→a}\frac{f′(x)}{g′(x)}$$ so long as the limit is finite, $+∞$ , or $−∞$ . Similar results hold for $x→∞$ and $x→−∞$ . Referring to these referenced proofs, set: $$f(x)=\sqrt{x+1}- \sqrt{1-x} \text{ and } g(x)=x \text{ and } a=0\tag{Eqs. 2}$$ $$\lim_{x→(a=0)}\frac{f(x)}{g(x)}=$$ $$=\lim_{x→(a=0)}\frac{f′(x)}{g′(x)}=$$ $$= \frac{\frac{d}{dx}\left(\sqrt
|
|limits|analysis|solution-verification|
| 0
|
$g(f(x + y)) = f(x) + (x+y)g(y).$ Value of $f(0) + f(1)+\dots+ f(2024)$?
|
Let $f$ and $g$ be functions such that for all real numbers $x$ and $y$ : $$g(f(x + y)) = f(x) + (x+y)g(y).$$ The value of $f(0) + f(1)+\ldots+f(2024)$ is? I found the question on Mathematics Stack Exchange: Finding the value of functions: As the question was unanswered, I tried to solve it but couldn't get to the answer. I tried the approach of the questioner in the mentioned post but didn't find it useful. Edit: My Approach: According to the suggestion provided, when I use $y=-x$ in the original equation: $$g(f(0))=f(x)$$ This implies that $f(x)$ is constant for every input, i.e. $f(0)=g(f(0))$ , $f(1)=g(f(0))$ , $f(2)=g(f(0))$ , etc. But how does it imply that $g$ is identically equal to $0$ , i.e. $g≡0$ ? I have thought of the following to prove $g≡0$ and I don't know whether it is right or wrong. Please point out my mistake or continue in your own way. When we replace the value of $f(x)=g(f(0))$ in the original equation, we get $$g(g(f(0)))=g(f(0))+(x+y)g(y)$$ Since $g(g(f(0))$ , $
|
$$g(f(x+y))=f(x)+(x+y)g(y)$$ Let $y=-x$ $$g(f(0))=f(x)$$ Now in original substitute $x=0$ and $x=1$ to get $$ g(f(y))=f(0)+yg(y)\\g(f(y+1))=f(1)+(y+1)g(y) $$ Subtract both to get $$ g(f(y+1))-g(f(y))=f(1)-f(0)+g(y) $$ Since $f(x)=g(f(0))$ we get $$ g(g(f(0)))-g(g(f(0)))=g(f(0))-g(f(0))+g(y)\\ g(y)=0 $$
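A quick numerical sanity check (a sketch, not part of the original derivation): once $g\equiv 0$, the substitution $y=-x$ gives $f(x)=g(f(0))=0$, so the pair $f\equiv 0$, $g\equiv 0$ satisfies the equation and the requested sum is $0$.

```python
# With g ≡ 0 the equation g(f(x+y)) = f(x) + (x+y) g(y)
# forces f ≡ 0 (since f(x) = g(f(0)) = 0); the sum f(0)+...+f(2024) is then 0.
def f(x):
    return 0

def g(x):
    return 0

# Verify the functional equation on a grid of sample integer points.
ok = all(g(f(x + y)) == f(x) + (x + y) * g(y)
         for x in range(-10, 11) for y in range(-10, 11))

total = sum(f(n) for n in range(2025))
```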
|
|functions|summation|real-numbers|
| 1
|
How to find the elementary divisors of this endomorphism
|
I have to do the following exercise: Consider the endomorphism $h\colon\mathbb{R}^{6}\rightarrow\mathbb{R}^{6}$ with unique eigenvalues $0,\,2,\,8$ . If $h$ is not diagonalizable and we know that dim $S(0)=1$ , dim $S(2)\geq2$ , dim $S(8)\geq2$ , find all the possibilities for the elementary divisors and invariant factors. I have read that if $a$ is an eigenvalue of $h$ and dim $S(a)=\lambda$ , then there are $\lambda$ elementary divisors that are powers of $x-a$ . Is this right? How can I continue the exercise? Clarifications: (1) By ''unique'' I mean that there are no other eigenvalues. (2) $S(a)$ is the generalized eigenspace of $a$ . Thanks in advance.
|
If I am remembering the terminology correctly, the "elementary divisors" is the name given to the primary (="prime power" here) ideals that occur when representing $\mathbb R^6$ as a direct sum of cyclic $\mathbb R[t]$ -modules (the primary decomposition has the maximal number of cyclic summands, while the description with the minimal number of cyclic summands is the one associated to the "invariant factors"). The constraints on the generalised eigenspaces mean that $x$ occurs exactly once, and then you have $$ \tag{$\dagger$} \dim(S(2))+\dim(S(8)) =5 $$ Thus if $\dim(S(2))=3$ , then you can have $\{(x-2),(x-2),(x-2)\}$ or $\{x-2,(x-2)^2\}$ or $\{(x-2)^3\}$ , and similarly for the eigenvalue $8$ -- though in that case, by $(\dagger)$ , $\dim(S(8)) = 2$ and you have two possibilities: $\{(x-8)^2\}$ or $\{(x-8),(x-8)\}$ .
|
|abstract-algebra|
| 0
|
Why is $2^n +1=(2+1)(2^{n-1} - 2^{n-2} +2^{n-3}-\ldots+1)$ for odd $n$
|
This transformation is only a part of the solution to my problem, but it is the most significant one. I need to show that $2^n+1$ is divisible by $3$ for odd $n$ ’s. I have practiced polynomial transformations for two weeks and geometric progressions for a week, but I still can't easily come up with this transformation.
|
This response is used to indicate that the OP (i.e. original poster) could have taken an entirely different approach to the underlying problem: I need to show that $2^n+1$ is divisible by $3$ for odd $n$ ’s. This approach will use the fact that for $~a,b,n,p \in \Bbb{Z^+},~$ that $~a \equiv b \pmod{p} \implies a^n \equiv b^n \pmod{p}.~$ This may be shown by noting that $~a \equiv b \pmod{p} \implies \exists ~k \in \Bbb{Z}~$ such that $~a = b + pk.~$ This implies that $$a^n = (b + pk)^n = \sum_{r=0}^n b^{n-r} (pk)^r \binom{n}{r}.$$ Therefore, $$a^n - b^n = \sum_{r=1}^n b^{n-r} (pk)^r \binom{n}{r}. \tag1 $$ In (1) above, every term on the RHS has a factor of $~p~$ in it. Therefore, the RHS is $~\equiv 0 \pmod{p}.~$ Therefore, $~a^n \equiv b^n \pmod{p}.$ Note that $~2^1 \equiv 2 \pmod{3}~$ and that $~2^2 \equiv 1 \pmod{3}.~$ Then, employing modular arithmetic, $~2^{2n} \equiv 1^n \equiv 1 \pmod{3}.~$ Therefore, $~2^{2n+1} = (2 \times 2^n) \equiv (2 \times 1) = 2 \pmod{3}.~$ Therefore $~2^
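The claim is also easy to confirm by brute force over a finite range; a sketch (not a proof, just a sanity check of the congruence argument above):

```python
# Check that 2^n + 1 is divisible by 3 exactly when n is odd,
# consistent with 2 ≡ -1 (mod 3), so 2^n + 1 ≡ (-1)^n + 1 (mod 3).
divisible = [(2**n + 1) % 3 == 0 for n in range(1, 200)]
odd_ok = all(divisible[n - 1] for n in range(1, 200) if n % 2 == 1)
even_ok = all(not divisible[n - 1] for n in range(1, 200) if n % 2 == 0)
```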
|
|algebra-precalculus|
| 0
|
Are convergent sequences closed under uniform convergence?
|
Setting: Let $Y$ be a metric space and let $a_{n,k}\in Y$ for all $n\in\mathbb{N}$ and $k\in\mathbb{N}$ . Suppose $a_{n,k}\to a_{\bullet, k}$ uniformly as $n\to\infty$ . Suppose the sequences $\{a_{n,k}\}\to A_n$ as $k\to \infty$ for each $n\in\mathbb{N}$ . There are two questions in my mind: Is it possible that $a_{\bullet,k}$ diverges as $k\to\infty$ ? Is it possible that $A_n$ diverges while $a_{\bullet,k}$ converges? I appreciate help in all forms. The motivation for me to ask these questions is that I know the following three properties: boundedness, unboundedness, and continuity are closed under uniform convergence. That is, the limit of a uniformly convergent sequence of bounded functions must be bounded, and likewise for the other two properties, but I cannot prove nor disprove that convergence of sequences in a metric space is also closed under uniform convergence. Also, I have already proved the following Theorem by myself, and I was trying to figure out if the hypotheses are all necess
|
The answer to the second question is no. If $\lim_{k\to\infty}a_{\bullet,k}=A$ then $\lim_{n\to\infty}A_n$ exists and equals $A$ . Proof: Let $\epsilon>0$ . Let $N$ be such that $\forall n\ge N,\forall k,d(a_{n,k},a_{\bullet,k})<\epsilon/2$ , and $K$ be such that $\forall k\ge K,d(a_{\bullet,k},A)<\epsilon/2$ . Then, $$\forall n\ge N,\forall k\ge K,d(a_{n,k},A)<\epsilon,$$ hence, letting $k\to\infty$ , $$\forall n\ge N,d(A_n,A)\le\epsilon.$$ The answer to the first question is yes. Take for instance $$Y=\Bbb R^*,\quad a_{n,k}=\frac1n+\frac1k.$$ However, if $Y$ is complete, the answer to the first question is no (i.e. the answer to the question in the title is yes), because if $(a_{n,k})_k$ converges uniformly as $n\to\infty$ then it is uniformly Cauchy, hence $(A_n)_n$ is Cauchy.
|
|analysis|metric-spaces|uniform-convergence|sequence-of-function|double-sequence|
| 1
|
Prove of inradius of a right angle triangle. R²= R1²+ R2²
|
We have a right triangle $ABC$ whose right angle is $\angle BAC$ . A perpendicular $AD$ is dropped from $A$ to $BC$ . Then in triangles $ABC$ , $ABD$ , and $ADC$ , three inscribed circles are drawn with radii $r$ , $r_1$ and $r_2$ respectively. Prove that $r^2 = r_1^2 + r_2^2$ . Picture Link is attached
|
In the attached figure, equating areas (the inradius of a right triangle with legs $p,q$ and hypotenuse $h$ is $pq/(p+q+h)$) gives $$r_1=\frac{CD\times AD}{CD+AD+b}=\frac{bc\cos(\gamma)}{a\cos(\gamma)+a+c}\\r_2=\frac{DB\times AD}{DB+AD+c}=\frac{bc\sin(\gamma)}{a\sin(\gamma)+a+b}\\r=\frac{bc}{a+b+c}$$ so we have to verify that $$\left(\frac{bc\cos(\gamma)}{a\cos(\gamma)+a+c}\right)^2+\left(\frac{bc\sin(\gamma)}{a\sin(\gamma)+a+b}\right)^2=\left(\frac{bc}{a+b+c}\right)^2$$ Since $a\cos(\gamma)=b$ and $a\sin(\gamma)=c$ , both denominators equal $a+b+c$ , and it remains to verify that $$(bc\sin(\gamma))^2+(bc\cos(\gamma))^2=(bc)^2$$ We are done.
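The identity is easy to confirm numerically; a sketch for the 3-4-5 triangle, using the right-triangle inradius formula $r=(\text{leg}_1+\text{leg}_2-\text{hyp})/2$ (equivalent to area over semiperimeter):

```python
import math

# Right triangle with legs b = AC = 3, c = AB = 4, hypotenuse a = BC = 5,
# right angle at A; D is the foot of the altitude from A to BC.
b, c = 3.0, 4.0
a = math.hypot(b, c)          # 5.0
AD = b * c / a                # altitude from the right angle
BD = c * c / a                # projection of AB on the hypotenuse
CD = b * b / a                # projection of AC on the hypotenuse

r = (b + c - a) / 2           # inradius of ABC
r1 = (BD + AD - c) / 2        # inradius of ABD (right angle at D, hypotenuse AB = c)
r2 = (CD + AD - b) / 2        # inradius of ACD (right angle at D, hypotenuse AC = b)
```

Here $r=1$, $r_1=0.8$, $r_2=0.6$, and indeed $0.8^2+0.6^2=1^2$.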
|
|geometry|triangles|circles|
| 1
|
Does $(p\implies q)\implies\neg( q\implies p)$?
|
$p\implies q$ does not necessarily indicate that $q\implies p$ , which is why there exists a double-sided implication: $\iff$ . $p\iff q$ is the same as saying $(p\implies q)\land(q\implies p)$ . However, sometimes when I am illustrating my working out, I use the one-sided $\implies$ symbol even when using the $\iff$ symbol would also work. My question is: does $p\implies q$ directly say that $q\implies p$ is false? Even if it does not, is it better practice to try and always use $\iff$ when it works both ways?
|
By definition: $(p\Longrightarrow q)\equiv(q\lor\lnot p)$ and $\lnot(q\Longrightarrow p)\equiv(q\land\lnot p)$ . Clearly if $q$ holds and $p$ holds or if $\lnot q$ and $\lnot p$ hold, your implication fails. Edit: The notation $p\Longleftrightarrow q$ is just a short form for $(p\Longrightarrow q)\land(q\Longrightarrow p)$ . When using only a one-sided arrow, one doesn't say anything about the converse direction. It can be used when one doesn't really care about the converse direction.
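An exhaustive truth-table check (a sketch) confirms the claim: $(p\implies q)\implies\neg(q\implies p)$ fails exactly on the two rows where $p$ and $q$ agree.

```python
def implies(p, q):
    # Material implication: p → q is ¬p ∨ q.
    return (not p) or q

# Rows of the truth table where (p → q) → ¬(q → p) is false.
failures = [(p, q)
            for p in (False, True)
            for q in (False, True)
            if not implies(implies(p, q), not implies(q, p))]
```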
|
|logic|soft-question|terminology|
| 0
|
Prove the following complex integral satisfies $\mid \int_\ell \frac{z^3}{z^2+1}dz \mid \leq \frac{9\pi}{8}$
|
Prove $\mid \int_\ell \frac{z^3}{z^2+1}dz \mid \leq \frac{9\pi}{8}$ where $\ell$ is $|z|=3,\Re(z)\geq 0$ . My attempt: $$\mid \int_\ell \frac{z^3}{z^2+1}dz \mid \leq \int_\ell \frac{\mid z\mid^3}{\mid z^2+1\mid} \ dz $$ and therefore, since $|z|=3\Rightarrow |z|^3=27, |z^2+1|\geq |z|^2-1=8$ , we obtain $$ \mid \int_\ell \frac{z^3}{z^2+1}dz \mid \leq \frac{27}{8}\cdot 3\pi, $$ where $3\pi $ is the curve perimeter. The same minimal value for the denominator is obtained when taking $z=x+iy$ and examining the value of $|z^2+1|^2 = 4\cdot (25-y^2)$ (where $x=\sqrt{9-y^2}$ ). Unfortunately, I cannot see how I can refine this argument to obtain the required bound. Will be happy to receive some help with this. Thank you
|
The trick here is to reduce the power by polynomial division, so $\frac{z^3}{z^2+1}=z-\frac{z}{z^2+1}$ . Then $$\left|\int_\ell \frac{z^3}{z^2+1}dz\right|= \left|-\int_\ell \frac{z}{z^2+1}dz +\int_\ell z\, dz\right| $$ By direct computation or by Cauchy's theorem (moving to the vertical segment from $-3i$ to $3i$ ) we have $\int_\ell z\, dz=0$ , so now your reasoning gives the required bound $9\pi/8$ .
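Both facts ($\int_\ell z\,dz=0$ and that the resulting bound $9\pi/8$ really does hold) can be checked numerically; a sketch using a midpoint Riemann sum along $z=3e^{i\theta}$, $\theta\in[-\pi/2,\pi/2]$ (the helper name is made up):

```python
import cmath
import math

def contour_integral(f, n=100_000):
    # Midpoint rule for ∫_ℓ f(z) dz along z = 3 e^{iθ}, θ ∈ [-π/2, π/2].
    total = 0j
    h = math.pi / n
    for k in range(n):
        theta = -math.pi / 2 + (k + 0.5) * h
        z = 3 * cmath.exp(1j * theta)
        dz = 3j * cmath.exp(1j * theta) * h   # dz = 3i e^{iθ} dθ
        total += f(z) * dz
    return total

I_z = contour_integral(lambda z: z)                   # should vanish
I_full = contour_integral(lambda z: z**3 / (z**2 + 1))
```

Numerically $|I_{\text{full}}|=\pi\approx 3.14$, comfortably below $9\pi/8\approx 3.53$.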
|
|complex-analysis|complex-integration|
| 1
|
Product topology and intersection of basis elements
|
I was wondering if you could help me settle a discussion with a co-author. Neither of us is a hard-core topologist but our current paper forced us into this, so we figured this forum was a good place to settle the discussion! Setting We are thinking of the space $\mathbb{R}^X$ where $X$ is an arbitrary set (so, all real valued functions from $X$ to $\mathbb{R}$ ). We endow this with the product topology (or pointwise convergence topology). The basis we are using for this topology is the standard one, composed of sets that take the form $\Pi_{x\in X}I(x)$ where each $I(x)$ is an open set and only finitely many of these $I(x)$ sets are different from $\mathbb{R}$ . Also, without loss of generality, I guess we can take the $I(x)$ sets to be intervals but that is not fundamental to the problem. Question If $D_1, \dots, D_n$ are basis elements, is their intersection. $D_1 \cap D_2 \cap \dots \cap D_n$ a basis element? (I know that in general intersection of basis elements is not a basis ele
|
There is no contradiction here. A basis $\mathscr B$ for a topology $\mathscr T$ is a subset $\mathscr B \subset \mathscr T$ such that each $U \in \mathscr T$ is a union of elements of $\mathscr B$ . Equivalently, one can require that for each $U \in \mathscr T$ and each $x \in U$ there exists $B \in \mathscr B$ such that $x \in B \subset U$ . There is no problem to allow that $\emptyset \in \mathscr B$ . Okay, the empty set does not contribute to unions, thus pragmatically it does not make much sense to have $\emptyset \in \mathscr B$ . But it is no formal issue. Of course one can define a basis to be a collection of non-empty sets; it is a matter of taste. In your example there are two approaches: As a basis take all $\Pi_{x\in X}I(x)$ where each $I(x)$ is an open set and only finitely many of these $I(x)$ sets are different from $\mathbb{R}$ . Then $\emptyset$ is a basis element and Co-author 1 is right. As a basis take all $\Pi_{x\in X}I(x)$ where each $I(x)$ is an open non-empty s
|
|general-topology|pointwise-convergence|product-space|
| 0
|
2023 MIT Integration Bee Semifinals #1 Problem 1 $\int e^{\cos x}\cos(2x+\sin x)dx$
|
How do I solve $$\int e^{\cos x}\cos(2x+\sin x)dx$$ ? This is a problem from the 2023 MIT Integration Bee semifinals . I thought that this integral would be solved by $u=e^{\cos x}$ but that doesn't work since we have $\cos(2x+\sin x)$ and not $\sin x$ next to $e^{\cos x}$ . Other than that I am a bit lost. Edit: Integral Calculator couldn't find the solution.
|
Utilize the product rule $$[e^{g(t)}f(t)]’=e^{g(t)} [f’(t) +g’(t)f(t)]\tag1 $$ to first integrate \begin{align} J(a)=&\int e^{a\cos x}\cos(x+a\sin x)\ dx\\ =& \ \frac1a\int e^{a\cos x}[(\sin (a\sin x))’+(a\cos x)’\sin(a\sin x)]\ dx\\ = &\ \frac1a e^{a\cos x}\sin(a\sin x) \end{align} Then, utilize the rule (1) again to obtain \begin{align} J’(a)=&\int \frac d{da}\left[e^{a\cos x}\cos(x+a\sin x)\right]\ dx\\ =& \int e^{a\cos x}\cos(2x+a\sin x)dx \end{align} Thus \begin{align} &\int e^{\cos x}\cos(2x+\sin x)dx\\ =& \ J’(a)\bigg|_{a=1}= \frac d{da} \left[\frac1a e^{a\cos x}\sin(a\sin x)\right]_{a=1} \\ =& \ 2e^{\cos x}\sin\frac x2\cos\left(\frac x2+\sin x\right) \end{align}
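A numerical spot check (a sketch) that the antiderivative obtained above differentiates back to the original integrand:

```python
import math

def F(x):
    # Claimed antiderivative: 2 e^{cos x} sin(x/2) cos(x/2 + sin x).
    return 2 * math.exp(math.cos(x)) * math.sin(x / 2) * math.cos(x / 2 + math.sin(x))

def integrand(x):
    # Original integrand: e^{cos x} cos(2x + sin x).
    return math.exp(math.cos(x)) * math.cos(2 * x + math.sin(x))

# Central finite difference of F should match the integrand.
h = 1e-6
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
              for x in [-2.0, -0.7, 0.3, 1.1, 2.5])
```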
|
|calculus|integration|contest-math|indefinite-integrals|
| 0
|
Explanation of last step in proof of Lemma 4.2 in paper "An Axiomatic Approach to Forcing in a General Setting"
|
The paper "An Axiomatic Approach to Forcing in a General Setting" by R. Freire and P Holy includes the proof of Lemma 4.2. (Note that in the paper a 'generic filter' means a filter in a partial order P and C is a class of sets and $F_G()$ is a valuation under G and other items are in the notes under the question). My question is : what is the reasoning behind the last step ? Lemma 4.2. Let D $\in$ C be such that D is dense in P. If G is a generic filter, then G intersects D. Proof. Let G be a generic filter and assume for a contradiction that G $\cap$ D = $\emptyset$ . Making use of axioms (*) and (**), it follows that $$M[G] \models \neg \exists x \; x \in F_G (\check{D}) \cap F_G(\dot{G}) $$ By axiom (4 - the Truth Lemma), we may thus find p $\in$ G such that $$p \Vdash \neg \exists x \; x \in \check{D} \cap \dot{G} $$ By Lemma 4.1, equivalently $$\forall q \leq p \; \neg [q \Vdash \exists x \; x \in \check{D} \cap \dot{G}] $$ Since D is dense, we may fix q $\leq$ p in D. Last step:
|
The reason is axiom (5) Definability Lemma. For every $q$ , it is true that $q$ forces that $G$ contains $q$ (which is the interpretation of $\check q\in\dot G$ ).
|
|proof-explanation|forcing|
| 0
|
Brownian Motion Distribution
|
Let $(W(t))_{t\geq 0}$ be a Brownian motion and $0<s<t$ . What is the distribution of $2W(s)-W(t)$ ? If $s>t$ , then $2W(s)-W(t) \sim W(4s)-W(t) \sim N(0, 4s-t)$ , but here we have $t>s$ . Do these calculations still hold?
|
The Brownian motion can be realized as the (continuous realization of a) Gaussian process $W_t$ such that $$ \mathbb{E}[W_t]=0 $$ $$ \operatorname{Cov}(W_t,W_s)= \min (s,t) $$ The variance of the difference is $$ \operatorname{Var}(2W(s)-W(t))= 4\mathbb{E}[W(s)^2]+\mathbb{E}[W(t)^2] -4\operatorname{Cov}(W(s),W(t))=4s+t-4 \min(s,t) $$ So we have $$ 2W(s)-W(t) \sim \mathcal{N}(0,4s+t-4 \min(s,t)) $$ or, if you prefer $$ 2W(s)-W(t) \sim W(4s+t-4 \min(s,t)) $$ Notice how, if $t>s$ you have $$ 2W(s)-W(t) \sim \mathcal{N}(0,t) $$ while if $t<s$ you have $$ 2W(s)-W(t) \sim \mathcal{N}(0,4s-3t) $$ consistently with the other answers.
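The variance formula can be sketched in code; `var_2ws_minus_wt` below is a hypothetical helper implementing $4s+t-4\min(s,t)$ and checking the two regimes:

```python
def var_2ws_minus_wt(s, t):
    # Var(2W(s) - W(t)) = 4 Var(W(s)) + Var(W(t)) - 4 Cov(W(s), W(t))
    # with Var(W(u)) = u and Cov(W(s), W(t)) = min(s, t).
    return 4 * s + t - 4 * min(s, t)

case_t_greater = var_2ws_minus_wt(2.0, 5.0)   # t > s: reduces to t
case_s_greater = var_2ws_minus_wt(5.0, 2.0)   # s > t: reduces to 4s - 3t
```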
|
|probability-distributions|stochastic-processes|brownian-motion|
| 1
|
To decompose a conditionally convergent series into a partial bounded series and another decreasing series
|
In the first-year mathematics analysis course, the instructor assigned a problem on the convergence of series. We are given that a series $\sum_{n=1}^\infty A_n$ converges absolutely if $\sum_{n=1}^\infty |A_n|<\infty$ . Alternatively, it converges conditionally if its partial sums converge, while $\sum_{n=1}^\infty |A_n|=+\infty$ . The problem is stated as follows: Suppose $\sum_{n=1}^\infty A_n$ converges. The task is to prove the existence of sequences $\{a_n\}$ and $\{b_n\}$ such that $A_n=a_nb_n$ for $n\geq 1$ . At the same time, the partial sums of $\sum_{n=1}^\infty a_n$ must be bounded, while $\{b_n\}$ is monotonically decreasing and tends to $0$ . I have successfully solved the case when $\sum_{n=1}^\infty A_n$ converges absolutely. In this scenario, we define $R_n=\sum_{k\geq n}|A_k|$ and we construct $$ b_n:=\sqrt{R_{n+1}}+\sqrt{R_n},\quad a_n:=\frac{A_n}{b_n}. $$ These sequences satisfy the given requirements. However, I am facing difficulty in solving the case when $\sum_{n=1}^\infty A_n
|
The idea here is that $\sum_{N \le n \le M}A_n \to 0$ as $M,N \to \infty$ , so fix a strictly increasing sequence $N_k, k \ge 1$ st $$\Big|\sum_{N \le n \le M}A_n\Big| \le \frac{1}{k^2}, \quad M,N \ge N_k$$ where of course you can use other absolutely convergent series instead of $\sum 1/k^2$ with appropriate choices as below. Then for $N_k \le n < N_{k+1}$ pick $a_n=A_n \sqrt k, b_n =1/\sqrt k$ (while if you want $b_n$ strictly decreasing you can wiggle it with a very small strictly decreasing sequence). Then for any $N \ge N_1$ pick $k$ st $N_k \le N < N_{k+1}$ and note that $$\Big|\sum_{n \le N}a_n\Big| \le \Big|\sum_{n \le N_1}a_n\Big| + \sum_{1 \le r < k}\Big|\sum_{N_r \le n < N_{r+1}}a_n\Big| + \Big|\sum_{N_k \le n \le N}a_n\Big|$$ But now $|\sum_{n \le N_1}a_n|=A$ and each $|\sum_{N_r \le n < N_{r+1}}a_n| \le \sqrt r \cdot \frac{1}{r^2}=\frac{1}{r^{3/2}}$ by our choices. Hence the partial sums of $a_n$ are bounded by $A+\sum_{r \ge 1}\frac{1}{r^{3/2}}$ for $N \ge N_1$ and of course the first few up to $N_1$ are bounded by some other constant $A_1$ , hence they are all bounded uniformly and we are done.
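A numerical illustration of the construction (a sketch with assumed concrete choices, not part of the proof): for the alternating harmonic series $A_n=(-1)^{n-1}/n$ the tail bound $|\sum_{N\le n\le M}A_n|\le 1/N$ lets us take $N_k=k^2$, so $b_n=1/\sqrt k$ and $a_n=A_n\sqrt k$ on $[N_k,N_{k+1})$:

```python
import math

N = 100_000
partial = 0.0
max_partial = 0.0
prev_b = float("inf")
b_monotone = True
for n in range(1, N + 1):
    k = math.isqrt(n)                  # the k with k^2 <= n < (k+1)^2, i.e. N_k <= n < N_{k+1}
    A_n = (-1) ** (n - 1) / n
    b_n = 1 / math.sqrt(k)
    a_n = A_n * math.sqrt(k)           # so that a_n * b_n == A_n
    b_monotone = b_monotone and b_n <= prev_b + 1e-15
    prev_b = b_n
    partial += a_n
    max_partial = max(max_partial, abs(partial))
```

The partial sums of $a_n$ stay below the theoretical bound $1+\sum_{r\ge1} r^{-3/2}$, and $b_n$ is nonincreasing with limit $0$.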
|
|real-analysis|calculus|sequences-and-series|conditional-convergence|
| 0
|
Calculating frequency of dice rolls regarding a maximum number of points
|
Let's consider rolling a fair die ( d6 ) three times ( 3d6 ) and summing up their numbers of points. Therefore, you have a minimum sum of 3 and a maximum of 18. For both, the number of permutations with repetition equals 1 ( {1,1,1} and {6,6,6} ). For a sum of 4, one has 3 permutations {1,1,2} {1,2,1} {2,1,1} . I was wondering if there is any way to calculate this using a formula. This gets especially interesting when combining multiple non-cubic dice, increasing the complexity. Basic combinatoric formulas seem not to apply to this problem. I also thought about using the normal distribution; however, this set of data is discrete. The binomial distribution also does not work here, as we consider 6 pathways at each node. Thank you very much in advance. :)
|
You need to distribute a sum between three summands, each not greater than $6$ . This is an inclusion-exclusion problem with upper limits on the summands. The answer can be found here . The formula is right at the end of the page ( $m=3$ is the number of rolls, $R=6$ is the maximum roll, $k=13$ is the sum, and $\binom ab=0$ if $a<b$ ). For example let us calculate the number of permutations with the sum equal to $13$ : $$ \sum_{t=0}^3(-1)^t\binom3t\binom{3+13-(6+1)t-1}{3-1}=$$ $$= \sum_{t=0}^3(-1)^t\binom3t\binom{15-7t}{2}=$$ $$= \binom{15}2-\binom31\binom{8}2 =105-3\cdot 28=105-84=21.$$ Indeed, here are all $21$ of the rolls: 661 652 643 634 625 616 562 553 544 535 526 463 454 445 436 364 355 346 265 256 166
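The count of $21$ can be cross-checked by brute force; a sketch, with the inclusion-exclusion count written in the standard form $\sum_t(-1)^t\binom{m}{t}\binom{k-Rt-1}{m-1}$ for $x_1+\dots+x_m=k$, $1\le x_i\le R$ (the function names are made up):

```python
from itertools import product
from math import comb

def count_rolls(total, dice=3, sides=6):
    # Brute force: ordered rolls of `dice` fair dice summing to `total`.
    return sum(1 for roll in product(range(1, sides + 1), repeat=dice)
               if sum(roll) == total)

def count_formula(total, dice=3, sides=6):
    # Inclusion-exclusion count of x1+...+xm = k with 1 <= xi <= R,
    # written as sum_t (-1)^t C(m,t) C(k - R*t - 1, m - 1).
    m, R, k = dice, sides, total
    return sum((-1) ** t * comb(m, t) * comb(k - R * t - 1, m - 1)
               for t in range(m + 1) if k - R * t - 1 >= m - 1)
```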
|
|combinatorics|statistics|
| 1
|
Can lagrange multipliers be vectors?
|
In this thread about the derivation of lagrange multipliers as well as the wikipedia article the Lagrangian is defined as $ F(\vec{x}, \vec{\lambda}) = f(\vec{x}) - \langle \vec{\lambda}, \vec{g(x)} \rangle $ . I thought that $\lambda$ had to be a scalar in order to constrain $ \nabla g $ and $ \nabla f $ to be collinear. How can I interpret the formulation in which $\lambda$ is a vector?
|
If you are trying to optimize a function $f$ subject to constrains $g_1=0,\ldots,g_k =0$ , then the key point of the theory of Lagrange multipliers is that, at a (constrained) local extremum $x_0$ , the gradient $\nabla(f)$ will be normal to the domain given by the constraints. When there are multiple constraints, this means that $\nabla(f)(x_0)$ will be a linear combination of the gradients $\nabla(g_1)(x_0),\ldots,\nabla(g_k)(x_0)$ , that is, there will be $\vec{\lambda} \in \mathbb R^k$ such that $$ \nabla(f)(x_0) = \sum_{i=1}^k \lambda_i \nabla(g_i)(x_0) $$ which translates into a "Lagrangian" $$ F(\vec{x},\vec{\lambda}) = f(\vec{x}) - \langle \vec{\lambda},\vec{g}(x)\rangle, $$ where $\vec{g} = (g_1,\ldots,g_k)$ . Thus the Lagrange multipliers form a vector in $\mathbb R^k$ where $k$ is given by the number of constraining functions (so in particular, while it is a vector, it is not a vector in the same space as the vector $\vec{x}$ ).
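A small numerical illustration (a sketch; the problem and the point are invented for the example): minimize $f=x^2+y^2+z^2$ subject to $g_1=x+y+z-1=0$ and $g_2=x-y=0$. The minimizer is $(1/3,1/3,1/3)$, and there $\nabla f$ is a combination of the two constraint gradients with $\vec\lambda=(2/3,0)\in\mathbb R^2$:

```python
# At x0 = (1/3, 1/3, 1/3):
grad_f = (2 / 3, 2 / 3, 2 / 3)   # ∇f = 2(x, y, z)
grad_g1 = (1.0, 1.0, 1.0)        # ∇g1 for g1 = x + y + z - 1
grad_g2 = (1.0, -1.0, 0.0)       # ∇g2 for g2 = x - y

lam = (2 / 3, 0.0)               # the Lagrange multiplier *vector* (one entry per constraint)
residual = max(abs(grad_f[i] - (lam[0] * grad_g1[i] + lam[1] * grad_g2[i]))
               for i in range(3))
```

Note the multiplier vector lives in $\mathbb R^2$ (one component per constraint), not in the same space as $\vec x\in\mathbb R^3$.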
|
|multivariable-calculus|lagrange-multiplier|
| 1
|
Prove that $\int_0^\infty \frac{e^{\cos(ax)}\cos\left(\sin (ax)+bx\right)}{c^2+x^2}dx =\frac{\pi}{2c}\exp\left(e^{-ac}-bc\right)$
|
In my course, I have to prove formula below $$I=\int_0^\infty \frac{e^{\cos(ax)}\cos\left(\sin (ax)+bx\right)}{c^2+x^2}dx =\frac{\pi}{2c}\exp\left(e^{-ac}-bc\right)$$ for $a,b,c>0.$ I know that this integral can be easily solved with complex analysis using $$f(z)=\frac{1}{2} \ \mathbb{R} \left(\int_{-\infty}^\infty \frac{\exp\left(e^{iaz}+ibz\right)}{c^2+z^2}dz\right)$$ but right now I am in a course dealing with real analysis. I tried to use parametrization integral method $$I'(a)=-\int_0^\infty \frac{xe^{\cos(ax)}\sin(\sin(ax)+(a+b)x)}{c^2+x^2}dx $$ but it doesn't look easier to handle. I tried to differentiate it again, but I just got a horrible form. An idea came to mind to differentiate with respect to parameter $b$ and set a differential equation $$I''(b)+x^2I(b)=0$$ plugging this ODE to W|A, I got $$I(b)=c_1\cos(bx^2)+c_2\sin(bx^2)$$ It's definitely wrong! After seeing Samrat's answer, I tried to plug in again to W|A and I got $$I(b)=c_1 D_{-1/2}((i+1)b)+c_2 D_{-1/2}((i-1)b)$$ w
|
$$ \begin{align} \int_0^\infty \frac{e^{\cos(ax)}\cos\big(\sin (ax)+bx\big)}{c^2+x^2}dx &= \frac{1}{2} \ \text{Re} \int_{-\infty}^\infty \frac{e^{e^{iax}} e^{ibx}}{c^2+x^2}dx \\ &= \text{Re} \ i \pi \ \text{Res} \left[\frac{e^{e^{iaz}}e^{ibz}}{c^{2}+z^{2}}, ic \right] \\ &= \pi \frac{e^{e^{-ac}}e^{-bc}}{2c} \\ &= \frac{\pi}{2c} e^{-bc+e^{-ac}} \end{align}$$ To see that $ \displaystyle \int \frac{e^{e^{iaz}} e^{ibz}}{c^2+z^2}dz$ vanishes along the upper half of $|z|=R$ as $R \to \infty$ , expand $e^{e^{iaz}}$ in a Maclaurin series, switch the order of integration and summation, and then apply Jordan's lemma. Or something like that.
|
|calculus|real-analysis|integration|definite-integrals|improper-integrals|
| 0
|
Integral: $\int_0^\infty \tan^{-1}\left(\frac{2ax}{x^2+c^2} \right)\sin(bx) \; dx$
|
Please help me in proving the following result: $$\displaystyle \int_0^\infty \tan^{-1}\left(\frac{2ax}{x^2+c^2} \right)\sin(bx) \; dx=\frac{\pi}{b}e^{-b\sqrt{a^2+c^2}}\sinh (ab)$$ I found this integral from here: http://integralsandseries.prophpbb.com/post2652.html?sid=d6641d4d4a3726f1b27bbb4b98ca840a and the solution uses contour integration. I am wondering if there is a way to solve it without using contour integration. I tried differentiating wrt $a$ and $c$ but in both cases, the resulting expression was dirty which made me reluctant to proceed further. I am out of ideas for this one. Any help is appreciated. Thanks!
|
When we see an arctan, sometimes parts is a good start. Let $\displaystyle u=\tan^{-1}\left(\frac{2ax}{x^{2}+c^{2}}\right), \;\ dv=\sin(bx)dx, \;\ du=\frac{-2a(x+c)(x-c)}{x^{4}+2(2a^{2}+c^{2})x^{2}+c^{4}}dx, \;\ v=\frac{-1}{b}\cos(bx)$ The uv part goes to 0 and, since the remaining integrand is even, we have: $$\frac{2a}{b}\int_{-\infty}^{\infty}\frac{(x-c)(x+c)\cos(bx)}{x^{4}+2(2a^{2}+c^{2})x^{2}+c^{4}}dx$$ Use a semicircle in the UHP and consider: $$f=\frac{2a}{b}\int_{C}\frac{(z+c)(z-c)e^{ibz}}{z^{4}+2(2a^{2}+c^{2})z^{2}+c^{4}}$$ The portion around the arc tends to 0 as $R\to \infty$ . It has poles at: $$z=(\sqrt{a^{2}+c^{2}}+a)i, \;\ z=(\sqrt{a^{2}+c^{2}}-a)i$$ $$2\pi i \cdot Res(f, \;\ (\sqrt{a^{2}+c^{2}}+a)i)=\frac{\pi }{b}e^{-ab}e^{-b\sqrt{a^{2}+c^{2}}}$$ $$2\pi i \cdot Res(f, \;\ (\sqrt{a^{2}+c^{2}}-a)i)=\frac{\pi}{b}e^{ab}e^{-b\sqrt{a^{2}+c^{2}}}$$ sum the residues in the UHP and note the exponential identity for sinh(ab). Thus, obtaining: $$=\frac{\pi}{b}e^{-b\sqrt{a^{2}+
|
|calculus|integration|definite-integrals|
| 0
|
Boole Algebra, ab+bc+ca=(a+b)(b+c)(c+a), how to solve starting from left?
|
I was able to find a demonstration starting from the right part; I wanted to start from the left part, in order to check whether I understood it well. Does starting from the left or the right make any difference in how complex the resolution is? And, if yes, what criteria should I use to identify which part is simpler? My current steps are these: (intuition: with the expanded form, I have some chance to re-group as I need; I will add the missing terms but with neutral values) ab+bc+ca = ab(c+'c) + bc (a+'a) + ca (b+'b) ................ = abc + ab'c + abc + 'abc + abc + a'bc ................ = how should I proceed? (the single quote denotes NOT) The steps from the right are these: (a+b)(b+c)(c+a) = a(b+c)(c+a)+b(b+c)(c+a) ........................... = abc+aab+acc+aac+bbc+abb+bcc+abc ........................... = abc+ab+ac+ac+bc+ab+bc ........................... = abc+ab+ac+bc ........................... = ab(c+1) + ac + bc ........................... = ab+ac+bc Following the suggesti
|
Distribution works both ways, also over AND. To mimic your attempt from right to left: $$\begin{align*} ab+bc+ca &= (a+bc+ca)(b+bc+ca)\\ &= (a+b+c)(a+b+a)(a+c+c)(a+c+a)(b+b+c)(b+b+a)(b+c+c)(b+c+a)\\ &= (a+b+c)(a+b)(a+c)(a+c)(b+c)(a+b)(b+c)\\ &= (a+b+c)(a+b)(a+c)(b+c)\\ &= (a+b)(a+c)(b+c) && \text{(absorption: }(a+b)(a+b+c)=a+b\text{)} \end{align*}$$
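Since both sides are Boolean functions of three variables, the identity can also be verified exhaustively over all $2^3$ assignments; a sketch with `+` read as OR and juxtaposition as AND:

```python
def lhs(a, b, c):
    # ab + bc + ca
    return (a and b) or (b and c) or (c and a)

def rhs(a, b, c):
    # (a+b)(b+c)(c+a)
    return (a or b) and (b or c) and (c or a)

# Both sides compute the majority function of (a, b, c).
identity_holds = all(lhs(a, b, c) == rhs(a, b, c)
                     for a in (False, True)
                     for b in (False, True)
                     for c in (False, True))
```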
|
|boolean-algebra|
| 1
|
Different ways to prove $\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}$ (the Basel problem)
|
As I have heard people did not trust Euler when he first discovered the formula (solution of the Basel problem ) $$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}$$ However, Euler was Euler and he gave other proofs. I believe many of you know some nice proofs of this, can you please share it with us?
|
This proof comes from here The triangle inequality implies $\displaystyle 1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \frac{\pi^2}{6} $ Pick three random points $A$ , $B$ , $C$ on the unit circle. Let the sides of triangle $ABC$ have length $a$ , $b$ , $c$ . The probability that $a+b\gt xc$ is $$\mathbb{Pr}(\frac{a+b}c\gt x)=\frac4{\pi^2}\int_{-\pi/2}^{\pi/2} \arcsin(\frac{\cos\phi}x)d\phi$$ which has the series $$\frac8{\pi^2}\sum_{n=0}^{\infty}\frac1{(2n+1)^2x^{2n+1}}$$ By the triangle inequality, the limit of the probability as $x\to1$ is $1$ , which establishes the sum. Proof: The angles at $A$ , $B$ and $C$ are half the angles subtended at the centre so $(\alpha,\beta,\gamma)$ is uniformly distributed over the flat region bounded by $(\pi,0,0),(0,\pi,0),(0,0,\pi)$ . Since $\frac{\sin\alpha}a=\frac{\sin\beta}b=\frac{\sin\gamma}c$ , the key ratio equals $$\frac{a+b}c=\frac{\sin\alpha+\sin\beta}{\sin\gamma}\\=\frac{2\sin\frac{\alpha+\beta}2\cos{\frac{\alpha-\beta}2}}{\sin(\alpha+\be
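Independent of any particular proof, the value itself can be checked numerically using the standard tail estimate $\frac{1}{N+1}<\frac{\pi^2}{6}-\sum_{k\le N}\frac1{k^2}<\frac1N$ (from comparing the tail with $\int dx/x^2$); a sketch:

```python
import math

# Partial sum of the Basel series and its distance from pi^2/6.
N = 10_000
partial = sum(1 / k**2 for k in range(1, N + 1))
tail = math.pi**2 / 6 - partial
```

The tail lands strictly between $1/(N+1)$ and $1/N$, as the integral comparison predicts.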
|
|sequences-and-series|fourier-analysis|big-list|transcendental-numbers|faq|
| 0
|
Find $a$ for which $(a-3)(a-7)$ is a perfect square
|
The only solutions seem to be 3 and 7 but I can't prove that there are no others. Context: Find every value for integer a, for which $x^2-(a+5)x+5a+1$ expression can be factored as $(x+b)(x+c)$ where b and c are integers. $x=\frac{a+5 \pm \sqrt{(a-3)(a-7)}}{2}$ (In a pre-calc book, btw. Real solutions only (don't know if it matters)) I have no idea where to start, forgive me for not having any real attempts.
|
To answer your question: when can $(a-3)(a-7)$ be a perfect square? Suppose $$(a-3)(a-7)=k^2, $$ for some integer $k$ . Then $$a^2-10a+21-k^2=0.$$ This implies that $$a = \dfrac{10\pm\sqrt{100-4(21-k^2)}}{2},$$ implying that $$a = \dfrac{10\pm 2\sqrt{4+k^2}}{2}.$$ As $a$ has to be an integer, we look for values of $k$ for which $4+k^2$ is a perfect square. This is quite easy, as $$k^2+4 = m^2 \implies 4 = (m-k)(m+k).$$ Following are the possibilities now: $(m-k, m+k)=(\pm1,\pm4); (m-k, m+k)=(\pm2,\pm2); (m-k,m+k)=(\pm4,\pm1)$ . The first and last cases can be easily discarded, as their solutions are not integers. Thus, we are left with the option $(m-k, m+k) = (\pm2,\pm2)$ . Solving this, we get $m=\pm2$ and $k=0$ . Thus $$(a-3)(a-7)=0,$$ implying that $a=3, 7$ are the only solutions. $\textbf{Note.}$ I did not notice the comment of Will Jagy, which runs along similar lines; I only saw it after writing this answer.
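A brute-force confirmation over a finite range (a sketch; it cannot replace the proof, but it is a useful sanity check):

```python
import math

def is_perfect_square(n):
    # Integer square test; negative numbers are never perfect squares.
    return n >= 0 and math.isqrt(n) ** 2 == n

# Search a wide symmetric range of integers a.
solutions = [a for a in range(-10_000, 10_001)
             if is_perfect_square((a - 3) * (a - 7))]
```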
|
|square-numbers|
| 1
|
Find $a$ for which $(a-3)(a-7)$ is a perfect square
|
The only solutions seem to be 3 and 7 but I can't prove that there are no others. Context: Find every value for integer a, for which $x^2-(a+5)x+5a+1$ expression can be factored as $(x+b)(x+c)$ where b and c are integers. $x=\frac{a+5 \pm \sqrt{(a-3)(a-7)}}{2}$ (In a pre-calc book, btw. Real solutions only (don't know if it matters)) I have no idea where to start, forgive me for not having any real attempts.
|
Alternative approach: Let $~b = (a-5) \implies ~(a-3) \times (a-7) = b^2 - 4.~$ So, you are looking for some integer $~b,~$ other than $~b = \pm 2,~$ such that $~b^2 - 4~ = n^2 \implies b^2 - n^2 = 4,~$ for some integer $~n.$ If such a solution $~(b,n)~$ exists, then it must also exist with $~b,n \in \Bbb{Z_{\geq 0}},~$ with $~b > n.$ So, there must exist a $~k \in \Bbb{Z^+}~$ such that $~b = n+k \implies 4 = (n+k)^2 - n^2 = 2nk + k^2.~$ Since $~4~$ is an even number, $~k^2~$ must also be even, which implies that $~k~$ is even. So, the smallest possible value for $~k~$ is $~k = 2.~$ Then, this implies that $~4 \geq 2n(2) + (2)^2 = 4n + 4.~$ This requires that $~n = 0,~$ which implies that $~b^2 = 4 \implies b = \pm 2.~$ So, no solution other than $~b = \pm 2~$ is possible.
|
|square-numbers|
| 0
|
Evaluate $\int_0^{\infty } \frac{\sinh (a x) \sinh (b x)}{(\cosh (a x)+\cos (t))^2} \, dx$
|
Gradshteyn&Ryzhik $3.514.4$ states that $$\int_0^{\infty } \frac{\sinh (a x) \sinh (b x)}{(\cosh (a x)+\cos (t))^2} \, dx=\frac{\pi b \csc (t) \csc \left(\frac{\pi b}{a}\right) \sin \left(\frac{b t}{a}\right)}{a^2}$$ whenever $0<b<a$ . My bet is on Feynman's trick or contour integration but I haven't figured out the exact way. Any help will be appreciated!
|
$$2 \sum_{n=1}^{\infty} (-1)^{n-1} \sin(bnx) e^{-anx} = \frac{\sin bx}{\cosh ax + \cos bx}$$ and $$2 \sum_{n=1}^{\infty} (-1)^{n-1} \cos (bnx) e^{- anx} = \frac{e^{-ax} + \cos bx}{\cosh ax + \cos bx} .$$ Differentiating the first identity with respect to $a$ , $$2 \sum_{n=1}^{\infty} (-1)^{n-1} n \sin (bnx) e^{-anx} = \frac{\sinh (ax) \sin (bx)}{(\cosh ax + \cos bx)^{2}} . $$ Therefore, $$ \int_{0}^{\infty} \frac{\sinh (ax) \sin (bx)}{(\cosh ax + \cos bx)^{2}} \ dx = 2 \int_{0}^{\infty} \sum_{n=1}^{\infty} (-1)^{n-1} n \sin (bnx) e^{-anx} \ dx .$$ Changing the order of integration and summation at this point will lead to a nonsensical result, namely that the integral is not finite. But this is not entirely unexpected since Fubini's theorem is not satisfied. That is $$ 2 \int_{0}^{\infty} \sum_{n=1}^{\infty} \Big| (-1)^{n-1} n \sin (bnx) e^{-anx} \Big| \ dx \not< \infty .$$ So that the theorem is satisfied, write the right hand side as $$ \begin{align} 2 \lim_{\epsilon \to 0} \int_{\epsilon}^{\inft
|
|integration|complex-analysis|definite-integrals|
| 0
|
Evaluate $\int_{-\frac{1}{\sqrt 3}}^{\frac{1}{\sqrt 3}} \frac{x^4}{1-x^4} \cos^{-1} (\frac{2x}{1+x^2}) dx$
|
Note that the integral can(not) be simplified as $$ 2\int_0^{1/\sqrt 3} \frac{x^4}{1-x^4} \cos^{-1} \left(\frac{2x}{1+x^2}\right)dx $$ since $\cos^{-1}$ is not an even function. Let $x=\tan y$ $$ \implies \int_{-\pi/6}^{\pi/6}\frac{\tan^4 y}{1-\tan^4 y} \left(\frac{\pi}{2} -2y\right) \sec^2(y)\ dy $$ So how do I solve the original problem?
|
We will integrate by parts to get rid of the arccos. The remaining integral will be trivial by symmetry. Observe that (at least for $0<x<1$ ) $$ \partial_x \cos^{-1} \left( \frac{2x}{1+x^2}\right) = \frac{2(1+x^2) - (2x)(2x)}{(1+x^2)^2} \cdot \frac{-1}{\sqrt{1 - \left(\frac{2x}{1+x^2} \right)^2}} = \frac{-2}{1+x^2}. $$ Also, $$ \frac{x^4}{1-x^4} = -1 + \frac{1}{2} \left[\frac{1}{1+x^2} + \frac{1}{1-x^2} \right]. $$ Integrating by parts now yields $$ \int_{-\frac{1}{\sqrt{3}}}^{\frac{1}{\sqrt{3}}} dx \frac{x^{4}}{1-x^{4}}\cos^{-1}\left(\frac{2x}{1+x^{2}}\right) = \left( -x + \frac{1}{2} \tan^{-1} x + \frac{1}{2}\tanh^{-1} x \right) \cos^{-1}\left(\frac{2x}{1+x^{2}}\right) \left. \right\lvert _{-\frac{1}{\sqrt{3}}}^{\frac{1}{\sqrt{3}}} - \int_{-\frac{1}{\sqrt{3}}}^{\frac{1}{\sqrt{3}}} dx \left( -x + \frac{1}{2} \tan^{-1} x + \frac{1}{2} \tanh^{-1} x \right) \frac{-2}{1+x^2} \\= \pi \left[\frac{\pi}{12} - \frac{1}{\sqrt 3} + \frac{1}{4} \log \left(\frac{\sqrt 3 + 1}{\sqrt 3 - 1} \right) \right], $$ beca
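The closed form can be checked against a numerical quadrature (a sketch using composite Simpson's rule):

```python
import math

def integrand(x):
    # x^4/(1-x^4) * arccos(2x/(1+x^2)); smooth on [-1/sqrt(3), 1/sqrt(3)].
    return x**4 / (1 - x**4) * math.acos(2 * x / (1 + x**2))

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

L = 1 / math.sqrt(3)
numeric = simpson(integrand, -L, L)
closed = math.pi * (math.pi / 12 - 1 / math.sqrt(3)
                    + 0.25 * math.log((math.sqrt(3) + 1) / (math.sqrt(3) - 1)))
```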
|
|calculus|definite-integrals|trigonometric-integrals|
| 0
|
Generalized intermediate value theorem for $s f(x) -t f(y) = (s - t) f(z)$
|
As we know, for $s, t >0$ and a continuous function $f$ on $[a, b]$ , there is some $z$ with $s f(x) + t f(y) = (s + t) f(z)$ ( $x, y\in (a, b)$ ). Is the same true for $s f(x) - t f(y) = (s - t) f(z)$ ?
|
No. Consider $f(x)=x$ , $s=1,t=1,x=0,y=1$ . Then $s f(x)-t f(y)=-1$ , while $(s-t)f(z) = 0$ for every $z$ , so no choice of $z$ works.
|
|calculus|
| 1
|
Prove the inequality knowing $x,y,z \ge 0$ and $xyz=1$ $\frac{x^5}{x^2+1}+\frac{y^5}{y^2+1}+\frac{z^5}{z^2+1}\ge\frac{3}{2}$
|
Prove the inequality knowing $x,y,z \ge 0$ and $xyz=1$ $$\frac{x^5}{x^2+1}+\frac{y^5}{y^2+1}+\frac{z^5}{z^2+1}\ge\frac{3}{2}$$ Starting from the condition set that $xyz =1$ I did this : $$x^3+y^3+z^3\ge3\sqrt[3]{x^3y^3z^3}$$ $$x^3+y^3+z^3\ge3$$ I also know that the fraction $\frac{3}{2}$ usually comes either from the famous Nesbitt inequality or from some application of Titu's lemma that leads to $\frac{9}{4}$ which is then simplified. However I can't find any way to manipulate the inequality. I did try to multiply and rewrite the inequality so that $x,y,z$ appear to an even power, like this : $$\frac{x^4}{x^2+yz}+\frac{y^4}{y^2+xz}+\frac{z^4}{z^2+xy}\ge\frac{3}{2}$$ However this didn't bring me anywhere. Any kind of help is welcome.
|
We use the tangent line method : first set $f(x)= \frac{x^5}{x^2+1}$ , so that we have to show $$f(x)+f(y)+f(z) \geq \frac{3}{2}.$$ Note $f''(x) \geq 0 \ \forall x \in \mathbb R_{\geq 0}$ . Considering the tangent to $f(x)$ at $x=1$ , it becomes clear that $f(x) \geq 2x-\frac{3}{2}$ (try to verify this by cross multiplication), so that $$f(x)+f(y)+f(z) \geq 2(x+y+z) - \frac{9}{2} \geq 6-\frac{9}{2} = \frac32,$$ where the last inequality follows via AM-GM ( $x+y+z \ge 3\sqrt[3]{xyz} = 3$ ). Equality is attained at $(1,1,1)$ .
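A quick numeric spot-check of the tangent-line bound (my own code, not part of the answer):

```python
# f(x) = x^5/(x^2+1) should dominate its tangent at x = 1, namely 2x - 3/2,
# for all x >= 0, with equality exactly at the tangency point.
def f(x):
    return x**5 / (x**2 + 1)

xs = [i / 100 for i in range(0, 501)]  # grid on [0, 5]
assert all(f(x) >= 2*x - 1.5 - 1e-9 for x in xs)
assert abs(f(1) - (2*1 - 1.5)) < 1e-12  # equality at x = 1
print("tangent-line bound holds on the sampled grid")
```

In fact $f(x)-(2x-\tfrac32)=\frac{(x-1)^2(2x^3+4x^2+2x+3)}{2(x^2+1)}\ge 0$ for $x\ge 0$, which is what the cross multiplication yields.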
|
|inequality|cauchy-schwarz-inequality|a.m.-g.m.-inequality|
| 0
|
Model Theory in the Language of Peano Arithmetic
|
Most introductory textbooks on model theory establish the theory based on ZF set theory (e.g. [1]). In particular, a structure is defined to be a 4-tuple of sets , and so on. In [2], I came to realize that this choice is unnecessary, i.e. the underlying theory ought not be ZF set theory; people suggested there are model theories based on other theories (e.g. Peano Arithmetic (PA), category theory, type theory.. etc). It would be nice to learn about all those model theories, and about their differences. However, that's going to be too broad so here I specifically hope to start with model theory in PA (1). Please note that I'm not asking about models of PA, but model theory written in PA. A particularly interesting yet basic result is that model theory written in PA cannot prove the consistency of PA (cf. this ) (2). By learning (1), I hope to see how a structure can be defined without the notion of sets. By learning (2), I hope to see how one can disprove a statement about "PA/PA" (
|
I hope to see how a structure can be defined without the notion of sets. Why would you want to do so? I don't think PA is the right theory in which to do model theory. The cornerstone of model theory is Gödel's completeness theorem; its proof certainly doesn't require the full strength of ZFC. If you look at the wiki page of Reverse mathematics , you see completeness theorem (for countable language) is provable in $\mathrm{WKL}_0$ , which is roughly speaking a ridiculously weak set theory; in some sense it's even weaker than PA since it has proof theoretic ordinal $\omega^\omega$ while PA has $\varepsilon_0$ , but it's still set theory. That being said, I guess you can do some model theory in PA in an ad hoc way. Exercise I.16.19 in Kunen asks you to give a finitistic proof of the consistency of DTO, the theory of dense linear order without end points; finitistic means you can certainly carry this out in PA. If you look at the proof you are basically building a model of DTO within PA,
|
|logic|set-theory|model-theory|peano-axioms|
| 0
|
Prove the inequality knowing $x,y,z \ge 0$ and $xyz=1$ $\frac{x^5}{x^2+1}+\frac{y^5}{y^2+1}+\frac{z^5}{z^2+1}\ge\frac{3}{2}$
|
Prove the inequality knowing $x,y,z \ge 0$ and $xyz=1$ $$\frac{x^5}{x^2+1}+\frac{y^5}{y^2+1}+\frac{z^5}{z^2+1}\ge\frac{3}{2}$$ Starting from the condition set that $xyz =1$ I did this : $$x^3+y^3+z^3\ge3\sqrt[3]{x^3y^3z^3}$$ $$x^3+y^3+z^3\ge3$$ I also know that the fraction $\frac{3}{2}$ usually either comes from the famous $Nesbitt$ equation or from some sort of $Tittu$ that leads to $\frac{9}{4}$ which then is simplified. However I can't find any way how to manipulate the inequality. I did try to multiply and rewrite the inequality so that $x,y,z$ are an even power like this : $$\frac{x^4}{x^2+yz}+\frac{y^4}{y^2+xz}+\frac{z^4}{z^2+xy}\ge\frac{3}{2}$$ However this didn't bring me anywhere. Any kind of help is welcomed.
|
Since we have the constraint $xyz = 1$ , we can make the substitution $x = \frac{a}{b}$ , $y = \frac{b}{c}$ , and $z = \frac{c}{a}$ , where $a$ , $b$ , and $c$ are positive real numbers. (If you haven't seen this before, you can for example let $a = x$ , $b = 1$ , and $c = \frac{1}{y}$ , but we aren't really interested in the particular values of $a$ , $b$ , and $c$ .) We then want to prove that the inequality $$ \frac{a^5}{a^2 b^3 + b^5} + \frac{b^5}{b^2 c^3 + c^5} + \frac{c^5}{c^2 a^3 + a^5} \geq \frac{3}{2} $$ holds for all positive real numbers $a$ , $b$ , and $c$ (without any additional constraints) or, equivalently, $$ \frac{a^6}{a^3 b^3 + a b^5} + \frac{b^6}{b^3 c^3 + b c^5} + \frac{c^6}{c^3 a^3 + c a^5} \geq \frac{3}{2}. $$ By the Cauchy-Schwarz Inequality (sometimes called Titu's Lemma, like in your question) $$ \frac{a^6}{a^3 b^3 + a b^5} + \frac{b^6}{b^3 c^3 + b c^5} + \frac{c^6}{c^3 a^3 + c a^5} \geq \frac{(a^3 + b^3 + c^3)^2}{a^3 b^3 + b^3 c^3 + c^3 a^3 + ab^5 + bc^5 + ca^
|
|inequality|cauchy-schwarz-inequality|a.m.-g.m.-inequality|
| 0
|
Prove the inequality knowing $x,y,z \ge 0$ and $xyz=1$ $\frac{x^5}{x^2+1}+\frac{y^5}{y^2+1}+\frac{z^5}{z^2+1}\ge\frac{3}{2}$
|
Prove the inequality knowing $x,y,z \ge 0$ and $xyz=1$ $$\frac{x^5}{x^2+1}+\frac{y^5}{y^2+1}+\frac{z^5}{z^2+1}\ge\frac{3}{2}$$ Starting from the condition set that $xyz =1$ I did this : $$x^3+y^3+z^3\ge3\sqrt[3]{x^3y^3z^3}$$ $$x^3+y^3+z^3\ge3$$ I also know that the fraction $\frac{3}{2}$ usually either comes from the famous $Nesbitt$ equation or from some sort of $Tittu$ that leads to $\frac{9}{4}$ which then is simplified. However I can't find any way how to manipulate the inequality. I did try to multiply and rewrite the inequality so that $x,y,z$ are an even power like this : $$\frac{x^4}{x^2+yz}+\frac{y^4}{y^2+xz}+\frac{z^4}{z^2+xy}\ge\frac{3}{2}$$ However this didn't bring me anywhere. Any kind of help is welcomed.
|
Alternate approach: We try to homogenize the inequality. Let $(xyz)^{2/3}$ be denoted by $p$ . $$\begin{align} \sum_{cyc} \frac{x^5}{x^2+1} &= \sum_{cyc}\frac{x^5}{x^2+p} \\ &= \frac1{xyz}\sum_{cyc}\frac{x^5}{x^2+p} \\ &\ge \frac 3{x^3+y^3+z^3} \sum_{cyc}\frac{x^6}{x^3+xp} \tag{AM-GM} \\ &\ge \frac 3{\sum_{cyc} x^3} \frac{\left(\sum_{cyc} x^3\right)^2}{\sum_{cyc} x^3+xp} \tag{C-S} \\ &= 3\cdot \frac{\sum_{cyc} x^3}{\sum_{cyc} x^3+xp} \ge \frac{3}{2} \end{align}$$ where the last inequality follows since $$\sum_{cyc}x^3 \ge \sum_{cyc} xp$$ which is trivially true by Muirhead or AM-GM.
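A numeric spot-check (my own code, not part of the answer) of the final step, $\sum_{cyc} x^3 \ge \sum_{cyc} x\,(xyz)^{2/3}$, which holds for all positive reals without any constraint:

```python
import random

# Verify x^3 + y^3 + z^3 >= (xyz)^(2/3) * (x + y + z) on random positive
# triples; this is the Muirhead/AM-GM fact the proof ends with.
def holds(x, y, z, tol=1e-9):
    return x**3 + y**3 + z**3 >= (x*y*z) ** (2/3) * (x + y + z) - tol

random.seed(0)
samples = [tuple(random.uniform(0.01, 10) for _ in range(3))
           for _ in range(10000)]
assert all(holds(*s) for s in samples)
print("inequality held on 10000 random positive triples")
```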
|
|inequality|cauchy-schwarz-inequality|a.m.-g.m.-inequality|
| 0
|
Motivation for the definition of curvature of a plane curve
|
I am seeking a motivation for the definition of the curvature of a plane curve. How did people come with the idea of the definition of the curvature? Below I am more specific. The fundamental theorem for plane curves states the following. Giving two curves $\alpha$ and $\beta$ there exists a rigid motion $M:\mathbb R^2\to \mathbb R^2$ such that $\beta(t)=M(\alpha(t))$ if, and only if, the curves $\alpha$ and $\beta$ have the same curvature function. Assume for a moment that you do not know about the curvature and that you would like to classify plane curves up to rigid motions. How could one come up with the idea of a geometric invariant that is enough to distinguish curves?
|
The key here is to consider an arc-length parametrisation of the curve, i.e., a parameter $s$ such that $v(s)=\alpha'(s)$ is a unit vector for all $s$ . It turns out that you can choose a continuous branch of the angle $\theta(s)$ (roughly, the angle with the positive direction of the $x$ -axis) such that $v(s)=e^{i\theta(s)}$ . The curvature of the curve is then precisely $\theta'(s)$ . If you think the derivative is natural, you will be naturally led to think that the curvature $\theta'(s)$ is also natural.
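Spelled out (my own rendering of the computation the answer alludes to), differentiating $v(s)=e^{i\theta(s)}$ in $s$ gives

```latex
v(s) = e^{i\theta(s)}
\;\Longrightarrow\;
v'(s) = i\,\theta'(s)\,e^{i\theta(s)},
\qquad
\kappa(s) := |\alpha''(s)| = |v'(s)| = |\theta'(s)|,
```

so in an arc-length parametrisation the curvature is just the rate at which the direction angle turns.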
|
|differential-geometry|curvature|plane-curves|
| 0
|
Minimum size of $\Omega$ for independent events with specified probabilities
|
Let $A_1, A_2, A_3, ..., A_n (n>1)$ be independent events such that $P(A_i)=\frac{1}{i}$ . What is the smallest possible size of $\Omega$ ? And what if $P(A_i)=\frac{1}{e^i}$ ? I imagine this as tossing $n$ unfair coins, where the first one always lands heads and the others can land either heads or tails, hence there would be $2^{n-1}$ possible outcomes and that is the size of $\Omega$ . If $P(A_i)=\frac{1}{e^i}$ , then the situation is the same, but the first coin can also land either heads or tails, hence the answer is $2^n$ . Is this correct? A similar question was asked here , but in my case one event has a probability of $1$ .
|
Yes. The answer you refer to shows that for $k$ independent events each with probability in $(0,1)$ , the minimal size of $\Omega$ is $2^k$ . If you need to add independent events with probability 0, just add $\emptyset$ , and for events with probability 1, take $\Omega$ itself. So here, if $\mathbb P(A_i)=\frac 1i$ then $\mathbb P(A_1)=1$ so the minimal size is $2^{n-1}$ , and if $\mathbb P(A_i)=\frac{1}{e^i}$ then the minimal size is $2^n$ .
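A concrete sketch (my own construction, not from the answer): realize the $n-1$ nontrivial events on $2^{n-1}$ atoms as a product of independent biased coins, taking $A_1 = \Omega$ since $P(A_1)=1$. Here $n=4$, with exact `Fraction` arithmetic:

```python
from itertools import product
from fractions import Fraction

n = 4
biases = [Fraction(1, i) for i in range(2, n + 1)]  # P(A_2), P(A_3), P(A_4)

atoms = {}  # outcome tuple -> probability, product of independent coins
for w in product((0, 1), repeat=n - 1):
    pr = Fraction(1)
    for bit, b in zip(w, biases):
        pr *= b if bit == 1 else 1 - b
    atoms[w] = pr

def P(event):
    return sum(atoms[w] for w in event)

A = [{w for w in atoms if w[j] == 1} for j in range(n - 1)]

# pairwise and triple independence hold exactly
assert P(A[0] & A[1]) == P(A[0]) * P(A[1])
assert P(A[0] & A[1] & A[2]) == P(A[0]) * P(A[1]) * P(A[2])
assert sum(atoms.values()) == 1
print(len(atoms), "atoms suffice")  # 2^(n-1) = 8
```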
|
|probability|
| 1
|
Example 3.5 Silverman's Elliptic Curves: problem with a point in the projective space
|
I am reading Silverman's The Arithmetic of Elliptic Curves . I have a question regarding Example 3.5, which is the following: I have doubts about the equality $[-Y^2,Y(X-Z)]=[-Y,X-Z]$ . Two points $[x_0,\ldots, x_n]$ , $[x_0',\ldots, x_n']$ in the projective space are the same if we can write $x_i=\lambda x_i'$ for some $\lambda \in K^\times$ for every $i=0,\ldots, n$ . So my question here is: what happens if $Y=0$ ? I don't think we can safely state that $[-Y^2,Y(X-Z)]=[-Y,X-Z]$ . Actually, if $Y=0$ , we have that $[-Y^2,Y(X-Z)]=[0,0]$ so this is not even a well-defined point. What worries me is that this reasoning he does is precisely meant to solve a problem that arises precisely when $Y=0$ , which is the only time when I don't see his reasoning working.
|
Be careful: the point is not that the equality $[-Y^2,Y(X-Z)]=[-Y,X-Z]$ holds at a point with $Y=0$ - rather, this equality holds on their common domain of definition and you can consider the map which is the gluing of these maps. Said slightly differently, you can consider a map defined by $[X+Z,Y]$ on the open set $V\setminus [1,0,-1]$ and $[-Y,X-Z]$ on the open set $V\setminus [1,0,1]$ , and since $[X+Z,Y]=[-Y,X-Z]$ on the open set $V\setminus \{[1,0,-1],[1,0,1]\}$ , these formulas combine to give one map $V\to\Bbb P^1$ .
|
|algebraic-geometry|elliptic-curves|projective-space|
| 1
|
Why is a finite integral domain always field?
|
This is how I'm approaching it: let $R$ be a finite integral domain and I'm trying to show every element in $R$ has an inverse: let $R-\{0\}=\{x_1,x_2,\ldots,x_k\}$, then as $R$ is closed under multiplication $\prod_{i=1}^k x_i=x_j$ for some $j$, therefore by canceling $x_j$ we get $x_1x_2\cdots x_{j-1}x_{j+1}\cdots x_k=1 $, by commuting any of these elements to the front we find an inverse for the first term, e.g. for $x_m$ we have $x_m(x_1\cdots x_{m-1}x_{m+1}\cdots x_{j-1}x_{j+1}\cdots x_k)=1$, where $(x_m)^{-1}=x_1\cdots x_{m-1}x_{m+1}\cdots x_{j-1}x_{j+1}\cdots x_k$. As far as I can see this is correct, so we have found inverses for all $x_i\in R$ apart from $x_j$ if I am right so far. How would we find $(x_{j})^{-1}$?
|
That’s a corollary of a more general result: Lemma. Let $R$ be a finite commutative ring. Then every element is either a unit or a zero divisor. Proof. Let $r \in R$ and consider the mapping $$\varphi_r: R \to R, \quad a \mapsto ra.$$ $\varphi_r$ can either be injective or not. If $\varphi_r$ is not injective, there are elements $a,b \in R, a \neq b$ such that $ra = rb$ . That is, $ra - rb = 0$ , and hence $$r (a - b) = 0.$$ Since $a \neq b$ , it holds that $a - b \neq 0$ , so $r$ is a zero divisor. Otherwise, if $\varphi_r$ is in fact injective, it’s also surjective since it’s a mapping from a finite set to itself. Let $a$ be a preimage of $1 \in R$ , that is, $$1 = \varphi_r(a) = ra.$$ But that means that $r$ is a unit. $\blacksquare$ Corollary. Every finite integral domain is a field. Proof. An integral domain has no zero divisors except $0$ . Hence, by the lemma, all elements except $0$ are units, $R^\times = R \setminus \{ 0 \}$ , so $R$ is a field. $\blacksquare$
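A small illustration of the lemma (my own example, not from the answer) in the finite integral domain $\mathbb Z/7\mathbb Z$:

```python
# For every nonzero r, the map a -> r*a mod 7 is injective, hence surjective
# on a finite set, so a preimage of 1 is an inverse of r.
p = 7
for r in range(1, p):
    image = {r * a % p for a in range(p)}
    assert len(image) == p  # phi_r is injective, hence surjective
    inv = next(a for a in range(p) if r * a % p == 1)
    assert r * inv % p == 1
print("every nonzero element of Z/7Z is a unit")
```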
|
|abstract-algebra|ring-theory|field-theory|integral-domain|finite-rings|
| 0
|
Prove the inequality knowing $x,y,z \ge 0$ and $xyz=1$ $\frac{x^5}{x^2+1}+\frac{y^5}{y^2+1}+\frac{z^5}{z^2+1}\ge\frac{3}{2}$
|
Prove the inequality knowing $x,y,z \ge 0$ and $xyz=1$ $$\frac{x^5}{x^2+1}+\frac{y^5}{y^2+1}+\frac{z^5}{z^2+1}\ge\frac{3}{2}$$ Starting from the condition set that $xyz =1$ I did this : $$x^3+y^3+z^3\ge3\sqrt[3]{x^3y^3z^3}$$ $$x^3+y^3+z^3\ge3$$ I also know that the fraction $\frac{3}{2}$ usually comes either from the famous Nesbitt inequality or from some application of Titu's lemma that leads to $\frac{9}{4}$ which is then simplified. However I can't find any way to manipulate the inequality. I did try to multiply and rewrite the inequality so that $x,y,z$ appear to an even power, like this : $$\frac{x^4}{x^2+yz}+\frac{y^4}{y^2+xz}+\frac{z^4}{z^2+xy}\ge\frac{3}{2}$$ However this didn't bring me anywhere. Any kind of help is welcome.
|
Another solution: $$\sum \frac{x^5}{x^2+1} =\frac{1}{(xyz)^{2/3}}\cdot \sum \frac{x^5}{x^2+xyz} =$$ $$= \frac{1}{(xyz)^{2/3}}\cdot \sum \frac{x^4}{x+yz} \overset{\text{AM-GM}}\ge \frac{3}{x^2+y^2+z^2}\cdot\sum \frac{(x^2)^2}{x+yz} \ge $$ $$\overset{\text{Titu}}\ge \frac{3}{x^2+y^2+z^2}\cdot \frac{(x^2+y^2+z^2)^2}{xy+yz+zx+x+y+z}\ge $$ $$\overset{*}\ge \frac{3(x^2+y^2+z^2)}{x^2+y^2+z^2+x^2+y^2+z^2} = \frac32.$$ $(*)$ is true because $$x^2+y^2+z^2\ge xy+yz+zx$$ and $$x^2+y^2+z^2\ge x+y+z$$ which follows from $$x^2+y^2+z^2\ge \frac{(x+y+z)^2}{3} \ge (x+y+z)\sqrt[3]{xyz}=x+y+z.$$
|
|inequality|cauchy-schwarz-inequality|a.m.-g.m.-inequality|
| 0
|
Number of spanning trees for $K_n-e$
|
As the title suggests, I want to calculate the number of spanning trees in $K_n - e$ where $K_n$ is the complete graph on $n$ vertices and $e$ is any edge. The answer to this problem is $(n-2)\cdot n^{n-3}$ and I know one method to do this. My actual question is regarding one of my approaches, which is wrong. So let's pick any vertex of the graph, say $v_1$ , and delete one of its edges. Now it has $n-2$ edges and through these I can reach any of the other vertices. Now all the remaining vertices form a $K_{n-1}$ graph and these would have $(n-1)^{n-3}$ spanning trees, so the total number of spanning trees would be $(n-2)\cdot(n-1)^{n-3}$ . Where exactly am I going wrong, and is it possible to proceed with this approach? Thank you
|
Your method counts only the spanning trees where $v_1$ is a leaf. A spanning tree of $K_n - e$ does not necessarily consist of a spanning tree of $K_n - v_1$ plus an edge to $v_1$ other than $e$ .
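As a sanity check of the stated count $(n-2)\cdot n^{n-3}$ (my own code, not from the answer), one can compute the number of spanning trees of $K_n - e$ exactly via Kirchhoff's matrix-tree theorem, using `Fraction` arithmetic so the determinant is exact:

```python
from fractions import Fraction

def spanning_trees_Kn_minus_e(n):
    # Laplacian of K_n with the edge {0, 1} removed
    L = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and {i, j} != {0, 1}:
                L[i][j] = Fraction(-1)
                L[i][i] += 1
    # delete the last row/column and take the determinant (matrix-tree)
    M = [row[:-1] for row in L[:-1]]
    det, m = Fraction(1), n - 1
    for c in range(m):
        piv = next(r for r in range(c, m) if M[r][c] != 0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for k in range(c, m):
                M[r][k] -= f * M[c][k]
    return int(det)

for n in range(3, 8):
    assert spanning_trees_Kn_minus_e(n) == (n - 2) * n ** (n - 3)
print("matches (n-2)*n^(n-3) for n = 3..7")
```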
|
|combinatorics|graph-theory|trees|
| 1
|
Motivation for the definition of curvature of a plane curve
|
I am seeking a motivation for the definition of the curvature of a plane curve. How did people come with the idea of the definition of the curvature? Below I am more specific. The fundamental theorem for plane curves states the following. Giving two curves $\alpha$ and $\beta$ there exists a rigid motion $M:\mathbb R^2\to \mathbb R^2$ such that $\beta(t)=M(\alpha(t))$ if, and only if, the curves $\alpha$ and $\beta$ have the same curvature function. Assume for a moment that you do not know about the curvature and that you would like to classify plane curves up to rigid motions. How could one come up with the idea of a geometric invariant that is enough to distinguish curves?
|
$\newcommand\R{\mathbb{R}}$ Suppose you have a curve in the plane (think of a road). Suppose you parameterize it: $$ c: I \rightarrow \R^2, $$ where $I \subset \R$ is a connected interval (think of a car driving along the road, where $t$ is time and $c(t)$ is where the car is at time $t$ ). Then $v(t) = c'(t)$ is the velocity vector (which tells you the direction of travel and speed of the car at time $t$ ). The magnitude of the velocity vector is the speed of the parameterized curve (speed the car is traveling at). Let's call this $$ \sigma(t) = |v(t)|. $$ The unit vector $$ u(t) = \frac{v(t)}{|v(t)|} $$ tells you the direction of the velocity vector (direction in which the car is traveling). You can therefore write the velocity vector as $$ c'(t) = v(t) = \sigma(t)u(t). $$ The acceleration vector is the rate of change of the velocity vector, i.e, the second derivative of the position vector, $$ a(t) = v'(t) = c''(t). $$ There are two sources of acceleration: Change in speed and chang
|
|differential-geometry|curvature|plane-curves|
| 0
|
Find Modulus, Real and imaginary part of $\frac{(1+i)(1+i)^2(1+i)^3...(1+i)^{20}}{i^0+i^2+i^4+i^6+...+i^{20}}$
|
How to solve this task (find modulus, real and imaginary part)? $$z = \frac{(1+i)(1+i)^2(1+i)^3...(1+i)^{20}}{i^0+i^2+i^4+i^6+...+i^{20}}$$
|
Let $w=x+iy=1+i$ , then $r= \sqrt{x^2+y^2}$ , and $\cos(\theta) =x/r =1/\sqrt{2}$ , $\sin(\theta)=y/r=1/\sqrt{2}$ ; this implies that $\theta =\pi/4$ , and therefore we have $w=\sqrt{2}e^{i\pi/4}$ . The numerator is $w^{1+2+\cdots+20}=w^{210}$ , and the denominator $i^0+i^2+\cdots+i^{20}$ has $11$ alternating terms, $6(1)+5(-1)=1$ . Then we have $$|z| =\left|\frac{{(\sqrt{2}e^{i\pi/4})}^{210}}{6(1) + 5(-1)}\right|=\sqrt{2}^{\,210}\, \left|e^{i \frac{\pi}{4}\cdot 210 }\right|=2^{105}.$$
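A direct numeric check (my own code): build $z$ in floating-point complex arithmetic and compare $|z|$ with $2^{105}$.

```python
# numerator: product of (1+i)^k for k = 1..20; denominator: i^0 + i^2 + ... + i^20
num = 1
for k in range(1, 21):
    num *= (1 + 1j) ** k
den = sum(1j ** (2 * k) for k in range(11))
z = num / den
assert abs(den - 1) < 1e-12        # 6(1) + 5(-1) = 1
assert abs(abs(z) / 2 ** 105 - 1) < 1e-9
print(abs(z))  # ~4.06e31 = 2**105
```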
|
|linear-algebra|complex-numbers|
| 1
|
Prove that $\mathcal{F}=\bigcup^\infty_{n=1}\sigma(\{\{1\},..\{n\}\})$ is a field but not a $\sigma$-field on $\mathbb{N}$
|
Consider the sigma fields on $\mathbb{N}$ : $F_n=\sigma(A_n)$ , where $A_n=\{\{1\},\{2\},\ldots,\{n\}\}$ for $n\ge1$ . Define $\mathcal{F}:=\bigcup^\infty_{n=1}F_n$ . Prove that $\mathcal{F}$ is a field but not a $\sigma$ -field. For the $\sigma$ field part, I show that $\mathcal{F}$ is a countably infinite set (which is an impossibility for any $\sigma$ field). Is there any counterexample i.e., some set $A$ which belongs to $\sigma(\mathcal{F})$ but not $\mathcal F$ ?
|
To understand why $\mathcal F$ is not a $\sigma$ -field, it helps to have a more direct understanding of what $\mathcal F$ and $\sigma(\mathcal F)$ actually are. Since $F_n$ is generated by $\{1\}, \{2\}, \dots, \{n\}$ , it can be characterized by the following property: it includes the subsets of $\{1,2,\dots,n\}$ and their complements. Taking the union of $F_n$ over all $n$ means that we include all finite sets (because every finite set is a subset of $\{1,2,\dots,n\}$ for some $n$ ) and all their complements. On the other hand, if a set is infinite and its complement is infinite, then it is not in $F_n$ for any $n$ , so it is not in $\mathcal F$ . In particular, $\mathcal F$ contains $\{n\}$ for every $n$ , so $\sigma(\mathcal F)$ contains every subset of $\mathbb N$ : we can obtain every subset of $\mathbb N$ as a countable union of its singleton subsets. So to explain why $\mathcal F \ne \sigma(\mathcal F)$ , we just need to find some subset of $\mathbb N$ which is neither finite
|
|real-analysis|measure-theory|
| 1
|
Proof by contradiction, that a set of all binary sequences, where "1" cant be twice in a row, is uncountable
|
I emphasize that I want to prove it by contradiction using Cantor's diagonal method. $$A = \{ (a_i) \mid \text{$(a_i)$ is a binary sequence in which $1$ doesn't appear twice in a row}\}.$$ So I'm assuming $A$ is a countable set, and trying to find a contradiction by finding a sequence in $A$ that is not listed. My first attempt was to take the diagonal and change every $0$ to $1$ and vice versa. a1 = 01001... a2 = 10100... a3 = 01000... a4 = 00001... a5 = 10101... For this example my attempt fails because I will get 11110 , which is definitely not in $A$ (because $1$ appears twice in a row). The next attempt was to change $0$ to $1$ only if there is not a $1$ in the previous digit. For this attempt I will get 10100 , which is definitely in $A$ , but it equals $a_2$ , which is already listed, so it isn't a contradiction.
|
You can prove $A$ uncountable as follows. (Here I denote the set of all infinite binary sequences by $2^\mathbb N$ .) Prove $2^\mathbb N$ uncountable by the usual diagonal argument. Define a function $f : 2^\mathbb N \to A$ as follows: $$ f(x)_n := \begin {cases} x_k & \text {if $n = 2k$ for some $k$} \\ 0 & \text {otherwise} \end {cases} $$ In other words, $f$ interleaves its argument with a sequence of zeroes; the image of $f$ is the set of all binary sequences where every odd-indexed item is zero. $f$ is easily proven injective: if $x$ and $y$ differ at position $k$ , then $f(x)$ and $f(y)$ differ at position $2k$ . Therefore $|2^\mathbb N| = |f[2^\mathbb N]|$ . You can now say $A$ is uncountable, because you have exhibited uncountably many elements of it. Said differently, since $f[2^\mathbb N] \subseteq A$ , we have that $|\mathbb N| < |2^\mathbb N| = |f[2^\mathbb N]| \leq |A|$ , or simply $|\mathbb N| < |A|$ . $\blacksquare$ The above is probably not a proof your instructor had in mind, but it does rely on a diagonal argument; jus
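A finite illustration of the interleaving map (my own code, not part of the answer): every image avoids "11", since every odd position is zero, and $f$ is injective on length-5 prefixes.

```python
from itertools import product

def f(x):
    # place x_k at even index 2k and 0 at every odd index
    out = []
    for bit in x:
        out.extend((bit, 0))
    return tuple(out)

images = [f(x) for x in product((0, 1), repeat=5)]
assert len(set(images)) == 2 ** 5  # injective on all length-5 inputs
assert all('11' not in ''.join(map(str, im)) for im in images)
print("f is injective and its image lies inside A")
```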
|
|set-theory|cantor-set|
| 0
|
Why is $2^n +1=(2+1)(2^{n-1} - 2^{n-2} +2^{n-3}-\ldots+1)$ for odd $n$
|
This transformation is only one part of the solution to my problem, but the most significant one. I need to show that $2^n+1$ is divisible by $3$ for odd $n$ . I have practiced polynomial transformations for two weeks and geometric progressions for a week, but I still can't easily come up with this transformation.
|
Hint: you reported that you have studied geometric progressions. What is the sum of a geometric series with initial term $2^{n-1}$ , ratio $\frac{-1}2$ , and final term $1$ ?
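For reference, here is the sum the hint points to (my own working; for odd $n$ we have $(-\tfrac12)^n=-2^{-n}$):

```latex
2^{n-1}-2^{n-2}+2^{n-3}-\cdots+1
= 2^{n-1}\cdot\frac{1-\left(-\tfrac12\right)^{n}}{1-\left(-\tfrac12\right)}
= \frac{2^{n}+1}{3} \qquad (n \text{ odd}),
```

so $(2+1)\left(2^{n-1}-2^{n-2}+\cdots+1\right)=2^n+1$, giving the divisibility by $3$.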
|
|algebra-precalculus|
| 0
|
Show that $a_n \underset{(+\infty)}{\sim} n$ with $a_n$ solution of $e^{-x}\sum_{k=0}^{n}\frac{x^k}{k!}=\frac{1}{2}$
|
Let $n$ be a natural number and $f_n$ defined as: $$ f_n (x) = e^{-x}\sum_{k=0}^{n}\frac{x^k}{k!} $$ Let $a_n$ be the unique positive solution of $f_n (x)=\frac{1}{2}$ ; I'm asked to show that $a_n \underset{(+\infty)}{\sim}n$ . What I know is that $$ \lim\limits_{n \rightarrow +\infty} f_n (n) = \lim\limits_{n \rightarrow +\infty} e^{-n}\sum_{k=0}^{n}\frac{n^k}{k!} = \lim\limits_{n \rightarrow +\infty}f_n (a_n) = \frac{1}{2} $$ which makes the result intuitive, because we have $f_n (n) \underset{(+\infty)}{\sim}f_n (a_n)$ with $f_n$ only passing through $\frac{1}{2}$ once. However, I struggle to find how to use this to show that $a_n \underset{(+\infty)}{\sim}n$ . Any hint would be appreciated.
|
Let $N_x$ denote a random variable having the Poisson distribution with rate $x$ . Then \begin{align*} f_n(x) = \mathbf{P}(N_x \leq n). \end{align*} Moreover, by realizing the family $(N_x)_{x\geq 0}$ as a Poisson point process on $[0, \infty)$ with unit intensity, it follows that $f_n(x)$ is strictly decreasing in $x$ . In light of this, it suffices to show: Lemma. We have $\lim_{n\to\infty} f_n(\alpha n) = 1$ for any $0 < \alpha < 1$ , and $\lim_{n\to\infty} f_n(\beta n) = 0$ for any $\beta > 1$ . Let's see why this implies the desired conclusion. Assuming the lemma, for any $0 < \alpha < 1 < \beta$ , we have $f_n(\alpha n) > \frac{1}{2} > f_n(\beta n)$ and thus $\alpha n < a_n < \beta n$ for any sufficiently large $n$ . So, $$ \alpha \leq \liminf_{n\to\infty} \frac{a_n}{n} \leq \limsup_{n\to\infty} \frac{a_n}{n} \leq \beta $$ holds. However, both the above liminf and limsup are independent of the choice of $\alpha$ and $\beta$ , hence the conclusion follows by letting $\alpha \uparrow 1$ and $\beta \downarrow 1$ . Proof of Lemma. By t
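A numeric illustration (my own code; the helper names `f` and `a` are mine): solve $f_n(x)=\tfrac12$ by bisection, using that $f_n$ is decreasing in $x$, and watch $a_n/n \to 1$.

```python
import math

def f(n, x):
    # e^{-x} * sum_{k<=n} x^k / k!, accumulated term by term
    term, s = math.exp(-x), 0.0
    for k in range(n + 1):
        s += term
        term *= x / (k + 1)
    return s

def a(n):
    lo, hi = 0.0, 3.0 * n + 10  # f(n, .) decreases from 1 to ~0 on [lo, hi]
    for _ in range(80):
        mid = (lo + hi) / 2
        if f(n, mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

ratios = [a(n) / n for n in (10, 100, 400)]
print(ratios)  # approaches 1
assert abs(ratios[-1] - 1) < 0.01
```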
|
|sequences-and-series|roots|
| 1
|
Question about square matrices
|
Given two invertible square matrices $A, B$ , is it always true that $$(ABA^{-1})^{-1}=AB^{-1}A^{-1}?$$ I think it's false since in matrix multiplication order matters, but I can't find a counterexample.
|
If $A$ and $B$ are both invertible square matrices, then $$(ABA^{-1})^{-1} = ((AB)A^{-1})^{-1}=(A^{-1})^{-1}(AB)^{-1}=AB^{-1}A^{-1}.$$ In fact, one can show that, given $A,B,C$ invertible matrices, $(AB)^{-1}=B^{-1}A^{-1}$ and $(ABC)^{-1}=C^{-1}B^{-1}A^{-1}$ .
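A quick $2\times 2$ numeric confirmation (my own code, not from the answer; `mul` and `inv` are my helpers):

```python
def mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):
    # 2x2 inverse via the adjugate formula
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [1.0, 1.0]]
B = [[1.0, 3.0], [0.0, 2.0]]

lhs = inv(mul(mul(A, B), inv(A)))   # (A B A^{-1})^{-1}
rhs = mul(mul(A, inv(B)), inv(A))   # A B^{-1} A^{-1}
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print("(ABA^{-1})^{-1} equals AB^{-1}A^{-1} on this example")
```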
|
|linear-algebra|matrices|inverse|
| 0
|
Question about square matrices
|
Given two invertible square matrices A, B is it always true that $$(ABA^{-1})^{-1}=AB^{-1}A^{-1}$$ I think it's false since in matrix multiplication order matters but i can't find a counterexample
|
If B is invertible then: $(ABA^{-1})^{-1} = AB^{-1}A^{-1}$ To show this is true, show that $(ABA^{-1})(AB^{-1}A^{-1}) = (AB^{-1}A^{-1})(ABA^{-1}) = I$ Since matrix multiplication is associative we can drop the parentheses. $ABA^{-1}AB^{-1}A^{-1}$ And we have an $A$ next to an $A^{-1}$ and they cancel each other out (or multiply to $I$ ) $ABB^{-1}A^{-1}$ And we again have a pair of inverses together, and after we cancel those we will still have a pair of inverses together.
|
|linear-algebra|matrices|inverse|
| 1
|
Detail in Verification of Suspension-Loop Adjunction in Infinity Category Theory
|
Let $\mathscr{C}$ be a pointed $\infty$ -category. Then, $\Sigma : \mathscr{C} \rightleftarrows \mathscr{C} : \Omega$ can be defined through $\Sigma = \operatorname{colim}( * \leftarrow X \to *)$ and $\Omega = \lim ( * \to X \leftarrow *)$ . There seems to be some part in the theory of (co-)limits and adjunctions for $\infty$ -categories that I'm not understanding well enough. I wanted to verify $\Sigma \dashv \Omega$ which caused me a lot of confusions. Initially, I thought that this would be a quick computation via \begin{align*} \operatorname{map}(\Sigma x, y) &\simeq \operatorname{map}(*, y) \times_{\operatorname{map}(x,y)} \operatorname{map}(*, y) \\ &\simeq \Omega \operatorname{map}(x,y) \\ &\simeq \operatorname{map}(x,*) \times_{\operatorname{map}(x,y)} \operatorname{map}(x,*) \\ &\simeq \operatorname{map}(x, \Omega y). \end{align*} This seems fine but to show that we get an adjunction one should really prove that $$ \operatorname{map}(\Sigma x, y) \xrightarrow{\Omega} \operator
|
I asked Dominik Kirstein about this who gave me the following hint. It actually suffices to show that we have a natural equivalence $\operatorname{map}(\Sigma x, y) \simeq \operatorname{map}(x, \Omega y)$ which we have because the chain of equivalences I gave in the OP are all natural. Once we have this, an argument similar to the $1$ -categorical one allows us to prove that such natural equivalences all must be induced by a "unit" map, which would be the image of $\mathrm{id}_{\Sigma x}$ under the above equivalence. Making the 1-categorical argument work, one has to be careful and use the $\infty$ -categorical Yoneda Lemma.
|
|algebraic-topology|category-theory|homotopy-theory|higher-category-theory|
| 0
|
Number of non-congruent quadrilaterals with vertices chosen from among seven points equally distributed on a circle
|
Seven points are equally distributed on a circle. How many non-congruent quadrilaterals can be drawn with vertices chosen from among the seven points? My approach: If there are $7$ points and we have to choose 4 points then there are $\binom{7}{4}$ ways to construct a quadrilateral. Hence, there are a total of $35$ ways in a circle to construct a quadrilateral using $7$ points. But these $35$ quadrilaterals include both congruent and non-congruent quadrilaterals. From this how can we find no of non-congruent quadrilaterals? Also, I am unclear about the concept of non-congruent and congruent quadrilaterals. So please help me understand this with your clear explanation.
|
For two quadrilaterals to be congruent, corresponding sides and angles must be equal. Because of the symmetric arrangement of points in this case, if corresponding sides are equal, angles would be too. So focusing only on corresponding sides, here's what we can do. We'll take side lengths as one more than the (smaller) number of points lying between its two ends. Meaning sides involving adjacent points have a length of $1$ unit. If a side is obtained by skipping one point, it's length is $2$ units. $^{[1]}$ Now, if $x_1$ , $x_2$ , $x_3$ , and $x_4$ are the side-lengths, we have $x_1 + x_2 + x_3 + x_4 = 7$ , $x_i$ is a natural number. $^{[2]}$ The number of unique solutions to this equation would be $$\binom{7-1}{4-1} = 20$$ Rotations of one unique type of quadrilateral correspond to $4$ solutions counted in the result we got above ( $20$ ). For example, $$(1, 2, 1, 3), \; (3, 1, 2, 1), \; (1, 3, 1, 2), \; (2, 1, 3, 1)$$ Dividing by $4$ would compensate for that. We will have $20/4 = 5$
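An enumeration check (my own code, not part of the answer) of the two counts stated above: compositions of $7$ into $4$ positive parts, and their classes under cyclic rotation.

```python
from itertools import product

# all 4-tuples of positive parts summing to 7 (each part is at most 4)
comps = [c for c in product(range(1, 5), repeat=4) if sum(c) == 7]
assert len(comps) == 20  # C(6, 3)

def orbit(c):
    # the set of cyclic rotations of a 4-tuple
    return frozenset(c[i:] + c[:i] for i in range(4))

orbits = {orbit(c) for c in comps}
print(len(orbits))  # 5 rotation classes, matching 20 / 4
```

Since $7$ is odd, no composition is fixed by a nontrivial rotation, so every orbit has size exactly $4$.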
|
|combinatorics|geometry|quadrilateral|
| 1
|
Example for proper inclusion in Lorentz space
|
The details about Lorentz spaces are given in the Wikipedia page . It is easy to prove that $L^{p,q}$ is contained in $L^{p,r}$ whenever $q < r$ . I'm trying to construct an example which shows that the inclusion is proper. I tried examples like $f=\frac{1}{x^{1/p}}\chi_{(0,1)}$ , but they are not working. Can I get a precise example for the same?
|
Try $f=\sum_{j=1}^\infty j^{-q}2^{j/p}\chi_{(0,2^{-j})}$ , or equivalently $f(x)=x^{-1/p}|\log x|^{-q}\chi_{(0,\frac12)}$ .
|
|real-analysis|functional-analysis|analysis|lebesgue-integral|
| 0
|
Questions related to Cones and Subspaces of Euclidean Space
|
Cone: A subset $ S \subseteq \mathbb{R}^n$ is a cone if $\alpha \geq 0 \implies \alpha S \subseteq S.$ Polar: The polar $K^*$ of a cone $K$ is the closed convex cone $$K^*=\{y \in \mathbb{R}^n \mid x\in K \implies (x,y) \geq 0\},$$ where $(x,y)$ denotes the standard inner product of vectors. Problem If $K=\mathbb{R}^n_+ \cap R(A^T)$ then $K^*=\mathbb{R}^n_+ +N(A)$ where $A \in \mathbb{R}^{m \times n}$ and $R(A),N(A)$ denote the range and null space of $A.$ Question How to approach this problem? I know the polar of a set is orthogonal to the original set, and $N(A)$ and $R(A^T)$ are orthogonal for any matrix. Now how do I reach the desired result? Thanks in advance. Edit: The dual of the intersection of convex cones is the sum of their duals. Since $(\mathbb{R}^n_+)^*=\mathbb{R}^n_+$ and $(R(A^T))^*=N(A)$ then $K^*=\mathbb{R}^n_+ +N(A)$ where $K=\mathbb{R}^n_+ \cap R(A^T)$ . Thanks copper.hat
|
Just for completion :-). Let $K^+ = \{ x | \langle y, x \rangle \ge 0 \text{ for all } y \in K \}$ . The key results are that if $K$ is a closed convex cone then $K^{++} = K$ and if $A \subset B$ then $B^+ \subset A^+$ . Suppose $A,B$ are closed convex cones. Then we will show that $(A \cap B)^+ = A^++B^+$ . Since $A\cap B \subset A$ we have $A^+ \subset (A \cap B)^+$ and similarly for $B$ . Since $ (A \cap B)^+$ is a convex cone, we have $A^++B^+ \subset (A \cap B)^+$ . To show the other direction, it is sufficient to show that $(A^++B^+)^+ \subset (A \cap B)^{++} = A\cap B$ . Suppose $x \in (A^++B^+)^+$ , then $\langle x, \alpha + \beta \rangle \ge 0$ for all $\alpha \in A^+, \beta \in B^+$ . Setting $\beta =0$ shows that $x \in A^{++} = A$ and similarly for $B$ which shows that $x \in A \cap B$ , the desired result. Finally, it is straightforward to show that $(\mathbb{R}^n_+)^+ = \mathbb{R}^n_+$ and $({\cal R} A^T)^+ = ({\cal R} A^T)^\bot = \ker A$ .
|
|linear-algebra|convex-cone|dual-cone|
| 0
|
Number of non-congruent quadrilaterals with vertices chosen from among seven points equally distributed on a circle
|
Seven points are equally distributed on a circle. How many non-congruent quadrilaterals can be drawn with vertices chosen from among the seven points? My approach: If there are $7$ points and we have to choose 4 points then there are $\binom{7}{4}$ ways to construct a quadrilateral. Hence, there are a total of $35$ ways in a circle to construct a quadrilateral using $7$ points. But these $35$ quadrilaterals include both congruent and non-congruent quadrilaterals. From this how can we find no of non-congruent quadrilaterals? Also, I am unclear about the concept of non-congruent and congruent quadrilaterals. So please help me understand this with your clear explanation.
|
This can be answered with Burnside's lemma, because counting quadrilaterals up to congruence is the same as counting the number of orbits of quadrilaterals under the symmetry group of the regular heptagon. For each symmetry, we must add up the total number of fixed points for that symmetry, and then divide by the number of symmetries. There are 14 symmetries, consisting of 7 rotations and 7 reflections. The trivial rotation has $\binom74=35$ fixed points. All 6 other rotations have no fixed points. Each reflection has $\binom 32=3$ fixed points: a quadrilateral with reflection symmetry is uniquely specified by choosing two of the three points on one side of the line of symmetry, together with the corresponding points on the other side. Putting this all together, the number of quadrilaterals up to congruence is $$ \frac1{14}\left(35 +\color{gray}{6\cdot0}+7\cdot 3 \right)=4. $$ Here is what the four quadrilaterals look like: Four consecutive vertices. Three consecutive vertices…
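The Burnside count can be checked by brute force: canonicalize each of the $\binom74=35$ vertex sets under the 14 symmetries of the heptagon and count distinct orbits. A short sketch (the representation of vertices as residues mod 7 is an implementation choice):

```python
from itertools import combinations

n = 7
quads = list(combinations(range(n), 4))  # all C(7,4) = 35 vertex choices

def canonical(q):
    """Smallest representative of q's orbit under the dihedral group D_7."""
    reps = []
    for r in range(n):
        reps.append(tuple(sorted((v + r) % n for v in q)))  # rotations
        reps.append(tuple(sorted((r - v) % n for v in q)))  # reflections
    return min(reps)

orbits = {canonical(q) for q in quads}
print(len(orbits))  # 4, matching the Burnside count
```

Two quadrilaterals are congruent exactly when some rotation or reflection of the heptagon carries one vertex set to the other, so counting canonical forms counts congruence classes.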
|
|combinatorics|geometry|quadrilateral|
| 0
|
Is $\mathbb{R}^3\setminus\{(x,y,z)\in\mathbb{R}^3\mid x\leq 1,y\leq1,-1\leq z\leq 1\}$ a star domain?
|
Is $\mathbb{R}^3\setminus\{(x,y,z)\in\mathbb{R}^3\mid x\leq1,y\leq1,-1\leq z\leq1\}$ a star domain? My approach: Let $N:=\{(x,y,z)\in\mathbb{R}^3\mid x\leq1,y\leq 1,-1\leq z \leq1\}$ , $M:=\mathbb{R}^3\setminus N$ , and let $a\in M$ with $a_3>1$ and $b=(0,0,1)\in N$ . We define $S_{a,b}:\mathbb{R}\to\mathbb{R}^3$ by $S_{a,b}(t):=a+(b-a)t$ . We are looking for a $t_0$ such that $a_3+(1-a_3)t_0 < -1$ . For example, $t_0:=\frac{2a_3+2}{a_3-1}$ satisfies this requirement. Hence, the vector $$ c:=S_{a,b}(t_0)=\begin{pmatrix} a_1-a_1t_0\\a_2-a_2t_0\\-a_3-2 \end{pmatrix}\in M. $$ Next we define $S_{a,c}:[0,1]\to\mathbb{R}^3$ by $S_{a,c}(t):=a+t(c-a)$ and see that $$ S_{a,c}\left(\frac{a_3-1}{2a_3+2}\right)=\begin{pmatrix} a_1\\a_2\\a_3\end{pmatrix}+\frac{a_3-1}{2a_3+2} \begin{pmatrix} -a_1\frac{2a_3+2}{a_3-1}\\-a_2\frac{2a_3+2}{a_3-1}\\-2a_3-2 \end{pmatrix}=\begin{pmatrix} 0\\0\\1 \end{pmatrix}. $$ This means that the line segment $S_{a,c}$ is not a subset of $M$ . We can make the same argument if we consider $a\in M$ with $a_3<-1$ .
|
A simpler approach might be as follows. Assume for the sake of contradiction that $a\in M$ is a center. Now find a pair of points $(0,0,1+\epsilon)\in M$ and $(0,0,-1-\epsilon)\in M$ , with $\epsilon>0$ chosen small enough depending on $a$ , such that at least one is not visible from $a$ .
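This idea can be probed numerically (a sampled membership test, so this is evidence rather than a proof; the candidate centers, the small $\epsilon$, and the step count are all assumptions of the experiment): for each candidate center $a\in M$, at least one of the two axis points is blocked by $N$.

```python
def in_N(p):
    # the removed region N = { x <= 1, y <= 1, -1 <= z <= 1 }
    x, y, z = p
    return x <= 1 and y <= 1 and -1 <= z <= 1

def visible(a, b, steps=2000):
    """Sampled check that the segment from a to b avoids N."""
    return not any(
        in_N(tuple(a[j] + (i / steps) * (b[j] - a[j]) for j in range(3)))
        for i in range(steps + 1)
    )

eps = 0.01  # must be small relative to the candidate centers below
top, bot = (0, 0, 1 + eps), (0, 0, -1 - eps)
candidates = [(5, 0, 0), (0, 5, 0), (0, 0, 5), (0, 0, -5), (3, -2, 0.5)]
for a in candidates:
    assert not in_N(a)  # a lies in M
    # at least one of the two points should be hidden from a
    print(a, visible(a, top), visible(a, bot))
```

Note that $\epsilon$ really does need to shrink with $a$: from $(5,0,0)$ the point $(0,0,1.5)$ is visible, while $(0,0,1.01)$ is not.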
|
|real-analysis|analysis|elementary-set-theory|
| 0
|
Do we have $\mathbb{P}(\Vert A \Vert >\lambda) \le \mathbb{P}(\langle Ax,y\rangle > \lambda)$?
|
Given that $\Vert A \Vert = \sup\left\{\langle Ax,y \rangle: x,y \in\mathbb{R}^n, \Vert x\Vert_2 = \Vert y\Vert_2 =1\right\},$ I wonder whether the following statement holds: $$\forall \lambda>0: \mathbb{P}(\Vert A \Vert >\lambda) \le \mathbb{P}(\langle Ax,y\rangle > \lambda).$$
|
The inequality should go the other way. (I am assuming that $x,y \in \mathbb R^n$ with $\|x\|_2 = \|y\|_2 = 1$ , as in the definition of $\|A\|$ .) If $\langle Ax,y\rangle > \lambda$ for some fixed choice of $x$ and $y$ , then this implies that $\|A\| > \lambda$ : $\|A\|$ is defined to be equal to $\langle Ax,y\rangle$ for the best choice of $x$ and $y$ depending on $A$ . So the event " $\|A\| > \lambda$ " contains the event " $\langle Ax,y\rangle > \lambda$ ", and therefore we must have $$\Pr[\|A\| > \lambda] \ge \Pr[\langle Ax,y\rangle > \lambda].$$
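The event containment can be illustrated empirically. The sketch below (an assumption-laden experiment: random Gaussian $2\times2$ matrices, the fixed unit vectors $x=e_1$, $y=e_2$, and $\lambda=1$) checks pointwise that $\langle Ax,y\rangle>\lambda$ forces $\|A\|>\lambda$, and compares the two empirical frequencies:

```python
import math
import random

def spectral_norm_2x2(A):
    """Largest singular value of a 2x2 matrix, via eigenvalues of A^T A."""
    (a11, a12), (a21, a22) = A
    a = a11*a11 + a21*a21          # (A^T A)[0][0]
    b = a11*a12 + a21*a22          # (A^T A)[0][1]
    c = a12*a12 + a22*a22          # (A^T A)[1][1]
    lam_max = (a + c) / 2 + math.sqrt(((a - c) / 2)**2 + b*b)
    return math.sqrt(lam_max)

random.seed(0)
lam = 1.0
count_form = count_norm = 0
for _ in range(10000):
    A = [[random.gauss(0, 1), random.gauss(0, 1)],
         [random.gauss(0, 1), random.gauss(0, 1)]]
    form = A[1][0]                 # <Ax, y> for x = e1, y = e2
    if form > lam:
        count_form += 1
        assert spectral_norm_2x2(A) > lam  # containment holds pointwise
    if spectral_norm_2x2(A) > lam:
        count_norm += 1
print(count_form, "<=", count_norm)
```

The in-loop assertion never fires because $|\langle Ax,y\rangle|\le\|A\|$ for unit vectors, which is exactly the containment of events.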
|
|probability|inequality|probability-distributions|
| 0
|
Is $\mathbb{R}^3\setminus\{(x,y,z)\in\mathbb{R}^3\mid x\leq 1,y\leq1,-1\leq z\leq 1\}$ a star domain?
|
Is $\mathbb{R}^3\setminus\{(x,y,z)\in\mathbb{R}^3\mid x\leq1,y\leq1,-1\leq z\leq1\}$ a star domain? My approach: Let $N:=\{(x,y,z)\in\mathbb{R}^3\mid x\leq1,y\leq 1,-1\leq z \leq1\}$ , $M:=\mathbb{R}^3\setminus N$ , and let $a\in M$ with $a_3>1$ and $b=(0,0,1)\in N$ . We define $S_{a,b}:\mathbb{R}\to\mathbb{R}^3$ by $S_{a,b}(t):=a+(b-a)t$ . We are looking for a $t_0$ such that $a_3+(1-a_3)t_0 < -1$ . For example, $t_0:=\frac{2a_3+2}{a_3-1}$ satisfies this requirement. Hence, the vector $$ c:=S_{a,b}(t_0)=\begin{pmatrix} a_1-a_1t_0\\a_2-a_2t_0\\-a_3-2 \end{pmatrix}\in M. $$ Next we define $S_{a,c}:[0,1]\to\mathbb{R}^3$ by $S_{a,c}(t):=a+t(c-a)$ and see that $$ S_{a,c}\left(\frac{a_3-1}{2a_3+2}\right)=\begin{pmatrix} a_1\\a_2\\a_3\end{pmatrix}+\frac{a_3-1}{2a_3+2} \begin{pmatrix} -a_1\frac{2a_3+2}{a_3-1}\\-a_2\frac{2a_3+2}{a_3-1}\\-2a_3-2 \end{pmatrix}=\begin{pmatrix} 0\\0\\1 \end{pmatrix}. $$ This means that the line segment $S_{a,c}$ is not a subset of $M$ . We can make the same argument if we consider $a\in M$ with $a_3<-1$ .
|
Call your set $U$ , and suppose $(x,y,z) \in U$ can be used as base point. If $z\neq 0$ , take the line that passes through the origin and $(x,y,z)$ , that is, $$f(t)=(tx,ty,tz), \quad t\in\mathbb{R}$$ As $(0,0,0)\notin U$ , this line is not contained in $U$ . Nonetheless, taking a sufficiently large negative $t_0$ , the $z$ -component of $f(t_0)$ is greater in magnitude than $1$ , and thus $f(t_0)\in U$ . For this reason, the segment joining $(x,y,z)$ and $f(t_0)$ passes through the origin and so is not contained in $U$ . If $z=0$ then take the line that passes through $(1,1,1)\notin U$ and $(x,y,z)=(x,y,0)$ . Using a similar reasoning as before you can find a point in $U$ such that the segment joining it and $(x,y,z)$ is not contained in $U$ . This argument may be generalized to an arbitrary non-star domain.
|
|real-analysis|analysis|elementary-set-theory|
| 1
|
Where does the term 'dense' used in forcing/Martin's axiom come from?
|
There are some common meanings to 'dense' in Mathematics. In Topology, a subset $S\subseteq X$ of a topological space $(X, \tau)$ is dense if the intersection of every non-empty open set with $S$ is non-empty. In linear order theory, if $(X,<)$ is linearly ordered and $S\subseteq X$ , $S$ is dense if for every $x<y$ in $X$ there is an $s\in S$ such that $x<s<y$ . In forcing or Martin's axiom, dense is defined as a subset $S\subseteq P$ of a partially ordered set $(P,\le)$ such that for all $x\in P$ there is an $s\in S$ such that $s\le x$ . The definition resembles more the definition of a filter-base (with the entire $P$ being the filter). Why is the term 'dense' used for this property? Is there a historical basis for this that predates forcing? The definition can be made a particular case of the topological one by taking as basis for a topology on $P$ sets of the form $\{y\colon y\le x\}$ for $x\in P$ . This seems contrived. The definition of dense on a linear order can also be made topological by taking the order topology.
|
The relevant space isn't $P$ itself, but rather the space $\mathsf{MF}(P)$ of maximal filters through $P$ with topology generated by sets of the form $$U_p:=\{H: p\in H\}$$ for $p\in P$ . For example, if we take $P$ to be $2^{<\omega}$ (an inessential variant of Cohen forcing), then $\mathsf{MF}(P)$ is just Cantor space $2^\omega$ with the usual topology. A set of conditions $D\subseteq P$ is dense in the forcing sense iff the corresponding set $$\bigcup_{p\in D}U_p$$ is dense - indeed, dense open - in the topological sense. And this is not contrived at all; this is the translation which makes clear the relationship between forcing and the Baire category theorem (see also this answer of mine ).
|
|general-topology|logic|set-theory|terminology|forcing|
| 1
|
Proof that $\cos(x) = \cos\left(\frac{\pi}{2}\right)\implies x = \frac{\pi}{2} + k\pi$
|
I know that $$\cos(x) = \cos\left(\frac{\pi}{2}\right)$$ yields $$(1)\qquad \qquad \qquad x = \frac{\pi}2 + 2k\pi$$ or $$(2)\qquad \qquad \qquad x = -\frac{\pi}2 + 2k\pi$$ which (as I found out in a textbook) is the same as $$x = \frac{\pi}{2} + k\pi$$ My question is how (1) and (2) imply the last result; I would like an algebraic proof or demonstration. I know it's obvious, but I couldn't find a rigorous demonstration by myself of why it's the same thing.
|
Let us write $x_1 = \frac{\pi}2 + 2k_1\pi$ , $x_2 = -\frac{\pi}2 + 2k_2\pi$ , $x_k = \frac{\pi}{2} + k\pi$ . Then we need to prove that $\forall k_1 \; \exists k$ such that $$x_k =x_1 \iff \frac{\pi}{2} + k\pi = \frac{\pi}2 + 2k_1\pi \iff k=2k_1$$ and that $\forall k_2 \;\exists k$ such that $$x_k =x_2 \iff \frac{\pi}{2} + k\pi = -\frac{\pi}2 + 2k_2\pi \iff k=2k_2-1$$ and since $\forall k$ we have $\cos x_k=\cos \frac \pi 2$ , we are done.
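The set equality can also be checked exactly by working in integer units of $\pi/2$, so that $x=\frac\pi2+2k\pi$ becomes the integer $1+4k$, and so on (the finite window below is just to compare the truncated enumerations fairly):

```python
# work in integer units of pi/2 so set equality is exact
K = range(-50, 50)
set1 = {1 + 4*k for k in K}    # x =  pi/2 + 2k*pi
set2 = {-1 + 4*k for k in K}   # x = -pi/2 + 2k*pi
set3 = {1 + 2*k for k in K}    # x =  pi/2 + k*pi

window = set(range(-90, 90))   # window where all three enumerations are complete
lhs = (set1 | set2) & window
rhs = set3 & window
print(lhs == rhs)  # True: (1) together with (2) is exactly the last result
```

In these units, (1) is the residue class $1 \bmod 4$, (2) is $3 \bmod 4$, and their union is precisely the odd integers, i.e. $1 \bmod 2$, which is the last result.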
|
|trigonometry|proof-writing|
| 0
|
In a family with two children, what are the chances, if one of the children is a girl, that both children are girls?
|
In a family with two children, what are the chances, if one of the children is a girl, that both children are girls? I just dipped into a book, The Drunkard's Walk - How Randomness Rules Our Lives , by Leonard Mlodinow, Vintage Books, 2008. On p.107 Mlodinow says the chances are 1 in 3. It seems obvious to me that the chances are 1 in 2. Am I correct? Is this not exactly analogous to having a bowl with an infinite number of marbles, half black and half red? Without looking I draw out a black marble. The probability of the second marble I draw being black is 1/2.
|
Given the family has two kids , and implicitly adopting binary biological sex (each kid is either a boy (B) or a girl (G)), the sample space is $$\Omega=\{(G,G), (G,B), (B,G), (B,B) \}$$ with the following probability measure: $$\mathbb P ((G,G))=p^2, \mathbb P ((G,B))=\mathbb P ((B,G))=p(1-p), \mathbb P ((B,B))=(1-p)^2$$ where $\color{blue}{p}$ denotes the probability that a kid is a girl ( implicitly assuming the sexes of the kids are independent of each other). After being informed that one of the two kids is a girl, i.e., the event $G_1=\{(G,B), (B,G), (G,G) \}$ , the conditional probability that the other is also a girl, i.e., the event $G_2=\{(G,G) \}$ , is given by $$\mathbb P (G_2 | G_1)=\color{blue}{\frac{p^2}{1-(1-p)^2}}.$$ The parameter $\color{blue}{p}$ needs to be estimated based on some data set. Assume the family lives in Turkey (you may consider other countries); from this Turkish official website , we have: According to birth statistics, the number of babies born alive in 2020 was
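For the symmetric case $p=1/2$ the formula gives $\frac{1/4}{3/4}=\frac13$, the "1 in 3" from the book. A quick sketch evaluating the formula and cross-checking it with a simulation (the trial count and seed are arbitrary choices):

```python
import random

def cond_prob(p):
    # P(both girls | at least one girl) = p^2 / (1 - (1-p)^2)
    return p * p / (1 - (1 - p)**2)

print(cond_prob(0.5))  # 1/3 for the symmetric case

random.seed(1)
both = at_least_one = 0
for _ in range(200000):
    kids = [random.random() < 0.5 for _ in range(2)]  # True = girl
    if any(kids):
        at_least_one += 1
        if all(kids):
            both += 1
print(both / at_least_one)  # close to 1/3
```

The simulation conditions by discarding the (B,B) families, which is exactly what restricting to the event $G_1$ does.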
|
|probability|faq|
| 0
|
Nomenclature: Collection of sets in which each set has some element unique to it
|
A collection $\mathcal{X}$ of sets may have the property that each member of the collection has some element unique to it, more precisely, that for each element $A$ of $\mathcal{X}$ , for some object $x$ , for every element $X$ of $\mathcal{X}$ , $x \in X$ if and only if $X = A$ (equivalently, there is no element $A$ of $\mathcal{X}$ such that $A \subset \bigcup (\mathcal{X} - \{A\})$ ). Is there some recognized name for this property? One reason that there might be is that it is a necessary and sufficient condition for the mapping $u \colon \mathcal{P}(\mathcal{X}) \to \mathcal{P}(\bigcup \mathcal{X})$ , $u \colon A \mapsto \bigcup A$ to be an injection.
|
Such collections are known as minimal covers .
|
|elementary-set-theory|terminology|
| 1
|
Proving a Maurer-Cartan type equation without differential forms
|
I found this very nice theorem in the book by Onishchik, "Lie groups and Lie algebras I". Proposition 2.9. Let $g(t,s)\in G$ be a smooth map from some domain in $\mathbb{R}^2$ into a Lie group $G$ . Define the corresponding maps into the Lie algebra $\mathfrak{g}$ of $G$ by $$ \xi(t,s)=\left(R_{g}^{-1}\right)_\ast\partial_t g,\quad \eta(t,s)=\left(R_{g}^{-1}\right)_\ast\partial_s g. $$ Then $$ \partial_t \eta - \partial_s\xi=[\xi,\eta].$$ I'm familiar with this kind of formula from the theory of integrable systems, where it's called a "zero curvature condition". Obviously it also follows from the Maurer-Cartan equation by pulling back along $g$ . Anyway, the proof of this is elementary in matrix groups because you just differentiate the two equations defining $\xi$ and $\eta$ with respect to the other variable and then notice that $\partial_t\partial_sg=\partial_s\partial_tg$ . This is basically what the book does by using local coordinates. What I'm struggling with is writing down a coordinate-free proof.
|
This might not be what you want, but I don't think you can prove the Maurer-Cartan equation without referencing differential forms, because the main object in the Maurer-Cartan equation is the Maurer-Cartan form, which is itself a differential form. What I can do is provide a coordinate-free proof of the Maurer-Cartan equation. Define the Maurer-Cartan form on a Lie group $G$ pointwise by: \begin{align} \mu_G(X_g)=D_gL_{g^{-1}}(X_g) \end{align} and let $M\times G$ be the trivial principal bundle, with trivial connection/horizontal distribution $H=\pi_M^*TM$ . Then clearly the connection one form inducing this connection is given by $\pi_G^*\mu_G$ . The curvature vanishes identically as: $$F(X,Y)=d\mu_G(\pi_{G*}\circ \pi_{H} X,\pi_{G*}\circ\pi_{H}X)=d\mu_G(0,0)=0$$ since $\pi_{G*}\circ \pi_{H}$ is identically zero, where $\pi_H$ is the projection $T(M\times G)\rightarrow \pi_M^*TM$ . By the structure equation (which you can prove in a coordinate-invariant way easily), we have that:
|
|differential-geometry|lie-groups|
| 0
|
Is this conditional probability always equal to 1?
|
Consider $X = \text{number of defective items in bought items}$ . Is the probability that $X \geq a$ given $X = a$ , always 1: $P(X \geq a| X = a)=1$ . I was wondering if the above holds because $A$ (event that $X \geq a$ ) will always occur if $B$ (event that $X = a$ ) occurs, therefore, making the probability 1. If not why not? Any thoughts? Thanks in advance.
|
The event $X \ge a$ is a superset of $X = a$ ; i.e., $$(X = a) \subseteq (X \ge a).$$ Therefore, $$\Pr[X \ge a \mid X = a] = \frac{\Pr[(X \ge a) \cap (X = a)]}{\Pr[X = a]} = \frac{\Pr[X = a]}{\Pr[X = a]} = 1$$ whenever $\Pr[X = a] > 0$ .
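The containment argument can be made concrete with a tiny simulation (the binomial model, its parameters, and the seed are illustrative choices, not part of the question): among all samples with $X=a$, the fraction satisfying $X\ge a$ is exactly 1.

```python
import random

random.seed(0)
n, p, a = 10, 0.3, 3
samples = [sum(random.random() < p for _ in range(n)) for _ in range(20000)]
given = [x for x in samples if x == a]           # condition on X = a
ratio = sum(x >= a for x in given) / len(given)  # P(X >= a | X = a), empirically
print(len(given), ratio)  # ratio is exactly 1.0
```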
|
|probability|statistics|conditional-probability|bayes-theorem|
| 1
|
Question about Lagrange multipliers and combining constraints
|
Hi, I have a question about Lagrange multipliers, specifically when 2 constraints are given. The standard answer to this question uses the Lagrangian with the 2 constraints and 2 extra variables ($\lambda_1$ and $\lambda_2$). I understand this method completely fine, but when I first attempted this problem, I tried combining the 2 given constraints into one constraint (i.e. putting $x=2z+3$ into the other constraint) and then using Lagrange multipliers with only 1 constraint. However, this gave a completely different result! (The standard solution was $\sqrt{2}$ but I got 1 with my method.) Can anyone explain why these 2 methods give different results?
|
There must be an error in your implementation of the 2nd method (since the ellipse you get by combining the two constraints does not have any point at distance 1 from the origin, whereas you claim to have found one!) It is not completely clear from your question what you did. Going to only one constraint would logically require reducing the problem to two coordinates only, and from what you write you seem to have eliminated $x$ , which leaves you with the equation in the $yz$ -plane: $$ y^2+4(z+3/2)^2=1 $$ Still the function to be minimized remains the distance to the origin in the original problem, so in 3 dimensions! If you miss that, then indeed the point $$(y,z)=(0,-1)$$ in your restricted space gives you distance 1, so maybe that's what you have done. But the distance should be computed by (implicitly) including the $x$ -coordinate again, which will extend the point to $(2z+3,y,z)$ , so the quadratic form to minimize now isn't $y^2+z^2$ but: $$y^2+5z^2+12z+9$$ This (accidentally?) d
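A numeric sketch of the corrected reduction. Assuming (my reading of the problem, inferred from the answer above) that the combined constraint is the ellipse $y^2+4(z+3/2)^2=1$ in the $yz$-plane with $x=2z+3$, minimizing the true 3-D distance over a parametrization of the ellipse recovers $\sqrt2$:

```python
import math

# parametrize the ellipse: y = cos t, z = -3/2 + (1/2) sin t, and restore x = 2z + 3
best = float("inf")
steps = 100000
for k in range(steps + 1):
    t = 2 * math.pi * k / steps
    y = math.cos(t)
    z = -1.5 + 0.5 * math.sin(t)
    x = 2 * z + 3
    best = min(best, math.sqrt(x*x + y*y + z*z))
print(best)  # ~= sqrt(2) ~= 1.41421
```

Substituting the constraint into the quadratic form gives $y^2+5z^2+12z+9 = z^2+1$ on the ellipse, minimized at $z=-1$, which is why the grid search lands on $\sqrt2$.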
|
|multivariable-calculus|optimization|systems-of-equations|lagrange-multiplier|constraints|
| 0
|
on the product of any two ideals
|
I was reading an algebra textbook by Siegfried Bosch, and I read that given two ideals $\mathfrak a$ and $\mathfrak b$ of a certain ring $R$ , we can construct a new ideal $$\mathfrak a \cdot \mathfrak b = \Big\{\sum_{i=1}^{n} a_ib_i \mid a_i\in\mathfrak a,\ b_i\in\mathfrak b,\ n\in\mathbb N\Big\}.$$ So my question is: why was the sum $\sum$ inserted?
|
In groups, we would define the product of two normal subgroups $H$ and $K$ something like $$H\cdot K =\{h\cdot k : h\in H, k\in K\}$$ However, there's a constraint in rings: we need ideals to be additive subgroups and in particular, closed under addition. What more elements should we include in the set $P=\{ab:a\in \mathfrak a, b\in\mathfrak b\}$ so that it becomes closed under addition? You need the additive subgroup spanned by elements of $P$ . Let me give an example to state what I mean. What's the additive subgroup of $\mathbb R$ spanned by the set $\{1,\pi\}$ ? You must force it to be closed under addition: $\{m+n\pi: m,n\in\mathbb Z\}$ . It was quite simple to find the additive span of two elements. What would be your approach for multiple elements or even, possibly, infinitely many elements? There can be arbitrarily many elements in $P$ . We pick finitely many elements from $P$ in all possible ways and take their sum: $\mathfrak{a\cdot b}=\langle P\rangle_+ =\{\sum_{i=1}^n p_i : n\in\mathbb N,\ p_i\in P\}$ .
|
|abstract-algebra|
| 1
|
Question about a group defined on rings
|
Let $R$ be a commutative ring and $f : R \to R^{\times}$ such that $f(x+y) = f(x)f(y)$ for all $x,y \in R.$ Define the following group $G =R \times R$ such that $$(x,y)(x', y') = (x+x', y + f(x)y').$$ This group is clearly a non abelian group in general. So my hunch is that this group can be embedded in a matrix group for nice enough $f$ because of the following similarity: Let $R$ be a field and $f$ be a surjective map onto $R^{\times}$ ; then the elements of the form $$\begin{pmatrix} f(x) & y\\ 0 & 1\\ \end{pmatrix}$$ form a group under matrix multiplication, which is similar to the above group. The problem with doing the same for $G$ is that in the definition of the group operation there is an $f$ occurring, which looks difficult to incorporate into the matrix multiplication. What do we know about the group $G$ ? (I am not sure if this group is a particular case of some well known group.)
|
First off, there's a general notion of semidirect product into which your group fits. Given a group $G$ , and an action of $G$ on another group $A$ given by a homomorphism $\phi: G \to \mathrm{Aut}(A)$ , one can construct the semidirect product $G \ltimes A$ whose underlying set is the same as that for $G \times A$ , and where elements are multiplied according to the rule $$(g, a) \cdot (h, b) = (gh, a\phi(g)(b)).$$ In your example, both $G$ and $A$ are given by the underlying additive group of $R$ , and $\phi: R \to \mathrm{Aut}(R)$ takes $x$ to the automorphism $y \mapsto f(x)y$ . There's an awful lot of literature on this construction, but I'll just say a couple of things. (1) The group of affine transformations $f: R \to R$ , where elements of the group are functions of the form $f(x) = mx + b$ for $m \in R^\times$ and where the group multiplication is given by functional composition, is a famous example which can be generalized considerably. (2) Semidirect products naturally give
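In fact, the matrix picture from the question does work for $G$ itself: $(x,y)\mapsto\begin{pmatrix} f(x) & y\\ 0 & 1\end{pmatrix}$ is a homomorphism precisely because $f(x+x')=f(x)f(x')$. A small sketch checking this numerically (taking $R=\mathbb R$ and $f(x)=e^x$ as one concrete choice of such an $f$):

```python
import math

def f(x):
    return math.exp(x)  # satisfies f(x + y) = f(x) f(y)

def group_mul(g, h):
    # the semidirect-product law: (x, y)(x', y') = (x + x', y + f(x) y')
    (x, y), (xp, yp) = g, h
    return (x + xp, y + f(x) * yp)

def embed(g):
    # g = (x, y)  ->  [[f(x), y], [0, 1]]
    x, y = g
    return [[f(x), y], [0.0, 1.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

g, h = (0.3, -1.2), (-0.7, 2.5)
lhs = embed(group_mul(g, h))
rhs = mat_mul(embed(g), embed(h))
print(lhs)
print(rhs)  # matches up to floating-point error
```

Multiplying the matrices gives $\begin{pmatrix} f(x)f(x') & f(x)y'+y\\ 0 & 1\end{pmatrix}$, which is exactly the embedding of $(x+x',\, y+f(x)y')$.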
|
|abstract-algebra|group-theory|ring-theory|
| 0
|
Question about Lagrange multipliers and combining constraints
|
Hi, I have a question about Lagrange multipliers, specifically when 2 constraints are given. The standard answer to this question uses the Lagrangian with the 2 constraints and 2 extra variables ($\lambda_1$ and $\lambda_2$). I understand this method completely fine, but when I first attempted this problem, I tried combining the 2 given constraints into one constraint (i.e. putting $x=2z+3$ into the other constraint) and then using Lagrange multipliers with only 1 constraint. However, this gave a completely different result! (The standard solution was $\sqrt{2}$ but I got 1 with my method.) Can anyone explain why these 2 methods give different results?
|
Notice that replacing two constraints with one is not valid. Just to make this as straightforward as possible, consider the system of linear equations $x+y=0$ and $x-y=0$ . The only solution, of course, is $x=y=0$ . However, if we add the equations, eliminating $y$ , we obtain $2x=0$ , which is an entire line. Obviously, this alone is not enough information. When you combined your two constraints, you went from a curve of eligible points to a surface of eligible points (a cylinder over an ellipse in the $yz$ -plane). This allows too many eligible points.
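The linear example can be made concrete on a small integer grid (the grid bounds are arbitrary; they just make the sets finite):

```python
# brute-force solution sets on an integer grid
grid = range(-3, 4)
both = {(x, y) for x in grid for y in grid if x + y == 0 and x - y == 0}
combined = {(x, y) for x in grid for y in grid if 2 * x == 0}  # sum of the two equations
print(both)              # only the origin satisfies both equations
print(combined)          # the whole line x = 0 satisfies their sum
print(both < combined)   # True: strictly more "eligible points"
```

The strict containment is the whole point: any optimum over the combined constraint that lies off the original curve is spurious.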
|
|multivariable-calculus|optimization|systems-of-equations|lagrange-multiplier|constraints|
| 1
|
Prove that for the given enumeration of rationals and a sequence of positive reals that sum to $1$, $h$ is Riemann integrable and determine integral
|
Here's the question: Let $\left\{q_1, q_2, \ldots\right\}$ be an enumeration of all rational numbers in the interval $[0,1]$ . This means that $q_k$ 's are rational numbers in $[0,1]$ and that every rational number $q$ in $[0,1]$ is equal to exactly one $q_k$ . Now for any $x \in[0,1]$ , let $S_x=\left\{k: q_k \leq x\right\}$ . Fix a sequence $\left\{p_n\right\}_{n \geq 1}$ of strictly positive real numbers such that $\sum_n p_n=1$ . Define a function $h:[0,1] \rightarrow[0,1]$ by $h(x)=\sum_{k \in S_x} p_k$ . In other words, $h(x)$ is the sum of $p_k$ 's where corresponding $q_k$ 's are in $[0, x]$ . (a) Show that $h$ is continuous at all irrational points and is discontinuous at all rational points. In particular, $h$ has infinitely many discontinuity points. (b) Show that $h$ is Riemann integrable. (c) Compute $\int_0^1 h$ . I'm done with the first two parts, the first following from the jump at rationals and the second following from the monotone nature of the function $h$ . I need hints.
|
Let $P_n$ be the partition including $0,1$ and $q_1,...,q_n$ . We denote this partition by $P_n=\{t_0=0,t_1,\dots,t_n,t_{n+1}=1\}$ . Moreover, let $s_i$ be the term in $\sum p_k$ corresponding to $t_i$ for $1\leq i\leq n$ . Observe that $h(t_i)\geq \sum_{j=1}^is_j$ and $h(t_i)+\sum_{j=i}^ns_j\leq 1$ $\implies h(t_i) \leq 1-\sum_{j=i}^ns_j$ We can now bound the upper and lower Riemann sums. $$L(h,P_n)\geq \sum_{i=1}^nh(t_i)(t_{i+1}-t_i) $$ $$\geq \sum_{i=1}^n\left((t_{i+1}-t_i)\sum_{j=1}^i s_j\right)=\sum_{i=1}^n(s_i-t_is_i)$$ This last sum is just a rearrangement of $\sum_{k=1}^n(p_k-p_kq_k)$ . $$U(h,P_n)\leq \sum_{i=0}^nh(t_{i+1})(t_{i+1}-t_i)$$ $$\leq \sum_{i=0}^n(t_{i+1}-t_i)(1-\sum_{j=i+1}^ns_j)$$ $$=1- \sum_{i=0}^n\left((t_{i+1}-t_i)\sum_{j=i+1}^ns_j\right)$$ $$ = 1-\sum_{i=1}^ns_it_i=1-\sum_{k=1}^np_kq_k.$$ Taking limits as $n\to\infty$ , we find the desired integral to be $$\boxed{1-\sum p_kq_k}.$$
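The value $1-\sum p_kq_k$ can be sanity-checked numerically with concrete choices (both are assumptions of the experiment, since the problem allows any enumeration and any summable positive sequence): enumerate the rationals by increasing denominator and take $p_k=2^{-k}$, then compare a midpoint Riemann sum of $h$ against the formula.

```python
from fractions import Fraction

# enumerate rationals in [0, 1] by increasing denominator, deduplicated
seen, qs = set(), []
d = 1
while len(qs) < 200:
    for num in range(d + 1):
        q = Fraction(num, d)
        if q not in seen:
            seen.add(q)
            qs.append(q)
    d += 1
qs = [float(q) for q in qs[:200]]
ps = [2.0**-(k + 1) for k in range(len(qs))]   # p_k = 2^{-k}, sums to ~1

def h(x):
    return sum(p for p, q in zip(ps, qs) if q <= x)

m = 4000                                       # midpoint Riemann sum for the integral of h
riemann = sum(h((i + 0.5) / m) for i in range(m)) / m
predicted = sum(ps) - sum(p * q for p, q in zip(ps, qs))
print(riemann, predicted)  # agree to within the discretization error
```

For the truncated $h$ the integral is exactly $\sum_k p_k(1-q_k)$, and since $h$ is monotone the midpoint sum is within $(h(1)-h(0))/m$ of it, so the two printed numbers match to a few decimal places.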
|
|real-analysis|integration|riemann-integration|
| 1
|