title
string
question_body
string
answer_body
string
tags
string
accepted
int64
how to find p-th primitive root of unity in GF(2^m)
The definition of quadratic residue codes involves finding a primitive p-th root of unity in some finite extension field of $GF(2)$ . 2 is a quadratic residue of prime $p$ . By brute force search I found solutions for $p=7, \zeta=Z(2^3)$ and $p=23,\zeta=Z(2^{11})^{89}$ ; $Z(2^m)$ is a primitive element in $GF(2^m)$ . My question is how is this solved in general? In particular, what is the solution for $p=47$ ?
This question was answered to some satisfaction in the comment by Jyrki Lahtonen, as written by the poster in another comment. From the comments: "In every group, if an element $a$ has order $n$, then $a^k$ has order $n / \text{gcd}(n,k)$. If $\zeta$ is a primitive element of the field $GF(2^{23})$ it has order $2^{23}-1$. As $$\frac{2^{23}-1}{47} = 178481,$$ we conclude that $\zeta^{178481}$ is a root of unity of order 47." "See for example this old thread and others linked to it." "Compare: The exponent $89$ is similarly gotten as $(2^{11}-1)/23$."
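The comment's recipe can be sketched in a few lines of Python (my illustration, not from the thread): the degree $m$ is the multiplicative order of $2$ modulo $p$, and the exponent is $(2^m-1)/p$, by the order formula $\operatorname{ord}(a^k) = \operatorname{ord}(a)/\gcd(\operatorname{ord}(a),k)$.

```python
def root_of_unity_exponent(p):
    """For an odd prime p with p | 2^m - 1, return (m, e) such that
    zeta = Z(2^m)^e has order p, where Z(2^m) is a primitive element."""
    # m = multiplicative order of 2 modulo p
    m = 1
    while pow(2, m, p) != 1:
        m += 1
    # a primitive element has order 2^m - 1, so raise it to (2^m - 1)/p
    return m, (2**m - 1) // p

# Reproduces the cases from the question:
# p = 7  -> (3, 1)       i.e. zeta = Z(2^3)
# p = 23 -> (11, 89)     i.e. zeta = Z(2^11)^89
# p = 47 -> (23, 178481) i.e. zeta = Z(2^23)^178481
```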
|finite-fields|coding-theory|roots-of-unity|quadratic-residues|
1
Difference between infinite union and arbitrary union of closed/open sets
I am trying to understand the following idea from a first course in real analysis about the topology of open and closed sets in a metric space $(X, d)$ . Take the set $K_n = \left[-1 + \dfrac{1}{n}, 1 - \dfrac{1}{n}\right]$ . For $n=1$ we have $\{0\}$, for $n=2$ we have $\left[\dfrac{-1}{2}, \dfrac{1}{2}\right]$ , for $n=3$ we have $\left[\dfrac{-2}{3}, \dfrac{2}{3}\right]$ , with each of the above sets being closed. However, for $\bigcap^{\infty}_{n=1} K_n$ we have $\{0\}$, which is open because any ball of radius $r>0$ will include other points not equal to $0$, so the ball is not contained in the set $\{0\}$, so the infinite intersection is open. But then we have the theorem that says if $A_{\gamma}$ is closed, $\bigcap\limits_{\gamma \in I}A_{\gamma}$ is closed. Maybe I am misunderstanding the difference between "arbitrary intersection of an infinite family" and "infinite intersection of indexed sets," but if I have the intersection of all the $\gamma \in I$ , and $I$ is an infinite set such as $\mathbb{N}$, shouldn't the theorem apply?
Maybe to write less in the comments, in a place where it is easier to edit, I'll write some facts here. In any metric space $(X,d)$ , a singleton $\{ x\}$ is closed. It is usually not an open set. If it is open, then there exists $r>0$ such that $\{ y\in X: d(x,y) < r\} = \{x\}$. An infinite union of closed sets is usually not open. It is true that a general union of open sets is open. If I am looking at the correct version of the theorem you are referring to, then you are misquoting it. An open subset may also be closed if the space is not connected. While $\mathbb{R}$ is connected, you do not seem to rely on it explicitly for your "contradiction". So I think it is possible that you assume closed sets are not open and that open sets are not closed.
|real-analysis|general-topology|metric-spaces|set-theory|
0
How to identify the decoding result is error?
Let $(15,11,3)_2$ be a linear code and the parity check matrix is $$\begin{pmatrix}1&0&1&0&1&0&1&0&1&0&1&0&1&0&1\\0&1&1&0&0&1&1&0&0&1&1&0&0&1&1\\0&0&0&1&1&1&1&0&0&0&0&1&1&1&1\\0&0&0&0&0&0&0&1&1&1&1&1&1&1&1\\\end{pmatrix}.$$ Suppose the sent codeword is $(000111100001111)$ , and the error is $(111000000000000)$ (the weight is 3). Then the received codeword is $(111111100001111)$ . It is easy to check that syndrome decoding using the nearest codeword principle will identify it as a codeword, which is clearly wrong. How to handle this case? I know that if the weight of the error is $t \leq (d-1)/2$ , the decoder runs correctly.
This question was answered to some satisfaction in the comment by Jyrki Lahtonen , as written by the poster in another comment. From the comments: "The short answer is that you cannot handle this at all without an extra layer somewhere. Say, if the information in this block is a part of a longer stream of data, protected by a CRC or some such, then you can tell that something went wrong. But still you would not know what. The point of error-correction is to reduce the probability of an incorrect interpretation. It cannot make that probability exactly zero. When designing a system you first agree on a certain Quality Of Service, and then find FEC parameters that deliver what is wanted." " FEC = (Forward Error Correction) . I equate it with the use of an error-correcting code, but FEC is the term telecom engineers use. I guess there is room for differences (or other methods of error control). QoS is another acronym the engineers like. To give an example I am somewhat familiar with. When de
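The failure in the question can be checked directly (a sketch mirroring the example, not from the thread): the weight-3 error happens to move one codeword onto another, so the syndrome of the received word is zero and the decoder sees nothing wrong.

```python
# Parity check matrix of the (15,11,3) Hamming code, as in the question
H = [
    [1,0,1,0,1,0,1,0,1,0,1,0,1,0,1],
    [0,1,1,0,0,1,1,0,0,1,1,0,0,1,1],
    [0,0,0,1,1,1,1,0,0,0,0,1,1,1,1],
    [0,0,0,0,0,0,0,1,1,1,1,1,1,1,1],
]

def syndrome(word):
    # H * word^T over GF(2)
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

sent  = [int(b) for b in "000111100001111"]
error = [int(b) for b in "111000000000000"]
recv  = [(s + e) % 2 for s, e in zip(sent, error)]

# Both the sent and the received word have zero syndrome: the received word
# is itself a (different) codeword, so the error is undetectable.
```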
|linear-algebra|coding-theory|
0
Duplication code - error correction
Explain how the duplication code given by $\{00,11,22\}$ with letters in the set $\mathbb{Z}/3$ can detect one error. If I were to construct a table of Hamming distances, I would show that the minimal distance is 2, and thus the error detection is 1, because $C$ is $n$-error-detecting iff the minimal distance is $\geq n+1$. Is this correct?
This question was answered in the comments by Snowball and Jyrki Lahtonen , both stating that the argument in the post is correct. Additionally, Jyrki Lahtonen offered a more intuitive way of interpreting the solution. From the comments: "That is correct. But the argument If a single error happened, then the components of the received vector are distinct. Hence that is not a codeword and an error has occurred may be conceptually easier. Your call :-)"
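The distance table the poster describes is tiny and can be computed directly (an illustration, not from the thread):

```python
from itertools import combinations

# The duplication code over Z/3, written as pairs of symbols
code = [(0, 0), (1, 1), (2, 2)]

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# Minimum distance over all pairs of distinct codewords
d_min = min(hamming(u, v) for u, v in combinations(code, 2))
# d_min == 2, so the code detects d_min - 1 = 1 error
```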
|coding-theory|
0
Searching for useful ternary codes
I'm trying to construct a $[108;60;16]_3$ code through concatenation, but I can't find a ternary code that'd help me with it. Hamming codes are binary, Golay's codes have distances of 5 and 6 which are not divisors of 16, the only Reed-Solomon code that is useful here is $[3;2;2]_3$ , but it needs to use $[36;30;8]_9$ as an outer code which is even harder to construct. What ternary codes could be helpful here? Thanks in advance.
This question was answered to some satisfaction in the comment by Jyrki Lahtonen , as written by the poster in another comment. From the comments: " $[27;20;8]_{27}$ (Reed-Solomon) and a single ternary parity check?"
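The suggested construction checks out arithmetically (a sketch of the parameter count, not from the thread): take the outer Reed-Solomon code $[27,20,8]$ over $GF(27)$, write each $GF(27)$ symbol as 3 trits, and append one ternary parity trit, i.e. use the inner code $[4,3,2]_3$.

```python
# Outer Reed-Solomon code over GF(27) and inner ternary single parity check
n_out, k_out, d_out = 27, 20, 8   # [27,20,8]_27
n_in,  k_in,  d_in  = 4, 3, 2     # [4,3,2]_3 (3 trits + 1 parity trit)

# Concatenated code parameters: lengths and dimensions multiply, and the
# distance of a concatenation is at least the product of the distances.
n, k, d = n_out * n_in, k_out * k_in, d_out * d_in
# (n, k, d) == (108, 60, 16), exactly the target [108;60;16]_3
```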
|coding-theory|
0
Parity check matrix to generator matrix
Does anyone know how to go from a parity check matrix $H$ to a generator matrix $G$ (with $GH^T = 0$) without having either in standard form? Meaning that the order of rows/columns stays the same (i.e. the 'bits' stay in the same place, as opposed to putting the matrix in standard form, because this might interchange some columns). (This might be simple but it's not clear to me how to do it.) EDIT: Is this $G$ just the orthogonal complement of $H$ ? Thanks :)
This question was answered to some satisfaction in the comment by Andreas Lenz , as written by the poster in another comment. From the comments: "You may work with the reduced row echelon form instead of the standard form. This avoids interchanging columns. See for example https://dept.math.lsa.umich.edu/~hochster/419/ker.im.html"
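The comment's suggestion can be sketched concretely (my own implementation, not from the linked page): row-reduce $H$ to reduced row echelon form over $GF(2)$ and read off a basis of its null space; columns are never swapped, so bit positions are preserved.

```python
def generator_from_parity_check(H):
    """Return rows of a generator matrix G with G H^T = 0 over GF(2),
    computed from the RREF of H without any column permutation."""
    H = [row[:] for row in H]          # work on a copy
    rows, cols = len(H), len(H[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if H[i][c]), None)
        if piv is None:
            continue                   # free column
        H[r], H[piv] = H[piv], H[r]
        for i in range(rows):          # clear column c everywhere else
            if i != r and H[i][c]:
                H[i] = [(a + b) % 2 for a, b in zip(H[i], H[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(cols) if c not in pivots]
    G = []
    for f in free:                     # one basis vector per free column
        v = [0] * cols
        v[f] = 1
        for i, p in enumerate(pivots):
            v[p] = H[i][f]             # row i of RREF forces v[p] = H[i][f] mod 2
        G.append(v)
    return G

# Example with a hypothetical H for the [7,4] Hamming code:
H = [[1,0,1,0,1,0,1],
     [0,1,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
G = generator_from_parity_check(H)
# every row g of G satisfies H g^T = 0
```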
|coding-theory|
0
Proving that any polynomial in a field $K$ can be "expanded" about any point in $K$
This post concerns the subsection "Division of Polynomials", pages 76--77, in the textbook Analysis I by Amann and Escher. I would appreciate feedback on my attempted proof of Proposition 8.16. Excerpt from text: Further context: My attempt: We proceed by induction on $n \in \mathbb N$ , the degree of the polynomial $p \in K[X]$ . Base case: Assume $\deg (p) = 0$ . That means $p$ is a constant polynomial $p = b_0$ , so there is certainly a unique $b_0$ such that $p = \sum_{k = 0}^0 (X - a)^k b_k = b_0$ . Induction step: Assume the result is true for all polynomials of degree $n$ . Consider $p \in K[X]$ such that $\deg (p) = n + 1$ . Because $\deg (X - a) = 1$ , we can use Proposition 8.15 and divide $p$ by $X - a$ . There are therefore unique polynomials $p_{(1)}$ and $b_0'$ such that $p = (X - a)p_{(1)} + b_0'$ . We see that $\deg(b_0') = 0$ , and because we are in a field, we must have $\deg \big( (X - a)p_{(1)} \big) = \deg (X - a) + \deg (p_{(1)})$ , which implies that $\deg(p_{(1)}) = n$ .
Thanks to both you and Bill Dubuque for this very nice solution. Just two small notes for me (as I now arrive here while reading Amann Escher): (1) In the induction step saying "We see that $\deg(b_0') = 0$ ..." is not strictly speaking correct, since all we have is that $\deg(b_0') < 1$ (the zero polynomial for example does not obey your claim), but this is immaterial as either way we obtain a constant power series (isomorphic to $b_0 \in K$ per the bottom of p.72) as you have done. (2) There is (at least to my taste) the tiniest bit extra that needs to be shown at the end of your existence argument. In particular, once we have arrived at the decomposition $p = (X - a)p_{(1)} + b_0'$ we have, from equations (8.19) and (8.20) (and, $K$ being a field, degrees add when multiplying), that $n+1 = \deg p = \max \{ \deg (X-a) + \deg p_{(1)}, 0\} = \max \{ 1 + \deg p_{(1)}, 0\}$ . Now from here we need to show that $p_{(1)} \neq 0$ , so that $\deg p_{(1)} \neq - \infty$ and the maximum above is indeed $1 + \deg p_{(1)}$ .
|abstract-algebra|polynomials|field-theory|solution-verification|
0
Computing a parity check matrix given the minimum code distance
As per the title, I would like to know if it is possible to construct a parity check matrix given the length of the code, dimension and the minimum distance of the code. For example, if I would like to construct a parity matrix for this code C[22, 13, 5], how do I go about doing this?
This question was answered in the comment by Dilip Sarwate . From the comments: " In general , the answer is No, you cannot construct a parity check matrix for an $[n,k,d]$ code even if you can prove that such a code must exist."
|linear-algebra|coding-theory|
0
Why do we call integration "accumulation of change"?
So in virtually all English-language calculus classes I have seen, we define integration as the "accumulation of change". And that makes sense to me intuitively, but when I think about it, I feel like accumulation of value makes more sense. Because if we take the change in a function $f:\mathbb{R}\mapsto\mathbb{R}\text{ s.t. } f(x)=k$ for $k\in\mathbb{R}$ , then the "change" in $f$ at any $x$ is $0$ . So accumulating it, we add $0$ with itself some number of times, right? Which will always be $0$ . Yet, $$\int_a^b k\, dx$$ isn't always equal to zero... I feel like accumulation of change makes sense to me but I can't put my finger on why it does. Can anyone try to explain?
Viewing integration as the accumulation of something can be a reasonable way to think about it sometimes. But it's not possible actually to define integration just using a few words like that. All you can do with a description like "accumulation of change" is to hint at an idea that motivates the actual definition of an integral. Here's an example of how an English-language calculus course introduces integration (from MIT Open Courseware Single Variable Calculus ): The definite integral tells us the value of a function whose rate of change and initial conditions are known. The word "accumulation" does not occur in any form here. Here's how integration is introduced in Paul's Online Notes : In the past two chapters we’ve been given a function, $f(x)$ , and asking what the derivative of this function was. Starting with this section we are now going to turn things around. We now want to ask what function we differentiated to get the function $f(x)$ . I didn't have to look very hard to find these examples.
|calculus|integration|definite-integrals|indefinite-integrals|
0
When are $\mathfrak O_K/\mathfrak p^m$ and $\mathbb Z/p^m\mathbb Z$ isomorphic as $\mathbb Z$ modules?
Given an algebraic number field $K$ , denote by $\mathfrak O_K$ its ring of integers. Suppose that $\mathfrak p$ is a prime ideal and $p$ is a prime contained in $\mathfrak p$ . Under what conditions on $\mathfrak p$ can $\mathfrak O_K/\mathfrak p^m$ be isomorphic to $\mathbb Z/p^m\mathbb Z$ as $\mathbb Z$ -modules? Note that if they are isomorphic, then the number of elements in $\mathfrak O_K/\mathfrak p^m$ must be $p^m$ and therefore the norm of $\mathfrak p$ must be $p$ . Is this condition also sufficient? Or what is the sufficient condition?
Your condition is not sufficient. Let $e$ and $f$ denote the ramification and inertial degrees of $\mathfrak{p}/p$ . The norm homomorphism satisfies $N(\mathfrak{p}) = p^f$ , so the condition that $N(\mathfrak{p}) = p$ is equivalent to the condition $f = 1$ . This implies that $\mathcal{O}_K/\mathfrak{p}$ and $\mathbb{Z}/p\mathbb{Z}$ are isomorphic as fields (and hence $\mathbb{Z}$ -modules), so your condition does work for $m = 1$ . However, for larger $m$ , you need to add an additional hypothesis. Indeed, the additive order of $1$ in $\mathcal{O}_K/\mathfrak{p}^m$ is $p^t$ where $t = \lceil \frac{m}{e} \rceil$ (since $p^t \in \mathfrak{p}^m$ iff $te \geq m$). Assuming $f = 1$ , we know that $|\mathcal{O}_K/\mathfrak{p}^m| = p^m$ , so the $\mathbb{Z}$ -module isomorphism $\mathcal{O}_K/\mathfrak{p}^m \cong \mathbb{Z}/p^m$ holds iff this order equals $p^m$ . It is easy to check that this is true iff $m = 1$ or $e = 1$ .
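A concrete instance of this failure (my example, not from the post): in $K=\mathbb Q(i)$ the prime $2$ ramifies as $(2)=(1+i)^2$, so $e=2$, $f=1$ and $N(\mathfrak p)=2$. For $m=2$ we get $\mathcal O_K/\mathfrak p^2=\mathbb Z[i]/(2)$, which has $4$ elements but every element is killed by $2$: it is $(\mathbb Z/2)^2$, not $\mathbb Z/4\mathbb Z$.

```python
# Elements of Z[i]/(2), written as a + b*i with a, b in {0, 1}
elements = [(a, b) for a in range(2) for b in range(2)]

def additive_order(x):
    """Additive order of x in Z[i]/(2)."""
    k, cur = 1, x
    while cur != (0, 0):
        cur = ((cur[0] + x[0]) % 2, (cur[1] + x[1]) % 2)
        k += 1
    return k

# Largest additive order is 2, so the group has 4 elements but is not
# cyclic of order 4 -- no Z-module isomorphism with Z/4Z.
max_order = max(additive_order(x) for x in elements)
```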
|abstract-algebra|number-theory|algebraic-number-theory|
1
Prove with delta-epsilon definition: $\lim_{x\to 3} \frac{x^2-9}{x^2-3x}=2$
Prove with delta-epsilon definition: $\displaystyle\lim_{x\to 3} \frac{x^2-9}{x^2-3x}=2$ I started with the following inequality: $$ \Big|\frac{x^2-9}{x^2-3x}-2\Big| < \varepsilon $$ Then I could not progress after this: $$ \Big|\frac{x+3}{x}-2\Big| < \varepsilon $$ Could anyone help me to find $\delta=\delta(\epsilon)?$
$|\frac{x^2-9}{x^2-3x} -2| = |\frac{-x^2+6x-9}{x^2-3x}| = |\frac{x^2-6x+9}{x^2-3x}| = |\frac{x-3}{x}| = |x-3||\frac{1}{x}|$ You want $|x-3||\frac{1}{x}| < \epsilon$ when $|x-3| < \delta$, so we can bound $\delta \leq 1$; then $|x-3| < 1$, so $2 < x < 4$, so $|\frac{1}{x}| < \frac{1}{2}$, so we choose $\delta = \min\{1, 2\epsilon\}$; then $$|x-3||1/x| < 2\epsilon \cdot \frac{1}{2} = \epsilon.$$
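The choice $\delta = \min\{1, 2\epsilon\}$ can be sanity-checked numerically (an illustration, not part of the proof): sample points with $0 < |x-3| < \delta$ and confirm $|f(x)-2| < \epsilon$.

```python
def f(x):
    return (x**2 - 9) / (x**2 - 3*x)

def delta_works(eps, samples=1000):
    """Check |f(x) - 2| < eps for sampled x with 0 < |x - 3| < delta."""
    delta = min(1.0, 2 * eps)
    for i in range(1, samples):
        # alternate sides of 3, staying strictly inside the delta-ball
        x = 3 + delta * (i / samples) * (1 if i % 2 else -1) * 0.999
        if abs(f(x) - 2) >= eps:
            return False
    return True
```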
|real-analysis|epsilon-delta|
1
PDE Cauchy problem
Question: Find the general solution of the equation $$y\dfrac{\partial z}{\partial x}+2z\dfrac{\partial z}{\partial y}=\frac{y}{x}$$ Then solve the Cauchy problem with Cauchy data $$x=y^2, \ \ z=2$$ That is, find the integral surface of this equation passing through that curve. Attempt at solution: Characteristic system : $$\frac{dx}{y}=\frac{dy}{2z}=\frac{x\ dz}{y}$$ Integrating $\frac{dx}{y}=\frac{dy}{2z}$ gives $$u_1(x,y,z)=C_1=x-\frac{y^2}{4z}$$ Integrating $\frac{dx}{y}=\frac{x \ dz}{y}$ gives $$u_2(x,y,z)=C_2=\ln x-z$$ Inserting Cauchy data gives : $$C_1=y^2-\frac{y^2}{8}=\frac{7}{8}y^2$$ $$C_2=\ln x-\ln y^2-z+2$$ But I am unsure of what to do next to get the general solution.
You cannot integrate $\frac{dx}{y}=\frac{dy}{2z}$ assuming that $z$ is a constant. You should, instead, replace $z$ with the result of the second integration, $z=\ln x-C_2$ : \begin{align} &\int\frac{y\,dy}{2}=\int z\,dx=\int (\ln x - C_2)\,dx \\ &\implies \frac{y^2}{4}+C_1=x(\ln x -1)-C_2x=x(z-1) \\ &\implies (z-1)x-\frac{y^2}{4}=C_1=f(C_2)=f(\ln x - z). \tag{1} \end{align} Now, use the Cauchy data $z(y^2,y)=2$ to determine $f$ : \begin{align} &(2-1)y^2-\frac{y^2}{4}=f(\ln y^2 - 2) \\ &\implies f\!\left(\ln\left(\frac{y^2}{e^2}\right)\right)=\frac{3y^2}{4}=\frac{3}{4}e^2\exp\left(\ln\left(\frac{y^2}{e^2}\right)\right) \\ &\implies f(t)=\frac{3}{4}e^{2+t}. \tag{2} \end{align} It follows from $(1)$ and $(2)$ that $$ (z-1)x-\frac{y^2}{4}=\frac{3}{4}e^{2+\ln x - z} \implies \left(z-1-\frac{3}{4}e^{2-z}\right)x-\frac{y^2}{4}=0. \tag{3} $$ Equation $(3)$ is the solution to the Cauchy problem in implicit form.
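As a quick numerical sanity check (my own sketch, not part of the answer), one can confirm that the implicit surface $(3)$, $\left(z-1-\frac{3}{4}e^{2-z}\right)x-\frac{y^2}{4}=0$, contains the Cauchy curve $x=y^2,\ z=2$: substituting gives $(2-1-\tfrac34)y^2 - \tfrac{y^2}{4} = 0$ identically.

```python
import math

def implicit(x, y, z):
    """Left-hand side of equation (3); zero on the solution surface."""
    return (z - 1 - 0.75 * math.exp(2 - z)) * x - y**2 / 4

# Evaluate along the Cauchy curve x = y^2, z = 2 for several y
vals = [implicit(y**2, y, 2.0) for y in (0.5, 1.0, 2.0, 3.5)]
# all values vanish (up to floating point)
```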
|partial-differential-equations|
0
On a Universal Property for the Tangent Space.
A while ago, I was asking whether there exists a universal property of the tangent space and what it says about any construction of it. I've found the definition made in Tammo tom Dieck's book on Algebraic Topology: A tangent space at $p$ is a vector space $T_pM$ such that any chart $(U,x)$ on $M$ induces a chart $(T_pM, \text{d}x_p)$ on the tangent space, where $\text{d}x_p\colon T_pM\to\mathbb{R}^n$ is an isomorphism, and for any two charts $(U,x)$ and $(V,y)$ the coordinate change $\text{d}y_p\circ\text{d}x_p^{-1}$ equals the derivative of the coordinate change $y\circ x^{-1}$ at $x(p)$ . For me, at least, this approach to defining the tangent space does justice to the idea of "the best linear approximation of a manifold near a point", since every coordinate system on $M$ has its own approximation and the coordinate change is approximately the one of the charts on $M$ . My question is about the construction of the tangent space. I've already seen that the tangent space via curves satisfies this property.
Let $M$ be a manifold and $p\in M$ . Define $$ T_pM\colon = Der_p(\mathcal C^\infty M)=\{D\colon\mathcal C^\infty M\to \mathbb R \vert D \text{ is a derivation at }p\}. $$ Now we check that the universal property holds for $T_p M$ defined above. A chart $(U,x)$ of $M$ indeed induces an isomorphism via $$ \mathrm d x_p\colon T_pM \to \mathbb R^n,\,D\mapsto (D(x^1),\,D(x^2),\,\dots,D(x^n)). $$ Now consider two charts $(U,x),\,(V,y)$ on $M$ with $U\cap V \neq\emptyset$ . For $v\in \mathbb R^n$ we have $(\mathrm d x_p)^{-1}(v)=D_v$ where $$ D_v(f)=D_{x(p)}(f\circ x^{-1})(v). $$ Thus $$ (\mathrm d y_p \circ (\mathrm d x_p)^{-1})(v)=\mathrm d y_p(D_v)=(D_v(y^i))_i=(D_{x(p)}(y^i\circ x^{-1})(v))_i=D_{x(p)}(y\circ x^{-1})(v) $$ which means that indeed $\mathrm d y_p \circ (\mathrm d x_p)^{-1}=D_{x(p)}(y\circ x^{-1})$ .
|smooth-manifolds|tangent-spaces|derivations|
1
Change of basis vs change of coordinate system
To my understanding, a change of coordinate system can be thought of as any set of invertible relations between the old coordinates and the new. i.e. every new coordinate can be represented as a function of the old coordinates and vice versa, regardless of whether that function is linear or not. This means that a change of coordinates cannot be generally thought of as a change of basis (invertible linear transformation). Is that correct? I am asking this because I had been learning about tensors, and I noticed how sometimes they are defined with regard to how their components vary under change of basis, and other times they are defined with regard to how they vary under change of coordinates. Thanks!
This means that a change of coordinates cannot be generally thought of as a change of basis (invertible linear transformation). Is that correct? Yes, a change of coordinates is not always simply a change of basis: take $\mathbb R ^2$ as $\mathbb R$ -vector space and consider a simple translation by $b\in \mathbb R ^2\setminus \{0\}$ as the change of coordinate system, say from $B$ to $B'$ ; we denote this map by $\phi:\mathbb R ^2 \rightarrow \mathbb R ^2, x\mapsto x+b$ . This map is not linear, as $\phi (0) = b\neq 0$ (for $0 := 0_{\mathbb R ^2}:=(0,0)$ ), hence it cannot be represented by a change-of-basis matrix (or by this kind of tensor representation, since it does not preserve the linear structure of the space), yet it is still bijective and hence invertible (in the sense of a function between sets). Seeing this as a special case of tensor representation should answer your question.
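The answer's point fits in a few lines (illustration only): the translation $\phi(x)=x+b$ on $\mathbb R^2$ is bijective but fails both linearity tests.

```python
b = (1.0, -2.0)  # any nonzero translation vector

def phi(x):
    # translation by b: bijective, but not linear
    return (x[0] + b[0], x[1] + b[1])

u, v = (1.0, 1.0), (2.0, 0.0)
uv = (u[0] + v[0], u[1] + v[1])

not_linear = phi((0.0, 0.0)) != (0.0, 0.0)          # phi(0) = b != 0
additivity_fails = phi(uv) != (phi(u)[0] + phi(v)[0],
                               phi(u)[1] + phi(v)[1])
```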
|coordinate-systems|tensors|change-of-basis|
0
Please I'm a bit confused about proving that this sequence is convergent
Suppose $(x_n)$ is a sequence of real numbers satisfying $|x_n - x_{n-1}| \leq 2^{-n}$ for each $n \geq 2$. Prove that the sequence is convergent.
Because convergence in the real numbers is the same as the sequence being Cauchy, you can prove it is Cauchy by induction on the difference between $m$ and $n$ for arbitrary $m,n>N.$ If you need help understanding Cauchy sequences this is a relevant question. (Hint: If you WLOG assume $m>n$ you can create a triangle inequality chain to find $|x_m-x_n|$ as a geometric sum, then you will be able to find such an N.)
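The hint's geometric bound can be illustrated numerically (not a proof): for $m>n$ the triangle inequality gives $|x_m-x_n|\leq\sum_{k=n+1}^{m}2^{-k}<2^{-n}$, a tail that can be made smaller than any $\varepsilon$ by choosing $N$.

```python
def tail_bound(n, m):
    # sum_{k=n+1}^{m} 2^{-k}, the triangle-inequality bound on |x_m - x_n|
    return sum(2.0**(-k) for k in range(n + 1, m + 1))

def x(n):
    # worst-case sequence: every allowed step taken with full size 2^{-k}
    return sum(2.0**(-k) for k in range(2, n + 1))

# the actual gaps obey the geometric bound, which is itself below 2^{-n}
diffs_ok = all(
    abs(x(m) - x(n)) <= tail_bound(n, m) < 2.0**(-n)
    for n in range(2, 10) for m in range(n + 1, 15)
)
```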
|calculus|sequences-and-series|real-numbers|
0
Is every subset of $X$ compact and connected under the trivial topology on $X$?
I was thinking about the trivial topology, in which the only open sets are $X$ and $\emptyset$ , where $X$ is the entire space we are working in. My doubt is: are all the subset of $X$ compact and connected? I understand that they are all closed, since the only open sets are $X$ and $\emptyset$ . But compact means also bounded, so if for example $X = \mathbb{R}$ , how can be, for example, $(1, +\infty)$ bounded? Also I'm having problems in understanding connectedness. Say we have $[1, 3]\cup [4, 6]$ , how is this connected in the trivial topology?
" ... compact also means bounded ... " is true in a metric space, where the concept of bounded is defined. But in a topological space where no metric is defined, boundedness is undefined. So you have to adjust your intuition here so as to entirely ignore the concept of boundedness in approaching this problem. Instead, you have to directly apply the definition of compactness. Compactness, by definition, says every open cover has a finite subcover. Well, in the trivial topology there are only 2 open sets, namely the empty set and the whole space. So, any given open cover is already finite , because that open cover has at most 2 elements. The space is therefore compact. Dis connectedness, by definition, says that there is a disjoint pair of nonempty open sets whose union is the whole space. Well, in the trivial topology there is only 1 nonempty open set, namely the whole space. So there does not even exist two nonempty open sets (let alone two which are disjoint and whose union is the who
|general-topology|compactness|connectedness|
0
Show that $\mu$ is the Lebesgue measure on $\mathbb{R}$, $\mu$ being invariant by translation and $\mu(]0,1])=1$
$\mu$ is a measure on $(\mathbb{R}, B(\mathbb{R}))$ which is invariant by translation ( $\forall a,b,h\in\mathbb{R}:\,\mu(]a+h, b+h]) = \mu(]a, b])$ ) and such that $\mu(]0,1])=1$ . What I need to show is that $\mu(]0,x])=x$ , for $x\in \mathbb{N}$ , then for $x\in \mathbb{Q}_+$ , then for $x\in \mathbb{R}_+$ , and finally, to conclude that $\mu$ is the Lebesgue measure on $\mathbb{R}$ . Showing it for $x\in \mathbb{N}$ was fairly easy, but I am really struggling for the next steps. For $x\in \mathbb{Q}_+$ , I tried to write $x$ as $x = \frac{p}{q}$ but it seems to lead nowhere, how are the cases $x\in \mathbb{Q}_+$ and $x\in \mathbb{R}_+$ different ? Thanks :)
Let $p_n = \mu((0,1/n])$ . Then by translation invariance, $p_n$ satisfies $$ 1 = \mu((0,1]) = np_n, $$ so $\mu((0,1/n])= 1/n$ . From here, you should be able to say what the $\mu$ -measure of an interval of the form $(0,m/n]$ is using translation invariance. To establish the measure of an arbitrary interval, use continuity of the measure $\mu$ and approximation of the endpoints by rational numbers.
|measure-theory|lebesgue-measure|real-numbers|rational-numbers|
0
How to prove the constant's power x series?
Considering $k,x\in \mathbb{R}$ with $k>0$: $k^x=1+\sum_{n=1}^{\infty}\frac{x^n(\ln(k))^n}{n!}$ . Does it have an elementary proof? I attempt to solve by Maclaurin series: $f(x)=\sum_{n=0}^{\infty}\frac{x^n f^{(n)}(0)}{n!}$ . It is well known that the $n$ th derivatives of the function $k^x$ are $\frac{d}{dx}k^x=k^x\ln (k)$ ; $\frac{d^2}{dx^2}k^x=k^x(\ln(k))^2$ and, consequently, $\frac{d^n}{dx^n}k^x=k^x(\ln (k))^n$ . So $k^x=\sum \frac{x^n f^{(n)}(0)}{n!}=\sum \frac{x^n k^0(\ln (k))^{n}}{n!}=\sum_0^\infty \frac{x^n (\ln (k))^n}{n!}.$ The first term, when $n=0$ , is $1$ . Therefore, the summation may be rewritten as $1+\sum_1^\infty \frac{x^n (\ln(k))^n}{n!}$ .
Notice $k^x=e^{\ln(k^x)}=e^{x{\ln(k)}}$ by exponent rules, and the Taylor series for $e^x$ gives rise to this identity. The $1$ term arises from $n=0$ in the Taylor series for $e^x$ . If $k$ is complex you can take the principal branch of the logarithm, which will give the same answer via the complex series for $e^x$ .
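The identity can be checked numerically (a sketch, not part of the answer): the partial sums of $\sum_{n\geq 0}(x\ln k)^n/n!$ converge to $k^x$.

```python
import math

def kx_series(k, x, terms=60):
    """Partial sum of sum_{n>=0} (x ln k)^n / n!, i.e. exp(x ln k)."""
    t = x * math.log(k)
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term          # add (x ln k)^n / n!
        term *= t / (n + 1)    # next term of the exponential series
    return total

# kx_series(3.0, 2.5) agrees with 3.0**2.5, and kx_series(2.0, 10.0)
# agrees with 2**10 = 1024, to floating-point accuracy
```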
|calculus|sequences-and-series|summation|taylor-expansion|irrational-numbers|
0
Pullback of a flat module is flat
Let $f\colon (X,\mathcal{O}_X)\rightarrow (Y,\mathcal{O}_Y)$ be a map of ringed spaces. In the Stacks Project, in the proof of Tag 09U8 , they basically claim at some point, that for any flat $\mathcal{O}_Y$ -module $\mathcal{F}$ its pullback $f^*\mathcal{F}$ is also flat. Now they do link to a proof of this result, but in that linked statement they assume $f$ to be a flat morphism. So if $f$ is flat, that the statement is clear, but does it also hold for an arbitrary $f$ ?
The trick is that you can change what $\mathcal{O}_X$ is to get a flat morphism. Let $X'=(X,f^{-1}\mathcal{O}_Y)$ , which is flat over $Y$ , as $(f^{-1}\mathcal{O}_Y)_x\cong\mathcal{O}_{Y,f(x)}$ and the induced map $\mathcal{O}_{Y,f(x)}\to\mathcal{O}_{Y,f(x)}$ is an isomorphism. Writing $$f^*\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{G} \cong f^{-1}\mathcal{F} \otimes_{f^{-1}\mathcal{O}_Y} \mathcal{O}_X \otimes_{\mathcal{O}_X} \mathcal{G} \cong f^{-1}\mathcal{F}\otimes_{f^{-1}\mathcal{O}_Y} \mathcal{G}$$ for $\mathcal{G}$ a sheaf of $\mathcal{O}_X$ -modules, we see that pulling back to $X$ is equivalent to pulling back to $X'$ on the level of sheaves of abelian groups, and $X'\to Y$ is flat, so we can use the result. This is essentially covered at the last statement ( 0GMU ) in 02N2 , but it could be added to the lemma you're reading to make it a little clearer, perhaps.
|algebraic-geometry|sheaf-theory|flatness|
1
How to elegantly solve $\left(x^2-\frac1{x^2}\right)^2+2\left(x+\frac1{x}\right)^2 = 2024$?
Find all values of $x$ such that $$\left(x^2-\frac{1}{x^2}\right)^2\:+\:2\,\left(x+\frac{1}{x}\right)^2 \:=\: 2024$$ I arrived at the solutions $\,x=\pm \sqrt{22 \pm \sqrt{483}}$ , $\,\pm \sqrt{-23 \pm 4 \sqrt{33}}\,$ , by simplifying to a point where there is no more linear term, and then substituting $y=x^2$ . It took many steps and was quite messy. Is there some elegant way to solve it?
I think part of your answer is a little off -- the second family $\pm \sqrt{-23 \pm 4 \sqrt{33}}$ is always complex. Observe that $(x\pm\tfrac 1x)^2=x^2\pm2+\tfrac 1{x^2}$ so $(x-\tfrac 1x)^2= (x+\tfrac 1x)^2-4$ . Thus, if we set $u=(x+\tfrac 1x)^2$ your equation is $u(u-4)+2u-2024=0$ . The solutions are $u=1 \pm \sqrt{2025}=1\pm 45$ , i.e. $u\in\{46,-44\}$ . Now we have $x+\tfrac 1x=\pm \sqrt{46}$ : $$ x=\frac{\sqrt{46} \pm \sqrt{42}}{2},\frac{-\sqrt{46} \pm \sqrt{42}}{2} $$
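A numerical check (my sketch, not from the answer): the four real roots satisfy the original equation, and squaring $(\sqrt{46}+\sqrt{42})/2$ recovers the OP's form $22+\sqrt{483}$.

```python
import math

def lhs(x):
    # left-hand side of the original equation
    return (x**2 - 1/x**2)**2 + 2*(x + 1/x)**2

# the four real roots x = +/- (sqrt(46) +/- sqrt(42))/2
roots = [sign * (math.sqrt(46) + s * math.sqrt(42)) / 2
         for s in (1, -1) for sign in (1, -1)]
# lhs(r) == 2024 for every root, up to floating point
```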
|algebra-precalculus|roots|radicals|
0
Prove that $\lim_{x\rightarrow\infty}f(2x)-f(x)=0$ using the Mean value theorem
I encountered the following question, regarding the Mean value theorem: Let $f$ be some function such that it is differentiable on the interval $(5, \infty)$ , and satisfies $\lim_{x\rightarrow\infty}f'(x)=0$ . Prove that $\lim_{x\rightarrow\infty}f(2x)-f(x)=0$ . I am trying to use the limit definition (using $\varepsilon, M$ ) and the Mean value theorem, but I got stuck. Here is my attempt: Let $\varepsilon>0$ . For any $x>5$ , by the Mean value theorem on $[x, 2x]$ , there exists some $c$ with $x < c < 2x$ such that $f'(c)=\frac{f(2x)-f(x)}{x}$ . I know that $f'(c)$ (and $|f'(c)|$ ) can be as small as I want, but I am stuck due to the $x$ in the denominator, and I don't know how to get rid of it.
This is not true. Let $f(x)=\sqrt{x}$ . Then $f$ is differentiable and $f'(x)=1/(2\sqrt{x})$ . This tends to zero as $x\to\infty$ . But $\sqrt{2x}-\sqrt{x}=\sqrt{x}(\sqrt{2}-1)\to+\infty$ . Maybe you want to solve another problem, that is $f(x+2)-f(x)\to 0$ . This is indeed true as $f(x+2)-f(x)=2f'(c)$ . And $f'(c)\to 0$ . Edit. So we need to show that $f(x)/x\to 0$ . For any $\varepsilon>0$ there is $T$ such that for $x\geq T$ we have $|f'(x)| < \varepsilon/2$ . We apply the mean value theorem on the interval $[T,x]$ . We get $|f(x)-f(T)|=|f'(c)(x-T)|\leq \varepsilon/2\, |x|$ . So $|f(x)/x|\leq |(f(x)-f(T))/x|+|f(T)/x|\leq \varepsilon/2+|f(T)|/x$ . So for $x>2|f(T)|/\varepsilon$ , we get $|f(x)/x|\leq \varepsilon$ . This means that $f(x)/x\to 0$ .
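The counterexample is easy to see numerically (illustration only): for $f(x)=\sqrt x$ the gaps $f(2x)-f(x)$ grow without bound even though $f'(x)\to 0$, while $f(x)/x\to 0$ as the edited argument shows.

```python
import math

def gap(x):
    # f(2x) - f(x) for f(x) = sqrt(x); equals sqrt(x)(sqrt(2) - 1)
    return math.sqrt(2 * x) - math.sqrt(x)

xs = [10.0 ** k for k in range(1, 7)]
gaps = [gap(x) for x in xs]               # strictly increasing, unbounded
ratios = [math.sqrt(x) / x for x in xs]   # f(x)/x, decreasing toward 0
```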
|limits|mean-value-theorem|
0
Is every subset of $X$ compact and connected under the trivial topology on $X$?
I was thinking about the trivial topology, in which the only open sets are $X$ and $\emptyset$ , where $X$ is the entire space we are working in. My doubt is: are all the subset of $X$ compact and connected? I understand that they are all closed, since the only open sets are $X$ and $\emptyset$ . But compact means also bounded, so if for example $X = \mathbb{R}$ , how can be, for example, $(1, +\infty)$ bounded? Also I'm having problems in understanding connectedness. Say we have $[1, 3]\cup [4, 6]$ , how is this connected in the trivial topology?
In the trivial topology, where the only open sets are the entire space $X$ and the empty set $\emptyset$ , let's address your questions: $\textbf{Compactness:}$ Every subset of $X$ is compact in the trivial topology. Compactness in a topology is not equivalent to boundedness in the metric space sense. In the trivial topology, compactness follows directly from the fact that any open cover for a subset must include the entire subset itself (if non-empty) or be empty. $\textbf{Connectedness:}$ A subset $A$ is connected in the trivial topology if the only way to express $A$ as the union of two disjoint open sets is to have one of them as $\emptyset$ . For example, let's consider the set $[1, 3] \cup [4, 6]$ in the trivial topology on $\mathbb{R}$ . In this topology, the only open sets are $\emptyset$ and $\mathbb{R}$ . The set $[1, 3] \cup [4, 6]$ cannot be expressed as the union of two disjoint non-empty open sets. Therefore, it is connected in the trivial topology. $\textbf{In summary:}$ every subset of $X$ is both compact and connected under the trivial topology.
|general-topology|compactness|connectedness|
0
Show that $ \int_\mathbb{R} 2F(x)dF(x)=1 $.
Let $F$ be Stieltjes measure function (nondecreasing and right continuous) and $\mu$ be corresponding measure on $(\mathbb{R}, \mathcal{R})$ . Assume that $F: \mathbb{R}\to [0,1]$ is continuous. Show that $$ \int_\mathbb{R} 2F(x)dF(x)=1 $$ I found that this question is a corollary of Durrett's textbook exercise 1.7.3: Let $F$ be continuous. Then $$ \int_{(a,b]}2 F(y)dF(y)=F^2(b)-F^2(a) $$ However, the proof relies on the previous results: Let $F, G$ be Stieltjes measure functions and let $\mu, \nu$ be corresponding measures. Then $$\int_{(a,b]}F(y)dG(y)+\int_{(a,b]}G(y)dF(y)=F(b)G(b)-F(a)G(a)+\sum_{x\in (a,b]} \mu(\{x\})\nu(\{x\})$$ Can we prove our result without using this result? If we just apply Durrett's result, then we take $a\to-\infty, b\to \infty$ , we will prove our result.
$$\int udv = uv - \int vdu$$ Claim: $$\int F(x) dF(x) = F^2(b) - F^2(a) - \int F(x)dF(x)$$ Proof: Since $F$ is continuous, the integral is the limit of Riemann–Stieltjes sums over partitions $a = a_1 < a_2 < \dots < a_n = b$, so we have: $$g([a,b]) = \int_{[a,b]} F(x)dF(x) = \lim_{n \rightarrow \infty} \sum_{1 \leq i \leq n-1} F(a_i) (F(a_{i+1}) - F(a_i))$$ $$ = \lim_{n \rightarrow \infty} \sum_{1 \leq i \leq n-1} F(a_i) (F(a_{i+1}) - F(a_i)) - F^2(a_{i+1}) + F^2(a_i) + F^2(a_{i+1}) - F^2(a_i)$$ $$ = F^2(b) - F^2(a) + \lim_{n \rightarrow \infty} \sum_{1 \leq i \leq n-1} F(a_i) (F(a_{i+1}) - F(a_i)) - F^2(a_{i+1}) + F^2(a_i)$$ $$ = F^2(b) - F^2(a) + \lim_{n \rightarrow \infty} \sum_{1 \leq i \leq n-1} F(a_i) F(a_{i+1}) - F^2(a_{i+1})$$ $$ = F^2(b) - F^2(a) - \lim_{n \rightarrow \infty} \sum_{1 \leq i \leq n-1} F(a_{i+1})(F(a_{i+1}) - F(a_i))$$ $$ = F^2(b) - F^2(a) - \int F(x) dF(x)$$ The rest follows by rearranging: $$\int F(x) dF(x) = \frac{1}{2} (F^2(b) - F^2(a))$$
|real-analysis|probability|
0
Show that $ \int_\mathbb{R} 2F(x)dF(x)=1 $.
Let $F$ be Stieltjes measure function (nondecreasing and right continuous) and $\mu$ be corresponding measure on $(\mathbb{R}, \mathcal{R})$ . Assume that $F: \mathbb{R}\to [0,1]$ is continuous. Show that $$ \int_\mathbb{R} 2F(x)dF(x)=1 $$ I found that this question is a corollary of Durrett's textbook exercise 1.7.3: Let $F$ be continuous. Then $$ \int_{(a,b]}2 F(y)dF(y)=F^2(b)-F^2(a) $$ However, the proof relies on the previous results: Let $F, G$ be Stieltjes measure functions and let $\mu, \nu$ be corresponding measures. Then $$\int_{(a,b]}F(y)dG(y)+\int_{(a,b]}G(y)dF(y)=F(b)G(b)-F(a)G(a)+\sum_{x\in (a,b]} \mu(\{x\})\nu(\{x\})$$ Can we prove our result without using this result? If we just apply Durrett's result, then we take $a\to-\infty, b\to \infty$ , we will prove our result.
Assuming the continuity of $F$ , let $X$ be a random variable with distribution function $F$ . Then $\int F(x)\,dF(x) = \mathbb E[F(X)]$ . Since $F(X)$ is uniformly distributed on $[0,1]$ , it follows that $\mathbb E[F(X)] = \frac{1}{2}$ , which solves the problem.
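This probabilistic argument is easy to check by simulation. A minimal sketch (the standard normal is an arbitrary choice of continuous $F$ for the demo):

```python
# Monte Carlo sanity check of E[F(X)] = 1/2 when X ~ F and F is continuous.
import math, random

random.seed(0)
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
samples = [Phi(random.gauss(0, 1)) for _ in range(200_000)]
est = sum(samples) / len(samples)
print(est)  # close to 0.5, since F(X) ~ Uniform(0, 1)
```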
|real-analysis|probability|
1
Trouble understanding the relation between instantaneous velocity, rate of change and derivatives
The rate of change of position with respect to time is velocity. But we measure two common velocities, average velocity over an interval of time and instantaneous velocity. Now, I understand average velocity as a rate of change. I believe it is the slope of the secant line whose x values are the values across which we are computing the average. My doubt is, how is the instantaneous velocity a rate of change? If you compute the rate of change at an instant (i.e. at a point), you'll see that the x axis is not changing. While discussing this with another group, they told me to use the definition of limits. The instantaneous velocity is $\Delta{x}/\Delta{t}, \Delta{t} \rightarrow 0$ . But that just defines the tangent (which is the derivative) at some point, t. I still don't see the relation between instantaneous velocity and rate of change. Regarding my prerequisite knowledge, I have a Masters in Engineering. Therefore I understand the formulae of calculus quite well. I just never bothered to examine the underlying intuition.
It may be helpful to first clear up some confusion. The instantaneous velocity of a function $f(x)$ at some point $x_0$ , represented by its derivative $f'(x_0)$ , is the slope of the tangent line at that point, not the tangent line itself. When another group mentioned the limit definition of the derivative, namely $$ f'(x_0) = \lim_{h \to 0}\frac{f(x_0+h)-f(x_0)}{h} $$ it is important to note that as we decrease the size of $h$ , we do not care about the total change in $f$ from $x_0$ to $x_0+h$ , but rather about the convergent value of the ratio of these quantities. Even if $f(x_0+h)$ gets ever closer to $f(x_0)$ , because $h$ is growing small itself, the two 'balance each other out' in a meaningful way. Instantaneous velocity is therefore the limiting ratio, for $h \to 0$ . If we include units in the problem more explicitly, the fact that this converges to a rate of change with respect to the variable $x$ can also be seen. I hope this answer addresses your confusion.
|calculus|derivatives|
0
Algorithms on converting hex to IEEE-754 single-precision floating point number
I need help with finding an algorithm for converting hex to an IEEE-754 single-precision floating point number (without coding). I couldn't find any on the internet :( For example, given the input string 0xB9CD542 , we need to return the string 0x1.39aa84p-104 , where the last number is in hexadecimal exponential form (single precision floating point number) rounded to 0
OK, I'll bite: If 0xB9CD54 (binary 00000000101110011100110101010100 ) is an IEEE-754 single-precision number, then it can be decomposed as follows: Sign bit = 0 , so number is positive Biased exponent = 00000001 , so unbiased exponent is -126 Mantissa (including implied leading 1 ) = 101110011100110101010100 = 0xB9CD54 Shifting the mantissa left by 1 so that it starts with 1. gives 1.739AA8 . So we end up with 0x1.739AA8p-126 . Conversely, the decimal form of 0x1.39aa84p-104 would seem to be 0xb9CD542 according to this site . So it looks like you need to check your sources!
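A quick way to check such conversions mechanically (a sketch; reading the question's seven-digit word as the 32-bit pattern `0x0B9CD542` is an assumption) is to let `struct` reinterpret the bits as a binary32 value and print its hexadecimal exponential form:

```python
# Interpret a 32-bit hex word as an IEEE-754 single-precision value
# and print its hexadecimal exponential ("hex float") form.
import struct

def hex_to_single(word_hex):
    raw = int(word_hex, 16).to_bytes(4, "big")   # the 4-byte big-endian word
    (value,) = struct.unpack(">f", raw)          # decode the bits as binary32
    return value

v = hex_to_single("0x0B9CD542")
print(v.hex())  # 0x1.39aa840000000p-104 (float.hex prints double-width digits)
```

This agrees with the question's expected `0x1.39aa84p-104` once the trailing zero mantissa digits are dropped.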
|computer-arithmetic|
0
Finding a confidence interval for an unknown distribution
Problem: A manufacturer of tires wants to advertise a mileage interval that excludes no more than $10\%$ of the mileage on tires he sells. All he knows is that, for a large number of tires tested, the mean mileage was $25,000$ miles, with a standard deviation of $4,000$ . What interval would you suggest? Answer: In this case, I am going to apply Tchebysheff's Theorem. \begin{align*} u &= 25000 \\ \sigma &= 4000 \\ P(|Y - u | \leq k\sigma ) &\geq 1 - \dfrac{1}{k^2} = 0.1 \\ 1 - \dfrac{1}{k^2} &= 0.1 \\ -\dfrac{1}{k^2} &= -0.9 \\ \dfrac{1}{k^2} &= \dfrac{3}{10} \\ k^2 &= \dfrac{10}{3} \\ k &= \pm \sqrt{ \dfrac{10}{3} } \\ k \sigma &\doteq 13333.333 \\ \end{align*} The lower bound of the answer is: $$25000 - 4000\left( \sqrt{ \dfrac{10}{3}} \right) $$ Hence the answer is: $$ ( 11666.667, 38333.333 ) $$ Note: The book's answer is: $$ ( 12,351, 37,649 ) $$ Where did I go wrong? Here is an updated solution which I believe is correct: Answer: In this case, I am going to apply Tchebysheff's Theorem.
It should be $P(|Y-u|\leq k\sigma) \geq 1-\frac{1}{k^2} = 0.9$ , since you want a $90\%$ chance of being inside the interval.
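Redoing the computation with $1-\frac{1}{k^2}=0.9$ (so at most $10\%$ of mileages fall outside the interval) reproduces the book's numbers:

```python
# Chebyshev interval with 1 - 1/k^2 = 0.9, i.e. k = sqrt(10).
import math

mu, sigma = 25_000, 4_000
k = math.sqrt(1 / (1 - 0.9))          # k = sqrt(10) ~ 3.162
lo, hi = mu - k * sigma, mu + k * sigma
print(round(lo), round(hi))  # matches the book's (12,351, 37,649)
```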
|probability|statistics|
1
Confusion over defining discriminants in algebraic number theory
I am reviewing some algebraic number theory, and found myself confused about two seemingly distinct notions of discriminant that are used. Let $K$ be a field, and $L$ a finite separable extension. We can define the trace pairing $L \times L \rightarrow K$ which sends $(\alpha, \beta) \mapsto \mathrm{Tr}_{L/K}(\alpha \beta)$ , which is nondegenerate. Then the discriminant of $L/K$ is the discriminant of the trace pairing as a symmetric bilinear form. For any $\beta_1, \dots, \beta_n \in L$ we can define $$\mathrm{D}(\beta_1, \dots, \beta_n) = \det((\mathrm{Tr}_{L/K}(\beta_i \beta_j))_{ij})$$ and if the $\beta_i$ are a basis for $L$ as a $K$ -vector space, then we recover the discriminant. We can also define for an extension $B/A$ of rings, where $B \cong A^n$ is a finitely generated free $A$ -module (and I think one can even define this when $B$ is finite projective), a trace pairing $\mathrm{Tr}_{B/A}:B \times B \rightarrow A$ analogously to the above. This too has a discriminant, relative to a choice of basis.
Your discriminant concepts depend on the choice of a basis. Let $B$ be a commutative ring with a subring $A$ such that $B$ is a finite-free $A$ -module. When $x_1,\ldots,x_n$ and $y_1,\ldots, y_n$ are two $A$ -bases of $B$ , let $y_j = \sum_{i=1}^n a_{ij}x_i$ , so $(a_{ij})$ is the change of basis matrix from the $x$ 's to the $y$ 's. Then we have the matrix equation $$ ({\rm Tr}_{B/A}(y_iy_j)) = (a_{ij})^{\top}({\rm Tr}_{B/A}(x_ix_j))(a_{ij}), $$ so taking determinants tells us $$ {\rm disc}_{B/A}(y_1,\ldots,y_n) = (\det(a_{ij}))^2{\rm disc}_{B/A}(x_1,\ldots,x_n). $$ The number $\det(a_{ij})$ is a unit in $A$ since a change of basis matrix always has a determinant in $A^\times$ . Thus the discriminants of two $A$ -bases of $B$ are equal up to a unit square factor in $A$ . Example . For a field $F$ , a nonconstant polynomial $f(x)$ in $F[x]$ has a discriminant, defined in terms of its roots, and this is a special case of the above construction using a power basis: when $B = F[x]/(f(x))$ .
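The change-of-basis relation is easy to see in a tiny worked case. A sketch, assuming $B=\Bbb Z[\sqrt 2]$ over $A=\Bbb Z$ with ${\rm Tr}(a+b\sqrt2)=2a$ (these choices are mine, not from the question):

```python
# disc of two Z-bases of Z[sqrt(2)] via the trace pairing; the change of
# basis {1, sqrt2} -> {1, 1+sqrt2} has determinant 1, a unit, so the
# discriminants agree.

def tr(a, b):                # trace of a + b*sqrt(2) down to Q
    return 2 * a

def mul(p, q):               # (a+b sqrt2)(c+d sqrt2) = (ac+2bd) + (ad+bc) sqrt2
    (a, b), (c, d) = p, q
    return (a * c + 2 * b * d, a * d + b * c)

def disc(basis):             # determinant of the 2x2 trace-pairing matrix
    m = [[tr(*mul(u, v)) for v in basis] for u in basis]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

x_basis = [(1, 0), (0, 1)]   # {1, sqrt(2)}
y_basis = [(1, 0), (1, 1)]   # {1, 1 + sqrt(2)}
print(disc(x_basis), disc(y_basis))  # both 8
```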
|abstract-algebra|number-theory|algebraic-number-theory|discriminant|
1
Terminology and notation for different sorts of "product maps"
Given an indexed collection of functions $\{f_i : X_i \to Y_i\}_{i\in I}$ , there is a product map $\prod_{i\in I}f_i:\prod_{i\in I}X_i \to \prod_{i\in I}Y_i$ defined by $(x_i)_{i\in I} \mapsto (f_i(x_i))_{i\in I}$ . This is, I believe, the standard notion of a product map, which makes the Cartesian product into a multifunctor on $\textbf{Set}$ . However, there is a related notion, where if we have an indexed collection of functions $\{f_i : X \to Y_i\}_{i\in I}$ for some fixed domain $X$ , there is a "product map" $\prod^*_{i\in I}f_i:X \to \prod_{i\in I}Y_i$ defined by $x \mapsto (f_i(x))_{i\in I}$ . I am not sure what the standard name or standard notation for this second notion is. Can someone tell me what it is?
It is the map you get due to the universal property of the product. A common notation for it is $(f_i)_{i \in I}$ . A less common name and notation appears in Engelking's General Topology : He calls it the diagonal of the mappings $\{f_i\}_{i \in I}$ and he denotes it by $\triangle_{i \in I} f_i$ .
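For concreteness, the two constructions can be contrasted in a toy snippet (finite products only; the function names are mine):

```python
# 'prod_map' is the componentwise product of maps prod f_i;
# 'diagonal' is the map X -> prod Y_i from the universal property.

def prod_map(*fs):
    return lambda xs: tuple(f(x) for f, x in zip(fs, xs))

def diagonal(*fs):
    return lambda x: tuple(f(x) for f in fs)

f1, f2 = (lambda n: n + 1), (lambda n: n * n)
print(prod_map(f1, f2)((3, 4)))   # (4, 16): each f_i gets its own input
print(diagonal(f1, f2)(3))        # (4, 9): every f_i gets the same input
```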
|elementary-set-theory|
0
Probability that a triangle inscribed in a square comprises at least $\frac{1}{4}$ of the area of the square
Question: Suppose that points $P_1$ , $P_2$ , and $P_3$ are chosen uniformly at random on the sides of a square $T$ . Compute the probability that $$\frac{[\triangle P_1 P_2 P_3]}{[T]}>\frac{1}{4}$$ where $[X]$ denotes the area of polygon $X$ . Without loss of generality, I assumed the side length of the square to be $1$ . Because the question mentions $\frac{1}{4}$ of the square, I considered splitting the square into quadrants. It is obvious that all three points cannot lie in the same quadrant of the square. Similarly, there cannot be two points in the same quadrant because the area must be less than $\frac{1}{2} \cdot \frac{1}{2} \cdot 1=\frac{1}{4}$ . [EDIT: This is wrong] This means that all three vertices must lie in different quadrants in the square. From here, I considered cases: Case 1: Two of the points are on the same side, which occurs with probability $\frac{9}{16}$ . Case 2: All three points are on different sides, which occurs with probability $\frac{3}{8}$ . From here,
I would try to estimate this statistically in Excel. A square with each side 1 unit can have its lower left corner at the origin. Randomly select 3 x and y coordinate pairs where one coordinate in the pair is 0 or 1 and the other can be determined with =rand(), use the distance formula for each pair to find the sides of the triangle they form, then use Heron's Area Formula to find the area of the triangle. Do this 10,000 times and count up the number of times the triangle's area is $\geq$ .25. The first part of this video is helpful: https://www.youtube.com/watch?v=i4VqXRRXi68
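The suggested experiment ports directly to Python. This sketch uses the shoelace formula in place of Heron's (both give the triangle's area), and the result is only a Monte Carlo estimate:

```python
# Estimate P(area > 1/4) for a triangle with vertices uniform on the
# boundary of the unit square.
import random

random.seed(1)

def random_boundary_point():
    t, side = random.random(), random.randrange(4)
    return [(t, 0.0), (1.0, t), (t, 1.0), (0.0, t)][side]

def tri_area(p, q, r):
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

trials = 200_000
hits = sum(tri_area(random_boundary_point(), random_boundary_point(),
                    random_boundary_point()) > 0.25 for _ in range(trials))
print(hits / trials)  # estimated probability (comes out near 0.26)
```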
|probability|contest-math|geometric-probability|
0
When is cdf $F_{X_1+\dots+X_n}(c)$ of sum of iid zero mean random variables decreasing in sample size $n$?
Let $X_1, X_2, \dots$ be a sequence of i.i.d. random variables with mean zero (e.g., $N(0,1)$ ). Let $n > m$ and $c \geq 0$ . I want to show that $$ P\left(\sum_{i=1}^n X_i \leq c \right) \leq P \left( \sum_{i=1}^m X_i \leq c \right).$$ In view of existing concentration bounds like Hoeffding's inequality which scale with the length of the sequence, i.e. $n$ and $m$ , I would think that the above statement should hold. Edit: Since it was pointed out that this doesn't hold for specific cases where $n$ and $m$ are small and the distribution of $X_i$ is discrete, assume that $X_1, X_2, \dots$ are Gaussian and $n$ sufficiently large.
Here, I show that the inequality in the OP holds for the large class of log-concave distributions , including any affine transformation of some of the following well-known distributions: the normal distribution the exponential distribution the uniform distribution the logistic distribution the extreme value distribution the Laplace distribution the chi distribution the hyperbolic secant distribution, and many other distributions (see the link 1 ). The transformation needs to be such that the mean is zero, e.g. $$X_1,X_2,\dots \sim a \, \mathcal E (\lambda) -\frac{a}{\lambda} + \mathcal U(-b,b) + \mathcal N(0,\sigma^2)+ c\, \chi^2_k -c\, k $$ for any $a,b, c \in \mathbb R, \sigma^2>0 $ . Proof First consider the following two key properties for log-concave distributions 1 : If $X$ has a log-concave distribution $f_X$ , then its cdf $F_X(x)=\mathbb P (X \le x)$ is also log-concave, that is, $\log F_X(x)$ is a concave function on its support. The sum of two independent random variables with log-concave densities is again log-concave.
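For the Gaussian special case the claim can also be seen explicitly: the sum of $n$ iid $N(0,1)$ variables is $N(0,n)$ , so $P(\sum_{i\le n} X_i \le c) = \Phi(c/\sqrt n)$ , which is non-increasing in $n$ for $c \ge 0$ . A quick numeric check:

```python
# Phi(c / sqrt(n)) decreases in n for c >= 0 (Gaussian case of the claim).
import math

Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
c = 1.0
probs = [Phi(c / math.sqrt(n)) for n in range(1, 11)]
print(probs)  # monotonically decreasing toward Phi(0) = 0.5
```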
|probability|probability-theory|concentration-of-measure|
0
Can the Least Squares Method be expressed as a convolution with a kernel?
Consider a laser line position estimation by fitting using the Least Squares Method (LSM) and prove (or disprove) that it can be considered as a convolution with some function, finding the center by looking for the maximum (zero-crossing of the derivative). What is the smoothing function? The Least Squares Method (LSM) is defined as: $$\sum_i[S(x_i)-F(x_i;a,b,\ldots)]^2=\min,$$ where the fitting function is: $$F(x;y_0,A,x_0,w)=y_0+A\cdot g(x-x_c,w)$$ The fit program will adjust all parameters, but we are interested only in $x_c$ . Hint: change sums to integrals in the LSM description! I started by converting the LSM from sum to integral form: $$f(x_c)=\sum_i[S(x_i)-F(x_i;a,b,\ldots)]^2 \quad\longrightarrow\quad f(x_c)=\int(S(x)-F(x-x_c))^2\,dx$$ Since we are not interested in the other parameters (like the offset), I assumed that they are fitted correctly and thus ignored them, turning $F(x-x_c)$ directly into $g(x-x_c)$ . Then I expanded the binomial.
Yes, it is true under certain circumstances, and then it is at the heart of optimum radar signal processing; it was introduced by P. M. Woodward during WWII. It would be way too long to go through the whole issue, but I will highlight a few points here. A standard and excellent reference is Difranco & Rubin: Radar Detection, Prentice-Hall 1968, Chapter 7. The problem setup The simplest example: say you wish to estimate the range of a stationary target. You send out a signal $s(t)$ whose round trip delay (range) is $\tau$ , so your radar receives a noisy signal $\tilde y(t)=As(t-\tau) + \tilde n(t)$ where $\tilde n(t)$ is stationary receiver noise, normally distributed with uniform spectral density $\mathcal N_0$ . The multiplier $A$ is the round-trip attenuation the transmitted signal suffers, which for the sake of this note we assume to be known. The receiver therefore knows everything about the signal that can be known except for the delay $\tau$ , and our job is to make the best estimate of $\tau$ .
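The core identity behind this is that expanding $\int(S(x)-g(x-x_c))^2dx$ gives $\int S^2 + \int g^2 - 2\int S(x)g(x-x_c)dx$ , so with the energy terms fixed, minimizing the squared error is maximizing the cross-correlation. A discrete toy version (signal and template made up for the demo; in this windowed form the two criteria need not coincide in general, but both locate the embedded pulse):

```python
# arg-min of squared error vs arg-max of cross-correlation over the shift.
import random

random.seed(2)
g = [0.0, 1.0, 2.0, 1.0, 0.0]                 # template pulse
true_shift = 7
S = [0.0] * 20
for i, v in enumerate(g):                     # embed shifted pulse
    S[true_shift + i] += v
S = [s + 0.05 * random.gauss(0, 1) for s in S]  # add a little noise

shifts = range(len(S) - len(g) + 1)
sq_err = {k: sum((S[k + i] - v) ** 2 for i, v in enumerate(g)) for k in shifts}
corr = {k: sum(S[k + i] * v for i, v in enumerate(g)) for k in shifts}
print(min(sq_err, key=sq_err.get), max(corr, key=corr.get))  # same shift
```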
|linear-algebra|
0
Non-trivial linear combination of non-integrable functions is integrable (specific case).
$\textbf{Set-up.}$ Let $g$ be some $L^2$ function such that for $i+j = 2$ $$ \frac{x^iy^j}{g(x,y)}\in L^2(B), $$ where $B$ is some open ball centered at $(0,0)$ of some radius $\delta$ . Specifically, $$ \frac{x^2}{g(x,y)},\frac{xy}{g(x,y)}, \frac{y^2}{g(x,y)}\in L^2(B). $$ Moreover, $$ \frac{1}{g(x,y)},\frac{x}{g(x,y)},\frac{y}{g(x,y)}\not\in L^2(B) $$ for any ball $B$ . $\textbf{Claim.}$ If $$ \frac{c_1 x + c_2 y}{g(x,y)}\in L^2(B), $$ then $c_1 = c_2 = 0$ . $\textbf{Ideas.}$ If $g$ is the Euclidean-distance function, this result follows as one is able to convert to polar coordinates and things are nice. In this more abstract setting, it is much more unclear. My intuition is saying that this claim should be true as any linear combination of these functions is not contributing an additional "zero" that could control the blowup of $\frac{1}{g(x,y)}$ . In general, there are linear combinations of non-integrable functions that are integrable. For example, $$ \frac{x}{\log(x)},\ \frac{-1}{\log(x)} $$ are each non-integrable near $x=1$ , but their sum $\frac{x-1}{\log(x)}$ stays bounded there.
A counter example is as follows. Let $$g(x,y) = (x^2+y^2)^{\frac{1+\frac{2xy}{x^2+y^2}}{2}}.$$ Note that if we convert to polar coordinates, we have that $$\widetilde{g}(r,\theta) = r^{1+\sin(2\theta)}.$$ One can show that for $i+j = 2$ $$ \frac{x^iy^j}{g(x,y)}\in L^2(B), $$ whereas for $i+j = 1$ , we have that $$ \frac{x^iy^j}{g(x,y)}\not\in L^2(B). $$ Lastly, one can show (through some slightly tedious computations) that $$ \frac{x-y}{g(x,y)}\in L^2(B). $$
|real-analysis|integration|analysis|hilbert-spaces|
1
$T(n) = 2T(n/2) + \Theta(n)$ tight bound
This is an approximation of the average-case recurrence of quicksort. With the iteration method we end up with $T(n) = n T(1) + n \log n$ . We can easily say that this is $O(n \log n)$ , but the tight bound is requested. Is $nc + n \log n$ enough to show that it is $\Theta(n \log n)$ too, and how?
A function $f$ is in $\Theta(g)$ if and only if it is both in $\Omega(g)$ and $O(g)$ . As you already know, we have $nc + n \log n \in O(n \log n)$ , and clearly $nc + n \log n \in \Omega(n \log n)$ as well since $n \log n$ is a lower bound, provided that $c \geq 0$ , so we conclude that $nc + n \log n \in \Theta(n \log n)$ .
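A quick numeric illustration of why the $nc$ term is absorbed (taking $c = 5$ as an arbitrary constant for the demo):

```python
# The ratio (c*n + n*log n) / (n*log n) = 1 + c/log n tends to 1,
# consistent with c*n + n*log n being Theta(n log n).
import math

c = 5.0
ratios = [(c * n + n * math.log(n)) / (n * math.log(n))
          for n in (10, 10**3, 10**6, 10**9)]
print(ratios)  # decreasing toward 1 as n grows
```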
|asymptotics|recurrence-relations|
0
Probability that a triangle inscribed in a square comprises at least $\frac{1}{4}$ of the area of the square
Question: Suppose that points $P_1$ , $P_2$ , and $P_3$ are chosen uniformly at random on the sides of a square $T$ . Compute the probability that $$\frac{[\triangle P_1 P_2 P_3]}{[T]}>\frac{1}{4}$$ where $[X]$ denotes the area of polygon $X$ . Without loss of generality, I assumed the side length of the square to be $1$ . Because the question mentions $\frac{1}{4}$ of the square, I considered splitting the square into quadrants. It is obvious that all three points cannot lie in the same quadrant of the square. Similarly, there cannot be two points in the same quadrant because the area must be less than $\frac{1}{2} \cdot \frac{1}{2} \cdot 1=\frac{1}{4}$ . [EDIT: This is wrong] This means that all three vertices must lie in different quadrants in the square. From here, I considered cases: Case 1: Two of the points are on the same side, which occurs with probability $\frac{9}{16}$ . Case 2: All three points are on different sides, which occurs with probability $\frac{3}{8}$ . From here,
If I have a triangle in $\mathbb{R}^2$ with vertices at $(x_1,y_1)$ , $(x_2,y_2)$ , and $(x_3,y_3)$ , the area of the triangle is $$\frac{1}{2}\Biggl|\begin{array}{lll}x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1\end{array}\Biggr|,$$ where for a square matrix $A$ , $|A|$ denotes the absolute value of the determinant of $A$ . To avoid having to deal with the annoying $1/2$ , we will look at situations where $$\Biggl|\begin{array}{lll}x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1\end{array}\Biggr|\geqslant 1/2.$$ https://www.cuemath.com/geometry/area-of-triangle-in-determinant-form/ Everything is scale and translation invariant, so we can assume the square is in the plane with vertices at $(0,0)$ , $(1,0)$ , $(0,1)$ , and $(1,1)$ . We let $b,t,l,r$ denote the bottom, top, left, and right edges of the square. We let $pqr$ ( $p,q,r\in \{b,t,l,r\}$ ) denote a particular way for the vertices to land on the sides of the square. So $bbb$ means all vertices are on the bottom, $btl$ means one vertex on the bottom, one on the top, and one on the left, and so on.
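The determinant formula above, written out along the first row (a small helper for experimenting with the cases):

```python
# Area of a triangle via the 3x3 determinant |x_i y_i 1|.
def tri_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    return abs(det) / 2

print(tri_area((0, 0), (1, 0), (0, 1)))  # 0.5, half of the unit square
```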
|probability|contest-math|geometric-probability|
1
Probability that a triangle inscribed in a square comprises at least $\frac{1}{4}$ of the area of the square
Question: Suppose that points $P_1$ , $P_2$ , and $P_3$ are chosen uniformly at random on the sides of a square $T$ . Compute the probability that $$\frac{[\triangle P_1 P_2 P_3]}{[T]}>\frac{1}{4}$$ where $[X]$ denotes the area of polygon $X$ . Without loss of generality, I assumed the side length of the square to be $1$ . Because the question mentions $\frac{1}{4}$ of the square, I considered splitting the square into quadrants. It is obvious that all three points cannot lie in the same quadrant of the square. Similarly, there cannot be two points in the same quadrant because the area must be less than $\frac{1}{2} \cdot \frac{1}{2} \cdot 1=\frac{1}{4}$ . [EDIT: This is wrong] This means that all three vertices must lie in different quadrants in the square. From here, I considered cases: Case 1: Two of the points are on the same side, which occurs with probability $\frac{9}{16}$ . Case 2: All three points are on different sides, which occurs with probability $\frac{3}{8}$ . From here,
Here's a fully correct version of my solution that I tried to make in the original post (thanks to user469053 for providing the correct values for me to check against) Without loss of generality, consider the square with vertices $(0,0), (0,1), (1,0), (1,1)$ . We proceed using casework: All three vertices are on different sides of the square. This occurs with probability $\frac{3}{8}$ . Let the vertices be $(x_1,0)$ , $(0,y_1)$ , and $(x_2,1)$ . Using the determinant form for the area of a triangle, the area is given by $$A=\frac{1}{2} \begin{vmatrix} x_1 & 0 & 1 \\ 0 & y_1 & 1 \\ x_2 & 1 & 1 \end{vmatrix}=\frac{1}{2}x_1-\frac{1}{2}y_1 (x_1-x_2)$$ Assume that $x_1>x_2$ . Then, $$\frac{1}{2}x_1-\frac{1}{2}y_1 (x_1-x_2)>\frac{1}{4} \implies x_1-y_1 (x_1-x_2)>\frac{1}{2}$$ For convenience, replace $x_1$ with $y$ , $x_2$ with $x$ , and $y_1$ with $z$ . We now have the system of inequalities $\begin{cases} y-yz+xz>\frac{1}{2} \\ y>x \\ 0 \leq x,y,z \leq 1 \end{cases}$ . We now consider graphing this region.
|probability|contest-math|geometric-probability|
0
Limit $\lim_{n\to +\infty}{\frac{\left(n!\right)^2\cdot4^n\cdot n}{\left(2n\right)!}}$
I am currently trying to calculate the following limit of a sequence: $$\lim_{n\to +\infty}{\frac{\left(n!\right)^2\cdot4^n\cdot n}{\left(2n\right)!}}$$ I need it to prove that a series diverges, but I really can't find a way to solve this, any help would be greatly appreciated! (I really hope I'm not missing anything obvious!) Edit : You are all absolutely right, I apologize for the lack of context. What is actually going on is that working with a function series, in particular the power series: $$\sum_{n=1}^{\infty}{\frac{\left(n!\right)^2}{\left(2n\right)!}x^n}$$ I needed to find pointwise and total convergence $^1$ for it. To do that I tried to find the radius of convergence using D'Alembert's criterion $^2$ and I found the radius to be $\rho =4$ . Therefore my pointwise convergence would be for $\left|x\right|<4$ . At this point, I needed to check convergence in the points that coincide with the extremities of the convergence radius, so $x=4$ and $x=-4$ . I am currently working on $x=4$ .
So, I believe I have come to a working solution, mostly based on the answer provided by @RyszardSzwarc . He deserves the credit, I'll simply be expanding on what he wrote for it to be clearer. As he said in his answer, we can write $4^n$ as $\left(1+1\right)^{2n}$ . We can then use the binomial theorem: $$\left(x+y\right)^{n}=\sum_{k=0}^{n}{\binom{n}{k}x^{n-k}y^k}\text{, where }\binom{n}{k}=\frac{n!}{k!\left(n-k\right)!}$$ in our specific case, it will become: $$\left(1+1\right)^{2n}=\sum_{k=0}^{2n}{\binom{2n}{k}}$$ Considering that $0\leq n\leq 2n$ , and that, for this reason, $\binom{2n}{n}$ is one of the terms of our summation, we can get to the following conclusion: $$\sum_{k=0}^{2n}{\binom{2n}{k}}>\binom{2n}{n}\implies 4^n>\binom{2n}{n}$$ We did all of this for simply one reason: $$\binom{2n}{n}=\frac{\left(2n\right)!}{n!\left(2n-n\right)!}=\frac{\left(2n\right)!}{\left(n!\right)^2}\implies 4^n>\frac{\left(2n\right)!}{\left(n!\right)^2}$$ At this point, @RyszardSzwarc suggested to simply put this inequality to use.
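The two facts used above are easy to verify numerically (the growth rate in the comment follows from Stirling's formula and is noted only as context):

```python
# Check 4^n > C(2n, n), and that a_n = (n!)^2 * 4^n * n / (2n)!
# = 4^n * n / C(2n, n) grows without bound.
import math

for n in range(1, 20):
    assert 4 ** n > math.comb(2 * n, n)

a = [4 ** n * n / math.comb(2 * n, n) for n in range(1, 40)]
print(a[0], a[-1])  # increasing; a_n ~ n * sqrt(pi * n)
```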
|calculus|sequences-and-series|limits|factorial|divergent-series|
0
Prove that $(e^x-\ln y)^2+(x-y)^2\geq 2$
Prove that $(e^x-\ln y)^2+(x-y)^2\geq 2,\forall x\in\mathbb{R},\forall y>0$. When does the equality hold? I note that the equality holds only if $x=0,y=1$, but I don't know how to prove that without using functions with two variables. Any hint?
Use the following identity: for all $x\in\mathbb{R}$ and $y\in(0,\infty)$ , $$(e^x-\ln y)^2+(x-y)^2=(e^x-x)^2+(\ln y-y)^2\color{red}{+}2(x-\ln y)(e^x-y).\tag{1}$$ Due to the monotonicity of $e^x$ , we have $$(x-\ln y)(e^x-y)=(x-\ln y)(e^x-e^{\ln y})\geq0.\tag{2}$$ By $(1),(2)$ , we get (also using $e^x-x\geq1$ and $y-\ln y\geq1$ ) $$(e^x-\ln y)^2+(x-y)^2\geq(e^x-x)^2+(\ln y-y)^2\geq1+1=2.$$ When $x=0$ and $y=1$ , equality holds.
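Identity $(1)$ and the final bound can be spot-checked numerically (random sample ranges are my choice for the demo):

```python
# Verify (e^x - ln y)^2 + (x - y)^2 equals the right side of (1),
# and never drops below 2, at random points.
import math, random

random.seed(3)
min_val = float("inf")
for _ in range(1000):
    x = random.uniform(-3, 3)
    y = random.uniform(0.01, 10)
    lhs = (math.exp(x) - math.log(y)) ** 2 + (x - y) ** 2
    rhs = ((math.exp(x) - x) ** 2 + (math.log(y) - y) ** 2
           + 2 * (x - math.log(y)) * (math.exp(x) - y))
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))  # identity (1)
    min_val = min(min_val, lhs)
print(min_val)  # never below 2; the minimum 2 is attained at (x, y) = (0, 1)
```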
|calculus|inequality|
0
Extension Problem from Axler's "Linear Algebra Done Right," 1.C, Problem 13. (p25)
How can one prove/disprove this argument for an arbitrary number $n$ of subspaces? Prove or disprove that for subspaces $U_1, U_2, ... U_n$ of a vector space $V$ over $F$ , $$U_1 \cup U_2 \cup... U_{n-1} \cup U_n$$ is a subspace of $V$ if and only if $U_1 \cup U_2 \cup... U_{n-1} \cup U_n$ is $U_j$ for some $j = 1,2,3, ..., n$ . A proof of this statement should involve induction over $n$ , but this approach eludes my intuition. Or is there a counterexample that disproves the claim? Note: This is an extension of problems 12, 13 from Chapter 1.C of Axler's Linear Algebra Done Right (4th ed). Here is problem 13: Prove that the union of three subspaces of V is a subspace of V if and only if one of the subspaces contains the other two. (This exercise is surprisingly harder than Exercise 12, possibly because this exercise is not true, if we replace F with a field containing only two elements.) Problem 13 is an extension of the previous problem, which involves the union of two subspaces, inst
The statement is false. Pick any odd prime $p$ , and any $n \ge 2$ . The finite field $\Bbb{F}_p$ admits an inverse for $2$ , since $p$ is odd. The vector space $\Bbb{F}^n_p$ is finite, containing $p^n$ points, and we can express $\Bbb{F}^n_p$ as the (finite) union of all one-dimensional subspaces. That is, $$\Bbb{F}^n_p = \bigcup_{x \in \Bbb{F}^n_p} \operatorname{span}\{x\},$$ but since $n \ge 2$ , none of these subspaces are equal to $\Bbb{F}^n_p$ .
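The smallest instance of this counterexample can be enumerated outright (taking $p=3$ , $n=2$ for the demo):

```python
# F_3^2 is the union of its four one-dimensional subspaces,
# yet no single line equals the whole space.
from itertools import product

p = 3
points = set(product(range(p), repeat=2))
spans = set()
for x in points - {(0, 0)}:
    span = frozenset(((k * x[0]) % p, (k * x[1]) % p) for k in range(p))
    spans.add(span)

union = set().union(*spans)
print(len(spans), union == points)  # 4 distinct lines covering all 9 points
```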
|linear-algebra|
1
What is the outer automorphism group of the Pauli group?
Let $P_n$ be the Pauli group on $n$ qubits. The order of the group is $|P_n| = 2^{2n+2}$ . I believe the outer automorphism group is closely related to the symplectic group $Sp(2n,2)$ but I don't know the exact relation. Direct calculations for small $n$ seem to show that $|Out(P_n)| = 2 |Sp(2n,2)|$ . Does anyone know how this can be shown in general, or a reference on the subject?
It looks like some work was done a while ago by Michael Planat on the outer automorphism groups of the Pauli group [1]. He has some other resources that may be good to look at as well [2], and you could also look on Google Scholar for other papers / collaborators to start a more thorough search. Sources Planat, Michel, and Maurice Kibler. "Unitary reflection groups for quantum fault tolerance." Journal of computational and theoretical nanoscience 7.9 (2010): 1759-1770. Link here “Reflection Groups for Quantum Computing” slide deck, Michael Planat. Link here
|group-theory|finite-groups|automorphism-group|
1
Supremum and infimum for $n^{1/n}$
I have the set $A=\{\sqrt[n]{n}: n \in \mathbb{N}\}$ . I can see that $\inf A$ is $1$ as $\sqrt[1]{1}=1$ but I am having trouble with $\sup A$ . I know that it is $\sqrt[3]{3}$ because it only decreases from then onwards. But I have trouble proving that is $\sqrt[3]{3}$ . I have tried induction but I can't get it to work. Any strategies appreciated.
The derivative of $f(x)=x^{1/x}$ is equal to $x^{1/x-2}(1-\log x)$ . It is positive when $0<x<e$ and negative when $x>e$ , and it is $0$ if $x=e$ . So $x=e$ is the global maximum. It means that we only need to check $n=2$ and $3$ . Since $\sqrt[3]3>\sqrt 2$ , we have that $$\sup A = \sqrt[3]3.$$
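A brute-force check over a large range of $n$ confirms the maximum at $n=3$ :

```python
# n^(1/n) is maximized at n = 3 over the integers.
vals = [(n ** (1.0 / n), n) for n in range(1, 10_000)]
best_val, best_n = max(vals)
print(best_n, best_val)  # 3, 3^(1/3) ~ 1.4422
```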
|real-analysis|
0
A question about the proof of the open mapping theorem (2.11) in Walter Rudin's Functional Analysis
See pages 65-66 of this pdf . I actually have two questions about the proof. The first is: in proving the first part of (2), why does $\overline{\Lambda (V_2)}$ contain a neighborhood of $0$ ? I think it only proves that $\overline{\Lambda (V_2)}$ has a nonempty interior. It is not quite obvious to me why this interior must contain $0$ . My second question is: in proving the second part of (2), why does $y_{m+1} \to 0$ ? I don't quite see how this can be inferred from the continuity of $\Lambda$ .
For your 2nd question, I think " $y_n \rightarrow 0$ " can be obtained directly from the precondition given by "Assume $n\ge 1$ and $y_n$ has been chosen in $\overline{\Lambda(V_n)}$ ", even without considering its follow-up construction steps of $x_n$ . $y_n \in \overline{\Lambda(V_n)}\Rightarrow y_n \rightarrow 0$ : Since $Y$ is a T.V.S., for any neighborhood $U$ of $0$ in $Y$ , we have a neighborhood $W$ of $0$ in $Y$ s.t. $\overline{W}\subset U$ . Since $\{V_n\}$ is a local base for the topology of $X$ , and $V_0$ is a bounded member, there exists $N>0$ s.t. $V_0\subset 2^n \Lambda^{-1}(W)$ for $n>N$ , i.e. $V_n\subset \Lambda^{-1}(W)$ , which leads to $\overline{\Lambda(V_n)}\subset \overline{W}\subset U$ . Since $y_n\in \overline{\Lambda(V_n)}$ , then $y_n\in U$ . This tells us that for any neighborhood $U$ of $0$ in $Y$ , there exists $N>0$ s.t. $y_n \in U$ for all $n>N$ , i.e. $y_n\rightarrow 0$ . Actually, I think the phrase "Assume $n\ge 1$ and $y_n$ has been chosen in $\overline{\Lambda(V_n)}$ " is somehow misleading.
|functional-analysis|topological-vector-spaces|open-map|
0
Trouble understanding the relation between instantaneous velocity, rate of change and derivatives
The rate of change of position with respect to time is velocity. But we measure two common velocities, average velocity over an interval of time and instantaneous velocity. Now, I understand average velocity as a rate of change. I believe it is the slope of the secant line whose x values are the values across which we are computing the average. My doubt is, how is the instantaneous velocity a rate of change? If you compute the rate of change at an instant (i.e. at a point), you'll see that the x axis is not changing. While discussing this with another group, they told me to use the definition of limits. The instantaneous velocity is $\Delta{x}/\Delta{t}, \Delta{t} \rightarrow 0$ . But that just defines the tangent (which is the derivative) at some point, t. I still don't see the relation between instantaneous velocity and rate of change. Regarding my prerequisite knowledge, I have a Masters in Engineering. Therefore I understand the formulae of calculus quite well. I just never bothered to examine the underlying intuition.
My doubt is, how is the instantaneous velocity a rate of change? If you compute the rate of change, at an instant (i.e at an point), you'll see that the x axis is not changing. From what I can gather, it's the use of instantaneous (instantaneous velocity) that's giving you a hard time. You are taking instantaneous to mean "at one instant", say, $t = 5 \text{s}$ . And this is correct. However, velocity, by definition, has to do with change. And a change requires a non-zero interval of time. So, in an attempt to resolve this conflict, let me give you an example. Say you are driving in one direction along a perfectly straight line. Your speed is not constant though. Now, let's consider your motion from $t = 0 \text{ s}\;$ to $\;t = 60 \text{ s}$ . We'll assume you cover $1$ mile in this duration. Now, what's your average velocity (magnitude only)? $60$ miles per hour, right? It's easy. But what was your velocity at $t = 5 \text{ s}$ ? Now, we are talking about instantaneous velocity. Was it the same? Not necessarily, since your speed was not constant.
|calculus|derivatives|
1
Prove that any integer divisible by 3 can be written as a sum of cubes of four integers.
My book says only this: $$6k = (k+1)^3 + (k-1)^3 + (-k)^3 + (-k)^3$$ $$6k - 15 = (2k+1)^3 + (-2k)^3 + (k-2)^3 + (-k-2)^3$$ I want to know how it got to these expressions. Using trial and error will take a lot of time.
The question is about expressing an integer as the sum of four cubes. You asked: I want to know how it got to these expressions. The key is that cubes can be positive or negative and the difference of consecutive cubes is quadratic. For example, $$ (k+1)^3 - k^3 = 3k^2 + 3k + 1. \tag1 $$ Furthermore, the difference of two consecutive differences is linear. $$ ((k\!+\!1)^3\!-\!k^3)\!-\!(k^3\!-\!(k\!-\!1)^3)\!=\! (3k^2\!+\!3k\!+\!1)\!-\!(3k^2\!-\!3k\!+\!1)\!=\!6k. \tag2 $$ This implies the first expression you quoted: $$ 6k \!=\! (k\!+\!1)^3\!-\!k^3\!-\!k^3 + (k\!-\!1)^3\!=\! (k\!+\!1)^3\!+\!(k\!-\!1)^3\!+\!(-k)^3\!+\!(-k)^3. \tag3 $$ Notice that $\,6k\,$ is always even, that the cube of an odd number is odd and the cube of an even number is even. Thus, to express an odd number as the sum of four cubes, either exactly one of the cubes must be odd or exactly three of the cubes must be odd. So in the first equation replace $\,k\,$ with $\,2k\,$ to get $$(2k+1)^3 - (2k)^3 = 12k^2 + 6k + 1.
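The algebraic steps $(1)$ – $(3)$ , and the sign-corrected second identity $6k-15=(2k+1)^3+(-2k)^3+(k-2)^3+(-k-2)^3$ , can all be verified mechanically:

```python
# Check the difference-of-cubes relations and both four-cube identities
# over a range of k (positive and negative).
for k in range(-50, 51):
    assert (k + 1) ** 3 - k ** 3 == 3 * k * k + 3 * k + 1          # (1)
    assert (2 * k + 1) ** 3 - (2 * k) ** 3 == 12 * k * k + 6 * k + 1
    assert 6 * k == (k + 1) ** 3 + (k - 1) ** 3 + (-k) ** 3 + (-k) ** 3
    assert (6 * k - 15
            == (2 * k + 1) ** 3 + (-2 * k) ** 3
            + (k - 2) ** 3 + (-k - 2) ** 3)
print("all identities hold")
```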
|algebra-precalculus|elementary-number-theory|
0
How to solve the eigenvalue problem $y'' + \lambda y=0$, $y'(-L) = y'(L)=0$
How do I solve the eigenvalue problem $y'' + \lambda y=0$ , $y'(-L) = y'(L)=0$ ? My attempt: Case 1: $\lambda=0$ . If $\lambda=0$ , then the BVP becomes $y''(x)=0$ , $y'(-L)=y'(L)=0$ . The general solution to this differential equation is $y(x)=Ax+B$ . Taking the derivative gives $y'(x)=A$ , and the boundary condition implies that $A=0$ , but $B$ can be arbitrary. This means that $\mu_0=0$ must be an eigenvalue and the function $$y_0(x)=B,\quad B\text{ is a constant}$$ is its corresponding eigenfunction. Case 2: $\lambda<0$ . Let $\lambda = -p^2<0$ . The BVP becomes $y''(x)-p^2y(x)=0$ , $y'(-L)=y'(L)=0$ . The solution to this DE is $y(x)=A\cosh(px)+B\sinh(px)$ . Taking the derivative gives $y'(x)=Ap\sinh(px)+Bp\cosh(px)$ . Plugging in $-L$ and $L$ gives \begin{align*} -A\sinh(pL) + B\cosh(pL) &= 0\\ A\sinh(pL) + B\cosh(pL) &= 0 \end{align*} Solving for $A$ and $B$ gives $A=B=0$ , which is a trivial solution. Case 3: $\lambda>0$ . Let $\lambda = p^2 > 0$ . The BVP becomes $y''(x)+p^2y(x)=0$ , $y'(-L)=y'(L)=0$ .
You can eliminate some cases by proper normalizations. Start by solving $$ y''+\lambda y = 0,\;\; y'(-L)=0, y(-L)=1. $$ Adding the condition $y(-L)=1$ eliminates the trivial solution, and eliminates an ambiguity in the solution by normalizing it at $-L$ . The unique solution is $$ y_{\lambda}(x)=\cos(\sqrt{\lambda}(x+L)). $$ Then $\lambda$ is an eigenvalue iff $y_{\lambda}'(L)=0$ ; so the eigenvalue equation is $$ \sqrt{\lambda}\sin(2L\sqrt{\lambda})=0. $$ Therefore, the eigenvalues are $$ \lambda=n^2\pi^2/4L^2,\;\; n=0,1,2,3\cdots. $$ The corresponding normalized eigenfunction solutions are $$ y_0(x)=1,\\ y_n(x)=\cos\left(\frac{n\pi}{2L}(x+L)\right),\;\; n=1,2,3,\cdots . $$ Check: These solutions do satisfy $y_n'(-L)=0,\;y_n'(L)=0$ . And they are solutions of $y_n''+\lambda y_n=0$ , where $\lambda=n^2\pi^2/4L^2$ , as required.
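The check at the end can be automated with numerical derivatives; a small sketch (the value of $L$ is an arbitrary choice of mine):

```python
import math

L = 1.7    # arbitrary half-interval length (assumption for this check)
h = 1e-5   # step for the finite-difference derivatives

def y(n, x):
    """Normalized eigenfunction y_n(x) = cos(n*pi*(x+L)/(2L))."""
    return math.cos(n * math.pi * (x + L) / (2 * L))

def d1(f, x):
    """Central first derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x):
    """Central second derivative."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

for n in range(4):
    lam = (n * math.pi) ** 2 / (4 * L ** 2)
    f = lambda x, n=n: y(n, x)
    # boundary conditions y'(-L) = y'(L) = 0
    assert abs(d1(f, -L)) < 1e-6 and abs(d1(f, L)) < 1e-6
    # the ODE y'' + lam*y = 0 at an interior point
    x0 = 0.3
    assert abs(d2(f, x0) + lam * f(x0)) < 1e-3
```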
|ordinary-differential-equations|partial-differential-equations|
0
By this definition, can a lambda term be of 'infinite length'?
In untyped lambda calculus, if we define (as is common) the set $\Lambda$ of all $\lambda$-terms with: \begin{align} &(1) & \text{if } u \in V, \text{then } u \in \Lambda \\ &(2) & \text{if } M \text{ and } N \in \Lambda, \text{then } (MN) \in \Lambda \\ &(3) & \text{if } u \in V \text{ and } M \in \Lambda, \text{then } (\lambda u. M) \in \Lambda \\ \end{align} or in short using a grammar $$ \Lambda = V \mid (\lambda V.\Lambda) \mid (\Lambda \Lambda),$$ can we then prove that all $\lambda$-terms are 'finite'? I ask because I have never seen talk about this in my books, so is it maybe due to the informal definitions? Or because it forces one to add more mathematics than wanted into the book in order to be able to create such a proof? Or because this is not the topic for introductory books? Or for some other reason?
It's entirely possible to have infinite terms conforming to the syntax you stated. An important class of such terms are the Rational Infinite terms. They are given by the property that, though they may be infinite, they have only a finite number of distinct subterms. For instance, $D_3 D_3$ can be thought of as having the rational infinite lambda term $⋯ D_3 D_3 D_3 D_3$ as its normal form, where $D_3 = λx·x x x$ . In particular, the combinator reduction engine Combo , when given $D_3 D_3$ will recognize it as a "cyclic term", recognizing that it's contained as a subterm in its own reduction as $D_3 D_3 → ⋯ → D_3 D_3 D_3$ and stop reduction. It's entirely possible to modify "Combo" so that (a) it will actually produce the rational term as a normal form, rather than blocking reduction, and (b) it can process rational infinite lambda terms. I just never got around to making the upgrade (yet). A finite syntax for rational infinite terms requires a way to explicitly refer to embedded subterms.
|lambda-calculus|
0
Show that if $x>1$, $\log_e\sqrt{x^2-1}=\log_ex-\dfrac{1}{2x^2}-\dfrac{1}{4x^4}-\dfrac{1}{6x^6}-\cdots$
Show that if $x>1$ , $\log_e\sqrt{x^2-1}=\log_ex-\dfrac{1}{2x^2}-\dfrac{1}{4x^4}-\dfrac{1}{6x^6}-\cdots$ $\log_e\sqrt{x^2-1}=\dfrac{1}{2}\log_e(x^2-1)=\dfrac{1}{2}[\log_e(x+1)+\log_e(x-1)]$ I know: $\log_e(x+1)=x-\dfrac{x^2}{2}+\dfrac{x^3}{3}- \cdots$ $\log_e(1-x)=-x-\dfrac{x^2}{2}-\dfrac{x^3}{3}- \cdots$ How would I get $\log_e(x-1)$ ?
Here is the solution. $\log_e \sqrt{x^2 - 1} - \log_e x = \frac{1}{2}\log_e \left({1 - \frac{1}{x^2}}\right)$ . Now use the Taylor series expansion of $\log_e {(1 + x)}$ and replace $x$ by $\frac{-1}{x^2}$ .
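The claimed series identity is easy to test numerically (the truncation depth below is my choice):

```python
import math

def lhs(x):
    """log(sqrt(x^2 - 1)) for x > 1."""
    return math.log(math.sqrt(x * x - 1))

def rhs(x, terms=200):
    """log(x) - sum_{k>=1} 1/(2k x^(2k)), truncated after `terms` terms."""
    return math.log(x) - sum(1 / (2 * k * x ** (2 * k)) for k in range(1, terms + 1))

# the two sides agree to machine precision for x > 1
for x in (1.5, 2.0, 5.0):
    assert abs(lhs(x) - rhs(x)) < 1e-12
```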
|sequences-and-series|logarithms|
1
$\sqrt {x + \sqrt {x + \sqrt {x + \sqrt {x + \sqrt {x + \sqrt {x}}}}}}$ as composition of functions
Edit 1 : I do NOT want to differentiate; the other question is about differentiation, this question is about function composition, NOT differentiation . Looking at differentiate $\sqrt {x + \sqrt {x + \sqrt {x + \sqrt {x + \sqrt {x + \sqrt {x}}}}}}$ (twice deleted, again) it seemed that if it can be written as a composition of functions then it could be chain (saw) ruled. Anne Bauval's answer effectively used a sequence of functions to solve the problem. It could also be a good example of the chain rule if expressed as a composition of functions. With $f(x)=\sqrt {x}$ and $g(x)=x^2+x$ we get $$f \circ g \circ f(x)= \sqrt {x + \sqrt {x} }$$ but this is useless after that. My question: is there a simple function composition to express $\sqrt {x + \sqrt {x + \sqrt {x + \sqrt {x + \sqrt {x + \sqrt {x}}}}}}$ ? PS: I can differentiate the expression by using the chain rule implicitly, but I am looking to use the chain rule explicitly.
This can be very trivially composed of one function with itself if you take a multi variable function. Let $g(x,y) = \sqrt{x + y}$ Then $g(x,g(x,(g(x,g(x,g(x,g(x,0)))))))$ . If all you were looking for is the composition then this trivially solves it with the square root of a linear function but again, it is a function of two inputs, even though the y is a “dummy” variable such that we only plug in g and 0 into it, never actually define y as a variable.
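A quick sketch of the two-argument composition in Python (my transcription of the answer's $g$):

```python
import math

def g(x, y):
    """g(x, y) = sqrt(x + y); composing g with itself in the y slot builds the tower."""
    return math.sqrt(x + y)

def tower(x, depth):
    """sqrt(x + sqrt(x + ... + sqrt(x))) with `depth` square roots."""
    acc = 0.0                 # the innermost "dummy" argument is 0
    for _ in range(depth):
        acc = g(x, acc)
    return acc

# depth 6 reproduces the expression in the question; compare against a
# direct transcription of the nested radical
val = tower(2.0, 6)
direct = math.sqrt(2 + math.sqrt(2 + math.sqrt(2 + math.sqrt(2 + math.sqrt(2 + math.sqrt(2))))))
assert abs(val - direct) < 1e-12
```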
|algebra-precalculus|
1
Understanding example of weak convergence does not imply strong convergence
Consider the sequence $\left\{e_n\right\}_{n \in \mathbb{N}} \subset \ell^p(\mathbb{F}), 1 \leq p \leq \infty$ , of unit vectors. Then $\left\|e_n\right\|_p=1$ for all $n \in \mathbb{N}$ , and $\left\{e_n\right\}_{n \in \mathbb{N}}$ is not a Cauchy sequence for any $p$ . For $1 < p < \infty$ , we can now use the representation $\ell^p(\mathbb{F})^* \cong \ell^q(\mathbb{F})$ : for every $x^* \in$ $\ell^p(\mathbb{F})^*$ , there exists a $y \in \ell^q(\mathbb{F})$ such that $$ \left\langle x^*, e_n\right\rangle_{\ell^p}=\sum_{k=1}^{\infty} y_k\left(e_n\right)_k=y_n \rightarrow 0, $$ since $\|y\|_q<\infty$ implies that $\left\{y_k\right\}_{k \in \mathbb{N}}$ is a null sequence. Hence $e_n \rightharpoonup 0$ in $\ell^p(\mathbb{F})$ for $1 < p < \infty$ , and therefore also $e_n \rightharpoonup^* 0$ by reflexivity (if $X$ is reflexive, then weak convergence in $X$ and weak-* convergence in $X^{**}$ coincide). Since $\ell^1(\mathbb{F})$ is not reflexive, we have a choice here: if we consider $e_n \in \ell^1(\mathbb{F})$ via the i
Assume that $e_n \rightharpoonup u$ in $\ell^1$ where $u\in\ell^1$ . Case 1: $u\ne 0$ . Then there exists some $n_0$ such that $u_{n_0}\ne 0$ . Let $F_1$ be the functional on $\ell^1$ given by $F_1(v)=v_{n_0}$ . It is bounded because $|v_{n_0}|\le\|v\|_1$ for all $v\in \ell^1$ . We have $F_1(e_n)=0$ for all $n>n_0$ so $\lim_{n\to+\infty}F_1(e_n)=0$ . On the other hand $F_1(u)=u_{n_0}\ne 0$ . This contradicts $e_n \rightharpoonup u$ . Case 2: $u=0$ . Let $F_2$ be the functional on $\ell^1$ given by $F_2(v)=\sum_{k=1}^\infty v_k$ . This is also bounded because $|\sum_{k=1}^\infty v_k|\le \sum_{k=1}^\infty |v_k|=\|v\|_1$ . We have $F_2(e_n)=1$ for all $n$ , but $F_2(u)=F_2(0)=0$ , contradicting $e_n \rightharpoonup u$ . In either case there's a contradiction to $e_n \rightharpoonup u$ so the sequence $(e_n)_n$ does not converge in $\ell^1$ . Schur's theorem (from 1921) proves that weak convergence of a sequence in $\ell^1$ is equivalent to strong convergence of the same sequence. This does not hold in $\ell^p$ for $1<p<\infty$ , where the unit vectors converge weakly but not strongly.
|functional-analysis|linear-transformations|weak-convergence|strong-convergence|
0
what is the definite formula for an exponent?
I'm attempting to code math formulas from square 1, where I use addition to make subtraction, multiplication, division, etc. But that brought me to exponents. I felt it was simple enough, so I smacked in a repeat and a self-multiplying script, and I got what I was looking for. But when I got to things like 2^0, I got 0, because variable=(2)*(2) happening 0 times = 0 and not the usual 1. Then negatives started becoming an issue for the script's complexity. While I'd like to just fix them with more scripting, I want to find the definite formula for this so as to keep things simple. What is this formula?
Just start the recursion at 0, e.g.:

def expo(x, y):
    """Computes x^y for integers x and y"""
    if y == 0:
        return 1
    elif y < 0:
        # thank you @Soham Saha for showing how to easily extend
        # this algorithm for possibly negative y
        return 1 / expo(x, -y)
    else:
        return x * expo(x, y - 1)
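Restating the function so the snippet is self-contained, the edge cases behave as intended:

```python
def expo(x, y):
    """Computes x^y for integers x and y by recursion."""
    if y == 0:
        return 1
    elif y < 0:
        # negative exponents via the reciprocal
        return 1 / expo(x, -y)
    else:
        return x * expo(x, y - 1)

assert expo(2, 0) == 1        # the base case that plain repeated multiplication misses
assert expo(2, 5) == 32
assert expo(2, -3) == 0.125
```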
|computer-science|recreational-mathematics|
0
How to generalize a sequence of terms/patterns in this case
$y_{0}=0$ $y_{1}=\frac{-1}{2}x^2$ $y_{2}=\frac{-1}{2}x^2 - \frac{1}{10}x^5$ $y_{3}=\frac{-1}{2}x^2 - \frac{1}{10}x^5- \frac{1}{80}x^8$ $y_{4}=\frac{-1}{2}x^2 - \frac{1}{10}x^5- \frac{1}{80}x^8- \frac{1}{880}x^{11}$ Now I am trying to figure out a general pattern. I can see that the powers follow an arithmetic sequence: 3 is added each time. The denominator is the current power multiplied by its previous power. So for example $10 = 5 \times 2$ and $80 = 8 \times 5 \times 2$ . My problem is that I am not sure how to generalise this. I am thinking $\frac{1}{\prod^{n}_{k = 1} (3k-1)}\times x^{3n-1}$ . But not sure if this is correct? How can I prove by induction in this case? I have not done anything with the $\prod$ symbol before. EDIT: Thank you @mathlove for the detailed and thoughtful answer. For the proving part, I forgot to write up the whole problem. But here it is: This problem is about applying the method of successive approximations $\{\varphi_0(x)=0, \varphi_1(x), \varphi_2(x), \dots, \varphi_n(x)\}$ to the in
Your thinking is correct. All the $\prod$ symbol means is successive multiplication. You've used that symbol implicitly without realizing it many times before. For example $x^n = \prod_{k = 1}^n x$ or $n! = \prod_{k = 1}^n k$ . It is the same as a $\sum$ , but for multiplication instead of addition. And yes, from those patterns it seems to be exactly what you said: you start with $0$ and subtract $\frac{x^2}{2}$; then for every successive term you add $3$ to the power of $x$ and multiply the new power of $x$ into the previous denominator, and that gives the last part of your next iteration.
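The pattern can be spot-checked with `math.prod` (indices as I read them off the terms: the $m$-th term is $x^{3m-1}/\prod_{k=1}^{m}(3k-1)$):

```python
from math import prod

def term_denominator(m):
    """Denominator of the m-th term: product of (3k - 1) for k = 1..m."""
    return prod(3 * k - 1 for k in range(1, m + 1))

# reproduce the denominators 2, 10, 80, 880 from y_1..y_4
assert [term_denominator(m) for m in range(1, 5)] == [2, 10, 80, 880]
# and the exponents 3m - 1 -> 2, 5, 8, 11
assert [3 * m - 1 for m in range(1, 5)] == [2, 5, 8, 11]
```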
|sequences-and-series|algebra-precalculus|induction|
0
Logistic differential equation to model population
Problem Description: The population of the world was about 5.3 billion in 1990. Birth rate in the 1990s ranged from 35 to 40 million per year and death rates ranged from 15 to 20 million per year. Let's assume that the carrying capacity for world population is 100 billion. Write the logistic differential equation for these data. (Because the initial population is small compared to the carrying capacity, you can take k to be an estimate of the initial relative growth rate.) My calculation: Because it's a logistic model and the carrying capacity is 100 billion, I wrote the differential equation as: $\frac{dy}{dx} = ky(1-\frac{y}{100})$ 100 denotes 100 billions for the carrying capacity. My question: How can we know the value of k?
Problem Description: The population of the world was about 5.3 billion in 1990. Birth rate in the 1990s ranged from 35 to 40 million per year, and death rates ranged from 15 to 20 million per year. Let's assume that the carrying capacity for world population is 100 billion. Logistic Differential Equation: The logistic differential equation for this scenario is given by: $ \frac{dy}{dx} = ky\left(1 - \frac{y}{100 \text{ billion}}\right) $ Here, $ y $ represents the population, $ k $ is the growth rate constant, and $ 100 \text{ billion} $ is the carrying capacity. Determining the Value of $ k $ : To determine the value of $ k $ , we use the initial condition that the population in 1990 was about 5.3 billion. Let $ y(1990) = 5.3 \text{ billion} $ . Substituting this into the differential equation: $ \frac{dy}{dx} = k \times 5.3 \times 10^9 \left(1 - \frac{5.3 \times 10^9}{100 \times 10^9}\right) $ To fully determine $ k $ , additional information is needed, such as population growth or death-rate data at another point in time.
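One way to read the hint in the problem statement (my own estimate, not part of the quoted answer): since $y$ is small relative to the carrying capacity, $k \approx \frac{1}{y}\frac{dy}{dt}$, and the birth/death figures bound that initial relative growth rate:

```python
# Rough bounds for k from the problem's own figures, treating k as the
# initial relative growth rate (net growth / population); units: billions, per year.
pop_1990 = 5.3
net_low = (35 - 20) / 1000    # slowest net growth: 15 million/yr
net_high = (40 - 15) / 1000   # fastest net growth: 25 million/yr

k_low, k_high = net_low / pop_1990, net_high / pop_1990
# k lands somewhere around 0.003-0.005 per year
assert 0.002 < k_low < k_high < 0.005
```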
|calculus|ordinary-differential-equations|
0
Using reduced sample space for this question on biased coins
I understand that this question has been asked before on this forum but, my question is not on how to do the question but on why my approach using reduced sample space fails. Consider this question, also asked here : When coin 1 is flipped, it lands on heads with probability .4; when coin 2 is flipped, it lands on heads with probability .7. One of these coins is randomly chosen and flipped 10 times. (a) What is the probability that the coin lands on heads on exactly 7 of the 10 flips? (b) Given that the first of these 10 flips lands heads, what is the conditional probability that exactly 7 of the 10 flips land on heads? For part (b), my initial approach was just to disregard the bias of the coins, and just calculate the probability that 6 of the remaining 9 tosses are heads, so the conditional probability is just $\frac{9\choose{6}}{2^9}$ because in the reduced sample space of 9 tosses, we want 6 of the tosses to be heads, the remaining 3 to be tails. Is there a reason why this approach fails?
... my question is not on how to do the question but on why my approach using reduced sample space fails. Your approach fails because you are not using the information that the first toss landed heads (except to count 1 heads). Here's why that is a problem. Coin 2 is significantly more likely to land heads compared to coin 1. So if the first toss landed heads, it's more likely (not certain though) that coin 2 is being used. You are not using this information at all. Instead you are using the probability of landing heads for each of the remaining tosses to be $0.5$ , which is simply not true.
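To see the size of the effect, here is a quick Bayesian computation (my own sketch; coin probabilities taken from the problem):

```python
from math import comb

def cond_prob_seven_given_first_head(ps=(0.4, 0.7)):
    """P(exactly 7 of 10 heads | first flip heads), coin chosen uniformly."""
    # numerator: P(first flip heads AND 6 heads among the remaining 9), per coin
    num = sum(0.5 * p * comb(9, 6) * p ** 6 * (1 - p) ** 3 for p in ps)
    den = sum(0.5 * p for p in ps)            # P(first flip heads)
    return num / den

exact = cond_prob_seven_given_first_head()    # about 0.197
naive = comb(9, 6) / 2 ** 9                   # the reduced-sample-space guess, 0.1640625
assert abs(exact - naive) > 0.01              # the two answers genuinely disagree
```

The gap comes exactly from the posterior shift toward coin 2 after seeing the first head.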
|probability|
1
Series of $\, _2F_1\left(-\frac{1}{n},\frac{1}{n};\frac{1}{n}+1;-1\right)$
I am trying to find the series expansion of this function: $$f(n)=\, _2F_1\left(-\frac{1}{n},\frac{1}{n};\frac{1}{n}+1;-1\right)\text{ at }\; \infty$$ This comes from finding an asymptotic series for this integral: $$I(n)=\int_{0}^{1}\sqrt[n]{1+x^n}dx$$ By using Mathematica, I could find $$I(n)=1+\frac{\pi^2}{12n^2}-\frac{5\zeta(3)}{8n^3}+\frac{x}{n^4}+O\left(\frac{1}{n^5}\right)$$ But I can't find what $x$ equals. So, I want to ask how to get that asymptotic series, and can we use an elementary method to find an asymptotic series for this integral? Thank you for your time and your help.
By performing the change of integration variables from $x$ to $t$ via $x^n = t$ , followed by integration by parts, we arrive at $$ \int_0^1 {(1 + x^n )^{1/n} \mathrm{d}x} = \frac{1}{n}\int_0^1 {(1 + t)^{1/n} t^{1/n - 1} \mathrm{d}t} = 2^{1/n} - \frac{1}{n}\int_0^1 {(1 + t)^{1/n - 1} t^{1/n} \mathrm{d}t}. $$ Now, observe that $$ 2^{1/n} = \sum\limits_{k = 0}^\infty {\frac{{\log ^k 2}}{{k!}}\frac{1}{{n^k }}} $$ and $$ (1 + t)^{1/n - 1} t^{1/n} = \sum\limits_{k = 0}^\infty {\frac{1}{{k!}}\frac{{\log ^k ((1 + t)t)}}{{1 + t}}\frac{1}{{n^k }}} $$ for any $n \in \mathbb{Z}^+$ and $0 < t \le 1$ . Consequently, we can expand $I(n)$ as $$ I(n) = 1 + \sum\limits_{k = 1}^\infty \frac{a_k}{n^k} $$ for any $n \in \mathbb{Z}^+$ , with $$ a_k=\frac{\log ^k 2}{k!} - \frac{1}{(k - 1)!}\int_0^1 \frac{\log ^{k - 1} ((1 + t)t)}{1 + t} \mathrm{d}t. $$ The coefficients $a_k$ can be represented using polylogarithms and values of the Riemann zeta function, thus yielding the coefficients as provided in the comments.
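The coefficient formula can be sanity-checked numerically; $a_1$ should vanish and $a_2$ should reproduce the $\pi^2/(12n^2)$ term quoted in the question (plain midpoint rule; the log singularity at $t=0$ is integrable):

```python
import math

def midpoint(f, a, b, n=200_000):
    """Composite midpoint rule on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def a_coeff(k):
    """a_k = log(2)^k/k! - 1/(k-1)! * int_0^1 log^(k-1)((1+t)t)/(1+t) dt."""
    integral = midpoint(lambda t: math.log((1 + t) * t) ** (k - 1) / (1 + t), 0.0, 1.0)
    return math.log(2) ** k / math.factorial(k) - integral / math.factorial(k - 1)

assert abs(a_coeff(1)) < 1e-6                       # a_1 = 0, so the series starts at 1/n^2
assert abs(a_coeff(2) - math.pi ** 2 / 12) < 1e-3   # a_2 = pi^2/12
```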
|integration|sequences-and-series|asymptotics|
1
The Cryptic Cube Conundrum
Consider a perfect cube $N^3$ where $N$ is a positive integer. You are given the following cryptic clues related to $N$ : The sum of the digits of $N$ is a perfect square. $N$ is divisible by the sum of its digits. The prime factorization of $N$ includes exactly three distinct primes. Determine the smallest possible value of $N$ that satisfies all these conditions. $\textbf{My Work:}$ Let $N = abc$ be the three-digit number representing the cube. Without loss of generality, assume $a$ , $b$ , and $c$ are the digits of $N$ . Despite my efforts in analyzing the conditions and attempting various approaches, I couldn't determine the smallest possible value of $N$ that satisfies all these conditions. Any guidance, insights, or a solution from the community would be greatly appreciated.
$N$ can’t be one-digit. Let us try to find two-digit $N$ . The sum of its digits is a square, so $1$ , $4$ , $9$ or $16$ . There is only $10$ with sum $1$ , and it doesn’t fit. If the sum of digits of $N$ is $4$ , then $N$ is $13$ , $22=2\cdot11$ , $31$ , $40=2^3\cdot5$ . Doesn’t work again. If the sum of digits is $9$ , then we have to try $18=2\cdot3^2$ , $27=3^3$ , $36=2^2\cdot3^2$ , $45=5\cdot3^2$ , $54=2\cdot3^3$ , $63=3^2\cdot7$ , $72=2^3\cdot3^2$ , $81=3^4$ , $90=2\cdot3^2\cdot5$ . If the sum of digits is $16$ , then we look at $79$ , $88=2^3\cdot11$ , $97$ . The sum can’t be greater than $18$ . $90$ is divisible by $9+0=9$ , so this is the answer.
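A brute-force search confirms the case analysis (a sketch; `ok` encodes the three conditions as I read them):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def distinct_prime_factors(n):
    """Set of distinct prime factors via trial division."""
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

def ok(n):
    s = digit_sum(n)
    return (int(s ** 0.5) ** 2 == s                    # digit sum is a perfect square
            and n % s == 0                             # n divisible by its digit sum
            and len(distinct_prime_factors(n)) == 3)   # exactly three distinct primes

smallest = next(n for n in range(2, 1000) if ok(n))
assert smallest == 90
```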
|elementary-number-theory|recreational-mathematics|
0
Why Morse Index Theorem imply Jacobi Theorem?
The picture below is from do Carmo's Riemannian Geometry . Seemingly, the author thinks that the 2.2 Index Theorem implies the 2.9 Corollary. I don't see that at all. Did the author make a mistake? Here, $E$ is the energy of a curve in the variation. For example, assuming $f(s,t)$ is the variation of $\gamma (t)$ , then $$ E(s) =\int_0^a |\partial_t f(s,t)|^2 dt $$
Since $\gamma(a)$ is not conjugate to $\gamma(0)$ , the index form is not degenerate. Since it's a geodesic, $E'(0)=0$ . The index being zero means there are no nonzero vector fields for which the quadratic form is negative, and it can't be zero (since it's not degenerate). That is, for all nonzero $V$ , we have $0 < I(V,V)$ . Thus, $E$ is minimized at zero for some neighborhood and so by continuity of the second derivative, maximized at the endpoint for a small enough region. The converse is similar.
|riemannian-geometry|
1
Supremum and infimum for $n^{1/n}$
I have the set $A=\{\sqrt[n]{n}: n \in \mathbb{N}\}$ . I can see that $\inf A$ is $1$ as $\sqrt[1]{1}=1$ but I am having trouble with $\sup A$ . I know that it is $\sqrt[3]{3}$ because it only decreases from then onwards. But I have trouble proving that is $\sqrt[3]{3}$ . I have tried induction but I can't get it to work. Any strategies appreciated.
Note that the ratio $$ \begin{align} \frac{\frac{(n+1)^n}{n^{n+1}}}{\frac{n^{n-1}}{(n-1)^n}} &=\frac{\left(n^2-1\right)^n}{n^{2n}}\tag{1a}\\ &=\left(1-\frac1{n^2}\right)^n\tag{1b}\\[12pt] &\lt1\tag{1c} \end{align} $$ Inequality $(1)$ means that $\frac{(n+1)^n}{n^{n+1}}$ is decreasing. When $n=3$ , we have $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &=\frac{64}{81}\tag{2a}\\[6pt] &\lt1\tag{2b} \end{align} $$ Thus, for $n\ge3$ , we have $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n}\tag3 $$ When $n=2$ , we have $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &=\frac98\tag{4a}\\[6pt] &\gt1\tag{4b} \end{align} $$ Thus, for $n\le2$ , we have $$ (n+1)^{\frac1{n+1}}\gt n^{\frac1n}\tag5 $$ Inequalities $(3)$ and $(5)$ show that $n^{\frac1n}$ is greatest when $n=3$ .
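A quick numeric confirmation of the maximum (a finite scan, so only a sanity check, not a proof):

```python
# n^(1/n) over a finite range; the monotonicity argument above covers all n
max_n = max(range(1, 10_000), key=lambda n: n ** (1 / n))
assert max_n == 3

# the runners-up: 2^(1/2) = 4^(1/4), and both are below 3^(1/3)
assert abs(4 ** 0.25 - 2 ** 0.5) < 1e-12
assert 2 ** 0.5 < 3 ** (1 / 3)
```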
|real-analysis|
1
What does it mean for the variance to be squared?
Suppose we are measuring the heights of different people. Our measures will be in $cm$ , but the variance will be in $cm^2$ . If we were measuring the time to complete a race course, our measures would be in $seconds$ , but the variance would be $seconds^2$ What do those units mean? Why are they there? I'm not asking about the formula, that much I can understand, but I'm asking for the underlying meaning of it.
We calculate the variance by finding the squared differences between each data point and the mean. Squaring these differences ensures they are all positive, so that deviations with opposite signs cannot cancel. For example, someone a few centimeters taller and another a few centimeters shorter than the average might have their deviations cancel to zero. Squaring ensures all deviations contribute as positive values, preventing this cancellation and providing a more accurate picture of the spread. Squaring also gives more weight to larger deviations: since squares are always non-negative, larger deviations from the mean contribute more significantly to the overall variance. The units simply come along for the ride: squaring a quantity measured in $cm$ produces a quantity measured in $cm^2$ , which is why the standard deviation (the square root of the variance) is the spread measure with the same units as the data.
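A tiny numeric illustration (the heights are made up):

```python
heights_cm = [170.0, 165.0, 180.0, 175.0, 160.0]
mean = sum(heights_cm) / len(heights_cm)                           # in cm
var = sum((h - mean) ** 2 for h in heights_cm) / len(heights_cm)   # in cm^2
std = var ** 0.5                                                   # back to cm

# signed deviations cancel to exactly zero -- the problem squaring avoids
assert abs(sum(h - mean for h in heights_cm)) < 1e-9
assert abs(var - 50.0) < 1e-9
assert abs(std - 50.0 ** 0.5) < 1e-9
```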
|statistics|intuition|variance|
0
Number of real roots of polynomial equation $\sum_{j=1}^n \frac{a_j}{(\lambda_j+x )^2}=1$
In one of my recent answers to an optimization question (see here ), I came across the following equation: $$\sum_{j=1}^n \frac{a_j}{(\lambda_j+x )^2}=1$$ where $a_j>0, \lambda_j \ge 0, j=1,...,n$ . After doing some numerical checks, I realized that this equation always has two real roots (try it yourself here ). One may note that the equation can be rewritten as a polynomial equation of degree $2n$ . Is there any formal proof or counterexample for this observation?
We can assume that $\lambda_1 \le \lambda_2 \le \ldots \le \lambda_n$ . The function $$ f(x) = \sum_{j=1}^n \frac{a_j}{(\lambda_j+x )^2} $$ is continuous on the open interval $(-\lambda_1, \infty)$ with $$ \lim_{x \to (-\lambda_1)+} f(x) = +\infty \, , \, \lim_{x \to \infty} f(x) = 0 \, . $$ It follows that $f(x) = 1$ has a solution in $(-\lambda_1, \infty)$ . (Actually exactly one solution in that interval because $f$ is strictly decreasing there.) In the same way one can show that $f(x) = 1$ has (exactly) one solution in the interval $(-\infty, -\lambda_n)$ . This shows that $f(x) =1$ has at least two real solutions. There can be additional solutions (one or two in each of the intervals $(-\lambda_{k+1}, -\lambda_k)$ ). For example, the equation $$ \frac{1}{x^2} + \frac {1}{(x+5)^2} + \frac{1}{(x+10)^2} = 1 $$ has six real solutions ( WolframAlpha ).
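The six-root example at the end can be verified by counting sign changes of $f(x)-1$ on a fine grid (a rough numeric sketch; the grid bounds and step are my choices):

```python
def f(x):
    return 1 / x ** 2 + 1 / (x + 5) ** 2 + 1 / (x + 10) ** 2

poles = (0.0, -5.0, -10.0)

def count_roots(lo=-30.0, hi=20.0, n=50_000):
    """Count sign changes of f(x) - 1 on a uniform grid, skipping the poles.

    Near each pole f is huge and positive on both sides, so dropping the
    points closest to a pole introduces no spurious sign change."""
    h = (hi - lo) / n
    xs = [lo + i * h for i in range(n + 1)
          if all(abs(lo + i * h - p) > 1e-4 for p in poles)]
    signs = [f(x) - 1 > 0 for x in xs]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

assert count_roots() == 6   # matches the six real solutions reported by WolframAlpha
```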
|real-analysis|calculus|polynomials|roots|
1
Definition of a convex cone
In the definition of a convex cone, given that $x,y$ belong to the convex cone $C$, then $\theta_1x+\theta_2y$ must also belong to $C$, where $\theta_1,\theta_2 > 0$. What I don't understand is why there isn't the additional constraint that $\theta_1+\theta_2=1$ to make sure the line that crosses both $x$ and $y$ is restricted to the segment in between them.
We are given that $x$ and $y$ belong to the convex cone $C$. Therefore, $\theta_1 x \in C$ and $\theta_2 y \in C$ for any $\theta_1,\theta_2 > 0$ . It is also possible to prove the condition $\theta_1 x + \theta_2 y \in C$ directly. First note that, for any $0 < \theta < 1$ , it is also true that $\theta \theta_1 x \in C$ and $(1 - \theta) \theta_2 y \in C$ since, for such a $\theta$ , $\theta\theta_1 > 0$ and $(1-\theta)\theta_2 > 0$ . Now, since $C$ is convex, a convex combination of $\theta \theta_1 x$ and $(1 - \theta) \theta_2 y$ also lies in $C$. More specifically, $(1 - \theta) (\theta \theta_1 x) + \theta ((1 - \theta) \theta_2 y) \in C$ . Here, the coefficients of $(\theta \theta_1 x)$ and $((1 - \theta) \theta_2 y)$ sum to $1$, due to which it is possible to state this conclusion. Let $\gamma = (1 - \theta) (\theta \theta_1 x) + \theta ((1 - \theta) \theta_2 y) \in C$ . On rearrangement we find that $\gamma = \theta(1 - \theta)(\theta_1 x + \theta_2 y)$ . Now, we know that $\theta(1 - \theta) > 0$ . Hence, $\alpha = \frac{1}{\theta(1 - \theta)} > 0$ , and since $C$ is a cone, $\alpha\gamma = \theta_1 x + \theta_2 y \in C$ .
|convex-analysis|definition|convex-cone|
0
Solving complex linear equations with conjugate operations
Consider the following equations: $$\left\{ \matrix{ {z_1} + {z_2} = 9 + 5i \hfill \cr \overline {{z_1}} + 2{z_2} = {z_1} + 10 + 2i \hfill \cr} \right.$$ Assuming ${z_1} = {x_1} + i{y_1}$ and ${z_2} = {x_2} + i{y_2}$ , substitute them into the equations $$\left\{ \matrix{ {x_1} + i{y_1} + {x_2} + i{y_2} = 9 + 5i \hfill \cr {x_1} - i{y_1} + 2\left( {{x_2} + i{y_2}} \right) = {x_1} + i{y_1} + 10 + 2i \hfill \cr} \right.$$ The real parts and imaginary parts correspondingly equate: $$\left\{ \matrix{ {\mathop{\rm Re}\nolimits} ({x_1} + i{y_1} + {x_2} + i{y_2}) = {\mathop{\rm Re}\nolimits} (9 + 5i) \hfill \cr {\mathop{\rm Re}\nolimits} ({x_1} - i{y_1} + 2\left( {{x_2} + i{y_2}} \right)) = {\mathop{\rm Re}\nolimits} ({x_1} + i{y_1} + 10 + 2i) \hfill \cr {\mathop{\rm Im}\nolimits} ({x_1} + i{y_1} + {x_2} + i{y_2}) = {\mathop{\rm Im}\nolimits} (9 + 5i) \hfill \cr {\mathop{\rm Im}\nolimits} ({x_1} - i{y_1} + 2\left( {{x_2} + i{y_2}} \right)) = {\mathop{\rm Im}\nolimits} ({x_1} + i{y_1} + 10 + 2i) \hfill \cr} \right.$$
It is usually done separately for real and imaginary parts, but it's not hard doing it by hand: $$ \bar{z_1}+2z_2=z_1+10+2i\\ 2z_2-(z_1-\bar{z_1})=10+2i $$ Since $z_1-\bar{z_1}$ is purely imaginary, the real part of $z_2$ must be $5$ . $$z_1+z_2=9+5i$$ This means the real part of $z_1$ is $4$ . Now you can solve a normal two-variable system for the imaginary parts. Note: Your answer seems to be wrong, might want to check again, and MATLAB is wayyy too overkill
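The elimination can be finished and checked in a few lines (a sketch; the imaginary parts $b, c$ follow from $b+c=5$ and $2c-2b=2$):

```python
# eq1: z1 + z2 = 9 + 5i;  eq2: conj(z1) + 2*z2 = z1 + 10 + 2i
# eq2 gives Re(z2) = 5 (since z1 - conj(z1) is purely imaginary),
# then eq1 gives Re(z1) = 4.
# Imaginary parts b = Im(z1), c = Im(z2): eq1 -> b + c = 5, eq2 -> 2c - 2b = 2.
b, c = 2.0, 3.0
z1, z2 = complex(4, b), complex(5, c)

assert z1 + z2 == 9 + 5j
assert z1.conjugate() + 2 * z2 == z1 + 10 + 2j
```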
|linear-algebra|complex-analysis|systems-of-equations|
1
Prove that if $\alpha \geq1$ and $n\geq 2\alpha \log(2\alpha)$ then $n \geq \alpha \log(2n)$
I want to prove this statement: $\alpha \geq1$ and $n\geq 2\alpha \log(2\alpha)$ then $n \geq \alpha \log(2n)$ It seemed pretty simple but I couldn't do too much. My first attempt was $$ \log(n) \geq \log ( 2\alpha \log(2\alpha) ) $$ But this attempt didn't look promising. Could you give me a hint?
If $\alpha\geq 1$ , then we know that $\log(\alpha)\geq0$ $\Rightarrow n\geq 0$ . Substituting $ p = \alpha\log(2\alpha)$ , we can clearly see the following: $$n\geq 2p$$ And because the numbers are positive then $$n\geq p$$ EDIT: Considering the change of what we have to prove: Here is the updated variant. If $n\geq 2\alpha\log(2\alpha)$ , and all of the terms are positive, we can say that $n = 2\alpha\log(2\alpha) + \psi_1$ where $\psi_1$ is some number greater than $0$ . Plugging this into the second inequality we get what we have to prove: $$2\alpha\log(2\alpha) + \psi_1 = \alpha\log(4\alpha\log(2\alpha) + 2\psi_1) + \psi_2 $$ $${(2\alpha)}^{2\alpha}e^{\psi_1}=(4\alpha \log(2\alpha)+2\psi_1)^{\alpha}e^{\psi_2}$$ $$(2\alpha)^{2\alpha}=((4\alpha\log(2\alpha)+2\psi_1)^{\frac{1}{2}})^{2\alpha}e^{\psi_2-\psi_1}$$ $$e^{\frac{\psi_1-\psi_2}{2\alpha}}=\left(\frac{\sqrt{(4\alpha\log(2\alpha)+2\psi_1)}}{2\alpha}\right) $$ Let $\psi_3=e^{\frac{\psi_1-\psi_2}{2\alpha}}$ , and we know that $\psi_3 \gt 0$ , so we only have
|inequality|logarithms|
0
The Cryptic Cube Conundrum
Consider a perfect cube $N^3$ where $N$ is a positive integer. You are given the following cryptic clues related to $N$ : The sum of the digits of $N$ is a perfect square. $N$ is divisible by the sum of its digits. The prime factorization of $N$ includes exactly three distinct primes. Determine the smallest possible value of $N$ that satisfies all these conditions. $\textbf{My Work:}$ Let $N = abc$ be the three-digit number representing the cube. Without loss of generality, assume $a$ , $b$ , and $c$ are the digits of $N$ . Despite my efforts in analyzing the conditions and attempting various approaches, I couldn't determine the smallest possible value of $N$ that satisfies all these conditions. Any guidance, insights, or a solution from the community would be greatly appreciated.
So it’s not necessarily true that $N$ is three digits. Let $N$ be denoted by $$\sum_{k = 0}^{n - 1} 10^k a_k$$ where $n$ is the number of digits. Also $$\sum_{k = 0}^{n - 1} a_k = m^2$$ And we know that $$N \equiv 0 \pmod{m^2}$$ And that $N = p_1^{j_1}p_2^{j_2}p_3^{j_3}$ . And we know, because $N$ is divisible by $m^2$ , at least one of the $j$ ’s is even. Suppose $p_1 = 2, p_2 = 3, p_3 = 5, j_1 = 2, j_2 = 1, j_3 = 1$ . Then $N = 60$ and $N$ is the smallest value which is divisible by a perfect square and that has 3 distinct factors, but 6 is not a perfect square. If we adjust the values to make the next 2 smallest numbers with those properties, we see that for $j_1 = 3$ we get $N = 80$ , but 8 is not a perfect square; then changing $j_1 = 1, j_2 = 2$ , which is the third smallest, we get $N = 90$ , and 9 is a perfect square and $N$ is divisible by 9. Therefore $N = 90$ is the smallest. However, this doesn’t use $N^3$ anywhere, so if we were to interpret the question as asking for $N^3$ , then we see that $N^3 = 90^3 = 729000$ .
|elementary-number-theory|recreational-mathematics|
0
Fourier series for $\sin^n(x)$
I am trying to find the Fourier series for $\sin^a(x)$ where $a$ is an arbitrary integer $\ge1$. Here is what I have done so far: $$\sin^a(x)= \left(\frac{e^{ix}-e^{-ix}}{2i}\right)^a=(2i)^{-a}(e^{ix}-e^{-ix})^a$$ Fourier series: $\frac{1}{2\pi} \sum_{-\infty}^{\infty} c_n e^{inx}$ \begin{align} c_n & =\frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-inx} \, dx \\[10pt] & =\frac{1}{2\pi} \int_{-\pi}^{\pi} (2i)^{-a}(e^{ix}-e^{-ix})^a e^{-inx} \, dx \\[10pt] & =\frac{1}{2\pi} (2i)^{-a} \int_{-\pi}^{\pi} e^{-inx} \sum_{k=0}^a \binom{a}{k} e^{(a-k)ix} e^{-kix} (-1)^k \, dx \\[10pt] & =\frac{1}{2\pi} (2i)^{-a} \left[ \sum_{k=0}^a \binom{a}{k} (-1)^k \frac{1}{(a-2k-n)i} e^{(a-2k-n)ix} \right]_{-\pi}^\pi \\[10pt] & =\frac{1}{2\pi} (2i)^{-a} \sum_{k=0}^{a} \binom{a}{k} (-1)^k \frac{1}{(a-2k-n)i} (e^{(a-2k-n)i\pi} - e^{-(a-2k-n)i\pi}) \\[10pt] & =\frac{1}{2\pi} (2i)^{-a} \sum_{k=0}^{a} \binom{a}{k} (-1)^k \frac{1}{(a-2k-n)i} ((-1)^{(a-2k-n)} - (-1)^{-(a-2k-n)}) \\[10pt] & =0 \end{align} At this point
For even $n$: let $n=2k$ $$ \sin^{2k}(x)=\frac{(-1)^{k}}{2^{2k-1}}\left(\sum_{r=0}^{k-1}(-1)^{r}\binom{2k}{r}\cos((2k-2r)x)+\frac{(-1)^{k}\binom{2k}{k}}{2}\right) $$ For odd $n$: let $n=2k+1$ $$ \sin^{2k+1}(x)=\frac{(-1)^{k}}{2^{2k}}\sum_{r=0}^{k}(-1)^{r}\binom{2k+1}{r}\sin((2k-2r+1)x) $$ The series for $\sin^{2k+1}(x)$ follows directly from your approach (the $(-1)^{r}$ comes from the binomial expansion of $(e^{ix}-e^{-ix})^{n}$). For $\sin^{2k}(x)$ , write $\sin^{2k}(x)$ as $\sin^{2k-1}(x) \cdot \sin(x)$ . Now use the formula $2\sin C \cdot \sin D = \cos(C-D)-\cos(C+D)$ and collect like terms.
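Both closed forms can be checked against direct powers of $\sin x$ numerically (a sketch; the sums carry the alternating $(-1)^r\binom{\cdot}{r}$ coefficients from the binomial expansion):

```python
from math import comb, sin, cos

def sin_pow_odd(k, x):
    """sin^(2k+1)(x) via the finite Fourier sine sum."""
    s = sum((-1) ** r * comb(2 * k + 1, r) * sin((2 * k - 2 * r + 1) * x)
            for r in range(k + 1))
    return (-1) ** k / 2 ** (2 * k) * s

def sin_pow_even(k, x):
    """sin^(2k)(x) via the finite Fourier cosine sum plus constant term."""
    s = sum((-1) ** r * comb(2 * k, r) * cos((2 * k - 2 * r) * x)
            for r in range(k))
    return (-1) ** k / 2 ** (2 * k - 1) * (s + (-1) ** k * comb(2 * k, k) / 2)

for k in (1, 2, 3):
    for x in (0.3, 1.1, 2.5):
        assert abs(sin_pow_odd(k, x) - sin(x) ** (2 * k + 1)) < 1e-12
        assert abs(sin_pow_even(k, x) - sin(x) ** (2 * k)) < 1e-12
```

For $k=1$ the odd formula reduces to the familiar $\sin^3 x = (3\sin x - \sin 3x)/4$.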
|fourier-analysis|fourier-series|
0
Alternative method of solving $\int_0^{\pi/2} {\sin^2{x} \ln{\tan x} \,dx}$
Solve the integral: $$\int_0^{\pi/2} {\sin^2{x} \ln{\tan x} \,dx}$$ I have already found the answer to be $\frac{\pi}{4}$ by the method explained below, but I would like to know whether there is another way. --- My method --- Use a $u$-sub: $u=\tan x, \,du=\sec^2x \,dx$ $$\int_0^{\pi/2} {\sin^2{x} \ln{\tan x} \,dx}=\int_0^{\pi/2} {\frac{\tan^2{x}}{\sec^2{x}} \ln{\tan x} \,dx}=\int_0^{\pi/2} {\frac{\tan^2{x}}{\sec^4{x}} \ln{\left(\tan x\right)} \sec^2x \,dx}=\int_0^{\infty} {\frac{u^2}{\left(1+u^2\right)^2} \ln{u} \,du}$$ Now, just 'pull your luckiest rabbit out of your hat': $$I(a) = \int_0^{\infty} {\frac{u^a \ln{u}}{\left(1+u^a\right)^2} \,du}$$ And use a 'reverse Feynman method': $$\int{I(a) \,da} = \int_0^{\infty} {\int{ \left( \frac{u^a \ln{u}}{\left(1+u^a\right)^2} \,da\right)}\,du} = -\int_0^{\infty} {\frac{\,du}{1+u^a}} = -\frac{\pi}{a} \csc{\left(\frac{\pi}{a}\right)}$$ Now, $I(a)$ is just the derivative of the last expression: $$I(a) = \frac{\,d}{\,da} \left[ -\frac{\pi}{a}\csc{\left(\frac{\pi}{a}\right)} \right] = \frac{\pi}{a^2}\csc{\left(\frac{\pi}{a}\right)}\left(1-\frac{\pi}{a}\cot{\left(\frac{\pi}{a}\right)}\right)$$ and evaluating at $a=2$ gives $I(2)=\frac{\pi}{4}$ .
Using the double angle formula and IBP, we have $$ \begin{aligned} \int_0^{\frac{\pi}{2}} \sin ^2 x \ln (\tan x) d x = & \int_0^{\frac{\pi}{2}} \frac{1-\cos 2 x}{2} \ln (\tan x) d x \\ = & \frac{1}{2} \int_0^{\frac{\pi}{2}} \ln (\tan x) d x-\frac{1}{4} \int_0^{\frac{\pi}{2}} \ln (\tan x)d(\sin 2x) \\ = & -\frac{1}{4}\left([\sin 2 x\ln(\tan x)]_0^{\frac{\pi}{2}} -\int_0^{\frac{\pi}{2}} \frac{\sin 2 x}{\tan x} \sec ^2 x d x\right) \\ = & \frac{1}{4} \int_0^{\frac{\pi}{2}} 2 d x \\ = & \frac{\pi}{4} \end{aligned} $$ Here the first integral vanishes, since $\int_0^{\pi/2}\ln(\tan x)\,dx=0$ by the symmetry $x\mapsto \frac{\pi}{2}-x$, and the boundary term $[\sin 2x\ln(\tan x)]_0^{\pi/2}$ is $0$ as well.
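A crude midpoint-rule check of the final value (the endpoint log singularities are integrable, so a plain midpoint rule suffices):

```python
import math

def integrand(x):
    return math.sin(x) ** 2 * math.log(math.tan(x))

def midpoint(f, a, b, n=200_000):
    """Composite midpoint rule; never evaluates f at the endpoints."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

val = midpoint(integrand, 0.0, math.pi / 2)
assert abs(val - math.pi / 4) < 1e-3
```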
|integration|
0
How do i determine the "sign" of a region of integration in multivariable integrals?
In 1 dimension, if I am given two numbers $a<b$ with $x \in [a,b]$ , I know that $\int_{a}^{b}f(x)dx$ is positive and $\int_{b}^{a}f(x)dx$ is negative. However, in 2d and above, the region of integration is often given simply as a set of numbers: $\iint_{D}f(x)dxdy$ Am I to assume the $D$ in $\iint_{D}f(x)dxdy$ is a "positive region" unless otherwise stated? Edit: In a technical sense, is the region of integration assumed to be a positively oriented differentiable manifold unless stated otherwise?
There aren't any "negative regions" in $\mathbb{R}^1$ either. It's just a convention related to the notation $\int_a^bf$ , which is defined as $\int_{[a,b]}f$ if $a\leq b$ and $-\int_{[b,a]}f$ if $a\geq b$ . (It probably isn't actually defined that way the first time people learn about integration, but it is still a useful analogy to the multi-dimensional case.)
|calculus|integration|multivariable-calculus|
0
Defining the Y combinator in terms of S, K and I
We know that the Y-combinator is defined as: $$\text{Y}:=\lambda f.(\lambda x.f(xx))(\lambda x.f(xx))$$ Wikipedia says : $$\text{Y}:=\text{S(K(SII))(S(S(KS)K)(K(SII)))}$$ Now the question is: What logical steps can we take to convert the first definition to the second? While it is easy to show the equivalence between the two definitions, finding how the first definition can motivate and lead to the second definition is, in my opinion, a tricky task. I have added my proof as an answer, but all other ideas and suggestions are welcome.
Define $D = S I I$ , $B = S (K S) K$ and $V = S B (K D)$ . Then, your definition is $Y = S (K D) V$ . Since $D x = S I I x = I x (I x) = x x$ , then, when applied to $f$ , this is equivalent to $$S (K D) V f = K D f (V f) = D (V f) = V f (V f) = S V V f.$$ Thus, under the η-rule, $$S (K D) V = λf·S (K D) Vf = λf·S V V f = S V V.$$ Actually, if you go with strict $SKI$ abstraction, then $S V V$ is what the article should be citing, not $S (K D) V$ . First, $$f(x x) = K f x (D x) = S (K f) D x\quad⇒\quadλx·f(x x) = λx·S (K f) D x = S (K f) D.$$ Second, $$S (K f) = K S f (K f) = S (K S) K f = B f,\quad D = K D f.$$ Therefore $$S (K f) D = B f (K D f) = S B (K D) f = V f.$$ Thus, it follows that $$λf·(λx·f (x x))(λx·f (x x)) = λf·(V f)(V f) = λf·S V V f = S V V.$$ Under the combinator engine Combo , which I put up on GitHub, the abstraction algorithm uses a wider range of combinators, including $D$ and $B$ . It will yield $S \_0 \_0$ , where $\_0 = C B D$ , since $C a b = S a (K b)$ (under
|logic|lambda-calculus|combinatory-logic|
0
Graph colouring on $\mathbb{R}^2$. Color all points in $\mathbb{R}^2$ one of $3$ colors, show there exist $2$ points of the same color of distance $1$
Problem: (i) Show that no matter how we color all the points of the plane $\mathbb{R}^2$ in $3$ colors, there always exist two points at distance $1$ with the same color. (ii) Is it true that for every positive integer $k$ , when all the points of $\mathbb{R}^2$ are $k$ -colored, there always exist two points that are at distance $1$ and have the same color? I am not sure how to approach the first problem, let alone the second one. I'm sure this type of question is the kind that requires some insightful tricks. For part (ii), I believe it is false, but I am unsure how to prove whether, for any positive integer $k$ , there will always exist two points at distance $1$ with the same color.
(i) Assume we coloured the plane in such way that no two similarly coloured points are at distance $1$ from each other. Consider a triangle with each side = $1$ . Its vertices A, B, C are distinct colours. Consider a point D symmetric to A with respect to BC. D must have the same colour as A. It actually means that every two points at distance $\sqrt3$ from each other are the same colour. Then just consider a circumference of radius $\sqrt3$ . All its points are the same colour as the center of the circumference. Obviously there are two points on this circumference at distance $1$ from each other. (ii) Consider the regular tessellation of the plane with regular hexagons of diameter slightly less than $1$ . Colour them with $7$ colours in a “chess” order, that is neighbour hexagons are of different colours. One can calculate the distance between the nearest same-coloured hexagons. It is greater than $1$ . Thus we get the counterexample for $k=7$ . You can consider the regular tessellati
|graph-theory|coloring|
1
What is a Tail Field and how to interpret it?
I cannot understand or form a good intuition in my head of what a tail field is. An introduction to rigorous probability theory by Rosenthal gives the following definition: Given a sequence of events $A_1, A_2, ...$, we define their tail field: $$\tau = \bigcap_{n=1}^\infty\sigma(A_n, A_{n+1}, A_{n+2},\ldots) $$ So if an event A is an element of $\tau$, what does that mean in simple terms?
It's the set of all tail events . Given infinitely many random variables $X_i$ , we call an $A \in \sigma(X_i, \, i \in I)$ a tail event if for all finite $I_0 \subset I$ , we have $A \in \sigma(X_i, \, i \in I\setminus I_0)$ . In other words, a tail event is an event that does not depend on finitely many of the random variables. It will occur or not occur regardless of what a finite subset of the random variables does. In German, we call it, literally translated, an "asymptotic event". That name is very suggestive. And if you look at your definition, it's the same idea. We take the intersection of all events that don't depend on the first $n$ random variables. And because the intersection runs over all $n$ , we have the set of all events that don't care about finite subsets - it's those events that are "asymptotic".
|probability-theory|
0
Find smallest step size so that gradient descent will diverge
Suppose I want to use fixed-sized gradient descent for a function like $y=x^2$ using the formula starting at some point (for example $x_0=4$ ): $x_{i+1}=x_{i}-\alpha*f'_{x}(x_i)$ I am trying to figure out how I could determine what would be the smallest possible value of $\alpha$ so that my algorithm won't actually converge. I was thinking that I should probably find some value $\delta>0$ so that starting from $x_0$ method will never come to a region $(-\delta; \delta)$ . Is this correct approach? If so, how would I find one? Thanks in advance.
Fix any $x_0$ in $\mathbb{R}\backslash\{0\}$ (if $x_0=0$ the sequence obtained is constant and it's trivial the method converges). Then, $|x_{i+1}|=|x_i-\alpha\cdot f'(x_i)|=|x_i-\alpha\cdot 2x_i|=|x_i||1-2\alpha|$ If $c=|1-2\alpha|<1$ (which is equivalent to $0<\alpha<1$ ), then $|x_n|=c^n|x_0|\to 0$ , so $x_n$ converges to $0$ (as desired, since it is the only minimum of $f$ ). If $\alpha=1$ , then $|x_{i+1}|=|x_i-\alpha\cdot 2x_i|=|-x_i|=|x_i|$ , so the sequence oscillates between $x_0$ and $-x_0$ , and thus does not converge.
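A minimal sketch of the three regimes of the recursion $x_{i+1}=x_i(1-2\alpha)$ for $f(x)=x^2$ , starting from the question's $x_0=4$ (the step counts and sample values of $\alpha$ are arbitrary choices):

```python
# Fixed-step gradient descent on f(x) = x^2, f'(x) = 2x:
# x_{i+1} = x_i - alpha * 2 * x_i = x_i * (1 - 2*alpha)
def iterate(alpha, steps=100, x0=4.0):
    x = x0
    for _ in range(steps):
        x = x - alpha * 2 * x
    return x

x_conv = iterate(0.4)   # |1 - 2*alpha| = 0.2 < 1: converges to 0
x_osc  = iterate(1.0)   # |1 - 2*alpha| = 1: oscillates between +-4
x_div  = iterate(1.1)   # |1 - 2*alpha| = 1.2 > 1: diverges
```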
|optimization|gradient-descent|
1
Probability that the reciprocal triangle inequality $\frac{1}{a} + \frac{1}{b} \ge \frac{1}{c}$ holds
Let $(a,b,c)$ be the sides of a triangle inscribed inside a unit circle such that the vertices of the triangle are distributed uniformly on the circumference. The regular triangle inequality states that the sum of any two sides is greater than the third side. But what happens if we take the sum of the reciprocal of any two sides? Is it greater than the third side? It turns out that the reciprocal triangle inequality $\frac{1}{a} + \frac{1}{b} \ge \frac{1}{c}$ is not true in general however, experimental data shows an interesting observation that the probability $$ P\left(\frac{1}{a} + \frac{1}{b} \ge \frac{1}{c}\right) = \frac{4}{5} $$ Can this be proved or disproved? Note that this is equivalent proving or disproving $$ P\left(\frac{1}{\sin x} + \frac{1}{\sin y} \ge \frac{1}{|\sin (x+y)|} \right) = \frac{4}{5} $$ where $0 \le x,y \le \pi$ . Related question : Probability that the geometric mean of any two sides of a triangle is greater than the third side is $\displaystyle \frac{2}{5}
Assume that the circle is centred at the origin, and the vertices of the triangle are: $A(\cos(-2Y),\sin(-2Y))$ where $0\le Y\le\pi$ $B(\cos(2X),\sin(2X))$ where $0\le X\le\pi$ $C(1,0)$ Let: $a=BC=2\sin X$ $b=AC=2\sin Y$ $c=AB=\left|2\sin\left(\frac{2\pi-2X-2Y}{2}\right)\right|=|2\sin(X+Y)|$ $P\left[\frac1a+\frac1b\ge\frac1c\right]=1-P\left[\frac1a+\frac1b<\frac1c\right]$ This last probability $P\left[\frac1a+\frac1b<\frac1c\right]$ is the ratio of the area of the shaded region to the area of the square in the graph below. Rotate these regions $45^\circ$ clockwise about the origin, then shrink by factor $\frac{1}{\sqrt2}$ , by letting $X=x-y$ and $Y=x+y$ . Using symmetry, we only need to consider the left half of the blue "diamond". Note that in the left half, $0<2x\le\pi$ , so $|\sin(2x)|=\sin(2x)$ . $P\left[\frac{1}{\sin X}+\frac{1}{\sin Y}<\frac{1}{|\sin(X+Y)|}\right]$ $=P\left[\frac{1}{\sin(x-y)}+\frac{1}{\sin(x+y)}<\frac{1}{\sin(2x)}\right]$ Now we want to express the inequality as $-f(x)<\cdots<f(x)$ for some $f(x)$ . $=P\left[\sin(2x)(\sin(x+y)+\sin(x-y))<\sin(x+y)\sin(x-y)\right]$ $=P\left[2\sin(2x)(\sin x)(\cos y
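The claimed value $4/5$ can also be checked by a quick Monte Carlo sketch (the sample count and seed are arbitrary choices; chord length between angles $\alpha,\beta$ on the unit circle is $2|\sin((\alpha-\beta)/2)|$ ):

```python
import math, random

random.seed(0)
trials = 200_000
hits = 0
for _ in range(trials):
    # three i.i.d. uniform vertices on the unit circle
    t1, t2, t3 = (random.uniform(0, 2 * math.pi) for _ in range(3))
    a = 2 * abs(math.sin((t2 - t3) / 2))   # chord BC
    b = 2 * abs(math.sin((t1 - t3) / 2))   # chord AC
    c = 2 * abs(math.sin((t1 - t2) / 2))   # chord AB
    if 1 / a + 1 / b >= 1 / c:
        hits += 1
p_hat = hits / trials   # should be close to 4/5
```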
|integration|geometry|inequality|trigonometry|triangles|
1
$\mathcal{B}_n:= \{(a, b): a, b \in \mathbb{R}, a<b\} \cup \{(a, b) \cup \{n\}: a, b \in \mathbb{R}, a<b\}$ where $n \in \mathbb{N}$
Is $\mathcal{B}_n:= \{(a, b): a, b \in \mathbb{R}, a<b\} \cup \{(a, b) \cup \{n\}: a, b \in \mathbb{R}, a<b\}$ where $n \in \mathbb{N}$ a basis for a topology on $\mathbb{R}$ ? How can I prove this one? In our proof of the second condition, we have a problem in the case where both $B_1$ and $B_2$ are of the form $(a, b) \cup \{n\}$ and $x$ is not in either interval but $x = n$ .
By what you said, this is not a base. Take $B_1=(-1,0)\cup \{2\}$ and $B_2=(0,1)\cup\{2\}$ . Their intersection is $\{2\}$ , but there is no base set contained in this singleton.
|general-topology|
1
How much choice is needed to prove that $|G| \nless |G/ \sim|$ for any equivalence relation $\sim$ on set $G$?
$G/ \sim$ is the set of $\sim$ -equivalence classes in $G$ and $|G/ \sim|$ is the cardinality of $G/ \sim$ . $|A| \leq |B|$ means that there is an injective function from $A$ to $B$ . $|A| < |B|$ means that $|A| \leq |B|$ and $|A| \neq |B|$ . If $\pi:G \to G/ \sim$ is defined by $\pi (x) = [x]_{\sim}$ , then $\pi$ has an injective right inverse $h:G/ \sim \to G$ (by the Axiom of Choice) and $|G/ \sim| \leq |G|$ and $|G| \nless |G/ \sim|$ Is the full axiom of choice needed to prove that $|G| \nless |G/ \sim|$ (just that one part, not necessarily both of $|G/ \sim| \leq |G|$ and $|G| \nless |G/ \sim|$ )? If not, what is the weakest form of choice that is needed to prove it? If the restriction that " $G$ is a group, $N$ is a subgroup of $G$ and $\sim$ is defined by $x \sim y$ iff $x y^{-1} \in N$ " was added, does that change the answer?
This is known as the Weak Partition Principle . We don't know all too much about it, but it implies that there are non-measurable sets of reals and that there are no infinite Dedekind-finite sets.
|set-theory|equivalence-relations|axiom-of-choice|
1
A stronger version of the Rosen’s subsequence theorem.
The following question was asked in my combinatorics exam: “Let $n$ be a positive integer. Exhibit an arrangement of the integers between $1$ and $n^2$ which has no increasing or decreasing subsequence of length $n + 1$ .” This question has a very similar structure to the Rosen’s subsequence theorem , and I am aware of its proof using the pigeonhole principle, as given in the link above by @Rajdeep. But I am not really sure how to provide the requisite construction for my question above: the pigeonhole principle fails here, as we have $n^2$ numbers and $n^2$ pairs if we proceed in a similar fashion as in the proof linked above. So could somebody provide me with some hints or solutions? I may be completely wrong about using the pigeonhole principle for this problem, so different solutions are also welcome.
Split all the numbers in groups of $n$ numbers and then make the following sequence (each string is one group): $$n^2-n+1, n^2-n+2,…,n^2,$$ $$…,$$ $$2n+1,2n+2,…,3n,$$ $$n+1,n+2,…,2n,$$ $$1,2,…,n.$$ Every strictly increasing subsequence is of maximal length $n$ , since for every number in the sequence all the numbers that are greater than it are in the same group. If you want to make a strictly decreasing subsequence, you have to take at most one number from each group, so again the length is at most $n$ .
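A sketch that builds this arrangement for a sample $n$ and verifies, by a simple $O(L^2)$ dynamic program, that no strictly monotone subsequence exceeds length $n$ (the choice $n=5$ is arbitrary):

```python
def arrangement(n):
    # groups of n, listed from the top group down:
    # n^2-n+1..n^2, then n^2-2n+1..n^2-n, ..., finally 1..n
    seq = []
    for g in range(n - 1, -1, -1):
        seq.extend(range(g * n + 1, g * n + n + 1))
    return seq

def longest_monotone(seq, increasing=True):
    # L[i] = length of the longest strictly monotone subsequence ending at i
    L = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if (seq[j] < seq[i]) == increasing:
                L[i] = max(L[i], L[j] + 1)
    return max(L)

n = 5
seq = arrangement(n)          # e.g. n=2 gives [3, 4, 1, 2]
inc = longest_monotone(seq, True)
dec = longest_monotone(seq, False)
```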
|combinatorics|discrete-mathematics|
1
Why does the graph of the absolute error vs x for the three-point central difference formula have this shape?
Using the three-point central difference formula, $$f'(x) \approx \frac{f(x+h)-f(x-h)}{2h}$$ to approximate $f'(x)$ for $f(x)= e^{-x} \sin(x)$ on the interval $[0,15]$ using different step sizes ( $h$ ) produces the following error vs $x$ graph . I wanted to ask why there are these uniform 'bumps' in the error, where it dips all of a sudden for certain values of $x$ . I've found lots of explanations going over plots of error vs $h$ , but so far I haven't found anything for error vs $x$ graphs. I'm guessing it's something to do with the rounding or truncation error suddenly decreasing? If so, why at these $x$ points in particular? Here's the Python code for the $h=0.01$ line:
import numpy as np
import matplotlib.pyplot as plt

h = 0.01  # step size (h)
x = np.arange(0, 15, h)
g = np.exp(-x) * np.sin(x)  # function f(x)
# find solutions to f'(x)
fn = (g[2:] - g[:-2]) / (2*h)  # numerical solution
fa = np.cos(x) * np.exp(-x) - np.sin(x) * np.exp(-x)  # analytical solution
err = np.abs(fa[1:-1] - fn)
plt.semilogy(x[1:-1], err, '.-')
The method error in its leading term is for the central difference quotient $$ \frac{f(x+h)-f(x-h)}{2h}=f'(x)+\frac{h^2}{6}f'''(x)+O(h^4) $$ In the given case we get $$ f(x)=e^{-x}\sin x\\ f'(x)=e^{-x}(\cos x-\sin x)\\ f''(x)=-2e^{-x}\cos x\\ f'''(x)=2e^{-x}(\sin x+\cos x)=2^{3/2}e^{-x}\sin\left(x+\frac\pi4\right) $$ The error term will oscillate, changing sign. Close to its roots the error will be especially small. And indeed the dips in the graph are at about $\frac{3\pi}4\approx2\frac14$ , $\frac{7\pi}{4}\approx 5\frac12$ , $\frac{11\pi}{4}\approx8\frac12$ , ... The depth of the dips mainly depends on the sampling density, how close the samples get to the root of the error term. Other sources of error contributions only become substantial at much smaller step sizes, down from $h=10^{-5}$ for the given situation.
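A small sketch confirming that the dips sit where $f'''$ vanishes, e.g. at $x=3\pi/4$ ; the comparison point $x=1$ is an arbitrary choice away from the roots of $f'''$ :

```python
import math

def f(x):
    return math.exp(-x) * math.sin(x)

def fprime(x):
    # exact derivative, for measuring the error
    return math.exp(-x) * (math.cos(x) - math.sin(x))

def central(x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

h = 0.01
# f'''(x) = 2^{3/2} e^{-x} sin(x + pi/4) vanishes at x = 3*pi/4,
# so the leading h^2/6 * f'''(x) error term disappears there.
err_dip   = abs(central(3 * math.pi / 4, h) - fprime(3 * math.pi / 4))
err_plain = abs(central(1.0, h) - fprime(1.0))
```

The error at the dip is several orders of magnitude below the generic $O(h^2)$ error, as the answer predicts.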
|numerical-methods|rounding-error|truncation-error|
1
How to solve this ode of the distribution?
I want to solve the following ode of distribution, $$\partial_y^2 f(x,y)=\delta_x,$$ where $\delta_x$ is the Dirac delta function defined by $\left =g(x).$ I did some attempts, $$\left =\left =\phi(x).$$ But I can't get the explicit formula of $f(x,y).$
You can either use the well-known and easy-to-prove formulas: $$ \partial_yH(y-x)=\delta_x\\ \partial_y(|y-x|)=\operatorname{sgn}(y-x) $$ where the equality is to be understood in a distributional sense. With these formulas you can straightforwardly integrate the ODE w.r.t. $y$ twice. Alternatively, you can (in a first step) restrict yourself to test functions whose (compact) support lies entirely above or below $x$ . The ODE for such test functions obviously reads $$ \partial_y^2f(x,y)=0 $$ Thus, we get a potential full solution by $$ f=\begin{cases} yg_-(x)+h_-(x)&y<x\\ yg_+(x)+h_+(x)&y>x \end{cases} $$ Now you simply need to fit $g_\pm, h_\pm$ such that $f$ is continuous and its second distributional derivative yields the $\delta_x$ . Either way you end up with something like this $$ f=\frac{1}{2}|y-x| + y g(x) + h(x) $$
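A numerical sketch testing the final formula with $g=h=0$ : for $f(y)=\frac12|y-x|$ we should have $\int f(y)\,\varphi''(y)\,dy = \varphi(x)$ for test functions $\varphi$ vanishing to second order at the boundary. The bump $\varphi(y)=(1-y^2)^4$ , the point $x=0.3$ , and the grid size are arbitrary choices:

```python
x0 = 0.3                              # location of the delta
phi  = lambda y: (1 - y * y) ** 4     # test function on [-1, 1]
# phi'' computed by hand from phi(y) = (1 - y^2)^4
phi2 = lambda y: -8 * (1 - y*y) ** 3 + 48 * y*y * (1 - y*y) ** 2

f = lambda y: 0.5 * abs(y - x0)       # candidate solution (g = h = 0)

# trapezoidal approximation of  integral_{-1}^{1} f(y) phi''(y) dy;
# distributionally this should equal <delta_x, phi> = phi(x0)
N = 200_000
hstep = 2.0 / N
total = 0.0
for i in range(N + 1):
    y = -1.0 + i * hstep
    w = 0.5 if i in (0, N) else 1.0
    total += w * f(y) * phi2(y)
integral = total * hstep
target = phi(x0)
```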
|real-analysis|ordinary-differential-equations|distribution-theory|
0
What is the limit of $\lim_{x \to \infty} \frac{e^x x^5}{x^x}$?
I am struggling with the evaluation of the following limit: $$ \lim_{x \to \infty} \frac{e^x x^5}{x^x} = \lim_{x \to \infty} \frac{e^x}{x^{x-5}}. $$ Intuitively I think the limit must be 0 since if $x$ is "very large" it must outpower $e^x$ by a long shot. But this is not really a rigorous explanation. I tried using the rule of l'Hopital but got nowhere since the terms just keep repeating. Can anyone help me?
If $x>e^2$ , $x-5>0$ , so $x^{x-5}>(e^2)^{x-5}$ and $$0\leq\dfrac{e^x}{x^{x-5}}\leq \dfrac{e^x}{(e^2)^{x-5}}=\dfrac{e^x}{e^{2x-10}}=e^{-x+10}\to 0,$$ so the limit is indeed $0$ .
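A minimal numeric sanity check on the logarithm of the expression (working with logs avoids overflow; the sample points are arbitrary):

```python
import math

# log of e^x * x^5 / x^x  is  x + 5*ln x - x*ln x,
# which tends to -infinity, so the ratio itself tends to 0.
logs = [x + 5 * math.log(x) - x * math.log(x) for x in (10, 50, 100, 500)]
```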
|real-analysis|calculus|limits|analysis|
1
What is the limit of $\lim_{x \to \infty} \frac{e^x x^5}{x^x}$?
I am struggling with the evaluation of the following limit: $$ \lim_{x \to \infty} \frac{e^x x^5}{x^x} = \lim_{x \to \infty} \frac{e^x}{x^{x-5}}. $$ Intuitively I think the limit must be 0 since if $x$ is "very large" it must outpower $e^x$ by a long shot. But this is not really a rigorous explanation. I tried using the rule of l'Hopital but got nowhere since the terms just keep repeating. Can anyone help me?
$$\begin{align}\lim_{x \to \infty} \frac{e^x}{x^{x-5}}&= \lim_{x \to \infty} \frac{e^x}{e^{(x-5)\log x}}\\& = \lim_{x \to \infty} e^{x-(x-5)\log x}\\& = \lim_{x \to \infty} e^{-x\log x+o(x\log x)}\\&=0\end{align} $$
|real-analysis|calculus|limits|analysis|
0
Ordinary linear system with dimension 3 and two independent eigenvectors
I am trying to solve the following system of ODEs $$ \overline{X'}= \begin{bmatrix} 5 & 2 & 0 \\ 0 & 5 & 0 \\ 0 & 3 & 5 \end{bmatrix} \overline X $$ The characteristic polynomial has 3 identical real roots $\lambda_{1,2,3} = 5$ . I have found two eigenvectors for $\lambda = 5$ , which are $\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$ . I know how to get two independent solutions. How do I get the third? Thanks
Denote the system's matrix by $A$ . You have found that $\text{Ker}(A-\lambda I)=\langle(1,0,0),(0,0,1)\rangle$ . This has dimension $2$ , which is not equal to the eigenvalue's algebraic multiplicity, which is $3$ . Now you have to continue finding kernels of powers of $A-\lambda I$ . In this case, $(A-\lambda I)^2=0_{3\times 3}$ , so $\text{Ker}[(A-\lambda I)^2]=\mathbb{R}^3$ , and you may take your third vector as $(0,1,0)$ , which is linearly independent from the first two.
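A small sketch verifying the two facts used here, that $(A-5I)^2=0$ and that $(1,0,0),(0,0,1),(0,1,0)$ are linearly independent, with hand-rolled $3\times3$ helpers (no external libraries):

```python
# N = A - 5I for the system's matrix A = [[5,2,0],[0,5,0],[0,3,5]]
N = [[0, 2, 0],
     [0, 0, 0],
     [0, 3, 0]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

N2 = matmul(N, N)            # should be the zero matrix

def det3(M):
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

# the two eigenvectors plus the proposed third vector, as rows
B = [[1, 0, 0],
     [0, 0, 1],
     [0, 1, 0]]
d = det3(B)                  # nonzero iff the three vectors are independent
```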
|ordinary-differential-equations|
0
if $a,b,c$ are the roots of the equation $x^3-px^2+qx-r=0$ find the value of $(b+c)(c+a)(a+b)$
if $a,b,c$ are the roots of the equation $x^3-px^2+qx-r=0$ find the value of $(b+c)(c+a)(a+b)$ Using vieta's formula: $a+b+c=p$ $ab + ac +bc = q$ $abc = r$ on expansion of the bracket, $(b+c)(c+a)(a+b) $ $=(bc+c^2+ab+ac)(a+b)$ $=(q+c^2)(a+b)$ after this step I am unable to proceed further. Could someone please help me out?
Since $a,b,c$ is the solutions, we have $$(x-a)(x-b)(x-c)=0=x^3-px^2+qx-r,$$ and let $x=a+b+c$ , we have $$(b+c)(c+a)(a+b)=(a+b+c-a)(a+b+c-b)(a+b+c-c)\\=(a+b+c)^3-p(a+b+c)^2+q(a+b+c)-r$$
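A quick numeric check of the identity with arbitrarily chosen roots (the values $2, -1, 3.5$ are just examples); note the final expression simplifies to $pq-r$ :

```python
# pick concrete roots, build p, q, r by Vieta, and compare both sides
a, b, c = 2.0, -1.0, 3.5
p = a + b + c
q = a*b + a*c + b*c
r = a*b*c

lhs = (b + c) * (c + a) * (a + b)
rhs = p**3 - p * p**2 + q * p - r   # cubic evaluated at x = a+b+c; = pq - r
```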
|algebra-precalculus|
0
Show the existence of a sequence of simple functions such that $\lim_{n\to\infty}\varphi_n(x)=f(x)$ and $\Vert\varphi _n(x)\Vert\leq\Vert f(x)\Vert$
Let $(X,\Sigma)$ be a measurable space and $f:X\to \mathbb{R}^d$ be a measurable function. Suppose that $\Vert \cdot \Vert $ is a norm of $\mathbb{R}^d$ . How can I show that there's a sequence $(\varphi_n)_{n\in\mathbb{N}}$ of simple functions such that $\lim_{n\to\infty}\varphi_n(x)=f(x)$ and $\Vert \varphi _n(x)\Vert \leq \Vert \varphi _{n+1}(x)\Vert \leq \Vert f(x)\Vert $ for any $x\in X$ and $n\in\mathbb{N}$ ? I know how to prove the existence of that sequence if $\Vert \cdot \Vert $ is the Euclidean norm, but I don't know how to prove that for any norm of $\mathbb{R}^d$ . Suppose that $\pi_i :\mathbb{R}^d\to \mathbb{R}$ is the $i$ th canonical projection for all $i\in \{1,\cdots,d\}$ . I know that for all $i\in \{1,\cdots, d\}$ there's a sequence of simple functions $(\varphi_n^i)_{n\in\mathbb{N}}$ such that for all $x\in X$ we have $\lim_{n\to\infty}\varphi _n^i(x)=(\pi _i\circ f)(x)$ and $|\varphi _n^i(x)|\leq |\varphi _{n+1}^i(x)|\leq |(\pi _i\circ f)(x)|$ for all $n\in\mathbb
Take $n$ , split $[-n,n]^d$ into $(2n^2)^d$ cubes with side-length $1/n$ . If $C$ is such a cube then let $x_C$ be an element of least norm in the cube. Now take $x\in X$ . If $f(x)\in C$ for some of these cubes set $\phi_n(x):=x_C$ . If $f(x)\not\in [-n,n]^d$ then $\phi_n(x):=x_C$ , where $C$ is a cube that contains a point of $[-n,n]^d$ that is closest to $f(x)$ . If $n > \|f(x)\|$ then $|f(x) - \phi_n(x)| \le diam( [0,1/n]^d)$ , which tends to zero for $n\to\infty$ . Also $\|\phi_n(x)\| \le \|f(x)\|$ . To get monotonicity, we restrict the sequence to powers of $2$ . Set $\psi_k := \phi_{2^k}$ . Then the cubes to side-length $1/2^{k+1}$ are contained in cubes of side-length $1/2^k$ . Take $x\in X$ , $f(x) \in C$ with $C$ a cube of side-length $1/2^{k+1}$ , $f(x) \in C'$ with $C'$ a cube of side-length $1/2^k$ . Then by construction $C' \supset C$ and $\|x_C\| \ge \|x_{C'}\|$ , $\psi_k(x) = x_{C'}$ , $\psi_{k+1}(x)=x_C$ , and $\|\psi_k(x)\|\le \|\psi_{k+1}(x)\|$ . And $(\psi_k)$ has t
|functional-analysis|measurable-functions|
1
Show the statistics are unbiased for function $\frac{\alpha}{\lambda}$
TASK : Let $X_1,...,X_n$ be a sample of i.i.d. random variables from a gamma distribution $(\lambda, \alpha)$ with parameter $\theta = (\lambda, \alpha)$ . Show that the statistic $\frac{1}{n}\sum_{i=1}^{n}X_i$ is unbiased for the parametric function $\frac{\alpha}{\lambda}$ . ANSWER : The mean of the statistic is $\lambda * \alpha$ . But how can I show it is unbiased? What is the parametric function for? Thanks for any help.
If $X\sim G(\lambda,\alpha)$ , then $\mathbb{E}[X]=\dfrac{\alpha}{\lambda}$ . As expectation is linear, $\mathbb{E}\left[\dfrac{1}{n}\displaystyle\sum_{i=1}^n X_i\right]=\dfrac{1}{n}\displaystyle\sum_{i=1}^n \mathbb{E}[X_i]\stackrel{\text{i.i.d}}{=}\dfrac{1}{n}\displaystyle\sum_{i=1}^n \mathbb{E}[X]=\dfrac{1}{n}\left(n\dfrac{\alpha}{\lambda}\right)=\dfrac{\alpha}{\lambda}$ .
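A simulation sketch of the unbiasedness claim using the standard library's `gammavariate`, which takes shape and scale, so scale $=1/\lambda$ (the parameter values, seed, and sample size are arbitrary):

```python
import random

random.seed(1)
alpha, lam = 3.0, 2.0          # shape alpha, rate lambda
n = 200_000

# sample mean of n i.i.d. Gamma(alpha, rate=lam) draws
sample_mean = sum(random.gammavariate(alpha, 1 / lam) for _ in range(n)) / n
target = alpha / lam           # = 1.5, the parametric function
```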
|statistics|statistical-inference|
1
prove that $\sum_{n = 1}^{\infty} \frac{|\sin(nx)|}{n}$ is divergent
I tried much to show that the following series is divergent for $x \in (0,\pi)$ $\sum_{n = 1}^{\infty} \frac{|\sin(nx)|}{n}$ My opinion was to use the comparison test with series $\frac{1}{n}$ but I could not achieve something good. Thank for your time.
Proof using Dirichlet's test: Firstly I'll introduce the test: Given a sequence $\{a_n\}_{1}^{\infty}$ and a sequence $\{b_n\}_{1}^{\infty}$ satisfying: $\{a_n\}_{1}^{\infty}$ is monotonic $\lim_{n \to \infty}{a_n} = 0$ $ \left| \sum_{n=1}^{N} b_n \right| \leq M $ for every positive integer $N$ The series $ \sum_{n=1}^{\infty} a_n \cdot b_n $ converges. Let us define $ a_n = \frac{1}{n} $ and $ b_n = \left| sin(n\cdot x) \right| $ . We know that $a_n$ is monotonic and converges to 0. All we have left to prove is that $ \left| \sum_{n=1}^{N} b_n \right| = \left| \sum_{n=1}^{N} \left| sin(n\cdot x) \right| \right| = \sum_{n=1}^{N} \left| sin(n\cdot x) \right| \leq M $ for every positive integer $N$ We know that $\left| sin(n \cdot x)\right| \leq 1$ and so $\sum_{n=1}^{N} \left| sin(n\cdot x) \right| \leq \sum_{n=1}^{N} 1 = N $ And so for every positive integer $N$ : $ \left| \sum_{n=1}^{N} b_n \right| \leq N $ According to Dirichlet's test, we get that: $ \sum_{n=1}^{\infty} a_n \cdot b_
|calculus|sequences-and-series|convergence-divergence|trigonometric-series|
0
Showing existence of a non-standard model of arithmetic elementarily equivalent to standard model of arithmetic
Let $\mathcal{M}_A=\langle\mathbb N, 0^{\mathcal{M}_A}, s^{\mathcal{M}_A}, +^{\mathcal{M}_A}, \times^{\mathcal{M}_A}, <^{\mathcal{M}_A}\rangle$ be the standard model for the language of arithmetic $\mathcal L_A$ . Define theory $\text{Th}(\mathcal{M}_A)=\{\alpha : \mathcal{M}_A\models\alpha\}$ . I am trying to show the existence of a model $\mathcal M$ that is elementarily equivalent to $\mathcal{M}_A$ and contains at least one non-standard element. I can start the proof as follows: Extend $\mathcal{L}_A$ with a new constant symbol $c$ , and call this language $\mathcal{L}'$ . Define $\Gamma=\text{Th}(\mathcal{M}_A)\cup\{\underbrace{s\dots s}_{\text{$n$ times}}(0) < c : n \in \mathbb{N}\}$ . One can use compactness to demonstrate that $\Gamma$ is finitely satisfiable, and hence satisfiable. So there is some model $\mathcal M$ such that $\mathcal{M}\models\Gamma$ and contains a non-standard element in its domain. I am not sure whether $\mathcal M$ is elementarily equivalent to $\mathcal M_A$ (with respect to $\mathcal L_A$ ). I think the
Every sentence $\alpha$ in $\mathcal{L}_A$ is either true in $\mathcal{M}_A$ or false in $\mathcal{M}_A$ . Hence for every sentence $\alpha$ either $\alpha\in\text{Th}(\mathcal{M}_A)$ or $\neg\alpha\in\text{Th}(\mathcal{M}_A)$ . Now note that $\mathcal{M}\models\text{Th}(\mathcal{M}_A)$ . There is nothing more for $\mathcal{M}$ to add to $\text{Th}(\mathcal{M}_A)$ : the addition of a single sentence would result in triviality.
|logic|model-theory|nonstandard-models|
0
Incorrect formula on Wikipedia?
There might be an incorrect formula mentioned in the article Kronecker delta on Wikipedia . The formula I'm talking about is this: $\delta_{nm}=\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{k=1}^Ne^{2\pi i\frac{k}{N}\left(n-m\right)}$ After I simplified this formula (the right side), because I saw that there was $e^{\pi i}$ which could be turned into $-1$ , it came out so that $\delta_{nm}=1$ which is obviously incorrect. So I'd like you guys to look into it just in case I'm wrong.
It is like the classic example $$ i=i^1=i^{\frac{4}{4}}=(i^4)^{\frac{1}{4}}=(1)^{\frac{1}{4}}=1 $$ whilst actually $$ i^{\frac{4}{4}}\neq(i^4)^{\frac{1}{4}} $$ It depends on the $\gcd$ of the two terms. For example: $$ i^{\frac{3}{4}}=(i^3)^{\frac{1}{4}} $$ since $\gcd(3,4) = 1$ , but $\gcd(4,4)=4$ .
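For finite $N$ the inner sum can be evaluated directly, which suggests the Wikipedia formula itself is fine and the error lies only in the attempted simplification (the values of $n$ , $m$ , $N$ below are arbitrary, with $|n-m| < N$ ):

```python
import cmath

def discrete_delta(n, m, N):
    # (1/N) * sum_{k=1}^{N} exp(2*pi*i * k * (n-m) / N)
    s = sum(cmath.exp(2j * cmath.pi * k * (n - m) / N) for k in range(1, N + 1))
    return s / N

d_same = discrete_delta(3, 3, 1000)   # n = m: every term is 1, sum/N = 1
d_diff = discrete_delta(7, 3, 1000)   # n != m: full-period geometric sum, = 0
```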
|linear-algebra|algebra-precalculus|
0
Solve $(\sin (\theta)+\cos(\theta))^2= \frac{3}{2}$
I came across this problem: Solve for $\theta$ (between value $0$ and $2\pi$ ) where $$(\sin (\theta)+\cos(\theta))^2= \frac{3}{2}$$ I started by expanding the left side to get $\sin^2(\theta)+\cos^2{\theta}+\sin(\theta)\cos(\theta) = \frac{3}{2}$ then changed sin&cos squared sum into $1$ and used double angle formula for sin to get $\sin(2\theta)=1$ . Then I continued to get the solution $\theta=\pi/4 +2\pi n$ but the answer is incorrect. Can I get help on understanding my mistake here?
Your thought process is correct overall but you've made a mistake while expanding the left side. More precisely, you should've done $$ (\sin(\theta) + \cos(\theta))^2 = \sin^2(\theta) + 2 \sin(\theta)\cos(\theta) + \cos^2(\theta) = 1 + 2\sin(\theta)\cos(\theta) = 1 + \sin(2\theta). $$ Hence, it follows that $$ (\sin(\theta) + \cos(\theta))^2 = \frac{3}{2} \Longleftrightarrow \sin(2\theta) = \frac{1}{2} .$$ Solving the latter equation, you obtain $$ 2\theta = \frac{\pi}{6} + 2\pi n \quad \text{ or } \quad 2\theta = \frac{5\pi}{6} + 2\pi n, $$ which is equivalent to $$ \theta = \frac{\pi}{12} + \pi n \quad \text{ or } \quad \theta = \frac{5\pi}{12} + \pi n. $$ Since you're only interested in values of $\theta$ such that $\theta \in [0,2\pi]$ , your solutions are $$ \Big\{ \frac{\pi}{12}, \frac{5\pi}{12}, \frac{13\pi}{12},\frac{17\pi}{12} \Big\}. $$
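A quick check that the four values found do satisfy the original equation:

```python
import math

solutions = [math.pi/12, 5*math.pi/12, 13*math.pi/12, 17*math.pi/12]
# each should give (sin t + cos t)^2 = 3/2
values = [(math.sin(t) + math.cos(t)) ** 2 for t in solutions]
```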
|algebra-precalculus|trigonometry|
1
Infinitely nested radical
Recently, I saw this intriguing radical, which is infinitely nested. I tried to calculate its value by considering the term $(2^n)^2$ inside $(n+1)$ square roots but could not de-nest it. But by using a limiting (approximation) argument, its value seems to be $2$ . Could you please provide a method to derive it? $\displaystyle\sqrt{1^2+\sqrt{2^2+\sqrt{4^2+\sqrt{8^2+\sqrt{16^2+\sqrt{32^2+\cdots}}}}}}=2$
Let $$a_n = \sqrt{1^2+\sqrt{2^2+\sqrt{4^2+…+\sqrt{2^{2n}}}}}.$$ We see that the sequence $(a_n)$ is increasing. Also we have: $$a_n = \sqrt{1^2+\sqrt{2^2+…+\sqrt{2^{2n-2} +\sqrt{2^{2n}}}}} =$$ $$= \sqrt{1^2+\sqrt{2^2+…+\sqrt{2^{2n-2} + 2^n}}} \le $$ $$\le \sqrt{1^2+\sqrt{2^2+…+\sqrt{2^{2n-2} + 2\cdot 2^{n-1}+1}}} =$$ $$= \sqrt{1^2+\sqrt{2^2+…+\sqrt{(2^{n-1}+1)^2}}} = $$ $$ =\sqrt{1^2+\sqrt{2^2+…+\sqrt{2^{2n-4} + 2^{n-1}+1}}}=$$ $$=\sqrt{1^2+\sqrt{2^2+…+\sqrt{2^{2n-6} + 2^{n-2}+1}}}=$$ $$=… =\sqrt{1^2+ 2\cdot 1+1}=2$$ So, $a_n$ is increasing and bounded, then it converges to some $a$ . We want to prove that $a=2.$ $a_n \sim 2$ is equivalent to $$2^{2n}\sim (…((((2^2-1^2)^2-2^2)^2-4^2)^2-8^2)^2-…-2^{2n-2})^2.$$ It’s easy to show by induction that the RHS $=(2^{n}+1)^2$ . Indeed, $$((2^{n}+1)^2-2^{2n})^2=(2^{2n}+2\cdot2^n+1-2^{2n})^2= (2^{n+1}+1)^2.$$ And since $2^{2n}\sim (2^{n}+1)^2$ , we’re done.
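A sketch evaluating the truncated radical $a_n$ from the inside out, illustrating the monotone convergence to $2$ (the truncation depth $40$ is an arbitrary choice):

```python
import math

def a(n):
    # innermost term is sqrt(2^(2n)) = 2^n; the k-th level adds 4^k = (2^k)^2
    value = math.sqrt(2 ** (2 * n))
    for k in range(n - 1, -1, -1):
        value = math.sqrt(4 ** k + value)
    return value

approx = a(40)   # a(1) = sqrt(3) ~ 1.732, a(2) ~ 1.957, a(3) ~ 1.996, ...
```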
|radicals|nested-radicals|
0
I'm having trouble understanding why my method of solving a specific combinatorics deck of cards question doesn't work.
QUESTION: A poker hand of five cards is dealt from a standard pack of 52. Find the probability of obtaining two pairs. My method is as follows: 13C1 (choose value of first pair) x 4C2 (choose suits of pair) x 12C1 (choose value of second pair, cannot be same as first pair) x 4C2 (choose suits of second pair) x 11C1 (choose value of last card) x 4C4 (suit of last card). Then put that over 52C5. The answer, however, is (13C2×4C2×4C2×11C1×4), which yields a different result. I understand why the answer is correct but not why my method is incorrect, if anyone can offer a solution it would be heavily appreciated.
I understand why the answer is correct but not why my method is incorrect... Your method regards the order of the first and second pairs as consequential. So $(4♣,4♠,A♥,A♠,\_)$ and $(A♥,A♠,4♣,4♠,\_)$ are counted as two, when they are one and the same. Divide by $2$ to compensate for this and you should get the same answer.
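The double counting can be confirmed directly with `math.comb`:

```python
from math import comb

# book's count: choose the two pair ranks together, then suits, then kicker
correct = comb(13, 2) * comb(4, 2) ** 2 * comb(11, 1) * 4

# OP's count: choosing a "first" and "second" pair orders the two pairs
flawed = comb(13, 1) * comb(4, 2) * comb(12, 1) * comb(4, 2) * 11 * 4

prob = correct / comb(52, 5)   # ~0.0475
```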
|probability|combinatorics|
0
Exercise regarding finite dimensional topological vector spaces
I want to show the following: Let $V$ be a finite-dimensional Hausdorff topological vector space. Let $f: \mathbb{K}^n \rightarrow V$ be a vector space isomorphism. Show that $f$ is a homeomorphism. Since $f$ is a vector space isomorphism, we know that it is bijective. So the only thing left to show is that $f$ and $f^{-1}$ are continuous. Further, since $f$ is an isomorphism between $V$ and $\mathbb{K}^n$ , we can conclude that $\dim V=n$ . Let $B \subseteq V$ be an open subset. I want to show that $f^{-1}(B)$ is open in $\mathbb{K}^n$ . What else do I know: the addition $+:V \times V \rightarrow V$ and scalar multiplication $\cdot:\mathbb{K} \times V \rightarrow V$ are continuous. I do not really know how to continue from here on. Hints/solutions would be appreciated.
I am not sure if it is correct, but here are my thoughts: Let $\{b_1,...,b_n\}$ be a basis of $\mathbb{K}^n$ , a vector space isomorphism is determined by how it maps $\{b_1,...,b_n\}$ to $V$ . Let $x \in \mathbb{K}^n$ , i.e $x=\sum_{k=1}^{n}\lambda_k b_k$ . Then $T(x)=\sum_{k=1}^{n}\lambda_k T(b_k)$ and since the addition in $V$ and scalar multiplication is continuous (V is a topological vector space) we can conclude that $T$ is continuous. A vector space isomorphism is a linear bijection, thus can be inverted. By the same argumentation, $T^{-1} : V \rightarrow \mathbb{K}^n$ is continuous.
|general-topology|functional-analysis|topological-vector-spaces|
0
My solution on series of functions: $\sum_{n=1}^\infty\left[\arcsin\left(\frac{2x^2-1}{x}\right)\right]^n$
Study the punctual, absolute and uniform convergence of the following series of functions: $$\sum_{n=1}^\infty\left[\arcsin\left(\frac{2x^2-1}{x}\right)\right]^n$$ I have done this. Is it correct? 1. Pointwise Convergence : For each fixed $x$ , we need to check if the sequence of functions converges. Let's denote the $n$ -th term of the series as $f_n(x)$ : $$f_n(x) = \left[\arcsin\left(\frac{2x^2-1}{x}\right)\right]^n$$ Since the range of $\arcsin$ is between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$ , and the term inside the $\arcsin$ function tends to infinity as $x$ approaches either $+\infty$ or $-\infty$ , we can see that the series is defined only for $x \in ]-\infty, -1] \cup [1, +\infty[$ . 2. Absolute Convergence: To test absolute convergence, we need to consider the absolute value of the series: $$\sum_{n=1}^\infty \left|\left[\arcsin\left(\frac{2x^2-1}{x}\right)\right]^n\right|$$ Since $\left|\arcsin(y)\right| \leq \frac{\pi}{2}$ for all $y$ , we have: $$\left|\left[\arcsin\left
This is not correct. For example, for $x=1$ one gets $\displaystyle\sum_{n=1}^\infty (\arcsin(1))^n=\displaystyle\sum_{n=1}^\infty \left(\dfrac{\pi}{2}\right)^n=\infty$ . Notice that the series is actually geometric, and will converge iff the ratio is smaller than $1$ in modulus. That will happen when $$\left|\arcsin\left(\dfrac{2x^2-1}{x}\right)\right|<1.$$ Solving these inequalities (multiplying by $x$ or $-x$ respectively, one gets quadratic equations), one obtains that pointwise convergence is only attained at (approximate values) $(-0.948,-0.527)\cup (0.527, 0.948)$ . Absolute convergence is also achieved in that set, since for the values there (why?) $$ \left|\arcsin\left(\dfrac{2x^2-1}{x}\right)\right|=\pm\arcsin\left(\left|\dfrac{2x^2-1}{x}\right|\right)$$ and the set is symmetric with respect to the origin. Uniform convergence is only achieved on compact subsets, as usually happens with geometric series (when $x$ approaches the endpoints of the intervals, the ratio approaches $\pm 1$
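A small numeric check of the convergence interval on $x>0$ ; the endpoint formulas come from solving the two quadratics $2x^2 \pm x\sin(1) - 1 = 0$ mentioned in the answer:

```python
import math

s1 = math.sin(1)
lo = (-s1 + math.sqrt(s1 * s1 + 8)) / 4   # left endpoint, ~0.527
hi = ( s1 + math.sqrt(s1 * s1 + 8)) / 4   # right endpoint, ~0.948

def ratio(x):
    # common ratio of the geometric series at x
    return math.asin((2 * x * x - 1) / x)
```

At the endpoints the ratio hits $\mp 1$ exactly, and strictly inside the interval its modulus stays below $1$ .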
|sequences-and-series|solution-verification|
0
My solution on series of functions: $\sum_{n=1}^\infty\left[\arcsin\left(\frac{2x^2-1}{x}\right)\right]^n$
Study the punctual, absolute and uniform convergence of the following series of functions: $$\sum_{n=1}^\infty\left[\arcsin\left(\frac{2x^2-1}{x}\right)\right]^n$$ I have done this. Is it correct? 1. Pointwise Convergence : For each fixed $x$ , we need to check if the sequence of functions converges. Let's denote the $n$ -th term of the series as $f_n(x)$ : $$f_n(x) = \left[\arcsin\left(\frac{2x^2-1}{x}\right)\right]^n$$ Since the range of $\arcsin$ is between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$ , and the term inside the $\arcsin$ function tends to infinity as $x$ approaches either $+\infty$ or $-\infty$ , we can see that the series is defined only for $x \in ]-\infty, -1] \cup [1, +\infty[$ . 2. Absolute Convergence: To test absolute convergence, we need to consider the absolute value of the series: $$\sum_{n=1}^\infty \left|\left[\arcsin\left(\frac{2x^2-1}{x}\right)\right]^n\right|$$ Since $\left|\arcsin(y)\right| \leq \frac{\pi}{2}$ for all $y$ , we have: $$\left|\left[\arcsin\left
ClearAll["Global`*"]; f[m_] := AsymptoticSum[ ArcSin[(2 x^2 - 1)/x]^n, {n, 1, Infinity}, {x, Sqrt[1/2], m}] f[4] Mathematica code shows that: $$ \text{For } x \text{ centered at }\frac{1}{\sqrt{2}} \text{, } \\ \sum_{n=1}^\infty\left[\arcsin\left(\frac{2x^2-1}{x}\right)\right]^n \sim 4 \left(x-\frac{1}{\sqrt{2}} \right) + \left(16-2 \sqrt{2}\right) \left(x-\frac{1}{\sqrt{2}}\right)^2 -\frac{4}{3} \left(12 \sqrt{2}-59\right) \left(x-\frac{1}{\sqrt{2}}\right)^3 -\frac{4}{3} \left(87 \sqrt{2}-286\right) \left(x-\frac{1}{\sqrt{2}}\right)^4 \cdots $$ ClearAll["Global`*"]; f[m_] := AsymptoticSum[ ArcSin[(2 x^2 - 1)/x]^n, {n, 1, Infinity}, {x, -Sqrt[1/2], m}] f[4] Mathematica code shows that: $$ \text{For } x \text{ centered at }-\frac{1}{\sqrt{2}} \text{, } \\ \sum_{n=1}^\infty\left[\arcsin\left(\frac{2x^2-1}{x}\right)\right]^n \sim +4 \left(x+\frac{1}{\sqrt{2}}\right) +\left(2 \sqrt{2}+16\right) \left(x+\frac{1}{\sqrt{2}}\right)^2 +\frac{4}{3} \left(12 \sqrt{2}+59\right) \left(x+\frac{1}{\sqrt{2}}\right)^3 + \cdots $$
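The leading coefficient $4$ in both expansions can be checked without Mathematica: at $x_0=1/\sqrt{2}$ the common ratio $r(x)=\arcsin\left(\frac{2x^2-1}{x}\right)$ vanishes, the geometric sum equals $r/(1-r)$, and so the linear coefficient is $S'(x_0)=r'(x_0)=2+1/x_0^2=4$. A small Python sketch (my own check, not part of the answer) recovers this by a finite difference:

```python
import math

x0 = 1 / math.sqrt(2)

def r(x):
    # common ratio of the geometric series
    return math.asin((2 * x * x - 1) / x)

def S(x):
    # sum of the geometric series r + r^2 + ... for |r| < 1
    return r(x) / (1 - r(x))

h = 1e-6
slope = (S(x0 + h) - S(x0 - h)) / (2 * h)   # central difference at x0
print(round(slope, 3))                       # ≈ 4, the linear coefficient
```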
|sequences-and-series|solution-verification|
0
Whitney Embedding Theorem (Proposition 6.15 Lee Smooth Manifolds)
In the Compact case of the proof for Whitney's Embedding Theorem presented in John M. Lee's Introduction to Smooth Manifolds, why do we define the function $F$ this way? I don't quite get why we need the last $m$ components, which are just the bump functions, i.e. why do we define the map particularly into $\mathbb{R}^{nm + m}$ . It does seem like the function takes each ball $B_i$ in the finite cover of $M$ and maps it into $\mathbb{R}^{n+1}$ , giving us a map into $(\mathbb{R}^{n+1})^m$ . Is this related in any way? Many thanks in advance.
Let's first look at the map $$ p \mapsto \rho_1(p) \phi_1(p), $$ defined for $p \in B_1$ for the time being. Near the center of $B_1$ , we'll have $\rho_1(p)$ approximately $1$ , so this will look like an embedding of $B_1$ into $\mathbb{R}^n$ . Near the edge of $B_1$ , $\rho_1(p)$ will be almost zero, so this just collapses the outer regions of that ball down towards the origin. Let's take a concrete example. Suppose that you have a 1-manifold --- the real line! --- and a coordinate chart $\phi$ that maps points of some interval in $M$ to $[-1, 1]$ . We'll also build a partition-of-unity function $\rho$ by defining $$ \rho(p) = (1 - \phi(p)^2)^2 $$ ---basically it's "1" at the center of the interval and fades off to 0 by the ends of the interval. It's not $C^\infty$ , but it'll be at least $C^1$ when extended by $0$ outside the chosen interval. What does the image of the interval look like? Under $\phi$ alone, it'd just be the open interval from $-1$ to $1$ -- great! But when you multiply by $\rho$ , the outer parts of the interval get squashed back toward $0$ , so the map $p \mapsto \rho(p)\phi(p)$ is no longer injective, which is exactly why $F$ also carries the bump functions themselves as extra components.
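To make the collapsing concrete, here is a small sketch (my own illustration, not from Lee) of the chart-coordinate map $t \mapsto \rho(t)\,t$, using the bump $\rho(t)=(1-t^2)^2$ assumed above:

```python
# The chart coordinate is t = phi(p) in (-1, 1); the (assumed) bump is
# rho(t) = (1 - t^2)^2: equal to 1 at the center, 0 at t = ±1, and C^1
# when extended by zero. The chart-times-bump map is g(t) = rho(t) * t.
def rho(t):
    return (1 - t * t) ** 2

def g(t):
    return rho(t) * t

print(g(0.0))                       # the center is fixed: 0.0
print(abs(g(1.0)), abs(g(-1.0)))    # both edges collapse to 0.0
print(g(0.2) > g(0.9))              # True: a point near the edge lands closer to 0
```

The last line shows the failure of injectivity: points near the edge land closer to the origin than points near the middle, so the product map alone cannot be an embedding.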
|proof-explanation|differential-topology|
1
Solve $2x^2-3\sqrt{2x^2-7x+7}=7x-3$
Solve $2x^2-3\sqrt{2x^2-7x+7}=7x-3$ $\Rightarrow (2x^2-7x)-3\sqrt{2x^2-7x+7}=-3$ Let $\sqrt{2x^2-7x+7}=y$ so that $2x^2-7x+7=y^2$ , $\Rightarrow (y^2-7)-3y=-3$ or $y^2-3y-4=0$ So $y=4$ or $y=-1$ $\sqrt{2x^2-7x+7}=4$ or $\sqrt{2x^2-7x+7}=-1$ $2x^2-7x+7=16$ or $2x^2-7x+7=1$ $x=\dfrac{9}{2}, -1$ or $x=2, \dfrac{3}{2}$ My question: If I take $x=2$ and plug it in $2x^2-3\sqrt{2x^2-7x+7}=7x-3$ , the equality does not hold. So is this a valid solution or not?
When you take $\sqrt{2x^2-7x+7}=y$ , note that $$2x^2-7x+7=2\left(x-\frac{7}{4}\right)^2+7-\frac{49}{8}\ge \frac{7}{8}\approx 0.9,$$ so $y\ge\sqrt{7/8}>0$ ; hence $y=-1$ is impossible, but $y=4$ has two solutions.
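A direct numeric check (an editorial addition, not part of the answer): the $y=4$ branch gives $2x^2-7x-9=(2x-9)(x+1)=0$, i.e. $x=9/2$ or $x=-1$, and only those two candidates satisfy the original equation:

```python
import math

def lhs(x):
    return 2 * x * x - 3 * math.sqrt(2 * x * x - 7 * x + 7)

def rhs(x):
    return 7 * x - 3

def is_root(x, tol=1e-9):
    return abs(lhs(x) - rhs(x)) < tol

# candidates from the two branches y = 4 and y = -1
print([x for x in (4.5, -1.0, 2.0, 1.5) if is_root(x)])   # [4.5, -1.0]
```

The $y=-1$ candidates $x=2$ and $x=3/2$ fail because they satisfy $\sqrt{2x^2-7x+7}=+1$, not $-1$.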
|algebra-precalculus|
0
Prove that this is a rhombus.
Show that a convex quadrilateral with vertices ABCD is a rhombus if the following triangles have equal perimeters: ABP, BCP, CDP, DAP. P is where the diagonals cross. $AP = a$ $BP = b$ $CP = c$ $DP = d$ $AB + a + b = BC + b + c = CD + c + d = DA + d + a$ This means there are 6 equations: 1. $AB + a + b = BC + b + c$ $AB + a = BC + c$ 2. $BC + b + c = CD + c + d$ $BC + b = CD + d$ 3. $CD + c + d = DA + d + a$ $CD + c = DA + a$ 4. $DA + d + a = AB + a + b$ $DA + d = AB + b$ 5. $AB + a + b = CD + c + d$ 6. $BC + b + c = DA + d + a$ I was hoping that massaging these equations would lead to all sides being equal, or at least that a=c or b=d. But all that I was able to get was: 1.-3. : $AB + CD = BC + DA$ No matter how I combine any of the equations, it all leads to this one. Well, except for these two: $AB + a + b + BC + b + c = CD + c + d + DA + d + a$ $AB + a + 2b + c + BC = CD + c + 2d + a + DA$ $AB + 2b + BC = CD + 2d + DA$ $AB + a + b + DA + d + a = BC + b + c + CD + c + d$ $AB + 2a + b + d + DA = BC + 2c + b + d + CD$ $AB + 2a + DA = BC + 2c + CD$
Not my proof, but I like it. Assume WLOG that $|AP|\le |CP|$ and $|BP|\le |DP|$ . Rotate $A$ and $B$ by $180^\circ$ about $P$ to get $A'$ and $B'$ ; then $\triangle A'PB'$ is congruent to $\triangle APB$ , so it has the same perimeter as $\triangle CPD$ . But $A'$ lies on segment $PC$ and $B'$ lies on segment $PD$ , so $\triangle A'PB'$ is contained in $\triangle CPD$ and its perimeter can equal that of $\triangle CPD$ only when $A'=C$ and $B'=D$ . Hence $P$ is the midpoint of both diagonals, so $ABCD$ is a parallelogram; comparing the perimeters of $\triangle ABP$ and $\triangle BCP$ then gives $|AB|=|BC|$ , and it follows that $$|AB|=|BC|=|CD|=|DA|.$$
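As a sanity check on the converse direction (an illustration, not part of the proof), in an actual rhombus the four triangle perimeters are indeed all equal:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# A rhombus with diagonals along the axes; they cross at P = (0, 0).
A, B, C, D = (3, 0), (0, 1), (-3, 0), (0, -1)
P = (0, 0)

perims = [dist(X, Y) + dist(Y, P) + dist(P, X)
          for X, Y in ((A, B), (B, C), (C, D), (D, A))]
print(all(abs(q - perims[0]) < 1e-12 for q in perims))   # True
```

Each perimeter is $\sqrt{10}+3+1$, since every side has length $\sqrt{10}$ and $P$ bisects both diagonals.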
|geometry|systems-of-equations|
0
Counting odd smooth numbers
Let $P(n)$ be the largest prime factor of $n$ , and let $\Psi(x,B) = |\{ n \mid n \leq x \wedge P(n) \leq B\}|$ . (This is a well-studied function in analytic number theory.) Define $\Psi'(x,B) = | \{ n \mid n \leq x \wedge P(n) \leq B \wedge \mbox{$n$ odd}\}|$ . Is there a good estimate for $\Psi'(x,B)$ , or for the ratio $\Psi'(x,B)/\Psi(x,B)$ ? The answer to this post shows how to prove that $\Psi(x, B) \sim \frac{1}{\pi(B)!} \cdot \prod_{p \leq B} \frac{\log x}{\log p}$ . If I repeat the argument to try to estimate $\Psi'(x,B)$ I get $\Psi'(x,B) \sim \frac{1}{(\pi(B)-1)!} \cdot \prod_{2 < p \leq B} \frac{\log x}{\log p}$ . But then $\Psi'(x,B)/\Psi(x,B) = \frac{\pi(B)\log 2}{\log x} \sim \frac{B \log 2}{(\log B) \cdot (\log x)}$ , which can exceed 1. So that wasn't very useful.
Let $\Psi_{\rm even}(x,B) = |\{n \mid n \leq x \wedge P(n) \leq B \wedge \mbox{$n$ is even}\}|$ and define $\Psi_{\rm odd}$ analogously for odd $n$ , so $\Psi(x,B) = \Psi_{\rm odd}(x,B) + \Psi_{\rm even}(x,B)$ . Note that $\Psi_{\rm even}(x,B) = \Psi(x/2,B)+1$ , since dividing by 2 gives a natural one-to-one correspondence between the even $B$-smooth $n \leq x$ and the $B$-smooth $m \leq x/2$ . (The “+1” is due to the edge case of $n=2$ itself, whose image $m=1$ is not counted by $\Psi$ .) This allows for calculating the asymptotic behavior of $\Psi_{\rm odd}(x,B)/\Psi(x,B)$ .
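A brute-force check of the identity $\Psi_{\rm even}(x,B)=\Psi(x/2,B)+1$ (an editorial addition, under the convention that $n=1$ is not counted; the helper names are ad hoc):

```python
def largest_prime_factor(n):
    # P(n) by trial division; only used for small n here
    p, largest = 2, 1
    while p * p <= n:
        while n % p == 0:
            largest, n = p, n // p
        p += 1
    return max(largest, n) if n > 1 else largest

def psi(x, B, parity=None):
    # count 2 <= n <= x with P(n) <= B, optionally restricted by n mod 2
    return sum(1 for n in range(2, int(x) + 1)
               if largest_prime_factor(n) <= B
               and (parity is None or n % 2 == parity))

print(psi(10**4, 7, parity=0) == psi(10**4 // 2, 7) + 1)   # True
```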
|analytic-number-theory|
0
If $f(z)$ is analytic everywhere and $\lim_{|z| \to \infty} |f(z)| = 0$, then $f(z)$ is bounded?
Is that true? Because if $f$ is analytic, then there would be no points of discontinuity, hence it would not go to infinity anywhere.
By Liouville's theorem, if $f$ is not constant, then its image $f(\Bbb C)$ is dense in $\Bbb C$ . But the hypothesis makes $f$ bounded outside some ball, and it is also bounded on that (compact) ball, so the image is bounded and cannot be dense. Contradiction. Or, being entire implies continuity everywhere. As suggested, mere continuity implies the image of any compact set is compact, hence bounded. And if $\lim_{|z|\to \infty}|f(z)| =0,$ then, given $\epsilon \gt0,$ there is a radius $r$ such that outside $\bar B(0,r)$ we have $|f(z)|\lt\epsilon.$ Putting these together, we get $|f(z)| \le M$ for all $z\in\Bbb C,$ where $M:=\max\left(\sup_{z\in \bar B(0,r)}|f(z)|,\ \epsilon\right).$ Thus $f$ is bounded.
|complex-analysis|
1
Solve $2x^2-3\sqrt{2x^2-7x+7}=7x-3$
Solve $2x^2-3\sqrt{2x^2-7x+7}=7x-3$ $\Rightarrow (2x^2-7x)-3\sqrt{2x^2-7x+7}=-3$ Let $\sqrt{2x^2-7x+7}=y$ so that $2x^2-7x+7=y^2$ , $\Rightarrow (y^2-7)-3y=-3$ or $y^2-3y-4=0$ So $y=4$ or $y=-1$ $\sqrt{2x^2-7x+7}=4$ or $\sqrt{2x^2-7x+7}=-1$ $2x^2-7x+7=16$ or $2x^2-7x+7=1$ $x=\dfrac{9}{2}, -1$ or $x=2, \dfrac{3}{2}$ My question: If I take $x=2$ and plug it in $2x^2-3\sqrt{2x^2-7x+7}=7x-3$ , the equality does not hold. So is this a valid solution or not?
You have assumed $\sqrt{2x^2-7x+7}=y$ . You should note that the output of a square root is always non-negative. Therefore, $y$ must be non-negative throughout your solution, so $y=-1$ isn't valid. Therefore, you should have solved only the $y=4$ part. This would give you the correct roots.
|algebra-precalculus|
1
A stronger version of discrete "Liouville's theorem"
If a function $f : \mathbb Z\times \mathbb Z \rightarrow \mathbb{R}^{+} $ satisfies the following condition $$\forall x, y \in \mathbb{Z}, \quad f(x,y) = \dfrac{f(x + 1, y)+f(x, y + 1) + f(x - 1, y) +f(x, y - 1)}{4},$$ then is $f$ a constant function?
Let $S$ be the set of harmonic functions $f:\mathbb{Z}^d \to [0,+\infty)$ with the constraint $f(0)\in [0,1]$ . For any $x,y\in \mathbb{Z}^d$ , let $d(x,y)=\sum_{j=1}^d |x_j-y_j|$ , i.e. we use the taxicab metric. For any $x,y \in \mathbb{Z}^d$ with $d(x,y)=1$ , the harmonicity and non-negativity of $f$ implies that $f(y)\leq (2d)f(x)$ , so as a corollary $f(x)\in [0,(2d)^{d(x,0)}]$ and also " $f\in S$ has a zero" $\Leftrightarrow f\equiv 0$ . If we now endow the vector space of functions with domain $\mathbb{Z}^d$ and co-domain $\mathbb{R}$ with the norm $$\|g\| = \sup_{x\in \mathbb{Z}^d} (4d)^{-d(x,0)}|g(x)|,$$ then $S$ is a compact, convex subset. Let $f\in S$ be an arbitrary extreme point of $S$ . First consider the case where $f$ has a zero: then, as previously discussed, $f\equiv 0$ and we are done. In the other case, we have $$f(.)=\sum_{j=1}^d \left[(2d)^{-1}f(e_j)\right]\underbrace{\left[f(.+e_j)/f(e_j)\right]}_{\in S}+\sum_{j=1}^d\left[(2d)^{-1}f(-e_j)\right]\underbrace{\left[f(.-e_j)/f(-e_j)\right]}_{\in S},$$ which exhibits $f$ as a convex combination of elements of $S$ (padding with the zero function, since the coefficients sum to $f(0)\leq 1$ ); extremality then forces $f(.+e_j)=f(e_j)\,f(.)$ for every $j$ , and harmonicity then implies $f$ is constant.
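The positivity hypothesis is essential: $\mathbb{Z}^2$ carries plenty of non-constant discrete harmonic functions, but they change sign. A quick check (my own illustration, not part of the answer) that $f(x,y)=x$ and $g(x,y)=xy$ both satisfy the averaging identity:

```python
def is_harmonic(f, pts):
    # check the mean-value identity at each lattice point in pts
    return all(f(x, y) == (f(x + 1, y) + f(x - 1, y)
                           + f(x, y + 1) + f(x, y - 1)) / 4
               for x, y in pts)

f = lambda x, y: x        # harmonic on Z^2, but takes negative values
g = lambda x, y: x * y    # harmonic on Z^2, also changes sign

pts = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]
print(is_harmonic(f, pts), is_harmonic(g, pts))   # True True
```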
|analysis|discrete-mathematics|recurrence-relations|
0
Studying the series $\sum_{n=1}^{\infty}\frac{1}{4n^2-1}$
Study the behaviour of the following numerical series: $$\sum_{n=1}^{\infty}\frac{1}{4n^2-1}$$ and in the case of convergence also determine its sum. Solution : To study its convergence, we can try to express each term in a simpler form. Notice that $4n^2 - 1$ can be factored as $(2n + 1)(2n - 1)$ , which suggests using partial fraction decomposition(*). We can write: $$\frac{1}{4n^2 - 1} = \frac{1}{(2n + 1)(2n - 1)} = \frac{1}{2(2n - 1)} - \frac{1}{2(2n + 1)}$$ Now, the given series becomes a telescoping series after decomposition: $$\sum_{n=1}^{\infty}\left(\frac{1}{2(2n - 1)} - \frac{1}{2(2n + 1)}\right)$$ When we expand this series, a lot of terms will cancel out, leaving only the first and last terms: $$\left(\frac{1}{2(2\cdot1 - 1)} - \frac{1}{2(2\cdot1 + 1)}\right) + \left(\frac{1}{2(2\cdot2 - 1)} - \frac{1}{2(2\cdot2 + 1)}\right) + \dots$$ $$= \left(\frac{1}{2(1)} - \frac{1}{2(3)}\right) + \left(\frac{1}{2(3)} - \frac{1}{2(5)}\right) + \dots + \left(\frac{1}{2(2n - 1)} - \frac{1}{2(2n + 1)}\right)$$
The easy way, by re-indexing: $$\sum _{n=1}^{\infty } \frac{1}{4 n^2-1}=\frac{1}{2} \sum _{n=1}^{\infty } \left(\frac{1}{2 n-1}-\frac{1}{2 n+1}\right)=\lim_{k\to\infty}\frac{1}{2} \left(\sum _{n=0}^{k-1} \frac{1}{2 n+1}-\sum _{n=1}^k \frac{1}{2 n+1}\right)=\lim_{k\to\infty}\frac{1}{2}\left(1-\frac{1}{2k+1}\right) = \frac{1}{2} $$
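Numerically (an editorial check, not part of the answer), the partial sums match the telescoped closed form $S_N=\frac12-\frac{1}{2(2N+1)}$ and approach $\frac12$:

```python
def partial_sum(N):
    return sum(1.0 / (4 * n * n - 1) for n in range(1, N + 1))

def closed_form(N):
    # from the telescoping: S_N = 1/2 - 1/(2(2N + 1))
    return 0.5 - 1.0 / (2 * (2 * N + 1))

for N in (1, 10, 1000):
    print(N, partial_sum(N), closed_form(N))
```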
|sequences-and-series|
1
Interpretation Theorem
The Interpretation Theorem is the following excerpt from Kunen's old Set Theory 8. Appendix $1$ : More on relativization We sketch here a more formal treatment than that in $\S 2$ . There is a general notion of relativization in logic, but we shall discuss only the special case of interest for set theory. A relative interpretation of set theory into itself consists of two formulas, $\mathbf{M}(x,v)$ and $\mathbf{E}(x,y,v)$ , with no free variables other than the ones shown. We think of $v$ as a parameter defining the class $\{ x: \mathbf{M}(x,v) \}$ with binary relation $\{ \langle x, y \rangle : \mathbf{E}(x,y,v) \}$ . If $\phi$ is a formula, we define $\phi^{\mathbf{M}, \mathbf{E}}$ by replacing $x \in y$ by $\mathbf{E}(x,y,v)$ and restricting the bound variables to range over $\mathbf{M}$ . In $\S\S 1-7$ , $\mathbf{E}(x,y,v)$ was always $x \in y$ . In the case of $\mathbf{WF}$ , where we discuss a fixed model, the parameter $v$ does not appear in $\mathbf{M}$ . However, when discussing
I'm going to give you the model-theoretic proof, despite the fact that you said in the comments that it's not what you're looking for, just so you can see how obvious the statement is when you look at it from a semantic point of view. And I can't help making a meta-comment: of course there are good reasons to try to prove things in weak meta-theories. But in my opinion, it's only worth worrying about what meta-theoretic principles are necessary to develop a body of mathematics after you already understand that body of mathematics. I would strongly recommend when first learning logic to allow yourself to use all the tools of ordinary mathematics when proving things. That way you can more easily develop intuition and clearly see which things are easy and which are more significant results. First, let's be clear about what the completeness theorem says. For any first-order theory $T$ and any sentence $\varphi$ , $T\models \varphi$ if and only if $T\vdash \varphi$ . One direction of this biconditional
|logic|set-theory|first-order-logic|natural-deduction|formal-proofs|
0