| title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64) |
|---|---|---|---|---|
Can a shape be both similar and congruent?
|
I know that congruent shapes have the same size and that rotations don't matter. I just want to know if congruent shapes can also be similar at the same time. I've seen that similar shapes are shapes of possibly different size, with proportional sides and the same angles. So is it safe to say that congruent shapes can also be similar?
|
Congruence, by definition, implies that the shapes are identical in terms of side lengths, dimensions and angles. For two triangles with side lengths $a,b,c$ and $a',b',c'$ , congruence gives $a=a'$ , $b=b'$ , $c=c'$ . Similarity implies that the relationship between side lengths is identical for each corresponding pair of sides, i.e. the ratio is the same between $a$ and $a'$ , $b$ and $b'$ , and $c$ and $c'$ : here $a/a' = b/b' = c/c'$ . If $a=a'$ , $b=b'$ , etc., then clearly the ratio is shared for each corresponding side, i.e. the ratio is always $1$ . Hence, congruent implies similar, and congruence is the stronger geometric property. Side lengths $2,3,4$ and $4,6,8$ produce a similar pair of triangles, but not a congruent pair (note $1,2,3$ would not do, since $1+2=3$ gives a degenerate triangle). Hence, similar does not imply congruent.
|
|geometry|
| 0
|
Is a C*-correspondence just a Hilbert space you get from the Gelfand-Naimark theorem?
|
A $C^*$-correspondence over a $C^*$-algebra $A$ is a (right) Hilbert $A$-module (so a Hilbert space) $H$ together with a faithful representation $A\rightarrow B(H)$ . Am I right in understanding that a $C^*$-correspondence is just a Hilbert space that you get from the Gelfand-Naimark theorem? Just to know more, is it possible to have multiple $C^*$-correspondences over a single $C^*$-algebra? I think yes. I would be happy to see some examples. I am also interested in knowing about any literature where they consider a family of $C^*$-correspondences for whatever purposes.
|
Let $A,B$ be $C^*$ -algebras. An $A$ - $B$ - $C^*$ -correspondence consists of a right Hilbert $B$ -module $\mathcal{E}$ together with a (non-degenerate) $*$ -homomorphism $$\pi: A \to \mathcal{L}_B(\mathcal{E}).$$ Thus, $A$ - $\mathbb{C}$ -correspondences correspond to (non-degenerate) $*$ -representations of $A$ on Hilbert spaces. This motivates the notion of $C^*$ -correspondences, as it is natural to replace Hilbert spaces by Hilbert modules (and sometimes this is really necessary, as the representation theory of $C^*$ -algebras on Hilbert spaces is not always strong enough to capture relevant information). The underlying idea here is simple: $\mathcal{E}$ has (by definition) a right $B$ -action. The existence of the $*$ -morphism $\pi$ means that $\mathcal{E}$ also carries a left $A$ -action, namely $$a.\xi :=\pi(a)\xi, \quad a \in A, \xi \in \mathcal{E}$$ that is compatible with the right $B$ -module structure. In other words, an $A$ - $B$ - $C^*$ -correspondence should be though
|
|hilbert-spaces|operator-algebras|c-star-algebras|
| 0
|
Why we cannot draw the graph of a three-dimensional function?
|
I am a mathematical beginner. As we all know, the graph of a one-dimensional function is a curve, and the graph of a two-dimensional function is a surface. What is the graph of a three-dimensional function? For example, $f(x,y,z)=x^2+y^3+5xyz$ . Someone said that we cannot draw the graph of a three-dimensional function. Could someone explain why? Thank you in advance for your kind help.
|
I mean, the answer is we can visualise them, but it's just not possible to draw them! Obviously, the limitation here is that the graph of a three-dimensional function actually delves into the fourth dimension, whereas we obviously live in a 3D space, and so it's not possible for us to draw such a function. However, a nice representation of a three-dimensional function could be to set one of the variables to time, and see how the shape of the graph changes throughout time (which closely links to the idea of level sets). That is not to say that we can't analyse such functions; it is just significantly harder to visualise them.
|
|graphing-functions|
| 0
|
Degeneracy in Linear Programming and Multi-Objective/Hierarchical Optimization
|
Hello, fellow enthusiasts of mathematical optimization! I'm delving into a fascinating scenario within the realm of linear programming (LP) and am curious about degenerate solutions and their potential in multi-objective or hierarchical optimization contexts. Consider the following setup: Let us define vectors $x, c_1 \in \mathbb{R}^n$ , a matrix $A \in \mathbb{R}^{m \times n}$ , and vectors $l, u \in \mathbb{R}^m$ , where the convex polyhedral set $X = \{x \in \mathbb{R}^n : l \leq Ax \leq u\}$ is non-empty and compact. Given this, we explore a degenerate linear program: $$ \min_x\; c^T_1 x \quad \text{s.t.} \quad l \le Ax \le u $$ that is, the minimum of $c_1^T x$ over $X$ is attained on a subset $X_1 \subseteq X$ , specifically, a face of the convex polyhedron $X$ . Here's where it gets intriguing: this scenario, often seen as undesirable due to its degeneracy, opens up a unique opportunity. What if we could use this "degenerate" solution space, $X_1$ , to then optimize a second cost function, $c_2^T x$ , under the s
|
I think you are describing a special case of linear bilevel programming, and this book could serve as a starting point: A Gentle and Incomplete Introduction to Bilevel Optimization by Yasmine Beck and Martin Schmidt. See especially Section 6 for some algorithms designed for linear bilevel problems.
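The two-stage idea from the question can be sketched on a toy finite feasible set standing in for the polyhedron $X$ (the names `X`, `c1`, `c2` and the data are illustrative, not from the question): first minimize $c_1$, then minimize $c_2$ over the optimal face $X_1$.

```python
from itertools import product

# Hierarchical (lexicographic) optimization on a finite toy stand-in for X.
X = [(x, y) for x, y in product(range(5), repeat=2) if x + y <= 4]

c1 = lambda p: p[0]     # degenerate first objective: many minimizers
c2 = lambda p: -p[1]    # second objective, optimized over the optimal face

v1 = min(c1(p) for p in X)
X1 = [p for p in X if c1(p) == v1]   # the "degenerate" solution set X1
best = min(X1, key=c2)               # second-stage optimum over X1

print(len(X1), best)  # 5 minimizers of c1; (0, 4) after the second stage
```

In a real LP the second stage would be solved with the extra constraint $c_1^T x = v_1$ appended to the original constraints, which is exactly the lower-level structure the bilevel literature formalizes.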
|
|optimization|reference-request|convex-optimization|
| 0
|
Proving symmetric matrices are diagonalizable
|
I'm at a loss as to how to show this. I know that this is implied by the Spectral Theorem, but I'm not sure exactly how to show it in a simple, straightforward proof. I've tried laying down what I know about symmetric matrices and diagonalizable ones, but I'm not too sure what to use when. My guess is I'd have to reason with regard to the eigenvalues, but again I'm not sure. A matrix is symmetric if $A = A^T$. A symmetric matrix $A$ can be expressed in the form $A = QDQ^{T}$ (with $Q$ orthogonal and $D$ diagonal). A square matrix $A$ is called diagonalizable if $\exists$ invertible $P$ such that $P^{-1}AP$ is a diagonal matrix. Would really appreciate any help.
|
Let $A \in \mathbb{R}^{n \times n}$ be a symmetric matrix. We prove the claim by induction on $n$. For $n=1$, the statement is trivially true. Let $\left(\lambda_1, x_1\right), \left(\lambda_2, x_2\right), \dots, \left(\lambda_k, x_k\right)$ be eigenpairs of $A$ with the $x_i$ 's being independent. Thus we have, \begin{align} A \left[x_1| x_2| \dots |x_k\right] &= \left[Ax_1|Ax_2|\dots|Ax_k\right] \\ &= \left[ \lambda_1 x_1|\lambda_2 x_2|\dots|\lambda_k x_k \right ] \\ &= \left[x_1|x_2| \dots |x_k\right] \begin{bmatrix} \lambda_1 & & \\ &\lambda_2& \\ & & \ddots \\ & & & \lambda_k \end{bmatrix} \end{align} Since $A$ is symmetric, we can take the $x_i$ 's to be orthonormal. Let $ \left \{y_1, y_2, \dots, y_{n-k} \right\}$ be a set of orthonormal vectors such that $\left\{ x_1, x_2, \dots, x_k, y_1, y_2, \dots, y_{n-k}\right \}$ forms an orthonormal basis for $\mathbb{R}^n$ . Let $P_1 = \begin{bmatrix} x_1 &x_2 &\dots &x_k \end{bmatrix} \in \mathbb{R}^{n \times k}$ , $P_2 = \begin{bmatrix} y_1 &y_2 &\dots &y_{n-k} \end{bmatrix} \in \mathbb{R}^{n \times (n-k)}$ and
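As a concrete sanity check of the decomposition $A = QDQ^T$, here is a hand-computed orthonormal eigenbasis for one specific $2\times 2$ symmetric matrix (plain Python, no libraries; the matrix is my own example):

```python
import math

# Check A = Q D Q^T for A = [[2,1],[1,2]], eigenvalues 1 and 3 with
# orthonormal eigenvectors (1,-1)/sqrt(2) and (1,1)/sqrt(2).
A = [[2.0, 1.0], [1.0, 2.0]]
s = 1 / math.sqrt(2)
Q = [[s, s], [-s, s]]              # columns: eigenvectors for 1 and 3
D = [[1.0, 0.0], [0.0, 3.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

QT = [[Q[j][i] for j in range(2)] for i in range(2)]
R = matmul(matmul(Q, D), QT)
ok = all(abs(R[i][j] - A[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok)  # True: Q D Q^T reproduces A
```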
|
|linear-algebra|matrices|linear-transformations|
| 0
|
What are some subanalytic sets which are not semianalytic?
|
I'm working with sheaf theory for the most part, and subanalytic sets are an integral part of the field. However, I'm struggling to get an intuition for subanalytic sets. I know, because I've seen it stated several times now, that they are a necessary concept to work with (at least if we're looking to use o-minimality; otherwise, honestly, I still don't quite get the need for them). The argument I've seen used for their utility is the breakdown of the Tarski-Seidenberg Theorem, meaning essentially that the semianalytic property isn't preserved under projections. However, I have never seen a proof of this fact, and/or some examples of the differences between semianalytic sets and subanalytic sets. Could anyone provide me with some examples/concrete cases that illustrate the difference between the properties, or at least some references that do so? Or even just some papers where it is proven that the Tarski-Seidenberg Theorem fails for semianalytic sets, with preference for the original paper?
|
So, there is apparently a result by Łojasiewicz (in some lecture notes of his: "Ensembles semi-analytiques", published in 1965) that states the following: Let $X \subset M$ be a subanalytic subset of an analytic manifold. Then, $X$ is semianalytic of dimension $\leq k$ if and only if locally in $M$ , there is an analytic set $Z \subset M$ of dimension $\leq k$ such that $X \subset Z$ , $\overline{X} - X$ is semianalytic of dimension $\leq k-1$ and $X - \text{int}_{Z}(X)$ (that is, $X$ except its interior in $Z$ ) is also semianalytic of dimension $\leq k-1$ . Using this criterion, and the concept of dimension attributed to semianalytic sets (using stratification with some more semianalytic sets), an argument by Osgood (in 1916, in an article called "On functions of several complex variables", page 2) can be used, as follows: Consider the projection \begin{align*} \pi: \mathbb{R}^{4} & \rightarrow \mathbb{R}^{3}\\ (x,y,z,w)&\mapsto (x,y,w), \end{align*} and the semianalytic set (in fact
|
|intuition|sheaf-theory|
| 1
|
Triangulation of a convex n-gon so that all triangles share a side with the polygon
|
So I was reading A Path to Combinatorics for Undergraduates by Titu Andreescu and Zuming Feng, and I came across this question: Let $n$ be an integer greater than $4$, and let $P_1 P_2 \ldots P_n$ be a convex $n$-sided polygon. Zachary wants to draw $n-3$ diagonals that partition the region enclosed by the polygon into $n-2$ triangular regions and that may intersect only at the vertices of the polygon. In addition, he wants each triangular region to have at least $1$ side that is also a side of the polygon. In how many ways can Zachary do this? After researching a bit I found out that $C_{n-2}$ (the $(n-2)^{\text{th}}$ Catalan number) counts the number of triangulations of a convex $n$-sided polygon, but I don't know how to account for the triangulations that have triangles which don't have a side in common with the polygon. I would really appreciate it if someone could help me solve the problem.
|
Here is a slightly different reasoning from the older answer presented here. We will prove that there are $n\cdot 2^{n-5}$ ways to triangulate a convex $n$-sided polygon in such a way that every triangle has a side in common with the polygon. Each triangulation has at least one ear . (If $n>3$ a polygon has at least $2$ ears. If we consider the dual graph of the triangulation , it is easy to show that $t_2-t_0=2$ , where $t_2$ is the number of ears and $t_0$ is the number of triangles which have $0$ common sides with the polygon. $t_0=0$ is our case, so there are exactly $2$ ears if $n>3$ . We won't really use this fact though. Or rather, we will prove the same statement in the process.) There are $n$ ways of choosing where the first ear of the triangulation will be for a convex polygon, since any pair of neighbouring edges can be chosen. Let us call the farthest vertices of these edges A and B. It is clear that there must be a diagonal drawn either from A or from B (but not both). Moreover, if a diagona
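The count $n\cdot 2^{n-5}$ can be confirmed by brute force for small $n$: enumerate all triangulations with the standard base-edge recursion and keep those in which every triangle uses at least one polygon side. (A quick enumeration sketch; the function names are mine, not from the answer.)

```python
# Brute-force check of the count n * 2^(n-5) for small n.
def triangulations(v):
    # all triangulations of the convex polygon with vertex tuple v,
    # via the standard recursion on the base edge (v[0], v[-1])
    if len(v) == 3:
        return [[tuple(sorted(v))]]
    out = []
    for i in range(1, len(v) - 1):
        tri = tuple(sorted((v[0], v[i], v[-1])))
        lefts = triangulations(v[:i + 1]) if i >= 2 else [[]]
        rights = triangulations(v[i:]) if len(v) - i >= 3 else [[]]
        for L in lefts:
            for R in rights:
                out.append(L + [tri] + R)
    return out

def good_count(n):
    sides = {tuple(sorted((i, (i + 1) % n))) for i in range(n)}
    def has_side(tri):
        a, b, c = tri
        return any(tuple(sorted(p)) in sides for p in ((a, b), (b, c), (a, c)))
    return sum(all(has_side(t) for t in T)
               for T in triangulations(tuple(range(n))))

print([good_count(n) for n in (5, 6, 7, 8)])  # [5, 12, 28, 64] = n * 2^(n-5)
```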
|
|combinatorics|
| 0
|
Question About Nested Interval Property
|
This is a theorem from Abbott's Real Analysis textbook. Theorem 1.4.1 (Nested Interval Property) For each $n\in N,$ assume we are given a closed interval $I_n = [a_n,b_n] = \{x\in R : a_n\leq x\leq b_n \}$ . Assume also that each $I_n$ contains $I_{n+1}$ . Then the resulting sequence of closed intervals $I_1\subseteq I_2\subseteq I_3\subseteq I_4\subseteq \cdots$ has a nonempty intersection; that is, $\bigcap\limits_{n=1}^{\infty} I_{n} \neq \emptyset$ QUESTION HERE: Wikipedia says that given an interval $I_n$ and $I_{n+1}$ , $I_{n+1}$ is a subset of $I_n$ , which makes total sense. But the theorem reads "the resulting sequence of closed intervals" and below it says $I_1$ is a subset of $I_2$ , and so on and so forth, which is not true. Am I misreading this?
|
The book says $$I_1 ⊇ I_2 ⊇ I_3 ⊇ I_4 ⊇···$$ (you have it flipped)
|
|real-analysis|notation|
| 0
|
Limit value of logarithm term using sum-representation
|
I want to correctly prove the following statement: \begin{equation} \lim_{x \rightarrow \infty} \frac{1}{x \cdot \ln\left(\frac{x + a}{x - a}\right)} = \frac{1}{2a} \ , \ a \in \mathbb{R}^{\ast+} \end{equation} This should at least be true for $a \in \mathbb{R}^{\ast+}$ , but I have no idea how to correctly prove it, as the logarithm rules don't seem to be of any help here. When using the sum-representation of the logarithm for the denominator we should get: \begin{equation} x \cdot \left(\frac{2a}{(x-a)} - \frac{2a^2}{(x-a)^2} + \frac{8a^3}{3(x-a)^3} - \frac{4a^4}{(x-a)^4} + HOT\right) \end{equation} While the first term resembles the statement we are looking for, can we actually just say that the rest of the terms tend to zero "fast" enough? And what would happen if $a$ tends to $\infty$ as well? Edit: changed the statement so that it is mathematically correct. The previous statement was: \begin{equation} \lim_{x \rightarrow \infty} \frac{1}{\ln\left(\frac{x + a}{x - a}\right)} = \frac{x}
|
Consider $$ f(x)=\frac{1}{\ln\left(\frac{x+a}{x-a} \right)} = \frac{1}{\ln\left(1+\frac{2}{\frac{x}{a}-1} \right)} $$ With the variable change $t=\frac{x}{a}$ , we have $$ g(t) = \frac{1}{\ln\left(1+\frac{2}{t-1} \right)} $$ Now we're interested in the value of $\lim_{t \to \infty} g(t)/t$ . Using the Taylor expansion $$ \ln(1+s) = s - \frac{s^2}{2} + \frac{s^3}{3} - \ldots $$ we get $\ln(1+s) \approx s$ as $s \to 0$ . Therefore, for large $t$ , $$ \ln \left(1+\frac{2}{t-1} \right) \approx \frac{2}{t} $$ Combining the previous points, we get $$ \frac{1}{\ln\left(1+\frac{2}{t-1} \right)} \approx \frac{t}{2} $$ and $$ \lim_{t \to \infty} \frac{g(t)}{t} = \frac{1}{2} $$ Sorry for the slightly hand-wavy points, but I think this approach should yield the result.
|
|limits|logarithms|
| 0
|
Limit value of logarithm term using sum-representation
|
I want to correctly prove the following statement: \begin{equation} \lim_{x \rightarrow \infty} \frac{1}{x \cdot \ln\left(\frac{x + a}{x - a}\right)} = \frac{1}{2a} \ , \ a \in \mathbb{R}^{\ast+} \end{equation} This should at least be true for $a \in \mathbb{R}^{\ast+}$ , but I have no idea how to correctly prove it, as the logarithm rules don't seem to be of any help here. When using the sum-representation of the logarithm for the denominator we should get: \begin{equation} x \cdot \left(\frac{2a}{(x-a)} - \frac{2a^2}{(x-a)^2} + \frac{8a^3}{3(x-a)^3} - \frac{4a^4}{(x-a)^4} + HOT\right) \end{equation} While the first term resembles the statement we are looking for, can we actually just say that the rest of the terms tend to zero "fast" enough? And what would happen if $a$ tends to $\infty$ as well? Edit: changed the statement so that it is mathematically correct. The previous statement was: \begin{equation} \lim_{x \rightarrow \infty} \frac{1}{\ln\left(\frac{x + a}{x - a}\right)} = \frac{x}
|
Start with the reciprocal $$\log \left(\frac{x+a}{x-a}\right)$$ Using long division $$\frac{x+a}{x-a}=1+\frac{2 a}{x}+\frac{2 a^2}{x^2}+\frac{2 a^3}{x^3}+\frac{2 a^4}{x^4}+O\left(\frac{1}{x^5}\right)$$ By Taylor $$\log \left(\frac{x+a}{x-a}\right)=\frac{2 a}{x}+\frac{2 a^3}{3 x^3}+O\left(\frac{1}{x^5}\right)$$ Long division $$\frac 1 {\log \left(\frac{x+a}{x-a}\right) }=\frac{x}{2 a}-\frac{a}{6 x}+O\left(\frac{1}{x^3}\right)$$
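A quick numerical sanity check of the resulting limit (the value $a=3$ is an arbitrary choice):

```python
import math

# x * ln((x+a)/(x-a)) -> 2a, so the reciprocal tends to 1/(2a).
a = 3.0
for x in (1e3, 1e6, 1e9):
    val = 1 / (x * math.log((x + a) / (x - a)))
    print(x, val)
print(1 / (2 * a))  # the claimed limit
```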
|
|limits|logarithms|
| 0
|
Are there 45 unital magmas with three elements (up to isomorphism)?
|
How many unital magmas (magma with an identity element) with three elements are there (up to isomorphism)? My approach: List out all of the possible 2x2 multiplication tables for the two non-identity elements. There are $3^{4} = 81$ of these. Extend these 81 tables by adding in the rows and columns for the identity element. Manually identify isomorphisms, by re-labelling the two non-identity elements. I found that of the 81 tables, there are 36 isomorphic pairs of tables, plus 9 more tables where re-labelling simply gives the same table again. Thus, I conclude that, up to isomorphism, there are $36 + 9 = 45$ unital magmas with three elements. Is this correct? And is there a more efficient approach to obtaining the solution?
|
This is correct. I would phrase this in terms of group actions, although it sounds like my argument is pretty much the same as yours. Let the underlying set be $M=\{1,x,y\}$ . Every unital magma with $3$ elements is isomorphic to a unital magma with underlying set $M$ and $1$ being the unit by appropriately labeling the elements. The number of unital magma structures on $M$ is the number of maps $\{x,y\}\times\{x,y\}\rightarrow M$ (because the rest of the multiplication is determined by the unit axiom and there are no further restrictions), of which there are $3^{2\cdot2}=81$ many. To determine how many of these are isomorphic, note that an isomorphism of unital magmas necessarily preserves the unit, so the only possible isomorphism is always given by the involution $\tau\colon M\rightarrow M$ that fixes $1$ and switches $x$ and $y$ (and every multiplication determines one that is isomorphic to it via $\tau$ by transport of structure, i.e. $m\mapsto\tau\circ m\circ(\tau\times\tau)$ ). Th
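The orbit count can be verified mechanically by enumerating all 81 tables and identifying each with its transport of structure along $\tau$ (a quick enumeration sketch; variable names are mine):

```python
from itertools import product

# Count unital magma structures on {1, x, y} up to the swap of x and y.
elems = ['1', 'x', 'y']
tables = []
for vals in product(elems, repeat=4):   # images of (x,x),(x,y),(y,x),(y,y)
    m = {('x', 'x'): vals[0], ('x', 'y'): vals[1],
         ('y', 'x'): vals[2], ('y', 'y'): vals[3]}
    for e in elems:                      # unit axiom fills in the rest
        m[('1', e)] = e
        m[(e, '1')] = e
    tables.append(m)

tau = {'1': '1', 'x': 'y', 'y': 'x'}     # the involution swapping x and y

def transported(m):
    # transport of structure: tau o m o (tau x tau)
    return {(a, b): tau[m[(tau[a], tau[b])]] for a in elems for b in elems}

classes = set()
for m in tables:
    key = tuple(sorted(m.items()))
    key2 = tuple(sorted(transported(m).items()))
    classes.add(min(key, key2))          # canonical representative per orbit

print(len(tables), len(classes))  # 81 tables, 45 isomorphism classes
```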
|
|abstract-algebra|magma|
| 1
|
$f(x) =\sin (1/x)$. Why is the function $f\colon (0,\infty)\to [-1,1]$ not an open map?
|
The function $g\colon (0,\infty)\to \mathbb{R}$ , defined by $g(x)=\frac{1}{x}$ sends an open interval to an open interval. The sine function $h\colon \mathbb{R}\to [-1,1]$ , $h(x)=\sin x$ is an open map. So I think that the composition $f\colon (0,\infty)\to [-1,1]$ , $f(x)=\sin(\frac{1}{x})$ is an open map, but I found a statement that the function $f$ is continuous, but neither open nor closed. I also directly tried to prove that $f$ is an open map, and I think that the image of an open interval in $(0,\infty)$ is one of the following form: $$(a,b), \quad (b,1], \quad [-1, a), \quad [-1,1]$$ Why is the function $f$ not open?
|
Edit: As Anne Bauval found out, the source is indeed erroneous. See her answer for details and a better explanation, with picture of the source . I'll leave my original answer nonetheless: After a discussion in the comments, I have now decided to post an answer to this question. The function you describe (in a bit of unfortunate notation) is $f:(0,\infty) \to [-1,1], x \mapsto \sin(1/x)$ . Since you specify the codomain, you presumably mean the subspace topology on $[-1,1]$ and therefore the function is an open map, as @AnneBauval correctly pointed out (this is also the only way that $\sin(x)$ can be open in our situation). The difference with your source likely comes from the fact that if you specify $[-1,1]$ as your codomain with the subspace topology then $[-1,1]$ is an open set, whereas the source you looked at presumably used $f:(0,\infty) \to \mathbb{R}, x \mapsto \sin(1/x)$ , in which case $(0,a)$ gets mapped to $[-1,1]$ for every positive $a$ , which is of course not open in $\
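Numerically, the endpoints $\pm1$ are attained arbitrarily close to $0$, which is why the image of every interval $(0,a)$ is all of $[-1,1]$ (and why openness hinges on whether the codomain is $[-1,1]$ or $\mathbb{R}$). A small check (the values of $a$ and $k$ are arbitrary):

```python
import math

# For any a > 0, pick k large enough that 1/(pi/2 + 2*pi*k) < a; there
# sin(1/x) = 1, and similarly -1, so (0, a) maps onto all of [-1, 1].
a = 0.01
k = 100
x_top = 1 / (math.pi / 2 + 2 * math.pi * k)
x_bot = 1 / (3 * math.pi / 2 + 2 * math.pi * k)
assert 0 < x_top < a and 0 < x_bot < a
print(math.sin(1 / x_top), math.sin(1 / x_bot))  # 1 and -1, up to rounding
```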
|
|general-topology|
| 1
|
Problem about $\sin(1/x)$ in topology. (open and closed functions)
|
Let $f:(0,\infty)\to [-1,1]$ be defined by $f(x)=\sin(1/x)$ . Show that $f$ is continuous but neither open nor closed, where $(0,\infty)$ and $[-1,1]$ are subspaces of $\mathbb{R}$ with the usual topology. First, $f$ is continuous, since if $A \subseteq [-1,1]$ is open, then $f^{-1}(A)$ is open, where $A=[-1,1] \lor A=[-1,b) \lor A=(a,1] \lor A=(a,b)$ , with $-1 \leq a < b \leq 1$ . But I don't know how to show that $f$ is neither open nor closed. This problem is in General Topology (Schaum):
|
$f:(0,\infty)\to [-1,1],x\mapsto\sin\frac1x$ is indeed (as suggested by @AlexR) not closed, since the image of the closed subset $[1,\infty)$ is $(0,\sin1]$ , which is not closed in $[-1,1]$ . But $f$ is open, since the image of an open interval in $(0,∞)$ is one of the following form: $$(a,b), \quad (b,1], \quad [-1, a), \quad [-1,1]$$ and all of them are open in $[-1,1]$ .
|
|general-topology|
| 0
|
If $\vert d\vert\geq 3$, is $-1+\sqrt{d}$ an irreducible element of $\mathbb{Z}[\sqrt{d}]$?
|
Let $d\in\mathbb{Z}$ be an integer which is not a square (it does not have to be squarefree, though). Question. Assume that $\vert d\vert\geq 3$ to avoid special cases. Is it true that $\pi=-1+\sqrt{d}$ is irreducible in $R=\mathbb{Z}[\sqrt{d}]$ ? Remark. I can show it is true in the following cases: $d<0$ , and $d=1\pm p$ , where $p$ is prime. Apart from these cases, I have no idea whether $\pi$ might be irreducible in full generality or not when $d>0$ . (I tried to use a CAS to produce examples and counterexamples, but I am really bad at programming so I didn't get very far.) Addendum. I tried to determine first the values of $d$ for which $\pi$ is a prime element. This happens to be the case exactly when $d=1\pm p$ , $p$ prime, so this does not give any new insight. Update (February 21, 2024). In his answer, Keith Conrad gives examples of integers $d\not\equiv 1 \ [4]$ for which $-1+\sqrt{d}$ is not irreducible. When $d\equiv 1 \ [4]$ , there are also such examples. For example, $(-1+\sq
|
Suppose $\mathbf Z[\sqrt{d}]$ has unique factorization, so it is the ring of integers in $\mathbf Q(\sqrt{d})$ (because UFDs are integrally closed), which implies $d$ is squarefree. In a UFD, prime and irreducible elements are the same thing. An element $\alpha$ is prime exactly when the ideal $(\alpha)$ is a prime ideal, which makes $(\alpha)$ a maximal ideal: the residue ring $\mathbf Z[\sqrt{d}]/(\alpha)$ is finite and a finite integral domain is a field. The size of $\mathbf Z[\sqrt{d}]/(\alpha)$ is $|{\rm N}(\alpha)|$ and a finite field has prime-power order, so when $\mathbf Z[\sqrt{d}]$ has unique factorization, a necessary condition that an element $\alpha$ in this ring be irreducible is that the absolute value of its norm is a prime power. Since $|{\rm N}(-1+\sqrt{d})| = |1 - d| = |d-1|$ , when $\mathbf Z[\sqrt{d}]$ has unique factorization and $|d-1|$ is not a prime power, $-1+\sqrt{d}$ is not irreducible. Example . When $d = 7, 11, 19, 22, 23, 31, 43, 46$ , and $47$ , $\math
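For instance, for $d=7$ (where $|d-1|=6$ is not a prime power) a short brute-force search finds explicit factorizations of $-1+\sqrt 7$ into non-units (the search bound is an arbitrary choice of mine):

```python
# Search for (a + b*sqrt(d)) * (c + e*sqrt(d)) = -1 + sqrt(d) in Z[sqrt(d)],
# d = 7, with neither factor a unit (units have norm a^2 - d*b^2 = +-1).
d = 7

def norm(a, b):
    return a * a - d * b * b

R = range(-10, 11)
found = []
for a in R:
    for b in R:
        for c in R:
            for e in R:
                p = a * c + d * b * e    # rational part of the product
                q = a * e + b * c        # sqrt(d) part of the product
                if (p, q) == (-1, 1) and abs(norm(a, b)) != 1 and abs(norm(c, e)) != 1:
                    found.append(((a, b), (c, e)))

print(found[0])  # some pair ((a, b), (c, e)) with both factors non-units
```

One factorization the search confirms is $-1+\sqrt7 = (3-\sqrt7)(2+\sqrt7)$, with norms $2$ and $-3$.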
|
|abstract-algebra|algebraic-number-theory|divisibility|
| 1
|
Maximum of $x+y+z$
|
Given natural numbers $x,y,z$ satisfying $x^{2}y+y^{2}z+z^{2}x-23=xy^{2}+yz^{2}+zx^{2}-25=3xyz$ , find the maximum possible value of $x+y+z$ . For the above problem I've tried using the identity $x^{3}+y^{3}+z^{3}-3xyz=(x+y+z)(x^{2}+y^{2}+z^{2}-xy-xz-yz)$ but it ultimately led nowhere... (same thing with AM-GM, but that just might be because I didn't dig deep enough) Is there any other identity I could use?
|
Subtract the first two equalities: $$x^2(y-z)+y^2(z-x)+z^2(x-y)=-2$$ Now we can see that the polynomial on the left-hand side easily factors; it can be written as $$(x-y)(y-z)(z-x)=2$$ Since $x,y,z \in \mathbb N$ , we know each of the terms in the product can only be $\pm 1$ or $\pm 2$ . WLOG assume $z>y>x$ , and now we are just left to do a little casework. Case $1$ : $y-x=2$ , $z-x=1$ , $z-y=1$ has no solution. Case $2$ : $y-x=1$ , $z-x=2$ , $z-y=1$ has solution $(x,x+1,x+2)$ . Case $3$ : $y-x=1$ , $z-x=1$ , $z-y=2$ has no solution. So now we only need to solve for $x$ such that $$x^2(x+1)+(x+1)^2(x+2)+(x+2)^2x -23 = 3x(x+1)(x+2)$$ which after expanding gives $x=7$ . Therefore, $(x,y,z)=(7,8,9)$ up to permutation and hence $$\boxed{x+y+z =24}$$
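A quick arithmetic check that $(7,8,9)$ satisfies both original equations:

```python
# Verify x^2 y + y^2 z + z^2 x - 23 = x y^2 + y z^2 + z x^2 - 25 = 3xyz
# at (x, y, z) = (7, 8, 9).
x, y, z = 7, 8, 9
lhs1 = x*x*y + y*y*z + z*z*x - 23
lhs2 = x*y*y + y*z*z + z*x*x - 25
print(lhs1, lhs2, 3*x*y*z, x + y + z)  # 1512 1512 1512 24
```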
|
|algebra-precalculus|inequality|
| 1
|
$\mathcal{C}^2[0,1]$ is a Banach Algebra
|
The following is problem 13 here . Consider functions in $\mathcal{C}^2[0,1]$ and $a,b>0$ . In this case, if we define: $$\lVert f \rVert:=\lVert f \rVert_\infty+ a \lVert f' \rVert_\infty +b \lVert f'' \rVert_\infty$$ This norm makes $\mathcal{C}^2[0,1]$ into a Banach algebra if and only if $a^2\geq 2 b$ . I haven't been able to prove that being a Banach algebra implies $a^2\geq 2 b$ . Here is the converse: To prove that it is Banach whenever $a,b>0$ is straightforward. If $f_n$ is $\lVert \cdot \rVert$ -Cauchy, we have uniform convergence of each derivative and we may write $g(x)=\lim f_n''(x)$ , $h(x)=\lim f_n'(x)$ , $f(x)=\lim f_n(x)$ . Because of the uniform convergence of the sequence of continuous functions, $g, h$ and $f$ are continuous. Furthermore, because of uniform convergence, we also have: $$\int_0^x f_n'(u) du =f_n(x)-f_n(0) \quad \quad \int_0^x h(u) du=f(x)-f(0)\quad \quad f'(x)=h(x)$$ By similar reasoning, $h'(x)=g(x)$ . So the space is complete, as stated. We need only verify the product is com
|
The problem actually states the condition $2b\leq a^2$ , not $b\leq a^2$ . Consider $f=g=x$ . Then $\|f\|=1+a$ , $\|f^2\|=1+2a+2b$ . So we get that $1+2a+2b=\|f^2\|\leq\|f\|^2=(1+a)^2=1+2a+a^2$ which gives $2b\leq a^2$ .
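The norm computations for $f(x)=x$ can be checked on a grid (the grid and the choice $a=1$, $b=\tfrac12$ are mine; note $2b=a^2$ is the boundary case, where submultiplicativity holds with equality):

```python
# Check ||f|| = 1 + a and ||f^2|| = 1 + 2a + 2b for f(x) = x on [0, 1],
# using sup-norms approximated on a grid (exact here, since the suprema
# of x, 2x, and the constants are attained at grid points).
xs = [i / 1000 for i in range(1001)]
sup = lambda g: max(abs(g(x)) for x in xs)

a, b = 1.0, 0.5                  # any a, b > 0 with 2b <= a^2 (here 2b = a^2)
norm_f  = sup(lambda x: x)     + a * sup(lambda x: 1)     + b * sup(lambda x: 0)
norm_f2 = sup(lambda x: x * x) + a * sup(lambda x: 2 * x) + b * sup(lambda x: 2)

print(norm_f, norm_f2)           # 1 + a = 2.0 and 1 + 2a + 2b = 4.0
assert norm_f2 <= norm_f ** 2    # submultiplicativity on this example
```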
|
|functional-analysis|banach-algebras|
| 1
|
Stuck trying to prove $T = \alpha I$ for some $\alpha \in \mathbf{F}$.
|
Exercise. Suppose $V$ is finite-dimensional and $T \in \mathcal{L}(V)$ . Prove that $T$ has the same matrix with respect to every basis of $V$ if and only if $T$ is a scalar multiple of the identity operator. Source. Linear Algebra Done Right, Sheldon Axler, 4th Edition, Section 3D, exercise number 19. Where I'm stuck. I believe I was able to prove the backward direction (see section below this one for that proof). I'm having trouble proving the forward direction. Here's what I have tried: Assuming $T$ has the same matrix with respect to every basis of $V$ , that means that the entry in row $j$ , column $k$ of $\mathcal{M}(T, (v_1,\ldots,v_n))$ is equal to the entry in row $j$ , column $k$ of $\mathcal{M}(T, (u_1,\ldots,u_n))$ , where $v_1,\ldots,v_n$ and $u_1,\ldots,u_n$ are any bases of $V$ . This fact, along with the way the entries are defined by $T$ , implies $$ Tv_k = A_{1,k}v_1 + \cdots + A_{n,k}v_n \\ Tu_k = A_{1,k}u_1 + \cdots + A_{n,k}u_n $$ I need to somehow use this to show
|
Hint: What happens to the matrix if you replace basis $\{v_1, v_2, \ldots, v_n\}$ with let's say $\{\alpha v_1, v_2, \ldots, v_n\}$ for $\alpha \not= 0$ ? What happens if you re-order the basis? In particular, let's say you exchange $v_i$ and $v_j$ ?
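The hint can be made concrete numerically (the matrices below are examples of my choosing): rescaling one basis vector conjugates the matrix by a diagonal matrix, which changes a non-scalar matrix but leaves a scalar one fixed.

```python
# Replacing the basis {v1, v2} with {2*v1, v2} conjugates the matrix by
# S = diag(2, 1).
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 1], [0, 1]]        # not a scalar multiple of the identity
S = [[2, 0], [0, 1]]        # basis change: scale v1 by 2
S_inv = [[0.5, 0], [0, 1]]

conj_A = matmul(matmul(S_inv, A), S)
print(conj_A)               # [[1.0, 0.5], [0, 1]]: the matrix changed

B = [[3, 0], [0, 3]]        # scalar multiple of the identity
conj_B = matmul(matmul(S_inv, B), S)
print(conj_B)               # equal to B: scalar matrices are basis-independent
```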
|
|linear-algebra|matrices|solution-verification|vector-spaces|linear-transformations|
| 0
|
Value of $\sum\limits^{20}_{n=1}f'\bigg(\frac{1}{n^2}\bigg)=$
|
Let $\displaystyle f:\bigg(0,\infty\bigg)\rightarrow \mathbb{R}$ be a differentiable function such that $\displaystyle f(x)-f(y)\geq \ln\bigg(\frac{x}{y}\bigg)+x-y$ for all $x,y\in(0,\infty)$ . Then find the value of $\displaystyle \sum^{20}_{n=1}f'\bigg(\frac{1}{n^2}\bigg)$ . What I try : $\displaystyle f(x)-f(y)\geq \ln\bigg(\frac{x}{y}\bigg)+x-y\cdots (1)$ Now interchange $x\rightarrow y$ . Then $\displaystyle f(y)-f(x)\geq \ln\bigg(\frac{y}{x}\bigg)+y-x$ , i.e. $\displaystyle f(x)-f(y)\leq \ln\bigg(\frac{x}{y}\bigg)+x-y\cdots (2)$ From $(1)$ and $(2)$ , we get $\displaystyle f(x)-f(y)= \ln\bigg(\frac{x}{y}\bigg)+x-y$ How do I solve it? Please have a look. Thanks
|
This might look very silly, but: $f(x)-f(y) = \ln{(\frac{x}{y})} + x - y = \ln{x} - \ln{y} + x - y$ This gives me the idea that: $f(x) = \ln x + x + C$ (where $C$ is some random constant). The rest is obvious.
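Following this: since $f'(x)=\frac1x+1$, we have $f'(1/n^2)=n^2+1$, so the requested sum is $\sum_{n=1}^{20}(n^2+1)=2870+20=2890$ (a quick check):

```python
# f(x) = ln(x) + x + C gives f'(x) = 1/x + 1, so f'(1/n^2) = n^2 + 1.
total = sum(n * n + 1 for n in range(1, 21))
print(total)  # 2890
```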
|
|sequences-and-series|functions|derivatives|inequality|
| 1
|
How to prove $\sum \limits_{cyc} \frac{1}{7a^2+bc} \ge \frac{9}{8(ab+bc+ca)}$ for $a,b,c>0$?
|
Let $a,b,c$ be positive real numbers. Prove that $$\frac{1}{7a^2+bc}+\frac{1}{7b^2+ca}+\frac{1}{7c^2+ab} \ge \frac{9}{8(ab+bc+ca)}$$ I saw it here . I tried fully expanding but it's very complicated. Also, Cauchy-Schwarz $$\sum_{cyc} \frac{1}{7a^2+bc} \ge \frac{9}{7(a^2+b^2+c^2)+(ab+bc+ca)}$$ leads us to a wrong inequality. Then I tried $2bc \le b^2+c^2$ without success. So far, I haven't had any good idea yet. Can someone help me with this problem? Thank you. My expanding $$\sum_{sym} \left(56a^4b^2+56a^4c^2+49a^4bc-\frac{49}{2}(ab)^3-\frac{49}{2}(ac)^3+456a^3b^2c+456a^3bc^2-1024a^2b^2c^2\right) \ge 0$$ it seems true by Muirhead but I don't know how to use it.
|
A couple of solutions involving $pqr$ -techniques. Let $p=a+b+c,\;q=ab+bc+ca,\; r=abc.$ Then, after some expanding and simplifying, the inequality turns into: $$f(r)=-4608r^2+(680pq-63p^3)r+56p^2q^2-161q^3\ge0.$$ We see that $f(r)$ is quadratic and concave. Hence it is sufficient to prove $f(r)\ge 0$ for the minimal and maximal values of $r$ . So we need to prove the inequality for $r=0$ and $a=b$ . If $r=0$ then the inequality becomes $56p^2q^2-161q^3\ge0.$ Or $56p^2\ge 161q$ , which is true since $p^2\ge 3q.$ If $a=b$ we can put $a=b=1$ since the inequality is homogeneous. The inequality then becomes: $$\frac{2}{7+c}+\frac{1}{7c^2+1}\ge \frac{9}{16c+8}.$$ Expanding and simplifying: $$224c^3+128c^2+152c+72\ge 63c^3+441c^2+9c+63$$ $$161c^3-313c^2+143c+9 \ge 0$$ The LHS has a root $c=1$ . It actually turns out to be of multiplicity $2$ . And the LHS can be factorized as $(c-1)^2(161c+9)$ which is non-negative. Let us prove the initial $pqr$ -inequality in a different way: $$680pqr+56p^2
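Not a proof, but a quick numerical spot check of the original inequality on random positive triples (the sampling range is arbitrary):

```python
import random

# Spot-check 1/(7a^2+bc) + 1/(7b^2+ca) + 1/(7c^2+ab) >= 9/(8(ab+bc+ca)).
random.seed(0)
ok = True
for _ in range(10000):
    a, b, c = (random.uniform(0.01, 10.0) for _ in range(3))
    lhs = 1 / (7*a*a + b*c) + 1 / (7*b*b + c*a) + 1 / (7*c*c + a*b)
    rhs = 9 / (8 * (a*b + b*c + c*a))
    ok = ok and lhs >= rhs - 1e-12
print(ok)  # True
```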
|
|inequality|
| 0
|
Is there a single term that generalizes the names of the individual vector and scalar types in a multivector?
|
In three dimensions, a multivector consists of a scalar, a vector, a bivector and a tri-vector. Is there a term that generalizes these names? For example, in an $n$ -dimensional space, can I use the terminology $k$ -vector (where $0\leq k\leq n$ ) to refer to the different scalar and vector types, such that a 0-vector is a scalar, a 1-vector is a vector, a 2-vector is a bivector and a 3-vector is a trivector? I haven't found any such name generalization, but it seems to me like it would be very useful (kind of like how a face of an $n$ -polytope can be referred to as a $k$ -face, where $0\leq k\leq n$ and $k$ is the dimensionality of the face). Edit: I found the answer to my own question. They can indeed be called $k$ -vectors.
|
Scalars, vectors, bivectors and trivectors can indeed collectively be referred to as $k$ -vectors.
|
|geometric-algebras|
| 0
|
Let $H\le G$ be finite and $g\in G$. How large can $gH\cap Hg^{-1}$ be if $gH\ne Hg^{-1}$? Is $\frac{1}{2}|H|<|gH\cap Hg^{-1}|<|H|$ possible?
|
(Related to this question .) Suppose that $H$ is a finite subgroup, and $g$ is an element of a group. How large can the intersection $gH\cap Hg^{-1}$ be given that $gH\ne Hg^{-1}$ ? Is it possible to have, say, $\frac{1}{2}|H|<|gH\cap Hg^{-1}|<|H|$ ? The abelian case is easy: if the group is commutative, then $gH$ and $Hg^{-1}$ coincide if $g^2\in H$ , and are disjoint otherwise.
|
Suppose that $|gH\cap Hg^{-1}|>\frac12\,|H|$ . Then, by the box principle, there is an element $h\in H$ such that $gh=h^{-1}g^{-1}$ . We have then $gHg=ghHg=h^{-1}g^{-1}Hg$ . As a result, $|gH\cap Hg^{-1}|=|gHg\cap H|=|h^{-1}g^{-1}Hg\cap H|=|g^{-1}Hg\cap H|$ , and since $g^{-1}Hg\cap H$ is an intersection of two subgroups of size $|H|$ , the size of the intersection is either $|H|$ , or $\frac12\,|H|$ at most.
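The conclusion, that the intersection size is never strictly between $\frac12|H|$ and $|H|$, can be confirmed exhaustively in a small group such as $S_4$ (a brute-force sketch; I enumerate subgroups as closures of generator pairs, which suffices since every subgroup of $S_4$ is generated by at most two elements):

```python
from itertools import permutations, product

def compose(p, q):               # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * 4
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(4)))
e = (0, 1, 2, 3)

# all subgroups of S4 as closures of generating sets of size <= 2
subgroups = set()
for gens in product(G, repeat=2):
    H = {e}
    frontier = set(gens)
    while frontier:
        H |= frontier
        frontier = {compose(a, b) for a in H for b in H} - H
    subgroups.add(frozenset(H))

violations = 0
for H in subgroups:
    for g in G:
        gH = {compose(g, h) for h in H}
        Hg_inv = {compose(h, inverse(g)) for h in H}
        m = len(gH & Hg_inv)
        if len(H) / 2 < m < len(H):
            violations += 1

print(len(subgroups), violations)  # 30 subgroups of S4, 0 violations
```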
|
|abstract-algebra|group-theory|
| 0
|
Deducement of First and second Borel-Cantelli Lemma
|
Suppose that $\Omega$ is a set, $(\Omega, \mathscr{G})$ is a measurable space, and $Z: \Omega \to \mathbb{R}$ is a given mapping. Then $Z$ is $\mathscr{G}$-measurable iff $$Z =\displaystyle\sum_{i=1}^\infty \lambda_i I_{A_i} \tag{1}$$ for some $\{\lambda_i\} \subset \mathbb{R}$ and $\{A_i\} \subset \mathscr{G}$ . From (1) deduce the First Borel-Cantelli Lemma: if $\displaystyle\sum_{i=1}^{\infty} P(A_i) < \infty$ then $P(\limsup A_n) =0$ . Solution: The "if" part follows directly from the observation that the mapping given by (1) is the pointwise limit as $n \to \infty$ of the $\mathscr{G}$-measurable mappings $\displaystyle\sum_{i=1}^n \lambda_i I_{A_i}.$ For the opposite direction, first suppose that $Z$ is nonnegative. Define $$Z_1 := I_{Z \geq 1}, \qquad S_n :=\sum_{i=1}^n \frac{1}{i}Z_i, \qquad Z_{n+1} := I_{Z-\frac{1}{n+1}\geq S_n}, \quad n\geq 1$$ We claim that $Z = \lim S_n = \displaystyle\sum_{i=1}^\infty \displaystyle\frac{1}{i} Z_i $ (Not
|
The First Borel-Cantelli Lemma can be applied in risk assessment for flood insurance in the following way. The lemma states that if you have an infinite sequence of events, and the sum of their probabilities is finite, then the probability that these events occur infinitely often is zero. In the context of flood insurance, consider each "event" as a flood occurring in a specific area. If the probability of a flood occurring in each year is independent of the previous years, and the sum of these probabilities over an infinite time horizon is finite, then according to the First Borel-Cantelli Lemma, the probability that floods will occur infinitely often in this area is zero. This can help insurance companies in their risk assessment: if the conditions of the lemma are met, they can conclude that, with probability one, they will have to pay out for flood damage only finitely many times. This can influence the pricing of their insurance policies and their decision on whether to offer insurance in that
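The conclusion can be illustrated with a small simulation (an illustrative sketch, not part of the original answer): taking $P(A_i)=1/i^2$, a summable sequence, the total number of "floods" seen over a very long horizon stays small in every trial.

```python
import random

random.seed(42)

def count_floods(n_years, trials=200):
    """Simulate independent yearly events with P(flood in year i) = 1/i**2
    (a summable sequence, so Borel-Cantelli I applies) and return, per trial,
    how many floods occurred over n_years years."""
    counts = []
    for _ in range(trials):
        c = sum(1 for i in range(1, n_years + 1) if random.random() < 1 / i**2)
        counts.append(c)
    return counts

counts = count_floods(10_000)
# The expected total is sum 1/i^2 ≈ pi^2/6 ≈ 1.64; even over 10,000 years
# no trial sees more than a handful of floods.
print(max(counts), sum(counts) / len(counts))
```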
|
|probability|random-variables|limsup-and-liminf|borel-cantelli-lemmas|
| 1
|
Form of ideal generated by subset of noncommutative nonunital ring
|
Given a noncommutative nonunital ring $R$ and a subset $S\subseteq R$ , I know that the left and right ideals generated by $S$ in $R$ have the forms $$RS:=\{\sum^n_{i=1} r_is_i:n\in\mathbb{N}\backslash\{0\},r_i\in R,s_i\in S \}$$ and $$SR:=\{\sum^n_{i=1}s_ir_i:n\in\mathbb{N}\backslash\{0\},r_i\in R,s_i\in S \}$$ respectively. I also know that the two sided ideal generated by $S$ in $R$ has the form $$RSR:=\{\sum^n_{i=1}r_is_i\tilde{r}_i:n\in\mathbb{N}\backslash\{0\},r_i,\tilde{r}_i\in R,s_i\in S \}.$$ Now, my question is, how do the above ideals differ from a general ideal generated by $S$ in $R$ ? For example, maybe $R=x\mathbb{C}\langle x,y\rangle$ is the free algebra over $\mathbb{C}$ whose basis consists of words in $x$ and $y$ beginning with $x$ on the left and whose multiplication is concatenation of words. Take $S=x\mathbb{C}\langle x\rangle=x\mathbb{C}[x]$ to be the set of nonconstant polynomials over $\mathbb{C}$ . Then, clearly none of $RS$ , $SR$ or $RSR$ contain $S$ as a su
|
I'm not sure if this answers your question. If it doesn't, consider it an extended comment. Definitions are, well, a matter of definition and thus cannot be wrong or true, but they can certainly be weird or counterintuitive. And I want to suggest that the definitions you provided for the left/right/two-sided ideal generated by a subset of a (non-unital non-commutative) ring are weird and counterintuitive. If there is any justice in the world, then in any situation where we have some kind of structure $X$ (which can have sub- somethings ) with some subset $S \subseteq X$ , the something generated by $S$ should certainly include $S$ ; in fact it should be the smallest something contained in $X$ that contains $S$ . The definitions you have provided just don't achieve this. Let's consider the commutative non-unital example of $R=\Bbb Z/4\Bbb Z$ with zero multiplication. What should the ideal generated by $\{2\}$ be? According to the definitions you gave, it should be $\{0\}$ . I'd arg
|
|abstract-algebra|ring-theory|soft-question|ideals|noncommutative-algebra|
| 1
|
Solving $(2n)^{\log 2}=(5n)^{\log 5}$
|
I have seen this equation from a link named Asisten and German Academy (it is a Facebook video), where there is a complicated solution (I invite you to watch it) for $$(2n)^{\log 2}=(5n)^{\log 5}$$ I have adopted, instead, this approach: $$(2)^{\log 2}(n)^{\log 2}=(5)^{\log 5}(n)^{\log 5} \iff 2n^{\log 2}=5n^{\log 5}$$ After $$\frac{n^{\log 2}}{n^{\log 5}}=\frac 52 \iff n^{(\log 2-\log 5)}=\frac 52$$ $$\log(n^{(\log 2-\log 5)})= \log 5-\log 2 $$ $$(\log 2-\log 5)\log n=\log 5-\log 2 \iff \log n =-1$$ Hence $$e^{\log n}=e^{-1}\implies n=\frac 1e$$ Now I have seen that the solution is $n=1/10$ . Is my solution different because my base is $e$ and not $10$ ? Generally I have seen that $\log=\log_{10}$ ; in Italy we often use $\log=\log_e$ . I had thought, with the old notation, that $\operatorname{Log}=\log_{10}$ , and I have not often used $\ln$ , where the base is Napierian.
|
We have that $a^{\log a} = e^{\log^2 a}$ then $$(2n)^{\log 2}=(5n)^{\log 5} $$ $$2^{\log 2}n^{\log 2}=5^{\log 5}n^{\log 5}$$ $$e^{\log^2 2+\log n \log 2}=e^{\log^2 5+\log n \log 5}$$ and since $e^x$ is strictly monotonic, the latter is equivalent to $$\log^2 2+\log n \log 2=\log^2 5+\log n \log 5$$ $$\log n = \frac{\log^2 2-\log^2 5}{\log 5 -\log 2}=-\log 10 \iff n=\frac1{10}$$
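A quick numeric check (an illustration, using natural logarithms as in the derivation) confirms that $n=1/10$ solves the original equation while the candidate $n=1/e$ does not:

```python
from math import log, e

def lhs(n): return (2 * n) ** log(2)   # natural log, as in the answer
def rhs(n): return (5 * n) ** log(5)

# n = 1/10 makes the two sides agree; n = 1/e does not.
print(lhs(1 / 10), rhs(1 / 10))
print(lhs(1 / e), rhs(1 / e))
```

The agreement at $n=1/10$ is exact up to floating-point error, in any base, since $\log 2+\log 5=\log 10$.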
|
|algebra-precalculus|logarithms|
| 0
|
If a curve in a regular surface is smooth, is the corresponding curve in the parameterization space also smooth?
|
Let $\Sigma \subset \mathbb R^3$ be a regular surface and $\gamma: (- \varepsilon, \varepsilon) \to \mathbb R^3$ be a smooth curve with $\mathrm{im}( \gamma) \subset \Sigma$ , and $\gamma (0) = p$ . Let $\sigma : U \subset \mathbb R^2 \to \Sigma$ be a parameterization of $\Sigma$ near $p$ (so it's smooth as a map to $\mathbb R^3$ ). Define the map $\Gamma: (- \varepsilon, \varepsilon) \to \mathbb R^2$ via $\Gamma(t) = \sigma^{-1}(\gamma(t))$ , i.e. $\Gamma$ is the curve in the parameterization plane corresponding to $\gamma$ on the surface. Is $\Gamma$ necessarily a smooth map to $\mathbb R^2$ ? Can I write, say, $D_0 \gamma = D_{\sigma^{-1} (p)} \sigma \circ D_0 \Gamma$ by the chain rule?
|
An equivalent definition for a regular surface is as such: $\Sigma$ is a regular surface iff for all $p \in \Sigma$ there exists an open neighborhood $T$ of $p$ in $\mathbb R^3$ , an open neighborhood $W$ of the origin in $\mathbb R^3$ , and a diffeomorphism $g: T \to W$ such that $g(T \cap \Sigma) = W \cap (\mathbb R^2 \times \{ 0 \})$ . Let's WLOG show that $\Gamma$ is smooth at $0 \in (-\varepsilon, \varepsilon)$ . For $t$ near $0$ (i.e. $\gamma(t)$ near $p$ ), we have $$ \Gamma = \sigma^{-1} \circ \gamma = (g \circ \sigma)^{-1} \circ (g \circ \gamma). $$ $g\circ\sigma$ is a smooth map in $\mathbb R^3$ landing in $\mathbb R^2$ , so we can treat it as a smooth map to $\mathbb R^2$ . As a map to $\mathbb R^2$ , it's a local diffeomorphism at $\sigma^{-1}(p)$ since $$ D_{\sigma^{-1}(p)}(g \circ \sigma) = \underbrace{D_p g}_{\text{isomorphism $\mathbb R^3 \to \mathbb R^3$}} \circ \underbrace{D_{\sigma^{-1}(p)} \sigma}_{\text{rank 2}} $$ has rank $2$ , so is invertible. Hence there exists a
|
|differential-geometry|
| 1
|
Solving $(2n)^{\log 2}=(5n)^{\log 5}$
|
I have seen this equation from a link named Asisten and German Academy (it is a Facebook video), where there is a complicated solution (I invite you to watch it) for $$(2n)^{\log 2}=(5n)^{\log 5}$$ I have adopted, instead, this approach: $$(2)^{\log 2}(n)^{\log 2}=(5)^{\log 5}(n)^{\log 5} \iff 2n^{\log 2}=5n^{\log 5}$$ After $$\frac{n^{\log 2}}{n^{\log 5}}=\frac 52 \iff n^{(\log 2-\log 5)}=\frac 52$$ $$\log(n^{(\log 2-\log 5)})= \log 5-\log 2 $$ $$(\log 2-\log 5)\log n=\log 5-\log 2 \iff \log n =-1$$ Hence $$e^{\log n}=e^{-1}\implies n=\frac 1e$$ Now I have seen that the solution is $n=1/10$ . Is my solution different because my base is $e$ and not $10$ ? Generally I have seen that $\log=\log_{10}$ ; in Italy we often use $\log=\log_e$ . I had thought, with the old notation, that $\operatorname{Log}=\log_{10}$ , and I have not often used $\ln$ , where the base is Napierian.
|
As already pointed out in the comments and in Xander's answer, your calculation is wrong because $2^{\log_b 2} = 2$ and $5^{\log_b 5} = 5$ do not hold simultaneously for any base $b$ . I would argue as follows: $$ \begin{align} &(2n)^{\log 2}=(5n)^{\log 5} \\ \iff &\log 2 (\log 2 + \log n) = \log 5 (\log 5 + \log n) \\ \iff &\log n = - \frac{(\log 5)^2 - (\log 2)^2}{\log 5 - \log 2} = -\log(10) = \log \frac{1}{10} \\ \iff &n = \frac{1}{10} \, . \end{align} $$ This calculation is valid no matter what the base of the logarithm is.
|
|algebra-precalculus|logarithms|
| 0
|
"How to solve a system of equations of estimators (alpha and beta) using the method of maximum likelihood (beta distribution) in R?"
|
set.seed(42) # Data x Hello, I'm encountering an issue with my code. It's yielding negative values that are far from correct. I've searched online and even consulted ChatGPT, but haven't found a solution to my problem. Could anyone help? The task was to derive estimators for the parameters α and β using the method of maximum likelihood (ML) and solve it, implementing it in RStudio.
|
Apart from one significant issue and a minor point, your code works. The minor point is that if you want other people to reproduce your results then you need to specify what packages you are using, here nleqslv . The significant issue is that you started with a likelihood of $$\prod_i \left(\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}x_i^{\alpha-1}(1-x_i)^{\beta-1}\right) = \left(\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\right)^n\left(\prod_i x_i\right)^{\alpha-1}\left(\prod_i (1-x_i)\right)^{\beta-1}$$ but, in finding the log-likelihood and its derivatives, you lost the $n$ term. You need to include it. So if your code looked something like library(nleqslv) set.seed(42) # Data x then it would give the results [1] 3.828875 1.992556 which are not far from the original values of $4$ and $2$ .
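As a sanity check on ML estimates like these, the closed-form method-of-moments estimators for the beta distribution are easy to compute. The Python sketch below is illustrative only (it does not reproduce the question's R data; the sample here is hypothetical, drawn from Beta(4, 2)):

```python
import random

random.seed(42)
# Hypothetical stand-in for the question's data: draws from Beta(4, 2).
x = [random.betavariate(4, 2) for _ in range(10_000)]

m = sum(x) / len(x)                          # sample mean
v = sum((xi - m) ** 2 for xi in x) / len(x)  # sample variance

# Method of moments for Beta(a, b):
#   mean = a/(a+b),  var = mean*(1-mean)/(a+b+1)  =>  solve for a and b.
t = m * (1 - m) / v - 1                      # estimate of a + b
alpha_hat = m * t
beta_hat = (1 - m) * t
print(alpha_hat, beta_hat)   # close to the true values 4 and 2
```

These moment estimates should land near whatever the ML solver returns; a large gap between the two is a quick signal of a bug like the missing $n$ term above.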
|
|statistics|maximum-likelihood|beta-function|
| 0
|
On the definition of an Interpretation Isomorphism
|
I wish to check my understanding of an interpretation isomorphism, as defined (although paraphrased) in Boolos' Computability and Logic : Two interpretations $P$ and $Q$ are isomorphic iff there is a map $j:P\to Q$ such that $j\left(f^P(p_1,\ldots,p_n)\right) = f^Q\left(j(p_1),\ldots,j(p_n)\right).$ $R^P(p_1,\ldots,p_n)$ if and only if $R^Q\left(j(p_1),\ldots,j(p_n)\right)$ . $j\left(c^P\right) = c^Q$ . I was wondering why the first two conditions act directly on members of $|P|$ and $|Q|$ , as opposed to acting on closed terms. In the latter case, conditions $(1)$ and $(2)$ would be $j\left(f^P(t_1^P,\ldots,t_n^P)\right) = f^Q\left(j(t_1^P),\ldots,j(t_n^P)\right)$ . $R^P(t_1^P,\ldots,t_n^P)$ if and only if $R^Q\left(j(t_1^P),\ldots,j(t_n^P)\right)$ . for any closed terms $t_i$ . Is the following the reason for choosing the former definition? If we use the latter definition, the Isomorphism Lemma would fail. (Isomorphism Lemma) If there is an isomorphism between two interpretations $P$
|
There is much more to an interpretation than its closed terms. So a definition of isomorphism that only mentions the elements named by closed terms would be very strange indeed. For example, a group is an interpretation of the language of groups, but the only element of a group named by a closed term is the identity element. So by your definition, any two groups would be isomorphic as interpretations. This is obviously undesirable.
|
|logic|definition|
| 0
|
Determine $n\in N$ such that $\frac{3^n-2^n}{3^n+2^n}+\frac{5^n-3^n}{5^n+3^n} \leq \frac{5^n-2^n}{5^n+2^n}$ .
|
the question Determine $n\in N$ such that $\frac{3^n-2^n}{3^n+2^n}+\frac{5^n-3^n}{5^n+3^n} \leq \frac{5^n-2^n}{5^n+2^n}$ . The idea So I thought of using these formulas: $$a^n-b^n=(a-b)(a^{n-1}+...+b^{n-1})$$ and $$a^n+b^n=(a+b)(a^{n-1}-...+b^{n-1})$$ The last one works only for odd $n$ and it won't help me that much... I don't know where to start. I hope one of you can help me! Thank you!
|
Are you sure that is the correct question? Set $f(n)=\frac{3^n-2^n}{3^n+2^n}+\frac{5^n-3^n}{5^n+3^n} - \frac{5^n-2^n}{5^n+2^n}$ ; the given inequality $\frac{3^n-2^n}{3^n+2^n}+\frac{5^n-3^n}{5^n+3^n} \leq \frac{5^n-2^n}{5^n+2^n}$ is equivalent to $f(n) \le 0$ , but it seems there is no natural number solution. You can also use this link: https://www.desmos.com/calculator/gigrs3v682 Remark: for large $n \in \mathbb{N}$ there is no solution because $$\frac{3^n-2^n}{3^n+2^n} \to 1, \qquad \frac{5^n-2^n}{5^n+2^n} \to 1, \qquad \frac{5^n-3^n}{5^n+3^n} \to 1,$$ so in the limit the inequality would read $1+1\le 1$ , which is impossible.
|
|inequality|natural-numbers|
| 0
|
The sequence $a_{n+2}=\frac{a_{n+1}+a_n}{\gcd\left(a_{n+1},a_n\right)}$ is bounded
|
Let a sequence of natural numbers be defined by $a_{n+2}=\frac{a_{n+1}+a_n}{\gcd\left(a_{n+1},a_n\right)}$ where $a_1=a,a_2=b$ . Find all pairs $a,b$ such that the sequence is bounded. Denote $\gcd(a,b)=(a,b)$ . First of all, I notice that if $(a,b)=1$ then the sequence would grow like $a+b,a+2b,\ldots$ , so $(a,b)=d>1$ . Now let $a=dx,b=dy$ with $(x,y)=1$ . Then we get $a_3= x+y$ and so $a_4=\frac{x+y+b}{(x+y,b)}$ . How do I proceed?
|
The solution given by @Aig is very elegant but I think there is another solution that completes the idea mentioned by the OP. Let's assume $a_1,a_2,a_3 , ... \ $ is bounded. As the OP noticed, we must have $a_1=d_1x_1$ and $a_2=d_1y_1$ where $(x_1,y_1)=1$ and $d_1>1.$ Then, we have $a_3=x_1+y_1$ and we must also have $(a_2,a_3)=d_2>1$ . Since $(x_1,y_1)=1$ , we will have $d_2|d_1$ (in fact $(d_2,y_1)=1$ ). We can suppose $a_2=d_2x_2$ and $a_3=d_2y_2$ , where $(x_2,y_2)=1$ . Now, we have $a_4=x_2+y_2$ and we must also have $(a_3,a_4)=d_3>1.$ Again, since $(x_2,y_2)=1$ , we will have $d_3|d_2$ (and consequently $d_3|d_1$ ). Again, we suppose $a_3=d_3x_3$ and $a_4=d_3y_3$ , where $(x_3,y_3)=1$ . Doing the same procedure infinitely many times, we obtain a non-increasing sequence $d_1,d_2,d_3, ... \ $ such that for every $i \in \mathbb N$ , we have $d_i|d_1$ . That means the sequence $d_1,d_2,d_3, ... \ $ is constant, from some point on; let's say the sequence is eventually equal to $d$ . T
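A short experiment (illustrative only, not part of the accepted argument) makes the dichotomy visible: once consecutive terms become coprime the recurrence behaves like the Fibonacci recursion and blows up, while e.g. the pair $(2,2)$ stays fixed forever.

```python
from math import gcd

def orbit(a, b, steps=60):
    """Iterate a_{n+2} = (a_{n+1} + a_n) / gcd(a_{n+1}, a_n)."""
    seq = [a, b]
    for _ in range(steps):
        x, y = seq[-2], seq[-1]
        seq.append((x + y) // gcd(x, y))
    return seq

print(orbit(2, 2)[:6])    # stays at 2 forever: (2+2)/gcd(2,2) = 2
print(orbit(3, 5)[:6])    # coprime start: pure Fibonacci-type growth
print(max(orbit(4, 6)))   # gcd 2 at first, then coprime terms appear and it grows
```

Note that once $\gcd(a_{n},a_{n+1})=1$, also $\gcd(a_{n+1},a_{n+2})=\gcd(a_{n+1},a_n+a_{n+1})=1$, so the growth never stops, matching the necessity of $d>1$ observed in the question.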
|
|sequences-and-series|elementary-number-theory|recurrence-relations|
| 0
|
Determine $n\in N$ such that $\frac{3^n-2^n}{3^n+2^n}+\frac{5^n-3^n}{5^n+3^n} \leq \frac{5^n-2^n}{5^n+2^n}$ .
|
the question Determine $n\in N$ such that $\frac{3^n-2^n}{3^n+2^n}+\frac{5^n-3^n}{5^n+3^n} \leq \frac{5^n-2^n}{5^n+2^n}$ . The idea So I thought of using these formulas: $$a^n-b^n=(a-b)(a^{n-1}+...+b^{n-1})$$ and $$a^n+b^n=(a+b)(a^{n-1}-...+b^{n-1})$$ The last one works only for odd $n$ and it won't help me that much... I don't know where to start. I hope one of you can help me! Thank you!
|
$$\frac{3^n-2^n}{3^n+2^n}+\frac{5^n-3^n}{5^n+3^n} \leq \frac{5^n-2^n}{5^n+2^n}$$ $$\iff 1-\frac{2\cdot 2^n}{3^n+2^n}+1-\frac{2\cdot 3^n}{5^n+3^n} \leq 1- \frac{2\cdot 2^n}{5^n+2^n}$$ $$\iff \frac 12 \leq \frac{2^n}{3^n+2^n}+\frac{3^n}{5^n+3^n}-\frac{2^n}{5^n+2^n}$$ $$\iff \frac{3^n}{5^n+3^n} + 2^n\cdot\left(\frac1{3^n+2^n} - \frac1{5^n+2^n}\right) \ge \frac12$$ $$\iff \frac{3^n}{5^n+3^n} + \frac{10^n-6^n}{15^n+6^n+10^n+4^n} \ge \frac12$$ Clearly, the left hand side is decreasing in $n$ . At $n=0$ equality holds, so the left hand side lies strictly below $\frac12$ for every $n \geq 1$ . Hence $n=0$ is the only solution, and there are no solutions at all if $0 \notin \Bbb N$ .
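The conclusion is easy to verify numerically (an illustrative check; Python integers keep $3^n$, $5^n$ exact):

```python
def lhs(n):
    return (3**n - 2**n) / (3**n + 2**n) + (5**n - 3**n) / (5**n + 3**n)

def rhs(n):
    return (5**n - 2**n) / (5**n + 2**n)

# Equality at n = 0, and "lhs <= rhs" fails for every n >= 1.
print(lhs(0), rhs(0))                                  # both sides are 0.0
print([n for n in range(1, 40) if lhs(n) <= rhs(n)])   # empty list
```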
|
|inequality|natural-numbers|
| 1
|
A literally challenging math book
|
The only way to learn mathematics is to do mathematics. - Paul Halmos Most books about university-level mathematics follow a strict scheme of giving you the content and letting you practice with it through some problems at the end of each chapter. Often boring. In "The Lady, or The Tiger?", Smullyan blasts the reader with (at first) seemingly unrelated logic puzzles that lead the reader to understand a complex concept such as Gödel's theorems. I love Smullyan's taste for teaching math and I'm an unconditional follower of the Socratic method (maieutics). So I was wondering: are there math books that combine both to teach other subjects? I'm thinking of something like the following. On the outside, it looks like a regular problem compendium. Inside, it's a lot more. The book's entry level is some first-year math, but it can go very deep. The subject can be anything, from general topology to combinatorial geometry. It essentially drives the reader's own discovery of the subject by asking the right questions in
|
A Problem Seminar by D. J. Newman fits almost all of the requirements. More generally, I would pick up anything from Series 714 , especially the older titles. Although not directly related, you might also enjoy Problems in Applied Mathematics by Murray Klamkin .
|
|problem-solving|book-recommendation|education|
| 0
|
A question regarding the matrix $P^{-1}DP$ where $D=diag(1,0,\ldots,0)$
|
Let $A$ be an $n\times n$ matrix which is similar to $D=diag(1,0,\ldots,0)$ and let $P$ be the invertible matrix such that $P^{-1}AP=D$ . What is the vector $$ A \begin{pmatrix} 1\\1\\ \vdots \\1 \end{pmatrix}=? $$ My attempt: If the matrix is $2\times2$ and $$ P= \begin{pmatrix} a & b\\ c & d\\ \end{pmatrix} $$ where $$ \vec{v}= \begin{pmatrix} a\\ c \\ \end{pmatrix} $$ is the eigenvector of $1$ , then we get $$ A \begin{pmatrix} 1\\1\\ \end{pmatrix}= P \begin{pmatrix} 1 & 0\\ 0 & 0\\ \end{pmatrix} P^{-1}\begin{pmatrix} 1\\1\\ \end{pmatrix}= \frac{d-b}{ad-bc}\begin{pmatrix} a\\c\\ \end{pmatrix} $$ How can I generalize it to any order $n$ ?
|
If $P^{-1} A P = D = \operatorname{diag}(1,0,\ldots,0)$ , then: $$ A = P D P^{-1}$$ Note that the title presents this relationship differently than the body of your Question, but let's go with the latter. Also let's assume the matrix $A$ is meant to have real number entries. Thus: $$ A \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix} = P D P^{-1} \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix} $$ Without more information, not much more can be said about this vector. Indeed without further restrictions $A$ can be any rank $1$ matrix with trace $1$ , i.e. $$ A = u v^T $$ for some nonzero (column) vectors $u,v$ such that trace $A = \sum_{i=1}^n u_i v_i = 1$ . Without loss of generality one can scale the vector $v$ to have entries that sum to $1$ , by correspondingly multiplying the vector $u$ by the reciprocal of the sum of entries in $v$ . Now $v^T (1,1,\ldots,1)^T = r$ , the sum of entries in $v$ . It follows that: $$ A \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix} = r u $$
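The conclusion $A(1,\ldots,1)^T = r\,u$ can be illustrated with a concrete rank-1 example (the vectors $u,v$ below are arbitrary illustrative choices, normalized so that $\sum_i u_i v_i = 1$):

```python
# A = u v^T with trace(A) = sum_i u_i v_i = 1 is rank 1 with eigenvalues
# 1, 0, ..., 0, hence similar to diag(1, 0, ..., 0).
u = [2.0, -1.0, 0.5]
v = [0.5, 0.5, 1.0]              # chosen so that sum_i u_i v_i = 1

A = [[ui * vj for vj in v] for ui in u]      # outer product u v^T
ones = [1.0] * 3
Aones = [sum(row[j] * ones[j] for j in range(3)) for row in A]

r = sum(v)                       # sum of the entries of v
print(Aones)                     # equals [r * ui for ui in u]
```

Here $r=2$, so the result is $2u = (4,-2,1)^T$: the image is always a scalar multiple of the eigenvector $u$, exactly as in the answer.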
|
|linear-algebra|matrices|eigenvalues-eigenvectors|diagonalization|
| 1
|
Angles can be congruent but not equal?
|
How could it be that "two angles cannot be equal, but two angles can be congruent"? My understanding was that if a shape is congruent then it would be equal in size and shape. If two angles are congruent, how could they not be equal?
|
In most modern textbooks of Geometry, figures are defined as sets of points . And two sets are equal only if they have the same elements, i.e. two figures are equal only if they are made of the same points . Congruence, instead, is a relation between figures obeying some axioms. Two congruent figures needn't be equal.
|
|geometry|
| 0
|
How can I define a topology on the empty set?
|
We know that the indiscrete topology is the smallest topology. It has $2$ elements (the empty set and the whole set). Suppose the given set is the empty set; how can I define a topology on that set? Is it possible?
|
The empty space on $\pi$ -Base: https://topology.pi-base.org/spaces/S000163 Several of the standard properties hold vacuously. In particular any statement that must hold for all points, nonempty open sets, etc., will be true because these things don't exist in the first place.
|
|general-topology|
| 0
|
A global inequality of perturbation analysis of convex optimization
|
My questions originate from the book Convex Optimization by Stephen P. Boyd , where the proof of a global inequality of perturbation analysis puzzles me. Here are my questions: In the proof of inequality (5.57), how can the first inequality hold when we might have two different domains for the unperturbed and perturbed problems? (I can only get the inequality for all feasible points of the unperturbed problem, while having no idea about points of the perturbed problem which are not feasible for the unperturbed problem.) Is this true for a problem with a non-convex objective and a convex inequality constraint (suppose our perturbation is always greater than zero)? Or is there any weaker result for non-convex problems I can resort to? The two problems are: Unperturbed problem \begin{equation} \begin{array}{ll} \text{minimize} & f_{0}(x)\\ \text{s.t.} & f_{i}(x) \leq 0, \quad i=1,\dots, m \\ & h_{i}(x) = 0, \quad i=1,\dots, p\\ \end{array} \end{equation} Perturb
|
I can answer your question 1. In this context we assume strong duality holds, and the KKT conditions hold. In particular, the primal optimal point $x^*$ together with $(\lambda^*, \nu^*)$ satisfy $$ \nabla f_0(x^*)+\sum_{i=1}^m \lambda_i^* \nabla f_i(x^*) +\sum_{i=1}^p \nu_i^*\nabla h_i(x^*)=0 $$ Note that the function on the RHS of the first inequality is convex in $x$ and its derivative vanishes at $x^*$ , so $x^*$ is its global minimizer, i.e. $g(\lambda^*,\nu^*)=\min_{x\in \mathbb{R}^n} f_0(x)+\sum_{i=1}^m \lambda_i^* f_i(x) +\sum_{i=1}^p \nu_i^*h_i(x)$ . Hence the first inequality is true for every $x\in \bigcap_i \textbf{dom}\, f_i$ , not only for the feasible points of the unperturbed problem.
|
|optimization|convex-optimization|perturbation-theory|
| 0
|
Domination number is at most $n/2$
|
Let $G$ be a graph of $n$ vertices with no isolated vertex. Prove that the domination number of $G$ is at most $\lfloor n/2 \rfloor.$ Turns out this result can easily solve a combinatorics problem I'm working on right now. And since the problem only asks for even $n$ I'm going to ignore the odd numbers for the moment. I tried induction like this: The base case is clear so assume the result hold for some even number $n$ . Now given a graph of $n+2$ vertices I want to remove $2$ vertices so I can use the induction hypothesis. The problem is that for some graphs, removing any two vertices will give a graph with isolated vertices.
|
Another similar proof: Show that for any graph (even with isolated vertices) there exists an independent dominating set. Show that if $X$ is an independent dominating set, then $\overline{X}$ is a dominating set. Conclude that either $X$ or $\overline{X}$ has the desired size, where $X$ is an independent dominating set. Proofs: By induction on the number of vertices. Consider a vertex $v$ . By induction there exists an independent dominating set $X$ of $G \setminus N[v]$ (where $N[v]$ denotes the closed neighborhood). Thus $X \cup \{v\}$ is an independent dominating set of $G$ . Consider $X$ an independent dominating set of $G$ . Let $x$ be a vertex of $X$ ; as $x$ is not isolated, $x$ has a neighbor called $y$ . As $X$ is an independent set, $y \not\in X$ . Thus $y \in \overline{X}$ dominates $x$ . We conclude that $\overline{X}$ is a dominating set of $G$ . As $V(G) = X \cup \overline{X}$ and as the union is disjoint, $n = |X| + |\overline{X}|$ . We deduce that $|X| \geq n
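The two steps of this proof can be exercised computationally: greedily build a maximal independent set $X$ (any maximal independent set is an independent dominating set), then take whichever of $X$, $\overline{X}$ is smaller. The example graph below (a 6-cycle) is a hypothetical illustration:

```python
def maximal_independent_set(adj):
    """Greedy maximal independent set; maximality makes it dominating."""
    X = set()
    for v in adj:
        if all(u not in X for u in adj[v]):
            X.add(v)
    return X

def dominates(S, adj):
    return all(v in S or any(u in S for u in adj[v]) for v in adj)

# Illustrative example: the 6-cycle (no isolated vertices).
n = 6
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}

X = maximal_independent_set(adj)
comp = set(adj) - X            # complement is dominating too (step 2)
small = X if len(X) <= len(comp) else comp
print(sorted(small), len(small))   # a dominating set of size <= n // 2
```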
|
|combinatorics|discrete-mathematics|graph-theory|
| 0
|
Simple but confusing: Finding the $n$th term of a sequence, given that the sum of the $1$st to $n$th terms is $3n^2-2$
|
So the question is the following: Given that $S_n = 3n^2 - 2$ , Find the nth term of the sequence $a_n$ . $S_n$ is the sum of the sequence from $1$ to $n$ . I know that the answer should be an arithmetic sequence, because all the choices are as such, and anyway we know that from the form of $S_n$ . The weird thing is that if we solve it this way: $S_n - S_{n-1} = a_n$ , We know that the formula for the sum of an arithmetic sequence is : $$S_n = \frac {n(a_1 + a_n)}{2}$$ Using this and calculating $S_n - S_{n-1}$ for $S_n = 3n^2 - 2$ , we get $a_n = 6n-3$ and where $a_1 = 3$ However, if we work backward to verify the answer, starting from $a_n = 6n-3$ and plugging that into the general formula $S_n = \frac {n(a_1 + a_n)}{2}$ , we get: $$S_n = \frac {n(a_1 + a_n)}{2} =\frac {n(3 + 6n-3)}{2} = 3n^2$$ Which works perfectly if we try out a few values for example $S_3 = 27$ , and $a_1 = 3, a_2 = 9, a_3 = 15 ,$ so $a_1 + a_2 + a_3 = 3 + 9 + 15 = 27$ But the sequence $6n-3$ does not work with
|
Presumably the question means that the sum $S_n$ of the first $n$ terms of the sequence is given by $S_n=3n^2-2$ for $n\geq 1$ . Indeed, the sum of $0$ terms of anything must be $0$ , so the formula can't work for $n=0$ (where it gives $-2$ ). Now your calculation for the $n$ th term involves substituting ${n-1}$ into the formula, so this is only valid if $n-1\geq 1$ , i.e. $n\geq 2$ . It doesn't give the correct value for $n=1$ . So the $n$ th term of your sequence is $a_n=6n-3$ for $n\geq2$ and (just using the value of $S_1$ ) $a_n=1$ for $n=1$ .
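A quick check (illustrative) that the piecewise formula really reproduces the given partial sums:

```python
def S(n):                 # the given partial sums
    return 3 * n**2 - 2

def a(n):                 # the n-th term as derived in the answer
    return 1 if n == 1 else 6 * n - 3

# The terms must reproduce the partial sums for every n >= 1.
print(all(sum(a(k) for k in range(1, n + 1)) == S(n) for n in range(1, 100)))
```

The uniform formula $a_n = 6n-3$ alone would give $S_n = 3n^2$, which is exactly the discrepancy the question noticed.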
|
|sequences-and-series|summation|
| 0
|
Constructing coproducts from free products and coproducts in $Sets$
|
One can check that the coproduct of $G=\langle A \mid R \rangle$ and $G'=\langle A' \mid R' \rangle$ is $G * G'= \langle A \amalg A' \mid R \amalg R' \rangle$ . I'm wondering if such an argument can be generalized for arbitrary categories $C$ with a right adjoint forgetful functor to $Sets$ . More concretely, given $G,G' \in C$ , with $R:F(A) \to G$ and $R':F(A') \to G'$ the canonical maps of the adjunction, if there's a coproduct $G * G'$ then there is a map $R * R' : F(A \amalg A')\cong FA \amalg FA' \to G* G'$ induced by the coproduct map $A \amalg A' \to G * G'$ . I have two questions about this: Can we somehow state that $G * G'$ is (isomorphic to) a quotient arising from $R*R'$ ? If not, what natural requirements could be added to the category $C$ so that this can be stated and proved? Possibly with the assumptions added in the first question, can we prove that a coproduct exists? It should be induced somehow from $R$ and $R'$ , but $R * R'$ is no longer available.
|
The correct thing to quotient in a general algebraic theory is a congruence . It is a subobject $R \hookrightarrow G \times G$ , satisfying reflexivity, symmetry and transitivity. The quotient is simply the coequalizer of the two maps $R \rightrightarrows G$ . Exercise : prove that congruences in the category of groups naturally corresponds to normal subgroups. This legitimizes our practice of quotienting by normal subgroups. Of course, we don't need to ensure a monomorphism, since for example we can consider the presentation $⟨a, b \mid ab, ab⟩$ where the relation $ab = e$ is repeated. So we relax that and consider simply an object equipped with a map to $G \times G$ . Now we may state our question properly: given $R_1 \rightrightarrows G_1$ and $R_2 \rightrightarrows G_2$ , is the coproduct of their coequalizers isomorphic to the coequalizers of their coproducts? This is simply an instance of colimits commuting with colimits . The existence is also guaranteed, i.e. the theorem proves
|
|category-theory|
| 1
|
How to find the coefficient of $x^3y^4z$ in $ (x+y+z)^5 (1+x+y+z)^{5}$?
|
First of all, I know that there is an extremely similar question from yesterday that has been closed due to Mathematics Stack Exchange guidelines, so I can't comment and find what is incorrect in my way. The question : find the coefficient of $(x^3y^4z)$ in $ (x+y+z)^5 (1+x+y+z)^{5}$ . We know that: $(x+y+z)^5$ = $\sum^{}_{n_1 + n_2 + n_3 = 5}$ ${5\choose n_1, n_2, n_3}$ $x^{n_1} y^{n_2}z^{n_3}$ And : $(1+x+y+z)^5$ = $\sum^{}_{n_a + n_b + n_c +n_d = 5}$ ${5\choose n_a, n_b, n_c, n_d}$ ${x^{n_a} y^{n_b}z^{n_c}1^{n_d}}$ $ (x+y+z)^5 (1+x+y+z)^{5}$ = $\sum^{}_{n_1 + n_2 + n_3 = 5}$ ${5\choose n_1, n_2, n_3}$ $x^{n_1} y^{n_2}z^{n_3}$ $\sum^{}_{n_a + n_b + n_c +n_d = 5}$ ${5\choose n_a, n_b, n_c, n_d}$ ${x^{n_a} y^{n_b}z^{n_c}1^{n_d}}$ Which equals to: $\sum^{}_{n_a + n_b + n_c +n_d = 5}$ $\sum^{}_{n_1 + n_2 + n_3 = 5}$ ${5\choose n_1, n_2, n_3}$ ${5\choose n_a, n_b, n_c, n_d}$ $x^{n_1} y^{n_2}z^{n_3}$ ${x^{n_a} y^{n_b}z^{n_c}1^{n_d}}$ Hence, to get the coefficient : $ n_1 =2, n_2 = 2, n_3 =
|
Since you want a term of total degree $8$ , you might start by noting that the terms of total degree $8$ in $(x+y+z)^5 (1+x+y+z)^5 = \sum_{n=0}^5 {5 \choose n} (x+y+z)^{5+n}$ come from $ {5 \choose 3} (x+y+z)^8$ . Next, the $x^3$ term in ${5 \choose 3} (x+y+z)^8$ is ${5 \choose 3}{8 \choose 3} x^3 (y+z)^5$ . Finally, the term in $y^4 z^1$ is ${5 \choose 3}{8 \choose 3}{5 \choose 4} x^3 y^4 z^1$ . So your coefficient is $$ {5 \choose 3}{8 \choose 3}{5 \choose 4} = 2800$$
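The closed form can be cross-checked by brute-force polynomial multiplication over exponent vectors (an illustrative verification):

```python
from collections import Counter
from math import comb

# Polynomials as dictionaries {(i, j, k): coefficient of x^i y^j z^k}.
def poly_mul(p, q):
    r = Counter()
    for a, pc in p.items():
        for b, qc in q.items():
            r[tuple(x + y for x, y in zip(a, b))] += pc * qc
    return r

def poly_pow(p, n):
    r = Counter({(0, 0, 0): 1})
    for _ in range(n):
        r = poly_mul(r, p)
    return r

xyz     = Counter({(1, 0, 0): 1, (0, 1, 0): 1, (0, 0, 1): 1})           # x+y+z
one_xyz = Counter({(0, 0, 0): 1, (1, 0, 0): 1, (0, 1, 0): 1, (0, 0, 1): 1})

product = poly_mul(poly_pow(xyz, 5), poly_pow(one_xyz, 5))
print(product[(3, 4, 1)])                       # brute force
print(comb(5, 3) * comb(8, 3) * comb(5, 4))     # the answer's closed form
```

Both computations give 2800, confirming the stepwise binomial argument.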
|
|discrete-mathematics|multinomial-coefficients|
| 1
|
Condition for showing families of seminorms generate same topology
|
There is a statement about locally convex spaces in Reed & Simon, Methods of Modern Mathematical Physics (Vol I, Section V.1) that is given without a proof. The statement is: Given two families of seminorms $\{\rho_\alpha\}_{\alpha \in A}$ and $\{d_\beta\}_{\beta \in B}$ over a locally convex space X, if the families generate the same natural topologies, then for each $\alpha \in A$ , there are $\beta_1, \ldots, \beta_n \in B$ and $C > 0$ such that for all $x \in X$ : $$\rho_\alpha(x) \leq C(d_{\beta_1}(x) + \ldots + d_{\beta_n}(x))$$ And correspondingly, for each $\beta \in B$ , there are $\alpha_1, \ldots, \alpha_n \in A$ and $D > 0$ such that for all $x \in X$ : $$d_\beta(x) \leq D(\rho_{\alpha_1}(x) + \ldots + \rho_{\alpha_n}(x))$$ The natural topology generated by a family of seminorms $\{\rho_\alpha\}_{\alpha \in A}$ is defined to be the weakest topology such that each of the seminorms and vector addition are continuous. How would you go about proving the existence of C and $\bet
|
Fix $\alpha$ and $\epsilon > 0$ . Then $\{y\in X\colon \rho_\alpha(y)<\epsilon\}$ is a neighborhood of $0$ in $\tau_\rho$ . So it's a neighborhood of $0$ in $\tau_d$ . Therefore there exist $\beta_1,\ldots,\beta_n$ and $\delta>0$ such that $$\{y\in X\colon d_{\beta_1}(y)\le\delta\land\ldots\land d_{\beta_n}(y)\le\delta\}\subseteq\{y\in X\colon \rho_\alpha(y)<\epsilon\}. \tag{1}$$ For $x\in X$ let $M=\sum_{i=1}^n d_{\beta_i}(x)$ . If $M>0$ then $d_{\beta_i}(\frac{\delta}{M}x)=\frac{\delta}{M}d_{\beta_i}(x)\le \frac{\delta}{M}M=\delta$ for all $i\in\{1,\ldots,n\}$ . Therefore by $(1)$ , $\rho_\alpha(\frac{\delta}{M}x)<\epsilon$ and $\rho_\alpha(x)<\frac{\epsilon}{\delta}M$ , so taking $C=\frac{\epsilon}{\delta}$ we get the kind of bound we're looking for. If $M=0$ I claim that $\rho_\alpha(x)=0$ . Because if $M=0$ then $d_{\beta_i}(x)=0$ for all $i\in\{1,\ldots,n\}$ . So $d_{\beta_i}(\mu x)=0$ for any $\mu > 0$ and $i$ , so by $(1)$ , $\rho_\alpha(\mu x)<\epsilon$ for all $\mu>0$ which implies that $\rho_\alpha(x)<\frac{\epsilon}{\mu}$ for all $\mu>0$ . By taking $\mu\to +\infty$ we get
|
|functional-analysis|locally-convex-spaces|frechet-space|
| 0
|
Pythagorean theorem for operators
|
For numbers $a,b\in\mathbb{R}_+$ , we have that $$\sqrt{a^2+b^2} \leq a+b.$$ Can we extend this to operators? In other words, given positive semi-definite operators $A,B$ (of the same size), is it the case that \begin{align} \sqrt{A^2+B^2} \leq A + B \end{align} where the ordering is the matrix ordering ( $Q\geq0$ means $Q$ is positive semi-definite)? I have been looking into this note , where they show many useful results regarding operator monotonicity and convexity, though I could not find a good tool for answering this one. Remark : there exist positive semi-definite matrices $A,B$ for which $AB+BA$ fails to be positive semi-definite (i.e. $AB+BA$ can have negative eigenvalues). (Not sure if the title is a good one! The reason behind choosing it is that given a right-angled triangle with legs $a,b$ the hypotenuse is $\sqrt{a^2+b^2}$ , and from the triangle inequality it follows that $\sqrt{a^2+b^2}\leq a+b$ .)
|
Partial Answer : Consider the squared version of the problem, $$A^2 + B^2 \leq (A+B)^2.$$ Since the square root is operator monotone (Löwner–Heinz), this squared inequality would imply $\sqrt{A^2+B^2} \leq A+B$ , so it is a sufficient condition for the original. The positive semi-definite condition can be written as \begin{align} &x^T \left[(A+B)^2 - A^2 - B^2\right]x \geq 0 \qquad \forall x \in \mathbb{R}^n \\ \iff \; &x^T \left(AB + BA\right)x \geq 0 \qquad \forall x \in \mathbb{R}^n. \end{align} Hence the squared version holds iff $AB + BA$ is positive semi-definite, which, as the OP remarked, is not always true. So this sufficient condition can fail, and the OP's original relation ( $\sqrt{A^2+B^2} \leq A + B$ ) is unlikely to always hold true.
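For $2\times 2$ matrices the question can be probed numerically, using the example behind the OP's remark ($A=\operatorname{diag}(1,0)$, $B$ the all-ones matrix) and the closed form $\sqrt{M}=(M+\sqrt{\det M}\,I)/\sqrt{\operatorname{tr}M+2\sqrt{\det M}}$ for a symmetric PSD $2\times2$ matrix. This sketch (an illustration, not a proof) finds that $(A+B)-\sqrt{A^2+B^2}$ has negative determinant, hence a negative eigenvalue, suggesting the original inequality can indeed fail:

```python
from math import sqrt

def mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sqrtm2(M):
    """Square root of a 2x2 symmetric PSD matrix:
    sqrt(M) = (M + sqrt(det M) I) / sqrt(tr M + 2 sqrt(det M))."""
    d = sqrt(M[0][0] * M[1][1] - M[0][1] * M[1][0])
    s = sqrt(M[0][0] + M[1][1] + 2 * d)
    return [[(M[i][j] + (d if i == j else 0.0)) / s for j in range(2)]
            for i in range(2)]

A = [[1.0, 0.0], [0.0, 0.0]]
B = [[1.0, 1.0], [1.0, 1.0]]            # both positive semi-definite

M = [[a2 + b2 for a2, b2 in zip(ra, rb)]
     for ra, rb in zip(mul(A, A), mul(B, B))]        # M = A^2 + B^2
S = sqrtm2(M)
D = [[A[i][j] + B[i][j] - S[i][j] for j in range(2)] for i in range(2)]

det_D = D[0][0] * D[1][1] - D[0][1] * D[1][0]
print(det_D)   # negative, so D = (A+B) - sqrt(A^2+B^2) is indefinite
```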
|
|matrices|inequality|convex-analysis|
| 0
|
Curve cover direction
|
I have an interesting problem. Imagine a wire bent into a curve. We have a set of non-stretchable plastic covers of various diameters. The larger the diameter of the cover, the deeper it can be placed on the wire. The cover has some direction. As the cover diameter approaches zero, its direction approaches that of the derivative at the ends of the curve. If the cover is wide enough that the entire curve fits in it, there is only one cover instead of two (starting and ending), and its direction can be calculated using the "rotating calipers" algorithm (in practice, a polyline can be used instead of a smooth curve). How do I calculate the direction depending on the curve (implemented as a polyline) and the width of the cover? Problem goal: image reconstruction where we have many polylines and gaps between parts of polylines; we need to find the most appropriate line B for line A and connect them by filling the gap - if the tail cover of A intersects with the head cover of B, optionally, we also l
|
I'm not familiar with the rotating calipers algorithm, but if it's not computationally expensive, then an idea is to numerically find the subcurve, sharing one endpoint with the original curve, whose minimal cover has the same width as the target cover. Since the width of the minimal cover of such subcurves monotonically increases with the length of the subcurve, one could try a bisection search over the unknown endpoint of the subcurve, with the initial interval being the two ends of the full curve, evaluating the subcurves using the rotating calipers algorithm. Repeating this but fixing the other end instead would then yield the two desired covers.
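Here is a rough sketch of that bisection in Python (function names, the brute-force width routine, and the sampling resolution are my own choices; a real implementation would use rotating calipers on the convex hull instead):

```python
import numpy as np

def width(points, n_angles=360):
    """Approximate minimal cover width of a point set by brute force over
    directions: the width is the smallest projected extent."""
    pts = np.asarray(points, dtype=float)
    best = np.inf
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        d = np.array([np.cos(theta), np.sin(theta)])
        proj = pts @ d
        best = min(best, proj.max() - proj.min())
    return best

def cover_end_index(polyline, target_width):
    """Largest prefix (from the head) whose minimal cover width is still
    <= target_width.  The width is monotone in the prefix length (adding
    points can only grow every projected extent), so integer bisection works."""
    lo, hi = 1, len(polyline) - 1          # prefix is polyline[:i+1]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if width(polyline[:mid + 1]) <= target_width:
            lo = mid
        else:
            hi = mid - 1
    return lo

# Example: an L-shaped polyline; a cover of width 0.5 fits the straight first
# leg plus only part of the second leg, so the subcurve stops near the turn.
poly = [(x / 10, 0.0) for x in range(11)] + [(1.0, y / 10) for y in range(1, 11)]
i = cover_end_index(poly, 0.5)
print(i, poly[i])
```

Repeating with the polyline reversed gives the cover at the other end.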
|
|geometry|differential-geometry|
| 0
|
Constructing coproducts from free products and coproducts in $Sets$
|
One can check that the coproduct of $G=\langle A \mid R \rangle$ and $G'=\langle A' \mid R' \rangle$ is $G * G'= \langle A \amalg A' \mid R \amalg R' \rangle$ . I'm wondering if such an argument can be generalized for arbitrary categories $C$ with a right adjoint forgetful functor to $Sets$ . More concretely, given $G,G' \in C$ , with $R:F(A) \to G$ and $R':F(A') \to G'$ the canonical maps of the adjunction, if there's a coproduct $G * G'$ then there is a map $R * R' : F(A \amalg A')\cong FA \amalg FA' \to G* G'$ induced by the coproduct map $A \amalg A' \to G * G'$ . I have two questions about this: Can we somehow state that $G * G'$ is (isomorphic to) a quotient arising from $R*R'$ ? If not, what natural requirements could be added to the category $C$ so that this can be stated and proved? Possibly with the assumptions added in the first question, can we prove that a coproduct exists? It should be induced somehow from $R$ and $R'$ , but $R * R'$ is no longer available.
|
In total generality, no, I don't think you can always get this. You can't always even get a presentation, never mind a way to combine presentations. But in specificity, here is a canonical presentation of algebras over a monad as quotients of free algebras. Source: Category theory in context . This is one algebraic form of "generators and relations". Let $(T,\eta,\mu)$ be a monad over a category $\mathscr{C}$ , and $(\varsigma;\alpha:T\varsigma\to\varsigma)$ a $T$ -algebra. Then within the category of algebras $\mathscr{C}^T$ there is a coequaliser diagram: $$F(T\varsigma)\overset{F\alpha}{\underset{\mu_\varsigma}{\rightrightarrows}}F(\varsigma)\twoheadrightarrow(\varsigma;\alpha)$$ Where $F:\mathscr{C}\to\mathscr{C}^T$ is the free algebra functor. In the case of Abelian groups, this generalises the observation that we can always present an Abelian group as a free quotient via some relations. That this is a genuine coequaliser follows from one direction of the Barr-Beck theorem. It also me
|
|category-theory|
| 0
|
Short question: How is $C^1(K)$ defined for a compact set $K$?
|
Let $K\subset\mathbb R^d$ be compact. Then how is $C^1(K)$ defined? I've seen this at many places, but no one seems to care about the problems at the boundary of $K$ .
|
It probably means "the set of restrictions of functions that are defined on an open set containing $K$ , and that are $C^1$ on the open set". I think I remember from differential geometry courses that whenever $K$ is a submanifold of $\mathbb{R}^n$ , the intrinsic definition of $C^1(K)$ coincides with this one, therefore it is a "good definition". Note that any $C^1$ function in the above sense is obviously $C^1$ in @Surb's sense. For $d = 1$ , the definitions agree: let $f: [a,b] \rightarrow \mathbb{R}$ be continuous, $C^1$ on the interior of the interval, and such that $j(a) := \lim_{x\to a} f'(x)$ and $j(b) :=\lim_{x\to b} f'(x)$ exist. Then define $g$ as $f$ on $[a,b]$ and, on $(a-1,a)$ , to be $t \mapsto (t-a)j(a) + f(a)$ and, on $(b, b+1)$ , to be $t \mapsto (t-b)j(b) + f(b)$ . Then it is easy to show that $g$ is $C^1$ on $(a-1,b+1)$ . I don't know if the same is true in higher dimensions.
|
|real-analysis|analysis|derivatives|compactness|
| 0
|
Explicit curve with unbounded torsion
|
I have been trying to experiment with curves. Recently I found a curve (in $\mathbb{R}^2$ ) with unbounded curvature. However, I am not able to find a curve with unbounded torsion, for various reasons: It has to be in $\mathbb{R}^3$ , which makes things more complicated. Torsion is not as easy to calculate as curvature; I'm aware of the formula $\displaystyle\frac{(\alpha''\times\alpha')\cdot \alpha'''}{k^2_{\alpha}}$ . I know that such curves exist because of the Fundamental Theorem, but I would like to find one explicitly. Can anyone guide/help me?
|
Take $\gamma\colon (0,+\infty) \to \Bbb R^3$ defined by $$ \gamma(t) = \left(\frac{1}{t}\cos t^2, \frac{1}{t}\sin t^2, t\right). $$ Then for all $t >0$ , \begin{align} \gamma'(t) &= \left(-2 \sin t^2 -\frac{1}{t^2}\cos t^2, 2\cos t^2 - \frac{1}{t^2}\sin t^2, 1\right) ,\\ \gamma''(t) &= \left(-4t\cos t^2 + \frac{2}{t} \sin t^2 + \frac{2}{t^3} \cos t^2, -4t\sin t^2 - \frac{2}{t} \cos t^2 + \frac{2}{t^3} \sin t^2, 0\right),\\ \gamma'''(t) &= \left(8t^2\sin t^2 -\frac{6}{t^2}\sin t^2 -\frac{6}{t^4} \cos t^2, -8t^2\cos t^2 + \frac{6}{t^2}\cos t^2 - \frac{6}{t^4} \sin t^2, 0 \right), \end{align} and you can check that, whenever it makes sense, the torsion is $$ \tau(t) = \frac{8t^9 - 10 t^5}{20t^8 - 11t^4 + 2} \sim_{t\to\infty} \frac{2}{5}t, $$ and thus is unbounded.
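As a sanity check, the claimed closed form can be compared against the standard torsion formula $\tau = \frac{(\gamma'\times\gamma'')\cdot\gamma'''}{\lVert\gamma'\times\gamma''\rVert^2}$ evaluated symbolically (a quick sketch, assuming sympy is available):

```python
import sympy as sp

# Verify the torsion of gamma(t) = (cos(t^2)/t, sin(t^2)/t, t) numerically.
t = sp.symbols('t', positive=True)
g = sp.Matrix([sp.cos(t**2) / t, sp.sin(t**2) / t, t])
g1, g2, g3 = g.diff(t), g.diff(t, 2), g.diff(t, 3)
cross = g1.cross(g2)
tau = cross.dot(g3) / cross.dot(cross)

claimed = (8 * t**9 - 10 * t**5) / (20 * t**8 - 11 * t**4 + 2)
# Compare numerically at a few points rather than forcing a slow simplify().
for val in (1.5, 3.0, 10.0):
    print(float(tau.subs(t, val)), float(claimed.subs(t, val)))
```

The two agree, and the rational function visibly grows like $\frac25 t$.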
|
|differential-geometry|curves|
| 1
|
Proof of a particular piece of Milnor-Wolf theorem
|
The Milnor-Wolf theorem says that a finitely generated solvable group that doesn't have exponential growth is virtually nilpotent. The proof I've seen is divided into two pieces: Prove that such a solvable group is virtually polycyclic. This is proved in Milnor's paper , which is only a few pages, and (unsurprisingly) extremely readable. Prove that a polycyclic group that doesn't have exponential growth is virtually nilpotent. The only reference I can find for this is in Wolf's original paper [Theorem 4.3], but it's proved in the larger context of what Wolf is doing, so makes use of some things I'm not super familiar with (simply connected Lie groups, Mostow's results, etc.). Is there a proof of the second part that is more in the same spirit of Milnor's? That is, purely group-theoretic, and relatively easy to follow? I think (though I could be wrong) that by induction on Hirsch length this can be reduced to the problem of showing: If $G=N\rtimes\mathbb{Z}$ with $N$ finitely generated
|
Here is a proof based on Proposition 14.28 in Geometric Group Theory by Drutu and Kapovich. Appetizer First, we need some lemmas about a particular case of the question, when $G=\mathbb{Z}^n\rtimes_A\mathbb{Z}$ ; here $\mathbb{Z}=\langle t\rangle$ is acting by the matrix $A\in GL(n,\mathbb{Z})$ : $t^{-1}vt=Av$ . Lemma 1 : If $A\in GL(n,\mathbb{Z})$ has every eigenvalue of norm $1$ , then every eigenvalue is a root of unity. Proof: Let $\chi(x)=\sum_{j=0}^n a_{j}x^j$ be the characteristic polynomial of $A$ . If $\{\lambda_i\}$ are the eigenvalues of $A$ , then by assumption $\lvert\lambda_i\rvert=1$ for all $i$ . Since, up to sign, $a_j$ is a sum of $\binom{n}{j}$ products of the roots, we get $\lvert a_j\rvert\le\binom{n}{j}$ for all $j$ . The eigenvalues of $A^k$ are $\{\lambda_i^k\}$ , and since they also all have norm $1$ , it's also true that its characteristic polynomial $\chi_k(x)=\sum_{j=0}^n a_{k,j}x^j$ has $\lvert a_{k,j}\rvert\le\binom{n}{j}$ . Since these coefficients are integers, there
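To illustrate Lemma 1 with a concrete matrix (my example, not from the proof):

```python
import numpy as np

# A is an integer matrix with characteristic polynomial x^2 - x + 1, whose
# roots are primitive 6th roots of unity: they have norm 1, and A^6 = I,
# so every eigenvalue is indeed a root of unity, as the lemma predicts.
A = np.array([[0, -1], [1, 1]])
eigs = np.linalg.eigvals(A)
print(np.abs(eigs))                      # both eigenvalues have norm 1
print(np.linalg.matrix_power(A, 6))      # the identity matrix
```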
|
|group-theory|geometric-group-theory|solvable-groups|nilpotent-groups|
| 1
|
Use the MVT for Integrals to bound $\int_0^1\frac{x^6}{\sqrt{1+x^2}}dx$
|
I have an exercise to use the mean value theorem for integrals to show that $$\frac{1}{7\sqrt 2}\le\int_0^1\frac{x^6}{\sqrt{1+x^2}}\ dx\le\frac{1}7$$ I've determined that the integrand is increasing on the given interval and therefore minimized at $x=0$ and maximized at $x=1$ . By the integral MVT there exists some point $c\in [0,1]$ such that $$\int_0^1\frac{x^6}{\sqrt{1+x^2}}\ dx = f(c)(1-0) = f(c)$$ Since $0\le f(c)\le 1/\sqrt 2$ I get a bound but not as tight as the one requested. But at this point I'm not sure where a factor of 7 comes from. I could guess that I should start actually integrating some stuff but then that seems like I'm not using the integral MVT as instructed. If I tried doing just a little, I could set $u=1+x^2$ so that the integral becomes $$\int_1^{2}\frac{x^6}{u^{1/2}} \left(\frac{du}{2x^2}\right) = \frac 1 2 \int_1^2 \frac{x^4\ du}{u^{1/2}} = \frac 1 2 \int_1^2\frac{(u-1)^2\ du}{u^{1/2}}$$ But like ... now I'm just computing the integral.
|
Let $f(x)=\frac1{\sqrt{1+x^2}}$ and $g(x)=x^6$ . Then, $$\int_0^1f(x)g(x)dx=c\int_0^1g(x)dx$$ where $c\in[\min f([0,1]),\max f([0,1])]=[1/\sqrt2,1]$ . Since $\int_0^1g(x)\,dx=\frac17$ , this gives exactly $\frac{1}{7\sqrt 2}\le\int_0^1\frac{x^6}{\sqrt{1+x^2}}\,dx\le\frac17$ .
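A quick numerical check of the resulting bound (plain Simpson's rule, nothing beyond the statement above):

```python
from math import sqrt

def f(x):
    return x**6 / sqrt(1 + x**2)

# Composite Simpson's rule on [0, 1] with an even number of subintervals.
n = 1000
h = 1.0 / n
integral = f(0) + f(1) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
integral *= h / 3
print(integral)    # lies strictly between 1/(7*sqrt(2)) ~ 0.1010 and 1/7 ~ 0.1429
```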
|
|integration|inequality|definite-integrals|mean-value-theorem|
| 1
|
Necklace with 4p beads (Burnside's lemma)
|
Let $p \geq 3$ be a prime number. We consider $2p$ black beads and $2p$ blue beads (both indistinguishable). How many unique necklaces of size $4p$ , created from these beads, are there? (Consider only rotations.) I need help with Burnside's lemma. (I have not found a similar question with a satisfactory answer). The first approach (by the formula): We will use the well-known formula. The color multi-set is $B = \{1^{2p}, 2^{2p}\}$ . The number of such necklaces is: $$ \begin{align} N(B) &= \frac{1}{|B|} \sum_{d\mid\gcd(n_1 \dots n_k)} \binom{|B|/d}{n_1/d \dots n_k/d} \phi(d) \\ &= \frac{1}{4p} \sum_{d\mid 2p} \binom{4p/d}{2p/d \; 2p/d} \phi(d) \\ &= \frac{1}{4p} \biggl( \binom{4p}{2p \; 2p}\phi(1) + \binom{2p}{p \; p}\phi(2) + \binom{4}{2 \; 2}\phi(p) + \binom{2}{1 \; 1}\phi(2p) \biggr) \\ &= \frac{1}{4p} \biggl( \binom{4p}{2p} + \binom{2p}{p} + 6(p-1) + 2(p-1) \biggr) \\ &= \frac{1}{4p} \biggl( \binom{4p}{2p} + \binom{2p}{p} + 8(p-1) \biggr) \end{align} $$ (here the divisors of $\gcd(2p, 2p)=2p$ are $1,2,p,2p$ , since $p$ is an odd prime). The second approach (by Burnside's lemma): The group size is $4p$ . The Id fixes $\
|
Temporarily ignoring the requirement that there must be an equal number of black and blue beads... When rotating once, or indeed when rotating any number of times $k$ where $\gcd(k,4p)=1$ you will have that every bead must be the same. That is, we have one degree of freedom. When rotating twice, or indeed when rotating any number of times $k$ where $\gcd(k,4p)=2$ you will have that every other bead must be the same. That is, we have two degrees of freedom. When rotating four times, or any number of times $k$ where $\gcd(k,4p)=4$ you will have every fourth bead must be the same. That is, four degrees of freedom. When rotating $k$ times where $\gcd(k,4p)=p$ you will have every $p$ 'th bead must be the same. And finally, when rotating $2p$ times, every $2p$ 'th bead ( i.e. the bead opposite ) must be the same and we'll have $2p$ degrees of freedom. Noting that each of these will partition the set of beads into $1,2,4,p$ or $2p$ equally sized sets, and recalling that there must be an equal
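For $p=3$ the count can be verified by brute force (my own check, enumerating all $2^{12}$ colourings and grouping them into rotation orbits):

```python
# Necklaces of length 12 with six beads of each colour, counted up to rotation.
n, half = 12, 6
seen, orbits = set(), 0
for mask in range(1 << n):
    if bin(mask).count("1") != half or mask in seen:
        continue
    orbits += 1
    m = mask
    for _ in range(n):                      # mark the whole rotation orbit
        m = ((m << 1) | (m >> (n - 1))) & ((1 << n) - 1)
        seen.add(m)
print(orbits)   # 80
```

This matches the Burnside count $\frac{1}{12}\bigl(\binom{12}{6}+\binom{6}{3}+\phi(3)\cdot 6+\phi(6)\cdot 2\bigr)=\frac{960}{12}=80$.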
|
|combinatorics|group-theory|discrete-mathematics|group-actions|necklace-and-bracelets|
| 1
|
solution-verification | Show that if $a$ and $q$ are natural numbers and the number $(a+\sqrt{q})(a+\sqrt{q+1})$ is rational, then $q=0$.
|
the question Show that if $a$ and $q$ are natural numbers and the number $(a+\sqrt{q})(a+\sqrt{q+1})$ is rational, then $q=0$ . the idea for the number to be rational both members have to be rational (*) because $a$ is natural, it means that $q$ and $q+1$ should both be perfect squares, but they are also consecutive: $q=k^2 \implies q+1=k^2+1$ , while the next perfect square is $(k+1)^2=k^2+2k+1$ . The equality would happen only if $2k=0 \implies k=0 \implies q=0$ . I'm not sure of the part I noted (*), because I think I should also demonstrate this fact, but I don't know how. Hope one of you can tell me if my idea is right and how I can improve my answer! Thank you!
|
$x=(a+\sqrt{q})(a+\sqrt{q+1})$ is rational iff $y=(a-\sqrt{q})(a-\sqrt{q+1})$ is, because $xy$ belongs to $\mathbb Z$ . Hence in this case $x+y$ is rational, i.e. $\sqrt{q}\sqrt{q+1}$ is. Then for some integers $\alpha,\beta$ with $\gcd(\alpha,\beta)=1$ we have $\beta^2q(q+1)=\alpha^2$ , so $\beta^2$ divides $\alpha^2$ ; since $\gcd(\alpha,\beta)=1$ , this forces $\beta=\pm1$ , and hence $q(q+1)=\alpha^2$ , i.e. $\alpha^2-q^2=q$ . For $q\geqslant 1$ this is not possible: $\alpha^2=q^2+q>q^2$ gives $\alpha\geqslant q+1$ , and thus $\alpha^2-q^2\geqslant 2q+1>q$ . So $q$ must be zero.
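The key step, that $q(q+1)$ is never a perfect square for $q\geq1$ (it lies strictly between $q^2$ and $(q+1)^2$ ), can be spot-checked numerically:

```python
from math import isqrt

# Empirical check (not a proof): q(q+1) is never a perfect square for q >= 1.
for q in range(1, 100_000):
    m = q * (q + 1)
    assert isqrt(m) ** 2 != m
print("no q in [1, 10^5) makes q(q+1) a perfect square")
```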
|
|rational-numbers|square-numbers|radical-equations|
| 0
|
Why does this nominally divergent limit of an infinite sum of bessel functions converge
|
The setup isn't important, but in case you're curious, this is a physically relevant thing I'm trying to calculate that should have a finite value. This equation gives the electric field of a small patch of voltage $V$ of length $l$ on the surface of a grounded conducting cylinder of radius $R$ in the center of the cylinder: $$ |\mathbf{E}|=\frac{Vl^2}{2\pi R^3}\lim_{\epsilon\rightarrow 0^+}\sum_{m=1}^\infty \frac{\alpha_{1m}\exp\left(-\alpha_{1m}\epsilon\right)}{J_2(\alpha_{1m})}\approx 1.77578 \frac{Vl^2}{2\pi R^3} $$ $J_1$ is the first Bessel function of the first kind, and $\alpha_{1m}$ is its $m$ -th zero (not including the one at $x=0$ ). The $\approx$ comes from me evaluating this numerically. I understand that I cannot move the limit into the sum because if I do, the sum no longer converges. What I don't understand is why it even has a well-defined limit with the limit outside of the sum. For large $m$ , the zeros go toward $\alpha_{1m}\rightarrow (m+1/4)\pi$ . For large input
|
The following is a supplement to the second part of Sangchul Lee 's answer that is labeled "Old Answer." Let $\eta(s)$ be the Dirichlet eta function , which can be defined in terms of the Riemann zeta function as $ \eta(s)= \left(1-2^{1-s} \right) \zeta(s).$ To prove that $$\lim_{\varepsilon \to 0^+} \sum_{n=1}^{\infty} (-1)^n n^{3/2} e^{-\varepsilon n} = \operatorname{Li}_{-3/2}(-1) = - \eta(-3/2), $$ we'll use the dominated convergence theorem to show that the polylogarithm function $\operatorname{Li}_{-3/2}(x)$ , as a function of the real variable $x$ , is a continuous function on $[-1, 0]$ . (We can show more, but that's all that's needed here.) An integral representation for $\operatorname{Li}_{s}(x)$ that is valid for $\Re(s) >-2$ is $$\operatorname{Li}_{s}(x) = \frac{x}{\Gamma(s+2)} \int_{0}^{\infty} \frac{t^{s+1} e^{t}(e^{t}+x)}{(e^{t}-x)^{3}} \, \mathrm dt, \quad \Re(s)>-2, x \notin[1, \infty). \tag{$\spadesuit$}$$ This integral representation can be obtained by performing two
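The Abel-summation claim can be checked numerically (using sympy only to evaluate $\zeta(-3/2)$ ; the tolerance and cutoff are my own choices):

```python
from math import exp
import sympy as sp

# For small eps > 0 the regularized sum should approach
# Li_{-3/2}(-1) = -eta(-3/2) = -(1 - 2^(5/2)) * zeta(-3/2).
target = float(-(1 - 2 ** sp.Rational(5, 2)) * sp.zeta(sp.Rational(-3, 2)))

eps = 1e-3
s = sum((-1) ** n * n ** 1.5 * exp(-eps * n) for n in range(1, 60_000))
print(s, target)   # both close to -0.119
```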
|
|sequences-and-series|convergence-divergence|bessel-functions|
| 0
|
Solving a parametric limit using Taylor series
|
Solve $$ \lim _{x \rightarrow 0} \frac{\ln ( \cos^a x) + \sin^2 x}{\frac{\pi}{2} - \arctan\frac{1}{x^4}} = \frac{0}{0}= \lim _{x \rightarrow 0} \frac{a \ln ( \cos x) + \sin^2 x}{\frac{\pi}{2} - \arctan\frac{1}{x^4}}$$ using Taylor series we can write: $\sin x = x - \frac{1}{6}x^3 + o(x^3) \implies \sin^2 x= x^2 -\frac{1}{3}x^4 + o(x^4)$ $\cos x = 1 - \frac{x^2}{2} + o(x^2) \implies \ln(\cos x) = -\frac{x^2}{2} + o(x^2) $ therefore the numerator will be $$ a\left( -\frac{x^2}{2} + o(x^2) \right) + x^2 -\frac{1}{3}x^4 + o(x^4) = \left(1-\frac{a}{2}\right)x^2 +o(x^2) $$ I have the following problem to keep going: in $\arctan (1/x^4)$ the argument of the function tends to infinity, therefore I don't know how to "solve" the zero at the denominator. So I can't use Taylor to expand the denominator, and if I can't use it at the denominator I also could not at the numerator, because otherwise I will not be able to get rid of the $o(x^2)$ ; but without using the Taylor series I don't know if there's a way to solve it
|
We can use that $$\frac{\pi}{2} - \arctan\frac{1}{x^4} =\arctan x^4=x^4+ o(x^4)$$ note also that for the numerator $\sin^2 x= x^2 -\frac{x^4 }{3}+ o(x^4)$ $\ln(\cos x) = -\frac{x^2}2-\frac{x^4}{12}+ o(x^4)$ Hence the numerator is $a\ln(\cos x)+\sin^2 x=\left(1-\frac a2\right)x^2-\left(\frac a{12}+\frac13\right)x^4+o(x^4)$ , so the limit is finite only when $a=2$ , in which case it equals $-\frac12$ .
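With $a=2$ these expansions give numerator $-\frac{x^4}{2}+o(x^4)$ over denominator $x^4+o(x^4)$ , so the limit should be $-\frac12$ ; a quick numeric check:

```python
from math import atan, cos, log, pi, sin

# Evaluate the original expression near 0 with a = 2 and compare to -1/2.
def ratio(x, a=2.0):
    num = a * log(cos(x)) + sin(x) ** 2
    den = pi / 2 - atan(1.0 / x**4)
    return num / den

print(ratio(0.1), ratio(0.05))   # both close to -0.5
```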
|
|calculus|limits|taylor-expansion|
| 1
|
Short question: How is $C^1(K)$ defined for a compact set $K$?
|
Let $K\subset\mathbb R^d$ be compact. Then how is $C^1(K)$ defined? I've seen this at many places, but no one seems to care about the problems at the boundary of $K$ .
|
$$C^m(\bar{\Omega}) := \{ f \in C^m(\Omega); \: \forall |\alpha| \leq m, \exists \: g^{\alpha} \in C({\bar{\Omega}}) \: s.t. \: \partial^{\alpha}f = g^{\alpha}|_{\Omega} \}$$ In other words, $C^m(\bar{\Omega})$ consists of all functions $f \in C^m(\Omega)$ that, together with all their partial derivatives $\partial^{\alpha}f \:, 1 \leq |\alpha| \leq m $ , possess continuous extensions to $\bar{\Omega}$ ; or equivalently, such that, at each point $x_0 \in \partial \Omega$ , $\lim_{x \rightarrow x_0} \partial^{\alpha}f(x)$ exists in $\mathbb{R} \:, \: 0 \leq |\alpha| \leq m $ ; or equivalently, when $\Omega$ is bounded, if each function $\partial^{\alpha} f, \: 0 \leq |\alpha| \leq m$ is uniformly continuous in $\Omega$ . Source: Linear and Nonlinear Functional Analysis (section 1.18) by Philippe G. Ciarlet
|
|real-analysis|analysis|derivatives|compactness|
| 0
|
Example to show that the containment $\overline {f^{-1}(B)} \subset f^{-1}(\bar B) $ is proper where $f$ is continuous mapping
|
Let $f: X \to Y$ be a continuous function, where $B\subset Y$ . Then, $\overline {f^{-1}(B)} \subset f^{-1}(\bar B) $ holds, here's the proof . I am looking for an example to illustrate that the above containment is proper in general metric spaces. In $\mathbb R$ , I could not think of an example. Kindly help. Thanks in advance.
|
Ok, what would an example have to look like? Say $x\in f^{-1}(\overline{B})\setminus\overline{f^{-1}(B)}$ . We know every neighbourhood $U$ of $f(x)$ intersects $B$ , pulling back to a neighbourhood $f^{-1}(U)$ of $x$ intersecting $f^{-1}(B)$ . As $x$ fails to be in the closure, there must be a neighbourhood $V$ of $x$ which does not contain $f^{-1}(U)$ for any neighbourhood $U$ of $f(x)$ ; that is to say, $f(V)$ does not contain any neighbourhood of $f(x)$ . So we need $x$ to have neighbourhoods whose $f$ -images are hollow (or at least hollow at $x$ ). The easiest way to arrange this is to make $f$ collapse some neighbourhood of $x$ to a point, to have $f$ constant (to a point of $\overline{B}$ ) on some neighbourhood $V$ of $x$ which avoids $f^{-1}(B)$ . However we can't really do that and have $f$ continuous if our space has good connectivity properties, so my natural instinct was to look at the rationals - which are totally separated. Indeed, let $f:\Bbb Q\to\Bbb R$ map $x\mapsto\beg
|
|real-analysis|continuity|metric-spaces|examples-counterexamples|inverse-function|
| 1
|
Combinatorics: number of ways students can sign up to courses
|
There are $n$ distinguishable students. In how many ways can they sign up to courses $A, B, C$ if each of the students can choose either $0$ , $1$ , $2$ or $3$ courses and also (*) none of the $7$ parts of Venn Diagram visualizing the problem are empty. My solution: I assumed that (*) is equivalent to simply having to subtract the number of possibilities where students didn't choose certain courses and I'll use that fact later in the solution. Let's first count the total number of ways students can sign up to the courses. Let's consider a single student. If they choose to sign up to $0$ courses, they have $1$ possibility $=$ choosing $0$ courses. If they choose to sign up to $1$ course, they have $3$ posibilites $=$ $A$ or $B$ or $C$ . If they choose to sign up to $2$ courses, they have $3$ possibilities $=$ not choosing $A$ or $B$ or $C$ . If they choose to sign up to $3$ courses, they have $1$ possibility $=$ choosing $A$ and $B$ and $C$ . So there are $8$ options in total and since
|
I found it difficult to follow through your solution. Not necessarily because it was wrong but because it was too complicated. It was too easy to miss something. Here's something that could be among the problems leading to an incorrect answer. It appears as if your solution is okay with the case where there is $1$ student attending each of the three courses but no student who is taking exactly $2$ courses (say, A and B only). Remember, each region of the Venn diagram has to be non-empty. Post OP's second solution The problem with $\binom{n}{7} \cdot 7! \cdot 8^{n-7}$ is that it is over counting. Consider this. In the first step when you place one student in each region, you place $S_1$ (student 1) in the region for "A only" and in the second step $S_2$ joins $S_1$ . Let's call this arrangement 1 . Now in another arrangement, you place $S_2$ in "A only" in the first step and then $S_1$ voluntarily joins later. Let's call this arrangement 2 . If you think about it, everything else being c
|
|combinatorics|solution-verification|
| 0
|
How to show that $O(\sum_1^{\infty} n^{1-{2\sigma}} e^{-\delta n} \sum_1^{n/2} 1/r )=O({\delta}^{2 \sigma -2} \log \dfrac{1}{\delta})$?
|
The following lemma is from Titchmarsh's The Theory of the Riemann Zeta-Function: I have difficulties in getting both the estimates: 1- $O((\sum_1^{\infty} n^{-{\sigma}} e^{-\delta n})^2) = O((\int_1^{\infty} x^{-{\sigma}} e^{-\delta x}\,dx)^2)$ , so how to estimate the integral or any other way to get $O({\delta}^{2 \sigma -2})$ ? 2- $O(\sum_1^{\infty} n^{1-{2\sigma}} e^{-\delta n} \sum_1^{n/2} 1/r )= O(\sum_1^{\infty} n^{1-{2\sigma}} e^{-\delta n} \log n) = O(\int_1^{\infty} x^{1-{2\sigma}} e^{-\delta x} \log x \,dx )$ , so how to estimate the integral or any other way to get $O({\delta}^{2 \sigma -2} \log \dfrac{1}{\delta})$ ? WolframAlpha wasn't helpful.
|
Both of these estimates rely on the same basic idea, which is that after a basic substitution you get the bound multiplied by some integral that obviously converges to some value independent of $\delta$ . I'll look at the first one. The second is essentially the same. As you've noticed, the first estimate reduces to studying the (square of the) integral $$ \int_1^\infty t^{-\sigma} e^{-\delta t} dt = \int_1^\infty t^{1 - \sigma} e^{-\delta t} \frac{dt}{t}. $$ Perform the change of variables $t \mapsto \tfrac{t}{\delta}$ to see that this is $$ \int_\delta^\infty (t/\delta)^{1 - \sigma} e^{-t} \frac{dt}{t} = \delta^{\sigma - 1} \int_\delta^\infty t^{1 - \sigma} e^{-t} \frac{dt}{t}. $$ The behavior of the integral on the right depends on what $\sigma$ and $\delta$ are. If $\sigma < 1$ , then the integral converges: $$ \int_\delta^\infty t^{1 - \sigma} e^{-t} \frac{dt}{t} \leq \int_0^\infty t^{1 - \sigma}{e^{-t}} \frac{dt}{t} = \Gamma(1 - \sigma). $$ If $\delta \geq 1$ , then we also have $$ \int_\delt
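For the concrete case $\sigma=\tfrac12$ (my choice) the integral even has a closed form, which makes the $\delta^{\sigma-1}$ growth visible numerically:

```python
from math import erfc, gamma, pi, sqrt

# Substituting x = u^2 gives
#   int_1^oo x^(-1/2) e^(-delta x) dx = sqrt(pi/delta) * erfc(sqrt(delta)),
# so the integral grows like delta^(sigma - 1) = delta^(-1/2), with constant
# Gamma(1 - sigma) = Gamma(1/2) = sqrt(pi) in the limit delta -> 0+.
for delta in (1e-2, 1e-4, 1e-6):
    integral = sqrt(pi / delta) * erfc(sqrt(delta))
    ratio = integral / (delta ** -0.5 * gamma(0.5))
    print(delta, ratio)    # ratio increases toward 1
```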
|
|definite-integrals|asymptotics|analytic-number-theory|riemann-zeta|estimation|
| 0
|
Nim multiplication inverse
|
We have $$\operatorname{mex} S=\min \{n \in \mathbb{N} \mid n \notin S\},\qquad a \oplus b=\operatorname{mex}\left(\{a' \oplus b \mid a' < a\} \cup \{a \oplus b' \mid b' < b\}\right)$$ and $$a \otimes b=\operatorname{mex}\{(a' \otimes b) \oplus (a \otimes b') \oplus (a' \otimes b') \mid a',b' \in \mathbb{N},\ a' < a,\ b' < b\}.$$ Let $c \in \mathbb{N}^*$ ; prove that there exists a unique number $d \in \mathbb{N}$ such that $c \otimes d = 1$ . The operation $\otimes$ is called nim multiplication. I have proven its commutativity, associativity, and distributivity over addition. I am currently proving the existence of inverses. In the book "On Numbers and Games" by John H. Conway, on page 56, there is a mention of the formula for the inverse, but there is no mention of the proof, which is said to be similar to chapter 1. I have revisited chapter 1 but gained no insight.
|
Following Conway's convention, instead of using $\oplus, \otimes$ , and $\oslash$ for nimber operations, I will just use the ordinary $+,\cdot,$ and $\frac{x}y$ . We will need to use the fact that nimber multiplication has no zero divisors. That is, if $a$ and $b$ are nonzero nimbers, then $ab\neq 0$ . To see this, note that $ab\neq a'b+ab'+a'b'$ for all $a' < a$ and $b' < b$ , and take $a'=b'=0$ . In ONAG , Conway gives the following inductive definition for the nim multiplicative inverse. When $x\neq 0$ is a nimber, $$ \frac1x=\operatorname{mex}(Y),\text{ where } Y=\{0\}\cup\left\{\frac{1+(x+x')\cdot y'}{x'} \,\middle|\, 1\le x' < x,\ y'\in Y\right\}. $$ The way that I wrote it, $Y$ is circularly defined in terms of itself. To be rigorous, define a sequence of sets $Y_0,Y_1,Y_2,\dots$ inductively as follows: $$ \begin{align} Y_0&=\{0\},\\ Y_{n+1}&=Y_n\cup\left\{\frac{1+(x+x')\cdot y'}{x'} \,\middle|\, 1\le x' < x,\ y'\in Y_n\right\}. \end{align} $$ We then define $Y=\bigcup_{n=0}^\infty Y_n$ . Lemma: For all $y'\in Y$ , $xy'\neq 1$ . Proof: If $y'=0$ , then obviously $xy'=0\neq 1$ . Otherwise,
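The mex definitions can be implemented directly to check existence and uniqueness of inverses for small nimbers (using the standard fact that nim-addition coincides with bitwise XOR):

```python
from functools import lru_cache

def mex(s):
    """Minimal excludant: the least natural number not in s."""
    n = 0
    while n in s:
        n += 1
    return n

@lru_cache(maxsize=None)
def nim_mul(a, b):
    # a (x) b = mex{ (a' (x) b) + (a (x) b') + (a' (x) b') : a' < a, b' < b },
    # where nim-addition (+) is bitwise XOR.
    return mex({nim_mul(ap, b) ^ nim_mul(a, bp) ^ nim_mul(ap, bp)
                for ap in range(a) for bp in range(b)})

# Nimbers below 16 form the field GF(16), so each nonzero c < 16 has its
# inverse below 16 as well; check existence and uniqueness in that range.
inv = {}
for c in range(1, 16):
    ds = [d for d in range(16) if nim_mul(c, d) == 1]
    assert len(ds) == 1
    inv[c] = ds[0]
print(inv[2], inv[3])   # 3 2  (in GF(4), 2 (x) 3 = 1)
```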
|
|combinatorial-game-theory|
| 1
|
curves and projections to axis or coordinates
|
Let $X$ be a smooth surface and $C$ a smooth curve, over $\mathbb C$ . Let $P\in X$ , $c_0\in C$ , and $C'$ a smooth curve in $X\times C$ such that $x=(P,c_0)\in C'$ . Let $$p:X\times C\longrightarrow C$$ be the projection. Suppose that $C'$ is transverse to $p^{-1}(c_0)$ at $x$ . Question: does there exist neigbourhoods $U$ of $P$ and $V$ of $c_0$ such that, for every $v\in V$ , $p^{-1}(v)\cap U$ contains exactly one point?
|
The surface $X$ is irrelevant for the problem. You just have a morphism of smooth curves $f : C' \to C$ which is etale at point $c_0 \in C'$ and you are asking if it is birational. Indeed, for any such morphism you can choose an embedding $C' \subset X$ into a smooth surface and consider the product map $C' \hookrightarrow X \times C$ . Now, the answer to the question is negative. For example, you can take any finite covering $f:C'' \to C$ of degree larger than 1, take $c_0 \in C''$ to be a point away from the ramification divisor, and set $$ C' = C'' \setminus (f^{-1}(f(c_0)) \setminus \{c_0\}). $$ In other words, remove from $C''$ all points in the fiber over $f(c_0)$ except for $c_0$ itself.
|
|algebraic-geometry|
| 0
|
Klenke's proof of Slutzky's Theorem
|
In Klenke's book on probability he states Slutzky's theorem as: Let $X, X_1, X_2, \ldots$ and $Y_1, Y_2\ldots$ be random variables with values in $E$ . Assume $X_n \xrightarrow{\mathcal{D}} X$ and $d(X_n, Y_n) \xrightarrow{n\rightarrow \infty} 0$ in probability. Then $Y_n \xrightarrow{\mathcal{D}} X$ . Here $E$ is a metric space with metric $d$ , and $\xrightarrow{\mathcal{D}}$ means convergence in distribution, i.e. $X_n \xrightarrow{\mathcal{D}} X$ if the distributions $\mu_{X_n}$ of the $X_n$ converge weakly to the distribution $\mu_X$ of $X$ . By weak convergence he means $$\int f \,d\mu_{X_n} \rightarrow \int f \,d\mu_X$$ for all continuous and bounded functions $f$ . His proof is as follows: Let $f: E \rightarrow \mathbb{R}$ be bounded and Lipschitz continuous with constant $K$ . Then $$|f(x) - f(y)| \leq Kd(x,y) \wedge 2\|f\|_{\infty} \quad \text{for all } x, y \in E.$$ Dominated convergence yields $\limsup_{n \rightarrow \infty} \mathbf{E}[|f(X_n) - f(Y_n)|] = 0.$ Hence we have $$\lim
|
Let me know if you need additional clarity on the following. (1) Klenke proves weak convergence is equivalent to $\int f d\mu_n\to \int fd\mu$ for all $f$ Lipschitz and bounded (Portmanteau theorem). (2) $E[|f(X_n)-f(Y_n)|]\leq E[Kd(X_n,Y_n)\wedge 2\|f\|_\infty]\to 0$ by dominated convergence (version with convergence in probability) since $d(X_n,Y_n)\to^P 0$ and $Kd(X_n,Y_n)\wedge 2\|f\|_\infty$ is bounded. (3) If $\lim$ exists $\limsup$ exists and is equal to $\lim$ . $\limsup$ is used in the next passage since we don't know if $\lim$ exists yet for $|E[f(Y_n)]-E[f(X)]|$ . (4) $\limsup_n|E[f(X_n)]-E[f(X)]|=0\iff \lim_nE[f(X_n)]=E[f(X)]$ . For $f$ bounded Lipschitz, this is indeed one of the equivalent definitions of weak convergence in the setting of probability measures (just probability notation).
|
|real-analysis|probability-theory|measure-theory|convergence-divergence|probability-limit-theorems|
| 1
|
Klenke's proof of Slutzky's Theorem
|
In Klenke's book on probability he states Slutzky's theorem as: Let $X, X_1, X_2, \ldots$ and $Y_1, Y_2\ldots$ be random variables with values in $E$ . Assume $X_n \xrightarrow{\mathcal{D}} X$ and $d(X_n, Y_n) \xrightarrow{n\rightarrow \infty} 0$ in probability. Then $Y_n \xrightarrow{\mathcal{D}} X$ . Here $E$ is a metric space with metric $d$ , and $\xrightarrow{\mathcal{D}}$ means convergence in distribution, i.e. $X_n \xrightarrow{\mathcal{D}} X$ if the distributions $\mu_{X_n}$ of the $X_n$ converge weakly to the distribution $\mu_X$ of $X$ . By weak convergence he means $$\int f \,d\mu_{X_n} \rightarrow \int f \,d\mu_X$$ for all continuous and bounded functions $f$ . His proof is as follows: Let $f: E \rightarrow \mathbb{R}$ be bounded and Lipschitz continuous with constant $K$ . Then $$|f(x) - f(y)| \leq Kd(x,y) \wedge 2\|f\|_{\infty} \quad \text{for all } x, y \in E.$$ Dominated convergence yields $\limsup_{n \rightarrow \infty} \mathbf{E}[|f(X_n) - f(Y_n)|] = 0.$ Hence we have $$\lim
|
The subalgebra of Lipschitz functions is dense in $C_0(E)$ , provided $E$ is locally compact. Convergence of distribution of $X_n$ can be tested using $\mathbb{E}f(X_n)$ for $f \in C_0(E)$ . You need this to apply dominated convergence, which is used immediately after the inequality is stated; $0 \leq \mathbb{E}(|f(X_n) - f(X)|)$ , so if $\limsup_n \mathbb{E}(|f(X_n) - f(X)|) =0$ , it follows that $\lim_n \mathbb{E}(|f(X_n) - f(X)|) =0$ ; the author is choosing not to be presumptuous about the existence of this limit; Provided your limsup holds for all $f \in C_b(E)$ , they are equivalent
|
|real-analysis|probability-theory|measure-theory|convergence-divergence|probability-limit-theorems|
| 0
|
Existence of smallest period for continuous, periodic functions
|
I have found several partial answers to that question, all of which used more sophisticated mathematics than what I think is warranted by the question. Now, this may well be because my proofs below are not correct -- and so I thought to write them here and ask for feedback. First, suppose that we have a sequence $T_n \rightarrow T$ , such that for any $n$ , we have $f(x + T_n) = f(x)$ , where $f$ is a continuous function, and $x$ is any value in its domain. Then we must also have $f(x + T) = f(x)$ . To see why this is, let $x$ be fixed; it is immediate that the sequence $x + T_n$ converges to $x + T$ . Now, on the one hand, the sequence $f(x + T_n)$ is constant---all its terms are equal to $f(x)$ . On the other, because $f$ is continuous, we have $\lim_{n \rightarrow \infty} f (x + T_n) = f (\lim_{n \rightarrow \infty} x + T_n) = f (x + T)$ . Thus we conclude that $f(x + T) = f(x)$ . But what happens when $T = 0$ ? By the above result, we are left with a rather trivial condition, that
|
Your proof looks perfectly fine to me. Slightly rephrased, you're showing that the set $\{a+kT_n'\mid k\in\mathbb{Z},n\in\mathbb{N}\}$ is dense in $\mathbb{R}$ , and since $f$ takes the same value on all points in this set and continuous functions are fully determined by their values on a dense set, $f$ must be constant.
|
|periodic-functions|
| 1
|
Why is the commutant of a unital $*$-representation a von Neumann algebra?
|
If $\pi$ is a $*$ -representation (i.e. a $*$ -homomorphism) of a unital Banach $*$ -algebra, then $\pi(\mathscr{A})$ , is trivially self-adjoint, but why is $\pi(\mathscr{A})'$ a von Neumann algebra? This is used in the following proof that such a representation is irreducible iff $\pi(\mathscr{A})' = \mathbb{C}\mathbb{1}$ : https://planetmath.org/criterionforabanachalgebrarepresentationtobeirreducible
|
One possible definition of a von-Neumann algebra is that it is the commutant of a $*$ -closed set. By that definition, $\pi(\mathscr{A})'$ is von-Neumann. Another possible definition is that it's a $*$ -closed subalgebra of $B(H)$ that is topologically closed in one of several topologies. Say we use the weak topology. Let $\mathscr{S}$ be a $*$ -closed set. Let $\mathscr{M}=\mathscr{S}'$ . For $T\in\mathscr{M}$ and $S\in\mathscr{S}$ , $S^*$ is also in $\mathscr{S}$ , so $TS^*=S^*T$ . Adjointing this we get $ST^*=T^*S$ . This proves $\mathscr{M}$ is $*$ -closed. Now let $T_1,T_2\in\mathscr{M}$ , $\lambda\in\mathbb{C}$ , and $S\in\mathscr{S}$ . Since $T_1 S=S T_1$ and $T_2 S=S T_2$ then $\lambda T_1 S=S\lambda T_1$ and $S(T_1+T_2)=(T_1+T_2)S$ and $(T_1 T_2)S=T_1 S T_2=S T_1 T_2$ . This proves $\mathscr{M}$ is closed under sum, scalar multiplication and multiplication so it's a $*$ -subalgebra. Finally, topologically, assume $T_\alpha$ is a net in $\mathscr{M}$ that converges weakly to some $T\i
|
|representation-theory|von-neumann-algebras|
| 1
|
Prove that f(z) = 1/2(z + 1/z) is injective in V = {z in C : Im(z) > 0}
|
I'm struggling to prove that f(z) is an injective function with domain V where V = {z $\in$ $\mathbb{C}$ : Im(z) > 0}. Up until now, I suppose that there are two distinct points $a$ and $b$ in V where f(a) = f(b) and I'm trying to prove by contradiction that a = b. But instead of proving the injectivity, I end up proving surjectivity. Here's what I have: $\frac{a^2 + 1}{a} = \frac{b^2 + 1}{b}$ $a^2b + b = ab^2 + a$ $a(ab - 1) + b(1 - ab) = 0$ But that ends up being a proof for surjectivity since if b = 1/a, then f(a) = f(b). I'm not sure how else I could prove injectivity for this function.
|
You are almost there. From $a(ab-1)+b(1-ab)=0$ , you get $$(a-b)(ab-1)=0 $$ A product is $0$ if and only if one of the factors is $0$ . So either $a=b$ (and we are done), or: $ab=1$ . Now recall that $a,b\in V$ , i.e., $\operatorname{Im} a>0$ and $\operatorname{Im} b>0$ . Do you know how to find $b=\frac1a$ if $a=x+iy$ ?
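To make the last step concrete, here is a quick numerical sanity check (Python, written for this answer, not part of the original post): if $ab=1$ and $a$ lies in the upper half-plane, then $b=1/a=\bar a/|a|^2$ has imaginary part $-\operatorname{Im}(a)/|a|^2<0$ , so $b\notin V$ .

```python
import random

# If a*b = 1 with a in the upper half-plane V, then b = 1/a = conj(a)/|a|^2,
# so Im(b) = -Im(a)/|a|^2 < 0 and b cannot lie in V as well.
for _ in range(100):
    a = complex(random.uniform(-5, 5), random.uniform(0.01, 5))
    b = 1 / a
    assert a.imag > 0 and b.imag < 0
```

So the branch $ab=1$ is impossible when both points lie in $V$ , leaving only $a=b$ .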
|
|complex-analysis|complex-numbers|
| 0
|
Is there a continuous function $f$ such that $f( f(x))=-x^3$
|
Is there a continuous real valued function $f$ such that $f \circ f(x)=-x^3$ . If we set $f^0(x)=I(x)=x$ , $f^1(x)=f(x)$ , $f^2(x)=f\circ f(x)$ , ..., then a simple argument shows that $f^{2k}(x)=(-1)^kx^{3^k}$ . This may help to answer the question.
|
No such function exists. $g(x) = -x^3$ is a bijective (i.e. invertible, i.e. one-to-one and onto) function. As such, if $f$ is such that $g = f \circ f$ , then $f$ must also be bijective. However, this means that $f$ must be monotonically increasing or monotonically decreasing. In either case, we find that $f \circ f$ must be monotonically increasing, which does not hold for $g$ .
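A quick illustration of the monotonicity argument (Python, added here, not from the original answer): take the decreasing function $g(x)=-x^3$ itself in the role of $f$ ; its self-composition $x^9$ is strictly increasing, exactly as the argument predicts for any monotone $f$ .

```python
# Any monotone f (increasing or decreasing) has f(f(x)) increasing.
# Illustration: f(x) = -x**3 is decreasing, and f(f(x)) = x**9 is
# strictly increasing, so -x**3 itself can never be of the form f(f(x)).
f = lambda x: -x ** 3
xs = [i / 10 for i in range(-20, 21)]
ys = [f(f(x)) for x in xs]
assert all(a < b for a, b in zip(ys, ys[1:]))  # strictly increasing
```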
|
|calculus|analysis|
| 0
|
Predicate logic, is this statement true or false
|
I'm sorry if this is a noob question but I got this statement asked on a test: $$\forall\ x \exists y \,F(x,y) \rightarrow \exists y \forall x \, G(x,y)$$ where $$M = \lbrace \text{Natural numbers > 0} \rbrace \\ F(x,y): \, x < y\\ G(x,y): \, x > y$$ The statement is supposed to be false. I get that the left hand side is true but shouldn't the right also be true. Can $y = x$ , since this is the only way that I could see this being false? Otherwise $y$ could be $1$ and all the other values for $x$ would be greater than $y$ .
|
$\exists y \forall x \, G(x,y)$ says that we can pick a first number such that, whatever number we go on to pick second, the latter is larger than the former. Nothing bars us from picking the same number twice. So, as you put it, we can have $x = y$ . Or at least that's a sort-of-acceptable shorthand for an absolutely standard point about using quantified variables. In assigning values to variables in the standard semantical story about the quantifiers, different variables are allowed to take the same value.
|
|logic|
| 1
|
Predicate logic, is this statement true or false
|
I'm sorry if this is a noob question but I got this statement asked on a test: $$\forall\ x \exists y \,F(x,y) \rightarrow \exists y \forall x \, G(x,y)$$ where $$M = \lbrace \text{Natural numbers > 0} \rbrace \\ F(x,y): \, x < y\\ G(x,y): \, x > y$$ The statement is supposed to be false. I get that the left hand side is true but shouldn't the right also be true. Can $y = x$ , since this is the only way that I could see this being false? Otherwise $y$ could be $1$ and all the other values for $x$ would be greater than $y$ .
|
So, you are right: the LHS is true, and the RHS is false. $\forall x \exists y \, F(x,y)$ -- this is true, just take $y=x+1$ . $\exists y \forall x \, G(x,y)$ -- this is false: whatever $y$ you pick, $x=1$ is a counterexample, since $G(1,y)$ fails for every natural $y$ . So you have True → False, which is a false statement.
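A brute-force check over an initial segment of the positive naturals (Python, added for illustration), with the predicates read as $F(x,y): x<y$ and $G(x,y): x>y$ , an assumption consistent with the witnesses used in this answer:

```python
# Brute-force sanity check over the first N positive naturals,
# reading F(x, y) as x < y and G(x, y) as x > y.
N = 50
dom = range(1, N + 1)

# forall x exists y F(x, y): the witness y = x + 1 always works.
lhs = all(any(x < y for y in range(1, x + 2)) for x in dom)

# exists y forall x G(x, y): x = 1 already defeats every candidate y >= 1.
rhs = any(all(x > y for x in dom) for y in dom)

print(lhs, rhs)  # True False
```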
|
|logic|
| 0
|
Combinatorics: number of ways students can sign up to courses
|
There are $n$ distinguishable students. In how many ways can they sign up to courses $A, B, C$ if each of the students can choose either $0$ , $1$ , $2$ or $3$ courses and also (*) none of the $7$ parts of the Venn diagram visualizing the problem are empty. My solution: I assumed that (*) is equivalent to simply having to subtract the number of possibilities where students didn't choose certain courses and I'll use that fact later in the solution. Let's first count the total number of ways students can sign up to the courses. Let's consider a single student. If they choose to sign up to $0$ courses, they have $1$ possibility $=$ choosing $0$ courses. If they choose to sign up to $1$ course, they have $3$ possibilities $=$ $A$ or $B$ or $C$ . If they choose to sign up to $2$ courses, they have $3$ possibilities $=$ not choosing $A$ or $B$ or $C$ . If they choose to sign up to $3$ courses, they have $1$ possibility $=$ choosing $A$ and $B$ and $C$ . So there are $8$ options in total and since the $n$ students choose independently, that gives $8^n$ ways overall.
|
Assuming indistinguishable students: Then you're looking for integer solutions of $$x_1+x_2+ \ldots +x_8 = n$$ where $x_i\ge 1$ for $1\le i\le7$ and $x_8\ge0$ . So, we introduce a variable $y = x_8+1$ so that $$x_1+x_2+ \ldots +x_7+y = n+1$$ By sticks and stones , this has $\binom{n}{7}$ solutions. Assuming distinguishable students: We use PIE . $8^n$ ways to place the students anyhow, $7^n$ where one is empty (we care about 7 such regions), $6^n$ where two are empty (we care about $\binom{7}{2}$ such pairs), $5^n$ where three are empty (we care about $\binom{7}{3}$ such triplets) and so on. In the end, you will have $$\sum_{i=0}^7 (-1)^i\binom{7}{i}(8-i)^n$$
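The inclusion-exclusion count can be sanity-checked by brute force (Python sketch, added here, not part of the original answer). Enumerating all $8^n$ sign-ups is slow, so the sketch checks the same pattern on the two-course analogue: $4$ cells per student, with the $3$ nonempty Venn regions required to be hit.

```python
from itertools import product
from math import comb

def brute(n, courses):
    # Each student picks a subset of the courses (a "cell"); cell 0 = "no course".
    # Count sign-ups in which every nonempty cell is hit by at least one student.
    cells = 2 ** courses
    return sum(
        all(c in pick for c in range(1, cells))
        for pick in product(range(cells), repeat=n)
    )

def pie(n, courses):
    k = 2 ** courses - 1  # regions that must be nonempty
    return sum((-1) ** i * comb(k, i) * (k + 1 - i) ** n for i in range(k + 1))

for n in range(3, 8):
    assert brute(n, 2) == pie(n, 2)
```

The same `pie` with `courses=3` reproduces the $\sum_{i=0}^7 (-1)^i\binom{7}{i}(8-i)^n$ formula above.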
|
|combinatorics|solution-verification|
| 1
|
Reduced finitely generated k-algebras are isomorphic to $k^n$
|
We let $k$ denote a fixed algebraically closed field of characteristic zero. We let $R$ denote a reduced finitely generated $k$ -algebra where $\dim_kR = n$ as a $k$ vector space and $n$ is a positive integer. If we write $M$ to be the set of all proper maximal ideals of $R$ , how might one show that $|M| = n$ ? Namely, we are looking to show that $R \cong k^n$ as $k$ -algebras through this.
|
There are several strategies you could pick. Here's one. First, I claim that $R$ is of Krull dimension zero: we have $k\subset R$ is an integral extension (as for any $0\neq r\in R$ we have that there is some $a>0$ with $r^a\in\operatorname{Span}_k(1,r,r^2,\cdots,r^{a-1})$ by finite-dimensionality) and Krull dimension is preserved by integral extensions. This implies that every prime ideal is maximal, so the intersection of all maximal ideals is the nilradical of $R$ ; in particular, it is zero, as $R$ is reduced. Hence $R\cong R/0\cong R/\bigcap_{m\subset R} m \cong \prod_{m\subset R} R/m$ by the Chinese remainder theorem. Now by Zariski's lemma, the quotient of a finitely-generated $k$ -algebra by a maximal ideal is a finite algebraic extension of $k$ , which must be $k$ since $k$ is algebraically closed. So $R\cong \prod_{m\subset R} k$ , and counting dimensions we have that $R$ has exactly $\dim_k R$ maximal ideals.
|
|algebraic-geometry|maximal-and-prime-ideals|algebras|
| 0
|
What is wrong in this derivation of the time derivative of a flux?
|
There are several resources, including this question as well as for example problem 5-1 in Kovetz "The Principles of Electromagnetic Theory" which state that the time derivative of a flux can be given: $$ \frac{d}{dt} \int_{S(t)} \vec{A}\cdot \vec{n}ds = \int_{S(t)} (\frac{\partial \vec{A}}{\partial t} + (\nabla \cdot \vec{A})\vec{v} - \nabla \times (\vec{v} \times \vec{A})) \cdot \vec{n}ds. $$ And I have attempted to try and reproduce this result using a method similar to that of the Reynold's Transport Theorem Wikipedia Article . But I end up with some extra "shear" terms, and I would love to understand where I am going wrong. The derivation goes as follows. I want to examine the integral of the flux, and so we can rewrite our integral: $$ \int_{S(t)} \vec{A}\cdot\vec{n}ds = \int_{S(t)} \vec{A}(x, y, z, t) \cdot \vec{n} ds = \int_{S(t)} A_x(x(y, z, t), y, z, t) dydz - A_y(x, y(x, z, t), z, t) dxdz + A_z(x, y, z(x, y, t), t) dxdy $$ We can look at this sum of three integrals as three
|
I figured out the problem with my derivation! This will benefit from using index notation. We begin with a slight formalization of the problem to be able to apply ideas from continuum mechanics. Consider a vector field $\mathbf{A}(x, y, z, t)$ which is a function of space and time, and a surface $\mathcal{S}$ such that there exists some reference configuration $S_0$ and a differentiable, invertible map $\varphi : S_0 \times \mathbb{R} \rightarrow \mathcal{S} \times \mathbb{R}$ such that $\varphi(X, Y, Z, t) \in \mathcal{S}$ $\forall X, Y, Z \in S_0$ and our surface $\mathcal{S} \times \{t\} = \varphi(S_0, t)$ . In English, this is just a surface that has a smooth enough trajectory over time. With this definition, we are also able to define what we mean by $\mathbf{v}_i = \frac{\partial\varphi_i}{\partial t}$ . Here Roman letters correspond to spatial indices and derivatives. One final piece of convention is that capital letters will be used for coordinates and indices in our reference configuration.
|
|derivatives|vector-analysis|surface-integrals|
| 1
|
solution-verification | Show that if $a$ and $q$ are natural numbers and the number $(a+\sqrt{q})(a+\sqrt{q+1})$ is rational, then $q=0$.
|
the question Show that if $a$ and $q$ are natural numbers and the number $(a+\sqrt{q})(a+\sqrt{q+1})$ is rational, then $q=0$ . the idea for the number to be rational both factors have to be rational (*) because a is natural, it means that $q$ and $q+1$ should both be perfect squares, but they are also consecutive: if $q=k^2$ , then $q+1=k^2+1$ , while a perfect square following $k^2$ would be $k^2+2k+1$ . The equality would happen only if $2k=0 \Rightarrow k=0 \Rightarrow q=0$ . I'm not sure of the part I noted (*), because I think I should also demonstrate this fact, but I don't know how. Hope one of you can tell me if my idea is right and how can I improve my answer! Thank you!
|
Let $x:=(a+\sqrt q)(a+\sqrt{q+1})$ . If $x=0$ then $a=q=0$ . If $x\ne0$ , we can borrow the first step of @FredBernard's answer : if $x$ is rational, so is $y:=(a-\sqrt{q})(a-\sqrt{q+1})$ , because $xy=(a^2-q)(a^2-q-1)$ is an integer. Then, $\sqrt{q(q+1)}=\frac{x+y}2-a^2\in\Bbb Q$ hence $q(q+1)$ is a perfect square, hence so are its two coprime factors $q$ and $q+1$ , but $m=\sqrt q,n=\sqrt{q+1}\implies1=(n-m)(n+m)\implies n=1,m=0\implies q=0$ .
|
|rational-numbers|square-numbers|radical-equations|
| 0
|
How would you explain a tensor to a computer scientist?
|
How would you explain a tensor to a computer scientist? My friend, who studies computer science, recently asked me what a tensor was. I study physics, and I tried my best to explain what a tensor is, and I said something along the lines of "a mathematical object that is described in between the mappings of vector spaces", and he wasn't quite sure about that definition. I understood why, since it is a pretty wordy definition I gave, so I decided to give a more, down to earth, definition, describing a tensor as some array, with n-dimensions. However, he was still kind of confused by this. Can anyone synthesise a decent definition, tailored to a computer scientist's understanding?
|
For a computer scientist: A tensor is to a vector (or matrix) what a factory[1] is to an object. An n-dimensional tensor accepts the coordinate system of your screen, and returns an n-dimensional array that represents a geometric quantity that is independent of which coordinate system you chose. This geometric quantity is always the "same" quantity, but its representation changes because your coordinate system changed. So all tensors look like n-dimensional arrays (vectors, matrices, etc.), but to qualify as a tensor the numbers in that array need to change consistently when you choose a different coordinate system. Example: Consider the 1-dimensional tensor that returns a vector of length one pointing straight up from the bottom of your screen. If you choose the coordinate system where the y axis increases from the bottom of the screen to the top, then this tensor will return the representation $$[0, 1]$$ . However, if you choose the coordinate system where the y axis decreases from the top of the screen to the bottom, then this tensor will return the representation $$[0, -1]$$ .
|
|linear-algebra|vector-analysis|tensors|
| 0
|
Ramification of mod $\ell$ representation of elliptic curves
|
Let $E$ be an elliptic curve over $\mathbb{Q}$ , and let $p,\ell$ be two prime numbers. Consider the mod $\ell$ representation $$\rho:Gal(\mathbb{\overline{Q}}/\mathbb{Q})\to Aut(E[\ell])= GL(2,\ell).$$ I want to know the results [or references] about the ramification of $\rho$ at different primes $p$ . For example, can we know $\rho$ is unramified/tamely ramified/wildly ramified at $p$ , given conditions such that $E$ has good/multiplicative reduction at $\ell$ or $p$ ?
|
I might as well answer this. The first, and most important result here is the Néron-Ogg-Shafarevich criterion (see eg Serre-Tate, Good reduction of Abelian varieties – or just Silverman for elliptic curves, maybe the Advanced Topics but I’m not sure): Given an Abelian variety $A$ over a local field $K$ of residue characteristic $p$ , the following are equivalent: $A$ has good reduction; $A[m]$ is an unramified $G_K$ -module for infinitely many $m$ coprime to $p$ ; $A[m]$ is an unramified $G_K$ -module for all $m$ coprime to $p$ . In particular, if $A$ is an elliptic curve with potentially good reduction, the image of inertia on $A[m]$ (for $m \geq 3$ coprime to $p$ ) does not depend (up to group isomorphism) on the choice of $m$ . This is because in a $q$ -adic ring $R$ , $GL_2(R)$ does not contain a torsion element congruent to $1$ mod $q$ (or $4$ when $q=2$ ). The direct consequences are the following: if $p \neq \ell$ and $E$ has good reduction at $p$ , then $E[\ell]$ is unramified at $p$ .
|
|elliptic-curves|arithmetic-geometry|galois-representations|
| 1
|
Countable ordinals having a certain property
|
Which are the countable ordinals $\lambda$ such that, for every sequence of ordinals $\alpha_i\ (i\in\mathbb{N})$ such that 1. it is strictly increasing for all sufficiently large $i$ , 2. $\alpha_i<\lambda$ for all $i\in\mathbb{N}$ , 3. $\sup\{ \alpha_i:i\in\mathbb{N}\}=\lambda$ , there exists an ordinal $\beta<\lambda$ such that there exists a sequence $\gamma_i$ , with $\gamma_i+\beta=\alpha_i$ for all $i\in\mathbb{N}$ , verifying 1., 2. and 3. above. Furthermore, if such ordinals exist, how are they characterized? Thanks.
|
Let us say that an ordinal $\xi$ is very good if for any $(\alpha_i)_{i=1}^\infty \subset \xi=[0,\xi)$ such that there exists $i_0\in \mathbb{N}$ such that $(\alpha_i)_{i=i_0}^\infty$ is strictly increasing to $\xi$ , there exist $j_0\geqslant i_0$ , $\beta\in\xi$ , and $(\gamma_i)_{i=j_0}^\infty\subset \xi$ such that $(\gamma_i)_{i=j_0}^\infty$ strictly increases to $\xi$ and $\alpha_i=\gamma_i+\beta$ for all $i\geqslant j_0$ . Recall that a subset $I\subset \xi$ is called cofinal in $\xi$ if $\sup I=\sup\xi$ . The cofinality of $\xi$ is the minimum cardinality of a cofinal subset. An ordinal has cofinality $0$ iff it is $0$ , it has cofinality $1$ iff it is a successor. In order for there to exist a sequence $(\alpha_i)_{i=1}^\infty$ which is strictly increasing to $\xi$ , the cofinality of $\xi$ must be exactly $\aleph_0$ . So if $\xi$ is any ordinal with cofinality not equal to $\aleph_0$ , it will vacuously be very good, because there will be no $\alpha_i$ as in the hypothesis of the definition.
|
|set-theory|ordinals|
| 1
|
"If a vertex appears $k$ times in an eulerian circuit, then it must have degree $2k$" - Why?
|
I need help understanding this part of this proof from Graphs and Digraphs (7th ed): Theorem 3.1 . A connected multigraph is eulerian if and only if every vertex has even degree. The authors of this book consider a (multi)graph to be eulerian if it contains an eulerian circuit, i.e. a circuit that contains every edge of the graph. Proof . One direction is clear. If a vertex appears $k$ times in an eulerian circuit, then it must have degree $2k$ . Thus every vertex of an eulerian multigraph must have even degree. The problems that I am having with this proof are (1) I don't know if I am interpreting this proof correctly and (2) it does not seem rigorous enough for me to use in an exam. My intuition for this direction of this proof is that whenever you "pass through" a vertex in an eulerian circuit, you must be able to come out--so, for every edge incident to some vertex, there must exist another incident edge to that vertex. Again, however, I am doubting whether this is correct. I have
|
Break the edges of the graph into half-edges and start the circuit in the middle of an edge. Each visit to a vertex removes two half-edges from further use - the incoming one and the outgoing one. So the number of usable half-edges at the vertex decreases by $2$ . Its parity (even or odd) does not change. At the completion of the circuit, every half-edge has been used, so the number of usable half-edges at each vertex is now $0$ , an even number. Thus all the parities of usable half-edges at any time must have been even. This includes before the first visit, when it was the degree of the vertex.
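The half-edge bookkeeping can be made concrete with a tiny example (Python sketch, added for illustration): take the "bowtie" multigraph, two triangles glued at vertex $0$ , with the eulerian circuit $0,1,2,0,3,4,0$ .

```python
from collections import Counter

# An eulerian circuit in the bowtie graph (two triangles sharing vertex 0);
# consecutive entries of the closed walk are the edges traversed.
circuit = [0, 1, 2, 0, 3, 4, 0]

endpoints = Counter()
for a, b in zip(circuit, circuit[1:]):
    endpoints[a] += 1  # each traversed edge contributes one endpoint
    endpoints[b] += 1  # at each of its two vertices

# Vertex 0 appears k = 2 times (the start/end of the closed walk counts
# once) and has degree 2k = 4; every degree is even.
assert all(d % 2 == 0 for d in endpoints.values())
print(dict(endpoints))  # {0: 4, 1: 2, 2: 2, 3: 2, 4: 2}
```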
|
|graph-theory|proof-explanation|eulerian-path|
| 0
|
When does $Ax=0$ have non zero solutions?
|
Let $k > n$ . Let $A$ be an $n\times k$ real matrix. I am interested in the non-zero solutions of the system of $Ax=0$ . More precisely, I am looking for the most general conditions that $A$ must satisfy such that the system $Ax=0$ admits non zero solutions. Any idea ?
|
Nevzat's solution is kinda complicated. As suggested by Thorgott: We have $Rank(A)\leq\min\{n,k\}=n$ and by the Rank-nullity theorem $$\small Rank(A)+Null(A)=\text{#columns}=k.$$ It follows that $\small Null(A)=k-Rank(A)\geq k-n>0.$ Let $r$ be the number of pivots (or leading 1s) in the reduced row-echelon form of $A$ . Clearly, $r\leq n$ . Let $t$ be the number of parameters in the solution of the homogeneous system $AX=0.$ Then $$r+t=\text{#variables}=k.$$ It follows that $t=k-r\geq k-n>0.$
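A minimal concrete instance (Python, illustrative, not from the original answer): for $n=2<k=3$ , the guaranteed nonzero null vector can be exhibited directly.

```python
# For n < k the homogeneous system A x = 0 always has a nonzero solution.
# Tiny 2x3 instance: x = (1, -2, 1) lies in the null space of A.
A = [[1, 2, 3],
     [4, 5, 6]]
x = [1, -2, 1]  # a nonzero null vector

Ax = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
assert Ax == [0, 0] and any(x)
```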
|
|linear-algebra|matrices|systems-of-equations|
| 0
|
Explicit curve with unbounded torsion
|
I have been trying to experiment with curves. Recently I found a curve (in $\mathbb{R}^2$ ) with unbounded curvature. However, I am not able to find one curve with unbounded torsion, for various reasons: It has to be in $\mathbb{R}^3$ , which makes things more complicated. Torsion is not as easy to calculate as curvature; I'm aware of the formula $\displaystyle\frac{(\alpha''\times\alpha')\cdot \alpha'''}{k^2_{\alpha}}$ . I know such curves exist because of the Fundamental Theorem, but I would like to find one explicitly. Can anyone guide/help me?
|
If we plot Eqn={u Cos[t], u Sin[t], a t} in 3D we see a linear helicoid whose $u=0$ central line has constant torsion. The spine twists, as can be seen from the rotation of the binormal (not shown). However, if the third parametric coordinate $z$ (the red line) is kept as any monotonically increasing function, then its torsion also increases monotonically, as is evident in this particular parametrized example: Eq={u Cos[t],u Sin[t],1+Sinh[t]/5} ParametricPlot3D[Eq,{u,0,6},{t,0,1.8Pi},Mesh->{3,55}, PlotStyle->{Yellow},Axes->None,Boxed->False]
|
|differential-geometry|curves|
| 0
|
Do matrices really rotate and stretch vectors or is that definition incorrect?
|
I come from the applied math and statistics world, but I was talking to my friend who comes from the pure math and number theory world—in particular Galois representation theory. I mentioned something about the confusing definition of "matrices" in textbooks. Many textbooks talk about a matrix as the solution to a linear system of equations, or other abstract descriptions. The definition that I have always found useful, was the sense that matrices rotate and scale vectors through linear transformations. Now, coming from the pure math side, my friend said that this definition was not accurate. I am trying to paraphrase some of his comments, but he said that in higher dimensions, matrices can stretch and rotate vectors only locally . He also said that it depends on what vectors the matrix acts upon. His response threw me for a loop and I was trying to understand how to resolve his statements. First, is my understanding of matrices incorrect? It is fine if this idea of rotating and stretching is not exactly right.
|
When you ask what a matrix "really is", you are asking the wrong question. A matrix is, at its heart, just a big block of numbers arranged in a rectangular grid. That's it. The important question is: what can you do with a matrix? That's a much more interesting story! In fact matrices can be used in many different ways: They can store data. In many practical applications, this is their primary use. The "tax table" that says how much American taxpayers owe the Internal Revenue Service every year is just a matrix. So are population and census tables. A list of prices of a set of $n$ items at a grocery store over a period of $m$ weeks can be arranged as an $n \times m$ matrix. If you have such a matrix, you can look up information in it. A matrix can store the coefficients of a system of linear equations in $n$ variables. By performing row operations on such a matrix, we can solve the system of equations. An $n \times m$ matrix $A$ can be used to define a linear transformation $T: \mathbb{R}^m \to \mathbb{R}^n$ by the rule $T(x) = Ax$ .
|
|linear-algebra|abstract-algebra|matrices|representation-theory|
| 0
|
How can I resolve this graphically derived identity?
|
This problem arose when looking into the area of a dignomonic tiling. I found an identity for an arbitrary number, call it $\Phi$ , that is totally independent of the tiling itself. The result is given as follows: $$ 1+(\Phi^2-1)\sum_{k=0}^{m-1}\Phi^{2k}= \Phi^{2m} $$ Here, $m$ is the number of gnomonic pairs. To put this in perspective, the first term on the LHS represents the area of the tiling seed , $(\Phi^2-1)$ represents the area of the initial gnomonic pair, while the summation represents the area growth of successive gnomon pairs. It’s interesting that while this identity was derived for positive $\Phi$ , it is true for negative and complex $\Phi$ as well. Dignomonic tiling is dependent upon two parameters, $\phi_r$ and $\phi_s$ with the seed tile being $1/\phi_s\times 1$ and the growth rate is $\Phi=\phi_r \phi_s$ . To obtain the area, just multiply the above equation by $1/\phi_s$ , i.e., the area of the seed. You can find out more about dignomonic tiling here or here: M.J. Gaza
|
The enlightened way: Beginning with $$ 1+(\Phi^2-1)\sum_{k=0}^{m-1}\Phi^{2k}=\Phi^{2m} $$ It becomes apparent that the LHS can be expressed as a recurrence formula, that is, $$ f_k=f_{k-1}+(\Phi^2-1) \Phi^{2k-2}, \quad f_0=1 $$ By induction then, $$ \begin{align} &f_1=f_0+(\Phi^2-1) \Phi^0=1+(\Phi^2-1)= \Phi^2\\ &f_2= f_1+(\Phi^2-1) \Phi^2=\Phi^2+(\Phi^2-1) \Phi^2=\Phi^4\\ &\vdots\\ &f_k=\Phi^{2k} \end{align} $$ The easier way: Here we utilize the partial sum equation $$ \sum_{k=1}^n x^k=\frac{x}{x-1}(x^n-1) $$ Then $$ \begin{align} &1+(\Phi^2-1)\sum_{k=0}^{m-1}\Phi^{2k}= \Phi^{2m}\\ &1+(\Phi^2-1)+ (\Phi^2-1)\sum_{k=1}^{m-1}\Phi^{2k}= \Phi^{2m}\\ &\Phi^2+ (\Phi^2-1)\frac{\Phi^2}{\Phi^2-1}( \Phi^{2(m-1)}-1) =\Phi^{2m}\\ &\Phi^{2m}=\Phi^{2m}\\ \end{align} $$
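Either derivation can be spot-checked numerically; the following Python sketch (added here, not part of the original answer) tries real, negative and complex values of $\Phi$ .

```python
# Spot-check the telescoping identity
#   1 + (Phi^2 - 1) * sum_{k=0}^{m-1} Phi^{2k} = Phi^{2m}
# for real, negative and complex Phi.
for Phi in (1.7, -0.4, 2 + 1j):
    for m in (1, 3, 6):
        lhs = 1 + (Phi ** 2 - 1) * sum(Phi ** (2 * k) for k in range(m))
        assert abs(lhs - Phi ** (2 * m)) < 1e-9
```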
|
|solution-verification|recurrence-relations|tiling|
| 0
|
Combinatorial explanation of Catalan asymptotics
|
It is well-known that the Catalan numbers have an asymptotic approximation $$C_n\sim \frac{4^n}{\sqrt{\pi}n^{3/2}}.$$ I am curious about combinatorial interpretations of this formula, rather than a formal derivation (for example using the Stirling approximation.) For example, why would I expect $4^n$ and $n^{3/2}$ to be in this formula?
|
Pick your favourite combinatorial proof of the fact that $C_n=\frac1{n+1}\binom{2n}n$ (unless that’s already how you define the Catalan numbers). The central binomial coefficient $\binom{2n}n$ counts the simple random walks that start from the origin and end up at the origin after $2n$ steps. The simple random walk spreads out as $\sqrt n$ , approximating a Gaussian by the central limit theorem, and thus the probability to end up at the origin after $2n$ steps decays as $\frac1{\sqrt n}$ . There are $4^n$ walks of length $2n$ in all, so $\binom{2n}n\sim\frac{4^n}{\sqrt n}$ . The remaining factor $\frac1{n+1}\sim\frac1n$ then turns this into $C_n\sim\frac{4^n}{n^{3/2}}$ , up to the constant $\frac1{\sqrt\pi}$ .
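A quick numerical check of the resulting asymptotic (Python, illustrative): the ratio $C_n\big/\frac{4^n}{\sqrt\pi\,n^{3/2}}$ creeps up toward $1$ .

```python
from math import comb, pi, sqrt

# Ratio C_n / (4**n / (sqrt(pi) * n**1.5)); it should approach 1 from below.
ratios = []
for n in (10, 100, 1000):
    c = comb(2 * n, n) // (n + 1)              # exact Catalan number
    ratios.append((c / 4 ** n) * sqrt(pi) * n ** 1.5)
print(ratios)
```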
|
|combinatorics|asymptotics|catalan-numbers|
| 1
|
Explaining this proof without words for $\frac{\pi}{4} = 2 \arctan\frac{1}{3} + \arctan\frac{1}{7}$
|
Proof of Hutton's 1776 formula: Why does the figure establish this identity? $$ \frac{\pi}{4} = 2\arctan\left(\frac{1}{3}\right) + \arctan\left(\frac{1}{7}\right) $$
|
Let $\alpha$ be the angle $GAB$ , let $\beta$ be the angle $HAB$ , and let $\gamma$ be the angle $CAB$ . Notice that $\alpha+\beta+\gamma=\frac{\pi}{4}$ since the triangle $ACD$ is right and $\tan(A)=5/5$ . Notice also that the angles $AHC$ and $AHG$ are $\frac{\pi}{2}$ since $AH$ bisects the line segment $CG$ and the lengths of $AC$ and $AG$ are equal. With the Pythagorean theorem and basic trigonometry definitions, $\tan\alpha=\frac17$ , $\tan\beta=\frac{\sqrt 5}{3\sqrt 5}$ , and $\tan\gamma=\frac{\sqrt 5}{3\sqrt 5}$ . Hence $\alpha=\arctan(1/7)$ and $\beta=\gamma=\arctan(1/3)$ . It follows that $$\frac{\pi}{4}=\alpha+\beta+\gamma=2\arctan\frac13+\arctan\frac17.$$
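The identity is also easy to confirm in floating point (Python, added for illustration):

```python
from math import atan, pi

# Hutton's 1776 identity: pi/4 = 2 arctan(1/3) + arctan(1/7).
lhs = 2 * atan(1 / 3) + atan(1 / 7)
assert abs(lhs - pi / 4) < 1e-12
```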
|
|geometry|proof-explanation|proof-without-words|
| 1
|
Limit of sequences in $\ell^2$
|
Let $\ell^2$ be the space of square-summable sequences over $\mathbb{C}$ . Let $h \in \ell^2$ such that $\forall n \in \mathbb{N}: h_n \neq 0$ where $h_n$ is the nth number in $h$ . Let $\{v_m\}_{m \in \mathbb{N}}$ be a sequence of points in $\ell^2$ such that $v_m \to h$ in norm. Let $\{u_m\}_{m \in \mathbb{N}}$ be a sequence of points in $\ell^2$ such that $u_m \to h$ in norm. We denote by $v_{m,n}$ the nth number in $v_m$ and by $u_{m,n}$ the nth number in $u_m$ . I would like to know if it is true that $$ \lim_{m \to \infty} \dfrac{v_{m,m}}{u_{m,m}} =1 $$ Thanks.
|
Sequence $h = (1,\frac{1}{2},\frac{1}{3},...)$ is in $\ell^2$ . We can approach this sequence in $\ell^2$ with both $v_m = h + \frac{1}{m}(1,1,....,1,0,0,...)$ (exactly the $m$ first entries are $1$ ) and similarly $u_m = h + \frac{1}{m^2}(1,1,...,1,0,0,...)$ (similarly here), since $$ \|h-v_m\|_2^2 = \sum_{k=1}^m \frac{1}{m^2} = m \cdot \frac{1}{m^2} =\frac{1}{m} \to 0$$ and $$ \|h-u_m\|_2^2 = \sum_{k=1}^m \frac{1}{m^4} = m \cdot \frac{1}{m^4} = \frac{1}{m^3} \to 0.$$ But, we see that $v_{m,m} = \frac{2}{m}$ , $u_{m,m} = \frac{1}{m} + \frac{1}{m^2}$ giving us $\frac{v_{m,m}}{u_{m,m}} = \frac{2}{1+\frac{1}{m}} \to 2$ . You can easily modify the above to get any value as the limit.
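The diagonal ratio in this counterexample can be computed directly (Python, illustrative):

```python
# v_{m,m} = 2/m and u_{m,m} = 1/m + 1/m**2, so the ratio 2/(1 + 1/m) -> 2.
for m in (10, 1000, 10 ** 6):
    ratio = (2 / m) / (1 / m + 1 / m ** 2)
    print(m, ratio)
```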
|
|functional-analysis|hilbert-spaces|
| 1
|
Cokernel of chain map is complex of cokernels.
|
$\newcommand{\coker}{\text{coker}}$ $\newcommand{\Ima}{\text{Im}}$ I've started to learn some homological algebra and have been struggling with verifying that if $\mathcal{A}$ is an abelian category then the category of chain-complexes in $\mathcal{A}$ is also an abelian category. In particular, showing if $f_\bullet:A_\bullet \to B_\bullet$ is a chain map then $\coker (f_\bullet) = \dots \to \coker f_{n+1} \to \coker f_n \to \dots$ . The diagram I had in mind was: Where $\pi_n:B_n \to B_n/\Ima(f_n)$ are the natural projections, and I'd like to fill in the maps $\partial_n$ on the bottom row. There's the obvious guess that you would induce the map $\tilde{d_n^B}: B_n/\Ima f_n \to B_{n-1}$ and then project onto $\coker f_{n-1}$ , but to have the induced map I would need that $\Ima f_n \subseteq \ker d_n^B$ , and I can't seem to conclude that's the case. I know everything in sight commutes, but $d_n^B \circ f_n = 0$ even seems like perhaps a wrong condition to me, because by commutativity of
|
This is not only part of a proof that $\mathsf{Ch}(\mathscr{A})$ is Abelian (by in particular showing it has all cokernels) but it is part of a proof that the evaluation functors $e_n:\mathsf{Ch}(\mathscr{A})\to\mathscr{A}$ are exact (by preserving all cokernels). We are given the zero complex as a clear zero object. It is clear a map of chain complexes is zero (by definition, factors through the zero object) if and only if every single component is zero. Given $f:A_\bullet\to B_\bullet$ , $g:B_\bullet\to C_\bullet$ , $gf$ is then zero if and only if $g_nf_n$ is zero for all $n$ . That is to say, if and only if $g_n$ factors through $\operatorname{coker}f_n$ for every $n$ . There are then two things to check: firstly, that there is a sensible notion of differential making $\operatorname{Cok}_\bullet f$ a chain complex such that $\operatorname{coker}f:B_\bullet\to\operatorname{Cok}_\bullet f$ is a chain map, secondly that the induced map $\operatorname{Cok}_\bullet f\to C_\bullet$ which
|
|homological-algebra|abelian-categories|
| 1
|
If set theory only contains the notions of “set” and “is a member of” as primitives, how can an axiom of set theory refer to a “formula”?
|
It's said that the primitive concepts of set theory are those of "set" and "membership", then all axioms of set theory must begin with "Let $A$ be a set" or "Let $x\in A$ ", but they don't. For example, let us consider the subset axiom: Subset Axiom. Let $\varphi(x)$ be a formula and let $A$ be a set. Then there exists a set $S$ such that for all sets $x$ we have that $x\in S$ if and only if $x\in A$ and $\varphi(x)$ . This axiom begins with "Let $\varphi(x)$ be a formula" but "formula" is not a primitive concept, then, for having sense, it must be a defined concept, but as far as I see we can't define "formula" in terms of "set" and "membership"; if I am wrong, please tell me. Now, in the case where we can't define "formula", how is the subset axiom justified from a logical point of view? In many books, when the subset axiom is introduced, the statement "Let $\varphi(x)$ be a formula" is used informally. I will appreciate it if you recommend a book on set theory where the concept of "formula" is made precise.
|
Formulas and first-order logic come "below" set theory, in that axiomatic set theories such as ZFC rest on top of first-order logic. In each language of first-order logic, there is a definition for what a formula is, which is a string of characters $($ , $)$ , $\land$ , $\lnot$ , $\lor$ , variable symbols, relation symbols, and predicate symbols, which follows the rules for being a wff of that logic. But this is conceptually "below" set theory, and more general, as it also applies for formalizing what a formula is w.r.t. some other axiomatic systems like Peano arithmetic, or the group axioms. About set theory resting on FOL, this answer sounds relevant.
|
|logic|set-theory|axioms|meta-math|
| 0
|
This is my proof that $e^{iz} +1 - i = 0$ has no solutions
|
So $e^{iz} = e^{ix-y} = \frac{e^{ix}}{e^y} = \frac{\cos(x)+i\sin(x)}{e^y} = \frac{\cos(x)}{e^y} +\frac{i\sin(x)}{e^y} = -1 + i$ Then $\frac{1}{e^y} \cos(x) = -1$ $\frac{1}{e^y} \sin(x) = 1$ With some basic manipulation, we get $\tan(x) = -1$ which gives us $x = \frac{3\pi}{4} + k$ , for some integer $k$ Now, since $\frac{1}{e^y}$ is always positive, for $\frac{1}{e^y} \cos(x) = -1$ , $\cos(x)$ has to be negative and for $\frac{1}{e^y} \sin(x) = 1$ , $\sin(x)$ has to be positive BUT at $x = \frac{3\pi}{4} + k$ , $\sin(x)$ and $\cos(x)$ are either both positive or negative so there are no solutions. Is this valid? I like how I answered this question but it may be wrong
|
It does have solutions, in fact. One can use the complex logarithm and get a value for $\ln(i-1)$ . Mathematica was also able to produce a nice looking value for this: $$\ln(i-1)=\frac12\ln2+\frac{3\pi}{4}i.$$ To avoid dealing with branch cuts and multiple values, a more correct statement is actually just $$e^{\frac12\ln2+\frac{3\pi}{4} i}=i-1.$$ Using this you can find the solution $z=\frac{3\pi}{4}-\frac{\ln 2}{2}i$ .
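The claimed solution is easy to verify numerically (Python, added for illustration):

```python
import cmath
from math import log, pi

# Candidate solution from the complex logarithm: z = 3*pi/4 - (ln 2 / 2) * i.
# Then e^{iz} = e^{ln2/2} * e^{3*pi*i/4} = sqrt(2) * (-1 + i)/sqrt(2) = -1 + i.
z = 3 * pi / 4 - (log(2) / 2) * 1j
residual = cmath.exp(1j * z) + 1 - 1j
assert abs(residual) < 1e-12
```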
|
|complex-analysis|solution-verification|
| 1
|
This is my proof that $e^{iz} +1 - i = 0$ has no solutions
|
So $e^{iz} = e^{ix-y} = \frac{e^{ix}}{e^y} = \frac{\cos(x)+i\sin(x)}{e^y} = \frac{\cos(x)}{e^y} +\frac{i\sin(x)}{e^y} = -1 + i$ Then $\frac{1}{e^y} \cos(x) = -1$ $\frac{1}{e^y} \sin(x) = 1$ With some basic manipulation, we get $\tan(x) = -1$ which gives us $x = \frac{3\pi}{4} + k$ , for some integer $k$ Now, since $\frac{1}{e^y}$ is always positive, for $\frac{1}{e^y} \cos(x) = -1$ , $\cos(x)$ has to be negative and for $\frac{1}{e^y} \sin(x) = 1$ , $\sin(x)$ has to be positive BUT at $x = \frac{3\pi}{4} + k$ , $\sin(x)$ and $\cos(x)$ are either both positive or negative so there are no solutions. Is this valid? I like how I answered this question but it may be wrong
|
As a preliminary remark, your formula for $x$ should be $x = (\frac{3}{4}+k)\pi$ , not $\frac{3}{4}\pi + k$ . Assuming that's what you actually meant, you are still somehow visualizing the angle $x$ incorrectly. It is not true that for $x=(\frac{3}{4}+k)\pi$ the sine and cosine have the same sign. Quite the opposite is true: If $k$ is even then $x=(\frac{3}{4}+k)\pi$ is a 2nd quadrant angle, $\cos(x)$ is negative, and $\sin(x)$ is positive. If $k$ is odd then $x=(\frac{3}{4}+k)\pi$ is a 4th quadrant angle, $\cos(x)$ is positive, and $\sin(x)$ is negative. Once that is all straightened out, your solution method is fine. You'll get solutions $x+iy$ of the form $x=(\frac{3}{4}+k)\pi$ for all even integers $k$ , and the value of $y$ will always be the same, namely $y=-\ln(2/\sqrt{2}) = -\ln(\sqrt{2}) = -\ln(2)/2$ .
|
|complex-analysis|solution-verification|
| 0
|
This is my proof that $e^{iz} +1 - i = 0$ has no solutions
|
So $e^{iz} = e^{ix-y} = \frac{e^{ix}}{e^y} = \frac{\cos(x)+i\sin(x)}{e^y} = \frac{\cos(x)}{e^y} +\frac{i\sin(x)}{e^y} = -1 + i$ Then $\frac{1}{e^y} \cos(x) = -1$ $\frac{1}{e^y} \sin(x) = 1$ With some basic manipulation, we get $\tan(x) = -1$ which gives us $x = \frac{3\pi}{4} + k$ , for some integer $k$ Now, since $\frac{1}{e^y}$ is always positive, for $\frac{1}{e^y} \cos(x) = -1$ , $\cos(x)$ has to be negative and for $\frac{1}{e^y} \sin(x) = 1$ , $\sin(x)$ has to be positive BUT at $x = \frac{3\pi}{4} + k$ , $\sin(x)$ and $\cos(x)$ are either both positive or negative so there are no solutions. Is this valid? I like how I answered this question but it may be wrong
|
Here is a nonconstructive argument showing that this result is false: if $f \colon \mathbb{C} \rightarrow \mathbb{C}$ is an entire function (holomorphic on the entire complex plane), then the image of $f$ is either all of $\mathbb{C}$ , the complex plane minus a point, or a singleton, i.e., any nonconstant entire function can miss at most one point in its image. This is known as Picard's Theorem . The map $f(z) = e^{iz}$ is a nonconstant entire function missing the point $0 \in \mathbb{C}$ , and so it must hit the point $i-1$ . This is equivalent to the equation $$ e^{iz}+1-i=0 $$ having a solution. In general, if $c \in \mathbb{C} \setminus \{0\}$ , then $e^{iz} + c = 0$ has a solution.
|
|complex-analysis|solution-verification|
| 0
|
Best fit line to a set of $(x,y)$ points based on sum of squared perpendicular distances to the line
|
Given a set of points $ P_i = (x_i,y_i), \ i =1,2,\dots, N $ . I want to find the best fitting line to these points. The equation of the line is $$ n^T ( r - r_0 ) = 0 $$ where $n$ is a unit vector. I want to find $r_0$ and $n$ such that the sum of squared perpendicular distances from $(x_i, y_i)$ to the line is minimized. That is, I want to minimize $ f = \displaystyle \sum_{i=1}^N d_i^2 $ where $d_i = | n^T (P_i - r_0) | $ My Attempt: is detailed in my answer below. Your comments, hints, and solutions are highly appreciated.
|
In the hope you don't mind if I do not use vectors but apply the LS method to $y=mx+b$ and minimize the sum of the squared perpendicular distances $\displaystyle\sum_k \frac{(y_k - (mx_k +b))^2}{1+m^2}\Rightarrow \min$ . That is, minimize $\displaystyle\frac{\displaystyle m^2\left(\sum x_{k}^2\right)+2m b \left(\sum x_{k}\right)-2m\left(\sum x_{k}y_{k}\right)+b^2 n-2b\left(\sum y_{k}\right)+\sum y_{k}^2}{m^2+1}$ . Setting $\displaystyle\frac{\partial}{\partial b}=0$ results in $\displaystyle b=\frac{\displaystyle-(\sum x_k)\cdot m+\sum y_k}{n}=\bar{y}-m\bar{x}$ . Substituting $b$ back into the ansatz results in (equation $IV$ ) $\displaystyle\frac{\displaystyle m^2 n\left(\sum x_k^2\right)-m^2\left(\sum x_k\right)^2-2mn\left(\sum x_ky_k\right)+2m\left(\sum x_k\right)\left(\sum y_k\right)+ n\left(\sum y_k^2\right)-\left(\sum y_k\right)^2}{n\left(m^2+1\right)}$ . Fun starts here: $\displaystyle\frac{\partial}{\partial m}=0$ reduces, after clearing the denominator and writing $S_{xx}=\sum (x_k-\bar x)^2$ , $S_{yy}=\sum (y_k-\bar y)^2$ , $S_{xy}=\sum (x_k-\bar x)(y_k-\bar y)$ , to the quadratic $$S_{xy}\,m^2+(S_{xx}-S_{yy})\,m-S_{xy}=0,$$ so $$m=\frac{(S_{yy}-S_{xx})\pm\sqrt{(S_{yy}-S_{xx})^2+4S_{xy}^2}}{2S_{xy}}.$$ Of the two roots, one minimizes and one maximizes the sum of squared perpendicular distances (the two directions are orthogonal), so take the root giving the smaller objective value; then $b=\bar y-m\bar x$ .
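A small Python sketch of this procedure (hypothetical helper name `tls_line`; it solves the quadratic in $m$ obtained from the $m$-derivative and keeps the root that minimizes the objective):

```python
from math import sqrt

def tls_line(xs, ys):
    """Orthogonal (total) least-squares line y = m*x + b:
    minimizes the sum of squared perpendicular distances."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    if sxy == 0:
        raise ValueError("degenerate case: axis-aligned or isotropic data")
    # Critical points: sxy*m^2 + (sxx - syy)*m - sxy = 0
    disc = sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)
    roots = [((syy - sxx) + disc) / (2 * sxy), ((syy - sxx) - disc) / (2 * sxy)]
    # One root minimizes, the other maximizes; keep the minimizer.
    obj = lambda m: (syy - 2 * m * sxy + m ** 2 * sxx) / (1 + m ** 2)
    m = min(roots, key=obj)
    return m, ybar - m * xbar

m, b = tls_line([0, 1, 2], [1, 3, 5])  # points exactly on y = 2x + 1
print(m, b)  # -> 2.0 1.0
```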
|
|optimization|analytic-geometry|regression|
| 1
|
Getting my concepts confused with sampling
|
Let me first set the scene. This is how we learned the variables and their names and symbols. We have the sugar content of 77 different cereal brands. It first asks "estimate the true average sugar content of these 77 cereal brands using a SRS of size 10". These next two questions are what confuse me. What is the true variance of the estimator above? Using this sample data, estimate the true variance of this estimator. I am really confused about the difference between these two questions. I thought the first one is asking for $\mathrm{var}(\bar{y})$ and the second one is asking for $\widehat{\mathrm{var}}(\bar{y})$ . If someone could please explain the difference, preferably in pretty simple terms, since more will just confuse me further.
|
The "true" value of anything is the value you calculate if you have perfect information about the entire population. When you have to calculate a value from a specific sample and use that to infer the true value, then that's the "estimate". The true variance of the estimator is $\mathrm{var}(\bar{y})$ - notice that its formula uses $\sigma^2$ which is a property of the whole population. The estimator of the variance is $\widehat{\mathrm{var}}(\bar{y})$ - notice that it uses $s^2$ , which is a value that depends on the specific sample that has been taken.
|
|sampling-theory|
| 0
|
Is the induced map on the quotient space bounded?
|
Let $T: V \rightarrow W$ be a continuous/bounded linear map between Banach spaces. Let $J \subseteq \ker(T)$ be a closed subspace. Then I know that this induces a unique linear map on the quotient $\bar{T}:V/J \rightarrow W$ s.t. $\bar{T}\circ \pi=T$ . I also know from topology that $\bar{T}$ is continuous w.r.t. the quotient topology on $V/J$ . We can also define a norm on $V/J$ via: $$ \forall v \in V: \ \|v+J\| := \inf_{w \in J} \|v+w\| $$ In particular, $V/J$ is itself again a Banach space with this norm. I am now wondering whether $\bar{T}$ is always continuous w.r.t. this norm (or is the topology induced by this norm exactly the quotient topology?). I managed to come up with this proof and was wondering if it is correct or if there is another, faster way to see this (perhaps by showing that the norm topology is the quotient topology): \begin{align*} &\sup \{\|\bar{T}(v+J)\|\ : \ \|v+J\| \leq 1\} \\ = & \sup \{\|T(v)\|\ : \ \|v+J\| \leq 1\} \end{align*} Now by definition of the infimum ...
|
The quotient topology is the topology induced by the quotient norm. The quotient topology is such that $U\subset V/J$ is open iff $Q^{-1}(U)$ is open, where $Q:V\to V/J$ is the quotient map $Qx=x+J$ . It suffices to show that $U\subset V/J$ is norm-open iff $Q^{-1}(U)$ is. The quotient map satisfies $\|Q\|\leqslant 1$ , so it's Lipschitz continuous, and $U\subset V/J$ open implies $Q^{-1}(U)$ is open. On the other hand, if $Q^{-1}(U)$ is open and $U$ is non-empty, fix $y\in U$ . Fix $x\in Q^{-1}(U)$ such that $Qx=y$ . Since $Q^{-1}(U)$ is open, there exists $r>0$ such that $x+rB_V\subset Q^{-1}(U)$ . Here $B_V=\{y\in V:\|y\|<1\}$ is the open unit ball. I claim that $Q(x+rB_V)=y+rB_{V/J}$ , so that $$U\supset Q(Q^{-1}(U))\supset Q(x+rB_V)=y+rB_{V/J}.$$ Thus the open ball $y+rB_{V/J}$ is contained in $U$ , and $U$ is open. Thus $U\subset V/J$ is open in the quotient topology iff it's open in the quotient norm topology. To see why $Q(x+rB_V)=y+rB_{V/J}$ , first fix $u\in V$ with $\|u\|<r$ . Then $Q(x+u)=y+Qu$ with $\|Qu\|\leqslant\|u\|<r$ , so $Q(x+rB_V)\subset y+rB_{V/J}$ . Conversely, given $w\in rB_{V/J}$ , the definition of the quotient norm provides $u\in V$ with $Qu=w$ and $\|u\|<r$ , so $y+w=Q(x+u)\in Q(x+rB_V)$ .
|
|general-topology|functional-analysis|solution-verification|banach-spaces|quotient-spaces|
| 1
|
A question on Lie Groups
|
Let $G$ be a Lie group. (a) Let $m : G \times G \to G$ denote the multiplication map. Using Proposition 3.14 of John Lee's Smooth Manifolds (The Tangent Space to a Product Manifold) to identify $T_{(e,e)}(G \times G)$ with $T_e(G) \oplus T_e(G)$ , show that: (a) the differential $dm_{(e,e)}(X,Y)=X + Y$ (b) Let $i: G \to G$ denote the inversion map. Show that $di_e: T_eG \to T_eG$ is given by $di_e(X)= -X$ . This is Problem 7-2 of John Lee's Smooth Manifolds. I already solved this and want to see if my solution is correct or not. Thanks. Here is my solution: We have: $dm_{(e,e)}(X,Y) = dm_{(e,e)}(X,0)+dm_{(e,e)}(0,Y)=d(m^{1})_e(X)+d(m^{2})_e(Y)$ , where $m^1:G \to G $ is defined by $x \mapsto m(x,e)$ and $m^2:G \to G $ is defined by $y \mapsto m(e,y)$ ; but $m^1 = m^2 = Id_G$ , so $dm_{(e,e)}(X,Y)=X+Y$ . This proves (a). Put $ n= m \circ p \circ s$ , where $s :G \to G \times G$ is defined by $x \mapsto (x,x)$ and $p:G \times G \to G \times G$ is defined by $(x,y) \mapsto (x,i(y))$ ; then $n$ is a constant map.
|
Here is a kinda awkward way to solve the problem. For (a), we know that we have an isomorphism $\alpha : T_{(e,e)}(G \times G) \to T_eG \oplus T_eG$ given by $$ \alpha(v) = \Big(d(\pi_1)_{(e,e)}(v), d(\pi_2)_{(e,e)}(v) \Big), $$ in which $\pi_1, \pi_2 : G \times G \to G$ are the canonical projections $\pi_1(g,h) = g, \,\pi_2(g,h) = h$ . Let $\mathtt{i}, \mathtt{j} : G \hookrightarrow G \times G$ be the embeddings $\mathtt{i}(g)=(g,e), \, \mathtt{j}(g)=(e,g)$ . Then the inverse of $\alpha$ is $\beta : T_eG \oplus T_eG \to T_{(e,e)}(G \times G)$ given by $$ \beta(X,Y) = d\mathtt{i}_e(X) + d\mathtt{j}_e(Y). $$ The map written $dm_{(e,e)}$ in the problem is the composition of $dm_{(e,e)} : T_{(e,e)}(G \times G) \to T_eG$ with the isomorphism $\beta$ . Denote $\gamma := dm_{(e,e)} \circ \beta $ : $$ \gamma : T_eG \oplus T_eG \xrightarrow{\beta}T_{(e,e)}(G\times G) \xrightarrow{dm_{(e,e)}} T_eG. $$ With this notation, we need to show that $\gamma(X,Y) = X + Y$ . We know that $m \circ \mathtt{i} = \text{Id}_G$ and $m \circ \mathtt{j} = \text{Id}_G$ , so $$ \gamma(X,Y) = dm_{(e,e)}\big(d\mathtt{i}_e(X)+d\mathtt{j}_e(Y)\big) = d(m\circ \mathtt{i})_e(X) + d(m\circ \mathtt{j})_e(Y) = X + Y. $$
|
|differential-geometry|lie-groups|smooth-manifolds|
| 0
|
This is my proof that $e^{iz} +1 - i = 0$ has no solutions
|
So $e^{iz} = e^{ix-y} = \frac{e^{ix}}{e^y} = \frac{\cos(x)+i\sin(x)}{e^y} = \frac{\cos(x)}{e^y} +\frac{i\sin(x)}{e^y} = -1 + i$ Then $\frac{1}{e^y} \cos(x) = -1$ $\frac{1}{e^y} \sin(x) = 1$ With some basic manipulation, we get $\tan(x) = -1$ which gives us $x = \frac{3\pi}{4} + k$ , for some integer $k$ Now, since $\frac{1}{e^y}$ is always positive, for $\frac{1}{e^y} \cos(x) = -1$ , $\cos(x)$ has to be negative and for $\frac{1}{e^y} \sin(x) = 1$ , $\sin(x)$ has to be positive BUT at $x = \frac{3\pi}{4} + k$ , $\sin(x)$ and $\cos(x)$ are either both positive or negative so there are no solutions. Is this valid? I like how I answered this question but it may be wrong
|
We have by $z=x+iy$ $$e^{iz} +1 - i = 0 \iff e^{iz} =-1 + i \iff e^{ix}e^{-y} =\sqrt 2e^{i\left(\frac34\pi+2k\pi\right)}$$ that is $e^{-y}=\sqrt 2 \implies y=\log \frac{\sqrt 2}2$ $x=\frac34\pi+2k\pi$ or $$z=\frac34\pi+2k\pi+i\log \frac{\sqrt 2}2$$
|
|complex-analysis|solution-verification|
| 0
|
Bounding $\frac{q\left(x\right)}{q^{\prime}\left(x\right)\cdot x}$ for polynomial $q$ with finite non-negative coefficients that sum to 1
|
Is there an easy enough way to show that for a polynomial $q(x)=\sum_{p\ge2}\gamma_{p}x^{p}$ with $\gamma_{p}\ge0$ and $\sum_{p\ge2}\gamma_{p}=1$ , where there are finitely many non-zero $\gamma_{p}$ , we have for $x>0$ : $\frac{q\left(x\right)}{q^{\prime}\left(x\right)\cdot x}\ge\frac{1}{p_{\text{max}}}$ ? I wrote explicitly $\frac{q\left(x\right)}{q^{\prime}\left(x\right)\cdot x}=\frac{\sum_{p\ge2}\gamma_{p}x^{p}}{\sum_{p\ge2}p\gamma_{p}x^{p}}$ , and argued that $\frac{1}{p_{\text{max}}}$ is indeed the limit as $x\rightarrow{\infty}$ . Then showing that the expression is decreasing finishes the problem. I couldn't prove it is decreasing using the derivative. Alternatively, I have a feeling there is a more direct, algebraic, easy argument to show $\frac{\sum_{p\ge2}\gamma_{p}x^{p}}{\sum_{p\ge2}p\gamma_{p}x^{p}}\ge\frac{1}{p_{\text{max}}}$ . Am I wrong?
|
\begin{align} f(x) &\equiv \frac{q(x)}{xq'(x)} = \frac{\sum_{p\geq 2} \gamma_p x^p}{\sum_{p\geq 2} p\gamma_p x^p} \\ &= \frac{1}{\sum_{p\geq 2}c_p(x) p}, \qquad c_p(x) \equiv \frac{\gamma_p x^p}{\sum_{p\geq 2} \gamma_p x^p}, \; \sum_{p\geq 2} c_p(x) = 1 \\ &= \frac{1}{\langle p\rangle_{c_p(x)}} \end{align} So the expression you're interested in is the inverse of the average of $p$ weighted by $c_p(x)$ . A fundamental property of averages is that they lie between the min and max values of their elements. Hence, $$\frac{1}{p_{\max}} \leq f(x) \leq \frac{1}{p_{\min}}$$ Since $f(0)=\frac{1}{p_{\min}}, f(\infty) = \frac{1}{p_{\max}}$ , these are also the tightest bounds for $f(x)$ . An interesting implication of this analysis is that $\frac{xq'(x)}{q(x)}$ could be interpreted as the expectation of the order of the most dominant term in the polynomial expansion of $q(x)$ .
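A quick numeric check of these bounds for a sample polynomial (an arbitrary choice of coefficients satisfying the hypotheses, so this is illustration, not proof):

```python
# q(x) = 0.5*x^2 + 0.5*x^5, so gamma_2 = gamma_5 = 0.5, p_min = 2, p_max = 5
def q(x):  return 0.5 * x**2 + 0.5 * x**5
def qp(x): return 1.0 * x    + 2.5 * x**4   # q'(x)

p_min, p_max = 2, 5
for i in range(1, 200):
    x = i * 0.05                             # grid over x > 0
    f = q(x) / (x * qp(x))
    assert 1 / p_max <= f <= 1 / p_min       # the claimed bounds
print("bounds hold on the grid")
```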
|
|calculus|algebra-precalculus|polynomials|
| 0
|
If $G$ is an abelian Lie group then the Lie algebra of $G$ is abelian
|
This is problem 8-25 from John Lee's Introduction to Smooth Manifolds. Prove that if $G$ is an abelian Lie group, then $Lie(G)$ , the Lie algebra of the Lie group $G$ is abelian. [Hint: show that the inversion map $i:G \to G$ is a group homomorphism and use Problem 7-2.] Problem 7-2 gives that $di_e (X_e)= -X_e$ for $X_e \in T_e G$ . So I have $(i_* X)_e = di_e (X_e)=-X_e$ . It seems like the proof suggests $[X,Y]=[-X,-Y]=[i_* X,i_* Y]=i_*[X,Y]=-[X,Y]$ and conclude that $[X,Y]=0$ for all smooth left invariant vector fields $X,Y$ in $G$ . However, this requires that $(i_* X)_g = -X_g$ for all $g\in G$ . But I can't see how this can be obtained from the equality on the identity. From Problem 8-24(b) I know that $i_* X$ is right-invariant for a left invariant $X$ , but not left invariant. So I cannot apply $dL_g$ on both sides to get $(i_* X)_g = -X_g$ . I've been stuck on this step for a while. I would greatly appreciate any help.
|
If $G$ is abelian, then $i$ is a Lie group homomorphism. So it follows from Theorem 8.44 in Lee's book that $i_* : LieG \to LieG$ is a Lie algebra homomorphism. So for any $X \in LieG$ , $i_*X \in LieG$ is a vector field that is $i$ -related to $X$ . So with the help of Problem 7-2, for any $X \in LieG$ , $$ i_*X|_g = (di_e(X_e))^{\text{L}}|_g = d(L_g)_e (-X_e) = -X_g \implies i_*X = -X. $$ So $\forall X,Y \in LieG$ , $$ -[X,Y] = i_*[X,Y] = [i_*X,i_*Y] = [-X,-Y] = [X,Y] \implies [X,Y] = 0. $$
|
|differential-geometry|lie-groups|lie-algebras|smooth-manifolds|abelian-groups|
| 1
|
Testing Invertibility of a Matrix
|
In Paul Zeitz's the Art and Craft of Problem Solving, he says on page 34 that one way to show that a matrix $\mathbf{C}$ is invertible is by showing that $\mathbf{C} b_i \neq 0$ for each basis vector $b_i$ . However, this can't be true -- consider the example $\begin{bmatrix}1 & 1 \\ 1 & 1\end{bmatrix}$ . Am I just tripping and misinterpreting what he wrote?
|
I think a better way of putting this statement is: the equation $Ax=0$ has only one solution, $x=0$ , if $A$ is invertible. So no basis for $R^n$ can contain a kernel vector (the only kernel vector is the zero vector, which would contradict linear independence), and hence for any basis of $R^n$ , left multiplication by $A$ sends no basis vector to $0$ . (Whereas in your counterexample I can take $\langle 1,-1\rangle$ , $\langle 1,1\rangle$ as a basis for $R^2$ , and left multiplication sends $\langle 1,-1\rangle$ to zero.)
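The counterexample can be checked directly (a verification sketch in plain Python, not part of the original answer):

```python
C = [[1, 1], [1, 1]]

def matvec(A, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# C sends each STANDARD basis vector to a nonzero vector...
print(matvec(C, [1, 0]), matvec(C, [0, 1]))  # [1, 1] [1, 1]

# ...yet C is singular: det = 0, and the basis {(1,-1), (1,1)} of R^2
# contains a kernel vector.
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
print(det, matvec(C, [1, -1]))  # 0 [0, 0]
```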
|
|linear-algebra|matrices|
| 1
|
Given $F: M \to N$ a smooth submersion, every smooth vector field on $N$ has a lift in $M$
|
I am currently stuck on Problem 8-18 (b) in Lee's Introduction to Smooth Manifolds. We have a smooth submersion $F: M \to N$ and a smooth vector field $Y$ on $N$ . The problem in part (b) asks to show that if $\dim M \neq \dim N$ , then $Y$ has a lift, but the lift is not unique. I am struggling to show the existence of a lift globally. If we assume that $\dim M = \dim N$ instead (this is part (a) of the problem) then it is clear that $dF_p: T_pM \to T_pN$ is an isomorphism for all $p \in M$ so we can define the lift of $Y$ by $X_p = dF_p^{-1}\left(Y_{F(p)}\right)$ for all $p$ in $M$ . By the rank theorem, there are smooth charts $(U, \varphi)$ for $M$ containing $p$ and $(V, \psi)$ for $N$ such that $F(U) \subseteq V$ and $\psi \circ F \circ \phi^{-1} = \text{id}$ . Then working in the coordinate basis $\frac{\partial}{\partial x^i}$ associated to $(U, \varphi)$ and $\frac{\partial}{\partial y^i}$ associated to $(V, \psi)$ and using the Jacobian of $dF_p$ which is just $I_n$ in this c
|
Here is my solution. Suppose $m=\dim M \neq \dim N=n$ and $Y \in \mathfrak{X}(N)$ . For any $p \in M$ let $(U_p,x^i)$ be a chart centered at $p$ and $(V_{F(p)},y^j)$ a chart centered at $F(p)$ such that the coordinate representation of $F: M \to N$ is $$ \hat{F}(x^1,\dots,x^m) = (x^1,\dots,x^n). $$ If there is a smooth vector field $X$ that is $F$ -related to $Y$ , then for all $x \in U_p$ we have $$ Y_{F(x)} = Y^j(F(x)) \partial_{y^j}\big|_{F(x)} = dF_x(X_x) = X^i(x) \frac{\partial F^j}{\partial x^i}(x) \partial_{y^j} \big|_{F(x)} = X^i(x) \, \delta^j_i \partial_{y^j}\big|_{F(x)}. $$ So the first $n$ components of $X$ on $(U_p,x^i)$ must satisfy $X^i = Y^i \circ F|_{U_p}$ . The rest of the components of $X$ can be chosen arbitrarily. So define a local vector field $X_p : U_p \to TM$ that is $F$ -related to $Y$ by $X_p = X_p^i \, \partial/\partial x^i$ with $X_p^i = Y^i \circ F|_{U_p}$ for $i=1,\dots,n$ . This construction can be done for every point in $M$ , so by a partition of unity we can blend the local fields to get a global vector field.
|
|differential-topology|smooth-manifolds|vector-fields|
| 0
|
Show by hand : $e^{e^2}>1000\phi$
|
Problem: Show by hand without any computer assistance: $$e^{e^2}>1000\phi,$$ where $\phi$ denotes the golden ratio $\frac{1+\sqrt{5}}{2} \approx 1.618034$ . I came across this limit, which shows: $$\lim_{x\to 0}x!^{\frac{x!!^{\frac{2}{x!!-1}}}{x!-1}}=e^{e^2}.$$ I cannot show it without knowing some decimals, and if so, using power series or continued fractions. It seems challenging and perhaps a tricky calculation is needed. If you have no restriction on the method, how to show it with pencil and paper? Some approach: We have, using the incomplete gamma function and continued fractions: $$\int_{e}^{\infty}e^{-e^{-193/139}x^{193/139+2}}dx=\frac{139}{471}\cdot e\cdot\operatorname{Ei}_{332/471}(e^2)>e^{-e^2},$$ where $\operatorname{Ei}$ denotes the exponential integral. Finding an integral for the golden ratio $\phi$ is needed now. Following my comment we have: $$e^{-e^2}<\cdots$$ where the function in the integral follows the stated order for $x\ge e$ . As said before, use the continued fraction of the incomplete gamma function.
|
A very small idea which may help someone start towards a solution: Let $\lambda = e^{e^2}/1000$ ; we want to show that $\lambda > \phi$ . We see directly that $$e^2 = \sum_{n=0}^\infty \frac{2^n}{n!} > \sum_{n=0}^4 \frac{2^n}{n!} = 7$$ and $$e^{7/6} = \sum_{n=0}^\infty \frac{(7/6)^n}{n!} > \sum_{n=0}^3 \frac{(7/6)^n}{n!} = \frac{4033}{1296} > 3,$$ so $$e^{e^2} > e^7 > 3^6.$$ Thus, $$\lambda > 3^6/10^3 > 2/3 > 2/(1+\sqrt{5}) = 1/\phi.$$ The argument up to this point surely can be improved! Since the roots of $x^2-x-1$ are $\phi$ and $-1/\phi$ , it now suffices to show that $\lambda^2-\lambda-1 > 0$ , i.e. that $$(e^{e^2})^2 > 10^3 e^{e^2} + 10^6.$$ I'm not sure yet how to do this without a computer, but there is more room between the LHS and RHS here than between $\lambda$ and $\phi$ , so hopefully this is a helpful reduction. Basically we are using the fact that $x \mapsto x^2-x-1$ has slope $\approx 2\phi-1 = \sqrt{5} > 2$ near $x = \phi$ to amplify the gap between the two things we want to compare.
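As a numeric sanity check of this chain of bounds (computer-assisted, so of course not a substitute for the hand proof the question asks for):

```python
from math import exp, sqrt

phi = (1 + sqrt(5)) / 2
lam = exp(exp(2)) / 1000           # e^{e^2}/1000

# Intermediate bounds from the answer
assert exp(2) > 7                  # truncated-series bound
assert exp(7) > 3 ** 6             # since e^{7/6} > 3
assert 3 ** 6 / 10 ** 3 > 2 / 3 > 1 / phi

# The target inequality -- note how tight it is!
print(lam, phi, lam - phi)         # lam exceeds phi by only ~1.4e-4
assert lam > phi
```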
|
|inequality|constants|golden-ratio|number-comparison|
| 0
|
Infinite group with only two conjugacy classes
|
Can you show me a reasonably simple (using only elementary group-theoretic tools) example of infinite group with just 2 conjugacy classes ?
|
Peter Cameron outlines the construction of an infinite group with 2 conjugacy classes on page 8 of his Permutation Groups book. For more details see the classic paper: Graham Higman, Bernhard H. Neumann and Hanna Neumann, Embedding theorems for groups, J. London Math. Soc. (1) 24 (1949), 247-254. This gives more details than Rotman's exercise mentioned by Arturo.
|
|group-theory|
| 0
|
Die rolling contest problem
|
I got this problem as part of an interview... Problem Statement: You have two players, Frank and Jane, that take turns rolling a fair k-sided die. Whoever rolls a k first wins the game. The Python program should output the probability that Frank wins the game for k=6 thru 99. That is, the output will be an array of probabilities where index 0 is the probability when k = 6; index 1 when k = 7; etc. Note that it doesn't state who goes first or put any limit on the number of rolls. I asked for clarification because it sure seems to me the probability of Frank winning is always 50%, regardless of the value of k. I believe there is a slight advantage if Frank rolls first, but that's not in the problem statement. The person responded insisting that the probability of Frank winning does depend on k. I don't see it. A very similar problem is covered here: Probability of winning a game by rolling the die first but I think that solution depends on knowing who rolls first, correct?
|
Let $p$ denote the probability that Frank wins the game. If Frank rolls first There is a probability of $\frac{1}{k}$ that Frank wins on the first roll. Otherwise (with probability $1 - \frac{1}{k} = \frac{k-1}{k}$ ), Jane gets the opportunity to roll. Given this, there is a $\frac{1}{k}$ probability that she wins on this roll. The absolute probability that this happens is thus $\frac{k-1}{k^2}$ . Otherwise (with probability $\frac{k-1}{k} - \frac{k-1}{k^2} = \frac{k^2-2k+1}{k^2}$ ), we arrive back at the initial state, as if a new game starts, and Frank rolls first. Thus, the probability that Frank wins the game is: $$p = \frac{1}{k} + \frac{k^2-2k+1}{k^2}p$$ Solving for $p$ in terms of $k$ gives: $$\boxed{p = \frac{k}{2k-1}}$$ If Jane rolls first Then she's in the same position as Frank in the previous scenario. Thus, the probability that she wins the game is $\frac{k}{2k-1}$ . And the probability that Frank wins is the complement of this, $p = 1 - \frac{k}{2k-1}$ , or: $$\boxed{p = \frac{k-1}{2k-1}}$$
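With the closed form $p=\frac{k}{2k-1}$ (taking the Frank-rolls-first reading), the requested Python program is essentially a one-liner; a sketch:

```python
# Probability Frank wins when he rolls first on a fair k-sided die,
# for k = 6 .. 99 (index 0 corresponds to k = 6).
probs = [k / (2 * k - 1) for k in range(6, 100)]

print(len(probs))   # 94 values
print(probs[0])     # k=6: 6/11 = 0.5454...
# As k grows, the first-mover advantage shrinks toward 1/2:
print(probs[-1])    # k=99: 99/197 = 0.5025...
```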
|
|probability-theory|
| 1
|
Inner product of scaled Hermite functions
|
I'm attempting to find a closed form expression for $$\int_{-\infty}^{\infty}e^{-\frac{x^2\left(1+\lambda^2\right)}{2}}H_{n}(x)H_m(\lambda x)dx$$ where $H_n(x)$ are the physicist's hermite polynomials, but haven't had any luck. Anyone know of a way to compute this?
|
There is a more compact closed-form solution expressed with hypergeometric functions, using known Laplace transforms. The method used can be recycled to find a closed-form solution for other integrals of the same type; this is why I am adding this answer despite the fact that it is an old question. Noting that for $k \in \mathbb{N}_0$ : $$H_{2k}(x) = (-1)^k 2^{2k}\, k! \, L_{k}^{-\frac{1}{2}}(x^2) $$ $$H_{2k+1}(x) = (-1)^k 2^{2k+1}\, k! \, x\, L_{k}^{\frac{1}{2}}(x^2), $$ in which $L_k^\alpha$ is a generalized Laguerre polynomial; for even values of the indices of the posted integral $I_{n,m}$ , an obvious change of variables gives: \begin{align} I_{2k,2p} &= (-1)^{k+p}2^{2(k+p)}k!\,p! \int_{0}^\infty u^{-\frac{1}{2}} e^{-\frac{u}{2} (1+\lambda^2)} L_k^{-\frac{1}{2}}(u)L_p^{-\frac{1}{2}}(\lambda^2 u)\,du \\ &= (-1)^{k+p}2^{2(k+p)}k!\,p!\; \mathscr{L}\left(u^{-\frac{1}{2}} L_k^{-\frac{1}{2}}(u)L_p^{-\frac{1}{2}}(\lambda^2 u); \frac{1}{2}(1+\lambda^2)\right) \tag{1} \end{align} in which $\mathscr{L}(\,\cdot\,;s)$ denotes the Laplace transform evaluated at $s$ .
|
|definite-integrals|special-functions|hermite-polynomials|
| 0
|
on the representation of a field
|
Consider an irreducible polynomial with coefficients over a field $\Bbb K$ ( $f(x)\in\Bbb K[x]$ ) and $\deg f(x)=n$ ; in my textbook it is written that: $$ {\Bbb K[x]\over \langle f(x)\rangle} = \{a_0+a_1x+...+a_{n-1}x^{n-1}+\langle f(x)\rangle \mid a_i\in\Bbb K\} $$ Is it true? And if it is, how can I prove it? (I apologize if the question may seem silly.)
|
The polynomials of degree less than that of $f$ give an irredundant collection of representatives for the quotient, yes. Maybe a nod to the division algorithm for polynomials in one variable with coefficients in a field?
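The division algorithm is exactly the point: every $g\in\Bbb K[x]$ satisfies $g=qf+r$ with $\deg r<n$, so $g+\langle f\rangle=r+\langle f\rangle$, and distinct remainders of degree below $n$ give distinct cosets. A small sketch over $\Bbb K=\Bbb Q$ (hypothetical helper `poly_rem`; coefficients listed from low to high degree):

```python
from fractions import Fraction

def poly_rem(g, f):
    """Remainder of g divided by f over Q; coefficient lists, low degree first."""
    g = [Fraction(c) for c in g]
    f = [Fraction(c) for c in f]
    while len(g) >= len(f):
        if g[-1] == 0:
            g.pop()               # drop a zero leading coefficient
            continue
        c, shift = g[-1] / f[-1], len(g) - len(f)
        for i, fc in enumerate(f):
            g[shift + i] -= c * fc  # subtract c * x^shift * f
        g.pop()                   # leading term is now zero
    while len(g) > 1 and g[-1] == 0:
        g.pop()                   # trim trailing zeros
    return g

# Reduce x^3 + x + 1 modulo f = x^2 + 1: x^3 + x + 1 = x*(x^2 + 1) + 1
print(poly_rem([1, 1, 0, 1], [1, 0, 1]))  # [Fraction(1, 1)]
```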
|
|abstract-algebra|
| 0
|