Rencontres Numbers
I'm having trouble understanding rencontres numbers, $D_{n,k}$, specifically the numerical values shown on the wiki page: https://en.wikipedia.org/wiki/Rencontres_numbers Looking at $n = 3$: $D_{3,3} = 1$: I think I understand this, because there is only one way to give all three items to the correct person. $D_{3,2} = 0$: because if 2 people had the correct item, the third person must also have the right item. $D_{3,1} = 3$: as there are three different ways to give exactly one person the right item. $D_{3,0} = 2$: this is the part I don't understand; I thought the answer should be 1, as there is only one way to give nobody the right item. Can someone please explain why for $n = 3$ and $k = 0$ the answer is $2$?
If there are three people ($A, B, C$) and three presents ($a, b, c$), there are two ways in which nobody gets the right present: $$\begin{array}{ccc} A & B & C \\ \hline c & a & b \\ b & c & a \end{array}$$
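The whole row of the table can be double-checked by brute force over all permutations; here is a short sketch in Python (the function name is my own):

```python
from itertools import permutations

def rencontres(n, k):
    """Count permutations of {0, ..., n-1} with exactly k fixed points (D_{n,k})."""
    return sum(
        1
        for p in permutations(range(n))
        if sum(p[i] == i for i in range(n)) == k
    )

# Row n = 3 of the table: D_{3,0}, D_{3,1}, D_{3,2}, D_{3,3}
print([rencontres(3, k) for k in range(4)])  # [2, 3, 0, 1]
```

The two permutations counted for $k=0$ are exactly the two cyclic rearrangements shown above.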
|combinatorics|permutations|
0
Continuity Of Argument Function.
Fix $m\in \mathbb R$. Define $f_m :\mathbb R^2 \setminus\{(0,0)\}\rightarrow(m,m+2\pi]$ $~~$ as $(x,y) \mapsto$ the argument of $(x,y)$ in $(m,m+2\pi]$, i.e. $$(x,y)=\left(\cos(f_m (x,y)),\sin(f_m (x,y))\right)$$ for $(x,y)$ on the unit circle. Now if $f_m$ were continuous, then so would be its restriction to the unit circle, which is compact; but its image $(m,m+2\pi]$ is not compact, so $f_m$ is discontinuous. Moreover, it is discontinuous exactly on the ray with angle $m$, i.e. on $$\{(r\cos m,r\sin m) : r\in \mathbb R^+\},$$ and continuous on the rest of the domain. I want to prove this by a rigorous argument. If $m=-\pi$ then $f_m$ is just the usual $\operatorname{Arg}$ function, and for that case I can prove it, because $\operatorname{Arg}$ has an explicit formula in terms of the $\arctan$ function. So my question is: how can we prove it for arbitrary $m\in \mathbb R$?
We know $f_m(r\cos m,r\sin m)=m+ 2\pi$ since $f_m(x,y) \in (m,m+2\pi]$, while for $m < \theta < m+2\pi$ we have $f_m(r\cos \theta, r\sin \theta)=\theta$. Thus $$\lim_{\theta \rightarrow m^+} f_m(r\cos \theta,r\sin \theta) = \lim_{\theta \rightarrow m^+} \theta = m \neq m+2\pi = f_m(r\cos m,r \sin m),$$ which means that $f_m$ is discontinuous at $(r\cos m , r\sin m)$. Continuity for $\theta \in (m,m+2\pi)$ is clear, since $f_m(r\cos \theta,r\sin \theta)=\theta$ on that range, which is continuous in $\theta$.
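The jump of size $2\pi$ across the ray can also be seen numerically. Below is a sketch of $f_m$ built from `atan2` by shifting into $(m, m+2\pi]$ (the shift formula is my own construction, not from the question):

```python
import math

def f(m, x, y):
    """The argument of (x, y), chosen in the half-open interval (m, m + 2*pi]."""
    theta = math.atan2(y, x)                        # in (-pi, pi]
    k = math.floor((m - theta) / (2 * math.pi)) + 1
    return theta + 2 * math.pi * k                  # shifted into (m, m + 2*pi]

m = 1.0
eps = 1e-6
# just "above" the ray of angle m the value is near m ...
print(f(m, math.cos(m + eps), math.sin(m + eps)))   # ~ 1.000001
# ... but just "below" it the value is near m + 2*pi: a jump of size 2*pi
print(f(m, math.cos(m - eps), math.sin(m - eps)))   # ~ 7.28
```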
|complex-analysis|continuity|polar-coordinates|
1
A Projectile Motion Problem
I need more help, but this time it is for a problem involving projectile motion. The question is: "A shot leaves the thrower's hand 6.5 ft above the ground at a 45° angle at 44 ft/sec. Where is it 3 sec later?" I tried substituting known values into the equation for a vector describing projectile motion, and ended up with r(t) = . I ended up with for the coordinates, and I tried graphing the vector on my graphing calculator. On the other hand, the solution to the problem is "Shot put is on the ground, about 66 ft 3 in. from the stopboard." I tried solving the problem a second time and tried to parametrize the position vector to get x = 44cos(45) and y = 6.5 - 32t^2. I'm just lost now.
I would start by splitting the initial velocity vector into the horizontal and vertical directions. The initial velocity of $44$ ft/s has an angle of $45^\circ$, so... Initial horizontal velocity = $44\cos(45^\circ)\approx31.113$ ft/s. Initial vertical velocity = $44\sin(45^\circ)\approx31.113$ ft/s. Now that we have the initial velocity components, we need to write equations that model the change in those velocities over time. If we disregard air resistance, then the horizontal component has no force acting on it, so the horizontal velocity does not change. Horizontal velocity = $31.113$ ft/s. However, the vertical velocity will change, because the force of gravity acts against the upward motion of the projectile. The constant of gravitational acceleration is approximately $32.17$ ft/s$^2$. This means that the vertical velocity will decrease by $32.17$ ft/s every second. Therefore, the equation would be as follows: Vertical velocity = $(31.113 - 32.17t)$ ft/s. Now, we can use these equations to describe the position of the shot as a function of time.
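Integrating those velocities gives the position, and solving $y(t)=0$ gives the landing time. A short numerical sketch (I use the rounded textbook value $g = 32$ ft/s$^2$, which the book's answer appears to assume):

```python
import math

g = 32.0                      # ft/s^2, rounded textbook value (assumption)
v0 = 44.0                     # ft/s
angle = math.radians(45)
y0 = 6.5                      # ft, release height

vx = v0 * math.cos(angle)     # ~ 31.113 ft/s, constant
vy = v0 * math.sin(angle)     # ~ 31.113 ft/s, initial vertical speed

# y(t) = y0 + vy*t - (g/2)*t^2 ; solve y(t) = 0 for the landing time
t_land = (vy + math.sqrt(vy**2 + 2 * g * y0)) / g
x_land = vx * t_land
print(t_land, x_land)         # lands at about t = 2.13 s, x = 66.4 ft
```

Since the shot lands at about $t \approx 2.13$ s, at $t = 3$ s it is already on the ground, roughly $66$ ft from the release point, consistent with the book's answer.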
|vectors|parametrization|projectile-motion|
0
Question about the PHP (The pigeonhole principle PHP)
A class of 32 students is organized into 33 teams. Every team consists of three students and no two teams are identical. Show that there are two teams with exactly one common student. The PHP as I know it works like this: if there are 6 students and 5 groups, there must be a group that has two students, according to the PHP. How can I solve the question by using the PHP?
An approach using contradiction goes as follows. Suppose there are no 2 teams that have exactly 1 player in common. Then any 2 teams have either 2 common players or none. Observe that if a team (T1) had 2 common players with each of 2 other teams (T2, T3), then the pair (T2, T3) would have exactly 1 common player, which is a contradiction. Hence any team can share players with at most 1 other team. Let $a$ be the number of teams that share players with another team, and $b$ the number of teams sharing no player with any other team. A pair of sharing teams uses exactly 4 players, an isolated team uses 3, and the player sets of different groups are disjoint, so counting teams and players gives $$a+b=33, \qquad 2a+3b \le 32,$$ which is impossible: $2a+3b \ge 2(a+b) = 66 > 32$. So, there exist at least 2 teams having exactly 1 player in common.
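A quick sanity check that no admissible team counts exist: with $a$ paired teams and $b$ isolated teams, the disjoint player sets need $2a+3b$ of only $32$ players, and that quantity is always too large when $a+b=33$.

```python
# With a paired teams and b = 33 - a isolated teams, the disjoint
# player sets need 2a + 3b players, but only 32 players exist.
needed = [2 * a + 3 * (33 - a) for a in range(34)]
print(min(needed))   # 66: even the smallest requirement exceeds 32
```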
|functions|discrete-mathematics|pigeonhole-principle|
0
Probability Theory Bunny Question
Each bunny is individually ready for the big mission with probability p. If I have n bunnies and need k for the mission, find the expected number of k-large squads I can form with bunnies ready for the mission. I said that for binomial distribution we have $$P(X=x) = \binom{n}{x}p^{x}(1-p)^{n-x}$$ So then $E(X) = x*P(X=x)$ , professor said this isn't right so I was wondering what I did wrong?
Continuing from the discussion in the comments: The random variable $X$ you described (binomially distributed) is the number of bunnies which are ready for the big mission. So, $\mathbb{E}(X)$ is the expected [number of bunnies which are ready for the big mission]. We want to find the expected [number of squads of $k$ bunnies which are ready for the big mission], so we should make a new random variable! Let $Y$ be the number of squads of $k$ bunnies which are ready for the big mission. You should find the PMF of $Y$ , and use this to compute $\mathbb{E}(Y)$ , which will be the expected [number of squads of $k$ bunnies which are ready for the big mission] -- exactly the quantity you want to find! Or alternatively, you can try to find a formula for $Y$ in terms of $X$ , then use the PMF of $X$ to compute $\mathbb{E}(Y)$ .
|probability|
0
Morphism $\infty$-categories
Let $\mathcal{C}$ be an $\infty$ -category, and let $X, Y$ be a pair of its objects. It is said that $\text{Mor}_{\mathcal{C}}(X,Y)$ has again the structure of an $\infty$ -category. What is this structure, how is it defined? I guess the definition I am familiar with is saying that an $\infty$ -category is a simplicial set satisfying a filling property. What are the $n$ -simplices of the morphism space? Somehow I thought those should be the $1$ -simplices whose face maps give $X$ and $Y$ . Is the morphism simplicial set basically the simplicial subset comprised of these simplices?
You are talking about quasicategories as a model for $\infty$-categories. There are a few ways to define the mapping space $\mathrm{Map}_\mathcal{C}(X,Y)$ of a quasicategory $\mathcal{C}$ with given objects $X$ and $Y$ in $\mathcal{C}$ (i.e. $X$ and $Y$ are $0$-simplices of $\mathcal{C}$): the reason is that we only care about the homotopy type of the mapping space (and I use the word ''space'' here to mean ''homotopy type''), and there can be multiple convenient simplicial sets modelling the same homotopy type. Of the usual constructions, I will present the one that is symmetric and gives you the necessary intuition. So, we can model the mapping space as the following pullback: $$ \require{AMScd} \begin{CD} \mathrm{Map}_\mathcal{C}(X,Y) @>>> \mathcal{C}^{\Delta^1}\\ @VVV @VV{\mathrm{source}\times\mathrm{target}}V\\ \Delta^0 @>{(X,Y)}>> \mathcal{C}\times\mathcal{C} \end{CD} $$ Here, the pullback square is a pullback in the category $\mathsf{sSet}$ of simplicial sets. The object
|homotopy-theory|simplicial-stuff|higher-category-theory|
0
Which calculus method should I use for this investigation? What does it mean choose a suitable length for the cross section?
Your task is to determine which shape maximises the volume of water that can be held by a gutter made from a fixed length of material. Choose a suitable fixed length for the cross section and explore different shapes. You should consider a rectangular, triangular and round gutter, as well as at least 2 other shapes. Determine the dimensions and cross-sectional area that maximise the water held by the gutter for each shape. Generalise each shape for a length $L$ and make use of calculus to determine the dimensions of the shape and associated cross-sectional area that optimise the water held by the gutter. What mathematical method do I approach this question with, and what constraints do I use? My peers are all using different methods and I don't know which one to use: some are using a perimeter and some are using a surface area; which one is easier? I have been able to do some sort of working out for the rectangular gutter and the triangle, however they are not coherent enough to hand in to the teacher. Furtherm
I think your problem is equivalent to minimizing the cross-sectional perimeter for a fixed cross-sectional area, in order to minimize the material used in construction. Take for example a square channel, open at the top, with sides equal to 1: the area is 1 and the material (perimeter) is 3. Now consider a half circle with area $A=\frac{\pi}{2} r^2 = 1$, so $r=\sqrt{\frac{2}{\pi}}$. The length of its perimeter is $\pi r=\sqrt {2\pi}\approx 2.5066$. The half circle beats the square. Now take a V-shaped equilateral triangle with base $b$ and height $h=\frac{\sqrt 3 b}{2}$: $A=\frac{\sqrt 3 b^2}{4}= 1$ gives $b=\frac{2}{\sqrt[4]{3}}$, and the material (the two slanted sides) is $2b = \frac{4}{\sqrt[4]{3}}\approx 3.0394$. So in ranking optimal shapes we find that the half circle ranks first, the square second and the equilateral triangle third.
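The three comparisons above can be reproduced numerically; a small sketch for a unit cross-sectional area:

```python
import math

area = 1.0   # fixed cross-sectional area, as in the examples above

# open-top square channel with side 1: material is three sides
square = 3 * math.sqrt(area)

# half circle with area (pi/2) r^2 = 1: material is the arc length pi * r
half_circle = math.pi * math.sqrt(2 * area / math.pi)   # = sqrt(2*pi)

# V-shaped equilateral triangle with area (sqrt(3)/4) b^2 = 1: two sides of length b
b = 2 / 3 ** 0.25
triangle = 2 * b

print(half_circle, square, triangle)   # about 2.507 < 3.0 < 3.039
```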
|calculus|derivatives|optimization|area|volume|
0
Knock out tournament 3
Thirty-two players ranked 1 to 32 are playing in a knockout tournament. Assume that in every match between any two players the better-ranked player wins. What is the probability that the ranked 1 and ranked 2 players are the winner and runner-up, respectively? My approach: But the given answer is 16/31. Could someone please explain where I went wrong?
What is wrong with your calculation is that you don't take into account that in any match the lower-ranked player always loses. That puts constraints on which players can be placed in the 'blank' spots, whereas in your calculation any player could end up in any blank spot. As a simple example: the lowest-ranked player cannot get through the first round, but in your calculation that player could fill one of the blanks after the first round. Now, I really like @Haris' Answer, in which they give the same explanation but also provide a nice fix using your approach, so you should definitely accept their Answer! Still, I'd like to point out that it can be established much more easily that the probability should be $\frac{16}{31}$: in order for the two players to end up in the final, they need to be in opposite halves of the bracket. So, wherever the rank $1$ player goes, the rank $2$ player needs to be in one of the $16$ spots of the other half, rather than in one of the $15$ remaining spots of the same half; this gives the probability $\frac{16}{16+15}=\frac{16}{31}$.
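The value $\frac{16}{31}\approx 0.516$ is easy to confirm with a small simulation of the bracket (a sketch; the helper name is my own):

```python
import random

def knockout(bracket):
    """Play out single elimination; the better (lower) rank always wins."""
    while len(bracket) > 1:
        bracket = [min(bracket[i], bracket[i + 1])
                   for i in range(0, len(bracket), 2)]
    return bracket[0]

random.seed(0)
trials = 100_000
hits = 0
players = list(range(1, 33))
for _ in range(trials):
    random.shuffle(players)
    left, right = players[:16], players[16:]
    # rank 1 always wins; the runner-up is the winner of the other half
    runner_up = knockout(right) if 1 in left else knockout(left)
    hits += runner_up == 2

print(hits / trials, 16 / 31)   # both close to 0.516
```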
|probability|combinatorics|
0
Finding a general solution of a partial differential equation.
Let $p= \frac{\partial z}{\partial x}, ~q= \frac{\partial z}{\partial y}$. Find the general solution of the partial differential equation $z = p x+ qy +p+q -pq$, by finding the envelope of those planes that pass through the origin. It is given that, $z =ax+ by + a+b -ab$ is a complete integral. (This question is part of a problem from the book "Elements of partial differential equations" by Ian N. Sneddon.)
The planes $z=ax+by+a+b-ab$ pass through the origin if $a$ and $b$ satisfy the relation $$ a+b-ab=0 \implies b=\frac{a}{a-1}. \tag{1} $$ Substituting $(1)$ in the equation of the planes we find $$ z=ax+\frac{ay}{a-1}. \tag{2} $$ The envelope of these planes is obtained by eliminating $a$ between $(2)$ and the equation $$ \frac{\partial}{\partial a}\left(-z+ax+\frac{ay}{a-1}\right)=0 \implies x+\frac{y}{a-1}-\frac{ay}{(a-1)^2}=0. \tag{3} $$ Solving $(3)$ for $a$ we obtain $$ a=1\pm\sqrt{\frac{y}{x}}. \tag{4} $$ Substituting $(4)$ in $(2)$ we finally obtain $$ z=\left(\sqrt{x}\pm\sqrt{y}\right)^2, \quad \text{or} \quad (z-x-y)^2=4xy. \tag{5} $$
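As a sanity check, the envelope in $(5)$ does satisfy the original PDE. A short symbolic sketch (assuming SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
z = (sp.sqrt(x) + sp.sqrt(y))**2
p = sp.diff(z, x)
q = sp.diff(z, y)

# the surface must satisfy z = p x + q y + p + q - p q
residual = sp.simplify(z - (p*x + q*y + p + q - p*q))
print(residual)   # 0

# and the implicit form (z - x - y)^2 = 4 x y also holds
print(sp.simplify((z - x - y)**2 - 4*x*y))   # 0
```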
|partial-differential-equations|
0
Whether a function has an inverse and horizontal line test
Apologies for the naive question, but I have gotten mixed answers regarding this one. The inverse of $f$ exists if and only if $f$ is bijective. The horizontal line test is a test used to determine whether a function is injective (i.e., one-to-one). However, I have read in many answers that to determine whether $f$ has an inverse we should apply the horizontal line test. What if the function passes the horizontal line test but is not surjective? Why can we still use the horizontal line test to determine whether $f$ has an inverse?
You can always restrict the codomain to make a function surjective. So if $f:X \rightarrow Y$ is injective then the same function but with codomain $f(X)$ , denote it $g:X \rightarrow f(X)$ where $g(x)=f(x) \: \forall x \in X$ , is surjective and thus bijective. So you can get an inverse $g^{-1}:f(X) \rightarrow X$ .
|functions|
0
Bocce league: How many wins to advance to playoffs?
In our bocce division, we have $14$ teams (including us). Everyone plays each other $1$ time during the $13$ week season ($13$ games for all). At the end of the $13$ weeks, the top $6$ teams advance to the playoffs. What is the minimum number of wins a team must have to land in the top $6$? At this time no one is undefeated, so the very best record could be $12-1$. Thank you. I'm going around in circles in my head.
This answer concerns the minimum number of wins needed to be guaranteed a place in the playoffs. Unintuitively, a team can still be eliminated with $8$ wins; here is an example: Team 1 wins against Teams 2,3,4,5,8,9,10,11,12 (9 wins). Team 2 wins against Teams 3,...,11 (9 wins). Team 3 wins against Teams 4,...,12 (9 wins). Team 4 wins against Teams 5,...,13 (9 wins). Team 5 wins against Teams 6,...,14 (9 wins). Team 6 wins against Teams 7,...,14 and Team 1 (9 wins). So a team with 8 wins is not guaranteed a place in the playoffs. A similar example shows that a team with $9$ wins can also be eliminated. Suppose we have a league with $6$ very strong but equal teams and $8$ garbage teams. The $6$ strong teams are guaranteed to win against the $8$ garbage teams, and split the remaining wins amongst themselves evenly. There will be $$13+12+11+10+9+8=63$$ games played by these top $6$ teams. Since each of these games will be a win for the strong teams, and they divide them equally, each to
|arithmetic|
0
Can $4\cdots41$ (with odd number of $4$s) be a Square Number?
Consider a number whose decimal representation begins with an odd number of consecutive $4$s, followed by a single digit $1$; examples are $41$ and $4441$. My question is: can such a number ever be a perfect square? To clarify, the numbers we're considering take the form $44\dots41$, where the number of $4$s is odd and the number is terminated by a single $1$. Here are the specific points I'm curious about: Is there a mathematical approach or theorem that directly addresses the properties of numbers with specific digit patterns in relation to being perfect squares? Could modular arithmetic or any form of number theory provide insight into proving or disproving the possibility of such a number being a perfect square? I've attempted some preliminary analysis, including playing around with smaller cases and considering the last digits of square numbers, but haven't reached a conclusive answer. Any guidance, references, or
The answer of @Aig correctly chose to look at the given number in mod 11, but I would like to show why mod 11 was specifically chosen. Solution A number of the form $aaa \dots aa1$ with $2n$ digits ($2n-1$ a's) can be written as $$\underbrace{aaa \dots aa1}_{2n} = \sum_{k=1}^{2n-1} a(10^k) + 1$$ which can be expanded as $$= a(10^{2n-1}) + a(10^{2n-2}) + \dots + a(10) + 1.$$ Adding and subtracting $(a-1)$, $$= \left[ a(10^{2n-1}) + a(10^{2n-2}) + \dots + a(10) + a \right] - (a-1)$$ $$= a\left[(10^{2n-2})(10 + 1) + (10^{2n-4})(10 + 1) + \dots + (1)(10 + 1)\right] - (a-1)$$ $$= 11a\left[10^{2n-2} + 10^{2n-4} + \dots + 1 \right] - (a-1)$$ If we look at it in mod 11, then $$\underbrace{aaa \dots aa1}_{2n} \equiv -(a-1) \pmod{11}$$ $$-(a-1) \equiv 12 - a \pmod{11}$$ Using the table in the answer of @Aig, $\underbrace{aaa \dots aa1}_{2n}$ is not a perfect square when $$12-a \not\equiv \{0,1,4,9,5,3\} \pmod{11}$$ $$\implies a \not\equiv \{12,11,8,3,7,9 \} \pmod{11}$$ $$\implies a \not\equiv \{1,0,8,3,7,9\} \pmod{11}.$$ In particular $a=4$ does not lie in this set, so $\underbrace{44\dots41}_{2n}$ is never a perfect square.
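The residue computed above can be checked directly: every $44\dots41$ with an odd number of $4$s is $\equiv 8 \pmod{11}$, and $8$ is not a quadratic residue mod $11$. A quick sketch:

```python
import math

def num(fours):
    """The number 44...41 with the given count of 4s."""
    return int('4' * fours + '1')

for fours in range(1, 20, 2):          # odd numbers of 4s
    n = num(fours)
    assert n % 11 == 8                 # 8 is a non-residue mod 11 ...
    r = math.isqrt(n)
    assert r * r != n                  # ... so n is never a perfect square

print([num(f) % 11 for f in (1, 3, 5)])   # [8, 8, 8]
```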
|square-numbers|
0
Whether a function has an inverse and horizontal line test
Apologies for the naive question, but I have gotten mixed answers regarding this one. The inverse of $f$ exists if and only if $f$ is bijective. The horizontal line test is a test used to determine whether a function is injective (i.e., one-to-one). However, I have read in many answers that to determine whether $f$ has an inverse we should apply the horizontal line test. What if the function passes the horizontal line test but is not surjective? Why can we still use the horizontal line test to determine whether $f$ has an inverse?
In the context of math classes where you're told to use the horizontal line test, you're working with 2-dimensional graphs in the plane, and "has an inverse" typically means you can define a function $f^{-1}$ which maps points from the image of $f$ back to the domain of $f$. This is good enough for those types of classes, because the teachers just want you to know the concept of an inverse. As you move on and become more advanced, the concept of an inverse also becomes more rigorous and needs a better definition. Take any curve in the family of logistic curves, for example - a tool evoked when discussing population growth (such a curve could be given by the function $\frac{e^x-1}{e^x+1}$). We think of logistic curves, when viewing them as functions, as mapping real numbers to other real numbers; but there is no function which maps every real number back to its input according to the logistic curve, because not every real number is reached as an output of the logistic curve.
|functions|
1
Problem with the use of monomials: determine a formula
Andrea has $x$ euros, Marco has $y$ euros more than Andrea but $z$ euros less than Luke. Determine the formula that expresses the total sum $s$ owned by the three friends. Since Andrea has $x$ euros, Marco has $y$ euros more than Andrea and Luke has $z$ euros less than Marco, we can write the following equations: The amount of money owned by Marco: $x + y$. The amount of money owned by Luke: $(x + y) - z$. The total sum $s$ will be the sum of Andrea, Marco and Luke's money: $$ s = x + (x + y) + ((x + y) - z) $$ Simplifying this expression we get: $$ s = x + x + y + x + y - z= 3x + 2y - z $$ but the solution of the book is $s = 3x + 2y + z$. Why?
The problem says Marco has $z$ euros less than Luke, so Luke owns $(x+y)+z$, and not $(x+y)-z$. With that you will get the claimed result.
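With the corrected reading, the total works out to $3x+2y+z$ for any values; a tiny numerical sketch:

```python
def total(x, y, z):
    andrea = x
    marco = x + y          # y euros more than Andrea
    luke = marco + z       # Marco has z euros LESS than Luke
    return andrea + marco + luke

# equals 3x + 2y + z for any inputs
print(total(10, 2, 3), 3 * 10 + 2 * 2 + 3)   # 37 37
```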
|algebra-precalculus|
1
Measurability of "(multivariate) inverse transformation sampling"
Given a measure $m \in \mathcal P(\mathbb R^d)$ , one can sample from $m$ using realizations of a random variable $U$ uniformly distributed on $[0,1]^d$ . The procedure is called (multivariate if $d \ge 2$ ) inverse transformation sampling, and defines a map $$ F : \mathcal P(\mathbb R^d) \times [0,1]^d \to \mathbb R^d $$ such that, for all $m \in \mathcal P(\mathbb R^d)$ , $F(m,U)$ has law $m$ whenever $U$ is uniformly distributed on $[0,1]^d$ . Is the map $F$ measurable with respect to some "usual" product sigma-algebra, for example where the first coordinate is endowed with the Borel sigma-algebra associated to the weak topology? Thanks in advance!
This map is, to begin with, not continuous, for the simple reason that we need not even have continuity in the second coordinate for the first coordinate fixed. Let $m$ be the fair coin-flipping distribution on $\{0,1\}$ . Then $F(m,x)=1$ for $x>1/2$ and $F(m,1/2)=0$ . So $F$ is not continuous.
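For intuition, the $d=1$ coin-flip example is the usual inverse-CDF sampler, and the discontinuity at $u=1/2$ is exactly the jump used above. A small sketch (pure Python, names are mine):

```python
import random

def F(u):
    """Inverse-CDF sample of the fair coin on {0, 1}: 0 if u <= 1/2, else 1."""
    return 0 if u <= 0.5 else 1

random.seed(0)
samples = [F(random.random()) for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to 0.5, so the law is the fair coin
# F jumps at u = 1/2, so F(m, .) is not continuous in the second coordinate
```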
|probability|general-topology|continuity|
0
Which calculus method should I use for this investigation? What does it mean choose a suitable length for the cross section?
Your task is to determine which shape maximises the volume of water that can be held by a gutter of a fixed length of material. choose a suitable fixed length for the cross section and explore different shapes. You should consider a rectangular, triangular and round gutter as well as at least 2 other shapes. Determine the dimensions and cross-sectional area that maximises the water held by the gutter for each shape. Generalise each shape for a length L and make use of calculus to determine the dimensions of the shape and associated cross-sectional area that optimises the water held by the gutter. what mathematical method do I approach this question with what constraints do I use? My peers all are using different methods and I don't which one to use some are using a perimeter and some are using a surface area which one is easier I have been able to do some sort of working out for the rectangular gutter and triangle however they are not coherent enough to hand in to the teacher. Furtherm
If I understand correctly, you want to hold the perimeter of the cross section constant while maximizing the cross-sectional area. It looks like you were asked to consider regular $n$-gons. A regular $n$-gon can be broken up into $n$ isosceles triangles with base $L/n$. The apothem satisfies $a\tan (\pi/n)=(1/2)(L/n)\implies a=\frac{L}{2n}\cot(\pi/n)$. Area is half the apothem times the perimeter: $A=(1/2)ap$. So $$A(n)=\frac{L^2}{4n}\cot(\pi /n)=\frac{L^2}{4\pi}\cdot\frac{\pi/n}{\tan(\pi /n)}.$$ The function $t/\tan t$ is decreasing on $(0,\pi/2)$, and $t=\pi/n$ decreases as $n$ increases, so the area increases monotonically with increasing $n$. A circle is the limit of regular polygons as the number of sides grows, and $$\lim_{n\to \infty} A(n)=\frac{L^2}{4\pi},$$ which you can prove from $A(n)$ using the facts that $\lim_{x\to 0} (\sin x)/x =1$ and $\lim _{x \to 0} \cos x=1$. A more thorough consi
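Both the monotonicity and the limit are easy to confirm numerically; a short sketch with $L=1$:

```python
import math

L = 1.0  # fixed cross-sectional perimeter

def area(n):
    """Area of a regular n-gon with perimeter L: A(n) = L^2 / (4 n tan(pi/n))."""
    return L**2 / (4 * n * math.tan(math.pi / n))

values = [area(n) for n in (3, 4, 5, 6, 12, 100, 10**6)]
print(values)                    # strictly increasing
print(L**2 / (4 * math.pi))      # the circular limit, ~ 0.0796
```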
|calculus|derivatives|optimization|area|volume|
0
Norm of cos in $L^2[-T,T]$
I am reading the book Applied Fourier Analysis by Tim Olson. There I got introduced the Fourier Series and we have derived the coefficients $a_k$ and $b_k$ for the Fourier Series on $L^2[-\pi,\pi]$ . The author is now setting up to introduce orthonormal expansions. I could recreate the norm of $\sin$ and $\cos$ in $L^2[-\pi,\pi]$ as follows: For $\cos$ $$\begin{align} \lvert\lvert \cos \rvert\rvert &= \sqrt{ \int _{-\pi}^\pi \cos^2(t) \, dt} \\ &= \sqrt{ 2\int _{0}^\pi \cos^2(t) \, dt} \tag{$\cos$ even function}\\ &= \sqrt{ 2\left[\frac{1}{2}(\cos(t)\sin(t) + t)\right]_{0}^\pi} \\ &= \sqrt{ \cos(\pi)\sin(\pi) + \pi - (\cos(0)\sin(0)) } \, \\ &= \sqrt{ (-1)\cdot0 + \pi } \\ &= \sqrt{ \pi} \, \\ \end{align}$$ For $\sin$ $$\begin{align} \lvert\lvert \sin \rvert\rvert &= \sqrt{ \int_{-\pi}^\pi \sin^2(t) \, dt} \\ &= \sqrt{ \int_{-\pi}^\pi \frac{1 - \cos(2t)}{2} \, dt} \\ &= \sqrt{ \frac{1}{2} \int_{-\pi}^\pi (1 - \cos(2t)) \, dt }\\ &= \sqrt{ \frac{1}{2} \left[t - \frac{\sin(2t)}{2}\right]
Since the subject is Fourier series, you should probably compute the norms of the trigonometric polynomials which are periodic of period $P$. Your author uses the notation where the period is $P=2T$. This means you should consider the norms of the functions $\cos(\omega n x)$, $\sin(\omega n x)$, where $n$ is an integer and $\omega=2\pi/P=\pi/T$. After a simple change of variable the computation is reduced to the case where the period is $2\pi$. The trick to integrate is either to use the power-reduction (half-angle) formulae: \begin{align} \cos(y)^2 &= \frac{1+\cos(2y)}{2} \\ \sin(y)^2 &= \frac{1-\cos(2y)}{2} \, , \\ \end{align} as you are doing, or to integrate by parts. Finally, Fourier series become much simpler in the complex case, where the orthonormal basis is simply $e^{ikx}/\sqrt{2\pi}$.
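On the general interval $[-T,T]$ the answer comes out to $\lVert\cos(\omega n x)\rVert^2 = T$ for every integer $n\ge 1$; a short symbolic check (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x', real=True)
T = sp.symbols('T', positive=True)

# || cos(n*pi*x/T) ||^2 on [-T, T] equals T for every integer n >= 1
for n in (1, 2, 3):
    val = sp.integrate(sp.cos(n * sp.pi * x / T)**2, (x, -T, T))
    assert sp.simplify(val - T) == 0

# the special case T = pi recovers the norm sqrt(pi) from the question
print(sp.integrate(sp.cos(x)**2, (x, -sp.pi, sp.pi)))   # pi
```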
|fourier-analysis|orthonormal|
1
How can we formally / rigorously use Mertens' third theorem with $n^2 - 1$ instead of $\ln n$?
I'm quite new to analytic number theory, so I was wondering if the following is true, and beyond being obviously true, how could we prove it rigorously, line by line? See: Mertens' third theorem. The function they use to "modulate the product" (which otherwise converges to zero) is $\ln n$, and the constant they get is $\gt 1/2$. Then to me it's obviously true that if we used the function $n^2 -1$ in place of $\ln n$, then either: the constant approached in the limit is easily $\gt 1/2$, or the limit approached is $\infty$. But I have no idea how to "work with asymptotics". This seems like it would be an easy exercise. Do we even need a proof for it? I'd really like to know how to work with the limits involved.
\begin{align} \lim_{n\to\infty}(n^2-1)\prod_{p\le n}\left(1-\frac{1}{p}\right) &= \lim_{n\to\infty}\frac{n^2-1}{\log n}\log n\prod_{p\le n}\left(1-\frac{1}{p}\right)\\ &=\lim_{n\to\infty}\frac{n^2-1}{\log n}\cdot\lim_{n\to\infty}\log n\prod_{p\le n}\left(1-\frac{1}{p}\right)\\ &=e^{-\gamma}\lim_{n\to\infty}\frac{n^2-1}{\log n} = \infty \end{align} From this proof it is easy to see that if you replace $\log n$ by any function that grows even slightly faster than $\log n$ (for example $\log n\cdot \log\log n$), the limit will be infinite.
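Both behaviours can be seen numerically: the $\log n$-modulated product settles near $e^{-\gamma}\approx 0.5615$, while the $(n^2-1)$-modulated one blows up. A sketch with a small prime sieve:

```python
import math

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

n = 10**5
prod = 1.0
for p in primes_upto(n):
    prod *= 1 - 1 / p

print(math.log(n) * prod)    # approaches e^{-gamma} ~ 0.5615
print((n**2 - 1) * prod)     # grows without bound
```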
|limits|number-theory|elementary-number-theory|prime-numbers|infinite-product|
1
Reflexivity of $\ell^p$
I'm having real difficulties understanding how to prove that $\ell^p$ with $1<p<\infty$ is reflexive. Maybe it is trivial for you and for the books! I want to show formally that the canonical map $J_{\ell^p}:\ell^p \rightarrow (\ell^p)^{\ast\ast}$ is surjective. I tried for hours but got nothing; I'm blocked. Let $j_p: \ell^q \rightarrow (\ell^p)^\ast$ and $j_q: \ell^p \rightarrow (\ell^q)^\ast$ be the isomorphisms that I have, with $q$ the conjugate exponent. Let $z \in (\ell^p)^{\ast\ast}$. I want to find an $x \in \ell^p$ such that $J_{\ell^p}(x)=z$, i.e. $\langle z,x'\rangle=\langle x',x\rangle$ for every $x' \in (\ell^p)^\ast$. Probably there will be many "$j_p, j_q, j_p^{-1}, j_q^{-1},J_{\ell^p}$" involved, but I have no idea about how to combine them. Some help is greatly appreciated! Thank you!
This can be done with adjoints, and I think it is more enlightening to do it this way. Lemma: If $T:X\rightarrow Y$ is a surjective isometry between Banach spaces, then $T^*: Y^* \rightarrow X^*$ is also a surjective isometry. Proof: We will only need that $T^*$ is surjective, but this is immediate: $T^*(x^* \circ T^{-1})=x^*$. I will also prove that it preserves the norm, because it is straightforward, although we will not need this: $$\lVert T^*(y^*)\rVert=\sup_{x\in B_X} \lVert y^*(T(x))\rVert=\sup_{y\in B_{Y}}\lVert y^*(y)\rVert=\lVert y^*\rVert\quad \quad\square$$ Now let us continue with our quest of proving reflexivity of $\ell^p$ for $1<p<\infty$. If we consider the canonical surjective isometry $\phi: (\ell^p)^*\rightarrow \ell^q$, then its adjoint will satisfy: $$\phi^*:(\ell^q)^*\rightarrow (\ell^p)^{**}.$$ Next we consider $\psi: \ell^p \rightarrow (\ell^q)^*$, the canonical isometry. By the Lemma, the composition $$\phi^*\circ \psi: \ell^p \rightarrow (\ell^p)^{**}$$ is surjective.
|functional-analysis|banach-spaces|lp-spaces|
0
Explain that a function has a given Jacobian matrix
Assume that $W$ is an $n \times n$ matrix with elements $w_{ij}$, and that $\mathbf{b} \in \mathbb{R}^n$ is a column vector with elements $b_{i}$. We know that the Jacobian matrix of the affine transformation $\mathbf{F}(\mathbf{x}) = W\mathbf{x} + \mathbf{b}$ is $\mathbf{F}'(\mathbf{x}) = W$. We are now looking at the function $\mathbf{G}: \mathbb{R}^{(n^2+n)} \rightarrow \mathbb{R}^n$ defined by $\mathbf{G}(w_{11},\ldots,w_{1n},\ldots,w_{n1},\ldots,w_{nn},b_1,\ldots,b_n) = W\mathbf{x} + \mathbf{b}$. We are viewing the elements of $W$ and $\mathbf{b}$ as variables (listed row by row, from left to right), and $\mathbf{x}$ as a constant. Explain that the Jacobian matrix of $\mathbf{G}$ is where $O$ stands for a vector or matrix with only zeros. Above, $\mathbf{x}$ is therefore repeated on each row. When I find the Jacobian matrix of the function, I get the exact same one, except I get $x_1, x_2, \ldots, x_n$ in the diagonal to the left. How do I get this to be the
We can rewrite the given matrix product as follows $$ Wx + b = \begin{bmatrix} w_1^T \\ \vdots \\ w_n^T \end{bmatrix} \cdot x + b $$ $$ = \begin{bmatrix} w_1^T \cdot x + b_1 \\ \vdots \\ w_n^T \cdot x + b_n \end{bmatrix} = \begin{bmatrix} x^T \cdot w_1 + b_1 \\ \vdots \\ x^T \cdot w_n + b_n \end{bmatrix}$$ $$ = \begin{bmatrix} x^T & & &\vert & 1 & & \\ & \ddots & & \vert & & \ddots & \\ & & x^T & \vert & & &1 \end{bmatrix} \cdot \begin{bmatrix} w_1 \\ \vdots \\ w_n \\ b_1 \\ \vdots \\ b_n\end{bmatrix}.$$ The jacobian with respect to $w_i$ and $b_i$ can then be directly calculated from the last expression and is identical to the one provided by you.
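Because $\mathbf{G}$ is linear in its parameters, the block matrix in the last expression applied to the parameter vector must reproduce $W\mathbf{x}+\mathbf{b}$ exactly; a small pure-Python sketch (the variable names are my own):

```python
import random

random.seed(0)
n = 3
W = [[random.random() for _ in range(n)] for _ in range(n)]
b = [random.random() for _ in range(n)]
x = [random.random() for _ in range(n)]

# parameters listed row by row, then b: (w11..w1n, ..., wn1..wnn, b1..bn)
params = [w for row in W for w in row] + b

# claimed Jacobian: row i holds x^T in the block for row i of W,
# zeros elsewhere, and a 1 in the column of b_i
J = []
for i in range(n):
    row = [0.0] * (n * n + n)
    row[i * n:(i + 1) * n] = x        # the x^T block
    row[n * n + i] = 1.0              # the identity block
    J.append(row)

# G is linear in its parameters, so J @ params must equal W x + b
Gx = [sum(W[i][j] * x[j] for j in range(n)) + b[i] for i in range(n)]
Jp = [sum(J[i][k] * params[k] for k in range(n * n + n)) for i in range(n)]
print(all(abs(g - j) < 1e-12 for g, j in zip(Gx, Jp)))   # True
```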
|linear-algebra|
1
Proof using p-adic numbers that $\lim_{x \in \mathbb{N} \to \infty} 2^x = 0$? What am I doing wrong?
Approaching an infinite natural x value for the expression $5^{2^x}$ , one is left with the following p-adic number: $$...212890625$$ Which is equal to its own square. With basic algebra one can see that, since this value is equal to itself squared, it's equal to 1. Therefore, $$\lim_{x \in \mathbb{N} \to \infty} 5^{2^x} = 1$$ Therefore $$\lim_{x \in \mathbb{N} \to \infty} 2^x = 0$$ This shouldn't be true. What did I do wrong? I am new to the field of p-adics, so bear with me.
To get this off the unanswered list and turn comments into an answer: You seem to be using the $10$-adic numbers $\mathbb Z_{10} = \varprojlim_k \mathbb Z / 10^k\mathbb Z$. It is correct that in those, the sequence $a_n := 5^{(2^n)}$ converges to an element $a \in \mathbb Z_{10}$ that, in $10$-adic a.k.a. decimal expansion, matches what you write. It is also true that that element satisfies $a^2=a$ (almost by construction). Such elements are called idempotents. In domains, the only idempotents are $0$ and $1$. This element visibly being neither shows that $\mathbb Z_{10}$ is not a domain. In fact, for prime $p$, the $p$-adics $\mathbb Z_p$ are a domain. Now while for prime powers one would still have $\mathbb Z_{p^r} \simeq \mathbb Z_p$, for a general natural number $n$, the ring of $n$-adic integers $\mathbb Z_n := \varprojlim_k \mathbb Z / n^k\mathbb Z$ happens to be isomorphic to the direct product $$ \mathbb Z_{p_1} \times \dots \times \mathbb Z_{p_r} $$ where the $p_i$ are the prime divisors of $n$.
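The idempotent (and the digits from the question) can be approximated by working modulo $10^k$; the last $k$ digits stabilize after a few squarings:

```python
# Approximate the 10-adic limit a = lim 5^(2^n) by repeated squaring mod 10^k
k = 12
M = 10 ** k
a = 5
for _ in range(60):      # a becomes 5^(2^60) mod 10^12, which has stabilized
    a = a * a % M

print(a)                 # ends in ...212890625, matching the question
assert a * a % M == a    # an idempotent that is neither 0 nor 1
```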
|p-adic-number-theory|
0
Verifying that a vector is a highest weight vector
I'm trying to show that $(e_1 \otimes e_2 \otimes e_3 - e_3 \otimes e_1 \otimes e_2) \otimes e_n^*$ is a highest weight vector for the irreducible submodule of $V^{\otimes 3} \otimes V^*$ with highest weight $(0,0,1,0,\dots,0,1)$, where $V$ is the standard $\mathfrak{sl}_n(\mathbb{C})$-representation and $V^*$ is its dual. According to this post, it should be the unique vector that satisfies: $E_{33}$ scales it by 1; $E_{(n-1)(n-1)}$ scales it by 1; $E_{ii}$ for $1 \leq i \leq n-1$ and $i \neq 3,n-1$ kill it; $E_{ij}$ for $i<j$ kill it. If the $E_{ij}$'s act on my vector in the way given by the answer to this question, I find that $E_{33}$ fixes my vector but $E_{(n-1)(n-1)}$ kills it. So I'm misunderstanding something.
I don't think the linked post says what you claim it does. The highest weight $(0,0,1,\dots,0,1)$ just means $\omega_3 + \omega_{n-1}$ so its weight vectors are scaled by $1$ by the coroots $\alpha_3^\vee$ and $\alpha_{n-1}^\vee$ (fundamental weights are a dual basis to the coroots) but these are not $E_{33}$ and $E_{(n-1)(n-1)}$ . Rather, the coroots in $\mathfrak{sl}_n$ look like $\alpha_k^\vee = E_{kk} - E_{(k+1)(k+1)}$ . You should think of them as generalising the element $h = \begin{pmatrix}1&0\\0&-1\end{pmatrix} \in \mathfrak{sl}_2$ Let $v=(e_1 \otimes e_2 \otimes e_3 - e_3 \otimes e_1 \otimes e_2) \otimes e_n^*$ Then $E_{11}v = E_{22}v = E_{33}v = v$ so that $\alpha_1^\vee v = (E_{11} - E_{22})v = 0$ and $\alpha_2^\vee v = (E_{22} - E_{33})v = 0$ . Then $E_{kk}v = 0$ for $3 < k < n$ , so $\alpha_3^\vee v = v$ and $\alpha_k^\vee v = 0$ for $3 < k < n-1$ . Finally, $E_{nn}v = -v$ so that $\alpha_{n-1}^\vee v = (E_{(n-1)(n-1)} - E_{nn})v = v$ . To conclude, $\alpha_k^\vee v = v$ if $k=3,n-1$ and $\alpha_k^\vee v = 0$ otherwise, which is exactly the weight $\omega_3 + \omega_{n-1} = (0,0,1,0,\dots,0,1)$.
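These coroot computations can be machine-checked for a small case. A pure-Python sketch for $n = 5$ (the encoding of basis tensors as index tuples, and the sign convention $E_{ij}e_d^* = -\delta_{id}e_j^*$ on the dual factor, are the standard ones and are my assumptions here):

```python
# Verify the coroot action on v = (e1⊗e2⊗e3 − e3⊗e1⊗e2) ⊗ e5* for n = 5.
# Basis tensors are tuples (a, b, c, d): first three entries index the V
# factors, the last indexes the dual factor.
n = 5
v = {(1, 2, 3, n): 1, (3, 1, 2, n): -1}

def act_E(i, j, vec):
    """Action of E_ij: E_ij e_a = δ_ja e_i on V, and E_ij e_d* = -δ_id e_j* on V*."""
    out = {}
    for (a, b, c, d), coef in vec.items():
        for slot, val in enumerate((a, b, c)):
            if val == j:                      # E_ij hits this tensor factor
                t = [a, b, c, d]
                t[slot] = i
                t = tuple(t)
                out[t] = out.get(t, 0) + coef
        if d == i:                            # dual factor picks up a minus sign
            t = (a, b, c, j)
            out[t] = out.get(t, 0) - coef
    return {key: c for key, c in out.items() if c != 0}

def coroot(k, vec):
    """alpha_k^vee = E_kk − E_(k+1)(k+1)."""
    plus, minus = act_E(k, k, vec), act_E(k + 1, k + 1, vec)
    out = dict(plus)
    for t, c in minus.items():
        out[t] = out.get(t, 0) - c
    return {key: c for key, c in out.items() if c != 0}

scalars = []
for k in range(1, n):
    w = coroot(k, v)
    scalars.append(1 if w == v else 0 if w == {} else None)
print(scalars)   # [0, 0, 1, 1]: the weight ω3 + ω(n−1), as computed above
```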
|representation-theory|lie-groups|lie-algebras|semisimple-lie-algebras|irreducible-representation|
1
Probability Theory Bunny Question
Each bunny is individually ready for the big mission with probability p. If I have n bunnies and need k for the mission, find the expected number of k-large squads I can form with bunnies ready for the mission. I said that for binomial distribution we have $$P(X=x) = \binom{n}{x}p^{x}(1-p)^{n-x}$$ So then $E(X) = x*P(X=x)$ , professor said this isn't right so I was wondering what I did wrong?
The existing answers and comments are unnecessarily complicated. There are $\binom nk$ groups of $k$ bunnies, and for each such group the probability that its bunnies are all ready for the mission is $p^k$ (assuming that by “individually” you mean “independently”). Thus, by linearity of expectation the expected number of groups of $k$ bunnies that are ready is $\binom nkp^k$ .
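The linearity-of-expectation answer can be sanity-checked by exact brute force over all readiness patterns. A pure-Python sketch (the values $n=6$, $k=3$, $p=0.4$ are arbitrary test choices):

```python
# E[number of all-ready k-squads] should equal C(n,k) p^k, assuming independence.
from math import comb
from itertools import product

def expected_squads(n, k, p):
    total = 0.0
    for pattern in product((0, 1), repeat=n):     # all 2^n readiness patterns
        ready = sum(pattern)
        prob = p ** ready * (1 - p) ** (n - ready)
        total += prob * comb(ready, k)            # k-squads formable from the ready ones
    return total

n, k, p = 6, 3, 0.4
assert abs(expected_squads(n, k, p) - comb(n, k) * p ** k) < 1e-12
```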
|probability|
0
Proof that $int(A)=\mathbb{C}-\overline{A^c}$
I want to prove that $int(A)=\mathbb{C}-\overline{A^c}$ , I know that this is equivalent to prove that $$Int(A)=\overline{A^c}^c$$ So let's try to prove the first inclusion. let $x \in int(A)$ , as $int(A) \subseteq A \subseteq \mathbb{C}$ then naturally $x \in \mathbb{C}$ , and there exists $r >0$ s.t. $$B_{r}(x) \subseteq A$$ so then $B_{r}(x) \cap A ^c = \emptyset$ . Finally $x \in \mathbb{C}-\overline{A^c}$ . The first inclusion seems clear, but the second inclusion is not so clear because I cannot guarantee that for some $\varepsilon >0$ $B_{\varepsilon}(x) \subseteq A$ Any suggestions on how to complete the exercise, I appreciate it!
If $x\in\mathbb{C}\setminus \overline{A^c}$ , then $x\notin \overline{A^{c}}$ . Therefore, there exists $r\gt 0$ such that $B_r(x)\cap A^c=\varnothing$ (since $\overline{X}=\{x\mid \text{ for all }r\gt 0, B_r(x)\cap X\neq\varnothing\}$ ). Therefore, $B_r(x)\subseteq (A^c)^c = A$ , as desired. This of course works in any topological space. If $x\in X\setminus\overline{A^c}$ , then $x\notin\overline{A^c}$ , so there exists an open set $U$ such that $x\in U$ and $U\cap A^c=\varnothing$ . Since $U\cap A^c=\varnothing$ , then $U\subseteq (A^c)^c = A$ , so $x\in U\subseteq A$ , showing that $x\in\mathrm{int}(A)$ . Conversely, if $x\in \mathrm{int}(A)$ , then there exists an open set $V$ such that $x\in V\subseteq A$ , and therefore $V\cap A^c=\varnothing$ , showing that $x\notin\overline{A^c}$ .
|general-topology|
1
Regular conditional distribution of $Y$ given $X=x$ in Klenke's book
In his book "Probability theory", Klenke uses the following definition of transition kernel: and if in $ii)$ the measure is a probability measure for all $\omega_1$ then $K$ is called a stochastic kernel. Later on he defines regular conditional distribution of $Y$ given $\mathcal F$ as follows: After the last line in the image above, he added "(the function from the factorization lemma with an arbitrary value for $x \notin X(\Omega)$ ) is called a regular conditional distribution of $Y$ given X.", which perplexes me a lot. My attempt to understand this line: First, I think he forgot to assume that the map $ \omega \mapsto K_{Y|\sigma(X)}(\omega, B)$ should be $\mathcal F$ -measurable for any fixed $B \in \mathcal E$ . Next, I assume the previous statement holds and fix one $B \in \mathcal E$ . Since $K_{Y|\sigma(X)}(\cdot, B)$ is a version of $ P(Y \in B | \sigma(X)) $ and is $\sigma(X)$ -measurable too, by factorization theorem, there is a measurable function $\kappa(x, B)$ from $(E',
Amazingly, in an attempt to find a rectification, I stumbled upon the book "Conditional measures and applications" by M. M. Rao. It turns out that my approach to fix the problem with an $N \in E'$ such that $P(X\in N) = 1$ and $N \subset X(\Omega)$ is called perfectness (of the measure space $(\Omega, \mathcal A)$ ) in the literature (which I knew nothing about before). See page 133 of the book mentioned above. Also, the assumption that $X(\Omega)$ is measurable turns out to be a crucial assumption in Doob's theorem too, see page 129 of the same book. Conclusion: there is no way to fix the issue unless we make these assumptions!
|probability-theory|measure-theory|conditional-probability|
0
$\{x \in V : Tx = c\} \neq \emptyset$ if and only if $\{x \in V : Tx = c\} = v + \text{null} \ T$
Exercise. Suppose $T \in \mathcal{L}(V,W)$ and $c \in W$ . Prove that $\{x \in V : Tx = c\}$ is either the empty set or is a translate of $\text{null} \ T$ . Source. Linear Algebra Done Right, Sheldon Axler, 4th edition, Exercises 3E, first part of problem 8. Notation. $\mathcal{L}(V,W)$ is the set of all linear maps from $V$ to $W$ . $v + \text{null} \ T$ is the set $\{v + n: n \in \text{null} \ T\}$ . What I've tried. My strategy is to show $$ \{x \in V : Tx = c\} \neq \emptyset \iff \{x \in V : Tx = c\} = v + \text{null} \ T $$ I first prove the forward direction. Suppose $$\{x \in V : Tx = c\} \neq \emptyset$$ Then $\exists \ v \in \{x \in V : Tx = c\}$ such that $Tv = c$ . Consider any $n \in \text{null} \ T$ . We must have $Tn = 0$ . Adding $Tn$ to both sides of $Tv = c$ : \begin{align} Tv + Tn &= c + 0 = c \implies \\ T(v + n) &= c \end{align} This implies $$(v + n) \in \{x \in V : Tx = c\}$$ But $(v + n) \in v + \text{null} \ T$ by definition of $v + \text{null} \ T$ . Question
I do not agree with your strategy to show $$ \{x \in V : Tx = c\} \neq \emptyset \iff \{x \in V : Tx = c\} = v + \text{null} \ T, $$ because the book asks to prove that $$ \{x \in V : Tx = c\}=\emptyset \ \ \text{or} \ \ \{x \in V : Tx = c\} = v + \text{null} \ T. $$ Since $(P\lor Q)$ and $(\lnot P\Rightarrow Q)$ are logically equivalent, we can prove that $$ \{x \in V : Tx = c\}\neq\emptyset \ \ \implies \ \ \{x \in V : Tx = c\} = v + \text{null} \ T. $$ In other words, we assume $$ \{x \in V : Tx = c\}\neq\emptyset $$ and prove that $$\{x \in V : Tx = c\}\subseteq v + \text{null} \ T \ \ \text{and} \ \ v+\text{null} \ T\subseteq\{x \in V : Tx = c\}.$$ Let $w\in\{x \in V : Tx = c\}$ be arbitrary. Then $$Tw=c. \ \ \ \ (*)$$ Since $\{x \in V : Tx = c\}\neq\emptyset$ , then there exists $v\in\{x \in V : Tx = c\}$ such that $$Tv=c. \ \ \ \ (**)$$ Substituting $(**)$ into $(*)$ , we have $Tw=Tv$ . Thus, $$T(w-v)=0\implies w-v\in\operatorname{null}T\implies w=v+n\text{ for some }n\in\operatorname{null}T.$$
|linear-algebra|solution-verification|linear-transformations|
0
If the area integrals of a function are zero, is it in the image of the Laplacian?
I know very little about analysis or PDEs, so apologies if this is an elementary question. Let $g \in C^0(\mathbf{R}^2, \mathbf{R})$ and suppose that there exists $f \in C^2(\mathbf{R}^2, \mathbf{R})$ with $\Delta f = g$ . Here $\Delta = \partial^2/\partial x^2 + \partial^2/\partial y^2$ is the Laplacian on $\mathbf{R}^2$ . If $\Omega \subseteq \mathbf{R}^2$ is compact with piecewise $C^\infty$ boundary $\partial \Omega$ then \begin{align} \int_\Omega g = \int_\Omega \nabla \cdot \nabla f = \int_{\partial \Omega} \nabla f = 0, \end{align} by the divergence theorem (unless I am mistaken). If $g \in C^0(\mathbf{R}^2, \mathbf{R})$ with \begin{align} \int_\Omega g = 0 \end{align} for all compact $\Omega \subseteq \mathbf{R}^2$ with piecewise $C^\infty$ boundary, does there exist $f \in C^2(\mathbf{R}^2, \mathbf{R})$ with $\Delta f = g$ ?
Suppose $g = \Delta f$ . It is certainly okay to say \begin{align} \int_\Omega g = \int_{\partial \Omega} \nabla f \cdot \hat{n} \end{align} where $\hat{n}$ is a smooth normal to $\partial \Omega$ , but the latter integral is not necessarily zero. This is because this is a flux integral not a line integral. We do have \begin{align} \int_{\partial \Omega} \nabla f \cdot \hat{t} = 0 \end{align} by the fundamental theorem for line integrals, but here $\hat{t}$ is a smooth tangent to the boundary. In particular, functions $g$ in the image of the Laplacian do not satisfy \begin{align} \int_\Omega g = 0 \end{align} for all compact $\Omega$ unless they are zero.
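The flux/line-integral distinction is easy to see concretely. A small pure-Python sketch with $f = x^2 + y^2$ on the unit disk, so $g = \Delta f = 4$ (the example function is my own choice, not from the question): the flux of $\nabla f$ through the boundary equals $\int_\Omega g = 4\pi \neq 0$, while the tangential line integral vanishes.

```python
import math

# f(x, y) = x^2 + y^2, so ∇f = (2x, 2y) and g = Δf = 4 on the unit disk Ω.
N = 20000
flux = tangential = 0.0
ds = 2 * math.pi / N                       # arc-length element on the unit circle
for i in range(N):
    t = 2 * math.pi * (i + 0.5) / N
    x, y = math.cos(t), math.sin(t)        # boundary point; outward normal is (x, y)
    gx, gy = 2 * x, 2 * y                  # gradient of f at the boundary
    flux += (gx * x + gy * y) * ds         # ∇f · n̂  (the divergence-theorem side)
    tangential += (-gx * y + gy * x) * ds  # ∇f · t̂  (the line integral that IS zero)

assert abs(flux - 4 * math.pi) < 1e-9      # equals ∫_Ω g = 4 · area(Ω) = 4π ≠ 0
assert abs(tangential) < 1e-9
```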
|integration|partial-differential-equations|
1
Question about Neighborhoods from Ahlfors
A passage in Ahlfors's Complex Analysis, under Chapter 3, Compactness, states: I am new to topology, but here's my understanding: If a neighborhood isn't "intersecting" or doesn't have a point belonging to $X$ we can ignore it, and if it does, we replace that neighborhood with a $2\epsilon$ -neighborhood so that the neighborhood now meets $X$ . I also understand that the finite covering is there to provide "definiteness" to the open covering of $X$ . However, immediately a red flag arose. Question This building of finite covers, and "filtering", saying "Yea" or "Nay", doesn't sit right. Is there a formalism or theorem that allows us to do, in a sense, this ambiguous "filtering", and to replace an $\epsilon$ -neighborhood with a $2\varepsilon$ -neighborhood, to achieve a finite sub-covering?
This is precisely the definition of open covering compactness: $X$ is compact in this sense iff for every open covering, there exists a finite subcovering. For metric spaces (and even under less strict assumptions about the topology), this is equivalent to the convergent subsequence definition of compactness. The replacement of $\varepsilon$ -neighborhoods with $2\varepsilon$ -neighborhoods just guarantees that the centers are contained within $X$ . The triangle inequality guarantees the existence of such replacement neighborhoods that still cover $X$ . After replacement, the definition of compactness is used to reduce the number of neighborhoods needed to finitely many.
|general-topology|
1
limit of sequence $\{a_n\}$ where $a_n = n!/c^n$, as $n$ tends to infinity
I was wondering if there was a general way to find (or show that the sequence diverges) $$ \lim_{n \to \infty} \{a_n\}$$ Where $a_n$ is of the form $\frac{n!}{c^n}$ , $c$ being a non-zero natural number. I think that I was able to find an intuitive answer following the reasoning below. However, I would like to know if there is a more rigorous way to go about it. We can show that past a certain point, the sequence is growing if $\frac{a_{n+1}}{a_n} \geq 1$ for all $n$ greater than some value: $$ \begin{align} \frac{a_{n+1}}{a_n} &\geq 1 \\\\ \frac{(n+1)!}{c^{(n+1)}}\cdot\frac{c^n}{n!} &\geq 1 \\\\ \frac{n+1}{c} &\geq 1 \\\\ n &\geq c-1 \end{align} $$ So, after $c-1$ terms, the sequence will start growing. Also, because of the definition of factorials, at the $c$th term, both the numerator and denominator are going to be multiplied by $c$ . The next term will then be multiplied by $\frac{c+1}{c}$ , etc. In other words, after $c$ terms, the numerator will start growing faster than the denominator.
I’d say that the approach you are taking is correct. One way to maybe make it more clear could be the following. We’d like to prove that $a_n\to \infty$ . Then, for $n \geq c$ , we have $$\frac{n!}{c^n} = \frac{c!}{c^c} \frac{(c+1)\cdots n}{c^{n-c}} \geq \frac{c!}{c^c} \left( \frac{c+1}{c} \right)^{n-c}$$ since each of the $n-c$ factors $\frac{c+1}{c}, \frac{c+2}{c}, \dots, \frac{n}{c}$ is at least $\frac{c+1}{c}$ . On the right hand side, the first term is a constant, and the second is the power of a number greater than one. Therefore, as $n \to \infty$ , the sequence must also tend to infinity. Your answer for the bonus question is correct.
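The bound can be tested numerically; a pure-Python sketch with $c = 5$ (an arbitrary choice):

```python
# Check n!/c^n ≥ (c!/c^c) · ((c+1)/c)^(n−c) for n ≥ c: each factor m/c in the
# product (c+1)(c+2)···n / c^(n−c) is at least (c+1)/c > 1.
from math import factorial

c = 5
for n in range(c, 60):
    a_n = factorial(n) / c ** n
    bound = (factorial(c) / c ** c) * ((c + 1) / c) ** (n - c)
    assert a_n >= bound * (1 - 1e-12)      # tiny slack for floating point
assert factorial(200) / c ** 200 > 1e100   # and indeed a_n → ∞
```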
|sequences-and-series|convergence-divergence|factorial|
1
Can I determine the angles of a quadrilateral if I know the lengths of the sides and the difference between the diagonals?
I know the lengths of the four sides of a quadrilateral and the difference between the diagonals (but I do not know the actual lengths of the diagonals). My instinct is that this information ought to be sufficient to determine the angles of the quadrilateral, because a specific difference between the diagonals constrains it to a single, fixed shape. example measurements: L side length = 326mm; R side length = 325mm; bottom length = 677mm; top length = 675mm; diagonal from bottom L to top R is 7mm longer than from bottom R to top L
Let's call the vertices of the quadrilateral $A,B,C,D$ in counterclockwise direction. Let $a = AB , b = BC, c = CD , d = DA $ . We can let $A = (0,0)$ on the cartesian plane, and $B = (a, 0) $ . Further, let $C = (x_1, y_1) $ and $D = (x_2, y_2)$ . Now we have $ b^2 = (x_1 - a)^2 + y_1^2 $ $ c^2 = (x_1 - x_2)^2 + (y_1 - y_2)^2$ $ d^2 = x_2^2 + y_2^2$ And if $e = AC - BD$ , then $ e = \sqrt{ x_1^2 + y_1^2 } - \sqrt{ (x_2 - a)^2 + y_2^2 } $ we now have $ x_1^2 + y_1^2 = b^2 - a^2 + 2 a x_1$ $ (x_2 - a)^2 + y_2^2 = d^2 + a^2 - 2 a x_2 $ Hence, $ e = \sqrt{ b^2 - a^2 + 2 a x_1 } - \sqrt{ d^2 + a^2 - 2 a x_2 } $ Squaring, $ e^2 = b^2 + d^2 + 2 a (x_1 - x_2) - 2 \sqrt{ (b^2 - a^2 + 2 a x_1)(d^2 + a^2 - 2 a x_2 ) } $ So that $ \bigg( e^2 - b^2 - d^2 - 2 a (x_1 - x_2) \bigg)^2 = 4 \bigg(b^2 - a^2 + 2 a x_1\bigg) \bigg(d^2 + a^2 - 2 a x_2 \bigg)$ So now we have four quadratic equations in $4$ unknowns $x_1,y_1, x_2, y_2$ . The actual dimension of the problem can be reduced to $2$ utilizing
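A numeric sketch for the asker's measurements (the single-angle sweep and the bracketing interval are my own choices, not part of the answer above): place $A=(0,0)$, $B=(677,0)$, let $\theta$ be the angle of side $AD$, recover $C$ as an intersection of the circles $|BC|=325$ and $|DC|=675$, and bisect on $e(\theta) = AC - BD = 7$.

```python
import math

AB, BC, CD, DA, diff = 677.0, 325.0, 675.0, 326.0, 7.0
A, B = (0.0, 0.0), (AB, 0.0)

def shape(theta):
    D = (DA * math.cos(theta), DA * math.sin(theta))
    dx, dy = B[0] - D[0], B[1] - D[1]
    dist = math.hypot(dx, dy)                     # |BD|
    a = (dist**2 + CD**2 - BC**2) / (2 * dist)    # two-circle intersection
    h = math.sqrt(CD**2 - a**2)
    px, py = D[0] + a * dx / dist, D[1] + a * dy / dist
    C = (px - h * dy / dist, py + h * dx / dist)  # take the upper intersection
    e = math.hypot(*C) - dist                     # AC − BD
    return C, D, e

lo, hi = math.radians(85), math.radians(90)       # e(85°) > 7 > e(90°)
for _ in range(80):
    mid = (lo + hi) / 2
    if shape(mid)[2] > diff:
        lo = mid
    else:
        hi = mid

C, D, e = shape((lo + hi) / 2)
assert abs(e - diff) < 1e-6                       # diagonal difference matched
assert abs(math.dist(B, C) - BC) < 1e-6 and abs(math.dist(C, D) - CD) < 1e-6
```

So at least for these measurements the shape is pinned down numerically, supporting the asker's instinct.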
|geometry|
0
Elliptic curves - finding which primes $p$ guarantee there are $x,y\in\mathbb{Z}_p$ on the curve
For which primes $p$ do there exist $x,y\in\mathbb{Z}_p$ such that $3y^2=4x^3-10?$ This is a question in an elliptic curves course so I assume we want to transform this into an elliptic curve! My idea is that we want to birationally transform this curve into one with a Weierstrass equation, i.e. $y^2=x^3+Ax+B$ . To get rid of the coefficient of $3$ before the $y^2$ term, we can make the mapping $(x,y)\mapsto (x,y/\sqrt{3})$ which gives $Y^2=4x^3-10$ . Now we want to get rid of the coefficient before $x^3$ so we can make the overall transformation $(x,y)\mapsto\left(\frac{x}{\sqrt[3]{4}},\frac{y}{\sqrt{3}}\right)$ . This will then give \begin{equation}Y^2=X^3-10.\end{equation} So we have now recast the problem into finding the primes $p$ such that there are $X,Y\in\mathbb{Z}_p$ with $Y^2=X^3-10$ . However, it is not clear to me that the transformation I made is actually going to be useful, nor can I figure out how, even if it is a good idea, I can then go on to solve the problem. A naive
This is too long for a comment, so it became an answer. To bring the given equation in the short Weierstraß form over the rationals, we can follow the steps: start with $3y^2 = 4x^3-10$ , in order to obtain a square on the L.H.S. we multiply everything with (the cube) $3^3$ , so we can group $(3^2y)^2 = 4(3x)^3 - 3^3\cdot 10$ , and now in order to obtain a cube in the main term on the R.H.S. we divide by (the square) $4$ on both sides, thus introducing denominators, or multiply everything with (the square) $4^2$ , so we can group $(4\cdot 3^2y)^2 = (4\cdot 3x)^3 - 4^2\cdot 3^3\cdot 10$ . Substitute now the number under the square on the L.H.S. by $Y$ , the number under the cube on the R.H.S. by $X$ , so we obtain an equation of an equivalent curve: $$ (E)\ :\qquad Y^2 = X^3 - 4320\ . $$ Which is showing only the affine piece $(X,Y)$ of points $[X:Y:Z]$ in the $2D$ projective space, where $Z\ne 0$ . Note that for a fixed prime $p$ a projective point $[X:Y:Z]$ can be rearranged up to a
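For small primes the equivalence of solvability can be brute-forced. A pure-Python sketch: the substitution $Y = 36y$, $X = 12x$ only multiplies the equation by $4^2\cdot 3^3 = 432$, which is invertible mod $p$ exactly when $p \nmid 6$, so existence of affine solutions matches for such $p$.

```python
# Existence of points on 3y^2 = 4x^3 − 10 (mod p) should match existence on the
# transformed curve Y^2 = X^3 − 4320 (mod p) for every prime p not dividing 6.
def has_point_original(p):
    return any((3 * y * y - 4 * x**3 + 10) % p == 0 for x in range(p) for y in range(p))

def has_point_transformed(p):
    return any((y * y - x**3 + 4320) % p == 0 for x in range(p) for y in range(p))

for p in [5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]:
    assert has_point_original(p) == has_point_transformed(p)
```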
|number-theory|solution-verification|elliptic-curves|algebraic-curves|local-field|
1
Triple Integral Reiteration
I have the following triple integral: $$ I = \int_0^1 dz \int_z^1 dx \int_0^{x-z} f(x, y, z) \ dy $$ I want to reiterate the integral in such a way that the integrations are performed in the following order: first $z$, then $y$, then $x$, and sketch the corresponding region in space. I am a little bit new to changing the order of integration in triple integrals, so I am unsure how the boundaries change and how to sketch the region correctly. I would appreciate any kind of help. Thank you in advance.
As we can see we want to change the order of integration of the integral: $$ I\ = \int \int \int \ f(x,y,z) \ dy\ dx\ dz $$ We want to change it into the next order: $$ I\ = \int \int \int \ f(x,y,z) \ dz\ dy\ dx $$ I will use algebra to write the iteration of the integral with the changed integration order. More exactly, I will use inequalities. Now we see from the given iteration: $$ I = \int_0^1 dz \int_z^1 dx \int_0^{x-z} f(x, y, z) \ dy $$ We can write three sets of inequalities satisfied by the outer variable $z$ , the middle variable $x$ , and the inner variable $y$ . We write these in order as follows: $$ 0 \leq \ z \ \leq \ 1 $$ $$ z \leq \ x \ \leq \ 1 $$ $$ \ \ \ \ \ \ \ 0 \leq \ y \ \leq \ x - z $$ Note that the limits for each variable can be constant or can depend only on variables whose inequalities are on lines above the line for that variable. (In this case, the limits for $z$ must both be constant, those for $x$ can depend on $z$ , and those for $y$ can depend on both $x$ and $z$ .)
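Solving the same three inequalities for $z$ first gives (my reading of where this derivation is headed) $0 \leq x \leq 1$, $0 \leq y \leq x$, $0 \leq z \leq x - y$, i.e. $I = \int_0^1 dx \int_0^x dy \int_0^{x-y} f \, dz$. That both inequality systems carve out the same region can be checked pointwise on random samples:

```python
import random

# Original region: 0 ≤ z ≤ 1, z ≤ x ≤ 1, 0 ≤ y ≤ x − z.
# Reordered (dz innermost): 0 ≤ x ≤ 1, 0 ≤ y ≤ x, 0 ≤ z ≤ x − y.
random.seed(1)
for _ in range(100000):
    x, y, z = random.random(), random.random(), random.random()
    original = 0 <= z <= 1 and z <= x <= 1 and 0 <= y <= x - z
    reordered = 0 <= x <= 1 and 0 <= y <= x and 0 <= z <= x - y
    assert original == reordered
```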
|integration|analysis|graphing-functions|
0
Determining constants $a,b,c,d \in \mathbb{R}$ to approximate the second derivative by finite differences
given that $a,b,c,d \in \mathbb{R}$ and $u\in\mathbb{R}^4$ , how can I find $a,b,c,d$ such that for a fixed $x\in \mathbb{R}$ $$e_h:=\Bigg |\frac{au(x)+bu(x+h)+cu(x+2h)+du(x+3h)}{h^2}-u''(x)\Bigg |=O(h^2)$$ I've tried to use the MVT and Taylor expansions of $u(x+h),u(x+2h), u(x+3h)$ , but I don't know how I am supposed to show that it is equal to $O(h^2)$ ? What I tried: I found the Taylor expansions up to order $4$ to get a system of equations for $a,b,c,d$ . But to show that it is $O(h^2)$ do I try to construct a $u_h$ , since $u_h-u=O(h^\alpha)$ so there exists a constant $c>0$ such that $$|u_h-u|\leq ch^\alpha$$ Is this on the right track?
The classic way to think of this: say you have $n$ given sample points some $h$ -scaled distance away from a fixed point $x$ written as $x + r_ih$ for $i = 1, \ldots, n$ . Our goal is to find an approximation of $u^{(k)}(x)$ for some $k \geq 0$ by using a linear combination of the form $\sum_{i=1}^na_iu(x+ r_ih)$ ; we will need to determine the coefficients $a_i$ for $i = 1, \ldots, n$ and the order of the approximation. Like you said we will use the Taylor expansion to get: $$ u(x+ r_ih) = \sum_{j=0}^\infty \frac{r_i^jh^j}{j!}u^{(j)}(x) $$ $$\implies u^{(k)}(x) = \sum_{i=1}^na_iu(x+ r_ih) = \sum_{i=1}^na_i\sum_{j=0}^\infty \frac{r_i^jh^j}{j!}u^{(j)}(x) = \sum_{j=0}^\infty\left(\frac{h^j}{j!}\sum_{i=1}^nr_i^ja_i\right)u^{(j)}(x) $$ By comparing coefficients on both sides, this means that we need $\sum_{i=1}^nr_i^ja_i = 0$ for $j \neq k$ and $\sum_{i=1}^nr_i^ka_i = \frac{k!}{h^k}$ , or writing $R$ as the Vandermonde matrix with entries $r_{ij} := r_j^i$ this is the linear system $R^T\ve
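For the concrete stencil in the question (offsets $r = (0,1,2,3)$, $k = 2$) the system can be solved exactly. A pure-Python sketch with rational arithmetic (the Gauss–Jordan code is mine, not from the answer):

```python
# Require Σ a_i r_i^j = 0 for j = 0, 1, 3 and Σ a_i r_i^2 = 2! = 2, so the
# combination (Σ a_i u(x + r_i h))/h^2 matches u''(x) up to O(h^2).
from fractions import Fraction

r = [0, 1, 2, 3]
# Rows indexed by j, columns by i; augmented with the right-hand side.
M = [[Fraction(ri) ** j for ri in r] + [Fraction(2 if j == 2 else 0)]
     for j in range(4)]

for col in range(4):                          # Gauss–Jordan with row pivoting
    piv = next(row for row in range(col, 4) if M[row][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    M[col] = [v / M[col][col] for v in M[col]]
    for row in range(4):
        if row != col and M[row][col] != 0:
            M[row] = [v - M[row][col] * w for v, w in zip(M[row], M[col])]

coeffs = [int(M[i][4]) for i in range(4)]
print(coeffs)   # [2, -5, 4, -1]
assert coeffs == [2, -5, 4, -1]
```

That is the standard one-sided second-order stencil $u''(x) \approx \big(2u(x) - 5u(x+h) + 4u(x+2h) - u(x+3h)\big)/h^2$.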
|real-analysis|partial-differential-equations|taylor-expansion|
0
Show that $\mathcal{B}^s$ is the smallest $\sigma$-field
Let $\mathcal{B}$ be the Borel $\sigma$ -field on $\mathbb{R}$ . We denote the reflection of a set B to be $B^-:=\{x\in \mathbb{R}: -x\in B\}$ . Let $\mathcal{B}^s$ be the collection of symmetric Borel sets, i.e., $ \mathcal{B}^s:=\{B\cup B^-: B\in \mathcal{B}\}$ . Show that $\mathcal{B}^s$ is the smallest $\sigma$ -field to make all continuous even functions from $(\mathbb{R}, \mathcal{B}^s)\to (\mathbb{R}, \mathcal{B})$ be measurable. My idea is that (1) I first try to show that $\mathcal{B}^s$ is a $\sigma$ -field. But I am stuck on the second point that for any $A\in \mathcal{B}^s$ , the complement $A^c$ is also in $\mathcal{B}^s$ . Since $A\in \mathcal{B}^s$ , then $A=B\cup B^-$ . Then $A^c=B^c\cap (B^-)^c$ . How to rewrite it as the union of two Borel sets? For the countable union, it is trivial. For $\{A_i\}_i\subset \mathcal{B}^s$ , $\cup_i A_i=\cup_i (B_i\cup B_i^-)\in \mathcal{B}^s$ because $\cup_i B_i\in \mathcal{B}$ and $\cup_i B_i^-\in \mathcal{B}$ . (2) I am stuck on how
Note that $\mathcal{B}^s$ is simply the collection of all symmetric Borel sets, i.e: $\mathcal{B}^s=\{B\in\mathcal{B}: x\in B \iff -x\in B\}$ Indeed, if $B$ is any Borel set then clearly $B\cup B^-$ is a symmetric Borel set. Conversely, if $C$ is a symmetric Borel set then $C=C^-$ , and so $C=C\cup C^-\in\mathcal{B}^s$ . So now what you basically have to show is that the set $A^c=B^c\cap (B^-)^c$ is both Borel and symmetric. This is almost trivial, so I'll leave it to you. As for the second part, first note that any continuous even function $f:\mathbb{R}\to\mathbb{R}$ is indeed measurable with respect to $\mathcal{B}^s$ . Now let $\mathcal{F}$ be the smallest sigma-field on $\mathbb{R}$ with this property. We want to show that $\mathcal{B}^s=\mathcal{F}$ . We know that $\mathcal{F}\subseteq\mathcal{B}^s$ , so in particular all sets in $\mathcal{F}$ are symmetric. We have to show the other inclusion. For this, first show that for any $a>0$ we have $(-a,a)\in\mathcal{F}$ , this is easy.
|real-analysis|
1
How to compute eigen values and eigen vectors for eigen value problem in terms of inner product?
My apologies if this is a silly question. I feel like I'm misunderstanding something basic and just need a quick clarification. I have an eigenvalue problem stated in terms of the inner product. More specifically, I have an operator $O: X \to Y$ , where $X$ and $Y$ are Hilbert spaces. I would like to find the largest $n$ eigenvalues $\lambda_i \in \mathbb{R}^+$ and corresponding $n$ eigenfunctions $\varphi_i \in X$ that satisfy $\langle O \varphi_i, Ov \rangle_Y = \lambda_i \langle \varphi_i, v \rangle_X$ for all $v \in X$ . The $O \varphi_i$ for $i = 1,...n$ will span the space that I want to find. I need to do this using the inner product matrices of $X$ and $Y$ (e.g for $X$ this means some matrix $I_X$ such that $\langle u, v \rangle_X = u^T I_X v$ ). My attempt so far is as follows I think I can represent $\langle O \varphi_i, Ov \rangle_Y = \lambda_i \langle \varphi_i, v \rangle_X$ as $M_{sol}^T I_Y M_{sol} = \lambda_i M_{input}^T I_X M_{input}$ where $M_{input}$ is a matrix whose
I think I now partially understand your question. Let me write it down in my words, and we'll see if that helps you. It's possible that I have made a mistake somewhere, I didn't know the problem before. Let $X$ and $Y$ be real Hilbert spaces and $O:X\rightarrow Y$ be a (continuous) linear operator. It should work similarly with complex Hilbert spaces, but you tagged real analysis, so maybe that's all you need. You're looking for $\varphi\in X$ and $\lambda\in\mathbb{R}^+$ such that $$\langle O \varphi, Ov \rangle_Y = \lambda \langle \varphi, v \rangle_X$$ for all $v\in X$ , and you're wondering whether this can be regarded as finding the eigenvalues and eigenvectors of some matrix or operator. High-level version Let us introduce the adjoint operator $O^*:Y\rightarrow X$ like here. Then our condition becomes $$\left\langle O^*O\varphi,v\right\rangle_X=\lambda\langle\varphi,v\rangle_X$$ for all $v\in X$ , or in other words $$\left\langle O^*O\varphi,\cdot\right\rangle_X=\langle\lambda\varphi,\cdot\rangle_X,$$ i.e. $O^*O\varphi=\lambda\varphi$ : an ordinary eigenvalue problem for the operator $O^*O$ .
|real-analysis|linear-algebra|functional-analysis|normed-spaces|inner-products|
1
Closed form solution to matrix equation
I'm trying to solve the following matrix equation for $L$ : $$A\cdot L + L^T = 0$$ where A is non-singular and both $A,L \in \mathbb{R}^{n\times n}$ . I'm wondering if this could have a closed-form solution. This seems close to Sylvester's equation but not quite the same.
This is the Sylvester-transpose matrix equation, sometimes called the T-Sylvester equation, analyzed in The solution of the equation $AX + X^* B = 0$ by De Terán and Dopico. If you want to implement a solver yourself, an easier approach is to reduce it to the continuous Lyapunov equation; a derivation is in this mathematica.SE post . There may be an infinite number of solutions to the continuous Lyapunov equation, but if we restrict attention to the least-squares solution, it has an elegant closed form solution in terms of eigenvectors of $A$ , proved here by user1551
|matrices|matrix-equations|matrix-calculus|
0
How can you construct a right triangle, ABC with a right angle in C, if you are given the hypotenuse, c, and the altitude of the point C?
How can you construct a right triangle, $\triangle ABC$ with a right angle in $C$ , if you are given the hypotenuse, $c$ , and the altitude of the point $C$ ? I know it's very basic but I just can't seem to figure this out - I've tried everything I can think of, started from both $h_c$ and $c$ - but I keep getting stuck. Apologies for the poor formatting, it's my first post here. I'm sorry for asking such a basic question. Thank you in advance. EDIT: Thank you so much for the help! I know the guidelines about schoolwork questions, but I'm really struggling with construction so I thought I'd give it a shot. Again, I'm very grateful for the help, thank you Narasimham and Michael Burr for answering/commenting! You really did help! All the best!
Using a compass, draw a semicircle with $AB$ as diameter (so $AB = c = 2r$), and then draw a straight line parallel to $AB$ at the given altitude $h_c$. It meets the semicircle in up to two points; label them $C_1$ and $C_2$. Join $AC_1, BC_1$ (or $AC_2, BC_2$): the angle at $C_1$ (or $C_2$) is right because it is inscribed in a semicircle, so there are two solution constructions.
|triangles|geometric-construction|
1
Integral $\int_0^1\frac{\arcsin^3 x}{x^2}\text{d}x=6\pi G-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)$
Show that: $$\int_0^1\frac{\arcsin^3 x}{x^2}\text{d}x=6\pi G-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)$$ I evaluated this by some Fourier series. Is there any other method? Start with substitution of $$u=\arcsin x$$ Then we have to integrate $$\int_0^{\frac{\pi}{2}}\frac{u^3\cos u}{\sin^2 u}\text{d}u=-\int_0^{\frac{\pi}{2}}u^3\,\text{d}(\csc u)$$ Since $$\int\csc u\text{d}u=\ln (\csc u-\cot u)=\ln \left(\frac{1-\cos u}{\sin u}\right)=\ln 2+2\ln \left(\sin \frac{u}{2}\right)-\ln \sin u$$ Thus $$\int_0^{\frac{\pi}{2}}u^2\csc u\text{d}u=\int_0^{\frac{\pi}{2}}u^2\,\text{d}\left(2\ln \sin \frac{u}{2}-\ln \sin u\right)$$ $$=-\frac{\pi^2}{4}\ln 2-2\int_0^{\frac{\pi}{2}}u\left(2\ln \sin \frac{u}{2}-\ln \sin u\right)$$ $$=-\frac{\pi^2}{4}\ln 2-4\int_0^{\frac{\pi}{2}}u\ln \sin \frac{u}{2}\text{d}u+2\int_0^{\frac{\pi}{2}}u\ln \sin u\text{d}u$$ $$=-\frac{\pi^2}{4}\ln 2+4\int_0^{\frac{\pi}{2}}u\left[\ln 2+\sum_{n=1}^{\infty}\frac{\cos nu}{n}\right]\text{d}u-\int_0^{\frac{\pi}{2}}u^2\cot u\text{d}u$$ $$=\frac
Letting $\arcsin x \mapsto x$ yields $$\begin{aligned} & I=\int_0^{\frac{\pi}{2}} x^3 \cot x \csc x d x=-\int_0^{\frac{\pi}{2}} x^3 d(\csc x) \\ & = \underbrace{ -\left[x^3 \csc x\right]_0^{\frac{\pi}{2}}}_{-\frac{\pi^3}{8}} +3 \int_0^{\frac{\pi}{2}} x^2 \csc x d x \end{aligned}$$ $$ \begin{aligned} \int_{0}^{\frac{\pi}{2}} x^{2}\csc x d x =& \int_{0}^{\frac{\pi}{2}} x^{2} d\left[\ln \left(\tan \frac{x}{2}\right)\right] \\ =& {\left[x^{2} \ln \left(\tan \frac{x}{2}\right)\right]_{0}^{\frac{\pi}{2}}-2 \int_{0}^{\frac{\pi}{2}} x \ln \left(\tan \frac{x}{2}\right) d x } \\ =&-8 \int_{0}^{\frac{\pi}{4}} y \ln (\tan y) d y \textrm{, where }x=2y\\ =&-8\left[-\frac{\pi}{4} G+\frac{7}{16} \zeta(3)\right] \\ =& 2 \pi G-\frac{7}{2} \zeta(3) \end{aligned} $$ where $ \int_{0}^{\frac{\pi}{4}} x \ln (\tan x) d x= -\frac{\pi}{4} G+\frac{7}{16} \zeta(3) $ from my post . Now we can conclude that $$I= -\frac{\pi^3}{8}+6 \pi G-\frac{21}{2} \zeta(3)$$
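A numeric cross-check of the closed form (pure Python; Catalan's constant $G$ and $\zeta(3)$ are hard-coded to double precision). After $u = \arcsin x$ the integrand $u^3\cos u/\sin^2 u$ is smooth on $[0,\pi/2]$, so composite Simpson converges fast:

```python
import math

def f(u):
    # u^3 cos u / sin^2 u, extended by its limit 0 at u = 0
    return u**3 * math.cos(u) / math.sin(u) ** 2 if u > 0 else 0.0

N = 2000                                  # even number of Simpson subintervals
h = (math.pi / 2) / N
I = f(0.0) + f(math.pi / 2)
I += sum((4 if i % 2 else 2) * f(i * h) for i in range(1, N))
I *= h / 3

G = 0.9159655941772190      # Catalan's constant
zeta3 = 1.2020569031595943  # Apéry's constant ζ(3)
closed_form = 6 * math.pi * G - math.pi**3 / 8 - 21 / 2 * zeta3
assert abs(I - closed_form) < 1e-9
```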
|real-analysis|calculus|integration|fourier-analysis|trigonometric-integrals|
0
How is a morphism different from a function
How is a morphism (from category theory) different from a function? Intuitive explanation + maths would be great
In algebraic geometry, functions and morphisms are both important concepts, but they serve different purposes and have different definitions: Functions : In algebraic geometry, a function typically refers to a regular function or rational function on an algebraic variety. A regular function on an algebraic variety $V$ is a function that is locally given by a quotient of polynomials. It's defined on an open subset of $V$ , where it behaves smoothly. A rational function on an algebraic variety $V$ is a function that is locally given by a quotient of polynomials, but it may have poles where it's not defined. Regular and rational functions are used to study the geometry and topology of algebraic varieties. Morphisms : A morphism in algebraic geometry is a structure-preserving map between algebraic varieties. It's a more general notion than a function. Formally, a morphism $f: V \rightarrow W$ between algebraic varieties $V$ and $W$ is a map that associates to each point $P$ in $V$ a point $f(P)$ in $W$ .
|functions|category-theory|morphism|
0
Integrating $\int\frac{\cos(\omega t)\gamma e^{-\gamma t}}{\omega}dt$
How to integrate $$\int\frac{\cos(\omega t)\gamma e^{-\gamma t}}{\omega}dt,$$ where $\gamma, \omega \neq 0$ . I tried using substitution $u=\omega t$ , $du=\omega dt$ and got $\frac{1}{\omega^2} \int \cos(u) \gamma e^{-\frac{\gamma u}{\omega}} du$ and then integrating by parts, but so far it seems like an endless cycle of substitutions and integration by parts, and I can't get to something meaningful. Is there any easy way to compute this? Edit: Using one of the hints, I arrived at $\frac\gamma\omega\mathrm{Re}\left[\frac{\mathrm{e}^{(i\omega-\gamma)t}}{i\omega-\gamma}+C\right]$ , but not sure how to go from here. Have I made any errors here (I am not sure about the denominator)?
We firstly take off the constant and get $I=\frac{\gamma}{\omega} \int \cos (\omega t) e^{-\gamma t} d t.$ $$ \begin{aligned}\int \cos (\omega t) e^{-\gamma t} d t = & \Re \int e^{\omega t i} e^{-\gamma t} d t \\ = & \Re \int e^{(\omega i-\gamma) t} d t \\ = & \Re\left(\frac{e^{(\omega i-\gamma)t}}{\omega i-\gamma}+ C\right) \end{aligned} $$ where $C $ is a complex constant. By rationalisation, we get $$ \begin{aligned} \frac{e^{(\omega i-\gamma)t}}{\omega i-\gamma} & =-\frac{e^{(\omega i-\gamma)t}}{\gamma-\omega i} \cdot \frac{\gamma+\omega i}{\gamma+\omega i} \\ & =-\frac{e^{-\gamma t}}{\gamma^2+\omega^2} e^{\omega t i}(\gamma+\omega i) \end{aligned} $$ Hence we get $$ \Re\left(\frac{e^{(\omega i-\gamma)t}}{\omega i-\gamma}\right)=-\frac{e^{-\gamma t}}{\gamma^2+\omega^2}(\gamma \cos \omega t-\omega \sin \omega t) $$ and arrive at $$ \boxed{I=\frac{\gamma e^{-\gamma t}}{\omega(\gamma^2+\omega^2)}(\omega \sin \omega t-\gamma \cos \omega t)+k} $$ where $k=\Re {(C)}$ is a real constant.
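The boxed result can be verified by differentiating it numerically; a pure-Python sketch (the parameter values are arbitrary nonzero test choices):

```python
import math

gamma, omega = 0.7, 1.3     # arbitrary nonzero test parameters

def F(t):
    # the boxed antiderivative, with constant k = 0
    return (gamma * math.exp(-gamma * t) / (omega * (gamma**2 + omega**2))
            * (omega * math.sin(omega * t) - gamma * math.cos(omega * t)))

def integrand(t):
    return math.cos(omega * t) * gamma * math.exp(-gamma * t) / omega

h = 1e-5
for t in (0.0, 0.5, 1.0, 2.5, -1.2):
    deriv = (F(t + h) - F(t - h)) / (2 * h)   # central difference ≈ F'(t)
    assert abs(deriv - integrand(t)) < 1e-8
```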
|calculus|integration|indefinite-integrals|
0
Admissibility of Löb's rule in basic modal logic K
While I was preparing a talk on the admissible rules of modal logic, I found the following fact in Wikipedia (see https://en.wikipedia.org/wiki/Admissible_rule#Examples ). It says that Löb's rule $(\square p \to p)/p$ is admissible in minimal modal logic K (a rule $\phi/\psi$ is called admissible in logic $L$ if for all substitutions $\sigma$ such that $\vdash_L\sigma(\phi)$ , also $\vdash_L\sigma(\psi)$ ). This is a really interesting result, since in logic $K4$ , for instance, it is no longer true (in $K4$ we can deduce a substitution instance of $\square p \to p$ for Löb's axiom $\square(\square p \to p) \to \square p$ ). Could you please help me to prove this fact. Thank you!
The key is that the modal logic K is complete for the class of well-founded frames (even for the class of finite trees!). And on the class of well-founded frames, Löb's rule is valid. Suppose $\varphi$ is a sentence such that $\square\varphi\to \varphi$ is a theorem of K. We would like to show that $\varphi$ is a theorem of K. Suppose not. Then there is a Kripke model $M$ and a world $w\in M$ such that $\varphi$ is false at $w$ . If the modal depth of $\varphi$ is $d$ , then we can replace $(M,w)$ by a tree $T$ of height at most $d$ rooted at $r$ such that $(M,w)$ and $(T,r)$ are $d$ -bisimilar, and hence $\varphi$ is false at $r$ in $T$ . Now since $\square\varphi\to \varphi$ is a theorem of $K$ , $\lnot \varphi\to \lozenge\lnot\varphi$ is also a theorem, so there exists a child of $r$ in $T$ at which $\varphi$ is false. Repeating at most $d$ times, we arrive at a leaf of $T$ at which $\varphi$ is false. But now there are no worlds accessible from this leaf, so $\square\varphi$ holds there vacuously while $\varphi$ fails, making $\square\varphi\to \varphi$ false at the leaf and contradicting the assumption that it is a theorem of K.
|logic|modal-logic|
1
Subspace of real valued functions on [0,1] is complete
I’m currently working on the following problem: Let $X$ denote the set of nondecreasing functions $f:[0,1]\to\mathbb{R}$ . We endow $X$ with the sup metric. Prove that $X$ is complete. I notice that if we are working with continuous functions, then we can use the nice property of $C([0,1])$ that it is complete under sup metric and prove that any closed subset is a complete subspace. However, since continuity is not specified, I wonder what nice properties for general functions from $[0,1]$ to $\mathbb{R}$ that we can use. Thank you! Here is the question from its original source: https://ww3.math.ucla.edu/wp-content/uploads/2021/09/basic-19F.pdf (Q11).
The space of bounded functions with the sup-norm is complete (see Space of bounded functions is complete ). Note that every nondecreasing $f:[0,1]\to\mathbb{R}$ is bounded, since $f(0)\le f(x)\le f(1)$ , so $X$ sits inside this space. Thus, all you need to prove is that your subspace is closed. Pick a convergent sequence $(f_n)_n$ (in the sup-norm) of nondecreasing functions. This means, for all $n\in \mathbb{N}$ and all $0\leq x\leq y \leq 1$ we have $f_n(x) \leq f_n(y)$ . However, if $f(x)=\lim_{n\rightarrow \infty} f_n(x)$ and $f(y)=\lim_{n\rightarrow \infty} f_n(y)$ , then we also get $f(x)\leq f(y)$ as pointwise convergence preserves $\leq$ (see Suppose that $(s_n)$ converges to $s$, $(t_n)$ converges to $t$, and $s_n \leq t_n \: \forall \: n$. Prove that $s \leq t$. ).
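A small numerical illustration of the closedness argument (the family $f_n(x)=x+\sin(x)/n$ is a made-up example, not from the problem): each $f_n$ is nondecreasing for $n\ge 2$, the sequence converges uniformly to $f(x)=x$, and the limit is again nondecreasing:

```python
import math

def f(n, x):
    # f_n(x) = x + sin(x)/n: nondecreasing on [0,1] for n >= 2
    # (its derivative 1 + cos(x)/n is positive there)
    return x + math.sin(x) / n

xs = [i / 1000 for i in range(1001)]
for n in (2, 10, 100):
    # each f_n is nondecreasing on the grid
    assert all(f(n, u) <= f(n, v) for u, v in zip(xs, xs[1:]))
    # sup-distance to the limit x is at most sin(1)/n, so convergence is uniform
    assert max(abs(f(n, x) - x) for x in xs) <= math.sin(1) / n + 1e-12
# the limit function x is itself nondecreasing, as the closedness argument predicts
assert all(u <= v for u, v in zip(xs, xs[1:]))
```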
|real-analysis|analysis|metric-spaces|complete-spaces|
1
Continued Fraction Representation of sin(x)
To provide context, the continued fraction in the form $\frac{a_0}{1-\frac{a_1}{1+a_1-\frac{a_2}{1+a_2-...}}}$ evaluated to the $n$ th denominator equals $\sum_{k=0}^{n}\prod_{j=0}^{k}a_j$ . If one wants to write $e^x$ as a continued fraction, you can take the Maclaurin series of it which is $1+\sum_{n=1}^{\infty}x^n/n!$ and rewrite that as $1+\sum_{n=1}^{\infty}\prod_{i=1}^{n}(x/i)$ . Evaluating each $a_i$ to be in the form $x/i$ , one can arrive at the continued series representation of $e^x$ . When I was trying to replicate this (rewriting other functions in the summation of products form), I had trouble with functions like log of x and trigonometric functions. Can anybody help with getting $\sin{x}$ and $\cos{x}$ into the form $c+\sum_{k=0}^{n}\prod_{j=0}^{k}a_j$ where $c$ is either $0$ or $1$ ? The inverses of the function as well?
Let's write this up clearly: you want $\sin(x)$ to be written in a similar way to how you wrote $e^x$ . This special form is essentially something like: $$ 1 + \frac{x}{1} + \frac{x}{1}\frac{x}{2} + \frac{x}{1}\frac{x}{2}\frac{x}{3} + \frac{x}{1}\frac{x}{2}\frac{x}{3}\frac{x}{4} + \frac{x}{1}\frac{x}{2}\frac{x}{3}\frac{x}{4}\frac{x}{5} + \cdots$$ $\sin(x)$ can be written in this special way as well: $$ 0 + \frac{x}{1} + 0\cdot 0 - \frac{x}{1}\frac{x}{2}\frac{x}{3} + 0\cdot 0\cdot 0 \cdot 0 + \frac{x}{1}\frac{x}{2}\frac{x}{3}\frac{x}{4}\frac{x}{5} + \cdots$$ Half of the products you want to write are all zeros. Hope that helps.
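To see that this pattern really reproduces $\sin(x)$, here is a small numerical check that sums the odd-length products $(x/1)(x/2)\cdots(x/k)$ with the alternating signs described above (the function name `sin_sum` is mine):

```python
import math

def sin_sum(x, n=25):
    # sum the products (x/1)(x/2)...(x/k) over odd k with signs +, -, +, ...
    # on k = 1, 3, 5, ...; the even-length products are all zero
    total = 0.0
    for k in range(1, n, 2):
        prod = 1.0
        for j in range(1, k + 1):
            prod *= x / j
        total += prod if k % 4 == 1 else -prod
    return total

for x in (0.0, 0.5, 1.2, 3.0):
    assert abs(sin_sum(x) - math.sin(x)) < 1e-10
```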
|taylor-expansion|fractions|continued-fractions|
0
$|X_t - X_s|$ notation in stochastic processes
Let $X_t$ be a real stochastic process. I am new to stochastic processes and in some proofs I see inequalities such as: $$|X_t - X_s| \leq C_1 \quad \forall |t-s| \leq C_2 \tag{1}$$ but $X$ is a function of $t$ and $\omega$ , i.e. $X_t(\omega)$ so I am not sure what is meant by (1). Does this mean (1) holds uniform in $\omega$ ? Is this standard notation?
Indeed as mentioned in the comments and by you, this often just means almost surely: for almost every $\omega\in \Omega$ we have $$|X_{t}(\omega)-X_{s}(\omega)|\leq C_{1}(\omega), \forall |t-s|\leq C_{2},$$ or in probability terms $$P[|X_{t}-X_{s}|\leq C_{1}, \forall |t-s|\leq C_{2}]=1$$ $$\Leftrightarrow \int_{\Omega} 1_{|X_{t}(\omega)-X_{s}(\omega)|\leq C_{1}(\omega), \forall |t-s|\leq C_{2}} dP(\omega)=1.$$ The $C_{1}$ might just be deterministic. The $\omega$ here means that we sample a realization of the path $(X_{t})_{t\geq 0}$ . See for example What is a sample path of a stochastic process for many nice answers See for example, Kolmogorov continuity theorem , where an inequality for the moments implies an almost sure modulus of continuity with random constant.
|probability|probability-theory|stochastic-processes|notation|stochastic-calculus|
0
About a counterexample
If $(X,\mathcal S)$ is a measurable space and $f:X \to [-\infty ,\infty]$ is a function such that $f^{-1}((a ,\infty)) \in \mathcal S$ for every $a \in \mathbb R$ , then $f$ is $\mathcal S$ -measurable. I am trying to disprove this statement by searching for a counterexample, and I am trying to construct a characteristic function for that purpose. Can anyone give me a counterexample if this statement is false, and if it is true, please give me a proof.
Take the Borel sigma algebra for simplicity. Define $f(x) = +\infty$ if $x \in V$ (V for Vitali, or any non-measurable set you like) and $f(x) = -\infty$ otherwise. Then $f^{−1}((a,∞)) = \emptyset$ for all $a$ , but $f$ is not measurable: measurability for $[-\infty,\infty]$ -valued functions requires $f^{-1}((a,\infty]) \in \mathcal S$ , and here $f^{-1}((a,\infty]) = f^{-1}(\{+\infty\}) = V$ is not measurable.
|real-analysis|measure-theory|
0
"Agreeing" orthogonal projections
This is a projection exercise I'm stuck on. Let's say we have two planes in $\mathbb{R}^3$, $A$ and $B$ , which both go through the origin. We also have two orthogonal projections, $ProjA$ and $ProjB$ , projecting on plane $A$ and $B$ respectively. $ProjA$ is given by the projection matrix $P1$ , so that $$ProjA(v) = P1v$$ $ProjB$ is specified by the following vector $d2$ = [known x; d; known z] that is parallel to the projection direction. Note that $d2$ is parametric in $d$ . The matrix $P1$ and the known $x$ and $z$ values in $d2$ are given in the task. The task is to find the only possible value of $d$ such that the following holds for all $v ∈ \mathbb{R}^3: ProjA(ProjB(v)) = ProjB(ProjA(v))$ . My thought process was this: Since $d2$ is parallel to the projection direction, $d2$ is orthogonal to plane $B$. So, $$ProjB(v) = v - Projd2(v)$$ where $Projd2(v)$ is the projection of $v$ onto $d2$: $\frac{d2\cdot v}{||d2||^2}d2$ Continuing, $$ProjA(ProjB(v)) = ProjA(v-Projd2(v)) = P1(v-Projd2(v))$$ $$ProjB(ProjA(v)) = ProjB(P1v) = P1v - Projd2(P1v)$$
Let $a$ be the unit normal vector to plane $A$ and $b$ the unit normal vector to plane $B$ . Then $\text{ Proj_A } (v) = ( I_3 - {aa}^T ) v $ And similarly, $ \text{ Proj_B }(v) = (I_3 - {bb}^T ) v $ Now using the basis $ a, b, a \times b$ , we can write $ v = c_1 a + c_2 b + c_3 (a \times b) $ So that $ \text{ Proj_A } (v) = ( I_3 - {aa}^T ) v = v - c_1 a - c_2 a (a^T b) $ $ \text{ Proj_B } (v) = ( I_3 - {bb}^T ) v = v - c_1 b (a^T b) - c_2 b$ And $ \text{ Proj_B }( \text{ Proj_A }) (v) = (I - {bb}^T) ( v - c_1 a - c_2 a (a^T b) ) \\ = v - c_1 b (a^T b ) - c_2 b - (I - {bb}^T) ( c_1 a + c_2 a (a^T b) ) \\ = v - c_1 b (a^T b) - c_2 b - c_1 a - c_2 a (a^T b) + c_1 b (a^T b) + c_2 b (a^T b)^2 $ Exchanging $a$ and $b$ , and $c_1$ and $c_2$ , we get $ \text{ Proj_A }( \text{ Proj_B }) (v) = v - c_2 a (a^T b) - c_1 a - c_2 b - c_1 b (a^T b) + c_2 a (a^T b) + c_1 a (a^T b)^2 $ Since $a,b$ are linearly independent (otherwise we have the trivial case $a= \pm b$ ), the equality of the two expressions for all $c_1, c_2$ forces $a^T b = 0$ ; that is, the two projections commute for every $v$ exactly when the planes $A$ and $B$ are perpendicular.
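A numeric illustration of this conclusion (the normals below are hand-picked samples, not taken from the original task): the two plane projections commute exactly when the normals are orthogonal.

```python
import math

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def matsub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

I3 = [[float(i == j) for j in range(3)] for i in range(3)]

def proj(n):
    # orthogonal projection onto the plane through 0 with unit normal n: I - n n^T
    return matsub(I3, outer(n, n))

a = [1.0, 0.0, 0.0]
b = [0.0, 1 / math.sqrt(2), 1 / math.sqrt(2)]   # orthogonal to a
c = [1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]   # not orthogonal to a

PA, PB, PC = proj(a), proj(b), proj(c)

# orthogonal normals -> the projections commute
AB, BA = matmul(PA, PB), matmul(PB, PA)
assert all(abs(AB[i][j] - BA[i][j]) < 1e-12 for i in range(3) for j in range(3))

# non-orthogonal normals -> they do not commute
AC, CA = matmul(PA, PC), matmul(PC, PA)
assert any(abs(AC[i][j] - CA[i][j]) > 1e-6 for i in range(3) for j in range(3))
```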
|linear-algebra|projection|projection-matrices|
0
$ \int_0^1(\ln\left(\frac{1}{x}\right))^n $ dx
Find the values of $n$ for which the integral $ \int_0^1 (\ln\left(\frac{1}{x}\right))^n \, dx$ converges. I understand that the integral needs to be broken into two parts so that convergence can be checked at $x=1$ and at $x=0$ separately. For the part near $x=1$ , if $n \ge 0$ then it is a proper integral. I need to check the remaining cases but cannot see how.
$\int _0^1(\ln\frac{1}{x})^n\mathrm{d}x\overset{t=\frac{1}{x}}{=}\int_1^{+\infty}(\ln t)^nt^{-2}\mathrm{d}t\overset{z=\ln t}{=}\int_0^{+\infty}z^n\mathrm{e}^{-z}\mathrm{d}z$ This is the Gamma integral: it converges precisely when $n>-1$ , in which case it equals $\Gamma(n+1)$ (that is, $n!$ when $n$ is a nonnegative integer).
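The chain of substitutions can be checked numerically; the sketch below integrates $\int_0^{\infty} z^n e^{-z}\,dz$ with Simpson's rule and compares against $n!$ (the truncation point $z=60$ and the step count are arbitrary choices of mine):

```python
import math

def gamma_integral(n, upper=60.0, steps=100_000):
    # Simpson's rule for the integral of z^n e^{-z} over [0, upper]; steps must be even
    h = upper / steps
    s = 0.0
    for i in range(steps + 1):
        z = i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        s += w * z ** n * math.exp(-z)
    return s * h / 3

# for nonnegative integers n the integral equals n!
for n in (0, 1, 2, 3, 4):
    assert abs(gamma_integral(n) - math.factorial(n)) < 1e-6
```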
|improper-integrals|
1
The holomorphic differential of an elliptic curve as a Riemann surface
I am reading a Teichmueller theory book and trying to understand elliptic curves as examples of Riemann surfaces. Consider the elliptic curve \[X = \{[z : w : y] \in \mathbb C \mathrm P^2 \mid w^2y = z (z - y) (z - \lambda y)\} \] where $\lambda \in \mathbb C$ is a complex number not equal to $0$ or $1$ . I am trying to understand how the formula $\omega = \mathrm d z/ w$ produces a holomorphic differential on the Riemann surface $X$ , especially at the point $\infty$ . My partial work is below. There is a related question here - Holomorphic differential of elliptic curve over $\mathbb C$ but I don't understand the answer there. It would be great if someone could clarify that answer too. What I understand so far: $X$ is a Riemann surface. Using the implicit function theorem, we can conclude the following: On the open set $\\{[z : w : y] \in X \mid y = 1\\}$ , the function $W = w/y$ is a coordinate except for at most four points where $\frac{\mathrm d}{\mathrm d z}\, z (z - 1) (z - \lambda)$ vanishes.
The infinity point is $[0:1:0]$ , i.e. $w\ne 0$ . I will write $a$ instead of $\lambda$ , against history, for easier typing. We start with the $1$ -form $\omega$ . But note that $\omega$ is defined w.r.t. a very specific affine piece inside the elliptic curve. It is the piece with affine coordinates $(z,w)$ , and the $y$ -component, $y\ne 0$ , is normed to one. In this world lives an ant, which always uses letters like $z,w,y$ for computations; we will also do so in connection with this affine piece. Now there is also an affine piece where, from the ant's point of view of $[z:w:y]$ , we are missing the point on the elliptic curve denoted by $\infty$ , which is $[0:1:0]$ ; the $y$ -component is missing. In order to make computations, it is useful to use $[Z:W:Y]$ , and the letters $Z,W,Y$ in connection with this world, where $W\ne 0$ . Here lives a bee, and in all books in its library it sees only capital letters. Our passage is the "simple passage" $[z:w:y]=[Z:W:Y]$ , and this makes the change of coordinates between the ant's chart and the bee's chart explicit.
|complex-analysis|elliptic-curves|riemann-surfaces|elliptic-integrals|elliptic-functions|
1
Is a subgroup of $\operatorname{GL}(n,\mathbb{R})$ semialgebraic if and only if all its orbits are?
A subset $X \subseteq \mathbb{R}^n$ is called semialgebraic if it is of the form $$ X = \bigcup_{finite} \bigcap_{finite} \{ x \in \mathbb{R}^n \colon f_{i,j}(x) \star 0 \} $$ where $\star$ represents any of the symbols $=, \leq , \geq, \lt, \gt$ , and $f_{i,j} \in \mathbb{R}[X_{11}, \ldots , X_{nn}]$ . A subgroup $G \leq \operatorname{GL}(n,\mathbb{R})$ is called a semialgebraic group if $G$ is a semialgebraic subset of $\mathbb{R}^{n\times n}$ . Is the following true? Conjecture: A subgroup $G \leq \operatorname{GL}(n,\mathbb{R})$ is semialgebraic if and only if for every $p \in \mathbb{R}^n$ , the orbit $G.p \subseteq \mathbb{R}^n$ is semialgebraic. One direction follows from the Tarski-Seidenberg transfer principle: If $G$ is semialgebraic, orbits $$ G.p = \{x \in \mathbb{R}^n \colon \exists g \in G , x=g.p\} $$ are semialgebraic. What about the other direction?
Here is a counter-example. Consider the subgroup of $GL(2,{\mathbb R})$ consisting of matrices with rational determinants. Another example is the following subgroup of $SO(4)$ : Unitary matrices (elements of $U(2)$ ) whose complex determinants are roots of unity. Conjecture. Suppose that $G \leq \operatorname{GL}(n,\mathbb{R})$ is a subgroup with semialgebraic orbits. Then $G$ contains a semialgebraic subgroup with exactly the same orbits as $G$ .
|group-theory|algebraic-groups|semialgebraic-geometry|
1
let $A,G,H$ be the AM, GM and HM, of two distinct reals. Then Find the interval in which the roots of $Ax^2-2Gx+H=0$ lie.
let $A,G,H$ be the AM, GM and HM, of two distinct reals. Then Find the interval in which the roots of $Ax^2-2Gx+H=0$ lie. Let the number be $p,q$ so $ A= \frac{\left(p+q\right)}{2}$ $G=\sqrt{pq}= \frac{2A}{G^2}$ $H=2\left(\frac{1}{p}+\frac{1}{q}\right)$ The roots are , therefore $\frac{\left(-2G\pm\sqrt{4G^2-4AH}\right)}{2A}$ or $\frac{\left(G\ \pm\sqrt{G^2-\frac{2A^2}{G^2}}\right)}{A}$ Not sure what to do beyond this, would appreciate some inputs( Just realised I probably have to use $AM\geq GM$ , not sure how to use it though
Your definition of HM is incorrect. $$HM = \frac{2}{\frac{1}{p}+\frac{1}{q}}$$ With this correction, $H = \frac{2pq}{p+q} = \frac{G^2}{A}$ , so the $4G^2-4AH$ part becomes $0$ . Hence, it has a repeated root, which is $\frac{G}{A}$ , or equivalently $\frac{2\sqrt{pq}}{p+q}$
|sequences-and-series|
0
let $A,G,H$ be the AM, GM and HM, of two distinct reals. Then Find the interval in which the roots of $Ax^2-2Gx+H=0$ lie.
let $A,G,H$ be the AM, GM and HM, of two distinct reals. Then Find the interval in which the roots of $Ax^2-2Gx+H=0$ lie. Let the number be $p,q$ so $ A= \frac{\left(p+q\right)}{2}$ $G=\sqrt{pq}= \frac{2A}{G^2}$ $H=2\left(\frac{1}{p}+\frac{1}{q}\right)$ The roots are , therefore $\frac{\left(-2G\pm\sqrt{4G^2-4AH}\right)}{2A}$ or $\frac{\left(G\ \pm\sqrt{G^2-\frac{2A^2}{G^2}}\right)}{A}$ Not sure what to do beyond this, would appreciate some inputs( Just realised I probably have to use $AM\geq GM$ , not sure how to use it though
Let us assume that $p, q > 0$ . Then your harmonic mean should be $$H = \frac{2}{\frac{1}{p} + \frac{1}{q}} = \frac{2pq}{p+q} = \frac{G^2}{A}.$$ The roots of $A x^2 - 2Gx + H = 0$ are $$x = \frac{2G \pm \sqrt{4G^2 - 4A H}}{2A} = \frac{G \pm \sqrt{G^2 - AH}}{A} = \frac{G}{A}.$$ So $x = G/A$ is a double root, and we want to find the range of $G/A$ for such $p,q$ . The AM-GM inequality gives $A \ge G > 0$ , with equality only when $p = q$ ; since $p$ and $q$ are distinct, in fact $0 < G/A < 1$ . The value $x = 0$ cannot be attained because $G > 0$ whenever $p, q > 0$ . (Although the problem statement does not seem to explicitly require $p, q > 0$ , we take it as such.) We can arbitrarily choose $q = 1$ to find $x = \frac{2\sqrt{p}}{p+1}$ , and this clearly has the limits $$\lim_{p \to \infty} \frac{2\sqrt{p}}{p+1} = 0 \quad\text{and}\quad \lim_{p \to 1} \frac{2\sqrt{p}}{p+1} = 1,$$ so $x$ takes every value strictly between them, and the desired interval is $(0,1)$ .
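A random spot-check of this computation (sampling distinct positive $p,q$; the ranges and trial count are arbitrary): the discriminant vanishes and the double root $G/A$ lands strictly inside $(0,1)$.

```python
import math
import random

random.seed(1)
for _ in range(100):
    p, q = random.uniform(0.1, 50.0), random.uniform(0.1, 50.0)
    if abs(p - q) < 1e-9:
        continue                      # the problem asks for distinct reals
    A = (p + q) / 2                   # arithmetic mean
    G = math.sqrt(p * q)              # geometric mean
    H = 2 / (1 / p + 1 / q)           # harmonic mean
    # the discriminant of A x^2 - 2G x + H vanishes, since G^2 = AH
    assert abs(4 * G * G - 4 * A * H) < 1e-9 * (4 * A * H)
    root = G / A
    assert 0 < root < 1
```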
|sequences-and-series|
0
Is there anything like “cubic formula”?
If we have a quadratic equation with complex roots, we are not able to factorize it easily, so we apply the quadratic formula and get the roots. Similarly, if we have a cubic equation with two complex roots (which we know are conjugates of each other) and one fractional root, then we are not able to find its first root by trial and error. So my question is: like the quadratic formula, does there exist anything like a cubic formula that helps in solving cubic equations? For example, I have an equation $$2x^3+9x^2+9x-7=0\tag{1}$$ and I have to find its solutions, which I am not able to do because it has no integral solution. Its solutions are $\dfrac {1}{2}$, $\dfrac{-5\pm \sqrt{3}i}{2} $; I know these solutions because this equation was generated by myself. So how can I solve equations like these? Also, while typing this question, I thought about the derivation of the quadratic formula, which is derived by the completing-the-square method. So I tried to apply a ‘completing the cube’ method, but without success.
Absolutely! Suppose $$ax^3+bx^2+cx+d=0,$$ $i^2=-1$ , and exactly one of the following holds: $w_1=w_2=1$ ; or $w_1=\frac{-1+i\sqrt3}{2}$ and $w_2=\frac{-1-i\sqrt3}{2}$ ; or $w_1=\frac{-1-i\sqrt3}{2}$ and $w_2=\frac{-1+i\sqrt3}{2}$ . Then $$x=-\frac{b}{3a}+w_1\sqrt[3]{\frac{9abc-2b^3-27a^2d}{54a^3}+\sqrt{(\frac{9abc-2b^3-27a^2d}{54a^3})^2+(\frac{3ac-b^2}{9a^2})^3}}+w_2\sqrt[3]{\frac{9abc-2b^3-27a^2d}{54a^3}-\sqrt{(\frac{9abc-2b^3-27a^2d}{54a^3})^2+(\frac{3ac-b^2}{9a^2})^3}}$$ Credit goes to Girolamo Cardano. This formula can be easily memorized once you notice certain patterns. (One caveat: the two cube roots cannot be chosen independently; they must be picked so that their product equals $-\frac{3ac-b^2}{9a^2}$ .)
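A hedged numerical sketch of the formula (the function name `cardano` is mine), with the second cube root chosen so that the product constraint holds, checked on the example $2x^3+9x^2+9x-7=0$:

```python
import cmath

def cardano(a, b, c, d):
    # Q = (9abc - 2b^3 - 27a^2 d)/(54a^3) and R = (3ac - b^2)/(9a^2), as in the formula
    Q = (9 * a * b * c - 2 * b**3 - 27 * a**2 * d) / (54 * a**3)
    R = (3 * a * c - b**2) / (9 * a**2)
    s = cmath.sqrt(Q**2 + R**3)
    u = complex(Q + s) ** (1 / 3)       # one cube root of Q + sqrt(Q^2 + R^3)
    v = -R / u if u != 0 else 0j        # paired so that u * v = -R
    w = complex(-0.5, 3**0.5 / 2)       # primitive cube root of unity
    return [-b / (3 * a) + w**k * u + w**(-k) * v for k in range(3)]

roots = cardano(2, 9, 9, -7)
# every returned value really is a root of 2x^3 + 9x^2 + 9x - 7
for x in roots:
    assert abs(2 * x**3 + 9 * x**2 + 9 * x - 7) < 1e-9
# and x = 1/2 is among them
assert min(abs(x - 0.5) for x in roots) < 1e-9
```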
|algebra-precalculus|polynomials|roots|cubics|
0
How to show a continuous function is bounded when the domain is compact?
$K$ is a nonempty subset of $\mathbb{R}^n$ . I have to show that $K$ is compact if and only if every continuous function $f: K \to \mathbb{R}$ is bounded. I know that if $f$ is continuous and $K$ is compact then $f$ is uniformly continuous. Does that imply the function is bounded? I have no idea how to connect compactness and continuity.
Let $K \subseteq \mathbb{R}^n$ be nonempty. (If $K$ is compact, then for each continuous function $f: K \to \mathbb{R}$ , $f (K)$ is bounded.) First, you can show that a continuous image of a compact set is compact. This is straightforward: consider an arbitrary open cover $\mathcal{U}$ of $f (K)$ . Since $f$ is continuous, the collection $\{f^{-1} (U) | U \in \mathcal{U}\}$ constitutes an open cover of $K$ . Since $K$ is compact, there is a finite open subcover $\{f^{-1} (U_1), \dots, f^{-1} (U_n)\}$ of $K$ . Then $\{U_1, \dots, U_n\} \subseteq \mathcal{U}$ constitutes a finite open subcover of $f (K)$ . Conclude by observing that $f (K) \subseteq \mathbb{R}$ , being compact, must be bounded: if not, the open cover $\{f (K) \cap (y - 1, y + 1) | y \in f (K)\}$ of $f (K)$ would have no finite subcover, for instance. (If for each continuous function $f: K \to \mathbb{R}$ , $f (K)$ is bounded, then $K$ is compact.) By Heine-Borel, it is equivalent to show that $K$ is closed and bounded in $\mathbb{R}^n$ . We argue the contrapositive. If $K$ is unbounded, then $f(x) = \lVert x \rVert$ is continuous on $K$ but unbounded. If $K$ is not closed, pick a limit point $x_0 \in \overline{K} \setminus K$ ; then $f(x) = \frac{1}{\lVert x - x_0 \rVert}$ is continuous on $K$ but unbounded. Either way, we have produced an unbounded continuous function on $K$ , so if every continuous function on $K$ is bounded, then $K$ is closed and bounded, hence compact.
|general-topology|metric-spaces|
0
How to remember the difference between "stars and bars" and multinomial coefficients?
Explanation: It is easy for me to remember the difference between permutation coefficients and combination coefficients verbally: one says that the former "is the number of ways of choosing $k$ things from $n$ options without replacement where order matters" and that the latter "is the number of ways of choosing $k$ things from $n$ options without replacement where order doesn't matter" . Question: How can one quickly describe (verbally) the difference between multinomial coefficients and "stars and bars" (which will here henceforth be called multiset coefficients )? For which does order matter? For which does order not matter? For which is the choosing with/without replacement? Attempt: Both involve a counting process with two considerations: (i) dividing objects of $n$ types into $i$ containers, (ii) each container with $k_i$ spots, so assigning $k = \sum_i k_i$ objects from the $n$ different types. So it seems like there might be some ambiguity when one says "with/without replacement".
Here is a simple way to state it. Below that, I give a simple real-world scenario that can be turned into either type of problem. Most of the bad explanations online give examples that aren't even remotely similar (e.g. the ubiquitous "anagram" example for multinomials), which probably causes this sort of confusion. Multisets The multiset coefficient is what is commonly called, at lower levels, a combination with replacement. Multisets solve allocation questions . If we are making $k$ choices from a set of $n$ possible things to choose, the multiset counts how many ways there are to do that. A concrete example is this: if I am at a barbecue joint and order a protein-packed dinner with $k = 4$ servings of meat, which I can choose from $n=3$ options (say, beef, chicken, and pork), the number of ways I can allocate my choices is given by $\binom{n+k-1}{k} = \binom{n+k-1}{n-1} = \binom{6}{2} = 15$ . Here is why: imagine that I order by standing in front of each type of meat in the case, marking one star for each serving I take before moving on to the next type. My whole order is then a string of $k = 4$ stars and $n - 1 = 2$ bars (the moves between adjacent types of meat), and every such string corresponds to exactly one order, so there are $\binom{6}{2}$ of them.
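The barbecue example can be verified directly with Python's standard library, since `itertools.combinations_with_replacement` enumerates exactly these multisets:

```python
from itertools import combinations_with_replacement
from math import comb

meats = ("beef", "chicken", "pork")   # n = 3 options
k = 4                                  # servings to allocate
orders = list(combinations_with_replacement(meats, k))
# the multiset coefficient C(n + k - 1, k) counts the orders
assert len(orders) == comb(len(meats) + k - 1, k) == 15
```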
|combinatorics|soft-question|definition|multinomial-coefficients|multisets|
0
Find all polynomials $P(x)$ with real coefficients satisfying $P(x-1)P(x+1)=P(P(x))$
Find all polynomials $P(x)$ with real coefficients satisfying $P(x-1)P(x+1)=P(P(x))$ I tried $P(x) = c$ (const) and got $c=0$ or $c=1$ . But when I tried $P(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+...+a_{0}$ , normally I would be able to find $a_{n}$ but in this case, both highest coefficients on the two sides is $a_{n}^{2}$ so I can't find it. I don't know how to progress in this problem. Any help would be highly appreciated! P/s: With the help of @Theo Bendit, I come up with this solution. Let $\deg(P(x))=n$ , then $\deg(LHS)=2n$ while $\deg(RHS)=n^{2}$ => $n=2$ (as $n=0$ then I have done it above). Therefore, $P(x)=ax^{2}+bx+c$ . We see that for $x^{4}$ , we have $a^2=a^3 => a=1$ and we can do the same for $b$ and $c$ . In the end, we get $P(x)=x^2-2x+1$ is a solution.
$P(x)=x^2-2x+1$ $P(x+1)P(x-1)=P(P(x))$ Let's say that $P(x)$ has degree $n$ . That means that $P(x+1)$ and $P(x-1)$ will each have degree $n$ too. Their product will consequently be of degree $2n$ (because $x^n$ times $x^n$ is $x^{2n}$ ). The right-hand side will have a degree of $n^2$ because $P(x)$ is of degree $n$ and $P(\textit{something of degree n})$ will return something of degree $n^2$ . We can therefore conclude, since both sides must have the same degree, that $n^2=2n$ and $n$ either equals $2$ or $0$ . A zero-degree polynomial is a cheeky answer, so let's just say that $P(x)$ is of degree $2$ , or a quadratic. $P(x)=ax^2+bx+c$ $P(x+1)=a(x+1)^2+b(x+1)+c=a(x^2+2x+1)+b(x+1)+c=ax^2+2ax+a+bx+b+c$ $P(x-1)=a(x-1)^2+b(x-1)+c=a(x^2-2x+1)+b(x-1)+c=ax^2-2ax+a+bx-b+c$ $P(x+1)P(x-1)=(ax^2+2ax+a+bx+b+c)(ax^2-2ax+a+bx-b+c)$ Getting a calculator to evaluate this... $\space=(a^2 - b^2 + 2 a c + c^2)+ (- 2 a b + 2 b c )x+( - 2 a^2 + b^2 + 2 a c)x^2 + (2 a b) x^3 + (a^2 )x^4$ Next up is evaluating $P(P(x))$ in the same way and comparing coefficients with the expansion above; matching them forces $a=1$ , $b=-2$ , $c=1$ , so $P(x)=x^2-2x+1=(x-1)^2$ .
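A quick numerical confirmation that $P(x)=x^2-2x+1=(x-1)^2$ really satisfies the functional equation, at arbitrary sample points:

```python
def P(x):
    # the candidate solution (x - 1)^2
    return x * x - 2 * x + 1

for x in (-3.0, -1.5, 0.0, 0.5, 2.0, 7.25):
    lhs = P(x - 1) * P(x + 1)
    rhs = P(P(x))
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```

Algebraically this is $P(x-1)P(x+1) = (x-2)^2 x^2$ against $P(P(x)) = ((x-1)^2-1)^2 = (x^2-2x)^2$, which agree identically.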
|functional-equations|
0
Let $a, b, c \in \mathbb{Z}$. Show: If $a \shortmid b$ and $a \shortmid (b+c),$ then $a \shortmid c.$ Hint: $c = (b+c)-b.$
I'm working on my discrete math homework, and I come to this problem: Let $a, b, c \in \mathbb{Z}$ . Show: If $a \shortmid b$ and $a \shortmid (b+c),$ then $a \shortmid c.$ Hint: $c = (b+c)-b.$ I've been trying to figure out something with the definition of divides, but am having trouble seeing how that can lead to the outcome that I need.
There exist $m, n \in \mathbb{Z}$ such that $b = a\cdot m$ and $b+c = a \cdot n$ . Hence, $c = (b+c) - b = a \cdot (n - m)$ , where $n - m$ is an integer. Therefore, we proved $a \shortmid c$ .
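The argument is easy to spot-check over many random integers (the bounds and trial count below are arbitrary):

```python
import random

random.seed(0)
for _ in range(1000):
    a = random.randint(1, 100)
    m = random.randint(-50, 50)
    n = random.randint(-50, 50)
    b = a * m             # a | b
    total = a * n         # a | (b + c)
    c = total - b         # c = (b + c) - b = a * (n - m)
    assert c % a == 0     # hence a | c
```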
|elementary-number-theory|discrete-mathematics|
1
High School Combinatorics - I do not agree with the provided explanation
The proposed problem was the following (translated into English by me): "A school has $16$ students interested in participating in a team competition. First, they must split themselves into $8$ pairs. Then, each of those pairs must choose (among themselves) someone to be their leader. Finally, two teams of $8$ students are formed by choosing $4$ pairs to compose the first team, which already locks the opposing team. Considering two teams to be equal if, and only if, they are composed of the same pairs (with pairs being equal if they are composed of the same students, with the same leader), in how many distinct ways can these $16$ students divide themselves into two teams of eight?" Solution Provided (translated by me): First, compute the total amount of distinct pairs that can exist. This is clearly given by $C_{16,2} = \frac{16!}{14!2!} = 15\cdot8$ . Since each pair has two choices of leader, we also multiply by two, to obtain $16\cdot15 = 240$ Now, multiply the amount of distinct pai
We all agree that the provided answer is incorrect for the reasons you state and others. Then the fun starts. Here is how I see it: First, choose the leaders, eight out of 16 students: $C(16,8)$ ways. Then assign a partner for each leader from the remaining 8 students: $8!$ ways. Finally, choose team A from the eight pairs: $C(8,4)$ ways. Team B will be composed of those not chosen for A. So the total is $C(16,8)\cdot 8!\cdot C(8,4) = \dfrac{16!}{4!\,4!} = 36{,}324{,}288{,}000$ , if I worked my calculator right. Assuming that teams A and B are distinctly identifiable. (Otherwise, it's half that number.) Am I missing something?
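The arithmetic can be double-checked in a couple of lines with `math.comb` and `math.factorial` from the standard library:

```python
from math import comb, factorial

# choose 8 leaders, assign 8 partners, pick 4 pairs for team A
total = comb(16, 8) * factorial(8) * comb(8, 4)
assert total == factorial(16) // (factorial(4) * factorial(4))
assert total == 36_324_288_000
# if the two teams are not distinguishable, halve the count
assert total // 2 == 18_162_144_000
```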
|combinatorics|solution-verification|
0
Hamming distance triangle inequality
How to prove triangle inequality of metric space for Hamming distance between the bit strings of length n? $$d(x,y) \le d(x,z)+d(z,y)$$
The left hand side contribution occurs for all those coordinates $i$ such that $x_i\neq y_i$ . Now, fixing a coordinate $i$ , if $x_i\neq y_i$ then it is clear that $x_i\neq z_i$ or $y_i\neq z_i$ . Thus, for all those coordinate $i$ contributing in $d(x, y)$ will contribute in $d(x, z)$ or $d(y, z)$ . This gives us the desired inequality.
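The coordinatewise argument can be exhaustively verified for short bit strings:

```python
from itertools import product

def hamming(x, y):
    # count coordinates where the strings differ
    return sum(a != b for a, b in zip(x, y))

n = 4
strings = list(product((0, 1), repeat=n))
# triangle inequality over every triple of length-4 bit strings
assert all(hamming(x, y) <= hamming(x, z) + hamming(z, y)
           for x in strings for y in strings for z in strings)
```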
|metric-spaces|triangle-inequality|
0
Can this proof that $\sqrt{2}$ is irrational be rewritten using only integers?
Can this proof that $\sqrt{2}$ is irrational be rewritten using only integers? Most proofs that $\sqrt{2}$ is irrational start with assuming that $2=\dfrac{a^2}{b^2}$ and derive a contradiction. For a nice collection, see my question here: What is the most unusual proof you know that $\sqrt{2}$ is irrational? These proofs generally only use properties of the integers. However the following proof uses the actual $\sqrt{2}$ and thus requires the actual square roots between integers. So I wondered if this could be modified so it did not require non-integers. I will write the proof in this more general form: If $n$ and $k$ are positive integers such that $k^2 \lt n \lt (k+1)^2$ then $\sqrt{n}$ is irrational. Proof: Suppose $\sqrt{n}=\dfrac{a}{b}$ , so $a=b\sqrt{n}$ and $a\sqrt{n}=nb$ . Then $\begin{array}\\ \sqrt{n} &=\dfrac{a}{b}\\ &=\dfrac{a}{b}\dfrac{\sqrt{n}-k}{\sqrt{n}-k}\\ &=\dfrac{a\sqrt{n}-ka}{b\sqrt{n}-kb}\\ &=\dfrac{nb-ka}{a-kb}\\ \end{array} $ Since $k^2 \lt n \lt (k+1)^2$ , $k \lt \sqrt{n}=\dfrac{a}{b} \lt k+1 $ so $0 \lt a-kb \lt b$ , and the new fraction $\dfrac{nb-ka}{a-kb}$ represents $\sqrt{n}$ with a smaller positive denominator; repeating this descent gives a contradiction.
Both $k^2$ and $n$ are integers, with $k^2 \lt n$ . Thus $n - k^2 \geq 1$ . Therefore, $a^2 - k^2b^2 \geq b^2$ . Final edit: I am certain that $a^2 - k^2b^2 \geq b^2$ , and I can now see that your identity comes from: $$a = \sqrt{n}b \\ b = \frac{a}{\sqrt{n}}$$ To be clear, your proof cannot work in the second case because: $$a^2 - k^2b^2 = b^2(n - k^2) \geq b^2$$ But this trick functions in the first proof as: $$a-kb = b(\sqrt{n}-k) \lt b$$ You don't actually need to use the inequality manipulation that you have at the end; once you can show that $a-kb$ is an integer, and it is less than $b$ (as you multiplied $b$ by a number smaller than $1$ ), your conclusion follows immediately.
|proof-writing|radicals|integers|irrational-numbers|
0
Show that the intersection of the distinct subgroups of a $p$-group of index $p$ is normal.
I've been struggling with this exercise (Exercise 8.2.28, Introduction to Abstract Algebra by Nicholson) for a few hours, and I haven't made much progress. Let $G$ be a group of order $p^n$ and let $H_1, \dots ,H_m$ be the distinct subgroups of $G$ of index $p$ . If $N = H_1 \cap \cdots \cap H_m$ , show that $N \triangleleft G$ and that $x^p = 1$ for every coset $x$ in $G/N$ . From an earlier exercise, either $H_i \triangleleft G$ or $H_i = N(H_i)$ where $N(H_i)$ is the normalizer of $H_i$ for each $i$ . Furthermore, by Lagrange's Theorem, these subgroups are the ones of order $p^{n-1}$ . Let $G$ be a finite $p$ -group of order $p^n$ . Then there exists a series $$ G = G_0 \supset G_1 \supset \cdots \supset G_n = \{ 1 \} $$ of subgroups of $G$ such that $G_i \triangleleft G$ , $|G_i| = p^{n-i}$ , and $|G_i / G_{i+1}| = p$ for all $i$ . I have a strong feeling that this is somehow related but I've been stuck for hours on how to use it. I have no idea where to go from here and I've made
To show that $N$ is normal, note that if $H$ is of index $p$ , then the index of $H$ is the smallest prime dividing $|G|$ , so $H$ is normal. Thus, $N$ is the intersection of normal subgroups, hence is normal. Alternatively, if $H$ has index $p$ and $x\in G$ , then $xHx^{-1}$ also has index $p$ . Therefore, $$xNx^{-1} = x\left(\bigcap_{[G:H]=p}H\right)x^{-1} = \bigcap_{[G:H]=p}xHx^{-1} = N.$$ To prove that every coset $x$ satisfies $x^p=1$ in $G/N$ , note that for every $H$ of index $p$ , $G/H$ has order $p$ , so $(xH)^p=eH$ ; therefore, $x^p\in H$ . Thus, $x^p$ lies in every subgroup of index $p$ , hence in their intersection $N$ . That is, $(xN)^p = x^pN = N$ , so every element of $G/N$ has order dividing $p$ .
|abstract-algebra|group-theory|normal-subgroups|p-groups|
1
Why is radian so common in maths?
I have learned about the correspondence of radians and degrees, so $360°$ equals $2\pi$ radians. Now we mostly use radians (integrals and so on). My question: Is it just mathematical convention that radians are much more used in higher maths than degrees, or do radians have some intrinsic advantage over degrees? For me personally it doesn't matter if I write $\cos(360°)$ or $\cos(2\pi)$. Both equal $1$, so why bother with two conventions?
The radian has certain special properties that facilitate analyzing trigonometric functions. $$\frac{d}{dx} \sin(x\ \text{rad})=\cos (x\ \text{rad})$$ Now let us look at quadrants (1 quadrant is a right angle): $$\frac{d}{dx} \sin(x\ \text{quad})=\frac{\pi\cos(x\ \text{quad})}{2}$$ A generalized version is $$\frac{d}{dx} \sin(x)=\frac{\pi\cos(x)}{H}$$ where $H$ is the measure of a half-circle in the same units as $x$ . For radians, $H=\pi$ , which simplifies the derivative. Repeated differentiation actually cycles back to $\sin(x\ \text{rad})$ after four steps, but not for other units.
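A finite-difference check of the general formula $\frac{d}{dx}\sin(x)=\frac{\pi\cos(x)}{H}$ for degrees, where $H=180$:

```python
import math

def sin_deg(x):
    # sine of an angle given in degrees
    return math.sin(math.radians(x))

h = 1e-6
x = 40.0
numeric = (sin_deg(x + h) - sin_deg(x - h)) / (2 * h)
predicted = (math.pi / 180) * math.cos(math.radians(x))   # pi * cos(x) / H with H = 180
assert abs(numeric - predicted) < 1e-8
```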
|geometry|education|
0
evaluate $\sum_{i=0}^{n-1}\sum_{j=i+1}^{n+1} {n+1\choose j}{n\choose i}$
Let $n$ be a nonnegative integer. Evaluate $\sum\limits_{i=0}^{n-1}\sum\limits_{j=i+1}^{n+1} {n+1\choose j}{n\choose i}$ . Below is a summary of a solution based off of a problem in the summation chapter of the book Problem Solving Through Problems by Loren Larson. Multiply both sides of the sum by $\dfrac{1}{2^{2n+1}}.$ Consider a matching game played with two players A and B and $2n+1$ coins. Player $A$ flips $n+1$ coins and picks $n$ of them so that the number of heads flipped is maximal. Player $B$ flips $n$ coins. The player with the most heads flips wins, with ties going to B. Then the probability $A$ wins is $\sum_{i=0}^{n-1} P(\text{ B flips i heads }) \sum_{j=i+1}^n P(\text{ A flips j heads } | \text{ B flips i heads }) = \sum_{i=0}^{n-1} P(\text{ B flips i heads }) \sum_{j=i+1}^n P(\text{ A flips j heads }) = \sum_{i=0}^{n-1} {n\choose i}\dfrac{1}{2^n}\sum_{j=i+1}^n {n+1\choose j}(\dfrac{1}2)^{n+1}$ Note that the game can be reformulated as follows: players A and B both flip
Here is a much simpler proof: Set \begin{equation} L:=\sum\limits_{i=0}^{n}\ \ \sum\limits_{j=i+1}^{n+1}\dbinom{n+1}{j}\dbinom {n}{i}. \end{equation} (This differs from your sum only in an extra addend, which is obtained for $i=n$ and $j=n+1$ . This extra addend is $1$ .) We claim that $L=2^{2n}$ . Indeed, \begin{equation} L=\sum\limits_{0\leq i \lt j\leq n+1}\dbinom{n+1}{j}\dbinom{n}{i}. \end{equation} On the other hand, substituting $u=n-i$ and $v=n+1-j$ (so that the condition $i \lt j$ becomes $v\leq u$ ), \begin{equation} L=\sum\limits_{0\leq v\leq u\leq n}\dbinom{n+1}{v}\dbinom{n}{u}=\sum\limits_{0\leq j\leq i\leq n}\dbinom{n+1}{j}\dbinom{n}{i} \end{equation} (here, we renamed the indices $u$ and $v$ as $i$ and $j$ , and used $\dbinom{n+1}{n+1-j}=\dbinom{n+1}{j}$ and $\dbinom{n}{n-i}=\dbinom{n}{i}$ ). Adding these two equalities together, we find \begin{equation} L+L=\sum\limits_{0\leq i \lt j\leq n+1}\dbinom{n+1}{j}\dbinom{n}{i}+\sum\limits_{0\leq j\leq i\leq n}\dbinom{n+1}{j}\dbinom{n}{i}=\sum\limits_{i=0}^{n}\sum\limits_{j=0}^{n+1}\dbinom{n+1}{j}\dbinom{n}{i}=2^{n+1}\cdot 2^{n}=2^{2n+1}. \end{equation} Thus, $2L=L+L=2^{2n+1}$ . Dividing this equality by $2$ , we find $L=2^{2n}$ , qed.
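The claim $L = 2^{2n}$, and hence the value $2^{2n}-1$ for the question's sum (which stops at $i=n-1$), is easy to spot-check with `math.comb`:

```python
from math import comb

for n in range(1, 10):
    # L: the sum with i running up to n
    L = sum(comb(n + 1, j) * comb(n, i)
            for i in range(n + 1)
            for j in range(i + 1, n + 2))
    assert L == 4**n
    # the question's original sum stops at i = n - 1, dropping the single i=n, j=n+1 term
    original = sum(comb(n + 1, j) * comb(n, i)
                   for i in range(n)
                   for j in range(i + 1, n + 2))
    assert original == 4**n - 1
```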
|probability|combinatorics|summation|contest-math|binomial-coefficients|
0
Remainders of $(1-3i)^{2009}$ when divided by $13+2i$ in $\Bbb{Z}[i]$
Find the possible remainders of $(1-3i)^{2009}$ when divided by $13+2i$ in $\mathbb{Z}[i]$ . I'm having a hard time understanding remainders in $\mathbb{Z}[i]$ . I'm going to write my solution to the problem above just to give some context, but you can skip this part and read the question below if you want. We start by noticing that $N(13+2i) = 173$ , which is a prime number. Therefore, $13+2i$ is irreducible in $\mathbb{Z}[i]$ . This allows us to quickly calculate the number of elements of the multiplicative group $(\mathbb{Z}[i]/(13+2i))^{\times}$ , namely $N(13+2i) - 1 = 172$ . By Lagrange's Theorem, since $1-3i$ and $13+2i$ are coprime, $$(1-3i)^{11\cdot 172} \equiv 1\mod(13+2i).$$ We now reduced the problem to finding $(1-3i)^{117} \mod (13+2i)$ . Note that $117 = 3^2\cdot 13$ , so by a tedious but direct calculation (once we figure out $(1-3i)^{3^2}$ modulo $13+2i$ we get a nice expression, so the $13$ th power is relatively straightforward to compute), we arrive at $6-2i$ . Question: how should one think about remainders in $\mathbb{Z}[i]$ here — is $6-2i$ the only possible remainder, or are there others?
Posting this because the existing examples I could easily find, indeed, use small numbers only, and that does allow the law of small numbers to kick in and ruin examples (most notably when looking at the remainders modulo $2+i$ , when we can always find a remainder of norm $\le1$ ). Below is how I (and undoubtedly most, if not all, the other teachers) explain remainders in $\Bbb{Z}[i]$ . Write $p=13+2i$ for short. Look at the picture: The black dots are the grid of Gaussian integers. The red dots mark the elements of the ideal generated by $p$ . They also form an infinite pattern of repeating squares with sides given (as vectors) by $p$ and $ip=-2+13i$ . You see that the remainder $r=6-2i$ (marked by a green dot) falls into a square with corners at $0$ , $p$ , $-ip$ and $(1-i)p$ . The four possible remainders are the separation of $r$ from each of these: $r=6-2i$ , $r-p=-7-4i$ , $r+ip=4+11i$ , and $r-(1-i)p=-9+9i$ . All of those happen to have norms $\lt N(p)=173$ , but this is only because the green dot is not too far from any of the four corners of its square; in general, only the corner nearest to $r$ is guaranteed to produce a remainder of norm at most $N(p)/2$ .
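For completeness, the congruence claimed in the question can be machine-checked with exact integer arithmetic, representing Gaussian integers as `(re, im)` pairs (all helper names here are mine):

```python
def gmul(a, b):
    # product of Gaussian integers stored as (re, im) pairs of exact ints
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def gdivides(p, d):
    # p | d in Z[i]  iff  d * conj(p) is divisible by N(p) in both components
    n = p[0] * p[0] + p[1] * p[1]
    x = d[0] * p[0] + d[1] * p[1]
    y = d[1] * p[0] - d[0] * p[1]
    return x % n == 0 and y % n == 0

def gmod(z, p):
    # subtract the multiple of p nearest to z (rounding the exact quotient z/p)
    n = p[0] * p[0] + p[1] * p[1]
    x = z[0] * p[0] + z[1] * p[1]
    y = z[1] * p[0] - z[0] * p[1]
    qx = (2 * x + n) // (2 * n)
    qy = (2 * y + n) // (2 * n)
    return (z[0] - (qx * p[0] - qy * p[1]), z[1] - (qx * p[1] + qy * p[0]))

def gpowmod(base, e, p):
    # binary exponentiation, reducing mod p at every step
    r, base = (1, 0), gmod(base, p)
    while e:
        if e & 1:
            r = gmod(gmul(r, base), p)
        base = gmod(gmul(base, base), p)
        e >>= 1
    return r

p = (13, 2)
r = gpowmod((1, -3), 2009, p)
assert gdivides(p, (r[0] - 6, r[1] + 2))   # (1-3i)^2009 = 6-2i  (mod 13+2i)
f = gpowmod((1, -3), 172, p)
assert gdivides(p, (f[0] - 1, f[1]))       # little Fermat: N(p) - 1 = 172
```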
|number-theory|elementary-number-theory|algebraic-number-theory|gaussian-integers|
1
binomial coefficient identity $\sum_{k=0}^n {n\choose k}\sum_{j=0}^{k-1}{{n-1}\choose j}=2^{2n-2}$
While trying to solve a probability question about a coin flip game, I arrived at the expression $\sum_{k=0}^n {n\choose k}\sum_{j=0}^{k-1}{{n-1}\choose j}.$ Computing this sum for several low values of $n$ suggested that it summed to $2^{2n-2}$ , suggesting there is an identity: $$\sum_{k=0}^n {n\choose k}\sum_{j=0}^{k-1}{{n-1}\choose j}=2^{2n-2}$$ However I was not able to prove the identity by means of the Pascal recurrence relation, the Chu-Vandermonde identity $\sum_{k=0}^r {m\choose k}{n\choose {r-k}}={{m+n}\choose r}$ , nor the $\sum_{k=0}^n{n\choose k} = (1+1)^n=2^n$ identity from the binomial theorem. Can I get a hint how to proceed?
This is similar to the proof by darij grinberg in the other thread but the formula there looks slightly different. Let $$S=\sum_{k=0}^n {n\choose k}\sum_{j=0}^{k-1}{{n-1}\choose j}$$ be the original sum. Then $$S=\sum_{k=0}^n{n\choose k}\left[2^{n-1}-\sum_{j=k}^{n-1}{{n-1}\choose j}\right]$$ $$=\sum_{k=0}^n {n\choose k}\cdot 2^{n-1}-\sum_{k=0}^n{n\choose k}\sum_{j=k}^{n-1}{{n-1}\choose j}$$ $$=2^n\cdot 2^{n-1}-\sum_{k=0}^n{n\choose {n-k}}\sum_{j=k}^{n-1}{{n-1}\choose j}$$ $$=2^{2n-1}-\sum_{k'=0}^n{n\choose k'}\sum_{j=n-k'}^{n-1}{{n-1}\choose j}~({\rm with~}n-k=k')$$ $$=2^{2n-1}-\sum_{k'=0}^n{n\choose k'}\sum_{j=n-k'}^{n-1}{{n-1}\choose {n-1-j}}$$ $$=2^{2n-1}-\sum_{k'=0}^n{n\choose k'}\sum_{j'=0}^{k'-1}{{n-1}\choose j'}~({\rm with~}n-1-j=j')$$ $$=2^{2n-1}-S,$$ hence $$2S=2^{2n-1}~{\rm and~so~}S=2^{2n-2},$$ as required.
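The identity is also easy to sanity-check numerically before (or after) proving it; a small Python sketch, assuming nothing beyond `math.comb` :

```python
from math import comb

def S(n):
    # the original double sum from the question
    return sum(comb(n, k) * sum(comb(n - 1, j) for j in range(k))
               for k in range(n + 1))

# check S(n) = 2^(2n-2) for small n
for n in range(1, 12):
    assert S(n) == 2 ** (2 * n - 2)
```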
|combinatorics|summation|binomial-coefficients|
0
Tilted rectangle falling down
Rectangle $ABCD$ is tilted such that its base $AB$ makes an angle of $\theta$ with the horizontal floor. It has its vertex $A$ in contact with the horizontal floor, and the rectangle is released from this position, to fall freely and the vertex at $A$ is free to slide along the floor. What will be the rectangle's final linear and angular velocity when its base hits the floor? The rectangle has dimensions: $\overline{AB} = \ell$ and $\overline{BC} = w $ . My initial thought: The solution, I think, can be obtained from the Euler-Lagrange equation of motion . First, I define my state variables as follows: firstly, we have the linear distance of point $A$ from a fixed origin. Let's call this $x$ . Then we have the tilt angle $q$ of the base $AB$ from the $x$ axis. We can take $x(0) = 0$ and $q(0) = \theta$ . The position of the center of mass of the rectangle is $$ P = (x, 0) + ( \ell \cos(q) - w \sin(q), \ell \sin(q) + w \cos(q) ). $$ Hence, the linear velocity is $$ \dot{P} = (\dot{x}, 0
$$\frac{\partial L}{\partial \theta} - \frac{d}{dt}\left(\frac{\partial L}{\partial \dot\theta}\right)=0$$ and $L = \frac{mv^2}{2}+mgh$ , where $v = l\dot\theta$ and $h = l\sin\theta$ . So we get $$-mgl\cos\theta-ml^2\ddot\theta=0$$ $$\ddot\theta= -\frac{g}{l}\cos\theta$$ Here I substituted $\varphi = \frac{g}{l}$ , which I solved using the Laplace transform: $$\mathcal{L}(\ddot\theta)=\theta(s)s^2-Qs-w$$ where $w$ is the starting speed and $Q$ is the starting angle. $$\mathcal{L}(-\varphi\cos\theta) = -\varphi\frac{s}{s^2+1}$$ Combining them together we get: $$\theta(s) = -\varphi\frac{1}{(s^2+1)s} + \frac{Q}{s} + \frac{w}{s^2}$$ Now let's do them by parts: $$\mathcal{L}^{-1}\left(-\varphi\frac{1}{(s^2+1)s}\right) = -\varphi\,\mathcal{L}^{-1}\left(\frac{1}{(s^2+1)s}\right) = -\varphi\left(\mathcal{L}^{-1}\frac{s^2+1}{(s^2+1)s} - \mathcal{L}^{-1}\frac{s^2}{(s^2+1)s}\right) = -\varphi(1 - \cos t)$$ $$\mathcal{L}^{-1}\left(\frac{Q}{s}\right) = Q$$ $$\mathcal{L}^{-1}\left(\frac{w}{s^2}\right)=wt$$ So, $$\theta(t) = \varphi(\cos t-1) + wt + Q$$ The angle is relative to point B. I think this is correct, but if someone finds something incorrect tell me. EDIT: If you truly wanted to find the final linear and angular ve
|solution-verification|physics|euler-lagrange-equation|
0
An operator that satisfies some condition is a normal operator
QUESTION: Let $H$ be a Hilbert space and $T$ a bounded operator on $H$ with $TT^{\ast }\geqslant T^{\ast }T$ ; prove that $T$ is a normal operator ( $TT^{\ast } =T^{\ast }T$ ). I guess this question is missing a condition (maybe $T$ is a compact operator?). I don't know what " $TT^{\ast }\geqslant T^{\ast }T$ " means; here are my guesses: 1. $TT^{\ast }\geqslant T^{\ast }T\Leftrightarrow (TT^{\ast }x,x)\geqslant (T^{\ast }Tx,x)$ 2. $TT^{\ast }\geqslant T^{\ast }T\Leftrightarrow ||TT^{\ast }||\geqslant ||T^{\ast }T||$
Although in the comments @leslie townes has already answered this question (it is actually the result of a paper: Andô, T. On hyponormal operators. Proc. Amer. Math. Soc. 14, 290-291 (1963)), for the sake of the integrity of this question I will paraphrase the paper's argument as follows, which makes some sense to a newbie like me. Since the proof in the paper uses some results from the exercises in Berberian's book (Berberian S K. Introduction to Hilbert space[M]. American Mathematical Soc., 1999.), which are not difficult because of the excellent layout of that book, any results not proved below can be found in Berberian's book or are well-known results (at least to me). 1. Problem review: Let $H$ be a Hilbert space, $T$ a bounded operator on $H$ with $TT^{\ast }\leqslant T^{\ast }T$ ; prove that $T$ is a normal operator ( $TT^{\ast }=T^{\ast }T$ ). As mentioned in the comments, $TT^{\ast }\leqslant T^{\ast }T\Leftrightarrow \left( TT^{\ast }x,x\right) \leqslant \l
|functional-analysis|operator-theory|normal-operator|
0
Explain that a function has a given Jacobian matrix
Assume that $W$ is a $n \times n$ matrix with elements $w_{ij}$ , and that $\mathbf{b} \in \mathbb{R}^n$ is a column vector with elements $b_{i}$ . We know that the Jacobian matrix to the affine transformation $\mathbf{F}(\mathbf{x}) = W\mathbf{x} + \mathbf{b}$ is $\mathbf{F}'(\mathbf{x}) = W$ . We are now looking at the function $\mathbf{G}: \mathbb{R}^{(n^2+n)} \rightarrow \mathbb{R}^n$ defined by $\mathbf{G}(w_{11},\ldots,w_{1n},\ldots,w_{n1},\ldots,w_{nn},\ldots,b_1,\ldots,b_n) = W\mathbf{x} + \mathbf{b}$ We are looking at the elements in $W$ and $\mathbf{b}$ as variables (with these listed row by row, from left to right), and $\mathbf{x}$ as a constant. Explain that the Jacobian matrix to $\mathbf{G}$ is where $O$ stands for a vector or matrix with only zeros. Above, $\mathbf{x}$ is therefore repeated on each row. When I find the Jacobian matrix to the function, I get the exact same one, except I get $x_1, x_2, \ldots, x_n$ in the diagonal to the left. How do I get this to be the
Consider the more general case $\mathbf{g} =\mathbf{Wx+b}$ where $\mathbf{W} \in \mathbb{R}^{M\times N}$ . The differential reads \begin{eqnarray} d\mathbf{g} &=& (d\mathbf{W}) \mathbf{x} \\ &=& (\mathbf{x}^T \otimes \mathbf{I}_M) d \operatorname{vec}(\mathbf{W}) \\ &=& (\mathbf{x}^T \otimes \mathbf{I}_M) \mathbf{K}_{M,N} d \operatorname{vec}(\mathbf{W}^T) \\ &=& (\mathbf{I}_M \otimes \mathbf{x}^T) d \operatorname{vec}(\mathbf{W}^T) \end{eqnarray} The last line follows from the properties of permutation matrices: $$ (\mathbf{I}_M \otimes \mathbf{x}^T ) = \mathbf{K}_{1M} (\mathbf{x}^T \otimes \mathbf{I}_M) \mathbf{K}_{M,N} $$ and $\mathbf{K}_{1M}=\mathbf{I}_M$ . The Jacobian is therefore $$ \frac{\partial \mathbf{g}}{\partial \operatorname{vec}(\mathbf{W}^T)} = \mathbf{I}_M \otimes \mathbf{x}^T $$ which is the expected result.
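Since $\mathbf{G}$ is linear in the entries of $\mathbf{W}$ , one-step finite differences recover the Jacobian exactly; a small Python sketch (the dimensions $M=2$ , $N=3$ and the values of $\mathbf{x}$ , $\mathbf{b}$ are arbitrary choices of mine) confirming that the Jacobian with respect to the row-major entries of $\mathbf{W}$ is $\mathbf{I}_M \otimes \mathbf{x}^T$ :

```python
from fractions import Fraction as F

M, N = 2, 3
x = [F(2), F(-1), F(5)]
b = [F(1), F(4)]

def G(flatW):
    # flatW lists the entries of W row by row (row-major)
    W = [flatW[i*N:(i+1)*N] for i in range(M)]
    return [sum(W[i][j] * x[j] for j in range(N)) + b[i] for i in range(M)]

W0 = [F(0)] * (M * N)
base = G(W0)
# column k of the Jacobian: perturb the k-th row-major entry of W by 1
jac = []
for k in range(M * N):
    Wk = list(W0); Wk[k] += 1
    jac.append([G(Wk)[i] - base[i] for i in range(M)])
rows = [[jac[k][i] for k in range(M * N)] for i in range(M)]

# I_M kron x^T: x^T repeated along the block diagonal, zeros elsewhere
expected = [[x[0], x[1], x[2], 0, 0, 0],
            [0, 0, 0, x[0], x[1], x[2]]]
assert rows == expected
```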
|linear-algebra|
0
Prove that $f(u+v)\le f(u) +f(v)$ being $f(u)= Au\cdot u\in\mathbb R$
Let $A\in\mathbb R^{n\times n}$ be a symmetric and positive definite matrix. Consider the function $$f: u\in\mathbb R^n\mapsto f(u)= (Au\cdot u)^{1/2}\in\mathbb R.$$ Let $\|\cdot\|_1$ denote a norm and set $$n(u):= \big(\|u\|^2_1 + (f(u))^2\big)^{1/2}.$$ As an exercise, I have to prove that $$n(u+v)\le n(u) +n(v)\quad\forall u,v\in\mathbb R^n.$$ I am stuck at this point. Using the triangle inequality for $\|\cdot\|_1$ I write $$n(u+v) = \big(\|u+v\|^2_1 + (f(u+v))^2\big)^{1/2}\le \big(\|u\|^2_1 + \|v\|^2_1 +2\|u\|_1 \|v\|_1 + A(u+v)\cdot(u+v)\big)^{1/2} = \big(\|u\|^2_1 + \|v\|^2_1 +2\|u\|_1 \|v\|_1 + f(u) + f(v) +2Au\cdot v\big)^{1/2} $$ What am I doing wrong?
If $a,b$ are norms then $n=(a^2+b^2)^{1/2}$ is as well. For this question we will only bother to check triangle inequality. Since $(x^2+y^2)^{1/2}$ is the standard norm on $\mathbb R^2$ , it satisfies the triangle inequality: for all $x_1,x_2,y_1,y_2 \in \mathbb R$ , we have $((x_1+x_2)^2+(y_1+y_2)^2)^{1/2}\le (x_1^2+y_1^2)^{1/2} + (x_2^2+y_2^2)^{1/2}$ so it follows \begin{align} &n(u+v)\\&=(a(u+v)^2+b(u+v)^2)^{1/2} \\&\le ((a(u)+a(v))^2+(b(u)+b(v))^2)^{1/2} \\ &\le (a(u)^2+b(u)^2)^{1/2} + (a(v)^2+b(v)^2)^{1/2} \\&= n(u) + n(v),\end{align} as required. This applies to your $f(u) = (u^TAu)^{1/2}$ (putting $a=\|\cdot\|_1$ and $b=f$ ) since e.g. it satisfies Cauchy-Schwarz, see Norm with symmetric positive definite matrix for detail (link from above comment). for the title question with $f(u)=u^TAu$ without the ${(\phantom{u^TAu})}^{1/2}$ , if $A$ is given by $Ae_1 = \binom{100}{1}$ and $Ae_2 = \binom{1}{100}$ then note $e_1 A e_2 = e_2 A e_1 = 1 > 0$ and consequently $f(e_1+e_2)>f(e_1)+f
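A numerical illustration of both halves of this answer, with an arbitrarily chosen symmetric positive definite $A$ (a sketch, not part of the proof):

```python
import math, random

A = [[2.0, 1.0], [1.0, 3.0]]       # symmetric positive definite (my choice)

def quad(u):                        # u^T A u
    return sum(A[i][j] * u[i] * u[j] for i in range(2) for j in range(2))

def n_comb(u):                      # (||u||_1^2 + f(u)^2)^(1/2), f = sqrt(quad)
    one = abs(u[0]) + abs(u[1])
    return math.sqrt(one * one + quad(u))

# triangle inequality for the combined norm, on random samples
random.seed(1)
for _ in range(1000):
    u = [random.uniform(-5, 5) for _ in range(2)]
    v = [random.uniform(-5, 5) for _ in range(2)]
    s = [u[0] + v[0], u[1] + v[1]]
    assert n_comb(s) <= n_comb(u) + n_comb(v) + 1e-9

# counterexample for the title version f(u) = u^T A u without the square root
B = [[100.0, 1.0], [1.0, 100.0]]
g = lambda u: sum(B[i][j] * u[i] * u[j] for i in range(2) for j in range(2))
assert g([1.0, 1.0]) > g([1.0, 0.0]) + g([0.0, 1.0])   # 202 > 200
```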
|real-analysis|linear-algebra|matrices|inequality|
0
An operator that satisfies some condition is a normal operator
QUESTION: Let $H$ be a Hilbert space and $T$ a bounded operator on $H$ with $TT^{\ast }\geqslant T^{\ast }T$ ; prove that $T$ is a normal operator ( $TT^{\ast } =T^{\ast }T$ ). I guess this question is missing a condition (maybe $T$ is a compact operator?). I don't know what " $TT^{\ast }\geqslant T^{\ast }T$ " means; here are my guesses: 1. $TT^{\ast }\geqslant T^{\ast }T\Leftrightarrow (TT^{\ast }x,x)\geqslant (T^{\ast }Tx,x)$ 2. $TT^{\ast }\geqslant T^{\ast }T\Leftrightarrow ||TT^{\ast }||\geqslant ||T^{\ast }T||$
[OP posted an answer as I was converting my own comments, which differ in substance from OP's exposition, to an answer] We can show that if $T$ is compact and satisfies your condition, then $T$ is normal. Let us call a bounded operator $A$ on Hilbert space hyponormal if $AA^* \leq A^*A$ ; the hypothesis on $T$ is that the operator $T^*$ is hyponormal. Hyponormal operators generalize normal operators and there is quite a bit of general theory about them. One respect in which hyponormal operators are just like normal operators is the following (famous for normal operators, less well known for hyponormal operators). Theorem . The eigenspaces of a hyponormal operator are reducing subspaces for that operator, and eigenspaces of a hyponormal operator corresponding to distinct eigenvalues are orthogonal to one another. Proof . The eigenspaces of any operator (hyponormal or not) are invariant under that operator. If $A$ is hyponormal, then for any vector $\xi$ and $\lambda \in \mathbb{C}$ we a
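In finite dimensions there is a shortcut worth mentioning (a well-known special case, not taken from the paper): $\operatorname{tr}(T^{\ast}T-TT^{\ast})=0$ for every square matrix, so if the difference is positive semidefinite it must vanish, i.e. every hyponormal matrix is normal. A small numerical sketch of the trace identity on an arbitrary $2\times2$ example of mine:

```python
def mul(A, B):
    # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def adj(A):
    # conjugate transpose
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

T = [[1 + 2j, 3j], [0, 2 - 1j]]
A1 = mul(adj(T), T)     # T*T
A2 = mul(T, adj(T))     # TT*
D = [[A1[i][j] - A2[i][j] for j in range(2)] for i in range(2)]
trace = D[0][0] + D[1][1]
# the trace of the self-commutator is always zero
assert abs(trace) < 1e-12
```

A trace-zero positive semidefinite matrix has all eigenvalues zero, hence is zero; this is exactly the step that fails in infinite dimensions, where the trace need not exist.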
|functional-analysis|operator-theory|normal-operator|
1
Show that the interval $[0, 1]$ cannot be partitioned into two disjoint sets $A$ and $B$ such that $B = A + a$ for some real number $a$.
I stumbled upon this problem on Putnam and Beyond by Gelca et al. It’s problem 9 in chapter 1.1. I have tried to do proof by contradiction by supposing that in fact $[0, 1]$ can be partitioned into two disjoint sets $A$ and $B$ such that $B = A + a$ for some real number $a$ . Assume $a$ is positive W.L.O.G. The first thing I did was to try to split $[0, 1]$ into two disjoint sets. There exists $k > 0$ such that $[0, 1] = [0, k] \cup (k, 1]$ . Let $A = [0, k]$ . I tried to reach a contradiction by showing that there is no real positive number $a$ such that $[0 + a, k + a] = (k, 1]$ . So, $a > k$ and $k + a = 1$ . Combining these two conditions, $a > \frac{1}{2}$ and $k < \frac{1}{2}$ . But there seems to be no problem with $a$ . And I can’t think of any other way of contradicting the statement. A hint instead of a solution would be much appreciated.
It is trivial to show the statement for $|a|\ge1$ . W.L.O.G. we can say that $0 < a < 1$ . Now, let's suppose that sets $A$ and $B$ satisfying the given conditions exist. 1. case $a\le\frac{1}{2}$ : Considering that $A$ and $B$ are disjoint, the fact that $a>0$ and that the element $1$ has to be in one of them, it must be $1\in B$ , which means that $\max A=1-a$ . Similarly $\min B=a$ . This implies not only $[0,a)\subset A$ , but also $2a\in A$ . Continuing like this we can see that all subsets of $[0,1]$ of the form $[2ka,(2k+1)a)$ are subsets of $A$ and all subsets of the form $[(2k+1)a,(2k+2)a)$ are subsets of $B$ . Keep in mind that $k\in \mathbb{N}$ , until $2ka=1$ . This means that there exists some $n\in \mathbb{N}$ s.t. $2na=1$ , which would mean $1\in A$ , but of course that is impossible, therefore a contradiction. 2. case $a>\frac{1}{2}$ : You can simply show that both $A$ and $B$ in this case would have to be intervals, since $\min B=a>\frac{1}{2}$ and $2a\notin A$ , because if it were that $A$ wo
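One way to phrase the obstruction behind this argument (my framing, machine-checkable for sample rational $a$ ): since $B=A+a$ , the points $x, x+a, x+2a,\dots$ that lie in $[0,1]$ must alternate $A,B,A,B,\dots$ , starting in $A$ and ending in $B$ , so every such chain needs even length; yet for each $a$ below some chain has odd length. A Python sketch using exact rationals:

```python
from fractions import Fraction as F

def chain_len(x, a):
    # number of points x, x+a, x+2a, ... that lie in [0, 1]
    n = 0
    while x <= 1:
        n += 1
        x += a
    return n

for a in [F(1, 2), F(3, 10), F(2, 7), F(5, 9)]:
    starts = [F(0)]
    xstar = 1 - (1 // a) * a      # threshold where the chain length drops by 1
    if 0 <= xstar < a:
        starts.append((xstar + a) / 2)
    lengths = [chain_len(x, a) for x in starts]
    # some chain has odd length => no partition with B = A + a
    assert any(L % 2 == 1 for L in lengths), a
```

For example, with $a=\frac12$ the chain $0, \frac12, 1$ has length $3$ , which is the contradiction of case 1 in miniature.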
|elementary-set-theory|recreational-mathematics|
1
Prove by contradiction that a circle chord is no longer than its diameter
Can anyone help me with this homework question of mine? I'm actually new to proofs. Here's the question, "Prove, by contradiction, that no chord of a circle is longer than a diameter." My only knowledge on proof by contradiction is on conditional statements, for instance, given $P\rightarrow Q$ , then the proof by contradiction will first assume that the hypothesis $(P)$ is true with the denial / negation of the conclusion $(Q)$ , then work your way to find a contradiction to the assumption.
"My only knowledge on proof by contradiction is on conditional statements, for instance, given (P)→(Q), the proof by contradiction will first assume that the hypothesis (P) is true together with the denial / negation of the conclusion (Q), then work your way to find a contradiction to the assumption." That's a good start. Let's see what this looks like with the simple illustration exercise provided. To avoid a technicality that is not helpful here, let's rely on a drawing: With obvious notations, (P): " $a$ is a chord of a circle of diameter $c=DD'$ "; (Q): " $a\leq c$ ". As simple as it is, demonstrating $P\implies Q$ still requires some knowledge. Let's use the following facts: (1) $CC'B$ is a right triangle; (2) therefore, in $CC'B$ , $BC\leq CC' \color{red}{(*)}$ . As you wrote, let us "assume that (P) is true with the denial non(Q) $$\text{(nonQ):}\; c<a$$ of the conclusion (Q), then work your way to find a contradiction." Here, it's immediate to contradict $\color{red}{(*)}$ and to conc
|proof-writing|
0
Action of $SL_2(\mathbb{Z})$ on the projective plane over $\mathbb{Z}_p$
The group $SL_2(\mathbb{Z})$ act on the projective spaces $P(\mathbb{Z}_p)$ and the upper half of the complex plane $\mathbb{H}$ by linear fractional transformations. I am wondering whether there is a direct connection between these actions. For example: Is there a natural projection $\mathbb{H}\rightarrow P(\mathbb{Z}_p)$ or an embedding $P(\mathbb{Z}_p)\rightarrow\mathbb{H}$ which agrees with the action? Also, the points of $P(\mathbb{Z}_p)$ could be placed as the vertices of a $p$ -gon plus a point at infinity; is there a connection with the disk model of $\mathbb{H}$ so that the natural action on the disk induces the action on $P(\mathbb{Z}_p)$ ?
The intermediate is the projective line over $\mathbb{Q}$ . There is a natural projection from the projective line over $\mathbb{Q}$ onto $P(\mathbb{Z}_p)$ , and also the projective line over $\mathbb{Q}$ is naturally embedded into the boundary of $\mathbb{H}$ .
|finite-fields|group-actions|hyperbolic-geometry|projective-space|
0
Matrices whose Linear Combinations are All Singular
I'd like to know if the following problem of elementary linear algebra is already solved / solvable. For two (singular) $n\times n$ matrices $P$ and $Q$, if $\det(\lambda P+\mu Q)=0$ for any $\lambda,\mu\in\mathbb{R}$, what are conditions on $P$ and $Q$?
For matrices $P, Q \in \mathbb{R}^{n \times n}$ , the following is a polynomial of degree at most $n$ : $$\det(\lambda P + (1-\lambda) Q) = \det(Q + \lambda (P-Q)) = \sum_{k=0}^n \xi_k(P, Q) \cdot \lambda^k$$ Therefore the following is the desired criterion for the determinant to vanish identically: $$ \text{For all } k \in \{0,1, \ldots n\}, \quad \xi_k(P, Q) = 0$$ The quantities $\xi_k$ are directly computed to be: $$ \xi_k(P, Q) = \sum_{\substack{\alpha \in \{0, 1\}^n \\ \sum_i \alpha_i = k}} \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n \bigg[ \alpha_i P_{i, \sigma(i)}+ (1-2\alpha_i) Q_{i, \sigma(i)} \bigg]$$ Note that each $\xi_k(P, Q)$ is a degree- $n$ polynomial in the entries of $P$ and $Q$ . Therefore, the collection of all $(P, Q) \in \mathbb{R}^{2n^2}$ whose linear combinations are all singular lies on an algebraic variety defined by $(n+1)$ polynomial equations of degree $n$ . Note that its degree of freedom is at least $2n^2 - n - 1$ . (I'm the same person who posted the question, just 10 years into the
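Since $\det(Q+\lambda(P-Q))$ has degree at most $n$ in $\lambda$ , it vanishes identically iff it vanishes at $n+1$ sample points. A small Python sketch contrasting a pair with a common zero column (every combination singular) against a pair of singular matrices whose combinations are not all singular (the example matrices are my own):

```python
from itertools import permutations

def det(M):
    # determinant via the Leibniz formula (fine for small n)
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i) if p[j] > p[i])
        term = (-1) ** inv
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

n = 3
def pencil(P, Q, lam):
    return [[Q[i][j] + lam * (P[i][j] - Q[i][j]) for j in range(n)]
            for i in range(n)]

# shared zero first column: every combination is singular
P = [[0, 1, 2], [0, 3, 4], [0, 5, 6]]
Q = [[0, 7, 1], [0, 2, 8], [0, 1, 1]]
# vanishing at n+1 = 4 points forces all coefficients xi_k to vanish
assert all(det(pencil(P, Q, lam)) == 0 for lam in range(4))

# contrast: two singular matrices whose pencil is not identically singular
P2 = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
Q2 = [[0, 0, 0], [0, 1, 0], [0, 0, 1]]
assert det(P2) == 0 and det(Q2) == 0
assert det(pencil(P2, Q2, 2)) != 0   # 2*P2 - Q2 = diag(2, 1, -1)
```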
|linear-algebra|
0
Existence of a Subset $S$ of $\mathbb{N}$ Where All but Finitely Many Natural Numbers Are Sums of Consecutive Elements of $S$
I am pondering a question in number theory that touches upon the representation of natural numbers as sums of consecutive elements from a subset S of $\mathbb{N}$ . Specifically, the question is: Does there exist a subset $S$ of $\mathbb{N}$ such that all but finitely many natural numbers are the sum of multiple consecutive elements (see addendum) in $S$ ? Considering the case where $S = \mathbb{N}$ , the natural numbers that cannot be represented as a sum of multiple consecutive elements in $S$ are precisely the powers of 2, indicating that there are infinitely many such numbers. However, does this observation extend to every possible subset $S$ of $\mathbb{N}$ ? Intuitively, I believe this might be the case for any subset $S$ , but I'm at a loss for how to approach proving or disproving this conjecture. I'm eager to see if anyone can provide insights, suggest methods for proof, or offer counterexamples. Your expertise and thoughts on this matter would be greatly appreciated. By sum o
I conjecture that there is no such subset $S$ . It follows from a more concrete Conjecture. For each subset $S$ of $\mathbb N$ and each $n\in\mathbb N$ , the set of natural numbers which are at most $2^n$ and are not sums of multiple consecutive elements of $S$ has size at least $n$ . I wrote a program to check the Conjecture for small $n$ , and it is already checked for $n=4$ and $n=5$ . Technical remarks. Adding $1$ to $S$ , if needed, we can assume that $1\in S$ , that is $s_1=1$ . Moreover, it is convenient to add to $S$ also an element $s_0=0$ . Now for each nonnegative integer $n$ we put $t_n=\sum_{i=0}^n s_i$ . Then $t_0=0$ , $t_1=1$ , and $t_{n+1}-t_n> t_n-t_{n-1}$ for each natural $n$ . The sums of multiple consecutive elements of $S$ are exactly the differences $t_n-t_m$ for nonnegative integers $n$ and $m$ such that $n-m\ge 2$ .
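The observation about $S=\mathbb N$ quoted in the question is easy to confirm by brute force (a Python sketch; the bound $100$ is an arbitrary choice):

```python
LIMIT = 100
representable = set()
for a in range(1, LIMIT + 1):
    s = a
    for b in range(a + 1, LIMIT + 1):   # sums a + (a+1) + ... + b, b > a
        s += b
        if s > LIMIT:
            break
        representable.add(s)

not_rep = set(range(1, LIMIT + 1)) - representable
# the non-representable numbers up to 100 are exactly the powers of 2
assert not_rep == {1, 2, 4, 8, 16, 32, 64}
```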
|combinatorics|number-theory|summation|natural-numbers|
0
If $f_n\to f$ pointwise on $[0,1]$, then $\sup_{[0,1]}f_n\to\sup_{[0,1]}f$ pointwise?
Let $\{f_n(x)\}_{n=1}^\infty$ be a sequence of bounded functions that converge pointwise on $[0,1]$ to some bounded function $f(x)$ . I am trying to determine whether it is true that $\sup_{x\in[0,1]}f_n(x)\to\sup_{x\in[0,1]}f(x)$ pointwise on $[0,1]$ . I think it is, and here is my proof: Let $x\in[0,1]$ . Fix $\epsilon>0$ . Obtain some $N$ so that for any $n>N$ , $|f_n(x)-f(x)|<\epsilon$ . Then, $f_n(x)-f(x)<\epsilon$ and $f(x)-f_n(x)<\epsilon$ . Then, we have $$f_n(x)-\sup_{x\in[0,1]}f(x)\le f_n(x)-f(x)<\epsilon$$ $$f(x)-\sup_{x\in[0,1]}f_n(x)\le f(x)-f_n(x)<\epsilon$$ It proves $|\sup_{x\in[0,1]}f_n(x)-\sup_{x\in[0,1]}f(x)|<\epsilon$ . Did I make any mistakes?
Presumably the intended question was If $f_n\to f$ pointwise on $[0,1]$ , must $\sup_{[0,1]}f_n\to\sup_{[0,1]}f$ ? If so, the answer is "no". For a counterexample, if we let $$ f_n(x)= \begin{cases} 1&\text{if}\;x={\large{\frac{1}{n}}}\\[4pt] 0&\text{otherwise}\\ \end{cases} $$ then $f_n\to 0$ pointwise, but $\sup_{[0,1]} f_n=1$ for all $n$ .
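The counterexample can be checked mechanically (a Python sketch using exact rationals):

```python
from fractions import Fraction as F

def f(n, x):
    # f_n is the indicator of the single point x = 1/n
    return 1 if x == F(1, n) else 0

# sup over [0,1] of f_n is 1, attained at x = 1/n
assert all(f(n, F(1, n)) == 1 for n in range(1, 100))

# but for any fixed x, f_n(x) = 1 for at most one n, so f_n(x) -> 0
x = F(1, 7)
assert [n for n in range(1, 1000) if f(n, x) == 1] == [7]
assert all(f(n, F(2, 3)) == 0 for n in range(1, 1000))   # x = 2/3 is never hit
```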
|real-analysis|convergence-divergence|pointwise-convergence|
0
The non-differentiability of the Weierstrass function at integers
I came across the following problem while studying series of functions: Prove that $S(x)=\sum_{n=1}^\infty \frac{\sin(2^n x)}{2^n}$ is not differentiable at integers. I am aware that this series is commonly known as the Weierstrass function , and according to Hardy's famous work in 1916 Hardy's paper , it is a typical example of a function that is continuous everywhere but differentiable nowhere. I also found a method in another article by Jon Johnsen https://zhangyk8.github.io/teaching/file/Con_nowhere_diff.pdf that is proven using Fourier analysis. But this homework question comes before the content on Fourier analysis, and the teacher does not recommend using the Fourier analysis method. It appears that this problem may be a specific case of $a=1/2, b=2$ , as mentioned on page 309 of Hardy's paper. However, I am still struggling to understand how the application of Lemma 2.12 in that paper leads to the final conclusion that $f(x)$ cannot possess a finite differential coefficient. It
The keyword is lacunary series: power series or Fourier series whose coefficients are zero almost everywhere except for a subseries with distances growing exponentially. The proof of continuity by a dominating series is easy; the proof of non-differentiability is difficult, resting on the fact that any sequence convergent to $x$ produces series of values oscillating by the order of the basic oscillating function used. By a first search, there is an elaborated PDF seminar article at RWTH Aachen in German. It's easy via Google Translate to produce an English version. https://en.wikipedia.org/wiki/Lacunary_function is helpful for understanding the phenomenon of the construction of analytic functions, convergent in the unit disc, nowhere continuous on the unit circle, and therefore having no analytic continuation beyond their primary domain of holomorphy.
|real-analysis|calculus|derivatives|
0
Filtration of separable Hilbert spaces
Let $\mathcal{H}$ be a separable Hilbert space. Let $(\mathcal{S}_\alpha)_{\alpha\in\Lambda}$ be a decreasing net of closed subspaces of $\mathcal{H}$ , i.e. such that for each $\alpha,\beta\in\Lambda$ , there exist $\gamma\ge\alpha,\beta$ such that $\mathcal{S}_\gamma\subseteq \mathcal{S}_\alpha\cap \mathcal{S}_\beta$ . Using separability of $\mathcal{H}$ , can we conclude that $(\mathcal{S}_\alpha)$ admits a subnet of countable cardinality?
No. For example, consider a free ultrafilter $\mathcal{U}$ on $\mathbb{N}$ . For each subset $A$ of $\mathbb{N}$ , it corresponds to a subset of the standard basis of $\ell^2$ and therefore corresponds to a closed subspace of $\ell^2$ . Thus, $\mathcal{U}$ induces a decreasing net of closed subspaces of $\ell^2$ . There can be no countable subnet because if such a countable subnet exists, then $\mathcal{U}$ , as a point in $\beta\mathbb{N}$ , would admit a countable local base, which is impossible. See https://mathoverflow.net/questions/464384/points-in-the-stone-cech-compactification-are-intersection-of-open-sets .
|hilbert-spaces|nets|
1
How to create AND with XOR and NOT
I can only create XOR and NOT gates; can I use them to create an AND gate? I was expecting this to be easy to find, but I was unable to do so. I'm quite unsure of what the answer might be, since unlike AND and OR gates, XOR has no output that is generated by only one input (true is returned for both 1+0 and 0+1, while false for 1+1 and 0+0).
A bit late, but this is another reason: If you create an algebra over the field $Z/2Z$ , with as multiplication AND, then the addition is XOR and $\neg p$ is simply $1 + p$ , which uses addition. It should be obvious that multiplication can’t be defined through addition, which means you can’t form AND with only XOR and NOT.
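The standard combinatorial proof formalizes this intuition: every circuit built from XOR and NOT computes an affine function over GF(2), i.e. $f(x,y)=ax\oplus by\oplus c$ , and AND is not affine. A brute-force Python sketch of the closure argument (my own framing):

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))
# all 8 affine truth tables f(x, y) = (a AND x) XOR (b AND y) XOR c
affine = {tuple((a & x) ^ (b & y) ^ c for (x, y) in inputs)
          for a in (0, 1) for b in (0, 1) for c in (0, 1)}

# closure: NOT and pointwise XOR of affine tables stay affine,
# so any XOR/NOT circuit computes an affine function
for t in affine:
    assert tuple(1 - v for v in t) in affine
for s in affine:
    for t in affine:
        assert tuple(u ^ v for u, v in zip(s, t)) in affine

# AND's truth table is not among them
AND = tuple(x & y for (x, y) in inputs)   # (0, 0, 0, 1)
assert AND not in affine
```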
|boolean-algebra|
0
partial derivative of geometric brownian motion wrt time? $S_t = S_0 e^{\mu t - \frac{1}{2}\sigma^2t + \sigma W_t}$
If geometric brownian motion is given by: $$S_t = S_0 e^{\mu t - \frac{1}{2}\sigma^2t + \sigma W_t}$$ Then what would $\frac{\partial S_t}{\partial t}$ be?
With respect to Itô's differentiation formula, the first thing is to treat $W_t$ as the variable $x$ (since $W_t$ is not differentiable in $t$ ), so $$g(t,x)= e^{\mu t - \frac{1}{2}\sigma^2t + \sigma x}$$ then using Itô's differentiation formula as below $$dg(t,x)=\frac{\partial g}{\partial t}dt +\frac{\partial g}{\partial x}dx+\frac 12 \frac{\partial^2 g}{\partial x^2}(dx)^2\\ dg(t,x)=e^{\mu t - \frac{1}{2}\sigma^2t + \sigma x}\left(\mu - \frac{1}{2}\sigma^2\right)dt +e^{\mu t - \frac{1}{2}\sigma^2t + \sigma x}\,\sigma\,dx+\frac 12 e^{\mu t - \frac{1}{2}\sigma^2t + \sigma x}\,\sigma^2(dx)^2\\$$ now you can simplify by factoring out $e^{\mu t - \frac{1}{2}\sigma^2t + \sigma x}$ and using the fact $$\boxed {(dx)^2=(dW_t)^2=dt}$$ so $$dg(t,x)=e^{\mu t - \frac{1}{2}\sigma^2t + \sigma x}\left(\left(\mu - \frac{1}{2}\sigma^2\right)dt+\sigma\,dx+\frac 12\sigma^2(dx)^2\right) \Big|_{x=W_t}\\\to\\ dg(t,x)=e^{\mu t - \frac{1}{2}\sigma^2t + \sigma W_t}\left(\left(\mu - \frac{1}{2}\sigma^2\right)dt+\sigma\,dW_t+\frac 12\sigma^2\underbrace{(dt)}_{(dx)^2=(dW_t)^2=
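A Monte-Carlo sanity check of the closed form is straightforward (a Python sketch; the parameter values are arbitrary choices of mine): the $-\tfrac12\sigma^2 t$ correction in the exponent is exactly what makes $E[S_t]=S_0e^{\mu t}$ .

```python
import math, random

random.seed(0)
S0, mu, sigma, t = 1.0, 0.05, 0.2, 1.0
N = 200_000

total = 0.0
for _ in range(N):
    W = random.gauss(0.0, math.sqrt(t))          # W_t ~ N(0, t)
    total += S0 * math.exp((mu - 0.5 * sigma * sigma) * t + sigma * W)
mean = total / N

# E[S_t] = S0 * exp(mu * t); generous tolerance for Monte-Carlo noise
assert abs(mean - S0 * math.exp(mu * t)) < 0.01
```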
|stochastic-processes|stochastic-calculus|brownian-motion|martingales|stochastic-differential-equations|
0
Solving $2mx^{2m+1}-(2m+1)x^{2m}+1=0$
I set up a problem for myself and came to this equation and I'm wondering if it can be solved algebraically. The variable $m$ is a whole number and we're solving for $x$ . $$2mx^{2m+1}-(2m+1)x^{2m}+1=0$$ One solution is $x=1$ but I believe there should be another root in $(-\infty,0)$ .
As a hint $$2mx^{2m+1}-(2m+1)x^{2m}+1=0 \\2mx^{2m+1}-2mx^{2m}-x^{2m}+1=0 \\2m(x^{2m+1}-x^{2m})-(x^{2m}-1)=0\\2mx^{2m}(x-1)-(x^{2m}-1)=0\\$$ now factor out $(x-1)$ , or take $f(x)=2mx^{2m+1}-(2m+1)x^{2m}+1$ $$f'(x)=2m(2m+1)x^{2m}-(2m+1)2mx^{2m-1}=\\2m(2m+1)x^{2m-1}(x-1)=0\\ \to f'=0 \to x=0,1 \\ f(x)=(x-1)^2(\text{something})$$ You can draw the sign table of $f'(x)$ , and finally, with respect to the sign of $f'$ , you have $$x=1,1 \ \text{or} \ (x-1)^2$$ and one other negative root.
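Both facts (a double root at $x=1$ and one further real root in the negatives) can be confirmed numerically; a Python sketch using bisection on $(-1,0)$ , since $P(-1)=-4m<0<1=P(0)$ :

```python
def P(x, m):
    return 2*m * x**(2*m + 1) - (2*m + 1) * x**(2*m) + 1

def dP(x, m):
    return 2*m * (2*m + 1) * x**(2*m) - (2*m + 1) * 2*m * x**(2*m - 1)

for m in range(1, 6):
    # x = 1 is a double root: P(1) = P'(1) = 0
    assert P(1, m) == 0 and dP(1, m) == 0
    # bisection for the negative root
    lo, hi = -1.0, 0.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if P(mid, m) < 0:
            lo = mid
        else:
            hi = mid
    root = (lo + hi) / 2
    assert -1 < root < 0 and abs(P(root, m)) < 1e-9
```

For $m=1$ the negative root is exactly $-\tfrac12$ , since $2x^3-3x^2+1=(x-1)^2(2x+1)$ .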
|calculus|polynomials|roots|
0
Solving diophantine equations: Finding integer solutions: $(x+1)^2+(x+2)^2+...+(x+2001)^2=y^2$
I would like to find all integer solutions to the Diophantine equation. I wonder if there is a way to solve it using Fermat. $(x+1)^2+(x+2)^2+\ldots+(x+2001)^2=y^2$
I think your problem does not have a solution. To see this, notice that $x^2 \equiv 1 \ \text{mod} \ 3$ whenever $x$ is not divisible by $3$ . Further, $2001$ is divisible by $3$ , thus, on the left hand side you have $$\sum_{i=1}^{2001} (x+i)^2 \equiv \frac{2001}{3} \sum_{i=1}^{3} (x+i)^2 \equiv \frac{2001}{3} \sum_{i=1}^{3} i^2 \equiv 1334 \equiv 2 \quad \text{mod} \ 3$$ while the right hand side, $y^2$ , gives remainder $0$ or $1$ mod $3$ . As per your question, Fermat's (presumably little) theorem says that for a prime $p$ and $a$ not a multiple of $p$ , $a^{p-1} \equiv 1 \ \text{mod} \ p$ , which would give you the remainders I talked about above. In my opinion this is a bit overkill for the question. If you fancy, you can also generalise this argument to many further cases. You might wish to establish for which other numbers instead of $2001$ this argument still goes through. You can also establish similar equations for powers other than 2, i.e. the solvability of an equation of the form $$\sum_
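The mod-3 obstruction is easy to verify by direct computation (a Python sketch):

```python
# the sum of 2001 consecutive squares is always = 2 (mod 3) ...
for x in range(-30, 30):
    total = sum((x + i) ** 2 for i in range(1, 2002))
    assert total % 3 == 2

# ... while squares are = 0 or 1 (mod 3), so no solution exists
assert {y * y % 3 for y in range(100)} == {0, 1}
```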
|elementary-number-theory|diophantine-equations|
0
$S = \left \{ (x_1,x_2,x_3) \in\mathbb{R}^3 : x_1^2+x_2^2-x_3^2 =\lambda\right \} $. Find the $\lambda$ for which $S$ is a submanifold of $\Bbb{R}^3$
I am learning alone and have just started studying manifolds; below is a question. I want to be sure that my answer, and so my understanding, is correct. Question: Let $ \lambda \in \mathbb{R}$ and let $S = \left \{ (x_1,x_2,x_3) \in \mathbb{R}^3 : x_1^2+x_2^2-x_3^2 = \lambda \right \} $ . Find the $\lambda$ for which $S$ is a submanifold of $\mathbb{R}^3 $ . Answer: 1- According to the definition I have learned, in order for $S$ to be a manifold of $ \mathbb{R}^3 $ of dimension $p \in \left \{ 0;1;2;3 \right \} $ , $S$ must satisfy: $\forall a \in S $ there exists at least one neighborhood $U_a$ of $a$ and at least one mapping $ f \in C^1(U_a; \mathbb{R}^{3-p} )$ with a differential $D\left \{ f(x) \right \}$ (so the mapping $f$ is differentiable) s.t. $rank ( D\left \{ f(x) \right \} ) = 3 - p$ and $S \cap U_a = \left \{ f = 0 \right \} $ . With $S \cap U_a = \left \{ f = 0 \right \} $ meaning that all the points $x' \in \mathbb{R}^3 $ that are in $ S $ and in the neighborhood $U
If you define $ F: \mathbb R^3 \to \mathbb R$ as the map $F(x_1, x_2,x_3) = x_1^2 +x^2_2 - x_3^2$ then $S$ can be viewed as $S=F^{-1}(\lambda)$ . By a standard result in Differential Geometry, $S$ is an embedded submanifold if and only if $\lambda$ is a regular value for $F$ . Being a regular value means that, for any $x \in F^{-1}(\lambda)$ , $dF_x$ is a surjective linear map. As you correctly pointed out, the only point where the differential does not have full rank is $x = 0$ , i.e. the origin. So you just have to ensure that the origin does not lie in $S$ ; but it is clear that $0 \in F^{-1}(\lambda)$ if and only if $\lambda = 0$ . This proves that $S$ is an embedded submanifold (so itself a manifold) if and only if $\lambda \neq 0$ .
|geometry|differential-geometry|solution-verification|manifolds|
1
Solving $2mx^{2m+1}-(2m+1)x^{2m}+1=0$
I set up a problem for myself and came to this equation and I'm wondering if it can be solved algebraically. The variable $m$ is a whole number and we're solving for $x$ . $$2mx^{2m+1}-(2m+1)x^{2m}+1=0$$ One solution is $x=1$ but I believe there should be another root in $(-\infty,0)$ .
Put $P(x)=2mx^{2m+1}-(2m+1)x^{2m}+1$ . Counting each root with its multiplicity you have $2m+1$ roots in $\mathbb C$ . $P'(x)=2m(2m+1)x^{2m-1}(x-1)$ so assuming $m>0$ you can see that $P$ is strictly increasing on $]-\infty;0]$ and $P(x)>0$ on $[0;1[\cup]1;+\infty[$ as it reaches its minimum $0$ on $[0;+\infty[$ at $x=1$ . $\lim_{x\rightarrow -\infty}P(x)=-\infty$ and $P(0)=1$ so by the intermediate value theorem $P$ has exactly one real root in $]-\infty;0[$ , and as $1$ is a root of $P'$ with multiplicity $1$ it is a root of $P$ with multiplicity $2$ . So for any $m>0$ you have two real roots: $1$ and another root $a_m<0$ that depends on $m$ , and if $m>1$ you also have roots belonging to $\mathbb C\setminus\mathbb R$ .
|calculus|polynomials|roots|
0
Solve Determinant Equation
Identify $t\in\mathbb{R}$ such that (basically it is a determinant of a block matrix of size $n$ ): $$ \left| \begin{array}{cc} \mathbf{x}+t\mathbf{y} & \mathbf{B} \\ \mathbf{c} & \mathbf{D} \end{array} \right| = 0 $$ where $\mathbf{x}\in\mathbb{R}^{m\times1}$ , $\mathbf{y}\in\mathbb{R}^{m\times1}$ , $\mathbf{c}\in\mathbb{R}^{(n-m)\times1}$ are column vectors, and $\mathbf{B}\in\mathbb{R}^{m\times (n-1)}$ , $\mathbf{D}\in\mathbb{R}^{(n-m)\times (n-1)}$ are matrices, and $m$ is an integer with $1 \le m \le n$ , and $t\in\mathbb{R}$ is the unknown variable we want to solve. I am not sure if the solution exists in general cases, but in my cases, solution $t$ exists. I encountered such a problem when I want to write a computer program that finds a proper value $t$ that makes some points coplanar, such a geometric problem can be finally reduced to the above mathematical problem after some calculation. The problem is quite challenging, because I would like to find a closed-form solution such
Since the first column splits as $\binom{\mathbf{x}+t\mathbf{y}}{\mathbf{c}}=\binom{\mathbf{x}}{\mathbf{c}}+t\binom{\mathbf{y}}{\mathbf{0}}$ , let $X:=\begin{pmatrix} \mathbf{x}& \mathbf{B} \\ \mathbf{c}& \mathbf{D} \\ \end{pmatrix}$ and let $Y:=\begin{pmatrix} \mathbf{y}& \mathbf{B} \\ \mathbf{0}& \mathbf{D} \\ \end{pmatrix}$ . By multilinearity of the determinant in the first column it holds: $$0=\det\begin{pmatrix} \mathbf{x}+t \mathbf y& \mathbf{B} \\ \mathbf{c}& \mathbf{D} \\ \end{pmatrix}= \det X + t \det Y \Leftrightarrow t\det Y=-\det X $$ So if $\det Y=\det X=0$ , any $t\in \mathbb R$ works; if $\det Y=0\neq \det X$ , no $t\in \mathbb R$ works; if $\det Y\neq 0$ , then $t=-\frac{\det X}{ \det Y}$ .
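The closed form can be verified numerically on random exact data: take $X$ with first column $(\mathbf{x};\mathbf{c})$ and $Y$ with first column $(\mathbf{y};\mathbf{0})$ , then check that $t=-\det X/\det Y$ makes the assembled matrix singular. A Python sketch (the dimensions $n=4$ , $m=2$ are arbitrary):

```python
import random
from fractions import Fraction as F
from itertools import permutations

def det(A):
    # exact determinant via the Leibniz formula (fine for small n)
    n = len(A)
    total = F(0)
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i) if p[j] > p[i])
        term = F((-1) ** inv)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

random.seed(3)
n, m = 4, 2
while True:   # resample until det Y != 0, so a unique t exists
    x = [F(random.randint(-5, 5)) for _ in range(m)]
    y = [F(random.randint(-5, 5)) for _ in range(m)]
    c = [F(random.randint(-5, 5)) for _ in range(n - m)]
    B = [[F(random.randint(-5, 5)) for _ in range(n - 1)] for _ in range(m)]
    D = [[F(random.randint(-5, 5)) for _ in range(n - 1)] for _ in range(n - m)]
    X = [[x[i]] + B[i] for i in range(m)] + [[c[i]] + D[i] for i in range(n - m)]
    Y = [[y[i]] + B[i] for i in range(m)] + [[F(0)] + D[i] for i in range(n - m)]
    if det(Y) != 0:
        break

t = -det(X) / det(Y)
M = [[x[i] + t * y[i]] + B[i] for i in range(m)] + \
    [[c[i]] + D[i] for i in range(n - m)]
assert det(M) == 0
```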
|linear-algebra|matrices|numerical-methods|determinant|
1
How many natural numbers $a\le100$ are there such that $a=[\frac a2]+[\frac a3]+[\frac a5]$, where [.] represents the greatest integer function?
A natural number $a$ is selected from the first $100$ natural numbers. The probability that $a=[\frac a2]+[\frac a3]+[\frac a5]$ , where [.] represents greatest integer function, is $\frac mn$ where $m,n$ are coprime then $(m+n)$ is equal to My Attempt: Let $a=30n+\gamma$ , where $0\le\gamma\lt30$ Putting this in the given equation, I get, $n=\gamma-[\frac{\gamma}{2}]-[\frac{\gamma}{3}]-[\frac{\gamma}{5}]$ $\gamma=0$ doesn't satisfy but $\gamma=1, 2, ..., 29$ satisfy. So, the probability is $\frac{29}{100}$ . Is this correct?
The answer looks correct. But you probably should demonstrate that $a\le 100$ when $\gamma=1,2,…,29$ . It is sufficient to show that $n\le 2$ . It is true since $$n=\gamma-[\gamma/2]-[\gamma/3]-[\gamma/5]\le $$ $$\le \gamma-(\gamma/2-1/2)-(\gamma/3-2/3)-(\gamma/5-4/5)=$$ $$=59/30-\gamma/30\le 2.$$ It is also worth showing that $n\ge 0$ : $$n=\gamma-[\gamma/2]-[\gamma/3]-[\gamma/5]\ge$$ $$\ge\gamma-\gamma/2-\gamma/3-\gamma/5=-\gamma/30.$$ Since $\gamma<30$ , we get $n\ge-\gamma/30>-1$ , and as $n$ is an integer, $n\ge 0$ . Finally, it would be good to show that different values of $\gamma$ give different values of $a$ . If different values of $\gamma$ lead to different values of $n$ then the values of $a$ will be different since $\gamma<30$ . Otherwise, if the values of $n$ are the same, then the values of $a=30n+\gamma$ obviously differ.
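The count is also easy to brute-force in a couple of lines (for non-negative integers Python's floor division agrees with $[\cdot]$ ):

```python
# Brute-force check over the first 100 naturals.
solutions = [a for a in range(1, 101) if a == a//2 + a//3 + a//5]
print(len(solutions))   # 29
```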
|probability|combinatorics|discrete-mathematics|solution-verification|contest-math|
1
How many natural numbers $a\le100$ are there such that $a=[\frac a2]+[\frac a3]+[\frac a5]$, where [.] represents the greatest integer function?
A natural number $a$ is selected from the first $100$ natural numbers. The probability that $a=[\frac a2]+[\frac a3]+[\frac a5]$ , where [.] represents greatest integer function, is $\frac mn$ where $m,n$ are coprime then $(m+n)$ is equal to My Attempt: Let $a=30n+\gamma$ , where $0\le\gamma\lt30$ Putting this in the given equation, I get, $n=\gamma-[\frac{\gamma}{2}]-[\frac{\gamma}{3}]-[\frac{\gamma}{5}]$ $\gamma=0$ doesn't satisfy but $\gamma=1, 2, ..., 29$ satisfy. So, the probability is $\frac{29}{100}$ . Is this correct?
Yes, the answer is $\frac{29}{100}$ . There are a few details that need to be taken care of. You are setting $\gamma$ to be one of $\{1,2,\dots,29\}$ and each value of $\gamma$ generates a value for $n$ , which then determines $a=30n+\gamma$ . Why is $\gamma\ge \left\lfloor \frac{\gamma}{2} \right\rfloor + \left\lfloor \frac{\gamma}{3} \right\rfloor + \left\lfloor \frac{\gamma}{5} \right\rfloor$ for all $\gamma \in \{1,2,\dots,29\}$ ? What about two different $\gamma$ 's giving the same value for $a$ ? Is the condition $a\le 100$ satisfied in each case ?
|probability|combinatorics|discrete-mathematics|solution-verification|contest-math|
0
Curvature formula derivation in Tom Apostol's Calculus vol. $1$ in case of non-invertible speed
In section $14.14$ , Tom Apostol is deriving curvature vector as follows: $$ \frac{dT}{ds} = \frac{dt}{ds} \frac{dT}{dt} = \frac{1}{s'(t)} T'(t) = \frac{1}{v(t)}T'(t) $$ As far as I'm aware (theorem $6.7$ in his book), $\frac{dt}{ds} = \frac{1}{s'(t)}$ only if $s(t)$ has inverse, and in general speed $v(t)$ does not have to have one. Furthermore, if $v(t)$ does not have an inverse, then $\frac{dt}{dv}$ doesn't seem to even make sense. How should I justify that derivation? Thanks! Note: $v(t) = \|X'(t)\|$ is the speed, the magnitude of the first derivative of the position vector. $t$ is a parameter of the curve. $T(t) = \frac{X'(t)}{\|X'(t)\|}$ is the unit tangent vector.
After looking through Theorem 6.7, I think what it actually says is that $$\frac{dx}{df(x)}=\frac{1}{\frac{df(x)}{dx}}$$ only holds if $f(x)$ is an invertible function. But the derivation only needs invertibility locally. For a non-invertible function, say $\sin(x)$ , to analyse the value of $\frac{dx}{d\sin(x)}$ around $x=a$ we can take a small enough neighbourhood of $a$ in which the function is invertible and work in that region; this works as long as the derivative is non-zero at $a$ . In Apostol's derivation this condition is automatic: for the unit tangent $T(t)=X'(t)/\|X'(t)\|$ to be defined at all we need $v(t)=\|X'(t)\|>0$ , so $s'(t)=v(t)>0$ , the arc length $s$ is strictly increasing, hence invertible, and $\frac{dt}{ds}=\frac{1}{s'(t)}$ is justified.
|linear-algebra|derivatives|
1
Why is the complex Lie group $(\mathbb C^*)^n$ called "Complex Torus"
While studying complex Lie groups theory, and more generally complex geometry, I've found two different objects which are called "complex tori". Consider the multiplicative group $\mathbb C^*$ , of course this is a non-compact complex Lie group. The direct product of $n$ copies of this is of course a complex Lie group which is referred to as "complex torus". Let $\{v_1, \dots, v_{2n}\}$ be a set of $\mathbb R$ -linearly independent vectors in $\mathbb C^n$ , then they generate a lattice $\Lambda = \bigoplus_j v_j \mathbb Z$ which is a discrete subgroup of $\mathbb C^n$ . As its action by translation is a covering action we can induce a complex Lie group structure on the quotient manifold $\mathbb C^n/\Lambda$ . This is a compact complex Lie group which is also referred to as "complex torus". Now my question is: if these constructions produce objects which are not even homeomorphic, why do they have the same name? Is there a deep reason for that? To me, it's clear that it's reasonable t
Your reasoning is basically correct. They have the same name because they're both "complex tori" in a sense; which terminology one uses or prefers depends on the setting, and here the word "complex" is used for two different things. In my setting, any space which is homotopy equivalent to an $n$ -dimensional torus is a torus; we wouldn't usually consider these objects as sub-objects within an ambient space, and we'd ignore the "complex" bit. If we refer to the objects in 1. as complex tori, we are trying to emphasise a certain algebraic structure. If we refer to the objects in 2. as complex tori, it's because we want to emphasise that they're quotients of complex space.
|lie-groups|complex-geometry|complex-manifolds|
0
Prove that $\left\{ (x;y;z)\in\mathbb{R}\times\mathbb{R}^{+*} \times\mathbb{R}:x^2+z^2=1/y\right\}$ is a submanifold and justify its dimension
In introduction I would like to say that I just began to study (alone) what submanifolds are, so I need your help in order to be sure that my understanding of those concepts is correct. Question: Let $S = \left \{ (x;y;z) \in \mathbb{R} \times \mathbb{R}^{+*} \times \mathbb{R} : x^2 + z^2 = 1/y \right \}$ . Prove that $S$ is a submanifold and justify its dimension. Answer: 1- First of all let's choose, $\forall a \in S$ , the neighbourhood $U_a$ to be equal to $ \mathbb{R}^3 - (0;0;0)$ 2- Now let's define $f(x;y;z) = x^2 + z^2 - 1/y $ . We can remark two things. First that $ f \in C^1(U_a; \mathbb{R})$ as a sum of $C^1(U_a)$ functions, and secondly that $ U_a \cap S = \left \{ f = 0 \right \} $ 3- Now as $f \in C^1(U_a; \mathbb{R}) \Rightarrow D \left \{ f(x;y;z) \right \} = \vec{\bigtriangledown} \left \{ f(x;y;z) \right \} = \begin{pmatrix} 2x\\ \frac{1}{y^2}\\ 2z \end{pmatrix} \neq \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$ on $U_a$ . It is obvious that the $rank(\vec{\bigtriangledown} \left
According to https://math.stackexchange.com/users/1152406/federico-t it seems that my answer is correct. Remark: the gradient has $rank=1$ because it is a single column vector (so its rank is at most $1$ ) which is non-zero everywhere on $U_a$ (trivial to prove).
|geometry|differential-geometry|solution-verification|manifolds|geometric-topology|
1
binomial coefficient identity $\sum_{k=0}^n {n\choose k}\sum_{j=0}^{k-1}{{n-1}\choose j}=2^{2n-2}$
While trying to solve a probability question about a coin flip game, I arrived at the expression $\sum_{k=0}^n {n\choose k}\sum_{j=0}^{k-1}{{n-1}\choose j}.$ Computing this sum for several low values of $n$ suggested that it summed to $2^{2n-2}$ , suggesting there is an identity: $$\sum_{k=0}^n {n\choose k}\sum_{j=0}^{k-1}{{n-1}\choose j}=2^{2n-2}$$ However I was not able to prove the identity by means of the Pascal recurrence relation, the Chu-Vandermonde identity $\sum_{k=0}^r {m\choose k}{n\choose {r-k}}={{m+n}\choose r}$ , nor the $\sum_{k=0}^n{n\choose k} = (1+1)^n=2^n$ identity from the binomial theorem. Can I get a hint how to proceed?
$$ \begin{align} &\sum_{k=0}^n\binom{n}{k}\sum_{j=0}^{k-1}\binom{n-1}{j}\\ &=\sum_{k=0}^n\left[\binom{n-1}{k-1}+\binom{n-1}{k}\right]\sum_{j=0}^{k-1}\binom{n-1}{j}\tag{1a}\\ &=\sum_{k=0}^{n-1}\sum_{j=0}^k\binom{n-1}{k}\binom{n-1}{j}+\sum_{k=0}^{n-1}\sum_{j=0}^{k-1}\binom{n-1}{k}\binom{n-1}{j}\tag{1b}\\ &=\sum_{j=0}^{n-1}\sum_{k=j}^{n-1}\binom{n-1}{k}\binom{n-1}{j}+\sum_{k=0}^{n-1}\sum_{j=0}^{k-1}\binom{n-1}{k}\binom{n-1}{j}\tag{1c}\\ &=\sum_{k=0}^{n-1}\sum_{j=k}^{n-1}\binom{n-1}{k}\binom{n-1}{j}+\sum_{k=0}^{n-1}\sum_{j=0}^{k-1}\binom{n-1}{k}\binom{n-1}{j}\tag{1d}\\ &=\sum_{k=0}^{n-1}\sum_{j=0}^{n-1}\binom{n-1}{k}\binom{n-1}{j}\tag{1e}\\ &=\left[\sum_{k=0}^{n-1}\binom{n-1}{k}\right]^2\tag{1f}\\[9pt] &=2^{2n-2}\tag{1g} \end{align} $$ Explanation: $\text{(1a):}$ $\binom{n}{k}=\binom{n-1}{k-1}+\binom{n-1}{k}$ (Pascal's Identity) $\text{(1b):}$ substitute $k\mapsto k+1$ in the left sum $\text{(1c):}$ swap the order of summation in the left sum $\text{(1d):}$ swap $j$ and $k$ in the left sum
$\text{(1e):}$ combine the sums: the ranges $j\ge k$ and $j\lt k$ together cover all $j\in\{0,\dots,n-1\}$ $\text{(1f):}$ factor the double sum into the product of two single sums $\text{(1g):}$ $\sum_{k=0}^{n-1}\binom{n-1}{k}=2^{n-1}$ by the binomial theorem
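The identity is also easy to check directly for small $n$ ; a short sketch:

```python
# Direct verification of the identity for small n. (It holds for n >= 1;
# for n = 0 the left-hand side is 0 while 2^(2n-2) = 1/4.)
from math import comb

def lhs(n):
    return sum(comb(n, k) * sum(comb(n - 1, j) for j in range(k))
               for k in range(n + 1))

for n in range(1, 11):
    assert lhs(n) == 2**(2*n - 2)
print("identity verified for n = 1..10")
```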
|combinatorics|summation|binomial-coefficients|
0
Suppose X,Y are iid normal distribution with $\mu = 0$, what is the distribution of ratio of absolute value of X to Y?
Let $X$ and $Y$ be iid normal random variables with mean $\mu = 0$ , and let $Z=\frac{|X|}{|Y|}$ ; what is the distribution of $Z$ ? I'm thinking of using the distribution-function method but I'm stuck on the integration. I'm also wondering if the change-of-variables technique can be used.
I'm not sure if it has a name (the distribution of $|X|/|Y|$ is sometimes called the standard half-Cauchy distribution); however, the ratio $X/Y$ itself has a well-known, named distribution, the standard Cauchy distribution, so you can just take the absolute value of that random variable. First off, $X/Y = (X/\sigma)/(Y/\sigma)$ , so we may assume without loss of generality that $X,Y$ are standard normal random variables (with mean $0$ and standard deviation $1$ ). Then we may write $$\mathbb{P}(X/Y \leq 0) = \mathbb{P}(X\leq 0, Y>0) + \mathbb{P}(X\geq 0, Y<0) = \frac12,$$ so for $r > 0$ , using the symmetry $(X,Y)\mapsto(-X,-Y)$ , $$\mathbb{P}(X/Y \leq r) = \frac12 + 2\,\mathbb{P}(X \leq rY, X>0, Y>0).$$ This last expression can be found by a polar substitution: $$\begin{align*}\mathbb{P}(X \leq rY, X>0, Y>0) &= \frac{1}{2\pi}\int_0^\infty\int_0^{ry}e^{-\tfrac12(x^2+y^2)}\,dx\,dy \\ &= \frac1{2\pi}\int_0^\infty\int_0^{\arctan(r)}\rho e^{-\rho^2/2}\,d\theta\,d\rho \\ &= \frac1{2\pi}\arctan(r)\end{align*}$$ giving $$\mathbb{P}(X/Y \leq r) = \frac12 + \frac1\pi\arctan(r).$$ The expression for $r<0$ can be found by symmetry as $\mathbb{P}(X/Y \leq r) = \mathbb{P}(X/Y \geq -r)$ , which
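A stdlib-only Monte Carlo sketch (sample size and seed are my own choices) confirming the folded CDF $\mathbb{P}(|X|/|Y| \le r) = \frac{2}{\pi}\arctan(r)$ that follows from the Cauchy CDF above:

```python
# Monte Carlo sanity check: the CDF of Z = |X|/|Y| should be
# P(Z <= r) = (2/pi) * arctan(r), the standard Cauchy CDF folded at 0.
import math
import random

random.seed(0)
N = 200_000
for r in (0.5, 1.0, 2.0):
    hits = sum(abs(random.gauss(0, 1)) <= r * abs(random.gauss(0, 1))
               for _ in range(N))
    empirical = hits / N
    exact = 2 / math.pi * math.atan(r)
    print(r, round(empirical, 3), round(exact, 3))
    assert abs(empirical - exact) < 0.01
```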
|probability-distributions|normal-distribution|change-of-variable|
1
Roots of $f(z)=z^2$
I'm having difficulty grasping how the zeros of the Riemann zeta function are calculated. I thought studying a simpler function such as $f(z)=z^2$ might help. I'm reading this LibreText document (PDF) . I understand how the function $f(z)=z^2$ maps the lines $x=1$ and $y=1$ in the z-plane into parabolas in the w-plane. (Fig. 8.3.2) Can you give any clues about how to solve $f(z)=0$ ?
$$f(z) = z^2 = (x+iy)^2$$ Remember that $a$ is a zero of the function if $f(a) = 0$ . Now write $$a= x_a + iy_a$$ where $x_a$ and $y_a$ are real numbers. Then $$(x_a+iy_a)^2 = 0$$ $$\Rightarrow x_a + iy_a = 0,$$ which is only possible when $(x_a,y_a)=(0,0)$ as $x_a$ and $y_a$ are real numbers; hence $a=0 + i0 = 0$ is the only zero of $f(z)=z^2$ . In the LibreText document (PDF) , in Fig 8.3.3, the point $(x=0,y=0)$ in the z-plane gets mapped to $(u=0, v=0)$ in the w-plane. The point from the z-plane which maps to $(u=0, v=0)$ is the zero of the mapping function. Notice that the axes of the graphs in Fig 8.3.2 and 8.3.3 do not cross at the origin, which may be confusing you. Zeros are not defined by the fact that they touch an axis (the axes can be shifted, as in your case), but by the fact that they make the function equal to $0$ . Plotting In the comments, you also asked for a plot. The plot is given in your LibreText document in Fig 8.3.3 for a set of grid lines. You have to understand that a plot for mapping all the set
|riemann-zeta|
1
Simple questions on the parametrization of the surface $\left\{(x;y;z)\in\mathbb{R}\times\mathbb{R}^{+*}\times\mathbb{R}:x^2+z^2=1/y\right\}$
I am studying alone and I would like to have feedback on my first two answers in "a)" and "b)" and get help on "c)". Question: We have the following parametric surface in $\mathbb{R}^3$ , $ \vec{\phi}(r ; \theta ) = \begin{pmatrix} r \cdot cos(\theta)\\ 1/r^2 \\ r \cdot sin(\theta) \end{pmatrix}$ , with $r >0 , 0 \leq \theta < 2\pi$ , which is a parametrization I've found of the surface $S= \left \{ (x;y;z) \in \mathbb{R} \times \mathbb{R}^{+*} \times \mathbb{R} : x^2 + z^2 = 1/y \right \}$ a) Find an orthonormal basis of the tangent plane at every point $( r ; \theta ) $ b) From there deduce a normal vector at every point $( r ; \theta ) $ c) Justify that the parametrization of $S$ you've found is a bijection Answer: a) As we proved here $S$ is a submanifold of dimension $2$ . Then we can say that the tangent plane of $S$ at any point $x_p$ is $T_{x_0} S = Span \left \{\frac{\partial }{\partial r} \vec{\phi} ; \frac{\partial }{\partial \theta} \vec{\phi} \right \} $ with $\frac{\partial }{\partial r
I think I've found a solution for "c)" in all the cases; I will be happy to have your feedback on what I've written. c) To prove that $\vec{\phi} (r ; \theta) $ is a bijection from $\{r > 0 ,\ 0 \leq \theta < 2\pi\}$ to $S$ I need to demonstrate that $\vec{\phi} (r ; \theta) $ is injective and surjective. 1- Injectivity : $ f : X \rightarrow Y , \forall a \neq b \in X \Rightarrow f(a) \neq f(b)$ Let's take two points $ (r_1; \theta_1) \neq (r_2; \theta_2)$ . In the case where $r_1 \neq r_2 \Rightarrow 1/r_1^2 \neq 1/r_2^2$ (the " $Y$ axis component" of our parametrization) and so obviously $\vec{\phi}(r_1 ; \theta_1) \neq \vec{\phi}(r_2 ; \theta_2)$ . In the case where $ \theta_1 \neq \theta_2 $ with $\theta_1 , \theta_2 \neq 0 , \pi$ we have that $sin(\theta_1) \neq sin(\theta_2) $ by the definition of the $sin(.)$ function. Now in the special case that $ \theta_1 = 0 , \theta_2 = \pi $ we can eventually have that the Z and Y axis components are equal, but concerning the X axis component we have that $r_1 cos(0)
|abstract-algebra|geometry|solution-verification|manifolds|geometric-topology|
1
Uniqueness of non-zero homomorphism in abelian Banach algebra
Consider $\mathbb{C}^3$ equipped with product $\bullet$ given as $$(x_1,x_2,x_3)\bullet(y_1,y_2,y_3)~=~ (x_1y_1,x_1y_2+x_2y_1,x_1y_3+x_2y_2+x_3y_1)$$ and norm $\|(x_1,x_2,x_3)\|=|x_1|+|x_2|+|x_3|$ . $A:=(\mathbb{C}^3,\bullet,\|\!\cdot\!\|)$ is then a unital abelian Banach algebra with unit $(1,0,0)$ . How would one verify that there cannot be other non-zero homomorphism on $A$ than $$A\ni(x_1,x_2,x_3)\mapsto x_1\in\mathbb{C}?$$
Let $f:A\to \Bbb{C}$ be a non-zero algebra homomorphism; then $f(e_1) =1$ since $e_1$ is the unit. From $e_2\bullet e_3=0$ we get $f(e_2)f(e_3)=f(e_2\bullet e_3) =f(0) =0$ , and from $e_2\bullet e_2=e_3$ we get $f(e_2)^2 =f(e_3) $ . Combining, $f(e_2)^3=f(e_2)f(e_3)=0$ , so $f(e_2) =0$ and hence $f(e_3)=f(e_2)^2=0$ . Therefore $f(x_1,x_2,x_3)=x_1f(e_1)+x_2f(e_2)+x_3f(e_3)=x_1$ .
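The two product identities the argument uses are quick to verify mechanically; a small Python sketch of the product $\bullet$ , modelling elements as triples:

```python
# Check the identities e2*e3 = 0 and e2*e2 = e3 for the product of A.
def bullet(x, y):
    return (x[0]*y[0],
            x[0]*y[1] + x[1]*y[0],
            x[0]*y[2] + x[1]*y[1] + x[2]*y[0])

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert bullet(e1, e2) == e2 and bullet(e1, e3) == e3   # e1 is the unit
assert bullet(e2, e3) == (0, 0, 0)                     # gives f(e2) f(e3) = 0
assert bullet(e2, e2) == e3                            # gives f(e2)^2 = f(e3)
print("product identities verified")
```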
|commutative-algebra|spectral-theory|banach-algebras|
1
Algebraic topology book recommendations
I'm a grad student and I just started seeing a few algebraic topology things but I'm getting a bit lost, so I'd like to do some extra digging. Does anyone know a good book to study the basics of homology? (my class is using Rotman's An intro to Algebraic Topology and Bredon's Topology and Geometry). We also just started up with tensors and categories in a different class but we only have the professor's notes; is there a book where I can check out the basics on this too?
An Introduction to Algebraic Topology , 1957, by Andrew H. Wallace . As the teaching of this discipline is often very axiomatic, the author, a mathematician of international reputation, wanted to expose the main notions of algebraic topology in a natural way and to show the rationale for the geometric constructions that are made. As far as I'm concerned, this goal of the author has been achieved thanks to this book. Introduction aux catégories et aux problèmes universels , 1971, by Jaffard and Poitou . It's in French but I think it's a really good introductory text. For tensors, I have no idea.
|reference-request|algebraic-topology|
0
Lebesgue integral and limit; $\lim_{a\to 0^+}\int_{a}^{1}(t\ln(t))^3dt=\int_{0}^{1}(t\ln(t))^3dt$
How can I prove: $$\lim_{a\to 0^+}\int_{a}^{1}(t\ln(t))^3dt=\int_{0}^{1}(t\ln(t))^3dt$$ assuming that the integral are Lebesgue integrals. There may be a theorem that confirms this equality.
A more elementary approach could be the following. The function $t\mapsto t\ln t$ is continuous and bounded on $(0,1]$ ; just notice that $\lim_{t \to 0^+}t\ln t=0$ . Therefore if we set $$ f(t):=\begin{cases} t\ln t,& t>0\\ 0,& t=0 \end{cases} $$ then $f$ is continuous on $[0,1]$ , so $f^3$ is Riemann integrable, and the improper integral coincides in value with the Riemann integral. Moreover, every integral in these expressions is trivially a Lebesgue integral as well, since $f^3$ is continuous and bounded, so we are done.
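For a concrete sanity check, the integral can be evaluated exactly, and a simple midpoint rule (the grid parameters are my own choices) shows the truncated integrals converging to it:

```python
# Numerical check: the truncated integrals over [a, 1] approach the exact
# value of the improper integral, which is -3!/4^4 = -3/128 (substitute
# t = e^{-u} to reduce to a Gamma integral).
import math

def f(t):
    return (t * math.log(t))**3

def midpoint(a, n=200_000):
    h = (1.0 - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

exact = -3.0 / 128.0                  # = -0.0234375
for a in (1e-2, 1e-4, 1e-6):
    print(a, midpoint(a))             # approaches -3/128 as a -> 0+
```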
|limits|measure-theory|lebesgue-integral|
1
Can I determine the angles of a quadrilateral if I know the lengths of the sides and the difference between the diagonals?
I know the lengths of the four sides of a quadrilateral and the difference between the diagonals (but I do not know the actual lengths of the diagonals). My instinct is that this information ought to be sufficient to determine the angles of the quadrilateral, because a specific difference between the diagonals constrains it to a single, fixed shape. example measurements: L side length = 326mm; R side length = 325mm; bottom length = 677mm; top length = 675mm; diagonal from bottom L to top R is 7mm longer than from bottom R to top L
Yes, it is possible, because there are five unknowns (the four angles $\alpha, \beta,\gamma,\delta$ and one of the diagonals $D_1,D_2$ , since $|D_1-D_2|=L$ where $L$ is given) and we have five equations. Suppose $D_1\lt D_2$ , so $D_2=D_1+L$ ; our five equations are $$D_1^2=a^2+d^2-2ad\cos(\alpha)\\D_1^2=b^2+c^2-2bc\cos( \gamma)\\(D_1+L)^2=a^2+b^2-2ab\cos( \beta)\\(D_1+L)^2=c^2+d^2-2cd\cos (\delta)\\\alpha+\beta+\gamma+\delta=2\pi$$
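If a closed form is not required, the system also collapses to one unknown: fixing one diagonal determines both triangles it cuts off, so one can solve for it numerically. A pure-Python sketch for the example measurements (the side labelling, the convexity of the quadrilateral, and the monotonicity used by the bisection are assumptions):

```python
import math

# Convex quadrilateral ABCD: AB = bottom, BC = right, CD = top, DA = left;
# diagonal AC (bottom-left to top-right) is L = 7 mm longer than BD.
p, q, r, s, L = 677.0, 325.0, 675.0, 326.0, 7.0

def other_diagonal(D):
    """Given BD = D, glue triangles ABD and BCD along BD (A below the
    diagonal, C above it) and return the length of AC."""
    xa = (p*p - s*s + D*D) / (2*D)
    ya = -math.sqrt(max(p*p - xa*xa, 0.0))
    xc = (q*q - r*r + D*D) / (2*D)
    yc = math.sqrt(max(q*q - xc*xc, 0.0))
    return math.hypot(xa - xc, ya - yc)

# AC(D) - D decreases over the feasible range, so bisect AC(D) - D = L.
lo = max(abs(p - s), abs(q - r)) + 1.0
hi = min(p + s, q + r) - 1.0
for _ in range(200):
    mid = (lo + hi) / 2
    if other_diagonal(mid) - mid > L:
        lo = mid
    else:
        hi = mid
BD = (lo + hi) / 2
AC = other_diagonal(BD)

def tri_angle(u, v, w):
    """Angle between sides of length u and v, opposite the side of length w."""
    return math.acos((u*u + v*v - w*w) / (2*u*v))

alpha = tri_angle(p, s, BD)                         # angle at A (whole)
gamma = tri_angle(q, r, BD)                         # angle at C (whole)
beta  = tri_angle(p, BD, s) + tri_angle(q, BD, r)   # angle at B, split by BD
delta = tri_angle(s, BD, p) + tri_angle(r, BD, q)   # angle at D, split by BD
print("BD =", BD, "AC =", AC)
print([round(math.degrees(t), 3) for t in (alpha, beta, gamma, delta)])
```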
|geometry|
0
why learning math in elementary school was harder for me rather than upper grades?
When I was an elementary student, I suffered to understand basic things like the multiplication table and other simple things, and I had to memorize them. A few hours ago I was searching for the genesis of the multiplication table for the first time (I was curious: if it wasn't built from experience or observation, then how was the multiplication table made?). After this question, I remembered my childhood, in which I struggled with the most basic things, and even now I don't intuitively understand some of them. I don't know much about the effect of IQ on the rate of learning math or similar things, but my questions are: Have you experienced or observed a similar situation, where understanding and intuition for sophisticated subjects like abstract algebra or topology come easily to someone, but they cannot perceive the simple things? Is there any correlation between mathematical intelligence and posing questions about objects and awareness of what we don't know? I'm a Computer Science undergradu
When I was an elementary student, I'd suffered from understanding basic things like multiplication table and other simple things and I had to memorized them. Just my 2c, from my admittedly limited experience in/about teaching basic mathematics: As for understanding , understanding division is especially difficult for kids, but persistent problems usually boil down to how capable the teacher was. (Except for extreme cases, most pupils are in fact capable of basic mathematical reasoning if properly guided/introduced to it.) As for memorizing , that is also part of the learning, the practical part: the multiplication table and even the addition table before it; otherwise we get students that cannot even add small numbers without counting on their fingers...
|education|philosophy|
1
What is the relation between these two descriptions of General Number Field Sieve?
I am reading about General Number Field Sieve (GNFS) algorithm and its implementations, and the literature seems to have two different versions of the algorithm, but I can't find how these two are related. Is one an improvement over another? Or are they both used in practice? I just can't find any article or anything that would acknowledge existence of both... I found plenty of articles that follow one of the variants, I link some of them below. Variant 1: A single polynomial $f\in \mathbb{Z}[x]$ is chosen and is then used to sieve integer pairs $(a,b)$ such that $a+\theta b$ is smooth over some algebraic factor base and at the same time $a+bm$ is smooth over some rational factor base. Lattice sieving and trial division , Golliver, Lenstra & McCurley, 1994, https://link.springer.com/chapter/10.1007/3-540-58691-1_38 A Tale of Two Sieves , Pomerance, 1996, https://www.ams.org/notices/199612/pomerance.pdf An Introduction to the General Number Field Sieve , Matthew E. Briggs, 1998, https:/
I think I found a link actually in one of the references above, in A Tale of Two Sieves by Carl Pomerance, on page 1484: Several variations on the basic idea of the number field sieve show some promise. One can replace the linear expression $a − mb$ used in the number field sieve with $b^k g(a/b)$ , where $g(x)$ is an irreducible polynomial over $\mathbb{Z}$ of degree $k$ with $g(m)\equiv 0 \pmod{n}$ . That is, we use two polynomials $f(x)$ , $g(x)$ with a common root $m\bmod n$ (the original scenario has us take $g(x) = x − m$ ). It is a subject of current research to come up with good strategies for choosing polynomials. From the link there we can find two articles, A multiple polynomial general number field sieve and An implementation of the number field sieve by M. Elkenbracht-Huizing, from 1996. These seem to be the first published works using the two-polynomial variation, which was then used afterwards (although some later materials still refer to the original variant describ
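As a tiny illustration of the "two polynomials with a common root" setup (a sketch; the numbers are from Briggs' worked example, where $n = 45113$ and $m = 31$): writing $n$ in base $m$ gives a polynomial $f$ with $f(m) = n$ , hence $f(m) \equiv 0 \pmod n$ , while $g(x) = x - m$ trivially shares the root $m$ modulo $n$ .

```python
# Base-m polynomial construction used in GNFS polynomial selection: the
# base-m digits of n are the coefficients of a polynomial f with f(m) = n.
def base_m_coeffs(n, m):
    digits = []
    while n:
        digits.append(n % m)
        n //= m
    return digits            # digits[i] = coefficient of x**i

n, m = 45113, 31
coeffs = base_m_coeffs(n, m)
print(coeffs)                # [8, 29, 15, 1]  ->  f(x) = x^3 + 15x^2 + 29x + 8
assert sum(c * m**i for i, c in enumerate(coeffs)) == n
```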
|abstract-algebra|algebraic-number-theory|factoring|
1
Find solution to recurrence equation
Given a recurrence equation $nS(n)-(n+2)S(n-1)-n=0$ I first tried to find the homogeneous solution using the characteristic equation: $$nr-(n+2)=0$$ which gives me the solution $S_h(n)=a(\frac{n+2}{n})^n$ . Then for the inhomogeneous part I tried a linear ansatz $cn+d$ . So I have: $$n(cn+d)-(n+2)(c(n-1)+d)-n=0$$ which simplifies to $-cn+2c-2d-n=0$ . Putting $-cn+2c=0$ and $-2d-n=0$ , I get $c=0,d=-n/2$ . However the final solution differs from what WolframAlpha says, so I think I made a mistake. Any help would be appreciated!
You have indeed : $$S(n) - S(n-1)\left(1+\dfrac{2}{n}\right) =1 $$ which gives (beware of $n$ changing) the homogeneous solution $$S_h(n)= C\prod_{i=1}^n\left(1+\dfrac{2}{i}\right)=C\,\frac{(n+1)(n+2)}{2}.$$ Your particular solution doesn't work because $d$ cannot depend on $n$ . But taking $ S_p(n)= -(n+1) $ works, because it cancels the denominator: $-(n+1)+(1+\frac2n)\,n=1$ . Finally, matching the value at $n=0$ (the empty product equals $1$ ) gives $C=S(0)+1$ , so $$ S(n) = \big(S(0)+1\big)\prod_{i=1}^n\left(1+\dfrac{2}{i}\right)-(n+1).$$
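A sketch in Python with exact rationals, checking the closed form $S(n) = (S(0)+1)\prod_{i=1}^n(1+\frac{2}{i}) - (n+1)$ against the original recurrence (the constant multiplying the homogeneous part must be $S(0)+1$ , not $S(0)$ , so that the formula returns $S(0)$ at $n=0$ , the empty product being $1$ ):

```python
# Exact-arithmetic check of the closed form against n*S(n) - (n+2)*S(n-1) - n = 0.
from fractions import Fraction

def closed_form(n, S0):
    prod = Fraction(1)
    for i in range(1, n + 1):
        prod *= Fraction(i + 2, i)      # telescopes to (n+1)(n+2)/2
    return (S0 + 1) * prod - (n + 1)

S0 = Fraction(5)
S = [closed_form(n, S0) for n in range(30)]
assert S[0] == S0
for n in range(1, 30):
    assert n * S[n] - (n + 2) * S[n - 1] - n == 0
print("closed form satisfies the recurrence for n = 1..29")
```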
|combinatorics|recurrence-relations|
0