Proving the process $(Y_t)_{t\in[0,\infty)}$ is a standard Brownian motion
Let $a\in(0,\infty)$. Let $(X_t)_{t\in[0,a]}$ be a standard Brownian motion and $(W_s)_{s\in[0,\infty)}$ be a standard Brownian motion such that $W_s$ is independent of $X_t$ for all $s \geqslant 0$ and $t \in [0, a]$. Define a new process $(Y_t)_{t\in[0,\infty)}$ as follows. \begin{equation} Y_{t} = \begin{cases} X_t & \text{if}~0\leqslant t\leqslant a\\ W_{t-a}+X_a & \text{if}~t>a. \end{cases} \end{equation} Justify whether the above process $(Y_t)_{t\in[0,\infty)}$ is a standard Brownian motion. We must check that the process $Y_t$ satisfies all the properties (independent increments, sample path continuity, etc.). To do so, you have to pick four time points, say $t_j < t_{j+1} \leqslant t_i < t_{i+1}$, then vary where $a$ falls in the inequality, and take cases. In the case where $t_j\leqslant a\leqslant t_{j+1}\leqslant t_i$ we have that $Y_{t_{j+1}}-Y_{t_{j}} = W_{t_{j+1}-a}+X_a-X_{t_{j}}$ and $Y_{t_{i+1}}-Y_{t_{i}}=W_{t_{i+1}-a}-W_{{t_i}-a}.$ How do I prove that $W_{t_{j+1}-a}$ is independent
The quadratic variation of $Y$ is $t\,.$ (The rest of the things you need to apply for the Lévy characterisation I leave to you.) Proof. We only have to consider partitions $0=t_1<t_2<\cdots<t_{n+1}$ of the interval $[0,t]$ where $t_{n+1}=t>a\,.$ WLOG we can also assume that there is a $t_k$ in every such partition such that $t_k=a\,.$ \begin{align} &\sum_{i=1}^n\big(Y_{t_{i+1}}-Y_{t_i}\big)^2=\sum_{i=k}^n \big(Y_{t_{i+1}}-Y_{t_i}\big)^2+\sum_{i=1}^{k-1}\big(Y_{t_{i+1}}-Y_{t_i}\big)^2\\[2mm] &=\sum_{i=k}^n \big(W_{t_{i+1}-a}+X_a-W_{t_i-a} -X_a\big)^2+\sum_{i=1}^{k-1}\big(X_{t_{i+1}}-X_{t_i}\big)^2\,, \end{align} where for $i=k$ we use $W_{t_k-a}=W_0=0$ and $Y_{t_k}=X_a\,.$ The limit of the second term is $a\,.$ The first term is \begin{align} &\sum_{i=k}^n \big(W_{t_{i+1}-a}-W_{t_i-a}\big)^2\\ \end{align} whose limit is $t-a\,.$
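As a sanity check, the quadratic variation claim can be tested numerically: simulate $X$ on $[0,a]$ and $W$ after the switch, glue them into $Y$ as in the question, and sum the squared increments over a fine partition of $[0,t]$. This is a minimal sketch; the choices $a=1$, $t=2$ and the step size are mine.

```python
import math
import random

random.seed(42)

a, t, n = 1.0, 2.0, 200_000   # switch time, horizon, number of partition intervals
dt = t / n
sdt = math.sqrt(dt)

# build Y on the grid: X increments before a, then Y = X_a + W_{s-a} afterwards
y = [0.0]
x_cur, w_cur, x_at_a = 0.0, 0.0, 0.0
for i in range(1, n + 1):
    s = i * dt
    if s <= a:
        x_cur += random.gauss(0.0, sdt)
        y.append(x_cur)
        x_at_a = x_cur
    else:
        w_cur += random.gauss(0.0, sdt)
        y.append(x_at_a + w_cur)

qv = sum((y[i + 1] - y[i]) ** 2 for i in range(n))
print(qv)   # close to t = 2
```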
|stochastic-processes|brownian-motion|
0
Does this property generalize to modules over noncommutative rings?
Let $M$ be a finite simple module over a ring $R$ . For commutative $R$ , it is easy to show that for every $r \in R$ , the function $M \rightarrow M; x \mapsto rx$ is either bijective or constantly $0$ . I'm wondering if this is also the case for noncommutative $R$ (I'm only interested in the case when $R$ is finite if that makes a difference). I tried proving it, but to no avail. It's probably false in the noncommutative case, but I wasn't able to find a counterexample, as I'm not very experienced with modules.
Let $F$ be a finite field and consider $F^2$ as an $M_2(F)$-module. It is simple (no nonzero proper submodules) since any nonzero vector can be moved to any other one by some element of $M_2(F)$ (proof: either one is a scalar multiple of the other and you can take the diagonal matrix with that scalar, or they form a basis and you can take the transformation flip-flopping them). But you can write down a matrix with a nontrivial kernel, e.g. $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$.
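The counterexample is small enough to check by brute force. Here is a sketch over $F=\mathbb F_2$ (my choice of field):

```python
from itertools import product

p = 2                                          # work over F = GF(2)
vectors = list(product(range(p), repeat=2))    # all of F^2

def act(mat, v):
    """Apply a 2x2 matrix over GF(p) to a vector."""
    return ((mat[0][0] * v[0] + mat[0][1] * v[1]) % p,
            (mat[1][0] * v[0] + mat[1][1] * v[1]) % p)

r = ((1, 0), (0, 0))                           # the projection from the answer
image = {act(r, v) for v in vectors}

is_zero_map = image == {(0, 0)}
is_bijective = len(image) == len(vectors)
print(is_zero_map, is_bijective)   # False False: neither zero nor bijective
```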
|ring-theory|commutative-algebra|modules|noncommutative-algebra|
1
Argmax of sum of log functions
I am trying to find the below, with the context that each $\pi_s$ is a probability (trying to assign probabilities so I get maximum likelihood): $$\underset{\bf{\pi}}{\operatorname{argmax}} \ \sum_{s} c_s \ln(\pi_s)$$ with the constraint $$\sum_{s} \pi_s=1.$$ The $c_s$ are integer constants. We can assume nonzero $\pi$ values if needed. I tried the AM-GM inequality as below: $$\underset{\bf{\pi}}{\operatorname{argmax}} \ \sum_{s} c_s \ln(\pi_s) =\underset{\bf{\pi}}{\operatorname{argmax}} \ \sum_{s} \ln(\pi_s^{c_s}) =\underset{\bf{\pi}}{\operatorname{argmax}} \ \ln\Big(\prod_{s}\pi_s^{c_s}\Big) =\underset{\bf{\pi}}{\operatorname{argmax}} \ \prod_{s}\pi_s^{c_s},$$ and by the AM-GM inequality $$\prod_{s}\pi_s^{c_s}\le \Big(\frac{1}{|s|}\sum_s\pi_s^{c_s}\Big)^{|s|}.$$ So now I know it is maximal when the terms are equal, but I got stuck here... Not sure if it's the right approach or not. Please help and thanks a lot!
Write down the Lagrangian $$\mathcal{L}(\pi, \lambda) = \sum_{s} c_{s}\ln(\pi_{s}) + \lambda \Big(\sum_{s}\pi_{s} - 1\Big).$$ Setting the gradient of $\mathcal{L}$ with respect to $\pi$ and $\lambda$ to zero gives $\frac{c_s}{\pi_s} + \lambda = 0$, i.e. $\pi_s = -c_s/\lambda$, and enforcing the constraint $\sum_s \pi_s = 1$ fixes $\lambda = -\sum_t c_t$. This yields $$\pi_{s} = \frac{c_{s}}{\sum_{t}c_{t}}.$$ Hope it helps.
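A quick numerical check of the closed form, with made-up counts $c=(3,1,2)$: the candidate $\pi_s = c_s/\sum_t c_t$ should beat any other probability vector.

```python
import math
import random

random.seed(0)
c = [3, 1, 2]
total = sum(c)

def loglik(pi):
    return sum(cs * math.log(ps) for cs, ps in zip(c, pi))

pi_star = [cs / total for cs in c]        # claimed maximiser
best = loglik(pi_star)

ok = True
for _ in range(10_000):
    raw = [random.random() + 1e-9 for _ in c]
    s = sum(raw)
    pi = [r / s for r in raw]             # random point on the simplex
    if loglik(pi) > best:
        ok = False
print(ok)   # True: no random probability vector does better
```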
|optimization|maxima-minima|maximum-likelihood|
1
Field extension as vector spaces
Let $F$ be a field and $K/F$ a field extension. I have seen that we can consider the extension as an $F$-vector space (with scalars from $F$). Is this a kind of convention? If not, is there a proof of this?
Let $$\star :K\times K\to K,(x,y)\mapsto x+y$$ where $+$ is the addition in $K$; and $$\bullet :F\times K\to K, (k,x)\mapsto kx$$ where $kx$ is the result of the multiplication of $k$ and $x$ in $K(^1)$. We can therefore use the usual notations for $\star$, the law of internal composition, and $\bullet$, the law of external composition, and verify for example that $(K,\star)$ is a commutative group as follows: $\forall x\in K, x+0=0+x=x$, where $0$ is the zero of $K$; $\forall x,y,z \in K, x+(y+z)=(x+y)+z$, since it is true in $K$; $\forall x \in K, x+(-x)=(-x)+x=0$; $\forall x,y \in K, x+y=y+x$, since it is true in $K$. (We could have gone faster by writing that $\star=+$, so that $(K,\star)$ is a commutative group since $(K,+)$ is.) The rest of the proof is just as easy... And we can conclude $$(K,\star,\bullet)\text{ is a vector space over }F.\square$$ $(^1)$ Recall that saying that $K$ is an extension of $F$ means that $F$ is a subfield of $K$. So $$k\in F\implies k\in K.$$
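For concreteness, here is a toy check of a few of the axioms for $F=\mathbb Q$ and $K=\mathbb Q(\sqrt2)$, with an element of $K$ modelled as a pair $(p,q)$ standing for $p+q\sqrt2$ (my encoding, not part of the answer above):

```python
from fractions import Fraction as Fr

# an element p + q*sqrt(2) of K = Q(sqrt 2) is stored as the pair (p, q)
def add(x, y):            # the internal law (star): addition in K
    return (x[0] + y[0], x[1] + y[1])

def smul(k, x):           # the external law (bullet): k in Q times x in K
    return (k * x[0], k * x[1])

x, y = (Fr(1, 2), Fr(3)), (Fr(-2), Fr(1, 5))
k, l = Fr(7, 3), Fr(-1, 4)

checks = [
    add(x, y) == add(y, x),                                # commutativity
    smul(k, add(x, y)) == add(smul(k, x), smul(k, y)),     # distributivity over +
    smul(k + l, x) == add(smul(k, x), smul(l, x)),         # distributivity over scalars
    smul(k * l, x) == smul(k, smul(l, x)),                 # compatibility
    smul(Fr(1), x) == x,                                   # unit scalar
]
print(all(checks))   # True
```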
|abstract-algebra|field-theory|
0
Help computing $\nabla_{X} \mathsf{tr}\left( f \left( X \right) Y \right)$
I'm hoping someone can help me compute $$\nabla_{X} \mathsf{tr}\left( f \left( X \right) Y \right)$$ where $f : \mathbb{R}^{u \times v} \to \mathbb{R}^{m \times n}$ and $Y \in \mathbb{R}^{n \times m}$ . I'd like to get an expression in terms of $\nabla_X f \left( X \right)$ . What I've worked out so far: for any $1 \leq i \leq u$ and $1 \leq j \leq v$ we have \begin{align} \frac{\partial}{\partial X_{ij}} \mathsf{tr}\left( f(X) Y \right) &= \frac{\partial}{\partial X_{ij}} \sum_{k=1}^m \left( f(X) Y \right)_{kk} \\ &= \sum_{k=1}^m \frac{\partial}{\partial X_{ij}} \left( f(X) Y \right)_{kk} \\ &= \sum_{k=1}^m \frac{\partial}{\partial X_{ij}} \sum_{p=1}^n f(X)_{kp} Y_{pk} \\ &= \sum_{k=1}^m \sum_{p=1}^n \frac{\partial}{\partial X_{ij}} f(X)_{kp} Y_{pk} \end{align} but I would like to relate this expression to $\nabla_X f \left( X \right)$ if possible. Thanks!
Trace is linear, so calculating any directional derivative, $\nabla_v \text{tr}(f(X)Y) = \text{tr}\big((\nabla_v f(X))Y\big)$ .
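A finite-difference check of this identity with the concrete choice $f(X)=X^2$ (so $\nabla_V f(X) = XV + VX$); the matrices below are my own small example:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B, s=1.0):
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

X = [[0.5, -1.0], [2.0, 0.3]]
Y = [[1.0, 0.7], [-0.2, 0.4]]
V = [[0.3, 0.1], [-0.5, 0.9]]        # the direction of differentiation

f = lambda M: matmul(M, M)           # f(X) = X^2, so D_V f(X) = XV + VX

# left side: numerical directional derivative of tr(f(X) Y)
h = 1e-6
lhs = (trace(matmul(f(madd(X, V, h)), Y)) -
       trace(matmul(f(madd(X, V, -h)), Y))) / (2 * h)

# right side: tr((XV + VX) Y)
rhs = trace(matmul(madd(matmul(X, V), matmul(V, X)), Y))
print(abs(lhs - rhs))   # essentially zero
```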
|multivariable-calculus|
0
Finding the values of $x$ that satisfy $\sin x+\sin2x+\sin3x+\cdots+\sin nx\le\frac{\sqrt3}{2}$ for all $n$
If the exhaustive set of $x\in(0,2\pi)$ for which $\forall n$ the inequality $$\sin x+\sin2x+\sin3x+\cdots+\sin nx\le\frac{\sqrt3}{2}$$ is valid is $l_1\le x\le l_2$ , find $l_1$ and $l_2$ . Let $\displaystyle\sum_{i=1}^n \sin(ix)$ = $S$ then I have managed to show that $$S=\frac{\sin\left(\frac{nx+x}{2}\right)\cdot\sin\left(\frac{nx}{2}\right)}{\sin\left(\frac{x}{2}\right)}$$ I do not know what to do next. How can I handle the case for all $n?$ Any help is greatly appreciated.
Put $y=\frac x2$. Then $y\in (0,\pi)$, so $\sin y>0$ and $\cos\frac y2>0$. For each $n$ we have $$S=\frac{\sin (n+1)y\cdot\sin ny}{\sin y}=\frac{\cos y -\cos(2n+1)y}{2\sin y}\le \frac{\sqrt{3}}{2},$$ that is $$\cos y -\cos(2n+1)y\le \sqrt{3}\sin y,$$ $$\frac12 \cos y-\frac{\sqrt{3}}2\sin y\le\frac 12\cos(2n+1)y,$$ $$\cos\left(y+\frac{\pi}3\right)\le\frac 12\cos(2n+1)y.$$ The latter inequality holds for each natural $n$ provided $$\cos\left(y+\frac{\pi}3\right)\le-\frac 12=-\cos\frac{\pi}3,$$ $$\cos\left(y+\frac{\pi}3\right)+\cos\frac{\pi}3\le 0,$$ $$2\cos\left(\frac y2+\frac{\pi}3\right)\cos\frac{y}2\le 0,$$ that is, when $$\frac y2+\frac{\pi}3\ge \frac{\pi}{2},$$ $$y\ge \frac{\pi}3.$$ The remaining case $y<\frac{\pi}3$ was already considered in D S's answer .
|sequences-and-series|inequality|trigonometry|
1
Probability distribution of a random variable - Interview
This question is from an interview a friend had, and I was curious how to solve it. The question is: I have two random variables, $a$ and $b$: $a=\mathrm{rand}(1,100)$, $b=\mathrm{rand}(a,100)$. The rand function generates a number within the range of the numbers in the parentheses, with equal probability for each number. How is $b$ distributed (what is its probability distribution function), and if I want $b$ to be uniformly distributed between $a$ and $100$, just as $a$ is uniformly distributed between $1$ and $100$, what should I do? My aspiration is for them to be uniformly distributed; it's enough to make $b$ distributed more uniformly than it is now (I'm allowed to use any operator/function, etc.). My attempt: my approach to solving this involves trying to use a new function, $c=\mathrm{rand}(1,a)$, so that they might complement each other. I attempted to use modulo operations and an if statement, meaning if the number is not within the range, then generate a new number. I had a few other attempts, reached: $a$ di
Interviewer: Assume $a\sim U[0,1]\,,$ and conditional on $a\,,$ $$b\sim U[a,1]\,.$$ What is the unconditional distribution of $b\,?$ Candidate (sweating): \begin{align} &\mathbb P\big\{b\le y\big\}=\int_0^1\mathbb P\big\{b\le y\,\big|\,a=x\big\}\,dx =\int_0^1\frac{y-x}{1-x}1_{\{x\le y\}}\,dx=\int_0^y\frac{y-x}{1-x}\,dx\\[2mm] &=-y\ln(1-x)\Big|_{x=0}^{x=y}+\Big[x+\ln(1-x)\Big]_{x=0}^{x=y}\\[2mm] &=-y\ln(1-y)+y+\ln(1-y)\\[2mm] &=y+(1-y)\ln(1-y)\,. \end{align} This is the unconditional CDF of $b\,.$ The PDF of $b$ is $$ \frac{d}{dy}\mathbb P\big\{b\le y\big\}=-\ln(1-y)\,. $$ The plot of it resembles the one in Joseph's answer. To work out a possibility to turn this $b$ into a random variable that is unconditionally uniform on $[c,1]$ where $c$ is fixed (I deliberately don't denote this by $a$) one can use the well-known method to stick $b$ into its own CDF, by which it produces a $U[0,1]$ variable, and then stick that into the inverse CDF of the desired distribution. The CDF of $U[c,1]$ is $$ x\mapsto \frac{x-c}{1-c}\,, $$ whose inverse is $u\mapsto c+(1-c)u\,.$
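The closed-form CDF can be corroborated by simulation (a sketch; the sample size and the check points are arbitrary):

```python
import math
import random

random.seed(1)
N = 200_000
samples = []
for _ in range(N):
    a = random.random()
    b = a + (1 - a) * random.random()   # uniform on [a, 1]
    samples.append(b)

def cdf(y):                             # the derived unconditional CDF of b
    return y + (1 - y) * math.log(1 - y)

max_err = 0.0
for y in (0.2, 0.5, 0.8):
    emp = sum(s <= y for s in samples) / N
    max_err = max(max_err, abs(emp - cdf(y)))
print(max_err)   # on the order of 1/sqrt(N)
```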
|probability|probability-distributions|random-variables|uniform-distribution|
1
Integration involving the greatest integer function
Need help with this integral: $I=\int_{0}^{9 \times 10^6} \left(1-\frac{\lfloor\sqrt x\rfloor}{\sqrt x}\right)dx$ where $\lfloor x \rfloor$ denotes the greatest integer function. I have tried breaking it into fractional parts but couldn't get any good approach, and if I go for breaking the GIF wherever its argument is an integer, it comes out to be 3001 points in the given range of integration.
$$\int_{0}^{9 \times 10^6} \left(1-\frac{\lfloor\sqrt x\rfloor}{\sqrt x}\right)dx=9\times10^6-\int_{0}^{9 \times 10^6} \left(\frac{\lfloor\sqrt x\rfloor}{\sqrt x}\right)dx$$ Put $x=t^2$ so $dx=2t\,dt$ and the second integral becomes $$2\int_{0}^{3 \times 10^3}\lfloor t\rfloor\, dt= 2\sum_{n=0}^{2999}n=2\cdot\frac{2999\cdot 3000}{2}=8\,997\,000.$$ So $$\int_{0}^{9 \times 10^6} \left(1-\frac{\lfloor\sqrt x\rfloor}{\sqrt x}\right)dx=9\,000\,000-8\,997\,000=3000.$$
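A numeric cross-check: after substituting $t=\sqrt x$ (so $dx=2t\,dt$) the integrand becomes $\left(1-\frac{\lfloor t\rfloor}{t}\right)2t = 2(t-\lfloor t\rfloor)$ on $[0,3000]$, whose average per unit interval is $1$. A midpoint Riemann sum (grid size my choice) confirms the value $3000$; note the answer must be nonnegative since the integrand is.

```python
import math

T = 3000                  # upper limit after substituting t = sqrt(x)
steps_per_unit = 1000
h = 1.0 / steps_per_unit

total = 0.0
for i in range(T * steps_per_unit):
    t = (i + 0.5) * h                     # midpoint of the i-th subinterval
    total += 2.0 * (t - math.floor(t)) * h

print(total)   # ~3000
```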
|calculus|integration|definite-integrals|
0
Question Regarding the Proof of the Ryll-Nardzewski Theorem in Model Theory
I have a problem understanding the proof given by Tent and Ziegler. It states that a countable theory $T$ is $\aleph_0$-categorical if and only if for every $n \in \mathbb{N}$ there is a finite number of formulas in $n$ free variables, up to equivalence relative to $T$. Nevertheless I have a problem understanding what is precisely meant by "finite number of formulas, up to equivalence, relative to $T$". Does this mean that for each $n \in \mathbb{N}$ there is a set $\{\varphi_i\}_{i\le m}$ such that for every $L$-formula $\varphi$ with $n$ free variables there is a $j\le m$ such that $T \models \forall \bar{x}(\varphi(\bar{x}) \leftrightarrow \varphi_j(\bar{x}))$, and for all $i \neq j$, $T \not\models \forall \bar{x}(\varphi_i(\bar{x}) \leftrightarrow \varphi_j(\bar{x}))$? Or does it simply mean that there is a set $\{\varphi_i\}_{i\le m}$ such that for $i \neq j$, $T \not\models \forall \bar{x}(\varphi_i(\bar{x}) \leftrightarrow \varphi_j(\bar{x}))$? I would be very grateful if someone could completely clarify this matter. Thanks.
It means that for each $n$, there is a finite set of formulas $\{\varphi_1, \ldots, \varphi_m\}$ such that for any formula $\varphi,$ there is an $i\in\{1,\ldots, m\}$ such that $$ T\vdash \forall x_1\ldots \forall x_n (\varphi(x_1,\ldots, x_n)\leftrightarrow \varphi_i(x_1,\ldots, x_n))$$ (In all cases 'formula' means formula in the language $L$, i.e. without parameters, with free variables amongst $x_1,\ldots, x_n$. Note that $m$ may depend on $n$, so e.g. there does not need to be a uniform bound over all $n$ for the number of inequivalent formulas in $n$ variables; it just needs to be finite for each $n$.) Another way to look at it is that for each $n$, there are only finitely many sets of tuples $(a_1,\ldots, a_n)$ definable without parameters in any model of $T.$ For a concrete example, for a model $(\mathcal M, <)$ of the countably categorical theory of dense linear orderings without endpoints, the only definable sets of elements are $\emptyset$ and $M$ and the only definable s
|model-theory|
1
Why $\Gamma \vdash A \; \mathrm{type}$ instead of $\Gamma \vdash A :\mathrm{type}$, etc?
I am learning type theory from nlab articles. The following is from typed predicate logic . Judgments and contexts Typed predicate logic is a type theory which consists of two layers, a layer for types and a layer for propositions. We have the basic judgement forms of the type layer: $\Gamma \vdash A\; \mathrm{type}$ - $A$ is a well-typed type in context $\Gamma$ . $\Gamma \vdash a : A$ - $a$ is a well-typed term of type $A$ in context $\Gamma$ . And the basic judgement forms of the proposition layer: $\Gamma \vdash \phi \; \mathrm{prop}$ - $\phi$ is a well-formed proposition in context $\Gamma$ $\Gamma \vdash \phi \; \mathrm{true}$ - $\phi$ is a well-formed true proposition in context $\Gamma$ As well as context judgments: $\Gamma \; \mathrm{ctx}$ - $\Gamma$ is a well-formed context. Contexts are defined by the following rules: $$\frac{}{() \; \mathrm{ctx}} \qquad \frac{\Gamma \; \mathrm{ctx} \quad \Gamma \vdash A \; \mathrm{type}}{(\Gamma, a:A) \; \mathrm{ctx}} \qquad \frac{\Gamma \;
Notice that typed predicate logic ( TPL ) as introduced by Raymond Turner is aimed at being a minimal metatheory. The basic judgements are defined as follows: $T \;\mathrm{type}$ means $T$ is a type; $\phi \;\mathrm{prop}$ means $\phi$ is a proposition; $t : T$ means $t$ is an object term of type $T$; and $\phi$ alone means the proposition $\phi$ is true. These judgements are expressed in sequents of the form $\Gamma\vdash\Theta$, signifying that judgement $\Theta$ can be made relative to a context $\Gamma$. Only the judgement $t : T$ is an assignment (i.e., to a type); the others declare something to be so and so. In some expositions, variations on this notation may be deemed more suitable to certain purposes. For example, the 4th form of judgement may be written as $\phi \;\mathrm{true}$, though this would really only be needed if $\phi \;\mathrm{false}$ could also be written. Likewise, $\mathrm{ctx}$ is a choice made in the cited contribution.
|logic|type-theory|
0
An example of something that is not a subcomplex.
I was reading this part in Allen Hatcher book : But I did not understand it, can someone explain it to me with drawing please? Edit: I need a picture for the last 3 lines.
Here is the circle with its minimal cell complex structure: Let's call the marked point $(-1,0)$. The only subcomplexes of this are the point $(-1,0)$, the whole circle, and the empty set. So for example take the map $S^1 \to S^1$ that sends $(x,y)$ to $(-|x|, y)$ (you can think of this as first projecting the circle to the vertical line segment from $(0,-1)$ to $(0, 1)$, and then "wrapping" that vertical line segment onto the left half of the circle). The image of this map is the left half of the circle, and that is not a subcomplex. If you glued a 2-cell using this as the attaching map, then the closure of that 2-cell would not contain the entire 1-cell, just part of it, and so it would not be a subcomplex.
|algebraic-topology|cw-complexes|
1
Is $x^{2/3} + y^{2/3}$ a rational function?
Can $$ x^{2/3} + y^{2/3} $$ be expressed as a fraction of two polynomials in x and y? How can we see this easily? (It is the curve swept by a stick sliding down a wall)
Your $f(x,y) = x^{\frac{2}{3}}+y^{\frac{2}{3}}$ is not a rational function. For if it were, then it could be written as a ratio of two polynomials in $x,y$. That is, $f = \dfrac{P}{Q} \implies P = fQ$. But $fQ$ is then no longer a polynomial, since it has terms with non-integer exponents. This contradicts $P$ being a polynomial. Another way to see this is by plugging in some numbers, say $x = 3 = y$: the output $2\cdot 3^{2/3}$ is irrational, which is a contradiction, since a rational function with rational coefficients takes rational values at rational points where it is defined.
|algebra-precalculus|
1
Might there be an $n^{\text{th}}$ digit of $\pi$ where the sequence becomes palindromic?
Assuming $n>1$ , would it be reasonable to think there is an $n^{\text{th}}$ digit of $\pi$ where stopping there would yield a palindromic number $(3.14159...951413)$ ? Would it be more likely that this occurs an infinite amount of times or never?
If $\pi$ is a normal number (probable, but not proved) then any finite sequence appears infinitely often, so any particular palindrome appears infinitely often somewhere. But that does not say one ever occurs at the start. That is certainly possible, but I think very unlikely. It might be possible to prove that for normal numbers it happens with probability $0$. It's easy to show it does not happen in $\pi$ up to, say, $n=10000$ (I haven't checked). Then you can see that if it does not happen at $2n$ but does happen at $2n+1$, it must almost happen at $2n$, and that says that the second half of the expansion so far must almost mirror the first half. For normal numbers that happens with probability about $1/10^n$.
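The "does not happen for small $n$" part is easy to confirm directly. Here the digits of $\pi$ are hard-coded (its first 40 digits, which I believe to be correct); a serious check would pull many more digits from an arbitrary-precision library.

```python
PI_DIGITS = "3141592653589793238462643383279502884197"   # 3.1415...

# a palindromic stop at digit n means the length-n prefix reads the same reversed
palindromic_prefixes = [n for n in range(2, len(PI_DIGITS) + 1)
                        if (p := PI_DIGITS[:n]) == p[::-1]]
print(palindromic_prefixes)   # [] : no palindromic prefix this early
```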
|irrational-numbers|pi|transcendental-numbers|palindrome|
0
Why can the result of the four-point finite difference formula of a function be greater than the exact derivative of the function?
If I have a function like $f(x) = e^{\sqrt{x+\frac{2}{x}}}$, why is the result of using the formula below, where $x$ is $2$ and $h$ is $0.1$, greater than the exact derivative of $f(x)$?
Meanwhile you have probably found an answer to your question on your own. Nonetheless I took the time to take a closer look. It turns out that the result of your four-point finite difference approximation formula is just $0.0025$ % bigger than $f'(2)$. In case this deviation is too much, I'd suggest you reach for another approximation. Notwithstanding this nothingness of $0.0025$ %, it does not explain the reason for the positive deviation. If you plot a graph of your function, you will see that at $x=2$ it is convex . Even though your four-point approximation formula is symmetric, the higher slope at $x=2.1$ and $x=2.2$ is reflected in the result. There is an explanation for a simpler numeric derivative approximation showing "funny" results when the function changes from concave to convex and back. It could also help to shed some light on your observation.
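The question's formula is in an image I can't see, so the sketch below assumes the common four-point central rule $f'(x)\approx\frac{f(x-2h)-8f(x-h)+8f(x+h)-f(x+2h)}{12h}$; with a different stencil the numbers change, but the comparison works the same way.

```python
import math

f = lambda x: math.exp(math.sqrt(x + 2.0 / x))

x, h = 2.0, 0.1
approx = (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12 * h)

# exact derivative by the chain rule: g(x) = x + 2/x, f = exp(sqrt(g)),
# f'(x) = exp(sqrt(g)) * g'(x) / (2 sqrt(g)), and g(2) = 3, g'(2) = 1/2
exact = math.exp(math.sqrt(3)) * 0.5 / (2 * math.sqrt(3))

rel_err = (approx - exact) / exact
print(rel_err)   # a tiny relative deviation
```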
|derivatives|finite-differences|
0
Finding the common face of clinging soap bubbles using trigonometric functions of angles
I am trying to help my daughter with a problem from Stewart's Precalculus book. This problem comes right after the law of sines. When two bubbles cling together in midair, their common surface is part of a sphere whose center $D$ lies on the line passing through the centers of the bubbles (please refer to the figure below); also, angles $ACB$ and $ACD$ each have measure $60$ degrees. (a) Show that the radius $r$ of the common surface is given by $r = \frac{ab}{b - a}$. (b) Find the radius of the common face if the radii of the bubbles are $3$ cm and $4$ cm. I could do the second one, but only after using the law of cosines to find the length of the segment $AB$ in triangle $CBA$. That came out as Then I used the law of sines in triangle $ABC$ to find angle $CAB = 73.897$ degrees, angle $CAD = 180 -$ angle $CAB = 106.103$ degrees, and angle $CDA = 180 - 106.103 - 60 = 13.897$ degrees. Then I used the law of sines in triangle $CAD$ to find the value of $r$. But I couldn't make any headway on the first one. Also it seems to me that I don't need the law of cosines to solve this problem
Considerations of surface-tension physics (the pressure difference on either side of a wall determines the radii of the spherical bubble segments) give $$ \frac{1}{r}=\frac{1}{a}-\frac{1}{b},\qquad a<b, $$ which is the relation between curvatures. It is well explained in the link: Two bubbles interphase circle radius . We have $$ \angle BCD= 120^{\circ}$$ and $CA$ is the angle bisector at $C$. Inserting $(a=3, b=4)$ we get $r=12.$ The angles, if required, are computed from the Cosine Rule. If there are three coalescing bubbles it will be further interesting to derive, along similar lines, that each circle tangent at the common vertex makes the same $120^{\circ}$ angle with one another.
|algebra-precalculus|trigonometry|
0
Comparing integral with a sum
Show that \begin{equation}\sum_{m=1}^k\frac{1}{m}>\log k.\end{equation} My intuition here is that the LHS looks a lot like $\int_1^k\frac{1}{x}\textrm{d}x$ , and this evaluates to $\log k$ . To justify why the sum is larger, note that $1/m$ is convex, and my understanding is that more generally, \begin{equation}\sum_{m=1}^kf(m)>\int_1^kf(x)\textrm{d}x\end{equation} when $f$ is convex (and of course integrable!), and that the reverse inequality holds when $f$ is concave. Is this actually correct? I know that $\sum_{m=1}^k\frac{1}{m}-\log k\to\gamma$ , the Euler-Mascheroni constant, but without prior knowledge of this I wasn't sure how to approach this problem. I am interested in a more general way to compare discrete sums with their integral counterparts.
For any interval $[m,m+1)$, you can hopefully prove that $\frac{1}{m} > \frac{1}{x}$ for almost all $x \in [m,m+1)$. Then you can define a piecewise function $$f(x) = \frac{1}{m} \text{ if $x \in [m,m+1)$}.$$ Clearly then $f(x) > \frac{1}{x}$ for almost all $x \in [1,k]$. Moreover, $\frac{1}{m} = \int_{m}^{m+1}f(x)dx$ for all $m$. Therefore, $$\sum_{m=1}^{k} \frac{1}{m} = \sum_{m=1}^{k} \int_{m}^{m+1}f(x)dx > \sum_{m=1}^{k-1} \int_{m}^{m+1}f(x)dx.$$ (Note the change of upper indices!) Since $f$ is a sufficiently nice function, we have $$\sum_{m=1}^{k-1} \int_{m}^{m+1}f(x)dx = \int_{1}^{k} f(x) dx$$ and by the fact that $f(x) > \frac{1}{x}$ almost everywhere, we should get $$\sum_{m=1}^{k} \frac{1}{m} > \int_{1}^{k}f(x)dx > \int_{1}^{k}\frac{1}{x} dx = \log(k) - \log(1) = \log(k).$$
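The inequality is easy to spot-check numerically; the gap is exactly the quantity the question says tends to the Euler-Mascheroni constant:

```python
import math

gaps = {}
for k in (2, 10, 100, 10_000):
    H_k = sum(1.0 / m for m in range(1, k + 1))   # the harmonic sum
    gaps[k] = H_k - math.log(k)

print(gaps)   # every gap is positive and drifts toward ~0.5772
```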
|real-analysis|analysis|number-theory|elementary-number-theory|summation|
0
The image of a submanifold under a diffeomorphism is a submanifold
Let $M \subseteq \mathbb{R}^n$ be a smooth $k$-dimensional submanifold and let $A \subseteq \mathbb{R}^n$ be an open subset with $M \subseteq A$. Show that if $\psi$ is a diffeomorphism from $A \rightarrow B \subseteq \mathbb{R}^n$, then the image $\psi(M)$ is a (smooth) submanifold. My approach: Since $M$ is a submanifold, for $x \in M$ there exist open subsets $U,V$ with $x \in U$ and a diffeomorphism $\phi: U \rightarrow V$, such that (*) $\phi(M \cap U)=V \cap \mathbb{R}^k$. Let $x \in \psi(M)$; then $\psi^{-1}(x) \in M$. Since $M$ is a submanifold, for $\psi^{-1}(x)$ there exist $U$, $V$ and $\phi$ as in the section above. Now $x \in \psi(U)$, where $\psi(U)$ is open. If we apply $\phi^{-1}$ to (*) we get $M \cap U = \phi^{-1}(V \cap \mathbb{R}^k)$. Applying $\psi$ yields $\psi(U \cap M)=\psi (\phi^{-1}(V \cap \mathbb{R}^k))$, and since $\psi, \phi$ are bijective we get $\psi(U) \cap \psi(M)=\psi(\phi^{-1}(V)) \cap \psi(\phi^{-1}(\mathbb{R}^k))$. And thus $\psi(A)
The proof is by-and-large fine, but one thing is fishy: you do not make use of the assumption that $A$ is open. This I believe comes about as follows: you simply say that "there exist open subsets $U$, $V$ such that..." but you don't actually tell us what $U$ and $V$ are open subsets of (if I were to grade this, I would definitely deduct a point for that). The definition/result you want to use there I believe reads something like "there exist open subsets $U, V \subseteq \mathbb{R}^n$ such that...," and now you have a slight problem: $\psi$ is only defined on $A$, but $U$ need not a priori be contained in $A$! But of course we can replace $U$ with $A \cap U$ (which is again an open subset of $\mathbb{R}^n$ since $A$ is open ) and $V$ with $\phi(A \cap U)$, and then the rest of your argument goes through. As for other ways to show this, since this is a pretty elementary fact most proofs are going to be about the same, but I do want to present another proof as follows: We take as de
|differential-geometry|solution-verification|submanifold|
1
Not clear on the last step of Andrew Granville's proof of the Sylvester-Schur Theorem
In Appendix 5A of Andrew Granville's Number Theory Revealed: A Masterclass , he presents a proof of the Sylvester-Schur Theorem. I am unclear on the last step of the following inductive proof. Exercise 5.10.1 is used to show that for all $n\ge k \ge 1$ : $$\left(1 + \frac{1}{n+k}\right)^k \le \left(1 + \frac{k}{n+1}\right)$$ Assume that up to $n$ , with $\pi(k)$ being the prime counting function , it is shown that: $${{n+k}\choose{k}} > (n+k)^{\pi(k)}$$ Here is the step that I am not clear on: $${{n+1+k}\choose{k}} = \left(1 + \frac{k}{n+1}\right){{n+k}\choose{k}} > \left(1 + \frac{1}{n+k}\right)^k(n+k)^{\pi(k)} > (n+k+1)^{\pi(k)}$$ I am not clear how we can be sure that: $$\left(1 + \frac{1}{n+k}\right)^k(n+k)^{\pi(k)} > (n+k+1)^{\pi(k)}$$ How can we be sure that this inequality holds for all $n \ge k \ge 1$ ? Thanks.
For all $n, k \ge 1$ , since $k \gt \pi(k)$ and $1 + \frac{1}{n+k} \gt 1$ , we therefore get $$\begin{equation}\begin{aligned} \left(1 + \frac{1}{n+k}\right)^k & \gt \left(1 + \frac{1}{n+k}\right)^{\pi(k)} \\ \left(1 + \frac{1}{n+k}\right)^k & \gt \left(\frac{n + k + 1}{n+k}\right)^{\pi(k)} \\ \left(1 + \frac{1}{n+k}\right)^k(n + k)^{\pi(k)} & \gt (n + k + 1)^{\pi(k)} \end{aligned}\end{equation}$$
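Since the argument only needs $k>\pi(k)$, the whole chain can also be verified numerically over a grid (a sketch; $\pi(k)$ is computed by trial division, which is fine for small $k$):

```python
def primepi(k):
    """Count primes <= k by trial division."""
    return sum(1 for m in range(2, k + 1)
               if all(m % d for d in range(2, int(m ** 0.5) + 1)))

ok = all(
    (1 + 1 / (n + k)) ** k * (n + k) ** primepi(k) > (n + k + 1) ** primepi(k)
    for k in range(1, 25)
    for n in range(k, 60)
)
print(ok)   # True
```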
|prime-numbers|proof-explanation|
1
Optimization of 3 variable function when all variables have value
I have the function shown in the image. In this function, all the variables $x$, $y$, and $z$ have a value. The goal is to optimize the function, but only one variable can be increased. Based on the values of the variables, what is the way to decide which variable should be increased, such that the function grows the most? So far I have tried to look at the gradient of the function, but have yet to come up with a solution. Thanks in advance.
If we take the partial derivatives in each direction, we get the following: $$\begin{eqnarray} \frac{\partial f}{\partial x} & = & \frac{y(1 - z)}{\left(xy + (1 - z)(1 - x)\right)^2} \\ \frac{\partial f}{\partial y} & = & \frac{x(x - 1)(z - 1)}{\left(xy + (1 - z)(1 - x)\right)^2} \\ \frac{\partial f}{\partial z} & = & \frac{-xy(x - 1)}{\left(xy + (1 - z)(1 - x)\right)^2} \end{eqnarray}$$ These all have the same denominator, which thanks to being squared will never be negative, so assuming it's non-zero we can just compare the numerators, which are all polynomial in $x, y, z$ . None of them is obviously the winner, so for any given value of the variables you just evaluate all three and the one with the biggest value is the one that will have the greatest increase.
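The original function is in an image I can't see, but one function consistent with all three partials above is $f(x,y,z)=\dfrac{xy}{xy+(1-z)(1-x)}$ (my reconstruction; any function with these partials works the same way). Comparing the three numerators then picks the best coordinate, and a finite-difference check agrees:

```python
def f(x, y, z):
    # a function whose partial derivatives match the three expressions above
    return x * y / (x * y + (1 - z) * (1 - x))

x, y, z = 0.3, 0.5, 0.2          # example values for the variables

numerators = [
    y * (1 - z),                  # numerator of df/dx
    x * (x - 1) * (z - 1),        # numerator of df/dy
    -x * y * (x - 1),             # numerator of df/dz
]

h = 1e-6                          # finite-difference gains from nudging each variable
gains = [
    f(x + h, y, z) - f(x, y, z),
    f(x, y + h, z) - f(x, y, z),
    f(x, y, z + h) - f(x, y, z),
]

best_by_numerator = max(range(3), key=lambda i: numerators[i])
best_by_gain = max(range(3), key=lambda i: gains[i])
print(best_by_numerator, best_by_gain)   # both pick x at this point
```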
|calculus|multivariable-calculus|functions|optimization|
1
Density of power with random variable
Random variables $X, Y, Z$ are independent with uniform distribution on $[0, 1]$. Find the density of $XY^{Z}$. Actually I'm stuck because the power is a function (random variable). I understand how to find the density of $XY$ using substitution, calculating the Jacobian and integrating, but after I get the density of $XY$ I have no idea how to find that of $XY^{Z}$. Even though I know how to find the density of a composition $\phi(\xi)$, I have no idea how to get the inverse function here... please help me out
Note that if $X \sim \operatorname{Uniform}(0,1)$ , then $$-\log X \sim \operatorname{Exponential}(1).$$ The proof of this is left as an exercise. Next, consider $$\log (XY^Z) = \log X + Z \log Y.$$ What is the distribution of $Z \log Y$ when $Y, Z \sim \operatorname{Uniform}(0,1)$ and are independent? Edit your question to include your work in order to proceed further.
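The hint can be checked by simulation: $-\log X$ should have mean and variance $1$ (exponential), and by independence $\mathbb E[\log(XY^Z)] = \mathbb E[\log X] + \mathbb E[Z]\,\mathbb E[\log Y] = -1 - \tfrac12 = -\tfrac32$. A sketch:

```python
import math
import random

random.seed(7)
N = 200_000

def u():
    return 1.0 - random.random()    # uniform on (0, 1], avoids log(0)

neglogs = [-math.log(u()) for _ in range(N)]
mean_e = sum(neglogs) / N
var_e = sum((v - mean_e) ** 2 for v in neglogs) / N

logs = []
for _ in range(N):
    X, Y, Z = u(), u(), u()
    logs.append(math.log(X * Y ** Z))
mean_log = sum(logs) / N

print(mean_e, var_e, mean_log)      # close to 1, 1, -1.5
```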
|probability|probability-distributions|independence|uniform-distribution|density-function|
0
What is the probability a person leaves via a certain exit?
| Exit 1 | | Exit 4 | | | | | | | | | | | | | _____________| |_____________| |________ Exit 2 S1 s2 Exit 6 _____________ _____________ ________ | | | | | | | | | | | | | Exit 3 | | Exit 5 | Hopefully the above diagram is clear. There are 6 exits. There is a bug that starts at S1, we equal probability he can go to Exit1, Exit2, Exit3, or s2. Once at S2, he can go to S1 Exit6, Exit5, Exit4 with equal probability. Once you go to an exit, you are no longer able to return and you are done. What is the probability the person leaves via Exit 6. I've tried solving this with a markov matrix, but I think there is a simpler solution. Any hints / how to approach this more directly?
I think your Markov matrix idea is quite simple and effective. Either raise it to a large power or look at its eigenvalues, and the absorption probabilities should become quite clear. Another option is to use a directed graph/network with 8 nodes, but this would likely then lead to an adjacency matrix and be very similar to your Markov approach.
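The matrix-power suggestion can be carried out in a few lines; the state ordering $(S_1, S_2, E_1,\dots,E_6)$ with exits absorbing is my own encoding:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# states: 0 = S1, 1 = S2, 2..7 = Exit 1..Exit 6 (absorbing)
P = [[0.0] * 8 for _ in range(8)]
for e in (2, 3, 4):          # S1 -> Exit1, Exit2, Exit3
    P[0][e] = 0.25
P[0][1] = 0.25               # S1 -> S2
P[1][0] = 0.25               # S2 -> S1
for e in (5, 6, 7):          # S2 -> Exit4, Exit5, Exit6
    P[1][e] = 0.25
for e in range(2, 8):        # exits are absorbing
    P[e][e] = 1.0

Pn = P
for _ in range(99):          # P^100 is effectively the limiting matrix here
    Pn = matmul(Pn, P)

print(Pn[0][7])              # probability of absorbing at Exit 6, ~1/15
```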
|probability|
0
What is the probability a person leaves via a certain exit?
| Exit 1 | | Exit 4 | | | | | | | | | | | | | _____________| |_____________| |________ Exit 2 S1 s2 Exit 6 _____________ _____________ ________ | | | | | | | | | | | | | Exit 3 | | Exit 5 | Hopefully the above diagram is clear. There are 6 exits. There is a bug that starts at S1, we equal probability he can go to Exit1, Exit2, Exit3, or s2. Once at S2, he can go to S1 Exit6, Exit5, Exit4 with equal probability. Once you go to an exit, you are no longer able to return and you are done. What is the probability the person leaves via Exit 6. I've tried solving this with a markov matrix, but I think there is a simpler solution. Any hints / how to approach this more directly?
Let $P_n$ be the probability of exiting by exit 6 in exactly $n$ steps; $n$ must be even. $$ P_2 = P(S_1 \to S_2)\times P(S_2 \to E_6) =\frac 1{16}$$ $$P_4 = P(S_1 \to S_2)\times P(S_2 \to S_1)\times P(S_1 \to S_2)\times P(S_2 \to E_6)= \frac 1{16} \times P_2$$ So the total probability of leaving by $E_6$ is given by the sum of the geometric series $$ P = \sum_{k=1}^\infty \left( \frac 1{16} \right )^k = \frac 1{15}. $$
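A Monte Carlo run agrees with $1/15 \approx 0.0667$ (a sketch; the sample size is arbitrary):

```python
import random

random.seed(3)
N = 200_000

hits = 0
for _ in range(N):
    state = "S1"
    while True:
        if state == "S1":
            state = random.choice(["E1", "E2", "E3", "S2"])
        else:  # state == "S2"
            state = random.choice(["S1", "E4", "E5", "E6"])
        if state.startswith("E"):       # the bug has left; stop
            hits += state == "E6"
            break

p_hat = hits / N
print(p_hat)   # close to 1/15
```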
|probability|
0
Trying to prove Euler's formula in 3D Geometric Algebra (Clifford algebra)
I'm a non-mathematician trying to learn geometric algebra by working my way (slowly) through " Imaginary Numbers are not Real " by Gull, Lasenby and Doran [ Found. Phys. 23(9), 1175-1201 (1993) ]. So far so good and I can prove Euler's formula with GA in two dimensions - but I'm struggling to prove the following identity (Section 3, eq.13) in the three dimensional case: $$ e^{-i\,(\tfrac{a}{2})}=\cos(\tfrac{\lvert a\rvert}{2})-i \tfrac{a}{\lvert a \rvert}\sin(\tfrac{\lvert a\rvert}{2}) \phantom{tab} $$ (where $a$ is a vector and $i$ is the pseudoscalar). My current attempt is to define an orthonormal basis $\{\sigma_1,\sigma_2,\sigma_3\}$ , put $i=\sigma_1\sigma_2\sigma_3$ and let $a=(e_1\sigma_1+e_2\sigma_2+e_3\sigma_3)$ ; from which I obtain: $$ \begin{align} e^{-i\,(\tfrac{a}{2})}&=e^{-\sigma_2\sigma_3\tfrac{e_1}{2}}e^{-\sigma_3\sigma_1\tfrac{e_2}{2}}e^{-\sigma_1\sigma_2\tfrac{e_3}{2}}\\ &= (\cos\tfrac{e_1}{2}-\sigma_2\sigma_3\sin\tfrac{e_1}{2})\, (\cos\tfrac{e_2}{2}-\sigma_3\sigma_
I'd start by assuming the Taylor series representation of the exponential $\exp(x) = \sum_{k = 0}^\infty \frac{x^k}{k!}.$ Since $a$ and $i$ commute $\begin{aligned}\exp(-i a/2) &= \sum_{k = 0}^\infty \frac{i^k (-a/2)^k}{k!} \\ &= \sum_{\mbox{$k$ even}} \frac{i^k (-a/2)^k}{k!}+ \sum_{\mbox{$k$ odd}} \frac{i^k (-a/2)^k}{k!} \\ &= \sum_{r = 0}^\infty \frac{i^{2r} (-a/2)^{2r}}{(2r)!}+ \sum_{r = 0}^\infty \frac{i^{2r + 1} (-a/2)^{2r + 1}}{(2r + 1)!} \\ &= \sum_{r = 0}^\infty \frac{(-1)^{r}(-a/2)^{2r}}{(2r)!}+ i \sum_{r = 0}^\infty \frac{(-1)^{r} (-a/2)^{2r + 1}}{(2r + 1)!} \\ \end{aligned}$ The even sum has the structure of a cosine, and is in fact a scalar, since $a^{2k} = \left\lvert {a} \right\rvert^{2k}$ . The odd sum has the structure of a sine, but it is a vector. We may express this sum as a vector times a scalar sum, by noting that $a^{2k+1} = \left\lvert {a} \right\rvert^{2k} a = \left\lvert {a} \right\rvert^{2k+1} a / \left\lvert {a} \right\rvert.$ This leaves us with $$ \exp(-i\,a/2) = \cos\left(\tfrac{\lvert a\rvert}{2}\right) - i\,\frac{a}{\lvert a \rvert}\,\sin\left(\tfrac{\lvert a\rvert}{2}\right), $$ which is exactly eq. 13.
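One way to double-check the identity is to represent the algebra with Pauli matrices (each $\sigma_k$ becomes the usual $2\times2$ complex matrix, so the pseudoscalar $i=\sigma_1\sigma_2\sigma_3$ becomes $1j$ times the identity) and compare a brute-force matrix exponential against the closed form. This is my own verification sketch, not part of the cited paper.

```python
import math

# 2x2 complex matrices as tuples of rows
S1 = ((0, 1), (1, 0))
S2 = ((0, -1j), (1j, 0))
S3 = ((1, 0), (0, -1))
I2 = ((1, 0), (0, 1))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def add(A, B):
    return tuple(tuple(A[i][j] + B[i][j] for j in range(2)) for i in range(2))

def scale(c, A):
    return tuple(tuple(c * A[i][j] for j in range(2)) for i in range(2))

def expm(A, terms=40):
    """Taylor-series matrix exponential (fine for small norms)."""
    result, power = I2, I2
    for k in range(1, terms):
        power = scale(1.0 / k, mul(power, A))   # accumulates A^k / k!
        result = add(result, power)
    return result

e = (0.3, -0.8, 0.5)                        # coefficients of the vector a
na = math.sqrt(sum(c * c for c in e))       # |a|

a_mat = add(add(scale(e[0], S1), scale(e[1], S2)), scale(e[2], S3))

lhs = expm(scale(-1j / 2, a_mat))           # e^{-i a/2}; pseudoscalar i -> 1j * I
rhs = add(scale(math.cos(na / 2), I2),
          scale(-1j * math.sin(na / 2) / na, a_mat))

err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err)   # negligible
```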
|clifford-algebras|geometric-algebras|
1
Question regarding "h" in differentiation via first principles
Here at the second last step he factors out $h$ from the numerator, then cancels the $h$ in the denominator. He then says that since $h$ approaches $0$, we treat it as $0$, or as something so small that it won't affect our answer much, and so we are left with $2x$. My question is: throughout the derivation we say $\lim_{h\to 0}$, not just at the very last step. So why don't we just consider $h$ to be $0$ or a very small number from the very beginning? (I know this would mean we are essentially dividing by $0$ from the very start, which gives undefined.) There has to be some sense of consistency here - what's the point of only treating $h$ as $0$ at the very end? I know some of you will say that $h$ isn't actually $0$, it's something approaching $0$, i.e. something very close to $0$. Even so, my point still stands; in fact a new issue arises: if I assumed $h$ to be very small and not $0$ from the very start (as I stated above), I won't get undefined, but I would get something slightly different from $2x$.
You're wary about the whole $\frac{0}{0}$ thing, and that's perfectly valid. However, what you should notice is: Whenever $h \neq 0$ , the division is valid, and it does give the result $2x + h$ . As $h$ approaches $0$ from below (i.e. if we look at values of $h$ that are negative but getting closer to zero, like $-0.01, -0.001, -0.0001, \ldots$ ) then $2x + h \rightarrow 2x$ . Similarly, as $h$ approaches $0$ from above, we also get $2x + h \rightarrow 2x$ . Because we are getting the same limit in both directions, we can say that $\lim_{h \rightarrow 0} \frac{2xh + h^2}{h} = \lim_{h \rightarrow 0} (2x + h) = 2x$ . And this is all fine, because when we take the limit we can ignore the point itself and so there is never any actual division by zero. If you arrange things slightly differently, you get $\frac{2xh + h^2}{h} = 2x \frac{h}{h} + \frac{h^2}{h}$ , and maybe it's a little easier to see what's happening here - if $h$ is some small value, then $\frac{h}{h} = 1$ , and $\frac{h^2}{h} = h$, which itself goes to $0$, so the whole expression tends to $2x + 0 = 2x$.
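This can be seen numerically as well. A small illustration (with the hypothetical value $x=3$): the quotient equals $2x+h$ exactly for every nonzero $h$, and it gets as close to $2x=6$ as we like as $h\to 0$ from either side, without $h$ ever being $0$:

```python
# For x = 3, (2xh + h^2)/h equals 2x + h for every nonzero h,
# so it approaches 2x = 6 as h -> 0 from both sides.
x = 3.0
for h in [0.1, -0.1, 1e-4, -1e-4, 1e-8, -1e-8]:
    q = (2 * x * h + h**2) / h          # legal division: h != 0
    assert abs(q - (2 * x + h)) < 1e-9  # q is exactly 2x + h (up to rounding)
    assert abs(q - 2 * x) <= abs(h) + 1e-9  # and within |h| of the limit 2x
```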
|calculus|
0
How many cycles of each type are in $S_6$
How many cycles of each type in $S_6$ ? I know I can write down all cycles. I wonder if there a formula for me to quickly calculate the number of cycles of each type in $S_n$ ?
Here I discuss the number of each cycle type for $S_7$ based on the formula given in the previous answer. $2$ -cycle $$ N=\binom{7}{2}(2-1)!=21 $$ $(2,2)$ -cycle $$ N=\binom{7}{2}\binom{5}{2}\dfrac{(2-1)!\,(2-1)!}{2!}=105 $$ $(2,2,2)$ -cycle $$ N=\binom{7}{2}\binom{5}{2}\binom{3}{2}\dfrac{(2-1)!\,(2-1)!\,(2-1)!}{3!}=105 $$ $3$ -cycle $$ N=\binom{7}{3}(3-1)!=70 $$ $(3,2)$ -cycle $$ N=\binom{7}{3}\binom{4}{2}(3-1)!\,(2-1)!=420 $$ $(3,2,2)$ -cycle $$ N=\binom{7}{3}\binom{4}{2}\binom{2}{2}\dfrac{(3-1)!\,(2-1)!\,(2-1)!}{2!}=210 $$ $(3,3)$ -cycle $$ N=\binom{7}{3}\binom{4}{3}\dfrac{(3-1)!\,(3-1)!}{2!}=280 $$ $4$ -cycle $$ N=\binom{7}{4}(4-1)!=210 $$ $(4,2)$ -cycle $$ N=\binom{7}{4}\binom{3}{2}(4-1)!\,(2-1)!=630 $$ $(4,3)$ -cycle $$ N=\binom{7}{4}\binom{3}{3}(4-1)!\,(3-1)!=420 $$ $5$ -cycle $$ N=\binom{7}{5}(5-1)!=504 $$ $(5,2)$ -cycle $$ N=\binom{7}{5}\binom{2}{2}(5-1)!\,(2-1)!=504 $$ $6$ -cycle $$ N=\binom{7}{6}(6-1)!=840 $$ $7$ -cycle $$ N=\binom{7}{7}(7-1)!=720 $$ The class equation of $S_7$ then serves as a check: together with the identity, $1+21+105+105+70+420+210+280+210+630+420+504+504+840+720=5040=7!$.
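All fourteen counts above (plus the identity) can be confirmed by brute force, since $S_7$ has only $5040$ elements. Fixed points are recorded as $1$-cycles, so e.g. a "$2$-cycle" appears as the type $(1,1,1,1,1,2)$:

```python
from itertools import permutations
from collections import Counter

def cycle_type(p):
    """Sorted tuple of cycle lengths of a permutation of range(len(p))."""
    seen, lengths = set(), []
    for s in range(len(p)):
        if s in seen:
            continue
        length, j = 0, s
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths))

counts = Counter(cycle_type(p) for p in permutations(range(7)))

expected = {
    (1, 1, 1, 1, 1, 2): 21,  (1, 1, 1, 2, 2): 105, (1, 2, 2, 2): 105,
    (1, 1, 1, 1, 3): 70,     (1, 1, 2, 3): 420,    (2, 2, 3): 210,
    (1, 3, 3): 280,          (1, 1, 1, 4): 210,    (1, 2, 4): 630,
    (3, 4): 420,             (1, 1, 5): 504,       (2, 5): 504,
    (1, 6): 840,             (7,): 720,            (1, 1, 1, 1, 1, 1, 1): 1,
}
assert dict(counts) == expected
assert sum(counts.values()) == 5040     # |S_7| = 7!
```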
|abstract-algebra|combinatorics|group-theory|symmetric-groups|
0
Why isn't finitism nonsense?
This is a by-product of this recent question, where the concept of ultrafinitism came up. I was under the impression that finitism was just "some ancient philosophical movement" in mathematics, only followed by one or two people nowadays, so it sounded like a joke to me. But then I got curious and, after reading a bit, it seems to me that the only arguments against infinite mathematics that finitists have are that "there are numbers so big that we couldn't compute them in a lifetime", or the naive set theory paradoxes. The former doesn't seem like a serious argument, and the latter is not a problem now that mathematics relies on consistent axioms. Are there some (maybe arguably) good mathematical reasons to deny the existence of $\infty$, or is it just a philosophical attitude? The concept of unboundedness seems pretty natural to me, so what could be a reason to avoid it? Does this attitude even make any sense? In short, why do today's finitists have a problem with $\infty$? Edit: First of all
The question is backwards. Not even non-finitists would argue that finite math is meaningless. The correct question is whether the non-finite is meaningful. Finitism and atheism are the positions that the posited entity, respectively infinity and god, makes no sense or does not exist. A finitist can no more prove infinity does not exist than an atheist can prove the non-existence of god or a physicist can prove the non-existence of unobservable universes. The burden of proof is on proving the posited entity or concept is meaningful or exists. The literal, original and true meaning of infinite is "has no end". What sense is there in talking about the end state of something that does not end? "end" here has nothing to do with time or resource - those would be further limits. The proof of the infinity of natural numbers by contradiction of the hypothesis that there is some last number is nothing more than the statement that the there is no end to the repeated application of the successor
|soft-question|infinity|philosophy|
0
Why the integral in $[-\pi-\frac{\pi}{k}, \pi-\frac{\pi}{k}]$ equals the integral in $[-\pi, \pi]$?
I'm showing that the coefficients of the Fourier series of a $2\pi$ -periodic function $f$ can be written as $$a_k = \frac{1}{2\pi} \int\limits_{-\pi}^{\pi} \left(f(x) - f\left(x - \frac{\pi}{k}\right)\right)\cos(kx) dx$$ and $$b_k = \frac{1}{2\pi} \int\limits_{-\pi}^{\pi} \left(f(x) - f\left(x - \frac{\pi}{k}\right)\right)\sin(kx) dx.$$ Firstly, I consider the change $x = u - \frac{\pi}{k}$ . Then $$a_k = \frac{1}{\pi} \int\limits_{-\pi}^{\pi} f(x)\cos(kx) dx = -\frac{1}{\pi} \int\limits_{-\pi-\frac{\pi}{k}}^{\pi-\frac{\pi}{k}} f\left(x - \frac{\pi}{k}\right)\cos(kx) dx.$$ So, considering the mean $a_k = \frac{a_k + a_k}{2}$ , $$a_k = \frac{1}{2\pi}\left(\int\limits_{-\pi}^\pi f\left(x\right)\cos(kx)dx - \int\limits_{-\pi-\frac{\pi}{k}}^{\pi-\frac{\pi}{k}}f\left(x - \frac{\pi}{k}\right)\cos(kx)dx\right).$$ Now I have a problem justifying that $\int\limits_{-\pi-\frac{\pi}{k}}^{\pi-\frac{\pi}{k}}f\left(x - \frac{\pi}{k}\right)\cos(kx)dx = \int\limits_{-\pi}^\pi f\left(x-\frac{\pi}{k}\right)\cos(kx)dx$.
$$\int\limits_{-\pi-\frac{\pi}{k}}^{\pi-\frac{\pi}{k}}f\left(x - \frac{\pi}{k}\right)\cos(kx)\,dx = \int\limits_{-\pi-\frac{\pi}{k}}^{-\pi} f\left(x-\frac{\pi}{k}\right)\cos(kx)\,dx+\int\limits_{-\pi}^{\pi} f\left(x-\frac{\pi}{k}\right)\cos(kx)\,dx-\int\limits_{\pi-\frac{\pi}{k}}^{\pi} f\left(x-\frac{\pi}{k}\right)\cos(kx)\,dx.$$ Now, changing the variable from $x$ to $x+2\pi$ in the first integral on the right turns it into $$\int\limits_{\pi-\frac{\pi}{k}}^{\pi} f\left(x-2\pi-\frac{\pi}{k}\right)\cos(kx)\,dx,$$ using $\cos(kx-2\pi k)=\cos(kx)$ for integer $k$. As you rightly guessed, since $f$ is $2\pi$-periodic we have $f\left(x-2\pi-\frac{\pi}{k}\right)=f\left(x-\frac{\pi}{k}\right)$, so the first and third integrals cancel, resulting in $$\int\limits_{-\pi-\frac{\pi}{k}}^{\pi-\frac{\pi}{k}}f\left(x - \frac{\pi}{k}\right)\cos(kx)\,dx = \int\limits_{-\pi}^{\pi} f\left(x-\frac{\pi}{k}\right)\cos(kx)\,dx.$$
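The cancellation is easy to check numerically with an arbitrary $2\pi$-periodic trigonometric polynomial (a hypothetical choice made just for this sketch) and, say, $k=3$:

```python
import math

def quad(g, a, b, n=20000):
    """Midpoint-rule quadrature on [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (j + 0.5) * h) for j in range(n))

# Sample 2*pi-periodic f and the shifted integrand g.
f = lambda x: math.cos(3 * x) + 0.4 * math.sin(2 * x) + 0.1 * math.cos(x)
k = 3
g = lambda x: f(x - math.pi / k) * math.cos(k * x)

# Both windows have length 2*pi and g is 2*pi-periodic, so the integrals agree.
lhs = quad(g, -math.pi - math.pi / k, math.pi - math.pi / k)
rhs = quad(g, -math.pi, math.pi)
assert abs(lhs - rhs) < 1e-6
```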
|integration|fourier-analysis|periodic-functions|
1
Stochastic average of a differential equation is not the same as average of its solutions
Assume a static (time-independent) random variable $r$ for which we know its probability distribution $P(r)$ . Consider this to be a Gaussian distribution, such that $\langle r \rangle=0$ and $\langle r^2 \rangle=r_0$ , etc. Consider now the following differential equation: $\frac{d}{dt}X(t)=r A X(t)$ , where $X=(x_1,x_2)^T$ and $A$ is a time-independent $2\times 2$ matrix. I am interested in taking the average of the differential equation over the random variable. If I naively just take the expected value of the previous differential equation, the problem is that I would just get $\frac{d}{dt}X=0$ because $\langle r \rangle=0$ . This is not true, as $X$ is a functional of $r$, $X(r,t)$ , so $\langle r X(r,t) \rangle\neq0$ . One possible solution would be to formally solve the differential equation as $X(t)=e^{rAt}X(0)$ and then take the average. I would like to avoid this method, as it is not always applicable. What would be the correct way to proceed if I would like to get an averaged differential equation?
Consider linear evolution equations of the form $$\tag{1} x'(t)=r A x(t) $$ where $x$ is an $n$ -tuple, $r$ is a scalar random variable and $A$ is an $n\times n$ matrix. The solution of (1) (which we will not need to evaluate) is $$\tag{2} x(t)=e^{rAt}x(0) $$ You seek a differential equation for the average $\tilde{x}$ such that $$\tag{3} \tilde{x}(t):=\left\langle x(t)\right\rangle =\left\langle e^{rAt}\right\rangle x(0) $$ with initial condition $\tilde{x}(0)=x(0)$ . Differentiating (3) we have $$\tag{4} \tilde{x}'(t)=K'(t)\tilde{x}(t) $$ where $e^{K(t)}=\left\langle e^{rAt}\right\rangle$ is a 'matrix cumulant' generating function. The appropriate mean-field equivalent of (1) is obtained by making the replacement $rA\to K'(t)$ . For Gaussian $r$ with mean $\mu$ and variance $\sigma^2$ , we can evaluate explicitly $$\tag{5} K(t)=\mu t A +\frac{\sigma^2 t^2}{2}A^2 $$ which follows from expanding the exponential and integrating each term against the Gaussian measure. The evolution equation for the average of (1) is therefore $$\tag{6} \tilde{x}'(t)=\left[\mu +\sigma^2 t A\right]A\,\tilde{x}(t) $$
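The key identity $\left\langle e^{rAt}\right\rangle = e^{\mu t A + \sigma^2 t^2 A^2/2}$ can be checked deterministically, term by term, using the Gaussian raw-moment recursion $m_k=\mu\, m_{k-1}+(k-1)\sigma^2 m_{k-2}$. The matrix $A$ and the parameters below are arbitrary choices for this sketch:

```python
import numpy as np

A = np.array([[0.2, -0.3], [0.1, 0.4]])
mu, sigma, t = 0.5, 0.8, 1.0
N = 60                                   # truncation order of both series

# Gaussian raw moments m_k = E[r^k] via m_k = mu*m_{k-1} + (k-1)*sigma^2*m_{k-2}.
m = [1.0, mu]
for k in range(2, N):
    m.append(mu * m[-1] + (k - 1) * sigma**2 * m[-2])

def expm(M, terms=N):
    """Matrix exponential via its Taylor series (fine for small matrices)."""
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Left side: <e^{rAt}> = sum_k E[r^k] (tA)^k / k!, summed term by term.
lhs, term = np.zeros((2, 2)), np.eye(2)
for k in range(N):
    lhs = lhs + m[k] * term
    term = term @ (t * A) / (k + 1)

# Right side: exp(K(t)) with K(t) = mu*t*A + sigma^2 t^2 A^2 / 2, eq. (5).
rhs = expm(mu * t * A + 0.5 * sigma**2 * t**2 * A @ A)
assert np.allclose(lhs, rhs)
```

The identity holds because $\langle e^{rx}\rangle = e^{\mu x+\sigma^2x^2/2}$ as a power series in the scalar $x$, and all powers of the single matrix $tA$ commute, so the substitution $x\to tA$ is valid.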
|ordinary-differential-equations|stochastic-processes|average|stochastic-differential-equations|
1
How does first order logic influence everyday mathematics?
I have read through a book about first order logic. It was interesting. However, when I read undergraduate math texts, it's unclear what system they are working in. They don't specify anything at all about axioms or logical axioms. I'm so confused now. What is the point of learning about first order logic if it seems that nobody cares about or knows very much about it? Analysis 1 by Tao talks about ZFC and Peano arithmetic, but that's about all I've seen. Is it worth learning model theory? Where do I start with that?
Most of the time, any particular field of mathematics will be built on some kind of foundation, and the assumption is that the foundation is stable enough for the field to work. At various times, someone constructs a paradox from some area of mathematics, and there's some work to find a system that avoids that problem. When this happens, some of the theories that rely on the previous system need to be revised since they may no longer be valid. One of the biggest examples of this was in the late 1800s and early 1900s, where things like Russell's paradox and Gödel's incompleteness theorem revealed issues in set theory and arithmetic, which were considered to be some of the most fundamental parts of mathematics. If you look at this from the view of, say, calculus or probability theory, everything still basically works. It's unlikely you were going to try to integrate over a set that isn't actually a set, or flip a coin based on the completeness of arithmetic. But if you did, then you would need to pay attention to the foundations after all.
|foundations|
0
Triebel-Lizorkin space $F_{p,q}^s(\mathbb{R}^d)$, $p=\infty$
One way to define the Triebel-Lizorkin space is using a dyadic resolution of unity. Let $\psi$ be a Schwartz function which satisfies $\hat{\psi}(\xi)=1$ when $|\xi|\leq 1$ and $\hat{\psi}(\xi)=0$ when $|\xi|>\frac{3}{2}$. Define $\psi_0:=\psi$ and $\psi_j,j\in\mathbb{N}$ via $$\widehat{\psi_j}(\xi)=\hat{\psi}(2^{-j}\xi)-\hat{\psi}(2^{-j+1}\xi).$$ Then we can see that $\widehat{\psi_j}$ is supported in the set $$\{\xi\in\mathbb{R}^d: 2^{j-1}\leq|\xi|\leq 2^{j+1}\}.$$ Moreover: $$\sum_{j=0}^\infty\widehat{\psi_j}(\xi)=1,\quad\forall \xi\in\mathbb{R}^d.$$ In this case we call $\psi$ a generating function, and $(\psi_j)_{j=0}^\infty$ a dyadic resolution of unity. Note that in this case every tempered distribution $f$ has the following representation $$f=\sum_{j=0}^\infty\psi_j*f,$$ with the series converging in the space of tempered distributions. For $s\in\mathbb{R}$, $0<p<\infty$ and $0<q\leq\infty$, the Triebel-Lizorkin norm is then defined in terms of these pieces. What I'm curious about is: what happens if $p=\infty$? I've read several textbooks, e.g. "Modern Fourier analysis"
The reason that $\|(\sum_{j\ge0}|2^{js}\psi_j\ast f|^q)^{1/q}\|_{L^\infty(\mathbb R^n)}$ does not give equivalent norms for different $\psi$ is mentioned in Triebel's "Theory of Function Spaces I", Section 2.3.2, Remark 4, where he refers to his book "Spaces of Besov-Hardy-Sobolev type", Section 2.1.4. Nevertheless for $p=\infty$ we have the Morrey-type characterization: \begin{equation*} \|f\|_{F_{\infty q}^s(\mathbb R^n)}=\sup_{J\ge0,x\in\mathbb R^n}2^{Jn/r}\left\|\Big(\sum_{j\ge J}|2^{js}\psi_j\ast f|^q\Big)^{1/q}\right\|_{L^r(B(x,2^{-J}))} \end{equation*} Different $0<r<\infty$ and different $\psi$ give equivalent norms. This characterization was introduced by Michael Frazier and Björn Jawerth in the 1990 paper "A discrete transform and decompositions of distribution spaces". Ever since then this has become a standard definition for the $F_{\infty q}$-spaces. Triebel himself also explains this in his fourth book on the theory of function spaces; of course, that book is much younger than this question.
|functional-analysis|fourier-analysis|normed-spaces|harmonic-analysis|
0
Riemann integrable function continuity points
$f$ is Riemann integrable on $[a,b]$ , $\int_a^b f(x)^2\,dx>0$ . Prove that there is a point of continuity $x$ of $f$ in $[a,b]$ with $f(x) \neq 0$ . I attempted the following: almost every point of $[a,b]$ is a continuity point. $F(x) = \int_a^x f(t)^2\,dt$ is differentiable at every continuity point, and at such a point $F'(x) = f^2(x)$ . If $F(b) > 0$ then there is a point where $f^2(x) > 0$ , and it has a continuity point in some ball around it. Is my reasoning wrong somewhere?
We can use the following results: Theorem 1 : Let $f:[a, b] \to\mathbb{R} $ be Riemann integrable on $[a, b] $ with $\int_a^b f(x) \, dx>0$ . Then there is a non-degenerate sub-interval $[c, d] $ of $[a, b] $ such that $f(x) >0$ for all $x\in[c, d] $ . Theorem 2 : Let $f:[a, b] \to\mathbb{R} $ be Riemann integrable on $[a, b] $ . Then there is a point $c\in[a, b] $ such that $f$ is continuous at $c$ . Use theorem 1 on $f^2$ to show that there is a sub-interval $[c, d] $ on which $f^2>0$ , so that $f$ is non-zero on that interval. Further, by theorem 2 (applied on $[c,d]$) there is a point $p\in[c, d] $ where $f$ is continuous, and we have $f(p) \neq 0$ because $f$ is non-zero on the whole interval $[c, d]$ .
|real-analysis|integration|definite-integrals|riemann-integration|
0
What is the meaning of this notation $\mathbb C_2$ in Jacobson's Basic Algebra I?
I was looking at Chapter 2 (Rings) in Basic Algebra I by Jacobson (1974). (Section 2.1 is Definition and Elementary Properties, 2.2 is Types of Rings, 2.3 is Matrix Rings (the ring $M_n(R)$ of $n\times n$ matrices over the ring $R$ ), and 2.4 is Quaternions.) On page 96, Jacobson defines quaternions: We consider the subset $\mathbb H$ of the ring $M_2(\mathbb C)$ of... matrices that have the form... $\pmatrix{a&b\\-\overline b&\overline a}$ . We claim that $\mathbb H$ is a subring of $\color{blue}{\mathbb C_2}$ ... $\mathbb H$ is a subgroup of the additive group of $M_2(\mathbb C)$ . We obtain the unit matrix by taking $a=1, b=0$ .... $\mathbb H$ is closed under multiplication and so $\mathbb H$ is a subring of $M_2(\mathbb C)$ . ... Every non-zero element of $\mathbb H$ has an inverse in $\color{blue}{\mathbb C_2}$ ... and... it is contained in $\mathbb H$ . Hence $\mathbb H$ is a division ring. My question is what is $\color{blue}{\mathbb C_2}$ ? Is it a typographical error for $M_2(\mathbb C)$?
I expect that this is a typo. On page 98 of the second edition of the text (published 1985), it has " $M_2(\mathbb{C})$ " where you first have " $\mathbb{C}_2$ ": and again on page 99, where you have $\mathbb{C}_2$ again, it has " $M_2(\mathbb{C})$ " again:
|abstract-algebra|matrices|ring-theory|notation|quaternions|
1
What is the meaning of this notation $\mathbb C_2$ in Jacobson's Basic Algebra I?
I was looking at Chapter 2 (Rings) in Basic Algebra I by Jacobson (1974). (Section 2.1 is Definition and Elementary Properties, 2.2 is Types of Rings, 2.3 is Matrix Rings (the ring $M_n(R)$ of $n\times n$ matrices over the ring $R$ ), and 2.4 is Quaternions.) On page 96, Jacobson defines quaternions: We consider the subset $\mathbb H$ of the ring $M_2(\mathbb C)$ of... matrices that have the form... $\pmatrix{a&b\\-\overline b&\overline a}$ . We claim that $\mathbb H$ is a subring of $\color{blue}{\mathbb C_2}$ ... $\mathbb H$ is a subgroup of the additive group of $M_2(\mathbb C)$ . We obtain the unit matrix by taking $a=1, b=0$ .... $\mathbb H$ is closed under multiplication and so $\mathbb H$ is a subring of $M_2(\mathbb C)$ . ... Every non-zero element of $\mathbb H$ has an inverse in $\color{blue}{\mathbb C_2}$ ... and... it is contained in $\mathbb H$ . Hence $\mathbb H$ is a division ring. My question is what is $\color{blue}{\mathbb C_2}$ ? Is it a typographical error for $M_2(\mathbb C)$?
I don't have the book you mention, so I'm only answering based on the provided context. Going by the definition (e.g., see this section ), it looks like $\mathbb{C}_2$ is indeed a typo or an ad-hoc abbreviation for $M_2(\mathbb{C})$ ; if the book has a note on notational conventions at the beginning, that should tell you which one it is. $M_n({R})$ is the ring of $n \times n$ matrices over the ring $R$ . That makes $M_2(\mathbb{C})$ the ring of all $2 \times 2$ matrices over the ring of complex numbers.
|abstract-algebra|matrices|ring-theory|notation|quaternions|
0
Generalised integral $\int_{0}^{\infty} \int_{0}^{y} \sqrt{x^2 + y^2} e^{-x^2 - y^2} dx dy$
I am having trouble understanding how to compute this integral $$\int_{0}^{\infty} \int_{0}^{y} \sqrt{x^2 + y^2} e^{-x^2 - y^2} \, dx \, dy$$ My idea is to consider the whole $xy$ plane instead of just half of the first quadrant. Then the integral looks like this: $$\frac{1}{8} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \sqrt{x^2 + y^2} e^{-x^2 - y^2} \, dx \, dy $$ Of course we can change to polar coordinates and get $$\frac{\pi}{4} \int_{0}^{\infty} r^2 e^{-r^2} \, dr $$ Here I am slightly at a loss and unsure of what to do. Any hint or suggestion? Thank you! Note: simply using integration by parts and the Gaussian integral we arrive at the desired answer, as noted by @Benjamin Wang.
If we perform the change of integration variables from $x$ to $t$ via $x=yt$ , we obtain $$ \int_0^{ + \infty } { \int_0^1 {y^2\sqrt {1 + t^2 } {\rm e}^{ - y^2(1 + t^2 )} {\rm d}t} \,{\rm d}y} . $$ Interchanging the order of integration yields $$ \int_0^1 {\sqrt {1 + t^2 } \int_0^{ + \infty } {y^2 {\rm e}^{ - y^2(1 + t^2 )} {\rm d}y} \,{\rm d}t} . $$ Performing the change of integration variables from $y$ to $s$ via $s^2 = y^2(1 + t^2 )$ gives $$ \int_0^1 {\frac{1}{{1 + t^2 }}\int_0^{ + \infty } {s^2 {\rm e}^{ - s^2 } {\rm d}s}\, {\rm d}t} = \int_0^1 {\frac{{\rm d}t}{{1 + t^2 }}} \cdot \int_0^{ + \infty } {s^2 {\rm e}^{ - s^2 } {\rm d}s} = \frac{\pi }{4} \cdot \frac{{\sqrt \pi }}{4} = \frac{{\pi ^{3/2} }}{{16}}. $$
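The final value $\pi^{3/2}/16 \approx 0.34802$ can be confirmed by brute-force quadrature of the original double integral over the region $0<x<y$ (midpoint rule, with the $y$-range truncated at $7$ since the tail is of order $e^{-49}$):

```python
import math

h = 0.01
total = 0.0
for iy in range(int(7 / h)):
    y = (iy + 0.5) * h
    for ix in range(iy):                 # full cells with x < y
        x = (ix + 0.5) * h
        total += math.sqrt(x * x + y * y) * math.exp(-(x * x + y * y)) * h * h
    x = (iy + 0.25) * h                  # half cell bordering the line x = y
    total += math.sqrt(x * x + y * y) * math.exp(-(x * x + y * y)) * 0.5 * h * h

assert abs(total - math.pi ** 1.5 / 16) < 2e-3
```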
|real-analysis|calculus|integration|multivariable-calculus|definite-integrals|
0
Solve $\begin{cases} (x+y)^2+3(x-y)=30 \\ xy+3(x-y)=11 \end{cases}$
Solve $\begin{cases} (x+y)^2+3(x-y)=30 \\ xy+3(x-y)=11 \end{cases}$ I subtracted the bottom equation from the top: $\Rightarrow (x+y)^2-xy=19$ And then I'm stuck because I can't come up with a new equation from these three.
$$\begin{gathered} \left\{ \begin{gathered} {\left( {x + y} \right)^2} + 3\left( {x - y} \right) = 30 \hfill \\ xy + 3\left( {x - y} \right) = 11 \hfill \\ \end{gathered} \right. \Leftrightarrow \left\{ \begin{gathered} {\left( {x - y} \right)^2} + 4xy + 3\left( {x - y} \right) = 30 \hfill \\ xy = 11 - 3\left( {x - y} \right) \hfill \\ \end{gathered} \right. \hfill \\ \Leftrightarrow {\left( {x - y} \right)^2} + 4\left( {11 - 3\left( {x - y} \right)} \right) + 3\left( {x - y} \right) = 30 \hfill \\ \Leftrightarrow {\left( {x - y} \right)^2} - 9\left( {x - y} \right) + 14 = 0 \Leftrightarrow \left[ \begin{gathered} x - y = 2 \hfill \\ x - y = 7 \hfill \\ \end{gathered} \right. \hfill \\ * \left\{ \begin{gathered} x - y = 2 \hfill \\ xy = 5 \hfill \\ \end{gathered} \right. \Leftrightarrow \left[ \begin{gathered} \left\{ \begin{gathered} x = 1 - \sqrt 6 \hfill \\ y = - 1 - \sqrt 6 \hfill \\ \end{gathered} \right. \hfill \\ \left\{ \begin{gathered} x = 1 + \sqrt 6 \hfill \\ y = - 1 + \sqrt 6 \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered} \right. \hfill \\ * \left\{ \begin{gathered} x - y = 7 \hfill \\ xy = - 10 \hfill \\ \end{gathered} \right. \Leftrightarrow \left[ \begin{gathered} \left\{ \begin{gathered} x = 2 \hfill \\ y = - 5 \hfill \\ \end{gathered} \right. \hfill \\ \left\{ \begin{gathered} x = 5 \hfill \\ y = - 2 \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered}$$
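For the case $x-y=7$, the second equation forces $xy=11-3\cdot 7=-10$, so $x(x-7)=-10$, i.e. $x\in\{2,5\}$ with $y=x-7$. A quick floating-point check of all four candidate pairs against the original system:

```python
import math

r6 = math.sqrt(6)
candidates = [(1 + r6, -1 + r6), (1 - r6, -1 - r6),   # case x - y = 2, xy = 5
              (2, -5), (5, -2)]                        # case x - y = 7, xy = -10
for x, y in candidates:
    assert abs((x + y) ** 2 + 3 * (x - y) - 30) < 1e-9
    assert abs(x * y + 3 * (x - y) - 11) < 1e-9
```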
|algebra-precalculus|
1
Showing the uniqueness of a solution to an ordinary differential equation
$$ \dot{y}(x)+y(x)=u(x) $$ $$ y_c(x) = ce^{-x} + \int_{0}^{x} u(t) e^{t-x}\, dt $$ Show that if $u$ is periodic with period $T$, then there exists exactly one solution of the differential equation with period $T$. Here $y_c(0) = c$ holds, and the hint is to use the second equation. I am stuck trying to prove the proposed property. Could someone provide a hint on how to proceed? Your help will be greatly appreciated.
If $u$ is sufficiently well-behaved, the solution to the initial value problem $\dot{y}+y=u(x)$ and $y(0)=c$ is unique $^{(*)}$ and given by $$ y_c(x) = ce^{-x} + \int_{0}^{x} u(t) e^{t-x}\, dt. \tag{1} $$ If there exists $c$ such that $y_c$ is $T$ -periodic, then $y_c(T)=y_c(0)$ . According to $(1)$ , this implies $$ ce^{-T}+\int_0^Tu(t)e^{t-T}\,dt=c \implies c=\frac{1}{1-e^{-T}}\int_0^Tu(t)e^{t-T}\,dt, \tag{2} $$ i.e., there can be at most one value of $c$ such that $y_c$ is $T$ -periodic. Eq. $(2)$ is a necessary condition for $y_c$ to be $T$ -periodic. I will now show that, if $u$ is $T$ -periodic, then it also is a sufficient condition. Indeed, according to $(1)$ , \begin{align} y_c(x+T)-y_c(x)&=ce^{-x-T} + \int_{0}^{x+T} u(t) e^{t-(x+T)}\, dt -ce^{-x} - \int_{0}^{x} u(t) e^{t-x}\, dt \\ &=e^{-x}\left(ce^{-T}-c+\int_{0}^{x+T} u(t) e^{t-T}\, dt - \int_{0}^{x} u(t) e^{t}\, dt\right). \tag{3} \end{align} Since $u(t)=u(t+T)$ , the last integral in $(3)$ can be rewritten as $$ \int_{0}^{x} u(t) e^{t}\, dt = \int_{T}^{x+T} u(t-T)\, e^{t-T}\, dt = \int_{T}^{x+T} u(t)\, e^{t-T}\, dt. \tag{4} $$ Substituting $(4)$ into $(3)$ leaves $$ y_c(x+T)-y_c(x)=e^{-x}\left(ce^{-T}-c+\int_{0}^{T} u(t) e^{t-T}\, dt\right)=0 $$ by $(2)$ . Hence, for the value of $c$ given by $(2)$ , $y_c$ is $T$ -periodic, and it is the unique $T$ -periodic solution.
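The argument is easy to test numerically. With the (hypothetical) forcing $u(t)=\sin t$, so $T=2\pi$, building $c$ from formula $(2)$ and $y_c$ from formula $(1)$ does give a $T$-periodic solution:

```python
import math

def quad(g, a, b, n=20000):
    """Midpoint-rule quadrature on [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (j + 0.5) * h) for j in range(n))

T = 2 * math.pi
u = math.sin

# c from eq. (2)
c = quad(lambda t: u(t) * math.exp(t - T), 0, T) / (1 - math.exp(-T))

def y(x):
    """Solution formula (1)."""
    return c * math.exp(-x) + quad(lambda t: u(t) * math.exp(t - x), 0, x)

# The resulting solution is T-periodic (checked at a few sample points).
for x in [0.0, 0.7, 2.0]:
    assert abs(y(x + T) - y(x)) < 1e-4
```

For this particular $u$ the periodic solution is $(\sin x-\cos x)/2$, and the computed $c$ comes out as $-1/2$, matching $y_c(0)$.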
|ordinary-differential-equations|periodic-functions|
1
Sum of autocorrelation coefficients
This is a follow-up to this thread: Proof that sum over autocorrelations is -1/2. I am posting a new thread as that one was posted 6 years ago. In that thread the Stack Exchange author (Kuhlambo) lists some equations from a paper found in https://doi.org/10.1515/ROSE.2009.008 , which prove that the sum of the sample autocorrelation coefficients is always equal to $-1/2$. I find it hard to understand how the original author of the proof expanded the variance: $$\sum^{T}_{t=1} (y_t - \overline y)^2$$ into the following expression: $$\big(\sum^{T}_{t=1} (y_t - \overline y)\big)^2-2 \sum_{h=1}^{T-1}\sum^{T-h}_{t=1} (y_t - \overline y)(y_{t+h} - \overline y)$$ This is a central step of the proof and I cannot see how the author went from one step to the next. The paper does not provide any detail on this derivation, and considers this step as elementary, which it might be given my limited understanding. Can someone help me understand how we move from one step to the next?
Here is one approach to the calculation. I'll illustrate it when $T=5$ . Let's start by making a matrix which I will color in to help the exposition. Let $Y_{t,s} = (y_t - \bar{y})(y_s - \bar{y})$ and define the matrix \begin{equation} Y = \begin{bmatrix} \color{green}{Y_{1,1}} & \color{blue}{Y_{1,2}} & \color{blue}{Y_{1,3}} & \color{blue}{Y_{1,4}} & \color{blue}{Y_{1,5}} \\ \color{red}{Y_{2,1}} & \color{green}{Y_{2,2}} & \color{blue}{Y_{2,3}} & \color{blue}{Y_{2,4}} & \color{blue}{Y_{2,5}} \\ \color{red}{Y_{3,1}} & \color{red}{Y_{3,2}} & \color{green}{Y_{3,3}} & \color{blue}{Y_{3,4}} & \color{blue}{Y_{3,5}} \\ \color{red}{Y_{4,1}} & \color{red}{Y_{4,2}} & \color{red}{Y_{4,3}} & \color{green}{Y_{4,4}} & \color{blue}{Y_{4,5}} \\ \color{red}{Y_{5,1}} & \color{red}{Y_{5,2}} & \color{red}{Y_{5,3}} & \color{red}{Y_{5,4}} & \color{green}{Y_{5,5}} \end{bmatrix} \end{equation} Now let's note a few things about this matrix. First, $$\sum_{t=1}^T (y_t-\bar{y})^2 = \sum_{t=1}^T Y_{t,t},$$ the sum of the green diagonal entries.
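The resulting claim, that the sample autocorrelations always sum to exactly $-1/2$, follows because $\sum_t(y_t-\bar y)=0$ makes the whole matrix sum vanish. It can be checked empirically on random data:

```python
import random

random.seed(0)
T = 50
y = [random.gauss(0, 1) for _ in range(T)]
ybar = sum(y) / T

# Sample autocorrelations rho(h) = sum_t (y_t - ybar)(y_{t+h} - ybar) / c0.
c0 = sum((v - ybar) ** 2 for v in y)
rho = [sum((y[t] - ybar) * (y[t + h] - ybar) for t in range(T - h)) / c0
       for h in range(1, T)]

# They sum to -1/2 for *any* data set, not just this one.
assert abs(sum(rho) + 0.5) < 1e-9
```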
|covariance|correlation|time-series|
1
Number of elements in set $S = \{(x,y,z): x,y,z \in \mathbb{Z}, x+2y+3z=42, x,y,z \ge 0\}$ is?
Number of elements in set $S = \{(x,y,z): x,y,z \in \mathbb{Z}, x+2y+3z=42, x,y,z \ge 0\}$ is? My solution: This is equal to the coefficient of $t^{42}$ in $(1+t+t^2+t^3+...+t^{42})(1+t^2+t^4+...+t^{42})(1+t^3+t^6+...+t^{42})$ = $\frac{(1-t^{43})}{1-t} \times \frac{(1-t^{44})}{1-t^2} \times \frac{(1-t^{45})}{1-t^3} $ = $\frac{1} {(1-t)(1-t^2)(1-t^3)}$ since I neglected higher powers of $t$ = $\frac{1} {(1-t)^3(1+t)(1+t+t^2)}$ = $\frac{(1-t)^{-3}} {(1+t)(1+t+t^2)}$ Now I know the coefficient of $x^n$ in $(1-x)^{-r}$ is $\binom {n+r-1}{r-1}$ . But I don't know what to do with the denominator part. Can someone help?
Just use partial fractions. Write $$ \frac{1}{(1-t)^3(1+t)(1+t+t^2)}=\frac{A_1}{1-t}+\frac{A_2}{(1-t)^2}+\frac{A_3}{(1-t)^3}+\frac{B}{1+t}+\frac{C_0+C_1t}{1+t+t^2}. $$ Multiplying LHS by $1+t$ and letting $t=-1$ yields $$ B=\left[\frac{1}{(1-t)^3(1+t+t^2)}\right]_{t=-1}=\frac{1}{(1+1)^3(1-1+1)}=\frac{1}{8}. $$ Multiplying LHS by $(1-t)^3$ and letting $t=1$ yields $$ A_3=\left[\frac{1}{(1+t)(1+t+t^2)}\right]_{t=1}=\frac{1}{(1+1)(1+1+1)}=\frac{1}{6}. $$ You can keep going like this (using your favorite method of partial fraction expansion) to obtain the following: \begin{multline*} \frac{1}{(1-t)^3(1+t)(1+t+t^2)}=\\ =\frac{1}{6}\cdot\frac{1}{(1-t)^3}+\frac{1}{4}\cdot\frac{1}{(1-t)^2}+\frac{17}{72}\cdot\frac{1}{1-t}+\frac{1}{8}\cdot\frac{1}{1+t}+\frac{1}{9}\cdot\frac{2+t}{1+t+t^2}. \end{multline*} Everything except the last summand expands easily into a power series, and for the last summand we can write $$ \frac{2+t}{1+t+t^2}=\frac{(1-t)(2+t)}{(1-t)(1+t+t^2)}=\frac{2-t-t^2}{1-t^3}=(2-t-t^2)\sum_{m=0}^{\infty}t^{3m}. $$ Now read off the coefficient of $t^{42}$ from each summand and add them up.
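As a sanity check, the coefficient of $t^{42}$ can be computed two independent ways: by counting solutions directly, and by multiplying the truncated generating-function factors. Both give $169$:

```python
# Direct count: for each z, y ranges over 0..(42 - 3z)//2 and x is forced.
direct = sum(1
             for z in range(15)
             for y in range((42 - 3 * z) // 2 + 1))

def polymul(p, q, nmax):
    """Multiply polynomials given as coefficient lists, truncated at t^nmax."""
    out = [0] * (nmax + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= nmax:
                out[i + j] += a * b
    return out

g1 = [1] * 43                                     # 1 + t + ... + t^42
g2 = [1 if i % 2 == 0 else 0 for i in range(43)]  # even powers (the 2y part)
g3 = [1 if i % 3 == 0 else 0 for i in range(43)]  # multiples of 3 (the 3z part)
coef = polymul(polymul(g1, g2, 42), g3, 42)[42]

assert direct == coef == 169
```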
|binomial-coefficients|generating-functions|
0
A pointwise Hadamard-Landau-Kolmogorov inequality for $C^2$ functions
A well-known interpolation inequality, proved by Hadamard (1914) and generalized by Landau (1913) and Kolmogorov (1939), asserts that $$ |f'(x)|^2 \leq 2 \sup_{x \in \mathbf R} |f(x)|\sup_{x \in \mathbf R} |f''(x)| $$ for any $x$ and for any function $f \in C^2(\mathbf R)$ . Let us assume additionally $$|f''(x)| \leq 1 \quad\text{for any } x.$$ Then the inequality becomes $$ |f'(x)|^2 \leq 2 \sup_{x \in \mathbf R} |f(x)| \quad\text{for any } x. $$ My question is: can we have $$ |f'(x)|^2 \leq 2 |f(x)| \quad\text{for any } x? $$ This could be too strong, so a weaker version is also meaningful, namely: is there some $C>0$ such that $$ |f'(x)|^2 \leq C |f(x)| $$ for all $x \in \mathbf R$ ?
Thanks, Martin R. The final remark (too long for a comment) is as follows: the factor $2$ is optimal in the sense that it cannot be replaced by any smaller positive constant, and the factor $2$ is never achieved. For the former conclusion, the optimality can be proved by constructing suitable functions, namely given any $C \in (0, 2)$ , there is some positive $C^2$-like function $f$ such that $$f'(x)^2 > C f(x).$$ Indeed, let $$f(x)=c+\left\{\begin{aligned} & 0 & & \text{if } x \leq -2,\\ & \tfrac {(x+2)^2}2 & & \text{if } -2 < x \leq -1,\\ & 1-\tfrac{x^2}2 & & \text{if } -1 < x \leq 1,\\ & \tfrac{(x-2)^2}2 & & \text{if } 1 < x \leq 2,\\ & 0 & & \text{if } x > 2,\\ \end{aligned}\right.$$ for some $c>0$ to be determined later. (This function is not of class $C^2(\mathbf R)$ , but twice differentiable except at $\pm 1, \pm 2$ , so it serves our proof.) Then, we get $$f'(x)=\left\{\begin{aligned} & 0 & & \text{if } x \leq -2,\\ & x+2 & & \text{if } -2 < x \leq -1,\\ & -x & & \text{if } -1 < x \leq 1,\\ & x-2 & & \text{if } 1 < x \leq 2,\\ & 0 & & \text{if } x > 2,\\ \end{aligned}\right.$$ so that $|f''(x)|\leq 1$ wherever $f''$ exists. Hence at $-1$ we should have $$f'(-1)^2 - C f(-1) = 1 - C(c+\frac 12) > 0$$ provided $c \in (0, 1/C-1/2)$ . For the latter conclusion, I believe that there is no function for which equality holds.
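The construction can be sanity-checked numerically. The piecewise function below is one concrete realization consistent with the answer's description (assumed details of this sketch: kinks at $\pm 1,\pm 2$, $f(-1)=c+\tfrac12$, $f'(-1)=1$, and $|f''|\le 1$ almost everywhere):

```python
def f(x, c):
    """Positive bump of height 1 glued from four quadratic pieces, plus c."""
    if x <= -2: return c
    if x <= -1: return c + (x + 2) ** 2 / 2
    if x <= 1:  return c + 1 - x ** 2 / 2
    if x <= 2:  return c + (x - 2) ** 2 / 2
    return c

def fp(x):
    """Derivative: piecewise linear with slopes in {-1, 0, 1}, so |f''| <= 1 a.e."""
    if x <= -2: return 0.0
    if x <= -1: return x + 2
    if x <= 1:  return -x
    if x <= 2:  return x - 2
    return 0.0

C = 1.9                       # any C in (0, 2)
c = 0.5 * (1 / C - 0.5)       # any c in (0, 1/C - 1/2)

# f'(-1)^2 - C f(-1) = 1 - C (c + 1/2) > 0: the bound f'^2 <= C f fails at x = -1.
assert fp(-1) ** 2 - C * f(-1, c) > 0
```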
|inequality|interpolation|
0
How to solve for the angle $x$ ?($\tan(x)+ 4 \sin(x)=\sqrt 3$)
I have tried $\tan(x)+ 4 \sin(x)=\sqrt 3$. To make the notation simpler I will write $s:=\sin(x)$: $$4s +\frac{s}{\sqrt{1-s^2}}=\sqrt{3}$$ $$(\sqrt{3}-4s)^2 (1-s^2) = s^2$$ $$-16s^4 +8 \sqrt{3} s^3 +12s^2 -8\sqrt{3} s +3=0$$ Needless to say, solving this quartic equation would be too difficult and annoying, so there must be some trick, but I couldn't find it.
$$\begin{gathered} \tan \left( x \right) + 4\sin \left( x \right) = \sqrt 3 \left( 1 \right) \hfill \\ * x \ne \frac{\pi }{2} + k\pi ,k \in \mathbb{Z} \hfill \\ \left( 1 \right) \Leftrightarrow 4\sin \left( x \right)\cos \left( x \right) + \sin \left( x \right) = \sqrt 3 \cos \left( x \right) \hfill \\ \Leftrightarrow 2\sin \left( {2x} \right) = \sqrt 3 \cos \left( x \right) - \sin \left( x \right) = 2\cos \left( {x + \frac{\pi }{6}} \right) = 2\sin \left( {\frac{\pi }{3} - x} \right) \hfill \\ \Leftrightarrow \sin \left( {2x} \right) = \sin \left( {\frac{\pi }{3} - x} \right) \Leftrightarrow \left[ \begin{gathered} 2x = \frac{\pi }{3} - x + k2\pi \hfill \\ 2x = \pi - \left( {\frac{\pi }{3} - x} \right) + k2\pi \hfill \\ \end{gathered} \right.\left( {k \in \mathbb{Z}} \right) \hfill \\ * 2x = \frac{\pi }{3} - x + k2\pi \Leftrightarrow 3x = \frac{\pi }{3} + k2\pi \Leftrightarrow x = \frac{\pi }{9} + k\frac{{2\pi }}{3} \hfill \\ * 2x = \pi - \left( {\frac{\pi }{3} - x} \right) + k2\pi \Leftrightarrow x = \frac{{2\pi }}{3} + k2\pi \hfill \\ \end{gathered}$$ Neither family meets $\cos x = 0$, so the solutions are $x = \frac{\pi}{9} + k\frac{2\pi}{3}$ and $x = \frac{2\pi}{3} + 2k\pi$, $k \in \mathbb{Z}$.
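A quick spot-check that both solution families really satisfy the original equation:

```python
import math

def g(x):
    """Residual of tan(x) + 4 sin(x) = sqrt(3)."""
    return math.tan(x) + 4 * math.sin(x) - math.sqrt(3)

for k in range(-3, 4):
    assert abs(g(math.pi / 9 + 2 * k * math.pi / 3)) < 1e-9
    assert abs(g(2 * math.pi / 3 + 2 * k * math.pi)) < 1e-9
```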
|geometry|algebra-precalculus|
1
Confusion on finding if a limit exists or not.
I have started the introduction of calculus in high school, and I am confused about this problem. Define $f(x) = -x$ for $x < 0$. Then find out if $\lim_{x \to 0} f(x)$ exists or not. If it does, then state the value. Here is my approach: the limit exists if $\lim_{x \to 0^{-}} f(x)$ = $\lim_{x \to 0^{+}} f(x)$ . Then is it appropriate to state that $\lim_{x \to 0^{-}} f(x) = 0$ , because $0^{-}$ is still in the interval $x < 0$? It probably is. But is it appropriate to say $\lim_{x \to 0^{+}} f(x) = 0$ ? I think not, because $0^{+}$ is NOT in the interval. Is my reasoning correct?
Here is my approach: The limit exists if $\lim_{x \to 0^{-}} f(x)$ = $\lim_{x \to 0^{+}} f(x)$ . No. This is false. If $f$ is defined on a (possibly punctured) interval $(a-\epsilon, a+\epsilon)\setminus \{a\}$ , then yes, the limit $$\lim_{x\to a} f(x)$$ exists if and only if the two directional limits $$\lim_{x\to a^+} f(x) \quad\text{and}\quad \lim_{x\to a^-} f(x)$$ exist and are equal. However, since your function is not defined on such an interval, that particular fact cannot help you with calculating the limit. Additionally, I would strongly warn you against sentences such as your last one, i.e. that " $0^+$ is not in the interval". The interval is a set of numbers. Elements of the set are numbers. $0^+$ is not a number, it's just a collection of symbols used to denote a directional limit. So yeah, sure, $0^+$ is not in the interval, but neither is $0^-$ . The sentence is more or less nonsensical.
|calculus|limits|
0
Contour Integral of $1/z dz$ with triangle
I was asked to find the contour integral of $dz/z$ along a triangle with vertices $1-2i$ , $-2+i$ and $1+i$ . I am not sure how to begin this, but my professor said that one piece would be: $$\int_{-1}^{2} \frac{-i}{1-ti} dt$$ This doesn't make sense to me: why would $z$ be equal to $1-ti$ ? I can see he may be working from the vertically connected vertices, but wouldn't that be $1+iy$ ?
I'll assume that you want to do this directly, rather than appealing to the residue theorem. Suppose we have a path $\gamma: [a, b] \to \mathbb{C}$ . When we write down a contour integral $\int_{\gamma} f(z) \; dz$ , this is the same as $$\int_{a}^b f(\gamma(t)) \gamma'(t) \; dt,$$ just like any other path/line/curve integral. So, the integral that your prof told you to start with corresponds to the piece of the triangular contour going from $1 + i$ to $1 - 2i$ . One way to parametrize this line segment is to define $\gamma(t) = 1 - ti$ with domain $t \in [-1, 2]$ . Then $\gamma'(t) = -i$ , so indeed the correct integral expression (using $f(z) = 1/z$ in my previous formula) is $$\int_{-1}^2 \frac{-i}{1 - it} \; dt.$$ You can do something completely similar with the other two pieces and add all of them up to get the full contour integral. That being said, despite the fact that this integral looks simple, you need to be careful. It is true that the function $\log(1 - it)$ has this integrand as its derivative, but you have to be careful about which branch of the logarithm you use.
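Since the origin lies inside this triangle, the full integral should come out as $\pm 2\pi i$ depending on orientation; the vertex order $1-2i \to -2+i \to 1+i$ runs clockwise around the origin, so one expects $-2\pi i$. A direct numerical evaluation, segment by segment, confirms this:

```python
import math

# Midpoint-rule evaluation of the contour integral of 1/z over the triangle.
verts = [1 - 2j, -2 + 1j, 1 + 1j]
n = 20000
total = 0.0
for a, b in zip(verts, verts[1:] + verts[:1]):
    for j in range(n):
        z = a + (b - a) * (j + 0.5) / n   # midpoint of the j-th sub-segment
        total += (1 / z) * (b - a) / n    # f(gamma(t)) * gamma'(t) * dt

assert abs(total - (-2j * math.pi)) < 1e-4
```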
|complex-analysis|contour-integration|
0
Organizing the content of Euclidean geometry with pictorial mind maps
There is an idea that has been on my mind for a while, and I would like to share it so that it turns into a snowball. Perhaps it will be useful and attractive to geometry enthusiasts. I ask you to help me develop this model that I am thinking of... Most geometric theorems are proven by putting several theorems together, but there are one or two theorems that represent the core idea of the proof. Even if you know the basic theorem you will use to do your proof, you still have to do some work to arrive at the proof, even though the basic step has already been done. Making a very large mind map would be impractical, but small mind maps can be made to express some conclusions. A lot of written mind maps can be made, but I think it would be fun and useful for geometry enthusiasts to have pictorial mind maps so they can guess a good visualization of the proof. I think that pictorial mind maps are an excellent idea for geometry, as the geometry student has a good connection between the e
Indeed, as @Jean-Marie said, some mental diagrams should include information about the methods used in a proof, which may not be obvious, so I suggest using stereotypes of common proof patterns; here are some examples: retrograde (backward) thinking, the pigeonhole principle, the principle of inclusion and exclusion, the Cavalieri principle, the principle of infinite descent, mathematical induction, the carpet principle, telescoping series, proof by contradiction, and counting in two ways.
|geometry|soft-question|euclidean-geometry|big-list|
0
Show that $[\sqrt{n}]=[\sqrt{n}+\frac{1}{n}]$, for any $n\in N, n\geq 2$
Show that $[\sqrt{n}]=[\sqrt{n}+\frac{1}{n}]$ , for any $n\in \mathbb N, n\geq 2$ . I let $a=\sqrt{n}$ , and we know that $k\leq a < k+1$ , where $k\in \mathbb N$ . From now all we have to do is to show that $k \leq \frac{1}{a^2}+a < k+1$ . I tried processing the first inequality but got to nothing useful. I hope one of you can help me! Thank you!
Let $e = a-k$ be the fractional part of $a$ , which means $0 \leq e < 1$ . Then the problem becomes $$0 \leq e + \frac{1}{(k+e)^2} < 1.$$ The LHS is obvious, so we only need to consider the RHS. Now let's consider the condition $n < (k+1)^2$ . Since $n$ is an integer we can further restrict it to $n \leq (k+1)^2 - 1$ . Then we have $$ \begin{align} & &(k+e)^2 &\leq (k+1)^2 - 1 \\ &\Leftrightarrow &e &\leq \sqrt{k^2+2k}-k\\ & & &= 1 + (\sqrt{k^2+2k}-k-1)\\ & & &= 1 - \frac{1}{k+1+\sqrt{k^2+2k}}\\ & & &< 1 - \frac{1}{(k+e)^2} \quad\text{for } k\geq 3. \end{align} $$ Hence for $k \geq 3$ , $$ e + \frac{1}{(k+e)^2} < 1. $$ Cases for $k \leq 2$ can be checked by hand.
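As a supplement (my addition, not part of the original answer), the $k\geq 3$ case can be verified exactly in integer arithmetic: with $k=\lfloor\sqrt n\rfloor$ , the inequality $\sqrt n + 1/n < k+1$ is equivalent to $n^3 < (n(k+1)-1)^2$ . The hand-check of small $k$ matters: $n=3$ gives $\sqrt3+\tfrac13\approx 2.07 > 2$ , so that single value is a genuine exception to the identity as literally stated.

```python
from math import isqrt

# Exact integer test of [sqrt(n)] = [sqrt(n) + 1/n]: with k = isqrt(n),
# the identity holds iff sqrt(n) + 1/n < k + 1, i.e. n^3 < (n*(k+1) - 1)^2.
def holds(n):
    k = isqrt(n)
    return n**3 < (n * (k + 1) - 1)**2

# every n with isqrt(n) >= 3 satisfies the inequality ...
assert all(holds(n) for n in range(9, 100001))
# ... k = 2 (n = 4..8) and n = 2 pass by hand, while n = 3 fails
assert all(holds(n) for n in range(4, 9)) and holds(2) and not holds(3)
```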
|inequality|radicals|
0
Can you determine $n$ with oracle for primality of $n+m$ for given $m$?
I haven’t studied much number theory at all, but I got asked this question by a family member who is a fan of recreational math, and I have no clue how to answer it, or even the subfield it precisely belongs to and keywords to search to find discussion of it: Say that $n \in \mathbb{N}$ is unknown, but you have access to an oracle which, given an $m \in \mathbb N$ , tells whether or not $m+n$ is prime. Is it possible to determine in finite time the value of $n$ ? What about for other subsets of $\mathbb N$ ? Is there a name for this property?
Yes, it is indeed possible to determine $n$ in finite time. First, note that by Dirichlet's theorem, for all primes $p$ , there exist infinitely many primes that are $a \bmod{p}$ for $a \in \{1, 2, \ldots, p-1\}$ . Fix any prime $p > n$ and let $q_1, \ldots, q_{p-1}, q'_1, \ldots, q'_{p-1}$ be distinct primes greater than $p$ such that $q'_i \equiv q_i \equiv i \bmod{p}$ . Then, we can figure out in finite time that $m + n$ is prime for $m \in \{p - n, q_1 - n, \ldots, q_{p-1} - n, q'_1 - n, \ldots, q'_{p-1} - n\}$ . First, these $2p-1$ values for $m$ cover all possible remainders modulo $p$ , so one of these values $m$ must give $m + n$ being divisible by $p$ , which forces $m + n = p$ . Furthermore, all but one remainder (corresponding to $p-n$ ) occurs more than once, so we also know exactly which value of $m$ gives $m + n = p$ . From here, we deduce that $n = p - m$ . Example : Let $n = 4$ . We can use an oracle to find values of $m$ such that $m + n$ is prime until we find some pr
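This is not the Dirichlet-prime construction above, but a brute-force illustration of the same conclusion (my addition, assuming for simplicity a known upper bound on $n$ ): the prime/composite pattern over a long enough query window pins $n$ down uniquely.

```python
def is_prime(k):
    # simple trial division, adequate for small inputs
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def recover(oracle, n_bound=100, window=1000):
    # pattern[m] answers the oracle query "is m + n prime?" for the hidden n
    pattern = [oracle(m) for m in range(window)]
    candidates = [n for n in range(n_bound)
                  if all(is_prime(m + n) == pattern[m] for m in range(window))]
    assert len(candidates) == 1   # the window is long enough to isolate n
    return candidates[0]

hidden = 4
assert recover(lambda m: is_prime(m + hidden)) == hidden
```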
|number-theory|prime-numbers|
1
Confusion on finding if a limit exists or not.
I have started the introduction of calculus in high school, and I am confused about this problem. Define $f(x) = -x$ for $x < 0$ . Then find out if $\lim_{x \to 0} f(x)$ exists or not. If it does, then state the value. Here is my approach: The limit exists if $\lim_{x \to 0^{-}} f(x)$ = $\lim_{x \to 0^{+}} f(x)$ . Then is it appropriate to state that $\lim_{x \to 0^{-}} f(x) = 0$ , because $0^{-}$ is still in the interval $x < 0$ ? It probably is. But, is it appropriate for $\lim_{x \to 0^{+}} f(x) = 0$ ? I think not, because $0^{+}$ is NOT in the interval. Is my reasoning correct?
The theorem you are trying to use, $\lim_{x \to a} f(x) \text{ exists} \hspace{2mm}\text{iff}\hspace{2mm} \lim_{x \to a^+} f(x) = \lim_{x \to a^-} f(x) $ , only makes sense when a symmetric open neighbourhood can be taken around $a$ . The idea of neighbourhoods is a little more complicated than the intuitive definition of a limit and is sometimes omitted in high school courses, but basically the idea is that you can generate an open and bounded interval around the point $a$ that will be called a neighbourhood. Let me introduce the rigorous definition of limit without using extensive formal notation: we say $\lim_{x \to a} f(x) = L$ if you can take $x \in (-\delta + a,a+\delta)$ with $\delta >0$ a small number, and you can ensure that $f(x) \in (-\epsilon+L,L+\epsilon)$ , where $\epsilon$ is another small positive number. The idea is that given any small positive number $\epsilon$ , you can give a positive $\delta$ that ensures that all the $x$ 's in the neighbourhood o
|calculus|limits|
0
Does improvement on regularity imply compactness?
Let $M$ be a compact metric space and $\mathcal C^0(M)=\{f:M\to\mathbb R; f\ \text{is continuous}\}$ . Let $T:\mathcal C^0(M)\to \mathcal C^0(M)$ be a bounded linear transformation. Suppose that for every $f\in\mathcal C^0(M)$ we have that $Tf$ is $1/2$ -Hölder. Does this imply that $T$ is compact? It is clear that if $ | T f | _ {\mathcal C^{1/2} (M)}\leq K$ for some $ K> 0$ for every $\| f\|_{\mathcal C^0(M)}\leq 1$ , then Arzelà–Ascoli implies the result. However, I believe that this is not generally true. Also, I am not able to show that $T(B(0,1))$ is equicontinuous.
$C^{1/2}(M)$ is a Banach space. Since the norm on $C^{1/2}(M)$ is stronger than the norm on $C^0(M)$ , one sees that $T$ is closed as an operator from $C^0(M)$ to $C^{1/2}(M)$ . Thus, it is bounded by the closed graph theorem. In particular, we indeed have $\sup_{\|f\| \leq 1} |Tf|_{C^{1/2}(M)} < \infty$ . The remainder is just the Arzelà–Ascoli theorem, as you already mentioned.
|real-analysis|functional-analysis|
1
Some properties about the equation $P(z)\phi(z)=0$
Let $\phi(z)=\sum_{m=1}^\infty (\phi_{m}z^m-\psi_{-m}z^{-m})$ where $\phi_m, \psi_{-m}\in \mathbb{C}$ . If we can find one polynomial $P(z)\in \mathbb{C}[z]$ such that $P(z)\phi(z)=0$ , how can I get some properties about $\sum_{m=1}^\infty \phi_{m}z^m$ and $\sum_{m=1}^\infty \psi_{-m}z^{-m}$ from the equation $P(z)\phi(z)=0$ ? For example: $(1):$ Can I get that $\phi(z)$ is a rational function? $(2):$ Can I get $\sum_{m=1}^\infty \phi_{m}z^m=\sum_{m=1}^\infty \psi_{-m}z^{-m}$ ? Here, the left hand side and the right hand side can be viewed as the same rational function expanded at $0$ and at $\infty$ . Why I have this question: I am reading the paper Asymptotic representations and Drinfeld rational fractions ; the doi of this paper is $10.1112/S0010437X12000267$ . In this paper's Lemma 3.9, if we define $\Psi_i(z)= \sum_{k=0}^\infty \Psi_{i,k}z^k$ , the author proves that $$\sum_{m=0}^Na_mz^{N-m}\bigg(\Psi_i(z)-\sum_{p=0}^m\Psi_{i,p}z^p\bigg)=0.$$ Then he concludes that $\phi(z)= \sum_{k=0}^\infty \ph
This is a Laurent series . I presume it is supposed to converge to $\phi(z)$ , at least in some annulus $A = \{z: r<|z|<R\}$ with $0\le r<R$ . If $P(z) \phi(z) = 0$ where $P$ is a polynomial not identically $0$ , then $\phi(z) = 0$ everywhere in $A$ . And then all the coefficients are $0$ by the Cauchy integral formula.
|linear-algebra|rational-functions|quantum-computation|quantum-groups|
1
Integer ordered pairs $(x,y)$ for which $x^2-y!$.....
[1] Total no. of integer ordered pairs $(x,y)$ for which $x^2-y! = 2001$ [2] Total no. of integer ordered pairs $(x,y)$ for which $x^2-y! = 2013$ My try: (1) $x^2-y! = 2001\Rightarrow x^2 = 2001+y!$ We know that $y!$ ends with one of $0, 1,2,4,6$ and the last digit of $x^2$ is one of $0,1,4,5,6,9$ . But I did not understand how to proceed further. Help required, thanks.
I will answer only the second equation, and I will not use the usual moduli, namely $\pmod{5}$ , $\pmod{8}$ , and $\pmod{9}$ . Since $13^{3}-13^{2}=2028$ and $2028-13=2015$ are both divisible by $13$ , we get $2013=2015-2\equiv-2\equiv11\pmod{13}$ . However, we know that $-2$ is a quadratic residue only modulo primes $p\equiv 1,3\pmod{8}$ , hence $x^{2}\equiv-2\equiv11\pmod{13}$ is not solvable, and thus $x^{2}-y!=2013$ does not have solutions when $y\geq13$ (so that $13\mid y!$ ). Now, we need only check the values $y\leq12$ to see if there are any positive integer solutions, and by brute force it turns out there are none; as a result, $x^{2}-y!=2013$ does not have any solutions with $x, y \in \mathbb{Z}^{+}$ .
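The modular claims here are easy to machine-check (my addition, not part of the original answer), together with the brute-force search over the finitely many values $1\le y\le 12$ , i.e. those with $13\nmid y!$ :

```python
from math import isqrt, factorial

# 2013 ≡ 11 (mod 13), and 11 is not a quadratic residue mod 13
assert 2013 % 13 == 11
assert all(x * x % 13 != 11 for x in range(13))

# brute force the remaining range: 2013 + y! is never a perfect square
sols = [(isqrt(2013 + factorial(y)), y) for y in range(1, 13)
        if isqrt(2013 + factorial(y)) ** 2 == 2013 + factorial(y)]
assert sols == []
```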
|elementary-number-theory|factorial|
0
Proving the process $(Y_t)_{t\in[0,\infty)}$ is a standard Brownian motion
Let $a\in(0,\infty)$ . Let $(X_t)_{t\in[0,a]}$ be a standard Brownian motion and $(W_s)_{s\in[0,\infty)}$ be a standard Brownian motion such that $W_s$ is independent of $X_t$ for all $s \geqslant 0$ and $t \in [0, a]$ . Define a new process $(Y_t)_{t\in[0,\infty)}$ as follows. \begin{equation} Y_{t} = \begin{cases} X_t & \text{if}~0\leqslant t\leqslant a\\ W_{t-a}+X_a & \text{if}~t>a. \end{cases} \end{equation} Justify whether the above process $(Y_t)_{t\in[0,\infty)}$ is a standard Brownian motion. We must check if the process $Y_t$ satisfies all the properties (independent increments, sample path continuity, etc.). To do so, you have to pick four time points, let them be $t_j < t_{j+1} \leqslant t_i < t_{i+1}$ , then vary where $a$ belongs in the inequality, and then take cases. In the case where $t_j\leqslant a\leqslant t_{j+1}\leqslant t_i$ we have that $Y_{t_{j+1}}-Y_{t_{j}} = W_{t_{j+1}-a}+X_a-X_{t_{j}}$ and $Y_{t_{i+1}}-Y_{t_{i}}=W_{t_{i+1}-a}-W_{{t_i}-a}.$ How do I prove that $W_{t_{j+1}-a}$ is independent
First, let's clarify what independent processes means: Independent stochastic processes and independent random vectors . The definition for the two processes to be independent is given by PlanetMath : Two stochastic processes $\lbrace X(t)\mid t\in T \rbrace$ and $\lbrace Y(t)\mid t\in T \rbrace$ are said to be independent if, for any positive integer $n$ and any sequence $t_1,\ldots,t_n\in T$ , the random vectors $\boldsymbol{X}:=(X(t_1),\ldots,X(t_n))$ and $\boldsymbol{Y}:=(Y(t_1),\ldots,Y(t_n))$ are independent. So in this particular case, we have independence of the vectors $$(W_{t_{1}},...,W_{t_{n}})\text{ and }(X_{s_{1}},...,X_{s_{n}})$$ and so this also implies $W_{t_{2}}-W_{t_{1}}$ is independent of $X_{s_{2}}-X_{s_{1}}$ . The rest of the proof follows as the answer https://math.stackexchange.com/a/4882724/383044 mentioned.
|stochastic-processes|brownian-motion|
0
ODE homogeneous kernel collides with non-homogeneous kernel, then times t, why.
This post is a call back to one of my old questions that got no response. That is, why multiply an extra t (but not anything else) when we get repeated roots from either the characteristic polynomial of the linear ODE or from getting repeated roots from the homogeneous kernel and the non-homogeneous kernel? The reasoning goes like,
Let's start with expressing the linear ODE system with linear algebra. Let $\mathbf{A}$ be an $n\times n$ matrix and let $x_0\in\mathbb{C}^n$ be the initial condition. The IVP $\dot{x} = \mathbf{A}x,\ x(0) = x_0$ has a unique solution given by $x(t)=e^{\mathbf{A}t}x_0$ . Let $\mathbf{A}=\mathbf{P}^{-1}\mathbf{J}\mathbf{P}$ where $\mathbf{J}$ is in Jordan normal form. We also denote the Jordan block w.r.t. the $i$ -th eigenvalue by $J_i$ . $\mathbf{J}=(J_i)=\bigg(\begin{bmatrix} \lambda_i & 1 & \cdots & & 0\\ \vdots & \lambda_i & 1 & & \\ & & \ddots & \ddots & \vdots\\ & & & \lambda_i & 1\\ 0 & & & \cdots & \lambda_i\\ \end{bmatrix}\bigg)$ Now we have, $x(t) = e^{\mathbf{A}t}x_0=\mathbf{P}^{-1}e^{\mathbf{J}t}\mathbf{P}x_0\implies y(t)=e^{\mathbf{J}t}y_0$ We further decompose $\mathbf{J} = \lambda_i\mathbf{I} + \mathbf{\Lambda}$ . Notice $\mathbf{\Lambda}$ is nilpotent. The upper off-diagonal shifts up k times in $\mathbf{\Lambda}^k.$ Suppose $\mathbf{\Lambda}$ is a nilpotent operator of order $
|linear-algebra|ordinary-differential-equations|
1
Jacobian of Transformation in the Complex Plane
Let $$f(z) = \sum_{n=0}^\infty c_nz^n$$ be analytic in the disc $\mathbb{D} = \{ z \in \mathbb{C} : |z| < 1 \}$ . The area of the image $f(\mathbb{D})$ is then $$A = \pi \sum_{n=1}^\infty n |c_n|^2$$ Looking at the solution - I lack some fundamental understanding. Why is the Jacobian of the transformation $|f'(z)|^2$ , giving us that $A = \int_{\mathbb{D}}|f'(z)|^2 dx dy$ ? Can someone help me see this by walking through the definition?
This is essentially equivalent to the beautiful answer by Chappers but avoids going to $(x,y)$ coordinates as much as possible. The metric on the Argand plane is $$ds^2=dx^2+dy^2=(dx+idy)(dx-idy)=dz d\bar{z}\tag{1}$$ Let $w \to z=f(w)$ , then holomorphicity of the map $f:\mathbb{C} \to \mathbb{C}$ implies existence of $f'(w)$ and in turn $dz=\frac{\partial z}{\partial w}dw=f'(w) dw$ and similarly for the antiholomorphic piece $d\bar{z}=\overline{f'(w)} d\bar{w}$ . So by $(1)$ $$ds^2=|f'(w)|^2 dw d \bar{w}$$ This proves that the Jacobian of a holomorphic map is indeed $|f'(w)|^2$ , implying that it is a conformal transformation.
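A numerical illustration of the same fact (my addition; the test map $f(w)=w^2$ is just a hypothetical example): the determinant of the real $2\times2$ Jacobian of $(x,y)\mapsto(\operatorname{Re}f,\operatorname{Im}f)$ , estimated by central differences, matches $|f'(w)|^2$ .

```python
# estimate the real Jacobian determinant of a map C -> C at w
def jacobian_det(f, w, h=1e-6):
    fx = (f(w + h) - f(w - h)) / (2 * h)            # d/dx of (u + iv)
    fy = (f(w + 1j * h) - f(w - 1j * h)) / (2 * h)  # d/dy of (u + iv)
    return fx.real * fy.imag - fy.real * fx.imag    # u_x v_y - u_y v_x

f = lambda w: w * w          # hypothetical test map
fprime = lambda w: 2 * w
for w in [0.3 + 0.7j, -1.2 + 0.4j, 2.0 - 1.0j]:
    assert abs(jacobian_det(f, w) - abs(fprime(w)) ** 2) < 1e-5
```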
|complex-analysis|jacobian|
0
localization over different rings
I guess this is a bit of a soft question. Suppose you have a hom of commutative rings with identity $R\to S$ , and a hom of $S$ modules $f: M\to N$ . Somehow it shouldn't matter whether you localize $f$ in the category of $S$ modules or in the category of $R$ modules (I realize I'm leaving vague the set you're using to localize). What is the correct, most general or most categorical statement of this?
Maybe this will do: My assumption is that the localization in $S$ -MOD is over a set, $U\subset S$ , which is the image of a multiplicative set, $W$ , in $R$ . I don't know how essential this assumption is. Then if we call $\phi: R\to S$ , and if $\phi_W^\ast$ is the restriction of scalars functor from $S[U^{-1}]$ -MOD to $R[W^{-1}]$ -MOD, $\phi^\ast$ is restriction of scalars from $S$ -MOD to $R$ -MOD, and $loc_W$ and $loc_U$ are localization functors from $R$ -MOD to $R[W^{-1}]$ -MOD and from $S$ -MOD to $S[U^{-1}]$ -MOD, respectively, then the claim is we have a natural isomorphism of functors: $$\phi_W^\ast \circ loc_U \cong loc_W \circ \phi^\ast$$ defined in an obvious way. Edit: the assumption can obviously be weakened to " $\phi(W)\subset U$ and every element of $U$ divides an element of $\phi(W)$ ." It seems the only obstruction is possible non-surjectivity of the obvious abelian group hom $M[W^{-1}] \to M[U^{-1}]$ .
|commutative-algebra|
0
Solving the Diophantine Equation $x^2 - y! = 2001$ and $x^2 - y! = 2016$
I had recently faced a problem: Solve the Diophantine Equation $x^2 - y! = 2001$. Solving it was quite easy. You show how $\forall y \ge 6$, $9|y!$ and since $3$ divides the RHS, it must divide the LHS and if $3|x^2 \implies 9|x^2$ and so the LHS is divisible by $9$ and the RHS is not. Contradiction. Hence, the only solution is $(45, 4)$. That made me wonder, how we can solve the Diophantine Equation $x^2 - y! = 2016$. We cannot apply the same logic here. $2016$ is a multiple of $9$ and it is clear that $3|x$ and $9 \nmid x$. How do I proceed from here?
You could also use $\pmod{27}$ . We know that if $y\geq9$ , then $27\mid y!$ , and $2016\equiv18\pmod{27}$ (as $45^{2}=2025$ is divisible by $27$ and $2025-2016=9$ ). $27$ is equal to $3^{3}$ . Let's say we have $x^{2}\equiv a\cdot b^{2c}\pmod{b^{2n+1}}$ . Now if $n\geq c$ , then this equation will be solvable iff $a$ is a quadratic residue $\pmod{b}$ . For example, $x^{2}\equiv7^{2}\cdot5\pmod{7^{2}}$ is solvable, but $x^{2}\equiv7^{2}\cdot5\pmod{7^{3}}$ is not. This is because after dividing it by $7^{2}$ , we get a number that is $5\pmod{7}$ , and $5$ is not a quadratic residue $\pmod{7}$ . In this case, $a=5, b=7, c=1, n=1$ . Let's go back to the question; since we know that $2016\equiv18\pmod{27}$ , then $x^{2}-y!=2016$ does not have positive integer solutions when $y\geq9$ , because $27\mid y!$ (if $y\geq9$ ), and $y!+2016\equiv18\pmod{27}$ when $y\geq9$ , but $18$ isn't a quadratic residue $\pmod{27}$ , as after dividing it by $9$ , we get a number that is $2\pmod{3}$ , which is a contradiction,
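Machine checks for the two ingredients (my addition, not in the original answer): $18$ is indeed a non-residue mod $27$ , and brute force over $y\le 8$ shows the only small solution is $(x,y)=(84,7)$ , since $84^2-7!=7056-5040=2016$ .

```python
from math import isqrt, factorial

# 18 is not a quadratic residue modulo 27
assert all(x * x % 27 != 18 for x in range(27))

# brute force the remaining range y <= 8 (where 27 does not divide y!)
solutions = [(isqrt(2016 + factorial(y)), y) for y in range(1, 9)
             if isqrt(2016 + factorial(y)) ** 2 == 2016 + factorial(y)]
assert solutions == [(84, 7)]
```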
|number-theory|elementary-number-theory|diophantine-equations|
0
How to construct an algebraic function that is not rational?
How to construct an algebraic, non-rational function $f(x)$ such that $$f(x)= \sum_{i=1}^{\infty}a_i x^i$$ with $a_1, a_2,\dots, a_i,\dots \in \mathbb{N}$ . A reference is appreciated. Update : an instance is $$f(x)=\frac{2}{1+\sqrt{1-4x}} =\sum C_n x^n$$ with $C_n$ being the $n$ th Catalan number. Thanks to Jyrki Lahtonen. And some are here. Hope for more examples.
Do you know Eisenstein's theorem ? It states that if you have any algebraic power series $$f(z) = \sum_n a_n z^n$$ with $a_n \in \mathbb Q$ , then there is a number $A \in \mathbb N$ such that $a_n A^n \in \mathbb Z$ for all $n$ . In other words, the power series $$g(z) = f(Az) = \sum (a_n A^n) z^n$$ has integral coefficients. For example if $$f(z) = \sqrt{1+z} = \sum_{k=0}^\infty \binom{\frac 1 2}{k} z^k,$$ I'm pretty sure that $$g(z) = \sqrt{1+4z}$$ has integral coefficients (though I'm a bit stupid proving it right now...). Exercise!
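The suggested exercise can at least be checked exactly with rational arithmetic (my addition, not part of the original answer): the Maclaurin coefficients $\binom{1/2}{k}4^k$ of $\sqrt{1+4z}$ come out integral.

```python
from fractions import Fraction

# exact computation of binom(1/2, k) * 4^k via the recurrence
# binom(1/2, k+1) = binom(1/2, k) * (1/2 - k) / (k + 1)
coeffs = []
c = Fraction(1)                  # binom(1/2, 0)
for k in range(30):
    coeffs.append(c * Fraction(4) ** k)
    c = c * (Fraction(1, 2) - k) / (k + 1)

assert all(q.denominator == 1 for q in coeffs)   # all integers
assert coeffs[:5] == [1, 2, -2, 4, -10]          # 1 + 2z - 2z^2 + 4z^3 - ...
```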
|functions|algebraic-geometry|reference-request|power-series|
1
What are the factors of the matrix M?
A matrix is given by $M = \begin{bmatrix} 1 & a & bc\\ 1 & b & ca\\ 1 & c & ab \end{bmatrix}$ What are the factors of this matrix $M$ (i.e., of its determinant)? My try: Multiply $R_1$ by $a$ , $R_2$ by $b$ and $R_3$ by $c$ . After that, take out the common factor $abc$ : $abc\begin{bmatrix} a & a^2 & 1\\ b & b^2 & 1\\ c & c^2 & 1 \end{bmatrix}$ So, I think $abc$ will be one of the factors of this matrix $M$ . Instead of this method, I tried one more method, applying row operations . $R_2\Rightarrow R_2-R_1$ and $R_3\Rightarrow R_3-R_1$ . This gives \begin{bmatrix} 1 & a & bc\\ 0 & b-a & c(a-b)\\ 0 & c-b & a(b-c) \end{bmatrix} Hence, the determinant is $\left ( a-b \right )\left ( b-c \right )\left ( c-a \right )$ , which says $(a-b)$ or $(b-c)$ or $(c-a)$ is a factor. Hence, the factors come out completely different by the two methods. Where am I going wrong?
I cannot believe that it took MSE almost seven years to find OP's mistakes in the row operations. The determinant of the matrix \begin{bmatrix} 1 & a & bc\\ 1 & b & ca\\ 1 & c & ab \end{bmatrix} is without doubt $(a-b)(c-a)(b-c)\,.$ With row operations, $$\det \begin{bmatrix} 1 & a & bc\\ 0 & b-a & c(a-b)\\ 0 & c-\color{red}a & \color{red}{b(a-c)} \end{bmatrix}=b(b-a)(a-c)-c(c-a)(a-b)=(a-b)(c-a)(b-c) $$ as it must.
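A quick randomized spot-check of this factorization (my addition, not part of the original answer):

```python
import random

# cofactor expansion of a 3x3 determinant along the first row
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

random.seed(0)
for _ in range(100):
    a, b, c = (random.randint(-50, 50) for _ in range(3))
    M = [[1, a, b * c], [1, b, c * a], [1, c, a * b]]
    assert det3(M) == (a - b) * (c - a) * (b - c)
```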
|linear-algebra|matrices|determinant|
0
Can I always construct eigenvector sets where the dot product between any two of them is >= 0?
I'm trying to find eigenvectors whose pairwise dot products are $\geq 0$ . So my question is: 1. For the Wolfram Alpha eigenvectors, when the resulting eigenvectors' dot product is $>0$ , does it happen because the algorithm just picks the $>0$ eigenvectors? What if it finds a dot product $<0$ or $=0$ case? Edit: I changed the subject, sorry for the confusion. Suppose I have a set of eigenvectors { $v_1$ , $v_2$ , $v_3$ ,...} which have some dot product between two elements $<0$ or $=0$ . Take (1,2,0),(5,0,1),(-1,1,1) as examples; these 3 eigenvectors have some negative dot product between them, but if they're the eigenvectors of $I$ , then I can always change the eigenvector set to another set which satisfies the dot product $>0$ condition. Then the same eigenvectors under different eigenvalue conditions, w.r.t. eigenvalues = 1,2,3, yield $\left[ \begin{array}{ccc}\frac{27}{13}&-\frac{7}{13}&-\frac{5}{13}\\-\frac{4}{13}&\frac{15}{13}&\frac{20}{13}\\-\frac{2}{13}&\frac{1}{13}&\frac{36}{13}\\\e
Let $V$ be any $n\times n$ matrix such that $$ V^TV=P:=\pmatrix{n&-1&\cdots&-1\\ -1&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&-1\\ -1&\cdots&-1&n}. $$ (Such a matrix exists because $P=(n+1)I_n-ee^T$ is positive definite.) Denote the $j$ -th column of $V$ by $v_j$ . By construction, each $v_j$ is an eigenvector of $A=I$ and $\langle v_i,v_j\rangle=-1$ whenever $i\ne j$ .
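A small constructive check (my addition; $n=4$ is an arbitrary illustrative choice): since $P=(n+1)I_n - ee^T$ is positive definite, a naive Cholesky factorization $P = LL^\intercal$ succeeds, and the rows of $L$ (equivalently, the columns of $V=L^\intercal$ ) are vectors whose pairwise dot products are all $-1$ .

```python
from math import sqrt

n = 4
# Gram matrix P = (n+1) I - e e^T: diagonal entries n, off-diagonal -1
P = [[(n + 1) * (i == j) - 1 for j in range(n)] for i in range(n)]

# naive Cholesky P = L L^T (the square roots exist because P is positive definite)
L = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1):
        s = P[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
        L[i][j] = sqrt(s) if i == j else s / L[j][j]

# the rows of L recover the Gram matrix: norms sqrt(n), pairwise dots -1
for i in range(n):
    for j in range(n):
        dot = sum(L[i][k] * L[j][k] for k in range(n))
        assert abs(dot - P[i][j]) < 1e-9
```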
|matrices|eigenvalues-eigenvectors|
0
Definition of mean of a random variable in joint probability distribution?
I am reading the "Probability & Statistics for Engineers & Scientists" 9th edition and encounter a proof of theorem 4.4 where it said: $$\mu_{X}=\sum_{x}xf(x,y),\ \ \mu_{Y}=\sum_{y}yf(x,y)$$ where $f(x,y)$ is the joint probability distribution of the random variables X and Y . But as far as I'm aware, the mean of a random variable in a joint probability distribution should be: $$\mu_{X} = \sum_{x}x g(x) = \sum_{x} \sum_{y} x f(x, y) $$ where $g(x)$ is the marginal distributions of X alone $g(x) = \sum_{y}f(x,y)$ . Am i understanding this correctly or it is a mistake from the book ? Please help me clarify this. Here is a link to the full proof in the book:
Certainly $\mu_X=\sum_xxf(x,y)$ cannot be right, because the RHS is a function of $y$ . As you say, it should be $\mu_X=\sum_x\sum_yxf(x,y)$ . That is also clearly what the author meant, since that is what you need to substitute in the previous equation to get the final equation. So this appears to be a typesetting error.
|probability|
1
Can I always construct eigenvector sets where the dot product between any two of them is >= 0?
I'm trying to find eigenvectors whose pairwise dot products are $\geq 0$ . So my question is: 1. For the Wolfram Alpha eigenvectors, when the resulting eigenvectors' dot product is $>0$ , does it happen because the algorithm just picks the $>0$ eigenvectors? What if it finds a dot product $<0$ or $=0$ case? Edit: I changed the subject, sorry for the confusion. Suppose I have a set of eigenvectors { $v_1$ , $v_2$ , $v_3$ ,...} which have some dot product between two elements $<0$ or $=0$ . Take (1,2,0),(5,0,1),(-1,1,1) as examples; these 3 eigenvectors have some negative dot product between them, but if they're the eigenvectors of $I$ , then I can always change the eigenvector set to another set which satisfies the dot product $>0$ condition. Then the same eigenvectors under different eigenvalue conditions, w.r.t. eigenvalues = 1,2,3, yield $\left[ \begin{array}{ccc}\frac{27}{13}&-\frac{7}{13}&-\frac{5}{13}\\-\frac{4}{13}&\frac{15}{13}&\frac{20}{13}\\-\frac{2}{13}&\frac{1}{13}&\frac{36}{13}\\\e
Too long for a comment. When you have three orthogonal eigenvectors $$ v_1,v_2,v_3 $$ to the same eigenvalue you can set $$ w_1:=v_1\,,\;w_2:=v_2-v_1-v_3\,,\;w_3=v_3-v_2-v_1\,. $$ The $w_1,w_2,w_3$ are eigenvectors to that eigenvalue. Calculate the dot products between them and realize that there will be simple condition on the length of the $v_i$ that makes those dot products all negative. Changing the length of $v_i$ obviously does not change the fact that they (hence also the $w_i$ ) are eigenvectors.
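With the standard orthonormal basis of $\mathbb R^3$ as the $v_i$ (already of unit length, so no rescaling is needed in this particular instance), the claim is immediate to check (my addition):

```python
# v_i: standard orthonormal basis, eigenvectors of A = I
v = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# w_1 = v_1, w_2 = v_2 - v_1 - v_3, w_3 = v_3 - v_2 - v_1
w = [v[0], sub(sub(v[1], v[0]), v[2]), sub(sub(v[2], v[1]), v[0])]

# all pairwise dot products are negative (here each equals -1)
for i in range(3):
    for j in range(3):
        if i != j:
            assert dot(w[i], w[j]) < 0
```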
|matrices|eigenvalues-eigenvectors|
0
Beginner question on differential equations: they look useful to search for a function for solution of an equation, but why are derivatives mandatory?
At 54 years old, differential equations are still a wall for me that I have difficulties getting through. One video for beginners helped me, explaining that in common equations we are searching for the values of a variable $x$ , while for solutions of differential equations we are searching for a function $f(x)$ . But the video was also telling and showing that: a solution $f(x)$ searched for could also be named $y$ , as $y = f(x)$ ; samples of resolution of a differential equation, where one of the members had a derivative of $f(x)$ , for example: $y′+2y=x^2−2x+3$ . My question might look strange; I try to ask it in the clearest manner I can: $y = f(x)$ , $y' = f'(x)$ , ... Must a differential equation carry the function itself and some derivatives (of any order) of itself? Said another manner: is $h(x) = \frac{f(x)}{g(x)}$ a differential function too? If I say: I'm searching for $h(x)$ as the solution of an equation like the one above? Said another manner: Why searching for functions as solutions does r
One video for beginner helped me, explaining me that in common equations, we are searching for the values of a variable $x$ , for solutions in differential equations, for the values of a function $f(x)$ That statement is true, but misleading. Here is a more accurate statement: An "ordinary" equation has variables which stand for numbers, and a solution to the equation is a number (or collection of numbers) which makes the equation true. A functional equation has variables which stand for functions, and a solution to the equation is a function (or collection of functions) which makes the equation true. A differential equation is a type of functional equation. Specifically, a differential equation is a functional equation that involves a derivative. To answer your specific questions: Must a differential equation carry the function itself and some derivatives (of any order) of itself? To be precise, a differential equation must contain a derivative somewhere, because that's what the phras
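For the concrete example from the question, $y'+2y=x^2-2x+3$ , here is a worked check (my addition, not part of the original answer) of the standard polynomial ansatz: matching coefficients in $y_p = ax^2+bx+c$ gives $a=\tfrac12$ , $b=-\tfrac32$ , $c=\tfrac94$ , and the residual vanishes identically.

```python
# particular solution of y' + 2y = x^2 - 2x + 3 via a quadratic ansatz
a, b, c = 0.5, -1.5, 2.25          # y_p = x^2/2 - 3x/2 + 9/4

def residual(x):
    yp = a * x * x + b * x + c
    dyp = 2 * a * x + b            # y_p'
    return dyp + 2 * yp - (x * x - 2 * x + 3)

assert all(abs(residual(x)) < 1e-12 for x in [-2.0, -0.5, 0.0, 1.0, 3.7])
```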
|ordinary-differential-equations|functions|derivatives|
1
Generalised integral $\int_{0}^{\infty} \int_{0}^{y} \sqrt{x^2 + y^2} e^{-x^2 - y^2} dx dy$
I am having trouble understanding how to compute this integral $$\int_{0}^{\infty} \int_{0}^{y} \sqrt{x^2 + y^2} e^{-x^2 - y^2} \, dx \, dy$$ My idea is to consider the whole $xy$ plane instead of half of the positive quadrant. Then the integral would look like this: $$\frac{1}{8} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \sqrt{x^2 + y^2} e^{-x^2 - y^2} \, dx \, dy $$ Of course we can change to polar coordinates and have $$\frac{\pi}{4} \int_{0}^{\infty} r^2 e^{-r^2} \, dr $$ Here I am slightly at a loss and unsure of what to do. Any hint or suggestion? Thank you! Note: Simply considering integration by parts and the Gaussian integral we arrive at the desired answer. // as noted by @Benjamin Wang
As regards the last part of your computation, integrating by parts we find, $$ \int_{0}^{\infty} r^2 e^{-r^2} \,dr= \int_{0}^{\infty} r D(-e^{-r^2}/2)\,dr=\bigg[-r e^{-r^2}/2\bigg]_0^{\infty}+ \frac{1}{2}\int\limits_0^{\infty} e^{-r^2}\,dr= 0+\frac{\sqrt{\pi}}{4}$$ where at the last step we applied the Gaussian Integral .
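A numerical confirmation of the value (my addition, not part of the original answer):

```python
from math import exp, pi, sqrt

# midpoint rule on [0, 10] for integral_0^inf r^2 e^{-r^2} dr = sqrt(pi)/4;
# the tail beyond r = 10 is of order e^{-100} and therefore negligible
steps, upper = 200000, 10.0
h = upper / steps
approx = sum(((k + 0.5) * h) ** 2 * exp(-((k + 0.5) * h) ** 2)
             for k in range(steps)) * h
assert abs(approx - sqrt(pi) / 4) < 1e-8
```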
|real-analysis|calculus|integration|multivariable-calculus|definite-integrals|
1
What does area density of mm$^{-2}$ mean
I'm not being smart right now, but what does an area density like this mean: $$10000 \text{ mm}^{-2}$$
Recall that a base to a negative exponent is equivalent to 1 over the base to that positive exponent $(x^{-2} = \frac{1}{x^2})$ . Acceleration due to gravity is sometimes listed as 9.8 m s$^{-2}$ , which is 9.8 m/s$^2$ or 9.8 m/s/s - 9.8 meters per second per second or "per second squared". Area density is usually mass over area, though, so a unit is probably needed for the 10000.
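For concreteness (my addition): since $1\text{ mm} = 10^{-3}\text{ m}$ , a count per area of $10000\text{ mm}^{-2}$ is $10^{10}$ per square metre.

```python
# convert a number density of 10000 per square millimetre to per square metre
per_mm2 = 10000
per_m2 = per_mm2 / (1e-3 ** 2)   # 1 mm^-2 = 1e6 m^-2
assert abs(per_m2 - 1e10) / 1e10 < 1e-12
```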
|unit-of-measure|
1
Using Outer Products and Matrix Multiplication to Compute Tour Weight in Traveling Salesman Problems
Set up Let $G$ be a complete, weighted, and directed graph with $N$ vertices as in the asymmetric Traveling Salesman Problem (TSP). Without loss of generality, let the vertex set $V$ of $G$ be such that $V = \{1, 2, 3, \ldots, N\}$ . Let $A$ be the adjacency matrix of $G$ . Then the weight of the edge from vertex $i$ to vertex $j$ is $A_{i,j}$ . Let $T$ be a tour of $G$ i.e. a permutation of the vertices of $G$ . For example, $T = \langle 3,1,2 \rangle$ for a graph with $N = 3$ vertices. The return to the first vertex is implicitly included at the end of the tour. Then the weight $w_T$ of $T$ is then \begin{equation} w_T = A_{T_N, T_1} + \sum_{i = 1}^{N-1} A_{T_i,T_{i+1}}. \end{equation} Matrix Multiplication Now, instead of "indexing into" $A$ , another way to get $A_{ij}$ is through multiplication of basis vectors $e_i$ where $e_i$ is zero in all its entries except at index $i$ where it is $1$ . For example, $A_{11} = e_1^\intercal A e_1 = \langle 1, 0, \ldots, 0\rangle^\intercal
I think there is a problem both in the calculation of your matrices and in the formula for your sum. You want $E_l$ to order rows according to the permutation; I think it accomplishes that. You want $E_r$ to order columns according to the permutation shifted by one. Your definition of $E_r$ gives the permutation matrix for the permutation shifted by one, but for permuting columns, you need to transpose it. If we follow your notation, we should have \begin{equation} E_{1, r}= \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \end{equation} This is the permutation matrix for the permutation $\langle 2,3,1 \rangle$ , which is $T_1$ shifted by one. Then you calculate $w_T = \operatorname{Tr}(E_l A E_r^\intercal)$ .
|linear-algebra|graph-theory|discrete-optimization|algebraic-graph-theory|
1
Determinant of enlarged matrix
Let $A$ be the $n\times n$ matrix with entries $A_{ij}$ and consider the enlarged $2n\times 2n$ matrix $$\tilde A=\begin{pmatrix}A_{11} & 0 & A_{12} & 0 & \dots & A_{1n} & 0 \\ 0 & A_{11} & 0 & A_{12} & \dots & 0 & A_{1n} \\ A_{21} & 0 & A_{22} & 0 & \dots & A_{2n} & 0 \\ 0 & A_{21} & 0 & A_{22} & \dots & 0 & A_{2n} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ A_{n1} & 0 & A_{n2} & 0 & \dots & A_{nn} & 0 \\ 0 & A_{n1} & 0 & A_{n2} & \dots & 0 & A_{nn}\end{pmatrix} $$ that is, the matrix where every row is duplicated and padded with zeros. Is there a simple formula for the determinant of $\tilde A$ given the determinant of $A$ ?
Note that $$\widetilde{A} = A \otimes I,$$ for $\otimes$ the Kronecker product, see https://en.wikipedia.org/wiki/Kronecker_product . The determinant of Kronecker products admits a closed form, which is also presented in the same wikipedia article.
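Concretely, since $\det(A\otimes B)=\det(A)^m\det(B)^n$ for $A$ of size $n\times n$ and $B$ of size $m\times m$ , the enlarged matrix satisfies $\det\tilde A = \det(A)^2$ . A quick exact check on a sample $3\times3$ integer matrix (my addition, not part of the original answer):

```python
from itertools import permutations

# Leibniz-formula determinant (exact over the integers, fine for small sizes)
def det(M):
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += (-1) ** inv * prod
    return total

A = [[2, -1, 3], [0, 4, 1], [5, 2, -2]]
I2 = [[1, 0], [0, 1]]
# A kron I_2: entry (p, q) is A[p//2][q//2] * I2[p%2][q%2]
kron = [[A[i // 2][j // 2] * I2[i % 2][j % 2] for j in range(6)]
        for i in range(6)]
assert det(kron) == det(A) ** 2
```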
|linear-algebra|matrices|determinant|
1
Does convergence in distribution imply this equality?
Consider a sequence of random variables $\{X_n\}_n$ . Given that $$X_n - \sqrt{n} c \to N(0,1)$$ in distribution ( $c>0$ is a constant number bounded away from zero), show that for any constant $t>0$ , we have $$\lim_{n\to \infty} P(X_n>t)=1$$ To show this, I use the following proof: $$P(X_n>t)=P(X_n-\sqrt{n} c> t-\sqrt{n}c)=1-P(X_n-\sqrt{n}c\leq t-\sqrt{n}c),$$ and $$\lim_{n\to\infty} P(X_n-\sqrt{n}c\leq t-\sqrt{n}c)=\lim_{n\to\infty}\Phi(t-\sqrt{n}c)=0,$$ where $\Phi$ is the CDF of $N(0,1)$ . So, we have $$\lim_{n\to\infty} P(X_n>t)=1$$ However , I am not sure whether the equation $$\lim_{n\to\infty} P(X_n-\sqrt{n}c\leq t-\sqrt{n}c)=\lim_{n\to\infty}\Phi(t-\sqrt{n}c)$$ is correct or not. Any suggestion is welcomed.
It is correct. If $X_n \to X$ in distribution and if the CDF of $X$ is continuous then $\sup_x|P(X_n\le x)-P(X\le x)| \to 0$ . This implies that $|P(X_n\le x_n)-P(X\le x_n)| \to 0$ for any sequence $(x_n)$ .
|probability-theory|weak-convergence|
1
How to determine a basis for the tangent space given a local trivialization.
I heard the following: For a smooth submanifold $M \subseteq \mathbb{R}^n$ , given a local trivialization one can easily find a basis for the tangent space. I want to know how. So first I should maybe formalize the question: Suppose $M \subseteq \mathbb{R}^n$ is a smooth $k$ -dimensional submanifold and for $p \in M$ , $\phi:U \rightarrow V$ a local trivialization of $M$ such that $x \in M \cap U$ . Then there exist functions $b_1,...,b_k$ such that $B=\{b_1(x),...,b_k(x)\}$ form a basis of $T_xM$ . My idea: If $\phi:U \rightarrow V$ is a local trivialization, then $\frac{\partial}{\partial x_1} \phi(x)_{|x=p},...,\frac{\partial}{\partial x_k}\phi(x)_{|x=p}$ should be a basis for the tangent space $T_pM$ . My problems at the moment: I don't know how to explicitly show my assumption. I.e. I need to show that for $v \in T_pM$ , $v$ can be written as a linear combination of $\frac{\partial}{\partial x_1} \phi(x)_{|x=p},...,\frac{\partial}{\partial x_k}\phi(x)_{|x=p}$ . And I also need to
You are making things appear more complicated than they are by combining two separate issues: the embedding in Euclidean space and the question of local trivialisation. It is preferable to think of $M$ as a $k$ -dimensional differentiable manifold. Then the existence of a local chart means that one has a map $\psi$ between a neighborhood in $M$ and a neighborhood in $\mathbb R^k$ . If $(x_1,\ldots,x_k)$ are the coordinates in $\mathbb R^k$ , then $(\frac{\partial}{\partial x_1}, \ldots,\frac{\partial}{\partial x_k})$ is a basis for the tangent space of $\mathbb R^k$ at a point, and their image under $d\psi$ (or its inverse) will give a basis for the tangent space to $M$ at the corresponding point.
|differential-geometry|submanifold|tangent-spaces|
0
Any $\ell^2$-closed subspace of $\ell^2 \cap \ell^1$ is finite-dimensional
Let $X$ be a closed subspace of $\ell^2$ such that $X$ is contained in $\ell^1$ . It is easy to show that the inclusion operator $J \colon X \hookrightarrow \ell^1$ is closed, hence, by the closed graph theorem $J$ is bounded. Is it true that $X$ is automatically finite dimensional? I would really appreciate any hints.
The norms $\|\cdot\|_1$ and $\|\cdot\|_2$ are equivalent on $X.$ Assume $X$ is infinite dimensional. Let $\{v_n\}_{n=1}^\infty$ be an orthonormal basis in $X.$ Any element $\varphi\in \ell^\infty$ determines a bounded linear functional on $X.$ Thus it corresponds to an element $u\in X$ such that $$\varphi(v_n) =\langle v_n,u\rangle$$ Hence $\varphi(v_n)\to 0$ by the Bessel inequality. This means the sequence $v_n$ tends weakly (in $\ell^1$ ) to $0.$ By the Schur theorem every weakly convergent sequence in $\ell^1$ is norm convergent. This gives a contradiction as $\|v_n\|_2=1,$ hence $\|v_n\|_1\not\to 0.$
|functional-analysis|closed-graph|
0
Sobolev Embedding in $L^1$
For $n\geq 1,$ is there a minimal $s$ (depending on $n$ ) such that $H^s(\mathbb{R}^n)$ is continuously embedded in $L^1(\mathbb{R}^n)?$ I searched several books on Sobolev embeddings but I could not find such a result or a counterexample. Thank you.
No. Sobolev embedding only increases the integrability index $p$ . A typical example (in dimension $n$ ) is $(1+|x|^2)^{-n/2}$ : it lies in $H^s(\mathbb{R}^n)$ for every $s$ , but it is not integrable.
|real-analysis|hilbert-spaces|sobolev-spaces|
1
How to construct an algebraic function that is not rational?
How to construct an algebraic, non-rational function $f(x)$ such that $$f(x)= \sum_{i=1}^{\infty}a_i x^i$$ with $a_1, a_2,\dots, a_i,\dots \in \mathbb{N}$ ? A reference is appreciated. Update : an instance is $$f(x)=\frac{2}{1+\sqrt{1-4x}} =\sum c_n x^n$$ with $c_n$ being the $n$ -th Catalan number. Thanks to Jyrki Lahtonen. And some more are here. Hoping for more examples.
I fully endorse Red_Trumpet's solution (particularly the reference to Eisenstein's theorem, which I was not aware of). Posting as an alternative the use of algebraic equations that recursively define the solution as a power series of the required type. For example, consider the equation $$y=1+x y^3,\qquad(*)$$ from which we can immediately deduce that the solution $y$ is algebraic over the rational function field $\Bbb{Q}(x)$ . Furthermore, we see that we get $y$ as an element of $\Bbb{Z}[[x]]$ using $x$ -adic iteration. Begin by defining $y_0(x)=1$ , and then use the recurrence relation $$ y_{n+1}(x)=1+x y_n(x)^3. $$ It follows immediately, by induction on $n$ , that for all $n$ we have $y_n(x)\in\Bbb{Z}[x]$ . Furthermore, we see that for all $n\ge1$ , the polynomials $y_{n+1}(x)$ and $y_n(x)$ differ from each other by a multiple of $x^{n+1}$ . Therefore the sequence $y_n(x)$ converges (with respect to the $x$ -adic topology) to a power series in $\Bbb{Z}[[x]]$ . Here $$y_0(x)=1,\quad y_1(x)=1+x,\quad y_2(x)=1+x+3x^2+3x^3+x^4,\ \ldots$$
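The $x$ -adic iteration can be carried out concretely; here is a small sketch of mine in plain Python (coefficient lists truncated mod $x^N$ , no libraries) that recovers the first coefficients of the solution of $y=1+xy^3$ :

```python
def mul(a, b, N):
    """Product of two power series given as coefficient lists, truncated mod x^N."""
    c = [0] * N
    for i, ai in enumerate(a[:N]):
        for j, bj in enumerate(b[:N - i]):
            c[i + j] += ai * bj
    return c

def solve(N):
    """Iterate y <- 1 + x*y^3 starting from y_0 = 1, keeping N coefficients."""
    y = [1] + [0] * (N - 1)
    for _ in range(N):
        cube = mul(mul(y, y, N), y, N)
        y = [1] + cube[:N - 1]  # this is 1 + x * y^3, truncated mod x^N
    return y

print(solve(5))  # [1, 1, 3, 12, 55]
```

The coefficients $1,1,3,12,55,\ldots$ are the Fuss-Catalan numbers $\binom{3n}{n}/(2n+1)$ , the ternary analogue of the Catalan example in the question.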
|functions|algebraic-geometry|reference-request|power-series|
0
Fourier transform and power series
My major is Physics so I am very naive about the Fourier transform, and its main use for me is to simplify analysis of physical problems with translational symmetry, turning some PDEs into ODEs. I am computing a quantity (a holographic correlation function) $f(\vec{x})$ and I am able to obtain its Fourier transform $\tilde{f}(\vec{p}) = \frac{1}{\sqrt{2\pi}}\int d^4\vec{x} f(\vec{x}) e^{-i\vec{p}\cdot\vec{x}}$ in closed form as a formal series, with each term having an explicit but complicated expression, so no hope to sum up the series. What I want is to extract physical quantities from the first few powers in $\vec{x}$ of $f(\vec{x})$ , for example it can look like \begin{align} \frac{1}{(x_1^2+x_2^2)^{2\Delta}}(3x_1^2-x_2^2+o(\vec{x}^2)) \end{align} My question is how do I obtain the data of the power series in $\vec{x}$ of $f(\vec{x})$ from the Fourier transform $\tilde{f}(\vec{p})$ . There is no way to get a simple closed form for $f(\vec{p})$ , but I may be able to do a $\frac{1}{\v
The power series of $f$ in terms of its Fourier transform $\hat f$ $$ f(x) = \int d^4 p \, e^{ip\cdot x} \hat{f}(p) $$ (note that I changed the normalization of the Fourier transform) can be simply obtained, formally, by expanding the exponential $$ f(x) \, {"="} \sum_{k=0} \frac{(i x)^k}{k!} \mu_k $$ where the (wannabe) moments are $$\mu_k =\int d^4 p \, p^k \hat{f}(p) \ . $$ Clearly for the moments to exist $\hat f$ should decay sufficiently fast at infinity. This however does not guarantee that the power series of $f$ converges. A necessary condition is that $\hat f$ decays exponentially (being compactly supported, for example, would do as well, and implies even better regularity properties of $f$ ).
|fourier-transform|conformal-field-theory|
0
Permutations with Repeated Letters
This question is taken from A First Course in Probability (8e) by Ross. How many different arrangements can be formed from the letters PEPPER? I understand that there are $6!$ permutations of the letters when the repeated letters are distinguishable from each other. And that for each of these permutations, there are $(3!)(2!)$ permutations within the Ps and Es. This means that the $6!$ total permutations account for the $(3!)(2!)$ internal permutations. Then, the explanation in the text states that there are $\frac{6!}{(3!)(2!)} = 60$ possible letter arrangements of the letters PEPPER. I don't understand this last part. I thought that since the internal permutations were accounted for, the total possible letter arrangements would be $1 - \frac{(3!)(2!)}{6!}$ . Can someone please explain the logic behind the last part? Thank you.
Main question as posted by the OP: how many different arrangements can be formed from the letters PEPPER? Main doubts: there are $6!$ permutations of the letters when the repeated letters are distinguishable from each other; for each of these permutations, there are $(3!)(2!)$ permutations within the Ps and Es; the text then states that there are $\frac{6!}{(3!)(2!)}=60$ arrangements, while the OP expected $1-\frac{(3!)(2!)}{6!}$ . Answering the points to clarify the doubts: for $6$ unique letters the answer would be $6!$ , or equivalently $\binom{6}{1}\binom{5}{1}\binom{4}{1}\binom{3}{1}\binom{2}{1}\binom{1}{1}$ counted with order, since an arrangement is different from a mere selection. With repeated letters, each distinct arrangement of PEPPER appears $(3!)(2!)$ times among the $6!$ permutations: $3!$ for the Ps and $2!$ for the Es. Dividing out this overcounting gives $\frac{6!}{(3!)(2!)}=60$ . Note that $1-\frac{(3!)(2!)}{6!}$ cannot be a count of arrangements: it is a number strictly between $0$ and $1$ .
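The count can also be verified by brute force (a quick sketch using only the standard library):

```python
from itertools import permutations
from math import factorial

# permutations() treats equal letters as distinct, so the set() dedupes arrangements
distinct = len(set(permutations("PEPPER")))
formula = factorial(6) // (factorial(3) * factorial(2))
print(distinct, formula)  # both are 60
```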
|combinatorics|permutations|
0
Why is the Brachistochrone cycloid flipped
While solving the Brachistochrone problem, I set up a variable $T$ , $$T=\int\dfrac{\sqrt{1+(y')^2}}{\sqrt{2gy}}\,dx$$ Then employing the Euler-Lagrange equations (Beltrami's identity): $$\dfrac{dy}{dx}=\sqrt{\dfrac{1}{c^2y}-1}$$ Whose parametric solution is $$x=\dfrac{1}{2c^2}[\theta-\sin(\theta)]+k$$ $$y=\dfrac{1}{2c^2}[1-\cos(\theta)]$$ Where $k$ is the integration constant. When the boundary condition $(0,0)$ is applied, $k$ vanishes. When plotted, the curve is a cycloid. However, if I were to drop a ball, the optimum curve would be flipped, looking more like: $$x=\dfrac{1}{2c^2}[\theta-\sin(\theta)]$$ $$y=\dfrac{1}{2c^2}[\cos(\theta)-1]$$ Why is it that many papers have the cycloid as the solution although it in reality is flipped? What would be a suitable explanation? Do I need to specify any other boundary conditions?
There is no need for extra boundary conditions. It's just that you have implicitly assumed that the $y$ -axis is pointing downward; hence, the value of the $y$ coordinate is positive downward, and negative upward. This is evident from the equation for $T$ , since $2gy$ has to be positive. If you had written the equation as $T = \int \frac{\sqrt{1+y'^2}}{\sqrt{-2gy}}$ ( $y$ -axis pointing up), then the result would be what you are looking for.
|optimization|calculus-of-variations|euler-lagrange-equation|variational-analysis|
1
Residue of Pole $s=1$ of $\zeta$ function
I have trouble understanding why the residue of the Riemann $\zeta$ function at $s=1$ is 1. All I can find is that one can see this because $\lim_{s\to 1} (s-1)\zeta(s)=1$. But I do not understand how to get the 1 by using the series representation $\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}$ .
The limit becomes much easier to grasp if you use the alternating series representation, $$\zeta(s) = \frac{1}{1 - 2^{1-s}} \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^s}.$$ The alternating sum equals $\log(2)$ for $s=1$ , so what you have to evaluate is the limit of $\frac{s - 1}{1 - 2^{1-s}}$ . By using L'Hôpital's rule, this equals $1 / \log(2)$ , so that the limit is $\log(2) / \log(2) = 1$ .
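The limit can also be checked numerically; here is a sketch assuming the `mpmath` library is available (its `zeta` handles arguments near the pole accurately):

```python
from mpmath import mp, zeta

mp.dps = 30  # high working precision so the pole subtraction stays accurate
for eps in ("1e-3", "1e-6", "1e-9"):
    e = mp.mpf(eps)
    # (s - 1) * zeta(s) -> 1 as s -> 1; the leading correction is gamma * (s - 1)
    print(eps, e * zeta(1 + e))
```

The printed values approach $1$ linearly in $s-1$ , with slope given by the Euler-Mascheroni constant.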
|complex-analysis|riemann-zeta|
0
Ways To Order Matches - Six Nations
There are $n$ teams in a sports tournament, and each team has to play every other team, and all teams have to play every weekend over $n-1$ weekends. For example, the rugby Six Nations with $n=6$ pretty much follows this pattern. How many ways are there to uniquely order the matches? Note: we are not counting home vs away matches as different. But a bonus question would be to calculate the number of unique orderings where home and away are treated differently. My Attempt I believe the answer is: $\prod_{k=1}^{n-1} k! = \prod_{k=1}^{n-1} k^{n-k}$ Any team has $(n-1)!$ ways to sequence who they play each weekend. The next team has $(n-2)!$ ways for the remaining matches excluding the first team etc. If you care about order within a weekend multiply the above by $(\frac{n}2)!$ . This is always defined as $n$ is guaranteed to be even. Problem: Overcounting occurs with the above assumption of a choice algorithm. We can see why by counterexample (thanks to commenter). A chooses: Ab, ac, ad, ae, af B
Here’s Java code that counts these schedules by enumeration. The result is OEIS sequence A036981 , which gives the counts for up to $14$ teams. The entry’s title is “Number of $(2n+1)\times(2n+1)$ symmetric matrices each of whose rows is a permutation of $1\ldots(2n+1)$ ”, where $2n+1$ corresponds to your $n-1$ . As discussed in the comments, my original suggestion for converting such a matrix into a tournament schedule turned out to be wrong, whereas the OP’s suggestion works: The matrix entries specify the weekend on which the row and column teams play each other, except the diagonal (since a team doesn’t play itself), which specifies when the teams play the $n$ -th team not represented by a row or column. For this to work, the diagonal must also contain a permutation of the teams. This is indeed the case, since every team appears an odd number of times in the matrix (once in each row, whose number is odd) and an even number of times off the diagonal (by symmetry) and hence at least
|factorial|
1
Determinant of enlarged matrix
Let $A$ be the $n\times n$ matrix with entries $A_{ij}$ and consider the enlarged $2n\times 2n$ matrix $$\tilde A=\begin{pmatrix}A_{11} & 0 & A_{12} & 0 & \dots & A_{1n} & 0 \\ 0 & A_{11} & 0 & A_{12} & \dots & 0 & A_{1n} \\ A_{21} & 0 & A_{22} & 0 & \dots & A_{2n} & 0 \\ 0 & A_{21} & 0 & A_{22} & \dots & 0 & A_{2n} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ A_{n1} & 0 & A_{n2} & 0 & \dots & A_{nn} & 0 \\ 0 & A_{n1} & 0 & A_{n2} & \dots & 0 & A_{nn}\end{pmatrix} $$ that is, the matrix where every row is duplicated and padded with zeros. Is there a simple formula for the determinant of $\tilde A$ given the determinant of $A$ ?
Consider the small case $n=2$ ; you can generalize the process for any $n$ . Say we have $A=\begin{pmatrix} a & b \\ c & d\end{pmatrix}$ . Then $\bar{A}=\begin{pmatrix} a & 0 & b & 0 \\ 0 & a & 0 & b \\ c & 0 & d & 0 \\ 0 & c & 0 & d \end{pmatrix}$ . Now interchange $C_{2}$ with $C_{3}$ to get $\begin{pmatrix} a & b & 0 & 0 \\ 0 & 0 & a & b \\ c & d & 0 & 0 \\ 0 & 0 & c & d \end{pmatrix}$ . Now interchange $R_{2}$ with $R_{3}$ to get $\begin{pmatrix} a & b & 0 & 0 \\ c & d & 0 & 0 \\ 0 & 0 & a & b \\ 0 & 0 & c & d \end{pmatrix}=\begin{pmatrix} A & 0 \\ 0 & A\end{pmatrix}$ . For any $n\times n$ matrix $A$ , we can perform such row and column operations to get $\bar{A}=\begin{pmatrix} A & 0 \\ 0 & A\end{pmatrix}$ . Also note that the total number of row and column operations required to simplify $\bar{A}$ is even for any $n$ . So, we have $\det(\bar{A})=(\det(A))^2$ .
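The enlarged matrix is exactly the Kronecker product $A\otimes I_2$ , so the formula can be spot-checked numerically (a sketch of mine assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
# kron(A, I_2) reproduces the zero-padded row/column duplication from the question
A_tilde = np.kron(A, np.eye(2))
print(np.linalg.det(A_tilde), np.linalg.det(A) ** 2)  # the two agree
```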
|linear-algebra|matrices|determinant|
0
Expectation of X^2 using complement of CDF
For a nonnegative random variable $$E[X]=\int_0^\infty P(X >x)dx$$ This video on a question from IIT JAM 2023 extends it to $$E[X^2]=\int_0^\infty 2x P(X >x)dx$$ How was this result arrived at? Where can I get references for such results? Books or structured references (rather than single links) preferred.
$$E[X^2]=\int_0^\infty P(X^2 >x)dx=\int_0^\infty P(X>\sqrt{x})dx$$ Substitute $t=\sqrt{x}\implies t^2 =x\implies 2t\text{ } dt=dx$ $$=\int_0^\infty P(X>t) \text{ } 2t \text{ }dt$$ And since the variable of integration is a dummy variable, change back to $x$ : $$=\int_0^\infty 2x P(X >x)dx$$
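As a sanity check (my own sketch, standard library only), both sides can be evaluated for $X\sim\mathrm{Exp}(1)$ , where $E[X^2]=2$ and $P(X>x)=e^{-x}$ :

```python
import math
import random

random.seed(0)
samples = [random.expovariate(1.0) for _ in range(200_000)]
mc = sum(x * x for x in samples) / len(samples)      # Monte Carlo estimate of E[X^2]

# Riemann sum of the right-hand side: integral of 2x * P(X > x) = 2x * exp(-x)
n, upper = 500_000, 50.0
h = upper / n
integral = sum(2 * (i * h) * math.exp(-i * h) for i in range(n)) * h

print(mc, integral)  # both close to 2
```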
|calculus|integration|statistics|improper-integrals|expected-value|
0
Proving $\frac{3}{4}+ab+bc+ac\leq a+b+c$
Let $a,b, c\in (0,1)$ such that, $abc=(1-a)(1-b)(1-c)$ . Show that $\frac{3}{4}+ab+bc+ac\leq a+b+c$ . My idea: I denote $p=a+b+c, q=ab+bc+ac, r=abc$ . Then $1+q=2r+p$ . I have to prove that $r\leq \frac{1}{8}$ . Now I am stuck.
Using the relation $abc=(1-a)(1-b)(1-c)$ we get $a +b + c = 1 + ab + bc + ca -2abc = \frac{1}{4}- 2abc + \frac{3}{4} + ab +bc +ca$ . Now using AM-GM: $a+b+c \ge 3\sqrt[3]{abc}$ , and $(1-a)+ (1-b) + (1-c) \ge 3\sqrt[3]{(1-a)(1-b)(1-c)} = 3\sqrt[3]{abc}$ by the relation, i.e. $3- (a+b+c) \ge 3\sqrt[3]{abc}$ . Adding the two inequalities we get $\sqrt[3]{abc} \le \frac{1}{2}$ . Cubing both sides, multiplying by $-2$ , and adding $\frac{1}{4}$ , we get $\frac{1}{4} -2abc \ge 0$ . Using this result in the first identity, we get $a +b + c \ge \frac{3}{4} +ab + bc +ca$ .
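The constraint can be used to eliminate $c$ , which makes a numerical spot check easy (my own sketch, standard library only): solving $abc=(1-a)(1-b)(1-c)$ for $c$ gives $c=\frac{(1-a)(1-b)}{ab+(1-a)(1-b)}$ .

```python
import random

random.seed(1)
for _ in range(10_000):
    a, b = random.random(), random.random()
    # solve abc = (1-a)(1-b)(1-c) for c; both sides are then equal by construction
    c = (1 - a) * (1 - b) / (a * b + (1 - a) * (1 - b))
    assert 0 < c < 1
    # the claimed inequality, up to float tolerance
    assert 3 / 4 + a * b + b * c + c * a <= a + b + c + 1e-12
print("inequality held on all samples")
```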
|inequality|
0
Existence of $\{0,1\}^X$ in $\mathsf{ZFC}$ without the axiom of power set
Is it possible to prove the existence of the set of all function from a given set $X$ to $\{0,1\}$ without using the axiom of power set in $\mathsf{ZFC}$ ?
No. You can't do that. If you consider the set of all hereditarily countable sets (recall that $x$ is hereditarily countable if its transitive closure, $\operatorname{tcl}(x)$ , is countable), denoted often by $H(\omega_1)$ or $H_{\omega_1}$ , then $(H(\omega_1),\in)\models\sf ZFC^-$ , that is $\sf ZFC$ without the Axiom of Power Set (and in fact with the stronger Axiom of Collection), but clearly $2^\omega$ is not hereditarily countable as it is not countable itself. Moreover, given any $x\in H(\omega_1)$ , we can diagonalise over $x\cap 2^\omega$ to construct some $f\colon\omega\to 2$ such that $f\notin x$ . Since any such $f$ is in fact hereditarily countable, $f\in H(\omega_1)$ . So indeed, $H(\omega_1)$ satisfies that " $2^\omega$ does not exist".
|set-theory|
1
Value of a Sum linked with Beta Dirichlet Function and Zeta Function
Recently, I tried to calculate this double sum: $$ F(s) = \sum_{(a,b) \in \mathbb{Z}^2 \backslash (0,0) } \frac{1}{(a^2 + b^2)^s}$$ for $ s \in \mathbb{C}, \operatorname{Re}(s) > 1$ . I think I found the value of this function with the following reasoning. Indeed, $F(s)$ is: $$ F(s) = \sum_{n = 1}^{\infty} \frac{r_2(n)}{n^s} $$ where $r_2$ is the "two-square function": the number of ways to write $n$ as a sum of two squares. The Jacobi two-square theorem gives us the well-known formula: $r_2(n) = 4(d_1(n) - d_3(n))$ . Another way to write this equality is: $$r_2(n) = 4 \sum_{d | n} \sin \left( \frac{1}{2} \pi d \right)$$ Therefore the Möbius inversion formula gives us: $$ 4 \sin \left( \frac{1}{2} \pi n \right) = \left( r_2 \star \mu \right) (n) $$ Using the compatibility properties of Dirichlet series with the convolution product, we have: $$ 4 \sum_{n = 1}^{\infty} \frac{\sin \left( \frac{1}{2} \pi n \right)}{n^s} = F(s) \left( \sum_{n = 1}^{\infty} \frac{\mu(n)}{n^s} \right)$$ And we can
Recall the definition of the Dedekind $\zeta$ -function , and let us specialize to the case: $$ K=\Bbb Q(i)\ ,\qquad\text{ where }i=\sqrt{-1} $$ is an abstract root of the polynomial $X^2+1$ . Then $K$ is the field of Gaussian numbers, and the ring of integers $\mathcal O_K=\Bbb Z[i]$ is the ring of all $a+bi\in K$ with $a,b\in\Bbb Z$ . This ring is a unique factorization domain, each ideal $I$ is principal, i.e. $I=(a+ib)$ for some suitable $a+ib\in \mathcal O_K$ , and two ideals $(a+ib)$ and $(a'+ib')$ are equal iff the $\mathcal O_K$ -numbers $a+ib$ and $a'+ib'$ are associated, i.e. are obtained from each other by multiplication with a unit. There are exactly four units, $\pm 1$ , $\pm i$ . So the Dedekind $\zeta$ -function $\zeta_K$ is given by: $$ \zeta_K(s)= \sum_{(0)\ne I\le \mathcal O_K}N(I)^{-s} =\sum_{\substack{a+ib\ne 0\\\text{modulo}\\\text{association}}}N(a+ib)^{-s} =\frac 14\sum_{a+ib\ne 0}(a^2+b^2)^{-s}\ . $$ Here, $N=N_{K:\Bbb Q}$ is the norm of $K$
|complex-analysis|analysis|number-theory|analytic-number-theory|
0
CW complex - closure finiteness and weak topology
Well, this is embarrassing. To be honest, during my PhD, I haven't really bothered too much regarding the topology of CW complexes. Back then, I understood the first few pages of Hatcher's book (the construction is easy to follow, also the examples given there), and didn't really care much about it. When I revised my stuff, I read a bit about the topological part of it (for instance Hatcher's Appendix/Bredon's book regarding the topological aspects), but didn't really understand it (it wasn't that important back then) so I didn't really bother. Now it's stabbing me in the back. So I decided to crack this open and get it over with. Hence, I've been reading Lee's Introduction to Topological Manifolds book, Chapter 5. It was going well until I hit Proposition 5.4. The confusing part is this : (1) In the proof of the (C) condition, he writes "Because $\overline{e}$ is compact, it is covered by finitely many such neighborhoods". I cannot see how this implies the closure finiteness condition. That
For any $A\subseteq X$ , we have $$A=A\cap X=A\cap\bigcup_{e\in\mathcal{E}}e=\bigcup_{e\in\mathcal{E}}A\cap e=\bigcup_{e\in\mathcal{E}\colon\ A\cap e\neq\emptyset}A\cap e\subseteq\bigcup_{e\in\mathcal{E}\colon\ A\cap e\neq\emptyset}e.$$ In your question (1), we take $A=U_x$ and the union is finite by the choice of $U_x$ , so this gives the fact you were needing. In question (2), the same reasoning but with the closures establishes $A=\bigcup_{e\in\mathcal{E}\colon\ A\cap\overline{e}\neq\emptyset}A\cap\overline{e}$ . However, $W\setminus A=W\setminus(W\cap A)$ and then $$W\cap A=\bigcup_{e\in\mathcal{E}\colon\ A\cap\overline{e}\neq\emptyset}A\cap W\cap\overline{e}=W\cap\bigcup_{i=1}^nA\cap\overline{e_i}$$ by the choice of $W$ , so $W\setminus A=W\setminus\left(\bigcup_{i=1}^nA\cap\overline{e_i}\right)$ .
|algebraic-topology|quotient-spaces|cw-complexes|
1
Is identifying the range of function necessary?
I often see functions written as $f:\Bbb{R}\rightarrow\Bbb{R}$ or similarly $f:\Bbb{C}\rightarrow\Bbb{C}$ . My concern is that there are functions for which $f:\Bbb{R}\rightarrow\Bbb{R}$ doesn't fit. And the more specific the function gets, the more likely it doesn't map to just a basic number system (such as the reals, or the complex numbers). So why are we stating: $$f:\Bbb{Domain}\rightarrow\Bbb{Codomain}$$ and dealing with it from there? Wouldn't it be better if we just stated the domain (like $f:\Bbb{R}\rightarrow$ ) and discussed how the range depends on the condition we are given? I don't see what the big deal of setting the codomain is. I feel like I might be missing something basic from this question. Thank you for your help in advance!
It sounds like you want to know why it is “helpful” or “intuitive” for a function to have a codomain. Just to be clear (I don’t think this is what you’re asking about), it’s not possible to make a function with no idea of what the codomain is. How could you specify a rule for computing the function without giving the type of thing that rule produces? Even worse, how could you check to make sure that the function maps each input to a unique output in the codomain? On the other hand, some function rules are so closely related that we use the same notation, even when the domain (and codomain) are different: $x \to 2x$ , or $x \to e^{x}$ , for example. They both work on integers, reals, even square matrices. Why isn’t there some way to define these rules less strictly, so we can study them without specifying exactly what the domain/codomain are? First of all, there is a way: “ $2x$ ” is a polynomial, “ $e^{x}$ ” can be written as a power series, and both of these are mathematical obj
|functions|
1
$a^2-b^2=37$, evaluate $ a^2+b^2$
Given $a^2-b^2=37$ where $a$ and $b$ are integers, can we determine the possible values of $a^2+b^2$ ? Are there many, or just a few? I found that $a^2+b^2$ can only be $685$ . But how can this be proved? I just guessed, but can we somehow evaluate it?
I am just building upon J.W. Tanner's comment and posting it as an answer. Your equation can be rewritten as $(a+b)(a-b)=37$ and since 37 is prime we have four cases. Case $1$ : $a+b=37$ and $a-b=1$ $\Rightarrow a = 19,\ b=18$ Case $2$ : $a+b=1$ and $a-b=37$ $\Rightarrow a = 19,\ b=-18$ Case $3$ : $a+b=-37$ and $a-b=-1$ $\Rightarrow a = -19,\ b=-18$ Case $4$ : $a+b=-1$ and $a-b=-37$ $\Rightarrow a = -19,\ b=18$ In all cases, $a^2 + b^2 = 361 + 324 = 685$
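A brute-force check over a range that safely contains all candidates (a sketch in plain Python) confirms $685$ is the only value:

```python
# the factor pairs force |a| = 19, |b| = 18, so a range of +/- 40 is more than enough
values = {a * a + b * b
          for a in range(-40, 41)
          for b in range(-40, 41)
          if a * a - b * b == 37}
print(values)  # {685}
```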
|algebra-precalculus|
1
$a^2-b^2=37$, evaluate $ a^2+b^2$
Given $a^2-b^2=37$ where $a$ and $b$ are integers, can we determine the possible values of $a^2+b^2$ ? Are there many, or just a few? I found that $a^2+b^2$ can only be $685$ . But how can this be proved? I just guessed, but can we somehow evaluate it?
The product of $a+b$ and $a-b$ is $37$ , and since $37$ is prime, we must have one factor $1$ and the other $37,$ or one factor $-1$ and the other $-37$ . Now solve for $a$ and $b$ in each of these cases and see what $a^2+b^2$ is.
|algebra-precalculus|
0
The supremum of solution of an iteration sequence
I am attempting to solve a problem involving a recursive sequence defined as follows: given $a_0 = 0$ and $a_n = a_{n-1}^2 + c$ , where $c$ is a complex number. The objective is to find the supremum of the absolute value of $c$ such that there exists an integer $T$ satisfying $\forall n \in \mathbb{N}, a_{n+T} = a_n$ . In order to determine this supremum, I sought to find the value of $c$ for which $a_T = 0$ . However, upon attempting to solve this problem using MATLAB, I found something interesting. The maximum absolute value obtained was tending to 2, with a sequence of values leading to it: $0 \rightarrow 1 \rightarrow 1.7549 \rightarrow 1.9408 \rightarrow 1.9854 \rightarrow 1.9964 \rightarrow 1.9991$ . I conjectured that the supremum might be 2, but I have not been able to conclusively solve this problem. I am seeking guidance on how to proceed further. Thank you for your assistance.
The supremum is $2$ and attained at $c=-2$ . For the proof, see any text on the Mandelbrot set (beginning with the Wikipedia article).
|complex-analysis|recursion|
0
condition for differentiability of G at $t_0$
This question is Ex 2.13 from Le Gall's Measure Theory, Probability and Stochastic Processes book. Let $\varphi : [0,1] \to \mathbb{R}$ be integrable with respect to the Lebesgue measure. For every $t \in \mathbb{R}$ , set $$G(t) = \int_{0}^{1} \lvert \varphi(x) - t \rvert dx$$ Prove that $G$ is continuous on $\mathbb{R}$ . For a fixed $t_0 \in \mathbb{R}$ give a necessary and sufficient condition for $G$ to be differentiable at $t_0$ . The continuity part is easy; for the differentiability part I looked at $$\frac{G(t) - G(t_0)}{t - t_0} = \int_{0}^{1} \frac{\lvert \varphi(x) - t \rvert - \lvert \varphi(x) - t_0 \rvert}{t - t_0} dx $$ I first tried to focus on $t > t_0$ and I split into $\{\varphi > t\}, \{t \geq \varphi > t_0\}, \{t_0 \geq \varphi\}$ . In this case two of the integrals simplify to $\lambda(\{\varphi > t\} \cap [0,1])$ and $\lambda(\{t_0 \geq \varphi\} \cap [0,1])$ , both of which exist in the limit $t \to t_0^{+}$ . This leaves the middle term but it looked like $$\int_{[0,1] \cap \{\
Note that whenever $x \in [0,1]$ is such that $\phi(x) \in [t_0,t)$ we have $$t+t_0-2\phi(x) \in (t+t_0-2t,t+t_0-2t_0] = (t_0-t,t-t_0]$$ and so $\left|\frac{t+t_0-2\phi(x)}{t-t_0}\right| \le 1$ . Therefore $$\left|\int_{[0,1]\cap \{\phi \in [t_0,t)\}} \frac{t+t_0-2\phi(x)}{t-t_0}\mathrm{d}x\right| \le \int_{[0,1]\cap \{\phi \in [t_0,t)\}} 1\mathrm{d}x = \lambda([0,1]\cap\{\phi \in [t_0,t)\}) $$ with $\lambda$ denoting the Lebesgue measure. The RHS converges to $\lambda([0,1]\cap \{\phi = t_0\})$ as $t \to t_0^+$ , so if $\phi \ne t_0$ almost-everywhere on $[0,1]$ we have existence of the limit. Computing the limit $t\to t_0^-$ in the same way will show they agree and so we have existence of the limit as $t\to t_0$ . If instead $\lambda([0,1]\cap \{\phi = t_0\}) \ne 0$ then note $$\begin{align*} \frac{G(t)-G(t_0)}{t-t_0} &= \int_{[0,1]\cap \{\phi = t_0\}} \frac{|\phi(x)-t|-|\phi(x)-t_0|}{t-t_0} \mathrm{d}x+\int_{[0,1]\cap \{\phi \ne t_0\}} \frac{|\phi(x)-t|-|\phi(x)-t_0|}{t-t_0} \mathrm
|calculus|measure-theory|
0
$a^2-b^2=37$, evaluate $ a^2+b^2$
Given $a^2-b^2=37$ where $a$ and $b$ are integers, can we determine the possible values of $a^2+b^2$ ? Are there many, or just a few? I found that $a^2+b^2$ can only be $685$ . But how can this be proved? I just guessed, but can we somehow evaluate it?
Casework can be avoided as follows: It has already been noted that 37 is a prime, and $(a+b)(a-b) = 37$ for integers $a,b$ implies that $\{ |a-b|, |a+b|\} = \{1 , 37\}$ . Then $$a^2+b^2 = \frac{|a-b|^2+|a+b|^2}{2} = \frac{1^2+37^2}{2} = \boxed{685}$$
|algebra-precalculus|
0
Why is the relative interior of a set defined in terms of its affine (vs convex) hull?
I'm taking an Intro to Optimization grad course and the notion of the relative interior of a set was introduced as The relative interior of a convex set $C$ , denoted $\text{ri } C$ , is the interior of $C$ as a topological subspace of $\text{aff } C$ : $z \in \text{ri } C$ if there's an $r > 0$ such that $(z+rB) \cap \text{aff } C \subseteq C$ . Where $B = \{x \in \mathbb R^n : |x| < 1\}$ is the open unit ball. I'm wondering why it is the affine , and not the convex , hull that is used in the definition. Maybe I'm missing something, but it doesn't seem economical to me in the sense that the convex hull too includes the set, has an interior that overlaps with the "intended" relative interior of the original set (as in it recovers the interior of a set that was lost in higher dimensions), but is also "smaller" than the affine hull. If it's not a matter of convention, my hunch is that it could have something to do with an affine set being parallel to a (particular) subspace of $\mathbb R^n$ , so that properti
Given a metric space $V$ and any vector subspace $U$ , you can define the "subspace topology" on $U$ and it will also be a metric space with the same metric. Since $\textrm{aff}\, C$ is an affine subspace (a translate of a vector subspace), you can define a topology on it which is (in some sense) derived from the topology of $V$ , and one can define a notion of an open ball using the same metric (which appears in the definition of relative interior), etc. However, since a convex hull is not guaranteed to be a vector or affine subspace, it does not inherit the linear structure (it is generally not even a topological vector space, since it may lack additive inverses). One could technically topologize it with the subspace topology, but that topology does not interact with the linear structure the way the subspace topology w.r.t. $\textrm{aff}\, C$ does. Since the topology w.r.t. $\textrm{aff}\, C$ is more compatible with the topology of $V$ , this sort of construction is far more useful in convex analysis.
|convex-analysis|convex-optimization|
0
Let $G$ be a finite abelian group of order n. How many distinct group homomorphisms are there from $G$ to $\mathbb{R}/\mathbb{Z}$?
First, is this question well-defined? That is, are the homomorphisms determined by the order $n$ ? Since $\mathbb{R}/\mathbb{Z}$ is isomorphic to the unit circle group, all of whose finite subgroups are cyclic, the image of any such homomorphism must be cyclic. My intuition is that there is a bijection involving the cyclic subgroups of $G$ , but I don't know how to proceed formally.
This gets you into group cohomology. For any subgroup $A\le G$ such that $G/A$ is cyclic, it's the number of central extensions $$0\to A\to G\to G/A\to 0,$$ by the first isomorphism theorem. These are at least as many as in the second cohomology group , $H^2(G/A,A).$ (When the action of $G/A$ on $A$ is trivial, the correspondence is one-one.) In the situation where the extension is split we have $G\cong A×G/A,$ and this corresponds to zero in $H^2(G/A,A).$
|group-theory|finite-groups|abelian-groups|group-homomorphism|
0
Can $C_p^\infty(\mathbb{R}^n)$ be considered as a dual space of $T_p\mathbb{R}^n$, where $p\in \mathbb{R}^n$?
warning: This may be a stupid question, because I am poor at differential equations. $v_p\in T_p\mathbb{R}^n$ is a linear function over $C_p^\infty (\mathbb{R}^n)$ , where $C_p^\infty (\mathbb{R}^n)$ denotes the functions on $\mathbb{R}^n$ that are smooth at $p$ . And $T_p\mathbb{R}^n$ has the basis $\left\{ \frac{\partial }{\partial x^1 }\bigg|_{p},\dots,\frac{\partial }{\partial x^n}\bigg|_{p} \right\}$ , $\frac{\partial }{\partial x^j}\bigg|_{p}x^i=\delta_{ij}$ , which forms a dual relationship. Does this suggest that $\{x^1,\dots,x^n\}$ is a basis of $C^\infty_p(\mathbb{R}^n)$ ? If so, how is the linear space $C^\infty_p(\mathbb{R}^n)$ isomorphic to $T^*_p\mathbb{R}^n$ ?
Good question. No. Your definition of $C_p^\infty(\mathbb{R}^n)$ will need some work. As you've described it, it's the functions that are smooth at $p$ , but these functions form an infinite-dimensional vector space over $\mathbb{R}$ whereas the tangent space is $n$ -dimensional, so they can't possibly be dual. Even in dimension one, and even if we think about germs of functions instead of global functions, you still have different functions $x, x^2, \ldots$ which all evaluate the same at the origin but are different in any neighborhood of it. So it is true that a tangent vector acts on a smooth function, but there is no duality here. In fact, the dual space to tangent vectors is not functions but differential forms.
|manifolds|smooth-manifolds|dual-spaces|tangent-spaces|co-tangent-space|
1
How to solve for the angle $x$ ?($\tan(x)+ 4 \sin(x)=\sqrt 3$)
I have tried to solve $\tan(x)+ 4 \sin(x)=\sqrt 3$ . To make the notation simpler I will write $s:=\sin(x)$ : $$4s +\frac{s}{\sqrt{1-s^2}}=\sqrt{3}$$ $$(\sqrt{3}-4s)^2 (1-s^2) = s^2$$ $$-16s^4 +8 \sqrt{3} s^3 +12s^2 -8\sqrt{3} s +3=0$$ Needless to say, solving this quartic equation will be too difficult and annoying, so there must be some trick, but I couldn't find it.
I often use the tan-half-angle substitution $t = \tan \frac{x}{2}$ which results in the following: $$\begin{aligned} \sin x = \frac{2 t}{1+t^2} \\ \cos x = \frac{1-t^2}{1+t^2} \end{aligned}$$ The equation now is just a rational equation in $t$ $$ \frac{2 t}{1-t^2} +4 \frac{2 t}{1+t^2} = \sqrt{3}$$ and it has 4 solutions $$ t = \begin{cases} \sqrt{3} \\ \tfrac{\sqrt{3}}{3} - \tfrac{4 \sqrt{3} \sin \tfrac{\pi}{18}}{3} \\ \tfrac{\sqrt{3}}{3} + \tfrac{4 \sqrt{3} \cos \tfrac{\pi}{9}}{3} \\ \tfrac{\sqrt{3}}{3} - \tfrac{4 \sqrt{3} \cos \tfrac{2\pi}{9}}{3} \end{cases}$$ and then recover $x$ with $$x =2 \arctan t$$
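Clearing denominators turns the equation into the quartic $\sqrt3\,t^4-6t^3+10t-\sqrt3=0$ ; a numeric check (a sketch of mine assuming NumPy, not part of the answer) confirms each real root solves the original equation:

```python
import numpy as np

s3 = np.sqrt(3.0)
# 2t/(1-t^2) + 8t/(1+t^2) = sqrt(3), multiplied through by (1-t^2)(1+t^2)
roots = np.roots([s3, -6.0, 0.0, 10.0, -s3])
for t in roots.real[np.abs(roots.imag) < 1e-9]:
    x = 2 * np.arctan(t)
    print(t, np.tan(x) + 4 * np.sin(x))  # second column is sqrt(3) each time
```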
|geometry|algebra-precalculus|
0
Finding value of $I=\int^{\pi\over2}_0 \frac{\sin(nx)}{\sin{x}} \ \mathrm{d}x$
Finding the value of $$I=\int^{\pi\over2}_0 \frac{\sin(nx)}{\sin{x}} \ \mathrm{d}x.$$ I got this question in my book. I found that if $n$ is even then $I$ is $0$ , by using King's rule it turns out to be $I=-I$ . But I couldn't solve it when $n$ is an odd number. I tried by taking $n=1,3$ and $5$ and simply expanding $\sin (3x)$ and $\sin (5x)$ in terms of $\sin x$ and integrating. But can it be proved in general for arbitrary odd $n$ , without assuming a particular value of $n$ ?
We first deal with the odd case. Noticing $$ \begin{aligned} &2 \sin x[ \cos (2 x)+\cos (4 x)+\ldots+\cos (2 n x)] \\ = & {[\sin (3 x)-\sin (x)]+[\sin (5 x)-\sin (3 x)] } +\ldots +[\sin ((2 n+1) x)-\sin ((2 n-1) x)] \\ = & \sin ((2 n+1) x)-\sin x , \end{aligned} $$ we have $$ \int_0^{\frac{\pi}{2}} \frac{\sin ((2 n+1) x)}{\sin x}\,dx= \int_0^{\frac{\pi}{2}}\big(2[ \cos (2 x)+\cos (4 x)+\ldots+\cos (2 n x)]+1\big)\, d x=\frac{\pi}{2}. $$ Similarly for the even case, we have $$ \begin{aligned} & 2 \sin x[\cos x+\cos (3 x)+\cdots+\cos ((2n -1) x)] \\ =&[\sin (2 x)-\sin 0]+[\sin (4 x)-\sin (2 x)]+\ldots+ [\sin (2nx)-\sin (2(n-1) x)] \\ =&\sin (2 n x), \end{aligned} $$ hence $$ \begin{aligned} \int_0^{\frac{\pi}{2}} \frac{\sin (2 n x)}{\sin x}\,dx & =2 \int_0^{\frac{\pi}{2}}\left[\cos x+\cos (3 x)+\ldots+\cos ((2 n-1)x)\right] dx \\ & =2\left[\frac{\sin x}{1}+\frac{\sin (3 x)}{3}+\cdots+\frac{\sin ((2 n-1)x)}{2 n-1}\right]_0^{\frac{\pi}{2}} \\ & =2\sum_{k=1}^n\frac{(-1)^{k-1}}{2 k-1}. \end{aligned} $$
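Both closed forms can be sanity-checked numerically, e.g. with a simple midpoint rule (step count chosen arbitrarily):

```python
import math

def I(n, steps=20000):
    # Midpoint-rule approximation of the integral of sin(n x)/sin(x) over [0, pi/2].
    h = (math.pi / 2) / steps
    return h * sum(math.sin(n * (k + 0.5) * h) / math.sin((k + 0.5) * h)
                   for k in range(steps))

# Odd n = 2m + 1: the integral should equal pi/2 for every m.
odd_vals = [I(2 * m + 1) for m in range(3)]

# Even n = 2m: the integral should equal 2 * sum_{k=1}^{m} (-1)^(k-1) / (2k - 1).
even_vals = [I(2 * m) for m in range(1, 4)]
even_pred = [2 * sum((-1) ** (k - 1) / (2 * k - 1) for k in range(1, m + 1))
             for m in range(1, 4)]
```

Note in particular that the even-$n$ values are not zero (e.g. $n=2$ gives $2$).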
|calculus|integration|definite-integrals|
1
Minimizing action integral
I am exploring Taylor's Classical Mechanics and I am unsure about the mathematics when he derives the minimum of the action. We define this function $f(x(t),\dot{x}(t),t)$ to be a Lagrangian. We define the action to be $$ S = \int_{t_{1}}^{t_{2}} f(x(t),\dot{x}(t),t)dt$$ Let some function $x$ satisfy the condition that $S$ is minimum. Then we define $X(t) = x(t)+\alpha \eta (t)$ to be a family of functions that $x$ lives in. By design, we know that the minimum of $S$ occurs when $\alpha = 0$ . We then go on to differentiate $S$ with respect to $\alpha$ where the integrand is $f(X(t),\dot{X}(t),t)$ . We get to a point where we are finding $$\frac{\partial f(x(t)+\alpha \eta(t),\dot{x}(t)+\alpha \dot{\eta}(t),t)}{\partial \alpha}$$ My understanding of the chain rule is that this result is supposed to be the partial of $f$ with respect to the first argument multiplied by the partial of the first argument with respect to $\alpha$ and so on but this is not reflected in the textbook. The re
This is the directional derivative of $f$ in the direction $v = (\eta, \dot{\eta},0)$ , which evaluates to $$\nabla f \cdot v = \partial_{x}f\, v_{1} + \partial_{\dot x}f\, v_{2} + \partial_{t} f\, v_{3}$$ Setting $v$ to what you have gives you the result. Here the subscripts $x$ and $\dot x$ denote partial derivatives with respect to the first and second arguments, respectively. Whether you denote the arguments by lower or upper case letters is a purely stylistic choice. Here's a good reference: https://tutorial.math.lamar.edu/classes/calciii/directionalderiv.aspx
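A numerical illustration of exactly this chain rule, using a made-up $f$ (not Taylor's example) and arbitrary sample values:

```python
import math

# A hypothetical Lagrangian f(x, xdot, t), purely for illustration.
f = lambda x, xd, t: x ** 2 * xd + math.sin(t)
df_dx  = lambda x, xd, t: 2 * x * xd   # partial wrt the first argument
df_dxd = lambda x, xd, t: x ** 2       # partial wrt the second argument

x, xd, t = 1.3, -0.7, 2.0              # sample point on the path
eta, etad = 0.5, 1.1                   # variation eta(t) and its derivative

# Chain rule: d/d(alpha) of f(x + alpha*eta, xd + alpha*etad, t) at alpha = 0.
chain = df_dx(x, xd, t) * eta + df_dxd(x, xd, t) * etad

# Central finite difference in alpha for comparison.
h = 1e-6
numeric = (f(x + h * eta, xd + h * etad, t)
           - f(x - h * eta, xd - h * etad, t)) / (2 * h)
```

The two values agree: the derivative with respect to $\alpha$ is the sum over arguments of (partial of $f$) times (derivative of that argument with respect to $\alpha$), i.e. $\eta$, $\dot\eta$, and $0$.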
|partial-derivative|physics|calculus-of-variations|
0
the intuition behind eigenbasis?
I'm learning linear algebra from scratch and I'm trying to grasp the intuition behind the formula that's used to operate with a diagonalized transformation. As far as I understand, the process begins, roughly speaking, like this: you first get the eigenvectors of a transformation, then change the basis of a vector x with this matrix (you assume it exists, but you don't include it in the computation), then apply the original transformation, then apply the inverse of the change of basis, and then you got the diagonalized matrix. Something like this:

    import numpy as np

    # some vector
    vec = np.array([1, 1])

    # some transformation
    t = np.array([[1, 1],
                  [0, 2]])

    # change of basis matrix (with the eigenvectors of t)
    cb = np.array([[1, 1],
                   [0, 1]])

    # inverse of cb
    cb_inv = np.linalg.inv(cb)

    # eigenbasis (cb_inv * t * cb)
    eb = np.dot(cb_inv, np.dot(t, cb))
    # array([[1., 0.],
    #        [0., 2.]])

This doesn't sound so crazy to me. But what I can't wrap my head around is why the "second step" looks like this (an inve
Your code becomes more readable if you use

    vec2 = M.dot(vec1)   # or: vec2 = M @ vec1

instead of

    vec2 = np.dot(M, vec1)

"But what I can't wrap my head around is why the 'second step' looks like this (an inverse 'first', then the eigenbasis, then the eigenvector basis, and then you've to multiply by the vector you wanted to transform in first place):"

If you have a basis of vectors and you want to transfer something into that basis (express it as a combination of those basis vectors), then you need the inverse, because if you take the result and multiply it with the basis you get back the original. Another way to think about it: if you take the $i$-th basis vector and transform it, you want the result to be a $1$ in the $i$-th position and zeros everywhere else.
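The inverse-first pattern can be seen concretely with the question's own matrices: $D = P^{-1}tP$ is diagonal, and applying $t$ by routing through the eigenbasis gives the same result as applying $t$ directly:

```python
import numpy as np

t = np.array([[1.0, 1.0],
              [0.0, 2.0]])
cb = np.array([[1.0, 1.0],
               [0.0, 1.0]])           # eigenvectors of t as columns
cb_inv = np.linalg.inv(cb)

D = cb_inv @ t @ cb                   # diagonal matrix of eigenvalues

vec = np.array([1.0, 1.0])
direct = t @ vec                      # apply t in the standard basis
# Route through the eigenbasis: into eigen-coordinates (cb_inv), scale by
# the eigenvalues (D), back to standard coordinates (cb).
via_eigen = cb @ (D @ (cb_inv @ vec))
```

Reading `cb @ D @ cb_inv` right-to-left is the "inverse first" order: `cb_inv` converts `vec` into eigen-coordinates, `D` acts there, `cb` converts back.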
|linear-algebra|eigenvalues-eigenvectors|
0
How to find a vector from the intersection of 2 vectors?
This question is designed for high school students, so an advanced linear algebra solution might not be needed. Suppose $G$ is the intersection point of $\vec{DB}$ and $\vec{CE}$ , as illustrated here: Note that the intersection point is labeled $G$ and not $E$ . If $AD:DC=3:1$ and $AE:EB=1:2$ , then how do I find the ratio $DG:GB$ using a vector concept? I don't know where to start. I was thinking to write $\vec{DB}$ and $\vec{CE}$ using the formula below $$\vec p = \dfrac{m \vec c + n \vec a}{m + n}$$ where for $\vec{DB}$ , $\vec a = A$ , $\vec c = C$ , $m=3$ and $n=1$ (the same formula goes for $\vec{CE}$ as well), but I'm not sure if that would help. The reason I'm thinking that way is that I expect to find $\vec G$ first, and then I may be able to use the formula above. However, again that may not be possible if I don't know the position vector yet. I'm not interested in solving this using trigonometry since the topic we were
Complete the triangle to a parallelogram with vertices ACBH. Extend CE to intersect AH at I and to intersect BH at J. Note that $$\frac{AI}{BC}=\frac{AE}{BE}=\frac12,$$ so $$\frac{HJ}{AC}=\frac{HI}{AI}=\frac{BC-AI}{AI}=1.$$ Now we know $$\frac{DG}{GB}=\frac{DC}{BJ}=\frac{DC}{BH+HJ}=\frac18.$$
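Since the ratio $DG:GB$ is affine-invariant, it can also be verified with an arbitrary choice of coordinates, in exact rational arithmetic so the $1:8$ comes out exactly:

```python
from fractions import Fraction as F

# Arbitrary non-degenerate triangle (any choice gives the same ratio).
A, B, C = (F(0), F(0)), (F(3), F(0)), (F(0), F(4))
D = (A[0] + F(3, 4) * (C[0] - A[0]), A[1] + F(3, 4) * (C[1] - A[1]))  # AD:DC = 3:1
E = (A[0] + F(1, 3) * (B[0] - A[0]), A[1] + F(1, 3) * (B[1] - A[1]))  # AE:EB = 1:2

def meet_param(P, Q, R, S):
    # Solve P + s(Q - P) = R + u(S - R) for s via Cramer's rule.
    a, b = Q[0] - P[0], -(S[0] - R[0])
    c, d = Q[1] - P[1], -(S[1] - R[1])
    e1, e2 = R[0] - P[0], R[1] - P[1]
    det = a * d - b * c
    return (e1 * d - b * e2) / det

s = meet_param(D, B, C, E)   # G = D + s(B - D), so DG:GB = s : (1 - s)
ratio = s / (1 - s)          # DG/GB
```

This confirms $DG:GB = 1:8$, matching the parallelogram argument.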
|linear-algebra|vectors|vector-analysis|
1
Finding the values of $x$ that satisfy $\sin x+\sin2x+\sin3x+\cdots+\sin nx\le\frac{\sqrt3}{2}$ for all $n$
If the exhaustive set of $x\in(0,2\pi)$ for which the inequality $$\sin x+\sin2x+\sin3x+\cdots+\sin nx\le\frac{\sqrt3}{2}$$ holds for all $n$ is $l_1\le x\le l_2$ , find $l_1$ and $l_2$ . Let $S=\displaystyle\sum_{i=1}^n \sin(ix)$ . Then I have managed to show that $$S=\frac{\sin\left(\frac{nx+x}{2}\right)\cdot\sin\left(\frac{nx}{2}\right)}{\sin\left(\frac{x}{2}\right)}$$ I do not know what to do next. How can I handle the case for all $n$ ? Any help is greatly appreciated.
Whereas my first answer is based on trigonometric transformations, this answer is based on geometric intuition. When $x=\pi$ , for each natural $n$ the left-hand side of the inequality equals $0$ , so the inequality holds. So we assume that $x\ne\pi$ . Let $(z_n)_{n\in\mathbb N\cup \{-1,0\}}$ be a sequence of points of the plane $\mathbb R^2$ such that $z_{-1}=\left(-\frac 12,0\right)$ , $z_0=\left(\frac 12,0\right)$ , and for each natural $n$ the vector $\overline{z_{n-1}z_n}$ is the vector $\overline{z_{n-2}z_{n-1}}$ rotated counterclockwise by the angle $x$ . Then the question is equivalent to asking for which $x\in (0,2\pi)$ the ordinate of $z_n$ is at most $\frac {\sqrt{3}}2$ for each natural $n$ . It is easy to see that all points of the sequence $(z_n)_{n\in\mathbb N\cup \{-1,0\}}$ belong to a circle $C$ containing the points $z_{-1}$ , $z_0$ , and $z_1$ . Let $O=(x_O,y_O)$ be the center of $C$ and $R$ the radius of $C$ . The point $O$ belongs to the middle perpendicular to
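As a sanity check, the closed form from the question can be compared against the direct sum at a few arbitrary sample points (any $x$ with $\sin(x/2)\ne 0$ works):

```python
import math

def S_direct(n, x):
    # The partial sum sin(x) + sin(2x) + ... + sin(nx) computed directly.
    return sum(math.sin(i * x) for i in range(1, n + 1))

def S_closed(n, x):
    # The closed form sin((n+1)x/2) * sin(nx/2) / sin(x/2).
    return math.sin((n + 1) * x / 2) * math.sin(n * x / 2) / math.sin(x / 2)

max_err = max(abs(S_direct(n, x) - S_closed(n, x))
              for n in (1, 5, 12) for x in (0.3, 1.7, 4.9))
```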
|sequences-and-series|inequality|trigonometry|
0
Solving $\dot{x}^2+\frac{k}{m}x^2-\frac{k}{m}x^2_{max}=0$ from a spring oscillator problem
In my mechanics class we had a differential equation arise from a spring oscillator via its energy ( $x^2_{max}$ is some constant maximum length) : $$\begin{align}\frac{1}{2}kx^2_{max}&=\frac{1}{2}mv^2+\frac{1}{2}kx^2\\\frac{k}{m}x^2_{max}&=\dot{x}^2+\frac{k}{m}x^2\\\end{align}$$ So we have $\dot{x}^2+\frac{k}{m}x^2-\frac{k}{m}x^2_{max}=0$ from this. I've been having some trouble solving this and WFA wasn't any help either. In the lecture my professor took the derivative and solved from there : $$\begin{align}2\ddot{x}\dot{x}+2\frac{k}{m}x\dot{x}&=0\\\implies\ddot{x}+\frac{k}{m}x&=0\end{align}$$ In the end this would mean $x=x_{max}\cos(\omega t+\phi)$ , but I'm still really confused about the technique of just taking the derivative here to solve. I thought maybe it's something like $y=x^2\implies\dot{y}=2x\dot{x}$ in disguise, but it doesn't really simplify once I rearrange for $\dot{x}^2$ . How can I understand this?
Maybe moving away from the dot notation (which I have always hated but is sometimes necessary) may be helpful... $$\frac{1}{2}kx^2_{max}=\frac{1}{2}mv^2+\frac{1}{2}kx^2$$ $$\therefore\frac{k}{m}x_{max}^2=v^2+\frac{k}{m}x^2$$ Applying the $\frac{d}{dt}$ operator on both sides of our equation yields; $$0=\frac{d}{dt}(v\cdot v)+\frac{k}{m}\bigg[\frac{d}{dt}(x\cdot x)\bigg]$$ Which, by the product rule, gives us; $$0=2(v' \cdot v)+2\frac{k}{m}(x'\cdot x)$$ $$\therefore0=2(x'' \cdot x')+2\frac{k}{m}(x'\cdot x)=2x'\bigg[x''+\frac{k}{m}x\bigg]$$ Now, this equality must hold for all $t$ but we know that $2x'$ is not equal to $0$ at all times. Therefore we must have; $$x''+\frac{k}{m}x=0 $$
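A quick numerical check that $x(t)=x_{max}\cos(\omega t+\phi)$ with $\omega=\sqrt{k/m}$ really satisfies the original energy equation identically (the parameter values below are arbitrary):

```python
import math

# Arbitrary sample parameters; the identity should hold for any choice.
k, m, x_max, phi = 2.5, 0.8, 1.3, 0.4
w = math.sqrt(k / m)

def energy_residual(t):
    # v^2 + (k/m) x^2 - (k/m) x_max^2, which should vanish for all t.
    x = x_max * math.cos(w * t + phi)
    v = -x_max * w * math.sin(w * t + phi)   # dx/dt
    return v ** 2 + (k / m) * x ** 2 - (k / m) * x_max ** 2

max_res = max(abs(energy_residual(0.5 * i)) for i in range(10))
```

The residual is zero to machine precision, since $v^2 = x_{max}^2\frac km\sin^2$ and $\frac km x^2 = \frac km x_{max}^2\cos^2$ add up to $\frac km x_{max}^2$.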
|ordinary-differential-equations|classical-mechanics|
1
If $M$ is an artinian module and $f : M\to M$ is an injective homomorphism, then $f$ is surjective.
If $M$ is an artinian module and $f: M\to M$ is an injective homomorphism, then $f$ is surjective. I somehow found out that if we consider the module $\mathbb Z_{p^{\infty}}$ denoting the submodule of the $\mathbb{Z}$-module $\mathbb{Q/Z}$ consisting of elements which are annihilated by some power of $p$, then it is artinian, but if we have the homomorphism $f(\frac{1}{p^{k}})=\frac{1}{p^{k+1}}$, then we get a $\mathbb{Z}$-module homomorphism, but this map is not surjective, because $\frac{1}{p}$ has no preimage. I would be very grateful if someone can tell me what is wrong with this counterexample? And how to prove the proposition above if it is correct? Thanks.
Indeed, the proposition is correct. It is easy to verify that $$ M \supseteq \operatorname{Im} f \supseteq \operatorname{Im} f^2 \supseteq \operatorname{Im} f^3 \supseteq \ldots $$ is a descending chain of submodules of $M$ . Since $M$ is Artinian, there exists $k \in \mathbb{N}$ such that $\operatorname{Im} (f^k) = \operatorname{Im} (f^{k+1})$ . Note that $f^n:M \longrightarrow M$ is injective for all $n \in \mathbb{N}$ since $f$ is injective. In particular, $f^k$ is injective. For any $x \in M$ , there exists $y \in M$ such that $f^{k+1}(y) = f^k(x)$ , since $f^k(x) \in \operatorname{Im}(f^k) = \operatorname{Im}(f^{k+1})$ . Then, $f^k(f(y) - x) = f(f^k(y)) - f^k(x) = f^{k+1}(y) - f^k(x) = 0$ . This implies that $f(y) - x = 0$ , i.e., $f(y) = x$ , since $f^k$ is injective. Hence, $\operatorname{Im} (f) = M$ , since $x$ was arbitrary in $M$ , which means $f$ is surjective. As for the proposed counterexample: the assignment $\frac{1}{p^k}\mapsto\frac{1}{p^{k+1}}$ does not extend to a well-defined $\mathbb Z$-module homomorphism at all. For instance, $\frac1p+\frac{p-1}p=0$ in $\mathbb Z_{p^\infty}$ , yet $\frac1{p^2}+\frac{p-1}{p^2}=\frac1p\neq 0$ , so additivity fails.
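A finite sanity check (not a proof, but finite modules are Artinian, so it is a special case of the proposition): any injective additive map on $\mathbb Z/8$ is automatically surjective, simply by counting:

```python
# Toy finite example: M = Z/8 as an abelian group, f(x) = 3x mod 8.
# gcd(3, 8) = 1, so f is injective; an injective map from a finite set
# to itself must hit everything, so f is surjective as well.
M = list(range(8))
f = lambda x: (3 * x) % 8
image = sorted({f(x) for x in M})
```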
|abstract-algebra|modules|
0
what is the definite formula for an exponent?
I'm attempting to code math formulas from square one, where I use addition to build subtraction, multiplication, division, etc. But that brought me to exponents. I felt it was simple enough, so I wrote a loop with a self-multiplying variable and got what I was looking for. But for things like 2^0, I got 0, because variable=(2)*(2) happening 0 times gave 0 and not the usual 1; then negative exponents started adding to the script's complexity. While I could just fix these cases with more scripting, I'd like to find the definite formula for exponentiation so as to keep things simple. What is this formula?
The only "definite" formula I can think of is a power series for $x^y$ (expanding either in $x$ with $y$ fixed, or in $y$ with $x$ fixed).
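In code, though, the usual fix is not a new formula but a correct starting point: an exponent is a repeated product whose empty case is $1$, and negative exponents are reciprocals. A sketch in Python (illustrative, not the asker's original script):

```python
def power(base, exp):
    """Integer-exponent power via repeated multiplication.

    The accumulator starts at 1 (the empty product), which is exactly why
    base**0 == 1: with exp = 0 the loop body never runs, so the initial 1
    survives untouched.  Negative exponents are handled via reciprocals.
    """
    if exp < 0:
        return 1 / power(base, -exp)   # b^(-n) = 1 / b^n
    result = 1                          # the empty product
    for _ in range(exp):
        result *= base
    return result
```

So $2^0$ comes out as $1$ because nothing is ever multiplied in, not because anything special happens.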
|computer-science|recreational-mathematics|
0
I am not able to solve this question on set theory; I tried but I am unsure of my answer
Let $S$ be the set of all positive integers such that each element $n$ in $S$ satisfies the following conditions: The sum of the digits of $n$ is a prime number. The sum of the digits of $n^2$ is a perfect square. Prove or disprove the existence of an infinite subset $T$ of $S$ such that for every $n$ in $T$ , the sum of the digits of $n^3$ is a perfect cube. Attempt To prove the existence of an infinite subset $T$ of $S$ such that for every $n$ in $T$ , the sum of the digits of $n^3$ is a perfect cube, we need to show that there are infinitely many positive integers $n$ satisfying the given conditions. I analyzed the conditions provided: The sum of the digits of $n$ is a prime number. The sum of the digits of $n^2$ is a perfect square. I tried to construct such an infinite subset $T$ by considering the properties of the conditions. First, I started with the condition that the sum of the digits of $n$ is a prime number. This condition restricts the possible values of $n$ to those whose
Numbers of the form $10^n+1$ work for all positive $n$ . The digits of $10^n+1$ sum to $2$ . $(10^n+1)^2=10^{2n}+2\times 10^n+1$ so the digits sum to $4$ . $(10^n+1)^3=10^{3n}+3\times 10^{2n}+3\times 10^n+1$ and the digits sum to $8$ .
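This is easy to verify mechanically for the first few $n$ (`digit_sum` is a small helper written for the check):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

# For n = 10^k + 1 (k >= 1) the binomial coefficients 1, 2, 1 and 1, 3, 3, 1
# land in separate decimal positions with no carries, so the digit sums of
# n, n^2, n^3 are always 2, 4, 8: a prime, a perfect square, a perfect cube.
results = []
for k in range(1, 6):
    n = 10 ** k + 1
    results.append((digit_sum(n), digit_sum(n ** 2), digit_sum(n ** 3)))
```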
|elementary-number-theory|
0
How to take the gradient of the quadratic form?
It's stated that the gradient of: $$\frac{1}{2}x^TAx - b^Tx +c$$ is $$\frac{1}{2}A^Tx + \frac{1}{2}Ax - b$$ How do you grind out this equation? Or specifically, how do you get from $x^TAx$ to $A^Tx + Ax$?
Here's the approach I learned, with differentials. One only needs the product rule. I'll respond to the final part of the question, "Or specifically, how do you get from $x^TAx$ to $A^Tx + Ax$ ?" Let $f(x) = x^T A x$ . Then, to make the application of the product rule absolutely clear, let $g(x) = x^T$ and $h(x) = Ax$ so that $f(x) = g(x)h(x)$ . Then by the product rule, $$df(x) = dg(x)\,h(x) + g(x)\,dh(x)$$ We can compute $$ dg(x) = d(x^T) = (dx)^T $$ and $$ dh(x) = A\, dx, $$ the latter because $A$ is constant. Then we have $$ df = (dx)^T Ax + x^T A\,dx $$ Now we want to get all the $dx$ terms on the right. Since each term has dimension $1\times 1$ (scalar), we can take the transpose of $(dx)^T Ax$ without changing its value, leaving $$ df = x^T A^T dx + x^T A\, dx = x^T (A^T+A)\, dx $$ By definition $df = (\nabla f)^T dx$ . Thus we have $$ \nabla f = (A^T + A)x $$
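The result $\nabla f=(A^T+A)x$ can be checked against central finite differences for a random non-symmetric $A$ (sizes and seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))       # deliberately non-symmetric
x = rng.standard_normal(4)

f = lambda y: y @ A @ y               # f(y) = y^T A y
grad = (A.T + A) @ x                  # claimed gradient at x

# Central finite differences, one coordinate direction at a time.
h = 1e-6
num = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(4)])
```

Since $f$ is quadratic, the central difference is exact up to rounding, and the two gradients agree.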
|multivariable-calculus|derivatives|matrix-calculus|quadratic-forms|scalar-fields|
0