| title | question_body | answer_body | tags | accepted |
| string | string | string | string | int64 |
|---|---|---|---|---|
Is the transfer principle the reason we can talk about complex numbers such as $e^{\pi i}$?
|
My question is basically in the title. When we move from real numbers to complex numbers, is the reason we can talk about expressions such as $e^{\pi i}$ the transfer principle? EDIT: Since clarification was needed, I was referring to something similar to the transfer principle for hyperreal numbers, where it's used to extend real functions to the hyperreals. I assumed perhaps something similar was going on here. I should've probably asked for "a" transfer principle instead of "the" transfer principle. Analytic continuation was what I was after!
|
"The" transfer principle or "a" transfer principle? In general, a transfer principle has two components: the type of "structure-transformation" whose input and output we want to transfer between, and the logic in question which determines the "transferrable" properties. The classical "reals-to-hyperreals" transfer principle takes as its latter component first-order logic, and as its former component the ultrapower construction; really, it's just a particular corollary of Łos' Theorem . But there are other logics and other methods of getting new structures from old. There is indeed a weak transfer phenomenon holding between $\mathbb{R}$ and $\mathbb{C}$ . Specifically, $\mathbb{C}$ is a quotient of a subring of a power of $\mathbb{R}$ . Half of this is obvious: $\mathbb{C}\cong \mathbb{R}[x]/\langle x^2+1\rangle$ . (In fact that " $\cong$ " may well be an " $=$ " depending on how exactly you set things up.) The non-obvious part is that $\mathbb{R}[x]\hookrightarrow\mathbb{R}^\kappa$ for
|
|logic|
| 0
|
Schauder basis in $\ell^2(\mathbb{N})$.
|
I have a question about this problem posted here. Essentially they are trying to prove that the sequence $\{x_n\}_{n = 1}^\infty = \{\alpha e_n + \beta e_{n+1}\}_{n=1}^\infty$ (here $\{e_n\}_{n = 1}^\infty$ is the canonical orthonormal basis for $\ell^2(\mathbb{N})$) is a Schauder basis for $\ell^2(\mathbb{N})$ whenever $|\beta| < |\alpha|$. The accepted answer on here gives a hint on how to write the orthonormal basis in terms of this sequence. Following this hint gives us that we can write (uniquely) $$e_k = \frac{1}{\alpha}\sum_{n=k}^\infty \left(-\frac{\beta}{\alpha}\right)^{n-1}x_n.$$ However, why is this enough to conclude that $\{x_n\}_{n=1}^\infty$ is a Schauder basis? In order for this to happen, for each $y \in \ell^2(\mathbb{N})$ there must be a unique way to write $$y = \sum_{n=1}^\infty c_n x_n$$ for some scalars $\{c_n\}_{n=1}^\infty$. We already know we have the decomposition of $y$ in terms of the orthonormal basis and each element of the orthonormal basis has a unique decomposition
|
As you observed, it suffices to check existence. We first note a truncation estimate: $$e_k = \frac{1}{\alpha}\sum_{n=k}^M \left(-\frac{\beta}{\alpha}\right)^{n-1} x_n + O\left(\left(\frac{\beta}{\alpha}\right)^M \right)$$ as $M \to \infty$. Suppose $y = \sum_{i=1}^{\infty} a_i e_i$. Let $\epsilon > 0$. Take $M$ large such that $O\left(\left(\frac{\beta}{\alpha}\right)^M \right) = O(\epsilon)$, and take $N > M$ such that $$y = \sum_{i=1}^N a_i e_i + O(\epsilon).$$ Substitute the truncated expression of the $e_i$'s in the sum; all sums are finite, so swapping sums causes no issues, and we see that \begin{align*} y = \, & \sum_{i=1}^N a_i e_i + O(\epsilon) \\ = \, & \frac{1}{\alpha}\sum_{i=1}^N a_i \sum_{n=i}^M \left(-\frac{\beta}{\alpha}\right)^{n-1} x_n + O(\epsilon) \\ = \, & \frac{1}{\alpha}\sum_{n=1}^M \left(\sum_{i \leq n} a_i\right) \left(-\frac{\beta}{\alpha}\right)^{n-1} x_n + O(\epsilon) + O\left(\left(\frac{\beta}{\alpha}\right)^M\right) \\ = \, & \frac{1}{\alpha}\sum_{n=1}^M \
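The truncation estimate can be checked numerically in a finite-dimensional truncation (my own illustration; I take $\alpha = 1$, $k = 1$ for simplicity, so the error of the partial sum is exactly $(\beta/\alpha)^M$ by telescoping):

```python
import numpy as np

N, M = 60, 20
alpha, beta = 1.0, 0.5
gamma = beta / alpha                     # must satisfy |gamma| < 1

e = np.eye(N)                            # rows play the role of e_1, e_2, ...
x = alpha * e[:-1] + beta * e[1:]        # x[n] = alpha*e_n + beta*e_{n+1}

# Partial sum (1/alpha) * sum_{n=1}^{M} (-gamma)^{n-1} x_n, aiming at e_1:
approx = sum((-gamma) ** n * x[n] for n in range(M)) / alpha
err = np.linalg.norm(approx - e[0])
assert np.isclose(err, gamma ** M)       # error decays like (beta/alpha)^M
```

The interior terms cancel in pairs, leaving only $e_1$ plus a single tail term of size $\gamma^M$, which is the $O((\beta/\alpha)^M)$ in the estimate above.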
|
|real-analysis|functional-analysis|hilbert-spaces|
| 0
|
Topological version of uniform convergence of functions
|
We have a sequence of continuous functions $\{f_n\}$ on a Banach space $X$ and $f_n(x)\to f(x)$ for each $x\in X$ as $n\to\infty$. Given an open ball $B\subset X$ and $\epsilon>0$, we want to show that there exists another open ball $B_0\subset B$ and $m\geq 1$ such that $|f_m(x)-f(x)|\leq \epsilon$ for all $x\in B_0$. My approach: Let $Y$ be a closed ball in $B$, fix $\epsilon>0$ and consider $E_l=\{x\in Y \mid \sup_{j,k\geq l} |f_j(x)-f_k(x)| < \epsilon\}$. (Please check the following!) Then $E_l$ is open for each $l$: this is because for any $x\in E_l$, take the radius $r$ and corresponding ball $B_r(x)$ such that both $d(f_j(x),f_j(x'))$ and $d(f_k(x),f_k(x'))$ are $<\delta/2$, where $\delta=\epsilon-\sup_{j,k\geq l} |f_j(x)-f_k(x)|$ (by taking the minimum $r$ at $x$ for $f_j$ and $f_k$), and therefore we have $$\sup_{j,k\geq l} |f_j(x')-f_k(x')|=|f_{j'}(x')-f_{k'}(x')|$$ where $j', k'$ correspond to the indices in the family $\{f_n\}$ corresponding to the supremum at the point $x'$, then $$\leq |f_{j'
|
Edit: Your $E_l$ is not necessarily open (the final result you are trying to prove is equivalent to one of the $E_l$ having nonempty interior). The first equality $$\sup_{j,k\geq l} |f_j(x')-f_k(x')|=|f_{j'}(x')-f_{k'}(x')|$$ makes no sense. For example, take $f_n=x^n$ on $[0,1]$; then there is always an annoying isolated point $1$ in every $E_l$. But it is true that the union of the $E_l$ is $Y$. So the following argument still works after some minor modification. The Baire category theorem says that in a complete metric space, if we have a sequence of closed sets $A_j$ with empty interior, then $\bigcup_j A_j$ also has empty interior. Let $E_l(\epsilon)=\{x\in Y \mid \sup_{j,k\geq l} |f_j(x)-f_k(x)| < \epsilon\}$. Since $Y=\bigcup \bar E_l(\epsilon /2)$ has nonempty interior, by the Baire category theorem at least one of the $\bar E_l(\epsilon /2)$, say $\bar E_m$, contains an open ball $B_0$ in $Y$. But $B_0\subset\bar E_m(\epsilon /2)\subset E_m(\epsilon)$. Thus in $B_0$, $|f_j-f_k|<\epsilon$ whenever $j,k\geq m$, thus we also have
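The counterexample with $f_n = x^n$ can be seen numerically (my own illustration): for $x<1$ the sup over $j,k\ge l$ of $|x^j-x^k|$ equals $x^l$ (since $x^k\downarrow 0$), which is close to $1$ just left of $x=1$, while the sup is $0$ at $x=1$ itself, so $1$ is an isolated point of $E_l$ when $\epsilon<1$:

```python
import numpy as np

l, eps = 5, 0.5
xs = np.linspace(0.95, 0.999, 50)
# For x < 1: sup_{j,k >= l} |x^j - x^k| = x^l - lim_k x^k = x^l.
sups = xs ** l
assert np.all(sups > eps)        # points just left of 1 are NOT in E_l
assert 0.0 < eps                 # while at x = 1 the sup is |1 - 1| = 0 < eps
```

So every neighbourhood of $1$ leaves $E_l$, confirming $E_l$ is not open.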
|
|real-analysis|general-topology|functional-analysis|analysis|functions|
| 1
|
Estimate yaw given pitch, roll, and change in pitch and roll after rigid transformation
|
Given a gravity vector $g$ in a 3D coordinate frame $F$ we can find pitch $p$ and roll $r$ (Euler angles) of $F$ relative to $g$ . Assume we apply a rigid transformation to $F$ , sense a new gravity vector $g'$ , and compute new Euler angles $p'$ and $r'$ . Can we predict the yaw angles $y$ or $y'$ given $(p, r, p', r')$ ? Intuitively it seems doable since axis rotations are all linked but I do not know where to start. Update: I think the solution depends on the rotation part, $R$ , of the rigid transformation, and so cannot be predicted using only $(p, r, p', r')$ . I've been thinking about it graphically where the pitch vector (axis of pitch rotation), for example, is embedded in world coordinates. But since its exact rotation about $g$ is unknown, I think of the pitch vector as, like, a cone, $C$ , of all possible orientations. Then, after the rigid transformation is applied and $g',p',r'$ are measured we can find a new cone $C'$ for the new pitch vector. I think the intersection of
|
Think about it this way. At any point in time the local frame can rotate about the gravity vector and the gravity vector would be invariant in that frame. So you can't uniquely determine the yaw angles about global Z in either the original frame or the new frame. You need three non-collinear points to uniquely determine a frame's orientation. The gravity vector gives you only collinear points.
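A small numerical demonstration of this invariance (my own illustration; the conventions are assumptions: intrinsic z-y-x Euler angles $R = R_z(\text{yaw})\,R_y(\text{pitch})\,R_x(\text{roll})$, gravity along $-z$ in the world frame):

```python
import numpy as np

def rot(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

g_world = np.array([0.0, 0.0, -1.0])
pitch, roll = 0.3, -0.7
# Gravity as sensed in the body frame, for two different yaw angles:
g_body_1 = rot(0.2, pitch, roll).T @ g_world
g_body_2 = rot(2.5, pitch, roll).T @ g_world
assert np.allclose(g_body_1, g_body_2)   # sensed gravity cannot reveal yaw
```

Because $R_z^\top$ fixes the $z$-axis, the yaw factor drops out of $R^\top g$ entirely, which is exactly the ambiguity described above.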
|
|trigonometry|linear-transformations|rotations|
| 0
|
How to solve $\int_0 ^1 \left( \frac{x^2}{1+x^2} \right)\frac{1-x\tan(x)+ \tan(x)-x}{1 -x\tan(x)-\tan(x)-x}dx$?
|
I saw this interesting problem: $$\int_0 ^1 \left( \frac{x^2}{1+x^2} \right)\frac{1-x\tan(x)+ \tan(x)-x}{1 -x\tan(x)-\tan(x)-x}dx$$ I tried all the tricks that I know and none of them were useful at all. After some thought I tried to graph the function to see if there is some way I can use King's rule after some substitution to simplify the integral, and the graph is very strange. So one of my friends tried to use Wolfram Alpha and it gave two different values! This made this integral more strange, and I don't know how it is possible that Wolfram found two different values. I think it might be related to Riemann's rearrangement theorem.
|
The integral diverges at the singularity $s=0.40262817\dots$, which is the solution to $x+\arctan x=\pi/4$. The integral is $$\begin{align}I=&\int \frac{x^2}{1+x^2}\left (\frac{1}{1-\tan(x+\arctan x)}-1\right)\\=&\int B(x)\frac{1}{1-\tan(x+\arctan x)}+C(x)\end{align}$$ where both $B,C$ are bounded functions. Near the singularity $s$, $B\neq 0$ and $$\frac{1}{1-\tan(x+\arctan x)}=\frac D{x-s}+E(x)$$ with $D=-\frac {s^2+1}{2s^2+4}\neq 0$ and $E$ bounded. Thus the integral diverges.
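The location of the singularity is easy to confirm numerically (my own check): $x + \arctan x$ is strictly increasing, so the root of $x+\arctan x = \pi/4$ in $[0,1]$ is unique and can be found by bisection.

```python
import math

def f(x):
    return x + math.atan(x) - math.pi / 4

lo, hi = 0.0, 1.0          # f(0) < 0 < f(1), and f is strictly increasing
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid
s = (lo + hi) / 2
assert abs(s - 0.40262817) < 1e-7   # matches the value quoted above
assert 0 < s < 1                    # the pole lies inside [0, 1]
```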
|
|calculus|integration|convergence-divergence|definite-integrals|
| 0
|
Conditional distribution of Brownian motion given the passage time?
|
I am stuck with this question. Suppose a stock price follows Brownian motion starting at price $p$. Denote by $\tau_r$ the passage time for reaching level $r$, with $r < p$. What would be the distribution of the stock at time $t$ given that $\tau_r \geq t$ (given that the stock's price never went below price $r$)?
|
Once one has the joint distribution, we can obtain the conditional one: $${\displaystyle f_{Y\mid X}(y\mid x)={\frac {f_{X,Y}(x,y)}{f_{X}(x)}}.}$$ For the case of driftless BM, one can just use the reflection principle; see The Maximum of Brownian Motion and the Reflection Principle. By definition $$[\tau_a \leq t] = \left[ \sup_{s \leq t} X_s \geq a \right] \tag{1}$$ and so determining the distribution of the hitting time $\tau_a$ is equivalent to finding the distribution of $\sup_{s \leq t} X_s$. Joint distribution of $(X_t, \sup_{s \leq t} X_s)$: Let $(X_t)_{t \geq 0}$ be a Brownian motion on a probability space $(\Omega,\mathcal{A},\mathbb{Q})$. Then the joint distribution of $(X_t,\sup_{s \leq t} X_s)$ equals $$\mathbb{Q} \left[ X_t \in dx, \sup_{s \leq t} X_s \in dy \right] = \frac{2 (2y-x)}{\sqrt{2\pi t^3}} \exp \left(- \frac{(2y-x)^2}{2t} \right) 1_{(-\infty,y]}(x) \, dx \, dy. \tag{2}$$ For a proof see e.g. René Schilling/Lothar Partzsch: Brownian Motion - An Introduction to Stochastic Processes.
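A Monte Carlo sanity check of the reflection principle behind $(1)$ and $(2)$ (my own illustration, not part of the cited proof): for driftless BM started at $0$, $\mathbb{P}(\sup_{s\le t} X_s \ge a) = 2\,\mathbb{P}(X_t \ge a)$.

```python
import numpy as np

rng = np.random.default_rng(0)
paths, steps, a, t = 100_000, 1000, 1.0, 1.0
dt = t / steps

W = np.zeros(paths)                 # current position of each path
M = np.zeros(paths)                 # running maximum of each path
for _ in range(steps):
    W += rng.standard_normal(paths) * np.sqrt(dt)
    np.maximum(M, W, out=M)

p_sup = np.mean(M >= a)             # slightly biased low by time discretization
p_end = 2 * np.mean(W >= a)         # reflection-principle prediction
assert abs(p_sup - p_end) < 0.02
```

The small remaining gap is the discretization bias of the simulated maximum, which shrinks as the number of steps grows.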
|
|stochastic-processes|stochastic-calculus|brownian-motion|finance|
| 0
|
Calculus minimum stationary point
|
Prove using calculus that $f\left(s\right)=\frac{n^{2}}{s}+\frac{1}{t-s}$, $t>s>0$, has a minimum at $s=\frac{nt}{1+n}$. I differentiated $\frac{n^{2}}{s}$ with respect to $s$ and got $-\frac{n^{2}}{s^{2}}$, but am not sure how to differentiate $\frac{1}{t-s}$ and would like some guidance on how to continue from there.
|
Hint: Think about how to differentiate $\frac{1}{x-1}$ . Use the same concept in your question since $t$ is a constant.
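Following the hint, a quick numeric spot-check with concrete values ($n=2$, $t=3$ chosen arbitrarily, so the claimed minimiser is $s^* = nt/(1+n) = 2$):

```python
# f'(s) = -n^2/s^2 + 1/(t-s)^2, since d/ds [1/(t-s)] = +1/(t-s)^2 (chain rule).
n, t = 2.0, 3.0
s_star = n * t / (1 + n)

def f(s):
    return n ** 2 / s + 1 / (t - s)

def fprime(s):
    return -n ** 2 / s ** 2 + 1 / (t - s) ** 2

assert abs(fprime(s_star)) < 1e-12                        # stationary point
assert f(s_star) < f(s_star - 0.1) and f(s_star) < f(s_star + 0.1)  # a minimum
```

The sign flip relative to differentiating $\frac{1}{x-1}$ comes from the inner derivative of $t-s$ being $-1$.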
|
|calculus|proof-writing|
| 1
|
Schauder basis in $\ell^2(\mathbb{N})$.
|
I have a question about this problem posted here. Essentially they are trying to prove that the sequence $\{x_n\}_{n = 1}^\infty = \{\alpha e_n + \beta e_{n+1}\}_{n=1}^\infty$ (here $\{e_n\}_{n = 1}^\infty$ is the canonical orthonormal basis for $\ell^2(\mathbb{N})$) is a Schauder basis for $\ell^2(\mathbb{N})$ whenever $|\beta| < |\alpha|$. The accepted answer on here gives a hint on how to write the orthonormal basis in terms of this sequence. Following this hint gives us that we can write (uniquely) $$e_k = \frac{1}{\alpha}\sum_{n=k}^\infty \left(-\frac{\beta}{\alpha}\right)^{n-1}x_n.$$ However, why is this enough to conclude that $\{x_n\}_{n=1}^\infty$ is a Schauder basis? In order for this to happen, for each $y \in \ell^2(\mathbb{N})$ there must be a unique way to write $$y = \sum_{n=1}^\infty c_n x_n$$ for some scalars $\{c_n\}_{n=1}^\infty$. We already know we have the decomposition of $y$ in terms of the orthonormal basis and each element of the orthonormal basis has a unique decomposition
|
Observe that for an invertible linear operator $T$ and a Schauder basis $\{e_n\}_{n=1}^\infty$, the sequence $\{Te_n\}_{n=1}^\infty$ constitutes a Schauder basis as well. Indeed for any element $x\in H$ we have $$T^{-1}x=\sum_{n=1}^\infty b_ne_n \iff x=\sum_{n=1}^\infty b_nTe_n$$ Consider the operator $T=\alpha I+\beta S=\alpha(I+\gamma S)$, where $\gamma=\beta/\alpha$ and $S$ is the shift operator defined by $Se_n=e_{n+1}$. As $|\gamma|<1$ and $\|S\|=1$, the operator $T$ is invertible.
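A finite-dimensional illustration of the last step (my own sketch: the shift is truncated to an $N\times N$ matrix, and $(I+\gamma S)^{-1}$ is built from the Neumann series $\sum_m(-\gamma S)^m$, which converges since $|\gamma|\,\|S\|<1$):

```python
import numpy as np

N = 40
alpha, beta = 1.0, 0.5
gamma = beta / alpha
S = np.eye(N, k=-1)                     # S e_n = e_{n+1} (ones on subdiagonal)
T = alpha * (np.eye(N) + gamma * S)     # T = alpha*(I + gamma*S)

# Neumann series for (I + gamma*S)^{-1}:
inv = np.zeros((N, N))
term = np.eye(N)
for _ in range(200):
    inv += term
    term = term @ (-gamma * S)
T_inv = inv / alpha
assert np.allclose(T_inv @ T, np.eye(N))
```

(In the truncation $S$ is even nilpotent, so the series terminates exactly; in $\ell^2$ it converges in operator norm.)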
|
|real-analysis|functional-analysis|hilbert-spaces|
| 1
|
Concurrency involving two tangents and a secant of a circle
|
I found this problem while fooling around with GeoGebra. Let $(O)$ be a circle in the plane and $A$ be a point lying outside of the circle. From $A$, draw two tangents $AB, AC$ to $(O)$ ($B, C\in (O)$). A line $d$ passing through $A$ ($O\not\in d$) intersects $(O)$ at two points $E, F$. $BC$ and $EF$ intersect at $K$; $CE, CF$ intersect the line $AO$ at $X, Y$ respectively. a. Prove that $CK, FX, EY$ are concurrent. b. Let $D$ be the intersection of $BO$ and $(O)$. $DE, DF$ intersect $AO$ at $M, N$ respectively. Prove that $DK, EN, FM$ are concurrent. What I have proved so far: $AEMC, AFCN$; $EMYF, EXNF$ are inscribed. $OM = ON$. Indeed, since $\widehat{CEM} =\widehat{EBD}$ and $\widehat{OBE} = \widehat{BAM}=\widehat{MAC}$ then $AEMC$ is inscribed. Similarly, $AFCN$ is inscribed. Now since $\widehat{AME} = \widehat{ECA}$ and $\widehat{ECA} = \widehat{EFC}$ then $\widehat{EMY}+\widehat{EFY} = \pi$, thus $EMYF$ is inscribed. Similarly, $EXNF$ is inscribed. Note that $MNDC$ i
|
For Part $(a)$ : Step $1$ : Show that: $$\frac{FY}{YC}=\frac{AF}{AC} \times \frac{\sin \angle YAF}{ \sin \angle YAC}.$$ Step $2$ : Show that: $$\frac{XC}{XE}=\frac{AC}{AE} \times \frac{\sin \angle YAC}{ \sin \angle YAF}.$$ Step $3$ : Show that: $$\frac{EK}{KF}=\frac{EB}{BF} \times \frac{\sin \angle CBE}{ \sin \angle CBF}=\frac{EB}{BF} \times \frac{CE}{CF}.$$ Step $4$ : Show that $\frac{EB}{BF}=\frac{AB}{AF}$ and $\frac{CE}{CF}=\frac{AE}{AC}$ . Step $5$ : Conclude that $\frac{FY}{YC} \times \frac{XC}{XE} \times \frac{EK}{KF}=1$ . Now, note that the claim follows from Ceva's theorem . For Part $(b)$ : Do exactly the same process.
|
|geometry|contest-math|euclidean-geometry|
| 1
|
$\sqrt{2}$ is irrational $\equiv \lnot (\sqrt{2}$ is rational)?
|
Definition: A real number is irrational if it is not rational. According to this definition, does "$\sqrt{2}$ is irrational" mean $\lnot$($\sqrt{2}$ is rational)? This seems right to me, but an example that makes me unsure is that $\lnot$($i$ is rational) is true, while "$i$ is irrational" is false. However, I feel that complex numbers are irrelevant in this context.
|
From your definition "A real number is..." means that your definition does not speak to numbers which are not real, so the ir-/rationality of $i$ , which is complex but not real, is not resolved by this definition. (Of course, $i$ is irrational, but this definition does not tell you so.) The given definition allows one to go from $$\sqrt{2} \text{ is irrational}$$ to $$\sqrt{2} \text{ is not rational.}$$ One then has to use the grammatical and syntactical rules of English to move to $$\text{it is not the case that (} \sqrt{2} \text{ is rational),}$$ which is immediately $$ \neg (\sqrt{2} \text{ is rational).}$$
|
|logic|
| 1
|
If $X$ is an infinite-dimensional Banach space and $u\in B(X)$, then $\bigcap_{v\in K(X)}\sigma(u+v) =\cdots$
|
If $X$ is an infinite-dimensional Banach space and $u\in B(X)$, why is the following equality true? $$\bigcap_{v\in K(X)}\sigma(u+v) =\sigma(u) \setminus \{\lambda \in\mathbb{C}\mid u - \lambda \text{ is Fredholm of index zero} \}$$ The index of $u$ is defined to be $$\operatorname{ind}(u)=\operatorname{nul}(u)-\operatorname{def}(u)$$ where the co-dimension of $u(X)$ in $Y$ is denoted by $\operatorname{def}(u)$ and $\dim(\ker(u))$ is denoted by $\operatorname{nul}(u)$. An operator $u\in B(X,Y)$ is a Fredholm operator if $\ker(u)$ is finite-dimensional and $u(X)$ is finite-co-dimensional in $Y$. $$K(X,X):=K(X)=\{u : X \longrightarrow X\mid u\text{ is a compact operator}\} $$
|
I confess this was an exam question for me yesterday and (surprisingly) I was only able to prove the inclusion Blumenthal mentions to be harder, namely: $$\cap_{S\in \mathcal{K}(X)}\sigma(T+S)\subseteq\sigma(T)\setminus\{\lambda \in \mathbb{C}\:| \: \text{$T-\lambda$ is Fredholm and $\operatorname{ind}(T-\lambda)=0$} \}$$ Let $\lambda \in \cap_{S\in \mathcal{K}(X)}\sigma(T+S)$. If $T-\lambda$ were invertible: $$T+S-\lambda=(T-\lambda)(I+(T-\lambda)^{-1}S)$$ But this is a composition of invertible operators for compact $S$ with small norm, by the Neumann series. Hence, we conclude that: $$\cap_{S\in \mathcal{K}(X)}\sigma(T+S)\subseteq \sigma(T)$$ Suppose by way of contradiction that: $$\left(\cap_{S\in \mathcal{K}(X)}\sigma(T+S)\right)\cap\{\lambda \in \mathbb{C}\:| \: \text{$T-\lambda$ is Fredholm and $\operatorname{ind}(T-\lambda)=0$} \}\not=\emptyset$$ In this case, we have that: $$\ker(T-\lambda)\: \text{has basis given by} \:\{ x_1,...,x_n\}$$ $$ \operatorname{Im}(T-\lambda)\: \text{has complemented space with basis given by} \
|
|operator-theory|banach-algebras|
| 0
|
Estimate norm of convolution operator
|
I'm trying to find the operator norm for $T: L^2([0,1])\to L^2([0,1])$, defined as $Tf(x)=\int_{[0,1]}|\sin(x-y)|^{-\alpha}f(y)dy$, where $0<\alpha<1$. Using the bound $|\sin(x)|\geq |x|/2$, I am able to find that for any $x\in[0,1]$, $\int |\sin(x- y)|^{-\alpha} dy\leq \frac{2^\alpha}{1-\alpha}=C$, and then by Theorem 6.18 in Folland, it should follow that $$\lVert Tf\rVert_2 \leq C\lVert f\rVert_2$$ However, I'm currently stuck on finding or closely estimating the operator norm, so any help on this would be appreciated.
|
Too long for a comment, but not a full answer. This post suggests that the norm of a convolution operator $f\mapsto f*g$ on $\mathbb R$ is $\|g\|_{L^1}$ if $g\geq 0$ (in general, it is the $L^\infty$ norm of the Fourier transform of $g$). The main intuition behind it is that, by scaling symmetry, you can think of $g$ as being very close to a Dirac delta multiplied by $\|g\|_{L^1}$. Most likely, the same is not true here, due to the boundedness of the domain, which does not allow for arbitrary scaling of functions. One estimate I can think of on the operator norm of $T$ is the following. By the Riesz-Thorin theorem, one has the bound $$ \|T\|_{L^2\to L^2}\leq \|T\|_{L^1\to L^1}^{1/2}\|T\|_{L^\infty\to L^\infty}^{1/2}, $$ where $\|T\|_{L^p\to L^p}$ is the operator norm of the operator on $L^p(0,1)$. The norm $\|T\|_{L^1\to L^1}$ is $\|g\|_{L^1(-1/2,1/2)}$, where $g(x):=|\sin(x)|^{-\alpha}$. The same is true for the norm $\|T\|_{L^\infty\to L^\infty}$. So, by the above bound, one has $$
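One can probe these bounds numerically by discretising the kernel (my own illustration, with assumed quadrature choices: offset grids for $x$ and $y$ so the singular diagonal is never sampled). The spectral norm of the discretised operator is bounded by $\sqrt{\max_i \sum_j K_{ij}\cdot \max_j \sum_i K_{ij}}$, the matrix analogue of the interpolation/Schur estimates above:

```python
import numpy as np

alpha, m = 0.5, 800
h = 1.0 / m
xs = (np.arange(m) + 0.5) * h        # x-nodes at cell midpoints
ys = np.arange(m) * h                # y-nodes offset, so x != y everywhere
K = np.abs(np.sin(xs[:, None] - ys[None, :])) ** (-alpha) * h

op_norm = np.linalg.norm(K, 2)       # largest singular value
bound = np.sqrt(K.sum(axis=1).max() * K.sum(axis=0).max())
assert op_norm <= bound + 1e-9
```

The row-sum maximum here is the discrete version of $\sup_x \int_0^1 |\sin(x-y)|^{-\alpha}\,dy$, so the gap between `op_norm` and `bound` shows how much the uniform estimate overshoots the true operator norm.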
|
|real-analysis|functional-analysis|operator-theory|normed-spaces|lp-spaces|
| 0
|
Double negation in sequent calculus
|
Kind of related to this post . I wonder if it is possible to derive $\Phi\vdash\Delta$ from $\lnot\lnot\Phi\vdash\Delta$ using standard sequent calculus elimination rules. I am not sure where to start. Applying $\lnot$ R rule to $\lnot\lnot\Phi\vdash\Delta$ will result in $\vdash\Delta,\lnot\lnot\lnot\Phi$ , which makes the problem worse.
|
If $\neg\neg \Phi \vdash \Delta$ is derivable in the usual classical sequent calculus LK, then so is $\Phi \vdash \Delta$. We know this because $\Phi \vdash \neg\neg \Phi$ is derivable simply by invoking the $\neg L$ and $\neg R$ rules. With this we can just use $$\frac{\Phi \vdash \neg\neg\Phi \:\:\: \neg\neg\Phi \vdash \Delta}{\Phi \vdash \Delta} \text{cut}$$ However, there is no single schematic derivation that has the conclusion $\Phi \vdash \Delta$ at its root and proceeds upward through $\neg\neg\Phi \vdash \Delta$ without invoking the cut rule. We know this because cut-free proofs have to satisfy the subformula property, and in general $\neg\neg \Phi$ need not be a subformula of $\Phi, \Delta$. For example, $\bot \vdash \top$ clearly has no cut-free proof in which the sequent $\neg\neg \bot\vdash \top$ makes an appearance.
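For concreteness, the two-rule derivation of $\Phi \vdash \neg\neg\Phi$ alluded to above can be written out explicitly (standard LK notation; starting from the axiom $\Phi \vdash \Phi$):

```latex
% Axiom, then \neg L (negate \Phi and move it left), then \neg R:
\dfrac{\dfrac{\Phi \vdash \Phi}
             {\Phi,\, \neg\Phi \vdash}\;\neg L}
      {\Phi \vdash \neg\neg\Phi}\;\neg R
```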
|
|logic|proof-theory|sequent-calculus|
| 1
|
Linear Programming: Minimize deviation and evenly maximize decisive variable
|
I came across linear programming while trying to find a solution to a problem. I didn't use LP before, so pardon the naive question. I hope this forum can help. There are $n$ tasks $(x_1, x_2, \ldots, x_n)$ with current assigned values $(r_1, r_2, \ldots, r_n)$ and new target values $(t_1, t_2, \ldots, t_n)$. Each of the tasks has a certain known constraint on upper and lower bound values ($lb_i$, $ub_i$). There are also constraints across the task values. (See example for more details.) The objective is to minimize individual task deviation. I tried minimizing the sum of absolute deviations ($min \sum_i \left|x_i - t_i\right|$), but with that some tasks end up much closer to their target value than others. Edit 1: Here is a simple example. This is the input:

| task | current (r) | target (t) | LB (lb) | UB (ub) |
|---|---|---|---|---|
| X1 | 500 | 1500 | 500 | 1500 |
| X2 | 500 | 1500 | 500 | 1500 |
| X3 | 5000 | 4000 | 4000 | 8000 |
| X4 | 5000 | 4000 | 4000 | 8000 |

Update: e.g. constraint across task total values: $x_1 + x_2 \le 2000$. My initial approach: Here is the primary objective I initially created: \begin
|
I was able to achieve the desired outcome using two LPs. Pending tests for all corner cases. Also, I'm not sure if this is an optimal solution, but it is better than before. First LP: minimize the absolute sum, same objective as posted in the question. \begin{align} \min \sum_i \left|x_i - t_i\right| \end{align} Subject to: \begin{align} \text{lb}_i \le x_i \le \text{ub}_i \\ \sum_i x_i - \sum_i t_i = 0 \\ x_1 + x_2 \le 2000 \end{align} Calculate two standard deviations $sd_i$ from the first LP results $x_i$: one for the tasks whose current value is less than the target, a second for the tasks whose current value is greater than the target. Second LP: add a new constraint to keep the absolute deviation within the calculated sd, to reduce variability. \begin{align} \min \sum_i \left|x_i - t_i\right| \end{align} Subject to: \begin{align} \text{lb}_i \le x_i \le \text{ub}_i \\ \sum_i x_i - \sum_i t_i = 0 \\ x_1 + x_2 \le 2000 \\ \left|x_i - t_i\right| \le sd_i \\ \end{align} This is the outcome I got: TaskX1 = 1000.0 TaskX2 =
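A sketch of the first LP on the example data (my own illustration, assuming SciPy is available; the absolute values $|x_i - t_i|$ are linearised with auxiliary variables $d_i \ge \pm(x_i - t_i)$, decision vector $[x_1..x_4, d_1..d_4]$):

```python
import numpy as np
from scipy.optimize import linprog

t = np.array([1500.0, 1500, 4000, 4000])   # targets from the example table
lb = [500, 500, 4000, 4000]
ub = [1500, 1500, 8000, 8000]
n = len(t)

c = np.r_[np.zeros(n), np.ones(n)]                 # minimise sum_i d_i
A_ub = np.vstack([
    np.hstack([np.eye(n), -np.eye(n)]),            #  x_i - d_i <=  t_i
    np.hstack([-np.eye(n), -np.eye(n)]),           # -x_i - d_i <= -t_i
    np.r_[1, 1, 0, 0, np.zeros(n)],                #  x_1 + x_2 <= 2000
])
b_ub = np.r_[t, -t, 2000]
A_eq = np.r_[np.ones(n), np.zeros(n)][None, :]     # sum_i x_i = sum_i t_i
b_eq = [t.sum()]
bounds = list(zip(lb, ub)) + [(0, None)] * n       # d_i >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
assert res.success
assert abs(res.fun - 2000) < 1e-6   # minimal total deviation for this data
```

For this data the minimum total deviation is 2000 (e.g. $x_1=x_2=1000$, $x_3=x_4=4500$), though the LP may return any optimal split of $x_1+x_2=2000$; the second-stage LP in the answer is what breaks such ties.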
|
|linear-programming|
| 0
|
How to solve $\int_0 ^1 \left( \frac{x^2}{1+x^2} \right)\frac{1-x\tan(x)+ \tan(x)-x}{1 -x\tan(x)-\tan(x)-x}dx$?
|
I saw this interesting problem: $$\int_0 ^1 \left( \frac{x^2}{1+x^2} \right)\frac{1-x\tan(x)+ \tan(x)-x}{1 -x\tan(x)-\tan(x)-x}dx$$ I tried all the tricks that I know and none of them were useful at all. After some thought I tried to graph the function to see if there is some way I can use King's rule after some substitution to simplify the integral, and the graph is very strange. So one of my friends tried to use Wolfram Alpha and it gave two different values! This made this integral more strange, and I don't know how it is possible that Wolfram found two different values. I think it might be related to Riemann's rearrangement theorem.
|
The integrand has the form $$\frac{x^2}{1+x^2}\ \frac {(1-x)(1+\tan x)}{(1-x)-(1+x)\tan x}$$ It has a simple pole at $$x=\frac{1-\tan x}{1+\tan x} = \frac{\cos x-\sin x}{\cos x+\sin x}$$ with $x=0.4\dots$ inside the interval of integration. Setting $\tan x = \frac{1-x-\epsilon}{1+x + \epsilon}$, the expression is not odd with respect to the singular point, so the definite integral makes no sense, not even as a principal value, because the weights of integration of the positive and negative parts on both sides of the pole are different. Mathematica's NIntegrate yields random values for different values of WorkingPrecision and complains about lack of convergence. NIntegrate[ x^2/(1 + x^2) ((1 - x) (1 - Tan[x]))/ ((1 - x) - (1 + x) Tan[x]), {x, 0,1}, WorkingPrecision -> #] & /@ {16, 32, 64} // Quiet {0.05450521388384730, -0.041473061817256730764505273808547, 0.06152386453244370124428309681908080225858385539795037284604940271}
|
|calculus|integration|convergence-divergence|definite-integrals|
| 0
|
Solution verification: Determine the smallest integer $k$ for which the inequality $x^4+4x^2+k>6x$ is true for every real number $x$.
|
The question: Determine the smallest integer $k$ for which the inequality $x^4+4x^2+k>6x$ is true for every real number $x$. My idea: $x^4+4x^2-6x+4>4-k$, and $x^4+4x^2-6x+4=(x^2-2x+1)(x^2+2x+4)+3x^2=(x-1)^2((x+1)^2+3)+3x^2 \geq 3$. This means that $4-k<3 \implies -k<-1 \implies k>1$, so the smallest value it can take is 2. I'm not sure the inequality $(x-1)^2((x+1)^2+3)+3x^2 \geq 3$ is right. I don't know how to show it, and I concluded it by brute force. I hope one of you can help me and tell me the correct way to solve it! Thank you.
|
Another way. $$f(x)=-x^4-4x^2+6x$$ $$f'(x)=-4x^3-8x+6$$ The maximum of $f(x)$ occurs at $(x_m,y_m)$ where $f'(x_m)=0$. By the intermediate value theorem, $\tfrac35 < x_m < \tfrac{7}{10}$. Hence $$2 < f(\tfrac35)=\tfrac{1269}{625}\le f(x_m) < 3.$$ Conclusion: The smallest such integer $k$ is $k=3$.
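A grid search confirms the key fact used here (my own check): the maximum of $f(x) = -x^4-4x^2+6x$ lies strictly between $2$ and $3$, so $k=3$ works while $k=2$ fails.

```python
import numpy as np

# Outside [-2, 2] the quartic term dominates and f is very negative,
# so the grid below captures the global maximum.
xs = np.linspace(-2.0, 2.0, 2_000_001)
fmax = np.max(-xs ** 4 - 4 * xs ** 2 + 6 * xs)
assert 2 < fmax < 3
assert fmax > 1269 / 625          # consistent with f(3/5) = 1269/625 above
```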
|
|solution-verification|
| 0
|
Prove that complex numbers satisfying $|z+a\bar{z}|=r$ form an ellipse
|
Prove that the complex numbers satisfying $|z+a\bar{z}|=r$ form an ellipse. Here $0<a<1$, $r>0$. I don't know how to address this problem in an easy way. Polar coordinates don't make the problem easier. Any help would be useful.
|
Let $$z=x+iy,\quad \bar{z}=x-iy$$ then $$z+a\bar{z}=(x+iy)+a(x-iy)=(1+a)x+i(1-a)y$$ and remembering that $|z|=\sqrt{x^2+y^2}$ , we get that $$|z+a\bar{z}|=\sqrt{(1+a)^2x^2+(1-a)^2y^2}$$ so we have that $|z+a\bar{z}|=r$ can be written as $$(1+a)^2x^2+(1-a)^2y^2=r^2$$ which is precisely the equation of an ellipse $$\frac{x^2}{A^2}+\frac{y^2}{B^2}=1$$ with $$A^2=\frac{r^2}{(1+a)^2},\quad B^2=\frac{r^2}{(1-a)^2}$$
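As a quick numerical confirmation (my own illustration), one can parametrise the ellipse with the semi-axes found above and check that every point satisfies the original equation:

```python
import numpy as np

a, r = 0.5, 2.0                          # arbitrary values with 0 < a < 1, r > 0
theta = np.linspace(0, 2 * np.pi, 1000)
x = r * np.cos(theta) / (1 + a)          # semi-axis A = r/(1+a)
y = r * np.sin(theta) / (1 - a)          # semi-axis B = r/(1-a)
z = x + 1j * y
assert np.allclose(np.abs(z + a * np.conj(z)), r)
```

Indeed $z + a\bar z = (1+a)x + i(1-a)y = r\cos\theta + i\,r\sin\theta$, which has modulus $r$ identically.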
|
|complex-analysis|complex-numbers|
| 0
|
Determine the number of group homomorphisms $f:\mathbb Z_{63}\to\mathbb Z_{147}$ with $|image(f)|=7$.
|
The solution is apparently 6. Here is my current attempt, although I am not sure whether this is the right approach. Let $f(a):=xa$. We require $f(63)=63x\equiv 0 \mod 147$; that is, $63x=147k\iff 7\mid x$. So we have 21 distinct homomorphisms. If we consider a few, say, $f_7(a)=7a\in \mathbb Z_{147}\implies |img(f_7)|=\lfloor \frac{147}{7}\rfloor =21$, $f_{14}(a)=14a\in \mathbb Z_{147}\implies |img(f_{14})|=\lfloor \frac{147}{14}\rfloor =10$, $\vdots$ $f_{7t}(a)=7ta \implies |img(f_{7t})|=\lfloor \frac{21}{t}\rfloor$ for $t\in\{0,...,20\}$ (verify inductively). Thus, setting $\lfloor \frac{21}{t}\rfloor=7\implies t=3$ as the unique $t$ within our specified range. What other 5 homomorphisms with images of size 7 am I missing? Ideally, I would like a general method for finding any image size
|
There are a number of homomorphisms from $\Bbb Z_{63}$ to $\Bbb Z_{147}$. As usual, since $\Bbb Z_{63}$ is cyclic, each is determined by $h(1)$. The only requirement is that the order of $h(1)$ divides $63$. A cyclic group has one cyclic subgroup of every order dividing its order. There are $\varphi(7)=6$ generators for the subgroup of order $7$ in $\Bbb Z_{147}$. Defining $h(1)$ to be any of these six gives different homomorphisms with image order $7$. For an image size $m$, there are $\varphi(m)$ generators to choose from, where $\varphi$ is Euler's phi function. Any of the common divisors of $147$ and $63$ is a possibility. So, $m=1,3,7,$ or $21$. The number of homomorphisms in each case is $1,2,6$ or $12$.
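A brute-force tally agrees with this count (my own check): every homomorphism $\Bbb Z_{63}\to\Bbb Z_{147}$ is $h(x)=cx \bmod 147$ for some $c$ with $63c\equiv 0 \pmod{147}$, and we can group them by image size.

```python
from collections import Counter

counts = Counter()
for c in range(147):
    if (63 * c) % 147 == 0:                      # c defines a homomorphism
        image = {(c * k) % 147 for k in range(63)}
        counts[len(image)] += 1

# One subgroup per divisor m of gcd(63,147) = 21, with phi(m) generators each:
assert counts == {1: 1, 3: 2, 7: 6, 21: 12}
```

In particular there are exactly 6 homomorphisms with image of size 7, and 21 homomorphisms in total.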
|
|abstract-algebra|group-theory|cyclic-groups|group-homomorphism|
| 0
|
Abbott: Which type of proof is this?
|
Let $x_1=2$ and define $$ x_{n+1}:=\frac{1}{2}\left(x_n+\frac{2}{x_n}\right).$$ Show that $x_n^2\geq 2$. This was my working: Suppose inductively that $x_n^2\geq 2$ for some $n\in \mathbb N$. Then $$x_{n+1}^2=\left(\frac{x_n+\frac{2}{x_n}}{2}\right)^2\underbrace{\geq}_{\text{AM-GM}} x_n\left(\frac{2}{x_n}\right)=2. $$ I wonder how we can prove such a statement without using the inductive hypothesis (by induction)? Actually I saw one post on MSE (Using induction to show that if $x_n^2 \geq 2$, then $x_{n+1}^2 \geq 2$) which helped me to solve this problem by a nice technique. However, I saw the solution in the author's solution manual, where he writes MY QUESTION: Which type of proof is this (the one mentioned above and mine)? If this is induction, why is it not using the inductive hypothesis?
|
Being rigorous, your proof is not by induction. When we are doing proofs by induction, we need to do three steps. In order: Prove the base case, i.e., prove that the assertion is true for $n=0$ (or $n=1$ in some cases). Suppose that the assertion is true for $n$. Using the fact that it is true for $n$, prove the assertion for $n+1$. If we are done with the previous three steps, then, by the induction principle, the assertion is true for all $n\in\mathbb{N}$. Now, the proof in the solution manual is a direct proof, i.e., the author chooses a fixed, but arbitrary, $n\in\mathbb{N}$, and proves directly, using only the definition, that the required property holds for $n$. As the only property of $n$ that was used was the fact that it is a natural number, this is an argument that works for all natural numbers. So, you don't require induction. Is it clear now?
|
|real-analysis|calculus|proof-writing|
| 1
|
Volume of Hamming ball w.r.t power-law distribution on hypercube
|
Let $n$ be a large positive integer and let $X$ be a random element of the hypercube $\{0,1\}^n$ such that $\mathbb P(|X|= \ell) \propto (\ell + 1)^{-\beta}$ for all $\ell \in \{0,1,\ldots,n\}$. Here $\beta \ge 0$ is a constant. Fix $x \in \{0,1\}^n$, $r \in (0,1)$, and let $B_n(x;rn)$ be the Hamming ball of radius $rn$ around $x$. Question. What is a good asymptotic estimate for $\mathbb P(X \in B_n(x;rn))$? Edit: As pointed out by user @Mike Earnest, the distribution of $X$ is underspecified. What I really had in mind for the distribution of $X$ was the following: $$ \mathbb P(X=x) \propto (|x| + 1)^{-\beta},\,\forall x. $$ With this modification, note that $\beta=0$ corresponds to the uniform distribution on $X$, and in this case it is standard knowledge (coding theory, etc.) that $$ \mathbb P(X \in B_n(x;rn)) \asymp 2^{-(1-H_2(r))n}, $$ where $H_2$ is the binary entropy.
|
Let's first count the number of $z \in \{0,1\}^n$ such that $d_H(z,x) = t$ and $|z| = \ell$ . For such a $z$ to even exist, a necessary and sufficient condition is that $$ \ell = t + w,\quad 0 \le t \le n-w. $$ Now, WLOG let $x=1^w0^{n-w}$ , as a string of $0$ 's and $1$ 's. Here, $w = |x|$ . We can always write $z=uv$ , where $u \in \{0,1\}^w$ , $v \in \{0,1\}^{n-w}$ . Let $w_1 = d(u,1^w) = w-|u|$ and $w_0 = d(v,0^{n-w}) = |v|$ . Then, the only constraint on $z$ is that $$ t = w_0+w_1 = w-|u| + |v|,\quad \ell = |u| + |v|. $$ Solving this gives $|v| = (\ell + t-w)/2 = t$ and $|u| = \ell - t = w$ . In particular, this means that $u=1^w$ and so $z$ is of the form $z=1^w v^{n-w}$ , where $|v| = t$ . There are ${n-w\choose t}$ choices for $v$ , and therefore, for $z$ . We conclude that \begin{equation} \mathrm{card}(\{z \in \{0,1\}^n \mid d_H(z,x) = t,\, |z| = \ell\}) = \begin{cases} {n-w\choose t},&\mbox{ if }0 \le t \le n-w,\, \ell = t + w,\\ 0,&\mbox{ otherwise } \end{cases} \end{equati
|
|probability|combinatorics|information-theory|random-walk|coding-theory|
| 0
|
Show that $\lim_{n\to \infty} I_{F_n}(x)=0$ a.e. for $x\in \mathbb{R}$ if and only if $\mu(\lim \sup F_n)=0$
|
Let $(\mathbb{R}, \mathcal{F}, \mu)$ be a measure space. Let $F_n$ be measurable subsets of $\mathbb{R}$. Show that $\lim_{n\to \infty} I_{F_n}(x)=0$ a.e. for $x\in \mathbb{R}$ if and only if $\mu(\lim \sup F_n)=0$. My proof is as follows. (1) Assume that $\lim_{n\to \infty}I_{F_n}=0$ a.e. Note that $$ \{x:\lim I_{F_n}= 0\}^c=\cup_{k\ge 1}\cap_{N\ge 1}\cup_{n=N}^\infty \{x: I_{F_n}\ge \frac{1}{k}\}=\cap_N\cup_{n=N} F_n $$ Since $\mu(\{x:\lim I_{F_n}= 0\}^c)=0$, then $\mu(\cap_N\cup_{n=N} F_n)=0$. (2) Assume that $\mu(\cap_{N\ge 1}\cup_{n=N} F_n)=0$. Then $\mu(\{x:\lim I_{F_n}= 0\}^c)=0$. So we prove our statement. Does it make sense?
|
There are two issues with your proof. Firstly, prior to the $\limsup$ and $\liminf$ of a real sequence existing and being equal, the term "$\lim$" does not make sense. So you should say $\{x:\limsup I_{F_{n}}\neq 0\}$. Also note that since this is a sequence of $0$'s and $1$'s, the limit exists and is $0$ if and only if the $\limsup$ of this sequence is $0$, which forces the $\liminf$ to also be $0$. So it's better to show necessity and sufficiency separately rather than in one step, in order to better understand what's really happening. First assume $\mu(\limsup F_{n})=0$. Then take $x\notin\limsup F_{n}$. It means that $x\notin F_{n}$ for all but finitely many $n$. Which means that $I_{F_{n}}(x)=0$ for all but finitely many $n$, and hence $\lim I_{F_{n}}(x)$ exists and is $0$. Conversely, suppose $\lim I_{F_{n}}=0$ a.e. This means $\limsup I_{F_{n}}=0$ a.e., so let $E$ be such that $\mu(E)=0$ and $\limsup I_{F_{n}}(x)=0$ for all $x\in E^{c}$. Now suppose $y\in \{\limsup F_{n}\}$
|
|real-analysis|
| 1
|
How many truth functions are there?
|
The textbook says that given $n$ atomic wff, there are $2^{2^n}$ truth functions. Is it correct to say that since the set of all the atomic wff is denumerable, there are $2^{2^{\aleph_0}}$ truth functions (that is, truth functions are uncountable)?
|
The set of all truth assignments can be identified with the set of all infinite sequences of $1$ 's and $0$ 's, since a truth assignment gives a truth value to a countable set of propositional variables. This in turn can be identified with the set of all subsets of $\mathbf{N}$ , since we can think of a sequence of $1$ 's and $0$ 's as a characteristic function of natural numbers, i.e. a number $n$ is in the subset iff the value of the sequence at the $n$ -th place is $1$ . The set of subsets of natural numbers has cardinality $2^{\aleph_0}$ .
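The finite count $2^{2^n}$ from the textbook can be checked directly by enumeration for tiny $n$ (the function name is illustrative):

```python
from itertools import product

def num_truth_functions(n):
    """Count truth functions of n atomic propositions by listing them:
    each function assigns True/False to each of the 2**n truth assignments."""
    assignments = list(product([False, True], repeat=n))        # 2**n rows
    tables = list(product([False, True], repeat=len(assignments)))
    return len(tables)                                          # 2**(2**n)
```

This is only feasible for very small $n$; the cardinality argument above concerns the case of countably many propositional variables.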
|
|logic|
| 1
|
Compare the speed at which $(\bar{X})^2$ converges to zero to the speed at which $\sqrt{n}$ diverges to infinity
|
Setup Assume the following formation $$ \bar{X} \overset{P}{\to} 0 \text{ as } n\to\infty. $$ Lemma We want to show $$ \sqrt{n}\cdot\left(\bar{X}\right)^2 \overset{P}{\to} 0. $$ This is indeterminate form ( $\infty \times 0$ ). It seems to me that the sample mean would win because it is sensibly multiplied twice by something that converges to zero, but how can this be shown mathematically? Background Let $X_1, \ldots, X_n$ be i.i.d. $N(\xi, \sigma^2)$ and consider the problem of simultaneously estimating or testing $\xi$ and $\sigma^2$ . Then, we are interested in the joint distribution of $$ Y_1 := \frac{\sqrt{n}(\bar{X}-\xi)}{\sigma}\text{ and }Y_2 := \sqrt{n} \left( \frac{\sum_{i=1}^n (X_i-\bar{X})^2}{n\sigma^2}-1\right). $$ Since this distribution is independent of $\xi$ and $\sigma$ , suppose that $\xi = 0$ . By bivariate CLT, $$ \frac{\sqrt{n}\bar{X}}{\sigma}\text{ and }\sqrt{n} \left( \frac{\sum_{i=1}^n X_i^2}{n\sigma^2}-1\right) $$ are asymptotically independently distributed a
|
Here I assume the following: $(X_1, \ldots, X_n)$ is a random iid sample with $\mathbb{E}(X_i) = 0, \text{Var}(X_i) = \sigma^2$ . For $\epsilon > 0$ , by Markov's inequality, we have $$ \mathbb{P}(\sqrt{n}(\bar{X})^2 > \epsilon) \le \dfrac{\sqrt{n}}{\epsilon}\mathbb{E}[(\bar{X})^2] $$ But, $$ \mathbb{E}[(\bar{X})^2] = \text{Var}(\bar{X}) = \dfrac{\sigma^2}{n} $$ Thus, $$ \mathbb{P}(\sqrt{n}(\bar{X})^2 > \epsilon) \le \dfrac{\sigma^2}{\sqrt{n}\epsilon} \rightarrow 0, \text{ as } n \rightarrow \infty $$
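A quick Monte Carlo illustration of this bound, assuming standard normal data (all names here are hypothetical):

```python
import random

def prob_exceeds(n, eps, trials=5000, seed=0):
    """Estimate P(sqrt(n) * Xbar^2 > eps) for iid N(0,1) samples of size n."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xbar = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
        if n ** 0.5 * xbar ** 2 > eps:
            hits += 1
    return hits / trials
```

The estimated probability should shrink with $n$, consistent with the $\sigma^2/(\sqrt{n}\,\epsilon)$ bound.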
|
|real-analysis|probability|probability-theory|statistics|statistical-inference|
| 1
|
What is $\lim_{x\to 0} (1+x)e^{-\left(\frac{1}{|x|} + \frac{1}{x}\right)}$
|
$\lim_{x\to 0} (1+x)e^{-\left(\frac{1}{|x|} + \frac{1}{x}\right)}$ Obviously L'hopital is inapplicable here. I guess it can be done by saying that $e^{-\infty}$ is almost zero so the limit is zero but what's the formal way of doing this?
|
Note that $$ \frac{1}{|x|} + \frac{1}{x} = \begin{cases} \frac{2}{x} & \mathrm{if\;} x > 0 \\ 0 & \mathrm{if\;} x < 0. \end{cases} $$ In consequence, one has $$ \lim_{x\to0^+} (1+x)e^{-\left(\frac{1}{|x|}+\frac{1}{x}\right)} = \lim_{x\to0^+} (1+x)e^{-2/x} = 0 $$ but $$ \lim_{x\to0^-} (1+x)e^{-\left(\frac{1}{|x|}+\frac{1}{x}\right)} = \lim_{x\to0^-} (1+x) = 1, $$ so that the limit doesn't exist in the end.
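Numerically the two one-sided behaviours are easy to see (a throwaway check, not part of the proof):

```python
import math

def f(x):
    # the expression from the question; valid for any nonzero real x
    return (1 + x) * math.exp(-(1 / abs(x) + 1 / x))
```

Evaluating `f` at small positive $x$ underflows to $0$, while at small negative $x$ it is close to $1$, matching the two one-sided limits.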
|
|calculus|limits|limits-without-lhopital|
| 0
|
Find the Domain of this irrational expression
|
While finding the Domain of this expression $$\sqrt{(2x-8)(x-1)}$$ I got this: $(2x-8)(x-1)≥0$ $2x-8\geq0 \vee x-1\geq0$ $x\geq4 \vee x\geq 1$ So the Domain is: $x\in [1, +\infty)$ But the real domain is $x\in (-\infty,1] \cup [4,+\infty)$ So it means that $x\leq1$ Can someone please explain this?
|
$(2x-8)(x-1)≥0$ $2x-8\geq0 \vee x-1\geq0$ An error occurs here. In writing $\lor$ , you are saying "or," but that is not the right condition here. In general, $ab \ge 0$ if and only if both factors are positive, both are negative, or at least one is zero. Hence, you would instead write $$\Big( 2x - 8 \ge 0 \text{ and } x-1 \ge 0 \Big) \text{ or } \Big( 2x - 8 \le 0 \text{ and } x-1 \le 0 \Big)$$ For instance, by writing "or," in your original expression, if $2x-8 > 0$ but $x-1 < 0$ , then you get a negative under the radical -- not good! In the first case, you can see that $x \ge 4$ ; in the second, $x \le 1$ . Why? In the first, you derive $x \ge 4$ and $x \ge 1$ . Which values of $x$ satisfy both? Well, all $x \ge 4$ . Likewise, for the other, we derive $x \le 4$ and $x \le 1$ . The values that satisfy both are $x \le 1$ . Hence, we conclude that the domain of the function is all $x$ such that $$ x \ge 4 \text{ or } x \le 1 $$ I suspect your original error comes from using the "property" $$\sqrt{ab} = \sqrt a \sqrt b$$
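A quick way to convince yourself which points belong to the domain is to test the radicand directly (an illustrative snippet, not part of the argument):

```python
def in_domain(x):
    # the radicand of sqrt((2x - 8)(x - 1)) must be nonnegative
    return (2 * x - 8) * (x - 1) >= 0
```

Sampling a few points confirms that $x \le 1$ and $x \ge 4$ work while $1 < x < 4$ fails.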
|
|algebra-precalculus|functions|radicals|
| 1
|
Find the Domain of this irrational expression
|
While finding the Domain of this expression $$\sqrt{(2x-8)(x-1)}$$ I got this: $(2x-8)(x-1)≥0$ $2x-8\geq0 \vee x-1\geq0$ $x\geq4 \vee x\geq 1$ So the Domain is: $x\in [1, +\infty)$ But the real domain is $x\in (-\infty,1] \cup [4,+\infty)$ So it means that $x\leq1$ Can someone please explain this?
|
You can draw a sign table to get $D_f$ , the domain of the function.
|
|algebra-precalculus|functions|radicals|
| 0
|
Recurrence for number of $L$-bit strings with no $0^{w}$
|
I have been trying and struggling to come up with a generalized recurrence relation for the number of $L$ -bit strings with no runs of $w$ consecutive zeros. I have a relation that gives the right answers for $L = 3$ but it gives the wrong answers for $L = 4$ . I'll use 3-bit strings for a concise example. The 3-bit strings are: 000 001 010 100 101 110 011 111 If $w = 1$ , i.e. valid bitstrings have no $0^{1}$ , then obviously the only valid bitstring is the one with no zeros at all: 000 001 010 100 101 110 011 111 ^^^ 1 valid string If $w = 2$ , i.e. valid bitstrings have no $0^{2}$ , we can see that 5 bitstrings meet the restriction: 000 001 010 100 101 110 011 111 ^^^ ^^^ ^^^ ^^^ ^^^ 5 valid strings If $w = 3$ : 000 001 010 100 101 110 011 111 ^^^ ^^^ ^^^ ^^^ ^^^ ^^^ ^^^ 7 valid strings And if $w = 4$ , obviously all bitstrings are valid, because a run of $0^{4}$ isn't possible with 3-bit strings: 000 001 010 100 101 110 011 111 ^^^ ^^^ ^^^ ^^^ ^^^ ^^^ ^^^ ^^^ 8 valid strings After
|
Fix a positive integer $w$ . For nonnegative integers $n,k$ with $k\le w$ , let $f(n,k)$ be the number of $n$ -bit strings with no run of $w$ consecutive $0$ bits, given an immediately preceding run of exactly $k$ consecutive $0$ bits. Then we have the recursion $$ f(n,k) = \begin{cases} 0&\text{if}\;\,k=w\\[4pt] 2^n&\text{if}\;\,k+n < w\\[4pt] f(n-1,0)+f(n-1,k+1)&\text{otherwise} \end{cases} $$ (the last case splits on whether the next bit is a $1$ , which resets the run, or a $0$ , which extends it), and then, expressed in terms of $f$ , the number of $L$ -bit strings with no run of $w$ consecutive $0$ bits is just $f(L,0)$ .
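The recursion is easy to memoize. The sketch below fills in the truncated "otherwise" case under the natural reading (the next bit is either a $1$, resetting the run, or a $0$, extending it), so treat it as an interpretation rather than the answer's verbatim formula:

```python
from functools import lru_cache

def count_no_zero_run(L, w):
    """Number of L-bit strings with no run of w consecutive zeros,
    via f(n, k) = number of valid length-n suffixes after a run of k zeros."""
    @lru_cache(maxsize=None)
    def f(n, k):
        if k == w:            # a forbidden run has already formed
            return 0
        if k + n < w:         # not enough bits left to ever reach a run of w
            return 2 ** n
        # next bit 1 resets the run; next bit 0 extends it by one
        return f(n - 1, 0) + f(n - 1, k + 1)
    return f(L, 0)
```

For $L=3$ this reproduces the counts $1, 5, 7, 8$ for $w=1,2,3,4$ from the question.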
|
|combinatorics|recurrence-relations|bit-strings|
| 1
|
Is there an idempotent e in a ring $R$ such that the functor $T_e$, induced by $e$, is not full?
|
Let $R$ be a ring and let $e$ be a non-zero idempotent in $R$ . For each $R$ -module $M$ , define $T_e: M \rightarrow eM$ , where $eM$ is the left $eRe$ -module. For each pair of $R$ -modules $M, N$ and each left $R$ -homomorphism $f: M \rightarrow N$ , let $T_e(f): f \mapsto f|_{eM}$ . Then $T_e$ defines a covariant additive functor from $R-Mod$ to $eRe-Mod$ . Is there an example of ring $R$ and non-zero idempotent $e$ such that the above functor $T_e$ is not full?
|
Let $R=\begin{pmatrix}k&k\\0&k\end{pmatrix}$ , the ring of upper triangular $2\times2$ matrices over a field $k$ , let $e=\begin{pmatrix}1&0\\0&0\end{pmatrix}$ , so that $eRe\cong k$ , and let $M=R$ , so that $eM=\begin{pmatrix}k&k\\0&0\end{pmatrix}$ . Then $\operatorname{Hom}_R(M,M)$ is $3$ -dimensional, but $\operatorname{Hom}_{eRe}(eM,eM)$ is $4$ -dimensional, so $T_e$ can't be full.
|
|modules|functors|idempotents|
| 0
|
Find a vertex of a tetragon where three vertices are given
|
Suppose that $V,W,U$ are three 3D points and $L,K$ are given positive values. Let $dist(A,B)$ represent the euclidean distance between $A,B$ . Moreover, assume that $M$ is a plane that passes through $V,W,U$ . What I need is a point $P$ where: $$dist(V,P)=L \\ dist(U,P)=K \\ P\in M $$ Obviously, we must find the equations of 2 circles, $C_1(V,L), C_2(U,K)$ where $C_1\subset M, C_2\subset M$ and then find their intersections. These possible intersections would be our points. Of course $L,K$ are given in such a way that the intersections exist.
|
Hint. Calling $\hat{k}=\frac{(V-W)\times(W-U)}{\|(V-W)\times(W-U)\|}$ we can determine $\hat p,\hat q$ such that $$ \cases{ \hat p\cdot \hat k = 0\\ \|\hat p\| = 1\\ \hat q = \hat k\times \hat p } $$ after that, calling $$ \cases{ C_1=V+\left(\hat p\cos t_1+\hat q \sin t_1\right)L\\ C_2=U+\left(\hat p\cos t_2+\hat q \sin t_2\right)K\\ } $$ we will look for solutions to $$ V+\left(\hat p\cos t_1+\hat q \sin t_1\right)L = U+\left(\hat p\cos t_2+\hat q \sin t_2\right)K $$ so multiplying successively by $\hat p,\hat q$ we have $$ \cases{ V\cdot\hat p+\cos t_1 L = U\cdot \hat p + \cos t_2 K\\ V\cdot\hat q+\sin t_1 L = U\cdot \hat q + \sin t_2 K} $$ Two equations to solve in $t_1, t_2$ NOTE To solve the system $$ \cases{ c_1+L\cos t_1 = c_2+K\cos t_2\\ c_3+L\sin t_1 = c_4+K\sin t_2 } $$ by using $\sin^2t_1+\cos^2t_1 = 1$ we obtain an expression as $$ b_1 \sin t_2 + b_2\cos t_2 + b_0=0 $$ or equivalently $$ \frac{b_1}{\sqrt{b_1^2+b_2^2}}\sin t_2+\frac{b_2}{\sqrt{b_1^2+b_2^2}}\cos t_2+\frac{b_
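As a numerical cross-check of this construction, one can also reduce to a plain two-circle intersection in the plane through $V,W,U$. This sketch (our own helper, assuming $V,W,U$ are not collinear) builds the $\hat p,\hat q$ frame and maps the 2-D solutions back to 3-D:

```python
import math

def plane_circle_intersections(V, W, U, L, K):
    """Points P in the plane through V, W, U with |P - V| = L and |P - U| = K."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def add(a, b): return [a[i] + b[i] for i in range(3)]
    def scale(a, s): return [x * s for x in a]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
    def unit(a):
        nrm = math.sqrt(dot(a, a))
        return [x / nrm for x in a]

    k = unit(cross(sub(V, W), sub(W, U)))   # plane normal (k-hat)
    p = unit(sub(W, V))                     # first in-plane axis (p-hat)
    q = cross(k, p)                         # second in-plane axis (q-hat)
    d = sub(U, V)
    ux, uy = dot(d, p), dot(d, q)           # 2-D coordinates of U relative to V
    dist = math.hypot(ux, uy)
    # two-circle intersection: circle at origin radius L, at (ux, uy) radius K
    a = (L * L - K * K + dist * dist) / (2 * dist)
    h2 = L * L - a * a
    if h2 < 0:
        return []                           # the circles do not intersect
    h = math.sqrt(h2)
    ex, ey = ux / dist, uy / dist
    sols = [(a * ex - h * ey, a * ey + h * ex),
            (a * ex + h * ey, a * ey - h * ex)]
    return [add(V, add(scale(p, x), scale(q, y))) for x, y in sols]
```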
|
|linear-algebra|geometry|analytic-geometry|vector-analysis|
| 0
|
Defining a Plane with Two Parallel Lines
|
I'm learning about planes in linear algebra, and my professor said that to define a plane two lines must intersect. However, I was thinking: what if the two lines are parallel? In that case, they would never intersect. Can't two parallel lines still define a plane? You can think of the plane as a piece of paper, and if I draw 2 parallel lines on it I have a plane. How can we define a plane with two parallel lines mathematically, or must I have a point of intersection?
|
Let the first line be $a_0 + tv$ and let the second line be $a_1 + tv$ , where $t$ is a parameter. Then your plane contains the third line $a_0+s(a_1-a_0)$ , where $s$ is a parameter. Now you have found two non-parallel lines (the first line and the third line) and they intersect at $a_0$ . You can proceed to work with them as usual.
|
|linear-algebra|geometry|
| 1
|
Logit Gradient/Hessian derivations
|
I'm trying to follow the algebra leading from the gradient function to the Hessian in Logistic Regression, but I can't quite understand where I have gone wrong. I have the gradient function as: $$ \sum_i^m \left[x_i\cdot \frac {\exp\{-\theta^T x_i\}} {1+\exp\{-\theta^T x_i\}} - (y_i-1) x_i \right] $$ After deriving the gradient, I'm trying to rearrange terms and simplify the expression... First, we can note that the logistic regression model can be expressed as $$ p(y=1|x; \theta) = \frac{1}{1+exp(-\theta^Tx_i)} $$ Similarly, we can also show that $$ p(y=0|x; \theta) = 1-\frac{1}{1+exp(-\theta^Tx_i)} = \frac{exp(-\theta^Tx_i)}{1+exp(-\theta^Tx_i)} $$ These expressions rely on the definition of the logistic/sigmoid function $\sigma(\theta^Tx_i) = \frac {1} {1+\exp(-\theta^Tx_i)}$ and let us state more concisely $\frac {exp(\theta^Tx_i)} {1+\exp(\theta^Tx_i)} = 1-\sigma(\theta^Tx_i)$ So the problem we are trying to solve goes from: $$ \sum_i^m \left[x_i\cdot \frac {\exp\{-\theta^T x_i\}}
|
I assume this is a binary logistic regression (a good reference on this is C. M. Bishop, " Pattern Recognition and Machine Learning ", Chapter 4, pp.205-206). We are given some feature vectors which we denote as $x_{ij}$ ( $i$ indexes vectors, $j$ indexes features). We are trying to calibrate $\theta_j$ ; the probability of feature vector $x_{ij}$ being part of class 1 is, $$ p_i = \frac{\exp\left(\sum_j \theta_j x_{ij}\right)} {1+ \exp\left(\sum_j \theta_j x_{ij}\right)} $$ If we use the cross-entropy error function, \begin{equation} E = - \sum_i \left( t_i \ln p_i + (1- t_i) \ln (1-p_i) \right) \label{CE} \tag{CE} \end{equation} where $t_i \in \{0,1\}$ then since, $$ \ln p_i = \sum_j \theta_j x_{ij} - \ln \left( 1+ \exp\left(\sum_j \theta_j x_{ij}\right) \right) $$ and, $$ \ln (1-p_i) = - \ln \left( 1+ \exp\left(\sum_j \theta_j x_{ij}\right) \right) $$ we can rewrite (\ref{CE}) as, $$ E = -\sum_i \left( t_i\sum_j \theta_j x_{ij} - \ln \left( 1+ \exp\left(\sum_j \theta_j x_{ij}\right) \right) \right) $$
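A finite-difference check of the gradient $\partial E/\partial\theta_j = \sum_i (p_i - t_i)x_{ij}$ that this derivation leads to (a plain-Python sketch; the toy data in the test is made up here):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(theta, X, t):
    """E = -sum_i [ t_i ln p_i + (1 - t_i) ln(1 - p_i) ]."""
    E = 0.0
    for xi, ti in zip(X, t):
        p = sigmoid(sum(th * x for th, x in zip(theta, xi)))
        E -= ti * math.log(p) + (1 - ti) * math.log(1 - p)
    return E

def gradient(theta, X, t):
    """Analytic gradient: dE/dtheta_j = sum_i (p_i - t_i) x_ij."""
    g = [0.0] * len(theta)
    for xi, ti in zip(X, t):
        p = sigmoid(sum(th * x for th, x in zip(theta, xi)))
        for j, x in enumerate(xi):
            g[j] += (p - ti) * x
    return g
```

Comparing against central differences of `cross_entropy` confirms the formula on any small dataset.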
|
|multivariable-calculus|matrix-calculus|machine-learning|hessian-matrix|
| 0
|
If $x, y, z$ are independent random variables, are $ x-y, y-z, x-z$ also independent of each other?
|
If $x, y, z$ are independent random variables, such as all following a uniform distribution between $0$ and $1$ , are $x-y, y-z, x-z$ also independent of each other?
|
We have $$ \mathbb{E}[(x-y)(y-z)]=\mathbb{E}[x]\mathbb{E}[y]-\mathbb{E}[x]\mathbb{E}[z]-\mathbb{E}[y^2]+\mathbb{E}[y]\mathbb{E}[z], $$ using the independence of $x,y,z$ , and similarly for the other products. Since $\mathbb{E}[x-y]\,\mathbb{E}[y-z]=\mathbb{E}[x]\mathbb{E}[y]-\mathbb{E}[x]\mathbb{E}[z]-\mathbb{E}[y]^2+\mathbb{E}[y]\mathbb{E}[z]$ , this gives $\operatorname{Cov}(x-y,y-z)=\mathbb{E}[y]^2-\mathbb{E}[y^2]=-\operatorname{Var}(y)$ . So the r.v. can be independent only if $$ \mathbb{E}[y^2]=\left(\mathbb{E}[y]\right)^2, $$ i.e. only if the r.v. have zero variance. This is not the case for the uniform distribution.
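For $U(0,1)$ variables, $\operatorname{Cov}(x-y,\,y-z) = -\operatorname{Var}(y) = -1/12 \approx -0.083$, which a quick simulation reproduces (a sketch with made-up names):

```python
import random

def sample_cov(n=100_000, seed=0):
    """Sample covariance of (x - y, y - z) for iid U(0,1) draws."""
    rng = random.Random(seed)
    su, sv, suv = 0.0, 0.0, 0.0
    for _ in range(n):
        x, y, z = rng.random(), rng.random(), rng.random()
        u, v = x - y, y - z
        su += u; sv += v; suv += u * v
    return suv / n - (su / n) * (sv / n)
```

A nonzero covariance rules out independence, confirming the conclusion above.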
|
|probability|independence|correlation|
| 1
|
Finding $1$-periodic $h_{\omega}(s)$'s so that $\lim_{\omega\rightarrow 2\pi n} \frac{e^{-i\omega s} - 1}{e^{-i\omega} - 1} + h_{\omega}(s) = s$
|
One can show that the expression $$ \frac{e^{-i\omega s} - 1}{e^{-i\omega} - 1} $$ approaches $s$ as $\omega\rightarrow 0$ by L'Hopital's rule. However, it seems there is no convergence when considering $\omega\rightarrow 2\pi n$ and $n$ is a nonzero integer (unless we plug special values of $s$ ). My question is, are there $1$ -periodic functions $h_{\omega}(s)$ such that $$ \lim_{\omega\rightarrow 2\pi n}\frac{e^{-i\omega s} - 1}{e^{-i\omega} - 1} + h_{\omega}(s) = s $$ for all $n\in\mathbb{Z}$ ? At the very least, can we find something that works for at least $n=1$ ? The functions $h_{\omega}(s)$ should be $1$ -periodic in $s$ for every $\omega$ .
|
If the limit relation holds for $s$ and $s + 1$ then taking the difference we get $$ \lim_{w\to2\pi n}\left[\frac{e^{-iws} - 1}{e^{-iw} - 1} + h_w(s) - \frac{e^{-iw(s + 1)} - 1}{e^{-iw} - 1} - h_w(s + 1)\right] = -1. $$ If $h_w$ is $1$ -periodic then $h_w(s)$ and $h_w(s + 1)$ cancel out, giving \begin{gather*} -1 = \lim_{w\to2\pi n}\left[\frac{e^{-iws} - 1}{e^{-iw} - 1} - \frac{e^{-iw(s + 1)} - 1}{e^{-iw} - 1} \right] = \lim_{w\to2\pi n}\left[\frac{e^{-iws} - e^{-iw(s + 1)}}{e^{-iw} - 1} \right] \\ = \lim_{w\to2\pi n}\left[\frac{e^{-iws}(1 - e^{-iw} )}{e^{-iw} - 1} \right] = -e^{-i2\pi ns}, \end{gather*} which does not hold for noninteger $s$ if $n\neq 0$ .
|
|calculus|limits|analysis|
| 1
|
Distance in Cantor space
|
We have $C = 2^\mathbb{N}$ and we identify the Cantor set with the image of $C$ via: $ t\colon C \rightarrow [0,1], \left(x_n\right)_n \mapsto \sum_{n=1}^{\infty}\frac{2x_n}{3^n} $ Now if $x,y\in t(C)$ and $\lvert x - y \rvert < 3^{-N}$ for some $N\in \mathbb{N}$ then the first $N$ digits of $x$ and $y$ coincide. Now intuitively this seems very clear, however I can't seem to formalize it. I tried by contradiction, i.e. let $j \le N$ s.t. $x_j \neq y_j$ , and then tried to bound the distance from below, but I couldn't finish the bound. How can one prove this?
|
One can use induction to prove $A(N)$ : If $(x_n), (y_n) \in C$ such that $\lvert t((x_n)) - t((y_n)) \rvert < 3^{-N}$ , then $x_n = y_n$ for $n \le N$ . As the base case take $N = 0$ , for which $A(0)$ is trivially true. Now assume that $A(N)$ is true. We prove that $A(N+1)$ is true. Consider $(x_n), (y_n) \in C$ such that $\lvert t((x_n)) - t((y_n)) \rvert < 3^{-(N+1)}$ , which means that $$d := \sum_{n=1}^\infty \frac{2x_n}{3^n} - \sum_{n=1}^\infty \frac{2y_n}{3^n}$$ satisfies $\lvert d \rvert < 3^{-(N+1)}$ . Since $3^{-(N+1)} < 3^{-N}$ , using $A(N)$ we get $x_n = y_n$ for $n \le N$ and therefore $$d = \sum_{n=N+1}^\infty \frac{2(x_n-y_n)}{3^n} .$$ Assume that $x_{N+1} \ne y_{N+1}$ , w.l.o.g. $x_{N+1} = 1, y_{N+1} = 0$ . With $$R = \sum_{n=N+2}^\infty \frac{2(x_n-y_n)}{3^n}$$ we get $$d = \frac{2}{3^{N+1}} + R .$$ We have $$\lvert R \rvert \le \sum_{n=N+2}^\infty \frac{2\lvert x_n-y_n \rvert}{3^n}\le \sum_{n=N+2}^\infty \frac{2}{3^n} = \frac{2}{3^{N+2}}\sum_{n=0}^\infty \frac{1}{3^n} = \frac{2}{3^{N+2}} \cdot \frac 3 2 = \frac{1}{3^{N+1}}. $$ Si
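The statement (and the induction) can be brute-force checked on finite prefixes; all names here are ours:

```python
from itertools import product

def t(bits):
    """Value of the Cantor map on a finite 0/1 prefix (tail taken to be all 0)."""
    return sum(2 * b / 3 ** (n + 1) for n, b in enumerate(bits))

def prefix_property_holds(N, length=8):
    """Check: |t(x) - t(y)| < 3**-N forces x, y to agree in the first N digits."""
    pts = {x: t(x) for x in product((0, 1), repeat=length)}
    for x, tx in pts.items():
        for y, ty in pts.items():
            if abs(tx - ty) < 3.0 ** -N and x[:N] != y[:N]:
                return False
    return True
```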
|
|general-topology|cantor-set|
| 1
|
Sanity Check: Monoids as Algebras over an Operad
|
I should probably study the classical viewpoint first. I haven't yet and I will eventually but let us stick to $\infty$ -operads for this question. I'm following the lecture notes from Hebestreit-Wagner . Let me recall the terminology first. One can define an $\infty$ -operad $\mathbb{A}\mathrm{ssoc} \to \mathbb{\Gamma}^{\mathrm{op}}$ via finite sets with partially defined maps with a total ordering on each fiber. An algebra over an $\infty$ -operad $\mathscr{O}$ in a symmetric monoidal $\infty$ -category $\mathscr{C}^{\otimes} \to \Gamma^{\mathrm{op}}$ is an $\infty$ -operad map $\mathscr{O} \to \mathscr{C}^{\otimes}$ over $\Gamma^{\mathrm{op}}$ which preserves inerts (i.e. cocartesian lifts of inert maps). Question. I only want to check $\operatorname{Alg}_{\mathbb{A}\mathrm{ssoc}}(\mathbf{Set}^{\times}) \simeq \operatorname{Mon}(\mathbf{Set})$ or $\operatorname{Alg}_{\mathbb{A}\mathrm{ssoc}}(\mathbf{Ab}^{\otimes}) \simeq \mathbf{Ring}$ but somehow I can't get this. How do you do thi
|
We will argue that $\mathrm{Alg}_{\mathbb{A}\mathrm{ssoc}}(\mathcal{C}^\otimes)\simeq\mathrm{Mon}(\mathcal{C}^\otimes)$ for any monoidal $1$ -category $\mathcal{C}^\otimes$ . As a first step, we must find the inerts of $\mathbb{A}\mathrm{ssoc}$ . Given an inert $\alpha\colon\langle n \rangle\to\langle m\rangle$ , every fiber of $\alpha$ is either empty or a singleton, so has a unique total order regardless. There is hence only one lift of $\alpha$ to a map in $\mathbb{A}\mathrm{ssoc}$ , and this is a cocartesian lift. Next, we recall the way a monoidal $1$ -category $\mathcal{C}^\otimes$ is turned into an $\infty$ -operad. This is spelled out in Example II.11(e) of the linked lecture notes: essentially, the objects of $(\mathcal{C}^\otimes)_n$ are tuples $(n,x_1,\ldots,x_n)$ (and $(\mathcal{C}^\otimes)_0$ has a unique object $(0,I)$ , where $I$ is the unit of the monoidal structure), and a map $(n,x_1,\ldots,x_n)\to(m,y_1,\ldots,y_m)$ is a map $\alpha\colon \langle n\rangle \to\langle
|
|category-theory|homotopy-theory|higher-category-theory|operads|
| 1
|
Get areas of section behind ellipse focus and section in front of ellipse focus
|
How would I get the area of the portion of the ellipse identified as "back" (to the left of the focus "S") in the drawing and the portion of the ellipse identified as forward (to the right of the focus "S") in the drawing. EDIT: Image of the ellipse has been changed to better coincide with the answer given by @of course.
|
Applying the idea in my comment above, and the notation depicted in your figure, we want to horizontally scale the ellipse, and turn it into a circle of radius $R$ . Define the image of scaling as follows: $ x' = \sin(\beta) \ x $ $ y' = y $ Since the original equation of the ellipse is $ \dfrac{ x^2 \sin^2( \beta) }{R^2} + \dfrac{y^2}{R^2} = 1 $ Then after applying the scaling, it becomes the circle $ x'^2 + y'^2 = R^2 $ Originally the line separating the two parts of the ellipse is located at $x = s $ where $ - \dfrac{R}{\sin(\beta)} \le s \le \dfrac{R}{\sin( \beta)} $ The corresponding line for the scaled ellipse is $ x' = s' $ where $ s' = s \sin(\beta) $ Now the area of the left part of the circle to the left of this vertical line is $ A'_{Left} = \dfrac{1}{2} R^2 \phi - \dfrac{1}{2} R^2 \sin( \phi) $ where $ \phi = 2 \cos^{-1} \left( \dfrac{ -s'} {R} \right) = 2 \cos^{-1} \left( - \dfrac{ s \sin(\beta)}{R} \right) $ The other part is $ A'_{Right} = \pi R^2 - A'_{Left}$ The corres
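The final formulas are easy to package up; this sketch (the function name is ours) also makes the sanity checks immediate:

```python
import math

def ellipse_split_areas(R, beta, s):
    """Areas of the ellipse x^2 sin^2(beta)/R^2 + y^2/R^2 = 1 to the left and
    right of the vertical line x = s, via scaling to a circle of radius R."""
    sp = s * math.sin(beta)                          # scaled line position s'
    phi = 2 * math.acos(max(-1.0, min(1.0, -sp / R)))
    left_circle = 0.5 * R * R * (phi - math.sin(phi))
    right_circle = math.pi * R * R - left_circle
    scale = 1.0 / math.sin(beta)                     # undo the x-compression
    return left_circle * scale, right_circle * scale
```

For $\beta = \pi/2$ the ellipse is a circle and $s=0$ splits it into two half-disks, and for any $\beta$ the two parts sum to the full ellipse area $\pi R^2/\sin\beta$.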
|
|conic-sections|
| 1
|
An easy question on the general properties of reflections
|
Good morning to everybody. I'm involved in the study of reflective categories (from Borceux' book) and I have a silly doubt. Recall that if $F: \mathcal{A} \to \mathcal{B}$ is a functor and $B$ is an object in $\mathcal{B}$ , a reflection of $B$ along $F$ is a pair $(R_B,\eta_B)$ where $R_B$ is an object of $\mathcal{A}$ and $\eta_B: B \to F(R_B)$ is a morphism of $\mathcal{B}$ such that given an object $A$ in $\mathcal{A}$ and a morphism $b: B \to F(A)$ , then there exists a unique $a: R_B \to A$ such that $F(a) \circ \eta_B=b$ . When for every $B \in \mathcal{B}$ the reflection of $B$ along $F$ exists and such a reflection $(R_B,\eta_B)$ has been chosen, there exists a unique functor $R: \mathcal{B} \to \mathcal{A}$ such that $R(B)=R_B$ for every $B \in \mathcal{B}$ . In such a setting, if $F: \mathcal{A} \to \mathcal{B}$ is a full embedding, it is well known that $R_A=A$ and $\eta_A=Id_A$ for every object $A$ in $\mathcal{A}$ (that is contained in $\mathcal{B}$ ). In the general cas
|
This is generally false. Consider the (non-full) inclusion functor $I\colon\mathsf{Mon}\to\mathsf{sGrp}$ of the category of monoids into the category of semigroups. It has a left adjoint $L\colon\mathsf{sGrp}\to\mathsf{Mon}$ which sends a semigroup $S$ to the monoid $S\sqcup\{e\}$ , where $e$ acts as the unit object of the monoid structure, and restricted to $S$ the monoid structure is the semigroup structure we already had on $S$ . You are asking if, for a general monoid $M$ , it holds that $LM\cong M$ , and if the unit of the adjunction satisfies $\eta_M=\mathrm{id}_M$ . However, $LM=M\sqcup\{e\}$ and the adjunction unit is the inclusion $M\hookrightarrow M\sqcup\{e\}$ , which is indeed a semigroup homomorphism, but generally not an isomorphism, let alone an identity.
|
|category-theory|
| 1
|
Covering Space for 3-punctured sphere
|
I have been considering whether we can find a branched covering map $p:Y \rightarrow \mathbb{CP}^1$ , having 3 critical values, such that exactly five points are mapped to those critical values. That is, $|p^{-1}(\text{Crit}(p))| = 5$
|
I'm not an expert, but I hope this helps: Let $Y$ refer to a compact Riemann surface of genus $g$ . There is a useful form of the Riemann-Hurwitz formula: if $p:S\to S'$ is a map of degree $N$ between surfaces of genus $g,g'$ then $$ r-2+2g = N(b-2+2g') $$ where $b$ is the number of branch points (critical values) and $r$ is the number of ramification points (i.e. points whose image is a branch point). For the data you provide a function can exist only if $$2g + 3 = N$$ so if there is such a map it will be of degree $2g+3$ . For $g=0$ we get $N=3$ and it is easy to find a function with exactly 5 ramification points, i.e. from the sphere into itself we could use $f(z) = -2z^3+3z^2$ which has critical values $\{0,1,\infty\}$ and ramification points $\{0,1,\infty, 3/2, -1/2 \}$ . I understand that this function is essentially unique modulo Möbius transformations. For genus 1 you have to find a degree 5 function. I describe below how to obtain one concrete example but I'm not sure if there is such a functio
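The double roots behind the five ramification points of $f(z) = -2z^3+3z^2$ can be verified with elementary polynomial arithmetic (coefficient lists are in ascending order; the helper is ours):

```python
def polymul(p, q):
    """Multiply two polynomials given as ascending coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# f(z) = -2 z^3 + 3 z^2 as the coefficient list [0, 0, 3, -2]
f_coeffs = [0, 0, 3, -2]
# f(z) = z^2 (3 - 2z): double zero at z = 0, simple zero at z = 3/2
factored_f = polymul([0, 0, 1], [3, -2])
# f(z) - 1 = -(z - 1)^2 (2z + 1): double zero at z = 1, simple zero at z = -1/2
factored_fm1 = [-c for c in polymul(polymul([-1, 1], [-1, 1]), [1, 2])]
```

So the fibers over the critical values $0$ and $1$ are $\{0, 3/2\}$ and $\{1, -1/2\}$, which together with $\infty$ give the five ramification points.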
|
|algebraic-geometry|algebraic-topology|riemannian-geometry|complex-geometry|algebraic-curves|
| 0
|
Why does a summation vanish for a constant $f(x)$ in the Deutsch–Jozsa algorithm?
|
In Quantum Computing: From Linear Algebra to Physical Realizations , pg. 103, it states: Let us consider the summation $$ \frac{1}{2^n}\sum_{x=0}^{2^n-1}(-1)^{x \cdot y} $$ with a fixed $y \in S_n$ Clearly it vanishes since $x \cdot y = 0$ for half of $x$ and $x\cdot y = 1$ for the other half unless $y=0$ . Where $x \cdot y := x_{n-1}y_{n-1}\oplus ...\oplus x_0y_0$ I have been thinking about it for quite some time and do not understand how this is clear. Why is it immediate that $x \cdot y = 0$ for half of $x$ ?
|
If $y\neq 0$ then there exists some $x_0$ such that $x_0 \cdot y = 1$ and also as $x$ ranges over $\{0,1\}^n$ , so does $x+x_0$ . Then $$S := \sum_{x\in \{0,1\}^n}(-1)^{x\cdot y} = \sum_{x\in \{0,1\}^n}(-1)^{(x+x_0)\cdot y}=(-1)^{x_0 \cdot y}\sum_{x\in \{0,1\}^n}(-1)^{x\cdot y}=-S$$ which shows that $S=0$ .
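The cancellation can also be checked exhaustively for small $n$ (a throwaway snippet):

```python
from itertools import product

def alternating_sum(y):
    """sum over x in {0,1}^n of (-1)^(x.y), where x.y = x_1 y_1 XOR ... XOR x_n y_n."""
    total = 0
    for x in product((0, 1), repeat=len(y)):
        parity = 0
        for xi, yi in zip(x, y):
            parity ^= xi & yi
        total += -1 if parity else 1
    return total
```

The sum is $2^n$ for $y = 0$ and $0$ for every other $y$, as claimed.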
|
|linear-algebra|algorithms|quantum-computation|
| 1
|
Complex Conjugation is non-holomorphic
|
If I consider $z_0 \in \mathbb{C}$ and denote $g(z)=\bar{z}$ , then I can verify that: For $\lambda \in \mathbb{R^*}$ : $$\lim_{\lambda \to 0} \frac{g(z_0+\lambda)-g(z_0)}{\lambda}=\frac{\lambda}{\lambda}=1$$ On the other hand, if we consider $w \in i\mathbb{R^*}$ : $$\lim_{w \to 0} \frac{g(z_0+w)-g(z_0)}{w}=\frac{-w}{w}=-1$$ So $g$ is not holomorphic at $z_0$ . But I could see in an identical manner that $\frac{\partial u}{\partial x}+i\frac{\partial{v}}{\partial x}\neq -i(\frac{\partial u}{\partial y} + i\frac{\partial v}{\partial y})$ , i.e., $$ \frac{\partial u}{\partial x}\neq \frac{\partial v}{\partial y} \quad or \quad \frac{\partial u}{\partial y}\neq -\frac{\partial v}{\partial x} $$ Is this a good approach?
|
It is not holomorphic. From the Cauchy–Riemann equations: $f(x+iy)=x-iy$ gives $u(x,y)=x,\ v(x,y)=-y$ . So $\frac {\partial u}{\partial x}=1\not=-1=\frac{\partial v}{\partial y}$ , while $\frac {\partial u}{\partial y}=0=-\frac {\partial v}{\partial x}$ does hold. Since the first equation fails at every point, $g$ is nowhere holomorphic.
|
|complex-analysis|
| 1
|
Every subring of a field is an integral domain
|
When I was trying to prove the following theorem: Theorem. Let $E$ be an extension field of the field $F$ and let $a \in E$ .If $a$ is algebraic over $F$ , then $F(a) \cong F[x]/ \langle p(x)\rangle$ , where $p(x)$ is a polynomial in $F[x]$ of minimum degree such that $p(a)=0$ . Moreover, $p(x)$ is irreducible over $F$ . I considered an homomorphism $\phi: F[x] \longrightarrow F(a) $ such that $\text{Ker} \phi \neq 0$ , so $\text{ker} \phi=\langle p(x)\rangle$ . As $F[x]$ is a Principal Ideal Domain, we have that $p(x)$ is irreducible. Now by the First Isomorphism Theorem $F[x]/ \langle p(x)\rangle \cong \text{Im} \phi \leq F(a).$ Now it is claimed that $\langle p(x)\rangle$ is a prime ideal of $F[x]$ because $\text{Im}\phi$ is an integral domain. My question comes from this last sentence, how can be proved that: Theorem. Every subring R of a field F is an integral domain. So we have to prove R to be commutative with unit. It is clear that it is commutative because if there exist $a,b\
|
Your counterexample is great. What happens is that $\text{Im}\phi$ is an integral domain not only because it is a subring of $F(a)$ , from which you obtain commutativity, but also from the fact that $1 \in F[x]/ \langle p(x)\rangle$ . This comes from the fact that $1 \in F[x]$ since $F$ is a field, and because $$\langle p(x)\rangle = \{r(x)p(x):r(x) \in F[x]\}$$ and $$F[x]/ \langle p(x)\rangle = \{f(x) + r(x)p(x):r(x) \in F[x]\}$$ choosing $r(x)=0 \in F[x]$ shows that the class of $1$ lies in the quotient. Hence $\text{Im}\phi$ is an integral domain, so $\langle p(x)\rangle$ is a prime ideal.
|
|abstract-algebra|ring-theory|field-theory|irreducible-polynomials|maximal-and-prime-ideals|
| 1
|
extension scalars of a variety
|
Let $K\to L$ be an arbitrary field extension. $K$ has characteristic zero. Let $X$ be a variety over $K$ . Let $Y=X\times_K \operatorname{Spec} L$ . 1) Is it true that $Y$ is reduced? 2) Let $Y=\cup Y_i$ be the decomposition into irreducible components. We have the natural maps $Y_i\to X$ . Is it true that these maps are dominant? I know that $Y\to X$ is flat, but it is not necessarily of finite type.
|
If $X$ is a reduced $K$ -scheme of finite type, then exercise 3.15, chapter II in Hartshorne (or Qing Liu, Proposition 2.7, chapter II) shows that TFAE: $X \times_K \overline{K}$ is reduced, with $\overline{K}$ an algebraic closure. $X \times_K K^p$ is reduced, with $K^p$ a perfect closure. $X \times_K L$ is reduced for any extension $L/K$ . In your case, $K^p=K$ (any perfect field, not necessarily of characteristic zero) and $X = X \times_K K^p$ is reduced, so we know that $X \times_K L$ is reduced for all $L/K$ . This is not true in general. For instance, take $X = X_1 \sqcup X_2$ with $X_1,X_2$ geometrically integral (i.e. integral after any extension of fields); then $X_L = (X_1)_L \sqcup (X_2)_L$ ; by exercise 3.15 in Hartshorne again, $(X_1)_L,(X_2)_L$ are irreducible and the map $(X_1)_L \to X$ is never dominant because the image does not contain the generic point of $X_2$ . It is true, however, if you assume that $X$ is irreducible. You can consult Lemma 33.8.10 in the Stacks Project (I think it
|
|algebraic-geometry|
| 1
|
An easy question on the general properties of reflections
|
Good morning to everybody. I'm involved in the study of reflective categories (from Borceux' book) and I have a silly doubt. Recall that if $F: \mathcal{A} \to \mathcal{B}$ is a functor and $B$ is an object in $\mathcal{B}$ , a reflection of $B$ along $F$ is a pair $(R_B,\eta_B)$ where $R_B$ is an object of $\mathcal{A}$ and $\eta_B: B \to F(R_B)$ is a morphism of $\mathcal{B}$ such that given an object $A$ in $\mathcal{A}$ and a morphism $b: B \to F(A)$ , then there exists a unique $a: R_B \to A$ such that $F(a) \circ \eta_B=b$ . When for every $B \in \mathcal{B}$ the reflection of $B$ along $F$ exists and such a reflection $(R_B,\eta_B)$ has been chosen, there exists a unique functor $R: \mathcal{B} \to \mathcal{A}$ such that $R(B)=R_B$ for every $B \in \mathcal{B}$ . In such a setting, if $F: \mathcal{A} \to \mathcal{B}$ is a full embedding, it is well known that $R_A=A$ and $\eta_A=Id_A$ for every object $A$ in $\mathcal{A}$ (that is contained in $\mathcal{B}$ ). In the general cas
|
An equivalent statement to $(R_B, \eta_B)$ being a reflection of $B$ is that the map $\hom(R_B, A) \to \hom(B, F(A))$ given by $a \mapsto F(a) \circ \eta_B$ is an isomorphism. For the case that you're talking about with $B = F(A)$ , $R_B = A$ and $\eta_B = \mathrm{id}_{F(A)}$ , this says that $\hom(A, A) \to \hom(F(A), F(A))$ given by $a \mapsto F(a)$ is an isomorphism. So $(A, \mathrm{id}_{F(A)})$ is a reflection of $F(A)$ precisely if $F$ is fully faithful on morphisms from $A$ to itself. It's not too hard to find examples where this isn't true. For example, most constant functors will fail to be faithful. Lots of structure-forgetting functors won't be full.
|
|category-theory|
| 0
|
Is this a finite set for which it is impossible to list its elements?
|
Here is the idea for the set. Let $\alpha$ be a real number. Then let $x_\alpha$ be the set of digits with the property that a digit is in the set if and only if it appears infinitely many times in the decimal expansion of $\alpha$ . For many numbers $\alpha$ it is easy to determine $x_\alpha$ . For some it is challenging, like $\alpha = \pi$ . However, what if $\alpha$ were a non-computable number? Then it would be impossible to list the elements of $x_\alpha$ , right?
|
There is a difference between the mathematical definition of being able to list all elements of a set and to actually and practically being able to list them. Your set is mathematically well defined, and it has to be of finite size, so it is listable. Us potentially not being able to figure out what that set exactly looks like does not take away from that.
|
|elementary-set-theory|
| 1
|
Lyapunov Asymptotic Stability
|
I am still new to Lyapunov stability and I have a question: The system is: $\dot{x}_1 = x_2(1-x_1^2)$ and $\dot{x}_2=-(x_1+x_2)(1-x_1^2)$ I used $V(x)=\frac 1 2(x_1^2+x_2^2)$ Then, I get $\dot{V}(x)= -(1-x_1^2)x_2^2$ . Can I define $X=\{x\in \mathbb {R}^2 \mid x_1^2+x_2^2 < 1\}$ , where $\dot{V}(x) \le 0$ $\forall x\in X$ , and conclude it is asymptotically stable at the origin? What I am curious about is how I define the open set $X$ in which $\dot{V}(x) \le 0$ . Can it be arbitrary as long as it is open and contains $0$ ?
|
Your function $V$ is a weak Lyapunov function in the region $\Omega = \{ |x_1| < 1 \}$ , since $V$ is positive definite and $\dot V \le 0$ in $\Omega$ , but it's not a strong Lyapunov function in $\Omega$ , since $\dot V = 0$ along the whole line $x_2=0$ (so that the condition “ $\dot V$ is negative definite in $\Omega$ ” is violated). So you can't use Lyapunov's theorem to show asymptotic stability. But asymptotic stability follows instead from LaSalle's theorem, since the required extra assumption for that theorem is satisfied. Namely, on the line segment $(x_1,0)$ with $|x_1| < 1$ (the “problematic” part of $\Omega$ , where $\dot V$ vanishes), the ODEs reduce to $$(\dot x_1, \dot x_2) = (0, -x_1(1-x_1^2)),$$ which, because of the nonzero second component, makes it impossible for any solution except the equilibrium solution $(x(t),y(t))=(0,0)$ to stay in that line segment for all $t$ . But a simpler alternative is to note that in the region $\Omega$ , where the factor $1-x_1^2$ is positive, the t
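A crude numerical integration supports the conclusion: trajectories starting inside the region where $|x_1| < 1$ drift to the origin (forward Euler with step sizes chosen arbitrarily here, so this is an illustration, not a proof):

```python
def simulate(x1, x2, dt=0.01, steps=20_000):
    """Forward-Euler integration of x1' = x2 (1 - x1^2), x2' = -(x1 + x2)(1 - x1^2)."""
    for _ in range(steps):
        d1 = x2 * (1 - x1 ** 2)
        d2 = -(x1 + x2) * (1 - x1 ** 2)
        x1, x2 = x1 + dt * d1, x2 + dt * d2
    return x1, x2
```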
|
|stability-theory|lyapunov-functions|
| 0
|
Showing $\tan x = \frac{\sin\alpha\sin y}{1 - \sin\alpha\cos y}$, given $\sin x = \sin \alpha\sin (x + y)$
|
I've been struggling with this for some time but can't prove what the question asks. The question is as follows: If $\sin x = \sin \alpha\sin (x + y)$ , prove that $$\tan x = \frac{\sin\alpha\sin y}{1 - \sin\alpha\cos y}$$ I began by expanding what is given, i.e. $$\sin x = \sin \alpha(\sin x\cos y + \cos x\sin y)$$ expanding gives: $$\sin x = \sin \alpha\sin x\cos y + \sin\alpha\cos x\sin y$$ Now my first question is regarding this equality. Does this show $\sin\alpha\cos y = 1$ and $\cos x \sin y =0$ ? Whether or not this is true, I have been unable to manipulate what is given into what is required. A brief outline of my method is as follows: If $$\sin x = \sin \alpha\sin (x + y)$$ then $$\tan x = \frac{\sin \alpha\sin (x + y)}{\cos x}$$ i.e. $$\tan x = \frac{\sin \alpha(\sin x\cos y + \cos x\sin y)}{\cos x}$$ giving $$\tan x = \sin \alpha(\tan x\cos y + \sin y)$$ Despite many attempts from different angles of attack, but similar to that shown, I have been unable to complete the proof. I sho
|
To derive the expression $\tan x = \frac{\sin\alpha\sin y}{1 - \sin\alpha\cos y}$ from the given equation $\sin x = \sin \alpha\sin (x + y)$ , we can use trigonometric identities. We have: $$\sin x = \sin \alpha\sin (x + y)$$ Using the angle-addition identity for sine, we get: $$ \sin x = \sin \alpha (\sin x \cos y + \cos x \sin y) $$ Collect the $\sin x$ terms: $$ \sin x - \sin \alpha \sin x \cos y = \sin \alpha \cos x \sin y $$ $$\sin x (1 - \sin \alpha \cos y) = \sin \alpha \cos x \sin y $$ Then solve for $\sin x$ : $$\sin x = \frac{\sin \alpha \cos x \sin y}{1 - \sin \alpha \cos y} $$ Now, since $\tan x = \frac{\sin x}{\cos x}$ , dividing both sides by $\cos x$ gives: $$\tan x = \frac{\sin \alpha \sin y}{1 - \sin \alpha \cos y} $$
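Since every step above is an equivalence (as long as $\cos x \neq 0$ and $1 - \sin\alpha\cos y \neq 0$), a quick numerical spot-check works: pick $\alpha$ and $y$, recover $x$ from the derived formula, and verify the original constraint. The test values $a = 0.7$, $y = 1.1$ are arbitrary.

```python
from math import sin, cos, atan

# Recover x from  tan x = sin(a) sin(y) / (1 - sin(a) cos(y))
# and check the original constraint  sin x = sin(a) sin(x + y).
a, y = 0.7, 1.1
x = atan(sin(a) * sin(y) / (1 - sin(a) * cos(y)))
lhs, rhs = sin(x), sin(a) * sin(x + y)
print(abs(lhs - rhs))  # ~0
```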
|
|algebra-precalculus|trigonometry|
| 0
|
What is the limit function of $f_n(x) := \begin{cases} (x-n+1)(n+1-x): n-1 < x < n+1\\ 0: x \leq n-1 \lor x \geq n+1 \end{cases}$?
|
I have a question about the convergence of $(f_n)_{n \in \mathbb{N}}$ with $$ f_n(x) := \begin{cases} (x-n+1)(n+1-x) & n-1 < x < n+1\\ 0 & \text{otherwise.} \end{cases}$$ Does this sequence of functions converge and if so what is its limit function? Does it converge pointwise or also uniformly? What I have done so far: I plotted some elements of the sequence to see how it behaves (I know this is also kind of obvious already from the definition but I did it anyways) and I realized that the sequence consists just of these "hills" that look like $f(x)=-x^2+1$ and shoot off to positive infinity. What I don't get about this is how this could ever possibly converge to anything since it clearly does not "stop" sliding off to the right. But of course the function values always stay the same and I am sure herein lies the "trick" somewhere. I thought of trying to prove that $$ f_n \to f = \begin{cases} -x^2+1 & -1 < x < 1\\ 0 & \text{otherwise} \end{cases}$$ but I got stuck when taking $x \in [1, \infty)$ because even if $f=0$ at that point I cannot conclude the same for any $f_n(x)$ .
|
It is quite clear that $\{f_n\}$ converges pointwise to $0$ . This is because, for any given $x\in \mathbb{R}$ , choosing $N_x\in \mathbb{N}$ such that $x < N_x - 1$ , then for all $n\geq N_x$ , we would have $x \le n-1$ and hence by definition $f_n(x)=0$ , for all $n\geq N_x$ , implying that $\{f_n(x)\}$ converges to $0$ . However, the convergence is not uniform. This is because, for each $n$ , we have $n\in (n-1, n+1)$ , forcing by definition that $$f_n(n) =1.$$ Hence, for any $\epsilon < 1$ , it would not hold good that there exists an $N_{\epsilon}$ such that for all $n\geq N_{\epsilon}$ , we would have $$|f_n(x)| < \epsilon$$ for all $x$ .
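The two claims are easy to visualize numerically (a Python sketch; the sampling grid is our own choice): at a fixed $x$ the values are eventually $0$, yet every $f_n$ has a bump of height exactly $1$ at $x = n$, so the sup norm never shrinks.

```python
# f_n has a bump on (n-1, n+1) peaking at f_n(n) = 1, and is 0 elsewhere.
def f(n, x):
    return (x - n + 1) * (n + 1 - x) if n - 1 < x < n + 1 else 0.0

# pointwise: at fixed x = 3.0 the values are eventually 0
pointwise = [f(n, 3.0) for n in range(1, 10)]
# sup norm: sample the bump of f_n around x = n; the max is always 1
sup_norms = [max(f(n, n + k / 100) for k in range(-100, 101)) for n in range(1, 6)]
print(pointwise[-1], sup_norms)  # 0.0 [1.0, 1.0, 1.0, 1.0, 1.0]
```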
|
|real-analysis|sequences-and-series|uniform-convergence|pointwise-convergence|
| 1
|
Proving $f(V)=\mathrm{span}(f(\mathbf{v}_{1}),\dots, f(\mathbf{v}_{n})).$
|
I'm using the following definitions: Definition 1 (Linear Span) Let $(V,+,\cdot)$ be a vector space over a field $\mathbb{K}$ . The linear span of a subset $X\subseteq V$ is defined as \begin{align*} \mathrm{span} (X):=\left\{\sum\limits_{i=1}^{n} \lambda_{i}\mathbf{v}_{i}\; \Bigg | \; n\in\mathbb{N},\;\lambda_{i}\in\mathbb{K},\; \mathbf{v}_{i}\in X\right\} \end{align*} Definition 2 (Linear Map) Let $(V,+,\cdot)$ and $(W,\oplus,\odot)$ be vector spaces over a field $\mathbb{K}$ . A map $\Phi:V\rightarrow W$ is called a linear map iff for all $\mathbf{u},\mathbf{v}\in V$ and all $\lambda \in\mathbb{K}$ we have i) $\Phi(\mathbf{u}+\mathbf{v})= \Phi(\mathbf{u})\oplus\Phi(\mathbf{v})$ (additivity) ii) $\Phi(\lambda\cdot \mathbf{u})=\lambda\odot \Phi(\mathbf{u})$ (homogeneity) I'm trying to prove the following: Let $V$ and $W$ be $\mathbb{K}$ -vector spaces and let $\mathbf{v}_{1},\dots,\mathbf{v}_{n}\in V$ for some $n\in\mathbb{N}$ such that $V=\mathrm{span}(\mathbf{v}_{1},\dots,\mathbf{v}
|
Julio Puerta basically answers your precise question in the comment section. Yet, you could lighten your proof and make it more readable (and also circumvent your question) by considering a single general vector instead of the whole vector space at the same time. It can be done as a double inclusion. Let $V$ and $W$ be $\Bbb{K}$ -vector spaces, with $V = \mathrm{span}(\mathbf{v}_1,\ldots,\mathbf{v}_n)$ and $f : V \to W$ a linear map. Let be $\mathbf{v} = \sum_i\lambda_i\mathbf{v}_i \in V$ . Then, one has $$ f(\mathbf{v}) = f\left(\sum_i\lambda_i\mathbf{v}_i\right) = \sum_i\lambda_if(\mathbf{v}_i) \in \mathrm{span}(f(\mathbf{v}_1,\ldots,f(\mathbf{v}_n)), $$ by linearity of $f$ , hence $f(V) \subset \mathrm{span}(f(\mathbf{v}_1,\ldots,f(\mathbf{v}_n))$ . Now, let's prove the reversed inclusion. Let be $\mathbf{w} = \sum_i\lambda_if(\mathbf{v}_i) \in \mathrm{span}(f(\mathbf{v}_1,\ldots,f(\mathbf{v}_n))$ . Then, the same property of linearity permits to write : $$ \mathbf{w} = \sum_i\lambd
|
|linear-algebra|linear-transformations|
| 1
|
Determine the values at which the point elasticity of demand is maximum
|
Given the demand equation $$p=\frac{240}{q+15}$$ Where $15 \leq q \leq 105$ , for what value of $q$ is $|\eta|$ a maximum? for what value is it a minimum? Well, I try the following. Note that: $$\eta=\frac{dq}{dp} \frac{p}{q}$$ As $p=\frac{240}{q+15}$ , then $q=\frac{240-15p}{p}$ and $\frac{dq}{dp}=\left(\frac{240-15p}{p}\right)'=\frac{\left(240-15p\right)'\:p-p'\left(240-15p\right)}{p^2}=-\frac{240}{p^2}$ and $$\eta=-\frac{240}{p^2}\cdot \frac{\frac{240}{q+15}}{\frac{240-15p}{p}}=-\frac{57600}{p\left(q+15\right)\left(240-15p\right)}$$ Now, I was thinking of evaluating the behavior of the derivative of this function to try to find the maximum and minimum, but it is a function that depends on both $p$ and $q$ so it is not clear to me what I can do from that $\eta$ expression to find the maximum and minimum. Any suggestions?
|
The derivative is right. Now we insert the terms into the formula, $p$ remains $p$ . $$\eta=-\frac{240}{p^2}\cdot \frac{p}{\frac{240-15p}{p}}=-\frac{240}{p^2}\cdot \frac{p^2}{240-15p}=-\frac{240}{240-15p}$$ $$=\frac{240}{15p-240}=\frac{16}{p-16}$$ Next we have to evaluate the limits of $p$ . For $q=15$ we obtain the upper bound $p^o=\frac{240}{15+15}=8$ . For $q=105$ we obtain the lower bound $p^u=\frac{240}{105+15}=2$ . Then we have $2\leq p \leq 8$ . Now you can maximize and minimize $\eta$ with regard to these bounds.
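A quick numerical check of the reduced elasticity $\eta = \frac{16}{p-16}$ and of the price bounds (note that $\frac{240}{105+15} = \frac{240}{120} = 2$), in Python:

```python
# Evaluate p at the endpoints of q and eta at the endpoints of p.
def p_of_q(q):
    return 240 / (q + 15)

def eta(p):
    return 16 / (p - 16)

p_hi, p_lo = p_of_q(15), p_of_q(105)          # p = 8 and p = 2
print(p_hi, p_lo, eta(p_hi), eta(p_lo))
# eta is negative on [2, 8]; |eta| = 16/(16 - p) is increasing in p,
# so |eta| is maximal at p = 8 (q = 15) and minimal at p = 2 (q = 105).
```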
|
|derivatives|economics|
| 1
|
Let $x$, $y$ be positive hyperreal numbers. Can $\frac{x}{y}+\frac{y}{x}$ be infinite? finite? infinitesimal?
|
I'm going through problems in chapter one of Elementary Calculus: An Infinitesimal Approach by H. Jerome Keisler. I'm doing an even numbered one and there are only answers for odd numbered problems. Q. 42, section 1.5 problems. Let $x$ , $y$ be positive hyperreal numbers. Can $\frac{x}{y}+\frac{y}{x}$ be infinite? finite? infinitesimal? \begin{align} \frac{x}{y}+\frac{y}{x} = \frac{x^2+y^2}{xy} \\ \end{align} In the case where $x = y$ we have $$\frac{2x^2}{x^2} = 2$$ And this is finite. In the case where $x \ge y$ we have $$\frac{x^2+y^2}{xy}$$ and the $x^2$ in the numerator will be greater than the $xy$ in the denominator. I am just not sure if the numerator is so much larger than the denominator to say this expression is infinite in this case. I don't fully understand these hyperreals I suppose. Any help here?
|
I might be missing something, but the problem seems pretty straightforward to me. Let $\frac{x}{y} = t$ ; the required expression is now $t+\frac{1}{t}$ . It is trivial that as $t$ tends to infinity, $\frac{1}{t}$ tends to $0$ , so the required expression can be infinite. A simple application of the AM-GM inequality will show that for positive numbers the required sum cannot be infinitesimal: $$ \frac{t+1/t}{2} \ge \sqrt{t\cdot\frac{1}{t}} = 1$$
|
|calculus|infinitesimals|
| 1
|
Given $x^2y = 32$ and $x^3/y = 1/8$, find $\log_2 x$ and $\log_4 y$
|
Let $\log_2 x=a$ and $\log_4 y = b$ . Then from $$x^2y = 32$$ and $$\frac{x^3}{y} = \frac{1}{8}$$ we need to find the values of $a$ and $b$ . I substituted values for $x$ and $y$ : $\log_2 2^2 = 2$ and $\log_4 8 = 1.5$ or $\log_2 8 = 2\cdot1.5$ where $a$ is $2$ and $b$ is $1.5$ . So $\log_2 x^2y=a+2b$ or $\log_2 32 = 2+(2\cdot1.5)$ If $\frac{x^3}{y} = \frac{1}{8}$ then the exponent would be $-3$ , but this would mean $a = 2$ and $b = 1.5$ and would not prove to be true throughout the equation. Any assistance is appreciated.
|
More simply than taking logs, plugging $x=2^a$ and $y=4^b$ (as advised in comment) directly gives: $$32=2^{2a}4^b\iff2^5=2^{2a+2b}\iff5=2a+2b$$ and $$\frac18=2^{3a}4^{-b}\iff2^{-3}=2^{3a-2b}\iff-3=3a-2b.$$ The solution is therefore $$a=\frac25, \quad b=\frac{21}{10}.$$ Ted's suggestion (developed 20 min later in @ZYX's answer) may look even simpler, but is actually equivalent to solving this system the following way: $$\begin{align}(2a+2b=5,\quad3a-2b=-3)&\iff(5a=5-3,\quad2b=3a+3)\\&\iff\left(a=\frac25,\quad b=\frac{21}{10}\right).\end{align}$$
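Plugging the solution back into the original equations numerically confirms it (a one-liner sanity check in Python):

```python
# x = 2^a, y = 4^b with a = 2/5, b = 21/10 must satisfy x^2 y = 32 and x^3/y = 1/8.
a, b = 2 / 5, 21 / 10
x, y = 2 ** a, 4 ** b
print(x * x * y, x ** 3 / y)  # should be 32 and 0.125 up to rounding
```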
|
|logarithms|exponentiation|
| 0
|
Explain the double sum step by step
|
I'm looking at the following: But when I try to reproduce the result, I get: For $n=1$ , $$\sum_{i=0}^{n} \sum_{j=0}^{i} (i+1)(j+1)$$ $$= \sum_{i=0}^{n} (i + 1) \cdot \sum_{j=0}^{i} (j+1)$$ $$= \sum_{i=0}^{1} (i + 1) \cdot \sum_{j=0}^{1} (j+1)$$ $$= \left((0+1)+ \sum_{i=1}^{1} (i + 1)\right) \cdot \left((0+1) + \sum_{j=1}^{1}(j+1)\right)$$ $$= \big(1 + (1 + 1)\big) \cdot \big(1 + (1+1)\big)$$ $$= 3 \cdot 3$$ $$= 9$$ Please, explain what is wrong with my solution
|
This double sum is not a product of two individual sums, but it is an inner sum nested inside an outer sum. Concretely, you are dealing with a sum $$S_n = \sum_{i=0}^n a_i,$$ whose elements are themselves defined by a sum, more precisely $$a_i = (i+1) \sum_{j=0}^i (j+1).$$ Given that the inner sum depends on the index of the outer one, you cannot separate and treat them freely. When $n = 1$ , you end up with $S_1 = a_0 + a_1$ , with $$a_0 = (0 + 1) \sum_{j=0}^0 (j+1) = 1 \cdot (0+1) = 1$$ and $$a_1 = (1 + 1) \sum_{j=0}^1 (j+1) = 2 \cdot ((0+1) + (1+1)) = 6,$$ hence $S_1 = 7$ in the end.
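Computing the nested sum directly in Python confirms $S_1 = 7$ rather than $9$:

```python
# S(n) = sum_{i=0}^{n} (i+1) * sum_{j=0}^{i} (j+1)
def S(n):
    return sum((i + 1) * sum(j + 1 for j in range(i + 1)) for i in range(n + 1))

print(S(0), S(1))  # 1 7
```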
|
|discrete-mathematics|summation|
| 1
|
Derivation of the density function of a Gaussian random vector
|
Let $X=(X_1,\ldots,X_n)^T$ be a random vector with mean $m$ and covariance $C$ . Assume $C$ is positive definite, so it has inverse $C^{-1}$ . Let $C=A A^T$ . I saw there are mainly two approaches to define a Gaussian random vector. One uses $X=AY+m$ (where $Y=(Y_1,\ldots,Y_n)^T$ is a random vector with $Y_i$ 's independent standard Gaussian random variables) as the definition, then its mean is $m$ and covariance is $A A^T\equiv C$ . The other one says if a random vector has density $$\frac{1}{(2\pi)^\frac{n}{2}\sqrt{\det C}}\exp\left(-\frac{1}{2}(x-m)^T C^{-1}(x-m)\right)$$ then it is a Gaussian vector with mean $m$ and covariance $C$ . My question is: how can we deduce the density function from the first definition? My attempt is to first use $$F_X(x)=\Pr(X\le x)=\Pr(AY+m\le x)=\int_{y:Ay+m\le x}f_Y(y)\mathrm{d}y$$ the inequality above are all component-wise. Use the independence of $Y_i$ , we can easily derive $f_Y(y)=\frac{1} {(2\pi)^\frac{n}{2}}e^{-\frac{1}{2}y^Ty}$ . I'm stuck at the last step where
|
The mapping $T: y \mapsto Ay + m$ has inverse mapping $T^{-1}: x \mapsto A^{-1}(x - m)$ , which has the Jacobian $$ J_{T^{-1}} = \text{det}(A^{-1}) = \dfrac{1}{\text{det}(A)} $$ Since $C = A A^T$ , we must have $\vert \text{det}(A) \vert = \sqrt{\text{det}(C)}$ , therefore $$ \vert J_{T^{-1}} \vert = \dfrac{1}{\sqrt{\text{det}(C)}} $$ By the multidimensional change of variable theorem, we have $$ f_X(x) = f_Y(T^{-1}(x)) \vert J_{T^{-1}} \vert = \dfrac{1}{\sqrt{\text{det}(C)}} f_Y(A^{-1}(x - m)) $$ The calculation of $f_Y$ is straightforward, since $Y_i$ are independent normal random variables: $$ f_Y(y) = \dfrac{1}{(2\pi)^{n/2}}\exp\left(-\dfrac{1}{2}y^T y\right) $$ Thus, $$ f_X(x) = \dfrac{1}{(2\pi)^{n/2}\sqrt{\text{det}(C)}}\exp\left(-\dfrac{1}{2}(x - m)^T (A^{-1})^T A^{-1}(x - m)\right) $$ Finally, notice that $(A^{-1})^T A^{-1} = (A A^T)^{-1} = C^{-1}$
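A pure-Python spot-check, in a $2\times 2$ example of our own choosing, of the two facts the answer uses: $|\det A| = \sqrt{\det C}$ and $(A^{-1})^T A^{-1} = C^{-1}$ for $C = AA^T$.

```python
# C = A A^T with A = [[2, 0], [1, 1]], so C is positive definite.
A = [[2.0, 0.0], [1.0, 1.0]]
C = [[sum(A[i][k] * A[j][k] for k in range(2)) for j in range(2)] for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

Ai = inv2(A)
# (A^{-1})^T A^{-1}: entry (i, j) is sum_k Ai[k][i] * Ai[k][j]
AitAi = [[sum(Ai[k][i] * Ai[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

print(abs(det2(A)), det2(C) ** 0.5)   # both 2.0
print(AitAi, inv2(C))                 # equal matrices
```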
|
|probability-distributions|normal-distribution|
| 1
|
How to determine if function is differentiable?
|
I am given this piecewise function $f: \mathbb{R} \rightarrow \mathbb{R}$ , $f(x)= \left\{\begin{array}{ll}x^2 & x>0 \\ 0 & x ≤ 0 \\ \end{array} \right. $ I have to determine if the function is differentiable over $\mathbb{R}$ or not. The way I think about differentiability is like this: "Can a function be differentiated at every point?" But that doesn’t really help me, I have been looking for a specific formula I can generally use to determine if a function is differentiable or not, but with no luck. How should I go about finding out if this function is differentiable or not?
|
First, we must check the continuity, because continuity is a necessary condition for differentiability. The only point we must check is the junction $x=0$ $$\lim_{x\to 0^+}f(x)=\lim_{x\to 0^+} x^2=0$$ $$\lim_{x\to 0^-}f(x)=\lim_{x\to 0^-} 0=0$$ so $f$ is continuous over $\mathbb{R}.$ Now we want to study if it's differentiable. Notice that if $x>0$ , then $f(x)=x^2$ which is differentiable, with derivative $f'(x)=2x.$ The same happens if $x\leq 0$ , this time with $f'(x)=0.$ So we can consider $$f'(x)= \begin{cases} 2x & x>0 \\ 0 & x\leq 0 \end{cases} $$ Is it continuous? If yes, then $f$ is differentiable over $\mathbb{R}$ (this uses the derivative limit theorem: if $f$ is continuous at a point and $f'$ has a limit there, then $f$ is differentiable at that point, with that limit as its derivative). Again, the only point we have to study is $x=0$ , so we have $$\lim_{x\to 0^+}f'(x)=\lim_{x\to 0^+} 2x=0$$ $$\lim_{x\to 0^-}f'(x)=\lim_{x\to 0^-} 0=0$$ In conclusion, $f$ is differentiable over $\mathbb{R}.$
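A quick numerical illustration (not needed for the proof): the one-sided difference quotients at the junction $x = 0$ both tend to $0$, matching the conclusion $f'(0) = 0$.

```python
# f(x) = x^2 for x > 0, 0 otherwise
def f(x):
    return x * x if x > 0 else 0.0

hs = [10 ** -k for k in range(1, 8)]
right = [(f(h) - f(0)) / h for h in hs]      # equals h, tends to 0
left = [(f(-h) - f(0)) / (-h) for h in hs]   # identically 0
print(right[-1], left[-1])
```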
|
|ordinary-differential-equations|functions|derivatives|continuity|
| 0
|
Logit Gradient/Hessian derivations
|
I'm trying to follow the algebra leading from the gradient function to the Hessian in Logistic Regression, but I can't quite understand where I have gone wrong. I have the gradient function as: $$ \sum_i^m \left[x_i\cdot \frac {\exp\{-\theta^T x_i\}} {1+\exp\{-\theta^T x_i\}} - (y_i-1) x_i \right] $$ After deriving the gradient, I'm trying to rearrange terms and simplify the expression... First, we can note that the logistic regression model can be expressed as $$ p(y=1|x; \theta) = \frac{1}{1+\exp(-\theta^Tx_i)} $$ Similarly, we can also show that $$ p(y=0|x; \theta) = 1-\frac{1}{1+\exp(-\theta^Tx_i)} = \frac{\exp(-\theta^Tx_i)}{1+\exp(-\theta^Tx_i)} $$ These expressions rely on the definition of the logistic/sigmoid function $\sigma(\theta^Tx_i) = \frac {1} {1+\exp(-\theta^Tx_i)}$ and let us state more concisely $\frac {\exp(-\theta^Tx_i)} {1+\exp(-\theta^Tx_i)} = 1-\sigma(\theta^Tx_i)$ So the problem we are trying to solve goes from: $$ \sum_i^m \left[x_i\cdot \frac {\exp\{-\theta^T x_i\}}
|
Define a few variables $$\eqalign{ \def\c#1{\color{red}{#1}} \def\t{\theta} \def\p{\partial} \def\l{\lambda} \def\qiq{\quad\implies\quad} \def\o{{\tt1}} z &= X\t &&\qiq &dz = X\,d\t \\ e &= \exp(z), \;&E = {\rm Diag}(e) &\qiq &de = e\odot dz= E\,dz \\ p &= \frac{e}{\o+e}, &P = {\rm Diag}(p) &\qiq &dp = (P-P^2)\,dz \\ &&&\qiq&\;\,p=P\o \\ }$$ Write the cross-entropy $(\l)$ in terms of these variables, then calculate its gradient $$\eqalign{ -\l &= y:\log(p) \;+\; (\o-y):\log(\o-p) \\ -d\l &= y:P^{-1}\,\c{dp} \;+\; (\o-y):(I-P)^{-1}\,(-dp) \\ &= y:P^{-1}\,\c{(P-P^2)\,dz} \;+\; (y-\o):(I-P)^{-1}(P-P^2)\,dz \\ &= y:(I-P)\,dz \;+\; (y-\o):P\,dz \\ &= (y-Py):\c{dz} \;+\; (Py-\c{P\o}):dz \\ -d\l &= (y-\c{p}):\c{X\,d\t} \\ d\l &= X^T(p-y):d\t \\ g &=\frac{\p\l}{\p\t} = X^T(p-y) \\ }$$ The Hessian is much easier to calculate, since everything except $p$ is a constant $$\eqalign{ &dg = X^T\c{dp} \;=\; X^T(P-P^2)\,X\,d\t \\ &H\;=\;\frac{\p g}{\p\t} \;=\; X^T(P-P^2)\,X \\ }$$ In the above, all fun
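Since the post is about getting the algebra right, a finite-difference check of the final gradient formula $g = X^T(p-y)$ may be useful. This is a pure-Python sketch; the synthetic data and all names are ours, not from the original post.

```python
import math, random

# Finite-difference check of g = X^T (p - y) for the logistic cross-entropy,
# with p = sigmoid(X theta).
random.seed(0)
m, d = 6, 3
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(m)]
y = [float(random.random() < 0.5) for _ in range(m)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(theta):
    total = 0.0
    for xi, yi in zip(X, y):
        p = sigmoid(sum(t * x for t, x in zip(theta, xi)))
        total -= yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return total

def grad(theta):
    g = [0.0] * d
    for xi, yi in zip(X, y):
        p = sigmoid(sum(t * x for t, x in zip(theta, xi)))
        for j in range(d):
            g[j] += (p - yi) * xi[j]
    return g

theta = [0.1, -0.2, 0.3]
eps = 1e-6
num = [(loss([t + eps * (k == j) for k, t in enumerate(theta)]) - loss(theta)) / eps
       for j in range(d)]
err = max(abs(a - b) for a, b in zip(num, grad(theta)))
print(err)  # small: analytic and numeric gradients agree
```

The Hessian formula $H = X^T(P - P^2)X$ can be checked the same way, differencing `grad` instead of `loss`.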
|
|multivariable-calculus|matrix-calculus|machine-learning|hessian-matrix|
| 1
|
Is this an invertible linear map?
|
$\DeclareMathOperator{\R}{\mathbb R} T:\R^2\rightarrow \R^3$ $T(x)= \begin{bmatrix}1&0\\0&1\\0 &0\end{bmatrix}x, \forall x\in \R^2$ I know that a linear map is invertible if and only if the map is bijective. Looking at the matrix, the map is injective but not surjective, so it is not invertible. But I thought could it be an invertible map if the codomain $\R^3$ is changed to the subspace in $\R^3$ , i.e., $\text{span}\{[1\; 0\; 0]^{\top}, [0\; 1\; 0]^{\top}\}$ , because in this way the map becomes both injective and surjective. If it is a linear invertible map, then what is the matrix representation of this linear map?
|
No, it isn't. The considered map, namely $T : \Bbb{R}^2 \to \Bbb{R}^3, (x,y) \mapsto (x,y,0)$ is nothing else than the canonical injection from $\Bbb{R}^2$ to $\Bbb{R}^3$ . As indicated by its name, it is indeed injective, since its kernel is trivial, but it is not surjective (and thus non-invertible), because $\dim \mathrm{im\;} T = 2 < 3 = \dim \Bbb{R}^3$ , or $T(\Bbb{R}^2) = \Bbb{R}^2 \times \{0\} \subsetneq \Bbb{R}^3$ in other words. More concretely, you can notice that $e_3 = (0,0,1) \in \Bbb{R}^3$ has no preimage.
|
|linear-algebra|linear-transformations|
| 1
|
What exactly is a structure of a language?
|
I understand that structures are means by which we know how to interpret certain languages. So for example, in the language of number theory, we might have $\displaystyle L=\left\langle 0,S,+,\cdot,E,<\right\rangle$ . As I understand it, without specifying our structure, we do not know how to interpret the symbols of our language. The book I am reading provides the following definition: Fix a language $L$ . An $L$ -structure $\alpha$ is a nonempty set $A$ , called the universe of $\alpha$ , together with: For each constant symbol $c$ of $L$ , an element $c^\alpha$ of $A$ . For each $n$ -ary function symbol $f$ of $L$ , a function $f^\alpha:A^n \to A$ For each $n$ -ary relation symbol $R$ of $L$ , an n-ary relation $R^\alpha$ on $A$ A few questions regarding this definition: When we talk about the nonempty set $A$ , is this the same as talking about "the domain" of our structure? Does the second point in the definition mean that our structure is "closed", i.e. that every function symbol's interpretation will take elements of $A^n$ to elements of $A$ ?
|
Your understanding is correct. Symbols have no meaning until they are interpreted. Yes, $A$ is the domain of the $L$ -structure $\alpha$ . You specify the language $L$ and then interpret the symbols of the language in $\alpha$ . That is why it is called an $L$ -structure. You may encounter the notation $$\alpha:=\left(A,\{R^{\alpha}:R\in L\},\{f^{\alpha}:f\in L\},\{c^{\alpha}:c\in L\}\right)$$ Which is just notation to tell us what the $L$ -structure $\alpha$ consists of, as the definition you provided. Yes, if $f\in L$ is an $n$ -ary function symbol, then its interpretation $f^{\alpha}$ is a function with domain $A^{n}$ and range always a subset of the domain of interpretation $A$ . Meaning is always with respect to one domain. If the domain changes, the meaning may change. But you cannot have two different meanings within the same domain. The superscript is notation to tell us we are talking about an element of the domain $A$ . If $c\in L$ is a constant symbol, then $c^{\alpha}\in A$
|
|logic|first-order-logic|formal-languages|
| 1
|
Zariski closure of an algebraic subgroup of finite index
|
Let $G$ be an algebraic group and $H \subset G$ a subgroup of $G$ . Let $H_0$ be a subgroup of $H$ of finite index. Then I guess the Zariski closure of $H_0$ is exactly the Zariski closure of $H$ in $G$ , since intuitively it should be. But I have no clue how to prove it, anything will be helpful.
|
Note that this is equivalent to asking whether any finite index subgroup of $H$ is dense. In particular there's no real reason to consider the situation within an ambient algebraic group $G$ . If you assume irreducibility, or even connectedness, of $H$ , the claim is true as the only finite index closed subgroup of $H$ is $H$ itself (This follows as $H$ is the union of the cosets of any such subgroup), yet the closure of any finite index subgroup will be a finite index subgroup as well. Without assuming connectedness of $H$ the claim is not true. Consider for example $H_0\subsetneq H\subsetneq G$ finite groups.
|
|algebraic-groups|
| 1
|
Skew Symmetric Matrix for expressing a Rotation
|
How can a skew-symmetric matrix be used to express rotations about a given axis? I came across this concept while dealing with rotation matrices used in robotics. Can someone elaborate on this concept and explain how a vector is converted to a skew-symmetric matrix? Also I would like to go in depth on this.
|
Rotations in $\Bbb{R}^3$ can be represented by the orthogonal matrices of the group $SO(3)$ . Since it is a Lie group, the elements of the associated Lie algebra $\mathfrak{so}(3)$ play the role of generators of the said rotations. Concretely, it means that $\forall R \in SO(3)$ $\exists A \in \mathfrak{so}(3)$ such that $R = e^{tA}$ , with $t \in \Bbb{R}$ . Yet, the algebra $\mathfrak{so}(3)$ is precisely made of skew-symmetric matrices. Addendum. It may be proven as follows. Since $R = e^{tA}$ is orthogonal, one has $R^{-1} = R^T$ , hence $R^TR = e^{tA^T}e^{tA} = 1$ . Now, we can differentiate this equality with respect to $t$ , i.e. $e^{tA^T}(A^T + A)e^{tA} = 0$ , hence finally $A^T = -A$ when evaluated at $t = 0$ .
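To see this concretely, here is a hedged Python sketch (the names `hat` and `rodrigues` are ours): exponentiating $tW$ for the skew-symmetric "hat" $W$ of a unit axis $w$, via Rodrigues' formula $e^{tW} = I + \sin t\, W + (1-\cos t)\,W^2$, yields an orthogonal rotation matrix.

```python
import math

def hat(w):
    # Skew-symmetric matrix of the vector w = (x, y, z)
    x, y, z = w
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def rodrigues(w, t):
    # e^{t W} = I + sin(t) W + (1 - cos(t)) W^2  for a unit vector w
    W = hat(w)
    W2 = matmul(W, W)
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    return [[I[i][j] + math.sin(t) * W[i][j] + (1 - math.cos(t)) * W2[i][j]
             for j in range(3)] for i in range(3)]

R = rodrigues((0.0, 0.0, 1.0), math.pi / 2)   # quarter turn about the z-axis
RtR = matmul([list(r) for r in zip(*R)], R)   # R^T R should be the identity
print(R[0][:2], RtR[0][0], RtR[0][1])
```

The quarter-turn about $z$ maps $e_1$ to $e_2$, and $R^TR = I$ confirms orthogonality numerically.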
|
|linear-algebra|matrices|rotations|
| 0
|
Categorical kernel of composition of morphisms (in abelian categories)
|
A while ago I was interested in categorical image of composition of morphisms in abelian categories, see this post . I learnt that $\text{Im}(gf)=\text{Im}(g i)$ where $i$ is the canonical monomorphism gotten by factorizing $f$ in the abelian category. Recently, I had to deal with the same thing but with kernels. Context 1: Recall that abelian categories have all kernels. The categorical kernel of a morphism $f$ is the equalizer of $f$ and the zero morphism (which exists as abelian categories contain zero objects). Context 2: In the category of sets, for a composable pair of morphisms $f,g$ we know $\ker(gf)=f^{-1}(\ker(g))$ , ie the preimage of $\ker(g)$ along $f$ . Question: Is there a categorical version of this for arbitrary abelian categories? My thoughts so far: Let $A\xrightarrow{f}B\xrightarrow{g}C$ be a composable pair of morphisms in an arbitrary abelian category. Let $k:\ker(g)\to B$ be the kernel map of $g$ . I think I need to lift this map along $f$ to get a map from $\ker
|
You can easily see using the universal property that $\ker(g\circ f)=\ker(p\circ f)$ where $g=i\circ p$ is the canonical decomposition of $g$ in an abelian category (this is basically due to the fact that $i$ is a mono).
|
|abstract-algebra|category-theory|homological-algebra|abelian-categories|
| 1
|
Summing the heights of every simple non-left-moving paths $(0,0)\to (n,m)$
|
There is a challenging question assigned regarding Counting. here is the question: Consider a square lattice $(\mathbb{Z}_{\ge 0}^2)$ with point $A$ on its bottom left (the origin) and point $B$ on its top right. We call a path from $A$ to $B$ a good path if it does not ever move left and it does not visit any point more than once. For each good path, we write down the height (the $y$ -coordinate) of every horizontal segment. What is the sum of the numbers written down? I have attached an example picture. In this case, the sum is $1+3+0+0+0+5+1=10$ . At first, I had the idea of solving the same problem for a smaller instance but still could not find the general approach to it. I think this is, in core, a stars and bars problem since we have to have $7$ moves to right and the fact that the number of squares is actually related to the number of moves we take to the right. I have also noticed that we have to take $5$ upward steps more than downward steps so that we can get to the point B.
|
Let $A=(0,0)$ and $B=(n,m)$ . The heights in each good path can be represented by a string $(x_1,\dots, x_{n})$ where each height is in $\{0,\dots,m\}$ . All such strings represent good paths. So there are $(m+1)^n$ good paths. The average height is $m/2$ . There are $n$ heights in each path. So the sum is $$\frac12 mn(m+1)^n.$$ You can show that taking the average works by a bijection argument, i.e. pairing each path with its "opposite-height" path (for even $m$ the path with no pair occurs once and is exactly average-height).
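The closed form is easy to sanity-check by brute force over the height-tuple encoding from the answer, for small $n, m$ of our own choosing:

```python
from itertools import product

# Each good path corresponds to its tuple of n horizontal heights in {0, ..., m};
# the total written down is the sum of all heights over all tuples.
def total(n, m):
    return sum(sum(h) for h in product(range(m + 1), repeat=n))

n, m = 3, 2
print(total(n, m), m * n * (m + 1) ** n // 2)  # both 81
```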
|
|combinatorics|discrete-mathematics|
| 1
|
Two questions about PDE
|
I deal with two problems: $\frac{\partial v}{\partial y} - 2t\frac{\partial v}{\partial t} + 3v = e^{3y-t}$ , $v(y,0)=e^y$ . $\frac{\partial v}{\partial y} - 2t\frac{\partial v}{\partial t}+3v=e^{3y-t}$ , $v(0,t)=e^t$ . Here is my approach. I found no solution. Let $y:=y(s,\tau)$ , $t:=t(s,\tau)$ such that $$\frac{\partial y}{\partial s} = 1,\quad \frac{\partial t}{\partial s} = -2t,\quad \frac{\partial v}{\partial s} + 3v = e^{3y-t},$$ with initial conditions $y(0,\tau) = \tau$ , $t(0,\tau)=0$ , and $v(0,\tau)=e^{\tau}$ . Note that $\frac{\partial t}{\partial s}=-2t$ give us $t(s,\tau)=e^{-2s}C_1(\tau)$ . Since $t(0,\tau)=0$ , we have $C_1(\tau)=0$ which implies $t(s,\tau)=0$ , contradiction. With same approach, we can solve that $y(s,\tau) = s$ , $t(s,\tau)=e^{-2s}\tau$ . Hence, we remain to solve that $\frac{\partial v}{\partial s} + 3v= e^{3y-t} = e^{3s-e^{-2s}\tau},$ and I have no idea how to solve it. I am not sure why problem 1 doesn't have a solution and why problem 2 is due to
|
Your approach is correct. For problem 1, the $y$ -axis is a characteristic curve, so you should expect things to go wrong when you try to assign values to $v$ there. For problem 2, you can in principle solve the ODE through multiplication by the integrating factor $e^{3s}$ , but then you will need to calculate $$ \int e^{6s} e^{-\tau e^{-2s}} \, ds , $$ which has no elementary antiderivative if $\tau \neq 0$ . (The substitution $u=e^{-2s}$ gives something which can be expressed in terms of the exponential integral Ei, according to Mathematica.) What's the context? Did you get these problems as exercises in some course? In that case, it's probably best to ask your teacher to clarify.
|
|partial-differential-equations|
| 0
|
Convergence in $\mathscr{L}^1$ implies uniform integrability?
|
Quoting from Rogers and Williams, p. 116, Theorem 21.2 (A necessary and sufficient condition for $\mathscr{L}^1$ convergence): Let $(X_n)$ be a sequence in $\mathscr{L}^1$ , and let $X\in\mathscr{L}^1$ . Then $X_n\to X$ in $\mathscr{L}^1$ , or, equivalently $\mathbb{E}(|X_n - X|)\to 0$ , if and only if the following conditions are satisfied: $X_n\to X$ in probability; the sequence $(X_n)$ is uniformly integrable. The proof provided by Rogers and Williams covers the 'if' part. Question: But how to prove in the converse direction that convergence in $\mathscr{L}^1$ implies uniform integrability? By the following remark in Rogers and Williams, I get the impression I should look for an integrable non-negative random variable $Y$ that bounds the collection of random variables $|X_n - X|$ , but I am not sure how (e.g. $\sup_n|X_n - X|$ I suppose can't do as it might not be integrable): "It is of course the 'if' part of the theorem that is useful. Since the result is 'best possible', it must
|
Suppose that $(X_n)_{n\in\mathbb N}$ is not uniformly integrable. Then there exist $\varepsilon>0$ and a sequence of integers $(n_k)_{k\in\mathbb N}$ such that $$ \forall k\in\mathbb N,\quad\mathbb E\left[\vert X_{n_k}\vert1_{\{\vert X_{n_k}\vert>k\}}\right]\ge\varepsilon. $$ If $(n_k)_{k\in\mathbb N}$ were bounded, you would be able to show that the finite family $(X_n)_{0\le n\le\max_{k\in\mathbb N}n_k}$ is not uniformly integrable, which is impossible as it is composed of integrable random variables. So $(n_k)_{k\in\mathbb N}$ is not bounded, hence you can choose an increasing subsequence $(n_{k_i})_{i\in\mathbb N}$ . But then you can show that $(X_{n_{k_i}})_{i\in\mathbb N}$ does not converge in $L^1$ , which contradicts the convergence in $L^1$ of $(X_n)_{n\in\mathbb N}$ .
|
|probability-theory|uniform-integrability|
| 0
|
Two questions about PDE
|
I deal with two problems: $\frac{\partial v}{\partial y} - 2t\frac{\partial v}{\partial t} + 3v = e^{3y-t}$ , $v(y,0)=e^y$ . $\frac{\partial v}{\partial y} - 2t\frac{\partial v}{\partial t}+3v=e^{3y-t}$ , $v(0,t)=e^t$ . Here is my approach. I found no solution. Let $y:=y(s,\tau)$ , $t:=t(s,\tau)$ such that $$\frac{\partial y}{\partial s} = 1,\quad \frac{\partial t}{\partial s} = -2t,\quad \frac{\partial v}{\partial s} + 3v = e^{3y-t},$$ with initial conditions $y(0,\tau) = \tau$ , $t(0,\tau)=0$ , and $v(0,\tau)=e^{\tau}$ . Note that $\frac{\partial t}{\partial s}=-2t$ give us $t(s,\tau)=e^{-2s}C_1(\tau)$ . Since $t(0,\tau)=0$ , we have $C_1(\tau)=0$ which implies $t(s,\tau)=0$ , contradiction. With same approach, we can solve that $y(s,\tau) = s$ , $t(s,\tau)=e^{-2s}\tau$ . Hence, we remain to solve that $\frac{\partial v}{\partial s} + 3v= e^{3y-t} = e^{3s-e^{-2s}\tau},$ and I have no idea how to solve it. I am not sure why problem 1 doesn't have a solution and why problem 2 is due to
|
$$\frac{\partial v}{\partial y} - 2t\frac{\partial v}{\partial t} = -3v+e^{3y-t}$$ Charpit-Lagrange characteristic ODEs : $$\frac{dy}{1}=\frac{dt}{-2t}=\frac{dv}{-3v+e^{3y-t}}=ds$$ Note: This is equivalent to : $\frac{dy}{ds} = 1,\quad \frac{dt}{ds} = -2t,\quad \frac{dv}{ds} + 3v = e^{3y-t}$ A first characteristic equation comes from solving $\frac{dy}{1}=\frac{dt}{-2t}$ : $$t\:e^{2y}=c_1$$ $e^y=(\frac{c_1}{t})^{1/2}\quad\implies\quad e^{3y-t}=(\frac{c_1}{t})^{3/2}e^{-t}$ A second characteristic equation comes from solving $\frac{dt}{-2t}=\frac{dv}{-3v+e^{3y-t}}=\frac{dv}{-3v+(\frac{c_1}{t})^{3/2}e^{-t}}$ : $\frac{dv}{dt}=-\frac{1}{2t}\left(-3v+(\frac{c_1}{t})^{3/2}e^{-t}\right)\quad\implies\quad \frac{dv}{dt}-\frac{3}{2t}v=-\frac{(c_1)^{3/2}}{2t^{5/2}}e^{-t}$ Solving this first order linear ODE involves the special function Ei : https://mathworld.wolfram.com/ExponentialIntegral.html The solution of the ODE is : $$v=\frac{1}{12}\left(\frac{c_1}{t}\right)^{3/2}\bigg((t^2-t+2)e^{-t}+t^3\
|
|partial-differential-equations|
| 0
|
Relationship between the quotient space of an equivalence relation and the kernel of a function
|
Let $R$ be an equivalence relation over a set $A$ . For $a \in A$ we define $a / R = \{ b \in A: (a, b) \in R\}$ and call this set the equivalence class of $a$ . Furthermore, we define $A / R = \{a / R : a \in A\}$ , the set of all equivalence classes for the equivalence relation $R$ over $A$ . As a last definition, if $f : A \to B$ we define $\ker f = \{(a, b) \in A^2 : f(a) = f(b)\}$ . In general, a function $f : A / R \to B$ defined via representatives may be ambiguous or contradictory. For example, if we define $f(a / R) = a^2$ and $R$ is the relationship "has the same parity", then the fact that $2 / R = 4 / R$ would lead us to expect $f(2 / R) = f(4 / R)$ , yet $f(2 / R) = 4$ while $f(4 / R) = 16$ . Notwithstanding, an important idea in modern algebra is the following: If $f : A \to B$ is onto, then $\overline{f}(a / \ker f) = f(a)$ defines a bijection $\overline{f} : A / \ker f \to B$ . It is clear to me $a.$ that $\overline{f}$ is a well-defined function, in the sense that to each element in the domain it maps a unique element i
|
In your proof of surjectivity, you've taken an element $b\in B$ , but then started talking about $b/ ker f$ . That's wrong - $x/ker f$ only makes sense for $x$ in $A$ , and elements of $B$ do not have to be elements of $A$ . The more general lesson we can learn here is that when you've got sets named $A$ and $B$ lying around, notation like $ker ~ f = \{(a, b) \in A^2 : f(a) = f(b)\}$ is risky - it can easily lead you into mistakes like this. I'd recommend $(a,a')$ or $(a_1, a_2)$ for a generic element of $A\times A$ - let the notation help you, not trick you.
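As a concrete illustration of the theorem under discussion (all names below are hypothetical, not from the posts): for a finite $A$ one can build $A/\ker f$ explicitly and check that $\overline f$ , defined on a class via any representative, is a well-defined bijection onto the image.

```python
A = range(-3, 4)

def f(a):
    return a * a

B = sorted({f(a) for a in A})       # f is onto its image B = [0, 1, 4, 9]

# a / ker f = { b in A : f(a) = f(b) }
def cls(a):
    return frozenset(b for b in A if f(b) == f(a))

quotient = {cls(a) for a in A}      # A / ker f: {{-3,3}, {-2,2}, {-1,1}, {0}}

# f-bar(a / ker f) = f(a): every representative of a class has the same
# image, and distinct classes have distinct images by definition of ker f.
fbar = {c: f(next(iter(c))) for c in quotient}
print(len(quotient), sorted(fbar.values()))
```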
|
|abstract-algebra|functions|logic|
| 1
|
How to find missing vertex/point of a tetrahedron
|
I need to find the missing point of a tetrahedron with side length $x$ , with three points being $$v_1=(0,0,0) \quad v_2=\left(\frac{1}{2}x,0,\frac{\sqrt3}{2}x\right) \quad v_3=(x,0,0)$$ I can't seem to find the 4th point. Can anyone help? I just need the equation.
|
We have $$ \cases{ v_1=(0,0,0) \\ v_2=\left(\frac{1}{2}x,0,\frac{\sqrt3}{2}x\right)\\ v_3=(x,0,0) } $$ the triangle with vertices on $v_1,v_2,v_3$ has the barycentre given by $$ g=\frac 13(v_1+v_2+v_3) = \left(\frac{x}{2},0,\frac{x}{2 \sqrt{3}}\right) $$ and the normal to the plane containing $v_1,v_2,v_3$ is $\vec n=(0,1,0)$ then we can propose as fourth vertex $v_4 = g + \gamma\vec n$ . Now the tetrahedron barycentre is given by $$ G=\frac 14(v_1+v_2+v_3+v_4) = \left(\frac{x}{2},\frac{\gamma}{4},\frac{1}{4} \left(\frac{\sqrt{3} x}{2}+\frac{x}{2 \sqrt{3}}\right)\right) $$ now as $\|v_1-G\|^2 = \|v_4-G\|^2$ we have $$ \frac{\gamma^2}{2}-\frac{x^2}{4}-\frac{1}{16} \left(\frac{\sqrt{3} x}{2}+\frac{x}{2 \sqrt{3}}\right)^2+\left(\frac{x}{2 \sqrt{3}}-\frac{1}{4} \left(\frac{\sqrt{3} x}{2}+\frac{x}{2 \sqrt{3}}\right)\right)^2=0 $$ and solving for $\gamma$ we get $$ \gamma = \pm\sqrt{\frac{2}{3}} x $$ and thus the generic regular tetrahedron is defined by the four vertices $$ \cases{ v_1=(0,0,0) \\ v_2=\left(\frac{1}{2}x,0,\frac{\sqrt3}{2}x\right)\\ v_3=(x,0,0)\\ v_4=\left(\frac{x}{2},\pm\sqrt{\frac{2}{3}}\,x,\frac{x}{2 \sqrt{3}}\right) } $$
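A quick numerical sanity check of the result (the edge length $x=2$ is an arbitrary test value): with $v_4 = g + \sqrt{2/3}\,x\,\vec n$ , all six pairwise distances come out equal to $x$ .

```python
import math

def dist(p, q):
    # Euclidean distance between two 3D points
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

x = 2.0  # arbitrary edge length for the check
v1 = (0.0, 0.0, 0.0)
v2 = (x / 2, 0.0, math.sqrt(3) / 2 * x)
v3 = (x, 0.0, 0.0)

# barycentre of the base triangle, offset along the normal (0, 1, 0)
g = tuple((a + b + c) / 3 for a, b, c in zip(v1, v2, v3))
gamma = math.sqrt(2.0 / 3.0) * x
v4 = (g[0], g[1] + gamma, g[2])

verts = [v1, v2, v3, v4]
dists = [dist(p, q) for i, p in enumerate(verts) for q in verts[i + 1:]]
print(dists)  # all six pairwise distances should equal x
```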
|
|geometry|solid-geometry|
| 0
|
Why must rational $x$ and $y$ satisfy $x+y=2$ and $2\sqrt{3xy}=3$ in order for $2\sqrt{3}-3=(x+y)\sqrt{3}-2\sqrt{3xy}$ to be true?
|
I am going through a maths book and unable to understand the following: Given that $x$ and $y$ are rational numbers, then if the equation $$2\sqrt{3}-3=(x+y)\sqrt{3}-2\sqrt{3xy}$$ is true, then we must have $x+y=2$ and $2\sqrt{3xy}=3$ . I don't understand this. We cannot reach such a conclusion with equations involving only rational numbers. For example, let $a$ , $b$ , $c$ , and $d$ be rational numbers, and $a - b = c - d$ ; then it does not mean that $a = c$ and $b = d$ always for the equation to be true. How can we be sure that we must have $x + y = 2$ and $2 \sqrt{3xy} = 3$ for the equation to be true?
|
For better clarity, we can transform the given equation as: $$2-\sqrt{3} = (x+y) - 2\sqrt{xy}.$$ Set $z=x+y$ for the time being. Then $$2-z= \sqrt{3}-2\sqrt{xy}.$$ Squaring both sides, we get $$4+z^2-4z -3-4xy= -4\sqrt{3xy}.$$ Note that the left hand side is a rational number. Therefore the right hand side should also be rational, implying that $\sqrt{3xy}$ is rational. Say $\sqrt{3xy}=r$ , a rational. Now, get back to your original equation: $$2\sqrt{3}-3 = (x+y)\sqrt{3} -2r.$$ This must imply that $x+y=2$ and $3=2r$ , i.e. $2\sqrt{3xy}=3$ . If not, then $$\sqrt{3} = \frac{2r-3}{x+y-2}$$ would be rational!
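As a numerical sanity check (not part of the argument): the system $x+y=2$ , $2\sqrt{3xy}=3$ forces $xy=3/4$ , i.e. $\{x,y\}=\{3/2,1/2\}$ , and these rational values do satisfy the original equation.

```python
import math

# x + y = 2 and 2*sqrt(3xy) = 3  =>  xy = 3/4  =>  {x, y} = {3/2, 1/2}
x, y = 1.5, 0.5

lhs = 2 * math.sqrt(3) - 3
rhs = (x + y) * math.sqrt(3) - 2 * math.sqrt(3 * x * y)
print(lhs, rhs)
```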
|
|algebra-precalculus|irrational-numbers|
| 0
|
Can the product of two rational numbers be an irrational number? (Kindly see the example in description)
|
I checked in many sources and I saw " Multiplication is closed under Rational Numbers Q ". But consider $$ a = \frac{1}{7} ; \;\;\; b = \frac{22}{1} ;$$ both a, b are individually rational (either repeating or terminating decimal values) $$ a = 0.\overline{142857} ; \;\;\;b = 22.0 ; $$ but their product $$ \frac{22}{7}=3.14159265359 ...$$ which is clearly irrational . Then how is multiplication closed on rational numbers??
|
Isn't it just absurd to think otherwise? The very fact that you can write a number in the form $\frac{p}{q}$ makes it a rational number, and if you have two such numbers, their product or sum can also be written in the same form, hence it is rational. Irrationality arises only when you can't write the number in fractional form, but these are already in fractional form. (In particular, $\frac{22}{7}=3.\overline{142857}$ is a repeating decimal; it approximates $\pi$ but is not equal to $\pi$ .)
|
|irrational-numbers|rational-numbers|decimal-expansion|
| 0
|
How do I show that the synthesis equation of the Fourier Transform equals the original function?
|
How do I show that the synthesis equation of the Fourier Transform equals the original function? I want to expand equation 2 using equation 1 and show that the integral indeed equals the original function. I am fine with some handwavy math. I have seen this derivation of the Fourier Transform: AMATH231_Fall_2020_Lecture_notes (37).pdf where the Fourier Transform is the limit of Fourier series. But that notation uses $\omega$ and also doesn't directly expand the integral. I am looking for something along these lines: Only the component that was at frequency $\xi$ can produce a non-zero value of the infinite integral, because all the other shifted components are oscillatory and integrate to zero.
|
Inserting (1) into (2) and formally swapping the order of integration yields $$\int_{-\infty}^\infty \left( \int_{-\infty}^\infty f(y) e^{-i2\pi\xi y} dy \right) e^{i2\pi\xi x}d\xi =\int_{-\infty}^\infty \int_{-\infty}^\infty e^{i2\pi\xi(x-y)} f(y) dy d\xi = \int_{-\infty}^\infty f(y)\left(\int_{-\infty}^\infty e^{i2\pi\xi(x-y)} d\xi\right) dy$$ Formally, the inner integral on the right is the Dirac delta $\delta(y-x)$ (since the Fourier transform of $1$ is a Dirac delta, due to $1$ only containing one frequency), so then we get $$ \int_{-\infty}^\infty f(y)\delta(y-x)dy = f(x)$$ Of course, this is all hand-wavy and you need to appeal to the theory of distributions to make this argument rigorous. The standard proof of the inversion formula uses an approximation argument: approximate a Dirac delta by a sequence of Gaussians, whose Fourier transforms are also Gaussians, and pass to the limit.
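A complementary numerical illustration (a sketch; the grid extent and step below are arbitrary choices, not part of the answer): push the Gaussian $f(x)=e^{-\pi x^2}$ , which is its own transform under this convention, through the analysis integral (1) as a Riemann sum and then through the synthesis integral (2), and compare with the original value.

```python
import numpy as np

h = 0.01
x = np.arange(-6.0, 6.0, h)
xi = np.arange(-6.0, 6.0, h)
f = np.exp(-np.pi * x ** 2)

# F(xi) = integral of f(x) exp(-2 pi i xi x) dx, as a Riemann sum
F = (f[None, :] * np.exp(-2j * np.pi * np.outer(xi, x))).sum(axis=1) * h

# synthesis integral (2) evaluated at a single point x0
x0 = 0.5
f_rec = (F * np.exp(2j * np.pi * xi * x0)).sum() * h
print(f_rec.real, np.exp(-np.pi * x0 ** 2))
```

Because the Gaussian decays so fast, truncating and discretizing both integrals introduces only negligible error here.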
|
|fourier-analysis|fourier-series|fourier-transform|
| 0
|
Why must rational $x$ and $y$ satisfy $x+y=2$ and $2\sqrt{3xy}=3$ in order for $2\sqrt{3}-3=(x+y)\sqrt{3}-2\sqrt{3xy}$ to be true?
|
I am going through a maths book and unable to understand the following: Given that $x$ and $y$ are rational numbers, then if the equation $$2\sqrt{3}-3=(x+y)\sqrt{3}-2\sqrt{3xy}$$ is true, then we must have $x+y=2$ and $2\sqrt{3xy}=3$ . I don't understand this. We cannot reach such a conclusion with equations involving only rational numbers. For example, let $a$ , $b$ , $c$ , and $d$ be rational numbers, and $a - b = c - d$ ; then it does not mean that $a = c$ and $b = d$ always for the equation to be true. How can we be sure that we must have $x + y = 2$ and $2 \sqrt{3xy} = 3$ for the equation to be true?
|
Consider $2 \sqrt{3} - 3 = (x + y) \sqrt{3} - 2 \sqrt{3xy}$ . If $xy=a^2$ for some non-negative rational $a$ then we get $$2 \sqrt{3} - 3 = (x + y) \sqrt{3} - 2a \sqrt{3}$$ $$\sqrt{3}= \frac3{2-x-y+2a}$$ which can’t be true. If $xy=3a^2$ for some non-negative rational $a$ then we get $$2 \sqrt{3} - 3 = (x + y) \sqrt{3} - 6a$$ $$\sqrt3\cdot (2-x-y)=3-6a.$$ Obviously, $x+y=2$ and $3=6a\iff 2\sqrt{3xy}=3$ . If $xy$ is of any other form, then $3xy$ is equal to $pa^2$ , where $a$ is non-negative rational and $p$ is such a non-negative rational that every prime is included once in its either numerator or denominator, or not included at all. And there is at least one such prime other than $3$ . The equality becomes: $$2 \sqrt{3} - 3 = (x + y) \sqrt{3} - 2a \sqrt{3p}.$$ Divide by $\sqrt3$ : $$2 - \sqrt{3} = (x + y) - 2a \sqrt{p}$$ $$2 - (x + y) = \sqrt3 - 2a \sqrt{p}$$ $$\implies (2 - x - y)^2=3+4a^2p-4a\cdot \sqrt{3p}.$$ But it can’t be true since $\sqrt{3p}$ is not rational due to structure
|
|algebra-precalculus|irrational-numbers|
| 0
|
Proving the existence of a stationary point on $(0,2)$
|
Given a function $f:[0, 2]\to\mathbb{R}$ continuous on $[0,2]$ and differentiable on $(0, 2)$ such that $f(1)=0$ while $f(0)f(2)>0$ , show that there exists at least one stationary point in the set $(0,2)$ Right away we have that $f(0)$ and $f(2)$ share the same sign (say both positive), and intuitively it seems like we could find $f(a)=f(b)$ and use Rolle’s Theorem. I’m just not sure how to put this into words and construct the proof rigorously. I struggled to apply the steps used in similar questions like this one because less is known about the exact values of the function in this case - only that it’s positive at certain points. I checked with the mean value theorem directly but it doesn’t seem to lead to anything. $$f’(c)=\frac{f(2)-f(0)}{2-0}$$
|
The hypothesis implies that both $f(0)$ and $f(2)$ must be of the same sign. Suppose $f(0)$ and $f(2)$ are both positive. As $f(1)=0$ , the minimum of $f$ on $[0,2]$ must be attained at a point $\alpha$ which is different from $0$ and $2$ . $\alpha$ would be a critical point then. If both $f(0)$ and $f(2)$ are negative, then as $f(1)=0$ , the maximum of $f$ must be attained at a point different from $0$ and $2$ . That would be the critical point.
|
|calculus|functions|
| 1
|
Closed form for infinite sum involving Chebyshev polynomials
|
There exists a generating function for the Chebyshev polynomials in the following form: $$\sum\limits_{n=1}^{\infty}T_{n}(x) \frac{t^n}{n} = \ln\left( \frac{1}{\sqrt{ 1 - 2tx + t^2 }}\right)$$ Question: can one find a similar closed form for $\sum\limits_{n=1}^{\infty}T_{2n}(x) \frac{t^n}{n}$ ?
|
It suffices to observe that $$\sum\limits_{n=1}^{\infty}T_{n}(x) \frac{(\color{red}-t)^n}{n} = \ln\left( \frac{1}{\sqrt{ 1 \color{red}+ 2tx + t^2 }}\right)$$ Then $$\begin{align} \ln\left( \frac{1}{\sqrt{ 1 - 2tx + t^2 }}\right) + \ln\left( \frac{1}{\sqrt{ 1 + 2tx + t^2 }}\right) &= 2\sum\limits_{n\in \mathbb{N}, 2|n}T_{n}(x) \frac{t^n}{n} \\ &= 2\sum\limits_{m=1}^{\infty}T_{2m}(x)\frac{t^{2m}}{2m}\\ &= \sum\limits_{m=1}^{\infty}T_{2m}(x)\frac{(\color{red}{t^{2} })^m}{m} \end{align}$$ $$\implies \sum\limits_{n=1}^{\infty}T_{2n}(x) \frac{t^n}{n}=\ln\left( \frac{1}{\sqrt{ 1 - 2\sqrt{t}x + t }}\right) + \ln\left( \frac{1}{\sqrt{ 1 + 2\sqrt{t}x + t }}\right) $$
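A quick numerical check of the derived closed form, using $T_n(\cos\theta)=\cos(n\theta)$ ( $x$ and $t$ below are arbitrary test values with $0 < t < 1$ ):

```python
import math

x, t = 0.3, 0.2
theta = math.acos(x)

# left side: truncated series sum T_{2n}(x) t^n / n
series = sum(math.cos(2 * n * theta) * t ** n / n for n in range(1, 80))

# right side: the closed form with sqrt(t)
s = math.sqrt(t)
closed = (math.log(1.0 / math.sqrt(1 - 2 * s * x + t))
          + math.log(1.0 / math.sqrt(1 + 2 * s * x + t)))
print(series, closed)
```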
|
|generating-functions|chebyshev-polynomials|
| 1
|
Does my proof that the recurrence $T(n) = T(\frac{n}{2}) + d = \Theta(lgn)$work?
|
Suppose we have the recurrence $T(n) = T(\frac{n}{2}) + d$ if $n = 2^j$ , where $j$ is some integer greater than $0$ (i.e. $n$ is a power of $2$ ). I know that this recurrence is $\Theta(\lg(n))$ , and I want to prove it with two inductive substitution proofs, one for $O(\lg(n))$ and one for $\Omega(\lg(n))$ . If both proofs are successful, we have $\Theta(\lg(n))$ . I am still getting used to proving bounds of recurrences using the substitution method, so I would welcome and appreciate comments on incorrect steps or jumps in reasoning in my proof. $(1)\ T(n) = O(\lg(n))$ To prove the first part, we suppose that $T(n') \leq c \lg(n')$ for all $n'$ with $n_0 \le n' < n$ , where $n_0$ we'll work out later. We then show that in that case $T(n) \leq c \lg(n)$ . Now, if $2n_0 \leq n$ , our assumption gives us that $T(\frac{n}{2}) \leq c\lg(\frac{n}{2})$ . By substitution, we get: $T(n) \leq c\lg(\frac{n}{2}) + d$ which implies $T(n) \leq c\lg(n) - c + d$ which holds for a $c$ sufficiently large to dominate the $-c + d$ term. For $n ,
|
This is verbose; also, you assume the result, whereas it would be better to arrive at it. Just set $n=2^k$ and $U(k)=T(n)$ . Then, since $\frac n2=2^{k-1}$ , $\ U(k)=U(k-1)+d\iff U(k)=U(0)+kd$ Now substitute back for $T$ , knowing $k=\log_2(n)$ and $U(0)=T(1)$ : $$T(n)=T(1)+d\,\log_2(n)$$
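The closed form can be checked directly against the recurrence ( $T(1)=5$ and $d=3$ are arbitrary test values, not from the posts):

```python
import math

T1, D = 5, 3   # arbitrary base value T(1) and additive constant d

def T(n):
    # T(n) = T(n/2) + d for n a power of 2, with base case T(1)
    return T1 if n == 1 else T(n // 2) + D

# closed form: T(n) = T(1) + d * log2(n)
for k in range(11):
    n = 2 ** k
    assert T(n) == T1 + D * int(math.log2(n))
print("closed form holds for n = 1, 2, 4, ..., 1024")
```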
|
|solution-verification|asymptotics|recursive-algorithms|
| 0
|
When can non-trivial roots of unity be partitioned such that product of their sums is an integer?
|
We have, for $z = e^{\frac{2 \pi i}{17}}$ , $$(z + z^2 + z^4 + z^8 + z^{16} + z^{15} + z^{13} + z^9) (z^3 + z^5 + z^6 + z^7 + z^{10} + z^{11} + z^{12} + z^{14}) \in \mathbb Z.$$ This happens because the Galois group $\operatorname{Gal}(\mathbb Q(z)/\mathbb Q)$ acts on the partition of roots by either swapping them or invariant. For the general case with primitive roots, it suffices to find a quotient group of $\operatorname{Gal}(\mathbb Q(z)/\mathbb Q) \cong (\mathbb Z/n\mathbb Z)^\times \cong (\mathbb Z/\varphi(n) \mathbb Z, +)$ isomorphic to $\mathbb Z/ 2 \mathbb Z$ . Since the Euler totient is always even, this is achievable. What happens if we require all non-trivial roots of unity (i.e. not equal to 1)? This would require a partition such that the group action of $(\mathbb Z/n\mathbb Z)^\times$ on $\{1,2,\dots, n-1\}$ either swaps the sets or is invariant. Or in more abstract terms, the $G$ -set has a quotient with exactly two elements. There is always a boring solution: put primi
|
The only such numbers are odd prime powers. Let me first reduce the case to elementary number theory. The orbits of the $G$ -set are various quotients of $G$ , and we need to pick another quotient isomorphic to $\mathbb Z/2\mathbb Z$ that lies below all those quotients. Dually, we have a lot of subgroups of $G$ , and we wish to select an index-2 subgroup containing all of them. These subgroups are precisely $G_k = \{mk+1 \pmod n \mid m \in \mathbb Z\} \cap (\mathbb Z/n\mathbb Z)^\times$ , where $k$ runs through the non-trivial factors of $n$ . It is now easy to see that if $n$ has a lot of factors, then $\bigcup_k G_k$ will inevitably generate the entire group. For $n = p^k$ , we have $\bigcup_{j=1}^k G_{p^j} = G_{p}$ . This is a subgroup of index $(p-1)$ , which is of course contained in a subgroup of index $2$ . (Even numbers $n$ are out of the question.) Suppose $n$ is not a prime power, say $n = p^k m$ with $p \nmid m$ . Then $$(ap^k + 1)(bm+1) \equiv (ap^k + bm) + 1 \pmod n,$$ whi
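As a numerical sanity check of the motivating $q=17$ example from the question (not part of the argument above): the two sums, indexed by the quadratic residues and non-residues mod $17$ , multiply to the integer $-4$ .

```python
import cmath

z = cmath.exp(2j * cmath.pi / 17)
qr = [1, 2, 4, 8, 16, 15, 13, 9]               # quadratic residues mod 17
nqr = [k for k in range(1, 17) if k not in qr]  # non-residues

s1 = sum(z ** k for k in qr)
s2 = sum(z ** k for k in nqr)
prod = s1 * s2
print(prod)
```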
|
|group-theory|galois-theory|group-actions|roots-of-unity|
| 0
|
Convergence in $\mathscr{L}^1$ implies uniform integrability?
|
Quoting from Rogers and Williams, p. 116, Theorem 21.2 (A necessary and sufficient condition for $\mathscr{L}^1$ convergence): Let $(X_n)$ be a sequence in $\mathscr{L}^1$ , and let $X\in\mathscr{L}^1$ . Then $X_n\to X$ in $\mathscr{L}^1$ , or, equivalently $\mathbb{E}(|X_n - X|)\to 0$ , if and only if the following conditions are satisfied: $X_n\to X$ in probability; the sequence $(X_n)$ is uniformly integrable. The proof provided by Rogers and Williams covers the 'if' part. Question: But how to prove in the converse direction that convergence in $\mathscr{L}^1$ implies uniform integrability? By the following remark in Rogers and Williams, I get the impression I should look for an integrable non-negative random variable $Y$ that bounds the collection of random variables $|X_n - X|$ , but I am not sure how (e.g. $\sup_n|X_n - X|$ I suppose can't do as it might not be integrable): "It is of course the 'if' part of the theorem that is useful. Since the result is 'best possible', it must
|
Clearly, $(X_n)$ is bounded in $L^1$ so it suffices to show that $$ \forall ε>0 \ \exists δ>0 \text{ such that } P(A)<δ \implies \sup_n \mathbb E[|X_n| 1_A] \le 2ε. $$ Let $ε>0$ and pick $n_0$ such that for $n> n_0$ $$ \mathbb E [|X_n-X|] < ε. $$ Note that $$\mathbb E[|X_n| 1_A] \le \mathbb E[|X_n-X| 1_A] + \mathbb E[|X| 1_A] \le \mathbb E[|X_n-X|] + \mathbb E[|X| 1_A].$$ Hence for $n>n_0$ you have that for any measurable $A$ $$ \mathbb E[|X_n| 1_A] \le ε + \mathbb E[|X| 1_A]. $$ Since $\{X_1, \dots , X_{n_0},X\}$ is uniformly integrable (this follows from the absolute continuity of the integral) we can find $δ>0$ such that $$ P(A)<δ \implies \mathbb E[|X_k| 1_A] < ε \text{ for } k \le n_0 \text{ and } \mathbb E[|X| 1_A] < ε. $$ In conclusion, for any measurable $A$ with $P(A)<δ$ you have that $\mathbb E[|X_n| 1_A] \le 2ε$ for every $n$ .
|
|probability-theory|uniform-integrability|
| 1
|
How to prove the following determinant identity
|
Prove: $$ \begin{array}{|cccccccccc|} 1 & 0 & 0 & \cdots & 0 & 1 & 0 & 0 & \cdots & 0 \\ x & x & x & \cdots & x & y & y & y & \cdots & y \\ x^{2} & 2 x^{2} & 2^{2} x^{2} & \cdots & 2^{m-1} x^{2} & y^{2} & 2 y^{2} & 2^{2} y^{2} & \cdots & 2^{m-1} y^{2} \\ x^{3} & 3 x^{3} & 3^{2} x^{3} & \cdots & 3^{m-1} x^{3} & y^{3} & 3 y^{3} & 3^{2} y^{3} & \cdots & 3^{m-1} y^{3} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ x^{n} & n x^{n} & n^{2} x^{n} & \cdots & n^{m-1} x^{n} & y^{n} & n y^{n} & n^{2} y^{n} & \cdots & n^{m-1} y^{n} \end{array}=(x-y)^{m^{2}}(x y)^{\frac{m^{2}-m}{2}}\left(\prod_{i=0}^{m-1} i !\right)^{2} $$ where $n=2m-1$ . A friend of mine gave me this problem. He claimed that he solved this by a very complicated method(which is too long to type here, and to be frank, I didn't get it at all). The following part is my progress. First, we want to extract $y$ so that we can let $z=\frac{x}{y}$ , and then the determinant should be a polyno
|
We will execute a "degree matching argument". That is, The determinant of the matrix $$ \begin{array}{|cccccccccc|} 1 & 0 & 0 & \cdots & 0 & 1 & 0 & 0 & \cdots & 0 \\ x & x & x & \cdots & x & 1 & 1 & 1 & \cdots & 1 \\ x^{2} & 2 x^{2} & 2^{2} x^{2} & \cdots & 2^{m-1} x^{2} & 1 & 2 & 2^{2} & \cdots & 2^{m-1} \\ x^{3} & 3 x^{3} & 3^{2} x^{3} & \cdots & 3^{m-1} x^{3} & 1 & 3 & 3^{2} & \cdots & 3^{m-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ x^{n} & n x^{n} & n^{2} x^{n} & \cdots & n^{m-1} x^{n} & 1 & n & n^{2} & \cdots & n^{m-1} \end{array} $$ is a polynomial in $x$ (by the Laplace expansion). Its degree can also clearly be upper bounded by the same expansion. Then, we will show that $(x-1)^{m^2}$ and $x^{\frac{m^2-m}{2}}$ divide this determinant, as polynomials. Since these are co-prime polynomials, we will have then shown that the determinant is divisible by $(x-1)^{m^2}x^{\frac{m^2-m}{2}}$ , and hence determined up to a constant. Thi
|
|linear-algebra|algebra-precalculus|determinant|multilinear-algebra|
| 1
|
How to determine if function is differentiable?
|
I am given this piecewise function $f: \mathbb{R} \rightarrow \mathbb{R}$ , $f(x)= \left\{\begin{array}{ll}x^2 & x>0 \\ 0 & x ≤ 0 \\ \end{array} \right. $ I have to determine if the function is differentiable over $\mathbb{R}$ or not. The way I think about differentiability is like this: "Can a function be differentiated at every point?" But that doesn’t really help me, I have been looking for a specific formula I can generally use to determine if a function is differentiable or not, but with no luck. How should I go about finding out if this function is differentiable or not?
|
Perhaps a slightly clearer approach is simply the following. A function $f$ is said to be differentiable at $x$ if the following limit $$ \lim_{h\to0} \frac{f(x+h)-f(x)}{h} $$ exists and is finite. Now note that the limit above exists if and only if the limits from above and below are equal (and are finite). So we may define (if they exist) $$ f'_\pm (x)= \lim_{h\to0^\pm} \frac{f(x+h)-f(x)}{h} $$ and we say $f$ is differentiable at $x$ if $f'_+(x)=f'_-(x)\neq \pm \infty$ , in which case $f'(x)= f'_+(x)=f'_-(x)$ . Clearly your function is differentiable for all $x>0$ and for all $x<0$ . It remains to check $x=0$ . Now $$f'_+(0) = \lim_{h\to0^+} \frac{h^2-0}{h} = 0. $$ On the other hand $$f'_-(0) = \lim_{h\to0^-} \frac{0-0}{h} = 0. $$ The two limits coincide so the function is differentiable over the whole real line.
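A quick numerical illustration of the two one-sided difference quotients at $0$ (the step size is an arbitrary small value):

```python
def f(x):
    # the piecewise function from the question: x^2 for x > 0, else 0
    return x * x if x > 0 else 0.0

h = 1e-8
right = (f(h) - f(0.0)) / h        # quotient from the right, equals h -> 0
left = (f(-h) - f(0.0)) / (-h)     # quotient from the left, identically 0
print(right, left)
```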
|
|ordinary-differential-equations|functions|derivatives|continuity|
| 1
|
Is it possible to estimate the elipse perimeter using linear algebra
|
Assuming the ellipse is just a circle being stretched on one axis, and the perimeter of a shape equals the sum of all vector lengths. We may say that the ellipse is a circle being linearly transformed into an ellipse, and then the change in the perimeter is basically the change in all the vector lengths, which we can calculate by dividing the transformation into very small transformations, each of them creating an ellipse from the last ellipse, till you get the final ellipse from the circle. Because if we take a shape and scale one axis of each vector of the shape, you will get a new shape with perimeter equal to the ratio between all $y$ values of each vector and the $x$ values of each vector. But in the case of the ellipse the ratio changes as the ratio between the big diameter of the ellipse and the small one gets bigger. So in this case we may divide it into multiple transformations, each time multiplying the perimeter of the last ellipse by the ratio between the diameters of the ellipse and the old perimeter. For example
|
No. This is because if you stretch one dimension by a factor $\alpha$ , the curve length is not proportional to this stretch. The curve length operator is a non-linear operator. You can clearly see this by measuring the hypotenuse of a triangle and noting that $\sqrt{\alpha^2+1}$ is not proportional to $\alpha$ .
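A numerical illustration of this non-linearity (the axis lengths are arbitrary test values): stretching the $x$ -axis of the unit circle by $\alpha = 2$ multiplies the perimeter by about $1.54$ , not by $2$ .

```python
import math

def perimeter(a, b, n=100000):
    # polygonal approximation of the arc length of (a cos t, b sin t)
    total, prev = 0.0, (a, 0.0)
    for i in range(1, n + 1):
        t = 2.0 * math.pi * i / n
        p = (a * math.cos(t), b * math.sin(t))
        total += math.hypot(p[0] - prev[0], p[1] - prev[1])
        prev = p
    return total

circle = perimeter(1.0, 1.0)      # 2*pi
stretched = perimeter(2.0, 1.0)   # the stretched circle, i.e. an ellipse
print(circle, stretched, stretched / circle)
```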
|
|linear-algebra|geometry|
| 0
|
Evaluate $\int\limits_0^{\sqrt 2 } {\frac{1}{{3{a^2} + 2}}\frac{{\arctan \left( {\sqrt {{a^2} + 1} } \right)}}{{\sqrt {{a^2} + 1} }}da}$
|
I am trying to evaluate this: $$I=\int\limits_0^{\sqrt 2 } {\frac{1}{{3{a^2} + 2}}\frac{{\arctan \left( {\sqrt {{a^2} + 1} } \right)}}{{\sqrt {{a^2} + 1} }}da} $$ It looks like Ahmed's integral, but I don't know how to solve this. Here is what I tried: $$\eqalign{ & I\left( b \right) = \int\limits_0^{\sqrt 2 } {\frac{1}{{3{a^2} + 2}}\frac{{\arctan \left( {b\sqrt {{a^2} + 1} } \right)}}{{\sqrt {{a^2} + 1} }}da} ,I\left( 0 \right) = 0 \cr & I'\left( b \right) = \int\limits_0^{\sqrt 2 } {\frac{1}{{\left( {3{a^2} + 2} \right)\left( {1 + {b^2}\left( {{a^2} + 1} \right)} \right)}}da} = \left. {\left( {\frac{{\sqrt 6 }}{{2{b^2} + 6}}{{\tan }^{ - 1}}\left( {\sqrt {\frac{3}{2}} a} \right) - \frac{b}{{\sqrt {{b^2} + 1} \left( {{b^2} + 3} \right)}}{{\tan }^{ - 1}}\left( {\frac{{ab}}{{\sqrt {{b^2} + 1} }}} \right)} \right)} \right|_0^{\sqrt 2 } \cr & = \frac{{\pi \sqrt {\frac{2}{3}} }}{{2{b^2} + 6}} - \frac{b}{{\left( {{b^2} + 3} \right)\sqrt {{b^2} + 1} }}{\tan ^{ - 1}}\left( {\frac{{\sqrt 2 b}}{
|
\begin{align} &\int_{0}^{\sqrt2}\frac{\tan^{-1}\sqrt{x^2+1}}{(3x^2+2)\sqrt{x^2+1}}\,dx\\ =& \ \int_0^{\sqrt2} \tan^{-1}\left(\sqrt{x^2+1}\right)\ d\bigg(-\frac1{\sqrt2}\cot^{-1}\frac x{\sqrt{2x^2+2}} \bigg)\\ \overset{ibp}=& \ \frac{\pi^2}{72\sqrt2}+\frac1{\sqrt2}\int_0^{\sqrt2} \frac{x\cot^{-1}\frac x{\sqrt{2x^2+2}} }{(x^2+2)\sqrt{x^2+1}} \ dx\\ =& \ \frac{\pi^2}{72\sqrt2}+\frac1{\sqrt2}\int_0^{\sqrt2} \int_0^{\sqrt2} \frac{x^2 }{(x^2+2)(x^2+y^2 +x^2y^2 )} \ dy \ dx\\ =& \ \frac{\pi^2}{72\sqrt2}+\frac1{2\sqrt2}\int_0^\sqrt2 \int_0^\sqrt2 \bigg( \frac{x^2 }{x^2+2}+\frac{y^2}{y^2+2} \bigg)\frac1{x^2+y^2 +x^2y^2 }\ dy \ dx\\ =& \ \frac{\pi^2}{72\sqrt2}+\frac1{\sqrt2}\int_0^\sqrt2 \int_0^\sqrt2 \frac{1}{(x^2+2)(y^2+2)}\ dy \ dx\\ =& \ \frac{\pi^2}{72\sqrt2}+\frac1{\sqrt2}\left( \frac\pi{4\sqrt2}\right)^2 =\frac{13\pi^2}{288\sqrt2} \end{align}
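The closed form can be confirmed numerically (composite Simpson's rule; the node count is an arbitrary choice, not part of the derivation):

```python
import math

def integrand(a):
    r = math.sqrt(a * a + 1.0)
    return math.atan(r) / ((3.0 * a * a + 2.0) * r)

# composite Simpson's rule on [0, sqrt(2)]
n = 20000                       # even number of subintervals
lo, hi = 0.0, math.sqrt(2.0)
h = (hi - lo) / n
s = integrand(lo) + integrand(hi)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(lo + i * h)
numeric = s * h / 3.0

closed = 13.0 * math.pi ** 2 / (288.0 * math.sqrt(2.0))
print(numeric, closed)
```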
|
|calculus|integration|
| 1
|
Find the equation of the tangent to a system using implicit function theorem
|
Here is my system of equations: $C: \begin{cases}x^2 + y^2 +z^2 = 14\\ x^3+y^3+z^3=36 \end{cases}$ Firstly I managed to show that for all $a \in C$ , the implicit function theorem applies to express $x$ and $y$ as a function of $z$ . Which implies the existence of $\phi$ such that, for all $(x,y,z)$ close enough to $a$ , we get: $f(x,y,z)=0 \Leftrightarrow \phi(z) = (x,y)$ (with $f$ being the function: $f(x,y,z) = (x^2+y^2+z^2-14, x^3+y^3+z^3-36)$ ) What I do not understand is how to find the equation of the tangent at the point $(1,2,3) \in C$ . What I had done was to start from the relation given by the theorem: $f(\phi(z),z)=0$ By differentiating both sides, I get: $\frac{\partial f}{\partial x \partial y}(\frac{\partial\phi(z)}{\partial z},z) \circ \frac{\partial\phi(z)}{\partial z} + \frac{\partial f(\phi(z),z)}{\partial z} = 0$ Which I rearrange: $\frac{\partial\phi(z)}{\partial z} = \frac{\partial f^{-1}}{\partial x \partial y}(\frac{\partial\phi(z)}{\partial z},z) \circ \frac{\partial f(
|
Your system in a neighbourhood of the point $P=(1,2,3)$ defines a differentiable curve $\gamma$ . By using the Implicit Function Theorem, we may find the tangent line to $\gamma$ at $P$ : there exist two functions $x=\phi(z)$ and $y=\psi(z)$ such that $\phi(3)=1$ and $\psi(3)=2$ and the desired tangent line has the following parametric equation: $$\begin{cases} x=1+\phi'(3)t\\ y=2+\psi'(3)t\\ z=3+t\end{cases}$$ You may find $\phi'(3)$ and $\psi'(3)$ , by deriving the equations in the system with respect to $z$ and solving it: $$\begin{cases}\phi(z)^2 + \psi(z)^2 +z^2 = 14\\ \phi(z)^3+\psi(z)^3+z^3=36 \end{cases} \implies \begin{cases} 2\cdot 1\cdot \phi'(3) + 2\cdot 2\cdot \psi'(3) +2\cdot 3 = 0\\ 3\cdot 1^2\cdot \phi'(3) + 3\cdot 2^2\cdot \psi'(3) +3\cdot 3^2 = 0 \end{cases}$$ Can you take it from here?
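For readers who want to check their own numbers afterwards, here is a small sketch (plain Cramer's rule on the differentiated system; not part of the original answer):

```python
# The differentiated system at (1, 2, 3):
#   2*phi' + 4*psi'  + 6  = 0
#   3*phi' + 12*psi' + 27 = 0
a11, a12, b1 = 2.0, 4.0, -6.0
a21, a22, b2 = 3.0, 12.0, -27.0
det = a11 * a22 - a12 * a21
phi1 = (b1 * a22 - a12 * b2) / det     # phi'(3)
psi1 = (a11 * b2 - b1 * a21) / det     # psi'(3)

# the tangent direction (phi'(3), psi'(3), 1) must be orthogonal to both
# constraint gradients at (1, 2, 3)
grad1 = (2.0, 4.0, 6.0)    # gradient of x^2 + y^2 + z^2 at (1, 2, 3)
grad2 = (3.0, 12.0, 27.0)  # gradient of x^3 + y^3 + z^3 at (1, 2, 3)
d = (phi1, psi1, 1.0)
dot1 = sum(g * c for g, c in zip(grad1, d))
dot2 = sum(g * c for g, c in zip(grad2, d))
print(phi1, psi1)
```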
|
|systems-of-equations|differential|tangent-line|implicit-function-theorem|
| 1
|
Determine the domain-sets of $f(z)=z\bar{z}/(z^2+\bar{z}^2)$
|
Determine the domain-sets of $$f(z)=\frac{z\bar{z}}{z^2+\bar{z}^2}$$ Here's my attempt: Consider z = x+iy, $z^2+\bar{z}^2 = (x+iy)^2 +(x-iy)^2 = 2x^2$ we know that $z^2+\bar{z}^2 \neq 0$ and so $x \neq 0$ The solution is $\Bbb{C}$ \ {z = x + iy | x = y or x = -y} I don't really understand the solution or the steps that will lead to it. Any help is appreciated.
|
You made a slight error: $$\begin{align}z^2+\bar{z}^2=&(x+iy)^2+(x-iy)^2 \\=&(x^2+2ixy+i^2y^2)+(x^2-2ixy+i^2y^2) \\=&2x^2+2i^2y^2 \\=&2(x^2-y^2) \\2(x^2-y^2)\ne & 0 \\x^2\ne&y^2 \\x\ne&\pm y\end{align}$$ so the solution should be: $$\mathbb{C}\backslash\{z=x+iy\,|\,x=\pm y\}$$ you could also word it as: $$\left|\Re(z)\right|\ne\left|\Im(z)\right|$$
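A quick spot-check of the corrected identity $z^2+\bar z^2 = 2(x^2-y^2)$ , and of the denominator vanishing exactly on the lines $x=\pm y$ :

```python
import random

random.seed(0)
for _ in range(100):
    xx, yy = random.uniform(-5, 5), random.uniform(-5, 5)
    z = complex(xx, yy)
    assert abs(z ** 2 + z.conjugate() ** 2 - 2 * (xx * xx - yy * yy)) < 1e-9

for xx in (1.0, -2.5):
    z = complex(xx, xx)           # a point on the excluded line x = y
    assert abs(z ** 2 + z.conjugate() ** 2) < 1e-12
print("identity verified")
```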
|
|complex-analysis|complex-numbers|principal-ideal-domains|
| 1
|
Closed form for infinite sum involving Chebyshev polynomials
|
There exists a generating function for the Chebyshev polynomials in the following form: $$\sum\limits_{n=1}^{\infty}T_{n}(x) \frac{t^n}{n} = \ln\left( \frac{1}{\sqrt{ 1 - 2tx + t^2 }}\right)$$ Question: can one find a similar closed form for $\sum\limits_{n=1}^{\infty}T_{2n}(x) \frac{t^n}{n}$ ?
|
Just for the record, here is a little modification to NN2's answer in order to avoid the inner square root for $t$ : $$\sum\limits_{n=1}^{\infty}T_{2n}(x) \frac{t^n}{n} = - \frac{1}{2} \, \ln \bigg ( 1 + 2\,(1-2x^2)\,t + t^2 \bigg) $$
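The square-root-free form can be checked numerically the same way, via $T_n(\cos\theta)=\cos(n\theta)$ ( $x$ and $t$ are arbitrary test values with $|t|<1$ ):

```python
import math

x, t = 0.45, 0.3
theta = math.acos(x)

series = sum(math.cos(2 * n * theta) * t ** n / n for n in range(1, 100))
closed = -0.5 * math.log(1 + 2 * (1 - 2 * x * x) * t + t * t)
print(series, closed)
```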
|
|generating-functions|chebyshev-polynomials|
| 0
|
Why must rational $x$ and $y$ satisfy $x+y=2$ and $2\sqrt{3xy}=3$ in order for $2\sqrt{3}-3=(x+y)\sqrt{3}-2\sqrt{3xy}$ to be true?
|
I am going through a maths book and unable to understand the following: Given that $x$ and $y$ are rational numbers, then if the equation $$2\sqrt{3}-3=(x+y)\sqrt{3}-2\sqrt{3xy}$$ is true, then we must have $x+y=2$ and $2\sqrt{3xy}=3$ . I don't understand this. We cannot reach such a conclusion with equations involving only rational numbers. For example, let $a$ , $b$ , $c$ , and $d$ be rational numbers, and $a - b = c - d$ ; then it does not mean that $a = c$ and $b = d$ always for the equation to be true. How can we be sure that we must have $x + y = 2$ and $2 \sqrt{3xy} = 3$ for the equation to be true?
|
Hmm... it's not as obvious as I thought. See, my first thought was that this is a case of: if $W$ is irrational and $a,b,c,d$ are rational and $a+bW = c+dW$ then $b=d$ and $a=c$ . Why? Because $a-c = (d-b)W$ . The left hand side is rational so the right hand side must be rational. But $W$ is irrational and the only way an irrational times a rational can be rational is if we are multiplying by zero. So $d-b =0$ and $a-c=0$ and so $a=c$ and $b=d$ . But for us to use this on $-3 +2\sqrt 3 = -2\sqrt{3xy} + (x+y)\sqrt 3$ and reach the conclusion $-3 =-2\sqrt{3xy}$ and $2=x+y$ , we would have to assume $-2\sqrt{3xy}$ is rational. And we can't assume that, can we? Well, let's muck about: from $2 \sqrt{3} - 3 = (x + y) \sqrt{3} - 2 \sqrt{3xy}$ we can divide both sides by $\sqrt 3$ to get $2 -\sqrt{3}= (x+y) - 2\sqrt{xy}$ . Put the nasty irrationals to one side $2-(x+y) = \sqrt 3 - 2\sqrt{xy}$ and square both sides $4 -4(x+y) + (x+y)^2 = 3 + 4xy -4\sqrt{3xy}$ Now the LHS is rational so
|
|algebra-precalculus|irrational-numbers|
| 1
|
Does there exist at least four elements in $\operatorname{SL}(2,q)$ of different order?
|
The Question: In $\operatorname{SL}(2,q)$ , does there always exist at least four elements of distinct order? Thoughts: I have strong reasons to suspect that this is true. The order of $\operatorname{SL}(2,q)$ is $(q-1)q(q+1)$ , so is a multiple of six. Cauchy's theorem implies the existence of elements $g,h$ of order two and three, respectively. There's the identity element. Also, since $-I$ and $h$ commute, when $q$ is odd, $|-Ih|=6$ . So what I really need is the $q$ even case. I've spent far too long on this but I need it for my dissertation.
|
As $\mathrm{SL}(2,2)\cong S_3$ , it doesn't contain elements besides ones of orders $1,2,$ and $3$ . In particular your claim is false for $q=2$ . However for $q=2^n$ with $n>1$ you can prove the existence of an element of order other than $1,2,$ or $3$ as follows: If $n>1$ view $\mathbb{F}_{2^n}$ as $\mathbb{F}_2[x]/(P)$ for some irreducible degree $n$ polynomial $P$ . Now the matrix $$\begin{pmatrix}x&x+1\\1&1\end{pmatrix}\in\mathrm{SL}(2,2^n)$$ can be checked to have first, second, and third powers all unequal to the identity matrix, and thus to be of order exceeding $3$ .
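For $q=4$ this can be verified directly; a small sketch (encoding $\mathbb F_4=\mathbb F_2[x]/(x^2+x+1)$ as 2-bit integers with $x \mapsto$ `0b10` is an implementation choice, not from the answer):

```python
def gmul(a, b):
    # carry-less multiply in GF(2)[x], then reduce modulo x^2 + x + 1
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    if r & 0b100:
        r ^= 0b111
    return r

def matmul(m, n):
    # 2x2 matrix product over GF(4); addition is XOR
    return [[gmul(m[0][0], n[0][0]) ^ gmul(m[0][1], n[1][0]),
             gmul(m[0][0], n[0][1]) ^ gmul(m[0][1], n[1][1])],
            [gmul(m[1][0], n[0][0]) ^ gmul(m[1][1], n[1][0]),
             gmul(m[1][0], n[0][1]) ^ gmul(m[1][1], n[1][1])]]

M = [[0b10, 0b11], [0b01, 0b01]]   # [[x, x+1], [1, 1]]
I = [[1, 0], [0, 1]]

# determinant x*1 + (x+1)*1 = 1, so M lies in SL(2, 4)
det = gmul(M[0][0], M[1][1]) ^ gmul(M[0][1], M[1][0])

P, order = M, 1
while P != I:
    P = matmul(P, M)
    order += 1
print(det, order)  # determinant 1, and order strictly greater than 3
```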
|
|matrices|group-theory|finite-groups|
| 0
|
How to integrate with respect to $\overline{z}$
|
Suppose we want to evaluate $$\int_C f(z) \, d \overline{z}$$ (I'm intentionally being vague with how $f$ and $C$ are defined because my question doesn't focus on these details - see example below for such details). Suppose that the best way to go about this is to instead evaluate the integral with respect to $z$ . How do we write $d\overline{z}$ in terms of $z$ (that is, what the relationship is between $d\overline{z}$ and $dz$ )? Take, for example, the following problem from Ahlfors's Complex Analysis : If $P(z)$ is a polynomial and $C$ denotes the circle $|z-a| = R$ , what is the value of $$\int_C P(z) \, d \overline{z} \, \, ?$$ After seeing a solution to the above problem, I suspected that the relationship was that $d\overline{z} = -dz$ based on this particular problem's solution. Is the relationship $d\overline{z} = -dz$ correct? Examples (as above) of this kind in the textbooks I've read are scarce and the explanations for how these integrals are evaluated are also scarce (if an
|
(As you found out in the meantime,) Ahlfors defines integration with respect to $\bar z$ by “double conjugation:” $$ \int_C f(z) \, d \overline{z} = \overline{\int_C \overline{f(z)} \, dz} \, . $$ If $\gamma:[0, 1]\to \Bbb C$ is a parametrization of $C$ then $$ \int_C f(z) \, d \overline{z} = \int_0^1 f(\gamma(t)) \overline{\gamma'(t)} \, dt \, . $$ There is no relationship $d \bar z = -dz$ , that impression is perhaps caused by a wrong intermediate result in the solution of the exercise. With respect to the exercise: For all integers $n$ we have $$ \int_C (\bar z - \bar a)^n dz = i R^{n+1} \int_0^{2 \pi} e^{i(1-n)t} \, dt = \begin{cases} 2 \pi i R^2 & \text{ if } n = 1 \, ,\\ 0 & \text{ if } n \ne 1 \, . \end{cases} $$ It follows that for $P(z) = \sum_{k=0}^n a_k (z-a)^k$ $$ \int_C \overline{P(z)} \, dz = 2 \pi i R^2 \overline{a_1} = 2 \pi i R^2 \overline{P'(a)} \, . $$ (That intermediate result is wrong in your solution.) Conjugation then gives the result $$ \int_C P(z) \, d\bar z = -2\pi i R^2 P'(a) \, . $$
|
|integration|complex-analysis|contour-integration|
| 1
|
Find the plane which contains the lines.
|
Q: Find the plane which contains the lines $$ \langle 2,1,4\rangle + t\langle1,-2,4\rangle\text{ and }\langle2,-3,3\rangle + s\langle2,-4,8\rangle $$ I know to get the plane I need a point and a vector normal to the plane. Since my lines are parallel if I take the cross product of the direction vectors I'll get zero. I am confused on how to get the normal vector here. Would using the two points to get a vector, then taking the cross product of that vector and the direction of the line be the right move here?
|
Hint: The two lines given have the same direction vector: $(1,-2,4)$ . This gives you one direction vector in the plane. The other direction vector is given by: $(2,1,4)-(2,-3,3)=(0,4,1)$ . Take the cross product of these 2 vectors to obtain the normal of the plane $\overrightarrow{n}$ . Equation of plane: $\overrightarrow {n}\cdot(\overrightarrow {r}- (2,1,4))=0$
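A quick numerical check of the hint (a sketch; the extra sample points are just each base point plus one step along the common direction):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

d = (1, -2, 4)                     # common direction of both lines
w = sub((2, 1, 4), (2, -3, 3))     # vector between the two base points
n = cross(d, w)                    # normal of the plane
p0 = (2, 1, 4)

# every sampled point of both lines satisfies n . (r - p0) = 0
for pt in [(2, 1, 4), (3, -1, 8), (2, -3, 3), (4, -7, 11)]:
    assert dot(n, sub(pt, p0)) == 0
print(n)
```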
|
|calculus|multivariable-calculus|
| 0
|
How to evaluate $\sum_{n=1}^{\infty}\frac{H_{n}^{(2)}}{n^{4}}$
|
It may be rather tedious and I will have to delve into it deeper, but I have a little something. We probably already know this one. The thing is, the first one results in yet another Euler sum. But, I think we can get it without contours. It may require some work, though. $$ \sum_{n=1}^\infty\frac{H_n^{(2)}}{n^{4}} = \sum_{n=1}^\infty\frac 1{n^4}\sum_{k=1}^n \frac 1 {k^2} = \sum_{n=1}^\infty\frac 1{n^2} \sum_{k=n}^\infty\frac 1{k^4}\ . $$ This gives: $$ \sum_{n=1}^{\infty}\frac{H_n^{(2)}}{n^4} = \zeta(2)\zeta(4)+\zeta(6) -\sum_{n=1}^{\infty}\frac{H_n^{(4)}}{n^2} $$ where that sum on the end is equal to $$ \sum_{n=1}^{\infty}\frac{H_n^{(4)}}{n^2} =\frac{37}{12}\zeta(6)-\zeta^{2}(3)\ . $$ Which matches up if we change it all to $\zeta(6)$ i.e. $\displaystyle \zeta(4)\zeta(2)=\frac{7}{4}\zeta(6)$ and $$ \sum_{n=1}^{\infty}\frac{H_n^{(2)}}{n^4} =\zeta(2)\zeta(4) + \int_0^1\frac{\log(x)\operatorname{Li}_4(x)}{1-x}\; dx $$ My main question is how to evaluate the last integral. Any better ways to evaluate it are welcome as well.
|
\begin{align}J&=\int_0^1\frac{\text{Li}_4(x)\ln x}{1-x}dx\\ &=-\frac{1}{6}\int_0^1\int_0^1\frac{x\ln^3 t\ln x}{(1-x)(1-tx)}dtdx\\ &=-\frac{1}{6}\int_0^1\int_0^1\left(\frac{\ln^3 t\ln x}{(1-x)(1-t)}-\frac{\ln^3 t\ln x}{(1-tx)(1-t)}\right)dtdx\\ &=-\zeta(2)\zeta(4)+\frac{1}{6}\int_0^1\int_0^1\frac{\ln^3 t\Big(\ln(t x)-\ln t\Big)}{(1-tx)(1-t)}dtdx\\ &=-\zeta(2)\zeta(4)+\frac{1}{6}\int_0^1 \frac{\ln^3 t}{t(1-t)}\left(\int_0^t\frac{\ln x}{1-x}dx\right)dt-\frac{1}{6}\int_0^1 \frac{\ln^4 t}{t(1-t)}\left(\int_0^t\frac{1}{1-x}dx\right)dt\\ &\overset{\text{IBP}}=-\zeta(2)\zeta(4)-\frac{1}{24}\int_0^1\frac{\ln^5 t}{1-t}dt+\frac{1}{6}\int_0^1\frac{\ln^3 t}{1-t}\left(\int_0^t\frac{\ln x}{1-x}dx\right)dt+\frac{1}{30}\int_0^1\frac{\ln^5 t}{1-t}dt+\\ &\frac{1}{6}\int_0^1\frac{\ln^4 t\ln(1-t)}{1-t}dt\\ &=\zeta(6)-\zeta(2)\zeta(4)+\frac{1}{6}\underbrace{\int_0^1\frac{\ln^3 t}{1-t}\left(\int_0^t\frac{\ln x}{1-x}dx \right)dt}_{=A}+\frac{1}{6}\int_0^1\frac{\ln^4 t\ln(1-t)}{1-t}dt\\ A&=\int_0^1\int_0^1\frac
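As a numeric cross-check: the identities quoted in the question combine (using $\zeta(2)\zeta(4)=\tfrac74\zeta(6)$) to the known value $\sum_{n\ge1}H_n^{(2)}/n^4=\zeta(3)^2-\tfrac13\zeta(6)$, which a short pure-Python computation supports; the truncation limits below are arbitrary:

```python
import math

# zeta(3) by direct summation; the tail beyond N is about 1/(2N^2), negligible here
N = 100_000
zeta3 = sum(1.0 / n**3 for n in range(1, N + 1))
zeta6 = math.pi**6 / 945

# partial sum of H_n^{(2)} / n^4; terms decay like 1/n^4, so convergence is fast
S, H2 = 0.0, 0.0
for n in range(1, 5001):
    H2 += 1.0 / n**2
    S += H2 / n**4

closed_form = zeta3**2 - zeta6 / 3
```

The two numbers agree to about ten digits, consistent with the chain of identities above.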
|
|calculus|integration|definite-integrals|closed-form|harmonic-numbers|
| 0
|
get dihedral angles of octahedron given all triangles
|
An octahedron (not necessarily regular) consists of 8 triangles. You can see it as two pyramids glued together (from now on I only consider this case). Call the triangles in the upper pyramid $T_1, T_2, T_3, T_4 $ (clockwise) and in the lower pyramid $T_5, …, T_8$ (clockwise). The seams are $(T_i, T_{i+1}) (i=1,2,3,5,6,7), (T_4, T_1), (T_8, T_5)$ and $(T_i, T_{i+4}) (i=1,2,3,4)$ . Assume you know all information about the triangles. Call the angles and lengths for triangle $T_i$ (starting from the top (for $i = 1,...,4$ ), resp. bottom (for $i = 5,6,7,8$ ) and going clockwise): $(a_i, b_i, c_i)$ for lengths and $(\alpha_i, \beta_i, \gamma_i)$ for the angles. You know that they form an octahedron; the question is now: How can I find all dihedral angles? I guess you can find equations which these dihedral angles need to fulfill, and that by solving this system of equations I can find the dihedral angles. Does someone have ideas for getting these equations?
|
This boils down to finding dihedral angles in a pyramid with quadrilateral base, given all its edges. Let's take then a pyramid with base $ABCD$ and vertex $V$ and give its edges the names in figure below: $abcd$ its lateral edges and $pqrs$ its base edges. Let's also name the diagonals of the base: $x=AC$ and $y=BD$ , while $h=VH$ is the height of the pyramid (not shown in the figure). To compute dihedral angles we need to compute $x$ , $y$ and $h$ . To this end, let's recall that the volume of a tetrahedron and the area of a triangle can be computed from their edges via the Cayley-Menger determinant . For instance, the volume of tetrahedron $VABC$ is: $$ 288(\text{volume}_{VABC})^2= \begin{vmatrix} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & a^2 & b^2 & c^2 \\ 1 & a^2 & 0 & p^2 & x^2 \\ 1 & b^2 & p^2 & 0 & q^2 \\ 1 & c^2 & x^2 & q^2 & 0 \end{vmatrix} $$ while the area of triangle $ABC$ is given by: $$ 16(\text{area}_{ABC})^2= -\begin{vmatrix} 0&1&1&1\\1&0&x^2&q^2\\1&x^2&0&p^2\\1&q^2&p^2&0 \end{vmatrix} $$
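The Cayley-Menger volume formula quoted above can be checked on a tetrahedron of known volume, say the one with vertices $O, e_1, e_2, e_3$ (volume $1/6$, squared distances $1$ from the origin and $2$ between the unit vectors). A pure-Python sketch with hypothetical helper names:

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for a 5x5)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cm_volume_sq(d):
    """V^2 from the 4x4 matrix d of squared pairwise distances of 4 points,
    via 288 V^2 = bordered Cayley-Menger determinant."""
    M = [[0, 1, 1, 1, 1]]
    for i in range(4):
        M.append([1] + [d[i][j] for j in range(4)])
    return det(M) / 288

# O, e1, e2, e3: distances 1 from O, sqrt(2) between the unit vectors
d = [[0, 1, 1, 1],
     [1, 0, 2, 2],
     [1, 2, 0, 2],
     [1, 2, 2, 0]]
Vsq = cm_volume_sq(d)   # should equal (1/6)^2 = 1/36
```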
|
|geometry|trigonometry|3d|
| 1
|
Prove solution formula of the Water Bottles problem
|
Problem There are n water bottles that are initially full of water. You can exchange m empty water bottles for one full water bottle. The operation of drinking a full water bottle turns it into an empty bottle. Given the two integers n and m , return the maximum number of water bottles you can drink. Solution $(int)Math.Ceiling((((double)n * m) / (m - 1)) - 1)$ My question First, we will drink $n$ water bottles, after that we will exchange $n$ empty water bottles for $n/m$ full bottles and will drink them, then we will exchange the $n/m$ empty water bottles for $n/m^2$ and so on. So, in the end the number of water bottles we will drink will be $n + n/m + n/m^2 + n/m^3 + ... = n(1 + 1/m + 1/m^2 + 1/m^3 + ...)$ . Now, $1/m + 1/m^2 + 1/m^3 + ...$ is the sum of an infinite geometric series, so $1/m + 1/m^2 + 1/m^3 + ... = (1/m) / (1 - 1/m) = 1 / (m - 1)$ . So, the number of drunk bottles will be $n(1 + 1/m + 1/m^2 + 1/m^3 + ...) = n(1 + 1/(m - 1)) = (n*m)/(m-1)$ . Where I am stuck: After that
|
Suppose you drink in the end a total of $d$ bottles, the answer to the question. You are going to end up with at least $1$ and no more than $m−1$ empty bottles: when you stop, you must have just emptied a bottle and not have enough empty bottles to get a new full bottle. You started with $n$ full bottles, so acquired $d-n$ extra full bottles by handing over $m(d-n)$ empty bottles. So, counting the empty bottles you have left, $$1 \le d-m(d-n) \le m-1.$$ Assuming $m>1$ , this implies $$\frac{mn}{m-1} -1 \le d \le \frac{mn}{m-1}-\frac{1}{m-1}.$$ You can write this as $d = \left\lceil \frac{mn}{m-1} - 1\right\rceil$ rounding up as in your question or as $d = \left\lfloor \frac{mn-1}{m-1} \right\rfloor$ rounding down - they give the same $d$ .
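The closed form agrees with a direct greedy simulation of the exchange process; a quick sketch (function names are illustrative):

```python
import math

def drunk_by_simulation(n, m):
    """Greedy process: drink everything, trade m empties for a full, repeat."""
    drunk, full, empty = 0, n, 0
    while full > 0:
        drunk += full
        empty += full
        full, empty = empty // m, empty % m
    return drunk

def drunk_by_formula(n, m):
    # integer floor form; equals ceil(m*n/(m-1) - 1), as noted above
    return (m * n - 1) // (m - 1)

agree = all(drunk_by_simulation(n, m) == drunk_by_formula(n, m)
            for n in range(1, 200) for m in range(2, 20))
```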
|
|sequences-and-series|geometric-series|
| 1
|
Is there a "correct" $k$ scheme structure to put on $\coprod_{i=1}^n \operatorname{Spec}(k)$?
|
Let $k$ be an algebraically closed field, with $n\in\mathbb N\subset k$ invertible. I am trying to prove that if $\mathbb G_m=\operatorname{Spec}(k[t,t^{-1}])$ is the multiplicative group scheme over $k$ , and $\mu_n$ is the kernel of the group map $[n]:\mathbb G_m\rightarrow \mathbb G_m$ defined on all $k$ -schemes $S$ by $s\in \mathbb G_m(S)\mapsto s^n\in \mathbb G_m(S)$ , then $\mu_n$ is isomorphic to the group scheme $\coprod_{\alpha\in \mathbb Z/n\mathbb Z}\operatorname{Spec}(k)$ . However, I have just realized that I am not entirely sure how to turn $\coprod_{i=1}^n \operatorname{Spec}(k)$ into a $k$ -scheme. Indeed, we can identify $\coprod_{i=1}^n \operatorname{Spec}(k)$ with $\operatorname{Spec}(k^n)$ , and then any morphism $k\rightarrow k^n$ turns $\operatorname{Spec}(k^n)$ into a $k$ -scheme. Is there any reason I should prefer $x\mapsto (x,0,\dots,0)$ over, say, $x\mapsto (x,\dots, x)$ or any other such variant?
|
$x\mapsto(x,x,\cdots,x)$ is the correct one, i.e. using the map $\mathsf{Spec}(\Delta)$ where $\Delta:k\to k^n$ is the diagonal. Why? Because you have a coproduct of schemes which are all $k$ -schemes; if you want a coproduct of $k$ -schemes, i.e. the coproduct as taken in the slice category over $\mathsf{Spec}(k)$ (which is categorically the best thing to do!), then this is the structural morphism that arises. Caveat: I know this is the coproduct if you restrict to affine ( $k$ -)schemes. I'm not sure how colimits work for completely general schemes, I'm a newbie.
|
|algebraic-geometry|schemes|affine-schemes|group-schemes|
| 0
|
Can we write every $C^1$ complex function on the unit circle as the difference of two appropriate functions?
|
If $g$ is $C^1$ on the unit circle $C(0,1)$ . Then there is a function $f^+$ holomorphic on $B(0,1)$ and continuous on $\bar B(0,1)$ , a function $f^-$ holomorphic on $\mathbb{C}\backslash\bar B(0,1)$ and continuous on $\mathbb{C}\backslash B(0,1)$ , such that $g(z)=f^+(z)-f^-(z)$ on the unit circle. This seems like it could be a direct consequence of some of the lecture results (e.g. Cauchy's Integral Formula, Taylor series), but Cauchy's formula just gives a holomorphic function on $B(0,1)$ , and I do not know how that relates to the value of $g(z)$ . Meanwhile Taylor series only apply to functions holomorphic on some domain. If we can extend $g$ to some holomorphic function on $\mathbb{C}$ then surely the result follows... But how do we do something like that? Any help will be appreciated.
|
It is possible. Write the Fourier series of $g$ on the unit circle as $$ g(z)=\sum_{n\in\mathbb Z} a_n z^n. $$ This gives the usual Fourier series for those $z$ such that $|z|=1$ , and the series on the right does not necessarily converge for the other values of $z\in\mathbb C$ . Since $g$ is $C^1$ , the Fourier coefficients $(a_n)_{n\in\mathbb Z}$ decay rapidly (we have at the very least $(na_n)_{n\in\mathbb Z}\in \ell^2(\mathbb Z)$ ). Now let $$ f^+(z):=\sum_{n\geq 0} a_n z^n, $$ $$ f^-(z):=-\sum_{n<0} a_n z^n. $$ Both series converge uniformly to continuous functions $f^+$ and $f^-$ on respectively $\bar B(0,1)$ and $\mathbb C\setminus B(0,1)$ due to the fast decay of the coefficients $a_n$ . It is not hard to show holomorphicity of the two functions in the desired sets. As you can see, the only choice you can make is whether to include the term $a_0$ in $f^+$ or in $f^-$ . The rest of the terms in the Fourier series are split into the two functions depending on where they are singular. The maps $g
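The construction can be tested numerically: approximate the Fourier coefficients of a smooth function on the circle, split the series by sign of the frequency, and check $g = f^+ - f^-$ at a sample point. A sketch with an arbitrary test function (its coefficients decay geometrically, so short truncations suffice):

```python
import cmath

def fourier_coeff(g, n, M=4096):
    """a_n = (1/2pi) * integral of g(e^{it}) e^{-int} dt, by an M-point Riemann sum."""
    s = 0j
    for k in range(M):
        t = 2 * cmath.pi * k / M
        s += g(cmath.exp(1j * t)) * cmath.exp(-1j * n * t)
    return s / M

g = lambda z: 1 / (z - 2) + 1 / (z - 0.5)   # smooth on |z| = 1, singular on both sides
N = 40
a = {n: fourier_coeff(g, n) for n in range(-N, N + 1)}

def f_plus(z):   # nonnegative frequencies: holomorphic inside the disk
    return sum(a[n] * z**n for n in range(0, N + 1))

def f_minus(z):  # minus the negative-frequency part: holomorphic outside the disk
    return -sum(a[n] * z**n for n in range(-N, 0))

z = cmath.exp(0.7j)                          # a point on the unit circle
err = abs(g(z) - (f_plus(z) - f_minus(z)))
```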
|
|complex-analysis|analytic-functions|
| 0
|
Finding Splitting Field with Minimal Adjoined Elements
|
I have to find the splitting field over $\mathbb{Q}$ of the following: a) $f(x) = x^6 + 1 $ b) $f(x) = (x^2-3)(x^3+1) $ For a), finding the 6th roots of $-1$ , I concluded that $\pm i, \frac{\pm \sqrt{3} \pm i}{2}$ should be in the splitting field. The minimal adjoined splitting field is thus $\mathbb{Q}[\sqrt{3},i]$ . For b), using a similar method, I found that $-1, \sqrt{3}, \frac{1 \pm \sqrt{3}i}{2}$ should be in the splitting field. Is the minimal adjoined field for b) $\mathbb{Q}[\sqrt{3},i]$ as well? I am not so sure how to find the minimal elements one must adjoin to $\mathbb{Q}$ .
|
For the first question, note that $x^6+1$ divides $x^{12}-1$ , and that all roots of $x^{12}-1$ are contained in $\Bbb{Q}(\zeta_{12})$ , where $\zeta_{12}$ denotes a primitive $12$ -th root of unity. Because $$[\Bbb{Q}(\zeta_{12}):\Bbb{Q}]=\varphi(12)=4,$$ we see that this is a quartic extension containing all roots of $x^6+1$ . You have already shown that any splitting field of $x^6+1$ must contain $i$ and $\sqrt{3}$ , from which it easily follows that the degree of a splitting field must be at least $4$ . This shows that $\Bbb{Q}(\zeta_{12})$ is a splitting field of $x^6+1$ over $\Bbb{Q}$ . For the second question, a similar argument works, with slightly more effort.
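Numerically, each root of $x^6+1$ is indeed an odd power of a primitive 12th root of unity, consistent with the splitting field being $\Bbb Q(\zeta_{12})$; a quick check:

```python
import cmath

zeta12 = cmath.exp(2j * cmath.pi / 12)       # a primitive 12th root of unity

# the six roots of x^6 + 1 = 0 are e^{i(pi + 2 pi k)/6}, k = 0..5
roots = [cmath.exp(1j * (cmath.pi + 2 * cmath.pi * k) / 6) for k in range(6)]
odd_powers = [zeta12 ** (2 * k + 1) for k in range(6)]

max_err = max(abs(r**6 + 1) for r in roots)  # each root really satisfies x^6 = -1
match = all(min(abs(r - p) for p in odd_powers) < 1e-12 for r in roots)
```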
|
|abstract-algebra|splitting-field|
| 0
|
Equivalence of two ways to solve this combinatorics problem, and how to prove the resulting equality
|
This question arises from the following problem: We define the set $N=\{1,2,...,n\}$ . How many different possibilities are there to form three sets $A$ , $B$ and $C$ such that $A\subseteq B\subseteq C\subseteq N$ ? There are two ways to find the solution to this problem: Strategy 1 For each $m\in N$ , the element $m$ can be in the following sets: $m\notin A,B,C$ . $m\notin A,B$ but $m\in C$ . $m\notin A$ but $m\in B,C$ . $m\in A,B,C$ . So, there are 4 possibilities for each element, and therefore the answer is $4^n$ Strategy 2 We know that the amount of different sets $C\subseteq N$ with cardinality $i$ is ${n\choose i}$ . Since we need $A\subseteq B\subseteq C\subseteq N$ , the amount of possible ways to do it is: $$w_n\equiv\sum_{k=0}^j\sum_{j=0}^i\sum_{i=0}^n{n\choose i}{i\choose j}{j\choose k}$$ I would like to prove that $w_n=4^n$ . I tried to proceed by induction, proving that $w_n=4^n\Rightarrow w_{n+1}=4^{n+1}$ . We know that, when we add the element $n+1$ to $N$ , the only triplets $A
|
Your sum has a slight mistake. It should be $$w_n = \sum_{i=0}^n \sum_{j=0}^i\sum_{k=0}^j \binom{n}{i}\binom{i}{j}\binom{j}{k}$$ Can you see why? Solving: $$\begin{align} w_n &= \sum_{i=0}^n \sum_{j=0}^i\sum_{k=0}^j \binom{n}{i}\binom{i}{j}\binom{j}{k} \\ &= \sum_{i=0}^n \sum_{j=0}^i \binom{n}{i}\binom{i}{j} \sum_{k=0}^j \binom{j}{k} \\ &= \sum_{i=0}^n\binom{n}{i}\sum_{j=0}^i\binom{i}{j}2^j \\ &= \sum_{i=0}^n\binom{n}{i}3^i \\ &= 4^n \tag*{$\blacksquare$} \end{align}$$
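The identity $w_n=4^n$, with the summation order as corrected in this answer, is easy to confirm by brute force:

```python
from math import comb

def w(n):
    """Triple sum with the outer index i, then j <= i, then k <= j."""
    return sum(comb(n, i) * comb(i, j) * comb(j, k)
               for i in range(n + 1)
               for j in range(i + 1)
               for k in range(j + 1))

ok = all(w(n) == 4**n for n in range(8))
```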
|
|combinatorics|combinatorial-proofs|
| 1
|
Simplifying $d(\frac{2y_1}{r^2+1})\wedge d(\frac{2y_2}{r^2+1})\wedge\cdots\wedge d(\frac{2y_n}{r^2+1})$, where $r^2=\sum y_i^2$
|
I've been trying to find a nice, simple form for the following expression: $$d\left(\frac{2y_1}{r^2+1} \right)\wedge d\left(\frac{2y_2}{r^2+1} \right)\wedge ... \wedge d\left(\frac{2y_n}{r^2+1} \right) $$ where $r^2 = y_1^2 + ... + y_n^2\ $ , and I am regarding the $y_i$ as the standard coordinates for $\mathbb{R}^n$ . By trying the case $n=3$ , I thought the general expression could be $$2^n \frac{1-r^2}{(r^2+1)^{n+1}}dy_1\wedge...\wedge dy_n$$ and I tried to show this by induction but got stuck. Any help is appreciated! Edit: I came up with the formula by checking the case n=3. Then I tried showing by induction. Base case is saying $$d\left(\frac{2y_1}{y_1^2+1}\right) = \frac{2(1-y_1^2)}{(y_1^2+1)^2}dy_1$$ which is true. Then I tried applying the induction step as follows. Let $x_i = 2y_i/(r^2+1)$ $$dx_1\wedge ... \wedge dx_n = (-1)^{n-1}d(x_n dx_1\wedge ... \wedge dx_{n-1})$$ But I just realized this wouldn't work since I can't really apply the inductive hypothesis to $dx_1\wedge ..
|
First, I would get rid of all the $2$ 's. You can certainly put the $2^n$ in there at the end. Write $u=1+r^2$ . Note that $$d\left(\frac{y_i}{u}\right) = \dfrac{dy_i}u + y_i\,d\left(\frac1u\right)=\frac{dy_i}u - y_i\,\left(\frac{2r\,dr}{u^2}\right).$$ Then \begin{align*} \left(\frac{dy_1}u+y_1\,d\left(\frac1u\right)\right)&\wedge\dots\wedge\left(\frac{dy_n}u+y_n\,d\left(\frac1u\right)\right) \\ &= \frac{dy_1\wedge\dots\wedge dy_n}{u^n} + \sum\frac{dy_1}u\wedge\dots\wedge y_i\,d\left(\frac1u\right)\wedge\dots\wedge\frac{dy_n}u \\ &=\frac{dy_1\wedge\dots\wedge dy_n}{u^n} + \sum\frac{dy_1}u\wedge\dots\wedge y_i\frac{(-2y_i\,dy_i)}{u^2}\wedge\dots\wedge\frac{dy_n}u \\ &=\frac{dy_1\wedge\dots\wedge dy_n}{u^n} - \frac{2r^2}{u^{n+1}}dy_1\wedge\dots\wedge dy_n \\ &=\frac{(1+r^2)-2r^2}{u^{n+1}}dy_1\wedge\dots\wedge dy_n = \frac{1-r^2}{u^{n+1}}dy_1\wedge\dots\wedge dy_n, \end{align*} as desired. (Note that at most one $d(1/u)$ factor can appear when you expand the product of sums, since $d\left(\frac1u\right)\wedge d\left(\frac1u\right)=0$ .)
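For $n=3$, the coefficient in front of $dy_1\wedge dy_2\wedge dy_3$ is the Jacobian determinant of $x_i=2y_i/(r^2+1)$, so the conjectured formula $2^n(1-r^2)/(r^2+1)^{n+1}$ can be checked numerically from the exact partials $\partial x_i/\partial y_j = 2\delta_{ij}/u - 4y_iy_j/u^2$; a sketch at an arbitrary sample point:

```python
def jacobian_det(y):
    """det of the 3x3 matrix of exact partials of x_i = 2 y_i / (1 + r^2)."""
    u = 1 + sum(t * t for t in y)
    J = [[(2 if i == j else 0) / u - 4 * y[i] * y[j] / u**2
          for j in range(3)] for i in range(3)]
    a, b, c = J
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def conjectured(y, n=3):
    r2 = sum(t * t for t in y)
    return 2**n * (1 - r2) / (1 + r2) ** (n + 1)

y = (0.3, -0.5, 0.7)
err = abs(jacobian_det(y) - conjectured(y))
```

Note the sign flip for $r>1$, visible in the formula, also shows up in the determinant.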
|
|multivariable-calculus|differential-forms|exterior-algebra|
| 1
|
Why must rational $x$ and $y$ satisfy $x+y=2$ and $2\sqrt{3xy}=3$ in order for $2\sqrt{3}-3=(x+y)\sqrt{3}-2\sqrt{3xy}$ to be true?
|
I am going through a maths book and unable to understand the following: Given that $x$ and $y$ are rational numbers, then if the equation $$2\sqrt{3}-3=(x+y)\sqrt{3}-2\sqrt{3xy}$$ is true, then we must have $x+y=2$ and $2\sqrt{3xy}=3$ . I don't understand this. We cannot reach such a conclusion with equations involving only rational numbers. For example, let $a$ , $b$ , $c$ , and $d$ be rational numbers, and $a - b = c - d$ ; then it does not mean that $a = c$ and $b = d$ always for the equation to be true. How can we be sure that we must have $x + y = 2$ and $2 \sqrt{3xy} = 3$ for the equation to be true?
|
Since the OP has not specified their level, nor from which book this is taken, I feel inclined to post this: Suppose $x+y \neq 2$ . Then, $$2\sqrt{3xy}-3 = (x+y-2)\sqrt3 \in \Bbb Q(\sqrt3)$$ where $\Bbb Q(\sqrt3)$ is the smallest field containing the RHS. Then, if $\Bbb F$ is the smallest field containing the LHS, we must have $\Bbb Q(\sqrt3) = \Bbb F$ . Then the LHS is of the form $a+b\sqrt3, a,b \in \Bbb Q$ , and upon observation, $a=0$ . So, if we divide by $\sqrt 3$ , we must end up with rational values for both LHS and RHS. That is $$0 \neq 2\sqrt{xy} - \sqrt{3} \in \Bbb Q$$ Squaring, $2\sqrt{3xy} \in \Bbb Q \implies 2\sqrt{3xy} - 3 \in \Bbb Q$ , contradiction! Hence, $x+y = 2$ , which also implies $2\sqrt{3xy} = 3$ . Another approach: Assume $\sqrt{3xy}$ is irrational. Then $$-3 = \sqrt{3}(x+y - 2 - 2\sqrt{xy}) \iff -\sqrt{3} =x+y-2-2\sqrt{xy}$$ This must imply $-2\sqrt{xy} = -\sqrt 3 \iff \sqrt{3xy} = 3/2 \in \Bbb Q$ , contradiction. Hence, $\sqrt{3xy}$ is rational; then comparing the rational and irrational parts of both sides gives $x+y=2$ and $2\sqrt{3xy}=3$ .
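Concretely, the two conditions force $x+y=2$ and $xy=3/4$, i.e. $\{x,y\}=\{1/2,3/2\}$, and the original identity can then be verified numerically:

```python
import math

x, y = 0.5, 1.5   # the rational pair with x + y = 2 and xy = 3/4

lhs = 2 * math.sqrt(3) - 3
rhs = (x + y) * math.sqrt(3) - 2 * math.sqrt(3 * x * y)
```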
|
|algebra-precalculus|irrational-numbers|
| 0
|
A new type of convexity related to the exponential function
|
I am new to the study of (undergraduate) convexity and I have recently come across a new definition which generalizes the classic concept of a convex function, namely the following. Let $I\subset \mathbb{R}$ be an interval. We say that $f$ is an exponential type convex function if for every $x, y \in I$ and $t\in[0,1]$ it holds that $$ f(t x+(1-t) y) \leq\left(e^{t}-1\right) f(x)+\left(e^{1-t}-1\right) f(y) . $$ I am looking for new examples of this definition, but it is very recent. After some inspection I have observed that the function $f(x)=x^{p}$ with $p>1$ is a good candidate as an example, but I don't know how to show that it satisfies the definition above, or what the appropriate interval of real numbers would be, or which values of $t$ work. Any help or contribution would be greatly appreciated. Thank you!
|
The usual definition of convex function immediately implies the definition you wrote if $f\geq 0$ , due to the well-known inequality $e^x\geq 1+x$ . In fact, that inequality implies $$ tf(x)+(1-t)f(y)\leq (e^t-1)f(x)+(e^{1-t}-1)f(y). $$ So, any non-negative convex function would satisfy your definition. This includes $f(x)=|x|^p$ with $p>1$ .
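The claim is easy to spot-check on a grid, here for $f(x)=x^2$ (an arbitrary non-negative convex example); the small tolerance only absorbs floating-point rounding:

```python
import math

f = lambda x: x * x                     # non-negative and convex on all of R
pts = [i / 10 for i in range(-20, 21)]  # sample points in [-2, 2]
ts = [i / 20 for i in range(21)]        # sample t in [0, 1]

ok = all(
    f(t * x + (1 - t) * y)
    <= (math.exp(t) - 1) * f(x) + (math.exp(1 - t) - 1) * f(y) + 1e-12
    for x in pts for y in pts for t in ts
)
```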
|
|calculus|analysis|convex-analysis|convexity-inequality|
| 1
|
How to show that the function: $ f: (0,1) \to \mathbb{R}^3, x \mapsto (\sin(x),\cos(x),x^2 )$ is injective?
|
How to show that the function: $ f: (0,1) \to \mathbb{R}^3, x \mapsto (\sin(x),\cos(x),x^2 )$ is injective? To prove this, I was thinking to start by supposing that $f(x_1)=f(x_2)$ ; this implies that $(\sin(x_1),\cos(x_1),x_1^2 ) = (\sin(x_2),\cos(x_2),x_2^2)$ . This gives us $\sin(x_1)=\sin(x_2) \Longleftrightarrow x_1=x_2$ , $ \cos(x_1)=\cos(x_2) \Longleftrightarrow x_1=x_2 $ and $ x_1^2=x_2^2 \Longleftrightarrow x_1=x_2$ . Is this correct? Should I add something to this proof? If yes, what?
|
You are on the right track, but you concluded too quickly. The first two equations, namely $\sin x_1 = \sin x_2$ and $\cos x_1 = \cos x_2$ , don't imply $x_1 = x_2$ but only $x_2 = x_1 + 2\pi k$ for some $k \in \Bbb{Z}$ , because of the $2\pi$ -periodicity of the sine and cosine functions. Then, the third equation imposes $$x_1^2 = x_2^2 = (x_1 + 2\pi k)^2 = x_1^2 + 4\pi kx_1 + 4\pi^2k^2,$$ hence $4\pi k(x_1 + \pi k) = 0$ , which itself implies $k = 0$ and thus $x_1 = x_2$ . Note that the solution $x_1 = -\pi k$ has to be discarded, because it wouldn't belong to $(0,1)$ .
|
|analysis|functions|
| 1
|
Finding the analytical solution of a first order system with pure time delay
|
I have a simple system and I am searching for the solution for $f(t)$ : $$\frac{\partial f(t)}{\partial t} = c_1 \left( f(t) + g(t) + c_2 \right)$$ . It turns out that, in this system, $g$ can be related to $f$ by a pure time delay: $$g(t) = f(t-a)u(t-a)$$ where $u(t)$ is the Heaviside function. Applying the Laplace transform to the equation above and isolating $F(s)$ leads to: $$F(s) = \frac{f(0) + c_1c_2}{s - c_1(1 - e^{-as})}$$ . Since there is a complex exponential in the denominator I cannot use the inverse Laplace transform nor break this into partial fractions. How can one find the solution of such an equation? If I cannot directly resolve the equation, can I infer a given solution (from the numerical integration of this system) and identify the coefficients? If the analytical solution does not exist, is there a proof that explains why?
|
Solving The differential equation is a linear DDE of order one with a non-constant coefficient. We can get rid of this one by splitting the DDE into two intervals: \begin{align*} \frac{\operatorname{d}f\left( t \right)}{\operatorname{d}t} &= c_{1} \cdot \left( f\left( t \right) + f\left( t - a \right) \cdot \theta\left( t - a \right) + c_{2} \right)\\ \frac{\operatorname{d}f\left( t \right)}{\operatorname{d}t} &= c_{1} \cdot f\left( t \right) + c_{1} \cdot f\left( t - a \right) \cdot \theta\left( t - a \right) + c_{1} \cdot c_{2}\\ \frac{\operatorname{d}f\left( t \right)}{\operatorname{d}t} &= \begin{cases} c_{1} \cdot f\left( t \right) + c_{1} \cdot c_{2}, &\text{for } t < a\\ c_{1} \cdot f\left( t \right) + c_{1} \cdot f\left( t - a \right) + c_{1} \cdot c_{2}, &\text{for } t \geq a \end{cases} \end{align*} Now there are two linear DEs of order $1$ but constant coefficients. We can trivially solve the first one, valid for $t < a$ (because it's an order $1$ ODE), where $c$ can be any constant: \begin{align*} f\left( t \right) &= c \cdot e^{c_{1} \cdot t} - c_{2}\\ \end{align*} For $t \geq a$ we could say $f\left( t \right) = g\left( t \right) + b$ . We w
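The method of steps above can be cross-checked by integrating the DDE with a forward-Euler scheme and comparing the $t<a$ branch against the closed form $f(t)=(f(0)+c_2)e^{c_1 t}-c_2$ (the constant $c$ fixed by the initial condition). A sketch with arbitrary test parameters:

```python
import math

c1, c2, a, f0 = 0.8, 1.5, 1.0, 2.0   # illustrative values, not from the post
h = 1e-4
lag = int(round(a / h))              # number of steps spanning the delay
steps = 2 * lag                      # integrate over [0, 2a]

fs = [f0]
for k in range(steps):
    delayed = fs[k - lag] if k >= lag else 0.0   # f(t - a) * u(t - a)
    fs.append(fs[k] + h * c1 * (fs[k] + delayed + c2))

# on [0, a] the DDE reduces to f' = c1 (f + c2), solved by (f0 + c2) e^{c1 t} - c2
exact_at_a = (f0 + c2) * math.exp(c1 * a) - c2
err = abs(fs[lag] - exact_at_a)
```

The Euler error is first order in `h`, so with `h = 1e-4` the agreement at `t = a` is to a few parts in ten thousand.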
|
|ordinary-differential-equations|inverse-laplace|delay-differential-equations|
| 0
|
I found a pattern in consecutive squares: $(a^2-b^2)-(b^2-c^2)$ is always $2$.
|
I was working on squares of numbers and found out that the difference of the differences between consecutive squares is $2$ . Saying this with an example: $$2^2=4\qquad\qquad 3^2=9\qquad\qquad 4^2=16$$ $$9-4=5\qquad\qquad 16-9=7$$ $$7-5=2$$ $4^2-3^2-(3^2-2^2)$ is always $2$ . If we take any three consecutive numbers and do the same thing, we get $2$ . Has this been proved before?
|
This is a nice discovery! But why does it work? Let $n^2$ and $(n+1)^2$ be two consecutive squares. If you expand $(n+1)^2$ , you will get $n^2+2n+1$ . $(n^2+2n+1)-(n^2)=2n+1$ . This means the difference between consecutive squares is always an odd number, $2n+1$ . Now, we can say that $((n+2)^2-(n+1)^2)-((n+1)^2-n^2)=(2(n+1)+1)-(2n+1)=2$ . Q.E.D.
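The proof amounts to checking that the second difference of $n^2$ is constant; a one-liner confirms it:

```python
def second_difference(n):
    """((n+2)^2 - (n+1)^2) - ((n+1)^2 - n^2), which the proof shows is always 2."""
    return ((n + 2) ** 2 - (n + 1) ** 2) - ((n + 1) ** 2 - n ** 2)

ok = all(second_difference(n) == 2 for n in range(1000))
```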
|
|algebra-precalculus|arithmetic|
| 0
|
Stein and Shakarchi, Complex Analysis, Chapter 2 Example 2
|
I'm reading through Stein and Shakarchi's Complex Analysis textbook, but I'm a bit confused by their proof that $$ \int_{0}^{\infty} \frac{1-\cos(x)}{x^2} dx = \frac{\pi}{2}$$ They consider the function $f(z) = \frac{1-e^{iz}}{z^2}$ and an indented semicircle as their contour The part where I'm confused is the integral of $f(z)$ over $\gamma_{\epsilon}^+$ . The way they evaluate this integral is by first noting that $$ f(z) = \frac{-iz}{z^2} + E(z) $$ where $E(z)$ is bounded as $z\rightarrow 0$ . I'm fine with the rest of the proof but I'm puzzled by $E(z)$ . My question: How is this function bounded as $z \rightarrow 0$ ? It seems like $E(z)$ would just be $$ E(z) = \frac{1+iz+e^{iz}}{z^2}$$ Is it because we're integrating over $\gamma_{\epsilon}^+$ and so $|z| = \epsilon$ and so $$ \left| E(z) \right| = \left| \frac{1}{z} + \frac{i}{z} + \frac{e^{iz}}{z^2} \right| \leq \left|\frac{1}{z} \right| + \left| \frac{i}{z} \right| + \left| \frac{e^{iz}}{z^2} \right| = \frac{1}{\epsilon} + \f
|
No, \begin{align*} E(z) &= \frac{1-e^{iz}}{z^2} - \frac{-iz}{z^2} = \frac{1-\left(1+iz+\frac{(iz)^2}2 + \frac{(iz)^3}6 +\dots\right)}{z^2} - \frac{-iz}{z^2} \\ &= \frac{-\left(\frac{(iz)^2}2 + \frac{(iz)^3}6 +\dots\right)}{z^2} = \frac12 + O(z). \end{align*} It appears you lost a negative sign in writing down your formula for $E(z)$ .
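Numerically, $E(z)=(1-e^{iz}+iz)/z^2$ (with the corrected sign) indeed approaches $1/2$ as $z\to 0$ from any direction:

```python
import cmath

def E(z):
    """E(z) = (1 - e^{iz})/z^2 - (-iz)/z^2 = (1 - e^{iz} + iz)/z^2."""
    return (1 - cmath.exp(1j * z) + 1j * z) / z**2

# approach 0 along several directions on a small circle |z| = 1e-3
vals = [E(1e-3 * cmath.exp(1j * k)) for k in range(8)]
max_dev = max(abs(v - 0.5) for v in vals)   # about |z|/6, from the O(z) term
```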
|
|complex-analysis|proof-explanation|
| 1
|