| title | question_body | answer_body | tags | accepted |
|---|---|---|---|---|
| string | string | string | string | int64 |
How to calculate the derivative of a Newtonian potential inside a box with uniform source distribution?
|
I'm working on a potential flow problem. I have a box, centered at the origin (i.e. $V=[-x_b,x_b]\times[-y_b,y_b]\times[-z_b,z_b]$ ) that has inside of it a uniform distribution of source strength. We can assume this distribution has unit strength. The potential induced by the distribution of source strength in the box at a point $(x,y,z)$ will then be $$\phi(x,y,z) = \frac{-1}{4\pi}\iiint_V \frac{1}{\sqrt{(x-x_0)^2 + (y-y_0)^2 + (z-z_0)^2}} dV$$ I need to know the second derivatives of potential (i.e. $\frac{\partial^2\phi}{\partial x^2}$ , etc.) at the origin. I can easily show that for any point $(x,y,z)$ , we have $$\frac{\partial^2\phi}{\partial x^2}(x,y,z) = \frac{1}{4\pi}\iiint_V \frac{1}{\left[(x-x_0)^2 + (y-y_0)^2 + (z-z_0)^2\right]^{3/2}} - 3\frac{(x-x_0)^2}{\left[(x-x_0)^2 + (y-y_0)^2 + (z-z_0)^2\right]^{5/2}} dV$$ Since I want to know $\phi$ at $(x,y,z)=(0,0,0)$ , and the box is centered at the origin, I can simplify this to $$\frac{\partial^2\phi}{\partial x^2}(0,0,0) = \f
|
Denote $(a,b,c):=(x_b,y_b,z_b)$ , $(x,y,z):=(x_0,y_0,z_0)$ , and $I:=\left.\dfrac{\partial^2\phi}{\partial x^2}\right\rvert_{(0,0,0)}$ . Integrating by parts makes quick work of the $x$ -integral, $$-3 \int_0^a \frac{x^2}{\left(x^2+y^2+z^2\right)^{5/2}} \, dx = \frac a{\left(a^2+y^2+z^2\right)^{3/2}} - \int_0^a \frac{dx}{\left(x^2+y^2+z^2\right)^{3/2}} \\ \implies I = \int_0^c \int_0^b \frac{a}{\left(a^2+y^2+z^2\right)^{3/2}} \, dy \, dz$$ Trigonometric or hyperbolic substitutions are standard in dealing with the $y$ -integral. If $y=\sqrt{a^2+z^2}\sinh u$ , then $$\int_0^b \frac{dy}{\left(a^2+z^2+y^2\right)^{3/2}} = \frac1{a^2+z^2} \int_0^{\sinh^{-1}\tfrac b{\sqrt{a^2+z^2}}} \frac{du}{\cosh^2u}$$ Can you continue from here?
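If you want to check the target numerically before finishing the substitution: the remaining double integral is the solid angle subtended at the origin by the rectangle $[0,b]\times[0,c]$ at distance $a$, whose closed form is $\arctan\frac{bc}{a\sqrt{a^2+b^2+c^2}}$ (a standard formula, stated here as an assumption worth verifying; `scipy` assumed available):

```python
from math import atan, sqrt, pi
from scipy.integrate import dblquad

def I_numeric(a, b, c):
    # the remaining double integral: ∫_0^c ∫_0^b a/(a^2+y^2+z^2)^{3/2} dy dz
    val, _ = dblquad(lambda y, z: a / (a**2 + y**2 + z**2) ** 1.5, 0, c, 0, b)
    return val

def I_closed(a, b, c):
    # solid angle of the rectangle [0,b]x[0,c] seen from the origin at distance a
    return atan(b * c / (a * sqrt(a**2 + b**2 + c**2)))

assert abs(I_numeric(1.0, 1.0, 1.0) - I_closed(1.0, 1.0, 1.0)) < 1e-8
assert abs(I_closed(1.0, 1.0, 1.0) - pi / 6) < 1e-12
```

For a cube ($a=b=c$) this is $\pi/6$; restoring the $\frac{8}{4\pi}$ prefactor that comes from reducing the box to one octant (which the answer leaves implicit) gives $\frac{\partial^2\phi}{\partial x^2}=\frac13$, consistent with $\nabla^2\phi=1$ shared equally among the three axes.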
|
|integration|potential-theory|
| 0
|
Why is Euler's Totient function always even?
|
I want to prove why $\phi(n)$ is even for $n>3$. So far I am attempting to split this into 2 cases. Case 1: $n$ is a power of $2$. Hence $n=2^k$. So $\phi(n)=2^k-2^{k-1}$. Clearly that will always be even. Case 2: $n$ is not a power of $2$. This is where I am unsure where to go. I figure I will end up using the fact that $\phi(n)$ is multiplicative, and I think I'll get a $(p-1)$ somewhere in the resulting product which will make the whole thing even, as $p$ an odd prime implies $(p-1)$ is even.
|
Since $\phi$ is multiplicative over relatively prime factors, and a product with at least one even factor is even, it suffices to show that this holds for some prime power dividing $n$ . Clearly, $\phi(2^k)=2^{k-1}$ is even for $k>1$ . For primes $p>2$ , $\phi(p) = p-1$ is even. For higher powers of an odd prime, $\phi(p^{k+1}) = p^{k+1}\cdot\frac{p-1}{p}= p^k(p-1)$ , and since $p - 1$ is even, so is $\phi(p^{k+1})$ ; no induction is needed.
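A quick empirical check of the parity claim (a Python sketch; `phi` here is a plain trial-division implementation, not from a library):

```python
def phi(n):
    # Euler's totient via trial-division factorization
    result, p = n, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

# phi(1) = phi(2) = 1 are the only odd values; phi(n) is even for every n >= 3
assert phi(1) == 1 and phi(2) == 1
assert all(phi(n) % 2 == 0 for n in range(3, 5000))
```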
|
|elementary-number-theory|prime-numbers|totient-function|parity|
| 0
|
Find all integer solutions of $n = 1 + a + b + c$, with $a,b,c$ distinct divisors of $n$, and prove there are no more solutions.
|
This is a question derived from the 17 camels problem . I have been asked to find every integer $n$ satisfying $$ n = 1+a+b+c, $$ where $a, b, c$ are distinct divisors of $n$ greater than one. By brute-forcing I got the answer $n\in\{12,18,20,24,42\}$ : \begin{align} 12 &= 1+2+3+6, \\ 18 &= 1+2+6+9, \\ 20 &= 1+4+5+10, \\ 24 &= 1+3+8+12, \\ 42 &= 1+6+14+21. \end{align} Now I have to prove these are the only solutions. How can I approach this? Thanks!
|
Let $A,B,C$ be integers such that $a=n/A$ , $b=n/B$ , $c=n/C$ . We may and do sort these different numbers so that $$A<B<C\ .$$ Let us observe that the smallest number $A$ among them has to be $2$ , else $A\ge 3$ , $B\ge 4$ , $C\ge 5$ , and we obtain a contradiction from $$n=1+a+b+c=1+\frac nA+\frac nB+\frac nC\le 1+n\left(\frac 13+\frac 14+\frac 15\right)<n\ .$$ Now set $A=2$ . Then we have $$ \frac n2 =1+\frac nB+\frac nC\ . $$ The last number is between $\frac nB+\frac nC$ and $\frac nB+2\frac nC$ , since $1<\frac nC$ . It remains to collect first all $B,C$ with $2<B<C$ that satisfy $$\frac 1B+\frac 1C<\frac 12\ ,$$ then check if there is any solution for the corresponding triple $(A,B,C)$ . We have $\frac 2C<\frac 1B+\frac 1C<\frac 12$ , so $C>4$ , so $C$ is among $5,6,7,8,\dots$ . This allows us to work on the other side, $\frac 1B>\frac 12-\frac 2C\ge \frac 12-\frac 25=\frac 1{10}$ , which gives $B<10$ , so for $B$ we have to check the values $3,4,5,6,7,8,9$ . All estimations above were rough, but the cases to be checked are not so many, so let us cover them one by one. $(A,B)=(2,3)$ leads to $n=1+\frac n2+\frac n3+\frac nC$
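For reassurance, the claimed solution set can be brute-forced over a generous range before (or after) the case analysis (a sketch; the bound 2000 is arbitrary):

```python
from itertools import combinations

def has_decomposition(n):
    # divisors of n greater than one (n itself can never work, since 1+a+b+n > n)
    divisors = [d for d in range(2, n + 1) if n % d == 0]
    return any(1 + a + b + c == n for a, b, c in combinations(divisors, 3))

solutions = [n for n in range(2, 2000) if has_decomposition(n)]
assert solutions == [12, 18, 20, 24, 42]
```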
|
|elementary-number-theory|
| 0
|
On the number of $\sigma\in S_{p-1}$ of a given form.
|
Let $p$ be a prime, $d$ a proper divisor of $p-1$ , and $\sigma\in S_{p-1}$ of the form (everything is modulo $p$ ): $$\sigma=(1,x,\dots,x^{d-1})(i_2,i_2x,\dots,i_2x^{d-1})\dots(i_k,i_kx,\dots,i_kx^{d-1})\tag1$$ (hence $|\sigma|=d$ ) where $kd=p-1$ and, $\forall i,j=1,\dots,p-1$ such that $i+j\not\equiv_p0$ : $$\sigma(i+j\bmod p)\equiv_p\sigma(i)+\sigma(j)\tag2$$ Does $(2)$ follow from $(1)$ , or does it really establish a constraint on the possible $\sigma$ 's of the form $(1)$ ? As for the context, I'm interested in the number $N_p(d)$ of the permutations of the form $(1)$ , which fulfil $(2)$ .
|
(1) does imply (2). The permutation $\sigma$ in (1) is much more conveniently written as multiplication by $x$ , where $x$ has order $d$ modulo $p$ . Then it's obvious that $\sigma(i+j) \equiv x(i+j) = xi+xj \equiv \sigma(i)+\sigma(j)$ (mod $p$ ). Since the multiplicative group modulo $p$ is cyclic of order $p-1$ , it's a standard result that the number of elements of order $d$ is exactly $\phi(d)$ (the Euler phi function); so that's how many permutations of the type (1) there are. (By the way, all of this works for $d=1$ and $d=p-1$ as well.)
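Both claims are easy to confirm numerically (a sketch; `p = 13` is an arbitrary choice):

```python
from math import gcd

def multiplicative_order(x, p):
    k, y = 1, x % p
    while y != 1:
        y = y * x % p
        k += 1
    return k

def phi(d):
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

p = 13
for d in (1, 2, 3, 4, 6, 12):  # the divisors of p - 1 = 12
    count = sum(1 for x in range(1, p) if multiplicative_order(x, p) == d)
    assert count == phi(d)  # phi(d) elements of order d, hence phi(d) permutations (1)

# property (2) is automatic for multiplication by x, since x(i+j) = xi + xj mod p
x = 4  # an element of order 6 mod 13
sigma = lambda i: x * i % p
for i in range(1, p):
    for j in range(1, p):
        if (i + j) % p != 0:
            assert sigma((i + j) % p) == (sigma(i) + sigma(j)) % p
```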
|
|abstract-algebra|elementary-number-theory|permutations|modular-arithmetic|symmetric-groups|
| 1
|
Isomorphism between two fields can be extended to isomorphism between their respective closures
|
Let $K_{1}$ and $K_{2}$ be two isomorphic fields. Prove that an isomorphism $f \colon K_{1} \xrightarrow{\sim} K_{2}$ can be extended to an isomorphism $\sigma \colon \overline{K_{1}} \xrightarrow{\sim} \overline{K_{2}}$ between their respective closures. I intended to do this using Zorn's Lemma (akin to the proof of uniqueness of algebraic closures up to isomorphism), this is my attempt: Attempt: Let $f \colon K_{1} \rightarrow K_{2}$ be a field isomorphism, and let $i_{1} \colon K_{1} \rightarrow \overline{K_{1}}$ , $i_{2} \colon K_{2} \rightarrow \overline{K_{2}}$ be field homomorphisms. Composing $f$ with $i_{2}$ gives us: $\phi = i_{2} \circ f \colon K_{1} \rightarrow \overline{K_{2}}$ . We aim to prove the existence of $\sigma \colon \overline{K_{1}} \rightarrow \overline{K_{2}}$ , such that $\phi = \sigma \circ i_{1}$ . This can be done using Zorn's Lemma. Let $\mathcal{A}$ be a collection of pairs $(M, \psi)$ , where $M$ is a subfield of $\overline{K_{1}}$ containing $K_{1}$ ,
|
$\sigma:F\to\overline{K_2}$ can be extended to a homomorphism of the polynomial rings $F[x]\to\overline{K_2}[x]$ by $\sigma(\sum\limits_{i=0}^n a_ix^i)=\sum\limits_{i=0}^n\sigma(a_i)x^i$ , i.e. we simply apply $\sigma$ to the coefficients of the polynomial. Now let $m\in F[x]$ be the minimal polynomial of $\alpha$ over $F$ . Pick any root $\beta\in\overline{K_2}$ of the polynomial $\sigma(m)\in\overline{K_2}[x]$ . Such a root exists because $\overline{K_2}$ is algebraically closed. Note that $\sigma(m)$ must be irreducible over the field $\sigma(F)$ (why?), and so $\sigma(m)$ is also the minimal polynomial of $\beta$ over $\sigma(F)$ . And now we can define a map $\rho:F(\alpha)\to\overline{K_2}$ as follows: $\rho(f(\alpha))=\sigma(f)(\beta)$ where $f\in F[x]$ is a polynomial. It's not hard to check that $\rho$ is a well-defined homomorphism that extends $\sigma$ , contradicting the maximality of $F$ . So indeed $F=\overline{K_1}$ , and we get our required extension. It remains to prove tha
|
|abstract-algebra|ring-theory|field-theory|galois-theory|ring-homomorphism|
| 1
|
Probability problem - finding the missing number.
|
I am a retired philosopher, familiar with some of the philosophical problems about probability (e.g. Hume's problem of induction) but at a loss in calculating probabilities. I recently came across the following problem (a problem for me, that is, not for site users). The number of pupils in a school in the years 2020, 2021 and 2023 was: 2020: 200; 2021: 220; 2023: 250. On the basis of the data, probably how many were at the school in 2022? Any help would be appreciated. I realise the answer may vary with the theory of probability used.
|
Without any additional information, all we can conclude is that $$n\ge0$$ where $n$ denotes the number of students. You can't have a negative number of people. If you want my philosophical take on it, I would say that $n=0$ because remote learning was started as a response to COVID-19.
|
|statistical-inference|
| 0
|
Show that minimum of $f(x,y,z) = x^2 + y^2 + z^2$ subject to constraint $x^4 + y^4 + z^4 = 1$ is $1$
|
Using Lagrange multipliers, I can see that the maximum of $f(x, y, z)$ subject to the constraint is $\sqrt{3}$ . To see this, here is what I did: Show the constraint in the form $g(x,y,z) = k$ , which gives us $g(x, y, z) = x^4 + y^4 + z^4 - 1$ . To satisfy $\nabla{f(x, y, z)} = \lambda \nabla{g(x, y, z)}$ , start by taking all the relevant partial derivatives: $\frac{\partial f}{\partial x} = 2x$ $\frac{\partial f}{\partial y} = 2y$ $\frac{\partial f}{\partial z} = 2z$ $\frac{\partial g}{\partial x} = 4x^3$ $\frac{\partial g}{\partial y} = 4y^3$ $\frac{\partial g}{\partial z} = 4z^3$ Set up a system of equations, and solve. $2x = \lambda4x^3$ $2y = \lambda4y^3$ $2z = \lambda4z^3$ $x^4 + y^4 + z^4 = 1$ Skipping ahead through the remaining arithmetic, this gives $x^2 = y^2 = z^2$ and also $x^2 = y^2 = z^2 = \frac{1}{\sqrt{3}}$ , i.e. $x = y = z = 3^{-1/4}$ up to sign. To see this in more detail, we can use $g(x, y, z)$ . If $x^2 = y^2 = z^2$ as we stated, then we have: $(x^2)^2 + (y^2)^2 + (z^2)^2 = 1$ and therefore $(x^2)^2 + (x^2
|
$x = 2\lambda x^3$ Either $x = 0$ or $\lambda =\frac {1}{2x^2}$ . You have investigated what happens if $\lambda = \frac 1{2x^2}$ but what about $x = 0?$
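For completeness, the critical points have $k\in\{1,2,3\}$ coordinates of equal square and the rest zero, giving critical values $\sqrt k$; a quick check of this picture (a sketch, not part of the original answer):

```python
from math import sqrt, isclose

critical_values = []
for k in (1, 2, 3):          # number of nonzero coordinates
    x2 = 1 / sqrt(k)         # x^2 of each nonzero coordinate, from k * x^4 = 1
    constraint = k * x2**2   # should equal x^4 + y^4 + z^4 = 1
    f = k * x2               # value of f = x^2 + y^2 + z^2 at this critical point
    assert isclose(constraint, 1.0)
    critical_values.append(f)

assert isclose(min(critical_values), 1.0)        # minimum, e.g. at (1, 0, 0)
assert isclose(max(critical_values), sqrt(3.0))  # maximum, at x^2 = y^2 = z^2 = 1/sqrt(3)
```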
|
|multivariable-calculus|lagrange-multiplier|
| 1
|
How to calculate the derivative of a Newtonian potential inside a box with uniform source distribution?
|
I'm working on a potential flow problem. I have a box, centered at the origin (i.e. $V=[-x_b,x_b]\times[-y_b,y_b]\times[-z_b,z_b]$ ) that has inside of it a uniform distribution of source strength. We can assume this distribution has unit strength. The potential induced by the distribution of source strength in the box at a point $(x,y,z)$ will then be $$\phi(x,y,z) = \frac{-1}{4\pi}\iiint_V \frac{1}{\sqrt{(x-x_0)^2 + (y-y_0)^2 + (z-z_0)^2}} dV$$ I need to know the second derivatives of potential (i.e. $\frac{\partial^2\phi}{\partial x^2}$ , etc.) at the origin. I can easily show that for any point $(x,y,z)$ , we have $$\frac{\partial^2\phi}{\partial x^2}(x,y,z) = \frac{1}{4\pi}\iiint_V \frac{1}{\left[(x-x_0)^2 + (y-y_0)^2 + (z-z_0)^2\right]^{3/2}} - 3\frac{(x-x_0)^2}{\left[(x-x_0)^2 + (y-y_0)^2 + (z-z_0)^2\right]^{5/2}} dV$$ Since I want to know $\phi$ at $(x,y,z)=(0,0,0)$ , and the box is centered at the origin, I can simplify this to $$\frac{\partial^2\phi}{\partial x^2}(0,0,0) = \f
|
So I made some grave mistake here; still I want to show the post and add the real solution at the end. By Gauss's law, the integral of the divergence of a field over a volume equals the integral of that field over the boundary (also see: Divergence Theorem - Wikipedia ): $ \int_V \nabla\cdot \mathbf{F}\, dV = \oint_S \mathbf{F}\cdot\mathbf{n}\, dS $ So if you have a symmetric problem, this helps you find a solution quickly. But your box is not spherically symmetric. If you are interested in the field or the potential on the inside, you can still imagine a sphere inscribed in the box: the above integral depends only on the inside of the sphere, and the part of the block outside it is irrelevant. The potential created by a point source at the origin at a distance $r$ is: $ \Phi(r) = \frac{-1}{4\pi |r|} $ Since your integral sums up over exactly such point sources, you have a divergence of 1 everywhere in your box. Thus within a sphere of radius $r$ that completely fits into the box the integral over the volume becomes just a
|
|integration|potential-theory|
| 1
|
How can I solve 1. $yy'+x=\frac{1}{2}(\frac{x^2+y^2}{x})^2$ and 2. $y'=(\frac{3x+y^3-1}{y})^2$?
|
At school we were given the following differential equations to solve: a) $yy'+x=\frac{1}{2}(\frac{x^2+y^2}{x})^2$ and b) $y'=(\frac{3x+y^3-1}{y})^2$ . What I know so far: DE with separable variables, homogeneous DE of the 1st order, linear DE, Bernoulli DE, Riccati's DE, Lagrange's DE, Clairaut's DE. Experiment: I transformed the second equation (b) into $y'y^2=9x^2+y^6+1+6xy^3-6x-2y^3$ by expanding the square, and the first (a) into $x^2y'+x^3 /y=1/(2y)x^4+x^2y+(1/2)y^3$ . How do I proceed, as I can't see which type I could even use with which?
|
Hint. For the first, $$ \frac 12(y(x)^2)'+x = \frac 12\left(\frac{x^2+y(x)^2}{x}\right)^2 $$ making $z = y^2$ we have $$ \frac 12 z' + x = \frac 12\left(\frac{x^2+z}{x}\right)^2 $$ and also $$ z'+2x = (x^2+z)' $$ so we can follow with $u = x^2+z$ and $$ \frac 12 u' = \frac 12\left(\frac{u}{x}\right)^2 $$ which is separable, etc. and for the second $$ y^2y'=\left(3x+y^3-1\right)^2 $$ calling $z = y^3$ we have $$ \frac 13 z' = \left(3x+z-1\right)^2 $$ etc.
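Both substitutions can be finished and verified symbolically (assuming `sympy`; the closed forms below, $u=\frac{x}{1-Cx}$ and $z=\tan(3x+C)-3x+1$, are one-parameter families obtained by separating variables, stated here as candidates to check):

```python
import sympy as sp

x, C = sp.symbols("x C")

# (a) with u = x^2 + y^2: the hint gives u' = u**2 / x**2 (separable).
# Separating, -1/u = -1/x + const, so one family is:
u = x / (1 - C * x)
assert sp.simplify(sp.diff(u, x) - u**2 / x**2) == 0

# (b) with z = y^3: z'/3 = (3x + z - 1)**2.  Setting w = 3x + z - 1 gives
# w' = 3(1 + w**2), hence w = tan(3x + C) and z = tan(3x + C) - 3x + 1:
z = sp.tan(3 * x + C) - 3 * x + 1
assert sp.simplify(sp.diff(z, x) / 3 - (3 * x + z - 1) ** 2) == 0
```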
|
|ordinary-differential-equations|soft-question|
| 1
|
Prove a convex quadrilateral with perpendicular diagonals and one pair of congruent, non-consecutive angles is a kite.
|
I strongly suspect the conditions in the title are sufficient for a kite, but I am unable to give a proof, direct or otherwise. I've attempted to proceed directly with triangle congruence strategies and trigonometry, by contradiction assuming incongruent edges or that the diagonal between congruent angles is not bisected by the other diagonal, and I've even gotten nowhere with a coordinate geometry attempt. I appreciate any and all help! I feel like I've gotten closest by assuming the latter contradiction and extending the figure into a rhombus, but that still doesn't quite get me there.
|
Let the angles at $A$ and $C$ be congruent, and diagonal $BD$ to intersect $AC$ at $H$ . Point $A$ lies on an arc of circle $BAD$ , and the locus of points $C$ such that $\angle BCD\cong\angle BAD$ is the reflection of that arc about $BD$ . It follows that $AH\cong CH$ and so on.
|
|geometry|euclidean-geometry|quadrilateral|
| 0
|
Prove a convex quadrilateral with perpendicular diagonals and one pair of congruent, non-consecutive angles is a kite.
|
I strongly suspect the conditions in the title are sufficient for a kite, but I am unable to give a proof, direct or otherwise. I've attempted to proceed directly with triangle congruence strategies and trigonometry, by contradiction assuming incongruent edges or that the diagonal between congruent angles is not bisected by the other diagonal, and I've even gotten nowhere with a coordinate geometry attempt. I appreciate any and all help! I feel like I've gotten closest by assuming the latter contradiction and extending the figure into a rhombus, but that still doesn't quite get me there.
|
Given that our diagonals are perpendicular, the fact that our opposite angles are equal implies that the diagonal other than the segment joining the two points with congruent angles is the perpendicular bisector of that segment. Hence, the two other points of our quadrilateral, wherever they may be, lie on the perpendicular bisector of that segment and are therefore equidistant from its extremities. We now have a quadrilateral whose sides are congruent in two pairs of adjacent sides, and so our quadrilateral is a kite.
|
|geometry|euclidean-geometry|quadrilateral|
| 0
|
Prove that if polynomial with integral coefficients has a solution in integers, then the congruence is solvable for any value of modulus $m$.
|
Prove that if $F(x_1, x_2, \ldots, x_n) = 0$ , where $F$ is a polynomial with integral coefficients, has a solution in integers, then the congruence $F(x_1, x_2, \ldots, x_n) \equiv 0 \pmod{m}$ is solvable for any value of modulus $m$ . I am not sure how to prove this. My attempt: Let $F(x_1, x_2, \ldots, x_n) = \sum_{i_1, i_2, \ldots, i_n} a_{i_1, i_2, \ldots, i_n} x_1^{i_1} x_2^{i_2} \ldots x_n^{i_n}$ be the polynomial with integer coefficients. Suppose there exists a solution $(a_1, a_2, \ldots, a_n)$ such that $F(a_1, a_2, \ldots, a_n) = 0$ . Consider the congruences: $ x_1 \equiv a_1 \pmod{m} $ $ x_2 \equiv a_2 \pmod{m} $ $ \vdots $ $ x_n \equiv a_n \pmod{m} $ By the definition of congruence, we have $x_i = a_i + m \cdot t_i$ for some integers $t_i$ . Substitute these expressions into $F(x_1, x_2, \ldots, x_n)$ : $$ F(a_1 + m \cdot t_1, a_2 + m \cdot t_2, \ldots, a_n + m \cdot t_n) = 0 $$ Is this correct so far? Not sure what to do next.
|
Can't you just say $F(x_1,\ldots, x_n) = 0$ $\implies$ $F(x_1, \ldots, x_n) \equiv_m 0$ $\forall m \in \mathbb{N}$ , as $0 \equiv_m 0$ for all such $m$ ?
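In code the reduction is nearly tautological; here is a toy illustration with a hypothetical polynomial $F(x,y)=x^2+y^2-13$ and its integer root $(2,3)$:

```python
def F(x, y):
    return x * x + y * y - 13  # integer root: F(2, 3) = 0

# an integer solution stays a solution modulo every m, since 0 = 0 (mod m);
# moreover any lift a + m*t of the root still works modulo m
for m in range(1, 100):
    assert F(2, 3) % m == 0
    for t in range(-3, 4):
        assert F(2 + m * t, 3 + m * t) % m == 0
```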
|
|elementary-number-theory|polynomials|modular-arithmetic|
| 1
|
How many ways can you select $5$ marbles if you select at least one from each color?
|
You have $5$ identical black marbles, $5$ identical green marbles and $3$ identical red marbles. How many ways can you choose $5$ marbles with at least one from each color?
|
Given that we have one of each colour in our grab, we have already taken 3 of our 5 marbles. That leaves 2 spots to be filled from 4 black, 4 green and 2 red marbles. If our two extra marbles are the same colour, we have 3 clear options (2 black, 2 green, or 2 red). If our two remaining marbles are different colours, there are also 3 possibilities: 1 black and 1 green, 1 black and 1 red, and 1 green and 1 red. We hence have 3+3=6 ways to grab marbles with at least one of each colour. I hope this helps!
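The count is small enough to confirm by direct enumeration (a sketch):

```python
from itertools import product

# (black, green, red) counts with 5 black, 5 green, 3 red marbles available
selections = [
    (b, g, r)
    for b, g, r in product(range(6), range(6), range(4))
    if b + g + r == 5 and b >= 1 and g >= 1 and r >= 1
]
assert len(selections) == 6
```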
|
|permutations|combinations|
| 0
|
How to define periodicity of orbits for general group actions?
|
Let $H$ be a topological group acting on a topological space $X$ . Is there a general definition of periodicity in this case? Write $X=G/\Gamma$ and consider the orbit $Hg\Gamma$ , what does it mean for the orbit $Hg\Gamma$ to be periodic? If there is an abstract definition, I also wish to know the intuition behind it.
|
The following is taken from Wikipedia : For a general dynamical system, especially in homogeneous dynamics, when one has a "nice" group $G$ acting on a probability space $X$ in a measure-preserving way, an orbit $G.x \subset X$ will be called periodic (or equivalently, closed) if the stabilizer $\text{Stab}_{G}(x)$ is a lattice inside $G$ .
|
|lie-groups|dynamical-systems|homogeneous-spaces|
| 0
|
Definition of coercive function
|
I was studying coercive functions and I came upon three definitions: $$f: \mathbb{R}^n \rightarrow \mathbb{R}$$ $$M \in \mathbb{R}, r \in \mathbb{R}$$ 1 - the usual one, the one I saw most: $$\lim_{||x||\to+\infty} f(x) = +\infty$$ 2 - For all $M>0$ , there exists an $r>0$ such that $f(x)>M$ when $||x||> r$ . 3 - Someone told it to me, so I don't know if it's correct. It's like the second one, but without the constraint on $M$ : for all $M \in \mathbb{R}$ , there exists an $r>0$ such that $f(x)>M$ when $||x||> r$ . I'm doing an exercise right now, and I want to use the third one, because I have an $M$ , but I don't actually know that $M>0$ . But if the third one still holds true, I can use it without fear. I basically want to know: are those three equivalent, or just one and two (as stated in the answers already)? Does $M$ need to satisfy $M>0$ , or could it be just any $M \in \mathbb{R}$ ?
|
Your statements 2. and 3. are not quite the right definition as written. Maybe 2. should say: to every $M>0$ there exists an $r>0$ such that $f(x)>M$ whenever $\|x\|>r$ ; but this would be just the same as 1.
|
|calculus|optimization|
| 0
|
Overview of basic results in Stochastic Calculus
|
Are there some good overviews of basic facts about Stochastic Integrals and Stochastic Calculus? These can be in the form of resources (preferably accessible online) as well as directly writing out these results as answers. If possible, it would be helpful to link to proofs of the results. These can be external proofs or proofs on the site. This question was inspired by other similar [big-list] questions including overviews on basic results on images and preimages and on basic results in cardinal arithmetic . See the answers at these links for inspiration on the types of responses that would be suitable for this question. Edit: I have now offered a bounty to raise awareness of the question and encourage strong answers for future reference. Edit 2: I have posted an answer with links to lecture notes and blogs. However, I still welcome other answers. I am especially looking for an answer with a direct list of results. Although the answer I have posted with links is a good start, it would
|
I'll include an answer with books. If you're looking for a comprehensive overview of basic results, then these books will for sure do the job. They do way more actually. Medvegyev, P. (2007). Stochastic integration theory (Vol. 14). OUP Oxford. Goes through the general theory of stochastic integration; It provides a clear exposition of essential concepts such as local martingales, semimartingales and quadratic variation; It also gives very good intuition on some measure theoretic considerations; Most classical theorems of Stochastic Calculus can be found here in great generality; Overall, one of the best books in this list, at least I think so. Schilling, R. L., & Partzsch, L. (2014). Brownian motion: an introduction to stochastic processes. Walter de Gruyter GmbH & Co KG. Covers Brownian Motion in a complete and comprehensive way; If the goal is to deeply understand Brownian motion and obtain a solid, yet introductory, understanding of stochastic integration, then this is a great book
|
|reference-request|stochastic-calculus|big-list|online-resources|faq|
| 1
|
convex functions, is it enough to check if convex on all intervals
|
Let $f: X \to \mathbb{R}$ be some function defined on a convex set $X \subset \mathbb{R}^n$ . Now assume that for all $x \in X$ and $y \in \mathbb{R}^n$ such that $x+ty \in X$ for all $t \in [0,1]$ , the function $f(x+ty)$ is a convex function on $[0,1]$ . Does that mean $f$ is convex? To me the standard definition of convexity reduces to the case of intervals anyway, so intuitively my statement seems equivalent to $f$ being convex, but I don't see an argument why the definitions are equivalent.
|
Yes, it does. The usual definition is that $f$ is convex if for all $u, v \in X$ and $0 \le s \le 1$ , $$f(su + (1-s) v) \le s f(u) + (1-s) f(v)$$ That is true (for any given $u$ and $v$ ) if the function $t \mapsto f(u + t (v - u))$ is convex on $[0,1]$ .
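A numerical stress-test of this equivalence for a sample convex function (a sketch assuming `numpy`; $f(x)=\|x\|^2$ is just an example choice):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: float(np.dot(x, x))  # a convex function on R^3

# check the usual inequality, which is exactly convexity of the
# one-dimensional restriction g(t) = f(u + t (v - u)) at t = 1 - s
for _ in range(1000):
    u, v = rng.normal(size=3), rng.normal(size=3)
    s = rng.uniform()
    assert f(s * u + (1 - s) * v) <= s * f(u) + (1 - s) * f(v) + 1e-9
```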
|
|convex-analysis|definition|
| 1
|
Definition of coercive function
|
I was studying coercive functions and I came upon three definitions: $$f: \mathbb{R}^n \rightarrow \mathbb{R}$$ $$M \in \mathbb{R}, r \in \mathbb{R}$$ 1 - the usual one, the one I saw most: $$\lim_{||x||\to+\infty} f(x) = +\infty$$ 2 - For all $M>0$ , there exists an $r>0$ such that $f(x)>M$ when $||x||> r$ . 3 - Someone told it to me, so I don't know if it's correct. It's like the second one, but without the constraint on $M$ : for all $M \in \mathbb{R}$ , there exists an $r>0$ such that $f(x)>M$ when $||x||> r$ . I'm doing an exercise right now, and I want to use the third one, because I have an $M$ , but I don't actually know that $M>0$ . But if the third one still holds true, I can use it without fear. I basically want to know: are those three equivalent, or just one and two (as stated in the answers already)? Does $M$ need to satisfy $M>0$ , or could it be just any $M \in \mathbb{R}$ ?
|
I would call your definition 1. coercivity. Definitions 2. and 3. look like you have written them down slightly wrong. Note that if 2. read "for every $M>0$ there exists an $r>0$ such that $f(x) \geq M$ for all $\|x\| \geq r$", then it would coincide with 1. The word coercive can refer to the use of force to achieve a particular outcome; in a mathematical context you might see such a function subtracted from a differential equation to prevent blow-up to infinity. I think the term is frequently used in the optimisation community. P.S. Don't worry about $M\in R \subset \mathbb{R}$ : if the property holds for each $M>0$ , it will hold for each $M \in R$ .
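One step worth making explicit for the asker: once the corrected reading of 2. holds for every $M>0$, it automatically holds for every real $M$, so 3. is not actually stronger. A one-line argument:

```latex
\text{Given } M \le 0, \text{ take the } r > 0 \text{ that witnesses } M' = 1 > 0 \text{ in (2). Then}\\
\|x\| > r \;\Longrightarrow\; f(x) > 1 > 0 \ge M,\\
\text{so the same } r \text{ witnesses } M; \text{ hence (2)} \Rightarrow \text{(3), and (3)} \Rightarrow \text{(2) is immediate.}
```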
|
|calculus|optimization|
| 1
|
Prove a convex quadrilateral with perpendicular diagonals and one pair of congruent, non-consecutive angles is a kite.
|
I strongly suspect the conditions in the title are sufficient for a kite, but I am unable to give a proof, direct or otherwise. I've attempted to proceed directly with triangle congruence strategies and trigonometry, by contraction assuming incongruent edges or the diagonal between congruent angles not bisected by the other diagonal, and even gotten nowhere with a coordinate geometry attempt. I appreciate any and all help! I feel like I've gotten closest by assuming the latter contradiction and extending the figure into a rhombus, but that still doesn't quite get me there.
|
You can do this with only beginning triangle geometry, like Book I of Euclid's Elements, no need for circles or symmetry (though those are quick routes if you have those theorems handy). We are given the convex quadrilateral at left with diagonals intersecting at right angles, and with $\angle{A} = \angle{C}$ . I say $\triangle{ABD} \cong \triangle{CBD}$ . For if not, then cut $HC'$ equal to $AH$ . Then by SAS $\triangle{AHB} \cong \triangle{C'HB}$ and likewise $\triangle{AHD} \cong \triangle{C'HD}$ . By addition of these like triangles, $\triangle{ABD} \cong \triangle{C'BD}$ and so $\angle{BAD} \cong \angle{BC'D}$ . But we also have $\angle{A} = \angle{C}$ , yet $\angle{A}=\angle{BC'D}$ , so $\angle{C}=\angle{BC'D}$ the lesser to the greater, which is absurd. Therefore $AH=CH$ and $\triangle{ABD} \cong \triangle{CBD}$ . The key here is you get to assume something extra (that is false) which lets you solve the problem conclusively.
|
|geometry|euclidean-geometry|quadrilateral|
| 1
|
Looking for (overkill) usages of indicator functions
|
I am going to give a presentation about the indicator functions , and I am looking for some interesting examples to include. The examples can be even an overkill solution since I am mainly interested in demonstrating the creative ways of using it. I would be grateful if you share your examples. The diversity of answers is appreciated. To give you an idea, here are my own examples. Most of my examples are in probability and combinatorics so examples from other fields would be even better. Calculating the expected value of a random variable using linearity of expectations. Most famously the number of fixed points in a random permutation. Showing how $|A \Delta B| = |A|+|B|-2|A \cap B|$ and $(A-B)^2 = A^2+B^2-2AB$ are related. An overkill proof for $\sum \deg(v) = 2|E|$ .
|
Given an integer $n\ge 2$ , prove that $$\lfloor \sqrt n \rfloor + \lfloor \sqrt[3]n\rfloor + \cdots +\lfloor \sqrt[n]n\rfloor = \lfloor \log_2n\rfloor + \lfloor \log_3n\rfloor + \cdots +\lfloor \log_nn\rfloor$$ Proof: For any $0\le x\le n$ , we have $\lfloor x\rfloor=\sum_{i=1}^n\chi(x\ge i)$ , where $\chi(\cdot)$ is the indicator function. \begin{align} LHS&=\sum_{j=2}^n\sum_{i=1}^n\chi(\sqrt[j]{n}\ge i)\\ &=\sum_{i=1}^n\sum_{j=2}^n\chi(\sqrt[j]{n}\ge i)\\ &=n-1+\sum_{i=2}^n\sum_{j=2}^n\chi(n\ge i^j)\\ &=\sum_{i=2}^n\sum_{j=1}^n\chi(n\ge i^j)\\ &=\sum_{i=2}^n\sum_{j=1}^n\chi(\log_i n\ge j)\\ &=\sum_{i=2}^n\lfloor\log_i n\rfloor=RHS. \end{align}
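The identity is also easy to confirm with exact integer arithmetic, avoiding floating-point floors (a sketch; `iroot` and `ilog` are ad-hoc helper names):

```python
def iroot(n, j):
    # floor(n ** (1/j)) by integer search (n is small here)
    k = 1
    while (k + 1) ** j <= n:
        k += 1
    return k

def ilog(i, n):
    # floor(log base i of n), for i >= 2 and n >= i
    e, power = 0, 1
    while power * i <= n:
        power *= i
        e += 1
    return e

for n in range(2, 300):
    lhs = sum(iroot(n, j) for j in range(2, n + 1))
    rhs = sum(ilog(i, n) for i in range(2, n + 1))
    assert lhs == rhs
```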
|
|probability|combinatorics|soft-question|big-list|characteristic-functions|
| 0
|
Scheduling a Tournament
|
I am scheduling a games tournament for my girlfriend's cousin's 21st birthday party. There will be 8 teams. There can be either 7 or 8 different games. The briefing I have been given is: Each team must play every other team at least once; Each team must play every game (they do not need to play every team at every game); Each game requires 2 teams to play. I have worked out that there needs to be a minimum of 7 rounds, so that each team can play every other team. I have designed a schedule which states who plays whom in each round. However, I only have 1 copy of each game, so each round has to have 4 different games being played. Can anyone find a schedule which satisfies the above criteria?
|
On the Wikipedia page on the Room square , there is an example given which works for 8 teams. Assign your 8 teams a number from 0 to 7, associate each row of the table with a type of game, and associate each column with a game round (7 rounds suffice). Now you can read off which pair of teams plays what game in what round from the table. Room squares also exist in the sizes 1, 9, 11, 13, ... for resp. 2, 10, 12, 14, ... teams. So the only (odd) sizes where they do not exist are 3 and 5.
|
|permutations|
| 0
|
Count the number of binary representations that have the given number of occurrences.
|
How many binary strings exist such that each contains five occurrences of "00", three of "01", three of "10" and three of "11" (counting overlapping pairs)?
|
Note that having three '01's and three '10's means the string switches between runs of 1s and runs of 0s exactly six times, so it starts and ends with the same bit; the pair counts also force the total length to be $14+1=15$ . First suppose the string starts and ends with 1, so there are four runs of 1s and three runs of 0s. We can view this as a stars-and-bars problem: distribute the five '00's among the three runs of 0s (a run of length $k$ contributes $k-1$ pairs '00'). The possible distributions are {1,1,3} (3 cases), {2,2,1} (3 cases), {0,1,4} (6 cases), {0,0,5} (3 cases), {3,2,0} (6 cases), i.e. $\binom 72=21$ cases in all. Similarly, distribute the three '11's among the four runs of 1s: {0,2,1,0} (12 cases), {1,1,1,0} (4 cases), {0,0,3,0} (4 cases), i.e. $\binom 63=20$ cases. Each combination gives a unique string, so this case contributes $21\cdot20=420$ . But the string can also start and end with 0, giving four runs of 0s (9 zeros in total, $\binom 83=56$ ways) and three runs of 1s (6 ones, $\binom 52=10$ ways), contributing $56\cdot10=560$ . The total is $420+560=980$ . Hope this helps!
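Since the pair counts force the length to be 15, a brute force over all $2^{15}$ binary strings settles the count; note that it picks up both the strings that start and end with 1 and those that start and end with 0:

```python
from itertools import product

target = {"00": 5, "01": 3, "10": 3, "11": 3}

count = 0
for bits in product("01", repeat=15):  # 14 overlapping pairs -> length 15
    s = "".join(bits)
    pairs = [s[i:i + 2] for i in range(14)]
    if all(pairs.count(p) == c for p, c in target.items()):
        count += 1
```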
|
|combinatorics|binary|
| 1
|
Monotonically increasing path in a complete graph
|
Given a complete graph with $n$ vertices such that all edge weights are distinct, prove that we can find a monotonically increasing path of length $n-1$. I tried finding such a path by sorting the edges in increasing weight and then greedily selecting them so that they make up a path. E.g. consider a graph where $w_{a,b}=1, w_{a,c}=2, w_{a,d}=3, w_{b,c}=4, w_{b,d}=5, w_{c,d}=6$. I first select a->b. Then we find the next edge which starts from b and has the least weight. We get b->c. And so on. The path I get is: a->b->c->d. But I'm not able to prove that this will always work. Any suggestions? I think the problem is similar to Counting certain paths in a complete graph .
|
The following argument is from a column by Peter Winkler , attributed to Ehud Friedgut; it may be equivalent to Graham and Kleitman's argument, but is easier to read. It works in the general case of a graph $G$ with $n$ vertices and $m$ edges, where the edges are labeled with distinct weights. Put a person on each vertex in the graph, and announce the edges in increasing order of their weights (from the lowest-weight to the highest-weight edge). Every time an edge is announced, have the two people standing on its endpoints switch places. When this is done, each of the $n$ people has walked an increasing walk. (This walk might repeat vertices, so it's not an increasing simple path; that problem is much harder.) Also, the total number of steps taken is $2m$ , because every time an edge is announced, two people each take a step. Therefore someone must have taken at least $2m/n$ steps: in the case $G = K_n$ , this gives $n-1$ . Here are some comments about variants of this problem. What ab
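The switching argument translates almost line by line into code (a sketch; shuffling the edge list plays the role of assigning random distinct weights and announcing them in increasing order):

```python
import random
from itertools import combinations

def longest_increasing_walk_lower_bound(n, seed=0):
    rng = random.Random(seed)
    edges = list(combinations(range(n), 2))
    rng.shuffle(edges)           # shuffled order = edges sorted by random distinct weights
    person_at = list(range(n))   # person_at[v] = person currently standing on vertex v
    steps = [0] * n              # steps[p] = length of person p's increasing walk
    for u, v in edges:           # announce edges from lowest to highest weight
        pu, pv = person_at[u], person_at[v]
        person_at[u], person_at[v] = pv, pu  # the two people swap places
        steps[pu] += 1
        steps[pv] += 1
    return max(steps)

# total steps = 2|E| = n(n-1), so some person walks at least n-1 increasing edges
n = 20
assert longest_increasing_walk_lower_bound(n) >= n - 1
```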
|
|combinatorics|graph-theory|recreational-mathematics|random-graphs|
| 0
|
Find all integer solutions of $n = 1 + a + b + c$, with $a,b,c$ distinct divisors of $n$, and prove there are no more solutions.
|
This is a question derived from the 17 camels problem . I have been asked to find every integer $n$ satisfying $$ n = 1+a+b+c, $$ where $a, b, c$ are distinct divisors of $n$ greater than one. By brute-forcing I got the answer $n\in\{12,18,20,24,42\}$ : \begin{align} 12 &= 1+2+3+6, \\ 18 &= 1+2+6+9, \\ 20 &= 1+4+5+10, \\ 24 &= 1+3+8+12, \\ 42 &= 1+6+14+21. \end{align} Now I have to prove these are the only solutions. How can I aproach this? Thanks!
|
If $n=1+a+b+c$ with $a,b,c$ distinct divisors of $n$ and $a\lt b\lt c$ , then $c=\dfrac n2$ . In fact, if $c\le\lfloor\dfrac n3\rfloor$ we have $$n\le 1+\lfloor\dfrac n5\rfloor+\lfloor\dfrac n4\rfloor+\lfloor\dfrac n3\rfloor\le1+n\left(\dfrac15+\dfrac14+\dfrac13\right)$$ which implies $\dfrac{13 n}{60}\le1$ , so $n\le 4$ , which can be discarded straightforwardly. A consequence is that $n$ must be even . So we have to find even $n\gt42$ such that $$n=1+a+b+\frac n2\iff \frac n2=1+a+b$$ If $n$ is divisible by $3$ and $4$ we have $\dfrac n2=1+\dfrac n4+\dfrac n3$ which gives $LHS\lt RHS$ , so we must have $\dfrac n2\le1+\dfrac n5+\dfrac n4$ which gives $n\le 20\lt42$ . If $n$ is not divisible by $3$ and $4$ we would have as a possibility $$\dfrac n2\le1+\lfloor\dfrac n6\rfloor+\lfloor\dfrac n5\rfloor\le1+n\left(\dfrac{11}{30}\right)$$ and still we get an impossibility. It follows that smaller possible divisors give no further solutions. We are done: $n\in\{12,18,20,24,42\}$ is the whole set of solutions.
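The brute-force search the question mentions is easy to reproduce; pushing the bound well past $42$ supports the claim that the list is complete (the argument above shows it really is):

```python
from itertools import combinations

def solutions(limit):
    # all n <= limit with n = 1 + a + b + c for distinct divisors a, b, c > 1
    sols = []
    for n in range(2, limit + 1):
        divs = [d for d in range(2, n) if n % d == 0]
        if any(1 + a + b + c == n for a, b, c in combinations(divs, 3)):
            sols.append(n)
    return sols

assert solutions(2000) == [12, 18, 20, 24, 42]
```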
|
|elementary-number-theory|
| 0
|
Square of the Dirac Delta Shenanigans
|
The following is a purely mathematical problem, but it often arises when dealing with transition probabilities in Quantum Field Theory. In QED the transition probability amplitudes often contain a Dirac delta of the form: $$\delta (E_f-E_i)$$ but since we have to deal with Born's rule we are interested in the square of this quantity, and so we must ask ourselves what it means to take the square of a Dirac delta. Usually we deal with the problem (to be precise in the context of box quantisation) in the following way: We write the Dirac delta in Fourier form 1 : $$\delta (E_f - E_i)=\lim _{T \to +\infty}\frac{1}{2\pi}\int _{-T/2}^{+T/2}e^{i(E_f-E_i)t}dt$$ obtaining: $$2\pi\delta (E_f - E_i)=\lim _{T \to +\infty}\int _{-T/2}^{+T/2}e^{i(E_f-E_i)t}dt=\lim _{T \to +\infty}\left[\frac{e^{i(E_f-E_i)t}}{i(E_f-E_i)}\right] _{-T/2}^{+T/2}=\lim _{T \to +\infty}\frac{2 \sin(\Delta E T/2)}{\Delta E} \tag{1}$$ From $(1)$ we operate the following magic trick (i.e. abuse): since we want to find the square of the delta.
|
Pick any absolutely integrable function $\eta(x)$ such that $\int_{-\infty}^\infty dx \ \eta(x)=1 $ . We use $\eta$ to 'construct' a Dirac delta (a so called nascent delta or approximation to the identity ) $$\tag{1} \frac{1}{\epsilon}\eta\left(\frac{x}{\epsilon}\right) \to \delta (x) \qquad ,\qquad \epsilon \to 0 $$ Where in the first expression $\to$ means in the sense of distributions. Concretely this means $$\tag{2} \lim_{\epsilon\to0}\ \epsilon^{-1}\int\limits_{-\infty}^\infty dx \ \eta(x/\epsilon) f(x)=f(0) $$ For any suitable test function $f(x)$ . Note that (2) is exactly the action of the delta: $\int_{-\infty}^\infty dx \ \delta(x)f(x)=f(0)$ . To prove (1) you can look at theorem 9.8 in this answer . Now we apply (1) to your question. Let $T=1/\epsilon$ then (1) reads $$\tag{3} T \eta(xT)\to\delta(x) \qquad , \qquad T\to\infty $$ In your case we pick $\eta(x)=\frac{2}{\pi}\frac{\sin^2(x/2)}{x^2}$ so that $$\tag{4} T\eta(xT)=\frac{2}{\pi}\frac{\sin^2(xT/2)}{x^2T}\to\delta(x) \qquad , \qquad T\to\infty $$
|
|physics|mathematical-physics|distribution-theory|dirac-delta|quantum-field-theory|
| 1
|
Why do we believe the equation $ax+by+c=0$ represents a line?
|
I'm going for quite a weird question here. As we know, the equation in Cartesian coordinates for a line in 2-dimensional Euclidean geometry is of the form $ax+by+c=0$. I'm wondering why do we "believe" the plotted graph is the same "line" as in our intuition. It might sound crazy, but think of the time when there were still no coordinates, no axes, no analytic geometry. When Descartes started to grasp the concept that equations represented geometric figures (or more accurately, loci) he would have tried plotting easy forms first, and what else could be easier than $y=x$ or $y=2x+3$ etc. Plotting those revealed something evidently a line to his (and our) naked eyes, but it wouldn't be appropriate for a mathematician to conclude from that alone that the figure is actually a "line", right? So jumping back to our own time, if we forget for once that $ax+by+c=0$ "is" a line, looking at it with fresh eyes, by what criteria are we using to say it is so. I've tried some regular characterizations.
|
Here is a more algebraic way to answer the question. Fix some point on your line. Then you may interpret the ambiant Euclidean plane as the 2-dimensional vector space $\mathbb{R}^2$ , with your line passing through the origin. Now it makes sense to say that your line can be identified with the image of an injective linear map $L:\mathbb{R} \to \mathbb{R}^2$ , understanding that $\mathbb{R}$ is the "canonical line". Consider the linear map $P:\mathbb{R}^2 \rightarrow \text{coker}L$ . It is clear that a point $(x,y)\in\mathbb{R}^2$ is in the image of $L$ (i.e. is on the line) if and only if $P(x,y) = 0$ . The matrix representation of $P$ may be written as $\begin{pmatrix}a&b\end{pmatrix}$ for some real numbers $a,b$ . Hence a point $(x,y)$ is on the line if and only if $$\begin{pmatrix}a&b\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} = 0,$$ which is just saying that $ax+by=0$ .
|
|soft-question|euclidean-geometry|analytic-geometry|
| 0
|
Why does Im($(-1/10^n)^{-1/10^n}$) turn into the digits of pi as integer n gets larger?
|
$(-.001)^{-.001} \approx 1.007 - .0031634i$ , $(-.000001)^{-.000001} \approx 1.000014 - .00000314164i$ , and $(-.000000001)^{-.000000001} \approx 1.0000000207 - .00000000314159271i$ . Notice that as we approach 0 from the negative direction for $x^x$ by magnitudes of ten, the imaginary component more closely approximates pi, divided by an increasing power of ten. Why does this happen?
|
For $r>0$ and small, we have principal value $$ (-r)^{-r} = e^{-r\ln r} \cos(r\pi)-i e^{-r\ln r}\sin(r\pi) . $$ The imaginary part is $$ -e^{-r\ln r}\sin(r\pi) $$ Now, for $r$ very close to zero, we have $r\ln r \approx 0$ and $e^{-r\ln r} \approx 1$ and $\sin(r\pi) \approx r\pi$ . And of course, this $r\pi$ is what you are seeing when $r$ is a negative power of $10$ .
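Numerically (Python's `cmath`-compatible complex arithmetic, which uses the same principal branch as above), $-\operatorname{Im}\left[(-r)^{-r}\right]/r$ indeed approaches $\pi$:

```python
import cmath

for k in (3, 6, 9):
    r = 10.0 ** (-k)
    # principal value: (-r)^(-r) = exp(-r * (ln r + i*pi))
    w = complex(-r, 0.0) ** (-r)
    approx_pi = -w.imag / r        # Im(w) = -e^(-r ln r) sin(r*pi) ~ -r*pi
    print(k, approx_pi)
```

For $k=9$ the printed value agrees with $\pi$ to about seven decimal places, matching the question's data.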
|
|calculus|limits|complex-numbers|pi|
| 0
|
Tangent bundle of a sphere $T\mathbb S^n$ is diffeomorphic to $\mathbb S^n \times \mathbb S^n - \Delta$
|
Let $\mathbb S^n$ denote the $n$ -sphere, which is the smooth manifold consisting of all points in $\mathbb R^{n+1}$ with Euclidean norm one. Recall that the tangent bundle of $\mathbb S^n$ , denoted $T \mathbb S^n$ , is a $2n$ -dimensional smooth manifold. Let $\Delta \subseteq \mathbb S^n \times \mathbb S^n$ denote the diagonal $\Delta := \{(x,x) : x \in \mathbb S^n\}$ . By Hausdorffness, $\Delta$ is closed, so $\mathbb S^n \times \mathbb S^n - \Delta$ is an open subset of $\mathbb S^n \times \mathbb S^n$ . In this question about the tangent bundle of the $n$ -sphere, (I believe) one of the answers claims that we have a diffeomorphism $$T\mathbb S^n \cong \mathbb S^n \times \mathbb S^n - \Delta.$$ How can we see that this is true? What is an explicit diffeomorphism?
|
We have \begin{align*} S^n \times S^n \setminus \Delta & = \left \{(x,y)\mid x, y \in S^n, x \neq y \right \} \\ & = \bigcup_{x \in S^n} \left \{x\right \} \times (S^n \setminus \left \{x \right \}) \\ & \simeq \bigcup_{x \in S^n} \left \{-x \right \} \times T_{-x}S^n \\ & = TS^n \end{align*} where the isomorphism $S^n \setminus \left \{x \right \} \simeq T_{-x}S^n$ is given by identifying $S^n \setminus \left \{x \right \}$ with $\mathbb{R}^n$ via the stereographic projection where $x$ plays the role of $\infty$ and hence $-x$ plays the role of the origin, that is, the origin of the tangent space $T_{-x}S^n$ .
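The identification can be made concrete numerically. Below is a small stdlib-Python sketch of the standard stereographic projection from the pole $p$ onto the hyperplane $p^\perp$ (which serves as $T_{-p}S^n$), together with its inverse; a round trip recovers the point, and the pole's antipode $-p$ maps to the origin:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def stereo(p, y):
    # project y in S^n \ {p} from the pole p onto the hyperplane p-perp
    t = 1.0 - dot(y, p)
    return [(yi - dot(y, p) * pi) / t for yi, pi in zip(y, p)]

def stereo_inv(p, v):
    # inverse: a vector v in p-perp back to a point of S^n \ {p}
    s = dot(v, v)
    return [((s - 1.0) * pi + 2.0 * vi) / (s + 1.0) for pi, vi in zip(p, v)]

p = [0.0, 0.0, 1.0]          # pole: north pole of S^2
y = [3/13, 4/13, 12/13]      # any other point of S^2
v = stereo(p, y)
assert abs(dot(v, p)) < 1e-12           # v lies in p-perp, i.e. in T_{-p}S^2
back = stereo_inv(p, v)
assert all(abs(a - b) < 1e-12 for a, b in zip(back, y))   # round trip
```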
|
|differential-geometry|smooth-manifolds|spheres|diffeomorphism|tangent-bundle|
| 0
|
Show $f_n(x) = x^{1- \frac{1}{n}} - x$ converges uniformly to 0 as $n \to \infty$ for $x \in [0,1]$.
|
I am guessing this can be argued by the monotonicity of the $f_n$ , but is there a reasonable way to do it directly? That is, given $\epsilon > 0$ to find $N$ such that if $n>N$ then $|f_n(x)| < \epsilon$ for all $x \in [0,1]$ . I tried using $f_n(x) = x(x^{\frac{-1}{n}}-1) \leq (x^{\frac{-1}{n}}-1)$ but then the convergence is not uniform for $x^{\frac{-1}{n}}-1$ and so this seemed to be no help.
|
Hint: For $0 \le x \le \epsilon$ we have $x^{1-\frac 1n} \le \epsilon^{1-\frac 1n}\le \epsilon ^{1/2}$ provided $n \ge 2$ . Thus, $f_n(x)\leq \epsilon ^{1/2}+\epsilon$ for $x \le \epsilon$ . For $x >\epsilon$ use the fact that $x^{1-\frac 1n} \to x$ uniformly.
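A numerical sanity check of the uniform convergence (plain Python): the supremum of $f_n$ over a fine grid of $[0,1]$ shrinks roughly like $1/(en)$:

```python
# sample f_n(x) = x^(1 - 1/n) - x on a fine grid of [0, 1]
grid = [i / 10000 for i in range(10001)]
sups = {}
for n in (10, 100, 1000):
    a = 1 - 1 / n
    sups[n] = max(x ** a - x for x in grid)
    # the true maximum is (1 - 1/n)^n / (n - 1), roughly 1/(e*n)
    assert sups[n] < 1 / n
```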
|
|real-analysis|calculus|
| 1
|
Help understanding implicit derivation
|
I'm currently enrolled in Calculus 1, and everything has been pretty smooth up until these last two sections involving the chain rule and implicit derivation. After watching multiple YouTube videos, and reading different professors' explanations, all of these resources either explain it in a different way, or use different notations/process to get to the answer. I'm really lost, and would like help on this problem I'm working on and if possible, an explanation of what is going on with implicit derivation/how to solve these problems. $$ y = \frac{x+5}{y-5}$$ $$ \frac{dy}{dx} = ? $$ So far I've tried two approaches: 1st approach: $(y-5)y = x+5$ Taking deriv of both sides $2ydy - 5 = xdx$ $2ydy=xdx+5$ $dy = \frac{xdx + 5}{2y}$ $\frac{dy}{dx} = \frac{x+5}{2y}$ 2nd approach: $dy = \frac{\frac{d}{dx}(x+5)(y-5) - (x+5)(\frac{d}{dy}(y-5))}{(y-5)^2}$ $dy = \frac{xdx(y-5) - (x+5)(ydy)}{(y-5)^2}$ Then I tried multiplying a $(y-5)$ out from the numerator, resulting in the left side $dy(y-5)$ .
|
This is one you can solve two ways to check the work. Implicit: $y^2-5y = x+5$ $2y\frac{dy}{dx}-5\frac{dy}{dx}=1$ $\frac{dy}{dx}=\frac{1}{2y-5}$ And we are finished. But double check: $y^2-5y = x+5$ $(y-\frac{5}{2})^2 = \frac{25}{4}+x+5$ $y=\frac{5}{2} \pm \sqrt{\frac{45}{4}+x}$ $\frac{dy}{dx}=\frac{1}{2} (\frac{45}{4}+x)^{-\frac{1}{2}} = \frac{1}{2\sqrt{\frac{45}{4}+x}}$ Return to the implicit and plug in the $y$ we just solved for: $\frac{dy}{dx}=\frac{1}{2y-5}$ $\frac{dy}{dx}=\frac{1}{2\Big( \frac{5}{2} \pm \sqrt{\frac{45}{4}+x} \Big)-5} = \frac{1}{2\sqrt{\frac{45}{4}+x}}$ as desired. Rather than professors notes, or YouTube videos, I might suggest reading the sections of one or two textbooks on the topic, it will likely clear things up for you.
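A numeric cross-check of the implicit result (stdlib Python, using the upper branch $y = \frac52 + \sqrt{\frac{45}{4} + x}$ solved above): a central finite difference of the explicit branch agrees with $\frac{dy}{dx} = \frac{1}{2y-5}$:

```python
import math

def y_plus(x):
    # the upper branch solving y^2 - 5y = x + 5
    return 2.5 + math.sqrt(45.0 / 4.0 + x)

x0, h = 1.0, 1e-6
numeric = (y_plus(x0 + h) - y_plus(x0 - h)) / (2 * h)   # central difference
implicit = 1.0 / (2 * y_plus(x0) - 5.0)                 # dy/dx = 1/(2y - 5)
assert abs(numeric - implicit) < 1e-8
```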
|
|calculus|implicit-differentiation|
| 1
|
Spatial analyticity for solutions of linear parabolic PDEs
|
I believe that the following result is true, but am having a hard time finding a suitable reference to convince myself: Suppose $X:\mathbb{R}^n\to \mathbb{R}^n$ is analytic. If the smooth function $u:(0,\infty)\times \mathbb{R}^n\to \mathbb{R}$ solves the parabolic equation $$\frac{\partial u}{\partial t}=\Delta u+ X\cdot \nabla u,$$ then for each $t>0$ , the mapping $x\mapsto u(t,x)$ is analytic. The elliptic version of this result is frequently cited as "well-known" in the literature, and in fact, there is this result for elliptic non-linear systems as well: https://www.jstor.org/stable/2372830 . The parabolic result is for the regular heat equation ( $X=0$ ) is discussed here: Space analyticity of solution to heat equation . However, the proof it refers to appears to lean quite heavily on an explicit expression for the heat kernel/fundamental solution, which might not be available if $X\neq 0$ . Thanks in advance.
|
I think the result you’re looking for is given in: Classes of solutions of linear systems of partial differential equations of parabolic type by Avner Friedman. Duke Math. J. 24 (3), 433-442, (September 1957) DOI: 10.1215/S0012-7094-57-02450-X
|
|analyticity|parabolic-pde|
| 1
|
A curious result from Mathematica on $\int\sin^5x\cos^7x \,dx$
|
While creating an answer key for my Calc 2 students and trying to save a bit of time, I plugged a trigonometric integral into Mathematica and was a little confused about how it approached the problem. The integral was $$ \int\sin^5x\cos^7x \,dx $$ The typical way to evaluate this integral would be to use the Pythagorean identity and a substitution. Something like this: \begin{align*} \int\sin^5x\cos^7x \,dx &= \int (\sin x)\big(\sin^4 x\big)\cos^7x\,dx \\ &= \int(\sin x)\big(1-\cos^2 x\big)^2\cos^7 x\, dx \\ &= -\int \big(1-u^2\big)^2u^7\, du \\ &= -\int u^7-2u^9+u^{11}\,du \\ &= -\frac{u^8}{8}+\frac{u^{10}}{5}-\frac{u^{12}}{12}+C \\ &= -\frac{\cos^8 x}{8}+\frac{\cos^{10}x}{5}-\frac{\cos^{12}x}{12}+C \end{align*} Where I've used the substitution $u=\cos x$ . We also could have used $u=\sin x$ to get $$ \int\sin^5x\cos^7x \,dx = \frac{\sin^6 x}{6} -\frac{3\sin^8 x}{8}+\frac{3\sin^{10}x}{10}-\frac{\sin^{12} x}{12}+C $$ which is equivalent up to an added constant. When I plugged the integral into Mathematica, however, it returned the antiderivative as a sum of cosines of multiples of $x$ instead.
|
Here's another method: The exponential representations of $\sin, \cos$ are $$\sin x = \frac{1}{2 i} (e^{i x} - e^{-i x}), \qquad \cos x = \frac{1}{2} (e^{i x} + e^{-i x}) .$$ Substituting gives \begin{align*} \sin^5 x \cos^7 x &= \left[\frac{1}{2 i} (e^{i x} - e^{-i x})\right]^5 \left[\frac{1}{2} (e^{i x} + e^{-i x})\right]^7 \\ &= \frac{1}{2^{12} i} (\color{#ff0000}{e^{12 i x}} \color{#0000ff}{+ 2e^{10 i x}} - \cdots \color{#0000ff}{- 2 e^{-10 i x}} \color{#ff0000}{- e^{-12 i x}}) \\ &= \frac{1}{2^{11}} \left[\frac{1}{2 i} (\color{#ff0000}{e^{12 i x} - e^{-12 i x}}) + 2 \cdot \frac{1}{2 i} (\color{#0000ff}{e^{10 i x} - e^{-10 i x}}) + \cdots \right] \\ &= \frac{1}{2^{11}} (\sin 12 x + 2 \sin 10 x - 4 \sin 8 x - 10 \sin 6 x + 5 \sin 4 x + 20 \sin 2 x) . \end{align*} So, integrating gives \begin{multline*} \int \sin^5 x \cos^7 x \,dx \\ = \frac{1}{2^{11}} \left(-\frac{1}{12} \cos 12 x - \frac{2}{10} \cos 10 x + \frac{4}{8} \cos 8 x + \frac{10}{6} \cos 6 x - \frac{5}{4} \cos 4 x - \frac{20}{2} \cos 2 x\right) + C . \end{multline*}
|
|calculus|trigonometric-integrals|
| 0
|
A curious result from Mathematica on $\int\sin^5x\cos^7x \,dx$
|
While creating an answer key for my Calc 2 students and trying to save a bit of time, I plugged a trigonometric integral into Mathematica and was a little confused about how it approached the problem. The integral was $$ \int\sin^5x\cos^7x \,dx $$ The typical way to evaluate this integral would be to use the Pythagorean identity and a substitution. Something like this: \begin{align*} \int\sin^5x\cos^7x \,dx &= \int (\sin x)\big(\sin^4 x\big)\cos^7x\,dx \\ &= \int(\sin x)\big(1-\cos^2 x\big)^2\cos^7 x\, dx \\ &= -\int \big(1-u^2\big)^2u^7\, du \\ &= -\int u^7-2u^9+u^{11}\,du \\ &= -\frac{u^8}{8}+\frac{u^{10}}{5}-\frac{u^{12}}{12}+C \\ &= -\frac{\cos^8 x}{8}+\frac{\cos^{10}x}{5}-\frac{\cos^{12}x}{12}+C \end{align*} Where I've used the substitution $u=\cos x$ . We also could have used $u=\sin x$ to get $$ \int\sin^5x\cos^7x \,dx = \frac{\sin^6 x}{6} -\frac{3\sin^8 x}{8}+\frac{3\sin^{10}x}{10}-\frac{\sin^{12} x}{12}+C $$ which is equivalent up to an added constant. When I plugged the integral into Mathematica, however, it returned the antiderivative as a sum of cosines of multiples of $x$ instead.
|
Let $z=e^{ix}$ . $$\begin{align}\sin^5x\cos^7x&=\left(\frac{z-\bar z}{2i}\right)^5\left(\frac{z+\bar z}2\right)^7\\ &=\frac1{2^{12}i}\left(z^5-5z^3+10z-10\bar z+5\bar z^3-\bar z^5\right)\\ &\quad\left(z^7+7z^5+21z^3+35z+35\bar z+21\bar z^3+7\bar z^5+\bar z^7\right)\\ &=\frac1{2^{12}i}\left(z^{12}+2z^{10}-4z^8-10z^6+5z^4+20z^2\right.\\&\quad\left.-20\bar z^2-5\bar z^4+10\bar z^6+4\bar z^8-2\bar z^{10}-\bar z^{12}\right)\\ &=\frac1{2^{11}}\left(\sin(12x)+2\sin(10x)-4\sin(8x)-10\sin(6x)+5\sin(4x)+20\sin(2x)\right), \end{align} $$ hence $$\begin{align} \int\sin^5x\cos^7x \,dx &=-\frac{\cos(12x)}{2^{10}\cdot24}-\frac{\cos(10x)}{2^{10}\cdot10}+\frac{\cos(8x)}{2^{10}\cdot4}+5\frac{\cos(6x)}{2^{10}\cdot6}-5\frac{\cos(4x)}{2^{10}\cdot8}-5\frac{\cos(2x)}{2^{10}}+C, \end{align} $$ which is Mathematica's result.
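Both forms of the antiderivative can be checked numerically with a short stdlib-Python snippet: each one differentiates back to the integrand (via a central difference), and their difference is constant:

```python
import math

def f(x):
    return math.sin(x) ** 5 * math.cos(x) ** 7

def F_usub(x):
    # the u = cos(x) substitution answer
    c = math.cos(x)
    return -c**8 / 8 + c**10 / 5 - c**12 / 12

def F_mult(x):
    # the multiple-angle (Mathematica-style) answer
    return (-math.cos(12*x)/12 - math.cos(10*x)/5 + math.cos(8*x)/2
            + 5*math.cos(6*x)/3 - 5*math.cos(4*x)/4 - 10*math.cos(2*x)) / 2**11

h = 1e-5
for x in (0.3, 1.1, 2.4):
    for F in (F_usub, F_mult):
        num = (F(x + h) - F(x - h)) / (2 * h)   # central difference
        assert abs(num - f(x)) < 1e-8

# the two antiderivatives differ only by an additive constant
c0 = F_usub(0.0) - F_mult(0.0)
for x in (0.3, 1.1, 2.4):
    assert abs((F_usub(x) - F_mult(x)) - c0) < 1e-12
```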
|
|calculus|trigonometric-integrals|
| 1
|
I showed $V$ is isomorphic to $U \times (V/U)$ without using the assumption that $V/U$ is finite-dimensional. What did I do wrong?
|
Exercise. Suppose $U$ is a subspace of $V$ such that $V/U$ is finite-dimensional. Prove that $V$ is isomorphic to $U \times (V/U).$ Outline of my proof. We need to construct a linear bijection between $V$ and $U \times (V/U)$ , where $U \times (V/U) = \{(u, v+U) : u \in U, v \in V\}$ . I defined a mapping $T: U \times (V/U) \to V$ by $$ T(u,v+U) = v+u $$ where $v+u \in V$ . I was then able to show the following are true about $T$ : Linear Injective Surjective Question. In my full proof, I didn't use the fact that $V/U$ is finite-dimensional, which makes me think something is wrong. Is it because my $T$ is not well-defined? How come I was able to show the above three properties?
|
The issue is your "function" $T$ is not well-defined. In particular, it is possible that $a+U = b+U$ and $a \neq b$ . In this case, you would have $T(0,a+U)=a$ and $T(0,a+U)=T(0,b+U)=b$ , which contradicts $T$ being a function.
|
|linear-algebra|solution-verification|quotient-spaces|
| 1
|
Random variable, confusion from theoretical to practical
|
From a theoretical perspective, we define $X$ is a real-valued random variable from $(\Omega, \mathcal{A})$ to $(\Bbb{R},\mathcal{B}(\Bbb{R}))$ iff it is a measurable function wrt to $\mathcal{A}$ and $\mathcal{B}(\Bbb{R})$ . To calculate the probability of events in $\mathcal{A}$ , we add a measure $P$ on $(\Omega, \mathcal{A})$ , and through $P$ , we can define the law $P_X$ of $X$ as the image measure induced by $X$ . In practical settings, we always skip some details and focus on the property of specific random variables. What I know is if we have $(E, \mathcal{E}, \mu)$ , we can always choose $(\Omega, \mathcal{A},\nu)=(E, \mathcal{E}, \mu)$ , so the identity function as a random variable satisfies the law $\mu$ . My question is: When we say a Bernoulli random variable, $P(X=0)=p, P(X=1)=1-p$ , does it mean we give the law of this random variable? Here it seems we only consider the target space $(\{0,1\}, 2^{\{0,1\}}, P)$ . Is it a real-valued random variable? Why is it measurable with respect to $\mathcal{B}(\Bbb{R})$ ?
|
A Bernoulli random variable is a random variable $X$ defined on some probability space $(\Omega,\mathcal F, P)$ such that $P[\omega\in\Omega: X(\omega)\in\{0,1\}]=1$ . (The default meaning of "random variable" is "measurable map from $(\Omega,\mathcal F)$ to $(\Bbb R,\mathcal B(\Bbb R))$ .) The label "Bernoulli" refers to the nature of the image distribution, not to $(\Omega,\mathcal F,P)$ . Given $X$ as above we can modify it slightly by defining $Y(\omega)=X(\omega)$ if $X(\omega)\in\{0,1\}$ and $Y(\omega)=0$ (say) if $X(\omega)\in\Bbb R\setminus \{0,1\}$ . Then $Y$ is a random variable with range contained in $\{0,1\}$ , and $P(Y=X)=1$ . In particular, $Y$ is also a Bernoulli random variable, with $p:=P(Y=1)=P(X=1)$ . You can check that a function $X:\Omega\to \{0,1\}$ is a random variable (i.e. $X^{-1}(B)\in\mathcal F$ for all $B\in\mathcal B(\Bbb R)$) if and only if $\{X=1\}\in\mathcal F$ .
|
|probability|probability-theory|statistics|
| 0
|
Bayesian updating on expectation with many candidates
|
I have an individual selected for a job. He was competing against 10 candidates. All individuals are independently drawn from the uniform distribution U(0,1). Him being chosen means he is better than the others. Normally, expected ability would be - $$ E(a|chosen) = \frac{\int_{b=0}^1 \int_b^1 a ~ da db}{\int_{b=0}^1 \int_b^1 da db} = \frac{2}{3} $$ where $a$ denotes his own ability and $b$ the ability of the other agents. But I also want to incorporate the number of individuals he was competing against because that would boost his confidence further. How can I incorporate that? My hunch is tedious and I don't know if it is correct or not - there is a ranking of individuals $a>b>c>d>\dots>j$ and the expected ability is the individual being better than each. Am I right? Is there a simpler way? $$\frac{\int_{b=0}^1 \int_b^1 a ~ da db}{\int_{b=0}^1 \int_b^1 da db} *... \frac{\int_{i=0}^1 \int_0^i i ~ dj di}{\int_{i=0}^1 \int_0^i dj di} $$
|
Is there a simpler way? Yes. Rather than multiplying, the method is to nest the integrals. You have that: $\qquad\begin{align}\mathsf E(X_{1}\mid X_{1}{\,=\,}\max\{X_1,X_2\}) &=\dfrac{\int_0^1\int_0^{x_1} x_1\,\mathrm d x_2\,\mathrm d x_1}{\int_0^1\int_0^{x_1} \mathrm d x_2\,\mathrm d x_1}\\[1ex]&=\dfrac 23\end{align}$ Similarly, the expected ability of a particular individual given that it is greater than all others among three is: $\qquad\begin{align}\mathsf E(X_{1}\mid X_{1}{\,=\,}\max\{X_1,X_2,X_3\}) &=\dfrac{\int_0^1\int_0^{x_1}\int_0^{x_1} x_1\,\mathrm d x_3\,\mathrm d x_2\,\mathrm d x_1}{\int_0^1\int_0^{x_1}\int_0^{x_1} \,\mathrm d x_3\,\mathrm d x_2\,\mathrm d x_1}\\[1ex]&=\dfrac {\int_0^1 x^3\,\mathrm d x}{\int_0^1 x^2\,\mathrm d x}\\[1ex]&= \dfrac{3}{4}\end{align}$ And generally, for any integer $n\geqslant 1$ , the expected ability of a particular individual given that it is greater than all others among the $n$ : $\qquad\begin{align}\mathsf E(X_{1}\mid X_{1}{\,=\,}\max_{1\leqslant i\leqslant n}\{X_i\}) &=\dfrac {\int_0^1 x^n\,\mathrm d x}{\int_0^1 x^{n-1}\,\mathrm d x}\\[1ex]&= \dfrac{n}{n+1}\end{align}$
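The pattern $2/3$, $3/4$, ... suggests $n/(n+1)$ in general, and a quick Monte Carlo check (plain Python; the sample size is arbitrary) agrees:

```python
import random

random.seed(1)
for n in (2, 3, 10):
    total = hits = 0
    for _ in range(200_000):
        xs = [random.random() for _ in range(n)]
        if xs[0] == max(xs):      # condition on candidate 1 being the best
            total += xs[0]
            hits += 1
    est = total / hits
    assert abs(est - n / (n + 1)) < 0.01   # matches E[X1 | X1 is the max]
```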
|
|expected-value|conditional-expectation|bayesian|
| 1
|
Countable subspace of uncountable set is indiscrete or $T_0$
|
I'm solving the problems from "Topology Without Tears" Let $(X,\mathcal{T})$ be a topological space, where $X$ is an infinite set. Prove that $(X,\mathcal{T})$ has a subspace homeomorphic to $(\mathbb{N},\mathcal{T}_1)$, where either $\mathcal{T}_1$ is the indiscrete topology or $(\mathbb{N},\mathcal{T}_1)$ is a $T_0$-space. My half-proof: $X$ can be countable or uncountable. The number of open sets in $\mathcal{T}$ can be countable or uncountable. There are 4 cases: a) $X$ - infinite, countable. $\mathcal{T}$ - finite b) $X$ - infinite, countable. $\mathcal{T}$ - infinite, countable b2) $X$ - infinite, countable. $\mathcal{T}$ - infinite, uncountable c) $X$ - infinite, uncountable. $\mathcal{T}$ - countable d) $X$ - infinite, uncountable. $\mathcal{T}$ - uncountable Cases: c) Let's consider the case when $X$ - uncountable and $\mathcal{T}$ - countable. We can build a partition $\mathcal{P}$ (not necessarily open sets) of $X$ such that for any $p \in \mathcal{P}$ and for any open set $s \in \mathcal{T}$
|
I'm going to attempt a less elegant proof, so please excuse me. Since I'm also working through Topology Without Tears, I'm going to avoid using equivalence classes, as they hadn't been brought up in the text thus far. Let $(X, \tau)$ be a topological space, where $X$ is infinite. We are required to prove that $(X, \tau)$ has a subspace homeomorphic to $(\mathbb N, \tau_1)$, where $\tau_1$ is either indiscrete or $T_0$. Case 1: There exist finitely many pairs $x, y \in X$ such that $x \in U$ and $y \notin U$ for some open subset $U$. This topology is not $T_0$. But since the space is infinite we can find a countably infinite subspace $S$ whose induced topology $\{S \cap U : U \in \tau\}$ is indiscrete. Case 2: There exist infinitely many pairs $x, y \in X$ such that $x \in U$ and $y \notin U$ for some open subset $U$. This topology is $T_0$. We can find an induced topology on a countably infinite subspace that is also $T_0$.
|
|general-topology|
| 0
|
Kummer's Lemma and $1+\zeta$
|
In lecture we were told to think about the following: Kummer's Lemma: Let $p$ be an odd prime and let $\zeta := e^{2\pi i / p}$ . Every unit of $\mathbb{Z}[\zeta]$ is of the form $r\zeta^g$ , where $r$ is real and $g$ is an integer. This theorem says that the units of $\mathbb{Z}[\zeta]$ can be thought of as the points of the lines that pass through the vertices of an equally spaced $p$ -gon centered at the origin. (Said vertices are of the form $\zeta^g$ .) However, we know that $1+\zeta$ is a unit . But this number does not lie on one of those lines. What is wrong here? Edit: This question seems to originate from Stewart & Tall's "Algebraic Number Theory and Fermat's Last Theorem" (4th edition), where it appears as Ex. 11.6 (p. 199).
|
Let the odd prime number be $p=2k+1$ . Consider the number $$ w = \zeta^k(1+\zeta)\ . $$ Then $$ \begin{aligned} \bar w &=\overline{\zeta^k(1+\zeta)} =\bar\zeta^k(1+\bar\zeta) =\zeta^{-k}(1+\zeta^{-1}) \\ & =\zeta^{p-k}\frac 1\zeta(\zeta+1) =\zeta^{k+1}\frac 1\zeta(\zeta+1) =\zeta^k(\zeta+1)=w\ . \end{aligned} $$ So $w$ is a real number, and $1+\zeta=w\;\zeta^{-k}=w\;\zeta^{k+1}$ .
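A quick floating-point check of this identity for several odd primes (stdlib Python with `cmath`): $\zeta^k(1+\zeta)$ is real, so $1+\zeta$ does lie on one of the lines $r\zeta^g$:

```python
import cmath

for p in (5, 7, 11, 13):
    k = (p - 1) // 2
    zeta = cmath.exp(2j * cmath.pi / p)
    w = zeta ** k * (1 + zeta)
    assert abs(w.imag) < 1e-12              # w = zeta^k (1 + zeta) is real
    # hence 1 + zeta = w * zeta^{-k}, i.e. r * zeta^g with r real, g = -k mod p
    assert abs(w * zeta ** (-k) - (1 + zeta)) < 1e-12
```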
|
|number-theory|roots-of-unity|cyclotomic-fields|
| 1
|
Is every regular Alexandrov topology a partition topology?
|
An Alexandrov topology on a set $X$ is a topology in which arbitrary intersections of open sets are open. Equivalently, every point has a smallest open neighborhood. Given a partition on a set $X$ , one can form the corresponding partition topology by taking the partition as a base for the topology. It is clear that a partition topology is an Alexandrov topology (the smallest nbhd of a point is the partition block containing the point), and is regular . The converse should be true: Every regular Alexandrov topology is a partition topology. In particular, every finite regular space has a partition topology. Can anybody provide a proof?
|
Suppose $X$ is regular and Alexandrov. If $x \in X$ , let $U_x$ be the minimal open neighbourhood of $x$ . As Randall says, it would be nice if we could show that the family of all $U_x$ is a partition. It is pretty clear that this family is a basis for the topology on $X$ , as for any open $U$ , we have that $U = \bigcup_{x \in U} U_x$ . It's clear that the $U_x$ cover all of $X$ , so all we need to show is that the $U_x$ are pairwise either equal or disjoint. Equivalently, we need to show that if a pair of points in $X$ is topologically distinguishable, then they can be separated by open sets. Indeed, suppose that $x, y \in X$ are topologically distinguishable. Without loss of generality, assume there is an open set $U$ with $x \in U$ but $y \notin U$ . Then $X \setminus U$ is a closed set which contains the point $y$ , but does not contain the point $x$ . By regularity, we can find open sets that separate $X \setminus U$ and $x$ . In particular, they separate $x$ and $y$ . So we are done.
|
|general-topology|
| 1
|
What is this sequence called? [0,0,1,0,2,0,2,2,1...]
|
The sequence $s_n$ follows these rules: $s_0=0$; for $n \ge 1$, let $k$ be the smallest positive integer such that $s_{(n-1)-k}=s_{n-1}$ (if no such $k$ exists, set $k=0$); then $s_n=k$. With these rules, you get the sequence $[0,0,1,0,2,0,2,2,1,6,0,5,0,2,6,5,...]$
|
This sequence is OEIS A181391 , aka Van Eck's Sequence.
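For reference, here is a direct Python implementation of the rule ("how far back did the last term previously occur?"), which reproduces the terms listed in the question:

```python
def van_eck(n_terms):
    seq = [0]
    last = {}                  # value -> most recent index where it appeared
    for i in range(n_terms - 1):
        v = seq[-1]
        # distance back to the previous occurrence of v, or 0 if it is new
        seq.append(i - last[v] if v in last else 0)
        last[v] = i
    return seq

assert van_eck(16) == [0, 0, 1, 0, 2, 0, 2, 2, 1, 6, 0, 5, 0, 2, 6, 5]
```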
|
|sequences-and-series|
| 1
|
Dick has 5 loaves of bread. Nick has 3 loaves. The three share the bread equally with Albert. Albert gives Dick and Nick $8. What's a fair split?
|
Dick and Nick share their food with Albert. Dick has 5 loaves of bread and Nick has 3 loaves. The three share the bread equally. Albert gives Dick and Nick 8 dollars, which they agree to share fairly. How should Dick and Nick divide the eight dollars between them? Dick contributes $5/8$ of the loaves, while Nick contributes $3/8$ . They each get $8/3$ loaves of the bread. Dick provided $5/8$ of Albert's meal so he should get \$5 and Nick \$3. Is that correct? I'm not completely sure. It sort of feels right but not quite intuitively. I'm imagining the 8 loaves of bread combined into one. Split equally the bread still contains $5/8$ bread from Dick...
|
This problem is not well-specified. For one thing, most of the answers assume that the worth of a loaf of bread is determined by what Albert donates. But actually, Dick and Nick both bought their bread before Albert arrived, and the price was determined by the baker, not by Albert. If the aim is to treat Dick and Nick fairly, one could argue that this needs to be done in two stages. First, ensure that neither Dick or Nick suffers financially as a result of their unequal contributions to the shared meal. Then, ensure that whatever additional money they receive from Albert is divided equally between Dick and Nick. Here are a few possible scenarios: (a) One loaf costs 10 cents. If Albert hadn't come along, Nick would have given Dick $((50-30)÷2) = 10$ cents to make it fair. Because Albert did come along, they now have an extra \$8 to share, so Dick should receive $((8÷2)+0.10) = \$4.10$ , and Nick should receive $((8÷2)-0.10) = \$3.90$ . (b) One loaf costs $\$1$ . If Albert hadn't come al
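For contrast with the scenarios above, the classic reading of this puzzle — Albert's \$8 pays exactly for the bread that he himself ate, valued uniformly — gives a $7:1$ split, since what matters is how much bread each man gave up, not how much he brought. Computed with exact fractions:

```python
from fractions import Fraction

loaves = {"Dick": Fraction(5), "Nick": Fraction(3)}
eaten = sum(loaves.values()) / 3                    # each person eats 8/3 loaves
gave = {k: v - eaten for k, v in loaves.items()}    # bread handed to Albert
total_given = sum(gave.values())                    # 8/3 loaves, worth $8
split = {k: 8 * v / total_given for k, v in gave.items()}
assert split == {"Dick": 7, "Nick": 1}   # Dick gave 7/3 loaves, Nick only 1/3
```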
|
|algebra-precalculus|
| 0
|
A very odd resolution to an integral equation
|
Here is something I've found on the internet $$\begin{aligned} f-\int f&=1\\ \left(1-\int\right)f&=1\\ f&=\left(\frac1{1-\int}\right)1\\ &=\left(1+\int+\int\int+\dots\right)1\\ &=1+\int1+\int\int1+\dots\\ &= 1+x+\frac{x^2}2+\dots\\ &= e^x \end{aligned}$$ At first I interpreted this as a joke, but on closer inspection I'm not so sure... The first thing I checked was the solution, and fairly enough, $e^x$ satisfies the initial equation. It is not the only solution, though: $\lambda e^x$ works just as well for any $\lambda\in\mathbb R$ . I'm not really familiar with integral equations; I don't know if the solution space is a vector space. Maybe it is and our goal is to find a basis for it. If that is the case, then $e^x$ may actually be the expected result. That being settled, I started looking at the development. Writing $\displaystyle f-\int f$ as $\displaystyle \left(1-\int\right)f$ is completely fine, it's just operational calculus. The "division of both sides" by $\displaystyle\left(1-\int\right)$ is the step I'm not sure how to justify.
|
In general if $T$ is a bounded linear operator on a Banach space $B$ with norm $||T|| < 1$ , then $1-T$ is invertible, $1 + T + T^2 + \cdots$ converges in the Banach space topology, and $$(1-T)^{-1} = 1 + T + T^2 + \cdots$$ just as in the case of real numbers. See Neumann series . In this case, interpreting $\int$ as the operator $\mathcal{C}[0, 1-\epsilon] \to \mathcal{C}[0, 1-\epsilon]$ given by $(\int f)(x) = \int_0^x f(t) \, dt$ , where $\mathcal{C}[0, 1-\epsilon]$ is the Banach space of continuous real-valued functions on $[0, 1- \epsilon]$ for some fixed $\epsilon > 0$ , we note that $\int$ is a bounded linear operator and has norm $||\int|| = 1-\epsilon$ (see Integral as a linear operator is bounded ). Then we know that $(1 - \int)$ is invertible and $(1-\int)^{-1}= 1 + \int + \int\int + \int\int\int + \cdots$ , so these calculations are indeed justified.
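A numerical illustration of the Neumann series (plain Python, trapezoid-rule integration on $[0, 0.9]$, where the operator norm is $0.9 < 1$): summing $1 + \int 1 + \int\int 1 + \cdots$ converges to $e^x$:

```python
import math

# discretize the operator (Tf)(x) = integral of f from 0 to x on [0, 0.9]
N = 900
h = 0.9 / N
xs = [i * h for i in range(N + 1)]

def integrate(f):
    # cumulative trapezoid rule
    out = [0.0]
    for i in range(N):
        out.append(out[-1] + (f[i] + f[i + 1]) * h / 2)
    return out

term = [1.0] * (N + 1)    # the constant function 1
total = term[:]
for _ in range(40):       # partial sum of the Neumann series applied to 1
    term = integrate(term)
    total = [t + u for t, u in zip(total, term)]

err = max(abs(t - math.exp(x)) for t, x in zip(total, xs))
assert err < 1e-4         # the series has converged to e^x
```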
|
|calculus|functional-analysis|solution-verification|integral-equations|
| 1
|
Finding the smallest natural number given a value of a tau function (number of positive divisors)
|
Find the smallest $n\in N$ such that $\tau(n)=10$
|
Let $n>1$ and $n=p_1^{k_1}p_2^{k_2}...p_r^{k_r}$ be the prime factorization of $n$ such that $\tau(n)=10$ . We know that $10=10\cdot1$ or $10=2\cdot5$ . Then we are certain that there are at most two distinct prime factors of $n$ , where $k_1=0$ and $k_2=9$ , or $k_1=1$ and $k_2=4$ . Since we want to find the smallest $n$ with 10 positive divisors, it is logical to use the two lowest prime numbers 2 and 3. Thus $n=2^03^9$ or $n=2^93^0$ or $n=2^13^4$ or $n=3^12^4$ . Hence $n=19683$ , or $n=512$ , or $n=162$ , or $n=48$ . Therefore, by observation, 48 is the smallest natural number with 10 positive divisors.
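A brute-force check in Python confirms the conclusion:

```python
def tau(n):
    # number of positive divisors of n (naive count is fine at this scale)
    return sum(1 for d in range(1, n + 1) if n % d == 0)

smallest = next(n for n in range(1, 1000) if tau(n) == 10)
assert smallest == 48   # 48 = 2^4 * 3, so tau(48) = 5 * 2 = 10
```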
|
|elementary-number-theory|
| 0
|
Is there a standard name for the boundary of a cube?
|
A distinction is commonly made between a ball (solid) and a sphere (the boundary of a ball). This distinction is made in other dimensions as well (e.g. circle versus disc, in 2D). From what I've seen trying to research this online, the word "cube" is usually used to refer to the solid. Is there any standard name for just the boundary of a cube? The only thing that came to mind was "box" but that both doesn't seem to be a standard term, and might intuitively refer to the boundary of any rectangular prism. (My question extends to squares as well, i.e. is there a standard name for just the boundary of a square? And a similar question could be asked in reverse about the torus, where that name is usually used to refer to only the boundary.) If there is no standard name, my follow-up question would be: why do we have distinct names for spheres and balls, but not other common solids? Just historical accident? (Is the distinction made in languages other than English?)
|
I know of no standard names for those boundaries. Mathematicians give the sphere and ball those distinctive names because they often want to discuss both in the same context. The boundary of the cube comes up less often. In some contexts it's called the 2-skeleton; the 1-skeleton is the set of edges. To refer to a torus together with its interior you would say "solid torus".
|
|geometry|terminology|convention|
| 1
|
Is there a commutative binary operation which is closed, has a neutral, unique negatives, satisfies cancellation but not associative?
|
What I want is: a closed operation, a neutral element, unique negatives, cancellation laws, commutativity, but no associativity. Can I get all this?
|
Here’s Java code that generates all left-cancellative binary operations with identity on up to $6$ elements and checks whether they fulfil the remaining conditions. The result is that there are no such binary operations on less than $6$ elements, and $396$ such operations on $6$ elements, which form $7$ equivalence classes under permutations of the elements: $$ \begin{array}{cc} \begin{array}{c|cc} &0&1&2&3&4&5\\\hline 0&0&1&2&3&4&5\\ 1&1&4&3&5&2&0\\ 2&2&3&0&4&5&1\\ 3&3&5&4&1&0&2\\ 4&4&2&5&0&1&3\\ 5&5&0&1&2&3&4 \end{array}& \begin{array}{c|cc} &0&1&2&3&4&5\\\hline 0&0&1&2&3&4&5\\ 1&1&5&3&4&2&0\\ 2&2&3&4&5&0&1\\ 3&3&4&5&0&1&2\\ 4&4&2&0&1&5&3\\ 5&5&0&1&2&3&4 \end{array}& \begin{array}{c|cc} &0&1&2&3&4&5\\\hline 0&0&1&2&3&4&5\\ 1&1&3&4&5&2&0\\ 2&2&4&3&0&5&1\\ 3&3&5&0&4&1&2\\ 4&4&2&5&1&0&3\\ 5&5&0&1&2&3&4 \end{array}\\ \begin{array}{c|cc} &0&1&2&3&4&5\\\hline 0&0&1&2&3&4&5\\ 1&1&4&3&5&2&0\\ 2&2&3&4&0&5&1\\ 3&3&5&0&4&1&2\\ 4&4&2&5&1&0&3\\ 5&5&0&1&2&3&4 \end{array}& \begin{array}{c|cc} &0&1&
|
|group-theory|
| 0
|
Prove $\frac{|a+b|}{|a-b|} = \cot(\frac{\alpha}{2})$ where $a$ and $b$ are vectors of equal magnitudes.
|
Prove $$ \frac{|a+b|}{|a-b|} = \cot\Bigl(\frac{\alpha}{2}\Bigr) $$ where $a$ and $b$ are vectors of equal magnitudes and $\alpha$ is the angle between $a$ and $b$ . My approach: Let $$ \left\{ \begin{aligned} a &= [x,y,z] \\[4pt] b &= [p,q,r] \end{aligned} \right. $$ Solving LHS: \begin{align} \frac{|a+b|}{|a-b|} &= \sqrt\frac{(x+p)^2+(y+q)^2+(z+r)^2}{(x-p)^2+(y-q)^2+(z-r)^2} \\ &= \sqrt\frac{x^2+y^2+z^2+p^2+q^2+r^2+2(xp+yq+zr)}{x^2+y^2+z^2+p^2+q^2+r^2-2(xp+yq+zr)} \\ &= \sqrt\frac{x^2+y^2+z^2+xp+yq+zr}{x^2+y^2+z^2-(xp+yq+zr)} \end{align} As by vector magnitude formula: $$ \sqrt{x^2+y^2+z^2} = \sqrt{p^2+q^2+r^2} $$ Now solving RHS: $$\cot\Bigl(\frac{\alpha}{2}\Bigr) = \frac{\cos\Bigl(\dfrac{\alpha}{2}\Bigr)}{\sin\Bigl(\dfrac{\alpha}{2}\Bigr)} $$ From here I did plenty of calculations using the vector dot product formula: $$ \begin{gather} a \cdot b = |a||b| \cos(\theta) \\ \cos(2\theta) = 2\cos^2(\theta) - 1 = 1 - 2\sin^2(\theta) \end{gather} $$ I got \begin{align} \cos\Bigl(\frac{\alp
|
Since $a$ and $b$ are vectors of equal magnitude, $a-b$ and $a+b$ are the diagonal vectors of the rhombus spanned by $a$ and $b$ . By the properties of a rhombus, $a-b$ and $a+b$ are perpendicular and bisect each other. Moreover, $a+b$ bisects the angle between the vectors $a$ and $b$ . Hence $$ \frac{\frac{|a+b|}{2}}{\frac{|a-b|}{2}}=\cot \left(\frac{\alpha}{2}\right) \Rightarrow \frac{|a+b|}{|a-b|}=\cot \left(\frac{\alpha}{2}\right) $$
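A quick numerical check of the identity, on an arbitrarily chosen pair of equal-magnitude vectors (the specific vectors below are just an example):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])   # |a| = 3
b = np.array([2.0, 1.0, 2.0])   # |b| = 3
alpha = np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
lhs = np.linalg.norm(a + b) / np.linalg.norm(a - b)
rhs = 1.0 / np.tan(alpha / 2)   # cot(alpha / 2)
print(lhs, rhs)                 # both equal sqrt(17) for this pair
```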
|
|linear-algebra|trigonometry|vectors|
| 0
|
How to parametrically express a parametric plane curve of a 3D plot as an axis in different 2D plot?
|
Preface Consider a plane curve defined by parametric equations (for $t_1\le t \le t_2$ ): $$x=x\left(t\right)$$ $$y=y\left(t\right)$$ In addition, there is a scalar function $f\left(x,y\right)$ defined for every value of $x$ and $y$ . Giving an example to illustrate (in this example, $t$ is the central angle in the $x,y$ plane): $$x\left(t\right)=\cos\left(t\right)$$ $$y\left(t\right)=\sqrt{1-\cos^2\left(t\right)}$$ $$f\left(x,y\right)=\cos^2\left(\pi * 6 * x * y\right)$$ The parametric curve, and the value of $f\left(x,y\right)$ along that curve, can be depicted in a 3D plot (in this example, for the range $0\leq t\leq 2\pi$ ), expressed something like: $$ParametricPlot3D\left[x\left(t\right), y\left(t\right), f\left(x,y\right) \right]$$ and depicted graphically as: I'll choose one segment $s\left(t\right)$ of this curve to simplify the example (e.g. for the range $\frac{\pi}{4}\leq t\leq \frac{3\pi}{4}$ ): Objective My objective is to plot the values of $f\left(x,y\right)$ along the
|
For your parametric plot, your $x$ coordinate as a function of $t$ should be the length traversed along the curve between time $t_1$ and $t$ . This is: $$\int_{w = t_1}^{w = t} ds = \int_{w = t_1}^{w = t} \sqrt{\left(x'(w)\right)^2+\left(y'(w)\right)^2}dw$$
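A numerical sketch of this in Python (rather than Mathematica), using the example curve from the question on the segment $\pi/4 \le t \le 3\pi/4$ ; since that segment lies on the unit circle, the accumulated arc length should come out to $\pi/2$ :

```python
import numpy as np

t = np.linspace(np.pi / 4, 3 * np.pi / 4, 2001)
x = np.cos(t)
y = np.sqrt(1 - np.cos(t) ** 2)
f = np.cos(np.pi * 6 * x * y) ** 2

# arc length s(t) = integral of sqrt(x'^2 + y'^2), accumulated by the trapezoid rule
dx, dy = np.gradient(x, t), np.gradient(y, t)
speed = np.hypot(dx, dy)
s = np.concatenate(([0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))))

# plotting f against s (e.g. plt.plot(s, f)) gives the desired 2D picture
print(s[-1])  # total length, approximately pi/2 here
```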
|
|curves|parametric|
| 0
|
Mathcounts 2014 National Team #8
|
I encountered this problem when practicing with past competition problems. Since 1-6 contains 3 even, 3 odd and 3 primes (2,3,5), each branch obviously has $\dfrac{1}{2}$ probability to happen. It is easy to compute that in each cycle the probability to obtain a 'B', 'N' and 'A' is $P(B)=\frac{1}{8}$ , $P(N)=\dfrac{1}{4}$ and $P(A)=\dfrac{1}{2}$ , with the remaining $\dfrac{1}{8}$ resetting to the start without writing any letter, giving a retry. So overall, the chance of getting these letters is $$\begin{cases} P(B)=\frac{1}{8}\cdot\left(1+\frac{1}{8}+\frac{1}{8^{2}}+\cdots\right)=\frac{1}{7}\\ P(N)=\frac{1}{4}\cdot\left(1+\frac{1}{8}+\frac{1}{8^{2}}+\cdots\right)=\frac{2}{7}\\ P(A)=\frac{1}{2}\cdot\left(1+\frac{1}{8}+\frac{1}{8^{2}}+\cdots\right)=\frac{4}{7}\\ \end{cases}$$ They add up to 1, as the machine eventually will print one of the three. So the probability to get BANANA should be $$\frac{1}{7}\cdot\frac{4}{7}\cdot\frac{2}{7}\cdot\frac{4}{7}\cdot\frac{2}{7}\cdot\frac{4}{7}=\frac{
|
You have computed $P(N)$ and $P(B)$ wrong: note that there is no third roll, only two rolls in a cycle. Carefully examining the flowchart nets you $P(A) = 1/2, P(B) = 1/6, P(N) = 1/6$ and $P(\text{reset}) = 1/6$ , which gives the right answer, as excluding resets we get $P(A) = 3/5, P(B) = 1/5, P(N) = 1/5$ .
|
|probability|contest-math|
| 1
|
Radical of a finite-dimensional Lie algebra
|
I failed to understand the radical of a finite-dimensional Lie algebra $\mathfrak{g}$ . There are two definitions of the radical: It is the sum of all solvable ideals of $\mathfrak{g}$ . It is the unique maximal solvable ideal of $\mathfrak{g}$ . For the first definition, we know that if $\mathfrak{a}$ and $\mathfrak{b}$ are solvable ideals, then so is $\mathfrak{a}+\mathfrak{b}$ . However, to ensure that the sum of all solvable ideals is solvable, it seems that we need there to be only finitely many solvable ideals in order to apply induction, but I failed to see why that should hold. For the second definition, how can we ensure the existence and uniqueness of a maximal solvable ideal? For rings, we can use Zorn's Lemma to deduce the existence of a maximal ideal, but I don't know how to prove the existence of a maximal solvable ideal in this case. Any help would be appreciated. Thanks in advance.
|
There might be various ways to go about this, but I recommend the following: Because $\mathfrak g$ has finite dimension, among all its solvable ideals there exists one that has maximal dimension. Now show: If $M$ is such an ideal (of maximal dimension among the solvable ones), and $I \subseteq \mathfrak g$ is any solvable ideal, then $I \subseteq M$ . From this one can conclude that such $M$ contains (hence equals) the sum of all solvable ideals, that it is unique, and that it is in fact maximal among the solvable ideals.
|
|lie-algebras|
| 0
|
A very odd resolution to an integral equation
|
Here is something I've found on the internet $$\begin{aligned} f-\int f&=1\\ \left(1-\int\right)f&=1\\ f&=\left(\frac1{1-\int}\right)1\\ &=\left(1+\int+\int\int+\dots\right)1\\ &=1+\int1+\int\int1+\dots\\ &= 1+x+\frac{x^2}2+\dots\\ &= e^x \end{aligned}$$ At first I interpreted this as a joke, but on closer inspection I'm not so sure... The first thing I checked was the solution, and fairly enough, $e^x$ satisfies the initial equation. It is not the only solution, though: $\lambda e^x$ works just as well for any $\lambda\in\mathbb R$ . I'm not really familiar with integral equations; I don't know if the solution space is a vector space. Maybe it is and our goal is to find a basis for it. If that is the case, then $e^x$ may actually be the expected result. That being settled, I started looking at the development. Writing $\displaystyle f-\int f$ as $\displaystyle \left(1-\int\right)f$ is completely fine, it's just operational calculus. The "division of both sides" by $\displaystyle\left(1-
|
The answer by @Jair Taylor addresses the case of bounded operators of norm strictly less than $1$ (hence the restriction to the interval $[0,1-\epsilon]$ ), and justifies rigorously why the series works. I’m not sure if it’s possible to generalize that argument directly to work for larger domains. Hence, what follows is really more of an excuse to illustrate/apply some of the ideas in functional analysis, particularly around Banach’s fixed point theorem. Your problem really is a fixed point problem, because $f-\int f=1$ rearranged gives $f=1+\int f$ , i.e the function $f$ equals some other operator applied to $f$ , namely $\Phi(f)=1+\int f$ , i.e we have to solve the equation $f=\Phi(f)$ . This is actually a very common theme when solving (nonlinear) integral equations, or even differential equations, or simply to prove many types of existence results: reformulate your original problem as a fixed-point problem (or whatever the case may be), and try to solve that new problem using the ‘
|
|calculus|functional-analysis|solution-verification|integral-equations|
| 0
|
Determining If Set is Ring or Field
|
Decide whether the following subset of $\mathbb{R}$ is not a ring, or is a ring but not a field, or is a field: $S = \{a + b\sqrt[3]{2} + c\sqrt[3]{4} \mid a,b,c \in \mathbb{Q}\}$ . I tried approaching this by considering the polynomial $f(x) = x^3 - 2 \in \mathbb{Q}[x]$ and observing that a root is $x = \sqrt[3]{2}$ . Then, if there exists a set containing $\mathbb{Q}$ and $\sqrt[3]{2}$ , we prove that $S$ is a field as $S = \mathbb{Q}[\sqrt[3]{2}]$ . However, I have been unsuccessful. Are there any hints for tackling this?
|
First show that $S$ is a ring by showing that it satisfies all ring properties. Consider the map $\psi:\mathbb Q[x]\to S$ sending $f(x)\mapsto f(\sqrt[3]{2})$ . It is clear that $\psi$ is surjective with kernel $(x^3-2)$ . Since $x^3-2$ is irreducible over $\mathbb Q$ (e.g. by Eisenstein's criterion at $p=2$ ), the kernel is a maximal ideal, so by the first isomorphism theorem $S\cong\mathbb Q[x]/(x^3-2)$ is a field; that is, $S=\mathbb Q(\sqrt[3]{2})$ .
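To see concretely why inverses exist, here is the rationalization trick on the sample element $1+\sqrt[3]{2}$ : multiplying numerator and denominator by $1-\sqrt[3]{2}+\sqrt[3]{4}$ turns the denominator into $1+2=3$ , so the inverse has rational coefficients. A numerical check:

```python
c = 2 ** (1 / 3)
inv = (1 - c + c ** 2) / 3   # candidate inverse of 1 + 2^(1/3), rational coefficients
print((1 + c) * inv)         # approximately 1.0, since (1 + c)(1 - c + c^2) = 1 + c^3 = 3
```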
|
|abstract-algebra|ring-theory|field-theory|
| 0
|
Limit involving an Indicator Function and Sum
|
For each $n\in\mathbb{N}$ , consider $n$ real numbers $x_{i,n}$ , $1\le i \le n$ , given by $$\sum_{i=1}^{n}x_{i,n}=0,\\-1/n \le x_{i,n} \le 1-1/n.$$ I am trying to find the value of $$\lim_{n\to \infty} \sum_{l=1}^{\lfloor n/p \rfloor}\textbf{1}_{lp\le n}x_{lp,n},$$ where $p$ denotes a prime and $\textbf{1}_{\lbrace\cdot \rbrace}$ denotes an indicator function. Approach: Since $n$ is going to $\infty$ , we can say $$\textbf{1}_{lp\le n}\equiv1$$ in the limit. This leads to $$\lim_{n\to \infty} \sum_{l=1}^{\lfloor n/p \rfloor}x_{lp,n}=\sum_{i=1}^{\infty}x_{i,\infty} - x_{1,\infty}=-x_{1,\infty}.$$ Does this suggest the limit does not exist?
|
Let $n$ and $l\le\lfloor n/p\rfloor$ be any natural numbers. Then $lp\le n$ , so $\textbf{1}_{lp\le n}=1$ . Put $x_n=\sum_{l=1}^{\lfloor n/p \rfloor}\textbf{1}_{lp\le n}x_{lp,n}=\sum_{l=1}^{\lfloor n/p \rfloor} x_{lp,n}$ . The (existence of the) limit $\lim_{n\to\infty} x_n$ depends on the numbers $x_{i,n}$ for $n\in\mathbb N$ and $1\le i\le n$ . For instance, if all these numbers are zeroes then the limit is zero too. On the other hand, if $p=2$ , and for each even $n$ and each natural $i\le n/2$ we have $x_{i,n}=\frac {(-1)^{i+n/2}}{n}$ , then $x_n=\frac {(-1)^{n/2}}2$ , so the limit does not exist.
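The second construction is easy to check numerically; with $p=2$ the partial sums alternate between $1/2$ and $-1/2$ :

```python
def x_n(n, p=2):
    # sum over l of x_{lp, n}, with x_{i,n} = (-1)^(i + n/2) / n and n even
    return sum((-1) ** (l * p + n // 2) / n for l in range(1, n // p + 1))

print([x_n(n) for n in (4, 6, 8, 10)])  # approximately [0.5, -0.5, 0.5, -0.5]
```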
|
|real-analysis|sequences-and-series|limits|ceiling-and-floor-functions|
| 1
|
Looking at all permutations of a product of matrices
|
Let $M$ be an $n \times n$ nonnegative row-stochastic matrix, and let $M_i$ be the matrix whose $i$ -th row is the same as that of $M$ , and whose remaining rows are all the same as that of the identity matrix. For example, if $M = \begin{bmatrix} 1/6 & 2/6 & 3/6 \\ 4/15 & 5/15 & 6/15 \\ 7/24 & 8/24 & 9/24 \end{bmatrix}$ , then $M_2 = \begin{bmatrix} 1 & 0 & 0 \\ 4/15 & 5/15 & 6/15 \\ 0 & 0 & 1 \end{bmatrix}$ . Is there a clean expression for the sum of $M_{\pi(1)} M_{\pi(2)} \dots M_{\pi(n)}$ over all permutations $\pi$ of $1, \dots, n$ ? (More formally, $\sum_{\pi \in S_n} \prod_{i=1}^n M_{\pi(i)}$ .) After some examples, I suspect that the answer is $n!$ times the diagonal matrix whose elements are $M_{1,1}, M_{2,2}, \dots$ , but I have no idea how to prove it. Might it involve looking at the transpose? Fix a length- $n$ column vector $v$ whose elements are reals from $0$ to $1$ . Repeat this process forever: choose a random permutation $\pi$ (different every time), and update $v$ t
|
In the $2 \times 2$ case, with $M = \pmatrix{a_{11} & 1-a_{11}\cr a_{21} & 1 - a_{21}}$ , your sum is $$\left(\begin{array}{cc} -a_{2,1} a_{1,1}+2 a_{1,1}+a_{2,1} & \left(a_{2,1}-2\right) \left(a_{1,1}-1\right) \\ a_{2,1} \left(a_{1,1}+1\right) & -a_{2,1} a_{1,1}-a_{2,1}+2 \end{array}\right) $$ In the $3 \times 3$ case, with $$M = \left(\begin{array}{ccc} a_{1,1} & a_{1,2} & 1-a_{1,1}-a_{1,2} \\ a_{2,1} & a_{2,2} & 1-a_{2,1}-a_{2,2} \\ a_{3,1} & a_{3,2} & 1-a_{3,1}-a_{3,2} \end{array}\right)$$ your sum is too complicated to write out here. All matrix elements are polynomials of total degree $3$ in the $a_{ij}$ . The $(2,1)$ matrix element, for example, is $$ -a_{1,1} a_{2,2} a_{3,2}-a_{1,2} a_{2,1} a_{3,2}-2 a_{1,2} a_{2,2} a_{3,2}-2 a_{1,1} a_{3,2}+3 a_{1,2} a_{2,2}-a_{1,2} a_{3,2}+a_{3,2} a_{2,2}+3 a_{1,2}+2 a_{3,2}$$
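The $2\times 2$ formula can be verified symbolically (a small SymPy check, not part of the original answer):

```python
import sympy as sp

a11, a21 = sp.symbols('a11 a21')
M1 = sp.Matrix([[a11, 1 - a11], [0, 1]])    # row 1 of M, row 2 of identity
M2 = sp.Matrix([[1, 0], [a21, 1 - a21]])    # row 1 of identity, row 2 of M
S = sp.expand(M1 * M2 + M2 * M1)            # sum over both permutations
expected = sp.expand(sp.Matrix([
    [-a21 * a11 + 2 * a11 + a21, (a21 - 2) * (a11 - 1)],
    [a21 * (a11 + 1), -a21 * a11 - a21 + 2],
]))
print(S == expected)  # True
```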
|
|linear-algebra|matrices|adjacency-matrix|stochastic-matrices|
| 0
|
Find all integers n such that $(\frac{n^3-1}{5})$ is prime
|
Find all integers n such that $(\frac{n^3-1}{5})$ is prime? My approach: I wrote down all the primes which I could get after dividing $(n^3-1)$ by $5$ : $n^3-1=10,15,25,35,55,...,215$ , which leads me to $n^3=11,16,26,...,216$ , and then I obtained $n=6$ . My doubt is how to check more values of $n$ without using modular arithmetic, because the book I'm referring to has not introduced it yet. Second approach: $\frac{(n^3-1)}{5}=\frac{(n-1)(n^2+n+1)}{5}$ But my second approach too does not lead me anywhere. This problem is from the book Pathfinder for Olympiad Mathematics
|
There is a simpler way. We need $n^3 - 1\equiv 0 \pmod 5$ , i.e. $n^3\equiv 1 \pmod 5$ . Since $\gcd(3,4)=1$ , cubing is a bijection modulo $5$ , so this forces $n\equiv 1 \pmod 5$ , i.e. $n=5k+1$ for an integer $k$ . Then $$\frac{n^3-1}{5}=\frac{n-1}{5}\cdot(n^2+n+1),$$ and for this product to be prime we need $\frac{n-1}{5}=1$ , i.e. $k=1$ . So the only solution is $n=6$ , which gives the prime $43$ .
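A quick exhaustive check over a small range supports this (the search bound of $1000$ is arbitrary):

```python
from sympy import isprime

sols = [n for n in range(2, 1000)
        if (n ** 3 - 1) % 5 == 0 and isprime((n ** 3 - 1) // 5)]
print(sols)  # [6]  -> (6**3 - 1) / 5 = 43, which is prime
```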
|
|elementary-number-theory|prime-numbers|contest-math|
| 0
|
Frullani's integral
|
I know that there are a lot of posts that derive Frullani's integral but I would really like to know how to do it for myself. Here is the problem: Let $f:[0,\infty)\to\mathbb{R}$ be a $C^{1}$ function such that $\displaystyle\int_{1}^{\infty} \frac{f(u)}{u}\,d u$ converges. Prove, using Fubini's theorem, that if $0 < a < b$ then $$\int_{0}^{\infty}\dfrac{f(ax)-f(bx)}{x}\, dx=f(0)\log\left(\frac{b}{a} \right)$$ Proof: By definition, $I=\displaystyle\int_{0}^{\infty}\dfrac{f(ax)-f(bx)}{x}\, dx=\lim\limits_{t\to\infty}\displaystyle\int_{0}^{t}\dfrac{f(ax)-f(bx)}{x}\, dx $ . By the fact that $\dfrac{d\left(\frac{f(xy)}{x} \right)}{dy}=f'(xy)$ , it follows that $\displaystyle\int_{b}^{a} f'(xy)\, dy=\dfrac{f(ax)-f(bx)}{x}$ and therefore we can write $I=\lim\limits_{t\to\infty} \displaystyle\int_{0}^{t}\left( \displaystyle\int_{b}^{a} f'(xy)\, dy\right)\, dx$ . By Fubini's theorem, it follows that $I=\lim\limits_{t\to\infty} \displaystyle\int_{b}^{a}\left( \displaystyle\int_{0}^{t} f'(xy)\, dx\right)\,
|
Too long for a comment We can use the Mean value theorem. Let $f(x)$ be a continuous function ( $C^0$ ), and $b>a$ . Then $$ \int_\delta^\Delta\frac{f(ax)-f(bx)}{x}dx= \int_{\delta a}^{\Delta a}\frac{f(t)}{t}dt-\int_{\delta b}^{\Delta b}\frac{f(t)}{t}dt$$ $$=\int_{\delta a}^{\delta b}\frac{f(t)}{t}dt+\int_{\delta b}^{\Delta a}\frac{f(t)}{t}dt-\int_{\delta b}^{\Delta a}\frac{f(t)}{t}dt-\int_{\Delta a}^{\Delta b}\frac{f(t)}{t}dt=\int_{\delta a}^{\delta b}\frac{f(t)}{t}dt -\int_{\Delta a}^{\Delta b}\frac{f(t)}{t}dt$$ According to the Mean value theorem for integrals, there exist $c_1\in[\delta a;\delta b] , c_2\in[\Delta a;\Delta b]$ such that $$=f(c_1)\int_{\delta a}^{\delta b}\frac{dt}{t} -f(c_2)\int_{\Delta a}^{\Delta b}\frac{dt}{t}=\Big(f(c_1)-f(c_2)\Big)\ln\frac{b}{a}$$ and $$\underset{x\in[\delta a;\delta b]}{\min} f(x)\ \le\ f(c_1)\ \le\ \underset{x\in[\delta a;\delta b]}{\max} f(x)$$ Now let $\delta\to0$ and $\Delta\to\infty$
|
|integration|improper-integrals|fubini-tonelli-theorems|
| 0
|
Integer part of $x^n$
|
Given a real number $x>1$ and a natural number $n$ , what can we say about the integer part of $x^n$ in terms of $x$ and $n$ ? For simplicity, let us assume $x<2$ . For the first few values of $n$ , $\left\lfloor x^n\right\rfloor =1$ . But, once it crosses $1$ , it starts growing quite rapidly. Is it possible to express $\left\lfloor x^n\right\rfloor$ in terms of $x$ and $n$ ? Or at least provide some bounds? I was trying to think of it recursively. For a given $x>1$ , let $I_n = \left\lfloor x^n\right\rfloor$ and $F_n = \left\{ x^n\right\}$ so that $$I_{n+1} = \left\lfloor x(I_n+F_n)\right\rfloor$$ but this doesn't take me anywhere. Any useful results will be appreciated. Some specific sequences are given in OEIS A002379 , OEIS A064628 and OEIS A091946 . Please note that the question is to find $\lfloor x^n \rfloor$ given a $x$ and an $n$ without using the $\text{floor}$ operator
|
If $\lfloor x^n\rfloor < k$ , then $x^n < k$ , so $x < k^{1/n}$ . If $\lfloor x^n\rfloor \ge k$ , then $x^n \ge k$ , so $x \ge k^{1/n}$ . So for a given $k$ , find $n$ such that $k^{\frac1{n+1}} \le x < k^{\frac1n}$ ; then $x^n < k \le x^{n+1}$ , i.e. $n$ is the last exponent with $\lfloor x^n\rfloor < k$ .
|
|elementary-number-theory|real-numbers|exponentiation|ceiling-and-floor-functions|
| 0
|
Can I find a non-trivial cubic polynomial with rational zeroes *and* rational turning points?
|
Maybe it's staring me in the face and I can't see it, but I wanted a problem for my students to be able to sketch a polynomial function, intercepts, turning points, and all. Preferably none of the zeroes are turning points, to make for a more interesting sketch.
|
$a,b,c \in \mathbb Q.$ $(x-a)(x-b)(x-c)=0$ $x^3-(a+b+c)x^2+(ab+ac+bc)x-abc=0$ $3x^2-2(a+b+c)x+(ab+ac+bc)=0$ $x=\frac{(a+b+c)\pm \sqrt{(a+b+c)^2-3(ab+ac+bc)}}{3}$ $w=a+b+c$ $z=ab+ac+bc$ $x=\frac{w \pm \sqrt{w^2-3z}}{3}$ So find $w,z \in \mathbb Q$ . $ p,q,m,n \in \mathbb N .w=p/q. z=m/n. $ $\frac{p^2}{q^2}-3\frac{m}{n}=r^2/s^2$ $ns^2p^2-3mq^2s^2=nr^2q^2$ Let $n=q=1$ $s^2p^2-3ms^2=r^2$ Let $r=ks, k \in \mathbb N$ . $p^2-3m=k^2$ $(p-k)(p+k)=3m$ Let $p=k+3, p+k=m\implies m=2k+3$ So it looks like setting a value of $k$ can be used to give you what you seek. Keep it simple, let $k=1$ . Then $p=4$ and $m=5$ $\implies w=4, z=5$ . Now find $a,b,c$ so that $a+b+c=4$ $ab+ac+bc=5$ $c=4-(a+b)$ $ab+(a+b)(4-a-b)=5$ $ab+4a-a^2-ab+4b-ab-b^2=5$ $-a^2+4a+4b-ab-b^2-5=0$ $-a^2+a(4-b)+(4b-b^2-5)=0$ $a=\frac{(b-4)\pm\sqrt{(4-b)^2+4(4b-b^2-5)}}{-2}$ $16+b^2-8b +16b-4b^2-20=-3b^2+8b+16$ $-3b^2+8b+16=j^2/l^2$ $-3b^2+8bl^2+(16l^2-j^2)=0$ $b=\frac{-8l^2\pm \sqrt{64l^4+12(16l^2-j^2)}}{-6}$ $64l^4+192l^2-12j^2=4(16
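In case a ready-made example helps: a short brute-force search along exactly these lines (choose integer roots, test whether $w^2-3z$ is a perfect square) turns up $f(x)=x(x-3)(x-8)$ , with rational roots $0,3,8$ and rational turning points $x=4/3$ and $x=6$ (indeed $f'(x)=3x^2-22x+24=3(x-\tfrac43)(x-6)$ ). The helper name below is made up for the illustration:

```python
from fractions import Fraction
from math import isqrt

def rational_crit_pts(a, b, c):
    # critical points of (x-a)(x-b)(x-c): roots of 3x^2 - 2wx + z, w = a+b+c, z = ab+ac+bc
    w, z = a + b + c, a * b + a * c + b * c
    disc = w * w - 3 * z          # positive whenever the roots are distinct
    r = isqrt(disc)
    if r * r != disc:
        return None               # irrational turning points
    return (Fraction(w - r, 3), Fraction(w + r, 3))

hits = [(a, b, c, rational_crit_pts(a, b, c))
        for a in range(0, 10) for b in range(a + 1, 20) for c in range(b + 1, 30)
        if rational_crit_pts(a, b, c) is not None]
print(hits[0])  # (0, 3, 8, (Fraction(4, 3), Fraction(6, 1)))
```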
|
|calculus|algebra-precalculus|polynomials|
| 0
|
"Coordinate-free" minors, submatrices and cofactors.
|
Let $M$ be an $n \times n$ matrix; then the $(i,j)$ -submatrix of $M$ is the $(n-1) \times (n-1)$ matrix $M_{ij}$ given by removing the $i$ -th row and the $j$ -th column from $M$ . The $(i,j)$ -minor is $m_{ij} = \det M_{ij}$ and the $(i,j)$ -cofactor is $c_{ij} = (-1)^{i + j}m_{ij}$ Is there a way to define these things for linear maps in a coordinate-free way? I'm looking for some way to, given $T \colon V \to V$ a linear map, define $S \colon W \to W$ where $W$ is a hyperplane of $V$ and such that given a basis $e = (e_1, \dots, e_n)$ of $V$ , there exists a basis $f = (f_1, \dots, f_{n-1})$ of $W$ related to $e$ in some way such that the matrix of $S$ in $f$ is a submatrix of the matrix of $T$ in $e$ . Some of you might be wondering the motivation behind it all: I am revisiting some ideas in linear algebra in a coordinate-free way, and I've stumbled into Cayley-Hamilton. The original algebraic proof uses the definition of cofactor, so I'm wondering how one would go about proving it w
|
For the minors and the submatrices, they inherently rely on the way we picked our bases for the domain and the target space, so I would say it is impossible to completely do away with the coordinates, strictly speaking, but we can make it so that the dependence on the bases isn't apparent. However, there is an interpretation of the cofactor of $M$ in a coordinate-free way, which relies on understanding multilinear algebra . Suppose that the matrix $M$ represents the linear map $T:V \to V$ with respect to the basis $\{e_1, \dots, e_n\}$ . The submatrix $M_{ij}$ of $M$ then represents the linear map $$ S_{ij} = \pi_{\hat i}\circ T \circ \mathfrak{i}_{\hat j}: W \to U, $$ where $W = \text{span}\{e_1,\dots, e_{j-1},e_{j+1},\dots, e_n\}$ , $U = \text{span}\{e_1,\dots, e_{i-1},e_{i+1},\dots, e_n\}$ . Here $\mathfrak{i}_{\hat j}: W\hookrightarrow V$ is the inclusion map, and $\pi_{\hat i}: V \to U$ is the projection map. The minors (or subdeterminants) of $M$ are the determinants of these
|
|linear-algebra|matrices|linear-transformations|
| 1
|
Nice application of dominated convergence theorem
|
Let $\delta \in \mathbb{R}$ and $$f(x)=\frac{\sin(x^2)}{x}+\frac{\delta x}{1+x}.$$ Show that $$\lim_{n\to \infty} \int_{0}^{a}f(nx)\,dx=a\delta$$ for each $a>0$ . I am unable to find an integrable bound for the sequence of functions $\{f(nx)\}$ , and I want to apply the Dominated Convergence Theorem.
|
$|f(t)|\le t+|\delta| t\le (1+|\delta|)a$ if $0 \le t \le a$ and $|f(t)|\le \frac 1 t+|\delta|\leq \frac 1 a+|\delta|$ if $t >a$ . So $|f(t)|\le \max \{(1+|\delta|)a, \frac 1 a+|\delta|\}$ for all $t>0$ . The constant function $g(t)=\max \{(1+|\delta|)a, \frac 1 a+|\delta|\}$ is a dominating integrable function for $f(nx)$ on $[0,a]$ .
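Numerically, with assumed sample values $\delta=0.7$ , $a=2$ (not from the problem statement), the integrals do approach $a\delta=1.4$ . Both pieces have closed forms: substituting $u=nx$ and then $v=u^2$ gives $\int_0^a \frac{\sin((nx)^2)}{nx}\,dx = \frac{1}{2n}\,\mathrm{Si}((na)^2)$ , while $\int_0^a \frac{\delta nx}{1+nx}\,dx = \delta\bigl(a - \frac{\ln(1+na)}{n}\bigr)$ .

```python
import numpy as np
from scipy.special import sici

delta, a = 0.7, 2.0  # sample values for illustration

def integral(n):
    # integral of f(nx) over [0, a], split into its two closed-form pieces
    si, _ = sici((n * a) ** 2)   # sici returns (Si, Ci)
    return si / (2 * n) + delta * (a - np.log(1 + n * a) / n)

for n in (10, 100, 1000):
    print(n, integral(n))        # tends to a * delta = 1.4
```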
|
|integration|measure-theory|lebesgue-measure|measurable-functions|
| 1
|
cohomological base change isomorphism hold on a non empty open subset
|
I was reading Lazarsfeld's Positivity in Algebraic Geometry II. On page 312, there is a remark saying that if $f: V \longrightarrow W$ is any proper morphism of varieties, with $W$ reduced, and if $\mathcal{F}$ is any coherent sheaf on $V$ , then there is a non-empty Zariski-open subset $U \subset W$ such that the natural maps $$ R^j f_* \mathcal{F} \otimes \mathbf{C}(t) \longrightarrow H^j\left(V_t, \mathcal{F}_t\right) $$ are isomorphisms for all $j>0$ and $t \in U$ . Proof. In fact, thanks to Grauert's direct image theorem it suffices to choose $U$ such that $\mathcal{F}$ is flat over $U$ and all the direct images $R^j f_* \mathcal{F}$ are locally free on $U$ . I know that since $f$ is proper there exists a dense Zariski-open subset $U$ such that $\mathcal{F}|_{f^{-1}(U)}$ is flat over $U$ , so the flatness of $\mathcal{F}$ is guaranteed; on the other hand, since $W$ is reduced, the normal locus is dense, and a coherent sheaf is locally free off a codimension-$2$ subset of a normal scheme, thus we can guarantee
|
You only have to do it for finitely many $j$ by Grothendieck vanishing: if $j>\dim V$ , then $R^jf_*\mathcal{F}=0$ as $H^j(U,\mathcal{F}|_U)=0$ for all open $U$ . Now you can take an intersection of finitely many dense open sets.
|
|algebraic-geometry|
| 1
|
Let $I(x) := \int_{B(x,R)} \frac{f(y)}{|x-y|^{n-1}} dm(y)$. Then there exists $C = C(n)$ such that $|I(x)| \leq CR \cdot Mf(x)$
|
Let $\lambda_n$ be the $n$ -dimensional Lebesgue measure, and let $f\in L^1_{loc}(\mathbb R^n)$ and $Mf$ be the maximal function of $f$ . For fixed $R>0$ and $x \in \mathbb R^n$ we define $$ I(x)= \int_{B(x,R)}\frac{ f(y)}{|x-y|^{n-1}} d\lambda_n(y).$$ Prove that $$ |I(x)|\leq CR \cdot (Mf)(x)$$ for each $x\in \mathbb R^n$ , where $C=C(n)>0$ is a constant only depending on $n$ and where $Mf$ is the Hardy-Littlewood maximal function of $f$ . My first step was to express $\int_{B(x,R)} \frac{|f(y)|}{|x-y|^{n-1}} d\lambda_n(y)$ as a double integral and use Fubini-Tonelli to change the order of integration. However, I was not able to get further. I know that $\lambda_n(\{Mf > \alpha \}) \leq \frac{5^n}{\alpha} ||f||_1$ and that $||Mf||_p \leq C ||f||_p$ where $C > 0$ depends on $n$ and $p$ and where $1 < p \le \infty$ , but I don't see how I can apply these results for the above problem. Also, I just wanted to ask if the order of inequality we are trying to prove ( $I(x)= \int_{B(x,R)}\frac{ f(y
|
There are two useful principles we can use here. If you're trying to bound a quantity by the maximal function of $f$ , you should try to bound your quantity by averages of $f$ , since the maximal function is just a supremum over averages. When dealing with integrals with singularities, a reasonable strategy is to break up the integral into pieces so you can leverage different behavior of your integrand near and away from the singularity. In this case, the singularity is at $y=x$ . To apply principle 2, let's consider breaking up the domain of integration by $$ B(x,R) = \{x\}\cup\left(\bigcup_{m\geq 0} A_m(x,R)\right) $$ where $A_m(x,R) = \{y\in\mathbb{R}^n: 2^{-(m+1)}R \leq |x-y| \leq 2^{-m}R\}$ . For $y\in A_m(x,R)$ , we have $$ |x-y|^{-(n-1)} \leq \left(\frac{2^{m+1}}{R}\right)^{n-1}. $$ Thus on each annulus we have $$ \int_{A_m(x,R)} \frac{|f(y)|}{|x-y|^{n-1}}dy \leq \int_{A_m(x,R)} |f(y)|\left(\frac{2^{m+1}}{R}\right)^{n-1}dy. $$ We can now bound this by expanding the region of int
|
|real-analysis|measure-theory|lebesgue-integral|fubini-tonelli-theorems|
| 1
|
double integral convert from cartesian to polar
|
I have this question but I'm stuck on the integral - usually the r gets cancelled out but this time they're both r^2? I can't tell what I'm not seeing or doing wrong.
|
You've done the transformation correctly: the resulting integral is $$I = \int_{\theta=-\pi/4}^{\pi/3} \cos \theta \, d\theta \int_{r=1}^2 r^2 \sin r^2 \, dr.$$ Although the integral with respect to $\theta$ is simply $(\sqrt{2} + \sqrt{3})/2$ , the integral with respect to $r$ does not have an elementary closed form. I suspect the author of the problem failed to realize that the Jacobian of the transformation introduces another factor of $r$ . The antiderivative can be expressed in terms of Fresnel integrals but this is not useful. The approximate numerical value is $$I \approx 1.1047531349442256551\ldots.$$
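Both one-dimensional factors are easy to check numerically (scipy used here purely as an illustration):

```python
import numpy as np
from scipy.integrate import quad

theta_part, _ = quad(np.cos, -np.pi / 4, np.pi / 3)
r_part, _ = quad(lambda r: r ** 2 * np.sin(r ** 2), 1, 2)
I = theta_part * r_part
print(theta_part)  # (sqrt(2) + sqrt(3)) / 2, about 1.5731
print(I)           # about 1.1047531349
```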
|
|polar-coordinates|multiple-integral|
| 0
|
Parametrization of rational points on the hyperboloid $z^2=-5x^2+4y^2+9$
|
I am trying to find a rational parametrization of the rational points on the hyperboloid $z^2=-5x^2+4y^2+9$ . I found many answers about parametrization of simple hyperbolic curves, but I cannot find the case of hyperboloids. I've tried some substitutions like $\sqrt{5}x=\sqrt{4y^2+9}\sin\theta,\ z=\sqrt{4y^2+9}\cos\theta$ but failed. Can anyone help or recommend related materials?
|
$$x={3\cos\theta\sec\alpha\over\sqrt{5}}\\y={3\cos\theta\tan\alpha\over2}\\z=3\sin\theta$$ This is the surface's parametrisation; for rational points, unfortunately, you can't use the parametrised Pythagorean triples $(m^2-n^2,2mn,m^2+n^2)$ to find the ratios, because of that $\sqrt{5}$ in $x$ ; you'll need to hand-pick ratios to make the coordinates rational. If that irrational factor were not there, you could take four rational numbers and use the above parametrisation of Pythagorean triples to make two right-angled triangles with rational sides, which would have rational trig ratios for $\theta,\alpha$ .
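One can confirm symbolically that this parametrisation satisfies the equation of the hyperboloid (a SymPy check, not part of the original answer):

```python
import sympy as sp

theta, alpha = sp.symbols('theta alpha')
x = 3 * sp.cos(theta) * sp.sec(alpha) / sp.sqrt(5)
y = sp.Rational(3, 2) * sp.cos(theta) * sp.tan(alpha)
z = 3 * sp.sin(theta)
# z^2 - (-5x^2 + 4y^2 + 9) reduces to 9sin^2 + 9cos^2(sec^2 - tan^2)... - 9 = 0
print(sp.simplify(z ** 2 - (-5 * x ** 2 + 4 * y ** 2 + 9)))  # 0
```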
|
|number-theory|
| 1
|
Derivative of Discrete Mutual Information
|
Given a fixed channel $p(y|x)$ , and denoting the discrete input pmf $p(x)$ , and the corresponding output pmf by $p(y)$ , prove that: $$ \frac{\partial I(X;Y)}{\partial p(x)} = I(X=x;Y) - \log{e} $$ where, $$ I(X=x;Y) = \sum_{y} p(y|x) \log{\left(\frac{p(y|x)}{p(y)}\right)} $$ Note $^*$ : the $\log$ is base 2 here. Solution (thanks to @stochasticboy321): We can write $$ I(X;Y) = \sum_{i}p(x_i)I(X=x_i;Y) \Longrightarrow \frac{\partial I(X;Y)}{\partial p(x_j)} = I(X=x_j;Y) + \sum_{i} p(x_i)\color{red}{\frac{\partial I(X=x_i;Y)}{\partial p(x_j)}}. $$ But, $$ \begin{aligned} \color{red}{\frac{\partial I(X=x_i;Y)}{\partial p(x_j)}} &= \frac{\partial }{\partial p(x_j)} \sum_{k} p(y_k|x_i) \log{\left(\frac{p(y_k|x_i)}{p(y_k)}\right)}\\ &= \frac{\partial }{\partial p(x_j)}\left(\sum_{k} p(y_k|x_i) \log{(p(y_k|x_i)) - \sum_{k} p(y_k|x_i)\log [p(y_k)]}\right) \\ &= -\frac{\partial }{\partial p(x_j)} \sum_{k} p(y_k|x_i)\log [p(y_k)] \\ &= - \sum_{k} p(y_k|x_i)\color{blue}{\frac{\partial \log [p(
|
There are quite a few calculation errors in your work, e.g., $$ \partial_{p(x_j)} \log p(y) = \log e\frac{1}{p(y)} \partial_{p(x_j)} \sum_{i} p(y|x_i) p(x_i) = \log e\frac{1}{p(y)} \cdot p(y|x_j),$$ and the other $i \neq j$ terms don't enter since neither $p(y|x_i)$ nor $p(x_i)$ directly depend on $p(x_j)$ . Also, when computing $\partial_{p(x_j)} I,$ the second term should also sum over $i = j$ since the $-\log p(y)$ term in $I(X = x_j;y)$ varies with $p(x_j)$ . I'll just start from scratch because I'm getting lost in the notation in the question. Clearly, $$ \partial_{p(x)} I(X;Y) = I(X = x;Y) + \sum_{x'} p(x') \partial_{p(x)} I(X = x';Y).$$ Now, $$I(X = x';Y) = \sum_y p(y|x') \log p(y|x') - \sum_y p(y|x') \log p(y)\\ = f(p(y|x')) - \sum_y p(y|x') \log p(y), $$ and so, $$ \partial_{p(x)} I(X = x';Y) = - \log e \sum_y \frac{p(y|x')}{p(y)}\partial_{p(x)} p(y) = -\log e \sum_y \frac{p(y|x')p(y|x)}{p(y)}.$$ This in turn yields $$\sum_{x'} p(x') \partial_{p(x)} I(X = x';Y) = - \log e \sum
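A finite-difference check of the final formula on a random channel. As in the derivation, the $p(x_i)$ are treated as free variables, so the perturbation below deliberately does not renormalise the input pmf (the dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((3, 4)); W /= W.sum(axis=1, keepdims=True)   # channel p(y|x)
p = rng.random(3); p /= p.sum()                             # input pmf

def mutual_info(p):
    q = p @ W                                               # output pmf p(y)
    return float(np.sum(p[:, None] * W * np.log2(W / q)))

def I_x(j, p):
    q = p @ W
    return float(np.sum(W[j] * np.log2(W[j] / q)))

j, eps = 1, 1e-6
p_pert = p.copy(); p_pert[j] += eps                         # unconstrained perturbation
fd = (mutual_info(p_pert) - mutual_info(p)) / eps
print(fd, I_x(j, p) - np.log2(np.e))                        # the two values agree
```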
|
|multivariable-calculus|information-theory|mutual-information|
| 0
|
Dual basis problem on the space of polynomials $\mathbb{R}_{1}[x]$
|
Problem goes as follows: Let $\mathbb{R}_{1}[x]$ be the linear space of polynomials of degree $\leq 1$ . Define the covectors $f_{1}, f_{2} \in (\mathbb{R}_{1}[x])^{*}$ (dual space), as: $$ f_{1}(p(x)) = \int_{0}^{1} p(x) \,dx $$ $$ f_{2}(p(x)) = \int_{0}^{2} p(x) \,dx $$ . Determine the basis of $\mathbb{R}_{1}[x]$ whose dual basis is $\{ f_{1}, f_{2} \}$ . My solution: Let $\{v_{1}, v_{2}\}$ be the basis dual to $\{ f_{1}, f_{2} \}$ . Given the standard basis of $\mathbb{R}_{1}[x]$ , $\{ e_{1} = 1, e_{2} = x \}$ , I can form a matrix $f_{i}(e_{j})$ , evaluating, I determine it to be: $$ \begin{pmatrix} 1 & \dfrac{1}{2} \\ 1 & 2 \end{pmatrix} $$ The inverse of this matrix is $$ \begin{pmatrix} \dfrac{4}{3} & -\dfrac{1}{3} \\ -\dfrac{2}{3} & \dfrac{2}{3} \end{pmatrix} $$ So the columns of this matrix are the coefficients of the polynomials I looking for. Hence, $v_{1} = \dfrac{4}{3} - \dfrac{2}{3}x, v_{2} = -\dfrac{1}{3} + \dfrac{2}{3}x$ Is this correct?
|
You have that $f_1(a+bx)=a+\tfrac1{2}b$ and $f_2(a+bx)=2(a+b)$ , therefore $f_1(a+bx)=1$ and $f_2(a+bx)=0$ if and only if $a=2$ and $b=-2$ . On the other hand, $f_1(a+bx)=0$ and $f_2(a+bx)=1$ if and only if $b=1$ and $a=-\tfrac1{2}$ , so the dual basis is $\{2-2x,-\tfrac1{2}+x\}$ .
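Verifying $\{2-2x,\ -\tfrac12+x\}$ directly against both functionals (a quick SymPy check):

```python
import sympy as sp

x = sp.symbols('x')
v1, v2 = 2 - 2 * x, sp.Rational(-1, 2) + x
f1 = lambda p: sp.integrate(p, (x, 0, 1))   # f1(p) = integral of p over [0, 1]
f2 = lambda p: sp.integrate(p, (x, 0, 2))   # f2(p) = integral of p over [0, 2]
print([f1(v1), f2(v1), f1(v2), f2(v2)])     # [1, 0, 0, 1]
```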
|
|linear-algebra|solution-verification|linear-transformations|dual-spaces|
| 1
|
Help proving a chain rule from total derivative chain rule
|
Consider $f:\mathbb{R}^d\to\mathbb{R}$ and $g:\mathbb{R}\to\mathbb{R}^d$ . It is known that $$ \tag{*} (f\circ g)'(x) = \sum_{i=1}^d \partial_i f(g(x)) \cdot g'(x)^i $$ I would like to prove this from the chain rule for the total derivatives: $$ \tag{**} D_x(f\circ g) = D_{g(x)}f \circ D_xg $$ I'm not sure how to rigorously proceed here. Intuitively I know the two total differentials in the total derivative chain rule can be represented as matrices and their composition will correspond to the multiplication in the desired partial derivative chain rule. But I'm not sure how to get there. How does the function composition get converted into a summation + multiplication? Another related issue. The expression $(*)$ is a real number when evaluated at $x$ . The expression $(**)$ is a linear map from $\mathbb{R}\to\mathbb{R}$ . It's not too difficult to understand that the linear maps $\mathbb{R}\to\mathbb{R}$ (the dual space of $\mathbb{R}$ ) are isomorphic to $\mathbb{R}$ itself. But still,
|
$$ \newcommand{\R}{\mathbb{R}} \newcommand{\bv}[1]{\boldsymbol{#1}} $$ DISCLAIMER! My other answer is more algebraically motivated and clearer in my opinion. This one churns through the algebra and gets the right answer but turns to components too early in the treatment. See my other answer. Resolving the main confusion The insight from Nicholos Todoroff in the other answer addresses one of my confusions. The key is that, while $f'(x)$ is a real number, $D_xf$ is a map from $\mathbb{R}\to \mathbb{R}$ , in some sense it is a dual vector to the real numbers. It could be represented as a $1\times1$ matrix. However, there is a correspondence between these two objects: $$ \tag{1} D_xf(h) = f'(x)\cdot h, $$ where $h\in \R$ and $\cdot$ is regular $\R$ multiplication. In the OP we seek an expression for $(f\circ g)'(x)$ derived using $D_x(f\circ g)$ . Using $(1)$ , we will, therefore, proceed by calculating $D_x(f\circ g)(h)$ . The Algebra However, we will proceed more generally than requested
|
|linear-algebra|multivariable-calculus|partial-derivative|
| 0
|
$Ker \phi \xrightarrow{\phi'}A \xrightarrow{\phi}B \xrightarrow{\phi''}Coker \phi$ How are the maps $\phi'$ and $\phi''$ defined?
|
Consider in some category $Ker \phi \xrightarrow{\phi'}A \xrightarrow{\phi}B \xrightarrow{\phi''}Coker \phi$ How are the maps $\phi'$ and $\phi''$ defined? Does it depend on the category? I mean what are $\phi'(x)=?$ and $\phi''(x) =?$ I am guessing the first one is the inclusion map? I am hoping to check knowing this that $\phi\circ \phi'=0$ and $\phi''\circ\phi=0$ which brings me to ask also how to check the second one?
|
If you're not talking about category theory, but rather about abelian groups, or rings, or similar, the definition of these maps are canonical: $\phi'$ is the inclusion map, and $\phi''$ is the quotient map, end of story. In category theory, however, these maps aren't defined that way at all: Given a morphism $\phi: A\to B$ , a kernel of $\phi$ is an object $\ker \phi$ together with a morphism $\phi':\ker\phi\to A$ such that $\phi\circ\phi'$ is the zero morphism, required to fulfill the following: For any other object $C$ and morphism $\varphi:C\to A$ such that $\phi\circ\varphi$ is the zero morphism, there is a unique morphism $\varphi':C\to \ker \phi$ such that $\varphi = \phi'\circ\varphi'$ Note that in true category fashion, there are no details whatsoever about what $\ker \phi$ and $\phi'$ actually are. Which is to say, any object that is isomorphic to $\ker \phi$ is also a kernel, with its own version of $\phi'$ . To give an example, in group theory, with the map $\phi = \Bbb Z\t
|
|category-theory|
| 1
|
Are functions of two independent random variables also independent?
|
We have two independent random variables: 1. r ~ U(0,1) 2. θ ~ U(0,2π) Also we have X = r.cosθ and Y = r.sinθ. Are X and Y independent?
|
A simple argument for 'no' is: $P(X > 2^{-\frac{1}{2}}) = P(Y > 2^{-\frac{1}{2}}) > 0$ , but $P(X > 2^{-\frac{1}{2}} \mbox{ and } Y > 2^{-\frac{1}{2}}) = 0$ .
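A quick Monte Carlo illustration of this argument (a sketch I added, not part of the answer): estimate the two marginal probabilities and the joint one.

```python
import math, random

random.seed(0)
N = 100_000
c = 2 ** -0.5          # the threshold 1/sqrt(2) from the answer
px = py = pboth = 0
for _ in range(N):
    r = random.random()                  # r ~ U(0, 1)
    th = random.uniform(0, 2 * math.pi)  # theta ~ U(0, 2*pi)
    x, y = r * math.cos(th), r * math.sin(th)
    px += x > c
    py += y > c
    pboth += (x > c) and (y > c)

# x > c and y > c together would force x^2 + y^2 > 1, but x^2 + y^2 = r^2 < 1,
# so the joint event is impossible while each marginal event has positive probability
print(px / N, py / N, pboth / N)
```

The joint count is exactly zero in every run, not just approximately, because the constraint $x^2+y^2=r^2<1$ is deterministic.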
|
|random-variables|independence|
| 0
|
Are functions of two independent random variables also independent?
|
We have two independent random variables: 1. r ~ U(0,1) 2. θ ~ U(0,2π) Also we have X = r.cosθ and Y = r.sinθ. Are X and Y independent?
|
No. If you know for example that $X \ge 0.8$ , then you now also know that $|Y| \le 0.6$ , because $X^2+Y^2=r^2 \le 1$ . That means you get information about $Y$ from $X$ that you didn't have before (only $|Y| \le 1$ is known from the original formula).
|
|random-variables|independence|
| 0
|
How do we know that common rearrangement proofs of the Pythagorean theorem work for any right triangle?
|
I’m a little bit puzzled by geometrical proofs, like the common algebraic proof for the Pythagorean theorem listed Wikipedia's "Pythagorean theorem" entry . I understand the idea of arranging the right triangles and the area $c^2$ in a neat way to form another square and writing the area of the new square in different terms and going from there. But I’m a little confused on how you actually know that the right triangle you draw isn’t just the one special right triangle with which you can actually form such a square. How do we know this way of rearranging the pieces works for any other right triangle?
|
But I’m a little confused on how you actually know that the right triangle you draw isn’t just the one special right triangle with which you can actually form such a square. If I am not mistaken, this is the figure you are referring to. Ask yourself, what is it about this right triangle that is not applicable to every right triangle possible? We are not making any unwarranted assumptions about its side lengths or angles, right? We are not saying something like - one side has a length $4$ units or one acute angle measures $40^{\circ}$ . So undoubtedly, this isn't "just one special right triangle". As for being able to rearrange the triangles into a square, again, imagine any right triangle. Can you make four copies of it? Of course, you can. Can you arrange the four triangles as shown in the figures below? sure, why not? Perhaps you are not sure if the corners would make right angles. Well, they have to. If you observe any corner made by the triangles, it's either the right angle itself
|
|geometry|
| 0
|
How do we know that common rearrangement proofs of the Pythagorean theorem work for any right triangle?
|
I’m a little bit puzzled by geometrical proofs, like the common algebraic proof for the Pythagorean theorem listed Wikipedia's "Pythagorean theorem" entry . I understand the idea of arranging the right triangles and the area $c^2$ in a neat way to form another square and writing the area of the new square in different terms and going from there. But I’m a little confused on how you actually know that the right triangle you draw isn’t just the one special right triangle with which you can actually form such a square. How do we know this way of rearranging the pieces works for any other right triangle?
|
Consider a right triangle with legs of arbitrary length $a,b$ . Let $c$ denote the length of the hypotenuse. Without loss of generality, suppose $b \geq a$ . Then orient this triangle such that its vertices are $V_1 = \{(-a,0), (0,0), (0,b)\}$ . Note that a second triangle with vertices $V_2 = \{ (-a,0),(b-a,0),(b-a,-a)\}$ would be congruent (as can be seen by checking the difference in corresponding coordinates). Similarly, $V_3 = \{ (b-a,-a),(b-a,b-a),(b,b-a)\}$ and $V_4 = \{ (b,b-a),(0,b-a),(0,b)\}$ are also congruent. This arrangement of the four congruent triangles ensures that the square of side length $b-a$ with vertices $S_{int}=\{(0,0),(b-a,0),(b-a,b-a),(0,b-a)\}$ , which degenerates to a point in the case that $b=a$ , does not overlap with any triangle. Moreover, all four triangles fall within the square of side length $c$ with vertices $S_{ext}=\{(-a,0),(b-a,-a),(b,b-a),(0,b)\}$ . You can look at the coordinates of the vertices to convince yourself of this. Critically, this
|
|geometry|
| 1
|
How do you find the multivariable limit $\lim_{(x,y)\to(0,0)}\frac{xy}{\sqrt x +\sqrt y }$
|
$\lim_{(x,y)\to(0,0)}\frac{xy}{\sqrt x+\sqrt y}$ considering domain $\{(x,y) \in \mathbb{R}^2 : x,y \ge 0, (x,y) \ne 0\}$ I tried using polar coordinates, but the theta function is unbounded. I also tried using the sandwich theorem, but could not find appropriate bounds. How do I approach this question
|
As "geetha290krm" said: For $x>0,y>0$ , you can rewrite as below $$\frac{xy}{\sqrt x +\sqrt y } =\frac 12 \frac{2xy}{\sqrt x +\sqrt y }\\0 \le\frac 12(\frac{xy}{\sqrt x +\sqrt y } +\frac{xy}{\sqrt x +\sqrt y } )=\frac 12(\frac{y\sqrt{x^2}}{\sqrt x +\sqrt y } +\frac{x\sqrt{y^2}}{\sqrt x +\sqrt y })\le \\\frac 12(\frac{y\sqrt{x^2}}{\sqrt x } +\frac{x\sqrt{y^2}}{\sqrt y })=\frac 12 (y\sqrt x+x \sqrt y)\\=\frac 12 \sqrt {xy}(\sqrt y+\sqrt x) $$ and it works for every path near $(0,0)$ .
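To see the squeeze in action, here is a small numeric check (my addition, not part of the answer): the bound $0 \le \frac{xy}{\sqrt x+\sqrt y} \le \frac12\sqrt{xy}(\sqrt x+\sqrt y)$ holds at random sample points, and the function shrinks as $(x,y)\to(0,0)$ along the diagonal.

```python
import random

random.seed(1)

def f(x, y):
    return x * y / (x ** 0.5 + y ** 0.5)

def bound(x, y):
    # the answer's upper bound: (1/2) * sqrt(xy) * (sqrt(x) + sqrt(y))
    return 0.5 * (x * y) ** 0.5 * (x ** 0.5 + y ** 0.5)

# the squeeze 0 <= f <= bound holds at random sample points
for _ in range(1000):
    x, y = random.uniform(1e-12, 1.0), random.uniform(1e-12, 1.0)
    assert 0 <= f(x, y) <= bound(x, y)

# and f shrinks to 0 along the diagonal x = y = 10**-k
vals = [f(10.0 ** -k, 10.0 ** -k) for k in range(1, 8)]
print(vals)
```

Along the diagonal $f(t,t)=t^{3/2}/2$, so the printed values decay geometrically, consistent with the limit being $0$.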
|
|limits|multivariable-calculus|
| 1
|
Find $L_1$ and $L_2$ (formal languages)
|
We have two languages $L_1, L_2 \subseteq \{{a, b}\}^{*}$ . According to the following formulas find $L_1$ and $L_2$ : $L_1 = \{\lambda\} \cup \{a\}.L_1 \cup \{b\}.L_2 $ $L_2 = \{\lambda\} \cup \{b\}.L_2 $ I am trying to solve this question; extra explanation would be great. Thank you in advance for your help. Edit 1: Here's what I think so far: About $L_2$ , I can say that $L_2$ contains the empty string and every nonempty word in $L_2$ starts with the letter $b$ . About $L_1$ , I can say that $L_1$ also contains the empty string, and every nonempty word in $L_1$ starts with the letter $a$ followed by whatever we had in $L_1$ , or the letter $b$ followed by whatever we had in $L_2$ . Is the following correct? $L_1 = \{a, b\}^*$ $L_2 = \{b\}.\{a, b\}^*$
|
Let $1$ be the empty word and let $+$ denote union. Your system of equation becomes \begin{align} L_1 &= 1 + aL_1 + bL_2 \\ L_2 &= 1 + bL_2 \end{align} All you need to know is Arden's rule , which states that if $K$ and $L$ are languages such that $1 \notin K$ , then $K^*L$ is the unique solution to the equation $X = KX + L$ . Applying Arden's rule to the second equation, one gets $L_2 = b^*1 = b^*$ . Carrying this over to the first equation, one obtains $L_1 = 1 + aL_1 + bb^* = aL_1 + b^*$ , whence $L_1 = a^*b^*$ by Arden's rule.
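Arden's-rule solutions can be checked by brute force on all words up to a fixed length (a verification sketch I added, not part of the answer; both sides of each equation are truncated at length $K$):

```python
from itertools import product

K = 7  # compare both sides of each equation on words of length <= K

def words(max_len):
    for n in range(max_len + 1):
        for w in product("ab", repeat=n):
            yield "".join(w)

# claimed solutions: L1 = a*b* (no "ba" factor), L2 = b*
L1 = {w for w in words(K) if "ba" not in w}
L2 = {w for w in words(K) if set(w) <= {"b"}}

def concat(A, B, max_len):
    return {u + v for u in A for v in B if len(u + v) <= max_len}

# L2 = {lambda} + b.L2  and  L1 = {lambda} + a.L1 + b.L2, truncated at length K
assert L2 == {""} | concat({"b"}, L2, K)
assert L1 == {""} | concat({"a"}, L1, K) | concat({"b"}, L2, K)
```

Since both languages are regular and the equations are length-preserving on the right-hand side, checking them on all words up to length $K$ gives good evidence (though of course not a proof) that $L_1=a^*b^*$ and $L_2=b^*$ solve the system.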
|
|formal-languages|automata|regular-language|
| 1
|
Kummer's Lemma and $1+\zeta$
|
In lecture we were told to think about the following: Kummer's Lemma: Let $p$ be an odd prime and let $\zeta := e^{2\pi i / p}$ . Every unit of $\mathbb{Z}[\zeta]$ is of the form $r\zeta^g$ , where $r$ is real and $g$ is an integer. This theorem says that the units of $\mathbb{Z}[\zeta]$ can be thought of as the points of the lines that pass through the vertices of an equally spaced $p$ -gon centered at the origin. (Said vertices are of the form $\zeta^g$ .) However, we know that $1+\zeta$ is a unit . But this number does not lie on one of those lines. What is wrong here? Edit: This question seems to originate from Stewart & Tall's "Algebraic Number Theory and Fermat's Last Theorem" (4th edition), where it appears as Ex. 11.6 (p. 199).
|
Geometrically: Consider the regular $p$ -gon with its center at the origin $O$ and with $A = (0,1)$ , and the other points $B_n$ with coordinates corresponding to $\zeta^n$ . The lines going through these points and $O$ make angles $\tau n/p$ with the positive $x$ -axis, and angles $\tau/2 - \tau n/p$ with the negative $x$ -axis. Because $p$ is odd, the two sets of angles together are the angles $\tau n/2p$ . For each of the points $B_n$ , the location of the complex point $1+\zeta^n$ can be found by constructing a rhombus using $O$ , $A$ , and $B_n$ as three vertices. Call this new point $C_n$ . (Think the standard picture of vector addition.) Now $\angle AOB_n$ has a measure equal to $\frac{\tau n}{p}$ , meaning $\angle OAC_n$ has half that measure (by the properties of a rhombus). But we've seen that each of the angles $\tau n/2p$ is made by one of the lines passing through the origin and one of the points $B_n$ . Therefore $C_n$ must lie on a line passing through $O$ and $B_m$ for
|
|number-theory|roots-of-unity|cyclotomic-fields|
| 0
|
Subbase for a topology
|
Let $(X,\tau)$ be a topological space. A collection $\mathcal{S}$ of its open subsets is a subbase for $\tau$ provided that the collection $$\mathcal{B}=\{V\mid V=\cap_{i=1}^k W_i, k\in \mathbb{N}, W_i\in \mathcal{S}\} $$ is a base for $\tau$ . How can I show the following: Let for any set $X$ , $\mathcal{S}$ be a collection of its subsets. Prove that $\mathcal{S}$ is a subbase for a topology in $X$ if and only if $X=\cup_{W\in \mathcal{S}}W$ . I have proved one side that if union of members of $\mathcal{S}$ is $X$ , then $\mathcal{S}$ is a subbase. I am not able to prove the converse.
|
The base $\cal B$ is the set of intersections of finite families of members of $\cal S$ . It contains in particular the intersection of an empty family, which by convention, is equal to $X$ . Thus $X$ belongs to $\cal B$ . Let me insist that you need to use this convention. Defining $\cal B$ as the set of intersections of nonempty finite families of members of $\cal S$ does not work: if $\cal S$ is the empty collection, then so is $\cal B$ and $\cal B$ is not a base.
|
|general-topology|
| 1
|
Does $L^2(\mathbb{R})$ have an idempotent under the convolution operation?
|
One can easily prove that there does not exist any nonzero $f\in L^1(\mathbb{R})$ such that $f*f=f$ holds, due to the Riemann-Lebesgue Lemma. But what about the case when $f\in L^2(\mathbb{R})$ ? Is there a counterexample, or does any such function fail to exist?
|
Assume $f\in L^2(\Bbb R^d)$ is a solution to $f*f = f$ . Then $g:=\mathcal F(f)\in L^2$ by Plancherel and in the sense of distributions (see the Lemma below), $$ g^2 = g $$ Since $g\in L^2$ , one deduces that the above identity holds a.e. and so $g$ is an indicator function. Since $g\in L^2$ , it is the indicator function of a set of finite measure . Conversely, if $g$ is the indicator function of a set of finite measure, then $g\in L^1$ and $g^2\in L^1$ and $g^2=g$ , so taking the Fourier transform in the classical sense, $f*f=f$ . Lemma: if $f\in L^2$ , then in the sense of distributions, $\mathcal F(f*f) = \mathcal F(f)^2$ . Proof: let $f_n$ be a sequence of $C^\infty_c$ functions converging to $f$ in $L^2$ . Then $\mathcal F(f_n)^2\to \mathcal F(f)^2$ in $L^1$ (because $a^2-b^2=(a-b)(a+b)$ and by Cauchy-Schwarz), and so in the sense of (tempered) distributions. Moreover, $f_n*f_n$ converges to $f*f$ in $L^\infty$ by Young's inequality (write $f_n*f_n-f*f = f_n*(f_n-f) + (f_n-f)*f$
|
|fourier-analysis|fourier-transform|
| 1
|
Compute $ \int_{0}^{1}\frac{\ln(x) \ln^2 (1-x)}{x} dx $
|
Compute $$ \int_{0}^{1}\frac{\ln(x) \ln^2 (1-x)}{x} dx $$ I'm looking for some nice proofs at this problem. One idea would be to use Taylor expansion and then integrating term by term. What else can we do? Thanks.
|
\begin{align}J&=\int_{0}^{1}\frac{\ln(x) \ln^2 (1-x)}{x} dx\\ &\overset{\text{IBP}}=\int_0^1\frac{\ln^2 x\ln(1-x)}{1-x}dx\\ &\overset{\text{IBP}}=-\left[\left(\int_0^x\frac{\ln^2 t}{1-t}dt-\int_0^1\frac{\ln^2 t}{1-t}dt\right)\ln(1-x)\right]_0^1+\\&\int_0^1 \frac{1}{1-x}\left(\int_0^1 \frac{x\ln^2(tx)}{1-tx}dt-\int_0^1\frac{\ln^2 t}{1-t}dt\right)dx\\ &=\int_0^1\int_0^1 \left(\frac{\ln(tx)^2}{(1-t)(1-x)}-\frac{\ln(tx)^2}{(1-t)(1-tx)}-\frac{\ln^2 t}{(1-t)(1-x)}\right)dtdx\\ &\overset{\text{Fubini}}=2\zeta(2)^2+\int_0^1 \int_0^1\left(\frac{\ln^2 x}{(1-t)(1-x)}-\frac{\ln^2(tx)}{(1-t)(1-tx)}\right)dtdx\\ &=2\zeta(2)^2+\int_0^1\left(\frac{1}{1-t}\left(\int_t^1\frac{\ln^2 x}{1-x}dx\right)-\frac{1}{t}\left(\int_0^t\frac{\ln^2 x}{1-x}dx\right)\right)dt\\ &\overset{\text{IBP}}=2\zeta(2)^2-J-6\zeta(4)\\ J&=\boxed{\zeta(2)^2-3\zeta(4)=-\frac{\pi^4}{180}} \end{align}
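The closed form can be double-checked numerically (my addition, not part of the answer). Expanding $\ln^2(1-x)=\sum_{n\ge2}\frac{2H_{n-1}}{n}x^n$ and using $\int_0^1 x^{n-1}\ln x\,dx=-\frac1{n^2}$ gives $J=-\sum_{n\ge2}\frac{2H_{n-1}}{n^3}$, which the snippet sums and compares with $-\pi^4/180$:

```python
import math

# ln^2(1-x) = sum_{n>=2} (2*H_{n-1}/n) x^n and int_0^1 x^(n-1) ln x dx = -1/n^2
# together give J = -sum_{n>=2} 2*H_{n-1} / n^3
N = 200_000
J, H = 0.0, 0.0            # H accumulates the harmonic number H_{n-1}
for n in range(2, N):
    H += 1.0 / (n - 1)
    J -= 2.0 * H / n ** 3

closed_form = -math.pi ** 4 / 180
print(J, closed_form)       # both ≈ -0.5411616
```

The tail of the series is of order $\ln N/N^2$, so with $N=2\cdot10^5$ the partial sum agrees with $-\pi^4/180$ to about nine decimal places.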
|
|calculus|real-analysis|integration|definite-integrals|
| 0
|
Prove $(\Re{z_1})^2+(\Re{z_2})^2+(\Re{z_3})^2 < \frac{3}{2}$ for solutions of $4z^3-4z^2+12z-1=0$
|
The statement of the problem : Let $z_1, z_2, z_3$ be the solutions of the equation $$4z^3-4z^2+12z-1=0$$ Prove that: $$(\Re{z_1})^2+(\Re{z_2})^2+(\Re{z_3})^2 < \frac{3}{2}$$ ( $\Re x$ is the real part of the complex number $x$ ) My approach : We consider the function $f(x)=4x^3-4x^2+12x-1$ and we prove that it is strictly increasing on $\mathbb R$ . Let $x,y \in \mathbb R $ with $x>y$ . Now we prove that $f(x)-f(y)>0$ : $$f(x)-f(y) = 4x^3-4y^3 - (4x^2-4y^2) + 12x - 12y -1 + 1 =$$ $$=(x-y)(4x^2+4y^2+4xy-4x-4y+12)=$$ $$(x-y)(4((x-\frac{1}{2})^2 +(y-\frac{1}{2})^2+xy+\frac{1}{2})+8)$$ which is obviously strictly greater than $0$ . Now $f(0)=-1 < 0$ and $f(\frac{1}{2})=4.5 > 0$ , and from the fact that the function is strictly increasing, we get that $z_1 \in (0,\frac{1}{2})$ . So the other $2$ roots are complex and $z_3=\bar{z_2}$ (the conjugate of $z_2$ ). I tried to apply Vieta's formulas but I got stuck. I think I made some progress showing that a root is real and determining its interval and I think I am cl
|
Here is a computer based answer with Mathematica 14. sol = Solve[4*z^3 - 4*z^2 + 12*z - 1 == 0, z]; ToRadicals[Total[Table[Re[z]^2 /. sol[[k]], {k, 1, 3}]]] $$\frac{1}{36} \left(\sqrt[3]{3 \sqrt{4233}-73}-\frac{32}{\sqrt[3]{3 \sqrt{4233}-73}}+2\right)^2+2 \left(-\frac{1}{12} \sqrt[3]{3 \sqrt{4233}-73}+\frac{1}{3}+\frac{8}{3 \sqrt[3]{3 \sqrt{4233}-73}}\right)^2 $$ We directly compute the sum under consideration. And finally N[%] $$0.425417 $$
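An equivalent check without a CAS (my addition, not part of the answer): since the cubic is strictly increasing, bisect for its single real root $z_1\in(0,\tfrac12)$, then use Vieta's $z_1+z_2+z_3=1$ with $z_3=\bar z_2$ to get $\Re z_2=\Re z_3=(1-z_1)/2$.

```python
def f(z):
    return 4 * z ** 3 - 4 * z ** 2 + 12 * z - 1

# the cubic is strictly increasing, with f(0) = -1 < 0 and f(1/2) = 4.5 > 0,
# so bisection finds its single real root z1 in (0, 1/2)
lo, hi = 0.0, 0.5
for _ in range(80):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
z1 = (lo + hi) / 2

# Vieta: z1 + z2 + z3 = 1 and z3 = conj(z2), so Re(z2) = Re(z3) = (1 - z1)/2
re2 = (1 - z1) / 2
total = z1 ** 2 + 2 * re2 ** 2
print(total)   # ≈ 0.425417, comfortably below 3/2
```

This reproduces the Mathematica value $0.425417$ and confirms the inequality.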
|
|functions|inequality|complex-numbers|
| 0
|
How do you find the multivariable limit $\lim_{(x,y)\to(0,0)}\frac{xy}{\sqrt x +\sqrt y }$
|
$\lim_{(x,y)\to(0,0)}\frac{xy}{\sqrt x+\sqrt y}$ considering domain $\{(x,y) \in \mathbb{R}^2 : x,y \ge 0, (x,y) \ne 0\}$ I tried using polar coordinates, but the theta function is unbounded. I also tried using the sandwich theorem, but could not find appropriate bounds. How do I approach this question
|
Hint: $u:=\sqrt{x}, v:=\sqrt{y}$ where $u,v \ge 0$ . Consider $0 \le u,v < 1$ . Numerator: $u^2v^2 \le uv \le (u+v)^2.$ Can you finish?
|
|limits|multivariable-calculus|
| 0
|
Inequality regarding inner product and functions with zero integral
|
Suppose $X$ is a finite set and $f:X\rightarrow \mathbb{R}$ satisfies $\sum_{x\in X}f(x)=0$ . Let $p\in\Delta(X)$ be a probability measure on $X$ . Does the following statement hold? $$ \sum_{x\in X} f(x)(p(x))^2 \ge 0 \Rightarrow \sum_{x\in X} f(x)p(x) \ge 0. $$ I find it hard to find any counterexamples. I know that when $|X|=2$ the statement holds. Also, Cauchy-Schwarz does not seem to lead me anywhere. I feel like some generalized version of Jensen's inequality could work?
|
Counter-example: $X=\{-1,0,1\}, f(-1)=-1, f(0)=\frac 5 8, f(1)=\frac 3 8, p(-1)=\frac 1 3 , p(0)=0, p(1)=\frac 2 3$ .
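Reading the middle probability as $p(0)=0$ (so the weights sum to one), the counterexample can be verified exactly (a check I added):

```python
from fractions import Fraction as F

f = {-1: F(-1), 0: F(5, 8), 1: F(3, 8)}
p = {-1: F(1, 3), 0: F(0), 1: F(2, 3)}

assert sum(f.values()) == 0          # f sums to zero, as required
assert sum(p.values()) == 1          # p is a probability measure

lhs = sum(f[x] * p[x] ** 2 for x in f)   # hypothesis:  1/18 >= 0
rhs = sum(f[x] * p[x] for x in f)        # conclusion: -1/12 <  0
print(lhs, rhs)
```

So the hypothesis $\sum f p^2 \ge 0$ holds while the conclusion $\sum f p \ge 0$ fails, refuting the implication.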
|
|inequality|inner-products|
| 1
|
Finding the equation of two lines lying on the surface of a hyperboloid
|
Trying to solve this question: Surface $S$ is obtained by revolving the hyperbola $x^2-z^2=4$ around the $z$ axis. Write the equation for $S$ . Show that exactly two lines pass through $M=(2,0,0)$ which also lie entirely on the surface $S$ . Obtain the equations for the lines. For part 1, we replace $x$ by $\sqrt{x^2+y^2}$ . If I'm correct, $S$ is given by $x^2+y^2-z^2=4$ , which makes a hyperboloid, and hyperboloids are doubly ruled. However, I have no idea how to obtain the equations for the lines. If it's possible I don't want to use trigonometry (sin, cos, θ) or integration. I've read this post but couldn't really understand it.
|
Your hyperboloid can be represented as $$ S=\left(\matrix{x\\ y\\ z\\}\right)^T\left(\matrix{1&0&0\\ 0&1&0\\ 0&0&-1\\}\right)\left(\matrix{x\\ y\\ z\\}\right)-4=0 $$ or $p^TMp-4=0$ now given a line $p = p_0 + \lambda \vec v$ with $\vec v = (v_x,v_y,v_z)$ , if the line is contained in $S$ we should have $$ (p_0+\lambda\vec v)^TM(p_0+\lambda\vec v)-4=p_0^TMp_0+2\lambda p_0^TM \vec v+\lambda^2\vec v^TM\vec v-4=0 $$ and this should be true for all $\lambda$ hence $$ \cases{ p_0^TMp_0-4=0\\ p_0^TM \vec v=0\\ \vec v^TM\vec v=0\\ \|\vec v\|=1 } $$ as an example, given $p_0 = (0,2\sqrt{2},2)^T\in S$ we need $$ \cases{ 2\sqrt{2}v_y-2v_z=0\\ v_x^2+v_y^2-v_z^2=0\\ v_x^2+v_y^2+v_z^2=1 } $$ giving $$ \vec v = \cases{(\frac 12,\frac 12,\frac {1}{\sqrt{2}})\\ (\frac 12,-\frac 12,-\frac {1}{\sqrt{2}}) } $$
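A quick numeric confirmation (my addition, not part of the answer) that both lines really lie on $S$: evaluate $x^2+y^2-z^2-4$ along $p_0+t\vec v$ for several values of $t$.

```python
import math

def S(p):
    # S(p) = 0 exactly when p lies on the hyperboloid x^2 + y^2 - z^2 = 4
    x, y, z = p
    return x * x + y * y - z * z - 4

p0 = (0.0, 2 * math.sqrt(2), 2.0)
dirs = [(0.5, 0.5, 1 / math.sqrt(2)),
        (0.5, -0.5, -1 / math.sqrt(2))]

# evaluate S along p0 + t*v for both ruling directions and several t
worst = max(abs(S(tuple(p + t * v for p, v in zip(p0, vec))))
            for vec in dirs for t in (-4.0, -1.0, 0.0, 0.5, 3.0))
print(worst)   # ~1e-15: both lines lie on the surface up to rounding
```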
|
|multivariable-calculus|surfaces|solid-geometry|solid-of-revolution|
| 1
|
Proof For Combination Formula: N choose K
|
I have been looking at this problem for a long time. Can anyone prove the combination formula using factorials N choose K? In case anyone does not know how to list all combinations in a set, you start with a permutation tree (for example) 1 2 3 4 234 1 34 12 4 123 You then delete all connected groups in the second row that are less than the previous row (or experience an inversion) which are in bold above.
|
Below is a more complete and rigorous proof than the other existing answers. Abstract We will prove the formula for the number of $k$ -combinations $_nC_k$ by showing that $_nP_k = k! \cdot {_nC_k}$ . We show the latter by first proving $_nP_k \ge k! \cdot {_nC_k}$ , and then showing that $_nP_k \not> k! \cdot {_nC_k}$ . Assumptions of the Reader This proof assumes the reader understands the formula for ${_nP_k}$ . Assumptions in Proof $n,k \in \mathbb{Z}^{nonneg}$ with $n \ge k$ . $S$ is a set of $n$ arbitrary and distinguishable objects. ${_nP_k}$ is the total number of $k$ -permutations from the $n$ objects in $S$ . ${_nC_k}$ is the total number of $k$ -combinations from the $n$ objects in $S$ . Let $P$ and $C$ be the set of all $k$ -permutations and $k$ -combinations respectively. Note that $|P|={_nP_k}$ and $|C|={_nC_k}$ . Proof Lemma 1: $_nP_k \ge k! \cdot {_nC_k}$ . Proof of Lemma 1: Our goal is to show that exactly $k! \cdot {_nC_k}$ unique $k$ -permutations can be created from
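The identity ${}_nP_k = k!\cdot{}_nC_k$ at the heart of the proof, together with the resulting factorial formula, can be brute-force checked for small $n$ (my addition, not part of the answer):

```python
from itertools import combinations, permutations
from math import factorial

# brute-force check of nPk = k! * nCk (and the resulting formula for nCk)
for n in range(7):
    for k in range(n + 1):
        nPk = sum(1 for _ in permutations(range(n), k))
        nCk = sum(1 for _ in combinations(range(n), k))
        assert nPk == factorial(k) * nCk
        assert nCk == factorial(n) // (factorial(k) * factorial(n - k))
print("checked all n <= 6")
```

Each $k$-combination yields exactly $k!$ distinct $k$-permutations by ordering its elements, which is precisely the counting argument the proof formalizes.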
|
|combinations|
| 0
|
Can a smooth curve have a segment of straight line?
|
Setting: we are given a smooth curve $\gamma: \mathbb{R} \rightarrow \mathbb{R}^n$ Informal Question: Is it possible that $\gamma$ is a straight line on $[a,b]$ , but not a straight line on $[a,b]^c$ ? Formal Question: Is it possible that $\gamma''(t)=0$ for all $t\in [a,b]$ , while $\gamma''(t)\neq 0$ for some $t\not\in [a,b]$ ? The motivation for me to ask this question is that the textbook we use in our geometry class discusses only smooth curves on bounded open intervals. While I know that the curvature $\gamma''$ can be zero at a point (for instance: $\gamma(t)=(t,\sin t)$ has zero curvature on $\{n\pi:n\in\mathbb{Z}\}$ ), I cannot come up with an example of a smooth curve $\gamma$ such that $\gamma''$ is $0$ on some interval $(a,b)$ . I think such an example, if it exists, will be interesting to see in GGB, but I failed to come up with one due to my inexperience. Thanks for any help.
|
Using Friedrichs mollifiers you are able to construct smooth functions which are identically one, hence with second derivative identically $0$ , in a neighborhood of any compact set. One can obtain such smooth functions, also called "cut-off functions", via convolution with a bump function. I suggest the wikipedia page https://en.wikipedia.org/wiki/Mollifier for an overview. Roughly, let $\varphi:\mathbb{R}\to\mathbb{R}$ be defined as $$\varphi(x):=\begin{cases}\frac{e^{\frac{-1}{1-x^2}}}{C}\quad\ |x|\le 1\\ 0\quad \qquad |x|>1\end{cases},$$ where $C$ is a suitable normalizing factor. For each $\varepsilon>0$ define $\varphi_\varepsilon(x):=\frac{1}{\varepsilon}\varphi(\frac{x}{\varepsilon})$ . Now let $I=[a,b]$ . Denote with $\chi_I$ the characteristic function of $I$ and with $"*"$ the convolution product between functions. The function $$\chi_{\varepsilon}(x):=\chi_I*\varphi_\varepsilon(x)$$ is identically "1" on $[a+\varepsilon,b-\varepsilon]$ . The procedure can be done similarly
|
|geometry|differential-geometry|curves|smooth-functions|
| 0
|
Let $M$ be an $A,B$-bimodule, $C$ ring. The functor $\text{Hom}_A(-,M)$, from the opp category of $A,C$-bimodules to $C,B$-bimodules, has left adjoint
|
Let $M$ be an $A,B$ -bimodule and $C$ a ring. Show that the functor $\text{Hom}_A(-,M)$ , from the opposite category of $A,C$ -bimodules to $C,B$ -bimodules, is a right adjoint. [Hint: its left adjoint is essentially $\text{Hom}_B(-,M)$ , with "op" in the right places.] Following the hint, I tried to prove that $\text{Hom}_{C,B}( \text{Hom}_B(N,M),L) \cong \text{Hom}_{A,C}(N, \text{Hom}_A(L,M))$ where $N$ is a $C,B$ -bimodule and $L$ is an $A,C$ -bimodule. However, I am stuck on showing there is a bijection between the two sets, much less showing that the bijection is natural. The problem is that any $C,B$ -bimodule homomorphism that takes a (right $B$ -module homomorphism) $g$ to an element $l \in L$ may not be surjective, so it's unclear to me how to define a homomorphism in $\text{Hom}_{A,C}(N, \text{Hom}_A(L,M))$ that would naturally correspond to a map in $\text{Hom}_{C,B}( \text{Hom}_B(N,M),L)$ . Also, I am not sure what the hint ' "op" in the right places' means. EDIT
|
If $F:\mathcal{C}\to \mathcal{D}$ is a functor, its left/right adjoint goes from $\mathcal{D}$ to $\mathcal{C}$ . Applying to your case, you have $F=\text{Hom}_A(-,M):\mathcal{C}^{op}\to \mathcal{D}$ which tells you that you have to consider its adjoint as a functor $\mathcal{D}\to \mathcal{C}^{op}$ (here $\mathcal{C}$ and $\mathcal{D}$ denote the categories of modules you mention).
|
|category-theory|modules|adjoint-functors|
| 0
|
Spivak: Understanding computation of an upper bound for remainder of Taylor polynomial approximation of $\log{(1+x)}$.
|
There is a particular calculation in Spivak's Calculus that I had to think about a lot. I think I finally understood it but I am not sure about why it was done this way. The calculation is related to another recent question that I asked about computing the remainder term in a Taylor polynomial approximation to $\log{(1+x)}$ . The conclusion of that question was how to compute the remainder term in the following expression, for $x\geq 0$ $$\log{(1+x)}=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+...+\frac{(-1)^{n-1}x^n}{n}+\frac{(-1)^n}{n+1}t^{n+1}, t\in (0,x)$$ where $$|R_{n,0}(x)|=\left | \frac{(-1)^n}{n+1}t^{n+1} \right |\leq \frac{x^{n+1}}{n+1} \tag{1}$$ and $R_{n,0}(x)$ is the remainder term. Some work was involved in coming up with the remainder above (which is obtained from the integral form of the remainder), instead of just applying the formula for the Lagrange remainder (which I will do below). Then Spivak says there is a slightly more complicated estimate when $-1 < x < 0$ (Problem
|
This is not an answer to your specific question (as Dr. Shifrin already addressed this in his comments) but I figured I would use this question as an opportunity to clean up the solution manual's argument (as it is very terse). As indicated by Ted Shifrin on the answer to your previous post ( Is the remainder term in a Taylor polynomial approximation for $\log{(1+x)}$ correct in Spivak's Calculus, Ch. 20? ) the most general Taylor Polynomial plus remainder form for $\log(1+x)$ is: $\log(1+x)=x-\frac{x^2}{2}+\cdots+(-1)^{n}\frac{x^n}{n}+(-1)^n \int_0^x \frac{u^{n}}{1+u}du$ The above formula is valid $\forall x \gt -1$ . Note that $(-1)^n \int_0^x \frac{u^{n}}{1+u}du$ is a specific formulation of $R_{n,0,\log(1+I)}(x)$ , the remainder term. Next, consider the following argument, which only uses information derived from previous chapters within the book: \begin{align}\left|(-1)^n \int_0^x \frac{u^{n}}{1+u}du\right|&=\left|\int_0^x \frac{u^{n}}{1+u}du\right| \\ &=\left|-\int_x^0\frac{u^n}{
|
|calculus|integration|derivatives|taylor-expansion|
| 0
|
Why are there letters as additional digits in bases greater than the decimal base (10)?
|
For example, in the hexadecimal base (16), the letters A through F are included as digits, but why? This letter thing happens in bases greater than ten. For example, in the hexadecimal base, C is 12 in the decimal base and E is 14 in the decimal base. Also, in base 20, the letters A through J are included as digits. Why are there letters for extra digits in bases greater than ten? I'm thinking your answers will be "positive!"
|
I think the reason is that they want one symbol for each digit value, but personally I don't like it because of the confusion. So instead of writing 1A20, write 1 10 2 0, with the digits greater than 9 written as decimal numbers rather than letters. This makes it easier to convert between bases and is more convenient.
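For what it's worth, here is a small sketch (my addition) of the digit-list representation this answer suggests, where each base-$b$ digit is kept as an ordinary integer instead of a letter:

```python
def to_digits(n, base):
    """Base-`base` digits of a nonnegative integer, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return digits[::-1]

# 0x1A20 = 6688: the hexadecimal letter A is just the single digit ten
print(to_digits(6688, 16))   # [1, 10, 2, 0]
```

The same function works for any base, e.g. base 20, without needing letter symbols at all.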
|
|elementary-number-theory|notation|
| 0
|
Qing Liu's definition of specialization of point - follow-up question
|
The definition of a generic point in Qing Liu's book "Algebraic geometry and arithmetic curves" has been discussed before in these questions: Explaining the motivation behind two different definitions of a generic point , A question about generic points. My question is slightly different since I can't figure how a simple statement in the next page comes. Let me give some background. Liu's definition of generic point: For a topological space $X$ we say that $x$ specializes to $y$ if $y\in\overline{\{x\}}$ . We say that $x$ is a generic point of $X$ if $x$ is the unique point of $X$ that specializes to $x$ (def. 4.10, pg. 63). Remarks: $x$ specializes to $y$ is equivalent to $\overline{\{y\}}\subseteq\overline{\{x\}}$ . $x$ being generic is equivalent to the following: There exists no $y$ such that $\overline{\{x\}}\subseteq\overline{\{y\}}$ . The other definition of a generic point, i.e. that $\overline{\{x\}}=X$ , is less general than Liu's definition, as was remarked in the previous q
|
To see this, you can just prove that $\overline{Z \cap U} = Z$ (closure in $X$ ); indeed $Z = (Z \setminus U) \cup \overline{Z \cap U}$ so by irreducibility we can conclude. To end, we need to check that $Z = \overline{\overline{\left \{\xi\right \}}^U}^X$ . There is an obvious inclusion of LHS into RHS so again by maximality, we can reduce to proving that: if $U \subset X$ open, and $T$ irreducible in $U$ , then $\overline{T}$ is irreducible in $X$ . Indeed, if $\overline{T} = V_1 \cup V_2$ (with $V_1,V_2$ closed) then you see that there is an obvious inclusion $T \subset \overline{T} \cap U= (V_1 \cap U) \cup (V_2 \cap U)$ and hence $T = V_1 \cap U$ for instance, so $\overline{T} = \overline{V_1 \cap U} \subset \overline{V_1} = V_1$ .
|
|general-topology|algebraic-geometry|terminology|
| 0
|
Mixing saddle points of a zero-sum game
|
I am fairly new to proof writing so I would appreciate if you could help me out! Suppose that $(X^*, Y^*)$ and $(X^0, Y^0)$ are saddle points of a zero-sum game with payoff matrix $A$ . Show that $(X^*, Y^0)$ and $(X^0, Y^*)$ are also saddle points. This is what I know: If $(X^*, Y^*)$ is a saddle point for a zero-sum game with payoff matrix $A$ (which is an $n \times m$ matrix), then $ E(X,Y^*) \leq E(X^*,Y^*) \leq E(X^*,Y) \ \forall X \in S_n, Y \in S_m $ where $E(X,Y)$ denotes the expected payoff to Player 1. Similarly, for $(X^0,Y^0)$ we have $E(X,Y^0) \leq E(X^0,Y^0) \leq E(X^0,Y) \ \forall X \in S_n, Y \in S_m $ Now, to check whether $(X^0, Y^*)$ is a saddle point, I know that $E(X,Y^0)\leq E(X^*,Y^0)$ since $X^*$ is an optimal strategy for Player 1, but this feels like an illegal step to make. How should I go about it? Should I use that $E(X,Y)=\sum_j\sum_i x_i a_{i,j}y_j^T$ ? I am stuck since the result looks so obvious but I can't think of a way to justify it.
|
I think the step you suspect is illegal is illegal indeed. Note that the strategy $X^*$ is optimal when we are sure that the other player plays $Y^*$ , but we don't know yet if it is optimal when the other player plays any other strategy. I'd rather approach the problem in the following way. Since $E(X, Y^*) \leq E(X^*, Y^*) \leq E(X^*, Y) \quad (*)$ , and $E(X, Y^0) \leq E(X^0, Y^0) \leq E(X^0, Y) \quad (**)$ we also have the following inequalities: $E(X^0, Y^0) \stackrel{(**)}{\leq} E(X^0, Y^*) \stackrel{(*)}{\leq} E(X^*, Y^*)$ $E(X^*, Y^*) \stackrel{?}{\leq} E(X^*, Y^0) \stackrel{?}{\leq} E(X^0, Y^0)$ where I left the second line for you to fill in which of the inequalities $(*)$ and $(**)$ imply each step. Can you finish the proof from here?
|
|proof-writing|game-theory|
| 1
|
Comparing unbiased random walks with unequal step sizes and single boundary
|
I understand that the probability of eventual absorption is one in a simple random walk with a single absorbing boundary. However, if we depart from this simple random walk and consider biased random walks with unequal step sizes, the absorption probability may not always be one. Specifically, let $S_n = X_1 + \cdots + X_n$ , where $X_i = 1$ with probability $p$ and $X_i = -a$ with probability $1-p$ , and $a > 1$ . If the single boundary is $b > 0$ , $p \leq 1/2$ , and the process stops once $S_n \geq b$ , the probability of the process stopping would not be zero. Let $ R(a, b, p) $ be the probability of stopping with the parameters $a,b$ , and $p$ . My question is whether this probability is monotonic in these parameters. Would $R(a, b, p)$ always (weakly) decrease as $a$ increases? Also, would $R(a,b,p)$ (weakly) increase as $p$ increases?
|
If $\psi(s) = \log E(e^{sX_1})$ and $T=\inf \{n : S_n \geq b\}$ , then $M_n=e^{sS_n-n\psi (s)}$ is a martingale, and for $s>0$ optional stopping gives $$1=E(M_T)=E\left(e^{sS_T-T\psi (s)}\,\mathbf 1_{\{T<\infty\}}\right),$$ which leads to the calculation of the distribution of $T$ .
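As numerical evidence for the conjectured monotonicity (not a proof), one can estimate $R(a,b,p)$ by Monte Carlo; the trial counts and the cutoff at $-60$ are pragmatic choices of this sketch:

```python
import random

def stop_prob(a, b, p, n_trials=4000, seed=0):
    """Monte Carlo estimate of R(a, b, p): the chance that the walk with
    steps +1 (prob p) and -a (prob 1-p) ever reaches level b.  A walk
    that drifts below -60 is counted as never stopping; from that far
    down a comeback is astronomically unlikely, but this cutoff makes
    the estimate a (very slight) underestimate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        s = 0.0
        while -60.0 < s < b:
            s += 1.0 if rng.random() < p else -a
        if s >= b:
            hits += 1
    return hits / n_trials

# Probes of the two conjectured monotonicities:
r_small_a = stop_prob(a=1.5, b=3, p=0.40)   # expected larger than r_large_a
r_large_a = stop_prob(a=3.0, b=3, p=0.40)
r_low_p   = stop_prob(a=2.0, b=3, p=0.30)   # expected smaller than r_high_p
r_high_p  = stop_prob(a=2.0, b=3, p=0.45)
print(r_small_a, r_large_a, r_low_p, r_high_p)
```

With these parameters the estimates come out clearly ordered ( $R$ decreasing in $a$ , increasing in $p$ ), consistent with the conjecture.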
|
|probability|probability-theory|martingales|random-walk|
| 0
|
Does Cantor’s theorem rely on the Empty Set being in the power set of a set?
|
As I understand it, Cantor's diagonal set can be empty; that is, there could be a mapping from the Natural Numbers to the Power Set of the Natural Numbers in which the empty set is not in the image. The function is still not a bijection, but there is only one element not mapped (one set that is outside the image only because the defining condition holds vacuously). So, does Cantor's theorem depend on the empty set being part of the Power Set? And how could there be a cardinality in between if in this mapping only one element is not mapped?
|
I believe the discussion is transparent if we work with the set-theoretical version of Cantor's theorem. The theorem says that given any function $f:X\to P(X)$ , the set $H(f,X)=\{x\in X\,:\, x\notin f(x)\}$ is not in $\operatorname{im}(f)$ . Therefore, a proof that there is no surjection $X\to P(X)$ would be like this: (1) consider a function $f:X\to P(X)$ ; (2) by Cantor's theorem $\forall y\in X,[f(y)\ne H(f,X)]$ ; (3) since $H(f,X)\in P(X)$ , $f$ is not surjective. Now, the same argument doesn't work if we substitute $P(X)$ with $P(X)\setminus\{\emptyset\}$ , because the claim $H(f,X)\in P(X)\setminus\{\emptyset\}$ in (3) may fail. In fact, $f(x)=\{x\}$ is an easy example of a function $f:X\to P(X)$ such that $H(f,X)=\emptyset$ and, moreover, there are bijections $X\to P(X)\setminus\{\emptyset\}$ if $\lvert X\rvert\le1$ . For the special case where $X=\Bbb N$ (or more generally $X$ Dedekind-infinite) a proof that there isn't a surjection $X\to P(X)\setminus\{\emptyset\}$ could be like this: co
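For a finite $X$ the claim can even be checked exhaustively by machine. A small sketch (the choice $X=\{0,1,2\}$ and the use of `frozenset` are illustration choices):

```python
from itertools import combinations, product

def powerset(X):
    """All subsets of X as frozensets."""
    return [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

X = [0, 1, 2]
PX = powerset(X)          # the 8 subsets of X

def H(f):
    """Cantor's diagonal set {x in X : x not in f(x)}."""
    return frozenset(x for x in X if x not in f[x])

# Exhaustively check every function f : X -> P(X): the diagonal set is
# never in the image, so no f is surjective (Cantor's theorem, finite case).
for values in product(PX, repeat=len(X)):
    f = dict(zip(X, values))
    assert H(f) not in set(f.values())

# The example from the answer, f(x) = {x}: its diagonal set is empty,
# showing that H(f, X) = ∅ really can occur.
f_singleton = {x: frozenset({x}) for x in X}
assert H(f_singleton) == frozenset()
print("checked", len(PX) ** len(X), "functions")
```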
|
|elementary-set-theory|natural-numbers|
| 1
|
Rigorously distinguishing torque from work, or, a more accurate algebraic structure for dimensional analysis
|
The algebraic structure underlying dimensional analysis is commonly said to be a finitely generated Abelian group, whose generating set is the set of base units (e.g. length, time, mass, charge, and temperature). The problem with this is that there exist pairs of derived units that are not commensurable despite having the same decomposition, with the canonical example being work and torque: both decompose to $T^{−2}ML^{2}$ (aka force times distance) but it doesn't make sense to add them. It occurs to me that the problem with this pair of units is conflating two kinds of multiplication. Work is more accurately described as the line integral of force over a path, which is a generalized dot product; torque is the cross product of force with the distance between the point where the force is applied and the axis of rotation. So we ought to say that work decomposes to $T^{-2}ML\cdot L$ whereas torque decomposes to $T^{-2}ML\times L$ and they are not the same. The question is then: Does confl
|
They represent different Kinds of Quantities. Trying to analyse equations to infer whether you're dealing with a torque or a work value would be rather difficult in general, especially within large programmes. A simpler way would be to use a symbolic checker that allows for naming, and performs dimensional analysis. See https://www.scitepress.org/Link.aspx?doi=10.5220/0012318900003645
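A minimal sketch of the "naming" idea in Python (the `Quantity` class and its fields are hypothetical, for illustration only, and not the tool behind the link):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """Toy quantity carrying SI-style exponents of (time, mass, length)
    plus a named 'kind' tag.  The kind is what separates torque from
    work even though their dimension exponents coincide."""
    value: float
    dims: tuple   # exponents of (T, M, L)
    kind: str

    def __add__(self, other):
        # Addition demands identical dimensions AND identical kind,
        # so work + torque is rejected despite matching dims.
        if self.dims != other.dims:
            raise TypeError("incompatible dimensions")
        if self.kind != other.kind:
            raise TypeError(f"incompatible kinds: {self.kind} vs {other.kind}")
        return Quantity(self.value + other.value, self.dims, self.kind)

work   = Quantity(3.0, (-2, 1, 2), "work")    # T^-2 M L^2
torque = Quantity(5.0, (-2, 1, 2), "torque")  # same dims, different kind

print((work + work).value)        # fine: 6.0
try:
    work + torque                 # rejected by the kind check
except TypeError as err:
    print(err)
```

This is essentially dimensional analysis over the usual Abelian group, refined by an extra tag that tracks the kind of quantity rather than trying to infer it from the dot/cross structure of the defining equations.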
|
|abstract-algebra|abelian-groups|dimensional-analysis|unit-of-measure|
| 0
|
Spivak, Ch. 20, Prob. *17a: Is the solution manual incorrect?
|
*17 (a) Show that if $|g'(x)|\leq M|x-a|^n$ for $|x-a|<\delta$ , then $|g(x)-g(a)|\leq \frac{M|x-a|^{n+1}}{n+1}$ for $|x-a|<\delta$ . Here is the solution in the solution manual. By hypothesis, $$-M(x-a)^n\leq g'(x)\leq M(x-a)^n, \text{ for } x\geq a$$ It follows from the Mean Value Theorem that $$\frac{-M(x-a)^{n+1}}{n+1}\leq g(x)-g(a)\leq \frac{M(x-a)^{n+1}}{n+1}$$ i.e., that $|g(x)-g(a)|\leq \frac{M(x-a)^{n+1}}{n+1}$ . The case $x\leq a$ is treated similarly. Sometimes the solutions are quite terse like this and skip over many steps. Here is my attempt at filling in the intermediate steps in the solution manual solution. We know that $|g'(x)|\leq M|x-a|^n$ and therefore $$-M|x-a|^n\leq g'(x)\leq M|x-a|^n$$ If $x\geq a$ , then $|x-a|=x-a$ and $$-M(x-a)^n\leq g'(x)\leq M(x-a)^n\tag{1}$$ The Mean Value Theorem says that there is some point $c\in (a,x)$ at which $$g'(c)=\frac{g(x)-g(a)}{x-a}\tag{2}$$ Hence $$-M(c-a)^n\leq g'(c)\leq M(c-a)^n$$ But since $x-a>c-a$ , and using $(2)$ we have $$-M(x-a)^n\leq g'(c)\leq M(x-a)^n$$ $$-M(
|
Based on the part $b$ portion of this 4-part problem, I would argue that the $\frac{1}{n+1}$ is simply a typo (both in the prompt itself and the solution manual). As you described in your post, using the MVT (outlined in Chapter 11) we can show that: If $|g'(x)| \leq M|x-a|^n$ for $|x-a|\lt \delta$ , then $|g(x)-g(a)| \lt \left|M(x-a)^{n+1}\right|$ The part $b$ portion of this problem then asks us to use part $a$ in order to show that: If $\lim_{x \to a}\frac{g'(x)}{(x-a)^n}=0$ , then $\lim_{x\to a}\frac{g(x)-g(a)}{(x-a)^{n+1}}=0$ Well, by assumption, we have that $\forall \varepsilon \gt 0: \exists \delta_{\varepsilon} \gt 0: \forall x \in (a-\delta_{\varepsilon},a+\delta_{\varepsilon})\setminus \{a\}: \left|\frac{g'(x)}{(x-a)^n}\right| \lt \varepsilon$ Choosing an arbitrary $\varepsilon$ , we rewrite the above expression as: $\forall x \in (a-\delta_{\varepsilon},a+\delta_{\varepsilon})\setminus \{a\}: \left|\frac{g'(x)}{(x-a)^n}\right| \lt \varepsilon \iff \forall x \in (a-\delta_{\
|
|calculus|derivatives|proof-explanation|mean-value-theorem|
| 0
|
How to find the intersection of a pair of equal arcs touching two points
|
Though this question doesn't have mathematical origins, it boils down to geometry. I'm a high school student, so forgive me if the answer is obvious; I will likely have overlooked it. I am building a robot which is controlled like a tank, with one set of wheels on each side being controlled by a driver, allowing me to power each side of the robot individually, causing the motion of the robot when rotating to essentially be an arc. The sensors retrieve enough data to know the orientation of the robot, its position and the position of the destination. The algorithm itself is part of the robot's course correction, and must bring the robot to the destination, leaving it facing north. To reduce the sharpness of the turns as much as possible whilst maintaining simplicity, I've decided to model the path of the robot as two arcs of equal radius. I want to know the position or bearing of the robot when it has to change arcs in terms of the data available to me. I know each set of data has one
|
The problem can be solved using addition formulae to give the formula: $$ \theta = \arccos\left(\frac{h\cos(\phi)+h−w\sin(\phi)}{2\sqrt{h^2+w^2}}\right)+\arccos\left(\frac{-h}{2\sqrt{h^2+w^2}}\right) $$
|
|geometry|algorithms|circular-motion|
| 1
|
Specific basis for the space of symmetric matrices
|
Consider the space of symmetric matrices $symm(M)$ over the reals of dimension $n \times n$ . It is clear that there is a straightforward basis for this space where, for any $i \ge j$ , $M_{ij}(m,n) = 1$ if $m = i,j$ and $n = j,i$ , and 0 everywhere else. I'm trying to understand the following equation: $$vec(M)\cdot vec(yy^T - zz^T) = 0 $$ where $M$ is a full rank symmetric matrix, $y,z \in \mathbb{R}^n$ . Now, can we find $O(d^2)$ vectors in the orthogonal complement of $vec(M)$ such that they span the complement and each can be written as $vec(yy^T - zz^T)$ ? In other words, is there a basis of the orthogonal subspace of $M$ such that every element is of the form $vec(yy^T - zz^T)$ ? The basis above $\{M_{ij}\}$ has the desired form, where the idea is to just consider extended vectors for $[1/2, 1]$ and $[1/2, -1]$ . But this won't show why any other vector has the desired form. Some idea on how to solve this would be useful.
|
Preliminary result: let $(S_n)$ be the vector space of real symmetric $n \times n$ matrices. The set of all matrices of the form $xx^T-yy^T$ (which belong to $(S_n)$ ) generates $(S_n)$ . General idea, presented in the even case $n=2p$ (to be adapted for the case where $n$ is odd) and in the case where all eigenvalues are distinct. Let $M$ be a symmetric matrix. $M$ is diagonalizable with real eigenvalues which can be either negative or positive. By a suitable translation, $N=M-tI_n$ , we can assume there are as many $<0$ as $\ge 0$ eigenvalues; call them $-a_k$ and $b_k$ resp. with associated eigenvectors $U_k$ and $V_k$ resp. Consider the diagonalization identity $N=ADA^T$ where $D=diag(-a_1 \cdots -a_p, b_1 \cdots b_p)$ and denoting the columns of $A$ by $U_1 \cdots U_p, V_1 \cdots V_p$ . This identity can be written: $$N=\sum_{k=1}^p -a_k U_kU_k^T + \sum_{k=1}^p b_k V_kV_k^T$$ $$N=\sum_{k=1}^p -\underbrace{\sqrt{a_k} U_k}_{Y_k}\sqrt{a_k}U_k^T + \sum_{k=1}^p \underbrace{\sqrt{b_k}
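This decomposition is easy to check numerically. A sketch assuming NumPy (the size $n=6$ , the random $M$ , and the midpoint choice of the shift $t$ are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random symmetric matrix
M = rng.standard_normal((n, n))
M = (M + M.T) / 2

# Shift so that exactly half the eigenvalues are < 0 (the "suitable
# translation" of the answer): take t as the midpoint between the two
# middle eigenvalues -- an assumption for this even-dimensional example.
w, U = np.linalg.eigh(M)              # eigenvalues in ascending order
t = (w[n // 2 - 1] + w[n // 2]) / 2
N = M - t * np.eye(n)

w, U = np.linalg.eigh(N)
neg, pos = w[: n // 2], w[n // 2:]    # the -a_k and b_k of the answer

# Rebuild N as  sum_k -Y_k Y_k^T  +  sum_k Z_k Z_k^T
Y = U[:, : n // 2] * np.sqrt(-neg)    # columns sqrt(a_k) U_k
Z = U[:, n // 2:] * np.sqrt(pos)      # columns sqrt(b_k) V_k
N_rebuilt = -Y @ Y.T + Z @ Z.T
assert np.allclose(N, N_rebuilt)
print("max error:", np.abs(N - N_rebuilt).max())
```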
|
|matrices|matrix-decomposition|matrix-rank|symmetric-matrices|
| 0
|
Prove that $\int f \ln(f) d \mu =\sup \left \{ \int f \phi d \mu : \int e^{\phi} d\mu \leq 1 \right \}$
|
Question Prove that $\int f \ln(f) d \mu = \sup \left \{ \int f \phi d \mu : \int e^{\phi} d\mu \leq 1 \right \}$ with $f$ verifying $ \int f d \mu = 1 $ and $ f \cdot \ln(f) $ integrable, and with $ \phi $ a real measurable function s.t. $ f \phi $ is $\mu$ -integrable. Answer 1- $\int f \cdot \ln ( \frac{e^{\phi}}{f}) d \mu = \int f \cdot \ln(e^{\phi}) - f \cdot \ln(f) d\mu = \int f \phi - f \cdot \ln(f) d\mu = \int f \phi - \int f \cdot \ln(f) d\mu$ where the last equality comes from the fact that by assumption $ f \cdot \ln(f) $ and $\int f \phi d \mu$ are integrable. 2- Because we have $d \nu = f d \mu $ we can write: $\ln (\int e^{\phi} d \mu) = \ln (\int e^{\phi} \frac{f }{f} d \mu) = \ln (\int e^{\phi} \frac{1 }{f} d \nu) $ . $\ln(.)$ is a concave function, hence by Jensen's inequality we have $ \ln (\int e^{\phi} d \mu) = \ln (\int e^{\phi} \frac{1 }{f} d \nu) \geq \int \ln(e^{\phi} \frac{1 }{f}) d \nu = \int \ln(e^{\phi} \frac{1 }{f}) f d \mu$ 3- Let us prove that $ \int f \ln(f)
|
According to https://math.stackexchange.com/users/915356/snoop (this remark is in the comments of my post): You proved that $\int \phi f \, d\mu \leq \int f\ln(f)\, d\mu$ for all $\int \phi f\, d\mu \in \left\{\int \phi f\, d\mu : \int e^{\phi}\, d\mu \leq 1\right\} =: F$ . Therefore, $\sup F \leq \int f\ln(f)\, d\mu$ . Since $\int f\ln(f)\, d\mu \in \mathbb{R}$ and $\int e^{\ln(f)}\, d\mu = 1$ , we have $\int f\ln(f)\, d\mu \in F$ and so $\int f\ln(f)\, d\mu \leq \sup F$ . It follows that $\int f\ln(f)\, d\mu = \sup F$ .
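The inequality behind this argument can be sanity-checked on a discrete measure. A sketch assuming NumPy (the uniform $\mu$ on 50 points and the random densities are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
mu = np.full(n, 1.0 / n)                 # uniform measure on n points

# A density f with integral 1 with respect to mu
f = rng.random(n)
f /= (f * mu).sum()

entropy_term = (f * np.log(f) * mu).sum()     # ∫ f ln f dμ

# Any g can be made feasible (∫ e^φ dμ = 1) via φ = g - ln ∫ e^g dμ.
# For every such φ the value ∫ f φ dμ stays below ∫ f ln f dμ.
for _ in range(1000):
    g = rng.normal(size=n)
    phi = g - np.log((np.exp(g) * mu).sum())
    assert (f * phi * mu).sum() <= entropy_term + 1e-12

# The supremum is attained at φ = ln f, since then ∫ e^φ dμ = ∫ f dμ = 1.
phi_opt = np.log(f)
assert np.isclose((np.exp(phi_opt) * mu).sum(), 1.0)
assert np.isclose((f * phi_opt * mu).sum(), entropy_term)
print("sup check passed; entropy term =", entropy_term)
```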
|
|measure-theory|inequality|solution-verification|lebesgue-integral|integral-inequality|
| 1
|
Use the epsilon-delta definition of the limit to prove that $\lim_{x\to 2}x^2=4$
|
On https://openstax.org/books/calculus-volume-1/pages/2-5-the-precise-definition-of-a-limit , example 2.41, they used a geometric approach, which I find hard to understand. I have 2 questions: 1. They stated: choose $\delta=\min\{2-\sqrt{4-\epsilon},\sqrt{4+\epsilon}-2\}$ , and use $- \left(2 - \sqrt{4 - \epsilon}\right) \leq -\delta$ to prove $\lim_{x\to 2}x^2=4$ . But $\sqrt{4+\epsilon}-2$ is less than $2-\sqrt{4-\epsilon}$ , so I think using $|x-2|<\sqrt{4+\epsilon}-2$ is enough; taking $\delta=\min\{2-\sqrt{4-\epsilon},\sqrt{4+\epsilon}-2\}$ is not necessary. 2. I used another method that differs from the textbook. Is my method correct? Assume $|x-2|<\delta$ for some $\delta>0$ . For $x>2$ : $x-2<\delta$ , so $x<2+\delta$ and $x^2-4<(2+\delta)^2-4=\delta^2+4\delta$ . Solve $\delta^2+4\delta=\epsilon$ : $\delta=-2 \pm \sqrt{4+\epsilon}$ ; since $\delta>0$ , $\delta=-2+\sqrt{4+\epsilon}$ . For $x<2$ : $2-x<\delta$ , so $2-\delta<x$ and $(2-\delta)^2-4<x^2-4$ , i.e. $\delta^2-4\delta<x^2-4$ . For $|x^2-4|<\epsilon$ we need $-\epsilon<x^2-4$ , so solve $\delta^2-4\delta=-\epsilon$ : $\delta=2 \pm \sqrt{4-\epsilon}$ , take $\delta=2-\sqrt{4-\epsilon}$ . For $x>2$ : for every $\epsilon>0$ there exists a $\delta>0$ such that if $0<x-2<\delta$ then $2<x<\sqrt{4+\epsilon}$ , hence $0<x^2-4<\epsilon$ and $|x^2-4|<\epsilon$ . For $x<2$ : for every $\epsilon>0$ there exists a $\delta>0$ such that if $-\delta<x-2<0$ then $2>x>\sqrt{4-\epsilon}$ , hence $0>x^2-4>-\epsilon$ and $|x^2-4|<\epsilon$ . So there exists a $\delta$ such that if $0<|x-2|<\delta$ , then $|x^2-4|<\epsilon$ .
|
If $\lim_{x\to a}f(x)=L$ , then for all $\epsilon\gt 0$ there exists a $\delta$ such that if $|x-a|\lt \delta$ , $|f(x)-L|\lt \epsilon$ . If $f(x)=x^2$ and $a=2$ and $L=4$ , this implies that for all $\epsilon\gt 0$ there exists a $\delta$ such that if $|x-2|\lt \delta$ , $|x^2-4|\lt \epsilon$ Working backwards, $|x^2-4|\lt\epsilon$ implies $-\epsilon\lt x^2-4\lt \epsilon$ which in turn implies $4-\epsilon\lt x^2 \lt 4+\epsilon$ . Taking the positive square root yields $\sqrt{4-\epsilon}\lt x \lt \sqrt{4+\epsilon}$ . Then subtracting $2$ gives us $\sqrt{4-\epsilon}-2\lt x-2\lt \sqrt{4+\epsilon}-2$ . We have found the desired form of our first $\delta$ . Let's call it $\delta_1$ . If $|x-2|\lt \delta_1$ then $|x-2|\lt \sqrt{4+\epsilon}-2$ , which implies that $2-\sqrt{4+\epsilon}\lt x-2 \lt \sqrt{4+\epsilon}-2$ . This implies that $x\lt \sqrt{4+\epsilon}$ and from there that $x^2\lt 4+\epsilon$ , allowing us to conclude that $x^2-4\lt \epsilon$ , which is what we wanted to show. All wel
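Both the asker's observation in question 1 and the implication itself can be checked numerically. A sketch (the sample values of $\varepsilon$ and the grid density are arbitrary choices):

```python
import math

def delta_for(eps):
    """The OpenStax choice δ = min(2 - sqrt(4 - ε), sqrt(4 + ε) - 2),
    defined for 0 < ε < 4."""
    return min(2 - math.sqrt(4 - eps), math.sqrt(4 + eps) - 2)

for eps in (3.9, 1.0, 0.1, 1e-4):
    d = delta_for(eps)
    # The asker's point in question 1: the second term is always the min,
    # since sqrt(4+ε) + sqrt(4-ε) < 4 for 0 < ε < 4.
    assert math.sqrt(4 + eps) - 2 <= 2 - math.sqrt(4 - eps)
    # Check the implication on a fine grid of x with 0 < |x - 2| < δ.
    for k in range(1, 2000):
        x = 2 - d + 2 * d * k / 2000
        if x != 2:
            assert abs(x * x - 4) < eps
print("epsilon-delta check passed")
```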
|
|calculus|
| 0
|
Real Number recursion
|
Let $T_{n}$ be a sequence of real numbers such that $T_{n+1}\ge T_{n}^{2}+\frac{1}{5}$ Prove that $\sqrt{T_{n+5}}\ge T_{n}$ . I rearranged the first expression in terms of $T_{n}$ , but seemingly to no avail. Are there any other pointers for solving this problem?
|
For all real $x$ we have $$ x^2-x = \Bigl(x-{\small{\frac{1}{2}}}\Bigr)^2 - {\small{\frac{1}{4}}} \ge -{\small{\frac{1}{4}}} $$ so then we get \begin{align*} T_{n+5} &= T_{n+1} + \sum_{k=1}^4 (T_{n+k+1}-T_{n+k}) \\[4pt] &\ge T_{n+1} + \sum_{k=1}^4 \left(\Bigl(T_{n+k}^2+{\small{\frac{1}{5}}}\Bigr)-T_{n+k}\right) \\[4pt] &= T_{n+1} + \sum_{k=1}^4 \left(\Bigl(T_{n+k}^2-T_{n+k}\Bigr)+{\small{\frac{1}{5}}}\right) \\[4pt] &\ge T_{n+1} + \sum_{k=1}^4 \Bigl(-{\small{\frac{1}{4}}}+{\small{\frac{1}{5}}}\Bigr) \\[4pt] &= T_{n+1} + \sum_{k=1}^4 \Bigl(-{\small{\frac{1}{20}}}\Bigr) \\[4pt] &= T_{n+1} - {\small{\frac{1}{5}}} \\[4pt] &\ge \Bigl(T_n^2+{\small{\frac{1}{5}}}\Bigr) - {\small{\frac{1}{5}}} \\[4pt] &= T_n^2 \\[4pt] \end{align*} hence $\sqrt{T_{n+5}}\ge T_n$ .
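A quick numerical illustration (not a proof): run the extremal recursion $T_{n+1}=T_n^2+\frac15$ , where the hypothesis holds with equality, from random starting values and check the claim:

```python
import random

# Starting values are kept in (-0.7, 0.7) so the iteration stays below
# the fixed point of x^2 + 1/5 and cannot blow up -- a convenience
# choice for this sketch, not a restriction of the actual problem.
random.seed(0)
for _ in range(100):
    T = [random.uniform(-0.7, 0.7)]
    for _ in range(30):
        T.append(T[-1] ** 2 + 1 / 5)
    for n in range(len(T) - 5):
        # The claimed inequality sqrt(T_{n+5}) >= T_n
        assert T[n + 5] >= 0 and T[n + 5] ** 0.5 >= T[n]
print("checked 100 trajectories")
```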
|
|algebra-precalculus|recursion|
| 0
|
Why is the group of rotations in $\mathbb{R}^n$ not $n$-dimensional?
|
I have a rather basic misunderstanding about Lie groups and Lie algebras. Consider the Lie group $SO(N)$ for $N>3$ of rotations on $\mathbb{R}^N$ . On the one hand this Lie group has dimension $N(N-1)/2$ , since every $SO(N)$ element can be parametrized as $e^{X}$ , where $X$ is an anti-symmetric matrix with $N(N-1)/2$ free parameters. On the other hand, $\mathbb{R}^N$ has $N$ cardinal axes. Can I not express every rotation in $\mathbb{R}^N$ as products of rotations about the axes, implying that $SO(N)$ has dimension $N$ ? Where do the additional degrees of freedom come from? A follow-up related question: each Lie algebra generator $X_{ij}$ , for $1 \leq i < j \leq N$ , generates a one-parameter group of rotations $O(\theta) = e^{\theta X_{ij}}$ . Is there a simple geometric explanation for which axis this rotation is about?
|
Firstly note that already in just 2 dimensions this doesn't make sense: $SO(2)$ represents rotations in the plane and is 1-dimensional. So the naive assumption should be that we have $\frac{N}{2}$ ( $\frac{N-1}{2}$ if $N$ is odd) dimensions for $SO(N)$ by splitting into orthogonal 2-dimensional subspaces. The problem is that products of these simple rotations don't generate all possible elements of $SO(N)$ ; indeed they generate exactly a Cartan subgroup. The next naive guess is that we shouldn't have made the 2-dimensional subspaces orthogonal and should have worked with all subspaces, or at least all subspaces we get from a fixed orthonormal basis. But the number of those is exactly $N$ choose $2$ , or $\frac{N(N-1)}{2}$ , which is the correct answer. Obviously I haven't shown that there is no redundancy there, but this is the right idea.
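The count $N(N-1)/2$ and the fact that antisymmetric generators exponentiate into $SO(N)$ can be illustrated numerically. A sketch assuming NumPy (the truncated power series `expm` is a stand-in for a library matrix exponential, adequate here because the matrices are small):

```python
import numpy as np

def expm(X, terms=60):
    """Matrix exponential via truncated power series -- fine for the
    moderate-norm matrices used in this illustration."""
    R = np.eye(len(X))
    term = np.eye(len(X))
    for k in range(1, terms):
        term = term @ X / k
        R = R + term
    return R

rng = np.random.default_rng(0)
N = 5

# One generator X_{ij} = E_ij - E_ji per coordinate plane i < j.
basis = []
for i in range(N):
    for j in range(i + 1, N):
        X = np.zeros((N, N))
        X[i, j], X[j, i] = 1.0, -1.0
        basis.append(X)

assert len(basis) == N * (N - 1) // 2     # 10 planes, not N = 5 axes

# A random combination exponentiates to a rotation: R^T R = I, det R = 1.
X = sum(rng.normal() * B for B in basis)
R = expm(X)
assert np.allclose(R.T @ R, np.eye(N), atol=1e-8)
assert np.isclose(np.linalg.det(R), 1.0)
print("dim so(%d) = %d" % (N, len(basis)))
```

This also answers the geometric follow-up in passing: $e^{\theta X_{ij}}$ rotates the $(i,j)$ coordinate plane and fixes the remaining $N-2$ directions pointwise, so for $N>3$ a rotation has an invariant plane rather than an axis.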
|
|linear-algebra|geometry|lie-groups|
| 1
|
Where to find proof for the remainder formula of the interpolation in two variables
|
Professor showed this result in the lecture without giving any proof (after proving the existence of the interpolating polynomial in two variables). I've been trying to prove it myself or to find a book where it is proved, but I failed. This is the theorem: Let $$ a \leq x_0 < x_1 < \cdots < x_n \leq b, \qquad c \leq y_0 < y_1 < \cdots < y_m \leq d, $$ $$ M = \{ (x_i, y_j) : 0 \leq i \leq n, 0 \leq j \leq m \}, \quad f \in \mathcal{C}^{m + n + 2}([a,b] \times [c,d]), $$ $$ p \in \Pi_{n, m} : p(x_i, y_j) = f(x_i, y_j) \quad \forall 0 \leq i \leq n, 0 \leq j \leq m. $$ Then, for all $(x, y) \in (x_0, x_n) \times (y_0, y_m)$ there exist $\xi, \xi' \in (x_0, x_n), \eta, \eta' \in (y_0, y_m)$ such that $$ f(x, y) - p(x, y) = \frac{1}{(n + 1)!} \frac{\partial^{n + 1} f(\xi, y)}{\partial x^{n + 1}} \prod_{i = 0}^n (x - x_i) $$ $$ + \frac{1}{(m + 1)!} \frac{\partial^{m + 1} f(x, \eta)}{\partial y^{m + 1}} \prod_{j = 0}^m (y - y_j) $$ $$ - \frac{1}{(n + 1)! (m + 1)!} \frac{\partial^{n + m + 2} f(\xi', \eta')}{\partial x^{n + 1} \partial y^{m + 1}} \prod_{i = 0}^n (x - x_i) \prod_{j = 0}^m (y - y_j) $$
|
Let $$X_{- 1}(x) = 1,\quad X_{i}(x) = \prod_{k = 0}^{i}\left( x - x_{k} \right)\quad\forall 0 \leq i \leq n $$ $$ Y_{- 1}(y) = 1,\quad Y_{j}(y) = \prod_{l = 0}^{j}\left( y - y_{l} \right)\quad\forall 0 \leq j \leq m $$ Considering $f( \cdot ,y) \in C^{n + 1}$ as a univariate function, we have by the Newton form for single-variable interpolation $$ f(x,y) = \sum_{i = 0}^{n}X_{i - 1}(x)f\left\lbrack x_{0},\ldots,x_{i};y \right\rbrack + X_{n}(x)f\left\lbrack x_{0},\ldots,x_{n},x;y \right\rbrack $$ where $$ f\left\lbrack x_{0},\ldots,x_{i};y \right\rbrack = \left( f( \cdot ,y) \right)\left\lbrack x_{0},\ldots,x_{i} \right\rbrack $$ Now, the $f\left\lbrack x_{0},\ldots,x_{i};y \right\rbrack$ for $0 \leq i \leq n$ are also univariate functions with respect to $y$ . Because $f \in C^{n + m + 2}$ , the recursive formula for divided differences gives $f\left\lbrack x_{0},\ldots,x_{i};y \right\rbrack \in C^{m + 1}$ , so we can apply the Newton formula $$f\left\lbrack x_{0},\ld
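One consequence worth checking numerically: when $f$ itself lies in $\Pi_{n,m}$ , all three remainder terms vanish and the interpolant reproduces $f$ exactly. A sketch assuming NumPy (the nodes and the sample $f$ are arbitrary choices, not the professor's notation):

```python
import numpy as np

n, m = 3, 2
xs = np.linspace(-1.0, 1.0, n + 1)
ys = np.linspace(0.0, 2.0, m + 1)

def f(x, y):                       # bidegree (3, 2), so f is in Π_{3,2}
    return x**3 * y**2 - 2 * x * y + 1

# Solve for coefficients c[i,j] of p(x,y) = Σ c_ij x^i y^j from the
# interpolation conditions F = Vx C Vy^T on the (n+1)(m+1) grid points.
Vx = np.vander(xs, n + 1, increasing=True)   # Vx[i, a] = xs[i]**a
Vy = np.vander(ys, m + 1, increasing=True)   # Vy[j, b] = ys[j]**b
F = np.array([[f(x, y) for y in ys] for x in xs])
C = np.linalg.solve(Vx, np.linalg.solve(Vy, F.T).T)

def p(x, y):
    return sum(C[i, j] * x**i * y**j
               for i in range(n + 1) for j in range(m + 1))

# At off-grid points the remainder is zero up to rounding error.
for x, y in [(0.3, 1.7), (-0.8, 0.4), (0.95, 1.99)]:
    assert abs(p(x, y) - f(x, y)) < 1e-10
print("interpolant reproduces f exactly")
```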
|
|numerical-methods|interpolation|approximation-theory|lagrange-interpolation|multivariate-polynomial|
| 1
|
A question related to Jordan Curve Theorem
|
Problem. Let $C$ be a smooth Jordan curve, and $ \gamma: [-1,1]\to\mathbb R^2,$ a segment crossing $C$ perpendicularly at $x_0\in C$ , with $\gamma(0)=x_0$ . Show that there exists an $\varepsilon>0,\,$ such that the segments $$ \ell_-=\big\{\gamma(t): t\in (-\varepsilon,0)\big\}\quad\text{and}\quad \ell_+=\big\{\gamma(t): t\in (0,\varepsilon)\big\} $$ lie each wholly in a different connected component of $\mathbb R^2\setminus C$ . This question, once we draw a curve and a perpendicular, looks obvious. Often in proofs, it is also considered obvious. Nevertheless, I have not managed to provide a rigorous proof/explanations. Any ideas? Note. By "smooth curve" I mean that $\gamma$ is continuously differentiable with non-vanishing derivative.
|
Let $J: S^1 \hookrightarrow \mathbb{R}^2$ be the Jordan curve. Then, since $J$ is $C^1$ with non-vanishing derivative and a topological embedding, the image $C = J(S^1)$ is a $C^1$ -submanifold of $\mathbb{R}^2$ . So, there is a chart $\phi: U \to \mathbb{R}^2$ of $\mathbb{R}^2$ at $x_0 \in U \subseteq \mathbb{R^2}$ with $\phi(C \cap U) = \mathbb{R}\times\{0\}$ and $\phi(x_0) = (0,0)$ . Now, since $\gamma(0) = x_0 \in U$ there is some $\delta > 0$ with $\gamma((-\delta, \delta)) \subseteq U$ and therefore $g:=\phi\circ\gamma|_{(-\delta,\delta)}: (-\delta,\delta) \to \mathbb{R}^2$ a $C^1$ -curve with $g(0) = \phi(x_0) = (0,0)$ . Since $\gamma$ intersects $C$ orthogonally in $x_0$ , the tangent vectors of $\gamma$ in $0$ and $J$ at $J^{-1}(x_0)$ are linearly independent. Thus $(a,b)=g'(0) \in T_{(0,0)} \mathbb{R}^2 = \mathbb{R}^2$ has $b \neq 0$ , since $T_{(0,0)}\phi(C \cap U) = \mathbb{R}\times \{0\}$ . W.l.o.g let $b > 0$ . Therefore, there is some $\varepsilon > 0$ with $\varepsilon
|
|algebraic-topology|curves|
| 0
|