qid | question | author | author_id | answer |
|---|---|---|---|---|
4,174 | <p>I'm developing a course that focuses on the transition from arithmetic to algebraic thinking, particularly in grades 5-8. We will do this through a focus on the Common Core. I'm also putting together a collection of suggested readings from the math education literature, and I would be interested to hear your suggestions.</p>
| MathTeacher | 3,528 | <p>I'd suggest literature on students' understanding of the equals sign, e.g.,
1. "Concepts Associated with the Equality Symbol" by Kieran. <a href="http://link.springer.com/article/10.1007/BF00311062" rel="nofollow">http://link.springer.com/article/10.1007/BF00311062</a>
2. "A Longitudinal Examination of Middle School Students' Understanding of the Equal Sign and Equivalent Equations" by Alibali et al.
<a href="http://www.tandfonline.com/doi/abs/10.1080/10986060701360902#.VK3Rx4rF_dc" rel="nofollow">http://www.tandfonline.com/doi/abs/10.1080/10986060701360902#.VK3Rx4rF_dc</a>
3. "From an operational to a relational conception of the equal sign," Molina
<a href="http://www.researchgate.net/publication/46593005_From_an_operational_to_a_relational_conception_of_the_equal_sign._Thirds_graders_developing_algebraic_thinking" rel="nofollow">http://www.researchgate.net/publication/46593005_From_an_operational_to_a_relational_conception_of_the_equal_sign._Thirds_graders_developing_algebraic_thinking</a></p>
|
3,333,928 | <p>I am reading an example of the root test for a series: <span class="math-container">$$\sum_{n=1}^\infty\frac{3n}{2^n}.$$</span> So applying the root test, we get <span class="math-container">$$\lim_{n \to \infty}\sqrt[n]{\frac{3n}{2^n}}=\lim_{n \to \infty}\frac{\sqrt[n]{3n}}{2}=\frac{1}{2}\lim_{n\to\infty}\exp(\frac{1}{n}\ln (3n))\\=\frac{1}{2}\exp(\lim_{n\to\infty}\frac{\ln (3n)}{n})=\frac{1}{2}\exp(\lim_{n\to\infty}\frac{\frac{3}{3n}}{1})=\frac{1}{2}e^0=\frac{1}{2}.$$</span></p>
<p>I don't understand the expression starting from where the <span class="math-container">$\exp$</span> is used. Can someone explain which formula is used there? Then why <span class="math-container">$$\lim_{n\to\infty}\frac{\ln (3n)}{n}=\lim_{n\to\infty}\frac{\frac{3}{3n}}{1}?$$</span></p>
| trula | 697,983 | <p>A transcendental function $f(x)$ gives transcendental results for most rational $x$;
examples: $e^x$, $\sin(x)$, etc.
Simple-seeming equations such as $e^x=x$ or $\cos(x)=x$ have no closed-form solution for $x$ and must be solved numerically.
Also, you cannot rewrite $e^x$ as a polynomial or as a quotient of polynomials.
trula</p>
|
96,289 | <p>In 1995 (if I'm not mistaken) Taylor and Wiles proved that all semistable elliptic curves over $\mathbb{Q}$ are modular. This result was extended to all elliptic curves in 2001 by Breuil, Conrad, Diamond, and Taylor.</p>
<p>I'm asking this as a matter of interest. Are there any other fields over which elliptic curves are known to be modular? Are there any known fields for which this is not true?</p>
<p>Also, is much research being conducted on this matter?</p>
| David Roberts | 4,177 | <p>In the <a href="http://www.ams.org/notices/199911/comm-darmon.pdf" rel="nofollow">article in the Notices of the AMS</a> which came out when the BCDT proof was announced, it says</p>
<blockquote>
<p><em>Generalizations to other number fields.</em> A number
of ingredients in Wiles’s method have been significantly
simplified, by Diamond and Fujiwara
among others. Fujiwara, Skinner, and Wiles have
been able to extend Wiles’s results to the case
where the field $\mathbb{Q}$ is replaced by a totally real number
field $K$. In particular, this yields analogues of
the Shimura-Taniyama-Weil conjecture for a large
class of elliptic curves defined over such a field.</p>
</blockquote>
<p>Unfortunately it doesn't say what sorts of elliptic curves are covered by these results.</p>
|
2,485,261 | <blockquote>
<p>$\displaystyle \sum_{k=0}^n k {n \choose k} p^k (1-p)^{n-k}$ with $0<p<1$</p>
</blockquote>
<p>I know of one way to evaluate it (from statistics) but I was wondering if there are any other ways. </p>
<p>This is the way I know:</p>
<p>Let </p>
<p>$$M(t)=\displaystyle \sum_{k=0}^n e^{kt} {n \choose k} p^k (1-p)^{n-k}$$</p>
<p>Then $$M(t)=\displaystyle \sum_{k=0}^n {n \choose k} (pe^t)^k (1-p)^{n-k}=(pe^t+1-p)^n$$</p>
<p>$$M'(t)=\displaystyle \sum_{k=0}^n ke^{kt} {n \choose k} p^k (1-p)^{n-k}=pe^tn(pe^t+1-p)^{n-1}$$</p>
<p>$$M'(0)=\displaystyle \sum_{k=0}^n k {n \choose k} p^k (1-p)^{n-k}=np$$</p>
| A.G. | 115,996 | <p>You can also notice that this is the expectation $E(X)$ where $X$ is a binomial random variable with parameters $n$ and $p$, and $E(X)=n\,p$.</p>
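Not part of the original answer: a quick numerical sanity check (a Python sketch) that the direct sum equals $np$, as both the MGF derivation and the expectation argument claim.

```python
from math import comb

def binomial_mean(n, p):
    # Evaluate sum_{k=0}^{n} k * C(n,k) * p^k * (1-p)^(n-k) directly.
    return sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

for n in (1, 5, 20):
    for p in (0.1, 0.5, 0.9):
        assert abs(binomial_mean(n, p) - n * p) < 1e-12
```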
|
2,764,818 | <blockquote>
<p>Let $f(x)=ax^3+bx^2+cx+d$ be a polynomial function. Find relations between $a,b,c,d$ such that its roots are in an arithmetic/geometric progression (separate relations).</p>
</blockquote>
<p>So for the arithmetic progression I let $\alpha = x_2$ be the middle root and let $2r$ be the common difference of the arithmetic progression.</p>
<p>We have:</p>
<p>$$x_1=\alpha-2r, \quad x_2=\alpha, \quad x_3=\alpha +2r$$</p>
<p>Therefore:</p>
<p>$$x_1+x_2+x_3=-\frac ba=3\alpha$$
$$x_1^2+x_2^2+x_3^2 = 9\alpha^2-2\frac ca \to 4r^2=\frac {b^2-3ac}{3a^2}$$
$$x_1x_2x_3=\alpha(\alpha^2-4r^2)=-\frac da$$</p>
<p>and we get the final result $2b^3+27a^2d-9abc=0$.</p>
<p>How should I take the ratio at the geometric progression for roots?</p>
<p>I tried something like </p>
<p>$$x_1=\frac {\alpha}q, \quad x_2=\alpha, \quad x_3=\alpha q$$</p>
<p>To get $x_1x_2x_3=\alpha^3$, but it doesn't really work out.</p>
<p>Note:</p>
<p>I have to choose from this set of answers:</p>
<p>$$\text{(a)} \ a^2b=c^2d \quad\text{(b)}\ a^2b^2=c^2d \quad\text{(c)}\ ab^3=c^3d$$</p>
<p>$$\text{(d)}\ ac^3=b^3d \quad\text{(e)}\ ac=bd \quad\text{(f)}\ a^3c=b^3d$$</p>
| orangeskid | 168,051 | <p>To sum up, a cubic has its roots in arithmetic progression if and only if the arithmetic mean of the roots is a root of the cubic (so it equals one of the roots).</p>
<p>For the geometric progression, we could use the same trick, and say that the geometric mean of $x_1$, $x_2$, $x_3$ is a root of $P$. Alternatively, to avoid cubic roots, one considers the equivalent statement that $x_1x_2x_3$ must equal one of the $x_i^3$. So we set up the equation $Q=0$ with roots $x_1^3$, $x_2^3$, $x_3^3$ and impose the condition that $x_1x_2x_3$ is a root. </p>
<p>The equation for $x_i^3$ can be obtained readily by <a href="http://www.wolframalpha.com/input/?i=GroebnerBasis%5B%20%7Ba%20x%5E3%20%2B%20b%20x%5E2%20%2B%20c%20x%20%2B%20d,%20y-x%5E3%7D,%20%7By%7D,%7Bx%7D%5D" rel="nofollow noreferrer">eliminating</a> $x$ from the equalities $y = x^3$, $ax^3 + b x^2 + c x + d = 0$. We get a cubic equation for $y$
$$a^3 y^3+ (3 a^2 d - 3 a b c + b^3)y^2 +(3 a d^2 - 3 b c d + c^3)y+ d^3=0$$
and the condition is that $-\frac{d}{a}$ is a root of this equation.</p>
<p>In general, given a polynomial $P$ of degree $n$, one can get the condition on the coefficients so that for some ordering of the roots we have an algebraic condition $F(x_1, \ldots, x_n)=0$. We take all the possible permutations of $F(x_{\sigma(1)}, \ldots, x_{\sigma(n)})$ of $F$. The condition is that the product of all these permutations is $0$. </p>
<p>If we want $m$ conditions $F_1= \cdots = F_m=0$, we set up $F=\sum t_i F_i$, where $t_1$, $\ldots t_m$ are variables, consider all the possible permutations of $F$. The condition is that the product of all these is $0$, as a polynomial in $t_1$, $\ldots t_m$.</p>
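Not part of the original answer: a numerical sanity check (a Python sketch) of the eliminated cubic in $y$ and of the stated condition, using the geometric progression $1, 2, 4$ as a test case.

```python
# Cubic a x^3 + b x^2 + c x + d with roots in geometric progression: 1, 2, 4.
r = [1.0, 2.0, 4.0]
a = 1.0
b = -a * (r[0] + r[1] + r[2])                # Vieta: sum of roots = -b/a
c = a * (r[0]*r[1] + r[0]*r[2] + r[1]*r[2])  # sum of pairwise products = c/a
d = -a * (r[0]*r[1]*r[2])                    # product of roots = -d/a

def Q(y):
    # The eliminated cubic in y = x^3 from the answer above.
    return (a**3 * y**3
            + (3*a**2*d - 3*a*b*c + b**3) * y**2
            + (3*a*d**2 - 3*b*c*d + c**3) * y
            + d**3)

# Its roots should be the cubes of the original roots...
for x in r:
    assert abs(Q(x**3)) < 1e-6
# ...and the GP condition is that y = -d/a is a root of Q.
assert abs(Q(-d / a)) < 1e-6
```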
|
188,938 | <p>Hyperbolic "trig" functions such as $\sinh$, $\cosh$, have close analogies with regular trig functions such as $\sin$ and $\cos$. Yet the hyperbolic versions seem to be encountered relatively rarely. (My frame of reference is that of someone with college freshman/sophomore, but not advanced math.)</p>
<p>Why is that? Is it because the hyperbolic versions of these functions are less common/useful than the circular versions? </p>
<p>Can you do the "usual" applications (Taylor series, Fourier series) with hyperbolic functions as you can with trigonometric?</p>
<p>I'm not a professional mathematician. I've had three semesters of calculus and one of linear algebra/differential equations, and "barely" know about hyperbolic functions. The question is with that frame of reference. </p>
| Community | -1 | <p>I can think of two reasons.</p>
<ol>
<li><p>When we do geometry, we usually work in Euclidean space, where the intrinsic property of a line segment between two points is its length, given (in two dimensions) by $\ell^2 = \Delta x^2 + \Delta y^2$. We are allowed to change our reference frame as long as we preserve lengths, which means that the transformation of $(\Delta x,\Delta y)$ is a rotation, and the transformed vector must lie on the circle $\Delta x^2 + \Delta y^2 = \operatorname{const}$, which is naturally parametrized by $\sin$ and $\cos$. The hyperbolic variants don't have much to do with circles or with rotation, so are not relevant here (unless you supply them with imaginary arguments, at which point you're really working just with $\sin$ and $\cos$ in disguise).</p>
<p>On the other hand, the natural setting for special relativity is <a href="https://en.wikipedia.org/wiki/Minkowski_space">Minkowski space</a>, where the invariant property of the interval between two points in spacetime is of the form $s^2=\Delta x^2-\Delta t^2$, with a minus sign. Here the allowed changes of reference frame are given by <a href="https://en.wikipedia.org/wiki/Lorentz_transformation">Lorentz transformations</a>, the transformed interval lies on $\Delta x^2-\Delta t^2=\operatorname{const}$ which is a hyperbola, and indeed one finds hyperbolic trigonometric functions to be <a href="https://en.wikipedia.org/wiki/Velocity-addition_formula#Special_theory_of_relativity">quite useful in special relativity</a>.</p></li>
<li><p>The usual trigonometric functions $\sin$ and $\cos$ are two real solutions to the differential equation $y'' = -y$, which describes a simple harmonic oscillator (a conservative system in a parabolic potential well) which is a central example in much of classical physics. The hyperbolic versions $\sinh$ and $\cosh$ are solutions to $y'' = y$ instead. This equation is rarely useful to model physical systems in real life because all its solutions are unbounded and gain infinite amounts of kinetic energy. Even taken locally, the equation describes an unstable equilibrium, so any real system will not spend most of its time there without additional forcing.</p></li>
</ol>
|
188,938 | <p>Hyperbolic "trig" functions such as $\sinh$, $\cosh$, have close analogies with regular trig functions such as $\sin$ and $\cos$. Yet the hyperbolic versions seem to be encountered relatively rarely. (My frame of reference is that of someone with college freshman/sophomore, but not advanced math.)</p>
<p>Why is that? Is it because the hyperbolic versions of these functions are less common/useful than the circular versions? </p>
<p>Can you do the "usual" applications (Taylor series, Fourier series) with hyperbolic functions as you can with trigonometric?</p>
<p>I'm not a professional mathematician. I've had three semesters of calculus and one of linear algebra/differential equations, and "barely" know about hyperbolic functions. The question is with that frame of reference. </p>
| Argon | 27,624 | <p><span class="math-container">$\sinh$</span> and <span class="math-container">$\cosh$</span> seem to appear less often than their circular counterparts in real analysis, as is explained well in some other answers. However, hyperbolic functions appear quite commonly in complex analysis. From <a href="http://en.wikipedia.org/wiki/Euler%27s_formula" rel="nofollow noreferrer">Euler's formula</a> and its <a href="http://en.wikipedia.org/wiki/Euler%27s_formula#Relationship_to_trigonometry" rel="nofollow noreferrer">subsequent trigonometric definitions</a>, one finds that</p>
<p><span class="math-container">$$\cos (ix) = \cosh x$$</span>
<span class="math-container">$$\sinh (ix)=i\sin x$$</span></p>
<p>which is a clear connection between hyperbolic and circular trigonometric functions.</p>
<hr>
<p>One use of hyperbolic functions, that I have personally used, is in integration.
<span class="math-container">$$\cosh^2 x - \sinh^2 x=1$$</span>
This identity can often be used for substitutions to evaluate integrals with <span class="math-container">$x^2+1$</span> and <span class="math-container">$x^2-1$</span> (instead of <span class="math-container">$\sec$</span> and <span class="math-container">$\tan$</span>), just as the identity <span class="math-container">$\cos^2 x+\sin^2 x=1$</span> can be used for substitutions to evaluate integrals with <span class="math-container">$1-x^2$</span>. </p>
<p>For example, to evaluate</p>
<p><span class="math-container">$$\int \frac{dx}{\sqrt{x^2+1}}$$</span></p>
<p>we may substitute <span class="math-container">$x = \sinh u \implies dx = \cosh u \, du$</span> so the integral becomes</p>
<p><span class="math-container">$$\int \frac{\cosh u}{\sqrt{\sinh^2 u+1}}\, du = \int 1 \, du= u +C = \operatorname{arsinh} x + C$$</span></p>
<p>And, of course, the hyperbolic functions provide a parametrization of the standard hyperbola.</p>
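Not part of the original answer: the substitution result can be sanity-checked numerically (a Python sketch using a simple midpoint rule).

```python
from math import asinh, sqrt

def integral(f, a, b, steps=100000):
    # Midpoint-rule approximation of the definite integral of f on [a, b].
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# int_0^2 dx / sqrt(x^2 + 1) should equal arsinh(2) - arsinh(0) = asinh(2).
val = integral(lambda x: 1 / sqrt(x * x + 1), 0.0, 2.0)
assert abs(val - asinh(2.0)) < 1e-6
```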
|
188,938 | <p>Hyperbolic "trig" functions such as $\sinh$, $\cosh$, have close analogies with regular trig functions such as $\sin$ and $\cos$. Yet the hyperbolic versions seem to be encountered relatively rarely. (My frame of reference is that of someone with college freshman/sophomore, but not advanced math.)</p>
<p>Why is that? Is it because the hyperbolic versions of these functions are less common/useful than the circular versions? </p>
<p>Can you do the "usual" applications (Taylor series, Fourier series) with hyperbolic functions as you can with trigonometric?</p>
<p>I'm not a professional mathematician. I've had three semesters of calculus and one of linear algebra/differential equations, and "barely" know about hyperbolic functions. The question is with that frame of reference. </p>
| Tunococ | 12,594 | <p>I'd say because hyperbolic functions can be written pretty easily in terms of exponential functions. By that I mean you don't need $i$ like when you express $\sin$ and $\cos$ using $\exp$. That means in the "real" world, it's not necessary to use $\sinh$ and $\cosh$ because you can always resort to $\exp$, but the same is not true for $\sin$ and $\cos$. However, I wouldn't say hyperbolic functions are rarely encountered.</p>
<p>Very often I find myself choosing hyperbolic functions over exponential functions in the context of ODEs and PDEs with boundary/initial conditions at $0$. (I believe sophomores should have taken/be taking these classes, no?) The fact that $\sinh(0) = \cosh'(0) = 0$ and $\cosh(0) = \sinh'(0) = 1$ makes your expression much cleaner. Also, you get $\sinh(ax)$ and $\cosh(ax)$ as two solutions to the ODE $y'' - a^2y = 0$ immediately, similar to the way you get $\sin(ax)$ and $\cos(ax)$ from $y'' + a^2y = 0$. Variation of parameters also gives $\sinh$ as a kernel when you solve the non-homogeneous ODE with Dirichlet boundary conditions. There are just so many things you can express cleanly in terms of hyperbolic functions.</p>
<p>Oh, and don't forget that $\tanh$ is a really nice bijection from $\mathbb R$ to $(-1, 1)$. Kind of funny that $\tanh$ looks like $\frac 2\pi \arctan$.</p>
|
2,008,263 | <p>Solve $$(1+y^2\sin2x) \;dx - 2y\cos^2x \;dy = 0$$</p>
<p>Well, first of all I've written $M = 1+y^2\sin2x$ , $N = 2y\cos^2x$.</p>
<p>Then, I noticed that $M'_y$ <strong>does not</strong> equal to $N'_x$.</p>
<p>I'm trying to find something to multiply the equation with, but my math skills are lacking. So I'm going for $\frac{M'_y - N'_x}{N} = h(x)$; now I need to find $h(x)$, which I'm struggling to do. I would love your help.</p>
<p><strong>Edit: I just noticed that $M'_y = N'_x$</strong></p>
| Max Payne | 232,145 | <p>I think the equation is exact, as</p>
<p>$$2y \sin 2x = 4y\cos x\sin x$$</p>
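Not part of the original answer: writing the equation as $M\,dx + N\,dy = 0$ with $N = -2y\cos^2 x$ (the minus sign coming from the original equation), exactness $M'_y = N'_x$ can be checked numerically (a Python sketch using central differences).

```python
from math import sin, cos

def M(x, y):
    # Coefficient of dx: 1 + y^2 sin(2x)
    return 1 + y * y * sin(2 * x)

def N(x, y):
    # Coefficient of dy, with the minus sign from the equation: -2y cos^2(x)
    return -2 * y * cos(x) ** 2

h = 1e-6
for (x, y) in [(0.3, 1.2), (1.1, -0.7), (2.0, 0.5)]:
    dM_dy = (M(x, y + h) - M(x, y - h)) / (2 * h)
    dN_dx = (N(x + h, y) - N(x - h, y)) / (2 * h)
    assert abs(dM_dy - dN_dx) < 1e-5   # both equal 2y sin(2x)
```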
|
918,689 | <p>Given 100 light bulbs of which 2 are broken, in how many ways can a sample of 5 be selected that contains <em>at least</em> 1 of the broken bulbs?</p>
<p>So far, I have tried only 1 method, as it's the only one I've been taught, but I don't know if I am doing it right.
I tried doing C(100,1)/C(100,5) but it just doesn't seem right. Is it? If it isn't, what am I doing wrong?</p>
| voldemort | 118,052 | <p>Hints:</p>
<p>1) Total number of ways to choose a sample of $5$ bulbs from $100$= $100 \choose 5$.</p>
<p>2) Total number of ways to choose $5$ non defective bulbs= $98 \choose 5$- as there are $98$ non defective bulbs.</p>
<p>Now subtract $(2)$ from $(1)$ to get your answer.</p>
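Not part of the original answer: a quick numeric check of the hint (the counts of 100 total and 98 non-defective bulbs are taken from the hints above, so there are 2 broken bulbs).

```python
from math import comb

total = comb(100, 5)          # (1) all ways to choose 5 bulbs from 100
none_broken = comb(98, 5)     # (2) samples avoiding both broken bulbs
at_least_one = total - none_broken

# Cross-check by direct counting: exactly 1 broken + exactly 2 broken.
direct = comb(2, 1) * comb(98, 4) + comb(2, 2) * comb(98, 3)
assert at_least_one == direct
```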
|
954,933 | <p>Let $\phi\in\ell^\infty$. For $p\in[1,\infty]$, define $M_\phi:\ell^p\to\ell^p$ by</p>
<p>$$M_\phi(f)=\phi f.$$</p>
<p>Show that $\Vert M_\phi\Vert=\Vert\phi\Vert_\infty$, and $M_\phi$ is compact if and only if $\phi\in c_0$, i.e. $\phi$ is a sequence that converges to $0$.</p>
<p>I only have a problem with the part "$\phi\in c_0$ $\Rightarrow$ $M_\phi$ compact".</p>
<p>I tried to prove by contradiction, assume $M_\phi$ is not compact, then there is a bounded sequence $(f_n)_{n\in\mathbb{N}}$ in $\ell^p$ s.t. $(\phi f_n)_{n\in\mathbb{N}}$ has no convergent subsequence, then it also has no Cauchy subsequence. Then we can define</p>
<p>$$t:=\inf_{m\neq n}\Vert \phi(f_m-f_n)\Vert_p>0$$</p>
<p>At this point I am stuck, I tried to find some $n\neq m$ s.t. $\Vert \phi(f_m-f_n)\Vert_p<t$, then we get a contradiction. Can someone give some hints? Thanks!</p>
<p>P.S. It is required to use the definition of compactness to prove this question.</p>
| Jordan | 116,955 | <p>First I would point out that the $t$ you have defined may in fact be zero, even if the sequence $M_\phi (f_n)$ has no Cauchy subsequence - all you would need is for <em>some</em> pair $f_n,f_m$ to map to the same sequence under $M_\phi$.</p>
<p>Now, to prove the claim, one relatively easy way is to express $M_\phi$ as the norm limit of a sequence of finite rank operators.</p>
<p>If you don't want to appeal to this fact, you can prove the claim directly. You should consider a bounded set $B$ (say, the unit ball) in $\ell^p$, and show that the set $M_\phi(B)$ is totally bounded by covering it with a finite number of $\varepsilon$-balls for arbitrary $\varepsilon>0$. Hint: think about what multiplication by $\phi \in c_0$ will do to the tail of a unit-norm element of $\ell^p$.</p>
<p><strong>Edit</strong>: details on what I mean above. Let $B$ be the unit ball in $\ell^p$ and let $\varepsilon>0$; we seek a finite collection of $\varepsilon$-balls covering $M_\phi(B)$.</p>
<p>As $\phi\in c_0$ we can find $N$ so that $\vert \phi_n \vert\leq \varepsilon/2$ for all $n>N$. Now consider each element of $M_\phi(B)$ as having two pieces: the first $N$ entries, and the rest (the tail). By the argument that gives $\Vert M_\phi \Vert = \Vert \phi\Vert_\infty$, the $p$-norm of the tail will be less than $\varepsilon/2$. In other words, any element of $M_\phi(B)$ is less than $\varepsilon/2$ away from a finite-dimensional (i.e. $N$-dimensional) bounded subset of $\ell^p$, call it $F$.</p>
<p>Finite dimensional and bounded means that we can cover $F$ by a finite number of $\varepsilon/2$-balls. Now, any element of $M_\phi(B)$ differs by at most $\varepsilon/2$ from some element of $F$, which in turn differs by at most $\varepsilon/2$ one of the finitely many centers. The triangle inequality gives us that an arbitrary element of $M_\phi (B)$ differs by at most $\varepsilon$ from one of the finitely many centers - i.e. we've covered $M_\phi(B)$ by a finite collection of $\varepsilon$-balls.</p>
|
2,114,446 | <p>But, just to get across the idea of a generating function, here is how a generatingfunctionologist might answer the question: the nth Fibonacci number, $F_{n}$, is the coefficient of $x^{n}$ in the expansion of the function $\frac{x}{(1 − x − x^2)}$ as a power series about the origin.</p>
<p>I am reading a book about generating function, however, I got a little rusted about power series. could anyone give me a quick review about what the statement above is saying?</p>
<p>namely, </p>
<p>$F_{n}$, is the coefficient of $x_{n}$ in the expansion of the function $\frac{x}{(1 − x − x^2)}$ as a power series about the origin</p>
| Peter Taylor | 5,676 | <p>Your comments on angryavian's answer suggest that it would be worth showing some more intermediate steps. Given $f(x) = F_0 + F_1 x + F_2 x^2 + F_3 x^3 + \ldots$ we have $$\begin{eqnarray}
f(x) = & F_0 + & F_1 x + & F_2 x^2 + F_3 x^3 + F_4 x^4 + F_5 x^5 + \ldots \\
xf(x) = & & F_0 x + & F_1 x^2 + F_2 x^3 + F_3 x^4 + F_4 x^5 + \ldots \\
x^2f(x) = & & & F_0 x^2 + F_1 x^3 + F_2 x^4 + F_3 x^5 + \ldots \\
\end{eqnarray}$$</p>
<p>whence</p>
<p>$$f(x) - xf(x) - x^2f(x) = F_0 + (F_1 - F_0) x + (F_2 - F_1 - F_0) x^2 + (F_3 - F_2 - F_1) x^3 + \ldots$$</p>
<p>But since $F_{i+2} = F_{i+1} + F_i$ that simplifies to $$f(x) - xf(x) - x^2f(x) = F_0 + (F_1 - F_0) x$$ which is easily rearranged to $$f(x) = \frac{F_0 + (F_1-F_0)x}{1-x-x^2}$$</p>
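Not part of the original answer: the telescoping above can be sanity-checked numerically (a Python sketch; with $F_0=0$, $F_1=1$ the closed form is $x/(1-x-x^2)$).

```python
def series_coeffs(n_terms):
    # From (1 - x - x^2) f(x) = x: c_0 = 0, c_1 = 1,
    # and c_k = c_{k-1} + c_{k-2} for k >= 2 -- the Fibonacci recurrence.
    c = [0, 1]
    for _ in range(n_terms - 2):
        c.append(c[-1] + c[-2])
    return c

fib = series_coeffs(20)

# The partial sum of F_n x^n should approximate x / (1 - x - x^2) for small x.
x = 0.1
partial = sum(f * x ** n for n, f in enumerate(fib))
assert abs(partial - x / (1 - x - x * x)) < 1e-12
```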
|
2,349,124 | <p>I keep on hitting a road block in trying to solve this, especially when trying to prove it going from the right hand side to the left hand side. </p>
| Bram28 | 256,001 | <p>From right to left:</p>
<p>$$X=$$</p>
<p>$$(X \cap Y) \cup (X \cap Y^C)=$$</p>
<p>$$(X \cap ((X \cap Y^C) \cup (X^C \cap Y))) \cup (X \cap Y^C)=$$</p>
<p>$$((X \cap X \cap Y^C) \cup (X \cap X^C \cap Y)) \cup (X \cap Y^C)=$$</p>
<p>$$((X \cap Y^C) \cup \emptyset) \cup (X \cap Y^C)=$$</p>
<p>$$(X \cap Y^C) \cup (X \cap Y^C)=$$</p>
<p>$$X \cap Y^C=$$</p>
<p>$$X \cap Y^C \cap Y^C=$$</p>
<p>$$X \cap Y^C \cap ((X \cap Y^C) \cup ( X^C \cap Y))^C =$$</p>
<p>$$(X \cap Y^C) \cap (X \cap Y^C)^C \cap (X^C \cap Y)^C =$$</p>
<p>$$\emptyset \cap (X^C \cap Y)=$$</p>
<p>$$\emptyset$$</p>
|
919,572 | <p>Do you know any nice way of expressing </p>
<p>$$\sum_{k=0}^{n} \frac{H_{k+1}}{n-k+1}$$
?</p>
<p>Some simple manipulations involving the integrals lead to an expression that also uses<br>
the hypergeometric series. Is there any way of getting a form that doesn't use the HG function?</p>
| Felix Marin | 85,343 | <p>$\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}$
$\ds{{\cal I}_{n} \equiv\sum_{k = 0}^{n}{H_{k + 1} \over n - k + 1}
=\sum_{k = 1}^{n + 1}{H_{k} \over n - k + 2}:\ {\large ?}}$.</p>
<blockquote>
<p>\begin{align}
\sum_{n = 0}^{\infty}{\cal I}_{n}z^{n}&
=\sum_{n = 1}^{\infty}{\cal I}_{n - 1}z^{n - 1}=
\sum_{n = 1}^{\infty}z^{n - 1}\sum_{k = 1}^{n}{H_{k} \over n - k + 1}
=\sum_{k = 1}^{\infty}H_{k}\sum_{n\ =\ k}^{\infty}{z^{n - 1} \over n - k + 1}
\\[3mm]&=\sum_{k = 1}^{\infty}H_{k}\sum_{n = 1}^{\infty}{z^{n + k - 2} \over n}
={1 \over z^{2}}\sum_{k = 1}^{\infty}H_{k}z^{k}\sum_{n = 1}^{\infty}{z^{n} \over n}
={1 \over z^{2}}\bracks{-\,{\ln\pars{1 - z} \over 1 - z}}\bracks{-\ln\pars{1 - z}}
\\[3mm]&={\ln^{2}\pars{1 - z} \over z^{2}\pars{1 - z}}
={1 \over z^{2}}\,
\lim_{\mu\ \to\ -1}\partiald[2]{\pars{1 - z}^{\mu}}{\mu}
={1 \over z^{2}}\,
\lim_{\mu\ \to\ -1}\partiald[2]{}{\mu}\sum_{n = 0}^{\infty}\pars{-1}^{n}z^{n}
{\mu \choose n}
\\[3mm]&={1 \over z^{2}}\,
\lim_{\mu\ \to\ -1}\partiald[2]{}{\mu}\sum_{n = 0}^{\infty}z^{n}
{-\mu + n - 1\choose n}
\end{align}</p>
</blockquote>
<p>The first two terms ($n=0$ and $n=1$) make no contribution, so:
\begin{align}
\sum_{n = 0}^{\infty}{\cal I}_{n}z^{n}&=
{1 \over z^{2}}\,\lim_{\mu\ \to\ -1}\partiald[2]{}{\mu}\sum_{n = 2}^{\infty}z^{n}
{-\mu + n - 1\choose n}
=
\lim_{\mu\ \to\ -1}\partiald[2]{}{\mu}\sum_{n = 0}^{\infty}z^{n}
{-\mu + n + 1\choose n + 2}
\end{align}</p>
<blockquote>
<p>\begin{align}
{\cal I}_{n}&\equiv\sum_{k = 0}^{n}{H_{k + 1} \over n - k + 1}
=\lim_{\mu\ \to\ -1}\partiald[2]{}{\mu}{-\mu + n + 1\choose n + 2}
\\[5mm]&=\lim_{\mu\ \to\ -1}{-\mu + n + 1 \choose n + 2}\left\lbrace%
\bracks{\Psi\pars{-\mu + n + 2} - \Psi\pars{-\mu}}^{2}
\right.
\\&\left.\phantom{\lim_{\mu\ \to\ -1}\qquad\qquad\qquad\,\,\,\,\,\,\,\,\,\,}-
\Psi'\pars{-\mu} + \Psi'\pars{-\mu + n + 2}\right\rbrace
\\[5mm]&=\bracks{\Psi\pars{n + 3} - \underbrace{\Psi\pars{1}}
_{\ds{=\ \color{#c00000}{-\gamma}}}}^{2} - \underbrace{\Psi'\pars{1}}
_{\ds{=\ \color{#c00000}{\pi^{2} \over 6}}}
+\Psi'\pars{n + 3}
\end{align}</p>
</blockquote>
<p>$$\color{#66f}{\large%
\sum_{k = 0}^{n}{H_{k + 1} \over n - k + 1}
=\bracks{\Psi\pars{n + 3} + \gamma}^{2} + \Psi'\pars{n + 3} - {\pi^{2} \over 6}}
$$</p>
<blockquote>
<p>$\ds{\Psi\pars{z}}$ and $\ds{\gamma}$ are the <i>Digamma Function</i> and the
<i>Euler-Mascheroni Constant</i>, respectively.
<a href="http://people.math.sfu.ca/~cbm/aands/page_258.htm" rel="nofollow">See this link</a>. </p>
</blockquote>
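Not part of the original answer: since $\Psi(n+3)+\gamma=H_{n+2}$ and $\Psi'(n+3)=\pi^2/6-\sum_{k=1}^{n+2}1/k^2$, the boxed closed form reduces to $H_{n+2}^2-\sum_{k=1}^{n+2}1/k^2$, which a short numerical check confirms (a Python sketch).

```python
def H(m):
    # m-th harmonic number
    return sum(1.0 / k for k in range(1, m + 1))

def lhs(n):
    # Direct sum: sum_{k=0}^{n} H_{k+1} / (n - k + 1)
    return sum(H(k + 1) / (n - k + 1) for k in range(n + 1))

def rhs(n):
    # Closed form [Psi(n+3)+gamma]^2 + Psi'(n+3) - pi^2/6,
    # rewritten as H_{n+2}^2 - sum_{k=1}^{n+2} 1/k^2.
    return H(n + 2) ** 2 - sum(1.0 / k**2 for k in range(1, n + 3))

for n in range(10):
    assert abs(lhs(n) - rhs(n)) < 1e-12
```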
|
919,572 | <p>Do you know any nice way of expressing </p>
<p>$$\sum_{k=0}^{n} \frac{H_{k+1}}{n-k+1}$$
?</p>
<p>Some simple manipulations involving the integrals lead to an expression that also uses<br>
the hypergeometric series. Is there any way of getting a form that doesn't use the HG function?</p>
| Pedro | 23,350 | <p>Let $f(x)=\displaystyle\sum\limits_{n\geqslant 0}\frac{1}{n+1}x^n=-x^{-1}\log(1-x)$</p>
<p>Then $\displaystyle g(x)=\frac{1}{1-x}f(x)=\sum_{n\geqslant 0}\sum_{k=0}^n \frac{1}{k+1}x^n=\sum_{n\geqslant 0}H_{n+1} x^n$ and you want the coefficient of $x^n$ (i.e. $(fg)^{(n)}(0)/n!$) in $$f(x)g(x)=x^{-2}\log^2(1-x)/(1-x)$$ I guess you can do some complex contour integration trickery, which I don't know about. </p>
<p>Note that by the observation, i.e. since it is $[x^n]$ in $(1-x)^{-1}f(x)^2$, we can associate $f(x)^2$ first, and thus $$\sum_{k=0}^n \frac{H_{k+1}}{n-k+1}=\sum_{k=0}^n\sum_{j=0}^k\frac{1}{j+1}\frac{1}{k-j+1}$$</p>
|
1,102,758 | <p><strong>Problem</strong></p>
<p>Given a pre-Hilbert space $\mathcal{H}$.</p>
<p>Consider unbounded operators:
$$S,T:\mathcal{H}\to\mathcal{H}$$</p>
<p>Suppose they're formal adjoints:
$$\langle S\varphi,\psi\rangle=\langle\varphi,T\psi\rangle$$</p>
<p>Regard the completion $\hat{\mathcal{H}}$.</p>
<p>Here they're partial adjoints:
$$S\subseteq T^*\quad T\subseteq S^*$$
In particular, both are closable:
$$\hat{S}:=\overline{S}\quad\hat{T}:=\overline{T}$$</p>
<blockquote>
<p>But they need not be adjoints, right?
$$\hat{S}^*=\hat{T}\quad\hat{T}^*=\hat{S}$$
<em>(I highly doubt it but miss a counterexample.)</em></p>
</blockquote>
<p><strong>Application</strong></p>
<p>Given the pre-Fock space $\mathcal{F}_0(\mathcal{h})$.</p>
<p>The ladder operators are pre-defined by:
$$a(\eta)\bigotimes_{i=1}^k\sigma_i:=\langle\eta,\sigma_k\rangle\bigotimes_{i=1}^{k-1}\sigma_i\quad a^*(\eta)\bigotimes_{i=1}^k\sigma_i:=\bigotimes_{i=1}^k\sigma_i\otimes\eta$$
and extended via closure:
$$\overline{a}(\eta):=\overline{a(\eta)}\quad\overline{a}^*(\eta):=\overline{a^*(\eta)}$$
regarding the full Fock space $\mathcal{F}(\mathcal{h})$.</p>
<p>They are not only formally:
$$\langle a(\eta)\varphi,\psi\rangle=\langle\varphi,a^*(\eta)\psi\rangle$$
but really adjoint to each other:
$$\overline{a}(\eta)^*=\overline{a}^*(\eta)\quad\overline{a}^*(\eta)^*=\overline{a}(\eta)$$
<em>(The usual proof relies on Nelson's theorem, afaik.)</em></p>
| Disintegrating By Parts | 112,478 | <p>Let $S=\frac{d}{dx}$ and $T=-\frac{d}{dx}$ on the linear subspace $\mathcal{H}=\mathcal{C}_{0}^{\infty}(0,2\pi)\subset \hat{\mathcal{H}}=L^{2}[0,2\pi]$ consisting of infinitely differentiable functions on $[0,2\pi]$ which vanish outside some compact subset of $(0,2\pi)$. Then
$$
(Sf,g) = (f,Tg),\;\;\; f,g\in\mathcal{C}_{0}^{\infty}.
$$
Both operators $S$ and $T$ are closable in $L^{2}$ and the domains consist of all $f \in L^{2}$ which are absolutely continuous on $[0,2\pi]$ with $f'\in L^{2}$ and $f(0)=f(2\pi)=0$. So $S^{\star} \ne \overline{T}$ and $T^{\star} \ne\overline{S}$ because the domains of the two adjoints are also equal and consist of absolutely continuous $f \in L^{2}$ with $f' \in L^{2}$ (no endpoint conditions.)</p>
|
2,127,494 | <p>Given two $3$D vectors $\mathbf{u}$ and $\mathbf{v}$ their cross-product $\mathbf{u} \times \mathbf{v}$ can be defined by the property that, for any vector $\mathbf{x}$ one has $\langle \mathbf{x} ; \mathbf{u} \times \mathbf{v} \rangle = {\rm det}(\mathbf{x}, \mathbf{u},\mathbf{v})$.
From this a number of properties of the cross product can be obtained quite easily. It is less obvious that, for instance $|\mathbf{u} \times \mathbf{v}|^2 = |\mathbf{u}|^2 |\mathbf{v}|^2 - \langle \mathbf{u} ; \mathbf{v} \rangle ^2$, from which the norm of the cross-product can be deduced.</p>
<p>Is it possible to obtain these properties nicely (i.e. without dealing with coordinates), but with elementary linear algebra only (i.e. without the exterior algebra stuff, only properties of determinants and matrix / vector multiplication).</p>
<p>Thanks in advance! </p>
| Widawensen | 334,463 | <p>The attempt to prove</p>
<p>$|\mathbf{u} \times \mathbf{v}|^2 - |\mathbf{u}|^2 |\mathbf{v}|^2 + \langle \mathbf{u} ; \mathbf{v} \rangle ^2=0$,</p>
<p>with the use of formula </p>
<p>$\mathbf{u} \times \mathbf{v}= \mathbf {S(u)v}$ where $\mathbf {S(u)}$ is skew-symmetric matrix. </p>
<p>For simplification let normalize $|\mathbf{u}|=1$.</p>
<p>The formula can be written: </p>
<p>$\mathbf {(S(u)v)}^T \mathbf {S(u)v}-(\mathbf{v}^T \mathbf{v})( \mathbf{u}^T \mathbf{u}) +(\mathbf{v}^T \mathbf{u})(\mathbf{u}^T \mathbf{v})$=<br>
$\mathbf {v^TS(u)^T} \mathbf {S(u)v} - \mathbf{v}^T \mathbf{v} \mathbf{u}^T \mathbf{u} +\mathbf{v}^T \mathbf{u}\mathbf{u}^T \mathbf{v}$=<br>
$\mathbf {-v^TS^2(u)} \mathbf {v} -\mathbf{v}^T \mathbf{v} \mathbf{u}^T \mathbf{u} + \mathbf{v}^T \mathbf{u}\mathbf{u}^T \mathbf{v}$=<br>
$\mathbf {-v^T( uu^T-I)} \mathbf {v} - \mathbf{v}^T \mathbf{v} \mathbf{u}^T \mathbf{u} +\mathbf{v}^T \mathbf{u}\mathbf{u}^T \mathbf{v}$=<br>
$ -\mathbf {v^T uu^T v}+ \mathbf {v}^T \mathbf {v} - \mathbf{v}^T \mathbf{v} \mathbf{u}^T \mathbf{u} +\mathbf{v}^T \mathbf{u} \mathbf{u}^T \mathbf{v}$ =<br>
$ \mathbf {v}^T \mathbf {v} - \mathbf{v}^T \mathbf{v} \mathbf{u}^T \mathbf{u}$=<br>
$\mathbf {v}^T \mathbf {v} (1-\mathbf {u}^T \mathbf {u})= 0$.</p>
<p>So in this case it is fulfilled. I hope all steps are understood.</p>
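Not part of the original answer: a quick numerical check of the identity $|\mathbf{u}\times\mathbf{v}|^2=|\mathbf{u}|^2|\mathbf{v}|^2-\langle\mathbf{u},\mathbf{v}\rangle^2$ (a Python sketch; the coordinate cross product is used here only as a numerical oracle).

```python
import random

def cross(u, v):
    # Coordinate formula for the 3D cross product.
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
for _ in range(100):
    u = [random.uniform(-1, 1) for _ in range(3)]
    v = [random.uniform(-1, 1) for _ in range(3)]
    lhs = dot(cross(u, v), cross(u, v))
    rhs = dot(u, u) * dot(v, v) - dot(u, v) ** 2
    assert abs(lhs - rhs) < 1e-12
```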
|
707,317 | <p>Let $g: R\rightarrow R$ be a twice differentiable function satisfying $g(0)=1, g'(0)=0$ and $ g''(x)-g(x)=0$, for all $x$ in R</p>
<p>Fix $x$ in $R$. Show that there exists $M>0$ such that for all natural numbers $n$ and all $\theta\in[0,1]$ $$ |g^{(n)}(\theta x)|\leq M$$</p>
<p>Also, find the coefficients of the Taylor expansion of $g$ about $0$, and prove that this expansion converges to $g(x)$ for all $x$ in R</p>
<p>p.s.
My idea is to start from proving that $g$ has derivatives of all orders, but I am not sure whether it is a correct start and how I can proceed. Any suggestion or attempt is appreciated. </p>
| IV_ | 292,527 | <p><span class="math-container">$g''(x)-g(x)=0$</span> means <span class="math-container">$g''(x)=g(x)$</span>. And because <span class="math-container">$g$</span> is twice differentiable, <span class="math-container">$g''(x)$</span> is twice differentiable and so on. <span class="math-container">$g$</span> is infinitely often differentiable therefore.</p>
<p><span class="math-container">$$g^{(n)}(0)=
\begin{cases}
1, & \text{if }n\text{ even}\\
0, & \text{if }n\text{ odd}
\end{cases}$$</span></p>
<p><span class="math-container">$$g(x)=\sum_{n=0}^{\infty}\frac{g^{(n)}(0)}{n!}x^n$$</span></p>
<p><span class="math-container">$$g(x)=\sum_{n=0}^{\infty}\frac{1}{(2n)!}x^{2n}$$</span></p>
<p>According to the ratio test, this infinite series is convergent. Its radius of convergence is <span class="math-container">$\infty$</span>.</p>
<p><span class="math-container">$$g(x)=\frac{1}{2}e^{-x}+\frac{1}{2}e^x$$</span></p>
<p><span class="math-container">$$g^{(n)}(x)=
\begin{cases}
+\frac{1}{2}e^{-x}+\frac{1}{2}e^x, & \text{if }n\text{ even}\\
-\frac{1}{2}e^{-x}+\frac{1}{2}e^x, & \text{if }n\text{ odd}
\end{cases}$$</span></p>
<p><span class="math-container">$$g^{(n)}(\theta x)=
\begin{cases}
+\frac{1}{2}e^{-\theta x}+\frac{1}{2}e^{\theta x}, & \text{if }n\text{ even}\\
-\frac{1}{2}e^{-\theta x}+\frac{1}{2}e^{\theta x}, & \text{if }n\text{ odd}
\end{cases}$$</span></p>
<p><span class="math-container">$\ +\frac{1}{2}e^{-\theta x}+\frac{1}{2}e^{\theta x}\le \ +\frac{1}{2}e^{-x}+\frac{1}{2}e^{x}$</span><br>
<span class="math-container">$\ -\frac{1}{2}e^{-\theta x}+\frac{1}{2}e^{\theta x}\le |-\frac{1}{2}e^{-x}+\frac{1}{2}e^{x}|$</span><br>
<span class="math-container">$|-\frac{1}{2}e^{-\theta x}+\frac{1}{2}e^{\theta x}|\le \ +\frac{1}{2}e^{-x}+\frac{1}{2}e^{x}$</span> </p>
<p><span class="math-container">$$\forall n\in\mathbb{N},\ \forall\theta\in[0,1]\colon\ |g^{(n)}(\theta x)|\le M=+\frac{1}{2}e^{-x}+\frac{1}{2}e^{x}$$</span> for the fixed <span class="math-container">$x$</span> (note that <span class="math-container">$M$</span> may depend on <span class="math-container">$x$</span>, which the problem allows). </p>
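As a numerical sanity check (my addition, not part of the original answer), the claims can be verified directly: the Taylor partial sums converge to $g(x)=\frac12 e^{-x}+\frac12 e^x$, and every derivative at $\theta x$ is bounded by $M=g(x)$:

```python
import math

def g(x):
    # g(x) = (e^{-x} + e^x)/2, i.e. cosh(x)
    return 0.5 * math.exp(-x) + 0.5 * math.exp(x)

def taylor_g(x, terms):
    # partial sum of sum_n x^(2n) / (2n)!
    return sum(x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

x = 2.3                      # the fixed x from the problem
M = g(x)                     # the claimed bound

# the Taylor expansion converges to g(x)
assert abs(taylor_g(x, 30) - g(x)) < 1e-9

# |g^(n)(theta x)| <= M: even derivatives are cosh, odd ones are sinh,
# and both are dominated by cosh(x) when |theta x| <= |x|
for theta in (0.0, 0.25, 0.5, 0.75, 1.0):
    even_deriv = g(theta * x)
    odd_deriv = 0.5 * math.exp(theta * x) - 0.5 * math.exp(-theta * x)
    assert abs(even_deriv) <= M + 1e-12
    assert abs(odd_deriv) <= M + 1e-12
```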
|
773,880 | <p>What approach would be ideal in finding the integral $\int4^{-x}dx$?</p>
| user1337 | 62,839 | <p>Rewrite the integral as $$\int e^{(- \ln 4) x} \mathrm{d}x. $$</p>
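As a quick check (my addition), the resulting antiderivative $-4^{-x}/\ln 4 + C$ can be verified against a central finite difference:

```python
import math

F = lambda x: -(4.0 ** -x) / math.log(4)   # candidate antiderivative of 4^{-x}
f = lambda x: 4.0 ** -x                    # the integrand

h = 1e-6
for x in (-1.0, 0.0, 0.7, 2.5):
    numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric_derivative - f(x)) < 1e-6   # F'(x) == f(x) numerically
```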
|
1,342,570 | <p>So, this was my initial proof:</p>
<hr>
<p>Assume $R$ is a ring, and $a,b\in R$</p>
<p>Let $x_1$ and $x_2$ be solutions of $ax=b$</p>
<p>Hence, $ax_1=b=ax_2 \Rightarrow ax_1-ax_2=0_R \Rightarrow a(x_1-x_2)=0_R$</p>
<p>Thus, we have $x_1-x_2=0_R \Rightarrow x_1=x_2$, and only one solution exists.</p>
<hr>
<p>Only now did I realize that I can only assume $x_1-x_2=0_R$ from $a(x_1-x_2)=0_R$ if $R$ was an integral domain. I didn't know why they provided that $R$ had an identity or why $a$ is a unit.</p>
| egreg | 62,967 | <p>Other answers have been given, but I'll throw my 2 cents anyway.</p>
<p>For proving uniqueness you just need that $a$ is <em>left invertible</em>, that is, there exists $c\in R$ such that $ca=1$.</p>
<p>Indeed, if $ax_1=b=ax_2$, you get $a(x_1-x_2)=0$. Thus $ca(x_1-x_2)=c0=0$ and therefore $x_1-x_2=0$, because $ca=1$.</p>
<p>Right invertibility of $a$ provides existence: if $ad=1$, for some $d\in R$, then $a(db)=(ad)b=1b=b$, so $db$ is a solution to $ax=b$.</p>
<p>Note that if $a$ is both left and right invertible, then, with the same notation as before, we have $c=d$: indeed
$$
c=c1=c(ad)=(ca)d=1d=d
$$
so left and right inverses are the same and, in particular, unique (because any right inverse must be the same with one left inverse and similarly for the other side). In this case $a$ is called a unit and the unique left and right inverse is denoted by $a^{-1}$.</p>
|
69,272 | <p>By the way, does anyone know how to prove in an elementary way (i.e. expanding) that $\prod_1^n (1+a_i r)$ tends to $e^r=\sum \frac{r^k}{k!}$ as you let $\max|a_i|\to 0$ with $0\leq a_i \leq 1$ and $\sum a_i = 1$? An easy solution goes by writing the product with the exponential function so that you get the exponential of $\sum \log(1+a_i r) = \sum \int_0^1 \frac{a_i r}{(1+s a_i r)} ds$.</p>
<p>You can then integrate by parts (i.e. Taylor expand) to obtain $\sum a_i r - \sum \int_0^1 (1-s)\frac{(a_i r)^2}{(1+s a_i r)^2}ds$. Now, $\sum a_i r = r$ is the main term. After you take $\max|a_i|$ to be less than $.5/|r|$, the error term is bounded in absolute value by $C \sum |a_i r|^2 \leq \max|a_i|\cdot \sum |a_i| |r|^2 \leq C |r|^2 \max |a_i|$.</p>
<p>I was hoping to find an elementary proof of this convergence by expanding the product $\prod_1^n (1+a_i r)$ and gathering terms with a common power of $r$. In particular, it would be nice to prove the convergence of this limit without the exponential function, since then the limit could be considered a definition of $e^r$. The case when all of the $a_i$ are equal is done in Rudin's "Principles of Mathematical Analysis".</p>
<p>The motivation for this problem comes from compound interest, which I described in a different thread here: <a href="https://mathoverflow.net/questions/40005/generalizing-a-problem-to-make-it-easier/69224#69224">Generalizing a problem to make it easier</a> .</p>
| Anthony Quas | 11,054 | <p>$\prod_{i=1}^n (1+a_ir)=1+\sum_{k=1}^n r^k\sum_{i_1 < \ldots < i_k}a_{i_1}\ldots a_{i_k}$.</p>
<p>Notice that $1^k=\left(\sum_{i=1}^n a_i\right)^k =k!\sum_{i_1 < \ldots < i_k}a_{i_1}\ldots a_{i_k}+\text{other terms}$, where the other terms are (positive) terms with a repeated $a_i$. It follows that the $k$th term in the original sum is at most $1/k!$ so this gives the upper bound. </p>
<p>To show the original claim it suffices to bound the terms with repeated $a_i$'s in terms of $\delta=\max a_i$. More specifically given $\epsilon>0$ and $k$ it suffices to show that for any finite sequence of positive numbers summing to 1 whose maximum is less than $\delta$, one has
$$\left(\sum a_i\right)^k -k!\sum_{i_1 < \ldots < i_k}a_{i_1}\ldots a_{i_k} < \epsilon.$$ </p>
<p>This summation consists of terms of a number of "types" e.g. (2,1,1,1,1,1) represents terms in which one $a_i$ occurs twice and 5 other $a_i$'s occur once each; $(5,3,2,1,1)$ represents terms of the form $a_{i_1}^5a_{i_2}^3a_{i_3}^2a_{i_4}a_{i_5}$ where there is no longer a requirement that the $i_j$ are increasing; instead the $i_j$ should be increasing within the terms that have the same power.</p>
<p>For a fixed $k$, the number of types is finite. So that it suffices to show that for each type, the contribution goes to 0 uniformly as $\delta\to 0$. Clearly for the type $(p_1,p_2,\ldots,p_r)$ you can bound the term $a_{i_1}^{p_1}\ldots a_{i_r}^{p_r}$ by $\delta^{\sum p_i-r}a_{i_1}\ldots a_{i_r}$. Let $\Delta=\sum p_i-r$ (this is at least 1 for all non-trivial types). The summation is then bounded above by $\delta^\Delta\sum_{a_{i_j}\text{ distinct}}a_{i_1}\ldots a_{i_r}$ which is at most $\delta^\Delta$.</p>
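A small numerical experiment (my addition, using the simplest case of equal weights $a_i = 1/n$, as in Rudin) illustrates the convergence as $\max|a_i|\to 0$:

```python
import math

def product(r, weights):
    # prod (1 + a_i r) over nonnegative weights a_i summing to 1
    p = 1.0
    for a in weights:
        p *= 1.0 + a * r
    return p

r = 1.7
err = lambda n: abs(product(r, [1.0 / n] * n) - math.exp(r))

# the error shrinks as max|a_i| = 1/n shrinks
assert err(10) > err(1000) > err(100000)
assert err(100000) < 1e-3
```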
|
3,250,061 | <blockquote>
<p>Prove that if <span class="math-container">$p\equiv 5\pmod{8}$</span>, <span class="math-container">$p>5$</span> then <span class="math-container">$\zeta_p$</span> not constructible </p>
</blockquote>
<p>How to do this? There is a theorem in my book that says that the regular <span class="math-container">$n$</span>-gon is constructible iff <span class="math-container">$n=2^k\cdot n_0$</span> where <span class="math-container">$n_0$</span> is the product of distinct Fermat primes, but I don't know how to apply it here since we are talking about an infinitude of primes.</p>
| Dzoooks | 403,583 | <p>Suppose that <span class="math-container">$p > 5$</span> is constructible and <span class="math-container">$p \equiv 5 \pmod{8}$</span>. Since <span class="math-container">$p$</span> must be a Fermat prime, we have <span class="math-container">$p=2^{2^n}+1$</span> for some <span class="math-container">$n \geq 2$</span>. But <span class="math-container">$$8=2^3 \mid 2^{2^n} \implies p \equiv 1 \pmod{8},$$</span> a contradiction. </p>
<hr>
<p>The same proof would show that if $q$ is a prime with $q \equiv 3,5,7 \pmod{8}$ and $q \neq 3, 5$, then <span class="math-container">$\zeta_q$</span> is not constructible.</p>
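A quick computational check (my addition) of the key congruence: every Fermat number $2^{2^n}+1$ with $n\ge 2$ is $\equiv 1\pmod 8$, and only $F_0=3$ and $F_1=5$ escape this:

```python
# 8 | 2^(2^n) as soon as 2^n >= 3, i.e. n >= 2, so F_n = 2^(2^n)+1 ≡ 1 (mod 8)
for n in range(2, 12):
    assert (2 ** (2 ** n) + 1) % 8 == 1

assert (2 ** (2 ** 0) + 1) % 8 == 3   # F_0 = 3
assert (2 ** (2 ** 1) + 1) % 8 == 5   # F_1 = 5
```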
|
1,409,545 | <p>I'm not sure if there is an actual solution to this problem or not, but thought I would give it a shot here to see if anyone has any ideas. So here goes:</p>
<p>I basically have three vertices of a rigid triangle with known 3D coordinates. The vertices are projected onto a 2D plane (by projection, I mean that each vertex would basically have a fixed line drawn from it to the 2D plane, and that "line" would also stay rigid to the triangle so that the lines would move along with the triangle if it is transformed), in which I also know the 2D coordinates. A transformation matrix is applied to the original three points (can be a combination of rotation and translation) and I now know the new 2D projection coordinates.</p>
<p>Is it possible to obtain either the unknown transformation matrix or the new coordinates? Any ideas are much appreciated. Thanks!</p>
| k170 | 161,538 | <p>First note that
$$\frac{d}{dx}a^{f(x)}=a^{f(x)}(\ln a)\frac{d}{dx}f(x)$$
So now we have
$$\frac{d}{dx}\left[7+5^{x^2+2x-1}\right]$$
$$=\frac{d}{dx}[7]+\frac{d}{dx}\left[5^{x^2+2x-1}\right]$$
$$=0+5^{x^2+2x-1}(\ln 5)\frac{d}{dx}\left[x^2+2x-1\right]$$
$$=5^{x^2+2x-1}(\ln 5)\left(2x+2\right)$$
$$=2\cdot 5^{x^2+2x-1}(x+1)\ln 5$$</p>
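To double-check the result (my addition), compare the closed form against a central finite difference:

```python
import math

f = lambda x: 7 + 5 ** (x ** 2 + 2 * x - 1)
fprime = lambda x: 2 * 5 ** (x ** 2 + 2 * x - 1) * (x + 1) * math.log(5)

h = 1e-6
for x in (-0.5, 0.0, 0.8):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    # relative tolerance, since f' varies over several orders of magnitude
    assert abs(numeric - fprime(x)) < 1e-4 * (1 + abs(fprime(x)))
```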
|
2,091,766 | <p>Suppose $h:R \longrightarrow R$ is differentiable everywhere and $h'$ is continuous on $[0,1]$, $h(0) = -2$ and $h(1) = 1$. Show that</p>
<p>$|h(x)|\leq \max(|h'(t)| , t\in[0,1])$ for all $x\in[0,1]$.</p>
<p>I attempted the problem the following way: since $h(x)$ is differentiable everywhere, it is also continuous everywhere. $h(0) = -2$ and $h(1) = 1$ imply that $h(x)$ should cross the $x$-axis at some point (at least once). Denote that point by $c$ to get $h(c) = 0$ for some $c\in[0,1]$.</p>
<p>$h'(x)$ continuous means that $\lim_{x\to a} h'(x) = h'(a)$, but then I am stuck and I don't see how what I have done so far can help me obtain the desired inequality.</p>
<p>Thank you in advance!</p>
| 5xum | 112,884 | <p>You ask "How can I disprove it", but you didn't really define a strict mathematical statement. Your statement</p>
<blockquote>
<p>As $d\to\infty$, $S=[a,b]$</p>
</blockquote>
<p>lacks definitions. You seem to imply that for a sequence of sets $A_1,A_2,\dots $, there exists a limit $$\lim_{n\to\infty} A_n$$
but limits are really only defined for <em>real numbers</em>, so unless you define what you mean, there is no point in asking your question.</p>
<hr>
<p>Now, there <strong>are</strong> cases where a "sort of" limit makes sense. For example, if $A_1\subseteq A_2 \subseteq A_3\cdots$, then at each step, $A_i$ grows to $A_{i+1}$ and you could say that $$\bigcup_{n=1}^\infty A_n$$ is what it is growing towards "at infinity".</p>
<p>In your case, the condition is sort of met since for each $n$, there exist infinitely many such $m$ that $S_n\subseteq S_m$, and your question could then be </p>
<blockquote>
<p>Is $$S_\infty:=\bigcup_{d=1}^\infty S_d$$ equal to $(a,b)$?</p>
</blockquote>
<p>In which case the answer is <strong>no</strong> because, if $a$ and $b$ are rational, the set $S_\infty$ contains only rational numbers.</p>
<p>However, the set $S_\infty$ is <strong>dense</strong> in $(a,b)$, meaning that the <strong>closure</strong> of $S_\infty$ is, indeed, $(a,b)$.</p>
<p>In other words, this means you can get arbitrarily close to any number in $(a,b)$, even though you maybe can't reach the number itself. More strictly, for every number $x\in(a,b)$, and any $\epsilon > 0$, you can find some $s\in S_\infty$ such that $|s-x|<\epsilon$.</p>
|
2,018,239 | <p>I have to show, using induction, that $2^{4^n}+5$ is divisible by $21$. It is supposed to be a standard exercise, but no matter what I try, I get to a point where I have to use two more inductions.</p>
<p>For example, here is one of the things I tried:</p>
<p>Assuming that $21 |2^{4^k}+5$, we have to show that $21 |2^{4^{k+1}}+5$.</p>
<p>Now, $2^{4^{k+1}}+5=2^{4\cdot 4^k}+5=2^{4^k+3\cdot 4^k}+5=2^{4^k}2^{3\cdot 4^k}+5=2^{4^k}2^{3\cdot 4^k}+5+5\cdot 2^{3\cdot 4^k}-5\cdot 2^{3\cdot 4^k}=2^{3\cdot 4^k}(2^{4^k}+5)+5(1-2^{3\cdot 4^k})$.</p>
<p>At this point, the only way out (as I see it) is to prove (using another induction) that $21|5(1-2^{3\cdot 4^k})$. But when I do that, I get another term of this sort, and another induction.</p>
<p>I also tried proving separately that $3 |2^{4^k}+5$ and $7 |2^{4^k}+5$. The former is OK, but the latter is again a double induction.</p>
<p>Is there an easier way of doing this?</p>
<p>Thank you!</p>
<p><strong>EDIT</strong></p>
<p>By an "easier way" I still mean a way using induction, but only once (or at most twice). Maybe add and subtract something different than what I did?...</p>
<p>Just to put it all in a context: a daughter of a friend got this exercise in her very first HW assignment, after a lecture about induction which included only the most basic examples. I tried helping her, but I can't think of a solution suitable for this stage of the course. That's why I thought that there should be a trick I am missing... </p>
| Community | -1 | <p>Note that $2^3 \equiv 1\pmod 7$ and hence $2^{3n} \equiv 1 \pmod 7$. Now, $4^k -1 \equiv 0\pmod 3$, and it follows that $2^{4^k-1} \equiv 1\pmod 7$ and $2^{4^k} \equiv 2\pmod 7$. Thus
$$2^{4^k}+ 5 \equiv 0 \pmod 7. $$ Combined with $2^{4^k}+5 \equiv 1+5 \equiv 0 \pmod 3$ (since $4^k$ is even, $2^{4^k}\equiv(-1)^{4^k}\equiv 1\pmod 3$), this gives divisibility by $21$.</p>
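Not induction, but a quick machine check (my addition) confirms both the claim and the congruences used in this argument:

```python
# 21 | 2^(4^n) + 5 for small n; pow with a modulus keeps the numbers tiny
for n in range(1, 8):
    assert (pow(2, 4 ** n, 21) + 5) % 21 == 0

# the ingredients of the argument
assert pow(2, 3, 7) == 1              # 2^3 ≡ 1 (mod 7)
for k in range(1, 10):
    assert (4 ** k - 1) % 3 == 0      # 3 | 4^k - 1
    assert pow(2, 4 ** k, 7) == 2     # hence 2^(4^k) ≡ 2 (mod 7)
```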
|
1,933,744 | <p>I simulated the following situation on my PC. Two persons A and B are initially at opposite ends of a sphere of radius r. Both being drunk, each can take a step of exactly 1 unit (you can define the unit; I kept it at 1m) either along a latitude at their current location, or along a longitude. A and B are said to meet if the arc-length distance between A and B becomes less than or equal to 1km.</p>
<p>Note: the direction of possible motion of each man is fixed w.r.t. the axis of the globe: either latitude or longitude. Assume such a coordinate system exists beforehand (just like the 2D analog on a plane, moving only in x or y and not absolutely randomly).</p>
<p>The simulation returned results which I could not comprehend fully. The average time to meet was about 270 years for a sphere of radius 100km! Can someone shed some light on how I can proceed with proving this result? I want the expected time of meeting given the radius and step length, given that each move requires 1 sec. I tried considering a spherical cap of arc length √n after n steps, in analogy with the 2D model. But then I can't calculate the expected time. If possible, please help or suggest some related articles.</p>
| Bolton Bailey | 165,144 | <p>As stated in Daniel's answer, it is easier to think about the problem if we say only one of the two people is moving. Suppose person $B$ stays at the North pole and person $A$ starts at the South pole, and each time step, $A$ moves 1m either longitudinally or latitudinally. </p>
<p>If $A$ moves latitudinally, then $A$'s distance from the North pole remains the same. Therefore, we ignore these latitudinal movements and say instead that $A$ has a 50% chance of not moving and a 50% chance of taking a 1 meter step along a great circle containing the North and South pole. We parameterize this circle by $x = 0$ at the South pole, $x = 1$ one meter in one direction from the South pole, $x = -1$ one meter in the opposite direction, and so on.</p>
<p>If the radius of the sphere is 100000m, and $A$ needs to be 1000m from the North pole to see $B$, then $A$ will see $B$ if the distance of $A$ from the South Pole is
$$ \pi 100000m - 1000m \approx 313159m$$</p>
<p>If $f(x)$ is the expected number of remaining time steps when $A$ is at position $x$ then $f$ satisfies
$$ f(x) = \frac{f(x-1) + 2f(x) + f(x+1)}{4} + 1 $$
And
$$f(313159) = f(-313159) = 0$$
We see that this is satisfied by
$$ f(x) = 2(313159)^2 - 2x^2 $$</p>
<p>So
$$ f(0) = 2(313159)^2 \approx 1.96 \times 10^{11} $$
So if the time steps are half seconds (since two movements happen each second - one for each person), the expected time should be 3108 years.</p>
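The closed form can be sanity-checked directly (my addition); the constants below follow the numbers used in this answer:

```python
a = 313159                             # ≈ pi * 100000 - 1000, as above
f = lambda x: 2 * a * a - 2 * x * x    # proposed expected remaining steps

# the recurrence f(x) = (f(x-1) + 2 f(x) + f(x+1))/4 + 1 holds exactly
for x in (-1000, 0, 7, 123456):
    assert 4 * f(x) == f(x - 1) + 2 * f(x) + f(x + 1) + 4

assert f(a) == 0 and f(-a) == 0        # boundary conditions

expected_steps = f(0)                  # about 1.96e11 half-second steps
years = expected_steps / 2 / (3600 * 24 * 365)
assert 3000 < years < 3300             # roughly 3.1 thousand years
```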
|
248,733 | <p>Assume the following matrix
$$
C_p^{(a,b)}:=\left(
\begin{array}{cccccc}
a &a &0 &\cdots &\cdots &0 \\
0 &0 &a &\ddots &\ddots &\vdots \\
\vdots &\ddots &\ddots &\ddots &\ddots &\vdots \\
\vdots &\ddots &\ddots &\ddots &\ddots &0 \\
0 &\cdots &\cdots &0 &0 &a \\
b &b &\cdots &\cdots &b &b \\
\end{array}
\right)_{p \times p}\, .
$$
Where $a$ and $b$ are arbitrary integers. With numerical simulation, I found that the $n$th power of the matrix $C_p^{(a,b)}$ has
the following form
$$
{(C_p^{(a,b)})}^n:=\left(
\begin{array}{cccccc}
{g_1^{a,b}}(n) &{g_1^{a,b}}(n) &\cdots &\cdots &{g_1^{a,b}}(n) \\
\\
{g_2^{a,b}}(n) &{g_2^{a,b}}(n) &\cdots &\cdots &{g_2^{a,b}}(n) \\
\\
\vdots &\cdots &\cdots &\cdots &\vdots \\
\vdots &\cdots &\cdots &\cdots &\vdots \\
\\
{g_p^{a,b}}(n) &{g_p^{a,b}}(n) &\cdots &\cdots &{g_p^{a,b}}(n) \\
\end{array}
\right)_{p \times p}\, .
$$
Where ${g_i^{a,b}}(n)$, $1\leq i \leq p$, are expressions based on the parameters $a$, $b$ and $n$. For example, two consecutive powers
of the matrix $C_7^{(2,3)}$ are as follows
$$
{(C_7^{(2,3)})}^9:=
\left( \begin {array}{ccccccc} 8000&8000&8000&8000&8000&8000&8000
\\ 12000&12000&12000&12000&12000&12000&12000
\\ 30000&30000&30000&30000&30000&30000&30000
\\ 75000&75000&75000&75000&75000&75000&75000
\\ 187500&187500&187500&187500&187500&187500&187500
\\ 468750&468750&468750&468750&468750&468750&468750
\\ 1171875&1171875&1171875&1171875&1171875&1171875&
1171875\end {array} \right)\, .
$$</p>
<p>$$
{(C_7^{(2,3)})}^{10}:=
\left( \begin {array}{ccccccc} 40000&40000&40000&40000&40000&40000&
40000\\ 60000&60000&60000&60000&60000&60000&60000
\\ 150000&150000&150000&150000&150000&150000&150000
\\375000&375000&375000&375000&375000&375000&375000
\\ 937500&937500&937500&937500&937500&937500&937500
\\ 2343750&2343750&2343750&2343750&2343750&2343750&
2343750\\ 5859375&5859375&5859375&5859375&5859375&
5859375&5859375\end {array} \right)\, .
$$
Is there a way to find an explicit formula for ${g_i^{a,b}}(n)$, $1\leq i \leq p$, in general? The matrix $C_p^{(a,b)}$ is quite interesting.
If $a=-b$ then
$$
\forall n\geq p \qquad {(C_p^{(a,b)})}^n=O_p\, .
$$
Where $O_p$ is a zero matrix of order $p$. In some cases, ${g_i^{a,b}}(n)$, $1\leq i \leq p$, are fixed. For example,
if $[a=-(d\pm1) \, \& \, b=d]$ or $[b=-(d\pm1) \, \& \, a=d]$ where $d$ is an integer number, then we have
$$
\forall n\geq p-1 \qquad {(C_p^{(a,b)})}^n=\pm F_p\, .
$$
Where $F_p$ is a fixed matrix of order $p$. For example, by using $C_5^{(-3,2)}$ and $C_4^{(3,-4)}$, we can see that
$$
C_5^{(-3,2)}=
\left( \begin {array}{ccccc} -3&-3&0&0&0\\0&0&-3&0
&0\\ 0&0&0&-3&0\\ 0&0&0&0&-3
\\ 2&2&2&2&2\end {array} \right) \Rightarrow
\forall n\geq 4 \quad {(C_5^{(-3,2)})}^n=
\left( \begin {array}{ccccc} 81&81&81&81&81\\ -54&-
54&-54&-54&-54\\ -18&-18&-18&-18&-18
\\ -6&-6&-6&-6&-6\\ -2&-2&-2&-2&-2
\end {array} \right)\, .
$$
$$
C_4^{(3,-4)}=
\left( \begin {array}{cccc} 3&3&0&0\\0&0&3&0
\\ 0&0&0&3\\ -4&-4&-4&-4
\end {array} \right)
\Rightarrow
\forall n\geq 3 \quad {(C_4^{(3,-4)})}^n=\pm
\left( \begin {array}{cccc} 27&27&27&27\\ -36&-36&-
36&-36\\ 12&12&12&12\\ -4&-4&-4&-4
\end {array} \right)\, .
$$
In some special cases, I found an expression for ${g_i^{a,b}}(n)$, $1\leq i \leq p$. Assume $C_p^{(a,b)}$, for $a=b=1$, as follows
$$
C_p^{(1,1)}:=\left(
\begin{array}{cccccc}
1 &1 &0 &\cdots &\cdots &0 \\
0 &0 &1&\ddots &\ddots &\vdots \\
\vdots &\ddots &\ddots &\ddots &\ddots &\vdots \\
\vdots &\ddots &\ddots &\ddots &\ddots &0 \\
0 &\cdots &\cdots &0 &0 &1 \\
1 &1 &\cdots &\cdots &1 &1 \\
\end{array}
\right)_{p \times p}\, .
$$
By induction on $n$, we can prove that for $n\geq p-1$, we have </p>
<p>$$
{(C_p^{(1,1)})}^n:=\left(
\begin{array}{cccccc}
2^{n-(p-1)} &2^{n-(p-1)} &\cdots &\cdots &2^{n-(p-1)} \\
\\
2^{n-(p-1)} &2^{n-(p-1)} &\cdots & \cdots & 2^{n-(p-1)} \\
\\
2^{n-(p-2)} &2^{n-(p-2)} &\cdots & \cdots & 2^{n-(p-2)} \\
\\
2^{n-(p-3)} &2^{n-(p-3)} &\cdots & \cdots & 2^{n-(p-3)} \\
\\
\vdots &\cdots &\cdots &\cdots &\vdots \\
\vdots &\cdots &\cdots &\cdots &\vdots \\
\\
2^{n-1} &2^{n-1} &\cdots &\cdots & 2^{n-1} \\
\end{array}
\right)_{p \times p}\, .
$$
Is there a method to find a general expression for ${g_i^{a,b}}(n)$, $1\leq i \leq p$? I would greatly appreciate any suggestions.</p>
| Robert Israel | 13,650 | <p>The characteristic polynomial of $C_p^{(a,b)}$ is $\lambda^p - (a+b) \lambda^{p-1}$, so $(C_p^{(a,b)})^p = (a+b)(C_p^{(a,b)})^{p-1}$ by Cayley–Hamilton. Therefore, for $m \ge p-1$ we have $$(C_p^{(a,b)})^m = (a+b)^{m-p+1} (C_p^{(a,b)})^{p-1}$$
It appears that $B = (C_p^{(a,b)})^{p-1}$ has entries
$$ \eqalign{b_{1j} &= a^{p-1}\cr
b_{ij} &= a^{p-i} b (a+b)^{i-2}\ \text{for}\ i \ge 2\cr}$$</p>
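Both claims are easy to confirm numerically (my addition); the constructor below encodes the matrix $C_p^{(a,b)}$ from the question, and the power identity is checked in the form $(C)^m = (a+b)^{m-(p-1)}(C)^{p-1}$, which matches the example powers $(C_7^{(2,3)})^9$ and $(C_7^{(2,3)})^{10}$ above:

```python
import numpy as np

def C(p, a, b):
    # the matrix C_p^{(a,b)} from the question (integer dtype for exactness)
    M = np.zeros((p, p), dtype=np.int64)
    M[0, 0] = M[0, 1] = a
    for i in range(1, p - 1):
        M[i, i + 1] = a            # single a on the superdiagonal
    M[p - 1, :] = b                # last row all b
    return M

p, a, b = 7, 2, 3
M = C(p, a, b)
B = np.linalg.matrix_power(M, p - 1)

# Cayley-Hamilton gives M^p = (a+b) M^(p-1), hence for m >= p-1:
for m in (p - 1, p, p + 3):
    lhs = np.linalg.matrix_power(M, m)
    assert (lhs == (a + b) ** (m - (p - 1)) * B).all()

# the conjectured entries of B = M^(p-1): in 1-indexed notation,
# b_1j = a^(p-1) and b_ij = a^(p-i) b (a+b)^(i-2) for i >= 2
for j in range(p):
    assert B[0, j] == a ** (p - 1)
    for i in range(1, p):                     # 0-indexed row i is row i+1
        assert B[i, j] == a ** (p - 1 - i) * b * (a + b) ** (i - 1)
```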
<p>EDIT:
We can exhibit the Jordan form of $C_p^{(a,b)}$ explicitly: $C_p^{(a,b)} = S J S^{-1}$ where</p>
<p>$$ J = \pmatrix{0 & 1 & 0 & \ldots & 0 & 0\cr
0 & 0 & 1 & \ldots & 0 & 0\cr
0 & 0 & 0 & \ldots & 0 & 0\cr
\ldots &\ldots &\ldots &\ldots & \ldots & \ldots\cr
0 & 0 & 0 & \ldots &1 & 0\cr
0 & 0 & 0 & \ldots & 0 & 0\cr
0 & 0 & 0 & \ldots & 0 & a+b\cr} $$
$$ S = \pmatrix{\frac{a^{p-2} b}{a+b} & \frac{a^{p-3}(a+b)^2 - a^{p-1}}{(a+b)^2} & \frac{a^{p-4}(a+b)^3 - a^{p-1}}{(a+b)^3} & \ldots & \frac{(a+b)^{p-1} - a^{p-1}}{(a+b)^{p-1}} & \frac{a^{p-1}}{(a+b)^{p-1}}\cr
-\frac{a^{p-2} b}{a+b} & -\frac{a^{p-2} b}{(a+b)^2} & -\frac{a^{p-2} b}{(a+b)^3} & \ldots & -\frac{a^{p-2} b}{(a+b)^{p-1}} & \frac{a^{p-2} b}{(a+b)^{p-1}}\cr
0 & -\frac{a^{p-3} b}{a+b} & -\frac{a^{p-3} b}{(a+b)^2} & \ldots & -\frac{a^{p-3} b}{(a+b)^{p-2}} & \frac{a^{p-3} b}{(a+b)^{p-2}}\cr
\ldots &\ldots &\ldots &\ldots & \ldots & \ldots\cr
0 & 0 & 0 & \ldots & -\frac{b}{a+b} & \frac{b}{a+b}}$$
$$ S^{-1} = \pmatrix{0 & -\frac{a+b}{a^{p-2} b} & \frac{1}{a^{p-3} b} & 0 & \ldots & 0 & 0\cr
0 & 0 & -\frac{a+b}{a^{p-3} b} & \frac{1}{a^{p-4} b} & \ldots & 0 & 0\cr
0 & 0 & 0 & -\frac{a+b}{a^{p-4} b} & \ldots & 0 & 0\cr
\ldots &\ldots &\ldots &\ldots & \ldots & \ldots & \ldots \cr
0 & 0 & 0 & 0 & \ldots & -\frac{a+b}{ab} & \frac{1}{b}\cr
1 & 1 & 1 & 1 & \ldots & 1 & -\frac{a}{b}\cr
1 & 1 & 1 & 1 & \ldots & 1 & 1\cr}$$</p>
|
723,633 | <p>My book asserts that for fixed $w$ where $w\neq 0$ that $P^2=P$ for $P(v)=\frac{\langle v,w\rangle }{||w||^2}w$</p>
<p>My book has a general corollary that $v\to P(v)$ is a bounded linear transformation, and the fact that $P^2=P$ implies it is a projection. I'm not sure how they made that assertion. Any ideas?</p>
| Community | -1 | <p>We have</p>
<p>$$\require{cancel}P^2(v)=P(P(v))=P\left(\frac{\langle v,w\rangle }{||w||^2}w\right)=\frac{\langle v,w\rangle }{||w||^2}P\left(w\right)=\frac{\langle v,w\rangle }{||w||^2}\cancelto{=1}{\frac{\langle w,w\rangle }{||w||^2}}w=P(v)$$</p>
<p>Moreover, we have by the Cauchy-Schwarz inequality
$$||P(v)||=\left|\frac{\langle v,w\rangle }{||w||^2}\right|\cdot||w||\le \frac{||v||||w||}{||w||^2}\cdot||w||=||v||$$
hence $P$ is bounded and
$$||P||\le1$$
and if $v\in \operatorname{Im}(P)$ then $P(v)=v$ and then $||P(v)||=||v||$ so
$$||P||\ge 1$$
and we conclude that
$$||P||=1$$</p>
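A numeric illustration (my addition) of both facts, $P^2=P$ and $||P||=1$, using the matrix $ww^T/||w||^2$ of the map $v\mapsto \frac{\langle v,w\rangle}{||w||^2}w$:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(5)
P = np.outer(w, w) / (w @ w)      # matrix of v -> (<v,w>/||w||^2) w

assert np.allclose(P @ P, P)      # P^2 = P: a projection

v = rng.standard_normal(5)
assert np.linalg.norm(P @ v) <= np.linalg.norm(v) + 1e-12   # ||P|| <= 1
assert np.allclose(P @ w, w)      # P fixes w, so the bound is attained: ||P|| = 1
```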
|
3,290,095 | <p>First, something that I already know:
<span class="math-container">\begin{eqnarray}
∞/ ∞ = undetermined ( ≠1 ) \\
∞- ∞ = undetermined (≠0)\\
\end{eqnarray}</span></p>
<p>So basically, one reason for this is that the <span class="math-container">$∞$</span> I assume is not the same as the <span class="math-container">$∞$</span> someone else will assume, since <span class="math-container">$∞$</span> is a very large quantity with no definite value. But what if I assign <span class="math-container">$∞$</span> to a certain variable? That way the infinity is always the same.</p>
<p>For eg:</p>
<p>What if I assign <span class="math-container">$ a=∞$</span>;</p>
<p>Now the infinity is always the same if I use <span class="math-container">$a$</span> instead of directly using <span class="math-container">$∞$</span>. So my question is: are the same laws mentioned above applicable here, or can I solve it like any other equation;
<span class="math-container">\begin{eqnarray}
a/a = 1 \\
a-a = 0\\
\end{eqnarray}</span>
Or are these still undetermined? .</p>
| Mohammad Riazi-Kermani | 514,496 | <p>Infinity is not a real number but you can do some algebra with infinity. </p>
<p>For example we define
<span class="math-container">$$ \lambda +\infty =\infty$$</span>for any real <span class="math-container">$\lambda$</span></p>
<p><span class="math-container">$$ \infty +\infty=\infty$$</span>
<span class="math-container">$$\lambda \times \infty =\infty $$</span> for positive <span class="math-container">$\lambda$</span></p>
<p><span class="math-container">$$\infty \times \infty =\infty$$</span></p>
<p>But there are some undefined expressions such as <span class="math-container">$$\infty -\infty$$</span> or <span class="math-container">$$\frac {\infty}{\infty}$$</span> which do not follow the usual algebraic rules. </p>
<p>There is no way around it and we have to deal with them case by case. </p>
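As an aside (my addition, not part of the original answer), IEEE-754 floating point implements exactly these conventions, with the undefined expressions producing NaN:

```python
import math

inf = math.inf
assert 5 + inf == inf            # lambda + inf = inf
assert inf + inf == inf          # inf + inf = inf
assert 3 * inf == inf            # lambda * inf = inf for positive lambda
assert inf * inf == inf

# the indeterminate forms come out as NaN (not-a-number)
assert math.isnan(inf - inf)
assert math.isnan(inf / inf)
assert (inf - inf) != 0 and (inf / inf) != 1   # in particular, not 0 and not 1
```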
|
2,276,907 | <p>If $\cos{x}=\frac{3}{5}$ and angle $x$ terminates in the fourth quadrant, find the exact value of each of the following:</p>
<p>A. $\sin{2x}$
B. $\cos{2x}$
C. $\tan {\frac{x}{2}}$</p>
<p>Okay, so I am going through my old exam reviews for the final exam I have this evening, and choosing problems I have trouble with. Problems like these are a struggle. Could someone give me some sort of step by step? I don't need to know all of A, B, and C, but maybe one of them would help. Also, if $\cos{x}=\frac{3}{5}$ and angle $x$ terminates in the fourth quadrant, wouldn't that fraction be negative?</p>
<p>EDIT: Thank you for all the feedback! I understand now and finally realized what I have been messing up on was so small! Will make a mental note so I don't mess up on tonights final. :)</p>
| AlgorithmsX | 355,874 | <p>$$\begin{align}\sin^2x&=1-\cos^2x&\text{Pythagorean Identity}\\
\sin2x&=2\sin x\cos x&\text{Double Angle}\\
\cos2x&=2\cos^2x-1&\text{Double Angle}\\
\tan x&=\frac{\sin x}{\cos x}&\text{Def. of $\tan x$}
\end{align}$$
You should be able to take it from there.</p>
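For checking your work (my addition), pick a concrete fourth-quadrant representative with $\cos x = 3/5$ (so $\sin x = -4/5$) and evaluate numerically:

```python
import math

x = -math.acos(3 / 5)             # a fourth-quadrant angle with cos x = 3/5
assert math.cos(x) > 0 and math.sin(x) < 0      # quadrant IV signs

assert math.isclose(math.sin(2 * x), -24 / 25)  # 2 sin x cos x
assert math.isclose(math.cos(2 * x), -7 / 25)   # 2 cos^2 x - 1
assert math.isclose(math.tan(x / 2), -1 / 2)    # sin x / (1 + cos x)
```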
|
2,276,907 | <p>If $\cos{x}=\frac{3}{5}$ and angle $x$ terminates in the fourth quadrant, find the exact value of each of the following:</p>
<p>A. $\sin{2x}$
B. $\cos{2x}$
C. $\tan {\frac{x}{2}}$</p>
<p>Okay, so I am going through my old exam reviews for the final exam I have this evening, and choosing problems I have trouble with. Problems like these are a struggle. Could someone give me some sort of step by step? I don't need to know all of A, B, and C, but maybe one of them would help. Also, if $\cos{x}=\frac{3}{5}$ and angle $x$ terminates in the fourth quadrant, wouldn't that fraction be negative?</p>
<p>EDIT: Thank you for all the feedback! I understand now and finally realized what I have been messing up on was so small! Will make a mental note so I don't mess up on tonights final. :)</p>
| Jam | 161,490 | <p>Think back to the definition of $\cos(x)$ and try to draw a right angled triangle. Since $x$ is in the fourth quadrant, also know that it's in the bottom right of the plane. We also know that $\cos(x)=\frac{\text{adjacent}}{\text{hypotenuse}}$ so we have some information about our triangle. From the question, we can see that the adjacent length must be $3$ and the hypotenuse is $5$. So by Pythagoras' theorem (or by knowing about the $3,4,5$ triangle) the opposite length should be $4$. The triangle should look like the figure below. So then by using the triangle to find $\sin(x)$ and $\tan(x)$ and using the identities that @SiongThyeGoh posted, you should be able to complete the question. You might want to practice deriving those identities, too, if you think it'd help you remember them. You can do so geometrically.</p>
<p><a href="https://i.stack.imgur.com/gUeKV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gUeKV.png" alt="enter image description here"></a></p>
|
1,958,491 | <p>Let $t^k$ act as the $k$-th derivative operator on the set of polynomials. So</p>
<p>$$t^k(x^n)=t^k x^n=(n)_kx^{n-k}$$</p>
<p>where $(n)_k=n(n-1)(n-2)...(n-k+1)$ is the falling factorial. Then with a formal power series, $f(t)=\sum_{k\ge 0}a_k\frac{t^k}{k!}$, the linear operator $f(t)$ acts as such that</p>
<p>$$f(t)(x^n)=f(t)x^n=\sum_{k=0}^n\binom{n}{k}a_k x^{n-k}$$</p>
<p>Therefore, depending on the coefficients of the power series, we can get some interesting binomial identites. For example, if $f(t)=e^{yt}$, since the coefficients $a_n=y^n$, we get</p>
<p>$$e^{yt}x^n=\sum_{k=0}^n\binom{n}{k}y^k x^{n-k}=(x+y)^n$$</p>
<p>by linearity, </p>
<p>$$(e^{yt}-1)x^n=(x+y)^n-x^n=\sum_{k=1}^{n}\binom{n}{k}y^k x^{n-k}$$</p>
<p>and perhaps not as obvious</p>
<p>$$\left(\frac{e^{yt}-1}{t}\right)x^n=\int_{x}^{x+y}u^ndu$$</p>
<p>Now suppose that $f(t)=e^{yt}-1-yt$. Then</p>
<p>$$(e^{yt}-1-yt)x^n=(x+y)^n-x^n-ynx^{n-1}=\sum_{k=2}^{n}\binom{n}{k}y^k x^{n-k}$$</p>
<p>Obviously there is a nice formed forward difference equation in the previous case that is not happening here. But there is a relationship with subtracted terms of the binomial expansion. What i would really like help understanding is whether or not a possible analogous integral representation exists for the following operator:</p>
<p>$$\left(\frac{e^{yt}-1-yt}{t^2}\right)x^n=\left(\sum_{k=0}^\infty\frac{y^{k+2}}{(k+2)(k+1)}\frac{t^k}{k!}\right)x^n=\sum_{k=0}^n\binom{n}{k}\frac{y^{k+2}}{(k+1)(k+2)}x^{n-k}$$ </p>
<p>$$=\sum_{k=0}^n\binom{n+2}{k+2}\frac{y^{k+2}}{(n+1)(n+2)}x^{n-k}=\frac{1}{(n+1)(n+2)}\sum_{k=2}^{n+2}\binom{n+2}{k}y^kx^{n+2-k}$$ </p>
<p>It is not as simple. Clearly $\frac{d^2}{dx^2}\frac{x^{n+2}}{((n+2)(n+1)}$. If I integrated below I think the math is correct</p>
<p>$$\int_x^{x+y}{\frac{u^{n+1}}{n+1}}du=\frac{1}{(n+1)(n+2)}\sum_{k=1}^{n+2}\binom{n+2}{k}y^kx^{n+2-k}$$ </p>
<p>This is really close, but the lower bound on the summation is $1$, not $2$. Does anyone have any insight into how I can fix this, if possible?</p>
| epi163sqrt | 132,007 | <p><em>Note:</em> OP's calculations are quite OK, and they show the operators are closely related but different. I don't think there is any necessity to <em>fix</em> anything.</p>
<p>I skimmed through the classic <em><a href="http://rads.stackoverflow.com/amzn/click/0486441393" rel="nofollow">The Umbral Calculus</a></em> by Steven Roman, but there was no indication that something more is going on regarding OPs question. Another source I've checked without success was <em><a href="http://rads.stackoverflow.com/amzn/click/0828400334" rel="nofollow">The Calculus of Finite Differences</a></em> by C. Jordan.</p>
<p>It might be helpful to list a few higher powers of the operators under consideration.</p>
<p>In the following I use OPs notation which is precisely the same used by Steven Roman.</p>
<blockquote>
<p><strong>Translation operator: $e^{yt}$</strong></p>
<p>Since this operator satisfies:
\begin{align*}
e^{yt}x^n&=\sum_{k=0}^\infty \frac{y^k}{k!}t^kx^n
=\sum_{k=0}^n \frac{y^k}{k!}(n)_kx^{n-k}=\sum_{k=0}^n\binom{n}{k}y^kx^{n-k}\\
&=(x+y)^n
\end{align*}
we obtain
\begin{align*}
\left(e^{yt}\right)^2 x^n=e^{yt}(x+y)^n=(x+2y)^n
\end{align*}
and in general for $j\geq 1$
\begin{align*}
e^{jyt}x^n=(x+jy)^n
\end{align*}</p>
</blockquote>
<p>Since $x^n, n\geq 0$ form a basis of the vector space of all polynomials $p$ in a single variable $x$ and the translation operator is linear, we obtain
\begin{align*}
e^{jyt}p(x)=p(x+jy)\tag{1}
\end{align*}</p>
<p>hence the name <em>translation operator</em>.</p>
<blockquote>
<p><strong>Forward difference operator: $e^{yt}-1$</strong></p>
<p>Here we obtain for polynomials $p$ using (1)</p>
<p>\begin{align*}
\left(e^{yt}-1\right)p(x) = p(x+y)-p(x)
\end{align*}</p>
</blockquote>
<p>The next one is</p>
<blockquote>
<p><strong>Operator: $\frac{\exp(yt)-1}{t}$</strong></p>
<p>We obtain
\begin{align*}
\left(\frac{e^{yt}-1}{t}\right)x^n&=\sum_{k=1}^\infty \frac{y^k}{k!}t^{k-1}x^n\\
&=\sum_{k=1}^{n+1}\frac{y^k}{k!}(n)_{k-1}x^{n-(k-1)}\\
&=\frac{1}{n+1}\sum_{k=1}^{n+1}\binom{n+1}{k}y^kx^{n+1-k}\\
&=\frac{1}{n+1}\left((x+y)^{n+1}-x^{n+1}\right)\tag{2}\\
&=\int_x^{x+y}u^n\,du
\end{align*}</p>
<p>Similarly we obtain from (2) by linearity
\begin{align*}
\left(\frac{e^{yt}-1}{t}\right)^2x^n
&=\left(\frac{e^{yt}-1}{t}\right)\frac{1}{n+1}\left((x+y)^{n+1}-x^{n+1}\right)\\
&=\frac{1}{n+1}\left[\frac{1}{n+2}\left((x+2y)^{n+2}-(x+y)^{n+2}\right)\right.\\
&\qquad\qquad\quad\left.-\frac{1}{n+2}\left((x+y)^{n+2}-x^{n+2}\right)\right]\\
&=\frac{1}{(n+1)(n+2)}\left((x+2y)^{n+2}-2(x+y)^{n+2}+x^{n+2}\right)\\
&=\frac{1}{n+1}\left(\int_{x+y}^{x+2y}u^{n+1}\, du-\int_x^{x+y}u^{n+1}\,du\right)\tag{3}
\end{align*}</p>
</blockquote>
<p>Here at (2) and (3) we can see quite nicely how the operator $\frac{\exp(yt)-1}{t}$ is connected with the integral operator. It can be extended to higher powers without too much effort and the relationship with the integral operator looks plausible.</p>
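<p>These expansions are easy to sanity-check numerically. Here is a small sketch (my addition, not part of the argument above) comparing the terminating power series for $\left(\frac{e^{yt}-1}{t}\right)x^n$ with the closed form $\frac{1}{n+1}\left((x+y)^{n+1}-x^{n+1}\right)$ from (2):</p>

```python
from math import factorial

def falling(n, k):
    """Falling factorial (n)_k = n(n-1)...(n-k+1)."""
    out = 1
    for i in range(k):
        out *= n - i
    return out

def op_series(n, x, y):
    """Apply (e^{yt}-1)/t to x^n via the (terminating) power series in t."""
    return sum(y**k / factorial(k) * falling(n, k - 1) * x**(n - k + 1)
               for k in range(1, n + 2))

def closed_form(n, x, y):
    return ((x + y)**(n + 1) - x**(n + 1)) / (n + 1)

check = max(abs(op_series(n, 1.5, 0.7) - closed_form(n, 1.5, 0.7))
            for n in range(1, 8))
```

The sample values of $x$ and $y$ are arbitrary; any reals work, since both sides are polynomial identities.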
<blockquote>
<p><strong>Operator: $\frac{\exp(yt)-1-yt}{t^2}$</strong></p>
<p>Now we take a look at the operator which is the focus of the OP's question, and at its generalisation.</p>
<p>\begin{align*}
\left(\frac{e^{yt}-1-yt}{t^{2}}\right)x^n
&=\sum_{k=2}^\infty\frac{y^k}{k!}t^{k-2}x^n\\
&=\sum_{k=2}^{n+2}\frac{y^{k}}{k!}(n)_{k-2}x^{n-(k-2)}\\
&=\frac{1}{(n+2)_2}\sum_{k=2}^{n+2}\binom{n+2}{k}y^kx^{n+2-k}\\
&=\frac{1}{(n+2)_2}\left((x+y)^{n+2}-x^{n+2}-(n+2)yx^{n+1}\right)
\end{align*}</p>
<p>Comparing the final expression with (3), we do not see a plausible representation via integrals, since the term $(n+2)yx^{n+1}$ does not provide anything nicely of the form
\begin{align*}
\text{integrated expression (end point) - integrated expression (starting point)}
\end{align*}</p>
</blockquote>
<p>This impression becomes even stronger when looking at the general case. We obtain for $j\geq 1$
\begin{align*}
\left(\frac{e^{yt}-1-yt-\frac{(yt)^2}{2!}-\cdots-\frac{(yt)^{j-1}}{(j-1)!}}{t^j}\right)x^n
&=\sum_{k=j}^\infty\frac{y^k}{k!}t^{k-j}x^n\\
&=\frac{1}{(n+j)_j}\sum_{k=j}^{n+j}\binom{n+j}{k}y^kx^{n+j-k}\\
&=\frac{1}{(n+j)_j}\left((x+y)^{n+j}-\sum_{k=0}^{j-1}\binom{n+j}{k}y^kx^{n+j-k}\right)
\end{align*}</p>
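<p>The general case can be checked the same way. The sketch below (again my addition, with arbitrary sample values of $x$ and $y$) compares the terminating operator series with the closed form for several $n$ and $j$:</p>

```python
from math import comb, factorial

def falling(n, k):
    """Falling factorial (n)_k."""
    out = 1
    for i in range(k):
        out *= n - i
    return out

def lhs(n, j, x, y):
    """The operator applied to x^n, summed as a terminating series."""
    return sum(y**k / factorial(k) * falling(n, k - j) * x**(n - k + j)
               for k in range(j, n + j + 1))

def rhs(n, j, x, y):
    head = sum(comb(n + j, k) * y**k * x**(n + j - k) for k in range(j))
    return ((x + y)**(n + j) - head) / falling(n + j, j)

err = max(abs(lhs(n, j, 0.9, 1.3) - rhs(n, j, 0.9, 1.3))
          for n in range(1, 5) for j in range(1, 4))
```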
|
748,815 | <blockquote>
<p>$\displaystyle\sum\limits_{k=1}^nk^2(k-1){n\choose k}^2 = n^2(n-1)
{2n-3\choose n-2}$ considering $n\ge2$</p>
</blockquote>
<p>Can somebody help with this combinatorial proof?
I'm struggling a lot.
Thanks.</p>
<p><strong>EDIT:</strong> Ok. I could figure it out, if we had $\displaystyle\sum\limits_{k=1}^nk^2{n\choose k}^2 = n^2
{2n-2\choose n-1}$.</p>
<p>The problem is, I don't understand what to do with that $(k-1)$ and how it leads to ${2n-3\choose n-2}$.</p>
<p>I know $k{n\choose k} = n{n-1\choose k-1}$ </p>
<p>Choosing a team of $k$ elements from $n$ and from that $k$ elements, pick a captain is the same as choose a captain first, and then, complete the team, choosing $k-1$ elements from $n-1$</p>
<p>But, what about $k(k-1){n\choose k}$ ?</p>
| robjohn | 13,854 | <p><strong>Hint:</strong> Note that because choosing $k$ elements from a set of $n$ is the same as choosing the complement of the $k$ elements, we have
$$
\binom{n}{k}=\binom{n}{n-k}\tag{1}
$$
and since choosing a team of $k$ people and then a leader from those chosen is the same as choosing a leader and then choosing the remaining $k-1$ from the remaining $n-1$, we get
$$
k\binom{n}{k}=n\binom{n-1}{k-1}\tag{2}
$$
and
$$
k^2(k-1)=k(k-1)(k-2)+2k(k-1)\tag{3}
$$
Then consider <a href="http://en.wikipedia.org/wiki/Vandermonde%27s_identity" rel="nofollow">Vandermonde's Identity</a>.</p>
<hr>
<p><strong>Full Solution:</strong></p>
<p>$$
\hspace{-5mm}\begin{align}
&\sum_{k=1}^nk^2(k-1)\binom{n}{k}^2\\
&=\sum_{k=1}^nk(k-1)(k-2)\binom{n}{k}\binom{n}{n-k}+2\sum_{k=1}^nk(k-1)\binom{n}{k}\binom{n}{n-k}\tag{4}\\
&=n(n-1)(n-2)\sum_{k=1}^n\binom{n-3}{k-3}\binom{n}{n-k}+2n(n-1)\sum_{k=1}^n\binom{n-2}{k-2}\binom{n}{n-k}\tag{5}\\
&=n(n-1)(n-2)\binom{2n-3}{n-3}+2n(n-1)\binom{2n-2}{n-2}\tag{6}\\[4pt]
&=(n-1)(n-2)^2\binom{2n-3}{n-2}+4(n-1)^2\binom{2n-3}{n-2}\tag{7}\\[4pt]
&=n^2(n-1)\binom{2n-3}{n-2}\tag{8}
\end{align}
$$
Explanation:<br>
$(4)$: apply $(1)$ and $(3)$<br>
$(5)$: apply $(2)$ several times<br>
$(6)$: Vandermonde Identity<br>
$(7)$: $\binom{2n-3}{n-3}\stackrel{(1)}=\binom{2n-3}{n}\stackrel{(2)}=\frac{2n-3}{n}\binom{2n-4}{n-1}\stackrel{(1)}=\frac{2n-3}{n}\binom{2n-4}{n-3}\stackrel{(2)}=\frac{n-2}{n}\binom{2n-3}{n-2}$<br>
$(7)$: $\binom{2n-2}{n-2}\stackrel{(1)}=\binom{2n-2}{n}\stackrel{(2)}=\frac{2n-2}{n}\binom{2n-3}{n-1}\stackrel{(1)}=\frac{2n-2}{n}\binom{2n-3}{n-2}$ </p>
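<p>As a sanity check (my addition, not part of the proof), the identity can be verified with exact integer arithmetic for small $n$:</p>

```python
from math import comb

def lhs(n):
    return sum(k * k * (k - 1) * comb(n, k)**2 for k in range(1, n + 1))

def rhs(n):
    return n * n * (n - 1) * comb(2 * n - 3, n - 2)

# Exact equality for n = 2, 3, ..., 19
results = [(lhs(n), rhs(n)) for n in range(2, 20)]
```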
|
189,650 | <p>Let $S=\{s_1, s_2, s_3 \}$. If $s_1$ can be represented as a linear combination of $s_2$ and $s_3$, and $s_2$ can be represented as a linear combination of $s_1$ and $s_3$, but $s_3$ can not be represented as a linear combination of $s_1$, of $s_2$, or of $s_1$ and $s_2$ together, can we call $S$ a linearly dependent set? </p>
| Hagen von Eitzen | 39,174 | <p><strong>Warning!</strong>
By strictly adhering to notation, there is a special case where your $S=\{s_1, s_2, s_3\}$ with the given conditions is linearly independent, namely if $s_1=s_2$ and $s_1, s_3$ are linearly independent.
This happens when one uses sets instead of families to talk about linear dependence and bases etc.</p>
<p>Concrete: In the vector space $\mathbb R^2$ let $s_1 = s_2 = (1, 0)$ and $s_3 = (0, 1)$. Then $s_1 = 1\cdot s_2+0\cdot s_3$ and $s_2 = 1\cdot s_1+0\cdot s_3$, whereas $s_3$ cannot be expressed as a linear combination of $s_1$ and $s_2$. The set $S=\{s_1, s_2, s_3\}$ has cardinality two and is linearly independent.</p>
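<p>The set-versus-family distinction behind this warning is easy to see in code (my addition; vectors are modelled as Python tuples): the duplicate $s_1=s_2$ simply collapses inside a set.</p>

```python
s1 = (1, 0)
s2 = (1, 0)   # s1 = s2, as in the special case above
s3 = (0, 1)

family = [s1, s2, s3]   # a family keeps repetitions: 3 members
S = {s1, s2, s3}        # a set collapses them: cardinality 2

# The 2x2 determinant of the two distinct vectors is nonzero,
# so the set S = {(1,0), (0,1)} is linearly independent.
det = s1[0] * s3[1] - s1[1] * s3[0]
```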
|
12,690 | <p>I understand Lie groups are defined by the structure constants associated with the Lie brackets, which are treated as commutators in quantum mechanics, but I don't know of a mathematical theory related to group theory that defines or uses an anticommutator. If Lie group theory uses the commutator, what theory uses the anticommutator?</p>
<p>Finite groups (not Lie groups, which are continuous) can be specified by structure equations analogous to a Lie bracket, but more general, of which commutation or anticommutation relations are just one of infinitely many possibilities. Is there such a variety of possible structure equations in continuous groups too?</p>
| T.. | 467 | <p>If Lie algebras are (in light of the Poincare-Birkhoff-Witt theorem) a complete axiomatization of the antisymmetric multiplication $AB-BA$ in an associative algebra, Jordan algebras are an almost-complete axiomatization of the symmetric multiplication $(AB+BA)/2$. ( <a href="http://en.wikipedia.org/wiki/Jordan_algebra" rel="nofollow">http://en.wikipedia.org/wiki/Jordan_algebra</a> ). They are the closest thing known to an intrinsic structure related to anticommutators.</p>
<p>Unlike Lie algebras, there are exceptional Jordan algebras that do not come from the multiplication in associative algebras, and Jordan algebras are not known to be infinitesimal objects for groups or any other structure. So they do not arise as often outside of (or in) quantum mechanics and the initial hopes for a Lie-like theory have not been realized.</p>
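<p>To make the symmetric product concrete, here is a small check (my addition, not part of the answer) of the Jordan identity $(x\circ y)\circ x^2 = x\circ(y\circ x^2)$ for $x\circ y=\frac{1}{2}(xy+yx)$ on $2\times 2$ matrices; it holds here automatically because this Jordan algebra is special, i.e. it comes from the multiplication in an associative algebra:</p>

```python
def mul(a, b):
    """2x2 matrix product; a matrix is a pair of row tuples."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(a, b):
    return tuple(tuple(a[i][j] + b[i][j] for j in range(2)) for i in range(2))

def jordan(a, b):
    """Symmetrized (anticommutator-based) product (ab + ba)/2."""
    s = add(mul(a, b), mul(b, a))
    return tuple(tuple(v / 2 for v in row) for row in s)

x = ((1.0, 2.0), (3.0, 4.0))
y = ((0.5, -1.0), (2.0, 1.5))
x2 = jordan(x, x)                 # x o x = x^2
lhs = jordan(jordan(x, y), x2)
rhs = jordan(x, jordan(y, x2))
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```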
<p>If you mean not the anti-commutator but the supercommutator, $AB \pm BA$ with the sign depending on parity of $A$ and $B$ or their consituents (e.g., negative for bosons and positive for fermions) there is a theory of Lie superalgebras and Lie supergroups. There are new phenomena in the classification, such as continuous families of nonisomorphic simple objects. The theory can be phrased as the "Lie group theory in the category of super-vector spaces", which in turn is a special case of Lie theory (or group theory, or algebraic geometry) in a tensor category. So in principle there are many possible theories of commutator-like objects, but I don't know if any have been found to be interesting besides the usual theory, its super-version, and analogues in characteristic $p$. </p>
|
1,180,437 | <p>I am trying to understand this proof, or rather an important part of it. I have already shown this is true for $n=2$ and am assuming the $a_n$ case is true.</p>
<p>$$(a_1^2+a_2^2+...+a_n^2) \le (a_1+a_2+...+a_n)^2$$
Want to show that
$$(a_1^2+a_2^2+...+a_n^2 + a_{n+1}^2) \le (a_1+a_2+...+a_n+a_{n+1})^2$$
$=$
$$(a_1^2+a_2^2+...a_n^2) + a_{n+1}^2 \le ((a_1+a_2+...a_n)+(a_{n+1}))^2$$</p>
<p>$=$
$$(a_1^2+a_2^2+...+a_n^2 + a_{n+1}^2) \le (a_1+a_2+...+a_n)^2+2(a_1+a_2+...+a_n)(a_{n+1})+(a_{n+1})^2$$ and here is the part I am not understanding. For some reason the proof moves some of the terms over and I cannot identify what is being replaced or why. My guess is that the terms that move are the ${n+1}$ terms. But, I am not certain. </p>
<p>$$a_1^2+a_n^2+a_{n+1}^2...+2(a_1+a_2+...a_n)(a_{n+1}) \le (a_1+a_2+...a_n)^2$$</p>
| Joffan | 206,402 | <p>If you really need to use induction, here's what you need for the inductive step:</p>
<p>Assuming $(a_1^2+a_2^2+...+a_n^2 ) \le (a_1+a_2+...+a_n)^2$</p>
<p>then
$$\begin{align} (a_1+a_2+...+a_n+a_{n+1})^2 &= ((a_1+a_2+...a_n)+(a_{n+1}))^2 \\
&= (a_1+a_2+...+a_n)^2+2(a_1+a_2+...+a_n)a_{n+1} + a_{n+1}^2 \\
&\ge (a_1+a_2+...+a_n)^2 + a_{n+1}^2 \tag{*}\\
&\ge (a_1^2+a_2^2+...+a_n^2 ) + a_{n+1}^2 \\
\end{align}
$$as required. Note that $2(a_1+a_2+...+a_n)a_{n+1}\ge 0$ for the (*) step.</p>
<p>You do require that all the $a_i$ are nonnegative.</p>
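<p>A quick numerical spot-check (my addition; the sample values are arbitrary) of the inequality and of why nonnegativity is needed:</p>

```python
def sq_sum(a):
    return sum(x * x for x in a)

samples = [
    [0.0, 1.0, 2.5],
    [3, 1, 4, 1, 5, 9, 2, 6],
    [0.1] * 10,
]
checks = [sq_sum(a) <= sum(a)**2 for a in samples]

# A negative entry can break it: (-1)^2 + 1^2 = 2, but (-1 + 1)^2 = 0.
counterexample = sq_sum([-1, 1]) <= sum([-1, 1])**2
```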
<p>Being visual myself, I prefer pictures....</p>
<p><img src="https://i.stack.imgur.com/vSKPy.png" alt="enter image description here"></p>
|
3,298,412 | <blockquote>
<p>For an n-dimensional vector space <span class="math-container">$V$</span> and an ordered basis <span class="math-container">$B$</span> of <span class="math-container">$V$</span>
, the mapping <span class="math-container">$\Phi : \mathbb{R}^n → V , \Phi(e_i) = b_i, i = 1,...,n$</span>
is linear , where <span class="math-container">$E = (e_1,...,e_n)$</span> is the standard basis of
<span class="math-container">$\mathbb{R}^n$</span>.</p>
</blockquote>
<p>This is a paragraph from my textbook, I am trying to figure out why <span class="math-container">$\Phi$</span> is a linear map. </p>
<p>For an example, If I set <span class="math-container">$n=2$</span>, I can show that there exist a linear combination of <span class="math-container">$B$</span> vectors for each vector of <span class="math-container">$E$</span> where in this case <span class="math-container">$E = (e_1, e_2)$</span> (Cartesian basis) and <span class="math-container">$B = \{(1, -1)^T, (1, 1)^T\}$</span>. But I don't know what to do after this.</p>
<p>Accroding to my book, the mapping is also isomorphic since <span class="math-container">$\Phi$</span> is linear and <span class="math-container">$\dim(\mathbb{R}^n) = \dim(V)$</span>, I am just stuck at seeing the linearity part. Any pointers?</p>
| InsideOut | 235,392 | <p>Generally, vector spaces are defined in an absolutely abstract way. Namely, a vector space <span class="math-container">$V$</span> over a field <span class="math-container">$\Bbb K$</span> is just a set on which are defined two operations (inner operation and an external operation) that satisfy a precise list of axioms. </p>
<p>When <span class="math-container">$\Bbb K=\Bbb R$</span>, the set <span class="math-container">$\Bbb R^n$</span> is the simplest example possible. What is nice is that any abstract vector space <span class="math-container">$V$</span> of dimension <span class="math-container">$n$</span> over <span class="math-container">$\Bbb R$</span> is isomorphic to <span class="math-container">$\Bbb R^n$</span>.</p>
<p>The correct way to define <span class="math-container">$\Phi$</span> is the following. Fix a basis, usually the Cartesian basis <span class="math-container">$\{e_1,\dots, e_n\}$</span> for <span class="math-container">$\Bbb R^n$</span> and a basis <span class="math-container">$\{b_1,\dots,b_n\}$</span> for <span class="math-container">$V$</span>. Define <span class="math-container">$\Phi:\Bbb R^n\to V$</span> in this way: </p>
<ol>
<li>Pick <span class="math-container">$(a_1,\dots,a_n)\in \Bbb R^n$</span>, </li>
<li>Let <span class="math-container">$w=a_1e_1+\cdots + a_ne_n$</span> </li>
<li>Define <span class="math-container">$\Phi(w)=v\in V$</span> as <span class="math-container">$a_1b_1+\cdots +a_nb_n$</span>. </li>
</ol>
<p>Now verify that <span class="math-container">$\Phi$</span> is linear, injective and surjective. </p>
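<p>Here is a small numerical sketch (my addition) of steps 1-3 for the basis $B=\{(1,-1)^T,(1,1)^T\}$ from the question, checking additivity and homogeneity of $\Phi$ at sample points:</p>

```python
def phi(coords, basis):
    """Steps 1-3: send (a_1,...,a_n) to a_1*b_1 + ... + a_n*b_n."""
    dim = len(basis[0])
    return tuple(sum(c * b[j] for c, b in zip(coords, basis))
                 for j in range(dim))

B = [(1, -1), (1, 1)]

u, v, lam = (2, 3), (-1, 4), 5
lin_add = phi((u[0] + v[0], u[1] + v[1]), B) == tuple(
    a + b for a, b in zip(phi(u, B), phi(v, B)))
lin_scale = phi((lam * u[0], lam * u[1]), B) == tuple(
    lam * a for a in phi(u, B))
```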
<p>Final notice: This construction holds for any field <span class="math-container">$\Bbb K$</span>. I used <span class="math-container">$\Bbb R$</span> since your question is posed when <span class="math-container">$\Bbb K=\Bbb R$</span>.</p>
|
3,111,985 | <p><span class="math-container">$f_n(x)= \frac{x}{(1+x)^n}\quad f_n(0)=0$</span></p>
<p>pointwise convergence: <span class="math-container">$\sum_{n=1}^{\infty} \frac{x}{(1+x)^n}=x \sum_{n=1}^{\infty} \frac{1}{(1+x)^n}$</span> and the series is a geometric series convergent if <span class="math-container">$|x+1|>1$</span>.</p>
<p>So there is pointwise convergence on <span class="math-container">$E=(-\infty,-2)\cup[0,+\infty)$</span>.</p>
<p>The sum of the series is <span class="math-container">$S(x)=1$</span> for <span class="math-container">$x\ne0$</span> and <span class="math-container">$S(0)=0$</span>, so if I consider <span class="math-container">$[0,+\infty)$</span> there is no uniform convergence.
But is there uniform convergence on a subset of <span class="math-container">$[0,+\infty)$</span>?</p>
<p>If I consider <span class="math-container">$A=[b,+\infty)$</span>, <span class="math-container">$b>0$</span>: <span class="math-container">$\sup_A|f_n(x)|=f_n\left({\frac{1}{n-1}}\right)$</span>, the general term of a convergent series, so by the Weierstrass test the series converges uniformly on <span class="math-container">$A$</span>.</p>
<p>And on B=<span class="math-container">$(-\infty,-2)$</span>: I consider <span class="math-container">$\sup_B|S(x)-S_N(x)|=\sup_B|\sum_{n=N+1}^{+\infty}f_n(x)|$</span>.
The supremum is at least the value of the function at every point, so can I take <span class="math-container">$x_n=-2-{\frac{1}{n}}$</span> and prove there is no uniform convergence?</p>
| Kavi Rama Murthy | 142,385 | <p>If <span class="math-container">$S_N$</span> is the <span class="math-container">$N$</span>-th partial sum then <span class="math-container">$1-S_N=\frac 1 {(1+x)^{N}}$</span> for <span class="math-container">$x \ne 0$</span> (by the formula for the sum of a finite geometric series). Hence the series converges uniformly on a set <span class="math-container">$S$</span> iff <span class="math-container">$\frac 1 {|1+x|^{N}} \to 0$</span> uniformly on <span class="math-container">$S$</span>. I think you can take it from here.</p>
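<p>A numerical illustration of this hint (my addition): for $x>0$ one finds $1-S_N(x)=(1+x)^{-N}$, so on $[b,+\infty)$ with $b>0$ the worst-case error $(1+b)^{-N}$ goes to $0$:</p>

```python
def S_N(x, N):
    """Partial sum of x / (1+x)^n for n = 1, ..., N."""
    return sum(x / (1 + x)**n for n in range(1, N + 1))

x, N = 0.5, 20
err = abs((1 - S_N(x, N)) - (1 + x)**(-N))

# sup over [b, oo) of |1 - S_N| is (1 + b)^(-N), attained at x = b
b = 0.25
sup_errors = [(1 + b)**(-M) for M in (5, 10, 20)]
decreasing = sup_errors[0] > sup_errors[1] > sup_errors[2]
```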
|
3,362,000 | <p>From listing the first few terms, I suspect that the sequence is increasing, so I wanted to use mathematical induction to verify my suspicion.</p>
<p>I have assumed that <span class="math-container">$a_k<a_{k+1}$</span>; I don't see how I can obtain <span class="math-container">$a_{k+1}<a_{k+2}$</span>, because <span class="math-container">$\frac{1}{a_k}>\frac{1}{a_{k+1}}$</span>.</p>
| JezuzStardust | 213,886 | <p>Prove that <span class="math-container">$a_n > 0$</span> for all <span class="math-container">$n$</span>. </p>
<p>Then use that
<span class="math-container">$$
a_n = a_{n-1} + \frac{1}{a_{n-1}} > a_{n-1},
$$</span>
since <span class="math-container">$1 / a_{n-1} > 0$</span>. </p>
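<p>A quick numerical illustration (my addition, taking $a_1=1$):</p>

```python
a = [1.0]
for _ in range(10):
    a.append(a[-1] + 1 / a[-1])   # a_n = a_{n-1} + 1/a_{n-1}

increasing = all(x < y for x, y in zip(a, a[1:]))
positive = all(x > 0 for x in a)
```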
|
2,780,597 | <p>The <strong>definition</strong> of a <em>convex set</em> is geometrically intuitive: $S \subset \mathbb{R}^n$ is convex if, given $x,y\in S$, the line segment joining $x$ and $y$ is in $S$. But the definition of a <em>convex function</em> doesn't seem so intuitive.</p>
<p>Let $f$ be a real valued function from an open interval $I$. Consider the graph of $f$ in plane: set theoretically it is
$$\{(x,f(x))\,|\, x\in I\}.$$
For any $a,b\in I$, consider the line joining $(a,f(a))$ and $(b,f(b))$. Then one of the following happens:</p>
<ol>
<li><p>The line lies above the graph of $f$.</p></li>
<li><p>The line lies below the graph of $f$.</p></li>
<li><p>None of these hold. </p></li>
</ol>
<blockquote>
<p><strong>Q.</strong> Suppose you know the definition of <em>convex set</em> and let $f:I\rightarrow \mathbb{R}$ be a function which does not satisfy (3). This means $f$ satisfies (1) or (2). What is an <em>intuitive way</em> to define the function to be convex or concave?</p>
</blockquote>
<p>For example, Wikipedia says that <em>a function is convex if the region above its graph is convex</em>. But if we are trying to give an intuitive definition of a convex function based on convex sets, according to the convexity of the region above $f$ or below $f$, what is an intuitive way to decide between the two? </p>
<hr>
<p>Since $f$ satisfies (1) or (2), the words <em>convex function</em> and <em>concave function</em> are reserved for these two cases; we could assign either word to either case without intuition. But, considering the definition of a <em>convex set</em>, can we get intuition to define a convex function? Note that almost everyone knows the geometric explanation of the standard definition of a convex function. </p>
| DechiWords | 839,564 | <p>I refer to Boris T. Polyak's book 'Introduction to Optimization', page 8.</p>
<p><strong>Definition of convex function</strong></p>
<p>A scalar function <span class="math-container">$f(x)$</span> on <span class="math-container">$\mathbb R^n$</span> is said to be <em>convex</em> if</p>
<p><span class="math-container">$$f(\lambda x+(1-\lambda)y)\leq \lambda f(x) + (1-\lambda)f(y)$$</span></p>
<p>for any <span class="math-container">$x,y\in \mathbb R^n$</span> and <span class="math-container">$0\leq \lambda \leq 1$</span>.</p>
<p><strong>Theorem</strong></p>
<p>Let <span class="math-container">$f:\mathbb R^n \to \mathbb R$</span>. <span class="math-container">$f$</span> is a convex function if and only if its epigraph <span class="math-container">$H=\{(x,\gamma)\mid\gamma \geq f(x)\}$</span> is a convex set.</p>
<p>Now look at this theorem, which is very powerful.</p>
<p><strong>Theorem</strong></p>
<p>A twice-differentiable function <span class="math-container">$f$</span> is convex if and only if <span class="math-container">$\nabla^2f(x)\geq 0$</span> for all <span class="math-container">$x$</span>.</p>
<p>Here <span class="math-container">$\nabla^2f(x)$</span> is the Hessian matrix, and <span class="math-container">$\nabla^2f(x)\geq 0$</span> means that it is nonnegative definite.</p>
<p><em>Example</em></p>
<p>Let <span class="math-container">$g(x,y) = x^2-xy+y^2$</span>.</p>
<p><span class="math-container">$$\implies \nabla g(x,y) = (2x-y,-x+2y)$$</span>
Then
<span class="math-container">\begin{equation}
\nabla^2g(x,y) =
\left(\begin{smallmatrix}
2 & -1\\
-1 & 2
\end{smallmatrix}\right)
\end{equation}</span></p>
<p><span class="math-container">$$\implies \nabla^2g(x,y)\geq 0,\forall x,y\in \mathbb R$$</span></p>
<p><span class="math-container">$$\therefore g(x,y) \text{ is a convex function}$$</span></p>
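<p>As a spot-check (my addition), the defining inequality for $g(x,y)=x^2-xy+y^2$ can be sampled at a few point pairs and values of $\lambda$:</p>

```python
def g(p):
    x, y = p
    return x * x - x * y + y * y

def mix(p, q, lam):
    """Convex combination lam*p + (1-lam)*q of two points in R^2."""
    return (lam * p[0] + (1 - lam) * q[0], lam * p[1] + (1 - lam) * q[1])

pairs = [((0.0, 0.0), (1.0, 2.0)), ((-3.0, 1.0), (2.0, -2.0)),
         ((5.0, 5.0), (-1.0, 4.0))]
lams = [0.0, 0.25, 0.5, 0.9, 1.0]

# g(lam*p + (1-lam)*q) <= lam*g(p) + (1-lam)*g(q), up to float slack
convex_ok = all(
    g(mix(p, q, lam)) <= lam * g(p) + (1 - lam) * g(q) + 1e-12
    for p, q in pairs for lam in lams)
```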
<p>Note that if <span class="math-container">$\nabla^2f(x)$</span> are not a constant matrix but is nonnegative definite you can not say <span class="math-container">$f(x)$</span> are a convex function because this needs to be true on all variables of the function by definition of convex function.</p>
|
1,511,246 | <blockquote>
<p>What is the value of $0.7\overline{54}$ +$0.69\overline2$?</p>
<p>(a) $\frac{1813}{900}$ (b) $\frac{1783}{910}$ (c) $\frac{14323}{9900}$ (d) $\frac{13243}{9900}$</p>
</blockquote>
<p>I get</p>
<p>@edit</p>
<p>$$754-7/990 + 692-69/900$$=$747$/$990$ + $623$/$900$=$1$/$90$($747$/$11$ + $623$/$10$)</p>
<p>=($7470$/$11$ + $623$ . $11$/$10$)=($7470$ + $6853$)=$14323$/$9900$</p>
<p>Thankx for help I did not multiplied by $90$</p>
<p>Can anyone guide me how to solve the problem?</p>
| Akash SSM 2 8 std | 286,933 | <p>$$0.7545454\ldots + 0.6922222\ldots = 1.4467676\ldots$$
Now convert this into a rational number.
I got the answer as $\frac{14323}{9900}$.
Hope this helps.</p>
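<p>The computation can be confirmed exactly with Python's <code>fractions</code> module (my addition):</p>

```python
from fractions import Fraction

a = Fraction(754 - 7, 990)    # 0.7(54) with "54" repeating
b = Fraction(692 - 69, 900)   # 0.69(2) with "2" repeating
total = a + b                 # exact rational arithmetic
```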
|
2,010,693 | <p>How can I prove that $x_{n+1}=c+\sqrt{x_n}$, $x_1=a>0$ and $c>0$ converges?
I know that the limit (if it exists) is $L={{2c+1+\sqrt{4c+1}}\over 2}$.
I have already proved that if $x_1<L$ then $x_n<L$ for all $n$, so the sequence is bounded from above; but how can I prove that if $x_1<L$ then the sequence is increasing?
I would really appreciate any hints or ideas.</p>
| Zongxiang Yi | 388,565 | <p>So you have $x_{n+1}-x_n=c+\sqrt{x_n}-x_n$. Now consider the function:
$$f(x)=c+\sqrt{x}-x,\qquad x\ge 0.$$
It follows
$$f'(x)=\frac{1}{2\sqrt{x}}-1.$$
You can see that when $0<x< \frac{1}{4}$, it has
$$f'(x)>0.$$
This means $f$ is increasing on $\left(0,\frac{1}{4}\right)$, so
$$f(x)>f(0)=c>0.$$
That's
$$x_{n+1}-x_n>0,n=1,2,\cdots, \text{ if } x_n< \frac{1}{4}.$$
<p>Conversely, solving $f(x)=0$, that is $c+\sqrt{x}=x$, you can easily get the number you want, $L=\frac{2c+1+\sqrt{4c+1}}{2}$.</p>
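<p>A numerical illustration (my addition, taking $c=1$ and $x_1=0.5<L$) of monotone convergence to $L$:</p>

```python
from math import sqrt

c = 1.0
L = (2 * c + 1 + sqrt(4 * c + 1)) / 2   # the claimed limit
seq = [0.5]                             # x_1 = a = 0.5 < L
for _ in range(25):
    seq.append(c + sqrt(seq[-1]))

increasing = all(u < v for u, v in zip(seq, seq[1:]))
bounded = all(u < L for u in seq)
close = abs(seq[-1] - L) < 1e-9
```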
|
869,337 | <p>"Abstract index" and "coordinate free notations" are often submitted as alternatives to Einstein Summation notation. Could you illustrate their use using an example?</p>
<p>Here's a sum written in Einstein's notation:</p>
<p>$a_{ij}b_{kj} = a_{i}b_{k}$</p>
<p>How would you rewrite it in a modern way? </p>
| reuns | 276,986 | <p><span class="math-container">$$R[y]/(f(y))\cong R[x]/(g(x))$$</span> as <span class="math-container">$R$</span>-algebras iff there exists <span class="math-container">$\phi,\varphi\in R[t]$</span> such that <span class="math-container">$$f(\phi(x))\in (g(x)),\quad g(\varphi(y))\in (f(y)),\quad \phi(\varphi(y))-y\in (f(y)),\ \varphi(\phi(x))-x\in (g(x))$$</span></p>
<p>The isomorphism is <span class="math-container">$y\to \phi(x)$</span> with inverse <span class="math-container">$x\to \varphi(y)$</span>.</p>
<p>Note that if <span class="math-container">$R$</span> is a field, <span class="math-container">$f$</span> is irreducible and <span class="math-container">$\deg(g)=\deg(f)$</span> then the existence of <span class="math-container">$\phi$</span> is sufficient.</p>
|
4,416,063 | <p>How to solve <span class="math-container">$\int\frac{\ln(x \ln(x))}{x} dx$</span>?</p>
<p>My work:<br />
Let <span class="math-container">$t = \ln(x) \implies x= e^t ; dt = \dfrac{dx}{x}$</span></p>
<p>So above integral changes to,
<span class="math-container">$$\int t ( e^t t) dt$$</span>
<span class="math-container">$$\int t^2 e^t dt$$</span></p>
<p>Using IBP to get:
<span class="math-container">$$t^2e^t - 2te^t + 2e^t + C$$</span></p>
<p>Undoing the substitution to get,
<span class="math-container">$$x\log^2(x) - 2x\log(x) + 2x + C$$</span></p>
<p>But differentiating this doesn't give back the original integrand. I've definitely done something wrong which I'm unable to understand. Can anyone help me solve it?</p>
<p>Also I'm wondering if We could solve it without using by parts.</p>
<hr />
<p><strong>Edit</strong><br />
I tried again as suggested in comments.
<span class="math-container">$$\int\dfrac{\log(x \log(x))}{x}dx \overset{t\to\log(x)}= \int\log(e^t t)\, dt = \int (t + \log(t))\, dt = \dfrac{t^2}{2} + t\log(t) - t + C$$</span></p>
<p>By undoing the substitution,
<span class="math-container">$$\boxed{\dfrac{\log^2(x)}{2} + \log(x)\log(\log x) - \log(x)+ C}$$</span></p>
<p>What's wrong with this?</p>
| Dr. Sundar | 1,040,807 | <p>We make the substitution <span class="math-container">$t = \ln| x | $</span> or <span class="math-container">$x = e^t$</span>.</p>
<p>Then <span class="math-container">$dt = {1 \over x} dx$</span> or <span class="math-container">${dx \over x} = dt$</span>.</p>
<p>Thus, the given integral can be simplified as
<span class="math-container">$$
I = \int \ln| t e^t | dt
$$</span></p>
<p>Using the formula of integration by parts
<span class="math-container">$$
\int u dv = u v - \int v du,
$$</span>
we can simplify the integral <span class="math-container">$I$</span> as
<span class="math-container">$$
I = t \ln| t e^t | - \int \ t d\left[ \ln\left( t e^t \right) \right]
$$</span></p>
<p>A simple calculation shows that
<span class="math-container">$$
I = t \ln | t e^t | - \int \ t {1 \over t e^t} \ \left( e^t + t e^t \right)
dt
$$</span></p>
<p>Simplifying, we get
<span class="math-container">$$
I = t \ln | t e^t | - \int \ (t + 1) dt = t \ln | t e^t | - {t^2 \over 2} - t + c
$$</span>
where <span class="math-container">$c$</span> is an integration constant.</p>
<p>Back-substitution of <span class="math-container">$t = \ln| x |$</span> yields the final result as
<span class="math-container">$$
I = \ln| x | \ln\left| x \ln | x | \right| - {(\ln| x |)^2 \over 2} - \ln| x | + c
$$</span></p>
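<p>A finite-difference spot-check (my addition) that this antiderivative differentiates back to the integrand for $x>1$:</p>

```python
from math import log

def F(x):
    """The antiderivative found above, with c = 0."""
    return log(x) * log(x * log(x)) - log(x)**2 / 2 - log(x)

def integrand(x):
    return log(x * log(x)) / x

x, h = 3.0, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference
err = abs(numeric - integrand(x))
```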
|
3,415,378 | <p>I am looking for an estimation or an approximation of </p>
<p><span class="math-container">$\sum _{k=1}^{n}{\log(k)\binom {n}{k}}$</span></p>
<p>Any hints will be appreciated.
Thank you.</p>
| metamorphy | 543,769 | <p>I'm continuing the computations by Jack D'Aurizio (following <a href="https://math.stackexchange.com/a/3399858">myself</a>).
<span class="math-container">\begin{align}\sum_{k=1}^{n}\binom{n}{k}\ln k&=\sum_{k=1}^{n}\binom{n}{k}\int_0^1\frac{x^{k-1}-1}{\ln x}\,dx\\\color{gray}{[\text{note the sign}]}\quad&=\int_0^1\frac{(1+x)^n-1-(2^n-1)x}{x\ln x}\,dx\\\color{gray}{\text{[integrate by parts]}}\quad&=\int_0^1\big(2^n-1-n(1+x)^{n-1})\ln\ln\frac{1}{x}\,dx\\\color{gray}{[\text{substitute }x=2e^{-t}-1]}\quad&=-(2^n-1)\gamma-2^n n\int_0^{\ln 2}e^{-nt}\ln\ln\frac{1}{2e^{-t}-1}\,dt\\&=\color{blue}{2^n(\ln n-\ln 2-I_n)}+\underbrace{\gamma+2^n n\int_{\ln 2}^{\infty}e^{-nt}\ln 2t\,dt}_{\text{small, can be neglected}},\end{align}</span>
<span class="math-container">$$I_n=n\int_0^{\ln 2}e^{-nt}\varphi(t)\,dt,\qquad\varphi(t)=\ln\left[\frac{1}{2t}\ln\frac{1}{2e^{-t}-1}\right].$$</span> <span class="math-container">$I_n$</span> fits <a href="https://en.wikipedia.org/wiki/Watson%27s_lemma" rel="nofollow noreferrer">Watson's lemma</a>: <span class="math-container">$\varphi(t)=\frac{1}{2}t+\frac{3}{8}t^2+\frac{1}{3}t^3+\frac{65}{192}t^4+\frac{67}{180}t^5+\ldots$</span> gives <span class="math-container">$$I_n\asymp\frac{1}{2n}+\frac{3}{4n^2}+\frac{2}{n^3}+\frac{65}{8n^4}+\frac{134}{3n^5}+\ldots$$</span></p>
|
1,606,202 | <p>I'm having trouble figuring out why these two different ways to write this combination give different answers. Here is the scenario:</p>
<p>Q: Choose a group of 10 people from 17 men and 15 women, in how many ways are at most 2 women chosen?</p>
<p>Solution A: From 17 men choose 8, and from 15 women choose 2. Or from 17 men choose 9, and from 15 women choose 1. Or from 17 men choose 10.</p>
<p>C(17,8)*C(15,2)+C(17,9)*C(15,1)+C(17,10) = 2936648 ways</p>
<p>Solution B: Choose from the men to fill the first 8 positions and choose the next 2 positions from the remaining men and women.</p>
<p>C(17,8)*[C(9,2)+C(9,1)*C(14,1)+C(14,2)] = 6150430 ways</p>
<p>What is wrong with my logic or interpretation here? </p>
| Brian M. Scott | 12,042 | <p>Suppose that the men are $M_1,\ldots,M_{17}$, and the women are $W_1,\ldots,W_{15}$. Consider the group</p>
<p>$$\{M_1,M_2,\ldots,M_9,W_1\}\;.$$</p>
<p>Your second approach counts this $9$ times: once as $\{M_1,\ldots,M_8\}$ for the $8$ men and $\{M_9,W_1\}$ for the last two, once as $\{M_1,M_2,M_3,M_4,M_5,M_6,M_7,M_9\}$ for the $8$ men and $\{M_8,W_1\}$ for the last two, and so on. It does this overcounting for every group that contains exactly one woman.</p>
<p>Groups that contain no women are overcounted even more: for any group of $10$ men, there are $\binom{10}2=45$ different ways to split it into the first $8$ and the last $2$, so it gets counted $45$ times!</p>
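<p>The overcounting is easy to exhibit by brute force on a scaled-down analogue (my own example, not from the question): choose $3$ people from $4$ men and $3$ women with at most $2$ women, where the flawed method fills one men-only slot first and then picks the remaining $2$ people from everyone left.</p>

```python
from itertools import combinations

men = ['M1', 'M2', 'M3', 'M4']
women = ['W1', 'W2', 'W3']
people = men + women

# Correct count: 3-person groups with at most 2 women.
correct = [g for g in combinations(people, 3)
           if sum(p in women for p in g) <= 2]

# Flawed method: choose 1 man for the "men" slot, then any 2 of the rest.
flawed = []
for m in men:
    rest = [p for p in people if p != m]
    for pair in combinations(rest, 2):
        flawed.append(frozenset({m} | set(pair)))

distinct = set(flawed)   # the same group is produced several times
```

Here the flawed method produces 60 selections but only 34 distinct groups: each one-woman group is counted twice and each all-male group three times, mirroring the overcounting described above.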
|
2,032,387 | <p>I know this is somewhat of an odd question, but I am having trouble with my TI-84 calculator and I don't know why.</p>
<p>I'm trying to find the RREF of the transpose of a <span class="math-container">$4\times6$</span> matrix; for some reason my graphing calculator gives me an error. Something to do with the dimensions? Here is a photo of matrix <span class="math-container">$A$</span>.
<img src="https://i.stack.imgur.com/JHf9A.png" alt="" /></p>
<p>I want to find RREF<span class="math-container">$(A$</span> transposed<span class="math-container">$)$</span>.</p>
| perplexed | 179,093 | <p><a href="http://tibasicdev.wikidot.com/rref" rel="nofollow noreferrer">The TI-84's rref function throws an error if there are more rows than columns</a>, and the transpose has more rows than columns.</p>
|
2,032,387 | <p>I know this is somewhat of an odd question, but I am having trouble with my TI-84 calculator and I don't know why.</p>
<p>I'm trying to find the RREF of the transpose of a <span class="math-container">$4\times6$</span> matrix; for some reason my graphing calculator gives me an error. Something to do with the dimensions? Here is a photo of matrix <span class="math-container">$A$</span>.
<img src="https://i.stack.imgur.com/JHf9A.png" alt="" /></p>
<p>I want to find RREF<span class="math-container">$(A$</span> transposed<span class="math-container">$)$</span>.</p>
| user399923 | 399,923 | <p>Change your matrix from 6x4 to 6x6 by adding two columns of zeros. Then you can use the rref or ref functions. Then just ignore the added columns.</p>
|
3,130,939 | <p>Suppose the following function with pi notation, with the pi denoting the iterated product, multiplying from <span class="math-container">$i = 0$</span> to <span class="math-container">$i = n$</span>:</p>
<p><span class="math-container">$$\prod_{i=0}^n \ln(y_i^{x - 1})$$</span></p>
<p>That is, the natural logarithm of <span class="math-container">$y$</span>, subscripted by <span class="math-container">$i$</span>, to the power of <span class="math-container">$x - 1$</span>.</p>
<p>What is the derivative of this product - to be clear, its derivative with respect to <span class="math-container">$x$</span>, not <span class="math-container">$y$</span>? </p>
| clathratus | 583,016 | <p>Let
<span class="math-container">$$f(x)=\prod_{i=0}^{n}f_i(x)$$</span>
and let <span class="math-container">$$g^{(m)}(x)=\left(\frac{d}{dx}\right)^mg(x),\qquad m=0,1,2,...$$</span>
as well as <span class="math-container">$\delta_{ij}$</span> denote the Kronecker Delta.</p>
<p>We have that
<span class="math-container">$$f'(x)=\sum_{i=0}^{n}\prod_{j=0}^{n}f_j^{(\delta_{ij})}(x)=\sum_{i=0}^{n}\frac{f_i'(x)}{f_i(x)}f(x)$$</span>
We use this with the choice
<span class="math-container">$$f_i(x)=\ln(y_i^{x-1})=(x-1)\ln y_i$$</span>
which gives <span class="math-container">$$f_i'(x)=\ln y_i$$</span>
So
<span class="math-container">$$f'(x)=\sum_{i=0}^{n}\frac{1}{x-1}\prod_{j=0}^{n}(x-1)\ln y_j$$</span>
<span class="math-container">$$f'(x)=(x-1)^n\sum_{i=0}^{n}\prod_{j=0}^{n}\ln y_j$$</span>
<span class="math-container">$$f'(x)=(x-1)^n\left(\sum_{i=0}^{n}1\right)\prod_{j=0}^{n}\ln y_j$$</span>
<span class="math-container">$$f'(x)=(n+1)(x-1)^n\prod_{j=0}^{n}\ln y_j$$</span></p>
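<p>A finite-difference check of the final formula (my addition, with arbitrary positive $y_i$):</p>

```python
from math import log

ys = [2.0, 3.0, 5.0]          # y_0, y_1, y_2, so n = 2
n = len(ys) - 1

def f(x):
    prod = 1.0
    for y in ys:
        prod *= log(y**(x - 1))  # each factor is (x-1) * ln(y_i)
    return prod

def f_prime(x):
    prod = 1.0
    for y in ys:
        prod *= log(y)
    return (n + 1) * (x - 1)**n * prod

x, h = 1.7, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference
err = abs(numeric - f_prime(x))
```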
|
634,127 | <p>How to prove this (true or not)?</p>
<blockquote>
<p>$f(a,b) = f(a,c)$ must hold if $b = c$</p>
</blockquote>
<p><b>Note:</b> <i><b>f(a,b)</b> is a function with <b>a</b> & <b>b</b></i> parameters</p>
<p>thanks</p>
| arkadeep | 120,499 | <p>See, it is a bivariate function that you have given here.
Think of it in 3-D (a 2-dimensional domain, with the functional value on the third axis, perpendicular to the other two).
Now you have the equation $f(a,b)=f(a,c)$, and we have to decide whether the equation can only hold if $b=c$.
But can we conclude anything about the function, such as "if $f(a,b)=f(a,c)$ then $b$ should be equal to $c$", without knowing the actual function?
Here I have an example: $f(a,b)=a^2+b^2$ and $f(a,c)=a^2+c^2$.
For the first function, if we put $a=1$ and $b=1$ then the functional value is $2$; fine. If we put $a=1$ and $c=-1$ in the second, then the functional value is also $2$. But see, here <strong>$b$ ($b=1$) is not equal to $c$ ($c=-1$)</strong>, <strong>and still the functional values are the same!</strong>
So what do you think, my friend? Don't we need the actual characteristics of the function?</p>
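<p>The example can be checked directly (my addition):</p>

```python
def f(a, b):
    return a**2 + b**2

equal_inputs = f(1, 1) == f(1, 1)             # b = c always gives equal values
converse_fails = f(1, 1) == f(1, -1) and 1 != -1   # equal values, b != c
```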
|
203,456 | <p>Please help me prove $\log_b a\cdot\log_c b\cdot\log_a c=1$, where $a,b,c$ are positive numbers different from $1$.</p>
| Madrit Zhaku | 34,867 | <p>Before we prove the given identity, we prove this identity:</p>
<p>$$\log_b a\log_c b=\log_c a$$</p>
<p>Proof: Apply the formula $\log_a b=\frac{\log_x b}{\log_x a}$:</p>
<p>$$\frac{\log a}{\log b}\cdot\frac{\log b}{\log c}=\frac{\log a}{\log c}=\log_c a$$</p>
<p>Now proof the given identity.</p>
<p>$$\log_b a\cdot\log_c b\cdot\log_a c=1$$</p>
<p>$$\log_c a\cdot\log_a c=1$$</p>
<p>$$\frac{1}{\log_a c}\cdot\log_a c=1$$</p>
<p>$$1=1$$</p>
|
500,632 | <p>Find all such lines that are tangent to the following curves:</p>
<p>$$y=x^2$$ and $$y=-x^2+2x-2$$</p>
<p>I have been pounding my head against the wall on this. I used the derivatives and assumed that the derivatives must be equal at those tangent points, but could not figure out the equations. An explanation will be appreciated.</p>
| Old John | 32,441 | <p>Here is a hint for a method which avoids calculus:</p>
<p>The line $y=ax+b$ is a tangent to a quadratic such as $y=x^2$ if and only if the quadratic equation you get by solving these equations simultaneously has a double root. This will give you an equation which must be satisfied by the unknowns $a$ and $b$.</p>
<p>You can do the same for the line $y=ax+b$ and your other quadratic, then solve the two simultaneous equations to find $a$ and $b$.</p>
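A sympy sketch of this discriminant method (illustrative; the variable names are mine):

```python
import sympy as sp

a, b, x = sp.symbols('a b x')

# y = a x + b is tangent to each parabola iff the corresponding quadratic
# in x has a double root, i.e. zero discriminant.
d1 = sp.discriminant(x**2 - (a * x + b), x)               # for y = x^2
d2 = sp.discriminant(-x**2 + 2 * x - 2 - (a * x + b), x)  # for y = -x^2+2x-2
solutions = sp.solve([d1, d2], [a, b])
print(solutions)
```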
|
500,632 | <p>Find all such lines that are tangent to the following curves:</p>
<p>$$y=x^2$$ and $$y=-x^2+2x-2$$</p>
<p>I have been pounding my head against the wall on this. I used the derivatives and assumed that their derivatives must be equal at those tangent point but could not figure out the equations. An explanation will be appreciated.</p>
| Kaster | 49,333 | <p>Tangent line of first equation through some point $(x_1,f_1(x_1))$ is
$$
y = f_1(x_1) + f'_1(x_1)(x-x_1) = x_1^2 + 2x_1(x-x_1) = 2x_1x-x_1^2
$$
Tangent line of second equation through some point $(x_2, f_2(x_2))$ is
$$
y = f_2(x_2) + f'_2(x_2)(x-x_2) = -x_2^2+2x_2-2 + (-2x_2+2)(x-x_2)
= 2(1-x_2)x+x_2^2-2
$$
In order these two lines to be the same one must require
$$
2x_1 = 2(1-x_2) \\
-x_1^2 = x_2^2-2
$$
It is quite simple to solve so I leave it to you. Solution is
$$
k = 1 \pm \sqrt 3 \\
b = -1 \mp \frac {\sqrt 3}2
$$
for the line $y = kx + b$.</p>
<p><img src="https://i.stack.imgur.com/g3UvW.png" alt="para"></p>
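A quick numerical check of these lines; note that the signs pair oppositely: k = 1+√3 goes with b = -1-√3/2, and k = 1-√3 with b = -1+√3/2 (a small illustrative sketch):

```python
import math

def discs(k, b):
    # discriminants of x^2 - (k x + b) and of -x^2 + 2x - 2 - (k x + b);
    # both must vanish for y = k x + b to be tangent to both parabolas
    return k * k + 4 * b, (2 - k) ** 2 - 4 * (2 + b)

s = math.sqrt(3)
for k, b in [(1 + s, -1 - s / 2), (1 - s, -1 + s / 2)]:
    d1, d2 = discs(k, b)
    assert abs(d1) < 1e-9 and abs(d2) < 1e-9   # both tangencies hold
```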
|
105,190 | <p>Let $\zeta_K(s)$ be the Dedekind zeta function for a number field $K$. We can understand the first non-vanishing coefficient of its Laurent series via the class number formula. Is anything known/conjectured about the next term?</p>
<p>On a related note, the BSD conjecture predicts the value of the first non-vanishing Taylor coefficient of the Hasse-Weil $L$-function of (say) an elliptic curve. Are there any conjectures about the coefficients after that?</p>
| paul garrett | 15,629 | <p>The questions about Birch-SwinnertonDyer are much subtler than the first question, and I do not pretend to have anything to say about it.</p>
<p>Edit: and, indeed, the following bits of information are a "weak" answer, at the level of saying "yes, just as the Euler-Mascheroni constant (and a family of such constants) appears in zeta, similar constants provably appear in the Laurent expansion at 1 for Dedekind zetas." This is very distinct from any discussion at the <em>midpoint</em> of the critical strip, I agree, despite class numbers' appearance at 1. (Some potential confusion about whether that central point is 1 or 1/2, due to traditional normalization of zeta functions of elliptic curves.) [end-of-edit]</p>
<p>About the first question, it has been known for some decades (though I do not know a citation, perhaps because not so much came out of such ideas) that, following Shintani (and generalizations by Satake decades after), especially for totally real fields (where the discussion was motivated by special-value results at positive even integers, as an approach complementing Siegel's and Klingen's), the non-zero ideals can be expressed as a finite sum over ideal classes, each of which can be expressed as a sum over elements of a lattice modulo units, ... the key point being that the latter has (many) reasonable sets of representatives from the intersection of a "rational cone" with a lattice. For very general reasons (extrapolated in Ash-Mumford-Rapoport-Tai, but anticipated before... e.g., by Shintani) this intersection of lattice with rational cone is a <em>finite</em> sum of sums of positive-integer-coefficiented sums of lattice points... </p>
<p>Shintani's goal, and Satake's, was to obtain an expression for values of L-functions with a meaning a bit different from Siegel's or Klingen's (not quite addressing things like Lichtenbaum's K-theoretic conjectures, but... who knows?)</p>
<p>Incidental to that, as I myself once considered (fruitlessly), the simple classical argument about Laurent series of zeta at $1$ extends to give an analogous, obviously fussier, result for totally real, and probably other, number fields. </p>
<p>I think the analogous question for all other (automorphic) L-functions is much subtler. Edit-edit: e.g., consider the Kronecker limit formula! (Again, not <em>midpoint</em> of critical strip, but edge...)</p>
|
9,629 | <p>Are people facing the problem of LaTeX symbols not loading in MSE? I have a high-speed internet connection but I have been facing this problem since yesterday; any suggestions? It says "math processing error" when my connection is slow, but that is not the case here; I am just seeing the raw LaTeX symbols instead of the compiled, complete picture.</p>
| Balbichi | 24,690 | <pre><code>Uncaught TypeError: Cannot read property 'strings' of undefined TeX-AMS_HTML.js:43
a.CreateLocaleMenu TeX-AMS_HTML.js:43
a.showRenderer.a.cookie.showRenderer.p.showRenderer TeX-AMS_HTML.js:43
CALLBACK.execute MathJax.js:29
(anonymous function)
MathJax.Object.Subclass.Execute MathJax.js:29
QUEUE.Subclass.ExecuteHooks MathJax.js:29
CALLBACK.execute MathJax.js:29
(anonymous function)
MathJax.Object.Subclass.Execute MathJax.js:29
QUEUE.Subclass.Post MathJax.js:29
CALLBACK.execute MathJax.js:29
(anonymous function)
BASE.Object.Subclass.Process MathJax.js:29
BASE.Object.Subclass.call MathJax.js:29
WAITEXECUTE MathJax.js:29
(anonymous function)
MathJax.Object.Subclass.Execute MathJax.js:29
a.Ajax.loadComplete MathJax.js:29
(anonymous function) TeX-AMS_HTML.js:59</code></pre>
|
1,380,508 | <p>Is there a proof that the Runge phenomenon will always occur when interpolating with higher-order polynomials, or is this just observed empirically?</p>
| mathcounterexamples.net | 187,663 | <p>The Runge phenomenon doesn't occur for all functions. For a detailed analysis of polynomial interpolation at equidistant points you can have a look <a href="http://www.maa.org/sites/default/files/pdf/upload_library/22/Ford/Epperson329-341.pdf" rel="nofollow">here</a></p>
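For the classic example where it does occur, here is a small numerical sketch (the Runge function f(x) = 1/(1+25x²) is my choice of illustration, not from the answer): the maximum interpolation error at equidistant nodes grows with the degree instead of shrinking.

```python
import numpy as np

def max_interp_error(deg):
    # Interpolate the Runge function at deg+1 equidistant nodes on [-1, 1]
    # and return the maximum error on a dense evaluation grid.
    nodes = np.linspace(-1, 1, deg + 1)
    coeffs = np.polyfit(nodes, 1 / (1 + 25 * nodes**2), deg)
    xs = np.linspace(-1, 1, 2001)
    return np.max(np.abs(np.polyval(coeffs, xs) - 1 / (1 + 25 * xs**2)))

for deg in (5, 10, 15):
    print(deg, max_interp_error(deg))
```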
|
4,046,356 | <p>Recently, some of the remarkable properties of second-order
Eulerian numbers <span class="math-container">$ \left\langle\!\!\left\langle n\atop k\right\rangle\!\!\right\rangle$</span> <a href="https://oeis.org/A340556" rel="nofollow noreferrer">A340556</a> have been proved on MSE [ <a href="https://math.stackexchange.com/questions/4034224"> a </a>,
<a href="https://math.stackexchange.com/questions/4037172"> b </a>, <a href="https://math.stackexchange.com/questions/4037946"> c </a> ].</p>
<p>But there are also other notable identities to which these numbers lead.
For instance, we also noticed the following identity, which we haven't seen
elsewhere (reference?):</p>
<p><span class="math-container">$$ \sum_{j=0}^{k} \binom{n-j}{n-k}
\left\langle\!\! \left\langle n\atop j\right\rangle\!\! \right\rangle \,=\,
\sum_{j=0}^k (-1)^{j+k} \binom{n+k}{n+j} \left\{ n+j \atop j\right \} \quad ( n \ge 0) $$</span></p>
<p>If we use the notation <span class="math-container">$ \operatorname{W}_{n, k} $</span> for these numbers,
we can also introduce the corresponding polynomials.</p>
<p><span class="math-container">$$ \operatorname{W}_{n}(x) = \sum_{k=0}^n \operatorname{W}_{n, k} x^k \quad ( n \ge 0) $$</span></p>
<p>So far so routine, but then came the surprise:</p>
<p><span class="math-container">$$ 3^n W_n\left(-\frac13\right) \, = \, 2^n \left\langle\!\!\left\langle - \frac{1}{2} \right\rangle\!\!\right\rangle_n \quad ( n \ge 0) $$</span></p>
<p>On the right side are the numbers we recently asked about their
<a href="https://math.stackexchange.com/questions/4044848">combinatorial significance</a>!</p>
<p>Should I trust this strange equation?</p>
| Marko Riedel | 44,883 | <p>In trying to verify the identity</p>
<p><span class="math-container">$$\sum_{j=0}^{k} {n-j \choose n-k}
\left\langle\!\! \left\langle n\atop j
\right\rangle\!\! \right\rangle
= \sum_{j=0}^k (-1)^{j+k} {n+k \choose n+j}
\left\{ n+j \atop j\right \}$$</span></p>
<p>we quote from <a href="https://math.stackexchange.com/questions/4034224/">MSE</a>
the following identity</p>
<p><span class="math-container">$$\left\langle\!\! \left\langle n\atop k
\right\rangle\!\! \right\rangle =
\sum_{j=0}^k (-1)^{k-j} {2n+1\choose k-j} {n+j\brace j}.$$</span></p>
<p>We get for the LHS</p>
<p><span class="math-container">$$\sum_{j=0}^{k} {n-j \choose n-k}
\sum_{p=0}^j (-1)^{j-p} {2n+1\choose j-p} {n+p\brace p}
\\ = \sum_{p=0}^k {n+p\brace p}
\sum_{j=p}^k (-1)^{j-p} {2n+1\choose j-p} {n-j\choose n-k}.$$</span></p>
<p>The inner sum is
<span class="math-container">$$\sum_{j=0}^{k-p} (-1)^j {2n+1\choose j}
{n-j-p\choose n-k}
\\ = \sum_{j=0}^{k-p} (-1)^j {2n+1\choose j}
{n-j-p\choose k-p-j}
\\ = [z^{k-p}] (1+z)^{n-p}
\sum_{j=0}^{k-p} (-1)^j {2n+1\choose j}
\frac{z^j}{(1+z)^j}.$$</span></p>
<p>Here the coefficient extractor enforces the upper limit of the sum
and we may extend <span class="math-container">$j$</span> to infinity:</p>
<p><span class="math-container">$$[z^{k-p}] (1+z)^{n-p}
\sum_{j\ge 0} (-1)^j {2n+1\choose j}
\frac{z^j}{(1+z)^j}
\\ = [z^{k-p}] (1+z)^{n-p}
\left(1-\frac{z}{1+z}\right)^{2n+1}
= [z^{k-p}]
\frac{1}{(1+z)^{n+p+1}}
\\ = (-1)^{k-p} {k-p+n+p\choose n+p}
= (-1)^{k-p} {n+k\choose n+p}.$$</span></p>
<p>Introducing the leading term,</p>
<p><span class="math-container">$$\sum_{p=0}^k (-1)^{k+p}
{n+k\choose n+p} {n+p\brace p}$$</span></p>
<p>This is the claim.</p>
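The derivation above can also be checked numerically by taking the quoted Stirling-number formula as the working definition of the second-order Eulerian numbers; a sympy sketch:

```python
# Check: sum_j C(n-j, n-k) <<n,j>>  ==  sum_j (-1)^(j+k) C(n+k, n+j) S(n+j, j),
# with <<n,k>> defined via the quoted identity from the linked MSE answer.
from sympy import binomial
from sympy.functions.combinatorial.numbers import stirling

def eulerian2(n, k):
    return sum((-1) ** (k - j) * binomial(2 * n + 1, k - j) * stirling(n + j, j)
               for j in range(k + 1))

def lhs(n, k):
    return sum(binomial(n - j, n - k) * eulerian2(n, j) for j in range(k + 1))

def rhs(n, k):
    return sum((-1) ** (j + k) * binomial(n + k, n + j) * stirling(n + j, j)
               for j in range(k + 1))

assert all(lhs(n, k) == rhs(n, k) for n in range(7) for k in range(n + 1))
```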
<p>As for the polynomials we find</p>
<p><span class="math-container">$$\sum_{k=0}^n x^k
\sum_{j=0}^{k} {n-j \choose n-k}
\left\langle\!\! \left\langle n\atop j
\right\rangle\!\! \right\rangle
= \sum_{j=0}^n
\left\langle\!\! \left\langle n\atop j
\right\rangle\!\! \right\rangle
\sum_{k=j}^n {n-j\choose n-k} x^k
\\ = \sum_{j=0}^n
\left\langle\!\! \left\langle n\atop j
\right\rangle\!\! \right\rangle x^j
\sum_{k=0}^{n-j} {n-j\choose n-j-k} x^k
= \sum_{j=0}^n
\left\langle\!\! \left\langle n\atop j
\right\rangle\!\! \right\rangle x^j
\sum_{k=0}^{n-j} {n-j\choose k} x^k
\\ = \sum_{j=0}^n
\left\langle\!\! \left\langle n\atop j
\right\rangle\!\! \right\rangle x^j
(1+x)^{n-j}
= (1+x)^n \sum_{j=0}^n
\left\langle\!\! \left\langle n\atop j
\right\rangle\!\! \right\rangle
\frac{x^j}{(1+x)^j}.$$</span></p>
<p>Multiplying by <span class="math-container">$3^n$</span> and evaluating at <span class="math-container">$x=-1/3$</span> we obtain</p>
<p><span class="math-container">$$3^n \frac{2^n}{3^n}
\sum_{j=0}^n
\left\langle\!\! \left\langle n\atop j
\right\rangle\!\! \right\rangle
\frac{(-1/3)^j}{(2/3)^j}
= 2^n
\sum_{j=0}^n
\left\langle\!\! \left\langle n\atop j
\right\rangle\!\! \right\rangle
\left(-\frac{1}{2}\right)^j$$</span></p>
<p>as claimed. </p>
<p><strong>Remark.</strong> Maybe we can pause here for a few days. Thanks!</p>
|
237,708 | <p>Does the series </p>
<p>$$\sum_{n=1}^{\infty}\log n - (\log n)^{n/(n+1)}$$</p>
<p>converge?</p>
| WimC | 25,313 | <p>Let $n \geq 3$ then</p>
<p>$$
\log(n) - \log(n)^{n/(n+1)} = \frac{\log(n)^{n/(n+1)}}{n+1} (n+1) \left(\log(n)^{1/(n+1)} - 1 \right) \geq \frac{\log(\log(n))}{n+1}
$$</p>
<p>Since $\log(\log(n)) \to \infty$ and $\sum 1/(n+1)$ diverges, this series itself diverges.</p>
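The key lower bound can be checked numerically as well (an illustrative sketch; the range is arbitrary):

```python
import math

def term_lower_bound_ok(n):
    # log(n) - log(n)^(n/(n+1)) >= log(log(n)) / (n+1), valid for n >= 3
    lhs = math.log(n) - math.log(n) ** (n / (n + 1))
    rhs = math.log(math.log(n)) / (n + 1)
    return lhs >= rhs

assert all(term_lower_bound_ok(n) for n in range(3, 2000))
```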
|
96,191 | <p>I am trying to calculate the following integral which contains a parameter.
<a href="https://i.stack.imgur.com/qUJ9f.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qUJ9f.jpg" alt="enter image description here"></a></p>
<p>I have used the Integrate and FullSimplify using assumptions but Mathematica fails to produce an analytical solution.</p>
<pre><code>Integrate[((Sin[u]^1.82 + (parameter^(-1))^0.63*Sin[u]^2.45)*
Sin[parameter + u]^1.82)/(parameter + Sin[u])^2, {u, 0, Pi},
Assumptions -> Inequality[0, Less, parameter, Less, 1]]
</code></pre>
<p>Is there another function I can use? If not, what function would you recommend in order to estimate the integral? My end goal is to replace π (integral upper limit) with a second parameter.</p>
| Jason B. | 9,490 | <p>It really depends on the level at which you want to estimate this function. Do you want to end up with a nice closed expression? Do you simply need <strong>an expression</strong> to model the data? You can't be sure that an analytic solution exists. I tried the Rubi package (apmaths.uwo.ca/~arich) and it didn't give a solution. But you can always fit it to some curve.</p>
<pre><code>{redata, imdata} =
Transpose[{{#1, Re[#2]}, {#1, Im[#2]}} & @@@
Table[
{parameter, NIntegrate[((Sin[u]^1.82 + (parameter^(-1))^0.63*Sin[u]^2.45)*
Sin[parameter + u]^1.82)/(parameter + Sin[u])^2, {u, 0,Pi}]}
, {parameter, 0.005, 1, .005}]];
ListLinePlot /@ {redata, imdata}
</code></pre>
<p><a href="https://i.stack.imgur.com/8XtLN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8XtLN.png" alt="enter image description here"></a></p>
<p>This suggests to me that we could model both the real and imaginary parts with a multi-exponential decay. </p>
<pre><code> Grid[Table[
func = Sum[A[n] Exp[-B[n] x], {n, 1, nexp}];
params = Flatten[Table[{A[n], B[n]}, {n, nexp}]];
refit = NonlinearModelFit[redata, func, params, x];
imfit = NonlinearModelFit[imdata, func, params, x];
{Show[ListLinePlot[redata, ImageSize -> 300],
Plot[refit[x], {x, 0, 1}, PlotStyle -> {Dashed, Red}]],
ListPlot[Transpose[{redata[[All, 1]], refit["FitResiduals"]}],ImageSize -> 300],
Show[ListLinePlot[imdata, ImageSize -> 300],
Plot[imfit[x], {x, 0, 1}, PlotStyle -> {Dashed, Red}]],
ListPlot[Transpose[{imdata[[All, 1]], imfit["FitResiduals"]}],ImageSize -> 300]}
, {nexp, 2, 5}]]
</code></pre>
<p><a href="https://i.stack.imgur.com/gWqek.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gWqek.png" alt="enter image description here"></a></p>
<p>So you could decide to ignore errors smaller than, say 0.5, which would allow you to ignore the imaginary part altogether, and take an answer with three exponential decay terms</p>
<p>$$51.6583 e^{-164.728 x}+15.9181 e^{-23.0781 x}+6.3396 e^{-2.9963 x}$$</p>
<p>A similar strategy would work for the upper integration limit.</p>
|
86,762 | <p>The other day, my teacher was talking about infinite-dimensional vector spaces and the complications that arise when trying to find a basis for them. He mentioned that it's been proven that some (or all, I do not quite remember) infinite-dimensional vector spaces have a basis (the result uses the Axiom of Choice, if I remember correctly), that is, an infinite list of linearly independent vectors, such that any element in the space can be written as a finite linear combination of them. However, my teacher mentioned that actually finding one is really complicated, and I got the sense that it was basically impossible, which reminded me of the Banach-Tarski paradox, where it's technically 'possible' to decompose the sphere in a given paradoxical way, but this cannot be actually exhibited. So my question is: is the basis situation analogous to that, or is it actually possible to explicitly find a basis for infinite-dimensional vector spaces?</p>
| Qiaochu Yuan | 232 | <p>It's known that the statement that every vector space has a basis is equivalent to the <a href="http://en.wikipedia.org/wiki/Axiom_of_choice">axiom of choice</a>, which is independent of the <a href="http://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory">other axioms of set theory</a>. This is generally taken to mean that it is in some sense impossible to write down an "explicit" basis of an arbitrary infinite-dimensional vector space. On the other hand,</p>
<ul>
<li>Some infinite-dimensional vector spaces do have easily describable bases; for example, we are often interested in the subspace spanned by a countable sequence $v_1, v_2, ...$ of linearly independent vectors in some vector space $V$, and this subspace has basis $\{ v_1, v_2, ... \}$ by design.</li>
<li>For many infinite-dimensional vector spaces of interest we don't care about describing a basis anyway; they often come with a <a href="http://en.wikipedia.org/wiki/Topological_vector_space">topology</a> and we can therefore get a lot out of studying <a href="http://en.wikipedia.org/wiki/Dense_set">dense</a> subspaces, some of which, again, have easily describable bases. In <a href="http://en.wikipedia.org/wiki/Hilbert_space">Hilbert spaces</a>, for example, we care more about <a href="http://en.wikipedia.org/wiki/Orthonormal_basis">orthonormal bases</a> (which are not Hamel bases in the infinite-dimensional case); these span dense subspaces in a particularly nice way. </li>
</ul>
|
611,529 | <p>$$i^3=iii=\sqrt{-1}\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)(-1)}=\sqrt{-1}=i
$$</p>
<p>Please take a look at the equation above. What am I doing wrong, given that $i^3$ should be $-i$, not $i$?</p>
| Gautam Shenoy | 35,983 | <p>I'm sure everyone has answered the question appropriately. But here's my 2 cents:</p>
<p>From the Argand plane perspective, multiplying a complex number by $i$ is equivalent to rotating it about a circle (with radius = modulus of complex number) counterclockwise by 90 degrees. So ask yourself where you end up when you take $i$ and multiply it with $i$ twice.</p>
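Python's built-in complex numbers give a one-line check of this rotation picture (just an illustration):

```python
# Multiplying by 1j rotates by 90 degrees counterclockwise; starting from i
# and rotating twice more lands at -i.
z = 1j
for _ in range(2):
    z *= 1j
assert z == -1j       # i * i * i = -i
assert 1j ** 3 == -1j
```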
|
611,529 | <p>$$i^3=iii=\sqrt{-1}\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)(-1)}=\sqrt{-1}=i
$$</p>
<p>Please take a look at the equation above. What am I doing wrong, given that $i^3$ should be $-i$, not $i$?</p>
| Lucian | 93,448 | <p>$\sqrt[n]z$ does not return a single value, but <em>n</em> complex values. Hence your confusion, since both <em>i</em> and $-i$ are among the square roots of $-1$.</p>
|
455,230 | <p>I found this proposition and don't see exactly as to why it is true and even more so, why the converse is false:</p>
<p>Proposition 1. The equivalence between the proposition $z \in D$ and the proposition $(\exists x \in D)x = z$ is provable from the definitory equations of the existential quantifier and of the equality relation. If $D = \{t_{1},t_{2},...t_{n}\}$, the sequent
$z = t_{1} \vee z = t_{2} \vee z \vee ... z = t_{n} \vdash z \in D$ is provable from the definition of the additive disjunction $\vee$.</p>
<p>The converse sequent $z \in D \vdash z = t_{1} \vee z = t_{2} \vee ... z = t_{n}$ is not provable.</p>
<p>The author goes on to say: "We adopt the intuitionistic interpretation of disjunction. With respect to it, one can characterize a particular class of finite sets"</p>
<p>On the first part of the proposition, well I am not sure what the point is as if we take an element z in D, then we could just call this element x and hence this x = z. Is there something more to this? On the second part of Proposition 1 since $z = \text{ some } t \in D$ since t is in the set of axioms and $t = z$, then $t \vdash z$ as z is derivable from t.</p>
<p>For the converse if $z \in D$ then why wouldn't z = some $t \in D$? Is this because of the Incompleteness theorem? That perhaps there D as a set of axioms has some consequence which can not be proven by the set of axioms in D? Or perhaps I am way off here.</p>
<p>Any ideas?</p>
<p>Thanks,</p>
<p>Brian</p>
| Kevin Ventullo | 546 | <p>Consider the algebra $K[T]$. The simple $K[T]$-modules are of the form $K[T]/P(T)$ for some irreducible polynomial $P$. These do not occur as submodules of $K[T]$, since every such submodule contains a free module, and hence is infinite dimensional over $K$. </p>
|
2,734,374 | <p>I don't think it is possible because that entails that only the <span class="math-container">$\mathbf 0$</span>-vector is in the eigenspace, but <span class="math-container">$\mathbf 0$</span> is not an eigenvector by definition. </p>
<p>However, my textbook says:</p>
<blockquote>
<p>For an <span class="math-container">$n\times n$</span> matrix, if there are <span class="math-container">$n$</span> distinct eigenvalues, then all eigenspaces have dimension at most <span class="math-container">$1$</span>.</p>
</blockquote>
<p>which seems to imply that eigenspaces of dimension <span class="math-container">$0$</span> are possible.</p>
| the_candyman | 51,370 | <p>Following the definition, <span class="math-container">$\lambda$</span> is an eigenvalue of the matrix <span class="math-container">$A$</span> if there exists a non-zero vector <span class="math-container">$v$</span> such that:</p>
<p><span class="math-container">$$Av = \lambda v.$$</span></p>
<p>The definition itself assures that, if <span class="math-container">$\lambda$</span> is an eigenvalue, then there must also be an eigenvector <span class="math-container">$v$</span>. The presence of at least one eigenvector implies that the eigenspace relative to <span class="math-container">$\lambda$</span> has dimension <strong>at least</strong> equal to <span class="math-container">$1$</span>.</p>
<p>You cannot define an eigenvalue without an eigenvector, and vice versa.</p>
|
2,734,374 | <p>I don't think it is possible because that entails that only the <span class="math-container">$\mathbf 0$</span>-vector is in the eigenspace, but <span class="math-container">$\mathbf 0$</span> is not an eigenvector by definition. </p>
<p>However, my textbook says:</p>
<blockquote>
<p>For an <span class="math-container">$n\times n$</span> matrix, if there are <span class="math-container">$n$</span> distinct eigenvalues, then all eigenspaces have dimension at most <span class="math-container">$1$</span>.</p>
</blockquote>
<p>which seems to imply that eigenspaces of dimension <span class="math-container">$0$</span> are possible.</p>
| Gatgat | 550,782 | <p>It doesn't imply that dimension 0 is possible. You know by definition that the dimension of an eigenspace is at least 1. So if the dimension is also at most 1 it means the dimension is exactly 1. It's a classic way to show that something is equal to exactly some number. First you show that it is at least that number then that it is at most that number.</p>
|
2,734,374 | <p>I don't think it is possible because that entails that only the <span class="math-container">$\mathbf 0$</span>-vector is in the eigenspace, but <span class="math-container">$\mathbf 0$</span> is not an eigenvector by definition. </p>
<p>However, my textbook says:</p>
<blockquote>
<p>For an <span class="math-container">$n\times n$</span> matrix, if there are <span class="math-container">$n$</span> distinct eigenvalues, then all eigenspaces have dimension at most <span class="math-container">$1$</span>.</p>
</blockquote>
<p>which seems to imply that eigenspaces of dimension <span class="math-container">$0$</span> are possible.</p>
| Marc van Leeuwen | 18,880 | <p>It is a matter of convention. What everybody should agree on is that $\lambda$ being an eigenvalue of$~A$ means that $\dim(\ker(A-\lambda I))>0$, so the dimension of the eigenspace associated to an eigenvalue is never$~0$. However if the dimension in that formula is$~0$, so if $\lambda$ is <em>not</em> an eigenvalue, then one could still agree that $\ker(A-\lambda I)$ may be called the eigenspace $E_\lambda$ of$~A$ for (the non-eigenvalue)$~\lambda$. This may seem a bit weird, but there are many occasions where it is a convenience to be able to talk about $E_\lambda$ for any scalar$~\lambda$, without first having to ensure that $\lambda$ is an eigenvalue. For instance, if $A$ is a projector (so $A^2=A$) then it is always true that the whole space decomposes as $E_0\oplus E_1$, though it might happen that one of $E_0$ and $E_1$ has dimension$~0$ (namely if $A=I$ respectively $A=0$); this statement would be more complicated to make if it were forbidden to mention $E_\lambda$ unless $\lambda$ was actually an eigenvalue.</p>
|
1,587,498 | <p>I need some help with this (seemingly) simple problem. As before, it comes from Apostol "Calculus", Volume 1, Section 8.28, Question 23 and it states:</p>
<p>Solve the differential equation $(1+y^2e^{2x})y^{'} + y = 0$ by introducing a change of variable of the form $y = ue^{mx}$, where $m$ is constant and $u$ is a new unknown function.</p>
<p>This seems straightforward, but I am having problems. My working so far is as follows:</p>
<p>$$
y = ue^{mx} \Rightarrow y^{'} = u(me^{mx}) + u^{'}e^{mx} = e^{mx}(u^{'}+mu)
$$
so we obtain:
$$
(1+(ue^{mx})^{2}e^{2x})e^{mx}(u^{'}+mu) + ue^{mx}= 0
$$
$\Rightarrow$
$$
(1+u^{2}e^{2(m+1)x})e^{mx}(u^{'}+mu) + ue^{mx}= 0
$$
$\Rightarrow$
$$
e^{mx}((1+u^{2}e^{2(m+1)x})(u^{'}+mu) + u) = 0
$$
$\Rightarrow$
$$
(1+u^{2}e^{2(m+1)x})(u^{'}+mu) + u = 0
$$
$\Rightarrow$
$$
u^{'}+mu = \frac{-u}{1+u^{2}e^{2(m+1)x}}
$$
$\Rightarrow$
$$
u^{'} = \frac{-u}{1+u^{2}e^{2(m+1)x}} - mu
$$
$\Rightarrow$
$$
u^{'} = \frac{-u - mu(1+u^{2}e^{2(m+1)x})}{1+u^{2}e^{2(m+1)x}}
$$
$\Rightarrow$
$$
u^{'} = \frac{-u - mu - mu^{3}e^{2(m+1)x}}{1+u^{2}e^{2(m+1)x}}
$$
And here I get stuck as to how to proceed. I'm trying to convert this (somehow) into a differential equation with separable variables to allow for extracting a solution, but the trick seems to be eluding me. If anyone has any suggestions, that would be much appreciated.</p>
| André Nicolas | 6,312 | <p>Hint: We can choose $m$ freely. Let $m=-1$.</p>
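Following the hint, here is a quick sympy check (an illustrative sketch, not part of the original hint) that $m=-1$ turns the equation into a separable one, namely $(1+u^2)u' = u^3$:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')

y = u(x) * sp.exp(-x)                     # the substitution y = u e^{-x} (m = -1)
ode = (1 + y**2 * sp.exp(2 * x)) * sp.diff(y, x) + y
separable = (1 + u(x)**2) * sp.diff(u(x), x) - u(x)**3
# After clearing the common factor e^{-x}, the ODE reduces to (1+u^2)u' - u^3 = 0.
assert sp.simplify(ode * sp.exp(x) - separable) == 0
```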
|
535,080 | <p>For the following example: </p>
<blockquote>
<p>Let the topological space $X$ be the real line $\mathbb{R}$. An open set is any set whose complement is finite. Let $S=[0,1]$. Find the closure, the interior, and the boundary of $S$. </p>
</blockquote>
<p>What is meant by let the topological space $X$ be the real line $\mathbb{R}$?</p>
| D Left Adjoint to U | 26,327 | <p>What it should say is let $X$ be a topological space on $\Bbb{R}$ whose open sets consist of all subsets of $\Bbb{R}$ that have a finite complement in $\Bbb{R}$. </p>
|
2,872,492 | <p>My work starts with a supposition of $N$, so that for $n > N$ we have $\vert b \vert ^n < \epsilon$.</p>
<p>Since $0 < \vert b \vert < 1$, the logarithm with base $\vert b \vert$ is a decreasing function, so it reverses the inequality when applied:
$$\vert b \vert ^n < \epsilon $$
$$n > \text{log}_{\vert b \vert}\epsilon $$</p>
<p>With the scratch work done, let's start the formal proof.
Let $\epsilon > 0$. Let $N = \text{log}_{\vert b \vert}\epsilon$. If $n > N$ and $0 < \vert b \vert< 1$, then
$$\vert b \vert ^n < \vert b \vert ^{\text{log}_{\vert b \vert}\epsilon} = \epsilon$$
Hence by definition, $\text{lim} \ b^n=0$ </p>
<p>Have I done everything correctly? I am using in an implicit manner that $\vert b^n \vert = \vert b \vert ^n$, is everything fine with this? (Something is really pinching me up!)</p>
| Lev Bahn | 523,306 | <p>My answer may be overkill, but it offers a different viewpoint.</p>
<p>If $0<|b|<1$, note that</p>
<p>$\sum_{n=0}^{\infty}|b|^n = \frac{1}{1-|b|} <\infty$.</p>
<p>Thus, $\lim_{n\rightarrow \infty}|b|^n=0$ so</p>
<p>$|\lim_{n\rightarrow \infty} b^n|\leq |\lim_{n\rightarrow \infty}|b|^n| =0$ $\implies \lim_{n\rightarrow \infty}b^n=0$.</p>
|
1,621,347 | <p>Is there a closed-form expression for the following definite integral?
\begin{equation}
\mathcal{I} = \int_{\delta_1}^{\delta_2}(1+Ax)^{-L}x^{L}\exp\left(-Bx\right)dx,
\end{equation}
where $A$, $B$, $\delta_1$, and $\delta_2$ are positive constant. $L$ is a positive integer.</p>
<p>I am facing problem due to finite limits $\delta_1$ and $\delta_2$. I know the answer when "$\delta_1 = 0$ and $\delta_2 = \infty$." </p>
| Robert Israel | 8,508 | <p>Since your endpoints are arbitrary, what you need is an antiderivative.<br>
For convenience, apply scaling so that $A=1$. Let</p>
<p>$$ f_L(x) = \int \left( \dfrac{x}{1+x}\right)^L e^{-Bx}\; dx $$</p>
<p>The generating function is</p>
<p>$$ \eqalign{g(t,x) &= \sum_{L=0}^\infty f_L(x) t^L\cr &= \int \sum_{L=0}^\infty \left( \dfrac{xt}{1+x}\right)^L e^{-Bx}\; dx\cr
&= \int \dfrac{1+x}{1+ (1-t)x} e^{-Bx}\; dx \cr
&= \int \left(\dfrac{1}{1-t} - \dfrac{t}{(1-t)(1 + (1-t) x)}\right) e^{-Bx}\; dx\cr
&= - \dfrac{e^{-Bx}}{B(1-t)} - \dfrac{t}{(1-t)^2} e^{B/(1-t)} \text{Ei}\left(-Bx - B/(1-t) \right) }$$</p>
<p>I don't know if there is a single closed form for all $L$, but you can get arbitrarily many terms by taking derivatives of this. Thus</p>
<p>$$ \eqalign{f_0(x) &= - \dfrac{e^{-Bx}}{B} \cr
f_1(x) &= - \dfrac{e^{-Bx}}{B} - e^B \text{Ei}(-B(x+1))\cr
f_2(x) &= - \dfrac{e^{-Bx}}{B} - \dfrac{e^{-Bx}}{x+1} - (B+2) e^{B} \text{Ei}(-B(x+1))\cr
f_3(x) &= - \dfrac{e^{-Bx}}{B} - \dfrac{B+6}{2(x+1)} e^{-Bx} + \dfrac{e^{-Bx}}{2(x+1)^2} - \dfrac{B^2 + B + 6}{2} e^B \text{Ei}(-B(x+1))
}$$</p>
<p>In general, it seems you have
$$ f_L(x) = -\dfrac{e^{-Bx}}{B} + p_L(B) e^B \text{Ei}(-B(x+1)) + e^{-Bx} \sum_{j=1}^{L-1} \dfrac{q_{L,j}(B)}{(x+1)^j}$$</p>
<p>where $p_L$ and $q_{L,j}$ are polynomials of degree $L-1$ and $L-1-j$ respectively.</p>
<p>EDIT:
Indeed, if you take this form and differentiate it, you find</p>
<p>$$ f'_L(x) = e^{-Bx} \left( 1 + \dfrac{p_L(B)}{x+1} - \sum_{j=1}^{L-1}
\dfrac{B q_{L,j}(B) }{(x+1)^j}- \sum_{j=1}^{L-1} \dfrac{j q_{L,j}(B)}{(x+1)^{j+1}}\right) $$</p>
<p>which you want to be $$ e^{-Bx} \left( 1 - \dfrac{1}{1+x}\right)^L
= e^{-Bx} \sum_{j=0}^L {L \choose j} \dfrac{(-1)^j}{(1+x)^j}$$</p>
<p>Equating coefficients of powers of $1+x$ and solving gives you the $p_L$ and $q_{L,j}$.</p>
|
452,653 | <p>If $f:X\rightarrow Y$ is initial in category <strong>Top</strong> then
it is easy to prove that </p>
<blockquote>
<p>(!) the topology on $X$ is the set of preimages of open sets in $Y$. </p>
</blockquote>
<p>Just construct topology $Z$ having
the same underlying subset as $X$ and let the set of these preimages
serve as topology on it. Then from $g:Z\rightarrow X$ with $x\mapsto x$
it is clear that $fg$ is continuous so the conclusion that $g$ is
continuous can be made. Then we are ready.
But now my question: </p>
<blockquote>
<p>what if we do not work in $\textbf{Top}$ but in category $\textbf{Haus}$?</p>
</blockquote>
<p>The constructed topology $Z$ does not have to be a Hausdorff space (or am I overlooking something here?) and if the fact that $f$ is initial in $\textbf{Haus}$ would work then it would justify the conclusion that $g$ can be recognized as an arrow in $\textbf{Haus}$. </p>
<blockquote>
<p>Is there a way out? Or - even stronger - is statement (!) not true in $\textbf{Haus}$?</p>
</blockquote>
| Mhenni Benghorbal | 35,472 | <p><strong>Hint:</strong> Use <a href="http://en.wikipedia.org/wiki/Alternating_series_test" rel="nofollow">alternating series test</a>.</p>
|
452,653 | <p>If $f:X\rightarrow Y$ is initial in category <strong>Top</strong> then
it is easy to prove that </p>
<blockquote>
<p>(!) the topology on $X$ is the set of preimages of open sets in $Y$. </p>
</blockquote>
<p>Just construct topology $Z$ having
the same underlying subset as $X$ and let the set of these preimages
serve as topology on it. Then from $g:Z\rightarrow X$ with $x\mapsto x$
it is clear that $fg$ is continuous so the conclusion that $g$ is
continuous can be made. Then we are ready.
But now my question: </p>
<blockquote>
<p>what if we do not work in $\textbf{Top}$ but in category $\textbf{Haus}$?</p>
</blockquote>
<p>The constructed topology $Z$ does not have to be a Hausdorff space (or am I overlooking something here?) and if the fact that $f$ is initial in $\textbf{Haus}$ would work then it would justify the conclusion that $g$ can be recognized as an arrow in $\textbf{Haus}$. </p>
<blockquote>
<p>Is there a way out? Or - even stronger - is statement (!) not true in $\textbf{Haus}$?</p>
</blockquote>
| Clement C. | 75,808 | <p><strong>Hint:</strong>
For $a_n=\frac{1}{n^\alpha \ln^\beta n}$ ($n\geq 2$), the positive series $\sum a_n$ converges if</p>
<ul>
<li>$\alpha > 1$; or</li>
<li>$\alpha = 1$ and $\beta > 1$</li>
</ul>
<p>(and diverges otherwise.)</p>
<p>This'll allow you to see if your series converges absolutely.</p>
|
3,153,306 | <p>In other words, say I am looking for multiple X</p>
<p>let: </p>
<p>X < 1000005</p>
<p>let the first 18 divisors of X be:
1 | 2 | 4 | 5 | 8 | 10 | 16 | 20 | 25 | 32 | 40 | 50 | 64 | 80 | 100 | 125 | 160 | 200 </p>
<p>finally, I also know: X has exactly 49 divisors. </p>
<p>I will tell you what the answer is... frankly, if you google it, it will probably show up... but again: is this possible knowing only the count/number of divisors and that X cannot be bigger than some number? Thanks</p>
| Henry Lee | 541,220 | <p><span class="math-container">$$f(p)=\frac{1}{p^2-1}\sum_{q=3}^p\frac{q^2-3}{q}$$</span>
If we use the fact that:
<span class="math-container">$$\sum_{q=3}^p\frac{q^2-3}{q}=\sum_{q=3}^pq-3\sum_{q=3}^p\frac1q$$</span>
Now we know that:
<span class="math-container">$$\sum_{q=3}^pq=\sum_{q=1}^pq-\sum_{q=1}^2q=\frac{p(p+1)}{2}-3$$</span>
and, using the asymptotic expansion of the harmonic numbers <span class="math-container">$\sum_{q=1}^p\frac1q=\ln(p)+\gamma+\frac{1}{2p}+O(p^{-2})$</span>,
<span class="math-container">$$-3\sum_{q=3}^p\frac1q=-3\left[\sum_{q=1}^p\frac1q-\sum_{q=1}^2\frac1q\right]\approx-3\left[\ln(p)+\gamma+\frac{1}{2p}-\frac32\right]$$</span>
If we add these together we get:
<span class="math-container">$$f(p)\approx\frac{1}{p^2-1}\left[\frac{p(p+1)}{2}-3\left(\ln(p)+\gamma+\frac{1}{2p}-\frac12\right)\right]$$</span>
Now if you find <span class="math-container">$f'(p)$</span> you can find a maximum value and find what this is</p>
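<p>A quick numerical check (added) of this closed form against the direct sum; the agreement is very close because the harmonic-number expansion used above has error <span class="math-container">$O(p^{-2})$</span>:</p>

```python
import math

# Compare the direct sum defining f(p) with the closed form derived above.
def f_direct(p):
    return sum((q * q - 3) / q for q in range(3, p + 1)) / (p * p - 1)

def f_closed(p):
    gamma = 0.5772156649015329      # Euler–Mascheroni constant
    return (p * (p + 1) / 2
            - 3 * (math.log(p) + gamma + 1 / (2 * p) - 0.5)) / (p * p - 1)

print(f_direct(1000), f_closed(1000))   # the two values agree to many digits
```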
|
2,215,087 | <p>I'm trying to show that $\mathbb{Z}[\sqrt{11}]$ is Euclidean with respect to the function $a+b\sqrt{11} \mapsto|N(a+b\sqrt{11})| = | a^2 -11b^2|$</p>
<p>By multiplicativity, it suffices to show that $\forall x \in \mathbb{Q}(\sqrt{11})\ \exists n \in \mathbb{Z}[\sqrt{11}]:|N(n-x)| < 1$</p>
<p>For the analogous statement for $\mathbb Z [\sqrt6]$, it worked by considering different cases, so I tried to do the same thing here. Here is what I did so far:</p>
<p>Let $x+y\sqrt{11} \in \mathbb Q (\sqrt{11})$</p>
<p><strong>Case 1:</strong> Suppose there exists a $b \in \mathbb Z$ s.t. $|y-b| < \frac{1}{\sqrt{11}}$, then we can choose such a $b$ and a $a \in \mathbb Z$ s.t. $|x-a| \leq \frac{1}{2}$, then we have $|N(x+y\sqrt{11}-(a+b\sqrt{11}))| < 1$</p>
<p>From now on suppose $\forall b \in \mathbb Z: |y-b| > \frac{1}{\sqrt{11}}$</p>
<p><strong>Case 2:</strong> Suppose there exists a $b \in \mathbb Z$ s.t. $|y-b| < \sqrt{\frac{5}{44}}$ Then we have $1 < 11 (y-b)^2 < \frac{5}{4}$, so we can choose $a \in \mathbb Z$ such that $\frac{1}{2} \leq |x-a| \leq 1$, then we have $|N(x+y\sqrt{11}-(a+b\sqrt{11}))| < 1$</p>
<p>From now on suppose $\forall b \in \mathbb Z: |y-b| > \sqrt{\frac{5}{44}}$</p>
<p><strong>Case 3:</strong> Suppose there exists a $b \in \mathbb Z$ s.t. $|y-b| < \sqrt{\frac{2}{11}}$ Then we can choose $a \in \mathbb Z $ s.t. $1 \leq |x-a| \leq \frac{3}{2}$, then we have $|N(x+y\sqrt{11}-(a+b\sqrt{11}))| < 1$</p>
<p>From now on, we may suppose that $|y-b| > \sqrt{\frac{2}{11}}$.</p>
<p>This is where I'm stuck. I tried choosing $b \in \mathbb Z$ s.t. $\frac{1}{2} \geq |y-b| > \sqrt{\frac{2}{11}}$, but then I run into problems, whether I choose $a \in \mathbb Z$ s.t. $1 \leq |x-a| \leq \frac{3}{2}$ or s.t. $ \frac{3}{2} \leq |x-a| \leq 2$</p>
| Chan Tai Man | 876,234 | <p>I am a novice learning to write simple proofs, so I welcome corrections and suggestions. Below is a detailed workout; most maths students will find it unnecessarily verbose. Oppenheim (1934) proved, among other results, that <span class="math-container">$\mathbb{Z}[\sqrt{11}]$</span> has a division algorithm and hence is a Euclidean domain. We unpack his proof here.</p>
<p>Let <span class="math-container">$R = \mathbb{Z}[\sqrt{11}] = \{ s+t\sqrt{11} : s,t \in \mathbb{Z} \} $</span> be a quadratic integer ring. It is said to be a Euclidean domain if there are:</p>
<p>(a) a well defined norm <span class="math-container">$N(\alpha) \leq N(\alpha\beta)$</span>, and</p>
<p>(b) a division algorithm <span class="math-container">$\alpha = q \beta +r$</span> such that either (i) <span class="math-container">$r=0$</span>, or (ii) <span class="math-container">$r \ne 0$</span> and <span class="math-container">$N(r) < N(\beta)$</span></p>
<p>with <span class="math-container">$\alpha,\beta \in R$</span>, <span class="math-container">$\beta \ne 0$</span> and some quotient and remainder <span class="math-container">$q,r \in R$</span>.</p>
<p>Let <span class="math-container">$\alpha = s+t\sqrt{11} \in R$</span>. Define norm <span class="math-container">$N: R \setminus\{0\} \rightarrow \mathbb{Z}_{\geq 0}$</span> by setting <span class="math-container">$\alpha \mapsto |s^2-11t^2|$</span>, which is well defined in <span class="math-container">$R$</span>. Proof omitted. Norm <span class="math-container">$N$</span> is also well defined in <span class="math-container">$\mathbb{Q}[\sqrt{11}]$</span>. Proof omitted. There exists <span class="math-container">$\alpha/\beta \in \mathbb{Q}[\sqrt{11}] = \{ s+t\sqrt{11} : s,t \in \mathbb{Q} \} \supset R $</span> with a numerator and a denominator <span class="math-container">$\alpha, \beta \in R$</span>. Proof omitted.</p>
<p>Suppose the nearest lattice point to point <span class="math-container">$\alpha / \beta$</span> is <span class="math-container">$m+n\sqrt{11} \in R$</span> (i.e. <span class="math-container">$m,n \in \mathbb{Z}$</span>) with <span class="math-container">$\alpha / \beta = (m+n\sqrt{11}) + (a+b\sqrt{11})$</span>, <span class="math-container">$a,b \in \mathbb{Q}$</span>, <span class="math-container">$-\frac{1}{2} \leq a \leq \frac{1}{2}$</span> and <span class="math-container">$-\frac{1}{2} \leq b \leq \frac{1}{2}$</span>. We will consider lattice point <span class="math-container">$(m,n) = m+n\sqrt{11}$</span> and five others nearby. They, <span class="math-container">$(m+x,n+y)$</span> with <span class="math-container">$(x,y) \in \{ (0,0), (1,0), (-1,0), (2,0), (2,1), (5,2) \} $</span>, are candidate quotients to a division algorithm. By considering only positive values of <span class="math-container">$a,b$</span> such that <span class="math-container">$0 \leq a \leq \frac{1}{2}$</span> and <span class="math-container">$0 \leq b \leq \frac{1}{2}$</span>, similar arguments could be made for the three other quadrants by symmetry. With <span class="math-container">$q=(m+x,n+y)$</span>, rewrite <span class="math-container">$\alpha = q \beta +r$</span> as:</p>
<p><span class="math-container">\begin{align}
\alpha / \beta &= q + r/\beta \\
&= q + \big( (a+b\sqrt{11}) - (x+y\sqrt{11}) \big)
\end{align}</span></p>
<p>Condition (b)(ii) if <span class="math-container">$r \ne 0$</span>, <span class="math-container">$N(r) < N(\beta)$</span> is true <span class="math-container">$\iff$</span> <span class="math-container">$N(r)/N(\beta) < 1$</span> <span class="math-container">$\iff$</span> <span class="math-container">$N(r/\beta) < 1$</span> <span class="math-container">$\iff$</span> <span class="math-container">$N \big( (a+b\sqrt{11}) - (x+y\sqrt{11}) \big)$</span> <span class="math-container">$= |(x-a)^2 - 11(y-b)^2| < 1$</span>. Proof omitted.</p>
<p><strong>Proposition:</strong> A division algorithm holds for <span class="math-container">$\mathbb{Z}[\sqrt{11}]$</span> if both inequalities:</p>
<p><span class="math-container">\begin{align}
U(x,y): &\ (x-a)^2 - 11(y-b)^2 < 1 \ \text{and} \\
V(x,y): &\ 11(y-b)^2 - (x-a)^2 < 1 \ \text{are true,}
\end{align}</span></p>
<p>for all <span class="math-container">$a,b \in \mathbb{Q}$</span>, <span class="math-container">$0 \leq a \leq \frac{1}{2}$</span> and <span class="math-container">$0 \leq b \leq \frac{1}{2}$</span>, and some <span class="math-container">$x,y \in \mathbb{Z}$</span>.</p>
<p>Rewrite not <span class="math-container">$U(x,y)$</span> and not <span class="math-container">$V(x,y)$</span> as <span class="math-container">$P(x,y)$</span> and <span class="math-container">$N(x,y)$</span> respectively.</p>
<p><span class="math-container">\begin{align}
P(x,y): &\ (x-a)^2 \geq 1 + 11(y-b)^2 \\
N(x,y): &\ 11(y-b)^2 \geq 1 + (x-a)^2
\end{align}</span></p>
<p><strong>Suppose to the contrary</strong> that the contrapositive of the above proposition is true. It asserts that a division algorithm does not hold for <span class="math-container">$\mathbb{Z}[\sqrt{11}]$</span>. Then either <span class="math-container">$P(x,y)$</span> or <span class="math-container">$N(x,y)$</span> is true for some <span class="math-container">$a,b$</span> and all <span class="math-container">$x,y$</span>.</p>
<p>Consider <span class="math-container">$\mathbf{P(0,0)} : (-a)^2 \geq 1 + 11(-b)^2$</span>. It implies <span class="math-container">$a^2 \geq 1$</span>. Take the positive root and we have <span class="math-container">$a \geq 1$</span> which contradicts <span class="math-container">$0 \leq a \leq \frac{1}{2}$</span>. Since <span class="math-container">$P(0,0)$</span> is false, <span class="math-container">$N(0,0)$</span> must hold.</p>
<p><span class="math-container">$N(0,0): 11b^2 \geq 1+a^2$</span></p>
<p>Consider <span class="math-container">$\mathbf{P(1,0)} : (1-a)^2 \geq 1 + 11b^2$</span>. By <span class="math-container">$N(0,0)$</span>,</p>
<p><span class="math-container">\begin{align}
(1-a)^2 &\geq 1 + 11b^2 \geq 2+a^2 \\
(1-a)^2 &\geq 2+a^2 \\
1-2a+a^2 &\geq 2+a^2 \\
-2a &\geq 1 \\
a &\leq -\frac{1}{2}
\end{align}</span></p>
<p>which contradicts <span class="math-container">$0 \leq a \leq \frac{1}{2}$</span>. Since <span class="math-container">$P(1,0)$</span> is false, <span class="math-container">$N(1,0)$</span> must hold.</p>
<p><span class="math-container">$N(1,0): 11b^2 \geq 1 + (1-a)^2$</span></p>
<p><a href="https://i.stack.imgur.com/INURF.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/INURF.jpg" alt="a starfish and six candidate quotients q" /></a>
<a href="https://www.desmos.com/calculator/rhrza9vqpx" rel="nofollow noreferrer">https://www.desmos.com/calculator/rhrza9vqpx</a>
<a href="https://www.desmos.com/calculator/pccntyashy" rel="nofollow noreferrer">https://www.desmos.com/calculator/pccntyashy</a></p>
<p>Consider <span class="math-container">$\mathbf{P(-1,0)} : (-1-a)^2 \geq 1 + 11b^2$</span>. By <span class="math-container">$N(1,0)$</span>,</p>
<p><span class="math-container">\begin{align}
(1+a)^2 &\geq 1 + 11b^2 \\
(1+a)^2 - 1 &\geq 11b^2 \geq 1 + (1-a)^2 \\
1 + 2a + a^2 - 1 &\geq 1 + 1 - 2a + a^2 \\
4a &\geq 2 \\
a &\geq \frac{1}{2} \\
\end{align}</span></p>
<p>Then <span class="math-container">$a = \frac{1}{2}$</span> since <span class="math-container">$0 \leq a \leq \frac{1}{2}$</span>. It follows that:</p>
<p><span class="math-container">\begin{align}
(1+\frac{1}{2})^2 - 1 &\geq 11b^2 \geq 1 + (1-\frac{1}{2})^2 \\
\frac{5}{4} &\geq 11b^2 \geq \frac{5}{4}
\end{align}</span></p>
<p>But <span class="math-container">$b = \sqrt{ \frac{5}{44} } \notin \mathbb{Q}$</span> contradicts <span class="math-container">$b \in \mathbb{Q}$</span>. Since <span class="math-container">$P(-1,0)$</span> is false, <span class="math-container">$N(-1,0)$</span> must hold.</p>
<p><span class="math-container">$N(-1,0): 11b^2 \geq 1 + (1+a)^2$</span></p>
<p>Consider <span class="math-container">$\mathbf{N(2,0)} : 11b^2 \geq 1 + (2-a)^2$</span>.</p>
<p><span class="math-container">\begin{align}
11b^2 &\geq 1 + (2-a)^2 \geq \frac{13}{4} > \frac{11}{4} \\
b^2 &> \frac{1}{4}
\end{align}</span></p>
<p>Take the positive root and we have <span class="math-container">$b > \frac{1}{2}$</span>. But it contradicts <span class="math-container">$0 \leq b \leq \frac{1}{2}$</span>. Since <span class="math-container">$N(2,0)$</span> is false, <span class="math-container">$P(2,0)$</span> must hold.</p>
<p><span class="math-container">$P(2,0): (2-a)^2 \geq 1 + 11b^2$</span></p>
<p>Consider <span class="math-container">$\mathbf{N(2,1)} : 11(1-b)^2 \geq 1 + (2-a)^2$</span>. By <span class="math-container">$P(2,0)$</span>,</p>
<p><span class="math-container">\begin{align}
11(1-b)^2 &\geq 1 + (2-a)^2 \geq 2 + 11b^2 \\
11 - 22b + 11b^2 &\geq 2 + 11b^2 \\
9 &\geq 22b \\
b &\leq \frac{9}{22}
\end{align}</span></p>
<p>By <span class="math-container">$N(-1,0)$</span>,</p>
<p><span class="math-container">\begin{align}
11b^2 &\geq 1 + (1+a)^2 \geq 2 > \frac{81}{44} \\
b^2 &> \left(\frac{9}{22}\right)^2
\end{align}</span></p>
<p>Take the positive root and we have <span class="math-container">$b > \frac{9}{22}$</span>. But it contradicts <span class="math-container">$b \leq \frac{9}{22}$</span>. Since <span class="math-container">$N(2,1)$</span> is false, <span class="math-container">$P(2,1)$</span> must hold.</p>
<p><span class="math-container">$P(2,1): (2-a)^2 \geq 1 + 11(1-b)^2$</span></p>
<p>By <span class="math-container">$P(2,1)$</span> and <span class="math-container">$N(-1,0)$</span>, they imply:</p>
<p><span class="math-container">\begin{align}
(2-a)^2 &\geq 1 + 11(1-b)^2 \\
(2-a)^2 &\geq 1 + 11 -22b + 11b^2 \\
(2-a)^2 -12 + 22b &\geq 11b^2 \geq 1 + (1+a)^2 \\
4 -4a + a^2 -12 + 22b &\geq 2 + 2a + a^2 \\
22b &\geq 10 + 6a \\
11b &\geq 5 + 3a
\end{align}</span></p>
<p>Out of interest. Take the least possible value to the RHS; it implies <span class="math-container">$11b \geq 5$</span>. Likewise, take the greatest possible value to the LHS; it implies <span class="math-container">$11 \geq 10 + 6a$</span> <span class="math-container">$\implies$</span> <span class="math-container">$1 \geq 6a$</span>.</p>
<p>Consider <span class="math-container">$\mathbf{N(5,2)}: 11(2-b)^2 \geq 1 + (5-a)^2$</span>. By <span class="math-container">$P(2,1)$</span>, it implies:</p>
<p><span class="math-container">\begin{align}
11(2-b)^2 &\geq 1 + (5-a)^2 \\
11(2-b)^2 &\geq 26 -10a +a^2 \\
11(2-b)^2 &\geq (4 - 4a + a^2) + 22 -6a \\
11(2-b)^2 - 22 + 6a &\geq (2-a)^2 \geq 1 + 11(1-b)^2 \\
22 - 44b + 11b^2 + 6a &\geq (2-a)^2 \geq 12 -22b + 11b^2 \\
10 + 6a &\geq 22b \\
5 + 3a &\geq 11b
\end{align}</span></p>
<p>By <span class="math-container">$11b \geq 5 + 3a$</span> and <span class="math-container">$5 + 3a \geq 11b$</span>, we have <span class="math-container">$5 + 3a = 11b$</span>. Retrace backwards the argument leading to <span class="math-container">$5 + 3a \geq 11b$</span>. We have <span class="math-container">$11(2-b)^2 - 22 + 6a = (2-a)^2 = 1 + 11(1-b)^2$</span>. Take the middle and the RHS of this chain of equality and name it <span class="math-container">$P'(2,1)$</span>. Establishing the following two equalities are not strictly necessary for this proof. Since Oppenheim (1934) mentioned them, we will include them here. Take the LHS and the middle of the above chain of equality, we have <span class="math-container">$11(2-b)^2 - 22 + 6a = (2-a)^2$</span> <span class="math-container">$\implies$</span> <span class="math-container">$11(2-b)^2 = (4 - 4a + a^2) + 22 -6a$</span> <span class="math-container">$\implies$</span> <span class="math-container">$11(2-b)^2 = 26 -10a +a^2$</span> <span class="math-container">$\implies$</span> <span class="math-container">$11(2-b)^2 = 1 + (5-a)^2$</span>. Name it <span class="math-container">$N'(5,2)$</span>. Retrace backwards the argument leading to <span class="math-container">$11b \geq 5 + 3a$</span>. We have <span class="math-container">$(2-a)^2 -12 + 22b = 11b^2 = 1 + (1+a)^2$</span>. Take the middle and the RHS of this chain of equality and name it <span class="math-container">$N'(-1,0)$</span>. And we have:</p>
<p><span class="math-container">\begin{align}
P'(2,1)&: (2-a)^2 = 1 + 11(1-b)^2 \\
N'(5,2)&: 11(2-b)^2 = 1 + (5-a)^2 \\
N'(-1,0)&: 11b^2 = 1 + (1+a)^2
\end{align}</span></p>
<p>By <span class="math-container">$N'(-1,0)$</span> and <span class="math-container">$5 + 3a = 11b$</span>,</p>
<p><span class="math-container">\begin{align}
11b^2 &= 1 + (1+a)^2 \\
(11b)^2 &= 11 + 11(1+a)^2 \\
(5+3a)^2 &= 11 + 11(1+a)^2 \\
25 + 30a + 9a^2 &= 22 + 22a + 11a^2 \\
2a^2 - 8a &= 3 > 0
\end{align}</span></p>
<p>It implies <span class="math-container">$a<0$</span> or <span class="math-container">$a > 4 > \frac{1}{2}$</span> but they contradict <span class="math-container">$0 \leq a \leq \frac{1}{2}$</span>. And hence <span class="math-container">$N(5,2)$</span> is false, <span class="math-container">$P(5,2)$</span> must be true.</p>
<p><span class="math-container">$P(5,2): (5-a)^2 \geq 1 + 11(2-b)^2$</span></p>
<p>Consider <span class="math-container">$\mathbf{P(5,2)}$</span>, by <span class="math-container">$N'(-1,0)$</span>, it implies:</p>
<p><span class="math-container">\begin{align}
(5-a)^2 &\geq 1 + 11(2-b)^2 \\
25 - 10a + a^2 &\geq 1 + 11(2-b)^2 \\
23 - 12a + 1 + (1 + 2a + a^2) &\geq 1 + 11(2-b)^2 \\
23 - 12a + \big( 1 + (1 + a)^2 \big) &\geq 1 + 11(2-b)^2 \\
23 - 12a + 11b^2 &\geq 45 - 44b + 11b^2 \\
44b &\geq 22 +12a \\
22b &\geq 11 + 6a
\end{align}</span></p>
<p>The possible range of the LHS is <span class="math-container">$[0,11]$</span> and that of the RHS is <span class="math-container">$[11,14]$</span>. Therefore, the only possible values are <span class="math-container">$b = \frac{1}{2}$</span> and <span class="math-container">$a=0$</span>. Substitute <span class="math-container">$b = \frac{1}{2}$</span> back into <span class="math-container">$P(5,2)$</span>, and it asserts,</p>
<p><span class="math-container">$(5-a)^2 \geq 1 + 11(2- \frac{1}{2} )^2 = \frac{103}{4} > (\frac{10}{2})^2 = 5^2$</span></p>
<p>Take the positive root (since negative root <span class="math-container">$a-5 \geq 0$</span> is false) and we have <span class="math-container">$5-a > 5$</span> <span class="math-container">$\implies$</span> <span class="math-container">$a < 0$</span>. But it contradicts <span class="math-container">$a = 0$</span>. And hence <span class="math-container">$P(5,2)$</span> is false.</p>
<p>Since both <span class="math-container">$N(5,2)$</span> and <span class="math-container">$P(5,2)$</span> are false, this contradicts the proposition that either <span class="math-container">$P(x,y)$</span> or <span class="math-container">$N(x,y)$</span> is true for some <span class="math-container">$a,b$</span> and all <span class="math-container">$x,y$</span>. Hence the proposition that a division algorithm does not hold is false. Therefore, <span class="math-container">$\mathbb{Z}[\sqrt{11}]$</span> has a division algorithm and hence is a Euclidean domain. <span class="math-container">$\ \ \ \ \blacksquare$</span></p>
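<p>As an added numerical spot-check (not part of Oppenheim's argument): the proposition says that for every rational <span class="math-container">$a,b \in [0,\frac12]$</span>, at least one of the six candidate quotients makes the norm of the remainder less than <span class="math-container">$1$</span>, and this is easy to test with exact rational arithmetic:</p>

```python
from fractions import Fraction
import random

# The six candidate quotients (x, y) used in the proof above.
CANDIDATES = [(0, 0), (1, 0), (-1, 0), (2, 0), (2, 1), (5, 2)]

def some_candidate_works(a, b):
    # |(x-a)^2 - 11(y-b)^2| < 1 for at least one candidate, exactly.
    return any(abs((x - a) ** 2 - 11 * (y - b) ** 2) < 1 for x, y in CANDIDATES)

random.seed(0)
checks = [some_candidate_works(Fraction(random.randint(0, 50), 100),
                               Fraction(random.randint(0, 50), 100))
          for _ in range(1000)]
print(all(checks))   # True
```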
|
4,331,081 | <p>Suppose <span class="math-container">$a, b >0$</span>. I'm looking for closed expressions for the following integral:
<span class="math-container">$$\int_{-\pi}^{\pi}\sqrt{a^{2}-2ab\cos(x)+b^{2}}dx $$</span>
I tried to solve this by myself and got nowhere and even wolfram alpha couldn't get me an answer so maybe this is not easily solved, but might have some complicated closed expression. Any help is important. Thanks!</p>
| projectilemotion | 323,432 | <p>This does not admit a closed form in terms of elementary functions for general <span class="math-container">$a,b>0$</span>. However, using some elementary trigonometric identities and symmetry properties, one can get a solution in terms of elliptic integrals.</p>
<hr />
<p>Firstly, by the substitution <span class="math-container">$u=x+\pi$</span> and the identity <span class="math-container">$\cos(u-\pi)=-\cos(u)$</span>, one has
<span class="math-container">$$\int_{-\pi}^{\pi} \sqrt{a^2+b^2-2ab\cos(x)}~dx=\int_{0}^{2\pi} \sqrt{a^2+b^2+2ab\cos(u)}~du.$$</span>
Using symmetry of <span class="math-container">$\cos(u)$</span> about <span class="math-container">$u=\pi$</span> and the identity <span class="math-container">$\cos(u)=1-2\sin^2(u/2)$</span>, one has
<span class="math-container">$$\begin{align} \int_{0}^{2\pi} \sqrt{a^2+b^2+2ab\cos(u)}~du&=2\int_0^{\pi} \sqrt{a^2+b^2+2ab\cos(u)}~du\\&=2\int_0^{\pi} \sqrt{(a+b)^2-4ab\sin^2(u/2)}~du\\&=4\int_0^{\pi/2} \sqrt{(a+b)^2-4ab\sin^2(v)}~dv\\&=4(a+b)\int_0^{\pi/2} \sqrt{1-\frac{4ab}{(a+b)^2}\sin^2(v)}~dv\\&=4(a+b)E\left(\frac{\sqrt{4ab}}{a+b}\right),\end{align}$$</span>
where we have used the substitution <span class="math-container">$v=u/2$</span> and the definition of the <a href="https://en.wikipedia.org/wiki/Elliptic_integral#Complete_elliptic_integral_of_the_second_kind" rel="nofollow noreferrer">complete elliptic integral of the second kind</a> according to Wikipedia
<span class="math-container">$$E(k):=\int_0^{\pi/2} \sqrt{1-k^2\sin^2(\theta)}~d\theta.$$</span>
Note that in some conventions the argument inside the elliptic integral is <span class="math-container">$k^2$</span> instead of <span class="math-container">$k$</span> (such as the one used by <a href="https://reference.wolfram.com/language/ref/EllipticE.html" rel="nofollow noreferrer">Mathematica</a>).</p>
<hr />
<p><strong>Sidenote:</strong> Consider a circle of radius <span class="math-container">$a$</span> with center at a distance <span class="math-container">$b$</span> to the origin of <span class="math-container">$\mathbb{R}^2$</span>. This integral comes up when computing the average distance from the origin to the points of this circle. Namely, one considers
<span class="math-container">$$\frac{1}{2\pi}\int_{0}^{2\pi} \sqrt{a^2+b^2+2ab\cos(x)}~dx.$$</span>
Equivalently, it is the distance travelled in one full rotation by a point at a distance of <span class="math-container">$b$</span> from the centre of a rolling disc of radius <span class="math-container">$a$</span> (forget the multiplicative factor outside the integral).</p>
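<p>A numerical sanity check of the closed form (an addition, not part of the derivation above): evaluate both sides by a simple midpoint rule, computing <span class="math-container">$E(k)$</span> directly from its defining integral.</p>

```python
import math

# Direct midpoint-rule evaluation of the original integral over [-pi, pi].
def integral(a, b, n=20000):
    h = 2 * math.pi / n
    return h * sum(math.sqrt(a*a + b*b - 2*a*b*math.cos(-math.pi + (i + 0.5) * h))
                   for i in range(n))

# E(k) = integral_0^{pi/2} sqrt(1 - k^2 sin^2 t) dt  (Wikipedia's convention).
def ellipe(k, n=20000):
    h = (math.pi / 2) / n
    return h * sum(math.sqrt(1 - (k * math.sin((i + 0.5) * h)) ** 2)
                   for i in range(n))

a, b = 2.0, 3.0
closed = 4 * (a + b) * ellipe(math.sqrt(4 * a * b) / (a + b))
print(integral(a, b), closed)   # the two numbers should agree closely
```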
|
30,292 | <p>One can view a random walk as a discrete process whose continuous
analog is diffusion.
For example, discretizing the heat diffusion equation
(in both time and space) leads to random walks.
Is there a natural continuous analog of discrete self-avoiding walks?
I am particularly interested in self-avoiding polygons,
i.e., closed self-avoiding walks.</p>
<p>I've only found reference
(in Madras and Slade,
<em>The Self-Avoiding Walk</em>, p.365ff)
to continuous analogs of "weakly self-avoiding walks"
(or "self-repellent walks") which discourage but
do not forbid self-intersection.</p>
<p>I realize this is a vague question,
reflecting my ignorance of the topic.
But perhaps those knowledgeable could point me to the
the right concepts. Thanks!</p>
<p><b>Addendum</b>.
<a href="http://en.wikipedia.org/wiki/Schramm%E2%80%93Loewner_evolution">Schramm–Loewner evolution</a> is the answer. It is the conjectured scaling limit of the self-avoiding walk and several other related stochastic processes. Conjectured in low dimensions,
proved in high dimensions, as pointed out by Yuri and Yvan. Many thanks for your help!</p>
| Yuri Bakhtin | 2,968 | <p>In 2D the scaling limit is believed to be SLE with parameter 8/3. This was conjectured by Lawler, Schramm and Werner and, to the best of my knowledge, still remains open.</p>
|
8,107 | <p>Imagine I have a company that makes widgets, where each widget costs me A dollars to make. Each month I can allocate money toward research and development with the aim of finding a new process that will allow me to build widgets for a cost of A/B dollars. Presume that I know that for each C dollars I spend on research and development there's a D% chance of finding a breakthrough. Of course, spending money on research and development means that I have less to spend on building widgets.</p>
<p>I have a monthly budget of E dollars. This budget is not directly tied to my profit margin, but it is safe to say that my profit margins influence future budgets (i.e., if I make no widgets for three straight months b/c I do all research and development, it's likely that my budget will be reduced, whereas if I discover a breakthrough the first month my profits will skyrocket and I'll likely see that budget grow over time).</p>
<p>In case that is too abstract, here's the real world scenario I'm interested in solving (although I'd like a more general approach, as well):</p>
<ul>
<li>A = 15 dollars</li>
<li>B = 3</li>
<li>C = 5 dollars</li>
<li>D = 2.75%</li>
<li>E = 30 dollars</li>
</ul>
<p>That is, today widgets cost me 15 dollars to build but if I can find a breakthrough I know I can make them at 1/3 the cost (5 dollars). For each 5 dollars I spend on research and development there is a 2.75% chance I'll find the breakthrough. However, I have only 30 dollars to spend each month. If I spend it all on research and development and have no success then I have made no widgets for sale. If I spend it all on widget construction I have no chance of finding a breakthrough.</p>
<p>Is there some statistical distribution or formula that can let me plug in these variables and see some sort of breakdown that gives me an idea of whether it's a good idea to spend any money on research and development each month and, if so, how much?</p>
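<p>(An added sketch: one simple way to compare allocations is by simulation. Note the question does <em>not</em> give a sale price or a time horizon, so <code>PRICE</code> and <code>HORIZON</code> below are invented purely for illustration; the per-$5 breakthrough chances are treated as independent, and the budget-feedback effect described above is ignored.)</p>

```python
import random

# Illustrative-only parameters: PRICE and HORIZON are assumptions, not givens.
PRICE, HORIZON = 20.0, 24          # assumed sale price per widget, months
COST_HI, COST_LO = 15.0, 5.0       # A and A/B
UNIT_RD, P_BREAK = 5.0, 0.0275     # C and D (chance per $5 unit, independent)
BUDGET = 30.0                      # E

def expected_profit(rd_units, trials=5000):
    """Mean total profit when rd_units * $5/month go to R&D until a breakthrough."""
    total = 0.0
    for _ in range(trials):
        cost, profit = COST_HI, 0.0
        for _ in range(HORIZON):
            spend_rd = UNIT_RD * rd_units if cost == COST_HI else 0.0
            profit += (BUDGET - spend_rd) / cost * (PRICE - cost)
            if cost == COST_HI and random.random() < 1 - (1 - P_BREAK) ** rd_units:
                cost = COST_LO   # breakthrough: all future widgets cost A/B
        total += profit
    return total / trials

random.seed(1)
for units in range(0, 7):          # 0..6 units of $5 R&D per month
    print(units, round(expected_profit(units), 1))
```

<p>With no R&D the result is deterministic (2 widgets/month at $5 margin for 24 months = $240); sweeping <code>units</code> shows how the answer depends on the assumed price and horizon.</p>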
| Hahn | 93,001 | <p>The other answers are better, but knowing something about the marketplace for your widgets I will add a few more considerations. </p>
<p>If (1 & A) you are on an island with consumers (or manufacturing bases) in distant lands that are susceptible to hostile takeovers that will dramatically decrease your monthly profits quickly, it is best to take an all or nothing approach to R&D. Either defend your current global positions as much as feasible, or sit back and research until super widgets allow you to retake market share quickly. </p>
<p>If (2) you are free from hostile takeovers because of the strength of your moat and can all but guarantee a steady source of monthly income for some time, it is more beneficial to conduct steady R&D while increasing your capacity to enter foreign markets quickly when your research bears fruit. </p>
<p>If (3 & B) you are in an immediate struggle for local control of the market, you may need to mirror the actions of your competitor more closely to avoid a collapse of your position because of an error in judgment. Alternately, it may be beneficial to take advantage of your competitor's spending on R&D to quickly take over their market share and turn their research, even if successful, into a pyrrhic victory. </p>
<p>In either case, consider the benefits of proper collusion. You and friendly industries need not all research super widgets if you are in agreement on future market share divisions. Spreading the cost of R&D as appropriate may yield the results you seek, while allowing the weaker friendly industries to defend their current market share or make headways in taking over hostile foreign markets. </p>
|
24,230 | <p>$f$ is continuous on $[0,1]$, and $f(0)=f(1)$.</p>
<p>I want to prove that there is an $a \in [0,0.5]$ such that $f(a+0.5)=f(a)$.</p>
<p>ok, so Rolle's theorem can be useful here, but I can't see the connection to the derivative,</p>
<p>(Weierstrass, Uniform continuity?) I'll be glad to instructions.</p>
<p>Thanks.</p>
| Eelvex | 7,476 | <p><em>Or</em> </p>
<p>consider if there is a $b\in [0,1]$ such that $f(b) = f(0) = f(1)$.</p>
<p>What if there is no such $b$?</p>
<p>What if $b = 0.5$?</p>
<p>[ You don't need Rolle's theorem this way ]</p>
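<p>A numerical illustration of the underlying IVT argument (added; the sample function and the bisection are only an example, not a proof): set $g(x) = f(x+\tfrac12) - f(x)$ and note $g(0) = -g(\tfrac12)$, so $g$ changes sign on $[0, \tfrac12]$ unless it vanishes at an endpoint.</p>

```python
import math

# Example f with f(0) = f(1) = 0; here g(x) = f(x + 0.5) - f(x) satisfies
# g(0) = 0.25 > 0 and g(0.5) = -0.25 < 0, so bisection locates a root.
f = lambda x: x * (1 - x) + math.sin(2 * math.pi * x)
g = lambda x: f(x + 0.5) - f(x)

lo, hi = 0.0, 0.5
for _ in range(60):                      # plain bisection on g
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
a = (lo + hi) / 2
print(a, abs(f(a + 0.5) - f(a)))         # residual is essentially 0
```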
|
721,449 | <p>I need to determine all the positive divisors of 7!. I got 360 as the total number of positive divisors for 7!. Can someone confirm, or give the real answer?</p>
| copper.hat | 27,978 | <p>360 is incorrect.</p>
<p>$7! = 2^4 3^2 5^1 7^1$. Now start counting...</p>
<p><strong>Note</strong>: Count $\{0,1,2,3,4\} \times \{0,1,2\} \times \{0,1\} \times \{0,1\}$.</p>
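<p>A brute-force check (added) confirms the count implied by that factorization:</p>

```python
import math

# 7! = 5040 = 2^4 * 3^2 * 5 * 7, so the divisor count is (4+1)(2+1)(1+1)(1+1) = 60.
n = math.factorial(7)
count = sum(1 for d in range(1, n + 1) if n % d == 0)
print(n, count)   # 5040 60
```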
|
1,963,295 | <p>I have the following equation for a decision boundary line: $-w_0 = w_1x_1 + w_2x_2$ and I want to prove that the distance from the decision boundary to the origin is $l = \frac{w^Tx}{||w||}$. I am having trouble wrapping my mind around how I can just get the distance from a line to a point. Am I supposed to be averaging the distances of all the points on the line to the point?</p>
| qwr | 122,489 | <p>Mean and variance derivations are given by Maxime Beauchamp, "On numerical computation for the distribution of the convolution of N independent rectified Gaussian variables" at <a href="http://journal-sfds.fr/article/view/669" rel="nofollow noreferrer">http://journal-sfds.fr/article/view/669</a></p>
<p><span class="math-container">$$
\operatorname{E}[X^+] = \mu \left(1 - \Phi \left(-\frac{\mu}{\sigma} \right)\right) + \sigma \phi\left(-\frac{\mu}{\sigma} \right) \\
\operatorname{Var}[X^+] = (\mu^2 + \sigma^2) \left(1 - \Phi\left(-\frac{\mu}{\sigma} \right)\right) + \mu \sigma \phi \left(-\frac{\mu}{\sigma} \right) - \operatorname{E}[X^+]^2
$$</span></p>
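<p>A quick Monte Carlo sanity check of these formulas (an added sketch, not from the cited paper), writing <span class="math-container">$\Phi$</span> via <code>math.erf</code>:</p>

```python
import math, random

def rectified_moments(mu, sigma):
    # Closed-form mean/variance of X^+ = max(0, X), X ~ N(mu, sigma^2),
    # using the formulas quoted above.
    Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))          # normal CDF
    phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)   # normal pdf
    m = mu * (1 - Phi(-mu / sigma)) + sigma * phi(-mu / sigma)
    v = (mu**2 + sigma**2) * (1 - Phi(-mu / sigma)) \
        + mu * sigma * phi(-mu / sigma) - m**2
    return m, v

random.seed(0)
mu, sigma = 0.5, 2.0
samples = [max(0.0, random.gauss(mu, sigma)) for _ in range(200000)]
emp_mean = sum(samples) / len(samples)
emp_var = sum((s - emp_mean) ** 2 for s in samples) / len(samples)
m, v = rectified_moments(mu, sigma)
print(m, v)   # empirical mean/variance should match to ~2 decimal places
```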
|
157,587 | <p>I know the following is a well-known result.</p>
<p>Let $D = B(0,1) \subset \mathbb{C} $ be a disc, $f$ holomorphic on $D$. Show that $$ 2|f^{'}(0)| \le \sup_{z, w \in D} |f(z)-f(w)|$$
Furthermore, there is equality if and only if $f$ is linear.</p>
<p>I need some reference about the second part, i.e. there is equality if and only if $f$ is linear.</p>
| Malik Younsi | 1,162 | <p>This was first proved by Landau and Toeplitz in 1907. A reference for the proof (and for generalizations) is the <em>paper Area, capacity and diameter versions of Schwarz's lemma</em> by Burckel, Marshall, Minda, Poggi-Corradini and Ransford.</p>
<p>See Theorem 1.3 <a href="http://arxiv.org/pdf/0801.3629v1.pdf" rel="nofollow">here</a></p>
|
2,283,123 | <p>Let $(\mathbb{R}, +, 0)$ be the additive group of reals. Is this structure $\aleph_0$-saturated? </p>
<p>I don't really see how to go about showing this. To show it is not saturated, it is enough to exhibit a type omitted in $(\mathbb{R}, +, 0)$. The interesting statements we can make about groups are usually to do with torsion or divisibility, but I can't see a way to find a type omitted in $(\mathbb{R}, +, 0)$ from that. </p>
| zarathustra | 73,997 | <p>Every (nontrivial) torsion-free divisible abelian group has the structure of a $\Bbb Q$-vector space. This shows that the theory $T$ of torsion-free divisible abelian groups is uncountably categorical (a vector space is determined up to isomorphism by its dimension, and any two $\Bbb Q$-vector spaces of the same uncountable cardinality have the same dimension). Suppose that $(\Bbb R,+,0)$ is <em>not</em> $\aleph_0$-saturated. Then some type $p$ would be omitted in $(\Bbb R,+,0)$, but by Löwenheim–Skolem one would be able to find a model of $T$ of size $|\Bbb R|$ that realizes $p$, and this model would not be isomorphic to $(\Bbb R,+,0)$*. This contradicts the fact that $T$ is $|\Bbb R|$-categorical.</p>
<p>*One has to use the ultrahomogeneity of $(\Bbb R,+,0)$ for this argument to work, as explained by Alex in the comments below. Ultrahomogeneity of $(\Bbb R,+,0)$ follows from linear algebra: given two families $\overline a$ and $\overline b$ that satisfy the same formulas, there exists an automorphism of $(\Bbb R,+,0)$ that takes $\overline a$ to $\overline b$.</p>
|
3,890,382 | <blockquote>
<p>Find the locus of <span class="math-container">$z$</span> such that <span class="math-container">$\arg \frac{z-z_1}{z-z_2} = \alpha$</span>.
Use and draw <span class="math-container">$w = \frac{z-z_1}{z-z_2}$</span>.</p>
</blockquote>
<p>This exercise was discussed many times -- <a href="https://math.stackexchange.com/questions/2120597/finding-the-loci-of-arg-dfracz-az-b-theta?rq=1">1</a>, <a href="https://math.stackexchange.com/questions/1333930/reasoning-centre-and-ways-of-expressing-locus-of-arg-frac-z-az-b-c?rq=1">2</a>, <a href="https://math.stackexchange.com/questions/1573659/how-to-describe-the-locus-of-z-where-arg-left-fracz-z-1z-z-2-right-t?rq=1">3</a>, <a href="https://math.stackexchange.com/questions/1844695/locus-of-complex-number-2/1844716#1844716">4</a> -- but I was unable to find answers to my problem with <span class="math-container">$0$</span> there.</p>
<p>I believe I understand where the arcs came from, here's my work:</p>
<p><a href="https://i.stack.imgur.com/CiMR7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CiMR7.jpg" alt="picture showing my work" /></a></p>
<p>If I understand correctly, for <span class="math-container">$\alpha = \pm\pi$</span>, the locus would be the segment connecting <span class="math-container">$z_2$</span> and <span class="math-container">$z_1$</span>, not including the points themselves.</p>
<p>I can not understand what is happening when <span class="math-container">$\alpha = 0$</span>.</p>
<p><span class="math-container">$\alpha = 0 = \arg \frac{z-z_1}{z-z_2} = \arg w \Longrightarrow \frac{z-z_1}{z-z_2} = k \in \mathbb{R},\ z-z_1 = k(z-z_2).$</span></p>
<p>Solving this for <span class="math-container">$z$</span>, <span class="math-container">$z = \frac{x_1-kx_2}{1-k} +i\frac{y_1-ky_2}{1-k}$</span>. I am having trouble understanding the locus of this <span class="math-container">$z$</span>. The textbook says it should be 'two segments with end points in <span class="math-container">$z_1$</span> and <span class="math-container">$z_2$</span>, and one of this segments contains an infinitely distant point'. How to understand why is this answer right, and how to draw it? It seems the infinitely distant point matches <span class="math-container">$k=1$</span>, but why should it lie in the 'direction' of the line passing through <span class="math-container">$z_1$</span> and <span class="math-container">$z_2$</span>?</p>
<p>My class notes are messy. Why is <span class="math-container">$(0, 1)$</span> special on <span class="math-container">$w$</span> plane?
<a href="https://i.stack.imgur.com/SmPAE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SmPAE.jpg" alt="messy class notes" /></a></p>
<p>Thank you.</p>
| user2661923 | 464,411 | <p>You know that (modulo <span class="math-container">$2\pi$</span>), <br>
<span class="math-container">$\arg \left(w_1 \times w_2\right) ~=~ \arg(w_1) + \arg(w_2).$</span></p>
<p>Therefore, <span class="math-container">$\arg \left(\frac{w_1}{w_2}\right) ~=~ \arg(w_1) - \arg(w_2).$</span></p>
<blockquote>
<p>I can not understand what is happening when <span class="math-container">$\alpha = 0.$</span></p>
</blockquote>
<p>In this situation, you have that</p>
<p><span class="math-container">$$0 = \alpha ~=~ \arg \left(\frac{z - z_1}{z - z_2}\right)
~=~ \arg(z - z_1) - \arg(z - z_2).$$</span></p>
<p>Imagine the infinite line that passes through <span class="math-container">$z_1$</span> and <span class="math-container">$z_2$</span>.</p>
<p>Note, that the <span class="math-container">$\arg$</span> function is not defined on the complex number <span class="math-container">$(0 + i[0]).$</span></p>
<p>Therefore, <span class="math-container">$z$</span> is not allowed to equal either <span class="math-container">$z_1$</span> or <span class="math-container">$z_2$</span>.</p>
<p>There are 3 possibilities:</p>
<p><span class="math-container">$\underline{\text{case 1} ~z ~\text{is not on this line}}$</span></p>
<p>Then,</p>
<p><span class="math-container">$$\arg(z - z_1) \neq \arg(z - z_2).$$</span></p>
<p>Therefore, this possibility must be excluded from the locus of satisfying points.</p>
<p><span class="math-container">$\underline{\text{case 2} ~z ~\text{is on this line}, ~\textbf{but between}
~z_1 ~\text{and} ~z_2}$</span></p>
<p>Then,</p>
<p><span class="math-container">$$\arg(z - z_1) ~=~ \arg(z - z_2) ~\pm ~\pi.$$</span></p>
<p>Therefore, this possibility must also be excluded from the locus of satisfying points.</p>
<p><span class="math-container">$\underline{\text{case 3} ~z ~\text{is on this line}, ~\textbf{but not between}
~z_1 ~\text{and} ~z_2}$</span></p>
<p>Then, <strong>regardless</strong> of whether the point <span class="math-container">$z$</span> is closer to <span class="math-container">$z_1$</span> or closer to <span class="math-container">$z_2$</span>,</p>
<p><span class="math-container">$$\arg(z - z_1) ~=~ \arg(z - z_2).$$</span></p>
<p>Therefore, this possibility represents the locus of <strong>all</strong> satisfying points.</p>
<p>Thus, the locus of all satisfying points, when <span class="math-container">$\alpha = 0,$</span> is all <span class="math-container">$z$</span> that are
on the line formed by <span class="math-container">$z_1$</span> and <span class="math-container">$z_2$</span>, but are <strong>not</strong> between <span class="math-container">$z_1$</span> and <span class="math-container">$z_2$</span>.</p>
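<p>A quick numerical check of the three cases (the endpoints <code>z1</code> and <code>z2</code> below are arbitrary illustrative choices):</p>

```python
import cmath

# Arbitrary distinct endpoints, for illustration only.
z1, z2 = complex(0, 0), complex(2, 1)

def alpha(z):
    """arg((z - z1)/(z - z2)), returned in (-pi, pi]."""
    return cmath.phase((z - z1) / (z - z2))

direction = (z2 - z1) / abs(z2 - z1)

# Case 3: on the line but outside the segment -> alpha = 0.
outside = [z1 - 0.5 * direction, z2 + 3.0 * direction]
print([abs(alpha(z)) < 1e-9 for z in outside])   # [True, True]

# Case 2: strictly between z1 and z2 -> |alpha| = pi.
between = z1 + 0.25 * (z2 - z1)
print(abs(alpha(between)))                       # pi

# Case 1: off the line -> alpha is neither 0 nor +/- pi.
off = between + complex(0, 1)
print(abs(alpha(off)))
```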
|
560,929 | <p>Consider a circle with two perpendicular chords, dividing the circle into four regions $X, Y, Z, W$(labeled):</p>
<p><img src="https://i.stack.imgur.com/2TDK5.png" alt="enter image description here"></p>
<p>What is the maximum and minimum possible value of </p>
<p>$$\frac{A(X) + A(Z)}{A(W) + A(Y)}$$</p>
<p>where $A(I)$ denotes the area of $I$?</p>
<p>I know (instinctively) that the value will be maximum when the two chords will be the diameters of the circle, in that case, the area of the four regions will be equal and the value of the expression will be $1$. </p>
<p>I don't know how to rigorously prove this, however. And I have absolutely no idea about minimizing the expression. </p>
| Christian Blatter | 1,303 | <p>When the considered quotient is not constant it has a maximum value $\mu>1$, and the minimum value is then ${1\over \mu}$. I claim that
$$\mu={\pi+2\over\pi-2}\doteq4.504\ ,\tag{1}$$
as conjectured by MvG.</p>
<p><img src="https://i.stack.imgur.com/s8n29.jpg" alt="enter image description here"></p>
<p><em>Proof.</em> I'm referring to the above figure. Since the sum of the four areas is constant the quotient in question is maximal when
$$S(\alpha,p):={\rm area}(X)+{\rm area}(Z)\qquad\left(-{\pi\over2}\leq\alpha\leq {\pi\over2}, \quad 0\leq p\leq1\right)$$
is maximal. </p>
<p>Turning the turnstile around the point $(p,0)$ and looking at the "infinitesimal sectors" so arising we see that
$$\eqalign{{\partial S\over\partial\alpha}&={1\over2}\bigl((r_1^2-r_4^2)+(r_3^2-r_2^2)\bigr)\cr &={1\over2}\bigl((r_1+r_3)^2-(r_4+r_2)^2\bigr)+(r_2r_4-r_1r_3)\cr &=2\bigl((1-p^2\sin^2\alpha)-(1-p^2\cos^2\alpha)\bigr)+0\cr
&=2p^2(\cos^2\alpha-\sin^2\alpha)\ . \cr}$$
From this we conclude the following: Starting at $\alpha=-{\pi\over2}$ we have $S={\pi\over2}$, then $S$ decreases for $-{\pi\over2}<\alpha<-{\pi\over4}$, from then on increases until $\alpha={\pi\over4}$, and finally decreases again to $S={\pi\over2}$ at $\alpha={\pi\over2}$. It follows that for the given $p\geq0$ the area $S$ is maximal at $\alpha={\pi\over4}$.</p>
<p>We now fix $\alpha={\pi\over4}$ and move the center $(p,0)$ of the turnstile from $(0,0)$ to $(1,0)$. Instead of "infinitesimal sectors" we now have "infinitesimal trapezoids" and obtain
$${\partial S\over\partial p}={1\over\sqrt{2}}((r_2+r_3)-(r_1+r_4)\bigr)>0\qquad(0<p\leq1)\ .$$
It follows that $S$ is maximal when $\alpha={\pi\over4}$ and $p=1$. In this case one has $S=1+{\pi\over2}$, which leads to the $\mu$ given in $(1)$.</p>
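<p>The extreme case $\alpha={\pi\over4}$, $p=1$ can be checked with a deterministic grid over the unit disk (a numerical sanity check of $(1)$, not part of the proof):</p>

```python
import math

# Chords through (1, 0) with slopes +1 and -1: y = x - 1 and y = 1 - x.
# Classify midpoint-grid cells of the unit disk by which side of each
# chord they lie on, then compare the two pairs of opposite regions.
n = 1200
h = 2.0 / n
counts = {(s1, s2): 0 for s1 in (True, False) for s2 in (True, False)}
for i in range(n):
    x = -1.0 + (i + 0.5) * h
    for j in range(n):
        y = -1.0 + (j + 0.5) * h
        if x * x + y * y < 1.0:
            counts[(y > x - 1.0, y > 1.0 - x)] += 1

cell = h * h
pair1 = (counts[(True, True)] + counts[(False, False)]) * cell
pair2 = (counts[(True, False)] + counts[(False, True)]) * cell
ratio = max(pair1, pair2) / min(pair1, pair2)
print(ratio, (math.pi + 2) / (math.pi - 2))  # both approximately 4.504
```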
|
3,206,730 | <blockquote>
<p>Let <span class="math-container">$f : (-1,1)\to (-\pi/2,\pi/2)$</span> be the function defined by <span class="math-container">$f(x)= \tan^{-1}\left(\frac{2x}{1-x^2}\right)$</span>, then verify that <span class="math-container">$f$</span> is bijective.</p>
</blockquote>
<p>To check injectivity I assumed <span class="math-container">$f(x)=f(y)$</span> for two variables <span class="math-container">$x$</span> and <span class="math-container">$y$</span> and tried to prove <span class="math-container">$x=y$</span>, but I couldn't do so. I also wish to prove surjectivity.</p>
| Community | -1 | <p><span class="math-container">$$\forall x\in(-1,1):\left(\frac x{1-x^2}\right)'=\frac{1+x^2}{(1-x^2)^2}>1$$</span></p>
<p>and</p>
<p><span class="math-container">$$\forall t:(\arctan(t))'=\frac1{1+t^2}>0.$$</span></p>
<p>Both functions are monotonic and continuous.</p>
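<p>The monotonicity argument can be cross-checked numerically: on <span class="math-container">$(-1,1)$</span> one has <span class="math-container">$\tan^{-1}\left(\frac{2x}{1-x^2}\right)=2\tan^{-1}(x)$</span> (since <span class="math-container">$\tan 2\theta=\frac{2\tan\theta}{1-\tan^2\theta}$</span> and <span class="math-container">$2\tan^{-1}x\in(-\pi/2,\pi/2)$</span> there), which makes the bijectivity transparent.</p>

```python
import math

def f(x):
    # f as in the question, defined on (-1, 1).
    return math.atan(2 * x / (1 - x * x))

xs = [i / 1000 for i in range(-999, 1000)]

# On (-1, 1) the formula collapses to 2*atan(x) ...
assert all(abs(f(x) - 2 * math.atan(x)) < 1e-12 for x in xs)

# ... which is strictly increasing (hence injective) ...
vals = [f(x) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:]))

# ... and approaches -pi/2 and pi/2 at the endpoints (hence surjective).
print(f(-0.999999), f(0.999999))
```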
|
80,899 | <p>This is related to a previous post of mine (<a href="https://math.stackexchange.com/questions/78669/limit-superior-of-a-sequence-showing-an-alternate-definition">link</a>) regarding how to show that for any sequence $\{x_{n}\}$, the limit superior of the sequence, which is defined as $\text{inf}_{n\geq 1}\text{sup }_{k\geq n} x_{k}$, is equal to the to supremum of limit points of the sequence. Below I think I have found a counter-example to this (although I know I am wrong but just don't know where!).</p>
<p>Define the sequence $x_{n}=\sin(n)$. We have $\text{sup }_{k\geq n} x_{k}=1$ for any $k\geq 1$ (i.e. for any subsequence). Thus $\text{inf}_{n\geq 1}\text{sup }_{k\geq n} x_{k}=1$. Now the sequence has no limit points since it does not converge to anything so the supremum of all the subsequence limits is the supremum of the empty set which presumably is not equal to 1 (I actually do not know what it is). So we have a sequence where the supremum of limit points does not equal $\text{inf}_{n\geq 1}\text{sup }_{k\geq n} x_{k}=1$?</p>
<p>Any help with showing where this is wrong would be much appreciated.</p>
| Florian | 1,609 | <p>The sequence does not converge to anything, but a subsequence might converge. In fact there exists a subsequence whose limit is 1.</p>
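<p>This can be seen experimentally: the running maximum of $\sin(n)$ keeps creeping toward $1$, exhibiting a subsequence that converges to $1$ (so the supremum of subsequential limits is $1$, matching the limsup).</p>

```python
import math

# Record each new running maximum of sin(n) for n = 1, 2, ...:
# the recorded values form an increasing subsequence approaching 1.
best, record = -2.0, []
for n in range(1, 1_000_000):
    s = math.sin(n)
    if s > best:
        best = s
        record.append((n, s))

print(record[-3:])          # the latest record-setting indices
assert best > 0.9999        # already within about 1e-4 of 1
```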
|
892,986 | <p>Can someone help me to compute:
$$\int \frac{x}{(x^2-4x+8)^2}\mathrm dx$$
And, in general, the type:</p>
<p>$$\int \frac{N(x)}{(x^2+px+q)^n}\mathrm dx$$
with $\deg N(x)<n$ and $n$ a natural number greater than 1?</p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>As $\displaystyle x^2-4x+8=(x-2)^2+2^2,$</p>
<p>use <a href="http://en.wikipedia.org/wiki/Trigonometric_substitution" rel="nofollow">Trigonometric substitution</a> as $x-2=2\tan\theta$</p>
|
892,986 | <p>Can someone help me to compute:
$$\int \frac{x}{(x^2-4x+8)^2}\mathrm dx$$
And, in general, the type:</p>
<p>$$\int \frac{N(x)}{(x^2+px+q)^n}\mathrm dx$$
with $\deg N(x)<n$ and $n$ a natural number greater than 1?</p>
| amWhy | 9,003 | <p>$$\int \frac{x}{(x^2-4x+8)^2} dx = \int\frac{x - 2 + 2}{(x^2 - 4x + 8)^2}\,dx $$
$$= \frac 12\int \frac{2x - 4}{(x^2 - 4x + 8)^2}\,dx + \int \frac 2{((x-2)^2 + 2^2)^2}\,dx$$</p>
<p>For the first integral, use $u = x^2 - 4x + 8 \implies du = (2x-4)\,dx$.</p>
<p>For the second integral, put $2\tan \theta = (x-2)\implies 2\sec^2 \theta\,d\theta = dx$.</p>
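<p>Combining the two hints gives one possible closed form (my own assembly of the two substitutions, worth checking by differentiation): $-\frac{1}{2(x^2-4x+8)}+\frac18\arctan\frac{x-2}{2}+\frac{x-2}{4((x-2)^2+4)}+C$. A numerical spot-check:</p>

```python
import math

def F(x):
    # Candidate antiderivative assembled from the two substitutions:
    # u = x^2 - 4x + 8 for the first piece, x - 2 = 2 tan(theta) for the second.
    q = x * x - 4 * x + 8
    u = x - 2
    return (-1.0 / (2.0 * q)
            + math.atan(u / 2.0) / 8.0
            + u / (4.0 * (u * u + 4.0)))

def integrand(x):
    q = x * x - 4 * x + 8
    return x / (q * q)

# F' should equal the integrand; check by central differences.
h = 1e-6
for x in (-3.0, 0.0, 1.5, 2.0, 5.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-6
print("antiderivative checks out")
```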
|
523,529 | <p>I'm new to inequalities in mathematical induction and don't know how to proceed further. So far I was able to do this:<br>
$V(1): 1≤1 \text{ true}$ <br>
$V(n): n!≤((n+1)/2)^n$ <br>
$V(n+1): (n+1)!≤((n+2)/2)^{(n+1)}$<br><br></p>
<p>and I've got : <br>$(((n+1)/2)^n)\cdot(n+1)≤((n+2)/2)^{(n+1)}$ <br>$((n+1)^n)(n+1)≤((n+2)^n)((n/2)+1)$</p>
| Jeyekomon | 29,060 | <p>An induction proof:</p>
<p>First, let's make it a little bit more eye-candy:</p>
<p>$$
n! \cdot 2^{n} \leq (n+1)^n
$$</p>
<p>Now, for $n=1$ the inequality holds. For $n=k\in\mathbb{N}$ we know that:</p>
<p>$$
k! \cdot 2^{k} \leq (k+1)^k
$$</p>
<p>holds and we need to prove:</p>
<p>$$
(k+1)! \cdot 2^{k+1} \leq (k+2)^{k+1}
$$</p>
<p>We will now prove this chain of inequalities (which gives us the actual proof):</p>
<p>$$
(k+1)! \cdot 2^{k+1} \leq 2(k+1)^{k+1} \leq (k+2)^{k+1}
$$</p>
<p>The first inequality is from the assumption (both sides multiplied by $2(k+1)$). Now we just need to prove the second one. In other words, we need to prove this (for every positive integer $p$):</p>
<p>$$
2p^{p} \leq (p+1)^{p}
$$</p>
<p>And that's rather obvious. The inequality</p>
<p>$$
2 \leq \left(1+\frac{1}{p}\right)^{p}
$$</p>
<p>holds for every positive integer $p$, because the sequence on the right is known to be increasing, equals $2$ at $p=1$, and tends to $e$ as $p\to\infty$.</p>
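<p>A brute-force check of the rewritten inequality $n! \cdot 2^{n} \leq (n+1)^n$ for small $n$ (exact integer arithmetic, so no rounding issues):</p>

```python
import math

# Verify n! * 2^n <= (n+1)^n for n = 1..39; equality holds at n = 1.
for n in range(1, 40):
    assert math.factorial(n) * 2 ** n <= (n + 1) ** n
assert math.factorial(1) * 2 ** 1 == 2 ** 1  # the base case, with equality
print("inequality verified for n = 1..39")
```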
|
479,551 | <p>A store carries three types of donuts: Strawberry, Chocolate and Glazed</p>
<p>Suppose you bought $4$ of each kind and in addition, you have the option to apply sprinkles on your donuts. How many ways are there to eat the donuts if you never eat two donuts in a row that both have sprinkles? </p>
<p>The idea I have is to apply inclusion-exclusion, but I am not sure where to start or if inclusion-exclusion is the best route to take. Any hints would be great. </p>
| Brian M. Scott | 12,042 | <p>ShreevatsaR has already broken the problem into its component parts, and you’ve correctly solved the first part.</p>
<p>For the recurrence, let $u_n$ be the number of ways to apply sprinkles to a row of $n$ doughnuts so that no two adjacent doughnuts have sprinkles. This is the same as the number of $n$-bit binary strings (strings of zeroes and ones) in which no two ones are adjacent. Let’s calculate a few values by hand. Say that a binary string is <em>good</em> if it does not have two adjacent ones. The empty string is good and is the only string of length $0$, so $u_0=1$. Both strings of length $1$ are good, so $u_1=2$. Three of the four strings of length $2$ are good, $00,01$, and $10$, so $u_2=3$. The good strings of length $3$ are $000,001,010,100$, and $101$, so $u_3=5$. And the good strings of length $4$ are $0000,0001,0010,0100,1000,0101,1001$, and $1010$, so $u_4=8$. The sequence $1,2,3,5,8$ is probably familiar: it’s part of the Fibonacci sequence. If you want further verification, you can check that there are indeed $13$ good strings of length $5$. This suggests that the recurrence that we want is $u_n=u_{n-1}+u_{n-2}$, with initial values $u_0=1$ and $u_1=2$, so that $u_n=F_{n+2}$.</p>
<p>Take a closer look at the $8$ good strings of length $4$: is there any obvious way to split them into a group of $5$ and a group of $3$? I see one such way: $5$ of them end in $0$, and the other $3$ end in $1$. This observation leads naturally to the recurrence. Every good string of length $n$ that ends in $0$ can be obtained by tacking a $0$ onto the end of a good string of length $n-1$, and every good string of length $n-1$ can be extended to a good string of length $n$ by appending a $0$; that accounts for $u_{n-1}$ of the good strings of length $n$, and all of them that end in $0$. Assuming that $n\ge 2$, a good string of length $n$ that ends in $1$ must actually end in $01$, so it can be obtained from a good string of length $n-2$ by tacking on $01$. On the other hand, every good string of length $n-2$ can be extended to a good string of length $n$ by appending $01$, and that accounts for every good string of length $n$ that ends in $1$. There are therefore $u_{n-2}$ such strings and hence $u_{n-1}+u_{n-2}$ good strings of length $n$ altogether.</p>
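<p>The recurrence is easy to confirm against a brute-force count of the good strings:</p>

```python
from itertools import product

def brute(n):
    # Count length-n binary strings with no two adjacent 1s.
    return sum(1 for s in product("01", repeat=n) if "11" not in "".join(s))

# u_0 = 1, u_1 = 2, and u_n = u_{n-1} + u_{n-2} thereafter.
u = [1, 2]
for n in range(2, 16):
    u.append(u[-1] + u[-2])

assert all(brute(n) == u[n] for n in range(10))
print(u[:6])  # [1, 2, 3, 5, 8, 13]
```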
|
1,027,807 | <p>So I have this question that looks like</p>
<p>$$ \frac{x^3 + 3x^2 - x - 8}{x^2 + x - 6} $$</p>
<p>and first I got the partial fraction so getting </p>
<p>$$ x + 2 + \frac{3x + 4}{x^2 + x -6} $$</p>
<p>but now I'm trying to integrate it and I cannot remember for the life of me how I should integrate the fraction on the end. Please help.</p>
| Khosrotash | 104,171 | <p>$$x+2+\frac{3x+4}{x^2+x−6} =\\x+2+\frac{3x+4}{(x-2)(x+3)} =\\
x+2+\frac{a}{(x-2)}+\frac{b}{(x+3)} \\
$$now find $a,b$:
$$\frac{a}{(x-2)}+\frac{b}{(x+3)} =\frac{a(x+3)+b(x-2)}{(x-2)(x+3)}=\frac{3x
+4}{(x-2)(x+3)}\\\rightarrow \\a+b=3\\3a-2b=4\\a=2,b=1\\
$$</p>
|
4,601,113 | <p>I have a plane, defined by ax1+bx2+cx3=d, and a point which I know is on said plane. How could I convert the coordinates of the point to coordinates relative to the plane? I have attempted to find a solution online, but so far have been met with confusing answers such as</p>
<blockquote>
<p>Find the dot products <MY_POINT, e1> and <MY_POINT, e2></p>
</blockquote>
<p>But I do not know what e1 and e2 represent.
Could you tell me how to find the coordinates relative to the plane from the 3d coordinates of the point?</p>
<p>This is in context of a projection of a 3d object's points onto a 2d plane, so as to visualise it on a screen.</p>
| geetha290krm | 1,064,504 | <p>This follows immediately from SLLN's since <span class="math-container">$(f(X_i))$</span> is also i.i.d. Measurability and boundedness is enough for this; continuity is not needed. [Convergence holds in the almost sure sense].</p>
|
4,601,113 | <p>I have a plane, defined by ax1+bx2+cx3=d, and a point which I know is on said plane. How could I convert the coordinates of the point to coordinates relative to the plane? I have attempted to find a solution online, but so far have been met with confusing answers such as</p>
<blockquote>
<p>Find the dot products <MY_POINT, e1> and <MY_POINT, e2></p>
</blockquote>
<p>But I do not know what e1 and e2 represent.
Could you tell me how to find the coordinates relative to the plane from the 3d coordinates of the point?</p>
<p>This is in context of a projection of a 3d object's points onto a 2d plane, so as to visualise it on a screen.</p>
| Balaji sb | 213,498 | <p>By Chebyshev's inequality, for any fixed <span class="math-container">$\varepsilon>0$</span>,
<span class="math-container">$P(|\frac{1}{N} \sum_i f(X_i)-E(f(X))| \geq \varepsilon) \leq \frac{Var(\frac{1}{N} \sum_i f(X_i))}{\varepsilon^2} = \frac{Var(f(X))}{N\varepsilon^2} \rightarrow 0 \ as \ N \rightarrow \infty,$</span></p>

<p>where we used the fact that <span class="math-container">$Var(f(X))$</span> is finite.
This gives convergence in probability. It holds as long as the set <span class="math-container">$S=\{z: | z-E(f(X))| \geq \varepsilon\}$</span> is measurable w.r.t. the random variable <span class="math-container">$Z = \frac{1}{N} \sum_i f(X_i)$</span>.</p>
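<p>A small simulation illustrates the statement, with a hypothetical choice <span class="math-container">$f(x)=x^2$</span> and <span class="math-container">$X$</span> uniform on <span class="math-container">$[0,1]$</span>, so that <span class="math-container">$E(f(X))=\frac{1}{3}$</span>:</p>

```python
import random

random.seed(0)  # fixed seed for reproducibility

def empirical_mean(N):
    # (1/N) * sum of f(X_i) with f(x) = x^2 and X_i ~ Uniform[0, 1].
    return sum(random.random() ** 2 for _ in range(N)) / N

for N in (100, 10_000, 200_000):
    print(N, empirical_mean(N))  # approaches 1/3 as N grows

assert abs(empirical_mean(200_000) - 1 / 3) < 5e-3
```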
|
<p>I would love to get some insight on how to solve <span class="math-container">$\int_0^{\frac\pi4}\log(1+\tan x)\,\mathrm dx$</span> using Leibniz rule of integration. I know it can be solved using the property<span class="math-container">$$\int_a^bf(x)\,\mathrm dx=\int_a^bf((a+b)-x)\,\mathrm dx,$$</span>but I find that method rather tedious and am in search of a shorter, more elegant method.</p>
<p>Attached below is my attempt at doing it. I am unable to find <span class="math-container">$f(0)$</span> thus also the arbitrary constant.</p>
<p><img src="https://i.stack.imgur.com/f3osw.jpg"></p>
<p>My answer is <span class="math-container">$\dfrac π4\log(2)$</span>, which is wrong. Please correct my method and show me the right way to do it.</p>
<p>P.S. Please excuse my formatting; I am new to MathJax.</p>
| heropup | 118,193 | <p>Your choice of where to put the parameter results in significant problems, which I will illustrate with an example. Suppose we are interested in computing
<span class="math-container">$$\int_{x=0}^{\pi/4} \log (1+x) \, dx.$$</span>
If we introduce a parameter <span class="math-container">$t$</span> as you did, we have the function
<span class="math-container">$$I(t) = \int_{x=0}^{\pi/4} \log(t(1+x)) \, dx$$</span>
Hence
<span class="math-container">$$I'(t) = \int_{x=0}^{\pi/4} \frac{1}{t(1+x)} \cdot (1+x) \, dx = \int_{x=0}^{\pi/4} \frac{1}{t} \, dx = \frac{\pi}{4t}.$$</span>
Thus
<span class="math-container">$$I(t) = \int \frac{\pi}{4t} \, dt = \frac{\pi}{4} \log t + C$$</span>
and we want to find <span class="math-container">$I(1)$</span>. But this is just <span class="math-container">$C$</span>. You've accomplished nothing, and this is because on the interval of integration,
<span class="math-container">$$\log (t(1+x)) = \log t + \log (1+x).$$</span> So you end up getting <span class="math-container">$\frac{\pi}{4t}$</span> no matter what function of <span class="math-container">$x$</span> you have in the logarithm, because it is lost after differentiation with respect to <span class="math-container">$t$</span>. Had the integrand been <span class="math-container">$$\log\left(\sin^2 (x + 1/x) + 4x^{2059371} - 4/e^{x^2 + 1/x^{19}}\right),$$</span> you would still get the same result. This means your choice of parametrization of the integrand is ineffective and you must try a different one.</p>
|
487,084 | <p>I need to know if every group whose order is a power of a prime $p$ contains an element of order $p$? Should I proceed by picking an element $g$ of the group and proving that there is an element in $\langle g \rangle$ that has order $p$?</p>
| Alexander Gruber | 12,952 | <p>Not the group of order $p^0=1$!</p>
<p>Other than that, first prove that if the order of a group element $x$ is $mn$, then the order of $x^m$ is $n$. Then you can either show directly that if $x\in G$ and $|G|$ is finite, $x^{|G|}$ is the identity, or apply Lagrange's theorem to $\langle x \rangle$.</p>
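<p>A concrete illustration of the hint, in a group of prime-power order (the multiplicative group mod $17$ has order $16=2^4$; the generator $3$ is just a convenient choice):</p>

```python
mod, p = 17, 2  # |G| = 16 = 2^4 for the multiplicative group mod 17

def order(g):
    # Multiplicative order of g modulo mod.
    k, x = 1, g
    while x != 1:
        x = x * g % mod
        k += 1
    return k

g = 3
assert order(g) == 16           # 3 generates the whole group
# If the order of x is m*n, then x^m has order n; here m = 8, n = 2:
h = pow(g, 16 // p, mod)
print(h, order(h))              # an element of order p = 2
```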
|
2,292,511 | <p>I need some help to solve this problem:</p>
<blockquote>
<p>Evaluate A such that the exponential distribution with parameter $\alpha, P(X = x) = Ae^{−\alpha x}$, is normalized.
Here, $\alpha > 0$ and $\Omega = \mathbb{R}_{+}$.</p>
</blockquote>
<p>I've been trying to evaluate the following Integral </p>
<p>$$
\int^{\infty}_{0}Ae^{-\alpha x}dx=1 \,\, ,
$$</p>
<p>I always get the result that $A$ must be equal to 0... Am I doing something wrong?</p>
<p>Edit: What I did:</p>
<p>$$
\int^{\infty}_{0}Ae^{-\alpha x}dx=1 \Longrightarrow -\frac{A}{\alpha}\lim_{b\to \infty} \int^{b}_{0}e^{-\alpha x}dx=1 \,\, ,
$$</p>
<p>Since $\lim_{b\to \infty}e^{-\alpha \cdot \infty}=0$, I get the not true equality</p>
<p>$$
0=1 \,\, .
$$</p>
| Darío A. Gutiérrez | 353,218 | <p>$$\int_0^{\frac{\pi}{2}}(\sin^2\theta+a)^{-\frac{3}{2}}\,d\theta$$</p>
<p>$$\int_0^{\frac{\pi}{2}} \frac{1}{(\sin^2\theta+a)\sqrt{\sin^2\theta+a}} \,d\theta$$</p>
<p>$$\int_0^{\frac{\pi}{2}} \frac{1}{a(\frac{1}{a}\sin^2\theta+1)\sqrt{a\left( \frac{1}{a}\sin^2\theta+1\right) }} \,d\theta$$</p>
<p>$$\frac{1}{a\sqrt{a}}\int_0^{\frac{\pi}{2}} \frac{1}{(\frac{1}{a}\sin^2\theta+1)\sqrt{\frac{1}{a}\sin^2\theta+1}} \,d\theta$$</p>
<p>$$\frac{1}{a\sqrt{a}}\int_0^{\frac{\pi}{2}} \frac{1}{(1+a^{-1}\sin^2\theta)\sqrt{1+a^{-1}\sin^2\theta}} \,d\theta$$</p>
<p>$$\frac{1}{a\sqrt{a}}\underbrace{\int_0^{\frac{\pi}{2}} \frac{1}{(1 - (-a^{-1})\sin^2\theta)\sqrt{1 - (-a^{-1})\sin^2\theta}} \,d\theta}_{\Pi(-a^{-1}\,|\,-a^{-1})}$$</p>
<p>Also</p>
<blockquote>
<p><strong>Solution I</strong>
$$\int_0^{\frac{\pi}{2}}(\sin^2\theta+a)^{-\frac{3}{2}}\,d\theta = \frac{\Pi\left(-a^{-1}\,|\,-a^{-1}\right)}{a\sqrt{a}}\, $$
<a href="http://www.wolframalpha.com/input/?i=(EllipticPi%5B-(1%2Fa),-(1%2Fa)%5D)%2F(a*%5Csqrt%7Ba%7D)" rel="nofollow noreferrer"><em>For more information</em></a></p>
</blockquote>
<p>Where $\Pi\left(-a^{-1}\,|\,-a^{-1}\right)$ is <a href="http://mathworld.wolfram.com/EllipticIntegraloftheThirdKind.html" rel="nofollow noreferrer"><em>The Complete Elliptic Integral of the Third Kind</em></a>
$$\Pi\left( n\,|\,m\right) = \Pi\left( n;\,\frac{\pi}{2}\,|\,m\right) = \int_0^{\frac{\pi}{2}} \frac{1}{\left( 1 - n\sin^2(t)\right) \sqrt{1 - m\sin^2(t)}} \,dt$$
<br><br>
The integral is also allowed to be expressed in terms of <a href="http://functions.wolfram.com/EllipticIntegrals/EllipticK/introductions/CompleteEllipticIntegrals/ShowAll.html" rel="nofollow noreferrer"><em>The Complete Elliptic Integral of the Second Kind</em></a> $E(m)$</p>
<p>$$E(m) = \int_0^{\frac{\pi}{2}}\sqrt{1 - m\sin^2(t)}\, dt$$</p>
<p>There the complete elliptic integral of the second kind satisfies <a href="http://dlmf.nist.gov/19.7.E5" rel="nofollow noreferrer"><em>the imaginary modulus identity</em></a>:
$$E(-m)=\sqrt{1+m}\,\,E\left(\frac{m}{1+m}\right)$$
Where
\begin{align*}
E\left( \frac{m}{m+1}\right) &= \int_0^{\frac{\pi}{2}}\sqrt{1 - \left( \frac{m}{m+1}\right)\sin^2(t)}\, dt\\
&= \frac{1}{\sqrt{m+1}}\int_0^{\frac{\pi}{2}}\sqrt{m + 1 -m\sin^2(t)}\, dt\\
&= \frac{1}{\sqrt{m+1}}\int_0^{\frac{\pi}{2}}\sqrt{1 + m\cos^2(t)}\, dt
\end{align*}
Changing variables to $\theta = \frac{\pi}{2} - t,\, dt = -d\theta,\implies \cos(t) = \sin\theta$, with $\theta$ going now from $\frac{\pi}{2}$ to $0$, then reversing the path of integration and changing the sign on the integral, therefore
$$E\left( \frac{m}{m+1}\right) = \frac{1}{\sqrt{m+1}}\int_0^{\frac{\pi}{2}}\sqrt{1 + m\sin^2\theta }\, d\theta$$
<br>
In this way we can write the integral as follows
$$\int_0^{\frac{\pi}{2}}(\sin^2\theta+a)^{-\frac{3}{2}}\,d\theta$$
$$\frac{1}{a\sqrt{a}}\int_0^{\frac{\pi}{2}}\frac{1}{(\frac{1}{a}\sin^2\theta+1)\sqrt{\frac{1}{a}\sin^2\theta+1}} \,d\theta$$
Set $k = \frac{1}{a} $
$$\frac{k}{\sqrt{a}}\int_0^{\frac{\pi}{2}}\frac{1}{(k\sin^2\theta+1)\sqrt{k\sin^2\theta+1}} \,d\theta$$
$$\frac{k}{\sqrt{a}}\int_0^{\frac{\pi}{2}}\frac{1}{(k\sin^2\theta+1)^{\frac{3}{2}}}\,d\theta$$</p>
<p>$\frac{d^2}{d\theta^2}(\sqrt{1 + k\sin^2\theta})$<br>
$= \frac{k-2k\sin^2\theta - k^2\sin^4\theta}{(1 + k\sin^2\theta)^{\frac{3}{2}}}$<br>
$= \frac{(k+1)-(1+2k\sin^2\theta + k^2\sin^4\theta)}{(1 + k\sin^2\theta)^{\frac{3}{2}}}$<br>
$= \frac{(k+1)- (1 + k\sin^2\theta)^2}{(1 + k\sin^2\theta)^{\frac{3}{2}}}$<br>
$= \frac{k+1}{(1 + k\sin^2\theta)^{\frac{3}{2}}}-\sqrt{1+k\sin^2\theta}$<br></p>
<p>$$\frac{k}{\sqrt{a}}\cdot\frac{1}{k + 1}\int_0^{\frac{\pi}{2}}\left[ \sqrt{1 + k\sin^2\theta} + \frac{d^2}{d\theta^2}(\sqrt{1 + k\sin^2\theta}) \right]\,d\theta$$
$$\frac{1}{k + 1}\left[ \frac{k}{\sqrt{a}}\underbrace{\int_0^{\frac{\pi}{2}}\sqrt{1 + k\sin^2\theta}\, d\theta}_{E(-k)} + \frac{k}{\sqrt{a}}\left(\frac{k\sin\theta\cos\theta}{\sqrt{k\sin^2\theta+1}}\bigg\vert_{0}^{\frac{\pi}{2}}\right) \right]$$
$$\frac{1}{k+1}\left[\frac{k}{\sqrt{a}}E(-k) + (0 - 0)\right]$$
$$\frac{k}{(k+1)\sqrt{a}}E(-k)$$
Going back to $k=\frac{1}{a}$:
$$\frac{1}{\left( \frac{1}{a}+1\right)a^{\frac{3}{2}}}E(-a^{-1})$$
$$\frac{1}{\left( \frac{1}{a}+1\right)a^{\frac{3}{2}}}\sqrt{1+a^{-1}}E\left( \frac{a^{-1}}{1+a^{-1}}\right)$$</p>
<blockquote>
<p><strong>Solution II</strong>
$$\int_0^{\frac{\pi}{2}}(\sin^2\theta+a)^{-\frac{3}{2}}\,d\theta = \frac{1}{\sqrt{\frac{1}{a}+1}\,a^{\frac{3}{2}}}E\left( \frac{1}{a + 1}\right)$$<a href="http://www.wolframalpha.com/input/?i=integral%20from%200%20to%20pi%2F2%20of%201%2F(a%20%2B%20sin%5E2(x))%5E(3%2F2)%20with%20respect%20to%20x" rel="nofollow noreferrer">For more Information</a></p>
</blockquote>
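<p>Both closed forms can be spot-checked numerically; the sketch below computes $E(m)$ directly from its defining integral rather than relying on a library routine:</p>

```python
import math

def simpson(f, a, b, n=4000):
    # Composite Simpson's rule with an even number of subintervals n.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

def E(m):
    # Complete elliptic integral of the second kind, Legendre form E(m).
    return simpson(lambda t: math.sqrt(1 - m * math.sin(t) ** 2),
                   0.0, math.pi / 2)

for a in (0.5, 1.0, 3.0):
    lhs = simpson(lambda t: (math.sin(t) ** 2 + a) ** -1.5, 0.0, math.pi / 2)
    rhs = E(1 / (a + 1)) / (math.sqrt(1 / a + 1) * a ** 1.5)
    assert abs(lhs - rhs) < 1e-9
print("Solution II matches direct numerical integration")
```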
<p><br><br>
Many important applications of these integrals have been found.
The problem of evaluating such integrals was converted into the problem of evaluating only three basic integrals, namely the incomplete elliptic integrals of the first, second, and third kinds — $F(z|m),\, E(z|m)$, and $\Pi(n;z|m)$ (A. M. Legendre). See <a href="http://functions.wolfram.com/EllipticIntegrals/EllipticF/introductions/IncompleteEllipticIntegrals/ShowAll.html" rel="nofollow noreferrer">Introduction to the incomplete elliptic integrals</a>
<br><br>
Here are some interesting links:<br>
1- <a href="http://mathworld.wolfram.com/EllipticIntegral.html" rel="nofollow noreferrer">Definition of Elliptic Integral</a><br>
2- <a href="http://functions.wolfram.com/EllipticIntegrals/EllipticK/introductions/CompleteEllipticIntegrals/ShowAll.html" rel="nofollow noreferrer">Introduction to the complete elliptic integrals</a><br>
3- <a href="http://reference.wolfram.com/language/ref/EllipticPi.html" rel="nofollow noreferrer">Compute $\Pi\left(n|m\right)$</a><br>
4- <a href="http://dlmf.nist.gov/19" rel="nofollow noreferrer">Elliptic Integrals</a><br></p>
|
4,185,658 | <p>In the proof of Proposition 1.3 on page 100 of Conway's Functional Analysis book, the following is claimed (<span class="math-container">$X$</span> is a TVS and <span class="math-container">$p$</span> is a seminorm):</p>

<p>If <span class="math-container">$0 \in \operatorname{Int}{\{x \in X : p(x) \le 1}\}$</span> then <span class="math-container">$0 \in \operatorname{Int}{\{x \in X : p(x) \le \epsilon}\}$</span> for every <span class="math-container">$\epsilon >0$</span>. Why is this true?</p>
| Martin Argerami | 22,857 | <p>Fix <span class="math-container">$\epsilon>0$</span>. If <span class="math-container">$$0 \not\in \operatorname{Int}{\{x \in X : p(x) \le \epsilon}\},$$</span> there exists a net <span class="math-container">$\{x_j\}$</span> such that <span class="math-container">$p(x_j)>\epsilon$</span> and <span class="math-container">$x_j\to0$</span>. As multiplication by a scalar is continuous, <span class="math-container">$\frac1\epsilon x_j\to0$</span>; and <span class="math-container">$p(\frac1\epsilon x_j)>1$</span>, so <span class="math-container">$$0 \not \in \operatorname{Int}{\{x \in X : p(x) \le 1}\}.$$</span></p>
|
2,777,631 | <p>The angle bisectors of triangle $ABC$ meet its circumcircle (after passing through the incenter) at the opposite points $P, Q$, and $R$ respectively.</p>
<p>Find $\angle RQP.$ </p>
<p>Is there any way of getting the answer through its in-center properties?</p>
<p>Ans = $90-\frac{B}{2}$ </p>
| Arthur | 15,500 | <p>It's not a contradiction yet, because $c$ could be $0$. Specifically pick $c\geq0$ such that $c\neq0$ (if you can guarantee that such a thing exists), and you will have your contradiction.</p>
|
4,166,894 | <p>Let <span class="math-container">$N\triangleleft G$</span> and <span class="math-container">$K\leqslant G$</span>. Consider the onto group homomorphism <span class="math-container">$\phi :G \to G/N$</span>. Show that <span class="math-container">$\phi(K)=KN/N$</span>.</p>

<p>I thought of using the equality <span class="math-container">$(G/N)^{n}=G^{n}N/N$</span>, but I cannot make the argument work. Any idea will be appreciated.</p>
| Steven | 849,372 | <p>Intuitively this is easy to see: with a fixed number of hyperedges, you will maximize the number of complete subgraphs when the hyperedges share as many vertices as possible.
So in your case: consider the complete 3-uniform hypergraph on <span class="math-container">$x$</span> vertices. It has <span class="math-container">$\binom{x}{3}$</span> hyperedges (as requested), and contains <span class="math-container">$\binom{x}{4}$</span> copies of <span class="math-container">$K_4^3$</span>. This proves that the upper bound is tight.</p>
|
1,081,717 | <p>I have a vector valued mapping $F:\mathbb{R}^2\rightarrow\mathbb{R^2}$, I'm wondering whether there's a sufficient condition for it to be a contraction mapping. </p>
<p>For example, if $F$ is $:\mathbb{R}\rightarrow\mathbb{R}$, and $F\in C^1$, then a sufficient condition is $F'(\cdot)<1$ in all its domain. So for the $\mathbb{R}^2\rightarrow\mathbb{R^2}$ mapping, is there a similar condition, say, the spectrum of its Jacobian matrix is less than 1?</p>
<p>Thanks!</p>
| Community | -1 | <p>Disclaimer: these are my musings about what's going on, without actually having seen anything that properly explains things.</p>
<hr>
<p>First the stuff I do know. Let $V^*$ denote the space of all linear functionals on a vector space $V$.</p>
<p>An important part of <em>multilinear algebra</em> is the tensor product. You can look this up, but the key idea is that $V \otimes W$ is the target space for the most general way for multiplying vectors from $V$ with vectors from $W$ to get a result that is still a vector space, and such that the corresponding tensor product of vectors $\otimes : V \times W \to V \otimes W$ is a bilinear function.</p>
<p>If $V$ and $W$ are finite dimensional, and $v_i$ and $w_j$ are bases, then a basis for $V \otimes W$ would be given by the set $v_i \otimes w_j$.</p>
<p>The odd thing about multilinear algebra is that things can be combined in a lot of ways. For example, a linear functional $T : V \to \mathbf{R}$ can be used to construct a map $V \otimes W \to W$, defined on a generating set by the formula</p>
<p>$$ T(v \otimes w) = T(v) w $$</p>
<hr>
<p>Now, the stuff I don't know.</p>
<p>I assume $\mathscr{S}(\mathbf{R}^n)$ denotes the space of test functions. Since the ordinary product of a test function in $x$ and a test function in $y$ is a bilinear map, there is a corresponding linear transformation</p>
<p>$$ \mathscr{S}(\mathbf{R}) \otimes \mathscr{S}(\mathbf{R}) \to \mathscr{S}(\mathbf{R}^2) $$</p>
<p>which replaces the tensor product with the ordinary product. I believe this map is continuous, injective, and has dense image.</p>
<p>For two linear functionals $S$ and $T$ on $\mathscr{S}(\mathbf{R})$, their tensor product acts on the space of tensor products of test functions, given by the formula on a generating set:</p>
<p>$$ (S \otimes T)(f \otimes g) = S(f) T(g) $$</p>
<p>We can thus extend $S \otimes T$ by continuity to be a <em>partial</em> linear functional on $\mathscr{S}(\mathbf{R}^2)$.</p>
<p>And this is about where my musings peter out. Maybe $S \otimes T$ is always a totally defined functional? In any case, a key point is that I'm not trying to convolve two arbitrary distributions on $\mathbf{R}^2$: instead, I'm trying to find a decomposition where I can split the problem into separate variables so that the two distributions are univariate.</p>
<p>This would all be nicer with Hilbert spaces; above when I say "tensor product", I mean the tensor product of the vector space structure. I think the tensor product of the <em>Hilbert space structure</em> works out to be nicer, so that we actually have an isomorphism $\mathscr{L}(\mathbf{R}) \otimes \mathscr{L}(\mathbf{R}) \cong \mathscr{L}(\mathbf{R}^2)$ as well as an isomorphism $H^* \cong H$, and all the facts I know about multilinear algebra still apply too.</p>
|