| qid | question | author | author_id | answer |
|---|---|---|---|---|
58,024 | <p><img src="https://i.stack.imgur.com/cTpA2.jpg" alt="Show that..."></p>
<p>The picture says it all. "Vis at" means "show that". My first thought was that $h$ is $2x$, which is not correct. Maybe the area formulas are useful?</p>
<p>EDIT: (To make the question less dependent from the <a href="https://math.meta.stackexchange.com/questions/1805/on-the-inclusion-of-pages-of-text-as-images-in-questions">picture</a>.)</p>
<p>A square with side $x$ is placed in the right triangle with legs $g$ and $h$. Show that $x=\frac{gh}{g+h}$.</p>
| Michael Hardy | 11,667 | <p>You can split the triangle into one triangle with base $g$ and height $x$, and another with base $h$ and height $x$. Just draw the diagonal line from the right angle to the opposite vertex of the $x\times x$ square. One of those triangles has area $gx/2$; the other has area $hx/2$. But they must add up to $gh/2$. Hence
$$
\frac{gx}{2} + \frac{hx}{2} = \frac{gh}{2}.
$$
By trivial algebra, the desired result will follow.</p>
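A quick numerical sanity check of the resulting formula (a sketch; the leg lengths are arbitrary choices of mine):

```python
# Check that x = g*h/(g + h) makes the two sub-triangle areas
# (bases g and h, both of height x) add up to the whole area g*h/2.
def inscribed_square_side(g, h):
    return g * h / (g + h)

for g, h in [(3.0, 4.0), (1.0, 1.0), (5.0, 12.0)]:
    x = inscribed_square_side(g, h)
    split = g * x / 2 + h * x / 2
    whole = g * h / 2
    assert abs(split - whole) < 1e-12
```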
|
3,455,009 | <p>In the proof of the expectation of the binomial distribution,</p>
<p><span class="math-container">$$E[X]=\sum_{k=0}^{n}k \binom{n}{k}p^kq^{n-k}=p\frac{d}{dp}(p+q)^n=pn(p+q)^{n-1}=np$$</span></p>
<p>Why is <span class="math-container">$\sum_{k=0}^{n}k \binom{n}{k}p^kq^{n-k}= p\frac{d}{dp}(p+q)^n$</span>?</p>
<p>I know that by the binomial theorem <span class="math-container">$(p+q)^n=\sum_{k=0}^{n}\binom{n}{k}p^kq^{n-k}$</span>. </p>
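The identity can be checked numerically (a sketch; the trick is to treat $p$ and $q$ as independent variables when differentiating, and only set $q=1-p$ at the end):

```python
from math import comb

def binom_mean_sum(n, p):
    # left-hand side: sum over k of k * C(n,k) * p^k * q^(n-k), with q = 1 - p
    q = 1 - p
    return sum(k * comb(n, k) * p**k * q**(n - k) for k in range(n + 1))

n, p = 10, 0.3
q = 1 - p
# p * d/dp (p+q)^n with q held fixed equals p * n * (p+q)^(n-1)
assert abs(binom_mean_sum(n, p) - p * n * (p + q)**(n - 1)) < 1e-12
assert abs(binom_mean_sum(n, p) - n * p) < 1e-12
```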
| Steve Kass | 60,500 | <p>Equivalently, you want to find all complex numbers <span class="math-container">$z$</span> for which <span class="math-container">$z$</span> and <span class="math-container">$z+1$</span> are cube roots of the same complex number <span class="math-container">$c$</span>. In the <a href="https://en.wikipedia.org/wiki/Complex_plane#Argand_diagram" rel="nofollow noreferrer">Argand plane</a>, the three cube roots of a complex number <span class="math-container">$c$</span> lie at the vertices of an equilateral triangle with centroid at the origin. (This is a fun exercise that is left to the reader!)</p>
<p>The numbers <span class="math-container">$z$</span> and <span class="math-container">$z+1$</span> are two vertices of an equilateral triangle if and only if the side length of the triangle is <span class="math-container">$1$</span> and one side is parallel to the horizontal axis (and bisected by the vertical axis). Therefore the real parts of <span class="math-container">$z$</span> and <span class="math-container">$z+1$</span> are <span class="math-container">$\pm{1\over2}$</span>, so <span class="math-container">$z$</span> has real part <span class="math-container">$-{1\over2}$</span>.</p>
<p>The distance between the centroid of an equilateral triangle and any side is one-third the triangle’s height, and the height of an equilateral triangle with side <span class="math-container">$1$</span> is <span class="math-container">$\sqrt3\over2$</span>, so the imaginary part of <span class="math-container">$z$</span> (and of <span class="math-container">$z+1$</span>) is <span class="math-container">$\pm{\sqrt3\over6}$</span>, whence <span class="math-container">$z=-{1\over2}\pm i{\sqrt3\over6}$</span>.</p>
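A quick numerical confirmation (a sketch; "cube roots of the same number" means exactly $z^3=(z+1)^3$):

```python
import math

# z = -1/2 ± i*sqrt(3)/6 should satisfy z^3 == (z+1)^3,
# i.e. z and z+1 are cube roots of the same complex number.
for sign in (+1, -1):
    z = complex(-0.5, sign * math.sqrt(3) / 6)
    assert abs(z**3 - (z + 1)**3) < 1e-12
```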
|
3,444,673 | <p>How to evaluate this double integral
<span class="math-container">$$\int_{0}^1\int_{x^2}^x \frac{x}{y}e^{-\frac{x^2}{y}}dydx.$$</span></p>
<p>It seems like I am evaluating the double integral of a non-elementary function. I tried substitutions but the integral is growing. </p>
<hr>
<p>Following Fred's suggestion, the integral evaluates to <span class="math-container">$\frac{e-2}{2e}$</span>.</p>
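The stated value can be checked numerically (a sketch; a crude hand-rolled midpoint rule, not production quadrature):

```python
import math

def estimate(N=400):
    # midpoint-rule estimate of the double integral over 0 < x < 1, x^2 < y < x
    total = 0.0
    for i in range(N):
        x = (i + 0.5) / N
        lo, hi = x * x, x
        dy = (hi - lo) / N
        for j in range(N):
            y = lo + (j + 0.5) * dy
            total += (x / y) * math.exp(-x * x / y) * dy
    return total / N  # dx = 1/N

exact = (math.e - 2) / (2 * math.e)  # about 0.1321
assert abs(estimate() - exact) < 1e-3
```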
| Stefan Lafon | 582,769 | <p>We'll need to use the fact that, by Cauchy–Schwarz:
<span class="math-container">$$1+\beta\gamma = 1\times 1 +\beta\times\gamma
\leq\sqrt{1+\beta^2}\sqrt{1+\gamma^2}\tag{1}$$</span>
First inequality:
<span class="math-container">$$\begin{split}
1+\bigg(\int|f|d\mu\bigg)^2&=\int\bigg[1\bigg(\int|f(x)|d\mu(x)\bigg)|f(y)|\bigg]d\mu(y)\\
&\leq\int\sqrt{1+\bigg(\int|f(x)|d\mu(x)\bigg)^2}\sqrt{1+|f(y)|^2}d\mu(y) \,\,\,\,\,\mbox{ using }(1)\\
&=\sqrt{1+\bigg(\int|f|d\mu\bigg)^2}\cdot\int\sqrt{1+|f|^2}d\mu
\end{split}$$</span>
Second inequality:
<span class="math-container">$$
\bigg(\int\sqrt{1+|f|^2}d\mu\bigg)^2 \leq\bigg(\int\sqrt{1+2|f| + |f|^2}d\mu\bigg)^2 = \bigg(\int(1+|f|)d\mu\bigg)^2= \bigg(1 + \int|f|d\mu\bigg)^2
$$</span></p>
|
3,444,673 | <p>How to evaluate this double integral
<span class="math-container">$$\int_{0}^1\int_{x^2}^x \frac{x}{y}e^{-\frac{x^2}{y}}dydx.$$</span></p>
<p>It seems like I am evaluating the double integral of a non-elementary function. I tried substitutions but the integral is growing. </p>
<hr>
<p>Following Fred's suggestion, the integral evaluates to <span class="math-container">$\frac{e-2}{2e}$</span>.</p>
| Samrat Mukhopadhyay | 83,973 | <p>The second inequality is fine.</p>
<p>For the first inequality, first note that the desired inequality can be expressed as <span class="math-container">$\sqrt{1+(\mathbb{E}(|f|))^2}\le \mathbb{E}\sqrt{1+|f|^2}$</span>, which is then a simple consequence of Jensen's inequality, since the function <span class="math-container">$\sqrt{1+x^2}$</span> is convex.</p>
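A tiny numeric illustration of the Jensen step (a sketch; this assumes $\mu$ is a probability measure, as the expectation notation suggests, here a two-point measure of my own choosing):

```python
import math

# Discrete probability measure: two points of mass 1/2; |f| takes values 0.5 and 2.
vals, weights = [0.5, 2.0], [0.5, 0.5]
E_f = sum(w * v for w, v in zip(weights, vals))
lhs = math.sqrt(1 + E_f**2)                                        # sqrt(1 + (E|f|)^2)
rhs = sum(w * math.sqrt(1 + v**2) for w, v in zip(weights, vals))  # E sqrt(1 + |f|^2)
assert lhs <= rhs  # Jensen: phi(E f) <= E phi(f) for convex phi(x) = sqrt(1 + x^2)
```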
|
3,620,612 | <p>I had a question in the exercises of a complex analysis course that I couldn't solve. It asked me to evaluate this integral <span class="math-container">$$\int_{-\pi}^{\pi}\frac{dx}{\cos^2(x) + 1}$$</span></p>
<p>I tried to evaluate it without using residues; the antiderivative of this function contains <span class="math-container">$\tan$</span>, which is not defined at <span class="math-container">$\pm \pi/2 $</span>.</p>
<p>All the examples I have worked with so far are rational functions (a polynomial over another polynomial).</p>
| mathlover123 | 761,688 | <p>First start by:
<span class="math-container">$$\int _{-\pi }^{\pi }\:\frac{dx}{\left(\cos \left(x\right)\right)^2+1}=4\int _{0\:}^{\frac{\pi }{2}\:}\:\frac{dx}{\left(\cos \:\left(x\right)\right)^2+1}=4\int _{0\:}^{\frac{\pi \:}{2}\:}\:\frac{\left(\sec \left(x\right)\right)^2dx}{2+\left(\tan \left(x\right)\right)^2}$$</span>
Then using substitution <span class="math-container">$$u\sqrt{2}=\tan \left(x\right)$$</span>
We get:
<span class="math-container">$$2\sqrt{2}\int _0^{\infty }\:\frac{du}{1+u^2}=\pi \sqrt{2}$$</span></p>
|
19,880 | <p>I want to write down the degree-6 Maclaurin polynomial of $\ln(\cos(x))$. I'm having trouble understanding what I need to do, let alone explaining why it's true rigorously.</p>
<p>The known expansions of $\ln(1+x)$ and $\cos(x)$ gives:</p>
<p>$$\forall x \gt -1,\ \ln(1+x)=\sum_{n=1}^{k} (-1)^{n-1}\frac{x^n}{n} + R_{k}(x)=x-\frac{x^2}{2}+\frac{x^3}{3}+R_{3}(x)$$
$$\cos(x)=\sum_{n=0}^{k} (-1)^{n}\frac{x^{2n}}{(2n)!} + T_{2k}(x)=1-\frac{x^2}{2}+\frac{x^4}{24}+T_{4}(x)$$</p>
<p>Writing $\ln(1+x)$ with $t=x+1$ gives:</p>
<p>$$\forall t \in (0,2],\ \ln(t)=\sum_{n=1}^{k} (-1)^{n-1}\frac{(t-1)^n}{n} + R_{k}(t-1)$$</p>
<p>But now I'm clueless. </p>
<ul>
<li>Do I just 'plug' $\cos(x)$ expansion in $\ln(t)$? Can I even do that?</li>
<li>Isn't it a problem that $\ln(x)$ isn't defined for $x\leq 0$ but $|\cos(x)| \leq 1$?</li>
</ul>
| Derek Jennings | 1,301 | <p>The MacLaurin series of $f(x)$ is given by</p>
<p>$$f(x)= f(0)+f^{\prime}(0)x + \frac{ f^{\prime \prime }(0)}{2!}x^2 +
\frac{ f^{\prime \prime \prime}(0)}{3!}x^3 + \cdots $$</p>
<p>and so you need to calculate the values of $f(0),f^{\prime}(0),\ldots$ for your function
$f(x)=\log( \cos x).$</p>
<p>We have
$$\begin{align}
f(x) &= \log( \cos x ) \quad \textrm{ and so } f(0)=0, \\
f^{\prime}(x) &= - \tan x \quad \textrm{ and so } f^{\prime}(0)=0, \\
f^{\prime \prime }(x) &= - \sec^2 x \quad \textrm{ and so } f^{\prime \prime }(0)=-1.
\end{align}$$</p>
<p>Just carry on differentiating and evaluating the derivatives at $x=0$ until you reach the term in $x^6 .$</p>
<p>It's worth noting that since $\sec^2 x = 1 + \tan^2 x$ we can make the evaluation easier by writing</p>
<p>$$f^{ \prime \prime }(x)=-1-f^{\prime}(x)^2$$</p>
<p>and so, using the chain rule,</p>
<p>$$f^{ \prime \prime \prime }(x)=-2f^{\prime}(x)f^{\prime \prime}(x),$$</p>
<p>and so putting $x=0$ we have $f^{ \prime \prime \prime }(0)=0.$</p>
<p>Differentiating again we get</p>
<p>$$f^{(4)}(x)=-2f^{ \prime \prime }(x)^2-2f^{\prime}(x)f^{\prime \prime \prime}(x)$$</p>
<p>and so $f^{(4)}(0)=-2,$ and so on...</p>
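Carrying this through degree 6 (worth re-deriving by hand) gives the polynomial $-\frac{x^2}{2}-\frac{x^4}{12}-\frac{x^6}{45}$; a quick numerical check at a small $x$, as a sketch:

```python
import math

def maclaurin_log_cos(x):
    # degree-6 Maclaurin polynomial of log(cos x): -x^2/2 - x^4/12 - x^6/45
    return -x**2 / 2 - x**4 / 12 - x**6 / 45

x = 0.1
# the next term is O(x^8), so the error at x = 0.1 should be well below 1e-9
assert abs(maclaurin_log_cos(x) - math.log(math.cos(x))) < 1e-9
```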
|
1,973,686 | <p>I am stuck on two questions :</p>
<ol>
<li>If $f,g\in C[0,1]$, where $C[0,1]$ is the set of all continuous functions on $[0,1]$, is the mapping $id:(C[0,1],d_2)\to (C[0,1],d_1)$ continuous? Here $id$ denotes the identity mapping.</li>
</ol>
<p>where $d_2(f,g)=\left(\int _0^1 |f(t)-g(t)|^2\,dt \right)^{\frac{1}{2}}$ and $d_1(f,g)=\int _0^1 |f(t)-g(t)|\,dt$?</p>
<p>2. If $f\in L^2(\Bbb R)$, does it imply that $f\in L^1(\Bbb R)$?</p>
<p><strong>My try</strong>:</p>
<p>1. The first question reduces to proving that if $\left(\int _0^1 |f(t)-g(t)|^2\,dt \right)^{\frac{1}{2}}$ is finite, then $\int _0^1 |f(t)-g(t)|\,dt$ is finite, which I am unable to prove.</p>
<p>I am also unable to conclude anything for the 2nd one.</p>
<p>Please give some hints</p>
| marwalix | 441 | <p>For the first question the identity <em>is</em> continuous: by the Cauchy–Schwarz inequality on the finite measure space $[0,1]$, $$\int_0^1 |f-g|\,dt \le \left(\int_0^1 |f-g|^2\,dt\right)^{1/2},$$ i.e. $d_1(f,g)\le d_2(f,g)$, so $id:(C[0,1],d_2)\to (C[0,1],d_1)$ is Lipschitz continuous. (On a finite measure space $L^2\subset L^1$, so no counterexample exists on $[0,1]$.)</p>
<p>For the second question the answer is no. Take $f(x)=\frac1x$ for $x\ge 1$ and $f(x)=0$ otherwise. Then $\int_{\Bbb R} f^2=\int_1^\infty \frac{dx}{x^2}=1<\infty$, so $f\in L^2(\Bbb R)$, but $\int_1^\infty \frac{dx}{x}$ diverges, so $f\notin L^1(\Bbb R)$. The inclusion fails precisely because $\Bbb R$ has infinite measure.</p>
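For the second question, a numerical illustration that square-integrability on $\Bbb R$ does not force integrability (a sketch; the counterexample $f(x)=1/x$ on $[1,\infty)$ is standard):

```python
import math

# f(x) = 1/x on [1, T]: the integral of f grows like log T (diverges as T -> inf),
# while the integral of f^2 stays below 1 (converges).
def tail_integrals(T, N=100_000):
    h = (T - 1) / N
    int_f = int_f2 = 0.0
    for k in range(N):
        x = 1 + (k + 0.5) * h  # midpoint rule on [1, T]
        int_f += h / x
        int_f2 += h / (x * x)
    return int_f, int_f2

for T in (10.0, 100.0, 1000.0):
    int_f, int_f2 = tail_integrals(T)
    assert abs(int_f - math.log(T)) < 1e-2   # grows without bound
    assert abs(int_f2 - (1 - 1 / T)) < 1e-2  # bounded by 1
```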
|
610,672 | <p>Could anyone help me with homework or give me a hint? Any help would be highly appreciated.</p>
<p>Given a set of N distinct objects:</p>
<p>How many ways are there to pick any number of them to be in a pile while the rest are in another pile? If your answer is written in terms of binomial coefficients, use the Binomial Theorem to write it as a single ($N$-dependent) number.</p>
<p>Thanks
Daniel </p>
| Eric Auld | 76,333 | <p>Think about going to each object and flipping a switch on it, L or R, to decide which pile it goes in. How many choices do you have to make in this process? Now make sure to divide by two because we don't care to distinguish between the left and the right piles.</p>
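A small enumeration check of the counting argument (a sketch; whether you divide by two depends on whether the two piles are distinguishable, which the problem leaves slightly ambiguous):

```python
from itertools import combinations

# Each of N objects gets an independent L/R switch, so 2^N labelled splits.
N = 5
count = sum(len(list(combinations(range(N), k))) for k in range(N + 1))
assert count == 2**N           # binomial theorem: sum over k of C(N,k) = (1+1)^N
# If the two piles are indistinguishable, each split is counted twice:
assert count // 2 == 2**(N - 1)
```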
|
2,571,031 | <p>I need to solve the quadratic programming problem $$ \text{minimize}\,\, \sum_{j=1}^{n}(x_{j})^{2} \\ \text{subject to}\,\,\, \sum_{j=1}^{n}x_{j}=1,\\ 0 \leq x_{j}\leq u_{j}, \, \, j=1,\cdots , n $$</p>
<p>I know that the first thing I need to do is form the Lagrangian. </p>
<p>Now, for a problem in standard form (note that below, $\overline{x}$, $\overline{\lambda}$, $\overline{\mu}$ denote vectors): $$ \text{minimize} \, \, f_{0}(\overline{x}) \\ \text{subject to} \,\,\, f_{i}(\overline{x}) \leq 0, \,\,\, i=1,\cdots, m \\ h_{i}(\overline{x}) = 0, \,\,\, i = 1,\cdots, p $$ the Lagrangian looks like this: $\displaystyle L(\overline{x},\overline{\lambda}, \overline{\mu}) = f_{0}(\overline{x}) + \sum_{i=1}^{m}\lambda_{i}f_{i}(\overline{x}) + \sum_{i=1}^{p}\mu_{i}h_{i}(\overline{x})$</p>
<p>In this case, I am being thrown off by the fact that my sole $h_{i}(\overline{x})$ happens to be a sum that adds up to $1$, and if I want my $f_{i}(\overline{x})$'s to be $\leq 0$, I'm going to need to rewrite the last line of constraints as $x_{j} - u_{j} \leq 0$, $j = 1,\cdots , n$ and $-x_{j} \leq 0$, $j = 1, \cdots, n$.</p>
<p>Then, would my Lagrangian be $\displaystyle L(\overline{x},\overline{\lambda},\overline{\nu}, \mu) = \sum_{j=1}^{n}(x_{j})^{2} + \sum_{j=1}^{n}\lambda_{j}(x_{j}-u_{j}) + \sum_{j=1}^{n}\nu_{j} (-x_{j}) + \mu\left[\left(\sum_{j=1}^{n}x_{j} \right)-1\right]$ ?</p>
<p><strong>And then, how would I go about completing the problem?</strong> I've never done a problem with this many Lagrange variables in it before, nor with this many constraints, and so I'm finding it a little overwhelming...</p>
<p>Thank you ahead of time for your time and patience!</p>
| copper.hat | 27,978 | <p>The primal problem is $\inf_x \sup_{\mu, \lambda \ge 0, \nu \ge 0 } L(x,\lambda, \nu, \mu)$, the dual is $ \sup_{\mu, \lambda \ge 0, \nu \ge 0 }\inf_x L(x,\lambda, \nu, \mu)$.</p>
<p>Since ${\partial L(x,\lambda, \nu, \mu) \over \partial x} = 2x + \lambda - \nu + \mu e$, where $e=(1,1,...)^T$, we can compute an explicit expression for the
minimising $x$ and so compute a formula for $\inf_x L(x,\lambda, \nu, \mu)$.</p>
|
2,571,031 | <p>I need to solve the quadratic programming problem $$ \text{minimize}\,\, \sum_{j=1}^{n}(x_{j})^{2} \\ \text{subject to}\,\,\, \sum_{j=1}^{n}x_{j}=1,\\ 0 \leq x_{j}\leq u_{j}, \, \, j=1,\cdots , n $$</p>
<p>I know that the first thing I need to do is form the Lagrangian. </p>
<p>Now, for a problem in standard form (note that below, $\overline{x}$, $\overline{\lambda}$, $\overline{\mu}$ denote vectors): $$ \text{minimize} \, \, f_{0}(\overline{x}) \\ \text{subject to} \,\,\, f_{i}(\overline{x}) \leq 0, \,\,\, i=1,\cdots, m \\ h_{i}(\overline{x}) = 0, \,\,\, i = 1,\cdots, p $$ the Lagrangian looks like this: $\displaystyle L(\overline{x},\overline{\lambda}, \overline{\mu}) = f_{0}(\overline{x}) + \sum_{i=1}^{m}\lambda_{i}f_{i}(\overline{x}) + \sum_{i=1}^{p}\mu_{i}h_{i}(\overline{x})$</p>
<p>In this case, I am being thrown off by the fact that my sole $h_{i}(\overline{x})$ happens to be a sum that adds up to $1$, and if I want my $f_{i}(\overline{x})$'s to be $\leq 0$, I'm going to need to rewrite the last line of constraints as $x_{j} - u_{j} \leq 0$, $j = 1,\cdots , n$ and $-x_{j} \leq 0$, $j = 1, \cdots, n$.</p>
<p>Then, would my Lagrangian be $\displaystyle L(\overline{x},\overline{\lambda},\overline{\nu}, \mu) = \sum_{j=1}^{n}(x_{j})^{2} + \sum_{j=1}^{n}\lambda_{j}(x_{j}-u_{j}) + \sum_{j=1}^{n}\nu_{j} (-x_{j}) + \mu\left[\left(\sum_{j=1}^{n}x_{j} \right)-1\right]$ ?</p>
<p><strong>And then, how would I go about completing the problem?</strong> I've never done a problem with this many Lagrange variables in it before, nor with this many constraints, and so I'm finding it a little overwhelming...</p>
<p>Thank you ahead of time for your time and patience!</p>
| robjohn | 13,854 | <p><strong>Basic Variational Approach</strong></p>
<p>Since
$$
\sum_{j=1}^nx_j=1\tag1
$$
any variation of the $x_j$'s must satisfy
$$
\sum_{j=1}^n\delta x_j=0\tag2
$$
At an interior critical point of
$$
\sum_{j=1}^nx_j^2\tag3
$$
we will have
$$
\sum_{j=1}^n2x_j\delta x_j=0\tag4
$$
At an interior critical point, any change that maintains $(1)$ should not change $(3)$. That is, for any $\delta x_j$ that satisfies $(2)$, $\delta x_j$ should satisfy $(4)$.</p>
<p>Note that $(2)$ says that $(\delta x_1,\delta x_2, \delta x_3,\dots,\delta x_n)$ is perpendicular to $(1,1,1,\dots,1)$, and that is the only restriction on $\delta x_j$, unless $x_j=0$ or $x_j=u_j$ (the edge cases). Furthermore, $(4)$ is satisfied when $(\delta x_1,\delta x_2, \delta x_3,\dots,\delta x_n)$ is perpendicular to $(x_1,x_2,x_3,\dots,x_n)$. This means that any $(\delta x_j)$ that is perpendicular to $(1,1,1,\dots,1)$ is perpendicular to $(x_1,x_2,x_3,\dots,x_n)$. That is, $(1,1,1,\dots,1)$ is parallel to $(x_1,x_2,x_3,\dots,x_n)$.</p>
<p>Thus, the only interior critical points happen when
$$
x_1=x_2=x_3=\dots=x_n=\lambda\tag5
$$
In light of $(1)$, this means that
$$
(x_1,x_2,x_3,\dots,x_n)=\tfrac1n\left(1,1,1,\dots,1\right)\tag6
$$
We also need to check the edge cases where some $x_j=0$ or some $x_j=u_j$. In those cases, we still have the analog of $(5)$ for the interior $x_j$; that is, those for which $0\lt x_j\lt u_j$.</p>
<hr>
<p><strong>Lagrangian Approach</strong></p>
<p>The Lagrangian would be
$$
\mathcal{L}(x_1,x_2,x_3,\dots,x_n,\lambda)=\sum_{j=1}^nx_j^2-\lambda\left(\sum_{j=1}^nx_j-1\right)\tag7
$$
Taking the gradient this locates the interior critical points
$$
\begin{align}
0
&=\nabla\mathcal{L}(x_1,x_2,x_3,\dots,x_n,\lambda)\\
&=\left(2x_1-\lambda,2x_2-\lambda,2x_3-\lambda,\dots,2x_n-\lambda,\sum_{j=1}^nx_j-1\right)\tag8
\end{align}
$$
which we can solve to get $(6)$.</p>
<p>There are $2n$ edges of dimension $n-1$, where $x_j=0$ or $x_j=u_j$, and a number of corners, etc., that need to be considered separately. They are not handled by the $n$-dimensional Lagrangian, though we can consider separate $(n-1)$-dimensional Lagrangians.</p>
|
3,700,367 | <p><strong>What is the <em>average</em> distance from any point on a unit square's perimeter to its center?</strong></p>
<p>The distance from a square's corner to its center is <span class="math-container">$\dfrac{\sqrt{2}}{2}$</span> and from a point in the middle of a square's side length is <span class="math-container">$\dfrac{1}{2}$</span>. <a href="https://i.stack.imgur.com/exKkJ.png" rel="nofollow noreferrer">A visual explanation of what I'm trying to explain</a></p>
<p>So, what would the <em>average</em> distance be, accounting for all the points along a square's perimeter?</p>
<p>Also if possible, a general formula for finding the average distance from center to edge of any <span class="math-container">$n$</span>-sided regular polygon would be super awesome. </p>
| Ben Grossmann | 81,360 | <p><strong>Hint:</strong> Proving <span class="math-container">$M^{n-1} = 0 \implies$</span> the set <span class="math-container">$\{I,M,\dots,M^{n-1}\}$</span> is linearly dependent (not linearly independent) is easy: simply note that any set that includes the zero-vector (or in this case the zero matrix, since our vector space is that of the set of matrices) must be linearly dependent.</p>
<p>The inverse implication is trickier. Suppose that <span class="math-container">$M^{n-1} \neq 0$</span>, and suppose that <span class="math-container">$c_0,c_1,\dots,c_{n-1}$</span> are such that
<span class="math-container">$$
c_0 I + c_1 M + \cdots + c_{n-2} M^{n-2} + c_{n-1} M^{n-1} = 0.
$$</span>
We want to show that the coefficients <span class="math-container">$c_0,\dots,c_{n-1}$</span> must all be zero, showing that the set is linearly independent. Multiply both sides of the above equation by <span class="math-container">$M^{n-2}$</span> and simplify; what does this tell you about the coefficients?</p>
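Following the hint, here is a concrete sketch with the $4\times4$ upper-shift matrix (my own choice of example, using plain-Python matrices): multiplying the combination by powers of $M$ is mirrored by the fact that the first row of the combination reads off the coefficients directly.

```python
# For the 4x4 upper-shift matrix M (M[i][i+1] = 1): M^3 != 0, and the
# combination c0*I + c1*M + c2*M^2 + c3*M^3 has first row (c0, c1, c2, c3),
# so it can vanish only if every coefficient is zero.
n = 4

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
M = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]
powers = [I]
for _ in range(n - 1):
    powers.append(matmul(powers[-1], M))

assert powers[3][0][3] == 1   # M^(n-1) is nonzero
coeffs = [2, 3, 5, 7]
combo = [[sum(c * P[i][j] for c, P in zip(coeffs, powers)) for j in range(n)]
         for i in range(n)]
assert combo[0] == coeffs     # first row reads off the coefficients
```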
|
4,558,460 | <p>According to the implicit function theorem (on <span class="math-container">$\mathbb R^2$</span> for simplicity), if <span class="math-container">$\displaystyle\frac{\partial f}{\partial y}\ne 0$</span> at <span class="math-container">$(x_0, y_0)$</span>, then on a neighborhood of <span class="math-container">$(x_0, y_0)$</span> there is a function <span class="math-container">$g\in C^1$</span> such that
<span class="math-container">$$y = g(x)$$</span></p>
<p>There was no information about the converse in the textbook, but one day I wondered: can there be a function <span class="math-container">$g\in C^1$</span> with <span class="math-container">$f(x, g(x)) = 0$</span> even though <span class="math-container">$\displaystyle\frac{\partial f}{\partial y}= 0$</span>? Because <span class="math-container">$\displaystyle\frac{\partial f}{\partial y}$</span> appears in the denominator of the formula for <span class="math-container">$g'$</span>, I expected <span class="math-container">$g$</span> not to have a derivative (or at least no such closed form), and in the examples of <span class="math-container">$g$</span> I have found, only continuity holds. Can we find an example where the implicit function <span class="math-container">$g$</span> exists even though the partial derivative at that point is zero? If not, how can I prove this, and which condition can be added to make the implicit function theorem an equivalence?</p>
<p>I really appreciate your help.</p>
| Andrew D. Hwang | 86,418 | <p>tl; dr: There is No Hope of a converse along the suggested lines.</p>
<hr />
<p>Arguably the simplest, most devastating counterexample is <span class="math-container">$f(x, y) = y^{3}$</span>, whose zero level is the <span class="math-container">$x$</span>-axis, the graph of a constant function, but whose differential is identically zero on the entire level curve.</p>
<p>(Let's not nitpick about whether <span class="math-container">$f(x, y) = y^{2}$</span> is simpler. Cubing is a smooth bijection of the reals while squaring isn't, so we avoid even the whiff of certain spurious conjectures.)</p>
|
839,124 | <p>This is similar to an exercise I just posted. The necessary part is easy, but the sufficient condition I'm having trouble seeing.</p>
<p>$\Rightarrow$. Since $(x,y)=g,$ there exist integers $x_1, y_1$ such that $x=gx_1, y=gy_1$. Since $[x,y]=l$, there exist integers $x_2, y_2$ such that $l=xx_2=yy_2$. Then
$$l=gx_1x_2=gy_1y_2$$
In both cases, $g|l$</p>
<p>$\Leftarrow$. Since $g|l$, there exists an integer $k$ such that $l=gk$. Since $k$ is an integer, there exist integers $k_1, k_2$ such that $k=k_1+k_2$. Then
$$l=gk_1+gk_2 $$
$$gk=gk_1+gk_2$$
I feel like I'm not even close.... any hints?</p>
| Bill Dubuque | 242 | <p>$\begin{eqnarray}{\bf Hint}\quad\ g\mid \ell &\iff&\!\!\! (g,\,\ell)\, =\, g\quad\ &\rm or&\ \quad g\le \ell &\!\iff&\!\! g\wedge\ell\, =\, g\quad\text{in lattice language} \\
&\iff&\!\!\! {\bf [\,}g,\,\ell\,{\bf ]}\, =\, \ell\ \ \ & &\ \ \ \ \phantom{g\le \ell} &\!\iff&\!\! g\vee\ell\, =\, \ell \end{eqnarray}$</p>
<p><strong>Remark</strong> $\ $ This is obvious if one views it from a <a href="http://en.wikipedia.org/wiki/Lattice_%28order%29#Connection_between_the_two_definitions" rel="nofollow">lattice-theoretic perspective</a> (see this <a href="http://en.wikipedia.org/wiki/Least_common_multiple#Lattice-theoretic" rel="nofollow">Wikipedia entry</a> for that viewpoint). You may find it instructive to generalize the problem to lattices.</p>
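The three equivalences can be machine-checked over small integers (a sketch; `math.lcm` requires Python 3.9+):

```python
import math  # math.lcm needs Python 3.9+

# g | l  <=>  gcd(g, l) = g  <=>  lcm(g, l) = l
for g in range(1, 31):
    for l in range(1, 31):
        divides = (l % g == 0)
        assert divides == (math.gcd(g, l) == g)  # meet: g ^ l = g
        assert divides == (math.lcm(g, l) == l)  # join: g v l = l
```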
|
44,868 | <p><strong>Bug introduced in version 8 or earlier and fixed in 10.0</strong></p>
<hr>
<p>I have created a notebook with two cells. This is the content of the first:</p>
<pre><code>g = Graph[{1 \[UndirectedEdge] 2, 2 \[UndirectedEdge] 3, 1 \[UndirectedEdge] 3, 1 \[UndirectedEdge] 4, 4 \[UndirectedEdge] 5, 4 \[UndirectedEdge] 6}]
</code></pre>
<p>And this is the content of the second:</p>
<pre><code>g
listDegree = VertexDegree[g]
vl = VertexList[g]
nodeMaxDegree = Pick[vl, listDegree, Max[VertexDegree[g]]][[1]]
aM = AdjacencyMatrix[g];
vLM = aM[[VertexIndex[g, nodeMaxDegree]]];
nN = Pick[vl, vLM, 0]
</code></pre>
<p>If I evaluate the second cell (after processing the first) for a <strong>second</strong> time:</p>
<ol>
<li>the first time no problem, the results are correct; </li>
<li><strong>the second time the vertex list of <code>g</code> is inexplicably wrong but the graph remains correct!!</strong></li>
</ol>
<p>I don't understand the cause because the graph <code>g</code> is never touched.</p>
<p>Thanks in advance</p>
| István Zachar | 89 | <p>This is a bug in <code>Pick</code> caused by <code>SparseArray</code>; it has nothing to do with <code>Graph</code>. Minimal example (the <code>SparseArray</code> object is the fullform version of your <code>vLM</code>):</p>
<pre><code>x = {1, 2, 3, 4, 5, 6};
Pick[x, SparseArray[Automatic, {6}, 0, {1, {{0, 3}, {{2}, {3}, {4}}}, {1, 1, 1}}], 0];
FullForm@x
</code></pre>
<blockquote>
<pre><code>{1, System`Private`InternSequence[], System`Private`InternSequence[],
System`Private`InternSequence[], 5, 6}
</code></pre>
</blockquote>
<p>As you can observe, the value of <code>x</code> gets updated even though no assignment is done: those members of <code>x</code> that are listed in the <code>SparseArray</code> (2, 3 and 4) are replaced.</p>
<p>One obvious solution for your case is to wrap the <code>AdjacencyMatrix</code> into <code>Normal</code> so its result won't be a <code>SparseArray</code>.</p>
|
897,633 | <p><strong>First question:</strong></p>
<p>Let's say we have a hypothesis test:</p>
<p>${ H }_{ 0 }:u=100$
and ${ H }_{ 1 }:u\neq 100$.</p>
<p>The sample has a size of 10 and gives a sample mean of $103$ and a p-value of 0.08.
The level of significance is 0.05.</p>
<p>I'm asked the following question (exam):</p>
<p>A) We can conclude that $u=100$</p>
<p>B) We cannot conclude that $u\neq100$</p>
<p>The two answers are rather similar, but not the same. I would say B), but I'm not so sure given what I've read.</p>
<p>The p-value here indicates that we cannot reject the null hypothesis, so we cannot accept H1 ?</p>
<p><strong>Second question:</strong></p>
<p>What does it mean exactly that a test is significant ?</p>
<p>Does it mean that we can reject the null hypothesis ?</p>
<p>Thanks in advance.</p>
<p>Regards,</p>
| heropup | 118,193 | <p>A hypothesis test in the frequentist sense is a procedure by which one arrives at a decision about whether the data contains sufficient evidence to accept the alternative hypothesis. In other words, there are two choices: either you reject the null $H_0$, or the test is inconclusive.</p>
<p>The reason why you cannot ever "accept the null" under such a test is because the test statistic and the resulting $p$-value are calculated under the distributional assumption that the null is true. Therefore, a $p$-value that is not sufficiently small (i.e., smaller than the $\alpha$ level) is somewhat tautological: it essentially says that, assuming the data indeed is drawn from a distribution that follows the null hypothesis, the probability of seeing a sample as extreme as that you obtained is $p$. If $p$ is "large," that doesn't mean $H_0$ is true, because you <em>assumed</em> that it was true in order to get $p$ in the first place.</p>
<p>All that you can say if you fail to reject $H_0$ is that the data does not furnish enough evidence to suggest with a high degree of confidence that $H_0$ is false.</p>
<p>Another way to think of it is this: suppose I give you a coin and you wish to test if it is biased. If you toss it 10 times and observe 5 heads and 5 tails, that does not necessarily mean that it is in fact fair--you could have observed this result from a biased coin purely due to random chance. All you can say is that the result you obtained does not furnish strong evidence that the coin is biased.</p>
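The coin example is easy to quantify exactly (a sketch; the bias 0.6 is my own illustrative choice):

```python
from math import comb

# A coin with P(heads) = 0.6 still gives exactly 5 heads in 10 tosses
# about 20% of the time, so "5 of 10" is weak evidence of fairness.
p_biased = comb(10, 5) * 0.6**5 * 0.4**5
p_fair = comb(10, 5) * 0.5**10
assert 0.20 < p_biased < 0.21   # roughly 0.2007
assert 0.24 < p_fair < 0.25     # roughly 0.2461
```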
|
3,193,305 | <p>A random variable <span class="math-container">$x$</span> takes values in the set <span class="math-container">$\{1, 2, \ldots ,n\}$</span>. Let <span class="math-container">$x$</span> have probability mass function <span class="math-container">$f(k) = Y(n) \cdot g^k$</span>, where <span class="math-container">$g$</span> is a fixed number strictly between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. Find <span class="math-container">$Y(n)$</span>, the normalizing constant, in terms of <span class="math-container">$n$</span>.</p>
<p>I do not know how to determine <span class="math-container">$Y(n)$</span>. Can I integrate <span class="math-container">$f(k)$</span>? Thank you.</p>
| Lutz Lehmann | 115,115 | <p>You should have found out that the Wronskian is constant as the coefficient of the first derivative term is zero.</p>
<p>After that, it is just a matter of re-scaling one or both of the solutions to get the Wronski-determinant to have the value 1 at one and thus every point.</p>
<hr>
<p>(<em>Add</em>) Interpreting the term "fundamental" in "system of fundamental solutions" more strictly, it means that at some point <span class="math-container">$x_0$</span> you have initial values
<span class="math-container">$$
\pmatrix{y_1(x_0)&y_2(x_0)\\y_1'(x_0)&y_2'(x_0)}
=\pmatrix{1&0\\0&1}
$$</span>
so that no rescaling is necessary.</p>
|
389,750 | <p>Given $A(1,4)$ and $B(3,-5)$, use the dot product to find a point $C$ so that triangle $ABC$ is a right-angled triangle.</p>
| user77528 | 77,528 | <p>Just impose $(C-A)\cdot(B-A)=0$; that makes the angle at $A$ a right angle, which solves it. Note that there are infinitely many such triangles.</p>
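One concrete choice, as a sketch (the specific $C$ below, obtained by rotating $B-A$ by $90^\circ$, is just one of the infinitely many solutions):

```python
# A = (1, 4), B = (3, -5). Rotating B - A by 90 degrees and adding it to A
# gives a point C with a right angle at A; any nonzero multiple also works.
A, B = (1, 4), (3, -5)
d = (B[0] - A[0], B[1] - A[1])        # (2, -9)
perp = (-d[1], d[0])                   # (9, 2), perpendicular to d
C = (A[0] + perp[0], A[1] + perp[1])   # (10, 6)
dot = (C[0] - A[0]) * d[0] + (C[1] - A[1]) * d[1]
assert dot == 0
```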
|
2,290,395 | <p>What if in Graham’s Number every “3” was replaced by “tree(3)” instead? How big is this number? Greater than Rayo’s number? Greater than every current named number?</p>
| Daniela Bellachioma | 919,701 | <p>No — Rayo's number is just too big. Even a googol ($10^{100}$) symbols of first-order set theory could never be written down physically: writing one symbol per Planck time ($5.39 \times 10^{-44}$ seconds) would take about $10^{48}$ years, and the observable universe contains only about $10^{80}$ particles. (And infinity, of course, is NOT a number.) Your number — Graham's construction with every 3 replaced by TREE(3) — is bigger than TREE(3), but smaller than TREE(4), which is itself vastly bigger than TREE(3). Against numbers like TREE(g(64)) or TREE^TREE(3)(3) (where TREE^2(3) = TREE(TREE(3))), let alone much larger named numbers such as FOOT^10(10^100) or Fish Number 7, this "Graham's TREE(3)" is negligible — and Rayo's number dwarfs them all.</p>
|
3,145,973 | <p>Show that for every integer <span class="math-container">$n ≥ 3$</span>, the number <span class="math-container">$n!e$</span> is not an integer.</p>
<p>I have shown the inequality <span class="math-container">$\displaystyle0< \sum_{m=n+1}^{\infty} \frac{1}{m!} < \frac{1}{n!}$</span> for <span class="math-container">$n\ge 3$</span></p>
<p>and I know <span class="math-container">$\displaystyle e = \sum_{k=0}^{\infty} \frac{1}{k!}$</span>. Therefore <span class="math-container">$\displaystyle n!e = \sum_{k=0}^{\infty} \frac{n}{k!}$</span>. How do I continue here. I tried to prove this by induction but I wasn't even sure how to show that <span class="math-container">$3e$</span> isn't an integer.</p>
<p>Lastly, how may I incorporate this result in the process of proving that e is irrational?</p>
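One standard route (sketched here numerically, with my own bounds): $n!e$ splits into the integer $\sum_{k\le n} n!/k!$ plus a tail $\sum_{k>n} n!/k!$ lying strictly between $1/(n+1)$ and $1/n$, so $n!e$ is never an integer — and since $q!\,e\in\Bbb Z$ would follow from $e=p/q$, this also gives irrationality of $e$.

```python
import math

# The fractional part of n! * e equals the tail sum over k > n of n!/k!,
# which lies strictly between 1/(n+1) and 1/n.
for n in range(3, 9):
    frac = math.factorial(n) * math.e % 1.0
    assert 1 / (n + 1) - 1e-6 < frac < 1 / n + 1e-6
```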
| Rylee Lyman | 447,318 | <p>Maybe let's just talk through why restrictions is the way to go.</p>
<p>Suppose we have <span class="math-container">$\phi\colon G \to G$</span> an automorphism. We want to show that <span class="math-container">$\phi(N) = N$</span>. Since <span class="math-container">$K$</span> is characteristic, we know that <span class="math-container">$\phi(K) = K$</span>. Now consider the map <span class="math-container">$\varphi\colon K \to K$</span> defined by <span class="math-container">$\varphi(k) = \phi(k)$</span>. Since <span class="math-container">$\phi(K) = K$</span>, we see that <span class="math-container">$\varphi$</span> is surjective. I'll leave you to check that <span class="math-container">$\varphi$</span> is injective and a homomorphism. What we've shown then, is that <span class="math-container">$\varphi$</span> is an automorphism of <span class="math-container">$K$</span>, so since <span class="math-container">$N$</span> is characteristic in <span class="math-container">$K$</span>, we see that <span class="math-container">$\varphi(N) = N$</span>. This tells us that <span class="math-container">$$\text{for every }x \in N,\ \varphi(x) = \phi(x) \in N.$$</span></p>
<p>Therefore we conclude that <span class="math-container">$\phi(N) = N$</span>.</p>
<p>Actually, if you've done your exercise, you've shown something a little more: <span class="math-container">$\varphi$</span> is exactly what we mean by restriction: <span class="math-container">$\phi|_K$</span> means look at what happens if we forget about what <span class="math-container">$\phi$</span> does to group elements outside of <span class="math-container">$K$</span>. Since <span class="math-container">$K$</span> is a subgroup, you've shown that <span class="math-container">$\phi|_K$</span> is still a homomorphism.</p>
|
2,632,696 | <p>I have this equation: $x^2y'+y^2-1=0$. It's an equation with separable variable. When I calculate the solution do I have to consider the absolute value for the argument of the log? </p>
| mordecai iwazuki | 167,818 | <p>Sorry I meant to write $\tanh^{-1}(y)$ in my comment, so the answer can be found as follows,</p>
<p>$$\frac{dy}{1-y^2} = \frac{dx}{x^2}\\
\implies\tanh^{-1}(y) = -\frac{1}{x} + c\\
\implies y = \tanh(c-\frac{1}{x})$$</p>
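A numerical residual check of this solution (a sketch; the constant $c$ and the sample points are arbitrary choices of mine):

```python
import math

def y(x, c=0.3):
    return math.tanh(c - 1.0 / x)

# finite-difference check that x^2 * y' + y^2 - 1 = 0 along the solution
h = 1e-6
for x in (0.5, 1.0, 2.0, 5.0):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    residual = x * x * dydx + y(x)**2 - 1
    assert abs(residual) < 1e-6
```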
|
537,965 | <p><span class="math-container">$X_0:\Omega\rightarrow I$</span> is a random variable where <span class="math-container">$I$</span> is countable. Also <span class="math-container">$Y_1,Y_2,\dots$</span> are i.i.d. <span class="math-container">$\text{Unif}[0,1]$</span> random variables. </p>
<p>Define a sequence <span class="math-container">$(X_n)$</span> inductively by <span class="math-container">$X_{n+1}=G(X_n,Y_{n+1})$</span>, where <span class="math-container">$G:I\times[0,1] \rightarrow I$</span>. Show that <span class="math-container">$(X_n)$</span> is a Markov chain and determine its transition matrix in terms of <span class="math-container">$G$</span>.</p>
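A toy instance of the construction may help (a sketch; the kernel $G$ and the probability 0.3 are my own choices, and the Lebesgue measure of $\{y: G(i,y)=j\}$ is estimated on a midpoint grid):

```python
# I = {0, 1}: G flips the state when y < 0.3, so the transition matrix
# should have P_ii = 0.7 and P_ij = 0.3 for i != j.
def G(i, y):
    return 1 - i if y < 0.3 else i

def transition(i, j, N=1000):
    # P_ij = Leb{y in [0,1] : G(i,y) = j}, via a midpoint grid on [0,1]
    return sum(G(i, (k + 0.5) / N) == j for k in range(N)) / N

for i in (0, 1):
    assert transition(i, i) == 0.7
    assert transition(i, 1 - i) == 0.3
```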
| José Luis León | 551,176 | <p>The question is a particular case of a more general theorem:</p>
<blockquote>
<p>Let <span class="math-container">$S$</span> be a countable set and let <span class="math-container">$X_0:\Omega \to S$</span> be a random variable on <span class="math-container">$(\Omega, \mathcal{F}, P)$</span>. Let <span class="math-container">$\{Y_n\}$</span>, <span class="math-container">$Y_n:\Omega \to \mathbb{R}^N$</span> be a sequence of i.i.d. random variables on <span class="math-container">$(\Omega, \mathcal{F}, P)$</span> that are also independent of <span class="math-container">$X_0$</span>, and let <span class="math-container">$f:S\times \mathbb{R}^N\to S$</span> be a Borel function. Then the sequence <span class="math-container">$\{X_n\}$</span>, <span class="math-container">$X_n:\Omega \to S$</span> defined by
<span class="math-container">$$
X_{n+1}(x) := f(X_n(x), Y_n(x)), \quad \forall\, n \geq 0
$$</span>
is a homogeneous Markov chain with state space <span class="math-container">$S$</span> and transition matrix
<span class="math-container">$$
\mathbf{P}_{ij} = P\left(f(i,Y_k)=j\right) \qquad \forall\, i,j\in S
$$</span></p>
</blockquote>
<p>Proof. It follows, from the definition, that for any integer <span class="math-container">$k$</span>, the random variable <span class="math-container">$X_k$</span> is a function of <span class="math-container">$X_0$</span> and <span class="math-container">$Y_1,\dots, Y_{k-1}$</span>. Consequently, for any integer <span class="math-container">$n$</span>, the random variable <span class="math-container">$Y_n$</span>, which is by definition independent of <span class="math-container">$(X_0,Y_1, \dots, Y_{n-1})$</span>, is also independent of <span class="math-container">$(X_0, \dots, X_n)$</span>. Therefore,
<span class="math-container">\begin{align*}
& \ P\left(X_{n+1} = j \big\lvert X_0=i_0, \dots, X_n=i \right) \\
&= P\left(f(i,Y_n)=j \big\lvert X_0=i_0, \dots, X_n=i\right) \\
&= P(f(i,Y_n)=j) \\
&= P\left(f(i,Y_n)=j \big\lvert X_n=i\right)
\end{align*}</span>
the second equality holds since the random variable <span class="math-container">$(X_0, \dots, X_n)$</span> is independent of <span class="math-container">$Y_n$</span>, hence of <span class="math-container">$f(i,Y_n)$</span>. This shows that <span class="math-container">$\{X_n\}$</span> is a Markov chain and the transition matrix is <span class="math-container">$$\mathbf{P}=(\mathbf{P})_{ij} = P(f(i,Y_n)=j)$$</span> but this last expression is independent of <span class="math-container">$n$</span> since the random variables are i.i.d.
<span class="math-container">$$\tag*{$\Box$}$$</span></p>
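<p>To make the theorem concrete, here is a small simulated sketch (the state space and update rule below are hypothetical choices, not from the question): take $I=\{0,1,2\}$ and $G(i,u)=(i+1)\bmod 3$ if $u<0.3$, else $i$, so that $\mathbf{P}_{i,i+1}=0.3$ and $\mathbf{P}_{ii}=0.7$. The empirical transition frequencies of the simulated chain approach $\mathbf{P}_{ij}=P(f(i,Y_k)=j)$.</p>

```python
import random

def G(i, u):
    # hypothetical update rule: advance cyclically when u < 0.3
    return (i + 1) % 3 if u < 0.3 else i

rng = random.Random(42)
counts = {(i, j): 0 for i in range(3) for j in range(3)}
x = 0
for _ in range(200_000):
    u = rng.random()              # Y_{n+1} ~ Unif[0,1], i.i.d.
    x_next = G(x, u)
    counts[(x, x_next)] += 1
    x = x_next

visits = {i: sum(counts[(i, j)] for j in range(3)) for i in range(3)}
p01 = counts[(0, 1)] / visits[0]  # empirical estimate of P_{01}
print(round(p01, 1))              # close to 0.3
```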
|
177,209 | <p>I found the following problem while working through Richard Stanley's <a href="http://www-math.mit.edu/~rstan/bij.pdf">Bijective Proof Problems</a> (Page 5, Problem 16). It asks for a combinatorial proof of the following:
$$ \sum_{i+j+k=n} \binom{i+j}{i}\binom{j+k}{j}\binom{k+i}{k} = \sum_{r=0}^{n} \binom{2r}{r}$$
where $n \ge 0$, and $i,j,k \in \mathbb{N}$, though any proof would work for me.</p>
<p>I also found a similar identity in Concrete Mathematics, which was equivalent to this one, but I could not see how the identity follows from the hint provided in the exercises.</p>
<p>My initial observation was to note that the ordinary generating function of the right hand side is $\displaystyle \frac {1}{1-x} \frac{1}{\sqrt{1-4x}}$, but couldn't think of any way to establish the same generating function for the left hand side.</p>
| Sasha | 11,069 | <p>Restating your question, you are seeking to find the generating function of the left-hand-side:
$$
g(x) = \sum_{n=0}^\infty x^n \sum_{i+j+k=n}\binom{i+j}{i} \binom{j+k}{j} \binom{k+i}{k} = \sum_{i=0}^\infty \sum_{j=0}^\infty \sum_{k=0}^\infty x^{i+j+k} \frac{(i+j)! (i+k)! (j+k)!}{i!^2 j!^2 k!^2}
$$
First, carry out summation over $i$:
$$
g(x) = \sum_{j=0}^\infty \sum_{k=0}^\infty x^{j+k} \frac{(j+k)!}{j!\cdot k!} {}_2F_1\left(1+j, 1+k; 1; x\right)
$$
Now use <a href="http://en.wikipedia.org/wiki/Hypergeometric_function#Kummer.27s_24_solutions">Euler's transformation</a> ${}_2F_1\left(1+j, 1+k; 1; x\right) = (1-x)^{-j-k-1} \, {}_2F_1\left(-j, -k; 1, x\right)$, which gives
$$
g(x) = \frac{1}{1-x} \sum_{j=0}^\infty \sum_{k=0}^\infty \left(\frac{x}{1-x}\right)^{j+k} \frac{(j+k)!}{j!\cdot k!} {}_2F_1\left(-j, -k; 1; x\right) = \\ \frac{1}{1-x}
\sum_{j=0}^\infty \sum_{k=0}^\infty \left(\frac{x}{1-x}\right)^{j+k} \frac{(j+k)!}{j!\cdot k!} \sum_{r=0}^{\min(j,k)} \binom{j}{r}\binom{k}{r} x^r = \\
\frac{1}{1-x} \sum_{r=0}^\infty x^r \sum_{j=r}^\infty \sum_{k=r}^\infty \binom{k+j}{k} \binom{j}{r}\binom{k}{r} \left(\frac{x}{1-x}\right)^{j+k}
$$
Using
$$
\sum_{j=r}^\infty \sum_{k=r}^\infty \binom{k+j}{k} \binom{j}{r}\binom{k}{r} z^{j+k} =
\sum_{j=r}^\infty \binom{j}{r} z^{j+r} \sum_{k=0}^\infty \frac{(k+j+r)!}{j! r! k!} z^k =\\
\sum_{j=r}^\infty \binom{j}{r} z^{j+r} \binom{j+r}{j} \sum_{k=0}^\infty \frac{(j+r+1)_k}{k!} z^k = \sum_{j=r}^\infty \binom{j}{r} z^{j+r} \binom{j+r}{j} \left(1-z\right)^{-j-r-1} = \\ \frac{z^{2r}}{(1-z)^{2r+1}} \frac{1}{r!^2} \sum_{j=0}^\infty \frac{(j+2r)!}{j!} \left(\frac{z}{1-z}\right)^j = \frac{z^{2r}}{(1-z)^{2r+1}} \binom{2r}{r} \left(1-\frac{z}{1-z}\right)^{-1-2r} = \binom{2r}{r} z^{2r} \left(1-2z\right)^{-2r-1}
$$
we continue:
$$
g(x) = \frac{1}{1-x} \sum_{r=0}^\infty x^r \binom{2r}{r} \left(\frac{x}{1-x}\right)^{2r} \left(1 - 2 \frac{x}{1-x} \right)^{-1-2r} = \\ \frac{1}{1-x} \sum_{r=0}^\infty \binom{2r}{r} \frac{1-x}{1-3x} \left(\frac{x^3}{(1-3x)^2}\right)^r = \\
\frac{1}{1-3x} \sum_{r=0}^\infty \binom{2r}{r}\left(\frac{x^3}{(1-3x)^2}\right)^r = \frac{1}{1-3x} \left(1 - 4 \frac{x^3}{(1-3x)^2}\right)^{-1/2} = \frac{1}{1-3x} \left( \frac{(1-4x)(1-x)^2}{(1-3x)^2}\right)^{-1/2} = \frac{1}{1-x} \frac{1}{\sqrt{1-4x}}
$$
which is exactly the generating function of the right-hand-side:
$$
\sum_{n=0}^\infty x^n \sum_{r=0}^n \binom{2r}{r} \stackrel{n=r+k}{=} \sum_{k=0}^\infty x^k \sum_{r=0}^\infty \binom{2r}{r} x^r = \frac{1}{1-x} \cdot \frac{1}{\sqrt{1-4x}}
$$</p>
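<p>A brute-force numerical check of the identity for small $n$ (a quick sketch):</p>

```python
from math import comb

def lhs(n):
    return sum(comb(i + j, i) * comb(j + k, j) * comb(k + i, k)
               for i in range(n + 1)
               for j in range(n + 1 - i)
               for k in [n - i - j])     # k is forced by i + j + k = n

def rhs(n):
    return sum(comb(2 * r, r) for r in range(n + 1))

print(all(lhs(n) == rhs(n) for n in range(12)))  # True
```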
|
177,209 | <p>I found the following problem while working through Richard Stanley's <a href="http://www-math.mit.edu/~rstan/bij.pdf">Bijective Proof Problems</a> (Page 5, Problem 16). It asks for a combinatorial proof of the following:
$$ \sum_{i+j+k=n} \binom{i+j}{i}\binom{j+k}{j}\binom{k+i}{k} = \sum_{r=0}^{n} \binom{2r}{r}$$
where $n \ge 0$, and $i,j,k \in \mathbb{N}$, though any proof would work for me.</p>
<p>I also found a similar identity in Concrete Mathematics, which was equivalent to this one, but I could not see how the identity follows from the hint provided in the exercises.</p>
<p>My initial observation was to note that the ordinary generating function of the right hand side is $\displaystyle \frac {1}{1-x} \frac{1}{\sqrt{1-4x}}$, but couldn't think of any way to establish the same generating function for the left hand side.</p>
| Rijul Saini | 27,729 | <p>Recall that <span class="math-container">$\sum_{n \ge 0} \binom nm x^n = \frac{x^m}{(1-x)^{m+1}}$</span>.</p>
<p>We have <span class="math-container">\begin{align*}
\sum_{i+j+k=n}\binom{i+j}{i} \binom{j+k}{j} \binom{k+i}{k} &= \sum_{i,j}\binom{i+j}{i} \binom{n-i}{j} \binom{n-j}{i} \\ &= \sum_{i,j}\binom{i+j}{i} \left([x^{n-i}] \frac{x^j}{(1-x)^{j+1}}\right) \left([y^{n-j}] \frac{y^i}{(1-y)^{i+1}}\right) \\ &= [x^ny^n] \sum_{i,j}\binom{i+j}{i} \frac{x^{i+j}}{(1-x)^{j+1}} \frac{y^{i+j}}{(1-y)^{i+1}}
\end{align*}</span></p>
<p>Now, <span class="math-container">\begin{align*}
\sum_{i,j}\binom{i+j}{i} \frac{x^{i+j}}{(1-x)^{j+1}} \frac{y^{i+j}}{(1-y)^{i+1}} &= \frac 1{(1-x)(1-y)} \sum_{p} x^p y^p \sum_{i+j = p} \binom pi \frac 1{(1-x)^{j}} \frac 1{(1-y)^{i}} \\ &= \frac 1{(1-x)(1-y)} \sum_{p} x^py^p \left(\frac 1{1-x} + \frac 1{1-y} \right)^p \\ &= \frac 1{(1-x)(1-y)} \frac 1 {1 - \frac{xy(2-x-y)}{(1-x)(1-y)}} \\ &= \frac 1 {(1-x-y)(1-xy)}
\end{align*}</span></p>
<p>We have <span class="math-container">$[x^ry^r] \frac 1 {(1-x-y)} = \binom {2r} r$</span>, so <span class="math-container">$$[x^ny^n] \frac 1 {(1-x-y)(1-xy)} = \sum_{r=0}^n \binom {2r} r.$$</span></p>
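<p>The closed form $\frac 1 {(1-x-y)(1-xy)}$ can also be sanity-checked numerically against the truncated double sum (a sketch; the series converges quickly for small positive $x,y$):</p>

```python
from math import comb

def double_sum(x, y, N=60):
    # truncation of sum_{i,j} C(i+j,i) (xy)^{i+j} / ((1-x)^{j+1} (1-y)^{i+1})
    return sum(comb(i + j, i) * (x * y) ** (i + j)
               / ((1 - x) ** (j + 1) * (1 - y) ** (i + 1))
               for i in range(N) for j in range(N))

def closed_form(x, y):
    return 1 / ((1 - x - y) * (1 - x * y))

err = abs(double_sum(0.1, 0.2) - closed_form(0.1, 0.2))
print(err < 1e-9)  # True
```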
|
1,640,285 | <p>A single-celled spherical organism contains $70$% water by volume. If it loses $10$% of its water content, how much would its surface area change by approximately?</p>
<ol>
<li>$3\text{%}$</li>
<li>$5\text{%}$</li>
<li><p>$6\text{%}$</p></li>
<li><p>$7\text{%}$</p></li>
</ol>
| Jack's wasted life | 117,135 | <p>If $V$ is its volume and $S$ is its surface area, $S$ is proportional to $V^{2\over3}$ so
$$
{dS\over S}={2\over3}{dV\over V}={2\over3}{0.1\times0.7V\over V}\approx0.05
$$</p>
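<p>The same computation done exactly rather than to first order (a quick sketch):</p>

```python
v_ratio = 1 - 0.10 * 0.70     # fraction of volume retained (10% of the 70% water is lost)
s_ratio = v_ratio ** (2 / 3)  # surface area scales as V^(2/3)
change_pct = (1 - s_ratio) * 100
print(round(change_pct, 2))   # 4.72, i.e. about 5%
```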
|
1,089,078 | <p>Suppose we have a deck of cards, shuffled in a random configuration. We would like to find a $k$-bit code in which we explain the current order of the cards. This would be easy to do for $k=51 \cdot 6=306$, since we could encode our deck card-by-card, using $2$ bits for the coloring and $4$ bits for the number on each card.</p>
<p>We would like to optimise this code. There exist $52!$ ways to arrange our deck, and so $k$ will have to be at least $\lceil\log_2(52!)\rceil=226$. I'm asked to find a code for a $k$-value halfway between $306$ and $226$.</p>
<p>I understand that my code cannot work card-by-card, since there exist $52$ different cards and $\lceil \log_2(52)\rceil=6$. Therefore any card-by-card encoding will lead to $k\geq306$.</p>
<p>Therefore I concluded my strategy should encode blocks of cards, another idea I had was to encode the colouring first and the numbers second.</p>
<p>Could anyone give me a hint about where to go from here?</p>
| Aryabhata | 1,102 | <p>Order the permutations of $\{1,2,\dots, 52\}$ in lexicographic order.</p>
<p>Encode the $r^{th}$ permutation in that list as $r$ (requiring no more than $226$ bits).</p>
<p>Find a way to get from the permutation to $r$, and back again.</p>
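<p>For concreteness, a sketch of one standard way to do this, the factorial number system (Lehmer code); the function names here are mine, not part of the answer:</p>

```python
from math import factorial

def rank(perm):
    """Lexicographic rank of a permutation of distinct items."""
    items = sorted(perm)
    n, r = len(perm), 0
    for i, p in enumerate(perm):
        idx = items.index(p)            # how many remaining items precede p
        r += idx * factorial(n - 1 - i)
        items.pop(idx)
    return r

def unrank(r, n):
    """Recover the permutation of 0..n-1 with lexicographic rank r."""
    items = list(range(n))
    perm = []
    for i in range(n):
        idx, r = divmod(r, factorial(n - 1 - i))
        perm.append(items.pop(idx))
    return perm

deck = [3, 1, 4, 0, 2]
print(unrank(rank(deck), 5) == deck)  # True: the encoding is invertible
print(factorial(52).bit_length())     # 226 bits suffice for a full deck
```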
|
1,089,078 | <p>Suppose we have a deck of cards, shuffled in a random configuration. We would like to find a $k$-bit code in which we explain the current order of the cards. This would be easy to do for $k=51 \cdot 6=306$, since we could encode our deck card-by-card, using $2$ bits for the coloring and $4$ bits for the number on each card.</p>
<p>We would like to optimise this code. There exist $52!$ ways to arrange our deck, and so $k$ will have to be at least $\lceil\log_2(52!)\rceil=226$. I'm asked to find a code for a $k$-value halfway between $306$ and $226$.</p>
<p>I understand that my code cannot work card-by-card, since there exist $52$ different cards and $\lceil \log_2(52)\rceil=6$. Therefore any card-by-card encoding will lead to $k\geq306$.</p>
<p>Therefore I concluded my strategy should encode blocks of cards, another idea I had was to encode the colouring first and the numbers second.</p>
<p>Could anyone give me a hint about where to go from here?</p>
| Loren Pechtel | 23,794 | <p>To extend upon Arthur's solution:</p>
<p>There is a fair amount of space that goes unused between the highest available card and the end of the bit field. A fairly simple way to recover a bit of this is to encode two cards at a time. Using his same basic system I get 240 bits needed, only 14 bits above the theoretical minimum.</p>
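<p>One plausible reading of the pair-at-a-time scheme, sketched numerically (the base scheme referred to as "Arthur's" is not shown here, so the exact total depends on its rounding choices and may differ by a couple of bits):</p>

```python
from math import ceil, log2

# encode the k-th ordered pair of cards as an index into the
# (52-2k)(51-2k) remaining possibilities, using ceil(log2(...)) bits each
bits = sum(ceil(log2((52 - 2 * k) * (51 - 2 * k))) for k in range(26))
print(bits)  # 238 with this rounding, in the same ballpark as the 240 quoted
```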
|
990,796 | <p>I have a homework question in a discrete mathematics class that asks me to determine how many 7-digit id numbers <strong>do not</strong> contain three consecutive sixes. </p>
<p>It seems clear that I should approach this by determining the number that <strong>do</strong> have three consecutive sixes and subtracting that from $10^7$, the total number of possibilities. </p>
<p>If it were asking simply for the number of values that contain three sixes, it would simply be $10^4$, or $9^4$ for <em>exactly</em> three sixes. But what's got me thinking is the requirement that they appear consecutively. By sketching out this: </p>
<pre><code>___ ___ ___ ___ ___ ___ ___
x x x 1
x x x 2
x x x 3
x x x 4
x x x 5
</code></pre>
<p>I determined that there are five possible placements, so I'm thinking that the number of values with three consecutive sixes is $5 * 10^4$. But because arrangements 1&4, 1&5 and 2&5 can coincide, this overcounts by $3 *10 = 30$, so I get a final total of $5 * 10^4 - 30$ values with three consecutive sixes. </p>
<p>Questions: </p>
<ul>
<li>Is my basic approach here logically sound? </li>
<li>Making that sketch felt awfully "hacky" - is there a mathematical technique I could have used instead - particularly when I'm dealing with bigger values? </li>
</ul>
| Steve Kass | 60,500 | <p>You started out just fine by counting the following kinds of numbers:</p>
<pre><code>___ ___ ___ ___ ___ ___ ___
6 6 6 1
6 6 6 2
6 6 6 3
6 6 6 4
6 6 6 5
</code></pre>
<p>And you’re right that you have to adjust for overcounting. Think of it this way, you’ve counted the numbers in the set $S_1$ of numbers $666????$, $S_2$ of numbers $?666???$, and so on. There are $10,000$ in each set. (Let’s assume ID numbers can begin with the digit $0$.)</p>
<p>Some numbers, however, are in more than one set. There are two kinds of these numbers: the numbers with $4$ or more consecutive sixes and the numbers with two separated groups of consecutive sixes. Let’s use X to mean a non-6 digit and count, beginning with those that have $4$, but not $5$, consecutive sixes.</p>
<p>Numbers of the form $6666X??$ are in $S_1$ and $S_2$.</p>
<p>Numbers of the form $X6666X?$ are in $S_2$ and $S_3$.</p>
<p>Numbers of the form $?X6666X$ are in $S_3$ and $S_4$.</p>
<p>Numbers of the form $??X6666$ are in $S_4$ and $S_5$.</p>
<p>There are 900+810+810+900 of these, and they were counted twice, so we need to subtract $3,420$ from your initial count of $50,000$. Now those with $5$, but not $6$ consecutive sixes. (We haven't considered them in the previous calculation.)</p>
<p>Numbers like $66666X?$, $X66666X$, and $?X66666$. Each was counted three times, and there are $90+81+90=261$ of them. So subtract another $522$.</p>
<p>Next, numbers like $X666666$ and $666666X$ were counted four times, and there are $9+9$ of those, so subtract another $3\cdot18$, and $6666666$ was counted five times, so subtract $4$.</p>
<p>Two separated $666$s occur in numbers like $666X666$ and were counted twice, so subtract $9$.</p>
<p>Summarizing, there are $50,000-3,420-522-54-4-9=45,991$ numbers containing $666$, leaving $9,954,009$ 7-digit numbers (possibly starting with one or more zeros) with no string of three sixes.</p>
<p>The mathematical principle (search and you'll find a general way to solve these problems) is called "inclusion-exclusion." Your initial approach is just what you need to get started with it.</p>
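<p>These totals can be cross-checked with a short recurrence: let $a_k$ be the number of length-$k$ digit strings with no three consecutive sixes; classifying such a string by its leading block ($X$, $6X$, or $66X$, with $X$ a non-6 digit) gives $a_k=9(a_{k-1}+a_{k-2}+a_{k-3})$ for $k\ge 3$.</p>

```python
a = [1, 10, 100]             # a_0, a_1, a_2
for _ in range(3, 8):
    a.append(9 * (a[-1] + a[-2] + a[-3]))
avoid = a[7]
print(avoid, 10**7 - avoid)  # 9954009 45991
```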
|
990,796 | <p>I have a homework question in a discrete mathematics class that asks me to determine how many 7-digit id numbers <strong>do not</strong> contain three consecutive sixes. </p>
<p>It seems clear that I should approach this by determining the number that <strong>do</strong> have three consecutive sixes and subtracting that from $10^7$, the total number of possibilities. </p>
<p>If it were asking simply for the number of values that contain three sixes, it would simply be $10^4$, or $9^4$ for <em>exactly</em> three sixes. But what's got me thinking is the requirement that they appear consecutively. By sketching out this: </p>
<pre><code>___ ___ ___ ___ ___ ___ ___
x x x 1
x x x 2
x x x 3
x x x 4
x x x 5
</code></pre>
<p>I determined that there are five possible placements, so I'm thinking that the number of values with three consecutive sixes is $5 * 10^4$. But because arrangements 1&4, 1&5 and 2&5 can coincide, this overcounts by $3 *10 = 30$, so I get a final total of $5 * 10^4 - 30$ values with three consecutive sixes. </p>
<p>Questions: </p>
<ul>
<li>Is my basic approach here logically sound? </li>
<li>Making that sketch felt awfully "hacky" - is there a mathematical technique I could have used instead - particularly when I'm dealing with bigger values? </li>
</ul>
| Masacroso | 173,262 | <p>You can see it as a sequence of some number of digits different from 6, with a group of 6's of cardinality between 0 and 2 placed in each gap between them and at either end of the string.</p>
<p>At most, since the digit length is 7, you can have a string of 7 non-6 digits with every gap/end of cardinality 0.</p>
<p>For any gap of cardinality 1 the available length decreases by one, and for any gap of cardinality 2 it decreases by two.</p>
<p>The different combinations of gaps for a length <em>l</em> will be </p>
<ul>
<li>$l=7\to 0/8$</li>
<li>$l=6\to 1/7$</li>
<li>$l=5\to 2/6$</li>
<li>$l=4\to 3/5$</li>
<li>$l=3\to 4/4$</li>
<li>$l=2\to 5/3$</li>
<li>$l=1\to 6/2$</li>
<li>$l=0\to 7/1$</li>
</ul>
<p>The expression $a/b$ means distributing a quantity $a$ over $b$ parts. Strictly speaking these are not partitions but ordered compositions, where the order matters, parts equal to zero are allowed, and each part is bounded above by $2$.</p>
<p>These types of bounded distributions can be computed through a generating function like this:</p>
<p>$$f(x)=(1+x+x^2)^{l+1}=\left(\frac{1-x^3}{1-x}\right)^{l+1}=\sum_{h=0}^{l+1}\binom{l+1}{h}(-1)^hx^{3h}\sum_{j=0}^{\infty}\binom{l+j}{l}x^j$$</p>
<p>Now we have that $3h+j=7-l \to j=7-l-3h$, and the permutations for some <em>l</em> will be</p>
<p>$$p(l)=\sum_{h=0}^{l+1}(-1)^h\binom{l+1}{h}\binom{7-3h}{l}$$</p>
<p>But we must notice that when $7-3h<l$ the sum is zero, so $7-3h\geq l \to h\leq\frac{7-l}{3}$. So the sum will be </p>
<p>$$p(l)=\sum_{h=0}^{\lfloor\frac{7-l}{3}\rfloor}(-1)^h\binom{l+1}{h}\binom{7-3h}{l}$$</p>
<p>And for every <em>l</em> the base count of strings will be $PB(l)=9^l$. And because the probability that the string starts with a zero is $\frac{1}{10}$, the final formula will be</p>
<p>$$P=\frac{9}{10}\sum_{l=0}^{7}9^l\sum_{h=0}^{\lfloor\frac{7-l}{3}\rfloor}(-1)^h\binom{l+1}{h}\binom{7-3h}{l}$$</p>
<p>P.S.: I'm not completely sure about the probability of $\frac{1}{10}$ of starting with zero for this problem. If someone can confirm or refute it, I will appreciate it a lot. It may be safer to try the <strong>complementary</strong> approach, i.e., count the strings with 3 or more consecutive 6's and then subtract from the total. In any case, the number of strings starting with zero is not obvious under either approach.</p>
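<p>For what it's worth, the inner sum above can be evaluated numerically (a sketch; Python's <code>math.comb</code> returns $0$ when the lower index exceeds the upper, matching the convention used above); it comes out to $9{,}954{,}009$, the number of length-7 digit strings (leading zeros allowed) with no three consecutive sixes, so only the $\frac{9}{10}$ factor remains in doubt.</p>

```python
from math import comb

total = sum(
    9**l * sum((-1)**h * comb(l + 1, h) * comb(7 - 3 * h, l)
               for h in range((7 - l) // 3 + 1))
    for l in range(8)
)
print(total)  # 9954009
```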
|
1,357,638 | <p>Here is the problem:</p>
<p>Suppose $n$ people are at a party, and some number of them shake hands. At the end of the party, each guest $G_i$, $1 \leq i \leq n$ shares that they shook hands $x_i$ times. Assume there were a total of $h \geq 0$ handshakes at the party. Use induction on $h$ to prove that:</p>
<p>$x_i + \cdots + x_n = 2h$</p>
<p>I am a little confused about the best way to solve this problem. Performing induction on $h$, means that I will assign my base case for $h=1$. The base case then suggests:</p>
<p>$x_i + \cdots + x_n = 2h$</p>
<p>$x_i + \cdots + x_n = 2$</p>
<p>I am I correct to assume that this means that for my base case for $1$ handshake, that there are 2 ($x_1$ + $x_2$) people who shook hands?</p>
<p>Based on this assumption, my next step would be to assume $h=k, \ \forall \ k \in \mathbb{Z} $. Performing induction on $h$, I would then let $h=(k+1)$ and I want to show that the left side of the equation is equivalent to $2(k+1)$:</p>
<p>$x_i + \cdots + x_n + 2 = 2(k+1)$</p>
<p>^This is the step I'm not really certain of. Since I know that when I add one additional hand shake, that means I am adding $2$ people to the situation... so, would adding $2$ to the left side be a fair way to show this induction? If I do this, then I easily come up with:</p>
<p>$2k + 2 = 2(k+1)$</p>
<p>$ 2k + 2 = 2k + 2$</p>
<p>Which is a true statement, so then I would have (hopefully...) proved this by induction.</p>
<p>What do you guys think? I have traditionally only performed induction on more straight forward math-y problems, so I'm not really sure if I am headed down the right rabbit hole on this one. I am currently in an Introduction to Proofs course at my University. This is a homework problem- I am obviously not looking for someone to do my homework for me (since that wouldn't bode well for my next exam...) but I just want to seek out some advice for my approach to this problem.</p>
<p>Thanks for looking!</p>
| André Nicolas | 6,312 | <p>Assume that the handshakes occur <em>sequentially</em>: At time $1$ there is a handshake, then there is a handshake at time $2$, and so on. Let us suppose that when $h=k$, then $x_1+x_2+\cdots +x_n=2k$. Now at time $k+1$, a new handshake takes place, say between person $i$ and person $j$. Then $x_i$ is incremented by $1$, and so is $x_j$, and therefore $x_1+x_2+\cdots+x_n$ is incremented by $2$.</p>
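<p>The invariant in this argument is easy to watch in a short simulation (a sketch with arbitrary parameters):</p>

```python
import random

rng = random.Random(1)
n, h = 8, 30
x = [0] * n
for step in range(1, h + 1):
    i, j = rng.sample(range(n), 2)  # a handshake between two distinct guests
    x[i] += 1
    x[j] += 1
    assert sum(x) == 2 * step       # each handshake adds exactly 2 to the total
print(sum(x))  # 60
```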
|
2,972,950 | <p>Everything on this question is in complex plane.</p>
<p>As the book describes a property of a winding number, it says that:</p>
<blockquote>
<p>Outside of the [line segment from <span class="math-container">$a$</span> to <span class="math-container">$b$</span>] the function <span class="math-container">$(z-a) / (z-b)$</span> is never real and <span class="math-container">$\leq 0$</span>.</p>
</blockquote>
<p>Here, the above statement should be interpreted as "never (real and <span class="math-container">$\leq 0$</span>)".</p>
<p>If anyone could explain why this is true that would be great. I do get why any point on the line segment (other than <span class="math-container">$b$</span>, in which case the denominator is <span class="math-container">$0$</span>) has to satisfy the condition that <span class="math-container">$(z-a) / (z-b)$</span> is real and <span class="math-container">$\leq 0$</span>, but I am not sure how to prove why any point not on the line has to satisfy the condition also.</p>
<p>Here, <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are arbitrary complex number in a region determined by a closed curve in the complex plane; both points lie on the same region. </p>
| Alexander Gruber | 12,952 | <p>The imaginary part of <span class="math-container">$u=\frac{z-a}{z-b}$</span> is <span class="math-container">$$\frac{u-\overline{u}}{2i}=\frac{1}{2i}\left(\frac{z-a}{z-b}-\overline{\frac{z-a}{z-b}}\right)=\frac{1}{2i}\left(\frac{z-a}{z-b}-\frac{\overline{z}-\overline{a}}{\overline{z}-\overline{b}}\right)$$</span></p>
<p>So, if we want this to be real, we need</p>
<p><span class="math-container">$$
\begin{eqnarray*}
\frac{1}{2i}\left(\frac{z-a}{z-b}-\frac{\overline{z}-\overline{a}}{\overline{z}-\overline{b}}\right)&=&0
\\
(z-a)(\overline{z}-\overline{b})&=&(z-b)(\overline{z}-\overline{a}) \\
z\overline{z}-z\overline{b}-a\overline{z}+a\overline{b}&=&z\overline{z}-z\overline{a}-b\overline{z}+b\overline{a}
\\
0&=&z(\overline{a}-\overline{b})+b(\overline{z}-\overline{a})-a(\overline{z}-\overline{b})
\\
0&=&-\Re(b)\Im(z)+\Im(b)\Re(z)+\Re(a)(\Im(z)-\Im(b))+\Im(a)(\Re(b)-\Re(z)) \\
0&=&\Re(a)\Im(b)-\Re(b)\Im(a)-(\Re(a)-\Re(b))\Im(z)+(\Im(a)-\Im(b))\Re(z) \\
\Im(z)&=&\frac{\Im(b) \Re(a) - \Im(a) \Re(b) + (\Im(a) - \Im(b)) \Re(z)}{\Re(a) - \Re(b)}
\end{eqnarray*}
$$</span>
which is the complex point-slope form of the line connecting <span class="math-container">$a$</span> and <span class="math-container">$b$</span>.</p>
<p>So, what about whether the real part is positive or negative? We'll use this formula for <span class="math-container">$z$</span> to compute <span class="math-container">$u$</span>.</p>
<p><span class="math-container">$$
\begin{eqnarray*}
u&=&\frac{\Re(z)+i\Im(z)-\left(\Re(a)+i \Im(a)\right)}{\Re(z)+i\Im(z)-\left(\Re(b)+i \Im(b)\right)}\\
&=&
\frac{\Re(z)+i\left(\frac{\Im(b) \Re(a) - \Im(a) \Re(b) + (\Im(a) - \Im(b)) \Re(z)}{\Re(a) - \Re(b)}\right)-(\Re(a)+i \Im(a))}{\Re(z)+i\left(\frac{\Im(b) \Re(a) - \Im(a) \Re(b) + (\Im(a) - \Im(b)) \Re(z)}{\Re(a) - \Re(b)}\right)-(\Re(b)+i \Im(b))}\\
&=&
\frac{\left(\Re(z)-\Re(a)\right)\left(\frac{\Re(a)-\Re(b)+i(\Im(a)-\Im(b))}{\Re(a)-\Re(b)}\right)}{\left(\Re(z)-\Re(b)\right)\left(\frac{\Re(a)-\Re(b)+i(\Im(a)-\Im(b))}{\Re(a)-\Re(b)}\right)}\\
&=&
\frac{\Re(z)-\Re(a)}{\Re(z)-\Re(b)}
\end{eqnarray*}
$$</span>
From there, it's simple algebra to see that <span class="math-container">$u=\frac{\Re(z)-\Re(a)}{\Re(z)-\Re(b)}$</span> is negative only between <span class="math-container">$\Re(a)$</span> and <span class="math-container">$\Re(b)$</span>, which corresponds to the line segment joining <span class="math-container">$a$</span> and <span class="math-container">$b$</span>.</p>
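<p>A quick numerical illustration of both facts, with arbitrary sample points $a$ and $b$:</p>

```python
a, b = 1 + 2j, 4 - 1j

def u(z):
    return (z - a) / (z - b)

# interior points of the segment from a to b: u(z) is real and negative
for t in (0.25, 0.5, 0.75):
    z = a + t * (b - a)
    assert abs(u(z).imag) < 1e-12 and u(z).real < 0

# a point off the line through a and b: u(z) is not real
print(abs(u(0j).imag) > 1e-6)  # True
```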
|
3,965,455 | <p>Find <span class="math-container">$E(X^3)$</span> given <span class="math-container">$X$</span> is in <span class="math-container">$Exp(2)$</span></p>
<p>My idea is that we can use <span class="math-container">$f_X(x)=2e^{-2x}$</span> and integrate <span class="math-container">$\int_{0}^{a}x^3f_X(x)dx$</span>. But from here I don't know where to go.</p>
| Tan | 814,070 | <p>Observe that, <span class="math-container">$$f(x)=\frac{10x}{x-10}=\frac{10x-100+100}{x-10}=10+\frac{100}{x-10}$$</span> So, you should find all <span class="math-container">$x \in \mathbb Z$</span> such that <span class="math-container">$x-10$</span> divides <span class="math-container">$100$</span>, which can be done easily considering the factors of <span class="math-container">$100$</span>.</p>
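<p>As a side note on the question as asked ($E[X^3]$ for $X\sim \mathrm{Exp}(2)$): the integral $\int_0^\infty x^3\cdot 2e^{-2x}\,dx$ can be evaluated symbolically, and more generally $E[X^n]=n!/\lambda^n$ for an $\mathrm{Exp}(\lambda)$ variable; a quick sympy sketch:</p>

```python
import sympy as sp

x = sp.symbols('x', nonnegative=True)
lam = 2
EX3 = sp.integrate(x**3 * lam * sp.exp(-lam * x), (x, 0, sp.oo))
print(EX3)  # 3/4, consistent with 3!/2**3
```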
|
2,408,223 | <p>Compute $\int_0^2 \lfloor x^2 \rfloor\,dx$.</p>
<p>The challenging part isn't the problem itself, but the notation around the x^2. I don't know what it is. If someone could clarify, that would be great!</p>
<p>Edit: Clarified that it represents the floor function, can anyone give me a hint on how to start working on the problem?</p>
| Xander Henderson | 468,350 | <p>That is the greatest integer, or floor, function. The notation $\lfloor x \rfloor$ stands for the greatest integer less than or equal to $x$. Think of it as "rounding down" to the next integer.</p>
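<p>Once the notation is clear, note that $\lfloor x^2\rfloor$ is piecewise constant on $[0,2]$, jumping at $x=\sqrt 1,\sqrt 2,\sqrt 3$; a numerical sketch of splitting the integral at those points:</p>

```python
from math import sqrt

# floor(x^2) equals k on [sqrt(k), sqrt(k+1))
breaks = [0.0, 1.0, sqrt(2), sqrt(3), 2.0]
integral = sum(k * (breaks[k + 1] - breaks[k]) for k in range(4))
print(integral)  # 5 - sqrt(2) - sqrt(3), about 1.8537
```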
|
1,301,116 | <p>We know that if $f \in \mathcal R[a,b]$ and if $a = c_0 < c_1<\cdots<c_m =b$, then the restrictions of $f$ to each subinterval $[c_{i-1},c_i]$ are Riemann integrable.</p>
<p>Is the converse true? That is, if $f : [a,b] \to \Bbb R$, $a = c_0 < c_1<\cdots<c_m =b$, and the restrictions of $f$ to each subinterval $[c_{i-1},c_i]$ belong to $\mathcal R[c_{i-1},c_i]$, is $f$ Riemann integrable? </p>
<p>I am finding it difficult to show this.</p>
| user1337 | 62,839 | <p>As $f$ is Riemann integrable on $[c_{i-1},c_i]$, for any $\epsilon>0$ there exists a partition $P_i$ of it such that $U(f,P_i)-L(f,P_i)< \epsilon/m$. If you use the partitions $\{P_i\}_{i=1}^m$ to create a partition of $[a,b]$ in the obvious way, you're done.</p>
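<p>In more detail, the combining step works because the subintervals overlap only at endpoints: if $P=P_1\cup\cdots\cup P_m$ is the resulting partition of $[a,b]$, then the upper and lower sums split over the pieces, so $$U(f,P)-L(f,P)=\sum_{i=1}^m\bigl(U(f,P_i)-L(f,P_i)\bigr)<m\cdot\frac{\epsilon}{m}=\epsilon,$$ and $f$ satisfies the Riemann criterion on $[a,b]$.</p>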
|
43,611 | <p>I posted this on Stack Exchange and got a lot of interest, but no answer.</p>
<p>A recent <a href="http://people.missouristate.edu/lesreid/POW12_0910.html" rel="nofollow">Missouri State problem</a> stated that it is easy to decompose the plane into half-open intervals and asked us to do so with intervals pointing in every direction. That got me trying to decompose the plane into closed or open intervals. The best I could do was to make a square with two sides missing (which you can do out of either type) and form a checkerboard with the white squares missing the top and bottom and the black squares missing the left and right. That gets the whole plane except the lattice points. This seems like it must be a standard problem, but I couldn't find it on the web. <strong>Question:</strong> So can the plane be decomposed into unit open intervals? closed intervals?</p>
| Community | -1 | <p>Decompose a line without a point into a union of disjoint half-open intervals. Put copies of this line on the plane so that the distinguished point is $(0,0)$ and the lines point in all possible directions. You have covered the plane without one point, $(0,0)$. Now take one of these lines and replace it by a line completely covered by unit intervals. You get the whole plane covered. The same works in all dimensions. </p>
<p><b> Edit 1</b> This solution does miss one direction. If closed intervals are allowed, then instead of the last step, replace one half-open interval on one of the lines by a closed one, so that you cover $(0,0)$.</p>
<p><b> Edit 2</b> I just noticed that I was answering a wrong question. Fortunately the correct question has been answered already. </p>
|
43,611 | <p>I posted this on Stack Exchange and got a lot of interest, but no answer.</p>
<p>A recent <a href="http://people.missouristate.edu/lesreid/POW12_0910.html" rel="nofollow">Missouri State problem</a> stated that it is easy to decompose the plane into half-open intervals and asked us to do so with intervals pointing in every direction. That got me trying to decompose the plane into closed or open intervals. The best I could do was to make a square with two sides missing (which you can do out of either type) and form a checkerboard with the white squares missing the top and bottom and the black squares missing the left and right. That gets the whole plane except the lattice points. This seems like it must be a standard problem, but I couldn't find it on the web. <strong>Question:</strong> So can the plane be decomposed into unit open intervals? closed intervals?</p>
| rpotrie | 5,753 | <p>If you consider upper semicontinuous decompositions on compact connected sets, then, in <a href="https://www.ams.org/journals/bull/1968-74-01/S0002-9904-1968-11919-6/S0002-9904-1968-11919-6.pdf" rel="nofollow noreferrer">this paper</a> it is proved that it is not possible to fill any euclidean space in such a way.</p>
<p>There is a <a href="https://projecteuclid.org/journals/duke-mathematical-journal/volume-2/issue-1/Collections-filling-a-plane/10.1215/S0012-7094-36-00202-8.short" rel="nofollow noreferrer">paper by Roberts</a> where he proves the two dimensional result and also gives an example of an upper semicontinuous decomposition of the plane into cellular curves (they may be not simple, but they are a decreasing intersection of disks).</p>
<p>I know this does not answer the question entirely, since the upper semicontinuous hypothesis is strong; however, it is often a desirable one.</p>
|
438,263 | <p>Is there a concrete example of a <span class="math-container">$4$</span> tensor <span class="math-container">$R_{ijkl}$</span> with the same symmetries as the Riemannian curvature tensor, i.e.
<span class="math-container">\begin{gather*}
R_{ijkl} = - R_{ijlk},\quad R_{ijkl} = R_{jikl},\quad R_{ijkl} = R_{klij}, \\
R_{ijkl} + R_{iklj} + R_{iljk} = 0.
\end{gather*}</span>
for which there is no metric for which it is the Riemannian curvature tensor?</p>
<p>The existence of such a curvature was already shown by <a href="https://mathoverflow.net/questions/202211/equations-satisfied-by-the-riemann-curvature-tensor">Robert Bryant</a>, however, I'm looking for a concrete example.</p>
| Peter Taylor | 46,140 | <p><a href="http://openproblemgarden.org/" rel="noreferrer">Open Problem Garden</a></p>
<p>I occasionally stumble across this in my search results. It's currently skewed heavily to graph theory, which suggests that the user base also skews that way. Con: it doesn't appear to be very active; the most recent addition is from 2020, although there are comments from 2022.</p>
|
438,263 | <p>Is there a concrete example of a <span class="math-container">$4$</span> tensor <span class="math-container">$R_{ijkl}$</span> with the same symmetries as the Riemannian curvature tensor, i.e.
<span class="math-container">\begin{gather*}
R_{ijkl} = - R_{ijlk},\quad R_{ijkl} = R_{jikl},\quad R_{ijkl} = R_{klij}, \\
R_{ijkl} + R_{iklj} + R_{iljk} = 0.
\end{gather*}</span>
for which there is no metric for which it is the Riemannian curvature tensor?</p>
<p>The existence of such a curvature was already shown by <a href="https://mathoverflow.net/questions/202211/equations-satisfied-by-the-riemann-curvature-tensor">Robert Bryant</a>, however, I'm looking for a concrete example.</p>
| Gerry Myerson | 3,684 | <p>If it's a problem in Number Theory, the annual West Coast Number Theory meetings have a problem session, and the problems get collected & edited & posted to <a href="https://westcoastnumbertheory.org/problem-sets/" rel="noreferrer">https://westcoastnumbertheory.org/problem-sets/</a></p>
<p>If you can't come to the meeting, you can send the problem to me (I'm the editor of the problem sets), and I'll present it for you.</p>
|
143,070 | <p>Suppose whole square and the left square in the diagram below are pullbacks, then we may wonder whether the right square is a pullback. It is usually not the case. </p>
<p><img src="https://i.stack.imgur.com/yhrcd.jpg" alt="square"></p>
<p>Now we seek some additional condition on $X\to Y$ that forces the right square to be a pullback too. </p>
<p>My question: is epic a sufficient condition? (If the category is Sets, then yes.)</p>
<p><strong>Added</strong>: Let $P$ be the pullback of the right square; then there exists $B\to P$, and the square $A\to P \to Y$ // $A\to X \to Y$ is a pullback, so we have the following diagram, in which the bottom and the whole squares are pullbacks, hence so is the upper square. If the category is Sets and $X\to Y$ is surjective, then $A\to P$ is also surjective. Since the pullback of $B\to P$ along a surjective map is a bijection, $B\to P$ must be a bijection. This shows the right square of the original diagram is a pullback. We can also see why we consider some nice condition on $X\to Y$.</p>
<p><img src="https://i.stack.imgur.com/Uepcb.jpg" alt="2nd square"></p>
| john | 8,751 | <p>A sufficient condition in a category with pullbacks is that $X \to Y$ be a pullback stable regular epimorphism: a regular epimorphism all of whose pullbacks are also regular epimorphisms. This property holds in any regular category.</p>
<p>Stability implies that $A \to B$ and $A \to P$ are (pullback stable) regular epis too. Thus $B \to P$ is strong epi by 2 out of 3 on the upper square of the lower diagram. So to show $B \to P$ is an isomorphism you just need to show it is mono.</p>
<p>You need to show that the two maps from the kernel pair $B\times_{P}B$ of $B \to P$ coincide. To see this we'll use the map between kernel pairs $A=A \times_{A}A \to B \times_{P} B$ induced by the same square as before. This map is, in fact, the pullback of $A \to P$ along equally the domain or codomain projections for the graph map between kernel pairs, and so a regular epi itself. Thus the two maps $B\times_{P}B \to B$ coincide because their precomposite with this regular epi does.</p>
<p>(There is a good reason for replacing epis in Set by pullback stable regular epis in a category C with pullbacks. The reason is that in any such category C the pullback stable regular epis, taken as singleton covers, determine the basis for a Grothendieck topology on C. Moreover it is subcanonical and, furthermore, the induced functor $C \to Sh(C)$ to the category of sheaves preserves pullbacks and sends pullback stable regular epis to the same. Thus any exactness property which holds between pullbacks and epis in a Grothendieck topos holds between pullbacks and pullback stable regular epis in an arbitrary category with pullbacks.)</p>
|
143,070 | <p>Suppose whole square and the left square in the diagram below are pullbacks, then we may wonder whether the right square is a pullback. It is usually not the case. </p>
<p><img src="https://i.stack.imgur.com/yhrcd.jpg" alt="square"></p>
<p>Now we seek some additional condition on $X\to Y$ that forces the right square to be a pullback too. </p>
<p>My question: is epic a sufficient condition? (If the category is Sets, then yes.)</p>
<p><strong>Added</strong>: Let $P$ be the pullback of the right square; then there exists $B\to P$, and the square $A\to P \to Y$ // $A\to X \to Y$ is a pullback, so we have the following diagram, in which the bottom and the whole squares are pullbacks, hence so is the upper square. If the category is Sets and $X\to Y$ is surjective, then $A\to P$ is also surjective. Since the pullback of $B\to P$ along a surjective map is a bijection, $B\to P$ must be a bijection. This shows the right square of the original diagram is a pullback. We can also see why we consider some nice condition on $X\to Y$.</p>
<p><img src="https://i.stack.imgur.com/Uepcb.jpg" alt="2nd square"></p>
| Michal R. Przybylek | 13,480 | <p>I tried to write the explanation of the comment about a dozen times, but was never satisfied with the result. Finally, I decided to write a full note (I will try to put it on arXiv in a few minutes; <a href="http://arxiv.org/abs/1311.2974" rel="nofollow">here it is</a>) describing the natural setting for such questions:</p>
<p><a href="http://www.mimuw.edu.pl/~mrp/the_other_pullback_lemma.pdf" rel="nofollow">http://www.mimuw.edu.pl/~mrp/the_other_pullback_lemma.pdf</a></p>
<p>(the note still needs some improvements, but I am running out of time now...)</p>
<p>I found it easier to characterise your condition by "extremal epimorphisms" rather than "strong epimorphisms" (note, however, that in the case of finite connected limits these concepts coincide). Here is the formal statement:</p>
<p>Let us assume that finite connected limits exist. The following are equivalent:</p>
<ul>
<li>your condition along $e \colon X \rightarrow Y$ holds </li>
<li>$e \colon X \rightarrow Y$ is an extremal morphism stable under pullbacks.</li>
</ul>
|
36,774 | <p>Do asymmetric random walks also return to the origin infinitely often?</p>
| Did | 6,179 | <p>This is a consequence of the law of large numbers. The position $S_n$ at time $n$ is the sum of $S_0$ and of $n$ i.i.d. displacements, each with expectation $m\ne0$, hence $S_n/n\to m$ almost surely. In particular, $|S_n|\ge |m|n/2$ for every $n\ge N$ where $N$ is random and almost surely finite, which implies $S_n\ne0$. Since $(S_n)$ does not visit zero after time $N$, the number of visits of zero is almost surely finite. The starting point $S_0$ is irrelevant.</p>
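A quick numerical illustration of this argument (my own addition, not part of the answer; the drift value $p=0.7$ and the seed are arbitrary choices): with positive drift, $S_n/n$ settles near $m=2p-1$ and visits to the origin stop early on.

```python
import random

def simulate(p=0.7, steps=100_000, seed=0):
    """Walk with i.i.d. steps +1 (prob p) and -1 (prob 1-p), so drift m = 2p - 1."""
    rng = random.Random(seed)
    s, last_zero = 0, 0
    for n in range(1, steps + 1):
        s += 1 if rng.random() < p else -1
        if s == 0:
            last_zero = n   # record the most recent return to the origin
    return s, last_zero

s_final, last_zero = simulate()
print(s_final / 100_000)   # close to the drift m = 0.4
print(last_zero)           # returns to 0 stop very early
```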
|
36,774 | <p>Do asymmetric random walks also return to the origin infinitely often?</p>
| Conrado Costa | 226,425 | <p>It depends. If you consider a Random Walk in a Random Environment, it may be asymmetric and recurrent. See <a href="https://arxiv.org/pdf/0707.3160.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/0707.3160.pdf</a></p>
<p>Also, if your walk is homogeneous,</p>
<p>$$ X_i = \begin{cases} +1 \text{ with probability } 2/3\\
-2 \text{ with probability } 1/3\end{cases}$$ </p>
<p>let $S_n = \sum_{i=1}^n X_1$ this walk is recurrent. Indeed, by Donsker Theorem
$$\frac{S_{[tn]}}{\sqrt{n}} \to B_t $$
where $B_t$ is a Brownian motion. This implies that
$$P(S_n <0 \text{ infinitely often and } S_n >0 \text{ infinitely often} ) = 1$$</p>
<p>Since to cross from the negative to the positive this walk must first reach $0$, we conclude that this <strong>asymmetric</strong> random walk visits $0$ infinitely many times.</p>
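A small check of this example (my own addition, not from the answer): the step distribution is asymmetric yet has mean exactly zero, and a simulated path stays $o(n)$ while repeatedly hitting the origin.

```python
import random
from fractions import Fraction

# Mean of the step distribution +1 w.p. 2/3, -2 w.p. 1/3 -- exactly zero:
mean_step = Fraction(2, 3) * 1 + Fraction(1, 3) * (-2)
print(mean_step)   # 0

rng = random.Random(1)         # arbitrary fixed seed
s, zero_visits = 0, 0
for _ in range(200_000):
    s += 1 if rng.random() < 2 / 3 else -2
    if s == 0:
        zero_visits += 1
print(zero_visits, s)          # many returns to 0; S_n stays small relative to n
```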
|
296,727 | <p><b>Assuming that G is a finite cyclic group, let "a" be the product of all the elements in the group.</b> </p>
<p>i. <b> If G has odd order, then a=e.</b> Is this because there is an even number of non-trivial elements, which pair off with their inverses among the factors of the product?</p>
<p>ii. <b> If G has even order then a is not equal to e. </b> Here there is an odd number of non-trivial elements. There must also be an odd number of elements that are their own inverses, and therefore a cannot equal e?</p>
<p>Thanks for any help! I'm just reading a textbook and trying practice problems!</p>
| DonAntonio | 31,254 | <p>Let $\,G=\langle x\rangle =\{1\,,\,x\,,\,x^2\,,\,\ldots\,,\,x^{n-1}\}\,$:</p>
<p>$$x\cdot x^2\cdot\ldots\cdot x^{n-1}=x^{\frac{n(n-1)}{2}}=\begin{cases}x^{(n-1)k}\neq 1&\,,\,\,\text{ if}\,\;\;\;n=2k\,\,\,\text{ is even}\\{}\\x^{kn}=1&\,,\,\text{ if}\,\;\;n-1=2k\,\,\,\text{is even}\end{cases}$$</p>
|
3,841,542 | <p>I am trying to show that <span class="math-container">$\sqrt{\sqrt{2}+5}$</span> is constructible through a diagram.</p>
<p>I know how to show something of the form <span class="math-container">$\sqrt[n]{a}$</span> is constructible through a diagram, but I am really having a difficult time with this one.</p>
<p>Any tips? Thanks.</p>
| Abhijeet Vats | 426,261 | <p><strong>Hint:</strong></p>
<p>Have you considered differentiating:</p>
<p><span class="math-container">$$\int_{0}^{\infty} e^{-ax} \sin(kx) \ dx = \frac{k}{a^2+k^2}$$</span></p>
<p>with respect to <span class="math-container">$k$</span> instead? :-)</p>
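Both the quoted formula and the result of differentiating it under the integral sign can be sanity-checked numerically (my own addition; the values $a=1.3$, $k=0.7$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

a, k = 1.3, 0.7

# The base formula from the hint.
base, _ = quad(lambda x: np.exp(-a * x) * np.sin(k * x), 0, np.inf)
print(base, k / (a**2 + k**2))

# Differentiating both sides in k gives
#   int_0^oo x e^{-ax} cos(kx) dx = (a^2 - k^2)/(a^2 + k^2)^2.
deriv, _ = quad(lambda x: x * np.exp(-a * x) * np.cos(k * x), 0, np.inf)
print(deriv, (a**2 - k**2) / (a**2 + k**2)**2)
```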
|
3,841,542 | <p>I am trying to show that <span class="math-container">$\sqrt{\sqrt{2}+5}$</span> is constructible through a diagram.</p>
<p>I know how to show something of the form <span class="math-container">$\sqrt[n]{a}$</span> is constructible through a diagram, but I am really having a difficult time with this one.</p>
<p>Any tips? Thanks.</p>
| Felix Marin | 85,343 | <p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\on}[1]{\operatorname{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span>
Let <span class="math-container">$\ds{\on{u}\pars{a,k} \equiv \int_{0}^{\infty}\expo{-ax}\sin\pars{kx}\,\dd x}$</span>, which satisfies
<span class="math-container">$\ds{\on{u}_{aa}\pars{a,k} + \on{u}_{kk}\pars{a,k} = 0}$</span>.</p>
<p>It has the general solution
<span class="math-container">$$\on{u}\pars{a,k} =
\on{f}\pars{a + k\,\ic} + \on{g}\pars{a - k\,\ic}
$$</span></p>
<p>Then,
<span class="math-container">\begin{align}
0 & = \on{u}\pars{a,0} = \on{f}\pars{a} + \on{g}\pars{a}
\implies\on{g}\pars{a} = -\on{f}\pars{a}
\end{align}</span>
The general solution is reduced to
<span class="math-container">\begin{equation}
\on{u}\pars{a,k} =
\on{f}\pars{a + k\,\ic} - \on{f}\pars{a - k\ic}
\label{1}\tag{1}
\end{equation}</span>
Moreover, <span class="math-container">$\ds{{1 \over a^{2}} = \on{u}_{k}\pars{a,0} =
\on{f}'\pars{a}\ic -
\bracks{\on{f}'\pars{a}\pars{-\ic}} =
2\on{f}'\pars{a}\ic}$</span>
<span class="math-container">$\ds{\implies
\on{f}\pars{a} = {\ic \over 2a} + \underline{\mbox{a constant}}}$</span></p>
<p>and ( see (\ref{1}) )
<span class="math-container">$$
\on{u}\pars{a,k} = {\ic \over 2\pars{a + k\,\ic}} -
{\ic \over 2\pars{a - k\,\ic}} =
\bbx{k \over a^{2} + k^{2}} \\
$$</span></p>
|
234,866 | <p>Problem: when I draw a rectangle and put a coloured edge around it, the displayed edge is centred along the nominal edge and if it follows the same course as one of the axes then it does not show up. For example:</p>
<pre><code>Graphics[{EdgeForm[Red], FaceForm[], Rectangle[{0, 0}, {4, 3}]}, Axes -> True]
</code></pre>
<p>yields this:</p>
<p><a href="https://i.stack.imgur.com/eBZNY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eBZNY.jpg" alt="Enter image description here" /></a></p>
<p>The problem also arises when for example I want to show a grid of 1x1 squares with some edged in red and others in blue: along any given line-segment, only one colour shows.</p>
<p>How can I make the coloured "edge" of a rectangle sit inside the shape, touching the shape's boundary, but not running outside it?</p>
| Jagra | 571 | <p>You can change the <code>AxesOrigin</code>.</p>
<pre><code>Graphics[{EdgeForm[Red], FaceForm[], Rectangle[{0, 0}, {4, 3}]},
Axes -> True, AxesOrigin -> {-1, -1}]
</code></pre>
<p><a href="https://i.stack.imgur.com/sJPqE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sJPqE.png" alt="enter image description here" /></a></p>
<p>Or</p>
<pre><code>Graphics[{EdgeForm[Red], FaceForm[], Rectangle[{0, 0}, {4, 3}]},
Axes -> True, AxesOrigin -> {-0.1, -0.1}]
</code></pre>
<p><a href="https://i.stack.imgur.com/ZO1BO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZO1BO.png" alt="enter image description here" /></a></p>
<p>Or (in response to a comment from OP):</p>
<pre><code>Show[{
Graphics[{EdgeForm[Red], FaceForm[], Rectangle[{0, 0}, {1, 1}]},
Axes -> True, AxesOrigin -> {-0.1, -0.1}],
Graphics[{EdgeForm[Blue], FaceForm[], Rectangle[{1, 1}, {2, 2}]},
Axes -> True, AxesOrigin -> {-0.1, -0.1}],
Graphics[{EdgeForm[Green], FaceForm[], Rectangle[{1, 0}, {2, 1}]},
Axes -> True, AxesOrigin -> {-0.1, -0.1}],
Graphics[{EdgeForm[Black], FaceForm[], Rectangle[{0, 1}, {1, 2}]},
Axes -> True, AxesOrigin -> {-0.1, -0.1}]
}]
</code></pre>
<p><a href="https://i.stack.imgur.com/d9ApN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d9ApN.png" alt="enter image description here" /></a></p>
<p>But this doesn't get you separately outlined rectangles.
Something like the following may get you closer:</p>
<pre><code>Grid[{
{Graphics[{EdgeForm[Red], FaceForm[], Rectangle[{0, 0}, {1, 1}]}],
Graphics[{EdgeForm[Blue], FaceForm[],
Rectangle[{0, 0}, {1, 1}]}]},
{Graphics[{EdgeForm[Green], FaceForm[],
Rectangle[{0, 0}, {1, 1}]}],
Graphics[{EdgeForm[Black], FaceForm[],
Rectangle[{0, 0}, {1, 1}]}]}
}, Spacings -> {-0.5, -0.5}]
</code></pre>
<p><a href="https://i.stack.imgur.com/pLqYO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pLqYO.png" alt="enter image description here" /></a></p>
<p>But this doesn't give you Axes.<br />
One could construct Axes separately, if you really need them.</p>
|
234,866 | <p>Problem: when I draw a rectangle and put a coloured edge around it, the displayed edge is centred along the nominal edge and if it follows the same course as one of the axes then it does not show up. For example:</p>
<pre><code>Graphics[{EdgeForm[Red], FaceForm[], Rectangle[{0, 0}, {4, 3}]}, Axes -> True]
</code></pre>
<p>yields this:</p>
<p><a href="https://i.stack.imgur.com/eBZNY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eBZNY.jpg" alt="Enter image description here" /></a></p>
<p>The problem also arises when for example I want to show a grid of 1x1 squares with some edged in red and others in blue: along any given line-segment, only one colour shows.</p>
<p>How can I make the coloured "edge" of a rectangle sit inside the shape, touching the shape's boundary, but not running outside it?</p>
| Brett Champion | 69 | <p>The first way that comes to mind for me is to use <code>Offset</code> to "shrink" the rectangles by the width of the edges:</p>
<pre><code>Graphics[Table[{
EdgeForm[{AbsoluteThickness[1], RandomChoice[{Red, Blue}]}], FaceForm[],
Rectangle[Offset[{1/2, 1/2}, {i, j}], Offset[{-1/2, -1/2}, {i + 1, j + 1}]]
}, {i, 7}, {j, 5}]]
</code></pre>
<p><a href="https://i.stack.imgur.com/pGnvC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pGnvC.png" alt="enter image description here" /></a></p>
<p>Setting the thickness of the edges allows us to know how far in points we should shift the corners of the rectangles toward their centers.</p>
|
445,816 | <p>I have to show that</p>
<blockquote>
<blockquote>
<p>$\mathbb{C}=\overline{\mathbb{C}\setminus\left\{0\right\}}$,</p>
</blockquote>
</blockquote>
<p>what is very probably an easy task; nevertheless I have some problems.</p>
<p>In words this means: $\mathbb{C}$ is the smallest closed superset of $\mathbb{C}\setminus\left\{0\right\}$.</p>
| Cameron Buie | 28,900 | <p><strong>Hint</strong>: (I assume you're using the usual topology on $\Bbb C$.) Note that $\{0\}$ is not an open set, so $\Bbb C\setminus\{0\}$ is not a closed set. $\Bbb C$ is closed, however. Is $\Bbb C$ a superset of $\Bbb C\setminus\{0\}$? If so, how many extra points does it have? What can you conclude?</p>
|
2,423,569 | <p>I am asked to show that if $T(z) = \dfrac{az+b}{cz+d}$ is a Möbius transformation such that $T(\mathbb{R})=\mathbb{R}$ and $ad-bc=1$, then $a,b,c,d$ are all real numbers or all purely imaginary numbers. </p>
<p>So far I've tried multiplying numerator and denominator by the conjugate of $cz+d$ and seeing if I get some information about $a,b,c,d$, considering that $T(z) \in \mathbb{R}$ whenever $z \in \mathbb{R}$, but this doesn't really work. Also I've considered $SL(2,\mathbb{C}) / \{ \pm I\}$, which is isomorphic to the group of Möbius transformations, but this doesn't really help either.</p>
| Wojowu | 127,263 | <p>Hints: if we assume $c\neq 0$, $T(z)=\frac{a}{c}-\frac{1}{c(cz+d)}$. Letting $z$ go to infinity shows $\frac{a}{c}$ is real, hence $c(cz+d)$ is real for all $z\in\mathbb R$. From there conclude $cd$ and $c^2$ are real. The rest should be easy. The case $c=0$ is also not difficult.</p>
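The decomposition in the hint is easy to verify symbolically (a check I added; substituting $b=(ad-1)/c$ encodes $ad-bc=1$ and assumes $c\neq 0$):

```python
import sympy as sp

a, c, d, z = sp.symbols('a c d z')
b = (a * d - 1) / c            # impose ad - bc = 1 (assumes c != 0)

T = (a * z + b) / (c * z + d)
decomposed = a / c - 1 / (c * (c * z + d))
assert sp.simplify(T - decomposed) == 0
print("T(z) = a/c - 1/(c(cz+d)) holds whenever ad - bc = 1")
```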
|
997,463 | <p>For example, a complex number like $z=1$ can be written as $z=1+0i=|z|e^{i Arg z}=1e^{0i} = e^{i(0+2\pi k)}$.</p>
<p>$f(z) = \cos z$ has period $2\pi$ and $\cosh z$ has period $2\pi i$.</p>
<p>Given a complex function, how can we tell if it is periodic or not, and further, how would we calculate the period? For example, how do we find the period of $f(z)=\tan z$?</p>
| lab bhattacharjee | 33,337 | <p>If Trigonometric substitution is not mandatory, write $$x^3=\frac{x(4x^2+9)-9x}4$$</p>
<p>$$\implies\frac{x^3}{(4x^2+9)^{\frac32}}=\frac14\cdot\frac x{\sqrt{4x^2+9}}-\frac94\cdot\frac x{(4x^2+9)^{\frac32}}$$</p>
<p>Now write $4x^2+9=v$ or $\sqrt{4x^2+9}=u\implies4x^2+9=u^2$</p>
|
3,140,696 | <p>I am trying to figure out the proper definition of a small circle on a biaxial ellipsoid of revolution. One definition is the intersection of the ellipsoid with a cone emanating from the center of the ellipsoid.</p>
<p>The other way I can imagine to define it is a plane intersecting the ellipsoid in which the plane does not also intersect the center of the ellipsoid (or else it would be a great circle).</p>
<p><a href="https://en.wikipedia.org/wiki/Circle_of_a_sphere" rel="nofollow noreferrer">This Wikipedia article</a> also discusses sphere-intersection, but it is limited to spheres, and I am interested in ellipsoids.</p>
<p>Does anyone know if these two methods, cone intersection and plane intersection, result in the same curve? If not, which one is a small circle, and what would be the name of the other resulting curve?</p>
| Community | -1 | <p>The only small circles that you can get are found by intersection with a plane orthogonal to the revolution axis, which are also the intersections with a coaxial cone with apex at the center. This can be sketched in 2D:</p>
<p>Intersections with other planes yield ellipses, and intersections with non-coaxial cones are non-planar curves with no special name.</p>
<hr>
<p>Note that you can reason from a spherical model that you stretch in one direction. The small circles are formed by the intersections with planes or with cones of revolution. After stretching, the planes remain planes, but the cones become elliptic, unless their axis is in the direction of the stretching.</p>
<p><a href="https://i.stack.imgur.com/T30GD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T30GD.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/NexLt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NexLt.png" alt="enter image description here"></a></p>
|
1,523,392 | <p>This is question 2.4 in Hartshorne. Let $A$ be a ring and $(X,\mathcal{O}_X)$ a scheme. We have the associated map of sheaves $f^\#: \mathcal{O}_{\text{Spec } A} \rightarrow f_* \mathcal{O}_X$. Taking global sections we obtain a homomorphism $A \rightarrow \Gamma(X,\mathcal{O}_X)$. Thus there is a natural map $\alpha : \text{Hom}(X,\text{Spec} A) \rightarrow \text{Hom}(A,\Gamma(X,\mathcal{O}_X))$. Show $\alpha$ is bijective.</p>
<p>I figure we need to start off with the fact that we can cover $X$ with affine open $U_i$, and that a homomorphism $A \rightarrow \Gamma(X,\mathcal{O}_X)$ induces a morphism of schemes from each $U_i$ to $\text{Spec} A$ and some how glue them together. But I have no idea how to show that the induced morphisms agree on intersections. How does this work? </p>
| Shuhang | 239,526 | <p>You have the restriction maps $r_i\colon \Gamma(X)\longrightarrow\Gamma(U_i)$.
Composing with $A\to\Gamma(X)$, these give you $\operatorname{Spec}\Gamma(U_i)\longrightarrow \operatorname{Spec} A$. Gluing works because the restriction maps are compatible with each other.</p>
|
1,523,392 | <p>This is question 2.4 in Hartshorne. Let $A$ be a ring and $(X,\mathcal{O}_X)$ a scheme. We have the associated map of sheaves $f^\#: \mathcal{O}_{\text{Spec } A} \rightarrow f_* \mathcal{O}_X$. Taking global sections we obtain a homomorphism $A \rightarrow \Gamma(X,\mathcal{O}_X)$. Thus there is a natural map $\alpha : \text{Hom}(X,\text{Spec} A) \rightarrow \text{Hom}(A,\Gamma(X,\mathcal{O}_X))$. Show $\alpha$ is bijective.</p>
<p>I figure we need to start off with the fact that we can cover $X$ with affine open $U_i$, and that a homomorphism $A \rightarrow \Gamma(X,\mathcal{O}_X)$ induces a morphism of schemes from each $U_i$ to $\text{Spec} A$ and some how glue them together. But I have no idea how to show that the induced morphisms agree on intersections. How does this work? </p>
| Takumi Murayama | 116,766 | <p><strong>EDIT:</strong> I want to add that the relevant parts of EGA to compare are [<a href="http://www.numdam.org/item?id=PMIHES_1960__4__5_0" rel="noreferrer">EGAI</a>, Thm. 1.7.3], which is the analogue of [Hartshorne, II, Prop. 2.3(c)], and [<a href="http://www.numdam.org/item?id=PMIHES_1960__4__5_0" rel="noreferrer">EGAI</a>, Prop. 2.2.4], which is the analogue of your exercise. This proof is similar to the other answer.</p>
<p>[<a href="http://www.ams.org/mathscinet-getitem?mr=3075000" rel="noreferrer">EGAInew</a>, Prop. 1.6.3] is what I am paraphrasing below. It is also [<a href="http://www.numdam.org/item?id=PMIHES_1961__8__5_0" rel="noreferrer">EGAII</a>, Err$_\mathrm{I}$, Prop. 1.8.1], with attribution to Tate.</p>
<hr>
<p>I won't write down all of the details, but here is another way to approach the problem, which I think is easier, since it avoids the issue with trying to cover $X$ by open affines and glueing morphisms together. We use that the category of schemes is a full subcategory of the category of locally ringed spaces. It suffices to show
\begin{align*}
\alpha\colon \operatorname{Hom}_\mathsf{LRS}(X,\operatorname{Spec} A) &\longrightarrow \operatorname{Hom}_\mathsf{Ring}(A,\Gamma(X,\mathcal{O}_X))\\
(f,f^\#) &\longmapsto f^\#(\operatorname{Spec} A)
\end{align*}
is bijective. We construct an inverse map
$$
\rho\colon \operatorname{Hom}_\mathsf{Ring}(A,\Gamma(X,\mathcal{O}_X)) \longrightarrow \operatorname{Hom}_\mathsf{LRS}(X,\operatorname{Spec} A)
$$
as follows. Let $\varphi\colon A \to \Gamma(X,\mathcal{O}_X)$ be given. Define
$$
f \colon X \to \operatorname{Spec} A, \quad x \mapsto \{s \in A \mid \varphi(s)_x \in \mathfrak{m}_x\}
$$
where $\varphi(s)_x$ is the image of $\varphi(s)$ in the stalk $\mathcal{O}_{x,X}$ and $\mathfrak{m}_x \subseteq \mathcal{O}_{x,X}$ is the maximal ideal of $\mathcal{O}_{x,X}$. Note the set on the right is a prime ideal. The map $f$ is continuous since $f^{-1}(D(r)) = \{x \in X \mid \varphi(r)_x \notin \mathfrak{m}_x\} = D(\varphi(r))$. We define the map $f^\#$ of structure sheaves; since $D(r)$ form a basis of $\operatorname{Spec} A$, we construct the morphism on each $D(r)$ and then glue. We define $f^\#(D(r))$ to be the top arrow in the diagram
$$
\require{AMScd}
\begin{CD}
A_r @>f^\#(D(r))>\exists!> \mathcal{O}_X(f^{-1}(D(r)))\\
@AAA @AAA\\
A @>\varphi>> \mathcal{O}_X(X)
\end{CD}
$$
induced by the the universal property of localization [Atiyah-Macdonald, Prop. 3.1], where the hypotheses for the universal property hold since $\varphi(r)$ is invertible in $\mathcal{O}_X(f^{-1}(D(r)))$ by definition of $f$. The morphisms on each $D(r)$ glue together since the maps $f^\#(D(r))$ were constructed uniquely by the universal property above, hence on any intersection $D(rs)$ they must match.</p>
<p>To show $\alpha$ and $\rho$ are inverse to each other, note $\alpha \circ \rho = \mathrm{id}$ is clear by letting $r = 1$ in the diagram above. This implies $\alpha$ is surjective, so it remains to show $\alpha$ is injective. Let $\varphi\colon A \to \Gamma(X,\mathcal{O}_X)$, and let $(f,f^\#)$ such that $\alpha(f,f^\#) = \varphi$. Then, we have the diagram
$$
\begin{CD}
A_{f(x)} @>f^\#_x>> \mathcal{O}_{x,X}\\
@AAA @AAA\\
A @>\varphi>> \mathcal{O}_X(X)
\end{CD}
$$
by taking the direct limit over all open sets $D(r)$ containing a point $x$. Since the map $f_x^\#$ is local, we have $(f_x^\#)^{-1}(\mathfrak{m}_x) = \mathfrak{m}_{f(x)}$, hence $f(x) = \{s \in A \mid \varphi(s)_x \in \mathfrak{m}_x\}$ as desired by using the commutativity of the diagram. The uniqueness of $f^\#$ also follows from this diagram since if $(g,g^\#)$ is any other map $X \to \operatorname{Spec}A$ such that $\alpha(g,g^\#) = \varphi$, then $f^\#_x = g^\#_x$ for all $x$, hence they must be the same morphism.</p>
|
165,489 | <p>I have a problem solving this: find the smallest $n$ such that $1355297$ divides $10^{6n+5}-54n-46$. I tried everything using my scientific calculator, but I never got the correct results, and finally I gave up! Could you help me find the first two solutions of this equation? (Thanks.)</p>
| KennyColnago | 3,246 | <p>There is no need for a brute force search. There is also no need for compilation, which may run into the maximum compiled integer limit. A faster solution follows from a bit of theory. You can get the first 225000 solutions (up to $n\approx 306$ billion) in less than a second.</p>
<p>The original equation is ${\rm Mod}[10^{6n+5}-54n-46,p]=0$, where $p=1355297$.</p>
<p>Let $m=6n+5$ and write the equation as <code>PowerMod[10,m,p]=Mod[9m+1,p]</code>.</p>
<p>The period of the cycle of <code>PowerMod[10,x,p]</code> for $x=1,2,3,...$ equals the period of the decimal expansion of $1/p$, for prime $p$. The length of the repeating decimal of $1/p$ is <code>MultiplicativeOrder[10,p]</code> which, in this case, equals $p-1$.</p>
<pre><code>MultiplicativeOrder[10, 1355297]
</code></pre>
<blockquote>
<p>1355296</p>
</blockquote>
<p>These are all 1355296 distinct residues in the repeating cycle. This pre-calculation takes about 3/4 second.</p>
<pre><code>residues = PowerMod[10, Range[1355296], 1355297]
</code></pre>
<p>The two modular equations are:</p>
<p>$m=i+j(p-1)$ exactly, with $1\le i \le p-1$ and $j\ge 1$, from the length-$(p-1)$ cycle of residues for <code>PowerMod[10,m,p]</code></p>
<p>$9m+1=kp+r_i$ exactly, with $0\le k$, from <code>Mod[9m+1,p]=ri</code>, the i'th residue.</p>
<p>Multiply the first equation by $9$ and equate with the second to give $aj+bk=c$, where $a=9(p-1)$, $b=-p$, and $c=r_i-1-9i$. Solve this linear Diophantine equation via <code>ExtendedGCD[a,b]</code>.</p>
<pre><code>ExtendedGCD[9(p-1),-p]={g,{u,v}}
</code></pre>
<blockquote>
<p>{1, {301177, 2710591}}</p>
</blockquote>
<p>Only the first term, $u=301177$, of the second part is required.</p>
<p>There are infinitely many solutions $\{j,k\}=\{u*c+b*q,v*c-a*q\}$, where $q\ge 0$ and $a*u+b*v=1$ is found via <code>ExtendedGCD[a,b]</code>. The smallest solution occurs when <code>q=Quotient[u*c,p]</code>. The last step is to sort those solutions $m$ congruent to $5$ modulo $6$.</p>
<p>The first roughly quarter million solutions are found in about 1/4 second. The fastest of the current 3 answers was from @MichaelE2, where the first 10 solutions are found in about 0.68 s.</p>
<pre><code>Block[{p = 1355297, i, u, uc},
u = ExtendedGCD[9 (p - 1), -p][[2, 1]];
AbsoluteTiming[
i = Range[Length[residues]];
uc = u*(residues[[i]] - 1 - 9 i);
(Sort[Pick[#, Mod[#, 6], 5] &[i + (p - 1) (uc - p*Quotient[uc, p])]] - 5)/6
]]
</code></pre>
<blockquote>
<p>{0.230856, {2331259, 3776127, 5366598, 5505709, 5652052, 7317951,...,
306133783114, 306135079824, 306136273333}}</p>
</blockquote>
<p>Subsequent solutions follow by decreasing the integer <code>q=Quotient[uc,p]</code>.</p>
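The listed smallest solution can be double-checked independently of Mathematica (a cross-check I added): scan $n=0,1,2,\dots$ and update the residue of $10^{6n+5}$ incrementally instead of recomputing a full modular power each time.

```python
p = 1355297
step = pow(10, 6, p)      # multiply by this to go from exponent 6n+5 to 6(n+1)+5
r = pow(10, 5, p)         # residue of 10^5, i.e. the n = 0 case
first = None
for n in range(2_400_000):
    if r == (54 * n + 46) % p:
        first = n
        break
    r = (r * step) % p

print(first)              # 2331259, matching the first entry above
assert (pow(10, 6 * first + 5, p) - 54 * first - 46) % p == 0
```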
|
3,238,563 | <p>I have a question about a proof I saw in a book about basic algebra rules. The rule to prove is:
<span class="math-container">\begin{eqnarray*}
\frac{1}{\frac{1}{a}} = a, \quad a \in \mathbb{R}_{\ne 0}
\end{eqnarray*}</span></p>
<p>And the proof: </p>
<p><span class="math-container">\begin{eqnarray*}
1 = a \frac{1}{a} \Longrightarrow 1 = \frac{1}{a} \frac{1}{\frac{1}{a}} \Longrightarrow a = a \frac{1}{a} \frac{1}{\frac{1}{a}} \Longrightarrow \frac{1}{\frac{1}{a}} = a
\end{eqnarray*}</span></p>
<p>Why is it allowed to just replace <span class="math-container">$a$</span> with <span class="math-container">$1/a$</span>? What's the explanation behind it? </p>
| LarrySnyder610 | 663,638 | <p>I don't think they are replacing <span class="math-container">$a \to \frac1a$</span>. I think the logic in the first implication is that they are taking the reciprocal of both sides. So <span class="math-container">$1/1\to 1$</span> on the LHS, and on the RHS,
<span class="math-container">$$a \to \frac1a \text{ and } \frac1a \to \frac{1}{\frac1a}.$$</span>
Similarly, in the next implication, they multiply both sides by <span class="math-container">$a$</span>.</p>
|
3,238,563 | <p>I have a question about a proof I saw in a book about basic algeba rules. The rule to prove is:
<span class="math-container">\begin{eqnarray*}
\frac{1}{\frac{1}{a}} = a, \quad a \in \mathbb{R}_{\ne 0}
\end{eqnarray*}</span></p>
<p>And the proof: </p>
<p><span class="math-container">\begin{eqnarray*}
1 = a \frac{1}{a} \Longrightarrow 1 = \frac{1}{a} \frac{1}{\frac{1}{a}} \Longrightarrow a = a \frac{1}{a} \frac{1}{\frac{1}{a}} \Longrightarrow \frac{1}{\frac{1}{a}} = a
\end{eqnarray*}</span></p>
<p>Why is it allowed to just replace <span class="math-container">$a$</span> with <span class="math-container">$1/a$</span>? What's the explanation behind it? </p>
| zwim | 399,263 | <p>I'm not terribly fond of this proof.</p>
<p>I would rather go on defining the inverse:</p>
<ul>
<li><span class="math-container">$y$</span> is the inverse of <span class="math-container">$x\iff xy=1$</span> then we write it <span class="math-container">$y=\frac 1x$</span>.</li>
<li>since everything is symmetrical <span class="math-container">$x$</span> is also the inverse of <span class="math-container">$y=\frac 1x$</span>.</li>
</ul>
<p>From there on, we have <span class="math-container">$\frac 1{\frac 1a}$</span> is the inverse of <span class="math-container">$\frac 1a$</span> which is <span class="math-container">$a$</span>.</p>
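The identity can also be exercised with exact rational arithmetic (a small check I added):

```python
from fractions import Fraction

# Taking the reciprocal twice returns the original number, mirroring 1/(1/a) = a for a != 0.
for a in (Fraction(3, 7), Fraction(-5, 2), Fraction(1), Fraction(10, 3)):
    assert 1 / (1 / a) == a
print("1/(1/a) == a holds exactly for the sample rationals")
```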
|
2,929,094 | <p>Differentiation of
<span class="math-container">$\int_{a(x)}^{b(x)} f(x,t)\,\text{d}t$</span> is done by Leibniz's integral rule:
<span class="math-container">$$\frac{\text{d}}{\text{d}x} \left (\int_{a(x)}^{b(x)} f(x,t)\,\text{d}t \right )= f\big(x,b(x)\big)\cdot \frac{\text{d}}{\text{d}x} b(x) - f\big(x,a(x)\big)\cdot \frac{\text{d}}{\text{d}x} a(x) + \int_{a(x)}^{b(x)}\frac{\partial}{\partial x} f(x,t) \,\text{d}t,$$</span></p>
<p>if <span class="math-container">$-\infty<a(x),b(x)<\infty$</span>.</p>
<p>Can we say anything in general about the derivative of the powers of the <span class="math-container">$\int_{a(x)}^{b(x)} f(x,t)\,\text{d}t$</span> in a similar fashion? So for example
<span class="math-container">$$\frac{\text{d}}{\text{d}x} \left(\left(\int_{a(x)}^{b(x)} f(x,t)\,\text{d}t\right)^2\right)=~?$$</span></p>
| nonuser | 463,553 | <em>Nonstandard, simple, a bit overpowered, but most creative solution:</em>
<hr>
<p>Consider a homothety <span class="math-container">$H_1$</span> with center at <span class="math-container">$O$</span> which takes <span class="math-container">$A\mapsto C$</span>. Then it takes <span class="math-container">$F\mapsto B$</span>. </p>
<p>Also consider a homothety <span class="math-container">$H_2$</span> with center at <span class="math-container">$O$</span> which takes <span class="math-container">$E\mapsto A$</span>. Then it takes <span class="math-container">$B\mapsto D$</span>.</p>
<hr>
<p>Then composition <span class="math-container">$H_2\circ H_1$</span> takes <span class="math-container">$F\to B$</span> and composition <span class="math-container">$H_1\circ H_2$</span> takes <span class="math-container">$E\to C$</span>. </p>
<p>Now since <span class="math-container">$H_1$</span> and <span class="math-container">$H_2$</span> have the same center they commute: <span class="math-container">$$H_1\circ H_2= H_2\circ H_1$$</span> and call this composition <span class="math-container">$H$</span>. So <span class="math-container">$H$</span> takes <span class="math-container">$F\mapsto B$</span> and <span class="math-container">$E\mapsto C$</span> so it takes line <span class="math-container">$EF$</span> to line <span class="math-container">$BC$</span>, so <span class="math-container">$EF||BC$</span>. </p>
|
187,432 | <p>Can we evaluate the integral using <a href="http://en.wikipedia.org/wiki/Jordan%27s_lemma#Application_of_Jordan.27s_lemma">Jordan's lemma</a>?
$$ \int_{-\infty}^{\infty} {\sin ^2 (x) \over x^2 (x^2 + 1)}\:dx$$</p>
<p>What do we do if a removable singularity occurs on the path of integration?</p>
| robjohn | 13,854 | <p>Using $\sin^2(z)=\frac12(1-\cos(2z))$, you should be able to handle this in much the same way as <a href="https://math.stackexchange.com/questions/160022/integration-by-means-of-complex-analysis/160099#160099">this answer</a>.</p>
<hr>
<p><strong>Details</strong> (modified from the answer mentioned above)</p>
<p>Since $\lim\limits_{z\to0}\frac{1-\cos(2z)}{2z^2}=1$, the singularity of the integrand near $z=0$ is removable. Therefore, since the integrand vanishes for $z$ within $\frac12$ of the real axis as $|z|\to\infty$ and there are no singularities within $\frac12$ of the real axis, the integral does not change when shifting the path of integration from $z=t$ to $z=t-\frac{i}{2}$.</p>
<p>Now we can break up the integral as
$$
\int_{-\infty-i/2}^{\infty-i/2}\frac{1-\cos(2z)}{2z^2(z^2+1)}\,\mathrm{d}z
=\frac14\int_{\gamma^+}\frac{1-e^{2iz}}{z^2(z^2+1)}\mathrm{d}z
+\frac14\int_{\gamma^-}\frac{1-e^{-2iz}}{z^2(z^2+1)}\mathrm{d}z\tag{1}
$$
where $\gamma^+$ and $\gamma^-$ are as depicted below:</p>
<p>$\hspace{4.6cm}$<img src="https://i.stack.imgur.com/cg4YF.png" alt="path of integration"></p>
<p>$\gamma^+$ circles two singularities ($z=0$ and $z=i$) clockwise, and $\gamma^-$ circles one singularity ($z=-i$) counter-clockwise.</p>
<p>All of the singularities are simple, so to get the residue at $z=z_0$, we just need to multiply by $z-z_0$ and take $\displaystyle\lim_{z\to z_0}$.</p>
<p>At $z=0$ the residue of $\displaystyle\frac{1-e^{2iz}}{z^2(z^2+1)}$ is $-2i$</p>
<p>At $z=i$ the residue of $\displaystyle\frac{1-e^{2iz}}{z^2(z^2+1)}$ is $\displaystyle\frac{1-e^{-2}}{-2i}$</p>
<p>At $z=-i$ the residue of $\displaystyle\frac{1-e^{-2iz}}{z^2(z^2+1)}$ is $\displaystyle\frac{1-e^{-2}}{2i}$</p>
<p>Putting these together with $(1)$ yields
$$
\begin{align}
\int_{-\infty}^\infty\frac{1-\cos(2z)}{2z^2(z^2+1)}\,\mathrm{d}z
&=\frac{2\pi i}{4}\left(-2i+\frac{1-e^{-2}}{-2i}\right)-\frac{2\pi i}{4}\left(\frac{1-e^{-2}}{2i}\right)\\
&=\frac{\pi}{2}\left(2-\frac{1-e^{-2}}{2}-\frac{1-e^{-2}}{2}\right)\\
&=\frac\pi2+\frac{\pi}{2e^2}\tag{2}
\end{align}
$$</p>
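<p>The closed form can also be double-checked numerically. The following throwaway script (a composite Simpson rule on a large finite window, with the removable singularity at the origin filled in by hand) is purely illustrative:</p>

```python
import math

def f(x):
    # sin^2(x) / (x^2 (x^2 + 1)), with the removable singularity at 0 filled in
    if x == 0.0:
        return 1.0
    return (math.sin(x) / x) ** 2 / (x * x + 1)

def simpson(g, a, b, n):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

approx = simpson(f, -200.0, 200.0, 200_000)  # the tail beyond |x| = 200 is ~1e-8
exact = math.pi / 2 + math.pi / (2 * math.e ** 2)
print(approx, exact)  # both ≈ 1.78338
```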
|
4,247,268 | <p><strong>Q:</strong></p>
<blockquote>
<p>If <span class="math-container">$f\left(x\right)=-\frac{x\left|x\right|}{1+x^{2}}$</span> then find <span class="math-container">$f^{-1}\left(x\right)$</span></p>
</blockquote>
<p>My approach:</p>
<ol>
<li>Dividing the cases when <span class="math-container">$x\ge0$</span> and when <span class="math-container">$x\le0$</span> to break free of modulus.</li>
<li>Re-arranging the terms to get the expression of x in terms of y.</li>
<li>Here's what I got:</li>
</ol>
<blockquote>
<p>When <span class="math-container">$x\ge0$</span>:
<span class="math-container">$$x=\sqrt{\frac{-y}{1+y}}, \qquad y\in\left(-1,0\right].$$</span> Now, replacing <span class="math-container">$y\to x$</span>,
we get <span class="math-container">$f^{-1}\left(x\right)=\sqrt{-\frac{x}{1+x}}$</span> when <span class="math-container">$x\le0$</span></p>
</blockquote>
<blockquote>
<p>When <span class="math-container">$x\le0$</span>:
<span class="math-container">$$x=-\sqrt{\frac{y}{1-y}}$$</span>
when <span class="math-container">$y\in\left[0,1\right)$</span>. Now replacing <span class="math-container">$y\to x$</span>,
we get <span class="math-container">$f^{-1}\left(x\right)=-\sqrt{\frac{x}{1-x}}\ ;\ x\ge0$</span></p>
</blockquote>
<p>But I have to show that the inverse function <span class="math-container">$f^{-1}\left(x\right)$</span>=<span class="math-container">$\operatorname{sgn}\left(-x\right)\sqrt{\frac{\left|x\right|}{1-\left|x\right|}}$</span></p>
<p>This is where I'm getting stuck. I am unable to convert my answer into this form, mainly because I'm not able to convert the cases into this expression. Is there any step-by-step systematic way in which I can do the same? Any help or guide will be greatly appreciated.</p>
<p><strong>Edit:</strong></p>
<p>Since we got <span class="math-container">$f^{-1}\left(x\right)$</span> and the cases:</p>
<p><span class="math-container">$f^{-1}\left(x\right)=-\sqrt{\frac{x}{1-x}}\ ;\ x\ge0$</span> and
<span class="math-container">$f^{-1}\left(x\right)=\sqrt{-\frac{x}{1+x}}$</span> when <span class="math-container">$x\le0$</span>,</p>
<p>To write it in the given form we need something that gives a minus sign when <span class="math-container">$x>0$</span>, so we use <span class="math-container">$\operatorname{sgn}(-x)$</span>; the rest is just the use of the modulus so that we can state the answer in one general expression.</p>
| Kavi Rama Murthy | 142,385 | <p>What you have done is correct. All you have to do is switch <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. You are writing <span class="math-container">$f^{-1}(y)$</span> in terms of <span class="math-container">$y$</span>, so change <span class="math-container">$y$</span> to <span class="math-container">$x$</span> to get <span class="math-container">$f^{-1}(x)$</span>. Note that <span class="math-container">$f(x)$</span> is positive precisely when <span class="math-container">$x$</span> is negative.</p>
<p>However you can also avoid considering the cases <span class="math-container">$x \geq 0$</span> and <span class="math-container">$x \leq 0$</span> by taking absolute values:</p>
<p><span class="math-container">$|f(x)|=\frac {x^{2}} {1+x^{2}}$</span>, which gives <span class="math-container">$1+x^{2}=\frac 1{1-|f(x)|}$</span> and hence <span class="math-container">$|x|=\sqrt{\frac {|f(x)|} {1-|f(x)|}}$</span>. Now recover the sign of <span class="math-container">$x$</span> from <span class="math-container">$f(x)=-\frac {x|x|} {1+x^{2}}$</span>.</p>
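<p>As a quick illustrative check (the helper names below are made up for this snippet), the candidate inverse in the target form really does undo <span class="math-container">$f$</span> numerically:</p>

```python
import math

def f(x):
    return -x * abs(x) / (1 + x * x)

def sgn(x):
    return (x > 0) - (x < 0)

def f_inv(y):
    # candidate inverse in the target form: sgn(-y) * sqrt(|y| / (1 - |y|))
    return sgn(-y) * math.sqrt(abs(y) / (1 - abs(y)))

for x in (-3.0, -0.5, 0.0, 0.7, 2.5):
    assert abs(f_inv(f(x)) - x) < 1e-9
print("round trip f_inv(f(x)) == x checks out")
```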
|
2,716,363 | <p>I understand the core principles of how to prove by induction and how series summations work. However I am struggling to rearrange the equation during the final (induction) step.</p>
<p>Prove by induction for all positive integers n,</p>
<p>$$\sum_{r=1}^n r^3 = \frac{1}{4}n^2(n+1)^2$$</p>
<p>After both proving for $n=1$ and assuming it holds true for $n=k$:</p>
<p>$$\sum_{r=1}^{k+1} r^3 = \frac{1}{4}k^2(k+1)^2+(k+1)^3$$</p>
<p>However I am unsure of how to proceed from here, the textbook says that the next step is to rearrange to give:</p>
<p>$$\sum_{r=1}^{k+1} r^3 = \frac{1}{4}(k+1)^2(k^2+4(k+1))$$</p>
<p>However I don't understand how they did this, can someone please clarify what they have done or suggest an alternative method to rearrange this equation to prove that the statement holds true for $k+1$ to give:</p>
<p>$$\sum_{r=1}^{k+1} r^3 = \frac{1}{4}(k+1)^2((k+1)+1)^2$$</p>
| ℋolo | 471,959 | <p>$$\begin{align}\frac{1}{4}k^2(k+1)^2+(k+1)^3&=\frac{1}{4}k^2(k+1)^2+(k+1)(k+1)^2\\&=(k+1)^2\left(\frac14k^2+k+1\right)\\&=(k+1)^2\left(\frac14k^2+\frac44(k+1)\right)\\&=(k+1)^2\left(\frac14\left(k^2+4(k+1)\right)\right)\\&=\frac14(k+1)^2\left(k^2+4(k+1)\right)\end{align}$$</p>
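<p>The closed form itself is easy to sanity-check by brute force (an illustrative one-off script, separate from the induction proof):</p>

```python
# Check sum_{r=1}^n r^3 == n^2 (n+1)^2 / 4 for n = 1..199.
def sum_cubes(n):
    return sum(r ** 3 for r in range(1, n + 1))

def closed_form(n):
    return n * n * (n + 1) * (n + 1) // 4

assert all(sum_cubes(n) == closed_form(n) for n in range(1, 200))
print("verified for n = 1..199")
```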
|
96,864 | <p>I don't think that this is the case. I am reading over one of my professor's proofs, and he seems to use this fact. Here is the proof:
Let $B$ be a Boolean algebra, and suppose that $X$ is a dense subset of $B$ in the sense that every nonzero element of $B$ is above a nonzero element of $X$. Let $p$ be an element in $B$. The proof is to show that $p$ is the supremum of the set of all elements in $X$ that are below $p$. Let $Y$ be the set of elements in $X$ that are below $p$. It is to be shown that $p$ is the supremum of $Y$. Clearly, $p$ is an upper bound of $Y$. If $p$ is not the least upper bound of $Y$, then there must be an element $q\in B$ such that $q<p$ and $q$ is an upper bound for $Y$ ...etc.</p>
<p>I do not see how this last sentence follows. I do see that if $p$ is not the least upper bound of $Y$, then there is some upper bound $q$ of $Y$ such that $p$ is NOT less than or equal to $q$. But, since we have only a partial order, and our algebra is not necessarily complete, I do not see how we can show anything else.
So, is my professor's proof wrong, or am I just missing something fundamental?</p>
| Qiaochu Yuan | 232 | <p>The set of upper bounds is closed under intersection, so $p \cap q$ is an upper bound less than $p$. </p>
|
96,864 | <p>I don't think that this is the case. I am reading over one of my professor's proofs, and he seems to use this fact. Here is the proof:
Let $B$ be a Boolean algebra, and suppose that $X$ is a dense subset of $B$ in the sense that every nonzero element of $B$ is above a nonzero element of $X$. Let $p$ be an element in $B$. The proof is to show that $p$ is the supremum of the set of all elements in $X$ that are below $p$. Let $Y$ be the set of elements in $X$ that are below $p$. It is to be shown that $p$ is the supremum of $Y$. Clearly, $p$ is an upper bound of $Y$. If $p$ is not the least upper bound of $Y$, then there must be an element $q\in B$ such that $q<p$ and $q$ is an upper bound for $Y$ ...etc.</p>
<p>I do not see how this last sentence follows. I do see that if $p$ is not the least upper bound of $Y$, then there is some upper bound $q$ of $Y$ such that $p$ is NOT less than or equal to $q$. But, since we have only a partial order, and our algebra is not necessarily complete, I do not see how we can show anything else.
So, is my professor's proof wrong, or am I just missing something fundamental?</p>
| Michael Greinecker | 21,674 | <p>Let $p$ and $q$ be upper bounds of $Y$. Then $p\wedge q$ is an upper bound of $Y$ and $p\wedge q\leq p$ and $p\wedge q\leq q$. Now if $p\wedge q=p$ (that is, $p\leq q$) for every upper bound $q$ of $Y$, then $p$ is the least upper bound. Otherwise $p\wedge q<p$ for some upper bound $q$, and $p\wedge q$ is a strictly smaller upper bound of $Y$. </p>
|
5,528 | <p>Let H be a subgroup of G. (We can assume G finite if it helps.) A complement of H in G is a subgroup K of G such that HK = G and |H∩K|=1. Equivalently, a complement is a transversal of H (a set containing one representative from each coset of H) that happens to be a group.</p>
<p>Contrary to my initial naive expectation, it is neither necessary nor sufficient that one of H and K be normal. I ran across both of the following counterexamples in Dummit and Foote:</p>
<ul>
<li><p>It is not necessary that H or K be normal. An example is S<sub>4</sub> which can be written as the product of H=⟨(1234), (12)(34)⟩≅D<sub>8</sub> and K=⟨(123)⟩≅ℤ<sub>3</sub>, neither of which is normal in S<sub>4</sub>.</p></li>
<li><p>It is not sufficient that one of H or K be normal. An example is Q<sub>8</sub> which has a normal subgroup isomorphic to Z<sub>4</sub> (generated by i, say), but which cannot be written as the product of that subgroup and a subgroup of order 2.</p></li>
</ul>
<p>Are there any general statements about when a subgroup has a complement? The <A href="http://en.wikipedia.org/wiki/Complement_%28group_theory%29">Wikipedia page</A> doesn't have much to say. In practice, there are many situations where one wants to work with a transversal of a subgroup, and it's nice when one can find a transversal that is also a group. Failing that, one can ask for the smallest subgroup of G containing a transversal of H.</p>
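<p>As an aside, the S<sub>4</sub> example above can be checked directly by machine. The following illustrative script builds the subgroups as sets of permutation tuples acting on {0,1,2,3}:</p>

```python
from itertools import product

def compose(p, q):  # (p ∘ q)(i) = p[q[i]]: apply q first, then p
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def closure(gens):  # subgroup generated by gens
    e = tuple(range(4))
    elems, frontier = {e}, [e]
    while frontier:
        nxt = []
        for a in frontier:
            for g in gens:
                b = compose(g, a)
                if b not in elems:
                    elems.add(b)
                    nxt.append(b)
        frontier = nxt
    return elems

r = (1, 2, 3, 0)   # the 4-cycle (1 2 3 4), written 0-based
s = (1, 0, 3, 2)   # (1 2)(3 4)
t = (1, 2, 0, 3)   # the 3-cycle (1 2 3)

H = closure([r, s])   # dihedral of order 8
K = closure([t])      # cyclic of order 3
G = closure([r, t])   # all of S4

def is_normal(N):
    return all(compose(compose(g, n), inverse(g)) in N for g in G for n in N)

print(len(H), len(K), len(G))  # 8 3 24
assert H & K == {tuple(range(4))}
assert {compose(h, k) for h, k in product(H, K)} == G
assert not is_normal(H) and not is_normal(K)
```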
| Noah Snyder | 22 | <p>There's an excellent online resource for group theory definitions and theorems called <a href="http://groupprops.subwiki.org/wiki/Main_Page" rel="nofollow">"Groupprops, The Group Properties Wiki."</a> It's still in pre-alpha, but it has a lot of stuff and hopefully will continue to improve.</p>
<p>In particular, there are a bunch of theorems for figuring out when a p-group has a normal complement which you can find linked from <a href="http://groupprops.subwiki.org/wiki/Category%3ANormal_p-complement_theorems" rel="nofollow">this page</a>.</p>
|
960,865 | <p>How can I prove 2 is a primitive root mod 37, without calculating all powers of 2 mod 37?</p>
| Peter Taylor | 5,676 | <p>The order of any element in an order 36 group is a factor of 36 (Lagrange's theorem), so it suffices to check $2^a \not\equiv 1\pmod{37}$ for $a \in \{1, 2, 3, 4, 6, 9, 12, 18\}$.</p>
<p>But in fact that can be reduced further: if $2^a \equiv 1\pmod{37}$ then $2^{ab} \equiv 1\pmod{37}$, so the tests can be reduced to $a \in \{12, 18\}$ (every proper divisor of $36$ divides $12$ or $18$).</p>
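<p>These two modular exponentiations are trivial to carry out by machine; an illustrative check (Python's three-argument built-in <code>pow</code> does modular exponentiation):</p>

```python
# 2 is a primitive root mod 37 iff its order is 36; by the argument above it
# suffices that 2^12 and 2^18 are not 1 mod 37.
assert pow(2, 12, 37) != 1 and pow(2, 18, 37) != 1
# Cross-check by computing the order directly:
order = next(k for k in range(1, 37) if pow(2, k, 37) == 1)
print(order)  # 36
```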
|
636,089 | <p>Let the function
$$
f(x) = \begin{cases} ax^2 & \text{for } x\in [ 0, 1], \\0 & \text{for } x\notin [0,1].\end{cases}
$$ Find $a$, such that the function can describe a probability density function. Calculate the expected value, standard deviation and CDF of a random variable X of such distribution.</p>
<p>So thanks to the community, I now can solve the former part of such excercises, in this case by $\int_0^1 ax^2 \, dx = \left.\frac{ax^3}{3}\right|_0^1 = \frac a 3$ so that the function I'm looking for is $f(x)=3x^2$. Still, I'm struggling with finding the descriptive properties of this. Again, it's not specifically about this problem - I'm trying to learn to solve such class of problems so the whole former part may be different and we may end up with different function to work with.</p>
<p>So as for the standard deviation, I believe I should find a mean value $m$ and then a definite integral (at least that's what the notes suggest?) so that I end up with $$\int_{- \infty}^\infty (x-m)^2 \cdot 3x^2 \,\mathrm{d}x$$</p>
<p>As for the CDF and expected value, I'm clueless though. In the example I have in the notes, the function was $\frac{3}{2}x^2$ and for the expected value there is simply some $E(X)=n\cdot m = 0$ written while the CDF here is put as $D^2 X = n \cdot 0.6 = 6 \leftrightarrow D(X) = \sqrt{6}$ and I can't make head or tail of this. Could you please help? </p>
| NasuSama | 67,036 | <p><strong>Expected Value</strong></p>
<p>In general, the expected value is determined by this following expression</p>
<p>$$\mathrm{E}(X) = \int_{-\infty}^{\infty} xf(x)\,dx$$</p>
<p>where $f(x)$ is the probability density function. For your problem, the expected value is</p>
<p>$$\begin{aligned}
\mathrm{E}(X) &= \int_0^1 x\cdot 3x^2\,dx\\
&= \int_0^1 3x^3\,dx\\
&= \left.\dfrac{3}{4}x^4\right\vert_{x = 0}^{x = 1}\\
&= \dfrac{3}{4}
\end{aligned}$$</p>
<p><strong>Variance</strong></p>
<p>Recall that the variance is</p>
<p>$$\mathrm{Var}(X) = \mathrm{E}(X^2) - (\mathrm{E}(X))^2$$</p>
<p>We already know $\mathrm{E}(X)$. We then need to compute $\mathrm{E}(X^2)$, which is</p>
<p>$$\begin{aligned}
\mathrm{E}(X^2) &= \int_0^1 x^2\cdot 3x^2\,dx\\
&= \int_0^1 3x^4\,dx\\
&= \left.\dfrac{3}{5}x^5\right\vert_{x = 0}^1\\
&= \dfrac{3}{5}
\end{aligned}$$</p>
<p>So</p>
<p>$$\mathrm{Var}(X) = \dfrac{3}{5} - \left(\dfrac{3}{4} \right)^2 = \dfrac{3}{80}$$</p>
<p><strong>Standard Deviation</strong></p>
<p>Thus, the standard deviation is</p>
<p>$$\sigma = \sqrt{\mathrm{Var}(X)} = \sqrt{\dfrac{3}{80}}$$</p>
|
3,514,659 | <p>Let <span class="math-container">$f:[a, b]\rightarrow \mathbb{R}$</span> be continuous and let <span class="math-container">$F:[a, b]\rightarrow \mathbb{R}$</span> be defined by <span class="math-container">$F(x)=\int_a^x f(t) \,dt$</span>. Then <span class="math-container">$F$</span> is differentiable, with derivative <span class="math-container">$f$</span>.</p>
<p>Now define <span class="math-container">$G$</span> on <span class="math-container">$[a, b]$</span> by <span class="math-container">$G(x)=\int_x^b f(t)\, dt$</span>.</p>
<p>It can be shown, <strong>following the proof of the above theorem</strong>, that <span class="math-container">$G$</span> is differentiable with derivative <span class="math-container">$-f$</span> (is this correct?). However, I wanted to see how the assertion about <span class="math-container">$G$</span> follows from that of <span class="math-container">$F$</span> without repeating the proof given for <span class="math-container">$F$</span>. Any hint? </p>
| Lee Mosher | 26,501 | <p>Here's a hint: start with the fact that if <span class="math-container">$a \le x \le b$</span> then
<span class="math-container">$$\int_a^b f(t) \, dt = \int_a^x f(t) \, dt + \int_x^b f(t) \, dt
$$</span>
which is a simple consequence of the definition of definite integrals.</p>
|
3,514,659 | <p>Let <span class="math-container">$f:[a, b]\rightarrow \mathbb{R}$</span> be continuous and let <span class="math-container">$F:[a, b]\rightarrow \mathbb{R}$</span> be defined by <span class="math-container">$F(x)=\int_a^x f(t) \,dt$</span>. Then <span class="math-container">$F$</span> is differentiable, with derivative <span class="math-container">$f$</span>.</p>
<p>Now define <span class="math-container">$G$</span> on <span class="math-container">$[a, b]$</span> by <span class="math-container">$G(x)=\int_x^b f(t)\, dt$</span>.</p>
<p>It can be shown, <strong>following the proof of the above theorem</strong>, that <span class="math-container">$G$</span> is differentiable with derivative <span class="math-container">$-f$</span> (is this correct?). However, I wanted to see how the assertion about <span class="math-container">$G$</span> follows from that of <span class="math-container">$F$</span> without repeating the proof given for <span class="math-container">$F$</span>. Any hint? </p>
| gt6989b | 16,192 | <p><strong>HINT</strong></p>
<p>Note that
<span class="math-container">$$
A = \int_a^b f(t) dt
$$</span>
is a constant and
<span class="math-container">$$
G(x) = \int_x^b f(t) dt = \int_a^b f(t) dt - \int_a^x f(t) dt = A - F(x)
$$</span></p>
|
343,281 | <p>Consider the following note written by Gerhard Gentzen in early 1932, on the onset of his work on a consistency proof for arithmetic:</p>
<blockquote>
<p>The axioms of arithmetic are obviously correct, and the principles of proof obviously preserve correctness. Why cannot one simply conclude consistency, i.e., what is the meaning of the second incompleteness theorem, the one by which consistency of arithmetic cannot be proved by arithmetic means? Where is the Godel-point hiding?</p>
</blockquote>
<p>The first question one might ask when reading this statement (plus three questions) is, how is it that Gentzen concludes that, "The axioms of arithmetic [read 'arithmetic' as meaning, <span class="math-container">$PA$</span>--my comment] are obviously correct."? Well, one might infer that Gentzen infers that "The axioms of arithmetic are obviously correct" by virtue of the fact that the axioms of <span class="math-container">$PA$</span> satisfy the following structure:</p>
<p><span class="math-container">$$\langle \mathfrak N, +, \times, = \rangle$$</span></p>
<p>where <span class="math-container">$\mathfrak N = \{ |, ||, |||,\ldots\},$</span> '<span class="math-container">$+$</span>' as meaning concatenation, '<span class="math-container">$\times$</span>' as meaning the Hilbert-Bernays definition of multiplication (e.g., || <span class="math-container">$\times$</span> ||| means replacing each | in || by |||, i.e., ||||||), and '=' as simply meaning equality as defined by the axioms of equality, i.e. for the axiom of equality 'a=a' one has, for the elements of <span class="math-container">$\mathfrak N$</span>, the following equalities:</p>
<p>{ |=|, ||=||, |||=|||,...} [given this, and the closure of <span class="math-container">$\mathfrak N$</span> under <span class="math-container">$+$</span> and <span class="math-container">$\times$</span>, how is it possible that <span class="math-container">$PA$</span>, satisfying this structure, could ever derive, say, '||=|||'?]</p>
<p>In his answer to Noah Schweber's mathoverflow question, "What are some proofs of Godel's Theorem which are <em>essentially different</em> from the original proof?", Ron Maimon mentions the "Jech/Woodin Set theory model proof". In regards to Gentzen's point of view (at least in early 1932), it might behoove one to take a close look at Prof. Jech's three-page paper (<em>Proceedings of the American Mathematical Society</em>, Volume 121, Number 1, May 1994, pp. 311-313).</p>
<p>Why? Because of "Remark 2" on pg. 312 which states:</p>
<blockquote>
<p>Even though our proof of Godel's Theorem [Second Incompleteness Theorem--my comment] uses the Completeness Theorem, it can be modified to apply to weaker theories such as Peano Arithmetic (<span class="math-container">$PA$</span>). To prove that <span class="math-container">$PA$</span> does not prove its own consistency, (unless it is inconsistent), we argue as follows:</p>
<p>Assume that <span class="math-container">$PA$</span> is consistent and that "<span class="math-container">$PA$</span> is consistent" is provable in <span class="math-container">$PA$</span>. There is a conservative extension <span class="math-container">$\Gamma$</span> [let it be <span class="math-container">$ACA_0$</span> as in Noah Schweber's answer--my comment] of <span class="math-container">$PA$</span> in which the Completeness Theorem is provable [Theorem 5.5, p. 443, of Takeuti's <em>Proof Theory</em>, 2nd ed.--my expansion of his comment by his reference], and moreover, <span class="math-container">$PA$</span> <span class="math-container">$\vdash$</span> (<span class="math-container">$\Gamma$</span> is a conservative extension of <span class="math-container">$PA$</span>). Therefore, <span class="math-container">$\Gamma$</span> <span class="math-container">$\vdash$</span> (<span class="math-container">$\Gamma$</span> is a conservative extension of a consistent theory) and thus proves its own consistency. Consequently, <span class="math-container">$\Gamma$</span> proves that <span class="math-container">$\Gamma$</span> has a model.</p>
<p>Now let <span class="math-container">$\Sigma$</span> be a sufficiently strong finite subset of <span class="math-container">$\Gamma$</span> that proves that <span class="math-container">$\Sigma$</span> has a model; the proof above leads to a contradiction.</p>
</blockquote>
<p>Is this where the Godel-point is hiding with regards to Gentzen's statement and first question?</p>
<blockquote>
<p>The axioms of arithmetic are obviously correct, and the principles of proof obviously preserve correctness. Why cannot one simply conclude consistency....?</p>
</blockquote>
<p>Would the 'Godel-point' in question be, following Prof. Jech's Main Theorem,</p>
<blockquote>
<p>It is unprovable in <span class="math-container">$ACA_0$</span> (unless <span class="math-container">$ACA_0$</span> is inconsistent) that there exists a model of <span class="math-container">$PA$</span>. ?</p>
</blockquote>
<p>Now as regards Noah Schweber's very nice answer, I have two questions regarding the following passage</p>
<blockquote>
<p>...However, we are <em>not</em> guaranteed that our model <span class="math-container">$\mathfrak M$</span> [of <span class="math-container">$ACA_0$</span>-- my comment] thinks that its first-order part actually satisfies <span class="math-container">$PA$</span>. That is, the "obvious truth" of the <span class="math-container">$PA$</span> axioms is not actually that obvious.</p>
<p>This is an example of a failure on an <span class="math-container">$\omega$</span>-rule: while for each axiom <span class="math-container">$\varphi$</span> of <span class="math-container">$PA$</span> we do in fact have "<span class="math-container">$Num$</span>(<span class="math-container">$\mathfrak M$</span>) <span class="math-container">$\vDash$</span> <span class="math-container">$\varphi$</span>" (appropriately phrased) is true in <span class="math-container">$\mathfrak M$</span>, we do <em>not</em> get from this that "<span class="math-container">$Num$</span>(<span class="math-container">$\mathfrak M$</span>) <span class="math-container">$\vDash$</span> each <span class="math-container">$PA$</span> axiom" is true in <span class="math-container">$\mathfrak M$</span>. And this is just like how being able to check each individual derivation in <span class="math-container">$PA$</span> doesn't give us a way to check all derivations at once, so it really shouldn't be suprising.</p>
</blockquote>
<ol>
<li>How does the above passage relate to Gentzen's note, especially the phrase</li>
</ol>
<blockquote>
<p>That is, the "obvious truth" of the <span class="math-container">$PA$</span> axioms is not actually that obvious.</p>
</blockquote>
<ol start="2">
<li>What perspective is Gentzen taking in his note (external or internal) and why does it matter what <span class="math-container">$\mathfrak M$</span> 'thinks' (so to speak) as regards Gentzen's note?</li>
</ol>
<p>Now two questions for Panu Raatikainen: as regards your statement</p>
<blockquote>
<p>In general, we just cannot see that they [the theories "which include elementary arithmetic and happen to be consistent"--my paraphrase of your earlier comment] are consistent.</p>
</blockquote>
<ol>
<li><p>Why not?</p></li>
<li><p>What was Gentzen 'seeing' when he made his statement ("The axioms of arithmetic are obviously correct, and the principles of proof obviously preserve correctness"), and why was his 'seeing' incorrect (i.e., leading to inconsistency)?</p></li>
</ol>
| Thomas Benjamin | 20,597 | <p>In his answer to David Roberts' mathoverflow question, "<span class="math-container">$Z_2$</span> versus second-order <span class="math-container">$PA$</span>" (question 97077), Prof. Ali Enayat writes (under the subheading "Regarding the second question"):</p>
<blockquote>
<p>One way to see this is based on an old result (noticed by a number of people, including Takeuti and Feferman) that <span class="math-container">$ACA$</span> is equiconsistent with an extension <span class="math-container">$PA$</span>(<span class="math-container">$T$</span>) of <span class="math-container">$PA$</span> with a distinguished predicate <span class="math-container">$T$</span> that codes up the full truth predicate for the ambient model of arithmetic [which I will assume (for want of a better assumption) is the intended model for <span class="math-container">$PA$</span>--my comment]. Note that <span class="math-container">$PA$</span>(<span class="math-container">$T$</span>) includes induction in the extended language of arithmetic augmented by the predicate <span class="math-container">$T$</span> [footnote 1 on pg. 2 of Enayat and Pakhomov's arXiv preprint, "Truth, Disjunction, and Induction" (arXiv:1805.09890v1 [math.LO] 24 May 2018) seems to be a restatement of this-- my comment]. </p>
</blockquote>
<p>If somehow Gentzen was implicitly working in <span class="math-container">$PA$</span>(<span class="math-container">$T$</span>)(without realizing it, of course), it would explain the viewpoint expressed in the aforementioned note.</p>
<p>The question now remains as to whether in fact he was, which is a question to ask a historian of Gentzen's work (von Plato, perhaps)?</p>
|
120,525 | <p>I'd like to check with my colleagues whether I have correctly understood "embedded resolution of singularities". </p>
<p>Let $X$ be a nonsingular projective variety over $\mathbf C$ and let $D$ be a "nice" divisor on $X$, say $D$ has strictly normal crossings. (Maybe we could just take $D$ to be a closed subscheme?)</p>
<p>Then, has the following statement been proven? And what is a "good" reference?</p>
<p>There exists a projective birational surjective morphism $\psi:Y\to X$ with $Y$ a nonsingular projective variety over $\mathbf C$ and the inverse image of $D$ in $Y$ a nonsingular projective variety over $\mathbf C$ of codimension one in $Y$?</p>
<p>I'm worried about whether I have correctly understood this statement, or maybe one needs some "normality" conditions on $D$ to assure this "embedded" resolution of singularities.</p>
<p>Also, how does one obtain this embedded resolution of singularities? Can we write down a terminating process which ends with an embedded resolution of singularities?</p>
<p>I have a hard time "believing" the above statement, but I don't know why. If anybody can explain to me that this is not so surprising as a result I would be very thankful.</p>
| David E Speyer | 297 | <p>Here is a heuristic argument that there is nothing to explain:</p>
<p>The probability that <span class="math-container">$p$</span> divides the sum of the preceding primes is <span class="math-container">$1/p$</span>. So the expected number of primes less than <span class="math-container">$10^9$</span> with this property is <span class="math-container">$\sum_{p \leq 10^9} \frac{1}{p}$</span>. Using <a href="https://en.wikipedia.org/wiki/Mertens%27_theorems" rel="noreferrer">Mertens' second theorem</a>,
<span class="math-container">$$\sum_{p \leq 10^9} \frac{1}{p} \approx \log \log 10^9 + M \approx 3.3$$</span></p>
<p>Here <span class="math-container">$\log$</span> is natural log and <span class="math-container">$M \approx 0.26149$</span> is <a href="https://en.wikipedia.org/wiki/Meissel%E2%80%93Mertens_constant" rel="noreferrer">Mertens' constant</a>. </p>
<p>This is an example of the motto "<span class="math-container">$\log \log x$</span> goes to infinity but has never been observed to do so". It is quite common for people to look at primes <span class="math-container">$p$</span> which divide some quantity <span class="math-container">$a_p$</span> and conclude that they are surprisingly rare when, in fact, they are simply growing as <span class="math-container">$\log \log N$</span> for the reason above.</p>
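<p>The same heuristic is easy to test at a smaller cutoff, say <span class="math-container">$N=10^6$</span>, with a quick illustrative sieve (the value of Mertens' constant below is truncated):</p>

```python
import math

# Compare the sum of 1/p over primes p <= N with log log N + M (Mertens).
N = 10 ** 6
is_prime = bytearray([1]) * (N + 1)
is_prime[0] = is_prime[1] = 0
for p in range(2, int(N ** 0.5) + 1):
    if is_prime[p]:
        is_prime[p * p :: p] = bytes(len(range(p * p, N + 1, p)))
total = sum(1.0 / p for p in range(2, N + 1) if is_prime[p])
M = 0.26149721  # Meissel–Mertens constant, truncated
print(total, math.log(math.log(N)) + M)  # agree to about three decimals
```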
|
120,525 | <p>I'd like to check with my colleagues whether I have correctly understood "embedded resolution of singularities". </p>
<p>Let $X$ be a nonsingular projective variety over $\mathbf C$ and let $D$ be a "nice" divisor on $X$, say $D$ has strictly normal crossings. (Maybe we could just take $D$ to be a closed subscheme?)</p>
<p>Then, has the following statement been proven? And what is a "good" reference?</p>
<p>There exists a projective birational surjective morphism $\psi:Y\to X$ with $Y$ a nonsingular projective variety over $\mathbf C$ and the inverse image of $D$ in $Y$ a nonsingular projective variety over $\mathbf C$ of codimension one in $Y$?</p>
<p>I'm worried about whether I have correctly understood this statement, or maybe one needs some "normality" conditions on $D$ to assure this "embedded" resolution of singularities.</p>
<p>Also, how does one obtain this embedded resolution of singularities? Can we write down a terminating process which ends with an embedded resolution of singularities?</p>
<p>I have a hard time "believing" the above statement, but I don't know why. If anybody can explain to me that this is not so surprising as a result I would be very thankful.</p>
| Javier | 31,020 | <p>Question Q1 seems to me an extremely hard problem, but I also believe that the answer is affirmative, because of the heuristic argument given by David Speyer.</p>
<p>I studied with Florian Luca [1] a related problem that could help to answer question Q2:</p>
<p>Let $A$ be the set of integers $n$ such that the sum of the first $n$ primes is divisible by $n$ (instead of $p_n$). In other words, $A$ is the set of $n$ such that the arithmetic mean of the first $n$ primes is an integer:</p>
<p>$A=\{ n: \frac{p_1+\cdots +p_n}n \in \mathbf Z \}=\{ 1, 23, 53, 853, 11869, 117267, 339615, 3600489,..\} $</p>
<p>These numbers are not so rare because in this case the heuristic gives $ A(x)\asymp \sum_{n\le x}\frac 1n\sim \log x$.</p>
<p>We could not prove that $A$ has infinitely many elements but we proved that they are rare:</p>
<p>$$A(x)\ll x e^{-C(\log x)^{3/5}(\log \log x)^{-1/5}}.$$
Later Matomaki [2] proved the stronger estimate,
$A(x)\ll x^{\frac{19}{24}+\epsilon}$.</p>
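<p>The listed elements of $A$ are easy to reproduce with a short illustrative script (the prime bound 200000 is an arbitrary choice, just large enough to reach the fifth term):</p>

```python
# Find all n up to pi(200000) with n | (p_1 + ... + p_n).
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytes(len(range(p * p, n + 1, p)))
    return [q for q in range(2, n + 1) if sieve[q]]

found, s = [], 0
for n, p in enumerate(primes_up_to(200_000), start=1):
    s += p
    if s % n == 0:
        found.append(n)
print(found)  # [1, 23, 53, 853, 11869]
```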
<p>[1] Cilleruelo, Javier; Luca, Florian, On the sum of the first n primes. Q. J. Math. 59 (2008), no. 4. <a href="http://www.uam.es/personal_pdi/ciencias/cillerue/articulos.html">http://www.uam.es/personal_pdi/ciencias/cillerue/articulos.html</a></p>
<p>[2] Matomäki, Kaisa, A note on the sum of the first n primes. Q. J. Math. 61 (2010), no. 1, </p>
|
1,068,609 | <p>My prof has taught us that we can express the proposition $⟦$there are exactly two entities characterized by $P$$⟧$ thus:</p>
<p><img src="https://i.stack.imgur.com/aIJbL.jpg" alt="enter image description here"></p>
<p>That proposition looks verbose, despite the fact that it references just two entities. It seems impractical to use such a formula to express a proposition that references a great number of entities (e.g. 482). Is there a more concise way of conveying the number of entities that a proposition references?</p>
| Spenser | 39,285 | <p>Let $G(s)$ be an anti-derivative of
$$g(s)=\frac{\sqrt{1+s^2}}{s}.$$
By the Fundamental Theorem of Calculus,
$$F(t)=\int_1^{t^2}g(s)ds=G(t^2)-G(1)$$
so
$$F'(t)=2tg(t^2)=\frac{2\sqrt{1+t^4}}{t}.$$</p>
|
1,068,609 | <p>My prof has taught us that we can express the proposition $⟦$there are exactly two entities characterized by $P$$⟧$ thus:</p>
<p><img src="https://i.stack.imgur.com/aIJbL.jpg" alt="enter image description here"></p>
<p>That proposition looks verbose, despite the fact that it references just two entities. It seems impractical to use such a formula to express a proposition that references a great number of entities (e.g. 482). Is there a more concise way of conveying the number of entities that a proposition references?</p>
| Cookie | 111,793 | <p>$$F'(t)=\frac d{dt} \left(\int_1^{t^2} \frac{\sqrt{1+s^2}}{s} ds \right)=\frac{\sqrt{1+(t^2)^2}}{t^2} \cdot \frac d{dt}(t^2)=\frac{2\sqrt{1+t^4}}{t}$$</p>
|
3,285,036 | <p>Obviously this cannot happen in a right triangle, but otherwise, since $\sin(0^\circ)$, $\sin(180^\circ)$, and $\sin(360^\circ)$ all equal $0$, I guess there is no way to find out what the original angle was?</p>
| guitarphish | 687,096 | <p>This is the inverse problem. The inverse sine function <span class="math-container">$\arcsin(x)$</span> is not one-to-one <a href="https://en.wikipedia.org/wiki/Injective_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Injective_function</a>, as you pointed out. In general, this type of situation can make it hard to solve many problems in both math and physics. Usually only approximate methods can be used, and in this case, because there is no extra information (e.g., that <span class="math-container">$x$</span> was an angle between 90 and 200 degrees), there is no way to tell what value of <span class="math-container">$x$</span> generated the zero.</p>
|
25,284 | <p>I recently worked my way through Walter Warwick Sawyer's book, <em>Mathematician's Delight</em>, which has opened my eyes to Maths. I used to fear maths, feeling I was incapable. Sawyer (among other authors) has a gift for teaching the subject. I now feel much more confident tackling Maths problems, I have a better intuitive understanding of Maths and a renewed interest in it.</p>
<p>There's a nice summary of Sawyer's life and work here: <a href="https://plus.maths.org/content/os/latestnews/may-aug08/sawyer/index" rel="nofollow noreferrer">https://plus.maths.org/content/os/latestnews/may-aug08/sawyer/index</a></p>
<p>Have you had a similar experience after encountering Sawyer's work?</p>
<p>Stephen</p>
| Sue VanHattum | 60 | <p>I already loved math when I encountered his books. But yes, I was also inspired. <em><strong>Mathematician's Delight</strong></em> might be the one I put dozens of page markers in, so I could find all the great ideas again. He helped me think about how I might want to teach differently, especially in beginning algebra. I have quite a few of his books. The ones on higher math (like abstract algebra) are fun for me to work through.</p>
<p><a href="https://www.wwsawyer.org/work-by-sawyer.html" rel="noreferrer">This site</a> has lists of both his books and articles he wrote.</p>
<p>Thanks for posting this at the beginning of my summer break. I might try to find a few more of his books, and will definitely pull one out again to work on.</p>
|
25,284 | <p>I recently worked my way through Walter Warwick Sawyer's book, <em>Mathematician's Delight</em>, which has opened my eyes to Maths. I used to fear maths, feeling I was incapable. Sawyer (among other authors) has a gift for teaching the subject. I now feel much more confident tackling Maths problems, I have a better intuitive understanding of Maths and a renewed interest in it.</p>
<p>There's a nice summary of Sawyer's life and work here: <a href="https://plus.maths.org/content/os/latestnews/may-aug08/sawyer/index" rel="nofollow noreferrer">https://plus.maths.org/content/os/latestnews/may-aug08/sawyer/index</a></p>
<p>Have you had a similar experience after encountering Sawyer's work?</p>
<p>Stephen</p>
| JRN | 77 | <p>No, I was never inspired by him because I had never heard of him before you mentioned it.</p>
<hr />
<p><sup>Note: My answer is for the original version of the question. Since then, the question has been edited so my answer is no longer appropriate.</sup></p>
|
11,973 | <p>I have a list of strings called <code>mylist</code>:</p>
<pre><code>mylist = {"[a]", "a", "a", "[b]", "b", "b", "[ c ]", "c", "c"};
</code></pre>
<p>I would like to split <code>mylist</code> by "section headers." Strings that begin with the character <code>[</code> are section headers in my application. Thus, I would like to split <code>mylist</code> in such a way as to obtain this output:</p>
<pre><code>{{"[a]", "a", "a"}, {"[b]", "b", "b"}, {"[ c ]", "c", "c"}}
</code></pre>
<p>(The <code>a</code>s, <code>b</code>s, and <code>c</code>s represent <em>any</em> characters; the string inside the section header does <em>not</em> necessarily match the strings that follow in that section. Also, the number of strings in each section can vary.)</p>
<p>I have tried:</p>
<pre><code>SplitBy[mylist, StringMatchQ[#, "[" ~~ ___] &]
</code></pre>
<p>But this is not correct; I obtain:</p>
<pre><code>{{"[a]"}, {"a", "a"}, {"[b]"}, {"b", "b"}, {"[ c ]"}, {"c", "c"}}
</code></pre>
<p>Likewise, using <code>Split</code> (since it applies the test function only to adjacent elements) does not work. The command:</p>
<pre><code>Split[mylist, StringMatchQ[#, "[" ~~ ___] &]
</code></pre>
<p>yields:</p>
<pre><code>{{"[a]", "a"}, {"a"}, {"[b]", "b"}, {"b"}, {"[ c ]", "c"}, {"c"}}
</code></pre>
<p>Do you have any advice? Thanks.</p>
| J. M.'s persistent exhaustion | 50 | <p>Here's one method, using a slightly modified example:</p>
<pre><code>mylist = {"[a]", "a", "[b]", "b", "b", "b", "[ c ]", "c", "c"};
pos = Append[Flatten[Position[mylist,
s_String /; StringMatchQ[s, "[" ~~ ___]]], Length[mylist] + 1]
{1, 3, 7, 10}
Take[mylist, {#1, #2 - 1}] & @@@ Partition[pos, 2, 1]
{{"[a]", "a"}, {"[b]", "b", "b", "b"}, {"[ c ]", "c", "c"}}
</code></pre>
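<p>For readers working outside Mathematica, the same idea (find the header positions, then slice between consecutive ones) can be sketched in Python; this is an illustrative translation, not part of the original answer:</p>

```python
# Split a flat list into sections, where a section starts at any
# string beginning with "[" (mirrors the Position/Partition/Take idea).
mylist = ["[a]", "a", "[b]", "b", "b", "b", "[ c ]", "c", "c"]

# header indices, plus a sentinel index one past the end of the list
pos = [i for i, s in enumerate(mylist) if s.startswith("[")] + [len(mylist)]

# slice between consecutive header positions
sections = [mylist[start:end] for start, end in zip(pos, pos[1:])]
print(sections)
# [['[a]', 'a'], ['[b]', 'b', 'b', 'b'], ['[ c ]', 'c', 'c']]
```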
|
11,973 | <p>I have a list of strings called <code>mylist</code>:</p>
<pre><code>mylist = {"[a]", "a", "a", "[b]", "b", "b", "[ c ]", "c", "c"};
</code></pre>
<p>I would like to split <code>mylist</code> by "section headers." Strings that begin with the character <code>[</code> are section headers in my application. Thus, I would like to split <code>mylist</code> in such a way as to obtain this output:</p>
<pre><code>{{"[a]", "a", "a"}, {"[b]", "b", "b"}, {"[ c ]", "c", "c"}}
</code></pre>
<p>(The <code>a</code>s, <code>b</code>s, and <code>c</code>s represent <em>any</em> characters; the string inside the section header does <em>not</em> necessarily match the strings that follow in that section. Also, the number of strings in each section can vary.)</p>
<p>I have tried:</p>
<pre><code>SplitBy[mylist, StringMatchQ[#, "[" ~~ ___] &]
</code></pre>
<p>But this is not correct; I obtain:</p>
<pre><code>{{"[a]"}, {"a", "a"}, {"[b]"}, {"b", "b"}, {"[ c ]"}, {"c", "c"}}
</code></pre>
<p>Likewise, using <code>Split</code> (since it applies the test function only to adjacent elements) does not work. The command:</p>
<pre><code>Split[mylist, StringMatchQ[#, "[" ~~ ___] &]
</code></pre>
<p>yields:</p>
<pre><code>{{"[a]", "a"}, {"a"}, {"[b]", "b"}, {"b"}, {"[ c ]", "c"}, {"c"}}
</code></pre>
<p>Do you have any advice? Thanks.</p>
| Leonid Shifrin | 81 | <p>At the risk of being annoying, I will pitch the linked lists again. Here is the code using linked lists:</p>
<pre><code>ClearAll[split];
split[{}] = {};
split[l_List] :=
Reap[split[{}, Fold[{#2, #1} &, {}, Reverse@l]]][[2, 1]];
split[accum_, {h_, tail : {_?sectionQ, _} | {}}] :=
split[Sow[Flatten[{accum, h}]]; {}, tail];
split[accum_, {h_, tail_}] := split[{accum, h}, tail];
</code></pre>
<p>The function <code>sectionQ</code> has been stolen from the answer of @rm-rf. The usage is</p>
<pre><code>split[mylist]
(* {{[a],a,a},{[b],b,b},{[ c ],c,c}} *)
</code></pre>
<p>The advantages I see in using linked lists is that they allow one to produce solutions which are</p>
<ul>
<li>Easily generalizable to more complex problems</li>
<li>Straightforward to implement</li>
<li>Easy to argue about (in terms of algorithmic complexity etc)</li>
</ul>
<p>They may not be the fastest though, so may not always be suitable for performance-critical applications.</p>
|
11,973 | <p>I have a list of strings called <code>mylist</code>:</p>
<pre><code>mylist = {"[a]", "a", "a", "[b]", "b", "b", "[ c ]", "c", "c"};
</code></pre>
<p>I would like to split <code>mylist</code> by "section headers." Strings that begin with the character <code>[</code> are section headers in my application. Thus, I would like to split <code>mylist</code> in such a way as to obtain this output:</p>
<pre><code>{{"[a]", "a", "a"}, {"[b]", "b", "b"}, {"[ c ]", "c", "c"}}
</code></pre>
<p>(The <code>a</code>s, <code>b</code>s, and <code>c</code>s represent <em>any</em> characters; the string inside the section header does <em>not</em> necessarily match the strings that follow in that section. Also, the number of strings in each section can vary.)</p>
<p>I have tried:</p>
<pre><code>SplitBy[mylist, StringMatchQ[#, "[" ~~ ___] &]
</code></pre>
<p>But this is not correct; I obtain:</p>
<pre><code>{{"[a]"}, {"a", "a"}, {"[b]"}, {"b", "b"}, {"[ c ]"}, {"c", "c"}}
</code></pre>
<p>Likewise, using <code>Split</code> (since it applies the test function only to adjacent elements) does not work. The command:</p>
<pre><code>Split[mylist, StringMatchQ[#, "[" ~~ ___] &]
</code></pre>
<p>yields:</p>
<pre><code>{{"[a]", "a"}, {"a"}, {"[b]", "b"}, {"b"}, {"[ c ]", "c"}, {"c"}}
</code></pre>
<p>Do you have any advice? Thanks.</p>
| Murta | 2,266 | <p>Here's my suggestion:</p>
<pre><code>mylist = {"[a]", "a", "a", "[b]", "b", "b", "[ c ]", "c", "c"};
Split[mylist, ! StringMatchQ[#2, "[*"] &]
</code></pre>
<p>and we get:</p>
<pre><code>{{"[a]", "a", "a"}, {"[b]", "b", "b"}, {"[ c ]", "c", "c"}}
</code></pre>
|
316,601 | <p>Can anyone tell me what I am doing wrong? I need to prove for $k\ge2$
$$(5-\frac5k )(1+\frac{1}{(k+1)^2}) \le 5 - \frac{5}{k+1}$$$$(5-\frac5k )(1+\frac{1}{(k+1)^2})= 5(1-\frac1k)(1+\frac1{(k+1)^2})$$
$$=5(1+\frac1{(k+1)^2}-\frac1k-\frac1{k(k+1)^2})$$
$$= 5(1-\frac{k^2+k+2}{k(k+1)^2})$$
$$=5(1-\frac{k(k+1)}{k(k+1)^2}+\frac2{k(k+1)^2})$$
$$=5(1-\frac{1}{k+1}+\frac2{k(k+1)^2})$$
$$= 5 - \frac5{k+1}+\frac{10}{k(k+1)^2}\le5-\frac5{k+1}$$
which doesn't look true.</p>
| Chrisuu | 63,178 | <p>Your process is correct until the 4th step. There is a sign error in the 5th step. Gigili's answer provides the correct solution, but if you are still stumped about where the minus sign is coming from, here is a more detailed look at all the manipulations and properties involved to get from the 4th step in your process to the correct 5th step (which is the 4th step in Gigili's answer).</p>
<p><span class="math-container">$$\eqalignno{
&\phantom{=}\thinspace 5\left(1 -{k^2 + k + 2\over k{(k + 1)}^2} \right)&(1)\cr
&=5\left(1 + \left(-{k^2 + k + 2\over k{(k + 1)}^2}\right) \right)&(2)\cr
&=5\left(1 + \left(-1\left({k^2 + k + 2\over k{(k + 1)}^2}\right)\right) \right)&(3)\cr
&=5\left(1 + \left(-1\left({k(k + 1) + 2\over k{(k + 1)}^2}\right)\right) \right)&(4)\cr
&=5\left(1 + \left(-1\left({k(k + 1)\over k{(k + 1)}^2} + {2\over k{(k + 1)}^2}\right)\right) \right)&(5)\cr
&=5\left(1 + \left(\biggl(-1\biggr)\left({k(k + 1)\over k{(k + 1)}^2}\right) + \biggl(-1\biggr)\left( {2\over k{(k + 1)}^2}\right)\right) \right)&(6)\cr
&=5\left(1 + \left(\left(-{k(k + 1)\over k{(k + 1)}^2}\right) + \left(- {2\over k{(k + 1)}^2}\right)\right)\right)&(7)\cr
&=5\left(1 + \left(-{k(k + 1)\over k{(k + 1)}^2}\right) + \left(- {2\over k{(k + 1)}^2}\right)\right)&(8)\cr
&=5\left(1 - {k(k + 1)\over k{(k + 1)}^2} - {2\over k{(k + 1)}^2} \right)&(9)\cr
}$$</span></p>
<p>(1) Given (4th step in the incorrect solution / 3rd step in Gigili's answer).<br />
(2) Definition of subtraction in terms of addition.<br />
(3) Multiplication property of negative one.<br />
(4) Associative property of addition; factoring.<br />
(5) Addition of fractions with like denominators.<br />
(6) Distributive property.<br />
(7) Multiplication property of negative one.<br />
(8) Associative property of addition.<br />
(9) Definition of subtraction in terms of addition (similar to 5th step in the incorrect solution, but with correct sign / 4th step in Gigili's answer).</p>
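<p>As a quick sanity check (not part of the proof), the inequality can also be verified numerically for several values of $k$:</p>

```python
# Spot-check (5 - 5/k)(1 + 1/(k+1)^2) <= 5 - 5/(k+1) for small k >= 2.
for k in range(2, 8):
    lhs = (5 - 5 / k) * (1 + 1 / (k + 1) ** 2)
    rhs = 5 - 5 / (k + 1)
    print(k, lhs <= rhs)   # True for each k
```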
|
2,755,213 | <p><strong>Question.</strong> Find, with proof, the possible values of a rational number $q$ for which $q+\sqrt{2}$ is a reduced quadratic irrational.</p>
<p>So, by definition, a <em>quadratic irrational</em> is one of the form $u+v\sqrt{d}$ where $u,v\in\Bbb Q$, $v\neq 0$, and $d$ is square-free. It is said to be <em>reduced</em> if it exceeds $1$ and its conjugate lies in the interval $(-1, 0)$.</p>
<p>For the question at hand; this corresponds to having $-1<q-\sqrt{2}<0$ and $q+\sqrt{2}>1$.</p>
<p>But I'm not quite sure how to proceed from here?</p>
| Kirk Fox | 551,926 | <p>Since we have $-1 < q-\sqrt{2} < 0$ and $q+\sqrt{2} > 1$, we can simply move the square roots to get the inequalities
$$-1 + \sqrt{2} < q < \sqrt{2} \text{ and } q > 1 - \sqrt{2}$$
Because $-1 + \sqrt{2} > 1 - \sqrt{2}$, our first inequality is all that is necessary. This means we can define a set $S$ of all rational numbers satisfying the inequality as
$$S = \left\{q \in \mathbb{Q} \mid q \in (-1+\sqrt{2}, \sqrt{2})\right\}$$
This is the set of all rational numbers between $-1+\sqrt{2}$ and $\sqrt{2}$.</p>
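<p>As an illustrative check (with a value chosen arbitrarily), $q=1$ lies in this interval, and $1+\sqrt{2}$ is indeed reduced:</p>

```python
import math

# Check that q = 1 satisfies both defining conditions of a reduced
# quadratic irrational q + sqrt(2):
q = 1
assert math.sqrt(2) - 1 < q < math.sqrt(2)   # q is in the interval S
assert q + math.sqrt(2) > 1                  # q + sqrt(2) exceeds 1
assert -1 < q - math.sqrt(2) < 0             # conjugate lies in (-1, 0)
print("q =", q, "works")
```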
|
2,162,452 | <p>Question: Find the slope of the tangent line to the graph of $r = e^\theta - 4$ at $\theta = \frac{\pi}{4}$.</p>
<p>$$x = r\cos \theta = (e^\theta - 4)\cos\theta$$</p>
<p>$$y = r\sin \theta = (e^\theta - 4)\sin\theta$$</p>
<p>$$\frac{dx}{d\theta} = -e^\theta\sin\theta + e^\theta\cos\theta + 4\sin\theta$$
$$\frac{dy}{d\theta} = e^\theta\cos\theta + e^\theta\sin\theta - 4\cos\theta$$</p>
<p>$$\frac{dy}{dx} = \frac{e^\theta(\cos\theta + \sin\theta) - 4\cos\theta}{e^\theta(\cos\theta - \sin\theta) + 4\sin\theta}$$</p>
<p>$$\frac{dy}{dx} = \frac{\sqrt{2}(e^{\frac{\pi}{4}}-2)}{2\sqrt{2}} = \frac{1}{2}e^{\frac{\pi}{4}} - 1$$</p>
<p>When I plugged this problem into Wolfram Alpha (<a href="http://www.wolframalpha.com/input/?i=slope+of+the+tangent+line+to+r+%3D+e%5E(theta)-4+at+theta%3D(pi%2F4)" rel="nofollow noreferrer">http://www.wolframalpha.com/input/?i=slope+of+the+tangent+line+to+r+%3D+e%5E(theta)-4+at+theta%3D(pi%2F4)</a>), it said that the answer was just $e^{\frac{\pi}{4}}$, so I'm confused where I went wrong in my steps. I tried looking over the arithmetic a couple of times but couldn't find an incorrect step. </p>
<p>Any pointers or help would be appreciated - thank you very much!</p>
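<p>(A quick numerical cross-check, added for illustration: a finite-difference slope at $\theta = \pi/4$ agrees with the hand computation $\frac{1}{2}e^{\pi/4} - 1 \approx 0.0966$ above.)</p>

```python
import math

# Finite-difference check of dy/dx at theta = pi/4 for r = e^theta - 4.
def xy(t):
    r = math.exp(t) - 4
    return r * math.cos(t), r * math.sin(t)

t, h = math.pi / 4, 1e-6
(x1, y1), (x2, y2) = xy(t - h), xy(t + h)
slope = (y2 - y1) / (x2 - x1)

closed_form = 0.5 * math.exp(math.pi / 4) - 1
print(round(slope, 4), round(closed_form, 4))   # both about 0.0966
```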
| Dr. Sonnhard Graubner | 175,066 | <p>Solving the quadratic equation, we get
$$x_1=m+\sqrt{-2m-3}$$
$$x_2=m-\sqrt{-2m-3}$$ then we get
$$x_1^2+x_2^2=2m^2-4m-6$$
Can you proceed?
I have found that $$\min(2m^2-4m-6)$$ under the condition $$-2m-3\geq 0$$ is equal to $$\frac{9}{2},$$ attained at $$m_{\min}=-\frac{3}{2}.$$</p>
|
2,162,452 | <p>Question: Find the slope of the tangent line to the graph of $r = e^\theta - 4$ at $\theta = \frac{\pi}{4}$.</p>
<p>$$x = r\cos \theta = (e^\theta - 4)\cos\theta$$</p>
<p>$$y = r\sin \theta = (e^\theta - 4)\sin\theta$$</p>
<p>$$\frac{dx}{d\theta} = -e^\theta\sin\theta + e^\theta\cos\theta + 4\sin\theta$$
$$\frac{dy}{d\theta} = e^\theta\cos\theta + e^\theta\sin\theta - 4\cos\theta$$</p>
<p>$$\frac{dy}{dx} = \frac{e^\theta(\cos\theta + \sin\theta) - 4\cos\theta}{e^\theta(\cos\theta - \sin\theta) + 4\sin\theta}$$</p>
<p>$$\frac{dy}{dx} = \frac{\sqrt{2}(e^{\frac{\pi}{4}}-2)}{2\sqrt{2}} = \frac{1}{2}e^{\frac{\pi}{4}} - 1$$</p>
<p>When I plugged this problem into Wolfram Alpha (<a href="http://www.wolframalpha.com/input/?i=slope+of+the+tangent+line+to+r+%3D+e%5E(theta)-4+at+theta%3D(pi%2F4)" rel="nofollow noreferrer">http://www.wolframalpha.com/input/?i=slope+of+the+tangent+line+to+r+%3D+e%5E(theta)-4+at+theta%3D(pi%2F4)</a>), it said that the answer was just $e^{\frac{\pi}{4}}$, so I'm confused where I went wrong in my steps. I tried looking over the arithmetic a couple of times but couldn't find an incorrect step. </p>
<p>Any pointers or help would be appreciated - thank you very much!</p>
| Bernard | 202,857 | <p>You don't need to use the quadratic formula; $r^2+s^2$ is a symmetric polynomial in $r$ and $s$, hence it can be expressed as a function of the <em>elementary symmetric functions</em>:
$$S=r+s=2m,\quad P=rs=m^2+2m+3.$$
Indeed $\;r^2+s^2=S^2-2P=2m^2-4m-6$. This is a quadratic polynomial in $m$, and its minimum is attained at $\;m=-\dfrac{-4}{2\cdot 2}=1$. However, we must take into account that the given equation has roots if and only if its reduced discriminant
$$\Delta'=m^2-(m^2+2m+3)=-2m-3$$
is non-negative, i.e. if $m\in\bigl(-\infty,-3/2\bigr]$. On this interval, $2m^2-4m-6$ is decreasing, so the minimum, <em>given that real roots exist</em>, is
$$2m^2-4m-6\Big\rvert_{m=-3/2}=\frac92.$$</p>
|
2,601,851 | <p>I have a binary variable $y_{t}$ that is equal to $1$ iff the job is scheduled at slot $t$. I need to write constraints that guarantee that if the job is scheduled somewhere, then it must be scheduled for a period of $A$ consecutive slots. I tried to write it this way:</p>
<p>$\sum_{t'=t}^{t+A-1}y_{t'}\geqslant A y_t$ for all $t$.</p>
<p>Here I can see that if $y_t=1$ then I must have $y_{t+1}=\ldots=y_{t+A-1}=1$. The problem with this is that I will have $y_{t'}=1$ for all $t'\geqslant t$ because of the recurrence relation I have in my constraints.</p>
<p>After some effort, I found this way to do it: introduce binary variable $z_t$and add the constraints</p>
<p>$\sum_{t}z_t\leqslant 1$ and $z_t\sum_{t'=t}^{t+A-1}y_{t'}\geqslant A y_t z_t$ for all $t$.</p>
<p>But, if I am correct, this is a non-linear constraint.</p>
| Mark | 147,256 | <p>Recall that $\epsilon$ is an arbitrary positive number. In other words, we know that $\inf\{U(f, P')\} - \sup\{L(f, P')\}$ is smaller than every positive number. Also, $\sup\{L(f, P')\} \leq \inf\{U(f, P')\}$. The only way we can satisfy both these properties is if $\inf\{U(f, P')\} - \sup\{L(f, P')\} = 0$.</p>
|
2,601,851 | <p>I have a binary variable $y_{t}$ that is equal to $1$ iff the job is scheduled at slot $t$. I need to write constraints that guarantee that if the job is scheduled somewhere, then it must be scheduled for a period of $A$ consecutive slots. I tried to write it this way:</p>
<p>$\sum_{t'=t}^{t+A-1}y_{t'}\geqslant A y_t$ for all $t$.</p>
<p>Here I can see that if $y_t=1$ then I must have $y_{t+1}=\ldots=y_{t+A-1}=1$. The problem with this is that I will have $y_{t'}=1$ for all $t'\geqslant t$ because of the recurrence relation I have in my constraints.</p>
<p>After some effort, I found this way to do it: introduce binary variable $z_t$and add the constraints</p>
<p>$\sum_{t}z_t\leqslant 1$and $z_t\sum_{t'=t}^{t+A-1}y_{t'}\geqslant A y_t z_t$ for all $t$.</p>
<p>But, if I am correct, this is a non-linear constraint.</p>
| Brian Borchers | 6,310 | <p>What's critical here is the quantifier "for every $\epsilon > 0$." </p>
<p>Spivak has shown that the difference is less than $\epsilon$ for every $\epsilon>0$. That means (for example) that the difference is less than 0.1, and 0.01, and 1.0e-300, and 1.0e-3000, or any other tiny quantity you want to pick. If the difference were some nonzero number, then you could always pick an $\epsilon$ smaller than that number and derive a contradiction.</p>
|
3,511,660 | <p>Can you help me please? I could not figure this out.</p>
<p>Given: </p>
<p><span class="math-container">$f:\mathbb{R}\to\mathbb{R}$</span>, <span class="math-container">$f'(0)$</span> exists, <span class="math-container">$f(x)\neq0$</span> and for all <span class="math-container">$a, b\in\mathbb{R}$</span>, <span class="math-container">$f(a+b)=f(a)f(b)$</span></p>
<p>How to prove that <span class="math-container">$f(x)$</span> is differentiable in <span class="math-container">$\mathbb{R}$</span>? </p>
| azif00 | 680,927 | <p>First note that <span class="math-container">$f(0) = 1$</span>: taking <span class="math-container">$a=b=0$</span> gives <span class="math-container">$f(0)=f(0)^2$</span>, and since <span class="math-container">$f$</span> is never zero, <span class="math-container">$f(0)=1$</span>. Next, for any real number <span class="math-container">$x$</span>,
<span class="math-container">$$
\begin{align}
\frac{f(x+h) - f(x)}{h} &= \frac{f(x)f(h) - f(x)}{h} \\
&= f(x)\frac{f(h) - 1}{h} \\
&= f(x)\frac{f(h) - f(0)}{h}
\end{align}
$$</span>
and the latter goes to <span class="math-container">$f(x)f'(0)$</span> as <span class="math-container">$h\to 0$</span>. Hence <span class="math-container">$f$</span> is differentiable on <span class="math-container">$\mathbb{R}$</span>, with <span class="math-container">$f'(x)=f(x)f'(0)$</span>.</p>
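<p>(Illustration, not part of the proof: for a concrete function satisfying the hypothesis, such as <span class="math-container">$f(x)=2^x$</span>, a finite-difference check confirms <span class="math-container">$f'(x)=f(x)f'(0)$</span>.)</p>

```python
# Numeric illustration with f(x) = 2**x, which satisfies f(a+b) = f(a)*f(b).
f = lambda x: 2.0 ** x
h = 1e-6

fp0 = (f(h) - f(-h)) / (2 * h)          # numeric f'(0), close to ln 2
x = 1.7
fpx = (f(x + h) - f(x - h)) / (2 * h)   # numeric f'(x)

print(abs(fpx - f(x) * fp0) < 1e-6)     # True: f'(x) = f(x) * f'(0)
```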
|
1,970,458 | <p>Consider a stock that will pay out dividends over the next 3 years of $1.15, $1.80, and $2.35 respectively. The price of the stock will be $48.42 at time 3. The interest rate is 9%. What is the current price of the stock?</p>
| alexjo | 103,399 | <p>$D_1=1.15,\, D_2=1.8,\,D_3=2.35, \,P_3=48.42,\,r=9\%$.
$$
P_0=\frac{D_1+P_1}{1+r}=\frac{D_1}{1+r}+\frac{D_2+P_2}{(1+r)^2}=\frac{D_1}{1+r}+\frac{D_2}{(1+r)^2}+\frac{D_3+P_3}{(1+r)^3}
$$</p>
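<p>Evaluating numerically (a sketch, plugging the question's figures into the formula above):</p>

```python
# Discount each cash flow to time 0; dividend D_3 and the price P_3
# are both received at t = 3.
D = [1.15, 1.80, 2.35]   # dividends at t = 1, 2, 3
P3 = 48.42               # stock price at t = 3
r = 0.09                 # interest rate

P0 = sum(d / (1 + r) ** t for t, d in enumerate(D, start=1)) + P3 / (1 + r) ** 3
print(round(P0, 2))   # about 41.77
```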
|
2,991,366 | <blockquote>
<p>Consider a point <span class="math-container">$Q$</span> inside the <span class="math-container">$\triangle ABC$</span> triangle, and <span class="math-container">$M$</span>, <span class="math-container">$N$</span>, <span class="math-container">$P$</span> the intersections of <span class="math-container">$\overleftrightarrow{AQ}$</span>, <span class="math-container">$\overleftrightarrow{BQ}$</span>, <span class="math-container">$\overleftrightarrow{CQ}$</span> with respective sides <span class="math-container">$\overline{BC}$</span>, <span class="math-container">$\overline{CA}$</span>, <span class="math-container">$\overline{AB}$</span>.</p>
<p>For which triangles <span class="math-container">$\triangle ABC$</span>, and for which positions of <span class="math-container">$Q$</span> inside them, is <span class="math-container">$Q$</span> the intersection of the heights (i.e., the orthocenter) of <span class="math-container">$\triangle MNP$</span>?</p>
</blockquote>
<p>I have not been able to solve the problem using synthetic geometry elements. In order to solve the problem with methods of analytical geometry, we considered <span class="math-container">$A(0, a)$</span>, <span class="math-container">$B(b, 0)$</span>, <span class="math-container">$C(c, 0)$</span>, and <span class="math-container">$Q(m, n)$</span>, where <span class="math-container">$0<n<a,b<0<c, b<m<c$</span>. We calculated the coordinates of <span class="math-container">$M$</span>, <span class="math-container">$N$</span>, <span class="math-container">$P$</span> and then we set the conditions that the <span class="math-container">$\overleftrightarrow{AQ}$</span>, <span class="math-container">$\overleftrightarrow{BQ}$</span>, <span class="math-container">$\overleftrightarrow{CQ}$</span> are perpendicular to <span class="math-container">$\overline{NP}$</span>, <span class="math-container">$\overline{PM}$</span>, <span class="math-container">$\overline{MN}$</span>. But the calculations became complicated, and I quit.</p>
<p>Had I been able to continue, I would have found three conditions (equalities) that <span class="math-container">$m$</span> and <span class="math-container">$n$</span> would have to satisfy simultaneously. It follows that <span class="math-container">$\triangle ABC$</span> must be a particular triangle; and, within this triangle, <span class="math-container">$Q$</span> must have a particular position. One such case is the equilateral triangle with <span class="math-container">$Q$</span> at its center.</p>
<p>How can triangles be characterized with this property?</p>
| Phil H | 554,494 | <p>Any isosceles triangle can comply with your stated conditions. Consider the sketch below. The positions of <span class="math-container">$E$</span> and <span class="math-container">$F$</span> can be adjusted between <span class="math-container">$E_1$</span> and <span class="math-container">$E_2$</span> and <span class="math-container">$F_1$</span> and <span class="math-container">$F_2$</span> respectively until the lines <span class="math-container">$BE$</span> and <span class="math-container">$AF$</span> are perpendicular to <span class="math-container">$DF$</span> and <span class="math-container">$DE$</span>. </p>
<p><span class="math-container">$EF$</span> is parallel to <span class="math-container">$AB$</span> and symmetry plays a big part in this setup.</p>
<p><a href="https://i.stack.imgur.com/THaXq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/THaXq.jpg" alt="enter image description here"></a></p>
|
4,399,371 | <p>According to my textbook, the formula for the distance between 2 parallel lines has been given as below:</p>
<p><a href="https://i.stack.imgur.com/ZQtQk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZQtQk.png" alt="enter image description here" /></a></p>
<p>Where PT is a vector from the first line that makes a perpendicular on the second line, vector B is a vector to which both the lines are parallel to and vector (a2 - a1) is a vector that joins one arbitrary point on the second line, to yet another arbitrary point on the other</p>
<p>This is what I am confused by. The book, along with the numerous threads I've already scoured through, provides similar diagrams for the proof:</p>
<p><a href="https://i.stack.imgur.com/qoua3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qoua3.png" alt="enter image description here" /></a></p>
<p>From what I understand, the crossing of ST with B should yield us a vector pointing OUT of the plane to which the lines (and in conjunction, ST) belong.</p>
<p>How would that yield us TP/PT? TP/PT belongs to the same plane to which the lines and ST belong as well, so how'd crossing ST and B yield us PT?</p>
<p>I understand the end goal is to calculate the MAGNITUDE of the shortest vector joining both the lines, but I can't seem to understand how d is the magnitude of PT as opposed to being the magnitude of the vector jutting OUT of the plane</p>
| TomKern | 908,546 | <p>The length of the cross product of two vectors is <span class="math-container">$|a \times b| = |a| |b| \sin(\theta)$</span> where <span class="math-container">$\theta$</span> is the angle between them. So while <span class="math-container">$\vec{ST}\times \vec{B}$</span> does point out of the plane, the formula uses only its <em>magnitude</em>: dividing by <span class="math-container">$|\vec{B}|$</span> leaves <span class="math-container">$|\vec{ST}|\sin\theta$</span>, which is exactly the length of the perpendicular segment <span class="math-container">$PT$</span>.</p>
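<p>As a numeric illustration (vectors chosen arbitrarily for the example), the magnitude of the cross product divided by <span class="math-container">$|\vec{B}|$</span> does give the perpendicular distance between the lines:</p>

```python
import math

# d = |(a2 - a1) x b| / |b| for two parallel lines with direction b.
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def norm(u):
    return math.sqrt(sum(c * c for c in u))

b = (1, 2, 2)      # common direction of both lines
a1 = (0, 0, 0)     # a point on the first line
a2 = (0, 3, 0)     # a point on the second line

diff = tuple(p - q for p, q in zip(a2, a1))
d = norm(cross(diff, b)) / norm(b)
print(d)   # sqrt(5), about 2.236
```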
|
3,525,814 | <p>One reasonably well-known property of the Thue-Morse sequence is that it can be used to provide solutions to the <a href="https://en.wikipedia.org/wiki/Prouhet%E2%80%93Tarry%E2%80%93Escott_problem" rel="nofollow noreferrer">Prouhet–Tarry–Escott problem</a> - for example, splitting the first eight nonnegative integers into evil and odious numbers, we get</p>
<p><span class="math-container">$0^0+3^0+5^0+6^0=1^0+2^0+4^0+7^0$</span></p>
<p><span class="math-container">$0^1+3^1+5^1+6^1=1^1+2^1+4^1+7^1$</span></p>
<p><span class="math-container">$0^2+3^2+5^2+6^2=1^2+2^2+4^2+7^2$</span></p>
<p>If we increase <span class="math-container">$8$</span> in this example to <span class="math-container">$2^m$</span>, we can get two sets of numbers whose <span class="math-container">$0$</span>th, <span class="math-container">$1st$</span>, ... <span class="math-container">$m$</span>th powers are all equal to one another. </p>
<p>A natural generalization is to ask whether we can obtain this same kind of pattern with more than two sets of integers at once. More formally, we ask for which <span class="math-container">$k,m>0$</span> it is possible to have <span class="math-container">$k$</span> distinct multisets of nonnegative integers whose sum of <span class="math-container">$n$</span>th powers are equal for each <span class="math-container">$n=0,1,\ldots,m$</span>. As the Thue-Morse sequence shows, this can be done for all <span class="math-container">$m$</span> with <span class="math-container">$k=2$</span>. </p>
<p>With <span class="math-container">$m=1$</span>, it's trivial to see that all <span class="math-container">$k$</span> will work, but the case <span class="math-container">$(k,m)=(3,2)$</span> is already nonobvious; the smallest solution of (13, 11, 0), (15, 8, 1), and (16, 5, 3) takes some work to locate. </p>
<p>With <span class="math-container">$m=2$</span>, the maximal <span class="math-container">$k$</span>-value I have found is <span class="math-container">$16$</span>, with the 3-tuples (412, 389, 0), (430, 369, 2), (440, 357, 4), (444, 352, 5), (464, 325, 12), (474, 310, 17), (485, 292, 24), (497, 270, 34), (500, 264, 37), (510, 242, 49), (517, 224, 60), (522, 209, 70), (529, 182, 90), (530, 177, 94), (532, 165, 104), and (534, 145, 122), all of which sum to <span class="math-container">$801$</span> and whose squares sum to <span class="math-container">$321065$</span>.</p>
<p>I also have a solution to <span class="math-container">$k=3, m=3$</span>: (22, 21, 4, 3), (24, 18, 7, 1), (25, 15, 10, 0).</p>
<p>As the only barrier to locating these solutions has been computational power thus far, I expect that solutions exist for all <span class="math-container">$k, m$</span>, though this seems like a very difficult proposition to show; does anyone have pointers to existing work on this question? I was unable to locate any such problem in, e.g., <a href="https://en.wikipedia.org/wiki/Sums_of_powers" rel="nofollow noreferrer">Wikipedia's list of problems relating to sums of powers</a>, or listed generalizations of the Prouhet–Tarry–Escott problem.</p>
<p>I would also be curious to see larger <span class="math-container">$(k,m)$</span> tuples that solutions are located for, as I'm not sure my current algorithms are particularly efficient at finding solutions. </p>
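<p>(For concreteness, the small solutions quoted above can be checked mechanically; the following Python sketch is added for illustration.)</p>

```python
# Verify that each set in a family has the same 0th..mth power sums.
def power_sums(t, m):
    return [sum(x ** n for x in t) for n in range(m + 1)]

# (k, m) = (3, 2): three triples with equal 0th, 1st and 2nd power sums
triples = [(13, 11, 0), (15, 8, 1), (16, 5, 3)]
print([power_sums(t, 2) for t in triples])   # each is [3, 24, 290]

# (k, m) = (3, 3): three quadruples, equal up to 3rd powers
quads = [(22, 21, 4, 3), (24, 18, 7, 1), (25, 15, 10, 0)]
print([power_sums(t, 3) for t in quads])     # each is [4, 50, 950, 20000]
```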
| Chen Shuwen | 954,936 | <p>The question you ask is also called "Multigrade Chains".
You may find plenty of such solutions on my website below:
<a href="http://eslpower.org/chains.htm" rel="nofollow noreferrer">Multigrade Chains</a></p>
<p>Example:</p>
<p><span class="math-container">$ \\\ \ \ \ 0^k+ 567^k+644^k+1778^k+1855^k+2422^k \\ = 2^k+535^k+678^k+1744^k+1887^k+2420^k \\ = 7^k+490^k+728^k+1694^k+1932^k+2415^k \\ = 15^k+444^k+782^k+1640^k+1978^k+2407^k \\= 28^k+392^k+847^k+1575^k+2030^k+2394^k \\= 42^k+350^k+903^k+1519^k+2072^k+2380^k \\ = 62^k+303^k+970^k+1452^k+2119^k+2360^k \\= 70^k+287^k+994^k+1428^k+2135^k+2352^k \\= 95^k+244^k+1062^k+1360^k+2178^k+2327^k \\= 103^k+232^k+1082^k+1340^k+2190^k+2319^k \\= 119^k+210^k+1120^k+1302^k+2212^k+2303^k \\= 144^k+180^k+1175^k+1247^k+2242^k+2278^k \\\ \ \ \ \ (k=1,2,3,4,5) $</span></p>
<p>Recently, I have also found such solutions in trigonometry:</p>
<p><span class="math-container">$ \ \ \ \ \sin ^{2 k}\left(\frac{0 \pi }{11}\right)+\sin ^{2 k}\left(\frac{\pi }{11}\right)+\sin ^{2 k}\left(\frac{\pi }{11}\right)+\sin ^{2 k}\left(\frac{2 \pi }{11}\right)+\sin ^{2 k}\left(\frac{2 \pi }{11}\right)+\sin ^{2 k}\left(\frac{3 \pi }{11}\right)\\+\sin ^{2 k}\left(\frac{3 \pi }{11}\right)+\sin ^{2 k}\left(\frac{4 \pi }{11}\right)+\sin ^{2 k}\left(\frac{4 \pi }{11}\right)+\sin ^{2 k}\left(\frac{5 \pi }{11}\right)+\sin ^{2 k}\left(\frac{5 \pi }{11}\right) $</span></p>
<p><span class="math-container">$ =\sin ^{2 k}\left(\frac{\pi }{22}\right)+\sin ^{2 k}\left(\frac{\pi }{22}\right)+\sin ^{2 k}\left(\frac{3 \pi }{22}\right)+\sin ^{2 k}\left(\frac{3 \pi }{22}\right)+\sin ^{2 k}\left(\frac{5 \pi }{22}\right)+\sin ^{2 k}\left(\frac{5 \pi }{22}\right)\\+\sin ^{2 k}\left(\frac{7 \pi }{22}\right)+\sin ^{2 k}\left(\frac{7 \pi }{22}\right)+\sin ^{2 k}\left(\frac{9 \pi }{22}\right)+\sin ^{2 k}\left(\frac{9 \pi }{22}\right)+\sin ^{2 k}\left(\frac{11 \pi }{22}\right) $</span></p>
<p><span class="math-container">$
=\sin ^{2 k}\left(\frac{\pi }{33}\right)+\sin ^{2 k}\left(\frac{2 \pi }{33}\right)+\sin ^{2 k}\left(\frac{4 \pi }{33}\right)+\sin ^{2 k}\left(\frac{5 \pi }{33}\right)+\sin ^{2 k}\left(\frac{7 \pi }{33}\right)+\sin ^{2 k}\left(\frac{8 \pi }{33}\right)\\+\sin ^{2 k}\left(\frac{10 \pi }{33}\right)+\sin ^{2 k}\left(\frac{11 \pi }{33}\right)+\sin ^{2 k}\left(\frac{13 \pi }{33}\right)+\sin ^{2 k}\left(\frac{14 \pi }{33}\right)+\sin ^{2 k}\left(\frac{16 \pi }{33}\right) $</span></p>
<p><span class="math-container">$=\sin ^{2 k}\left(\frac{\pi }{44}\right)+\sin ^{2 k}\left(\frac{3 \pi }{44}\right)+\sin ^{2 k}\left(\frac{5 \pi }{44}\right)+\sin ^{2 k}\left(\frac{7 \pi }{44}\right)+\sin ^{2 k}\left(\frac{9 \pi }{44}\right)+\sin ^{2 k}\left(\frac{11 \pi }{44}\right)\\+\sin ^{2 k}\left(\frac{13 \pi }{44}\right)+\sin ^{2 k}\left(\frac{15 \pi }{44}\right)+\sin ^{2 k}\left(\frac{17 \pi }{44}\right)+\sin ^{2 k}\left(\frac{19 \pi }{44}\right)+\sin ^{2 k}\left(\frac{21 \pi }{44}\right)$</span></p>
<p><span class="math-container">$ =\sin ^{2 k}\left(\frac{\pi }{55}\right)+\sin ^{2 k}\left(\frac{4 \pi }{55}\right)+\sin ^{2 k}\left(\frac{6 \pi }{55}\right)+\sin ^{2 k}\left(\frac{9 \pi }{55}\right)+\sin ^{2 k}\left(\frac{11 \pi }{55}\right)+\sin ^{2 k}\left(\frac{14 \pi }{55}\right)\\+\sin ^{2 k}\left(\frac{16 \pi }{55}\right)+\sin ^{2 k}\left(\frac{19 \pi }{55}\right)+\sin ^{2 k}\left(\frac{21 \pi }{55}\right)+\sin ^{2 k}\left(\frac{24 \pi }{55}\right)+\sin ^{2 k}\left(\frac{26 \pi }{55}\right) $</span></p>
<p><span class="math-container">$ =\sin ^{2 k}\left(\frac{2 \pi }{55}\right)+\sin ^{2 k}\left(\frac{3 \pi }{55}\right)+\sin ^{2 k}\left(\frac{7 \pi }{55}\right)+\sin ^{2 k}\left(\frac{8 \pi }{55}\right)+\sin ^{2 k}\left(\frac{12 \pi }{55}\right)+\sin ^{2 k}\left(\frac{13 \pi }{55}\right)\\+\sin ^{2 k}\left(\frac{17 \pi }{55}\right)+\sin ^{2 k}\left(\frac{18 \pi }{55}\right)+\sin ^{2 k}\left(\frac{22 \pi }{55}\right)+\sin ^{2 k}\left(\frac{23 \pi }{55}\right)+\sin ^{2 k}\left(\frac{27 \pi }{55}\right) $</span>
<span class="math-container">$ \ \ \ \ \\ ......$</span>
<span class="math-container">$ \ \ \ \ \ \\ (k=1,2,3,4,5,6,7,8,9,10) $</span></p>
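<p>The trigonometric chain can likewise be checked numerically; here is a short script for the first three sides (the multisets of numerators are transcribed from the identity above):</p>

```python
import math

A = [0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5]        # numerators of angles j*pi/11
B = [1, 1, 3, 3, 5, 5, 7, 7, 9, 9, 11]       # numerators of angles j*pi/22
C = [1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16]   # numerators of angles j*pi/33

def S(nums, den, k):
    # power sum of sin^{2k} over the given multiset of angles
    return sum(math.sin(j * math.pi / den) ** (2 * k) for j in nums)

for k in range(1, 11):
    a, b, c = S(A, 11, k), S(B, 22, k), S(C, 33, k)
    assert abs(a - b) < 1e-9 and abs(a - c) < 1e-9
print(S(A, 11, 1))   # for k = 1 each side equals 11/2
```

<p>(For <span class="math-container">$k=1$</span> the common value <span class="math-container">$11/2$</span> also follows from the classical identity <span class="math-container">$\sum_{j=0}^{n-1}\sin^2(j\pi/n)=n/2$</span>.)</p>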
<p>For more results on this, please refer to my Notebooks:</p>
<p><a href="http://eslpower.org/Notebook.htm" rel="nofollow noreferrer">http://eslpower.org/Notebook.htm</a></p>
|
3,858,517 | <p>Is it possible to count exactly the number of binary strings of length <span class="math-container">$n$</span> that contain no two adjacent blocks of 1s of the same length? More precisely, if we represent the string as <span class="math-container">$0^{x_1}1^{y_1}0^{x_2}1^{y_2}\cdots 0^{x_{k-1}}1^{y_{k-1}}0^{x_k}$</span> where all <span class="math-container">$x_i,y_i \geq 1$</span> (except perhaps <span class="math-container">$x_1$</span> and <span class="math-container">$x_k$</span> which might be zero if the string starts or ends with a block of 1's), we should count a string as valid if <span class="math-container">$y_i\neq y_{i+1}$</span> for every <span class="math-container">$1\leq i \leq k-2$</span>.</p>
<p>Positive examples : 1101011 (block sizes are 2-1-2), 00011001011 (block sizes are 2-1-2), 1001100011101 (block sizes are 1-2-3-1)</p>
<p>Negative examples : 1100011 (block sizes are <strong>2-2</strong>), 0001010011 (block sizes are <strong>1-1</strong>-2), 1101011011 (block sizes are 2-1-<strong>2-2</strong>)</p>
<p>The sequence for the first <span class="math-container">$16$</span> integers <span class="math-container">$n$</span> is: 2, 4, 7, 13, 24, 45, 83, 154, 285, 528, 979, 1815, 3364, 6235, 11555, 21414. For <span class="math-container">$n=3$</span>, only the string 101 is invalid, whereas for <span class="math-container">$n=4$</span>, the invalid strings are 1010, 0101 and 1001.</p>
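<p>These values can be reproduced by brute-force enumeration; for example:</p>

```python
import re

def count_valid(n):
    """Count length-n binary strings in which no two adjacent maximal
    blocks of 1s have equal length (brute force over all 2^n strings)."""
    total = 0
    for v in range(2 ** n):
        runs = [len(r) for r in re.findall('1+', format(v, f'0{n}b'))]
        if all(runs[i] != runs[i + 1] for i in range(len(runs) - 1)):
            total += 1
    return total

print([count_valid(n) for n in range(1, 9)])
# [2, 4, 7, 13, 24, 45, 83, 154]
```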
| BillyJoe | 573,047 | <p>Here I am going to use generating functions like in <a href="https://math.stackexchange.com/a/1956058/573047">this answer to a related problem</a> to compute columns of @RobPratt table for <span class="math-container">$k \ge 3$</span>.</p>
<p>We can define:</p>
<p><span class="math-container">$$S_y(k,i) = \left\{\text{n. of solutions for } \sum_{j=1}^{k-1} y_j = i \text{ with } y_j \neq y_{j+1}\right\} \tag{1}\label{1}$$</span></p>
<p>and then decompose the problem as follows:</p>
<p><span class="math-container">$$\left\{\text{n. of solutions for } \sum_{j=1}^k x_j + \sum_{j=1}^{k-1} y_j = n-2k+3 \right\}=\\ \sum_{i=0}^{n-2k+3}\left\{\text{n. of solutions for } \sum_{j=1}^k x_j = n-2k+3-i \right\}S_y(k,i) =\\ \sum_{i=0}^{n-2k+3}{n-k+2-i \choose k-1}S_y(k,i) \tag{2}\label{2}$$</span></p>
<p>When <span class="math-container">$k=3$</span>, the problem of determining <span class="math-container">$S_y(k,i)=S_y(3,i)$</span> is all the same as in the above linked problem, only with <span class="math-container">$2$</span> variables instead of <span class="math-container">$4$</span>. Instead of repeating all calculations we can reuse the above answer, removing all terms with an exponent for <span class="math-container">$y$</span> greater than <span class="math-container">$2$</span>, to get the generating function:</p>
<p><span class="math-container">$$f(x)=\left[\frac{y^2}{2!}\right]\prod_{n\ge0}(1+yx^n) = \left[\frac{y^2}{2!}\right]\left( 1+\frac y{1-x}+ \frac12\frac{y^2}{(1-x)^2}\right)\left( 1-\frac12\,\frac{y^2}{1-x^2}\right)=\\ \frac{1}{(1-x)^2}-\frac{1}{1-x^2}=\sum_{n=0}^{\infty}\left\{\frac 12 \left[1+(-1)^{n+1}\right]+n\right\}x^n$$</span></p>
<p>where in the last step I have used <a href="https://www.wolframalpha.com/input/?i=series+expansion+of+1%2F%281-x%29%5E2-1%2F%281-x%5E2%29+at+x+%3D+0" rel="nofollow noreferrer">WolframAlpha</a> because I am lazy, and then:</p>
<p><span class="math-container">$$S_y(3,i) = [x^i]f(x) = \frac 12 \left[1+(-1)^{i+1}\right]+i \tag{3}\label{3}$$</span></p>
<p>OK, yes, using generating functions for <span class="math-container">$k = 3$</span> and <span class="math-container">$y_1+y_2=i$</span> is a little overkill, because the <span class="math-container">$\eqref{3}$</span> result is obvious: once we choose a value for <span class="math-container">$y_1$</span>, which can be done in <span class="math-container">$i+1$</span> ways, <span class="math-container">$y_2$</span> is determined; the first addend then discards the <span class="math-container">$y_1=y_2=i/2$</span> solution when <span class="math-container">$i$</span> is even.
Anyway, substituting into <span class="math-container">$\eqref{2}$</span> we obtain the formula for the third column of @RobPratt's table:</p>
<p><span class="math-container">$$\sum_{i=0}^{n-3}{n-1-i \choose 2}\left\{\frac 12 \left[1+(-1)^{i+1}\right]+i\right\}=\\ \frac 1{48} (2 n^4 - 8 n^3 + 4 n^2 + 8 n + 3 (-1)^n - 3)\tag{4}\label{4}$$</span></p>
<p>where again I have used <a href="https://www.wolframalpha.com/input/?i=sum+%28n-1-i%29%28n-2-i%29%2F2%28%281%2B%28-1%29%5E%28i%2B1%29%29%2F2%2Bi%29+for+i%3D0...n-3" rel="nofollow noreferrer">WolframAlpha</a> for the last step (verified against @RobPratt table <a href="https://www.wolframalpha.com/input/?i=1%2F48+%282+n%5E4+-+8+n%5E3+%2B+4+n%5E2+%2B+8+n+%2B+3+%28-1%29%5En+-+3%29+for+n%3D3...16" rel="nofollow noreferrer">here</a>).</p>
<p>Still thinking about how to extend this to <span class="math-container">$k \gt 3$</span>...</p>
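<p>As a sanity check, formula (4) can also be compared against brute-force enumeration of the <span class="math-container">$k=3$</span> case:</p>

```python
import re

def brute(n):
    # strings of length n with exactly two maximal blocks of 1s
    # whose lengths differ (the k = 3 case above)
    count = 0
    for v in range(2 ** n):
        runs = [len(r) for r in re.findall('1+', format(v, f'0{n}b'))]
        if len(runs) == 2 and runs[0] != runs[1]:
            count += 1
    return count

def closed_form(n):
    return (2*n**4 - 8*n**3 + 4*n**2 + 8*n + 3*(-1)**n - 3) // 48

assert closed_form(4) == 2            # the two strings 1011 and 1101
assert all(brute(n) == closed_form(n) for n in range(3, 12))
print("formula (4) agrees with brute force for n = 3..11")
```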
|
2,416,671 | <p>Here is the problem: Let $K$ be a compact subset of $ \mathbb{R}^{m} $ ($m>1$) with empty interior and such that $\mathbb{R}^{m}\setminus K $ has no bounded component. For $n=1,2,...$, we define
$$K_{n}=\lbrace x\in \mathbb{R}^{m}: \operatorname{distance}(x,K)=1/n\rbrace.$$ Prove that for all $x\in K$, there is a sequence $(y_{n})$ with $y_n \in K_{n}$, converging to $x$ as $n\rightarrow\infty$.</p>
<p>Here is what I have done:
Fix $x\in K$. If for all $r>0$ the ball of center $x$ and radius $r$, $B(x,r)$, meets some $K_{n}$, then we are done. If not, $B(x,r_{0})\cap K_{n}=\emptyset$ for every positive integer $n$ and some $r_{0}>0$. This implies $B(x,r_{0})$ is contained in the complement of $K_{n}$ for all $n$. Intuitively this is impossible, since the complements of the $K_{n}$'s overlap. How can I finish the argument?</p>
| H. H. Rugh | 355,946 | <p>We fix $x\in K$. The following is somewhat in the spirit of your attempt.</p>
<p>The function ${\rm dist} (z,K)$ is (1-Lipschitz) continuous in $z$.
It follows that also
$$M(r)= \sup \{ {\rm dist}(z,K): \|z-x\|\leq r \}, \; \; r\geq 0$$
is (1-Lipschitz) continuous in $r\geq 0$. </p>
<p>We have $M(0)=0$ (since $x\in K$) and $M(r)>0$ for every $r>0$ (since $K$ is compact and has empty interior). Since $K$ is bounded, $M(r)$ tends to $+\infty$ as $r\rightarrow \infty$. Clearly $M(r)$ is monotone increasing.</p>
<p>By the intermediate-value theorem, for every $n\geq 1$ there is $r_n$ (not necessarily unique) such that $M(r_n)=\frac1n$.
Since $\bar{B}(x,r_n)$ is compact the sup is attained (a maximum) so we conclude that there is $y_n$ such that
$$ d(y_n,x) \leq r_n \; , \; \; {\rm dist}(y_n,K)=\frac{1}{n}$$
i.e. $y_n\in K_n$. We claim that $r_n\rightarrow 0$ as $n$ goes to infinity:</p>
<p>For $\delta>0$ we have $M(\delta)>0$ so there is $N=N(\delta)\geq 1$ for which $M(\delta)>\frac{1}{N}$. But then by monotonicity of $M$ for every $n\geq N$ we must have $r_n<\delta$ which proves the claim.</p>
<p>(the condition on the complement not having bounded components is not necessary)</p>
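<p>A small numerical illustration of the construction (with a made-up compact set; this instance is mine, not part of the problem): take $K$ to be the segment $[0,1]\times\{0\}$ in $\mathbb{R}^2$ and $x=(0,0)$. Then $y_n=(0,-1/n)$ lies on $K_n$ and converges to $x$:</p>

```python
import math

# K = the segment [0,1] x {0} in R^2: compact, with empty interior.
def dist_to_K(p):
    px, py = p
    cx = min(max(px, 0.0), 1.0)      # closest point of the segment to p
    return math.hypot(px - cx, py)

# y_n = (0, -1/n) has distance exactly 1/n to K (so y_n is on K_n),
# and ||y_n - x|| = 1/n -> 0, as the theorem predicts.
for n in range(1, 100):
    yn = (0.0, -1.0 / n)
    assert abs(dist_to_K(yn) - 1.0 / n) < 1e-12
    assert abs(math.hypot(yn[0], yn[1]) - 1.0 / n) < 1e-12
```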
|
3,520,354 | <p>In the problem <span class="math-container">$\frac{8.01-7.50}{3.002}$</span></p>
<p>Why would the answer be <span class="math-container">$0.17$</span> and not <span class="math-container">$0.170$</span>? The smallest number of <em>sig figs</em> in the original expression is <span class="math-container">$3$</span>. The only thing I can come up with is the intermediate step: <span class="math-container">$8.01-7.50= 0.51$</span> exactly, which only has <span class="math-container">$2$</span> <em>significant figures</em>. Does the intermediate step really count in determining significant figures? Thank you. :)</p>
| Community | -1 | <p>Unless the 0 in <span class="math-container">$7.50$</span> is an exact figure, you can't deem it significant.</p>
<blockquote>
<p>The significant figures (also known as the significant digits and decimal places) of a number are digits that carry meaning contributing to its measurement resolution. This includes all digits except:[1]</p>
<ul>
<li>All leading zeros. For example, "013" has two significant figures: 1 and 3;</li>
<li>Trailing zeros when they are merely placeholders to indicate the scale of the number (exact rules are explained at identifying significant figures); and</li>
<li>Spurious digits introduced, for example, by calculations carried out to greater precision than that of the original data, or measurements reported to a greater precision than the equipment supports.</li>
</ul>
</blockquote>
<p>So unless the 0 is showing the accuracy of a scale or an exact measurement (like there being exactly 453592370 micrograms to a lb by definition), there could be rounding inaccuracies. For example, one nutrition label may quote 64 g of sugar as 21% of a 2000 Calorie diet, another 25 g of sugar as 8%; if those percentages were rounded from half percents, the first gives a range of <span class="math-container">$128\over 43$</span> to <span class="math-container">$128\over 41$</span> grams per percentage point, and the second a range of <span class="math-container">$50 \over 17$</span> to <span class="math-container">$10\over 3$</span> (that's all assuming the gram figures are completely accurate).</p>
<p>So arguably, you could have as few as two significant figures, or the 0 could be from rounding.</p>
<p>Source wikipedia on significant digits.</p>
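<p>To see concretely how the subtraction limits the precision in the original problem, treat each quoted value as an interval (the half-unit-in-the-last-digit uncertainty here is my illustrative assumption, not something given in the problem):</p>

```python
# Each measured value is taken as uncertain by half a unit in its last
# digit; propagate the interval through the subtraction and division.
lo = (8.005 - 7.505) / 3.0025   # smallest the quotient could be
hi = (8.015 - 7.495) / 3.0015   # largest the quotient could be
print(lo, hi)   # roughly 0.1665 to 0.1733: only "0.17" is justified
```

<p>Both endpoints round to 0.17, but they disagree already in the third decimal, so quoting 0.170 would overstate the precision.</p>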
|
226,097 | <p>I am having a problem with the following exercise. Can someone help me please.</p>
<p>Find all functions $f$ for which $f'(x)=f(x)+\int_{0}^1 f(t)dt$</p>
<p>Thank you in advance</p>
| Beni Bogosel | 7,327 | <p>Your differential equation is $f'=f+c$ where $c$ is a constant. This is a first order linear differential equation and the solution has a simple formula. Here you can deduce it:</p>
<p>$f'-f=c$ is equivalent to $(e^{-x}f(x))'=ce^{-x}$ and therefore
$$ f(x)=e^x(-ce^{-x}+d)=-c+de^x$$</p>
<p>Now you have to impose the condition $\int_0^1 f(x)dx=c$.</p>
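<p>Carrying out that last step: $\int_0^1(-c+de^x)\,dx = -c+d(e-1)=c$ forces $c=d(e-1)/2$, so the solutions form a one-parameter family. A quick numerical check (an illustrative sketch):</p>

```python
import math

# With f(x) = -c + d*e^x, the condition c = ∫_0^1 f(t) dt gives
# c = d*(e - 1)/2; any real d yields a solution.
d = 3.0
c = d * (math.e - 1) / 2

f  = lambda x: -c + d * math.exp(x)
fp = lambda x: d * math.exp(x)           # f'(x)

# verify the integral condition with a midpoint Riemann sum
N = 100_000
integral = sum(f((i + 0.5) / N) for i in range(N)) / N
assert abs(integral - c) < 1e-6

# and the original equation f'(x) = f(x) + ∫_0^1 f(t) dt at a sample point
assert abs(fp(0.7) - (f(0.7) + integral)) < 1e-6
```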
|
2,637,812 | <p>Here is a dice game question about probability.</p>
<p>Play a game with $2$ dice. What is the probability of getting a sum greater than $7$?</p>
<p>I know the probability for this one is easy: $\cfrac{1+2+3+4+5}{36}=\cfrac 5{12}$.</p>
<p>I don't know how to solve the follow-up question:</p>
<p>Play a game with $200$ dice. What is the probability of getting a sum greater than $700$?</p>
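<p>(For reference, both cases can be computed exactly by machine; the sketch below is my own illustration, not part of the exercise. Since the mean of 200 dice is 200 × 3.5 = 700 and the distribution of the sum is symmetric about it, the follow-up answer comes out just below 1/2.)</p>

```python
from fractions import Fraction
from itertools import product

# Two dice: exact check that P(sum > 7) = 5/12
two = [a + b for a, b in product(range(1, 7), repeat=2)]
assert Fraction(sum(s > 7 for s in two), 36) == Fraction(5, 12)

# 200 dice: exact distribution of the sum by repeated convolution
counts = [1]                          # counts[s] = number of ways to roll sum s
for _ in range(200):
    new = [0] * (len(counts) + 6)
    for s, c in enumerate(counts):
        for face in range(1, 7):
            new[s + face] += c
    counts = new

p = Fraction(sum(counts[701:]), 6 ** 200)
print(float(p))                       # just below 1/2
```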
| kiyomi | 527,262 | <p>$$\lim_\limits{x\to0}\frac{x^2-x^2+\frac{x^4}{2}+\mathcal{O}\left(x^5\right)}{x^4+\mathcal{O}\left(x^5\right)}=\lim_\limits{x\to0}\frac12\cdot\frac{x^4+\mathcal{O}\left(x^5\right)}{x^4+\mathcal{O}\left(x^5\right)}=\frac12.$$</p>
|
2,637,812 | <p>Here is a dice game question about probability.</p>
<p>Play a game with $2$ dice. What is the probability of getting a sum greater than $7$?</p>
<p>I know the probability for this one is easy: $\cfrac{1+2+3+4+5}{36}=\cfrac 5{12}$.</p>
<p>I don't know how to solve the follow-up question:</p>
<p>Play a game with $200$ dice. What is the probability of getting a sum greater than $700$?</p>
| user | 505,767 | <p>As an alternative</p>
<p>$$\frac{x^2-\log(1+x^2)}{x^2\sin^2x}=\frac{x^2-\log(1+x^2)}{x^4}\cdot\frac{x^2}{\sin^2x}\to \frac12$$</p>
<p>indeed</p>
<p>$\frac{x^2}{\sin^2x}\to 1$ by standard limit</p>
<p>and let $y=x^2\to 0$</p>
<p>$$\frac{x^2-\log(1+x^2)}{x^4}=\frac{y-\log(1+y)}{y^2}\stackrel{HR}\implies\frac{1-\frac1{1+y}}{2y}=\frac{1+y-1}{2y(1+y)}=\frac{1}{2(1+y)}\to\frac12$$</p>
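<p>A numerical spot-check of the limit (illustrative only; <code>log1p</code> avoids the cancellation error in $x^2-\log(1+x^2)$ for small $x$):</p>

```python
import math

def g(x):
    # the original quotient (x^2 - log(1 + x^2)) / (x^2 sin^2 x)
    return (x * x - math.log1p(x * x)) / (x * x * math.sin(x) ** 2)

# the limit as x -> 0 is 1/2; the deviation shrinks roughly like x^2/6
for x in [0.2, 0.1, 0.05]:
    assert abs(g(x) - 0.5) < 0.01
print(g(0.001))   # very close to 0.5
```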
|
396,085 | <p>The lengths of the three medians of a triangle are $9$, $12$ and $15$ cm. The area (in sq. cm) of the triangle is</p>
<p>a) $48$</p>
<p>b) $144$</p>
<p>c) $24$</p>
<p>d) $72$</p>
<p>I don't want the whole solution; just give me a hint how to solve it. Thanks.</p>
| Manuj Khullar | 100,314 | <p>There is a direct formula:</p>
<p>Let
$$s = (m_1+m_2+m_3)/2,$$</p>
<p>Then
$$\text{area} = \frac{4}{3}\sqrt{s(s-m_1)(s-m_2)(s-m_3)}.$$</p>
<p>This gives the answer to the above question as $72$.</p>
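<p>A quick numerical check of the formula, with a cross-check that recovers the side lengths from the medians via the standard relation $a=\frac{2}{3}\sqrt{2m_2^2+2m_3^2-m_1^2}$ (and cyclically) and applies Heron directly:</p>

```python
import math

def area_from_medians(m1, m2, m3):
    # Heron's formula applied to the medians, scaled by 4/3
    s = (m1 + m2 + m3) / 2
    return (4 / 3) * math.sqrt(s * (s - m1) * (s - m2) * (s - m3))

def area_from_sides(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

m = (9, 12, 15)
sides = [
    (2 / 3) * math.sqrt(2 * m[1] ** 2 + 2 * m[2] ** 2 - m[0] ** 2),
    (2 / 3) * math.sqrt(2 * m[0] ** 2 + 2 * m[2] ** 2 - m[1] ** 2),
    (2 / 3) * math.sqrt(2 * m[0] ** 2 + 2 * m[1] ** 2 - m[2] ** 2),
]
print(area_from_medians(*m))          # ≈ 72
assert abs(area_from_medians(*m) - 72) < 1e-6
assert abs(area_from_sides(*sides) - 72) < 1e-6
```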
|
2,012,020 | <p>Would it make sense to define Cauchy sequences in a non-metric space? For example, if $(X,T)$ is a topological space:
$$\forall U\in T \text{ with } 0\in U,\ \exists N\in \mathbb N:\ \forall n,m\ge N,\ x_n-x_m\in U.$$</p>
<p>And if yes, would it be interesting?</p>
| Lee Mosher | 26,501 | <p>As you wrote it, no, this does not make sense, because there is no subtraction operation $x_n - x_m$ in a metric space.</p>
<p>However, there is a theory of <a href="https://en.wikipedia.org/wiki/Uniform_space" rel="nofollow noreferrer">uniform spaces</a>, which is a special kind of topological space $X$ which need not be metrizable but which nonetheless has a built in theory of <a href="https://en.wikipedia.org/wiki/Uniform_space#Completeness" rel="nofollow noreferrer">Cauchy nets</a>. One has to use nets instead of sequences, because unlike metric spaces where sequences are sufficient to detect limit points of a subset, one must use nets to detect limit points in general nonmetrizable spaces. </p>
<p>The rough idea of a uniform space is that in place of the collection of inequalities "$d(x,y) < r$" for $r > 0$, one introduces collection of symmetric, reflexive relations $U \subset X \times X$ called <em>entourages</em>. In place of the condition that $d(x,y)=0 \implies x=y$ one has the condition that the intersection of all entourages is the diagonal subspace $\Delta = \{(x,x) \in X \times X \bigm| x \in X\}$. More axioms are needed in order to say how to express open sets of the topology in terms of the collection of entourages, to replace how open sets of a metric topology are expressed in terms of open balls. For example, some axiom is needed which will replace the triangle inequality and still allow you to construct a basis for a topology, as the triangle inequality lets you do in metric spaces.</p>
<p>See the great old book "General Topology" by Kelley for a full treatment of uniform spaces, or see the wikipedia page for a bare bones outline.</p>
|
2,012,020 | <p>Would it make sense to define Cauchy sequences in a non-metric space? For example, if $(X,T)$ is a topological space:
$$\forall U\in T \text{ with } 0\in U,\ \exists N\in \mathbb N:\ \forall n,m\ge N,\ x_n-x_m\in U.$$</p>
<p>And if yes, would it be interesting?</p>
| Bargabbiati | 352,078 | <p>If $(x_\alpha)$ is a net from a directed set $A$ into $X$, and if $Y$ is a subset of $X$, then we say that $(x_\alpha)$ is eventually in $Y$ (or residually in $Y$) if there exists an $\alpha$ in $A$ so that for every $\beta$ in $A$ with $\beta \geq \alpha$, the point $x_\beta$ lies in $Y$.</p>
<p>If $(x_\alpha)$ is a net in the topological space $X$, and $x$ is an element of $X$, we say that the net converges towards $x$ or has limit $x$ if and only if for every neighbourhood $U$ of $x$, $(x_\alpha)$ is eventually in $U$. So you can define a notion of convergence also in a non-metric space. We can't extend this notion to something similar to a Cauchy sequence, because it doesn't make sense to write something like $x-y$ if we don't have a metric notion.</p>
397,040 | <p>What is the domain for $$\dfrac{1}{x}\leq\dfrac{1}{2}$$</p>
<p>according to the rules of taking the reciprocals, $A\leq B \Leftrightarrow \dfrac{1}{A}\geq \dfrac{1}{B}$, then the domain should be simply $$x\geq2$$</p>
<p>however, negative numbers (for instance those less than $-2$) also satisfy the original inequality. What am I missing in my understanding?</p>
| egreg | 62,967 | <p>Inequalities with fractions require some care. If you write yours in the form
$$
\frac{1}{x}-\frac{1}{2}\le0
$$
you see that it's equivalent to
$$
\frac{2-x}{2x}\le0
$$
or to
$$
\frac{x(2-x)}{2x^2}\le0
$$
Since $2x^2>0$, we can remove the denominator (but keeping the condition that $x\ne0$). So we have the standard
$$
x(2-x)\le0
$$
or
$$
x(x-2)\ge0
$$
whose solution set is $(-\infty,0]\cup[2,\infty)$. Remembering we can't accept $0$ as solution, we end up with
$$
x<0\text{ or }x\ge2
$$</p>
<p>Of course such a long discussion is too much for this simple case, but it should show how things can go wrong with "hasty simplifications". Here, one can simply observe that when $x<0$ the inequality is clearly satisfied and only for $x>0$ one has to do something more: but for $x>0$ one can indeed reverse the fractions, so this becomes $x\ge2$.</p>
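<p>A numerical spot-check of the claimed solution set (illustrative only):</p>

```python
# The inequality 1/x <= 1/2 should hold exactly for x < 0 or x >= 2.
def holds(x):
    return x != 0 and 1 / x <= 1 / 2

for x in [-5.0, -0.1, 0.5, 1.0, 1.9, 2.0, 7.0]:
    assert holds(x) == (x < 0 or x >= 2), x
```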
<hr>
<p>This is analogous to irrational inequalities such as $\sqrt{1-x}\ge x-2$. All numbers satisfying
$$
\begin{cases}
1-x\ge0 & \text{(existence of the square root)}\\
x-2<0 & \text{(the RHS is negative)}
\end{cases}
$$
<em>are</em> solutions of the inequality. When $x-2\ge0$, instead, you can square both sides and proceed as usual:
$$
\begin{cases}
1-x\ge(x-2)^2\\
x-2\ge0
\end{cases}
$$</p>
<hr>
<p>Don't be hasty and inequalities will bite you no more.</p>
|
1,097,658 | <p>I read in some notes: a semi-function is a relation (not a function) of the form $y^2=f(x)$.</p>
<p>It seems that we can get more than one value for $f(x)$ for a single value of $x$.</p>
<p>Could anyone please help me understand this notion?</p>
<p>The link of the note is <a href="http://www.milefoot.com/about/presentations/polypretzels.pdf" rel="nofollow">here</a>.</p>
| KittyL | 206,286 | <p>The notes say $f(x)$ is a function, so we can get exactly one value for $f(x)$ with a single value of $x$. However, we can get two values of $y$ given a single value of $x$, since it could be plus or minus.</p>
|
1,143,200 | <p>The jump diffusion model is defined as
$$dS_t = \mu S_t dt + \sigma S_t dW_t + S_t d \left(\sum^{N_t}_{i=1}(V_i - 1)\right)\;\;\;\;\;\;\;(1)$$
, where $\{V_i\}$ is a sequence of iid non-negative random variables, independent of $W_t$. In Merton's jump diffusion model,
$\log(V) \sim N(\mu_J, \sigma^2_J)$ and $N_t$ is a Poisson process with rate $\lambda$.</p>
<p>I was asked to apply Itô's lemma to $d \log S_t$ to obtain the following:</p>
<p>$$S_t = S_0 \exp \left( \left(\mu - \frac{\sigma^2}{2} \right)t + \sigma W_t\right) \prod^{N_t}_{j=1}V_j \;\;\;\;\;\;\;(2)$$</p>
<p>I literally do not know how to solve this problem because of the term $d \left(\sum^{N_t}_{i=1}(V_i - 1)\right)$. This is how far I got:</p>
<p>$$d\log S_t = \frac{1}{S_t} \left( \mu S_t dt + \sigma S_t dW_t + S_t d \left(\sum^{N_t}_{i=1}(V_i - 1)\right)\right) - \frac{1}{2S_t^2} d[S,S]_t$$</p>
<p>What exactly is $d[S,S]_t$ ? I know that
$$d[S,S]_t = \sigma^2 S^2_t dt + ...$$
But what is that "..." ? </p>
| user48672 | 138,298 | <p>In your case the quadratic variation can be obtained by formally squaring the SDE:
$$
d[S,S]_t = dS_t \cdot dS_t =\left( \mu S_t dt + \sigma S_t dW_t + S_t d \left(\sum^{N_t}_{i=1}(V_i - 1)\right) \right)^2 .
$$
Using the facts that
$(\mu S_t)^2 \, dt^2 = 0$
and
$\mu S_t \, dt \cdot \sigma S_t \, dW_t = 0$,
we get</p>
<p>$$
d[S,S]_t = \sigma^2 S_t^2 dt + 2 \left[ d\left(\sum^{N_t}_{i=1}(V_i - 1)\right), \mu S_t dt \right] +
2 \left[ d\left(\sum^{N_t}_{i=1}(V_i - 1)\right), dW_t \right] + \left[ d\left(\sum^{N_t}_{i=1}(V_i - 1)\right), d\left(\sum^{N_t}_{i=1}(V_i - 1)\right) \right]_t .
$$</p>
<p>Since the $V_i$ are i.i.d., $[dV_i, dt ]=0$, $[dV_i, dW_t ]=0$, and $[dV_i, dV_j ]=0$ for $i \neq j$.<br>
For the last term we also have $[dV_i , dV_i ] = dV_i$.
Then finally
<p>$$
d\log S_t = \frac{1}{S_t} \left( \mu S_t dt + \sigma S_t dW_t + S_t d \left(\sum^{N_t}_{i=1}(V_i - 1)\right)\right) - \frac{1}{2S_t^2} \sigma^2 S^2_t dt
$$</p>
<p>$$
d\log S_t = \left( \mu - \frac{1}{2} \sigma^2 \right) dt + \sigma \, dW_t + d\left(\sum^{N_t}_{i=1}(V_i - 1)\right) .
$$</p>
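<p>A deterministic sanity check of the closed form $(2)$ in the special case $\sigma = 0$ (no Brownian term), where $S_t = S_0\, e^{\mu t} \prod_j V_j$; the jump times and sizes below are invented purely for illustration:</p>

```python
import math

# Euler steps for the drift plus multiplicative jumps S -> V*S,
# compared against the sigma = 0 closed form S_T = S_0 * e^(mu*T) * prod(V_j).
mu = 0.05
jumps = {0.3: 1.2, 0.7: 0.8}          # (made-up) jump time -> jump size V
S0, T, n = 1.0, 1.0, 10_000
dt = T / n

S = S0
for i in range(n):
    S += mu * S * dt                  # drift step
    for tj, V in jumps.items():
        if i == int(round(tj / dt)):  # at a jump time, S jumps to V*S
            S *= V

closed_form = S0 * math.exp(mu * T) * math.prod(jumps.values())
assert abs(S - closed_form) / closed_form < 1e-3
```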
|